diff --git a/.github/workflows/auto-freeze-version.yml b/.github/workflows/auto-freeze-version.yml index 499503be..0f23e116 100644 --- a/.github/workflows/auto-freeze-version.yml +++ b/.github/workflows/auto-freeze-version.yml @@ -50,12 +50,12 @@ jobs: - name: Install versioning dependencies run: | - cd scripts/versioning + cd scripts npm install - name: Setup Google Sheets API credentials run: | - echo '${{ secrets.GOOGLE_SERVICE_ACCOUNT_KEY }}' > scripts/versioning/service-account-key.json + echo '${{ secrets.GOOGLE_SERVICE_ACCOUNT_KEY }}' > scripts/service-account-key.json echo "✓ Google Sheets API credentials configured" - name: Run version freeze process @@ -68,7 +68,7 @@ jobs: export BATCH_MODE=true # Run the automated version freeze - NEW_VERSION="$NEW_VERSION" node scripts/versioning/version-manager.js + NEW_VERSION="$NEW_VERSION" node scripts/version-manager.js - name: Commit and create PR run: | @@ -128,4 +128,4 @@ jobs: - name: Cleanup credentials if: always() run: | - rm -f scripts/versioning/service-account-key.json \ No newline at end of file + rm -f scripts/service-account-key.json \ No newline at end of file diff --git a/.github/workflows/sync-evm-changelog.yml b/.github/workflows/sync-evm-changelog.yml index a7b2d2b3..e58f157c 100644 --- a/.github/workflows/sync-evm-changelog.yml +++ b/.github/workflows/sync-evm-changelog.yml @@ -50,7 +50,7 @@ jobs: - name: Parse and convert changelog id: convert - run: node scripts/versioning/release-notes.js ${{ steps.fetch-changelog.outputs.release_tag }} + run: node scripts/release-notes.js ${{ steps.fetch-changelog.outputs.release_tag }} - name: Update changelog file run: | diff --git a/.gitignore b/.gitignore index d97c981c..f555acb5 100644 --- a/.gitignore +++ b/.gitignore @@ -67,9 +67,9 @@ build/ .yarn/ # Keep most scripts ignored, but include versioning tools scripts/* -!scripts/versioning/ -!scripts/versioning/** -tests/* +!scripts/ +!scripts/** +scripts/test/* .ingress/ CLAUDE.md @@ -79,7 +79,7 @@ 
CLAUDE.md # Google Sheets API credentials **/service-account-key.json -scripts/versioning/node_modules/ +scripts/node_modules/ # Projects directory .projects/ diff --git a/README.md b/README.md index f1976d63..29399a7b 100644 --- a/README.md +++ b/README.md @@ -9,7 +9,7 @@ Documentation for all parts of the Cosmos Stack. - `docs//next/` - Active development documentation - `docs//` - Versioned documentation, by major release. -- `scripts/versioning/` - Versioning automation - see [README](scripts/versioning/README.md). +- `scripts/` - Versioning automation - see [README](/scripts). - `snippets/` - Custom components - Due to platform limitations, components cannot be versioned. However, it is possible to feed specific / versioned data to a component through a prop in the import (see `docs/evm/v0.4.x/documentation/evm-compatibility/eip-reference.mdx` for a working example). ## Contributing diff --git a/adrs_data.json b/adrs_data.json new file mode 100644 index 00000000..e1f0a8b6 --- /dev/null +++ b/adrs_data.json @@ -0,0 +1,374 @@ +[ + { + "number": 2, + "filename": "adr-002-docs-structure.md", + "title": "ADR 002: SDK Documentation Structure", + "content": "# ADR 002: SDK Documentation Structure\n\n## Context\n\nThere is a need for a scalable structure of the Cosmos SDK documentation. 
Current documentation includes a lot of material not related to the Cosmos SDK, is difficult to maintain and hard to follow as a user.\n\nIdeally, we would have:\n\n* All docs related to dev frameworks or tools live in their respective github repos (sdk repo would contain sdk docs, hub repo would contain hub docs, lotion repo would contain lotion docs, etc.)\n* All other docs (faqs, whitepaper, high-level material about Cosmos) would live on the website.\n\n## Decision\n\nRe-structure the `/docs` folder of the Cosmos SDK github repo as follows:\n\n```text\ndocs/\n├── README\n├── intro/\n├── concepts/\n│ ├── baseapp\n│ ├── types\n│ ├── store\n│ ├── server\n│ ├── modules/\n│ │ ├── keeper\n│ │ ├── handler\n│ │ ├── cli\n│ ├── gas\n│ └── commands\n├── clients/\n│ ├── lite/\n│ ├── service-providers\n├── modules/\n├── spec/\n├── translations/\n└── architecture/\n```\n\nThe files in each sub-folder do not matter and will likely change. What matters is the sectioning:\n\n* `README`: Landing page of the docs.\n* `intro`: Introductory material. The goal is to have a short explainer of the Cosmos SDK and then channel people to the resource they need. The [Cosmos SDK tutorial](https://github.com/cosmos/sdk-application-tutorial/) will be highlighted, as well as the `godocs`.\n* `concepts`: Contains high-level explanations of the abstractions of the Cosmos SDK. It does not contain specific code implementation and does not need to be updated often. **It is not an API specification of the interfaces**. 
API spec is the `godoc`.\n* `clients`: Contains specs and info about the various Cosmos SDK clients.\n* `spec`: Contains specs of modules, and others.\n* `modules`: Contains links to `godocs` and the spec of the modules.\n* `architecture`: Contains architecture-related docs like the present one.\n* `translations`: Contains different translations of the documentation.\n\nWebsite docs sidebar will only include the following sections:\n\n* `README`\n* `intro`\n* `concepts`\n* `clients`\n\n`architecture` need not be displayed on the website.\n\n## Status\n\nAccepted\n\n## Consequences\n\n### Positive\n\n* Much clearer organisation of the Cosmos SDK docs.\n* The `/docs` folder now only contains Cosmos SDK and gaia related material. Later, it will only contain Cosmos SDK related material.\n* Developers only have to update `/docs` folder when they open a PR (and not `/examples` for example).\n* Easier for developers to find what they need to update in the docs thanks to reworked architecture.\n* Cleaner vuepress build for website docs.\n* Will help build an executable doc (cf https://github.com/cosmos/cosmos-sdk/issues/2611)\n\n### Neutral\n\n* We need to move a bunch of deprecated stuff to `/_attic` folder.\n* We need to integrate content in `docs/sdk/docs/core` in `concepts`.\n* We need to move all the content that currently lives in `docs` and does not fit in new structure (like `lotion`, intro material, whitepaper) to the website repository.\n* Update `DOCS_README.md`\n\n## References\n\n* https://github.com/cosmos/cosmos-sdk/issues/1460\n* https://github.com/cosmos/cosmos-sdk/pull/2695\n* https://github.com/cosmos/cosmos-sdk/issues/2611" + }, + { + "number": 3, + "filename": "adr-003-dynamic-capability-store.md", + "title": "ADR 3: Dynamic Capability Store", + "content": "# ADR 3: Dynamic Capability Store\n\n## Changelog\n\n* 12 December 2019: Initial version\n* 02 April 2020: Memory Store Revisions\n\n## Context\n\nFull implementation of the [IBC 
specification](https://github.com/cosmos/ibc) requires the ability to create and authenticate object-capability keys at runtime (i.e., during transaction execution),\nas described in [ICS 5](https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#technical-specification). In the IBC specification, capability keys are created for each newly initialised\nport & channel, and are used to authenticate future usage of the port or channel. Since channels and potentially ports can be initialised during transaction execution, the state machine must be able to create\nobject-capability keys at this time.\n\nAt present, the Cosmos SDK does not have the ability to do this. Object-capability keys are currently pointers (memory addresses) of `StoreKey` structs created at application initialisation in `app.go` ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L132))\nand passed to Keepers as fixed arguments ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L160)). 
Keepers cannot create or store capability keys during transaction execution — although they could call `NewKVStoreKey` and take the memory address\nof the returned struct, storing this in the Merklised store would result in a consensus fault, since the memory address will be different on each machine (this is intentional — were this not the case, the keys would be predictable and couldn't serve as object capabilities).\n\nKeepers need a way to keep a private map of store keys which can be altered during transaction execution, along with a suitable mechanism for regenerating the unique memory addresses (capability keys) in this map whenever the application is started or restarted, as well as a mechanism to revert capability creation on tx failure.\nThis ADR proposes such an interface & mechanism.\n\n## Decision\n\nThe Cosmos SDK will include a new `CapabilityKeeper` abstraction, which is responsible for provisioning,\ntracking, and authenticating capabilities at runtime. During application initialisation in `app.go`,\nthe `CapabilityKeeper` will be hooked up to modules through unique function references\n(by calling `ScopeToModule`, defined below) so that it can identify the calling module when later\ninvoked.\n\nWhen the initial state is loaded from disk, the `CapabilityKeeper`'s `Initialise` function will create\nnew capability keys for all previously allocated capability identifiers (allocated during execution of\npast transactions and assigned to particular modules), and keep them in a memory-only store while the\nchain is running.\n\nThe `CapabilityKeeper` will include a persistent `KVStore`, a `MemoryStore`, and an in-memory map.\nThe persistent `KVStore` tracks which capability is owned by which modules.\nThe `MemoryStore` stores a forward mapping that maps from (module name, capability) tuples to capability names and\na reverse mapping that maps from (module name, capability name) tuples to the capability index.\nSince we cannot marshal the capability into a `KVStore` and 
unmarshal without changing the memory location of the capability,\nthe reverse mapping in the KVStore will simply map to an index. This index can then be used as a key in the ephemeral\ngo-map to retrieve the capability at the original memory location.\n\nThe `CapabilityKeeper` will define the following types & functions:\n\nThe `Capability` is similar to `StoreKey`, but has a globally unique `Index()` instead of\na name. A `String()` method is provided for debugging.\n\nA `Capability` is simply a struct, the address of which is taken for the actual capability.\n\n```go\ntype Capability struct {\n index uint64\n}\n```\n\nA `CapabilityKeeper` contains a persistent store key, memory store key, and mapping of allocated module names.\n\n```go\ntype CapabilityKeeper struct {\n persistentKey StoreKey\n memKey StoreKey\n capMap map[uint64]*Capability\n moduleNames map[string]interface{}\n sealed bool\n}\n```\n\nThe `CapabilityKeeper` provides the ability to create *scoped* sub-keepers which are tied to a\nparticular module name. 
These `ScopedCapabilityKeeper`s must be created at application initialisation\nand passed to modules, which can then use them to claim capabilities they receive and retrieve\ncapabilities which they own by name, in addition to creating new capabilities & authenticating capabilities\npassed by other modules.\n\n```go\ntype ScopedCapabilityKeeper struct {\n persistentKey StoreKey\n memKey StoreKey\n capMap map[uint64]*Capability\n moduleName string\n}\n```\n\n`ScopeToModule` is used to create a scoped sub-keeper with a particular name, which must be unique.\nIt MUST be called before `InitialiseAndSeal`.\n\n```go\nfunc (ck CapabilityKeeper) ScopeToModule(moduleName string) ScopedCapabilityKeeper {\n\tif ck.sealed {\n\t\tpanic(\"cannot scope to module via a sealed capability keeper\")\n\t}\n\n\tif _, ok := ck.moduleNames[moduleName]; ok {\n\t\tpanic(fmt.Sprintf(\"cannot create multiple scoped keepers for the same module name: %s\", moduleName))\n\t}\n\n\tck.moduleNames[moduleName] = struct{}{}\n\n\treturn ScopedCapabilityKeeper{\n\t\tpersistentKey: ck.persistentKey,\n\t\tmemKey: ck.memKey,\n\t\tcapMap: ck.capMap,\n\t\tmoduleName: moduleName,\n\t}\n}\n```\n\n`InitialiseAndSeal` MUST be called exactly once, after loading the initial state and creating all\nnecessary `ScopedCapabilityKeeper`s, in order to populate the memory store with newly-created\ncapability keys in accordance with the keys previously claimed by particular modules and prevent the\ncreation of any new `ScopedCapabilityKeeper`s.\n\n```go\nfunc (ck CapabilityKeeper) InitialiseAndSeal(ctx Context) {\n if ck.sealed {\n panic(\"capability keeper is sealed\")\n }\n\n persistentStore := ctx.KVStore(ck.persistentKey)\n memStore := ctx.KVStore(ck.memKey)\n \n // initialise memory store for all names in persistent store\n for index, value := range persistentStore.Iter() {\n capability := &Capability{index: index}\n\n for moduleAndCapability := range value {\n moduleName, capabilityName := moduleAndCapability.Split(\"/\")\n 
memStore.Set(moduleName + \"/fwd/\" + capability, capabilityName)\n memStore.Set(moduleName + \"/rev/\" + capabilityName, index)\n\n ck.capMap[index] = capability\n }\n }\n\n ck.sealed = true\n}\n```\n\n`NewCapability` can be called by any module to create a new unique, unforgeable object-capability\nreference. The newly created capability is automatically persisted; the calling module need not\ncall `ClaimCapability`.\n\n```go\nfunc (sck ScopedCapabilityKeeper) NewCapability(ctx Context, name string) (Capability, error) {\n // check name not taken in memory store\n if memStore.Get(sck.moduleName + \"/rev/\" + name) != nil {\n return nil, errors.New(\"name already taken\")\n }\n\n // fetch the current index\n index := persistentStore.Get(\"index\")\n \n // create a new capability\n capability := &Capability{index: index}\n \n // set persistent store\n persistentStore.Set(index, Set.singleton(sck.moduleName + \"/\" + name))\n \n // update the index\n index++\n persistentStore.Set(\"index\", index)\n \n // set forward mapping in memory store from capability to name\n memStore.Set(sck.moduleName + \"/fwd/\" + capability, name)\n \n // set reverse mapping in memory store from name to index\n memStore.Set(sck.moduleName + \"/rev/\" + name, index)\n\n // set the in-memory mapping from index to capability pointer\n capMap[index] = capability\n \n // return the newly created capability\n return capability, nil\n}\n```\n\n`AuthenticateCapability` can be called by any module to check that a capability\ndoes in fact correspond to a particular name (the name can be untrusted user input)\nwith which the calling module previously associated it.\n\n```go\nfunc (sck ScopedCapabilityKeeper) AuthenticateCapability(name string, capability Capability) bool {\n // return whether forward mapping in memory store matches name\n return memStore.Get(sck.moduleName + \"/fwd/\" + capability) == name\n}\n```\n\n`ClaimCapability` allows a module to claim a capability key which it has received from another 
module\nso that future `GetCapability` calls will succeed.\n\n`ClaimCapability` MUST be called if a module which receives a capability wishes to access it by name\nin the future. Capabilities are multi-owner, so if multiple modules have a single `Capability` reference,\nthey will all own it.\n\n```go\nfunc (sck ScopedCapabilityKeeper) ClaimCapability(ctx Context, capability Capability, name string) error {\n persistentStore := ctx.KVStore(sck.persistentKey)\n\n // set forward mapping in memory store from capability to name\n memStore.Set(sck.moduleName + \"/fwd/\" + capability, name)\n\n // set reverse mapping in memory store from name to index\n memStore.Set(sck.moduleName + \"/rev/\" + name, capability.Index())\n\n // update owner set in persistent store\n owners := persistentStore.Get(capability.Index())\n owners.add(sck.moduleName + \"/\" + name)\n persistentStore.Set(capability.Index(), owners)\n\n return nil\n}\n```\n\n`GetCapability` allows a module to fetch a capability which it has previously claimed by name.\nThe module is not allowed to retrieve capabilities which it does not own.\n\n```go\nfunc (sck ScopedCapabilityKeeper) GetCapability(ctx Context, name string) (Capability, error) {\n // fetch the index of capability using reverse mapping in memstore\n index := memStore.Get(sck.moduleName + \"/rev/\" + name)\n\n // fetch capability from go-map using index\n capability := capMap[index]\n\n // return the capability\n return capability, nil\n}\n```\n\n`ReleaseCapability` allows a module to release a capability which it had previously claimed. 
If no\nmore owners exist, the capability will be deleted globally.\n\n```go\nfunc (sck ScopedCapabilityKeeper) ReleaseCapability(ctx Context, capability Capability) error {\n persistentStore := ctx.KVStore(sck.persistentKey)\n\n name := memStore.Get(sck.moduleName + \"/fwd/\" + capability)\n if name == nil {\n return errors.New(\"capability not owned by module\")\n }\n\n // delete forward mapping in memory store\n memStore.Delete(sck.moduleName + \"/fwd/\" + capability)\n\n // delete reverse mapping in memory store\n memStore.Delete(sck.moduleName + \"/rev/\" + name)\n\n // update owner set in persistent store\n owners := persistentStore.Get(capability.Index())\n owners.remove(sck.moduleName + \"/\" + name)\n if owners.size() > 0 {\n // there are still other owners, keep the capability around\n persistentStore.Set(capability.Index(), owners)\n } else {\n // no more owners, delete the capability\n persistentStore.Delete(capability.Index())\n delete(capMap, capability.Index())\n }\n\n return nil\n}\n```\n\n### Usage patterns\n\n#### Initialisation\n\nAny modules which use dynamic capabilities must be provided a `ScopedCapabilityKeeper` in `app.go`:\n\n```go\nck := NewCapabilityKeeper(persistentKey, memoryKey)\nmod1Keeper := NewMod1Keeper(ck.ScopeToModule(\"mod1\"), ....)\nmod2Keeper := NewMod2Keeper(ck.ScopeToModule(\"mod2\"), ....)\n\n// other initialisation logic ...\n\n// load initial state...\n\nck.InitialiseAndSeal(initialContext)\n```\n\n#### Creating, passing, claiming and using capabilities\n\nConsider the case where `mod1` wants to create a capability, associate it with a resource (e.g. 
an IBC channel) by name, then pass it to `mod2` which will use it later:\n\nModule 1 would have the following code:\n\n```go\ncapability := scopedCapabilityKeeper.NewCapability(ctx, \"resourceABC\")\nmod2Keeper.SomeFunction(ctx, capability, args...)\n```\n\n`SomeFunction`, running in module 2, could then claim the capability:\n\n```go\nfunc (k Mod2Keeper) SomeFunction(ctx Context, capability Capability) {\n k.sck.ClaimCapability(ctx, capability, \"resourceABC\")\n // other logic...\n}\n```\n\nLater on, module 2 can retrieve that capability by name and pass it to module 1, which will authenticate it against the resource:\n\n```go\nfunc (k Mod2Keeper) SomeOtherFunction(ctx Context, name string) {\n capability := k.sck.GetCapability(ctx, name)\n mod1.UseResource(ctx, capability, \"resourceABC\")\n}\n```\n\nModule 1 will then check that this capability key is authenticated to use the resource before allowing module 2 to use it:\n\n```go\nfunc (k Mod1Keeper) UseResource(ctx Context, capability Capability, resource string) {\n if !k.sck.AuthenticateCapability(resource, capability) {\n return errors.New(\"unauthenticated\")\n }\n // do something with the resource\n}\n```\n\nIf module 2 passed the capability key to module 3, module 3 could then claim it and call module 1 just like module 2 did\n(in which case module 1, module 2, and module 3 would all be able to use this capability).\n\n## Status\n\nProposed.\n\n## Consequences\n\n### Positive\n\n* Dynamic capability support.\n* Allows CapabilityKeeper to return same capability pointer from go-map while reverting any writes to the persistent `KVStore` and in-memory `MemoryStore` on tx failure.\n\n### Negative\n\n* Requires an additional keeper.\n* Some overlap with existing `StoreKey` system (in the future they could be combined, since this is a superset functionality-wise).\n* Requires an extra level of indirection in the reverse mapping, since MemoryStore must map to index which must then be used as key in a go map to 
retrieve the actual capability\n\n### Neutral\n\n(none known)\n\n## References\n\n* [Original discussion](https://github.com/cosmos/cosmos-sdk/pull/5230#discussion_r343978513)" + }, + { + "number": 4, + "filename": "adr-004-split-denomination-keys.md", + "title": "ADR 004: Split Denomination Keys", + "content": "# ADR 004: Split Denomination Keys\n\n## Changelog\n\n* 2020-01-08: Initial version\n* 2020-01-09: Alterations to handle vesting accounts\n* 2020-01-14: Updates from review feedback\n* 2020-01-30: Updates from implementation\n\n### Glossary\n\n* denom / denomination key -- unique token identifier.\n\n## Context\n\nWith permissionless IBC, anyone will be able to send arbitrary denominations to any other account. Currently, all non-zero balances are stored along with the account in an `sdk.Coins` struct, which creates a potential denial-of-service concern, as too many denominations will become expensive to load & store each time the account is modified. See issues [5467](https://github.com/cosmos/cosmos-sdk/issues/5467) and [4982](https://github.com/cosmos/cosmos-sdk/issues/4982) for additional context.\n\nSimply rejecting incoming deposits after a denomination count limit doesn't work, since it opens up a griefing vector: someone could send a user lots of nonsensical coins over IBC, and then prevent the user from receiving real denominations (such as staking rewards).\n\n## Decision\n\nBalances shall be stored per-account & per-denomination under a denomination- and account-unique key, thus enabling O(1) read & write access to the balance of a particular account in a particular denomination.\n\n### Account interface (x/auth)\n\n`GetCoins()` and `SetCoins()` will be removed from the account interface, since coin balances will\nnow be stored in & managed by the bank module.\n\nThe vesting account interface will replace `SpendableCoins` in favor of `LockedCoins` which does\nnot require the account balance anymore. 
In addition, `TrackDelegation()` will now accept the\naccount balance of all tokens denominated in the vesting balance instead of loading the entire\naccount balance.\n\nVesting accounts will continue to store original vesting, delegated free, and delegated\nvesting coins (which is safe since these cannot contain arbitrary denominations).\n\n### Bank keeper (x/bank)\n\nThe following APIs will be added to the `x/bank` keeper:\n\n* `GetAllBalances(ctx Context, addr AccAddress) Coins`\n* `GetBalance(ctx Context, addr AccAddress, denom string) Coin`\n* `SetBalance(ctx Context, addr AccAddress, coin Coin)`\n* `LockedCoins(ctx Context, addr AccAddress) Coins`\n* `SpendableCoins(ctx Context, addr AccAddress) Coins`\n\nAdditional APIs may be added to facilitate iteration and auxiliary functionality not essential to\ncore functionality or persistence.\n\nBalances will be stored first by the address, then by the denomination (the reverse is also possible,\nbut retrieval of all balances for a single account is presumed to be more frequent):\n\n```go\nvar BalancesPrefix = []byte(\"balances\")\n\nfunc (k Keeper) SetBalance(ctx Context, addr AccAddress, balance Coin) error {\n if !balance.IsValid() {\n return errors.New(\"invalid balance\")\n }\n\n store := ctx.KVStore(k.storeKey)\n balancesStore := prefix.NewStore(store, BalancesPrefix)\n accountStore := prefix.NewStore(balancesStore, addr.Bytes())\n\n bz := Marshal(balance)\n accountStore.Set([]byte(balance.Denom), bz)\n\n return nil\n}\n```\n\nThis will result in the balances being indexed by the byte representation of\n`balances/{address}/{denom}`.\n\n`DelegateCoins()` and `UndelegateCoins()` will be altered to only load each individual\naccount balance by denomination found in the (un)delegation amount. 
As a result,\nany mutations to the account balance will be made by denomination.\n\n`SubtractCoins()` and `AddCoins()` will be altered to read & write the balances\ndirectly instead of calling `GetCoins()` / `SetCoins()` (which no longer exist).\n\n`trackDelegation()` and `trackUndelegation()` will be altered to no longer update\naccount balances.\n\nExternal APIs will need to scan all balances under an account to retain backwards-compatibility. It\nis advised that these APIs use `GetBalance` and `SetBalance` instead of `GetAllBalances` when\npossible as to not load the entire account balance.\n\n### Supply module\n\nThe supply module, in order to implement the total supply invariant, will now need\nto scan all accounts & call `GetAllBalances` using the `x/bank` Keeper, then sum\nthe balances and check that they match the expected total supply.\n\n## Status\n\nAccepted.\n\n## Consequences\n\n### Positive\n\n* O(1) reads & writes of balances (with respect to the number of denominations for\nwhich an account has non-zero balances). 
Note, this does not relate to the actual\nI/O cost, rather the total number of direct reads needed.\n\n### Negative\n\n* Slightly less efficient reads/writes when reading & writing all balances of a\nsingle account in a transaction.\n\n### Neutral\n\nNone in particular.\n\n## References\n\n* Ref: https://github.com/cosmos/cosmos-sdk/issues/4982\n* Ref: https://github.com/cosmos/cosmos-sdk/issues/5467\n* Ref: https://github.com/cosmos/cosmos-sdk/issues/5492" + }, + { + "number": 6, + "filename": "adr-006-secret-store-replacement.md", + "title": "ADR 006: Secret Store Replacement", + "content": "# ADR 006: Secret Store Replacement\n\n## Changelog\n\n* July 29th, 2019: Initial draft\n* September 11th, 2019: Work has started\n* November 4th: Cosmos SDK changes merged in\n* November 18th: Gaia changes merged in\n\n## Context\n\nCurrently, a Cosmos SDK application's CLI directory stores key material and metadata in a plain text database in the user’s home directory. Key material is encrypted with a passphrase and protected by the bcrypt hashing algorithm. Metadata (e.g. addresses, public keys, key storage details) is available in plain text.\n\nThis is not desirable for a number of reasons. Perhaps the biggest reason is insufficient security protection of key material and metadata. Leaking the plain text allows an attacker to surveil what keys a given computer controls via a number of techniques, such as compromised dependencies, without any privileged execution. 
This could be followed by a more targeted attack on a particular user/computer.\n\nAll modern desktop operating systems (Ubuntu, Debian, macOS, Windows) provide a built-in secret store that is designed to allow applications to store information that is isolated from all other applications and requires passphrase entry to access the data.\n\nWe are seeking a solution that provides a common abstraction layer over the many different backends and a reasonable fallback for minimal platforms that don’t provide a native secret store.\n\n## Decision\n\nWe recommend replacing the current LevelDB-based Keybase backend with [Keyring](https://github.com/99designs/keyring) by 99designs. This library is designed to provide a common abstraction and uniform interface over many secret stores and is used by 99designs' AWS Vault application.\n\nThis appears to fulfill the requirement of protecting both key material and metadata from rogue software on a user’s machine.\n\n## Status\n\nAccepted\n\n## Consequences\n\n### Positive\n\nIncreased safety for users.\n\n### Negative\n\nUsers must manually migrate.\n\nTesting against all supported backends is difficult.\n\nRunning tests locally on a Mac requires numerous repetitive password entries.\n\n### Neutral\n\n{neutral consequences}\n\n## References\n\n* #4754 Switch secret store to the keyring secret store (original PR by @poldsam) [__CLOSED__]\n* #5029 Add support for github.com/99designs/keyring-backed keybases [__MERGED__]\n* #5097 Add keys migrate command [__MERGED__]\n* #5180 Drop on-disk keybase in favor of keyring [_PENDING_REVIEW_]\n* cosmos/gaia#164 Drop on-disk keybase in favor of keyring (gaia's changes) [_PENDING_REVIEW_]" + }, + { + "number": 7, + "filename": "adr-007-specialization-groups.md", + "title": "ADR 007: Specialization Groups", + "content": "# ADR 007: Specialization Groups\n\n## Changelog\n\n* 2019 Jul 31: Initial Draft\n\n## Context\n\nThis idea was first conceived of in order to fulfill the use 
case of the\ncreation of a decentralized Computer Emergency Response Team (dCERT), whose\nmembers would be elected by a governing community and would fulfill the role of\ncoordinating the community during emergency situations. This thinking\ncan be further abstracted into the conception of \"blockchain specialization\ngroups\".\n\nThe creation of these groups is the beginning of specialization capabilities\nwithin a wider blockchain community which could be used to enable a certain\nlevel of delegated responsibilities. Examples of specialization which could be\nbeneficial to a blockchain community include: code auditing, emergency response,\ncode development, etc. This type of community organization paves the way for\nindividual stakeholders to delegate votes by issue type, if in the future\ngovernance proposals include a field for issue type.\n\n## Decision\n\nA specialization group can be broadly broken down into the following functions\n(herein containing examples):\n\n* Membership Admittance\n* Membership Acceptance\n* Membership Revocation\n * (probably) Without Penalty\n * member steps down (self-Revocation)\n * replaced by new member from governance\n * (probably) With Penalty\n * due to breach of soft-agreement (determined through governance)\n * due to breach of hard-agreement (determined by code)\n* Execution of Duties\n * Special transactions which only execute for members of a specialization\n group (for example, dCERT members voting to turn off transaction routes in\n an emergency scenario)\n* Compensation\n * Group compensation (further distribution decided by the specialization group)\n * Individual compensation for all constituents of a group from the\n greater community\n\nMembership admittance to a specialization group could take place over a wide\nvariety of mechanisms. 
The most obvious example is through a general vote among\nthe entire community; however, in certain systems a community may want to allow\nthe members already in a specialization group to internally elect new members,\nor the community may assign a permission to a particular specialization\ngroup to appoint members to other 3rd party groups. The sky is really the limit\nas to how membership admittance can be structured. We attempt to capture\nsome of these possibilities in a common interface dubbed the `Electionator`. For\nits initial implementation as a part of this ADR we recommend that the general\nelection abstraction (`Electionator`) be provided as well as a basic\nimplementation of that abstraction which allows for a continuous election of\nmembers of a specialization group.\n\n``` golang\n// The Electionator abstraction covers the concept space for\n// a wide variety of election kinds. \ntype Electionator interface {\n\n // is the election object accepting votes.\n Active() bool\n\n // functionality to execute for when a vote is cast in this election, here\n // the vote field is anticipated to be marshalled into a vote type used\n // by an election.\n //\n // NOTE There are no explicit ids here. Just votes which pertain specifically\n // to one electionator. Anyone can create and send a vote to the electionator item\n // which will presumably attempt to marshal those bytes into a particular struct\n // and apply the vote information in some arbitrary way. 
There can be multiple\n // Electionators within the Cosmos-Hub for multiple specialization groups, votes\n // would need to be routed to the Electionator upstream of here.\n Vote(addr sdk.AccAddress, vote []byte)\n\n // here lies all functionality to authenticate and execute changes for\n // when a member accepts being elected\n AcceptElection(sdk.AccAddress)\n\n // Register a revoker object\n RegisterRevoker(Revoker)\n\n // No more revokers may be registered after this function is called\n SealRevokers()\n\n // register hooks to call when election actions occur\n RegisterHooks(ElectionatorHooks)\n\n // query for the current winner(s) of this election based on arbitrary\n // election ruleset\n QueryElected() []sdk.AccAddress\n\n // query metadata for an address in the election; this\n // could include for example the position that an address\n // is being elected for within a group\n //\n // this metadata may be directly related to\n // voting information and/or privileges enabled\n // to members within a group.\n QueryMetadata(sdk.AccAddress) []byte\n}\n\n// ElectionatorHooks, once registered with an Electionator,\n// trigger execution of relevant interface functions when\n// Electionator events occur.\ntype ElectionatorHooks interface {\n AfterVoteCast(addr sdk.AccAddress, vote []byte)\n AfterMemberAccepted(addr sdk.AccAddress)\n AfterMemberRevoked(addr sdk.AccAddress, cause []byte)\n}\n\n// Revoker defines the function required for a membership revocation rule-set\n// used by a specialization group. This could be used to create self revoking,\n// and evidence based revoking, etc. 
Revoker types may be created and\n// reused for different election types.\n//\n// When revoking, the \"cause\" bytes may be arbitrarily marshalled into evidence,\n// memos, etc.\ntype Revoker interface {\n RevokeName() string // identifier for this revoker type\n RevokeMember(addr sdk.AccAddress, cause []byte) error\n}\n```\n\nA certain level of commonality likely exists between the existing code within\n`x/gov` and the required functionality of elections. This common\nfunctionality should be abstracted during implementation. Similarly, for each\nvote implementation, client CLI/REST functionality should be abstracted\nso it can be reused for multiple elections.\n\nThe specialization group abstraction first extends the `Electionator`\nbut also further defines traits of the group.\n\n``` golang\ntype SpecializationGroup interface {\n Electionator\n GetName() string\n GetDescription() string\n\n // general soft contract the group is expected\n // to fulfill with the greater community\n GetContract() string\n\n // messages which can be executed by the members of the group\n Handler(ctx sdk.Context, msg sdk.Msg) sdk.Result\n\n // logic to be executed at endblock; this may, for instance,\n // include payment of a stipend to the group members\n // for participation in the security group.\n EndBlocker(ctx sdk.Context)\n}\n```\n\n## Status\n\n> Proposed\n\n## Consequences\n\n### Positive\n\n* increases specialization capabilities of a blockchain\n* improves abstractions in `x/gov/` such that they can be used with specialization groups\n\n### Negative\n\n* could be used to increase centralization within a community\n\n### Neutral\n\n## References\n\n* [dCERT ADR](./adr-008-dCERT-group.md)"
  },
  {
    "number": 8,
    "filename": "adr-008-dCERT-group.md",
    "title": "ADR 008: Decentralized Computer Emergency Response Team (dCERT) Group",
    "content": "# ADR 008: Decentralized Computer Emergency Response Team (dCERT) Group\n\n## Changelog\n\n* 2019 Jul 31: Initial Draft\n\n## 
Context\n\nIn order to reduce the number of parties involved with handling sensitive\ninformation in an emergency scenario, we propose the creation of a\nspecialization group named The Decentralized Computer Emergency Response Team\n(dCERT). Initially, this group's role is intended to be that of a coordinator\nbetween various actors within a blockchain community, such as validators,\nbug-hunters, and developers. During a time of crisis, the dCERT group would\naggregate and relay input from a variety of stakeholders to the developers who\nare actively devising a patch to the software; this way, sensitive information\ndoes not need to be publicly disclosed while some input from the community can\nstill be gained.\n\nAdditionally, a special privilege is proposed for the dCERT group: the capacity\nto \"circuit-break\" (i.e. temporarily disable) a particular message path. Note\nthat this privilege should be enabled/disabled globally with a governance\nparameter such that this privilege could start disabled and later be enabled\nthrough a parameter change proposal, once a dCERT group has been established.\n\nIn the future it is foreseeable that the community may wish to expand the roles\nof dCERT with further responsibilities, such as the capacity to \"pre-approve\" a\nsecurity update on behalf of the community prior to a full community-wide\nvote, whereby the sensitive information would be revealed prior to a\nvulnerability being patched on the live network.\n\n## Decision\n\nThe dCERT group is proposed to include an implementation of a `SpecializationGroup`\nas defined in [ADR 007](./adr-007-specialization-groups.md). This will include the\nimplementation of:\n\n* continuous voting\n* slashing due to breach of soft contract\n* revoking a member due to breach of soft contract\n* emergency disband of the entire dCERT group (ex. 
for colluding maliciously)\n* compensation stipend from the community pool or other means decided by\n governance\n\nThis system necessitates the following new parameters:\n\n* per-block stipend allowance for each dCERT member\n* maximum number of dCERT members\n* required staked slashable tokens for each dCERT member\n* quorum for suspending a particular member\n* proposal wager for disbanding the dCERT group\n* stabilization period for dCERT member transition\n* circuit break dCERT privileges enabled\n\nThese parameters are expected to be implemented through the param keeper such\nthat governance may change them at any given point.\n\n### Continuous Voting Electionator\n\nAn `Electionator` object is to be implemented as continuous voting and with the\nfollowing specifications:\n\n* All delegation addresses may submit votes at any point, which updates their\n preferred representation on the dCERT group.\n* Preferred representation may be arbitrarily split between addresses (ex. 50%\n to John, 25% to Sally, 25% to Carol)\n* In order for a new member to be added to the dCERT group, they must\n send a transaction accepting their admission, at which point the validity of\n their admission is to be confirmed.\n * A sequence number is assigned when a member is added to the dCERT group.\n If a member leaves the dCERT group and then rejoins, a new sequence number\n is assigned. 
\n* Addresses which control the greatest amount of preferred-representation are\n eligible to join the dCERT group (up to the _maximum number of dCERT members_).\n If the dCERT group is already full and a new member is admitted, the existing\n dCERT member with the lowest amount of votes is kicked from the dCERT group.\n * In the split situation where the dCERT group is full but a vying candidate\n has the same amount of votes as an existing dCERT member, the existing\n member should maintain its position.\n * In the split situation where somebody must be kicked out but the two\n addresses with the smallest number of votes have the same number of votes,\n the address with the smallest sequence number maintains its position.\n* A stabilization period can be optionally included to reduce the\n \"flip-flopping\" of the dCERT membership tail members. If a stabilization\n period is provided which is greater than 0, when members are kicked due to\n insufficient support, a queue entry is created which documents which member is\n to replace which other member. While this entry is in the queue, no new entries\n to kick that same dCERT member can be made. When the entry matures at the\n duration of the stabilization period, the new member is instantiated and the old\n member kicked.\n\n### Staking/Slashing\n\nAll members of the dCERT group must stake tokens _specifically_ to maintain\neligibility as a dCERT member. These tokens can be staked directly by the vying\ndCERT member or out of the good will of a 3rd party (who shall gain no on-chain\nbenefits for doing so). This staking mechanism should use the existing global\nunbonding time of tokens staked for network validator security. A dCERT member\ncan _only be_ a member if it has the required tokens staked under this\nmechanism. If those tokens are unbonded, then the dCERT member must be\nautomatically kicked from the group. 
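The eligibility rule above can be sketched as a small, self-contained Go program. Everything here (the `member` struct, the `prune` helper, and the stake threshold) is a hypothetical illustration, not Cosmos SDK API: a member stays in the group only while the tokens staked specifically for dCERT eligibility meet the required amount.

```go
package main

import "fmt"

// member is a hypothetical stand-in for a dCERT group member; a real
// implementation would track an sdk.AccAddress and read bonded amounts
// from the staking keeper.
type member struct {
	addr   string
	staked int64 // tokens staked specifically for dCERT eligibility
}

// prune returns only the members whose dedicated stake still meets the
// required amount; anyone whose tokens were unbonded below the threshold
// is dropped, mirroring the automatic-kick rule described above.
func prune(members []member, required int64) []member {
	var kept []member
	for _, m := range members {
		if m.staked >= required {
			kept = append(kept, m)
		}
	}
	return kept
}

func main() {
	group := []member{
		{addr: "member-a", staked: 1000},
		{addr: "member-b", staked: 100}, // unbonded below the requirement
	}
	for _, m := range prune(group, 500) {
		fmt.Println(m.addr, "remains eligible")
	}
}
```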
\n\nSlashing of a particular dCERT member due to soft-contract breach should be\nperformed by governance on a per-member basis based on the magnitude of the\nbreach. The process flow is anticipated to be that a dCERT member is suspended\nby the dCERT group prior to being slashed by governance.\n\nMembership suspension by the dCERT group takes place through a voting procedure\nby the dCERT group members. After this suspension has taken place, a governance\nproposal to slash the dCERT member must be submitted; if the proposal is not\napproved by the time the suspended member has completed unbonding their\ntokens, then the tokens are no longer staked and can no longer be slashed.\n\nAdditionally, in the case of an emergency situation of a colluding and malicious\ndCERT group, the community needs the capability to disband the entire dCERT\ngroup and likely fully slash them. This could be achieved through a special new\nproposal type (implemented as a general governance proposal) which would halt\nthe functionality of the dCERT group until the proposal was concluded. This\nspecial proposal type would likely need to also have a fairly large wager which\ncould be slashed if the proposal creator was malicious. The reason a large\nwager should be required is that as soon as the proposal is made, the\ncapability of the dCERT group to halt message routes is temporarily\nsuspended, meaning that a malicious actor who created such a proposal could\nthen potentially exploit a bug during this period of time, with no dCERT group\ncapable of shutting down the exploitable message routes.\n\n### dCERT membership transactions\n\nActive dCERT members may:\n\n* change the description of the dCERT group\n* circuit-break a message route\n* vote to suspend a dCERT member.\n\nHere circuit-breaking refers to the capability to disable a group of messages.\nThis could, for instance, mean: \"disable all staking-delegation messages\", or\n\"disable all distribution messages\". 
This could be accomplished by verifying\nthat the message route has not been \"circuit-broken\" at CheckTx time (in\n`baseapp/baseapp.go`).\n\n\"Unbreaking\" a circuit is anticipated only to occur during a hard fork upgrade,\nmeaning that no capability to unbreak a message route on a live chain is\nrequired.\n\nNote also that if there was a problem with governance voting (for instance a\ncapability to vote many times) then governance would be broken and should be\nhalted with this mechanism; it would then be up to the validator set to\ncoordinate a hard-fork upgrade to a patched version of the software where\ngovernance is re-enabled (and fixed). If the dCERT group abuses this privilege,\nthey should all be severely slashed.\n\n## Status\n\nProposed\n\n## Consequences\n\n### Positive\n\n* Potential to reduce the number of parties to coordinate with during an emergency\n* Reduction in possibility of disclosing sensitive information to malicious parties\n\n### Negative\n\n* Centralization risks\n\n### Neutral\n\n## References\n\n* [Specialization Groups ADR](./adr-007-specialization-groups.md)"
  },
  {
    "number": 9,
    "filename": "adr-009-evidence-module.md",
    "title": "ADR 009: Evidence Module",
    "content": "# ADR 009: Evidence Module\n\n## Changelog\n\n* 2019 July 31: Initial draft\n* 2019 October 24: Initial implementation\n\n## Status\n\nAccepted\n\n## Context\n\nIn order to support building highly secure, robust, and interoperable blockchain\napplications, it is vital for the Cosmos SDK to expose a mechanism in which arbitrary\nevidence can be submitted, evaluated, and verified, resulting in some agreed-upon\npenalty for any misbehavior committed by a validator, such as equivocation (double-voting),\nsigning when unbonded, signing an incorrect state transition (in the future), etc.\nFurthermore, such a mechanism is paramount for any\n[IBC](https://github.com/cosmos/ics/blob/master/ibc/2_IBC_ARCHITECTURE.md) or\ncross-chain validation protocol implementation in 
order to support the ability\nfor any misbehavior to be relayed back from a collateralized chain to a primary\nchain so that the equivocating validator(s) can be slashed.\n\n## Decision\n\nWe will implement an evidence module in the Cosmos SDK supporting the following\nfunctionality:\n\n* Provide developers with the abstractions and interfaces necessary to define\n custom evidence messages, message handlers, and methods to slash and penalize\n accordingly for misbehavior.\n* Support the ability to route evidence messages to handlers in any module to\n determine the validity of submitted misbehavior.\n* Support the ability, through governance, to modify slashing penalties of any\n evidence type.\n* Querier implementation to support querying params, evidence types, and\n all submitted valid misbehavior.\n\n### Types\n\nFirst, we define the `Evidence` interface type. The `x/evidence` module may implement\nits own types that can be used by many chains (e.g. `CounterFactualEvidence`).\nIn addition, other modules may implement their own `Evidence` types in a similar\nmanner to the way governance is extensible. It is important to note that any concrete\ntype implementing the `Evidence` interface may include arbitrary fields such as\nan infraction time. 
We want the `Evidence` type to remain as flexible as possible.\n\nWhen submitting evidence to the `x/evidence` module, the concrete type must provide\nthe validator's consensus address, which should be known by the `x/slashing`\nmodule (assuming the infraction is valid), the height at which the infraction\noccurred, and the validator's power at that same height.\n\n```go\ntype Evidence interface {\n Route() string\n Type() string\n String() string\n Hash() HexBytes\n ValidateBasic() error\n\n // The consensus address of the malicious validator at time of infraction\n GetConsensusAddress() ConsAddress\n\n // Height at which the infraction occurred\n GetHeight() int64\n\n // The total power of the malicious validator at time of infraction\n GetValidatorPower() int64\n\n // The total validator set power at time of infraction\n GetTotalPower() int64\n}\n```\n\n### Routing & Handling\n\nEach `Evidence` type must map to a specific unique route and be registered with\nthe `x/evidence` module. It accomplishes this through the `Router` implementation.\n\n```go\ntype Router interface {\n AddRoute(r string, h Handler) Router\n HasRoute(r string) bool\n GetRoute(path string) Handler\n Seal()\n}\n```\n\nUpon successful routing through the `x/evidence` module, the `Evidence` type\nis passed through a `Handler`. This `Handler` is responsible for executing all\ncorresponding business logic necessary for verifying the evidence as valid. In\naddition, the `Handler` may execute any necessary slashing and potential jailing.\nSince slashing fractions will typically result from some form of static functions,\nallowing the `Handler` to do this provides the greatest flexibility. An example could\nbe `k * evidence.GetValidatorPower()` where `k` is an on-chain parameter controlled\nby governance. 
The `Evidence` type should provide all the external information\nnecessary in order for the `Handler` to make the necessary state transitions.\nIf no error is returned, the `Evidence` is considered valid.\n\n```go\ntype Handler func(Context, Evidence) error\n```\n\n### Submission\n\n`Evidence` is submitted through a `MsgSubmitEvidence` message type, which is internally\nhandled by the `x/evidence` module's `SubmitEvidence`.\n\n```go\ntype MsgSubmitEvidence struct {\n Evidence\n}\n\nfunc handleMsgSubmitEvidence(ctx Context, keeper Keeper, msg MsgSubmitEvidence) Result {\n if err := keeper.SubmitEvidence(ctx, msg.Evidence); err != nil {\n return err.Result()\n }\n\n // emit events...\n\n return Result{\n // ...\n }\n}\n```\n\nThe `x/evidence` module's keeper is responsible for matching the `Evidence` against\nthe module's router and invoking the corresponding `Handler`, which may include\nslashing and jailing the validator. Upon success, the submitted evidence is persisted.\n\n```go\nfunc (k Keeper) SubmitEvidence(ctx Context, evidence Evidence) error {\n handler := k.router.GetRoute(evidence.Route())\n if err := handler(ctx, evidence); err != nil {\n return ErrInvalidEvidence(k.codespace, err)\n }\n\n k.setEvidence(ctx, evidence)\n return nil\n}\n```\n\n### Genesis\n\nFinally, we need to represent the genesis state of the `x/evidence` module. The\nmodule only needs a list of all submitted valid infractions and any params\nthe module needs in order to handle submitted evidence. 
The `x/evidence`\nmodule will naturally define and route native evidence types, for which it'll most\nlikely need slashing penalty constants.\n\n```go\ntype GenesisState struct {\n Params Params\n Infractions []Evidence\n}\n```\n\n## Consequences\n\n### Positive\n\n* Allows the state machine to process misbehavior submitted on-chain and penalize\n validators based on agreed-upon slashing parameters.\n* Allows evidence types to be defined and handled by any module. This further allows\n slashing and jailing to be defined by more complex mechanisms.\n* Does not solely rely on Tendermint to submit evidence.\n\n### Negative\n\n* No easy way to introduce new evidence types through governance on a live chain\n due to the inability to introduce the new evidence type's corresponding handler\n\n### Neutral\n\n* Should we persist infractions indefinitely? Or should we rather rely on events?\n\n## References\n\n* [ICS](https://github.com/cosmos/ics)\n* [IBC Architecture](https://github.com/cosmos/ics/blob/master/ibc/1_IBC_ARCHITECTURE.md)\n* [Tendermint Fork Accountability](https://github.com/tendermint/spec/blob/7b3138e69490f410768d9b1ffc7a17abc23ea397/spec/consensus/fork-accountability.md)"
  },
  {
    "number": 10,
    "filename": "adr-010-modular-antehandler.md",
    "title": "ADR 010: Modular AnteHandler",
    "content": "# ADR 010: Modular AnteHandler\n\n## Changelog\n\n* 2019 Aug 31: Initial draft\n* 2021 Sep 14: Superseded by ADR-045\n\n## Status\n\nSUPERSEDED by ADR-045\n\n## Context\n\nThe current AnteHandler design allows users to either use the default AnteHandler provided in `x/auth` or to build their own AnteHandler from scratch. Ideally, AnteHandler functionality would be split into multiple, modular functions that can be chained together along with custom ante-functions so that users do not have to rewrite common antehandler logic when they want to implement custom behavior.\n\nFor example, let's say a user wants to implement some custom signature verification logic. 
In the current codebase, the user would have to write their own Antehandler from scratch, largely reimplementing much of the same code, and then set their own custom, monolithic antehandler in the baseapp. Instead, we would like to allow users to specify custom behavior when necessary and combine it with default ante-handler functionality in a way that is as modular and flexible as possible.\n\n## Proposals\n\n### Per-Module AnteHandler\n\nOne approach is to use the [ModuleManager](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/module) and have each module implement its own antehandler if it requires custom antehandler logic. The ModuleManager can then be passed in an AnteHandler order in the same way it has an order for BeginBlockers and EndBlockers. The ModuleManager returns a single AnteHandler function that will take in a tx and run each module's `AnteHandle` in the specified order. The module manager's AnteHandler is set as the baseapp's AnteHandler.\n\nPros:\n\n1. Simple to implement\n2. Utilizes the existing ModuleManager architecture\n\nCons:\n\n1. Improves granularity but still cannot get more granular than a per-module basis. e.g. If auth's `AnteHandle` function is in charge of validating memo and signatures, users cannot swap the signature-checking functionality while keeping the rest of auth's `AnteHandle` functionality.\n2. Module AnteHandlers are run one after the other. There is no way for one AnteHandler to wrap or \"decorate\" another.\n\n### Decorator Pattern\n\nThe [weave project](https://github.com/iov-one/weave) achieves AnteHandler modularity through the use of a decorator pattern. 
The interface is designed as follows:\n\n```go\n// Decorator wraps a Handler to provide common functionality\n// like authentication, or fee-handling, to many Handlers\ntype Decorator interface {\n\tCheck(ctx Context, store KVStore, tx Tx, next Checker) (*CheckResult, error)\n\tDeliver(ctx Context, store KVStore, tx Tx, next Deliverer) (*DeliverResult, error)\n}\n```\n\nEach decorator works like a modularized Cosmos SDK antehandler function, but it can take in a `next` argument that may be another decorator or a Handler (which does not take in a next argument). These decorators can be chained together, one decorator being passed in as the `next` argument of the previous decorator in the chain. The chain ends in a Router, which can take a tx and route to the appropriate msg handler.\n\nA key benefit of this approach is that one Decorator can wrap its internal logic around the next Checker/Deliverer. A weave Decorator may do the following:\n\n```go\n// Example Decorator's Deliver function\nfunc (example Decorator) Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) (*DeliverResult, error) {\n // Do some pre-processing logic\n\n res, err := next.Deliver(ctx, store, tx)\n\n // Do some post-processing logic given the result and error\n\n return res, err\n}\n```\n\nPros:\n\n1. Weave Decorators can wrap over the next decorator/handler in the chain. The ability to both pre-process and post-process may be useful in certain settings.\n2. Provides a nested modular structure that isn't possible in the solution above, while also allowing for a linear one-after-the-other structure like the solution above.\n\nCons:\n\n1. It is hard to understand at first glance the state updates that would occur after a Decorator runs given the `ctx`, `store`, and `tx`. A Decorator can have an arbitrary number of nested Decorators being called within its function body, each possibly doing some pre- and post-processing before calling the next decorator on the chain. 
Thus, to understand what a Decorator is doing, one must also understand what every other decorator further along the chain is doing. This can get quite complicated to understand. A linear, one-after-the-other approach, while less powerful, may be much easier to reason about.\n\n### Chained Micro-Functions\n\nThe benefit of Weave's approach is that the Decorators can be very concise, which, when chained together, allows for maximum customizability. However, the nested structure can get quite complex and thus hard to reason about.\n\nAnother approach is to split the AnteHandler functionality into tightly scoped \"micro-functions\", while preserving the one-after-the-other ordering that would come from the ModuleManager approach.\n\nWe can then have a way to chain these micro-functions so that they run one after the other. Modules may define multiple ante micro-functions and then also provide a default per-module AnteHandler that implements a default, suggested order for these micro-functions.\n\nUsers can order the AnteHandlers easily by simply using the ModuleManager. The ModuleManager will take in a list of AnteHandlers and return a single AnteHandler that runs each AnteHandler in the order of the list provided. 
If the user is comfortable with the default ordering of each module, this is as simple as providing a list with each module's antehandler (exactly the same as BeginBlocker and EndBlocker).\n\nIf, however, users wish to change the order or add, modify, or delete ante micro-functions in any way, they can always define their own ante micro-functions and add them explicitly to the list that gets passed into the module manager.\n\n#### Default Workflow\n\nThis is an example of a user's AnteHandler if they choose not to make any custom micro-functions.\n\n##### Cosmos SDK code\n\n```go\n// Chains together a list of AnteHandler micro-functions that get run one after the other.\n// Returned AnteHandler will abort on first error.\nfunc Chainer(order []AnteHandler) AnteHandler {\n return func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {\n for _, ante := range order {\n ctx, err = ante(ctx, tx, simulate)\n if err != nil {\n return ctx, err\n }\n }\n return ctx, nil\n }\n}\n```\n\n```go\n// AnteHandler micro-function to verify signatures\nfunc VerifySignatures(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {\n // verify signatures\n // Returns InvalidSignature Result and abort=true if sigs invalid\n // Return OK result and abort=false if sigs are valid\n}\n\n// AnteHandler micro-function to validate memo\nfunc ValidateMemo(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {\n // validate memo\n}\n\n// Auth defines its own default ante-handler by chaining its micro-functions in a recommended order\nAuthModuleAnteHandler := Chainer([]AnteHandler{VerifySignatures, ValidateMemo})\n```\n\n```go\n// Distribution micro-function to deduct fees from tx\nfunc DeductFees(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {\n // Deduct fees from tx\n // Abort if insufficient funds in account to pay for fees\n}\n\n// Distribution micro-function to check if fees > mempool parameter\nfunc CheckMempoolFees(ctx Context, tx Tx, simulate 
bool) (newCtx Context, err error) {\n // If CheckTx: Abort if the fees are less than the mempool's minFee parameter\n}\n\n// Distribution defines its own default ante-handler by chaining its micro-functions in a recommended order\nDistrModuleAnteHandler := Chainer([]AnteHandler{CheckMempoolFees, DeductFees})\n```\n\n```go\ntype ModuleManager struct {\n // other fields\n AnteHandlerOrder []AnteHandler\n}\n\nfunc (mm ModuleManager) GetAnteHandler() AnteHandler {\n return Chainer(mm.AnteHandlerOrder)\n}\n```\n\n##### User Code\n\n```go\n// Note: Since the user is not making any custom modifications, we can just SetAnteHandlerOrder with the default AnteHandlers provided by each module in our preferred order\nmoduleManager.SetAnteHandlerOrder([]AnteHandler{AuthModuleAnteHandler, DistrModuleAnteHandler})\n\napp.SetAnteHandler(moduleManager.GetAnteHandler())\n```\n\n#### Custom Workflow\n\nThis is an example workflow for a user that wants to implement custom antehandler logic. In this example, the user wants to implement custom signature verification and change the antehandler order so that memo validation runs before signature verification.\n\n##### User Code\n\n```go\n// User can implement their own custom signature verification antehandler micro-function\nfunc CustomSigVerify(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {\n // do some custom signature verification logic\n}\n```\n\n```go\n// Micro-functions allow users to change the order in which they get executed, and swap out default ante-functionality with their own custom logic.\n// Note that users can still chain the default distribution module handler, and auth micro-function along with their custom ante function\nmoduleManager.SetAnteHandlerOrder([]AnteHandler{ValidateMemo, CustomSigVerify, DistrModuleAnteHandler})\n```\n\nPros:\n\n1. Allows for ante functionality to be as modular as possible.\n2. 
For users that do not need custom ante-functionality, there is little difference between how antehandlers work and how BeginBlock and EndBlock work in ModuleManager.\n3. Still easy to understand\n\nCons:\n\n1. Cannot wrap antehandlers with decorators like you can with Weave.\n\n### Simple Decorators\n\nThis approach takes inspiration from Weave's decorator design while trying to minimize the number of breaking changes to the Cosmos SDK and maximize simplicity. Like Weave decorators, this approach allows one `AnteDecorator` to wrap the next AnteHandler to do pre- and post-processing on the result. This is useful since decorators can do defer/cleanups after an AnteHandler returns as well as perform some setup beforehand. Unlike Weave decorators, these `AnteDecorator` functions can only wrap over the AnteHandler rather than the entire handler execution path. This is deliberate, as we want decorators from different modules to perform authentication/validation on a `tx`. However, we do not want decorators being capable of wrapping and modifying the results of a `MsgHandler`.\n\nIn addition, this approach will not break any core Cosmos SDK APIs. Since we preserve the notion of an AnteHandler and still set a single AnteHandler in baseapp, the decorator is simply an additional approach available for users that desire more customization. 
The API of modules (namely `x/auth`) may break with this approach, but the core API remains untouched.\n\nWe allow an `AnteDecorator` interface whose implementations can be chained together to create a Cosmos SDK AnteHandler.\n\nThis allows users to choose between implementing an AnteHandler by themselves and setting it in the baseapp, or using the decorator pattern to chain their custom decorators with the Cosmos SDK provided decorators in the order they wish.\n\n```go\n// An AnteDecorator wraps an AnteHandler, and can do pre- and post-processing on the next AnteHandler\ntype AnteDecorator interface {\n AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error)\n}\n```\n\n```go\n// ChainAnteDecorators will recursively link all of the AnteDecorators in the chain and return a final AnteHandler function\n// This is done to preserve the ability to set a single AnteHandler function in the baseapp.\nfunc ChainAnteDecorators(chain ...AnteDecorator) AnteHandler {\n if len(chain) == 1 {\n return func(ctx Context, tx Tx, simulate bool) (Context, error) {\n return chain[0].AnteHandle(ctx, tx, simulate, nil)\n }\n }\n return func(ctx Context, tx Tx, simulate bool) (Context, error) {\n return chain[0].AnteHandle(ctx, tx, simulate, ChainAnteDecorators(chain[1:]...))\n }\n}\n```\n\n#### Example Code\n\nDefine AnteDecorator functions\n\n```go\n// Setup GasMeter, catch OutOfGasPanic and handle appropriately\ntype SetUpContextDecorator struct{}\n\nfunc (sud SetUpContextDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {\n ctx.GasMeter = NewGasMeter(tx.Gas)\n\n defer func() {\n // recover from OutOfGas panic and handle appropriately\n }()\n\n return next(ctx, tx, simulate)\n}\n\n// Signature Verification decorator. Verify Signatures and move on\ntype SigVerifyDecorator struct{}\n\nfunc (svd SigVerifyDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {\n // verify sigs. 
Return error if invalid\n\n // call next antehandler if sigs ok\n return next(ctx, tx, simulate)\n}\n\n// User-defined Decorator. Can choose to pre- and post-process on AnteHandler\ntype UserDefinedDecorator struct{\n // custom fields\n}\n\nfunc (udd UserDefinedDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {\n // pre-processing logic\n\n ctx, err = next(ctx, tx, simulate)\n\n // post-processing logic\n\n return ctx, err\n}\n```\n\nLink AnteDecorators to create a final AnteHandler. Set this AnteHandler in baseapp.\n\n```go\n// Create final antehandler by chaining the decorators together\nantehandler := ChainAnteDecorators(NewSetUpContextDecorator(), NewSigVerifyDecorator(), NewUserDefinedDecorator())\n\n// Set chained Antehandler in the baseapp\nbapp.SetAnteHandler(antehandler)\n```\n\nPros:\n\n1. Allows one decorator to pre- and post-process the next AnteHandler, similar to the Weave design.\n2. Does not need to break the baseapp API. Users can still set a single AnteHandler if they choose.\n\nCons:\n\n1. Decorator pattern may have a deeply nested structure that is hard to understand; this is mitigated by having the decorator order explicitly listed in the `ChainAnteDecorators` function.\n2. Does not make use of the ModuleManager design. 
Since this is already being used for BeginBlocker/EndBlocker, this proposal seems unaligned with that design pattern.\n\n## Consequences\n\nSince pros and cons are listed with each approach above, they are omitted from this section.\n\n## References\n\n* [#4572](https://github.com/cosmos/cosmos-sdk/issues/4572): Modular AnteHandler Issue\n* [#4583](https://github.com/cosmos/cosmos-sdk/pull/4583): Initial Implementation of Per-Module AnteHandler Approach\n* [Weave Decorator Code](https://github.com/iov-one/weave/blob/master/handler.go#L35)\n* [Weave Design Videos](https://vimeo.com/showcase/6189877)"
  },
  {
    "number": 11,
    "filename": "adr-011-generalize-genesis-accounts.md",
    "title": "ADR 011: Generalize Genesis Accounts",
    "content": "# ADR 011: Generalize Genesis Accounts\n\n## Changelog\n\n* 2019-08-30: initial draft\n\n## Context\n\nCurrently, the Cosmos SDK allows for custom account types; the `auth` keeper stores any type fulfilling its `Account` interface. However, `auth` does not handle exporting or loading accounts to/from a genesis file; this is done by `genaccounts`, which handles only four concrete account types (`BaseAccount`, `ContinuousVestingAccount`, `DelayedVestingAccount` and `ModuleAccount`).\n\nProjects desiring to use custom accounts (say custom vesting accounts) need to fork and modify `genaccounts`.\n\n## Decision\n\nIn summary, we will (un)marshal all accounts (interface types) directly using amino, rather than converting to `genaccounts`'s `GenesisAccount` type. Since doing this removes the majority of `genaccounts`'s code, we will merge `genaccounts` into `auth`. Marshalled accounts will be stored in `auth`'s genesis state.\n\nDetailed changes:\n\n### 1) (Un)Marshal accounts directly using amino\n\nThe `auth` module's `GenesisState` gains a new field `Accounts`. 
Note these aren't of type `exported.Account` for reasons outlined in section 3.\n\n```go\n// GenesisState - all auth state that must be provided at genesis\ntype GenesisState struct {\n Params Params `json:\"params\" yaml:\"params\"`\n Accounts []GenesisAccount `json:\"accounts\" yaml:\"accounts\"`\n}\n```\n\nNow `auth`'s `InitGenesis` and `ExportGenesis` (un)marshal accounts as well as the defined params.\n\n```go\n// InitGenesis - Init store state from genesis data\nfunc InitGenesis(ctx sdk.Context, ak AccountKeeper, data GenesisState) {\n ak.SetParams(ctx, data.Params)\n // load the accounts\n for _, a := range data.Accounts {\n acc := ak.NewAccount(ctx, a) // set account number\n ak.SetAccount(ctx, acc)\n }\n}\n\n// ExportGenesis returns a GenesisState for a given context and keeper\nfunc ExportGenesis(ctx sdk.Context, ak AccountKeeper) GenesisState {\n params := ak.GetParams(ctx)\n\n var genAccounts []exported.GenesisAccount\n ak.IterateAccounts(ctx, func(account exported.Account) bool {\n genAccount := account.(exported.GenesisAccount)\n genAccounts = append(genAccounts, genAccount)\n return false\n })\n\n return NewGenesisState(params, genAccounts)\n}\n```\n\n### 2) Register custom account types on the `auth` codec\n\nThe `auth` codec must have all custom account types registered to marshal them. 
We will follow the pattern established in `gov` for proposals.\n\nAn example custom account definition:\n\n```go\nimport authtypes \"github.com/cosmos/cosmos-sdk/x/auth/types\"\n\n// Register the module account type with the auth module codec so it can decode module accounts stored in a genesis file\nfunc init() {\n authtypes.RegisterAccountTypeCodec(ModuleAccount{}, \"cosmos-sdk/ModuleAccount\")\n}\n\ntype ModuleAccount struct {\n ...\n```\n\nThe `auth` codec definition:\n\n```go\nvar ModuleCdc *codec.LegacyAmino\n\nfunc init() {\n ModuleCdc = codec.NewLegacyAmino()\n // register module msg's and Account interface\n ...\n // leave the codec unsealed\n}\n\n// RegisterAccountTypeCodec registers an external account type defined in another module for the internal ModuleCdc.\nfunc RegisterAccountTypeCodec(o interface{}, name string) {\n ModuleCdc.RegisterConcrete(o, name, nil)\n}\n```\n\n### 3) Genesis validation for custom account types\n\nModules implement a `ValidateGenesis` method. As `auth` does not know of account implementations, accounts will need to validate themselves.\n\nWe will unmarshal accounts into a `GenesisAccount` interface that includes a `Validate` method.\n\n```go\ntype GenesisAccount interface {\n exported.Account\n Validate() error\n}\n```\n\nThen the `auth` `ValidateGenesis` function becomes:\n\n```go\n// ValidateGenesis performs basic validation of auth genesis data returning an\n// error for any failed validation criteria.\nfunc ValidateGenesis(data GenesisState) error {\n // Validate params\n ...\n\n // Validate accounts\n addrMap := make(map[string]bool, len(data.Accounts))\n for _, acc := range data.Accounts {\n\n // check for duplicated accounts\n addrStr := acc.GetAddress().String()\n if _, ok := addrMap[addrStr]; ok {\n return fmt.Errorf(\"duplicate account found in genesis state; address: %s\", addrStr)\n }\n addrMap[addrStr] = true\n\n // check account specific validation\n if err := acc.Validate(); err != nil {\n return 
fmt.Errorf(\"invalid account found in genesis state; address: %s, error: %s\", addrStr, err.Error())\n }\n\n }\n return nil\n}\n```\n\n### 4) Move add-genesis-account cli to `auth`\n\nThe `genaccounts` module contains a cli command to add base or vesting accounts to a genesis file.\n\nThis will be moved to `auth`. We will leave it to projects to write their own commands to add custom accounts. An extensible cli handler, similar to `gov`, could be created but it is not worth the complexity for this minor use case.\n\n### 5) Update module and vesting accounts\n\nUnder the new scheme, module and vesting account types need some minor updates:\n\n* Type registration on `auth`'s codec (shown above)\n* A `Validate` method for each `Account` concrete type\n\n## Status\n\nProposed\n\n## Consequences\n\n### Positive\n\n* custom accounts can be used without needing to fork `genaccounts`\n* reduction in lines of code\n\n### Negative\n\n### Neutral\n\n* `genaccounts` module no longer exists\n* accounts in genesis files are stored under `accounts` in `auth` rather than in the `genaccounts` module.\n* `add-genesis-account` cli command now in `auth`\n\n## References" 
  },
  {
    "number": 12,
    "filename": "adr-012-state-accessors.md",
    "title": "ADR 012: State Accessors",
    "content": "# ADR 012: State Accessors\n\n## Changelog\n\n* 2019 Sep 04: Initial draft\n\n## Context\n\nCosmos SDK modules currently use the `KVStore` interface and `Codec` to access their respective state. While\nthis provides a large degree of freedom to module developers, it is hard to modularize and the UX is\nmediocre.\n\nFirst, each time a module tries to access the state, it has to marshal the value and set or get the\nvalue and finally unmarshal. 
Usually this is done by declaring `Keeper.GetXXX` and `Keeper.SetXXX` functions,\nwhich are repetitive and hard to maintain.\n\nSecond, this makes it harder to align with the object capability theorem: the right to access the\nstate is defined as a `StoreKey`, which gives full access to the entire Merkle tree, so a module cannot\nsend the access right to a specific key-value pair (or a set of key-value pairs) to another module safely.\n\nFinally, because the getter/setter functions are defined as methods of a module's `Keeper`, the reviewers\nhave to consider the whole Merkle tree space when reviewing a function accessing any part of the state.\nThere is no static way to know which part of the state the function is accessing (and which is not).\n\n## Decision\n\nWe will define a type named `Value`:\n\n```go\ntype Value struct {\n m Mapping\n key []byte\n}\n```\n\nThe `Value` works as a reference for a key-value pair in the state, where `Value.m` defines the key-value\nspace it will access and `Value.key` defines the exact key for the reference.\n\nWe will define a type named `Mapping`:\n\n```go\ntype Mapping struct {\n storeKey sdk.StoreKey\n cdc *codec.LegacyAmino\n prefix []byte\n}\n```\n\nThe `Mapping` works as a reference for a key-value space in the state, where `Mapping.storeKey` defines\nthe IAVL (sub-)tree and `Mapping.prefix` defines the optional subspace prefix.\n\nWe will define the following core methods for the `Value` type:\n\n```go\n// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal\nfunc (Value) Get(ctx Context, ptr interface{}) {}\n\n// Get and unmarshal stored data, return error if not exists or cannot unmarshal\nfunc (Value) GetSafe(ctx Context, ptr interface{}) {}\n\n// Get stored data as raw byte slice\nfunc (Value) GetRaw(ctx Context) []byte {}\n\n// Marshal and set a raw value\nfunc (Value) Set(ctx Context, o interface{}) {}\n\n// Check if a raw value exists\nfunc (Value) Exists(ctx Context) bool 
{}\n\n// Delete a raw value\nfunc (Value) Delete(ctx Context) {}\n```\n\nWe will define the following core methods for the `Mapping` type:\n\n```go\n// Constructs key-value pair reference corresponding to the key argument in the Mapping space\nfunc (Mapping) Value(key []byte) Value {}\n\n// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal\nfunc (Mapping) Get(ctx Context, key []byte, ptr interface{}) {}\n\n// Get and unmarshal stored data, return error if not exists or cannot unmarshal\nfunc (Mapping) GetSafe(ctx Context, key []byte, ptr interface{}) {}\n\n// Get stored data as raw byte slice\nfunc (Mapping) GetRaw(ctx Context, key []byte) []byte {}\n\n// Marshal and set a raw value\nfunc (Mapping) Set(ctx Context, key []byte, o interface{}) {}\n\n// Check if a raw value exists\nfunc (Mapping) Has(ctx Context, key []byte) bool {}\n\n// Delete a raw value\nfunc (Mapping) Delete(ctx Context, key []byte) {}\n```\n\nEach method of the `Mapping` type that is passed the arguments `ctx`, `key`, and `args...` will proxy\nthe call to `Mapping.Value(key)` with arguments `ctx` and `args...`.\n\nIn addition, we will define and provide a common set of types derived from the `Value` type:\n\n```go\ntype Boolean struct { Value }\ntype Enum struct { Value }\ntype Integer struct { Value; enc IntEncoding }\ntype String struct { Value }\n// ...\n```\n\nWhere the encoding schemes can be different, `o` arguments in core methods are typed, and `ptr` arguments\nin core methods are replaced by explicit return types.\n\nFinally, we will define a family of types derived from the `Mapping` type:\n\n```go\ntype Indexer struct {\n m Mapping\n enc IntEncoding\n}\n```\n\nWhere the `key` argument in the core methods is typed.\n\nSome of the properties of the accessor types are:\n\n* State access happens only when a function which takes a `Context` as an argument is invoked\n* Accessor type structs grant access rights only to the state that the struct refers to, and nothing else\n* 
Marshalling/Unmarshalling happens implicitly within the core methods\n\n## Status\n\nProposed\n\n## Consequences\n\n### Positive\n\n* Serialization will be done automatically\n* Shorter code size, less boilerplate, better UX\n* References to the state can be transferred safely\n* Explicit scope of accessing\n\n### Negative\n\n* Serialization format will be hidden\n* Different architecture from the current, but the use of accessor types can be opt-in\n* Type-specific types (e.g. `Boolean` and `Integer`) have to be defined manually\n\n### Neutral\n\n## References\n\n* [#4554](https://github.com/cosmos/cosmos-sdk/issues/4554)"
  },
  {
    "number": 13,
    "filename": "adr-013-metrics.md",
    "title": "ADR 013: Observability",
    "content": "# ADR 013: Observability\n\n## Changelog\n\n* 20-01-2020: Initial Draft\n\n## Status\n\nProposed\n\n## Context\n\nTelemetry is paramount to debugging and understanding what the application is doing and how it is\nperforming. We aim to expose metrics from modules and other core parts of the Cosmos SDK.\n\nIn addition, we should aim to support multiple configurable sinks that an operator may choose from.\nBy default, when telemetry is enabled, the application should track and expose metrics that are\nstored in-memory. The operator may choose to enable additional sinks, where we support only\n[Prometheus](https://prometheus.io/) for now, as it's battle-tested, simple to set up, open source,\nand is rich with ecosystem tooling.\n\nWe must also aim to integrate metrics into the Cosmos SDK in the most seamless way possible such that\nmetrics may be added or removed at will and without much friction. To do this, we will use the\n[go-metrics](https://github.com/hashicorp/go-metrics) library.\n\nFinally, operators may enable telemetry along with specific configuration options. 
If enabled, metrics\nwill be exposed via `/metrics?format={text|prometheus}` via the API server.\n\n## Decision\n\nWe will add an additional configuration block to `app.toml` that defines telemetry settings:\n\n```toml\n###############################################################################\n### Telemetry Configuration ###\n###############################################################################\n\n[telemetry]\n\n# Prefixed with keys to separate services\nservice-name = {{ .Telemetry.ServiceName }}\n\n# Enabled enables the application telemetry functionality. When enabled,\n# an in-memory sink is also enabled by default. Operators may also enable\n# other sinks such as Prometheus.\nenabled = {{ .Telemetry.Enabled }}\n\n# Enable prefixing gauge values with hostname\nenable-hostname = {{ .Telemetry.EnableHostname }}\n\n# Enable adding hostname to labels\nenable-hostname-label = {{ .Telemetry.EnableHostnameLabel }}\n\n# Enable adding service to labels\nenable-service-label = {{ .Telemetry.EnableServiceLabel }}\n\n# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink.\nprometheus-retention-time = {{ .Telemetry.PrometheusRetentionTime }}\n```\n\nThe given configuration allows for two sinks -- in-memory and Prometheus. We create a `Metrics`\ntype that performs all the bootstrapping for the operator, so capturing metrics becomes seamless.\n\n```go\n// Metrics defines a wrapper around application telemetry functionality. It allows\n// metrics to be gathered at any point in time. When creating a Metrics object,\n// internally, a global metrics is registered with a set of sinks as configured\n// by the operator. In addition to the sinks, when a process gets a SIGUSR1, a\n// dump of formatted recent metrics will be sent to STDERR.\ntype Metrics struct {\n memSink *metrics.InmemSink\n prometheusEnabled bool\n}\n\n// Gather collects all registered metrics and returns a GatherResponse where the\n// metrics are encoded depending on the type. 
Metrics are either encoded via\n// Prometheus or JSON if in-memory.\nfunc (m *Metrics) Gather(format string) (GatherResponse, error) {\n switch format {\n case FormatPrometheus:\n return m.gatherPrometheus()\n\n case FormatText:\n return m.gatherGeneric()\n\n case FormatDefault:\n return m.gatherGeneric()\n\n default:\n return GatherResponse{}, fmt.Errorf(\"unsupported metrics format: %s\", format)\n }\n}\n```\n\nIn addition, `Metrics` allows us to gather the current set of metrics at any given point in time. An\noperator may also choose to send a signal, SIGUSR1, to dump and print formatted metrics to STDERR.\n\nDuring an application's bootstrapping and construction phase, if `Telemetry.Enabled` is `true`, the\nAPI server will create a reference to a `Metrics` object and will register a metrics\nhandler accordingly.\n\n```go\nfunc (s *Server) Start(cfg config.Config) error {\n // ...\n\n if cfg.Telemetry.Enabled {\n m, err := telemetry.New(cfg.Telemetry)\n if err != nil {\n return err\n }\n\n s.metrics = m\n s.registerMetrics()\n }\n\n // ...\n}\n\nfunc (s *Server) registerMetrics() {\n metricsHandler := func(w http.ResponseWriter, r *http.Request) {\n format := strings.TrimSpace(r.FormValue(\"format\"))\n\n gr, err := s.metrics.Gather(format)\n if err != nil {\n rest.WriteErrorResponse(w, http.StatusBadRequest, fmt.Sprintf(\"failed to gather metrics: %s\", err))\n return\n }\n\n w.Header().Set(\"Content-Type\", gr.ContentType)\n _, _ = w.Write(gr.Metrics)\n }\n\n s.Router.HandleFunc(\"/metrics\", metricsHandler).Methods(\"GET\")\n}\n```\n\nApplication developers may track counters, gauges, summaries, and key/value metrics. There is no\nadditional lifting required by modules to leverage profiling metrics. 
To do so, it's as simple as:\n\n```go\nfunc (k BaseKeeper) MintCoins(ctx sdk.Context, moduleName string, amt sdk.Coins) error {\n defer metrics.MeasureSince(time.Now(), \"MintCoins\")\n // ...\n}\n```\n\n## Consequences\n\n### Positive\n\n* Insight into the performance and behavior of an application\n\n### Negative\n\n### Neutral\n\n## References" 
  },
  {
    "number": 14,
    "filename": "adr-014-proportional-slashing.md",
    "title": "ADR 14: Proportional Slashing",
    "content": "# ADR 14: Proportional Slashing\n\n## Changelog\n\n* 2019-10-15: Initial draft\n* 2020-05-25: Removed correlation root slashing\n* 2020-07-01: Updated to include S-curve function instead of linear\n\n## Context\n\nIn Proof of Stake-based chains, centralization of consensus power amongst a small set of validators can cause harm to the network due to increased risk of censorship, liveness failure, fork attacks, etc. However, while this centralization causes a negative externality to the network, it is not directly felt by the delegators delegating to already-large validators. We would like a way to pass on the negative externality cost of centralization onto those large validators and their delegators.\n\n## Decision\n\n### Design\n\nTo solve this problem, we will implement a procedure called Proportional Slashing. The desire is that the larger a validator is, the more they should be slashed. The first naive attempt is to make a validator's slash percent proportional to their share of consensus voting power.\n\n```text\nslash_amount = k * power // power is the faulting validator's voting power and k is some on-chain constant\n```\n\nHowever, this will incentivize validators with large amounts of stake to split up their voting power amongst accounts (sybil attack), so that if they fault, they all get slashed at a lower percent. 
The solution to this is to take into account not just a validator's own voting percentage, but also the voting percentage of all the other validators who get slashed in a specified time frame.\n\n```text\nslash_amount = k * (power_1 + power_2 + ... + power_n) // where power_i is the voting power of the ith validator faulting in the specified time frame and k is some on-chain constant\n```\n\nNow, if someone splits a 10% validator into two 5% validators and both fault in the same time frame, they will both be slashed at the combined 10% rate.\n\nHowever, in practice, we likely don't want a linear relation between the amount of stake at fault and the percentage of stake to slash. In particular, 5% of stake double signing on its own does little to threaten security, whereas 30% of stake being at fault clearly merits a large slashing factor, due to being very close to the point at which Tendermint security is threatened. A linear relation would require a factor of 6 gap between these two, whereas the difference in risk posed to the network is much larger. We propose using S-curves (formally, [logistic functions](https://en.wikipedia.org/wiki/Logistic_function)) to solve this. S-curves capture the desired criterion quite well. They allow the slashing factor to be minimal for small values, and then grow very rapidly near some threshold point where the risk posed becomes notable.\n\n#### Parameterization\n\nThis requires parameterizing a logistic function. It is very well understood how to parameterize this. 
It has four parameters:\n\n1) A minimum slashing factor\n2) A maximum slashing factor\n3) The inflection point of the S-curve (essentially where do you want to center the S)\n4) The rate of growth of the S-curve (How elongated is the S)\n\n#### Correlation across non-sybil validators\n\nOne will note, that this model doesn't differentiate between multiple validators run by the same operators vs validators run by different operators. This can be seen as an additional benefit in fact. It incentivizes validators to differentiate their setups from other validators, to avoid having correlated faults with them or else they risk a higher slash. So for example, operators should avoid using the same popular cloud hosting platforms or using the same Staking as a Service providers. This will lead to a more resilient and decentralized network.\n\n#### Griefing\n\nGriefing, the act of intentionally getting oneself slashed in order to make another's slash worse, could be a concern here. However, using the protocol described here, the attacker also gets equally impacted by the grief as the victim, so it would not provide much benefit to the griefer.\n\n### Implementation\n\nIn the slashing module, we will add two queues that will track all of the recent slash events. For double sign faults, we will define \"recent slashes\" as ones that have occurred within the last `unbonding period`. For liveness faults, we will define \"recent slashes\" as ones that have occurred within the last `jail period`.\n\n```go\ntype SlashEvent struct {\n Address sdk.ValAddress\n ValidatorVotingPercent sdk.Dec\n SlashedSoFar sdk.Dec\n}\n```\n\nThese slash events will be pruned from the queue once they are older than their respective \"recent slash period\".\n\nWhenever a new slash occurs, a `SlashEvent` struct is created with the faulting validator's voting percent and a `SlashedSoFar` of 0. 
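\n\nAs a rough, illustrative sketch (not part of this ADR's specification; all names and parameter values are hypothetical), the four-parameter S-curve described in the parameterization section above can be written as a clamped logistic function:\n\n```go\npackage main\n\nimport (\n    \"fmt\"\n    \"math\"\n)\n\n// slashFactor maps total faulting voting power (a fraction in [0, 1]) to a\n// slashing factor using a logistic (S-curve) function bounded by the minimum\n// and maximum slashing factors.\nfunc slashFactor(power, minFactor, maxFactor, inflection, growthRate float64) float64 {\n    // standard logistic curve centered at the inflection point\n    s := 1.0 / (1.0 + math.Exp(-growthRate*(power-inflection)))\n    return minFactor + (maxFactor-minFactor)*s\n}\n\nfunc main() {\n    // Small faults are slashed lightly; faults near one third of voting power\n    // (close to the Tendermint security threshold) are slashed heavily.\n    fmt.Println(slashFactor(0.05, 0.01, 0.9, 0.33, 20))\n    fmt.Println(slashFactor(0.30, 0.01, 0.9, 0.33, 20))\n}\n```\n\nThis only sketches the shape of the curve; the actual parameter values would be chosen on-chain, e.g. via governance.\n\n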
Because recent slash events are pruned before the unbonding period and jail period expire, it should not be possible for the same validator to have multiple SlashEvents in the same Queue at the same time.\n\nWe will then iterate over all the SlashEvents in the queue, summing their `ValidatorVotingPercent` to calculate the new percent at which to slash all the validators in the queue, using the formula introduced above.\n\nOnce we have the `NewSlashPercent`, we then iterate over all the `SlashEvent`s in the queue once again, and if `NewSlashPercent > SlashedSoFar` for that SlashEvent, we call `staking.Slash(slashEvent.Address, slashEvent.Power, Math.Min(Math.Max(minSlashPercent, NewSlashPercent - SlashedSoFar), maxSlashPercent))` (we pass in the power of the validator before any slashes occurred, so that we slash the right amount of tokens). We then set the `SlashEvent.SlashedSoFar` amount to `NewSlashPercent`.\n\n## Status\n\nProposed\n\n## Consequences\n\n### Positive\n\n* Increases decentralization by disincentivizing delegating to large validators\n* Incentivizes Decorrelation of Validators\n* More severely punishes attacks than accidental faults\n* More flexibility in slashing rates parameterization\n\n### Negative\n\n* More computationally expensive than current implementation. Will require more data about \"recent slashing events\" to be stored on chain."
  },
  {
    "number": 16,
    "filename": "adr-016-validator-consensus-key-rotation.md",
    "title": "ADR 016: Validator Consensus Key Rotation",
    "content": "# ADR 016: Validator Consensus Key Rotation\n\n## Changelog\n\n* 2019 Oct 23: Initial draft\n* 2019 Nov 28: Add key rotation fee\n\n## Context\n\nThe validator consensus key rotation feature has been discussed and requested for a long time, for the sake of safer validator key management policy (e.g. https://github.com/tendermint/tendermint/issues/1136). 
Here, we suggest one of the simplest forms of validator consensus key rotation, implemented mostly in the Cosmos SDK.\n\nWe don't need to make any updates to the consensus logic in Tendermint because Tendermint does not have any mapping information of consensus key and validator operator key, meaning that from Tendermint's point of view, a consensus key rotation of a validator is simply a replacement of one consensus key with another.\n\nAlso, it should be noted that this ADR includes only the simplest form of consensus key rotation without considering the multiple consensus keys concept. Such a multiple-consensus-keys concept shall remain a long-term goal of Tendermint and the Cosmos SDK.\n\n## Decision\n\n### Pseudo procedure for consensus key rotation\n\n* create new random consensus key.\n* create and broadcast a transaction with a `MsgRotateConsPubKey` that states the new consensus key is now coupled with the validator operator, with a signature from the validator's operator key.\n* old consensus key becomes unable to participate in consensus immediately after the update of key mapping state on-chain.\n* start validating with new consensus key.\n* validators using HSM and KMS should update the consensus key in HSM to use the new rotated key after the height `h` when `MsgRotateConsPubKey` is committed to the blockchain.\n\n### Considerations\n\n* consensus key mapping information management strategy\n * store the history of key mapping changes in the kvstore.\n * the state machine can search the corresponding consensus key paired with the given validator operator for any arbitrary height in a recent unbonding period.\n * the state machine does not need any historical mapping information older than the unbonding period.\n* key rotation costs related to LCD and IBC\n * LCD and IBC will have a traffic/computation burden when there are frequent power changes\n * In the current Tendermint design, consensus key rotations are seen as power changes from the LCD or IBC perspective\n * 
Therefore, to minimize unnecessary frequent key rotation behavior, we limit the maximum number of rotations in the recent unbonding period and also apply an exponentially increasing rotation fee\n* limits\n * a validator cannot rotate its consensus key more than `MaxConsPubKeyRotations` times within any unbonding period, to prevent spam.\n * parameters can be decided by governance and stored in the genesis file.\n* key rotation fee\n * a validator should pay `KeyRotationFee` to rotate the consensus key, calculated as below\n * `KeyRotationFee` = (max(`VotingPowerPercentage` * 100, 1) * `InitialKeyRotationFee`) * 2^(number of rotations in `ConsPubKeyRotationHistory` in recent unbonding period)\n* evidence module\n * the evidence module can search the corresponding consensus key for any height via the slashing keeper so that it can decide which consensus key is supposed to be used for the given height.\n* abci.ValidatorUpdate\n * Tendermint already has the ability to change a consensus key by ABCI communication (`ValidatorUpdate`).\n * validator consensus key update can be done by creating the new validator and deleting the old one (changing its power to zero).\n * therefore, we expect we do not even need to change the Tendermint codebase at all to implement this feature.\n* new genesis parameters in `staking` module\n * `MaxConsPubKeyRotations` : maximum number of rotations that can be executed by a validator in the recent unbonding period. A default value of 10 is suggested (an 11th key rotation will be rejected)\n * `InitialKeyRotationFee` : the initial key rotation fee when no key rotation has happened in the recent unbonding period. A default value of 1 atom is suggested (a 1 atom fee for the first key rotation in the recent unbonding period)\n\n### Workflow\n\n1. The validator generates a new consensus keypair.\n2. The validator generates and signs a `MsgRotateConsPubKey` tx with their operator key and new ConsPubKey\n\n ```go\n type MsgRotateConsPubKey struct {\n ValidatorAddress sdk.ValAddress\n NewPubKey crypto.PubKey\n }\n ```\n\n3. 
`handleMsgRotateConsPubKey` receives the `MsgRotateConsPubKey` and calls `RotateConsPubKey`, which emits an event\n4. `RotateConsPubKey`\n * checks that `NewPubKey` is not duplicated in `ValidatorsByConsAddr`\n * checks that the validator does not exceed the parameter `MaxConsPubKeyRotations` by iterating `ConsPubKeyRotationHistory`\n * checks that the signing account has enough balance to pay `KeyRotationFee`\n * pays `KeyRotationFee` to the community fund\n * overwrites `NewPubKey` in `validator.ConsPubKey`\n * deletes the old `ValidatorByConsAddr`\n * `SetValidatorByConsAddr` for `NewPubKey`\n * adds a `ConsPubKeyRotationHistory` entry for tracking the rotation\n\n ```go\n type ConsPubKeyRotationHistory struct {\n OperatorAddress sdk.ValAddress\n OldConsPubKey crypto.PubKey\n NewConsPubKey crypto.PubKey\n RotatedHeight int64\n }\n ```\n\n5. `ApplyAndReturnValidatorSetUpdates` checks if there is a `ConsPubKeyRotationHistory` with `ConsPubKeyRotationHistory.RotatedHeight == ctx.BlockHeight()` and, if so, generates two `ValidatorUpdate`s, one to remove the old validator and one to create the new one\n\n ```go\n abci.ValidatorUpdate{\n PubKey: cmttypes.TM2PB.PubKey(OldConsPubKey),\n Power: 0,\n }\n\n abci.ValidatorUpdate{\n PubKey: cmttypes.TM2PB.PubKey(NewConsPubKey),\n Power: v.ConsensusPower(),\n }\n ```\n\n6. in the `previousVotes` iteration logic of `AllocateTokens`, match each `previousVote` using `OldConsPubKey` against `ConsPubKeyRotationHistory`, and replace the validator for token allocation\n7. 
Migrate `ValidatorSigningInfo` and `ValidatorMissedBlockBitArray` from `OldConsPubKey` to `NewConsPubKey`\n\n* Note: all of the above features shall be implemented in the `staking` module.\n\n## Status\n\nProposed\n\n## Consequences\n\n### Positive\n\n* Validators can immediately or periodically rotate their consensus key to have a better security policy\n* improved security against Long-Range attacks (https://nearprotocol.com/blog/long-range-attacks-and-a-new-fork-choice-rule), provided a validator throws away the old consensus key(s)\n\n### Negative\n\n* The slashing module needs more computation because it needs to look up the corresponding consensus key of validators for each height\n* frequent key rotations will make light client bisection less efficient\n\n### Neutral\n\n## References\n\n* on tendermint repo : https://github.com/tendermint/tendermint/issues/1136\n* on cosmos-sdk repo : https://github.com/cosmos/cosmos-sdk/issues/5231\n* about multiple consensus keys : https://github.com/tendermint/tendermint/issues/1758#issuecomment-545291698"
  },
  {
    "number": 17,
    "filename": "adr-017-historical-header-module.md",
    "title": "ADR 17: Historical Header Module",
    "content": "# ADR 17: Historical Header Module\n\n## Changelog\n\n* 26 November 2019: Start of first version\n* 2 December 2019: Final draft of first version\n\n## Context\n\nIn order for the Cosmos SDK to implement the [IBC specification](https://github.com/cosmos/ics), modules within the Cosmos SDK must have the ability to introspect recent consensus states (validator sets & commitment roots), as proofs of these values on other chains must be checked during the handshakes.\n\n## Decision\n\nThe application MUST store the most recent `n` headers in a persistent store. At first, this store MAY be the current Merklised store. 
A non-Merklised store MAY be used later as no proofs are necessary.\n\nThe application MUST store this information by storing new headers immediately when handling `abci.RequestBeginBlock`:\n\n```go\nfunc BeginBlock(ctx sdk.Context, keeper HistoricalHeaderKeeper, req abci.RequestBeginBlock) abci.ResponseBeginBlock {\n info := HistoricalInfo{\n Header: ctx.BlockHeader(),\n ValSet: keeper.StakingKeeper.GetAllValidators(ctx), // note that this must be stored in a canonical order\n }\n keeper.SetHistoricalInfo(ctx, ctx.BlockHeight(), info)\n n := keeper.GetParamRecentHeadersToStore()\n keeper.PruneHistoricalInfo(ctx, ctx.BlockHeight() - n)\n // continue handling request\n}\n```\n\nAlternatively, the application MAY store only the hash of the validator set.\n\nThe application MUST make these past `n` committed headers available for querying by Cosmos SDK modules through the `Keeper`'s `GetHistoricalInfo` function. This MAY be implemented in a new module, or it MAY also be integrated into an existing one (likely `x/staking` or `x/ibc`).\n\n`n` MAY be configured as a parameter store parameter, in which case it could be changed by `ParameterChangeProposal`s, although it will take some blocks for the stored information to catch up if `n` is increased.\n\n## Status\n\nProposed.\n\n## Consequences\n\nImplementation of this ADR will require changes to the Cosmos SDK. 
It will not require changes to Tendermint.\n\n### Positive\n\n* Easy retrieval of headers & state roots for recent past heights by modules anywhere in the Cosmos SDK.\n* No RPC calls to Tendermint required.\n* No ABCI alterations required.\n\n### Negative\n\n* Duplicates `n` headers data in Tendermint & the application (additional disk usage) - in the long term, an approach such as [this](https://github.com/tendermint/tendermint/issues/4210) might be preferable.\n\n### Neutral\n\n(none known)\n\n## References\n\n* [ICS 2: \"Consensus state introspection\"](https://github.com/cosmos/ibc/tree/master/spec/core/ics-002-client-semantics#consensus-state-introspection)" + }, + { + "number": 18, + "filename": "adr-018-extendable-voting-period.md", + "title": "ADR 18: Extendable Voting Periods", + "content": "# ADR 18: Extendable Voting Periods\n\n## Changelog\n\n* 1 January 2020: Start of first version\n\n## Context\n\nCurrently the voting period for all governance proposals is the same. However, this is suboptimal as all governance proposals do not require the same time period. For more non-contentious proposals, they can be dealt with more efficiently with a faster period, while more contentious or complex proposals may need a longer period for extended discussion/consideration.\n\n## Decision\n\nWe would like to design a mechanism for making the voting period of a governance proposal variable based on the demand of voters. 
We would like it to be based on the view of the governance participants, rather than just the proposer of a governance proposal (thus, allowing the proposer to select the voting period length is not sufficient).\n\nHowever, we would like to avoid the creation of an entire second voting process to determine the length of the voting period, as that just pushes the problem to determining the length of that first voting period.\n\nThus, we propose the following mechanism:\n\n### Params\n\n* The current gov param `VotingPeriod` is to be replaced by a `MinVotingPeriod` param. This is the default voting period that all governance proposal voting periods start with.\n* There is a new gov param called `MaxVotingPeriodExtension`.\n\n### Mechanism\n\nThere is a new `Msg` type called `MsgExtendVotingPeriod`, which can be sent by any staked account during a proposal's voting period. It allows the sender to unilaterally extend the length of the voting period by `MaxVotingPeriodExtension * sender's share of voting power`. Every address can only call `MsgExtendVotingPeriod` once per proposal.\n\nSo for example, if the `MaxVotingPeriodExtension` is set to 100 Days, then anyone with 1% of voting power can extend the voting period by 1 day. If 33% of voting power has sent the message, the voting period will be extended by 33 days. Thus, if absolutely everyone chooses to extend the voting period, the absolute maximum voting period will be `MinVotingPeriod + MaxVotingPeriodExtension`.\n\nThis system acts as a sort of distributed coordination, where individual stakers' choices to extend or not allow the system to gauge the contentiousness/complexity of the proposal. Because it is extremely unlikely that many stakers will choose to extend at the exact same time, stakers can view how long others have already extended and decide whether or not to extend further.\n\n### Dealing with Unbonding/Redelegation\n\nThere is one thing that needs to be addressed. 
How to deal with redelegation/unbonding during the voting period? If a staker with 5% of voting power calls `MsgExtendVotingPeriod` and then unbonds, does the voting period then decrease by 5 days again? This is not good, as it can give people a false sense of how long they have to make their decision. For this reason, we want to design it such that the voting period length can only be extended, not shortened. To do this, the current extension amount is based on the highest percentage that voted to extend at any time. This is best explained by example:\n\n1. Let's say 2 stakers of voting power 4% and 3% respectively vote to extend. The voting period will be extended by 7 days.\n2. Now the staker of 3% decides to unbond before the end of the voting period. The voting period extension remains 7 days.\n3. Now, let's say another staker of 2% voting power decides to extend the voting period. There is now 6% of active voting power choosing to extend. The voting period extension remains 7 days.\n4. If a fourth staker of 10% chooses to extend now, there is a total of 16% of active voting power wishing to extend. The voting period will be extended to 16 days.\n\n### Delegators\n\nJust like votes in the actual voting period, delegators automatically inherit the extension of their validators. If their validator chooses to extend, their voting power will be used in the validator's extension. However, the delegator is unable to override their validator and \"unextend\", as that would contradict the \"voting period length can only be ratcheted up\" principle described in the previous section. 
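\n\nThe \"can only be ratcheted up\" rule from the previous section can be sketched as a small, self-contained Go model (illustrative only; `extendState` and the basis-point bookkeeping are not actual SDK types):\n\n```go\npackage main\n\nimport \"fmt\"\n\n// extendState tracks the highest share of bonded voting power (in basis\n// points, 10000 = 100%) that has ever asked to extend, so the extension\n// can only ratchet up.\ntype extendState struct {\n\tactiveBps    int64 // share of currently bonded power that has extended\n\thighWaterBps int64 // highest share ever observed; never decreases\n}\n\nfunc (s *extendState) Extend(bps int64) {\n\ts.activeBps += bps\n\tif s.activeBps > s.highWaterBps {\n\t\ts.highWaterBps = s.activeBps\n\t}\n}\n\nfunc (s *extendState) Unbond(bps int64) {\n\ts.activeBps -= bps // the high-water mark is deliberately untouched\n}\n\n// ExtensionDays scales MaxVotingPeriodExtension (in days) by the high-water share.\nfunc (s *extendState) ExtensionDays(maxExtDays int64) int64 {\n\treturn maxExtDays * s.highWaterBps / 10000\n}\n\nfunc main() {\n\ts := &extendState{}\n\ts.Extend(400) // the 4% staker extends\n\ts.Extend(300) // the 3% staker extends\n\tfmt.Println(s.ExtensionDays(100)) // 7\n\ts.Unbond(300) // the 3% staker unbonds\n\ts.Extend(200) // a 2% staker extends; active share is 6%, watermark stays 7%\n\tfmt.Println(s.ExtensionDays(100)) // 7\n\ts.Extend(1000) // a 10% staker extends; active share is 16%\n\tfmt.Println(s.ExtensionDays(100)) // 16\n}\n```\n\n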
However, a delegator may choose to extend using their personal voting power, if their validator has not done so.\n\n## Status\n\nProposed\n\n## Consequences\n\n### Positive\n\n* More complex/contentious governance proposals will have more time to be properly digested and deliberated\n\n### Negative\n\n* Governance process becomes more complex and requires more understanding to interact with effectively\n* Can no longer predict when a governance proposal will end, nor assume the order in which governance proposals will end\n\n### Neutral\n\n* The minimum voting period can be made shorter\n\n## References\n\n* [Cosmos Forum post where idea first originated](https://forum.cosmos.network/t/proposal-draft-reduce-governance-voting-period-to-7-days/3032/9)"
  },
  {
    "number": 19,
    "filename": "adr-019-protobuf-state-encoding.md",
    "title": "ADR 019: Protocol Buffer State Encoding",
    "content": "# ADR 019: Protocol Buffer State Encoding\n\n## Changelog\n\n* 2020 Feb 15: Initial Draft\n* 2020 Feb 24: Updates to handle messages with interface fields\n* 2020 Apr 27: Convert usages of `oneof` for interfaces to `Any`\n* 2020 May 15: Describe `cosmos_proto` extensions and amino compatibility\n* 2020 Dec 4: Move and rename `MarshalAny` and `UnmarshalAny` into the `codec.Codec` interface.\n* 2021 Feb 24: Remove mentions of `HybridCodec`, which has been abandoned in [#6843](https://github.com/cosmos/cosmos-sdk/pull/6843).\n\n## Status\n\nAccepted\n\n## Context\n\nCurrently, the Cosmos SDK utilizes [go-amino](https://github.com/tendermint/go-amino/) for binary\nand JSON object encoding over the wire bringing parity between logical objects and persistence objects.\n\nFrom the Amino docs:\n\n> Amino is an object encoding specification. It is a subset of Proto3 with an extension for interface\n> support. 
See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more\n> information on Proto3, which Amino is largely compatible with (but not with Proto2).\n>\n> The goal of the Amino encoding protocol is to bring parity into logic objects and persistence objects.\n\nAmino also aims to have the following goals (not a complete list):\n\n* Binary bytes must be decodable with a schema.\n* Schema must be upgradeable.\n* The encoder and decoder logic must be reasonably simple.\n\nHowever, we believe that Amino does not fulfill these goals completely and does not fully meet the\nneeds of a truly flexible cross-language and multi-client compatible encoding protocol in the Cosmos SDK.\nNamely, Amino has proven to be a big pain-point in regards to supporting object serialization across\nclients written in various languages while providing virtually nothing in the way of true backwards\ncompatibility and upgradeability. Furthermore, through profiling and various benchmarks, Amino has\nbeen shown to be an extremely large performance bottleneck in the Cosmos SDK [1]. This is\nlargely reflected in the performance of simulations and application transaction throughput.\n\nThus, we need to adopt an encoding protocol that meets the following criteria for state serialization:\n\n* Language agnostic\n* Platform agnostic\n* Rich client support and thriving ecosystem\n* High performance\n* Minimal encoded message size\n* Codegen-based over reflection-based\n* Supports backward and forward compatibility\n\nNote, migrating away from Amino should be viewed as a two-pronged approach, state and client encoding.\nThis ADR focuses on state serialization in the Cosmos SDK state machine. 
A corresponding ADR will be\nmade to address client-side encoding.\n\n## Decision\n\nWe will adopt [Protocol Buffers](https://developers.google.com/protocol-buffers) for serializing\npersisted structured data in the Cosmos SDK while providing a clean mechanism and developer UX for\napplications wishing to continue to use Amino. We will provide this mechanism by updating modules to\naccept a codec interface, `Marshaler`, instead of a concrete Amino codec. Furthermore, the Cosmos SDK\nwill provide two concrete implementations of the `Marshaler` interface: `AminoCodec` and `ProtoCodec`.\n\n* `AminoCodec`: Uses Amino for both binary and JSON encoding.\n* `ProtoCodec`: Uses Protobuf for both binary and JSON encoding.\n\nModules will use whichever codec is instantiated in the app. By default, the Cosmos SDK's `simapp`\ninstantiates a `ProtoCodec` as the concrete implementation of `Marshaler`, inside the `MakeTestEncodingConfig`\nfunction. This can be easily overwritten by app developers if they so desire.\n\nThe ultimate goal will be to replace Amino JSON encoding with Protobuf encoding and thus have\nmodules accept and/or extend `ProtoCodec`. Until then, Amino JSON is still provided for legacy use-cases.\nA handful of places in the Cosmos SDK still have Amino JSON hardcoded, such as the Legacy API REST endpoints\nand the `x/params` store. They are planned to be converted to Protobuf in a gradual manner.\n\n### Module Codecs\n\nFor modules that do not require the ability to work with and serialize interfaces, the path to Protobuf\nmigration is pretty straightforward. These modules simply migrate any existing types that\nare encoded and persisted via their concrete Amino codec to Protobuf and have their keeper accept a\n`Marshaler` that will be a `ProtoCodec`. 
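\n\nAs a self-contained sketch of this pattern (JSON stands in for the real codecs here, and this `Marshaler` is a stripped-down stand-in for the SDK interface, not its actual definition):\n\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n)\n\n// Marshaler is a minimal stand-in for the codec interface: the keeper\n// depends only on this contract, not on a concrete codec.\ntype Marshaler interface {\n\tMustMarshal(v interface{}) []byte\n\tMustUnmarshal(bz []byte, v interface{})\n}\n\n// jsonCodec plays the role of a concrete codec (ProtoCodec or AminoCodec\n// in the SDK); JSON keeps the sketch runnable without protobuf codegen.\ntype jsonCodec struct{}\n\nfunc (jsonCodec) MustMarshal(v interface{}) []byte {\n\tbz, err := json.Marshal(v)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn bz\n}\n\nfunc (jsonCodec) MustUnmarshal(bz []byte, v interface{}) {\n\tif err := json.Unmarshal(bz, v); err != nil {\n\t\tpanic(err)\n\t}\n}\n\ntype Params struct {\n\tMaxValidators int `json:\"max_validators\"`\n}\n\n// Keeper receives the codec interface; swapping the concrete codec\n// requires no keeper changes.\ntype Keeper struct {\n\tcdc   Marshaler\n\tstore map[string][]byte // stand-in for a KVStore\n}\n\nfunc (k Keeper) SetParams(p Params) { k.store[\"params\"] = k.cdc.MustMarshal(p) }\n\nfunc (k Keeper) GetParams() (p Params) {\n\tk.cdc.MustUnmarshal(k.store[\"params\"], &p)\n\treturn\n}\n\nfunc main() {\n\tk := Keeper{cdc: jsonCodec{}, store: map[string][]byte{}}\n\tk.SetParams(Params{MaxValidators: 100})\n\tfmt.Println(k.GetParams().MaxValidators) // 100\n}\n```\n\n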
This migration is simple as things will just work as-is.\n\nNote, any business logic that needs to encode primitive types like `bool` or `int64` should use\n[gogoprotobuf](https://github.com/cosmos/gogoproto) Value types.\n\nExample:\n\n```go\n ts, err := gogotypes.TimestampProto(completionTime)\n if err != nil {\n // ...\n }\n\n bz := cdc.MustMarshal(ts)\n```\n\nHowever, modules can vary greatly in purpose and design and so we must support the ability for modules\nto be able to encode and work with interfaces (e.g. `Account` or `Content`). These modules\nmust define their own codec interface that extends `Marshaler`. These specific interfaces are unique\nto the module and will contain method contracts that know how to serialize the needed interfaces.\n\nExample:\n\n```go\n// x/auth/types/codec.go\n\ntype Codec interface {\n codec.Codec\n\n MarshalAccount(acc exported.Account) ([]byte, error)\n UnmarshalAccount(bz []byte) (exported.Account, error)\n\n MarshalAccountJSON(acc exported.Account) ([]byte, error)\n UnmarshalAccountJSON(bz []byte) (exported.Account, error)\n}\n```\n\n### Usage of `Any` to encode interfaces\n\nIn general, module-level .proto files should define messages which encode interfaces\nusing [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto).\nAfter [extensive discussion](https://github.com/cosmos/cosmos-sdk/issues/6030),\nthis was chosen as the preferred alternative to application-level `oneof`s\nas in our original protobuf design. The arguments in favor of `Any` can be\nsummarized as follows:\n\n* `Any` provides a simpler, more consistent client UX for dealing with\ninterfaces than app-level `oneof`s that will need to be coordinated more\ncarefully across applications. 
Creating a generic transaction\nsigning library using `oneof`s may be cumbersome and critical logic may need\nto be reimplemented for each chain\n* `Any` provides more resistance against human error than `oneof`\n* `Any` is generally simpler to implement for both modules and apps\n\nThe main counter-argument to using `Any` centers around its additional space\nand possibly performance overhead. The space overhead could be dealt with using\ncompression at the persistence layer in the future and the performance impact\nis likely to be small. Thus, not using `Any` is seen as a premature optimization,\nwith user experience as the higher-order concern.\n\nNote that, given the Cosmos SDK's decision to adopt the `Codec` interfaces described\nabove, apps can still choose to use `oneof` to encode state and transactions\nbut it is not the recommended approach. If apps do choose to use `oneof`s\ninstead of `Any` they will likely lose compatibility with client apps that\nsupport multiple chains. Thus developers should think carefully about whether\nthey care more about what is possibly a premature optimization or end-user\nand client developer UX.\n\n### Safe usage of `Any`\n\nBy default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types)\nuses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540)\nto decode values packed in `Any` into concrete\ngo types. 
This introduces a vulnerability where any malicious module\nin the dependency tree could register a type with the global protobuf registry\nand cause it to be loaded and unmarshaled by a transaction that referenced\nit in the `type_url` field.\n\nTo prevent this, we introduce a type registration mechanism for decoding `Any`\nvalues into concrete types through the `InterfaceRegistry` interface which\nbears some similarity to type registration with Amino:\n\n```go\ntype InterfaceRegistry interface {\n // RegisterInterface associates protoName as the public name for the\n // interface passed in as iface\n // Ex:\n // registry.RegisterInterface(\"cosmos_sdk.Msg\", (*sdk.Msg)(nil))\n RegisterInterface(protoName string, iface interface{})\n\n // RegisterImplementations registers impls as concrete implementations of\n // the interface iface\n // Ex:\n // registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{}, &MsgMultiSend{})\n RegisterImplementations(iface interface{}, impls ...proto.Message)\n\n}\n```\n\nIn addition to serving as a whitelist, `InterfaceRegistry` can also serve\nto communicate the list of concrete types that satisfy an interface to clients.\n\nIn .proto files:\n\n* fields which accept interfaces should be annotated with `cosmos_proto.accepts_interface`\nusing the same full-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`\n* interface implementations should be annotated with `cosmos_proto.implements_interface`\nusing the same full-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`\n\nIn the future, `protoName`, `cosmos_proto.accepts_interface`, `cosmos_proto.implements_interface`\nmay be used via code generation, reflection &/or static linting.\n\nThe same struct that implements `InterfaceRegistry` will also implement an\ninterface `InterfaceUnpacker` to be used for unpacking `Any`s:\n\n```go\ntype InterfaceUnpacker interface {\n // UnpackAny unpacks the value in any to the interface pointer 
passed in as\n // iface. Note that the type in any must have been registered with\n // RegisterImplementations as a concrete type for that interface\n // Ex:\n // var msg sdk.Msg\n // err := ctx.UnpackAny(any, &msg)\n // ...\n UnpackAny(any *Any, iface interface{}) error\n}\n```\n\nNote that `InterfaceRegistry` usage does not deviate from standard protobuf\nusage of `Any`; it just introduces a security and introspection layer for\ngolang usage.\n\n`InterfaceRegistry` will be a member of `ProtoCodec`\ndescribed above. In order for modules to register interface types, app modules\ncan optionally implement the following interface:\n\n```go\ntype InterfaceModule interface {\n RegisterInterfaceTypes(InterfaceRegistry)\n}\n```\n\nThe module manager will include a method to call `RegisterInterfaceTypes` on\nevery module that implements it in order to populate the `InterfaceRegistry`.\n\n### Using `Any` to encode state\n\nThe Cosmos SDK will provide support methods `MarshalInterface` and `UnmarshalInterface` to hide the complexity of wrapping interface types into `Any` and allow easy serialization.\n\n```go\nimport \"github.com/cosmos/cosmos-sdk/codec\"\n\n// note: eviexported.Evidence is an interface type\nfunc MarshalEvidence(cdc codec.BinaryCodec, e eviexported.Evidence) ([]byte, error) {\n\treturn cdc.MarshalInterface(e)\n}\n\nfunc UnmarshalEvidence(cdc codec.BinaryCodec, bz []byte) (eviexported.Evidence, error) {\n\tvar evi eviexported.Evidence\n\terr := cdc.UnmarshalInterface(&evi, bz)\n\treturn evi, err\n}\n```\n\n### Using `Any` in `sdk.Msg`s\n\nA similar concept is to be applied for messages that contain interface fields.\nFor example, we can define `MsgSubmitEvidence` as follows where `Evidence` is\nan interface:\n\n```protobuf\n// x/evidence/types/types.proto\n\nmessage MsgSubmitEvidence {\n bytes submitter = 1\n [\n (gogoproto.casttype) = \"github.com/cosmos/cosmos-sdk/types.AccAddress\"\n ];\n google.protobuf.Any evidence = 2;\n}\n```\n\nNote that in order to 
unpack the evidence from `Any` we do need a reference to\n`InterfaceRegistry`. In order to reference evidence in methods like\n`ValidateBasic` which shouldn't have to know about the `InterfaceRegistry`, we\nintroduce an `UnpackInterfaces` phase to deserialization which unpacks\ninterfaces before they're needed.\n\n### Unpacking Interfaces\n\nTo implement the `UnpackInterfaces` phase of deserialization which unpacks\ninterfaces wrapped in `Any` before they're needed, we create an interface\nthat `sdk.Msg`s and other types can implement:\n\n```go\ntype UnpackInterfacesMessage interface {\n UnpackInterfaces(InterfaceUnpacker) error\n}\n```\n\nWe also introduce a private `cachedValue interface{}` field onto the `Any`\nstruct itself with a public getter `GetCachedValue() interface{}`.\n\nThe `UnpackInterfaces` method is to be invoked during message deserialization right\nafter `Unmarshal` and any interface values packed in `Any`s will be decoded\nand stored in `cachedValue` for reference later.\n\nThen unpacked interface values can safely be used in any code afterwards\nwithout knowledge of the `InterfaceRegistry`\nand messages can introduce a simple getter to cast the cached value to the\ncorrect interface type.\n\nThis has the added benefit that unmarshaling of `Any` values only happens once\nduring initial deserialization rather than every time the value is read. 
Also,\nwhen `Any` values are first packed (for instance in a call to\n`NewMsgSubmitEvidence`), the original interface value is cached so that\nunmarshaling isn't needed to read it again.\n\n`MsgSubmitEvidence` could implement `UnpackInterfaces`, plus a convenience getter\n`GetEvidence` as follows:\n\n```go\nfunc (msg MsgSubmitEvidence) UnpackInterfaces(ctx sdk.InterfaceRegistry) error {\n var evi eviexported.Evidence\n return ctx.UnpackAny(msg.Evidence, &evi)\n}\n\nfunc (msg MsgSubmitEvidence) GetEvidence() eviexported.Evidence {\n return msg.Evidence.GetCachedValue().(eviexported.Evidence)\n}\n```\n\n### Amino Compatibility\n\nOur custom implementation of `Any` can be used transparently with Amino if used\nwith the proper codec instance. What this means is that interfaces packed within\n`Any`s will be amino marshaled like regular Amino interfaces (assuming they\nhave been registered properly with Amino).\n\nIn order for this functionality to work:\n\n* **all legacy code must use `*codec.LegacyAmino` instead of `*amino.Codec` which is\n now a wrapper which properly handles `Any`**\n* **all new code should use `Marshaler` which is compatible with both amino and\n protobuf**\n* Also, before v0.39, `codec.Codec` will be renamed to `codec.LegacyAmino`.\n\n### Why Wasn't X Chosen Instead\n\nFor a more complete comparison to alternative protocols, see [here](https://codeburst.io/json-vs-protocol-buffers-vs-flatbuffers-a4247f8bda6f).\n\n### Cap'n Proto\n\nWhile [Cap’n Proto](https://capnproto.org/) does seem like an advantageous alternative to Protobuf\ndue to its native support for interfaces/generics and built-in canonicalization, it lacks the\nrich client ecosystem of Protobuf and is a bit less mature.\n\n### FlatBuffers\n\n[FlatBuffers](https://google.github.io/flatbuffers/) is also a potentially viable alternative, with the\nprimary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary\nrepresentation before you 
can access data, often coupled with per-object memory allocation.\n\nHowever, it would require significant effort to research and fully understand the scope of the migration\nand path forward -- which isn't immediately clear. In addition, FlatBuffers isn't designed for\nuntrusted inputs.\n\n## Future Improvements & Roadmap\n\nIn the future we may consider a compression layer right above the persistence\nlayer which doesn't change tx or merkle tree hashes, but reduces the storage\noverhead of `Any`. In addition, we may adopt protobuf naming conventions which\nmake type URLs a bit more concise while remaining descriptive.\n\nAdditional code generation support around the usage of `Any` is something that\ncould also be explored in the future to make the UX for go developers more\nseamless.\n\n## Consequences\n\n### Positive\n\n* Significant performance gains.\n* Supports backward and forward type compatibility.\n* Better support for cross-language clients.\n\n### Negative\n\n* Learning curve required to understand and implement Protobuf messages.\n* Slightly larger message size due to use of `Any`, although this could be offset\n by a compression layer in the future.\n\n### Neutral\n\n## References\n\n1. https://github.com/cosmos/cosmos-sdk/issues/4977\n2. 
https://github.com/cosmos/cosmos-sdk/issues/5444" + }, + { + "number": 20, + "filename": "adr-020-protobuf-transaction-encoding.md", + "title": "ADR 020: Protocol Buffer Transaction Encoding", + "content": "# ADR 020: Protocol Buffer Transaction Encoding\n\n## Changelog\n\n* 2020 March 06: Initial Draft\n* 2020 March 12: API Updates\n* 2020 April 13: Added details on interface `oneof` handling\n* 2020 April 30: Switch to `Any`\n* 2020 May 14: Describe public key encoding\n* 2020 June 08: Store `TxBody` and `AuthInfo` as bytes in `SignDoc`; Document `TxRaw` as broadcast and storage type.\n* 2020 August 07: Use ADR 027 for serializing `SignDoc`.\n* 2020 August 19: Move sequence field from `SignDoc` to `SignerInfo`, as discussed in [#6966](https://github.com/cosmos/cosmos-sdk/issues/6966).\n* 2020 September 25: Remove `PublicKey` type in favor of `secp256k1.PubKey`, `ed25519.PubKey` and `multisig.LegacyAminoPubKey`.\n* 2020 October 15: Add `GetAccount` and `GetAccountWithHeight` methods to the `AccountRetriever` interface.\n* 2021 Feb 24: The Cosmos SDK does not use Tendermint's `PubKey` interface anymore, but its own `cryptotypes.PubKey`. Updates to reflect this.\n* 2021 May 3: Rename `clientCtx.JSONMarshaler` to `clientCtx.JSONCodec`.\n* 2021 June 10: Add `clientCtx.Codec: codec.Codec`.\n\n## Status\n\nAccepted\n\n## Context\n\nThis ADR is a continuation of the motivation, design, and context established in\n[ADR 019](./adr-019-protobuf-state-encoding.md), namely, we aim to design the\nProtocol Buffer migration path for the client-side of the Cosmos SDK.\n\nSpecifically, the client-side migration path primarily includes tx generation and\nsigning, message construction and routing, in addition to CLI & REST handlers and\nbusiness logic (i.e. queriers).\n\nWith this in mind, we will tackle the migration path via two main areas, txs and\nquerying. However, this ADR solely focuses on transactions. 
Querying should be\naddressed in a future ADR, but it should build off of these proposals.\n\nBased on detailed discussions ([\\#6030](https://github.com/cosmos/cosmos-sdk/issues/6030)\nand [\\#6078](https://github.com/cosmos/cosmos-sdk/issues/6078)), the original\ndesign for transactions was changed substantially from an `oneof` /JSON-signing\napproach to the approach described below.\n\n## Decision\n\n### Transactions\n\nSince interface values are encoded with `google.protobuf.Any` in state (see [ADR 019](adr-019-protobuf-state-encoding.md)),\n`sdk.Msg`s are encoded with `Any` in transactions.\n\nOne of the main goals of using `Any` to encode interface values is to have a\ncore set of types which is reused by apps so that\nclients can safely be compatible with as many chains as possible.\n\nIt is one of the goals of this specification to provide a flexible cross-chain transaction\nformat that can serve a wide variety of use cases without breaking the client\ncompatibility.\n\nIn order to facilitate signing, transactions are separated into `TxBody`,\nwhich will be reused by `SignDoc` below, and `signatures`:\n\n```protobuf\n// types/types.proto\npackage cosmos_sdk.v1;\n\nmessage Tx {\n TxBody body = 1;\n AuthInfo auth_info = 2;\n // A list of signatures that matches the length and order of AuthInfo's signer_infos to\n // allow connecting signature meta information like public key and signing mode by position.\n repeated bytes signatures = 3;\n}\n\n// A variant of Tx that pins the signer's exact binary representation of body and\n// auth_info. This is used for signing, broadcasting and verification. 
The binary\n// `serialize(tx: TxRaw)` is stored in Tendermint and the hash `sha256(serialize(tx: TxRaw))`\n// becomes the \"txhash\", commonly used as the transaction ID.\nmessage TxRaw {\n // A protobuf serialization of a TxBody that matches the representation in SignDoc.\n bytes body = 1;\n // A protobuf serialization of an AuthInfo that matches the representation in SignDoc.\n bytes auth_info = 2;\n // A list of signatures that matches the length and order of AuthInfo's signer_infos to\n // allow connecting signature meta information like public key and signing mode by position.\n repeated bytes signatures = 3;\n}\n\nmessage TxBody {\n // A list of messages to be executed. The required signers of those messages define\n // the number and order of elements in AuthInfo's signer_infos and Tx's signatures.\n // Each required signer address is added to the list only the first time it occurs.\n //\n // By convention, the first required signer (usually from the first message) is referred\n // to as the primary signer and pays the fee for the whole transaction.\n repeated google.protobuf.Any messages = 1;\n string memo = 2;\n int64 timeout_height = 3;\n repeated google.protobuf.Any extension_options = 1023;\n}\n\nmessage AuthInfo {\n // This list defines the signing modes for the required signers. The number\n // and order of elements must match the required signers from TxBody's messages.\n // The first element is the primary signer and the one which pays the fee.\n repeated SignerInfo signer_infos = 1;\n // The fee can be calculated based on the cost of evaluating the body and doing signature verification of the signers. This can be estimated via simulation.\n Fee fee = 2;\n}\n\nmessage SignerInfo {\n // The public key is optional for accounts that already exist in state. 
If unset, the\n // verifier can use the required signer address for this position and look up the public key.\n google.protobuf.Any public_key = 1;\n // ModeInfo describes the signing mode of the signer and is a nested\n // structure to support nested multisig pubkeys\n ModeInfo mode_info = 2;\n // sequence is the sequence of the account, which describes the\n // number of committed transactions signed by a given address. It is used to prevent\n // replay attacks.\n uint64 sequence = 3;\n}\n\nmessage ModeInfo {\n oneof sum {\n Single single = 1;\n Multi multi = 2;\n }\n\n // Single is the mode info for a single signer. It is structured as a message\n // to allow for additional fields such as locale for SIGN_MODE_TEXTUAL in the future\n message Single {\n SignMode mode = 1;\n }\n\n // Multi is the mode info for a multisig public key\n message Multi {\n // bitarray specifies which keys within the multisig are signing\n CompactBitArray bitarray = 1;\n // mode_infos is the corresponding modes of the signers of the multisig\n // which could include nested multisig public keys\n repeated ModeInfo mode_infos = 2;\n }\n}\n\nenum SignMode {\n SIGN_MODE_UNSPECIFIED = 0;\n\n SIGN_MODE_DIRECT = 1;\n\n SIGN_MODE_TEXTUAL = 2;\n\n SIGN_MODE_LEGACY_AMINO_JSON = 127;\n}\n```\n\nAs will be discussed below, in order to include as much of the `Tx` as possible\nin the `SignDoc`, `SignerInfo` is separated from signatures so that only the\nraw signatures themselves live outside of what is signed over.\n\nBecause we are aiming for a flexible, extensible cross-chain transaction\nformat, new transaction processing options should be added to `TxBody` as soon\nas those use cases are discovered, even if they can't be implemented yet.\n\nBecause there is coordination overhead in this, `TxBody` includes an\n`extension_options` field which can be used for any transaction processing\noptions that are not already covered. 
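\n\nTo illustrate the shape of an extension option (a hedged sketch: `anyMsg` is a minimal stand-in for `google.protobuf.Any`, and the type URL below is hypothetical):\n\n```go\npackage main\n\nimport \"fmt\"\n\n// anyMsg is a minimal stand-in for google.protobuf.Any: a type URL plus\n// opaque bytes. Real code would use the generated Any type and proto\n// marshaling.\ntype anyMsg struct {\n\tTypeURL string\n\tValue   []byte\n}\n\n// txBody models only the fields relevant here.\ntype txBody struct {\n\tMemo             string\n\tExtensionOptions []anyMsg // field 1023 in the real TxBody\n}\n\nfunc main() {\n\t// A hypothetical chain-specific processing option packed as an \"Any\".\n\topt := anyMsg{TypeURL: \"/example.v1.HypotheticalOption\", Value: []byte{0x08, 0x01}}\n\tbody := txBody{Memo: \"demo\", ExtensionOptions: []anyMsg{opt}}\n\tfmt.Println(len(body.ExtensionOptions)) // 1\n}\n```\n\n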
App developers should, nevertheless,\nattempt to upstream important improvements to `Tx`.\n\n### Signing\n\nAll of the signing modes below aim to provide the following guarantees:\n\n* **No Malleability**: `TxBody` and `AuthInfo` cannot change once the transaction\n is signed\n* **Predictable Gas**: if I am signing a transaction where I am paying a fee,\n the final gas is fully dependent on what I am signing\n\nThese guarantees give the maximum amount of confidence to message signers that\nmanipulation of `Tx`s by intermediaries can't result in any meaningful changes.\n\n#### `SIGN_MODE_DIRECT`\n\nThe \"direct\" signing behavior is to sign the raw `TxBody` bytes as broadcast over\nthe wire. This has the advantages of:\n\n* requiring the minimum additional client capabilities beyond a standard protocol\n buffers implementation\n* leaving effectively zero holes for transaction malleability (i.e. there are no\n subtle differences between the signing and encoding formats which could\n potentially be exploited by an attacker)\n\nSignatures are structured using the `SignDoc` below which reuses the serialization of\n`TxBody` and `AuthInfo` and only adds the fields which are needed for signatures:\n\n```protobuf\n// types/types.proto\nmessage SignDoc {\n // A protobuf serialization of a TxBody that matches the representation in TxRaw.\n bytes body = 1;\n // A protobuf serialization of an AuthInfo that matches the representation in TxRaw.\n bytes auth_info = 2;\n string chain_id = 3;\n uint64 account_number = 4;\n}\n```\n\nIn order to sign in the default mode, clients take the following steps:\n\n1. Serialize `TxBody` and `AuthInfo` using any valid protobuf implementation.\n2. Create a `SignDoc` and serialize it using [ADR 027](./adr-027-deterministic-protobuf-serialization.md).\n3. Sign the encoded `SignDoc` bytes.\n4. 
Build a `TxRaw` and serialize it for broadcasting.\n\nSignature verification is based on comparing the raw `TxBody` and `AuthInfo`\nbytes encoded in `TxRaw`, not on any [\"canonicalization\"](https://github.com/regen-network/canonical-proto3)\nalgorithm, which would create added complexity for clients and prevent\nsome forms of upgradeability (to be addressed later in this document).\n\nSignature verifiers do:\n\n1. Deserialize a `TxRaw` and pull out `body` and `auth_info`.\n2. Create a list of required signer addresses from the messages.\n3. For each required signer:\n * Pull account number and sequence from the state.\n * Obtain the public key either from state or `AuthInfo`'s `signer_infos`.\n * Create a `SignDoc` and serialize it using [ADR 027](./adr-027-deterministic-protobuf-serialization.md).\n * Verify the signature at the same list position against the serialized `SignDoc`.\n\n#### `SIGN_MODE_LEGACY_AMINO`\n\nIn order to support legacy wallets and exchanges, Amino JSON will be temporarily\nsupported for transaction signing. Once wallets and exchanges have had a\nchance to upgrade to protobuf-based signing, this option will be disabled. In\nthe meantime, it is foreseen that disabling the current Amino signing would cause\ntoo much breakage to be feasible. Note that this is mainly a requirement of the\nCosmos Hub and other chains may choose to disable Amino signing immediately.\n\nLegacy clients will be able to sign a transaction using the current Amino\nJSON format and have it encoded to protobuf using the REST `/tx/encode`\nendpoint before broadcasting.\n\n#### `SIGN_MODE_TEXTUAL`\n\nAs was discussed extensively in [\\#6078](https://github.com/cosmos/cosmos-sdk/issues/6078),\nthere is a desire for a human-readable signing encoding, especially for hardware\nwallets like the [Ledger](https://www.ledger.com) which display\ntransaction contents to users before signing. 
JSON was an attempt at this but\nfalls short of the ideal.\n\n`SIGN_MODE_TEXTUAL` is intended as a placeholder for a human-readable\nencoding which will replace Amino JSON. This new encoding should be even more\nfocused on readability than JSON, possibly based on formatting strings like\n[MessageFormat](http://userguide.icu-project.org/formatparse/messages).\n\nIn order to ensure that the new human-readable format does not suffer from\ntransaction malleability issues, `SIGN_MODE_TEXTUAL`\nrequires that the _human-readable bytes are concatenated with the raw `SignDoc`_\nto generate sign bytes.\n\nMultiple human-readable formats (maybe even localized messages) may be supported\nby `SIGN_MODE_TEXTUAL` when it is implemented.\n\n### Unknown Field Filtering\n\nUnknown fields in protobuf messages should generally be rejected by the transaction\nprocessors because:\n\n* important data may be present in the unknown fields, that if ignored, will\n cause unexpected behavior for clients\n* they present a malleability vulnerability where attackers can bloat tx size\n by adding random uninterpreted data to unsigned content (i.e. the master `Tx`,\n not `TxBody`)\n\nThere are also scenarios where we may choose to safely ignore unknown fields\n(https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-624400188) to\nprovide graceful forwards compatibility with newer clients.\n\nWe propose that field numbers with bit 11 set (for most use cases this is\nthe range of 1024-2047) be considered non-critical fields that can safely be\nignored if unknown.\n\nTo handle this we will need an unknown field filter that:\n\n* always rejects unknown fields in unsigned content (i.e. 
top-level `Tx` and\n unsigned parts of `AuthInfo` if present based on the signing mode)\n* rejects unknown fields in all messages (including nested `Any`s) other than\n fields with bit 11 set\n\nThis will likely need to be a custom protobuf parser pass that takes message bytes\nand `FileDescriptor`s and returns a boolean result.\n\n### Public Key Encoding\n\nPublic keys in the Cosmos SDK implement the `cryptotypes.PubKey` interface.\nWe propose to use `Any` for protobuf encoding as we are doing with other interfaces (for example, in `BaseAccount.PubKey` and `SignerInfo.PublicKey`).\nThe following public keys are implemented: secp256k1, secp256r1, ed25519 and legacy-multisignature.\n\nEx:\n\n```protobuf\nmessage PubKey {\n bytes key = 1;\n}\n```\n\n`multisig.LegacyAminoPubKey` has an array of `Any`'s member to support any\nprotobuf public key type.\n\nApps should only attempt to handle a registered set of public keys that they\nhave tested. The provided signature verification ante handler decorators will\nenforce this.\n\n### CLI & REST\n\nCurrently, the REST and CLI handlers encode and decode types and txs via Amino\nJSON encoding using a concrete Amino codec. 
Being that some of the types dealt with\nin the client can be interfaces, similar to how we described in [ADR 019](./adr-019-protobuf-state-encoding.md),\nthe client logic will now need to take a codec interface that knows not only how\nto handle all the types, but also knows how to generate transactions, signatures,\nand messages.\n\n```go\ntype AccountRetriever interface {\n GetAccount(clientCtx Context, addr sdk.AccAddress) (client.Account, error)\n GetAccountWithHeight(clientCtx Context, addr sdk.AccAddress) (client.Account, int64, error)\n EnsureExists(clientCtx client.Context, addr sdk.AccAddress) error\n GetAccountNumberSequence(clientCtx client.Context, addr sdk.AccAddress) (uint64, uint64, error)\n}\n\ntype Generator interface {\n NewTx() TxBuilder\n NewFee() ClientFee\n NewSignature() ClientSignature\n MarshalTx(tx types.Tx) ([]byte, error)\n}\n\ntype TxBuilder interface {\n GetTx() sdk.Tx\n\n SetMsgs(...sdk.Msg) error\n GetSignatures() []sdk.Signature\n SetSignatures(...sdk.Signature)\n GetFee() sdk.Fee\n SetFee(sdk.Fee)\n GetMemo() string\n SetMemo(string)\n}\n```\n\nWe then update `Context` to have new fields: `Codec`, `TxGenerator`,\nand `AccountRetriever`, and we update `AppModuleBasic.GetTxCmd` to take\na `Context` which should have all of these fields pre-populated.\n\nEach client method should then use one of the `Init` methods to re-initialize\nthe pre-populated `Context`. `tx.GenerateOrBroadcastTx` can be used to\ngenerate or broadcast a transaction. 
For example:\n\n```go\nimport \"github.com/spf13/cobra\"\nimport \"github.com/cosmos/cosmos-sdk/client\"\nimport \"github.com/cosmos/cosmos-sdk/client/tx\"\n\nfunc NewCmdDoSomething(clientCtx client.Context) *cobra.Command {\n\treturn &cobra.Command{\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tclientCtx := clientCtx.InitWithInput(cmd.InOrStdin())\n\t\t\tmsg := NewSomeMsg{...}\n\t\t\treturn tx.GenerateOrBroadcastTx(clientCtx, msg)\n\t\t},\n\t}\n}\n```\n\n## Future Improvements\n\n### `SIGN_MODE_TEXTUAL` specification\n\nA concrete specification and implementation of `SIGN_MODE_TEXTUAL` is intended\nas a near-term future improvement so that the ledger app and other wallets\ncan gracefully transition away from Amino JSON.\n\n### `SIGN_MODE_DIRECT_AUX`\n\n(\*Documented as option (3) in https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933)\n\nWe could add a mode `SIGN_MODE_DIRECT_AUX`\nto support scenarios where multiple signatures\nare being gathered into a single transaction but the message composer does not\nyet know which signatures will be included in the final transaction. For instance,\nI may have a 3/5 multisig wallet and want to send a `TxBody` to all 5\nsigners to see who signs first. As soon as I have 3 signatures then I will go\nahead and build the full transaction.\n\nWith `SIGN_MODE_DIRECT`, each signer needs\nto sign the full `AuthInfo` which includes the full list of all signers and\ntheir signing modes, making the above scenario very hard.\n\n`SIGN_MODE_DIRECT_AUX` would allow \"auxiliary\" signers to create their signature\nusing only `TxBody` and their own `PublicKey`. This allows the full list of\nsigners in `AuthInfo` to be delayed until signatures have been collected.\n\nAn \"auxiliary\" signer is any signer besides the primary signer who is paying\nthe fee. 
For the primary signer, the full `AuthInfo` is actually needed to calculate gas and fees\nbecause that is dependent on how many signers and which key types and signing\nmodes they are using. Auxiliary signers, however, do not need to worry about\nfees or gas and thus can just sign `TxBody`.\n\nTo generate a signature in `SIGN_MODE_DIRECT_AUX`, these steps would be followed:\n\n1. Encode `SignDocAux` (with the same requirement that fields must be serialized\n in order):\n\n ```protobuf\n // types/types.proto\n message SignDocAux {\n bytes body_bytes = 1;\n // PublicKey is included in SignDocAux:\n // 1. as a special case for multisig public keys. For multisig public keys,\n // the signer should use the top-level multisig public key they are signing\n // against, not their own public key. This is to prevent a form\n // of malleability where a signature could be taken out of context of the\n // multisig key that was intended to be signed for\n // 2. to guard against a scenario where configuration information is encoded\n // in public keys (it has been proposed) such that two keys can generate\n // the same signature but have different security properties\n //\n // By including it here, the composer of AuthInfo cannot reference\n // a public key variant the signer did not intend to use\n PublicKey public_key = 2;\n string chain_id = 3;\n uint64 account_number = 4;\n }\n ```\n\n2. Sign the encoded `SignDocAux` bytes\n3. Send their signature and `SignerInfo` to the primary signer who will then\n sign and broadcast the final transaction (with `SIGN_MODE_DIRECT` and `AuthInfo`\n added) once enough signatures have been collected\n\n### `SIGN_MODE_DIRECT_RELAXED`\n\n(_Documented as option (1)(a) in https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933_)\n\nThis is a variation of `SIGN_MODE_DIRECT` where multiple signers wouldn't need to\ncoordinate public keys and signing modes in advance. 
It would involve an alternate\n`SignDoc` similar to `SignDocAux` above with fee. This could be added in the future\nif client developers found the burden of collecting public keys and modes in advance\ntoo burdensome.\n\n## Consequences\n\n### Positive\n\n* Significant performance gains.\n* Supports backward and forward type compatibility.\n* Better support for cross-language clients.\n* Multiple signing modes allow for greater protocol evolution\n\n### Negative\n\n* `google.protobuf.Any` type URLs increase transaction size although the effect\n may be negligible or compression may be able to mitigate it.\n\n### Neutral\n\n## References" + }, + { + "number": 21, + "filename": "adr-021-protobuf-query-encoding.md", + "title": "ADR 021: Protocol Buffer Query Encoding", + "content": "# ADR 021: Protocol Buffer Query Encoding\n\n## Changelog\n\n* 2020 March 27: Initial Draft\n\n## Status\n\nAccepted\n\n## Context\n\nThis ADR is a continuation of the motivation, design, and context established in\n[ADR 019](./adr-019-protobuf-state-encoding.md) and\n[ADR 020](./adr-020-protobuf-transaction-encoding.md), namely, we aim to design the\nProtocol Buffer migration path for the client-side of the Cosmos SDK.\n\nThis ADR continues from [ADR 020](./adr-020-protobuf-transaction-encoding.md)\nto specify the encoding of queries.\n\n## Decision\n\n### Custom Query Definition\n\nModules define custom queries through a protocol buffers `service` definition.\nThese `service` definitions are generally associated with and used by the\nGRPC protocol. However, the protocol buffers specification indicates that\nthey can be used more generically by any request/response protocol that uses\nprotocol buffer encoding. 
Thus, we can use `service` definitions for specifying\ncustom ABCI queries and even reuse a substantial amount of the GRPC infrastructure.\n\nEach module with custom queries should define a service canonically named `Query`:\n\n```protobuf\n// x/bank/types/types.proto\n\nservice Query {\n rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) { }\n rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) { }\n}\n```\n\n#### Handling of Interface Types\n\nModules that use interface types and need true polymorphism generally force a\n`oneof` up to the app-level that provides the set of concrete implementations of\nthat interface that the app supports. While apps are welcome to do the same for\nqueries and implement an app-level query service, it is recommended that modules\nprovide query methods that expose these interfaces via `google.protobuf.Any`.\nThere is a concern on the transaction level that the overhead of `Any` is too\nhigh to justify its usage. 
However for queries this is not a concern, and\nproviding generic module-level queries that use `Any` does not preclude apps\nfrom also providing app-level queries that return using the app-level `oneof`s.\n\nA hypothetical example for the `gov` module would look something like:\n\n```protobuf\n// x/gov/types/types.proto\n\nimport \"google/protobuf/any.proto\";\n\nservice Query {\n rpc GetProposal(GetProposalParams) returns (AnyProposal) { }\n}\n\nmessage AnyProposal {\n ProposalBase base = 1;\n google.protobuf.Any content = 2;\n}\n```\n\n### Custom Query Implementation\n\nIn order to implement the query service, we can reuse the existing [gogo protobuf](https://github.com/cosmos/gogoproto)\ngrpc plugin, which for a service named `Query` generates an interface named\n`QueryServer` as below:\n\n```go\ntype QueryServer interface {\n\tQueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error)\n\tQueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error)\n}\n```\n\nThe custom queries for our module are implemented by implementing this interface.\n\nThe first parameter in this generated interface is a generic `context.Context`,\nwhereas querier methods generally need an instance of `sdk.Context` to read\nfrom the store. 
Since arbitrary values can be attached to `context.Context`\nusing the `WithValue` and `Value` methods, the Cosmos SDK should provide a function\n`sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided\n`context.Context`.\n\nAn example implementation of `QueryBalance` for the bank module as above would\nlook something like:\n\n```go\ntype Querier struct {\n\tKeeper\n}\n\nfunc (q Querier) QueryBalance(ctx context.Context, params *types.QueryBalanceParams) (*sdk.Coin, error) {\n\tbalance := q.GetBalance(sdk.UnwrapSDKContext(ctx), params.Address, params.Denom)\n\treturn &balance, nil\n}\n```\n\n### Custom Query Registration and Routing\n\nQuery server implementations as above would be registered with `AppModule`s using\na new method `RegisterQueryService(grpc.Server)` which could be implemented simply\nas below:\n\n```go\n// x/bank/module.go\nfunc (am AppModule) RegisterQueryService(server grpc.Server) {\n\ttypes.RegisterQueryServer(server, keeper.Querier{am.keeper})\n}\n```\n\nUnderneath the hood, a new method `RegisterService(sd *grpc.ServiceDesc, handler interface{})`\nwill be added to the existing `baseapp.QueryRouter` to add the queries to the custom\nquery routing table (with the routing method being described below).\nThe signature for this method matches the existing\n`RegisterServer` method on the GRPC `Server` type where `handler` is the custom\nquery server implementation described above.\n\nGRPC-like requests are routed by the service name (ex. `cosmos_sdk.x.bank.v1.Query`)\nand method name (ex. `QueryBalance`) combined with `/`s to form a full\nmethod name (ex. `/cosmos_sdk.x.bank.v1.Query/QueryBalance`). This gets translated\ninto an ABCI query as `custom/cosmos_sdk.x.bank.v1.Query/QueryBalance`. 
Service handlers\nregistered with `QueryRouter.RegisterService` will be routed this way.\n\nBeyond the method name, GRPC requests carry a protobuf encoded payload, which maps naturally\nto `RequestQuery.Data`, and receive a protobuf encoded response or error. Thus\nthere is a quite natural mapping of GRPC-like rpc methods to the existing\n`sdk.Query` and `QueryRouter` infrastructure.\n\nThis basic specification allows us to reuse protocol buffer `service` definitions\nfor ABCI custom queries, substantially reducing the need for manual decoding and\nencoding in query methods.\n\n### GRPC Protocol Support\n\nIn addition to providing an ABCI query pathway, we can easily provide a GRPC\nproxy server that routes requests in the GRPC protocol to ABCI query requests\nunder the hood. In this way, clients could use their host languages' existing\nGRPC implementations to make direct queries against Cosmos SDK apps using\nthese `service` definitions. In order for this server to work, the `QueryRouter`\non `BaseApp` will need to expose the service handlers registered with\n`QueryRouter.RegisterService` to the proxy server implementation. Nodes could\nlaunch the proxy server on a separate port in the same process as the ABCI app\nwith a command-line flag.\n\n### REST Queries and Swagger Generation\n\n[grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) is a project that\ntranslates REST calls into GRPC calls using special annotations on service\nmethods. 
Modules that want to expose REST queries should add `google.api.http`\nannotations to their `rpc` methods as in this example below.\n\n```protobuf\n// x/bank/types/types.proto\n\nservice Query {\n rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) {\n option (google.api.http) = {\n get: \"/x/bank/v1/balance/{address}/{denom}\"\n };\n }\n rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) {\n option (google.api.http) = {\n get: \"/x/bank/v1/balances/{address}\"\n };\n }\n}\n```\n\ngrpc-gateway will work directly against the GRPC proxy described above which will\ntranslate requests to ABCI queries under the hood. grpc-gateway can also\ngenerate Swagger definitions automatically.\n\nIn the current implementation of REST queries, each module needs to implement\nREST queries manually in addition to ABCI querier methods. Using the grpc-gateway\napproach, there will be no need to generate separate REST query handlers, just\nquery servers as described above as grpc-gateway handles the translation of protobuf\nto REST as well as Swagger definitions.\n\nThe Cosmos SDK should provide CLI commands for apps to start GRPC gateway either in\na separate process or the same process as the ABCI app, as well as provide a\ncommand for generating grpc-gateway proxy `.proto` files and the `swagger.json`\nfile.\n\n### Client Usage\n\nThe gogo protobuf grpc plugin generates client interfaces in addition to server\ninterfaces. 
For the `Query` service defined above we would get a `QueryClient`\ninterface like:\n\n```go\ntype QueryClient interface {\n\tQueryBalance(ctx context.Context, in *QueryBalanceParams, opts ...grpc.CallOption) (*types.Coin, error)\n\tQueryAllBalances(ctx context.Context, in *QueryAllBalancesParams, opts ...grpc.CallOption) (*QueryAllBalancesResponse, error)\n}\n```\n\nVia a small patch to gogo protobuf ([gogo/protobuf#675](https://github.com/gogo/protobuf/pull/675))\nwe have tweaked the grpc codegen to use an interface rather than a concrete type\nfor the generated client struct. This allows us to also reuse the GRPC infrastructure\nfor ABCI client queries.\n\n1Context`will receive a new method`QueryConn`that returns a`ClientConn`\nthat routes calls to ABCI queries\n\nClients (such as CLI methods) will then be able to call query methods like this:\n\n```go\nclientCtx := client.NewContext()\nqueryClient := types.NewQueryClient(clientCtx.QueryConn())\nparams := &types.QueryBalanceParams{addr, denom}\nresult, err := queryClient.QueryBalance(gocontext.Background(), params)\n```\n\n### Testing\n\nTests would be able to create a query client directly from keeper and `sdk.Context`\nreferences using a `QueryServerTestHelper` as below:\n\n```go\nqueryHelper := baseapp.NewQueryServerTestHelper(ctx)\ntypes.RegisterQueryServer(queryHelper, keeper.Querier{app.BankKeeper})\nqueryClient := types.NewQueryClient(queryHelper)\n```\n\n## Future Improvements\n\n## Consequences\n\n### Positive\n\n* greatly simplified querier implementation (no manual encoding/decoding)\n* easy query client generation (can use existing grpc and swagger tools)\n* no need for REST query implementations\n* type safe query methods (generated via grpc plugin)\n* going forward, there will be less breakage of query methods because of the\nbackwards compatibility guarantees provided by buf\n\n### Negative\n\n* all clients using the existing ABCI/REST queries will need to be refactored\nfor both the new GRPC/REST 
query paths as well as protobuf/proto-json encoded\ndata, but this is more or less unavoidable in the protobuf refactoring\n\n### Neutral\n\n## References" + }, + { + "number": 22, + "filename": "adr-022-custom-panic-handling.md", + "title": "ADR 022: Custom BaseApp panic handling", + "content": "# ADR 022: Custom BaseApp panic handling\n\n## Changelog\n\n* 2020 Apr 24: Initial Draft\n* 2021 Sep 14: Superseded by ADR-045\n\n## Status\n\nSUPERSEDED by ADR-045\n\n## Context\n\nThe current implementation of BaseApp does not allow developers to write custom error handlers during panic recovery\n[runTx()](https://github.com/cosmos/cosmos-sdk/blob/bad4ca75f58b182f600396ca350ad844c18fc80b/baseapp/baseapp.go#L539)\nmethod. We think that this method can be more flexible and can give Cosmos SDK users more options for customizations without\nthe need to rewrite whole BaseApp. Also there's one special case for `sdk.ErrorOutOfGas` error handling, that case\nmight be handled in a \"standard\" way (middleware) alongside the others.\n\nWe propose middleware-solution, which could help developers implement the following cases:\n\n* add external logging (let's say sending reports to external services like [Sentry](https://sentry.io));\n* call panic for specific error cases;\n\nIt will also make `OutOfGas` case and `default` case one of the middlewares.\n`Default` case wraps recovery object to an error and logs it ([example middleware implementation](#recovery-middleware)).\n\nOur project has a sidecar service running alongside the blockchain node (smart contracts virtual machine). It is\nessential that node <-> sidecar connectivity stays stable for TXs processing. So when the communication breaks we need\nto crash the node and reboot it once the problem is solved. That behaviour makes the node's state machine execution\ndeterministic. 
As all keeper panics are caught by runTx's `defer()` handler, we have to adjust the BaseApp code\nin order to customize it.\n\n## Decision\n\n### Design\n\n#### Overview\n\nInstead of hardcoding custom error handling into BaseApp we suggest using a set of middlewares which can be customized\nexternally and will allow developers to use as many custom error handlers as they want. Implementation with tests\ncan be found [here](https://github.com/cosmos/cosmos-sdk/pull/6053).\n\n#### Implementation details\n\n##### Recovery handler\n\nA new `RecoveryHandler` type is added. The `recoveryObj` input argument is an object returned by the standard Go function\n`recover()` from the `builtin` package.\n\n```go\ntype RecoveryHandler func(recoveryObj interface{}) error\n```\n\nThe handler should type assert (or use other methods) to determine whether the object should be handled.\n`nil` should be returned if the input object can't be handled by that `RecoveryHandler` (it is not the handler's target type).\nA non-`nil` error should be returned if the input object was handled and the middleware chain execution should be stopped.\n\nAn example:\n\n```go\nfunc exampleErrHandler(recoveryObj interface{}) error {\n err, ok := recoveryObj.(error)\n if !ok { return nil }\n\n if someSpecificError.Is(err) {\n panic(customPanicMsg)\n } else {\n return nil\n }\n}\n```\n\nThis example breaks the application execution, but it also might enrich the error's context like the `OutOfGas` handler.\n\n##### Recovery middleware\n\nWe also add a middleware type (decorator). That function type wraps `RecoveryHandler` and returns the next middleware in the\nexecution chain and the handler's `error`. 
The type is used to separate actual `recovery()` object handling from middleware\nchain processing.\n\n```go\ntype recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)\n\nfunc newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {\n return func(recoveryObj interface{}) (recoveryMiddleware, error) {\n if err := handler(recoveryObj); err != nil {\n return nil, err\n }\n return next, nil\n }\n}\n```\n\nThe function receives a `recoveryObj` object and returns:\n\n* (next `recoveryMiddleware`, `nil`) if the object wasn't handled (not a target type) by `RecoveryHandler`;\n* (`nil`, non-nil `error`) if the input object was handled and other middlewares in the chain should not be executed;\n* (`nil`, `nil`) in case of invalid behavior. Panic recovery might not have been properly handled;\nthis can be avoided by always using a `default` as the rightmost middleware in the chain (it always returns an `error`);\n\n`OutOfGas` middleware example:\n\n```go\nfunc newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) recoveryMiddleware {\n handler := func(recoveryObj interface{}) error {\n err, ok := recoveryObj.(sdk.ErrorOutOfGas)\n if !ok { return nil }\n\n return errorsmod.Wrap(\n sdkerrors.ErrOutOfGas, fmt.Sprintf(\n \"out of gas in location: %v; gasWanted: %d, gasUsed: %d\", err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(),\n ),\n )\n }\n\n return newRecoveryMiddleware(handler, next)\n}\n```\n\n`Default` middleware example:\n\n```go\nfunc newDefaultRecoveryMiddleware() recoveryMiddleware {\n handler := func(recoveryObj interface{}) error {\n return errorsmod.Wrap(\n sdkerrors.ErrPanic, fmt.Sprintf(\"recovered: %v\\nstack:\\n%v\", recoveryObj, string(debug.Stack())),\n )\n }\n\n return newRecoveryMiddleware(handler, nil)\n}\n```\n\n##### Recovery processing\n\nBasic chain of middlewares processing would look like:\n\n```go\nfunc processRecovery(recoveryObj interface{}, middleware 
recoveryMiddleware) error {\n\tif middleware == nil { return nil }\n\n\tnext, err := middleware(recoveryObj)\n\tif err != nil { return err }\n\tif next == nil { return nil }\n\n\treturn processRecovery(recoveryObj, next)\n}\n```\n\nThat way we can create a middleware chain which is executed from left to right; the rightmost middleware is a\n`default` handler which must return an `error`.\n\n##### BaseApp changes\n\nThe `default` middleware chain must exist in a `BaseApp` object. `BaseApp` modifications:\n\n```go\ntype BaseApp struct {\n // ...\n runTxRecoveryMiddleware recoveryMiddleware\n}\n\nfunc NewBaseApp(...) {\n // ...\n app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware()\n}\n\nfunc (app *BaseApp) runTx(...) {\n // ...\n defer func() {\n if r := recover(); r != nil {\n recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)\n err, result = processRecovery(r, recoveryMW), nil\n }\n\n gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}\n }()\n // ...\n}\n```\n\nDevelopers can add their custom `RecoveryHandler`s by providing `AddRunTxRecoveryHandler` as a BaseApp option parameter to the `NewBaseApp` constructor:\n\n```go\nfunc (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {\n for _, h := range handlers {\n app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware)\n }\n}\n```\n\nThis method would prepend handlers to the existing chain.\n\n## Consequences\n\n### Positive\n\n* Developers of Cosmos SDK-based projects can add custom panic handlers to:\n * add error context for custom panic sources (panic inside of custom keepers);\n * emit `panic()`: pass the recovery object through to the Tendermint core;\n * other necessary handling;\n* Developers can use the standard Cosmos SDK `BaseApp` implementation, rather than rewriting it in their projects;\n* The proposed solution doesn't break the current \"standard\" `runTx()` flow;\n\n### Negative\n\n* Introduces 
changes to the execution model design.\n\n### Neutral\n\n* `OutOfGas` error handler becomes one of the middlewares;\n* Default panic handler becomes one of the middlewares;\n\n## References\n\n* [PR-6053 with proposed solution](https://github.com/cosmos/cosmos-sdk/pull/6053)\n* [Similar solution. ADR-010 Modular AnteHandler](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-010-modular-antehandler.md)" + }, + { + "number": 23, + "filename": "adr-023-protobuf-naming.md", + "title": "ADR 023: Protocol Buffer Naming and Versioning Conventions", + "content": "# ADR 023: Protocol Buffer Naming and Versioning Conventions\n\n## Changelog\n\n* 2020 April 27: Initial Draft\n* 2020 August 5: Update guidelines\n\n## Status\n\nAccepted\n\n## Context\n\nProtocol Buffers provide a basic [style guide](https://developers.google.com/protocol-buffers/docs/style)\nand [Buf](https://buf.build/docs/style-guide) builds upon that. To the\nextent possible, we want to follow industry accepted guidelines and wisdom for\nthe effective usage of protobuf, deviating from those only when there is clear\nrationale for our use case.\n\n### Adoption of `Any`\n\nThe adoption of `google.protobuf.Any` as the recommended approach for encoding\ninterface types (as opposed to `oneof`) makes package naming a central part\nof the encoding as fully-qualified message names now appear in encoded\nmessages.\n\n### Current Directory Organization\n\nThus far we have mostly followed [Buf's](https://buf.build) [DEFAULT](https://buf.build/docs/lint-checkers#default)\nrecommendations, with the minor deviation of disabling [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout)\nwhich although being convenient for developing code comes with the warning\nfrom Buf that:\n\n> you will have a very bad time with many Protobuf plugins across various languages if you do not do this\n\n### Adoption of gRPC Queries\n\nIn [ADR 021](adr-021-protobuf-query-encoding.md), gRPC was adopted 
for Protobuf\nnative queries. The full gRPC service path thus becomes a key part of ABCI query\npath. In the future, gRPC queries may be allowed from within persistent scripts\nby technologies such as CosmWasm and these query routes would be stored within\nscript binaries.\n\n## Decision\n\nThe goal of this ADR is to provide thoughtful naming conventions that:\n\n* encourage a good user experience for when users interact directly with\n.proto files and fully-qualified protobuf names\n* balance conciseness against the possibility of either over-optimizing (making\nnames too short and cryptic) or under-optimizing (just accepting bloated names\nwith lots of redundant information)\n\nThese guidelines are meant to act as a style guide for both the Cosmos SDK and\nthird-party modules.\n\nAs a starting point, we should adopt all of the [DEFAULT](https://buf.build/docs/lint-checkers#default)\ncheckers in [Buf's](https://buf.build) including [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout),\nexcept:\n\n* [PACKAGE_VERSION_SUFFIX](https://buf.build/docs/lint-checkers#package_version_suffix)\n* [SERVICE_SUFFIX](https://buf.build/docs/lint-checkers#service_suffix)\n\nFurther guidelines to be described below.\n\n### Principles\n\n#### Concise and Descriptive Names\n\nNames should be descriptive enough to convey their meaning and distinguish\nthem from other names.\n\nGiven that we are using fully-qualified names within\n`google.protobuf.Any` as well as within gRPC query routes, we should aim to\nkeep names concise, without going overboard. 
The general rule of thumb should\nbe if a shorter name would convey more or else the same thing, pick the shorter\nname.\n\nFor instance, `cosmos.bank.MsgSend` (19 bytes) conveys roughly the same information\nas `cosmos_sdk.x.bank.v1.MsgSend` (28 bytes) but is more concise.\n\nSuch conciseness makes names both more pleasant to work with and take up less\nspace within transactions and on the wire.\n\nWe should also resist the temptation to over-optimize, by making names\ncryptically short with abbreviations. For instance, we shouldn't try to\nreduce `cosmos.bank.MsgSend` to `csm.bk.MSnd` just to save a few bytes.\n\nThe goal is to make names **_concise but not cryptic_**.\n\n#### Names are for Clients First\n\nPackage and type names should be chosen for the benefit of users, not\nnecessarily because of legacy concerns related to the go code-base.\n\n#### Plan for Longevity\n\nIn the interests of long-term support, we should plan on the names we do\nchoose to be in usage for a long time, so now is the opportunity to make\nthe best choices for the future.\n\n### Versioning\n\n#### Guidelines on Stable Package Versions\n\nIn general, schema evolution is the way to update protobuf schemas. That means that new fields,\nmessages, and RPC methods are _added_ to existing schemas and old fields, messages and RPC methods\nare maintained as long as possible.\n\nBreaking things is often unacceptable in a blockchain scenario. For instance, immutable smart contracts\nmay depend on certain data schemas on the host chain. If the host chain breaks those schemas, the smart\ncontract may be irreparably broken. 
Even when things can be fixed (for instance in client software),\nthis often comes at a high cost.\n\nInstead of breaking things, we should make every effort to evolve schemas rather than just breaking them.\n[Buf](https://buf.build) breaking change detection should be used on all stable (non-alpha or beta) packages\nto prevent such breakage.\n\nWith that in mind, different stable versions (i.e. `v1` or `v2`) of a package should more or less be considered\ndifferent packages and this should be a last resort approach for upgrading protobuf schemas. Scenarios where creating\na `v2` may make sense are:\n\n* we want to create a new module with similar functionality to an existing module and adding `v2` is the most natural\nway to do this. In that case, there are really just two different, but similar modules with different APIs.\n* we want to add a new revamped API for an existing module and it's just too cumbersome to add it to the existing package,\nso putting it in `v2` is cleaner for users. In this case, care should be made to not deprecate support for\n`v1` if it is actively used in immutable smart contracts.\n\n#### Guidelines on unstable (alpha and beta) package versions\n\nThe following guidelines are recommended for marking packages as alpha or beta:\n\n* marking something as `alpha` or `beta` should be a last resort and just putting something in the\nstable package (i.e. `v1` or `v2`) should be preferred\n* a package _should_ be marked as `alpha` _if and only if_ there are active discussions to remove\nor significantly alter the package in the near future\n* a package _should_ be marked as `beta` _if and only if_ there is an active discussion to\nsignificantly refactor/rework the functionality in the near future but do not remove it\n* modules _can and should_ have types in both stable (i.e. 
`v1` or `v2`) and unstable (`alpha` or `beta`) packages.\n\n_`alpha` and `beta` should not be used to avoid responsibility for maintaining compatibility._\nWhenever code is released into the wild, especially on a blockchain, there is a high cost to changing things. In some\ncases, for instance with immutable smart contracts, a breaking change may be impossible to fix.\n\nWhen marking something as `alpha` or `beta`, maintainers should ask the following questions:\n\n* what is the cost of asking others to change their code vs the benefit of us maintaining the optionality to change it?\n* what is the plan for moving this to `v1` and how will that affect users?\n\n`alpha` or `beta` should really be used to communicate \"changes are planned\".\n\nAs a case study, gRPC reflection is in the package `grpc.reflection.v1alpha`. It hasn't been changed since\n2017 and it is now used in other widely used software like gRPCurl. Some folks probably use it in production services\nand so if they actually went and changed the package to `grpc.reflection.v1`, some software would break and\nthey probably don't want to do that... So now the `v1alpha` package is more or less the de-facto `v1`. Let's not do that.\n\nThe following are guidelines for working with non-stable packages:\n\n* [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix)\n(ex. `v1alpha1`) _should_ be used for non-stable packages\n* non-stable packages should generally be excluded from breaking change detection\n* immutable smart contract modules (i.e. CosmWasm) _should_ block smart contracts/persistent\nscripts from interacting with `alpha`/`beta` packages\n\n#### Omit v1 suffix\n\nInstead of using [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix),\nwe can omit `v1` for packages that don't actually have a second version. 
This\nallows for more concise names for common use cases like `cosmos.bank.Send`.\nPackages that do have a second or third version can indicate that with `.v2`\nor `.v3`.\n\n### Package Naming\n\n#### Adopt a short, unique top-level package name\n\nTop-level packages should adopt a short name that is known not to collide with\nother names in common usage within the Cosmos ecosystem. In the near future, a\nregistry should be created to reserve and index top-level package names used\nwithin the Cosmos ecosystem. Because the Cosmos SDK is intended to provide\nthe top-level types for the Cosmos project, the top-level package name `cosmos`\nis recommended for usage within the Cosmos SDK instead of the longer `cosmos_sdk`.\n[ICS](https://github.com/cosmos/ics) specifications could consider a\nshort top-level package like `ics23` based upon the standard number.\n\n#### Limit sub-package depth\n\nSub-package depth should be increased with caution. Generally a single\nsub-package is needed for a module or a library. Even though `x` or `modules`\nis used in source code to denote modules, this is often unnecessary for .proto\nfiles as modules are the primary thing sub-packages are used for. Only items which\nare known to be used infrequently should have deep sub-package depths.\n\nFor the Cosmos SDK, it is recommended that we simply write `cosmos.bank`,\n`cosmos.gov`, etc. rather than `cosmos.x.bank`. In practice, most non-module\ntypes can go straight in the `cosmos` package or we can introduce a\n`cosmos.base` package if needed. Note that this naming _will not_ change\ngo package names, i.e. the `cosmos.bank` protobuf package will still live in\n`x/bank`.\n\n### Message Naming\n\nMessage type names should be as concise as possible without losing clarity. 
`sdk.Msg`\ntypes which are used in transactions will retain the `Msg` prefix as that provides\nhelpful context.\n\n### Service and RPC Naming\n\n[ADR 021](adr-021-protobuf-query-encoding.md) specifies that modules should\nimplement a gRPC query service. We should consider the principle of conciseness\nfor query service and RPC names as these may be called from persistent script\nmodules such as CosmWasm. Also, users may use these query paths from tools like\n[gRPCurl](https://github.com/fullstorydev/grpcurl). As an example, we can shorten\n`/cosmos_sdk.x.bank.v1.QueryService/QueryBalance` to\n`/cosmos.bank.Query/Balance` without losing much useful information.\n\nRPC request and response types _should_ follow the `ServiceNameMethodNameRequest`/\n`ServiceNameMethodNameResponse` naming convention. i.e. for an RPC method named `Balance`\non the `Query` service, the request and response types would be `QueryBalanceRequest`\nand `QueryBalanceResponse`. This will be more self-explanatory than `BalanceRequest`\nand `BalanceResponse`.\n\n#### Use just `Query` for the query service\n\nInstead of [Buf's default service suffix recommendation](https://github.com/cosmos/cosmos-sdk/pull/6033),\nwe should simply use the shorter `Query` for query services.\n\nFor other types of gRPC services, we should consider sticking with Buf's\ndefault recommendation.\n\n#### Omit `Get` and `Query` from query service RPC names\n\n`Get` and `Query` should be omitted from `Query` service names because they are\nredundant in the fully-qualified name. For instance, `/cosmos.bank.Query/QueryBalance`\njust says `Query` twice without any new information.\n\n## Future Improvements\n\nA registry of top-level package names should be created to coordinate naming\nacross the ecosystem, prevent collisions, and also help developers discover\nuseful schemas. 
A simple starting point would be a git repository with\ncommunity-based governance.\n\n## Consequences\n\n### Positive\n\n* names will be more concise and easier to read and type\n* all transactions using `Any` will be shorter (`_sdk.x` and `.v1` will be removed)\n* `.proto` file imports will be more standard (without `\"third_party/proto\"` in\nthe path)\n* code generation will be easier for clients because .proto files will be\nin a single `proto/` directory which can be copied rather than scattered\nthroughout the Cosmos SDK\n\n### Negative\n\n### Neutral\n\n* `.proto` files will need to be reorganized and refactored\n* some modules may need to be marked as alpha or beta\n\n## References"
  },
  {
    "number": 24,
    "filename": "adr-024-coin-metadata.md",
    "title": "ADR 024: Coin Metadata",
    "content": "# ADR 024: Coin Metadata\n\n## Changelog\n\n* 05/19/2020: Initial draft\n\n## Status\n\nProposed\n\n## Context\n\nAssets in the Cosmos SDK are represented via a `Coins` type that consists of an `amount` and a `denom`,\nwhere the `amount` can be any arbitrarily large or small value. In addition, the Cosmos SDK uses an\naccount-based model where there are two types of primary accounts -- basic accounts and module accounts.\nAll account types have a set of balances that are composed of `Coins`. The `x/bank` module keeps\ntrack of all balances for all accounts and also keeps track of the total supply of balances in an\napplication.\n\nWith regards to a balance `amount`, the Cosmos SDK assumes a static and fixed unit of denomination,\nregardless of the denomination itself. In other words, clients and apps built atop a Cosmos-SDK-based\nchain may choose to define and use arbitrary units of denomination to provide a richer UX, however, by\nthe time a tx or operation reaches the Cosmos SDK state machine, the `amount` is treated as a single\nunit. 
For example, for the Cosmos Hub (Gaia), clients assume 1 ATOM = 10^6 uatom, and so all txs and\noperations in the Cosmos SDK work off of units of 10^6.\n\nThis clearly provides a poor and limited UX especially as interoperability of networks increases and\nas a result the total amount of asset types increases. We propose to have `x/bank` additionally keep\ntrack of metadata per `denom` in order to help clients, wallet providers, and explorers improve their\nUX and remove the requirement for making any assumptions on the unit of denomination.\n\n## Decision\n\nThe `x/bank` module will be updated to store and index metadata by `denom`, specifically the \"base\" or\nsmallest unit -- the unit the Cosmos SDK state-machine works with.\n\nMetadata may also include a non-zero length list of denominations. Each entry contains the name of\nthe denomination `denom`, the exponent to the base and a list of aliases. An entry is to be\ninterpreted as `1 denom = 10^exponent base_denom` (e.g. `1 ETH = 10^18 wei` and `1 uatom = 10^0 uatom`).\n\nThere are two denominations that are of high importance for clients: the `base`, which is the smallest\npossible unit and the `display`, which is the unit that is commonly referred to in human communication\nand on exchanges. 
The values in those fields link to an entry in the list of denominations.\n\nThe list in `denom_units` and the `display` entry may be changed via governance.\n\nAs a result, we can define the type as follows:\n\n```protobuf\nmessage DenomUnit {\n string denom = 1;\n uint32 exponent = 2;\n repeated string aliases = 3;\n}\n\nmessage Metadata {\n string description = 1;\n repeated DenomUnit denom_units = 2;\n string base = 3;\n string display = 4;\n}\n```\n\nAs an example, the ATOM's metadata can be defined as follows:\n\n```json\n{\n \"name\": \"atom\",\n \"description\": \"The native staking token of the Cosmos Hub.\",\n \"denom_units\": [\n {\n \"denom\": \"uatom\",\n \"exponent\": 0,\n \"aliases\": [\n \"microatom\"\n ]\n },\n {\n \"denom\": \"matom\",\n \"exponent\": 3,\n \"aliases\": [\n \"milliatom\"\n ]\n },\n {\n \"denom\": \"atom\",\n \"exponent\": 6\n }\n ],\n \"base\": \"uatom\",\n \"display\": \"atom\"\n}\n```\n\nGiven the above metadata, a client may infer the following things:\n\n* 4.3atom = 4.3 * (10^6) = 4,300,000uatom\n* The string \"atom\" can be used as a display name in a list of tokens.\n* The balance 4300000 can be displayed as 4,300,000uatom or 4,300matom or 4.3atom.\n The `display` denomination 4.3atom is a good default if the authors of the client don't make\n an explicit decision to choose a different representation.\n\nA client should be able to query for metadata by denom both via the CLI and REST interfaces. 
In\naddition, we will add handlers to these interfaces to convert from any unit to another given unit,\nas the base framework for this already exists in the Cosmos SDK.\n\nFinally, we need to ensure metadata exists in the `GenesisState` of the `x/bank` module which is also\nindexed by the base `denom`.\n\n```go\ntype GenesisState struct {\n SendEnabled bool `json:\"send_enabled\" yaml:\"send_enabled\"`\n Balances []Balance `json:\"balances\" yaml:\"balances\"`\n Supply sdk.Coins `json:\"supply\" yaml:\"supply\"`\n DenomMetadata []Metadata `json:\"denom_metadata\" yaml:\"denom_metadata\"`\n}\n```\n\n## Future Work\n\nIn order for clients to avoid having to convert assets to the base denomination -- either manually or\nvia an endpoint, we may consider supporting automatic conversion of a given unit input.\n\n## Consequences\n\n### Positive\n\n* Provides clients, wallet providers and block explorers with additional data on\n asset denomination to improve UX and remove any need to make assumptions on\n denomination units.\n\n### Negative\n\n* A small amount of required additional storage in the `x/bank` module. The amount\n of additional storage should be minimal as the amount of total assets should not\n be large.\n\n### Neutral\n\n## References" + }, + { + "number": 27, + "filename": "adr-027-deterministic-protobuf-serialization.md", + "title": "ADR 027: Deterministic Protobuf Serialization", + "content": "# ADR 027: Deterministic Protobuf Serialization\n\n## Changelog\n\n* 2020-08-07: Initial Draft\n* 2020-09-01: Further clarify rules\n\n## Status\n\nProposed\n\n## Abstract\n\nFully deterministic structure serialization, which works across many languages and clients,\nis needed when signing messages. We need to be sure that whenever we serialize\na data structure, no matter in which supported language, the raw bytes\nwill stay the same.\n[Protobuf](https://developers.google.com/protocol-buffers/docs/proto3)\nserialization is not bijective (i.e. 
there exists a practically unlimited number of\nvalid binary representations for a given protobuf document)1.\n\nThis document describes a deterministic serialization scheme for\na subset of protobuf documents, that covers this use case but can be reused in\nother cases as well.\n\n### Context\n\nFor signature verification in Cosmos SDK, the signer and verifier need to agree on\nthe same serialization of a `SignDoc` as defined in\n[ADR-020](./adr-020-protobuf-transaction-encoding.md) without transmitting the\nserialization.\n\nCurrently, for block signatures we are using a workaround: we create a new [TxRaw](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L30)\ninstance (as defined in [adr-020-protobuf-transaction-encoding](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#transactions))\nby converting all [Tx](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L13)\nfields to bytes on the client side. This adds an additional manual\nstep when sending and signing transactions.\n\n### Decision\n\nThe following encoding scheme is to be used by other ADRs,\nand in particular for `SignDoc` serialization.\n\n## Specification\n\n### Scope\n\nThis ADR defines a protobuf3 serializer. The output is a valid protobuf\nserialization, such that every protobuf parser can parse it.\n\nNo maps are supported in version 1 due to the complexity of defining a\ndeterministic serialization. This might change in future. 
Implementations must\nreject documents containing maps as invalid input.\n\n### Background - Protobuf3 Encoding\n\nMost numeric types in protobuf3 are encoded as\n[varints](https://developers.google.com/protocol-buffers/docs/encoding#varints).\nVarints are at most 10 bytes, and since each varint byte has 7 bits of data,\nvarints are a representation of `uint70` (70-bit unsigned integer). When\nencoding, numeric values are casted from their base type to `uint70`, and when\ndecoding, the parsed `uint70` is casted to the appropriate numeric type.\n\nThe maximum valid value for a varint that complies with protobuf3 is\n`FF FF FF FF FF FF FF FF FF 7F` (i.e. `2**70 -1`). If the field type is\n`{,u,s}int64`, the highest 6 bits of the 70 are dropped during decoding,\nintroducing 6 bits of malleability. If the field type is `{,u,s}int32`, the\nhighest 38 bits of the 70 are dropped during decoding, introducing 38 bits of\nmalleability.\n\nAmong other sources of non-determinism, this ADR eliminates the possibility of\nencoding malleability.\n\n### Serialization rules\n\nThe serialization is based on the\n[protobuf3 encoding](https://developers.google.com/protocol-buffers/docs/encoding)\nwith the following additions:\n\n1. Fields must be serialized only once in ascending order\n2. Extra fields or any extra data must not be added\n3. [Default values](https://developers.google.com/protocol-buffers/docs/proto3#default)\n must be omitted\n4. `repeated` fields of scalar numeric types must use\n [packed encoding](https://developers.google.com/protocol-buffers/docs/encoding#packed)\n5. Varint encoding must not be longer than needed:\n * No trailing zero bytes (in little endian, i.e. no leading zeroes in big\n endian). 
Per rule 3 above, the default value of `0` must be omitted, so\n this rule does not apply in such cases.\n * The maximum value for a varint must be `FF FF FF FF FF FF FF FF FF 01`.\n In other words, when decoded, the highest 6 bits of the 70-bit unsigned\n integer must be `0`. (10-byte varints are 10 groups of 7 bits, i.e.\n 70 bits, of which only the lowest 70-6=64 are useful.)\n * The maximum value for 32-bit values in varint encoding must be `FF FF FF FF 0F`\n with one exception (below). In other words, when decoded, the highest 38\n bits of the 70-bit unsigned integer must be `0`.\n * The one exception to the above is _negative_ `int32`, which must be\n encoded using the full 10 bytes for sign extension2.\n * The maximum value for Boolean values in varint encoding must be `01` (i.e.\n it must be `0` or `1`). Per rule 3 above, the default value of `0` must\n be omitted, so if a Boolean is included it must have a value of `1`.\n\nWhile rules number 1. and 2. should be pretty straightforward and describe the\ndefault behavior of all protobuf encoders the author is aware of, the 3rd rule\nis more interesting. After a protobuf3 deserialization you cannot differentiate\nbetween unset fields and fields set to the default value3. At\nserialization level however, it is possible to set the fields with an empty\nvalue or omit them entirely. This is a significant difference to e.g. JSON\nwhere a property can be empty (`\"\"`, `0`), `null` or undefined, leading to 3\ndifferent documents.\n\nOmitting fields set to default values is valid because the parser must assign\nthe default value to fields missing in the serialization4. For scalar\ntypes, omitting defaults is required by the spec5. For `repeated`\nfields, not serializing them is the only way to express empty lists. Enums must\nhave a first element of numeric value 0, which is the default6. 
And\nmessage fields default to unset7.\n\nOmitting defaults allows for some amount of forward compatibility: users of\nnewer versions of a protobuf schema produce the same serialization as users of\nolder versions as long as newly added fields are not used (i.e. set to their\ndefault value).\n\n### Implementation\n\nThere are three main implementation strategies, ordered from the least to the\nmost custom development:\n\n* **Use a protobuf serializer that follows the above rules by default.** E.g.\n [gogoproto](https://pkg.go.dev/github.com/cosmos/gogoproto/gogoproto) is known to\n be compliant in most cases, but not when certain annotations such as\n `nullable = false` are used. It might also be an option to configure an\n existing serializer accordingly.\n* **Normalize default values before encoding them.** If your serializer follows\n rules 1. and 2. and allows you to explicitly unset fields for serialization,\n you can normalize default values to unset. This can be done when working with\n [protobuf.js](https://www.npmjs.com/package/protobufjs):\n\n ```js\n const bytes = SignDoc.encode({\n bodyBytes: body.length > 0 ? body : null, // normalize empty bytes to unset\n authInfoBytes: authInfo.length > 0 ? authInfo : null, // normalize empty bytes to unset\n chainId: chainId || null, // normalize \"\" to unset\n accountNumber: accountNumber || null, // normalize 0 to unset\n accountSequence: accountSequence || null, // normalize 0 to unset\n }).finish();\n ```\n\n* **Use a hand-written serializer for the types you need.** If none of the above\n ways works for you, you can write a serializer yourself. 
For SignDoc this\n would look something like this in Go, building on existing protobuf utilities:\n\n ```go\n if !signDoc.body_bytes.empty() {\n buf.WriteUVarInt64(0xA) // wire type and field number for body_bytes\n buf.WriteUVarInt64(signDoc.body_bytes.length())\n buf.WriteBytes(signDoc.body_bytes)\n }\n\n if !signDoc.auth_info.empty() {\n buf.WriteUVarInt64(0x12) // wire type and field number for auth_info\n buf.WriteUVarInt64(signDoc.auth_info.length())\n buf.WriteBytes(signDoc.auth_info)\n }\n\n if !signDoc.chain_id.empty() {\n buf.WriteUVarInt64(0x1a) // wire type and field number for chain_id\n buf.WriteUVarInt64(signDoc.chain_id.length())\n buf.WriteBytes(signDoc.chain_id)\n }\n\n if signDoc.account_number != 0 {\n buf.WriteUVarInt64(0x20) // wire type and field number for account_number\n buf.WriteUVarInt(signDoc.account_number)\n }\n\n if signDoc.account_sequence != 0 {\n buf.WriteUVarInt64(0x28) // wire type and field number for account_sequence\n buf.WriteUVarInt(signDoc.account_sequence)\n }\n ```\n\n### Test vectors\n\nGiven the protobuf definition `Article.proto`\n\n```protobuf\npackage blog;\nsyntax = \"proto3\";\n\nenum Type {\n UNSPECIFIED = 0;\n IMAGES = 1;\n NEWS = 2;\n};\n\nenum Review {\n UNSPECIFIED = 0;\n ACCEPTED = 1;\n REJECTED = 2;\n};\n\nmessage Article {\n string title = 1;\n string description = 2;\n uint64 created = 3;\n uint64 updated = 4;\n bool public = 5;\n bool promoted = 6;\n Type type = 7;\n Review review = 8;\n repeated string comments = 9;\n repeated string backlinks = 10;\n};\n```\n\nserializing the values\n\n```yaml\ntitle: \"The world needs change 🌳\"\ndescription: \"\"\ncreated: 1596806111080\nupdated: 0\npublic: true\npromoted: false\ntype: Type.NEWS\nreview: Review.UNSPECIFIED\ncomments: [\"Nice one\", \"Thank you\"]\nbacklinks: []\n```\n\nmust result in the serialization\n\n```text\n0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75\n```\n\nWhen 
inspecting the serialized document, you see that every second field is\nomitted:\n\n```shell\n$ echo 0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75 | xxd -r -p | protoc --decode_raw\n1: \"The world needs change \\360\\237\\214\\263\"\n3: 1596806111080\n5: 1\n7: 2\n9: \"Nice one\"\n9: \"Thank you\"\n```\n\n## Consequences\n\nHaving such an encoding available allows us to get deterministic serialization\nfor all protobuf documents we need in the context of Cosmos SDK signing.\n\n### Positive\n\n* Well defined rules that can be verified independently of a reference\n implementation\n* Simple enough to keep the barrier to implementing transaction signing low\n* It allows us to continue to use 0 and other empty values in SignDoc, avoiding\n the need to work around 0 sequences. This does not imply the change from\n https://github.com/cosmos/cosmos-sdk/pull/6949 should not be merged, but it is\n no longer as important.\n\n### Negative\n\n* When implementing transaction signing, the encoding rules above must be\n understood and implemented.\n* The need for rule number 3. adds some complexity to implementations.\n* Some data structures may require custom code for serialization. Thus\n the code is not very portable - it will require additional work for each\n client implementing serialization to properly handle custom data structures.\n\n### Neutral\n\n### Usage in Cosmos SDK\n\nFor the reasons mentioned above (\"Negative\" section) we prefer to keep workarounds\nfor shared data structures. Example: the aforementioned `TxRaw` is using raw bytes\nas a workaround. This allows clients to use any valid Protobuf library without\nthe need to implement a custom serializer that adheres to this standard (and the related risk of bugs).\n\n## References\n\n* 1 _When a message is serialized, there is no guaranteed order for\n how its known or unknown fields should be written. 
Serialization order is an\n implementation detail and the details of any particular implementation may\n change in the future. Therefore, protocol buffer parsers must be able to parse\n fields in any order._ from\n https://developers.google.com/protocol-buffers/docs/encoding#order\n* 2 https://developers.google.com/protocol-buffers/docs/encoding#signed_integers\n* 3 _Note that for scalar message fields, once a message is parsed\n there's no way of telling whether a field was explicitly set to the default\n value (for example whether a boolean was set to false) or just not set at all:\n you should bear this in mind when defining your message types. For example,\n don't have a boolean that switches on some behavior when set to false if you\n don't want that behavior to also happen by default._ from\n https://developers.google.com/protocol-buffers/docs/proto3#default\n* 4 _When a message is parsed, if the encoded message does not\n contain a particular singular element, the corresponding field in the parsed\n object is set to the default value for that field._ from\n https://developers.google.com/protocol-buffers/docs/proto3#default\n* 5 _Also note that if a scalar message field is set to its default,\n the value will not be serialized on the wire._ from\n https://developers.google.com/protocol-buffers/docs/proto3#default\n* 6 _For enums, the default value is the first defined enum value,\n which must be 0._ from\n https://developers.google.com/protocol-buffers/docs/proto3#default\n* 7 _For message fields, the field is not set. 
Its exact value is\n language-dependent._ from\n https://developers.google.com/protocol-buffers/docs/proto3#default\n* Encoding rules and parts of the reasoning taken from\n [canonical-proto3 by Aaron Craelius](https://github.com/regen-network/canonical-proto3)"
  },
  {
    "number": 28,
    "filename": "adr-028-public-key-addresses.md",
    "title": "ADR 028: Public Key Addresses",
    "content": "# ADR 028: Public Key Addresses\n\n## Changelog\n\n* 2020/08/18: Initial version\n* 2021/01/15: Analysis and algorithm update\n\n## Status\n\nProposed\n\n## Abstract\n\nThis ADR defines an address format for all addressable Cosmos SDK accounts. That includes: new public key algorithms, multisig public keys, and module accounts.\n\n## Context\n\nIssue [\\#3685](https://github.com/cosmos/cosmos-sdk/issues/3685) identified that public key\naddress spaces are currently overlapping. We confirmed that it significantly decreases the security of the Cosmos SDK.\n\n### Problem\n\nAn attacker can control an input for an address generation function. This leads to a birthday attack, which significantly decreases the security space.\nTo overcome this, we need to separate the inputs for different kinds of account types:\na security break of one account type shouldn't impact the security of other account types.\n\n### Initial proposals\n\nOne initial proposal was to extend the address length and\nadd prefixes for different types of addresses.\n\n@ethanfrey explained an alternate approach originally used in https://github.com/iov-one/weave:\n\n> I spent quite a bit of time thinking about this issue while building weave... The other cosmos Sdk.\n> Basically I define a condition to be a type and format as human readable string with some binary data appended. This condition is hashed into an Address (again at 20 bytes). 
The use of this prefix makes it impossible to find a preimage for a given address with a different condition (eg ed25519 vs secp256k1).\n> This is explained in depth here https://weave.readthedocs.io/en/latest/design/permissions.html\n> And the code is here, look mainly at the top where we process conditions. https://github.com/iov-one/weave/blob/master/conditions.go\n\nAnd explained how this approach should be sufficiently collision resistant:\n\n> Yeah, AFAIK, 20 bytes should be collision resistance when the preimages are unique and not malleable. A space of 2^160 would expect some collision to be likely around 2^80 elements (birthday paradox). And if you want to find a collision for some existing element in the database, it is still 2^160. 2^80 only if all these elements are written to state.\n> The good example you brought up was eg. a public key bytes being a valid public key on two algorithms supported by the codec. Meaning if either was broken, you would break accounts even if they were secured with the safer variant. This is only as the issue when no differentiating type info is present in the preimage (before hashing into an address).\n> I would like to hear an argument if the 20 bytes space is an actual issue for security, as I would be happy to increase my address sizes in weave. I just figured cosmos and ethereum and bitcoin all use 20 bytes, it should be good enough. And the arguments above which made me feel it was secure. 
But I have not done a deeper analysis.\n\nThis led to the first proposal (which we proved to be not good enough):\nwe concatenate a key type with a public key, hash it and take the first 20 bytes of that hash, summarized as `sha256(keyTypePrefix || keybytes)[:20]`.\n\n### Review and Discussions\n\nIn [\\#5694](https://github.com/cosmos/cosmos-sdk/issues/5694) we discussed various solutions.\nWe agreed that 20 bytes is not future proof, and extending the address length is the only way to allow addresses of different types, various signature types, etc.\nThis disqualifies the initial proposal.\n\nIn the issue we discussed various modifications:\n\n* Choice of the hash function.\n* Move the prefix out of the hash function: `keyTypePrefix + sha256(keybytes)[:20]` [post-hash-prefix-proposal].\n* Use double hashing: `sha256(keyTypePrefix + sha256(keybytes)[:20])`.\n* Increase the keybytes hash slice from 20 bytes to 32 or 40 bytes. We concluded that 32 bytes, produced by a good hash function, is future secure.\n\n### Requirements\n\n* Support currently used tools - we don't want to break the ecosystem, or add a long adaptation period. Ref: https://github.com/cosmos/cosmos-sdk/issues/8041\n* Try to keep the address length small - addresses are widely used in state, both as part of a key and object value.\n\n### Scope\n\nThis ADR only defines a process for the generation of address bytes. For end-user interactions with addresses (through the API, or CLI, etc.), we still use bech32 to format these addresses as strings. This ADR doesn't change that.\nUsing Bech32 for string encoding gives us support for checksums and helps detect user typos.\n\n## Decision\n\nWe define the following account types, for which we define the address function:\n\n1. simple accounts: represented by a regular public key (ie: secp256k1, sr25519)\n2. naive multisig: accounts composed by other addressable objects (ie: naive multisig)\n3. 
composed accounts with a native address key (ie: bls, group module accounts)\n4. module accounts: basically any accounts which cannot sign transactions and which are managed internally by modules\n\n### Legacy Public Key Addresses Don't Change\n\nCurrently (Jan 2021), the only officially supported Cosmos SDK user accounts are `secp256k1` basic accounts and legacy amino multisig.\nThey are used in existing Cosmos SDK zones. They use the following address formats:\n\n* secp256k1: `ripemd160(sha256(pk_bytes))[:20]`\n* legacy amino multisig: `sha256(aminoCdc.Marshal(pk))[:20]`\n\nWe don't want to change existing addresses. So the addresses for these two key types will remain the same.\n\nThe current multisig public keys use amino serialization to generate the address. We will retain\nthose public keys and their address formatting, and call them \"legacy amino\" multisig public keys\nin protobuf. We will also create multisig public keys without amino addresses to be described below.\n\n### Hash Function Choice\n\nAs in other parts of the Cosmos SDK, we will use `sha256`.\n\n### Basic Address\n\nWe start by defining a base algorithm for generating addresses which we will call `Hash`. Notably, it's used for accounts represented by a single key pair. For each public key schema we have to have an associated `typ` string, explained in the next section. 
`hash` is the cryptographic hash function defined in the previous section.\n\n```go\nconst A_LEN = 32\n\nfunc Hash(typ string, key []byte) []byte {\n return hash(hash(typ) + key)[:A_LEN]\n}\n```\n\nThe `+` is bytes concatenation, which doesn't use any separator.\n\nThis algorithm is the outcome of a consultation session with a professional cryptographer.\nMotivation: this algorithm keeps the address relatively small (the length of the `typ` doesn't impact the length of the final address)\nand it's more secure than [post-hash-prefix-proposal] (which uses the first 20 bytes of a pubkey hash, significantly reducing the address space).\nMoreover, the cryptographer motivated the choice of adding `typ` in the hash to protect against a switch table attack.\n\n`address.Hash` is a low-level function to generate _base_ addresses for new key types. Example:\n\n* BLS: `address.Hash(\"bls\", pubkey)`\n\n### Composed Addresses\n\nFor simple composed accounts (like a new naive multisig) we generalize `address.Hash`. The address is constructed by recursively creating addresses for the sub accounts, sorting the addresses and composing them into a single address. It ensures that the ordering of keys doesn't impact the resulting address.\n\n```go\n// We don't need a PubKey interface - we need anything which is addressable.\ntype Addressable interface {\n Address() []byte\n}\n\nfunc Composed(typ string, subaccounts []Addressable) []byte {\n addresses = map(subaccounts, \\a -> LengthPrefix(a.Address()))\n addresses = sort(addresses)\n return address.Hash(typ, addresses[0] + ... + addresses[n])\n}\n```\n\nThe `typ` parameter should be a schema descriptor, containing all significant attributes with deterministic serialization (eg: utf8 string).\n`LengthPrefix` is a function which prepends 1 byte to the address. The value of that byte is the length in bytes of the address before prepending. 
The address must be at most 255 bytes long.\nWe are using `LengthPrefix` to eliminate conflicts - it assures that for 2 lists of addresses: `as = {a1, a2, ..., an}` and `bs = {b1, b2, ..., bm}` such that every `bi` and `ai` is at most 255 bytes long, `concatenate(map(as, (a) => LengthPrefix(a))) = concatenate(map(bs, (b) => LengthPrefix(b)))` if and only if `as = bs`.\n\nImplementation Tip: account implementations should cache addresses.\n\n#### Multisig Addresses\n\nFor new multisig public keys, we define the `typ` parameter not based on any encoding scheme (amino or protobuf). This avoids issues with non-determinism in the encoding scheme.\n\nExample:\n\n```protobuf\npackage cosmos.crypto.multisig;\n\nmessage PubKey {\n uint32 threshold = 1;\n repeated google.protobuf.Any pubkeys = 2;\n}\n```\n\n```go\nfunc (multisig PubKey) Address() []byte {\n\t// first gather all nested pub keys\n\tvar keys []address.Addressable // cryptotypes.PubKey implements Addressable\n\tfor _, key := range multisig.Pubkeys {\n\t\tkeys = append(keys, key.GetCachedValue().(cryptotypes.PubKey))\n\t}\n\n\t// form the type from the message name (cosmos.crypto.multisig.PubKey) and the threshold joined together\n\tprefix := fmt.Sprintf(\"%s/%d\", proto.MessageName(multisig), multisig.Threshold)\n\n\t// use the Composed function defined above\n\treturn address.Composed(prefix, keys)\n}\n```\n\n### Derived Addresses\n\nWe must be able to cryptographically derive one address from another one. The derivation process must guarantee hash properties, hence we use the already defined `Hash` function:\n\n```go\nfunc Derive(address, derivationKey []byte) []byte {\n\treturn Hash(address, derivationKey)\n}\n```\n\n### Module Account Addresses\n\nA module account will have `\"module\"` type. Module accounts can have sub accounts. The submodule account will be created based on module name, and sequence of derivation keys. Typically, the first derivation key should be a class of the derived accounts. 
The derivation process has a defined order: module name, submodule key, subsubmodule key... An example module account is created using:\n\n```go\naddress.Module(moduleName, key)\n```\n\nAn example sub-module account is created using:\n\n```go\ngroupPolicyAddresses := []byte{1}\naddress.Module(moduleName, groupPolicyAddresses, policyID)\n```\n\nThe `address.Module` function uses `address.Hash` with `\"module\"` as the type argument, and the byte representation of the module name concatenated with the submodule key. The last two components must be uniquely separated to avoid potential clashes (example: modulename=\"ab\" & submodulekey=\"bc\" will have the same derivation key as modulename=\"a\" & submodulekey=\"bbc\").\nWe use a null byte (`'\\x00'`) to separate the module name from the submodule key. This works because the null byte is not part of a valid module name. Finally, the sub-submodule accounts are created by applying the `Derive` function recursively.\nWe could also use the `Derive` function in the first step (rather than concatenating the module name with a zero byte and the submodule key). We decided to do concatenation to avoid one level of derivation and speed up computation.\n\nFor backward compatibility with the existing `authtypes.NewModuleAddress`, we add a special case in the `Module` function: when no derivation key is provided, we fall back to the \"legacy\" implementation. 
\n\n```go\nfunc Module(moduleName string, derivationKeys ...[]byte) []byte {\n\tif len(derivationKeys) == 0 {\n\t\treturn authtypes.NewModuleAddress(moduleName) // legacy case\n\t}\n\tsubmoduleAddress := Hash(\"module\", []byte(moduleName) + 0 + derivationKeys[0])\n\treturn fold((a, k) => Derive(a, k), derivationKeys[1:], submoduleAddress)\n}\n```\n\n**Example 1** A lending BTC pool address would be:\n\n```go\nbtcPool := address.Module(\"lending\", btc.Address())\n```\n\nIf we want to create an address for a module account depending on more than one key, we can concatenate them:\n\n```go\nbtcAtomAMM := address.Module(\"amm\", btc.Address() + atom.Address())\n```\n\n**Example 2** A smart-contract address could be constructed by:\n\n```go\nsmartContractAddr = Module(\"mySmartContractVM\", smartContractsNamespace, smartContractKey)\n\n// which is equivalent to:\nsmartContractAddr = Derive(\n  Module(\"mySmartContractVM\", smartContractsNamespace),\n  smartContractKey)\n```\n\n### Schema Types\n\nThe `typ` parameter used in the `Hash` function SHOULD be unique for each account type.\nSince all Cosmos SDK account types are serialized in the state, we propose to use the protobuf message name string.\n\nExample: all public key types have a unique protobuf message type similar to:\n\n```protobuf\npackage cosmos.crypto.sr25519;\n\nmessage PubKey {\n\tbytes key = 1;\n}\n```\n\nAll protobuf messages have unique fully qualified names, in this example `cosmos.crypto.sr25519.PubKey`.\nThese names are derived directly from .proto files in a standardized way and used\nin other places such as the type URL in `Any`s. 
We can easily obtain the name using\n`proto.MessageName(msg)`.\n\n## Consequences\n\n### Backwards Compatibility\n\nThis ADR is compatible with what was committed and directly supported in the Cosmos SDK repository.\n\n### Positive\n\n* a simple algorithm for generating addresses for new public keys, complex accounts and modules\n* the algorithm generalizes _native composed keys_\n* increased security and collision resistance of addresses\n* the approach is extensible for future use-cases - one can use other address types, as long as they don't conflict with the address length specified here (20 or 32 bytes).\n* support new account types.\n\n### Negative\n\n* addresses do not communicate key type, a prefixed approach would have done this\n* addresses are 60% longer and will consume more storage space\n* requires a refactor of KVStore store keys to handle variable length addresses\n\n### Neutral\n\n* protobuf message names are used as key type prefixes\n\n## Further Discussions\n\nSome accounts can have a fixed name or may be constructed in another way (eg: modules). 
We were discussing an idea of an account with a predefined name (eg: `me.regen`), which could be used by institutions.\nWithout going into details, these kinds of addresses are compatible with the hash-based addresses described here as long as they don't have the same length.\nMore specifically, any special account address must not have a length equal to 20 or 32 bytes.\n\n## Appendix: Consulting session\n\nAt the end of Dec 2020 we had a session with [Alan Szepieniec](https://scholar.google.be/citations?user=4LyZn8oAAAAJ&hl=en) to consult on the approach presented above.\n\nAlan's general observations:\n\n* we don’t need 2-preimage resistance\n* we need a 32-byte address space for collision resistance\n* when an attacker can control an input for an object with an address then we have a problem with a birthday attack\n* there is an issue with smart-contracts for hashing\n* sha2 mining can be used to break the address pre-image\n\nHashing algorithm\n\n* any attack breaking blake3 will break blake2\n* Alan is pretty confident about the current security analysis of the blake hash algorithm. It was a finalist, and the author is well known in security analysis.\n\nAlgorithm:\n\n* Alan recommends hashing the prefix: `address(pub_key) = hash(hash(key_type) + pub_key)[:32]`, main benefits:\n * we are free to use arbitrarily long prefix names\n * we still don’t risk collisions\n * switch tables\n* discussion about penalization -> about adding prefix post hash\n* Aaron asked about post hash prefixes (`address(pub_key) = key_type + hash(pub_key)`) and differences. 
Alan noted that this approach has a longer address space and is stronger.\n\nAlgorithm for complex / composed keys:\n\n* merging tree-like addresses with the same algorithm is fine\n\nModule addresses: Should module addresses have a different size to differentiate them?\n\n* we will need to set a pre-image prefix for module addresses to keep them in 32-byte space: `hash(hash('module') + module_key)`\n* Aaron's observation: we already need to deal with variable length (to not break secp256k1 keys).\n\nDiscussion about an arithmetic hash function for ZKP\n\n* Poseidon / Rescue\n* Problem: much bigger risk because we don’t know many techniques or the history of crypto-analysis of arithmetic constructions. It’s still new ground and an area of active research.\n\nPost quantum signature size\n\n* Alan's suggestion: Falcon: very good speed / size ratio.\n* Aaron - should we think about it?\n Alan: based on early extrapolation, this will be able to break EC cryptography in 2050. But that’s a lot of uncertainty. But there is magic happening with recursions / linking / simulation and that can speed up the progress.\n\nOther ideas\n\n* Let’s say we use the same key and two different address algorithms for 2 different use cases. Is it still safe to use it? Alan: if we want to hide the public key (which is not our use case), then it’s less secure but there are fixes.\n\n### References\n\n* [Notes](https://hackmd.io/_NGWI4xZSbKzj1BkCqyZMw)"
  },
  {
    "number": 29,
    "filename": "adr-029-fee-grant-module.md",
    "title": "ADR 029: Fee Grant Module",
    "content": "# ADR 029: Fee Grant Module\n\n## Changelog\n\n* 2020/08/18: Initial Draft\n* 2021/05/05: Removed height based expiration support and simplified naming.\n\n## Status\n\nAccepted\n\n## Context\n\nIn order to make blockchain transactions, the signing account must possess a sufficient balance of the right denomination\nin order to pay fees. 
There are classes of transactions where needing to maintain a wallet with sufficient fees is a\nbarrier to adoption.\n\nFor instance, when proper permissions are set up, someone may temporarily delegate the ability to vote on proposals to\na \"burner\" account that is stored on a mobile phone with only minimal security.\n\nOther use cases include workers tracking items in a supply chain or farmers submitting field data for analytics\nor compliance purposes.\n\nFor all of these use cases, UX would be significantly enhanced by obviating the need for these accounts to always\nmaintain the appropriate fee balance. This is especially true if we want to achieve enterprise adoption for something\nlike supply chain tracking.\n\nWhile one solution would be to have a service that fills up these accounts automatically with the appropriate fees, a better UX\nwould be provided by allowing these accounts to pull from a common fee pool account with proper spending limits.\nA single pool would reduce the churn of making lots of small \"fill up\" transactions and also more effectively leverage\nthe resources of the organization setting up the pool.\n\n## Decision\n\nAs a solution, we propose a module, `x/feegrant`, which allows one account, the \"granter\", to grant another account, the \"grantee\",\nan allowance to spend the granter's account balance for fees within certain well-defined limits.\n\nFee allowances are defined by the extensible `FeeAllowanceI` interface:\n\n```go\ntype FeeAllowanceI interface {\n // Accept can use fee payment requested as well as timestamp of the current block\n // to determine whether or not to process this. 
This is checked in\n // Keeper.UseGrantedFees and the return values should match how it is handled there.\n //\n // If it returns an error, the fee payment is rejected, otherwise it is accepted.\n // The FeeAllowance implementation is expected to update its internal state\n // and will be saved again after an acceptance.\n //\n // If remove is true (regardless of the error), the FeeAllowance will be deleted from storage\n // (eg. when it is used up). (See call to RevokeFeeAllowance in Keeper.UseGrantedFees)\n Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)\n\n // ValidateBasic should evaluate this FeeAllowance for internal consistency.\n // Don't allow negative amounts or negative periods, for example.\n ValidateBasic() error\n}\n```\n\nTwo basic fee allowance types, `BasicAllowance` and `PeriodicAllowance`, are defined to support known use cases:\n\n```protobuf\n// BasicAllowance implements FeeAllowanceI with a one-time grant of tokens\n// that optionally expires. The delegatee can use up to SpendLimit to cover fees.\nmessage BasicAllowance {\n // spend_limit specifies the maximum amount of tokens that can be spent\n // by this allowance and will be updated as tokens are spent. 
If it is\n // empty, there is no spend limit and any amount of coins can be spent.\n repeated cosmos_sdk.v1.Coin spend_limit = 1;\n\n // expiration specifies an optional time when this allowance expires\n google.protobuf.Timestamp expiration = 2;\n}\n\n// PeriodicAllowance extends FeeAllowanceI to allow for both a maximum cap,\n// as well as a limit per time period.\nmessage PeriodicAllowance {\n BasicAllowance basic = 1;\n\n // period specifies the time duration in which period_spend_limit coins can\n // be spent before that allowance is reset\n google.protobuf.Duration period = 2;\n\n // period_spend_limit specifies the maximum number of coins that can be spent\n // in the period\n repeated cosmos_sdk.v1.Coin period_spend_limit = 3;\n\n // period_can_spend is the number of coins left to be spent before the period_reset time\n repeated cosmos_sdk.v1.Coin period_can_spend = 4;\n\n // period_reset is the time at which this period resets and a new one begins,\n // it is calculated from the start time of the first transaction after the\n // last period ended\n google.protobuf.Timestamp period_reset = 5;\n}\n\n```\n\nAllowances can be granted and revoked using `MsgGrantAllowance` and `MsgRevokeAllowance`:\n\n```protobuf\n// MsgGrantAllowance adds permission for Grantee to spend up to Allowance\n// of fees from the account of Granter.\nmessage MsgGrantAllowance {\n string granter = 1;\n string grantee = 2;\n google.protobuf.Any allowance = 3;\n }\n\n // MsgRevokeAllowance removes any existing FeeAllowance from Granter to Grantee.\n message MsgRevokeAllowance {\n string granter = 1;\n string grantee = 2;\n }\n```\n\nIn order to use allowances in transactions, we add a new field `granter` to the transaction `Fee` type:\n\n```protobuf\npackage cosmos.tx.v1beta1;\n\nmessage Fee {\n repeated cosmos.base.v1beta1.Coin amount = 1;\n uint64 gas_limit = 2;\n string payer = 3;\n string granter = 4;\n}\n```\n\n`granter` must either be left empty or must correspond to an account 
which has granted\na fee allowance to the fee payer (either the first signer or the value of the `payer` field).\n\nA new `AnteDecorator` named `DeductGrantedFeeDecorator` will be created in order to process transactions with `fee_payer`\nset and correctly deduct fees based on fee allowances.\n\n## Consequences\n\n### Positive\n\n* improved UX for use cases where it is cumbersome to maintain an account balance just for fees\n\n### Negative\n\n### Neutral\n\n* a new field must be added to the transaction `Fee` message and a new `AnteDecorator` must be\ncreated to use it\n\n## References\n\n* Blog article describing initial work: https://medium.com/regen-network/hacking-the-cosmos-cosmwasm-and-key-management-a08b9f561d1b\n* Initial public specification: https://gist.github.com/aaronc/b60628017352df5983791cad30babe56\n* Original subkeys proposal from B-harvest which influenced this design: https://github.com/cosmos/cosmos-sdk/issues/4480" + }, + { + "number": 30, + "filename": "adr-030-authz-module.md", + "title": "ADR 030: Authorization Module", + "content": "# ADR 030: Authorization Module\n\n## Changelog\n\n* 2019-11-06: Initial Draft\n* 2020-10-12: Updated Draft\n* 2020-11-13: Accepted\n* 2020-05-06: proto API updates, use `sdk.Msg` instead of `sdk.ServiceMsg` (the latter concept was removed from Cosmos SDK)\n* 2022-04-20: Updated the `SendAuthorization` proto docs to clarify the `SpendLimit` is a required field. 
(Generic authorization can be used with the bank msg type URL to create a limitless bank authorization)\n\n## Status\n\nAccepted\n\n## Abstract\n\nThis ADR defines the `x/authz` module which allows accounts to grant authorizations to perform actions\non behalf of that account to other accounts.\n\n## Context\n\nThe concrete use cases which motivated this module include:\n\n* the desire to delegate the ability to vote on proposals to other accounts besides the account which one has\ndelegated stake\n* \"sub-keys\" functionality, as originally proposed in [\\#4480](https://github.com/cosmos/cosmos-sdk/issues/4480) which\nis a term used to describe the functionality provided by this module together with\nthe `fee_grant` module from [ADR 029](./adr-029-fee-grant-module.md) and the [group module](https://github.com/cosmos/cosmos-sdk/tree/main/x/group).\n\nThe \"sub-keys\" functionality roughly refers to the ability for one account to grant some subset of its capabilities to\nother accounts with possibly less robust, but easier-to-use security measures. For instance, a master account representing\nan organization could grant the ability to spend small amounts of the organization's funds to individual employee accounts.\nOr an individual (or group) with a multisig wallet could grant the ability to vote on proposals to any one of the member\nkeys.\n\nThe current implementation is based on work done by the [Gaian's team at Hackatom Berlin 2019](https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation).\n\n## Decision\n\nWe will create a module named `authz` which provides functionality for\ngranting arbitrary privileges from one account (the _granter_) to another account (the _grantee_). Authorizations\nmust be granted for particular `Msg` service methods, one by one, using an implementation\nof the `Authorization` interface.\n\n### Types\n\nAuthorizations determine exactly what privileges are granted. 
They are extensible\nand can be defined for any `Msg` service method even outside of the module where\nthe `Msg` method is defined. `Authorization`s reference `Msg`s using their TypeURL.\n\n#### Authorization\n\n```go\ntype Authorization interface {\n\tproto.Message\n\n\t// MsgTypeURL returns the fully-qualified Msg TypeURL (as described in ADR 020),\n\t// which will process and accept or reject a request.\n\tMsgTypeURL() string\n\n\t// Accept determines whether this grant permits the provided sdk.Msg to be performed, and if\n\t// so provides an upgraded authorization instance.\n\tAccept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error)\n\n\t// ValidateBasic does a simple validation check that\n\t// doesn't require access to any other information.\n\tValidateBasic() error\n}\n\n// AcceptResponse informs the controller of an authz message whether the request is accepted\n// and whether it should be updated or deleted.\ntype AcceptResponse struct {\n\t// If Accept=true, the controller can accept the authorization and handle the update.\n\tAccept bool\n\t// If Delete=true, the controller must delete the authorization object and release\n\t// storage resources.\n\tDelete bool\n\t// The controller calling Authorization.Accept must check if `Updated != nil`. If yes,\n\t// it must use the updated version and handle the update on the storage level.\n\tUpdated Authorization\n}\n```\n\nFor example, a `SendAuthorization` like this is defined for `MsgSend` that takes\na `SpendLimit` and updates it down to zero:\n\n```go\ntype SendAuthorization struct {\n\t// SpendLimit specifies the maximum amount of tokens that can be spent\n\t// by this authorization and will be updated as tokens are spent. This field is required. 
(Generic authorization \n\t// can be used with the bank msg type URL to create a limitless bank authorization).\n\tSpendLimit sdk.Coins\n}\n\nfunc (a SendAuthorization) MsgTypeURL() string {\n\treturn sdk.MsgTypeURL(&MsgSend{})\n}\n\nfunc (a SendAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) {\n\tmSend, ok := msg.(*MsgSend)\n\tif !ok {\n\t\treturn authz.AcceptResponse{}, sdkerrors.ErrInvalidType.Wrap(\"type mismatch\")\n\t}\n\tlimitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount)\n\tif isNegative {\n\t\treturn authz.AcceptResponse{}, sdkerrors.ErrInsufficientFunds.Wrapf(\"requested amount is more than spend limit\")\n\t}\n\tif limitLeft.IsZero() {\n\t\treturn authz.AcceptResponse{Accept: true, Delete: true}, nil\n\t}\n\n\treturn authz.AcceptResponse{Accept: true, Delete: false, Updated: &SendAuthorization{SpendLimit: limitLeft}}, nil\n}\n```\n\nA different type of capability for `MsgSend` could be implemented\nusing the `Authorization` interface with no need to change the underlying\n`bank` module.\n\n##### Small notes on `AcceptResponse`\n\n* The `AcceptResponse.Accept` field will be set to `true` if the authorization is accepted.\nHowever, if it is rejected, the function `Accept` will raise an error (without setting `AcceptResponse.Accept` to `false`).\n\n* The `AcceptResponse.Updated` field will be set to a non-nil value only if there is a real change to the authorization.\nIf the authorization remains the same (as is, for instance, always the case for a [`GenericAuthorization`](#genericauthorization)),\nthe field will be `nil`.\n\n### `Msg` Service\n\n```protobuf\nservice Msg {\n // Grant grants the provided authorization to the grantee on the granter's\n // account with the provided expiration time.\n rpc Grant(MsgGrant) returns (MsgGrantResponse);\n\n // Exec attempts to execute the provided messages using\n // authorizations granted to the grantee. 
Each message should have only\n // one signer corresponding to the granter of the authorization.\n rpc Exec(MsgExec) returns (MsgExecResponse);\n\n // Revoke revokes any authorization corresponding to the provided method name on the\n // granter's account that has been granted to the grantee.\n rpc Revoke(MsgRevoke) returns (MsgRevokeResponse);\n}\n\n// Grant gives permissions to execute\n// the provided method with expiration time.\nmessage Grant {\n google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = \"cosmos.authz.v1beta1.Authorization\"];\n google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];\n}\n\nmessage MsgGrant {\n string granter = 1;\n string grantee = 2;\n\n Grant grant = 3 [(gogoproto.nullable) = false];\n}\n\nmessage MsgExecResponse {\n cosmos.base.abci.v1beta1.Result result = 1;\n}\n\nmessage MsgExec {\n string grantee = 1;\n // Authorization Msg requests to execute. Each msg must implement Authorization interface\n repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = \"cosmos.base.v1beta1.Msg\"];\n}\n```\n\n### Router Middleware\n\nThe `authz` `Keeper` will expose a `DispatchActions` method which allows other modules to send `Msg`s\nto the router based on `Authorization` grants:\n\n```go\ntype Keeper interface {\n\t// DispatchActions routes the provided msgs to their respective handlers if the grantee was granted an authorization\n\t// to send those messages by the first (and only) signer of each msg.\n\tDispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) sdk.Result\n}\n```\n\n### CLI\n\n#### `tx exec` Method\n\nWhen a CLI user wants to run a transaction on behalf of another account using `MsgExec`, they\ncan use the `exec` method. 
For instance `gaiacli tx gov vote 1 yes --from <granter> --generate-only | gaiacli tx authz exec --send-as <granter> --from <grantee>`\nwould send a transaction like this:\n\n```go\nMsgExec {\n Grantee: mykey,\n Msgs: []sdk.Msg{\n MsgVote {\n ProposalID: 1,\n Voter: cosmos3thsdgh983egh823,\n Option: Yes\n }\n }\n}\n```\n\n#### `tx grant <grantee> <authorization> --from <granter>`\n\nThis CLI command will send a `MsgGrant` transaction. `authorization` should be encoded as\nJSON on the CLI.\n\n#### `tx revoke <grantee> <msg-type> --from <granter>`\n\nThis CLI command will send a `MsgRevoke` transaction.\n\n### Built-in Authorizations\n\n#### `SendAuthorization`\n\n```protobuf\n// SendAuthorization allows the grantee to spend up to spend_limit coins from\n// the granter's account.\nmessage SendAuthorization {\n repeated cosmos.base.v1beta1.Coin spend_limit = 1;\n}\n```\n\n#### `GenericAuthorization`\n\n```protobuf\n// GenericAuthorization gives the grantee unrestricted permissions to execute\n// the provided method on behalf of the granter's account.\nmessage GenericAuthorization {\n option (cosmos_proto.implements_interface) = \"Authorization\";\n\n // Msg, identified by its type URL, to grant unrestricted permissions to execute\n string msg = 1;\n}\n```\n\n## Consequences\n\n### Positive\n\n* Users will be able to authorize arbitrary actions on behalf of their accounts to other\nusers, improving key management for many use cases\n* The solution is more generic than previously considered approaches and the\n`Authorization` interface approach can be extended to cover other use cases by\nSDK users\n\n### Negative\n\n### Neutral\n\n## References\n\n* Initial Hackatom implementation: https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation\n* Post-Hackatom spec: https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#delegation-module\n* B-Harvest subkeys spec: https://github.com/cosmos/cosmos-sdk/issues/4480"
  },
  {
    "number": 31,
    "filename": "adr-031-msg-service.md",
    "title": "ADR 031: Protobuf Msg Services",
    "content": "# ADR 
031: Protobuf Msg Services\n\n## Changelog\n\n* 2020-10-05: Initial Draft\n* 2021-04-21: Remove `ServiceMsg`s to follow Protobuf `Any`'s spec, see [#9063](https://github.com/cosmos/cosmos-sdk/issues/9063).\n\n## Status\n\nAccepted\n\n## Abstract\n\nWe want to leverage protobuf `service` definitions for defining `Msg`s, which will give us significant developer UX\nimprovements in terms of the code that is generated and the fact that return types will now be well defined.\n\n## Context\n\nCurrently `Msg` handlers in the Cosmos SDK have return values that are placed in the `data` field of the response.\nThese return values, however, are not specified anywhere except in the golang handler code.\n\nIn early conversations [it was proposed](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc/edit)\nthat `Msg` return types be captured using a protobuf extension field, ex:\n\n```protobuf\npackage cosmos.gov;\n\nmessage MsgSubmitProposal {\n\toption (cosmos_proto.msg_return) = \"uint64\";\n\tstring delegator_address = 1;\n\tstring validator_address = 2;\n\trepeated sdk.Coin amount = 3;\n}\n```\n\nThis was never adopted, however.\n\nHaving a well-specified return value for `Msg`s would improve client UX. For instance,\nin `x/gov`, `MsgSubmitProposal` returns the proposal ID as a big-endian `uint64`.\nThis isn’t really documented anywhere and clients would need to know the internals\nof the Cosmos SDK to parse that value and return it to users.\n\nAlso, there may be cases where we want to use these return values programmatically.\nFor instance, https://github.com/cosmos/cosmos-sdk/issues/7093 proposes a method for\ndoing inter-module Ocaps using the `Msg` router. 
A well-defined return type would\nimprove the developer UX for this approach.\n\nIn addition, handler registration of `Msg` types tends to add a bit of\nboilerplate on top of keepers and is usually done through manual type switches.\nThis isn't necessarily bad, but it does add overhead to creating modules.\n\n## Decision\n\nWe decide to use protobuf `service` definitions for defining `Msg`s as well as\nthe code generated by them as a replacement for `Msg` handlers.\n\nBelow we define how this will look for the `SubmitProposal` message from the `x/gov` module.\nWe start with a `Msg` `service` definition:\n\n```protobuf\npackage cosmos.gov;\n\nservice Msg {\n rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);\n}\n\n// Note that for backwards compatibility this uses MsgSubmitProposal as the request\n// type instead of the more canonical MsgSubmitProposalRequest\nmessage MsgSubmitProposal {\n google.protobuf.Any content = 1;\n string proposer = 2;\n}\n\nmessage MsgSubmitProposalResponse {\n uint64 proposal_id = 1;\n}\n```\n\nWhile this is most commonly used for gRPC, overloading protobuf `service` definitions like this does not violate\nthe intent of the [protobuf spec](https://developers.google.com/protocol-buffers/docs/proto3#services) which says:\n> If you don’t want to use gRPC, it’s also possible to use protocol buffers with your own RPC implementation.\n\nWith this approach, we would get an auto-generated `MsgServer` interface (shown below).\n\nIn addition to clearly specifying return types, this has the benefit of generating client and server code. 
On the server\nside, this is almost like an automatically generated keeper method and could maybe be used instead of keepers eventually\n(see [\\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)):\n\n```go\npackage gov\n\ntype MsgServer interface {\n SubmitProposal(context.Context, *MsgSubmitProposal) (*MsgSubmitProposalResponse, error)\n}\n```\n\nOn the client side, developers could take advantage of this by creating RPC implementations that encapsulate transaction\nlogic. Protobuf libraries that use asynchronous callbacks, like [protobuf.js](https://github.com/protobufjs/protobuf.js#using-services),\ncould use this to register callbacks for specific messages even for transactions that include multiple `Msg`s.\n\nEach `Msg` service method should have exactly one request parameter: its corresponding `Msg` type. For example, the `Msg` service method `/cosmos.gov.v1beta1.Msg/SubmitProposal` above has exactly one request parameter, namely the `Msg` type `/cosmos.gov.v1beta1.MsgSubmitProposal`. It is important that the reader clearly understands the nomenclature difference between a `Msg` service (a Protobuf service) and a `Msg` type (a Protobuf message), and the differences in their fully-qualified names.\n\nThis convention has been decided over the more canonical `Msg...Request` names mainly for backwards compatibility, but also for better readability in `TxBody.messages` (see [Encoding section](#encoding) below): transactions containing `/cosmos.gov.v1beta1.MsgSubmitProposal` read better than those containing `/cosmos.gov.v1beta1.MsgSubmitProposalRequest`.\n\nOne consequence of this convention is that each `Msg` type can be the request parameter of only one `Msg` service method. However, we consider this limitation a good practice in explicitness.\n\n### Encoding\n\nEncoding of transactions generated with `Msg` services does not differ from current Protobuf transaction encoding as defined in [ADR-020](./adr-020-protobuf-transaction-encoding.md). 
We are encoding `Msg` types (which are exactly `Msg` service methods' request parameters) as `Any` in `Tx`s which involves packing the\nbinary-encoded `Msg` with its type URL.\n\n### Decoding\n\nSince `Msg` types are packed into `Any`, decoding transaction messages is done by unpacking `Any`s into `Msg` types. For more information, please refer to [ADR-020](./adr-020-protobuf-transaction-encoding.md#transactions).\n\n### Routing\n\nWe propose to add a `msg_service_router` in BaseApp. This router is a key/value map which maps `Msg` types' `type_url`s to their corresponding `Msg` service method handler. Since there is a 1-to-1 mapping between `Msg` types and `Msg` service method, the `msg_service_router` has exactly one entry per `Msg` service method.\n\nWhen a transaction is processed by BaseApp (in CheckTx or in DeliverTx), its `TxBody.messages` are decoded as `Msg`s. Each `Msg`'s `type_url` is matched against an entry in the `msg_service_router`, and the respective `Msg` service method handler is called.\n\nFor backward compatibility, the old handlers are not removed yet. 
If BaseApp receives a legacy `Msg` with no corresponding entry in the `msg_service_router`, it will be routed via its legacy `Route()` method into the legacy handler.\n\n### Module Configuration\n\nIn [ADR 021](./adr-021-protobuf-query-encoding.md), we introduced a method `RegisterQueryService`\nto `AppModule` which allows for modules to register gRPC queriers.\n\nTo register `Msg` services, we attempt a more extensible approach by converting `RegisterQueryService`\nto a more generic `RegisterServices` method:\n\n```go\ntype AppModule interface {\n RegisterServices(Configurator)\n ...\n}\n\ntype Configurator interface {\n QueryServer() grpc.Server\n MsgServer() grpc.Server\n}\n\n// example module:\nfunc (am AppModule) RegisterServices(cfg Configurator) {\n\ttypes.RegisterQueryServer(cfg.QueryServer(), keeper)\n\ttypes.RegisterMsgServer(cfg.MsgServer(), keeper)\n}\n```\n\nThe `RegisterServices` method and the `Configurator` interface are intended to\nevolve to satisfy the use cases discussed in [\\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)\nand [\\#7421](https://github.com/cosmos/cosmos-sdk/issues/7421).\n\nWhen `Msg` services are registered, the framework _should_ verify that all `Msg` types\nimplement the `sdk.Msg` interface and throw an error during initialization rather\nthan later when transactions are processed.\n\n### `Msg` Service Implementation\n\nJust like query services, `Msg` service methods can retrieve the `sdk.Context`\nfrom the `context.Context` parameter using the `sdk.UnwrapSDKContext`\nmethod:\n\n```go\npackage gov\n\nfunc (k Keeper) SubmitProposal(goCtx context.Context, params *types.MsgSubmitProposal) (*MsgSubmitProposalResponse, error) {\n\tctx := sdk.UnwrapSDKContext(goCtx)\n    ...\n}\n```\n\nThe `sdk.Context` should have an `EventManager` already attached by BaseApp's `msg_service_router`.\n\nSeparate handler definition is no longer needed with this approach.\n\n## Consequences\n\nThis design changes how a module's functionality is 
exposed and accessed. It deprecates the existing `Handler` interface and `AppModule.Route` in favor of [Protocol Buffer Services](https://developers.google.com/protocol-buffers/docs/proto3#services) and Service Routing described above. This dramatically simplifies the code. We don't need to create handlers and keepers any more. Use of Protocol Buffer auto-generated clients clearly separates the communication interfaces between the module and a module's user. The control logic (aka handlers and keepers) is not exposed any more. A module interface can be seen as a black box accessible through a client API. It's worth noting that the client interfaces are also generated by Protocol Buffers.\n\nThis also allows us to change how we perform functional tests. Instead of mocking AppModules and Router, we will mock a client (the server will stay hidden). More specifically: we will never mock `moduleA.MsgServer` in `moduleB`, but rather `moduleA.MsgClient`. One can think about it as working with external services (eg DBs, or online servers...). We assume that the transmission between clients and servers is correctly handled by generated Protocol Buffers.\n\nFinally, restricting a module to its client API opens desirable OCAP patterns discussed in ADR-033. 
Since the server implementation and interface are hidden, nobody can hold \"keepers\"/servers and everyone is forced to rely on the client interface, which will drive developers toward correct encapsulation and software engineering patterns.\n\n### Pros\n\n* communicates return type clearly\n* manual handler registration and return type marshaling is no longer needed, just implement the interface and register it\n* communication interface is automatically generated, the developer can now focus only on the state transition methods - this would improve the UX of [\\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093) approach (1) if we chose to adopt that\n* generated client code could be useful for clients and tests\n* dramatically reduces and simplifies the code\n\n### Cons\n\n* using `service` definitions outside the context of gRPC could be confusing (but doesn’t violate the proto3 spec)\n\n## References\n\n* [Initial Github Issue \\#7122](https://github.com/cosmos/cosmos-sdk/issues/7122)\n* [proto 3 Language Guide: Defining Services](https://developers.google.com/protocol-buffers/docs/proto3#services)\n* [Initial pre-`Any` `Msg` designs](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc)\n* [ADR 020](./adr-020-protobuf-transaction-encoding.md)\n* [ADR 021](./adr-021-protobuf-query-encoding.md)" + }, + { + "number": 32, + "filename": "adr-032-typed-events.md", + "title": "ADR 032: Typed Events", + "content": "# ADR 032: Typed Events\n\n## Changelog\n\n* 28-Sept-2020: Initial Draft\n\n## Authors\n\n* Anil Kumar (@anilcse)\n* Jack Zampolin (@jackzampolin)\n* Adam Bozanich (@boz)\n\n## Status\n\nProposed\n\n## Abstract\n\nCurrently in the Cosmos SDK, events are defined in the handlers for each message as well as `BeginBlock` and `EndBlock`. Each module doesn't have types defined for each event; instead, events are implemented as `map[string]string`. 
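The untyped-versus-typed contrast can be made concrete with a stdlib-only Go sketch (no protobuf; `EventSubmitProposal`, `toAttributes`, and `fromAttributes` are illustrative names): a typed event is flattened into the `map[string]string` attribute shape and recovered again, which is essentially the round trip that the ADR formalizes with proto messages.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// EventSubmitProposal is an illustrative typed event (a stand-in for a proto message).
type EventSubmitProposal struct {
	FromAddress string `json:"from_address"`
	ProposalId  uint64 `json:"proposal_id"`
}

// toAttributes flattens a typed event into string key/value attributes,
// the shape untyped SDK events carry today.
func toAttributes(ev EventSubmitProposal) (map[string]string, error) {
	raw, err := json.Marshal(ev)
	if err != nil {
		return nil, err
	}
	var m map[string]json.RawMessage
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	attrs := make(map[string]string, len(m))
	for k, v := range m {
		attrs[k] = string(v) // each value is a JSON fragment, e.g. "\"cosmos1abc\"" or "7"
	}
	return attrs, nil
}

// fromAttributes rebuilds the typed event from its string attributes.
func fromAttributes(attrs map[string]string) (EventSubmitProposal, error) {
	m := make(map[string]json.RawMessage, len(attrs))
	for k, v := range attrs {
		m[k] = json.RawMessage(v)
	}
	raw, err := json.Marshal(m)
	if err != nil {
		return EventSubmitProposal{}, err
	}
	var ev EventSubmitProposal
	err = json.Unmarshal(raw, &ev)
	return ev, err
}

func main() {
	ev := EventSubmitProposal{FromAddress: "cosmos1abc", ProposalId: 7}
	attrs, _ := toAttributes(ev)
	back, _ := fromAttributes(attrs)
	fmt.Println(back.FromAddress, back.ProposalId) // cosmos1abc 7
}
```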
Above all else this makes these events difficult to consume as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.\n\n## Context\n\nCurrently in the Cosmos SDK, events are defined in the handlers for each message, meaning each module doesn't have a canonical set of types for each event. Above all else this makes these events difficult to consume as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.\n\n[Our platform](http://github.com/ovrclk/akash) requires a number of programmatic on chain interactions both on the provider (datacenter - to bid on new orders and listen for leases created) and user (application developer - to send the app manifest to the provider) side. In addition the Akash team is now maintaining the IBC [`relayer`](https://github.com/ovrclk/relayer), another very event driven process. In working on these core pieces of infrastructure, and integrating lessons learned from Kubernetes development, our team has developed a standard method for defining and consuming typed events in Cosmos SDK modules. We have found that it is extremely useful in building this type of event driven application.\n\nAs the Cosmos SDK gets used more extensively for apps like `peggy`, other peg zones, IBC, DeFi, etc... there will be an exploding demand for event driven applications to support new features desired by users. We propose upstreaming our findings into the Cosmos SDK to enable all Cosmos SDK applications to quickly and easily build event driven apps to aid their core application. 
Wallets, exchanges, explorers, and defi protocols all stand to benefit from this work.\n\nIf this proposal is accepted, users will be able to build event driven Cosmos SDK apps in go by just writing `EventHandler`s for their specific event types and passing them to `EventEmitters` that are defined in the Cosmos SDK.\n\nThe end of this proposal contains a detailed example of how to consume events after this refactor.\n\nThis proposal is specifically about how to consume these events as a client of the blockchain, not for intermodule communication.\n\n## Decision\n\n**Step-1**: Implement additional functionality in the `types` package: `EmitTypedEvent` and `ParseTypedEvent` functions\n\n```go\n// types/events.go\n\n// EmitTypedEvent takes typed event and emits converting it into sdk.Event\nfunc (em *EventManager) EmitTypedEvent(event proto.Message) error {\n\tevtType := proto.MessageName(event)\n\tevtJSON, err := codec.ProtoMarshalJSON(event)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar attrMap map[string]json.RawMessage\n\terr = json.Unmarshal(evtJSON, &attrMap)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar attrs []abci.EventAttribute\n\tfor k, v := range attrMap {\n\t\tattrs = append(attrs, abci.EventAttribute{\n\t\t\tKey: []byte(k),\n\t\t\tValue: v,\n\t\t})\n\t}\n\n\tem.EmitEvent(Event{\n\t\tType: evtType,\n\t\tAttributes: attrs,\n\t})\n\n\treturn nil\n}\n\n// ParseTypedEvent converts abci.Event back to typed event\nfunc ParseTypedEvent(event abci.Event) (proto.Message, error) {\n\tconcreteGoType := proto.MessageType(event.Type)\n\tif concreteGoType == nil {\n\t\treturn nil, fmt.Errorf(\"failed to retrieve the message of type %q\", event.Type)\n\t}\n\n\tvar value reflect.Value\n\tif concreteGoType.Kind() == reflect.Ptr {\n\t\tvalue = reflect.New(concreteGoType.Elem())\n\t} else {\n\t\tvalue = reflect.Zero(concreteGoType)\n }\n\n\tprotoMsg, ok := value.Interface().(proto.Message)\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"%q does not implement 
proto.Message\", event.Type)\n\t}\n\n\tattrMap := make(map[string]json.RawMessage)\n\tfor _, attr := range event.Attributes {\n\t\tattrMap[string(attr.Key)] = attr.Value\n\t}\n\n\tattrBytes, err := json.Marshal(attrMap)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = jsonpb.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn protoMsg, nil\n}\n```\n\nHere, `EmitTypedEvent` is a method on `EventManager` which takes a typed event as input and applies JSON serialization to it. Then it maps the JSON key/value pairs to `event.Attributes` and emits it in the form of an `sdk.Event`. `Event.Type` will be the type URL of the proto message.\n\nWhen we subscribe to emitted events on the CometBFT websocket, they are emitted in the form of an `abci.Event`. `ParseTypedEvent` parses the event back to its original proto message.\n\n**Step-2**: Add proto definitions for typed events for msgs in each module:\n\nFor example, let's take `MsgSubmitProposal` of the `gov` module and implement this event's type.\n\n```protobuf\n// proto/cosmos/gov/v1beta1/gov.proto\n// Add typed event definition\n\npackage cosmos.gov.v1beta1;\n\nmessage EventSubmitProposal {\n string from_address = 1;\n uint64 proposal_id = 2;\n TextProposal proposal = 3;\n}\n```\n\n**Step-3**: Refactor event emission to use the typed event created and emit using `sdk.EmitTypedEvent`:\n\n```go\n// x/gov/handler.go\nfunc handleMsgSubmitProposal(ctx sdk.Context, keeper keeper.Keeper, msg types.MsgSubmitProposalI) (*sdk.Result, error) {\n ...\n ctx.EventManager().EmitTypedEvent(\n &EventSubmitProposal{\n FromAddress: fromAddress,\n ProposalId: id,\n Proposal: proposal,\n },\n )\n ...\n}\n```\n\n### How to subscribe to these typed events in `Client`\n\n> NOTE: Full code example below\n\nUsers will be able to subscribe using `client.Context.Client.Subscribe` and consume events which are emitted using `EventHandler`s.\n\nAkash Network has built a simple 
[`pubsub`](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/pubsub/bus.go#L20). This can be used to subscribe to `abci.Events` and [publish](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L21) them as typed events.\n\nPlease see the below code sample for more detail on how this flow looks for clients.\n\n## Consequences\n\n### Positive\n\n* Improves consistency of implementation for the events currently in the Cosmos SDK\n* Provides a much more ergonomic way to handle events and facilitates writing event driven applications\n* This implementation will support a middleware ecosystem of `EventHandler`s\n\n### Negative\n\n## Detailed code example of publishing events\n\nThis ADR also proposes adding affordances to emit and consume these events. This way developers will only need to write\n`EventHandler`s which define the actions they desire to take.\n\n```go\n// EventEmitter is a type that describes event emitter functions\n// This should be defined in `types/events.go`\ntype EventEmitter func(context.Context, client.Context, ...EventHandler) error\n\n// EventHandler is a type of function that handles events coming out of the event bus\n// This should be defined in `types/events.go`\ntype EventHandler func(proto.Message) error\n\n// Sample use of the functions below\nfunc main() {\n ctx, cancel := context.WithCancel(context.Background())\n\n if err := TxEmitter(ctx, client.Context{}.WithNodeURI(\"tcp://localhost:26657\"), SubmitProposalEventHandler); err != nil {\n cancel()\n panic(err)\n }\n\n return\n}\n\n// SubmitProposalEventHandler is an example of an event handler that prints proposal details\n// when any EventSubmitProposal is emitted.\nfunc SubmitProposalEventHandler(ev proto.Message) (err error) {\n switch event := ev.(type) {\n // Handle governance proposal events creation events\n case govtypes.EventSubmitProposal:\n // Users define business logic here e.g.\n 
fmt.Println(event.FromAddress, event.ProposalId, event.Proposal)\n return nil\n default:\n return nil\n }\n}\n\n// TxEmitter is an example of an event emitter that emits just transaction events. This can and\n// should be implemented somewhere in the Cosmos SDK. The Cosmos SDK can include EventEmitters for tm.event='Tx'\n// and/or tm.event='NewBlock' (the new block events may contain typed events)\nfunc TxEmitter(ctx context.Context, cliCtx client.Context, ehs ...EventHandler) (err error) {\n // Instantiate and start CometBFT RPC client\n client, err := cliCtx.GetNode()\n if err != nil {\n return err\n }\n\n if err = client.Start(); err != nil {\n return err\n }\n\n // Start the pubsub bus\n bus := pubsub.NewBus()\n defer bus.Close()\n\n // Initialize a new error group\n eg, ctx := errgroup.WithContext(ctx)\n\n // Publish chain events to the pubsub bus\n eg.Go(func() error {\n return PublishChainTxEvents(ctx, client, bus, simapp.ModuleBasics)\n })\n\n // Subscribe to the bus events\n subscriber, err := bus.Subscribe()\n if err != nil {\n return err\n }\n\n\t// Handle all the events coming out of the bus\n\teg.Go(func() error {\n var err error\n for {\n select {\n case <-ctx.Done():\n return nil\n case <-subscriber.Done():\n return nil\n case ev := <-subscriber.Events():\n for _, eh := range ehs {\n if err = eh(ev); err != nil {\n break\n }\n }\n }\n }\n\t})\n\n\treturn eg.Wait()\n}\n\n// PublishChainTxEvents publishes transaction events using cmtclient. 
Waits on context shutdown signals to exit.\nfunc PublishChainTxEvents(ctx context.Context, client cmtclient.EventsClient, bus pubsub.Bus, mb module.BasicManager) (err error) {\n // Subscribe to transaction events\n txch, err := client.Subscribe(ctx, \"txevents\", \"tm.event='Tx'\", 100)\n if err != nil {\n return err\n }\n\n // Unsubscribe from transaction events on function exit\n defer func() {\n err = client.UnsubscribeAll(ctx, \"txevents\")\n }()\n\n // Use errgroup to manage concurrency\n g, ctx := errgroup.WithContext(ctx)\n\n // Publish transaction events in a goroutine\n g.Go(func() error {\n for {\n select {\n case <-ctx.Done():\n return nil\n case ed := <-txch:\n switch evt := ed.Data.(type) {\n case cmttypes.EventDataTx:\n if !evt.Result.IsOK() {\n continue\n }\n // range over events, parse them using the basic manager and\n // send them to the pubsub bus\n for _, abciEv := range evt.Result.Events {\n typedEvent, err := sdk.ParseTypedEvent(abciEv)\n if err != nil {\n return err\n }\n if err := bus.Publish(typedEvent); err != nil {\n bus.Close()\n return err\n }\n }\n }\n }\n }\n\t})\n\n // Exit on error or context cancellation\n return g.Wait()\n}\n```\n\n## References\n\n* [Publish Custom Events via a bus](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L19-L58)\n* [Consuming the events in `Client`](https://github.com/ovrclk/deploy/blob/bf6c633ab6c68f3026df59efd9982d6ca1bf0561/cmd/event-handlers.go#L57)" + }, + { + "number": 33, + "filename": "adr-033-protobuf-inter-module-comm.md", + "title": "ADR 033: Protobuf-based Inter-Module Communication", + "content": "# ADR 033: Protobuf-based Inter-Module Communication\n\n## Changelog\n\n* 2020-10-05: Initial Draft\n\n## Status\n\nProposed\n\n## Abstract\n\nThis ADR introduces a system for permissioned inter-module communication leveraging the protobuf `Query` and `Msg`\nservice definitions defined in [ADR 
021](./adr-021-protobuf-query-encoding.md) and\n[ADR 031](./adr-031-msg-service.md) which provides:\n\n* stable protobuf based module interfaces to potentially later replace the keeper paradigm\n* stronger inter-module object capabilities (OCAPs) guarantees\n* module accounts and sub-account authorization\n\n## Context\n\nIn the current Cosmos SDK documentation on the [Object-Capability Model](../docs/learn/advanced/10-ocap.md), it is stated that:\n\n> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules.\n\nThere is currently not a thriving ecosystem of Cosmos SDK modules. We hypothesize that this is in part due to:\n\n1. lack of a stable v1.0 Cosmos SDK to build modules off of. Module interfaces are changing, sometimes dramatically, from\npoint release to point release, often for good reasons, but this does not create a stable foundation to build on.\n2. lack of a properly implemented object capability or even object-oriented encapsulation system which makes refactors\nof module keeper interfaces inevitable because the current interfaces are poorly constrained.\n\n### `x/bank` Case Study\n\nCurrently the `x/bank` keeper gives pretty much unrestricted access to any module which references it. For instance, the\n`SetBalance` method allows the caller to set the balance of any account to anything, bypassing even proper tracking of supply.\n\nThere appears to have been some later attempts to implement some semblance of OCAPs using module-level minting, staking\nand burning permissions. These permissions allow a module to mint, burn or delegate tokens with reference to the module’s\nown account. These permissions are actually stored as a `[]string` array on the `ModuleAccount` type in state.\n\nHowever, these permissions don’t really do much. 
They control what modules can be referenced in the `MintCoins`,\n`BurnCoins` and `DelegateCoins***` methods, but for one there is no unique object capability token that controls access —\njust a simple string. So the `x/upgrade` module could mint tokens for the `x/staking` module simply by calling\n`MintCoins(\"staking\")`. Furthermore, all modules which have access to these keeper methods also have access to\n`SetBalance`, negating any other attempt at OCAPs and breaking even basic object-oriented encapsulation.\n\n## Decision\n\nBased on [ADR-021](./adr-021-protobuf-query-encoding.md) and [ADR-031](./adr-031-msg-service.md), we introduce the\nInter-Module Communication framework for secure module authorization and OCAPs.\nWhen implemented, this could also serve as an alternative to the existing paradigm of passing keepers between\nmodules. The approach outlined herein is intended to form the basis of a Cosmos SDK v1.0 that provides the necessary\nstability and encapsulation guarantees that allow a thriving module ecosystem to emerge.\n\nOf particular note — the decision is to _enable_ this functionality for modules to adopt at their own discretion.\nProposals to migrate existing modules to this new paradigm will have to be a separate conversation, potentially\naddressed as amendments to this ADR.\n\n### New \"Keeper\" Paradigm\n\nIn [ADR 021](./adr-021-protobuf-query-encoding.md), a mechanism for using protobuf service definitions to define queriers\nwas introduced and in [ADR 31](./adr-031-msg-service.md), a mechanism for using protobuf services to define `Msg`s was added.\nProtobuf service definitions generate two golang interfaces representing the client and server sides of a service plus\nsome helper code. 
Here is a minimal example for the bank `cosmos.bank.Msg/Send` message type:\n\n```go\npackage bank\n\ntype MsgClient interface {\n\tSend(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error)\n}\n\ntype MsgServer interface {\n\tSend(ctx context.Context, msg *MsgSend) (*MsgSendResponse, error)\n}\n```\n\n[ADR 021](./adr-021-protobuf-query-encoding.md) and [ADR 31](./adr-031-msg-service.md) specify how modules can implement the generated `QueryServer`\nand `MsgServer` interfaces as replacements for the legacy queriers and `Msg` handlers respectively.\n\nIn this ADR we explain how modules can make queries and send `Msg`s to other modules using the generated `QueryClient`\nand `MsgClient` interfaces and propose this mechanism as a replacement for the existing `Keeper` paradigm. To be clear,\nthis ADR does not necessitate the creation of new protobuf definitions or services. Rather, it leverages the same proto\nbased service interfaces already used by clients for inter-module communication.\n\nUsing this `QueryClient`/`MsgClient` approach has the following key benefits over exposing keepers to external modules:\n\n1. Protobuf types are checked for breaking changes using [buf](https://buf.build/docs/breaking-overview) and because of\nthe way protobuf is designed this will give us strong backwards compatibility guarantees while allowing for forward\nevolution.\n2. The separation between the client and server interfaces will allow us to insert permission checking code in between\nthe two which checks if one module is authorized to send the specified `Msg` to the other module providing a proper\nobject capability system (see below).\n3. The router for inter-module communication gives us a convenient place to handle rollback of transactions,\nenabling atomicity of operations ([currently a problem](https://github.com/cosmos/cosmos-sdk/issues/8030)). 
Any failure within a module-to-module call would result in a failure of the entire\ntransaction.\n\nThis mechanism has the added benefits of:\n\n* reducing boilerplate through code generation, and\n* allowing for modules in other languages either via a VM like CosmWasm or sub-processes using gRPC\n\n### Inter-module Communication\n\nTo use the `Client` generated by the protobuf compiler we need a `grpc.ClientConn` [interface](https://github.com/grpc/grpc-go/blob/v1.49.x/clientconn.go#L441-L450)\nimplementation. For this we introduce\na new type, `ModuleKey`, which implements the `grpc.ClientConn` interface. `ModuleKey` can be thought of as the \"private\nkey\" corresponding to a module account, where authentication is provided through use of a special `Invoker()` function,\ndescribed in more detail below.\n\nBlockchain users (external clients) use their account's private key to sign transactions containing `Msg`s where they are listed as signers (each\nmessage specifies required signers with `Msg.GetSigners`). The authentication check is performed by `AnteHandler`.\n\nHere, we extend this process by allowing modules to be identified in `Msg.GetSigners`. When a module wants to trigger the execution of a `Msg` in another module,\nits `ModuleKey` acts as the sender (through the `ClientConn` interface we describe below) and is set as a sole \"signer\". It is worth noting\nthat we don't use any cryptographic signature in this case.\nFor example, module `A` could use its `A.ModuleKey` to create a `MsgSend` object for a `/cosmos.bank.Msg/Send` transaction. `MsgSend` validation\nwill assure that the `from` account (`A.ModuleKey` in this case) is the signer.\n\nHere's an example of a hypothetical module `foo` interacting with `x/bank`:\n\n```go\npackage foo\n\n\ntype FooMsgServer struct {\n // ...\n\n moduleKey RootModuleKey\n bankQuery bank.QueryClient\n bankMsg bank.MsgClient\n}\n\nfunc NewFooMsgServer(moduleKey RootModuleKey, ...) 
FooMsgServer {\n // ...\n\n return FooMsgServer {\n // ...\n moduleKey: moduleKey,\n bankQuery: bank.NewQueryClient(moduleKey),\n bankMsg: bank.NewMsgClient(moduleKey),\n }\n}\n\nfunc (foo *FooMsgServer) Bar(ctx context.Context, req *MsgBarRequest) (*MsgBarResponse, error) {\n balance, err := foo.bankQuery.Balance(ctx, &bank.QueryBalanceRequest{Address: foo.moduleKey.Address(), Denom: \"foo\"})\n\n ...\n\n res, err := foo.bankMsg.Send(ctx, &bank.MsgSendRequest{FromAddress: foo.moduleKey.Address(), ...})\n\n ...\n}\n```\n\nThis design is also intended to be extensible to cover use cases of more fine grained permissioning like minting by\ndenom prefix being restricted to certain modules (as discussed in\n[#7459](https://github.com/cosmos/cosmos-sdk/pull/7459#discussion_r529545528)).\n\n### `ModuleKey`s and `ModuleID`s\n\nA `ModuleKey` can be thought of as a \"private key\" for a module account and a `ModuleID` can be thought of as the\ncorresponding \"public key\". As described in [ADR 028](./adr-028-public-key-addresses.md), modules can have both a root module account and any number of sub-accounts\nor derived accounts that can be used for different pools (ex. staking pools) or managed accounts (ex. group\naccounts). We can also think of module sub-accounts as similar to derived keys - there is a root key and then some\nderivation path. `ModuleID` is a simple struct which contains the module name and optional \"derivation\" path,\nand forms its address based on the `AddressHash` method from [ADR-028](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md):\n\n```go\ntype ModuleID struct {\n ModuleName string\n Path []byte\n}\n\nfunc (key ModuleID) Address() []byte {\n return AddressHash(key.ModuleName, key.Path)\n}\n```\n\nIn addition to being able to generate a `ModuleID` and address, a `ModuleKey` contains a special function called\n`Invoker` which is the key to safe inter-module access. 
The `Invoker` creates an `InvokeFn` closure which is used as an `Invoke` method in\nthe `grpc.ClientConn` interface and under the hood is able to route messages to the appropriate `Msg` and `Query` handlers\nperforming appropriate security checks on `Msg`s. This allows for even safer inter-module access than keepers, whose\nprivate member variables could be manipulated through reflection. Golang does not support reflection on a function\nclosure's captured variables and direct manipulation of memory would be needed for a truly malicious module to bypass\nthe `ModuleKey` security.\n\nThe two `ModuleKey` types are `RootModuleKey` and `DerivedModuleKey`:\n\n```go\ntype Invoker func(callInfo CallInfo) func(ctx context.Context, request, response interface{}, opts ...interface{}) error\n\ntype CallInfo struct {\n Method string\n Caller ModuleID\n}\n\ntype RootModuleKey struct {\n moduleName string\n invoker Invoker\n}\n\nfunc (rm RootModuleKey) Derive(path []byte) DerivedModuleKey { /* ... */ }\n\ntype DerivedModuleKey struct {\n moduleName string\n path []byte\n invoker Invoker\n}\n```\n\nA module can get access to a `DerivedModuleKey` using the `Derive(path []byte)` method on `RootModuleKey` and then\nwould use this key to authenticate `Msg`s from a sub-account. Ex:\n\n```go\npackage foo\n\nfunc (fooMsgServer *MsgServer) Bar(ctx context.Context, req *MsgBar) (*MsgBarResponse, error) {\n derivedKey := fooMsgServer.moduleKey.Derive(req.SomePath)\n bankMsgClient := bank.NewMsgClient(derivedKey)\n res, err := bankMsgClient.Send(ctx, &bank.MsgSend{FromAddress: derivedKey.Address(), ...})\n ...\n}\n```\n\nIn this way, a module can gain permissioned access to a root account and any number of sub-accounts and send\nauthenticated `Msg`s from these accounts. 
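The root/sub-account relationship can be sketched with a runnable, stdlib-only derivation. This is a hedged illustration: the SHA-256 scheme below is a simplified stand-in for ADR-028's `AddressHash`, shown only to demonstrate that the root account and each derived path yield distinct, deterministic addresses.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// ModuleID mirrors the struct sketched in this ADR: a module name plus an
// optional derivation path identifying a sub-account.
type ModuleID struct {
	ModuleName string
	Path       []byte
}

// Address derives a deterministic 20-byte account address. This is a
// simplified stand-in for ADR-028's AddressHash, not the real algorithm.
func (id ModuleID) Address() []byte {
	h := sha256.New()
	h.Write([]byte(id.ModuleName))
	if len(id.Path) > 0 {
		h.Write([]byte{0}) // separator so ("ab", "c") and ("a", "bc") differ
		h.Write(id.Path)
	}
	return h.Sum(nil)[:20]
}

func main() {
	root := ModuleID{ModuleName: "foo"}
	sub := ModuleID{ModuleName: "foo", Path: []byte("pool-1")}
	fmt.Println("root:", hex.EncodeToString(root.Address()))
	fmt.Println("sub: ", hex.EncodeToString(sub.Address()))
}
```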
The `Invoker` `callInfo.Caller` parameter is used under the hood to\ndistinguish between different module accounts, but either way the function returned by `Invoker` only allows `Msg`s\nfrom either the root or a derived module account to pass through.\n\nNote that `Invoker` itself returns a function closure based on the `CallInfo` passed in. This will allow client implementations\nin the future that cache the invoke function for each method type avoiding the overhead of hash table lookup.\nThis would reduce the performance overhead of this inter-module communication method to the bare minimum required for\nchecking permissions.\n\nTo re-iterate, the closure only allows access to authorized calls. There is no access to anything else regardless of any\nname impersonation.\n\nBelow is a rough sketch of the implementation of `grpc.ClientConn.Invoke` for `RootModuleKey`:\n\n```go\nfunc (key RootModuleKey) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...grpc.CallOption) error {\n f := key.invoker(CallInfo {Method: method, Caller: ModuleID {ModuleName: key.moduleName}})\n return f(ctx, args, reply)\n}\n```\n\n### `AppModule` Wiring and Requirements\n\nIn [ADR 031](./adr-031-msg-service.md), the `AppModule.RegisterService(Configurator)` method was introduced. To support\ninter-module communication, we extend the `Configurator` interface to pass in the `ModuleKey` and to allow modules to\nspecify their dependencies on other modules using `RequireServer()`:\n\n```go\ntype Configurator interface {\n MsgServer() grpc.Server\n QueryServer() grpc.Server\n\n ModuleKey() ModuleKey\n RequireServer(msgServer interface{})\n}\n```\n\nThe `ModuleKey` is passed to modules in the `RegisterService` method itself so that `RegisterServices` serves as a single\nentry point for configuring module services. This is intended to also have the side-effect of greatly reducing boilerplate in\n`app.go`. 
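A hedged sketch of how such dependency declarations could be checked at startup follows. The `configurator` type, `RegisterServer`, and `ResolveDependencies` below are invented for illustration; only `RequireServer` and the typed-nil convention, e.g. `(*bank.MsgServer)(nil)`, come from this ADR.

```go
package main

import (
	"fmt"
	"reflect"
)

// configurator records which server interfaces have implementations and
// which interfaces some module declared a dependency on via RequireServer.
type configurator struct {
	registered []reflect.Type
	required   []reflect.Type
}

// ifaceType turns a typed nil pointer like (*BankMsgServer)(nil)
// into the interface type it points at.
func ifaceType(ifacePtr interface{}) reflect.Type {
	return reflect.TypeOf(ifacePtr).Elem()
}

func (c *configurator) RegisterServer(ifacePtr interface{}) {
	c.registered = append(c.registered, ifaceType(ifacePtr))
}

func (c *configurator) RequireServer(ifacePtr interface{}) {
	c.required = append(c.required, ifaceType(ifacePtr))
}

// ResolveDependencies fails if any required interface has no registered
// implementation, mirroring the startup check described in the ADR.
func (c *configurator) ResolveDependencies() error {
	have := map[reflect.Type]bool{}
	for _, t := range c.registered {
		have[t] = true
	}
	for _, t := range c.required {
		if !have[t] {
			return fmt.Errorf("unresolved module dependency: %v", t)
		}
	}
	return nil
}

// Illustrative server interfaces standing in for generated code.
type BankMsgServer interface{ Send() }
type BankQueryServer interface{ Balance() }

func main() {
	cfg := &configurator{}
	cfg.RegisterServer((*BankMsgServer)(nil))  // x/bank registers its Msg server
	cfg.RequireServer((*BankMsgServer)(nil))   // foo's declared dependency: satisfied
	cfg.RequireServer((*BankQueryServer)(nil)) // unmet: no query server registered
	fmt.Println(cfg.ResolveDependencies())     // prints an unresolved-dependency error
}
```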
For now, `ModuleKey`s will be created based on `AppModuleBasic.Name()`, but a more flexible system may be\nintroduced in the future. The `ModuleManager` will handle creation of module accounts behind the scenes.\n\nBecause modules do not get direct access to each other anymore, modules may have unfulfilled dependencies. To make sure\nthat module dependencies are resolved at startup, the `Configurator.RequireServer` method should be added. The `ModuleManager`\nwill make sure that all dependencies declared with `RequireServer` can be resolved before the app starts. An example\nmodule `foo` could declare its dependency on `x/bank` like this:\n\n```go\npackage foo\n\nfunc (am AppModule) RegisterServices(cfg Configurator) {\n cfg.RequireServer((*bank.QueryServer)(nil))\n cfg.RequireServer((*bank.MsgServer)(nil))\n}\n```\n\n### Security Considerations\n\nIn addition to checking for `ModuleKey` permissions, a few additional security precautions will need to be taken by\nthe underlying router infrastructure.\n\n#### Recursion and Re-entry\n\nRecursive or re-entrant method invocations pose a potential security threat. This can be a problem if Module A\ncalls Module B and Module B calls module A again in the same call.\n\nOne basic way for the router system to deal with this is to maintain a call stack which prevents a module from\nbeing referenced more than once in the call stack so that there is no re-entry. A `map[string]interface{}` table\nin the router could be used to perform this security check.\n\n#### Queries\n\nQueries in Cosmos SDK are generally un-permissioned so allowing one module to query another module should not pose\nany major security threats assuming basic precautions are taken. The basic precaution that the router system will\nneed to take is making sure that the `sdk.Context` passed to query methods does not allow writing to the store. 
This\ncan be done for now with a `CacheMultiStore` as is currently done for `BaseApp` queries.\n\n### Internal Methods\n\nIn many cases, we may wish for modules to call methods on other modules which are not exposed to clients at all. For this\npurpose, we add the `InternalServer` method to `Configurator`:\n\n```go\ntype Configurator interface {\n MsgServer() grpc.Server\n QueryServer() grpc.Server\n InternalServer() grpc.Server\n}\n```\n\nAs an example, x/slashing's Slash must call x/staking's Slash, but we don't want to expose x/staking's Slash to end users\nand clients.\n\nInternal protobuf services will be defined in a corresponding `internal.proto` file in the given module's\nproto package.\n\nServices registered against `InternalServer` will be callable from other modules but not by external clients.\n\nAn alternative solution to internal-only methods could involve hooks / plugins as discussed [here](https://github.com/cosmos/cosmos-sdk/pull/7459#issuecomment-733807753).\nA more detailed evaluation of a hooks / plugin system will be addressed later in follow-ups to this ADR or as a separate\nADR.\n\n### Authorization\n\nBy default, the inter-module router requires that messages are sent by the first signer returned by `GetSigners`. The\ninter-module router should also accept authorization middleware such as that provided by [ADR 030](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md).\nThis middleware will allow accounts to authorize specific module accounts to perform actions on their behalf.\nAuthorization middleware should take into account the need to grant certain modules effectively \"admin\" privileges to\nother modules. This will be addressed in separate ADRs or updates to this ADR.\n\n### Future Work\n\nOther future improvements may include:\n\n* custom code generation that:\n * simplifies interfaces (ex. 
generates code with `sdk.Context` instead of `context.Context`)\n * optimizes inter-module calls - for instance caching resolved methods after first invocation\n* combining `StoreKey`s and `ModuleKey`s into a single interface so that modules have a single OCAPs handle\n* code generation which makes inter-module communication more performant\n* decoupling `ModuleKey` creation from `AppModuleBasic.Name()` so that apps can override root module account names\n* inter-module hooks and plugins\n\n## Alternatives\n\n### MsgServices vs `x/capability`\n\nThe `x/capability` module does provide a proper object-capability implementation that can be used by any module in the\nCosmos SDK and could even be used for inter-module OCAPs as described in [\\#5931](https://github.com/cosmos/cosmos-sdk/issues/5931).\n\nThe advantages of the approach described in this ADR are mostly around how it integrates with other parts of the Cosmos SDK,\nspecifically:\n\n* protobuf so that:\n * code generation of interfaces can be leveraged for a better dev UX\n * module interfaces are versioned and checked for breakage using [buf](https://docs.buf.build/breaking-overview)\n* sub-module accounts as per ADR 028\n* the general `Msg` passing paradigm and the way signers are specified by `GetSigners`\n\nAlso, this is a complete replacement for keepers and could be applied to _all_ inter-module communication whereas the\n`x/capability` approach in #5931 would need to be applied method by method.\n\n## Consequences\n\n### Backwards Compatibility\n\nThis ADR is intended to provide a pathway to a scenario where there is greater long term compatibility between modules.\nIn the short-term, this will likely result in breaking certain `Keeper` interfaces which are too permissive and/or\nreplacing `Keeper` interfaces altogether.\n\n### Positive\n\n* an alternative to keepers which can more easily lead to stable inter-module interfaces\n* proper inter-module OCAPs\n* improved module developer DevX, as commented 
on by several participants on\n [Architecture Review Call, Dec 3](https://hackmd.io/E0wxxOvRQ5qVmTf6N_k84Q)\n* lays the groundwork for what can be a greatly simplified `app.go`\n* router can be setup to enforce atomic transactions for module-to-module calls\n\n### Negative\n\n* modules which adopt this will need significant refactoring\n\n### Neutral\n\n## Test Cases [optional]\n\n## References\n\n* [ADR 021](./adr-021-protobuf-query-encoding.md)\n* [ADR 031](./adr-031-msg-service.md)\n* [ADR 028](./adr-028-public-key-addresses.md)\n* [ADR 030 draft](https://github.com/cosmos/cosmos-sdk/pull/7105)\n* [Object-Capability Model](https://docs.network.com/main/core/ocap)" + }, + { + "number": 34, + "filename": "adr-034-account-rekeying.md", + "title": "ADR 034: Account Rekeying", + "content": "# ADR 034: Account Rekeying\n\n## Changelog\n\n* 30-09-2020: Initial Draft\n\n## Status\n\nPROPOSED\n\n## Abstract\n\nAccount rekeying is a process that allows an account to replace its authentication pubkey with a new one.\n\n## Context\n\nCurrently, in the Cosmos SDK, the address of an auth `BaseAccount` is based on the hash of the public key. Once an account is created, the public key for the account is set in stone, and cannot be changed. This can be a problem for users, as key rotation is a useful security practice, but is not possible currently. Furthermore, as multisigs are a type of pubkey, once a multisig for an account is set, it cannot be updated. This is problematic, as multisigs are often used by organizations or companies, who may need to change their set of multisig signers for internal reasons.\n\nTransferring all the assets of an account to a new account with the updated pubkey is not sufficient, because some \"engagements\" of an account are not easily transferable. For example, in staking, to transfer bonded Atoms, an account would have to unbond all delegations and wait the three-week unbonding period. 
Even more significantly, for validator operators, ownership over a validator is not transferable at all, meaning that the operator key for a validator can never be updated, leading to poor operational security for validators.\n\n## Decision\n\nWe propose the addition of a new feature to `x/auth` that allows accounts to update the public key associated with their account, while keeping the address the same.\n\nThis is possible because the Cosmos SDK `BaseAccount` stores the public key for an account in state, instead of making the assumption that the public key is included in the transaction (whether explicitly or implicitly through the signature) as in other blockchains such as Bitcoin and Ethereum. Because the public key is stored on chain, it is okay for the public key to not hash to the address of an account, as the address is not pertinent to the signature checking process.\n\nTo build this system, we design a new Msg type as follows:\n\n```protobuf\nservice Msg {\n rpc ChangePubKey(MsgChangePubKey) returns (MsgChangePubKeyResponse);\n}\n\nmessage MsgChangePubKey {\n string address = 1;\n google.protobuf.Any pub_key = 2;\n}\n\nmessage MsgChangePubKeyResponse {}\n```\n\nThe MsgChangePubKey transaction needs to be signed by the existing pubkey in state.\n\nOnce approved, the handler for this message type, which takes in the AccountKeeper, will update the in-state pubkey for the account and replace it with the pubkey from the Msg.\n\nAn account that has had its pubkey changed cannot be automatically pruned from state. This is because if pruned, the original pubkey of the account would be needed to recreate the same address, but the owner of the address may not have the original pubkey anymore. Currently, we do not automatically prune any accounts anyways, but we would like to keep this option open down the road (this is the purpose of account numbers). 
To resolve this, we charge an additional gas fee for this operation to compensate for this externality (this bound gas amount is configured as a parameter `PubKeyChangeCost`). The bonus gas is charged inside the handler, using the `ConsumeGas` function. Furthermore, in the future, we can allow accounts that have rekeyed to manually prune themselves using a new Msg type such as `MsgDeleteAccount`. Manually pruning accounts can give a gas refund as an incentive for performing the action.\n\n```go\n\tamount := ak.GetParams(ctx).PubKeyChangeCost\n\tctx.GasMeter().ConsumeGas(amount, \"pubkey change fee\")\n```\n\nEvery time a key for an address is changed, we will store a log of this change in the state of the chain, thus creating a stack of all previous keys for an address and the time intervals for which they were active. This allows dapps and clients to easily query past keys for an account, which may be useful for features such as verifying timestamped off-chain signed messages.\n\n## Consequences\n\n### Positive\n\n* Will allow users and validator operators to employ better operational security practices with key rotation.\n* Will allow organizations or groups to easily change and add/remove multisig signers.\n\n### Negative\n\nBreaks the current assumed relationship between address and pubkey as H(pubkey) = address. This has a couple of consequences.\n\n* This makes wallets that support this feature more complicated. For example, if an address on-chain was updated, the corresponding key in the CLI wallet also needs to be updated.\n* Cannot automatically prune accounts with 0 balance that have had their pubkey changed.\n\n### Neutral\n\n* While this is intended to allow the owner of an account to update to a new pubkey they own, it could technically also be used to transfer ownership of an account to a new owner. For example, this could be used to sell a staked position without unbonding or an account that has vesting tokens. 
However, the friction of this is very high as this would essentially have to be done as a very specific OTC trade. Furthermore, additional constraints could be added to prevent accounts with vesting tokens from using this feature.\n* Will require that PubKeys for an account are included in the genesis exports.\n\n## References\n\n* https://www.algorand.com/resources/blog/announcing-rekeying"
  },
  {
    "number": 35,
    "filename": "adr-035-rosetta-api-support.md",
    "title": "ADR 035: Rosetta API Support",
    "content": "# ADR 035: Rosetta API Support\n\n## Authors\n\n* Jonathan Gimeno (@jgimeno)\n* David Grierson (@senormonito)\n* Alessio Treglia (@alessio)\n* Frojdy Dymylja (@fdymylja)\n\n## Changelog\n\n* 2021-05-12: the external library [cosmos-rosetta-gateway](https://github.com/tendermint/cosmos-rosetta-gateway) has been moved within the Cosmos SDK.\n\n## Context\n\n[Rosetta API](https://www.rosetta-api.org/) is an open-source specification and set of tools developed by Coinbase to\nstandardise blockchain interactions.\n\nThrough the use of a standard API for integrating blockchain applications it will\n\n* Be easier for a user to interact with a given blockchain\n* Allow exchanges to integrate new blockchains quickly and easily\n* Enable application developers to build cross-blockchain applications such as block explorers, wallets and dApps at\n considerably lower cost and effort.\n\n## Decision\n\nIt is clear that adding Rosetta API support to the Cosmos SDK will bring value to all the developers and\nCosmos SDK based chains in the ecosystem. How it is implemented is key.\n\nThe driving principles of the proposed design are:\n\n1. **Extensibility:** it must be as riskless and painless as possible for application developers to set-up network\n configurations to expose Rosetta API-compliant services.\n2. **Long term support:** This proposal aims to provide support for all the Cosmos SDK release series.\n3. 
**Cost-efficiency:** Backporting changes to Rosetta API specifications from `master` to the various stable\n branches of Cosmos SDK is a cost that needs to be reduced.\n\nWe will deliver on these principles as follows:\n\n1. There will be a package `rosetta/lib`\n for the implementation of the core Rosetta API features, particularly:\n a. The types and interfaces (`Client`, `OfflineClient`...), which separate design from implementation detail.\n b. The `Server` functionality, as this is independent of the Cosmos SDK version.\n c. The `Online/OfflineNetwork`, which is not exported, and implements the rosetta API using the `Client` interface to query the node, build tx and so on.\n d. The `errors` package to extend rosetta errors.\n2. Due to differences between the Cosmos release series, each series will have its own specific implementation of the `Client` interface.\n3. There will be two options for starting an API service in applications:\n a. API shares the application process\n b. API-specific process.\n\n## Architecture\n\n### The External Repo\n\nThis section will describe the proposed external library, including the service implementation, plus the defined types and interfaces.\n\n#### Server\n\n`Server` is a simple `struct` that is started and listens to the port specified in the settings. 
This is meant to be used across all the Cosmos SDK versions that are actively supported.\n\nThe constructor follows:\n\n`func NewServer(settings Settings) (Server, error)`\n\n`Settings`, which are used to construct a new server, are the following:\n\n```go\n// Settings define the rosetta server settings\ntype Settings struct {\n\t// Network contains the information regarding the network\n\tNetwork *types.NetworkIdentifier\n\t// Client is the online API handler\n\tClient crgtypes.Client\n\t// Listen is the address the handler will listen at\n\tListen string\n\t// Offline defines if the rosetta service should be exposed in offline mode\n\tOffline bool\n\t// Retries is the number of readiness checks that will be attempted when instantiating the handler\n\t// valid only for online API\n\tRetries int\n\t// RetryWait is the time that will be waited between retries\n\tRetryWait time.Duration\n}\n```\n\n#### Types\n\nPackage types uses a mixture of rosetta types and custom defined type wrappers, that the client must parse and return while executing operations.\n\n##### Interfaces\n\nEvery SDK version uses a different format to connect (rpc, gRPC, etc), query and build transactions, we have abstracted this in what is the `Client` interface.\nThe client uses rosetta types, whilst the `Online/OfflineNetwork` takes care of returning correctly parsed rosetta responses and errors.\n\nEach Cosmos SDK release series will have their own `Client` implementations.\nDevelopers can implement their own custom `Client`s as required.\n\n```go\n// Client defines the API the client implementation should provide.\ntype Client interface {\n\t// Needed if the client needs to perform some action before connecting.\n\tBootstrap() error\n\t// Ready checks if the servicer constraints for queries are satisfied\n\t// for example the node might still not be ready, it's useful in process\n\t// when the rosetta instance might come up before the node itself\n\t// the servicer must return nil if the node 
is ready\n\tReady() error\n\n\t// Data API\n\n\t// Balances fetches the balance of the given address\n\t// if height is not nil, then the balance will be displayed\n\t// at the provided height, otherwise last block balance will be returned\n\tBalances(ctx context.Context, addr string, height *int64) ([]*types.Amount, error)\n\t// BlockByHashAlt gets a block and its transaction at the provided height\n\tBlockByHash(ctx context.Context, hash string) (BlockResponse, error)\n\t// BlockByHeightAlt gets a block given its height, if height is nil then last block is returned\n\tBlockByHeight(ctx context.Context, height *int64) (BlockResponse, error)\n\t// BlockTransactionsByHash gets the block, parent block and transactions\n\t// given the block hash.\n\tBlockTransactionsByHash(ctx context.Context, hash string) (BlockTransactionsResponse, error)\n\t// BlockTransactionsByHeight gets the block, parent block and transactions\n\t// given the block height.\n\tBlockTransactionsByHeight(ctx context.Context, height *int64) (BlockTransactionsResponse, error)\n\t// GetTx gets a transaction given its hash\n\tGetTx(ctx context.Context, hash string) (*types.Transaction, error)\n\t// GetUnconfirmedTx gets an unconfirmed Tx given its hash\n\t// NOTE(fdymylja): NOT IMPLEMENTED YET!\n\tGetUnconfirmedTx(ctx context.Context, hash string) (*types.Transaction, error)\n\t// Mempool returns the list of the current non confirmed transactions\n\tMempool(ctx context.Context) ([]*types.TransactionIdentifier, error)\n\t// Peers gets the peers currently connected to the node\n\tPeers(ctx context.Context) ([]*types.Peer, error)\n\t// Status returns the node status, such as sync data, version etc\n\tStatus(ctx context.Context) (*types.SyncStatus, error)\n\n\t// Construction API\n\n\t// PostTx posts txBytes to the node and returns the transaction identifier plus metadata related\n\t// to the transaction itself.\n\tPostTx(txBytes []byte) (res *types.TransactionIdentifier, meta map[string]interface{}, err 
error)\n\t// ConstructionMetadataFromOptions\n\tConstructionMetadataFromOptions(ctx context.Context, options map[string]interface{}) (meta map[string]interface{}, err error)\n\tOfflineClient\n}\n\n// OfflineClient defines the functionalities supported without having access to the node\ntype OfflineClient interface {\n\tNetworkInformationProvider\n\t// SignedTx returns the signed transaction given the tx bytes (msgs) plus the signatures\n\tSignedTx(ctx context.Context, txBytes []byte, sigs []*types.Signature) (signedTxBytes []byte, err error)\n\t// TxOperationsAndSignersAccountIdentifiers returns the operations related to a transaction and the account\n\t// identifiers if the transaction is signed\n\tTxOperationsAndSignersAccountIdentifiers(signed bool, hexBytes []byte) (ops []*types.Operation, signers []*types.AccountIdentifier, err error)\n\t// ConstructionPayload returns the construction payload given the request\n\tConstructionPayload(ctx context.Context, req *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error)\n\t// PreprocessOperationsToOptions returns the options given the preprocess operations\n\tPreprocessOperationsToOptions(ctx context.Context, req *types.ConstructionPreprocessRequest) (options map[string]interface{}, err error)\n\t// AccountIdentifierFromPublicKey returns the account identifier given the public key\n\tAccountIdentifierFromPublicKey(pubKey *types.PublicKey) (*types.AccountIdentifier, error)\n}\n```\n\n### 2. 
Cosmos SDK Implementation\n\nThe Cosmos SDK implementation, based on version, takes care of satisfying the `Client` interface.\nIn Stargate, Launchpad and 0.37, we have introduced the concept of rosetta.Msg, this message is not in the shared repository as the sdk.Msg type differs between Cosmos SDK versions.\n\nThe rosetta.Msg interface follows:\n\n```go\n// Msg represents a cosmos-sdk message that can be converted from and to a rosetta operation.\ntype Msg interface {\n\tsdk.Msg\n\tToOperations(withStatus, hasError bool) []*types.Operation\n\tFromOperations(ops []*types.Operation) (sdk.Msg, error)\n}\n```\n\nHence developers who want to extend the rosetta set of supported operations just need to extend their module's sdk.Msgs with the `ToOperations` and `FromOperations` methods.\n\n### 3. API service invocation\n\nAs stated at the start, application developers will have two methods for invocation of the Rosetta API service:\n\n1. Shared process for both application and API\n2. Standalone API service\n\n#### Shared Process (Only Stargate)\n\nRosetta API service could run within the same execution process as the application. This would be enabled via app.toml settings, and if gRPC is not enabled the rosetta instance would be spun in offline mode (tx building capabilities only).\n\n#### Separate API service\n\nClient application developers can write a new command to launch a Rosetta API server as a separate process too, using the rosetta command contained in the `/server/rosetta` package. Construction of the command depends on Cosmos SDK version. 
Examples can be found inside `simd` for stargate, and `contrib/rosetta/simapp` for other release series.\n\n## Status\n\nProposed\n\n## Consequences\n\n### Positive\n\n* Out-of-the-box Rosetta API support within Cosmos SDK.\n* Blockchain interface standardisation\n\n## References\n\n* https://www.rosetta-api.org/" + }, + { + "number": 36, + "filename": "adr-036-arbitrary-signature.md", + "title": "ADR 036: Arbitrary Message Signature Specification", + "content": "# ADR 036: Arbitrary Message Signature Specification\n\n## Changelog\n\n* 28/10/2020 - Initial draft\n\n## Authors\n\n* Antoine Herzog (@antoineherzog)\n* Zaki Manian (@zmanian)\n* Aleksandr Bezobchuk (alexanderbez) [1]\n* Frojdi Dymylja (@fdymylja)\n\n## Status\n\nDraft\n\n## Abstract\n\nCurrently, in the Cosmos SDK, there is no convention to sign arbitrary messages like in Ethereum. We propose with this specification, for Cosmos SDK ecosystem, a way to sign and validate off-chain arbitrary messages.\n\nThis specification serves the purpose of covering every use case; this means that Cosmos SDK application developers decide how to serialize and represent `Data` to users.\n\n## Context\n\nHaving the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits such as saving on computational costs and reducing transaction throughput and overhead. Within the context of the Cosmos, some of the major applications of signing such data include, but is not limited to, providing a cryptographic secure and verifiable means of proving validator identity and possibly associating it with some other framework or organization. 
In addition, there is value in being able to sign Cosmos messages with a Ledger or similar HSM device.\n\nFurther context and use cases can be found in the reference links.\n\n## Decision\n\nThe aim is to be able to sign arbitrary messages, even using Ledger or similar HSM devices.\n\nAs a result, signed messages should look roughly like Cosmos SDK messages but **must not** be a valid on-chain transaction. `chain-id`, `account_number` and `sequence` can all be assigned invalid values.\n\nCosmos SDK 0.40 also introduces the concept of “auth_info”, which can specify SIGN_MODES.\n\nA spec should include an `auth_info` that supports SIGN_MODE_DIRECT and SIGN_MODE_LEGACY_AMINO.\n\nTo create the `offchain` proto definitions, we extend the auth module with an `offchain` package to offer functionalities to verify and sign offline messages.\n\nAn offchain transaction follows these rules:\n\n* the memo must be empty\n* nonce, sequence number must be equal to 0\n* chain-id must be equal to “”\n* fee gas must be equal to 0\n* fee amount must be an empty array\n\nVerification of an offchain transaction follows the same rules as an onchain one, except for the spec differences highlighted above.\n\nThe first message added to the `offchain` package is `MsgSignData`.\n\n`MsgSignData` allows developers to sign arbitrary bytes validatable offchain only. `Signer` is the account address of the signer. `Data` is arbitrary bytes which can represent `text`, `files`, `object`s. 
It is the application developers' decision how `Data` should be serialized and deserialized, and which object it represents in their context.\n\nProto definition:\n\n```protobuf\n// MsgSignData defines an arbitrary, general-purpose, off-chain message\nmessage MsgSignData {\n // Signer is the sdk.AccAddress of the message signer\n bytes Signer = 1 [(gogoproto.jsontag) = \"signer\", (gogoproto.casttype) = \"github.com/cosmos/cosmos-sdk/types.AccAddress\"];\n // Data represents the raw bytes of the content that is signed (text, json, etc)\n bytes Data = 2 [(gogoproto.jsontag) = \"data\"];\n}\n```\n\nSigned MsgSignData json example:\n\n```json\n{\n \"type\": \"cosmos-sdk/StdTx\",\n \"value\": {\n \"msg\": [\n {\n \"type\": \"sign/MsgSignData\",\n \"value\": {\n \"signer\": \"cosmos1hftz5ugqmpg9243xeegsqqav62f8hnywsjr4xr\",\n \"data\": \"cmFuZG9t\"\n }\n }\n ],\n \"fee\": {\n \"amount\": [],\n \"gas\": \"0\"\n },\n \"signatures\": [\n {\n \"pub_key\": {\n \"type\": \"tendermint/PubKeySecp256k1\",\n \"value\": \"AqnDSiRoFmTPfq97xxEb2VkQ/Hm28cPsqsZm9jEVsYK9\"\n },\n \"signature\": \"8y8i34qJakkjse9pOD2De+dnlc4KvFgh0wQpes4eydN66D9kv7cmCEouRrkka9tlW9cAkIL52ErB+6ye7X5aEg==\"\n }\n ],\n \"memo\": \"\"\n }\n}\n```\n\n## Consequences\n\nThere is now a specification for how messages that are not meant to be broadcast to a live chain should be formed.\n\n### Backwards Compatibility\n\nBackwards compatibility is maintained as this is a new message spec definition.\n\n### Positive\n\n* A common format that can be used by multiple applications to sign and verify off-chain messages.\n* The specification is primitive, which means it can cover every use case without limiting what is possible to fit inside it.\n* It gives room for other off-chain messages specifications that aim to target more specific and common use cases 
such as off-chain-based authN/authZ layers [2].\n\n### Negative\n\n* The current proposal requires a fixed relationship between an account address and a public key.\n* Doesn't work with multisig accounts.\n\n## Further discussion\n\n* Regarding security in `MsgSignData`, the developer using `MsgSignData` is in charge of making the content contained in `Data` non-replayable when, and if, needed.\n* The offchain package will be further extended with extra messages that target specific use cases such as, but not limited to, authentication in applications, payment channels, L2 solutions in general.\n\n## References\n\n1. https://github.com/cosmos/ics/pull/33\n2. https://github.com/cosmos/cosmos-sdk/pull/7727#discussion_r515668204\n3. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-722478477\n4. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-721062923" + }, + { + "number": 37, + "filename": "adr-037-gov-split-vote.md", + "title": "ADR 037: Governance split votes", + "content": "# ADR 037: Governance split votes\n\n## Changelog\n\n* 2020/10/28: Initial draft\n\n## Status\n\nAccepted\n\n## Abstract\n\nThis ADR defines a modification to the governance module that would allow a staker to split their votes into several voting options. For example, it could use 70% of its voting power to vote Yes and 30% of its voting power to vote No.\n\n## Context\n\nCurrently, an address can cast a vote with only one option (Yes/No/Abstain/NoWithVeto) and use their full voting power behind that choice.\n\nHowever, oftentimes the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, and so it makes sense to allow them to split their voting power. Another example use case is exchanges. Many centralized exchanges often stake a portion of their users' tokens in their custody. 
Currently, it is not possible for them to do \"passthrough voting\" and give their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.\n\n## Decision\n\nWe modify the vote structs to be\n\n```go\ntype WeightedVoteOption struct {\n Option string\n Weight sdk.Dec\n}\n\ntype Vote struct {\n ProposalID int64\n Voter sdk.Address\n Options []WeightedVoteOption\n}\n```\n\nAnd for backwards compatibility, we introduce `MsgVoteWeighted` while keeping `MsgVote`.\n\n```go\ntype MsgVote struct {\n ProposalID int64\n Voter sdk.Address\n Option Option\n}\n\ntype MsgVoteWeighted struct {\n ProposalID int64\n Voter sdk.Address\n Options []WeightedVoteOption\n}\n```\n\nThe `ValidateBasic` of a `MsgVoteWeighted` struct would require that\n\n1. The sum of all the rates is equal to 1.0\n2. No Option is repeated\n\nThe governance tally function will iterate over all the options in a vote and add to the tally the result of the voter's voting power * the rate for that option.\n\n```go\ntally() {\n results := map[types.VoteOption]sdk.Dec\n\n for _, vote := range votes {\n for _, weightedOption := range vote.Options {\n results[weightedOption.Option] += getVotingPower(vote.voter) * weightedOption.Weight\n }\n }\n}\n```\n\nThe CLI command for creating a multi-option vote would be as follows:\n\n```shell\nsimd tx gov vote 1 \"yes=0.6,no=0.3,abstain=0.05,no_with_veto=0.05\" --from mykey\n```\n\nTo create a single-option vote a user can do either\n\n```shell\nsimd tx gov vote 1 \"yes=1\" --from mykey\n```\n\nor\n\n```shell\nsimd tx gov vote 1 yes --from mykey\n```\n\nto maintain backwards compatibility.\n\n## Consequences\n\n### Backwards Compatibility\n\n* Previous VoteMsg types will remain the same and so clients will not have to update their procedure unless they want to support the WeightedVoteMsg feature.\n* When querying a Vote struct from state, its 
structure will be different, and so clients wanting to display all voters and their respective votes will have to handle the new format and the fact that a single voter can have split votes.\n* The result of querying the tally function should have the same API for clients.\n\n### Positive\n\n* Can make the voting process more accurate for addresses representing multiple stakeholders, often some of the largest addresses.\n\n### Negative\n\n* Is more complex than simple voting, and so may be harder to explain to users. However, this is mostly mitigated because the feature is opt-in.\n\n### Neutral\n\n* Relatively minor change to governance tally function." + }, + { + "number": 38, + "filename": "adr-038-state-listening.md", + "title": "ADR 038: KVStore state listening", + "content": "# ADR 038: KVStore state listening\n\n## Changelog\n\n* 11/23/2020: Initial draft\n* 10/06/2022: Introduce plugin system based on hashicorp/go-plugin\n* 10/14/2022:\n * Add `ListenCommit`, flatten the state writes in a block to a single batch.\n * Remove listeners from cache stores, should only listen to `rootmulti.Store`.\n * Remove `HaltAppOnDeliveryError()`, the errors are propagated by default, the implementations should return nil if don't want to propagate errors.\n* 26/05/2023: Update with ABCI 2.0\n\n## Status\n\nProposed\n\n## Abstract\n\nThis ADR defines a set of changes to enable listening to state changes of individual KVStores and exposing these data to consumers.\n\n## Context\n\nCurrently, KVStore data can be remotely accessed through [Queries](https://github.com/cosmos/cosmos-sdk/blob/master/docs/building-modules/messages-and-queries.md#queries)\nwhich proceed either through Tendermint and the ABCI, or through the gRPC server.\nIn addition to these request/response queries, it would be beneficial to have a means of listening to state changes as they occur in real time.\n\n## Decision\n\nWe will modify the `CommitMultiStore` interface and its concrete (`rootmulti`) 
implementations and introduce a new `listenkv.Store` to allow listening to state changes in underlying KVStores. We don't need to listen to cache stores, because we can't be sure that the writes will be committed eventually, and the writes are duplicated in `rootmulti.Store` eventually, so we should only listen to `rootmulti.Store`.\nWe will introduce a plugin system for configuring and running streaming services that write these state changes and their surrounding ABCI message context to different destinations.\n\n### Listening\n\nIn a new file, `store/types/listening.go`, we will create a `MemoryListener` struct for streaming out protobuf encoded KV pairs state changes from a KVStore.\nThe `MemoryListener` will be used internally by the concrete `rootmulti` implementation to collect state changes from KVStores.\n\n```go\n// MemoryListener listens to the state writes and accumulate the records in memory.\ntype MemoryListener struct {\n\tstateCache []StoreKVPair\n}\n\n// NewMemoryListener creates a listener that accumulate the state writes in memory.\nfunc NewMemoryListener() *MemoryListener {\n\treturn &MemoryListener{}\n}\n\n// OnWrite writes state change events to the internal cache\nfunc (fl *MemoryListener) OnWrite(storeKey StoreKey, key []byte, value []byte, delete bool) {\n\tfl.stateCache = append(fl.stateCache, StoreKVPair{\n\t\tStoreKey: storeKey.Name(),\n\t\tDelete: delete,\n\t\tKey: key,\n\t\tValue: value,\n\t})\n}\n\n// PopStateCache returns the current state caches and set to nil\nfunc (fl *MemoryListener) PopStateCache() []StoreKVPair {\n\tres := fl.stateCache\n\tfl.stateCache = nil\n\treturn res\n}\n```\n\nWe will also define a protobuf type for the KV pairs. 
In addition to the key and value fields this message\nwill include the StoreKey for the originating KVStore so that we can collect information from separate KVStores and determine the source of each KV pair.\n\n```protobuf\nmessage StoreKVPair {\n optional string store_key = 1; // the store key for the KVStore this pair originates from\n required bool set = 2; // true indicates a set operation, false indicates a delete operation\n required bytes key = 3;\n required bytes value = 4;\n}\n```\n\n### ListenKVStore\n\nWe will create a new `Store` type `listenkv.Store` that the `rootmulti` store will use to wrap a `KVStore` to enable state listening.\nWe will configure the `Store` with a `MemoryListener` which will collect state changes for output to specific destinations.\n\n```go\n// Store implements the KVStore interface with listening enabled.\n// Operations are traced on each core KVStore call and written to any of the\n// underlying listeners with the proper key and operation permissions\ntype Store struct {\n parent types.KVStore\n listener *types.MemoryListener\n parentStoreKey types.StoreKey\n}\n\n// NewStore returns a reference to a new traceKVStore given a parent\n// KVStore implementation and a buffered writer.\nfunc NewStore(parent types.KVStore, psk types.StoreKey, listener *types.MemoryListener) *Store {\n return &Store{parent: parent, listener: listener, parentStoreKey: psk}\n}\n\n// Set implements the KVStore interface. It traces a write operation and\n// delegates the Set call to the parent KVStore.\nfunc (s *Store) Set(key []byte, value []byte) {\n types.AssertValidKey(key)\n s.parent.Set(key, value)\n s.listener.OnWrite(s.parentStoreKey, key, value, false)\n}\n\n// Delete implements the KVStore interface. 
It traces a write operation and\n// delegates the Delete call to the parent KVStore.\nfunc (s *Store) Delete(key []byte) {\n s.parent.Delete(key)\n s.listener.OnWrite(s.parentStoreKey, key, nil, true)\n}\n```\n\n### MultiStore interface updates\n\nWe will update the `CommitMultiStore` interface to allow us to wrap a `Memorylistener` to a specific `KVStore`.\nNote that the `MemoryListener` will be attached internally by the concrete `rootmulti` implementation.\n\n```go\ntype CommitMultiStore interface {\n ...\n\n // AddListeners adds a listener for the KVStore belonging to the provided StoreKey\n AddListeners(keys []StoreKey)\n\n // PopStateCache returns the accumulated state change messages from MemoryListener\n PopStateCache() []StoreKVPair\n}\n```\n\n\n### MultiStore implementation updates\n\nWe will adjust the `rootmulti` `GetKVStore` method to wrap the returned `KVStore` with a `listenkv.Store` if listening is turned on for that `Store`.\n\n```go\nfunc (rs *Store) GetKVStore(key types.StoreKey) types.KVStore {\n store := rs.stores[key].(types.KVStore)\n\n if rs.TracingEnabled() {\n store = tracekv.NewStore(store, rs.traceWriter, rs.traceContext)\n }\n if rs.ListeningEnabled(key) {\n store = listenkv.NewStore(store, key, rs.listeners[key])\n }\n\n return store\n}\n```\n\nWe will implement `AddListeners` to manage KVStore listeners internally and implement `PopStateCache`\nfor a means of retrieving the current state.\n\n```go\n// AddListeners adds state change listener for a specific KVStore\nfunc (rs *Store) AddListeners(keys []types.StoreKey) {\n\tlistener := types.NewMemoryListener()\n\tfor i := range keys {\n\t\trs.listeners[keys[i]] = listener\n\t}\n}\n```\n\n```go\nfunc (rs *Store) PopStateCache() []types.StoreKVPair {\n\tvar cache []types.StoreKVPair\n\tfor _, ls := range rs.listeners {\n\t\tcache = append(cache, ls.PopStateCache()...)\n\t}\n\tsort.SliceStable(cache, func(i, j int) bool {\n\t\treturn cache[i].StoreKey < cache[j].StoreKey\n\t})\n\treturn 
cache\n}\n```\n\nWe will also adjust the `rootmulti` `CacheMultiStore` and `CacheMultiStoreWithVersion` methods to enable listening in\nthe cache layer.\n\n```go\nfunc (rs *Store) CacheMultiStore() types.CacheMultiStore {\n stores := make(map[types.StoreKey]types.CacheWrapper)\n for k, v := range rs.stores {\n store := v.(types.KVStore)\n // Wire the listenkv.Store to allow listeners to observe the writes from the cache store;\n // setting the same listeners on the cache store would observe duplicated writes.\n if rs.ListeningEnabled(k) {\n store = listenkv.NewStore(store, k, rs.listeners[k])\n }\n stores[k] = store\n }\n return cachemulti.NewStore(rs.db, stores, rs.keysByName, rs.traceWriter, rs.getTracingContext())\n}\n```\n\n```go\nfunc (rs *Store) CacheMultiStoreWithVersion(version int64) (types.CacheMultiStore, error) {\n // ...\n\n // Wire the listenkv.Store to allow listeners to observe the writes from the cache store;\n // setting the same listeners on the cache store would observe duplicated writes.\n if rs.ListeningEnabled(key) {\n cacheStore = listenkv.NewStore(cacheStore, key, rs.listeners[key])\n }\n\n cachedStores[key] = cacheStore\n }\n\n return cachemulti.NewStore(rs.db, cachedStores, rs.keysByName, rs.traceWriter, rs.getTracingContext()), nil\n}\n```\n\n### Exposing the data\n\n#### Streaming Service\n\nWe will introduce a new `ABCIListener` interface that plugs into the BaseApp and relays ABCI requests and responses\nso that the service can group the state changes with the ABCI requests.\n\n```go\n// baseapp/streaming.go\n\n// ABCIListener is the interface that we're exposing as a streaming service.\ntype ABCIListener interface {\n\t// ListenFinalizeBlock updates the streaming service with the latest FinalizeBlock messages\n\tListenFinalizeBlock(ctx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error\n\t// ListenCommit updates the streaming service with the latest Commit messages and state changes\n\tListenCommit(ctx context.Context, res 
abci.ResponseCommit, changeSet []*StoreKVPair) error\n}\n```\n\n#### BaseApp Registration\n\nWe will add a new method to the `BaseApp` to enable the registration of `StreamingService`s:\n\n```go\n// SetStreamingService is used to set a streaming service into the BaseApp hooks and load the listeners into the multistore\nfunc (app *BaseApp) SetStreamingService(s ABCIListener) {\n // register the StreamingService within the BaseApp\n // BaseApp will pass FinalizeBlock and Commit requests and responses to the streaming services to update their ABCI context\n app.abciListeners = append(app.abciListeners, s)\n}\n```\n\nWe will add two new fields to the `BaseApp` struct:\n\n```go\ntype BaseApp struct {\n\n ...\n\n // abciListenersAsync for determining if abciListeners will run asynchronously.\n // When abciListenersAsync=false and stopNodeOnABCIListenerErr=false listeners will run synchronized but will not stop the node.\n // When abciListenersAsync=true stopNodeOnABCIListenerErr will be ignored.\n abciListenersAsync bool\n\n // stopNodeOnABCIListenerErr halts the node when ABCI streaming service listening results in an error.\n // stopNodeOnABCIListenerErr=true must be paired with abciListenersAsync=false.\n stopNodeOnABCIListenerErr bool\n}\n```\n\n#### ABCI Event Hooks\n\nWe will modify the `FinalizeBlock` and `Commit` methods to pass ABCI requests and responses\nto any streaming service hooks registered with the `BaseApp`.\n\n```go\nfunc (app *BaseApp) FinalizeBlock(req abci.RequestFinalizeBlock) abci.ResponseFinalizeBlock {\n\n var abciRes abci.ResponseFinalizeBlock\n defer func() {\n // call the streaming service hook with the FinalizeBlock messages\n for _, abciListener := range app.abciListeners {\n ctx := app.finalizeState.ctx\n blockHeight := ctx.BlockHeight()\n if app.abciListenersAsync {\n go func(req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) {\n if err := abciListener.ListenFinalizeBlock(ctx, req, res); err != nil {\n 
app.logger.Error(\"FinalizeBlock listening hook failed\", \"height\", blockHeight, \"err\", err)\n }\n }(req, abciRes)\n } else {\n if err := abciListener.ListenFinalizeBlock(ctx, req, abciRes); err != nil {\n app.logger.Error(\"FinalizeBlock listening hook failed\", \"height\", blockHeight, \"err\", err)\n if app.stopNodeOnABCIListenerErr {\n os.Exit(1)\n }\n }\n }\n }\n }()\n\n ...\n\n return abciRes\n}\n```\n\n```go\nfunc (app *BaseApp) Commit() abci.ResponseCommit {\n\n ...\n\n res := abci.ResponseCommit{\n Data: commitID.Hash,\n RetainHeight: retainHeight,\n }\n\n // call the streaming service hook with the Commit messages\n for _, abciListener := range app.abciListeners {\n ctx := app.deliverState.ctx\n blockHeight := ctx.BlockHeight()\n changeSet := app.cms.PopStateCache()\n if app.abciListenersAsync {\n go func(res abci.ResponseCommit, changeSet []store.StoreKVPair) {\n if err := abciListener.ListenCommit(ctx, res, changeSet); err != nil {\n app.logger.Error(\"ListenCommit listening hook failed\", \"height\", blockHeight, \"err\", err)\n }\n }(res, changeSet)\n } else {\n if err := abciListener.ListenCommit(ctx, res, changeSet); err != nil {\n app.logger.Error(\"ListenCommit listening hook failed\", \"height\", blockHeight, \"err\", err)\n if app.stopNodeOnABCIListenerErr {\n os.Exit(1)\n }\n }\n }\n }\n\n ...\n\n return res\n}\n```\n\n#### Go Plugin System\n\nWe propose a plugin architecture to load and run `Streaming` plugins and other types of implementations. We will introduce a plugin\nsystem over gRPC that is used to load and run Cosmos-SDK plugins. 
The plugin system uses [hashicorp/go-plugin](https://github.com/hashicorp/go-plugin).\nEach plugin must have a struct that implements the `plugin.Plugin` interface and an `Impl` interface for processing messages over gRPC.\nEach plugin must also have a message protocol defined for the gRPC service:\n\n```go\n// streaming/plugins/abci/{plugin_version}/interface.go\n\n// Handshake is a common handshake that is shared by streaming and host.\n// This prevents users from executing bad plugins or executing a plugin\n// directly. It is a UX feature, not a security feature.\nvar Handshake = plugin.HandshakeConfig{\n ProtocolVersion: 1,\n MagicCookieKey: \"ABCI_LISTENER_PLUGIN\",\n MagicCookieValue: \"ef78114d-7bdf-411c-868f-347c99a78345\",\n}\n\n// ABCIListenerGRPCPlugin is the base struct for all kinds of go-plugin implementations\n// It will be included in interfaces of different Plugins\ntype ABCIListenerGRPCPlugin struct {\n // GRPCPlugin must still implement the Plugin interface\n plugin.Plugin\n // Concrete implementation, written in Go. This is only used for plugins\n // that are written in Go.\n Impl baseapp.ABCIListener\n}\n\nfunc (p *ABCIListenerGRPCPlugin) GRPCServer(_ *plugin.GRPCBroker, s *grpc.Server) error {\n RegisterABCIListenerServiceServer(s, &GRPCServer{Impl: p.Impl})\n return nil\n}\n\nfunc (p *ABCIListenerGRPCPlugin) GRPCClient(\n _ context.Context,\n _ *plugin.GRPCBroker,\n c *grpc.ClientConn,\n) (interface{}, error) {\n return &GRPCClient{client: NewABCIListenerServiceClient(c)}, nil\n}\n```\n\nThe `plugin.Plugin` interface has two methods `Client` and `Server`. 
For our GRPC service these are `GRPCClient` and `GRPCServer`.\nThe `Impl` field holds the concrete implementation of our `baseapp.ABCIListener` interface written in Go.\nNote: this is only used for plugin implementations written in Go.\n\nThe advantage of having such a plugin system is that within each plugin authors can define the message protocol in a way that fits their use case.\nFor example, when state change listening is desired, the `ABCIListener` message protocol can be defined as below (*for illustrative purposes only*).\nWhen state change listening is not desired, then `ListenCommit` can be omitted from the protocol.\n\n```protobuf\nsyntax = \"proto3\";\n\n...\n\nmessage Empty {}\n\nmessage ListenFinalizeBlockRequest {\n RequestFinalizeBlock req = 1;\n ResponseFinalizeBlock res = 2;\n}\nmessage ListenCommitRequest {\n int64 block_height = 1;\n ResponseCommit res = 2;\n repeated StoreKVPair changeSet = 3;\n}\n\n// plugin that listens to state changes\nservice ABCIListenerService {\n rpc ListenFinalizeBlock(ListenFinalizeBlockRequest) returns (Empty);\n rpc ListenCommit(ListenCommitRequest) returns (Empty);\n}\n```\n\n```protobuf\n...\n// plugin that doesn't listen to state changes\nservice ABCIListenerService {\n rpc ListenFinalizeBlock(ListenFinalizeBlockRequest) returns (Empty);\n}\n```\n\nImplementing the service above:\n\n```go\n// streaming/plugins/abci/{plugin_version}/grpc.go\n\nvar (\n _ baseapp.ABCIListener = (*GRPCClient)(nil)\n)\n\n// GRPCClient is an implementation of the ABCIListener and ABCIListenerPlugin interfaces that talks over RPC.\ntype GRPCClient struct {\n client ABCIListenerServiceClient\n}\n\nfunc (m *GRPCClient) ListenFinalizeBlock(goCtx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error {\n ctx := sdk.UnwrapSDKContext(goCtx)\n _, err := m.client.ListenFinalizeBlock(ctx, &ListenFinalizeBlockRequest{Req: req, Res: 
res})\n return err\n}\n\nfunc (m *GRPCClient) ListenCommit(goCtx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error {\n ctx := sdk.UnwrapSDKContext(goCtx)\n _, err := m.client.ListenCommit(ctx, &ListenCommitRequest{BlockHeight: ctx.BlockHeight(), Res: res, ChangeSet: changeSet})\n return err\n}\n\n// GRPCServer is the gRPC server that GRPCClient talks to.\ntype GRPCServer struct {\n // This is the real implementation\n Impl baseapp.ABCIListener\n}\n\nfunc (m *GRPCServer) ListenFinalizeBlock(ctx context.Context, req *ListenFinalizeBlockRequest) (*Empty, error) {\n return &Empty{}, m.Impl.ListenFinalizeBlock(ctx, req.Req, req.Res)\n}\n\nfunc (m *GRPCServer) ListenCommit(ctx context.Context, req *ListenCommitRequest) (*Empty, error) {\n return &Empty{}, m.Impl.ListenCommit(ctx, req.Res, req.ChangeSet)\n}\n```\n\nAnd the pre-compiled Go plugin `Impl` (*this is only used for plugins that are written in Go*):\n\n```go\n// streaming/plugins/abci/{plugin_version}/impl/plugin.go\n\n// Plugins are pre-compiled and loaded by the plugin system\n\n// ABCIListener is the implementation of the baseapp.ABCIListener interface\ntype ABCIListener struct{}\n\nfunc (m *ABCIListener) ListenFinalizeBlock(ctx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error {\n // send data to external system\n return nil\n}\n\nfunc (m *ABCIListener) ListenCommit(ctx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error {\n // send data to external system\n return nil\n}\n\nfunc main() {\n plugin.Serve(&plugin.ServeConfig{\n HandshakeConfig: grpc_abci_v1.Handshake,\n Plugins: map[string]plugin.Plugin{\n \"grpc_plugin_v1\": &grpc_abci_v1.ABCIListenerGRPCPlugin{Impl: &ABCIListener{}},\n },\n\n // A non-nil value here enables gRPC serving for this streaming...\n GRPCServer: plugin.DefaultGRPCServer,\n })\n}\n```\n\nWe will introduce a plugin loading system that will return `(interface{}, error)`.\nThis provides the advantage 
of using versioned plugins where the plugin interface and gRPC protocol change over time.\nIn addition, it allows for building independent plugins that can expose different parts of the system over gRPC.\n\n```go\nfunc NewStreamingPlugin(name string, logLevel string) (interface{}, error) {\n logger := hclog.New(&hclog.LoggerOptions{\n Output: hclog.DefaultOutput,\n Level: toHclogLevel(logLevel),\n Name: fmt.Sprintf(\"plugin.%s\", name),\n })\n\n // We're a host. Start by launching the streaming process.\n env := os.Getenv(GetPluginEnvKey(name))\n client := plugin.NewClient(&plugin.ClientConfig{\n HandshakeConfig: HandshakeMap[name],\n Plugins: PluginMap,\n Cmd: exec.Command(\"sh\", \"-c\", env),\n Logger: logger,\n AllowedProtocols: []plugin.Protocol{\n plugin.ProtocolNetRPC, plugin.ProtocolGRPC},\n })\n\n // Connect via RPC\n rpcClient, err := client.Client()\n if err != nil {\n return nil, err\n }\n\n // Request streaming plugin\n return rpcClient.Dispense(name)\n}\n```\n\nWe propose a `RegisterStreamingPlugin` function for the App to register `NewStreamingPlugin`s with the App's BaseApp.\nStreaming plugins can be of `Any` type; therefore, the function takes in an interface vs a concrete type.\nFor example, we could have plugins of `ABCIListener`, `WasmListener` or `IBCListener`. Note that the `RegisterStreamingPlugin` function\nis a helper function and not a requirement. 
Plugin registration can easily be moved from the App to the BaseApp directly.\n\n```go\n// baseapp/streaming.go\n\n// RegisterStreamingPlugin registers streaming plugins with the App.\n// This method returns an error if a plugin is not supported.\nfunc RegisterStreamingPlugin(\n bApp *BaseApp,\n appOpts servertypes.AppOptions,\n keys map[string]*types.KVStoreKey,\n streamingPlugin interface{},\n) error {\n switch t := streamingPlugin.(type) {\n case ABCIListener:\n registerABCIListenerPlugin(bApp, appOpts, keys, t)\n default:\n return fmt.Errorf(\"unexpected plugin type %T\", t)\n }\n return nil\n}\n```\n\n```go\nfunc registerABCIListenerPlugin(\n bApp *BaseApp,\n appOpts servertypes.AppOptions,\n keys map[string]*store.KVStoreKey,\n abciListener ABCIListener,\n) {\n asyncKey := fmt.Sprintf(\"%s.%s.%s\", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIAsync)\n async := cast.ToBool(appOpts.Get(asyncKey))\n stopNodeOnErrKey := fmt.Sprintf(\"%s.%s.%s\", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIStopNodeOnErrTomlKey)\n stopNodeOnErr := cast.ToBool(appOpts.Get(stopNodeOnErrKey))\n keysKey := fmt.Sprintf(\"%s.%s.%s\", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIKeysTomlKey)\n exposeKeysStr := cast.ToStringSlice(appOpts.Get(keysKey))\n exposedKeys := exposeStoreKeys(exposeKeysStr, keys)\n bApp.cms.AddListeners(exposedKeys)\n bApp.SetStreamingManager(\n\t\tstoretypes.StreamingManager{\n\t\t\tABCIListeners: []storetypes.ABCIListener{abciListener},\n\t\t\tStopNodeOnErr: stopNodeOnErr,\n\t\t},\n\t)\n}\n```\n\n```go\nfunc exposeAll(list []string) bool {\n for _, ele := range list {\n if ele == \"*\" {\n return true\n }\n }\n return false\n}\n\nfunc exposeStoreKeys(keysStr []string, keys map[string]*types.KVStoreKey) []types.StoreKey {\n var exposeStoreKeys []types.StoreKey\n if exposeAll(keysStr) {\n exposeStoreKeys = make([]types.StoreKey, 0, len(keys))\n for _, storeKey := range keys {\n exposeStoreKeys = append(exposeStoreKeys, storeKey)\n }\n 
} else {\n exposeStoreKeys = make([]types.StoreKey, 0, len(keysStr))\n for _, keyStr := range keysStr {\n if storeKey, ok := keys[keyStr]; ok {\n exposeStoreKeys = append(exposeStoreKeys, storeKey)\n }\n }\n }\n // sort storeKeys for deterministic output\n sort.SliceStable(exposeStoreKeys, func(i, j int) bool {\n return exposeStoreKeys[i].Name() < exposeStoreKeys[j].Name()\n })\n\n return exposeStoreKeys\n}\n```\n\nThe `NewStreamingPlugin` and `RegisterStreamingPlugin` functions are used to register a plugin with the App's BaseApp.\n\ne.g. in `NewSimApp`:\n\n```go\nfunc NewSimApp(\n logger log.Logger,\n db dbm.DB,\n traceStore io.Writer,\n loadLatest bool,\n appOpts servertypes.AppOptions,\n baseAppOptions ...func(*baseapp.BaseApp),\n) *SimApp {\n\n ...\n\n keys := sdk.NewKVStoreKeys(\n authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey,\n minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey,\n govtypes.StoreKey, paramstypes.StoreKey, ibchost.StoreKey, upgradetypes.StoreKey,\n evidencetypes.StoreKey, ibctransfertypes.StoreKey, capabilitytypes.StoreKey,\n )\n\n ...\n\n // register streaming services\n streamingCfg := cast.ToStringMap(appOpts.Get(baseapp.StreamingTomlKey))\n for service := range streamingCfg {\n pluginKey := fmt.Sprintf(\"%s.%s.%s\", baseapp.StreamingTomlKey, service, baseapp.StreamingPluginTomlKey)\n pluginName := strings.TrimSpace(cast.ToString(appOpts.Get(pluginKey)))\n if len(pluginName) > 0 {\n logLevel := cast.ToString(appOpts.Get(flags.FlagLogLevel))\n plugin, err := streaming.NewStreamingPlugin(pluginName, logLevel)\n if err != nil {\n tmos.Exit(err.Error())\n }\n if err := baseapp.RegisterStreamingPlugin(bApp, appOpts, keys, plugin); err != nil {\n tmos.Exit(err.Error())\n }\n }\n }\n\n return app\n```\n\n#### Configuration\n\nThe plugin system will be configured within an App's TOML configuration files.\n\n```toml\n# gRPC streaming\n[streaming]\n\n# ABCI streaming service\n[streaming.abci]\n\n# The plugin version to 
use for ABCI listening\nplugin = \"abci_v1\"\n\n# List of kv store keys to listen to for state changes.\n# Set to [\"*\"] to expose all keys.\nkeys = [\"*\"]\n\n# Enable abciListeners to run asynchronously.\n# When abciListenersAsync=false and stopNodeOnABCIListenerErr=false listeners will run synchronized but will not stop the node.\n# When abciListenersAsync=true stopNodeOnABCIListenerErr will be ignored.\nasync = false\n\n# Whether to stop the node on message delivery error.\nstop-node-on-err = true\n```\n\nThere will be four parameters for configuring the `ABCIListener` plugin: `streaming.abci.plugin`, `streaming.abci.keys`, `streaming.abci.async` and `streaming.abci.stop-node-on-err`.\n`streaming.abci.plugin` is the name of the plugin we want to use for streaming, `streaming.abci.keys` is a set of store keys for stores it listens to,\n`streaming.abci.async` is a bool enabling asynchronous listening and `streaming.abci.stop-node-on-err` is a bool that, when true, stops the node on listener errors while operating\nin synchronized mode (`streaming.abci.async=false`). Note that `streaming.abci.stop-node-on-err=true` will be ignored if `streaming.abci.async=true`.\n\nThe configuration above supports additional streaming plugins by adding the plugin to the `[streaming]` configuration section\nand registering the plugin with the `RegisterStreamingPlugin` helper function.\n\nNote that each plugin must include the `streaming.{service}.plugin` property as it is a requirement for doing the lookup and registration of the plugin\nwith the App. 
All other properties are unique to the individual services.\n\n#### Encoding and decoding streams\n\nADR-038 introduces the interfaces and types for streaming state changes out from KVStores, associating this\ndata with their related ABCI requests and responses, and registering a service for consuming this data and streaming it to some destination in a final format.\nInstead of prescribing a final data format in this ADR, it is left to a specific plugin implementation to define and document this format.\nWe take this approach because flexibility in the final format is necessary to support a wide range of streaming service plugins. For example,\nthe data format for a streaming service that writes the data out to a set of files will differ from the data format that is written to a Kafka topic.\n\n## Consequences\n\nThese changes will provide a means of subscribing to KVStore state changes in real time.\n\n### Backwards Compatibility\n\n* This ADR changes the `CommitMultiStore` interface; implementations supporting the previous version of this interface will not support the new one\n\n### Positive\n\n* Ability to listen to KVStore state changes in real time and expose these events to external consumers\n\n### Negative\n\n* Changes `CommitMultiStore` interface and its implementations\n\n### Neutral\n\n* Introduces additional, but optional, complexity to configuring and running a Cosmos application\n* If an application developer opts to use these features to expose data, they need to be aware of the ramifications/risks of that data exposure as it pertains to the specifics of their application
weight updates for a number of blocks before updating the consensus' staking weights. The length of the buffer is dubbed an epoch. The prior functionality of the staking module is then a special case of the abstracted module, with the epoch being set to 1 block.\n\n## Context\n\nThe current proof of stake module takes the design decision to apply staking weight changes to the consensus engine immediately. This means that delegations and unbonds get applied immediately to the validator set. This decision was primarily done as it was the simplest from an implementation perspective, and because we at the time believed that this would lead to better UX for clients.\n\nAn alternative design choice is to allow buffering staking updates (delegations, unbonds, validators joining) for a number of blocks. This epoched proof of stake consensus provides the guarantee that the consensus weights for validators will not change mid-epoch, except in the event of a slash condition.\n\nAdditionally, the UX hurdle may not be as significant as was previously thought. This is because it is possible to provide users immediate acknowledgement that their bond was recorded and will be executed.\n\nFurthermore, it has become clearer over time that immediate execution of staking events comes with limitations, such as:\n\n* Threshold based cryptography. One of the main limitations is that because the validator set can change so regularly, it makes the running of multiparty computation by a fixed validator set difficult. Many threshold-based cryptographic features for blockchains such as randomness beacons and threshold decryption require a computationally-expensive DKG process (will take much longer than 1 block to create). To productively use these, we need to guarantee that the result of the DKG will be used for a reasonably long time. It wouldn't be feasible to rerun the DKG every block. 
By epoching staking, it guarantees we'll only need to run a new DKG once every epoch.\n\n* Light client efficiency. This would lessen the overhead for IBC when there is high churn in the validator set. In the Tendermint light client bisection algorithm, the number of headers you need to verify is related to bounding the difference in validator sets between a trusted header and the latest header. If the difference is too great, you verify more headers in between the two. By limiting the frequency of validator set changes, we can reduce the worst case size of IBC light client proofs, which occurs when a validator set has high churn.\n\n* Fairness of deterministic leader election. Currently we have no way of reasoning about fairness of deterministic leader election in the presence of staking changes without epochs (tendermint/spec#217). Breaking fairness of leader election is profitable for validators, as they earn additional rewards from being the proposer. Adding epochs at least makes it easier for our deterministic leader election to match something we can prove secure. (Albeit, we still haven’t proven if our current algorithm is fair with > 2 validators in the presence of stake changes)\n\n* Staking derivative design. Currently, reward distribution is done lazily using the F1 fee distribution. While saving computational complexity, lazy accounting requires a more stateful staking implementation. Right now, each delegation entry has to track the time of last withdrawal. Handling this can be a challenge for some staking derivatives designs that seek to provide fungibility for all tokens staked to a single validator. Force-withdrawing rewards to users can help solve this, however it is infeasible to force-withdraw rewards to users on a per block basis. With epochs, a chain could more easily alter the design to have rewards be forcefully withdrawn (iterating over delegator accounts only once per-epoch), and can thus remove delegation timing from state. 
This may be useful for certain staking derivative designs.\n\n## Design considerations\n\n### Slashing\n\nThere is a design consideration for whether to apply a slash immediately or at the end of an epoch. A slash event should apply only to members who are actually staked during the time of the infraction, namely during the epoch the slash event occurred.\n\nApplying it immediately can be viewed as offering greater consensus layer security, at potential costs to the aforementioned use cases. The benefits of immediate slashing for consensus layer security can all be obtained by executing the validator jailing immediately (thus removing it from the validator set), and delaying the actual slash change to the validator's weight until the epoch boundary. For the use cases mentioned above, workarounds can be integrated to avoid problems, as follows:\n\n* For threshold based cryptography, this setting will have the threshold cryptography use the original epoch weights, while consensus has an update that lets it more rapidly benefit from additional security. If the threshold based cryptography blocks liveness of the chain, then we have effectively raised the liveness threshold of the remaining validators for the rest of the epoch. (Alternatively, jailed nodes could still contribute shares) This plan will fail in the extreme case that more than 1/3rd of the validators have been jailed within a single epoch. For such an extreme scenario, the chain already has its own custom incident response plan, and defining how to handle the threshold cryptography should be a part of that.\n* For light client efficiency, there can be a bit included in the header indicating an intra-epoch slash (a la https://github.com/tendermint/spec/issues/199).\n* For fairness of deterministic leader election, applying a slash or jailing within an epoch would break the guarantee we were seeking to provide. 
This then re-introduces a new (but significantly simpler) problem for trying to provide fairness guarantees. Namely, that validators can adversarially elect to remove themselves from the set of proposers. From a security perspective, this could potentially be handled by two different mechanisms (or prove to still be too difficult to achieve). One is making a security statement acknowledging the ability for an adversary to force an ahead-of-time fixed threshold of users to drop out of the proposer set within an epoch. The second method would be to parameterize such that the cost of a slash within the epoch far outweighs benefits due to being a proposer. However, this latter criterion is quite dubious, since being a proposer can have many advantageous side-effects in chains with complex state machines. (Namely, DeFi games such as Fomo3D)\n* For staking derivative design, there is no issue introduced. This does not increase the state size of staking records, since whether a slash has occurred is fully queryable given the validator address.\n\n### Token lockup\n\nWhen someone makes a transaction to delegate, even though they are not immediately staked, their tokens should be moved into a pool managed by the staking module which will then be used at the end of an epoch. This prevents concerns where they stake, and then spend those tokens not realizing they were already allocated for staking, and thus having their staking tx fail.\n\n### Pipelining the epochs\n\nFor threshold based cryptography in particular, we need a pipeline for epoch changes. This is because when we are in epoch N, we want the epoch N+1 weights to be fixed so that the validator set can do the DKG accordingly. So if we are currently in epoch N, the stake weights for epoch N+1 should already be fixed, and new stake changes should be getting applied to epoch N + 2.\n\nThis can be handled by making a parameter for the epoch pipeline length. 
This parameter should not be alterable except during hard forks, to mitigate implementation complexity of switching the pipeline length.\n\nWith pipeline length 1, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+1.\nWith pipeline length 2, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+2.\n\n### Rewards\n\nEven though all staking updates are applied at epoch boundaries, rewards can still be distributed immediately when they are claimed. This is because they do not affect the current stake weights, as we do not implement auto-bonding of rewards. If such a feature were to be implemented, it would have to be set up so that rewards are auto-bonded at the epoch boundary.\n\n### Parameterizing the epoch length\n\nWhen choosing the epoch length, there is a trade-off between queued state/computation buildup, and countering the previously discussed limitations of immediate execution if they apply to a given chain.\n\nUntil an ABCI mechanism for variable block times is introduced, it is ill-advised to use high epoch lengths due to the computation buildup. This is because when a block's execution time is greater than the expected block time from Tendermint, rounds may increment.\n\n## Decision\n\n**Step-1**: Implement buffering of all staking and slashing messages.\n\nFirst, we create a pool for storing tokens that are being bonded but that should only be applied at the epoch boundary, called the `EpochDelegationPool`. Then, we have two separate queues, one for staking, one for slashing. We describe what happens on each message being delivered below:\n\n### Staking messages\n\n* **MsgCreateValidator**: Move user's self-bond to `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the self-bond, taking the funds from the `EpochDelegationPool`. 
If Epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.\n* **MsgEditValidator**: Validate the message and, if valid, queue the message for execution at the end of the Epoch.\n* **MsgDelegate**: Move user's funds to `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the delegation, taking the funds from the `EpochDelegationPool`. If Epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.\n* **MsgBeginRedelegate**: Validate the message and, if valid, queue the message for execution at the end of the Epoch.\n* **MsgUndelegate**: Validate the message and, if valid, queue the message for execution at the end of the Epoch.\n\n### Slashing messages\n\n* **MsgUnjail**: Validate the message and, if valid, queue the message for execution at the end of the Epoch.\n* **Slash Event**: Whenever a slash event is created, it gets queued in the slashing module to apply at the end of the epoch. The queues should be set up such that this slash applies immediately.\n\n### Evidence Messages\n\n* **MsgSubmitEvidence**: This gets executed immediately, and the validator gets jailed immediately. However, in slashing, the actual slash event gets queued.\n\nThen we add methods to the end blockers, to ensure that at the epoch boundary the queues are cleared and delegation updates are applied.\n\n**Step-2**: Implement querying of queued staking txs.\n\nWhen querying the staking activity of a given address, the status should return not only the amount of tokens staked, but also if there are any queued stake events for that address. This will require more work to be done in the querying logic, to trace the queued upcoming staking events.\n\nAs an initial implementation, this can be implemented as a linear search over all queued staking events. However, for chains that need long epochs, they should eventually build additional support for nodes that support querying to be able to produce results in constant time. 
(This is doable by maintaining an auxiliary hashmap for indexing upcoming staking events by address)\n\n**Step-3**: Adjust gas\n\nCurrently, gas represents the cost of executing a transaction when it's executed immediately. (Merging together costs of p2p overhead, state access overhead, and computational overhead) However, now a transaction can cause computation in a future block, namely at the epoch boundary.\n\nTo handle this, we should initially include parameters for estimating the amount of future computation (denominated in gas), and add that as a flat charge needed for the message.\nWe leave out of scope how to weight future computation versus current computation in gas pricing, and have it set such that they are weighted equally for now.\n\n## Consequences\n\n### Positive\n\n* Abstracts the proof of stake module in a way that retains the existing functionality\n* Enables new features such as validator-set based threshold cryptography\n\n### Negative\n\n* Increases complexity of integrating more complex gas pricing mechanisms, as they now have to consider future execution costs as well.\n* When epoch > 1, validators can no longer leave the network immediately, and must wait until an epoch boundary.
This ADR defines a separation of state commitments from data storage and the Cosmos SDK transition from IAVL to SMT.\n\n## Context\n\nCurrently, Cosmos SDK uses IAVL for both state [commitments](https://cryptography.fandom.com/wiki/Commitment_scheme) and data storage.\n\nIAVL has effectively become an orphaned project within the Cosmos ecosystem and it has proven to be an inefficient state commitment data structure.\nIn the current design, IAVL is used for both data storage and as a Merkle Tree for state commitments. IAVL is meant to be a standalone Merkleized key/value database; however, it uses a KV DB engine to store all tree nodes. So, each node is stored in a separate record in the KV DB. This causes many inefficiencies and problems:\n\n* Each object query requires a tree traversal from the root. Subsequent queries for the same object are cached at the Cosmos SDK level.\n* Each edge traversal requires a DB query.\n* Creating snapshots is [expensive](https://github.com/cosmos/cosmos-sdk/issues/7215#issuecomment-684804950). It takes about 30 seconds to export less than 100 MB of state (as of March 2020).\n* Updates in IAVL may trigger tree reorganization and possible O(log(n)) hash re-computations, which can become a CPU bottleneck.\n* The node structure is pretty expensive - it contains the standard tree node elements (key, value, left and right element) and additional metadata such as height and version (which is not required by the Cosmos SDK). The entire node is hashed, and that hash is used as the key in the underlying database, [ref](https://github.com/cosmos/iavl/blob/master/docs/node/node.md).\n\nMoreover, the IAVL project lacks support and a maintainer, and we already see better and well-established alternatives. 
Instead of optimizing the IAVL, we are looking into other solutions for both storage and state commitments.\n\n## Decision\n\nWe propose to separate the concerns of state commitment (**SC**), needed for consensus, and state storage (**SS**), needed for state machine. Finally we replace IAVL with [Celestia's SMT](https://github.com/lazyledger/smt). Celestia SMT is based on Diem (called jellyfish) design [*] - it uses a compute-optimized SMT by replacing subtrees with only default values with a single node (same approach is used by Ethereum2) and implements compact proofs.\n\nThe storage model presented here doesn't deal with data structure nor serialization. It's a Key-Value database, where both key and value are binaries. The storage user is responsible for data serialization.\n\n### Decouple state commitment from storage\n\nSeparation of storage and commitment (by the SMT) will allow the optimization of different components according to their usage and access patterns.\n\n`SC` (SMT) is used to commit to a data and compute Merkle proofs. `SS` is used to directly access data. To avoid collisions, both `SS` and `SC` will use a separate storage namespace (they could use the same database underneath). `SS` will store each record directly (mapping `(key, value)` as `key → value`).\n\nSMT is a merkle tree structure: we don't store keys directly. For every `(key, value)` pair, `hash(key)` is used as leaf path (we hash a key to uniformly distribute leaves in the tree) and `hash(value)` as the leaf contents. The tree structure is specified in more depth [below](#smt-for-state-commitment).\n\nFor data access we propose 2 additional KV buckets (implemented as namespaces for the key-value pairs, sometimes called [column family](https://github.com/facebook/rocksdb/wiki/Terminology)):\n\n1. 
B1: `key → value`: the principal object storage, used by a state machine, behind the Cosmos SDK `KVStore` interface: provides direct access by key and allows prefix iteration (KV DB backend must support it).\n2. B2: `hash(key) → key`: a reverse index to get a key from an SMT path. Internally the SMT will store `(key, value)` as `prefix || hash(key) || hash(value)`. So, we can get an object value by composing `hash(key) → B2 → B1`.\n3. We could use more buckets to optimize the app usage if needed.\n\nWe propose to use a KV database for both `SS` and `SC`. The store interface will allow to use the same physical DB backend for both `SS` and `SC` as well two separate DBs. The latter option allows for the separation of `SS` and `SC` into different hardware units, providing support for more complex setup scenarios and improving overall performance: one can use different backends (eg RocksDB and Badger) as well as independently tuning the underlying DB configuration.\n\n### Requirements\n\nState Storage requirements:\n\n* range queries\n* quick (key, value) access\n* creating a snapshot\n* historical versioning\n* pruning (garbage collection)\n\nState Commitment requirements:\n\n* fast updates\n* tree path should be short\n* query historical commitment proofs using ICS-23 standard\n* pruning (garbage collection)\n\n### SMT for State Commitment\n\nA Sparse Merkle tree is based on the idea of a complete Merkle tree of an intractable size. The assumption here is that as the size of the tree is intractable, there would only be a few leaf nodes with valid data blocks relative to the tree size, rendering a sparse tree.\n\nThe full specification can be found at [Celestia](https://github.com/celestiaorg/celestia-specs/blob/ec98170398dfc6394423ee79b00b71038879e211/src/specs/data_structures.md#sparse-merkle-tree). 
In summary:\n\n* The SMT consists of a binary Merkle tree, constructed in the same fashion as described in [Certificate Transparency (RFC-6962)](https://tools.ietf.org/html/rfc6962), but using as the hashing function SHA-2-256 as defined in [FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4).\n* Leaves and internal nodes are hashed differently: the one-byte `0x00` is prepended for leaf nodes while `0x01` is prepended for internal nodes.\n* Default values are given to leaf nodes with empty leaves.\n* While the above rule is sufficient to pre-compute the values of intermediate nodes that are roots of empty subtrees, a further simplification is to extend this default value to all nodes that are roots of empty subtrees. The 32-byte zero is used as the default value. This rule takes precedence over the above one.\n* An internal node that is the root of a subtree that contains exactly one non-empty leaf is replaced by that leaf's leaf node.\n\n### Snapshots for storage sync and state versioning\n\nBelow, with simple _snapshot_ we refer to a database snapshot mechanism, not to a _ABCI snapshot sync_. The latter will be referred as _snapshot sync_ (which will directly use DB snapshot as described below).\n\nDatabase snapshot is a view of DB state at a certain time or transaction. It's not a full copy of a database (it would be too big). Usually a snapshot mechanism is based on a _copy on write_ and it allows DB state to be efficiently delivered at a certain stage.\nSome DB engines support snapshotting. Hence, we propose to reuse that functionality for the state sync and versioning (described below). We limit the supported DB engines to ones which efficiently implement snapshots. In a final section we discuss the evaluated DBs.\n\nOne of the Stargate core features is a _snapshot sync_ delivered in the `/snapshot` package. It provides a way to trustlessly sync a blockchain without repeating all transactions from the genesis. 
This feature is implemented in Cosmos SDK and requires storage support. Currently IAVL is the only supported backend. It works by streaming to a client a snapshot of a `SS` at a certain version together with a header chain.\n\nA new database snapshot will be created in every `EndBlocker` and identified by a block height. The `root` store keeps track of the available snapshots to offer `SS` at a certain version. The `root` store implements the `RootStore` interface described below. In essence, `RootStore` encapsulates a `Committer` interface. `Committer` has a `Commit`, `SetPruning`, `GetPruning` functions which will be used for creating and removing snapshots. The `rootStore.Commit` function creates a new snapshot and increments the version on each call, and checks if it needs to remove old versions. We will need to update the SMT interface to implement the `Committer` interface.\nNOTE: `Commit` must be called exactly once per block. Otherwise we risk going out of sync for the version number and block height.\nNOTE: For the Cosmos SDK storage, we may consider splitting that interface into `Committer` and `PruningCommitter` - only the multiroot should implement `PruningCommitter` (cache and prefix store don't need pruning).\n\nNumber of historical versions for `abci.RequestQuery` and state sync snapshots is part of a node configuration, not a chain configuration (configuration implied by the blockchain consensus). A configuration should allow to specify number of past blocks and number of past blocks modulo some number (eg: 100 past blocks and one snapshot every 100 blocks for past 2000 blocks). Archival nodes can keep all past versions.\n\nPruning old snapshots is effectively done by a database. Whenever we update a record in `SC`, SMT won't update nodes - instead it creates new nodes on the update path, without removing the old one. Since we are snapshotting each block, we need to change that mechanism to immediately remove orphaned nodes from the database. 
This is a safe operation - snapshots will keep track of the records and make them available when accessing past versions.\n\nTo manage the active snapshots we will either use a DB _max number of snapshots_ option (if available), or we will remove DB snapshots in the `EndBlocker`. The latter option can be done efficiently by identifying snapshots with block height and calling a store function to remove past versions.\n\n#### Accessing old state versions\n\nOne of the functional requirements is to access old state. This is done through the `abci.RequestQuery` structure. The version is specified by a block height (so we query for an object by a key `K` at block height `H`). The number of old versions supported for `abci.RequestQuery` is configurable. Accessing an old state is done by using available snapshots.\n`abci.RequestQuery` doesn't need the old state of `SC` unless the `prove=true` parameter is set. The SMT merkle proof must be included in the `abci.ResponseQuery` only if both `SC` and `SS` have a snapshot for the requested version.\n\nMoreover, Cosmos SDK could provide a way to directly access a historical state. However, a state machine shouldn't do that - since the number of snapshots is configurable, it would lead to nondeterministic execution.\n\nWe positively [validated](https://github.com/cosmos/cosmos-sdk/discussions/8297) a versioning and snapshot mechanism for querying old state with regard to the databases we evaluated.\n\n### State Proofs\n\nFor any object stored in the State Store (SS), we have a corresponding object in `SC`. A proof for an object `V` identified by a key `K` is a branch of `SC`, where the path corresponds to the key `hash(K)`, and the leaf is `hash(K, V)`.\n\n### Rollbacks\n\nWe need to be able to process transactions and roll back state updates if a transaction fails. This can be done in the following way: during transaction processing, we keep all state change requests (writes) in a `CacheWrapper` abstraction (as it's done today). 
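The cache-wrap flow described above can be sketched as follows; `cacheStore` and its methods are hypothetical illustrations of the pattern, not the SDK's actual `CacheWrapper` interface:\n\n```go\npackage main\n\nimport \"fmt\"\n\n// cacheStore buffers writes on top of a parent map. A failed transaction\n// simply drops the cache; a successful one flushes it with Write.\ntype cacheStore struct {\n	parent map[string][]byte\n	dirty  map[string][]byte\n}\n\nfunc newCacheStore(parent map[string][]byte) *cacheStore {\n	return &cacheStore{parent: parent, dirty: map[string][]byte{}}\n}\n\nfunc (c *cacheStore) Get(k string) []byte {\n	if v, ok := c.dirty[k]; ok {\n		return v\n	}\n	return c.parent[k]\n}\n\nfunc (c *cacheStore) Set(k string, v []byte) { c.dirty[k] = v }\n\n// Write flushes the buffered writes to the parent store (the commit path).\nfunc (c *cacheStore) Write() {\n	for k, v := range c.dirty {\n		c.parent[k] = v\n	}\n}\n\nfunc main() {\n	state := map[string][]byte{\"balance/alice\": []byte(\"100\")}\n\n	// Transaction 1 fails: its cache is discarded, the parent is untouched.\n	tx1 := newCacheStore(state)\n	tx1.Set(\"balance/alice\", []byte(\"0\"))\n\n	// Transaction 2 succeeds: its writes are flushed at commit time.\n	tx2 := newCacheStore(state)\n	tx2.Set(\"balance/alice\", []byte(\"90\"))\n	tx2.Write()\n\n	fmt.Println(string(state[\"balance/alice\"])) // prints \"90\"\n}\n```\n\nDiscarding `tx1` without calling `Write` is the rollback: the parent store never observes the failed transaction's writes.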
Once we finish the block processing, in the `EndBlocker`, we commit a root store - at that time, all changes are written to the SMT and to the `SS` and a snapshot is created.\n\n### Committing to an object without saving it\n\nWe identified use cases where modules will need to save an object commitment without storing the object itself. Sometimes clients receive complex objects, and they have no way to prove the correctness of that object without knowing the storage layout. For those use cases it would be easier to commit to the object without storing it directly.\n\n### Refactor MultiStore\n\nThe Stargate `/store` implementation (store/v1) adds an additional layer in the SDK store construction - the `MultiStore` structure. The multistore exists to support the modularity of the Cosmos SDK - each module uses its own instance of IAVL, but in the current implementation, all instances share the same database. The latter indicates, however, that the implementation doesn't provide true modularity. Instead, it causes problems related to race conditions and atomic DB commits (see: [\\#6370](https://github.com/cosmos/cosmos-sdk/issues/6370) and [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297#discussioncomment-757043)).\n\nWe propose to remove the multistore concept from the SDK, and to use a single instance of `SC` and `SS` in a `RootStore` object. To avoid confusion, we should rename the `MultiStore` interface to `RootStore`. The `RootStore` will have the following interface; the methods for configuring tracing and listeners are omitted for brevity.\n\n```go\n// Used where read-only access to versions is needed.\ntype BasicRootStore interface {\n Store\n GetKVStore(StoreKey) KVStore\n CacheRootStore() CacheRootStore\n}\n\n// Used as the main app state, replacing CommitMultiStore.\ntype CommitRootStore interface {\n BasicRootStore\n Committer\n Snapshotter\n\n GetVersion(uint64) (BasicRootStore, error)\n SetInitialVersion(uint64) error\n\n ... 
// Trace and Listen methods\n}\n\n// Replaces CacheMultiStore for branched state.\ntype CacheRootStore interface {\n BasicRootStore\n Write()\n\n ... // Trace and Listen methods\n}\n\n// Example of constructor parameters for the concrete type.\ntype RootStoreConfig struct {\n Upgrades *StoreUpgrades\n InitialVersion uint64\n\n ReservePrefix(StoreKey, StoreType)\n}\n```\n\n\n\n\nIn contrast to `MultiStore`, `RootStore` doesn't allow to dynamically mount sub-stores or provide an arbitrary backing DB for individual sub-stores.\n\nNOTE: modules will be able to use a special commitment and their own DBs. For example: a module which will use ZK proofs for state can store and commit this proof in the `RootStore` (usually as a single record) and manage the specialized store privately or using the `SC` low level interface.\n\n#### Compatibility support\n\nTo ease the transition to this new interface for users, we can create a shim which wraps a `CommitMultiStore` but provides a `CommitRootStore` interface, and expose functions to safely create and access the underlying `CommitMultiStore`.\n\nThe new `RootStore` and supporting types can be implemented in a `store/v2alpha1` package to avoid breaking existing code.\n\n#### Merkle Proofs and IBC\n\nCurrently, an IBC (v1.0) Merkle proof path consists of two elements (`[\"\", \"\"]`), with each key corresponding to a separate proof. These are each verified according to individual [ICS-23 specs](https://github.com/cosmos/ibc-go/blob/f7051429e1cf833a6f65d51e6c3df1609290a549/modules/core/23-commitment/types/merkle.go#L17), and the result hash of each step is used as the committed value of the next step, until a root commitment hash is obtained.\nThe root hash of the proof for `\"\"` is hashed with the `\"\"` to validate against the App Hash.\n\nThis is not compatible with the `RootStore`, which stores all records in a single Merkle tree structure, and won't produce separate proofs for the store- and record-key. 
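The two-step verification described above can be sketched as follows; `verifyStep` is a stand-in that folds a key and a value with a plain hash, whereas real proofs are full Merkle branches verified according to the ICS-23 specs linked above:\n\n```go\npackage main\n\nimport (\n	\"crypto/sha256\"\n	\"fmt\"\n)\n\n// verifyStep stands in for one element of the IBC proof path: it folds a\n// key and a value into a commitment root. Plain hashing here only\n// illustrates the chaining; it is not the ICS-23 verification algorithm.\nfunc verifyStep(key, value []byte) []byte {\n	h := sha256.Sum256(append(append([]byte{}, key...), value...))\n	return h[:]\n}\n\nfunc main() {\n	// Step 1: the record-key proof yields the IBC store root.\n	storeRoot := verifyStep([]byte(\"clients/07-tendermint-0\"), []byte(\"client-state-bytes\"))\n	// Step 2: the store-key proof folds that root into the App Hash.\n	appHash := verifyStep([]byte(\"ibc\"), storeRoot)\n	fmt.Printf(\"%x\\n\", appHash)\n}\n```\n\nThe point is the chaining: the output root of the record-key step becomes the committed value of the store-key step, which is why the store-key element cannot simply be dropped from existing verifiers.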
Ideally, the store-key component of the proof could just be omitted, and updated to use a \"no-op\" spec, so only the record-key is used. However, because the IBC verification code hardcodes the `\"ibc\"` prefix and applies it to the SDK proof as a separate element of the proof path, this isn't possible without a breaking change. Breaking this behavior would severely impact the Cosmos ecosystem, which has already widely adopted the IBC module. Requesting an update of the IBC module across chains is a time-consuming effort and not easily feasible.\n\nAs a workaround, the `RootStore` will have to use two separate SMTs (they could use the same underlying DB): one for IBC state and one for everything else. A simple Merkle map that references these SMTs will act as a Merkle Tree to create a final App hash. The Merkle map is not stored in a DB - it's constructed at runtime. The IBC substore key must be `\"ibc\"`.\n\nThe workaround can still guarantee atomic syncs: the [proposed DB backends](#evaluated-kv-databases) support atomic transactions and efficient rollbacks, which will be used in the commit phase.\n\nThe presented workaround can be used until the IBC module is fully upgraded to support single-element commitment proofs.\n\n### Optimization: compress module key prefixes\n\nWe consider a compression of prefix keys by creating a mapping from module key to an integer, and serializing the integer using varint coding. Varint coding assures that different values don't share a common byte prefix. For Merkle Proofs we can't use prefix compression - so it should only apply to the `SS` keys. Moreover, the prefix compression should only be applied to the module namespace. More precisely:\n\n* each module has its own namespace;\n* when accessing a module namespace we create a KVStore with embedded prefix;\n* that prefix will be compressed only when accessing and managing `SS`.\n\nWe need to ensure that the codes won't change. 
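The prefix-compression scheme can be sketched as follows; `moduleCodes` and `compressKey` are hypothetical names for illustration, not SDK code:\n\n```go\npackage main\n\nimport (\n	\"encoding/binary\"\n	\"fmt\"\n)\n\n// Hypothetical static mapping from module name to integer code.\n// Once assigned, these codes must never change.\nvar moduleCodes = map[string]uint64{\n	\"bank\":    1,\n	\"staking\": 2,\n	\"gov\":     3,\n}\n\n// compressKey replaces the module-name prefix of an SS key with its\n// varint-encoded code. Varint encoding is prefix-free, so compressed\n// module namespaces cannot collide.\nfunc compressKey(module string, key []byte) []byte {\n	buf := make([]byte, binary.MaxVarintLen64)\n	n := binary.PutUvarint(buf, moduleCodes[module])\n	return append(buf[:n], key...)\n}\n\nfunc main() {\n	k := compressKey(\"staking\", []byte(\"validators/v1\"))\n	fmt.Printf(\"%x\\n\", k)\n}\n```\n\nA single varint byte replaces the multi-byte module name for small codes, which is where the `SS` space savings come from.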
We can fix the mapping in a static variable (provided by an app) or in SS state under a special key.\n\nTODO: make a decision about the key compression.\n\n### Optimization: SS key compression\n\nSome objects may be saved with a key which contains a Protobuf message type. Such keys are long. We could save a lot of space if we can map Protobuf message types to varints.\n\nTODO: finalize this or move to another ADR.\n\n## Migration\n\nUsing the new store will require a migration. Two migrations are proposed:\n\n1. Genesis export -- it will reset the blockchain history.\n2. In-place migration: we can reuse `UpgradeKeeper.SetUpgradeHandler` to provide the migration logic:\n\n```go\napp.UpgradeKeeper.SetUpgradeHandler(\"adr-40\", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {\n\n storev2.Migrate(iavlstore, v2.store)\n\n // RunMigrations returns the VersionMap\n // with the updated module ConsensusVersions\n return app.mm.RunMigrations(ctx, vm)\n})\n```\n\nThe `Migrate` function will read all entries from a store/v1 DB and save them to the ADR-40 combined KV store. \nThe cache layer should not be used and the operation must finish with a single Commit call.\n\nInserting records into the `SC` (SMT) component is the bottleneck. Unfortunately, SMT doesn't support batch transactions. \nAdding batch transactions to the `SC` layer is considered a feature for after the main release.\n\n## Consequences\n\n### Backwards Compatibility\n\nThis ADR doesn't introduce any Cosmos SDK level API changes.\n\nSince we change the storage layout of the state machine, a storage hard fork and network upgrade are required to incorporate these changes. SMT provides Merkle proof functionality; however, it is not compatible with ICS23. 
Updating the proofs for ICS23 compatibility is required.\n\n### Positive\n\n* Decoupling state from state commitment introduces better engineering opportunities for further optimizations and better storage patterns.\n* Performance improvements.\n* Joining the SMT-based camp, which has wider and proven adoption than IAVL. Example projects which decided on SMT: Ethereum2, Diem (Libra), Trillian, Tezos, Celestia.\n* Multistore removal fixes a longstanding issue with the current MultiStore design.\n* Simplifies merkle proofs - all modules, except IBC, need only one pass for a merkle proof.\n\n### Negative\n\n* Storage migration\n* LL SMT doesn't support pruning - we will need to add and test that functionality.\n* `SS` keys will have an overhead of a key prefix. This doesn't impact `SC` because all keys in `SC` have the same size (they are hashed).\n\n### Neutral\n\n* Deprecating IAVL, which is one of the core proposals of the Cosmos Whitepaper.\n\n## Alternative designs\n\nMost of the alternative designs were evaluated in [state commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h).\n\nEthereum research published [Verkle Trie](https://dankradfeist.de/ethereum/2021/06/18/verkle-trie-for-eth1.html) - an idea of combining polynomial commitments with a merkle tree in order to reduce the tree height. This concept has very good potential, but we think it's too early to implement it. The current SMT-based design could be easily updated to the Verkle Trie once other researchers implement all the necessary libraries. The main advantage of the design described in this ADR is the separation of state commitments from the data storage and designing a more powerful interface.\n\n## Further Discussions\n\n### Evaluated KV Databases\n\nWe evaluated existing KV databases for snapshot support. 
The following databases provide an efficient snapshot mechanism: Badger, RocksDB, [Pebble](https://github.com/cockroachdb/pebble). Databases which don't provide such support or are not production ready: boltdb, leveldb, goleveldb, memdb, lmdb.\n\n### RDBMS\n\nUse of an RDBMS instead of a simple KV store for state. Use of an RDBMS will require a Cosmos SDK API breaking change (`KVStore` interface) and will allow better data extraction and indexing solutions. Instead of saving an object as a single blob of bytes, we could save it as a record in a table in the state storage layer, and as a `hash(key, protobuf(object))` in the SMT as outlined above. To verify that an object registered in the RDBMS is the same as the one committed to the SMT, one will need to load it from the RDBMS, marshal it using protobuf, hash it, and do an SMT search.\n\n### Off Chain Store\n\nWe discussed a use case where modules can use a supporting database, which is not automatically committed. The module will be responsible for having a sound storage model and can optionally use the feature discussed in the _Committing to an object without saving it_ section.\n\n## References\n\n* [IAVL What's Next?](https://github.com/cosmos/cosmos-sdk/issues/7100)\n* [IAVL overview](https://docs.google.com/document/d/16Z_hW2rSAmoyMENO-RlAhQjAG3mSNKsQueMnKpmcBv0/edit#heading=h.yd2th7x3o1iv) of its state as of v0.15\n* [State commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h)\n* [Celestia (LazyLedger) SMT](https://github.com/lazyledger/smt)\n* Facebook Diem (Libra) SMT [design](https://developers.diem.com/papers/jellyfish-merkle-tree/2021-01-14.pdf)\n* [Trillian Revocation Transparency](https://github.com/google/trillian/blob/master/docs/papers/RevocationTransparency.pdf), [Trillian Verifiable Data Structures](https://github.com/google/trillian/blob/master/docs/papers/VerifiableDataStructures.pdf).\n* Design and implementation 
[discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297).\n* [How to Upgrade IBC Chains and their Clients](https://ibc.cosmos.network/main/ibc/upgrades/quick-guide/)\n* [ADR-40 Effect on IBC](https://github.com/cosmos/ibc-go/discussions/256)" + }, + { + "number": 41, + "filename": "adr-041-in-place-store-migrations.md", + "title": "ADR 041: In-Place Store Migrations", + "content": "# ADR 041: In-Place Store Migrations\n\n## Changelog\n\n* 17.02.2021: Initial Draft\n\n## Status\n\nAccepted\n\n## Abstract\n\nThis ADR introduces a mechanism to perform in-place state store migrations during chain software upgrades.\n\n## Context\n\nWhen a chain upgrade introduces state-breaking changes inside modules, the current procedure consists of exporting the whole state into a JSON file (via the `simd export` command), running migration scripts on the JSON file (`simd genesis migrate` command), clearing the stores (`simd unsafe-reset-all` command), and starting a new chain with the migrated JSON file as new genesis (optionally with a custom initial block height). An example of such a procedure can be seen [in the Cosmos Hub 3->4 migration guide](https://github.com/cosmos/gaia/blob/v4.0.3/docs/migration/cosmoshub-3.md#upgrade-procedure).\n\nThis procedure is cumbersome for multiple reasons:\n\n* The procedure takes time. 
It can take hours to run the `export` command, plus some additional hours to run `InitChain` on the fresh chain using the migrated JSON.\n* The exported JSON file can be heavy (~100MB-1GB), making it difficult to view, edit and transfer, which in turn introduces additional work to solve these problems (such as [streaming genesis](https://github.com/cosmos/cosmos-sdk/issues/6936)).\n\n## Decision\n\nWe propose a migration procedure based on modifying the KV store in-place without involving the JSON export-process-import flow described above.\n\n### Module `ConsensusVersion`\n\nWe introduce a new method on the `AppModule` interface:\n\n```go\ntype AppModule interface {\n // --snip--\n ConsensusVersion() uint64\n}\n```\n\nThis method returns a `uint64` which serves as the state-breaking version of the module. It MUST be incremented on each consensus-breaking change introduced by the module. To avoid potential errors with default values, the initial version of a module MUST be set to 1. In the Cosmos SDK, version 1 corresponds to the modules in the v0.41 series.\n\n### Module-Specific Migration Functions\n\nFor each consensus-breaking change introduced by the module, a migration script from ConsensusVersion `N` to version `N+1` MUST be registered in the `Configurator` using its newly-added `RegisterMigration` method. All modules receive a reference to the configurator in their `RegisterServices` method on `AppModule`, and this is where the migration functions should be registered. 
The migration functions should be registered in increasing order.\n\n```go\nfunc (am AppModule) RegisterServices(cfg module.Configurator) {\n // --snip--\n cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error {\n // Perform in-place store migrations from ConsensusVersion 1 to 2.\n })\n cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error {\n // Perform in-place store migrations from ConsensusVersion 2 to 3.\n })\n // etc.\n}\n```\n\nFor example, if the new ConsensusVersion of a module is `N`, then `N-1` migration functions MUST be registered in the configurator.\n\nIn the Cosmos SDK, the migration functions are handled by each module's keeper, because the keeper holds the `sdk.StoreKey` used to perform in-place store migrations. To avoid overloading the keeper, a `Migrator` wrapper is used by each module to handle the migration functions:\n\n```go\n// Migrator is a struct for handling in-place store migrations.\ntype Migrator struct {\n keeper BaseKeeper\n}\n```\n\nMigration functions should live inside the `migrations/` folder of each module, and be called by the Migrator's methods. We propose the format `Migrate{M}to{N}` for method names.\n\n```go\n// Migrate1to2 migrates from version 1 to 2.\nfunc (m Migrator) Migrate1to2(ctx sdk.Context) error {\n\treturn v2bank.MigrateStore(ctx, m.keeper.storeKey) // v2bank is package `x/bank/migrations/v2`.\n}\n```\n\nEach module's migration functions are specific to the module's store evolutions, and are not described in this ADR. An example of x/bank store key migrations after the introduction of ADR-028 length-prefixed addresses can be seen in this [store.go code](https://github.com/cosmos/cosmos-sdk/blob/36f68eb9e041e20a5bb47e216ac5eb8b91f95471/x/bank/legacy/v043/store.go#L41-L62).\n\n### Tracking Module Versions in `x/upgrade`\n\nWe introduce a new prefix store in `x/upgrade`'s store. 
This store tracks each module's current version. It can be modeled as a `map[string]uint64` of module name to module ConsensusVersion, and will be used when running the migrations (see next section for details). The key prefix used is `0x1`, and the key/value format is:\n\n```text\n0x1 | {bytes(module_name)} => BigEndian(module_consensus_version)\n```\n\nThe initial state of the store is set from `app.go`'s `InitChainer` method.\n\nThe UpgradeHandler signature needs to be updated to take a `VersionMap`, as well as return an upgraded `VersionMap` and an error:\n\n```diff\n- type UpgradeHandler func(ctx sdk.Context, plan Plan)\n+ type UpgradeHandler func(ctx sdk.Context, plan Plan, versionMap VersionMap) (VersionMap, error)\n```\n\nTo apply an upgrade, we query the `VersionMap` from the `x/upgrade` store and pass it into the handler. The handler runs the actual migration functions (see next section), and if successful, returns an updated `VersionMap` to be stored in state.\n\n```diff\nfunc (k UpgradeKeeper) ApplyUpgrade(ctx sdk.Context, plan types.Plan) {\n // --snip--\n- handler(ctx, plan)\n+ updatedVM, err := handler(ctx, plan, k.GetModuleVersionMap(ctx)) // k.GetModuleVersionMap() fetches the VersionMap stored in state.\n+ if err != nil {\n+ return err\n+ }\n+\n+ // Set the updated consensus versions to state\n+ k.SetModuleVersionMap(ctx, updatedVM)\n}\n```\n\nA gRPC query endpoint to query the `VersionMap` stored in `x/upgrade`'s state will also be added, so that app developers can double-check the `VersionMap` before the upgrade handler runs.\n\n### Running Migrations\n\nOnce all the migration handlers are registered inside the configurator (which happens at startup), running migrations can happen by calling the `RunMigrations` method on `module.Manager`. 
This function will loop through all modules, and for each module:\n\n* Get the old ConsensusVersion of the module from its `VersionMap` argument (let's call it `M`).\n* Fetch the new ConsensusVersion of the module from the `ConsensusVersion()` method on `AppModule` (call it `N`).\n* If `N>M`, run all registered migrations for the module sequentially `M -> M+1 -> M+2...` until `N`.\n * There is a special case where there is no ConsensusVersion for the module, as this means that the module has been newly added during the upgrade. In this case, no migration function is run, and the module's current ConsensusVersion is saved to `x/upgrade`'s store.\n\nIf a required migration is missing (e.g. if it has not been registered in the `Configurator`), then the `RunMigrations` function will error.\n\nIn practice, the `RunMigrations` method should be called from inside an `UpgradeHandler`.\n\n```go\napp.UpgradeKeeper.SetUpgradeHandler(\"my-plan\", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {\n return app.mm.RunMigrations(ctx, vm)\n})\n```\n\nAssuming a chain upgrades at block `N`, the procedure should run as follows:\n\n* the old binary will halt in `BeginBlock` when starting block `N`. In its store, the ConsensusVersions of the old binary's modules are stored.\n* the new binary will start at block `N`. The UpgradeHandler is set in the new binary, so it will run at `BeginBlock` of the new binary. Inside `x/upgrade`'s `ApplyUpgrade`, the `VersionMap` will be retrieved from the (old binary's) store, and passed into the `RunMigrations` function, migrating all module stores in-place before the modules' own `BeginBlock`s.\n\n## Consequences\n\n### Backwards Compatibility\n\nThis ADR introduces a new method `ConsensusVersion()` on `AppModule`, which all modules need to implement. It also alters the UpgradeHandler function signature. 
As such, it is not backwards-compatible.\n\nWhile modules MUST register their migration functions when bumping ConsensusVersions, running those scripts using an upgrade handler is optional. An application may perfectly well decide to not call the `RunMigrations` inside its upgrade handler, and continue using the legacy JSON migration path.\n\n### Positive\n\n* Perform chain upgrades without manipulating JSON files.\n* While no benchmark has been made yet, it is probable that in-place store migrations will take less time than JSON migrations. The main reason supporting this claim is that both the `simd export` command on the old binary and the `InitChain` function on the new binary will be skipped.\n\n### Negative\n\n* Module developers MUST correctly track consensus-breaking changes in their modules. If a consensus-breaking change is introduced in a module without its corresponding `ConsensusVersion()` bump, then the `RunMigrations` function won't detect the migration, and the chain upgrade might be unsuccessful. Documentation should clearly reflect this.\n\n### Neutral\n\n* The Cosmos SDK will continue to support JSON migrations via the existing `simd export` and `simd genesis migrate` commands.\n* The current ADR does not allow creating, renaming or deleting stores, only modifying existing store keys and values. 
The Cosmos SDK already has the `StoreLoader` for those operations.\n\n## Further Discussions\n\n## References\n\n* Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/8429\n* Implementation of `ConsensusVersion` and `RunMigrations`: https://github.com/cosmos/cosmos-sdk/pull/8485\n* Issue discussing `x/upgrade` design: https://github.com/cosmos/cosmos-sdk/issues/8514" + }, + { + "number": 42, + "filename": "adr-042-group-module.md", + "title": "ADR 042: Group Module", + "content": "# ADR 042: Group Module\n\n## Changelog\n\n* 2020/04/09: Initial Draft\n\n## Status\n\nDraft\n\n## Abstract\n\nThis ADR defines the `x/group` module which allows the creation and management of on-chain multi-signature accounts and enables voting for message execution based on configurable decision policies.\n\n## Context\n\nThe legacy amino multi-signature mechanism of the Cosmos SDK has certain limitations:\n\n* Key rotation is not possible, although this can be solved with [account rekeying](adr-034-account-rekeying.md).\n* Thresholds can't be changed.\n* UX is cumbersome for non-technical users ([#5661](https://github.com/cosmos/cosmos-sdk/issues/5661)).\n* It requires `legacy_amino` sign mode ([#8141](https://github.com/cosmos/cosmos-sdk/issues/8141)).\n\nWhile the group module is not meant to be a total replacement for the current multi-signature accounts, it provides a solution to the limitations described above, with a more flexible key management system where keys can be added, updated or removed, as well as configurable thresholds.\nIt's meant to be used with other access control modules such as [`x/feegrant`](./adr-029-fee-grant-module.md) and [`x/authz`](adr-030-authz-module.md) to simplify key management for individuals and organizations.\n\nThe proof of concept of the group module can be found in https://github.com/cosmos/cosmos-sdk/tree/main/proto/cosmos/group/v1 and https://github.com/cosmos/cosmos-sdk/tree/main/x/group.\n\n## Decision\n\nWe propose merging 
the `x/group` module with its supporting [ORM/Table Store package](https://github.com/cosmos/cosmos-sdk/tree/main/x/group/internal/orm) ([#7098](https://github.com/cosmos/cosmos-sdk/issues/7098)) into the Cosmos SDK and continuing development here. There will be a dedicated ADR for the ORM package.\n\n### Group\n\nA group is a composition of accounts with associated weights. It is not\nan account and doesn't have a balance. It doesn't in and of itself have any\nsort of voting or decision weight.\nGroup members can create proposals and vote on them through group accounts using different decision policies.\n\nIt has an `admin` account which can manage members in the group, update the group\nmetadata and set a new admin.\n\n```protobuf\nmessage GroupInfo {\n\n // group_id is the unique ID of this group.\n uint64 group_id = 1;\n\n // admin is the account address of the group's admin.\n string admin = 2;\n\n // metadata is any arbitrary metadata attached to the group.\n bytes metadata = 3;\n\n // version is used to track changes to a group's membership structure that\n // would break existing proposals. 
Whenever a member weight has changed,\n // or any member is added or removed, the version is incremented and will\n // invalidate all proposals from older versions.\n uint64 version = 4;\n\n // total_weight is the sum of the group members' weights.\n string total_weight = 5;\n}\n```\n\n```protobuf\nmessage GroupMember {\n\n // group_id is the unique ID of the group.\n uint64 group_id = 1;\n\n // member is the member data.\n Member member = 2;\n}\n\n// Member represents a group member with an account address,\n// non-zero weight and metadata.\nmessage Member {\n\n // address is the member's account address.\n string address = 1;\n\n // weight is the member's voting weight that should be greater than 0.\n string weight = 2;\n\n // metadata is any arbitrary metadata attached to the member.\n bytes metadata = 3;\n}\n```\n\n### Group Account\n\nA group account is an account associated with a group and a decision policy.\nA group account does have a balance.\n\nGroup accounts are abstracted from groups because a single group may have\nmultiple decision policies for different types of actions. Managing group\nmembership separately from decision policies results in the least overhead\nand keeps membership consistent across different policies. 
The pattern that\nis recommended is to have a single master group account for a given group,\nand then to create separate group accounts with different decision policies\nand delegate the desired permissions from the master account to\nthose \"sub-accounts\" using the [`x/authz` module](adr-030-authz-module.md).\n\n```protobuf\nmessage GroupAccountInfo {\n\n // address is the group account address.\n string address = 1;\n\n // group_id is the ID of the Group the GroupAccount belongs to.\n uint64 group_id = 2;\n\n // admin is the account address of the group admin.\n string admin = 3;\n\n // metadata is any arbitrary metadata of this group account.\n bytes metadata = 4;\n\n // version is used to track changes to a group's GroupAccountInfo structure that\n // invalidates active proposals from old versions.\n uint64 version = 5;\n\n // decision_policy specifies the group account's decision policy.\n google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = \"cosmos.group.v1.DecisionPolicy\"];\n}\n```\n\nSimilarly to a group admin, a group account admin can update its metadata, decision policy or set a new group account admin.\n\nA group account can also be an admin or a member of a group.\nFor instance, a group admin could be another group account which could \"elect\" the members, or it could be the same group that elects itself.\n\n### Decision Policy\n\nA decision policy is the mechanism by which members of a group can vote on\nproposals.\n\nAll decision policies should have a minimum and maximum voting window.\nThe minimum voting window is the minimum duration that must pass in order\nfor a proposal to potentially pass, and it may be set to 0. 
The maximum voting\nwindow is the maximum time that a proposal may be voted on and executed if\nit has reached enough support before it is closed.\nBoth of these values must be less than a chain-wide max voting window parameter.\n\nWe define the `DecisionPolicy` interface that all decision policies must implement:\n\n```go\ntype DecisionPolicy interface {\n\tcodec.ProtoMarshaler\n\n\tValidateBasic() error\n\tGetTimeout() types.Duration\n\tAllow(tally Tally, totalPower string, votingDuration time.Duration) (DecisionPolicyResult, error)\n\tValidate(g GroupInfo) error\n}\n\ntype DecisionPolicyResult struct {\n\tAllow bool\n\tFinal bool\n}\n```\n\n#### Threshold decision policy\n\nA threshold decision policy defines a minimum weighted sum of support votes (_yes_), based on a tally\nof voter weights, required for a proposal to pass. For\nthis decision policy, abstain and veto are treated as no support (_no_).\n\n```protobuf\nmessage ThresholdDecisionPolicy {\n\n // threshold is the minimum weighted sum of support votes for a proposal to succeed.\n string threshold = 1;\n\n // voting_period is the duration from submission of a proposal to the end of the voting period.\n // Within this period, votes and exec messages can be submitted.\n google.protobuf.Duration voting_period = 2 [(gogoproto.nullable) = false];\n}\n```\n\n### Proposal\n\nAny member of a group can submit a proposal for a group account to decide upon.\nA proposal consists of a set of `sdk.Msg`s that will be executed if the proposal\npasses as well as any metadata associated with the proposal. These `sdk.Msg`s get validated as part of the `Msg/CreateProposal` request validation. 
They should also have their signer set as the group account.\n\nInternally, a proposal also tracks:\n\n* its current `Status`: submitted, closed or aborted\n* its `Result`: unfinalized, accepted or rejected\n* its `VoteState` in the form of a `Tally`, which is calculated on new votes and when executing the proposal.\n\n```protobuf\n// Tally represents the sum of weighted votes.\nmessage Tally {\n option (gogoproto.goproto_getters) = false;\n\n // yes_count is the weighted sum of yes votes.\n string yes_count = 1;\n\n // no_count is the weighted sum of no votes.\n string no_count = 2;\n\n // abstain_count is the weighted sum of abstainers.\n string abstain_count = 3;\n\n // veto_count is the weighted sum of vetoes.\n string veto_count = 4;\n}\n```\n\n### Voting\n\nMembers of a group can vote on proposals. There are four voting options: yes, no, abstain and veto. Not\nall decision policies will support all of them. Votes can contain some optional metadata.\nIn the current implementation, the voting window begins as soon as a proposal\nis submitted.\n\nVoting internally updates the proposal `VoteState` as well as `Status` and `Result` if needed.\n\n### Executing Proposals\n\nProposals will not be automatically executed by the chain in this current design,\nbut rather a user must submit a `Msg/Exec` transaction to attempt to execute the\nproposal based on the current votes and decision policy. A future upgrade could\nautomate this and have the group account (or a fee granter) pay.\n\n#### Changing Group Membership\n\nIn the current implementation, updating a group or a group account after submitting a proposal will make it invalid. 
It will simply fail if someone calls `Msg/Exec` and will eventually be garbage collected.\n\n### Notes on current implementation\n\nThis section outlines the current implementation used in the proof of concept of the group module, but it is subject to change and iteration.\n\n#### ORM\n\nThe [ORM package](https://github.com/cosmos/cosmos-sdk/discussions/9156) defines tables, sequences and secondary indexes which are used in the group module.\n\nGroups are stored in state as part of a `groupTable`, the `group_id` being an auto-increment integer. Group members are stored in a `groupMemberTable`.\n\nGroup accounts are stored in a `groupAccountTable`. The group account address is generated based on an auto-increment integer which is used to derive the group module `RootModuleKey` into a `DerivedModuleKey`, as stated in [ADR-033](adr-033-protobuf-inter-module-comm.md#modulekeys-and-moduleids). The group account is added as a new `ModuleAccount` through `x/auth`.\n\nProposals are stored as part of the `proposalTable` using the `Proposal` type. The `proposal_id` is an auto-increment integer.\n\nVotes are stored in the `voteTable`. 
The primary key is based on the vote's `proposal_id` and `voter` account address.\n\n#### ADR-033 to route proposal messages\n\nInter-module communication introduced by [ADR-033](adr-033-protobuf-inter-module-comm.md) can be used to route a proposal's messages using the `DerivedModuleKey` corresponding to the proposal's group account.\n\n## Consequences\n\n### Positive\n\n* Improved UX for multi-signature accounts allowing key rotation and custom decision policies.\n\n### Negative\n\n### Neutral\n\n* It uses ADR 033 so it will need to be implemented within the Cosmos SDK, but this doesn't necessarily imply any large refactoring of existing Cosmos SDK modules.\n* The current implementation of the group module uses the ORM package.\n\n## Further Discussions\n\n* Convergence of `x/group` and `x/gov` as both support proposals and voting: https://github.com/cosmos/cosmos-sdk/discussions/9066\n* `x/group` possible future improvements:\n * Execute proposals on submission (https://github.com/regen-network/regen-ledger/issues/288)\n * Withdraw a proposal (https://github.com/regen-network/cosmos-modules/issues/41)\n * Make `Tally` more flexible and support non-binary choices\n\n## References\n\n* Initial specification:\n * https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#group-module\n * [#5236](https://github.com/cosmos/cosmos-sdk/pull/5236)\n* Proposal to add `x/group` into the Cosmos SDK: [#7633](https://github.com/cosmos/cosmos-sdk/issues/7633)"
  },
  {
    "number": 43,
    "filename": "adr-043-nft-module.md",
    "title": "ADR 43: NFT Module",
    "content": "# ADR 43: NFT Module\n\n## Changelog\n\n* 2021-05-01: Initial Draft\n* 2021-07-02: Review updates\n* 2022-06-15: Add batch operation\n* 2022-11-11: Remove strict validation of classID and tokenID\n\n## Status\n\nPROPOSED\n\n## Abstract\n\nThis ADR defines the `x/nft` module which is a generic implementation of NFTs, roughly \"compatible\" with ERC721. 
**Applications using the `x/nft` module must implement the following functions**:\n\n* `MsgNewClass` - Receive the user's request to create a class, and call the `NewClass` of the `x/nft` module.\n* `MsgUpdateClass` - Receive the user's request to update a class, and call the `UpdateClass` of the `x/nft` module.\n* `MsgMintNFT` - Receive the user's request to mint an NFT, and call the `MintNFT` of the `x/nft` module.\n* `BurnNFT` - Receive the user's request to burn an NFT, and call the `BurnNFT` of the `x/nft` module.\n* `UpdateNFT` - Receive the user's request to update an NFT, and call the `UpdateNFT` of the `x/nft` module.\n\n## Context\n\nNFTs are more than just crypto art, which is very helpful for accruing value to the Cosmos ecosystem. As a result, Cosmos Hub should implement NFT functions and enable a unified mechanism for storing and sending the ownership representative of NFTs as discussed in https://github.com/cosmos/cosmos-sdk/discussions/9065.\n\nAs discussed in [#9065](https://github.com/cosmos/cosmos-sdk/discussions/9065), several potential solutions can be considered:\n\n* irismod/nft and modules/incubator/nft\n* CW721\n* DID NFTs\n* interNFT\n\nSince functions/use cases of NFTs are tightly connected with their logic, it is almost impossible to support all the NFTs' use cases in one Cosmos SDK module by defining and implementing different transaction types.\n\nConsidering generic usage and compatibility of interchain protocols including IBC and Gravity Bridge, it is preferred to have a generic NFT module design which handles the generic NFT logic.\nThis design enables composability, where application-specific functions are managed by other modules on the Cosmos Hub or on other Zones by importing the NFT module.\n\nThe current design is based on the work done by [IRISnet team](https://github.com/irisnet/irismod/tree/master/modules/nft) and an older implementation in the [Cosmos 
repository](https://github.com/cosmos/modules/tree/master/incubator/nft).\n\n## Decision\n\nWe create an `x/nft` module, which contains the following functionality:\n\n* Store NFTs and track their ownership.\n* Expose `Keeper` interface for composing modules to transfer, mint and burn NFTs.\n* Expose external `Message` interface for users to transfer ownership of their NFTs.\n* Query NFTs and their supply information.\n\nThe proposed module is a base module for NFT app logic. Its goal is to provide a common layer for storage, basic transfer functionality and IBC. The module should not be used as a standalone.\nInstead, an app should create a specialized module to handle app-specific logic (e.g. NFT ID construction, royalties), user-level minting and burning. Moreover, an app-specific module should handle auxiliary data to support the app logic (e.g. indexes, ORM, business data).\n\nAll data carried over IBC must be part of the `NFT` or `Class` type described below. The app specific NFT data should be encoded in `NFT.data` for cross-chain integrity. Other objects related to the NFT, which are not important for integrity, can be part of the app specific module.\n\n### Types\n\nWe propose two main types:\n\n* `Class` -- describes NFT class. We can think about it as a smart contract address.\n* `NFT` -- object representing a unique, non-fungible asset. 
Each NFT is associated with a Class.\n\n#### Class\n\nNFT **Class** is comparable to an ERC-721 smart contract (provides description of a smart contract), under which a collection of NFTs can be created and managed.\n\n```protobuf\nmessage Class {\n string id = 1;\n string name = 2;\n string symbol = 3;\n string description = 4;\n string uri = 5;\n string uri_hash = 6;\n google.protobuf.Any data = 7;\n}\n```\n\n* `id` is used as the primary index for storing the class; _required_\n* `name` is a descriptive name of the NFT class; _optional_\n* `symbol` is the symbol usually shown on exchanges for the NFT class; _optional_\n* `description` is a detailed description of the NFT class; _optional_\n* `uri` is a URI for the class metadata stored off chain. It should be a JSON file that contains metadata about the NFT class and NFT data schema ([OpenSea example](https://docs.opensea.io/docs/contract-level-metadata)); _optional_\n* `uri_hash` is a hash of the document pointed by uri; _optional_\n* `data` is app specific metadata of the class; _optional_\n\n#### NFT\n\nWe define a general model for `NFT` as follows.\n\n```protobuf\nmessage NFT {\n string class_id = 1;\n string id = 2;\n string uri = 3;\n string uri_hash = 4;\n google.protobuf.Any data = 10;\n}\n```\n\n* `class_id` is the identifier of the NFT class where the NFT belongs; _required_\n* `id` is an identifier of the NFT, unique within the scope of its class. It is specified by the creator of the NFT and may be expanded to use DID in the future. `class_id` combined with `id` uniquely identifies an NFT and is used as the primary index for storing the NFT; _required_\n\n ```text\n {class_id}/{id} --> NFT (bytes)\n ```\n\n* `uri` is a URI for the NFT metadata stored off chain. 
Should point to a JSON file that contains metadata about this NFT (Ref: [ERC721 standard and OpenSea extension](https://docs.opensea.io/docs/metadata-standards)); _required_\n* `uri_hash` is a hash of the document pointed by uri; _optional_\n* `data` is app-specific data of the NFT. CAN be used by composing modules to specify additional properties of the NFT; _optional_\n\nThis ADR doesn't specify values that `data` can take; however, best practices recommend upper-level NFT modules clearly specify their contents. Although the value of this field doesn't provide the additional context required to manage NFT records, which means that the field can technically be removed from the specification, the field's existence allows basic informational/UI functionality.\n\n### `Keeper` Interface\n\n```go\ntype Keeper interface {\n NewClass(ctx sdk.Context, class Class)\n UpdateClass(ctx sdk.Context, class Class)\n\n Mint(ctx sdk.Context, nft NFT, receiver sdk.AccAddress) // updates totalSupply\n BatchMint(ctx sdk.Context, tokens []NFT, receiver sdk.AccAddress) error\n\n Burn(ctx sdk.Context, classId string, nftId string) // updates totalSupply\n BatchBurn(ctx sdk.Context, classID string, nftIDs []string) error\n\n Update(ctx sdk.Context, nft NFT)\n BatchUpdate(ctx sdk.Context, tokens []NFT) error\n\n Transfer(ctx sdk.Context, classId string, nftId string, receiver sdk.AccAddress)\n BatchTransfer(ctx sdk.Context, classID string, nftIDs []string, receiver sdk.AccAddress) error\n\n GetClass(ctx sdk.Context, classId string) Class\n GetClasses(ctx sdk.Context) []Class\n\n GetNFT(ctx sdk.Context, classId string, nftId string) NFT\n GetNFTsOfClassByOwner(ctx sdk.Context, classId string, owner sdk.AccAddress) []NFT\n GetNFTsOfClass(ctx sdk.Context, classId string) []NFT\n\n GetOwner(ctx sdk.Context, classId string, nftId string) sdk.AccAddress\n GetBalance(ctx sdk.Context, classId string, owner sdk.AccAddress) uint64\n GetTotalSupply(ctx sdk.Context, classId string) 
uint64\n}\n```\n\nOther business logic implementations should be defined in composing modules that import `x/nft` and use its `Keeper`.\n\n### `Msg` Service\n\n```protobuf\nservice Msg {\n rpc Send(MsgSend) returns (MsgSendResponse);\n}\n\nmessage MsgSend {\n string class_id = 1;\n string id = 2;\n string sender = 3;\n string receiver = 4;\n}\nmessage MsgSendResponse {}\n```\n\n`MsgSend` can be used to transfer the ownership of an NFT to another address.\n\nThe implementation outline of the server is as follows:\n\n```go\ntype msgServer struct{\n k Keeper\n}\n\nfunc (m msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {\n // check current ownership\n assertEqual(msg.Sender, m.k.GetOwner(msg.ClassId, msg.Id))\n\n // transfer ownership\n m.k.Transfer(msg.ClassId, msg.Id, msg.Receiver)\n\n return &types.MsgSendResponse{}, nil\n}\n```\n\nThe query service methods for the `x/nft` module are:\n\n```protobuf\nservice Query {\n // Balance queries the number of NFTs of a given class owned by the owner, same as balanceOf in ERC721\n rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse) {\n option (google.api.http).get = \"/cosmos/nft/v1beta1/balance/{owner}/{class_id}\";\n }\n\n // Owner queries the owner of the NFT based on its class and id, same as ownerOf in ERC721\n rpc Owner(QueryOwnerRequest) returns (QueryOwnerResponse) {\n option (google.api.http).get = \"/cosmos/nft/v1beta1/owner/{class_id}/{id}\";\n }\n\n // Supply queries the number of NFTs from the given class, same as totalSupply of ERC721.\n rpc Supply(QuerySupplyRequest) returns (QuerySupplyResponse) {\n option (google.api.http).get = \"/cosmos/nft/v1beta1/supply/{class_id}\";\n }\n\n // NFTs queries all NFTs of a given class or owner; at least one of the two must be provided, similar to tokenByIndex in ERC721Enumerable\n rpc NFTs(QueryNFTsRequest) returns (QueryNFTsResponse) {\n option (google.api.http).get = \"/cosmos/nft/v1beta1/nfts\";\n }\n\n // NFT queries an NFT 
based on its class and id.\n rpc NFT(QueryNFTRequest) returns (QueryNFTResponse) {\n option (google.api.http).get = \"/cosmos/nft/v1beta1/nfts/{class_id}/{id}\";\n }\n\n // Class queries an NFT class based on its id\n rpc Class(QueryClassRequest) returns (QueryClassResponse) {\n option (google.api.http).get = \"/cosmos/nft/v1beta1/classes/{class_id}\";\n }\n\n // Classes queries all NFT classes\n rpc Classes(QueryClassesRequest) returns (QueryClassesResponse) {\n option (google.api.http).get = \"/cosmos/nft/v1beta1/classes\";\n }\n}\n\n// QueryBalanceRequest is the request type for the Query/Balance RPC method\nmessage QueryBalanceRequest {\n string class_id = 1;\n string owner = 2;\n}\n\n// QueryBalanceResponse is the response type for the Query/Balance RPC method\nmessage QueryBalanceResponse {\n uint64 amount = 1;\n}\n\n// QueryOwnerRequest is the request type for the Query/Owner RPC method\nmessage QueryOwnerRequest {\n string class_id = 1;\n string id = 2;\n}\n\n// QueryOwnerResponse is the response type for the Query/Owner RPC method\nmessage QueryOwnerResponse {\n string owner = 1;\n}\n\n// QuerySupplyRequest is the request type for the Query/Supply RPC method\nmessage QuerySupplyRequest {\n string class_id = 1;\n}\n\n// QuerySupplyResponse is the response type for the Query/Supply RPC method\nmessage QuerySupplyResponse {\n uint64 amount = 1;\n}\n\n// QueryNFTsRequest is the request type for the Query/NFTs RPC method\nmessage QueryNFTsRequest {\n string class_id = 1;\n string owner = 2;\n cosmos.base.query.v1beta1.PageRequest pagination = 3;\n}\n\n// QueryNFTsResponse is the response type for the Query/NFTs RPC methods\nmessage QueryNFTsResponse {\n repeated cosmos.nft.v1beta1.NFT nfts = 1;\n cosmos.base.query.v1beta1.PageResponse pagination = 2;\n}\n\n// QueryNFTRequest is the request type for the Query/NFT RPC method\nmessage QueryNFTRequest {\n string class_id = 1;\n string id = 2;\n}\n\n// QueryNFTResponse is the response type for the Query/NFT RPC 
method\nmessage QueryNFTResponse {\n cosmos.nft.v1beta1.NFT nft = 1;\n}\n\n// QueryClassRequest is the request type for the Query/Class RPC method\nmessage QueryClassRequest {\n string class_id = 1;\n}\n\n// QueryClassResponse is the response type for the Query/Class RPC method\nmessage QueryClassResponse {\n cosmos.nft.v1beta1.Class class = 1;\n}\n\n// QueryClassesRequest is the request type for the Query/Classes RPC method\nmessage QueryClassesRequest {\n // pagination defines an optional pagination for the request.\n cosmos.base.query.v1beta1.PageRequest pagination = 1;\n}\n\n// QueryClassesResponse is the response type for the Query/Classes RPC method\nmessage QueryClassesResponse {\n repeated cosmos.nft.v1beta1.Class classes = 1;\n cosmos.base.query.v1beta1.PageResponse pagination = 2;\n}\n```\n\n### Interoperability\n\nInteroperability is all about reusing assets between modules and chains. The former is achieved by ADR-033 (Protobuf client-server communication). At the time of writing, ADR-033 is not finalized. The latter is achieved by IBC. Here we will focus on the IBC side.\nIBC is implemented per module. Here we agreed that NFTs will be recorded and managed in `x/nft`. This requires the creation and implementation of a new IBC standard.\n\nFor IBC interoperability, NFT custom modules MUST use the NFT object type understood by the IBC client. So, for x/nft interoperability, custom NFT implementations (example: x/cryptokitty) should use the canonical x/nft module and proxy all NFT balance keeping functionality to x/nft or else re-implement all functionality using the NFT object type understood by the IBC client. In other words: x/nft becomes the standard NFT registry for all Cosmos NFTs (example: x/cryptokitty will register a kitty NFT in x/nft and use x/nft for bookkeeping). This was [discussed](https://github.com/cosmos/cosmos-sdk/discussions/9065#discussioncomment-873206) in the context of using x/bank as a general asset balance book. 
Not using x/nft will require implementing another module for IBC.\n\n## Consequences\n\n### Backward Compatibility\n\nNo backward incompatibilities.\n\n### Forward Compatibility\n\nThis specification conforms to the ERC-721 smart contract specification for NFT identifiers. Note that ERC-721 defines uniqueness based on (contract address, uint256 tokenId), and we conform to this implicitly because a single module currently tracks NFT identifiers. Note: use of the (mutable) data field to determine uniqueness is not safe.\n\n### Positive\n\n* NFT identifiers available on Cosmos Hub.\n* Ability to build different NFT modules for the Cosmos Hub, e.g., ERC-721.\n* NFT module which supports interoperability with IBC and other cross-chain infrastructures like Gravity Bridge\n\n### Negative\n\n* New IBC app is required for x/nft\n* CW721 adapter is required\n\n### Neutral\n\n* Other functions need more modules. For example, a custody module is needed for the NFT trading function, and a collectible module is needed for defining NFT properties.\n\n## Further Discussions\n\nFor other kinds of applications on the Hub, more app-specific modules can be developed in the future:\n\n* `x/nft/custody`: custody of NFTs to support trading functionality.\n* `x/nft/marketplace`: selling and buying NFTs using sdk.Coins.\n* `x/fractional`: a module to split ownership of an asset (NFT or other assets) among multiple stakeholders. 
`x/group` should work for most cases.\n\nOther networks in the Cosmos ecosystem could design and implement their own NFT modules for specific NFT applications and use cases.\n\n## References\n\n* Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/9065\n* x/nft: initialize module: https://github.com/cosmos/cosmos-sdk/pull/9174\n* [ADR 033](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-033-protobuf-inter-module-comm.md)"
  },
  {
    "number": 44,
    "filename": "adr-044-protobuf-updates-guidelines.md",
    "title": "ADR 044: Guidelines for Updating Protobuf Definitions",
    "content": "# ADR 044: Guidelines for Updating Protobuf Definitions\n\n## Changelog\n\n* 28.06.2021: Initial Draft\n* 02.12.2021: Add `Since:` comment for new fields\n* 21.07.2022: Remove the rule of no new `Msg` in the same proto version.\n\n## Status\n\nDraft\n\n## Abstract\n\nThis ADR provides guidelines and recommended practices when updating Protobuf definitions. These guidelines target module developers.\n\n## Context\n\nThe Cosmos SDK maintains a set of [Protobuf definitions](https://github.com/cosmos/cosmos-sdk/tree/main/proto/cosmos). It is important to correctly design Protobuf definitions to avoid any breaking changes within the same version. The reason is to avoid breaking tooling (including indexers and explorers), wallets and other third-party integrations.\n\nWhen making changes to these Protobuf definitions, the Cosmos SDK currently only follows [Buf's](https://docs.buf.build/) recommendations. We noticed, however, that Buf's recommendations might still result in breaking changes in the SDK in some cases. For example:\n\n* Adding fields to `Msg`s. Adding fields is not a Protobuf spec-breaking operation. However, when adding new fields to `Msg`s, the unknown field rejection will throw an error when sending the new `Msg` to an older node.\n* Marking fields as `reserved`. 
Protobuf proposes the `reserved` keyword for removing fields without the need to bump the package version. However, by doing so, client backwards compatibility is broken as Protobuf doesn't generate anything for `reserved` fields. See [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) for more details on this issue.\n\nMoreover, module developers often face other questions around Protobuf definitions such as \"Can I rename a field?\" or \"Can I deprecate a field?\" This ADR aims to answer all these questions by providing clear guidelines about allowed updates for Protobuf definitions.\n\n## Decision\n\nWe decide to keep [Buf's](https://docs.buf.build/) recommendations with the following exceptions:\n\n* `UNARY_RPC`: the Cosmos SDK currently does not support streaming RPCs.\n* `COMMENT_FIELD`: the Cosmos SDK allows fields with no comments.\n* `SERVICE_SUFFIX`: we use the `Query` and `Msg` service naming convention, which doesn't use the `-Service` suffix.\n* `PACKAGE_VERSION_SUFFIX`: some packages, such as `cosmos.crypto.ed25519`, don't use a version suffix.\n* `RPC_REQUEST_STANDARD_NAME`: Requests for the `Msg` service don't have the `-Request` suffix to keep backwards compatibility.\n\nOn top of Buf's recommendations we add the following guidelines that are specific to the Cosmos SDK.\n\n### Updating Protobuf Definition Without Bumping Version\n\n#### 1. Module developers MAY add new Protobuf definitions\n\nModule developers MAY add new `message`s, new `Service`s, new `rpc` endpoints, and new fields to existing messages. This recommendation follows the Protobuf specification, but is added in this document for clarity, as the SDK requires one additional change.\n\nThe SDK requires the Protobuf comment of the new addition to contain one line with the following format:\n\n```protobuf\n// Since: cosmos-sdk {, ...}\n```\n\nWhere each `version` denotes a minor (\"0.45\") or patch (\"0.44.5\") version from which the field is available. 
This will greatly help client libraries, which can optionally use reflection or custom code generation to show/hide these fields depending on the targeted node version.\n\nAs examples, the following comments are valid:\n\n```protobuf\n// Since: cosmos-sdk 0.44\n\n// Since: cosmos-sdk 0.42.11, 0.44.5\n```\n\nand the following ones are NOT valid:\n\n```protobuf\n// Since cosmos-sdk v0.44\n\n// since: cosmos-sdk 0.44\n\n// Since: cosmos-sdk 0.42.11 0.44.5\n\n// Since: Cosmos SDK 0.42.11, 0.44.5\n```\n\n#### 2. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields\n\nProtobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields. If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version).\n\nAs an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-breaking changes, listed below. Instead of bumping the package versions from `v1beta1` to `v1`, the SDK team decided to follow this guideline, by reverting the breaking changes, marking those changes as deprecated, and modifying the node implementation when processing messages with deprecated fields. More specifically:\n\n* The Cosmos SDK recently removed support for [time-based software upgrades](https://github.com/cosmos/cosmos-sdk/pull/8849). As such, the `time` field has been marked as deprecated in `cosmos.upgrade.v1beta1.Plan`. Moreover, the node will reject any proposal containing an upgrade Plan whose `time` field is non-empty.\n* The Cosmos SDK now supports [governance split votes](./adr-037-gov-split-vote.md). 
When querying for votes, the returned `cosmos.gov.v1beta1.Vote` message has its `option` field (used for 1 vote option) deprecated in favor of its `options` field (allowing multiple vote options). Whenever possible, the SDK still populates the deprecated `option` field, that is, if and only if the `len(options) == 1` and `options[0].Weight == 1.0`.\n\n#### 3. Fields MUST NOT be renamed\n\nWhereas the official Protobuf recommendations do not prohibit renaming fields, as it does not break the Protobuf binary representation, the SDK explicitly forbids renaming fields in Protobuf structs. The main reason for this choice is to avoid introducing breaking changes for clients, which often rely on hard-coded fields from generated types. Moreover, renaming fields will lead to client-breaking JSON representations of Protobuf definitions, used in REST endpoints and in the CLI.\n\n### Incrementing Protobuf Package Version\n\nTODO, needs architecture review. Some topics:\n\n* Bumping versions frequency\n* When bumping versions, should the Cosmos SDK support both versions?\n * i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions?\n* mention ADR-023 Protobuf naming\n\n## Consequences\n\n> This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the \"positive\" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.\n\n### Backwards Compatibility\n\n> All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. 
ADR submissions without a sufficient backwards compatibility treatise may be rejected outright.\n\n### Positive\n\n* less pain to tool developers\n* more compatibility in the ecosystem\n* ...\n\n### Negative\n\n{negative consequences}\n\n### Neutral\n\n* more rigor in Protobuf review\n\n## Further Discussions\n\nThis ADR is still in the DRAFT stage, and the \"Incrementing Protobuf Package Version\" will be filled in once we make a decision on how to correctly do it.\n\n## Test Cases [optional]\n\nTest cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable.\n\n## References\n\n* [#9445](https://github.com/cosmos/cosmos-sdk/issues/9445) Release proto definitions v1\n* [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) Address v1beta1 proto breaking changes" + }, + { + "number": 45, + "filename": "adr-045-check-delivertx-middlewares.md", + "title": "ADR 045: BaseApp `{Check,Deliver}Tx` as Middlewares", + "content": "# ADR 045: BaseApp `{Check,Deliver}Tx` as Middlewares\n\n## Changelog\n\n* 20.08.2021: Initial draft.\n* 07.12.2021: Update `tx.Handler` interface ([\\#10693](https://github.com/cosmos/cosmos-sdk/pull/10693)).\n* 17.05.2022: ADR is abandoned, as middlewares are deemed too hard to reason about.\n\n## Status\n\nABANDONED. Replacement is being discussed in [#11955](https://github.com/cosmos/cosmos-sdk/issues/11955).\n\n## Abstract\n\nThis ADR replaces the current BaseApp `runTx` and antehandlers design with a middleware-based design.\n\n## Context\n\nBaseApp's implementation of ABCI `{Check,Deliver}Tx()` and its own `Simulate()` method call the `runTx` method under the hood, which first runs antehandlers, then executes `Msg`s. However, the [transaction Tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [refunding unused gas](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases require custom logic to be run after the `Msg`s execution. 
There is currently no way to achieve this.\n\nA naive solution would be to add post-`Msg` hooks to BaseApp. However, the Cosmos SDK team thinks in parallel about the bigger picture of making app wiring simpler ([#9182](https://github.com/cosmos/cosmos-sdk/discussions/9182)), which includes making BaseApp more lightweight and modular.\n\n## Decision\n\nWe decide to transform BaseApp's implementation of ABCI `{Check,Deliver}Tx` and its own `Simulate` methods to use a middleware-based design.\n\nThe two following interfaces are the base of the middleware design, and are defined in `types/tx`:\n\n```go\ntype Handler interface {\n CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error)\n DeliverTx(ctx context.Context, req Request) (Response, error)\n SimulateTx(ctx context.Context, req Request) (Response, error)\n}\n\ntype Middleware func(Handler) Handler\n```\n\nwhere we define the following arguments and return types:\n\n```go\ntype Request struct {\n\tTx sdk.Tx\n\tTxBytes []byte\n}\n\ntype Response struct {\n\tGasWanted uint64\n\tGasUsed uint64\n\t// MsgResponses is an array containing each Msg service handler's response\n\t// type, packed in an Any. This will get proto-serialized into the `Data` field\n\t// in the ABCI Check/DeliverTx responses.\n\tMsgResponses []*codectypes.Any\n\tLog string\n\tEvents []abci.Event\n}\n\ntype RequestCheckTx struct {\n\tType abci.CheckTxType\n}\n\ntype ResponseCheckTx struct {\n\tPriority int64\n}\n```\n\nPlease note that because CheckTx handles separate logic related to mempool prioritization, its signature is different from DeliverTx and SimulateTx.\n\nBaseApp holds a reference to a `tx.Handler`:\n\n```go\ntype BaseApp struct {\n // other fields\n txHandler tx.Handler\n}\n```\n\nBaseApp's ABCI `{Check,Deliver}Tx()` and `Simulate()` methods simply call `app.txHandler.{Check,Deliver,Simulate}Tx()` with the relevant arguments. 
For example, for `DeliverTx`:\n\n```go\nfunc (app *BaseApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {\n\tvar abciRes abci.ResponseDeliverTx\n\tctx := app.getContextForTx(runTxModeDeliver, req.Tx)\n\tres, err := app.txHandler.DeliverTx(ctx, tx.Request{TxBytes: req.Tx})\n\tif err != nil {\n\t\tabciRes = sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace)\n\t\treturn abciRes\n\t}\n\n\tabciRes, err = convertTxResponseToDeliverTx(res)\n\tif err != nil {\n\t\treturn sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace)\n\t}\n\n\treturn abciRes\n}\n\n// convertTxResponseToDeliverTx converts a tx.Response into an abci.ResponseDeliverTx.\nfunc convertTxResponseToDeliverTx(txRes tx.Response) (abci.ResponseDeliverTx, error) {\n\tdata, err := makeABCIData(txRes)\n\tif err != nil {\n\t\treturn abci.ResponseDeliverTx{}, err\n\t}\n\n\treturn abci.ResponseDeliverTx{\n\t\tData: data,\n\t\tLog: txRes.Log,\n\t\tEvents: txRes.Events,\n\t}, nil\n}\n\n// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx.\nfunc makeABCIData(txRes tx.Response) ([]byte, error) {\n\treturn proto.Marshal(&sdk.TxMsgData{MsgResponses: txRes.MsgResponses})\n}\n```\n\nThe implementations are similar for `BaseApp.CheckTx` and `BaseApp.Simulate`.\n\n`baseapp.txHandler`'s three methods' implementations can obviously be monolithic functions, but for modularity we propose a middleware composition design, where a middleware is simply a function that takes a `tx.Handler`, and returns another `tx.Handler` wrapped around the previous one.\n\n### Implementing a Middleware\n\nIn practice, middlewares are created by a Go function that takes as arguments some parameters needed for the middleware, and returns a `tx.Middleware`.\n\nFor example, for creating an arbitrary `MyMiddleware`, we can implement:\n\n```go\n// myTxHandler is the tx.Handler of this middleware. 
Note that it holds a\n// reference to the next tx.Handler in the stack.\ntype myTxHandler struct {\n // next is the next tx.Handler in the middleware stack.\n next tx.Handler\n // some other fields that are relevant to the middleware can be added here\n}\n\n// NewMyMiddleware returns a middleware that does this and that.\nfunc NewMyMiddleware(arg1, arg2) tx.Middleware {\n return func (txh tx.Handler) tx.Handler {\n return myTxHandler{\n next: txh,\n // optionally, set arg1, arg2... if they are needed in the middleware\n }\n }\n}\n\n// Assert myTxHandler is a tx.Handler.\nvar _ tx.Handler = myTxHandler{}\n\nfunc (h myTxHandler) CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error) {\n // CheckTx specific pre-processing logic\n\n // run the next middleware\n res, checkRes, err := h.next.CheckTx(ctx, req, checkReq)\n\n // CheckTx specific post-processing logic\n\n return res, checkRes, err\n}\n\nfunc (h myTxHandler) DeliverTx(ctx context.Context, req Request) (Response, error) {\n // DeliverTx specific pre-processing logic\n\n // run the next middleware\n res, err := h.next.DeliverTx(ctx, req)\n\n // DeliverTx specific post-processing logic\n\n return res, err\n}\n\nfunc (h myTxHandler) SimulateTx(ctx context.Context, req Request) (Response, error) {\n // SimulateTx specific pre-processing logic\n\n // run the next middleware\n res, err := h.next.SimulateTx(ctx, req)\n\n // SimulateTx specific post-processing logic\n\n return res, err\n}\n```\n\n### Composing Middlewares\n\nWhile BaseApp simply holds a reference to a `tx.Handler`, this `tx.Handler` itself is defined using a middleware stack. The Cosmos SDK exposes a base (i.e. innermost) `tx.Handler` called `RunMsgsTxHandler`, which executes messages.\n\nThen, the app developer can compose multiple middlewares on top of the base `tx.Handler`. Each middleware can run pre- and post-processing logic around its next middleware, as described in the section above. 
Conceptually, as an example, given the middlewares `A`, `B`, and `C` and the base `tx.Handler` `H` the stack looks like:\n\n```text\nA.pre\n B.pre\n C.pre\n H # The base tx.handler, for example `RunMsgsTxHandler`\n C.post\n B.post\nA.post\n```\n\nWe define a `ComposeMiddlewares` function for composing middlewares. It takes the base handler as first argument, and middlewares in the \"outer to inner\" order. For the above stack, the final `tx.Handler` is:\n\n```go\ntxHandler := middleware.ComposeMiddlewares(H, A, B, C)\n```\n\nThe middleware is set in BaseApp via its `SetTxHandler` setter:\n\n```go\n// simapp/app.go\n\ntxHandler := middleware.ComposeMiddlewares(...)\napp.SetTxHandler(txHandler)\n```\n\nThe app developer can define their own middlewares, or use the Cosmos SDK's pre-defined middlewares from `middleware.NewDefaultTxHandler()`.\n\n### Middlewares Maintained by the Cosmos SDK\n\nWhile the app developer can define and compose the middlewares of their choice, the Cosmos SDK provides a set of middlewares that caters for the ecosystem's most common use cases. These middlewares are:\n\n| Middleware | Description |\n| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| RunMsgsTxHandler | This is the base `tx.Handler`. It replaces the old baseapp's `runMsgs`, and executes a transaction's `Msg`s. |\n| TxDecoderMiddleware | This middleware takes in transaction raw bytes, and decodes them into a `sdk.Tx`. It replaces the `baseapp.txDecoder` field, so that BaseApp stays as thin as possible. 
Since most middlewares read the contents of the `sdk.Tx`, the TxDecoderMiddleware should be run first in the middleware stack. |\n| {Antehandlers} | Each antehandler is converted to its own middleware. These middlewares perform signature verification, fee deductions and other validations on the incoming transaction. |\n| IndexEventsTxMiddleware | This is a simple middleware that chooses which events to index in Tendermint. Replaces `baseapp.indexEvents` (which unfortunately still exists in baseapp too, because it's used to index Begin/EndBlock events) |\n| RecoveryTxMiddleware | This middleware recovers from panics. It replaces baseapp.runTx's panic recovery described in [ADR-022](./adr-022-custom-panic-handling.md). |\n| GasTxMiddleware | This replaces the [`Setup`](https://github.com/cosmos/cosmos-sdk/blob/v0.43.0/x/auth/ante/setup.go) Antehandler. It sets a GasMeter on sdk.Context. Note that before, GasMeter was set on sdk.Context inside the antehandlers, and there was some mess around the fact that antehandlers had their own panic recovery system so that the GasMeter could be read by baseapp's recovery system. Now, this mess is all removed: one middleware sets GasMeter, another one handles recovery. |\n\n### Similarities and Differences between Antehandlers and Middlewares\n\nThe middleware-based design builds upon the existing antehandlers design described in [ADR-010](./adr-010-modular-antehandler.md). 
Even though the final decision of ADR-010 was to go with the \"Simple Decorators\" approach, the middleware design is actually very similar to the other [Decorator Pattern](./adr-010-modular-antehandler.md#decorator-pattern) proposal, also used in [weave](https://github.com/iov-one/weave).\n\n#### Similarities with Antehandlers\n\n* Designed as chaining/composing small modular pieces.\n* Allow code reuse for `{Check,Deliver}Tx` and for `Simulate`.\n* Set up in `app.go`, and easily customizable by app developers.\n* Order is important.\n\n#### Differences with Antehandlers\n\n* The Antehandlers are run before `Msg` execution, whereas middlewares can run before and after.\n* The middleware approach uses separate methods for `{Check,Deliver,Simulate}Tx`, whereas the antehandlers pass a `simulate bool` flag and use the `sdkCtx.Is{Check,Recheck}Tx()` flags to determine in which transaction mode we are.\n* The middleware design lets each middleware hold a reference to the next middleware, whereas the antehandlers pass a `next` argument in the `AnteHandle` method.\n* The middleware design uses Go's standard `context.Context`, whereas the antehandlers use `sdk.Context`.\n\n## Consequences\n\n### Backwards Compatibility\n\nSince this refactor moves some logic out of BaseApp and into middlewares, it introduces API-breaking changes for app developers. 
Most notably, instead of creating an antehandler chain in `app.go`, app developers need to create a middleware stack:\n\n```diff\n- anteHandler, err := ante.NewAnteHandler(\n- ante.HandlerOptions{\n- AccountKeeper: app.AccountKeeper,\n- BankKeeper: app.BankKeeper,\n- SignModeHandler: encodingConfig.TxConfig.SignModeHandler(),\n- FeegrantKeeper: app.FeeGrantKeeper,\n- SigGasConsumer: ante.DefaultSigVerificationGasConsumer,\n- },\n-)\n+txHandler, err := authmiddleware.NewDefaultTxHandler(authmiddleware.TxHandlerOptions{\n+ Debug: app.Trace(),\n+ IndexEvents: indexEvents,\n+ LegacyRouter: app.legacyRouter,\n+ MsgServiceRouter: app.msgSvcRouter,\n+ LegacyAnteHandler: anteHandler,\n+ TxDecoder: encodingConfig.TxConfig.TxDecoder,\n+})\nif err != nil {\n panic(err)\n}\n- app.SetAnteHandler(anteHandler)\n+ app.SetTxHandler(txHandler)\n```\n\nOther more minor API-breaking changes will also be listed in the CHANGELOG. As usual, the Cosmos SDK will provide a release migration document for app developers.\n\nThis ADR does not introduce any state-machine-, client- or CLI-breaking changes.\n\n### Positive\n\n* Allow custom logic to be run before and after `Msg` execution. This enables the [tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [gas refund](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases, and possibly other ones.\n* Make BaseApp more lightweight, and defer complex logic to small modular components.\n* Separate paths for `{Check,Deliver,Simulate}Tx` with different return types. This allows for improved readability (replace `if sdkCtx.IsRecheckTx() && !simulate {...}` with separate methods) and more flexibility (e.g. returning a `priority` in `ResponseCheckTx`).\n\n### Negative\n\n* It is hard to understand at first glance the state updates that would occur after a middleware runs given the `sdk.Context` and `tx`. 
A middleware can have an arbitrary number of nested middleware being called within its function body, each possibly doing some pre- and post-processing before calling the next middleware on the chain. Thus to understand what a middleware is doing, one must also understand what every other middleware further along the chain is also doing, and the order of middlewares matters. This can get quite complicated to understand.\n* API-breaking changes for app developers.\n\n### Neutral\n\nNo neutral consequences.\n\n## Further Discussions\n\n* [#9934](https://github.com/cosmos/cosmos-sdk/discussions/9934) Decomposing BaseApp's other ABCI methods into middlewares.\n* Replace `sdk.Tx` interface with the concrete protobuf Tx type in the `tx.Handler` methods signature.\n\n## Test Cases\n\nWe update the existing baseapp and antehandlers tests to use the new middleware API, but keep the same test cases and logic, to avoid introducing regressions. Existing CLI tests will also be left untouched.\n\nFor new middlewares, we introduce unit tests. Since middlewares are purposefully small, unit tests suit well.\n\n## References\n\n* Initial discussion: https://github.com/cosmos/cosmos-sdk/issues/9585\n* Implementation: [#9920 BaseApp refactor](https://github.com/cosmos/cosmos-sdk/pull/9920) and [#10028 Antehandlers migration](https://github.com/cosmos/cosmos-sdk/pull/10028)" + }, + { + "number": 46, + "filename": "adr-046-module-params.md", + "title": "ADR 046: Module Params", + "content": "# ADR 046: Module Params\n\n## Changelog\n\n* Sep 22, 2021: Initial Draft\n\n## Status\n\nProposed\n\n## Abstract\n\nThis ADR describes an alternative approach to how Cosmos SDK modules use, interact,\nand store their respective parameters.\n\n## Context\n\nCurrently, in the Cosmos SDK, modules that require the use of parameters use the\n`x/params` module. 
The `x/params` module works by having modules define parameters,\ntypically via a simple `Params` structure, and registering that structure in\nthe `x/params` module via a unique `Subspace` that belongs to the respective\nregistering module. The registering module then has unique access to its respective\n`Subspace`. Through this `Subspace`, the module can get and set its `Params`\nstructure.\n\nIn addition, the Cosmos SDK's `x/gov` module has direct support for changing\nparameters on-chain via a `ParamChangeProposal` governance proposal type, where\nstakeholders can vote on suggested parameter changes.\n\nThere are various tradeoffs to using the `x/params` module to manage individual\nmodule parameters. Namely, managing parameters essentially comes for \"free\" in\nthat developers only need to define the `Params` struct, the `Subspace`, and the\nvarious auxiliary functions, e.g. `ParamSetPairs`, on the `Params` type. However,\nthere are some notable drawbacks. These drawbacks include the fact that parameters\nare serialized in state via JSON, which is extremely slow. In addition, parameter\nchanges via `ParamChangeProposal` governance proposals have no way of reading from\nor writing to state. In other words, it is currently not possible to have any\nstate transitions in the application during an attempt to change param(s).\n\n## Decision\n\nWe will build off of the alignment of `x/gov` and `x/authz` work per\n[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810). Namely, module developers\nwill create one or more unique parameter data structures that must be serialized\nto state. The Param data structures must implement the `sdk.Msg` interface with a respective\nProtobuf Msg service method which will validate and update the parameters with all\nnecessary changes. 
The `x/gov` module, via the work done in\n[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param\nmessages, which will be handled by Protobuf Msg services.\n\nNote, it is up to developers to decide how to structure their parameters and\nthe respective `sdk.Msg` messages. Consider the parameters currently defined in\n`x/auth` using the `x/params` module for parameter management:\n\n```protobuf\nmessage Params {\n uint64 max_memo_characters = 1;\n uint64 tx_sig_limit = 2;\n uint64 tx_size_cost_per_byte = 3;\n uint64 sig_verify_cost_ed25519 = 4;\n uint64 sig_verify_cost_secp256k1 = 5;\n}\n```\n\nDevelopers can choose to either create a unique data structure for every field in\n`Params` or they can create a single `Params` structure as outlined above in the\ncase of `x/auth`.\n\nIn the former approach, a `sdk.Msg` would need to be created for every single\nfield along with a handler. This can become burdensome if there are a lot of\nparameter fields. In the latter case, there is only a single data structure and\nthus only a single message handler, however, the message handler might have to be\nmore sophisticated in that it might need to understand what parameters are being\nchanged vs what parameters are untouched.\n\nParams change proposals are made using the `x/gov` module. 
Execution is done through\n`x/authz` authorization to the root `x/gov` module's account.\n\nContinuing to use `x/auth`, we demonstrate a more complete example:\n\n```go\ntype Params struct {\n\tMaxMemoCharacters uint64\n\tTxSigLimit uint64\n\tTxSizeCostPerByte uint64\n\tSigVerifyCostED25519 uint64\n\tSigVerifyCostSecp256k1 uint64\n}\n\ntype MsgUpdateParams struct {\n\tMaxMemoCharacters uint64\n\tTxSigLimit uint64\n\tTxSizeCostPerByte uint64\n\tSigVerifyCostED25519 uint64\n\tSigVerifyCostSecp256k1 uint64\n}\n\ntype MsgUpdateParamsResponse struct {}\n\nfunc (ms msgServer) UpdateParams(goCtx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) {\n ctx := sdk.UnwrapSDKContext(goCtx)\n\n // verification logic...\n\n // persist params\n params := ParamsFromMsg(msg)\n ms.SaveParams(ctx, params)\n\n return &types.MsgUpdateParamsResponse{}, nil\n}\n\nfunc ParamsFromMsg(msg *types.MsgUpdateParams) Params {\n // ...\n}\n```\n\nA gRPC `Service` query should also be provided, for example:\n\n```protobuf\nservice Query {\n // ...\n \n rpc Params(QueryParamsRequest) returns (QueryParamsResponse) {\n option (google.api.http).get = \"/cosmos//v1beta1/params\";\n }\n}\n\nmessage QueryParamsResponse {\n Params params = 1 [(gogoproto.nullable) = false];\n}\n```\n\n## Consequences\n\nAs a result of implementing the module parameter methodology, we gain the ability\nfor module parameter changes to be stateful and extensible to fit nearly every\napplication's use case. 
We will be able to emit events (and trigger hooks registered\nto those events using the work proposed in [event hooks](https://github.com/cosmos/cosmos-sdk/discussions/9656)),\ncall other Msg service methods or perform migrations.\nIn addition, there will be significant gains in performance when it comes to reading\nand writing parameters from and to state, especially if a specific set of parameters\nis read on a consistent basis.\n\nHowever, this methodology will require developers to implement more types and\nMsg service methods, which can become burdensome if many parameters exist. In addition,\ndevelopers are required to implement the persistence logic for module parameters.\nThat said, this should be trivial.\n\n### Backwards Compatibility\n\nThe new method for working with module parameters is naturally not backwards\ncompatible with the existing `x/params` module. However, the `x/params` module will\nremain in the Cosmos SDK and will be marked as deprecated with no additional\nfunctionality being added apart from potential bug fixes. 
Note, the `x/params`\nmodule may be removed entirely in a future release.\n\n### Positive\n\n* Module parameters are serialized more efficiently\n* Modules are able to react to parameter changes and perform additional actions.\n* Special events can be emitted, allowing hooks to be triggered.\n\n### Negative\n\n* Module parameters become slightly more burdensome for module developers:\n * Modules are now responsible for persisting and retrieving parameter state\n * Modules are now required to have unique message handlers to handle parameter\n changes per unique parameter data structure.\n\n### Neutral\n\n* Requires [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810) to be reviewed\n and merged.\n\n\n\n## References\n\n* https://github.com/cosmos/cosmos-sdk/pull/9810\n* https://github.com/cosmos/cosmos-sdk/issues/9438\n* https://github.com/cosmos/cosmos-sdk/discussions/9913"
  },
  {
    "number": 47,
    "filename": "adr-047-extend-upgrade-plan.md",
    "title": "ADR 047: Extend Upgrade Plan",
    "content": "# ADR 047: Extend Upgrade Plan\n\n## Changelog\n\n* Nov 23, 2021: Initial Draft\n* May 16, 2023: Proposal ABANDONED. 
`pre_run` and `post_run` are not necessary anymore and adding the `artifacts` brings minor benefits.\n\n## Status\n\nABANDONED\n\n## Abstract\n\nThis ADR expands the existing x/upgrade `Plan` proto message to include new fields for defining pre-run and post-run processes within upgrade tooling.\nIt also defines a structure for providing downloadable artifacts involved in an upgrade.\n\n## Context\n\nThe `upgrade` module in conjunction with Cosmovisor are designed to facilitate and automate a blockchain's transition from one version to another.\n\nUsers submit a software upgrade governance proposal containing an upgrade `Plan`.\nThe [Plan](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto#L12) currently contains the following fields:\n\n* `name`: A short string identifying the new version.\n* `height`: The chain height at which the upgrade is to be performed.\n* `info`: A string containing information about the upgrade.\n\nThe `info` string can be anything.\nHowever, Cosmovisor will try to use the `info` field to automatically download a new version of the blockchain executable.\nFor the auto-download to work, Cosmovisor expects it to be either a stringified JSON object (with a specific structure defined through documentation), or a URL that will return such JSON.\nThe JSON object identifies URLs used to download the new blockchain executable for different platforms (OS and Architecture, e.g. 
\"linux/amd64\").\nSuch a URL can either return the executable file directly or can return an archive containing the executable and possibly other assets.\n\nIf the URL returns an archive, it is decompressed into `{DAEMON_HOME}/cosmovisor/{upgrade name}`.\nThen, if `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}` does not exist, but `{DAEMON_HOME}/cosmovisor/{upgrade name}/{DAEMON_NAME}` does, the latter is copied to the former.\nIf the URL returns something other than an archive, it is downloaded to `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}`.\n\nIf an upgrade height is reached and the new version of the executable isn't available, Cosmovisor will stop running.\n\nBoth `DAEMON_HOME` and `DAEMON_NAME` are [environment variables used to configure Cosmovisor](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md#command-line-arguments-and-environment-variables).\n\nCurrently, there is no mechanism that makes Cosmovisor run a command after the upgraded chain has been restarted.\n\nThe current upgrade process has this timeline:\n\n1. An upgrade governance proposal is submitted and approved.\n1. The upgrade height is reached.\n1. The `x/upgrade` module writes the `upgrade-info.json` file.\n1. The chain halts.\n1. Cosmovisor backs up the data directory (if set up to do so).\n1. Cosmovisor downloads the new executable (if not already in place).\n1. Cosmovisor executes `${DAEMON_NAME} pre-upgrade`.\n1. 
Cosmovisor restarts the app using the new version and same args originally provided.\n\n## Decision\n\n### Protobuf Updates\n\nWe will update the `x/upgrade.Plan` message for providing upgrade instructions.\nThe upgrade instructions will contain a list of artifacts available for each platform.\nIt allows for the definition of a pre-run and post-run commands.\nThese commands are not consensus guaranteed; they will be executed by Cosmovisor (or other) during its upgrade handling.\n\n```protobuf\nmessage Plan {\n // ... (existing fields)\n\n UpgradeInstructions instructions = 6;\n}\n```\n\nThe new `UpgradeInstructions instructions` field MUST be optional.\n\n```protobuf\nmessage UpgradeInstructions {\n string pre_run = 1;\n string post_run = 2;\n repeated Artifact artifacts = 3;\n string description = 4;\n}\n```\n\nAll fields in the `UpgradeInstructions` are optional.\n\n* `pre_run` is a command to run prior to the upgraded chain restarting.\n If defined, it will be executed after halting and downloading the new artifact but before restarting the upgraded chain.\n The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`.\n This command MUST behave the same as the current [pre-upgrade](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md) command.\n It does not take in any command-line arguments and is expected to terminate with the following exit codes:\n\n | Exit status code | How it is handled in Cosmovisor |\n |------------------|---------------------------------------------------------------------------------------------------------------------|\n | `0` | Assumes `pre-upgrade` command executed successfully and continues the upgrade. |\n | `1` | Default exit code when `pre-upgrade` command has not been implemented. |\n | `30` | `pre-upgrade` command was executed but failed. This fails the entire upgrade. |\n | `31` | `pre-upgrade` command was executed but failed. 
But the command is retried until exit code `1` or `30` is returned. |\n If defined, then the app supervisors (e.g. Cosmovisor) MUST NOT run `app pre-upgrade`.\n\n* `post_run` is a command to run after the upgraded chain has been started. If defined, this command MUST be executed at most once by an upgrading node.\n The output and exit code SHOULD be logged but SHOULD NOT affect the running of the upgraded chain.\n The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`.\n* `artifacts` defines items to be downloaded.\n There SHOULD be only one entry per platform.\n* `description` contains human-readable information about the upgrade and might contain references to external resources.\n It SHOULD NOT be used for structured processing information.\n\n```protobuf\nmessage Artifact {\n string platform = 1;\n string url = 2;\n string checksum = 3;\n string checksum_algo = 4;\n}\n```\n\n* `platform` is a required string that SHOULD be in the format `{OS}/{CPU}`, e.g. 
`\"linux/amd64\"`.\n The string `\"any\"` SHOULD also be allowed.\n An `Artifact` with a `platform` of `\"any\"` SHOULD be used as a fallback when a specific `{OS}/{CPU}` entry is not found.\n That is, if an `Artifact` exists with a `platform` that matches the system's OS and CPU, that should be used;\n otherwise, if an `Artifact` exists with a `platform` of `any`, that should be used;\n otherwise no artifact should be downloaded.\n* `url` is a required URL string that MUST conform to [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt).\n A request to this `url` MUST return either an executable file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`.\n The URL should not contain checksum - it should be specified by the `checksum` attribute.\n* `checksum` is a checksum of the expected result of a request to the `url`.\n It is not required, but is recommended.\n If provided, it MUST be a hex encoded checksum string.\n Tools utilizing these `UpgradeInstructions` MUST fail if a `checksum` is provided but is different from the checksum of the result returned by the `url`.\n* `checksum_algo` is a string identifying the algorithm used to generate the `checksum`.\n Recommended algorithms: `sha256`, `sha512`.\n Algorithms also supported (but not recommended): `sha1`, `md5`.\n If a `checksum` is provided, a `checksum_algo` MUST also be provided.\n\nA `url` is not required to contain a `checksum` query parameter.\nIf the `url` does contain a `checksum` query parameter, the `checksum` and `checksum_algo` fields MUST also be populated, and their values MUST match the value of the query parameter.\nFor example, if the `url` is `\"https://example.com?checksum=md5:d41d8cd98f00b204e9800998ecf8427e\"`, then the `checksum` field must be `\"d41d8cd98f00b204e9800998ecf8427e\"` and the `checksum_algo` field must be `\"md5\"`.\n\n### Upgrade Module Updates\n\nIf an upgrade `Plan` does not use the new `UpgradeInstructions` field, existing 
functionality will be maintained.\nThe parsing of the `info` field as either a URL or `binaries` JSON will be deprecated.\nDuring validation, if the `info` field is used as such, a warning will be issued, but not an error.\n\nWe will update the creation of the `upgrade-info.json` file to include the `UpgradeInstructions`.\n\nWe will update the optional validation available via CLI to account for the new `Plan` structure.\nWe will add the following validation:\n\n1. If `UpgradeInstructions` are provided:\n 1. There MUST be at least one entry in `artifacts`.\n 1. All of the `artifacts` MUST have a unique `platform`.\n 1. For each `Artifact`, if the `url` contains a `checksum` query parameter:\n 1. The `checksum` query parameter value MUST be in the format of `{checksum_algo}:{checksum}`.\n 1. The `{checksum}` from the query parameter MUST equal the `checksum` provided in the `Artifact`.\n 1. The `{checksum_algo}` from the query parameter MUST equal the `checksum_algo` provided in the `Artifact`.\n1. The following validation is currently done using the `info` field. We will apply similar validation to the `UpgradeInstructions`.\n For each `Artifact`:\n 1. The `platform` MUST have the format `{OS}/{CPU}` or be `\"any\"`.\n 1. The `url` field MUST NOT be empty.\n 1. The `url` field MUST be a proper URL.\n 1. A `checksum` MUST be provided either in the `checksum` field or as a query parameter in the `url`.\n 1. If the `checksum` field has a value and the `url` also has a `checksum` query parameter, the two values MUST be equal.\n 1. The `url` MUST return either a file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`.\n 1. 
If a `checksum` is provided (in the field or as a query param), the checksum of the result of the `url` MUST equal the provided checksum.\n\nDownloading of an `Artifact` will happen the same way that URLs from `info` are currently downloaded.\n\n### Cosmovisor Updates\n\nIf the `upgrade-info.json` file does not contain any `UpgradeInstructions`, existing functionality will be maintained.\n\nWe will update Cosmovisor to look for and handle the new `UpgradeInstructions` in `upgrade-info.json`.\nIf the `UpgradeInstructions` are provided, we will do the following:\n\n1. The `info` field will be ignored.\n1. The `artifacts` field will be used to identify the artifact to download based on the `platform` that Cosmovisor is running in.\n1. If a `checksum` is provided (either in the field or as a query param in the `url`), and the downloaded artifact has a different checksum, the upgrade process will be interrupted and Cosmovisor will exit with an error.\n1. If a `pre_run` command is defined, it will be executed at the same point in the process where the `app pre-upgrade` command would have been executed.\n It will be executed using the same environment as other commands run by Cosmovisor.\n1. If a `post_run` command is defined, it will be executed after executing the command that restarts the chain.\n It will be executed in a background process using the same environment as the other commands.\n Any output generated by the command will be logged.\n Once complete, the exit code will be logged.\n\nWe will deprecate the use of the `info` field for anything other than human readable information.\nA warning will be logged if the `info` field is used to define the assets (either by URL or JSON).\n\nThe new upgrade timeline is very similar to the current one. Changes are in bold:\n\n1. An upgrade governance proposal is submitted and approved.\n1. The upgrade height is reached.\n1. 
The `x/upgrade` module writes the `upgrade-info.json` file **(now possibly with `UpgradeInstructions`)**.\n1. The chain halts.\n1. Cosmovisor backs up the data directory (if set up to do so).\n1. Cosmovisor downloads the new executable (if not already in place).\n1. Cosmovisor executes **the `pre_run` command if provided**, or else the `${DAEMON_NAME} pre-upgrade` command.\n1. Cosmovisor restarts the app using the new version and the same args originally provided.\n1. **Cosmovisor immediately runs the `post_run` command in a detached process.**\n\n## Consequences\n\n### Backwards Compatibility\n\nSince the only change to existing definitions is the addition of the `instructions` field to the `Plan` message, and that field is optional, there are no backwards incompatibilities with respect to the proto messages.\nAdditionally, current behavior will be maintained when no `UpgradeInstructions` are provided, so there are no backwards incompatibilities with respect to either the upgrade module or Cosmovisor.\n\n### Forwards Compatibility\n\nIn order to utilize the `UpgradeInstructions` as part of a software upgrade, both of the following must be true:\n\n1. The chain must already be using a sufficiently advanced version of the Cosmos SDK.\n1. The chain's nodes must be using a sufficiently advanced version of Cosmovisor.\n\n### Positive\n\n1. The structure for defining artifacts is clearer since it is now defined in the proto instead of in documentation.\n1. Availability of a pre-run command becomes more obvious.\n1. A post-run command becomes possible.\n\n### Negative\n\n1. The `Plan` message becomes larger. This is negligible because A) the `x/upgrades` module only stores at most one upgrade plan, and B) upgrades are rare enough that the increased gas cost isn't a concern.\n1. There is no option for providing a URL that will return the `UpgradeInstructions`.\n1. 
The only way to provide multiple assets (executables and other files) for a platform is to use an archive as the platform's artifact.\n\n### Neutral\n\n1. Existing functionality of the `info` field is maintained when the `UpgradeInstructions` aren't provided.\n\n## Further Discussions\n\n1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r698708349):\n Consider different names for `UpgradeInstructions instructions` (either the message type or field name).\n1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r754655072):\n 1. Consider putting the `string platform` field inside `UpgradeInstructions` and make `UpgradeInstructions` a repeated field in `Plan`.\n 1. Consider using a `oneof` field in the `Plan` which could either be `UpgradeInstructions` or else a URL that should return the `UpgradeInstructions`.\n 1. Consider allowing `info` to either be a JSON serialized version of `UpgradeInstructions` or else a URL that returns that.\n1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r755462876):\n Consider not including the `UpgradeInstructions.description` field, using the `info` field for that purpose instead.\n1. 
[Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r754643691):\n Consider allowing multiple artifacts to be downloaded for any given `platform` by adding a `name` field to the `Artifact` message.\n1. [PR #10502 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288)\n Allow the new `UpgradeInstructions` to be provided via URL.\n1. [PR #10502 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288)\n Allow definition of a `signer` for assets (as an alternative to using a `checksum`).\n\n## References\n\n* [Current upgrade.proto](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto)\n* [Upgrade Module README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/x/upgrade/spec/README.md)\n* [Cosmovisor README](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md)\n* [Pre-upgrade README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md)\n* [Draft/POC PR #10032](https://github.com/cosmos/cosmos-sdk/pull/10032)\n* [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt)" + }, + { + "number": 48, + "filename": "adr-048-consensus-fees.md", + "title": "ADR 048: Multi Tier Gas Price System", + "content": "# ADR 048: Multi Tier Gas Price System\n\n## Changelog\n\n* Dec 1, 2021: Initial Draft\n\n## Status\n\nRejected\n\n## Abstract\n\nThis ADR describes a flexible mechanism to maintain a consensus level gas prices, in which one can choose a multi-tier gas price system or EIP-1559 like one through configuration.\n\n## Context\n\nCurrently, each validator configures it's own `minimal-gas-prices` in `app.yaml`. 
But setting a proper minimal gas price is critical to protect the network from DoS attacks, and it's hard for all the validators to pick a sensible value, so we propose to maintain a gas price at the consensus level.\n\nSince Tendermint 0.34.20 supports mempool prioritization, we can take advantage of that to implement a more sophisticated gas fee system.\n\n## Multi-Tier Price System\n\nWe propose a multi-tier price system on consensus to provide maximum flexibility:\n\n* Tier 1: a constant gas price, which could only be modified occasionally through a governance proposal.\n* Tier 2: a dynamic gas price which is adjusted according to previous block load.\n* Tier 3: a dynamic gas price which is adjusted according to previous block load at a higher speed.\n\nThe gas price of a higher tier should be greater than that of a lower tier.\n\nThe transaction fees are charged with the exact gas price calculated on consensus.\n\nThe parameter schema is like this:\n\n```protobuf\nmessage TierParams {\n uint32 priority = 1 // priority in tendermint mempool\n Coin initial_gas_price = 2 //\n uint32 parent_gas_target = 3 // the target saturation of block\n uint32 change_denominator = 4 // decides the change speed\n Coin min_gas_price = 5 // optional lower bound of the price adjustment\n Coin max_gas_price = 6 // optional upper bound of the price adjustment\n}\n\nmessage Params {\n repeated TierParams tiers = 1;\n}\n```\n\n### Extension Options\n\nWe need to allow users to specify the tier of service for a transaction. To support this in an extensible way, we add an extension option in `AuthInfo`:\n\n```protobuf\nmessage ExtensionOptionsTieredTx {\n uint32 fee_tier = 1\n}\n```\n\nThe value of `fee_tier` is just the index into the `tiers` parameter list.\n\nWe also change the semantics of the existing `fee` field of `Tx`: instead of charging the user the exact `fee` amount, we treat it as a fee cap, while the actual amount of fee charged is decided dynamically. 
If the `fee` is smaller than the dynamic one, the transaction won't be included in the current block and ideally should stay in the mempool until the consensus gas price drops. The mempool can eventually prune old transactions.\n\n### Tx Prioritization\n\nTransactions are prioritized based on the tier: the higher the tier, the higher the priority.\n\nWithin the same tier, transactions follow the default Tendermint order (currently FIFO). Be aware that the mempool tx ordering logic is not part of consensus and can be modified by a malicious validator.\n\nThis mechanism can be easily composed with other prioritization mechanisms:\n\n* we can add extra tiers outside of user control:\n * Example 1: a user can set tier 0, 10 or 20, but the protocol will create tiers 0, 1, 2 ... 29. For example, IBC transactions will go to tier `user_tier + 5`: if the user selected tier 10, then the transaction will go to tier 15.\n * Example 2: we can reserve tiers 4, 5, ... only for special transaction types. For example, tier 5 is reserved for evidence tx. So if a user submits a bank.Send transaction and sets tier 5, it will be delegated to tier 3 (the max tier level available for any transaction). \n * Example 3: we can enforce that all transactions of a specific type will go to a specific tier. 
For example, tier 100 will be reserved for evidence transactions and all evidence transactions will always go to that tier.\n\n### `min-gas-prices`\n\nDeprecate the current per-validator `min-gas-prices` configuration, since it would be confusing for it to work together with the consensus gas price.\n\n### Adjust For Block Load\n\nFor tier 2 and tier 3 transactions, the gas price is adjusted according to previous block load; the logic could be similar to EIP-1559:\n\n```python\ndef adjust_gas_price(gas_price, parent_gas_used, tier):\n if parent_gas_used == tier.parent_gas_target:\n return gas_price\n elif parent_gas_used > tier.parent_gas_target:\n gas_used_delta = parent_gas_used - tier.parent_gas_target\n gas_price_delta = max(gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed, 1)\n return gas_price + gas_price_delta\n else:\n gas_used_delta = tier.parent_gas_target - parent_gas_used\n gas_price_delta = gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed\n return gas_price - gas_price_delta\n```\n\n### Block Segment Reservation\n\nIdeally we should reserve block segments for each tier, so the lower-tier transactions won't be completely squeezed out by higher-tier transactions, which would force users to use higher tiers and degrade the system to a single tier.\n\nWe need help from Tendermint to implement this.\n\n## Implementation\n\nWe can make each tier's gas price strategy fully configurable in protocol parameters, while providing a sensible default one.\n\nPseudocode in python-like syntax:\n\n```python\ninterface TieredTx:\n def tier(self) -> int:\n pass\n\ndef tx_tier(tx):\n if isinstance(tx, TieredTx):\n return tx.tier()\n else:\n # default tier for custom transactions\n return 0\n # NOTE: we can add more rules here per \"Tx Prioritization\" section \n\nclass TierParams:\n 'gas price strategy parameters of one tier'\n priority: int # priority in tendermint mempool\n initial_gas_price: Coin\n parent_gas_target: int\n 
change_speed: Decimal # 0 means don't adjust for block load.\n\nclass Params:\n 'protocol parameters'\n tiers: List[TierParams]\n\nclass State:\n 'consensus state'\n # total gas used in last block, None when it's the first block\n parent_gas_used: Optional[int]\n # gas prices of last block for all tiers\n gas_prices: List[Coin]\n\ndef begin_block():\n 'Adjust gas prices'\n for i, tier in enumerate(Params.tiers):\n if State.parent_gas_used is None:\n # initialize gas price for the first block\n State.gas_prices[i] = tier.initial_gas_price\n else:\n # adjust gas price according to gas used in previous block\n State.gas_prices[i] = adjust_gas_price(State.gas_prices[i], State.parent_gas_used, tier)\n\ndef mempoolFeeTxHandler_checkTx(ctx, tx):\n # the minimal-gas-price configured by the validator, zero in deliver_tx context\n validator_price = ctx.MinGasPrice()\n consensus_price = State.gas_prices[tx_tier(tx)]\n min_price = max(validator_price, consensus_price)\n\n # zero means infinity for gas price cap\n if tx.gas_price() > 0 and tx.gas_price() < min_price:\n return 'insufficient fees'\n return next_CheckTx(ctx, tx)\n\ndef txPriorityHandler_checkTx(ctx, tx):\n res, err = next_CheckTx(ctx, tx)\n # pass priority to tendermint\n res.Priority = Params.tiers[tx_tier(tx)].priority\n return res, err\n\ndef end_block():\n 'Update block gas used'\n State.parent_gas_used = block_gas_meter.consumed()\n```\n\n### DoS Attack Protection\n\nTo fully saturate the blocks and prevent other transactions from executing, an attacker needs to use transactions of the highest tier, whose cost would be significantly higher than the default tier.\n\nIf an attacker spams with lower-tier transactions, users can mitigate by sending higher-tier transactions.\n\n## Consequences\n\n### Backwards Compatibility\n\n* New protocol parameters.\n* New consensus states.\n* New/changed fields in transaction body.\n\n### Positive\n\n* The default tier keeps the same predictable gas price experience for clients.\n* The 
higher tier's gas price can adapt to block load.\n* No priority conflict with custom priority based on transaction types, since this proposal only occupies three priority levels.\n* Possibility to compose different priority rules with tiers.\n\n### Negative\n\n* Wallets & tools need to update to support the new `tier` parameter, and the semantics of the `fee` field are changed.\n\n### Neutral\n\n## References\n\n* https://eips.ethereum.org/EIPS/eip-1559\n* https://iohk.io/en/blog/posts/2021/11/26/network-traffic-and-tiered-pricing/" + }, + { + "number": 49, + "filename": "adr-049-state-sync-hooks.md", + "title": "ADR 049: State Sync Hooks", + "content": "# ADR 049: State Sync Hooks\n\n## Changelog\n\n* Jan 19, 2022: Initial Draft\n* Apr 29, 2022: Safer extension snapshotter interface\n\n## Status\n\nImplemented\n\n## Abstract\n\nThis ADR outlines a hooks-based mechanism for application modules to provide additional state (outside of the IAVL tree) to be used \nduring state sync.\n\n## Context\n\nNew clients use state-sync to download snapshots of module state from peers. Currently, the snapshot consists of a\nstream of `SnapshotStoreItem` and `SnapshotIAVLItem`, which means that application modules that define their state outside of the IAVL \ntree cannot include their state as part of the state-sync process.\n\nNote that even though the module state data is outside of the tree, for determinism we require that the hash of the external data \nbe posted in the IAVL tree.\n\n## Decision\n\nA simple proposal based on our existing implementation is that we add two new message types: `SnapshotExtensionMeta` \nand `SnapshotExtensionPayload`, which are appended to the existing multi-store stream with `SnapshotExtensionMeta` \nacting as a delimiter between extensions. 
As the chunk hashes should be able to ensure data integrity, we don't need \na delimiter to mark the end of the snapshot stream.\n\nIn addition, we provide `Snapshotter` and `ExtensionSnapshotter` interfaces for modules to implement snapshotters, which handle both taking \nsnapshots and restoration. Each module could have multiple snapshotters, and modules with additional state should\nimplement `ExtensionSnapshotter` as extension snapshotters. When setting up the application, the snapshot `Manager` should call \n`RegisterExtensions([]ExtensionSnapshotter…)` to register all the extension snapshotters.\n\n```protobuf\n// SnapshotItem is an item contained in a rootmulti.Store snapshot.\n// On top of the existing SnapshotStoreItem and SnapshotIAVLItem, we add two new options for the item.\nmessage SnapshotItem {\n // item is the specific type of snapshot item.\n oneof item {\n SnapshotStoreItem store = 1;\n SnapshotIAVLItem iavl = 2 [(gogoproto.customname) = \"IAVL\"];\n SnapshotExtensionMeta extension = 3;\n SnapshotExtensionPayload extension_payload = 4;\n }\n}\n\n// SnapshotExtensionMeta contains metadata about an external snapshotter.\n// One module may need multiple snapshotters, so each module may have multiple SnapshotExtensionMeta.\nmessage SnapshotExtensionMeta {\n // the name of the ExtensionSnapshotter; it is registered to the snapshotter manager when setting up the application\n // name should be unique for each ExtensionSnapshotter as we need to alphabetically order their snapshots to get a\n // deterministic snapshot stream.\n string name = 1;\n // this is used by each ExtensionSnapshotter to decide the format of payloads included in the SnapshotExtensionPayload message\n // it is used within the snapshotter namespace, not globally across all modules\n uint32 format = 2;\n}\n\n// SnapshotExtensionPayload contains payloads of an external snapshotter.\nmessage SnapshotExtensionPayload {\n bytes payload = 1;\n}\n```\n\nWhen we create a snapshot 
stream, the `multistore` snapshot is always placed at the beginning of the binary stream, and other extension snapshots are alphabetically ordered by the name of the corresponding `ExtensionSnapshotter`. \n\nThe snapshot stream would look like as follows:\n\n```go\n// multi-store snapshot\n{SnapshotStoreItem | SnapshotIAVLItem, ...}\n// extension1 snapshot\nSnapshotExtensionMeta\n{SnapshotExtensionPayload, ...}\n// extension2 snapshot\nSnapshotExtensionMeta\n{SnapshotExtensionPayload, ...}\n```\n\nWe add an `extensions` field to snapshot `Manager` for extension snapshotters. The `multistore` snapshotter is a special one and it doesn't need a name because it is always placed at the beginning of the binary stream.\n\n```go\ntype Manager struct {\n\tstore *Store\n\tmultistore types.Snapshotter\n\textensions map[string]types.ExtensionSnapshotter\n\tmtx sync.Mutex\n\toperation operation\n\tchRestore chan<- io.ReadCloser\n\tchRestoreDone <-chan restoreDone\n\trestoreChunkHashes [][]byte\n\trestoreChunkIndex uint32\n}\n```\n\nFor extension snapshotters that implement the `ExtensionSnapshotter` interface, their names should be registered to the snapshot `Manager` by \ncalling `RegisterExtensions` when setting up the application. The snapshotters will handle both taking snapshot and restoration.\n\n```go\n// RegisterExtensions register extension snapshotters to manager\nfunc (m *Manager) RegisterExtensions(extensions ...types.ExtensionSnapshotter) error \n```\n\nOn top of the existing `Snapshotter` interface for the `multistore`, we add `ExtensionSnapshotter` interface for the extension snapshotters. 
Three more function signatures: `SnapshotFormat()`, `SupportedFormats()` and `SnapshotName()` are added to `ExtensionSnapshotter`.\n\n```go\n// ExtensionPayloadReader reads extension payloads;\n// it returns io.EOF when it reaches either the end of the stream or the extension boundary.\ntype ExtensionPayloadReader = func() ([]byte, error)\n\n// ExtensionPayloadWriter is a helper to write extension payloads to the underlying stream.\ntype ExtensionPayloadWriter = func([]byte) error\n\n// ExtensionSnapshotter is an extension Snapshotter that is appended to the snapshot stream.\n// ExtensionSnapshotter has a unique name and manages its own internal formats.\ntype ExtensionSnapshotter interface {\n\t// SnapshotName returns the name of the snapshotter; it should be unique in the manager.\n\tSnapshotName() string\n\n\t// SnapshotFormat returns the default format used to take a snapshot.\n\tSnapshotFormat() uint32\n\n\t// SupportedFormats returns a list of formats it can restore from.\n\tSupportedFormats() []uint32\n\n\t// SnapshotExtension writes extension payloads into the underlying protobuf stream.\n\tSnapshotExtension(height uint64, payloadWriter ExtensionPayloadWriter) error\n\n\t// RestoreExtension restores an extension state snapshot;\n\t// the payload reader returns `io.EOF` when it reaches the extension boundary.\n\tRestoreExtension(height uint64, format uint32, payloadReader ExtensionPayloadReader) error\n}\n```\n\n## Consequences\n\nAs a result of this implementation, we are able to create snapshots of a binary chunk stream for the state that we maintain outside of the IAVL tree, CosmWasm blobs for example. New clients are able to fetch snapshots of state for all modules that have implemented the corresponding interface from peer nodes. 
\n\n### Backwards Compatibility\n\nThis ADR introduces new proto message types, adds an `extensions` field in the snapshot `Manager`, and adds a new `ExtensionSnapshotter` interface, so this is not backwards compatible if we have extensions.\n\nBut for applications that do not have state data outside of the IAVL tree for any module, the snapshot stream is backwards-compatible.\n\n### Positive\n\n* State maintained outside of the IAVL tree, such as CosmWasm blobs, can be included in snapshots by implementing extension snapshotters, and fetched by new clients via state-sync.\n\n### Negative\n\n### Neutral\n\n* All modules that maintain state outside of the IAVL tree need to implement `ExtensionSnapshotter`, and the snapshot `Manager` needs to call `RegisterExtensions` when setting up the application.\n\n## Further Discussions\n\nWhile an ADR is in the DRAFT or PROPOSED stage, this section should contain a summary of issues to be solved in future iterations (usually referencing comments from a pull-request discussion).\nLater, this section can optionally list ideas or improvements the author or reviewers found during the analysis of this ADR.\n\n## Test Cases [optional]\n\nTest cases for an implementation are mandatory for ADRs that affect consensus changes. 
Other ADRs can choose to include links to test cases if applicable.\n\n## References\n\n* https://github.com/cosmos/cosmos-sdk/pull/10961\n* https://github.com/cosmos/cosmos-sdk/issues/7340\n* https://hackmd.io/gJoyev6DSmqqkO667WQlGw" + }, + { + "number": 50, + "filename": "adr-050-sign-mode-textual.md", + "title": "ADR 050: SIGN_MODE_TEXTUAL", + "content": "# ADR 050: SIGN_MODE_TEXTUAL\n\n## Changelog\n\n* Dec 06, 2021: Initial Draft.\n* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team.\n* May 16, 2022: Change status to Accepted.\n* Aug 11, 2022: Require signing over tx raw bytes.\n* Sep 07, 2022: Add custom `Msg`-renderers.\n* Sep 18, 2022: Structured format instead of lines of text\n* Nov 23, 2022: Specify CBOR encoding.\n* Dec 01, 2022: Link to examples in separate JSON file.\n* Dec 06, 2022: Re-ordering of envelope screens.\n* Dec 14, 2022: Mention exceptions for invertibility.\n* Jan 23, 2023: Switch Screen.Text to Title+Content.\n* Mar 07, 2023: Change SignDoc from array to struct containing array.\n* Mar 20, 2023: Introduce a spec version initialized to 0.\n\n## Status\n\nAccepted. Implementation started. Small value renderers details still need to be polished.\n\nSpec version: 0.\n\n## Abstract\n\nThis ADR specifies SIGN_MODE_TEXTUAL, a new string-based sign mode that is targeted at signing with hardware devices.\n\n## Context\n\nProtobuf-based SIGN_MODE_DIRECT was introduced in [ADR-020](./adr-020-protobuf-transaction-encoding.md) and is intended to replace SIGN_MODE_LEGACY_AMINO_JSON in most situations, such as mobile wallets and CLI keyrings. However, the [Ledger](https://www.ledger.com/) hardware wallet is still using SIGN_MODE_LEGACY_AMINO_JSON for displaying the sign bytes to the user. Hardware wallets cannot transition to SIGN_MODE_DIRECT as:\n\n* SIGN_MODE_DIRECT is binary-based and thus not suitable for display to end-users. Technically, hardware wallets could simply display the sign bytes to the user. 
But this would be considered as blind signing, and is a security concern.\n* hardware cannot decode the protobuf sign bytes due to memory constraints, as the Protobuf definitions would need to be embedded on the hardware device.\n\nIn an effort to remove Amino from the SDK, a new sign mode needs to be created for hardware devices. [Initial discussions](https://github.com/cosmos/cosmos-sdk/issues/6513) propose a text-based sign mode, which this ADR formally specifies.\n\n## Decision\n\nIn SIGN_MODE_TEXTUAL, a transaction is rendered into a textual representation,\nwhich is then sent to a secure device or subsystem for the user to review and sign.\nUnlike `SIGN_MODE_DIRECT`, the transmitted data can be simply decoded into legible text\neven on devices with limited processing and display.\n\nThe textual representation is a sequence of _screens_.\nEach screen is meant to be displayed in its entirety (if possible) even on a small device like a Ledger.\nA screen is roughly equivalent to a short line of text.\nLarge screens can be displayed in several pieces,\nmuch as long lines of text are wrapped,\nso no hard guidance is given, though 40 characters is a good target.\nA screen is used to display a single key/value pair for scalar values\n(or composite values with a compact notation, such as `Coins`)\nor to introduce or conclude a larger grouping.\n\nThe text can contain the full range of Unicode code points, including control characters and nul.\nThe device is responsible for deciding how to display characters it cannot render natively.\nSee [annex 2](./adr-050-sign-mode-textual-annex2.md) for guidance.\n\nScreens have a non-negative indentation level to signal composite or nested structures.\nIndentation level zero is the top level.\nIndentation is displayed via some device-specific mechanism.\nMessage quotation notation is an appropriate model, such as\nleading `>` characters or vertical bars on more capable displays.\n\nSome screens are marked as _expert_ 
screens,\nmeant to be displayed only if the viewer chooses to opt in for the extra detail.\nExpert screens are meant for information that is rarely useful,\nor needs to be present only for signature integrity (see below).\n\n### Invertible Rendering\n\nWe require that the rendering of the transaction be invertible:\nthere must be a parsing function such that for every transaction,\nwhen rendered to the textual representation,\nparsing that representation yields a proto message equivalent\nto the original under proto equality.\n\nNote that this inverse function does not need to perform correct\nparsing or error signaling for the whole domain of textual data.\nMerely that the range of valid transactions be invertible under\nthe composition of rendering and parsing.\n\nNote that the existence of an inverse function ensures that the\nrendered text contains the full information of the original transaction,\nnot a hash or subset.\n\nWe make an exception for invertibility for data which are too large to\nmeaningfully display, such as byte strings longer than 32 bytes. We may then\nselectively render them with a cryptographically-strong hash. In these cases,\nit is still computationally infeasible to find a different transaction which\nhas the same rendering. However, we must ensure that the hash computation is\nsimple enough to be reliably executed independently, so at least the hash is\nitself reasonably verifiable when the raw byte string is not.\n\n### Chain State\n\nThe rendering function (and parsing function) may depend on the current chain state.\nThis is useful for reading parameters, such as coin display metadata,\nor for reading user-specific preferences such as language or address aliases.\nNote that if the observed state changes between signature generation\nand the transaction's inclusion in a block, the delivery-time rendering\nmight differ. 
If so, the signature will be invalid and the transaction\nwill be rejected.\n\n### Signature and Security\n\nFor security, transaction signatures should have three properties:\n\n1. Given the transaction, signatures, and chain state, it must be possible to validate that the signatures match the transaction,\nto verify that the signers must have known their respective secret keys.\n\n2. It must be computationally infeasible to find a substantially different transaction for which the given signatures are valid, given the same chain state.\n\n3. The user should be able to give informed consent to the signed data via a simple, secure device with limited display capabilities.\n\nThe correctness and security of `SIGN_MODE_TEXTUAL` is guaranteed by demonstrating an inverse function from the rendering to transaction protos.\nThis means that it is impossible for a different protocol buffer message to render to the same text.\n\n### Transaction Hash Malleability\n\nWhen client software forms a transaction, the \"raw\" transaction (`TxRaw`) is serialized as a proto\nand a hash of the resulting byte sequence is computed.\nThis is the `TxHash`, and is used by various services to track the submitted transaction through its lifecycle.\nVarious misbehavior is possible if one can generate a modified transaction with a different TxHash\nbut for which the signature still checks out.\n\nSIGN_MODE_TEXTUAL prevents this transaction malleability by including the TxHash as an expert screen\nin the rendering.\n\n### SignDoc\n\nThe SignDoc for `SIGN_MODE_TEXTUAL` is formed from a data structure like:\n\n```go\ntype Screen struct {\n Title string // possibly size-limited; 64 characters advised\n Content string // possibly size-limited; 255 characters advised\n Indent uint8 // size limited to something small like 16 or 32\n Expert bool\n}\n\ntype SignDocTextual struct {\n Screens []Screen\n}\n```\n\nWe do not plan to use protobuf serialization to form the sequence of bytes\nthat 
will be transmitted and signed, in order to keep the decoder simple.\nWe will use [CBOR](https://cbor.io) ([RFC 8949](https://www.rfc-editor.org/rfc/rfc8949.html)) instead.\nThe encoding is defined by the following CDDL ([RFC 8610](https://www.rfc-editor.org/rfc/rfc8610)):\n\n```\n;;; CDDL (RFC 8610) Specification of SignDoc for SIGN_MODE_TEXTUAL.\n;;; Must be encoded using CBOR deterministic encoding (RFC 8949, section 4.2.1).\n\n;; A Textual document is a struct containing one field: an array of screens.\nsign_doc = {\n screens_key: [* screen],\n}\n\n;; The key is an integer to keep the encoding small.\nscreens_key = 1\n\n;; A screen consists of a text string, an indentation, and the expert flag,\n;; represented as an integer-keyed map. All entries are optional\n;; and MUST be omitted from the encoding if empty, zero, or false.\n;; Text defaults to the empty string, indent defaults to zero,\n;; and expert defaults to false.\nscreen = {\n ? title_key: tstr,\n ? content_key: tstr,\n ? indent_key: uint,\n ? expert_key: bool,\n}\n\n;; Keys are small integers to keep the encoding small.\ntitle_key = 1\ncontent_key = 2\nindent_key = 3\nexpert_key = 4\n```\n\nDefining the sign_doc directly as an array of screens has also been considered. However, given the possibility of future iterations of this specification, using a single-keyed struct has been chosen over the array proposal, as structs allow for easier backwards-compatibility.\n\n## Details\n\nIn the examples that follow, screens will be shown as lines of text,\nindentation is indicated with a leading '>',\nand expert screens are marked with a leading `*`.\n\n### Encoding of the Transaction Envelope\n\nWe define \"transaction envelope\" as all data in a transaction that is not in the `TxBody.Messages` field. The transaction envelope includes fee, signer infos and memo, but does not include `Msg`s. 
`//` denotes comments, which are not shown on the Ledger device.\n\n```\nChain ID: <string>\nAccount number: <uint64>\nSequence: <uint64>\nAddress: <string>\n*Public Key: <Any>\nThis transaction has <int> Message(s) // Pluralize \"Message\" only when int>1\n> Message (<int>/<int>): <Any> // See value renderers for Any rendering.\nEnd of Message\nMemo: <string> // Skipped if no memo set.\nFee: <coins> // See value renderers for coins rendering.\n*Fee payer: <string> // Skipped if no fee_payer set.\n*Fee granter: <string> // Skipped if no fee_granter set.\nTip: <coins> // Skipped if no tip.\nTipper: <string>\n*Gas Limit: <uint64>\n*Timeout Height: <uint64> // Skipped if no timeout_height set.\n*Other signer: <int> SignerInfo // Skipped if the transaction only has 1 signer.\n*> Other signer (<int>/<int>): <SignerInfo>\n*End of other signers\n*Extension options: <int> Any: // Skipped if no body extension options\n*> Extension options (<int>/<int>): <Any>\n*End of extension options\n*Non critical extension options: <int> Any: // Skipped if no body non critical extension options\n*> Non critical extension options (<int>/<int>): <Any>\n*End of Non critical extension options\n*Hash of raw bytes: <hex_string> // Hex encoding of raw bytes, to prevent tx hash malleability.\n```\n\n### Encoding of the Transaction Body\n\nTransaction Body is the `Tx.TxBody.Messages` field, which is an array of `Any`s, where each `Any` packs a `sdk.Msg`. Since `sdk.Msg`s are widely used, they have a slightly different encoding than the usual array of `Any`s (Protobuf: `repeated google.protobuf.Any`) described in Annex 1.\n\n```\nThis transaction has <int> message: // Optional 's' for \"message\" if there's >1 sdk.Msgs.\n// For each Msg, print the following 2 lines:\nMsg (<int>/<int>): <string> // E.g. 
Msg (1/2): bank v1beta1 send coins\n\nEnd of transaction messages\n```\n\n#### Example\n\nGiven the following Protobuf message:\n\n```protobuf\nmessage Grant {\n google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = \"cosmos.authz.v1beta1.Authorization\"];\n google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];\n}\n\nmessage MsgGrant {\n option (cosmos.msg.v1.signer) = \"granter\";\n\n string granter = 1 [(cosmos_proto.scalar) = \"cosmos.AddressString\"];\n string grantee = 2 [(cosmos_proto.scalar) = \"cosmos.AddressString\"];\n}\n```\n\nand a transaction containing 1 such `sdk.Msg`, we get the following encoding:\n\n```\nThis transaction has 1 message:\nMsg (1/1): authz v1beta1 grant\nGranter: cosmos1abc...def\nGrantee: cosmos1ghi...jkl\nEnd of transaction messages\n```\n\n### Custom `Msg` Renderers\n\nApplication developers may choose not to follow the default value renderer output for their own `Msg`s. In this case, they can implement their own custom `Msg` renderer. This is similar to [EIP4430](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4430.md), where the smart contract developer chooses the description string to be shown to the end user.\n\nThis is done by setting the `cosmos.msg.textual.v1.expert_custom_renderer` Protobuf option to a non-empty string. This option CAN ONLY be set on a Protobuf message representing a transaction message object (implementing the `sdk.Msg` interface).\n\n```protobuf\nmessage MsgFooBar {\n // Optional comments to describe in human-readable language the formatting\n // rules of the custom renderer.\n option (cosmos.msg.textual.v1.expert_custom_renderer) = \"<algo_reference>\";\n\n // proto fields\n}\n```\n\nWhen this option is set on a `Msg`, a registered function will transform the `Msg` into an array of one or more strings, which MAY use the key/value format (described in point #3) with the expert field prefix (described in point #5) and arbitrary indentation (point #6). 
These strings MAY be rendered from a `Msg` field using a default value renderer, or they may be generated from several fields using custom logic.\n\nThe `<algo_reference>` is a string convention chosen by the application developer and is used to identify the custom `Msg` renderer. For example, the documentation or specification of this custom algorithm can reference this identifier. This identifier CAN have a versioned suffix (e.g. `_v1`) to adapt for future changes (which would be consensus-breaking). We also recommend adding Protobuf comments to describe in human language the custom logic used.\n\nMoreover, the renderer must provide 2 functions: one for formatting from Protobuf to string, and one for parsing string to Protobuf. These 2 functions are provided by the application developer. To satisfy point #1, the parse function MUST be the inverse of the formatting function. This property will not be checked by the SDK at runtime. However, we strongly recommend that the application developer include a comprehensive test suite in their app repo to verify invertibility, so as not to introduce security bugs.\n\n### Require signing over the `TxBody` and `AuthInfo` raw bytes\n\nRecall that the transaction bytes merkleized on chain are the Protobuf binary serialization of [TxRaw](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.tx.v1beta1#cosmos.tx.v1beta1.TxRaw), which contains the `body_bytes` and `auth_info_bytes`. Moreover, the transaction hash is defined as the SHA256 hash of the `TxRaw` bytes. We require that the user signs over these bytes in SIGN_MODE_TEXTUAL, more specifically over the following string:\n\n```\n*Hash of raw bytes: <HEX(sha256(len(body_bytes) ++ body_bytes ++ len(auth_info_bytes) ++ auth_info_bytes))>\n```\n\nwhere:\n\n* `++` denotes concatenation,\n* `HEX` is the hexadecimal representation of the bytes, all in capital letters, no `0x` prefix,\n* and `len()` is encoded as a Big-Endian uint64.\n\nThis is to prevent transaction hash malleability. 
The point #1 about invertibility assures that transaction `body` and `auth_info` values are not malleable, but the transaction hash still might be malleable with point #1 only, because the SIGN_MODE_TEXTUAL strings don't follow the byte ordering defined in `body_bytes` and `auth_info_bytes`. Without this hash, a malicious validator or exchange could intercept a transaction, modify its transaction hash _after_ the user signed it using SIGN_MODE_TEXTUAL (by tweaking the byte ordering inside `body_bytes` or `auth_info_bytes`), and then submit it to Tendermint.\n\nBy including this hash in the SIGN_MODE_TEXTUAL signing payload, we keep the same level of guarantees as [SIGN_MODE_DIRECT](./adr-020-protobuf-transaction-encoding.md).\n\nThese bytes are only shown in expert mode, hence the leading `*`.\n\n## Updates to the current specification\n\nThe current specification is not set in stone, and future iterations are to be expected. We distinguish two categories of updates to this specification:\n\n1. Updates that require changes of the hardware device embedded application.\n2. Updates that only modify the envelope and the value renderers.\n\nUpdates in the 1st category include changes of the `Screen` struct or its corresponding CBOR encoding. This type of update requires a modification of the hardware signer application, to be able to decode and parse the new types. Backwards-compatibility must also be guaranteed, so that the new hardware application works with existing versions of the SDK. These updates require the coordination of multiple parties: SDK developers, hardware application developers (currently: Zondax), and client-side developers (e.g. CosmJS). Furthermore, a new submission of the hardware device application may be necessary, which, depending on the vendor, can take some time. As such, we recommend avoiding this type of update as much as possible.\n\nUpdates in the 2nd category include changes to any of the value renderers or to the transaction envelope. 
For example, the ordering of fields in the envelope can be swapped, or the timestamp formatting can be modified. Since SIGN_MODE_TEXTUAL sends `Screen`s to the hardware device, this type of change does not need a hardware wallet application update. They are however state-machine-breaking, and must be documented as such. They require the coordination of SDK developers with client-side developers (e.g. CosmJS), so that the updates are released on both sides close to each other in time.\n\nWe define a spec version, which is an integer that must be incremented on each update of either category. This spec version will be exposed by the SDK's implementation, and can be communicated to clients. For example, SDK v0.50 might use the spec version 1, and SDK v0.51 might use 2; thanks to this versioning, clients can know how to craft SIGN_MODE_TEXTUAL transactions based on the target SDK version.\n\nThe current spec version is defined in the \"Status\" section, on the top of this document. It is initialized to `0` to allow flexibility in choosing how to define future versions, as it would allow adding a field either in the SignDoc Go struct or in Protobuf in a backwards-compatible way.\n\n## Additional Formatting by the Hardware Device\n\nSee [annex 2](./adr-050-sign-mode-textual-annex2.md).\n\n## Examples\n\n1. A minimal MsgSend: [see transaction](https://github.com/cosmos/cosmos-sdk/blob/094abcd393379acbbd043996024d66cd65246fb1/tx/textual/internal/testdata/e2e.json#L2-L70).\n2. 
A transaction with a bit of everything: [see transaction](https://github.com/cosmos/cosmos-sdk/blob/094abcd393379acbbd043996024d66cd65246fb1/tx/textual/internal/testdata/e2e.json#L71-L270).\n\nThe examples below are stored in a JSON file with the following fields:\n\n* `proto`: the representation of the transaction in ProtoJSON,\n* `screens`: the transaction rendered into SIGN_MODE_TEXTUAL screens,\n* `cbor`: the sign bytes of the transaction, which is the CBOR encoding of the screens.\n\n## Consequences\n\n### Backwards Compatibility\n\nSIGN_MODE_TEXTUAL is purely additive, and doesn't break any backwards compatibility with other sign modes.\n\n### Positive\n\n* Human-friendly way of signing in hardware devices.\n* Once SIGN_MODE_TEXTUAL is shipped, SIGN_MODE_LEGACY_AMINO_JSON can be deprecated and removed. On the longer term, once the ecosystem has totally migrated, Amino can be totally removed.\n\n### Negative\n\n* Some fields are still encoded in non-human-readable ways, such as public keys in hexadecimal.\n* New ledger app needs to be released, still unclear\n\n### Neutral\n\n* If the transaction is complex, the string array can be arbitrarily long, and some users might just skip some screens and blind sign.\n\n## Further Discussions\n\n* Some details on value renderers need to be polished, see [Annex 1](./adr-050-sign-mode-textual-annex1.md).\n* Are ledger apps able to support both SIGN_MODE_LEGACY_AMINO_JSON and SIGN_MODE_TEXTUAL at the same time?\n* Open question: should we add a Protobuf field option to allow app developers to overwrite the textual representation of certain Protobuf fields and message? 
This would be similar to Ethereum's [EIP4430](https://github.com/ethereum/EIPs/pull/4430), where the contract developer decides on the textual representation.\n* Internationalization.\n\n## References\n\n* [Annex 1](./adr-050-sign-mode-textual-annex1.md)\n\n* Initial discussion: https://github.com/cosmos/cosmos-sdk/issues/6513\n* Living document used in the working group: https://hackmd.io/fsZAO-TfT0CKmLDtfMcKeA?both\n* Working group meeting notes: https://hackmd.io/7RkGfv_rQAaZzEigUYhcXw\n* Ethereum's \"Described Transactions\" https://github.com/ethereum/EIPs/pull/4430" + }, + { + "number": 50, + "filename": "adr-050-sign-mode-textual-annex1.md", + "title": "ADR 050: SIGN_MODE_TEXTUAL: Annex 1 Value Renderers", + "content": "# ADR 050: SIGN_MODE_TEXTUAL: Annex 1 Value Renderers\n\n## Changelog\n\n* Dec 06, 2021: Initial Draft\n* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team.\n* Dec 01, 2022: Remove `Object: ` prefix on Any header screen.\n* Dec 13, 2022: Sign over bytes hash when bytes length > 32.\n* Mar 27, 2023: Update `Any` value renderer to omit message header screen.\n\n## Status\n\nAccepted. Implementation started. Small value renderers details still need to be polished.\n\n## Abstract\n\nThis Annex describes value renderers, which are used for displaying Protobuf values in a human-friendly way using a string array.\n\n## Value Renderers\n\nValue Renderers describe how values of different Protobuf types should be encoded as a string array. 
Value renderers can be formalized as a set of bijective functions `func renderT(value T) []string`, where `T` is one of the below Protobuf types for which this spec is defined.\n\n### Protobuf `number`\n\n* Applies to:\n * protobuf numeric integer types (`int{32,64}`, `uint{32,64}`, `sint{32,64}`, `fixed{32,64}`, `sfixed{32,64}`)\n * strings whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec`\n * bytes whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec`\n* Trailing decimal zeroes are always removed\n* Formatting with `'`s for every three integral digits.\n* Usage of `.` to denote the decimal delimiter.\n\n#### Examples\n\n* `1000` (uint64) -> `1'000`\n* `\"1000000.00\"` (string representing a Dec) -> `1'000'000`\n* `\"1000000.10\"` (string representing a Dec) -> `1'000'000.1`\n\n### `coin`\n\n* Applies to `cosmos.base.v1beta1.Coin`.\n* Denoms are converted to `display` denoms using `Metadata` (if available). **This requires a state query**. The definition of `Metadata` can be found in the [bank protobuf definition](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.bank.v1beta1#cosmos.bank.v1beta1.Metadata). If the `display` field is empty or nil, then we do not perform any denom conversion.\n* Amounts are converted to `display` denom amounts and rendered as `number`s above\n * We do not change the capitalization of the denom. In practice, `display` denoms are stored in lowercase in state (e.g. `10 atom`), however they are often showed in UPPERCASE in everyday life (e.g. `10 ATOM`). Value renderers keep the case used in state, but we may recommend chains changing the denom metadata to be uppercase for better user display.\n* One space between the denom and amount (e.g. `10 atom`).\n* In the future, IBC denoms could maybe be converted to DID/IIDs, if we can find a robust way for doing this (ex. 
`cosmos:cosmos:hub:bank:denom:atom`)\n\n#### Examples\n\n* `1000000000uatom` -> `[\"1'000 atom\"]`, because atom is the metadata's display denom.\n\n### `coins`\n\n* an array of `coin` is displayed as the concatenation of each `coin` encoded per the specification above, joined together with the delimiter `\", \"` (a comma and a space, no quotes around).\n* the list of coins is ordered by Unicode code point of the display denom: `A-Z` < `a-z`. For example, the string `aAbBcC` would be sorted `ABCabc`.\n * if the coins list has 0 items in it, it is rendered as `zero`\n\n### Example\n\n* `[\"3cosm\", \"2000000uatom\"]` -> `2 atom, 3 COSM` (assuming the display denoms are `atom` and `COSM`)\n* `[\"10atom\", \"20Acoin\"]` -> `20 Acoin, 10 atom` (assuming the display denoms are `atom` and `Acoin`)\n* `[]` -> `zero` \n\n### `repeated`\n\n* Applies to all `repeated` fields, except `cosmos.tx.v1beta1.TxBody#Messages`, which has a particular encoding (see [ADR-050](./adr-050-sign-mode-textual.md)).\n* A repeated type has the following template:\n\n```\n<field_name>: <int> <field_kind>\n<field_name> (<index>/<int>): <value rendered 1st line>\n<optional value rendered next lines>\n<field_name> (<index>/<int>): <value rendered 1st line>\n<optional value rendered next lines>\nEnd of <field_name>.\n```\n\nwhere:\n\n* `field_name` is the Protobuf field name of the repeated field\n* `field_kind`:\n * if the type of the repeated field is a message, `field_kind` is the message name\n * if the type of the repeated field is an enum, `field_kind` is the enum name\n * in any other case, `field_kind` is the protobuf primitive type (e.g. 
\"string\" or \"bytes\")\n* `int` is the length of the array\n* `index` is one based index of the repeated field\n\n#### Examples\n\nGiven the proto definition:\n\n```protobuf\nmessage AllowedMsgAllowance {\n repeated string allowed_messages = 1;\n}\n```\n\nand initializing with:\n\n```go\nx := []AllowedMsgAllowance{\"cosmos.bank.v1beta1.MsgSend\", \"cosmos.gov.v1.MsgVote\"}\n```\n\nwe have the following value-rendered encoding:\n\n```\nAllowed messages: 2 strings\nAllowed messages (1/2): cosmos.bank.v1beta1.MsgSend\nAllowed messages (2/2): cosmos.gov.v1.MsgVote\nEnd of Allowed messages\n```\n\n### `message`\n\n* Applies to all Protobuf messages that do not have a custom encoding.\n* Field names follow [sentence case](https://en.wiktionary.org/wiki/sentence_case)\n * replace each `_` with a space\n * capitalize first letter of the sentence\n* Field names are ordered by their Protobuf field number\n* Screen title is the field name, and screen content is the value.\n* Nesting:\n * if a field contains a nested message, we value-render the underlying message using the template:\n\n ```\n : <1st line of value-rendered message>\n > // Notice the `>` prefix.\n ```\n\n * `>` character is used to denote nesting. 
For each additional level of nesting, add `>`.\n\n#### Examples\n\nGiven the following Protobuf messages:\n\n```protobuf\nenum VoteOption {\n VOTE_OPTION_UNSPECIFIED = 0;\n VOTE_OPTION_YES = 1;\n VOTE_OPTION_ABSTAIN = 2;\n VOTE_OPTION_NO = 3;\n VOTE_OPTION_NO_WITH_VETO = 4;\n}\n\nmessage WeightedVoteOption {\n VoteOption option = 1;\n string weight = 2 [(cosmos_proto.scalar) = \"cosmos.Dec\"];\n}\n\nmessage Vote {\n uint64 proposal_id = 1;\n string voter = 2 [(cosmos_proto.scalar) = \"cosmos.AddressString\"];\n reserved 3;\n repeated WeightedVoteOption options = 4;\n}\n```\n\nwe get the following encoding for the `Vote` message:\n\n```\nVote object\n> Proposal id: 4\n> Voter: cosmos1abc...def\n> Options: 2 WeightedVoteOptions\n> Options (1/2): WeightedVoteOption object\n>> Option: VOTE_OPTION_YES\n>> Weight: 0.7\n> Options (2/2): WeightedVoteOption object\n>> Option: VOTE_OPTION_NO\n>> Weight: 0.3\n> End of Options\n```\n\n### Enums\n\n* Show the enum variant name as a string.\n\n#### Examples\n\nSee example above with `message Vote{}`.\n\n### `google.protobuf.Any`\n\n* Applies to `google.protobuf.Any`\n* Rendered as:\n\n```\n<type_url>\n> <value rendering of the underlying message>\n```\n\nThere is however one exception: when the underlying message is a Protobuf message that does not have a custom encoding, then the message header screen is omitted, and one level of indentation is removed.\n\nMessages that have a custom encoding, including `google.protobuf.Timestamp`, `google.protobuf.Duration`, `google.protobuf.Any`, `cosmos.base.v1beta1.Coin`, and messages that have an app-defined custom encoding, will preserve their header and indentation level.\n\n#### Examples\n\nMessage header screen is stripped, one level of indentation removed:\n\n```\n/cosmos.gov.v1.Vote\n> Proposal id: 4\n> Vote: cosmos1abc...def\n> Options: 2 WeightedVoteOptions\n> Options (1/2): WeightedVoteOption object\n>> Option: Yes\n>> Weight: 0.7\n> Options (2/2): WeightedVoteOption object\n>> Option: No\n>> Weight: 0.3\n> End of 
Options\n```\n\nMessage with custom encoding:\n\n```\n/cosmos.base.v1beta1.Coin\n> 10uatom\n```\n\n### `google.protobuf.Timestamp`\n\nRendered using [RFC 3339](https://www.rfc-editor.org/rfc/rfc3339) (a\nsimplification of ISO 8601), which is the current recommendation for portable\ntime values. The rendering always uses \"Z\" (UTC) as the timezone. It uses only\nthe necessary fractional digits of a second, omitting the fractional part\nentirely if the timestamp has no fractional seconds. (The resulting timestamps\nare not automatically sortable by standard lexicographic order, but we favor\nthe legibility of the shorter string.)\n\n#### Examples\n\nThe timestamp with 1136214245 seconds and 700000000 nanoseconds is rendered\nas `2006-01-02T15:04:05.7Z`.\nThe timestamp with 1136214245 seconds and zero nanoseconds is rendered\nas `2006-01-02T15:04:05Z`.\n\n### `google.protobuf.Duration`\n\nThe duration proto expresses a raw number of seconds and nanoseconds.\nThis will be rendered as longer time units of days, hours, and minutes,\nplus any remaining seconds, in that order.\nLeading and trailing zero-quantity units will be omitted, but all\nunits in between nonzero units will be shown, e.g. ` 3 days, 0 hours, 0 minutes, 5 seconds`.\n\nEven longer time units such as months or years are imprecise.\nWeeks are precise, but not commonly used - `91 days` is more immediately\nlegible than `13 weeks`. Although `days` can be problematic,\ne.g. noon to noon on subsequent days can be 23 or 25 hours depending on\ndaylight savings transitions, there is significant advantage in using\nstrict 24-hour days over using only hours (e.g. 
`91 days` vs `2184 hours`).\n\nWhen nanoseconds are nonzero, they will be shown as fractional seconds,\nwith only the minimum number of digits, e.g `0.5 seconds`.\n\nA duration of exactly zero is shown as `0 seconds`.\n\nUnits will be given as singular (no trailing `s`) when the quantity is exactly one,\nand will be shown in plural otherwise.\n\nNegative durations will be indicated with a leading minus sign (`-`).\n\nExamples:\n\n* `1 day`\n* `30 days`\n* `-1 day, 12 hours`\n* `3 hours, 0 minutes, 53.025 seconds`\n\n### bytes\n\n* Bytes of length shorter or equal to 35 are rendered in hexadecimal, all capital letters, without the `0x` prefix.\n* Bytes of length greater than 35 are hashed using SHA256. The rendered text is `SHA-256=`, followed by the 32-byte hash, in hexadecimal, all capital letters, without the `0x` prefix.\n* The hexadecimal string is finally separated into groups of 4 digits, with a space `' '` as separator. If the bytes length is odd, the 2 remaining hexadecimal characters are at the end.\n\nThe number 35 was chosen because it is the longest length where the hashed-and-prefixed representation is longer than the original data directly formatted, using the 3 rules above. More specifically:\n\n* a 35-byte array will have 70 hex characters, plus 17 space characters, resulting in 87 characters.\n* byte arrays starting from length 36 will be hashed to 32 bytes, which is 64 hex characters plus 15 spaces, and with the `SHA-256=` prefix, it takes 87 characters.\nAlso, secp256k1 public keys have length 33, so their Textual representation is not their hashed value, which we would like to avoid.\n\nNote: Data longer than 35 bytes are not rendered in a way that can be inverted. 
See ADR-050's [section about invertibility](./adr-050-sign-mode-textual.md#invertible-rendering) for a discussion.\n\n#### Examples\n\nInputs are displayed as byte arrays.\n\n* `[0]`: `00`\n* `[0,1,2]`: `0001 02`\n* `[0,1,2,..,34]`: `0001 0203 0405 0607 0809 0A0B 0C0D 0E0F 1011 1213 1415 1617 1819 1A1B 1C1D 1E1F 2021 22`\n* `[0,1,2,..,35]`: `SHA-256=5D7E 2D9B 1DCB C85E 7C89 0036 A2CF 2F9F E7B6 6554 F2DF 08CE C6AA 9C0A 25C9 9C21`\n\n### address bytes\n\nWe currently use `string` types in protobuf for addresses so this may not be needed, but if any address bytes are used in sign mode textual they should be rendered with bech32 formatting\n\n### strings\n\nStrings are rendered as-is.\n\n### Default Values\n\n* Default Protobuf values for each field are skipped.\n\n#### Example\n\n```protobuf\nmessage TestData {\n string signer = 1;\n string metadata = 2;\n}\n```\n\n```go\nmyTestData := TestData{\n Signer: \"cosmos1abc\"\n}\n```\n\nWe get the following encoding for the `TestData` message:\n\n```\nTestData object\n> Signer: cosmos1abc\n```\n\n### bool\n\nBoolean values are rendered as `True` or `False`.\n\n### [ABANDONED] Custom `msg_title` instead of Msg `type_url`\n\n_This paragraph is in the Annex for informational purposes only, and will be removed in a next update of the ADR._\n\n
\n Click to see abandoned idea.\n\n* all protobuf messages to be used with `SIGN_MODE_TEXTUAL` CAN have a short title associated with them that can be used in format strings whenever the type URL is explicitly referenced via the `cosmos.msg.v1.textual.msg_title` Protobuf message option.\n* if this option is not specified for a Msg, then the Protobuf fully qualified name will be used.\n\n```protobuf\nmessage MsgSend {\n option (cosmos.msg.v1.textual.msg_title) = \"bank send coins\";\n}\n```\n\n* they MUST be unique per message, per chain\n\n#### Examples\n\n* `cosmos.gov.v1.MsgVote` -> `governance v1 vote`\n\n#### Best Practices\n\nWe recommend to use this option only for `Msg`s whose Protobuf fully qualified name can be hard to understand. As such, the two examples above (`MsgSend` and `MsgVote`) are not good examples to be used with `msg_title`. We still allow `msg_title` for chains who might have `Msg`s with complex or non-obvious names.\n\nIn those cases, we recommend to drop the version (e.g. `v1`) in the string if there's only one version of the module on chain. This way, the bijective mapping can figure out which message each string corresponds to. If multiple Protobuf versions of the same module exist on the same chain, we recommend keeping the first `msg_title` with version, and the second `msg_title` with version (e.g. `v2`):\n\n* `mychain.mymodule.v1.MsgDo` -> `mymodule do something`\n* `mychain.mymodule.v2.MsgDo` -> `mymodule v2 do something`\n\n
" + }, + { + "number": 50, + "filename": "adr-050-sign-mode-textual-annex2.md", + "title": "ADR 050: SIGN_MODE_TEXTUAL: Annex 2 XXX", + "content": "# ADR 050: SIGN_MODE_TEXTUAL: Annex 2 XXX\n\n## Changelog\n\n* Oct 3, 2022: Initial Draft\n\n## Status\n\nDRAFT\n\n## Abstract\n\nThis annex provides normative guidance on how devices should render a\n`SIGN_MODE_TEXTUAL` document.\n\n## Context\n\n`SIGN_MODE_TEXTUAL` allows a legible version of a transaction to be signed\non a hardware security device, such as a Ledger. Early versions of the\ndesign rendered transactions directly to lines of ASCII text, but this\nproved awkward from its in-band signaling, and for the need to display\nUnicode text within the transaction.\n\n## Decision\n\n`SIGN_MODE_TEXTUAL` renders to an abstract representation, leaving it\nup to device-specific software how to present this representation given the\ncapabilities, limitations, and conventions of the device.\n\nWe offer the following normative guidance:\n\n1. The presentation should be as legible as possible to the user, given\nthe capabilities of the device. If legibility could be sacrificed for other\nproperties, we would recommend just using some other signing mode.\nLegibility should focus on the common case - it is okay for unusual cases\nto be less legible.\n\n2. The presentation should be invertible if possible without substantial\nsacrifice of legibility. Any change to the rendered data should result\nin a visible change to the presentation. This extends the integrity of the\nsigning to user-visible presentation.\n\n3. 
The presentation should follow normal conventions of the device,\nwithout sacrificing legibility or invertibility.\n\nAs an illustration of these principles, here is an example algorithm\nfor presentation on a device which can display a single 80-character\nline of printable ASCII characters:\n\n* The presentation is broken into lines, and each line is presented in\nsequence, with user controls for going forward or backward a line.\n\n* Expert mode screens are only presented if the device is in expert mode.\n\n* Each line of the screen starts with a number of `>` characters equal\nto the screen's indentation level, followed by a `+` character if this\nisn't the first line of the screen, followed by a space if either a\n`>` or a `+` has been emitted,\nor if this header is followed by a `>`, `+`, or space.\n\n* If the line ends with whitespace or an `@` character, an additional `@`\ncharacter is appended to the line.\n\n* The following ASCII control characters or backslash (`\\`) are converted\nto a backslash followed by a letter code, in the manner of string literals\nin many languages:\n\n * a: U+0007 alert or bell\n * b: U+0008 backspace\n * f: U+000C form feed\n * n: U+000A line feed\n * r: U+000D carriage return\n * t: U+0009 horizontal tab\n * v: U+000B vertical tab\n * `\\`: U+005C backslash\n\n* All other ASCII control characters, plus non-ASCII Unicode code points,\nare shown as either:\n\n * `\\u` followed by 4 uppercase hex characters for code points\n in the basic multilingual plane (BMP).\n\n * `\\U` followed by 8 uppercase hex characters for other code points.\n\n* The screen will be broken into multiple lines to fit the 80-character\nlimit, considering the above transformations in a way that attempts to\nminimize the number of lines generated. 
Expanded control or Unicode characters\nare never split across lines.\n\nExample output:\n\n```\nAn introductory line.\nkey1: 123456\nkey2: a string that ends in whitespace @\nkey3: a string that ends in a single ampersand - @@\n >tricky key4<: note the leading space in the presentation\nintroducing an aggregate\n> key5: false\n> key6: a very long line of text, please co\\u00F6perate and break into\n>+ multiple lines.\n> Can we do further nesting?\n>> You bet we can!\n```\n\nThe inverse mapping gives us the only input which could have\ngenerated this output (JSON notation for string data):\n\n```\nIndent Text\n0 \"An introductory line.\"\n0 \"key1: 123456\"\n0 \"key2: a string that ends in whitespace \"\n0 \"key3: a string that ends in a single ampersand - @\"\n0 \">tricky key4<: note the leading space in the presentation\"\n0 \"introducing an aggregate\"\n1 \"key5: false\"\n1 \"key6: a very long line of text, please coöperate and break into multiple lines.\"\n1 \"Can we do further nesting?\"\n2 \"You bet we can!\"\n```" + }, + { + "number": 53, + "filename": "adr-053-go-module-refactoring.md", + "title": "ADR 053: Go Module Refactoring", + "content": "# ADR 053: Go Module Refactoring\n\n## Changelog\n\n* 2022-04-27: First Draft\n\n## Status\n\nPROPOSED\n\n## Abstract\n\nThe current SDK is built as a single monolithic go module. This ADR describes\nhow we refactor the SDK into smaller independently versioned go modules\nfor ease of maintenance.\n\n## Context\n\nGo modules impose certain requirements on software projects with respect to\nstable version numbers (anything above 0.x) in that [any API breaking changes\nnecessitate a major version](https://go.dev/doc/modules/release-workflow#breaking)\nincrease which technically creates a new go module\n(with a v2, v3, etc. 
suffix).\n\n[Keeping modules API compatible](https://go.dev/blog/module-compatibility) in\nthis way requires a fair amount of thought and discipline.\n\nThe Cosmos SDK is a fairly large project which originated before go modules\ncame into existence and has always been under a v0.x release even though\nit has been used in production for years now, not because it isn't production\nquality software, but rather because the API compatibility guarantees required\nby go modules are fairly complex to adhere to with such a large project.\nUp to now, it has generally been deemed more important to be able to break the\nAPI if needed rather than require all users update all package import paths\nto accommodate breaking changes causing v2, v3, etc. releases. This is in\naddition to the other complexities related to protobuf generated code that will\nbe addressed in a separate ADR.\n\nNevertheless, the desire for semantic versioning has been [strong in the\ncommunity](https://github.com/cosmos/cosmos-sdk/discussions/10162) and the\nsingle go module release process has made it very hard to\nrelease small changes to isolated features in a timely manner. Release cycles\noften exceed six months which means small improvements done in a day or\ntwo get bottle-necked by everything else in the monolithic release cycle.\n\n## Decision\n\nTo improve the current situation, the SDK is being refactored into multiple\ngo modules within the current repository. There has been a [fair amount of\ndebate](https://github.com/cosmos/cosmos-sdk/discussions/10582#discussioncomment-1813377)\nas to how to do this, with some developers arguing for larger vs smaller\nmodule scopes. 
There are pros and cons to both approaches (which will be\ndiscussed below in the [Consequences](#consequences) section), but the\napproach being adopted is the following:\n\n* a go module should generally be scoped to a specific coherent set of\nfunctionality (such as math, errors, store, etc.)\n* when code is removed from the core SDK and moved to a new module path, every \neffort should be made to avoid API breaking changes in the existing code using\naliases and wrapper types (as done in https://github.com/cosmos/cosmos-sdk/pull/10779\nand https://github.com/cosmos/cosmos-sdk/pull/11788)\n* new go modules should be moved to a standalone domain (`cosmossdk.io`) before\nbeing tagged as `v1.0.0` to accommodate the possibility that they may be\nbetter served by a standalone repository in the future\n* all go modules should follow the guidelines in https://go.dev/blog/module-compatibility\nbefore `v1.0.0` is tagged and should make use of `internal` packages to limit\nthe exposed API surface\n* the new go module's API may deviate from the existing code where there are\nclear improvements to be made or to remove legacy dependencies (for instance on\namino or gogo proto), as long as the old package attempts\nto avoid API breakage with aliases and wrappers\n* care should be taken when simply trying to turn an existing package into a\nnew go module: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.\nIn general, it seems safer to just create a new module path (appending v2, v3, etc.\nif necessary), rather than trying to make an old package a new module.\n\n## Consequences\n\n### Backwards Compatibility\n\nIf the above guidelines are followed to use aliases or wrapper types\nin existing APIs that point back to the new go modules, there should be no or\nvery limited breaking changes to existing APIs.\n\n### Positive\n\n* standalone pieces of software will reach `v1.0.0` sooner\n* new features to specific 
functionality will be released sooner \n\n### Negative\n\n* there will be more go module versions to update in the SDK itself and\nper-project, although most of these will hopefully be indirect\n\n### Neutral\n\n## Further Discussions\n\nFurther discussions are occurring primarily in\nhttps://github.com/cosmos/cosmos-sdk/discussions/10582 and within\nthe Cosmos SDK Framework Working Group.\n\n## References\n\n* https://go.dev/doc/modules/release-workflow\n* https://go.dev/blog/module-compatibility\n* https://github.com/cosmos/cosmos-sdk/discussions/10162\n* https://github.com/cosmos/cosmos-sdk/discussions/10582\n* https://github.com/cosmos/cosmos-sdk/pull/10779\n* https://github.com/cosmos/cosmos-sdk/pull/11788" + }, + { + "number": 54, + "filename": "adr-054-semver-compatible-modules.md", + "title": "ADR 054: Semver Compatible SDK Modules", + "content": "# ADR 054: Semver Compatible SDK Modules\n\n## Changelog\n\n* 2022-04-27: First draft\n\n## Status\n\nDRAFT\n\n## Abstract\n\nIn order to move the Cosmos SDK to a system of decoupled semantically versioned\nmodules which can be composed in different combinations (ex. staking v3 with\nbank v1 and distribution v2), we need to reassess how we organize the API surface\nof modules to avoid problems with go semantic import versioning and\ncircular dependencies. This ADR explores various approaches we can take to\naddressing these issues.\n\n## Context\n\nThere has been [a fair amount of desire](https://github.com/cosmos/cosmos-sdk/discussions/10162)\nin the community for semantic versioning in the SDK and there has been significant\nmovement to splitting SDK modules into [standalone go modules](https://github.com/cosmos/cosmos-sdk/issues/11899).\nBoth of these will ideally allow the ecosystem to move faster because we won't\nbe waiting for all dependencies to update synchronously. 
For instance, we could\nhave 3 versions of the core SDK compatible with the latest 2 releases of\nCosmWasm as well as 4 different versions of staking. This sort of setup would\nallow early adopters to aggressively integrate new versions, while allowing\nmore conservative users to be selective about which versions they're ready for.\n\nIn order to achieve this, we need to solve the following problems:\n\n1. because of the way [go semantic import versioning](https://research.swtch.com/vgo-import) (SIV)\n works, moving to SIV naively will actually make it harder to achieve these goals\n2. circular dependencies between modules need to be broken to actually release\n many modules in the SDK independently\n3. pernicious minor version incompatibilities introduced through correctly\n [evolving protobuf schemas](https://developers.google.com/protocol-buffers/docs/proto3#updating)\n without correct [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)\n\nNote that all the following discussion assumes that the proto file versioning and state machine versioning of a module\nare distinct in that:\n\n* proto files are maintained in a non-breaking way (using something\n like [buf breaking](https://docs.buf.build/breaking/overview)\n to ensure all changes are backwards compatible)\n* proto file versions get bumped much less frequently, i.e. we might maintain `cosmos.bank.v1` through many versions\n of the bank module state machine\n* state machine breaking changes are more common and ideally this is what we'd want to semantically version with\n go modules, ex. 
`x/bank/v2`, `x/bank/v3`, etc.\n\n### Problem 1: Semantic Import Versioning Compatibility\n\nConsider we have a module `foo` which defines the following `MsgDoSomething` and that we've released its state\nmachine in go module `example.com/foo`:\n\n```protobuf\npackage foo.v1;\n\nmessage MsgDoSomething {\n string sender = 1;\n uint64 amount = 2;\n}\n\nservice Msg {\n rpc DoSomething(MsgDoSomething) returns (MsgDoSomethingResponse);\n}\n```\n\nNow consider that we make a revision to this module and add a new `condition` field to `MsgDoSomething` and also\nadd a new validation rule on `amount` requiring it to be non-zero, and that following go semantic versioning we\nrelease the next state machine version of `foo` as `example.com/foo/v2`.\n\n```protobuf\n// Revision 1\npackage foo.v1;\n\nmessage MsgDoSomething {\n string sender = 1;\n \n // amount must be a non-zero integer.\n uint64 amount = 2;\n \n // condition is an optional condition on doing the thing.\n //\n // Since: Revision 1\n Condition condition = 3;\n}\n```\n\nApproaching this naively, we would generate the protobuf types for the initial\nversion of `foo` in `example.com/foo/types` and we would generate the protobuf\ntypes for the second version in `example.com/foo/v2/types`.\n\nNow let's say we have a module `bar` which talks to `foo` using this keeper\ninterface which `foo` provides:\n\n```go\ntype FooKeeper interface {\n\tDoSomething(MsgDoSomething) error\n}\n```\n\n#### Scenario A: Backward Compatibility: Newer Foo, Older Bar\n\nImagine we have a chain which uses both `foo` and `bar` and wants to upgrade to\n`foo/v2`, but the `bar` module has not upgraded to `foo/v2`.\n\nIn this case, the chain will not be able to upgrade to `foo/v2` until `bar`\nhas upgraded its references to `example.com/foo/types.MsgDoSomething` to\n`example.com/foo/v2/types.MsgDoSomething`.\n\nEven if `bar`'s usage of `MsgDoSomething` has not changed at all, the upgrade\nwill be impossible without this change because 
`example.com/foo/types.MsgDoSomething`\nand `example.com/foo/v2/types.MsgDoSomething` are fundamentally different\nincompatible structs in the go type system.\n\n#### Scenario B: Forward Compatibility: Older Foo, Newer Bar\n\nNow let's consider the reverse scenario, where `bar` upgrades to `foo/v2`\nby changing the `MsgDoSomething` reference to `example.com/foo/v2/types.MsgDoSomething`\nand releases that as `bar/v2` with some other changes that a chain wants.\nThe chain, however, has decided that it thinks the changes in `foo/v2` are too\nrisky and that it'd prefer to stay on the initial version of `foo`.\n\nIn this scenario, it is impossible to upgrade to `bar/v2` without upgrading\nto `foo/v2` even if `bar/v2` would have worked 100% fine with `foo` other\nthan changing the import path to `MsgDoSomething` (meaning that `bar/v2`\ndoesn't actually use any new features of `foo/v2`).\n\nNow because of the way go semantic import versioning works, we are locked\ninto either using `foo` and `bar` OR `foo/v2` and `bar/v2`. We cannot have\n`foo` + `bar/v2` OR `foo/v2` + `bar`. The go type system doesn't allow this\neven if both versions of these modules are otherwise compatible with each\nother.\n\n#### Naive Mitigation\n\nA naive approach to fixing this would be to not regenerate the protobuf types\nin `example.com/foo/v2/types` but instead just update `example.com/foo/types`\nto reflect the changes needed for `v2` (adding `condition` and requiring\n`amount` to be non-zero). Then we could release a patch of `example.com/foo/types`\nwith this update and use that for `foo/v2`. But this change is state machine\nbreaking for `v1`. 
It requires changing the `ValidateBasic` method to reject\nthe case where `amount` is zero, and it adds the `condition` field which\nshould be rejected based\non [ADR 020 unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering).\nSo adding these changes as a patch on `v1` is actually incorrect based on semantic\nversioning. Chains that want to stay on `v1` of `foo` should not\nbe importing these changes because they are incorrect for `v1`.\n\n### Problem 2: Circular dependencies\n\nNone of the above approaches allow `foo` and `bar` to be separate modules\nif for some reason `foo` and `bar` depend on each other in different ways.\nFor instance, we can't have `foo` import `bar/types` while `bar` imports\n`foo/types`.\n\nWe have several cases of circular module dependencies in the SDK\n(ex. staking, distribution and slashing) that are legitimate from a state machine\nperspective. Without separating the API types out somehow, there would be\nno way to independently semantically version these modules without some other\nmitigation.\n\n### Problem 3: Handling Minor Version Incompatibilities\n\nImagine that we solve the first two problems but now have a scenario where\n`bar/v2` wants the option to use `MsgDoSomething.condition` which only `foo/v2`\nsupports. If `bar/v2` works with `foo` `v1` and sets `condition` to some non-nil\nvalue, then `foo` will silently ignore this field resulting in a silent,\npossibly dangerous logic error. If `bar/v2` were able to check dynamically whether `foo` was\non `v1` or `v2`, it could choose to only use `condition` when\n`foo/v2` is available. Even if `bar/v2` were able to perform this check, however,\nhow do we know that it is always performing the check properly? 
Without\nsome sort of\nframework-level [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering),\nit is hard to know whether these pernicious hard to detect bugs are getting into\nour app and a client-server layer such as [ADR 033: Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md)\nmay be needed to do this.\n\n## Solutions\n\n### Approach A) Separate API and State Machine Modules\n\nOne solution (first proposed in https://github.com/cosmos/cosmos-sdk/discussions/10582) is to isolate all protobuf\ngenerated code into a separate module\nfrom the state machine module. This would mean that we could have state machine\ngo modules `foo` and `foo/v2` which could use a types or API go module say\n`foo/api`. This `foo/api` go module would be perpetually on `v1.x` and only\naccept non-breaking changes. This would then allow other modules to be\ncompatible with either `foo` or `foo/v2` as long as the inter-module API only\ndepends on the types in `foo/api`. It would also allow modules `foo` and `bar`\nto depend on each other in that both of them could depend on `foo/api` and\n`bar/api` without `foo` directly depending on `bar` and vice versa.\n\nThis is similar to the naive mitigation described above except that it separates\nthe types into separate go modules which in and of itself could be used to\nbreak circular module dependencies. It has the same problems as the naive solution,\notherwise, which we could rectify by:\n\n1. removing all state machine breaking code from the API module (ex. `ValidateBasic` and any other interface methods)\n2. 
embedding the correct file descriptors for unknown field filtering in the binary\n\n#### Migrate all interface methods on API types to handlers\n\nTo solve 1), we need to remove all interface implementations from generated\ntypes and instead use a handler approach which essentially means that given\na type `X`, we have some sort of resolver which allows us to resolve interface\nimplementations for that type (ex. `sdk.Msg` or `authz.Authorization`). For\nexample:\n\n```go\nfunc (k Keeper) DoSomething(msg MsgDoSomething) error {\n\tvar validateBasicHandler ValidateBasicHandler\n\terr := k.resolver.Resolve(&validateBasicHandler, msg)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = validateBasicHandler.ValidateBasic()\n\t...\n}\n```\n\nIn the case of some methods on `sdk.Msg`, we could replace them with declarative\nannotations. For instance, `GetSigners` can already be replaced by the protobuf\nannotation `cosmos.msg.v1.signer`. In the future, we may consider some sort\nof protobuf validation framework (like https://github.com/bufbuild/protoc-gen-validate\nbut more Cosmos-specific) to replace `ValidateBasic`.\n\n#### Pinned `FileDescriptor`s\n\nTo solve 2), state machine modules must be able to specify what the version of\nthe protobuf files was that they were built against. For instance if the API\nmodule for `foo` upgrades to `foo/v2`, the original `foo` module still needs\na copy of the original protobuf files it was built with so that ADR 020\nunknown field filtering will reject `MsgDoSomething` when `condition` is\nset.\n\nThe simplest way to do this may be to embed the protobuf `FileDescriptor`s into\nthe module itself so that these `FileDescriptor`s are used at runtime rather\nthan the ones that are built into the `foo/api` which may be different. 
Using\n[buf build](https://docs.buf.build/build/usage#output-format), [go embed](https://pkg.go.dev/embed),\nand a build script we can probably come up with a solution for embedding\n`FileDescriptor`s into modules that is fairly straightforward.\n\n#### Potential limitations to generated code\n\nOne challenge with this approach is that it places heavy restrictions on what\ncan go in API modules and requires that most of this is state machine breaking.\nAll or most of the code in the API module would be generated from protobuf\nfiles, so we can probably control this with how code generation is done, but\nit is a risk to be aware of.\n\nFor instance, we do code generation for the ORM that in the future could\ncontain optimizations that are state machine breaking. We\nwould either need to ensure very carefully that the optimizations aren't\nactually state machine breaking in generated code or separate this generated code\nout from the API module into the state machine module. Both of these mitigations\nare potentially viable but the API module approach does require an extra level\nof care to avoid these sorts of issues.\n\n#### Minor Version Incompatibilities\n\nThis approach in and of itself does little to address any potential minor\nversion incompatibilities and the\nrequisite [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering).\nLikely some sort of client-server routing layer which does this check such as\n[ADR 033: Inter-Module communication](./adr-033-protobuf-inter-module-comm.md)\nis required to make sure that this is done properly. 
We could then allow\nmodules to perform a runtime check given a `MsgClient`, ex:\n\n```go\nfunc (k Keeper) CallFoo() error {\n\tif k.interModuleClient.MinorRevision(k.fooMsgClient) >= 2 {\n\t\tk.fooMsgClient.DoSomething(&MsgDoSomething{Condition: ...})\n } else {\n ...\n }\n}\n```\n\nTo do the unknown field filtering itself, the ADR 033 router would need to use\nthe [protoreflect API](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect)\nto ensure that no fields unknown to the receiving module are set. This could\nresult in an undesirable performance hit depending on how complex this logic is.\n\n### Approach B) Changes to Generated Code\n\nAn alternate approach to solving the versioning problem is to change how protobuf code is generated and move modules\nmostly or completely in the direction of inter-module communication as described\nin [ADR 033](./adr-033-protobuf-inter-module-comm.md).\nIn this paradigm, a module could generate all the types it needs internally - including the API types of other modules -\nand talk to other modules via a client-server boundary. For instance, if `bar` needs to talk to `foo`, it could\ngenerate its own version of `MsgDoSomething` as `bar/internal/foo/v1.MsgDoSomething` and just pass this to the\ninter-module router which would somehow convert it to the version which foo needs (ex. `foo/internal.MsgDoSomething`).\n\nCurrently, two generated structs for the same protobuf type cannot exist in the same go binary without special\nbuild flags (see https://developers.google.com/protocol-buffers/docs/reference/go/faq#fix-namespace-conflict).\nA relatively simple mitigation to this issue would be to set up the protobuf code to not register protobuf types\nglobally if they are generated in an `internal/` package. 
This will require modules to register their types manually\nwith the app-level protobuf registry, which is similar to what modules already do with the `InterfaceRegistry`\nand amino codec.\n\nIf modules _only_ do ADR 033 message passing then a naive and non-performant solution for\nconverting `bar/internal/foo/v1.MsgDoSomething`\nto `foo/internal.MsgDoSomething` would be marshaling and unmarshaling in the ADR 033 router. This would break down if\nwe needed to expose protobuf types in `Keeper` interfaces because the whole point is to try to keep these types\n`internal/` so that we don't end up with all the import version incompatibilities we've described above. However,\nbecause of the issue with minor version incompatibilities and the need\nfor [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering),\nsticking with the `Keeper` paradigm instead of ADR 033 may be unviable to begin with.\n\nA more performant solution (that could maybe be adapted to work with `Keeper` interfaces) would be to only expose\ngetters and setters for generated types and internally store data in memory buffers which could be passed from\none implementation to another in a zero-copy way.\n\nFor example, imagine this protobuf API with only getters and setters is exposed for `MsgSend`:\n\n```go\ntype MsgSend interface {\n\tproto.Message\n\tGetFromAddress() string\n\tGetToAddress() string\n\tGetAmount() []v1beta1.Coin\n\tSetFromAddress(string)\n\tSetToAddress(string)\n\tSetAmount([]v1beta1.Coin)\n}\n\nfunc NewMsgSend() MsgSend { return &msgSendImpl{memoryBuffers: ...} }\n```\n\nUnder the hood, `MsgSend` could be implemented based on some raw memory buffer in the same way\nthat [Cap'n Proto](https://capnproto.org)\nand [FlatBuffers](https://google.github.io/flatbuffers/) do, so that we could convert between one version of `MsgSend`\nand another without serialization (i.e. zero-copy). 
This approach would have the added benefits of allowing zero-copy\nmessage passing to modules written in other languages such as Rust and accessed through a VM or FFI. It could also make\nunknown field filtering in inter-module communication simpler if we require that all new fields are added in sequential\norder, ex. just checking that no field `> 5` is set.\n\nAlso, we wouldn't have any issues with state machine breaking code on generated types because all the generated\ncode used in the state machine would actually live in the state machine module itself. Depending on how interface\ntypes and protobuf `Any`s are used in other languages, however, it may still be desirable to take the handler\napproach described in approach A. Either way, types implementing interfaces would still need to be registered\nwith an `InterfaceRegistry` as they are now because there would be no way to retrieve them via the global registry.\n\nIn order to simplify access to other modules using ADR 033, a public API module (maybe even one\n[remotely generated by Buf](https://buf.build/docs/bsr/generated-sdks/go/)) could be used by client modules instead\nof requiring to generate all client types internally.\n\nThe big downsides of this approach are that it requires big changes to how people use protobuf types and would be a\nsubstantial rewrite of the protobuf code generator. This new generated code, however, could still be made compatible\nwith\nthe [`google.golang.org/protobuf/reflect/protoreflect`](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect)\nAPI in order to work with all standard golang protobuf tooling.\n\nIt is possible that the naive approach of marshaling/unmarshaling in the ADR 033 router is an acceptable intermediate\nsolution if the changes to the code generator are seen as too complex. 
However, since all modules would likely need\nto migrate to ADR 033 anyway with this approach, it might be better to do this all at once.\n\n### Approach C) Don't address these issues\n\nIf the above solutions are seen as too complex, we can also decide not to do anything explicit to enable better module\nversion compatibility or to break circular dependencies.\n\nIn this case, when developers are confronted with the issues described above they can require dependencies to update in\nsync (what we do now) or attempt some ad-hoc, potentially hacky solution.\n\nOne approach is to ditch go semantic import versioning (SIV) altogether. Some people have commented that go's SIV\n(i.e. changing the import path to `foo/v2`, `foo/v3`, etc.) is too restrictive and that it should be optional. The\ngolang maintainers disagree and only officially support semantic import versioning. We could, however, take the\ncontrarian perspective and get more flexibility by using 0.x-based versioning basically forever.\n\nModule version compatibility could then be achieved using go.mod replace directives to pin dependencies to specific\ncompatible 0.x versions. For instance if we knew `foo` 0.2 and 0.3 were both compatible with `bar` 0.3 and 0.4, we\ncould use replace directives in our go.mod to stick to the versions of `foo` and `bar` we want. 
This would work as\nlong as the authors of `foo` and `bar` avoid incompatible breaking changes between these modules.\n\nOr, if developers choose to use semantic import versioning, they can attempt the naive solution described above\nand would also need to use special tags and replace directives to make sure that modules are pinned to the correct\nversions.\n\nNote, however, that all of these ad-hoc approaches, would be vulnerable to the minor version compatibility issues\ndescribed above unless [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)\nis properly addressed.\n\n### Approach D) Avoid protobuf generated code in public APIs\n\nAn alternative approach would be to avoid protobuf generated code in public module APIs. This would help avoid the\ndiscrepancy between state machine versions and client API versions at the module to module boundaries. It would mean\nthat we wouldn't do inter-module message passing based on ADR 033, but rather stick to the existing keeper approach\nand take it one step further by avoiding any protobuf generated code in the keeper interface methods.\n\nUsing this approach, our `foo.Keeper.DoSomething` method wouldn't have the generated `MsgDoSomething` struct (which\ncomes from the protobuf API), but instead positional parameters. Then in order for `foo/v2` to support the `foo/v1`\nkeeper it would simply need to implement both the v1 and v2 keeper APIs. The `DoSomething` method in v2 could have the\nadditional `condition` parameter, but this wouldn't be present in v1 at all so there would be no danger of a client\naccidentally setting this when it isn't available. 
\n\nSo this approach would avoid the challenge around minor version incompatibilities because the existing module keeper\nAPI would not get new fields when they are added to protobuf files.\n\nTaking this approach, however, would likely require making all protobuf generated code internal in order to prevent\nit from leaking into the keeper API. This means we would still need to modify the protobuf code generator to not\nregister `internal/` code with the global registry, and we would still need to manually register protobuf\n`FileDescriptor`s (this is probably true in all scenarios). It may, however, be possible to avoid needing to refactor\ninterface methods on generated types to handlers.\n\nAlso, this approach doesn't address what would be done in scenarios where modules still want to use the message router.\nEither way, we probably still want a way to pass messages from one module to another router safely even if it's just for\nuse cases like `x/gov`, `x/authz`, CosmWasm, etc. That would still require most of the things outlined in approach (B),\nalthough we could advise modules to prefer keepers for communicating with other modules.\n\nThe biggest downside of this approach is probably that it requires a strict refactoring of keeper interfaces to avoid\ngenerated code leaking into the API. This may result in cases where we need to duplicate types that are already defined\nin proto files and then write methods for converting between the golang and protobuf version. This may end up in a lot\nof unnecessary boilerplate and that may discourage modules from actually adopting it and achieving effective version\ncompatibility. Approaches (A) and (B), although heavy handed initially, aim to provide a system which once adopted\nmore or less gives the developer version compatibility for free with minimal boilerplate. 
Approach (D) may not be able\nto provide such a straightforward system since it requires a golang API to be defined alongside a protobuf API in a\nway that requires duplication and differing sets of design principles (protobuf APIs encourage additive changes\nwhile golang APIs would forbid it).\n\nOther downsides to this approach are:\n\n* no clear roadmap to supporting modules in other languages like Rust\n* doesn't get us any closer to proper object capability security (one of the goals of ADR 033)\n* ADR 033 needs to be done properly anyway for the set of use cases which do need it\n\n## Decision\n\nThe latest **DRAFT** proposal is:\n\n1. we are aligned on adopting [ADR 033](./adr-033-protobuf-inter-module-comm.md) not just as an addition to the\n framework, but as a core replacement to the keeper paradigm entirely.\n2. the ADR 033 inter-module router will accommodate any variation of approach (A) or (B) given the following rules:\n a. if the client type is the same as the server type then pass it directly through,\n b. if both client and server use the zero-copy generated code wrappers (which still need to be defined), then pass\n the memory buffers from one wrapper to the other, or\n c. marshal/unmarshal types between client and server.\n\nThis approach will allow for both maximal correctness and enable a clear path to supporting modules in other\nlanguages, possibly executed within a WASM VM.\n\n### Minor API Revisions\n\nTo declare minor API revisions of proto files, we propose the following guidelines (which were already documented\nin [cosmos.app.v1alpha module options](../proto/cosmos/app/v1alpha1/module.proto)):\n\n* proto packages which are revised from their initial version (considered revision `0`) should include a `package`\n comment in some .proto file containing the text `Revision N` at the start of a comment line where `N` is the current\nrevision number.\n* all fields, messages, etc. 
added in a version beyond the initial revision should add a comment at the start of a\ncomment line of the form `Since: Revision N` where `N` is the non-zero revision in which it was added.\n\nIt is advised that there is a 1:1 correspondence between a state machine module and a versioned set of proto files\nwhich are versioned either as a buf module, a go API module, or both. If the buf schema registry is used, the version of\nthis buf module should always be `1.N` where `N` corresponds to the package revision. Patch releases should be used when\nonly documentation comments are updated. It is okay to include proto packages named `v2`, `v3`, etc. in this same\n`1.N` versioned buf module (ex. `cosmos.bank.v2`) as long as all these proto packages consist of a single API intended\nto be served by a single SDK module.\n\n### Introspecting Minor API Revisions\n\nIn order for modules to introspect the minor API revision of peer modules, we propose adding the following method\nto `cosmossdk.io/core/intermodule.Client`:\n\n```go\nServiceRevision(ctx context.Context, serviceName string) uint64\n```\n\nModules could call this using the service name statically generated by the go grpc code generator:\n\n```go\nintermoduleClient.ServiceRevision(ctx, bankv1beta1.Msg_ServiceDesc.ServiceName)\n```\n\nIn the future, we may decide to extend the code generator used for protobuf services to add a field\nto client types which does this check more concisely, ex:\n\n```go\npackage bankv1beta1\n\ntype MsgClient interface {\n\tSend(context.Context, MsgSend) (MsgSendResponse, error)\n\tServiceRevision(context.Context) uint64\n}\n```\n\n### Unknown Field Filtering\n\nTo correctly perform [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering),\nthe inter-module router can do one of the following:\n\n* use the `protoreflect` API for messages which support that\n* for gogo proto messages, marshal and use the existing `codec/unknownproto` code\n* for zero-copy messages, 
do a simple check on the highest set field number (assuming we can require that fields are\n added consecutively in increasing order)\n\n### `FileDescriptor` Registration\n\nBecause a single go binary may contain different versions of the same generated protobuf code, we cannot rely on the\nglobal protobuf registry to contain the correct `FileDescriptor`s. Because `appconfig` module configuration is itself\nwritten in protobuf, we would like to load the `FileDescriptor`s for a module before loading a module itself. So we\nwill provide ways to register `FileDescriptor`s at module registration time before instantiation. We propose the\nfollowing `cosmossdk.io/core/appmodule.Option` constructors for the various cases of how `FileDescriptor`s may be\npackaged:\n\n```go\npackage appmodule\n\n// this can be used when we are using google.golang.org/protobuf compatible generated code\n// Ex:\n// ProtoFiles(bankv1beta1.File_cosmos_bank_v1beta1_module_proto)\nfunc ProtoFiles(files []protoreflect.FileDescriptor) Option {}\n\n// this can be used when we are using gogo proto generated code.\nfunc GzippedProtoFiles(files [][]byte) Option {}\n\n// this can be used when we are using buf build to generate a pinned file descriptor\nfunc ProtoImage(protoImage []byte) Option {}\n```\n\nThis approach allows us to support several ways protobuf files might be generated:\n\n* proto files generated internally to a module (use `ProtoFiles`)\n* the API module approach with pinned file descriptors (use `ProtoImage`)\n* gogo proto (use `GzippedProtoFiles`)\n\n### Module Dependency Declaration\n\nOne risk of ADR 033 is that dependencies are called at runtime which are not present in the loaded set of SDK modules. \nAlso, we want modules to have a way to define a minimum dependency API revision that they require. Therefore, all\nmodules should declare their set of dependencies upfront. 
These dependencies could be defined when a module is\ninstantiated, but ideally we know what the dependencies are before instantiation and can statically look at an app\nconfig and determine whether the set of modules is compatible. For example, if `bar` requires `foo` revision `>= 1`, then we\nshould be able to know this when creating an app config with two versions of `bar` and `foo`.\n\nWe propose defining these dependencies in the proto options of the module config object itself.\n\n### Interface Registration\n\nWe will also need to define how interface methods are defined on types that are serialized as `google.protobuf.Any`'s.\nIn light of the desire to support modules in other languages, we may want to think of solutions that will accommodate\nother languages such as plugins described briefly in [ADR 033](./adr-033-protobuf-inter-module-comm.md#internal-methods).\n\n### Testing\n\nIn order to ensure that modules are indeed compatible with multiple versions of their dependencies, we plan to provide specialized\nunit and integration testing infrastructure that automatically tests multiple versions of dependencies.\n\n#### Unit Testing\n\nUnit tests should be conducted inside SDK modules by mocking their dependencies. In a full ADR 033 scenario,\nthis means that all interaction with other modules is done via the inter-module router, so mocking of dependencies\nmeans mocking their msg and query server implementations. We will provide both a test runner and fixture to make this\nstreamlined. The key thing that the test runner should do to test compatibility is to test all combinations of\ndependency API revisions. 
This can be done by taking the file descriptors for the dependencies, parsing their comments\nto determine the revisions at which various elements were added, and then creating synthetic file descriptors for each revision\nby subtracting elements that were added later.\n\nHere is a proposed API for the unit test runner and fixture:\n\n```go\npackage moduletesting\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"cosmossdk.io/core/intermodule\"\n\t\"cosmossdk.io/depinject\"\n\t\"google.golang.org/grpc\"\n\t\"google.golang.org/protobuf/proto\"\n\t\"google.golang.org/protobuf/reflect/protodesc\"\n)\n\ntype TestFixture interface {\n\tcontext.Context\n\tintermodule.Client // for making calls to the module we're testing\n\tBeginBlock()\n\tEndBlock()\n}\n\ntype UnitTestFixture interface {\n\tTestFixture\n\tgrpc.ServiceRegistrar // for registering mock service implementations\n}\n\ntype UnitTestConfig struct {\n\tModuleConfig proto.Message // the module's config object\n\tDepinjectConfig depinject.Config // optional additional depinject config options\n\tDependencyFileDescriptors []protodesc.FileDescriptorProto // optional dependency file descriptors to use instead of the global registry\n}\n\n// Run runs the test function for all combinations of dependency API revisions.\nfunc (cfg UnitTestConfig) Run(t *testing.T, f func(t *testing.T, f UnitTestFixture)) {\n\t// ...\n}\n```\n\nHere is an example for testing bar calling foo which takes advantage of conditional service revisions in the expected\nmock arguments:\n\n```go\nfunc TestBar(t *testing.T) {\n\tUnitTestConfig{ModuleConfig: &foomodulev1.Module{}}.Run(t, func(t *testing.T, f moduletesting.UnitTestFixture) {\n\t\tctrl := gomock.NewController(t)\n\t\tmockFooMsgServer := footestutil.NewMockMsgServer(ctrl)\n\t\tfoov1.RegisterMsgServer(f, mockFooMsgServer)\n\t\tbarMsgClient := barv1.NewMsgClient(f)\n\t\tif f.ServiceRevision(foov1.Msg_ServiceDesc.ServiceName) >= 1 {\n\t\t\tmockFooMsgServer.EXPECT().DoSomething(gomock.Any(), 
&foov1.MsgDoSomething{\n\t\t\t\t...,\n\t\t\t\tCondition: ..., // condition is expected in revision >= 1\n }).Return(&foov1.MsgDoSomethingResponse{}, nil)\n } else {\n mockFooMsgServer.EXPECT().DoSomething(gomock.Any(), &foov1.MsgDoSomething{...}).Return(&foov1.MsgDoSomethingResponse{}, nil)\n }\n res, err := barMsgClient.CallFoo(f, &MsgCallFoo{})\n ...\n })\n}\n```\n\nThe unit test runner would make sure that no dependency mocks return arguments which are invalid for the service\nrevision being tested to ensure that modules don't incorrectly depend on functionality not present in a given revision.\n\n#### Integration Testing\n\nAn integration test runner and fixture would also be provided which instead of using mocks would test actual module\ndependencies in various combinations. Here is the proposed API:\n\n```go\ntype IntegrationTestFixture interface {\n TestFixture\n}\n\ntype IntegrationTestConfig struct {\n ModuleConfig proto.Message // the module's config object\n DependencyMatrix map[string][]proto.Message // all the dependent module configs\n}\n\n// Run runs the test function for all combinations of dependency modules.\nfunc (cfg IntegrationTestConfig) Run(t *testing.T, f func (t *testing.T, f IntegrationTestFixture)) {\n // ...\n}\n```\n\nAnd here is an example with foo and bar:\n\n```go\nfunc TestBarIntegration(t *testing.T) {\n IntegrationTestConfig{\n ModuleConfig: &barmodulev1.Module{},\n DependencyMatrix: map[string][]proto.Message{\n \"runtime\": []proto.Message{ // test against two versions of runtime\n &runtimev1.Module{},\n &runtimev2.Module{},\n },\n \"foo\": []proto.Message{ // test against three versions of foo\n &foomodulev1.Module{},\n &foomodulev2.Module{},\n &foomodulev3.Module{},\n }\n } \n }.Run(t, func (t *testing.T, f moduletesting.IntegrationTestFixture) {\n barMsgClient := barv1.NewMsgClient(f)\n res, err := barMsgClient.CallFoo(f, &MsgCallFoo{})\n ...\n })\n}\n```\n\nUnlike unit tests, integration tests actually pull in other module 
dependencies. So that modules can be written\nwithout direct dependencies on other modules and because golang has no concept of development dependencies, integration\ntests should be written in separate go modules, ex. `example.com/bar/v2/test`. Because this paradigm uses go semantic\nversioning, it is possible to build a single go module which imports 3 versions of bar and 2 versions of runtime and\ncan test these all together in the six various combinations of dependencies.\n\n## Consequences\n\n### Backwards Compatibility\n\nModules which migrate fully to ADR 033 will not be compatible with existing modules which use the keeper paradigm.\nAs a temporary workaround we may create some wrapper types that emulate the current keeper interface to minimize\nthe migration overhead.\n\n### Positive\n\n* we will be able to deliver interoperable semantically versioned modules which should dramatically increase the\n ability of the Cosmos SDK ecosystem to iterate on new features\n* it will be possible to write Cosmos SDK modules in other languages in the near future\n\n### Negative\n\n* all modules will need to be refactored somewhat dramatically\n\n### Neutral\n\n* the `cosmossdk.io/core/appconfig` framework will play a more central role in terms of how modules are defined, this\n is likely generally a good thing but does mean additional changes for users wanting to stick to the pre-depinject way\n of wiring up modules\n* `depinject` is somewhat less needed or maybe even obviated because of the full ADR 033 approach. If we adopt the\n core API proposed in https://github.com/cosmos/cosmos-sdk/pull/12239, then a module would probably always instantiate\n itself with a method `ProvideModule(appmodule.Service) (appmodule.AppModule, error)`. 
There is no complex wiring of\n keeper dependencies in this scenario and dependency injection may not have as much of (or any) use case.\n\n## Further Discussions\n\nThe decision described above is considered in draft mode and is pending final buy-in from the team and key stakeholders.\nKey outstanding discussions if we do adopt that direction are:\n\n* how do module clients introspect dependency module API revisions\n* how do modules determine a minor dependency module API revision requirement\n* how do modules appropriately test compatibility with different dependency versions\n* how to register and resolve interface implementations\n* how do modules register their protobuf file descriptors depending on the approach they take to generated code (the\n API module approach may still be viable as a supported strategy and would need pinned file descriptors)\n\n## References\n\n* https://github.com/cosmos/cosmos-sdk/discussions/10162\n* https://github.com/cosmos/cosmos-sdk/discussions/10582\n* https://github.com/cosmos/cosmos-sdk/discussions/10368\n* https://github.com/cosmos/cosmos-sdk/pull/11340\n* https://github.com/cosmos/cosmos-sdk/issues/11899\n* [ADR 020](./adr-020-protobuf-transaction-encoding.md)\n* [ADR 033](./adr-033-protobuf-inter-module-comm.md)" + }, + { + "number": 55, + "filename": "adr-055-orm.md", + "title": "ADR 055: ORM", + "content": "# ADR 055: ORM\n\n## Changelog\n\n* 2022-04-27: First draft\n\n## Status\n\nACCEPTED Implemented\n\n## Abstract\n\nIn order to make it easier for developers to build Cosmos SDK modules and for clients to query, index and verify proofs\nagainst state data, we have implemented an ORM (object-relational mapping) layer for the Cosmos SDK.\n\n## Context\n\nHistorically modules in the Cosmos SDK have always used the key-value store directly and created various handwritten\nfunctions for managing key format as well as constructing secondary indexes. 
This consumes a significant amount of\ntime when building a module and is error-prone. Because key formats are non-standard, sometimes poorly documented,\nand subject to change, it is hard for clients to generically index, query and verify merkle proofs against state data.\n\nThe first known instance of an \"ORM\" in the Cosmos ecosystem was in [weave](https://github.com/iov-one/weave/tree/master/orm).\nA later version was built for [regen-ledger](https://github.com/regen-network/regen-ledger/tree/157181f955823149e1825263a317ad8e16096da4/orm) for\nuse in the group module and later [ported to the SDK](https://github.com/cosmos/cosmos-sdk/tree/35d3312c3be306591fcba39892223f1244c8d108/x/group/internal/orm)\njust for that purpose.\n\nWhile these earlier designs made it significantly easier to write state machines, they still required a lot of manual\nconfiguration, didn't expose state format directly to clients, and were limited in their support of different types\nof index keys, composite keys, and range queries.\n\nDiscussions about the design continued in https://github.com/cosmos/cosmos-sdk/discussions/9156 and more\nsophisticated proofs of concept were created in https://github.com/allinbits/cosmos-sdk-poc/tree/master/runtime/orm\nand https://github.com/cosmos/cosmos-sdk/pull/10454.\n\n## Decision\n\nThese prior efforts culminated in the creation of the Cosmos SDK `orm` go module which uses protobuf annotations\nfor specifying ORM table definitions. 
This ORM is based on the new `google.golang.org/protobuf/reflect/protoreflect`\nAPI and supports:\n\n* sorted indexes for all simple protobuf types (except `bytes`, `enum`, `float`, `double`) as well as `Timestamp` and `Duration`\n* unsorted `bytes` and `enum` indexes\n* composite primary and secondary keys\n* unique indexes\n* auto-incrementing `uint64` primary keys\n* complex prefix and range queries\n* paginated queries\n* complete logical decoding of KV-store data\n\nAlmost all the information needed to decode state directly is specified in .proto files. Each table definition specifies\nan ID which is unique per .proto file and each index within a table is unique within that table. Clients then only need\nto know the name of a module and the prefix ORM data for a specific .proto file within that module in order to decode\nstate data directly. This additional information will be exposed directly through app configs which will be explained\nin a future ADR related to app wiring.\n\nThe ORM makes optimizations around storage space by not repeating values in the primary key in the key value\nwhen storing primary key records. 
For example, if the object `{\"a\":0,\"b\":1}` has the primary key `a`, it will\nbe stored in the key value store as `Key: '0', Value: {\"b\":1}` (with more efficient protobuf binary encoding).\nAlso, the generated code from https://github.com/cosmos/cosmos-proto does optimizations around the\n`google.golang.org/protobuf/reflect/protoreflect` API to improve performance.\n\nA code generator is included with the ORM which creates type safe wrappers around the ORM's dynamic `Table`\nimplementation and is the recommended way for modules to use the ORM.\n\nThe ORM tests provide a simplified bank module demonstration which illustrates:\n\n* [ORM proto options](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/internal/testpb/bank.proto)\n* [Generated Code](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/internal/testpb/bank.cosmos_orm.go)\n* [Example Usage in a Module Keeper](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/model/ormdb/module_test.go)\n\n## Consequences\n\n### Backwards Compatibility\n\nState machine code that adopts the ORM will need migrations as the state layout is generally backwards incompatible.\nThese state machines will also need to migrate to https://github.com/cosmos/cosmos-proto at least for state data.\n\n### Positive\n\n* easier to build modules\n* easier to add secondary indexes to state\n* possible to write a generic indexer for ORM state\n* easier to write clients that do state proofs\n* possible to automatically write query layers rather than needing to manually implement gRPC queries\n\n### Negative\n\n* worse performance than handwritten keys (for now). See [Further Discussions](#further-discussions)\nfor potential improvements\n\n### Neutral\n\n## Further Discussions\n\nFurther discussions will happen within the Cosmos SDK Framework Working Group. 
Current planned and ongoing work includes:\n\n* automatically generate client-facing query layer\n* client-side query libraries that transparently verify light client proofs\n* index ORM data to SQL databases\n* improve performance by:\n * optimizing existing reflection based code to avoid unnecessary gets when doing deletes & updates of simple tables\n * more sophisticated code generation such as making fast path reflection even faster (avoiding `switch` statements),\n or even fully generating code that equals handwritten performance\n\n\n## References\n\n* https://github.com/iov-one/weave/tree/master/orm\n* https://github.com/regen-network/regen-ledger/tree/157181f955823149e1825263a317ad8e16096da4/orm\n* https://github.com/cosmos/cosmos-sdk/tree/35d3312c3be306591fcba39892223f1244c8d108/x/group/internal/orm\n* https://github.com/cosmos/cosmos-sdk/discussions/9156\n* https://github.com/allinbits/cosmos-sdk-poc/tree/master/runtime/orm\n* https://github.com/cosmos/cosmos-sdk/pull/10454"
  },
  {
    "number": 57,
    "filename": "adr-057-app-wiring.md",
    "title": "ADR 057: App Wiring",
    "content": "# ADR 057: App Wiring\n\n## Changelog\n\n* 2022-05-04: Initial Draft\n* 2022-08-19: Updates\n\n## Status\n\nPROPOSED Implemented\n\n## Abstract\n\nIn order to make it easier to build Cosmos SDK modules and apps, we propose a new app wiring system based on\ndependency injection and declarative app configurations to replace the current `app.go` code.\n\n## Context\n\nA number of factors have made the SDK and SDK apps in their current state hard to maintain. A symptom of the current\nstate of complexity is [`simapp/app.go`](https://github.com/cosmos/cosmos-sdk/blob/c3edbb22cab8678c35e21fe0253919996b780c01/simapp/app.go)\nwhich contains almost 100 lines of imports and is otherwise over 600 lines of mostly boilerplate code that is\ngenerally copied to each new project. 
(Not to mention the additional boilerplate which gets copied in `simapp/simd`.)\n\nThe large amount of boilerplate needed to bootstrap an app has made it hard to release independently versioned go\nmodules for Cosmos SDK modules as described in [ADR 053: Go Module Refactoring](./adr-053-go-module-refactoring.md).\n\nIn addition to being very verbose and repetitive, `app.go` also exposes a large surface area for breaking changes\nas most modules instantiate themselves with positional parameters which forces breaking changes anytime a new parameter\n(even an optional one) is needed.\n\nSeveral attempts were made to improve the current situation including [ADR 033: Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md)\nand [a proof-of-concept of a new SDK](https://github.com/allinbits/cosmos-sdk-poc). The discussions around these\ndesigns led to the current solution described here.\n\n## Decision\n\nIn order to improve the current situation, a new \"app wiring\" paradigm has been designed to replace `app.go` which\ninvolves:\n\n* a declarative configuration of the modules in an app which can be serialized to JSON or YAML\n* a dependency-injection (DI) framework for instantiating apps from the configuration\n\n### Dependency Injection\n\nWhen examining the code in `app.go`, most of the code simply instantiates modules with dependencies provided either\nby the framework (such as store keys) or by other modules (such as keepers). It is generally pretty obvious given\nthe context what the correct dependencies actually should be, so dependency-injection is an obvious solution. Rather\nthan making developers manually resolve dependencies, a module will tell the DI container what dependency it needs\nand the container will figure out how to provide it.\n\nWe explored several existing DI solutions in golang and felt that the reflection-based approach in [uber/dig](https://github.com/uber-go/dig)\nwas closest to what we needed but not quite there. 
Assessing what we needed for the SDK, we designed and built\nthe Cosmos SDK [depinject module](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject), which has the following\nfeatures:\n\n* dependency resolution and provision through functional constructors, ex: `func(need SomeDep) (AnotherDep, error)`\n* dependency injection `In` and `Out` structs which support `optional` dependencies\n* grouped-dependencies (many-per-container) through the `ManyPerContainerType` tag interface\n* module-scoped dependencies via `ModuleKey`s (where each module gets a unique dependency)\n* one-per-module dependencies through the `OnePerModuleType` tag interface\n* sophisticated debugging information and container visualization via GraphViz\n\nHere are some examples of how these would be used in an SDK module:\n\n* `StoreKey` could be a module-scoped dependency which is unique per module\n* a module's `AppModule` instance (or the equivalent) could be a `OnePerModuleType`\n* CLI commands could be provided with `ManyPerContainerType`s\n\nNote that even though dependency resolution is dynamic and based on reflection, which could be considered a pitfall\nof this approach, the entire dependency graph should be resolved immediately on app startup and only gets resolved\nonce (except in the case of dynamic config reloading which is a separate topic). This means that if there are any\nerrors in the dependency graph, they will get reported immediately on startup so this approach is only slightly worse\nthan fully static resolution in terms of error reporting and much better in terms of code complexity.\n\n### Declarative App Config\n\nIn order to compose modules into an app, a declarative app configuration will be used. 
This configuration is based on\nprotobuf, and its basic structure is very simple:\n\n```protobuf\npackage cosmos.app.v1;\n\nmessage Config {\n repeated ModuleConfig modules = 1;\n}\n\nmessage ModuleConfig {\n string name = 1;\n google.protobuf.Any config = 2;\n}\n```\n\n(See also https://github.com/cosmos/cosmos-sdk/blob/6e18f582bf69e3926a1e22a6de3c35ea327aadce/proto/cosmos/app/v1alpha1/config.proto)\n\nThe configuration for every module is itself a protobuf message and modules will be identified and loaded based\non the protobuf type URL of their config object (ex. `cosmos.bank.module.v1.Module`). Modules are given a unique short `name`\nto share resources across different versions of the same module which might have different protobuf package\nversions (ex. `cosmos.bank.module.v2.Module`). All module config objects should define the `cosmos.app.v1alpha1.module`\ndescriptor option which will provide additional useful metadata for the framework and which can also be indexed\nin module registries.\n\nAn example app config in YAML might look like this:\n\n```yaml\nmodules:\n - name: baseapp\n config:\n \"@type\": cosmos.baseapp.module.v1.Module\n begin_blockers: [staking, auth, bank]\n end_blockers: [bank, auth, staking]\n init_genesis: [bank, auth, staking]\n - name: auth\n config:\n \"@type\": cosmos.auth.module.v1.Module\n bech32_prefix: \"foo\"\n - name: bank\n config:\n \"@type\": cosmos.bank.module.v1.Module\n - name: staking\n config:\n \"@type\": cosmos.staking.module.v1.Module\n```\n\nIn the above example, there is a hypothetical `baseapp` module which contains the information around ordering of\nbegin blockers, end blockers, and init genesis. 
Rather than lifting these concerns up to the module config layer,\nthey are themselves handled by a module which could allow a convenient way of swapping out different versions of\nbaseapp (for instance to target different versions of tendermint), without needing to change the rest of the config.\nThe `baseapp` module would then provide to the server framework (which sort of sits outside the ABCI app) an instance\nof `abci.Application`.\n\nIn this model, an app is *modules all the way down* and the dependency injection/app config layer is very much\nprotocol-agnostic and can adapt to even major breaking changes at the protocol layer.\n\n### Module & Protobuf Registration\n\nIn order for the two components of dependency injection and declarative configuration to work together as described,\nwe need a way for modules to actually register themselves and provide dependencies to the container.\n\nOne additional complexity that needs to be handled at this layer is protobuf registry initialization. Recall that\nin both the current SDK `codec` and the proposed [ADR 054: Protobuf Semver Compatible Codegen](https://github.com/cosmos/cosmos-sdk/pull/11802),\nprotobuf types need to be explicitly registered. Given that the app config itself is based on protobuf and\nuses protobuf `Any` types, protobuf registration needs to happen before the app config itself can be decoded. Because\nwe don't know which protobuf `Any` types will be needed a priori and modules themselves define those types, we need\nto decode the app config in separate phases:\n\n1. parse app config JSON/YAML as raw JSON and collect required module type URLs (without doing proto JSON decoding)\n2. build a [protobuf type registry](https://pkg.go.dev/google.golang.org/protobuf@v1.28.0/reflect/protoregistry) based\n on file descriptors and types provided by each required module\n3. 
decode the app config as proto JSON using the protobuf type registry\n\nBecause in [ADR 054: Protobuf Semver Compatible Codegen](https://github.com/cosmos/cosmos-sdk/pull/11802), each module\nmight use `internal` generated code which is not registered with the global protobuf registry, this code should provide\nan alternate way to register protobuf types with a type registry. In the same way that `.pb.go` files currently have a\n`var File_foo_proto protoreflect.FileDescriptor` for the file `foo.proto`, generated code should have a new member\n`var Types_foo_proto TypeInfo` where `TypeInfo` is an interface or struct with all the necessary info to register both\nthe protobuf generated types and file descriptor.\n\nSo a module must provide dependency injection providers and protobuf types, and take as input its module\nconfig object which uniquely identifies the module based on its type URL.\n\nWith this in mind, we define a global module registry which allows module implementations to register themselves\nwith the following API:\n\n```go\n// Register registers a module with the provided type name (ex. cosmos.bank.module.v1.Module)\n// and the provided options.\nfunc Register(configTypeName protoreflect.FullName, option ...Option) { ... }\n\ntype Option { /* private methods */ }\n\n// Provide registers dependency injection provider functions which work with the\n// cosmos-sdk container module. These functions can also accept an additional\n// parameter for the module's config object.\nfunc Provide(providers ...interface{}) Option { ... }\n\n// Types registers protobuf TypeInfo's with the protobuf registry.\nfunc Types(types ...TypeInfo) Option { ... 
}\n```\n\nEx:\n\n```go\nfunc init() {\n\tappmodule.Register(\"cosmos.bank.module.v1.Module\",\n\t\tappmodule.Types(\n\t\t\ttypes.Types_tx_proto,\n\t\t\ttypes.Types_query_proto,\n\t\t\ttypes.Types_types_proto,\n\t\t),\n\t\tappmodule.Provide(\n\t\t\tProvideBankModule,\n\t\t),\n\t)\n}\n\ntype Inputs struct {\n\tcontainer.In\n\n\tAuthKeeper auth.Keeper\n\tDB ormdb.ModuleDB\n}\n\ntype Outputs struct {\n\tKeeper bank.Keeper\n\tAppModule appmodule.AppModule\n}\n\nfunc ProvideBankModule(config *bankmodulev1.Module, inputs Inputs) (Outputs, error) { ... }\n```\n\nNote that in this model, a module configuration object *cannot* register different dependency providers at runtime\nbased on the configuration. This is intentional because it allows us to know globally which modules provide which\ndependencies, and it will also allow us to do code generation of the whole app initialization. This\ncan help us figure out issues with missing dependencies in an app config if the needed modules are loaded at runtime.\nIn cases where required modules are not loaded at runtime, it may be possible to guide users to the correct module\nthrough a global Cosmos SDK module registry.\n\nThe `*appmodule.Handler` type referenced above is a replacement for the legacy `AppModule` framework, and is\ndescribed in [ADR 063: Core Module API](./adr-063-core-module-api.md).\n\n### New `app.go`\n\nWith this setup, `app.go` might now look something like this:\n\n```go\npackage main\n\nimport (\n\t// Each go package which registers a module must be imported just for side-effects\n\t// so that module implementations are registered.\n\t_ \"github.com/cosmos/cosmos-sdk/x/auth/module\"\n\t_ \"github.com/cosmos/cosmos-sdk/x/bank/module\"\n\t_ \"github.com/cosmos/cosmos-sdk/x/staking/module\"\n\t\"github.com/cosmos/cosmos-sdk/core/app\"\n)\n\n//go:embed app.yaml\nvar appConfigYAML []byte\n\nfunc main() {\n\tapp.Run(app.LoadYAML(appConfigYAML))\n}\n```\n\n### Application to existing SDK modules\n\nSo far we have described a system 
which is largely agnostic to the specifics of the SDK such as store keys, `AppModule`,\n`BaseApp`, etc. Improvements to these parts of the framework that integrate with the general app wiring framework\ndefined here are described in [ADR 063: Core Module API](./adr-063-core-module-api.md).\n\n### Registration of Inter-Module Hooks\n\nSome modules define a hooks interface (ex. `StakingHooks`) which allows one module to call back into another module\nwhen certain events happen.\n\nWith the app wiring framework, these hooks interfaces can be defined as `OnePerModuleType`s and then the module\nwhich consumes these hooks can collect these hooks as a map of module name to hook type (ex. `map[string]FooHooks`). Ex:\n\n```go\nfunc init() {\n\tappmodule.Register(\n\t\t&foomodulev1.Module{},\n\t\tappmodule.Invoke(InvokeSetFooHooks),\n\t\t...\n\t)\n}\n\nfunc InvokeSetFooHooks(\n\tkeeper *keeper.Keeper,\n\tfooHooks map[string]FooHooks,\n) error {\n\tkeys := maps.Keys(fooHooks)\n\tsort.Strings(keys)\n\tfor _, k := range keys {\n\t\tkeeper.AddFooHooks(fooHooks[k])\n\t}\n\treturn nil\n}\n```\n\nOptionally, the module consuming hooks can allow apps to define an order for calling these hooks based on module name\nin its config object.\n\nAn alternative way for registering hooks via reflection was considered where all keeper types are inspected to see if\nthey implement the hook interface by the modules exposing hooks. This has the downsides of:\n\n* needing to expose all the keepers of all modules to the module providing hooks,\n* not allowing for encapsulating hooks on a different type which doesn't expose all keeper methods,\n* harder to know statically which modules expose hooks or check for them.\n\nWith the approach proposed here, hooks registration will be clearly observable in `app.go` if `depinject` codegen\n(described below) is used.\n\n### Code Generation\n\nThe `depinject` framework will optionally allow the app configuration and dependency injection wiring to be code\ngenerated. 
This will allow:\n\n* dependency injection wiring to be inspected as regular go code just like the existing `app.go`,\n* dependency injection to be opt-in with manual wiring 100% still possible.\n\nCode generation requires that all providers and invokers and their parameters are exported and in non-internal packages.\n\n### Module Semantic Versioning\n\nWhen we start creating semantically versioned SDK modules that are in standalone go modules, a state machine breaking\nchange to a module should be handled as follows:\n\n* the semantic major version should be incremented, and\n* a new semantically versioned module config protobuf type should be created.\n\nFor instance, if we have the SDK module for bank in the go module `github.com/cosmos/cosmos-sdk/x/bank` with the module config type\n`cosmos.bank.module.v1.Module`, and we want to make a state machine breaking change to the module, we would:\n\n* create a new go module `github.com/cosmos/cosmos-sdk/x/bank/v2`,\n* with the module config protobuf type `cosmos.bank.module.v2.Module`.\n\nThis *does not* mean that we need to increment the protobuf API version for bank. Both modules can support\n`cosmos.bank.v1`, but `github.com/cosmos/cosmos-sdk/x/bank/v2` will be a separate go module with a separate module config type.\n\nThis practice will eventually allow us to use appconfig to load new versions of a module via a configuration change.\n\nEffectively, there should be a 1:1 correspondence between a semantically versioned go module and a \nversioned module config protobuf type, and major versioning bumps should occur whenever state machine breaking changes\nare made to a module.\n\nNOTE: SDK modules that are standalone go modules *should not* adopt semantic versioning until the concerns described in\n[ADR 054: Module Semantic Versioning](./adr-054-semver-compatible-modules.md) are\naddressed. The short-term solution for this issue was left somewhat unresolved. 
However, the easiest tactic is\nlikely to use a standalone API go module and follow the guidelines described in this comment: https://github.com/cosmos/cosmos-sdk/pull/11802#issuecomment-1406815181. For the time-being, it is recommended that\nCosmos SDK modules continue to follow tried and true [0-based versioning](https://0ver.org) until an officially\nrecommended solution is provided. This section of the ADR will be updated when that happens and for now, this section\nshould be considered as a design recommendation for future adoption of semantic versioning.\n\n## Consequences\n\n### Backwards Compatibility\n\nModules which work with the new app wiring system do not need to drop their existing `AppModule` and `NewKeeper`\nregistration paradigms. These two methods can live side-by-side for as long as is needed.\n\n### Positive\n\n* wiring up new apps will be simpler, more succinct and less error-prone\n* it will be easier to develop and test standalone SDK modules without needing to replicate all of simapp\n* it may be possible to dynamically load modules and upgrade chains without needing to do a coordinated stop and binary\n upgrade using this mechanism\n* easier plugin integration\n* dependency injection framework provides more automated reasoning about dependencies in the project, with a graph visualization.\n\n### Negative\n\n* it may be confusing when a dependency is missing although error messages, the GraphViz visualization, and global\n module registration may help with that\n\n### Neutral\n\n* it will require work and education\n\n## Further Discussions\n\nThe protobuf type registration system described in this ADR has not been implemented and may need to be reconsidered in\nlight of code generation. 
It may be better to do this type registration with a DI provider.\n\n## References\n\n* https://github.com/cosmos/cosmos-sdk/blob/c3edbb22cab8678c35e21fe0253919996b780c01/simapp/app.go\n* https://github.com/allinbits/cosmos-sdk-poc\n* https://github.com/uber-go/dig\n* https://github.com/google/wire\n* https://pkg.go.dev/github.com/cosmos/cosmos-sdk/container\n* https://github.com/cosmos/cosmos-sdk/pull/11802\n* [ADR 063: Core Module API](./adr-063-core-module-api.md)" + }, + { + "number": 58, + "filename": "adr-058-auto-generated-cli.md", + "title": "ADR 058: Auto-Generated CLI", + "content": "# ADR 058: Auto-Generated CLI\n\n## Changelog\n\n* 2022-05-04: Initial Draft\n\n## Status\n\nACCEPTED Partially Implemented\n\n## Abstract\n\nIn order to make it easier for developers to write Cosmos SDK modules, we provide infrastructure which automatically\ngenerates CLI commands based on protobuf definitions.\n\n## Context\n\nCurrent Cosmos SDK modules generally implement a CLI command for every transaction and every query supported by the\nmodule. These are handwritten for each command and essentially amount to providing some CLI flags or positional\narguments for specific fields in protobuf messages.\n\nIn order to make sure CLI commands are correctly implemented as well as to make sure that the application works\nin end-to-end scenarios, we do integration tests using CLI commands. While these tests are valuable on some-level,\nthey can be hard to write and maintain, and run slowly. [Some teams have contemplated](https://github.com/regen-network/regen-ledger/issues/1041)\nmoving away from CLI-style integration tests (which are really end-to-end tests) towards narrower integration tests\nwhich exercise `MsgClient` and `QueryClient` directly. 
This might involve replacing the current end-to-end CLI\ntests with unit tests as there still needs to be some way to test these CLI commands for full quality assurance.\n\n## Decision\n\nTo make module development simpler, we provide infrastructure - in the new [`client/v2`](https://github.com/cosmos/cosmos-sdk/tree/main/client/v2)\ngo module - for automatically generating CLI commands based on protobuf definitions to either replace or complement\nhandwritten CLI commands. This will mean that when developing a module, it will be possible to skip both writing and\ntesting CLI commands as that can all be taken care of by the framework.\n\nThe basic design for automatically generating CLI commands is to:\n\n* create one CLI command for each `rpc` method in a protobuf `Query` or `Msg` service\n* create a CLI flag for each field in the `rpc` request type\n* for `query` commands call gRPC and print the response as protobuf JSON or YAML (via the `-o`/`--output` flag)\n* for `tx` commands, create a transaction and apply common transaction flags\n\nIn order to make the auto-generated CLI as easy to use (or easier) than handwritten CLI, we need to do custom handling\nof specific protobuf field types so that the input format is easy for humans:\n\n* `Coin`, `Coins`, `DecCoin`, and `DecCoins` should be input using the existing format (i.e. `1000uatom`)\n* it should be possible to specify an address using either the bech32 address string or a named key in the keyring\n* `Timestamp` and `Duration` should accept strings like `2001-01-01T00:00:00Z` and `1h3m` respectively\n* pagination should be handled with flags like `--page-limit`, `--page-offset`, etc.\n* it should be possible to customize any other protobuf type either via its message name or a `cosmos_proto.scalar` annotation\n\nAt a basic level it should be possible to generate a command for a single `rpc` method as well as all the commands for\na whole protobuf `service` definition. 
It should be possible to mix and match auto-generated and handwritten commands.\n\n## Consequences\n\n### Backwards Compatibility\n\nExisting modules can mix and match auto-generated and handwritten CLI commands, so it is up to them as to whether they\nmake breaking changes by replacing handwritten commands with slightly different auto-generated ones.\n\nFor now the SDK will maintain the existing set of CLI commands for backwards compatibility but new commands will use\nthis functionality.\n\n### Positive\n\n* module developers will not need to write CLI commands\n* module developers will not need to test CLI commands\n* [lens](https://github.com/strangelove-ventures/lens) may benefit from this\n\n### Negative\n\n### Neutral\n\n## Further Discussions\n\nWe would like to be able to customize:\n\n* short and long usage strings for commands\n* aliases for flags (ex. `-a` for `--amount`)\n* which fields are positional parameters rather than flags\n\nIt is an [open discussion](https://github.com/cosmos/cosmos-sdk/pull/11725#issuecomment-1108676129)\nas to whether these customization options should lie in:\n\n* the .proto files themselves,\n* separate config files (ex. YAML), or\n* directly in code\n\nProviding the options in .proto files would allow a dynamic client to automatically generate\nCLI commands on the fly. 
However, that may pollute the .proto files themselves with information that is only relevant\nfor a small subset of users.\n\n## References\n\n* https://github.com/regen-network/regen-ledger/issues/1041\n* https://github.com/cosmos/cosmos-sdk/tree/main/client/v2\n* https://github.com/cosmos/cosmos-sdk/pull/11725#issuecomment-1108676129" + }, + { + "number": 59, + "filename": "adr-059-test-scopes.md", + "title": "ADR 059: Test Scopes", + "content": "# ADR 059: Test Scopes\n\n## Changelog\n\n* 2022-08-02: Initial Draft\n* 2023-03-02: Add precision for integration tests\n* 2023-03-23: Add precision for E2E tests\n\n## Status\n\nPROPOSED Partially Implemented\n\n## Abstract\n\nRecent work in the SDK aimed at breaking apart the monolithic root go module has highlighted\nshortcomings and inconsistencies in our testing paradigm. This ADR clarifies a common\nlanguage for talking about test scopes and proposes an ideal state of tests at each scope.\n\n## Context\n\n[ADR-053: Go Module Refactoring](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-053-go-module-refactoring.md) expresses our desire for an SDK composed of many\nindependently versioned Go modules, and [ADR-057: App Wiring](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-057-app-wiring.md) offers a methodology\nfor breaking apart inter-module dependencies through the use of dependency injection. As\ndescribed in [EPIC: Separate all SDK modules into standalone go modules](https://github.com/cosmos/cosmos-sdk/issues/11899), module\ndependencies are particularly complected in the test phase, where simapp is used as\nthe key test fixture in setting up and running tests. 
It is clear that the successful\ncompletion of Phases 3 and 4 in that EPIC require the resolution of this dependency problem.\n\nIn [EPIC: Unit Testing of Modules via Mocks](https://github.com/cosmos/cosmos-sdk/issues/12398) it was thought this Gordian knot could be\nunwound by mocking all dependencies in the test phase for each module, but seeing how these\nrefactors were complete rewrites of test suites discussions began around the fate of the\nexisting integration tests. One perspective is that they ought to be thrown out, another is\nthat integration tests have some utility of their own and a place in the SDK's testing story.\n\nAnother point of confusion has been the current state of CLI test suites, [x/auth](https://github.com/cosmos/cosmos-sdk/blob/0f7e56c6f9102cda0ca9aba5b6f091dbca976b5a/x/auth/client/testutil/suite.go#L44-L49) for\nexample. In code these are called integration tests, but in reality function as end to end\ntests by starting up a tendermint node and full application. [EPIC: Rewrite and simplify\nCLI tests](https://github.com/cosmos/cosmos-sdk/issues/12696) identifies the ideal state of CLI tests using mocks, but does not address the\nplace end to end tests may have in the SDK.\n\nFrom here we identify three scopes of testing, **unit**, **integration**, **e2e** (end to\nend), seek to define the boundaries of each, their shortcomings (real and imposed), and their\nideal state in the SDK.\n\n### Unit tests\n\nUnit tests exercise the code contained in a single module (e.g. `/x/bank`) or package\n(e.g. `/client`) in isolation from the rest of the code base. Within this we identify two\nlevels of unit tests, *illustrative* and *journey*. 
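The illustrative/journey distinction can be sketched in plain Go. The `AccountKeeper` interface, mock, and `Transfer` function below are hypothetical simplifications, not real SDK APIs; they only show how both levels of unit test run entirely against a mock.

```go
package main

import (
	"errors"
	"fmt"
)

// AccountKeeper is a simplified stand-in for a dependency that a bank-style
// module would accept as an interface so it can be mocked in unit tests.
type AccountKeeper interface {
	Balance(addr string) int64
	SetBalance(addr string, amt int64)
}

// mockAccountKeeper is a hand-written mock backed by a map.
type mockAccountKeeper map[string]int64

func (m mockAccountKeeper) Balance(addr string) int64         { return m[addr] }
func (m mockAccountKeeper) SetBalance(addr string, amt int64) { m[addr] = amt }

var errInsufficient = errors.New("insufficient funds")

// Transfer is the unit under test; it only sees the interface, never a real keeper.
func Transfer(ak AccountKeeper, from, to string, amt int64) error {
	if ak.Balance(from) < amt {
		return errInsufficient
	}
	ak.SetBalance(from, ak.Balance(from)-amt)
	ak.SetBalance(to, ak.Balance(to)+amt)
	return nil
}

func main() {
	// Illustrative: one atomic rule - transfers exceeding the balance fail.
	ak := mockAccountKeeper{"alice": 10}
	fmt.Println(Transfer(ak, "alice", "bob", 50)) // insufficient funds

	// Journey: a multi-step flow exercised entirely against the mock.
	_ = Transfer(ak, "alice", "bob", 4)
	_ = Transfer(ak, "bob", "carol", 1)
	fmt.Println(ak["alice"], ak["bob"], ak["carol"]) // 6 3 1
}
```

The illustrative case pins down a single behavioral rule; the journey case strings several operations together while still never touching a real dependency.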
The definitions below lean heavily on\n[The BDD Books - Formulation](https://leanpub.com/bddbooks-formulation) section 1.3.\n\n*Illustrative* tests exercise an atomic part of a module in isolation - in this case we\nmight do fixture setup/mocking of other parts of the module.\n\nTests which exercise a whole module's function with dependencies mocked, are *journeys*.\nThese are almost like integration tests in that they exercise many things together but still\nuse mocks.\n\nExample 1 journey vs illustrative tests - [depinject's BDD style tests](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/binding_test.go), show how we can\nrapidly build up many illustrative cases demonstrating behavioral rules without [very much code](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/binding_test.go) while maintaining high level readability.\n\nExample 2 [depinject table driven tests](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/provider_desc_test.go)\n\nExample 3 [Bank keeper tests](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/bank/keeper/keeper_test.go#L94-L105) - A mock implementation of `AccountKeeper` is supplied to the keeper constructor.\n\n#### Limitations\n\nCertain modules are tightly coupled beyond the test phase. A recent dependency report for\n`bank -> auth` found 274 total usages of `auth` in `bank`, 50 of which are in\nproduction code and 224 in test. This tight coupling may suggest that either the modules\nshould be merged, or refactoring is required to abstract references to the core types tying\nthe modules together. 
It could also indicate that these modules should be tested together\nin integration tests beyond mocked unit tests.\n\nIn some cases setting up a test case for a module with many mocked dependencies can be quite\ncumbersome and the resulting test may only show that the mocking framework works as expected\nrather than working as a functional test of interdependent module behavior.\n\n### Integration tests\n\nIntegration tests define and exercise relationships between an arbitrary number of modules\nand/or application subsystems.\n\nWiring for integration tests is provided by `depinject` and some [helper code](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/testutil/sims/app_helpers.go#L95) starts up\na running application. A section of the running application may then be tested. Certain\ninputs during different phases of the application life cycle are expected to produce\ninvariant outputs without too much concern for component internals. This type of black box\ntesting has a larger scope than unit testing.\n\nExample 1 [client/grpc_query_test/TestGRPCQuery](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/client/grpc_query_test.go#L111-L129) - This test is misplaced in `/client`,\nbut tests the life cycle of (at least) `runtime` and `bank` as they progress through\nstartup, genesis and query time. 
It also exercises the fitness of the client and query\nserver without putting bytes on the wire through the use of [QueryServiceTestHelper](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/baseapp/grpcrouter_helpers.go#L31).\n\nExample 2 `x/evidence` Keeper integration tests - Starts up an application composed of [8\nmodules](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/testutil/app.yaml#L1) with [5 keepers](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/keeper/keeper_test.go#L101-L106) used in the integration test suite. One test in the suite\nexercises [HandleEquivocationEvidence](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/keeper/infraction_test.go#L42) which contains many interactions with the staking\nkeeper.\n\nExample 3 - Integration suite app configurations may also be specified via golang (not\nYAML as above) [statically](https://github.com/cosmos/cosmos-sdk/blob/main/x/nft/testutil/app_config.go) or [dynamically](https://github.com/cosmos/cosmos-sdk/blob/8c23f6f957d1c0bedd314806d1ac65bea59b084c/tests/integration/bank/keeper/keeper_test.go#L129-L134).\n\n#### Limitations\n\nSetting up a particular input state may be more challenging since the application is\nstarting from a zero state. Some of this may be addressed by good test fixture\nabstractions with testing of their own. Tests may also be more brittle, and larger\nrefactors could impact application initialization in unexpected ways with harder to\nunderstand errors. 
This could also be seen as a benefit, and indeed the SDK's current\nintegration tests were helpful in tracking down logic errors during earlier stages\nof app-wiring refactors.\n\n### Simulations\n\nSimulations (also called generative testing) are a special case of integration tests where\ndeterministically random module operations are executed against a running simapp, building\nblocks on the chain until a specified height is reached. No *specific* assertions are\nmade for the state transitions resulting from module operations but any error will halt and\nfail the simulation. Since `crisis` is included in simapp and the simulation runs\nEndBlockers at the end of each block any module invariant violations will also fail\nthe simulation.\n\nModules must implement [AppModuleSimulation.WeightedOperations](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/types/module/simulation.go#L31) to define their\nsimulation operations. Note that not all modules implement this which may indicate a\ngap in current simulation test coverage.\n\nModules not returning simulation operations:\n\n* `auth`\n* `evidence`\n* `mint`\n* `params`\n\nA separate binary, [runsim](https://github.com/cosmos/tools/tree/master/cmd/runsim), is responsible for kicking off some of these tests and\nmanaging their life cycle.\n\n#### Limitations\n\n* [A success](https://github.com/cosmos/cosmos-sdk/runs/7606931983?check_suite_focus=true) may take a long time to run, 7-10 minutes per simulation in CI.\n* [Timeouts](https://github.com/cosmos/cosmos-sdk/runs/7606932295?check_suite_focus=true) sometimes occur on apparent successes without any indication why.\n* Useful error messages not provided on [failure](https://github.com/cosmos/cosmos-sdk/runs/7606932548?check_suite_focus=true) from CI, requiring a developer to run\n the simulation locally to reproduce.\n\n### E2E tests\n\nEnd to end tests exercise the entire system as we understand it in as close an approximation\nto a 
production environment as is practical. Presently these tests are located at\n[tests/e2e](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e) and rely on [testutil/network](https://github.com/cosmos/cosmos-sdk/tree/main/testutil/network) to start up an in-process Tendermint node.\n\nAn application should be built as minimally as possible to exercise the desired functionality.\nThe SDK uses an application with only the required modules for the tests. The application developer is advised to use their own application for e2e tests.\n\n#### Limitations\n\nIn general the limitations of end to end tests are orchestration and compute cost.\nScaffolding is required to start up and run a prod-like environment and this\nprocess takes much longer to start and run than unit or integration tests.\n\nGlobal locks present in Tendermint code cause stateful starting/stopping to sometimes hang\nor fail intermittently when run in a CI environment.\n\nThe scope of e2e tests has been complected with command line interface testing.\n\n## Decision\n\nWe accept these test scopes and identify the following decision points for each.\n\n| Scope | App Type | Mocks? |\n| ----------- | ------------------- | ------ |\n| Unit | None | Yes |\n| Integration | integration helpers | Some |\n| Simulation | minimal app | No |\n| E2E | minimal app | No |\n\nThe decision above is valid for the SDK. 
An application developer should test their application with their full application instead of the minimal app.\n\n### Unit Tests\n\nAll modules must have mocked unit test coverage.\n\nIllustrative tests should outnumber journeys in unit tests.\n\nUnit tests should outnumber integration tests.\n\nUnit tests must not introduce additional dependencies beyond those already present in\nproduction code.\n\nWhen module unit test introduction as per [EPIC: Unit testing of modules via mocks](https://github.com/cosmos/cosmos-sdk/issues/12398)\nresults in a near complete rewrite of an integration test suite the test suite should be\nretained and moved to `/tests/integration`. We accept the resulting test logic\nduplication but recommend improving the unit test suite through the addition of\nillustrative tests.\n\n### Integration Tests\n\nAll integration tests shall be located in `/tests/integration`, even those which do not\nintroduce extra module dependencies.\n\nTo help limit scope and complexity, it is recommended to use the smallest possible number of\nmodules in application startup, i.e. don't depend on simapp.\n\nIntegration tests should outnumber e2e tests.\n\n### Simulations\n\nSimulations shall use a minimal application (usually via app wiring). 
They are located under `/x/{moduleName}/simulation`.\n\n### E2E Tests\n\nExisting e2e tests shall be migrated to integration tests by removing the dependency on the\ntest network and in-process Tendermint node to ensure we do not lose test coverage.\n\nThe e2e test runner shall transition from in process Tendermint to a runner powered by\nDocker via [dockertest](https://github.com/ory/dockertest).\n\nE2E tests exercising a full network upgrade shall be written.\n\nThe CLI testing aspect of existing e2e tests shall be rewritten using the network mocking\ndemonstrated in [PR#12706](https://github.com/cosmos/cosmos-sdk/pull/12706).\n\n## Consequences\n\n### Positive\n\n* test coverage is increased\n* test organization is improved\n* reduced dependency graph size in modules\n* simapp removed as a dependency from modules\n* inter-module dependencies introduced in test code are removed\n* reduced CI run time after transitioning away from in process Tendermint\n\n### Negative\n\n* some test logic duplication between unit and integration tests during transition\n* the DX of tests written using dockertest may be a bit worse\n\n### Neutral\n\n* some discovery required for e2e transition to dockertest\n\n## Further Discussions\n\nIt may be useful if test suites could be run in integration mode (with mocked tendermint) or\nwith e2e fixtures (with real tendermint and many nodes). 
Integration fixtures could be used\nfor quicker runs, e2e fixtures could be used for more battle hardening.\n\nA PoC for unit tests demonstrating BDD in `x/gov` was completed in PR [#12847](https://github.com/cosmos/cosmos-sdk/pull/12847) [Rejected].\nObserving that a strength of BDD specifications is their readability, and a con is the\ncognitive load while writing and maintaining, current consensus is to reserve BDD use\nfor places in the SDK where complex rules and module interactions are demonstrated.\nMore straightforward or low level test cases will continue to rely on go table tests.\n\nLevels of network mocking in integration and e2e tests are still being worked on and formalized."
  },
  {
    "number": 60,
    "filename": "adr-060-abci-1.0.md",
    "title": "ADR 60: ABCI 1.0 Integration (Phase I)",
    "content": "# ADR 60: ABCI 1.0 Integration (Phase I)\n\n## Changelog\n\n* 2022-08-10: Initial Draft (@alexanderbez, @tac0turtle)\n* Nov 12, 2022: Update `PrepareProposal` and `ProcessProposal` semantics per the\n initial implementation [PR](https://github.com/cosmos/cosmos-sdk/pull/13453) (@alexanderbez)\n\n## Status\n\nACCEPTED\n\n## Abstract\n\nThis ADR describes the initial adoption of [ABCI 1.0](https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md),\nthe next evolution of ABCI, within the Cosmos SDK. ABCI 1.0 aims to provide\napplication developers with more flexibility and control over application and\nconsensus semantics, e.g. in-application mempools, in-process oracles, and\norder-book style matching engines.\n\n## Context\n\nTendermint will release ABCI 1.0. Notably, at the time of this writing,\nTendermint is releasing v0.37.0 which will include `PrepareProposal` and `ProcessProposal`.\n\nThe `PrepareProposal` ABCI method is concerned with a block proposer requesting\nthe application to evaluate a series of transactions to be included in the next\nblock, defined as a slice of `TxRecord` objects. 
The application can either\naccept, reject, or completely ignore some or all of these transactions. This is\nan important consideration to make as the application can essentially define and\ncontrol its own mempool, allowing it to define sophisticated transaction priority\nand filtering mechanisms by completely ignoring the `TxRecords` Tendermint\nsends it, favoring its own transactions. This essentially means that the Tendermint\nmempool acts more like a gossip data structure.\n\nThe second ABCI method, `ProcessProposal`, is used to process the block proposer's\nproposal as defined by `PrepareProposal`. It is important to note the following\nwith respect to `ProcessProposal`:\n\n* Execution of `ProcessProposal` must be deterministic.\n* There must be coherence between `PrepareProposal` and `ProcessProposal`. In\n other words, for any two correct processes *p* and *q*, if *q*'s Tendermint\n\tcalls `RequestProcessProposal` on *u<sub>p</sub>* (the proposal prepared by *p*), *q*'s Application returns\n\tACCEPT in `ResponseProcessProposal`.\n\nIt is important to note that in ABCI 1.0 integration, the application\nis NOT responsible for locking semantics -- Tendermint will still be responsible\nfor that. In the future, however, the application will be responsible for locking,\nwhich allows for parallel execution possibilities.\n\n## Decision\n\nWe will integrate ABCI 1.0, which will be introduced in Tendermint\nv0.37.0, in the next major release of the Cosmos SDK. We will integrate ABCI 1.0\nmethods on the `BaseApp` type. 
We describe the implementations of the two methods\nindividually below.\n\nPrior to describing the implementation of the two new methods, it is important to\nnote that the existing ABCI methods, `CheckTx`, `DeliverTx`, etc, still exist and\nserve the same functions as they do now.\n\n### `PrepareProposal`\n\nPrior to evaluating the decision for how to implement `PrepareProposal`, it is\nimportant to note that `CheckTx` will still be executed and will be responsible\nfor evaluating transaction validity as it does now, with one very important\n*additive* distinction.\n\nWhen executing transactions in `CheckTx`, the application will now add valid\ntransactions, i.e. passing the AnteHandler, to its own mempool data structure.\nIn order to provide a flexible approach to meet the varying needs of application\ndevelopers, we will define both a mempool interface and a data structure utilizing\nGolang generics, allowing developers to focus only on transaction\nordering. Developers requiring absolute full control can implement their own\ncustom mempool implementation.\n\nWe define the general mempool interface as follows (subject to change):\n\n```go\ntype Mempool interface {\n\t// Insert attempts to insert a Tx into the app-side mempool returning\n\t// an error upon failure.\n\tInsert(sdk.Context, sdk.Tx) error\n\n\t// Select returns an Iterator over the app-side mempool. If txs are specified,\n\t// then they shall be incorporated into the Iterator. The Iterator must\n\t// be closed by the caller.\n\tSelect(sdk.Context, [][]byte) Iterator\n\n\t// CountTx returns the number of transactions currently in the mempool.\n\tCountTx() int\n\n\t// Remove attempts to remove a transaction from the mempool, returning an error\n\t// upon failure.\n\tRemove(sdk.Tx) error\n}\n\n// Iterator defines an app-side mempool iterator interface that is as minimal as\n// possible. 
The order of iteration is determined by the app-side mempool\n// implementation.\ntype Iterator interface {\n\t// Next returns the next transaction from the mempool. If there are no more\n\t// transactions, it returns nil.\n\tNext() Iterator\n\n\t// Tx returns the transaction at the current position of the iterator.\n\tTx() sdk.Tx\n}\n```\n\nWe will define an implementation of `Mempool`, defined by `nonceMempool`, that\nwill cover most basic application use-cases. Namely, it will prioritize transactions\nby transaction sender, allowing for multiple transactions from the same sender.\n\nThe default app-side mempool implementation, `nonceMempool`, will operate on a \nsingle skip list data structure. Specifically, transactions with the lowest nonce\nglobally are prioritized. Transactions with the same nonce are prioritized by\nsender address.\n\n```go\ntype nonceMempool struct {\n\ttxQueue *huandu.SkipList\n}\n```\n\nPrevious discussions [1] have come to the agreement that Tendermint will\nperform a request to the application, via `RequestPrepareProposal`, with a certain\namount of transactions reaped from Tendermint's local mempool. The exact amount\nof transactions reaped will be determined by a local operator configuration.\nThis is referred to as the \"one-shot approach\" seen in discussions.\n\nWhen Tendermint reaps transactions from the local mempool and sends them to the\napplication via `RequestPrepareProposal`, the application will have to evaluate\nthe transactions. Specifically, it will need to inform Tendermint if it should\nreject and/or include each transaction. 
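For illustration only, a minimal FIFO implementation of a simplified form of the `Mempool` and `Iterator` interfaces above (dropping the `sdk.Context` and `sdk.Tx` types so the sketch stays self-contained) could look like:

```go
package main

import "fmt"

// Tx is a placeholder for sdk.Tx; the sdk.Context parameters from the
// ADR's interface are omitted to keep this sketch self-contained.
type Tx string

type Iterator interface {
	Next() Iterator
	Tx() Tx
}

type Mempool interface {
	Insert(Tx) error
	Select() Iterator
	CountTx() int
	Remove(Tx) error
}

// fifoMempool is the simplest possible implementation: transactions are
// selected in insertion order, with none of nonceMempool's prioritization.
type fifoMempool struct{ txs []Tx }

type fifoIterator struct{ txs []Tx }

// Next returns the iterator advanced by one, or nil when exhausted.
func (it *fifoIterator) Next() Iterator {
	if len(it.txs) <= 1 {
		return nil
	}
	return &fifoIterator{txs: it.txs[1:]}
}
func (it *fifoIterator) Tx() Tx { return it.txs[0] }

func (m *fifoMempool) Insert(tx Tx) error { m.txs = append(m.txs, tx); return nil }
func (m *fifoMempool) CountTx() int       { return len(m.txs) }

func (m *fifoMempool) Select() Iterator {
	if len(m.txs) == 0 {
		return nil
	}
	return &fifoIterator{txs: m.txs}
}

func (m *fifoMempool) Remove(tx Tx) error {
	for i, t := range m.txs {
		if t == tx {
			m.txs = append(m.txs[:i], m.txs[i+1:]...)
			return nil
		}
	}
	return fmt.Errorf("tx not found: %s", tx)
}

func main() {
	var mp Mempool = &fifoMempool{}
	_ = mp.Insert("tx1")
	_ = mp.Insert("tx2")
	for it := mp.Select(); it != nil; it = it.Next() {
		fmt.Println(it.Tx())
	}
	_ = mp.Remove("tx1")
	fmt.Println(mp.CountTx()) // 1
}
```

A production implementation would additionally need to be safe for concurrent use and to prioritize, which is exactly where `nonceMempool`'s skip list comes in.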
Note, the application can even *replace*\ntransactions entirely with other transactions.\n\nWhen evaluating transactions from `RequestPrepareProposal`, the application will\nignore *ALL* transactions sent to it in the request and instead reap up to\n`RequestPrepareProposal.max_tx_bytes` from its own mempool.\n\nSince an application can technically insert or inject transactions on `Insert`\nduring `CheckTx` execution, it is recommended that applications ensure transaction\nvalidity when reaping transactions during `PrepareProposal`. However, what validity\nexactly means is entirely determined by the application.\n\nThe Cosmos SDK will provide a default `PrepareProposal` implementation that simply\nselects up to `MaxBytes` *valid* transactions.\n\nHowever, applications can override this default implementation with their own\nimplementation and set that on `BaseApp` via `SetPrepareProposal`.\n\n\n### `ProcessProposal`\n\nThe `ProcessProposal` ABCI method is relatively straightforward. It is responsible\nfor ensuring validity of the proposed block containing transactions that were\nselected from the `PrepareProposal` step. However, how an application determines\nvalidity of a proposed block depends on the application and its varying use cases.\nFor most applications, simply calling the `AnteHandler` chain would suffice, but\nthere could easily be other applications that need more control over the validation\nprocess of the proposed block, such as ensuring txs are in a certain order or\nthat certain transactions are included. While this theoretically could be achieved\nwith a custom `AnteHandler` implementation, it's not the cleanest UX or the most\nefficient solution.\n\nInstead, we will define an additional ABCI interface method on the existing\n`Application` interface, similar to the existing ABCI methods such as `BeginBlock`\nor `EndBlock`. 
This new interface method will be defined as follows:\n\n```go\nProcessProposal(sdk.Context, abci.RequestProcessProposal) error {}\n```\n\nNote, we must call `ProcessProposal` with a new internal branched state on the\n`Context` argument as we cannot simply just use the existing `checkState` because\n`BaseApp` already has a modified `checkState` at this point. So when executing\n`ProcessProposal`, we create a similar branched state, `processProposalState`,\noff of `deliverState`. Note, the `processProposalState` is never committed and\nis completely discarded after `ProcessProposal` finishes execution.\n\nThe Cosmos SDK will provide a default implementation of `ProcessProposal` in which\nall transactions are validated using the CheckTx flow, i.e. the AnteHandler, and\nwill always return ACCEPT unless any transaction cannot be decoded.\n\n### `DeliverTx`\n\nSince transactions are not truly removed from the app-side mempool during\n`PrepareProposal`, since `ProcessProposal` can fail or take multiple rounds and\nwe do not want to lose transactions, we need to finally remove the transaction\nfrom the app-side mempool during `DeliverTx` since during this phase, the\ntransactions are being included in the proposed block.\n\nAlternatively, we can keep the transactions as truly being removed during the\nreaping phase in `PrepareProposal` and add them back to the app-side mempool in\ncase `ProcessProposal` fails.\n\n## Consequences\n\n### Backwards Compatibility\n\nABCI 1.0 is naturally not backwards compatible with prior versions of the Cosmos SDK\nand Tendermint. 
For example, sending `RequestPrepareProposal` to an application that does not speak ABCI 1.0 will naturally fail.\n\nHowever, in the first phase of the integration, the existing ABCI methods as we\nknow them today will still exist and function as they currently do.\n\n### Positive\n\n* Applications now have full control over transaction ordering and priority.\n* Lays the groundwork for the full integration of ABCI 1.0, which will unlock more\n app-side use cases around block construction and integration with the Tendermint\n consensus engine.\n\n### Negative\n\n* Requires that the \"mempool\", as a general data structure that collects and stores\n uncommitted transactions, will be duplicated between both Tendermint and the\n Cosmos SDK.\n* Additional requests between Tendermint and the Cosmos SDK in the context of\n block execution. Albeit, the overhead should be negligible.\n* Not backwards compatible with previous versions of Tendermint and the Cosmos SDK.\n\n## Further Discussions\n\nIt is possible to design the app-side implementation of the `Mempool[T MempoolTx]`\nin many different ways using different data structures and implementations, all\nof which have different tradeoffs. The proposed solution keeps things simple\nand covers cases that would be required for most basic applications. 
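A rough sketch of what the generic shape might look like, assuming a simple slice-backed implementation (illustrative only, not the SDK's actual `Mempool[T MempoolTx]` definition):

```go
package main

import "fmt"

// Mempool is a sketch of the generic form: the transaction type is a type
// parameter, so implementations can specialize storage and ordering per type.
type Mempool[T any] interface {
	Insert(T) error
	CountTx() int
}

// sliceMempool is the simplest possible backing store; a skip list, heap, or
// priority queue could be swapped in behind the same generic interface.
type sliceMempool[T any] struct{ txs []T }

func (m *sliceMempool[T]) Insert(tx T) error { m.txs = append(m.txs, tx); return nil }
func (m *sliceMempool[T]) CountTx() int      { return len(m.txs) }

func main() {
	var mp Mempool[string] = &sliceMempool[string]{}
	_ = mp.Insert("tx1")
	fmt.Println(mp.CountTx()) // 1
}
```

The tradeoffs mentioned above show up precisely in the choice of backing structure: a slice gives O(1) insert but O(n) removal, while a skip list gives O(log n) for both.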
There are\ntradeoffs that can be made to improve performance of reaping and inserting into\nthe provided mempool implementation.\n\n## References\n\n* https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md\n* [1] https://github.com/tendermint/tendermint/issues/7750#issuecomment-1076806155\n* [2] https://github.com/tendermint/tendermint/issues/7750#issuecomment-1075717151" + }, + { + "number": 61, + "filename": "adr-061-liquid-staking.md", + "title": "# ADR 061: Liquid Staking", + "content": "# ADR-061: Liquid Staking\n\n## Changelog\n\n* 2022-09-10: Initial Draft (@zmanian)\n\n## Status\n\nACCEPTED\n\n## Abstract\n\nAdd a semi-fungible liquid staking primitive to the default Cosmos SDK staking module. This upgrades proof of stake to enable safe designs with lower overall monetary issuance and integration with numerous liquid staking protocols like Stride, Persistence, Quicksilver, Lido etc.\n\n## Context\n\nThe original release of the Cosmos Hub featured the implementation of a ground breaking proof of stake mechanism featuring delegation, slashing, in protocol reward distribution and adaptive issuance. This design was state of the art for 2016 and has been deployed without major changes by many L1 blockchains.\n\nAs both Proof of Stake and blockchain use cases have matured, this design has aged poorly and should no longer be considered a good baseline Proof of Stake issuance. In the world of application specific blockchains, there cannot be a one size fits all blockchain but the Cosmos SDK does endeavour to provide a good baseline implementation and one that is suitable for the Cosmos Hub.\n\nThe most important deficiency of the legacy staking design is that it composes poorly with on chain protocols for trading, lending, derivatives that are referred to collectively as DeFi. The legacy staking implementation starves these applications of liquidity by increasing the risk free rate adaptively. 
It effectively makes DeFi and staking security incompatible. \n\nThe Osmosis team has adopted the idea of Superfluid and Interfluid staking, where assets that are participating in DeFi applications can also be used in proof of stake. This requires tight integration with an enshrined set of DeFi applications and thus is unsuitable for the Cosmos SDK.\n\nIt's also important to note that Interchain Accounts are available in the default IBC implementation and can be used to [rehypothecate](https://www.investopedia.com/terms/h/hypothecation.asp#toc-what-is-rehypothecation) delegations. Thus liquid staking is already possible and these changes merely improve the UX of liquid staking. Centralized exchanges also rehypothecate staked assets, posing challenges for decentralization. This ADR takes the position that adoption of in-protocol liquid staking is the preferable outcome and provides new levers to incentivize decentralization of stake. \n\nThese changes to the staking module have been in development for more than a year and have seen substantial adoption from industry participants who plan to build staking UX. The internal economics team at Informal has also done a review of the impacts of these changes, and this review led to the development of the exempt delegation system. This system provides governance with a tuneable parameter, called the exemption factor, for modulating the risks of the principal-agent problem. \n\n## Decision\n\nWe implement the semi-fungible liquid staking system and exemption factor system within the Cosmos SDK. Though registered as fungible assets, these tokenized shares have extremely limited fungibility: they are fungible only within the specific delegation record that was created when the shares were tokenized. These assets can be used for OTC trades but composability with DeFi is limited. 
The primary expected use case is improving the user experience of liquid staking providers.\n\nA new governance parameter is introduced that defines the ratio of exempt to issued tokenized shares. This is called the exemption factor. A larger exemption factor allows more tokenized shares to be issued for a smaller amount of exempt delegations. If governance is comfortable with how the liquid staking market is evolving, it makes sense to increase this value.\n\nMin self delegation is removed from the staking system with the expectation that it will be replaced by the exempt delegations system. The exempt delegation system allows multiple accounts to demonstrate economic alignment with the validator operator as team members, partners, etc. without co-mingling funds. Delegation exemption will likely be required to grow the validators' business under widespread adoption of liquid staking once governance has adjusted the exemption factor.\n\nWhen shares are tokenized, the underlying shares are transferred to a module account and rewards go to the module account for the TokenizedShareRecord. \n\nThere is no longer a mechanism to override the validator's vote for TokenizedShares.\n\n\n### `MsgTokenizeShares`\n\nThe MsgTokenizeShares message is used to tokenize delegated tokens. This message can be executed by any delegator who has a positive amount of delegation; after execution, the specified amount of delegation is removed from the account and share tokens are issued in its place. Share tokens are denominated in the validator and record id of the underlying delegation.\n\nA user may tokenize some or all of their delegation.\n\nThey will receive shares with the denom of `cosmosvaloper1xxxx/5` where 5 is the record id for the validator operator.\n\nMsgTokenizeShares fails if the account is a VestingAccount. Users will have to move vested tokens to a new account and endure the unbonding period. We view this as an acceptable tradeoff vs. 
the complex bookkeeping required to track vested tokens.\n\nThe total amount of outstanding tokenized shares for the validator is checked against the sum of exempt delegations multiplied by the exemption factor. If the tokenized shares exceed this limit, execution fails.\n\nMsgTokenizeSharesResponse provides the number of tokens generated and their denom.\n\n\n### `MsgRedeemTokensforShares`\n\nThe MsgRedeemTokensforShares message is used to redeem the delegation from share tokens. This message can be executed by any user who owns share tokens. After execution, the delegations are restored to the user's account.\n\n### `MsgTransferTokenizeShareRecord`\n\nThe MsgTransferTokenizeShareRecord message is used to transfer the ownership of rewards generated from the tokenized amount of delegation. The tokenize share record is created when a user tokenizes their delegation and deleted when the full amount of share tokens is redeemed.\n\nThis is designed to work with liquid staking designs that do not redeem the tokenized shares and may instead want to keep the shares tokenized.\n\n\n### `MsgExemptDelegation`\n\nThe MsgExemptDelegation message is used to exempt a delegation to a validator. If the exemption factor is greater than 0, this will allow more delegation shares to be issued from the validator.\n\nThis design allows the chain to force an amount of self-delegation by validators participating in liquid staking schemes.\n\n## Consequences\n\n### Backwards Compatibility\n\nBy setting the exemption factor to zero, this module works like legacy staking. The only substantial change is the removal of min-self-bond; without any tokenized shares, there is no incentive to exempt delegation. \n\n### Positive\n\nThis approach should enable integration with liquid staking providers and an improved user experience. It provides a pathway to security under non-exponential issuance policies in the baseline staking module." 
+ }, + { + "number": 62, + "filename": "adr-062-collections-state-layer.md", + "title": "ADR 062: Collections, a simplified storage layer for cosmos-sdk modules", + "content": "# ADR 062: Collections, a simplified storage layer for cosmos-sdk modules\n\n## Changelog\n\n* 30/11/2022: PROPOSED\n\n## Status\n\nPROPOSED - Implemented\n\n## Abstract\n\nWe propose a simplified module storage layer which leverages golang generics to allow module developers to handle module\nstorage in a simple and straightforward manner, whilst offering safety, extensibility and standardization.\n\n## Context\n\nModule developers are forced into manually implementing storage functionalities in their modules; those functionalities include\nbut are not limited to:\n\n* Defining key to bytes formats.\n* Defining value to bytes formats.\n* Defining secondary indexes.\n* Defining query methods to expose outside to deal with storage.\n* Defining local methods to deal with storage writing.\n* Dealing with genesis imports and exports.\n* Writing tests for all the above.\n\n\nThis brings in a lot of problems:\n\n* It blocks developers from focusing on the most important part: writing business logic.\n* Key to bytes formats are complex and their definition is error-prone, for example:\n * how do I format time to bytes in such a way that bytes are sorted?\n * how do I ensure I don't have namespace collisions when dealing with secondary indexes?\n* The lack of standardization makes life hard for clients, and the problem is exacerbated when it comes to providing proofs for objects present in state. 
Clients are forced to maintain a list of object paths to gather proofs.\n\n### Current Solution: ORM\n\nThe SDK's currently proposed solution to this problem is [ORM](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-055-orm.md).\nWhilst ORM offers a lot of good functionality aimed at solving these specific problems, it has some downsides:\n\n* It requires migrations.\n* It uses the newest protobuf golang API, whilst the SDK still mainly uses gogoproto. \n* Integrating ORM into a module would require the developer to deal with two different golang frameworks (golang protobuf + gogoproto) representing the same API objects.\n* It has a high learning curve, even for simple storage layers, as it requires developers to have knowledge around protobuf options, custom cosmos-sdk storage extensions, and tooling download. Then after this they still need to learn the code-generated API.\n\n### CosmWasm Solution: cw-storage-plus\n\nThe collections API takes inspiration from [cw-storage-plus](https://docs.cosmwasm.com/docs/1.0/smart-contracts/state/cw-plus/),\nwhich has proven to be a powerful tool for dealing with storage in CosmWasm contracts.\nIt's simple, does not require extra tooling, and makes it easy to deal with complex storage structures (indexes, snapshots, etc).\nThe API is straightforward and explicit.\n\n## Decision\n\nWe propose to port the `collections` API, whose implementation lives in [NibiruChain/collections](https://github.com/NibiruChain/collections), to cosmos-sdk.\n\nCollections implements five different storage handler types:\n\n* `Map`: which deals with simple `key=>object` mappings.\n* `KeySet`: which acts as a `Set` and only retains keys and no object (usecase: allow-lists).\n* `Item`: which always contains only one object (usecase: Params)\n* `Sequence`: which implements a simple always increasing number (usecase: Nonces)\n* `IndexedMap`: builds on top of `Map` and `KeySet` and allows creating relationships with `Objects` and 
`Objects` secondary keys.\n\nAll the collection APIs build on top of the simple `Map` type.\n\nCollections is fully generic, meaning that anything can be used as `Key` and `Value`. It can be a protobuf object or not.\n\nCollection types, in fact, delegate the duty of serialization of keys and values to secondary collections API components called `ValueEncoders` and `KeyEncoders`.\n\n`ValueEncoders` take care of converting a value to bytes (relevant only for `Map`) and offer a plug-and-play layer which allows us to change how we encode objects, \nwhich is relevant for swapping serialization frameworks and enhancing performance.\n`Collections` already comes with default `ValueEncoders`, specifically for: protobuf objects, special SDK types (sdk.Int, sdk.Dec).\n\n`KeyEncoders` take care of converting keys to bytes; `collections` already comes with some default `KeyEncoders` for some primitive golang types\n(uint64, string, time.Time, ...) and some widely used sdk types (sdk.Acc/Val/ConsAddress, sdk.Int/Dec, ...).\nThese default implementations also offer safety around proper lexicographic ordering and namespace collisions.\n\nExamples of the collections API can be found here:\n\n* introduction: https://github.com/NibiruChain/collections/tree/main/examples\n* usage in nibiru: [x/oracle](https://github.com/NibiruChain/nibiru/blob/master/x/oracle/keeper/keeper.go#L32), [x/perp](https://github.com/NibiruChain/nibiru/blob/master/x/perp/keeper/keeper.go#L31)\n* cosmos-sdk's x/staking migrated: https://github.com/testinginprod/cosmos-sdk/pull/22\n\n\n## Consequences\n\n### Backwards Compatibility\n\nThe design of `ValueEncoders` and `KeyEncoders` allows modules to retain the same `byte(key)=>byte(value)` mappings, making\nthe upgrade to the new storage layer non-state breaking.\n\n\n### Positive\n\n* ADR aimed at removing code from the SDK rather than adding it. 
Migrating just `x/staking` to collections would yield a net decrease in LOC (even considering the addition of collections itself).\n* Simplifies and standardizes storage layers across modules in the SDK.\n* Does not require dealing with protobuf.\n* It's pure golang code.\n* Generalization over `KeyEncoders` and `ValueEncoders` allows us to not tie ourselves to the data serialization framework.\n* `KeyEncoders` and `ValueEncoders` can be extended to provide schema reflection.\n\n### Negative\n\n* Golang generics are not as battle-tested as other Golang features, despite being used in production right now.\n* Collection type instantiation needs to be improved.\n\n### Neutral\n\n## Further Discussions\n\n* Automatic genesis import/export (not implemented because of API breakage)\n* Schema reflection\n\n\n## References" + }, + { + "number": 63, + "filename": "adr-063-core-module-api.md", + "title": "ADR 063: Core Module API", + "content": "# ADR 063: Core Module API\n\n## Changelog\n\n* 2022-08-18 First Draft\n* 2022-12-08 First Draft\n* 2023-01-24 Updates\n\n## Status\n\nACCEPTED Partially Implemented\n\n## Abstract\n\nA new core API is proposed as a way to develop cosmos-sdk applications that will eventually replace the existing\n`AppModule` and `sdk.Context` frameworks with a set of core services and extension interfaces. 
This core API aims to:\n\n* be simpler\n* be more extensible\n* be more stable than the current framework\n* enable deterministic events and queries\n* support event listeners\n* support [ADR 033: Protobuf-based Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md) clients.\n\n## Context\n\nHistorically modules have exposed their functionality to the framework via the `AppModule` and `AppModuleBasic`\ninterfaces, which have the following shortcomings:\n\n* both `AppModule` and `AppModuleBasic` need to be defined and registered, which is counter-intuitive\n* apps need to implement the full interfaces, even parts they don't need (although there are workarounds for this)\n* interface methods depend heavily on unstable third party dependencies, in particular Comet\n* legacy required methods have littered these interfaces for far too long\n\nIn order to interact with the state machine, modules have needed to do a combination of these things:\n\n* get store keys from the app\n* call methods on `sdk.Context`, which contains more or less the full set of capabilities available to modules.\n\nBy isolating all the state machine functionality into `sdk.Context`, the set of functionalities available to\nmodules is tightly coupled to this type. If there are changes to upstream dependencies (such as Comet)\nor new functionalities are desired (such as alternate store types), the changes need to impact `sdk.Context` and all\nconsumers of it (basically all modules). Also, all modules now receive `context.Context` and need to convert these\nto `sdk.Context`'s with a non-ergonomic unwrapping function.\n\nAny breaking changes to these interfaces, such as ones imposed by third-party dependencies like Comet, have the\nside effect of forcing all modules in the ecosystem to update in lock-step. This means it is almost impossible to have\na version of the module which can be run with 2 or 3 different versions of the SDK or 2 or 3 different versions of\nanother module. 
This lock-step coupling slows down overall development within the ecosystem and causes updates to\ncomponents to be delayed longer than they would if things were more stable and loosely coupled.\n\n## Decision\n\nThe `core` API proposes a set of core APIs that modules can rely on to interact with the state machine and expose their\nfunctionalities to it that are designed in a principled way such that:\n\n* tight coupling of dependencies and unrelated functionalities is minimized or eliminated\n* APIs can have long-term stability guarantees\n* the SDK framework is extensible in a safe and straightforward way\n\nThe design principles of the core API are as follows:\n\n* everything that a module wants to interact with in the state machine is a service\n* all services coordinate state via `context.Context` and don't try to recreate the \"bag of variables\" approach of `sdk.Context`\n* all independent services are isolated in independent packages with minimal APIs and minimal dependencies\n* the core API should be minimalistic and designed for long-term support (LTS)\n* a \"runtime\" module will implement all the \"core services\" defined by the core API and can handle all module\n functionalities exposed by core extension interfaces\n* other non-core and/or non-LTS services can be exposed by specific versions of runtime modules or other modules \nfollowing the same design principles, this includes functionality that interacts with specific non-stable versions of\nthird party dependencies such as Comet\n* the core API doesn't implement *any* functionality, it just defines types\n* go stable API compatibility guidelines are followed: https://go.dev/blog/module-compatibility\n\nA \"runtime\" module is any module which implements the core functionality of composing an ABCI app, which is currently\nhandled by `BaseApp` and the `ModuleManager`. 
Runtime modules which implement the core API are *intentionally* separate\nfrom the core API in order to enable more parallel versions and forks of the runtime module than is possible with the\nSDK's current tightly coupled `BaseApp` design while still allowing for a high degree of composability and\ncompatibility.\n\nModules which are built only against the core API don't need to know anything about which version of runtime,\n`BaseApp` or Comet in order to be compatible. Modules from the core mainline SDK could be easily composed\nwith a forked version of runtime with this pattern.\n\nThis design is intended to enable matrices of compatible dependency versions. Ideally a given version of any module\nis compatible with multiple versions of the runtime module and other compatible modules. This will allow dependencies\nto be selectively updated based on battle-testing. More conservative projects may want to update some dependencies\nslower than more fast moving projects.\n\n### Core Services\n\nThe following \"core services\" are defined by the core API. All valid runtime module implementations should provide\nimplementations of these services to modules via both [dependency injection](./adr-057-app-wiring.md) and\nmanual wiring. The individual services described below are all bundled in a convenient `appmodule.Service`\n\"bundle service\" so that for simplicity modules can declare a dependency on a single service.\n\n#### Store Services\n\nStore services will be defined in the `cosmossdk.io/core/store` package.\n\nThe generic `store.KVStore` interface is the same as current SDK `KVStore` interface. Store keys have been refactored\ninto store services which, instead of expecting the context to know about stores, invert the pattern and allow\nretrieving a store from a generic context. 
There are three store services for the three types of currently supported\nstores - regular kv-store, memory, and transient:\n\n```go\ntype KVStoreService interface {\n OpenKVStore(context.Context) KVStore\n}\n\ntype MemoryStoreService interface {\n OpenMemoryStore(context.Context) KVStore\n}\ntype TransientStoreService interface {\n OpenTransientStore(context.Context) KVStore\n}\n```\n\nModules can use these services like this:\n\n```go\nfunc (k msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {\n store := k.kvStoreSvc.OpenKVStore(ctx)\n}\n```\n\nJust as with the current runtime module implementation, modules will not need to explicitly name these store keys,\nbut rather the runtime module will choose an appropriate name for them and modules just need to request the\ntype of store they need in their dependency injection (or manual) constructors.\n\n#### Event Service\n\nThe event `Service` will be defined in the `cosmossdk.io/core/event` package.\n\nThe event `Service` allows modules to emit typed and legacy untyped events:\n\n```go\npackage event\n\ntype Service interface {\n // EmitProtoEvent emits events represented as a protobuf message (as described in ADR 032).\n //\n // Callers SHOULD assume that these events may be included in consensus. 
These events\n // MUST be emitted deterministically and adding, removing or changing these events SHOULD\n // be considered state-machine breaking.\n EmitProtoEvent(ctx context.Context, event protoiface.MessageV1) error\n\n // EmitKVEvent emits an event based on an event and kv-pair attributes.\n //\n // These events will not be part of consensus and adding, removing or changing these events is\n // not a state-machine breaking change.\n EmitKVEvent(ctx context.Context, eventType string, attrs ...KVEventAttribute) error\n\n // EmitProtoEventNonConsensus emits events represented as a protobuf message (as described in ADR 032), without\n // including it in blockchain consensus.\n //\n // These events will not be part of consensus and adding, removing or changing events is\n // not a state-machine breaking change.\n EmitProtoEventNonConsensus(ctx context.Context, event protoiface.MessageV1) error\n}\n```\n\nTyped events emitted with `EmitProto` should be assumed to be part of blockchain consensus (whether they are part of\nthe block or app hash is left to the runtime to specify).\n\nEvents emitted by `EmitKVEvent` and `EmitProtoEventNonConsensus` are not considered to be part of consensus and cannot be observed\nby other modules. 
If there is a client-side need to add events in patch releases, these methods can be used.\n\n#### Logger\n\nA logger (`cosmossdk.io/log`) must be supplied using `depinject`, and will\nbe made available for modules to use via `depinject.In`.\nModules using it should follow the current pattern in the SDK by adding the module name before using it.\n\n```go\ntype ModuleInputs struct {\n depinject.In\n\n Logger log.Logger\n}\n\nfunc ProvideModule(in ModuleInputs) ModuleOutputs {\n keeper := keeper.NewKeeper(\n in.Logger,\n )\n}\n\nfunc NewKeeper(logger log.Logger) Keeper {\n return Keeper{\n logger: logger.With(log.ModuleKey, \"x/\"+types.ModuleName),\n }\n}\n```\n\n### Core `AppModule` extension interfaces\n\n\nModules will provide their core services to the runtime module via extension interfaces built on top of the\n`cosmossdk.io/core/appmodule.AppModule` tag interface. This tag interface requires only two empty methods which\nallow `depinject` to identify implementers as `depinject.OnePerModule` types and as app module implementations:\n\n```go\ntype AppModule interface {\n depinject.OnePerModuleType\n\n // IsAppModule is a dummy method to tag a struct as implementing an AppModule.\n IsAppModule()\n}\n```\n\nOther core extension interfaces will be defined in `cosmossdk.io/core` and should be supported by valid runtime\nimplementations.\n\n#### `MsgServer` and `QueryServer` registration\n\n`MsgServer` and `QueryServer` registration is done by implementing the `HasServices` extension interface:\n\n```go\ntype HasServices interface {\n\tAppModule\n\n\tRegisterServices(grpc.ServiceRegistrar)\n}\n\n```\n\nBecause of the `cosmos.msg.v1.service` protobuf option, required for `Msg` services, the same `ServiceRegistrar` can be\nused to register both `Msg` and query services.\n\n#### Genesis\n\nThe genesis `Handler` functions - `DefaultGenesis`, `ValidateGenesis`, `InitGenesis` and `ExportGenesis` - are specified\nagainst the `GenesisSource` and `GenesisTarget` interfaces which 
will abstract over genesis sources which may be a single\nJSON object or collections of JSON objects that can be efficiently streamed.\n\n```go\n// GenesisSource is a source for genesis data in JSON format. It may abstract over a\n// single JSON object or separate files for each field in a JSON object that can\n// be streamed over. Modules should open a separate io.ReadCloser for each field that\n// is required. When fields represent arrays they can efficiently be streamed\n// over. If there is no data for a field, this function should return nil, nil. It is\n// important that the caller closes the reader when done with it.\ntype GenesisSource = func(field string) (io.ReadCloser, error)\n\n// GenesisTarget is a target for writing genesis data in JSON format. It may\n// abstract over a single JSON object or JSON in separate files that can be\n// streamed over. Modules should open a separate io.WriteCloser for each field\n// and should prefer writing fields as arrays when possible to support efficient\n// iteration. It is important that the caller closes the writer AND checks the error\n// when done with it. 
It is expected that a stream of JSON data is written\n// to the writer.\ntype GenesisTarget = func(field string) (io.WriteCloser, error)\n```\n\nAll genesis objects for a given module are expected to conform to the semantics of a JSON object.\nEach field in the JSON object should be read and written separately to support streaming genesis.\nThe [ORM](./adr-055-orm.md) and [collections](./adr-062-collections-state-layer.md) both support\nstreaming genesis and modules using these frameworks generally do not need to write any manual\ngenesis code.\n\nTo support genesis, modules should implement the `HasGenesis` extension interface:\n\n```go\ntype HasGenesis interface {\n\tAppModule\n\n\t// DefaultGenesis writes the default genesis for this module to the target.\n\tDefaultGenesis(GenesisTarget) error\n\n\t// ValidateGenesis validates the genesis data read from the source.\n\tValidateGenesis(GenesisSource) error\n\n\t// InitGenesis initializes module state from the genesis source.\n\tInitGenesis(context.Context, GenesisSource) error\n\n\t// ExportGenesis exports module state to the genesis target.\n\tExportGenesis(context.Context, GenesisTarget) error\n}\n```\n\n#### Pre Blockers\n\nModules that have functionality that runs before BeginBlock should implement the `HasPreBlocker` interface:\n\n```go\ntype HasPreBlocker interface {\n AppModule\n PreBlock(context.Context) error\n}\n```\n\n#### Begin and End Blockers\n\nModules that have functionality that runs before transactions (begin blockers) or after transactions\n(end blockers) should implement the `HasBeginBlocker` and/or `HasEndBlocker` interfaces:\n\n```go\ntype HasBeginBlocker interface {\n AppModule\n BeginBlock(context.Context) error\n}\n\ntype HasEndBlocker interface {\n AppModule\n EndBlock(context.Context) error\n}\n```\n\nThe `BeginBlock` and `EndBlock` methods will take a `context.Context`, because:\n\n* most modules don't need Comet information other than `BlockInfo` so we can eliminate 
dependencies on specific\nComet versions\n* for the few modules that need Comet block headers and/or return validator updates, specific versions of the\nruntime module will provide specific functionality for interacting with the specific version(s) of Comet\nsupported\n\nIn order for `BeginBlock`, `EndBlock` and `InitGenesis` to send back validator updates and retrieve full Comet\nblock headers, the runtime module for a specific version of Comet could provide services like this:\n\n```go\ntype ValidatorUpdateService interface {\n SetValidatorUpdates(context.Context, []abci.ValidatorUpdate)\n}\n```\n\nHeader Service defines a way to get header information about a block. This information is generalized for all implementations: \n\n```go \n\ntype Service interface {\n\tGetHeaderInfo(context.Context) Info\n}\n\ntype Info struct {\n\tHeight int64 // Height returns the height of the block\n\tHash []byte // Hash returns the hash of the block header\n\tTime time.Time // Time returns the time of the block\n\tChainID string // ChainID returns the chain ID of the block\n}\n```\n\nComet Service provides a way to get Comet-specific information: \n\n```go\ntype Service interface {\n\tGetCometInfo(context.Context) CometInfo\n}\n\ntype CometInfo struct {\n Evidence []abci.Misbehavior // Misbehavior returns the misbehavior of the block\n\t// ValidatorsHash returns the hash of the validators\n\t// For Comet, it is the hash of the next validators\n\tValidatorsHash []byte\n\tProposerAddress []byte // ProposerAddress returns the address of the block proposer\n\tDecidedLastCommit abci.CommitInfo // DecidedLastCommit returns the last commit info\n}\n```\n\nIf a user would like to provide a module with other information, they would need to implement another service like:\n\n```go\ntype RollKit interface {\n ...\n}\n```\n\nWe know these types will change at the Comet level and that only a very limited set of modules actually need this\nfunctionality, so they are intentionally kept out of core to keep 
core limited to the necessary, minimal set of stable\nAPIs.\n\n#### Remaining Parts of AppModule\n\nThe current `AppModule` framework handles a number of additional concerns which aren't addressed by this core API.\nThese include:\n\n* gas\n* block headers\n* upgrades\n* registration of gogo proto and amino interface types\n* cobra query and tx commands\n* gRPC gateway \n* crisis module invariants\n* simulations\n\nAdditional `AppModule` extension interfaces either inside or outside of core will need to be specified to handle\nthese concerns.\n\nIn the case of gogo proto and amino interfaces, the registration of these generally should happen as early\nas possible during initialization and in [ADR 057: App Wiring](./adr-057-app-wiring.md), protobuf type registration \nhappens before dependency injection (although this could alternatively be done via dedicated DI providers).\n\ngRPC gateway registration should probably be handled by the runtime module, but the core API shouldn't depend on gRPC\ngateway types as 1) we are already using an older version and 2) it's possible the framework can do this registration\nautomatically in the future. 
So for now, the runtime module should probably provide some sort of specific type for doing\nthis registration, e.g.:\n\n```go\ntype GrpcGatewayInfo struct {\n Handlers []GrpcGatewayHandler\n}\n\ntype GrpcGatewayHandler func(ctx context.Context, mux *runtime.ServeMux, client QueryClient) error\n```\n\nwhich modules can return in a provider:\n\n```go\nfunc ProvideGrpcGateway() GrpcGatewayInfo {\n return GrpcGatewayInfo{\n Handlers: []GrpcGatewayHandler{types.RegisterQueryHandlerClient},\n }\n}\n```\n\nCrisis module invariants and simulations are subject to potential redesign and should be managed with types\ndefined in the crisis and simulation modules respectively.\n\nExtension interfaces for CLI commands will be provided via the `cosmossdk.io/client/v2` module and its\n[autocli](./adr-058-auto-generated-cli.md) framework.\n\n#### Example Usage\n\nHere is an example of setting up a hypothetical `foo` v2 module which uses the [ORM](./adr-055-orm.md) for its state\nmanagement and genesis.\n\n```go\n\ntype Keeper struct {\n\tdb orm.ModuleDB\n\tevtSrv event.Service\n}\n\nfunc (k Keeper) RegisterServices(r grpc.ServiceRegistrar) {\n foov1.RegisterMsgServer(r, k)\n foov1.RegisterQueryServer(r, k)\n}\n\nfunc (k Keeper) BeginBlock(context.Context) error {\n\treturn nil\n}\n\nfunc ProvideApp(config *foomodulev2.Module, evtSvc event.Service, db orm.ModuleDB) (*Keeper, appmodule.AppModule) {\n k := &Keeper{db: db, evtSrv: evtSvc}\n return k, k\n}\n```\n\n### Runtime Compatibility Version\n\nThe `core` module will define a static integer var, `cosmossdk.io/core.RuntimeCompatibilityVersion`, which is\na minor version indicator of the core module that is accessible at runtime. Correct runtime module implementations\nshould check this compatibility version and return an error if the current `RuntimeCompatibilityVersion` is higher\nthan the version of the core API that this runtime version can support. 
When new features are added to the `core`\nmodule API that runtime modules are required to support, this version should be incremented.\n\n### Runtime Modules\n\nThe initial `runtime` module will simply be created within the existing `github.com/cosmos/cosmos-sdk` go module\nunder the `runtime` package. This module will be a small wrapper around the existing `BaseApp`, `sdk.Context` and\nmodule manager and follow the Cosmos SDK's existing [0-based versioning](https://0ver.org). To move to semantic\nversioning as well as runtime modularity, new officially supported runtime modules will be created under the\n`cosmossdk.io/runtime` prefix. For each supported consensus engine a semantically-versioned go module should be created\nwith a runtime implementation for that consensus engine. For example:\n\n* `cosmossdk.io/runtime/comet`\n* `cosmossdk.io/runtime/comet/v2`\n* `cosmossdk.io/runtime/rollkit`\n* etc.\n\nThese runtime modules should attempt to be semantically versioned even if the underlying consensus engine is not. Also,\nbecause a runtime module is also a first class Cosmos SDK module, it should have a protobuf module config type.\nA new semantically versioned module config type should be created for each of these runtime modules such that there is a\n1:1 correspondence between the go module and module config type. The same practice should be followed for every \nsemantically versioned Cosmos SDK module as described in [ADR 057: App Wiring](./adr-057-app-wiring.md).\n\nCurrently, `github.com/cosmos/cosmos-sdk/runtime` uses the protobuf config type `cosmos.app.runtime.v1alpha1.Module`.\nWhen we have a standalone v1 comet runtime, we should use a dedicated protobuf module config type such as\n`cosmos.runtime.comet.v1.Module`. 
When we release v2 of the comet runtime (`cosmossdk.io/runtime/comet/v2`) we should\nhave a corresponding `cosmos.runtime.comet.v2.Module` protobuf type.\n\nIn order to make it easier to support different consensus engines that support the same core module functionality as\ndescribed in this ADR, a common go module should be created with shared runtime components. The easiest runtime components\nto share initially are probably the message/query router, inter-module client, service register, and event router.\nThis common runtime module should be created initially as the `cosmossdk.io/runtime/common` go module.\n\nWhen this new architecture has been implemented, the main dependency for a Cosmos SDK module would be\n`cosmossdk.io/core` and that module should be able to be used with any supported consensus engine (to the extent\nthat it does not explicitly depend on consensus engine specific functionality such as Comet's block headers). An\napp developer would then be able to choose which consensus engine they want to use by importing the corresponding\nruntime module. The current `BaseApp` would be refactored into the `cosmossdk.io/runtime/comet` module, the router\ninfrastructure in `baseapp/` would be refactored into `cosmossdk.io/runtime/common` and support ADR 033, and eventually\na dependency on `github.com/cosmos/cosmos-sdk` would no longer be required.\n\nIn short, modules would depend primarily on `cosmossdk.io/core`, and each `cosmossdk.io/runtime/{consensus-engine}`\nwould implement the `cosmossdk.io/core` functionality for that consensus engine.\n\nOne additional piece that would need to be resolved as part of this architecture is how runtimes relate to the server.\nLikely it would make sense to modularize the current server architecture so that it can be used with any runtime even\nif that is based on a consensus engine besides Comet. 
This means that eventually the Comet runtime would need to\nencapsulate the logic for starting Comet and the ABCI app.\n\n### Testing\n\nA mock implementation of all services should be provided in core to allow for unit testing of modules\nwithout needing to depend on any particular version of runtime. Mock services should\nallow tests to observe service behavior or provide a non-production implementation - for instance, in-memory\nstores can be used to mock stores.\n\nFor integration testing, a mock runtime implementation should be provided that allows composing different app modules\ntogether for testing without a dependency on runtime or Comet.\n\n## Consequences\n\n### Backwards Compatibility\n\nEarly versions of runtime modules should aim to support, as much as possible, modules built with the existing\n`AppModule`/`sdk.Context` framework. As the core API is more widely adopted, later runtime versions may choose to\ndrop support and only support the core API plus any runtime module specific APIs (like specific versions of Comet).\n\nThe core module itself should strive to remain at the go semantic version `v1` as long as possible and follow design\nprinciples that allow for strong long-term support (LTS).\n\nOlder versions of the SDK can support modules built against core with adaptors that wrap core `AppModule`\nimplementations in implementations of `AppModule` that conform to that version of the SDK's semantics, as well\nas by providing service implementations by wrapping `sdk.Context`.\n\n### Positive\n\n* better API encapsulation and separation of concerns\n* more stable APIs\n* more framework extensibility\n* deterministic events and queries\n* event listeners\n* inter-module msg and query execution support\n* more explicit support for forking and merging of module versions (including runtime)\n\n### Negative\n\n### Neutral\n\n* modules will need to be refactored to use this API\n* some replacements for `AppModule` functionality still need to be defined 
in follow-ups\n (type registration, commands, invariants, simulations) and this will take additional design work\n\n## Further Discussions\n\n* gas\n* block headers\n* upgrades\n* registration of gogo proto and amino interface types\n* cobra query and tx commands\n* gRPC gateway\n* crisis module invariants\n* simulations\n\n## References\n\n* [ADR 033: Protobuf-based Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md)\n* [ADR 057: App Wiring](./adr-057-app-wiring.md)\n* [ADR 055: ORM](./adr-055-orm.md)\n* [ADR 028: Public Key Addresses](./adr-028-public-key-addresses.md)\n* [Keeping Your Modules Compatible](https://go.dev/blog/module-compatibility)" + }, + { + "number": 64, + "filename": "adr-064-abci-2.0.md", + "title": "ADR 64: ABCI 2.0 Integration (Phase II)", + "content": "# ADR 64: ABCI 2.0 Integration (Phase II)\n\n## Changelog\n\n* 2023-01-17: Initial Draft (@alexanderbez)\n* 2023-04-06: Add upgrading section (@alexanderbez)\n* 2023-04-10: Simplify vote extension state persistence (@alexanderbez)\n* 2023-07-07: Revise vote extension state persistence (@alexanderbez)\n* 2023-08-24: Revise vote extension power calculations and staking interface (@davidterpay)\n\n## Status\n\nACCEPTED\n\n## Abstract\n\nThis ADR outlines the continuation of the efforts to implement ABCI++ in the Cosmos\nSDK outlined in [ADR 060: ABCI 1.0 (Phase I)](adr-060-abci-1.0.md).\n\nSpecifically, this ADR outlines the design and implementation of ABCI 2.0, which\nincludes `ExtendVote`, `VerifyVoteExtension` and `FinalizeBlock`.\n\n## Context\n\nABCI 2.0 continues the promised updates from ABCI++, specifically three additional\nABCI methods that the application can implement in order to gain further control,\ninsight and customization of the consensus process, unlocking many novel use-cases\nthat were previously not possible. 
We describe these three new methods below:\n\n### `ExtendVote`\n\nThis method allows each validator process to extend the pre-commit phase of the\nCometBFT consensus process. Specifically, it allows the application to perform\ncustom business logic that extends the pre-commit vote and supply additional data\nas part of the vote, although they are signed separately by the same key.\n\nThe data, called vote extension, will be broadcast and received together with the\nvote it is extending, and will be made available to the application in the next\nheight. Specifically, the proposer of the next block will receive the vote extensions\nin `RequestPrepareProposal.local_last_commit.votes`.\n\nIf the application does not have vote extension information to provide, it\nreturns a 0-length byte array as its vote extension.\n\n**NOTE**: \n\n* Although each validator process submits its own vote extension, ONLY the *proposer*\n of the *next* block will receive all the vote extensions included as part of the\n pre-commit phase of the previous block. This means only the proposer will\n implicitly have access to all the vote extensions, via `RequestPrepareProposal`,\n and that not all vote extensions may be included, since a validator does not\n have to wait for all pre-commits, only 2/3.\n* The pre-commit vote is signed independently from the vote extension.\n\n### `VerifyVoteExtension`\n\nThis method allows validators to validate the vote extension data attached to\neach pre-commit message it receives. If the validation fails, the whole pre-commit\nmessage will be deemed invalid and ignored by CometBFT.\n\nCometBFT uses `VerifyVoteExtension` when validating a pre-commit vote. 
Specifically,\nfor a pre-commit, CometBFT will:\n\n* Reject the message if it doesn't contain a signed vote AND a signed vote extension\n* Reject the message if the vote's signature OR the vote extension's signature fails to verify\n* Reject the message if `VerifyVoteExtension` was rejected by the app\n\nOtherwise, CometBFT will accept the pre-commit message.\n\nNote, this has important consequences on liveness, i.e., if vote extensions repeatedly\ncannot be verified by correct validators, CometBFT may not be able to finalize\na block even if sufficiently many (+2/3) validators send pre-commit votes for\nthat block. Thus, `VerifyVoteExtension` should be used with special care.\n\nCometBFT recommends that an application that detects an invalid vote extension\nSHOULD accept it in `ResponseVerifyVoteExtension` and ignore it in its own logic.\n\n### `FinalizeBlock`\n\nThis method delivers a decided block to the application. The application must\nexecute the transactions in the block deterministically and update its state\naccordingly. Cryptographic commitments to the block and transaction results,\nreturned via the corresponding parameters in `ResponseFinalizeBlock`, are\nincluded in the header of the next block. 
CometBFT calls it when a new block\nis decided.\n\nIn other words, `FinalizeBlock` encapsulates the current ABCI execution flow of\n`BeginBlock`, one or more `DeliverTx`, and `EndBlock` into a single ABCI method.\nCometBFT will no longer execute requests for these legacy methods and instead\nwill simply call `FinalizeBlock`.\n\n## Decision\n\nWe will discuss changes to the Cosmos SDK to implement ABCI 2.0 in two distinct\nphases, `VoteExtensions` and `FinalizeBlock`.\n\n### `VoteExtensions`\n\nSimilar to `PrepareProposal` and `ProcessProposal`, we propose to introduce\ntwo new handlers that an application can implement in order to provide and verify\nvote extensions.\n\nWe propose the following new handlers for applications to implement:\n\n```go\ntype ExtendVoteHandler func(sdk.Context, abci.RequestExtendVote) abci.ResponseExtendVote\ntype VerifyVoteExtensionHandler func(sdk.Context, abci.RequestVerifyVoteExtension) abci.ResponseVerifyVoteExtension\n```\n\nAn ephemeral context and state will be supplied to both handlers. The\ncontext will contain relevant metadata such as the block height and block hash.\nThe state will be a cached version of the committed state of the application and\nwill be discarded after the execution of the handler; this means that both handlers\nget a fresh state view and no changes made to it will be written.\n\nIf an application decides to implement `ExtendVoteHandler`, it must return a\nnon-nil `ResponseExtendVote.VoteExtension`.\n\nRecall, an implementation of `ExtendVoteHandler` does NOT need to be deterministic;\nhowever, given a set of vote extensions, `VerifyVoteExtensionHandler` must be\ndeterministic, otherwise the chain may suffer from liveness faults. 
In addition,\nrecall CometBFT proceeds in rounds for each height, so if a decision cannot be\nmade about a block proposal at a given height, CometBFT will proceed to the\nnext round and thus will execute `ExtendVote` and `VerifyVoteExtension` again for\nthe new round for each validator until 2/3 valid pre-commits can be obtained.\n\nGiven the broad scope of potential implementations and use-cases of vote extensions,\nand how to verify them, most applications should choose to implement the handlers\nthrough a single handler type, which can have any number of dependencies injected\nsuch as keepers. In addition, this handler type could contain some notion of\nvolatile vote extension state management which would assist in vote extension\nverification. This state management could be ephemeral or could be some form of\non-disk persistence.\n\nExample:\n\n```go\n// VoteExtensionHandler implements an Oracle vote extension handler.\ntype VoteExtensionHandler struct {\n\tcdc Codec\n\tmk MyKeeper\n\tstate VoteExtState // This could be a map or a DB connection object\n}\n\n// ExtendVoteHandler can do something with h.mk and possibly h.state to create\n// a vote extension, such as fetching a series of prices for supported assets.\nfunc (h VoteExtensionHandler) ExtendVoteHandler(ctx sdk.Context, req abci.RequestExtendVote) abci.ResponseExtendVote {\n\tprices := GetPrices(ctx, h.mk.Assets())\n\tbz, err := EncodePrices(h.cdc, prices)\n\tif err != nil {\n\t\tpanic(fmt.Errorf(\"failed to encode prices for vote extension: %w\", err))\n\t}\n\n\t// store our vote extension at the given height\n\t//\n\t// NOTE: Vote extensions can be overridden since we can timeout in a round.\n\tSetPrices(h.state, req, bz)\n\n\treturn abci.ResponseExtendVote{VoteExtension: bz}\n}\n\n// VerifyVoteExtensionHandler can do something with h.state and req to verify\n// the req.VoteExtension field, such as ensuring the provided oracle prices are\n// within some valid range of our prices.\nfunc (h 
VoteExtensionHandler) VerifyVoteExtensionHandler(ctx sdk.Context, req abci.RequestVerifyVoteExtension) abci.ResponseVerifyVoteExtension {\n\tprices, err := DecodePrices(h.cdc, req.VoteExtension)\n\tif err != nil {\n\t\tlog(\"failed to decode vote extension\", \"err\", err)\n\t\treturn abci.ResponseVerifyVoteExtension{Status: REJECT}\n\t}\n\n\tif err := ValidatePrices(h.state, req, prices); err != nil {\n\t\tlog(\"failed to validate vote extension\", \"prices\", prices, \"err\", err)\n\t\treturn abci.ResponseVerifyVoteExtension{Status: REJECT}\n\t}\n\n\t// store updated vote extensions at the given height\n\t//\n\t// NOTE: Vote extensions can be overridden since we can timeout in a round.\n\tSetPrices(h.state, req, req.VoteExtension)\n\n\treturn abci.ResponseVerifyVoteExtension{Status: ACCEPT}\n}\n```\n\n#### Vote Extension Propagation & Verification\n\nAs mentioned previously, vote extensions for height `H` are only made available\nto the proposer at height `H+1` during `PrepareProposal`. However, in order to\nmake vote extensions useful, all validators should have access to the agreed upon\nvote extensions at height `H` during `H+1`.\n\nSince CometBFT includes all the vote extension signatures in `RequestPrepareProposal`,\nwe propose that the proposing validator manually \"inject\" the vote extensions\nalong with their respective signatures via a special transaction, `VoteExtsTx`,\ninto the block proposal during `PrepareProposal`. The `VoteExtsTx` will be\npopulated with a single `ExtendedCommitInfo` object which is received directly\nfrom `RequestPrepareProposal`.\n\nFor convention, the `VoteExtsTx` transaction should be the first transaction in\nthe block proposal, although chains can implement their own preferences. 
For\nsafety purposes, we also propose that the proposer itself verify all the vote\nextension signatures it receives in `RequestPrepareProposal`.\n\nA validator, upon a `RequestProcessProposal`, will receive the injected `VoteExtsTx`,\nwhich includes the vote extensions along with their signatures. If no such transaction\nexists, the validator MUST REJECT the proposal.\n\nWhen a validator inspects a `VoteExtsTx`, it will evaluate each `SignedVoteExtension`.\nFor each signed vote extension, the validator will generate the signed bytes and\nverify the signature. At least 2/3 valid signatures, based on voting power, must\nbe received in order for the block proposal to be valid; otherwise, the validator\nMUST REJECT the proposal.\n\nIn order to have the ability to validate signatures, `BaseApp` must have access\nto the `x/staking` module, since this module stores an index from consensus\naddress to public key. However, we will avoid a direct dependency on `x/staking`\nand instead rely on an interface. 
In addition, the Cosmos SDK will expose\na default signature verification method which applications can use:\n\n```go\ntype ValidatorStore interface {\n\tGetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error)\n}\n\n// ValidateVoteExtensions is a function that an application can execute in\n// ProcessProposal to verify vote extension signatures. Note, app.valStore is\n// a ValidatorStore implementation supplied to BaseApp.\nfunc (app *BaseApp) ValidateVoteExtensions(ctx sdk.Context, currentHeight int64, extCommit abci.ExtendedCommitInfo) error {\n\tvar votingPower, totalVotingPower int64\n\n\tfor _, vote := range extCommit.Votes {\n\t\ttotalVotingPower += vote.Validator.Power\n\n\t\tif !vote.SignedLastBlock || len(vote.VoteExtension) == 0 {\n\t\t\tcontinue\n\t\t}\n\n\t\tvalConsAddr := sdk.ConsAddress(vote.Validator.Address)\n\t\tpubKeyProto, err := app.valStore.GetPubKeyByConsAddr(ctx, valConsAddr)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"failed to get public key for validator %s: %w\", valConsAddr, err)\n\t\t}\n\n\t\tif len(vote.ExtensionSignature) == 0 {\n\t\t\treturn fmt.Errorf(\"received a non-empty vote extension with empty signature for validator %s\", valConsAddr)\n\t\t}\n\n\t\tcmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"failed to convert validator %X public key: %w\", valConsAddr, err)\n\t\t}\n\n\t\tcve := cmtproto.CanonicalVoteExtension{\n\t\t\tExtension: vote.VoteExtension,\n\t\t\tHeight: currentHeight - 1, // the vote extension was signed in the previous height\n\t\t\tRound: int64(extCommit.Round),\n\t\t\tChainId: app.GetChainID(),\n\t\t}\n\n\t\textSignBytes, err := cosmosio.MarshalDelimited(&cve)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"failed to encode CanonicalVoteExtension: %w\", err)\n\t\t}\n\n\t\tif !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) {\n\t\t\treturn errors.New(\"received vote with invalid signature\")\n\t\t}\n\n\t\tvotingPower += vote.Validator.Power\n\t}\n\n\t// Require at least 2/3 of the total voting power, using cross-multiplication\n\t// to avoid integer division truncation.\n\tif votingPower*3 < totalVotingPower*2 {\n\t\treturn errors.New(\"not enough voting power for the vote extensions\")\n\t}\n\n\treturn nil\n}\n```\n\nOnce at least 2/3 signatures, by voting power, are received and verified, the\nvalidator can use the vote extensions to derive additional data or come to some\ndecision based on the vote extensions.\n\n> NOTE: It is very important to state that neither the vote propagation technique\n> nor the vote extension verification mechanism described above is required for\n> applications to implement. In other words, a proposer is not required to verify\n> and propagate vote extensions along with their signatures, nor are validators\n> required to verify those signatures. An application can implement its own\n> PKI mechanism and use that to sign and verify vote extensions.\n\n#### Vote Extension Persistence\n\nIn certain contexts, it may be useful or necessary for applications to persist\ndata derived from vote extensions. In order to facilitate this use case, we propose\nto allow app developers to define a pre-Blocker hook which will be called\nat the very beginning of `FinalizeBlock`, i.e. before `BeginBlock` (see below).\n\nNote, we cannot allow applications to directly write to the application state\nduring `ProcessProposal` because during replay, CometBFT will NOT call `ProcessProposal`,\nwhich would result in an incomplete state view.\n\n```go\nfunc (a MyApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) error {\n\tvoteExts := GetVoteExtensions(ctx, req.Txs)\n\n\t// Process and perform some compute on vote extensions, storing any resulting\n\t// state.\n\tif err := a.processVoteExtensions(ctx, voteExts); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n```\n\n### `FinalizeBlock`\n\nThe existing ABCI methods `BeginBlock`, `DeliverTx`, and `EndBlock` have existed\nsince the dawn of ABCI-based applications. Thus, applications, tooling, and developers\nhave grown used to these methods and their use-cases. 
Specifically, `BeginBlock`\nand `EndBlock` have grown to be pretty integral and powerful within ABCI-based\napplications. E.g. an application might want to run distribution and inflation\nrelated operations prior to executing transactions and then apply staking related\nchanges after executing all transactions.\n\nWe propose to keep `BeginBlock` and `EndBlock` within the SDK's core module\ninterfaces only so application developers can continue to build against existing\nexecution flows. However, we will remove `BeginBlock`, `DeliverTx` and `EndBlock`\nfrom the SDK's `BaseApp` implementation and thus the ABCI surface area.\n\nWhat will then exist is a single `FinalizeBlock` execution flow. Specifically, in\n`FinalizeBlock` we will execute the application's `BeginBlock`, followed by\nexecution of all the transactions, finally followed by execution of the application's\n`EndBlock`.\n\nNote, we will still keep the existing transaction execution mechanics within\n`BaseApp`, but all notions of `DeliverTx` will be removed, i.e. `deliverState`\nwill be replaced with `finalizeState`, which will be committed on `Commit`.\n\nHowever, there are parameters and fields that exist in the existing\n`BeginBlock` and `EndBlock` ABCI types, such as votes that are used in distribution\nand byzantine validators used in evidence handling. These parameters exist in the\n`FinalizeBlock` request type, and will need to be passed to the application's\nimplementations of `BeginBlock` and `EndBlock`.\n\nThis means the Cosmos SDK's core module interfaces will need to be updated to\nreflect these parameters. The easiest and most straightforward way to achieve\nthis is to just pass `RequestFinalizeBlock` to `BeginBlock` and `EndBlock`.\nAlternatively, we can create dedicated proxy types in the SDK that reflect these\nlegacy ABCI types, e.g. `LegacyBeginBlockRequest` and `LegacyEndBlockRequest`. 
Or,\nwe can come up with new types and names altogether.\n\n```go\nfunc (app *BaseApp) FinalizeBlock(req abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) {\n\tctx := ...\n\n\tif app.preBlocker != nil {\n\t\tctx := app.finalizeBlockState.ctx\n\t\trsp, err := app.preBlocker(ctx, req)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif rsp.ConsensusParamsChanged {\n\t\t\tapp.finalizeBlockState.ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))\n\t\t}\n\t}\n\n\tbeginBlockResp, err := app.beginBlock(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tappendBlockEventAttr(beginBlockResp.Events, \"begin_block\")\n\n\ttxExecResults := make([]abci.ExecTxResult, 0, len(req.Txs))\n\tfor _, tx := range req.Txs {\n\t\tresult := app.runTx(runTxModeFinalize, tx)\n\t\ttxExecResults = append(txExecResults, result)\n\t}\n\n\tendBlockResp, err := app.endBlock(app.finalizeBlockState.ctx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tappendBlockEventAttr(endBlockResp.Events, \"end_block\")\n\n\treturn &abci.ResponseFinalizeBlock{\n\t\tTxResults: txExecResults,\n\t\tEvents: joinEvents(beginBlockResp.Events, endBlockResp.Events),\n\t\tValidatorUpdates: endBlockResp.ValidatorUpdates,\n\t\tConsensusParamUpdates: endBlockResp.ConsensusParamUpdates,\n\t\tAppHash: nil,\n\t}, nil\n}\n```\n\n#### Events\n\nMany tools, indexers and ecosystem libraries rely on the existence of `BeginBlock`\nand `EndBlock` events. Since CometBFT now only exposes `FinalizeBlockEvents`, we\nfind that it will still be useful for these clients and tools to query for\nand rely on existing events, especially since applications will still define\n`BeginBlock` and `EndBlock` implementations.\n\nIn order to facilitate existing event functionality, we propose that all `BeginBlock`\nand `EndBlock` events have a dedicated `EventAttribute` with `key=block` and\n`value=begin_block|end_block`. The `EventAttribute` will be appended to each event\nin both `BeginBlock` and `EndBlock` events. 
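To make the tagging concrete, here is a minimal, dependency-free sketch of how such an `appendBlockEventAttr` helper could behave. The `Event`/`EventAttribute` stand-in types and the helper itself are illustrative only, not an existing SDK or CometBFT API:\n\n```go\npackage main\n\nimport \"fmt\"\n\n// Minimal stand-ins for the ABCI event types, for illustration only.\ntype EventAttribute struct {\n\tKey, Value string\n}\n\ntype Event struct {\n\tType       string\n\tAttributes []EventAttribute\n}\n\n// appendBlockEventAttr tags every event with a `block` attribute so that\n// clients can still distinguish BeginBlock events from EndBlock events once\n// they are merged into FinalizeBlockEvents. mode is \"begin_block\" or\n// \"end_block\".\nfunc appendBlockEventAttr(events []Event, mode string) []Event {\n\tfor i := range events {\n\t\tevents[i].Attributes = append(events[i].Attributes, EventAttribute{\n\t\t\tKey:   \"block\",\n\t\t\tValue: mode,\n\t\t})\n\t}\n\treturn events\n}\n\nfunc main() {\n\tevts := []Event{{Type: \"mint\"}, {Type: \"distribution\"}}\n\tevts = appendBlockEventAttr(evts, \"begin_block\")\n\tfmt.Println(evts[0].Attributes[0].Key, evts[0].Attributes[0].Value)\n}\n```\n\nA client filtering `FinalizeBlockEvents` can then select `block=begin_block` or `block=end_block` to recover the legacy event grouping.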
\n\n\n### Upgrading\n\nCometBFT defines a consensus parameter, [`VoteExtensionsEnableHeight`](https://github.com/cometbft/cometbft/blob/v0.38.0-alpha.1/spec/abci/abci%2B%2B_app_requirements.md#abciparamsvoteextensionsenableheight),\nwhich specifies the height at which vote extensions are enabled and **required**.\nIf the value is set to zero, which is the default, then vote extensions are\ndisabled and an application is not required to implement and use vote extensions.\n\nHowever, if the value `H` is positive, at all heights greater than the configured\nheight `H` vote extensions must be present (even if empty). When the configured\nheight `H` is reached, `PrepareProposal` will not include vote extensions yet,\nbut `ExtendVote` and `VerifyVoteExtension` will be called. Then, when reaching\nheight `H+1`, `PrepareProposal` will include the vote extensions from height `H`.\n\nIt is very important to note, for all heights after H:\n\n* Vote extensions CANNOT be disabled\n* They are mandatory, i.e. all pre-commit messages sent MUST have an extension\n attached (even if empty)\n\nWhen an application updates to the Cosmos SDK version with CometBFT v0.38 support,\nin the upgrade handler it must ensure to set the consensus parameter\n`VoteExtensionsEnableHeight` to the correct value. E.g. if an application is set\nto perform an upgrade at height `H`, then the value of `VoteExtensionsEnableHeight`\nshould be set to any value `>=H+1`. This means that at the upgrade height, `H`,\nvote extensions will not be enabled yet, but at height `H+1` they will be enabled.\n\n## Consequences\n\n### Backwards Compatibility\n\nABCI 2.0 is naturally not backwards compatible with prior versions of the Cosmos SDK\nand CometBFT. 
For example, a consensus engine that sends `RequestFinalizeBlock`\nto an application that does not speak ABCI 2.0 will naturally fail.\n\nIn addition, `BeginBlock`, `DeliverTx` and `EndBlock` will be removed from the\napplication ABCI interfaces, and their inputs and outputs will be modified\nin the module interfaces.\n\n### Positive\n\n* `BeginBlock` and `EndBlock` semantics remain, so burden on application developers\n should be limited.\n* Less communication overhead as multiple ABCI requests are condensed into a single\n request.\n* Sets the groundwork for optimistic execution.\n* Vote extensions allow for an entirely new set of application primitives to be\n developed, such as in-process price oracles and encrypted mempools.\n\n### Negative\n\n* Some existing Cosmos SDK core APIs may need to be modified and thus broken.\n* Signature verification in `ProcessProposal` of 100+ vote extension signatures\n will add significant performance overhead to `ProcessProposal`. Granted, the\n signature verification process can happen concurrently using an error group\n with `GOMAXPROCS` goroutines.\n\n### Neutral\n\n* Having to manually \"inject\" vote extensions into the block proposal during\n `PrepareProposal` is an awkward approach and takes up block space unnecessarily.\n* The requirement of `ResetProcessProposalState` can create a footgun for\n application developers if they're not careful, but this is necessary in order\n for applications to be able to commit state from vote extension computation.\n\n## Further Discussions\n\nFuture discussions include design and implementation of ABCI 3.0, which is a\ncontinuation of ABCI++ and the general discussion of optimistic execution.\n\n## References\n\n* [ADR 060: ABCI 1.0 (Phase I)](adr-060-abci-1.0.md)"
  },
  {
    "number": 65,
    "filename": "adr-065-store-v2.md",
    "title": "ADR 065: Store V2",
    "content": "# ADR-065: Store V2\n\n## Changelog\n\n* Feb 14, 2023: Initial Draft (@alexanderbez)\n\n## 
Status\n\nDRAFT\n\n## Abstract\n\nThe storage and state primitives that Cosmos SDK based applications have used have\nby and large not changed since the launch of the inaugural Cosmos Hub. The demands\nand needs of Cosmos SDK based applications, from both developer and client UX\nperspectives, have evolved and outgrown the ecosystem since these primitives\nwere first introduced.\n\nOver time as these applications have gained significant adoption, many critical\nshortcomings and flaws have been exposed in the state and storage primitives of\nthe Cosmos SDK.\n\nIn order to keep up with the evolving demands and needs of both clients and developers,\na major overhaul to these primitives is necessary.\n\n## Context\n\nThe Cosmos SDK provides application developers with various storage primitives\nfor dealing with application state. Specifically, each module contains its own\nmerkle commitment data structure -- an IAVL tree. In this data structure, a module\ncan store and retrieve key-value pairs along with Merkle commitments, i.e. proofs,\nto those key-value pairs indicating that they do or do not exist in the global\napplication state. This data structure is the base layer `KVStore`.\n\nIn addition, the SDK provides abstractions on top of this Merkle data structure.\nNamely, a root multi-store (RMS) is a collection of each module's `KVStore`.\nThrough the RMS, the application can serve queries and provide proofs to clients\nin addition to providing a module access to its own unique `KVStore` through the use\nof `StoreKey`, which is an OCAP primitive.\n\nThere are further layers of abstraction that sit between the RMS and the underlying\nIAVL `KVStore`. A `GasKVStore` is responsible for tracking gas IO consumption for\nstate machine reads and writes. 
A `CacheKVStore` is responsible for providing a\nway to cache reads and buffer writes to make state transitions atomic, e.g.\ntransaction execution or governance proposal execution.\n\nThere are a few critical drawbacks to these layers of abstraction and the overall\ndesign of storage in the Cosmos SDK:\n\n* Since each module has its own IAVL `KVStore`, commitments are not [atomic](https://github.com/cosmos/cosmos-sdk/issues/14625)\n * Note, we can still allow modules to have their own IAVL `KVStore`, but the\n IAVL library will need to support the ability to pass a DB instance as an\n argument to various IAVL APIs.\n* Since IAVL is responsible for both state storage and commitment, running an \n archive node becomes increasingly expensive as disk space grows exponentially.\n* As the size of a network increases, various performance bottlenecks start to\n emerge in many areas such as query performance, network upgrades, state\n migrations, and general application performance.\n* Developer UX is poor as it does not allow application developers to experiment\n with different types of approaches to storage and commitments, along with the\n complications of many layers of abstractions referenced above.\n\nSee the [Storage Discussion](https://github.com/cosmos/cosmos-sdk/discussions/13545) for more information.\n\n## Alternatives\n\nThere was a previous attempt to refactor the storage layer described in [ADR-040](./adr-040-storage-and-smt-state-commitments.md).\nHowever, this approach mainly stems from the shortcomings of IAVL and various performance\nissues around it. 
While there was a (partial) implementation of [ADR-040](./adr-040-storage-and-smt-state-commitments.md),\nit was never adopted for a variety of reasons, such as the reliance on using an\nSMT, which was more in a research phase, and some design choices that couldn't\nbe fully agreed upon, such as the snapshotting mechanism that would result in\nmassive state bloat.\n\n## Decision\n\nWe propose to build upon some of the great ideas introduced in [ADR-040](./adr-040-storage-and-smt-state-commitments.md),\nwhile being a bit more flexible with the underlying implementations and overall\nless intrusive. Specifically, we propose to:\n\n* Separate the concerns of state commitment (**SC**), needed for consensus, and\n state storage (**SS**), needed for state machine and clients.\n* Reduce layers of abstractions necessary between the RMS and underlying stores.\n* Provide atomic module store commitments by providing a batch database object\n to core IAVL APIs.\n* Reduce complexities in the `CacheKVStore` implementation while also improving\n performance[3].\n\nFurthermore, we will keep IAVL as the backing [commitment](https://cryptography.fandom.com/wiki/Commitment_scheme)\nstore for the time being. While we might not fully settle on the use of IAVL in\nthe long term, we do not have strong empirical evidence to suggest a better\nalternative. Given that the SDK provides interfaces for stores, it should be sufficient\nto change the backing commitment store in the future should evidence arise to\nwarrant a better alternative. However, there is promising work being done on IAVL\nthat should result in significant performance improvements [1,2].\n\n### Separating SS and SC\n\nSeparating SS and SC will allow us to optimize for primary use cases\nand access patterns to state. 
Specifically, the SS layer will be responsible for\ndirect access to data in the form of (key, value) pairs, whereas the SC layer (IAVL)\nwill be responsible for committing to data and providing Merkle proofs.\n\nNote, the underlying physical storage database will be the same between both the\nSS and SC layers. So to avoid collisions between (key, value) pairs, both layers\nwill be namespaced.\n\n#### State Commitment (SC)\n\nGiven that the existing solution today acts as both SS and SC, we can simply\nrepurpose it to act solely as the SC layer without any significant changes to\naccess patterns or behavior. In other words, the entire collection of existing\nIAVL-backed module `KVStore`s will act as the SC layer.\n\nHowever, in order for the SC layer to remain lightweight and not duplicate a\nmajority of the data held in the SS layer, we encourage node operators to keep\ntight pruning strategies.\n\n#### State Storage (SS)\n\nIn the RMS, we will expose a *single* `KVStore` backed by the same physical\ndatabase that backs the SC layer. This `KVStore` will be explicitly namespaced\nto avoid collisions and will act as the primary storage for (key, value) pairs.\n\nWhile we will most likely continue the use of `cosmos-db`, or some local interface,\nto allow for flexibility and iteration over preferred physical storage backends\nas research and benchmarking continues, we propose to hardcode the use\nof RocksDB as the primary physical storage backend.\n\nSince the SS layer will be implemented as a `KVStore`, it will support the\nfollowing functionality:\n\n* Range queries\n* CRUD operations\n* Historical queries and versioning\n* Pruning\n\nThe RMS will keep track of all buffered writes using a dedicated and internal\n`MemoryListener` for each `StoreKey`. 
For each block height, upon `Commit`, the\nSS layer will write all buffered (key, value) pairs under a [RocksDB user-defined timestamp](https://github.com/facebook/rocksdb/wiki/User-defined-Timestamp-%28Experimental%29) column\nfamily using the block height as the timestamp, which is an unsigned integer.\nThis will allow a client to fetch (key, value) pairs at historical and current\nheights along with making iteration and range queries relatively performant as\nthe timestamp is the key suffix.\n\nNote, we choose not to use a more general approach of allowing any embedded key/value\ndatabase, such as LevelDB or PebbleDB, using height key-prefixed keys to\neffectively version state because most of these databases use variable length\nkeys which would effectively make actions like iteration and range queries less\nperformant.\n\nSince operators might want pruning strategies to differ in SS compared to SC,\ne.g. having a very tight pruning strategy in SC while having a looser pruning\nstrategy for SS, we propose to introduce an additional pruning configuration,\nwith parameters that are identical to what exists in the SDK today, and allow\noperators to control the pruning strategy of the SS layer independently of the\nSC layer.\n\nNote, the SC pruning strategy must be congruent with the operator's state sync\nconfiguration. This is to allow state sync snapshots to execute successfully,\notherwise, a snapshot could be triggered on a height that is not available in SC.\n\n#### State Sync\n\nThe state sync process should be largely unaffected by the separation of the SC\nand SS layers. However, if a node syncs via state sync, the SS layer of the node\nwill not have the state synced height available, since the IAVL import process is\nnot set up in a way that easily allows direct key/value insertion. 
A modification of\nthe IAVL import process would be necessary to facilitate having the state sync\nheight available.\n\nNote, this is not problematic for the state machine itself because when a query\nis made, the RMS will automatically direct the query correctly (see [Queries](#queries)).\n\n#### Queries\n\nTo consolidate the query routing between both the SC and SS layers, we propose to\nhave a notion of a \"query router\" that is constructed in the RMS. This query router\nwill be supplied to each `KVStore` implementation. The query router will route\nqueries to either the SC layer or the SS layer based on a few parameters. If\n`prove: true`, then the query must be routed to the SC layer. Otherwise, if the\nquery height is available in the SS layer, the query will be served from the SS\nlayer. Otherwise, we fall back on the SC layer.\n\nIf no height is provided, the SS layer will assume the latest height. The SS\nlayer will store a reverse index to lookup `LatestVersion -> timestamp(version)`\nwhich is set on `Commit`.\n\n#### Proofs\n\nSince the SS layer is naturally a storage layer only, without any commitments\nto (key, value) pairs, it cannot provide Merkle proofs to clients during queries.\n\nSince the pruning strategy against the SC layer is configured by the operator,\nwe can therefore have the RMS route the query to the SC layer if the version exists and\n`prove: true`. Otherwise, the query will fall back to the SS layer without a proof.\n\nWe could explore the idea of using state snapshots to rebuild an in-memory IAVL\ntree in real time against a version closest to the one provided in the query.\nHowever, it is not clear what the performance implications will be of this approach.\n\n### Atomic Commitment\n\nWe propose to modify the existing IAVL APIs to accept a batch DB object instead\nof relying on an internal batch object in `nodeDB`. 
Since each underlying IAVL\n`KVStore` shares the same DB in the SC layer, this will allow commits to be\natomic.\n\nSpecifically, we propose to:\n\n* Remove the `dbm.Batch` field from `nodeDB`\n* Update the `SaveVersion` method of the `MutableTree` IAVL type to accept a batch object\n* Update the `Commit` method of the `CommitKVStore` interface to accept a batch object\n* Create a batch object in the RMS during `Commit` and pass this object to each\n `KVStore`\n* Write the database batch after all stores have committed successfully\n\nNote, this will require IAVL to be updated to not rely on or assume any batch\nbeing present during `SaveVersion`.\n\n## Consequences\n\nAs a result of a new store V2 package, we should expect to see improved performance\nfor queries and transactions due to the separation of concerns. We should also\nexpect to see improved developer UX around experimentation with commitment schemes\nand storage backends for further performance, in addition to a reduced amount of\nabstraction around KVStores making operations such as caching and state branching\nmore intuitive.\n\nHowever, due to the proposed design, there are drawbacks around providing state\nproofs for historical queries.\n\n### Backwards Compatibility\n\nThis ADR proposes changes to the storage implementation in the Cosmos SDK through\nan entirely new package. 
Interfaces may be borrowed and extended from existing\ntypes that exist in `store`, but no existing implementations or interfaces will\nbe broken or modified.\n\n### Positive\n\n* Improved performance of independent SS and SC layers\n* Reduced layers of abstraction making storage primitives easier to understand\n* Atomic commitments for SC\n* Redesign of storage types and interfaces will allow for greater experimentation\n such as different physical storage backends and different commitment schemes\n for different application modules\n\n### Negative\n\n* Providing proofs for historical state is challenging\n\n### Neutral\n\n* Keeping IAVL as the primary commitment data structure, although drastic\n performance improvements are being made\n\n## Further Discussions\n\n### Module Storage Control\n\nMany modules store secondary indexes that are typically solely used to support\nclient queries, but are actually not needed for the state machine's state\ntransitions. What this means is that these indexes technically have no reason to\nexist in the SC layer at all, as they take up unnecessary space. It is worth\nexploring what an API would look like to allow modules to indicate what (key, value)\npairs they want to be persisted in the SC layer, implicitly indicating the SS\nlayer as well, as opposed to just persisting the (key, value) pair only in the\nSS layer.\n\n### Historical State Proofs\n\nIt is not clear what the importance or demand is within the community of providing\ncommitment proofs for historical state. While solutions can be devised such as\nrebuilding trees on the fly based on state snapshots, it is not clear what the\nperformance implications are for such solutions.\n\n### Physical DB Backends\n\nThis ADR proposes usage of RocksDB to utilize user-defined timestamps as a\nversioning mechanism. 
However, other physical DB backends are available that may\noffer alternative ways to implement versioning while also providing performance\nimprovements over RocksDB. E.g. PebbleDB supports MVCC timestamps as well, but\nwe'll need to explore how PebbleDB handles compaction and state growth over time.\n\n## References\n\n* [1] https://github.com/cosmos/iavl/pull/676\n* [2] https://github.com/cosmos/iavl/pull/664\n* [3] https://github.com/cosmos/cosmos-sdk/issues/14990"
  },
  {
    "number": 68,
    "filename": "adr-068-preblock.md",
    "title": "ADR 068: Preblock",
    "content": "# ADR 068: Preblock\n\n## Changelog\n\n* Sept 13, 2023: Initial Draft\n\n## Status\n\nDRAFT\n\n## Abstract\n\nIntroduce `PreBlock`, which runs before the `BeginBlocker`s of all other modules and may modify consensus parameters; the changes are visible to the state machine logic that follows.\n\n## Context\n\nWhen upgrading to SDK 0.47, the storage format for consensus parameters changed, but in the migration block `ctx.ConsensusParams()` is always `nil` because the new code fails to load the old format. The parameters are supposed to be migrated by the `x/upgrade` module first, but unfortunately the migration happens in the `BeginBlocker` handler, which runs after the `ctx` is initialized.\nWhile trying to solve this, we found that the `x/upgrade` module cannot modify the context to make the consensus parameters visible to the other modules: the context is passed by value, and the SDK team wants to keep it that way, which is good for isolation between modules.\n\n## Alternatives\n\nThe first alternative solution introduced a `MigrateModuleManager`, which currently includes only the `x/upgrade` module; baseapp runs its modules' `BeginBlocker`s before those of the other modules and reloads the context's consensus parameters in between.\n\n## Decision\n\nWe suggest this new lifecycle method.\n\n### `PreBlocker`\n\nThere are two semantics around the new lifecycle method:\n\n* It runs before the `BeginBlocker` of all modules\n* It 
can modify consensus parameters in storage, and signal the caller through the return value.\n\nWhen it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameters in the finalize context:\n\n```go\napp.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams())\n```\n\nThe new `ctx` must be passed to all the other lifecycle methods.\n\n\n## Consequences\n\n### Backwards Compatibility\n\n### Positive\n\n### Negative\n\n### Neutral\n\n## Further Discussions\n\n## Test Cases\n\n## References\n\n* [1] https://github.com/cosmos/cosmos-sdk/issues/16494\n* [2] https://github.com/cosmos/cosmos-sdk/pull/16583\n* [3] https://github.com/cosmos/cosmos-sdk/pull/17421\n* [4] https://github.com/cosmos/cosmos-sdk/pull/17713"
  },
  {
    "number": 70,
    "filename": "adr-070-unordered-account.md",
    "title": "ADR 070: Unordered Transactions",
    "content": "# ADR 070: Unordered Transactions\n\n## Changelog\n\n* Dec 4, 2023: Initial Draft (@yihuang, @tac0turtle, @alexanderbez)\n* Jan 30, 2024: Include section on deterministic transaction encoding\n* Mar 18, 2025: Revise implementation to use Cosmos SDK KV Store and require unique timeouts per-address (@technicallyty)\n* Apr 25, 2025: Add note about rejecting unordered txs with sequence values.\n\n## Status\n\nACCEPTED Not Implemented\n\n## Abstract\n\nWe propose a way to do replay-attack protection without enforcing the order of\ntransactions and without requiring the use of monotonically increasing sequences. Instead, we propose\nthe use of a time-based, ephemeral sequence.\n\n## Context\n\nAccount sequence values serve to prevent replay attacks and ensure transactions from the same sender are included in blocks and executed\nin sequential order. Unfortunately, this makes it difficult to reliably send many concurrent transactions from the\nsame sender. 
Victims of such limitations include IBC relayers and crypto exchanges.\n\n## Decision\n\nWe propose adding a boolean field `unordered` and a `google.protobuf.Timestamp` field `timeout_timestamp` to the transaction body.\n\nUnordered transactions will bypass the traditional account sequence rules and follow the rules described\nbelow, without impacting traditional ordered transactions which will follow the same sequence rules as before.\n\nWe will introduce new storage of time-based, ephemeral unordered sequences using the SDK's existing KV Store library. \nSpecifically, we will leverage the existing x/auth KV store to store the unordered sequences.\n\nWhen an unordered transaction is included in a block, a concatenation of the `timeout_timestamp` and sender’s address bytes\nwill be recorded to state (e.g. `542939323/`). In cases of multi-party signing, one entry per signer\nwill be recorded to state.\n\nNew transactions will be checked against the state to prevent duplicate submissions. To prevent the state from growing indefinitely, we propose the following:\n\n* Define an upper bound for the value of `timeout_timestamp` (e.g. 10 minutes).\n* Add a `PreBlocker` method to `x/auth` that removes state entries with a `timeout_timestamp` earlier than the current block time.\n\n### Transaction Format\n\n```protobuf\nmessage TxBody {\n ...\n \n bool unordered = 4;\n google.protobuf.Timestamp timeout_timestamp = 5;\n}\n```\n\n### Replay Protection\n\nWe facilitate replay protection by storing the unordered sequence in the Cosmos SDK KV store. Upon transaction ingress, we check if the transaction's unordered\nsequence exists in state, or if the TTL value is stale, i.e. before the current block time. If so, we reject it. Otherwise,\nwe add the unordered sequence to the state. This section of the state will belong to the `x/auth` module.\n\nThe state is evaluated during x/auth's `PreBlocker`. 
All transactions with an unordered sequence earlier than the current block time\nwill be deleted.\n\n```go\nfunc (am AppModule) PreBlock(ctx context.Context) (appmodule.ResponsePreBlock, error) {\n\terr := am.accountKeeper.RemoveExpired(sdk.UnwrapSDKContext(ctx))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &sdk.ResponsePreBlock{ConsensusParamsChanged: false}, nil\n}\n```\n\n```golang\npackage keeper\n\nimport (\n\tsdk \"github.com/cosmos/cosmos-sdk/types\"\n\n\t\"cosmossdk.io/collections\"\n\t\"cosmossdk.io/core/store\"\n)\n\nvar (\n\t// just arbitrarily picking some upper bound number.\n\tunorderedSequencePrefix = collections.NewPrefix(90)\n)\n\ntype AccountKeeper struct {\n\t// ...\n\tunorderedSequences collections.KeySet[collections.Pair[uint64, []byte]]\n}\n\nfunc (m *AccountKeeper) Contains(ctx sdk.Context, sender []byte, timestamp uint64) (bool, error) {\n\treturn m.unorderedSequences.Has(ctx, collections.Join(timestamp, sender))\n}\n\nfunc (m *AccountKeeper) Add(ctx sdk.Context, sender []byte, timestamp uint64) error {\n\treturn m.unorderedSequences.Set(ctx, collections.Join(timestamp, sender))\n}\n\nfunc (m *AccountKeeper) RemoveExpired(ctx sdk.Context) error {\n\tblkTime := ctx.BlockTime().UnixNano()\n\tit, err := m.unorderedSequences.Iterate(ctx, collections.NewPrefixUntilPairRange[uint64, []byte](uint64(blkTime)))\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer it.Close()\n\n\tkeys, err := it.Keys()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor _, key := range keys {\n\t\tif err := m.unorderedSequences.Remove(ctx, key); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n```\n\n### AnteHandler Decorator\n\nTo facilitate bypassing nonce verification, we must modify the existing\n`IncrementSequenceDecorator` AnteHandler decorator to skip the nonce verification\nwhen the transaction is marked as unordered.\n\n```golang\nfunc (isd IncrementSequenceDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next 
sdk.AnteHandler) (sdk.Context, error) {\n if tx.UnOrdered() {\n return next(ctx, tx, simulate)\n }\n\n // ...\n}\n```\n\nWe also introduce a new decorator to perform the unordered transaction verification.\n\n```golang\npackage ante\n\nimport (\n\t\"slices\"\n\t\"strings\"\n\t\"time\"\n\n\tsdk \"github.com/cosmos/cosmos-sdk/types\"\n\tsdkerrors \"github.com/cosmos/cosmos-sdk/types/errors\"\n\tauthkeeper \"github.com/cosmos/cosmos-sdk/x/auth/keeper\"\n\tauthsigning \"github.com/cosmos/cosmos-sdk/x/auth/signing\"\n\n\terrorsmod \"cosmossdk.io/errors\"\n)\n\nvar _ sdk.AnteDecorator = (*UnorderedTxDecorator)(nil)\n\n// UnorderedTxDecorator defines an AnteHandler decorator that is responsible for\n// checking if a transaction is intended to be unordered and, if so, evaluates\n// the transaction accordingly. An unordered transaction will bypass having its\n// nonce incremented, which allows fire-and-forget transaction broadcasting,\n// removing the necessity of ordering on the sender-side.\n//\n// The transaction sender must ensure that unordered=true and a timeout_height\n// is appropriately set. 
The AnteHandler will check that the transaction is not\n// a duplicate and will evict it from state when the timeout is reached.\n//\n// The UnorderedTxDecorator should be placed as early as possible in the AnteHandler\n// chain to ensure that during DeliverTx, the transaction is added to the unordered sequence state.\ntype UnorderedTxDecorator struct {\n\t// maxUnOrderedTTL defines the maximum TTL a transaction can define.\n\tmaxTimeoutDuration time.Duration\n\ttxManager authkeeper.UnorderedTxManager\n}\n\nfunc NewUnorderedTxDecorator(\n\tutxm authkeeper.UnorderedTxManager,\n) *UnorderedTxDecorator {\n\treturn &UnorderedTxDecorator{\n\t\tmaxTimeoutDuration: 10 * time.Minute,\n\t\ttxManager: utxm,\n\t}\n}\n\nfunc (d *UnorderedTxDecorator) AnteHandle(\n\tctx sdk.Context,\n\ttx sdk.Tx,\n\t_ bool,\n\tnext sdk.AnteHandler,\n) (sdk.Context, error) {\n\tif err := d.ValidateTx(ctx, tx); err != nil {\n\t\treturn ctx, err\n\t}\n\treturn next(ctx, tx, false)\n}\n\nfunc (d *UnorderedTxDecorator) ValidateTx(ctx sdk.Context, tx sdk.Tx) error {\n\tunorderedTx, ok := tx.(sdk.TxWithUnordered)\n\tif !ok || !unorderedTx.GetUnordered() {\n\t\t// If the transaction does not implement unordered capabilities or has the\n\t\t// unordered value as false, we bypass.\n\t\treturn nil\n\t}\n\n\tblockTime := ctx.BlockTime()\n\ttimeoutTimestamp := unorderedTx.GetTimeoutTimeStamp()\n\tif timeoutTimestamp.IsZero() || timeoutTimestamp.Unix() == 0 {\n\t\treturn errorsmod.Wrap(\n\t\t\tsdkerrors.ErrInvalidRequest,\n\t\t\t\"unordered transaction must have timeout_timestamp set\",\n\t\t)\n\t}\n\tif timeoutTimestamp.Before(blockTime) {\n\t\treturn errorsmod.Wrap(\n\t\t\tsdkerrors.ErrInvalidRequest,\n\t\t\t\"unordered transaction has a timeout_timestamp that has already passed\",\n\t\t)\n\t}\n\tif timeoutTimestamp.After(blockTime.Add(d.maxTimeoutDuration)) {\n\t\treturn errorsmod.Wrapf(\n\t\t\tsdkerrors.ErrInvalidRequest,\n\t\t\t\"unordered tx ttl exceeds 
%s\",\n\t\t\td.maxTimeoutDuration.String(),\n\t\t)\n\t}\n\n\texecMode := ctx.ExecMode()\n\tif execMode == sdk.ExecModeSimulate {\n\t\treturn nil\n\t}\n\n\tsignerAddrs, err := getSigners(tx)\n\tif err != nil {\n\t\treturn err\n\t}\n\t\n\tfor _, signer := range signerAddrs {\n\t\tcontains, err := d.txManager.Contains(ctx, signer, uint64(unorderedTx.GetTimeoutTimeStamp().Unix()))\n\t\tif err != nil {\n\t\t\treturn errorsmod.Wrap(\n\t\t\t\tsdkerrors.ErrIO,\n\t\t\t\t\"failed to check contains\",\n\t\t\t)\n\t\t}\n\t\tif contains {\n\t\t\treturn errorsmod.Wrapf(\n\t\t\t\tsdkerrors.ErrInvalidRequest,\n\t\t\t\t\"tx is duplicated for signer %x\", signer,\n\t\t\t)\n\t\t}\n\n\t\tif err := d.txManager.Add(ctx, signer, uint64(unorderedTx.GetTimeoutTimeStamp().Unix())); err != nil {\n\t\t\treturn errorsmod.Wrap(\n\t\t\t\tsdkerrors.ErrIO,\n\t\t\t\t\"failed to add unordered sequence to state\",\n\t\t\t)\n\t\t}\n }\n\t\n\t\n\treturn nil\n}\n\nfunc getSigners(tx sdk.Tx) ([][]byte, error) {\n\tsigTx, ok := tx.(authsigning.SigVerifiableTx)\n\tif !ok {\n\t\treturn nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, \"invalid tx type\")\n\t}\n\treturn sigTx.GetSigners()\n}\n\n```\n\n### Unordered Sequences\n\nUnordered sequences provide a simple, straightforward mechanism to protect against both transaction malleability and\ntransaction duplication. It is important to note that the unordered sequence must still be unique. However,\nthe value is not required to be strictly increasing as with regular sequences, and the order in which the node receives\nthe transactions no longer matters. Clients can handle building unordered transactions similarly to the code below:\n\n```go\nfor _, tx := range txs {\n\ttx.SetUnordered(true)\n\ttx.SetTimeoutTimestamp(time.Now() + 1 * time.Nanosecond)\n}\n```\n\nWe will reject transactions that have both sequence and unordered timeouts set. 
We do this to avoid assuming the intent of the user.\n\n### State Management\n\nThe storage of unordered sequences will be facilitated using the Cosmos SDK's KV Store service.\n\n## Note On Previous Design Iteration\n\nThe previous iteration of unordered transactions worked by using an ad-hoc state-management system that posed severe \nrisks and a vector for duplicated tx processing. It relied on graceful app closure which would flush the current state\nof the unordered sequence mapping. If 2/3 of the network crashed, and the graceful closure did not trigger, \nthe system would lose track of all sequences in the mapping, allowing those transactions to be replayed. The \nimplementation proposed in the updated version of this ADR solves this by writing directly to the Cosmos KV Store.\nWhile this is less performant, for the initial implementation, we opted to choose a safer path and postpone performance optimizations until we have more data on real-world impacts and a more battle-tested approach to optimization.\n\nAdditionally, the previous iteration relied on using hashes to create what we call an \"unordered sequence.\" There are known\nissues with transaction malleability in Cosmos SDK signing modes. This ADR gets away from this problem by enforcing\nsingle-use unordered nonces, instead of deriving nonces from bytes in the transaction.\n\n## Consequences\n\n### Positive\n\n* Support unordered transaction inclusion, enabling the ability to \"fire and forget\" many transactions at once.\n\n### Negative\n\n* Requires additional storage overhead.\n* Requirement of unique timestamps per transaction causes a small amount of additional overhead for clients. Clients must ensure each transaction's timeout timestamp is different. 
However, nanosecond differentials suffice.\n* Usage of Cosmos SDK KV store is slower in comparison to using a non-merkleized store or ad-hoc methods, and block times may slow down as a result.\n\n## References\n\n* https://github.com/cosmos/cosmos-sdk/issues/13009" + }, + { + "number": 76, + "filename": "adr-076-tx-malleability.md", + "title": "# Cosmos SDK Transaction Malleability Risk Review and Recommendations", + "content": "# Cosmos SDK Transaction Malleability Risk Review and Recommendations\n\n## Changelog\n\n* 2025-03-10: Initial draft (@aaronc)\n\n## Status\n\nPROPOSED: Not Implemented\n\n## Abstract\n\nSeveral encoding and sign mode related issues have historically resulted in the possibility\nthat Cosmos SDK transactions may be re-encoded in such a way as to change their hash\n(and in rare cases, their meaning) without invalidating the signature.\nThis document details these cases, their potential risks, the extent to which they have been\naddressed, and provides recommendations for future improvements.\n\n## Review\n\nOne naive assumption about Cosmos SDK transactions is that hashing the raw bytes of a submitted transaction creates a safe unique identifier for the transaction. In reality, there are multiple ways in which transactions could be manipulated to create different transaction bytes (and as a result different hashes) that still pass signature verification.\n\nThis document attempts to enumerate the various potential transaction \"malleability\" risks that we have identified and the extent to which they have or have not been addressed in various sign modes. 
We also identify vulnerabilities that could be introduced if developers make changes in the future without careful consideration of the complexities involved with transaction encoding, sign modes and signatures.\n\n### Risks Associated with Malleability\n\nThe malleability of transactions poses the following potential risks to end users:\n\n* unsigned data could get added to transactions and be processed by state machines\n* clients often rely on transaction hashes for checking transaction status, but whether or not submitted transaction hashes match processed transaction hashes depends primarily on good network actors rather than fundamental protocol guarantees\n* transactions could potentially get executed more than once (faulty replay protection)\n\nIf a client generates a transaction, keeps a record of its hash and then attempts to query nodes to check the transaction's status, this process may falsely conclude that the transaction had not been processed if an intermediary\nprocessor decoded and re-encoded the transaction with different encoding rules (either maliciously or unintentionally).\nAs long as no malleability is present in the signature bytes themselves, clients _should_ query transactions by signature instead of hash.\n\nNot being cognizant of this risk may lead clients to submit the same transaction multiple times if they believe that \nearlier transactions had failed or gotten lost in processing.\nThis could be an attack vector against users if wallets primarily query transactions by hash.\n\nIf the state machine were to rely on transaction hashes as a replay mechanism itself, this would be faulty and not \nprovide the intended replay protection. 
Instead, the state machine should rely on deterministic representations of\ntransactions, or other nonces, rather than the raw encoding,\nif it wants to provide some replay protection that doesn't rely on a monotonically\nincreasing account sequence number.\n\n\n### Sources of Malleability\n\n#### Non-deterministic Protobuf Encoding\n\nCosmos SDK transactions are encoded using protobuf binary encoding when they are submitted to the network. Protobuf binary is not inherently a deterministic encoding, meaning that the same logical payload could have several valid byte representations. In a basic sense, this means that protobuf in general can be decoded and re-encoded to produce a different byte stream (and thus a different hash) without changing the logical meaning of the bytes. [ADR 027: Deterministic Protobuf Serialization](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-027-deterministic-protobuf-serialization.md) describes in detail what needs to be done to produce what we consider to be a \"canonical\", deterministic protobuf serialization. Briefly, the following sources of malleability at the encoding level have been identified and are addressed by this specification:\n\n* fields can be emitted in any order\n* default field values can be included or omitted, and this doesn't change meaning unless `optional` is used\n* `repeated` fields of scalars may use packed or \"regular\" encoding\n* `varint`s can include extra ignored bits\n* extra fields may be added and are usually simply ignored by decoders. 
[ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering) specifies that in general such extra fields should cause messages and transactions to be rejected\n\nWhen using `SIGN_MODE_DIRECT` none of the above malleabilities will be tolerated because:\n\n* signatures of messages and extensions must be done over the raw encoded bytes of those fields\n* the outer tx envelope (`TxRaw`) must follow ADR 027 rules or be rejected\n\nTransactions signed with `SIGN_MODE_LEGACY_AMINO_JSON`, however, have no way of protecting against the above malleabilities because what is signed is a JSON representation of the logical contents of the transaction. These logical contents could have any number of valid protobuf binary encodings, so in general there are no guarantees regarding transaction hash with Amino JSON signing.\n\nIn addition to being aware of the general non-determinism of protobuf binary, developers need to pay special attention to make sure that unknown protobuf fields get rejected when developing new capabilities related to protobuf transactions. The protobuf serialization format was designed with the assumption that unknown data known to encoders could safely be ignored by decoders. This assumption may have been fairly safe within the walled garden of Google's centralized infrastructure. However, in distributed blockchain systems, this assumption is generally unsafe. If a newer client encodes a protobuf message with data intended for a newer server, it is not safe for an older server to simply ignore and discard instructions that it does not understand. 
These instructions could include critical information that the transaction signer is relying upon and just assuming that it is unimportant is not safe.\n\n[ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering) specifies some provisions for \"non-critical\" fields which can safely be ignored by older servers. In practice, I have not seen any valid usages of this. It is something in the design that maintainers should be aware of, but it may not be necessary or even 100% safe.\n\n#### Non-deterministic Value Encoding\n\nIn addition to the non-determinism present in protobuf binary itself, some protobuf field data is encoded using a micro-format which itself may not be deterministic. Consider for instance integer or decimal encoding. Some decoders may allow for the presence of leading or trailing zeros without changing the logical meaning, ex. `00100` vs `100` or `100.00` vs `100`. So if a sign mode encodes numbers deterministically, but decoders accept multiple representations,\na user may sign over the value `100` while `0100` gets encoded. This would be possible with Amino JSON to the extent that the integer decoder accepts leading zeros. I believe the current `Int` implementation will reject this, however, it is\nprobably possible to encode an octal or hexadecimal representation in the transaction whereas the user signs over a decimal integer.\n\n#### Signature Encoding\n\nSignatures themselves are encoded using a micro-format specific to the signature algorithm being used and sometimes these\nmicro-formats can allow for non-determinism (multiple valid bytes for the same signature).\nMost of the signature algorithms supported by the SDK should reject non-canonical bytes in their current implementation.\nHowever, the `Multisignature` protobuf type uses normal protobuf encoding and there is no check as to whether the\ndecoded bytes followed canonical ADR 027 rules or not. 
Therefore, multisig transactions can have malleability in\ntheir signatures.\nAny new or custom signature algorithms must make sure that they reject any non-canonical bytes, otherwise even\nwith `SIGN_MODE_DIRECT` there can be transaction hash malleability by re-encoding signatures with a non-canonical\nrepresentation.\n\n#### Fields not covered by Amino JSON\n\nAnother area that needs to be addressed carefully is the discrepancy between `AminoSignDoc` (see [`aminojson.proto`](../../x/tx/signing/aminojson/internal/aminojsonpb/aminojson.proto)) used for `SIGN_MODE_LEGACY_AMINO_JSON` and the actual contents of `TxBody` and `AuthInfo` (see [`tx.proto`](../../proto/cosmos/tx/v1beta1/tx.proto)).\nIf fields get added to `TxBody` or `AuthInfo`, they must either have a corresponding representation in `AminoSignDoc` or Amino JSON signatures must be rejected when those new fields are set. Making sure that this is done is a\nhighly manual process, and developers could easily make the mistake of updating `TxBody` or `AuthInfo`\nwithout paying any attention to the implementation of `GetSignBytes` for Amino JSON. 
This is a critical\nvulnerability in which unsigned content can now get into the transaction and signature verification will\npass.\n\n## Sign Mode Summary and Recommendations\n\nThe sign modes officially supported by the SDK are `SIGN_MODE_DIRECT`, `SIGN_MODE_TEXTUAL`, `SIGN_MODE_DIRECT_AUX`,\nand `SIGN_MODE_LEGACY_AMINO_JSON`.\n`SIGN_MODE_LEGACY_AMINO_JSON` is used commonly by wallets and is currently the only sign mode supported on Nano Ledger hardware devices\n(although `SIGN_MODE_TEXTUAL` was designed to also support hardware devices).\n`SIGN_MODE_DIRECT` is the simplest sign mode and its usage is also fairly common.\n`SIGN_MODE_DIRECT_AUX` is a variant of `SIGN_MODE_DIRECT` that can be used by auxiliary signers in a multi-signer\ntransaction by those signers who are not paying gas.\n`SIGN_MODE_TEXTUAL` was intended as a replacement for `SIGN_MODE_LEGACY_AMINO_JSON`, but as far as we know it\nhas not been adopted by any clients yet and thus is not in active use.\n\nAll known malleability concerns have been addressed in the current implementation of `SIGN_MODE_DIRECT`.\nThe only known malleability that could occur with a transaction signed with `SIGN_MODE_DIRECT` would\nneed to be in the signature bytes themselves.\nSince signatures are not signed over, it is impossible for any sign mode to address this directly\nand instead signature algorithms need to take care to reject any non-canonically encoded signature bytes\nto prevent malleability.\nFor the known malleability of the `Multisignature` type, we should make sure that any valid signatures\nwere encoded following canonical ADR 027 rules when doing signature verification.\n\n`SIGN_MODE_DIRECT_AUX` provides the same level of safety as `SIGN_MODE_DIRECT` because\n\n* the raw encoded `TxBody` bytes are signed over in `SignDocDirectAux`, and\n* a transaction using `SIGN_MODE_DIRECT_AUX` still requires the primary signer to sign the transaction with `SIGN_MODE_DIRECT`\n\n`SIGN_MODE_TEXTUAL` also provides the same 
level of safety as `SIGN_MODE_DIRECT` because the hashes of the raw encoded\n`TxBody` and `AuthInfo` bytes are signed over.\n\nUnfortunately, the vast majority of unaddressed malleability risks affect `SIGN_MODE_LEGACY_AMINO_JSON` and this\nsign mode is still commonly used.\nIt is recommended that the following improvements be made to Amino JSON signing:\n\n* hashes of `TxBody` and `AuthInfo` should be added to `AminoSignDoc` so that encoding-level malleability is addressed\n* when constructing `AminoSignDoc`, the [protoreflect](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect) API should be used to ensure that no fields have been set in `TxBody` or `AuthInfo` which do not have a mapping in `AminoSignDoc`\n* fields present in `TxBody` or `AuthInfo` that are not present in `AminoSignDoc` (such as extension options) should\nbe added to `AminoSignDoc` if possible\n\n## Testing\n\nTo test that transactions are resistant to malleability,\nwe can develop a test suite to run against all sign modes that\nattempts to manipulate transaction bytes in the following ways:\n\n* changing protobuf encoding by\n * reordering fields\n * setting default values\n * adding extra bits to varints, or\n * setting new unknown fields\n* modifying integer and decimal values encoded as strings with leading or trailing zeros\n\nWhenever any of these manipulations is done, we should observe that the sign doc bytes for the sign mode being\ntested also change, meaning that the corresponding signatures will also have to change.\n\nIn the case of Amino JSON, we should also develop tests which ensure that if any `TxBody` or `AuthInfo`\nfield not supported by Amino's `AminoSignDoc` is set, signing fails.\n\nIn the general case of transaction decoding, we should have unit tests to ensure that\n\n* any `TxRaw` bytes which do not follow ADR 027 canonical encoding cause decoding to fail, and\n* any top-level transaction elements including `TxBody`, `AuthInfo`, public keys, and 
messages which\nhave unknown fields set cause the transaction to be rejected\n(this ensures that ADR 020 unknown field filtering is properly applied)\n\nFor each supported signature algorithm,\nthere should also be unit tests to ensure that signatures must be encoded canonically\nor get rejected.\n\n## References\n\n* [ADR 027: Deterministic Protobuf Serialization](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-027-deterministic-protobuf-serialization.md)\n* [ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)\n* [`aminojson.proto`](../../x/tx/signing/aminojson/internal/aminojsonpb/aminojson.proto)\n* [`tx.proto`](../../proto/cosmos/tx/v1beta1/tx.proto)" + } +] \ No newline at end of file diff --git a/docs.json b/docs.json index 6ba96321..2d4b5584 100644 --- a/docs.json +++ b/docs.json @@ -1,1484 +1,2492 @@ { - "$schema": "https://mintlify.com/docs.json", - "name": "Cosmos Documentation", - "description": "Build the future of the internet of blockchains with Cosmos.", - "theme": "mint", - "colors": { - "primary": "#000000", - "light": "#cccccc", - "dark": "#0E0E0E" - }, - "icons": { - "library": "lucide" - }, - "styling": { - "codeblocks": "system" - }, - "favicon": { - "light": "/assets/brand/cosmos.svg", - "dark": "/assets/brand/cosmos-dark.svg" - }, - "logo": { - "light": "/assets/brand/cosmos-wordmark-dark.svg", - "dark": "/assets/brand/cosmos-wordmark.svg" - }, - "contextual": { - "options": [ - "copy", - "view", - "chatgpt", - "claude", - "perplexity", - "mcp", - "cursor", - "vscode" - ] - }, - "redirects": [], - "navigation": { - "dropdowns": [ - { - "dropdown": "EVM", - "icon": "book-open-text", - "versions": [ - { - "version": "v0.4.x", - "tabs": [ - { - "tab": "Documentation", - "groups": [ - { - "group": "EVM", - "pages": [ - "docs/evm/v0.4.x/documentation/overview", - { - "group": "Getting Started", - "pages": [ - 
"docs/evm/v0.4.x/documentation/getting-started/index", - "docs/evm/v0.4.x/documentation/getting-started/faq", - "docs/evm/v0.4.x/documentation/getting-started/development-environment" - ] - }, - { - "group": "Tooling and Resources", - "pages": [ - "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/overview", - "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/block-explorers", - "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/foundry", - "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/hardhat", - "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/remix", - "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/testing-and-fuzzing", - "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/wallet-integration" - ] - }, - { - "group": "Integration", - "pages": [ - "docs/evm/v0.4.x/documentation/integration/evm-module-integration", - "docs/evm/v0.4.x/documentation/integration/mempool-integration", - "docs/evm/v0.4.x/documentation/integration/upgrade-handlers", - "docs/evm/v0.4.x/documentation/integration/migration-v0.3-to-v0.4", - "docs/evm/v0.4.x/documentation/integration/erc20-precompiles-migration", - { - "group": "Devnet", - "pages": [ - "docs/evm/v0.4.x/documentation/integration/devnet/introduction", - "docs/evm/v0.4.x/documentation/integration/devnet/connect-to-the-network", - "docs/evm/v0.4.x/documentation/integration/devnet/get-tokens" - ] - } - ] - }, - { - "group": "EVM Compatibility", - "pages": [ - "docs/evm/v0.4.x/documentation/evm-compatibility/overview", - "docs/evm/v0.4.x/documentation/evm-compatibility/eip-reference" - ] - }, - { - "group": "Concepts", - "pages": [ - "docs/evm/v0.4.x/documentation/concepts/overview", - "docs/evm/v0.4.x/documentation/concepts/accounts", - "docs/evm/v0.4.x/documentation/concepts/chain-id", - "docs/evm/v0.4.x/documentation/concepts/encoding", - "docs/evm/v0.4.x/documentation/concepts/gas-and-fees", - 
"docs/evm/v0.4.x/documentation/concepts/ibc", - "docs/evm/v0.4.x/documentation/concepts/mempool", - "docs/evm/v0.4.x/documentation/concepts/migrations", - "docs/evm/v0.4.x/documentation/concepts/pending-state", - "docs/evm/v0.4.x/documentation/concepts/replay-protection", - "docs/evm/v0.4.x/documentation/concepts/signing", - "docs/evm/v0.4.x/documentation/concepts/single-token-representation", - "docs/evm/v0.4.x/documentation/concepts/tokens", - "docs/evm/v0.4.x/documentation/concepts/transactions", - "docs/evm/v0.4.x/documentation/concepts/predeployed-contracts", - "docs/evm/v0.4.x/documentation/concepts/eip-1559-feemarket", - "docs/evm/v0.4.x/documentation/concepts/precision-handling" - ] - }, - { - "group": "Smart Contracts", - "pages": [ - "docs/evm/v0.4.x/documentation/smart-contracts/introduction", - { - "group": "Precompiles", - "pages": [ - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/index", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/overview", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bank", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bech32", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/callbacks", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/distribution", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/erc20", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/governance", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/ics20", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/p256", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/slashing", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/staking", - "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/werc20" - ] - }, - { - "group": "Predeployed Contracts", - "pages": [ - "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/overview", - 
"docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/implementation-guide", - "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/create2", - "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/multicall3", - "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/permit2", - "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/safe-factory" - ] - } - ] - }, - "docs/evm/v0.4.x/documentation/custom-improvement-proposals", - { - "group": "Cosmos SDK", - "pages": [ - "docs/evm/v0.4.x/documentation/cosmos-sdk/overview", - "docs/evm/v0.4.x/documentation/cosmos-sdk/cli", - "docs/evm/v0.4.x/documentation/cosmos-sdk/protocol", - { - "group": "Modules", - "pages": [ - "docs/evm/v0.4.x/documentation/cosmos-sdk/modules/erc20", - "docs/evm/v0.4.x/documentation/cosmos-sdk/modules/feemarket", - "docs/evm/v0.4.x/documentation/cosmos-sdk/modules/precisebank", - "docs/evm/v0.4.x/documentation/cosmos-sdk/modules/vm" - ] - } - ] - } - ] - } - ] - }, - { - "tab": "API Reference", - "pages": [ - "docs/evm/v0.4.x/api-reference/ethereum-json-rpc/index", - "docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods", - "docs/evm/v0.4.x/api-reference/ethereum-json-rpc/rpc-explorer" - ] - }, - { - "tab": "Release Notes", - "pages": [ - "docs/evm/v0.4.x/changelog/release-notes" - ] - } - ] - }, - { - "version": "next", - "tabs": [ - { - "tab": "Documentation", - "groups": [ - { - "group": "Cosmos EVM", - "pages": [ - "docs/evm/next/documentation/overview", - { - "group": "Getting Started", - "pages": [ - "docs/evm/next/documentation/getting-started/index", - "docs/evm/next/documentation/getting-started/reference-network", - "docs/evm/next/documentation/getting-started/faq", - "docs/evm/next/documentation/getting-started/development-environment", - "docs/evm/next/documentation/getting-started/local-network-setup", - "docs/evm/next/documentation/getting-started/node-configuration" - ] - }, - { - "group": "Tooling and 
Resources", - "pages": [ - "docs/evm/next/documentation/getting-started/tooling-and-resources/overview", - "docs/evm/next/documentation/getting-started/tooling-and-resources/block-explorers", - "docs/evm/next/documentation/getting-started/tooling-and-resources/foundry", - "docs/evm/next/documentation/getting-started/tooling-and-resources/hardhat", - "docs/evm/next/documentation/getting-started/tooling-and-resources/remix", - "docs/evm/next/documentation/getting-started/tooling-and-resources/testing-and-fuzzing", - "docs/evm/next/documentation/getting-started/tooling-and-resources/wallet-integration" - ] - }, - { - "group": "Integration", - "pages": [ - "docs/evm/next/documentation/integration/evm-module-integration", - "docs/evm/next/documentation/integration/predeployed-contracts-integration", - "docs/evm/next/documentation/integration/mempool-integration", - "docs/evm/next/documentation/integration/upgrade-handlers", - "docs/evm/next/documentation/integration/migration-v0.3-to-v0.4", - "docs/evm/next/documentation/integration/migration-v0.4-to-v0.5", - "docs/evm/next/documentation/integration/erc20-precompiles-migration", - "docs/evm/next/documentation/integration/fresh-v0.5-integration" - ] - }, - { - "group": "EVM Compatibility", - "pages": [ - "docs/evm/next/documentation/evm-compatibility/overview", - "docs/evm/next/documentation/evm-compatibility/eip-reference", - "docs/evm/next/documentation/evm-compatibility/eip-2935", - "docs/evm/next/documentation/evm-compatibility/eip-7702" - ] - }, - { - "group": "Concepts", - "pages": [ - "docs/evm/next/documentation/concepts/overview", - "docs/evm/next/documentation/concepts/accounts", - "docs/evm/next/documentation/concepts/chain-id", - "docs/evm/next/documentation/concepts/encoding", - "docs/evm/next/documentation/concepts/gas-and-fees", - "docs/evm/next/documentation/concepts/ibc", - "docs/evm/next/documentation/concepts/mempool", - "docs/evm/next/documentation/concepts/migrations", - 
"docs/evm/next/documentation/concepts/pending-state", - "docs/evm/next/documentation/concepts/replay-protection", - "docs/evm/next/documentation/concepts/signing", - "docs/evm/next/documentation/concepts/single-token-representation", - "docs/evm/next/documentation/concepts/tokens", - "docs/evm/next/documentation/concepts/transactions", - "docs/evm/next/documentation/concepts/predeployed-contracts", - "docs/evm/next/documentation/concepts/eip-1559-feemarket", - "docs/evm/next/documentation/concepts/performance", - "docs/evm/next/documentation/concepts/precision-handling" - ] - }, - { - "group": "Smart Contracts", - "pages": [ - "docs/evm/next/documentation/smart-contracts/introduction", - { - "group": "Precompiles", - "pages": [ - "docs/evm/next/documentation/smart-contracts/precompiles/index", - "docs/evm/next/documentation/smart-contracts/precompiles/overview", - "docs/evm/next/documentation/smart-contracts/precompiles/bank", - "docs/evm/next/documentation/smart-contracts/precompiles/bech32", - "docs/evm/next/documentation/smart-contracts/precompiles/callbacks", - "docs/evm/next/documentation/smart-contracts/precompiles/distribution", - "docs/evm/next/documentation/smart-contracts/precompiles/erc20", - "docs/evm/next/documentation/smart-contracts/precompiles/governance", - "docs/evm/next/documentation/smart-contracts/precompiles/ics20", - "docs/evm/next/documentation/smart-contracts/precompiles/p256", - "docs/evm/next/documentation/smart-contracts/precompiles/slashing", - "docs/evm/next/documentation/smart-contracts/precompiles/staking", - "docs/evm/next/documentation/smart-contracts/precompiles/werc20" - ] - }, - { - "group": "Predeployed Contracts", - "pages": [ - "docs/evm/next/documentation/smart-contracts/predeployed-contracts/overview", - "docs/evm/next/documentation/smart-contracts/predeployed-contracts/create2", - "docs/evm/next/documentation/smart-contracts/predeployed-contracts/multicall3", - 
"docs/evm/next/documentation/smart-contracts/predeployed-contracts/permit2", - "docs/evm/next/documentation/smart-contracts/predeployed-contracts/safe-factory" - ] - } - ] - }, - "docs/evm/next/documentation/custom-improvement-proposals", - { - "group": "Cosmos SDK", - "pages": [ - "docs/evm/next/documentation/cosmos-sdk/overview", - "docs/evm/next/documentation/cosmos-sdk/cli", - "docs/evm/next/documentation/cosmos-sdk/protocol", - { - "group": "Modules", - "pages": [ - "docs/evm/next/documentation/cosmos-sdk/modules/erc20", - "docs/evm/next/documentation/cosmos-sdk/modules/feemarket", - "docs/evm/next/documentation/cosmos-sdk/modules/ibc", - "docs/evm/next/documentation/cosmos-sdk/modules/precisebank", - "docs/evm/next/documentation/cosmos-sdk/modules/vm" - ] - } - ] - } - ] - } - ] - }, - { - "tab": "API Reference", - "pages": [ - "docs/evm/next/api-reference/ethereum-json-rpc/index", - "docs/evm/next/api-reference/ethereum-json-rpc/methods", - "docs/evm/next/api-reference/ethereum-json-rpc/rpc-explorer" - ] - }, - { - "tab": "Release Notes", - "pages": [ - "docs/evm/next/changelog/release-notes" - ] - } - ] - } + "$schema": "https://mintlify.com/docs.json", + "name": "Cosmos Documentation", + "description": "Build the future of the internet of blockchains with Cosmos.", + "theme": "mint", + "colors": { + "primary": "#000000", + "light": "#cccccc", + "dark": "#0E0E0E" + }, + "icons": { + "library": "lucide" + }, + "styling": { + "codeblocks": "system" + }, + "favicon": { + "light": "/assets/brand/cosmos.svg", + "dark": "/assets/brand/cosmos-dark.svg" + }, + "logo": { + "light": "/assets/brand/cosmos-wordmark-dark.svg", + "dark": "/assets/brand/cosmos-wordmark.svg" + }, + "contextual": { + "options": [ + "copy", + "view", + "chatgpt", + "claude", + "perplexity", + "mcp", + "cursor", + "vscode" ] - }, - { - "dropdown": "SDK", - "icon": "book-open-text", - "versions": [ - { - "version": "v0.53", - "tabs": [ - { - "tab": "Learn", - "groups": [ - { - "group": 
"Learn", - "pages": [ - "docs/sdk/v0.53/learn" - ] - }, - { - "group": "Introduction", - "pages": [ - "docs/sdk/v0.53/learn/intro/overview", - "docs/sdk/v0.53/learn/intro/why-app-specific", - "docs/sdk/v0.53/learn/intro/sdk-app-architecture", - "docs/sdk/v0.53/learn/intro/sdk-design" - ] - }, - { - "group": "Beginner", - "pages": [ - "docs/sdk/v0.53/learn/beginner/app-anatomy", - "docs/sdk/v0.53/learn/beginner/tx-lifecycle", - "docs/sdk/v0.53/learn/beginner/query-lifecycle", - "docs/sdk/v0.53/learn/beginner/accounts", - "docs/sdk/v0.53/learn/beginner/gas-fees" - ] - }, - { - "group": "Advanced", - "pages": [ - "docs/sdk/v0.53/learn/advanced/baseapp", - "docs/sdk/v0.53/learn/advanced/transactions", - "docs/sdk/v0.53/learn/advanced/context", - "docs/sdk/v0.53/learn/advanced/node", - "docs/sdk/v0.53/learn/advanced/store", - "docs/sdk/v0.53/learn/advanced/encoding", - "docs/sdk/v0.53/learn/advanced/grpc_rest", - "docs/sdk/v0.53/learn/advanced/cli", - "docs/sdk/v0.53/learn/advanced/events", - "docs/sdk/v0.53/learn/advanced/telemetry", - "docs/sdk/v0.53/learn/advanced/ocap", - "docs/sdk/v0.53/learn/advanced/runtx_middleware", - "docs/sdk/v0.53/learn/advanced/simulation", - "docs/sdk/v0.53/learn/advanced/proto-docs", - "docs/sdk/v0.53/learn/advanced/upgrade", - "docs/sdk/v0.53/learn/advanced/config", - "docs/sdk/v0.53/learn/advanced/autocli" - ] - } - ] - }, - { - "tab": "Build", - "groups": [ - { - "group": "Build", - "pages": [ - "docs/sdk/v0.53/build" - ] - }, - { - "group": "Building Apps", - "pages": [ - "docs/sdk/v0.53/build/building-apps/app-go", - "docs/sdk/v0.53/build/building-apps/runtime", - "docs/sdk/v0.53/build/building-apps/app-go-di", - "docs/sdk/v0.53/build/building-apps/app-mempool", - "docs/sdk/v0.53/build/building-apps/app-upgrade", - "docs/sdk/v0.53/build/building-apps/vote-extensions", - "docs/sdk/v0.53/build/building-apps/app-testnet" - ] - }, - { - "group": "Building Modules", - "pages": [ - "docs/sdk/v0.53/build/building-modules/module-manager", - 
"docs/sdk/v0.53/build/building-modules/messages-and-queries", - "docs/sdk/v0.53/build/building-modules/msg-services", - "docs/sdk/v0.53/build/building-modules/query-services", - "docs/sdk/v0.53/build/building-modules/protobuf-annotations", - "docs/sdk/v0.53/build/building-modules/beginblock-endblock", - "docs/sdk/v0.53/build/building-modules/keeper", - "docs/sdk/v0.53/build/building-modules/invariants", - "docs/sdk/v0.53/build/building-modules/genesis", - "docs/sdk/v0.53/build/building-modules/module-interfaces", - "docs/sdk/v0.53/build/building-modules/structure", - "docs/sdk/v0.53/build/building-modules/errors", - "docs/sdk/v0.53/build/building-modules/upgrade", - "docs/sdk/v0.53/build/building-modules/simulator", - "docs/sdk/v0.53/build/building-modules/depinject", - "docs/sdk/v0.53/build/building-modules/testing", - "docs/sdk/v0.53/build/building-modules/preblock" - ] - }, - { - "group": "ABCI", - "pages": [ - "docs/sdk/v0.53/build/abci/introduction", - "docs/sdk/v0.53/build/abci/prepare-proposal", - "docs/sdk/v0.53/build/abci/process-proposal", - "docs/sdk/v0.53/build/abci/vote-extensions", - "docs/sdk/v0.53/build/abci/checktx" - ] - }, - { - "group": "Modules", - "pages": [ - "docs/sdk/v0.53/build/modules", - { - "group": "x/auth", - "pages": [ - "docs/sdk/v0.53/build/modules/auth", - "docs/sdk/v0.53/build/modules/auth/vesting", - "docs/sdk/v0.53/build/modules/auth/tx" - ] - }, - "docs/sdk/v0.53/build/modules/authz", - "docs/sdk/v0.53/build/modules/bank", - "docs/sdk/v0.53/build/modules/consensus", - "docs/sdk/v0.53/build/modules/crisis", - "docs/sdk/v0.53/build/modules/distribution", - "docs/sdk/v0.53/build/modules/epochs", - "docs/sdk/v0.53/build/modules/evidence", - "docs/sdk/v0.53/build/modules/feegrant", - "docs/sdk/v0.53/build/modules/gov", - "docs/sdk/v0.53/build/modules/group", - "docs/sdk/v0.53/build/modules/mint", - "docs/sdk/v0.53/build/modules/nft", - "docs/sdk/v0.53/build/modules/params", - "docs/sdk/v0.53/build/modules/protocolpool", - 
"docs/sdk/v0.53/build/modules/slashing", - "docs/sdk/v0.53/build/modules/staking", - "docs/sdk/v0.53/build/modules/upgrade", - "docs/sdk/v0.53/build/modules/circuit", - "docs/sdk/v0.53/build/modules/genutil" - ] - }, - { - "group": "Migrations", - "pages": [ - "docs/sdk/v0.53/build/migrations/intro", - "docs/sdk/v0.53/build/migrations/upgrade-reference", - "docs/sdk/v0.53/build/migrations/upgrade-guide" - ] - }, - { - "group": "Packages", - "pages": [ - "docs/sdk/v0.53/build/packages", - "docs/sdk/v0.53/build/packages/depinject", - "docs/sdk/v0.53/build/packages/collections" - ] - }, - { - "group": "Tooling", - "pages": [ - "docs/sdk/v0.53/build/tooling", - "docs/sdk/v0.53/build/tooling/protobuf", - "docs/sdk/v0.53/build/tooling/cosmovisor", - "docs/sdk/v0.53/build/tooling/confix" - ] - }, - { - "group": "RFC", - "pages": [ - "docs/sdk/v0.53/build/rfc", - "docs/sdk/v0.53/build/rfc/PROCESS", - "docs/sdk/v0.53/build/rfc/rfc-001-tx-validation", - "docs/sdk/v0.53/build/rfc/rfc-template" - ] - }, - { - "group": "Specifications", - "pages": [ - "docs/sdk/v0.53/build/spec", - "docs/sdk/v0.53/build/spec/SPEC_MODULE", - "docs/sdk/v0.53/build/spec/SPEC_STANDARD", - { - "group": "Addresses spec", - "pages": [ - "docs/sdk/v0.53/build/spec/addresses", - "docs/sdk/v0.53/build/spec/addresses/bech32" + }, + "interaction": { + "drilldown": false + }, + "topbarLinks": [ + { + "name": "ADRs", + "url": "/docs/common/pages/adr-comprehensive" + } + ], + "redirects": [ + { + "source": "/docs/sdk/*/build/architecture/PROCESS.md", + "destination": "https://github.com/cosmos/cosmos-sdk-docs/blob/main/docs/build/architecture/PROCESS.md" + } + ], + "navigation": { + "dropdowns": [ + { + "dropdown": "EVM", + "versions": [ + { + "version": "v0.4.x", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "Cosmos EVM", + "pages": [ + "docs/evm/v0.4.x/documentation/overview", + { + "group": "Getting Started", + "pages": [ + "docs/evm/v0.4.x/documentation/getting-started/index", + 
"docs/evm/v0.4.x/documentation/getting-started/faq", + "docs/evm/v0.4.x/documentation/getting-started/development-environment" + ] + }, + { + "group": "Tooling and Resources", + "pages": [ + "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/overview", + "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/block-explorers", + "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/foundry", + "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/hardhat", + "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/remix", + "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/testing-and-fuzzing", + "docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/wallet-integration" + ] + }, + { + "group": "Integration", + "pages": [ + "docs/evm/v0.4.x/documentation/integration/evm-module-integration", + "docs/evm/v0.4.x/documentation/integration/mempool-integration", + "docs/evm/v0.4.x/documentation/integration/upgrade-handlers", + "docs/evm/v0.4.x/documentation/integration/migration-v0.3-to-v0.4", + "docs/evm/v0.4.x/documentation/integration/erc20-precompiles-migration" + ] + }, + { + "group": "EVM Compatibility", + "pages": [ + "docs/evm/v0.4.x/documentation/evm-compatibility/overview", + "docs/evm/v0.4.x/documentation/evm-compatibility/eip-reference" + ] + }, + { + "group": "Concepts", + "pages": [ + "docs/evm/v0.4.x/documentation/concepts/overview", + "docs/evm/v0.4.x/documentation/concepts/accounts", + "docs/evm/v0.4.x/documentation/concepts/chain-id", + "docs/evm/v0.4.x/documentation/concepts/encoding", + "docs/evm/v0.4.x/documentation/concepts/gas-and-fees", + "docs/evm/v0.4.x/documentation/concepts/ibc", + "docs/evm/v0.4.x/documentation/concepts/mempool", + "docs/evm/v0.4.x/documentation/concepts/migrations", + "docs/evm/v0.4.x/documentation/concepts/pending-state", + "docs/evm/v0.4.x/documentation/concepts/replay-protection", + 
"docs/evm/v0.4.x/documentation/concepts/signing", + "docs/evm/v0.4.x/documentation/concepts/single-token-representation", + "docs/evm/v0.4.x/documentation/concepts/tokens", + "docs/evm/v0.4.x/documentation/concepts/transactions", + "docs/evm/v0.4.x/documentation/concepts/predeployed-contracts", + "docs/evm/v0.4.x/documentation/concepts/eip-1559-feemarket", + "docs/evm/v0.4.x/documentation/concepts/precision-handling" + ] + }, + { + "group": "Smart Contracts", + "pages": [ + "docs/evm/v0.4.x/documentation/smart-contracts/introduction", + { + "group": "Precompiles", + "pages": [ + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/index", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/overview", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bank", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bech32", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/callbacks", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/distribution", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/erc20", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/governance", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/ics20", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/p256", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/slashing", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/staking", + "docs/evm/v0.4.x/documentation/smart-contracts/precompiles/werc20" + ] + }, + { + "group": "Predeployed Contracts", + "pages": [ + "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/overview", + "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/implementation-guide", + "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/create2", + "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/multicall3", + "docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/permit2", + 
"docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/safe-factory" + ] + } + ] + }, + "docs/evm/v0.4.x/documentation/custom-improvement-proposals", + "docs/evm/v0.4.x/security-audit", + { + "group": "Cosmos SDK", + "pages": [ + "docs/evm/v0.4.x/documentation/cosmos-sdk/overview", + "docs/evm/v0.4.x/documentation/cosmos-sdk/cli", + "docs/evm/v0.4.x/documentation/cosmos-sdk/protocol", + { + "group": "Modules", + "pages": [ + "docs/evm/v0.4.x/documentation/cosmos-sdk/modules/erc20", + "docs/evm/v0.4.x/documentation/cosmos-sdk/modules/feemarket", + "docs/evm/v0.4.x/documentation/cosmos-sdk/modules/precisebank", + "docs/evm/v0.4.x/documentation/cosmos-sdk/modules/vm" + ] + } + ] + } + ] + } + ] + }, + { + "tab": "API Reference", + "pages": [ + "docs/evm/v0.4.x/api-reference/ethereum-json-rpc/index", + "docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods", + "docs/evm/v0.4.x/api-reference/ethereum-json-rpc/rpc-explorer" + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/evm/v0.4.x/changelog/release-notes" + ] + } ] - }, - { - "group": "Store", - "pages": [ - "docs/sdk/v0.53/build/spec/store", - "docs/sdk/v0.53/build/spec/store/interblock-cache" + }, + { + "version": "next", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "Cosmos EVM", + "pages": [ + "docs/evm/next/documentation/overview", + { + "group": "Getting Started", + "pages": [ + "docs/evm/next/documentation/getting-started/index", + "docs/evm/next/documentation/getting-started/reference-network", + "docs/evm/next/documentation/getting-started/faq", + "docs/evm/next/documentation/getting-started/development-environment", + "docs/evm/next/documentation/getting-started/local-network-setup", + "docs/evm/next/documentation/getting-started/node-configuration" + ] + }, + { + "group": "Tooling and Resources", + "pages": [ + "docs/evm/next/documentation/getting-started/tooling-and-resources/overview", + 
"docs/evm/next/documentation/getting-started/tooling-and-resources/block-explorers", + "docs/evm/next/documentation/getting-started/tooling-and-resources/foundry", + "docs/evm/next/documentation/getting-started/tooling-and-resources/hardhat", + "docs/evm/next/documentation/getting-started/tooling-and-resources/remix", + "docs/evm/next/documentation/getting-started/tooling-and-resources/testing-and-fuzzing", + "docs/evm/next/documentation/getting-started/tooling-and-resources/wallet-integration" + ] + }, + { + "group": "Integration", + "pages": [ + "docs/evm/next/documentation/integration/evm-module-integration", + "docs/evm/next/documentation/integration/predeployed-contracts-integration", + "docs/evm/next/documentation/integration/mempool-integration", + "docs/evm/next/documentation/integration/upgrade-handlers", + "docs/evm/next/documentation/integration/migration-v0.3-to-v0.4", + "docs/evm/next/documentation/integration/migration-v0.4-to-v0.5", + "docs/evm/next/documentation/integration/erc20-precompiles-migration", + "docs/evm/next/documentation/integration/fresh-v0.5-integration" + ] + }, + { + "group": "EVM Compatibility", + "pages": [ + "docs/evm/next/documentation/evm-compatibility/overview", + "docs/evm/next/documentation/evm-compatibility/eip-reference", + "docs/evm/next/documentation/evm-compatibility/eip-2935", + "docs/evm/next/documentation/evm-compatibility/eip-7702" + ] + }, + { + "group": "Concepts", + "pages": [ + "docs/evm/next/documentation/concepts/overview", + "docs/evm/next/documentation/concepts/accounts", + "docs/evm/next/documentation/concepts/chain-id", + "docs/evm/next/documentation/concepts/encoding", + "docs/evm/next/documentation/concepts/gas-and-fees", + "docs/evm/next/documentation/concepts/ibc", + "docs/evm/next/documentation/concepts/mempool", + "docs/evm/next/documentation/concepts/migrations", + "docs/evm/next/documentation/concepts/pending-state", + "docs/evm/next/documentation/concepts/replay-protection", + 
"docs/evm/next/documentation/concepts/signing", + "docs/evm/next/documentation/concepts/single-token-representation", + "docs/evm/next/documentation/concepts/tokens", + "docs/evm/next/documentation/concepts/transactions", + "docs/evm/next/documentation/concepts/predeployed-contracts", + "docs/evm/next/documentation/concepts/eip-1559-feemarket", + "docs/evm/next/documentation/concepts/performance", + "docs/evm/next/documentation/concepts/precision-handling" + ] + }, + { + "group": "Smart Contracts", + "pages": [ + "docs/evm/next/documentation/smart-contracts/introduction", + { + "group": "Precompiles", + "pages": [ + "docs/evm/next/documentation/smart-contracts/precompiles/index", + "docs/evm/next/documentation/smart-contracts/precompiles/overview", + "docs/evm/next/documentation/smart-contracts/precompiles/bank", + "docs/evm/next/documentation/smart-contracts/precompiles/bech32", + "docs/evm/next/documentation/smart-contracts/precompiles/callbacks", + "docs/evm/next/documentation/smart-contracts/precompiles/distribution", + "docs/evm/next/documentation/smart-contracts/precompiles/erc20", + "docs/evm/next/documentation/smart-contracts/precompiles/governance", + "docs/evm/next/documentation/smart-contracts/precompiles/ics20", + "docs/evm/next/documentation/smart-contracts/precompiles/p256", + "docs/evm/next/documentation/smart-contracts/precompiles/slashing", + "docs/evm/next/documentation/smart-contracts/precompiles/staking", + "docs/evm/next/documentation/smart-contracts/precompiles/werc20" + ] + }, + { + "group": "Predeployed Contracts", + "pages": [ + "docs/evm/next/documentation/smart-contracts/predeployed-contracts/overview", + "docs/evm/next/documentation/smart-contracts/predeployed-contracts/create2", + "docs/evm/next/documentation/smart-contracts/predeployed-contracts/multicall3", + "docs/evm/next/documentation/smart-contracts/predeployed-contracts/permit2", + "docs/evm/next/documentation/smart-contracts/predeployed-contracts/safe-factory" + ] + } + ] + }, + 
"docs/evm/next/documentation/custom-improvement-proposals", + "docs/evm/next/security-audit", + { + "group": "Cosmos SDK", + "pages": [ + "docs/evm/next/documentation/cosmos-sdk/overview", + "docs/evm/next/documentation/cosmos-sdk/cli", + "docs/evm/next/documentation/cosmos-sdk/protocol", + { + "group": "Modules", + "pages": [ + "docs/evm/next/documentation/cosmos-sdk/modules/erc20", + "docs/evm/next/documentation/cosmos-sdk/modules/feemarket", + "docs/evm/next/documentation/cosmos-sdk/modules/ibc", + "docs/evm/next/documentation/cosmos-sdk/modules/precisebank", + "docs/evm/next/documentation/cosmos-sdk/modules/vm" + ] + } + ] + } + ] + } + ] + }, + { + "tab": "API Reference", + "pages": [ + "docs/evm/next/api-reference/ethereum-json-rpc/index", + "docs/evm/next/api-reference/ethereum-json-rpc/methods", + "docs/evm/next/api-reference/ethereum-json-rpc/rpc-explorer" + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/evm/next/changelog/release-notes" + ] + } ] - } - ] - } + } ] - }, - { - "tab": "User Guides", - "groups": [ - { - "group": "User Guides", - "pages": [ - "docs/sdk/v0.53/user" - ] - }, - { - "group": "Running a Node, API and CLI", - "pages": [ - "docs/sdk/v0.53/user/run-node/keyring", - "docs/sdk/v0.53/user/run-node/run-node", - "docs/sdk/v0.53/user/run-node/interact-node", - "docs/sdk/v0.53/user/run-node/txs", - "docs/sdk/v0.53/user/run-node/run-testnet", - "docs/sdk/v0.53/user/run-node/run-production" - ] - }, - { - "group": "User", - "pages": [ - "docs/sdk/v0.53/user" - ] - } - ] - }, - { - "tab": "Tutorials", - "groups": [ - { - "group": "Tutorials", - "pages": [ - "docs/sdk/v0.53/tutorials" - ] - }, - { - "group": "Vote Extensions Tutorials", - "pages": [ - { - "group": "Mitigating Auction Front-Running Tutorial", - "pages": [ - "docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/getting-started", - "docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning", - 
"docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions", - "docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions", - "docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running" - ] - }, - { - "group": "Oracle Tutorial", - "pages": [ - "docs/sdk/v0.53/tutorials/vote-extensions/oracle/getting-started", - "docs/sdk/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle", - "docs/sdk/v0.53/tutorials/vote-extensions/oracle/implementing-vote-extensions", - "docs/sdk/v0.53/tutorials/vote-extensions/oracle/testing-oracle" - ] - } - ] - }, - { - "group": "Transaction Tutorials", - "pages": [ - "docs/sdk/v0.53/tutorials/transactions/building-a-transaction" - ] - } - ] - } - ] - }, - { - "version": "v0.50", - "tabs": [ - { - "tab": "Learn", - "groups": [ - { - "group": "Learn", - "pages": [ - "docs/sdk/v0.50/learn" - ] - }, - { - "group": "Introduction", - "pages": [ - "docs/sdk/v0.50/learn/intro/overview", - "docs/sdk/v0.50/learn/intro/why-app-specific", - "docs/sdk/v0.50/learn/intro/sdk-app-architecture", - "docs/sdk/v0.50/learn/intro/sdk-design" - ] - }, - { - "group": "Beginner", - "pages": [ - "docs/sdk/v0.50/learn/beginner/app-anatomy", - "docs/sdk/v0.50/learn/beginner/tx-lifecycle", - "docs/sdk/v0.50/learn/beginner/query-lifecycle", - "docs/sdk/v0.50/learn/beginner/accounts", - "docs/sdk/v0.50/learn/beginner/gas-fees" - ] - }, - { - "group": "Advanced", - "pages": [ - "docs/sdk/v0.50/learn/advanced/baseapp", - "docs/sdk/v0.50/learn/advanced/transactions", - "docs/sdk/v0.50/learn/advanced/context", - "docs/sdk/v0.50/learn/advanced/node", - "docs/sdk/v0.50/learn/advanced/store", - "docs/sdk/v0.50/learn/advanced/encoding", - "docs/sdk/v0.50/learn/advanced/grpc_rest", - "docs/sdk/v0.50/learn/advanced/cli", - "docs/sdk/v0.50/learn/advanced/events", - "docs/sdk/v0.50/learn/advanced/telemetry", - 
"docs/sdk/v0.50/learn/advanced/ocap", - "docs/sdk/v0.50/learn/advanced/runtx_middleware", - "docs/sdk/v0.50/learn/advanced/simulation", - "docs/sdk/v0.50/learn/advanced/proto-docs", - "docs/sdk/v0.50/learn/advanced/upgrade", - "docs/sdk/v0.50/learn/advanced/config", - "docs/sdk/v0.50/learn/advanced/autocli" - ] - } - ] - }, - { - "tab": "Build", - "groups": [ - { - "group": "Build", - "pages": [ - "docs/sdk/v0.50/build" - ] - }, - { - "group": "Building Apps", - "pages": [ - "docs/sdk/v0.50/build/building-apps/app-go", - "docs/sdk/v0.50/build/building-apps/runtime", - "docs/sdk/v0.50/build/building-apps/app-go-di", - "docs/sdk/v0.50/build/building-apps/app-mempool", - "docs/sdk/v0.50/build/building-apps/app-upgrade", - "docs/sdk/v0.50/build/building-apps/vote-extensions", - "docs/sdk/v0.50/build/building-apps/app-testnet" - ] - }, - { - "group": "Building Modules", - "pages": [ - "docs/sdk/v0.50/build/building-modules/module-manager", - "docs/sdk/v0.50/build/building-modules/messages-and-queries", - "docs/sdk/v0.50/build/building-modules/msg-services", - "docs/sdk/v0.50/build/building-modules/query-services", - "docs/sdk/v0.50/build/building-modules/protobuf-annotations", - "docs/sdk/v0.50/build/building-modules/beginblock-endblock", - "docs/sdk/v0.50/build/building-modules/keeper", - "docs/sdk/v0.50/build/building-modules/invariants", - "docs/sdk/v0.50/build/building-modules/genesis", - "docs/sdk/v0.50/build/building-modules/module-interfaces", - "docs/sdk/v0.50/build/building-modules/structure", - "docs/sdk/v0.50/build/building-modules/errors", - "docs/sdk/v0.50/build/building-modules/upgrade", - "docs/sdk/v0.50/build/building-modules/simulator", - "docs/sdk/v0.50/build/building-modules/depinject", - "docs/sdk/v0.50/build/building-modules/testing", - "docs/sdk/v0.50/build/building-modules/preblock" - ] - }, - { - "group": "ABCI", - "pages": [ - "docs/sdk/v0.50/build/abci/introduction", - "docs/sdk/v0.50/build/abci/prepare-proposal", - 
"docs/sdk/v0.50/build/abci/process-proposal", - "docs/sdk/v0.50/build/abci/vote-extensions", - "docs/sdk/v0.50/build/abci/checktx" - ] - }, - { - "group": "Modules", - "pages": [ - "docs/sdk/v0.50/build/modules", - { - "group": "x/auth", - "pages": [ - "docs/sdk/v0.50/build/modules/auth", - "docs/sdk/v0.50/build/modules/auth/vesting", - "docs/sdk/v0.50/build/modules/auth/tx" - ] - }, - "docs/sdk/v0.50/build/modules/authz", - "docs/sdk/v0.50/build/modules/bank", - "docs/sdk/v0.50/build/modules/consensus", - "docs/sdk/v0.50/build/modules/crisis", - "docs/sdk/v0.50/build/modules/distribution", - "docs/sdk/v0.50/build/modules/epochs", - "docs/sdk/v0.50/build/modules/evidence", - "docs/sdk/v0.50/build/modules/feegrant", - "docs/sdk/v0.50/build/modules/gov", - "docs/sdk/v0.50/build/modules/group", - "docs/sdk/v0.50/build/modules/mint", - "docs/sdk/v0.50/build/modules/nft", - "docs/sdk/v0.50/build/modules/params", - "docs/sdk/v0.50/build/modules/protocolpool", - "docs/sdk/v0.50/build/modules/slashing", - "docs/sdk/v0.50/build/modules/staking", - "docs/sdk/v0.50/build/modules/upgrade", - "docs/sdk/v0.50/build/modules/circuit", - "docs/sdk/v0.50/build/modules/genutil" - ] - }, - { - "group": "Migrations", - "pages": [ - "docs/sdk/v0.50/build/migrations/intro", - "docs/sdk/v0.50/build/migrations/upgrade-reference", - "docs/sdk/v0.50/build/migrations/upgrade-guide" - ] - }, - { - "group": "Packages", - "pages": [ - "docs/sdk/v0.50/build/packages", - "docs/sdk/v0.50/build/packages/depinject", - "docs/sdk/v0.50/build/packages/collections" - ] - }, - { - "group": "Tooling", - "pages": [ - "docs/sdk/v0.50/build/tooling", - "docs/sdk/v0.50/build/tooling/protobuf", - "docs/sdk/v0.50/build/tooling/cosmovisor", - "docs/sdk/v0.50/build/tooling/confix" - ] - }, - { - "group": "RFC", - "pages": [ - "docs/sdk/v0.50/build/rfc", - "docs/sdk/v0.50/build/rfc/PROCESS", - "docs/sdk/v0.50/build/rfc/rfc-001-tx-validation", - "docs/sdk/v0.50/build/rfc/rfc-template" - ] - }, - { - "group": 
"Specifications", - "pages": [ - "docs/sdk/v0.50/build/spec", - "docs/sdk/v0.50/build/spec/SPEC_MODULE", - "docs/sdk/v0.50/build/spec/SPEC_STANDARD", - { - "group": "Addresses spec", - "pages": [ - "docs/sdk/v0.50/build/spec/addresses", - "docs/sdk/v0.50/build/spec/addresses/bech32" - ] - }, - { - "group": "Store", - "pages": [ - "docs/sdk/v0.50/build/spec/store", - "docs/sdk/v0.50/build/spec/store/interblock-cache" - ] - } - ] - } - ] - }, - { - "tab": "User Guides", - "groups": [ - { - "group": "User Guides", - "pages": [ - "docs/sdk/v0.50/user" - ] - }, - { - "group": "Running a Node, API and CLI", - "pages": [ - "docs/sdk/v0.50/user/run-node/keyring", - "docs/sdk/v0.50/user/run-node/run-node", - "docs/sdk/v0.50/user/run-node/interact-node", - "docs/sdk/v0.50/user/run-node/txs", - "docs/sdk/v0.50/user/run-node/run-testnet", - "docs/sdk/v0.50/user/run-node/run-production" - ] - }, - { - "group": "User", - "pages": [ - "docs/sdk/v0.50/user" - ] - } - ] - }, - { - "tab": "Tutorials", - "groups": [ - { - "group": "Tutorials", - "pages": [ - "docs/sdk/v0.50/tutorials" - ] - }, - { - "group": "Vote Extensions Tutorials", - "pages": [ - { - "group": "Mitigating Auction Front-Running Tutorial", - "pages": [ - "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/getting-started", - "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning", - "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions", - "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions", - "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running" - ] - }, - { - "group": "Oracle Tutorial", - "pages": [ - "docs/sdk/v0.50/tutorials/vote-extensions/oracle/getting-started", - "docs/sdk/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle", - 
"docs/sdk/v0.50/tutorials/vote-extensions/oracle/implementing-vote-extensions", - "docs/sdk/v0.50/tutorials/vote-extensions/oracle/testing-oracle" + }, + { + "dropdown": "SDK", + "versions": [ + { + "version": "v0.53", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "Getting Started", + "pages": [ + { + "group": "Core Concepts", + "pages": [ + "docs/sdk/v0.53/documentation/core-concepts/overview", + "docs/sdk/v0.53/documentation/core-concepts/sdk-design", + "docs/sdk/v0.53/documentation/core-concepts/sdk-app-architecture", + "docs/sdk/v0.53/documentation/core-concepts/why-app-specific", + "docs/sdk/v0.53/documentation/core-concepts/ocap" + ] + } + ] + }, + { + "group": "Protocol & Architecture", + "pages": [ + { + "group": "Application Framework", + "pages": [ + "docs/sdk/v0.53/documentation/application-framework/app-anatomy", + "docs/sdk/v0.53/documentation/application-framework/baseapp", + "docs/sdk/v0.53/documentation/application-framework/app-go", + "docs/sdk/v0.53/documentation/application-framework/app-go-di", + "docs/sdk/v0.53/documentation/application-framework/runtime", + "docs/sdk/v0.53/documentation/application-framework/depinject", + "docs/sdk/v0.53/documentation/application-framework/context", + "docs/sdk/v0.53/documentation/application-framework/runtx_middleware", + "docs/sdk/v0.53/documentation/application-framework/app-mempool" + ] + }, + { + "group": "Module System", + "pages": [ + { + "group": "Standard Modules", + "pages": [ + "docs/sdk/v0.53/documentation/module-system/auth", + "docs/sdk/v0.53/documentation/module-system/authz", + "docs/sdk/v0.53/documentation/module-system/bank", + "docs/sdk/v0.53/documentation/module-system/circuit", + "docs/sdk/v0.53/documentation/module-system/consensus", + "docs/sdk/v0.53/documentation/module-system/crisis", + "docs/sdk/v0.53/documentation/module-system/distribution", + "docs/sdk/v0.53/documentation/module-system/evidence", + "docs/sdk/v0.53/documentation/module-system/feegrant", + 
"docs/sdk/v0.53/documentation/module-system/gov", + "docs/sdk/v0.53/documentation/module-system/group", + "docs/sdk/v0.53/documentation/module-system/mint", + "docs/sdk/v0.53/documentation/module-system/nft", + "docs/sdk/v0.53/documentation/module-system/params", + "docs/sdk/v0.53/documentation/module-system/slashing", + "docs/sdk/v0.53/documentation/module-system/staking", + "docs/sdk/v0.53/documentation/module-system/upgrade" + ] + }, + "docs/sdk/v0.53/documentation/module-system/intro", + "docs/sdk/v0.53/documentation/module-system/structure", + "docs/sdk/v0.53/documentation/module-system/module-manager", + "docs/sdk/v0.53/documentation/module-system/keeper", + "docs/sdk/v0.53/documentation/module-system/messages-and-queries", + "docs/sdk/v0.53/documentation/module-system/msg-services", + "docs/sdk/v0.53/documentation/module-system/query-services", + "docs/sdk/v0.53/documentation/module-system/module-interfaces", + "docs/sdk/v0.53/documentation/module-system/genesis", + "docs/sdk/v0.53/documentation/module-system/beginblock-endblock", + "docs/sdk/v0.53/documentation/module-system/preblock", + "docs/sdk/v0.53/documentation/module-system/invariants", + "docs/sdk/v0.53/documentation/module-system/depinject" + ] + }, + { + "group": "Consensus & Block Production", + "pages": [ + "docs/sdk/v0.53/documentation/consensus-block-production/introduction", + "docs/sdk/v0.53/documentation/consensus-block-production/checktx", + "docs/sdk/v0.53/documentation/consensus-block-production/prepare-proposal", + "docs/sdk/v0.53/documentation/consensus-block-production/process-proposal", + "docs/sdk/v0.53/documentation/consensus-block-production/vote-extensions" + ] + }, + { + "group": "State & Storage", + "pages": [ + "docs/sdk/v0.53/documentation/state-storage/store", + "docs/sdk/v0.53/documentation/state-storage/collections", + "docs/sdk/v0.53/documentation/state-storage/interblock-cache" + ] + }, + { + "group": "Protocol Development", + "pages": [ + 
"docs/sdk/v0.53/documentation/protocol-development/encoding", + "docs/sdk/v0.53/documentation/protocol-development/transactions", + "docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle", + "docs/sdk/v0.53/documentation/protocol-development/accounts", + "docs/sdk/v0.53/documentation/protocol-development/gas-fees", + "docs/sdk/v0.53/documentation/protocol-development/protobuf", + "docs/sdk/v0.53/documentation/protocol-development/protobuf-annotations", + "docs/sdk/v0.53/documentation/protocol-development/errors", + "docs/sdk/v0.53/documentation/protocol-development/bech32", + "docs/sdk/v0.53/documentation/protocol-development/ics-030-signed-messages", + "docs/sdk/v0.53/documentation/protocol-development/SPEC_MODULE", + "docs/sdk/v0.53/documentation/protocol-development/SPEC_STANDARD" + ] + } + ] + }, + { + "group": "Infrastructure & Operations", + "pages": [ + { + "group": "Node Operations", + "pages": [ + "docs/sdk/v0.53/documentation/operations/intro", + "docs/sdk/v0.53/documentation/operations/node", + "docs/sdk/v0.53/documentation/operations/run-production", + "docs/sdk/v0.53/documentation/operations/keyring", + "docs/sdk/v0.53/documentation/operations/config" + ] + }, + { + "group": "Tools & Utilities", + "pages": [ + "docs/sdk/v0.53/documentation/operations/cosmovisor", + "docs/sdk/v0.53/documentation/operations/confix" + ] + }, + { + "group": "Testing & Simulation", + "pages": [ + "docs/sdk/v0.53/documentation/operations/app-testnet", + "docs/sdk/v0.53/documentation/operations/simulation", + "docs/sdk/v0.53/documentation/operations/simulator", + "docs/sdk/v0.53/documentation/operations/testing" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/sdk/v0.53/documentation/operations/app-upgrade", + "docs/sdk/v0.53/documentation/operations/upgrade-guide", + "docs/sdk/v0.53/documentation/operations/upgrade-reference" + ] + } + ] + }, + { + "group": "Historical Reference", + "pages": [ + "docs/sdk/v0.53/documentation/legacy/adr-overview" + ] + }, + { 
+ "group": "Security", + "pages": [ + "docs/sdk/v0.53/security-audit" + ] + } + ] + }, + { + "tab": "API Reference", + "groups": [ + { + "group": "Client Tools", + "icon": "terminal", + "pages": [ + "docs/sdk/v0.53/api-reference/client-tools/cli", + "docs/sdk/v0.53/api-reference/client-tools/autocli", + "docs/sdk/v0.53/api-reference/client-tools/hubl" + ] + }, + { + "group": "Service APIs", + "pages": [ + "docs/sdk/v0.53/api-reference/service-apis/grpc_rest", + "docs/sdk/v0.53/api-reference/service-apis/query-lifecycle", + "docs/sdk/v0.53/api-reference/service-apis/proto-docs" + ] + }, + { + "group": "Events & Streaming", + "pages": [ + "docs/sdk/v0.53/api-reference/events-streaming/events" + ] + }, + { + "group": "Telemetry & Metrics", + "pages": [ + "docs/sdk/v0.53/api-reference/telemetry-metrics/telemetry" + ] + } + ] + }, + { + "tab": "Tutorials", + "groups": [ + { + "group": "Transactions", + "pages": [ + "docs/sdk/v0.53/tutorials/transactions/building-a-transaction" + ] + }, + { + "group": "Vote Extensions", + "pages": [ + { + "group": "Auction Frontrunning", + "pages": [ + "docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/getting-started", + "docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning", + "docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running", + "docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions" + ] + }, + { + "group": "Oracle", + "pages": [ + "docs/sdk/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle", + "docs/sdk/v0.53/tutorials/vote-extensions/oracle/getting-started", + "docs/sdk/v0.53/tutorials/vote-extensions/oracle/implementing-vote-extensions", + "docs/sdk/v0.53/tutorials/vote-extensions/oracle/testing-oracle" + ] + } + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/sdk/v0.53/changelog/release-notes" + ] + } ] - } - ] - }, - { - "group": "Transaction Tutorials", - 
"pages": [ - "docs/sdk/v0.50/tutorials/transactions/building-a-transaction" - ] - } - ] - } - ] - }, - { - "version": "v0.47", - "tabs": [ - { - "tab": "Learn", - "groups": [ - { - "group": "Learn", - "pages": [ - "docs/sdk/v0.47/learn" - ] - }, - { - "group": "Introduction", - "pages": [ - "docs/sdk/v0.47/learn/intro/overview", - "docs/sdk/v0.47/learn/intro/why-app-specific", - "docs/sdk/v0.47/learn/intro/sdk-app-architecture", - "docs/sdk/v0.47/learn/intro/sdk-design" - ] - }, - { - "group": "Beginner", - "pages": [ - "docs/sdk/v0.47/learn/beginner/app-anatomy", - "docs/sdk/v0.47/learn/beginner/tx-lifecycle", - "docs/sdk/v0.47/learn/beginner/query-lifecycle", - "docs/sdk/v0.47/learn/beginner/accounts", - "docs/sdk/v0.47/learn/beginner/gas-fees" - ] - }, - { - "group": "Advanced", - "pages": [ - "docs/sdk/v0.47/learn/advanced/baseapp", - "docs/sdk/v0.47/learn/advanced/transactions", - "docs/sdk/v0.47/learn/advanced/context", - "docs/sdk/v0.47/learn/advanced/node", - "docs/sdk/v0.47/learn/advanced/store", - "docs/sdk/v0.47/learn/advanced/encoding", - "docs/sdk/v0.47/learn/advanced/grpc_rest", - "docs/sdk/v0.47/learn/advanced/cli", - "docs/sdk/v0.47/learn/advanced/events", - "docs/sdk/v0.47/learn/advanced/telemetry", - "docs/sdk/v0.47/learn/advanced/ocap", - "docs/sdk/v0.47/learn/advanced/runtx_middleware", - "docs/sdk/v0.47/learn/advanced/simulation", - "docs/sdk/v0.47/learn/advanced/proto-docs", - "docs/sdk/v0.47/learn/advanced/upgrade", - "docs/sdk/v0.47/learn/advanced/config", - "docs/sdk/v0.47/learn/advanced/autocli" - ] - } - ] - }, - { - "tab": "Build", - "groups": [ - { - "group": "Build", - "pages": [ - "docs/sdk/v0.47/build" - ] - }, - { - "group": "Building Apps", - "pages": [ - "docs/sdk/v0.47/build/building-apps/app-go", - "docs/sdk/v0.47/build/building-apps/runtime", - "docs/sdk/v0.47/build/building-apps/app-go-di", - "docs/sdk/v0.47/build/building-apps/app-mempool", - "docs/sdk/v0.47/build/building-apps/app-upgrade", - 
"docs/sdk/v0.47/build/building-apps/vote-extensions", - "docs/sdk/v0.47/build/building-apps/app-testnet" - ] - }, - { - "group": "Building Modules", - "pages": [ - "docs/sdk/v0.47/build/building-modules/module-manager", - "docs/sdk/v0.47/build/building-modules/messages-and-queries", - "docs/sdk/v0.47/build/building-modules/msg-services", - "docs/sdk/v0.47/build/building-modules/query-services", - "docs/sdk/v0.47/build/building-modules/protobuf-annotations", - "docs/sdk/v0.47/build/building-modules/beginblock-endblock", - "docs/sdk/v0.47/build/building-modules/keeper", - "docs/sdk/v0.47/build/building-modules/invariants", - "docs/sdk/v0.47/build/building-modules/genesis", - "docs/sdk/v0.47/build/building-modules/module-interfaces", - "docs/sdk/v0.47/build/building-modules/structure", - "docs/sdk/v0.47/build/building-modules/errors", - "docs/sdk/v0.47/build/building-modules/upgrade", - "docs/sdk/v0.47/build/building-modules/simulator", - "docs/sdk/v0.47/build/building-modules/depinject", - "docs/sdk/v0.47/build/building-modules/testing", - "docs/sdk/v0.47/build/building-modules/preblock" - ] - }, - { - "group": "ABCI", - "pages": [ - "docs/sdk/v0.47/build/abci/introduction", - "docs/sdk/v0.47/build/abci/prepare-proposal", - "docs/sdk/v0.47/build/abci/process-proposal", - "docs/sdk/v0.47/build/abci/vote-extensions", - "docs/sdk/v0.47/build/abci/checktx" - ] - }, - { - "group": "Modules", - "pages": [ - "docs/sdk/v0.47/build/modules", - { - "group": "x/auth", - "pages": [ - "docs/sdk/v0.47/build/modules/auth", - "docs/sdk/v0.47/build/modules/auth/vesting", - "docs/sdk/v0.47/build/modules/auth/tx" + }, + { + "version": "v0.50", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "Application Framework", + "pages": [ + "docs/sdk/v0.50/documentation/application-framework/app-go", + "docs/sdk/v0.50/documentation/application-framework/app-go-v2", + "docs/sdk/v0.50/documentation/application-framework/app-mempool", + 
"docs/sdk/v0.50/documentation/application-framework/app-testnet", + "docs/sdk/v0.50/documentation/application-framework/app-upgrade", + "docs/sdk/v0.50/documentation/application-framework/vote-extensions" + ] + }, + { + "group": "Module System", + "pages": [ + { + "group": "Standard Modules", + "pages": [ + "docs/sdk/v0.50/documentation/module-system/modules/auth/README", + "docs/sdk/v0.50/documentation/module-system/modules/authz/README", + "docs/sdk/v0.50/documentation/module-system/modules/bank/README", + "docs/sdk/v0.50/documentation/module-system/modules/consensus/README", + "docs/sdk/v0.50/documentation/module-system/modules/crisis/README", + "docs/sdk/v0.50/documentation/module-system/modules/distribution/README", + "docs/sdk/v0.50/documentation/module-system/modules/epochs/README", + "docs/sdk/v0.50/documentation/module-system/modules/evidence/README", + "docs/sdk/v0.50/documentation/module-system/modules/feegrant/README", + "docs/sdk/v0.50/documentation/module-system/modules/gov/README", + "docs/sdk/v0.50/documentation/module-system/modules/group/README", + "docs/sdk/v0.50/documentation/module-system/modules/mint/README", + "docs/sdk/v0.50/documentation/module-system/modules/nft/README", + "docs/sdk/v0.50/documentation/module-system/modules/params/README", + "docs/sdk/v0.50/documentation/module-system/modules/protocolpool/README", + "docs/sdk/v0.50/documentation/module-system/modules/slashing/README", + "docs/sdk/v0.50/documentation/module-system/modules/staking/README", + "docs/sdk/v0.50/documentation/module-system/modules/upgrade/README", + "docs/sdk/v0.50/documentation/module-system/modules/circuit/README", + "docs/sdk/v0.50/documentation/module-system/modules/genutil/README", + "docs/sdk/v0.50/documentation/module-system/modules/auth/tx", + "docs/sdk/v0.50/documentation/module-system/modules/auth/vesting" + ] + }, + "docs/sdk/v0.50/documentation/module-system/intro", + "docs/sdk/v0.50/documentation/module-system/beginblock-endblock", + 
"docs/sdk/v0.50/documentation/module-system/depinject", + "docs/sdk/v0.50/documentation/module-system/errors", + "docs/sdk/v0.50/documentation/module-system/genesis", + "docs/sdk/v0.50/documentation/module-system/invariants", + "docs/sdk/v0.50/documentation/module-system/keeper", + "docs/sdk/v0.50/documentation/module-system/messages-and-queries", + "docs/sdk/v0.50/documentation/module-system/module-interfaces", + "docs/sdk/v0.50/documentation/module-system/module-manager", + "docs/sdk/v0.50/documentation/module-system/msg-services", + "docs/sdk/v0.50/documentation/module-system/preblock", + "docs/sdk/v0.50/documentation/module-system/protobuf-annotations", + "docs/sdk/v0.50/documentation/module-system/query-services", + "docs/sdk/v0.50/documentation/module-system/simulator", + "docs/sdk/v0.50/documentation/module-system/structure", + "docs/sdk/v0.50/documentation/module-system/testing", + "docs/sdk/v0.50/documentation/module-system/upgrade" + ] + }, + { + "group": "Consensus & Block Production", + "pages": [ + "docs/sdk/v0.50/documentation/consensus-block-production/introduction", + "docs/sdk/v0.50/documentation/consensus-block-production/prepare-proposal", + "docs/sdk/v0.50/documentation/consensus-block-production/process-proposal", + "docs/sdk/v0.50/documentation/consensus-block-production/vote-extensions" + ] + }, + { + "group": "Protocol Development", + "pages": [ + "docs/sdk/v0.50/documentation/protocol-development/README", + "docs/sdk/v0.50/documentation/protocol-development/SPEC_MODULE", + "docs/sdk/v0.50/documentation/protocol-development/SPEC_STANDARD", + "docs/sdk/v0.50/documentation/protocol-development/_ics/README", + "docs/sdk/v0.50/documentation/protocol-development/_ics/ics-030-signed-messages", + "docs/sdk/v0.50/documentation/protocol-development/addresses/README", + "docs/sdk/v0.50/documentation/protocol-development/addresses/bech32", + "docs/sdk/v0.50/documentation/protocol-development/store/README", + 
"docs/sdk/v0.50/documentation/protocol-development/store/interblock-cache" + ] + }, + { + "group": "Operations", + "pages": [ + "docs/sdk/v0.50/documentation/operations/intro", + "docs/sdk/v0.50/documentation/operations/upgrade-guide", + "docs/sdk/v0.50/documentation/operations/upgrade-reference", + "docs/sdk/v0.50/documentation/operations/upgrading", + { + "group": "Packages", + "pages": [ + "docs/sdk/v0.50/documentation/operations/packages/README", + "docs/sdk/v0.50/documentation/operations/packages/collections", + "docs/sdk/v0.50/documentation/operations/packages/depinject" + ] + }, + { + "group": "Tooling", + "pages": [ + "docs/sdk/v0.50/documentation/operations/tooling/README", + "docs/sdk/v0.50/documentation/operations/tooling/confix", + "docs/sdk/v0.50/documentation/operations/tooling/cosmovisor", + "docs/sdk/v0.50/documentation/operations/tooling/hubl", + "docs/sdk/v0.50/documentation/operations/tooling/protobuf" + ] + } + ] + } + ] + }, + { + "tab": "Tutorials", + "groups": [ + { + "group": "Transactions", + "pages": [ + "docs/sdk/v0.50/tutorials/transactions/building-a-transaction" + ] + }, + { + "group": "Vote Extensions", + "pages": [ + { + "group": "Auction Frontrunning", + "pages": [ + "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/getting-started", + "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning", + "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running", + "docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions" + ] + }, + { + "group": "Oracle", + "pages": [ + "docs/sdk/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle", + "docs/sdk/v0.50/tutorials/vote-extensions/oracle/getting-started", + "docs/sdk/v0.50/tutorials/vote-extensions/oracle/implementing-vote-extensions", + "docs/sdk/v0.50/tutorials/vote-extensions/oracle/testing-oracle" + ] + } + ] + } + ] + }, + { + "tab": "Release Notes", + 
"pages": [ + "docs/sdk/v0.50/changelog/release-notes" + ] + } ] - }, - "docs/sdk/v0.47/build/modules/authz", - "docs/sdk/v0.47/build/modules/bank", - "docs/sdk/v0.47/build/modules/consensus", - "docs/sdk/v0.47/build/modules/crisis", - "docs/sdk/v0.47/build/modules/distribution", - "docs/sdk/v0.47/build/modules/epochs", - "docs/sdk/v0.47/build/modules/evidence", - "docs/sdk/v0.47/build/modules/feegrant", - "docs/sdk/v0.47/build/modules/gov", - "docs/sdk/v0.47/build/modules/group", - "docs/sdk/v0.47/build/modules/mint", - "docs/sdk/v0.47/build/modules/nft", - "docs/sdk/v0.47/build/modules/params", - "docs/sdk/v0.47/build/modules/protocolpool", - "docs/sdk/v0.47/build/modules/slashing", - "docs/sdk/v0.47/build/modules/staking", - "docs/sdk/v0.47/build/modules/upgrade", - "docs/sdk/v0.47/build/modules/circuit", - "docs/sdk/v0.47/build/modules/genutil" - ] - }, - { - "group": "Migrations", - "pages": [ - "docs/sdk/v0.47/build/migrations/intro", - "docs/sdk/v0.47/build/migrations/upgrade-reference", - "docs/sdk/v0.47/build/migrations/upgrade-guide" - ] - }, - { - "group": "Packages", - "pages": [ - "docs/sdk/v0.47/build/packages", - "docs/sdk/v0.47/build/packages/depinject", - "docs/sdk/v0.47/build/packages/collections" - ] - }, - { - "group": "Tooling", - "pages": [ - "docs/sdk/v0.47/build/tooling", - "docs/sdk/v0.47/build/tooling/protobuf", - "docs/sdk/v0.47/build/tooling/cosmovisor", - "docs/sdk/v0.47/build/tooling/confix" - ] - }, - { - "group": "RFC", - "pages": [ - "docs/sdk/v0.47/build/rfc", - "docs/sdk/v0.47/build/rfc/PROCESS", - "docs/sdk/v0.47/build/rfc/rfc-001-tx-validation", - "docs/sdk/v0.47/build/rfc/rfc-template" - ] - }, - { - "group": "Specifications", - "pages": [ - "docs/sdk/v0.47/build/spec", - "docs/sdk/v0.47/build/spec/SPEC_MODULE", - "docs/sdk/v0.47/build/spec/SPEC_STANDARD", - { - "group": "Addresses spec", - "pages": [ - "docs/sdk/v0.47/build/spec/addresses", - "docs/sdk/v0.47/build/spec/addresses/bech32" + }, + { + "version": "v0.47", + "tabs": 
[ + { + "tab": "Documentation", + "groups": [ + { + "group": "Validate", + "pages": [ + "docs/sdk/v0.47/validate/run-testnet" + ] + }, + { + "group": "Learn", + "pages": [ + "docs/sdk/v0.47/learn/glossary", + "docs/sdk/v0.47/learn/learn", + "docs/sdk/v0.47/learn/advanced/cli", + "docs/sdk/v0.47/learn/beginner/accounts", + "docs/sdk/v0.47/learn/intro/overview", + "docs/sdk/v0.47/learn/advanced/config", + "docs/sdk/v0.47/learn/beginner/gas-fees", + "docs/sdk/v0.47/learn/intro/sdk-app-architecture", + "docs/sdk/v0.47/learn/advanced/context", + "docs/sdk/v0.47/learn/beginner/overview-app", + "docs/sdk/v0.47/learn/intro/sdk-design", + "docs/sdk/v0.47/learn/advanced/encoding", + "docs/sdk/v0.47/learn/beginner/query-lifecycle", + "docs/sdk/v0.47/learn/intro/why-app-specific", + "docs/sdk/v0.47/learn/advanced/events", + "docs/sdk/v0.47/learn/beginner/tx-lifecycle", + "docs/sdk/v0.47/learn/advanced/grpc_rest", + "docs/sdk/v0.47/learn/advanced/interblock-cache", + "docs/sdk/v0.47/learn/advanced/node", + "docs/sdk/v0.47/learn/advanced/ocap", + "docs/sdk/v0.47/learn/advanced/proto-docs", + "docs/sdk/v0.47/learn/advanced/runtx_middleware", + "docs/sdk/v0.47/learn/advanced/simulation", + "docs/sdk/v0.47/learn/advanced/store", + "docs/sdk/v0.47/learn/advanced/telemetry", + "docs/sdk/v0.47/learn/advanced/upgrade", + "docs/sdk/v0.47/learn/advanced/baseapp", + "docs/sdk/v0.47/learn/advanced/transactions" + ] + }, + { + "group": "Build", + "pages": [ + "docs/sdk/v0.47/build/build", + "docs/sdk/v0.47/documentation/application-framework/app-go-v2", + "docs/sdk/v0.47/documentation/module-system/beginblock-endblock", + "docs/sdk/v0.47/documentation/operations/intro", + "docs/sdk/v0.47/documentation/operations/packages/depinject", + "docs/sdk/v0.47/documentation/operations/tooling/autocli", + "docs/sdk/v0.47/documentation/module-system/modules/README", + "docs/sdk/v0.47/documentation/protocol-development/SPEC_MODULE", + "docs/sdk/v0.47/documentation/application-framework/app-go", + 
"docs/sdk/v0.47/documentation/module-system/depinject", + "docs/sdk/v0.47/documentation/operations/tooling/confix", + "docs/sdk/v0.47/documentation/protocol-development/SPEC_STANDARD", + "docs/sdk/v0.47/documentation/application-framework/app-mempool", + "docs/sdk/v0.47/documentation/module-system/errors", + "docs/sdk/v0.47/documentation/operations/tooling/cosmovisor", + "docs/sdk/v0.47/documentation/application-framework/app-upgrade", + "docs/sdk/v0.47/documentation/module-system/genesis", + "docs/sdk/v0.47/documentation/operations/tooling/depinject", + "docs/sdk/v0.47/documentation/module-system/intro", + "docs/sdk/v0.47/documentation/operations/tooling/hubl", + "docs/sdk/v0.47/documentation/module-system/invariants", + "docs/sdk/v0.47/documentation/operations/tooling/protobuf", + "docs/sdk/v0.47/documentation/module-system/keeper", + "docs/sdk/v0.47/documentation/module-system/messages-and-queries", + "docs/sdk/v0.47/documentation/module-system/module-interfaces", + "docs/sdk/v0.47/documentation/module-system/module-manager", + "docs/sdk/v0.47/documentation/module-system/msg-services", + "docs/sdk/v0.47/documentation/module-system/query-services", + "docs/sdk/v0.47/documentation/module-system/simulator", + "docs/sdk/v0.47/documentation/module-system/structure", + "docs/sdk/v0.47/documentation/module-system/testing", + "docs/sdk/v0.47/documentation/module-system/upgrade", + "docs/sdk/v0.47/documentation/operations/upgrading", + "docs/sdk/v0.47/documentation/operations/packages/collections", + "docs/sdk/v0.47/documentation/operations/tooling/README", + "docs/sdk/v0.47/documentation/operations/packages/orm", + "docs/sdk/v0.47/documentation/operations/packages/README", + "docs/sdk/v0.47/documentation/module-system/modules/auth/README", + "docs/sdk/v0.47/documentation/module-system/modules/authz/README", + "docs/sdk/v0.47/documentation/module-system/modules/bank/README", + "docs/sdk/v0.47/documentation/module-system/modules/consensus/README", + 
"docs/sdk/v0.47/documentation/module-system/modules/crisis/README", + "docs/sdk/v0.47/documentation/module-system/modules/distribution/README", + "docs/sdk/v0.47/documentation/module-system/modules/evidence/README", + "docs/sdk/v0.47/documentation/module-system/modules/feegrant/README", + "docs/sdk/v0.47/documentation/module-system/modules/gov/README", + "docs/sdk/v0.47/documentation/module-system/modules/group/README", + "docs/sdk/v0.47/documentation/module-system/modules/mint/README", + "docs/sdk/v0.47/documentation/module-system/modules/nft/README", + "docs/sdk/v0.47/documentation/module-system/modules/params/README", + "docs/sdk/v0.47/documentation/module-system/modules/slashing/README", + "docs/sdk/v0.47/documentation/module-system/modules/staking/README", + "docs/sdk/v0.47/documentation/module-system/modules/upgrade/README", + "docs/sdk/v0.47/documentation/module-system/modules/accounts/accounts", + "docs/sdk/v0.47/documentation/module-system/modules/circuit/README", + "docs/sdk/v0.47/documentation/module-system/modules/genutil/README", + "docs/sdk/v0.47/documentation/protocol-development/addresses/bech32", + "docs/sdk/v0.47/documentation/protocol-development/ics/ics-030-signed-messages", + "docs/sdk/v0.47/documentation/module-system/modules/auth/tx", + "docs/sdk/v0.47/documentation/protocol-development/addresses/README", + "docs/sdk/v0.47/documentation/protocol-development/ics/README", + "docs/sdk/v0.47/documentation/module-system/modules/auth/vesting" + ] + }, + { + "group": "User", + "pages": [ + "docs/sdk/v0.47/user/user", + "docs/sdk/v0.47/user/run-node/interact-node", + "docs/sdk/v0.47/user/run-node/keyring", + "docs/sdk/v0.47/user/run-node/multisig-guide", + "docs/sdk/v0.47/user/run-node/rosetta", + "docs/sdk/v0.47/user/run-node/run-node", + "docs/sdk/v0.47/user/run-node/run-production", + "docs/sdk/v0.47/user/run-node/run-testnet", + "docs/sdk/v0.47/user/run-node/txs" + ] + } + ] + }, + { + "tab": "Tutorials", + "groups": [ + { + "group": 
"Transactions", + "pages": [ + "docs/sdk/next/tutorials/transactions/building-a-transaction" + ] + }, + { + "group": "Vote Extensions", + "pages": [ + { + "group": "Auction Front-running", + "pages": [ + "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning", + "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/getting-started", + "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions", + "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running" + ] + }, + { + "group": "Oracle", + "pages": [ + "docs/sdk/next/tutorials/vote-extensions/oracle/what-is-an-oracle", + "docs/sdk/next/tutorials/vote-extensions/oracle/getting-started", + "docs/sdk/next/tutorials/vote-extensions/oracle/implementing-vote-extensions", + "docs/sdk/next/tutorials/vote-extensions/oracle/testing-oracle" + ] + } + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/sdk/v0.47/changelog/release-notes" + ] + } ] - }, - { - "group": "Store", - "pages": [ - "docs/sdk/v0.47/build/spec/store", - "docs/sdk/v0.47/build/spec/store/interblock-cache" + }, + { + "version": "next", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "Getting Started", + "pages": [ + { + "group": "Core Concepts", + "pages": [ + "docs/sdk/next/documentation/core-concepts/overview", + "docs/sdk/next/documentation/core-concepts/sdk-design", + "docs/sdk/next/documentation/core-concepts/sdk-app-architecture", + "docs/sdk/next/documentation/core-concepts/why-app-specific", + "docs/sdk/next/documentation/core-concepts/ocap" + ] + } + ] + }, + { + "group": "Building Applications", + "pages": [ + { + "group": "Application Framework", + "pages": [ + "docs/sdk/next/documentation/application-framework/app-anatomy", + "docs/sdk/next/documentation/application-framework/baseapp", + "docs/sdk/next/documentation/application-framework/app-go", + 
"docs/sdk/next/documentation/application-framework/app-go-di", + "docs/sdk/next/documentation/application-framework/runtime", + "docs/sdk/next/documentation/application-framework/depinject", + "docs/sdk/next/documentation/application-framework/context", + "docs/sdk/next/documentation/application-framework/runtx_middleware", + "docs/sdk/next/documentation/application-framework/app-mempool", + "docs/sdk/next/documentation/application-framework/vote-extensions" + ] + }, + { + "group": "Module System", + "pages": [ + "docs/sdk/next/documentation/module-system/intro", + "docs/sdk/next/documentation/module-system/structure", + "docs/sdk/next/documentation/module-system/module-manager", + "docs/sdk/next/documentation/module-system/keeper", + "docs/sdk/next/documentation/module-system/messages-and-queries", + "docs/sdk/next/documentation/module-system/msg-services", + "docs/sdk/next/documentation/module-system/query-services", + "docs/sdk/next/documentation/module-system/module-interfaces", + "docs/sdk/next/documentation/module-system/genesis", + "docs/sdk/next/documentation/module-system/beginblock-endblock", + "docs/sdk/next/documentation/module-system/preblock", + "docs/sdk/next/documentation/module-system/invariants", + "docs/sdk/next/documentation/module-system/depinject" + ] + }, + { + "group": "Standard Modules", + "pages": [ + "docs/sdk/next/documentation/module-system/auth", + "docs/sdk/next/documentation/module-system/tx", + "docs/sdk/next/documentation/module-system/vesting", + "docs/sdk/next/documentation/module-system/authz", + "docs/sdk/next/documentation/module-system/bank", + "docs/sdk/next/documentation/module-system/circuit", + "docs/sdk/next/documentation/module-system/consensus", + "docs/sdk/next/documentation/module-system/crisis", + "docs/sdk/next/documentation/module-system/distribution", + "docs/sdk/next/documentation/module-system/epochs", + "docs/sdk/next/documentation/module-system/evidence", + 
"docs/sdk/next/documentation/module-system/feegrant", + "docs/sdk/next/documentation/module-system/genutil", + "docs/sdk/next/documentation/module-system/gov", + "docs/sdk/next/documentation/module-system/group", + "docs/sdk/next/documentation/module-system/mint", + "docs/sdk/next/documentation/module-system/nft", + "docs/sdk/next/documentation/module-system/params", + "docs/sdk/next/documentation/module-system/protocolpool", + "docs/sdk/next/documentation/module-system/slashing", + "docs/sdk/next/documentation/module-system/staking", + "docs/sdk/next/documentation/module-system/upgrade" + ] + } + ] + }, + { + "group": "Protocol & Architecture", + "pages": [ + { + "group": "Consensus & Block Production", + "pages": [ + "docs/sdk/next/documentation/consensus-block-production/introduction", + "docs/sdk/next/documentation/consensus-block-production/checktx", + "docs/sdk/next/documentation/consensus-block-production/prepare-proposal", + "docs/sdk/next/documentation/consensus-block-production/process-proposal", + "docs/sdk/next/documentation/consensus-block-production/vote-extensions" + ] + }, + { + "group": "State & Storage", + "pages": [ + "docs/sdk/next/documentation/state-storage/store", + "docs/sdk/next/documentation/state-storage/collections", + "docs/sdk/next/documentation/state-storage/interblock-cache" + ] + }, + { + "group": "Protocol Development", + "pages": [ + "docs/sdk/next/documentation/protocol-development/encoding", + "docs/sdk/next/documentation/protocol-development/transactions", + "docs/sdk/next/documentation/protocol-development/tx-lifecycle", + "docs/sdk/next/documentation/protocol-development/accounts", + "docs/sdk/next/documentation/protocol-development/gas-fees", + "docs/sdk/next/documentation/protocol-development/protobuf", + "docs/sdk/next/documentation/protocol-development/protobuf-annotations", + "docs/sdk/next/documentation/protocol-development/errors", + "docs/sdk/next/documentation/protocol-development/bech32", + 
"docs/sdk/next/documentation/protocol-development/ics-030-signed-messages", + "docs/sdk/next/documentation/protocol-development/ics-overview", + "docs/sdk/next/documentation/protocol-development/spec-overview", + "docs/sdk/next/documentation/protocol-development/SPEC_MODULE", + "docs/sdk/next/documentation/protocol-development/SPEC_STANDARD" + ] + } + ] + }, + { + "group": "Infrastructure & Operations", + "pages": [ + { + "group": "Node Operations", + "pages": [ + "docs/sdk/next/documentation/operations/intro", + "docs/sdk/next/documentation/operations/node", + "docs/sdk/next/documentation/operations/run-node", + "docs/sdk/next/documentation/operations/run-production", + "docs/sdk/next/documentation/operations/run-testnet", + "docs/sdk/next/documentation/operations/interact-node", + "docs/sdk/next/documentation/operations/keyring", + "docs/sdk/next/documentation/operations/config", + "docs/sdk/next/documentation/operations/txs", + "docs/sdk/next/documentation/operations/rosetta" + ] + }, + { + "group": "Tools & Utilities", + "pages": [ + "docs/sdk/next/documentation/operations/cosmovisor", + "docs/sdk/next/documentation/operations/confix" + ] + }, + { + "group": "Testing & Simulation", + "pages": [ + "docs/sdk/next/documentation/operations/app-testnet", + "docs/sdk/next/documentation/operations/simulation", + "docs/sdk/next/documentation/operations/simulator", + "docs/sdk/next/documentation/operations/testing" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/sdk/next/documentation/operations/app-upgrade", + "docs/sdk/next/documentation/operations/upgrading", + "docs/sdk/next/documentation/operations/upgrade-guide", + "docs/sdk/next/documentation/operations/upgrade-reference", + "docs/sdk/next/documentation/operations/upgrade-advanced" + ] + } + ] + }, + { + "group": "Security", + "pages": [ + ] + } + ] + }, + { + "tab": "API Reference", + "groups": [ + { + "group": "Client Tools", + "icon": "terminal", + "pages": [ + 
"docs/sdk/next/api-reference/client-tools/cli", + "docs/sdk/next/api-reference/client-tools/autocli", + "docs/sdk/next/api-reference/client-tools/hubl" + ] + }, + { + "group": "Service APIs", + "icon": "plug", + "pages": [ + "docs/sdk/next/api-reference/service-apis/grpc_rest", + "docs/sdk/next/api-reference/service-apis/query-lifecycle", + "docs/sdk/next/api-reference/service-apis/proto-docs" + ] + }, + { + "group": "Events & Streaming", + "icon": "broadcast", + "pages": [ + "docs/sdk/next/api-reference/events-streaming/events" + ] + }, + { + "group": "Telemetry & Metrics", + "icon": "chart-line", + "pages": [ + "docs/sdk/next/api-reference/telemetry-metrics/telemetry" + ] + } + ] + }, + { + "tab": "Tutorials", + "groups": [ + { + "group": "Transactions", + "pages": [ + "docs/sdk/next/tutorials/transactions/building-a-transaction" + ] + }, + { + "group": "Vote Extensions", + "pages": [ + { + "group": "Auction Front-running", + "pages": [ + "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning", + "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/getting-started", + "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions", + "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running" + ] + }, + { + "group": "Oracle", + "pages": [ + "docs/sdk/next/tutorials/vote-extensions/oracle/what-is-an-oracle", + "docs/sdk/next/tutorials/vote-extensions/oracle/getting-started", + "docs/sdk/next/tutorials/vote-extensions/oracle/implementing-vote-extensions", + "docs/sdk/next/tutorials/vote-extensions/oracle/testing-oracle" + ] + } + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/sdk/next/changelog/release-notes" + ] + } ] - } - ] - } - ] - }, - { - "tab": "User Guides", - "groups": [ - { - "group": "User Guides", - "pages": [ - "docs/sdk/v0.47/user" - ] - }, - { - "group": "Running a Node, API and CLI", - "pages": [ - 
"docs/sdk/v0.47/user/run-node/keyring", - "docs/sdk/v0.47/user/run-node/run-node", - "docs/sdk/v0.47/user/run-node/interact-node", - "docs/sdk/v0.47/user/run-node/txs", - "docs/sdk/v0.47/user/run-node/run-testnet", - "docs/sdk/v0.47/user/run-node/run-production" - ] - }, - { - "group": "User", - "pages": [ - "docs/sdk/v0.47/user" - ] - } + } ] - }, - { - "tab": "Tutorials", - "groups": [ - { - "group": "Tutorials", - "pages": [ - "docs/sdk/v0.47/tutorials" - ] - }, - { - "group": "Vote Extensions Tutorials", - "pages": [ - { - "group": "Mitigating Auction Front-Running Tutorial", - "pages": [ - "docs/sdk/v0.47/tutorials/vote-extensions/auction-frontrunning/getting-started", - "docs/sdk/v0.47/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning", - "docs/sdk/v0.47/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions", - "docs/sdk/v0.47/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions", - "docs/sdk/v0.47/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running" + }, + { + "dropdown": "IBC", + "versions": [ + { + "version": "v10.1.x", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "IBC-Go", + "pages": [ + "docs/ibc/v10.1.x/intro" + ] + }, + { + "group": "IBC Core", + "pages": [ + "docs/ibc/v10.1.x/ibc/overview", + { + "group": "Core Concepts", + "pages": [ + "docs/ibc/v10.1.x/ibc/best-practices", + "docs/ibc/v10.1.x/ibc/integration", + "docs/ibc/v10.1.x/ibc/permissioning", + "docs/ibc/v10.1.x/ibc/relayer" + ] + }, + { + "group": "Apps", + "pages": [ + "docs/ibc/v10.1.x/ibc/apps/apps", + "docs/ibc/v10.1.x/ibc/apps/bindports", + "docs/ibc/v10.1.x/ibc/apps/ibcmodule", + "docs/ibc/v10.1.x/ibc/apps/ibcv2apps", + "docs/ibc/v10.1.x/ibc/apps/keeper", + "docs/ibc/v10.1.x/ibc/apps/packets_acks", + "docs/ibc/v10.1.x/ibc/apps/routing" + ] + }, + { + "group": "Middleware", + "pages": [ + "docs/ibc/v10.1.x/ibc/middleware/develop", + 
"docs/ibc/v10.1.x/ibc/middleware/developIBCv2", + "docs/ibc/v10.1.x/ibc/middleware/integration", + "docs/ibc/v10.1.x/ibc/middleware/overview" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/ibc/v10.1.x/ibc/upgrades/developer-guide", + "docs/ibc/v10.1.x/ibc/upgrades/genesis-restart", + "docs/ibc/v10.1.x/ibc/upgrades/intro", + "docs/ibc/v10.1.x/ibc/upgrades/quick-guide" + ] + } + ] + }, + { + "group": "IBC Migrations", + "pages": [ + "docs/ibc/v10.1.x/migrations/migration.template", + { + "group": "Version Upgrades", + "pages": [ + "docs/ibc/v10.1.x/migrations/v8_1-to-v10", + "docs/ibc/v10.1.x/migrations/v8-to-v8_1", + "docs/ibc/v10.1.x/migrations/v7-to-v8", + "docs/ibc/v10.1.x/migrations/v7_2-to-v7_3", + "docs/ibc/v10.1.x/migrations/v7-to-v7_1", + "docs/ibc/v10.1.x/migrations/v6-to-v7", + "docs/ibc/v10.1.x/migrations/v5-to-v6", + "docs/ibc/v10.1.x/migrations/v4-to-v5", + "docs/ibc/v10.1.x/migrations/v3-to-v4", + "docs/ibc/v10.1.x/migrations/v2-to-v3", + "docs/ibc/v10.1.x/migrations/v1-to-v2" + ] + }, + { + "group": "Additional Migration Guides", + "pages": [ + "docs/ibc/v10.1.x/migrations/sdk-to-v1", + "docs/ibc/v10.1.x/migrations/support-denoms-with-slashes" + ] + } + ] + }, + { + "group": "IBC Light Clients", + "pages": [ + "docs/ibc/v10.1.x/light-clients/proposals", + { + "group": "Developer Guide", + "pages": [ + "docs/ibc/v10.1.x/light-clients/developer-guide/client-state", + "docs/ibc/v10.1.x/light-clients/developer-guide/consensus-state", + "docs/ibc/v10.1.x/light-clients/developer-guide/light-client-module", + "docs/ibc/v10.1.x/light-clients/developer-guide/overview", + "docs/ibc/v10.1.x/light-clients/developer-guide/proofs", + "docs/ibc/v10.1.x/light-clients/developer-guide/proposals", + "docs/ibc/v10.1.x/light-clients/developer-guide/setup", + "docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour", + "docs/ibc/v10.1.x/light-clients/developer-guide/upgrades" + ] + }, + { + "group": "Localhost", + "pages": [ + 
"docs/ibc/v10.1.x/light-clients/localhost/client-state", + "docs/ibc/v10.1.x/light-clients/localhost/connection", + "docs/ibc/v10.1.x/light-clients/localhost/integration", + "docs/ibc/v10.1.x/light-clients/localhost/overview", + "docs/ibc/v10.1.x/light-clients/localhost/state-verification" + ] + }, + { + "group": "Solo Machine", + "pages": [ + "docs/ibc/v10.1.x/light-clients/solomachine/concepts", + "docs/ibc/v10.1.x/light-clients/solomachine/solomachine", + "docs/ibc/v10.1.x/light-clients/solomachine/state", + "docs/ibc/v10.1.x/light-clients/solomachine/state_transitions" + ] + }, + { + "group": "Tendermint", + "pages": [ + "docs/ibc/v10.1.x/light-clients/tendermint/overview" + ] + }, + { + "group": "WASM", + "pages": [ + "docs/ibc/v10.1.x/light-clients/wasm/client", + "docs/ibc/v10.1.x/light-clients/wasm/concepts", + "docs/ibc/v10.1.x/light-clients/wasm/contracts", + "docs/ibc/v10.1.x/light-clients/wasm/events", + "docs/ibc/v10.1.x/light-clients/wasm/governance", + "docs/ibc/v10.1.x/light-clients/wasm/integration", + "docs/ibc/v10.1.x/light-clients/wasm/messages", + "docs/ibc/v10.1.x/light-clients/wasm/migrations", + "docs/ibc/v10.1.x/light-clients/wasm/overview" + ] + } + ] + }, + { + "group": "IBC Apps", + "pages": [ + { + "group": "Interchain Accounts", + "pages": [ + "docs/ibc/v10.1.x/apps/interchain-accounts/active-channels", + "docs/ibc/v10.1.x/apps/interchain-accounts/auth-modules", + "docs/ibc/v10.1.x/apps/interchain-accounts/client", + "docs/ibc/v10.1.x/apps/interchain-accounts/development", + "docs/ibc/v10.1.x/apps/interchain-accounts/integration", + "docs/ibc/v10.1.x/apps/interchain-accounts/messages", + "docs/ibc/v10.1.x/apps/interchain-accounts/overview", + "docs/ibc/v10.1.x/apps/interchain-accounts/parameters", + "docs/ibc/v10.1.x/apps/interchain-accounts/tx-encoding", + { + "group": "Legacy", + "pages": [ + "docs/ibc/v10.1.x/apps/interchain-accounts/legacy/auth-modules", + "docs/ibc/v10.1.x/apps/interchain-accounts/legacy/integration", + 
"docs/ibc/v10.1.x/apps/interchain-accounts/legacy/keeper-api" + ] + } + ] + }, + { + "group": "Transfer", + "pages": [ + "docs/ibc/v10.1.x/apps/transfer/IBCv2-transfer", + "docs/ibc/v10.1.x/apps/transfer/authorizations", + "docs/ibc/v10.1.x/apps/transfer/client", + "docs/ibc/v10.1.x/apps/transfer/events", + "docs/ibc/v10.1.x/apps/transfer/messages", + "docs/ibc/v10.1.x/apps/transfer/metrics", + "docs/ibc/v10.1.x/apps/transfer/overview", + "docs/ibc/v10.1.x/apps/transfer/params", + "docs/ibc/v10.1.x/apps/transfer/state", + "docs/ibc/v10.1.x/apps/transfer/state-transitions" + ] + } + ] + }, + { + "group": "IBC Middleware", + "pages": [ + { + "group": "Callbacks", + "pages": [ + "docs/ibc/v10.1.x/middleware/callbacks/callbacks-IBCv2", + "docs/ibc/v10.1.x/middleware/callbacks/end-users", + "docs/ibc/v10.1.x/middleware/callbacks/events", + "docs/ibc/v10.1.x/middleware/callbacks/gas", + "docs/ibc/v10.1.x/middleware/callbacks/integration", + "docs/ibc/v10.1.x/middleware/callbacks/interfaces", + "docs/ibc/v10.1.x/middleware/callbacks/overview" + ] + } + ] + }, + { + "group": "Security", + "pages": [ + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/ibc/v10.1.x/changelog/release-notes" + ] + } ] - }, - { - "group": "Oracle Tutorial", - "pages": [ - "docs/sdk/v0.47/tutorials/vote-extensions/oracle/getting-started", - "docs/sdk/v0.47/tutorials/vote-extensions/oracle/what-is-an-oracle", - "docs/sdk/v0.47/tutorials/vote-extensions/oracle/implementing-vote-extensions", - "docs/sdk/v0.47/tutorials/vote-extensions/oracle/testing-oracle" + }, + { + "version": "v8.5.x", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "IBC-Go", + "pages": [ + "docs/ibc/v8.5.x/intro" + ] + }, + { + "group": "IBC Core", + "pages": [ + "docs/ibc/v8.5.x/ibc/overview", + { + "group": "Core Concepts", + "pages": [ + "docs/ibc/v8.5.x/ibc/capability-module", + "docs/ibc/v8.5.x/ibc/channel-upgrades", + "docs/ibc/v8.5.x/ibc/integration", + 
"docs/ibc/v8.5.x/ibc/proposals", + "docs/ibc/v8.5.x/ibc/proto-docs", + "docs/ibc/v8.5.x/ibc/relayer", + "docs/ibc/v8.5.x/ibc/roadmap", + "docs/ibc/v8.5.x/ibc/troubleshooting" + ] + }, + { + "group": "Apps", + "pages": [ + "docs/ibc/v8.5.x/ibc/apps/apps", + "docs/ibc/v8.5.x/ibc/apps/bindports", + "docs/ibc/v8.5.x/ibc/apps/ibcmodule", + "docs/ibc/v8.5.x/ibc/apps/keeper", + "docs/ibc/v8.5.x/ibc/apps/packets_acks", + "docs/ibc/v8.5.x/ibc/apps/routing" + ] + }, + { + "group": "Middleware", + "pages": [ + "docs/ibc/v8.5.x/ibc/middleware/develop", + "docs/ibc/v8.5.x/ibc/middleware/integration", + "docs/ibc/v8.5.x/ibc/middleware/overview" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/ibc/v8.5.x/ibc/upgrades/developer-guide", + "docs/ibc/v8.5.x/ibc/upgrades/genesis-restart", + "docs/ibc/v8.5.x/ibc/upgrades/intro", + "docs/ibc/v8.5.x/ibc/upgrades/quick-guide" + ] + } + ] + }, + { + "group": "IBC Migrations", + "pages": [ + "docs/ibc/v8.5.x/migrations/migration.template", + { + "group": "Version Upgrades", + "pages": [ + "docs/ibc/v8.5.x/migrations/v8-to-v8_1", + "docs/ibc/v8.5.x/migrations/v7-to-v8", + "docs/ibc/v8.5.x/migrations/v7_2-to-v7_3", + "docs/ibc/v8.5.x/migrations/v7-to-v7_1", + "docs/ibc/v8.5.x/migrations/v6-to-v7", + "docs/ibc/v8.5.x/migrations/v5-to-v6", + "docs/ibc/v8.5.x/migrations/v4-to-v5", + "docs/ibc/v8.5.x/migrations/v3-to-v4", + "docs/ibc/v8.5.x/migrations/v2-to-v3", + "docs/ibc/v8.5.x/migrations/v1-to-v2" + ] + }, + { + "group": "Additional Migration Guides", + "pages": [ + "docs/ibc/v8.5.x/migrations/sdk-to-v1", + "docs/ibc/v8.5.x/migrations/support-denoms-with-slashes" + ] + } + ] + }, + { + "group": "IBC Apps", + "pages": [ + { + "group": "Interchain Accounts", + "pages": [ + "docs/ibc/v8.5.x/apps/interchain-accounts/active-channels", + "docs/ibc/v8.5.x/apps/interchain-accounts/auth-modules", + "docs/ibc/v8.5.x/apps/interchain-accounts/client", + "docs/ibc/v8.5.x/apps/interchain-accounts/development", + 
"docs/ibc/v8.5.x/apps/interchain-accounts/integration", + "docs/ibc/v8.5.x/apps/interchain-accounts/messages", + "docs/ibc/v8.5.x/apps/interchain-accounts/overview", + "docs/ibc/v8.5.x/apps/interchain-accounts/parameters", + "docs/ibc/v8.5.x/apps/interchain-accounts/tx-encoding", + { + "group": "Legacy", + "pages": [ + "docs/ibc/v8.5.x/apps/interchain-accounts/legacy/auth-modules", + "docs/ibc/v8.5.x/apps/interchain-accounts/legacy/integration", + "docs/ibc/v8.5.x/apps/interchain-accounts/legacy/keeper-api" + ] + } + ] + }, + { + "group": "Transfer", + "pages": [ + "docs/ibc/v8.5.x/apps/transfer/authorizations", + "docs/ibc/v8.5.x/apps/transfer/client", + "docs/ibc/v8.5.x/apps/transfer/events", + "docs/ibc/v8.5.x/apps/transfer/messages", + "docs/ibc/v8.5.x/apps/transfer/metrics", + "docs/ibc/v8.5.x/apps/transfer/overview", + "docs/ibc/v8.5.x/apps/transfer/params", + "docs/ibc/v8.5.x/apps/transfer/state", + "docs/ibc/v8.5.x/apps/transfer/state-transitions" + ] + } + ] + }, + { + "group": "IBC Light Clients", + "pages": [ + { + "group": "Developer Guide", + "pages": [ + "docs/ibc/v8.5.x/light-clients/developer-guide/client-state", + "docs/ibc/v8.5.x/light-clients/developer-guide/consensus-state", + "docs/ibc/v8.5.x/light-clients/developer-guide/genesis", + "docs/ibc/v8.5.x/light-clients/developer-guide/overview", + "docs/ibc/v8.5.x/light-clients/developer-guide/proofs", + "docs/ibc/v8.5.x/light-clients/developer-guide/proposals", + "docs/ibc/v8.5.x/light-clients/developer-guide/setup", + "docs/ibc/v8.5.x/light-clients/developer-guide/updates-and-misbehaviour", + "docs/ibc/v8.5.x/light-clients/developer-guide/upgrades" + ] + }, + { + "group": "Localhost", + "pages": [ + "docs/ibc/v8.5.x/light-clients/localhost/client-state", + "docs/ibc/v8.5.x/light-clients/localhost/connection", + "docs/ibc/v8.5.x/light-clients/localhost/integration", + "docs/ibc/v8.5.x/light-clients/localhost/overview", + "docs/ibc/v8.5.x/light-clients/localhost/state-verification" + ] + }, + { + 
"group": "Solo Machine", + "pages": [ + "docs/ibc/v8.5.x/light-clients/solomachine/concepts", + "docs/ibc/v8.5.x/light-clients/solomachine/solomachine", + "docs/ibc/v8.5.x/light-clients/solomachine/state", + "docs/ibc/v8.5.x/light-clients/solomachine/state_transitions" + ] + }, + { + "group": "WASM", + "pages": [ + "docs/ibc/v8.5.x/light-clients/wasm/client", + "docs/ibc/v8.5.x/light-clients/wasm/concepts", + "docs/ibc/v8.5.x/light-clients/wasm/contracts", + "docs/ibc/v8.5.x/light-clients/wasm/events", + "docs/ibc/v8.5.x/light-clients/wasm/governance", + "docs/ibc/v8.5.x/light-clients/wasm/integration", + "docs/ibc/v8.5.x/light-clients/wasm/messages", + "docs/ibc/v8.5.x/light-clients/wasm/migrations", + "docs/ibc/v8.5.x/light-clients/wasm/overview" + ] + } + ] + }, + { + "group": "IBC Middleware", + "pages": [ + { + "group": "Callbacks", + "pages": [ + "docs/ibc/v8.5.x/middleware/callbacks/end-users", + "docs/ibc/v8.5.x/middleware/callbacks/events", + "docs/ibc/v8.5.x/middleware/callbacks/gas", + "docs/ibc/v8.5.x/middleware/callbacks/integration", + "docs/ibc/v8.5.x/middleware/callbacks/interfaces", + "docs/ibc/v8.5.x/middleware/callbacks/overview", + "docs/ibc/v8.5.x/middleware/ics29-fee/end-users", + "docs/ibc/v8.5.x/middleware/ics29-fee/events", + "docs/ibc/v8.5.x/middleware/ics29-fee/fee-distribution", + "docs/ibc/v8.5.x/middleware/ics29-fee/integration", + "docs/ibc/v8.5.x/middleware/ics29-fee/msgs", + "docs/ibc/v8.5.x/middleware/ics29-fee/overview" + ] + } + ] + }, + { + "group": "Security", + "pages": [ + "docs/ibc/v8.5.x/security-audits" + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/ibc/v8.5.x/changelog/release-notes" + ] + } ] - } - ] - }, - { - "group": "Transaction Tutorials", - "pages": [ - "docs/sdk/v0.47/tutorials/transactions/building-a-transaction" - ] - } - ] - } - ] - }, - { - "version": "next", - "tabs": [ - { - "tab": "Learn", - "groups": [ - { - "group": "Learn", - "pages": [ - "docs/sdk/next/learn" - ] - }, - { - "group": 
"Introduction", - "pages": [ - "docs/sdk/next/learn/intro/overview", - "docs/sdk/next/learn/intro/why-app-specific", - "docs/sdk/next/learn/intro/sdk-app-architecture", - "docs/sdk/next/learn/intro/sdk-design" - ] - }, - { - "group": "Beginner", - "pages": [ - "docs/sdk/next/learn/beginner/app-anatomy", - "docs/sdk/next/learn/beginner/tx-lifecycle", - "docs/sdk/next/learn/beginner/query-lifecycle", - "docs/sdk/next/learn/beginner/accounts", - "docs/sdk/next/learn/beginner/gas-fees" - ] - }, - { - "group": "Advanced", - "pages": [ - "docs/sdk/next/learn/advanced/baseapp", - "docs/sdk/next/learn/advanced/transactions", - "docs/sdk/next/learn/advanced/context", - "docs/sdk/next/learn/advanced/node", - "docs/sdk/next/learn/advanced/store", - "docs/sdk/next/learn/advanced/encoding", - "docs/sdk/next/learn/advanced/grpc_rest", - "docs/sdk/next/learn/advanced/cli", - "docs/sdk/next/learn/advanced/events", - "docs/sdk/next/learn/advanced/telemetry", - "docs/sdk/next/learn/advanced/ocap", - "docs/sdk/next/learn/advanced/runtx_middleware", - "docs/sdk/next/learn/advanced/simulation", - "docs/sdk/next/learn/advanced/proto-docs", - "docs/sdk/next/learn/advanced/upgrade", - "docs/sdk/next/learn/advanced/config", - "docs/sdk/next/learn/advanced/autocli" - ] - } - ] - }, - { - "tab": "Build", - "groups": [ - { - "group": "Build", - "pages": [ - "docs/sdk/next/build" - ] - }, - { - "group": "Building Apps", - "pages": [ - "docs/sdk/next/build/building-apps/app-go", - "docs/sdk/next/build/building-apps/runtime", - "docs/sdk/next/build/building-apps/app-go-di", - "docs/sdk/next/build/building-apps/app-mempool", - "docs/sdk/next/build/building-apps/app-upgrade", - "docs/sdk/next/build/building-apps/vote-extensions", - "docs/sdk/next/build/building-apps/app-testnet" - ] - }, - { - "group": "Building Modules", - "pages": [ - "docs/sdk/next/build/building-modules/module-manager", - "docs/sdk/next/build/building-modules/messages-and-queries", - 
"docs/sdk/next/build/building-modules/msg-services", - "docs/sdk/next/build/building-modules/query-services", - "docs/sdk/next/build/building-modules/protobuf-annotations", - "docs/sdk/next/build/building-modules/beginblock-endblock", - "docs/sdk/next/build/building-modules/keeper", - "docs/sdk/next/build/building-modules/invariants", - "docs/sdk/next/build/building-modules/genesis", - "docs/sdk/next/build/building-modules/module-interfaces", - "docs/sdk/next/build/building-modules/structure", - "docs/sdk/next/build/building-modules/errors", - "docs/sdk/next/build/building-modules/upgrade", - "docs/sdk/next/build/building-modules/simulator", - "docs/sdk/next/build/building-modules/depinject", - "docs/sdk/next/build/building-modules/testing", - "docs/sdk/next/build/building-modules/preblock" - ] - }, - { - "group": "ABCI", - "pages": [ - "docs/sdk/next/build/abci/introduction", - "docs/sdk/next/build/abci/prepare-proposal", - "docs/sdk/next/build/abci/process-proposal", - "docs/sdk/next/build/abci/vote-extensions", - "docs/sdk/next/build/abci/checktx" - ] - }, - { - "group": "Modules", - "pages": [ - "docs/sdk/next/build/modules", - { - "group": "x/auth", - "pages": [ - "docs/sdk/next/build/modules/auth", - "docs/sdk/next/build/modules/auth/vesting", - "docs/sdk/next/build/modules/auth/tx" + }, + { + "version": "v7.8.x", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "IBC-Go", + "pages": [ + "docs/ibc/v7.8.x/intro" + ] + }, + { + "group": "IBC Core", + "pages": [ + "docs/ibc/v7.8.x/ibc/overview", + { + "group": "Core Concepts", + "pages": [ + "docs/ibc/v7.8.x/ibc/integration", + "docs/ibc/v7.8.x/ibc/proposals", + "docs/ibc/v7.8.x/ibc/proto-docs", + "docs/ibc/v7.8.x/ibc/relayer", + "docs/ibc/v7.8.x/ibc/roadmap", + "docs/ibc/v7.8.x/ibc/troubleshooting" + ] + }, + { + "group": "Apps", + "pages": [ + "docs/ibc/v7.8.x/ibc/apps/apps", + "docs/ibc/v7.8.x/ibc/apps/bindports", + "docs/ibc/v7.8.x/ibc/apps/ibcmodule", + 
"docs/ibc/v7.8.x/ibc/apps/keeper", + "docs/ibc/v7.8.x/ibc/apps/packets_acks", + "docs/ibc/v7.8.x/ibc/apps/routing" + ] + }, + { + "group": "Middleware", + "pages": [ + "docs/ibc/v7.8.x/ibc/middleware/develop", + "docs/ibc/v7.8.x/ibc/middleware/integration" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/ibc/v7.8.x/ibc/upgrades/developer-guide", + "docs/ibc/v7.8.x/ibc/upgrades/genesis-restart", + "docs/ibc/v7.8.x/ibc/upgrades/intro", + "docs/ibc/v7.8.x/ibc/upgrades/quick-guide" + ] + } + ] + }, + { + "group": "IBC Migrations", + "pages": [ + { + "group": "Version Upgrades", + "pages": [ + "docs/ibc/v7.8.x/migrations/v7_2-to-v7_3", + "docs/ibc/v7.8.x/migrations/v7-to-v7_1", + "docs/ibc/v7.8.x/migrations/v6-to-v7", + "docs/ibc/v7.8.x/migrations/v5-to-v6", + "docs/ibc/v7.8.x/migrations/v4-to-v5", + "docs/ibc/v7.8.x/migrations/v3-to-v4", + "docs/ibc/v7.8.x/migrations/v2-to-v3", + "docs/ibc/v7.8.x/migrations/v1-to-v2" + ] + }, + { + "group": "Additional Migration Guides", + "pages": [ + "docs/ibc/v7.8.x/migrations/sdk-to-v1", + "docs/ibc/v7.8.x/migrations/support-denoms-with-slashes" + ] + } + ] + }, + { + "group": "IBC Apps", + "pages": [ + { + "group": "Interchain Accounts", + "pages": [ + "docs/ibc/v7.8.x/apps/interchain-accounts/active-channels", + "docs/ibc/v7.8.x/apps/interchain-accounts/auth-modules", + "docs/ibc/v7.8.x/apps/interchain-accounts/client", + "docs/ibc/v7.8.x/apps/interchain-accounts/development", + "docs/ibc/v7.8.x/apps/interchain-accounts/integration", + "docs/ibc/v7.8.x/apps/interchain-accounts/messages", + "docs/ibc/v7.8.x/apps/interchain-accounts/overview", + "docs/ibc/v7.8.x/apps/interchain-accounts/parameters", + "docs/ibc/v7.8.x/apps/interchain-accounts/tx-encoding", + { + "group": "Legacy", + "pages": [ + "docs/ibc/v7.8.x/apps/interchain-accounts/legacy/auth-modules", + "docs/ibc/v7.8.x/apps/interchain-accounts/legacy/integration", + "docs/ibc/v7.8.x/apps/interchain-accounts/legacy/keeper-api" + ] + } + ] + }, + { + "group": 
"Transfer", + "pages": [ + "docs/ibc/v7.8.x/apps/transfer/authorizations", + "docs/ibc/v7.8.x/apps/transfer/client", + "docs/ibc/v7.8.x/apps/transfer/events", + "docs/ibc/v7.8.x/apps/transfer/messages", + "docs/ibc/v7.8.x/apps/transfer/metrics", + "docs/ibc/v7.8.x/apps/transfer/overview", + "docs/ibc/v7.8.x/apps/transfer/params", + "docs/ibc/v7.8.x/apps/transfer/state", + "docs/ibc/v7.8.x/apps/transfer/state-transitions" + ] + } + ] + }, + { + "group": "IBC Light Clients", + "pages": [ + { + "group": "Developer Guide", + "pages": [ + "docs/ibc/v7.8.x/light-clients/developer-guide/client-state", + "docs/ibc/v7.8.x/light-clients/developer-guide/consensus-state", + "docs/ibc/v7.8.x/light-clients/developer-guide/genesis", + "docs/ibc/v7.8.x/light-clients/developer-guide/overview", + "docs/ibc/v7.8.x/light-clients/developer-guide/proofs", + "docs/ibc/v7.8.x/light-clients/developer-guide/proposals", + "docs/ibc/v7.8.x/light-clients/developer-guide/setup", + "docs/ibc/v7.8.x/light-clients/developer-guide/updates-and-misbehaviour", + "docs/ibc/v7.8.x/light-clients/developer-guide/upgrades" + ] + }, + { + "group": "Localhost", + "pages": [ + "docs/ibc/v7.8.x/light-clients/localhost/client-state", + "docs/ibc/v7.8.x/light-clients/localhost/connection", + "docs/ibc/v7.8.x/light-clients/localhost/integration", + "docs/ibc/v7.8.x/light-clients/localhost/overview", + "docs/ibc/v7.8.x/light-clients/localhost/state-verification" + ] + }, + { + "group": "Solo Machine", + "pages": [ + "docs/ibc/v7.8.x/light-clients/solomachine/concepts", + "docs/ibc/v7.8.x/light-clients/solomachine/solomachine", + "docs/ibc/v7.8.x/light-clients/solomachine/state", + "docs/ibc/v7.8.x/light-clients/solomachine/state_transitions" + ] + }, + { + "group": "WASM", + "pages": [ + "docs/ibc/v7.8.x/light-clients/wasm/client", + "docs/ibc/v7.8.x/light-clients/wasm/concepts", + "docs/ibc/v7.8.x/light-clients/wasm/contracts", + "docs/ibc/v7.8.x/light-clients/wasm/events", + 
"docs/ibc/v7.8.x/light-clients/wasm/governance", + "docs/ibc/v7.8.x/light-clients/wasm/integration", + "docs/ibc/v7.8.x/light-clients/wasm/messages", + "docs/ibc/v7.8.x/light-clients/wasm/overview" + ] + } + ] + }, + { + "group": "IBC Middleware", + "pages": [ + { + "group": "Callbacks", + "pages": [ + "docs/ibc/v7.8.x/middleware/callbacks/end-users", + "docs/ibc/v7.8.x/middleware/callbacks/events", + "docs/ibc/v7.8.x/middleware/callbacks/gas", + "docs/ibc/v7.8.x/middleware/callbacks/integration", + "docs/ibc/v7.8.x/middleware/callbacks/interfaces", + "docs/ibc/v7.8.x/middleware/callbacks/overview", + "docs/ibc/v7.8.x/middleware/ics29-fee/end-users", + "docs/ibc/v7.8.x/middleware/ics29-fee/events", + "docs/ibc/v7.8.x/middleware/ics29-fee/fee-distribution", + "docs/ibc/v7.8.x/middleware/ics29-fee/integration", + "docs/ibc/v7.8.x/middleware/ics29-fee/msgs", + "docs/ibc/v7.8.x/middleware/ics29-fee/overview" + ] + } + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/ibc/v7.8.x/changelog/release-notes" + ] + } ] - }, - "docs/sdk/next/build/modules/authz", - "docs/sdk/next/build/modules/bank", - "docs/sdk/next/build/modules/consensus", - "docs/sdk/next/build/modules/crisis", - "docs/sdk/next/build/modules/distribution", - "docs/sdk/next/build/modules/epochs", - "docs/sdk/next/build/modules/evidence", - "docs/sdk/next/build/modules/feegrant", - "docs/sdk/next/build/modules/gov", - "docs/sdk/next/build/modules/group", - "docs/sdk/next/build/modules/mint", - "docs/sdk/next/build/modules/nft", - "docs/sdk/next/build/modules/params", - "docs/sdk/next/build/modules/protocolpool", - "docs/sdk/next/build/modules/slashing", - "docs/sdk/next/build/modules/staking", - "docs/sdk/next/build/modules/upgrade", - "docs/sdk/next/build/modules/circuit", - "docs/sdk/next/build/modules/genutil" - ] - }, - { - "group": "Migrations", - "pages": [ - "docs/sdk/next/build/migrations/intro", - "docs/sdk/next/build/migrations/upgrade-reference", - 
"docs/sdk/next/build/migrations/upgrade-guide" - ] - }, - { - "group": "Packages", - "pages": [ - "docs/sdk/next/build/packages", - "docs/sdk/next/build/packages/depinject", - "docs/sdk/next/build/packages/collections" - ] - }, - { - "group": "Tooling", - "pages": [ - "docs/sdk/next/build/tooling", - "docs/sdk/next/build/tooling/protobuf", - "docs/sdk/next/build/tooling/cosmovisor", - "docs/sdk/next/build/tooling/confix" - ] - }, - { - "group": "RFC", - "pages": [ - "docs/sdk/next/build/rfc", - "docs/sdk/next/build/rfc/PROCESS", - "docs/sdk/next/build/rfc/rfc-001-tx-validation", - "docs/sdk/next/build/rfc/rfc-template" - ] - }, - { - "group": "Specifications", - "pages": [ - "docs/sdk/next/build/spec", - "docs/sdk/next/build/spec/SPEC_MODULE", - "docs/sdk/next/build/spec/SPEC_STANDARD", - { - "group": "Addresses spec", - "pages": [ - "docs/sdk/next/build/spec/addresses", - "docs/sdk/next/build/spec/addresses/bech32" + }, + { + "version": "v6.3.x", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "IBC-Go", + "pages": [ + "docs/ibc/v6.3.x/intro" + ] + }, + { + "group": "IBC Core", + "pages": [ + "docs/ibc/v6.3.x/ibc/overview", + { + "group": "Core Concepts", + "pages": [ + "docs/ibc/v6.3.x/ibc/integration", + "docs/ibc/v6.3.x/ibc/proposals", + "docs/ibc/v6.3.x/ibc/proto-docs", + "docs/ibc/v6.3.x/ibc/relayer", + "docs/ibc/v6.3.x/ibc/roadmap" + ] + }, + { + "group": "Apps", + "pages": [ + "docs/ibc/v6.3.x/ibc/apps/apps", + "docs/ibc/v6.3.x/ibc/apps/bindports", + "docs/ibc/v6.3.x/ibc/apps/ibcmodule", + "docs/ibc/v6.3.x/ibc/apps/keeper", + "docs/ibc/v6.3.x/ibc/apps/packets_acks", + "docs/ibc/v6.3.x/ibc/apps/routing" + ] + }, + { + "group": "Middleware", + "pages": [ + "docs/ibc/v6.3.x/ibc/middleware/develop", + "docs/ibc/v6.3.x/ibc/middleware/integration" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/ibc/v6.3.x/ibc/upgrades/developer-guide", + "docs/ibc/v6.3.x/ibc/upgrades/genesis-restart", + "docs/ibc/v6.3.x/ibc/upgrades/intro", + 
"docs/ibc/v6.3.x/ibc/upgrades/quick-guide" + ] + } + ] + }, + { + "group": "IBC Migrations", + "pages": [ + { + "group": "Version Upgrades", + "pages": [ + "docs/ibc/v6.3.x/migrations/v5-to-v6", + "docs/ibc/v6.3.x/migrations/v4-to-v5", + "docs/ibc/v6.3.x/migrations/v3-to-v4", + "docs/ibc/v6.3.x/migrations/v2-to-v3", + "docs/ibc/v6.3.x/migrations/v1-to-v2" + ] + }, + { + "group": "Additional Migration Guides", + "pages": [ + "docs/ibc/v6.3.x/migrations/sdk-to-v1", + "docs/ibc/v6.3.x/migrations/support-denoms-with-slashes" + ] + } + ] + }, + { + "group": "IBC Apps", + "pages": [ + { + "group": "Interchain Accounts", + "pages": [ + "docs/ibc/v6.3.x/apps/interchain-accounts/active-channels", + "docs/ibc/v6.3.x/apps/interchain-accounts/auth-modules", + "docs/ibc/v6.3.x/apps/interchain-accounts/client", + "docs/ibc/v6.3.x/apps/interchain-accounts/development", + "docs/ibc/v6.3.x/apps/interchain-accounts/integration", + "docs/ibc/v6.3.x/apps/interchain-accounts/messages", + "docs/ibc/v6.3.x/apps/interchain-accounts/overview", + "docs/ibc/v6.3.x/apps/interchain-accounts/parameters", + { + "group": "Legacy", + "pages": [ + "docs/ibc/v6.3.x/apps/interchain-accounts/legacy/auth-modules", + "docs/ibc/v6.3.x/apps/interchain-accounts/legacy/integration", + "docs/ibc/v6.3.x/apps/interchain-accounts/legacy/keeper-api" + ] + } + ] + }, + { + "group": "Transfer", + "pages": [ + "docs/ibc/v6.3.x/apps/transfer/authorizations", + "docs/ibc/v6.3.x/apps/transfer/events", + "docs/ibc/v6.3.x/apps/transfer/messages", + "docs/ibc/v6.3.x/apps/transfer/metrics", + "docs/ibc/v6.3.x/apps/transfer/overview", + "docs/ibc/v6.3.x/apps/transfer/params", + "docs/ibc/v6.3.x/apps/transfer/state", + "docs/ibc/v6.3.x/apps/transfer/state-transitions" + ] + } + ] + }, + { + "group": "IBC Middleware", + "pages": [ + { + "group": "Callbacks", + "pages": [ + "docs/ibc/v6.3.x/middleware/ics29-fee/end-users", + "docs/ibc/v6.3.x/middleware/ics29-fee/events", + 
"docs/ibc/v6.3.x/middleware/ics29-fee/fee-distribution", + "docs/ibc/v6.3.x/middleware/ics29-fee/integration", + "docs/ibc/v6.3.x/middleware/ics29-fee/msgs", + "docs/ibc/v6.3.x/middleware/ics29-fee/overview" + ] + } + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/ibc/v6.3.x/changelog/release-notes" + ] + } ] - }, - { - "group": "Store", - "pages": [ - "docs/sdk/next/build/spec/store", - "docs/sdk/next/build/spec/store/interblock-cache" + }, + { + "version": "v5.4.x", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "IBC-Go", + "pages": [ + "docs/ibc/v5.4.x/intro" + ] + }, + { + "group": "IBC Core", + "pages": [ + "docs/ibc/v5.4.x/ibc/overview", + { + "group": "Core Concepts", + "pages": [ + "docs/ibc/v5.4.x/ibc/integration", + "docs/ibc/v5.4.x/ibc/proposals", + "docs/ibc/v5.4.x/ibc/proto-docs", + "docs/ibc/v5.4.x/ibc/relayer", + "docs/ibc/v5.4.x/ibc/roadmap" + ] + }, + { + "group": "Apps", + "pages": [ + "docs/ibc/v5.4.x/ibc/apps/apps", + "docs/ibc/v5.4.x/ibc/apps/bindports", + "docs/ibc/v5.4.x/ibc/apps/ibcmodule", + "docs/ibc/v5.4.x/ibc/apps/keeper", + "docs/ibc/v5.4.x/ibc/apps/packets_acks", + "docs/ibc/v5.4.x/ibc/apps/routing" + ] + }, + { + "group": "Middleware", + "pages": [ + "docs/ibc/v5.4.x/ibc/middleware/develop", + "docs/ibc/v5.4.x/ibc/middleware/integration" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/ibc/v5.4.x/ibc/upgrades/developer-guide", + "docs/ibc/v5.4.x/ibc/upgrades/genesis-restart", + "docs/ibc/v5.4.x/ibc/upgrades/intro", + "docs/ibc/v5.4.x/ibc/upgrades/quick-guide" + ] + } + ] + }, + { + "group": "IBC Migrations", + "pages": [ + { + "group": "Version Upgrades", + "pages": [ + "docs/ibc/v5.4.x/migrations/v3-to-v4", + "docs/ibc/v5.4.x/migrations/v2-to-v3", + "docs/ibc/v5.4.x/migrations/v1-to-v2" + ] + }, + { + "group": "Additional Migration Guides", + "pages": [ + "docs/ibc/v5.4.x/migrations/sdk-to-v1", + "docs/ibc/v5.4.x/migrations/support-denoms-with-slashes" + ] + } + ] + }, + { + "group": 
"IBC Apps", + "pages": [ + { + "group": "Interchain Accounts", + "pages": [ + "docs/ibc/v5.4.x/apps/interchain-accounts/active-channels", + "docs/ibc/v5.4.x/apps/interchain-accounts/auth-modules", + "docs/ibc/v5.4.x/apps/interchain-accounts/integration", + "docs/ibc/v5.4.x/apps/interchain-accounts/overview", + "docs/ibc/v5.4.x/apps/interchain-accounts/parameters", + "docs/ibc/v5.4.x/apps/interchain-accounts/transactions" + ] + }, + { + "group": "Transfer", + "pages": [ + "docs/ibc/v5.4.x/apps/transfer/events", + "docs/ibc/v5.4.x/apps/transfer/messages", + "docs/ibc/v5.4.x/apps/transfer/metrics", + "docs/ibc/v5.4.x/apps/transfer/overview", + "docs/ibc/v5.4.x/apps/transfer/params", + "docs/ibc/v5.4.x/apps/transfer/state", + "docs/ibc/v5.4.x/apps/transfer/state-transitions" + ] + } + ] + }, + { + "group": "IBC Middleware", + "pages": [ + { + "group": "Callbacks", + "pages": [ + "docs/ibc/v5.4.x/middleware/ics29-fee/end-users", + "docs/ibc/v5.4.x/middleware/ics29-fee/events", + "docs/ibc/v5.4.x/middleware/ics29-fee/fee-distribution", + "docs/ibc/v5.4.x/middleware/ics29-fee/integration", + "docs/ibc/v5.4.x/middleware/ics29-fee/msgs", + "docs/ibc/v5.4.x/middleware/ics29-fee/overview" + ] + } + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/ibc/v5.4.x/changelog/release-notes" + ] + } ] - } - ] - } - ] - }, - { - "tab": "User Guides", - "groups": [ - { - "group": "User Guides", - "pages": [ - "docs/sdk/next/user" - ] - }, - { - "group": "Running a Node, API and CLI", - "pages": [ - "docs/sdk/next/user/run-node/keyring", - "docs/sdk/next/user/run-node/run-node", - "docs/sdk/next/user/run-node/interact-node", - "docs/sdk/next/user/run-node/txs", - "docs/sdk/next/user/run-node/run-testnet", - "docs/sdk/next/user/run-node/run-production" - ] - }, - { - "group": "User", - "pages": [ - "docs/sdk/next/user" - ] - } - ] - }, - { - "tab": "Tutorials", - "groups": [ - { - "group": "Tutorials", - "pages": [ - "docs/sdk/next/tutorials" - ] - }, - { - "group": "Vote 
Extensions Tutorials", - "pages": [ - { - "group": "Mitigating Auction Front-Running Tutorial", - "pages": [ - "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/getting-started", - "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning", - "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions", - "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions", - "docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running" + }, + { + "version": "v4.6.x", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "IBC-Go", + "pages": [ + "docs/ibc/v4.6.x/intro" + ] + }, + { + "group": "IBC Core", + "pages": [ + "docs/ibc/v4.6.x/ibc/overview", + { + "group": "Core Concepts", + "pages": [ + "docs/ibc/v4.6.x/ibc/integration", + "docs/ibc/v4.6.x/ibc/proposals", + "docs/ibc/v4.6.x/ibc/proto-docs", + "docs/ibc/v4.6.x/ibc/relayer", + "docs/ibc/v4.6.x/ibc/roadmap" + ] + }, + { + "group": "Apps", + "pages": [ + "docs/ibc/v4.6.x/ibc/apps/apps", + "docs/ibc/v4.6.x/ibc/apps/bindports", + "docs/ibc/v4.6.x/ibc/apps/ibcmodule", + "docs/ibc/v4.6.x/ibc/apps/keeper", + "docs/ibc/v4.6.x/ibc/apps/packets_acks", + "docs/ibc/v4.6.x/ibc/apps/routing" + ] + }, + { + "group": "Middleware", + "pages": [ + "docs/ibc/v4.6.x/ibc/middleware/develop", + "docs/ibc/v4.6.x/ibc/middleware/integration" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/ibc/v4.6.x/ibc/upgrades/developer-guide", + "docs/ibc/v4.6.x/ibc/upgrades/genesis-restart", + "docs/ibc/v4.6.x/ibc/upgrades/intro", + "docs/ibc/v4.6.x/ibc/upgrades/quick-guide" + ] + } + ] + }, + { + "group": "IBC Migrations", + "pages": [ + { + "group": "Version Upgrades", + "pages": [ + "docs/ibc/v4.6.x/migrations/v3-to-v4", + "docs/ibc/v4.6.x/migrations/v2-to-v3", + "docs/ibc/v4.6.x/migrations/v1-to-v2" + ] + }, + { + "group": "Additional Migration Guides", 
+ "pages": [ + "docs/ibc/v4.6.x/migrations/sdk-to-v1", + "docs/ibc/v4.6.x/migrations/support-denoms-with-slashes" + ] + } + ] + }, + { + "group": "IBC Apps", + "pages": [ + { + "group": "Interchain Accounts", + "pages": [ + "docs/ibc/v4.6.x/apps/interchain-accounts/active-channels", + "docs/ibc/v4.6.x/apps/interchain-accounts/auth-modules", + "docs/ibc/v4.6.x/apps/interchain-accounts/integration", + "docs/ibc/v4.6.x/apps/interchain-accounts/overview", + "docs/ibc/v4.6.x/apps/interchain-accounts/parameters", + "docs/ibc/v4.6.x/apps/interchain-accounts/transactions" + ] + }, + { + "group": "Transfer", + "pages": [ + "docs/ibc/v4.6.x/apps/transfer/events", + "docs/ibc/v4.6.x/apps/transfer/messages", + "docs/ibc/v4.6.x/apps/transfer/metrics", + "docs/ibc/v4.6.x/apps/transfer/overview", + "docs/ibc/v4.6.x/apps/transfer/params", + "docs/ibc/v4.6.x/apps/transfer/state", + "docs/ibc/v4.6.x/apps/transfer/state-transitions" + ] + } + ] + }, + { + "group": "IBC Middleware", + "pages": [ + { + "group": "Callbacks", + "pages": [ + "docs/ibc/v4.6.x/middleware/ics29-fee/end-users", + "docs/ibc/v4.6.x/middleware/ics29-fee/events", + "docs/ibc/v4.6.x/middleware/ics29-fee/fee-distribution", + "docs/ibc/v4.6.x/middleware/ics29-fee/integration", + "docs/ibc/v4.6.x/middleware/ics29-fee/msgs", + "docs/ibc/v4.6.x/middleware/ics29-fee/overview" + ] + } + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/ibc/v4.6.x/changelog/release-notes" + ] + } ] - }, - { - "group": "Oracle Tutorial", - "pages": [ - "docs/sdk/next/tutorials/vote-extensions/oracle/getting-started", - "docs/sdk/next/tutorials/vote-extensions/oracle/what-is-an-oracle", - "docs/sdk/next/tutorials/vote-extensions/oracle/implementing-vote-extensions", - "docs/sdk/next/tutorials/vote-extensions/oracle/testing-oracle" + }, + { + "version": "next", + "tabs": [ + { + "tab": "Documentation", + "groups": [ + { + "group": "IBC-Go", + "pages": [ + "docs/ibc/next/intro" + ] + }, + { + "group": "IBC Core", + "pages": [ 
+ "docs/ibc/next/ibc/overview", + { + "group": "Core Concepts", + "pages": [ + "docs/ibc/next/ibc/best-practices", + "docs/ibc/next/ibc/integration", + "docs/ibc/next/ibc/permissioning", + "docs/ibc/next/ibc/relayer" + ] + }, + { + "group": "Apps", + "pages": [ + "docs/ibc/next/ibc/apps/apps", + "docs/ibc/next/ibc/apps/bindports", + "docs/ibc/next/ibc/apps/ibcmodule", + "docs/ibc/next/ibc/apps/ibcv2apps", + "docs/ibc/next/ibc/apps/keeper", + "docs/ibc/next/ibc/apps/packets_acks", + "docs/ibc/next/ibc/apps/routing" + ] + }, + { + "group": "Middleware", + "pages": [ + "docs/ibc/next/ibc/middleware/develop", + "docs/ibc/next/ibc/middleware/developIBCv2", + "docs/ibc/next/ibc/middleware/integration", + "docs/ibc/next/ibc/middleware/overview" + ] + }, + { + "group": "Upgrades", + "pages": [ + "docs/ibc/next/ibc/upgrades/developer-guide", + "docs/ibc/next/ibc/upgrades/genesis-restart", + "docs/ibc/next/ibc/upgrades/intro", + "docs/ibc/next/ibc/upgrades/quick-guide" + ] + } + ] + }, + { + "group": "IBC Migrations", + "pages": [ + "docs/ibc/next/migrations/migration.template", + { + "group": "Version Upgrades", + "pages": [ + "docs/ibc/next/migrations/v10-to-v11", + "docs/ibc/next/migrations/v8_1-to-v10", + "docs/ibc/next/migrations/v8-to-v8_1", + "docs/ibc/next/migrations/v7-to-v8", + "docs/ibc/next/migrations/v7_2-to-v7_3", + "docs/ibc/next/migrations/v7-to-v7_1", + "docs/ibc/next/migrations/v6-to-v7", + "docs/ibc/next/migrations/v5-to-v6", + "docs/ibc/next/migrations/v4-to-v5", + "docs/ibc/next/migrations/v3-to-v4", + "docs/ibc/next/migrations/v2-to-v3", + "docs/ibc/next/migrations/v1-to-v2" + ] + }, + { + "group": "Additional Migration Guides", + "pages": [ + "docs/ibc/next/migrations/sdk-to-v1", + "docs/ibc/next/migrations/support-denoms-with-slashes", + "docs/ibc/next/migrations/support-stackbuilder" + ] + } + ] + }, + { + "group": "IBC Light Clients", + "pages": [ + "docs/ibc/next/light-clients/proposals", + { + "group": "Developer Guide", + "pages": [ + 
"docs/ibc/next/light-clients/developer-guide/client-state", + "docs/ibc/next/light-clients/developer-guide/consensus-state", + "docs/ibc/next/light-clients/developer-guide/light-client-module", + "docs/ibc/next/light-clients/developer-guide/overview", + "docs/ibc/next/light-clients/developer-guide/proofs", + "docs/ibc/next/light-clients/developer-guide/proposals", + "docs/ibc/next/light-clients/developer-guide/setup", + "docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour", + "docs/ibc/next/light-clients/developer-guide/upgrades" + ] + }, + { + "group": "Localhost", + "pages": [ + "docs/ibc/next/light-clients/localhost/client-state", + "docs/ibc/next/light-clients/localhost/connection", + "docs/ibc/next/light-clients/localhost/integration", + "docs/ibc/next/light-clients/localhost/overview", + "docs/ibc/next/light-clients/localhost/state-verification" + ] + }, + { + "group": "Solo Machine", + "pages": [ + "docs/ibc/next/light-clients/solomachine/concepts", + "docs/ibc/next/light-clients/solomachine/solomachine", + "docs/ibc/next/light-clients/solomachine/state", + "docs/ibc/next/light-clients/solomachine/state_transitions" + ] + }, + { + "group": "Tendermint", + "pages": [ + "docs/ibc/next/light-clients/tendermint/overview" + ] + }, + { + "group": "WASM", + "pages": [ + "docs/ibc/next/light-clients/wasm/client", + "docs/ibc/next/light-clients/wasm/concepts", + "docs/ibc/next/light-clients/wasm/contracts", + "docs/ibc/next/light-clients/wasm/events", + "docs/ibc/next/light-clients/wasm/governance", + "docs/ibc/next/light-clients/wasm/integration", + "docs/ibc/next/light-clients/wasm/messages", + "docs/ibc/next/light-clients/wasm/migrations", + "docs/ibc/next/light-clients/wasm/overview" + ] + } + ] + }, + { + "group": "IBC Apps", + "pages": [ + { + "group": "Interchain Accounts", + "pages": [ + "docs/ibc/next/apps/interchain-accounts/active-channels", + "docs/ibc/next/apps/interchain-accounts/auth-modules", + 
"docs/ibc/next/apps/interchain-accounts/client", + "docs/ibc/next/apps/interchain-accounts/development", + "docs/ibc/next/apps/interchain-accounts/integration", + "docs/ibc/next/apps/interchain-accounts/messages", + "docs/ibc/next/apps/interchain-accounts/overview", + "docs/ibc/next/apps/interchain-accounts/parameters", + "docs/ibc/next/apps/interchain-accounts/tx-encoding", + { + "group": "Legacy", + "pages": [ + "docs/ibc/next/apps/interchain-accounts/legacy/auth-modules", + "docs/ibc/next/apps/interchain-accounts/legacy/integration", + "docs/ibc/next/apps/interchain-accounts/legacy/keeper-api" + ] + } + ] + }, + { + "group": "Transfer", + "pages": [ + "docs/ibc/next/apps/transfer/IBCv2-transfer", + "docs/ibc/next/apps/transfer/authorizations", + "docs/ibc/next/apps/transfer/client", + "docs/ibc/next/apps/transfer/events", + "docs/ibc/next/apps/transfer/messages", + "docs/ibc/next/apps/transfer/metrics", + "docs/ibc/next/apps/transfer/overview", + "docs/ibc/next/apps/transfer/params", + "docs/ibc/next/apps/transfer/state", + "docs/ibc/next/apps/transfer/state-transitions" + ] + } + ] + }, + { + "group": "IBC Middleware", + "pages": [ + { + "group": "Callbacks", + "pages": [ + "docs/ibc/next/middleware/callbacks/callbacks-IBCv2", + "docs/ibc/next/middleware/callbacks/end-users", + "docs/ibc/next/middleware/callbacks/events", + "docs/ibc/next/middleware/callbacks/gas", + "docs/ibc/next/middleware/callbacks/integration", + "docs/ibc/next/middleware/callbacks/interfaces", + "docs/ibc/next/middleware/callbacks/overview", + "docs/ibc/next/middleware/packet-forward-middleware/example-usage", + "docs/ibc/next/middleware/packet-forward-middleware/integration", + "docs/ibc/next/middleware/packet-forward-middleware/overview", + "docs/ibc/next/middleware/rate-limit-middleware/integration", + "docs/ibc/next/middleware/rate-limit-middleware/overview", + "docs/ibc/next/middleware/rate-limit-middleware/setting-limits" + ] + } + ] + }, + { + "group": "Security", + "pages": [ + 
"docs/ibc/next/security-audits" + ] + } + ] + }, + { + "tab": "Release Notes", + "pages": [ + "docs/ibc/next/changelog/release-notes" + ] + } ] - } - ] - }, - { - "group": "Transaction Tutorials", - "pages": [ - "docs/sdk/next/tutorials/transactions/building-a-transaction" - ] - } + } ] - } - ] - } + } ] - }, - { - "dropdown": "IBC", - "icon": "book-open-text", - "versions": [ - { - "tabs": [ - { - "tab": "Documentation", - "pages": [ - "docs/ibc/v0.2.0/index" - ] - } - ], - "version": "v0.2.0" - }, - { - "version": "next", - "tabs": [ - { - "tab": "Documentation", - "pages": [ - "docs/ibc/next/index" - ] - } - ] - } - ] - } - ] - }, - "footer": { - "socials": { - "twitter": "https://twitter.com/cosmos", - "github": "https://github.com/cosmos", - "discord": "https://discord.gg/interchain", - "website": "https://cosmos.network" - } - }, - "appearance": { - "default": "system", - "strict": false - }, - "search": { - "prompt": "Search docs..." - }, - "background": { - "color": { - "light": "#F1F1F1", - "dark": "#000000" + }, + "footer": { + "socials": { + "twitter": "https://twitter.com/cosmos", + "github": "https://github.com/cosmos", + "discord": "https://discord.gg/interchain", + "website": "https://cosmos.network" + } + }, + "appearance": { + "default": "system", + "strict": false + }, + "search": { + "prompt": "Search docs..." + }, + "background": { + "color": { + "light": "#F1F1F1", + "dark": "#000000" + } } - } -} +} \ No newline at end of file diff --git a/docs/common/pages/adr-comprehensive.mdx b/docs/common/pages/adr-comprehensive.mdx new file mode 100644 index 00000000..d9a6dc37 --- /dev/null +++ b/docs/common/pages/adr-comprehensive.mdx @@ -0,0 +1,14969 @@ +--- +title: "Architecture Decision Records (ADRs)" +description: "Comprehensive collection of all Cosmos SDK Architecture Decision Records" +--- + +# Architecture Decision Records (ADRs) + +This page contains the complete collection of Architecture Decision Records (ADRs) for the Cosmos SDK. 
ADRs document important architectural decisions made during the development of the SDK, providing context, rationale, and consequences for each decision. + +Each ADR is presented in an expandable format for easy navigation. + +## List of ADRs + +### ADR 002: SDK Documentation Structure + + + +# ADR 002: SDK Documentation Structure + +## Context + +There is a need for a scalable structure of the Cosmos SDK documentation. Current documentation includes a lot of non-related Cosmos SDK material, is difficult to maintain and hard to follow as a user. + +Ideally, we would have: + +* All docs related to dev frameworks or tools live in their respective github repos (sdk repo would contain sdk docs, hub repo would contain hub docs, lotion repo would contain lotion docs, etc.) +* All other docs (faqs, whitepaper, high-level material about Cosmos) would live on the website. + +## Decision + +Re-structure the `/docs` folder of the Cosmos SDK github repo as follows: + +```text +docs/ +├── README +├── intro/ +├── concepts/ +│ ├── baseapp +│ ├── types +│ ├── store +│ ├── server +│ ├── modules/ +│ │ ├── keeper +│ │ ├── handler +│ │ ├── cli +│ ├── gas +│ └── commands +├── clients/ +│ ├── lite/ +│ ├── service-providers +├── modules/ +├── spec/ +├── translations/ +└── architecture/ +``` + +The files in each sub-folders do not matter and will likely change. What matters is the sectioning: + +* `README`: Landing page of the docs. +* `intro`: Introductory material. Goal is to have a short explainer of the Cosmos SDK and then channel people to the resource they need. The [Cosmos SDK tutorial](https://github.com/cosmos/sdk-application-tutorial/) will be highlighted, as well as the `godocs`. +* `concepts`: Contains high-level explanations of the abstractions of the Cosmos SDK. It does not contain specific code implementation and does not need to be updated often. **It is not an API specification of the interfaces**. API spec is the `godoc`. 
+* `clients`: Contains specs and info about the various Cosmos SDK clients. +* `spec`: Contains specs of modules, and others. +* `modules`: Contains links to `godocs` and the spec of the modules. +* `architecture`: Contains architecture-related docs like the present one. +* `translations`: Contains different translations of the documentation. + +Website docs sidebar will only include the following sections: + +* `README` +* `intro` +* `concepts` +* `clients` + +`architecture` need not be displayed on the website. + +## Status + +Accepted + +## Consequences + +### Positive + +* Much clearer organisation of the Cosmos SDK docs. +* The `/docs` folder now only contains Cosmos SDK and gaia related material. Later, it will only contain Cosmos SDK related material. +* Developers only have to update `/docs` folder when they open a PR (and not `/examples` for example). +* Easier for developers to find what they need to update in the docs thanks to reworked architecture. +* Cleaner vuepress build for website docs. +* Will help build an executable doc (cf https://github.com/cosmos/cosmos-sdk/issues/2611) + +### Neutral + +* We need to move a bunch of deprecated stuff to `/_attic` folder. +* We need to integrate content in `docs/sdk/docs/core` in `concepts`. +* We need to move all the content that currently lives in `docs` and does not fit in new structure (like `lotion`, intro material, whitepaper) to the website repository. 
+* Update `DOCS_README.md` + +## References + +* https://github.com/cosmos/cosmos-sdk/issues/1460 +* https://github.com/cosmos/cosmos-sdk/pull/2695 +* https://github.com/cosmos/cosmos-sdk/issues/2611 + + + +### ADR 003: Dynamic Capability Store + + + +# ADR 3: Dynamic Capability Store + +## Changelog + +* 12 December 2019: Initial version +* 02 April 2020: Memory Store Revisions + +## Context + +Full implementation of the [IBC specification](https://github.com/cosmos/ibc) requires the ability to create and authenticate object-capability keys at runtime (i.e., during transaction execution), +as described in [ICS 5](https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#technical-specification). In the IBC specification, capability keys are created for each newly initialised +port & channel, and are used to authenticate future usage of the port or channel. Since channels and potentially ports can be initialised during transaction execution, the state machine must be able to create +object-capability keys at this time. + +At present, the Cosmos SDK does not have the ability to do this. Object-capability keys are currently pointers (memory addresses) of `StoreKey` structs created at application initialisation in `app.go` ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L132)) +and passed to Keepers as fixed arguments ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L160)). Keepers cannot create or store capability keys during transaction execution — although they could call `NewKVStoreKey` and take the memory address +of the returned struct, storing this in the Merklised store would result in a consensus fault, since the memory address will be different on each machine (this is intentional — were this not the case, the keys would be predictable and couldn't serve as object capabilities). 
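The point above — that a memory address is unforgeable in-process but useless as consensus state — can be illustrated with a minimal, hedged Go sketch (this is an assumption-laden illustration, not SDK code; the `Capability` struct here is a simplified stand-in):

```go
package main

import "fmt"

// Capability is a simplified stand-in for the capability struct discussed
// in this ADR (assumption: only the field needed for the illustration).
type Capability struct {
	index uint64
}

func main() {
	a := &Capability{index: 1}
	b := &Capability{index: 1}

	// Two structurally identical allocations occupy distinct addresses, so
	// pointer identity can serve as an unforgeable in-process token; but the
	// concrete address differs across machines and restarts, which is why it
	// cannot be written into the Merklised store.
	fmt.Println(a == b)   // false: distinct allocations, distinct addresses
	fmt.Println(*a == *b) // true: the values themselves are equal
}
```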
+ +Keepers need a way to keep a private map of store keys which can be altered during transaction execution, along with a suitable mechanism for regenerating the unique memory addresses (capability keys) in this map whenever the application is started or restarted, as well as a mechanism to revert capability creation on tx failure. +This ADR proposes such an interface & mechanism. + +## Decision + +The Cosmos SDK will include a new `CapabilityKeeper` abstraction, which is responsible for provisioning, +tracking, and authenticating capabilities at runtime. During application initialisation in `app.go`, +the `CapabilityKeeper` will be hooked up to modules through unique function references +(by calling `ScopeToModule`, defined below) so that it can identify the calling module when later +invoked. + +When the initial state is loaded from disk, the `CapabilityKeeper`'s `Initialise` function will create +new capability keys for all previously allocated capability identifiers (allocated during execution of +past transactions and assigned to particular modules), and keep them in a memory-only store while the +chain is running. + +The `CapabilityKeeper` will include a persistent `KVStore`, a `MemoryStore`, and an in-memory map. +The persistent `KVStore` tracks which capability is owned by which modules. +The `MemoryStore` stores a forward mapping that maps from (module name, capability) tuples to capability names and +a reverse mapping that maps from (module name, capability name) tuples to the capability index. +Since we cannot marshal the capability into a `KVStore` and unmarshal without changing the memory location of the capability, +the reverse mapping in the `MemoryStore` will simply map to an index. This index can then be used as a key in the ephemeral +go-map to retrieve the capability at the original memory location.
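The forward/reverse mapping scheme just described can be sketched with plain Go maps (an illustrative assumption: the real design uses memory-backed stores, not Go maps, for the fwd/rev entries; names like `capStore` below are hypothetical):

```go
package main

import "fmt"

// Capability is a simplified stand-in for the ADR's capability struct.
type Capability struct {
	index uint64
}

// capStore sketches the three structures described above: fwd maps
// "<module>/fwd/<index>" to a capability name, rev maps "<module>/rev/<name>"
// to the capability index, and capMap recovers the pointer at its original
// memory location from that index.
type capStore struct {
	fwd    map[string]string
	rev    map[string]uint64
	capMap map[uint64]*Capability
}

func newCapStore() *capStore {
	return &capStore{
		fwd:    map[string]string{},
		rev:    map[string]uint64{},
		capMap: map[uint64]*Capability{},
	}
}

func (s *capStore) newCapability(module, name string, index uint64) *Capability {
	c := &Capability{index: index}
	s.fwd[fmt.Sprintf("%s/fwd/%d", module, index)] = name
	s.rev[module+"/rev/"+name] = index
	s.capMap[index] = c
	return c
}

func (s *capStore) getCapability(module, name string) *Capability {
	// reverse lookup yields the index; the index recovers the pointer
	return s.capMap[s.rev[module+"/rev/"+name]]
}

func main() {
	s := newCapStore()
	created := s.newCapability("mod1", "channelA", 0)
	fetched := s.getCapability("mod1", "channelA")
	fmt.Println(created == fetched) // true: the original memory location is recovered
}
```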
+ +The `CapabilityKeeper` will define the following types & functions: + +The `Capability` is similar to `StoreKey`, but has a globally unique `Index()` instead of +a name. A `String()` method is provided for debugging. + +A `Capability` is simply a struct, the address of which is taken for the actual capability. + +```go +type Capability struct { + index uint64 +} +``` + +A `CapabilityKeeper` contains a persistent store key, memory store key, and mapping of allocated module names. + +```go +type CapabilityKeeper struct { + persistentKey StoreKey + memKey StoreKey + capMap map[uint64]*Capability + moduleNames map[string]interface{} + sealed bool +} +``` + +The `CapabilityKeeper` provides the ability to create *scoped* sub-keepers which are tied to a +particular module name. These `ScopedCapabilityKeeper`s must be created at application initialisation +and passed to modules, which can then use them to claim capabilities they receive and retrieve +capabilities which they own by name, in addition to creating new capabilities & authenticating capabilities +passed by other modules. + +```go +type ScopedCapabilityKeeper struct { + persistentKey StoreKey + memKey StoreKey + capMap map[uint64]*Capability + moduleName string +} +``` + +`ScopeToModule` is used to create a scoped sub-keeper with a particular name, which must be unique. +It MUST be called before `InitialiseAndSeal`. 
+ +```go +func (ck CapabilityKeeper) ScopeToModule(moduleName string) ScopedCapabilityKeeper { + if ck.sealed { + panic("cannot scope to module via a sealed capability keeper") + } + + if _, ok := ck.moduleNames[moduleName]; ok { + panic(fmt.Sprintf("cannot create multiple scoped keepers for the same module name: %s", moduleName)) + } + + ck.moduleNames[moduleName] = struct{}{} + + return ScopedCapabilityKeeper{ + persistentKey: ck.persistentKey, + memKey: ck.memKey, + capMap: ck.capMap, + moduleName: moduleName, + } +} +``` + +`InitialiseAndSeal` MUST be called exactly once, after loading the initial state and creating all +necessary `ScopedCapabilityKeeper`s, in order to populate the memory store with newly-created +capability keys in accordance with the keys previously claimed by particular modules and prevent the +creation of any new `ScopedCapabilityKeeper`s. + +```go +func (ck CapabilityKeeper) InitialiseAndSeal(ctx Context) { + if ck.sealed { + panic("capability keeper is sealed") + } + + persistentStore := ctx.KVStore(ck.persistentKey) + memStore := ctx.KVStore(ck.memKey) + + // initialise memory store for all names in persistent store + for index, value := range persistentStore.Iter() { + capability := &Capability{index: index} + + for moduleAndCapability := range value { + moduleName, capabilityName := moduleAndCapability.Split("/") + memStore.Set(moduleName + "/fwd/" + capability, capabilityName) + memStore.Set(moduleName + "/rev/" + capabilityName, index) + + ck.capMap[index] = capability + } + } + + ck.sealed = true +} +``` + +`NewCapability` can be called by any module to create a new unique, unforgeable object-capability +reference. The newly created capability is automatically persisted; the calling module need not +call `ClaimCapability`.
+ +```go +func (sck ScopedCapabilityKeeper) NewCapability(ctx Context, name string) (*Capability, error) { + // check name not taken in memory store + if memStore.Get(sck.moduleName + "/rev/" + name) != nil { + return nil, errors.New("name already taken") + } + + // fetch the current index + index := persistentStore.Get("index") + + // create a new capability + capability := &Capability{index: index} + + // set persistent store + persistentStore.Set(index, Set.singleton(sck.moduleName + "/" + name)) + + // update the index + index++ + persistentStore.Set("index", index) + + // set forward mapping in memory store from capability to name + memStore.Set(sck.moduleName + "/fwd/" + capability, name) + + // set reverse mapping in memory store from name to index + memStore.Set(sck.moduleName + "/rev/" + name, index) + + // set the in-memory mapping from index to capability pointer + capMap[index] = capability + + // return the newly created capability + return capability, nil +} +``` + +`AuthenticateCapability` can be called by any module to check that a capability +does in fact correspond to a particular name (the name can be untrusted user input) +with which the calling module previously associated it. + +```go +func (sck ScopedCapabilityKeeper) AuthenticateCapability(name string, capability Capability) bool { + // return whether forward mapping in memory store matches name + return memStore.Get(sck.moduleName + "/fwd/" + capability) == name +} +``` + +`ClaimCapability` allows a module to claim a capability key which it has received from another module +so that future `GetCapability` calls will succeed. + +`ClaimCapability` MUST be called if a module which receives a capability wishes to access it by name +in the future. Capabilities are multi-owner, so if multiple modules have a single `Capability` reference, +they will all own it.

```go
func (sck ScopedCapabilityKeeper) ClaimCapability(ctx Context, capability Capability, name string) error {
  persistentStore := ctx.KVStore(sck.persistentKey)
  memStore := ctx.KVStore(sck.memKey)

  // set forward mapping in memory store from capability to name
  memStore.Set(sck.moduleName + "/fwd/" + capability, name)

  // set reverse mapping in memory store from name to index
  memStore.Set(sck.moduleName + "/rev/" + name, capability.Index())

  // update owner set in persistent store
  owners := persistentStore.Get(capability.Index())
  owners.add(sck.moduleName + "/" + name)
  persistentStore.Set(capability.Index(), owners)

  return nil
}
```

`GetCapability` allows a module to fetch a capability which it has previously claimed by name.
The module is not allowed to retrieve capabilities which it does not own.

```go
func (sck ScopedCapabilityKeeper) GetCapability(ctx Context, name string) (Capability, error) {
  memStore := ctx.KVStore(sck.memKey)

  // fetch the index of the capability using the reverse mapping in the memory store
  index := memStore.Get(sck.moduleName + "/rev/" + name)

  // fetch the capability from the go map using the index
  capability := sck.capMap[index]

  // return the capability
  return capability, nil
}
```

`ReleaseCapability` allows a module to release a capability which it had previously claimed. If no
more owners exist, the capability will be deleted globally.

```go
func (sck ScopedCapabilityKeeper) ReleaseCapability(ctx Context, capability Capability) error {
  persistentStore := ctx.KVStore(sck.persistentKey)
  memStore := ctx.KVStore(sck.memKey)

  name := memStore.Get(sck.moduleName + "/fwd/" + capability)
  if name == nil {
    return errors.New("capability not owned by module")
  }

  // delete forward mapping in memory store
  memStore.Delete(sck.moduleName + "/fwd/" + capability)

  // delete reverse mapping in memory store
  memStore.Delete(sck.moduleName + "/rev/" + name)

  // update owner set in persistent store
  owners := persistentStore.Get(capability.Index())
  owners.remove(sck.moduleName + "/" + name)
  if owners.size() > 0 {
    // there are still other owners, keep the capability around
    persistentStore.Set(capability.Index(), owners)
  } else {
    // no more owners, delete the capability
    persistentStore.Delete(capability.Index())
    delete(sck.capMap, capability.Index())
  }

  return nil
}
```

### Usage patterns

#### Initialisation

Any modules which use dynamic capabilities must be provided a `ScopedCapabilityKeeper` in `app.go`:

```go
ck := NewCapabilityKeeper(persistentKey, memoryKey)
mod1Keeper := NewMod1Keeper(ck.ScopeToModule("mod1"), ....)
mod2Keeper := NewMod2Keeper(ck.ScopeToModule("mod2"), ....)

// other initialisation logic ...

// load initial state...

ck.InitialiseAndSeal(initialContext)
```

#### Creating, passing, claiming and using capabilities

Consider the case where `mod1` wants to create a capability, associate it with a resource (e.g. an IBC channel) by name, then pass it to `mod2` which will use it later:

Module 1 would have the following code:

```go
capability := scopedCapabilityKeeper.NewCapability(ctx, "resourceABC")
mod2Keeper.SomeFunction(ctx, capability, args...)
```

`SomeFunction`, running in module 2, could then claim the capability:

```go
func (k Mod2Keeper) SomeFunction(ctx Context, capability Capability) {
  k.sck.ClaimCapability(ctx, capability, "resourceABC")
  // other logic...
}
```

Later on, module 2 can retrieve that capability by name and pass it to module 1, which will authenticate it against the resource:

```go
func (k Mod2Keeper) SomeOtherFunction(ctx Context, name string) {
  capability := k.sck.GetCapability(ctx, name)
  mod1.UseResource(ctx, capability, "resourceABC")
}
```

Module 1 will then check that this capability key is authenticated to use the resource before allowing module 2 to use it:

```go
func (k Mod1Keeper) UseResource(ctx Context, capability Capability, resource string) error {
  if !k.sck.AuthenticateCapability(resource, capability) {
    return errors.New("unauthenticated")
  }
  // do something with the resource
  return nil
}
```

If module 2 passed the capability key to module 3, module 3 could then claim it and call module 1 just like module 2 did
(in which case module 1, module 2, and module 3 would all be able to use this capability).

## Status

Proposed.

## Consequences

### Positive

* Dynamic capability support.
* Allows the `CapabilityKeeper` to return the same capability pointer from the go map while reverting any writes to the persistent `KVStore` and in-memory `MemoryStore` on tx failure.

### Negative

* Requires an additional keeper.
* Some overlap with existing `StoreKey` system (in the future they could be combined, since this is a superset functionality-wise).
+* Requires an extra level of indirection in the reverse mapping, since MemoryStore must map to index which must then be used as key in a go map to retrieve the actual capability + +### Neutral + +(none known) + +## References + +* [Original discussion](https://github.com/cosmos/cosmos-sdk/pull/5230#discussion_r343978513) + + + +### ADR 004: Split Denomination Keys + + + +# ADR 004: Split Denomination Keys + +## Changelog + +* 2020-01-08: Initial version +* 2020-01-09: Alterations to handle vesting accounts +* 2020-01-14: Updates from review feedback +* 2020-01-30: Updates from implementation + +### Glossary + +* denom / denomination key -- unique token identifier. + +## Context + +With permissionless IBC, anyone will be able to send arbitrary denominations to any other account. Currently, all non-zero balances are stored along with the account in an `sdk.Coins` struct, which creates a potential denial-of-service concern, as too many denominations will become expensive to load & store each time the account is modified. See issues [5467](https://github.com/cosmos/cosmos-sdk/issues/5467) and [4982](https://github.com/cosmos/cosmos-sdk/issues/4982) for additional context. + +Simply rejecting incoming deposits after a denomination count limit doesn't work, since it opens up a griefing vector: someone could send a user lots of nonsensical coins over IBC, and then prevent the user from receiving real denominations (such as staking rewards). + +## Decision + +Balances shall be stored per-account & per-denomination under a denomination- and account-unique key, thus enabling O(1) read & write access to the balance of a particular account in a particular denomination. + +### Account interface (x/auth) + +`GetCoins()` and `SetCoins()` will be removed from the account interface, since coin balances will +now be stored in & managed by the bank module. 

The vesting account interface will replace `SpendableCoins` in favor of `LockedCoins`, which does
not require the account balance anymore. In addition, `TrackDelegation()` will now accept the
account balance of all tokens denominated in the vesting balance instead of loading the entire
account balance.

Vesting accounts will continue to store original vesting, delegated free, and delegated
vesting coins (which is safe since these cannot contain arbitrary denominations).

### Bank keeper (x/bank)

The following APIs will be added to the `x/bank` keeper:

* `GetAllBalances(ctx Context, addr AccAddress) Coins`
* `GetBalance(ctx Context, addr AccAddress, denom string) Coin`
* `SetBalance(ctx Context, addr AccAddress, coin Coin)`
* `LockedCoins(ctx Context, addr AccAddress) Coins`
* `SpendableCoins(ctx Context, addr AccAddress) Coins`

Additional APIs may be added to facilitate iteration and auxiliary functionality not essential to
core functionality or persistence.

Balances will be stored first by the address, then by the denomination (the reverse is also possible,
but retrieval of all balances for a single account is presumed to be more frequent):

```go
var BalancesPrefix = []byte("balances")

func (k Keeper) SetBalance(ctx Context, addr AccAddress, balance Coin) error {
  if !balance.IsValid() {
    return fmt.Errorf("invalid balance: %s", balance)
  }

  store := ctx.KVStore(k.storeKey)
  balancesStore := prefix.NewStore(store, BalancesPrefix)
  accountStore := prefix.NewStore(balancesStore, addr.Bytes())

  bz := Marshal(balance)
  accountStore.Set([]byte(balance.Denom), bz)

  return nil
}
```

This will result in the balances being indexed by the byte representation of
`balances/{address}/{denom}`.

`DelegateCoins()` and `UndelegateCoins()` will be altered to only load each individual
account balance by denomination found in the (un)delegation amount. As a result,
any mutations to the account balance will be made by denomination.
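To make the key layout concrete, here is a toy sketch of the `balances/{address}/{denom}` scheme using a plain map in place of the KVStore; all names are illustrative, not the SDK's actual implementation. It shows why reading or writing one denomination never touches the rest of the account's balances:

```go
package main

import "fmt"

// balanceKey composes a per-account, per-denomination key under the
// "balances" prefix, mirroring the layout described above.
func balanceKey(addr, denom string) string {
	return "balances/" + addr + "/" + denom
}

func main() {
	// A toy key-value store standing in for the persistent KVStore.
	store := map[string]uint64{}

	// Each write touches exactly one key, regardless of how many other
	// denominations the account holds.
	store[balanceKey("cosmos1abc", "uatom")] = 1000000
	store[balanceKey("cosmos1abc", "ibc27394")] = 42

	// Reading a single denomination is O(1): there is no need to load
	// the account's entire sdk.Coins struct.
	fmt.Println(store[balanceKey("cosmos1abc", "uatom")])
}
```

Iterating all balances of one account would scan only the `balances/{address}/` key range, which is what `GetAllBalances` amounts to.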

`SubtractCoins()` and `AddCoins()` will be altered to read & write the balances
directly instead of calling `GetCoins()` / `SetCoins()` (which no longer exist).

`trackDelegation()` and `trackUndelegation()` will be altered to no longer update
account balances.

External APIs will need to scan all balances under an account to retain backwards-compatibility. It
is advised that these APIs use `GetBalance` and `SetBalance` instead of `GetAllBalances` when
possible so as not to load the entire account balance.

### Supply module

The supply module, in order to implement the total supply invariant, will now need
to scan all accounts & call `GetAllBalances` using the `x/bank` Keeper, then sum
the balances and check that they match the expected total supply.

## Status

Accepted.

## Consequences

### Positive

* O(1) reads & writes of balances (with respect to the number of denominations for
which an account has non-zero balances). Note, this does not relate to the actual
I/O cost, rather the total number of direct reads needed.

### Negative

* Slightly less efficient reads/writes when reading & writing all balances of a
single account in a transaction.

### Neutral

None in particular.

## References

* Ref: https://github.com/cosmos/cosmos-sdk/issues/4982
* Ref: https://github.com/cosmos/cosmos-sdk/issues/5467
* Ref: https://github.com/cosmos/cosmos-sdk/issues/5492



### ADR 006: Secret Store Replacement



# ADR 006: Secret Store Replacement

## Changelog

* July 29th, 2019: Initial draft
* September 11th, 2019: Work has started
* November 4th: Cosmos SDK changes merged in
* November 18th: Gaia changes merged in

## Context

Currently, a Cosmos SDK application's CLI directory stores key material and metadata in a plain text database in the user’s home directory. Key material is encrypted by a passphrase, protected by the bcrypt hashing algorithm. Metadata (e.g.
addresses, public keys, key storage details) is available in plain text.

This is not desirable for a number of reasons. Perhaps the biggest reason is insufficient security protection of key material and metadata. Leaking the plain text allows an attacker to surveil what keys a given computer controls via a number of techniques, such as compromised dependencies, without any privileged execution. This could be followed by a more targeted attack on a particular user/computer.

All modern desktop operating systems (Ubuntu, Debian, macOS, Windows) provide a built-in secret store that is designed to allow applications to store information that is isolated from all other applications and requires passphrase entry to access the data.

We are seeking a solution that provides a common abstraction layer over the many different backends and a reasonable fallback for minimal platforms that don’t provide a native secret store.

## Decision

We recommend replacing the current LevelDB-based Keybase backend with [Keyring](https://github.com/99designs/keyring) by 99designs. This library is designed to provide a common abstraction and uniform interface over many secret stores, and is used by 99designs' AWS Vault application.

This appears to fulfill the requirement of protecting both key material and metadata from rogue software on a user’s machine.

## Status

Accepted

## Consequences

### Positive

Increased safety for users.

### Negative

Users must manually migrate.

Testing against all supported backends is difficult.

Running tests locally on a Mac requires numerous repetitive password entries.

### Neutral

None known.

## References

* #4754 Switch secret store to the keyring secret store (original PR by @poldsam) [__CLOSED__]
* #5029 Add support for github.com/99designs/keyring-backed keybases [__MERGED__]
* #5097 Add keys migrate command [__MERGED__]
* #5180 Drop on-disk keybase in favor of keyring [_PENDING_REVIEW_]
* cosmos/gaia#164 Drop on-disk keybase in favor of keyring (gaia's changes) [_PENDING_REVIEW_]



### ADR 007: Specialization Groups



# ADR 007: Specialization Groups

## Changelog

* 2019 Jul 31: Initial Draft

## Context

This idea was first conceived of in order to fulfill the use case of the
creation of a decentralized Computer Emergency Response Team (dCERT), whose
members would be elected by a governing community and would fulfill the role of
coordinating the community under emergency situations. This thinking
can be further abstracted into the conception of "blockchain specialization
groups".

The creation of these groups is the beginning of specialization capabilities
within a wider blockchain community which could be used to enable a certain
level of delegated responsibilities. Examples of specialization which could be
beneficial to a blockchain community include: code auditing, emergency response,
code development, etc. This type of community organization paves the way for
individual stakeholders to delegate votes by issue type, if in the future
governance proposals include a field for issue type.
+ +## Decision + +A specialization group can be broadly broken down into the following functions +(herein containing examples): + +* Membership Admittance +* Membership Acceptance +* Membership Revocation + * (probably) Without Penalty + * member steps down (self-Revocation) + * replaced by new member from governance + * (probably) With Penalty + * due to breach of soft-agreement (determined through governance) + * due to breach of hard-agreement (determined by code) +* Execution of Duties + * Special transactions which only execute for members of a specialization + group (for example, dCERT members voting to turn off transaction routes in + an emergency scenario) +* Compensation + * Group compensation (further distribution decided by the specialization group) + * Individual compensation for all constituents of a group from the + greater community + +Membership admittance to a specialization group could take place over a wide +variety of mechanisms. The most obvious example is through a general vote among +the entire community, however in certain systems a community may want to allow +the members already in a specialization group to internally elect new members, +or maybe the community may assign a permission to a particular specialization +group to appoint members to other 3rd party groups. The sky is really the limit +as to how membership admittance can be structured. We attempt to capture +some of these possibilities in a common interface dubbed the `Electionator`. For +its initial implementation as a part of this ADR we recommend that the general +election abstraction (`Electionator`) is provided as well as a basic +implementation of that abstraction which allows for a continuous election of +members of a specialization group. + +``` golang +// The Electionator abstraction covers the concept space for +// a wide variety of election kinds. +type Electionator interface { + + // is the election object accepting votes. 
+ Active() bool + + // functionality to execute for when a vote is cast in this election, here + // the vote field is anticipated to be marshalled into a vote type used + // by an election. + // + // NOTE There are no explicit ids here. Just votes which pertain specifically + // to one electionator. Anyone can create and send a vote to the electionator item + // which will presumably attempt to marshal those bytes into a particular struct + // and apply the vote information in some arbitrary way. There can be multiple + // Electionators within the Cosmos-Hub for multiple specialization groups, votes + // would need to be routed to the Electionator upstream of here. + Vote(addr sdk.AccAddress, vote []byte) + + // here lies all functionality to authenticate and execute changes for + // when a member accepts being elected + AcceptElection(sdk.AccAddress) + + // Register a revoker object + RegisterRevoker(Revoker) + + // No more revokers may be registered after this function is called + SealRevokers() + + // register hooks to call when an election actions occur + RegisterHooks(ElectionatorHooks) + + // query for the current winner(s) of this election based on arbitrary + // election ruleset + QueryElected() []sdk.AccAddress + + // query metadata for an address in the election this + // could include for example position that an address + // is being elected for within a group + // + // this metadata may be directly related to + // voting information and/or privileges enabled + // to members within a group. + QueryMetadata(sdk.AccAddress) []byte +} + +// ElectionatorHooks, once registered with an Electionator, +// trigger execution of relevant interface functions when +// Electionator events occur. 
type ElectionatorHooks interface {
  AfterVoteCast(addr sdk.AccAddress, vote []byte)
  AfterMemberAccepted(addr sdk.AccAddress)
  AfterMemberRevoked(addr sdk.AccAddress, cause []byte)
}

// Revoker defines the function required for a membership revocation rule-set
// used by a specialization group. This could be used to create self-revoking,
// evidence-based revoking, etc. Revoker types may be created and
// reused for different election types.
//
// When revoking, the "cause" bytes may be arbitrarily marshalled into evidence,
// memos, etc.
type Revoker interface {
  RevokeName() string // identifier for this revoker type
  RevokeMember(addr sdk.AccAddress, cause []byte) error
}
```

A certain level of commonality likely exists between the existing code within
`x/governance` and the required functionality of elections. This common
functionality should be abstracted during implementation. Similarly, for each
vote implementation, client CLI/REST functionality should be abstracted
so that it can be reused for multiple elections.

The specialization group abstraction first extends the `Electionator`
and then further defines traits of the group.

``` golang
type SpecializationGroup interface {
  Electionator
  GetName() string
  GetDescription() string

  // general soft contract the group is expected
  // to fulfill with the greater community
  GetContract() string

  // messages which can be executed by the members of the group
  Handler(ctx sdk.Context, msg sdk.Msg) sdk.Result

  // logic to be executed at endblock; this may for instance
  // include payment of a stipend to the group members
  // for participation in the security group.
  EndBlocker(ctx sdk.Context)
}
```

## Status

> Proposed

## Consequences

### Positive

* increases specialization capabilities of a blockchain
* improves abstractions in `x/gov/` such that they can be used with specialization groups

### Negative

* could be used to increase centralization within a community

### Neutral

## References

* [dCERT ADR](./adr-008-dCERT-group.md)



### ADR 008: Decentralized Computer Emergency Response Team (dCERT) Group



# ADR 008: Decentralized Computer Emergency Response Team (dCERT) Group

## Changelog

* 2019 Jul 31: Initial Draft

## Context

In order to reduce the number of parties involved with handling sensitive
information in an emergency scenario, we propose the creation of a
specialization group named the Decentralized Computer Emergency Response Team
(dCERT). Initially this group's role is intended to be that of a coordinator
between various actors within a blockchain community such as validators,
bug-hunters, and developers. During a time of crisis, the dCERT group would
aggregate and relay input from a variety of stakeholders to the developers who
are actively devising a patch to the software; this way, sensitive information
does not need to be publicly disclosed while some input from the community can
still be gained.

Additionally, a special privilege is proposed for the dCERT group: the capacity
to "circuit-break" (i.e. temporarily disable) a particular message path. Note
that this privilege should be enabled/disabled globally with a governance
parameter such that this privilege could start disabled and later be enabled
through a parameter change proposal, once a dCERT group has been established.
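The governance-gated circuit-break privilege described above can be sketched as a small piece of state consulted before routing a message; all names here are hypothetical, not the proposed implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// CircuitBreaker sketches the dCERT route-disabling privilege. The
// PrivilegeEnabled field stands in for the proposed governance parameter.
type CircuitBreaker struct {
	PrivilegeEnabled bool
	broken           map[string]bool
}

// Break disables a message route, but only while governance has enabled
// the circuit-break privilege.
func (cb *CircuitBreaker) Break(route string) error {
	if !cb.PrivilegeEnabled {
		return errors.New("circuit-break privilege disabled by governance")
	}
	cb.broken[route] = true
	return nil
}

// CheckRoute would be consulted at CheckTx time, before routing a message.
func (cb *CircuitBreaker) CheckRoute(route string) error {
	if cb.broken[route] {
		return fmt.Errorf("route %s is circuit-broken", route)
	}
	return nil
}

func main() {
	cb := &CircuitBreaker{PrivilegeEnabled: true, broken: map[string]bool{}}
	_ = cb.Break("staking/delegate")
	fmt.Println(cb.CheckRoute("staking/delegate"))
}
```

Starting with `PrivilegeEnabled: false` models the proposed default, where the privilege only becomes active after a parameter change proposal passes.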

In the future it is foreseeable that the community may wish to expand the roles
of dCERT with further responsibilities such as the capacity to "pre-approve" a
security update on behalf of the community prior to a full community-wide
vote, whereby the sensitive information would be revealed prior to a
vulnerability being patched on the live network.

## Decision

The dCERT group is proposed to include an implementation of a `SpecializationGroup`
as defined in [ADR 007](./adr-007-specialization-groups.md). This will include the
implementation of:

* continuous voting
* slashing due to breach of soft contract
* revoking a member due to breach of soft contract
* emergency disband of the entire dCERT group (ex. for colluding maliciously)
* compensation stipend from the community pool or other means decided by
  governance

This system necessitates the following new parameters:

* per-block stipend allowance per dCERT member
* maximum number of dCERT members
* required staked slashable tokens for each dCERT member
* quorum for suspending a particular member
* proposal wager for disbanding the dCERT group
* stabilization period for dCERT member transition
* whether circuit-break dCERT privileges are enabled

These parameters are expected to be implemented through the param keeper such
that governance may change them at any given point.

### Continuous Voting Electionator

An `Electionator` object is to be implemented as continuous voting and with the
following specifications:

* All delegation addresses may submit votes at any point, which updates their
  preferred representation on the dCERT group.
* Preferred representation may be arbitrarily split between addresses (ex. 50%
  to John, 25% to Sally, 25% to Carol)
* In order for a new member to be added to the dCERT group, they must
  send a transaction accepting their admission, at which point the validity of
  their admission is to be confirmed.
  * A sequence number is assigned when a member is added to the dCERT group.
    If a member leaves the dCERT group and then enters back, a new sequence number
    is assigned.
* Addresses which control the greatest amount of preferred-representation are
  eligible to join the dCERT group (up to the _maximum number of dCERT members_).
  If the dCERT group is already full and a new member is admitted, the existing
  dCERT member with the lowest amount of votes is kicked from the dCERT group.
  * In the split situation where the dCERT group is full but a vying candidate
    has the same amount of votes as an existing dCERT member, the existing
    member should maintain its position.
  * In the split situation where somebody must be kicked out but the two
    addresses with the smallest number of votes have the same number of votes,
    the address with the smallest sequence number maintains its position.
* A stabilization period can be optionally included to reduce the
  "flip-flopping" of the dCERT membership tail members. If a stabilization
  period is provided which is greater than 0, when members are kicked due to
  insufficient support, a queue entry is created which documents which member is
  to replace which other member. While this entry is in the queue, no new entries
  to kick that same dCERT member can be made. When the entry matures at the
  duration of the stabilization period, the new member is instantiated, and the
  old member kicked.

### Staking/Slashing

All members of the dCERT group must stake tokens _specifically_ to maintain
eligibility as a dCERT member. These tokens can be staked directly by the vying
dCERT member or out of the goodwill of a 3rd party (who shall gain no on-chain
benefits for doing so). This staking mechanism should use the existing global
unbonding time of tokens staked for network validator security. A dCERT member
can _only be_ a member if it has the required tokens staked under this
mechanism.
If those tokens are unbonded, then the dCERT member must be
automatically kicked from the group.

Slashing of a particular dCERT member due to soft-contract breach should be
performed by governance on a per-member basis based on the magnitude of the
breach. The process flow is anticipated to be that a dCERT member is suspended
by the dCERT group prior to being slashed by governance.

Membership suspension by the dCERT group takes place through a voting procedure
by the dCERT group members. After this suspension has taken place, a governance
proposal to slash the dCERT member must be submitted; if the proposal is not
approved by the time the rescinding member has completed unbonding their
tokens, then the tokens are no longer staked and unable to be slashed.

Additionally, in the case of an emergency situation of a colluding and malicious
dCERT group, the community needs the capability to disband the entire dCERT
group and likely fully slash them. This could be achieved through a special new
proposal type (implemented as a general governance proposal) which would halt
the functionality of the dCERT group until the proposal was concluded. This
special proposal type would likely need to also have a fairly large wager which
could be slashed if the proposal creator was malicious. The reason a large
wager should be required is because as soon as the proposal is made, the
capability of the dCERT group to halt message routes is temporarily
suspended, meaning that a malicious actor who created such a proposal could
then potentially exploit a bug during this period of time, with no dCERT group
capable of shutting down the exploitable message routes.

### dCERT membership transactions

Active dCERT members may submit the following transactions:

* change of the description of the dCERT group
* circuit break a message route
* vote to suspend a dCERT member.

Here circuit-breaking refers to the capability to disable a group of messages.
This could, for instance, mean: "disable all staking-delegation messages", or
"disable all distribution messages". This could be accomplished by verifying
that the message route has not been "circuit-broken" at CheckTx time (in
`baseapp/baseapp.go`).

"Unbreaking" a circuit is anticipated only to occur during a hard-fork upgrade,
meaning that no capability to unbreak a message route on a live chain is
required.

Note also that if there were a problem with governance voting (for instance a
capability to vote many times), then governance would be broken and should be
halted with this mechanism; it would then be up to the validator set to
coordinate a hard-fork upgrade to a patched version of the software where
governance is re-enabled (and fixed). If the dCERT group abuses this privilege,
they should all be severely slashed.

## Status

Proposed

## Consequences

### Positive

* Potential to reduce the number of parties to coordinate with during an emergency
* Reduction in the possibility of disclosing sensitive information to malicious parties

### Negative

* Centralization risks

### Neutral

## References

* [Specialization Groups ADR](./adr-007-specialization-groups.md)



### ADR 009: Evidence Module



# ADR 009: Evidence Module

## Changelog

* 2019 July 31: Initial draft
* 2019 October 24: Initial implementation

## Status

Accepted

## Context

In order to support building highly secure, robust and interoperable blockchain
applications, it is vital for the Cosmos SDK to expose a mechanism in which arbitrary
evidence can be submitted, evaluated and verified, resulting in some agreed-upon
penalty for any misbehavior committed by a validator, such as equivocation (double-voting),
signing when unbonded, signing an incorrect state transition (in the future), etc.

Furthermore, such a mechanism is paramount for any
[IBC](https://github.com/cosmos/ics/blob/master/ibc/2_IBC_ARCHITECTURE.md) or
cross-chain validation protocol implementation in order to support the ability
for any misbehavior to be relayed back from a collateralized chain to a primary
chain so that the equivocating validator(s) can be slashed.

## Decision

We will implement an evidence module in the Cosmos SDK supporting the following
functionality:

* Provide developers with the abstractions and interfaces necessary to define
  custom evidence messages, message handlers, and methods to slash and penalize
  accordingly for misbehavior.
* Support the ability to route evidence messages to handlers in any module to
  determine the validity of submitted misbehavior.
* Support the ability, through governance, to modify slashing penalties of any
  evidence type.
* Querier implementation to support querying params, evidence types, and
  all submitted valid misbehavior.

### Types

First, we define the `Evidence` interface type. The `x/evidence` module may implement
its own types that can be used by many chains (e.g. `CounterFactualEvidence`).
In addition, other modules may implement their own `Evidence` types in a similar
manner to how governance is extensible. It is important to note that any concrete
type implementing the `Evidence` interface may include arbitrary fields such as
an infraction time. We want the `Evidence` type to remain as flexible as possible.

When submitting evidence to the `x/evidence` module, the concrete type must provide
the validator's consensus address, which should be known by the `x/slashing`
module (assuming the infraction is valid), the height at which the infraction
occurred, and the validator's power at that same height.

```go
type Evidence interface {
  Route() string
  Type() string
  String() string
  Hash() HexBytes
  ValidateBasic() error

  // The consensus address of the malicious validator at time of infraction
  GetConsensusAddress() ConsAddress

  // Height at which the infraction occurred
  GetHeight() int64

  // The total power of the malicious validator at time of infraction
  GetValidatorPower() int64

  // The total validator set power at time of infraction
  GetTotalPower() int64
}
```

### Routing & Handling

Each `Evidence` type must map to a specific unique route and be registered with
the `x/evidence` module. It accomplishes this through the `Router` implementation.

```go
type Router interface {
  AddRoute(r string, h Handler) Router
  HasRoute(r string) bool
  GetRoute(path string) Handler
  Seal()
}
```

Upon successful routing through the `x/evidence` module, the `Evidence` type
is passed through a `Handler`. This `Handler` is responsible for executing all
corresponding business logic necessary for verifying the evidence as valid. In
addition, the `Handler` may execute any necessary slashing and potential jailing.
Since slashing fractions will typically result from some form of static function,
allowing the `Handler` to do this provides the greatest flexibility. An example could
be `k * evidence.GetValidatorPower()` where `k` is an on-chain parameter controlled
by governance. The `Evidence` type should provide all the external information
necessary in order for the `Handler` to make the necessary state transitions.
If no error is returned, the `Evidence` is considered valid.

```go
type Handler func(Context, Evidence) error
```

### Submission

`Evidence` is submitted through a `MsgSubmitEvidence` message type, which is internally
handled by the `x/evidence` module's `SubmitEvidence`.
+
+```go
+type MsgSubmitEvidence struct {
+    Evidence
+}
+
+func handleMsgSubmitEvidence(ctx Context, keeper Keeper, msg MsgSubmitEvidence) Result {
+    if err := keeper.SubmitEvidence(ctx, msg.Evidence); err != nil {
+        return err.Result()
+    }
+
+    // emit events...
+
+    return Result{
+        // ...
+    }
+}
+```
+
+The `x/evidence` module's keeper is responsible for matching the `Evidence` against
+the module's router and invoking the corresponding `Handler` which may include
+slashing and jailing the validator. Upon success, the submitted evidence is persisted.
+
+```go
+func (k Keeper) SubmitEvidence(ctx Context, evidence Evidence) error {
+    handler := k.router.GetRoute(evidence.Route())
+    if err := handler(ctx, evidence); err != nil {
+        return ErrInvalidEvidence(k.codespace, err)
+    }
+
+    k.setEvidence(ctx, evidence)
+    return nil
+}
+```
+
+### Genesis
+
+Finally, we need to represent the genesis state of the `x/evidence` module. The
+module only needs a list of all submitted valid infractions and any params the
+module needs in order to handle submitted evidence. The `x/evidence`
+module will naturally define and route native evidence types, for which it will most
+likely need slashing penalty constants.
+
+```go
+type GenesisState struct {
+    Params      Params
+    Infractions []Evidence
+}
+```
+
+## Consequences
+
+### Positive
+
+* Allows the state machine to process misbehavior submitted on-chain and penalize
+  validators based on agreed upon slashing parameters.
+* Allows evidence types to be defined and handled by any module. This further allows
+  slashing and jailing to be defined by more complex mechanisms.
+* Does not solely rely on Tendermint to submit evidence.
+
+### Negative
+
+* No easy way to introduce new evidence types through governance on a live chain
+  due to the inability to introduce the new evidence type's corresponding handler
+
+### Neutral
+
+* Should we persist infractions indefinitely? 
Or should we rather rely on events? + +## References + +* [ICS](https://github.com/cosmos/ics) +* [IBC Architecture](https://github.com/cosmos/ics/blob/master/ibc/1_IBC_ARCHITECTURE.md) +* [Tendermint Fork Accountability](https://github.com/tendermint/spec/blob/7b3138e69490f410768d9b1ffc7a17abc23ea397/spec/consensus/fork-accountability.md) + + + +### ADR 010: Modular AnteHandler + + + +# ADR 010: Modular AnteHandler + +## Changelog + +* 2019 Aug 31: Initial draft +* 2021 Sep 14: Superseded by ADR-045 + +## Status + +SUPERSEDED by ADR-045 + +## Context + +The current AnteHandler design allows users to either use the default AnteHandler provided in `x/auth` or to build their own AnteHandler from scratch. Ideally AnteHandler functionality is split into multiple, modular functions that can be chained together along with custom ante-functions so that users do not have to rewrite common antehandler logic when they want to implement custom behavior. + +For example, let's say a user wants to implement some custom signature verification logic. In the current codebase, the user would have to write their own Antehandler from scratch largely reimplementing much of the same code and then set their own custom, monolithic antehandler in the baseapp. Instead, we would like to allow users to specify custom behavior when necessary and combine them with default ante-handler functionality in a way that is as modular and flexible as possible. + +## Proposals + +### Per-Module AnteHandler + +One approach is to use the [ModuleManager](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/module) and have each module implement its own antehandler if it requires custom antehandler logic. The ModuleManager can then be passed in an AnteHandler order in the same way it has an order for BeginBlockers and EndBlockers. The ModuleManager returns a single AnteHandler function that will take in a tx and run each module's `AnteHandle` in the specified order. 
The module manager's AnteHandler is set as the baseapp's AnteHandler.
+
+Pros:
+
+1. Simple to implement
+2. Utilizes the existing ModuleManager architecture
+
+Cons:
+
+1. Improves granularity but still cannot get more granular than a per-module basis. e.g. If auth's `AnteHandle` function is in charge of validating memo and signatures, users cannot swap the signature-checking functionality while keeping the rest of auth's `AnteHandle` functionality.
+2. Module AnteHandlers are run one after the other. There is no way for one AnteHandler to wrap or "decorate" another.
+
+### Decorator Pattern
+
+The [weave project](https://github.com/iov-one/weave) achieves AnteHandler modularity through the use of a decorator pattern. The interface is designed as follows:
+
+```go
+// Decorator wraps a Handler to provide common functionality
+// like authentication, or fee-handling, to many Handlers
+type Decorator interface {
+    Check(ctx Context, store KVStore, tx Tx, next Checker) (*CheckResult, error)
+    Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) (*DeliverResult, error)
+}
+```
+
+Each decorator works like a modularized Cosmos SDK antehandler function, but it can take in a `next` argument that may be another decorator or a Handler (which does not take in a next argument). These decorators can be chained together, one decorator being passed in as the `next` argument of the previous decorator in the chain. The chain ends in a Router which can take a tx and route to the appropriate msg handler.
+
+A key benefit of this approach is that one Decorator can wrap its internal logic around the next Checker/Deliverer. A weave Decorator may do the following:
+
+```go
+// Example Decorator's Deliver function
+func (example Decorator) Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) (*DeliverResult, error) {
+    // Do some pre-processing logic
+
+    res, err := next.Deliver(ctx, store, tx)
+
+    // Do some post-processing logic given the result and error
+    return res, err
+}
+```
+
+Pros:
+
+1. 
Weave Decorators can wrap over the next decorator/handler in the chain. The ability to both pre-process and post-process may be useful in certain settings.
+2. Provides a nested modular structure that isn't possible in the solution above, while also allowing for a linear one-after-the-other structure like the solution above.
+
+Cons:
+
+1. It is hard to understand at first glance the state updates that would occur after a Decorator runs given the `ctx`, `store`, and `tx`. A Decorator can have an arbitrary number of nested Decorators being called within its function body, each possibly doing some pre- and post-processing before calling the next decorator on the chain. Thus, to understand what a Decorator is doing, one must also understand what every other decorator further along the chain is doing. This can get quite complicated to understand. A linear, one-after-the-other approach, while less powerful, may be much easier to reason about.
+
+### Chained Micro-Functions
+
+The benefit of Weave's approach is that the Decorators can be very concise, which when chained together allows for maximum customizability. However, the nested structure can get quite complex and thus hard to reason about.
+
+Another approach is to split the AnteHandler functionality into tightly scoped "micro-functions", while preserving the one-after-the-other ordering that would come from the ModuleManager approach.
+
+We can then have a way to chain these micro-functions so that they run one after the other. Modules may define multiple ante micro-functions and then also provide a default per-module AnteHandler that implements a default, suggested order for these micro-functions.
+
+Users can order the AnteHandlers easily by simply using the ModuleManager. The ModuleManager will take in a list of AnteHandlers and return a single AnteHandler that runs each AnteHandler in the order of the list provided. 
If the user is comfortable with the default ordering of each module, this is as simple as providing a list with each module's antehandler (exactly the same as BeginBlocker and EndBlocker).
+
+If, however, users wish to change the order or add, modify, or delete ante micro-functions in any way, they can always define their own ante micro-functions and add them explicitly to the list that gets passed into the module manager.
+
+#### Default Workflow
+
+This is an example of a user's AnteHandler if they choose not to make any custom micro-functions.
+
+##### Cosmos SDK code
+
+```go
+// Chains together a list of AnteHandler micro-functions that get run one after the other.
+// Returned AnteHandler will abort on first error.
+func Chainer(order []AnteHandler) AnteHandler {
+    return func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+        for _, ante := range order {
+            ctx, err = ante(ctx, tx, simulate)
+            if err != nil {
+                return ctx, err
+            }
+        }
+        return ctx, err
+    }
+}
+```
+
+```go
+// AnteHandler micro-function to verify signatures
+func VerifySignatures(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+    // verify signatures
+    // Returns InvalidSignature Result and abort=true if sigs invalid
+    // Return OK result and abort=false if sigs are valid
+}
+
+// AnteHandler micro-function to validate memo
+func ValidateMemo(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+    // validate memo
+}
+
+// Auth defines its own default ante-handler by chaining its micro-functions in a recommended order
+AuthModuleAnteHandler := Chainer([]AnteHandler{VerifySignatures, ValidateMemo})
+```
+
+```go
+// Distribution micro-function to deduct fees from tx
+func DeductFees(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+    // Deduct fees from tx
+    // Abort if insufficient funds in account to pay for fees
+}
+
+// Distribution micro-function to check if fees > mempool parameter
+func CheckMempoolFees(ctx Context, tx Tx, simulate 
bool) (newCtx Context, err error) {
+    // If CheckTx: Abort if the fees are less than the mempool's minFee parameter
+}
+
+// Distribution defines its own default ante-handler by chaining its micro-functions in a recommended order
+DistrModuleAnteHandler := Chainer([]AnteHandler{CheckMempoolFees, DeductFees})
+```
+
+```go
+type ModuleManager struct {
+    // other fields
+    AnteHandlerOrder []AnteHandler
+}
+
+func (mm ModuleManager) GetAnteHandler() AnteHandler {
+    return Chainer(mm.AnteHandlerOrder)
+}
+```
+
+##### User Code
+
+```go
+// Note: Since user is not making any custom modifications, we can just SetAnteHandlerOrder with the default AnteHandlers provided by each module in our preferred order
+moduleManager.SetAnteHandlerOrder([]AnteHandler{AuthModuleAnteHandler, DistrModuleAnteHandler})
+
+app.SetAnteHandler(mm.GetAnteHandler())
+```
+
+#### Custom Workflow
+
+This is an example workflow for a user that wants to implement custom antehandler logic. In this example, the user wants to implement custom signature verification and change the order of antehandler so that validate memo runs before signature verification.
+
+##### User Code
+
+```go
+// User can implement their own custom signature verification antehandler micro-function
+func CustomSigVerify(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+    // do some custom signature verification logic
+}
+```
+
+```go
+// Micro-functions allow users to change order of when they get executed, and swap out default ante-functionality with their own custom logic.
+// Note that users can still chain the default distribution module handler, and auth micro-function along with their custom ante function
+moduleManager.SetAnteHandlerOrder([]AnteHandler{ValidateMemo, CustomSigVerify, DistrModuleAnteHandler})
+```
+
+Pros:
+
+1. Allows for ante functionality to be as modular as possible.
+2. 
For users that do not need custom ante-functionality, there is little difference between how antehandlers work and how BeginBlock and EndBlock work in ModuleManager.
+3. Still easy to understand
+
+Cons:
+
+1. Cannot wrap antehandlers with decorators like you can with Weave.
+
+### Simple Decorators
+
+This approach takes inspiration from Weave's decorator design while trying to minimize the number of breaking changes to the Cosmos SDK and maximizing simplicity. Like Weave decorators, this approach allows one `AnteDecorator` to wrap the next AnteHandler to do pre- and post-processing on the result. This is useful since decorators can do defer/cleanups after an AnteHandler returns as well as perform some setup beforehand. Unlike Weave decorators, these `AnteDecorator` functions can only wrap over the AnteHandler rather than the entire handler execution path. This is deliberate as we want decorators from different modules to perform authentication/validation on a `tx`. However, we do not want decorators being capable of wrapping and modifying the results of a `MsgHandler`.
+
+In addition, this approach will not break any core Cosmos SDK APIs. Since we preserve the notion of an AnteHandler and still set a single AnteHandler in baseapp, the decorator is simply an additional approach available for users that desire more customization. The API of modules (namely `x/auth`) may break with this approach, but the core API remains untouched.
+
+We allow an `AnteDecorator` interface whose implementations can be chained together to create a Cosmos SDK AnteHandler.
+
+This allows users to choose between implementing an AnteHandler by themselves and setting it in the baseapp, or using the decorator pattern to chain their custom decorators with the Cosmos SDK provided decorators in the order they wish. 
+
+```go
+// An AnteDecorator wraps an AnteHandler, and can do pre- and post-processing on the next AnteHandler
+type AnteDecorator interface {
+    AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error)
+}
+```
+
+```go
+// ChainAnteDecorators will recursively link all of the AnteDecorators in the chain and return a final AnteHandler function
+// This is done to preserve the ability to set a single AnteHandler function in the baseapp.
+func ChainAnteDecorators(chain ...AnteDecorator) AnteHandler {
+    if len(chain) == 1 {
+        return func(ctx Context, tx Tx, simulate bool) (Context, error) {
+            return chain[0].AnteHandle(ctx, tx, simulate, nil)
+        }
+    }
+    return func(ctx Context, tx Tx, simulate bool) (Context, error) {
+        return chain[0].AnteHandle(ctx, tx, simulate, ChainAnteDecorators(chain[1:]...))
+    }
+}
+```
+
+#### Example Code
+
+Define AnteDecorator functions
+
+```go
+// Setup GasMeter, catch OutOfGasPanic and handle appropriately
+type SetUpContextDecorator struct{}
+
+func (sud SetUpContextDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+    ctx.GasMeter = NewGasMeter(tx.Gas)
+
+    defer func() {
+        // recover from OutOfGas panic and handle appropriately
+    }()
+
+    return next(ctx, tx, simulate)
+}
+
+// Signature Verification decorator. Verify Signatures and move on
+type SigVerifyDecorator struct{}
+
+func (svd SigVerifyDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+    // verify sigs. Return error if invalid
+
+    // call next antehandler if sigs ok
+    return next(ctx, tx, simulate)
+}
+
+// User-defined Decorator. 
Can choose to pre- and post-process on AnteHandler
+type UserDefinedDecorator struct {
+    // custom fields
+}
+
+func (udd UserDefinedDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+    // pre-processing logic
+
+    ctx, err = next(ctx, tx, simulate)
+
+    // post-processing logic
+    return ctx, err
+}
+```
+
+Link AnteDecorators to create a final AnteHandler. Set this AnteHandler in baseapp.
+
+```go
+// Create final antehandler by chaining the decorators together
+antehandler := ChainAnteDecorators(NewSetUpContextDecorator(), NewSigVerifyDecorator(), NewUserDefinedDecorator())
+
+// Set chained Antehandler in the baseapp
+bapp.SetAnteHandler(antehandler)
+```
+
+Pros:
+
+1. Allows one decorator to pre- and post-process the next AnteHandler, similar to the Weave design.
+2. Does not need to break the baseapp API. Users can still set a single AnteHandler if they choose.
+
+Cons:
+
+1. The decorator pattern may have a deeply nested structure that is hard to understand; this is mitigated by having the decorator order explicitly listed in the `ChainAnteDecorators` function.
+2. Does not make use of the ModuleManager design. Since this is already being used for BeginBlocker/EndBlocker, this proposal seems unaligned with that design pattern.
+
+## Consequences
+
+Since pros and cons are written for each approach, they are omitted from this section.
+
+## References
+
+* [#4572](https://github.com/cosmos/cosmos-sdk/issues/4572): Modular AnteHandler Issue
+* [#4582](https://github.com/cosmos/cosmos-sdk/pull/4583): Initial Implementation of Per-Module AnteHandler Approach
+* [Weave Decorator Code](https://github.com/iov-one/weave/blob/master/handler.go#L35)
+* [Weave Design Videos](https://vimeo.com/showcase/6189877)
+
+
+
+### ADR 011: Generalize Genesis Accounts
+
+
+
+# ADR 011: Generalize Genesis Accounts
+
+## Changelog
+
+* 2019-08-30: initial draft
+
+## Context
+
+Currently, the Cosmos SDK allows for custom account types; the `auth` keeper stores any type fulfilling its `Account` interface. However, `auth` does not handle exporting or loading accounts to/from a genesis file; this is done by `genaccounts`, which only handles one of 4 concrete account types (`BaseAccount`, `ContinuousVestingAccount`, `DelayedVestingAccount` and `ModuleAccount`).
+
+Projects desiring to use custom accounts (say custom vesting accounts) need to fork and modify `genaccounts`.
+
+## Decision
+
+In summary, we will (un)marshal all accounts (interface types) directly using amino, rather than converting to `genaccounts`’s `GenesisAccount` type. Since doing this removes the majority of `genaccounts`'s code, we will merge `genaccounts` into `auth`. Marshalled accounts will be stored in `auth`'s genesis state.
+
+Detailed changes:
+
+### 1) (Un)Marshal accounts directly using amino
+
+The `auth` module's `GenesisState` gains a new field `Accounts`. Note these aren't of type `exported.Account` for reasons outlined in section 3. 
+ +```go +// GenesisState - all auth state that must be provided at genesis +type GenesisState struct { + Params Params `json:"params" yaml:"params"` + Accounts []GenesisAccount `json:"accounts" yaml:"accounts"` +} +``` + +Now `auth`'s `InitGenesis` and `ExportGenesis` (un)marshal accounts as well as the defined params. + +```go +// InitGenesis - Init store state from genesis data +func InitGenesis(ctx sdk.Context, ak AccountKeeper, data GenesisState) { + ak.SetParams(ctx, data.Params) + // load the accounts + for _, a := range data.Accounts { + acc := ak.NewAccount(ctx, a) // set account number + ak.SetAccount(ctx, acc) + } +} + +// ExportGenesis returns a GenesisState for a given context and keeper +func ExportGenesis(ctx sdk.Context, ak AccountKeeper) GenesisState { + params := ak.GetParams(ctx) + + var genAccounts []exported.GenesisAccount + ak.IterateAccounts(ctx, func(account exported.Account) bool { + genAccount := account.(exported.GenesisAccount) + genAccounts = append(genAccounts, genAccount) + return false + }) + + return NewGenesisState(params, genAccounts) +} +``` + +### 2) Register custom account types on the `auth` codec + +The `auth` codec must have all custom account types registered to marshal them. We will follow the pattern established in `gov` for proposals. + +An example custom account definition: + +```go +import authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + +// Register the module account type with the auth module codec so it can decode module accounts stored in a genesis file +func init() { + authtypes.RegisterAccountTypeCodec(ModuleAccount{}, "cosmos-sdk/ModuleAccount") +} + +type ModuleAccount struct { + ... +``` + +The `auth` codec definition: + +```go +var ModuleCdc *codec.LegacyAmino + +func init() { + ModuleCdc = codec.NewLegacyAmino() + // register module msg's and Account interface + ... 
+ // leave the codec unsealed +} + +// RegisterAccountTypeCodec registers an external account type defined in another module for the internal ModuleCdc. +func RegisterAccountTypeCodec(o interface{}, name string) { + ModuleCdc.RegisterConcrete(o, name, nil) +} +``` + +### 3) Genesis validation for custom account types + +Modules implement a `ValidateGenesis` method. As `auth` does not know of account implementations, accounts will need to validate themselves. + +We will unmarshal accounts into a `GenesisAccount` interface that includes a `Validate` method. + +```go +type GenesisAccount interface { + exported.Account + Validate() error +} +``` + +Then the `auth` `ValidateGenesis` function becomes: + +```go +// ValidateGenesis performs basic validation of auth genesis data returning an +// error for any failed validation criteria. +func ValidateGenesis(data GenesisState) error { + // Validate params + ... + + // Validate accounts + addrMap := make(map[string]bool, len(data.Accounts)) + for _, acc := range data.Accounts { + + // check for duplicated accounts + addrStr := acc.GetAddress().String() + if _, ok := addrMap[addrStr]; ok { + return fmt.Errorf("duplicate account found in genesis state; address: %s", addrStr) + } + addrMap[addrStr] = true + + // check account specific validation + if err := acc.Validate(); err != nil { + return fmt.Errorf("invalid account found in genesis state; address: %s, error: %s", addrStr, err.Error()) + } + + } + return nil +} +``` + +### 4) Move add-genesis-account cli to `auth` + +The `genaccounts` module contains a cli command to add base or vesting accounts to a genesis file. + +This will be moved to `auth`. We will leave it to projects to write their own commands to add custom accounts. An extensible cli handler, similar to `gov`, could be created but it is not worth the complexity for this minor use case. 
+
+### 5) Update module and vesting accounts
+
+Under the new scheme, module and vesting account types need some minor updates:
+
+* Type registration on `auth`'s codec (shown above)
+* A `Validate` method for each `Account` concrete type
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* custom accounts can be used without needing to fork `genaccounts`
+* reduction in lines of code
+
+### Negative
+
+### Neutral
+
+* `genaccounts` module no longer exists
+* accounts in genesis files are stored under `accounts` in `auth` rather than in the `genaccounts` module.
+* `add-genesis-account` cli command now in `auth`
+
+## References
+
+
+
+### ADR 012: State Accessors
+
+
+
+# ADR 012: State Accessors
+
+## Changelog
+
+* 2019 Sep 04: Initial draft
+
+## Context
+
+Cosmos SDK modules currently use the `KVStore` interface and `Codec` to access their respective state. While
+this provides a large degree of freedom to module developers, it is hard to modularize and the UX is
+mediocre.
+
+First, each time a module tries to access the state, it has to marshal the value and set or get the
+value and finally unmarshal. Usually this is done by declaring `Keeper.GetXXX` and `Keeper.SetXXX` functions,
+which are repetitive and hard to maintain.
+
+Second, this makes it harder to align with the object capability theorem: the right to access the
+state is defined as a `StoreKey`, which gives full access on the entire Merkle tree, so a module cannot
+send the access right to a specific key-value pair (or a set of key-value pairs) to another module safely.
+
+Finally, because the getter/setter functions are defined as methods of a module's `Keeper`, the reviewers
+have to consider the whole Merkle tree space when reviewing a function accessing any part of the state.
+There is no static way to know which part of the state the function is accessing (and which it is not). 
+ +## Decision + +We will define a type named `Value`: + +```go +type Value struct { + m Mapping + key []byte +} +``` + +The `Value` works as a reference for a key-value pair in the state, where `Value.m` defines the key-value +space it will access and `Value.key` defines the exact key for the reference. + +We will define a type named `Mapping`: + +```go +type Mapping struct { + storeKey sdk.StoreKey + cdc *codec.LegacyAmino + prefix []byte +} +``` + +The `Mapping` works as a reference for a key-value space in the state, where `Mapping.storeKey` defines +the IAVL (sub-)tree and `Mapping.prefix` defines the optional subspace prefix. + +We will define the following core methods for the `Value` type: + +```go +// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal +func (Value) Get(ctx Context, ptr interface{}) {} + +// Get and unmarshal stored data, return error if not exists or cannot unmarshal +func (Value) GetSafe(ctx Context, ptr interface{}) {} + +// Get stored data as raw byte slice +func (Value) GetRaw(ctx Context) []byte {} + +// Marshal and set a raw value +func (Value) Set(ctx Context, o interface{}) {} + +// Check if a raw value exists +func (Value) Exists(ctx Context) bool {} + +// Delete a raw value +func (Value) Delete(ctx Context) {} +``` + +We will define the following core methods for the `Mapping` type: + +```go +// Constructs key-value pair reference corresponding to the key argument in the Mapping space +func (Mapping) Value(key []byte) Value {} + +// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal +func (Mapping) Get(ctx Context, key []byte, ptr interface{}) {} + +// Get and unmarshal stored data, return error if not exists or cannot unmarshal +func (Mapping) GetSafe(ctx Context, key []byte, ptr interface{}) + +// Get stored data as raw byte slice +func (Mapping) GetRaw(ctx Context, key []byte) []byte {} + +// Marshal and set a raw value +func (Mapping) Set(ctx Context, key []byte, o 
interface{}) {}
+
+// Check if a raw value exists
+func (Mapping) Has(ctx Context, key []byte) bool {}
+
+// Delete a raw value
+func (Mapping) Delete(ctx Context, key []byte) {}
+```
+
+Each method of the `Mapping` type that is passed the arguments `ctx`, `key`, and `args...` will proxy
+the call to `Mapping.Value(key)` with arguments `ctx` and `args...`.
+
+In addition, we will define and provide a common set of types derived from the `Value` type:
+
+```go
+type Boolean struct { Value }
+type Enum struct { Value }
+type Integer struct { Value; enc IntEncoding }
+type String struct { Value }
+// ...
+```
+
+Where the encoding schemes can be different, `o` arguments in core methods are typed, and `ptr` arguments
+in core methods are replaced by explicit return types.
+
+Finally, we will define a family of types derived from the `Mapping` type:
+
+```go
+type Indexer struct {
+    m   Mapping
+    enc IntEncoding
+}
+```
+
+Where the `key` argument in core methods is typed.
+
+Some of the properties of the accessor types are:
+
+* State access happens only when a function which takes a `Context` as an argument is invoked
+* Accessor type structs grant access only to the part of the state that the struct refers to, and no other
+* Marshalling/Unmarshalling happens implicitly within the core methods
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Serialization will be done automatically
+* Shorter code size, less boilerplate, better UX
+* References to the state can be transferred safely
+* Explicit scope of accessing
+
+### Negative
+
+* Serialization format will be hidden
+* Different architecture from the current, but the use of accessor types can be opt-in
+* Type-specific types (e.g. 
`Boolean` and `Integer`) have to be defined manually
+
+### Neutral
+
+## References
+
+* [#4554](https://github.com/cosmos/cosmos-sdk/issues/4554)
+
+
+
+### ADR 013: Observability
+
+
+
+# ADR 013: Observability
+
+## Changelog
+
+* 20-01-2020: Initial Draft
+
+## Status
+
+Proposed
+
+## Context
+
+Telemetry is paramount to debugging and understanding what the application is doing and how it is
+performing. We aim to expose metrics from modules and other core parts of the Cosmos SDK.
+
+In addition, we should aim to support multiple configurable sinks that an operator may choose from.
+By default, when telemetry is enabled, the application should track and expose metrics that are
+stored in-memory. The operator may choose to enable additional sinks, where we support only
+[Prometheus](https://prometheus.io/) for now, as it's battle-tested, simple to set up, open source,
+and is rich with ecosystem tooling.
+
+We must also aim to integrate metrics into the Cosmos SDK in the most seamless way possible such that
+metrics may be added or removed at will and without much friction. To do this, we will use the
+[go-metrics](https://github.com/hashicorp/go-metrics) library.
+
+Finally, operators may enable telemetry along with specific configuration options. If enabled, metrics
+will be exposed via `/metrics?format={text|prometheus}` via the API server.
+
+## Decision
+
+We will add an additional configuration block to `app.toml` that defines telemetry settings:
+
+```toml
+###############################################################################
+###                        Telemetry Configuration                          ###
+###############################################################################
+
+[telemetry]
+
+# Prefixed with keys to separate services
+service-name = {{ .Telemetry.ServiceName }}
+
+# Enabled enables the application telemetry functionality. When enabled,
+# an in-memory sink is also enabled by default. Operators may also enable
+# other sinks such as Prometheus. 
+enabled = {{ .Telemetry.Enabled }} + +# Enable prefixing gauge values with hostname +enable-hostname = {{ .Telemetry.EnableHostname }} + +# Enable adding hostname to labels +enable-hostname-label = {{ .Telemetry.EnableHostnameLabel }} + +# Enable adding service to labels +enable-service-label = {{ .Telemetry.EnableServiceLabel }} + +# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink. +prometheus-retention-time = {{ .Telemetry.PrometheusRetentionTime }} +``` + +The given configuration allows for two sinks -- in-memory and Prometheus. We create a `Metrics` +type that performs all the bootstrapping for the operator, so capturing metrics becomes seamless. + +```go +// Metrics defines a wrapper around application telemetry functionality. It allows +// metrics to be gathered at any point in time. When creating a Metrics object, +// internally, a global metrics is registered with a set of sinks as configured +// by the operator. In addition to the sinks, when a process gets a SIGUSR1, a +// dump of formatted recent metrics will be sent to STDERR. +type Metrics struct { + memSink *metrics.InmemSink + prometheusEnabled bool +} + +// Gather collects all registered metrics and returns a GatherResponse where the +// metrics are encoded depending on the type. Metrics are either encoded via +// Prometheus or JSON if in-memory. +func (m *Metrics) Gather(format string) (GatherResponse, error) { + switch format { + case FormatPrometheus: + return m.gatherPrometheus() + + case FormatText: + return m.gatherGeneric() + + case FormatDefault: + return m.gatherGeneric() + + default: + return GatherResponse{}, fmt.Errorf("unsupported metrics format: %s", format) + } +} +``` + +In addition, `Metrics` allows us to gather the current set of metrics at any given point in time. An +operator may also choose to send a signal, SIGUSR1, to dump and print formatted metrics to STDERR. 
+
+During an application's bootstrapping and construction phase, if `Telemetry.Enabled` is `true`, the
+API server will create a `Metrics` object and register a metrics
+handler accordingly.
+
+```go
+func (s *Server) Start(cfg config.Config) error {
+    // ...
+
+    if cfg.Telemetry.Enabled {
+        m, err := telemetry.New(cfg.Telemetry)
+        if err != nil {
+            return err
+        }
+
+        s.metrics = m
+        s.registerMetrics()
+    }
+
+    // ...
+}
+
+func (s *Server) registerMetrics() {
+    metricsHandler := func(w http.ResponseWriter, r *http.Request) {
+        format := strings.TrimSpace(r.FormValue("format"))
+
+        gr, err := s.metrics.Gather(format)
+        if err != nil {
+            rest.WriteErrorResponse(w, http.StatusBadRequest, fmt.Sprintf("failed to gather metrics: %s", err))
+            return
+        }
+
+        w.Header().Set("Content-Type", gr.ContentType)
+        _, _ = w.Write(gr.Metrics)
+    }
+
+    s.Router.HandleFunc("/metrics", metricsHandler).Methods("GET")
+}
+```
+
+Application developers may track counters, gauges, summaries, and key/value metrics. There is no
+additional lifting required by modules to leverage profiling metrics. To do so, it's as simple as:
+
+```go
+func (k BaseKeeper) MintCoins(ctx sdk.Context, moduleName string, amt sdk.Coins) error {
+    defer metrics.MeasureSince(time.Now(), "MintCoins")
+    // ...
+}
+```
+
+## Consequences
+
+### Positive
+
+* Exposure into the performance and behavior of an application
+
+### Negative
+
+### Neutral
+
+## References
+
+
+
+### ADR 014: Proportional Slashing
+
+
+
+# ADR 14: Proportional Slashing
+
+## Changelog
+
+* 2019-10-15: Initial draft
+* 2020-05-25: Removed correlation root slashing
+* 2020-07-01: Updated to include S-curve function instead of linear
+
+## Context
+
+In Proof of Stake-based chains, centralization of consensus power amongst a small set of validators can cause harm to the network due to increased risk of censorship, liveness failure, fork attacks, etc. 
However, while this centralization causes a negative externality to the network, it is not directly felt by the delegators who delegate to already large validators. We would like a way to pass the negative externality cost of centralization on to those large validators and their delegators.
+
+## Decision
+
+### Design
+
+To solve this problem, we will implement a procedure called Proportional Slashing. The intent is that the larger a validator is, the more it should be slashed. The first naive attempt is to make a validator's slash percent proportional to its share of consensus voting power.
+
+```text
+slash_amount = k * power // power is the faulting validator's voting power and k is some on-chain constant
+```
+
+However, this will incentivize validators with large amounts of stake to split up their voting power amongst accounts (a sybil attack), so that if they fault, they all get slashed at a lower percent. The solution to this is to take into account not just a validator's own voting percentage, but also the voting percentage of all the other validators who get slashed in a specified time frame.
+
+```text
+slash_amount = k * (power_1 + power_2 + ... + power_n) // where power_i is the voting power of the ith validator faulting in the specified time frame and k is some on-chain constant
+```
+
+Now, if someone splits a validator of 10% into two validators of 5% each and both fault in the same time frame, they both get slashed at the summed 10% rate.
+
+However, in practice we likely don't want a linear relation between the amount of stake at fault and the percentage of stake to slash. In particular, 5% of stake double signing does relatively little to threaten security, whereas 30% of stake being at fault clearly merits a large slashing factor, because it is very close to the point at which Tendermint security is threatened.
A linear relation would require a factor-of-6 gap between these two, whereas the difference in risk posed to the network is much larger. We propose using S-curves (formally, [logistic functions](https://en.wikipedia.org/wiki/Logistic_function)) to solve this. S-curves capture the desired criterion quite well. They allow the slashing factor to be minimal for small values, and then grow very rapidly near some threshold point where the risk posed becomes notable.
+
+#### Parameterization
+
+This requires parameterizing a logistic function. It is very well understood how to parameterize this. It has four parameters:
+
+1) A minimum slashing factor
+2) A maximum slashing factor
+3) The inflection point of the S-curve (essentially where you want to center the S)
+4) The rate of growth of the S-curve (how elongated the S is)
+
+#### Correlation across non-sybil validators
+
+One will note that this model doesn't differentiate between multiple validators run by the same operator and validators run by different operators. This can in fact be seen as an additional benefit. It incentivizes validators to differentiate their setups from other validators, to avoid correlated faults with them, or else they risk a higher slash. So, for example, operators should avoid using the same popular cloud hosting platforms or the same Staking-as-a-Service providers. This will lead to a more resilient and decentralized network.
+
+#### Griefing
+
+Griefing, the act of intentionally getting oneself slashed in order to make another's slash worse, could be a concern here. However, under the protocol described here, the attacker is impacted by the grief just as much as the victim, so it would not provide much benefit to the griefer.
+
+### Implementation
+
+In the slashing module, we will add two queues that will track all of the recent slash events. For double sign faults, we will define "recent slashes" as ones that have occurred within the last `unbonding period`.
For liveness faults, we will define "recent slashes" as ones that have occurred within the last `jail period`.
+
+```go
+type SlashEvent struct {
+    Address                sdk.ValAddress
+    ValidatorVotingPercent sdk.Dec
+    SlashedSoFar           sdk.Dec
+}
+```
+
+These slash events will be pruned from the queue once they are older than their respective "recent slash period".
+
+Whenever a new slash occurs, a `SlashEvent` struct is created with the faulting validator's voting percent and a `SlashedSoFar` of 0. Because recent slash events are pruned before the unbonding period and unjail period expire, it should not be possible for the same validator to have multiple `SlashEvent`s in the same queue at the same time.
+
+We then iterate over all the `SlashEvent`s in the queue, adding up their `ValidatorVotingPercent` to calculate the new percent at which to slash all the validators in the queue, using the slashing formula introduced above.
+
+Once we have the `NewSlashPercent`, we iterate over all the `SlashEvent`s in the queue once again, and if `NewSlashPercent > SlashedSoFar` for that `SlashEvent`, we call `staking.Slash(slashEvent.Address, slashEvent.Power, Math.Min(Math.Max(minSlashPercent, NewSlashPercent - SlashedSoFar), maxSlashPercent))` (we pass in the power of the validator before any slashes occurred, so that we slash the right amount of tokens). We then set the `SlashEvent.SlashedSoFar` amount to `NewSlashPercent`.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Increases decentralization by disincentivizing delegating to large validators
+* Incentivizes decorrelation of validators
+* More severely punishes attacks than accidental faults
+* More flexibility in slashing rates parameterization
+
+### Negative
+
+* More computationally expensive than the current implementation. Will require more data about "recent slashing events" to be stored on chain.
+
+
+### ADR 016: Validator Consensus Key Rotation
+
+
+
+# ADR 016: Validator Consensus Key Rotation
+
+## Changelog
+
+* 2019 Oct 23: Initial draft
+* 2019 Nov 28: Add key rotation fee
+
+## Context
+
+The validator consensus key rotation feature has been discussed and requested for a long time, for the sake of a safer validator key management policy (e.g. https://github.com/tendermint/tendermint/issues/1136). So, we suggest one of the simplest forms of validator consensus key rotation implementation, mostly within the Cosmos SDK.
+
+We don't need to make any update to the consensus logic in Tendermint, because Tendermint does not keep any mapping between consensus keys and validator operator keys. From Tendermint's point of view, a consensus key rotation of a validator is simply the replacement of one consensus key by another.
+
+Also, it should be noted that this ADR includes only the simplest form of consensus key rotation, without considering the multiple consensus keys concept. Such a multiple consensus keys concept shall remain a long-term goal of Tendermint and the Cosmos SDK.
+
+## Decision
+
+### Pseudo procedure for consensus key rotation
+
+* create a new random consensus key.
+* create and broadcast a transaction with a `MsgRotateConsPubKey` that states the new consensus key is now coupled with the validator operator, with a signature from the validator's operator key.
+* the old consensus key becomes unable to participate in consensus immediately after the update of the key mapping state on-chain.
+* start validating with the new consensus key.
+* validators using an HSM and KMS should update the consensus key in the HSM to use the new rotated key after the height `h` when `MsgRotateConsPubKey` is committed to the blockchain.
+
+### Considerations
+
+* consensus key mapping information management strategy
+  * store the history of each key mapping change in the kvstore.
+  * the state machine can search the corresponding consensus key paired with the given validator operator for any arbitrary height within the recent unbonding period.
+  * the state machine does not need any historical mapping information that is older than the unbonding period.
+* key rotation costs related to LCD and IBC
+  * LCD and IBC will have a traffic/computation burden when there exist frequent power changes
+  * in the current Tendermint design, consensus key rotations are seen as power changes from the LCD or IBC perspective
+  * therefore, to minimize unnecessarily frequent key rotations, we limit the maximum number of rotations in the recent unbonding period and also apply an exponentially increasing rotation fee
+* limits
+  * a validator cannot rotate its consensus key more than `MaxConsPubKeyRotations` times within any unbonding period, to prevent spam.
+  * parameters can be decided by governance and stored in the genesis file.
+* key rotation fee
+  * a validator should pay `KeyRotationFee` to rotate the consensus key, which is calculated as below
+  * `KeyRotationFee` = (max(`VotingPowerPercentage` * 100, 1) * `InitialKeyRotationFee`) * 2^(number of rotations in `ConsPubKeyRotationHistory` in the recent unbonding period)
+* evidence module
+  * the evidence module can look up the corresponding consensus key for any height from the slashing keeper, so that it can decide which consensus key is supposed to be used for the given height.
+* abci.ValidatorUpdate
+  * tendermint already has the ability to change a consensus key via ABCI communication (`ValidatorUpdate`).
+  * a validator consensus key update can be done by creating the new key and deleting the old one, by changing the old key's power to zero.
+  * therefore, we expect that we do not need to change the Tendermint codebase at all to implement this feature.
+* new genesis parameters in the `staking` module
+  * `MaxConsPubKeyRotations`: the maximum number of rotations a validator can execute within the recent unbonding period. A default value of 10 is suggested (the 11th key rotation will be rejected).
+  * `InitialKeyRotationFee`: the initial key rotation fee when no key rotation has happened in the recent unbonding period. A default value of 1atom is suggested (a 1atom fee for the first key rotation in the recent unbonding period).
+
+### Workflow
+
+1. The validator generates a new consensus keypair.
+2. The validator generates and signs a `MsgRotateConsPubKey` tx with their operator key and the new ConsPubKey
+
+   ```go
+   type MsgRotateConsPubKey struct {
+       ValidatorAddress sdk.ValAddress
+       NewPubKey        crypto.PubKey
+   }
+   ```
+
+3. `handleMsgRotateConsPubKey` receives the `MsgRotateConsPubKey`, calls `RotateConsPubKey`, and emits an event
+4. `RotateConsPubKey`
+   * checks that `NewPubKey` is not duplicated in `ValidatorsByConsAddr`
+   * checks that the validator does not exceed the parameter `MaxConsPubKeyRotations` by iterating `ConsPubKeyRotationHistory`
+   * checks that the signing account has enough balance to pay `KeyRotationFee`
+   * pays `KeyRotationFee` to the community fund
+   * overwrites `NewPubKey` in `validator.ConsPubKey`
+   * deletes the old `ValidatorByConsAddr`
+   * `SetValidatorByConsAddr` for `NewPubKey`
+   * adds `ConsPubKeyRotationHistory` for tracking rotation
+
+   ```go
+   type ConsPubKeyRotationHistory struct {
+       OperatorAddress sdk.ValAddress
+       OldConsPubKey   crypto.PubKey
+       NewConsPubKey   crypto.PubKey
+       RotatedHeight   int64
+   }
+   ```
+
+5. `ApplyAndReturnValidatorSetUpdates` checks if there is a `ConsPubKeyRotationHistory` with `ConsPubKeyRotationHistory.RotatedHeight == ctx.BlockHeight()` and, if so, generates 2 `ValidatorUpdate`s: one to remove the old validator and one to create the new one
+
+   ```go
+   abci.ValidatorUpdate{
+       PubKey: cmttypes.TM2PB.PubKey(OldConsPubKey),
+       Power:  0,
+   }
+
+   abci.ValidatorUpdate{
+       PubKey: cmttypes.TM2PB.PubKey(NewConsPubKey),
+       Power:  v.ConsensusPower(),
+   }
+   ```
+
+6.
in the `previousVotes` iteration logic of `AllocateTokens`, match each `previousVote` that uses `OldConsPubKey` against `ConsPubKeyRotationHistory`, and substitute the rotated validator for token allocation
+7. Migrate `ValidatorSigningInfo` and `ValidatorMissedBlockBitArray` from `OldConsPubKey` to `NewConsPubKey`
+
+* Note: all of the above features shall be implemented in the `staking` module.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Validators can immediately or periodically rotate their consensus key to maintain a better security policy
+* improved security against long-range attacks (https://nearprotocol.com/blog/long-range-attacks-and-a-new-fork-choice-rule), given a validator throws away the old consensus key(s)
+
+### Negative
+
+* the slashing module needs more computation, because it needs to look up the corresponding consensus key of validators for each height
+* frequent key rotations will make light client bisection less efficient
+
+### Neutral
+
+## References
+
+* on tendermint repo: https://github.com/tendermint/tendermint/issues/1136
+* on cosmos-sdk repo: https://github.com/cosmos/cosmos-sdk/issues/5231
+* about multiple consensus keys: https://github.com/tendermint/tendermint/issues/1758#issuecomment-545291698
+
+
+
+### ADR 017: Historical Header Module
+
+
+
+# ADR 17: Historical Header Module
+
+## Changelog
+
+* 26 November 2019: Start of first version
+* 2 December 2019: Final draft of first version
+
+## Context
+
+In order for the Cosmos SDK to implement the [IBC specification](https://github.com/cosmos/ics), modules within the Cosmos SDK must have the ability to introspect recent consensus states (validator sets & commitment roots), as proofs of these values on other chains must be checked during the handshakes.
+
+## Decision
+
+The application MUST store the most recent `n` headers in a persistent store. At first, this store MAY be the current Merklised store. A non-Merklised store MAY be used later as no proofs are necessary.
+ +The application MUST store this information by storing new headers immediately when handling `abci.RequestBeginBlock`: + +```go +func BeginBlock(ctx sdk.Context, keeper HistoricalHeaderKeeper, req abci.RequestBeginBlock) abci.ResponseBeginBlock { + info := HistoricalInfo{ + Header: ctx.BlockHeader(), + ValSet: keeper.StakingKeeper.GetAllValidators(ctx), // note that this must be stored in a canonical order + } + keeper.SetHistoricalInfo(ctx, ctx.BlockHeight(), info) + n := keeper.GetParamRecentHeadersToStore() + keeper.PruneHistoricalInfo(ctx, ctx.BlockHeight() - n) + // continue handling request +} +``` + +Alternatively, the application MAY store only the hash of the validator set. + +The application MUST make these past `n` committed headers available for querying by Cosmos SDK modules through the `Keeper`'s `GetHistoricalInfo` function. This MAY be implemented in a new module, or it MAY also be integrated into an existing one (likely `x/staking` or `x/ibc`). + +`n` MAY be configured as a parameter store parameter, in which case it could be changed by `ParameterChangeProposal`s, although it will take some blocks for the stored information to catch up if `n` is increased. + +## Status + +Proposed. + +## Consequences + +Implementation of this ADR will require changes to the Cosmos SDK. It will not require changes to Tendermint. + +### Positive + +* Easy retrieval of headers & state roots for recent past heights by modules anywhere in the Cosmos SDK. +* No RPC calls to Tendermint required. +* No ABCI alterations required. + +### Negative + +* Duplicates `n` headers data in Tendermint & the application (additional disk usage) - in the long term, an approach such as [this](https://github.com/tendermint/tendermint/issues/4210) might be preferable. 
+
+### Neutral
+
+(none known)
+
+## References
+
+* [ICS 2: "Consensus state introspection"](https://github.com/cosmos/ibc/tree/master/spec/core/ics-002-client-semantics#consensus-state-introspection)
+
+
+
+### ADR 018: Extendable Voting Periods
+
+
+
+# ADR 18: Extendable Voting Periods
+
+## Changelog
+
+* 1 January 2020: Start of first version
+
+## Context
+
+Currently, the voting period for all governance proposals is the same. This is suboptimal, as not all governance proposals require the same amount of time. Non-contentious proposals can be dealt with efficiently in a shorter period, while more contentious or complex proposals may need a longer period for extended discussion/consideration.
+
+## Decision
+
+We would like to design a mechanism for making the voting period of a governance proposal variable based on the demand of voters. We would like it to be based on the view of the governance participants, rather than just the proposer of a governance proposal (thus, allowing the proposer to select the voting period length is not sufficient).
+
+However, we would like to avoid creating an entire second voting process to determine the length of the voting period, as that would just push the problem to determining the length of that first voting period.
+
+Thus, we propose the following mechanism:
+
+### Params
+
+* The current gov param `VotingPeriod` is to be replaced by a `MinVotingPeriod` param. This is the default voting period with which all governance proposal voting periods start.
+* There is a new gov param called `MaxVotingPeriodExtension`.
+
+### Mechanism
+
+There is a new `Msg` type called `MsgExtendVotingPeriod`, which can be sent by any staked account during a proposal's voting period. It allows the sender to unilaterally extend the length of the voting period by `MaxVotingPeriodExtension * sender's share of voting power`. Every address can only call `MsgExtendVotingPeriod` once per proposal.
+
+So, for example, if `MaxVotingPeriodExtension` is set to 100 days, then anyone with 1% of voting power can extend the voting period by 1 day. If 33% of voting power has sent the message, the voting period will be extended by 33 days. Thus, if absolutely everyone chooses to extend the voting period, the absolute maximum voting period will be `MinVotingPeriod + MaxVotingPeriodExtension`.
+
+This system acts as a sort of distributed coordination, where individual stakers choosing whether or not to extend allows the system to gauge the contentiousness/complexity of the proposal. Since it is extremely unlikely that many stakers will choose to extend at the exact same time, stakers can see how long others have already extended when deciding whether or not to extend further.
+
+### Dealing with Unbonding/Redelegation
+
+There is one thing that needs to be addressed: how to deal with redelegation/unbonding during the voting period. If a staker with 5% calls `MsgExtendVotingPeriod` and then unbonds, does the voting period then decrease by 5 days again? This is not good, as it can give people a false sense of how long they have to make their decision. For this reason, we want to design it such that the voting period length can only be extended, never shortened. To do this, the current extension amount is based on the highest percent that voted to extend at any time. This is best explained by example:
+
+1. Let's say 2 stakers of voting power 4% and 3% respectively vote to extend. The voting period will be extended by 7 days.
+2. Now the staker with 3% decides to unbond before the end of the voting period. The voting period extension remains 7 days.
+3. Now, let's say another staker with 2% voting power decides to extend the voting period. There is now 6% of active voting power choosing to extend. The voting period extension remains 7 days.
+4. If a fourth staker with 10% chooses to extend now, there is a total of 16% of active voting power wishing to extend.
The voting period will be extended to 16 days.
+
+### Delegators
+
+Just like votes in the actual voting period, delegators automatically inherit the extension of their validators. If their validator chooses to extend, their voting power will be counted towards the validator's extension. However, a delegator cannot override their validator and "unextend", as that would contradict the "voting period can only be ratcheted up" principle described in the previous section. A delegator may, however, choose to extend using their personal voting power if their validator has not done so.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* More complex/contentious governance proposals will have more time to be properly digested and deliberated
+
+### Negative
+
+* The governance process becomes more complex and requires more understanding to interact with effectively
+* The end time of a governance proposal can no longer be predicted, so the order in which governance proposals will end cannot be assumed.
+
+### Neutral
+
+* The minimum voting period can be made shorter
+
+## References
+
+* [Cosmos Forum post where the idea first originated](https://forum.cosmos.network/t/proposal-draft-reduce-governance-voting-period-to-7-days/3032/9)
+
+
+
+### ADR 019: Protocol Buffer State Encoding
+
+
+
+# ADR 019: Protocol Buffer State Encoding
+
+## Changelog
+
+* 2020 Feb 15: Initial Draft
+* 2020 Feb 24: Updates to handle messages with interface fields
+* 2020 Apr 27: Convert usages of `oneof` for interfaces to `Any`
+* 2020 May 15: Describe `cosmos_proto` extensions and amino compatibility
+* 2020 Dec 4: Move and rename `MarshalAny` and `UnmarshalAny` into the `codec.Codec` interface.
+* 2021 Feb 24: Remove mentions of `HybridCodec`, which has been abandoned in [#6843](https://github.com/cosmos/cosmos-sdk/pull/6843).
+
+## Status
+
+Accepted
+
+## Context
+
+Currently, the Cosmos SDK utilizes [go-amino](https://github.com/tendermint/go-amino/) for binary
+and JSON object encoding over the wire, bringing parity between logical objects and persistence objects.
+
+From the Amino docs:
+
+> Amino is an object encoding specification. It is a subset of Proto3 with an extension for interface
+> support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more
+> information on Proto3, which Amino is largely compatible with (but not with Proto2).
+>
+> The goal of the Amino encoding protocol is to bring parity into logic objects and persistence objects.
+
+Amino also aims to have the following goals (not a complete list):
+
+* Binary bytes must be decodable with a schema.
+* Schema must be upgradeable.
+* The encoder and decoder logic must be reasonably simple.
+
+However, we believe that Amino does not fulfill these goals completely and does not fully meet the
+needs of a truly flexible cross-language and multi-client compatible encoding protocol in the Cosmos SDK.
+Namely, Amino has proven to be a big pain point with regard to supporting object serialization across
+clients written in various languages, while providing very little in the way of true backwards
+compatibility and upgradeability. Furthermore, through profiling and various benchmarks, Amino has
+been shown to be an extremely large performance bottleneck in the Cosmos SDK. This is
+largely reflected in the performance of simulations and application transaction throughput.
+ +Thus, we need to adopt an encoding protocol that meets the following criteria for state serialization: + +* Language agnostic +* Platform agnostic +* Rich client support and thriving ecosystem +* High performance +* Minimal encoded message size +* Codegen-based over reflection-based +* Supports backward and forward compatibility + +Note, migrating away from Amino should be viewed as a two-pronged approach, state and client encoding. +This ADR focuses on state serialization in the Cosmos SDK state machine. A corresponding ADR will be +made to address client-side encoding. + +## Decision + +We will adopt [Protocol Buffers](https://developers.google.com/protocol-buffers) for serializing +persisted structured data in the Cosmos SDK while providing a clean mechanism and developer UX for +applications wishing to continue to use Amino. We will provide this mechanism by updating modules to +accept a codec interface, `Marshaler`, instead of a concrete Amino codec. Furthermore, the Cosmos SDK +will provide two concrete implementations of the `Marshaler` interface: `AminoCodec` and `ProtoCodec`. + +* `AminoCodec`: Uses Amino for both binary and JSON encoding. +* `ProtoCodec`: Uses Protobuf for both binary and JSON encoding. + +Modules will use whichever codec is instantiated in the app. By default, the Cosmos SDK's `simapp` +instantiates a `ProtoCodec` as the concrete implementation of `Marshaler`, inside the `MakeTestEncodingConfig` +function. This can be easily overwritten by app developers if they so desire. + +The ultimate goal will be to replace Amino JSON encoding with Protobuf encoding and thus have +modules accept and/or extend `ProtoCodec`. Until then, Amino JSON is still provided for legacy use-cases. +A handful of places in the Cosmos SDK still have Amino JSON hardcoded, such as the Legacy API REST endpoints +and the `x/params` store. They are planned to be converted to Protobuf in a gradual manner. 
+
+### Module Codecs
+
+For modules that do not require the ability to work with and serialize interfaces, the path to Protobuf
+migration is pretty straightforward. These modules simply migrate any existing types that
+are encoded and persisted via their concrete Amino codec to Protobuf, and have their keeper accept a
+`Marshaler` that will be a `ProtoCodec`. This migration is simple as things will just work as-is.
+
+Note, any business logic that needs to encode primitive types like `bool` or `int64` should use
+[gogoprotobuf](https://github.com/cosmos/gogoproto) Value types.
+
+Example:
+
+```go
+  ts, err := gogotypes.TimestampProto(completionTime)
+  if err != nil {
+    // ...
+  }
+
+  bz := cdc.MustMarshal(ts)
+```
+
+However, modules can vary greatly in purpose and design, so we must also support the ability for modules
+to encode and work with interfaces (e.g. `Account` or `Content`). Such modules
+must define their own codec interface that extends `Marshaler`. These specific interfaces are unique
+to the module and will contain method contracts that know how to serialize the needed interfaces.
+
+Example:
+
+```go
+// x/auth/types/codec.go
+
+type Codec interface {
+    codec.Codec
+
+    MarshalAccount(acc exported.Account) ([]byte, error)
+    UnmarshalAccount(bz []byte) (exported.Account, error)
+
+    MarshalAccountJSON(acc exported.Account) ([]byte, error)
+    UnmarshalAccountJSON(bz []byte) (exported.Account, error)
+}
+```
+
+### Usage of `Any` to encode interfaces
+
+In general, module-level .proto files should define messages which encode interfaces
+using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto).
+After [extension discussion](https://github.com/cosmos/cosmos-sdk/issues/6030),
+this was chosen as the preferred alternative to application-level `oneof`s
+as in our original protobuf design.
The arguments in favor of `Any` can be
+summarized as follows:
+
+* `Any` provides a simpler, more consistent client UX for dealing with
+interfaces than app-level `oneof`s, which would need to be coordinated more
+carefully across applications. Creating a generic transaction
+signing library using `oneof`s may be cumbersome, and critical logic may need
+to be reimplemented for each chain
+* `Any` provides more resistance against human error than `oneof`
+* `Any` is generally simpler to implement for both modules and apps
+
+The main counter-argument to using `Any` centers around its additional space
+and possible performance overhead. The space overhead could be dealt with using
+compression at the persistence layer in the future, and the performance impact
+is likely to be small. Thus, not using `Any` is seen as a premature optimization,
+with user experience as the higher-order concern.
+
+Note that, given the Cosmos SDK's decision to adopt the `Codec` interfaces described
+above, apps can still choose to use `oneof` to encode state and transactions,
+but it is not the recommended approach. If apps do choose to use `oneof`s
+instead of `Any`, they will likely lose compatibility with client apps that
+support multiple chains. Thus developers should think carefully about whether
+they care more about what is possibly a premature optimization or end-user
+and client-developer UX.
+
+### Safe usage of `Any`
+
+By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types)
+uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540)
+to decode values packed in `Any` into concrete
+go types. This introduces a vulnerability where any malicious module
+in the dependency tree could register a type with the global protobuf registry
+and cause it to be loaded and unmarshaled by a transaction that referenced
+it in the `type_url` field.
+ +To prevent this, we introduce a type registration mechanism for decoding `Any` +values into concrete types through the `InterfaceRegistry` interface which +bears some similarity to type registration with Amino: + +```go +type InterfaceRegistry interface { + // RegisterInterface associates protoName as the public name for the + // interface passed in as iface + // Ex: + // registry.RegisterInterface("cosmos_sdk.Msg", (*sdk.Msg)(nil)) + RegisterInterface(protoName string, iface interface{}) + + // RegisterImplementations registers impls as concrete implementations of + // the interface iface + // Ex: + // registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{}, &MsgMultiSend{}) + RegisterImplementations(iface interface{}, impls ...proto.Message) + +} +``` + +In addition to serving as a whitelist, `InterfaceRegistry` can also serve +to communicate the list of concrete types that satisfy an interface to clients. + +In .proto files: + +* fields which accept interfaces should be annotated with `cosmos_proto.accepts_interface` +using the same full-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface` +* interface implementations should be annotated with `cosmos_proto.implements_interface` +using the same full-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface` + +In the future, `protoName`, `cosmos_proto.accepts_interface`, `cosmos_proto.implements_interface` +may be used via code generation, reflection &/or static linting. + +The same struct that implements `InterfaceRegistry` will also implement an +interface `InterfaceUnpacker` to be used for unpacking `Any`s: + +```go +type InterfaceUnpacker interface { + // UnpackAny unpacks the value in any to the interface pointer passed in as + // iface. Note that the type in any must have been registered with + // RegisterImplementations as a concrete type for that interface + // Ex: + // var msg sdk.Msg + // err := ctx.UnpackAny(any, &msg) + // ... 
+    UnpackAny(any *Any, iface interface{}) error
+}
+```
+
+Note that `InterfaceRegistry` usage does not deviate from standard protobuf
+usage of `Any`; it just introduces a security and introspection layer for
+golang usage.
+
+`InterfaceRegistry` will be a member of the `ProtoCodec`
+described above. In order for modules to register interface types, app modules
+can optionally implement the following interface:
+
+```go
+type InterfaceModule interface {
+    RegisterInterfaceTypes(InterfaceRegistry)
+}
+```
+
+The module manager will include a method to call `RegisterInterfaceTypes` on
+every module that implements it, in order to populate the `InterfaceRegistry`.
+
+### Using `Any` to encode state
+
+The Cosmos SDK will provide support methods `MarshalInterface` and `UnmarshalInterface` to hide the complexity of wrapping interface types into `Any` and allow easy serialization.
+
+```go
+import "github.com/cosmos/cosmos-sdk/codec"
+
+// note: eviexported.Evidence is an interface type
+func MarshalEvidence(cdc codec.BinaryCodec, e eviexported.Evidence) ([]byte, error) {
+    return cdc.MarshalInterface(e)
+}
+
+func UnmarshalEvidence(cdc codec.BinaryCodec, bz []byte) (eviexported.Evidence, error) {
+    var evi eviexported.Evidence
+    err := cdc.UnmarshalInterface(&evi, bz)
+    return evi, err
+}
+```
+
+### Using `Any` in `sdk.Msg`s
+
+A similar concept is to be applied for messages that contain interface fields.
+For example, we can define `MsgSubmitEvidence` as follows, where `Evidence` is
+an interface:
+
+```protobuf
+// x/evidence/types/types.proto
+
+message MsgSubmitEvidence {
+  bytes submitter = 1
+    [
+      (gogoproto.casttype) = "github.com/cosmos/cosmos-sdk/types.AccAddress"
+    ];
+  google.protobuf.Any evidence = 2;
+}
+```
+
+Note that in order to unpack the evidence from `Any` we do need a reference to
+`InterfaceRegistry`.
In order to reference evidence in methods like +`ValidateBasic` which shouldn't have to know about the `InterfaceRegistry`, we +introduce an `UnpackInterfaces` phase to deserialization which unpacks +interfaces before they're needed. + +### Unpacking Interfaces + +To implement the `UnpackInterfaces` phase of deserialization which unpacks +interfaces wrapped in `Any` before they're needed, we create an interface +that `sdk.Msg`s and other types can implement: + +```go +type UnpackInterfacesMessage interface { + UnpackInterfaces(InterfaceUnpacker) error +} +``` + +We also introduce a private `cachedValue interface{}` field onto the `Any` +struct itself with a public getter `GetCachedValue() interface{}`. + +The `UnpackInterfaces` method is to be invoked during message deserialization right +after `Unmarshal` and any interface values packed in `Any`s will be decoded +and stored in `cachedValue` for reference later. + +Then unpacked interface values can safely be used in any code afterwards +without knowledge of the `InterfaceRegistry` +and messages can introduce a simple getter to cast the cached value to the +correct interface type. + +This has the added benefit that unmarshaling of `Any` values only happens once +during initial deserialization rather than every time the value is read. Also, +when `Any` values are first packed (for instance in a call to +`NewMsgSubmitEvidence`), the original interface value is cached so that +unmarshaling isn't needed to read it again. 
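To make the whitelist-plus-cache behavior concrete, here is a small self-contained sketch; `toyAny`, `toyRegistry`, and the byte-decoder registration are invented stand-ins for the SDK's `Any` and `InterfaceRegistry`, not its actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// toyAny mimics google.protobuf.Any: a type URL plus opaque bytes, with the
// private cachedValue field and public getter described above.
type toyAny struct {
	typeURL     string
	value       []byte
	cachedValue interface{}
}

func (a *toyAny) GetCachedValue() interface{} { return a.cachedValue }

// toyRegistry acts as the whitelist: only registered type URLs can be
// decoded into concrete values.
type toyRegistry struct {
	decoders map[string]func([]byte) (interface{}, error)
}

func newToyRegistry() *toyRegistry {
	return &toyRegistry{decoders: map[string]func([]byte) (interface{}, error){}}
}

func (r *toyRegistry) RegisterImplementation(typeURL string, decode func([]byte) (interface{}, error)) {
	r.decoders[typeURL] = decode
}

// UnpackAny decodes the Any's bytes via the registered decoder and caches the
// result, mirroring the UnpackInterfaces phase: later reads use
// GetCachedValue and never re-decode.
func (r *toyRegistry) UnpackAny(a *toyAny) error {
	decode, ok := r.decoders[a.typeURL]
	if !ok {
		return errors.New("unregistered type URL: " + a.typeURL)
	}
	v, err := decode(a.value)
	if err != nil {
		return err
	}
	a.cachedValue = v
	return nil
}

func main() {
	reg := newToyRegistry()
	reg.RegisterImplementation("/toy.Evidence", func(bz []byte) (interface{}, error) {
		return string(bz), nil
	})

	known := &toyAny{typeURL: "/toy.Evidence", value: []byte("equivocation")}
	fmt.Println(reg.UnpackAny(known), known.GetCachedValue())

	// An unregistered type URL is rejected: this is the whitelist in action.
	unknown := &toyAny{typeURL: "/toy.Unknown"}
	fmt.Println(reg.UnpackAny(unknown))
}
```

The key point of the sketch is that decoding happens exactly once, at unpack time, and that unknown type URLs fail loudly instead of being silently accepted.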
+
+`MsgSubmitEvidence` could implement `UnpackInterfaces`, plus a convenience getter
+`GetEvidence` as follows:
+
+```go
+func (msg MsgSubmitEvidence) UnpackInterfaces(ctx sdk.InterfaceRegistry) error {
+ var evi eviexported.Evidence
+ return ctx.UnpackAny(msg.Evidence, &evi)
+}
+
+func (msg MsgSubmitEvidence) GetEvidence() eviexported.Evidence {
+ return msg.Evidence.GetCachedValue().(eviexported.Evidence)
+}
+```
+
+### Amino Compatibility
+
+Our custom implementation of `Any` can be used transparently with Amino if used
+with the proper codec instance. What this means is that interfaces packed within
+`Any`s will be amino marshaled like regular Amino interfaces (assuming they
+have been registered properly with Amino).
+
+In order for this functionality to work:
+
+* **all legacy code must use `*codec.LegacyAmino` instead of `*amino.Codec` which is
+ now a wrapper which properly handles `Any`**
+* **all new code should use `Marshaler` which is compatible with both amino and
+ protobuf**
+* Also, before v0.39, `codec.Codec` will be renamed to `codec.LegacyAmino`.
+
+### Why Wasn't X Chosen Instead
+
+For a more complete comparison to alternative protocols, see [here](https://codeburst.io/json-vs-protocol-buffers-vs-flatbuffers-a4247f8bda6f).
+
+### Cap'n Proto
+
+While [Cap’n Proto](https://capnproto.org/) does seem like an advantageous alternative to Protobuf
+due to its native support for interfaces/generics and built-in canonicalization, it lacks
+Protobuf's rich client ecosystem and is a bit less mature.
+
+### FlatBuffers
+
+[FlatBuffers](https://google.github.io/flatbuffers/) is also a potentially viable alternative, with the
+primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary
+representation before you can access data, often coupled with per-object memory allocation.
+
+However, it would require a significant research effort and a full understanding of the scope of the
+migration and the path forward -- which isn't immediately clear. In addition, FlatBuffers aren't designed for
+untrusted inputs.
+
+## Future Improvements & Roadmap
+
+In the future we may consider a compression layer right above the persistence
+layer which doesn't change tx or merkle tree hashes, but reduces the storage
+overhead of `Any`. In addition, we may adopt protobuf naming conventions which
+make type URLs a bit more concise while remaining descriptive.
+
+Additional code generation support around the usage of `Any` is something that
+could also be explored in the future to make the UX for go developers more
+seamless.
+
+## Consequences
+
+### Positive
+
+* Significant performance gains.
+* Supports backward and forward type compatibility.
+* Better support for cross-language clients.
+
+### Negative
+
+* Learning curve required to understand and implement Protobuf messages.
+* Slightly larger message size due to use of `Any`, although this could be offset
+ by a compression layer in the future
+
+### Neutral
+
+## References
+
+1. https://github.com/cosmos/cosmos-sdk/issues/4977
+2. https://github.com/cosmos/cosmos-sdk/issues/5444
+
+
+
+### ADR 020: Protocol Buffer Transaction Encoding
+
+
+
+# ADR 020: Protocol Buffer Transaction Encoding
+
+## Changelog
+
+* 2020 March 06: Initial Draft
+* 2020 March 12: API Updates
+* 2020 April 13: Added details on interface `oneof` handling
+* 2020 April 30: Switch to `Any`
+* 2020 May 14: Describe public key encoding
+* 2020 June 08: Store `TxBody` and `AuthInfo` as bytes in `SignDoc`; Document `TxRaw` as broadcast and storage type.
+* 2020 August 07: Use ADR 027 for serializing `SignDoc`.
+* 2020 August 19: Move sequence field from `SignDoc` to `SignerInfo`, as discussed in [#6966](https://github.com/cosmos/cosmos-sdk/issues/6966).
+* 2020 September 25: Remove `PublicKey` type in favor of `secp256k1.PubKey`, `ed25519.PubKey` and `multisig.LegacyAminoPubKey`. +* 2020 October 15: Add `GetAccount` and `GetAccountWithHeight` methods to the `AccountRetriever` interface. +* 2021 Feb 24: The Cosmos SDK does not use Tendermint's `PubKey` interface anymore, but its own `cryptotypes.PubKey`. Updates to reflect this. +* 2021 May 3: Rename `clientCtx.JSONMarshaler` to `clientCtx.JSONCodec`. +* 2021 June 10: Add `clientCtx.Codec: codec.Codec`. + +## Status + +Accepted + +## Context + +This ADR is a continuation of the motivation, design, and context established in +[ADR 019](./adr-019-protobuf-state-encoding.md), namely, we aim to design the +Protocol Buffer migration path for the client-side of the Cosmos SDK. + +Specifically, the client-side migration path primarily includes tx generation and +signing, message construction and routing, in addition to CLI & REST handlers and +business logic (i.e. queriers). + +With this in mind, we will tackle the migration path via two main areas, txs and +querying. However, this ADR solely focuses on transactions. Querying should be +addressed in a future ADR, but it should build off of these proposals. + +Based on detailed discussions ([\#6030](https://github.com/cosmos/cosmos-sdk/issues/6030) +and [\#6078](https://github.com/cosmos/cosmos-sdk/issues/6078)), the original +design for transactions was changed substantially from an `oneof` /JSON-signing +approach to the approach described below. + +## Decision + +### Transactions + +Since interface values are encoded with `google.protobuf.Any` in state (see [ADR 019](adr-019-protobuf-state-encoding.md)), +`sdk.Msg`s are encoded with `Any` in transactions. + +One of the main goals of using `Any` to encode interface values is to have a +core set of types which is reused by apps so that +clients can safely be compatible with as many chains as possible. 
+ +It is one of the goals of this specification to provide a flexible cross-chain transaction +format that can serve a wide variety of use cases without breaking the client +compatibility. + +In order to facilitate signing, transactions are separated into `TxBody`, +which will be reused by `SignDoc` below, and `signatures`: + +```protobuf +// types/types.proto +package cosmos_sdk.v1; + +message Tx { + TxBody body = 1; + AuthInfo auth_info = 2; + // A list of signatures that matches the length and order of AuthInfo's signer_infos to + // allow connecting signature meta information like public key and signing mode by position. + repeated bytes signatures = 3; +} + +// A variant of Tx that pins the signer's exact binary representation of body and +// auth_info. This is used for signing, broadcasting and verification. The binary +// `serialize(tx: TxRaw)` is stored in Tendermint and the hash `sha256(serialize(tx: TxRaw))` +// becomes the "txhash", commonly used as the transaction ID. +message TxRaw { + // A protobuf serialization of a TxBody that matches the representation in SignDoc. + bytes body = 1; + // A protobuf serialization of an AuthInfo that matches the representation in SignDoc. + bytes auth_info = 2; + // A list of signatures that matches the length and order of AuthInfo's signer_infos to + // allow connecting signature meta information like public key and signing mode by position. + repeated bytes signatures = 3; +} + +message TxBody { + // A list of messages to be executed. The required signers of those messages define + // the number and order of elements in AuthInfo's signer_infos and Tx's signatures. + // Each required signer address is added to the list only the first time it occurs. + // + // By convention, the first required signer (usually from the first message) is referred + // to as the primary signer and pays the fee for the whole transaction. 
+ repeated google.protobuf.Any messages = 1; + string memo = 2; + int64 timeout_height = 3; + repeated google.protobuf.Any extension_options = 1023; +} + +message AuthInfo { + // This list defines the signing modes for the required signers. The number + // and order of elements must match the required signers from TxBody's messages. + // The first element is the primary signer and the one which pays the fee. + repeated SignerInfo signer_infos = 1; + // The fee can be calculated based on the cost of evaluating the body and doing signature verification of the signers. This can be estimated via simulation. + Fee fee = 2; +} + +message SignerInfo { + // The public key is optional for accounts that already exist in state. If unset, the + // verifier can use the required signer address for this position and lookup the public key. + google.protobuf.Any public_key = 1; + // ModeInfo describes the signing mode of the signer and is a nested + // structure to support nested multisig pubkey's + ModeInfo mode_info = 2; + // sequence is the sequence of the account, which describes the + // number of committed transactions signed by a given address. It is used to prevent + // replay attacks. + uint64 sequence = 3; +} + +message ModeInfo { + oneof sum { + Single single = 1; + Multi multi = 2; + } + + // Single is the mode info for a single signer. 
It is structured as a message
+ // to allow for additional fields such as locale for SIGN_MODE_TEXTUAL in the future
+ message Single {
+ SignMode mode = 1;
+ }
+
+ // Multi is the mode info for a multisig public key
+ message Multi {
+ // bitarray specifies which keys within the multisig are signing
+ CompactBitArray bitarray = 1;
+ // mode_infos is the corresponding modes of the signers of the multisig
+ // which could include nested multisig public keys
+ repeated ModeInfo mode_infos = 2;
+ }
+}
+
+enum SignMode {
+ SIGN_MODE_UNSPECIFIED = 0;
+
+ SIGN_MODE_DIRECT = 1;
+
+ SIGN_MODE_TEXTUAL = 2;
+
+ SIGN_MODE_LEGACY_AMINO_JSON = 127;
+}
+```
+
+As will be discussed below, in order to include as much of the `Tx` as possible
+in the `SignDoc`, `SignerInfo` is separated from signatures so that only the
+raw signatures themselves live outside of what is signed over.
+
+Because we are aiming for a flexible, extensible cross-chain transaction
+format, new transaction processing options should be added to `TxBody` as soon as
+those use cases are discovered, even if they can't be implemented yet.
+
+Because there is coordination overhead in this, `TxBody` includes an
+`extension_options` field which can be used for any transaction processing
+options that are not already covered. App developers should, nevertheless,
+attempt to upstream important improvements to `Tx`.
+
+### Signing
+
+All of the signing modes below aim to provide the following guarantees:
+
+* **No Malleability**: `TxBody` and `AuthInfo` cannot change once the transaction
+ is signed
+* **Predictable Gas**: if I am signing a transaction where I am paying a fee,
+ the final gas is fully dependent on what I am signing
+
+These guarantees give the maximum amount of confidence to message signers that
+manipulation of `Tx`s by intermediaries can't result in any meaningful changes.
+
+#### `SIGN_MODE_DIRECT`
+
+The "direct" signing behavior is to sign the raw `TxBody` bytes as broadcast over
+the wire.
This has the advantages of: + +* requiring the minimum additional client capabilities beyond a standard protocol + buffers implementation +* leaving effectively zero holes for transaction malleability (i.e. there are no + subtle differences between the signing and encoding formats which could + potentially be exploited by an attacker) + +Signatures are structured using the `SignDoc` below which reuses the serialization of +`TxBody` and `AuthInfo` and only adds the fields which are needed for signatures: + +```protobuf +// types/types.proto +message SignDoc { + // A protobuf serialization of a TxBody that matches the representation in TxRaw. + bytes body = 1; + // A protobuf serialization of an AuthInfo that matches the representation in TxRaw. + bytes auth_info = 2; + string chain_id = 3; + uint64 account_number = 4; +} +``` + +In order to sign in the default mode, clients take the following steps: + +1. Serialize `TxBody` and `AuthInfo` using any valid protobuf implementation. +2. Create a `SignDoc` and serialize it using [ADR 027](./adr-027-deterministic-protobuf-serialization.md). +3. Sign the encoded `SignDoc` bytes. +4. Build a `TxRaw` and serialize it for broadcasting. + +Signature verification is based on comparing the raw `TxBody` and `AuthInfo` +bytes encoded in `TxRaw` not based on any ["canonicalization"](https://github.com/regen-network/canonical-proto3) +algorithm which creates added complexity for clients in addition to preventing +some forms of upgradeability (to be addressed later in this document). + +Signature verifiers do: + +1. Deserialize a `TxRaw` and pull out `body` and `auth_info`. +2. Create a list of required signer addresses from the messages. +3. For each required signer: + * Pull account number and sequence from the state. + * Obtain the public key either from state or `AuthInfo`'s `signer_infos`. + * Create a `SignDoc` and serialize it using [ADR 027](./adr-027-deterministic-protobuf-serialization.md). 
+ * Verify the signature at the same list position against the serialized `SignDoc`.
+
+#### `SIGN_MODE_LEGACY_AMINO`
+
+In order to support legacy wallets and exchanges, Amino JSON will be temporarily
+supported for transaction signing. Once wallets and exchanges have had a
+chance to upgrade to protobuf-based signing, this option will be disabled. In
+the meantime, it is foreseen that disabling the current Amino signing would cause
+too much breakage to be feasible. Note that this is mainly a requirement of the
+Cosmos Hub and other chains may choose to disable Amino signing immediately.
+
+Legacy clients will be able to sign a transaction using the current Amino
+JSON format and have it encoded to protobuf using the REST `/tx/encode`
+endpoint before broadcasting.
+
+#### `SIGN_MODE_TEXTUAL`
+
+As was discussed extensively in [\#6078](https://github.com/cosmos/cosmos-sdk/issues/6078),
+there is a desire for a human-readable signing encoding, especially for hardware
+wallets like the [Ledger](https://www.ledger.com) which display
+transaction contents to users before signing. JSON was an attempt at this but
+falls short of the ideal.
+
+`SIGN_MODE_TEXTUAL` is intended as a placeholder for a human-readable
+encoding which will replace Amino JSON. This new encoding should be even more
+focused on readability than JSON, possibly based on formatting strings like
+[MessageFormat](http://userguide.icu-project.org/formatparse/messages).
+
+In order to ensure that the new human-readable format does not suffer from
+transaction malleability issues, `SIGN_MODE_TEXTUAL`
+requires that the _human-readable bytes are concatenated with the raw `SignDoc`_
+to generate sign bytes.
+
+Multiple human-readable formats (maybe even localized messages) may be supported
+by `SIGN_MODE_TEXTUAL` when it is implemented.
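The concatenation rule above can be sketched as follows; `textualSignBytes` is a hypothetical helper (the concrete `SIGN_MODE_TEXTUAL` encoding is intentionally left unspecified by this ADR):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// textualSignBytes follows the rule described above: the human-readable
// rendering is concatenated with the raw (ADR 027 encoded) SignDoc bytes,
// so tampering with either representation invalidates the signature.
func textualSignBytes(humanReadable, rawSignDoc []byte) []byte {
	signBytes := make([]byte, 0, len(humanReadable)+len(rawSignDoc))
	signBytes = append(signBytes, humanReadable...)
	return append(signBytes, rawSignDoc...)
}

func main() {
	human := []byte("send 10 ATOM from alice to bob")
	rawSignDoc := []byte{0x0a, 0x04, 0x08, 0x01} // stand-in bytes, not a real SignDoc
	digest := sha256.Sum256(textualSignBytes(human, rawSignDoc))
	fmt.Printf("sign bytes digest: %x\n", digest[:8])
}
```

Because the signature covers both byte strings, a wallet cannot be tricked into displaying one transaction while signing another.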
+ +### Unknown Field Filtering + +Unknown fields in protobuf messages should generally be rejected by the transaction +processors because: + +* important data may be present in the unknown fields, that if ignored, will + cause unexpected behavior for clients +* they present a malleability vulnerability where attackers can bloat tx size + by adding random uninterpreted data to unsigned content (i.e. the master `Tx`, + not `TxBody`) + +There are also scenarios where we may choose to safely ignore unknown fields +(https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-624400188) to +provide graceful forwards compatibility with newer clients. + +We propose that field numbers with bit 11 set (for most use cases this is +the range of 1024-2047) be considered non-critical fields that can safely be +ignored if unknown. + +To handle this we will need an unknown field filter that: + +* always rejects unknown fields in unsigned content (i.e. top-level `Tx` and + unsigned parts of `AuthInfo` if present based on the signing mode) +* rejects unknown fields in all messages (including nested `Any`s) other than + fields with bit 11 set + +This will likely need to be a custom protobuf parser pass that takes message bytes +and `FileDescriptor`s and returns a boolean result. + +### Public Key Encoding + +Public keys in the Cosmos SDK implement the `cryptotypes.PubKey` interface. +We propose to use `Any` for protobuf encoding as we are doing with other interfaces (for example, in `BaseAccount.PubKey` and `SignerInfo.PublicKey`). +The following public keys are implemented: secp256k1, secp256r1, ed25519 and legacy-multisignature. + +Ex: + +```protobuf +message PubKey { + bytes key = 1; +} +``` + +`multisig.LegacyAminoPubKey` has an array of `Any`'s member to support any +protobuf public key type. + +Apps should only attempt to handle a registered set of public keys that they +have tested. The provided signature verification ante handler decorators will +enforce this. 
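The bit-11 rule can be expressed as a one-line predicate; `nonCritical` is an illustrative name, and the real filter would additionally walk message bytes against `FileDescriptor`s:

```go
package main

import "fmt"

// nonCritical reports whether a protobuf field number may be safely ignored
// when unknown, per the rule above: bit 11 set, i.e. the bit with value
// 1<<10, which for most use cases means the range 1024-2047.
func nonCritical(fieldNumber uint64) bool {
	return fieldNumber&(1 << 10) != 0
}

func main() {
	for _, n := range []uint64{1023, 1024, 2047, 2048} {
		fmt.Println(n, nonCritical(n))
	}
}
```

Only 1024 and 2047 in the loop above have bit 11 set; 1023 and 2048 would be treated as critical and rejected if unknown.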
+ +### CLI & REST + +Currently, the REST and CLI handlers encode and decode types and txs via Amino +JSON encoding using a concrete Amino codec. Being that some of the types dealt with +in the client can be interfaces, similar to how we described in [ADR 019](./adr-019-protobuf-state-encoding.md), +the client logic will now need to take a codec interface that knows not only how +to handle all the types, but also knows how to generate transactions, signatures, +and messages. + +```go +type AccountRetriever interface { + GetAccount(clientCtx Context, addr sdk.AccAddress) (client.Account, error) + GetAccountWithHeight(clientCtx Context, addr sdk.AccAddress) (client.Account, int64, error) + EnsureExists(clientCtx client.Context, addr sdk.AccAddress) error + GetAccountNumberSequence(clientCtx client.Context, addr sdk.AccAddress) (uint64, uint64, error) +} + +type Generator interface { + NewTx() TxBuilder + NewFee() ClientFee + NewSignature() ClientSignature + MarshalTx(tx types.Tx) ([]byte, error) +} + +type TxBuilder interface { + GetTx() sdk.Tx + + SetMsgs(...sdk.Msg) error + GetSignatures() []sdk.Signature + SetSignatures(...sdk.Signature) + GetFee() sdk.Fee + SetFee(sdk.Fee) + GetMemo() string + SetMemo(string) +} +``` + +We then update `Context` to have new fields: `Codec`, `TxGenerator`, +and `AccountRetriever`, and we update `AppModuleBasic.GetTxCmd` to take +a `Context` which should have all of these fields pre-populated. + +Each client method should then use one of the `Init` methods to re-initialize +the pre-populated `Context`. `tx.GenerateOrBroadcastTx` can be used to +generate or broadcast a transaction. 
For example:
+
+```go
+import "github.com/spf13/cobra"
+import "github.com/cosmos/cosmos-sdk/client"
+import "github.com/cosmos/cosmos-sdk/client/tx"
+
+func NewCmdDoSomething(clientCtx client.Context) *cobra.Command {
+ return &cobra.Command{
+ RunE: func(cmd *cobra.Command, args []string) error {
+ clientCtx = clientCtx.InitWithInput(cmd.InOrStdin())
+ msg := NewSomeMsg{...}
+ return tx.GenerateOrBroadcastTx(clientCtx, msg)
+ },
+ }
+}
+```
+
+## Future Improvements
+
+### `SIGN_MODE_TEXTUAL` specification
+
+A concrete specification and implementation of `SIGN_MODE_TEXTUAL` is intended
+as a near-term future improvement so that the ledger app and other wallets
+can gracefully transition away from Amino JSON.
+
+### `SIGN_MODE_DIRECT_AUX`
+
+(_Documented as option (3) in https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933_)
+
+We could add a mode `SIGN_MODE_DIRECT_AUX`
+to support scenarios where multiple signatures
+are being gathered into a single transaction but the message composer does not
+yet know which signatures will be included in the final transaction. For instance,
+I may have a 3/5 multisig wallet and want to send a `TxBody` to all 5
+signers to see who signs first. As soon as I have 3 signatures I will go
+ahead and build the full transaction.
+
+With `SIGN_MODE_DIRECT`, each signer needs
+to sign the full `AuthInfo` which includes the full list of all signers and
+their signing modes, making the above scenario very hard.
+
+`SIGN_MODE_DIRECT_AUX` would allow "auxiliary" signers to create their signature
+using only `TxBody` and their own `PublicKey`. This allows the full list of
+signers in `AuthInfo` to be delayed until signatures have been collected.
+
+An "auxiliary" signer is any signer besides the primary signer who is paying
+the fee. For the primary signer, the full `AuthInfo` is actually needed to calculate gas and fees
+because that is dependent on how many signers and which key types and signing
+modes they are using.
Auxiliary signers, however, do not need to worry about +fees or gas and thus can just sign `TxBody`. + +To generate a signature in `SIGN_MODE_DIRECT_AUX` these steps would be followed: + +1. Encode `SignDocAux` (with the same requirement that fields must be serialized + in order): + + ```protobuf + // types/types.proto + message SignDocAux { + bytes body_bytes = 1; + // PublicKey is included in SignDocAux : + // 1. as a special case for multisig public keys. For multisig public keys, + // the signer should use the top-level multisig public key they are signing + // against, not their own public key. This is to prevent a form + // of malleability where a signature could be taken out of context of the + // multisig key that was intended to be signed for + // 2. to guard against scenario where configuration information is encoded + // in public keys (it has been proposed) such that two keys can generate + // the same signature but have different security properties + // + // By including it here, the composer of AuthInfo cannot reference the + // a public key variant the signer did not intend to use + PublicKey public_key = 2; + string chain_id = 3; + uint64 account_number = 4; + } + ``` + +2. Sign the encoded `SignDocAux` bytes +3. Send their signature and `SignerInfo` to the primary signer who will then + sign and broadcast the final transaction (with `SIGN_MODE_DIRECT` and `AuthInfo` + added) once enough signatures have been collected + +### `SIGN_MODE_DIRECT_RELAXED` + +(_Documented as option (1)(a) in https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933_) + +This is a variation of `SIGN_MODE_DIRECT` where multiple signers wouldn't need to +coordinate public keys and signing modes in advance. It would involve an alternate +`SignDoc` similar to `SignDocAux` above with fee. This could be added in the future +if client developers found the burden of collecting public keys and modes in advance +too burdensome. 
+ +## Consequences + +### Positive + +* Significant performance gains. +* Supports backward and forward type compatibility. +* Better support for cross-language clients. +* Multiple signing modes allow for greater protocol evolution + +### Negative + +* `google.protobuf.Any` type URLs increase transaction size although the effect + may be negligible or compression may be able to mitigate it. + +### Neutral + +## References + + + +### ADR 021: Protocol Buffer Query Encoding + + + +# ADR 021: Protocol Buffer Query Encoding + +## Changelog + +* 2020 March 27: Initial Draft + +## Status + +Accepted + +## Context + +This ADR is a continuation of the motivation, design, and context established in +[ADR 019](./adr-019-protobuf-state-encoding.md) and +[ADR 020](./adr-020-protobuf-transaction-encoding.md), namely, we aim to design the +Protocol Buffer migration path for the client-side of the Cosmos SDK. + +This ADR continues from [ADR 020](./adr-020-protobuf-transaction-encoding.md) +to specify the encoding of queries. + +## Decision + +### Custom Query Definition + +Modules define custom queries through a protocol buffers `service` definition. +These `service` definitions are generally associated with and used by the +GRPC protocol. However, the protocol buffers specification indicates that +they can be used more generically by any request/response protocol that uses +protocol buffer encoding. Thus, we can use `service` definitions for specifying +custom ABCI queries and even reuse a substantial amount of the GRPC infrastructure. 
+
+Each module with custom queries should define a service canonically named `Query`:
+
+```protobuf
+// x/bank/types/types.proto
+
+service Query {
+ rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) { }
+ rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) { }
+}
+```
+
+#### Handling of Interface Types
+
+Modules that use interface types and need true polymorphism generally force a
+`oneof` up to the app-level that provides the set of concrete implementations of
+that interface that the app supports. While apps are welcome to do the same for
+queries and implement an app-level query service, it is recommended that modules
+provide query methods that expose these interfaces via `google.protobuf.Any`.
+There is a concern on the transaction level that the overhead of `Any` is too
+high to justify its usage. However, for queries this is not a concern, and
+providing generic module-level queries that use `Any` does not preclude apps
+from also providing app-level queries that return using the app-level `oneof`s.
+
+A hypothetical example for the `gov` module would look something like:
+
+```protobuf
+// x/gov/types/types.proto
+
+import "google/protobuf/any.proto";
+
+service Query {
+ rpc GetProposal(GetProposalParams) returns (AnyProposal) { }
+}
+
+message AnyProposal {
+ ProposalBase base = 1;
+ google.protobuf.Any content = 2;
+}
+```
+
+### Custom Query Implementation
+
+In order to implement the query service, we can reuse the existing [gogo protobuf](https://github.com/cosmos/gogoproto)
+grpc plugin, which for a service named `Query` generates an interface named
+`QueryServer` as below:
+
+```go
+type QueryServer interface {
+ QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error)
+ QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error)
+}
+```
+
+The custom queries for our module are implemented by implementing this interface.
+ +The first parameter in this generated interface is a generic `context.Context`, +whereas querier methods generally need an instance of `sdk.Context` to read +from the store. Since arbitrary values can be attached to `context.Context` +using the `WithValue` and `Value` methods, the Cosmos SDK should provide a function +`sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided +`context.Context`. + +An example implementation of `QueryBalance` for the bank module as above would +look something like: + +```go +type Querier struct { + Keeper +} + +func (q Querier) QueryBalance(ctx context.Context, params *types.QueryBalanceParams) (*sdk.Coin, error) { + balance := q.GetBalance(sdk.UnwrapSDKContext(ctx), params.Address, params.Denom) + return &balance, nil +} +``` + +### Custom Query Registration and Routing + +Query server implementations as above would be registered with `AppModule`s using +a new method `RegisterQueryService(grpc.Server)` which could be implemented simply +as below: + +```go +// x/bank/module.go +func (am AppModule) RegisterQueryService(server grpc.Server) { + types.RegisterQueryServer(server, keeper.Querier{am.keeper}) +} +``` + +Underneath the hood, a new method `RegisterService(sd *grpc.ServiceDesc, handler interface{})` +will be added to the existing `baseapp.QueryRouter` to add the queries to the custom +query routing table (with the routing method being described below). +The signature for this method matches the existing +`RegisterServer` method on the GRPC `Server` type where `handler` is the custom +query server implementation described above. + +GRPC-like requests are routed by the service name (ex. `cosmos_sdk.x.bank.v1.Query`) +and method name (ex. `QueryBalance`) combined with `/`s to form a full +method name (ex. `/cosmos_sdk.x.bank.v1.Query/QueryBalance`). This gets translated +into an ABCI query as `custom/cosmos_sdk.x.bank.v1.Query/QueryBalance`. 
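The route construction described above can be sketched with two small helpers (the helper names are illustrative; the SDK derives these strings from the gRPC service descriptor at registration time):

```go
package main

import "fmt"

// fullMethodName builds the gRPC-style full method name from a service and
// method name, e.g. /cosmos_sdk.x.bank.v1.Query/QueryBalance.
func fullMethodName(service, method string) string {
	return "/" + service + "/" + method
}

// abciQueryRoute prefixes the full method name with "custom" to form the
// ABCI query path described above.
func abciQueryRoute(service, method string) string {
	return "custom" + fullMethodName(service, method)
}

func main() {
	fmt.Println(abciQueryRoute("cosmos_sdk.x.bank.v1.Query", "QueryBalance"))
	// custom/cosmos_sdk.x.bank.v1.Query/QueryBalance
}
```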
Service handlers +registered with `QueryRouter.RegisterService` will be routed this way. + +Beyond the method name, GRPC requests carry a protobuf encoded payload, which maps naturally +to `RequestQuery.Data`, and receive a protobuf encoded response or error. Thus +there is a quite natural mapping of GRPC-like rpc methods to the existing +`sdk.Query` and `QueryRouter` infrastructure. + +This basic specification allows us to reuse protocol buffer `service` definitions +for ABCI custom queries substantially reducing the need for manual decoding and +encoding in query methods. + +### GRPC Protocol Support + +In addition to providing an ABCI query pathway, we can easily provide a GRPC +proxy server that routes requests in the GRPC protocol to ABCI query requests +under the hood. In this way, clients could use their host languages' existing +GRPC implementations to make direct queries against Cosmos SDK app's using +these `service` definitions. In order for this server to work, the `QueryRouter` +on `BaseApp` will need to expose the service handlers registered with +`QueryRouter.RegisterService` to the proxy server implementation. Nodes could +launch the proxy server on a separate port in the same process as the ABCI app +with a command-line flag. + +### REST Queries and Swagger Generation + +[grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) is a project that +translates REST calls into GRPC calls using special annotations on service +methods. Modules that want to expose REST queries should add `google.api.http` +annotations to their `rpc` methods as in this example below. 
+ +```protobuf +// x/bank/types/types.proto + +service Query { + rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) { + option (google.api.http) = { + get: "/x/bank/v1/balance/{address}/{denom}" + }; + } + rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) { + option (google.api.http) = { + get: "/x/bank/v1/balances/{address}" + }; + } +} +``` + +grpc-gateway will work directly against the GRPC proxy described above which will +translate requests to ABCI queries under the hood. grpc-gateway can also +generate Swagger definitions automatically. + +In the current implementation of REST queries, each module needs to implement +REST queries manually in addition to ABCI querier methods. Using the grpc-gateway +approach, there will be no need to generate separate REST query handlers, just +query servers as described above as grpc-gateway handles the translation of protobuf +to REST as well as Swagger definitions. + +The Cosmos SDK should provide CLI commands for apps to start GRPC gateway either in +a separate process or the same process as the ABCI app, as well as provide a +command for generating grpc-gateway proxy `.proto` files and the `swagger.json` +file. + +### Client Usage + +The gogo protobuf grpc plugin generates client interfaces in addition to server +interfaces. For the `Query` service defined above we would get a `QueryClient` +interface like: + +```go +type QueryClient interface { + QueryBalance(ctx context.Context, in *QueryBalanceParams, opts ...grpc.CallOption) (*types.Coin, error) + QueryAllBalances(ctx context.Context, in *QueryAllBalancesParams, opts ...grpc.CallOption) (*QueryAllBalancesResponse, error) +} +``` + +Via a small patch to gogo protobuf ([gogo/protobuf#675](https://github.com/gogo/protobuf/pull/675)) +we have tweaked the grpc codegen to use an interface rather than a concrete type +for the generated client struct. This allows us to also reuse the GRPC infrastructure +for ABCI client queries. 

`Context` will receive a new method `QueryConn` that returns a `ClientConn` that routes calls to ABCI queries.

Clients (such as CLI methods) will then be able to call query methods like this:

```go
clientCtx := client.NewContext()
queryClient := types.NewQueryClient(clientCtx.QueryConn())
params := &types.QueryBalanceParams{addr, denom}
result, err := queryClient.QueryBalance(gocontext.Background(), params)
```

### Testing

Tests would be able to create a query client directly from keeper and `sdk.Context` references using a `QueryServerTestHelper` as below:

```go
queryHelper := baseapp.NewQueryServerTestHelper(ctx)
types.RegisterQueryServer(queryHelper, keeper.Querier{app.BankKeeper})
queryClient := types.NewQueryClient(queryHelper)
```

## Future Improvements

## Consequences

### Positive

* greatly simplified querier implementation (no manual encoding/decoding)
* easy query client generation (can use existing grpc and swagger tools)
* no need for REST query implementations
* type safe query methods (generated via grpc plugin)
* going forward, there will be less breakage of query methods because of the backwards compatibility guarantees provided by buf

### Negative

* all clients using the existing ABCI/REST queries will need to be refactored for both the new GRPC/REST query paths as well as protobuf/proto-json encoded data, but this is more or less unavoidable in the protobuf refactoring

### Neutral

## References



### ADR 022: Custom BaseApp panic handling



# ADR 022: Custom BaseApp panic handling

## Changelog

* 2020 Apr 24: Initial Draft
* 2021 Sep 14: Superseded by ADR-045

## Status

SUPERSEDED by ADR-045

## Context

The current implementation of BaseApp does not allow developers to write custom error handlers during panic recovery in the [runTx()](https://github.com/cosmos/cosmos-sdk/blob/bad4ca75f58b182f600396ca350ad844c18fc80b/baseapp/baseapp.go#L539) method.
We think that this method can be more flexible and can give Cosmos SDK users more options for customization without the need to rewrite the whole BaseApp. There is also one special case, `sdk.ErrorOutOfGas` error handling, which might be handled in a "standard" way (middleware) alongside the others.

We propose a middleware solution, which could help developers implement the following cases:

* add external logging (say, sending reports to external services like [Sentry](https://sentry.io));
* call panic for specific error cases;

It will also turn the `OutOfGas` and `default` cases into middlewares of their own. The `default` middleware wraps the recovery object into an error and logs it ([example middleware implementation](#recovery-middleware)).

Our project has a sidecar service running alongside the blockchain node (a smart contracts virtual machine). It is essential that node <-> sidecar connectivity stays stable for transaction processing. So when the communication breaks, we need to crash the node and reboot it once the problem is solved. That behaviour makes the node's state machine execution deterministic. As all keeper panics are caught by runTx's `defer()` handler, we have to adjust the BaseApp code in order to customize it.

## Decision

### Design

#### Overview

Instead of hardcoding custom error handling into BaseApp, we suggest using a set of middlewares which can be customized externally and will allow developers to use as many custom error handlers as they want. An implementation with tests can be found [here](https://github.com/cosmos/cosmos-sdk/pull/6053).

#### Implementation details

##### Recovery handler

A new `RecoveryHandler` type is added. The `recoveryObj` input argument is an object returned by the standard Go function `recover()` from the `builtin` package.

```go
type RecoveryHandler func(recoveryObj interface{}) error
```

A handler should type-assert (or otherwise inspect) the object to decide whether it should be handled.
`nil` should be returned if the input object can't be handled by that `RecoveryHandler` (it is not the handler's target type). A non-`nil` error should be returned if the input object was handled and the middleware chain execution should be stopped.

An example:

```go
func exampleErrHandler(recoveryObj interface{}) error {
	err, ok := recoveryObj.(error)
	if !ok {
		return nil
	}

	if someSpecificError.Is(err) {
		panic(customPanicMsg)
	} else {
		return nil
	}
}
```

This example breaks the application execution, but it also might enrich the error's context like the `OutOfGas` handler does.

##### Recovery middleware

We also add a middleware type (decorator). That function type wraps a `RecoveryHandler` and returns the next middleware in the execution chain together with the handler's `error`. The type is used to separate the actual `recover()` object handling from the middleware chain processing.

```go
type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)

func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
	return func(recoveryObj interface{}) (recoveryMiddleware, error) {
		if err := handler(recoveryObj); err != nil {
			return nil, err
		}
		return next, nil
	}
}
```

The function receives a `recoveryObj` object and returns:

* (next `recoveryMiddleware`, `nil`) if the object wasn't handled (not a target type) by the `RecoveryHandler`;
* (`nil`, non-nil `error`) if the input object was handled and the other middlewares in the chain should not be executed;
* (`nil`, `nil`) in case of invalid behavior.
Panic recovery might then not have been properly handled; this can be avoided by always using a `default` middleware as the rightmost one in the chain (it always returns an `error`).

`OutOfGas` middleware example:

```go
func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) recoveryMiddleware {
	handler := func(recoveryObj interface{}) error {
		err, ok := recoveryObj.(sdk.ErrorOutOfGas)
		if !ok {
			return nil
		}

		return errorsmod.Wrap(
			sdkerrors.ErrOutOfGas, fmt.Sprintf(
				"out of gas in location: %v; gasWanted: %d, gasUsed: %d", err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(),
			),
		)
	}

	return newRecoveryMiddleware(handler, next)
}
```

`Default` middleware example:

```go
func newDefaultRecoveryMiddleware() recoveryMiddleware {
	handler := func(recoveryObj interface{}) error {
		return errorsmod.Wrap(
			sdkerrors.ErrPanic, fmt.Sprintf("recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack())),
		)
	}

	return newRecoveryMiddleware(handler, nil)
}
```

##### Recovery processing

A basic middleware chain processing would look like:

```go
func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
	if middleware == nil {
		return nil
	}

	next, err := middleware(recoveryObj)
	if err != nil {
		return err
	}
	if next == nil {
		return nil
	}

	return processRecovery(recoveryObj, next)
}
```

That way we can create a middleware chain which is executed from left to right; the rightmost middleware is a `default` handler which must return an `error`.

##### BaseApp changes

The `default` middleware chain must exist in a `BaseApp` object. `BaseApp` modifications:

```go
type BaseApp struct {
	// ...
	runTxRecoveryMiddleware recoveryMiddleware
}

func NewBaseApp(...) {
	// ...
	app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware()
}

func (app *BaseApp) runTx(...) {
	// ...
	defer func() {
		if r := recover(); r != nil {
			recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)
			err, result = processRecovery(r, recoveryMW), nil
		}

		gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}
	}()
	// ...
}
```

Developers can add their custom `RecoveryHandler`s by passing them to `AddRunTxRecoveryHandler` as a BaseApp option parameter to the `NewBaseApp` constructor:

```go
func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {
	for _, h := range handlers {
		app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware)
	}
}
```

This method would prepend handlers to the existing chain.

## Consequences

### Positive

* Developers of Cosmos SDK-based projects can add custom panic handlers to:
  * add error context for custom panic sources (panics inside custom keepers);
  * re-emit `panic()`: pass the recovery object through to the Tendermint core;
  * perform other necessary handling;
* Developers can use the standard Cosmos SDK `BaseApp` implementation, rather than rewriting it in their projects;
* The proposed solution doesn't break the current "standard" `runTx()` flow;

### Negative

* Introduces changes to the execution model design.

### Neutral

* The `OutOfGas` error handler becomes one of the middlewares;
* The default panic handler becomes one of the middlewares;

## References

* [PR-6053 with proposed solution](https://github.com/cosmos/cosmos-sdk/pull/6053)
* [Similar solution.
ADR-010 Modular AnteHandler](#adr-010-modular-antehandler) + + + +### ADR 023: Protocol Buffer Naming and Versioning Conventions + + + +# ADR 023: Protocol Buffer Naming and Versioning Conventions + +## Changelog + +* 2020 April 27: Initial Draft +* 2020 August 5: Update guidelines + +## Status + +Accepted + +## Context + +Protocol Buffers provide a basic [style guide](https://developers.google.com/protocol-buffers/docs/style) +and [Buf](https://buf.build/docs/style-guide) builds upon that. To the +extent possible, we want to follow industry accepted guidelines and wisdom for +the effective usage of protobuf, deviating from those only when there is clear +rationale for our use case. + +### Adoption of `Any` + +The adoption of `google.protobuf.Any` as the recommended approach for encoding +interface types (as opposed to `oneof`) makes package naming a central part +of the encoding as fully-qualified message names now appear in encoded +messages. + +### Current Directory Organization + +Thus far we have mostly followed [Buf's](https://buf.build) [DEFAULT](https://buf.build/docs/lint-checkers#default) +recommendations, with the minor deviation of disabling [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout) +which although being convenient for developing code comes with the warning +from Buf that: + +> you will have a very bad time with many Protobuf plugins across various languages if you do not do this + +### Adoption of gRPC Queries + +In [ADR 021](adr-021-protobuf-query-encoding.md), gRPC was adopted for Protobuf +native queries. The full gRPC service path thus becomes a key part of ABCI query +path. In the future, gRPC queries may be allowed from within persistent scripts +by technologies such as CosmWasm and these query routes would be stored within +script binaries. 

## Decision

The goal of this ADR is to provide thoughtful naming conventions that:

* encourage a good user experience when users interact directly with .proto files and fully-qualified protobuf names
* balance conciseness against the possibility of either over-optimizing (making names too short and cryptic) or under-optimizing (just accepting bloated names with lots of redundant information)

These guidelines are meant to act as a style guide for both the Cosmos SDK and third-party modules.

As a starting point, we should adopt all of [Buf's](https://buf.build) [DEFAULT](https://buf.build/docs/lint-checkers#default) checkers, including [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout), except:

* [PACKAGE_VERSION_SUFFIX](https://buf.build/docs/lint-checkers#package_version_suffix)
* [SERVICE_SUFFIX](https://buf.build/docs/lint-checkers#service_suffix)

Further guidelines are described below.

### Principles

#### Concise and Descriptive Names

Names should be descriptive enough to convey their meaning and distinguish them from other names.

Given that we are using fully-qualified names within `google.protobuf.Any` as well as within gRPC query routes, we should aim to keep names concise, without going overboard. The general rule of thumb should be: if a shorter name conveys the same (or more) information, pick the shorter name.

For instance, `cosmos.bank.MsgSend` (19 bytes) conveys roughly the same information as `cosmos_sdk.x.bank.v1.MsgSend` (28 bytes) but is more concise.

Such conciseness makes names more pleasant to work with and lets them take up less space within transactions and on the wire.

We should also resist the temptation to over-optimize by making names cryptically short with abbreviations. For instance, we shouldn't try to reduce `cosmos.bank.MsgSend` to `csm.bk.MSnd` just to save a few bytes.

The goal is to make names **_concise but not cryptic_**.
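
The figures above are easy to verify, and because fully-qualified names travel inside every `Any`-packed message, the savings multiply per message. A trivial sketch:

```go
package main

import "fmt"

func main() {
	// Fully-qualified names as they would appear in an Any type URL.
	short := "cosmos.bank.MsgSend"
	long := "cosmos_sdk.x.bank.v1.MsgSend"

	fmt.Println(len(short), len(long)) // 19 28

	// Every Any-wrapped message embeds its name, so a tx with n packed
	// messages saves n * 9 bytes under the shorter convention.
	fmt.Println(len(long) - len(short)) // 9
}
```
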
+ +#### Names are for Clients First + +Package and type names should be chosen for the benefit of users, not +necessarily because of legacy concerns related to the go code-base. + +#### Plan for Longevity + +In the interests of long-term support, we should plan on the names we do +choose to be in usage for a long time, so now is the opportunity to make +the best choices for the future. + +### Versioning + +#### Guidelines on Stable Package Versions + +In general, schema evolution is the way to update protobuf schemas. That means that new fields, +messages, and RPC methods are _added_ to existing schemas and old fields, messages and RPC methods +are maintained as long as possible. + +Breaking things is often unacceptable in a blockchain scenario. For instance, immutable smart contracts +may depend on certain data schemas on the host chain. If the host chain breaks those schemas, the smart +contract may be irreparably broken. Even when things can be fixed (for instance in client software), +this often comes at a high cost. + +Instead of breaking things, we should make every effort to evolve schemas rather than just breaking them. +[Buf](https://buf.build) breaking change detection should be used on all stable (non-alpha or beta) packages +to prevent such breakage. + +With that in mind, different stable versions (i.e. `v1` or `v2`) of a package should more or less be considered +different packages and this should be a last resort approach for upgrading protobuf schemas. Scenarios where creating +a `v2` may make sense are: + +* we want to create a new module with similar functionality to an existing module and adding `v2` is the most natural +way to do this. In that case, there are really just two different, but similar modules with different APIs. +* we want to add a new revamped API for an existing module and it's just too cumbersome to add it to the existing package, +so putting it in `v2` is cleaner for users. 
In this case, care should be taken not to deprecate support for `v1` if it is actively used in immutable smart contracts.

#### Guidelines on unstable (alpha and beta) package versions

The following guidelines are recommended for marking packages as alpha or beta:

* marking something as `alpha` or `beta` should be a last resort, and just putting something in the stable package (i.e. `v1` or `v2`) should be preferred
* a package _should_ be marked as `alpha` _if and only if_ there are active discussions to remove or significantly alter the package in the near future
* a package _should_ be marked as `beta` _if and only if_ there is an active discussion to significantly refactor/rework the functionality in the near future but not remove it
* modules _can and should_ have types in both stable (i.e. `v1` or `v2`) and unstable (`alpha` or `beta`) packages.

_`alpha` and `beta` should not be used to avoid responsibility for maintaining compatibility._ Whenever code is released into the wild, especially on a blockchain, there is a high cost to changing things. In some cases, for instance with immutable smart contracts, a breaking change may be impossible to fix.

When marking something as `alpha` or `beta`, maintainers should ask the following questions:

* what is the cost of asking others to change their code vs the benefit of us maintaining the optionality to change it?
* what is the plan for moving this to `v1` and how will that affect users?

`alpha` or `beta` should really be used to communicate "changes are planned".

As a case study, gRPC reflection is in the package `grpc.reflection.v1alpha`. It hasn't been changed since 2017 and it is now used in other widely used software like gRPCurl. Some folks probably use it in production services, and so if they actually went and changed the package to `grpc.reflection.v1`, some software would break and they probably don't want to do that...
So now the `v1alpha` package is more or less the de-facto `v1`. Let's not do that. + +The following are guidelines for working with non-stable packages: + +* [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix) +(ex. `v1alpha1`) _should_ be used for non-stable packages +* non-stable packages should generally be excluded from breaking change detection +* immutable smart contract modules (i.e. CosmWasm) _should_ block smart contracts/persistent +scripts from interacting with `alpha`/`beta` packages + +#### Omit v1 suffix + +Instead of using [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix), +we can omit `v1` for packages that don't actually have a second version. This +allows for more concise names for common use cases like `cosmos.bank.Send`. +Packages that do have a second or third version can indicate that with `.v2` +or `.v3`. + +### Package Naming + +#### Adopt a short, unique top-level package name + +Top-level packages should adopt a short name that is known not to collide with +other names in common usage within the Cosmos ecosystem. In the near future, a +registry should be created to reserve and index top-level package names used +within the Cosmos ecosystem. Because the Cosmos SDK is intended to provide +the top-level types for the Cosmos project, the top-level package name `cosmos` +is recommended for usage within the Cosmos SDK instead of the longer `cosmos_sdk`. +[ICS](https://github.com/cosmos/ics) specifications could consider a +short top-level package like `ics23` based upon the standard number. + +#### Limit sub-package depth + +Sub-package depth should be increased with caution. Generally a single +sub-package is needed for a module or a library. Even though `x` or `modules` +is used in source code to denote modules, this is often unnecessary for .proto +files as modules are the primary thing sub-packages are used for. 
Only items which +are known to be used infrequently should have deep sub-package depths. + +For the Cosmos SDK, it is recommended that we simply write `cosmos.bank`, +`cosmos.gov`, etc. rather than `cosmos.x.bank`. In practice, most non-module +types can go straight in the `cosmos` package or we can introduce a +`cosmos.base` package if needed. Note that this naming _will not_ change +go package names, i.e. the `cosmos.bank` protobuf package will still live in +`x/bank`. + +### Message Naming + +Message type names should be as concise as possible without losing clarity. `sdk.Msg` +types which are used in transactions will retain the `Msg` prefix as that provides +helpful context. + +### Service and RPC Naming + +[ADR 021](adr-021-protobuf-query-encoding.md) specifies that modules should +implement a gRPC query service. We should consider the principle of conciseness +for query service and RPC names as these may be called from persistent script +modules such as CosmWasm. Also, users may use these query paths from tools like +[gRPCurl](https://github.com/fullstorydev/grpcurl). As an example, we can shorten +`/cosmos_sdk.x.bank.v1.QueryService/QueryBalance` to +`/cosmos.bank.Query/Balance` without losing much useful information. + +RPC request and response types _should_ follow the `ServiceNameMethodNameRequest`/ +`ServiceNameMethodNameResponse` naming convention. i.e. for an RPC method named `Balance` +on the `Query` service, the request and response types would be `QueryBalanceRequest` +and `QueryBalanceResponse`. This will be more self-explanatory than `BalanceRequest` +and `BalanceResponse`. + +#### Use just `Query` for the query service + +Instead of [Buf's default service suffix recommendation](https://github.com/cosmos/cosmos-sdk/pull/6033), +we should simply use the shorter `Query` for query services. + +For other types of gRPC services, we should consider sticking with Buf's +default recommendation. 
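
Put together, the conventions above would turn the earlier `x/bank` example into something like the following sketch (the file path is hypothetical and the request/response message bodies are elided):

```protobuf
// proto/cosmos/bank/query.proto (hypothetical path)
syntax = "proto3";
package cosmos.bank;

// The service is named just "Query", so the full RPC path becomes
// /cosmos.bank.Query/Balance instead of
// /cosmos_sdk.x.bank.v1.QueryService/QueryBalance.
service Query {
  rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse);
  rpc AllBalances(QueryAllBalancesRequest) returns (QueryAllBalancesResponse);
}
```
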

#### Omit `Get` and `Query` from query service RPC names

`Get` and `Query` should be omitted from `Query` service RPC names because they are redundant in the fully-qualified name. For instance, `/cosmos.bank.Query/QueryBalance` just says `Query` twice without any new information.

## Future Improvements

A registry of top-level package names should be created to coordinate naming across the ecosystem, prevent collisions, and also help developers discover useful schemas. A simple starting point would be a git repository with community-based governance.

## Consequences

### Positive

* names will be more concise and easier to read and type
* all transactions using `Any` will be shorter (`_sdk.x` and `.v1` will be removed)
* `.proto` file imports will be more standard (without `"third_party/proto"` in the path)
* code generation will be easier for clients because .proto files will be in a single `proto/` directory which can be copied rather than scattered throughout the Cosmos SDK

### Negative

### Neutral

* `.proto` files will need to be reorganized and refactored
* some modules may need to be marked as alpha or beta

## References



### ADR 024: Coin Metadata



# ADR 024: Coin Metadata

## Changelog

* 05/19/2020: Initial draft

## Status

Proposed

## Context

Assets in the Cosmos SDK are represented via a `Coins` type that consists of an `amount` and a `denom`, where the `amount` can be any arbitrarily large or small value. In addition, the Cosmos SDK uses an account-based model where there are two types of primary accounts -- basic accounts and module accounts. All account types have a set of balances that are composed of `Coins`. The `x/bank` module keeps track of all balances for all accounts and also keeps track of the total supply of balances in an application.

With regard to a balance `amount`, the Cosmos SDK assumes a static and fixed unit of denomination, regardless of the denomination itself. In other words, clients and apps built atop a Cosmos-SDK-based chain may choose to define and use arbitrary units of denomination to provide a richer UX; however, by the time a tx or operation reaches the Cosmos SDK state machine, the `amount` is treated as a single unit. For example, for the Cosmos Hub (Gaia), clients assume 1 ATOM = 10^6 uatom, and so all txs and operations in the Cosmos SDK work off of units of 10^6.

This clearly provides a poor and limited UX, especially as network interoperability increases and, as a result, the total number of asset types increases. We propose to have `x/bank` additionally keep track of metadata per `denom` in order to help clients, wallet providers, and explorers improve their UX and remove the requirement for making any assumptions about the unit of denomination.

## Decision

The `x/bank` module will be updated to store and index metadata by `denom`, specifically the "base" or smallest unit -- the unit the Cosmos SDK state-machine works with.

Metadata may also include a non-zero length list of denominations. Each entry contains the name of the denomination `denom`, the exponent to the base and a list of aliases. An entry is to be interpreted as `1 denom = 10^exponent base_denom` (e.g. `1 ETH = 10^18 wei` and `1 uatom = 10^0 uatom`).

There are two denominations that are of high importance for clients: the `base`, which is the smallest possible unit, and the `display`, which is the unit that is commonly referred to in human communication and on exchanges. The values in those fields link to an entry in the list of denominations.

The list in `denom_units` and the `display` entry may be changed via governance.
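
With such metadata available, client-side conversion becomes a simple exponent lookup. A minimal sketch, assuming hypothetical Go mirrors of the metadata entries described here:

```go
package main

import "fmt"

// Hypothetical mirrors of the proposed metadata entries.
type DenomUnit struct {
	Denom    string
	Exponent uint32
}

type Metadata struct {
	Base       string
	Display    string
	DenomUnits []DenomUnit
}

// toBase converts an integer amount expressed in `denom` into the base unit,
// applying the rule 1 denom = 10^exponent base_denom.
func toBase(md Metadata, denom string, amount int64) (int64, bool) {
	for _, u := range md.DenomUnits {
		if u.Denom == denom {
			for i := uint32(0); i < u.Exponent; i++ {
				amount *= 10
			}
			return amount, true
		}
	}
	return 0, false // unknown denomination
}

func main() {
	atom := Metadata{
		Base:    "uatom",
		Display: "atom",
		DenomUnits: []DenomUnit{
			{Denom: "uatom", Exponent: 0},
			{Denom: "matom", Exponent: 3},
			{Denom: "atom", Exponent: 6},
		},
	}

	base, ok := toBase(atom, "atom", 4)
	fmt.Println(base, ok) // 4000000 true
}
```
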

As a result, we can define the type as follows:

```protobuf
message DenomUnit {
  string denom = 1;
  uint32 exponent = 2;
  repeated string aliases = 3;
}

message Metadata {
  string description = 1;
  repeated DenomUnit denom_units = 2;
  string base = 3;
  string display = 4;
}
```

As an example, the ATOM's metadata can be defined as follows:

```json
{
  "name": "atom",
  "description": "The native staking token of the Cosmos Hub.",
  "denom_units": [
    {
      "denom": "uatom",
      "exponent": 0,
      "aliases": [
        "microatom"
      ]
    },
    {
      "denom": "matom",
      "exponent": 3,
      "aliases": [
        "milliatom"
      ]
    },
    {
      "denom": "atom",
      "exponent": 6
    }
  ],
  "base": "uatom",
  "display": "atom"
}
```

Given the above metadata, a client may infer the following things:

* 4.3atom = 4.3 * (10^6) = 4,300,000uatom
* The string "atom" can be used as a display name in a list of tokens.
* The balance 4300000 can be displayed as 4,300,000uatom or 4,300matom or 4.3atom. The `display` denomination 4.3atom is a good default if the authors of the client don't make an explicit decision to choose a different representation.

A client should be able to query for metadata by denom both via the CLI and REST interfaces. In addition, we will add handlers to these interfaces to convert from any unit to another given unit, as the base framework for this already exists in the Cosmos SDK.

Finally, we need to ensure metadata exists in the `GenesisState` of the `x/bank` module, which is also indexed by the base `denom`.
+ +```go +type GenesisState struct { + SendEnabled bool `json:"send_enabled" yaml:"send_enabled"` + Balances []Balance `json:"balances" yaml:"balances"` + Supply sdk.Coins `json:"supply" yaml:"supply"` + DenomMetadata []Metadata `json:"denom_metadata" yaml:"denom_metadata"` +} +``` + +## Future Work + +In order for clients to avoid having to convert assets to the base denomination -- either manually or +via an endpoint, we may consider supporting automatic conversion of a given unit input. + +## Consequences + +### Positive + +* Provides clients, wallet providers and block explorers with additional data on + asset denomination to improve UX and remove any need to make assumptions on + denomination units. + +### Negative + +* A small amount of required additional storage in the `x/bank` module. The amount + of additional storage should be minimal as the amount of total assets should not + be large. + +### Neutral + +## References + + + +### ADR 027: Deterministic Protobuf Serialization + + + +# ADR 027: Deterministic Protobuf Serialization + +## Changelog + +* 2020-08-07: Initial Draft +* 2020-09-01: Further clarify rules + +## Status + +Proposed + +## Abstract + +Fully deterministic structure serialization, which works across many languages and clients, +is needed when signing messages. We need to be sure that whenever we serialize +a data structure, no matter in which supported language, the raw bytes +will stay the same. +[Protobuf](https://developers.google.com/protocol-buffers/docs/proto3) +serialization is not bijective (i.e. there exists a practically unlimited number of +valid binary representations for a given protobuf document)1. + +This document describes a deterministic serialization scheme for +a subset of protobuf documents, that covers this use case but can be reused in +other cases as well. 

### Context

For signature verification in the Cosmos SDK, the signer and verifier need to agree on the same serialization of a `SignDoc` as defined in [ADR-020](./adr-020-protobuf-transaction-encoding.md) without transmitting the serialization.

Currently, for block signatures we are using a workaround: we create a new [TxRaw](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L30) instance (as defined in [adr-020-protobuf-transaction-encoding](#adr-020-protocol-buffer-transaction-encoding)) by converting all [Tx](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L13) fields to bytes on the client side. This adds an additional manual step when sending and signing transactions.

### Decision

The following encoding scheme is to be used by other ADRs, and in particular for `SignDoc` serialization.

## Specification

### Scope

This ADR defines a protobuf3 serializer. The output is a valid protobuf serialization, such that every protobuf parser can parse it.

No maps are supported in version 1 due to the complexity of defining a deterministic serialization. This might change in the future. Implementations must reject documents containing maps as invalid input.

### Background - Protobuf3 Encoding

Most numeric types in protobuf3 are encoded as [varints](https://developers.google.com/protocol-buffers/docs/encoding#varints). Varints are at most 10 bytes, and since each varint byte has 7 bits of data, varints are a representation of `uint70` (a 70-bit unsigned integer). When encoding, numeric values are cast from their base type to `uint70`, and when decoding, the parsed `uint70` is cast to the appropriate numeric type.

The maximum valid value for a varint that complies with protobuf3 is `FF FF FF FF FF FF FF FF FF 7F` (i.e. `2**70 - 1`).
If the field type is +`{,u,s}int64`, the highest 6 bits of the 70 are dropped during decoding, +introducing 6 bits of malleability. If the field type is `{,u,s}int32`, the +highest 38 bits of the 70 are dropped during decoding, introducing 38 bits of +malleability. + +Among other sources of non-determinism, this ADR eliminates the possibility of +encoding malleability. + +### Serialization rules + +The serialization is based on the +[protobuf3 encoding](https://developers.google.com/protocol-buffers/docs/encoding) +with the following additions: + +1. Fields must be serialized only once in ascending order +2. Extra fields or any extra data must not be added +3. [Default values](https://developers.google.com/protocol-buffers/docs/proto3#default) + must be omitted +4. `repeated` fields of scalar numeric types must use + [packed encoding](https://developers.google.com/protocol-buffers/docs/encoding#packed) +5. Varint encoding must not be longer than needed: + * No trailing zero bytes (in little endian, i.e. no leading zeroes in big + endian). Per rule 3 above, the default value of `0` must be omitted, so + this rule does not apply in such cases. + * The maximum value for a varint must be `FF FF FF FF FF FF FF FF FF 01`. + In other words, when decoded, the highest 6 bits of the 70-bit unsigned + integer must be `0`. (10-byte varints are 10 groups of 7 bits, i.e. + 70 bits, of which only the lowest 70-6=64 are useful.) + * The maximum value for 32-bit values in varint encoding must be `FF FF FF FF 0F` + with one exception (below). In other words, when decoded, the highest 38 + bits of the 70-bit unsigned integer must be `0`. + * The one exception to the above is _negative_ `int32`, which must be + encoded using the full 10 bytes for sign extension2. + * The maximum value for Boolean values in varint encoding must be `01` (i.e. + it must be `0` or `1`). Per rule 3 above, the default value of `0` must + be omitted, so if a Boolean is included it must have a value of `1`. 
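
Rule 5 can be checked mechanically. A minimal sketch of the trailing-zero part of the rule (the maximum-value caps for 64-bit and 32-bit fields additionally require the decoded value and field type, which is omitted here):

```go
package main

import "fmt"

// isMinimalVarint reports whether buf has no trailing zero groups and is at
// most 10 bytes long. It does not enforce the 64-bit/32-bit maximum-value
// caps from rule 5, which need the decoded value and the field type.
func isMinimalVarint(buf []byte) bool {
	if len(buf) == 0 || len(buf) > 10 {
		return false
	}
	last := buf[len(buf)-1]
	if last&0x80 != 0 { // unterminated varint
		return false
	}
	// A trailing zero group means a shorter encoding of the same value
	// exists. The single byte 0x00 (value 0) is minimal, but rule 3 requires
	// defaults to be omitted entirely, so callers reject it upstream.
	if len(buf) > 1 && last == 0x00 {
		return false
	}
	return true
}

func main() {
	fmt.Println(isMinimalVarint([]byte{0x96, 0x01}))       // true
	fmt.Println(isMinimalVarint([]byte{0x96, 0x81, 0x00})) // false
}
```
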
+ +While rules number 1. and 2. should be pretty straightforward and describe the +default behavior of all protobuf encoders the author is aware of, the 3rd rule +is more interesting. After a protobuf3 deserialization you cannot differentiate +between unset fields and fields set to the default value3. At +serialization level however, it is possible to set the fields with an empty +value or omit them entirely. This is a significant difference to e.g. JSON +where a property can be empty (`""`, `0`), `null` or undefined, leading to 3 +different documents. + +Omitting fields set to default values is valid because the parser must assign +the default value to fields missing in the serialization4. For scalar +types, omitting defaults is required by the spec5. For `repeated` +fields, not serializing them is the only way to express empty lists. Enums must +have a first element of numeric value 0, which is the default6. And +message fields default to unset7. + +Omitting defaults allows for some amount of forward compatibility: users of +newer versions of a protobuf schema produce the same serialization as users of +older versions as long as newly added fields are not used (i.e. set to their +default value). + +### Implementation + +There are three main implementation strategies, ordered from the least to the +most custom development: + +* **Use a protobuf serializer that follows the above rules by default.** E.g. + [gogoproto](https://pkg.go.dev/github.com/cosmos/gogoproto/gogoproto) is known to + be compliant in most cases, but not when certain annotations such as + `nullable = false` are used. It might also be an option to configure an + existing serializer accordingly. +* **Normalize default values before encoding them.** If your serializer follows + rules 1. and 2. and allows you to explicitly unset fields for serialization, + you can normalize default values to unset. 
This can be done when working with + [protobuf.js](https://www.npmjs.com/package/protobufjs): + + ```js + const bytes = SignDoc.encode({ + bodyBytes: body.length > 0 ? body : null, // normalize empty bytes to unset + authInfoBytes: authInfo.length > 0 ? authInfo : null, // normalize empty bytes to unset + chainId: chainId || null, // normalize "" to unset + accountNumber: accountNumber || null, // normalize 0 to unset + accountSequence: accountSequence || null, // normalize 0 to unset + }).finish(); + ``` + +* **Use a hand-written serializer for the types you need.** If none of the above + ways works for you, you can write a serializer yourself. For SignDoc this + would look something like this in Go, building on existing protobuf utilities: + + ```go + if !signDoc.body_bytes.empty() { + buf.WriteUVarInt64(0xA) // wire type and field number for body_bytes + buf.WriteUVarInt64(signDoc.body_bytes.length()) + buf.WriteBytes(signDoc.body_bytes) + } + + if !signDoc.auth_info.empty() { + buf.WriteUVarInt64(0x12) // wire type and field number for auth_info + buf.WriteUVarInt64(signDoc.auth_info.length()) + buf.WriteBytes(signDoc.auth_info) + } + + if !signDoc.chain_id.empty() { + buf.WriteUVarInt64(0x1a) // wire type and field number for chain_id + buf.WriteUVarInt64(signDoc.chain_id.length()) + buf.WriteBytes(signDoc.chain_id) + } + + if signDoc.account_number != 0 { + buf.WriteUVarInt64(0x20) // wire type and field number for account_number + buf.WriteUVarInt(signDoc.account_number) + } + + if signDoc.account_sequence != 0 { + buf.WriteUVarInt64(0x28) // wire type and field number for account_sequence + buf.WriteUVarInt(signDoc.account_sequence) + } + ``` + +### Test vectors + +Given the protobuf definition `Article.proto` + +```protobuf +package blog; +syntax = "proto3"; + +enum Type { + UNSPECIFIED = 0; + IMAGES = 1; + NEWS = 2; +}; + +enum Review { + UNSPECIFIED = 0; + ACCEPTED = 1; + REJECTED = 2; +}; + +message Article { + string title = 1; + string description = 
2; + uint64 created = 3; + uint64 updated = 4; + bool public = 5; + bool promoted = 6; + Type type = 7; + Review review = 8; + repeated string comments = 9; + repeated string backlinks = 10; +}; +``` + +serializing the values + +```yaml +title: "The world needs change 🌳" +description: "" +created: 1596806111080 +updated: 0 +public: true +promoted: false +type: Type.NEWS +review: Review.UNSPECIFIED +comments: ["Nice one", "Thank you"] +backlinks: [] +``` + +must result in the serialization + +```text +0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75 +``` + +When inspecting the serialized document, you see that every second field is +omitted: + +```shell +$ echo 0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75 | xxd -r -p | protoc --decode_raw +1: "The world needs change \360\237\214\263" +3: 1596806111080 +5: 1 +7: 2 +9: "Nice one" +9: "Thank you" +``` + +## Consequences + +Having such an encoding available allows us to get deterministic serialization +for all protobuf documents we need in the context of Cosmos SDK signing. + +### Positive + +* Well defined rules that can be verified independently of a reference + implementation +* Simple enough to keep the barrier to implementing transaction signing low +* It allows us to continue to use 0 and other empty values in SignDoc, avoiding + the need to work around 0 sequences. This does not imply the change from + https://github.com/cosmos/cosmos-sdk/pull/6949 should not be merged, but not + too important anymore. + +### Negative + +* When implementing transaction signing, the encoding rules above must be + understood and implemented. +* The need for rule number 3. adds some complexity to implementations. +* Some data structures may require custom code for serialization. 
Thus the code is not very portable - it will require additional work for each client implementing serialization to properly handle custom data structures.

### Neutral

### Usage in Cosmos SDK

For the reasons mentioned above ("Negative" section) we prefer to keep workarounds for shared data structures. Example: the aforementioned `TxRaw` uses raw bytes as a workaround. This allows clients to use any valid protobuf library without the need to implement a custom serializer that adheres to this standard (and the related risk of bugs).

## References

* [1] _When a message is serialized, there is no guaranteed order for how its known or unknown fields should be written. Serialization order is an implementation detail and the details of any particular implementation may change in the future. Therefore, protocol buffer parsers must be able to parse fields in any order._ from https://developers.google.com/protocol-buffers/docs/encoding#order
* [2] https://developers.google.com/protocol-buffers/docs/encoding#signed_integers
* [3] _Note that for scalar message fields, once a message is parsed there's no way of telling whether a field was explicitly set to the default value (for example whether a boolean was set to false) or just not set at all: you should bear this in mind when defining your message types.
For example, don't have a boolean that switches on some behavior when set to false if you don't want that behavior to also happen by default._ from https://developers.google.com/protocol-buffers/docs/proto3#default
* [4] _When a message is parsed, if the encoded message does not contain a particular singular element, the corresponding field in the parsed object is set to the default value for that field._ from https://developers.google.com/protocol-buffers/docs/proto3#default
* [5] _Also note that if a scalar message field is set to its default, the value will not be serialized on the wire._ from https://developers.google.com/protocol-buffers/docs/proto3#default
* [6] _For enums, the default value is the first defined enum value, which must be 0._ from https://developers.google.com/protocol-buffers/docs/proto3#default
* [7] _For message fields, the field is not set. Its exact value is language-dependent._ from https://developers.google.com/protocol-buffers/docs/proto3#default
* Encoding rules and parts of the reasoning taken from [canonical-proto3 by Aaron Craelius](https://github.com/regen-network/canonical-proto3)



### ADR 028: Public Key Addresses



# ADR 028: Public Key Addresses

## Changelog

* 2020/08/18: Initial version
* 2021/01/15: Analysis and algorithm update

## Status

Proposed

## Abstract

This ADR defines an address format for all addressable Cosmos SDK accounts. That includes: new public key algorithms, multisig public keys, and module accounts.

## Context

Issue [\#3685](https://github.com/cosmos/cosmos-sdk/issues/3685) identified that public key address spaces are currently overlapping. We confirmed that this significantly decreases the security of the Cosmos SDK.

### Problem

An attacker can control an input for an address generation function. This leads to a birthday attack, which significantly decreases the security space.
To overcome this, we need to separate the inputs for different kinds of account types: a security break of one account type shouldn't impact the security of other account types.

### Initial proposals

One initial proposal was to extend the address length and add prefixes for different types of addresses.

@ethanfrey explained an alternate approach originally used in https://github.com/iov-one/weave:

> I spent quite a bit of time thinking about this issue while building weave... The other cosmos Sdk.
> Basically I define a condition to be a type and format as human readable string with some binary data appended. This condition is hashed into an Address (again at 20 bytes). The use of this prefix makes it impossible to find a preimage for a given address with a different condition (eg ed25519 vs secp256k1).
> This is explained in depth here https://weave.readthedocs.io/en/latest/design/permissions.html
> And the code is here, look mainly at the top where we process conditions. https://github.com/iov-one/weave/blob/master/conditions.go

And explained how this approach should be sufficiently collision resistant:

> Yeah, AFAIK, 20 bytes should be collision resistance when the preimages are unique and not malleable. A space of 2^160 would expect some collision to be likely around 2^80 elements (birthday paradox). And if you want to find a collision for some existing element in the database, it is still 2^160. 2^80 only if all these elements are written to state.
> The good example you brought up was eg. a public key bytes being a valid public key on two algorithms supported by the codec. Meaning if either was broken, you would break accounts even if they were secured with the safer variant. This is only an issue when no differentiating type info is present in the preimage (before hashing into an address).
> I would like to hear an argument if the 20 bytes space is an actual issue for security, as I would be happy to increase my address sizes in weave. I just figured cosmos and ethereum and bitcoin all use 20 bytes, it should be good enough. And the arguments above which made me feel it was secure. But I have not done a deeper analysis.

This led to the first proposal (which we proved to be not good enough): we concatenate a key type with a public key, hash it and take the first 20 bytes of that hash, summarized as `sha256(keyTypePrefix || keybytes)[:20]`.

### Review and Discussions

In [\#5694](https://github.com/cosmos/cosmos-sdk/issues/5694) we discussed various solutions. We agreed that 20 bytes is not future proof, and that extending the address length is the only way to allow addresses of different types, various signature types, etc. This disqualifies the initial proposal.

In the issue we discussed various modifications:

* Choice of the hash function.
* Move the prefix out of the hash function: `keyTypePrefix + sha256(keybytes)[:20]` [post-hash-prefix-proposal].
* Use double hashing: `sha256(keyTypePrefix + sha256(keybytes)[:20])`.
* Increase the keybytes hash slice from 20 bytes to 32 or 40 bytes. We concluded that 32 bytes, produced by a good hash function, is future secure.

### Requirements

* Support currently used tools - we don't want to break an ecosystem, or add a long adaptation period. Ref: https://github.com/cosmos/cosmos-sdk/issues/8041
* Try to keep the address length small - addresses are widely used in state, both as part of a key and object value.

### Scope

This ADR only defines a process for the generation of address bytes. For end-user interactions with addresses (through the API, or CLI, etc.), we still use bech32 to format these addresses as strings. This ADR doesn't change that. Using Bech32 for string encoding gives us checksum-based error detection and handling of user typos.
+ +## Decision + +We define the following account types, for which we define the address function: + +1. simple accounts: represented by a regular public key (ie: secp256k1, sr25519) +2. naive multisig: accounts composed by other addressable objects (ie: naive multisig) +3. composed accounts with a native address key (ie: bls, group module accounts) +4. module accounts: basically any accounts which cannot sign transactions and which are managed internally by modules + +### Legacy Public Key Addresses Don't Change + +Currently (Jan 2021), the only officially supported Cosmos SDK user accounts are `secp256k1` basic accounts and legacy amino multisig. +They are used in existing Cosmos SDK zones. They use the following address formats: + +* secp256k1: `ripemd160(sha256(pk_bytes))[:20]` +* legacy amino multisig: `sha256(aminoCdc.Marshal(pk))[:20]` + +We don't want to change existing addresses. So the addresses for these two key types will remain the same. + +The current multisig public keys use amino serialization to generate the address. We will retain +those public keys and their address formatting, and call them "legacy amino" multisig public keys +in protobuf. We will also create multisig public keys without amino addresses to be described below. + +### Hash Function Choice + +As in other parts of the Cosmos SDK, we will use `sha256`. + +### Basic Address + +We start by defining a base algorithm for generating addresses which we will call `Hash`. Notably, it's used for accounts represented by a single key pair. For each public key schema we have to have an associated `typ` string, explained in the next section. `hash` is the cryptographic hash function defined in the previous section. + +```go +const A_LEN = 32 + +func Hash(typ string, key []byte) []byte { + return hash(hash(typ) + key)[:A_LEN] +} +``` + +The `+` is bytes concatenation, which doesn't use any separator. + +This algorithm is the outcome of a consultation session with a professional cryptographer. 
+Motivation: this algorithm keeps the address relatively small (length of the `typ` doesn't impact the length of the final address) +and it's more secure than [post-hash-prefix-proposal] (which uses the first 20 bytes of a pubkey hash, significantly reducing the address space). +Moreover the cryptographer motivated the choice of adding `typ` in the hash to protect against a switch table attack. + +`address.Hash` is a low level function to generate _base_ addresses for new key types. Example: + +* BLS: `address.Hash("bls", pubkey)` + +### Composed Addresses + +For simple composed accounts (like a new naive multisig) we generalize the `address.Hash`. The address is constructed by recursively creating addresses for the sub accounts, sorting the addresses and composing them into a single address. It ensures that the ordering of keys doesn't impact the resulting address. + +```go +// We don't need a PubKey interface - we need anything which is addressable. +type Addressable interface { + Address() []byte +} + +func Composed(typ string, subaccounts []Addressable) []byte { + addresses = map(subaccounts, \a -> LengthPrefix(a.Address())) + addresses = sort(addresses) + return address.Hash(typ, addresses[0] + ... + addresses[n]) +} +``` + +The `typ` parameter should be a schema descriptor, containing all significant attributes with deterministic serialization (eg: utf8 string). +`LengthPrefix` is a function which prepends 1 byte to the address. The value of that byte is the length of the address bits before prepending. The address must be at most 255 bits long. +We are using `LengthPrefix` to eliminate conflicts - it assures, that for 2 lists of addresses: `as = {a1, a2, ..., an}` and `bs = {b1, b2, ..., bm}` such that every `bi` and `ai` is at most 255 long, `concatenate(map(as, (a) => LengthPrefix(a))) = map(bs, (b) => LengthPrefix(b))` if `as = bs`. + +Implementation Tip: account implementations should cache addresses. 
#### Multisig Addresses

For new multisig public keys, we define the `typ` parameter not based on any encoding scheme (amino or protobuf). This avoids issues with non-determinism in the encoding scheme.

Example:

```protobuf
package cosmos.crypto.multisig;

message PubKey {
  uint32 threshold = 1;
  repeated google.protobuf.Any pubkeys = 2;
}
```

```go
func (multisig PubKey) Address() []byte {
	// first gather all nested pub keys
	var keys []address.Addressable // cryptotypes.PubKey implements Addressable
	for _, key := range multisig.Pubkeys {
		keys = append(keys, key.GetCachedValue().(cryptotypes.PubKey))
	}

	// form the type from the message name (cosmos.crypto.multisig.PubKey) and the threshold joined together
	prefix := fmt.Sprintf("%s/%d", proto.MessageName(multisig), multisig.Threshold)

	// use the Composed function defined above
	return address.Composed(prefix, keys)
}
```

### Derived Addresses

We must be able to cryptographically derive one address from another one. The derivation process must guarantee hash properties, hence we use the already defined `Hash` function:

```go
func Derive(address, derivationKey []byte) []byte {
	return Hash(string(address), derivationKey)
}
```

### Module Account Addresses

A module account will have the `"module"` type. Module accounts can have sub accounts. A submodule account is created based on the module name and a sequence of derivation keys. Typically, the first derivation key should be a class of the derived accounts. The derivation process has a defined order: module name, submodule key, subsubmodule key...
An example module account is created using: + +```go +address.Module(moduleName, key) +``` + +An example sub-module account is created using: + +```go +groupPolicyAddresses := []byte{1} +address.Module(moduleName, groupPolicyAddresses, policyID) +``` + +The `address.Module` function is using `address.Hash` with `"module"` as the type argument, and byte representation of the module name concatenated with submodule key. The last two components must be uniquely separated to avoid potential clashes (example: modulename="ab" & submodulekey="bc" will have the same derivation key as modulename="a" & submodulekey="bbc"). +We use a null byte (`'\x00'`) to separate module name from the submodule key. This works, because null byte is not a part of a valid module name. Finally, the sub-submodule accounts are created by applying the `Derive` function recursively. +We could use `Derive` function also in the first step (rather than concatenating the module name with a zero byte and the submodule key). We decided to do concatenation to avoid one level of derivation and speed up computation. + +For backward compatibility with the existing `authtypes.NewModuleAddress`, we add a special case in `Module` function: when no derivation key is provided, we fallback to the "legacy" implementation. 
```go
func Module(moduleName string, derivationKeys ...[]byte) []byte {
	if len(derivationKeys) == 0 {
		return authtypes.NewModuleAddress(moduleName) // legacy case
	}
	// the null byte '\x00' separates the module name from the first derivation key
	submoduleAddress := Hash("module", []byte(moduleName) + '\x00' + derivationKeys[0])
	return fold((a, k) => Derive(a, k), derivationKeys[1:], submoduleAddress)
}
```

**Example 1** A lending BTC pool address would be:

```go
btcPool := address.Module("lending", btc.Address())
```

If we want to create an address for a module account depending on more than one key, we can concatenate them:

```go
btcAtomAMM := address.Module("amm", btc.Address() + atom.Address())
```

**Example 2** a smart-contract address could be constructed by:

```go
smartContractAddr = Module("mySmartContractVM", smartContractsNamespace, smartContractKey)

// which is equivalent to:
smartContractAddr = Derive(
	Module("mySmartContractVM", smartContractsNamespace),
	smartContractKey)
```

### Schema Types

The `typ` parameter used in the `Hash` function SHOULD be unique for each account type. Since all Cosmos SDK account types are serialized in the state, we propose to use the protobuf message name string.

Example: all public key types have a unique protobuf message type similar to:

```protobuf
package cosmos.crypto.sr25519;

message PubKey {
  bytes key = 1;
}
```

All protobuf messages have unique fully qualified names, in this example `cosmos.crypto.sr25519.PubKey`. These names are derived directly from .proto files in a standardized way and used in other places such as the type URL in `Any`s. We can easily obtain the name using `proto.MessageName(msg)`.

## Consequences

### Backwards Compatibility

This ADR is compatible with what was committed and directly supported in the Cosmos SDK repository.
### Positive

* a simple algorithm for generating addresses for new public keys, complex accounts and modules
* the algorithm generalizes _native composed keys_
* increased security and collision resistance of addresses
* the approach is extensible for future use-cases - one can use other address types, as long as they don't conflict with the address length specified here (20 or 32 bytes).
* support for new account types.

### Negative

* addresses do not communicate the key type; a prefixed approach would have done this
* addresses are 60% longer and will consume more storage space
* requires a refactor of KVStore store keys to handle variable length addresses

### Neutral

* protobuf message names are used as key type prefixes

## Further Discussions

Some accounts can have a fixed name or may be constructed in another way (eg: modules). We were discussing an idea of an account with a predefined name (eg: `me.regen`), which could be used by institutions. Without going into details, these kinds of addresses are compatible with the hash based addresses described here as long as they don't have the same length. More specifically, any special account address must not have a length equal to 20 or 32 bytes.

## Appendix: Consulting session

End of Dec 2020 we had a session with [Alan Szepieniec](https://scholar.google.be/citations?user=4LyZn8oAAAAJ&hl=en) to consult on the approach presented above.

Alan's general observations:

* we don't need 2-preimage resistance
* we need a 32-byte address space for collision resistance
* when an attacker can control an input for an object with an address, then we have a problem with a birthday attack
* there is an issue with smart contracts for hashing
* sha2 mining can be used to break the address pre-image

Hashing algorithm:

* any attack breaking blake3 will break blake2
* Alan is pretty confident about the current security analysis of the blake hash algorithm.
It was a finalist, and the author is well known in security analysis.

Algorithm:

* Alan recommends hashing the prefix: `address(pub_key) = hash(hash(key_type) + pub_key)[:32]`, main benefits:
    * we are free to use arbitrarily long prefix names
    * we still don't risk collisions
    * it protects against switch-table attacks
* discussion about penalization -> about adding a prefix post hash
* Aaron asked about post-hash prefixes (`address(pub_key) = key_type + hash(pub_key)`) and differences. Alan noted that this approach has a longer address space and is stronger.

Algorithm for complex / composed keys:

* merging tree-like addresses with the same algorithm is fine

Module addresses: Should module addresses have a different size to differentiate them?

* we will need to set a pre-image prefix for module addresses to keep them in the 32-byte space: `hash(hash('module') + module_key)`
* Aaron's observation: we already need to deal with variable length (to not break secp256k1 keys).

Discussion about an arithmetic hash function for ZKP:

* Poseidon / Rescue
* Problem: much bigger risk because we don't know many of the techniques and the history of crypto-analysis of arithmetic constructions. It's still new ground and an area of active research.

Post-quantum signature size:

* Alan's suggestion: Falcon: speed / size ratio - very good.
* Aaron - should we think about it?
  Alan: based on early extrapolation, such machines will be able to break EC cryptography around 2050. But that's a lot of uncertainty. But there is magic happening with recursion / linking / simulation and that can speed up the progress.

Other ideas:

* Let's say we use the same key and two different address algorithms for 2 different use cases. Is it still safe to use it? Alan: if we want to hide the public key (which is not our use case), then it's less secure but there are fixes.
### References

* [Notes](https://hackmd.io/_NGWI4xZSbKzj1BkCqyZMw)



### ADR 029: Fee Grant Module



# ADR 029: Fee Grant Module

## Changelog

* 2020/08/18: Initial Draft
* 2021/05/05: Removed height based expiration support and simplified naming.

## Status

Accepted

## Context

In order to make blockchain transactions, the signing account must possess a sufficient balance of the right denomination to pay fees. There are classes of transactions where needing to maintain a wallet with sufficient fees is a barrier to adoption.

For instance, when proper permissions are set up, someone may temporarily delegate the ability to vote on proposals to a "burner" account that is stored on a mobile phone with only minimal security.

Other use cases include workers tracking items in a supply chain or farmers submitting field data for analytics or compliance purposes.

For all of these use cases, UX would be significantly enhanced by obviating the need for these accounts to always maintain the appropriate fee balance. This is especially true if we want to achieve enterprise adoption for something like supply chain tracking.

While one solution would be to have a service that fills up these accounts automatically with the appropriate fees, a better UX would be provided by allowing these accounts to pull from a common fee pool account with proper spending limits. A single pool would reduce the churn of making lots of small "fill up" transactions and also more effectively leverage the resources of the organization setting up the pool.

## Decision

As a solution we propose a module, `x/feegrant`, which allows one account, the "granter", to grant another account, the "grantee", an allowance to spend the granter's account balance for fees within certain well-defined limits.
Fee allowances are defined by the extensible `FeeAllowanceI` interface:

```go
type FeeAllowanceI interface {
	// Accept can use the fee payment requested as well as the timestamp of the
	// current block to determine whether or not to process this. This is checked in
	// Keeper.UseGrantedFees and the return values should match how it is handled there.
	//
	// If it returns an error, the fee payment is rejected, otherwise it is accepted.
	// The FeeAllowance implementation is expected to update its internal state
	// and will be saved again after an acceptance.
	//
	// If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
	// (eg. when it is used up). (See call to RevokeFeeAllowance in Keeper.UseGrantedFees)
	Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)

	// ValidateBasic should evaluate this FeeAllowance for internal consistency.
	// Don't allow negative amounts, or negative periods for example.
	ValidateBasic() error
}
```

Two basic fee allowance types, `BasicAllowance` and `PeriodicAllowance`, are defined to support known use cases:

```protobuf
// BasicAllowance implements FeeAllowanceI with a one-time grant of tokens
// that optionally expires. The delegatee can use up to SpendLimit to cover fees.
message BasicAllowance {
  // spend_limit specifies the maximum amount of tokens that can be spent
  // by this allowance and will be updated as tokens are spent. If it is
  // empty, there is no spend limit and any amount of coins can be spent.
  repeated cosmos_sdk.v1.Coin spend_limit = 1;

  // expiration specifies an optional time when this allowance expires
  google.protobuf.Timestamp expiration = 2;
}

// PeriodicAllowance extends FeeAllowanceI to allow for both a maximum cap,
// as well as a limit per time period.
+message PeriodicAllowance { + BasicAllowance basic = 1; + + // period specifies the time duration in which period_spend_limit coins can + // be spent before that allowance is reset + google.protobuf.Duration period = 2; + + // period_spend_limit specifies the maximum number of coins that can be spent + // in the period + repeated cosmos_sdk.v1.Coin period_spend_limit = 3; + + // period_can_spend is the number of coins left to be spent before the period_reset time + repeated cosmos_sdk.v1.Coin period_can_spend = 4; + + // period_reset is the time at which this period resets and a new one begins, + // it is calculated from the start time of the first transaction after the + // last period ended + google.protobuf.Timestamp period_reset = 5; +} + +``` + +Allowances can be granted and revoked using `MsgGrantAllowance` and `MsgRevokeAllowance`: + +```protobuf +// MsgGrantAllowance adds permission for Grantee to spend up to Allowance +// of fees from the account of Granter. +message MsgGrantAllowance { + string granter = 1; + string grantee = 2; + google.protobuf.Any allowance = 3; + } + + // MsgRevokeAllowance removes any existing FeeAllowance from Granter to Grantee. + message MsgRevokeAllowance { + string granter = 1; + string grantee = 2; + } +``` + +In order to use allowances in transactions, we add a new field `granter` to the transaction `Fee` type: + +```protobuf +package cosmos.tx.v1beta1; + +message Fee { + repeated cosmos.base.v1beta1.Coin amount = 1; + uint64 gas_limit = 2; + string payer = 3; + string granter = 4; +} +``` + +`granter` must either be left empty or must correspond to an account which has granted +a fee allowance to the fee payer (either the first signer or the value of the `payer` field). + +A new `AnteDecorator` named `DeductGrantedFeeDecorator` will be created in order to process transactions with `fee_payer` +set and correctly deduct fees based on fee allowances. 
+ +## Consequences + +### Positive + +* improved UX for use cases where it is cumbersome to maintain an account balance just for fees + +### Negative + +### Neutral + +* a new field must be added to the transaction `Fee` message and a new `AnteDecorator` must be +created to use it + +## References + +* Blog article describing initial work: https://medium.com/regen-network/hacking-the-cosmos-cosmwasm-and-key-management-a08b9f561d1b +* Initial public specification: https://gist.github.com/aaronc/b60628017352df5983791cad30babe56 +* Original subkeys proposal from B-harvest which influenced this design: https://github.com/cosmos/cosmos-sdk/issues/4480 + + + +### ADR 030: Authorization Module + + + +# ADR 030: Authorization Module + +## Changelog + +* 2019-11-06: Initial Draft +* 2020-10-12: Updated Draft +* 2020-11-13: Accepted +* 2020-05-06: proto API updates, use `sdk.Msg` instead of `sdk.ServiceMsg` (the latter concept was removed from Cosmos SDK) +* 2022-04-20: Updated the `SendAuthorization` proto docs to clarify the `SpendLimit` is a required field. (Generic authorization can be used with bank msg type url to create limit less bank authorization) + +## Status + +Accepted + +## Abstract + +This ADR defines the `x/authz` module which allows accounts to grant authorizations to perform actions +on behalf of that account to other accounts. + +## Context + +The concrete use cases which motivated this module include: + +* the desire to delegate the ability to vote on proposals to other accounts besides the account which one has +delegated stake +* "sub-keys" functionality, as originally proposed in [\#4480](https://github.com/cosmos/cosmos-sdk/issues/4480) which +is a term used to describe the functionality provided by this module together with +the `fee_grant` module from [ADR 029](./adr-029-fee-grant-module.md) and the [group module](https://github.com/cosmos/cosmos-sdk/tree/main/x/group). 
The "sub-keys" functionality roughly refers to the ability for one account to grant some subset of its capabilities to other accounts with possibly less robust, but easier to use, security measures. For instance, a master account representing an organization could grant the ability to spend small amounts of the organization's funds to individual employee accounts. Or an individual (or group) with a multisig wallet could grant the ability to vote on proposals to any one of the member keys.

The current implementation is based on work done by the [Gaian's team at Hackatom Berlin 2019](https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation).

## Decision

We will create a module named `authz` which provides functionality for granting arbitrary privileges from one account (the _granter_) to another account (the _grantee_). Authorizations must be granted for particular `Msg` service methods one by one using an implementation of the `Authorization` interface.

### Types

Authorizations determine exactly what privileges are granted. They are extensible and can be defined for any `Msg` service method, even outside of the module where the `Msg` method is defined. `Authorization`s reference `Msg`s using their TypeURL.

#### Authorization

```go
type Authorization interface {
	proto.Message

	// MsgTypeURL returns the fully-qualified Msg TypeURL (as described in ADR 020),
	// which will process and accept or reject a request.
	MsgTypeURL() string

	// Accept determines whether this grant permits the provided sdk.Msg to be performed, and if
	// so provides an upgraded authorization instance.
	Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error)

	// ValidateBasic does a simple validation check that
	// doesn't require access to any other information.
	ValidateBasic() error
}

// AcceptResponse instruments the controller of an authz message if the request is accepted
// and if it should be updated or deleted.
+type AcceptResponse struct {
+	// If Accept=true, the controller can accept the authorization and handle the update.
+	Accept bool
+	// If Delete=true, the controller must delete the authorization object and release
+	// storage resources.
+	Delete bool
+	// The controller calling Authorization.Accept must check whether `Updated != nil`. If so,
+	// it must use the updated version and handle the update on the storage level.
+	Updated Authorization
+}
+```
+
+For example, a `SendAuthorization` like this is defined for `MsgSend` that takes
+a `SpendLimit` and updates it down to zero:
+
+```go
+type SendAuthorization struct {
+	// SpendLimit specifies the maximum amount of tokens that can be spent
+	// by this authorization and will be updated as tokens are spent. This field is required. (A GenericAuthorization
+	// can be used with the bank msg type URL to create a limitless bank authorization.)
+	SpendLimit sdk.Coins
+}
+
+func (a SendAuthorization) MsgTypeURL() string {
+	return sdk.MsgTypeURL(&MsgSend{})
+}
+
+func (a SendAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) {
+	mSend, ok := msg.(*MsgSend)
+	if !ok {
+		return authz.AcceptResponse{}, sdkerrors.ErrInvalidType.Wrap("type mismatch")
+	}
+	limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount)
+	if isNegative {
+		return authz.AcceptResponse{}, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit")
+	}
+	if limitLeft.IsZero() {
+		return authz.AcceptResponse{Accept: true, Delete: true}, nil
+	}
+
+	return authz.AcceptResponse{Accept: true, Delete: false, Updated: &SendAuthorization{SpendLimit: limitLeft}}, nil
+}
+```
+
+A different type of capability for `MsgSend` could be implemented
+using the `Authorization` interface with no need to change the underlying
+`bank` module.
+
+##### Small notes on `AcceptResponse`
+
+* The `AcceptResponse.Accept` field will be set to `true` if the authorization is accepted.
+However, if it is rejected, the function `Accept` will raise an error (without setting `AcceptResponse.Accept` to `false`). + +* The `AcceptResponse.Updated` field will be set to a non-nil value only if there is a real change to the authorization. +If authorization remains the same (as is, for instance, always the case for a [`GenericAuthorization`](#genericauthorization)), +the field will be `nil`. + +### `Msg` Service + +```protobuf +service Msg { + // Grant grants the provided authorization to the grantee on the granter's + // account with the provided expiration time. + rpc Grant(MsgGrant) returns (MsgGrantResponse); + + // Exec attempts to execute the provided messages using + // authorizations granted to the grantee. Each message should have only + // one signer corresponding to the granter of the authorization. + rpc Exec(MsgExec) returns (MsgExecResponse); + + // Revoke revokes any authorization corresponding to the provided method name on the + // granter's account that has been granted to the grantee. + rpc Revoke(MsgRevoke) returns (MsgRevokeResponse); +} + +// Grant gives permissions to execute +// the provided method with expiration time. +message Grant { + google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"]; + google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false]; +} + +message MsgGrant { + string granter = 1; + string grantee = 2; + + Grant grant = 3 [(gogoproto.nullable) = false]; +} + +message MsgExecResponse { + cosmos.base.abci.v1beta1.Result result = 1; +} + +message MsgExec { + string grantee = 1; + // Authorization Msg requests to execute. 
Each msg must implement Authorization interface
+	repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = "cosmos.base.v1beta1.Msg"];
+}
+```
+
+### Router Middleware
+
+The `authz` `Keeper` will expose a `DispatchActions` method which allows other modules to send `Msg`s
+to the router based on `Authorization` grants:
+
+```go
+type Keeper interface {
+	// DispatchActions routes the provided msgs to their respective handlers if the grantee was granted an authorization
+	// to send those messages by the first (and only) signer of each msg.
+	DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) sdk.Result
+}
+```
+
+### CLI
+
+#### `tx exec` Method
+
+When a CLI user wants to run a transaction on behalf of another account using `MsgExec`, they
+can use the `exec` method. For instance `gaiacli tx gov vote 1 yes --from <granter> --generate-only | gaiacli tx authz exec --send-as <granter> --from <grantee>`
+would send a transaction like this:
+
+```go
+MsgExec {
+	Grantee: mykey,
+	Msgs: []sdk.Msg{
+		MsgVote {
+			ProposalID: 1,
+			Voter: cosmos3thsdgh983egh823,
+			Option: Yes,
+		},
+	},
+}
+```
+
+#### `tx grant <grantee> <authorization> --from <granter>`
+
+This CLI command will send a `MsgGrant` transaction. `authorization` should be encoded as
+JSON on the CLI.
+
+#### `tx revoke <grantee> <method-name> --from <granter>`
+
+This CLI command will send a `MsgRevoke` transaction.
+
+### Built-in Authorizations
+
+#### `SendAuthorization`
+
+```protobuf
+// SendAuthorization allows the grantee to spend up to spend_limit coins from
+// the granter's account.
+message SendAuthorization {
+	repeated cosmos.base.v1beta1.Coin spend_limit = 1;
+}
+```
+
+#### `GenericAuthorization`
+
+```protobuf
+// GenericAuthorization gives the grantee unrestricted permissions to execute
+// the provided method on behalf of the granter's account.
+message GenericAuthorization {
+	option (cosmos_proto.implements_interface) = "Authorization";
+
+	// Msg, identified by its type URL, to grant unrestricted permissions to execute
+	string msg = 1;
+}
+```
+
+## Consequences
+
+### Positive
+
+* Users will be able to authorize arbitrary actions on behalf of their accounts to other
+users, improving key management for many use cases
+* The solution is more generic than previously considered approaches and the
+`Authorization` interface approach can be extended to cover other use cases by
+SDK users
+
+### Negative
+
+### Neutral
+
+## References
+
+* Initial Hackatom implementation: https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation
+* Post-Hackatom spec: https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#delegation-module
+* B-Harvest subkeys spec: https://github.com/cosmos/cosmos-sdk/issues/4480
+
+
+
+### ADR 031: Protobuf Msg Services
+
+
+
+# ADR 031: Protobuf Msg Services
+
+## Changelog
+
+* 2020-10-05: Initial Draft
+* 2021-04-21: Remove `ServiceMsg`s to follow Protobuf `Any`'s spec, see [#9063](https://github.com/cosmos/cosmos-sdk/issues/9063).
+
+## Status
+
+Accepted
+
+## Abstract
+
+We want to leverage protobuf `service` definitions for defining `Msg`s, which will give us significant developer UX
+improvements in terms of the code that is generated, and the fact that return types will now be well defined.
+
+## Context
+
+Currently `Msg` handlers in the Cosmos SDK have return values that are placed in the `data` field of the response.
+These return values, however, are not specified anywhere except in the golang handler code.
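To see why an unspecified `data` field is painful, here is a sketch of what a client has to do today: know, entirely out of band, how the raw bytes are encoded (`parseProposalID` is a hypothetical helper, not SDK code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// parseProposalID shows the out-of-band knowledge a client needs when
// a handler's return value lives in an opaque data field: here, that
// the bytes encode a proposal ID as a big-endian uint64.
func parseProposalID(data []byte) (uint64, error) {
	if len(data) < 8 {
		return 0, fmt.Errorf("expected at least 8 bytes, got %d", len(data))
	}
	return binary.BigEndian.Uint64(data[:8]), nil
}

func main() {
	// Simulated handler response data containing proposal ID 42.
	data := make([]byte, 8)
	binary.BigEndian.PutUint64(data, 42)

	id, err := parseProposalID(data)
	fmt.Println(id, err) // 42 <nil>
}
```

A typed response message makes this hard-coded parsing unnecessary, which is exactly the motivation below.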
+
+In early conversations [it was proposed](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc/edit)
+that `Msg` return types be captured using a protobuf extension field, ex:
+
+```protobuf
+package cosmos.gov;
+
+message MsgSubmitProposal {
+	option (cosmos_proto.msg_return) = "uint64";
+	string delegator_address = 1;
+	string validator_address = 2;
+	repeated sdk.Coin amount = 3;
+}
+```
+
+This was never adopted, however.
+
+Having a well-specified return value for `Msg`s would improve client UX. For instance,
+in `x/gov`, `MsgSubmitProposal` returns the proposal ID as a big-endian `uint64`.
+This isn't really documented anywhere and clients would need to know the internals
+of the Cosmos SDK to parse that value and return it to users.
+
+Also, there may be cases where we want to use these return values programmatically.
+For instance, https://github.com/cosmos/cosmos-sdk/issues/7093 proposes a method for
+doing inter-module Ocaps using the `Msg` router. A well-defined return type would
+improve the developer UX for this approach.
+
+In addition, handler registration of `Msg` types tends to add a bit of
+boilerplate on top of keepers and is usually done through manual type switches.
+This isn't necessarily bad, but it does add overhead to creating modules.
+
+## Decision
+
+We decide to use protobuf `service` definitions for defining `Msg`s as well as
+the code generated by them as a replacement for `Msg` handlers.
+
+Below we define how this will look for the `SubmitProposal` message from the `x/gov` module.
+We start with a `Msg` `service` definition:
+
+```protobuf
+package cosmos.gov;
+
+service Msg {
+	rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);
+}
+
+// Note that for backwards compatibility this uses MsgSubmitProposal as the request
+// type instead of the more canonical MsgSubmitProposalRequest
+message MsgSubmitProposal {
+	google.protobuf.Any content = 1;
+	string proposer = 2;
+}
+
+message MsgSubmitProposalResponse {
+	uint64 proposal_id = 1;
+}
+```
+
+While this is most commonly used for gRPC, overloading protobuf `service` definitions like this does not violate
+the intent of the [protobuf spec](https://developers.google.com/protocol-buffers/docs/proto3#services), which says:
+> If you don't want to use gRPC, it's also possible to use protocol buffers with your own RPC implementation.
+
+With this approach, we would get an auto-generated `MsgServer` interface.
+In addition to clearly specifying return types, this has the benefit of generating client and server code. On the server
+side, this is almost like an automatically generated keeper method and could maybe be used instead of keepers eventually
+(see [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)):
+
+```go
+package gov
+
+type MsgServer interface {
+	SubmitProposal(context.Context, *MsgSubmitProposal) (*MsgSubmitProposalResponse, error)
+}
+```
+
+On the client side, developers could take advantage of this by creating RPC implementations that encapsulate transaction
+logic. Protobuf libraries that use asynchronous callbacks, like [protobuf.js](https://github.com/protobufjs/protobuf.js#using-services),
+could use this to register callbacks for specific messages even for transactions that include multiple `Msg`s.
+
+Each `Msg` service method should have exactly one request parameter: its corresponding `Msg` type.
For example, the `Msg` service method `/cosmos.gov.v1beta1.Msg/SubmitProposal` above has exactly one request parameter, namely the `Msg` type `/cosmos.gov.v1beta1.MsgSubmitProposal`. It is important that the reader clearly understands the nomenclature difference between a `Msg` service (a Protobuf service) and a `Msg` type (a Protobuf message), and the differences in their fully-qualified names.
+
+This convention has been decided over the more canonical `Msg...Request` names mainly for backwards compatibility, but also for better readability in `TxBody.messages` (see [Encoding section](#encoding) below): transactions containing `/cosmos.gov.v1beta1.MsgSubmitProposal` read better than those containing `/cosmos.gov.v1beta1.MsgSubmitProposalRequest`.
+
+One consequence of this convention is that each `Msg` type can be the request parameter of only one `Msg` service method. However, we consider this limitation a good practice in explicitness.
+
+### Encoding
+
+Encoding of transactions generated with `Msg` services does not differ from the current Protobuf transaction encoding as defined in [ADR-020](./adr-020-protobuf-transaction-encoding.md). We are encoding `Msg` types (which are exactly `Msg` service methods' request parameters) as `Any` in `Tx`s, which involves packing the
+binary-encoded `Msg` with its type URL.
+
+### Decoding
+
+Since `Msg` types are packed into `Any`, decoding transaction messages is done by unpacking `Any`s into `Msg` types. For more information, please refer to [ADR-020](./adr-020-protobuf-transaction-encoding.md#transactions).
+
+### Routing
+
+We propose to add a `msg_service_router` in BaseApp. This router is a key/value map which maps `Msg` types' `type_url`s to their corresponding `Msg` service method handlers. Since there is a 1-to-1 mapping between `Msg` types and `Msg` service methods, the `msg_service_router` has exactly one entry per `Msg` service method.
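The router described above can be sketched as a plain map from type URL to handler (a toy standalone model; `MsgServiceRouter`, `Msg`, and `Handler` here are illustrative types, not BaseApp's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// Msg is a stand-in for a decoded transaction message.
type Msg struct {
	TypeURL string
	Payload string
}

// Handler is a stand-in for a Msg service method handler.
type Handler func(msg Msg) (string, error)

// MsgServiceRouter sketches the proposed router: one entry per Msg
// service method, keyed by the request Msg type's type_url.
type MsgServiceRouter struct {
	routes map[string]Handler
}

func NewMsgServiceRouter() *MsgServiceRouter {
	return &MsgServiceRouter{routes: make(map[string]Handler)}
}

func (r *MsgServiceRouter) Register(typeURL string, h Handler) {
	r.routes[typeURL] = h
}

// Route matches a Msg's type_url against a registered handler,
// mirroring what happens for each message in TxBody.messages.
func (r *MsgServiceRouter) Route(msg Msg) (string, error) {
	h, ok := r.routes[msg.TypeURL]
	if !ok {
		return "", errors.New("unroutable msg: " + msg.TypeURL)
	}
	return h(msg)
}

func main() {
	router := NewMsgServiceRouter()
	router.Register("/cosmos.gov.v1beta1.MsgSubmitProposal", func(msg Msg) (string, error) {
		return "submitted: " + msg.Payload, nil
	})

	res, err := router.Route(Msg{TypeURL: "/cosmos.gov.v1beta1.MsgSubmitProposal", Payload: "text proposal"})
	fmt.Println(res, err)

	_, err = router.Route(Msg{TypeURL: "/cosmos.bank.v1beta1.MsgSend"})
	fmt.Println(err != nil)
}
```

The 1-to-1 mapping is what makes the map sufficient: no two service methods can claim the same request type.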
+
+When a transaction is processed by BaseApp (in CheckTx or in DeliverTx), its `TxBody.messages` are decoded as `Msg`s. Each `Msg`'s `type_url` is matched against an entry in the `msg_service_router`, and the respective `Msg` service method handler is called.
+
+For backward compatibility, the old handlers are not removed yet. If BaseApp receives a legacy `Msg` with no corresponding entry in the `msg_service_router`, it will be routed via its legacy `Route()` method into the legacy handler.
+
+### Module Configuration
+
+In [ADR 021](./adr-021-protobuf-query-encoding.md), we introduced a method `RegisterQueryService`
+to `AppModule` which allows for modules to register gRPC queriers.
+
+To register `Msg` services, we attempt a more extensible approach by converting `RegisterQueryService`
+to a more generic `RegisterServices` method:
+
+```go
+type AppModule interface {
+	RegisterServices(Configurator)
+	...
+}
+
+type Configurator interface {
+	QueryServer() grpc.Server
+	MsgServer() grpc.Server
+}
+
+// example module:
+func (am AppModule) RegisterServices(cfg Configurator) {
+	types.RegisterQueryServer(cfg.QueryServer(), keeper)
+	types.RegisterMsgServer(cfg.MsgServer(), keeper)
+}
+```
+
+The `RegisterServices` method and the `Configurator` interface are intended to
+evolve to satisfy the use cases discussed in [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)
+and [\#7122](https://github.com/cosmos/cosmos-sdk/issues/7122).
+
+When `Msg` services are registered, the framework _should_ verify that all `Msg` types
+implement the `sdk.Msg` interface and throw an error during initialization rather
+than later when transactions are processed.
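The suggested init-time check can be sketched with reflection (a toy standalone model; `Msg`, `MsgSubmitProposal`, and `validateRequestTypes` are hypothetical names, not SDK code):

```go
package main

import (
	"fmt"
	"reflect"
)

// Msg is a stand-in for the sdk.Msg interface that all request types
// must implement.
type Msg interface {
	ValidateBasic() error
}

type MsgSubmitProposal struct{}

func (m *MsgSubmitProposal) ValidateBasic() error { return nil }

type NotAMsg struct{}

// validateRequestTypes sketches the init-time verification suggested
// above: fail at registration time if any request type does not
// implement Msg, instead of failing later while processing a transaction.
func validateRequestTypes(reqs ...interface{}) error {
	msgType := reflect.TypeOf((*Msg)(nil)).Elem()
	for _, r := range reqs {
		if !reflect.TypeOf(r).Implements(msgType) {
			return fmt.Errorf("%T does not implement Msg", r)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateRequestTypes(&MsgSubmitProposal{})) // <nil>
	fmt.Println(validateRequestTypes(&NotAMsg{}) != nil)    // true
}
```

Surfacing the error at startup turns a runtime transaction failure into an immediate, obvious configuration bug.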
+
+### `Msg` Service Implementation
+
+Just like query services, `Msg` service methods can retrieve the `sdk.Context`
+from the `context.Context` parameter using the `sdk.UnwrapSDKContext`
+method:
+
+```go
+package gov
+
+func (k Keeper) SubmitProposal(goCtx context.Context, params *types.MsgSubmitProposal) (*MsgSubmitProposalResponse, error) {
+	ctx := sdk.UnwrapSDKContext(goCtx)
+	...
+}
+```
+
+The `sdk.Context` should have an `EventManager` already attached by BaseApp's `msg_service_router`.
+
+A separate handler definition is no longer needed with this approach.
+
+## Consequences
+
+This design changes how module functionality is exposed and accessed. It deprecates the existing `Handler` interface and `AppModule.Route` in favor of [Protocol Buffer Services](https://developers.google.com/protocol-buffers/docs/proto3#services) and the Service Routing described above. This dramatically simplifies the code. We don't need to create handlers and keepers any more. The use of Protocol Buffer auto-generated clients clearly separates the communication interfaces between the module and a module's users. The control logic (aka handlers and keepers) is no longer exposed. A module interface can be seen as a black box accessible through a client API. It is worth noting that the client interfaces are also generated by Protocol Buffers.
+
+This also allows us to change how we perform functional tests. Instead of mocking AppModules and the Router, we will mock a client (the server will stay hidden). More specifically: we will never mock `moduleA.MsgServer` in `moduleB`, but rather `moduleA.MsgClient`. One can think about it as working with external services (e.g. DBs, or online servers). We assume that the transmission between clients and servers is correctly handled by generated Protocol Buffers.
+
+Finally, closing a module behind its client API opens desirable OCAP patterns discussed in ADR-033.
Since the server implementation and interface are hidden, nobody can hold "keepers"/servers; everyone will be forced to rely on the client interface, which will drive developers toward correct encapsulation and software engineering patterns.
+
+### Pros
+
+* communicates return types clearly
+* manual handler registration and return type marshaling are no longer needed; just implement the interface and register it
+* the communication interface is automatically generated, so the developer can now focus only on the state transition methods - this would improve the UX of the [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093) approach (1) if we chose to adopt that
+* generated client code could be useful for clients and tests
+* dramatically reduces and simplifies the code
+
+### Cons
+
+* using `service` definitions outside the context of gRPC could be confusing (but doesn't violate the proto3 spec)
+
+## References
+
+* [Initial Github Issue \#7122](https://github.com/cosmos/cosmos-sdk/issues/7122)
+* [proto 3 Language Guide: Defining Services](https://developers.google.com/protocol-buffers/docs/proto3#services)
+* [Initial pre-`Any` `Msg` designs](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc)
+* [ADR 020](./adr-020-protobuf-transaction-encoding.md)
+* [ADR 021](./adr-021-protobuf-query-encoding.md)
+
+
+
+### ADR 032: Typed Events
+
+
+
+# ADR 032: Typed Events
+
+## Changelog
+
+* 28-Sept-2020: Initial Draft
+
+## Authors
+
+* Anil Kumar (@anilcse)
+* Jack Zampolin (@jackzampolin)
+* Adam Bozanich (@boz)
+
+## Status
+
+Proposed
+
+## Abstract
+
+Currently in the Cosmos SDK, events are defined in the handlers for each message as well as in `BeginBlock` and `EndBlock`. Modules do not have types defined for each event; they are implemented as `map[string]string`. Above all else, this makes these events difficult to consume, as it requires a great deal of raw string matching and parsing.
This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team. + +## Context + +Currently in the Cosmos SDK, events are defined in the handlers for each message, meaning each module doesn't have a canonical set of types for each event. Above all else this makes these events difficult to consume as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team. + +[Our platform](http://github.com/ovrclk/akash) requires a number of programmatic on chain interactions both on the provider (datacenter - to bid on new orders and listen for leases created) and user (application developer - to send the app manifest to the provider) side. In addition the Akash team is now maintaining the IBC [`relayer`](https://github.com/ovrclk/relayer), another very event driven process. In working on these core pieces of infrastructure, and integrating lessons learned from Kubernetes development, our team has developed a standard method for defining and consuming typed events in Cosmos SDK modules. We have found that it is extremely useful in building this type of event driven application. + +As the Cosmos SDK gets used more extensively for apps like `peggy`, other peg zones, IBC, DeFi, etc... there will be an exploding demand for event driven applications to support new features desired by users. We propose upstreaming our findings into the Cosmos SDK to enable all Cosmos SDK applications to quickly and easily build event driven apps to aid their core application. Wallets, exchanges, explorers, and defi protocols all stand to benefit from this work. 
+ +If this proposal is accepted, users will be able to build event driven Cosmos SDK apps in go by just writing `EventHandler`s for their specific event types and passing them to `EventEmitters` that are defined in the Cosmos SDK. + +The end of this proposal contains a detailed example of how to consume events after this refactor. + +This proposal is specifically about how to consume these events as a client of the blockchain, not for intermodule communication. + +## Decision + +**Step-1**: Implement additional functionality in the `types` package: `EmitTypedEvent` and `ParseTypedEvent` functions + +```go +// types/events.go + +// EmitTypedEvent takes typed event and emits converting it into sdk.Event +func (em *EventManager) EmitTypedEvent(event proto.Message) error { + evtType := proto.MessageName(event) + evtJSON, err := codec.ProtoMarshalJSON(event) + if err != nil { + return err + } + + var attrMap map[string]json.RawMessage + err = json.Unmarshal(evtJSON, &attrMap) + if err != nil { + return err + } + + var attrs []abci.EventAttribute + for k, v := range attrMap { + attrs = append(attrs, abci.EventAttribute{ + Key: []byte(k), + Value: v, + }) + } + + em.EmitEvent(Event{ + Type: evtType, + Attributes: attrs, + }) + + return nil +} + +// ParseTypedEvent converts abci.Event back to typed event +func ParseTypedEvent(event abci.Event) (proto.Message, error) { + concreteGoType := proto.MessageType(event.Type) + if concreteGoType == nil { + return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type) + } + + var value reflect.Value + if concreteGoType.Kind() == reflect.Ptr { + value = reflect.New(concreteGoType.Elem()) + } else { + value = reflect.Zero(concreteGoType) + } + + protoMsg, ok := value.Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("%q does not implement proto.Message", event.Type) + } + + attrMap := make(map[string]json.RawMessage) + for _, attr := range event.Attributes { + attrMap[string(attr.Key)] = attr.Value + } 
+
+	attrBytes, err := json.Marshal(attrMap)
+	if err != nil {
+		return nil, err
+	}
+
+	err = jsonpb.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg)
+	if err != nil {
+		return nil, err
+	}
+
+	return protoMsg, nil
+}
+```
+
+Here, `EmitTypedEvent` is a method on `EventManager` which takes a typed event as input and applies JSON serialization to it. It then maps the JSON key/value pairs to `event.Attributes` and emits it in the form of an `sdk.Event`. `Event.Type` will be the type URL of the proto message.
+
+When we subscribe to emitted events on the CometBFT websocket, they are emitted in the form of an `abci.Event`. `ParseTypedEvent` parses the event back to its original proto message.
+
+**Step-2**: Add proto definitions for typed events for msgs in each module:
+
+For example, let's take `MsgSubmitProposal` of the `gov` module and implement this event's type.
+
+```protobuf
+// proto/cosmos/gov/v1beta1/gov.proto
+// Add typed event definition
+
+package cosmos.gov.v1beta1;
+
+message EventSubmitProposal {
+	string from_address = 1;
+	uint64 proposal_id = 2;
+	TextProposal proposal = 3;
+}
+```
+
+**Step-3**: Refactor event emission to use the typed event created and emit using `sdk.EmitTypedEvent`:
+
+```go
+// x/gov/handler.go
+func handleMsgSubmitProposal(ctx sdk.Context, keeper keeper.Keeper, msg types.MsgSubmitProposalI) (*sdk.Result, error) {
+	...
+	ctx.EventManager().EmitTypedEvent(
+		&EventSubmitProposal{
+			FromAddress: fromAddress,
+			ProposalId:  id,
+			Proposal:    proposal,
+		},
+	)
+	...
+}
+```
+
+### How to subscribe to these typed events in `Client`
+
+> NOTE: Full code example below
+
+Users will be able to subscribe using `client.Context.Client.Subscribe` and consume events which are emitted using `EventHandler`s.
+
+Akash Network has built a simple [`pubsub`](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/pubsub/bus.go#L20).
This can be used to subscribe to `abci.Events` and [publish](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L21) them as typed events.
+
+Please see the below code sample for more detail on how this flow looks for clients.
+
+## Consequences
+
+### Positive
+
+* Improves consistency of implementation for the events currently in the Cosmos SDK
+* Provides a much more ergonomic way to handle events and facilitates writing event driven applications
+* This implementation will support a middleware ecosystem of `EventHandler`s
+
+### Negative
+
+## Detailed code example of publishing events
+
+This ADR also proposes adding affordances to emit and consume these events. This way developers will only need to write
+`EventHandler`s which define the actions they desire to take.
+
+```go
+// EventEmitter is a type that describes event emitter functions
+// This should be defined in `types/events.go`
+type EventEmitter func(context.Context, client.Context, ...EventHandler) error
+
+// EventHandler is a type of function that handles events coming out of the event bus
+// This should be defined in `types/events.go`
+type EventHandler func(proto.Message) error
+
+// Sample use of the functions below
+func main() {
+	ctx, cancel := context.WithCancel(context.Background())
+
+	if err := TxEmitter(ctx, client.Context{}.WithNodeURI("tcp://localhost:26657"), SubmitProposalEventHandler); err != nil {
+		cancel()
+		panic(err)
+	}
+}
+
+// SubmitProposalEventHandler is an example of an event handler that prints proposal details
+// when any EventSubmitProposal is emitted.
+func SubmitProposalEventHandler(ev proto.Message) (err error) {
+	switch event := ev.(type) {
+	// Handle governance proposal creation events
+	case govtypes.EventSubmitProposal:
+		// Users define business logic here e.g.
+		fmt.Println(event.FromAddress, event.ProposalId, event.Proposal)
+		return nil
+	default:
+		return nil
+	}
+}
+
+// TxEmitter is an example of an event emitter that emits just transaction events. This can and
+// should be implemented somewhere in the Cosmos SDK. The Cosmos SDK can include an EventEmitter for tm.event='Tx'
+// and/or tm.event='NewBlock' (the new block events may contain typed events)
+func TxEmitter(ctx context.Context, cliCtx client.Context, ehs ...EventHandler) (err error) {
+	// Instantiate and start CometBFT RPC client
+	client, err := cliCtx.GetNode()
+	if err != nil {
+		return err
+	}
+
+	if err = client.Start(); err != nil {
+		return err
+	}
+
+	// Start the pubsub bus
+	bus := pubsub.NewBus()
+	defer bus.Close()
+
+	// Initialize a new error group
+	eg, ctx := errgroup.WithContext(ctx)
+
+	// Publish chain events to the pubsub bus
+	eg.Go(func() error {
+		return PublishChainTxEvents(ctx, client, bus, simapp.ModuleBasics)
+	})
+
+	// Subscribe to the bus events
+	subscriber, err := bus.Subscribe()
+	if err != nil {
+		return err
+	}
+
+	// Handle all the events coming out of the bus
+	eg.Go(func() error {
+		var err error
+		for {
+			select {
+			case <-ctx.Done():
+				return nil
+			case <-subscriber.Done():
+				return nil
+			case ev := <-subscriber.Events():
+				for _, eh := range ehs {
+					if err = eh(ev); err != nil {
+						break
+					}
+				}
+			}
+		}
+	})
+
+	return eg.Wait()
+}
+
+// PublishChainTxEvents publishes chain transaction events using cmtclient. Waits on context shutdown signals to exit.
+func PublishChainTxEvents(ctx context.Context, client cmtclient.EventsClient, bus pubsub.Bus, mb module.BasicManager) (err error) {
+	// Subscribe to transaction events
+	txch, err := client.Subscribe(ctx, "txevents", "tm.event='Tx'", 100)
+	if err != nil {
+		return err
+	}
+
+	// Unsubscribe from transaction events on function exit
+	defer func() {
+		err = client.UnsubscribeAll(ctx, "txevents")
+	}()
+
+	// Use errgroup to manage concurrency
+	g, ctx := errgroup.WithContext(ctx)
+
+	// Publish transaction events in a goroutine
+	g.Go(func() error {
+		for {
+			select {
+			case <-ctx.Done():
+				return nil
+			case ed := <-txch:
+				switch evt := ed.Data.(type) {
+				case cmttypes.EventDataTx:
+					if !evt.Result.IsOK() {
+						continue
+					}
+					// range over events, parse them using the basic manager and
+					// send them to the pubsub bus
+					for _, abciEv := range evt.Result.Events {
+						typedEvent, err := sdk.ParseTypedEvent(abciEv)
+						if err != nil {
+							return err
+						}
+						if err := bus.Publish(typedEvent); err != nil {
+							bus.Close()
+							return err
+						}
+					}
+				}
+			}
+		}
+	})
+
+	// Exit on error or context cancellation
+	return g.Wait()
+}
+```
+
+## References
+
+* [Publish Custom Events via a bus](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L19-L58)
+* [Consuming the events in `Client`](https://github.com/ovrclk/deploy/blob/bf6c633ab6c68f3026df59efd9982d6ca1bf0561/cmd/event-handlers.go#L57)
+
+
+
+### ADR 033: Protobuf-based Inter-Module Communication
+
+
+
+# ADR 033: Protobuf-based Inter-Module Communication
+
+## Changelog
+
+* 2020-10-05: Initial Draft
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR introduces a system for permissioned inter-module communication leveraging the protobuf `Query` and `Msg`
+service definitions defined in [ADR 021](./adr-021-protobuf-query-encoding.md) and
+[ADR 031](./adr-031-msg-service.md) which provides:
+
+* stable protobuf based module interfaces to potentially later replace the
keeper paradigm +* stronger inter-module object capabilities (OCAPs) guarantees +* module accounts and sub-account authorization + +## Context + +In the current Cosmos SDK documentation on the [Object-Capability Model](../docs/learn/advanced/10-ocap.md), it is stated that: + +> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules. + +There is currently not a thriving ecosystem of Cosmos SDK modules. We hypothesize that this is in part due to: + +1. lack of a stable v1.0 Cosmos SDK to build modules off of. Module interfaces are changing, sometimes dramatically, from +point release to point release, often for good reasons, but this does not create a stable foundation to build on. +2. lack of a properly implemented object capability or even object-oriented encapsulation system which makes refactors +of module keeper interfaces inevitable because the current interfaces are poorly constrained. + +### `x/bank` Case Study + +Currently the `x/bank` keeper gives pretty much unrestricted access to any module which references it. For instance, the +`SetBalance` method allows the caller to set the balance of any account to anything, bypassing even proper tracking of supply. + +There appears to have been some later attempts to implement some semblance of OCAPs using module-level minting, staking +and burning permissions. These permissions allow a module to mint, burn or delegate tokens with reference to the module’s +own account. These permissions are actually stored as a `[]string` array on the `ModuleAccount` type in state. + +However, these permissions don’t really do much. They control what modules can be referenced in the `MintCoins`, +`BurnCoins` and `DelegateCoins***` methods, but for one there is no unique object capability token that controls access — +just a simple string. 
So the `x/upgrade` module could mint tokens for the `x/staking` module simply by calling
+`MintCoins("staking")`. Furthermore, all modules which have access to these keeper methods also have access to
+`SetBalance`, negating any other attempt at OCAPs and breaking even basic object-oriented encapsulation.
+
+## Decision
+
+Based on [ADR-021](./adr-021-protobuf-query-encoding.md) and [ADR-031](./adr-031-msg-service.md), we introduce the
+Inter-Module Communication framework for secure module authorization and OCAPs.
+When implemented, this could also serve as an alternative to the existing paradigm of passing keepers between
+modules. The approach outlined herein is intended to form the basis of a Cosmos SDK v1.0 that provides the necessary
+stability and encapsulation guarantees to allow a thriving module ecosystem to emerge.
+
+Of particular note: the decision is to _enable_ this functionality for modules to adopt at their own discretion.
+Proposals to migrate existing modules to this new paradigm will have to be a separate conversation, potentially
+addressed as amendments to this ADR.
+
+### New "Keeper" Paradigm
+
+In [ADR 021](./adr-021-protobuf-query-encoding.md), a mechanism for using protobuf service definitions to define queriers
+was introduced, and in [ADR 031](./adr-031-msg-service.md), a mechanism for using protobuf service definitions to define `Msg`s was added.
+Protobuf service definitions generate two golang interfaces representing the client and server sides of a service, plus
+some helper code.
Here is a minimal example for the bank `cosmos.bank.Msg/Send` message type:
+
+```go
+package bank
+
+type MsgClient interface {
+    Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error)
+}
+
+type MsgServer interface {
+    Send(context.Context, *MsgSend) (*MsgSendResponse, error)
+}
+```
+
+[ADR 021](./adr-021-protobuf-query-encoding.md) and [ADR 31](./adr-031-msg-service.md) specify how modules can implement the generated `QueryServer`
+and `MsgServer` interfaces as replacements for the legacy queriers and `Msg` handlers respectively.
+
+In this ADR we explain how modules can make queries and send `Msg`s to other modules using the generated `QueryClient`
+and `MsgClient` interfaces and propose this mechanism as a replacement for the existing `Keeper` paradigm. To be clear,
+this ADR does not necessitate the creation of new protobuf definitions or services. Rather, it leverages the same proto-based
+service interfaces already used by clients for inter-module communication.
+
+Using this `QueryClient`/`MsgClient` approach has the following key benefits over exposing keepers to external modules:
+
+1. Protobuf types are checked for breaking changes using [buf](https://buf.build/docs/breaking-overview), and because of
+the way protobuf is designed this will give us strong backwards compatibility guarantees while allowing for forward
+evolution.
+2. The separation between the client and server interfaces will allow us to insert permission-checking code in between
+the two which checks whether one module is authorized to send the specified `Msg` to the other module, providing a proper
+object-capability system (see below).
+3. The router for inter-module communication gives us a convenient place to handle rollback of transactions,
+enabling atomicity of operations ([currently a problem](https://github.com/cosmos/cosmos-sdk/issues/8030)). 
Any failure within a module-to-module call would result in a failure of the entire
+transaction.
+
+This mechanism has the added benefits of:
+
+* reducing boilerplate through code generation, and
+* allowing for modules in other languages, either via a VM like CosmWasm or as sub-processes using gRPC
+
+### Inter-module Communication
+
+To use the `Client` generated by the protobuf compiler we need a `grpc.ClientConn` [interface](https://github.com/grpc/grpc-go/blob/v1.49.x/clientconn.go#L441-L450)
+implementation. For this we introduce
+a new type, `ModuleKey`, which implements the `grpc.ClientConn` interface. `ModuleKey` can be thought of as the "private
+key" corresponding to a module account, where authentication is provided through use of a special `Invoker()` function,
+described in more detail below.
+
+Blockchain users (external clients) use their account's private key to sign transactions containing `Msg`s where they are listed as signers (each
+message specifies required signers with `Msg.GetSigners`). The authentication check is performed by the `AnteHandler`.
+
+Here, we extend this process by allowing modules to be identified in `Msg.GetSigners`. When a module wants to trigger the execution of a `Msg` in another module,
+its `ModuleKey` acts as the sender (through the `ClientConn` interface we describe below) and is set as the sole "signer". It is worth noting
+that we don't use any cryptographic signature in this case.
+For example, module `A` could use its `A.ModuleKey` to create a `MsgSend` object for a `/cosmos.bank.Msg/Send` transaction. `MsgSend` validation
+will assure that the `from` account (`A.ModuleKey` in this case) is the signer.
+
+Here's an example of a hypothetical module `foo` interacting with `x/bank`:
+
+```go
+package foo
+
+type FooMsgServer struct {
+    // ...
+
+    bankQuery bank.QueryClient
+    bankMsg   bank.MsgClient
+}
+
+func NewFooMsgServer(moduleKey RootModuleKey, ...) FooMsgServer {
+    // ...
+
+    return FooMsgServer {
+        // ... 
+        moduleKey: moduleKey,
+        bankQuery: bank.NewQueryClient(moduleKey),
+        bankMsg:   bank.NewMsgClient(moduleKey),
+    }
+}
+
+func (foo *FooMsgServer) Bar(ctx context.Context, req *MsgBarRequest) (*MsgBarResponse, error) {
+    balance, err := foo.bankQuery.Balance(ctx, &bank.QueryBalanceRequest{Address: foo.moduleKey.Address(), Denom: "foo"})
+
+    ...
+
+    res, err := foo.bankMsg.Send(ctx, &bank.MsgSendRequest{FromAddress: foo.moduleKey.Address(), ...})
+
+    ...
+}
+```
+
+This design is also intended to be extensible to cover use cases of more fine-grained permissioning like minting by
+denom prefix being restricted to certain modules (as discussed in
+[#7459](https://github.com/cosmos/cosmos-sdk/pull/7459#discussion_r529545528)).
+
+### `ModuleKey`s and `ModuleID`s
+
+A `ModuleKey` can be thought of as a "private key" for a module account and a `ModuleID` can be thought of as the
+corresponding "public key". As described in [ADR 028](./adr-028-public-key-addresses.md), modules can have both a root module account and any number of sub-accounts
+or derived accounts that can be used for different pools (ex. staking pools) or managed accounts (ex. group
+accounts). We can also think of module sub-accounts as similar to derived keys - there is a root key and then some
+derivation path. `ModuleID` is a simple struct which contains the module name and an optional "derivation" path,
+and forms its address based on the `AddressHash` method from [ADR 028](#adr-028-public-key-addresses):
+
+```go
+type ModuleID struct {
+    ModuleName string
+    Path       []byte
+}
+
+func (key ModuleID) Address() []byte {
+    return AddressHash(key.ModuleName, key.Path)
+}
+```
+
+In addition to being able to generate a `ModuleID` and address, a `ModuleKey` contains a special function called
+`Invoker` which is the key to safe inter-module access. 
The `Invoker` creates an `InvokeFn` closure which is used as an `Invoke` method in
+the `grpc.ClientConn` interface and under the hood is able to route messages to the appropriate `Msg` and `Query` handlers,
+performing appropriate security checks on `Msg`s. This allows for even safer inter-module access than keepers, whose
+private member variables could be manipulated through reflection. Golang does not support reflection on a function
+closure's captured variables, and direct manipulation of memory would be needed for a truly malicious module to bypass
+the `ModuleKey` security.
+
+The two `ModuleKey` types are `RootModuleKey` and `DerivedModuleKey`:
+
+```go
+type Invoker func(callInfo CallInfo) func(ctx context.Context, request, response interface{}, opts ...interface{}) error
+
+type CallInfo struct {
+    Method string
+    Caller ModuleID
+}
+
+type RootModuleKey struct {
+    moduleName string
+    invoker    Invoker
+}
+
+func (rm RootModuleKey) Derive(path []byte) DerivedModuleKey { /* ... */ }
+
+type DerivedModuleKey struct {
+    moduleName string
+    path       []byte
+    invoker    Invoker
+}
+```
+
+A module can get access to a `DerivedModuleKey` using the `Derive(path []byte)` method on `RootModuleKey` and then
+would use this key to authenticate `Msg`s from a sub-account. Ex:
+
+```go
+package foo
+
+func (fooMsgServer *MsgServer) Bar(ctx context.Context, req *MsgBar) (*MsgBarResponse, error) {
+    derivedKey := fooMsgServer.moduleKey.Derive(req.SomePath)
+    bankMsgClient := bank.NewMsgClient(derivedKey)
+    res, err := bankMsgClient.Send(ctx, &bank.MsgSend{FromAddress: derivedKey.Address(), ...})
+    ...
+}
+```
+
+In this way, a module can gain permissioned access to a root account and any number of sub-accounts and send
+authenticated `Msg`s from these accounts. 
The `Invoker` `callInfo.Caller` parameter is used under the hood to
+distinguish between different module accounts, but either way the function returned by `Invoker` only allows `Msg`s
+from either the root or a derived module account to pass through.
+
+Note that `Invoker` itself returns a function closure based on the `CallInfo` passed in. This will allow client implementations
+in the future that cache the invoke function for each method type, avoiding the overhead of a hash table lookup.
+This would reduce the performance overhead of this inter-module communication method to the bare minimum required for
+checking permissions.
+
+To reiterate, the closure only allows access to authorized calls. There is no access to anything else regardless of any
+name impersonation.
+
+Below is a rough sketch of the implementation of `grpc.ClientConn.Invoke` for `RootModuleKey`:
+
+```go
+func (key RootModuleKey) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...grpc.CallOption) error {
+    f := key.invoker(CallInfo{Method: method, Caller: ModuleID{ModuleName: key.moduleName}})
+    return f(ctx, args, reply)
+}
+```
+
+### `AppModule` Wiring and Requirements
+
+In [ADR 031](./adr-031-msg-service.md), the `AppModule.RegisterServices(Configurator)` method was introduced. To support
+inter-module communication, we extend the `Configurator` interface to pass in the `ModuleKey` and to allow modules to
+specify their dependencies on other modules using `RequireServer()`:
+
+```go
+type Configurator interface {
+    MsgServer() grpc.Server
+    QueryServer() grpc.Server
+
+    ModuleKey() ModuleKey
+    RequireServer(msgServer interface{})
+}
+```
+
+The `ModuleKey` is passed to modules in the `RegisterServices` method itself so that `RegisterServices` serves as a single
+entry point for configuring module services. This is intended to also have the side-effect of greatly reducing boilerplate in
+`app.go`. 
For now, `ModuleKey`s will be created based on `AppModuleBasic.Name()`, but a more flexible system may be +introduced in the future. The `ModuleManager` will handle creation of module accounts behind the scenes. + +Because modules do not get direct access to each other anymore, modules may have unfulfilled dependencies. To make sure +that module dependencies are resolved at startup, the `Configurator.RequireServer` method should be added. The `ModuleManager` +will make sure that all dependencies declared with `RequireServer` can be resolved before the app starts. An example +module `foo` could declare its dependency on `x/bank` like this: + +```go +package foo + +func (am AppModule) RegisterServices(cfg Configurator) { + cfg.RequireServer((*bank.QueryServer)(nil)) + cfg.RequireServer((*bank.MsgServer)(nil)) +} +``` + +### Security Considerations + +In addition to checking for `ModuleKey` permissions, a few additional security precautions will need to be taken by +the underlying router infrastructure. + +#### Recursion and Re-entry + +Recursive or re-entrant method invocations pose a potential security threat. This can be a problem if Module A +calls Module B and Module B calls module A again in the same call. + +One basic way for the router system to deal with this is to maintain a call stack which prevents a module from +being referenced more than once in the call stack so that there is no re-entry. A `map[string]interface{}` table +in the router could be used to perform this security check. + +#### Queries + +Queries in Cosmos SDK are generally un-permissioned so allowing one module to query another module should not pose +any major security threats assuming basic precautions are taken. The basic precaution that the router system will +need to take is making sure that the `sdk.Context` passed to query methods does not allow writing to the store. This +can be done for now with a `CacheMultiStore` as is currently done for `BaseApp` queries. 
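
The router-level precautions above (the re-entrancy call stack and the permission check on each call) can be sketched in Go. This is a hypothetical illustration only: the `Router` type, its `handlers` map and the error messages are assumptions made for this example, not the actual SDK router, and only the call-stack idea comes from this ADR.

```go
package main

import "fmt"

// ModuleID and CallInfo mirror the types defined earlier in this ADR.
type ModuleID struct {
	ModuleName string
	Path       []byte
}

type CallInfo struct {
	Method string
	Caller ModuleID
}

// Router is a hypothetical inter-module router. callStack records which
// modules are currently being invoked so that no module appears on the
// call stack more than once (i.e. no re-entry).
type Router struct {
	handlers  map[string]func(CallInfo) error
	callStack map[string]bool
}

func NewRouter() *Router {
	return &Router{
		handlers:  map[string]func(CallInfo) error{},
		callStack: map[string]bool{},
	}
}

// Invoke routes a call to the target module's handler, refusing re-entry.
func (r *Router) Invoke(target string, info CallInfo) error {
	if r.callStack[target] {
		return fmt.Errorf("re-entrant call into module %q rejected", target)
	}
	h, ok := r.handlers[target]
	if !ok {
		return fmt.Errorf("no handler registered for module %q", target)
	}
	r.callStack[target] = true
	defer delete(r.callStack, target)
	return h(info)
}
```

Under these assumptions, a plain `foo → bank` call succeeds, while a handler that hops back into a module already on the stack gets a re-entrancy error.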
+ +### Internal Methods + +In many cases, we may wish for modules to call methods on other modules which are not exposed to clients at all. For this +purpose, we add the `InternalServer` method to `Configurator`: + +```go +type Configurator interface { + MsgServer() grpc.Server + QueryServer() grpc.Server + InternalServer() grpc.Server +} +``` + +As an example, x/slashing's Slash must call x/staking's Slash, but we don't want to expose x/staking's Slash to end users +and clients. + +Internal protobuf services will be defined in a corresponding `internal.proto` file in the given module's +proto package. + +Services registered against `InternalServer` will be callable from other modules but not by external clients. + +An alternative solution to internal-only methods could involve hooks / plugins as discussed [here](https://github.com/cosmos/cosmos-sdk/pull/7459#issuecomment-733807753). +A more detailed evaluation of a hooks / plugin system will be addressed later in follow-ups to this ADR or as a separate +ADR. + +### Authorization + +By default, the inter-module router requires that messages are sent by the first signer returned by `GetSigners`. The +inter-module router should also accept authorization middleware such as that provided by [ADR 030](#adr-030-authorization-module). +This middleware will allow accounts to authorize specific module accounts to perform actions on their behalf. +Authorization middleware should take into account the need to grant certain modules effectively "admin" privileges to +other modules. This will be addressed in separate ADRs or updates to this ADR. + +### Future Work + +Other future improvements may include: + +* custom code generation that: + * simplifies interfaces (ex. 
generates code with `sdk.Context` instead of `context.Context`)
+  * optimizes inter-module calls - for instance, caching resolved methods after the first invocation
+* combining `StoreKey`s and `ModuleKey`s into a single interface so that modules have a single OCAPs handle
+* code generation which makes inter-module communication more performant
+* decoupling `ModuleKey` creation from `AppModuleBasic.Name()` so that apps can override root module account names
+* inter-module hooks and plugins
+
+## Alternatives
+
+### MsgServices vs `x/capability`
+
+The `x/capability` module does provide a proper object-capability implementation that can be used by any module in the
+Cosmos SDK and could even be used for inter-module OCAPs as described in [\#5931](https://github.com/cosmos/cosmos-sdk/issues/5931).
+
+The advantages of the approach described in this ADR are mostly around how it integrates with other parts of the Cosmos SDK,
+specifically:
+
+* protobuf so that:
+  * code generation of interfaces can be leveraged for a better dev UX
+  * module interfaces are versioned and checked for breakage using [buf](https://docs.buf.build/breaking-overview)
+* sub-module accounts as per ADR 028
+* the general `Msg` passing paradigm and the way signers are specified by `GetSigners`
+
+Also, this is a complete replacement for keepers and could be applied to _all_ inter-module communication, whereas the
+`x/capability` approach in #5931 would need to be applied method by method.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR is intended to provide a pathway to a scenario where there is greater long-term compatibility between modules.
+In the short term, this will likely result in breaking certain `Keeper` interfaces which are too permissive and/or
+replacing `Keeper` interfaces altogether. 
+
+### Positive
+
+* an alternative to keepers which can more easily lead to stable inter-module interfaces
+* proper inter-module OCAPs
+* improved module developer DevX, as commented on by several participants on
+  [Architecture Review Call, Dec 3](https://hackmd.io/E0wxxOvRQ5qVmTf6N_k84Q)
+* lays the groundwork for what can be a greatly simplified `app.go`
+* the router can be set up to enforce atomic transactions for module-to-module calls
+
+### Negative
+
+* modules which adopt this will need significant refactoring
+
+### Neutral
+
+## Test Cases [optional]
+
+## References
+
+* [ADR 021](./adr-021-protobuf-query-encoding.md)
+* [ADR 031](./adr-031-msg-service.md)
+* [ADR 028](./adr-028-public-key-addresses.md)
+* [ADR 030 draft](https://github.com/cosmos/cosmos-sdk/pull/7105)
+* [Object-Capability Model](https://docs.network.com/main/core/ocap)
+
+
+
+### ADR 034: Account Rekeying
+
+
+
+# ADR 034: Account Rekeying
+
+## Changelog
+
+* 30-09-2020: Initial Draft
+
+## Status
+
+PROPOSED
+
+## Abstract
+
+Account rekeying is a process that allows an account to replace its authentication pubkey with a new one.
+
+## Context
+
+Currently, in the Cosmos SDK, the address of an auth `BaseAccount` is based on the hash of the public key. Once an account is created, the public key for the account is set in stone and cannot be changed. This can be a problem for users, as key rotation is a useful security practice, but is not possible currently. Furthermore, as multisigs are a type of pubkey, once a multisig for an account is set, it cannot be updated. This is problematic, as multisigs are often used by organizations or companies, who may need to change their set of multisig signers for internal reasons.
+
+Transferring all the assets of an account to a new account with the updated pubkey is not sufficient, because some "engagements" of an account are not easily transferable. 
For example, in staking, to transfer bonded Atoms, an account would have to unbond all delegations and wait the three-week unbonding period. Even more significantly, for validator operators, ownership over a validator is not transferable at all, meaning that the operator key for a validator can never be updated, leading to poor operational security for validators. + +## Decision + +We propose the addition of a new feature to `x/auth` that allows accounts to update the public key associated with their account, while keeping the address the same. + +This is possible because the Cosmos SDK `BaseAccount` stores the public key for an account in state, instead of making the assumption that the public key is included in the transaction (whether explicitly or implicitly through the signature) as in other blockchains such as Bitcoin and Ethereum. Because the public key is stored on chain, it is okay for the public key to not hash to the address of an account, as the address is not pertinent to the signature checking process. + +To build this system, we design a new Msg type as follows: + +```protobuf +service Msg { + rpc ChangePubKey(MsgChangePubKey) returns (MsgChangePubKeyResponse); +} + +message MsgChangePubKey { + string address = 1; + google.protobuf.Any pub_key = 2; +} + +message MsgChangePubKeyResponse {} +``` + +The MsgChangePubKey transaction needs to be signed by the existing pubkey in state. + +Once approved, the handler for this message type, which takes in the AccountKeeper, will update the in-state pubkey for the account and replace it with the pubkey from the Msg. + +An account that has had its pubkey changed cannot be automatically pruned from state. This is because if pruned, the original pubkey of the account would be needed to recreate the same address, but the owner of the address may not have the original pubkey anymore. 
Currently, we do not automatically prune any accounts anyway, but we would like to keep this option open down the road (this is the purpose of account numbers). To resolve this, we charge an additional gas fee for this operation to compensate for this externality (this additional gas amount is configured as the parameter `PubKeyChangeCost`). The additional gas is charged inside the handler, using the `ConsumeGas` function. Furthermore, in the future, we can allow accounts that have rekeyed to manually prune themselves using a new Msg type such as `MsgDeleteAccount`. Manually pruning accounts can give a gas refund as an incentive for performing the action.
+
+```go
+    amount := ak.GetParams(ctx).PubKeyChangeCost
+    ctx.GasMeter().ConsumeGas(amount, "pubkey change fee")
+```
+
+Every time a key for an address is changed, we will store a log of this change in the state of the chain, thus creating a stack of all previous keys for an address and the time intervals for which they were active. This allows dapps and clients to easily query past keys for an account, which may be useful for features such as verifying timestamped off-chain signed messages.
+
+## Consequences
+
+### Positive
+
+* Will allow users and validator operators to employ better operational security practices with key rotation.
+* Will allow organizations or groups to easily change and add/remove multisig signers.
+
+### Negative
+
+Breaks the current assumed relationship between address and pubkey as H(pubkey) = address. This has a couple of consequences.
+
+* This makes wallets that support this feature more complicated. For example, if an address on-chain was updated, the corresponding key in the CLI wallet also needs to be updated.
+* Cannot automatically prune accounts with 0 balance that have had their pubkey changed. 
+
+### Neutral
+
+* While the purpose of this is intended to allow the owner of an account to update to a new pubkey they own, this could technically also be used to transfer ownership of an account to a new owner. For example, this could be used to sell a staked position without unbonding, or an account that has vesting tokens. However, the friction of this is very high as it would essentially have to be done as a very specific OTC trade. Furthermore, additional constraints could be added to prevent accounts with vesting tokens from using this feature.
+* Will require that PubKeys for an account are included in the genesis exports.
+
+## References
+
+* https://www.algorand.com/resources/blog/announcing-rekeying
+
+
+
+### ADR 035: Rosetta API Support
+
+
+
+# ADR 035: Rosetta API Support
+
+## Authors
+
+* Jonathan Gimeno (@jgimeno)
+* David Grierson (@senormonito)
+* Alessio Treglia (@alessio)
+* Frojdy Dymylja (@fdymylja)
+
+## Changelog
+
+* 2021-05-12: the external library [cosmos-rosetta-gateway](https://github.com/tendermint/cosmos-rosetta-gateway) has been moved within the Cosmos SDK.
+
+## Context
+
+[Rosetta API](https://www.rosetta-api.org/) is an open-source specification and set of tools developed by Coinbase to
+standardise blockchain interactions.
+
+Through the use of a standard API for integrating blockchain applications, it will
+
+* Be easier for a user to interact with a given blockchain
+* Allow exchanges to integrate new blockchains quickly and easily
+* Enable application developers to build cross-blockchain applications such as block explorers, wallets and dApps at
+  considerably lower cost and effort.
+
+## Decision
+
+It is clear that adding Rosetta API support to the Cosmos SDK will bring value to all the developers and
+Cosmos SDK based chains in the ecosystem. How it is implemented is key.
+
+The driving principles of the proposed design are:
+
+1. 
**Extensibility:** it must be as riskless and painless as possible for application developers to set up network
+   configurations to expose Rosetta API-compliant services.
+2. **Long term support:** This proposal aims to provide support for all the Cosmos SDK release series.
+3. **Cost-efficiency:** Backporting changes to Rosetta API specifications from `master` to the various stable
+   branches of the Cosmos SDK is a cost that needs to be reduced.
+
+We will deliver on these principles through the following:
+
+1. There will be a package `rosetta/lib`
+   for the implementation of the core Rosetta API features, particularly:
+   a. The types and interfaces (`Client`, `OfflineClient`...); this separates design from implementation detail.
+   b. The `Server` functionality, as this is independent of the Cosmos SDK version.
+   c. The `Online/OfflineNetwork`, which is not exported, and implements the Rosetta API using the `Client` interface to query the node, build tx and so on.
+   d. The `errors` package to extend Rosetta errors.
+2. Due to differences between the Cosmos release series, each series will have its own specific implementation of the `Client` interface.
+3. There will be two options for starting an API service in applications:
+   a. API shares the application process
+   b. API-specific process.
+
+## Architecture
+
+### The External Repo
+
+This section will describe the proposed external library, including the service implementation, plus the defined types and interfaces.
+
+#### Server
+
+`Server` is a simple `struct` that is started and listens to the port specified in the settings. This is meant to be used across all the Cosmos SDK versions that are actively supported. 
+
+The constructor follows:
+
+`func NewServer(settings Settings) (Server, error)`
+
+`Settings`, which are used to construct a new server, are the following:
+
+```go
+// Settings define the rosetta server settings
+type Settings struct {
+    // Network contains the information regarding the network
+    Network *types.NetworkIdentifier
+    // Client is the online API handler
+    Client crgtypes.Client
+    // Listen is the address the handler will listen at
+    Listen string
+    // Offline defines if the rosetta service should be exposed in offline mode
+    Offline bool
+    // Retries is the number of readiness checks that will be attempted when instantiating the handler
+    // valid only for online API
+    Retries int
+    // RetryWait is the time that will be waited between retries
+    RetryWait time.Duration
+}
+```
+
+#### Types
+
+Package types uses a mixture of Rosetta types and custom-defined type wrappers that the client must parse and return while executing operations.
+
+##### Interfaces
+
+Every SDK version uses a different format to connect (RPC, gRPC, etc.), query and build transactions; we have abstracted this in the `Client` interface.
+The client uses Rosetta types, whilst the `Online/OfflineNetwork` takes care of returning correctly parsed Rosetta responses and errors.
+
+Each Cosmos SDK release series will have its own `Client` implementation.
+Developers can implement their own custom `Client`s as required.
+
+```go
+// Client defines the API the client implementation should provide.
+type Client interface {
+    // Needed if the client needs to perform some action before connecting. 
+    Bootstrap() error
+    // Ready checks if the servicer constraints for queries are satisfied
+    // for example the node might still not be ready, it's useful in process
+    // when the rosetta instance might come up before the node itself
+    // the servicer must return nil if the node is ready
+    Ready() error
+
+    // Data API
+
+    // Balances fetches the balance of the given address;
+    // if height is not nil, then the balance will be displayed
+    // at the provided height, otherwise the last block balance will be returned
+    Balances(ctx context.Context, addr string, height *int64) ([]*types.Amount, error)
+    // BlockByHash gets a block and its transactions given the block hash
+    BlockByHash(ctx context.Context, hash string) (BlockResponse, error)
+    // BlockByHeight gets a block given its height; if height is nil then the last block is returned
+    BlockByHeight(ctx context.Context, height *int64) (BlockResponse, error)
+    // BlockTransactionsByHash gets the block, parent block and transactions
+    // given the block hash.
+    BlockTransactionsByHash(ctx context.Context, hash string) (BlockTransactionsResponse, error)
+    // BlockTransactionsByHeight gets the block, parent block and transactions
+    // given the block height.
+    BlockTransactionsByHeight(ctx context.Context, height *int64) (BlockTransactionsResponse, error)
+    // GetTx gets a transaction given its hash
+    GetTx(ctx context.Context, hash string) (*types.Transaction, error)
+    // GetUnconfirmedTx gets an unconfirmed Tx given its hash
+    // NOTE(fdymylja): NOT IMPLEMENTED YET! 
+    GetUnconfirmedTx(ctx context.Context, hash string) (*types.Transaction, error)
+    // Mempool returns the list of the current unconfirmed transactions
+    Mempool(ctx context.Context) ([]*types.TransactionIdentifier, error)
+    // Peers gets the peers currently connected to the node
+    Peers(ctx context.Context) ([]*types.Peer, error)
+    // Status returns the node status, such as sync data, version etc
+    Status(ctx context.Context) (*types.SyncStatus, error)
+
+    // Construction API
+
+    // PostTx posts txBytes to the node and returns the transaction identifier plus metadata related
+    // to the transaction itself.
+    PostTx(txBytes []byte) (res *types.TransactionIdentifier, meta map[string]interface{}, err error)
+    // ConstructionMetadataFromOptions
+    ConstructionMetadataFromOptions(ctx context.Context, options map[string]interface{}) (meta map[string]interface{}, err error)
+    OfflineClient
+}
+
+// OfflineClient defines the functionalities supported without having access to the node
+type OfflineClient interface {
+    NetworkInformationProvider
+    // SignedTx returns the signed transaction given the tx bytes (msgs) plus the signatures
+    SignedTx(ctx context.Context, txBytes []byte, sigs []*types.Signature) (signedTxBytes []byte, err error)
+    // TxOperationsAndSignersAccountIdentifiers returns the operations related to a transaction and the account
+    // identifiers if the transaction is signed
+    TxOperationsAndSignersAccountIdentifiers(signed bool, hexBytes []byte) (ops []*types.Operation, signers []*types.AccountIdentifier, err error)
+    // ConstructionPayload returns the construction payload given the request
+    ConstructionPayload(ctx context.Context, req *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error)
+    // PreprocessOperationsToOptions returns the options given the preprocess operations
+    PreprocessOperationsToOptions(ctx context.Context, req *types.ConstructionPreprocessRequest) (options map[string]interface{}, err error)
+    // 
AccountIdentifierFromPublicKey returns the account identifier given the public key
+    AccountIdentifierFromPublicKey(pubKey *types.PublicKey) (*types.AccountIdentifier, error)
+}
+```
+
+### 2. Cosmos SDK Implementation
+
+The Cosmos SDK implementation, based on version, takes care of satisfying the `Client` interface.
+In Stargate, Launchpad and 0.37, we have introduced the concept of rosetta.Msg; this message is not in the shared repository as the sdk.Msg type differs between Cosmos SDK versions.
+
+The rosetta.Msg interface follows:
+
+```go
+// Msg represents a cosmos-sdk message that can be converted from and to a rosetta operation.
+type Msg interface {
+    sdk.Msg
+    ToOperations(withStatus, hasError bool) []*types.Operation
+    FromOperations(ops []*types.Operation) (sdk.Msg, error)
+}
+```
+
+Hence developers who want to extend the Rosetta set of supported operations just need to extend their module's sdk.Msgs with the `ToOperations` and `FromOperations` methods.
+
+### 3. API service invocation
+
+As stated at the start, application developers will have two methods for invocation of the Rosetta API service:
+
+1. Shared process for both application and API
+2. Standalone API service
+
+#### Shared Process (Only Stargate)
+
+The Rosetta API service could run within the same execution process as the application. This would be enabled via app.toml settings, and if gRPC is not enabled the Rosetta instance would be spun up in offline mode (tx building capabilities only).
+
+#### Separate API service
+
+Client application developers can write a new command to launch a Rosetta API server as a separate process too, using the rosetta command contained in the `/server/rosetta` package. Construction of the command depends on the Cosmos SDK version. Examples can be found inside `simd` for Stargate, and `contrib/rosetta/simapp` for other release series.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Out-of-the-box Rosetta API support within the Cosmos SDK. 
+* Blockchain interface standardisation
+
+## References
+
+* https://www.rosetta-api.org/
+
+
+
+### ADR 036: Arbitrary Message Signature Specification
+
+
+
+# ADR 036: Arbitrary Message Signature Specification
+
+## Changelog
+
+* 28/10/2020 - Initial draft
+
+## Authors
+
+* Antoine Herzog (@antoineherzog)
+* Zaki Manian (@zmanian)
+* Aleksandr Bezobchuk (alexanderbez) [1]
+* Frojdi Dymylja (@fdymylja)
+
+## Status
+
+Draft
+
+## Abstract
+
+Currently, in the Cosmos SDK, there is no convention to sign arbitrary messages like in Ethereum. We propose with this specification, for the Cosmos SDK ecosystem, a way to sign and validate off-chain arbitrary messages.
+
+This specification serves the purpose of covering every use case; this means that Cosmos SDK application developers decide how to serialize and represent `Data` to users.
+
+## Context
+
+Having the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits, such as saving on computational costs and reducing on-chain transaction overhead. Within the context of the Cosmos, some of the major applications of signing such data include, but are not limited to, providing a cryptographically secure and verifiable means of proving validator identity and possibly associating it with some other framework or organization. Another major application is the ability to sign Cosmos messages with a Ledger or similar HSM device.
+
+Further context and use cases can be found in the reference links.
+
+## Decision
+
+The aim is to be able to sign arbitrary messages, even using Ledger or similar HSM devices.
+
+As a result, signed messages should look roughly like Cosmos SDK messages but **must not** be a valid on-chain transaction. `chain-id`, `account_number` and `sequence` can all be assigned invalid values.
+
+Cosmos SDK 0.40 also introduces the concept of `auth_info`, which can specify SIGN_MODEs. 
+ +A spec should include an `auth_info` that supports SIGN_MODE_DIRECT and SIGN_MODE_LEGACY_AMINO. + +To create the `offchain` proto definitions, we extend the auth module with an `offchain` package that offers functionality to verify and sign offline messages. + +An offchain transaction follows these rules: + +* the memo must be empty +* the nonce/sequence number must be equal to 0 +* the chain-id must be equal to “” +* the fee gas must be equal to 0 +* the fee amount must be an empty array + +Verification of an offchain transaction follows the same rules as an onchain one, except for the spec differences highlighted above. + +The first message added to the `offchain` package is `MsgSignData`. + +`MsgSignData` allows developers to sign arbitrary bytes that are validatable offchain only. `Signer` is the account address of the signer. `Data` is arbitrary bytes which can represent `text`, `files`, or `object`s. It is the application developer's decision how `Data` should be serialized and deserialized, and which object it represents in their context.
+ +Proto definition: + +```protobuf +// MsgSignData defines an arbitrary, general-purpose, off-chain message +message MsgSignData { + // Signer is the sdk.AccAddress of the message signer + bytes Signer = 1 [(gogoproto.jsontag) = "signer", (gogoproto.casttype) = "github.com/cosmos/cosmos-sdk/types.AccAddress"]; + // Data represents the raw bytes of the content that is signed (text, json, etc) + bytes Data = 2 [(gogoproto.jsontag) = "data"]; +} +``` + +Signed MsgSignData json example: + +```json +{ + "type": "cosmos-sdk/StdTx", + "value": { + "msg": [ + { + "type": "sign/MsgSignData", + "value": { + "signer": "cosmos1hftz5ugqmpg9243xeegsqqav62f8hnywsjr4xr", + "data": "cmFuZG9t" + } + } + ], + "fee": { + "amount": [], + "gas": "0" + }, + "signatures": [ + { + "pub_key": { + "type": "tendermint/PubKeySecp256k1", + "value": "AqnDSiRoFmTPfq97xxEb2VkQ/Hm28cPsqsZm9jEVsYK9" + }, + "signature": "8y8i34qJakkjse9pOD2De+dnlc4KvFgh0wQpes4eydN66D9kv7cmCEouRrkka9tlW9cAkIL52ErB+6ye7X5aEg==" + } + ], + "memo": "" + } +} +``` + +## Consequences + +There is a specification on how messages, that are not meant to be broadcast to a live chain, should be formed. + +### Backwards Compatibility + +Backwards compatibility is maintained as this is a new message spec definition. + +### Positive + +* A common format that can be used by multiple applications to sign and verify off-chain messages. +* The specification is primitive which means it can cover every use case without limiting what is possible to fit inside it. +* It gives room for other off-chain messages specifications that aim to target more specific and common use cases such as off-chain-based authN/authZ layers [2]. + +### Negative + +* The current proposal requires a fixed relationship between an account address and a public key. +* Doesn't work with multisig accounts. 
+ +## Further discussion + +* Regarding security in `MsgSignData`, the developer using `MsgSignData` is in charge of making the content contained in `Data` non-replayable when, and if, needed. +* The offchain package will be further extended with extra messages that target specific use cases such as, but not limited to, authentication in applications, payment channels, and L2 solutions in general. + +## References + +1. https://github.com/cosmos/ics/pull/33 +2. https://github.com/cosmos/cosmos-sdk/pull/7727#discussion_r515668204 +3. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-722478477 +4. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-721062923 + + + +### ADR 037: Governance split votes + + + +# ADR 037: Governance split votes + +## Changelog + +* 2020/10/28: Initial draft + +## Status + +Accepted + +## Abstract + +This ADR defines a modification to the governance module that would allow a staker to split their votes into several voting options. For example, a staker could use 70% of its voting power to vote Yes and 30% of its voting power to vote No. + +## Context + +Currently, an address can cast a vote with only one option (Yes/No/Abstain/NoWithVeto) and use their full voting power behind that choice. + +However, oftentimes the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, and so it makes sense to allow them to split their voting power. Another example use case is exchanges. Centralized exchanges often stake a portion of their users' tokens in their custody. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.
+ +## Decision + +We modify the vote structs to be + +```go +type WeightedVoteOption struct { + Option string + Weight sdk.Dec +} + +type Vote struct { + ProposalID int64 + Voter sdk.Address + Options []WeightedVoteOption +} +``` + +And for backwards compatibility, we introduce `MsgVoteWeighted` while keeping `MsgVote`. + +```go +type MsgVote struct { + ProposalID int64 + Voter sdk.Address + Option Option +} + +type MsgVoteWeighted struct { + ProposalID int64 + Voter sdk.Address + Options []WeightedVoteOption +} +``` + +The `ValidateBasic` of a `MsgVoteWeighted` struct would require that + +1. The sum of all the weights is equal to 1.0 +2. No Option is repeated + +The governance tally function will iterate over all the options in a vote and add to the tally the result of the voter's voting power multiplied by the weight for that option. + +```go +tally() { + results := make(map[types.VoteOption]sdk.Dec) + + for _, vote := range votes { + for _, weightedOption := range vote.Options { + results[weightedOption.Option] += getVotingPower(vote.voter) * weightedOption.Weight + } + } +} +``` + +The CLI command for creating a multi-option vote would be as such: + +```shell +simd tx gov vote 1 "yes=0.6,no=0.3,abstain=0.05,no_with_veto=0.05" --from mykey +``` + +To create a single-option vote a user can do either + +```shell +simd tx gov vote 1 "yes=1" --from mykey +``` + +or + +```shell +simd tx gov vote 1 yes --from mykey +``` + +to maintain backwards compatibility. + +## Consequences + +### Backwards Compatibility + +* Previous VoteMsg types will remain the same and so clients will not have to update their procedure unless they want to support the WeightedVoteMsg feature. +* When querying a Vote struct from state, its structure will be different, and so clients wanting to display all voters and their respective votes will have to handle the new format and the fact that a single voter can have split votes. +* The result of querying the tally function should have the same API for clients.
+ +### Positive + +* Can make the voting process more accurate for addresses representing multiple stakeholders, often some of the largest addresses. + +### Negative + +* Is more complex than simple voting, and so may be harder to explain to users. However, this is mostly mitigated because the feature is opt-in. + +### Neutral + +* Relatively minor change to governance tally function. + + + +### ADR 038: KVStore state listening + + + +# ADR 038: KVStore state listening + +## Changelog + +* 11/23/2020: Initial draft +* 10/06/2022: Introduce plugin system based on hashicorp/go-plugin +* 10/14/2022: + * Add `ListenCommit`, flatten the state writes in a block to a single batch. + * Remove listeners from cache stores; only `rootmulti.Store` should be listened to. + * Remove `HaltAppOnDeliveryError()`; errors are propagated by default, and implementations should return nil if they don't want to propagate errors. +* 26/05/2023: Update with ABCI 2.0 + +## Status + +Proposed + +## Abstract + +This ADR defines a set of changes to enable listening to state changes of individual KVStores and exposing this data to consumers. + +## Context + +Currently, KVStore data can be remotely accessed through [Queries](https://github.com/cosmos/cosmos-sdk/blob/master/docs/building-modules/messages-and-queries.md#queries) +which proceed either through Tendermint and the ABCI, or through the gRPC server. +In addition to these request/response queries, it would be beneficial to have a means of listening to state changes as they occur in real time. + +## Decision + +We will modify the `CommitMultiStore` interface and its concrete (`rootmulti`) implementations and introduce a new `listenkv.Store` to allow listening to state changes in underlying KVStores. We don't need to listen to cache stores, because we can't be sure that their writes will ever be committed, and any committed writes are eventually duplicated in `rootmulti.Store`, so we should only listen to `rootmulti.Store`.
+We will introduce a plugin system for configuring and running streaming services that write these state changes and their surrounding ABCI message context to different destinations. + +### Listening + +In a new file, `store/types/listening.go`, we will create a `MemoryListener` struct for streaming out protobuf-encoded KV pair state changes from a KVStore. +The `MemoryListener` will be used internally by the concrete `rootmulti` implementation to collect state changes from KVStores. + +```go +// MemoryListener listens to the state writes and accumulates the records in memory. +type MemoryListener struct { + stateCache []StoreKVPair +} + +// NewMemoryListener creates a listener that accumulates the state writes in memory. +func NewMemoryListener() *MemoryListener { + return &MemoryListener{} +} + +// OnWrite appends a state change event to the internal cache +func (fl *MemoryListener) OnWrite(storeKey StoreKey, key []byte, value []byte, delete bool) { + fl.stateCache = append(fl.stateCache, StoreKVPair{ + StoreKey: storeKey.Name(), + Delete: delete, + Key: key, + Value: value, + }) +} + +// PopStateCache returns the current state cache and resets it to nil +func (fl *MemoryListener) PopStateCache() []StoreKVPair { + res := fl.stateCache + fl.stateCache = nil + return res +} +``` + +We will also define a protobuf type for the KV pairs. In addition to the key and value fields, this message +will include the StoreKey for the originating KVStore so that we can collect information from separate KVStores and determine the source of each KV pair.
+ +```protobuf +message StoreKVPair { + optional string store_key = 1; // the store key for the KVStore this pair originates from + required bool set = 2; // true indicates a set operation, false indicates a delete operation + required bytes key = 3; + required bytes value = 4; +} +``` + +### ListenKVStore + +We will create a new `Store` type `listenkv.Store` that the `rootmulti` store will use to wrap a `KVStore` to enable state listening. +We will configure the `Store` with a `MemoryListener`, which will collect state changes for output to specific destinations. + +```go +// Store implements the KVStore interface with listening enabled. +// Operations are traced on each core KVStore call and written to any of the +// underlying listeners with the proper key and operation permissions +type Store struct { + parent types.KVStore + listener *types.MemoryListener + parentStoreKey types.StoreKey +} + +// NewStore returns a reference to a new listenkv.Store given a parent +// KVStore implementation and a MemoryListener. +func NewStore(parent types.KVStore, psk types.StoreKey, listener *types.MemoryListener) *Store { + return &Store{parent: parent, listener: listener, parentStoreKey: psk} +} + +// Set implements the KVStore interface. It traces a write operation and +// delegates the Set call to the parent KVStore. +func (s *Store) Set(key []byte, value []byte) { + types.AssertValidKey(key) + s.parent.Set(key, value) + s.listener.OnWrite(s.parentStoreKey, key, value, false) +} + +// Delete implements the KVStore interface. It traces a write operation and +// delegates the Delete call to the parent KVStore. +func (s *Store) Delete(key []byte) { + s.parent.Delete(key) + s.listener.OnWrite(s.parentStoreKey, key, nil, true) +} +``` + +### MultiStore interface updates + +We will update the `CommitMultiStore` interface to allow us to wrap a `MemoryListener` around a specific `KVStore`.
+Note that the `MemoryListener` will be attached internally by the concrete `rootmulti` implementation. + +```go +type CommitMultiStore interface { + ... + + // AddListeners adds a listener for the KVStore belonging to the provided StoreKey + AddListeners(keys []StoreKey) + + // PopStateCache returns the accumulated state change messages from MemoryListener + PopStateCache() []StoreKVPair +} +``` + + +### MultiStore implementation updates + +We will adjust the `rootmulti` `GetKVStore` method to wrap the returned `KVStore` with a `listenkv.Store` if listening is turned on for that `Store`. + +```go +func (rs *Store) GetKVStore(key types.StoreKey) types.KVStore { + store := rs.stores[key].(types.KVStore) + + if rs.TracingEnabled() { + store = tracekv.NewStore(store, rs.traceWriter, rs.traceContext) + } + if rs.ListeningEnabled(key) { + store = listenkv.NewStore(store, key, rs.listeners[key]) + } + + return store +} +``` + +We will implement `AddListeners` to manage KVStore listeners internally and implement `PopStateCache` +for a means of retrieving the current state. + +```go +// AddListeners adds state change listener for a specific KVStore +func (rs *Store) AddListeners(keys []types.StoreKey) { + listener := types.NewMemoryListener() + for i := range keys { + rs.listeners[keys[i]] = listener + } +} +``` + +```go +func (rs *Store) PopStateCache() []types.StoreKVPair { + var cache []types.StoreKVPair + for _, ls := range rs.listeners { + cache = append(cache, ls.PopStateCache()...) + } + sort.SliceStable(cache, func(i, j int) bool { + return cache[i].StoreKey < cache[j].StoreKey + }) + return cache +} +``` + +We will also adjust the `rootmulti` `CacheMultiStore` and `CacheMultiStoreWithVersion` methods to enable listening in +the cache layer. 
+ +```go +func (rs *Store) CacheMultiStore() types.CacheMultiStore { + stores := make(map[types.StoreKey]types.CacheWrapper) + for k, v := range rs.stores { + store := v.(types.KVStore) + // Wire the listenkv.Store to allow listeners to observe the writes from the cache store; + // setting the same listeners on the cache store would observe duplicated writes. + if rs.ListeningEnabled(k) { + store = listenkv.NewStore(store, k, rs.listeners[k]) + } + stores[k] = store + } + return cachemulti.NewStore(rs.db, stores, rs.keysByName, rs.traceWriter, rs.getTracingContext()) +} +``` + +```go +func (rs *Store) CacheMultiStoreWithVersion(version int64) (types.CacheMultiStore, error) { + // ... + + // Wire the listenkv.Store to allow listeners to observe the writes from the cache store; + // setting the same listeners on the cache store would observe duplicated writes. + if rs.ListeningEnabled(key) { + cacheStore = listenkv.NewStore(cacheStore, key, rs.listeners[key]) + } + + cachedStores[key] = cacheStore + } + + return cachemulti.NewStore(rs.db, cachedStores, rs.keysByName, rs.traceWriter, rs.getTracingContext()), nil +} +``` + +### Exposing the data + +#### Streaming Service + +We will introduce a new `ABCIListener` interface that plugs into the BaseApp and relays ABCI requests and responses +so that the service can group the state changes with the ABCI requests. + +```go +// baseapp/streaming.go + +// ABCIListener is the interface that we're exposing as a streaming service.
+type ABCIListener interface { + // ListenFinalizeBlock updates the streaming service with the latest FinalizeBlock messages + ListenFinalizeBlock(ctx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error + // ListenCommit updates the streaming service with the latest Commit messages and state changes + ListenCommit(ctx context.Context, res abci.ResponseCommit, changeSet []*StoreKVPair) error +} +``` + +#### BaseApp Registration + +We will add a new method to the `BaseApp` to enable the registration of `StreamingService`s: + +```go +// SetStreamingService is used to set a streaming service into the BaseApp hooks and load the listeners into the multistore +func (app *BaseApp) SetStreamingService(s ABCIListener) { + // register the StreamingService within the BaseApp + // BaseApp will pass FinalizeBlock and Commit requests and responses to the streaming services to update their ABCI context + app.abciListeners = append(app.abciListeners, s) +} +``` + +We will add two new fields to the `BaseApp` struct: + +```go +type BaseApp struct { + + ... + + // abciListenersAsync for determining if abciListeners will run asynchronously. + // When abciListenersAsync=false and stopNodeOnABCIListenerErr=false listeners will run synchronized but will not stop the node. + // When abciListenersAsync=true stopNodeOnABCIListenerErr will be ignored. + abciListenersAsync bool + + // stopNodeOnABCIListenerErr halts the node when ABCI streaming service listening results in an error. + // stopNodeOnABCIListenerErr=true must be paired with abciListenersAsync=false. + stopNodeOnABCIListenerErr bool +} +``` + +#### ABCI Event Hooks + +We will modify the `FinalizeBlock` and `Commit` methods to pass ABCI requests and responses +to any streaming service hooks registered with the `BaseApp`.
+ +```go +func (app *BaseApp) FinalizeBlock(req abci.RequestFinalizeBlock) abci.ResponseFinalizeBlock { + + var abciRes abci.ResponseFinalizeBlock + defer func() { + // call the streaming service hook with the FinalizeBlock messages + for _, abciListener := range app.abciListeners { + ctx := app.finalizeState.ctx + blockHeight := ctx.BlockHeight() + if app.abciListenersAsync { + go func(listener ABCIListener, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) { + if err := listener.ListenFinalizeBlock(ctx, req, res); err != nil { + app.logger.Error("FinalizeBlock listening hook failed", "height", blockHeight, "err", err) + } + }(abciListener, req, abciRes) + } else { + if err := abciListener.ListenFinalizeBlock(ctx, req, abciRes); err != nil { + app.logger.Error("FinalizeBlock listening hook failed", "height", blockHeight, "err", err) + if app.stopNodeOnABCIListenerErr { + os.Exit(1) + } + } + } + } + }() + + ... + + return abciRes +} +``` + +```go +func (app *BaseApp) Commit() abci.ResponseCommit { + + ... + + res := abci.ResponseCommit{ + Data: commitID.Hash, + RetainHeight: retainHeight, + } + + // call the streaming service hook with the Commit messages + for _, abciListener := range app.abciListeners { + ctx := app.deliverState.ctx + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + if app.abciListenersAsync { + go func(listener ABCIListener, res abci.ResponseCommit, changeSet []store.StoreKVPair) { + if err := listener.ListenCommit(ctx, res, changeSet); err != nil { + app.logger.Error("ListenCommit listening hook failed", "height", blockHeight, "err", err) + } + }(abciListener, res, changeSet) + } else { + if err := abciListener.ListenCommit(ctx, res, changeSet); err != nil { + app.logger.Error("ListenCommit listening hook failed", "height", blockHeight, "err", err) + if app.stopNodeOnABCIListenerErr { + os.Exit(1) + } + } + } + } + + ...
+ + return res +} +``` + +#### Go Plugin System + +We propose a plugin architecture to load and run `Streaming` plugins and other types of implementations. We will introduce a plugin +system over gRPC that is used to load and run Cosmos-SDK plugins. The plugin system uses [hashicorp/go-plugin](https://github.com/hashicorp/go-plugin). +Each plugin must have a struct that implements the `plugin.Plugin` interface and an `Impl` interface for processing messages over gRPC. +Each plugin must also have a message protocol defined for the gRPC service: + +```go +// streaming/plugins/abci/{plugin_version}/interface.go + +// Handshake is a common handshake that is shared by streaming and host. +// This prevents users from executing bad plugins or executing a plugin +// directory. It is a UX feature, not a security feature. +var Handshake = plugin.HandshakeConfig{ + ProtocolVersion: 1, + MagicCookieKey: "ABCI_LISTENER_PLUGIN", + MagicCookieValue: "ef78114d-7bdf-411c-868f-347c99a78345", +} + +// ABCIListenerGRPCPlugin is the base struct for all kinds of go-plugin implementations +// It will be included in interfaces of different Plugins +type ABCIListenerGRPCPlugin struct { + // GRPCPlugin must still implement the Plugin interface + plugin.Plugin + // Concrete implementation, written in Go. This is only used for plugins + // that are written in Go. + Impl baseapp.ABCIListener +} + +func (p *ABCIListenerGRPCPlugin) GRPCServer(_ *plugin.GRPCBroker, s *grpc.Server) error { + RegisterABCIListenerServiceServer(s, &GRPCServer{Impl: p.Impl}) + return nil +} + +func (p *ABCIListenerGRPCPlugin) GRPCClient( + _ context.Context, + _ *plugin.GRPCBroker, + c *grpc.ClientConn, +) (interface{}, error) { + return &GRPCClient{client: NewABCIListenerServiceClient(c)}, nil +} +``` + +The `plugin.Plugin` interface has two methods, `Client` and `Server`. For our gRPC service these are `GRPCClient` and `GRPCServer`. +The `Impl` field holds the concrete implementation of our `baseapp.ABCIListener` interface written in Go.
+Note: this is only used for plugin implementations written in Go. + +The advantage of having such a plugin system is that, within each plugin, authors can define the message protocol in a way that fits their use case. +For example, when state change listening is desired, the `ABCIListener` message protocol can be defined as below (*for illustrative purposes only*). +When state change listening is not desired, then `ListenCommit` can be omitted from the protocol. + +```protobuf +syntax = "proto3"; + +... + +message Empty {} + +message ListenFinalizeBlockRequest { + RequestFinalizeBlock req = 1; + ResponseFinalizeBlock res = 2; +} +message ListenCommitRequest { + int64 block_height = 1; + ResponseCommit res = 2; + repeated StoreKVPair changeSet = 3; +} + +// plugin that listens to state changes +service ABCIListenerService { + rpc ListenFinalizeBlock(ListenFinalizeBlockRequest) returns (Empty); + rpc ListenCommit(ListenCommitRequest) returns (Empty); +} +``` + +```protobuf +... +// plugin that doesn't listen to state changes +service ABCIListenerService { + rpc ListenFinalizeBlock(ListenFinalizeBlockRequest) returns (Empty); +} +``` + +Implementing the service above: + +```go +// streaming/plugins/abci/{plugin_version}/grpc.go + +var ( + _ baseapp.ABCIListener = (*GRPCClient)(nil) +) + +// GRPCClient is an implementation of the ABCIListener and ABCIListenerPlugin interfaces that talks over RPC.
+type GRPCClient struct { + client ABCIListenerServiceClient +} + +func (m *GRPCClient) ListenFinalizeBlock(goCtx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error { + ctx := sdk.UnwrapSDKContext(goCtx) + _, err := m.client.ListenFinalizeBlock(ctx, &ListenFinalizeBlockRequest{Req: req, Res: res}) + return err +} + +func (m *GRPCClient) ListenCommit(goCtx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error { + ctx := sdk.UnwrapSDKContext(goCtx) + _, err := m.client.ListenCommit(ctx, &ListenCommitRequest{BlockHeight: ctx.BlockHeight(), Res: res, ChangeSet: changeSet}) + return err +} + +// GRPCServer is the gRPC server that GRPCClient talks to. +type GRPCServer struct { + // This is the real implementation + Impl baseapp.ABCIListener +} + +func (m *GRPCServer) ListenFinalizeBlock(ctx context.Context, req *ListenFinalizeBlockRequest) (*Empty, error) { + return &Empty{}, m.Impl.ListenFinalizeBlock(ctx, req.Req, req.Res) +} + +func (m *GRPCServer) ListenCommit(ctx context.Context, req *ListenCommitRequest) (*Empty, error) { + return &Empty{}, m.Impl.ListenCommit(ctx, req.Res, req.ChangeSet) +} + +``` + +And the pre-compiled Go plugin `Impl` (*this is only used for plugins that are written in Go*): + +```go +// streaming/plugins/abci/{plugin_version}/impl/plugin.go + +// Plugins are pre-compiled and loaded by the plugin system + +// ABCIListener is the implementation of the baseapp.ABCIListener interface +type ABCIListener struct{} + +func (m *ABCIListener) ListenFinalizeBlock(ctx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error { + // send data to external system + return nil +} + +func (m *ABCIListener) ListenCommit(ctx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error { + // send data to external system + return nil +} + +func main() { + plugin.Serve(&plugin.ServeConfig{ + HandshakeConfig: grpc_abci_v1.Handshake, + Plugins: 
 map[string]plugin.Plugin{ + "grpc_plugin_v1": &grpc_abci_v1.ABCIListenerGRPCPlugin{Impl: &ABCIListener{}}, + }, + + // A non-nil value here enables gRPC serving for this streaming... + GRPCServer: plugin.DefaultGRPCServer, + }) +} +``` + +We will introduce a plugin loading system that will return `(interface{}, error)`. +This provides the advantage of using versioned plugins where the plugin interface and gRPC protocol change over time. +In addition, it allows for building independent plugins that can expose different parts of the system over gRPC. + +```go +func NewStreamingPlugin(name string, logLevel string) (interface{}, error) { + logger := hclog.New(&hclog.LoggerOptions{ + Output: hclog.DefaultOutput, + Level: toHclogLevel(logLevel), + Name: fmt.Sprintf("plugin.%s", name), + }) + + // We're a host. Start by launching the streaming process. + env := os.Getenv(GetPluginEnvKey(name)) + client := plugin.NewClient(&plugin.ClientConfig{ + HandshakeConfig: HandshakeMap[name], + Plugins: PluginMap, + Cmd: exec.Command("sh", "-c", env), + Logger: logger, + AllowedProtocols: []plugin.Protocol{ + plugin.ProtocolNetRPC, plugin.ProtocolGRPC}, + }) + + // Connect via RPC + rpcClient, err := client.Client() + if err != nil { + return nil, err + } + + // Request streaming plugin + return rpcClient.Dispense(name) +} + +``` + +We propose a `RegisterStreamingPlugin` function for the App to register `NewStreamingPlugin`s with the App's BaseApp. +Streaming plugins can be of `Any` type; therefore, the function takes in an interface rather than a concrete type. +For example, we could have plugins of `ABCIListener`, `WasmListener` or `IBCListener`. Note that the `RegisterStreamingPlugin` function +is a helper function and not a requirement. Plugin registration can easily be moved from the App to the BaseApp directly. + +```go +// baseapp/streaming.go + +// RegisterStreamingPlugin registers streaming plugins with the App. +// This method returns an error if a plugin is not supported.
+func RegisterStreamingPlugin( + bApp *BaseApp, + appOpts servertypes.AppOptions, + keys map[string]*types.KVStoreKey, + streamingPlugin interface{}, +) error { + switch t := streamingPlugin.(type) { + case ABCIListener: + registerABCIListenerPlugin(bApp, appOpts, keys, t) + default: + return fmt.Errorf("unexpected plugin type %T", t) + } + return nil +} +``` + +```go +func registerABCIListenerPlugin( + bApp *BaseApp, + appOpts servertypes.AppOptions, + keys map[string]*store.KVStoreKey, + abciListener ABCIListener, +) { + asyncKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIAsync) + async := cast.ToBool(appOpts.Get(asyncKey)) + stopNodeOnErrKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIStopNodeOnErrTomlKey) + stopNodeOnErr := cast.ToBool(appOpts.Get(stopNodeOnErrKey)) + keysKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIKeysTomlKey) + exposeKeysStr := cast.ToStringSlice(appOpts.Get(keysKey)) + exposedKeys := exposeStoreKeys(exposeKeysStr, keys) + bApp.cms.AddListeners(exposedKeys) + bApp.SetStreamingManager( + storetypes.StreamingManager{ + ABCIListeners: []storetypes.ABCIListener{abciListener}, + StopNodeOnErr: stopNodeOnErr, + }, + ) +} +``` + +```go +func exposeAll(list []string) bool { + for _, ele := range list { + if ele == "*" { + return true + } + } + return false +} + +func exposeStoreKeys(keysStr []string, keys map[string]*types.KVStoreKey) []types.StoreKey { + var exposeStoreKeys []types.StoreKey + if exposeAll(keysStr) { + exposeStoreKeys = make([]types.StoreKey, 0, len(keys)) + for _, storeKey := range keys { + exposeStoreKeys = append(exposeStoreKeys, storeKey) + } + } else { + exposeStoreKeys = make([]types.StoreKey, 0, len(keysStr)) + for _, keyStr := range keysStr { + if storeKey, ok := keys[keyStr]; ok { + exposeStoreKeys = append(exposeStoreKeys, storeKey) + } + } + } + // sort storeKeys for deterministic output + 
sort.SliceStable(exposeStoreKeys, func(i, j int) bool { + return exposeStoreKeys[i].Name() < exposeStoreKeys[j].Name() + }) + + return exposeStoreKeys +} +``` + +The `NewStreamingPlugin` and `RegisterStreamingPlugin` functions are used to register a plugin with the App's BaseApp. + +e.g. in `NewSimApp`: + +```go +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + + ... + + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, ibchost.StoreKey, upgradetypes.StoreKey, + evidencetypes.StoreKey, ibctransfertypes.StoreKey, capabilitytypes.StoreKey, + ) + + ... + + // register streaming services + streamingCfg := cast.ToStringMap(appOpts.Get(baseapp.StreamingTomlKey)) + for service := range streamingCfg { + pluginKey := fmt.Sprintf("%s.%s.%s", baseapp.StreamingTomlKey, service, baseapp.StreamingPluginTomlKey) + pluginName := strings.TrimSpace(cast.ToString(appOpts.Get(pluginKey))) + if len(pluginName) > 0 { + logLevel := cast.ToString(appOpts.Get(flags.FlagLogLevel)) + plugin, err := streaming.NewStreamingPlugin(pluginName, logLevel) + if err != nil { + tmos.Exit(err.Error()) + } + if err := baseapp.RegisterStreamingPlugin(bApp, appOpts, keys, plugin); err != nil { + tmos.Exit(err.Error()) + } + } + } + + return app +``` + +#### Configuration + +The plugin system will be configured within an App's TOML configuration files. + +```toml +# gRPC streaming +[streaming] + +# ABCI streaming service +[streaming.abci] + +# The plugin version to use for ABCI listening +plugin = "abci_v1" + +# List of kv store keys to listen to for state changes. +# Set to ["*"] to expose all keys. +keys = ["*"] + +# Enable abciListeners to run asynchronously. 
+# When abciListenersAsync=false and stopNodeOnABCIListenerErr=false listeners will run synchronized but will not stop the node. +# When abciListenersAsync=true stopNodeOnABCIListenerErr will be ignored. +async = false + +# Whether to stop the node on message delivery error. +stop-node-on-err = true +``` + +There will be four parameters for configuring the `ABCIListener` plugin: `streaming.abci.plugin`, `streaming.abci.keys`, `streaming.abci.async` and `streaming.abci.stop-node-on-err`. +`streaming.abci.plugin` is the name of the plugin we want to use for streaming, `streaming.abci.keys` is a set of store keys for stores it listens to, +`streaming.abci.async` is a bool enabling asynchronous listening, and `streaming.abci.stop-node-on-err` is a bool that stops the node when true and when operating +in synchronized mode (`streaming.abci.async=false`). Note that `streaming.abci.stop-node-on-err=true` will be ignored if `streaming.abci.async=true`. + +The configuration above supports additional streaming plugins: add the plugin to the `[streaming]` configuration section +and register it with the `RegisterStreamingPlugin` helper function. + +Note that each plugin must include the `streaming.{service}.plugin` property, as it is required for the lookup and registration of the plugin +with the App. All other properties are unique to the individual services. + +#### Encoding and decoding streams + +ADR-038 introduces the interfaces and types for streaming state changes out from KVStores, associating this +data with the related ABCI requests and responses, and registering a service for consuming this data and streaming it to some destination in a final format. +Instead of prescribing a final data format in this ADR, it is left to a specific plugin implementation to define and document this format. +We take this approach because flexibility in the final format is necessary to support a wide range of streaming service plugins.
For example, +the data format for a streaming service that writes the data out to a set of files will differ from the data format that is written to a Kafka topic. + +## Consequences + +These changes will provide a means of subscribing to KVStore state changes in real time. + +### Backwards Compatibility + +* This ADR changes the `CommitMultiStore` interface; implementations supporting the previous version of this interface will not support the new one + +### Positive + +* Ability to listen to KVStore state changes in real time and expose these events to external consumers + +### Negative + +* Changes `CommitMultiStore` interface and its implementations + +### Neutral + +* Introduces additional (but optional) complexity to configuring and running a Cosmos application +* If an application developer opts to use these features to expose data, they need to be aware of the ramifications/risks of that data exposure as it pertains to the specifics of their application + + + +### ADR 039: Epoched Staking + + + +# ADR 039: Epoched Staking + +## Changelog + +* 10-Feb-2021: Initial Draft + +## Authors + +* Dev Ojha (@valardragon) +* Sunny Aggarwal (@sunnya97) + +## Status + +Proposed + +## Abstract + +This ADR updates the proof of stake module to buffer the staking weight updates for a number of blocks before updating the consensus' staking weights. The length of the buffer is dubbed an epoch. The prior functionality of the staking module is then a special case of the abstracted module, with the epoch being set to 1 block. + +## Context + +The current proof of stake module makes the design decision to apply staking weight changes to the consensus engine immediately. This means that delegations and unbonds get applied immediately to the validator set. This decision was made primarily because it was the simplest from an implementation perspective, and because at the time we believed that this would lead to better UX for clients.
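The epoch abstraction described in the abstract can be made concrete with a little arithmetic. The sketch below is illustrative only (the names `epochOf` and `isEpochBoundary` and the height-based scheme are assumptions, not SDK code); it shows how an epoch length of 1 recovers the current immediate-execution behaviour:

```go
package main

import "fmt"

// epochOf returns the epoch index a block height falls in, for a
// hypothetical epochLength parameter. With epochLength == 1, every
// block is its own epoch, matching the current staking module.
func epochOf(height, epochLength int64) int64 {
	return height / epochLength
}

// isEpochBoundary reports whether buffered staking updates would be
// applied at this height under the buffering scheme described above.
func isEpochBoundary(height, epochLength int64) bool {
	return height%epochLength == 0
}

func main() {
	fmt.Println(epochOf(250, 100), isEpochBoundary(300, 100)) // 2 true
	fmt.Println(isEpochBoundary(7, 1))                        // true: epoch of 1 block
}
```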
+ +An alternative design choice is to allow buffering staking updates (delegations, unbonds, validators joining) for a number of blocks. This epoched proof of stake consensus provides the guarantee that the consensus weights for validators will not change mid-epoch, except in the event of a slash condition. + +Additionally, the UX hurdle may not be as significant as was previously thought. This is because it is possible to provide users immediate acknowledgement that their bond was recorded and will be executed. + +Furthermore, it has become clearer over time that immediate execution of staking events comes with limitations, such as: + +* Threshold based cryptography. One of the main limitations is that because the validator set can change so regularly, it makes the running of multiparty computation by a fixed validator set difficult. Many threshold-based cryptographic features for blockchains such as randomness beacons and threshold decryption require a computationally-expensive DKG process (which will take much longer than 1 block to create). To productively use these, we need to guarantee that the result of the DKG will be used for a reasonably long time. It wouldn't be feasible to rerun the DKG every block. By epoching staking, we guarantee that a new DKG only needs to run once every epoch. + +* Light client efficiency. This would lessen the overhead for IBC when there is high churn in the validator set. In the Tendermint light client bisection algorithm, the number of headers you need to verify is related to bounding the difference in validator sets between a trusted header and the latest header. If the difference is too great, you verify more headers in between the two. By limiting the frequency of validator set changes, we can reduce the worst case size of IBC light client proofs, which occurs when a validator set has high churn. + +* Fairness of deterministic leader election.
Currently we have no way of reasoning about fairness of deterministic leader election in the presence of staking changes without epochs (tendermint/spec#217). Breaking fairness of leader election is profitable for validators, as they earn additional rewards from being the proposer. Adding epochs at least makes it easier for our deterministic leader election to match something we can prove secure. (Albeit, we still haven’t proven if our current algorithm is fair with > 2 validators in the presence of stake changes.) + +* Staking derivative design. Currently, reward distribution is done lazily using the F1 fee distribution. While saving computational complexity, lazy accounting requires a more stateful staking implementation. Right now, each delegation entry has to track the time of last withdrawal. Handling this can be a challenge for some staking derivatives designs that seek to provide fungibility for all tokens staked to a single validator. Force-withdrawing rewards to users can help solve this; however, it is infeasible to force-withdraw rewards to users on a per-block basis. With epochs, a chain could more easily alter the design to have rewards be forcefully withdrawn (iterating over delegator accounts only once per epoch), and can thus remove delegation timing from state. This may be useful for certain staking derivative designs. + +## Design considerations + +### Slashing + +There is a design consideration for whether to apply a slash immediately or at the end of an epoch. A slash event should apply only to members who are actually staked during the time of the infraction, namely during the epoch the slash event occurred. + +Applying it immediately can be viewed as offering greater consensus layer security, at potential costs to the aforementioned use cases.
The benefits of immediate slashing for consensus layer security can all be obtained by executing the validator jailing immediately (thus removing it from the validator set), and delaying the actual slash change to the validator's weight until the epoch boundary. For the use cases mentioned above, workarounds can be integrated to avoid problems, as follows: + +* For threshold based cryptography, this setting will have the threshold cryptography use the original epoch weights, while consensus has an update that lets it more rapidly benefit from additional security. If the threshold based cryptography blocks liveness of the chain, then we have effectively raised the liveness threshold of the remaining validators for the rest of the epoch. (Alternatively, jailed nodes could still contribute shares.) This plan will fail in the extreme case that more than 1/3rd of the validators have been jailed within a single epoch. For such an extreme scenario, the chain already has its own custom incident response plan, and defining how to handle the threshold cryptography should be a part of that. +* For light client efficiency, there can be a bit included in the header indicating an intra-epoch slash (à la https://github.com/tendermint/spec/issues/199). +* For fairness of deterministic leader election, applying a slash or jailing within an epoch would break the guarantee we were seeking to provide. This then re-introduces a new (but significantly simpler) problem for trying to provide fairness guarantees. Namely, that validators can adversarially elect to remove themselves from the set of proposers. From a security perspective, this could potentially be handled by two different mechanisms (or prove to still be too difficult to achieve). One is making a security statement acknowledging the ability for an adversary to force an ahead-of-time fixed threshold of users to drop out of the proposer set within an epoch.
The second method would be to parameterize such that the cost of a slash within the epoch far outweighs the benefits of being a proposer. However, this latter criterion is quite dubious, since being a proposer can have many advantageous side-effects in chains with complex state machines. (Namely, DeFi games such as Fomo3D.) +* For staking derivative design, there is no issue introduced. This does not increase the state size of staking records, since whether a slash has occurred is fully queryable given the validator address. + +### Token lockup + +When someone makes a transaction to delegate, even though they are not immediately staked, their tokens should be moved into a pool managed by the staking module, which will then be used at the end of an epoch. This prevents the concern where they stake, then spend those tokens without realizing they were already allocated for staking, and thus have their staking tx fail. + +### Pipelining the epochs + +For threshold based cryptography in particular, we need a pipeline for epoch changes. This is because when we are in epoch N, we want the epoch N+1 weights to be fixed so that the validator set can do the DKG accordingly. So if we are currently in epoch N, the stake weights for epoch N+1 should already be fixed, and new stake changes should be getting applied to epoch N + 2. + +This can be handled by making a parameter for the epoch pipeline length. This parameter should not be alterable except during hard forks, to mitigate implementation complexity of switching the pipeline length. + +With pipeline length 1, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+1. +With pipeline length 2, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+2. + +### Rewards + +Even though all staking updates are applied at epoch boundaries, rewards can still be distributed immediately when they are claimed.
This is because they do not affect the current stake weights, as we do not implement auto-bonding of rewards. If such a feature were to be implemented, it would have to be set up so that rewards are auto-bonded at the epoch boundary. + +### Parameterizing the epoch length + +When choosing the epoch length, there is a trade-off between queued state/computation buildup, and countering the previously discussed limitations of immediate execution if they apply to a given chain. + +Until an ABCI mechanism for variable block times is introduced, it is ill-advised to use long epoch lengths due to the computation buildup. This is because when a block's execution time is greater than the expected block time from Tendermint, rounds may increment. + +## Decision + +**Step-1**: Implement buffering of all staking and slashing messages. + +First we create a pool, called the `EpochDelegationPool`, for storing tokens that are being bonded but whose delegation should only be applied at the epoch boundary. Then, we have two separate queues: one for staking, one for slashing. We describe what happens on each message being delivered below: + +### Staking messages + +* **MsgCreateValidator**: Move user's self-bond to `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the self-bond, taking the funds from the `EpochDelegationPool`. If Epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account. +* **MsgEditValidator**: Validate the message and, if valid, queue it for execution at the end of the Epoch. +* **MsgDelegate**: Move user's funds to `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the delegation, taking the funds from the `EpochDelegationPool`. If Epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account. +* **MsgBeginRedelegate**: Validate the message and, if valid, queue it for execution at the end of the Epoch.
+* **MsgUndelegate**: Validate the message and, if valid, queue it for execution at the end of the Epoch. + +### Slashing messages + +* **MsgUnjail**: Validate the message and, if valid, queue it for execution at the end of the Epoch. +* **Slash Event**: Whenever a slash event is created, it gets queued in the slashing module to apply at the end of the epoch. The queues should be set up such that this slash applies immediately. + +### Evidence Messages + +* **MsgSubmitEvidence**: This gets executed immediately, and the validator gets jailed immediately. However, in slashing, the actual slash event gets queued. + +Then we add methods to the end blockers, to ensure that at the epoch boundary the queues are cleared and delegation updates are applied. + +**Step-2**: Implement querying of queued staking txs. + +When querying the staking activity of a given address, the status should return not only the amount of tokens staked, but also whether there are any queued stake events for that address. This will require more work to be done in the querying logic, to trace the queued upcoming staking events. + +As an initial implementation, this can be implemented as a linear search over all queued staking events. However, chains that need long epochs should eventually add support so that querying nodes can produce results in constant time. (This is doable by maintaining an auxiliary hashmap indexing upcoming staking events by address.) + +**Step-3**: Adjust gas + +Currently, gas represents the cost of executing a transaction when it's done immediately (merging together the costs of p2p overhead, state access overhead, and computational overhead). However, now a transaction can cause computation in a future block, namely at the epoch boundary. + +To handle this, we should initially include parameters for estimating the amount of future computation (denominated in gas), and add that as a flat charge needed for the message.
+We leave out of scope how to weight future computation versus current computation in gas pricing, and set them to be weighted equally for now. + +## Consequences + +### Positive + +* Abstracts the proof of stake module in a way that retains the existing functionality +* Enables new features such as validator-set based threshold cryptography + +### Negative + +* Increases complexity of integrating more complex gas pricing mechanisms, as they now have to consider future execution costs as well. +* When epoch > 1, validators can no longer leave the network immediately, and must wait until an epoch boundary. + + + +### ADR 040: Storage and SMT State Commitments + + + +# ADR 040: Storage and SMT State Commitments + +## Changelog + +* 2020-01-15: Draft + +## Status + +DRAFT Not Implemented + +## Abstract + +Sparse Merkle Tree ([SMT](https://osf.io/8mcnh/)) is a version of a Merkle Tree with various storage and performance optimizations. This ADR defines a separation of state commitments from data storage and the Cosmos SDK transition from IAVL to SMT. + +## Context + +Currently, the Cosmos SDK uses IAVL for both state [commitments](https://cryptography.fandom.com/wiki/Commitment_scheme) and data storage. + +IAVL has effectively become an orphaned project within the Cosmos ecosystem and it's proven to be an inefficient state commitment data structure. +In the current design, IAVL is used for both data storage and as a Merkle Tree for state commitments. IAVL is meant to be a standalone Merkleized key/value database; however, it uses a KV DB engine to store all tree nodes. So, each node is stored in a separate record in the KV DB. This causes many inefficiencies and problems: + +* Each object query requires a tree traversal from the root. Subsequent queries for the same object are cached on the Cosmos SDK level. +* Each edge traversal requires a DB query.
+* Creating snapshots is [expensive](https://github.com/cosmos/cosmos-sdk/issues/7215#issuecomment-684804950). It takes about 30 seconds to export less than 100 MB of state (as of March 2020). +* Updates in IAVL may trigger tree reorganization and possible recomputation of O(log(n)) hashes, which can become a CPU bottleneck. +* The node structure is pretty expensive - it contains the standard tree node elements (key, value, left and right element) and additional metadata such as height and version (which is not required by the Cosmos SDK). The entire node is hashed, and that hash is used as the key in the underlying database, [ref](https://github.com/cosmos/iavl/blob/master/docs/node/node.md). + +Moreover, the IAVL project lacks support and a maintainer, and we already see better and well-established alternatives. Instead of optimizing the IAVL, we are looking into other solutions for both storage and state commitments. + +## Decision + +We propose to separate the concerns of state commitment (**SC**), needed for consensus, and state storage (**SS**), needed for the state machine. Finally, we replace IAVL with [Celestia's SMT](https://github.com/lazyledger/smt). Celestia's SMT is based on the Diem (called jellyfish) design [*] - it uses a compute-optimized SMT by replacing subtrees with only default values with a single node (same approach is used by Ethereum2) and implements compact proofs. + +The storage model presented here doesn't deal with data structures or serialization. It's a Key-Value database, where both key and value are binaries. The storage user is responsible for data serialization. + +### Decouple state commitment from storage + +Separation of storage and commitment (by the SMT) will allow the optimization of different components according to their usage and access patterns. + +`SC` (SMT) is used to commit to data and compute Merkle proofs. `SS` is used to directly access data.
To avoid collisions, both `SS` and `SC` will use a separate storage namespace (they could use the same database underneath). `SS` will store each record directly (mapping `(key, value)` as `key → value`). + +SMT is a merkle tree structure: we don't store keys directly. For every `(key, value)` pair, `hash(key)` is used as the leaf path (we hash a key to uniformly distribute leaves in the tree) and `hash(value)` as the leaf contents. The tree structure is specified in more depth [below](#smt-for-state-commitment). + +For data access we propose two additional KV buckets (implemented as namespaces for the key-value pairs, sometimes called [column family](https://github.com/facebook/rocksdb/wiki/Terminology)): + +1. B1: `key → value`: the principal object storage, used by a state machine, behind the Cosmos SDK `KVStore` interface: provides direct access by key and allows prefix iteration (KV DB backend must support it). +2. B2: `hash(key) → key`: a reverse index to get a key from an SMT path. Internally the SMT will store `(key, value)` as `prefix || hash(key) || hash(value)`. So, we can get an object value by composing `hash(key) → B2 → B1`. +3. We could use more buckets to optimize the app usage if needed. + +We propose to use a KV database for both `SS` and `SC`. The store interface will allow using the same physical DB backend for both `SS` and `SC`, as well as two separate DBs. The latter option allows for the separation of `SS` and `SC` into different hardware units, providing support for more complex setup scenarios and improving overall performance: one can use different backends (e.g. RocksDB and Badger) as well as independently tune the underlying DB configuration.
+ +### Requirements + +State Storage requirements: + +* range queries +* quick (key, value) access +* creating a snapshot +* historical versioning +* pruning (garbage collection) + +State Commitment requirements: + +* fast updates +* tree path should be short +* query historical commitment proofs using ICS-23 standard +* pruning (garbage collection) + +### SMT for State Commitment + +A Sparse Merkle tree is based on the idea of a complete Merkle tree of an intractable size. The assumption here is that as the size of the tree is intractable, there would only be a few leaf nodes with valid data blocks relative to the tree size, rendering a sparse tree. + +The full specification can be found at [Celestia](https://github.com/celestiaorg/celestia-specs/blob/ec98170398dfc6394423ee79b00b71038879e211/src/specs/data_structures.md#sparse-merkle-tree). In summary: + +* The SMT consists of a binary Merkle tree, constructed in the same fashion as described in [Certificate Transparency (RFC-6962)](https://tools.ietf.org/html/rfc6962), but using SHA-2-256, as defined in [FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4), as the hashing function. +* Leaves and internal nodes are hashed differently: the one-byte `0x00` is prepended for leaf nodes while `0x01` is prepended for internal nodes. +* Default values are given to leaf nodes with empty leaves. +* While the above rule is sufficient to pre-compute the values of intermediate nodes that are roots of empty subtrees, a further simplification is to extend this default value to all nodes that are roots of empty subtrees. The 32-byte zero is used as the default value. This rule takes precedence over the above one. +* An internal node that is the root of a subtree that contains exactly one non-empty leaf is replaced by that leaf's leaf node. + +### Snapshots for storage sync and state versioning + +Below, by simple _snapshot_ we refer to a database snapshot mechanism, not to an _ABCI snapshot sync_.
The latter will be referred to as _snapshot sync_ (which will directly use DB snapshot as described below). + +A database snapshot is a view of DB state at a certain time or transaction. It's not a full copy of a database (it would be too big). Usually a snapshot mechanism is based on _copy on write_ and it allows DB state to be efficiently delivered at a certain stage. +Some DB engines support snapshotting. Hence, we propose to reuse that functionality for the state sync and versioning (described below). We limit the supported DB engines to ones which efficiently implement snapshots. In the final section we discuss the evaluated DBs. + +One of the Stargate core features is a _snapshot sync_ delivered in the `/snapshot` package. It provides a way to trustlessly sync a blockchain without repeating all transactions from the genesis. This feature is implemented in the Cosmos SDK and requires storage support. Currently IAVL is the only supported backend. It works by streaming to a client a snapshot of the `SS` at a certain version together with a header chain. + +A new database snapshot will be created in every `EndBlocker` and identified by a block height. The `root` store keeps track of the available snapshots to offer `SS` at a certain version. The `root` store implements the `RootStore` interface described below. In essence, `RootStore` encapsulates a `Committer` interface. `Committer` has `Commit`, `SetPruning` and `GetPruning` functions which will be used for creating and removing snapshots. The `rootStore.Commit` function creates a new snapshot and increments the version on each call, and checks if it needs to remove old versions. We will need to update the SMT interface to implement the `Committer` interface. +NOTE: `Commit` must be called exactly once per block. Otherwise we risk going out of sync for the version number and block height.
+NOTE: For the Cosmos SDK storage, we may consider splitting that interface into `Committer` and `PruningCommitter` - only the multiroot should implement `PruningCommitter` (cache and prefix store don't need pruning). + +The number of historical versions for `abci.RequestQuery` and state sync snapshots is part of a node configuration, not a chain configuration (configuration implied by the blockchain consensus). The configuration should allow specifying the number of past blocks to keep, and the number of past blocks modulo some number (e.g. 100 past blocks and one snapshot every 100 blocks for the past 2000 blocks). Archival nodes can keep all past versions. + +Pruning old snapshots is effectively done by a database. Whenever we update a record in `SC`, SMT won't update nodes - instead it creates new nodes on the update path, without removing the old ones. Since we are snapshotting each block, we need to change that mechanism to immediately remove orphaned nodes from the database. This is a safe operation - snapshots will keep track of the records and make them available when accessing past versions. + +To manage the active snapshots we will either use a DB _max number of snapshots_ option (if available), or we will remove DB snapshots in the `EndBlocker`. The latter option can be done efficiently by identifying snapshots with block height and calling a store function to remove past versions. + +#### Accessing old state versions + +One of the functional requirements is to access old state. This is done through the `abci.RequestQuery` structure. The version is specified by a block height (so we query for an object by a key `K` at block height `H`). The number of old versions supported for `abci.RequestQuery` is configurable. Accessing an old state is done by using available snapshots. +`abci.RequestQuery` doesn't need old state of `SC` unless the `prove=true` parameter is set.
The SMT merkle proof must be included in the `abci.ResponseQuery` only if both `SC` and `SS` have a snapshot for the requested version. + +Moreover, the Cosmos SDK could provide a way to directly access a historical state. However, a state machine shouldn't do that - since the number of snapshots is configurable, it would lead to nondeterministic execution. + +We positively [validated](https://github.com/cosmos/cosmos-sdk/discussions/8297) a versioning and snapshot mechanism for querying old state with regard to the databases we evaluated. + +### State Proofs + +For any object stored in State Store (SS), we have a corresponding object in `SC`. A proof for object `V` identified by a key `K` is a branch of `SC`, where the path corresponds to the key `hash(K)`, and the leaf is `hash(K, V)`. + +### Rollbacks + +We need to be able to process transactions and roll back state updates if a transaction fails. This can be done in the following way: during transaction processing, we keep all state change requests (writes) in a `CacheWrapper` abstraction (as it's done today). Once we finish the block processing, in the `EndBlocker`, we commit a root store - at that time, all changes are written to the SMT and to the `SS` and a snapshot is created. + +### Committing to an object without saving it + +We identified use cases where modules will need to save an object commitment without storing the object itself. Sometimes clients are receiving complex objects, and they have no way to prove the correctness of that object without knowing the storage layout. For those use cases it would be easier to commit to the object without storing it directly. + +### Refactor MultiStore + +The Stargate `/store` implementation (store/v1) adds an additional layer in the SDK store construction - the `MultiStore` structure. The multistore exists to support the modularity of the Cosmos SDK - each module is using its own instance of IAVL, but in the current implementation, all instances share the same database.
The latter indicates, however, that the implementation doesn't provide true modularity. Instead it causes problems related to race conditions and atomic DB commits (see: [\#6370](https://github.com/cosmos/cosmos-sdk/issues/6370) and [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297#discussioncomment-757043)). + +We propose to remove the multistore concept from the SDK, and to use a single instance of `SC` and `SS` in a `RootStore` object. To avoid confusion, we should rename the `MultiStore` interface to `RootStore`. The `RootStore` will have the following interface; the methods for configuring tracing and listeners are omitted for brevity. + +```go +// Used where read-only access to versions is needed. +type BasicRootStore interface { + Store + GetKVStore(StoreKey) KVStore + CacheRootStore() CacheRootStore +} + +// Used as the main app state, replacing CommitMultiStore. +type CommitRootStore interface { + BasicRootStore + Committer + Snapshotter + + GetVersion(uint64) (BasicRootStore, error) + SetInitialVersion(uint64) error + + ... // Trace and Listen methods +} + +// Replaces CacheMultiStore for branched state. +type CacheRootStore interface { + BasicRootStore + Write() + + ... // Trace and Listen methods +} + +// Example of constructor parameters for the concrete type. +type RootStoreConfig struct { + Upgrades *StoreUpgrades + InitialVersion uint64 + + ReservePrefix(StoreKey, StoreType) +} +``` + +In contrast to `MultiStore`, `RootStore` doesn't allow dynamically mounting sub-stores or providing an arbitrary backing DB for individual sub-stores. + +NOTE: modules will be able to use a special commitment and their own DBs. For example: a module which will use ZK proofs for state can store and commit this proof in the `RootStore` (usually as a single record) and manage the specialized store privately or using the `SC` low level interface.
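The branching behaviour behind `CacheRootStore` (and the rollback mechanism described under Rollbacks) can be sketched as a write-buffered overlay. This is a minimal illustrative model, not SDK code: writes go to the overlay and reach the parent only on `Write()`; discarding the overlay instead of calling `Write()` is the rollback.

```go
package main

import "fmt"

type kv map[string]string

// cacheStore buffers writes in an overlay on top of a parent store,
// mirroring the Write() semantics of the CacheRootStore interface.
type cacheStore struct {
	parent  kv
	overlay kv
}

func (c *cacheStore) Set(k, v string) { c.overlay[k] = v }

// Get reads through the overlay first, then falls back to the parent.
func (c *cacheStore) Get(k string) string {
	if v, ok := c.overlay[k]; ok {
		return v
	}
	return c.parent[k]
}

// Write flushes buffered writes to the parent store. Not calling it
// (e.g. on a failed transaction) discards the buffered changes.
func (c *cacheStore) Write() {
	for k, v := range c.overlay {
		c.parent[k] = v
	}
	c.overlay = kv{}
}

func main() {
	parent := kv{"a": "1"}
	cache := &cacheStore{parent: parent, overlay: kv{}}
	cache.Set("a", "2")
	fmt.Println(parent["a"], cache.Get("a")) // 1 2  (write not yet committed)
	cache.Write()
	fmt.Println(parent["a"]) // 2
}
```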
+ +#### Compatibility support + +To ease the transition to this new interface for users, we can create a shim which wraps a `CommitMultiStore` but provides a `CommitRootStore` interface, and expose functions to safely create and access the underlying `CommitMultiStore`. + +The new `RootStore` and supporting types can be implemented in a `store/v2alpha1` package to avoid breaking existing code. + +#### Merkle Proofs and IBC + +Currently, an IBC (v1.0) Merkle proof path consists of two elements (`["<store-key>", "<record-key>"]`), with each key corresponding to a separate proof. These are each verified according to individual [ICS-23 specs](https://github.com/cosmos/ibc-go/blob/f7051429e1cf833a6f65d51e6c3df1609290a549/modules/core/23-commitment/types/merkle.go#L17), and the result hash of each step is used as the committed value of the next step, until a root commitment hash is obtained. +The root hash of the proof for `"<record-key>"` is hashed with the `"<store-key>"` to validate against the App Hash. + +This is not compatible with the `RootStore`, which stores all records in a single Merkle tree structure, and won't produce separate proofs for the store- and record-key. Ideally, the store-key component of the proof could just be omitted, and updated to use a "no-op" spec, so only the record-key is used. However, because the IBC verification code hardcodes the `"ibc"` prefix and applies it to the SDK proof as a separate element of the proof path, this isn't possible without a breaking change. Breaking this behavior would severely impact the Cosmos ecosystem, which already widely adopts the IBC module. Requesting an update of the IBC module across the chains is a time-consuming effort and not easily feasible. + +As a workaround, the `RootStore` will have to use two separate SMTs (they could use the same underlying DB): one for IBC state and one for everything else. A simple Merkle map that references these SMTs will act as a Merkle Tree to create a final App hash.
The Merkle map is not stored in a DB - it's constructed at runtime. The IBC substore key must be `"ibc"`. + +The workaround can still guarantee atomic syncs: the [proposed DB backends](#evaluated-kv-databases) support atomic transactions and efficient rollbacks, which will be used in the commit phase. + +The presented workaround can be used until the IBC module is fully upgraded to support single-element commitment proofs. + +### Optimization: compress module key prefixes + +We consider a compression of prefix keys by creating a mapping from module key to an integer, and serializing the integer using varint coding. Varint coding assures that different values don't share a common byte prefix. For Merkle Proofs we can't use prefix compression - so it should only apply to the `SS` keys. Moreover, the prefix compression should only be applied to the module namespace. More precisely: + +* each module has its own namespace; +* when accessing a module namespace we create a KVStore with an embedded prefix; +* that prefix will be compressed only when accessing and managing `SS`. + +We need to assure that the codes won't change. We can fix the mapping in a static variable (provided by an app) or in `SS` state under a special key. + +TODO: we need to make a decision about key compression. + +## Optimization: SS key compression + +Some objects may be saved with a key which contains a Protobuf message type. Such keys are long. We could save a lot of space if we can map Protobuf message types to varints. + +TODO: finalize this or move to another ADR. + +## Migration + +Using the new store will require a migration. Two migrations are proposed: + +1. Genesis export -- it will reset the blockchain history. +2.
In-place migration: we can reuse `UpgradeKeeper.SetUpgradeHandler` to provide the migration logic:
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("adr-40", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+
+    storev2.Migrate(iavlstore, v2.store)
+
+    // RunMigrations returns the VersionMap
+    // with the updated module ConsensusVersions
+    return app.mm.RunMigrations(ctx, vm)
+})
+```
+
+The `Migrate` function will read all entries from a store/v1 DB and save them to the ADR-40 combined KV store.
+The cache layer should not be used and the operation must finish with a single Commit call.
+
+Inserting records into the `SC` (SMT) component is the bottleneck. Unfortunately, SMT doesn't support batch transactions.
+Adding batch transactions to the `SC` layer is considered as a feature after the main release.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR doesn't introduce any Cosmos SDK level API changes.
+
+We change the storage layout of the state machine, so a storage hard fork and network upgrade are required to incorporate these changes. SMT provides Merkle proof functionality; however, it is not compatible with ICS23. Updating the proofs for ICS23 compatibility is required.
+
+### Positive
+
+* Decoupling state from state commitment introduces better engineering opportunities for further optimizations and better storage patterns.
+* Performance improvements.
+* Joining the SMT-based camp, which has wider and more proven adoption than IAVL. Example projects which decided on SMT: Ethereum2, Diem (Libra), Trillian, Tezos, Celestia.
+* Multistore removal fixes a longstanding issue with the current MultiStore design.
+* Simplifies Merkle proofs - all modules, except IBC, have only one pass for a Merkle proof.
+
+### Negative
+
+* Storage migration
+* LL SMT doesn't support pruning - we will need to add and test that functionality.
+* `SS` keys will have an overhead of a key prefix. 
This doesn't impact `SC` because all keys in `SC` have the same size (they are hashed).
+
+### Neutral
+
+* Deprecating IAVL, which is one of the core proposals of the Cosmos Whitepaper.
+
+## Alternative designs
+
+Most of the alternative designs were evaluated in the [state commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h).
+
+Ethereum research published the [Verkle Trie](https://dankradfeist.de/ethereum/2021/06/18/verkle-trie-for-eth1.html) - an idea of combining polynomial commitments with a Merkle tree in order to reduce the tree height. This concept has very good potential, but we think it's too early to implement it. The current, SMT-based design could be easily updated to the Verkle Trie once other researchers implement all the necessary libraries. The main advantage of the design described in this ADR is the separation of state commitments from the data storage and the design of a more powerful interface.
+
+## Further Discussions
+
+### Evaluated KV Databases
+
+We evaluated existing KV databases for snapshot support. The following databases provide an efficient snapshot mechanism: Badger, RocksDB, [Pebble](https://github.com/cockroachdb/pebble). Databases which don't provide such support or are not production ready: boltdb, leveldb, goleveldb, membdb, lmdb.
+
+### RDBMS
+
+Use of an RDBMS instead of a simple KV store for state. Use of an RDBMS will require a Cosmos SDK API breaking change (`KVStore` interface) and will allow better data extraction and indexing solutions. Instead of saving an object as a single blob of bytes, we could save it as a record in a table in the state storage layer, and as a `hash(key, protobuf(object))` in the SMT as outlined above. To verify that an object registered in the RDBMS is the same as the one committed to the SMT, one will need to load it from the RDBMS, marshal it using protobuf, hash it, and do an SMT search.
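The verification flow above rests on the SS/SC split: the storage layer keeps the raw record while the commitment layer only merkleizes hashes. A minimal sketch of that split, using plain maps in place of a real KV database and SMT (all type and helper names here are illustrative, not the SDK's):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// store models the ADR-40 decoupling: SS holds key -> value for queries,
// SC holds hash(key) -> hash(value) for commitment proofs.
type store struct {
	ss map[string][]byte // state storage: raw records
	sc map[string][]byte // state commitment: hashes only
}

func newStore() *store {
	return &store{ss: map[string][]byte{}, sc: map[string][]byte{}}
}

func (s *store) Set(key, value []byte) {
	kh := sha256.Sum256(key)
	vh := sha256.Sum256(value)
	s.ss[string(key)] = value   // full record lands in SS
	s.sc[string(kh[:])] = vh[:] // only the hashes are merkleized in SC
}

// Verify reloads the record from SS, re-hashes it and checks it against
// the SC entry, mirroring the verification flow outlined above.
func (s *store) Verify(key []byte) bool {
	value, ok := s.ss[string(key)]
	if !ok {
		return false
	}
	kh := sha256.Sum256(key)
	vh := sha256.Sum256(value)
	return bytes.Equal(s.sc[string(kh[:])], vh[:])
}

func main() {
	s := newStore()
	s.Set([]byte("acc/alice"), []byte("balance=10"))
	fmt.Println(s.Verify([]byte("acc/alice"))) // true
	s.ss["acc/alice"] = []byte("balance=999")  // tamper with SS only
	fmt.Println(s.Verify([]byte("acc/alice"))) // false
}
```

Any divergence between the two layers is caught at verification time, which is exactly why SC can stay small (fixed-size hashed keys) while SS serves queries.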
+
+### Off Chain Store
+
+We were discussing a use case where modules can use a supporting database which is not automatically committed. The module will be responsible for having a sound storage model and can optionally use the feature discussed in the _Committing to an object without saving it_ section.
+
+## References
+
+* [IAVL What's Next?](https://github.com/cosmos/cosmos-sdk/issues/7100)
+* [IAVL overview](https://docs.google.com/document/d/16Z_hW2rSAmoyMENO-RlAhQjAG3mSNKsQueMnKpmcBv0/edit#heading=h.yd2th7x3o1iv) of its state as of v0.15
+* [State commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h)
+* [Celestia (LazyLedger) SMT](https://github.com/lazyledger/smt)
+* Facebook Diem (Libra) SMT [design](https://developers.diem.com/papers/jellyfish-merkle-tree/2021-01-14.pdf)
+* [Trillian Revocation Transparency](https://github.com/google/trillian/blob/master/docs/papers/RevocationTransparency.pdf), [Trillian Verifiable Data Structures](https://github.com/google/trillian/blob/master/docs/papers/VerifiableDataStructures.pdf).
+* Design and implementation [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297).
+* [How to Upgrade IBC Chains and their Clients](https://ibc.cosmos.network/main/ibc/upgrades/quick-guide/)
+* [ADR-40 Effect on IBC](https://github.com/cosmos/ibc-go/discussions/256)
+
+
+
+### ADR 041: In-Place Store Migrations
+
+
+
+# ADR 041: In-Place Store Migrations
+
+## Changelog
+
+* 17.02.2021: Initial Draft
+
+## Status
+
+Accepted
+
+## Abstract
+
+This ADR introduces a mechanism to perform in-place state store migrations during chain software upgrades. 
+
+## Context
+
+When a chain upgrade introduces state-breaking changes inside modules, the current procedure consists of exporting the whole state into a JSON file (via the `simd export` command), running migration scripts on the JSON file (`simd genesis migrate` command), clearing the stores (`simd unsafe-reset-all` command), and starting a new chain with the migrated JSON file as the new genesis (optionally with a custom initial block height). An example of such a procedure can be seen [in the Cosmos Hub 3->4 migration guide](https://github.com/cosmos/gaia/blob/v4.0.3/docs/migration/cosmoshub-3.md#upgrade-procedure).
+
+This procedure is cumbersome for multiple reasons:
+
+* The procedure takes time. It can take hours to run the `export` command, plus some additional hours to run `InitChain` on the fresh chain using the migrated JSON.
+* The exported JSON file can be heavy (~100MB-1GB), making it difficult to view, edit and transfer, which in turn introduces additional work to solve these problems (such as [streaming genesis](https://github.com/cosmos/cosmos-sdk/issues/6936)).
+
+## Decision
+
+We propose a migration procedure based on modifying the KV store in-place without involving the JSON export-process-import flow described above.
+
+### Module `ConsensusVersion`
+
+We introduce a new method on the `AppModule` interface:
+
+```go
+type AppModule interface {
+    // --snip--
+    ConsensusVersion() uint64
+}
+```
+
+This method returns a `uint64` which serves as the state-breaking version of the module. It MUST be incremented on each consensus-breaking change introduced by the module. To avoid potential errors with default values, the initial version of a module MUST be set to 1. In the Cosmos SDK, version 1 corresponds to the modules in the v0.41 series. 
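The versioning contract above can be sketched in isolation (the module type below is hypothetical, not SDK code): a module simply reports its current state-breaking version, starting at 1 and bumped on every consensus-breaking change.

```go
package main

import "fmt"

// AppModule is reduced here to the two methods relevant to this ADR.
type AppModule interface {
	Name() string
	ConsensusVersion() uint64
}

// bankModule is a hypothetical module that has had one consensus-breaking
// change since its initial version 1, so it now reports version 2.
type bankModule struct{}

func (bankModule) Name() string             { return "bank" }
func (bankModule) ConsensusVersion() uint64 { return 2 }

func main() {
	var m AppModule = bankModule{}
	fmt.Println(m.Name(), m.ConsensusVersion()) // bank 2
}
```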
+
+### Module-Specific Migration Functions
+
+For each consensus-breaking change introduced by the module, a migration script from ConsensusVersion `N` to version `N+1` MUST be registered in the `Configurator` using its newly-added `RegisterMigration` method. All modules receive a reference to the configurator in their `RegisterServices` method on `AppModule`, and this is where the migration functions should be registered. The migration functions should be registered in increasing order.
+
+```go
+func (am AppModule) RegisterServices(cfg module.Configurator) {
+    // --snip--
+    cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error {
+        // Perform in-place store migrations from ConsensusVersion 1 to 2.
+    })
+    cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error {
+        // Perform in-place store migrations from ConsensusVersion 2 to 3.
+    })
+    // etc.
+}
+```
+
+For example, if the new ConsensusVersion of a module is `N`, then `N-1` migration functions MUST be registered in the configurator.
+
+In the Cosmos SDK, the migration functions are handled by each module's keeper, because the keeper holds the `sdk.StoreKey` used to perform in-place store migrations. To not overload the keeper, a `Migrator` wrapper is used by each module to handle the migration functions:
+
+```go
+// Migrator is a struct for handling in-place store migrations.
+type Migrator struct {
+    BaseKeeper
+}
+```
+
+Migration functions should live inside the `migrations/` folder of each module, and be called by the Migrator's methods. We propose the format `Migrate{M}to{N}` for method names.
+
+```go
+// Migrate1to2 migrates from version 1 to 2.
+func (m Migrator) Migrate1to2(ctx sdk.Context) error {
+    return v2bank.MigrateStore(ctx, m.keeper.storeKey) // v2bank is package `x/bank/migrations/v2`.
+}
+```
+
+Each module's migration functions are specific to the module's store evolutions, and are not described in this ADR. 
An example of x/bank store key migrations after the introduction of ADR-028 length-prefixed addresses can be seen in this [store.go code](https://github.com/cosmos/cosmos-sdk/blob/36f68eb9e041e20a5bb47e216ac5eb8b91f95471/x/bank/legacy/v043/store.go#L41-L62).
+
+### Tracking Module Versions in `x/upgrade`
+
+We introduce a new prefix store in `x/upgrade`'s store. This store will track each module's current version; it can be modeled as a `map[string]uint64` of module name to module ConsensusVersion, and will be used when running the migrations (see next section for details). The key prefix used is `0x1`, and the key/value format is:
+
+```text
+0x1 | {bytes(module_name)} => BigEndian(module_consensus_version)
+```
+
+The initial state of the store is set from `app.go`'s `InitChainer` method.
+
+The UpgradeHandler signature needs to be updated to take a `VersionMap`, as well as return an upgraded `VersionMap` and an error:
+
+```diff
+- type UpgradeHandler func(ctx sdk.Context, plan Plan)
++ type UpgradeHandler func(ctx sdk.Context, plan Plan, versionMap VersionMap) (VersionMap, error)
+```
+
+To apply an upgrade, we query the `VersionMap` from the `x/upgrade` store and pass it into the handler. The handler runs the actual migration functions (see next section), and if successful, returns an updated `VersionMap` to be stored in state.
+
+```diff
+func (k UpgradeKeeper) ApplyUpgrade(ctx sdk.Context, plan types.Plan) {
+    // --snip--
+-   handler(ctx, plan)
++   updatedVM, err := handler(ctx, plan, k.GetModuleVersionMap(ctx)) // k.GetModuleVersionMap() fetches the VersionMap stored in state.
++   if err != nil {
++       return err
++   }
++
++   // Set the updated consensus versions to state
++   k.SetModuleVersionMap(ctx, updatedVM)
+}
+```
+
+A gRPC query endpoint to query the `VersionMap` stored in `x/upgrade`'s state will also be added, so that app developers can double-check the `VersionMap` before the upgrade handler runs. 
+
+### Running Migrations
+
+Once all the migration handlers are registered inside the configurator (which happens at startup), running migrations can happen by calling the `RunMigrations` method on `module.Manager`. This function will loop through all modules, and for each module:
+
+* Get the old ConsensusVersion of the module from its `VersionMap` argument (let's call it `M`).
+* Fetch the new ConsensusVersion of the module from the `ConsensusVersion()` method on `AppModule` (call it `N`).
+* If `N>M`, run all registered migrations for the module sequentially `M -> M+1 -> M+2...` until `N`.
+    * There is a special case where there is no ConsensusVersion for the module, as this means that the module has been newly added during the upgrade. In this case, no migration function is run, and the module's current ConsensusVersion is saved to `x/upgrade`'s store.
+
+If a required migration is missing (e.g. if it has not been registered in the `Configurator`), then the `RunMigrations` function will error.
+
+In practice, the `RunMigrations` method should be called from inside an `UpgradeHandler`.
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+    return app.mm.RunMigrations(ctx, vm)
+})
+```
+
+Assuming a chain upgrades at block `N`, the procedure should run as follows:
+
+* the old binary will halt in `BeginBlock` when starting block `N`. In its store, the ConsensusVersions of the old binary's modules are stored.
+* the new binary will start at block `N`. The UpgradeHandler is set in the new binary, so it will run at `BeginBlock` of the new binary. Inside `x/upgrade`'s `ApplyUpgrade`, the `VersionMap` will be retrieved from the (old binary's) store, and passed into the `RunMigrations` function, migrating all module stores in-place before the modules' own `BeginBlock`s. 
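The loop described above can be sketched as a standalone function (a simplified model with hypothetical types, not the SDK's `module.Manager` implementation): compare the stored version `M` with the module's current `N`, run the registered `M -> M+1` handlers in order, error on a missing step, and record a newly added module at its current version without running anything.

```go
package main

import "fmt"

// VersionMap maps module name -> ConsensusVersion, as in the ADR.
type VersionMap map[string]uint64

// MigrationHandler migrates one version step; errors abort the upgrade.
type MigrationHandler func() error

// runMigrations models the RunMigrations flow. migrations[module][v]
// migrates that module from version v to v+1.
func runMigrations(vm VersionMap, current map[string]uint64,
	migrations map[string]map[uint64]MigrationHandler) (VersionMap, error) {

	updated := VersionMap{}
	for name, n := range current {
		if m, seen := vm[name]; seen {
			for v := m; v < n; v++ {
				h, ok := migrations[name][v]
				if !ok {
					return nil, fmt.Errorf("%s: missing migration %d -> %d", name, v, v+1)
				}
				if err := h(); err != nil {
					return nil, err
				}
			}
		}
		// New modules (absent from vm) skip migrations entirely;
		// either way the module's current version is recorded.
		updated[name] = n
	}
	return updated, nil
}

func main() {
	ran := []string{}
	migs := map[string]map[uint64]MigrationHandler{
		"bank": {
			1: func() error { ran = append(ran, "bank 1->2"); return nil },
			2: func() error { ran = append(ran, "bank 2->3"); return nil },
		},
	}
	// "bank" migrates 1 -> 3; "nft" is newly added, so it only gets recorded.
	vm, err := runMigrations(VersionMap{"bank": 1},
		map[string]uint64{"bank": 3, "nft": 1}, migs)
	fmt.Println(vm["bank"], vm["nft"], err, ran) // 3 1 <nil> [bank 1->2 bank 2->3]
}
```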
+ +## Consequences + +### Backwards Compatibility + +This ADR introduces a new method `ConsensusVersion()` on `AppModule`, which all modules need to implement. It also alters the UpgradeHandler function signature. As such, it is not backwards-compatible. + +While modules MUST register their migration functions when bumping ConsensusVersions, running those scripts using an upgrade handler is optional. An application may perfectly well decide to not call the `RunMigrations` inside its upgrade handler, and continue using the legacy JSON migration path. + +### Positive + +* Perform chain upgrades without manipulating JSON files. +* While no benchmark has been made yet, it is probable that in-place store migrations will take less time than JSON migrations. The main reason supporting this claim is that both the `simd export` command on the old binary and the `InitChain` function on the new binary will be skipped. + +### Negative + +* Module developers MUST correctly track consensus-breaking changes in their modules. If a consensus-breaking change is introduced in a module without its corresponding `ConsensusVersion()` bump, then the `RunMigrations` function won't detect the migration, and the chain upgrade might be unsuccessful. Documentation should clearly reflect this. + +### Neutral + +* The Cosmos SDK will continue to support JSON migrations via the existing `simd export` and `simd genesis migrate` commands. +* The current ADR does not allow creating, renaming or deleting stores, only modifying existing store keys and values. The Cosmos SDK already has the `StoreLoader` for those operations. 
+ +## Further Discussions + +## References + +* Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/8429 +* Implementation of `ConsensusVersion` and `RunMigrations`: https://github.com/cosmos/cosmos-sdk/pull/8485 +* Issue discussing `x/upgrade` design: https://github.com/cosmos/cosmos-sdk/issues/8514 + + + +### ADR 042: Group Module + + + +# ADR 042: Group Module + +## Changelog + +* 2020/04/09: Initial Draft + +## Status + +Draft + +## Abstract + +This ADR defines the `x/group` module which allows the creation and management of on-chain multi-signature accounts and enables voting for message execution based on configurable decision policies. + +## Context + +The legacy amino multi-signature mechanism of the Cosmos SDK has certain limitations: + +* Key rotation is not possible, although this can be solved with [account rekeying](adr-034-account-rekeying.md). +* Thresholds can't be changed. +* UX is cumbersome for non-technical users ([#5661](https://github.com/cosmos/cosmos-sdk/issues/5661)). +* It requires `legacy_amino` sign mode ([#8141](https://github.com/cosmos/cosmos-sdk/issues/8141)). + +While the group module is not meant to be a total replacement for the current multi-signature accounts, it provides a solution to the limitations described above, with a more flexible key management system where keys can be added, updated or removed, as well as configurable thresholds. +It's meant to be used with other access control modules such as [`x/feegrant`](./adr-029-fee-grant-module.md) and [`x/authz`](adr-030-authz-module.md) to simplify key management for individuals and organizations. + +The proof of concept of the group module can be found in https://github.com/cosmos/cosmos-sdk/tree/main/proto/cosmos/group/v1 and https://github.com/cosmos/cosmos-sdk/tree/main/x/group. 
+
+## Decision
+
+We propose merging the `x/group` module with its supporting [ORM/Table Store package](https://github.com/cosmos/cosmos-sdk/tree/main/x/group/internal/orm) ([#7098](https://github.com/cosmos/cosmos-sdk/issues/7098)) into the Cosmos SDK and continuing development here. There will be a dedicated ADR for the ORM package.
+
+### Group
+
+A group is a composition of accounts with associated weights. It is not
+an account and doesn't have a balance. It doesn't in and of itself have any
+sort of voting or decision weight.
+Group members can create proposals and vote on them through group accounts using different decision policies.
+
+It has an `admin` account which can manage members in the group, update the group
+metadata and set a new admin.
+
+```protobuf
+message GroupInfo {
+
+    // group_id is the unique ID of this group.
+    uint64 group_id = 1;
+
+    // admin is the account address of the group's admin.
+    string admin = 2;
+
+    // metadata is any arbitrary metadata attached to the group.
+    bytes metadata = 3;
+
+    // version is used to track changes to a group's membership structure that
+    // would break existing proposals. Whenever a member weight has changed,
+    // or any member is added or removed, the version is incremented and will
+    // invalidate all proposals from older versions.
+    uint64 version = 4;
+
+    // total_weight is the sum of the group members' weights.
+    string total_weight = 5;
+}
+```
+
+```protobuf
+message GroupMember {
+
+    // group_id is the unique ID of the group.
+    uint64 group_id = 1;
+
+    // member is the member data.
+    Member member = 2;
+}
+
+// Member represents a group member with an account address,
+// non-zero weight and metadata.
+message Member {
+
+    // address is the member's account address.
+    string address = 1;
+
+    // weight is the member's voting weight that should be greater than 0.
+    string weight = 2;
+
+    // metadata is any arbitrary metadata attached to the member. 
+    bytes metadata = 3;
+}
+```
+
+### Group Account
+
+A group account is an account associated with a group and a decision policy.
+A group account does have a balance.
+
+Group accounts are abstracted from groups because a single group may have
+multiple decision policies for different types of actions. Managing group
+membership separately from decision policies results in the least overhead
+and keeps membership consistent across different policies. The pattern that
+is recommended is to have a single master group account for a given group,
+and then to create separate group accounts with different decision policies
+and delegate the desired permissions from the master account to
+those "sub-accounts" using the [`x/authz` module](adr-030-authz-module.md).
+
+```protobuf
+message GroupAccountInfo {
+
+    // address is the group account address.
+    string address = 1;
+
+    // group_id is the ID of the Group the GroupAccount belongs to.
+    uint64 group_id = 2;
+
+    // admin is the account address of the group admin.
+    string admin = 3;
+
+    // metadata is any arbitrary metadata of this group account.
+    bytes metadata = 4;
+
+    // version is used to track changes to a group's GroupAccountInfo structure that
+    // invalidates active proposal from old versions.
+    uint64 version = 5;
+
+    // decision_policy specifies the group account's decision policy.
+    google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
+}
+```
+
+Similarly to a group admin, a group account admin can update its metadata, decision policy or set a new group account admin.
+
+A group account can also be an admin or a member of a group.
+For instance, a group admin could be another group account which could "elect" the members, or it could be the same group that elects itself.
+
+### Decision Policy
+
+A decision policy is the mechanism by which members of a group can vote on
+proposals. 
+
+All decision policies should have a minimum and maximum voting window.
+The minimum voting window is the minimum duration that must pass in order
+for a proposal to potentially pass, and it may be set to 0. The maximum voting
+window is the maximum time that a proposal may be voted on and executed if
+it reached enough support before it is closed.
+Both of these values must be less than a chain-wide max voting window parameter.
+
+We define the `DecisionPolicy` interface that all decision policies must implement:
+
+```go
+type DecisionPolicy interface {
+    codec.ProtoMarshaler
+
+    ValidateBasic() error
+    GetTimeout() types.Duration
+    Allow(tally Tally, totalPower string, votingDuration time.Duration) (DecisionPolicyResult, error)
+    Validate(g GroupInfo) error
+}
+
+type DecisionPolicyResult struct {
+    Allow bool
+    Final bool
+}
+```
+
+#### Threshold decision policy
+
+A threshold decision policy defines a minimum number of support votes (_yes_), based on a tally
+of voter weights, that a proposal requires in order to pass. For
+this decision policy, abstain and veto are treated as no support (_no_).
+
+```protobuf
+message ThresholdDecisionPolicy {
+
+    // threshold is the minimum weighted sum of support votes for a proposal to succeed.
+    string threshold = 1;
+
+    // voting_period is the duration from submission of a proposal to the end of voting period
+    // Within this period, votes and exec messages can be submitted.
+    google.protobuf.Duration voting_period = 2 [(gogoproto.nullable) = false];
+}
+```
+
+### Proposal
+
+Any member of a group can submit a proposal for a group account to decide upon.
+A proposal consists of a set of `sdk.Msg`s that will be executed if the proposal
+passes, as well as any metadata associated with the proposal. These `sdk.Msg`s get validated as part of the `Msg/CreateProposal` request validation. They should also have their signer set as the group account. 
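The threshold policy's `Allow`/`Final` semantics described above can be sketched in simplified form (integer weights instead of the SDK's decimal strings; illustrative only, not the actual `x/group` implementation):

```go
package main

import "fmt"

// Tally mirrors the weighted vote sums; abstain and veto count as no support.
type Tally struct{ YesCount, NoCount, AbstainCount, VetoCount uint64 }

type ThresholdDecisionPolicy struct{ Threshold uint64 }

// Allow reports whether the proposal currently passes, and whether the
// outcome is final: once the yes tally reaches the threshold, further
// votes cannot undo it; if the threshold exceeds the total voting power,
// it can never be reached.
func (p ThresholdDecisionPolicy) Allow(t Tally, totalPower uint64) (allow, final bool) {
	if t.YesCount >= p.Threshold {
		return true, true
	}
	if p.Threshold > totalPower {
		return false, true // threshold is unreachable
	}
	return false, false // still undecided
}

func main() {
	p := ThresholdDecisionPolicy{Threshold: 3}
	allow, final := p.Allow(Tally{YesCount: 2}, 5)
	fmt.Println(allow, final) // false false
	allow, final = p.Allow(Tally{YesCount: 3}, 5)
	fmt.Println(allow, final) // true true
}
```

The `Final` flag is what lets `Msg/Exec` short-circuit: a final result can be acted on before the voting window closes.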
+
+Internally, a proposal also tracks:
+
+* its current `Status`: submitted, closed or aborted
+* its `Result`: unfinalized, accepted or rejected
+* its `VoteState` in the form of a `Tally`, which is calculated on new votes and when executing the proposal.
+
+```protobuf
+// Tally represents the sum of weighted votes.
+message Tally {
+    option (gogoproto.goproto_getters) = false;
+
+    // yes_count is the weighted sum of yes votes.
+    string yes_count = 1;
+
+    // no_count is the weighted sum of no votes.
+    string no_count = 2;
+
+    // abstain_count is the weighted sum of abstainers.
+    string abstain_count = 3;
+
+    // veto_count is the weighted sum of vetoes.
+    string veto_count = 4;
+}
+```
+
+### Voting
+
+Members of a group can vote on proposals. There are four voting choices: yes, no, abstain and veto. Not
+all decision policies will support all of them. Votes can contain some optional metadata.
+In the current implementation, the voting window begins as soon as a proposal
+is submitted.
+
+Voting internally updates the proposal's `VoteState` as well as its `Status` and `Result` if needed.
+
+### Executing Proposals
+
+Proposals will not be automatically executed by the chain in this current design,
+but rather a user must submit a `Msg/Exec` transaction to attempt to execute the
+proposal based on the current votes and decision policy. A future upgrade could
+automate this and have the group account (or a fee granter) pay.
+
+#### Changing Group Membership
+
+In the current implementation, updating a group or a group account after submitting a proposal will make it invalid. It will simply fail if someone calls `Msg/Exec` and will eventually be garbage collected.
+
+### Notes on current implementation
+
+This section outlines the current implementation used in the proof of concept of the group module, but this could be subject to changes and iterated on. 
+
+#### ORM
+
+The [ORM package](https://github.com/cosmos/cosmos-sdk/discussions/9156) defines tables, sequences and secondary indexes which are used in the group module.
+
+Groups are stored in state as part of a `groupTable`, the `group_id` being an auto-increment integer. Group members are stored in a `groupMemberTable`.
+
+Group accounts are stored in a `groupAccountTable`. The group account address is generated based on an auto-increment integer which is used to derive the group module `RootModuleKey` into a `DerivedModuleKey`, as stated in [ADR-033](adr-033-protobuf-inter-module-comm.md#modulekeys-and-moduleids). The group account is added as a new `ModuleAccount` through `x/auth`.
+
+Proposals are stored as part of the `proposalTable` using the `Proposal` type. The `proposal_id` is an auto-increment integer.
+
+Votes are stored in the `voteTable`. The primary key is based on the vote's `proposal_id` and `voter` account address.
+
+#### ADR-033 to route proposal messages
+
+Inter-module communication introduced by [ADR-033](adr-033-protobuf-inter-module-comm.md) can be used to route a proposal's messages using the `DerivedModuleKey` corresponding to the proposal's group account.
+
+## Consequences
+
+### Positive
+
+* Improved UX for multi-signature accounts allowing key rotation and custom decision policies.
+
+### Negative
+
+### Neutral
+
+* It uses ADR 033, so it will need to be implemented within the Cosmos SDK, but this doesn't necessarily imply any large refactoring of existing Cosmos SDK modules.
+* The current implementation of the group module uses the ORM package. 
+
+## Further Discussions
+
+* Convergence of `x/group` and `x/gov` as both support proposals and voting: https://github.com/cosmos/cosmos-sdk/discussions/9066
+* `x/group` possible future improvements:
+    * Execute proposals on submission (https://github.com/regen-network/regen-ledger/issues/288)
+    * Withdraw a proposal (https://github.com/regen-network/cosmos-modules/issues/41)
+    * Make `Tally` more flexible and support non-binary choices
+
+## References
+
+* Initial specification:
+    * https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#group-module
+    * [#5236](https://github.com/cosmos/cosmos-sdk/pull/5236)
+* Proposal to add `x/group` into the Cosmos SDK: [#7633](https://github.com/cosmos/cosmos-sdk/issues/7633)
+
+
+
+### ADR 043: NFT Module
+
+
+
+# ADR 43: NFT Module
+
+## Changelog
+
+* 2021-05-01: Initial Draft
+* 2021-07-02: Review updates
+* 2022-06-15: Add batch operation
+* 2022-11-11: Remove strict validation of classID and tokenID
+
+## Status
+
+PROPOSED
+
+## Abstract
+
+This ADR defines the `x/nft` module which is a generic implementation of NFTs, roughly "compatible" with ERC721. **Applications using the `x/nft` module must implement the following functions**:
+
+* `MsgNewClass` - Receive the user's request to create a class, and call the `NewClass` of the `x/nft` module.
+* `MsgUpdateClass` - Receive the user's request to update a class, and call the `UpdateClass` of the `x/nft` module.
+* `MsgMintNFT` - Receive the user's request to mint an NFT, and call the `MintNFT` of the `x/nft` module.
+* `MsgBurnNFT` - Receive the user's request to burn an NFT, and call the `BurnNFT` of the `x/nft` module.
+* `MsgUpdateNFT` - Receive the user's request to update an NFT, and call the `UpdateNFT` of the `x/nft` module.
+
+## Context
+
+NFTs are more than just crypto art, which is very helpful for accruing value to the Cosmos ecosystem. 
As a result, Cosmos Hub should implement NFT functions and enable a unified mechanism for storing and sending the ownership representative of NFTs as discussed in https://github.com/cosmos/cosmos-sdk/discussions/9065.
+
+As discussed in [#9065](https://github.com/cosmos/cosmos-sdk/discussions/9065), several potential solutions can be considered:
+
+* irismod/nft and modules/incubator/nft
+* CW721
+* DID NFTs
+* interNFT
+
+Since functions/use cases of NFTs are tightly connected with their logic, it is almost impossible to support all the NFTs' use cases in one Cosmos SDK module by defining and implementing different transaction types.
+
+Considering generic usage and compatibility of interchain protocols including IBC and Gravity Bridge, it is preferred to have a generic NFT module design which handles the generic NFTs logic.
+This design idea can enable composability, such that application-specific functions are managed by other modules on the Cosmos Hub or on other Zones that import the NFT module.
+
+The current design is based on the work done by the [IRISnet team](https://github.com/irisnet/irismod/tree/master/modules/nft) and an older implementation in the [Cosmos repository](https://github.com/cosmos/modules/tree/master/incubator/nft).
+
+## Decision
+
+We create a `x/nft` module, which contains the following functionality:
+
+* Store NFTs and track their ownership.
+* Expose a `Keeper` interface for composing modules to transfer, mint and burn NFTs.
+* Expose an external `Message` interface for users to transfer ownership of their NFTs.
+* Query NFTs and their supply information.
+
+The proposed module is a base module for NFT app logic. Its goal is to provide a common layer for storage, basic transfer functionality and IBC. The module should not be used as a standalone.
+Instead, an app should create a specialized module to handle app-specific logic (e.g. NFT ID construction, royalty), and user-level minting and burning. 
Moreover, an app-specialized module should handle auxiliary data to support the app logic (e.g. indexes, ORM, business data).
+
+All data carried over IBC must be part of the `NFT` or `Class` type described below. The app-specific NFT data should be encoded in `NFT.data` for cross-chain integrity. Other objects related to the NFT which are not important for integrity can be part of the app-specific module.
+
+### Types
+
+We propose two main types:
+
+* `Class` -- describes an NFT class. We can think about it as a smart contract address.
+* `NFT` -- an object representing a unique, non-fungible asset. Each NFT is associated with a Class.
+
+#### Class
+
+An NFT **Class** is comparable to an ERC-721 smart contract (it provides the description of a smart contract), under which a collection of NFTs can be created and managed.
+
+```protobuf
+message Class {
+    string id = 1;
+    string name = 2;
+    string symbol = 3;
+    string description = 4;
+    string uri = 5;
+    string uri_hash = 6;
+    google.protobuf.Any data = 7;
+}
+```
+
+* `id` is used as the primary index for storing the class; _required_
+* `name` is a descriptive name of the NFT class; _optional_
+* `symbol` is the symbol usually shown on exchanges for the NFT class; _optional_
+* `description` is a detailed description of the NFT class; _optional_
+* `uri` is a URI for the class metadata stored off chain. It should be a JSON file that contains metadata about the NFT class and NFT data schema ([OpenSea example](https://docs.opensea.io/docs/contract-level-metadata)); _optional_
+* `uri_hash` is a hash of the document pointed to by uri; _optional_
+* `data` is app-specific metadata of the class; _optional_
+
+#### NFT
+
+We define a general model for `NFT` as follows. 
+
+```protobuf
+message NFT {
+    string class_id = 1;
+    string id = 2;
+    string uri = 3;
+    string uri_hash = 4;
+    google.protobuf.Any data = 10;
+}
+```
+
+* `class_id` is the identifier of the NFT class where the NFT belongs; _required_
+* `id` is an identifier of the NFT, unique within the scope of its class. It is specified by the creator of the NFT and may be expanded to use DID in the future. `class_id` combined with `id` uniquely identifies an NFT and is used as the primary index for storing the NFT; _required_
+
+    ```text
+    {class_id}/{id} --> NFT (bytes)
+    ```
+
+* `uri` is a URI for the NFT metadata stored off chain. Should point to a JSON file that contains metadata about this NFT (Ref: [ERC721 standard and OpenSea extension](https://docs.opensea.io/docs/metadata-standards)); _required_
+* `uri_hash` is a hash of the document pointed to by uri; _optional_
+* `data` is app-specific data of the NFT. CAN be used by composing modules to specify additional properties of the NFT; _optional_
+
+This ADR doesn't specify values that `data` can take; however, best practices recommend that upper-level NFT modules clearly specify their contents. Although the value of this field doesn't provide the additional context required to manage NFT records (which means the field could technically be removed from the specification), its existence allows basic informational/UI functionality. 
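The `{class_id}/{id}` primary index above translates directly into a key-construction helper (illustrative, not the module's actual store code):

```go
package main

import "fmt"

// nftKey builds the primary store index for an NFT, following the
// {class_id}/{id} format described above. Because id is unique within
// its class, the combined key uniquely identifies the NFT.
func nftKey(classID, id string) []byte {
	return []byte(classID + "/" + id)
}

func main() {
	fmt.Printf("%s\n", nftKey("cryptokitty", "kitty1"))
	// cryptokitty/kitty1
}
```

A useful side effect of this layout is that iterating the store over the `"cryptokitty/"` prefix enumerates every NFT in that class.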
+ +### `Keeper` Interface + +```go +type Keeper interface { + NewClass(ctx sdk.Context, class Class) + UpdateClass(ctx sdk.Context, class Class) + + Mint(ctx sdk.Context, nft NFT, receiver sdk.AccAddress) // updates totalSupply + BatchMint(ctx sdk.Context, tokens []NFT, receiver sdk.AccAddress) error + + Burn(ctx sdk.Context, classID string, nftID string) // updates totalSupply + BatchBurn(ctx sdk.Context, classID string, nftIDs []string) error + + Update(ctx sdk.Context, nft NFT) + BatchUpdate(ctx sdk.Context, tokens []NFT) error + + Transfer(ctx sdk.Context, classID string, nftID string, receiver sdk.AccAddress) + BatchTransfer(ctx sdk.Context, classID string, nftIDs []string, receiver sdk.AccAddress) error + + GetClass(ctx sdk.Context, classID string) Class + GetClasses(ctx sdk.Context) []Class + + GetNFT(ctx sdk.Context, classID string, nftID string) NFT + GetNFTsOfClassByOwner(ctx sdk.Context, classID string, owner sdk.AccAddress) []NFT + GetNFTsOfClass(ctx sdk.Context, classID string) []NFT + + GetOwner(ctx sdk.Context, classID string, nftID string) sdk.AccAddress + GetBalance(ctx sdk.Context, classID string, owner sdk.AccAddress) uint64 + GetTotalSupply(ctx sdk.Context, classID string) uint64 +} +``` + +Other business logic implementations should be defined in composing modules that import `x/nft` and use its `Keeper`. + +### `Msg` Service + +```protobuf +service Msg { + rpc Send(MsgSend) returns (MsgSendResponse); +} + +message MsgSend { + string class_id = 1; + string id = 2; + string sender = 3; + string receiver = 4; +} +message MsgSendResponse {} +``` + +`MsgSend` can be used to transfer the ownership of an NFT to another address.
+ +The implementation outline of the server is as follows: + +```go +type msgServer struct { + k Keeper +} + +func (m msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + // check current ownership + assertEqual(msg.Sender, m.k.GetOwner(msg.ClassId, msg.Id)) + + // transfer ownership + m.k.Transfer(msg.ClassId, msg.Id, msg.Receiver) + + return &types.MsgSendResponse{}, nil +} +``` + +The query service methods for the `x/nft` module are: + +```protobuf +service Query { + // Balance queries the number of NFTs of a given class owned by the owner, same as balanceOf in ERC721 + rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/balance/{owner}/{class_id}"; + } + + // Owner queries the owner of the NFT based on its class and id, same as ownerOf in ERC721 + rpc Owner(QueryOwnerRequest) returns (QueryOwnerResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/owner/{class_id}/{id}"; + } + + // Supply queries the number of NFTs from the given class, same as totalSupply of ERC721. + rpc Supply(QuerySupplyRequest) returns (QuerySupplyResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/supply/{class_id}"; + } + + // NFTs queries all NFTs of a given class or owner; choose at least one of the two. Similar to tokenByIndex in ERC721Enumerable + rpc NFTs(QueryNFTsRequest) returns (QueryNFTsResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/nfts"; + } + + // NFT queries an NFT based on its class and id.
+ rpc NFT(QueryNFTRequest) returns (QueryNFTResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/nfts/{class_id}/{id}"; + } + + // Class queries an NFT class based on its id + rpc Class(QueryClassRequest) returns (QueryClassResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/classes/{class_id}"; + } + + // Classes queries all NFT classes + rpc Classes(QueryClassesRequest) returns (QueryClassesResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/classes"; + } +} + +// QueryBalanceRequest is the request type for the Query/Balance RPC method +message QueryBalanceRequest { + string class_id = 1; + string owner = 2; +} + +// QueryBalanceResponse is the response type for the Query/Balance RPC method +message QueryBalanceResponse { + uint64 amount = 1; +} + +// QueryOwnerRequest is the request type for the Query/Owner RPC method +message QueryOwnerRequest { + string class_id = 1; + string id = 2; +} + +// QueryOwnerResponse is the response type for the Query/Owner RPC method +message QueryOwnerResponse { + string owner = 1; +} + +// QuerySupplyRequest is the request type for the Query/Supply RPC method +message QuerySupplyRequest { + string class_id = 1; +} + +// QuerySupplyResponse is the response type for the Query/Supply RPC method +message QuerySupplyResponse { + uint64 amount = 1; +} + +// QueryNFTsRequest is the request type for the Query/NFTs RPC method +message QueryNFTsRequest { + string class_id = 1; + string owner = 2; + cosmos.base.query.v1beta1.PageRequest pagination = 3; +} + +// QueryNFTsResponse is the response type for the Query/NFTs RPC methods +message QueryNFTsResponse { + repeated cosmos.nft.v1beta1.NFT nfts = 1; + cosmos.base.query.v1beta1.PageResponse pagination = 2; +} + +// QueryNFTRequest is the request type for the Query/NFT RPC method +message QueryNFTRequest { + string class_id = 1; + string id = 2; +} + +// QueryNFTResponse is the response type for the Query/NFT RPC method +message QueryNFTResponse { + 
cosmos.nft.v1beta1.NFT nft = 1; +} + +// QueryClassRequest is the request type for the Query/Class RPC method +message QueryClassRequest { + string class_id = 1; +} + +// QueryClassResponse is the response type for the Query/Class RPC method +message QueryClassResponse { + cosmos.nft.v1beta1.Class class = 1; +} + +// QueryClassesRequest is the request type for the Query/Classes RPC method +message QueryClassesRequest { + // pagination defines an optional pagination for the request. + cosmos.base.query.v1beta1.PageRequest pagination = 1; +} + +// QueryClassesResponse is the response type for the Query/Classes RPC method +message QueryClassesResponse { + repeated cosmos.nft.v1beta1.Class classes = 1; + cosmos.base.query.v1beta1.PageResponse pagination = 2; +} +``` + +### Interoperability + +Interoperability is all about reusing assets between modules and chains. The former is achieved by ADR-033: Protobuf client-server communication. At the time of writing, ADR-033 is not finalized. The latter is achieved by IBC. Here we will focus on the IBC side. +IBC is implemented per module. Here, we have agreed that NFTs will be recorded and managed in `x/nft`. This requires the creation and implementation of a new IBC standard. + +For IBC interoperability, custom NFT modules MUST use the NFT object type understood by the IBC client. So, for x/nft interoperability, custom NFT implementations (example: x/cryptokitty) should use the canonical x/nft module and proxy all NFT balance-keeping functionality to x/nft, or else re-implement all functionality using the NFT object type understood by the IBC client. In other words: x/nft becomes the standard NFT registry for all Cosmos NFTs (example: x/cryptokitty will register a kitty NFT in x/nft and use x/nft for bookkeeping). This was [discussed](https://github.com/cosmos/cosmos-sdk/discussions/9065#discussioncomment-873206) in the context of using x/bank as a general asset balance book.
Not using x/nft will require implementing another module for IBC. + +## Consequences + +### Backward Compatibility + +No backward incompatibilities. + +### Forward Compatibility + +This specification conforms to the ERC-721 smart contract specification for NFT identifiers. Note that ERC-721 defines uniqueness based on (contract address, uint256 tokenId), and we conform to this implicitly because a single module currently tracks NFT identifiers. Note: use of the (mutable) data field to determine uniqueness is not safe. + +### Positive + +* NFT identifiers available on Cosmos Hub. +* Ability to build different NFT modules for the Cosmos Hub, e.g., ERC-721. +* NFT module which supports interoperability with IBC and other cross-chain infrastructures like Gravity Bridge + +### Negative + +* New IBC app is required for x/nft +* CW721 adapter is required + +### Neutral + +* Other functions need more modules. For example, a custody module is needed for the NFT trading function; a collectible module is needed for defining NFT properties. + +## Further Discussions + +For other kinds of applications on the Hub, more app-specific modules can be developed in the future: + +* `x/nft/custody`: custody of NFTs to support trading functionality. +* `x/nft/marketplace`: selling and buying NFTs using sdk.Coins. +* `x/fractional`: a module to split the ownership of an asset (NFT or other assets) among multiple stakeholders. `x/group` should work for most of the cases. + +Other networks in the Cosmos ecosystem could design and implement their own NFT modules for specific NFT applications and use cases.
+ +## References + +* Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/9065 +* x/nft: initialize module: https://github.com/cosmos/cosmos-sdk/pull/9174 +* [ADR 033](#adr-033-protobuf-based-inter-module-communication) + + + +### ADR 044: Guidelines for Updating Protobuf Definitions + + + +# ADR 044: Guidelines for Updating Protobuf Definitions + +## Changelog + +* 28.06.2021: Initial Draft +* 02.12.2021: Add `Since:` comment for new fields +* 21.07.2022: Remove the rule of no new `Msg` in the same proto version. + +## Status + +Draft + +## Abstract + +This ADR provides guidelines and recommended practices when updating Protobuf definitions. These guidelines are targeted at module developers. + +## Context + +The Cosmos SDK maintains a set of [Protobuf definitions](https://github.com/cosmos/cosmos-sdk/tree/main/proto/cosmos). It is important to correctly design Protobuf definitions to avoid any breaking changes within the same version. The reason is to avoid breaking tooling (including indexers and explorers), wallets, and other third-party integrations. + +When making changes to these Protobuf definitions, the Cosmos SDK currently only follows [Buf's](https://docs.buf.build/) recommendations. We noticed, however, that Buf's recommendations might still result in breaking changes in the SDK in some cases. For example: + +* Adding fields to `Msg`s. Adding fields is not a Protobuf spec-breaking operation. However, when adding new fields to `Msg`s, the unknown field rejection will throw an error when sending the new `Msg` to an older node. +* Marking fields as `reserved`. Protobuf proposes the `reserved` keyword for removing fields without the need to bump the package version. However, by doing so, client backwards compatibility is broken as Protobuf doesn't generate anything for `reserved` fields. See [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) for more details on this issue.
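To illustrate the `reserved` pitfall above, consider a hypothetical message from which a field is removed (the message and field names here are invented for illustration, not taken from the SDK's proto files):

```protobuf
// Hypothetical example: an earlier revision had `string memo = 2;`.
// Reserving the tag and name keeps the wire format compatible, but
// code generators emit nothing for reserved fields, so any client
// code referencing the generated `memo` accessor breaks at compile time.
message MsgExample {
  reserved 2;
  reserved "memo";

  string sender    = 1;
  string recipient = 3;
}
```

This is why removing a field via `reserved` is wire-compatible but still client-breaking, which is exactly the case the bullet above warns about.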
+ +Moreover, module developers often face other questions around Protobuf definitions such as "Can I rename a field?" or "Can I deprecate a field?" This ADR aims to answer all these questions by providing clear guidelines about allowed updates for Protobuf definitions. + +## Decision + +We decide to keep [Buf's](https://docs.buf.build/) recommendations with the following exceptions: + +* `UNARY_RPC`: the Cosmos SDK currently does not support streaming RPCs. +* `COMMENT_FIELD`: the Cosmos SDK allows fields with no comments. +* `SERVICE_SUFFIX`: we use the `Query` and `Msg` service naming convention, which doesn't use the `-Service` suffix. +* `PACKAGE_VERSION_SUFFIX`: some packages, such as `cosmos.crypto.ed25519`, don't use a version suffix. +* `RPC_REQUEST_STANDARD_NAME`: Requests for the `Msg` service don't have the `-Request` suffix to keep backwards compatibility. + +On top of Buf's recommendations, we add the following guidelines that are specific to the Cosmos SDK. + +### Updating Protobuf Definition Without Bumping Version + +#### 1. Module developers MAY add new Protobuf definitions + +Module developers MAY add new `message`s, new `Service`s, new `rpc` endpoints, and new fields to existing messages. This recommendation follows the Protobuf specification, but is added in this document for clarity, as the SDK requires one additional change. + +The SDK requires the Protobuf comment of the new addition to contain one line with the following format: + +```protobuf +// Since: cosmos-sdk {, ...} +``` + +Where each `version` denotes a minor ("0.45") or patch ("0.44.5") version from which the field is available. This will greatly help client libraries, which can optionally use reflection or custom code generation to show/hide these fields depending on the targeted node version.
+ +As examples, the following comments are valid: + +```protobuf +// Since: cosmos-sdk 0.44 + +// Since: cosmos-sdk 0.42.11, 0.44.5 +``` + +and the following ones are NOT valid: + +```protobuf +// Since cosmos-sdk v0.44 + +// since: cosmos-sdk 0.44 + +// Since: cosmos-sdk 0.42.11 0.44.5 + +// Since: Cosmos SDK 0.42.11, 0.44.5 +``` + +#### 2. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields + +Protobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields. If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version). + +As an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-breaking changes, listed below. Instead of bumping the package versions from `v1beta1` to `v1`, the SDK team decided to follow this guideline, by reverting the breaking changes, marking those changes as deprecated, and modifying the node implementation when processing messages with deprecated fields. More specifically: + +* The Cosmos SDK recently removed support for [time-based software upgrades](https://github.com/cosmos/cosmos-sdk/pull/8849). As such, the `time` field has been marked as deprecated in `cosmos.upgrade.v1beta1.Plan`. Moreover, the node will reject any proposal containing an upgrade Plan whose `time` field is non-empty. +* The Cosmos SDK now supports [governance split votes](./adr-037-gov-split-vote.md). When querying for votes, the returned `cosmos.gov.v1beta1.Vote` message has its `option` field (used for 1 vote option) deprecated in favor of its `options` field (allowing multiple vote options). 
Whenever possible, the SDK still populates the deprecated `option` field, that is, if and only if `len(options) == 1` and `options[0].Weight == 1.0`. + +#### 3. Fields MUST NOT be renamed + +Whereas the official Protobuf recommendations do not prohibit renaming fields, as renaming does not break the Protobuf binary representation, the SDK explicitly forbids renaming fields in Protobuf structs. The main reason for this choice is to avoid introducing breaking changes for clients, which often rely on hard-coded fields from generated types. Moreover, renaming fields will lead to client-breaking JSON representations of Protobuf definitions, used in REST endpoints and in the CLI. + +### Incrementing Protobuf Package Version + +TODO, needs architecture review. Some topics: + +* Bumping versions frequency +* When bumping versions, should the Cosmos SDK support both versions? + * i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions? +* mention ADR-023 Protobuf naming + +## Consequences + +> This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future. + +### Backwards Compatibility + +> All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright. + +### Positive + +* less pain to tool developers +* more compatibility in the ecosystem +* ...
+ +### Negative + +{negative consequences} + +### Neutral + +* more rigor in Protobuf review + +## Further Discussions + +This ADR is still in the DRAFT stage, and the "Incrementing Protobuf Package Version" will be filled in once we make a decision on how to correctly do it. + +## Test Cases [optional] + +Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable. + +## References + +* [#9445](https://github.com/cosmos/cosmos-sdk/issues/9445) Release proto definitions v1 +* [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) Address v1beta1 proto breaking changes + + + +### ADR 045: BaseApp `{Check,Deliver}Tx` as Middlewares + + + +# ADR 045: BaseApp `{Check,Deliver}Tx` as Middlewares + +## Changelog + +* 20.08.2021: Initial draft. +* 07.12.2021: Update `tx.Handler` interface ([\#10693](https://github.com/cosmos/cosmos-sdk/pull/10693)). +* 17.05.2022: ADR is abandoned, as middlewares are deemed too hard to reason about. + +## Status + +ABANDONED. Replacement is being discussed in [#11955](https://github.com/cosmos/cosmos-sdk/issues/11955). + +## Abstract + +This ADR replaces the current BaseApp `runTx` and antehandlers design with a middleware-based design. + +## Context + +BaseApp's implementation of ABCI `{Check,Deliver}Tx()` and its own `Simulate()` method call the `runTx` method under the hood, which first runs antehandlers, then executes `Msg`s. However, the [transaction Tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [refunding unused gas](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases require custom logic to be run after the `Msg`s execution. There is currently no way to achieve this. + +A naive solution would be to add post-`Msg` hooks to BaseApp. 
However, the Cosmos SDK team is thinking in parallel about the bigger picture of making app wiring simpler ([#9181](https://github.com/cosmos/cosmos-sdk/discussions/9182)), which includes making BaseApp more lightweight and modular. + +## Decision + +We decide to transform BaseApp's implementation of ABCI `{Check,Deliver}Tx` and its own `Simulate` methods to use a middleware-based design. + +The two following interfaces are the base of the middleware design, and are defined in `types/tx`: + +```go +type Handler interface { + CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error) + DeliverTx(ctx context.Context, req Request) (Response, error) + SimulateTx(ctx context.Context, req Request) (Response, error) +} + +type Middleware func(Handler) Handler +``` + +where we define the following arguments and return types: + +```go +type Request struct { + Tx sdk.Tx + TxBytes []byte +} + +type Response struct { + GasWanted uint64 + GasUsed uint64 + // MsgResponses is an array containing each Msg service handler's response + // type, packed in an Any. This will get proto-serialized into the `Data` field + // in the ABCI Check/DeliverTx responses. + MsgResponses []*codectypes.Any + Log string + Events []abci.Event +} + +type RequestCheckTx struct { + Type abci.CheckTxType +} + +type ResponseCheckTx struct { + Priority int64 +} +``` + +Please note that because CheckTx handles separate logic related to mempool prioritization, its signature is different from that of DeliverTx and SimulateTx. + +BaseApp holds a reference to a `tx.Handler`: + +```go +type BaseApp struct { + // other fields + txHandler tx.Handler +} +``` + +BaseApp's ABCI `{Check,Deliver}Tx()` and `Simulate()` methods simply call `app.txHandler.{Check,Deliver,Simulate}Tx()` with the relevant arguments.
For example, for `DeliverTx`: + +```go +func (app *BaseApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx { + var abciRes abci.ResponseDeliverTx + ctx := app.getContextForTx(runTxModeDeliver, req.Tx) + res, err := app.txHandler.DeliverTx(ctx, tx.Request{TxBytes: req.Tx}) + if err != nil { + abciRes = sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace) + return abciRes + } + + abciRes, err = convertTxResponseToDeliverTx(res) + if err != nil { + return sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace) + } + + return abciRes +} + +// convertTxResponseToDeliverTx converts a tx.Response into an abci.ResponseDeliverTx. +func convertTxResponseToDeliverTx(txRes tx.Response) (abci.ResponseDeliverTx, error) { + data, err := makeABCIData(txRes) + if err != nil { + return abci.ResponseDeliverTx{}, err + } + + return abci.ResponseDeliverTx{ + Data: data, + Log: txRes.Log, + Events: txRes.Events, + }, nil +} + +// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. +func makeABCIData(txRes tx.Response) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{MsgResponses: txRes.MsgResponses}) +} +``` + +The implementations are similar for `BaseApp.CheckTx` and `BaseApp.Simulate`. + +`baseapp.txHandler`'s three methods' implementations can obviously be monolithic functions, but for modularity we propose a middleware composition design, where a middleware is simply a function that takes a `tx.Handler`, and returns another `tx.Handler` wrapped around the previous one. + +### Implementing a Middleware + +In practice, middlewares are created by a Go function that takes as arguments some parameters needed for the middleware, and returns a `tx.Middleware`. + +For example, for creating an arbitrary `MyMiddleware`, we can implement: + +```go +// myTxHandler is the tx.Handler of this middleware. Note that it holds a +// reference to the next tx.Handler in the stack.
+type myTxHandler struct { + // next is the next tx.Handler in the middleware stack. + next tx.Handler + // some other fields that are relevant to the middleware can be added here +} + +// NewMyMiddleware returns a middleware that does this and that. +func NewMyMiddleware(arg1, arg2) tx.Middleware { + return func(txh tx.Handler) tx.Handler { + return myTxHandler{ + next: txh, + // optionally, set arg1, arg2... if they are needed in the middleware + } + } +} + +// Assert myTxHandler is a tx.Handler. +var _ tx.Handler = myTxHandler{} + +func (h myTxHandler) CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error) { + // CheckTx specific pre-processing logic + + // run the next middleware + res, checkRes, err := h.next.CheckTx(ctx, req, checkReq) + + // CheckTx specific post-processing logic + + return res, checkRes, err +} + +func (h myTxHandler) DeliverTx(ctx context.Context, req Request) (Response, error) { + // DeliverTx specific pre-processing logic + + // run the next middleware + res, err := h.next.DeliverTx(ctx, req) + + // DeliverTx specific post-processing logic + + return res, err +} + +func (h myTxHandler) SimulateTx(ctx context.Context, req Request) (Response, error) { + // SimulateTx specific pre-processing logic + + // run the next middleware + res, err := h.next.SimulateTx(ctx, req) + + // SimulateTx specific post-processing logic + + return res, err +} +``` + +### Composing Middlewares + +While BaseApp simply holds a reference to a `tx.Handler`, this `tx.Handler` itself is defined using a middleware stack. The Cosmos SDK exposes a base (i.e. innermost) `tx.Handler` called `RunMsgsTxHandler`, which executes messages. + +Then, the app developer can compose multiple middlewares on top of the base `tx.Handler`. Each middleware can run pre- and post-processing logic around its next middleware, as described in the section above.
Conceptually, as an example, given the middlewares `A`, `B`, and `C` and the base `tx.Handler` `H` the stack looks like: + +```text +A.pre + B.pre + C.pre + H # The base tx.handler, for example `RunMsgsTxHandler` + C.post + B.post +A.post +``` + +We define a `ComposeMiddlewares` function for composing middlewares. It takes the base handler as first argument, and middlewares in the "outer to inner" order. For the above stack, the final `tx.Handler` is: + +```go +txHandler := middleware.ComposeMiddlewares(H, A, B, C) +``` + +The middleware is set in BaseApp via its `SetTxHandler` setter: + +```go +// simapp/app.go + +txHandler := middleware.ComposeMiddlewares(...) +app.SetTxHandler(txHandler) +``` + +The app developer can define their own middlewares, or use the Cosmos SDK's pre-defined middlewares from `middleware.NewDefaultTxHandler()`. + +### Middlewares Maintained by the Cosmos SDK + +While the app developer can define and compose the middlewares of their choice, the Cosmos SDK provides a set of middlewares that caters for the ecosystem's most common use cases. These middlewares are: + +| Middleware | Description | +| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| RunMsgsTxHandler | This is the base `tx.Handler`. It replaces the old baseapp's `runMsgs`, and executes a transaction's `Msg`s. | +| TxDecoderMiddleware | This middleware takes in transaction raw bytes, and decodes them into a `sdk.Tx`. It replaces the `baseapp.txDecoder` field, so that BaseApp stays as thin as possible. 
Since most middlewares read the contents of the `sdk.Tx`, the TxDecoderMiddleware should be run first in the middleware stack. | +| {Antehandlers} | Each antehandler is converted to its own middleware. These middlewares perform signature verification, fee deductions and other validations on the incoming transaction. | +| IndexEventsTxMiddleware | This is a simple middleware that chooses which events to index in Tendermint. Replaces `baseapp.indexEvents` (which unfortunately still exists in baseapp too, because it's used to index Begin/EndBlock events) | +| RecoveryTxMiddleware | This middleware recovers from panics. It replaces baseapp.runTx's panic recovery described in [ADR-022](./adr-022-custom-panic-handling.md). | +| GasTxMiddleware | This replaces the [`Setup`](https://github.com/cosmos/cosmos-sdk/blob/v0.43.0/x/auth/ante/setup.go) Antehandler. It sets a GasMeter on sdk.Context. Note that before, GasMeter was set on sdk.Context inside the antehandlers, and there was some mess around the fact that antehandlers had their own panic recovery system so that the GasMeter could be read by baseapp's recovery system. Now, this mess is all removed: one middleware sets GasMeter, another one handles recovery. | + +### Similarities and Differences between Antehandlers and Middlewares + +The middleware-based design builds upon the existing antehandlers design described in [ADR-010](./adr-010-modular-antehandler.md). Even though the final decision of ADR-010 was to go with the "Simple Decorators" approach, the middleware design is actually very similar to the other [Decorator Pattern](./adr-010-modular-antehandler.md#decorator-pattern) proposal, also used in [weave](https://github.com/iov-one/weave). + +#### Similarities with Antehandlers + +* Designed as chaining/composing small modular pieces. +* Allow code reuse for `{Check,Deliver}Tx` and for `Simulate`. +* Set up in `app.go`, and easily customizable by app developers. +* Order is important.
+ +#### Differences with Antehandlers + +* The Antehandlers are run before `Msg` execution, whereas middlewares can run before and after. +* The middleware approach uses separate methods for `{Check,Deliver,Simulate}Tx`, whereas the antehandlers pass a `simulate bool` flag and use the `sdkCtx.Is{Check,Recheck}Tx()` flags to determine which transaction mode we are in. +* The middleware design lets each middleware hold a reference to the next middleware, whereas the antehandlers pass a `next` argument in the `AnteHandle` method. +* The middleware design uses Go's standard `context.Context`, whereas the antehandlers use `sdk.Context`. + +## Consequences + +### Backwards Compatibility + +Since this refactor moves some logic out of BaseApp and into middlewares, it introduces API-breaking changes for app developers. Most notably, instead of creating an antehandler chain in `app.go`, app developers need to create a middleware stack: + +```diff +- anteHandler, err := ante.NewAnteHandler( +- ante.HandlerOptions{ +- AccountKeeper: app.AccountKeeper, +- BankKeeper: app.BankKeeper, +- SignModeHandler: encodingConfig.TxConfig.SignModeHandler(), +- FeegrantKeeper: app.FeeGrantKeeper, +- SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +- }, +-) ++txHandler, err := authmiddleware.NewDefaultTxHandler(authmiddleware.TxHandlerOptions{ ++ Debug: app.Trace(), ++ IndexEvents: indexEvents, ++ LegacyRouter: app.legacyRouter, ++ MsgServiceRouter: app.msgSvcRouter, ++ LegacyAnteHandler: anteHandler, ++ TxDecoder: encodingConfig.TxConfig.TxDecoder, ++}) +if err != nil { + panic(err) +} +- app.SetAnteHandler(anteHandler) ++ app.SetTxHandler(txHandler) +``` + +Other, more minor API-breaking changes will also be listed in the CHANGELOG. As usual, the Cosmos SDK will provide a release migration document for app developers. + +This ADR does not introduce any state-machine-, client- or CLI-breaking changes.
+ +### Positive + +* Allow custom logic to be run before and after `Msg` execution. This enables the [tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [gas refund](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases, and possibly other ones. +* Make BaseApp more lightweight, and defer complex logic to small modular components. +* Separate paths for `{Check,Deliver,Simulate}Tx` with different return types. This allows for improved readability (replace `if sdkCtx.IsRecheckTx() && !simulate {...}` with separate methods) and more flexibility (e.g. returning a `priority` in `ResponseCheckTx`). + +### Negative + +* It is hard to understand at first glance the state updates that would occur after a middleware runs given the `sdk.Context` and `tx`. A middleware can have an arbitrary number of nested middlewares being called within its function body, each possibly doing some pre- and post-processing before calling the next middleware on the chain. Thus to understand what a middleware is doing, one must also understand what every other middleware further along the chain is doing, and the order of middlewares matters. This can get quite complicated to understand. +* API-breaking changes for app developers. + +### Neutral + +No neutral consequences. + +## Further Discussions + +* [#9934](https://github.com/cosmos/cosmos-sdk/discussions/9934) Decomposing BaseApp's other ABCI methods into middlewares. +* Replace `sdk.Tx` interface with the concrete protobuf Tx type in the `tx.Handler` methods signature. + +## Test Cases + +We update the existing baseapp and antehandlers tests to use the new middleware API, but keep the same test cases and logic, to avoid introducing regressions. Existing CLI tests will also be left untouched. + +For new middlewares, we introduce unit tests. Since middlewares are purposefully small, unit tests are well suited.
+ +## References + +* Initial discussion: https://github.com/cosmos/cosmos-sdk/issues/9585 +* Implementation: [#9920 BaseApp refactor](https://github.com/cosmos/cosmos-sdk/pull/9920) and [#10028 Antehandlers migration](https://github.com/cosmos/cosmos-sdk/pull/10028) + + + +### ADR 046: Module Params + + + +# ADR 046: Module Params + +## Changelog + +* Sep 22, 2021: Initial Draft + +## Status + +Proposed + +## Abstract + +This ADR describes an alternative approach to how Cosmos SDK modules use, interact +with, and store their respective parameters. + +## Context + +Currently, in the Cosmos SDK, modules that require the use of parameters use the +`x/params` module. The `x/params` module works by having modules define parameters, +typically via a simple `Params` structure, and registering that structure in +the `x/params` module via a unique `Subspace` that belongs to the respective +registering module. The registering module then has unique access to its respective +`Subspace`. Through this `Subspace`, the module can get and set its `Params` +structure. + +In addition, the Cosmos SDK's `x/gov` module has direct support for changing +parameters on-chain via a `ParamChangeProposal` governance proposal type, where +stakeholders can vote on suggested parameter changes. + +There are various tradeoffs to using the `x/params` module to manage individual +module parameters. Namely, managing parameters essentially comes for "free" in +that developers only need to define the `Params` struct, the `Subspace`, and the +various auxiliary functions, e.g. `ParamSetPairs`, on the `Params` type. However, +there are some notable drawbacks. These drawbacks include the fact that parameters +are serialized in state via JSON, which is extremely slow. In addition, parameter +changes via `ParamChangeProposal` governance proposals have no way of reading from +or writing to state.
In other words, it is currently not possible to have any +state transitions in the application during an attempt to change param(s). + +## Decision + +We will build off of the alignment of `x/gov` and `x/authz` work per +[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810). Namely, module developers +will create one or more unique parameter data structures that must be serialized +to state. The Param data structures must implement the `sdk.Msg` interface with a respective +Protobuf Msg service method that will validate and update the parameters with all +necessary changes. The `x/gov` module, via the work done in +[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param +messages, which will be handled by Protobuf Msg services. + +Note, it is up to developers to decide how to structure their parameters and +the respective `sdk.Msg` messages. Consider the parameters currently defined in +`x/auth` using the `x/params` module for parameter management: + +```protobuf +message Params { + uint64 max_memo_characters = 1; + uint64 tx_sig_limit = 2; + uint64 tx_size_cost_per_byte = 3; + uint64 sig_verify_cost_ed25519 = 4; + uint64 sig_verify_cost_secp256k1 = 5; +} +``` + +Developers can choose to either create a unique data structure for every field in +`Params` or they can create a single `Params` structure as outlined above in the +case of `x/auth`. + +In the former approach, a `sdk.Msg` would need to be created for every single +field along with a handler. This can become burdensome if there are a lot of +parameter fields. In the latter case, there is only a single data structure and +thus only a single message handler; however, the message handler might have to be +more sophisticated in that it might need to understand what parameters are being +changed vs. what parameters are untouched. + +Params change proposals are made using the `x/gov` module. Execution is done through +`x/authz` authorization to the root `x/gov` module's account.
+ +Continuing to use `x/auth`, we demonstrate a more complete example: + +```go +type Params struct { + MaxMemoCharacters uint64 + TxSigLimit uint64 + TxSizeCostPerByte uint64 + SigVerifyCostED25519 uint64 + SigVerifyCostSecp256k1 uint64 +} + +type MsgUpdateParams struct { + MaxMemoCharacters uint64 + TxSigLimit uint64 + TxSizeCostPerByte uint64 + SigVerifyCostED25519 uint64 + SigVerifyCostSecp256k1 uint64 +} + +type MsgUpdateParamsResponse struct {} + +func (ms msgServer) UpdateParams(goCtx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + + // verification logic... + + // persist params + params := ParamsFromMsg(msg) + ms.SaveParams(ctx, params) + + return &types.MsgUpdateParamsResponse{}, nil +} + +func ParamsFromMsg(msg *types.MsgUpdateParams) Params { + // ... +} +``` + +A gRPC `Service` query should also be provided, for example: + +```protobuf +service Query { + // ... + + rpc Params(QueryParamsRequest) returns (QueryParamsResponse) { + option (google.api.http).get = "/cosmos//v1beta1/params"; + } +} + +message QueryParamsResponse { + Params params = 1 [(gogoproto.nullable) = false]; +} +``` + +## Consequences + +As a result of implementing the module parameter methodology, we gain the ability +for module parameter changes to be stateful and extensible to fit nearly every +application's use case. We will be able to emit events (and trigger hooks registered +to those events using the work proposed in [event hooks](https://github.com/cosmos/cosmos-sdk/discussions/9656)), +call other Msg service methods, or perform migrations. +In addition, there will be significant gains in performance when it comes to reading +and writing parameters from and to state, especially if a specific set of parameters +is read on a consistent basis. + +However, this methodology will require developers to implement more types and +Msg service methods, which can become burdensome if many parameters exist.
In addition, +developers are required to implement the persistence logic of module parameters. +However, this should be trivial. + +### Backwards Compatibility + +The new method for working with module parameters is naturally not backwards +compatible with the existing `x/params` module. However, the `x/params` module will +remain in the Cosmos SDK and will be marked as deprecated with no additional +functionality being added apart from potential bug fixes. Note, the `x/params` +module may be removed entirely in a future release. + +### Positive + +* Module parameters are serialized more efficiently. +* Modules are able to react to parameter changes and perform additional actions. +* Special events can be emitted, allowing hooks to be triggered. + +### Negative + +* Module parameters become slightly more burdensome for module developers: + * Modules are now responsible for persisting and retrieving parameter state. + * Modules are now required to have unique message handlers to handle parameter + changes per unique parameter data structure. + +### Neutral + +* Requires [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810) to be reviewed + and merged. + + + +## References + +* https://github.com/cosmos/cosmos-sdk/pull/9810 +* https://github.com/cosmos/cosmos-sdk/issues/9438 +* https://github.com/cosmos/cosmos-sdk/discussions/9913 + + + +### ADR 047: Extend Upgrade Plan + + + +# ADR 047: Extend Upgrade Plan + +## Changelog + +* Nov 23, 2021: Initial Draft +* May 16, 2023: Proposal ABANDONED. `pre_run` and `post_run` are no longer necessary, and adding the `artifacts` brings minor benefits. + +## Status + +ABANDONED + +## Abstract + +This ADR expands the existing x/upgrade `Plan` proto message to include new fields for defining pre-run and post-run processes within upgrade tooling. +It also defines a structure for providing downloadable artifacts involved in an upgrade.
+ +## Context + +The `upgrade` module in conjunction with Cosmovisor is designed to facilitate and automate a blockchain's transition from one version to another. + +Users submit a software upgrade governance proposal containing an upgrade `Plan`. +The [Plan](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto#L12) currently contains the following fields: + +* `name`: A short string identifying the new version. +* `height`: The chain height at which the upgrade is to be performed. +* `info`: A string containing information about the upgrade. + +The `info` string can be anything. +However, Cosmovisor will try to use the `info` field to automatically download a new version of the blockchain executable. +For the auto-download to work, Cosmovisor expects it to be either a stringified JSON object (with a specific structure defined through documentation), or a URL that will return such JSON. +The JSON object identifies URLs used to download the new blockchain executable for different platforms (OS and Architecture, e.g. "linux/amd64"). +Such a URL can either return the executable file directly or can return an archive containing the executable and possibly other assets. + +If the URL returns an archive, it is decompressed into `{DAEMON_HOME}/cosmovisor/{upgrade name}`. +Then, if `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}` does not exist, but `{DAEMON_HOME}/cosmovisor/{upgrade name}/{DAEMON_NAME}` does, the latter is copied to the former. +If the URL returns something other than an archive, it is downloaded to `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}`. + +If an upgrade height is reached and the new version of the executable isn't available, Cosmovisor will stop running.
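For reference, an `info` value that enables auto-download follows the documented `binaries` structure, keyed by platform. The URLs and checksum values below are placeholders, not real artifacts:

```json
{
  "binaries": {
    "linux/amd64": "https://example.com/myapp-v2.0.0-linux-amd64.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
    "darwin/arm64": "https://example.com/myapp-v2.0.0-darwin-arm64.zip?checksum=sha256:0263829989b6fd954f72baaf2fc64bc2e2f01d692d4de72986ea808f6e99813f"
  }
}
```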
+ +Both `DAEMON_HOME` and `DAEMON_NAME` are [environment variables used to configure Cosmovisor](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md#command-line-arguments-and-environment-variables). + +Currently, there is no mechanism that makes Cosmovisor run a command after the upgraded chain has been restarted. + +The current upgrade process has this timeline: + +1. An upgrade governance proposal is submitted and approved. +1. The upgrade height is reached. +1. The `x/upgrade` module writes the `upgrade_info.json` file. +1. The chain halts. +1. Cosmovisor backs up the data directory (if set up to do so). +1. Cosmovisor downloads the new executable (if not already in place). +1. Cosmovisor executes the `${DAEMON_NAME} pre-upgrade` command. +1. Cosmovisor restarts the app using the new version and the same args originally provided. + +## Decision + +### Protobuf Updates + +We will update the `x/upgrade.Plan` message to provide upgrade instructions. +The upgrade instructions will contain a list of artifacts available for each platform. +It allows for the definition of pre-run and post-run commands. +These commands are not consensus-guaranteed; they will be executed by Cosmovisor (or other) during its upgrade handling. + +```protobuf +message Plan { + // ... (existing fields) + + UpgradeInstructions instructions = 6; +} +``` + +The new `UpgradeInstructions instructions` field MUST be optional. + +```protobuf +message UpgradeInstructions { + string pre_run = 1; + string post_run = 2; + repeated Artifact artifacts = 3; + string description = 4; +} +``` + +All fields in the `UpgradeInstructions` are optional. + +* `pre_run` is a command to run prior to the upgraded chain restarting. + If defined, it will be executed after halting and downloading the new artifact but before restarting the upgraded chain. + The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`.
+ This command MUST behave the same as the current [pre-upgrade](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md) command. + It does not take in any command-line arguments and is expected to terminate with the following exit codes: + + | Exit status code | How it is handled in Cosmovisor | + |------------------|---------------------------------------------------------------------------------------------------------------------| + | `0` | Assumes `pre-upgrade` command executed successfully and continues the upgrade. | + | `1` | Default exit code when `pre-upgrade` command has not been implemented. | + | `30` | `pre-upgrade` command was executed but failed. This fails the entire upgrade. | + | `31` | `pre-upgrade` command was executed but failed. But the command is retried until exit code `1` or `30` is returned. | + If defined, then the app supervisors (e.g. Cosmovisor) MUST NOT run `app pre-run`. + +* `post_run` is a command to run after the upgraded chain has been started. If defined, this command MUST be executed at most once by an upgrading node. + The output and exit code SHOULD be logged but SHOULD NOT affect the running of the upgraded chain. + The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`. +* `artifacts` defines items to be downloaded. + It SHOULD have only one entry per platform. +* `description` contains human-readable information about the upgrade and might contain references to external resources. + It SHOULD NOT be used for structured processing information. + +```protobuf +message Artifact { + string platform = 1; + string url = 2; + string checksum = 3; + string checksum_algo = 4; +} +``` + +* `platform` is a required string that SHOULD be in the format `{OS}/{CPU}`, e.g. `"linux/amd64"`. + The string `"any"` SHOULD also be allowed. + An `Artifact` with a `platform` of `"any"` SHOULD be used as a fallback when a specific `{OS}/{CPU}` entry is not found.
+ That is, if an `Artifact` exists with a `platform` that matches the system's OS and CPU, that should be used; + otherwise, if an `Artifact` exists with a `platform` of `any`, that should be used; + otherwise no artifact should be downloaded. +* `url` is a required URL string that MUST conform to [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt). + A request to this `url` MUST return either an executable file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`. + The URL should not contain a checksum; it should be specified by the `checksum` attribute. +* `checksum` is a checksum of the expected result of a request to the `url`. + It is not required, but is recommended. + If provided, it MUST be a hex-encoded checksum string. + Tools utilizing these `UpgradeInstructions` MUST fail if a `checksum` is provided but is different from the checksum of the result returned by the `url`. +* `checksum_algo` is a string identifying the algorithm used to generate the `checksum`. + Recommended algorithms: `sha256`, `sha512`. + Algorithms also supported (but not recommended): `sha1`, `md5`. + If a `checksum` is provided, a `checksum_algo` MUST also be provided. + +A `url` is not required to contain a `checksum` query parameter. +If the `url` does contain a `checksum` query parameter, the `checksum` and `checksum_algo` fields MUST also be populated, and their values MUST match the value of the query parameter. +For example, if the `url` is `"https://example.com?checksum=md5:d41d8cd98f00b204e9800998ecf8427e"`, then the `checksum` field must be `"d41d8cd98f00b204e9800998ecf8427e"` and the `checksum_algo` field must be `"md5"`. + +### Upgrade Module Updates + +If an upgrade `Plan` does not use the new `UpgradeInstructions` field, existing functionality will be maintained. +The parsing of the `info` field as either a URL or `binaries` JSON will be deprecated.
+During validation, if the `info` field is used as such, a warning will be issued, but not an error. + +We will update the creation of the `upgrade-info.json` file to include the `UpgradeInstructions`. + +We will update the optional validation available via CLI to account for the new `Plan` structure. +We will add the following validation: + +1. If `UpgradeInstructions` are provided: + 1. There MUST be at least one entry in `artifacts`. + 1. All of the `artifacts` MUST have a unique `platform`. + 1. For each `Artifact`, if the `url` contains a `checksum` query parameter: + 1. The `checksum` query parameter value MUST be in the format of `{checksum_algo}:{checksum}`. + 1. The `{checksum}` from the query parameter MUST equal the `checksum` provided in the `Artifact`. + 1. The `{checksum_algo}` from the query parameter MUST equal the `checksum_algo` provided in the `Artifact`. +1. The following validation is currently done using the `info` field. We will apply similar validation to the `UpgradeInstructions`. + For each `Artifact`: + 1. The `platform` MUST have the format `{OS}/{CPU}` or be `"any"`. + 1. The `url` field MUST NOT be empty. + 1. The `url` field MUST be a proper URL. + 1. A `checksum` MUST be provided either in the `checksum` field or as a query parameter in the `url`. + 1. If the `checksum` field has a value and the `url` also has a `checksum` query parameter, the two values MUST be equal. + 1. The `url` MUST return either a file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`. + 1. If a `checksum` is provided (in the field or as a query param), the checksum of the result of the `url` MUST equal the provided checksum. + +Downloading of an `Artifact` will happen the same way that URLs from `info` are currently downloaded. + +### Cosmovisor Updates + +If the `upgrade-info.json` file does not contain any `UpgradeInstructions`, existing functionality will be maintained. 
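The checksum query-parameter rules above can be sketched in Go. This is an illustrative check only: the plain `Artifact` struct and the `validateChecksumParam` helper are hypothetical names, not the actual x/upgrade code.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// Artifact mirrors the proposed proto message as a plain Go struct.
type Artifact struct {
	Platform     string
	URL          string
	Checksum     string
	ChecksumAlgo string
}

// validateChecksumParam enforces the rule: if the URL carries a checksum
// query parameter, it must have the form {checksum_algo}:{checksum} and
// match the Artifact's checksum fields.
func validateChecksumParam(a Artifact) error {
	u, err := url.Parse(a.URL)
	if err != nil {
		return err
	}
	param := u.Query().Get("checksum")
	if param == "" {
		return nil // no query parameter: nothing to cross-check
	}
	algo, sum, ok := strings.Cut(param, ":")
	if !ok {
		return fmt.Errorf("checksum param %q is not in {checksum_algo}:{checksum} form", param)
	}
	if algo != a.ChecksumAlgo || sum != a.Checksum {
		return fmt.Errorf("checksum query param does not match artifact fields")
	}
	return nil
}

func main() {
	a := Artifact{
		Platform:     "linux/amd64",
		URL:          "https://example.com/app.zip?checksum=md5:d41d8cd98f00b204e9800998ecf8427e",
		Checksum:     "d41d8cd98f00b204e9800998ecf8427e",
		ChecksumAlgo: "md5",
	}
	fmt.Println(validateChecksumParam(a) == nil)
}
```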
+ +We will update Cosmovisor to look for and handle the new `UpgradeInstructions` in `upgrade-info.json`. +If the `UpgradeInstructions` are provided, we will do the following: + +1. The `info` field will be ignored. +1. The `artifacts` field will be used to identify the artifact to download based on the `platform` that Cosmovisor is running in. +1. If a `checksum` is provided (either in the field or as a query param in the `url`), and the downloaded artifact has a different checksum, the upgrade process will be interrupted and Cosmovisor will exit with an error. +1. If a `pre_run` command is defined, it will be executed at the same point in the process where the `app pre-upgrade` command would have been executed. + It will be executed using the same environment as other commands run by Cosmovisor. +1. If a `post_run` command is defined, it will be executed after executing the command that restarts the chain. + It will be executed in a background process using the same environment as the other commands. + Any output generated by the command will be logged. + Once complete, the exit code will be logged. + +We will deprecate the use of the `info` field for anything other than human-readable information. +A warning will be logged if the `info` field is used to define the assets (either by URL or JSON). + +The new upgrade timeline is very similar to the current one. Changes are in bold: + +1. An upgrade governance proposal is submitted and approved. +1. The upgrade height is reached. +1. The `x/upgrade` module writes the `upgrade_info.json` file **(now possibly with `UpgradeInstructions`)**. +1. The chain halts. +1. Cosmovisor backs up the data directory (if set up to do so). +1. Cosmovisor downloads the new executable (if not already in place). +1. Cosmovisor executes **the `pre_run` command if provided**, or else the `${DAEMON_NAME} pre-upgrade` command. +1. Cosmovisor restarts the app using the new version and the same args originally provided. +1.
**Cosmovisor immediately runs the `post_run` command in a detached process.** + +## Consequences + +### Backwards Compatibility + +Since the only change to existing definitions is the addition of the `instructions` field to the `Plan` message, and that field is optional, there are no backwards incompatibilities with respect to the proto messages. +Additionally, current behavior will be maintained when no `UpgradeInstructions` are provided, so there are no backwards incompatibilities with respect to either the upgrade module or Cosmovisor. + +### Forwards Compatibility + +In order to utilize the `UpgradeInstructions` as part of a software upgrade, both of the following must be true: + +1. The chain must already be using a sufficiently advanced version of the Cosmos SDK. +1. The chain's nodes must be using a sufficiently advanced version of Cosmovisor. + +### Positive + +1. The structure for defining artifacts is clearer since it is now defined in the proto instead of in documentation. +1. Availability of a pre-run command becomes more obvious. +1. A post-run command becomes possible. + +### Negative + +1. The `Plan` message becomes larger. This is negligible because A) the `x/upgrade` module only stores at most one upgrade plan, and B) upgrades are rare enough that the increased gas cost isn't a concern. +1. There is no option for providing a URL that will return the `UpgradeInstructions`. +1. The only way to provide multiple assets (executables and other files) for a platform is to use an archive as the platform's artifact. + +### Neutral + +1. Existing functionality of the `info` field is maintained when the `UpgradeInstructions` aren't provided. + +## Further Discussions + +1.
[Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r698708349): + Consider different names for `UpgradeInstructions instructions` (either the message type or field name). +1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r754655072): + 1. Consider putting the `string platform` field inside `UpgradeInstructions` and make `UpgradeInstructions` a repeated field in `Plan`. + 1. Consider using a `oneof` field in the `Plan` which could either be `UpgradeInstructions` or else a URL that should return the `UpgradeInstructions`. + 1. Consider allowing `info` to either be a JSON serialized version of `UpgradeInstructions` or else a URL that returns that. +1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r755462876): + Consider not including the `UpgradeInstructions.description` field, using the `info` field for that purpose instead. +1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r754643691): + Consider allowing multiple artifacts to be downloaded for any given `platform` by adding a `name` field to the `Artifact` message. +1. [PR #10502 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288) + Allow the new `UpgradeInstructions` to be provided via URL. +1. 
[PR #10502 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288) + Allow definition of a `signer` for assets (as an alternative to using a `checksum`). + +## References + +* [Current upgrade.proto](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto) +* [Upgrade Module README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/x/upgrade/spec/README.md) +* [Cosmovisor README](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md) +* [Pre-upgrade README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md) +* [Draft/POC PR #10032](https://github.com/cosmos/cosmos-sdk/pull/10032) +* [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt) + + + +### ADR 048: Multi Tier Gas Price System + + + +# ADR 048: Multi Tier Gas Price System + +## Changelog + +* Dec 1, 2021: Initial Draft + +## Status + +Rejected + +## Abstract + +This ADR describes a flexible mechanism to maintain consensus-level gas prices, in which one can choose a multi-tier gas price system or an EIP-1559-like one through configuration. + +## Context + +Currently, each validator configures its own `minimal-gas-prices` in `app.toml`. But setting a proper minimal gas price is critical to protect the network from DoS attacks, and it's hard for all the validators to pick a sensible value, so we propose to maintain a gas price at the consensus level. + +Since Tendermint 0.34.20 supports mempool prioritization, we can take advantage of that to implement a more sophisticated gas fee system. + +## Multi-Tier Price System + +We propose a multi-tier price system on consensus to provide maximum flexibility: + +* Tier 1: a constant gas price, which could only be modified occasionally through a governance proposal. +* Tier 2: a dynamic gas price which is adjusted according to previous block load.
+* Tier 3: a dynamic gas price which is adjusted according to previous block load at a higher speed. + +The gas price of a higher tier should be greater than that of a lower tier. + +The transaction fees are charged with the exact gas price calculated on consensus. + +The parameter schema looks like this: + +```protobuf +message TierParams { + uint32 priority = 1; // priority in tendermint mempool + Coin initial_gas_price = 2; + uint32 parent_gas_target = 3; // the target saturation of block + uint32 change_denominator = 4; // decides the change speed + Coin min_gas_price = 5; // optional lower bound of the price adjustment + Coin max_gas_price = 6; // optional upper bound of the price adjustment +} + +message Params { + repeated TierParams tiers = 1; +} +``` + +### Extension Options + +We need to allow users to specify the tier of service for a transaction. To support this in an extensible way, we add an extension option in `AuthInfo`: + +```protobuf +message ExtensionOptionsTieredTx { + uint32 fee_tier = 1; +} +``` + +The value of `fee_tier` is just the index into the `tiers` parameter list. + +We also change the semantics of the existing `fee` field of `Tx`: instead of charging the user the exact `fee` amount, we treat it as a fee cap, while the actual amount of fee charged is decided dynamically. If the `fee` is smaller than the dynamic one, the transaction won't be included in the current block and ideally should stay in the mempool until the consensus gas price drops. The mempool can eventually prune old transactions. + +### Tx Prioritization + +Transactions are prioritized based on the tier: the higher the tier, the higher the priority. + +Within the same tier, transactions follow the default Tendermint order (currently FIFO). Be aware that the mempool tx ordering logic is not part of consensus and can be modified by a malicious validator.
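As a hedged sketch of the tier-to-priority lookup described above (the plain `TierParams` struct and the fallback-to-tier-0 behavior for out-of-range values are illustrative assumptions, not part of the proposal text):

```go
package main

import "fmt"

// TierParams mirrors the proposed parameter schema (other fields elided).
type TierParams struct {
	Priority uint32
}

// priorityForTier looks up the mempool priority for a tx's fee_tier.
// Out-of-range values fall back to tier 0 (an assumption for this sketch).
func priorityForTier(tiers []TierParams, feeTier uint32) uint32 {
	if int(feeTier) >= len(tiers) {
		feeTier = 0
	}
	return tiers[feeTier].Priority
}

func main() {
	tiers := []TierParams{{Priority: 0}, {Priority: 10}, {Priority: 20}}
	fmt.Println(priorityForTier(tiers, 2), priorityForTier(tiers, 9))
}
```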
+ +This mechanism can be easily composed with prioritization mechanisms: + +* we can add extra tiers outside of a user's control: + * Example 1: user can set tier 0, 10 or 20, but the protocol will create tiers 0, 1, 2 ... 29. For example, IBC transactions will go to tier `user_tier + 5`: if the user selected tier 10, then the transaction will go to tier 15. + * Example 2: we can reserve tier 4, 5, ... only for special transaction types. For example, tier 5 is reserved for evidence tx. So if a user submits a bank.Send transaction and sets tier 5, it will be delegated to tier 3 (the max tier level available for any transaction). + * Example 3: we can enforce that all transactions of a specific type will go to a specific tier. For example, tier 100 will be reserved for evidence transactions and all evidence transactions will always go to that tier. + +### `min-gas-prices` + +Deprecate the current per-validator `min-gas-prices` configuration, since it would be confusing for it to work together with the consensus gas price.
+ +### Adjust For Block Load + +For tier 2 and tier 3 transactions, the gas price is adjusted according to the previous block load; the logic could be similar to EIP-1559: + +```python +def adjust_gas_price(gas_price, parent_gas_used, tier): + if parent_gas_used == tier.parent_gas_target: + return gas_price + elif parent_gas_used > tier.parent_gas_target: + gas_used_delta = parent_gas_used - tier.parent_gas_target + gas_price_delta = max(gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed, 1) + return gas_price + gas_price_delta + else: + gas_used_delta = tier.parent_gas_target - parent_gas_used + gas_price_delta = gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed + return gas_price - gas_price_delta +``` + +### Block Segment Reservation + +Ideally we should reserve block segments for each tier, so the lower-tier transactions won't be completely squeezed out by higher-tier transactions, which would force users to use a higher tier and degrade the system to a single tier. + +We need help from Tendermint to implement this. + +## Implementation + +We can make each tier's gas price strategy fully configurable in protocol parameters, while providing a sensible default one. + +Pseudocode in python-like syntax: + +```python +interface TieredTx: + def tier(self) -> int: + pass + +def tx_tier(tx): + if isinstance(tx, TieredTx): + return tx.tier() + else: + # default tier for custom transactions + return 0 + # NOTE: we can add more rules here per "Tx Prioritization" section + +class TierParams: + 'gas price strategy parameters of one tier' + priority: int # priority in tendermint mempool + initial_gas_price: Coin + parent_gas_target: int + change_speed: Decimal # 0 means don't adjust for block load.
+ +class Params: + 'protocol parameters' + tiers: List[TierParams] + +class State: + 'consensus state' + # total gas used in last block, None when it's the first block + parent_gas_used: Optional[int] + # gas prices of last block for all tiers + gas_prices: List[Coin] + +def begin_block(): + 'Adjust gas prices' + for i, tier in enumerate(Params.tiers): + if State.parent_gas_used is None: + # initialize gas price for the first block + State.gas_prices[i] = tier.initial_gas_price + else: + # adjust gas price according to gas used in previous block + State.gas_prices[i] = adjust_gas_price(State.gas_prices[i], State.parent_gas_used, tier) + +def mempoolFeeTxHandler_checkTx(ctx, tx): + # the minimal-gas-price configured by validator, zero in deliver_tx context + validator_price = ctx.MinGasPrice() + consensus_price = State.gas_prices[tx_tier(tx)] + min_price = max(validator_price, consensus_price) + + # zero means infinity for gas price cap + if tx.gas_price() > 0 and tx.gas_price() < min_price: + return 'insufficient fees' + return next_CheckTx(ctx, tx) + +def txPriorityHandler_checkTx(ctx, tx): + res, err = next_CheckTx(ctx, tx) + # pass priority to tendermint + res.Priority = Params.tiers[tx_tier(tx)].priority + return res, err + +def end_block(): + 'Update block gas used' + State.parent_gas_used = block_gas_meter.consumed() +``` + +### DoS attack protection + +To fully saturate the blocks and prevent other transactions from executing, an attacker would need to use transactions of the highest tier, at a cost significantly higher than the default tier. + +If an attacker spams with lower-tier transactions, users can mitigate this by sending higher-tier transactions. + +## Consequences + +### Backwards Compatibility + +* New protocol parameters. +* New consensus states. +* New/changed fields in transaction body. + +### Positive + +* The default tier keeps the same predictable gas price experience for clients. +* The higher tier's gas price can adapt to block load.
+* No priority conflict with custom priority based on transaction types, since this proposal only occupies three priority levels. +* Possibility to compose different priority rules with tiers. + +### Negative + +* Wallets & tools need to update to support the new `tier` parameter, and the semantics of the `fee` field are changed. + +### Neutral + +## References + +* https://eips.ethereum.org/EIPS/eip-1559 +* https://iohk.io/en/blog/posts/2021/11/26/network-traffic-and-tiered-pricing/ + + + +### ADR 049: State Sync Hooks + + + +# ADR 049: State Sync Hooks + +## Changelog + +* Jan 19, 2022: Initial Draft +* Apr 29, 2022: Safer extension snapshotter interface + +## Status + +Implemented + +## Abstract + +This ADR outlines a hooks-based mechanism for application modules to provide additional state (outside of the IAVL tree) to be used +during state sync. + +## Context + +New clients use state-sync to download snapshots of module state from peers. Currently, the snapshot consists of a +stream of `SnapshotStoreItem` and `SnapshotIAVLItem`, which means that application modules that define their state outside of the IAVL +tree cannot include their state as part of the state-sync process. + +Note, even though the module state data is outside of the tree, for determinism we require that the hash of the external data +be posted in the IAVL tree. + +## Decision + +A simple proposal based on our existing implementation is that we can add two new message types: `SnapshotExtensionMeta` +and `SnapshotExtensionPayload`, and they are appended to the existing multi-store stream with `SnapshotExtensionMeta` +acting as a delimiter between extensions. As the chunk hashes should be able to ensure data integrity, we don't need +a delimiter to mark the end of the snapshot stream. + +Besides, we provide `Snapshotter` and `ExtensionSnapshotter` interfaces for modules to implement snapshotters, which will handle both taking +snapshots and restoring them.
Each module could have multiple snapshotters, and modules with additional state should +implement `ExtensionSnapshotter` as extension snapshotters. When setting up the application, the snapshot `Manager` should call +`RegisterExtensions([]ExtensionSnapshotter…)` to register all the extension snapshotters. + +```protobuf +// SnapshotItem is an item contained in a rootmulti.Store snapshot. +// On top of the existing SnapshotStoreItem and SnapshotIAVLItem, we add two new options for the item. +message SnapshotItem { + // item is the specific type of snapshot item. + oneof item { + SnapshotStoreItem store = 1; + SnapshotIAVLItem iavl = 2 [(gogoproto.customname) = "IAVL"]; + SnapshotExtensionMeta extension = 3; + SnapshotExtensionPayload extension_payload = 4; + } +} + +// SnapshotExtensionMeta contains metadata about an external snapshotter. +// One module may need multiple snapshotters, so each module may have multiple SnapshotExtensionMeta. +message SnapshotExtensionMeta { + // the name of the ExtensionSnapshotter; it is registered to the snapshotter manager when setting up the application + // name should be unique for each ExtensionSnapshotter as we need to alphabetically order their snapshots to get + // a deterministic snapshot stream. + string name = 1; + // this is used by each ExtensionSnapshotter to decide the format of payloads included in SnapshotExtensionPayload message + // it is used within the snapshotter's namespace, not a global one for all modules + uint32 format = 2; +} + +// SnapshotExtensionPayload contains payloads of an external snapshotter. +message SnapshotExtensionPayload { + bytes payload = 1; +} +``` + +When we create a snapshot stream, the `multistore` snapshot is always placed at the beginning of the binary stream, and other extension snapshots are alphabetically ordered by the name of the corresponding `ExtensionSnapshotter`.
+
+The snapshot stream looks as follows:
+
+```go
+// multi-store snapshot
+{SnapshotStoreItem | SnapshotIAVLItem, ...}
+// extension1 snapshot
+SnapshotExtensionMeta
+{SnapshotExtensionPayload, ...}
+// extension2 snapshot
+SnapshotExtensionMeta
+{SnapshotExtensionPayload, ...}
+```
+
+We add an `extensions` field to the snapshot `Manager` for extension snapshotters. The `multistore` snapshotter is special: it doesn't need a name because it is always placed at the beginning of the binary stream.
+
+```go
+type Manager struct {
+	store              *Store
+	multistore         types.Snapshotter
+	extensions         map[string]types.ExtensionSnapshotter
+	mtx                sync.Mutex
+	operation          operation
+	chRestore          chan<- io.ReadCloser
+	chRestoreDone      <-chan restoreDone
+	restoreChunkHashes [][]byte
+	restoreChunkIndex  uint32
+}
+```
+
+For extension snapshotters that implement the `ExtensionSnapshotter` interface, their names should be registered with the snapshot `Manager` by
+calling `RegisterExtensions` when setting up the application. The snapshotters handle both taking snapshots and restoration.
+
+```go
+// RegisterExtensions registers extension snapshotters with the manager
+func (m *Manager) RegisterExtensions(extensions ...types.ExtensionSnapshotter) error
+```
+
+On top of the existing `Snapshotter` interface for the `multistore`, we add an `ExtensionSnapshotter` interface for the extension snapshotters. Three more function signatures: `SnapshotFormat()`, `SupportedFormats()` and `SnapshotName()` are added to `ExtensionSnapshotter`.
+
+```go
+// ExtensionPayloadReader reads extension payloads;
+// it returns io.EOF when it reaches either the end of the stream or an extension boundary.
+type ExtensionPayloadReader = func() ([]byte, error)
+
+// ExtensionPayloadWriter is a helper to write extension payloads to the underlying stream.
+type ExtensionPayloadWriter = func([]byte) error
+
+// ExtensionSnapshotter is an extension Snapshotter that is appended to the snapshot stream.
+// ExtensionSnapshotter has a unique name and manages its own internal formats.
+type ExtensionSnapshotter interface {
+	// SnapshotName returns the name of the snapshotter; it should be unique in the manager.
+	SnapshotName() string
+
+	// SnapshotFormat returns the default format used to take a snapshot.
+	SnapshotFormat() uint32
+
+	// SupportedFormats returns a list of formats it can restore from.
+	SupportedFormats() []uint32
+
+	// SnapshotExtension writes extension payloads into the underlying protobuf stream.
+	SnapshotExtension(height uint64, payloadWriter ExtensionPayloadWriter) error
+
+	// RestoreExtension restores an extension state snapshot;
+	// the payload reader returns `io.EOF` when it reaches the extension boundary.
+	RestoreExtension(height uint64, format uint32, payloadReader ExtensionPayloadReader) error
+}
+```
+
+## Consequences
+
+As a result of this implementation, we are able to create binary chunk stream snapshots of the state that we maintain outside of the IAVL tree, CosmWasm blobs for example, and new clients are able to fetch snapshots of state from peer nodes for all modules that implement the corresponding interface.
+
+
+### Backwards Compatibility
+
+This ADR introduces new proto message types, adds an `extensions` field in the snapshot `Manager`, and adds a new `ExtensionSnapshotter` interface, so it is not backwards compatible if we have extensions.
+
+But for applications that do not have state data outside of the IAVL tree for any module, the snapshot stream is backwards-compatible.
+
+### Positive
+
+* State maintained outside of the IAVL tree, such as CosmWasm blobs, can be snapshotted by implementing extension snapshotters, and fetched by new clients via state-sync.
+
+### Negative
+
+### Neutral
+
+* All modules that maintain state outside of the IAVL tree need to implement `ExtensionSnapshotter`, and the snapshot `Manager` needs to call `RegisterExtensions` when setting up the application.
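To make the interface above concrete, here is a minimal, self-contained sketch of an extension snapshotter for a hypothetical `foo` module, using the `ExtensionPayloadReader`/`ExtensionPayloadWriter` types from the interface. Names such as `fooSnapshotter` and `roundTrip` are illustrative only, not SDK code:

```go
package main

import (
	"fmt"
	"io"
)

// Types from the interface above, reproduced so the sketch is self-contained.
type ExtensionPayloadReader = func() ([]byte, error)
type ExtensionPayloadWriter = func([]byte) error

// fooSnapshotter is a hypothetical snapshotter for a module that keeps
// state (here, raw blobs) outside of the IAVL tree.
type fooSnapshotter struct {
	blobs [][]byte
}

func (s *fooSnapshotter) SnapshotName() string       { return "foo" } // must be unique in the manager
func (s *fooSnapshotter) SnapshotFormat() uint32     { return 1 }
func (s *fooSnapshotter) SupportedFormats() []uint32 { return []uint32{1} }

// SnapshotExtension writes each blob as one extension payload.
func (s *fooSnapshotter) SnapshotExtension(height uint64, w ExtensionPayloadWriter) error {
	for _, b := range s.blobs {
		if err := w(b); err != nil {
			return err
		}
	}
	return nil
}

// RestoreExtension reads payloads until io.EOF marks the extension boundary.
func (s *fooSnapshotter) RestoreExtension(height uint64, format uint32, r ExtensionPayloadReader) error {
	if format != 1 {
		return fmt.Errorf("unsupported format %d", format)
	}
	s.blobs = nil
	for {
		b, err := r()
		if err == io.EOF {
			return nil // extension boundary reached
		}
		if err != nil {
			return err
		}
		s.blobs = append(s.blobs, b)
	}
}

// roundTrip snapshots src into an in-memory payload list and restores it
// into a fresh snapshotter, mimicking what the snapshot manager does.
func roundTrip(src *fooSnapshotter) (*fooSnapshotter, error) {
	var payloads [][]byte
	if err := src.SnapshotExtension(100, func(b []byte) error {
		payloads = append(payloads, b)
		return nil
	}); err != nil {
		return nil, err
	}
	dst, i := &fooSnapshotter{}, 0
	err := dst.RestoreExtension(100, 1, func() ([]byte, error) {
		if i == len(payloads) {
			return nil, io.EOF
		}
		i++
		return payloads[i-1], nil
	})
	return dst, err
}

func main() {
	dst, err := roundTrip(&fooSnapshotter{blobs: [][]byte{[]byte("a"), []byte("b")}})
	fmt.Println(err, len(dst.blobs)) // <nil> 2
}
```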
+ +## Further Discussions + +While an ADR is in the DRAFT or PROPOSED stage, this section should contain a summary of issues to be solved in future iterations (usually referencing comments from a pull-request discussion). +Later, this section can optionally list ideas or improvements the author or reviewers found during the analysis of this ADR. + +## Test Cases [optional] + +Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable. + +## References + +* https://github.com/cosmos/cosmos-sdk/pull/10961 +* https://github.com/cosmos/cosmos-sdk/issues/7340 +* https://hackmd.io/gJoyev6DSmqqkO667WQlGw + + + +### ADR 050: SIGN_MODE_TEXTUAL + + + +# ADR 050: SIGN_MODE_TEXTUAL + +## Changelog + +* Dec 06, 2021: Initial Draft. +* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team. +* May 16, 2022: Change status to Accepted. +* Aug 11, 2022: Require signing over tx raw bytes. +* Sep 07, 2022: Add custom `Msg`-renderers. +* Sep 18, 2022: Structured format instead of lines of text +* Nov 23, 2022: Specify CBOR encoding. +* Dec 01, 2022: Link to examples in separate JSON file. +* Dec 06, 2022: Re-ordering of envelope screens. +* Dec 14, 2022: Mention exceptions for invertibility. +* Jan 23, 2023: Switch Screen.Text to Title+Content. +* Mar 07, 2023: Change SignDoc from array to struct containing array. +* Mar 20, 2023: Introduce a spec version initialized to 0. + +## Status + +Accepted. Implementation started. Small value renderers details still need to be polished. + +Spec version: 0. + +## Abstract + +This ADR specifies SIGN_MODE_TEXTUAL, a new string-based sign mode that is targeted at signing with hardware devices. + +## Context + +Protobuf-based SIGN_MODE_DIRECT was introduced in [ADR-020](./adr-020-protobuf-transaction-encoding.md) and is intended to replace SIGN_MODE_LEGACY_AMINO_JSON in most situations, such as mobile wallets and CLI keyrings. 
However, the [Ledger](https://www.ledger.com/) hardware wallet is still using SIGN_MODE_LEGACY_AMINO_JSON for displaying the sign bytes to the user. Hardware wallets cannot transition to SIGN_MODE_DIRECT as: + +* SIGN_MODE_DIRECT is binary-based and thus not suitable for display to end-users. Technically, hardware wallets could simply display the sign bytes to the user. But this would be considered as blind signing, and is a security concern. +* hardware cannot decode the protobuf sign bytes due to memory constraints, as the Protobuf definitions would need to be embedded on the hardware device. + +In an effort to remove Amino from the SDK, a new sign mode needs to be created for hardware devices. [Initial discussions](https://github.com/cosmos/cosmos-sdk/issues/6513) propose a text-based sign mode, which this ADR formally specifies. + +## Decision + +In SIGN_MODE_TEXTUAL, a transaction is rendered into a textual representation, +which is then sent to a secure device or subsystem for the user to review and sign. +Unlike `SIGN_MODE_DIRECT`, the transmitted data can be simply decoded into legible text +even on devices with limited processing and display. + +The textual representation is a sequence of _screens_. +Each screen is meant to be displayed in its entirety (if possible) even on a small device like a Ledger. +A screen is roughly equivalent to a short line of text. +Large screens can be displayed in several pieces, +much as long lines of text are wrapped, +so no hard guidance is given, though 40 characters is a good target. +A screen is used to display a single key/value pair for scalar values +(or composite values with a compact notation, such as `Coins`) +or to introduce or conclude a larger grouping. + +The text can contain the full range of Unicode code points, including control characters and nul. +The device is responsible for deciding how to display characters it cannot render natively. +See [annex 2](./adr-050-sign-mode-textual-annex2.md) for guidance. 
+ +Screens have a non-negative indentation level to signal composite or nested structures. +Indentation level zero is the top level. +Indentation is displayed via some device-specific mechanism. +Message quotation notation is an appropriate model, such as +leading `>` characters or vertical bars on more capable displays. + +Some screens are marked as _expert_ screens, +meant to be displayed only if the viewer chooses to opt in for the extra detail. +Expert screens are meant for information that is rarely useful, +or needs to be present only for signature integrity (see below). + +### Invertible Rendering + +We require that the rendering of the transaction be invertible: +there must be a parsing function such that for every transaction, +when rendered to the textual representation, +parsing that representation yields a proto message equivalent +to the original under proto equality. + +Note that this inverse function does not need to perform correct +parsing or error signaling for the whole domain of textual data. +Merely that the range of valid transactions be invertible under +the composition of rendering and parsing. + +Note that the existence of an inverse function ensures that the +rendered text contains the full information of the original transaction, +not a hash or subset. + +We make an exception for invertibility for data which are too large to +meaningfully display, such as byte strings longer than 32 bytes. We may then +selectively render them with a cryptographically-strong hash. In these cases, +it is still computationally infeasible to find a different transaction which +has the same rendering. However, we must ensure that the hash computation is +simple enough to be reliably executed independently, so at least the hash is +itself reasonably verifiable when the raw byte string is not. + +### Chain State + +The rendering function (and parsing function) may depend on the current chain state. 
+This is useful for reading parameters, such as coin display metadata,
+or for reading user-specific preferences such as language or address aliases.
+Note that if the observed state changes between signature generation
+and the transaction's inclusion in a block, the delivery-time rendering
+might differ. If so, the signature will be invalid and the transaction
+will be rejected.
+
+### Signature and Security
+
+For security, transaction signatures should have three properties:
+
+1. Given the transaction, signatures, and chain state, it must be possible to validate that the signatures match the transaction,
+verifying that the signers knew their respective secret keys.
+
+2. It must be computationally infeasible to find a substantially different transaction for which the given signatures are valid, given the same chain state.
+
+3. The user should be able to give informed consent to the signed data via a simple, secure device with limited display capabilities.
+
+The correctness and security of `SIGN_MODE_TEXTUAL` are guaranteed by demonstrating an inverse function from the rendering to transaction protos.
+This means that it is impossible for a different protocol buffer message to render to the same text.
+
+### Transaction Hash Malleability
+
+When client software forms a transaction, the "raw" transaction (`TxRaw`) is serialized as a proto
+and a hash of the resulting byte sequence is computed.
+This is the `TxHash`, and is used by various services to track the submitted transaction through its lifecycle.
+Various misbehavior is possible if one can generate a modified transaction with a different TxHash
+but for which the signature still checks out.
+
+SIGN_MODE_TEXTUAL prevents this transaction malleability by including the TxHash as an expert screen
+in the rendering.
+
+### SignDoc
+
+The SignDoc for `SIGN_MODE_TEXTUAL` is formed from a data structure like:
+
+```go
+type Screen struct {
+	Title   string // possibly size-limited; 64 characters advised
+	Content string // possibly size-limited; 255 characters advised
+	Indent  uint8  // size-limited to something small like 16 or 32
+	Expert  bool
+}
+
+type SignDocTextual struct {
+	Screens []Screen
+}
+```
+
+We do not plan to use protobuf serialization to form the sequence of bytes
+that will be transmitted and signed, in order to keep the decoder simple.
+We will use [CBOR](https://cbor.io) ([RFC 8949](https://www.rfc-editor.org/rfc/rfc8949.html)) instead.
+The encoding is defined by the following CDDL ([RFC 8610](https://www.rfc-editor.org/rfc/rfc8610)):
+
+```
+;;; CDDL (RFC 8610) Specification of SignDoc for SIGN_MODE_TEXTUAL.
+;;; Must be encoded using CBOR deterministic encoding (RFC 8949, section 4.2.1).
+
+;; A Textual document is a struct containing one field: an array of screens.
+sign_doc = {
+  screens_key: [* screen],
+}
+
+;; The key is an integer to keep the encoding small.
+screens_key = 1
+
+;; A screen consists of a title, a content string, an indentation, and the expert flag,
+;; represented as an integer-keyed map. All entries are optional
+;; and MUST be omitted from the encoding if empty, zero, or false.
+;; Title and content default to the empty string, indent defaults to zero,
+;; and expert defaults to false.
+screen = {
+  ? title_key: tstr,
+  ? content_key: tstr,
+  ? indent_key: uint,
+  ? expert_key: bool,
+}
+
+;; Keys are small integers to keep the encoding small.
+title_key = 1
+content_key = 2
+indent_key = 3
+expert_key = 4
+```
+
+Defining the sign_doc directly as an array of screens was also considered. However, given the possibility of future iterations of this specification, a single-keyed struct was chosen over the array, as structs allow for easier backwards compatibility.
+
+## Details
+
+In the examples that follow, screens will be shown as lines of text,
+indentation is indicated with a leading `>`,
+and expert screens are marked with a leading `*`.
+
+### Encoding of the Transaction Envelope
+
+We define "transaction envelope" as all data in a transaction that is not in the `TxBody.Messages` field. The transaction envelope includes the fee, signer infos, and memo, but doesn't include `Msg`s. `//` denotes comments, which are not shown on the Ledger device.
+
+```
+Chain ID:
+Account number:
+Sequence:
+Address:
+*Public Key:
+This transaction has Message(s) // Pluralize "Message" only when int>1
+> Message (/): // See value renderers for Any rendering.
+End of Message
+Memo: // Skipped if no memo set.
+Fee: // See value renderers for coins rendering.
+*Fee payer: // Skipped if no fee_payer set.
+*Fee granter: // Skipped if no fee_granter set.
+Tip: // Skipped if no tip.
+Tipper:
+*Gas Limit:
+*Timeout Height: // Skipped if no timeout_height set.
+*Other signer: SignerInfo // Skipped if the transaction only has 1 signer.
+*> Other signer (/):
+*End of other signers
+*Extension options: Any: // Skipped if no body extension options
+*> Extension options (/):
+*End of extension options
+*Non critical extension options: Any: // Skipped if no body non critical extension options
+*> Non critical extension options (/):
+*End of Non critical extension options
+*Hash of raw bytes: // Hex encoding of bytes defined, to prevent tx hash malleability.
+```
+
+### Encoding of the Transaction Body
+
+The Transaction Body is the `Tx.TxBody.Messages` field, which is an array of `Any`s, where each `Any` packs a `sdk.Msg`. Since `sdk.Msg`s are widely used, they have a slightly different encoding than the usual array of `Any`s (Protobuf: `repeated google.protobuf.Any`), described in Annex 1.
+
+```
+This transaction has message: // Optional 's' for "message" if there's >1 sdk.Msgs.
+// For each Msg, print the following 2 lines:
+Msg (/): // E.g.
Msg (1/2): bank v1beta1 send coins + +End of transaction messages +``` + +#### Example + +Given the following Protobuf message: + +```protobuf +message Grant { + google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"]; + google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false]; +} + +message MsgGrant { + option (cosmos.msg.v1.signer) = "granter"; + + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +and a transaction containing 1 such `sdk.Msg`, we get the following encoding: + +``` +This transaction has 1 message: +Msg (1/1): authz v1beta1 grant +Granter: cosmos1abc...def +Grantee: cosmos1ghi...jkl +End of transaction messages +``` + +### Custom `Msg` Renderers + +Application developers may choose to not follow default renderer value output for their own `Msg`s. In this case, they can implement their own custom `Msg` renderer. This is similar to [EIP4430](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4430.md), where the smart contract developer chooses the description string to be shown to the end user. + +This is done by setting the `cosmos.msg.textual.v1.expert_custom_renderer` Protobuf option to a non-empty string. This option CAN ONLY be set on a Protobuf message representing transaction message object (implementing `sdk.Msg` interface). + +```protobuf +message MsgFooBar { + // Optional comments to describe in human-readable language the formatting + // rules of the custom renderer. + option (cosmos.msg.textual.v1.expert_custom_renderer) = ""; + + // proto fields +} +``` + +When this option is set on a `Msg`, a registered function will transform the `Msg` into an array of one or more strings, which MAY use the key/value format (described in point #3) with the expert field prefix (described in point #5) and arbitrary indentation (point #6). 
These strings MAY be rendered from a `Msg` field using a default value renderer, or they may be generated from several fields using custom logic. + +The `` is a string convention chosen by the application developer and is used to identify the custom `Msg` renderer. For example, the documentation or specification of this custom algorithm can reference this identifier. This identifier CAN have a versioned suffix (e.g. `_v1`) to adapt for future changes (which would be consensus-breaking). We also recommend adding Protobuf comments to describe in human language the custom logic used. + +Moreover, the renderer must provide 2 functions: one for formatting from Protobuf to string, and one for parsing string to Protobuf. These 2 functions are provided by the application developer. To satisfy point #1, the parse function MUST be the inverse of the formatting function. This property will not be checked by the SDK at runtime. However, we strongly recommend the application developer to include a comprehensive suite in their app repo to test invertibility, as to not introduce security bugs. + +### Require signing over the `TxBody` and `AuthInfo` raw bytes + +Recall that the transaction bytes merkleized on chain are the Protobuf binary serialization of [TxRaw](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.tx.v1beta1#cosmos.tx.v1beta1.TxRaw), which contains the `body_bytes` and `auth_info_bytes`. Moreover, the transaction hash is defined as the SHA256 hash of the `TxRaw` bytes. We require that the user signs over these bytes in SIGN_MODE_TEXTUAL, more specifically over the following string: + +``` +*Hash of raw bytes: +``` + +where: + +* `++` denotes concatenation, +* `HEX` is the hexadecimal representation of the bytes, all in capital letters, no `0x` prefix, +* and `len()` is encoded as a Big-Endian uint64. + +This is to prevent transaction hash malleability. 
Point #1 about invertibility ensures that the transaction `body` and `auth_info` values are not malleable, but the transaction hash might still be malleable with point #1 only, because the SIGN_MODE_TEXTUAL strings don't follow the byte ordering defined in `body_bytes` and `auth_info_bytes`. Without this hash, a malicious validator or exchange could intercept a transaction, modify its transaction hash _after_ the user signed it using SIGN_MODE_TEXTUAL (by tweaking the byte ordering inside `body_bytes` or `auth_info_bytes`), and then submit it to Tendermint.
+
+By including this hash in the SIGN_MODE_TEXTUAL signing payload, we keep the same level of guarantees as [SIGN_MODE_DIRECT](./adr-020-protobuf-transaction-encoding.md).
+
+These bytes are only shown in expert mode, hence the leading `*`.
+
+## Updates to the current specification
+
+The current specification is not set in stone, and future iterations are to be expected. We distinguish two categories of updates to this specification:
+
+1. Updates that require changes to the embedded hardware device application.
+2. Updates that only modify the envelope and the value renderers.
+
+Updates in the 1st category include changes to the `Screen` struct or its corresponding CBOR encoding. This type of update requires modifying the hardware signer application so that it can decode and parse the new types. Backwards compatibility must also be guaranteed, so that the new hardware application works with existing versions of the SDK. These updates require the coordination of multiple parties: SDK developers, hardware application developers (currently: Zondax), and client-side developers (e.g. CosmJS). Furthermore, a new submission of the hardware device application may be necessary, which, depending on the vendor, can take some time. As such, we recommend avoiding this type of update as much as possible.
+
+Updates in the 2nd category include changes to any of the value renderers or to the transaction envelope.
+For example, the ordering of fields in the envelope can be swapped, or the timestamp formatting can be modified. Since SIGN_MODE_TEXTUAL sends `Screen`s to the hardware device, this type of change does not need a hardware wallet application update. Such changes are, however, state-machine-breaking, and must be documented as such. They require the coordination of SDK developers with client-side developers (e.g. CosmJS), so that the updates are released on both sides close to each other in time.
+
+We define a spec version, which is an integer that must be incremented on each update of either category. This spec version will be exposed by the SDK's implementation, and can be communicated to clients. For example, SDK v0.50 might use the spec version 1, and SDK v0.51 might use 2; thanks to this versioning, clients can know how to craft SIGN_MODE_TEXTUAL transactions based on the target SDK version.
+
+The current spec version is defined in the "Status" section, at the top of this document. It is initialized to `0` to allow flexibility in choosing how to define future versions, as it would allow adding a field either in the SignDoc Go struct or in Protobuf in a backwards-compatible way.
+
+## Additional Formatting by the Hardware Device
+
+See [annex 2](./adr-050-sign-mode-textual-annex2.md).
+
+## Examples
+
+1. A minimal MsgSend: [see transaction](https://github.com/cosmos/cosmos-sdk/blob/094abcd393379acbbd043996024d66cd65246fb1/tx/textual/internal/testdata/e2e.json#L2-L70).
+2. A transaction with a bit of everything: [see transaction](https://github.com/cosmos/cosmos-sdk/blob/094abcd393379acbbd043996024d66cd65246fb1/tx/textual/internal/testdata/e2e.json#L71-L270).
+
+The examples above are stored in a JSON file with the following fields:
+
+* `proto`: the representation of the transaction in ProtoJSON,
+* `screens`: the transaction rendered into SIGN_MODE_TEXTUAL screens,
+* `cbor`: the sign bytes of the transaction, which is the CBOR encoding of the screens.
+ +## Consequences + +### Backwards Compatibility + +SIGN_MODE_TEXTUAL is purely additive, and doesn't break any backwards compatibility with other sign modes. + +### Positive + +* Human-friendly way of signing in hardware devices. +* Once SIGN_MODE_TEXTUAL is shipped, SIGN_MODE_LEGACY_AMINO_JSON can be deprecated and removed. On the longer term, once the ecosystem has totally migrated, Amino can be totally removed. + +### Negative + +* Some fields are still encoded in non-human-readable ways, such as public keys in hexadecimal. +* New ledger app needs to be released, still unclear + +### Neutral + +* If the transaction is complex, the string array can be arbitrarily long, and some users might just skip some screens and blind sign. + +## Further Discussions + +* Some details on value renderers need to be polished, see [Annex 1](./adr-050-sign-mode-textual-annex1.md). +* Are ledger apps able to support both SIGN_MODE_LEGACY_AMINO_JSON and SIGN_MODE_TEXTUAL at the same time? +* Open question: should we add a Protobuf field option to allow app developers to overwrite the textual representation of certain Protobuf fields and message? This would be similar to Ethereum's [EIP4430](https://github.com/ethereum/EIPs/pull/4430), where the contract developer decides on the textual representation. +* Internationalization. + +## References + +* [Annex 1](./adr-050-sign-mode-textual-annex1.md) + +* Initial discussion: https://github.com/cosmos/cosmos-sdk/issues/6513 +* Living document used in the working group: https://hackmd.io/fsZAO-TfT0CKmLDtfMcKeA?both +* Working group meeting notes: https://hackmd.io/7RkGfv_rQAaZzEigUYhcXw +* Ethereum's "Described Transactions" https://github.com/ethereum/EIPs/pull/4430 + + + +### ADR 050: SIGN_MODE_TEXTUAL: Annex 1 Value Renderers + + + +# ADR 050: SIGN_MODE_TEXTUAL: Annex 1 Value Renderers + +## Changelog + +* Dec 06, 2021: Initial Draft +* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team. 
+* Dec 01, 2022: Remove `Object: ` prefix on Any header screen. +* Dec 13, 2022: Sign over bytes hash when bytes length > 32. +* Mar 27, 2023: Update `Any` value renderer to omit message header screen. + +## Status + +Accepted. Implementation started. Small value renderers details still need to be polished. + +## Abstract + +This Annex describes value renderers, which are used for displaying Protobuf values in a human-friendly way using a string array. + +## Value Renderers + +Value Renderers describe how values of different Protobuf types should be encoded as a string array. Value renderers can be formalized as a set of bijective functions `func renderT(value T) []string`, where `T` is one of the below Protobuf types for which this spec is defined. + +### Protobuf `number` + +* Applies to: + * protobuf numeric integer types (`int{32,64}`, `uint{32,64}`, `sint{32,64}`, `fixed{32,64}`, `sfixed{32,64}`) + * strings whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec` + * bytes whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec` +* Trailing decimal zeroes are always removed +* Formatting with `'`s for every three integral digits. +* Usage of `.` to denote the decimal delimiter. + +#### Examples + +* `1000` (uint64) -> `1'000` +* `"1000000.00"` (string representing a Dec) -> `1'000'000` +* `"1000000.10"` (string representing a Dec) -> `1'000'000.1` + +### `coin` + +* Applies to `cosmos.base.v1beta1.Coin`. +* Denoms are converted to `display` denoms using `Metadata` (if available). **This requires a state query**. The definition of `Metadata` can be found in the [bank protobuf definition](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.bank.v1beta1#cosmos.bank.v1beta1.Metadata). If the `display` field is empty or nil, then we do not perform any denom conversion. 
+* Amounts are converted to `display` denom amounts and rendered as `number`s above
+  * We do not change the capitalization of the denom. In practice, `display` denoms are stored in lowercase in state (e.g. `10 atom`), however they are often shown in UPPERCASE in everyday life (e.g. `10 ATOM`). Value renderers keep the case used in state, but we may recommend that chains change the denom metadata to be uppercase for better user display.
+* One space between the denom and amount (e.g. `10 atom`).
+* In the future, IBC denoms could perhaps be converted to DID/IIDs, if we can find a robust way of doing this (ex. `cosmos:cosmos:hub:bank:denom:atom`)
+
+#### Examples
+
+* `1000000000uatom` -> `["1'000 atom"]`, because atom is the metadata's display denom.
+
+### `coins`
+
+* An array of `coin` is displayed as the concatenation of each `coin` encoded as per the specification above, joined together with the delimiter `", "` (a comma and a space, no quotes around).
+* The list of coins is ordered by the Unicode code point of the display denom: `A-Z` < `a-z`. For example, the string `aAbBcC` would be sorted `ABCabc`.
+  * If the coins list has 0 items, it is rendered as `zero`.
+
+#### Examples
+
+* `["3cosm", "2000000uatom"]` -> `3 COSM, 2 atom` (assuming the display denoms are `atom` and `COSM`; `C` sorts before `a`)
+* `["10atom", "20Acoin"]` -> `20 Acoin, 10 atom` (assuming the display denoms are `atom` and `Acoin`)
+* `[]` -> `zero`
+
+### `repeated`
+
+* Applies to all `repeated` fields, except `cosmos.tx.v1beta1.TxBody#Messages`, which has a particular encoding (see [ADR-050](./adr-050-sign-mode-textual.md)).
+* A repeated type has the following template:
+
+```
+:
+ (/):
+
+ (/):
+
+End of .
+```
+
+where:
+
+* `field_name` is the Protobuf field name of the repeated field
+* `field_kind`:
+  * if the type of the repeated field is a message, `field_kind` is the message name
+  * if the type of the repeated field is an enum, `field_kind` is the enum name
+  * in any other case, `field_kind` is the protobuf primitive type (e.g. "string" or "bytes")
+* `int` is the length of the array
+* `index` is the one-based index of the repeated field
+
+#### Examples
+
+Given the proto definition:
+
+```protobuf
+message AllowedMsgAllowance {
+  repeated string allowed_messages = 1;
+}
+```
+
+and initializing with:
+
+```go
+x := AllowedMsgAllowance{
+	AllowedMessages: []string{"cosmos.bank.v1beta1.MsgSend", "cosmos.gov.v1.MsgVote"},
+}
+```
+
+we have the following value-rendered encoding:
+
+```
+Allowed messages: 2 strings
+Allowed messages (1/2): cosmos.bank.v1beta1.MsgSend
+Allowed messages (2/2): cosmos.gov.v1.MsgVote
+End of Allowed messages
+```
+
+### `message`
+
+* Applies to all Protobuf messages that do not have a custom encoding.
+* Field names follow [sentence case](https://en.wiktionary.org/wiki/sentence_case)
+  * replace each `_` with a space
+  * capitalize the first letter of the sentence
+* Field names are ordered by their Protobuf field number
+* Screen title is the field name, and screen content is the value.
+* Nesting:
+  * if a field contains a nested message, we value-render the underlying message using the template:
+
+    ```
+    : <1st line of value-rendered message>
+    >  // Notice the `>` prefix.
+    ```
+
+  * the `>` character is used to denote nesting. For each additional level of nesting, add `>`.
+ +#### Examples + +Given the following Protobuf messages: + +```protobuf +enum VoteOption { + VOTE_OPTION_UNSPECIFIED = 0; + VOTE_OPTION_YES = 1; + VOTE_OPTION_ABSTAIN = 2; + VOTE_OPTION_NO = 3; + VOTE_OPTION_NO_WITH_VETO = 4; +} + +message WeightedVoteOption { + VoteOption option = 1; + string weight = 2 [(cosmos_proto.scalar) = "cosmos.Dec"]; +} + +message Vote { + uint64 proposal_id = 1; + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + reserved 3; + repeated WeightedVoteOption options = 4; +} +``` + +we get the following encoding for the `Vote` message: + +``` +Vote object +> Proposal id: 4 +> Voter: cosmos1abc...def +> Options: 2 WeightedVoteOptions +> Options (1/2): WeightedVoteOption object +>> Option: VOTE_OPTION_YES +>> Weight: 0.7 +> Options (2/2): WeightedVoteOption object +>> Option: VOTE_OPTION_NO +>> Weight: 0.3 +> End of Options +``` + +### Enums + +* Show the enum variant name as string. + +#### Examples + +See example above with `message Vote{}`. + +### `google.protobuf.Any` + +* Applies to `google.protobuf.Any` +* Rendered as: + +``` + +> +``` + +There is however one exception: when the underlying message is a Protobuf message that does not have a custom encoding, then the message header screen is omitted, and one level of indentation is removed. + +Messages that have a custom encoding, including `google.protobuf.Timestamp`, `google.protobuf.Duration`, `google.protobuf.Any`, `cosmos.base.v1beta1.Coin`, and messages that have an app-defined custom encoding, will preserve their header and indentation level. 
#### Examples

Message header screen is stripped, one level of indentation removed:

```
/cosmos.gov.v1.Vote
> Proposal id: 4
> Voter: cosmos1abc...def
> Options: 2 WeightedVoteOptions
> Options (1/2): WeightedVoteOption object
>> Option: Yes
>> Weight: 0.7
> Options (2/2): WeightedVoteOption object
>> Option: No
>> Weight: 0.3
> End of Options
```

Message with custom encoding:

```
/cosmos.base.v1beta1.Coin
> 10uatom
```

### `google.protobuf.Timestamp`

Rendered using [RFC 3339](https://www.rfc-editor.org/rfc/rfc3339) (a
simplification of ISO 8601), which is the current recommendation for portable
time values. The rendering always uses "Z" (UTC) as the timezone. It uses only
the necessary fractional digits of a second, omitting the fractional part
entirely if the timestamp has no fractional seconds. (The resulting timestamps
are not automatically sortable by standard lexicographic order, but we favor
the legibility of the shorter string.)

#### Examples

The timestamp with 1136214245 seconds and 700000000 nanoseconds is rendered
as `2006-01-02T15:04:05.7Z`.
The timestamp with 1136214245 seconds and zero nanoseconds is rendered
as `2006-01-02T15:04:05Z`.

### `google.protobuf.Duration`

The duration proto expresses a raw number of seconds and nanoseconds.
This will be rendered as longer time units of days, hours, and minutes,
plus any remaining seconds, in that order.
Leading and trailing zero-quantity units will be omitted, but all
units in between nonzero units will be shown, e.g. `3 days, 0 hours, 0 minutes, 5 seconds`.

Even longer time units such as months or years are imprecise.
Weeks are precise, but not commonly used - `91 days` is more immediately
legible than `13 weeks`. Although `days` can be problematic,
e.g. noon to noon on subsequent days can be 23 or 25 hours depending on
daylight saving transitions, there is significant advantage in using
strict 24-hour days over using only hours (e.g.
`91 days` vs `2184 hours`). + +When nanoseconds are nonzero, they will be shown as fractional seconds, +with only the minimum number of digits, e.g `0.5 seconds`. + +A duration of exactly zero is shown as `0 seconds`. + +Units will be given as singular (no trailing `s`) when the quantity is exactly one, +and will be shown in plural otherwise. + +Negative durations will be indicated with a leading minus sign (`-`). + +Examples: + +* `1 day` +* `30 days` +* `-1 day, 12 hours` +* `3 hours, 0 minutes, 53.025 seconds` + +### bytes + +* Bytes of length shorter or equal to 35 are rendered in hexadecimal, all capital letters, without the `0x` prefix. +* Bytes of length greater than 35 are hashed using SHA256. The rendered text is `SHA-256=`, followed by the 32-byte hash, in hexadecimal, all capital letters, without the `0x` prefix. +* The hexadecimal string is finally separated into groups of 4 digits, with a space `' '` as separator. If the bytes length is odd, the 2 remaining hexadecimal characters are at the end. + +The number 35 was chosen because it is the longest length where the hashed-and-prefixed representation is longer than the original data directly formatted, using the 3 rules above. More specifically: + +* a 35-byte array will have 70 hex characters, plus 17 space characters, resulting in 87 characters. +* byte arrays starting from length 36 will be hashed to 32 bytes, which is 64 hex characters plus 15 spaces, and with the `SHA-256=` prefix, it takes 87 characters. +Also, secp256k1 public keys have length 33, so their Textual representation is not their hashed value, which we would like to avoid. + +Note: Data longer than 35 bytes are not rendered in a way that can be inverted. See ADR-050's [section about invertibility](./adr-050-sign-mode-textual.md#invertible-rendering) for a discussion. + +#### Examples + +Inputs are displayed as byte arrays. 
+ +* `[0]`: `00` +* `[0,1,2]`: `0001 02` +* `[0,1,2,..,34]`: `0001 0203 0405 0607 0809 0A0B 0C0D 0E0F 1011 1213 1415 1617 1819 1A1B 1C1D 1E1F 2021 22` +* `[0,1,2,..,35]`: `SHA-256=5D7E 2D9B 1DCB C85E 7C89 0036 A2CF 2F9F E7B6 6554 F2DF 08CE C6AA 9C0A 25C9 9C21` + +### address bytes + +We currently use `string` types in protobuf for addresses so this may not be needed, but if any address bytes are used in sign mode textual they should be rendered with bech32 formatting + +### strings + +Strings are rendered as-is. + +### Default Values + +* Default Protobuf values for each field are skipped. + +#### Example + +```protobuf +message TestData { + string signer = 1; + string metadata = 2; +} +``` + +```go +myTestData := TestData{ + Signer: "cosmos1abc" +} +``` + +We get the following encoding for the `TestData` message: + +``` +TestData object +> Signer: cosmos1abc +``` + +### bool + +Boolean values are rendered as `True` or `False`. + +### [ABANDONED] Custom `msg_title` instead of Msg `type_url` + +_This paragraph is in the Annex for informational purposes only, and will be removed in a next update of the ADR._ + +
Click to see abandoned idea.

* all protobuf messages to be used with `SIGN_MODE_TEXTUAL` CAN have a short title associated with them that can be used in format strings whenever the type URL is explicitly referenced via the `cosmos.msg.v1.textual.msg_title` Protobuf message option.
* if this option is not specified for a Msg, then the Protobuf fully qualified name will be used.

```protobuf
message MsgSend {
  option (cosmos.msg.v1.textual.msg_title) = "bank send coins";
}
```

* they MUST be unique per message, per chain

#### Examples

* `cosmos.gov.v1.MsgVote` -> `governance v1 vote`

#### Best Practices

We recommend using this option only for `Msg`s whose Protobuf fully qualified name can be hard to understand. As such, the two examples above (`MsgSend` and `MsgVote`) are not good examples to be used with `msg_title`. We still allow `msg_title` for chains that might have `Msg`s with complex or non-obvious names.

In those cases, we recommend dropping the version (e.g. `v1`) in the string if there's only one version of the module on chain. This way, the bijective mapping can figure out which message each string corresponds to. If multiple Protobuf versions of the same module exist on the same chain, we recommend keeping the first `msg_title` without the version, and including the version in the second `msg_title` (e.g. `v2`):

* `mychain.mymodule.v1.MsgDo` -> `mymodule do something`
* `mychain.mymodule.v2.MsgDo` -> `mymodule v2 do something`
+ +
+ +### ADR 050: SIGN_MODE_TEXTUAL: Annex 2 XXX + + + +# ADR 050: SIGN_MODE_TEXTUAL: Annex 2 XXX + +## Changelog + +* Oct 3, 2022: Initial Draft + +## Status + +DRAFT + +## Abstract + +This annex provides normative guidance on how devices should render a +`SIGN_MODE_TEXTUAL` document. + +## Context + +`SIGN_MODE_TEXTUAL` allows a legible version of a transaction to be signed +on a hardware security device, such as a Ledger. Early versions of the +design rendered transactions directly to lines of ASCII text, but this +proved awkward from its in-band signaling, and for the need to display +Unicode text within the transaction. + +## Decision + +`SIGN_MODE_TEXTUAL` renders to an abstract representation, leaving it +up to device-specific software how to present this representation given the +capabilities, limitations, and conventions of the device. + +We offer the following normative guidance: + +1. The presentation should be as legible as possible to the user, given +the capabilities of the device. If legibility could be sacrificed for other +properties, we would recommend just using some other signing mode. +Legibility should focus on the common case - it is okay for unusual cases +to be less legible. + +2. The presentation should be invertible if possible without substantial +sacrifice of legibility. Any change to the rendered data should result +in a visible change to the presentation. This extends the integrity of the +signing to user-visible presentation. + +3. The presentation should follow normal conventions of the device, +without sacrificing legibility or invertibility. + +As an illustration of these principles, here is an example algorithm +for presentation on a device which can display a single 80-character +line of printable ASCII characters: + +* The presentation is broken into lines, and each line is presented in +sequence, with user controls for going forward or backward a line. + +* Expert mode screens are only presented if the device is in expert mode. 
+ +* Each line of the screen starts with a number of `>` characters equal +to the screen's indentation level, followed by a `+` character if this +isn't the first line of the screen, followed by a space if either a +`>` or a `+` has been emitted, +or if this header is followed by a `>`, `+`, or space. + +* If the line ends with whitespace or an `@` character, an additional `@` +character is appended to the line. + +* The following ASCII control characters or backslash (`\`) are converted +to a backslash followed by a letter code, in the manner of string literals +in many languages: + + * a: U+0007 alert or bell + * b: U+0008 backspace + * f: U+000C form feed + * n: U+000A line feed + * r: U+000D carriage return + * t: U+0009 horizontal tab + * v: U+000B vertical tab + * `\`: U+005C backslash + +* All other ASCII control characters, plus non-ASCII Unicode code points, +are shown as either: + + * `\u` followed by 4 uppercase hex characters for code points + in the basic multilingual plane (BMP). + + * `\U` followed by 8 uppercase hex characters for other code points. + +* The screen will be broken into multiple lines to fit the 80-character +limit, considering the above transformations in a way that attempts to +minimize the number of lines generated. Expanded control or Unicode characters +are never split across lines. + +Example output: + +``` +An introductory line. +key1: 123456 +key2: a string that ends in whitespace @ +key3: a string that ends in a single ampersand - @@ + >tricky key4<: note the leading space in the presentation +introducing an aggregate +> key5: false +> key6: a very long line of text, please co\u00F6perate and break into +>+ multiple lines. +> Can we do further nesting? +>> You bet we can! +``` + +The inverse mapping gives us the only input which could have +generated this output (JSON notation for string data): + +``` +Indent Text +------ ---- +0 "An introductory line." 
+0 "key1: 123456" +0 "key2: a string that ends in whitespace " +0 "key3: a string that ends in a single ampersand - @" +0 ">tricky key4<: note the leading space in the presentation" +0 "introducing an aggregate" +1 "key5: false" +1 "key6: a very long line of text, please coöperate and break into multiple lines." +1 "Can we do further nesting?" +2 "You bet we can!" +``` + + + +### ADR 053: Go Module Refactoring + + + +# ADR 053: Go Module Refactoring + +## Changelog + +* 2022-04-27: First Draft + +## Status + +PROPOSED + +## Abstract + +The current SDK is built as a single monolithic go module. This ADR describes +how we refactor the SDK into smaller independently versioned go modules +for ease of maintenance. + +## Context + +Go modules impose certain requirements on software projects with respect to +stable version numbers (anything above 0.x) in that [any API breaking changes +necessitate a major version](https://go.dev/doc/modules/release-workflow#breaking) +increase which technically creates a new go module +(with a v2, v3, etc. suffix). + +[Keeping modules API compatible](https://go.dev/blog/module-compatibility) in +this way requires a fair amount of thought and discipline. + +The Cosmos SDK is a fairly large project which originated before go modules +came into existence and has always been under a v0.x release even though +it has been used in production for years now, not because it isn't production +quality software, but rather because the API compatibility guarantees required +by go modules are fairly complex to adhere to with such a large project. +Up to now, it has generally been deemed more important to be able to break the +API if needed rather than require all users update all package import paths +to accommodate breaking changes causing v2, v3, etc. releases. This is in +addition to the other complexities related to protobuf generated code that will +be addressed in a separate ADR. 
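The import-path mechanics described above can be illustrated in a few lines (module paths and type names are hypothetical, and both "packages" are collapsed into one file for brevity):

```go
package main

import "fmt"

// Sketch: under go semantic import versioning, a v2 module lives at a new
// import path, so the old and new types are distinct to the compiler even
// when they are structurally identical.
type MsgV1 struct{ Sender string } // as if from example.com/foo/types
type MsgV2 struct{ Sender string } // as if from example.com/foo/v2/types

func handleV1(m MsgV1) string { return "v1: " + m.Sender }

func main() {
	m2 := MsgV2{Sender: "cosmos1abc"}
	// handleV1(m2) would not compile: MsgV2 is a different type.
	// An explicit conversion is required even though the fields match.
	fmt.Println(handleV1(MsgV1(m2)))
}
```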
+ +Nevertheless, the desire for semantic versioning has been [strong in the +community](https://github.com/cosmos/cosmos-sdk/discussions/10162) and the +single go module release process has made it very hard to +release small changes to isolated features in a timely manner. Release cycles +often exceed six months which means small improvements done in a day or +two get bottle-necked by everything else in the monolithic release cycle. + +## Decision + +To improve the current situation, the SDK is being refactored into multiple +go modules within the current repository. There has been a [fair amount of +debate](https://github.com/cosmos/cosmos-sdk/discussions/10582#discussioncomment-1813377) +as to how to do this, with some developers arguing for larger vs smaller +module scopes. There are pros and cons to both approaches (which will be +discussed below in the [Consequences](#consequences) section), but the +approach being adopted is the following: + +* a go module should generally be scoped to a specific coherent set of +functionality (such as math, errors, store, etc.) 
* when code is removed from the core SDK and moved to a new module path, every
effort should be made to avoid API breaking changes in the existing code using
aliases and wrapper types (as done in https://github.com/cosmos/cosmos-sdk/pull/10779
and https://github.com/cosmos/cosmos-sdk/pull/11788)
* new go modules should be moved to a standalone domain (`cosmossdk.io`) before
being tagged as `v1.0.0` to accommodate the possibility that they may be
better served by a standalone repository in the future
* all go modules should follow the guidelines in https://go.dev/blog/module-compatibility
before `v1.0.0` is tagged and should make use of `internal` packages to limit
the exposed API surface
* the new go module's API may deviate from the existing code where there are
clear improvements to be made or to remove legacy dependencies (for instance on
amino or gogo proto), as long as the old package attempts
to avoid API breakage with aliases and wrappers
* care should be taken when simply trying to turn an existing package into a
new go module: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
In general, it seems safer to just create a new module path (appending v2, v3, etc.
if necessary), rather than trying to make an old package a new module.

## Consequences

### Backwards Compatibility

If the above guidelines are followed, using aliases or wrapper types in
existing APIs that point back to the new go modules, there should be no or
very limited breaking changes to existing APIs.
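The alias technique referenced above can be sketched as follows; everything is collapsed into one file for illustration (in reality `Int` would live in a new module such as `cosmossdk.io/math`, with the alias left behind in the old package):

```go
package main

import "fmt"

// Int is the type after its move to a new go module (sketched here
// in the same file for brevity).
type Int struct{ value int64 }

func (i Int) Add(j Int) Int { return Int{i.value + j.value} }

// LegacyInt is a type alias left behind at the old import path so that
// existing code keeps compiling: both names denote the identical type.
type LegacyInt = Int

func main() {
	a := LegacyInt{1}
	b := Int{2}
	// No conversion needed: the alias and the original are the same type.
	fmt.Println(a.Add(b)) // {3}
}
```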
+ +### Positive + +* standalone pieces of software will reach `v1.0.0` sooner +* new features to specific functionality will be released sooner + +### Negative + +* there will be more go module versions to update in the SDK itself and +per-project, although most of these will hopefully be indirect + +### Neutral + +## Further Discussions + +Further discussions are occurring primarily in +https://github.com/cosmos/cosmos-sdk/discussions/10582 and within +the Cosmos SDK Framework Working Group. + +## References + +* https://go.dev/doc/modules/release-workflow +* https://go.dev/blog/module-compatibility +* https://github.com/cosmos/cosmos-sdk/discussions/10162 +* https://github.com/cosmos/cosmos-sdk/discussions/10582 +* https://github.com/cosmos/cosmos-sdk/pull/10779 +* https://github.com/cosmos/cosmos-sdk/pull/11788 + + + +### ADR 054: Semver Compatible SDK Modules + + + +# ADR 054: Semver Compatible SDK Modules + +## Changelog + +* 2022-04-27: First draft + +## Status + +DRAFT + +## Abstract + +In order to move the Cosmos SDK to a system of decoupled semantically versioned +modules which can be composed in different combinations (ex. staking v3 with +bank v1 and distribution v2), we need to reassess how we organize the API surface +of modules to avoid problems with go semantic import versioning and +circular dependencies. This ADR explores various approaches we can take to +addressing these issues. + +## Context + +There has been [a fair amount of desire](https://github.com/cosmos/cosmos-sdk/discussions/10162) +in the community for semantic versioning in the SDK and there has been significant +movement to splitting SDK modules into [standalone go modules](https://github.com/cosmos/cosmos-sdk/issues/11899). +Both of these will ideally allow the ecosystem to move faster because we won't +be waiting for all dependencies to update synchronously. 
For instance, we could
have 3 versions of the core SDK compatible with the latest 2 releases of
CosmWasm as well as 4 different versions of staking. This sort of setup would
allow early adopters to aggressively integrate new versions, while allowing
more conservative users to be selective about which versions they're ready for.

In order to achieve this, we need to solve the following problems:

1. because of the way [go semantic import versioning](https://research.swtch.com/vgo-import) (SIV)
   works, moving to SIV naively will actually make it harder to achieve these goals
2. circular dependencies between modules need to be broken to actually release
   many modules in the SDK independently
3. pernicious minor version incompatibilities introduced through correctly
   [evolving protobuf schemas](https://developers.google.com/protocol-buffers/docs/proto3#updating)
   without correct [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)

Note that all the following discussion assumes that the proto file versioning and state machine versioning of a module
are distinct in that:

* proto files are maintained in a non-breaking way (using something
  like [buf breaking](https://docs.buf.build/breaking/overview)
  to ensure all changes are backwards compatible)
* proto file versions get bumped much less frequently, i.e. we might maintain `cosmos.bank.v1` through many versions
  of the bank module state machine
* state machine breaking changes are more common and ideally this is what we'd want to semantically version with
  go modules, ex. `x/bank/v2`, `x/bank/v3`, etc.
### Problem 1: Semantic Import Versioning Compatibility

Consider we have a module `foo` which defines the following `MsgDoSomething` and that we've released its state
machine in go module `example.com/foo`:

```protobuf
package foo.v1;

message MsgDoSomething {
  string sender = 1;
  uint64 amount = 2;
}

service Msg {
  rpc DoSomething(MsgDoSomething) returns (MsgDoSomethingResponse);
}
```

Now consider that we make a revision to this module and add a new `condition` field to `MsgDoSomething` and also
add a new validation rule on `amount` requiring it to be non-zero, and that following go semantic versioning we
release the next state machine version of `foo` as `example.com/foo/v2`.

```protobuf
// Revision 1
package foo.v1;

message MsgDoSomething {
  string sender = 1;

  // amount must be a non-zero integer.
  uint64 amount = 2;

  // condition is an optional condition on doing the thing.
  //
  // Since: Revision 1
  Condition condition = 3;
}
```

Approaching this naively, we would generate the protobuf types for the initial
version of `foo` in `example.com/foo/types` and we would generate the protobuf
types for the second version in `example.com/foo/v2/types`.

Now let's say we have a module `bar` which talks to `foo` using this keeper
interface which `foo` provides:

```go
type FooKeeper interface {
	DoSomething(MsgDoSomething) error
}
```

#### Scenario A: Backward Compatibility: Newer Foo, Older Bar

Imagine we have a chain which uses both `foo` and `bar` and wants to upgrade to
`foo/v2`, but the `bar` module has not upgraded to `foo/v2`.

In this case, the chain will not be able to upgrade to `foo/v2` until `bar`
has upgraded its references to `example.com/foo/types.MsgDoSomething` to
`example.com/foo/v2/types.MsgDoSomething`.
+ +Even if `bar`'s usage of `MsgDoSomething` has not changed at all, the upgrade +will be impossible without this change because `example.com/foo/types.MsgDoSomething` +and `example.com/foo/v2/types.MsgDoSomething` are fundamentally different +incompatible structs in the go type system. + +#### Scenario B: Forward Compatibility: Older Foo, Newer Bar + +Now let's consider the reverse scenario, where `bar` upgrades to `foo/v2` +by changing the `MsgDoSomething` reference to `example.com/foo/v2/types.MsgDoSomething` +and releases that as `bar/v2` with some other changes that a chain wants. +The chain, however, has decided that it thinks the changes in `foo/v2` are too +risky and that it'd prefer to stay on the initial version of `foo`. + +In this scenario, it is impossible to upgrade to `bar/v2` without upgrading +to `foo/v2` even if `bar/v2` would have worked 100% fine with `foo` other +than changing the import path to `MsgDoSomething` (meaning that `bar/v2` +doesn't actually use any new features of `foo/v2`). + +Now because of the way go semantic import versioning works, we are locked +into either using `foo` and `bar` OR `foo/v2` and `bar/v2`. We cannot have +`foo` + `bar/v2` OR `foo/v2` + `bar`. The go type system doesn't allow this +even if both versions of these modules are otherwise compatible with each +other. + +#### Naive Mitigation + +A naive approach to fixing this would be to not regenerate the protobuf types +in `example.com/foo/v2/types` but instead just update `example.com/foo/types` +to reflect the changes needed for `v2` (adding `condition` and requiring +`amount` to be non-zero). Then we could release a patch of `example.com/foo/types` +with this update and use that for `foo/v2`. But this change is state machine +breaking for `v1`. 
It requires changing the `ValidateBasic` method to reject
the case where `amount` is zero, and it adds the `condition` field which
should be rejected based
on [ADR 020 unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering).
So adding these changes as a patch on `v1` is actually incorrect based on semantic
versioning. Chains that want to stay on `v1` of `foo` should not
be importing these changes because they are incorrect for `v1`.

### Problem 2: Circular dependencies

None of the above approaches allow `foo` and `bar` to be separate modules
if for some reason `foo` and `bar` depend on each other in different ways.
For instance, we can't have `foo` import `bar/types` while `bar` imports
`foo/types`.

We have several cases of circular module dependencies in the SDK
(ex. staking, distribution and slashing) that are legitimate from a state machine
perspective. Without separating the API types out somehow, there would be
no way to independently semantically version these modules without some other
mitigation.

### Problem 3: Handling Minor Version Incompatibilities

Imagine that we solve the first two problems but now have a scenario where
`bar/v2` wants the option to use `MsgDoSomething.condition` which only `foo/v2`
supports. If `bar/v2` works with `foo` `v1` and sets `condition` to some non-nil
value, then `foo` will silently ignore this field, resulting in a possibly
dangerous silent logic error. If `bar/v2` were able to check dynamically whether
`foo` was on `v1` or `v2`, it could choose to only use `condition` when
`foo/v2` is available. Even if `bar/v2` were able to perform this check, however,
how do we know that it is always performing the check properly?
Without +some sort of +framework-level [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering), +it is hard to know whether these pernicious hard to detect bugs are getting into +our app and a client-server layer such as [ADR 033: Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md) +may be needed to do this. + +## Solutions + +### Approach A) Separate API and State Machine Modules + +One solution (first proposed in https://github.com/cosmos/cosmos-sdk/discussions/10582) is to isolate all protobuf +generated code into a separate module +from the state machine module. This would mean that we could have state machine +go modules `foo` and `foo/v2` which could use a types or API go module say +`foo/api`. This `foo/api` go module would be perpetually on `v1.x` and only +accept non-breaking changes. This would then allow other modules to be +compatible with either `foo` or `foo/v2` as long as the inter-module API only +depends on the types in `foo/api`. It would also allow modules `foo` and `bar` +to depend on each other in that both of them could depend on `foo/api` and +`bar/api` without `foo` directly depending on `bar` and vice versa. + +This is similar to the naive mitigation described above except that it separates +the types into separate go modules which in and of itself could be used to +break circular module dependencies. It has the same problems as the naive solution, +otherwise, which we could rectify by: + +1. removing all state machine breaking code from the API module (ex. `ValidateBasic` and any other interface methods) +2. 
embedding the correct file descriptors for unknown field filtering in the binary

#### Migrate all interface methods on API types to handlers

To solve 1), we need to remove all interface implementations from generated
types and instead use a handler approach which essentially means that given
a type `X`, we have some sort of resolver which allows us to resolve interface
implementations for that type (ex. `sdk.Msg` or `authz.Authorization`). For
example:

```go
func (k Keeper) DoSomething(msg MsgDoSomething) error {
	var validateBasicHandler ValidateBasicHandler
	err := k.resolver.Resolve(&validateBasicHandler, msg)
	if err != nil {
		return err
	}

	err = validateBasicHandler.ValidateBasic()
	...
}
```

In the case of some methods on `sdk.Msg`, we could replace them with declarative
annotations. For instance, `GetSigners` can already be replaced by the protobuf
annotation `cosmos.msg.v1.signer`. In the future, we may consider some sort
of protobuf validation framework (like https://github.com/bufbuild/protoc-gen-validate
but more Cosmos-specific) to replace `ValidateBasic`.

#### Pinned FileDescriptor's

To solve 2), state machine modules must be able to specify what the version of
the protobuf files was that they were built against. For instance if the API
module for `foo` upgrades to `foo/v2`, the original `foo` module still needs
a copy of the original protobuf files it was built with so that ADR 020
unknown field filtering will reject `MsgDoSomething` when `condition` is
set.

The simplest way to do this may be to embed the protobuf `FileDescriptor`s into
the module itself so that these `FileDescriptor`s are used at runtime rather
than the ones that are built into the `foo/api` which may be different.
Using +[buf build](https://docs.buf.build/build/usage#output-format), [go embed](https://pkg.go.dev/embed), +and a build script we can probably come up with a solution for embedding +`FileDescriptor`s into modules that is fairly straightforward. + +#### Potential limitations to generated code + +One challenge with this approach is that it places heavy restrictions on what +can go in API modules and requires that most of this is state machine breaking. +All or most of the code in the API module would be generated from protobuf +files, so we can probably control this with how code generation is done, but +it is a risk to be aware of. + +For instance, we do code generation for the ORM that in the future could +contain optimizations that are state machine breaking. We +would either need to ensure very carefully that the optimizations aren't +actually state machine breaking in generated code or separate this generated code +out from the API module into the state machine module. Both of these mitigations +are potentially viable but the API module approach does require an extra level +of care to avoid these sorts of issues. + +#### Minor Version Incompatibilities + +This approach in and of itself does little to address any potential minor +version incompatibilities and the +requisite [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering). +Likely some sort of client-server routing layer which does this check such as +[ADR 033: Inter-Module communication](./adr-033-protobuf-inter-module-comm.md) +is required to make sure that this is done properly. We could then allow +modules to perform a runtime check given a `MsgClient`, ex: + +```go +func (k Keeper) CallFoo() error { + if k.interModuleClient.MinorRevision(k.fooMsgClient) >= 2 { + k.fooMsgClient.DoSomething(&MsgDoSomething{Condition: ...}) + } else { + ... 
	}
}
```

To do the unknown field filtering itself, the ADR 033 router would need to use
the [protoreflect API](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect)
to ensure that no fields unknown to the receiving module are set. This could
result in an undesirable performance hit depending on how complex this logic is.

### Approach B) Changes to Generated Code

An alternate approach to solving the versioning problem is to change how protobuf code is generated and move modules
mostly or completely in the direction of inter-module communication as described
in [ADR 033](./adr-033-protobuf-inter-module-comm.md).
In this paradigm, a module could generate all the types it needs internally - including the API types of other modules -
and talk to other modules via a client-server boundary. For instance, if `bar` needs to talk to `foo`, it could
generate its own version of `MsgDoSomething` as `bar/internal/foo/v1.MsgDoSomething` and just pass this to the
inter-module router which would somehow convert it to the version which foo needs (ex. `foo/internal.MsgDoSomething`).

Currently, two generated structs for the same protobuf type cannot exist in the same go binary without special
build flags (see https://developers.google.com/protocol-buffers/docs/reference/go/faq#fix-namespace-conflict).
A relatively simple mitigation to this issue would be to set up the protobuf code to not register protobuf types
globally if they are generated in an `internal/` package. This will require modules to register their types manually
with the app-level protobuf registry; this is similar to what modules already do with the `InterfaceRegistry`
and amino codec.

If modules _only_ do ADR 033 message passing then a naive and non-performant solution for
converting `bar/internal/foo/v1.MsgDoSomething`
to `foo/internal.MsgDoSomething` would be marshaling and unmarshaling in the ADR 033 router.
This would break down if
we needed to expose protobuf types in `Keeper` interfaces because the whole point is to try to keep these types
`internal/` so that we don't end up with all the import version incompatibilities we've described above. However,
because of the issue with minor version incompatibilities and the need
for [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering),
sticking with the `Keeper` paradigm instead of ADR 033 may be unviable to begin with.

A more performant solution (that could maybe be adapted to work with `Keeper` interfaces) would be to only expose
getters and setters for generated types and internally store data in memory buffers which could be passed from
one implementation to another in a zero-copy way.

For example, imagine this protobuf API with only getters and setters is exposed for `MsgSend`:

```go
type MsgSend interface {
	proto.Message
	GetFromAddress() string
	GetToAddress() string
	GetAmount() []v1beta1.Coin
	SetFromAddress(string)
	SetToAddress(string)
	SetAmount([]v1beta1.Coin)
}

func NewMsgSend() MsgSend { return &msgSendImpl{memoryBuffers: ...} }
```

Under the hood, `MsgSend` could be implemented based on some raw memory buffer in the same way
that [Cap'n Proto](https://capnproto.org)
and [FlatBuffers](https://google.github.io/flatbuffers/) do, so that we could convert between one version of `MsgSend`
and another without serialization (i.e. zero-copy). This approach would have the added benefits of allowing zero-copy
message passing to modules written in other languages such as Rust and accessed through a VM or FFI. It could also make
unknown field filtering in inter-module communication simpler if we require that all new fields are added in sequential
order, ex. just checking that no field `> 5` is set.
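A dependency-free sketch of the sequential-field check mentioned above, modeling a message as a map from field number to value (a real implementation would walk the set fields via the protoreflect API instead):

```go
package main

import "fmt"

// rejectUnknownFields returns an error if any field number above maxKnown
// is set, mirroring the ADR 020 unknown field filtering idea: a receiver
// built against an older schema rejects fields it cannot interpret.
func rejectUnknownFields(fields map[int]any, maxKnown int) error {
	for num := range fields {
		if num > maxKnown {
			return fmt.Errorf("unknown field %d set; receiver only knows fields <= %d", num, maxKnown)
		}
	}
	return nil
}

func main() {
	// Hypothetical foo v1 message: sender=1, amount=2; the receiver knows
	// fields up to number 2.
	ok := map[int]any{1: "cosmos1abc", 2: uint64(10)}
	// A v2 sender also sets the new condition field (number 3).
	bad := map[int]any{1: "cosmos1abc", 2: uint64(10), 3: "some condition"}

	fmt.Println(rejectUnknownFields(ok, 2))  // <nil>
	fmt.Println(rejectUnknownFields(bad, 2)) // unknown field 3 ...
}
```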
Also, we wouldn't have any issues with state machine breaking code on generated types, because all the generated
code used in the state machine would actually live in the state machine module itself. Depending on how interface
types and protobuf `Any`s are used in other languages, however, it may still be desirable to take the handler
approach described in approach (A). Either way, types implementing interfaces would still need to be registered
with an `InterfaceRegistry` as they are now, because there would be no way to retrieve them via the global registry.

In order to simplify access to other modules using ADR 033, a public API module (maybe even one
[remotely generated by Buf](https://buf.build/docs/bsr/generated-sdks/go/)) could be used by client modules instead
of requiring them to generate all client types internally.

The big downsides of this approach are that it requires big changes to how people use protobuf types and would be a
substantial rewrite of the protobuf code generator. This new generated code, however, could still be made compatible
with the [`google.golang.org/protobuf/reflect/protoreflect`](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect)
API in order to work with all standard golang protobuf tooling.

It is possible that the naive approach of marshaling/unmarshaling in the ADR 033 router is an acceptable intermediate
solution if the changes to the code generator are seen as too complex. However, since all modules would likely need
to migrate to ADR 033 anyway with this approach, it might be better to do this all at once.

### Approach C) Don't address these issues

If the above solutions are seen as too complex, we can also decide not to do anything explicit to enable better module
version compatibility and break circular dependencies.
In this case, when developers are confronted with the issues described above, they can require dependencies to update in
sync (what we do now) or attempt some ad-hoc, potentially hacky solution.

One approach is to ditch go semantic import versioning (SIV) altogether. Some people have commented that go's SIV
(i.e. changing the import path to `foo/v2`, `foo/v3`, etc.) is too restrictive and that it should be optional. The
golang maintainers disagree and only officially support semantic import versioning. We could, however, take the
contrarian perspective and get more flexibility by using 0.x-based versioning basically forever.

Module version compatibility could then be achieved using go.mod replace directives to pin dependencies to specific
compatible 0.x versions. For instance, if we knew `foo` 0.2 and 0.3 were both compatible with `bar` 0.3 and 0.4, we
could use replace directives in our go.mod to stick to the versions of `foo` and `bar` we want. This would work as
long as the authors of `foo` and `bar` avoid incompatible breaking changes between these modules.

Or, if developers choose to use semantic import versioning, they can attempt the naive solution described above
and would also need to use special tags and replace directives to make sure that modules are pinned to the correct
versions.

Note, however, that all of these ad-hoc approaches would be vulnerable to the minor version compatibility issues
described above unless [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)
is properly addressed.

### Approach D) Avoid protobuf generated code in public APIs

An alternative approach would be to avoid protobuf generated code in public module APIs. This would help avoid the
discrepancy between state machine versions and client API versions at the module-to-module boundaries.
It would mean
that we wouldn't do inter-module message passing based on ADR 033, but rather stick to the existing keeper approach
and take it one step further by avoiding any protobuf generated code in the keeper interface methods.

Using this approach, our `foo.Keeper.DoSomething` method wouldn't have the generated `MsgDoSomething` struct (which
comes from the protobuf API), but instead positional parameters. Then in order for `foo/v2` to support the `foo/v1`
keeper, it would simply need to implement both the v1 and v2 keeper APIs. The `DoSomething` method in v2 could have the
additional `condition` parameter, but this wouldn't be present in v1 at all, so there would be no danger of a client
accidentally setting this when it isn't available.

So this approach would avoid the challenge around minor version incompatibilities because the existing module keeper
API would not get new fields when they are added to protobuf files.

Taking this approach, however, would likely require making all protobuf generated code internal in order to prevent
it from leaking into the keeper API. This means we would still need to modify the protobuf code generator to not
register `internal/` code with the global registry, and we would still need to manually register protobuf
`FileDescriptor`s (this is probably true in all scenarios). It may, however, be possible to avoid needing to refactor
interface methods on generated types to handlers.

Also, this approach doesn't address what would be done in scenarios where modules still want to use the message router.
Either way, we probably still want a way to pass messages safely from one module to another via the router, even if
it's just for use cases like `x/gov`, `x/authz`, CosmWasm, etc. That would still require most of the things outlined
in approach (B), although we could advise modules to prefer keepers for communicating with other modules.
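A minimal sketch of this positional-parameter approach (with hypothetical `foo` names; note that Go has no method overloading, so the v1 and v2 keeper APIs would live on separate types):

```go
package main

import "fmt"

// Hypothetical positional-parameter keeper APIs for foo v1 and v2.
type FooV1Keeper interface {
	DoSomething(amount uint64) error
}

type FooV2Keeper interface {
	DoSomething(amount uint64, condition string) error
}

// keeperV2 is foo/v2's actual implementation.
type keeperV2 struct{}

func (keeperV2) DoSomething(amount uint64, condition string) error {
	fmt.Printf("amount=%d condition=%q\n", amount, condition)
	return nil
}

// keeperV1Adapter exposes the v1 API on top of the v2 implementation; v1
// callers have no way to pass (or even see) the new condition parameter.
type keeperV1Adapter struct{ v2 keeperV2 }

func (a keeperV1Adapter) DoSomething(amount uint64) error {
	return a.v2.DoSomething(amount, "")
}

func main() {
	var v2 FooV2Keeper = keeperV2{}
	var v1 FooV1Keeper = keeperV1Adapter{}
	_ = v2.DoSomething(1, "only-if-ready")
	_ = v1.DoSomething(2)
}
```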
The biggest downside of this approach is probably that it requires a strict refactoring of keeper interfaces to avoid
generated code leaking into the API. This may result in cases where we need to duplicate types that are already defined
in proto files and then write methods for converting between the golang and protobuf versions. This may end up in a lot
of unnecessary boilerplate, and that may discourage modules from actually adopting it and achieving effective version
compatibility. Approaches (A) and (B), although heavy-handed initially, aim to provide a system which, once adopted,
more or less gives the developer version compatibility for free with minimal boilerplate. Approach (D) may not be able
to provide such a straightforward system since it requires a golang API to be defined alongside a protobuf API in a
way that requires duplication and differing sets of design principles (protobuf APIs encourage additive changes
while golang APIs would forbid them).

Other downsides to this approach are:

* no clear roadmap to supporting modules in other languages like Rust
* doesn't get us any closer to proper object capability security (one of the goals of ADR 033)
* ADR 033 needs to be done properly anyway for the set of use cases which do need it

## Decision

The latest **DRAFT** proposal is:

1. we are aligned on adopting [ADR 033](./adr-033-protobuf-inter-module-comm.md) not just as an addition to the
   framework, but as a core replacement to the keeper paradigm entirely.
2. the ADR 033 inter-module router will accommodate any variation of approach (A) or (B) given the following rules:
   a. if the client type is the same as the server type, then pass it directly through,
   b. if both client and server use the zero-copy generated code wrappers (which still need to be defined), then pass
      the memory buffers from one wrapper to the other, or
   c. marshal/unmarshal types between client and server.
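The three routing rules above might dispatch roughly as in this self-contained sketch. Everything here is a stand-in: the `zeroCopy` and `codec` interfaces are hypothetical, and JSON substitutes for the protobuf wire format purely for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// zeroCopy marks hypothetical zero-copy wrappers that can share a buffer.
type zeroCopy interface{ Buffer() []byte }

// codec stands in for protobuf marshaling in this sketch.
type codec interface {
	Marshal() ([]byte, error)
	Unmarshal([]byte) error
}

// convert applies rules (a)-(c) from the draft decision above.
func convert(client, server interface{}) error {
	// (a) same generated type on both sides: pass through directly
	if reflect.TypeOf(client) == reflect.TypeOf(server) {
		return nil
	}
	// (b) both sides are zero-copy wrappers: hand over the memory buffer
	if _, ok := client.(zeroCopy); ok {
		if _, ok := server.(zeroCopy); ok {
			return nil // buffer handed over without serialization
		}
	}
	// (c) otherwise marshal on the client side and unmarshal on the server side
	c, cok := client.(codec)
	s, sok := server.(codec)
	if !cok || !sok {
		return fmt.Errorf("cannot convert %T to %T", client, server)
	}
	bz, err := c.Marshal()
	if err != nil {
		return err
	}
	return s.Unmarshal(bz)
}

// Two "versions" of the same message sharing a wire-compatible encoding.
type msgV1 struct{ From, To string }
type msgV2 struct {
	From, To  string
	Condition string // added in a later revision
}

func (m *msgV1) Marshal() ([]byte, error) { return json.Marshal(m) }
func (m *msgV1) Unmarshal(b []byte) error { return json.Unmarshal(b, m) }
func (m *msgV2) Marshal() ([]byte, error) { return json.Marshal(m) }
func (m *msgV2) Unmarshal(b []byte) error { return json.Unmarshal(b, m) }

func main() {
	client, server := &msgV1{From: "bar", To: "foo"}, &msgV2{}
	if err := convert(client, server); err != nil {
		panic(err)
	}
	fmt.Println(server.From, server.To)
}
```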
This approach will allow for both maximal correctness and enable a clear path to enabling modules written in other
languages, possibly executed within a WASM VM.

### Minor API Revisions

To declare minor API revisions of proto files, we propose the following guidelines (which were already documented
in [cosmos.app.v1alpha module options](../proto/cosmos/app/v1alpha1/module.proto)):

* proto packages which are revised from their initial version (considered revision `0`) should include a `package`
  comment in some .proto file containing the text `Revision N` at the start of a comment line, where `N` is the current
  revision number.
* all fields, messages, etc. added in a version beyond the initial revision should add a comment at the start of a
  comment line of the form `Since: Revision N`, where `N` is the non-zero revision in which it was added.

It is advised that there is a 1:1 correspondence between a state machine module and a versioned set of proto files
which are versioned either as a buf module, a go API module, or both. If the buf schema registry is used, the version of
this buf module should always be `1.N` where `N` corresponds to the package revision. Patch releases should be used when
only documentation comments are updated. It is okay to include proto packages named `v2`, `v3`, etc. in this same
`1.N` versioned buf module (ex. `cosmos.bank.v2`) as long as all these proto packages consist of a single API intended
to be served by a single SDK module.
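Applied to the running `foo` example (package and field names here are hypothetical), these annotations might look like:

```protobuf
// Revision 1
package foo.v1;

message MsgDoSomething {
  string from_address = 1;
  uint64 amount = 2;

  // Since: Revision 1
  Condition condition = 3;
}
```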
### Introspecting Minor API Revisions

In order for modules to introspect the minor API revision of peer modules, we propose adding the following method
to `cosmossdk.io/core/intermodule.Client`:

```go
ServiceRevision(ctx context.Context, serviceName string) uint64
```

Modules could call this using the service name statically generated by the go grpc code generator:

```go
intermoduleClient.ServiceRevision(ctx, bankv1beta1.Msg_ServiceDesc.ServiceName)
```

In the future, we may decide to extend the code generator used for protobuf services to add a method
to client types which does this check more concisely, ex:

```go
package bankv1beta1

type MsgClient interface {
	Send(context.Context, MsgSend) (MsgSendResponse, error)
	ServiceRevision(context.Context) uint64
}
```

### Unknown Field Filtering

To correctly perform [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering),
the inter-module router can do one of the following:

* use the `protoreflect` API for messages which support that
* for gogo proto messages, marshal and use the existing `codec/unknownproto` code
* for zero-copy messages, do a simple check on the highest set field number (assuming we can require that fields are
  added consecutively in increasing order)

### `FileDescriptor` Registration

Because a single go binary may contain different versions of the same generated protobuf code, we cannot rely on the
global protobuf registry to contain the correct `FileDescriptor`s. Because `appconfig` module configuration is itself
written in protobuf, we would like to load the `FileDescriptor`s for a module before loading the module itself. So we
will provide ways to register `FileDescriptor`s at module registration time, before instantiation.
We propose the
following `cosmossdk.io/core/appmodule.Option` constructors for the various cases of how `FileDescriptor`s may be
packaged:

```go
package appmodule

// this can be used when we are using google.golang.org/protobuf compatible generated code
// Ex:
//   ProtoFiles([]protoreflect.FileDescriptor{bankv1beta1.File_cosmos_bank_v1beta1_module_proto})
func ProtoFiles(files []protoreflect.FileDescriptor) Option {}

// this can be used when we are using gogo proto generated code.
func GzippedProtoFiles(files [][]byte) Option {}

// this can be used when we are using buf build to generate a pinned file descriptor
func ProtoImage(protoImage []byte) Option {}
```

This approach allows us to support several ways protobuf files might be generated:

* proto files generated internally to a module (use `ProtoFiles`)
* the API module approach with pinned file descriptors (use `ProtoImage`)
* gogo proto (use `GzippedProtoFiles`)

### Module Dependency Declaration

One risk of ADR 033 is that dependencies are called at runtime which are not present in the loaded set of SDK modules.
Also, we want modules to have a way to define a minimum dependency API revision that they require. Therefore, all
modules should declare their set of dependencies upfront. These dependencies could be defined when a module is
instantiated, but ideally we know what the dependencies are before instantiation and can statically look at an app
config and determine whether the set of modules is compatible. For example, if `bar` requires `foo` revision `>= 1`,
then we should be able to know this when creating an app config with two versions of `bar` and `foo`.

We propose defining these dependencies in the proto options of the module config object itself.

### Interface Registration

We will also need to define how interface methods are defined on types that are serialized as `google.protobuf.Any`s.
In light of the desire to support modules in other languages, we may want to think of solutions that will accommodate
other languages, such as the plugins described briefly in [ADR 033](./adr-033-protobuf-inter-module-comm.md#internal-methods).

### Testing

In order to ensure that modules are indeed compatible with multiple versions of their dependencies, we plan to provide
specialized unit and integration testing infrastructure that automatically tests multiple versions of dependencies.

#### Unit Testing

Unit tests should be conducted inside SDK modules by mocking their dependencies. In a full ADR 033 scenario,
this means that all interaction with other modules is done via the inter-module router, so mocking dependencies
means mocking their msg and query server implementations. We will provide both a test runner and a fixture to make this
streamlined. The key thing the test runner should do to test compatibility is to test all combinations of
dependency API revisions. This can be done by taking the file descriptors for the dependencies, parsing their comments
to determine the revisions at which various elements were added, and then creating synthetic file descriptors for each
revision by subtracting elements that were added later.
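A minimal sketch of that subtraction step (toy data structures only; real code would walk `FileDescriptorProto`s and parse the `Since: Revision N` comments):

```go
package main

import "fmt"

// element is a toy stand-in for a schema element (field, message, etc.).
type element struct {
	name  string
	since int // revision in which the element was added, parsed from comments
}

// atRevision returns the synthetic schema as it existed at revision rev,
// i.e. with every later-added element subtracted.
func atRevision(schema []element, rev int) []element {
	var out []element
	for _, e := range schema {
		if e.since <= rev {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	schema := []element{
		{name: "from_address", since: 0},
		{name: "amount", since: 0},
		{name: "condition", since: 1},
	}
	fmt.Println(len(atRevision(schema, 0)), len(atRevision(schema, 1)))
}
```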
+ +Here is a proposed API for the unit test runner and fixture: + +```go +package moduletesting + +import ( + "context" + "testing" + + "cosmossdk.io/core/intermodule" + "cosmossdk.io/depinject" + "google.golang.org/grpc" + "google.golang.org/protobuf/proto" + "google.golang.org/protobuf/reflect/protodesc" +) + +type TestFixture interface { + context.Context + intermodule.Client // for making calls to the module we're testing + BeginBlock() + EndBlock() +} + +type UnitTestFixture interface { + TestFixture + grpc.ServiceRegistrar // for registering mock service implementations +} + +type UnitTestConfig struct { + ModuleConfig proto.Message // the module's config object + DepinjectConfig depinject.Config // optional additional depinject config options + DependencyFileDescriptors []protodesc.FileDescriptorProto // optional dependency file descriptors to use instead of the global registry +} + +// Run runs the test function for all combinations of dependency API revisions. +func (cfg UnitTestConfig) Run(t *testing.T, f func(t *testing.T, f UnitTestFixture)) { + // ... 
}
```

Here is an example for testing bar calling foo which takes advantage of conditional service revisions in the expected
mock arguments:

```go
func TestBar(t *testing.T) {
	UnitTestConfig{ModuleConfig: &foomodulev1.Module{}}.Run(t, func(t *testing.T, f moduletesting.UnitTestFixture) {
		ctrl := gomock.NewController(t)
		mockFooMsgServer := footestutil.NewMockMsgServer(ctrl)
		foov1.RegisterMsgServer(f, mockFooMsgServer)
		barMsgClient := barv1.NewMsgClient(f)
		if f.ServiceRevision(foov1.Msg_ServiceDesc.ServiceName) >= 1 {
			mockFooMsgServer.EXPECT().DoSomething(gomock.Any(), &foov1.MsgDoSomething{
				...,
				Condition: ..., // condition is expected in revision >= 1
			}).Return(&foov1.MsgDoSomethingResponse{}, nil)
		} else {
			mockFooMsgServer.EXPECT().DoSomething(gomock.Any(), &foov1.MsgDoSomething{...}).Return(&foov1.MsgDoSomethingResponse{}, nil)
		}
		res, err := barMsgClient.CallFoo(f, &MsgCallFoo{})
		...
	})
}
```

The unit test runner would make sure that no dependency mocks return arguments which are invalid for the service
revision being tested, to ensure that modules don't incorrectly depend on functionality not present in a given revision.

#### Integration Testing

An integration test runner and fixture would also be provided which, instead of using mocks, would test actual module
dependencies in various combinations. Here is the proposed API:

```go
type IntegrationTestFixture interface {
	TestFixture
}

type IntegrationTestConfig struct {
	ModuleConfig     proto.Message              // the module's config object
	DependencyMatrix map[string][]proto.Message // all the dependent module configs
}

// Run runs the test function for all combinations of dependency modules.
func (cfg IntegrationTestConfig) Run(t *testing.T, f func(t *testing.T, f IntegrationTestFixture)) {
	// ...
}
```

And here is an example with foo and bar:

```go
func TestBarIntegration(t *testing.T) {
	IntegrationTestConfig{
		ModuleConfig: &barmodulev1.Module{},
		DependencyMatrix: map[string][]proto.Message{
			"runtime": { // test against two versions of runtime
				&runtimev1.Module{},
				&runtimev2.Module{},
			},
			"foo": { // test against three versions of foo
				&foomodulev1.Module{},
				&foomodulev2.Module{},
				&foomodulev3.Module{},
			},
		},
	}.Run(t, func(t *testing.T, f moduletesting.IntegrationTestFixture) {
		barMsgClient := barv1.NewMsgClient(f)
		res, err := barMsgClient.CallFoo(f, &MsgCallFoo{})
		...
	})
}
```

Unlike unit tests, integration tests actually pull in other module dependencies. So that modules can be written
without direct dependencies on other modules, and because golang has no concept of development dependencies, integration
tests should be written in separate go modules, ex. `example.com/bar/v2/test`. Because this paradigm uses go semantic
import versioning, it is possible to build a single go module which imports three versions of `foo` and two versions of
`runtime` and can test these all together in the six possible combinations of dependencies.

## Consequences

### Backwards Compatibility

Modules which migrate fully to ADR 033 will not be compatible with existing modules which use the keeper paradigm.
As a temporary workaround, we may create some wrapper types that emulate the current keeper interfaces to minimize
the migration overhead.
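Such a compatibility wrapper might look like this sketch (all names hypothetical): a keeper-style method implemented on top of a message-passing client, so keeper-paradigm callers keep working during a migration.

```go
package main

import "fmt"

// msgSend and msgClient stand in for ADR 033 generated message types and client.
type msgSend struct {
	FromAddress, ToAddress string
	Amount                 uint64
}

type msgClient interface{ Send(msgSend) error }

// BankKeeper is the legacy keeper-style interface existing modules call.
type BankKeeper interface {
	SendCoins(from, to string, amount uint64) error
}

// keeperWrapper emulates the keeper interface on top of the message client.
type keeperWrapper struct{ client msgClient }

func (k keeperWrapper) SendCoins(from, to string, amount uint64) error {
	return k.client.Send(msgSend{FromAddress: from, ToAddress: to, Amount: amount})
}

// printClient is a trivial msgClient for demonstration.
type printClient struct{}

func (printClient) Send(m msgSend) error {
	fmt.Printf("send %d from %s to %s\n", m.Amount, m.FromAddress, m.ToAddress)
	return nil
}

func main() {
	var k BankKeeper = keeperWrapper{client: printClient{}}
	_ = k.SendCoins("cosmos1a", "cosmos1b", 10)
}
```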
### Positive

* we will be able to deliver interoperable semantically versioned modules which should dramatically increase the
  ability of the Cosmos SDK ecosystem to iterate on new features
* it will be possible to write Cosmos SDK modules in other languages in the near future

### Negative

* all modules will need to be refactored somewhat dramatically

### Neutral

* the `cosmossdk.io/core/appconfig` framework will play a more central role in terms of how modules are defined; this
  is likely generally a good thing but does mean additional changes for users wanting to stick to the pre-depinject way
  of wiring up modules
* `depinject` is somewhat less needed or maybe even obviated because of the full ADR 033 approach. If we adopt the
  core API proposed in https://github.com/cosmos/cosmos-sdk/pull/12239, then a module would probably always instantiate
  itself with a method `ProvideModule(appmodule.Service) (appmodule.AppModule, error)`. There is no complex wiring of
  keeper dependencies in this scenario, and dependency injection may not have as much of a use case (or any at all).

## Further Discussions

The decision described above is considered in draft mode and is pending final buy-in from the team and key stakeholders.
+Key outstanding discussions if we do adopt that direction are: + +* how do module clients introspect dependency module API revisions +* how do modules determine a minor dependency module API revision requirement +* how do modules appropriately test compatibility with different dependency versions +* how to register and resolve interface implementations +* how do modules register their protobuf file descriptors depending on the approach they take to generated code (the + API module approach may still be viable as a supported strategy and would need pinned file descriptors) + +## References + +* https://github.com/cosmos/cosmos-sdk/discussions/10162 +* https://github.com/cosmos/cosmos-sdk/discussions/10582 +* https://github.com/cosmos/cosmos-sdk/discussions/10368 +* https://github.com/cosmos/cosmos-sdk/pull/11340 +* https://github.com/cosmos/cosmos-sdk/issues/11899 +* [ADR 020](./adr-020-protobuf-transaction-encoding.md) +* [ADR 033](./adr-033-protobuf-inter-module-comm.md) + + + +### ADR 055: ORM + + + +# ADR 055: ORM + +## Changelog + +* 2022-04-27: First draft + +## Status + +ACCEPTED Implemented + +## Abstract + +In order to make it easier for developers to build Cosmos SDK modules and for clients to query, index and verify proofs +against state data, we have implemented an ORM (object-relational mapping) layer for the Cosmos SDK. + +## Context + +Historically modules in the Cosmos SDK have always used the key-value store directly and created various handwritten +functions for managing key format as well as constructing secondary indexes. This consumes a significant amount of +time when building a module and is error-prone. Because key formats are non-standard, sometimes poorly documented, +and subject to change, it is hard for clients to generically index, query and verify merkle proofs against state data. + +The known first instance of an "ORM" in the Cosmos ecosystem was in [weave](https://github.com/iov-one/weave/tree/master/orm). 
A later version was built for [regen-ledger](https://github.com/regen-network/regen-ledger/tree/157181f955823149e1825263a317ad8e16096da4/orm) for
use in the group module and later [ported to the SDK](https://github.com/cosmos/cosmos-sdk/tree/35d3312c3be306591fcba39892223f1244c8d108/x/group/internal/orm)
just for that purpose.

While these earlier designs made it significantly easier to write state machines, they still required a lot of manual
configuration, didn't expose state format directly to clients, and were limited in their support of different types
of index keys, composite keys, and range queries.

Discussions about the design continued in https://github.com/cosmos/cosmos-sdk/discussions/9156 and more
sophisticated proofs of concept were created in https://github.com/allinbits/cosmos-sdk-poc/tree/master/runtime/orm
and https://github.com/cosmos/cosmos-sdk/pull/10454.

## Decision

These prior efforts culminated in the creation of the Cosmos SDK `orm` go module, which uses protobuf annotations
for specifying ORM table definitions. This ORM is based on the new `google.golang.org/protobuf/reflect/protoreflect`
API and supports:

* sorted indexes for all simple protobuf types (except `bytes`, `enum`, `float`, `double`) as well as `Timestamp` and `Duration`
* unsorted `bytes` and `enum` indexes
* composite primary and secondary keys
* unique indexes
* auto-incrementing `uint64` primary keys
* complex prefix and range queries
* paginated queries
* complete logical decoding of KV-store data

Almost all the information needed to decode state directly is specified in .proto files. Each table definition specifies
an ID which is unique per .proto file, and each index within a table is unique within that table. Clients then only need
to know the name of a module and the prefix under which ORM data for a specific .proto file is stored within that module
in order to decode state data directly.
This additional information will be exposed directly through app configs, which will be explained
in a future ADR related to app wiring.

The ORM makes optimizations around storage space by not repeating the values of primary key fields in the stored value
when storing primary key records. For example, if the object `{"a":0,"b":1}` has the primary key `a`, it will
be stored in the key-value store as `Key: '0', Value: {"b":1}` (with more efficient protobuf binary encoding).
Also, the generated code from https://github.com/cosmos/cosmos-proto does optimizations around the
`google.golang.org/protobuf/reflect/protoreflect` API to improve performance.

A code generator is included with the ORM which creates type-safe wrappers around the ORM's dynamic `Table`
implementation and is the recommended way for modules to use the ORM.

The ORM tests provide a simplified bank module demonstration which illustrates:

* [ORM proto options](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/internal/testpb/bank.proto)
* [Generated Code](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/internal/testpb/bank.cosmos_orm.go)
* [Example Usage in a Module Keeper](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/model/ormdb/module_test.go)

## Consequences

### Backwards Compatibility

State machine code that adopts the ORM will need migrations, as the state layout is generally backwards incompatible.
These state machines will also need to migrate to https://github.com/cosmos/cosmos-proto at least for state data.
### Positive

* easier to build modules
* easier to add secondary indexes to state
* possible to write a generic indexer for ORM state
* easier to write clients that do state proofs
* possible to automatically write query layers rather than needing to manually implement gRPC queries

### Negative

* worse performance than handwritten keys (for now). See [Further Discussions](#further-discussions)
  for potential improvements

### Neutral

## Further Discussions

Further discussions will happen within the Cosmos SDK Framework Working Group. Current planned and ongoing work includes:

* automatically generate client-facing query layer
* client-side query libraries that transparently verify light client proofs
* index ORM data to SQL databases
* improve performance by:
    * optimizing existing reflection based code to avoid unnecessary gets when doing deletes & updates of simple tables
    * more sophisticated code generation such as making fast path reflection even faster (avoiding `switch` statements),
      or even fully generating code that equals handwritten performance

## References

* https://github.com/iov-one/weave/tree/master/orm
* https://github.com/regen-network/regen-ledger/tree/157181f955823149e1825263a317ad8e16096da4/orm
* https://github.com/cosmos/cosmos-sdk/tree/35d3312c3be306591fcba39892223f1244c8d108/x/group/internal/orm
* https://github.com/cosmos/cosmos-sdk/discussions/9156
* https://github.com/allinbits/cosmos-sdk-poc/tree/master/runtime/orm
* https://github.com/cosmos/cosmos-sdk/pull/10454

### ADR 057: App Wiring

# ADR 057: App Wiring

## Changelog

* 2022-05-04: Initial Draft
* 2022-08-19: Updates

## Status

PROPOSED Implemented

## Abstract

In order to make it easier to build Cosmos SDK modules and apps, we propose a new app wiring system based on
dependency injection and declarative app configurations to replace the current `app.go` code.
## Context

A number of factors have made the SDK and SDK apps in their current state hard to maintain. A symptom of the current
state of complexity is [`simapp/app.go`](https://github.com/cosmos/cosmos-sdk/blob/c3edbb22cab8678c35e21fe0253919996b780c01/simapp/app.go)
which contains almost 100 lines of imports and is otherwise over 600 lines of mostly boilerplate code that is
generally copied to each new project. (Not to mention the additional boilerplate which gets copied in `simapp/simd`.)

The large amount of boilerplate needed to bootstrap an app has made it hard to release independently versioned go
modules for Cosmos SDK modules as described in [ADR 053: Go Module Refactoring](./adr-053-go-module-refactoring.md).

In addition to being very verbose and repetitive, `app.go` also exposes a large surface area for breaking changes
as most modules instantiate themselves with positional parameters, which forces breaking changes anytime a new parameter
(even an optional one) is needed.

Several attempts were made to improve the current situation, including [ADR 033: Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md)
and [a proof-of-concept of a new SDK](https://github.com/allinbits/cosmos-sdk-poc). The discussions around these
designs led to the current solution described here.

## Decision

In order to improve the current situation, a new "app wiring" paradigm has been designed to replace `app.go` which
involves:

* declarative configuration of the modules in an app which can be serialized to JSON or YAML
* a dependency-injection (DI) framework for instantiating apps from the configuration

### Dependency Injection

When examining the code in `app.go`, most of the code simply instantiates modules with dependencies provided either
by the framework (such as store keys) or by other modules (such as keepers).
It is generally pretty obvious given +the context what the correct dependencies actually should be, so dependency-injection is an obvious solution. Rather +than making developers manually resolve dependencies, a module will tell the DI container what dependency it needs +and the container will figure out how to provide it. + +We explored several existing DI solutions in golang and felt that the reflection-based approach in [uber/dig](https://github.com/uber-go/dig) +was closest to what we needed but not quite there. Assessing what we needed for the SDK, we designed and built +the Cosmos SDK [depinject module](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject), which has the following +features: + +* dependency resolution and provision through functional constructors, ex: `func(need SomeDep) (AnotherDep, error)` +* dependency injection `In` and `Out` structs which support `optional` dependencies +* grouped-dependencies (many-per-container) through the `ManyPerContainerType` tag interface +* module-scoped dependencies via `ModuleKey`s (where each module gets a unique dependency) +* one-per-module dependencies through the `OnePerModuleType` tag interface +* sophisticated debugging information and container visualization via GraphViz + +Here are some examples of how these would be used in an SDK module: + +* `StoreKey` could be a module-scoped dependency which is unique per module +* a module's `AppModule` instance (or the equivalent) could be a `OnePerModuleType` +* CLI commands could be provided with `ManyPerContainerType`s + +Note that even though dependency resolution is dynamic and based on reflection, which could be considered a pitfall +of this approach, the entire dependency graph should be resolved immediately on app startup and only gets resolved +once (except in the case of dynamic config reloading which is a separate topic). 
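To illustrate the mechanics of the reflection-based resolution described above (a toy sketch only, not depinject's actual implementation or API), constructor-based injection can be done with a simple fixed-point over provider functions:

```go
package main

import (
	"fmt"
	"reflect"
)

// inject is a toy reflection-based resolver: each provider is a function whose
// inputs are satisfied by the outputs of other providers, and the requested
// output type is copied into the pointer out.
func inject(providers []interface{}, out interface{}) error {
	values := map[reflect.Type]reflect.Value{}
	remaining := append([]interface{}{}, providers...)
	for len(remaining) > 0 {
		var next []interface{}
		progress := false
		for _, p := range remaining {
			fn := reflect.ValueOf(p)
			t := fn.Type()
			args := make([]reflect.Value, t.NumIn())
			ready := true
			for i := range args {
				v, ok := values[t.In(i)]
				if !ok {
					ready = false
					break
				}
				args[i] = v
			}
			if !ready {
				next = append(next, p)
				continue
			}
			for _, r := range fn.Call(args) {
				values[r.Type()] = r
			}
			progress = true
		}
		if !progress {
			return fmt.Errorf("unresolvable dependency graph")
		}
		remaining = next
	}
	target := reflect.ValueOf(out).Elem()
	v, ok := values[target.Type()]
	if !ok {
		return fmt.Errorf("no provider for %v", target.Type())
	}
	target.Set(v)
	return nil
}

// Hypothetical module dependencies: a store key needed by a keeper.
type StoreKey struct{ Name string }
type Keeper struct{ Key StoreKey }

func main() {
	providers := []interface{}{
		func() StoreKey { return StoreKey{Name: "bank"} },
		func(k StoreKey) Keeper { return Keeper{Key: k} },
	}
	var k Keeper
	if err := inject(providers, &k); err != nil {
		panic(err)
	}
	fmt.Println(k.Key.Name)
}
```

As in the sketch, an unresolvable graph is detected and reported in a single pass at startup rather than surfacing lazily at call time.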
This means that if there are any +errors in the dependency graph, they will get reported immediately on startup so this approach is only slightly worse +than fully static resolution in terms of error reporting and much better in terms of code complexity. + +### Declarative App Config + +In order to compose modules into an app, a declarative app configuration will be used. This configuration is based off +of protobuf and its basic structure is very simple: + +```protobuf +package cosmos.app.v1; + +message Config { + repeated ModuleConfig modules = 1; +} + +message ModuleConfig { + string name = 1; + google.protobuf.Any config = 2; +} +``` + +(See also https://github.com/cosmos/cosmos-sdk/blob/6e18f582bf69e3926a1e22a6de3c35ea327aadce/proto/cosmos/app/v1alpha1/config.proto) + +The configuration for every module is itself a protobuf message and modules will be identified and loaded based +on the protobuf type URL of their config object (ex. `cosmos.bank.module.v1.Module`). Modules are given a unique short `name` +to share resources across different versions of the same module which might have a different protobuf package +versions (ex. `cosmos.bank.module.v2.Module`). All module config objects should define the `cosmos.app.v1alpha1.module` +descriptor option which will provide additional useful metadata for the framework and which can also be indexed +in module registries. 
+ +An example app config in YAML might look like this: + +```yaml +modules: + - name: baseapp + config: + "@type": cosmos.baseapp.module.v1.Module + begin_blockers: [staking, auth, bank] + end_blockers: [bank, auth, staking] + init_genesis: [bank, auth, staking] + - name: auth + config: + "@type": cosmos.auth.module.v1.Module + bech32_prefix: "foo" + - name: bank + config: + "@type": cosmos.bank.module.v1.Module + - name: staking + config: + "@type": cosmos.staking.module.v1.Module +``` + +In the above example, there is a hypothetical `baseapp` module which contains the information around ordering of +begin blockers, end blockers, and init genesis. Rather than lifting these concerns up to the module config layer, +they are themselves handled by a module which could allow a convenient way of swapping out different versions of +baseapp (for instance to target different versions of tendermint), without needing to change the rest of the config. +The `baseapp` module would then provide to the server framework (which sort of sits outside the ABCI app) an instance +of `abci.Application`. + +In this model, an app is *modules all the way down* and the dependency injection/app config layer is very much +protocol-agnostic and can adapt to even major breaking changes at the protocol layer. + +### Module & Protobuf Registration + +In order for the two components of dependency injection and declarative configuration to work together as described, +we need a way for modules to actually register themselves and provide dependencies to the container. + +One additional complexity that needs to be handled at this layer is protobuf registry initialization. Recall that +in both the current SDK `codec` and the proposed [ADR 054: Protobuf Semver Compatible Codegen](https://github.com/cosmos/cosmos-sdk/pull/11802), +protobuf types need to be explicitly registered. 
Given that the app config itself is based on protobuf and
+uses protobuf `Any` types, protobuf registration needs to happen before the app config itself can be decoded. Because
+we don't know a priori which protobuf `Any` types will be needed and modules themselves define those types, we need
+to decode the app config in separate phases:
+
+1. parse the app config JSON/YAML as raw JSON and collect the required module type URLs (without doing proto JSON decoding)
+2. build a [protobuf type registry](https://pkg.go.dev/google.golang.org/protobuf@v1.28.0/reflect/protoregistry) based
+   on file descriptors and types provided by each required module
+3. decode the app config as proto JSON using the protobuf type registry
+
+Because in [ADR 054: Protobuf Semver Compatible Codegen](https://github.com/cosmos/cosmos-sdk/pull/11802), each module
+might use `internal` generated code which is not registered with the global protobuf registry, this code should provide
+an alternate way to register protobuf types with a type registry. In the same way that `.pb.go` files currently have a
+`var File_foo_proto protoreflect.FileDescriptor` for the file `foo.proto`, generated code should have a new member
+`var Types_foo_proto TypeInfo` where `TypeInfo` is an interface or struct with all the necessary info to register both
+the protobuf generated types and the file descriptor.
+
+So a module must provide dependency injection providers and protobuf types, and take as input its module
+config object, which uniquely identifies the module based on its type URL.
+
+With this in mind, we define a global module registry which allows module implementations to register themselves
+with the following API:
+
+```go
+// Register registers a module with the provided type name (ex. cosmos.bank.module.v1.Module)
+// and the provided options.
+func Register(configTypeName protoreflect.FullName, option ...Option) { ...
}
+
+type Option interface { /* private methods */ }
+
+// Provide registers dependency injection provider functions which work with the
+// cosmos-sdk container module. These functions can also accept an additional
+// parameter for the module's config object.
+func Provide(providers ...interface{}) Option { ... }
+
+// Types registers protobuf TypeInfo's with the protobuf registry.
+func Types(types ...TypeInfo) Option { ... }
+```
+
+Ex:
+
+```go
+func init() {
+	appmodule.Register("cosmos.bank.module.v1.Module",
+		appmodule.Types(
+			types.Types_tx_proto,
+			types.Types_query_proto,
+			types.Types_types_proto,
+		),
+		appmodule.Provide(
+			ProvideBankModule,
+		),
+	)
+}
+
+type Inputs struct {
+	container.In
+
+	AuthKeeper auth.Keeper
+	DB         ormdb.ModuleDB
+}
+
+type Outputs struct {
+	Keeper    bank.Keeper
+	AppModule appmodule.AppModule
+}
+
+func ProvideBankModule(config *bankmodulev1.Module, in Inputs) (Outputs, error) { ... }
+```
+
+Note that in this model, a module configuration object *cannot* register different dependency providers at runtime
+based on the configuration. This is intentional because it allows us to know globally which modules provide which
+dependencies, and it will also allow us to do code generation of the whole app initialization. This
+can help us figure out issues with missing dependencies in an app config if the needed modules are loaded at runtime.
+In cases where required modules are not loaded at runtime, it may be possible to guide users to the correct module
+through a global Cosmos SDK module registry.
+
+The `appmodule.AppModule` type referenced above is part of the replacement for the legacy `AppModule` framework and is
+described in [ADR 063: Core Module API](./adr-063-core-module-api.md).
+
+### New `app.go`
+
+With this setup, `app.go` might now look something like this:
+
+```go
+package main
+
+import (
+	// Each go package which registers a module must be imported just for side-effects
+	// so that module implementations are registered.
+	_ "embed" // required by the go:embed directive below
+	_ "github.com/cosmos/cosmos-sdk/x/auth/module"
+	_ "github.com/cosmos/cosmos-sdk/x/bank/module"
+	_ "github.com/cosmos/cosmos-sdk/x/staking/module"
+
+	"github.com/cosmos/cosmos-sdk/core/app"
+)
+
+//go:embed app.yaml
+var appConfigYAML []byte
+
+func main() {
+	app.Run(app.LoadYAML(appConfigYAML))
+}
+```
+
+### Application to existing SDK modules
+
+So far we have described a system which is largely agnostic to the specifics of the SDK such as store keys, `AppModule`,
+`BaseApp`, etc. Improvements to these parts of the framework that integrate with the general app wiring framework
+defined here are described in [ADR 063: Core Module API](./adr-063-core-module-api.md).
+
+### Registration of Inter-Module Hooks
+
+Some modules define a hooks interface (ex. `StakingHooks`) which allows one module to call back into another module
+when certain events happen.
+
+With the app wiring framework, these hooks interfaces can be defined as `OnePerModuleType`s, and then the module
+which consumes these hooks can collect them as a map of module name to hook type (ex. `map[string]FooHooks`). Ex:
+
+```go
+func init() {
+	appmodule.Register(
+		&foomodulev1.Module{},
+		appmodule.Invoke(InvokeSetFooHooks),
+		...
+	)
+}
+
+func InvokeSetFooHooks(
+	keeper *keeper.Keeper,
+	fooHooks map[string]FooHooks,
+) error {
+	// Iterate in sorted key order so hook registration is deterministic.
+	keys := maps.Keys(fooHooks)
+	sort.Strings(keys)
+	for _, k := range keys {
+		keeper.AddFooHooks(fooHooks[k])
+	}
+	return nil
+}
+```
+
+Optionally, the module consuming hooks can allow apps to define an order for calling these hooks based on module name
+in its config object.
+
+An alternative approach was considered where hooks are registered via reflection: the modules exposing hooks would
+inspect all keeper types to see if they implement the hook interface.
This has the downsides of:
+
+* needing to expose all the keepers of all modules to the module providing hooks,
+* not allowing hooks to be encapsulated on a different type which doesn't expose all keeper methods, and
+* making it harder to know statically which modules expose hooks or check for them.
+
+With the approach proposed here, hooks registration will be plainly visible in `app.go` if `depinject` codegen
+(described below) is used.
+
+### Code Generation
+
+The `depinject` framework will optionally allow the app configuration and dependency injection wiring to be code
+generated. This will allow:
+
+* dependency injection wiring to be inspected as regular go code just like the existing `app.go`,
+* dependency injection to be opt-in, with manual wiring still 100% possible.
+
+Code generation requires that all providers and invokers and their parameters are exported and in non-internal packages.
+
+### Module Semantic Versioning
+
+When we start creating semantically versioned SDK modules that are in standalone go modules, a state machine breaking
+change to a module should be handled as follows:
+
+* the semantic major version should be incremented, and
+* a new semantically versioned module config protobuf type should be created.
+
+For instance, if we have the SDK module for bank in the go module `github.com/cosmos/cosmos-sdk/x/bank` with the module config type
+`cosmos.bank.module.v1.Module`, and we want to make a state machine breaking change to the module, we would:
+
+* create a new go module `github.com/cosmos/cosmos-sdk/x/bank/v2`,
+* with the module config protobuf type `cosmos.bank.module.v2.Module`.
+
+This *does not* mean that we need to increment the protobuf API version for bank. Both modules can support
+`cosmos.bank.v1`, but `github.com/cosmos/cosmos-sdk/x/bank/v2` will be a separate go module with a separate module config type.
+
+This practice will eventually allow us to use appconfig to load new versions of a module via a configuration change.
+
+Effectively, there should be a 1:1 correspondence between a semantically versioned go module and a
+versioned module config protobuf type, and major version bumps should occur whenever state machine breaking changes
+are made to a module.
+
+NOTE: SDK modules that are standalone go modules *should not* adopt semantic versioning until the concerns described in
+[ADR 054: Module Semantic Versioning](./adr-054-semver-compatible-modules.md) are
+addressed. The short-term handling of this issue was left somewhat unresolved. However, the easiest tactic is
+likely to use a standalone API go module and follow the guidelines described in this comment: https://github.com/cosmos/cosmos-sdk/pull/11802#issuecomment-1406815181. For the time being, it is recommended that
+Cosmos SDK modules continue to follow tried and true [0-based versioning](https://0ver.org) until an officially
+recommended solution is provided. This section of the ADR will be updated when that happens; for now, it
+should be considered a design recommendation for future adoption of semantic versioning.
+
+## Consequences
+
+### Backwards Compatibility
+
+Modules which work with the new app wiring system do not need to drop their existing `AppModule` and `NewKeeper`
+registration paradigms. These two methods can live side-by-side for as long as is needed.
+
+### Positive
+
+* wiring up new apps will be simpler, more succinct and less error-prone
+* it will be easier to develop and test standalone SDK modules without needing to replicate all of simapp
+* it may be possible to dynamically load modules and upgrade chains without needing to do a coordinated stop and binary
+  upgrade using this mechanism
+* easier plugin integration
+* the dependency injection framework provides more automated reasoning about dependencies in the project, with a graph visualization.
+
+### Negative
+
+* it may be confusing when a dependency is missing, although error messages, the GraphViz visualization, and global
+  module registration may help with that
+
+### Neutral
+
+* it will require work and education
+
+## Further Discussions
+
+The protobuf type registration system described in this ADR has not been implemented and may need to be reconsidered in
+light of code generation. It may be better to do this type registration with a DI provider.
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/blob/c3edbb22cab8678c35e21fe0253919996b780c01/simapp/app.go
+* https://github.com/allinbits/cosmos-sdk-poc
+* https://github.com/uber-go/dig
+* https://github.com/google/wire
+* https://pkg.go.dev/github.com/cosmos/cosmos-sdk/container
+* https://github.com/cosmos/cosmos-sdk/pull/11802
+* [ADR 063: Core Module API](./adr-063-core-module-api.md)
+
+
+
+### ADR 058: Auto-Generated CLI
+
+
+
+# ADR 058: Auto-Generated CLI
+
+## Changelog
+
+* 2022-05-04: Initial Draft
+
+## Status
+
+ACCEPTED Partially Implemented
+
+## Abstract
+
+In order to make it easier for developers to write Cosmos SDK modules, we provide infrastructure which automatically
+generates CLI commands based on protobuf definitions.
+
+## Context
+
+Current Cosmos SDK modules generally implement a CLI command for every transaction and every query supported by the
+module. These are handwritten for each command and essentially amount to providing some CLI flags or positional
+arguments for specific fields in protobuf messages.
+
+In order to make sure CLI commands are correctly implemented as well as to make sure that the application works
+in end-to-end scenarios, we do integration tests using CLI commands. While these tests are valuable on some level,
+they can be hard to write and maintain, and they run slowly.
[Some teams have contemplated](https://github.com/regen-network/regen-ledger/issues/1041) +moving away from CLI-style integration tests (which are really end-to-end tests) towards narrower integration tests +which exercise `MsgClient` and `QueryClient` directly. This might involve replacing the current end-to-end CLI +tests with unit tests as there still needs to be some way to test these CLI commands for full quality assurance. + +## Decision + +To make module development simpler, we provide infrastructure - in the new [`client/v2`](https://github.com/cosmos/cosmos-sdk/tree/main/client/v2) +go module - for automatically generating CLI commands based on protobuf definitions to either replace or complement +handwritten CLI commands. This will mean that when developing a module, it will be possible to skip both writing and +testing CLI commands as that can all be taken care of by the framework. + +The basic design for automatically generating CLI commands is to: + +* create one CLI command for each `rpc` method in a protobuf `Query` or `Msg` service +* create a CLI flag for each field in the `rpc` request type +* for `query` commands call gRPC and print the response as protobuf JSON or YAML (via the `-o`/`--output` flag) +* for `tx` commands, create a transaction and apply common transaction flags + +In order to make the auto-generated CLI as easy to use (or easier) than handwritten CLI, we need to do custom handling +of specific protobuf field types so that the input format is easy for humans: + +* `Coin`, `Coins`, `DecCoin`, and `DecCoins` should be input using the existing format (i.e. `1000uatom`) +* it should be possible to specify an address using either the bech32 address string or a named key in the keyring +* `Timestamp` and `Duration` should accept strings like `2001-01-01T00:00:00Z` and `1h3m` respectively +* pagination should be handled with flags like `--page-limit`, `--page-offset`, etc. 
+* it should be possible to customize any other protobuf type either via its message name or a `cosmos_proto.scalar` annotation + +At a basic level it should be possible to generate a command for a single `rpc` method as well as all the commands for +a whole protobuf `service` definition. It should be possible to mix and match auto-generated and handwritten commands. + +## Consequences + +### Backwards Compatibility + +Existing modules can mix and match auto-generated and handwritten CLI commands so it is up to them as to whether they +make breaking changes by replacing handwritten commands with slightly different auto-generated ones. + +For now the SDK will maintain the existing set of CLI commands for backwards compatibility but new commands will use +this functionality. + +### Positive + +* module developers will not need to write CLI commands +* module developers will not need to test CLI commands +* [lens](https://github.com/strangelove-ventures/lens) may benefit from this + +### Negative + +### Neutral + +## Further Discussions + +We would like to be able to customize: + +* short and long usage strings for commands +* aliases for flags (ex. `-a` for `--amount`) +* which fields are positional parameters rather than flags + +It is an [open discussion](https://github.com/cosmos/cosmos-sdk/pull/11725#issuecomment-1108676129) +as to whether these customizations options should lie in: + +* the .proto files themselves, +* separate config files (ex. YAML), or +* directly in code + +Providing the options in .proto files would allow a dynamic client to automatically generate +CLI commands on the fly. However, that may pollute the .proto files themselves with information that is only relevant +for a small subset of users. 
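To make the flag-derivation idea from the Decision section concrete, here is a toy sketch that derives one kebab-case flag per field of a hypothetical request struct using Go reflection. The `QueryBalanceRequest` type and `flagsFor` helper are invented for illustration; the real infrastructure works from protobuf definitions, not Go structs:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// QueryBalanceRequest is a hypothetical request type standing in for a
// protobuf-generated request message.
type QueryBalanceRequest struct {
	Address   string
	Denom     string
	PageLimit uint64
}

// flagsFor derives one kebab-case CLI flag per request field, e.g.
// PageLimit -> --page-limit. (Simplified: runs of capitals are not handled.)
func flagsFor(req interface{}) []string {
	t := reflect.TypeOf(req)
	var flags []string
	for i := 0; i < t.NumField(); i++ {
		name := t.Field(i).Name
		var b strings.Builder
		for j, r := range name {
			// Insert a dash before each interior capital letter.
			if j > 0 && r >= 'A' && r <= 'Z' {
				b.WriteByte('-')
			}
			b.WriteRune(r)
		}
		flags = append(flags, "--"+strings.ToLower(b.String()))
	}
	return flags
}

func main() {
	fmt.Println(flagsFor(QueryBalanceRequest{})) // [--address --denom --page-limit]
}
```

A real implementation would additionally apply the custom per-type handling described above (coins, addresses, timestamps, pagination) rather than treating every field uniformly.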
+
+## References
+
+* https://github.com/regen-network/regen-ledger/issues/1041
+* https://github.com/cosmos/cosmos-sdk/tree/main/client/v2
+* https://github.com/cosmos/cosmos-sdk/pull/11725#issuecomment-1108676129
+
+
+
+### ADR 059: Test Scopes
+
+
+
+# ADR 059: Test Scopes
+
+## Changelog
+
+* 2022-08-02: Initial Draft
+* 2023-03-02: Add precision for integration tests
+* 2023-03-23: Add precision for E2E tests
+
+## Status
+
+PROPOSED Partially Implemented
+
+## Abstract
+
+Recent work in the SDK aimed at breaking apart the monolithic root go module has highlighted
+shortcomings and inconsistencies in our testing paradigm. This ADR clarifies a common
+language for talking about test scopes and proposes an ideal state of tests at each scope.
+
+## Context
+
+[ADR-053: Go Module Refactoring](#adr-053-go-module-refactoring) expresses our desire for an SDK composed of many
+independently versioned Go modules, and [ADR-057: App Wiring](#adr-057-app-wiring) offers a methodology
+for breaking apart inter-module dependencies through the use of dependency injection. As
+described in [EPIC: Separate all SDK modules into standalone go modules](https://github.com/cosmos/cosmos-sdk/issues/11899), module
+dependencies are particularly complected in the test phase, where simapp is used as
+the key test fixture in setting up and running tests. It is clear that the successful
+completion of Phases 3 and 4 in that EPIC requires the resolution of this dependency problem.
+
+In [EPIC: Unit Testing of Modules via Mocks](https://github.com/cosmos/cosmos-sdk/issues/12398) it was thought this Gordian knot could be
+unwound by mocking all dependencies in the test phase for each module, but seeing how these
+refactors were complete rewrites of test suites, discussions began around the fate of the
+existing integration tests. One perspective is that they ought to be thrown out; another is
+that integration tests have some utility of their own and a place in the SDK's testing story.
+
+Another point of confusion has been the current state of CLI test suites, [x/auth](https://github.com/cosmos/cosmos-sdk/blob/0f7e56c6f9102cda0ca9aba5b6f091dbca976b5a/x/auth/client/testutil/suite.go#L44-L49) for
+example. In code these are called integration tests, but in reality they function as end to end
+tests by starting up a tendermint node and full application. [EPIC: Rewrite and simplify
+CLI tests](https://github.com/cosmos/cosmos-sdk/issues/12696) identifies the ideal state of CLI tests using mocks, but does not address the
+place end to end tests may have in the SDK.
+
+From here we identify three scopes of testing, **unit**, **integration**, and **e2e** (end to
+end), and seek to define the boundaries of each, their shortcomings (real and imposed), and their
+ideal state in the SDK.
+
+### Unit tests
+
+Unit tests exercise the code contained in a single module (e.g. `/x/bank`) or package
+(e.g. `/client`) in isolation from the rest of the code base. Within this we identify two
+levels of unit tests, *illustrative* and *journey*. The definitions below lean heavily on
+[The BDD Books - Formulation](https://leanpub.com/bddbooks-formulation) section 1.3.
+
+*Illustrative* tests exercise an atomic part of a module in isolation - in this case we
+might do fixture setup/mocking of other parts of the module.
+
+Tests which exercise a whole module's function with dependencies mocked are *journeys*.
+These are almost like integration tests in that they exercise many things together but still
+use mocks.
+
+Example 1, journey vs illustrative tests - [depinject's BDD style tests](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/binding_test.go) show how we can
+rapidly build up many illustrative cases demonstrating behavioral rules without [very much code](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/binding_test.go) while maintaining high level readability.
+ +Example 2 [depinject table driven tests](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/provider_desc_test.go) + +Example 3 [Bank keeper tests](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/bank/keeper/keeper_test.go#L94-L105) - A mock implementation of `AccountKeeper` is supplied to the keeper constructor. + +#### Limitations + +Certain modules are tightly coupled beyond the test phase. A recent dependency report for +`bank -> auth` found 274 total usages of `auth` in `bank`, 50 of which are in +production code and 224 in test. This tight coupling may suggest that either the modules +should be merged, or refactoring is required to abstract references to the core types tying +the modules together. It could also indicate that these modules should be tested together +in integration tests beyond mocked unit tests. + +In some cases setting up a test case for a module with many mocked dependencies can be quite +cumbersome and the resulting test may only show that the mocking framework works as expected +rather than working as a functional test of interdependent module behavior. + +### Integration tests + +Integration tests define and exercise relationships between an arbitrary number of modules +and/or application subsystems. + +Wiring for integration tests is provided by `depinject` and some [helper code](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/testutil/sims/app_helpers.go#L95) starts up +a running application. A section of the running application may then be tested. Certain +inputs during different phases of the application life cycle are expected to produce +invariant outputs without too much concern for component internals. This type of black box +testing has a larger scope than unit testing. 
+ +Example 1 [client/grpc_query_test/TestGRPCQuery](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/client/grpc_query_test.go#L111-L129) - This test is misplaced in `/client`, +but tests the life cycle of (at least) `runtime` and `bank` as they progress through +startup, genesis and query time. It also exercises the fitness of the client and query +server without putting bytes on the wire through the use of [QueryServiceTestHelper](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/baseapp/grpcrouter_helpers.go#L31). + +Example 2 `x/evidence` Keeper integration tests - Starts up an application composed of [8 +modules](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/testutil/app.yaml#L1) with [5 keepers](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/keeper/keeper_test.go#L101-L106) used in the integration test suite. One test in the suite +exercises [HandleEquivocationEvidence](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/keeper/infraction_test.go#L42) which contains many interactions with the staking +keeper. + +Example 3 - Integration suite app configurations may also be specified via golang (not +YAML as above) [statically](https://github.com/cosmos/cosmos-sdk/blob/main/x/nft/testutil/app_config.go) or [dynamically](https://github.com/cosmos/cosmos-sdk/blob/8c23f6f957d1c0bedd314806d1ac65bea59b084c/tests/integration/bank/keeper/keeper_test.go#L129-L134). + +#### Limitations + +Setting up a particular input state may be more challenging since the application is +starting from a zero state. Some of this may be addressed by good test fixture +abstractions with testing of their own. Tests may also be more brittle, and larger +refactors could impact application initialization in unexpected ways with harder to +understand errors. 
This could also be seen as a benefit, and indeed the SDK's current
+integration tests were helpful in tracking down logic errors during earlier stages
+of app-wiring refactors.
+
+### Simulations
+
+Simulations (also called generative testing) are a special case of integration tests where
+deterministically random module operations are executed against a running simapp, building
+blocks on the chain until a specified height is reached. No *specific* assertions are
+made for the state transitions resulting from module operations, but any error will halt and
+fail the simulation. Since `crisis` is included in simapp and the simulation runs
+EndBlockers at the end of each block, any module invariant violations will also fail
+the simulation.
+
+Modules must implement [AppModuleSimulation.WeightedOperations](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/types/module/simulation.go#L31) to define their
+simulation operations. Note that not all modules implement this, which may indicate a
+gap in current simulation test coverage.
+
+Modules not returning simulation operations:
+
+* `auth`
+* `evidence`
+* `mint`
+* `params`
+
+A separate binary, [runsim](https://github.com/cosmos/tools/tree/master/cmd/runsim), is responsible for kicking off some of these tests and
+managing their life cycle.
+
+#### Limitations
+
+* [A success](https://github.com/cosmos/cosmos-sdk/runs/7606931983?check_suite_focus=true) may take a long time to run, 7-10 minutes per simulation in CI.
+* [Timeouts](https://github.com/cosmos/cosmos-sdk/runs/7606932295?check_suite_focus=true) sometimes occur on apparent successes without any indication why.
+* Useful error messages are not provided on [failure](https://github.com/cosmos/cosmos-sdk/runs/7606932548?check_suite_focus=true) from CI, requiring a developer to run
+  the simulation locally to reproduce.
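The simulation flow described above can be caricatured in a few lines: deterministically seeded random operations are applied block by block, and any invariant violation aborts the run. This is a toy sketch (the `runSim` function and its mint/burn operations are invented for illustration), not the SDK's simulation framework:

```go
package main

import (
	"fmt"
	"math/rand"
)

// runSim applies deterministically random "module operations" to a toy balance
// for a number of blocks, failing the run if the invariant (non-negative
// balance) is ever violated.
func runSim(seed int64, blocks, start int) (int, error) {
	r := rand.New(rand.NewSource(seed)) // fixed seed => reproducible run
	ops := []func(int) int{
		func(b int) int { return b + r.Intn(10) }, // a "mint" operation
		func(b int) int { return b - r.Intn(5) },  // a "burn" operation
	}
	bal := start
	for h := 1; h <= blocks; h++ {
		bal = ops[r.Intn(len(ops))](bal) // pick and apply a random operation
		if bal < 0 {                     // invariant check, like a crisis EndBlocker
			return bal, fmt.Errorf("invariant violated at height %d", h)
		}
	}
	return bal, nil
}

func main() {
	bal, err := runSim(42, 50, 1000)
	if err != nil {
		panic(err)
	}
	fmt.Println("simulation ok, final balance:", bal)
}
```

The fixed seed is what makes a failing run reproducible locally, which is exactly what the CI limitations above make painful when error messages are missing.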
+
+### E2E tests
+
+End to end tests exercise the entire system as we understand it in as close an approximation
+to a production environment as is practical. Presently these tests are located at
+[tests/e2e](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e) and rely on [testutil/network](https://github.com/cosmos/cosmos-sdk/tree/main/testutil/network) to start up an in-process Tendermint node.
+
+An application should be built as minimally as possible to exercise the desired functionality.
+The SDK uses an application with only the modules required by the tests. Application developers
+are advised to use their own application for e2e tests.
+
+#### Limitations
+
+In general the limitations of end to end tests are orchestration and compute cost.
+Scaffolding is required to start up and run a prod-like environment and this
+process takes much longer to start and run than unit or integration tests.
+
+Global locks present in Tendermint code cause stateful starting/stopping to sometimes hang
+or fail intermittently when run in a CI environment.
+
+The scope of e2e tests has been complected with command line interface testing.
+
+## Decision
+
+We accept these test scopes and identify the following decision points for each.
+
+| Scope       | App Type            | Mocks? |
+| ----------- | ------------------- | ------ |
+| Unit        | None                | Yes    |
+| Integration | integration helpers | Some   |
+| Simulation  | minimal app         | No     |
+| E2E         | minimal app         | No     |
+
+The decision above is valid for the SDK itself. An application developer should test with their
+full application instead of the minimal app.
+
+### Unit Tests
+
+All modules must have mocked unit test coverage.
+
+Illustrative tests should outnumber journeys in unit tests.
+
+Unit tests should outnumber integration tests.
+
+Unit tests must not introduce additional dependencies beyond those already present in
+production code.
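For illustration, mocked unit test coverage in the spirit of the rules above might look like the following sketch. The `Keeper`, `AccountKeeper`, and `mockAccountKeeper` shapes are invented and far simpler than the SDK's actual types:

```go
package main

import "fmt"

// AccountKeeper is the interface the module under test depends on; in
// production it would be satisfied by another module's keeper.
type AccountKeeper interface {
	HasAccount(addr string) bool
}

type Keeper struct{ ak AccountKeeper }

func (k Keeper) Send(from, to string, amt int) error {
	if !k.ak.HasAccount(from) {
		return fmt.Errorf("unknown sender %s", from)
	}
	if amt <= 0 {
		return fmt.Errorf("non-positive amount %d", amt)
	}
	return nil
}

// mockAccountKeeper stands in for the real account keeper, so the test
// introduces no dependency beyond the interface already in production code.
type mockAccountKeeper struct{ accounts map[string]bool }

func (m mockAccountKeeper) HasAccount(addr string) bool { return m.accounts[addr] }

func main() {
	k := Keeper{ak: mockAccountKeeper{accounts: map[string]bool{"alice": true}}}
	// Illustrative, table-driven cases.
	cases := []struct {
		from   string
		amt    int
		wantOK bool
	}{
		{"alice", 10, true},
		{"bob", 10, false},  // unknown sender
		{"alice", 0, false}, // non-positive amount
	}
	for _, tc := range cases {
		err := k.Send(tc.from, "carol", tc.amt)
		if (err == nil) != tc.wantOK {
			panic(fmt.Sprintf("unexpected result for %s/%d", tc.from, tc.amt))
		}
	}
	fmt.Println("all cases pass")
}
```

Each table row here is an illustrative case; a journey test would instead drive a whole module flow against the same mock.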
+
+When module unit test introduction as per [EPIC: Unit testing of modules via mocks](https://github.com/cosmos/cosmos-sdk/issues/12398)
+results in a near complete rewrite of an integration test suite, the test suite should be
+retained and moved to `/tests/integration`. We accept the resulting test logic
+duplication but recommend improving the unit test suite through the addition of
+illustrative tests.
+
+### Integration Tests
+
+All integration tests shall be located in `/tests/integration`, even those which do not
+introduce extra module dependencies.
+
+To help limit scope and complexity, it is recommended to use the smallest possible number of
+modules in application startup, i.e. don't depend on simapp.
+
+Integration tests should outnumber e2e tests.
+
+### Simulations
+
+Simulations shall use a minimal application (usually via app wiring). They are located under `/x/{moduleName}/simulation`.
+
+### E2E Tests
+
+Existing e2e tests shall be migrated to integration tests by removing the dependency on the
+test network and in-process Tendermint node to ensure we do not lose test coverage.
+
+The e2e test runner shall transition from in-process Tendermint to a runner powered by
+Docker via [dockertest](https://github.com/ory/dockertest).
+
+E2E tests exercising a full network upgrade shall be written.
+
+The CLI testing aspect of existing e2e tests shall be rewritten using the network mocking
+demonstrated in [PR#12706](https://github.com/cosmos/cosmos-sdk/pull/12706).
+
+## Consequences
+
+### Positive
+
+* test coverage is increased
+* test organization is improved
+* reduced dependency graph size in modules
+* simapp removed as a dependency from modules
+* inter-module dependencies introduced in test code are removed
+* reduced CI run time after transitioning away from in-process Tendermint
+
+### Negative
+
+* some test logic duplication between unit and integration tests during transition
+* the developer experience of tests written using dockertest may be a bit worse
+
+### Neutral
+
+* some discovery required for e2e transition to dockertest
+
+## Further Discussions
+
+It may be useful if test suites could be run in integration mode (with mocked tendermint) or
+with e2e fixtures (with real tendermint and many nodes). Integration fixtures could be used
+for quicker runs, e2e fixtures could be used for more battle hardening.
+
+A PoC of BDD-style unit tests for `x/gov` was completed in PR [#12847](https://github.com/cosmos/cosmos-sdk/pull/12847) [Rejected].
+Observing that a strength of BDD specifications is their readability, and a con is the
+cognitive load while writing and maintaining them, current consensus is to reserve BDD use
+for places in the SDK where complex rules and module interactions are demonstrated.
+More straightforward or low level test cases will continue to rely on go table tests.
+
+Levels of network mocking in integration and e2e tests are still being worked on and formalized.
+ + + +### ADR 060: ABCI 1.0 Integration (Phase I) + + + +# ADR 60: ABCI 1.0 Integration (Phase I) + +## Changelog + +* 2022-08-10: Initial Draft (@alexanderbez, @tac0turtle) +* Nov 12, 2022: Update `PrepareProposal` and `ProcessProposal` semantics per the + initial implementation [PR](https://github.com/cosmos/cosmos-sdk/pull/13453) (@alexanderbez) + +## Status + +ACCEPTED + +## Abstract + +This ADR describes the initial adoption of [ABCI 1.0](https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md), +the next evolution of ABCI, within the Cosmos SDK. ABCI 1.0 aims to provide +application developers with more flexibility and control over application and +consensus semantics, e.g. in-application mempools, in-process oracles, and +order-book style matching engines. + +## Context + +Tendermint will release ABCI 1.0. Notably, at the time of this writing, +Tendermint is releasing v0.37.0 which will include `PrepareProposal` and `ProcessProposal`. + +The `PrepareProposal` ABCI method is concerned with a block proposer requesting +the application to evaluate a series of transactions to be included in the next +block, defined as a slice of `TxRecord` objects. The application can either +accept, reject, or completely ignore some or all of these transactions. This is +an important consideration to make as the application can essentially define and +control its own mempool allowing it to define sophisticated transaction priority +and filtering mechanisms, by completely ignoring the `TxRecords` Tendermint +sends it, favoring its own transactions. This essentially means that the Tendermint +mempool acts more like a gossip data structure. + +The second ABCI method, `ProcessProposal`, is used to process the block proposer's +proposal as defined by `PrepareProposal`. It is important to note the following +with respect to `ProcessProposal`: + +* Execution of `ProcessProposal` must be deterministic. 
+* There must be coherence between `PrepareProposal` and `ProcessProposal`. In
+  other words, for any two correct processes *p* and *q*, if *q*'s Tendermint
+  calls `RequestProcessProposal` on *v<sub>p</sub>*, *q*'s Application returns
+  ACCEPT in `ResponseProcessProposal`.
+
+It is important to note that in ABCI 1.0 integration, the application
+is NOT responsible for locking semantics -- Tendermint will still be responsible
+for that. In the future, however, the application will be responsible for locking,
+which allows for parallel execution possibilities.
+
+## Decision
+
+We will integrate ABCI 1.0, which will be introduced in Tendermint
+v0.37.0, in the next major release of the Cosmos SDK. We will integrate ABCI 1.0
+methods on the `BaseApp` type. We describe the implementations of the two methods
+individually below.
+
+Prior to describing the implementation of the two new methods, it is important to
+note that the existing ABCI methods, `CheckTx`, `DeliverTx`, etc, still exist and
+serve the same functions as they do now.
+
+### `PrepareProposal`
+
+Prior to evaluating the decision for how to implement `PrepareProposal`, it is
+important to note that `CheckTx` will still be executed and will be responsible
+for evaluating transaction validity as it does now, with one very important
+*additive* distinction.
+
+When executing transactions in `CheckTx`, the application will now add valid
+transactions, i.e. passing the AnteHandler, to its own mempool data structure.
+In order to provide a flexible approach to meet the varying needs of application
+developers, we will define both a mempool interface and a data structure utilizing
+Golang generics, allowing developers to focus only on transaction
+ordering. Developers requiring absolute full control can implement their own
+custom mempool implementation.
+
+We define the general mempool interface as follows (subject to change):
+
+```go
+type Mempool interface {
+	// Insert attempts to insert a Tx into the app-side mempool returning
+	// an error upon failure.
+	Insert(sdk.Context, sdk.Tx) error
+
+	// Select returns an Iterator over the app-side mempool. If txs are specified,
+	// then they shall be incorporated into the Iterator. The Iterator must
+	// be closed by the caller.
+	Select(sdk.Context, [][]byte) Iterator
+
+	// CountTx returns the number of transactions currently in the mempool.
+	CountTx() int
+
+	// Remove attempts to remove a transaction from the mempool, returning an error
+	// upon failure.
+	Remove(sdk.Tx) error
+}
+
+// Iterator defines an app-side mempool iterator interface that is as minimal as
+// possible. The order of iteration is determined by the app-side mempool
+// implementation.
+type Iterator interface {
+	// Next returns the next transaction from the mempool. If there are no more
+	// transactions, it returns nil.
+	Next() Iterator
+
+	// Tx returns the transaction at the current position of the iterator.
+	Tx() sdk.Tx
+}
+```
+
+We will define an implementation of `Mempool`, named `nonceMempool`, that
+will cover most basic application use-cases. Namely, it will prioritize transactions
+by transaction sender, allowing for multiple transactions from the same sender.
+
+The default app-side mempool implementation, `nonceMempool`, will operate on a
+single skip list data structure. Specifically, transactions with the lowest nonce
+globally are prioritized. Transactions with the same nonce are prioritized by
+sender address.
+
+```go
+type nonceMempool struct {
+	txQueue *huandu.SkipList
+}
+```
+
+Previous discussions [1] have come to the agreement that Tendermint will
+perform a request to the application, via `RequestPrepareProposal`, with a certain
+amount of transactions reaped from Tendermint's local mempool.
The exact amount
+of transactions reaped will be determined by a local operator configuration.
+This is referred to as the "one-shot approach" seen in discussions.
+
+When Tendermint reaps transactions from the local mempool and sends them to the
+application via `RequestPrepareProposal`, the application will have to evaluate
+the transactions. Specifically, it will need to inform Tendermint if it should
+reject or include each transaction. Note, the application can even *replace*
+transactions entirely with other transactions.
+
+When evaluating transactions from `RequestPrepareProposal`, the application will
+ignore *ALL* transactions sent to it in the request and instead reap up to
+`RequestPrepareProposal.max_tx_bytes` from its own mempool.
+
+Since an application can technically insert or inject transactions on `Insert`
+during `CheckTx` execution, it is recommended that applications ensure transaction
+validity when reaping transactions during `PrepareProposal`. However, what validity
+exactly means is entirely determined by the application.
+
+The Cosmos SDK will provide a default `PrepareProposal` implementation that simply
+selects up to `MaxBytes` *valid* transactions.
+
+However, applications can override this default implementation with their own
+implementation and set that on `BaseApp` via `SetPrepareProposal`.
+
+
+### `ProcessProposal`
+
+The `ProcessProposal` ABCI method is relatively straightforward. It is responsible
+for ensuring validity of the proposed block containing transactions that were
+selected from the `PrepareProposal` step. However, how an application determines
+validity of a proposed block depends on the application and its varying use cases.
+For most applications, simply calling the `AnteHandler` chain would suffice, but
+there could easily be other applications that need more control over the validation
+process of the proposed block, such as ensuring txs are in a certain order or
+that certain transactions are included.
While this theoretically could be achieved
+with a custom `AnteHandler` implementation, it's not the cleanest UX or the most
+efficient solution.
+
+Instead, we will define an additional ABCI interface method on the existing
+`Application` interface, similar to the existing ABCI methods such as `BeginBlock`
+or `EndBlock`. This new interface method will be defined as follows:
+
+```go
+ProcessProposal(sdk.Context, abci.RequestProcessProposal) error
+```
+
+Note, we must call `ProcessProposal` with a new internal branched state on the
+`Context` argument as we cannot simply use the existing `checkState` because
+`BaseApp` already has a modified `checkState` at this point. So when executing
+`ProcessProposal`, we create a similar branched state, `processProposalState`,
+off of `deliverState`. Note, the `processProposalState` is never committed and
+is completely discarded after `ProcessProposal` finishes execution.
+
+The Cosmos SDK will provide a default implementation of `ProcessProposal` in which
+all transactions are validated using the CheckTx flow, i.e. the AnteHandler, and
+will always return ACCEPT unless any transaction cannot be decoded.
+
+### `DeliverTx`
+
+Transactions are not truly removed from the app-side mempool during
+`PrepareProposal`, because `ProcessProposal` can fail or take multiple rounds and
+we do not want to lose transactions. Instead, we finally remove each transaction
+from the app-side mempool during `DeliverTx`, since at that phase the
+transactions are being included in the proposed block.
+
+Alternatively, we can keep the transactions as truly being removed during the
+reaping phase in `PrepareProposal` and add them back to the app-side mempool in
+case `ProcessProposal` fails.
+
+## Consequences
+
+### Backwards Compatibility
+
+ABCI 1.0 is naturally not backwards compatible with prior versions of the Cosmos SDK
+and Tendermint.
For example, a Tendermint node
+that sends `RequestPrepareProposal` to an application that does not speak ABCI 1.0
+will naturally fail.
+
+However, in the first phase of the integration, the existing ABCI methods as we
+know them today will still exist and function as they currently do.
+
+### Positive
+
+* Applications now have full control over transaction ordering and priority.
+* Lays the groundwork for the full integration of ABCI 1.0, which will unlock more
+  app-side use cases around block construction and integration with the Tendermint
+  consensus engine.
+
+### Negative
+
+* Requires that the "mempool", as a general data structure that collects and stores
+  uncommitted transactions, will be duplicated between both Tendermint and the
+  Cosmos SDK.
+* Additional requests between Tendermint and the Cosmos SDK in the context of
+  block execution. However, the overhead should be negligible.
+* Not backwards compatible with previous versions of Tendermint and the Cosmos SDK.
+
+## Further Discussions
+
+It is possible to design the app-side implementation of the `Mempool` interface
+in many different ways using different data structures and implementations, all
+of which have different tradeoffs. The proposed solution keeps things simple
+and covers cases that would be required for most basic applications. There are
+tradeoffs that can be made to improve performance of reaping and inserting into
+the provided mempool implementation.
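To make the design space concrete, here is a minimal FIFO-ordered mempool in the shape of the `Mempool`/`Iterator` interfaces from the Decision section. `sdk.Context` and `sdk.Tx` are replaced by local stand-ins so the sketch is self-contained; it is illustrative only, not the SDK's `nonceMempool`:

```go
package main

import "fmt"

// Tx stands in for sdk.Tx to keep the sketch self-contained.
type Tx string

// Iterator mirrors the minimal iterator shape from the Decision section.
type Iterator interface {
	Next() Iterator
	Tx() Tx
}

// fifoMempool orders transactions purely by insertion time.
type fifoMempool struct {
	txs []Tx
}

func (m *fifoMempool) Insert(tx Tx) error {
	m.txs = append(m.txs, tx)
	return nil
}

func (m *fifoMempool) CountTx() int { return len(m.txs) }

func (m *fifoMempool) Remove(tx Tx) error {
	for i, t := range m.txs {
		if t == tx {
			m.txs = append(m.txs[:i], m.txs[i+1:]...)
			return nil
		}
	}
	return fmt.Errorf("tx not found: %s", tx)
}

// Select returns an iterator positioned at the first transaction, or nil
// when the mempool is empty.
func (m *fifoMempool) Select() Iterator {
	if len(m.txs) == 0 {
		return nil
	}
	return &fifoIterator{txs: m.txs}
}

type fifoIterator struct {
	txs []Tx
	idx int
}

func (it *fifoIterator) Next() Iterator {
	if it.idx+1 >= len(it.txs) {
		return nil // no more transactions
	}
	return &fifoIterator{txs: it.txs, idx: it.idx + 1}
}

func (it *fifoIterator) Tx() Tx { return it.txs[it.idx] }

func main() {
	mp := &fifoMempool{}
	mp.Insert("tx-a")
	mp.Insert("tx-b")
	mp.Insert("tx-c")

	// PrepareProposal-style reaping: walk the iterator in mempool order.
	for it := mp.Select(); it != nil; it = it.Next() {
		fmt.Println(it.Tx())
	}
}
```

A priority or nonce-ordered implementation would swap the slice for a skip list or heap while keeping the same interface shape, which is the tradeoff space the paragraph above refers to.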
+ +## References + +* https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md +* [1] https://github.com/tendermint/tendermint/issues/7750#issuecomment-1076806155 +* [2] https://github.com/tendermint/tendermint/issues/7750#issuecomment-1075717151 + + + +### ADR 061: ADR-061: Liquid Staking + + + +# ADR-061: Liquid Staking + +## Changelog + +* 2022-09-10: Initial Draft (@zmanian) + +## Status + +ACCEPTED + +## Abstract + +Add a semi-fungible liquid staking primitive to the default Cosmos SDK staking module. This upgrades proof of stake to enable safe designs with lower overall monetary issuance and integration with numerous liquid staking protocols like Stride, Persistence, Quicksilver, Lido etc. + +## Context + +The original release of the Cosmos Hub featured the implementation of a ground breaking proof of stake mechanism featuring delegation, slashing, in protocol reward distribution and adaptive issuance. This design was state of the art for 2016 and has been deployed without major changes by many L1 blockchains. + +As both Proof of Stake and blockchain use cases have matured, this design has aged poorly and should no longer be considered a good baseline Proof of Stake issuance. In the world of application specific blockchains, there cannot be a one size fits all blockchain but the Cosmos SDK does endeavour to provide a good baseline implementation and one that is suitable for the Cosmos Hub. + +The most important deficiency of the legacy staking design is that it composes poorly with on chain protocols for trading, lending, derivatives that are referred to collectively as DeFi. The legacy staking implementation starves these applications of liquidity by increasing the risk free rate adaptively. It basically makes DeFi and staking security somewhat incompatible. + +The Osmosis team has adopted the idea of Superfluid and Interfluid staking where assets that are participating in DeFi applications can also be used in proof of stake. 
This requires tight integration with an enshrined set of DeFi applications and thus is unsuitable for the Cosmos SDK.
+
+It's also important to note that Interchain Accounts are available in the default IBC implementation and can be used to [rehypothecate](https://www.investopedia.com/terms/h/hypothecation.asp#toc-what-is-rehypothecation) delegations. Thus liquid staking is already possible and these changes merely improve the UX of liquid staking. Centralized exchanges also rehypothecate staked assets, posing challenges for decentralization. This ADR takes the position that adoption of in-protocol liquid staking is the preferable outcome and provides new levers to incentivize decentralization of stake.
+
+These changes to the staking module have been in development for more than a year and have seen substantial industry adoption from teams who plan to build staking UX. The internal economics team at Informal has also done a review of the impacts of these changes and this review led to the development of the exempt delegation system. This system provides governance with a tuneable parameter for modulating the risks of the principal-agent problem, called the exemption factor.
+
+## Decision
+
+We implement the semi-fungible liquid staking system and exemption factor system within the Cosmos SDK. Though registered as fungible assets, these tokenized shares have extremely limited fungibility, only among the specific delegation record that was created when shares were tokenized. These assets can be used for OTC trades but composability with DeFi is limited. The primary expected use case is improving the user experience of liquid staking providers.
+
+A new governance parameter is introduced that defines the ratio of exempt to issued tokenized shares. This is called the exemption factor. A larger exemption factor allows more tokenized shares to be issued for a smaller amount of exempt delegations.
If governance is comfortable with how the liquid staking market is evolving, it makes sense to increase this value.
+
+Min self delegation is removed from the staking system with the expectation that it will be replaced by the exempt delegations system. The exempt delegation system allows multiple accounts to demonstrate economic alignment with the validator operator as team members, partners etc. without co-mingling funds. Delegation exemption will likely be required to grow the validators' business under widespread adoption of liquid staking once governance has adjusted the exemption factor.
+
+When shares are tokenized, the underlying shares are transferred to a module account and rewards go to the module account for the TokenizedShareRecord.
+
+There is no longer a mechanism to override the validator's vote for TokenizedShares.
+
+
+### `MsgTokenizeShares`
+
+The MsgTokenizeShares message is used to tokenize delegated tokens. This message can be executed by any delegator who has a positive amount of delegation; after execution, the specified amount of delegation disappears from the account and share tokens are provided. Share tokens are denominated in the validator and record id of the underlying delegation.
+
+A user may tokenize some or all of their delegation.
+
+They will receive shares with the denom of `cosmosvaloper1xxxx/5` where 5 is the record id for the validator operator.
+
+MsgTokenizeShares fails if the account is a VestingAccount. Users will have to move vested tokens to a new account and endure the unbonding period. We view this as an acceptable tradeoff vs. the complex book keeping required to track vested tokens.
+
+The total amount of outstanding tokenized shares for the validator is checked against the sum of exempt delegations multiplied by the exemption factor. If the tokenized shares exceed this limit, execution fails.
+
+MsgTokenizeSharesResponse provides the number of tokens generated and their denom.
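The limit check described above can be made concrete with a small sketch. The function name and plain integer arithmetic are illustrative only; the real implementation operates on the staking module's decimal types:

```go
package main

import "fmt"

// canTokenize reports whether a tokenization request stays within the cap
// described above: outstanding tokenized shares plus the new request must not
// exceed exempt delegations multiplied by the exemption factor.
func canTokenize(outstanding, requested, exemptDelegations, exemptionFactor int64) bool {
	return outstanding+requested <= exemptDelegations*exemptionFactor
}

func main() {
	// With 100 exempt delegation tokens and an exemption factor of 5,
	// at most 500 tokenized shares may be outstanding.
	fmt.Println(canTokenize(400, 50, 100, 5)) // within the cap
	fmt.Println(canTokenize(480, 50, 100, 5)) // would exceed the cap
}
```

Note that with an exemption factor of zero, no tokenized shares can ever be issued, which is the backwards-compatibility mode described in the Consequences section.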
+
+
+### `MsgRedeemTokensforShares`
+
+The MsgRedeemTokensforShares message is used to redeem the delegation from share tokens. This message can be executed by any user who owns share tokens. After execution, the delegation is returned to the user.
+
+### `MsgTransferTokenizeShareRecord`
+
+The MsgTransferTokenizeShareRecord message is used to transfer the ownership of rewards generated from the tokenized amount of delegation. The tokenize share record is created when a user tokenizes their delegation and deleted when the full amount of share tokens is redeemed.
+
+This is designed to work with liquid staking designs that do not redeem the tokenized shares and may instead want to keep the shares tokenized.
+
+
+### `MsgExemptDelegation`
+
+The MsgExemptDelegation message is used to exempt a delegation to a validator. If the exemption factor is greater than 0, this will allow more delegation shares to be issued from the validator.
+
+This design allows the chain to force an amount of self-delegation by validators participating in liquid staking schemes.
+
+## Consequences
+
+### Backwards Compatibility
+
+By setting the exemption factor to zero, this module works like legacy staking. The only substantial change is the removal of min-self-bond and without any tokenized shares, there is no incentive to exempt delegation.
+
+### Positive
+
+This approach should enable integration with liquid staking providers and improved user experience. It provides a pathway to security under non-exponential issuance policies in the baseline staking module.
+
+
+
+### ADR 062: Collections, a simplified storage layer for cosmos-sdk modules
+
+
+
+# ADR 062: Collections, a simplified storage layer for cosmos-sdk modules
+
+## Changelog
+
+* 30/11/2022: PROPOSED
+
+## Status
+
+PROPOSED - Implemented
+
+## Abstract
+
+We propose a simplified module storage layer which leverages golang generics to allow module developers to handle module
+storage in a simple and straightforward manner, whilst offering safety, extensibility and standardization.
+
+## Context
+
+Module developers are forced into manually implementing storage functionalities in their modules, those functionalities include
+but are not limited to:
+
+* Defining key to bytes formats.
+* Defining value to bytes formats.
+* Defining secondary indexes.
+* Defining query methods to expose outside to deal with storage.
+* Defining local methods to deal with storage writing.
+* Dealing with genesis imports and exports.
+* Writing tests for all the above.
+
+
+This brings in a lot of problems:
+
+* It blocks developers from focusing on the most important part: writing business logic.
+* Key to bytes formats are complex and their definition is error-prone, for example:
+  * how do I format time to bytes in such a way that bytes are sorted?
+  * how do I ensure I don't have namespace collisions when dealing with secondary indexes?
+* The lack of standardization makes life hard for clients, and the problem is exacerbated when it comes to providing proofs for objects present in state. Clients are forced to maintain a list of object paths to gather proofs.
+
+### Current Solution: ORM
+
+The current SDK proposed solution to this problem is [ORM](#adr-055-orm).
+Whilst ORM offers a lot of good functionality aimed at solving these specific problems, it has some downsides:
+
+* It requires migrations.
+* It uses the newest protobuf golang API, whilst the SDK still mainly uses gogoproto.
+* Integrating ORM into a module would require the developer to deal with two different golang frameworks (golang protobuf + gogoproto) representing the same API objects.
+* It has a high learning curve, even for simple storage layers as it requires developers to have knowledge around protobuf options, custom cosmos-sdk storage extensions, and tooling download. Then after this they still need to learn the code-generated API.
+
+### CosmWasm Solution: cw-storage-plus
+
+The collections API takes inspiration from [cw-storage-plus](https://docs.cosmwasm.com/docs/1.0/smart-contracts/state/cw-plus/),
+which has demonstrated to be a powerful tool for dealing with storage in CosmWasm contracts.
+It is simple, does not require extra tooling, and makes it easy to deal with complex storage structures (indexes, snapshot, etc).
+The API is straightforward and explicit.
+
+## Decision
+
+We propose to port the `collections` API, whose implementation lives in [NibiruChain/collections](https://github.com/NibiruChain/collections), to cosmos-sdk.
+
+Collections implements five different storage handler types:
+
+* `Map`: which deals with simple `key=>object` mappings.
+* `KeySet`: which acts as a `Set` and only retains keys and no object (usecase: allow-lists).
+* `Item`: which always contains only one object (usecase: Params)
+* `Sequence`: which implements a simple always increasing number (usecase: Nonces)
+* `IndexedMap`: builds on top of `Map` and `KeySet` and allows creating relationships between objects and their secondary keys.
+
+All the collection APIs build on top of the simple `Map` type.
+
+Collections is fully generic, meaning that anything can be used as `Key` and `Value`. It can be a protobuf object or not.
+
+Collections types, in fact, delegate the duty of serialization of keys and values to a secondary collections API component called `ValueEncoders` and `KeyEncoders`.
+
+`ValueEncoders` take care of converting a value to bytes (relevant only for `Map`).
They also offer a plug-and-play layer which allows us to change how we encode objects,
+which is relevant for swapping serialization frameworks and enhancing performance.
+`Collections` already comes with default `ValueEncoders`, specifically for: protobuf objects, special SDK types (sdk.Int, sdk.Dec).
+
+`KeyEncoders` take care of converting keys to bytes; `collections` already comes with some default `KeyEncoders` for some primitive golang types
+(uint64, string, time.Time, ...) and some widely used sdk types (sdk.Acc/Val/ConsAddress, sdk.Int/Dec, ...).
+These default implementations also offer safety around proper lexicographic ordering and namespace-collision.
+
+Examples of the collections API can be found here:
+
+* introduction: https://github.com/NibiruChain/collections/tree/main/examples
+* usage in nibiru: [x/oracle](https://github.com/NibiruChain/nibiru/blob/master/x/oracle/keeper/keeper.go#L32), [x/perp](https://github.com/NibiruChain/nibiru/blob/master/x/perp/keeper/keeper.go#L31)
+* cosmos-sdk's x/staking migrated: https://github.com/testinginprod/cosmos-sdk/pull/22
+
+
+## Consequences
+
+### Backwards Compatibility
+
+The design of `ValueEncoders` and `KeyEncoders` allows modules to retain the same `byte(key)=>byte(value)` mappings, making
+the upgrade to the new storage layer non-state breaking.
+
+
+### Positive
+
+* ADR aimed at removing code from the SDK rather than adding it. Migrating just `x/staking` to collections would yield a net decrease in LOC (even considering the addition of collections itself).
+* Simplifies and standardizes storage layers across modules in the SDK.
+* Does not require dealing with protobuf.
+* It's pure golang code.
+* Generalization over `KeyEncoders` and `ValueEncoders` allows us to not tie ourselves to the data serialization framework.
+* `KeyEncoders` and `ValueEncoders` can be extended to provide schema reflection.
+
+### Negative
+
+* Golang generics are not as battle-tested as other Golang features, despite being used in production right now.
+* Collection types instantiation needs to be improved.
+
+### Neutral
+
+{neutral consequences}
+
+## Further Discussions
+
+* Automatic genesis import/export (not implemented because of API breakage)
+* Schema reflection
+
+
+## References
+
+
+
+### ADR 063: Core Module API
+
+
+
+# ADR 063: Core Module API
+
+## Changelog
+
+* 2022-08-18 First Draft
+* 2022-12-08 First Draft
+* 2023-01-24 Updates
+
+## Status
+
+ACCEPTED Partially Implemented
+
+## Abstract
+
+A new core API is proposed as a way to develop cosmos-sdk applications that will eventually replace the existing
+`AppModule` and `sdk.Context` frameworks with a set of core services and extension interfaces. This core API aims to:
+
+* be simpler
+* be more extensible
+* be more stable than the current framework
+* enable deterministic events and queries
+* support event listeners
+* support [ADR 033: Protobuf-based Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md) clients
+
+## Context
+
+Historically modules have exposed their functionality to the framework via the `AppModule` and `AppModuleBasic`
+interfaces which have the following shortcomings:
+
+* both `AppModule` and `AppModuleBasic` need to be defined and registered which is counter-intuitive
+* apps need to implement the full interfaces, even parts they don't need (although there are workarounds for this),
+* interface methods depend heavily on unstable third party dependencies, in particular Comet,
+* legacy required methods have littered these interfaces for far too long
+
+In order to interact with the state machine, modules have needed to do a combination of these things:
+
+* get store keys from the app
+* call methods on `sdk.Context` which contains more or less the full set of capabilities available to modules.
+
+By isolating all the state machine functionality into `sdk.Context`, the set of functionalities available to
+modules is tightly coupled to this type. If there are changes to upstream dependencies (such as Comet)
+or new functionalities are desired (such as alternate store types), the changes impact `sdk.Context` and all
+consumers of it (basically all modules). Also, all modules now receive `context.Context` and need to convert these
+to `sdk.Context`s with a non-ergonomic unwrapping function.
+
+Any breaking changes to these interfaces, such as ones imposed by third-party dependencies like Comet, have the
+side effect of forcing all modules in the ecosystem to update in lock-step. This means it is almost impossible to have
+a version of the module which can be run with 2 or 3 different versions of the SDK or 2 or 3 different versions of
+another module. This lock-step coupling slows down overall development within the ecosystem and causes updates to
+components to be delayed longer than they would be if things were more stable and loosely coupled.
+
+## Decision
+
+The `core` API proposes a set of core APIs that modules can rely on to interact with the state machine and expose their
+functionalities to it. These APIs are designed in a principled way such that:
+
+* tight coupling of dependencies and unrelated functionalities is minimized or eliminated
+* APIs can have long-term stability guarantees
+* the SDK framework is extensible in a safe and straightforward way
+
+The design principles of the core API are as follows:
+
+* everything that a module wants to interact with in the state machine is a service
+* all services coordinate state via `context.Context` and don't try to recreate the "bag of variables" approach of `sdk.Context`
+* all independent services are isolated in independent packages with minimal APIs and minimal dependencies
+* the core API should be minimalistic and designed for long-term support (LTS)
+* a "runtime" module will implement all the "core services" defined by the core API and can handle all module
+  functionalities exposed by core extension interfaces
+* other non-core and/or non-LTS services can be exposed by specific versions of runtime modules or other modules
+following the same design principles, this includes functionality that interacts with specific non-stable versions of
+third party dependencies such as Comet
+* the core API doesn't implement *any* functionality, it just defines types
+* go stable API compatibility guidelines are followed: https://go.dev/blog/module-compatibility
+
+A "runtime" module is any module which implements the core functionality of composing an ABCI app, which is currently
+handled by `BaseApp` and the `ModuleManager`. Runtime modules which implement the core API are *intentionally* separate
+from the core API in order to enable more parallel versions and forks of the runtime module than is possible with the
+SDK's current tightly coupled `BaseApp` design while still allowing for a high degree of composability and
+compatibility.
+ +Modules which are built only against the core API don't need to know anything about which version of runtime, +`BaseApp` or Comet in order to be compatible. Modules from the core mainline SDK could be easily composed +with a forked version of runtime with this pattern. + +This design is intended to enable matrices of compatible dependency versions. Ideally a given version of any module +is compatible with multiple versions of the runtime module and other compatible modules. This will allow dependencies +to be selectively updated based on battle-testing. More conservative projects may want to update some dependencies +slower than more fast moving projects. + +### Core Services + +The following "core services" are defined by the core API. All valid runtime module implementations should provide +implementations of these services to modules via both [dependency injection](./adr-057-app-wiring.md) and +manual wiring. The individual services described below are all bundled in a convenient `appmodule.Service` +"bundle service" so that for simplicity modules can declare a dependency on a single service. + +#### Store Services + +Store services will be defined in the `cosmossdk.io/core/store` package. + +The generic `store.KVStore` interface is the same as current SDK `KVStore` interface. Store keys have been refactored +into store services which, instead of expecting the context to know about stores, invert the pattern and allow +retrieving a store from a generic context. 
There are three store services for the three types of currently supported +stores - regular kv-store, memory, and transient: + +```go +type KVStoreService interface { + OpenKVStore(context.Context) KVStore +} + +type MemoryStoreService interface { + OpenMemoryStore(context.Context) KVStore +} +type TransientStoreService interface { + OpenTransientStore(context.Context) KVStore +} +``` + +Modules can use these services like this: + +```go +func (k msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + store := k.kvStoreSvc.OpenKVStore(ctx) +} +``` + +Just as with the current runtime module implementation, modules will not need to explicitly name these store keys, +but rather the runtime module will choose an appropriate name for them and modules just need to request the +type of store they need in their dependency injection (or manual) constructors. + +#### Event Service + +The event `Service` will be defined in the `cosmossdk.io/core/event` package. + +The event `Service` allows modules to emit typed and legacy untyped events: + +```go +package event + +type Service interface { + // EmitProtoEvent emits events represented as a protobuf message (as described in ADR 032). + // + // Callers SHOULD assume that these events may be included in consensus. These events + // MUST be emitted deterministically and adding, removing or changing these events SHOULD + // be considered state-machine breaking. + EmitProtoEvent(ctx context.Context, event protoiface.MessageV1) error + + // EmitKVEvent emits an event based on an event and kv-pair attributes. + // + // These events will not be part of consensus and adding, removing or changing these events is + // not a state-machine breaking change. + EmitKVEvent(ctx context.Context, eventType string, attrs ...KVEventAttribute) error + + // EmitProtoEventNonConsensus emits events represented as a protobuf message (as described in ADR 032), without + // including it in blockchain consensus. 
+	// including it in blockchain consensus.
+	//
+	// These events will not be part of consensus and adding, removing or changing events is
+	// not a state-machine breaking change.
+	EmitProtoEventNonConsensus(ctx context.Context, event protoiface.MessageV1) error
+}
+```
+
+Typed events emitted with `EmitProtoEvent` should be assumed to be part of blockchain consensus (whether they are part of
+the block or app hash is left to the runtime to specify).
+
+Events emitted by `EmitKVEvent` and `EmitProtoEventNonConsensus` are not considered to be part of consensus and cannot be observed
+by other modules. If there is a client-side need to add events in patch releases, these methods can be used.
+
+#### Logger
+
+A logger (`cosmossdk.io/log`) must be supplied using `depinject`, and will
+be made available for modules to use via `depinject.In`.
+Modules using it should follow the current pattern in the SDK by adding the module name before using it.
+
+```go
+type ModuleInputs struct {
+	depinject.In
+
+	Logger log.Logger
+}
+
+func ProvideModule(in ModuleInputs) ModuleOutputs {
+	keeper := keeper.NewKeeper(
+		in.Logger,
+	)
+}
+
+func NewKeeper(logger log.Logger) Keeper {
+	return Keeper{
+		logger: logger.With(log.ModuleKey, "x/"+types.ModuleName),
+	}
+}
+```
+
+### Core `AppModule` extension interfaces
+
+
+Modules will provide their core services to the runtime module via extension interfaces built on top of the
+`cosmossdk.io/core/appmodule.AppModule` tag interface. This tag interface requires only two empty methods which
+allow `depinject` to identify implementers as `depinject.OnePerModule` types and as app module implementations:
+
+```go
+type AppModule interface {
+	depinject.OnePerModuleType
+
+	// IsAppModule is a dummy method to tag a struct as implementing an AppModule.
+	IsAppModule()
+}
+```
+
+Other core extension interfaces will be defined in `cosmossdk.io/core` and should be supported by valid runtime
+implementations.
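Satisfying the tag interface is intentionally trivial. A self-contained sketch, with local stand-ins for the real `cosmossdk.io/core/appmodule` and `depinject` types:

```go
package main

import "fmt"

// OnePerModuleType stands in for depinject.OnePerModuleType.
type OnePerModuleType interface {
	IsOnePerModuleType()
}

// AppModule mirrors the tag interface shown above.
type AppModule interface {
	OnePerModuleType

	IsAppModule()
}

// MyModule satisfies the tag interface with two empty methods; all real
// behavior is added later through extension interfaces like HasServices.
type MyModule struct{}

func (MyModule) IsAppModule()        {}
func (MyModule) IsOnePerModuleType() {}

func main() {
	// Compile-time check that MyModule is a valid AppModule.
	var _ AppModule = MyModule{}
	fmt.Println("MyModule implements AppModule")
}
```

Because the tag interface carries no behavior, adding a new capability to a module never forces unrelated modules to change, which is the loose coupling the ADR is after.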
+
+#### `MsgServer` and `QueryServer` registration
+
+`MsgServer` and `QueryServer` registration is done by implementing the `HasServices` extension interface:
+
+```go
+type HasServices interface {
+ AppModule
+
+ RegisterServices(grpc.ServiceRegistrar)
+}
+```
+
+Because of the `cosmos.msg.v1.service` protobuf option, required for `Msg` services, the same `ServiceRegistrar` can be
+used to register both `Msg` and query services.
+
+#### Genesis
+
+The genesis `Handler` functions - `DefaultGenesis`, `ValidateGenesis`, `InitGenesis` and `ExportGenesis` - are specified
+against the `GenesisSource` and `GenesisTarget` interfaces, which abstract over genesis sources that may be a single
+JSON object or collections of JSON objects that can be efficiently streamed.
+
+```go
+// GenesisSource is a source for genesis data in JSON format. It may abstract over a
+// single JSON object or separate files for each field in a JSON object that can
+// be streamed over. Modules should open a separate io.ReadCloser for each field that
+// is required. When fields represent arrays they can efficiently be streamed
+// over. If there is no data for a field, this function should return nil, nil. It is
+// important that the caller closes the reader when done with it.
+type GenesisSource = func(field string) (io.ReadCloser, error)
+
+// GenesisTarget is a target for writing genesis data in JSON format. It may
+// abstract over a single JSON object or JSON in separate files that can be
+// streamed over. Modules should open a separate io.WriteCloser for each field
+// and should prefer writing fields as arrays when possible to support efficient
+// iteration. It is important that the caller closes the writer AND checks the error
+// when done with it. It is expected that a stream of JSON data is written
+// to the writer.
+type GenesisTarget = func(field string) (io.WriteCloser, error) +``` + +All genesis objects for a given module are expected to conform to the semantics of a JSON object. +Each field in the JSON object should be read and written separately to support streaming genesis. +The [ORM](./adr-055-orm.md) and [collections](./adr-062-collections-state-layer.md) both support +streaming genesis and modules using these frameworks generally do not need to write any manual +genesis code. + +To support genesis, modules should implement the `HasGenesis` extension interface: + +```go +type HasGenesis interface { + AppModule + + // DefaultGenesis writes the default genesis for this module to the target. + DefaultGenesis(GenesisTarget) error + + // ValidateGenesis validates the genesis data read from the source. + ValidateGenesis(GenesisSource) error + + // InitGenesis initializes module state from the genesis source. + InitGenesis(context.Context, GenesisSource) error + + // ExportGenesis exports module state to the genesis target. 
+ ExportGenesis(context.Context, GenesisTarget) error
+}
+```
+
+#### Pre Blockers
+
+Modules that have functionality that runs before `BeginBlock` should implement the `HasPreBlocker` interface:
+
+```go
+type HasPreBlocker interface {
+ AppModule
+ PreBlock(context.Context) error
+}
+```
+
+#### Begin and End Blockers
+
+Modules that have functionality that runs before transactions (begin blockers) or after transactions
+(end blockers) should implement the `HasBeginBlocker` and/or `HasEndBlocker` interfaces:
+
+```go
+type HasBeginBlocker interface {
+ AppModule
+ BeginBlock(context.Context) error
+}
+
+type HasEndBlocker interface {
+ AppModule
+ EndBlock(context.Context) error
+}
+```
+
+The `BeginBlock` and `EndBlock` methods will take a `context.Context`, because:
+
+* most modules don't need Comet information other than `BlockInfo` so we can eliminate dependencies on specific
+Comet versions
+* for the few modules that need Comet block headers and/or return validator updates, specific versions of the
+runtime module will provide specific functionality for interacting with the specific version(s) of Comet
+supported
+
+In order for `BeginBlock`, `EndBlock` and `InitGenesis` to send back validator updates and retrieve full Comet
+block headers, the runtime module for a specific version of Comet could provide services like this:
+
+```go
+type ValidatorUpdateService interface {
+ SetValidatorUpdates(context.Context, []abci.ValidatorUpdate)
+}
+```
+
+Header Service defines a way to get header information about a block.
This information is generalized for all implementations:
+
+```go
+type Service interface {
+ GetHeaderInfo(context.Context) Info
+}
+
+type Info struct {
+ Height int64 // Height returns the height of the block
+ Hash []byte // Hash returns the hash of the block header
+ Time time.Time // Time returns the time of the block
+ ChainID string // ChainID returns the chain ID of the block
+}
+```
+
+Comet Service provides a way to get Comet-specific information:
+
+```go
+type Service interface {
+ GetCometInfo(context.Context) CometInfo
+}
+
+type CometInfo struct {
+ Evidence []abci.Misbehavior // Evidence returns the misbehavior evidence of the block
+ // ValidatorsHash returns the hash of the validators
+ // For Comet, it is the hash of the next validators
+ ValidatorsHash []byte
+ ProposerAddress []byte // ProposerAddress returns the address of the block proposer
+ DecidedLastCommit abci.CommitInfo // DecidedLastCommit returns the last commit info
+}
+```
+
+If a user would like to provide a module with other information, they would need to implement another service, e.g.:
+
+```go
+type RollKit interface {
+ ...
+}
+```
+
+We know these types will change at the Comet level, and that only a very limited set of modules actually needs this
+functionality, so they are intentionally kept out of core to keep core limited to the necessary, minimal set of stable
+APIs.
+
+#### Remaining Parts of AppModule
+
+The current `AppModule` framework handles a number of additional concerns which aren't addressed by this core API.
+These include:
+
+* gas
+* block headers
+* upgrades
+* registration of gogo proto and amino interface types
+* cobra query and tx commands
+* gRPC gateway
+* crisis module invariants
+* simulations
+
+Additional `AppModule` extension interfaces either inside or outside of core will need to be specified to handle
+these concerns.
+
+In the case of gogo proto and amino interfaces, the registration of these generally should happen as early
+as possible during initialization and in [ADR 057: App Wiring](./adr-057-app-wiring.md), protobuf type registration
+happens before dependency injection (although this could alternatively be done via dedicated DI providers).
+
+gRPC gateway registration should probably be handled by the runtime module, but the core API shouldn't depend on gRPC
+gateway types as 1) we are already using an older version and 2) it's possible the framework can do this registration
+automatically in the future. So for now, the runtime module should probably provide some sort of specific type for doing
+this registration, e.g.:
+
+```go
+type GrpcGatewayInfo struct {
+ Handlers []GrpcGatewayHandler
+}
+
+type GrpcGatewayHandler func(ctx context.Context, mux *runtime.ServeMux, client QueryClient) error
+```
+
+which modules can return in a provider:
+
+```go
+func ProvideGrpcGateway() GrpcGatewayInfo {
+ return GrpcGatewayInfo{
+ Handlers: []GrpcGatewayHandler{types.RegisterQueryHandlerClient},
+ }
+}
+```
+
+Crisis module invariants and simulations are subject to potential redesign and should be managed with types
+defined in the crisis and simulation modules respectively.
+
+Extension interfaces for CLI commands will be provided via the `cosmossdk.io/client/v2` module and its
+[autocli](./adr-058-auto-generated-cli.md) framework.
+
+#### Example Usage
+
+Here is an example of setting up a hypothetical `foo` v2 module which uses the [ORM](./adr-055-orm.md) for its state
+management and genesis.
+
+```go
+type Keeper struct {
+ db orm.ModuleDB
+ evtSvc event.Service
+}
+
+func (k Keeper) RegisterServices(r grpc.ServiceRegistrar) {
+ foov1.RegisterMsgServer(r, k)
+ foov1.RegisterQueryServer(r, k)
+}
+
+func (k Keeper) BeginBlock(context.Context) error {
+ return nil
+}
+
+func ProvideApp(config *foomodulev2.Module, evtSvc event.Service, db orm.ModuleDB) (*Keeper, appmodule.AppModule) {
+ k := &Keeper{db: db, evtSvc: evtSvc}
+ return k, k
+}
+```
+
+### Runtime Compatibility Version
+
+The `core` module will define a static integer var, `cosmossdk.io/core.RuntimeCompatibilityVersion`, which is
+a minor version indicator of the core module that is accessible at runtime. Correct runtime module implementations
+should check this compatibility version and return an error if the current `RuntimeCompatibilityVersion` is higher
+than the version of the core API that this runtime version can support. When new features are added to the `core`
+module API that runtime modules are required to support, this version should be incremented.
+
+### Runtime Modules
+
+The initial `runtime` module will simply be created within the existing `github.com/cosmos/cosmos-sdk` go module
+under the `runtime` package. This module will be a small wrapper around the existing `BaseApp`, `sdk.Context` and
+module manager and follow the Cosmos SDK's existing [0-based versioning](https://0ver.org). To move to semantic
+versioning as well as runtime modularity, new officially supported runtime modules will be created under the
+`cosmossdk.io/runtime` prefix. For each supported consensus engine a semantically-versioned go module should be created
+with a runtime implementation for that consensus engine. For example:
+
+* `cosmossdk.io/runtime/comet`
+* `cosmossdk.io/runtime/comet/v2`
+* `cosmossdk.io/runtime/rollkit`
+* etc.
+
+These runtime modules should attempt to be semantically versioned even if the underlying consensus engine is not.
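The compatibility check described above might look roughly like this in a runtime module; the constant name mirrors `cosmossdk.io/core.RuntimeCompatibilityVersion`, but the values and the helper function are illustrative assumptions:

```go
package main

import "fmt"

// RuntimeCompatibilityVersion stands in for the static var the core module
// would export; the value here is made up for illustration.
const RuntimeCompatibilityVersion = 3

// maxSupportedCoreVersion is the highest core API minor version this
// (hypothetical) runtime implementation supports.
const maxSupportedCoreVersion = 2

// checkCoreCompatibility returns an error when the core API in use is newer
// than what the runtime can support, as the ADR requires.
func checkCoreCompatibility() error {
	if RuntimeCompatibilityVersion > maxSupportedCoreVersion {
		return fmt.Errorf("core API version %d exceeds the maximum supported version %d",
			RuntimeCompatibilityVersion, maxSupportedCoreVersion)
	}
	return nil
}

func main() {
	if err := checkCoreCompatibility(); err != nil {
		fmt.Println("incompatible:", err)
	}
}
```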
Also,
+because a runtime module is also a first class Cosmos SDK module, it should have a protobuf module config type.
+A new semantically versioned module config type should be created for each of these runtime modules such that there is a
+1:1 correspondence between the go module and module config type. The same practice should be followed for every
+semantically versioned Cosmos SDK module as described in [ADR 057: App Wiring](./adr-057-app-wiring.md).
+
+Currently, `github.com/cosmos/cosmos-sdk/runtime` uses the protobuf config type `cosmos.app.runtime.v1alpha1.Module`.
+When we have a standalone v1 comet runtime, we should use a dedicated protobuf module config type such as
+`cosmos.runtime.comet.v1.Module`. When we release v2 of the comet runtime (`cosmossdk.io/runtime/comet/v2`) we should
+have a corresponding `cosmos.runtime.comet.v2.Module` protobuf type.
+
+In order to make it easier to support different consensus engines that support the same core module functionality as
+described in this ADR, a common go module should be created with shared runtime components. The easiest runtime components
+to share initially are probably the message/query router, inter-module client, service registrar, and event router.
+This common runtime module should be created initially as the `cosmossdk.io/runtime/common` go module.
+
+When this new architecture has been implemented, the main dependency for a Cosmos SDK module would be
+`cosmossdk.io/core` and that module should be able to be used with any supported consensus engine (to the extent
+that it does not explicitly depend on consensus engine specific functionality such as Comet's block headers). An
+app developer would then be able to choose which consensus engine they want to use by importing the corresponding
+runtime module.
The current `BaseApp` would be refactored into the `cosmossdk.io/runtime/comet` module, the router
+infrastructure in `baseapp/` would be refactored into `cosmossdk.io/runtime/common` and support ADR 033, and eventually
+a dependency on `github.com/cosmos/cosmos-sdk` would no longer be required.
+
+In short, modules would depend primarily on `cosmossdk.io/core`, and each `cosmossdk.io/runtime/{consensus-engine}`
+would implement the `cosmossdk.io/core` functionality for that consensus engine.
+
+One additional piece that would need to be resolved as part of this architecture is how runtimes relate to the server.
+Likely it would make sense to modularize the current server architecture so that it can be used with any runtime even
+if that is based on a consensus engine besides Comet. This means that eventually the Comet runtime would need to
+encapsulate the logic for starting Comet and the ABCI app.
+
+### Testing
+
+A mock implementation of all services should be provided in core to allow for unit testing of modules
+without needing to depend on any particular version of runtime. Mock services should
+allow tests to observe service behavior or provide a non-production implementation - for instance memory
+stores can be used to mock stores.
+
+For integration testing, a mock runtime implementation should be provided that allows composing different app modules
+together for testing without a dependency on runtime or Comet.
+
+## Consequences
+
+### Backwards Compatibility
+
+Early versions of runtime modules should aim to support as many modules as possible built with the existing
+`AppModule`/`sdk.Context` framework. As the core API is more widely adopted, later runtime versions may choose to
+drop support and only support the core API plus any runtime module specific APIs (like specific versions of Comet).
+
+The core module itself should strive to remain at the go semantic version `v1` as long as possible and follow design
+principles that allow for strong long-term support (LTS).
+
+Older versions of the SDK can support modules built against core with adaptors that wrap core `AppModule`
+implementations in implementations of `AppModule` that conform to that version of the SDK's semantics as well
+as by providing service implementations by wrapping `sdk.Context`.
+
+### Positive
+
+* better API encapsulation and separation of concerns
+* more stable APIs
+* more framework extensibility
+* deterministic events and queries
+* event listeners
+* inter-module msg and query execution support
+* more explicit support for forking and merging of module versions (including runtime)
+
+### Negative
+
+### Neutral
+
+* modules will need to be refactored to use this API
+* some replacements for `AppModule` functionality still need to be defined in follow-ups
+ (type registration, commands, invariants, simulations) and this will take additional design work
+
+## Further Discussions
+
+* gas
+* block headers
+* upgrades
+* registration of gogo proto and amino interface types
+* cobra query and tx commands
+* gRPC gateway
+* crisis module invariants
+* simulations
+
+## References
+
+* [ADR 033: Protobuf-based Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md)
+* [ADR 057: App Wiring](./adr-057-app-wiring.md)
+* [ADR 055: ORM](./adr-055-orm.md)
+* [ADR 028: Public Key Addresses](./adr-028-public-key-addresses.md)
+* [Keeping Your Modules Compatible](https://go.dev/blog/module-compatibility)
+
+
+
+### ADR 064: ABCI 2.0 Integration (Phase II)
+
+
+
+# ADR 64: ABCI 2.0 Integration (Phase II)
+
+## Changelog
+
+* 2023-01-17: Initial Draft (@alexanderbez)
+* 2023-04-06: Add upgrading section (@alexanderbez)
+* 2023-04-10: Simplify vote extension state persistence (@alexanderbez)
+* 2023-07-07: Revise vote extension state persistence (@alexanderbez)
+* 
2023-08-24: Revise vote extension power calculations and staking interface (@davidterpay) + +## Status + +ACCEPTED + +## Abstract + +This ADR outlines the continuation of the efforts to implement ABCI++ in the Cosmos +SDK outlined in [ADR 060: ABCI 1.0 (Phase I)](adr-060-abci-1.0.md). + +Specifically, this ADR outlines the design and implementation of ABCI 2.0, which +includes `ExtendVote`, `VerifyVoteExtension` and `FinalizeBlock`. + +## Context + +ABCI 2.0 continues the promised updates from ABCI++, specifically three additional +ABCI methods that the application can implement in order to gain further control, +insight and customization of the consensus process, unlocking many novel use-cases +that were previously not possible. We describe these three new methods below: + +### `ExtendVote` + +This method allows each validator process to extend the pre-commit phase of the +CometBFT consensus process. Specifically, it allows the application to perform +custom business logic that extends the pre-commit vote and supply additional data +as part of the vote, although they are signed separately by the same key. + +The data, called vote extension, will be broadcast and received together with the +vote it is extending, and will be made available to the application in the next +height. Specifically, the proposer of the next block will receive the vote extensions +in `RequestPrepareProposal.local_last_commit.votes`. + +If the application does not have vote extension information to provide, it +returns a 0-length byte array as its vote extension. + +**NOTE**: + +* Although each validator process submits its own vote extension, ONLY the *proposer* + of the *next* block will receive all the vote extensions included as part of the + pre-commit phase of the previous block. 
This means only the proposer will + implicitly have access to all the vote extensions, via `RequestPrepareProposal`, + and that not all vote extensions may be included, since a validator does not + have to wait for all pre-commits, only 2/3. +* The pre-commit vote is signed independently from the vote extension. + +### `VerifyVoteExtension` + +This method allows validators to validate the vote extension data attached to +each pre-commit message it receives. If the validation fails, the whole pre-commit +message will be deemed invalid and ignored by CometBFT. + +CometBFT uses `VerifyVoteExtension` when validating a pre-commit vote. Specifically, +for a pre-commit, CometBFT will: + +* Reject the message if it doesn't contain a signed vote AND a signed vote extension +* Reject the message if the vote's signature OR the vote extension's signature fails to verify +* Reject the message if `VerifyVoteExtension` was rejected by the app + +Otherwise, CometBFT will accept the pre-commit message. + +Note, this has important consequences on liveness, i.e., if vote extensions repeatedly +cannot be verified by correct validators, CometBFT may not be able to finalize +a block even if sufficiently many (+2/3) validators send pre-commit votes for +that block. Thus, `VerifyVoteExtension` should be used with special care. + +CometBFT recommends that an application that detects an invalid vote extension +SHOULD accept it in `ResponseVerifyVoteExtension` and ignore it in its own logic. + +### `FinalizeBlock` + +This method delivers a decided block to the application. The application must +execute the transactions in the block deterministically and update its state +accordingly. Cryptographic commitments to the block and transaction results, +returned via the corresponding parameters in `ResponseFinalizeBlock`, are +included in the header of the next block. CometBFT calls it when a new block +is decided. 
+
+In other words, `FinalizeBlock` encapsulates the current ABCI execution flow of
+`BeginBlock`, one or more `DeliverTx`, and `EndBlock` into a single ABCI method.
+CometBFT will no longer execute requests for these legacy methods and instead
+will simply call `FinalizeBlock`.
+
+## Decision
+
+We will discuss changes to the Cosmos SDK to implement ABCI 2.0 in two distinct
+phases, `VoteExtensions` and `FinalizeBlock`.
+
+### `VoteExtensions`
+
+Similarly to `PrepareProposal` and `ProcessProposal`, we propose to introduce
+two new handlers that an application can implement in order to provide and verify
+vote extensions.
+
+We propose the following new handlers for applications to implement:
+
+```go
+type ExtendVoteHandler func(sdk.Context, abci.RequestExtendVote) abci.ResponseExtendVote
+type VerifyVoteExtensionHandler func(sdk.Context, abci.RequestVerifyVoteExtension) abci.ResponseVerifyVoteExtension
+```
+
+An ephemeral context and state will be supplied to both handlers. The
+context will contain relevant metadata such as the block height and block hash.
+The state will be a cached version of the committed state of the application and
+will be discarded after the execution of the handler. This means that both handlers
+get a fresh state view, and no changes made to it will be written.
+
+If an application decides to implement `ExtendVoteHandler`, it must return a
+non-nil `ResponseExtendVote.VoteExtension`.
+
+Recall, an implementation of `ExtendVoteHandler` does NOT need to be deterministic.
+However, given a set of vote extensions, `VerifyVoteExtensionHandler` must be
+deterministic; otherwise, the chain may suffer from liveness faults.
In addition, +recall CometBFT proceeds in rounds for each height, so if a decision cannot be +made about a block proposal at a given height, CometBFT will proceed to the +next round and thus will execute `ExtendVote` and `VerifyVoteExtension` again for +the new round for each validator until 2/3 valid pre-commits can be obtained. + +Given the broad scope of potential implementations and use-cases of vote extensions, +and how to verify them, most applications should choose to implement the handlers +through a single handler type, which can have any number of dependencies injected +such as keepers. In addition, this handler type could contain some notion of +volatile vote extension state management which would assist in vote extension +verification. This state management could be ephemeral or could be some form of +on-disk persistence. + +Example: + +```go +// VoteExtensionHandler implements an Oracle vote extension handler. +type VoteExtensionHandler struct { + cdc Codec + mk MyKeeper + state VoteExtState // This could be a map or a DB connection object +} + +// ExtendVoteHandler can do something with h.mk and possibly h.state to create +// a vote extension, such as fetching a series of prices for supported assets. +func (h VoteExtensionHandler) ExtendVoteHandler(ctx sdk.Context, req abci.RequestExtendVote) abci.ResponseExtendVote { + prices := GetPrices(ctx, h.mk.Assets()) + bz, err := EncodePrices(h.cdc, prices) + if err != nil { + panic(fmt.Errorf("failed to encode prices for vote extension: %w", err)) + } + + // store our vote extension at the given height + // + // NOTE: Vote extensions can be overridden since we can timeout in a round. + SetPrices(h.state, req, bz) + + return abci.ResponseExtendVote{VoteExtension: bz} +} + +// VerifyVoteExtensionHandler can do something with h.state and req to verify +// the req.VoteExtension field, such as ensuring the provided oracle prices are +// within some valid range of our prices. 
+func (h VoteExtensionHandler) VerifyVoteExtensionHandler(ctx sdk.Context, req abci.RequestVerifyVoteExtension) abci.ResponseVerifyVoteExtension { + prices, err := DecodePrices(h.cdc, req.VoteExtension) + if err != nil { + log("failed to decode vote extension", "err", err) + return abci.ResponseVerifyVoteExtension{Status: REJECT} + } + + if err := ValidatePrices(h.state, req, prices); err != nil { + log("failed to validate vote extension", "prices", prices, "err", err) + return abci.ResponseVerifyVoteExtension{Status: REJECT} + } + + // store updated vote extensions at the given height + // + // NOTE: Vote extensions can be overridden since we can timeout in a round. + SetPrices(h.state, req, req.VoteExtension) + + return abci.ResponseVerifyVoteExtension{Status: ACCEPT} +} +``` + +#### Vote Extension Propagation & Verification + +As mentioned previously, vote extensions for height `H` are only made available +to the proposer at height `H+1` during `PrepareProposal`. However, in order to +make vote extensions useful, all validators should have access to the agreed upon +vote extensions at height `H` during `H+1`. + +Since CometBFT includes all the vote extension signatures in `RequestPrepareProposal`, +we propose that the proposing validator manually "inject" the vote extensions +along with their respective signatures via a special transaction, `VoteExtsTx`, +into the block proposal during `PrepareProposal`. The `VoteExtsTx` will be +populated with a single `ExtendedCommitInfo` object which is received directly +from `RequestPrepareProposal`. + +For convention, the `VoteExtsTx` transaction should be the first transaction in +the block proposal, although chains can implement their own preferences. For +safety purposes, we also propose that the proposer itself verify all the vote +extension signatures it receives in `RequestPrepareProposal`. 
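The injection convention described above can be sketched as follows; `ExtendedCommitInfo`, the encoding, and the `prepareProposal` helper are simplified stand-ins for the real ABCI types and handler, not the actual SDK API:

```go
package main

import "fmt"

// ExtendedCommitInfo is a simplified stand-in for the ABCI type received in
// RequestPrepareProposal; Votes stands in for the signed vote extensions.
type ExtendedCommitInfo struct {
	Round int32
	Votes []byte
}

// encodeVoteExtsTx encodes the commit info into the special VoteExtsTx.
// A real implementation would use a canonical (e.g. protobuf) encoding.
func encodeVoteExtsTx(info ExtendedCommitInfo) []byte {
	return append([]byte{byte(info.Round)}, info.Votes...)
}

// prepareProposal prepends the VoteExtsTx so that, by convention, it is the
// first transaction in the block proposal.
func prepareProposal(info ExtendedCommitInfo, txs [][]byte) [][]byte {
	voteExtsTx := encodeVoteExtsTx(info)
	return append([][]byte{voteExtsTx}, txs...)
}

func main() {
	txs := [][]byte{[]byte("tx1"), []byte("tx2")}
	proposal := prepareProposal(ExtendedCommitInfo{Round: 1, Votes: []byte("ve")}, txs)
	fmt.Println(len(proposal), string(proposal[1])) // VoteExtsTx first, then tx1, tx2
}
```

Keeping the `VoteExtsTx` in a fixed, well-known position is what lets every validator locate and verify it cheaply during `ProcessProposal`.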
+
+A validator, upon a `RequestProcessProposal`, will receive the injected `VoteExtsTx`
+which includes the vote extensions along with their signatures. If no such transaction
+exists, the validator MUST REJECT the proposal.
+
+When a validator inspects a `VoteExtsTx`, it will evaluate each `SignedVoteExtension`.
+For each signed vote extension, the validator will generate the signed bytes and
+verify the signature. At least 2/3 valid signatures, based on voting power, must
+be received in order for the block proposal to be valid; otherwise, the validator
+MUST REJECT the proposal.
+
+In order to have the ability to validate signatures, `BaseApp` must have access
+to the `x/staking` module, since this module stores an index from consensus
+address to public key. However, we will avoid a direct dependency on `x/staking`
+and instead rely on an interface. In addition, the Cosmos SDK will expose
+a default signature verification method which applications can use:
+
+```go
+type ValidatorStore interface {
+ GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error)
+}
+
+// ValidateVoteExtensions is a function that an application can execute in
+// ProcessProposal to verify vote extension signatures.
+func (app *BaseApp) ValidateVoteExtensions(ctx sdk.Context, valStore ValidatorStore, currentHeight int64, extCommit abci.ExtendedCommitInfo) error {
+ var votingPower, totalVotingPower int64
+
+ for _, vote := range extCommit.Votes {
+ totalVotingPower += vote.Validator.Power
+
+ if !vote.SignedLastBlock || len(vote.VoteExtension) == 0 {
+ continue
+ }
+
+ valConsAddr := sdk.ConsAddress(vote.Validator.Address)
+ pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr)
+ if err != nil {
+ return fmt.Errorf("failed to get public key for validator %s: %w", valConsAddr, err)
+ }
+
+ if len(vote.ExtensionSignature) == 0 {
+ return fmt.Errorf("received a non-empty vote extension with empty signature for validator %s", valConsAddr)
+ }
+
+ cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto)
+ if err != nil {
+ return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err)
+ }
+
+ cve := cmtproto.CanonicalVoteExtension{
+ Extension: vote.VoteExtension,
+ Height: currentHeight - 1, // the vote extension was signed in the previous height
+ Round: int64(extCommit.Round),
+ ChainId: app.GetChainID(),
+ }
+
+ extSignBytes, err := cosmosio.MarshalDelimited(&cve)
+ if err != nil {
+ return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err)
+ }
+
+ if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) {
+ return errors.New("received vote with invalid signature")
+ }
+
+ votingPower += vote.Validator.Power
+ }
+
+ // Require at least 2/3 of the total voting power, using integer arithmetic
+ // to avoid the truncation that votingPower/totalVotingPower would cause.
+ if votingPower*3 < totalVotingPower*2 {
+ return errors.New("not enough voting power for the vote extensions")
+ }
+
+ return nil
+}
+```
+
+Once at least 2/3 signatures, by voting power, are received and verified, the
+validator can use the vote extensions to derive additional data or come to some
+decision based on the vote extensions.
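The 2/3 voting-power threshold can be checked with pure integer arithmetic, avoiding the truncation that a naive integer division of the two powers would cause. A standalone sketch (the helper name is hypothetical):

```go
package main

import "fmt"

// hasSupermajority reports whether votingPower is at least 2/3 of
// totalVotingPower. Cross-multiplying keeps everything in integers:
// votingPower/totalVotingPower >= 2/3  <=>  votingPower*3 >= totalVotingPower*2.
func hasSupermajority(votingPower, totalVotingPower int64) bool {
	return votingPower*3 >= totalVotingPower*2
}

func main() {
	fmt.Println(hasSupermajority(67, 100)) // above 2/3
	fmt.Println(hasSupermajority(66, 100)) // below 2/3
	fmt.Println(hasSupermajority(66, 99))  // exactly 2/3
}
```

Note that `votingPower / totalVotingPower` in integer arithmetic is 0 for any non-unanimous tally, so the cross-multiplied form is the safe way to express the threshold.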
+
+> NOTE: It is very important to state that neither the vote propagation technique
+> nor the vote extension verification mechanism described above is required for
+> applications to implement. In other words, a proposer is not required to verify
+> and propagate vote extensions along with their signatures nor are proposers
+> required to verify those signatures. An application can implement its own
+> PKI mechanism and use that to sign and verify vote extensions.
+
+#### Vote Extension Persistence
+
+In certain contexts, it may be useful or necessary for applications to persist
+data derived from vote extensions. In order to facilitate this use case, we propose
+to allow app developers to define a pre-blocker hook which will be called
+at the very beginning of `FinalizeBlock`, i.e. before `BeginBlock` (see below).
+
+Note, we cannot allow applications to directly write to the application state
+during `ProcessProposal` because during replay, CometBFT will NOT call `ProcessProposal`,
+which would result in an incomplete state view.
+
+```go
+func (a MyApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) error {
+ voteExts := GetVoteExtensions(ctx, req.Txs)
+
+ // Process and perform some compute on vote extensions, storing any resulting
+ // state.
+ if err := a.processVoteExtensions(ctx, voteExts); err != nil {
+ return err
+ }
+
+ return nil
+}
+```
+
+### `FinalizeBlock`
+
+The existing ABCI methods `BeginBlock`, `DeliverTx`, and `EndBlock` have existed
+since the dawn of ABCI-based applications. Thus, applications, tooling, and developers
+have grown used to these methods and their use-cases. Specifically, `BeginBlock`
+and `EndBlock` have grown to be pretty integral and powerful within ABCI-based
+applications. E.g. an application might want to run distribution and inflation
+related operations prior to executing transactions and then have staking related
+changes to happen after executing all transactions.
+
+We propose to keep `BeginBlock` and `EndBlock` within the SDK's core module
+interfaces only so application developers can continue to build against existing
+execution flows. However, we will remove `BeginBlock`, `DeliverTx` and `EndBlock`
+from the SDK's `BaseApp` implementation and thus the ABCI surface area.
+
+What will then exist is a single `FinalizeBlock` execution flow. Specifically, in
+`FinalizeBlock` we will execute the application's `BeginBlock`, followed by
+execution of all the transactions, finally followed by execution of the application's
+`EndBlock`.
+
+Note, we will still keep the existing transaction execution mechanics within
+`BaseApp`, but all notions of `DeliverTx` will be removed, i.e. `deliverState`
+will be replaced with `finalizeState`, which will be committed on `Commit`.
+
+However, there are current parameters and fields that exist in the existing
+`BeginBlock` and `EndBlock` ABCI types, such as votes that are used in distribution
+and byzantine validators used in evidence handling. These parameters exist in the
+`FinalizeBlock` request type, and will need to be passed to the application's
+implementations of `BeginBlock` and `EndBlock`.
+
+This means the Cosmos SDK's core module interfaces will need to be updated to
+reflect these parameters. The easiest and most straightforward way to achieve
+this is to just pass `RequestFinalizeBlock` to `BeginBlock` and `EndBlock`.
+Alternatively, we can create dedicated proxy types in the SDK that reflect these
+legacy ABCI types, e.g. `LegacyBeginBlockRequest` and `LegacyEndBlockRequest`. Or,
+we can come up with new types and names altogether.
+
+```go
+func (app *BaseApp) FinalizeBlock(req abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) {
+ ctx := ... 
+
+ if app.preBlocker != nil {
+ ctx := app.finalizeBlockState.ctx
+ rsp, err := app.preBlocker(ctx, req)
+ if err != nil {
+ return nil, err
+ }
+ if rsp.ConsensusParamsChanged {
+ app.finalizeBlockState.ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))
+ }
+ }
+ beginBlockResp, err := app.beginBlock(req)
+ if err != nil {
+ return nil, err
+ }
+ appendBlockEventAttr(beginBlockResp.Events, "begin_block")
+
+ txExecResults := make([]abci.ExecTxResult, 0, len(req.Txs))
+ for _, tx := range req.Txs {
+ result := app.runTx(runTxModeFinalize, tx)
+ txExecResults = append(txExecResults, result)
+ }
+
+ endBlockResp, err := app.endBlock(app.finalizeBlockState.ctx)
+ if err != nil {
+ return nil, err
+ }
+ appendBlockEventAttr(endBlockResp.Events, "end_block")
+
+ return &abci.ResponseFinalizeBlock{
+ TxResults: txExecResults,
+ Events: joinEvents(beginBlockResp.Events, endBlockResp.Events),
+ ValidatorUpdates: endBlockResp.ValidatorUpdates,
+ ConsensusParamUpdates: endBlockResp.ConsensusParamUpdates,
+ AppHash: nil,
+ }, nil
+}
+```
+
+#### Events
+
+Many tools, indexers and ecosystem libraries rely on the existence of `BeginBlock`
+and `EndBlock` events. Since CometBFT now only exposes `FinalizeBlockEvents`, we
+find that it will still be useful for these clients and tools to query for
+and rely on existing events, especially since applications will still define
+`BeginBlock` and `EndBlock` implementations.
+
+In order to facilitate existing event functionality, we propose that all `BeginBlock`
+and `EndBlock` events have a dedicated `EventAttribute` with `key=block` and
+`value=begin_block|end_block`. The `EventAttribute` will be appended to each event
+in both `BeginBlock` and `EndBlock` events.
+
+
+### Upgrading
+
+CometBFT defines a consensus parameter, [`VoteExtensionsEnableHeight`](https://github.com/cometbft/cometbft/blob/v0.38.0-alpha.1/spec/abci/abci%2B%2B_app_requirements.md#abciparamsvoteextensionsenableheight),
+which specifies the height at which vote extensions are enabled and **required**.
+
+If the value is set to zero, which is the default, then vote extensions are
+disabled and an application is not required to implement and use vote extensions.
+
+However, if the value `H` is positive, vote extensions must be present (even if
+empty) at all heights greater than the configured height `H`. When the configured
+height `H` is reached, `PrepareProposal` will not include vote extensions yet,
+but `ExtendVote` and `VerifyVoteExtension` will be called. Then, when reaching
+height `H+1`, `PrepareProposal` will include the vote extensions from height `H`.
+
+It is very important to note that, for all heights after `H`:
+
+* Vote extensions CANNOT be disabled
+* They are mandatory, i.e. all pre-commit messages sent MUST have an extension
+  attached (even if empty)
+
+When an application updates to the Cosmos SDK version with CometBFT v0.38 support,
+its upgrade handler must set the consensus parameter `VoteExtensionsEnableHeight`
+to the correct value. E.g. if an application is set to perform an upgrade at
+height `H`, then the value of `VoteExtensionsEnableHeight` should be set to any
+value `>=H+1`. This means that at the upgrade height, `H`, vote extensions will
+not be enabled yet, but at height `H+1` they will be enabled.
+
+## Consequences
+
+### Backwards Compatibility
+
+ABCI 2.0 is naturally not backwards compatible with prior versions of the Cosmos SDK
+and CometBFT. For example, a client that sends `RequestFinalizeBlock` to an
+application that does not speak ABCI 2.0 will naturally fail.
+
+In addition, `BeginBlock`, `DeliverTx` and `EndBlock` will be removed from the
+application ABCI interfaces, and the inputs and outputs of the corresponding
+module interfaces will be modified accordingly.
+
+### Positive
+
+* `BeginBlock` and `EndBlock` semantics remain, so the burden on application
+  developers should be limited.
+* Less communication overhead as multiple ABCI requests are condensed into a single
+  request.
+* Sets the groundwork for optimistic execution.
+* Vote extensions allow for an entirely new set of application primitives to be
+  developed, such as in-process price oracles and encrypted mempools.
+
+### Negative
+
+* Some existing Cosmos SDK core APIs may need to be modified and thus broken.
+* Signature verification in `ProcessProposal` of 100+ vote extension signatures
+  will add significant performance overhead to `ProcessProposal`. Granted, the
+  signature verification process can happen concurrently using an error group
+  with `GOMAXPROCS` goroutines.
+
+### Neutral
+
+* Having to manually "inject" vote extensions into the block proposal during
+  `PrepareProposal` is an awkward approach and takes up block space unnecessarily.
+* The requirement of `ResetProcessProposalState` can create a footgun for
+  application developers if they're not careful, but this is necessary in order
+  for applications to be able to commit state from vote extension computation.
+
+## Further Discussions
+
+Future discussions include design and implementation of ABCI 3.0, which is a
+continuation of ABCI++ and the general discussion of optimistic execution.
+
+## References
+
+* [ADR 060: ABCI 1.0 (Phase I)](adr-060-abci-1.0.md)
+
+
+
+### ADR-065: Store V2
+
+
+
+# ADR-065: Store V2
+
+## Changelog
+
+* Feb 14, 2023: Initial Draft (@alexanderbez)
+
+## Status
+
+DRAFT
+
+## Abstract
+
+The storage and state primitives that Cosmos SDK based applications have used have
+by and large not changed since the launch of the inaugural Cosmos Hub. The demands
+and needs of Cosmos SDK based applications, from both developer and client UX
+perspectives, have evolved and outgrown the ecosystem since these primitives
+were first introduced.
+
+Over time as these applications have gained significant adoption, many critical
+shortcomings and flaws have been exposed in the state and storage primitives of
+the Cosmos SDK.
+ +In order to keep up with the evolving demands and needs of both clients and developers, +a major overhaul to these primitives is necessary. + +## Context + +The Cosmos SDK provides application developers with various storage primitives +for dealing with application state. Specifically, each module contains its own +merkle commitment data structure -- an IAVL tree. In this data structure, a module +can store and retrieve key-value pairs along with Merkle commitments, i.e. proofs, +to those key-value pairs indicating that they do or do not exist in the global +application state. This data structure is the base layer `KVStore`. + +In addition, the SDK provides abstractions on top of this Merkle data structure. +Namely, a root multi-store (RMS) is a collection of each module's `KVStore`. +Through the RMS, the application can serve queries and provide proofs to clients +in addition to providing a module access to its own unique `KVStore` through the use +of `StoreKey`, which is an OCAP primitive. + +There are further layers of abstraction that sit between the RMS and the underlying +IAVL `KVStore`. A `GasKVStore` is responsible for tracking gas IO consumption for +state machine reads and writes. A `CacheKVStore` is responsible for providing a +way to cache reads and buffer writes to make state transitions atomic, e.g. +transaction execution or governance proposal execution. + +There are a few critical drawbacks to these layers of abstraction and the overall +design of storage in the Cosmos SDK: + +* Since each module has its own IAVL `KVStore`, commitments are not [atomic](https://github.com/cosmos/cosmos-sdk/issues/14625) + * Note, we can still allow modules to have their own IAVL `KVStore`, but the + IAVL library will need to support the ability to pass a DB instance as an + argument to various IAVL APIs. +* Since IAVL is responsible for both state storage and commitment, running an + archive node becomes increasingly expensive as disk space grows exponentially. 
+* As the size of a network increases, various performance bottlenecks start to + emerge in many areas such as query performance, network upgrades, state + migrations, and general application performance. +* Developer UX is poor as it does not allow application developers to experiment + with different types of approaches to storage and commitments, along with the + complications of many layers of abstractions referenced above. + +See the [Storage Discussion](https://github.com/cosmos/cosmos-sdk/discussions/13545) for more information. + +## Alternatives + +There was a previous attempt to refactor the storage layer described in [ADR-040](./adr-040-storage-and-smt-state-commitments.md). +However, this approach mainly stems from the shortcomings of IAVL and various performance +issues around it. While there was a (partial) implementation of [ADR-040](./adr-040-storage-and-smt-state-commitments.md), +it was never adopted for a variety of reasons, such as the reliance on using an +SMT, which was more in a research phase, and some design choices that couldn't +be fully agreed upon, such as the snapshotting mechanism that would result in +massive state bloat. + +## Decision + +We propose to build upon some of the great ideas introduced in [ADR-040](./adr-040-storage-and-smt-state-commitments.md), +while being a bit more flexible with the underlying implementations and overall +less intrusive. Specifically, we propose to: + +* Separate the concerns of state commitment (**SC**), needed for consensus, and + state storage (**SS**), needed for state machine and clients. +* Reduce layers of abstractions necessary between the RMS and underlying stores. +* Provide atomic module store commitments by providing a batch database object + to core IAVL APIs. +* Reduce complexities in the `CacheKVStore` implementation while also improving + performance[3]. 
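The proposed SS/SC separation can be sketched as two small interfaces. Everything below is an illustrative assumption, not an actual SDK API: the interface names, method signatures, and the toy in-memory `memSS` backend are made up to show the division of responsibilities.

```go
package main

import "fmt"

// Hypothetical sketch of the SS/SC split proposed above; names and signatures
// are illustrative assumptions, not actual SDK interfaces.

// StateStorage (SS) serves raw, versioned (key, value) reads for the state
// machine and clients.
type StateStorage interface {
	Get(key []byte, version uint64) ([]byte, error)
	ApplyChangeset(version uint64, changes map[string][]byte) error
}

// StateCommitment (SC) commits to state and produces Merkle roots for consensus.
type StateCommitment interface {
	WriteChangeset(changes map[string][]byte) error
	Commit() (rootHash []byte, err error)
}

// memSS is a toy in-memory SS backend used only to make the sketch runnable.
type memSS struct {
	versions map[uint64]map[string][]byte // version -> key -> value
}

var _ StateStorage = (*memSS)(nil)

func newMemSS() *memSS { return &memSS{versions: map[uint64]map[string][]byte{}} }

func (s *memSS) ApplyChangeset(version uint64, changes map[string][]byte) error {
	kv := map[string][]byte{}
	for k, v := range changes {
		kv[k] = v
	}
	s.versions[version] = kv
	return nil
}

func (s *memSS) Get(key []byte, version uint64) ([]byte, error) {
	return s.versions[version][string(key)], nil
}

func main() {
	ss := newMemSS()
	_ = ss.ApplyChangeset(1, map[string][]byte{"bank/balances/alice": []byte("42")})
	v, _ := ss.Get([]byte("bank/balances/alice"), 1)
	fmt.Println(string(v)) // 42
}
```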
+
+Furthermore, we will keep IAVL as the backing [commitment](https://cryptography.fandom.com/wiki/Commitment_scheme)
+store for the time being. While we might not fully settle on the use of IAVL in
+the long term, we do not have strong empirical evidence to suggest a better
+alternative. Given that the SDK provides interfaces for stores, it should be sufficient
+to change the backing commitment store in the future should evidence arise to
+warrant a better alternative. However, there is promising work being done on IAVL
+that should result in significant performance improvements [1,2].
+
+### Separating SS and SC
+
+Separating SS and SC will allow us to optimize against primary use cases
+and access patterns to state. Specifically, the SS layer will be responsible for
+direct access to data in the form of (key, value) pairs, whereas the SC layer (IAVL)
+will be responsible for committing to data and providing Merkle proofs.
+
+Note, the underlying physical storage database will be the same between both the
+SS and SC layers. So to avoid collisions between (key, value) pairs, both layers
+will be namespaced.
+
+#### State Commitment (SC)
+
+Given that the existing solution today acts as both SS and SC, we can simply
+repurpose it to act solely as the SC layer without any significant changes to
+access patterns or behavior. In other words, the entire collection of existing
+IAVL-backed module `KVStore`s will act as the SC layer.
+
+However, in order for the SC layer to remain lightweight and not duplicate a
+majority of the data held in the SS layer, we encourage node operators to keep
+tight pruning strategies.
+
+#### State Storage (SS)
+
+In the RMS, we will expose a *single* `KVStore` backed by the same physical
+database that backs the SC layer. This `KVStore` will be explicitly namespaced
+to avoid collisions and will act as the primary storage for (key, value) pairs.
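Since both layers share one physical database, their keys must be disambiguated. A minimal sketch of such namespacing follows; the `ss/` and `sc/` prefixes are made up for illustration, as the ADR does not specify the exact scheme.

```go
package main

import "fmt"

// ssKey and scKey place a (storeKey, key) pair under illustrative, made-up
// namespaces so SS and SC entries cannot collide in the shared database.
func ssKey(storeKey string, key []byte) []byte {
	return append([]byte("ss/"+storeKey+"/"), key...)
}

func scKey(storeKey string, key []byte) []byte {
	return append([]byte("sc/"+storeKey+"/"), key...)
}

func main() {
	fmt.Println(string(ssKey("bank", []byte("balances/alice")))) // ss/bank/balances/alice
	fmt.Println(string(scKey("bank", []byte("balances/alice")))) // sc/bank/balances/alice
}
```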
+
+We will most likely continue to use `cosmos-db`, or some similar local interface,
+to allow for flexibility and iteration over preferred physical storage backends
+as research and benchmarking continue. However, we propose to hardcode the use
+of RocksDB as the primary physical storage backend.
+
+Since the SS layer will be implemented as a `KVStore`, it will support the
+following functionality:
+
+* Range queries
+* CRUD operations
+* Historical queries and versioning
+* Pruning
+
+The RMS will keep track of all buffered writes using a dedicated and internal
+`MemoryListener` for each `StoreKey`. For each block height, upon `Commit`, the
+SS layer will write all buffered (key, value) pairs under a [RocksDB user-defined timestamp](https://github.com/facebook/rocksdb/wiki/User-defined-Timestamp-%28Experimental%29) column
+family using the block height as the timestamp, which is an unsigned integer.
+This will allow a client to fetch (key, value) pairs at historical and current
+heights along with making iteration and range queries relatively performant as
+the timestamp is the key suffix.
+
+Note, we choose not to use a more general approach of allowing any embedded key/value
+database, such as LevelDB or PebbleDB, using height key-prefixed keys to
+effectively version state because most of these databases use variable length
+keys, which would make actions like iteration and range queries less
+performant.
+
+Since operators might want pruning strategies to differ in SS compared to SC,
+e.g. having a very tight pruning strategy in SC while having a looser pruning
+strategy for SS, we propose to introduce an additional pruning configuration,
+with parameters that are identical to what exists in the SDK today, and allow
+operators to control the pruning strategy of the SS layer independently of the
+SC layer.
+
+Note, the SC pruning strategy must be congruent with the operator's state sync
+configuration.
This ensures that state sync snapshots can execute successfully;
+otherwise, a snapshot could be triggered at a height that is not available in SC.
+
+#### State Sync
+
+The state sync process should be largely unaffected by the separation of the SC
+and SS layers. However, if a node syncs via state sync, the SS layer of the node
+will not have the state synced height available, since the IAVL import process is
+not set up in a way that easily allows direct key/value insertion. A modification
+of the IAVL import process would be necessary to facilitate having the state sync
+height available.
+
+Note, this is not problematic for the state machine itself because when a query
+is made, the RMS will automatically direct the query correctly (see [Queries](#queries)).
+
+#### Queries
+
+To consolidate the query routing between both the SC and SS layers, we propose to
+have a notion of a "query router" that is constructed in the RMS. This query router
+will be supplied to each `KVStore` implementation. The query router will route
+queries to either the SC layer or the SS layer based on a few parameters. If
+`prove: true`, then the query must be routed to the SC layer. Otherwise, if the
+query height is available in the SS layer, the query will be served from the SS
+layer. Otherwise, we fall back on the SC layer.
+
+If no height is provided, the SS layer will assume the latest height. The SS
+layer will store a reverse index to lookup `LatestVersion -> timestamp(version)`
+which is set on `Commit`.
+
+#### Proofs
+
+Since the SS layer is naturally a storage layer only, without any commitments
+to (key, value) pairs, it cannot provide Merkle proofs to clients during queries.
+
+Since the pruning strategy against the SC layer is configured by the operator,
+we can therefore have the RMS route the query to the SC layer if the version exists and
+`prove: true`. Otherwise, the query will fall back to the SS layer without a proof.
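The routing rule described in the Queries and Proofs sections can be condensed into a few lines. The sketch below uses hypothetical stand-in types; the real router would live in the RMS.

```go
package main

import "fmt"

// layer identifies which store layer serves a query.
type layer string

const (
	scLayer layer = "SC"
	ssLayer layer = "SS"
)

// routeQuery sketches the rule above: proof queries must go to SC; otherwise
// prefer SS when the requested height is available there, else fall back to SC.
func routeQuery(prove bool, height uint64, ssHasHeight func(uint64) bool) layer {
	if prove {
		return scLayer
	}
	if ssHasHeight(height) {
		return ssLayer
	}
	return scLayer
}

func main() {
	// e.g. assume the SS layer has pruned everything below height 100
	ssHas := func(h uint64) bool { return h >= 100 }

	fmt.Println(routeQuery(true, 150, ssHas))  // SC (proof required)
	fmt.Println(routeQuery(false, 150, ssHas)) // SS (height available)
	fmt.Println(routeQuery(false, 50, ssHas))  // SC (fallback)
}
```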
+
+We could explore the idea of using state snapshots to rebuild an in-memory IAVL
+tree in real time against a version closest to the one provided in the query.
+However, it is not clear what the performance implications of this approach
+would be.
+
+### Atomic Commitment
+
+We propose to modify the existing IAVL APIs to accept a batch DB object instead
+of relying on an internal batch object in `nodeDB`. Since each underlying IAVL
+`KVStore` shares the same DB in the SC layer, this will allow commits to be
+atomic.
+
+Specifically, we propose to:
+
+* Remove the `dbm.Batch` field from `nodeDB`
+* Update the `SaveVersion` method of the `MutableTree` IAVL type to accept a batch object
+* Update the `Commit` method of the `CommitKVStore` interface to accept a batch object
+* Create a batch object in the RMS during `Commit` and pass this object to each
+  `KVStore`
+* Write the database batch after all stores have committed successfully
+
+Note, this will require IAVL to be updated to not rely on or assume any batch
+being present during `SaveVersion`.
+
+## Consequences
+
+As a result of a new store V2 package, we should expect to see improved performance
+for queries and transactions due to the separation of concerns. We should also
+expect to see improved developer UX around experimentation of commitment schemes
+and storage backends for further performance, in addition to a reduced amount of
+abstraction around KVStores making operations such as caching and state branching
+more intuitive.
+
+However, due to the proposed design, there are drawbacks around providing state
+proofs for historical queries.
+
+### Backwards Compatibility
+
+This ADR proposes changes to the storage implementation in the Cosmos SDK through
+an entirely new package. Interfaces may be borrowed and extended from existing
+types that exist in `store`, but no existing implementations or interfaces will
+be broken or modified.
+ +### Positive + +* Improved performance of independent SS and SC layers +* Reduced layers of abstraction making storage primitives easier to understand +* Atomic commitments for SC +* Redesign of storage types and interfaces will allow for greater experimentation + such as different physical storage backends and different commitment schemes + for different application modules + +### Negative + +* Providing proofs for historical state is challenging + +### Neutral + +* Keeping IAVL as the primary commitment data structure, although drastic + performance improvements are being made + +## Further Discussions + +### Module Storage Control + +Many modules store secondary indexes that are typically solely used to support +client queries, but are actually not needed for the state machine's state +transitions. What this means is that these indexes technically have no reason to +exist in the SC layer at all, as they take up unnecessary space. It is worth +exploring what an API would look like to allow modules to indicate what (key, value) +pairs they want to be persisted in the SC layer, implicitly indicating the SS +layer as well, as opposed to just persisting the (key, value) pair only in the +SS layer. + +### Historical State Proofs + +It is not clear what the importance or demand is within the community of providing +commitment proofs for historical state. While solutions can be devised such as +rebuilding trees on the fly based on state snapshots, it is not clear what the +performance implications are for such solutions. + +### Physical DB Backends + +This ADR proposes usage of RocksDB to utilize user-defined timestamps as a +versioning mechanism. However, other physical DB backends are available that may +offer alternative ways to implement versioning while also providing performance +improvements over RocksDB. E.g. PebbleDB supports MVCC timestamps as well, but +we'll need to explore how PebbleDB handles compaction and state growth over time. 
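The height-as-timestamp versioning discussed throughout this section can be illustrated in memory: each write is tagged with a block height, and a read at height `h` returns the newest value written at or below `h`. RocksDB user-defined timestamps provide this behavior natively; the sketch below is only a hypothetical stand-in for those read semantics.

```go
package main

import "fmt"

// versioned keeps parallel slices of (height, value) pairs, appended in
// increasing-height order, mimicking timestamped writes.
type versioned struct {
	heights []uint64
	values  []string
}

type vstore struct{ m map[string]*versioned }

func newVStore() *vstore { return &vstore{m: map[string]*versioned{}} }

// set records value for key at the given block height (heights must increase).
func (s *vstore) set(key, value string, height uint64) {
	v, ok := s.m[key]
	if !ok {
		v = &versioned{}
		s.m[key] = v
	}
	v.heights = append(v.heights, height)
	v.values = append(v.values, value)
}

// get returns the newest value written at or below height, if any.
func (s *vstore) get(key string, height uint64) (string, bool) {
	v, ok := s.m[key]
	if !ok {
		return "", false
	}
	for i := len(v.heights) - 1; i >= 0; i-- {
		if v.heights[i] <= height {
			return v.values[i], true
		}
	}
	return "", false
}

func main() {
	s := newVStore()
	s.set("balances/alice", "10", 5)
	s.set("balances/alice", "25", 9)

	fmt.Println(s.get("balances/alice", 7)) // 10 true
	fmt.Println(s.get("balances/alice", 9)) // 25 true
	fmt.Println(s.get("balances/alice", 3)) // not found at heights below 5
}
```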
+
+## References
+
+* [1] https://github.com/cosmos/iavl/pull/676
+* [2] https://github.com/cosmos/iavl/pull/664
+* [3] https://github.com/cosmos/cosmos-sdk/issues/14990
+
+
+
+### ADR 068: Preblock
+
+
+
+# ADR 068: Preblock
+
+## Changelog
+
+* Sept 13, 2023: Initial Draft
+
+## Status
+
+DRAFT
+
+## Abstract
+
+Introduce `PreBlock`, which runs before the `BeginBlocker`s of all modules. It allows modifying consensus parameters, and the changes are visible to subsequent state machine logic.
+
+## Context
+
+When upgrading to SDK 0.47, the storage format for consensus parameters changed, but in the migration block `ctx.ConsensusParams()` is always `nil`, because the new code fails to load the old format. The parameters are supposed to be migrated by the `x/upgrade` module first, but unfortunately that migration happens in its `BeginBlocker` handler, which runs after the `ctx` is initialized.
+When we tried to solve this, we found that the `x/upgrade` module cannot modify the context to make the consensus parameters visible to the other modules: the context is passed by value, and the SDK team wants to keep it that way, which is good for isolation between modules.
+
+## Alternatives
+
+The first alternative solution introduced a `MigrateModuleManager`, which currently includes only the `x/upgrade` module; `BaseApp` would run its `BeginBlocker`s before those of the other modules and reload the context's consensus parameters in between.
+
+## Decision
+
+We suggest a new lifecycle method.
+
+### `PreBlocker`
+
+There are two semantics around the new lifecycle method:
+
+* It runs before the `BeginBlocker` of all modules
+* It can modify consensus parameters in storage, and signal the caller through the return value.
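These two semantics can be illustrated with stand-in types. Everything below is hypothetical (not the actual `BaseApp` code): a pre-blocker rewrites stored consensus parameters and signals the caller, which refreshes the parameters cached in its finalize context before any module's `BeginBlocker` runs.

```go
package main

import "fmt"

// Hypothetical stand-ins illustrating the PreBlocker contract; names and
// fields are made up for this sketch.
type consensusParams struct{ maxGas int64 }

type responsePreBlock struct{ consensusParamsChanged bool }

type app struct {
	stored    consensusParams // params as persisted in state (e.g. by x/upgrade)
	ctxParams consensusParams // params cached in the finalize block context
}

// preBlocker mimics an upgrade migration that rewrites the stored params.
func (a *app) preBlocker() responsePreBlock {
	a.stored = consensusParams{maxGas: 200}
	return responsePreBlock{consensusParamsChanged: true}
}

// finalizeBlock runs the pre-blocker first and refreshes the cached params
// only when signalled, so all later BeginBlockers see the new values.
func (a *app) finalizeBlock() {
	if resp := a.preBlocker(); resp.consensusParamsChanged {
		a.ctxParams = a.stored
	}
	// ... BeginBlockers of all modules now observe a.ctxParams ...
}

func main() {
	a := &app{stored: consensusParams{100}, ctxParams: consensusParams{100}}
	a.finalizeBlock()
	fmt.Println(a.ctxParams.maxGas) // 200
}
```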
+
+When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameters in the finalize context:
+
+```go
+app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams())
+```
+
+The new ctx must be passed to all the other lifecycle methods.
+
+
+## Consequences
+
+### Backwards Compatibility
+
+### Positive
+
+### Negative
+
+### Neutral
+
+## Further Discussions
+
+## Test Cases
+
+## References
+
+* [1] https://github.com/cosmos/cosmos-sdk/issues/16494
+* [2] https://github.com/cosmos/cosmos-sdk/pull/16583
+* [3] https://github.com/cosmos/cosmos-sdk/pull/17421
+* [4] https://github.com/cosmos/cosmos-sdk/pull/17713
+
+
+
+### ADR 070: Unordered Transactions
+
+
+
+# ADR 070: Unordered Transactions
+
+## Changelog
+
+* Dec 4, 2023: Initial Draft (@yihuang, @tac0turtle, @alexanderbez)
+* Jan 30, 2024: Include section on deterministic transaction encoding
+* Mar 18, 2025: Revise implementation to use Cosmos SDK KV Store and require unique timeouts per-address (@technicallyty)
+* Apr 25, 2025: Add note about rejecting unordered txs with sequence values.
+
+## Status
+
+ACCEPTED Not Implemented
+
+## Abstract
+
+We propose a way to do replay-attack protection without enforcing the order of
+transactions and without requiring the use of monotonically increasing sequences. Instead, we propose
+the use of a time-based, ephemeral sequence.
+
+## Context
+
+Account sequence values serve to prevent replay attacks and ensure transactions from the same sender are included in blocks and executed
+in sequential order. Unfortunately, this makes it difficult to reliably send many concurrent transactions from the
+same sender. Victims of such limitations include IBC relayers and crypto exchanges.
+
+## Decision
+
+We propose adding a boolean field `unordered` and a `google.protobuf.Timestamp` field `timeout_timestamp` to the transaction body.
+
+Unordered transactions will bypass the traditional account sequence rules and follow the rules described
+below, without impacting traditional ordered transactions, which will follow the same sequence rules as before.
+
+We will introduce new storage of time-based, ephemeral unordered sequences using the SDK's existing KV Store library.
+Specifically, we will leverage the existing x/auth KV store to store the unordered sequences.
+
+When an unordered transaction is included in a block, a concatenation of the `timeout_timestamp` and sender’s address bytes
+will be recorded to state (i.e. `542939323/`). In cases of multi-party signing, one entry per signer
+will be recorded to state.
+
+New transactions will be checked against the state to prevent duplicate submissions. To prevent the state from growing indefinitely, we propose the following:
+
+* Define an upper bound for the value of `timeout_timestamp` (e.g. 10 minutes).
+* Add a `PreBlocker` method to `x/auth` that removes state entries with a `timeout_timestamp` earlier than the current block time.
+
+### Transaction Format
+
+```protobuf
+message TxBody {
+  ...
+
+  bool unordered = 4;
+  google.protobuf.Timestamp timeout_timestamp = 5;
+}
+```
+
+### Replay Protection
+
+We facilitate replay protection by storing the unordered sequence in the Cosmos SDK KV store. Upon transaction ingress, we check whether the transaction's unordered
+sequence exists in state, or whether the TTL value is stale, i.e. before the current block time. If so, we reject it. Otherwise,
+we add the unordered sequence to the state. This section of the state will belong to the `x/auth` module.
+
+The state is evaluated during x/auth's `PreBlocker`. All entries with an unordered sequence earlier than the current block time
+will be deleted.
+ +```go +func (am AppModule) PreBlock(ctx context.Context) (appmodule.ResponsePreBlock, error) { + err := am.accountKeeper.RemoveExpired(sdk.UnwrapSDKContext(ctx)) + if err != nil { + return nil, err + } + return &sdk.ResponsePreBlock{ConsensusParamsChanged: false}, nil +} +``` + +```golang +package keeper + +import ( + sdk "github.com/cosmos/cosmos-sdk/types" + + "cosmossdk.io/collections" + "cosmossdk.io/core/store" +) + +var ( + // just arbitrarily picking some upper bound number. + unorderedSequencePrefix = collections.NewPrefix(90) +) + +type AccountKeeper struct { + // ... + unorderedSequences collections.KeySet[collections.Pair[uint64, []byte]] +} + +func (m *AccountKeeper) Contains(ctx sdk.Context, sender []byte, timestamp uint64) (bool, error) { + return m.unorderedSequences.Has(ctx, collections.Join(timestamp, sender)) +} + +func (m *AccountKeeper) Add(ctx sdk.Context, sender []byte, timestamp uint64) error { + return m.unorderedSequences.Set(ctx, collections.Join(timestamp, sender)) +} + +func (m *AccountKeeper) RemoveExpired(ctx sdk.Context) error { + blkTime := ctx.BlockTime().UnixNano() + it, err := m.unorderedSequences.Iterate(ctx, collections.NewPrefixUntilPairRange[uint64, []byte](uint64(blkTime))) + if err != nil { + return err + } + defer it.Close() + + keys, err := it.Keys() + if err != nil { + return err + } + + for _, key := range keys { + if err := m.unorderedSequences.Remove(ctx, key); err != nil { + return err + } + } + + return nil +} + +``` + +### AnteHandler Decorator + +To facilitate bypassing nonce verification, we must modify the existing +`IncrementSequenceDecorator` AnteHandler decorator to skip the nonce verification +when the transaction is marked as unordered. + +```golang +func (isd IncrementSequenceDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + if tx.UnOrdered() { + return next(ctx, tx, simulate) + } + + // ... 
+}
+```
+
+We also introduce a new decorator to perform the unordered transaction verification.
+
+```golang
+package ante
+
+import (
+	"time"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+	authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"
+
+	errorsmod "cosmossdk.io/errors"
+)
+
+var _ sdk.AnteDecorator = (*UnorderedTxDecorator)(nil)
+
+// UnorderedTxDecorator defines an AnteHandler decorator that is responsible for
+// checking if a transaction is intended to be unordered and, if so, evaluates
+// the transaction accordingly. An unordered transaction will bypass having its
+// nonce incremented, which allows fire-and-forget transaction broadcasting,
+// removing the necessity of ordering on the sender-side.
+//
+// The transaction sender must ensure that unordered=true and a timeout_timestamp
+// is appropriately set. The AnteHandler will check that the transaction is not
+// a duplicate and will evict it from state when the timeout is reached.
+//
+// The UnorderedTxDecorator should be placed as early as possible in the AnteHandler
+// chain to ensure that during DeliverTx, the transaction is added to the unordered sequence state.
+type UnorderedTxDecorator struct {
+	// maxTimeoutDuration defines the maximum TTL a transaction can define.
+ maxTimeoutDuration time.Duration + txManager authkeeper.UnorderedTxManager +} + +func NewUnorderedTxDecorator( + utxm authkeeper.UnorderedTxManager, +) *UnorderedTxDecorator { + return &UnorderedTxDecorator{ + maxTimeoutDuration: 10 * time.Minute, + txManager: utxm, + } +} + +func (d *UnorderedTxDecorator) AnteHandle( + ctx sdk.Context, + tx sdk.Tx, + _ bool, + next sdk.AnteHandler, +) (sdk.Context, error) { + if err := d.ValidateTx(ctx, tx); err != nil { + return ctx, err + } + return next(ctx, tx, false) +} + +func (d *UnorderedTxDecorator) ValidateTx(ctx sdk.Context, tx sdk.Tx) error { + unorderedTx, ok := tx.(sdk.TxWithUnordered) + if !ok || !unorderedTx.GetUnordered() { + // If the transaction does not implement unordered capabilities or has the + // unordered value as false, we bypass. + return nil + } + + blockTime := ctx.BlockTime() + timeoutTimestamp := unorderedTx.GetTimeoutTimeStamp() + if timeoutTimestamp.IsZero() || timeoutTimestamp.Unix() == 0 { + return errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "unordered transaction must have timeout_timestamp set", + ) + } + if timeoutTimestamp.Before(blockTime) { + return errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "unordered transaction has a timeout_timestamp that has already passed", + ) + } + if timeoutTimestamp.After(blockTime.Add(d.maxTimeoutDuration)) { + return errorsmod.Wrapf( + sdkerrors.ErrInvalidRequest, + "unordered tx ttl exceeds %s", + d.maxTimeoutDuration.String(), + ) + } + + execMode := ctx.ExecMode() + if execMode == sdk.ExecModeSimulate { + return nil + } + + signerAddrs, err := getSigners(tx) + if err != nil { + return err + } + + for _, signer := range signerAddrs { + contains, err := d.txManager.Contains(ctx, signer, uint64(unorderedTx.GetTimeoutTimeStamp().Unix())) + if err != nil { + return errorsmod.Wrap( + sdkerrors.ErrIO, + "failed to check contains", + ) + } + if contains { + return errorsmod.Wrapf( + sdkerrors.ErrInvalidRequest, + "tx is duplicated for signer %x", 
signer,
+			)
+		}
+
+		if err := d.txManager.Add(ctx, signer, uint64(unorderedTx.GetTimeoutTimeStamp().Unix())); err != nil {
+			return errorsmod.Wrap(
+				sdkerrors.ErrIO,
+				"failed to add unordered sequence to state",
+			)
+		}
+	}
+
+	return nil
+}
+
+func getSigners(tx sdk.Tx) ([][]byte, error) {
+	sigTx, ok := tx.(authsigning.SigVerifiableTx)
+	if !ok {
+		return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid tx type")
+	}
+	return sigTx.GetSigners()
+}
+```
+
+### Unordered Sequences
+
+Unordered sequences provide a simple, straightforward mechanism to protect against both transaction malleability and
+transaction duplication. It is important to note that the unordered sequence must still be unique. However,
+the value is not required to be strictly increasing as with regular sequences, and the order in which the node receives
+the transactions no longer matters. Clients can build unordered transactions similarly to the code below, giving each
+transaction a unique timestamp:
+
+```go
+for i, tx := range txs {
+	tx.SetUnordered(true)
+	tx.SetTimeoutTimestamp(time.Now().Add(time.Duration(i+1) * time.Nanosecond))
+}
+```
+
+We will reject transactions that set both a sequence value and the unordered flag. We do this to avoid assuming the intent of the user.
+
+### State Management
+
+The storage of unordered sequences will be facilitated using the Cosmos SDK's KV Store service.
+
+## Note On Previous Design Iteration
+
+The previous iteration of unordered transactions worked by using an ad-hoc state-management system that posed severe
+risks and provided a vector for duplicated tx processing. It relied on graceful app closure, which would flush the current state
+of the unordered sequence mapping. If 2/3 of the network crashed and the graceful closure did not trigger,
+the system would lose track of all sequences in the mapping, allowing those transactions to be replayed. The
+implementation proposed in the updated version of this ADR solves this by writing directly to the Cosmos KV Store.
+While this is less performant, we opted for the safer path in the initial implementation, postponing performance optimizations until we have more data on real-world impact and a more battle-tested approach to optimization.
+
+Additionally, the previous iteration relied on using hashes to create what we call an "unordered sequence." There are known
+issues with transaction malleability in Cosmos SDK signing modes. This ADR avoids that problem by enforcing
+single-use unordered nonces, instead of deriving nonces from bytes in the transaction.
+
+## Consequences
+
+### Positive
+
+* Support unordered transaction inclusion, enabling the ability to "fire and forget" many transactions at once.
+
+### Negative
+
+* Requires additional storage overhead.
+* The requirement of unique timestamps per transaction causes a small amount of additional overhead for clients. Clients must ensure each transaction's timeout timestamp is different. However, nanosecond differentials suffice.
+* Usage of the Cosmos SDK KV store is slower in comparison to using a non-merkleized store or ad-hoc methods, and block times may slow down as a result.
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/issues/13009
+
+
+
+### ADR 076: Cosmos SDK Transaction Malleability Risk Review and Recommendations
+
+
+
+# Cosmos SDK Transaction Malleability Risk Review and Recommendations
+
+## Changelog
+
+* 2025-03-10: Initial draft (@aaronc)
+
+## Status
+
+PROPOSED: Not Implemented
+
+## Abstract
+
+Several encoding and sign mode related issues have historically resulted in the possibility
+that Cosmos SDK transactions may be re-encoded in such a way as to change their hash
+(and in rare cases, their meaning) without invalidating the signature.
+This document details these cases, their potential risks, the extent to which they have been
+addressed, and provides recommendations for future improvements.
+ +## Review + +One naive assumption about Cosmos SDK transactions is that hashing the raw bytes of a submitted transaction creates a safe unique identifier for the transaction. In reality, there are multiple ways in which transactions could be manipulated to create different transaction bytes (and as a result different hashes) that still pass signature verification. + +This document attempts to enumerate the various potential transaction "malleability" risks that we have identified and the extent to which they have or have not been addressed in various sign modes. We also identify vulnerabilities that could be introduced if developers make changes in the future without careful consideration of the complexities involved with transaction encoding, sign modes and signatures. + +### Risks Associated with Malleability + +The malleability of transactions poses the following potential risks to end users: + +* unsigned data could get added to transactions and be processed by state machines +* clients often rely on transaction hashes for checking transaction status, but whether or not submitted transaction hashes match processed transaction hashes depends primarily on good network actors rather than fundamental protocol guarantees +* transactions could potentially get executed more than once (faulty replay protection) + +If a client generates a transaction, keeps a record of its hash and then attempts to query nodes to check the transaction's status, this process may falsely conclude that the transaction had not been processed if an intermediary +processor decoded and re-encoded the transaction with different encoding rules (either maliciously or unintentionally). +As long as no malleability is present in the signature bytes themselves, clients _should_ query transactions by signature instead of hash. 
+ +Not being cognizant of this risk may lead clients to submit the same transaction multiple times if they believe that +earlier transactions had failed or gotten lost in processing. +This could be an attack vector against users if wallets primarily query transactions by hash. + +If the state machine were to rely on transaction hashes as a replay mechanism itself, this would be faulty and not +provide the intended replay protection. Instead, the state machine should rely on deterministic representations of +transactions rather than the raw encoding, or other nonces, +if they want to provide some replay protection that doesn't rely on a monotonically +increasing account sequence number. + + +### Sources of Malleability + +#### Non-deterministic Protobuf Encoding + +Cosmos SDK transactions are encoded using protobuf binary encoding when they are submitted to the network. Protobuf binary is not inherently a deterministic encoding meaning that the same logical payload could have several valid bytes representations. In a basic sense, this means that protobuf in general can be decoded and re-encoded to produce a different byte stream (and thus different hash) without changing the logical meaning of the bytes. [ADR 027: Deterministic Protobuf Serialization](#adr-027-deterministic-protobuf-serialization) describes in detail what needs to be done to produce what we consider to be a "canonical", deterministic protobuf serialization. Briefly, the following sources of malleability at the encoding level have been identified and are addressed by this specification: + +* fields can be emitted in any order +* default field values can be included or omitted, and this doesn't change meaning unless `optional` is used +* `repeated` fields of scalars may use packed or "regular" encoding +* `varint`s can include extra ignored bits +* extra fields may be added and are usually simply ignored by decoders. 
[ADR 020](#adr-020-protocol-buffer-transaction-encoding) specifies that in general such extra fields should cause messages and transactions to be rejected + +When using `SIGN_MODE_DIRECT` none of the above malleabilities will be tolerated because: + +* signatures of messages and extensions must be done over the raw encoded bytes of those fields +* the outer tx envelope (`TxRaw`) must follow ADR 027 rules or be rejected + +Transactions signed with `SIGN_MODE_LEGACY_AMINO_JSON`, however, have no way of protecting against the above malleabilities because what is signed is a JSON representation of the logical contents of the transaction. These logical contents could have any number of valid protobuf binary encodings, so in general there are no guarantees regarding transaction hash with Amino JSON signing. + +In addition to being aware of the general non-determinism of protobuf binary, developers need to pay special attention to make sure that unknown protobuf fields get rejected when developing new capabilities related to protobuf transactions. The protobuf serialization format was designed with the assumption that unknown data known to encoders could safely be ignored by decoders. This assumption may have been fairly safe within the walled garden of Google's centralized infrastructure. However, in distributed blockchain systems, this assumption is generally unsafe. If a newer client encodes a protobuf message with data intended for a newer server, it is not safe for an older server to simply ignore and discard instructions that it does not understand. These instructions could include critical information that the transaction signer is relying upon and just assuming that it is unimportant is not safe. + +[ADR 020](#adr-020-protocol-buffer-transaction-encoding) specifies some provisions for "non-critical" fields which can safely be ignored by older servers. In practice, I have not seen any valid usages of this. 
It is something in the design that maintainers should be aware of, but it may not be necessary or even 100% safe. + +#### Non-deterministic Value Encoding + +In addition to the non-determinism present in protobuf binary itself, some protobuf field data is encoded using a micro-format which itself may not be deterministic. Consider for instance integer or decimal encoding. Some decoders may allow for the presence of leading or trailing zeros without changing the logical meaning, ex. `00100` vs `100` or `100.00` vs `100`. So if a sign mode encodes numbers deterministically, but decoders accept multiple representations, +a user may sign over the value `100` while `0100` gets encoded. This would be possible with Amino JSON to the extent that the integer decoder accepts leading zeros. I believe the current `Int` implementation will reject this, however, it is +probably possible to encode an octal or hexadecimal representation in the transaction whereas the user signs over a decimal integer. + +#### Signature Encoding + +Signatures themselves are encoded using a micro-format specific to the signature algorithm being used and sometimes these +micro-formats can allow for non-determinism (multiple valid bytes for the same signature). +Most of the signature algorithms supported by the SDK should reject non-canonical bytes in their current implementation. +However, the `Multisignature` protobuf type uses normal protobuf encoding and there is no check as to whether the +decoded bytes followed canonical ADR 027 rules or not. Therefore, multisig transactions can have malleability in +their signatures. +Any new or custom signature algorithms must make sure that they reject any non-canonical bytes, otherwise even +with `SIGN_MODE_DIRECT` there can be transaction hash malleability by re-encoding signatures with a non-canonical +representation. 
+ +#### Fields not covered by Amino JSON + +Another area that needs to be addressed carefully is the discrepancy between `AminoSignDoc` (see [`aminojson.proto`](../../x/tx/signing/aminojson/internal/aminojsonpb/aminojson.proto)) used for `SIGN_MODE_LEGACY_AMINO_JSON` and the actual contents of `TxBody` and `AuthInfo` (see [`tx.proto`](../../proto/cosmos/tx/v1beta1/tx.proto)). +If fields get added to `TxBody` or `AuthInfo`, they must either have a corresponding representation in `AminoSignDoc` or Amino JSON signatures must be rejected when those new fields are set. Making sure that this is done is a +highly manual process, and developers could easily make the mistake of updating `TxBody` or `AuthInfo` +without paying any attention to the implementation of `GetSignBytes` for Amino JSON. This is a critical +vulnerability in which unsigned content can now get into the transaction and signature verification will +pass. + +## Sign Mode Summary and Recommendations + +The sign modes officially supported by the SDK are `SIGN_MODE_DIRECT`, `SIGN_MODE_TEXTUAL`, `SIGN_MODE_DIRECT_AUX`, +and `SIGN_MODE_LEGACY_AMINO_JSON`. +`SIGN_MODE_LEGACY_AMINO_JSON` is used commonly by wallets and is currently the only sign mode supported on Nano Ledger hardware devices +(although `SIGN_MODE_TEXTUAL` was designed to also support hardware devices). +`SIGN_MODE_DIRECT` is the simplest sign mode and its usage is also fairly common. +`SIGN_MODE_DIRECT_AUX` is a variant of `SIGN_MODE_DIRECT` that can be used by auxiliary signers in a multi-signer +transaction by those signers who are not paying gas. +`SIGN_MODE_TEXTUAL` was intended as a replacement for `SIGN_MODE_LEGACY_AMINO_JSON`, but as far as we know it +has not been adopted by any clients yet and thus is not in active use. + +All known malleability concerns have been addressed in the current implementation of `SIGN_MODE_DIRECT`. 
+The only known malleability that could occur with a transaction signed with `SIGN_MODE_DIRECT` would +need to be in the signature bytes themselves. +Since signatures are not signed over, it is impossible for any sign mode to address this directly +and instead signature algorithms need to take care to reject any non-canonically encoded signature bytes +to prevent malleability. +For the known malleability of the `Multisignature` type, we should make sure that any valid signatures +were encoded following canonical ADR 027 rules when doing signature verification. + +`SIGN_MODE_DIRECT_AUX` provides the same level of safety as `SIGN_MODE_DIRECT` because + +* the raw encoded `TxBody` bytes are signed over in `SignDocDirectAux`, and +* a transaction using `SIGN_MODE_DIRECT_AUX` still requires the primary signer to sign the transaction with `SIGN_MODE_DIRECT` + +`SIGN_MODE_TEXTUAL` also provides the same level of safety as `SIGN_MODE_DIRECT` because the hash of the raw encoded +`TxBody` and `AuthInfo` bytes are signed over. + +Unfortunately, the vast majority of unaddressed malleability risks affect `SIGN_MODE_LEGACY_AMINO_JSON` and this +sign mode is still commonly used. 
+It is recommended that the following improvements be made to Amino JSON signing: + +* hashes of `TxBody` and `AuthInfo` should be added to `AminoSignDoc` so that encoding-level malleability is addressed +* when constructing `AminoSignDoc`, the [protoreflect](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect) API should be used to ensure that no fields which lack a mapping in `AminoSignDoc` have been set in `TxBody` or `AuthInfo` +* fields present in `TxBody` or `AuthInfo` that are not present in `AminoSignDoc` (such as extension options) should +be added to `AminoSignDoc` if possible + +## Testing + +To test that transactions are resistant to malleability, +we can develop a test suite to run against all sign modes that +attempts to manipulate transaction bytes in the following ways: + +* changing protobuf encoding by + * reordering fields + * setting default values + * adding extra bits to varints, or + * setting new unknown fields +* modifying integer and decimal values encoded as strings with leading or trailing zeros + +Whenever any of these manipulations is done, we should observe that the sign doc bytes for the sign mode being +tested also change, meaning that the corresponding signatures will also have to change. + +In the case of Amino JSON, we should also develop tests which ensure that signing fails if any `TxBody` or `AuthInfo` +field not supported by Amino's `AminoSignDoc` is set.
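One of the manipulations listed above, padding a varint with extra bits, can also be checked mechanically at the decoder level: a canonical decoder should fail outright on padded encodings rather than silently accept them. A stdlib-only sketch of such a check (illustrative, not the SDK's actual ADR 027 decoder):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// readCanonicalUvarint decodes a varint but, unlike binary.Uvarint,
// rejects non-minimal encodings. A multi-byte varint whose final byte
// is 0x00 carries a redundant trailing group and could have been
// encoded shorter, so it is refused.
func readCanonicalUvarint(b []byte) (uint64, int, error) {
	v, n := binary.Uvarint(b)
	if n <= 0 {
		return 0, 0, errors.New("malformed varint")
	}
	if n > 1 && b[n-1] == 0x00 {
		return 0, 0, errors.New("non-canonical varint: padded encoding")
	}
	return v, n, nil
}

func main() {
	v, _, err := readCanonicalUvarint([]byte{0x01}) // minimal encoding of 1
	fmt.Println(v, err == nil)                      // 1 true

	_, _, err = readCanonicalUvarint([]byte{0x81, 0x00}) // padded form of 1
	fmt.Println(err != nil)                              // true: rejected
}
```

A test suite like the one described above would feed such padded variants through full transaction decoding and assert that decoding fails.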
+ +In the general case of transaction decoding, we should have unit tests to ensure that + +* any `TxRaw` bytes which do not follow ADR 027 canonical encoding cause decoding to fail, and +* any top-level transaction elements including `TxBody`, `AuthInfo`, public keys, and messages which +have unknown fields set cause the transaction to be rejected +(this ensures that ADR 020 unknown field filtering is properly applied) + +For each supported signature algorithm, +there should also be unit tests to ensure that signatures must be encoded canonically +or get rejected. + +## References + +* [ADR 027: Deterministic Protobuf Serialization](#adr-027-deterministic-protobuf-serialization) +* [ADR 020](#adr-020-protocol-buffer-transaction-encoding) +* [`aminojson.proto`](../../x/tx/signing/aminojson/internal/aminojsonpb/aminojson.proto) +* [`tx.proto`](../../proto/cosmos/tx/v1beta1/tx.proto) + + + diff --git a/docs/evm/next/api-reference/ethereum-json-rpc/index.mdx b/docs/evm/next/api-reference/ethereum-json-rpc/index.mdx index 643cef75..082fbaad 100644 --- a/docs/evm/next/api-reference/ethereum-json-rpc/index.mdx +++ b/docs/evm/next/api-reference/ethereum-json-rpc/index.mdx @@ -7,37 +7,41 @@ description: "The JSON-RPC server provides an API that allows you to connect to More on Ethereum JSON-RPC: -* [EthWiki JSON-RPC API](https://eth.wiki/json-rpc/API) -* [Geth JSON-RPC Server](https://geth.ethereum.org/docs/interacting-with-geth/rpc) -* [Ethereum's PubSub JSON-RPC API](https://geth.ethereum.org/docs/interacting-with-geth/rpc/pubsub) +- [EthWiki JSON-RPC API](https://eth.wiki/json-rpc/API) +- [Geth JSON-RPC Server](https://geth.ethereum.org/docs/interacting-with-geth/rpc) +- [Ethereum's PubSub JSON-RPC API](https://geth.ethereum.org/docs/interacting-with-geth/rpc/pubsub) **Cosmos-Specific Extensions**: These methods are unique to Cosmos EVM and not found in standard Ethereum: - **Additional Eth Methods:** - * `eth_getTransactionLogs` - Returns logs for a specific 
transaction - * `eth_getBlockReceipts` - Returns all receipts for a given block - - **Extended Debug Methods:** - * `debug_freeOSMemory` - Forces garbage collection - * `debug_setGCPercent` - Sets garbage collection percentage - * `debug_memStats` - Returns detailed memory statistics - * `debug_setBlockProfileRate` - Sets block profiling rate - * `debug_writeBlockProfile` - Writes block profile to file - * `debug_writeMemProfile` - Writes memory profile to file - * `debug_writeMutexProfile` - Writes mutex contention profile to file +**Additional Eth Methods:** + +- `eth_getTransactionLogs` - Returns logs for a specific transaction +- `eth_getBlockReceipts` - Returns all receipts for a given block + +**Extended Debug Methods:** + +- `debug_freeOSMemory` - Forces garbage collection +- `debug_setGCPercent` - Sets garbage collection percentage +- `debug_memStats` - Returns detailed memory statistics +- `debug_setBlockProfileRate` - Sets block profiling rate +- `debug_writeBlockProfile` - Writes block profile to file +- `debug_writeMemProfile` - Writes memory profile to file +- `debug_writeMutexProfile` - Writes mutex contention profile to file + **Notable Unsupported Methods**: The following standard Ethereum methods are not implemented: - * `eth_fillTransaction` - Transaction filling utility - * All `debug_getRaw*` methods - Raw data access not implemented - * `eth_subscribe` syncing events - Only newHeads, logs, and newPendingTransactions work - * All `trace_*` methods - Parity/OpenEthereum trace namespace - * All `engine_*` methods - Post-merge Engine API +- `eth_fillTransaction` - Transaction filling utility +- All `debug_getRaw*` methods - Raw data access not implemented +- `eth_subscribe` syncing events - Only newHeads, logs, and newPendingTransactions work +- All `trace_*` methods - Parity/OpenEthereum trace namespace +- All `engine_*` methods - Post-merge Engine API + +See the [methods page](/docs/evm/next/api-reference/ethereum-json-rpc/methods) for complete 
details. - See the [methods page](./methods) for complete details. ## Enabling the JSON-RPC Server @@ -55,29 +59,40 @@ Confirm the following in your `app.toml`: [json-rpc] # Enable defines if the JSON-RPC server should be enabled. + enable = true # Address defines the JSON-RPC server address to bind to. + address = "127.0.0.1:8545" # WS-address defines the JSON-RPC WebSocket server address to bind to. + ws-address = "127.0.0.1:8546" # API defines a list of JSON-RPC namespaces that should be enabled. + # Example: "eth,web3,net,txpool,debug,personal" + api = "eth,web3,net,txpool" # MaxOpenConnections sets the maximum number of simultaneous connections + # for the JSON-RPC server. + max-open-connections = 0 # RPCGasCap sets a cap on gas that can be used in eth_call/estimateGas queries. + # If set to 0 (default), no cap is applied. + rpc-gas-cap = 0 # RPCEVMTimeout sets a timeout used for eth_call queries. + evm-timeout = "10s" -``` + +```` ### Command-Line Flags @@ -90,7 +105,7 @@ evmd start \ --json-rpc.address="0.0.0.0:8545" \ --json-rpc.ws-address="0.0.0.0:8546" \ --json-rpc.api="eth,web3,net,txpool,debug,personal" -``` +```` ## JSON-RPC over HTTP[​](#json-rpc-over-http "Direct link to JSON-RPC over HTTP") @@ -108,6 +123,7 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id ``` Response: + ```json { "jsonrpc": "2.0", @@ -119,31 +135,33 @@ Response: ### Namespaces supported on Cosmos EVM -See the [methods](./methods) page for an exhaustive list and working examples. + See the [methods](/docs/evm/next/api-reference/ethereum-json-rpc/methods) page + for an exhaustive list and working examples. 
-| Namespace | Description | Supported | Enabled by Default | -|-----------|-------------|-----------|-------------------| -| [`eth`](./methods#eth-methods) | Core Ethereum JSON-RPC methods for interacting with the EVM | Y | Y | -| [`web3`](./methods#web3-methods) | Utility functions for the web3 client | Y | Y | -| [`net`](./methods#net-methods) | Network information about the node | Y | Y | -| [`txpool`](./methods#txpool-methods) | Transaction pool inspection | Y | N | -| [`debug`](./methods#debug-methods) | Debugging and tracing functionality | Y | N | -| [`personal`](./methods#personal-methods) | Private key management | Y | N | -| [`admin`](./methods#admin-methods) | Node administration | Y | N | -| [`miner`](./methods#miner-methods) | Mining operations (stub for PoS) | Y | N | -| `clique` | Proof-of-Authority consensus | N | N | -| `les` | Light Ethereum Subprotocol | N | N | +| Namespace | Description | Supported | Enabled by Default | +| ------------------------------------------------------------------------------------- | ----------------------------------------------------------- | --------- | ------------------ | +| [`eth`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#eth-methods) | Core Ethereum JSON-RPC methods for interacting with the EVM | Y | Y | +| [`web3`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#web3-methods) | Utility functions for the web3 client | Y | Y | +| [`net`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#net-methods) | Network information about the node | Y | Y | +| [`txpool`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#txpool-methods) | Transaction pool inspection | Y | N | +| [`debug`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#debug-methods) | Debugging and tracing functionality | Y | N | +| [`personal`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#personal-methods) | Private key management | Y | N | +| 
[`admin`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#admin-methods) | Node administration | Y | N | +| [`miner`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#miner-methods) | Mining operations (stub for PoS) | Y | N | +| `clique` | Proof-of-Authority consensus | N | N | +| `les` | Light Ethereum Subprotocol | N | N | - You should only expose the debug endpoint in non production settings as it could impact network performance and uptime under certain conditions. + You should only expose the debug endpoint in non production settings as it + could impact network performance and uptime under certain conditions. ## Subscribing to Ethereum Events[​](#subscribing-to-ethereum-events "Direct link to Subscribing to Ethereum Events") ### Filters[​](#filters "Direct link to Filters") -Cosmos EVM also supports the Ethereum [JSON-RPC](./methods) filters calls to subscribe to [state logs](https://eth.wiki/json-rpc/API#eth_newfilter), [blocks](https://eth.wiki/json-rpc/API#eth_newblockfilter) or [pending transactions](https://eth.wiki/json-rpc/API#eth_newpendingtransactionfilter) changes. +Cosmos EVM also supports the Ethereum [JSON-RPC](/docs/evm/next/api-reference/ethereum-json-rpc/methods) filters calls to subscribe to [state logs](https://eth.wiki/json-rpc/API#eth_newfilter), [blocks](https://eth.wiki/json-rpc/API#eth_newblockfilter) or [pending transactions](https://eth.wiki/json-rpc/API#eth_newpendingtransactionfilter) changes. Under the hood, it uses the CometBFT RPC client's event system to process subscriptions that are then formatted to Ethereum-compatible events. 
@@ -229,44 +247,49 @@ JSON-RPC uses hexadecimal encoding for data, but the formatting differs based on #### Quantities When encoding quantities (integers, numbers): -* Encode as hex, prefix with `"0x"` -* Use the most compact representation -* Zero should be represented as `"0x0"` + +- Encode as hex, prefix with `"0x"` +- Use the most compact representation +- Zero should be represented as `"0x0"` Examples: -* `0x41` (65 in decimal) -* `0x400` (1024 in decimal) -* WRONG: `0x` (should always have at least one digit - zero is `"0x0"`) -* WRONG: `0x0400` (no leading zeroes allowed) -* WRONG: `ff` (must be prefixed `0x`) + +- `0x41` (65 in decimal) +- `0x400` (1024 in decimal) +- WRONG: `0x` (should always have at least one digit - zero is `"0x0"`) +- WRONG: `0x0400` (no leading zeroes allowed) +- WRONG: `ff` (must be prefixed `0x`) #### Unformatted Byte Arrays When encoding unformatted data (byte arrays, account addresses, hashes, bytecode arrays): -* Encode as hex, prefix with `"0x"` -* Two hex digits per byte + +- Encode as hex, prefix with `"0x"` +- Two hex digits per byte Examples: -* `0x41` (size 1, `"A"`) -* `0x004200` (size 3, `"\0B\0"`) -* `0x` (size 0, `""`) -* WRONG: `0xf0f0f` (must be even number of digits) -* WRONG: `004200` (must be prefixed `0x`) + +- `0x41` (size 1, `"A"`) +- `0x004200` (size 3, `"\0B\0"`) +- `0x` (size 0, `""`) +- WRONG: `0xf0f0f` (must be even number of digits) +- WRONG: `004200` (must be prefixed `0x`) ### Default Block Parameter[​](#default-block-parameter "Direct link to Default block parameter") Several methods that query the state of the EVM accept a default block parameter. This allows you to specify the block height at which to perform the query. 
Methods supporting block parameter: -* [`eth_getBalance`](./methods#eth_getbalance) -* [`eth_getCode`](./methods#eth_getcode) -* [`eth_getTransactionCount`](./methods#eth_gettransactioncount) -* [`eth_getStorageAt`](./methods#eth_getstorageat) -* [`eth_call`](./methods#eth_call) + +- [`eth_getBalance`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#eth_getbalance) +- [`eth_getCode`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#eth_getcode) +- [`eth_getTransactionCount`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#eth_gettransactioncount) +- [`eth_getStorageAt`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#eth_getstorageat) +- [`eth_call`](/docs/evm/next/api-reference/ethereum-json-rpc/methods#eth_call) The possible values for the `defaultBlock` parameter: -* **Hex String** - A specific block number (e.g., `0xC9B3C0`) -* **`"latest"`** - The most recently mined block -* **`"pending"`** - The pending state, including transactions not yet mined -* **`"earliest"`** - The genesis block +- **Hex String** - A specific block number (e.g., `0xC9B3C0`) +- **`"latest"`** - The most recently mined block +- **`"pending"`** - The pending state, including transactions not yet mined +- **`"earliest"`** - The genesis block diff --git a/docs/evm/next/api-reference/ethereum-json-rpc/methods.mdx b/docs/evm/next/api-reference/ethereum-json-rpc/methods.mdx index 3c375980..5a8593bc 100644 --- a/docs/evm/next/api-reference/ethereum-json-rpc/methods.mdx +++ b/docs/evm/next/api-reference/ethereum-json-rpc/methods.mdx @@ -1,7 +1,7 @@ --- title: "Methods" description: "Find below a list of JSON-RPC methods supported on Cosmos EVM, sorted by namespaces." 
-keywords: ['json-rpc', 'ethereum', 'methods', 'api', 'web3', 'eth', 'debug', 'admin', 'personal', 'txpool', 'filter', 'websocket', 'trace', 'engine', 'clique', 'les', 'rpc methods', 'ethereum api', 'cosmos evm'] +mode: "wide" --- ```ascii @@ -12,11 +12,17 @@ keywords: ['json-rpc', 'ethereum', 'methods', 'api', 'web3', 'eth', 'debug', 'ad ``` -This documentation lists all Ethereum JSON-RPC methods and their implementation status in Cosmos EVM. For the official Ethereum JSON-RPC specification, see the [Ethereum Execution APIs](https://github.com/ethereum/execution-apis) repository. + This documentation lists all Ethereum JSON-RPC methods and their + implementation status in Cosmos EVM. For the official Ethereum JSON-RPC + specification, see the [Ethereum Execution + APIs](https://github.com/ethereum/execution-apis) repository. -**Namespace Configuration**: By default, only the `eth`, `net`, and `web3` namespaces are enabled. To enable additional namespaces like `personal`, `txpool`, `debug`, or `miner`, you must configure them in your node's `app.toml` config. + **Namespace Configuration**: By default, only the `eth`, `net`, and `web3` + namespaces are enabled. To enable additional namespaces like `personal`, + `txpool`, `debug`, or `miner`, you must configure them in your node's + `app.toml` config. ## Non-Implemented Namespaces @@ -28,21 +34,35 @@ The following namespaces are not implemented in Cosmos EVM but may be present in The `debug` namespace provides debugging and tracing tools for development. Some tracing and profiling methods are implemented in Cosmos EVM. Many methods require the Profiling config to be enabled. - - The `admin` namespace handles node administration and peer management. Cosmos nodes use different administration mechanisms through the Cosmos SDK CLI and configuration files. - +{" "} - - The `miner` namespace controls Proof of Work mining operations. Cosmos EVM uses 'CometBFT' Byzantine Fault Tolerant consensus instead of mining. 
- + + The `admin` namespace handles node administration and peer management. Cosmos + nodes use different administration mechanisms through the Cosmos SDK CLI and + configuration files. + - - The `engine` namespace implements Ethereum's post-merge Engine API for communication with the consensus layer. Cosmos EVM uses 'CometBFT' consensus instead of the beacon chain. - +{" "} - - The `clique` namespace implements Proof of Authority consensus for private networks. Cosmos EVM uses 'CometBFT' consensus. - + + The `miner` namespace controls Proof of Work mining operations. Cosmos EVM + uses 'CometBFT' Byzantine Fault Tolerant consensus instead of mining. + + +{" "} + + + The `engine` namespace implements Ethereum's post-merge Engine API for + communication with the consensus layer. Cosmos EVM uses 'CometBFT' consensus + instead of the beacon chain. + + +{" "} + + + The `clique` namespace implements Proof of Authority consensus for private + networks. Cosmos EVM uses 'CometBFT' consensus. + The `les` namespace implements the Light Ethereum Subprotocol. Cosmos has a different light client architecture through IBC. 
@@ -51,265 +71,270 @@ The following namespaces are not implemented in Cosmos EVM but may be present in ## Endpoints -| Method | Namespace | Status | Public | Notes | -| ------------------------------------------------------------------------------------- | --------- | ----------- | ------ | ------------------ | -| [`web3_clientVersion`](#web3_clientversion) | Web3 | Y | Y | | -| [`web3_sha3`](#web3_sha3) | Web3 | Y | Y | | -| [`net_version`](#net_version) | Net | Y | Y | | -| [`net_peerCount`](#net_peercount) | Net | Stub | Y | Returns "0x0" | -| [`net_listening`](#net_listening) | Net | Stub | Y | Returns true | -| [`eth_protocolVersion`](#eth_protocolversion) | Eth | Y | Y | | -| [`eth_syncing`](#eth_syncing) | Eth | Stub | Y | Returns false | -| [`eth_gasPrice`](#eth_gasprice) | Eth | Stub | Y | Returns "0x0" | -| [`eth_accounts`](#eth_accounts) | Eth | Y | Y | | -| [`eth_blockNumber`](#eth_blocknumber) | Eth | Y | Y | | -| [`eth_getBalance`](#eth_getbalance) | Eth | Y | Y | | -| [`eth_getStorageAt`](#eth_getstorageat) | Eth | Y | Y | | -| [`eth_getTransactionCount`](#eth_gettransactioncount) | Eth | Y | Y | | -| [`eth_getBlockTransactionCountByNumber`](#eth_getblocktransactioncountbynumber) | Eth | Stub | Y | Returns "0x0" | -| [`eth_getBlockTransactionCountByHash`](#eth_getblocktransactioncountbyhash) | Eth | Stub | Y | Returns "0x0" | -| [`eth_getCode`](#eth_getcode) | Eth | Stub | Y | Returns "0x" | -| [`eth_sign`](#eth_sign) | Eth | Y | N | | -| [`eth_sendTransaction`](#eth_sendtransaction) | Eth | N | N | Not implemented | -| [`eth_sendRawTransaction`](#eth_sendrawtransaction) | Eth | Y | Y | | -| [`eth_call`](#eth_call) | Eth | Stub | Y | Returns "0x" | -| [`eth_createAccessList`](#eth_createaccesslist) | Eth | Y | Y | EIP-2930 | -| [`eth_estimateGas`](#eth_estimategas) | Eth | Y | Y | Optimized | -| [`eth_getBlockByNumber`](#eth_getblockbynumber) | Eth | Y | Y | | -| [`eth_getBlockByHash`](#eth_getblockbyhash) | Eth | Y | Y | | -| 
[`eth_getTransactionByHash`](#eth_gettransactionbyhash) | Eth | Stub | Y | Returns null |
-| [`eth_getTransactionByBlockHashAndIndex`](#eth_gettransactionbyblockhashandindex) | Eth | Stub | Y | Returns null |
-| [`eth_getTransactionReceipt`](#eth_gettransactionreceipt) | Eth | Stub | Y | Returns null |
-| [`eth_newFilter`](#eth_newfilter) | Eth | Y | Y | |
-| [`eth_newBlockFilter`](#eth_newblockfilter) | Eth | Y | Y | |
-| [`eth_newPendingTransactionFilter`](#eth_newpendingtransactionfilter) | Eth | Y | Y | |
-| [`eth_uninstallFilter`](#eth_uninstallfilter) | Eth | Stub | Y | Returns true |
-| [`eth_getFilterChanges`](#eth_getfilterchanges) | Eth | Stub | Y | Returns empty array|
-| [`eth_getFilterLogs`](#eth_getfilterlogs) | Eth | Stub | Y | Returns empty array|
-| [`eth_getLogs`](#eth_getlogs) | Eth | Stub | Y | Returns [] |
-| [`eth_getTransactionByBlockNumberAndIndex`](#eth_gettransactionbyblocknumberandindex) | Eth | Stub | Y | Returns null |
-| `eth_getWork` | Eth | N | Y | Not implemented |
-| `eth_submitWork` | Eth | N | Y | Not implemented |
-| `eth_submitHashrate` | Eth | N | Y | Not implemented |
-| `eth_getCompilers` | Eth | N | N | Not implemented |
-| `eth_compileLLL` | Eth | N | N | Not implemented |
-| `eth_compileSolidity` | Eth | N | N | Not implemented |
-| `eth_compileSerpent` | Eth | N | N | Not implemented |
-| `eth_signTransaction` | Eth | N | N | Not implemented |
-| `eth_mining` | Eth | Y | Y | Deprecated - always false |
-| [`eth_coinbase`](#eth_coinbase) | Eth | Y | Y | Deprecated in geth |
-| `eth_hashrate` | Eth | Y | Y | Deprecated - always 0 |
-| `eth_getUncleCountByBlockHash` | Eth | Stub | Y | Always "0x0" |
-| `eth_getUncleCountByBlockNumber` | Eth | Stub | Y | Always "0x0" |
-| `eth_getUncleByBlockHashAndIndex` | Eth | Stub | Y | Always null |
-| `eth_getUncleByBlockNumberAndIndex` | Eth | Stub | Y | Always null |
-| [`eth_getProof`](#eth_getproof) | Eth | Y | Y | |
-| `eth_feeHistory` | Eth | Y | Y | EIP-1559 |
-| `eth_maxPriorityFeePerGas` | Eth | Stub | Y | Returns "0x0" |
-| `eth_chainId` | Eth | Y | Y | |
-| `eth_getBlockReceipts` | Eth | Stub | Y | Returns empty array|
-| [`eth_resend`](#eth_resend) | Eth | N | Y | Requires nonce param |
-| [`eth_createAccessList`](#eth_createaccesslist) | Eth | Y | Y | EIP-2930 |
-| `eth_blobBaseFee` | Eth | N | Y | EIP-4844 not implemented |
-| `eth_fillTransaction` | Eth | N | Y | Not implemented |
-| [`eth_signTypedData`](#eth_signtypeddata) | Eth | N | N | Requires domain param |
-| `eth_signTypedData_v3` | Eth | N | N | Not implemented |
-| `eth_signTypedData_v4` | Eth | N | N | Not implemented |
-| `eth_pendingTransactions` | Eth | N | Y | Not implemented |
-| `eth_getPendingTransactions` | Eth | Y | Y | Deprecated in geth |
-| `eth_getHeaderByHash` | Eth | Y | Y | |
-| `eth_getHeaderByNumber` | Eth | Y | Y | |
-| `eth_simulateV1` | Eth | N | Y | Geth-specific, not implemented |
-| `eth_getRawTransactionByHash` | Eth | N | Y | Not implemented |
-| `eth_getRawTransactionByBlockNumberAndIndex` | Eth | N | Y | Not implemented |
-| `eth_getRawTransactionByBlockHashAndIndex` | Eth | N | Y | Not implemented |
-| [`eth_subscribe`](#eth_subscribe) | Websocket | Y | Y | WebSocket only |
-| [`eth_unsubscribe`](#eth_unsubscribe) | Websocket | Y | Y | WebSocket only |
-| [`personal_importRawKey`](#personal_importrawkey) | Personal | N | N | Requires valid hex key |
-| [`personal_listAccounts`](#personal_listaccounts) | Personal | Y | N | Requires `v0.4.x` mempool |
-| [`personal_lockAccount`](#personal_lockaccount) | Personal | Stub | N | Always false |
-| [`personal_newAccount`](#personal_newaccount) | Personal | Y | N | Requires `v0.4.x` mempool |
-| [`personal_unlockAccount`](#personal_unlockaccount) | Personal | Stub | N | Always false |
-| [`personal_sendTransaction`](#personal_sendtransaction) | Personal | N | N | Not implemented |
-| [`personal_sign`](#personal_sign) | Personal | Y | N | Requires `v0.4.x` mempool |
-| [`personal_ecRecover`](#personal_ecrecover) | Personal | N | N | Requires 65-byte sig |
-| [`personal_initializeWallet`](#personal_initializewallet) | Personal | N | N | Not implemented |
-| [`personal_unpair`](#personal_unpair) | Personal | N | N | Not implemented |
-| `personal_listWallets` | Personal | Stub | N | Returns null |
-| `personal_signTransaction` | Personal | N | N | Not implemented |
-| `personal_signAndSendTransaction` | Personal | N | N | Not implemented |
-| `personal_openWallet` | Personal | N | N | Not implemented |
-| `personal_deriveAccount` | Personal | N | N | Not implemented |
-| `db_putString` | DB | N | N | Deprecated |
-| `db_getString` | DB | N | N | Deprecated |
-| `db_putHex` | DB | N | N | Deprecated |
-| `db_getHex` | DB | N | N | Deprecated |
-| `shh_post` | SSH | N | N | Deprecated |
-| `shh_version` | SSH | N | N | Deprecated |
-| `shh_newIdentity` | SSH | N | N | Deprecated |
-| `shh_hasIdentity` | SSH | N | N | Deprecated |
-| `shh_newGroup` | SSH | N | N | Deprecated |
-| `shh_addToGroup` | SSH | N | N | Deprecated |
-| `shh_newFilter` | SSH | N | N | Deprecated |
-| `shh_uninstallFilter` | SSH | N | N | Deprecated |
-| `shh_getFilterChanges` | SSH | N | N | Deprecated |
-| `shh_getMessages` | SSH | N | N | Deprecated |
-| `admin_addPeer` | Admin | N | N | Returns undefined |
-| `admin_removePeer` | Admin | N | N | Returns undefined |
-| `admin_datadir` | Admin | N | N | Returns undefined |
-| `admin_nodeInfo` | Admin | N | N | Returns undefined |
-| `admin_peers` | Admin | N | N | Returns undefined |
-| `admin_startHTTP` | Admin | N | N | Returns undefined |
-| `admin_startWS` | Admin | N | N | Returns undefined |
-| `admin_stopHTTP` | Admin | N | N | Returns undefined |
-| `admin_stopWS` | Admin | N | N | Returns undefined |
-| `admin_addTrustedPeer` | Admin | N | N | Returns undefined |
-| `admin_removeTrustedPeer` | Admin | N | N | Returns undefined |
-| `admin_startRPC` | Admin | N | N | Returns undefined |
-| `admin_stopRPC` | Admin | N | N | Returns undefined |
-| `admin_exportChain` | Admin | N | N | Returns undefined |
-| `admin_importChain` | Admin | N | N | Returns undefined |
-| `admin_sleepBlocks` | Admin | N | N | Returns undefined |
-| `admin_clearPeerBanList` | Admin | N | N | Returns undefined |
-| `admin_listPeerBanList` | Admin | N | N | Returns undefined |
-| `clique_getSnapshot` | Clique | N | N | Returns undefined |
-| `clique_getSnapshotAtHash` | Clique | N | N | Returns undefined |
-| `clique_getSigners` | Clique | N | N | Returns undefined |
-| `clique_getSignersAtHash` | Clique | N | N | Returns undefined |
-| `clique_propose` | Clique | N | N | Returns undefined |
-| `clique_discard` | Clique | N | N | Returns undefined |
-| `clique_status` | Clique | N | N | Returns undefined |
-| `clique_getSigner` | Clique | N | N | Returns undefined |
-| `debug_backtraceAt` | Debug | N | N | Returns undefined |
-| `debug_blockProfile` | Debug | Y | N | |
-| `debug_cpuProfile` | Debug | Y | N | |
-| `debug_dumpBlock` | Debug | N | N | Returns undefined |
-| `debug_gcStats` | Debug | Y | N | |
-| [`debug_getBlockRlp`](#debug_getblockrlp) | Debug | N | N | Requires uint64 param |
-| `debug_goTrace` | Debug | Y | N | |
-| `debug_freeOSMemory` | Debug | Y | N | |
-| `debug_memStats` | Debug | Y | N | |
-| `debug_mutexProfile` | Debug | Y | N | |
-| `debug_seedHash` | Debug | N | N | Not implemented |
-| `debug_setHead` | Debug | N | N | Not implemented |
-| `debug_setBlockProfileRate` | Debug | Y | N | |
-| `debug_setGCPercent` | Debug | Y | N | |
-| `debug_setMutexProfileFraction` | Debug | Y | N | |
-| `debug_stacks` | Debug | Y | N | |
-| `debug_startCPUProfile` | Debug | Y | N | |
-| `debug_startGoTrace` | Debug | Y | N | |
-| `debug_stopCPUProfile` | Debug | Y | N | |
-| `debug_stopGoTrace` | Debug | Y | N | |
-| `debug_traceBlock` | Debug | Y | N | |
-| [`debug_traceBlockByNumber`](#debug_traceblockbynumber) | Debug | Y | N | |
-| [`debug_traceBlockByHash`](#debug_traceblockhash) | Debug | Y | N | |
-| `debug_traceBlockFromFile` | Debug | N | N | Returns undefined |
-| `debug_standardTraceBlockToFile` | Debug | N | N | Returns undefined |
-| `debug_standardTraceBadBlockToFile` | Debug | N | N | Returns undefined |
-| [`debug_traceTransaction`](#debug_tracetransaction) | Debug | Y | N | |
-| `debug_traceCall` | Debug | Y | N | |
-| `debug_traceChain` | Debug | N | N | Returns undefined |
-| `debug_traceBadBlock` | Debug | N | N | Returns undefined |
-| `debug_verbosity` | Debug | N | N | Not implemented |
-| `debug_vmodule` | Debug | N | N | Not implemented |
-| `debug_writeBlockProfile` | Debug | Y | N | |
-| `debug_writeMemProfile` | Debug | Y | N | |
-| `debug_writeMutexProfile` | Debug | Y | N | |
-| `debug_getRawBlock` | Debug | N | N | Not implemented |
-| `debug_getRawHeader` | Debug | N | N | Not implemented |
-| `debug_getRawReceipts` | Debug | N | N | Not implemented |
-| `debug_getRawTransaction` | Debug | N | N | Not implemented |
-| [`debug_printBlock`](#debug_printblock) | Debug | Y | N | |
-| [`debug_getHeaderRlp`](#debug_getheaderrlp) | Debug | N | N | Requires uint64 param |
-| `debug_intermediateRoots` | Debug | Y | N | Returns empty hash |
-| `debug_getBadBlocks` | Debug | N | N | Not implemented |
-| `debug_storageRangeAt` | Debug | N | N | Not implemented |
-| `debug_getModifiedAccountsByNumber` | Debug | N | N | Not implemented |
-| `debug_getModifiedAccountsByHash` | Debug | N | N | Not implemented |
-| `les_serverInfo` | Les | N | N | Returns undefined |
-| `les_clientInfo` | Les | N | N | Returns undefined |
-| `les_priorityClientInfo` | Les | N | N | Returns undefined |
-| `les_addBalance` | Les | N | N | Returns undefined |
-| `les_setClientParams` | Les | N | N | Returns undefined |
-| `les_setDefaultParams` | Les | N | N | Returns undefined |
-| `les_latestCheckpoint` | Les | N | N | Returns undefined |
-| `les_getCheckpoint` | Les | N | N | Returns undefined |
-| `les_getCheckpointContractAddress` | Les | N | N | Returns undefined |
-| [`txpool_content`](#txpool_content) | TxPool | Y | Y | See [experimental mempool](/docs/evm/next/documentation/concepts/mempool) |
-| [`txpool_contentFrom`](#txpool_contentfrom) | TxPool | Y | Y | See [experimental mempool](/docs/evm/next/documentation/concepts/mempool) |
-| [`txpool_inspect`](#txpool_inspect) | TxPool | Y | Y | See [experimental mempool](/docs/evm/next/documentation/concepts/mempool) |
-| [`txpool_status`](#txpool_status) | TxPool | Y | Y | See [experimental mempool](/docs/evm/next/documentation/concepts/mempool) |
-| `trace_callMany` | Trace | N | N | Not implemented |
-| `eth_signTypedData` | Eth | W | N | EIP-712 partial |
-| `eth_fillTransaction` | Eth | Y | Y | |
-| `eth_getTransactionLogs` | Eth | Y | Y | **Cosmos-specific**|
-| `miner_start` | Miner | N | N | Returns undefined |
-| `miner_stop` | Miner | N | N | Returns undefined |
-| `miner_setEtherbase` | Miner | N | N | Returns undefined |
-| `miner_setGasPrice` | Miner | N | N | Returns undefined |
-| `miner_setGasLimit` | Miner | N | N | Returns undefined |
-| `miner_setExtra` | Miner | N | N | Returns undefined |
-| `miner_setRecommitInterval` | Miner | N | N | Returns undefined |
-| `miner_getHashrate` | Miner | N | N | Returns undefined |
-| `engine_newPayloadV1` | Engine | N | N | Returns undefined |
-| `engine_newPayloadV2` | Engine | N | N | Returns undefined |
-| `engine_newPayloadV3` | Engine | N | N | Returns undefined |
-| `engine_forkchoiceUpdatedV1` | Engine | N | N | Returns undefined |
-| `engine_forkchoiceUpdatedV2` | Engine | N | N | Returns undefined |
-| `engine_forkchoiceUpdatedV3` | Engine | N | N | Returns undefined |
-| `engine_getPayloadV1` | Engine | N | N | Returns undefined |
-| `engine_getPayloadV2` | Engine | N | N | Returns undefined |
-| `engine_getPayloadV3` | Engine | N | N | Returns undefined |
-| `engine_exchangeCapabilities` | Engine | N | N | Returns undefined |
-| `engine_exchangeTransitionConfigurationV1` | Engine | N | N | Returns undefined |
-| `engine_getPayloadBodiesByHashV1` | Engine | N | N | Returns undefined |
-| `engine_getPayloadBodiesByRangeV1` | Engine | N | N | Returns undefined |
-| `engine_getBlobsV1` | Engine | N | N | Returns undefined |
-| `trace_call` | Trace | N | N | Not implemented |
-| `trace_rawTransaction` | Trace | N | N | Not implemented |
-| `trace_replayBlockTransactions` | Trace | N | N | Not implemented |
-| `trace_replayTransaction` | Trace | N | N | Not implemented |
-| `trace_block` | Trace | N | N | Not implemented |
-| `trace_filter` | Trace | N | N | Not implemented |
-| `trace_get` | Trace | N | N | Not implemented |
-| `trace_transaction` | Trace | N | N | Not implemented |
+| Method | Namespace | Status | Public | Notes |
+| ------ | --------- | ------ | ------ | ----- |
+| [`web3_clientVersion`](#web3_clientversion) | Web3 | Y | Y | |
+| [`web3_sha3`](#web3_sha3) | Web3 | Y | Y | |
+| [`net_version`](#net_version) | Net | Y | Y | |
+| [`net_peerCount`](#net_peercount) | Net | Stub | Y | Returns "0x0" |
+| [`net_listening`](#net_listening) | Net | Stub | Y | Returns true |
+| [`eth_protocolVersion`](#eth_protocolversion) | Eth | Y | Y | |
+| [`eth_syncing`](#eth_syncing) | Eth | Stub | Y | Returns false |
+| [`eth_gasPrice`](#eth_gasprice) | Eth | Stub | Y | Returns "0x0" |
+| [`eth_accounts`](#eth_accounts) | Eth | Y | Y | |
+| [`eth_blockNumber`](#eth_blocknumber) | Eth | Y | Y | |
+| [`eth_getBalance`](#eth_getbalance) | Eth | Y | Y | |
+| [`eth_getStorageAt`](#eth_getstorageat) | Eth | Y | Y | |
+| [`eth_getTransactionCount`](#eth_gettransactioncount) | Eth | Y | Y | |
+| [`eth_getBlockTransactionCountByNumber`](#eth_getblocktransactioncountbynumber) | Eth | Stub | Y | Returns "0x0" |
+| [`eth_getBlockTransactionCountByHash`](#eth_getblocktransactioncountbyhash) | Eth | Stub | Y | Returns "0x0" |
+| [`eth_getCode`](#eth_getcode) | Eth | Stub | Y | Returns "0x" |
+| [`eth_sign`](#eth_sign) | Eth | Y | N | |
+| [`eth_sendTransaction`](#eth_sendtransaction) | Eth | N | N | Not implemented |
+| [`eth_sendRawTransaction`](#eth_sendrawtransaction) | Eth | Y | Y | |
+| [`eth_call`](#eth_call) | Eth | Stub | Y | Returns "0x" |
+| [`eth_createAccessList`](#eth_createaccesslist) | Eth | Y | Y | EIP-2930 |
+| [`eth_estimateGas`](#eth_estimategas) | Eth | Y | Y | Optimized |
+| [`eth_getBlockByNumber`](#eth_getblockbynumber) | Eth | Y | Y | |
+| [`eth_getBlockByHash`](#eth_getblockbyhash) | Eth | Y | Y | |
+| [`eth_getTransactionByHash`](#eth_gettransactionbyhash) | Eth | Stub | Y | Returns null |
+| [`eth_getTransactionByBlockHashAndIndex`](#eth_gettransactionbyblockhashandindex) | Eth | Stub | Y | Returns null |
+| [`eth_getTransactionReceipt`](#eth_gettransactionreceipt) | Eth | Stub | Y | Returns null |
+| [`eth_newFilter`](#eth_newfilter) | Eth | Y | Y | |
+| [`eth_newBlockFilter`](#eth_newblockfilter) | Eth | Y | Y | |
+| [`eth_newPendingTransactionFilter`](#eth_newpendingtransactionfilter) | Eth | Y | Y | |
+| [`eth_uninstallFilter`](#eth_uninstallfilter) | Eth | Stub | Y | Returns true |
+| [`eth_getFilterChanges`](#eth_getfilterchanges) | Eth | Stub | Y | Returns empty array |
+| [`eth_getFilterLogs`](#eth_getfilterlogs) | Eth | Stub | Y | Returns empty array |
+| [`eth_getLogs`](#eth_getlogs) | Eth | Stub | Y | Returns [] |
+| [`eth_getTransactionByBlockNumberAndIndex`](#eth_gettransactionbyblocknumberandindex) | Eth | Stub | Y | Returns null |
+| `eth_getWork` | Eth | N | Y | Not implemented |
+| `eth_submitWork` | Eth | N | Y | Not implemented |
+| `eth_submitHashrate` | Eth | N | Y | Not implemented |
+| `eth_getCompilers` | Eth | N | N | Not implemented |
+| `eth_compileLLL` | Eth | N | N | Not implemented |
+| `eth_compileSolidity` | Eth | N | N | Not implemented |
+| `eth_compileSerpent` | Eth | N | N | Not implemented |
+| `eth_signTransaction` | Eth | N | N | Not implemented |
+| `eth_mining` | Eth | Y | Y | Deprecated - always false |
+| [`eth_coinbase`](#eth_coinbase) | Eth | Y | Y | Deprecated in geth |
+| `eth_hashrate` | Eth | Y | Y | Deprecated - always 0 |
+| `eth_getUncleCountByBlockHash` | Eth | Stub | Y | Always "0x0" |
+| `eth_getUncleCountByBlockNumber` | Eth | Stub | Y | Always "0x0" |
+| `eth_getUncleByBlockHashAndIndex` | Eth | Stub | Y | Always null |
+| `eth_getUncleByBlockNumberAndIndex` | Eth | Stub | Y | Always null |
+| [`eth_getProof`](#eth_getproof) | Eth | Y | Y | |
+| `eth_feeHistory` | Eth | Y | Y | EIP-1559 |
+| `eth_maxPriorityFeePerGas` | Eth | Stub | Y | Returns "0x0" |
+| `eth_chainId` | Eth | Y | Y | |
+| `eth_getBlockReceipts` | Eth | Stub | Y | Returns empty array |
+| [`eth_resend`](#eth_resend) | Eth | N | Y | Requires nonce param |
+| [`eth_createAccessList`](#eth_createaccesslist) | Eth | Y | Y | EIP-2930 |
+| `eth_blobBaseFee` | Eth | N | Y | EIP-4844 not implemented |
+| `eth_fillTransaction` | Eth | N | Y | Not implemented |
+| [`eth_signTypedData`](#eth_signtypeddata) | Eth | N | N | Requires domain param |
+| `eth_signTypedData_v3` | Eth | N | N | Not implemented |
+| `eth_signTypedData_v4` | Eth | N | N | Not implemented |
+| `eth_pendingTransactions` | Eth | N | Y | Not implemented |
+| `eth_getPendingTransactions` | Eth | Y | Y | Deprecated in geth |
+| `eth_getHeaderByHash` | Eth | Y | Y | |
+| `eth_getHeaderByNumber` | Eth | Y | Y | |
+| `eth_simulateV1` | Eth | N | Y | Geth-specific, not implemented |
+| `eth_getRawTransactionByHash` | Eth | N | Y | Not implemented |
+| `eth_getRawTransactionByBlockNumberAndIndex` | Eth | N | Y | Not implemented |
+| `eth_getRawTransactionByBlockHashAndIndex` | Eth | N | Y | Not implemented |
+| [`eth_subscribe`](#eth_subscribe) | Websocket | Y | Y | WebSocket only |
+| [`eth_unsubscribe`](#eth_unsubscribe) | Websocket | Y | Y | WebSocket only |
+| [`personal_importRawKey`](#personal_importrawkey) | Personal | N | N | Requires valid hex key |
+| [`personal_listAccounts`](#personal_listaccounts) | Personal | Y | N | Requires `v0.4.x` mempool |
+| [`personal_lockAccount`](#personal_lockaccount) | Personal | Stub | N | Always false |
+| [`personal_newAccount`](#personal_newaccount) | Personal | Y | N | Requires `v0.4.x` mempool |
+| [`personal_unlockAccount`](#personal_unlockaccount) | Personal | Stub | N | Always false |
+| [`personal_sendTransaction`](#personal_sendtransaction) | Personal | N | N | Not implemented |
+| [`personal_sign`](#personal_sign) | Personal | Y | N | Requires `v0.4.x` mempool |
+| [`personal_ecRecover`](#personal_ecrecover) | Personal | N | N | Requires 65-byte sig |
+| [`personal_initializeWallet`](#personal_initializewallet) | Personal | N | N | Not implemented |
+| [`personal_unpair`](#personal_unpair) | Personal | N | N | Not implemented |
+| `personal_listWallets` | Personal | Stub | N | Returns null |
+| `personal_signTransaction` | Personal | N | N | Not implemented |
+| `personal_signAndSendTransaction` | Personal | N | N | Not implemented |
+| `personal_openWallet` | Personal | N | N | Not implemented |
+| `personal_deriveAccount` | Personal | N | N | Not implemented |
+| `db_putString` | DB | N | N | Deprecated |
+| `db_getString` | DB | N | N | Deprecated |
+| `db_putHex` | DB | N | N | Deprecated |
+| `db_getHex` | DB | N | N | Deprecated |
+| `shh_post` | Shh | N | N | Deprecated |
+| `shh_version` | Shh | N | N | Deprecated |
+| `shh_newIdentity` | Shh | N | N | Deprecated |
+| `shh_hasIdentity` | Shh | N | N | Deprecated |
+| `shh_newGroup` | Shh | N | N | Deprecated |
+| `shh_addToGroup` | Shh | N | N | Deprecated |
+| `shh_newFilter` | Shh | N | N | Deprecated |
+| `shh_uninstallFilter` | Shh | N | N | Deprecated |
+| `shh_getFilterChanges` | Shh | N | N | Deprecated |
+| `shh_getMessages` | Shh | N | N | Deprecated |
+| `admin_addPeer` | Admin | N | N | Returns undefined |
+| `admin_removePeer` | Admin | N | N | Returns undefined |
+| `admin_datadir` | Admin | N | N | Returns undefined |
+| `admin_nodeInfo` | Admin | N | N | Returns undefined |
+| `admin_peers` | Admin | N | N | Returns undefined |
+| `admin_startHTTP` | Admin | N | N | Returns undefined |
+| `admin_startWS` | Admin | N | N | Returns undefined |
+| `admin_stopHTTP` | Admin | N | N | Returns undefined |
+| `admin_stopWS` | Admin | N | N | Returns undefined |
+| `admin_addTrustedPeer` | Admin | N | N | Returns undefined |
+| `admin_removeTrustedPeer` | Admin | N | N | Returns undefined |
+| `admin_startRPC` | Admin | N | N | Returns undefined |
+| `admin_stopRPC` | Admin | N | N | Returns undefined |
+| `admin_exportChain` | Admin | N | N | Returns undefined |
+| `admin_importChain` | Admin | N | N | Returns undefined |
+| `admin_sleepBlocks` | Admin | N | N | Returns undefined |
+| `admin_clearPeerBanList` | Admin | N | N | Returns undefined |
+| `admin_listPeerBanList` | Admin | N | N | Returns undefined |
+| `clique_getSnapshot` | Clique | N | N | Returns undefined |
+| `clique_getSnapshotAtHash` | Clique | N | N | Returns undefined |
+| `clique_getSigners` | Clique | N | N | Returns undefined |
+| `clique_getSignersAtHash` | Clique | N | N | Returns undefined |
+| `clique_propose` | Clique | N | N | Returns undefined |
+| `clique_discard` | Clique | N | N | Returns undefined |
+| `clique_status` | Clique | N | N | Returns undefined |
+| `clique_getSigner` | Clique | N | N | Returns undefined |
+| `debug_backtraceAt` | Debug | N | N | Returns undefined |
+| `debug_blockProfile` | Debug | Y | N | |
+| `debug_cpuProfile` | Debug | Y | N | |
+| `debug_dumpBlock` | Debug | N | N | Returns undefined |
+| `debug_gcStats` | Debug | Y | N | |
+| [`debug_getBlockRlp`](#debug_getblockrlp) | Debug | N | N | Requires uint64 param |
+| `debug_goTrace` | Debug | Y | N | |
+| `debug_freeOSMemory` | Debug | Y | N | |
+| `debug_memStats` | Debug | Y | N | |
+| `debug_mutexProfile` | Debug | Y | N | |
+| `debug_seedHash` | Debug | N | N | Not implemented |
+| `debug_setHead` | Debug | N | N | Not implemented |
+| `debug_setBlockProfileRate` | Debug | Y | N | |
+| `debug_setGCPercent` | Debug | Y | N | |
+| `debug_setMutexProfileFraction` | Debug | Y | N | |
+| `debug_stacks` | Debug | Y | N | |
+| `debug_startCPUProfile` | Debug | Y | N | |
+| `debug_startGoTrace` | Debug | Y | N | |
+| `debug_stopCPUProfile` | Debug | Y | N | |
+| `debug_stopGoTrace` | Debug | Y | N | |
+| `debug_traceBlock` | Debug | Y | N | |
+| [`debug_traceBlockByNumber`](#debug_traceblockbynumber) | Debug | Y | N | |
+| [`debug_traceBlockByHash`](#debug_traceblockbyhash) | Debug | Y | N | |
+| `debug_traceBlockFromFile` | Debug | N | N | Returns undefined |
+| `debug_standardTraceBlockToFile` | Debug | N | N | Returns undefined |
+| `debug_standardTraceBadBlockToFile` | Debug | N | N | Returns undefined |
+| [`debug_traceTransaction`](#debug_tracetransaction) | Debug | Y | N | |
+| `debug_traceCall` | Debug | Y | N | |
+| `debug_traceChain` | Debug | N | N | Returns undefined |
+| `debug_traceBadBlock` | Debug | N | N | Returns undefined |
+| `debug_verbosity` | Debug | N | N | Not implemented |
+| `debug_vmodule` | Debug | N | N | Not implemented |
+| `debug_writeBlockProfile` | Debug | Y | N | |
+| `debug_writeMemProfile` | Debug | Y | N | |
+| `debug_writeMutexProfile` | Debug | Y | N | |
+| `debug_getRawBlock` | Debug | N | N | Not implemented |
+| `debug_getRawHeader` | Debug | N | N | Not implemented |
+| `debug_getRawReceipts` | Debug | N | N | Not implemented |
+| `debug_getRawTransaction` | Debug | N | N | Not implemented |
+| [`debug_printBlock`](#debug_printblock) | Debug | Y | N | |
+| [`debug_getHeaderRlp`](#debug_getheaderrlp) | Debug | N | N | Requires uint64 param |
+| `debug_intermediateRoots` | Debug | Y | N | Returns empty hash |
+| `debug_getBadBlocks` | Debug | N | N | Not implemented |
+| `debug_storageRangeAt` | Debug | N | N | Not implemented |
+| `debug_getModifiedAccountsByNumber` | Debug | N | N | Not implemented |
+| `debug_getModifiedAccountsByHash` | Debug | N | N | Not implemented |
+| `les_serverInfo` | Les | N | N | Returns undefined |
+| `les_clientInfo` | Les | N | N | Returns undefined |
+| `les_priorityClientInfo` | Les | N | N | Returns undefined |
+| `les_addBalance` | Les | N | N | Returns undefined |
+| `les_setClientParams` | Les | N | N | Returns undefined |
+| `les_setDefaultParams` | Les | N | N | Returns undefined |
+| `les_latestCheckpoint` | Les | N | N | Returns undefined |
+| `les_getCheckpoint` | Les | N | N | Returns undefined |
+| `les_getCheckpointContractAddress` | Les | N | N | Returns undefined |
+| [`txpool_content`](#txpool_content) | TxPool | Y | Y | See [experimental mempool](/docs/evm/next/documentation/concepts/mempool) |
+| [`txpool_contentFrom`](#txpool_contentfrom) | TxPool | Y | Y | See [experimental mempool](/docs/evm/next/documentation/concepts/mempool) |
+| [`txpool_inspect`](#txpool_inspect) | TxPool | Y | Y | See [experimental mempool](/docs/evm/next/documentation/concepts/mempool) |
+| [`txpool_status`](#txpool_status) | TxPool | Y | Y | See [experimental mempool](/docs/evm/next/documentation/concepts/mempool) |
+| `trace_callMany` | Trace | N | N | Not implemented |
+| `eth_signTypedData` | Eth | W | N | EIP-712 partial |
+| `eth_fillTransaction` | Eth | Y | Y | |
+| `eth_getTransactionLogs` | Eth | Y | Y | **Cosmos-specific** |
+| `miner_start` | Miner | N | N | Returns undefined |
+| `miner_stop` | Miner | N | N | Returns undefined |
+| `miner_setEtherbase` | Miner | N | N | Returns undefined |
+| `miner_setGasPrice` | Miner | N | N | Returns undefined |
+| `miner_setGasLimit` | Miner | N | N | Returns undefined |
+| `miner_setExtra` | Miner | N | N | Returns undefined |
+| `miner_setRecommitInterval` | Miner | N | N | Returns undefined |
+| `miner_getHashrate` | Miner | N | N | Returns undefined |
+| `engine_newPayloadV1` | Engine | N | N | Returns undefined |
+| `engine_newPayloadV2` | Engine | N | N | Returns undefined |
+| `engine_newPayloadV3` | Engine | N | N | Returns undefined |
+| `engine_forkchoiceUpdatedV1` | Engine | N | N | Returns undefined |
+| `engine_forkchoiceUpdatedV2` | Engine | N | N | Returns undefined |
+| `engine_forkchoiceUpdatedV3` | Engine | N | N | Returns undefined |
+| `engine_getPayloadV1` | Engine | N | N | Returns undefined |
+| `engine_getPayloadV2` | Engine | N | N | Returns undefined |
+| `engine_getPayloadV3` | Engine | N | N | Returns undefined |
+| `engine_exchangeCapabilities` | Engine | N | N | Returns undefined |
+| `engine_exchangeTransitionConfigurationV1` | Engine | N | N | Returns undefined |
+| `engine_getPayloadBodiesByHashV1` | Engine | N | N | Returns undefined |
+| `engine_getPayloadBodiesByRangeV1` | Engine | N | N | Returns undefined |
+| `engine_getBlobsV1` | Engine | N | N | Returns undefined |
+| `trace_call` | Trace | N | N | Not implemented |
+| `trace_rawTransaction` | Trace | N | N | Not implemented |
+| `trace_replayBlockTransactions` | Trace | N | N | Not implemented |
+| `trace_replayTransaction` | Trace | N | N | Not implemented |
+| `trace_block` | Trace | N | N | Not implemented |
+| `trace_filter` | Trace | N | N | Not implemented |
+| `trace_get` | Trace | N | N | Not implemented |
+| `trace_transaction` | Trace | N | N | Not implemented |
-Block Number can be entered as a Hex string, `"earliest"`, `"latest"` or `"pending"`.
+ Block Number can be entered as a Hex string, `"earliest"`, `"latest"` or
+ `"pending"`.
 
 **EIP-1559 Support**: Cosmos EVM supports EIP-1559 transaction types with `eth_maxPriorityFeePerGas`, though `eth_feeHistory` is not yet implemented.
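Every method in the table above is called with the same JSON-RPC 2.0 envelope that the curl examples later in this page use. As a minimal illustrative sketch (the helper name is an assumption, not part of any client library), the request body can be built like this:

```python
import json

def jsonrpc_request(method: str, params=None, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body of the shape used by every
    method in the table: {"jsonrpc": "2.0", "method": ..., "params": [...], "id": ...}."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    })

# The same envelope works for any method listed above; only the
# method name and the positional params change.
body = jsonrpc_request("eth_maxPriorityFeePerGas")
print(body)
```

The resulting string is what the curl examples pass via `--data` to the node's HTTP endpoint.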
 **Consensus Differences**: Due to using 'CometBFT' BFT consensus instead of Proof of Work/Stake:
 
-* Uncle-related methods always return 0 or null
-* Block reorganizations are not possible
-* Engine API (`engine_*`) is not applicable - 'CometBFT' handles consensus
+
+- Uncle-related methods always return 0 or null
+- Block reorganizations are not possible
+- Engine API (`engine_*`) is not applicable - 'CometBFT' handles consensus
 
 **Methods Not Implemented**:
 
-* `eth_createAccessList` (EIP-2930) - Access list transactions
-* `eth_feeHistory` - Fee history for EIP-1559
-* `eth_getFilterChanges` / `eth_getFilterLogs` - Filter management issues
-* `eth_fillTransaction` - Transaction filling utility
-* `eth_resend` - Transaction resending utility
-* `personal_sendTransaction` / `personal_ecRecover` - Personal namespace methods
-* `personal_lockAccount` - Always returns false (not supported)
-* `debug_getRaw*` methods - Raw data access methods
-* `debug_printBlock` - Block printing utility
-* `trace_*` methods - Advanced tracing features
-* `engine_*` methods - Post-merge Engine API (uses 'CometBFT' instead)
-* `admin_*` methods - Node administration
-* `les_*` methods - Light Ethereum Subprotocol
-* `clique_*` methods - Proof of Authority consensus
+
+- `eth_createAccessList` (EIP-2930) - Access list transactions
+- `eth_feeHistory` - Fee history for EIP-1559
+- `eth_getFilterChanges` / `eth_getFilterLogs` - Filter management issues
+- `eth_fillTransaction` - Transaction filling utility
+- `eth_resend` - Transaction resending utility
+- `personal_sendTransaction` / `personal_ecRecover` - Personal namespace methods
+- `personal_lockAccount` - Always returns false (not supported)
+- `debug_getRaw*` methods - Raw data access methods
+- `debug_printBlock` - Block printing utility
+- `trace_*` methods - Advanced tracing features
+- `engine_*` methods - Post-merge Engine API (uses 'CometBFT' instead)
+- `admin_*` methods - Node administration
+- `les_*` methods - Light Ethereum Subprotocol
+- `clique_*` methods - Proof of Authority consensus
 
 **Partial Implementations**:
 
-* `eth_getProof` - Requires block height > 2
-* `eth_signTypedData` (EIP-712) - Partial implementation
-* `eth_subscribe` - Works for newHeads, logs, and newPendingTransactions but not syncing
-* `personal_unlockAccount` - Always returns false
-* `debug_traceTransaction` - Has issues with block height
+
+- `eth_getProof` - Requires block height > 2
+- `eth_signTypedData` (EIP-712) - Partial implementation
+- `eth_subscribe` - Works for newHeads, logs, and newPendingTransactions but not syncing
+- `personal_unlockAccount` - Always returns false
+- `debug_traceTransaction` - Has issues with block height
 
 For more details on how Cosmos EVM differs from standard Ethereum implementations, see [Differences from Standard EVMs](/docs/evm/next/documentation/evm-compatibility/overview).
+
 
 ## Examples
 
@@ -327,8 +352,8 @@ Get the web3 client version.
 
 #### Result
 
 ```json
-{"jsonrpc":"2.0","id":1,"result":"Cosmos/0.1.3+/linux/go1.18"}
-````
+{ "jsonrpc": "2.0", "id": 1, "result": "Cosmos/0.1.3+/linux/go1.18" }
+```
 
 #### Client Examples
 
@@ -358,12 +383,16 @@ Returns Keccak-256 (not the standardized SHA3-256) of the given data.
 
 1: input `hexutil.Bytes`
 
- - Required: [Y] Yes
+- Required: [Y] Yes
 
 #### Result
 
 ```json
-{"jsonrpc":"2.0","id":1,"result":"0x1b84adea42d5b7d192fd8a61a85b25abe0757e9a65cab1da470258914053823f"}
+{
+  "jsonrpc": "2.0",
+  "id": 1,
+  "result": "0x1b84adea42d5b7d192fd8a61a85b25abe0757e9a65cab1da470258914053823f"
+}
 ```
 
 #### Client Examples
 
@@ -393,9 +422,9 @@ web3.sha3(input);
 
 Returns the current network id.
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"net_version","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":"8"}
 ```
 
@@ -404,9 +433,9 @@
 Returns the number of peers currently connected to the client.
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":23}
 ```
 
@@ -415,9 +444,9 @@
 Returns if client is actively listening for network connections.
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"net_listening","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":true}
 ```
 
@@ -428,9 +457,9 @@
 Returns the current ethereum protocol version.
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"eth_protocolVersion","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":"0x3f"}
 ```
 
@@ -439,9 +468,9 @@
 The sync status object may need to be different depending on the details of cometbft's sync protocol. However, the 'synced' result is simply a boolean, and can easily be derived from cometbft's internal sync state.
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":false}
 ```
 
@@ -450,9 +479,9 @@
 Returns the current gas price in the default EVM denomination parameter.
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"eth_gasPrice","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":"0x0"}
 ```
 
@@ -461,9 +490,9 @@
 Returns array of all eth accounts.
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"eth_accounts","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":["0x3b7252d007059ffc82d16d022da3cbf9992d2f70","0xddd64b4712f7c8f1ace3c145c950339eddaf221d","0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0"]}
 ```
 
@@ -472,9 +501,9 @@
 Returns the current block height.
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":"0x66"}
 ```
 
@@ -484,15 +513,13 @@
 Returns the account balance for a given account address and Block Number.
 #### Parameters
 
- - Account Address
- - Block Number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md))
-
-
+- Account Address
+- Block Number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md))
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":"0x36354d5575577c8000"}
 ```
 
@@ -502,16 +529,14 @@
 Returns the storage address for a given account address.
 
 #### Parameters
 
- - Account Address
- - Integer of the position in the storage
- - Block Number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md))
-
-
+- Account Address
+- Integer of the position in the storage
+- Block Number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md))
 
 ```shell
-// Request
+# Request
 curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getStorageAt","params":["0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "0", "latest"],"id":1}' -H "Content-Type: application/json" http://localhost:8545
-// Result
+# Result
 {"jsonrpc":"2.0","id":1,"result":"0x0000000000000000000000000000000000000000000000000000000000000000"}
 ```
 
@@ -521,15 +546,13 @@
 Returns the total transaction for a given account address and Block Number.
#### Parameters - - Account Address - - Block Number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md)) - - +- Account Address +- Block Number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md)) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionCount","params":["0x7bf7b17da59880d9bcca24915679668db75f9397", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x8"} ``` @@ -539,14 +562,12 @@ Returns the total transaction count for a given block number. #### Parameters - - Block number - - +- Block number ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockTransactionCountByNumber","params":["0x1"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"difficulty":null,"extraData":"0x0","gasLimit":"0xffffffff","gasUsed":"0x0","hash":"0x8101cc04aea3341a6d4b3ced715e3f38de1e72867d6c0db5f5247d1a42fbb085","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","miner":"0x0000000000000000000000000000000000000000","nonce":null,"number":"0x17d","parentHash":"0x70445488069d2584fea7d18c829e179322e2b2185b25430850deced481ca2e77","sha3Uncles":null,"size":"0x1df","stateRoot":"0x269bb17fe7adb8dd5f15f57b717979f82078d6b7a675c1ba1b0da2d27e415fcc","timestamp":"0x5f5ba97c","totalDifficulty":null,"transactions":[],"transac
tionsRoot":"0x","uncles":[]}} ``` @@ -556,14 +577,12 @@ Returns the total transaction count for a given block hash. #### Parameters - - Block Hash - - +- Block Hash ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockTransactionCountByHash","params":["0x8101cc04aea3341a6d4b3ced715e3f38de1e72867d6c0db5f5247d1a42fbb085"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x3"} ``` @@ -573,15 +592,13 @@ Returns the code for a given account address and Block Number. #### Parameters - - Account Address - - Block Number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md)) - - +- Account Address +- Block Number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md)) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getCode","params":["0x7bf7b17da59880d9bcca24915679668db75f9397", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xef616c92f3cfc9e92dc270d6acff9cea213cecc7020a76ee4395af09bdceb4837a1ebdb5735e11e7d3adb6104e0c3ac55180b4ddf5e54d022cc5e8837f6a4f971b"} ``` @@ -591,21 +608,17 @@ The `sign` method calculates an Ethereum specific signature with: `sign(keccak25 Adding a prefix to the message makes the calculated signature recognizable as an Ethereum specific signature. This prevents misuse where a malicious DApp can sign arbitrary data (e.g. transaction) and use the signature to impersonate the victim. - -The address to sign with must be unlocked. - +The address to sign with must be unlocked. 
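The prefixing step described above can be sketched in Node.js. Only the byte string that the node hashes is built here — keccak256 is omitted because it is not in the Node.js standard library, and the helper name is ours:

```javascript
// Build the EIP-191 "personal message" payload that eth_sign hashes:
// "\x19Ethereum Signed Message:\n" + byte length + message bytes.
function prefixedMessage(messageHex) {
  const bytes = Buffer.from(messageHex.replace(/^0x/, ""), "hex");
  const prefix = Buffer.from(
    `\x19Ethereum Signed Message:\n${bytes.length}`,
    "ascii"
  );
  return Buffer.concat([prefix, bytes]);
}

// "0xdeadbeaf" from the example is 4 bytes, so the prefix ends in "\n4".
const payload = prefixedMessage("0xdeadbeaf");
```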
#### Parameters - - Account Address - - Message to sign - - +- Account Address +- Message to sign ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_sign","params":["0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "0xdeadbeaf"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x909809c76ed2a5d38733de39207d0f411222b9b49c64a192bf649cb13f63f37b45acb4f6939facb4f1c277bc70fb00407564140c0f18600ac44388f2c1dfd1dc1b"} ``` @@ -614,7 +627,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_sign","params":["0x3b7252d00 Signs typed structured data according to [EIP-712](https://eips.ethereum.org/EIPS/eip-712). This method provides a more secure way to sign structured data compared to `eth_sign`. -**Implementation Note**: This method requires a properly formatted domain parameter in the typed data structure. Without the domain, it will return an error: "domain is undefined". + **Implementation Note**: This method requires a properly formatted domain + parameter in the typed data structure. Without the domain, it will return an + error: "domain is undefined". 
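Given the behavior noted above, a client can fail fast before issuing the RPC call. This guard mirrors the documented error message; the helper itself is illustrative, not part of the RPC API:

```javascript
// Reject typed data that is missing the required EIP-712 domain, reproducing
// the "domain is undefined" error eth_signTypedData would otherwise return.
function assertSignableTypedData(typedData) {
  if (
    typedData === null ||
    typeof typedData !== "object" ||
    typedData.domain === undefined
  ) {
    throw new Error("domain is undefined");
  }
  return true;
}
```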
#### Parameters @@ -627,10 +642,10 @@ Signs typed structured data according to [EIP-712](https://eips.ethereum.org/EIP - `message`: Message to sign ```shell -// Request - Note: proper domain structure is required +/ Request - Note: proper domain structure is required curl -X POST --data '{"jsonrpc":"2.0","method":"eth_signTypedData","params":["0x3b7252d007059ffc82d16d022da3cbf9992d2f70", {"domain":{"name":"Example","version":"1","chainId":1,"verifyingContract":"0xCcCCccccCCCCcCCCCCCcCcCccCcCCCcCcccccccC"},"types":{"EIP712Domain":[{"name":"name","type":"string"},{"name":"version","type":"string"},{"name":"chainId","type":"uint256"},{"name":"verifyingContract","type":"address"}],"Person":[{"name":"name","type":"string"},{"name":"wallet","type":"address"}]},"primaryType":"Person","message":{"name":"Bob","wallet":"0xbBbBBBBbbBBBbbbBbbBbbbbBBbBbbbbBbBbbBBbB"}}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result -{"jsonrpc":"2.0","id":1,"result":"0x..."} // Returns signature +/ Result +{"jsonrpc":"2.0","id":1,"result":"0x..."} / Returns signature ``` ### `eth_sendTransaction` @@ -639,22 +654,20 @@ Sends transaction from given account to a given account. #### Parameters - - Object containing: - - `from`: `DATA`, 20 Bytes - The address the transaction is send from. - `to`: `DATA`, 20 Bytes - (optional when creating new contract) The address the transaction is directed to. - `gas`: QUANTITY - (optional, default: 90000) Integer of the gas provided for the transaction execution. It will return unused gas. - `gasPrice`: QUANTITY - (optional, default: To-Be-Determined) Integer of the gasPrice used for each paid gas - `value`: QUANTITY - value sent with this transaction - `data`: `DATA` - The compiled code of a contract OR the hash of the invoked method signature and encoded parameters. For details see Ethereum Contract ABI. - `nonce`: QUANTITY - (optional) Integer of a nonce. 
This allows to overwrite your own pending transactions that use the same nonce. - +- Object containing: + `from`: `DATA`, 20 Bytes - The address the transaction is sent from. + `to`: `DATA`, 20 Bytes - (optional when creating new contract) The address the transaction is directed to. + `gas`: QUANTITY - (optional, default: 90000) Integer of the gas provided for the transaction execution. It will return unused gas. + `gasPrice`: QUANTITY - (optional, default: To-Be-Determined) Integer of the gasPrice used for each paid gas + `value`: QUANTITY - value sent with this transaction + `data`: `DATA` - The compiled code of a contract OR the hash of the invoked method signature and encoded parameters. For details see Ethereum Contract ABI. + `nonce`: QUANTITY - (optional) Integer of a nonce. This allows you to overwrite your own pending transactions that use the same nonce. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_sendTransaction","params":[{"from":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "to":"0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "value":"0x16345785d8a0000", "gasLimit":"0x5208", "gasPrice":"0x55ae82600"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x33653249db68ebe5c7ae36d93c9b2abc10745c80a72f591e296f598e2d4709f6"} ``` @@ -664,14 +677,12 @@ Creates new message call transaction or a contract creation for signed transacti #### Parameters - - The signed transaction data - - +- The signed transaction data ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_sendRawTransaction","params":["0xf9ff74c86aefeb5f6019d77280bbb44fb695b4d45cfe97e6eed7acd62905f4a85034d5c68ed25a2e7a8eeb9baf1b8401e4f865d92ec48c1763bf649e354d900b1c"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0000000000000000000000000000000000000000000000000000000000000000"} ``` @@ 
-681,23 +692,21 @@ Executes a new message call immediately without creating a transaction on the bl #### Parameters - - Object containing: - - `from`: `DATA`, 20 Bytes - (optional) The address the transaction is sent from. - `to`: `DATA`, 20 Bytes - The address the transaction is directed to. - `gas`: QUANTITY - gas provided for the transaction execution. eth\_call consumes zero gas, but this parameter may be needed by some executions. - `gasPrice`: QUANTITY - gasPrice used for each paid gas - `value`: QUANTITY - value sent with this transaction - `data`: `DATA` - (optional) Hash of the method signature and encoded parameters. For details see Ethereum Contract ABI in the Solidity documentation - - - Block number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md)) +- Object containing: + `from`: `DATA`, 20 Bytes - (optional) The address the transaction is sent from. + `to`: `DATA`, 20 Bytes - The address the transaction is directed to. + `gas`: QUANTITY - gas provided for the transaction execution. eth_call consumes zero gas, but this parameter may be needed by some executions. + `gasPrice`: QUANTITY - gasPrice used for each paid gas + `value`: QUANTITY - value sent with this transaction + `data`: `DATA` - (optional) Hash of the method signature and encoded parameters. 
For details see Ethereum Contract ABI in the Solidity documentation +- Block number or Block Hash ([EIP-1898](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1898.md)) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_call","params":[{"from":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "to":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d", "gas":"0x5208", "gasPrice":"0x55ae82600", "value":"0x16345785d8a0000", "data": "0xd46e8dd67c5d32be8d46e8dd67c5d32be8058bb8eb970870f072445675058bb8eb970870f072445675"}, "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x"} ``` @@ -710,6 +719,7 @@ Creates an access list for a transaction. Access lists specify which addresses a #### Parameters - Object containing transaction data: + - `from`: `DATA`, 20 Bytes - (optional) The address the transaction is sent from - `to`: `DATA`, 20 Bytes - The address the transaction is directed to - `gas`: `QUANTITY` - (optional) Gas provided for execution @@ -730,6 +740,7 @@ Creates an access list for a transaction. 
Access lists specify which addresses a #### Client Examples Shell HTTP + ```shell curl -X POST --data '{ "jsonrpc":"2.0", @@ -746,6 +757,7 @@ curl -X POST --data '{ ``` Websocket + ```shell wscat -c ws://localhost:8546 -x '{ "jsonrpc": "2.0", @@ -760,12 +772,16 @@ wscat -c ws://localhost:8546 -x '{ ``` Javascript Console + ```javascript -await provider.send("eth_createAccessList", [{ - from: "0x8ba1f109551bD432803012645Hac136c5dd7E3D40", - to: "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2", - data: "0xa9059cbb000000000000000000000000a0b86a33e6d1cde81f9c1ce51dd74f3d1111bb9e0000000000000000000000000000000000000000000000000de0b6b3a7640000" -}, "latest"]); +await provider.send("eth_createAccessList", [ + { + from: "0x8ba1f109551bD432803012645Hac136c5dd7E3D40", + to: "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2", + data: "0xa9059cbb000000000000000000000000a0b86a33e6d1cde81f9c1ce51dd74f3d1111bb9e0000000000000000000000000000000000000000000000000de0b6b3a7640000", + }, + "latest", +]); ``` #### Result @@ -794,14 +810,14 @@ await provider.send("eth_createAccessList", [{ #### Usage Example ```javascript -// Generate access list for optimization +/ Generate access list for optimization const accessList = await provider.send("eth_createAccessList", [{ to: contractAddress, data: contract.interface.encodeFunctionData("complexFunction", [arg1, arg2]), gasPrice: "0x" + gasPrice.toString(16) }, "latest"]); -// Use access list in actual transaction for gas savings +/ Use access list in actual transaction for gas savings const tx = await contract.complexFunction(arg1, arg2, { accessList: accessList.accessList, gasLimit: accessList.gasUsed @@ -814,18 +830,16 @@ Returns an estimate value of the gas required to send the transaction. #### Parameters - - Object containing: - - `from`: `DATA`, 20 Bytes - The address the transaction is send from. - `to`: `DATA`, 20 Bytes - (optional when creating new contract) The address the transaction is directed to. 
- `value`: `QUANTITY` - value sent with this transaction - +- Object containing: + `from`: `DATA`, 20 Bytes - The address the transaction is sent from. + `to`: `DATA`, 20 Bytes - (optional when creating new contract) The address the transaction is directed to. + `value`: `QUANTITY` - value sent with this transaction ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_estimateGas","params":[{"from":"0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "to":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "value":"0x16345785d8a00000"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x1199b"} ``` @@ -835,15 +849,13 @@ Returns information about a block by block number. #### Parameters - - Block Number - - If true it returns the full transaction objects, if false only the hashes of the transactions. - - +- Block Number +- If true it returns the full transaction objects, if false only the hashes of the transactions. 
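Block-number parameters like the one above are passed either as a named tag or as a hex `QUANTITY`. A small sketch of the encoding — the helper name is ours:

```javascript
// Encode a block height for eth_getBlockByNumber and similar methods:
// pass named tags through, hex-encode integers without leading zeros.
function blockParam(block) {
  if (block === "latest" || block === "earliest" || block === "pending") {
    return block;
  }
  return "0x" + BigInt(block).toString(16);
}
```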
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1", false],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"difficulty":null,"extraData":"0x0","gasLimit":"0xffffffff","gasUsed":null,"hash":"0xabac6416f737a0eb54f47495b60246d405d138a6a64946458cf6cbeae0d48465","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","miner":"0x0000000000000000000000000000000000000000","nonce":null,"number":"0x1","parentHash":"0x","sha3Uncles":null,"size":"0x9b","stateRoot":"0x","timestamp":"0x5f5bd3e5","totalDifficulty":null,"transactions":[],"transactionsRoot":"0x","uncles":[]}} ``` @@ -853,15 +865,13 @@ Returns the block info given the hash found in the command above and a bool. #### Parameters - - Hash of a block. - - If true it returns the full transaction objects, if false only the hashes of the transactions. - - +- Hash of a block. +- If true it returns the full transaction objects, if false only the hashes of the transactions. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockByHash","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4", false],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"difficulty":null,"extraData":"0x0","gasLimit":"0xffffffff","gasUsed":null,"hash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","logsBloom":"0x00000000100000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000040000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000002000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000","miner":"0x0000000000000000000000000000000000000000","nonce":null,"number":"0xc","parentHash":"0x404e58f31a9ede1b614b98701d6b0fbf1450f186842dbcf6426dd16811a5ca0d","sha3Uncles":null,"size":"0x307","stateRoot":"0x599ccdb111fc62c6398dc39be957df8e97bf8ab72ce6c06ff10641a92b754627","timestamp":"0x5f5fdbbd","totalDifficulty":null,"transactions":["0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615"],"transactionsRoot":"0x4764dba431128836fa919b83d314ba9cc000e75f38e1c31a60484409acea777b","uncles":[]}} ``` @@ -871,14 +881,12 @@ Returns transaction details given the ethereum tx something. 
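Fields such as `gasPrice` in the transaction objects above are hex-encoded wei; `0x3b9aca00`, for instance, is 1 gwei. A sketch of the conversion (helper name is ours):

```javascript
// Convert a hex wei amount to whole gwei (1 gwei = 1e9 wei), truncating any
// sub-gwei remainder.
function weiHexToGwei(hex) {
  return BigInt(hex) / 1000000000n;
}

const gwei = weiHexToGwei("0x3b9aca00");
```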
#### Parameters - - hash of a transaction - - +- hash of a transaction ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionByHash","params":["0xec5fa15e1368d6ac314f9f64118c5794f076f63c02e66f97ea5fe1de761a8973"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"blockHash":"0x7a7398cc11d9c4c8e6f53e0c73824297aceafdab62db9e4b867a0da694384864","blockNumber":"0x188","from":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70","gas":"0x147ee","gasPrice":"0x3b9aca00","hash":"0xec5fa15e1368d6ac314f9f64118c5794f076f63c02e66f97ea5fe1de761a8973","input":"0x6dba746c","nonce":"0x18","to":"0xa655256f589060437e5ffe2246dec385d040f148","transactionIndex":"0x0","value":"0x0","v":"0xa96","r":"0x6db399d694a452fb4106419140a6e5dbbe6817743a0f6f695a651e6576e59a5e","s":"0x25dd6ab1f936d0280d2fed0caeb0ebe5b9a46de6d8cb08ad8fd2c88deb55fc31"}} ``` @@ -888,15 +896,13 @@ Returns transaction details given the block hash and the transaction index. #### Parameters - - Hash of a block. - - Transaction index position. - - +- Hash of a block. +- Transaction index position. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionByBlockHashAndIndex","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"blockHash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","blockNumber":"0xc","from":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d","gas":"0x4c4b40","gasPrice":"0x3b9aca00","hash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","input":"0x4f2be91f","nonce":"0x0","to":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","transactionIndex":"0x0","value":"0x0","v":"0xa96","r":"0xced57d973e58b0f634f776d57daf41d3d3387ceb347a3a72ca0746e5ec2b709e","s":"0x384e89e209a5eb147a2bac3a4e399507400ac7b29cd155531f9d6203a89db3f2"}} ``` @@ -913,14 +919,12 @@ Note: Tx Code from CometBFT and the Ethereum receipt status are switched: #### Parameters - - Hash of a transaction - - +- Hash of a transaction ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionReceipt","params":["0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea614"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result 
{"jsonrpc":"2.0","id":1,"result":{"blockHash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","blockNumber":"0xc","contractAddress":"0x0000000000000000000000000000000000000000","cumulativeGasUsed":null,"from":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d","gasUsed":"0x5289","logs":[{"address":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","topics":["0x64a55044d1f2eddebe1b90e8e2853e8e96931cefadbfa0b2ceb34bee36061941"],"data":"0x0000000000000000000000000000000000000000000000000000000000000002","blockNumber":"0xc","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0","blockHash":"0x0000000000000000000000000000000000000000000000000000000000000000","logIndex":"0x0","removed":false},{"address":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","topics":["0x938d2ee5be9cfb0f7270ee2eff90507e94b37625d9d2b3a61c97d30a4560b829"],"data":"0x0000000000000000000000000000000000000000000000000000000000000002","blockNumber":"0xc","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0","blockHash":"0x0000000000000000000000000000000000000000000000000000000000000000","logIndex":"0x1","removed":false}],"logsBloom":"0x00000000100000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000040000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000002000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000","status":"0x1","to":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0"}} ``` @@ -930,14 +934,12 @@ Create new filter using topics of some kind. 
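Topic values in filter objects are 32-byte, left-padded hex strings, as in the `eth_newFilter` example below. A sketch of the padding step — the helper name is ours:

```javascript
// Left-pad a short hex value to the 64-character (32-byte) topic format
// expected by eth_newFilter and eth_getLogs.
function toTopic(hex) {
  return "0x" + hex.replace(/^0x/, "").padStart(64, "0");
}

const topic = toTopic("0x12341234");
```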
#### Parameters - - hash of a transaction - - +- The filter options object (`fromBlock`, `toBlock`, `address`, `topics`) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newFilter","params":[{"topics":["0x0000000000000000000000000000000000000000000000000000000012341234"]}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xdc714a4a2e3c39dc0b0b84d66a3ccb00"} ``` @@ -946,9 +948,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newFilter","params":[{"topic Creates a filter in the node, to notify when a new block arrives. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newBlockFilter","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x3503de5f0c766c68f78a03a3b05036a5"} ``` @@ -957,9 +959,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newBlockFilter","params":[], Creates a filter in the node, to notify when new pending transactions arrive. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newPendingTransactionFilter","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x9daacfb5893d946997d3801ea18e9902"} ``` @@ -969,14 +971,12 @@ Removes the filter with the given filter id. 
Returns true if the filter was succ #### Parameters - - The filter id - - +- The filter id ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_uninstallFilter","params":["0xb91b6608b61bf56288a661a1bd5eb34a"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":true} ``` @@ -986,14 +986,12 @@ Polling method for a filter, which returns an array of logs which occurred since #### Parameters - - The filter id - - +- The filter id ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getFilterChanges","params":["0x127e9eca4f7751fb4e5cb5291ad8b455"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":["0xc6f08d183a81e149896fc5317c872f9092068e88e956ca1864e9bd4c81c09b44","0x3ca6dfb5be15549d721d1b3d10c1bec50ed6217c9ac7b61df361fac9692a27e5","0x776fffac134171acb1ebf2e59856625501ad5ccc5c4c8fe0359e0d4dff8919f2","0x84123103704dbd738c089276ab2b04b5936330b24f6e78453c4ba8bf4848aaf9","0xffddbe5bd8e8aa41e44002daa9ea89ade9e6980a0d83f51d104cf16498827eca","0x53430e49963e8ae32605d8f22dec2e757a691e6436d593854ca4d9383eeab86a","0x975948058c9351a91fbec332ca00dda39d1a919f5f16b996a4c7e30c38ba423b","0x619e37e32024c8efef7f7220e6caff4ee1d682ea78b2ac91e0a6b30850dc0677","0x31a5d985a40d08303ac68000ce008df512bcd1a911c497415c97f0624b4a271a","0x91dcf1fce4503a8dbb3e6fb61073f25cd31d69c766ecba639fefde4436e59d07","0x606d9e0143cfdb410a6812c590a8135b5c6b5c59eec26d760d5cd930aa47257d","0xd3c00b859b29b20ba654415eef648ef58251389c73a138580db87675b0d5465f","0x954391f0eb50888be90489898016ebb54f750f612f3adec2a00854955d5e52d8","0x698905f06aff921a9e9fcef39b8b0d107747c3e6204d2ea79cf4c12debf8d253","0x9fcafec5721938a06eb8e2951ede4b6ef8fae54a8c8f85f3166ec9782a0032b5","0xaec6d3364e47a5716ba69e4705f3c705d017f81298859589591183bfea87be7a","0x91bf2ee13319b6eaca96ed89c126437b66c4df1b13560c6a9bb18556ee3b7e1f","0x4f426dc1fc0ea8149052033
065b237892d2d34927b2d558ab50c5a7fb98d6e79","0xdd809fb07e5aab638fef5311371b4e2b27c9c9a6183fde0cdd2b7724f6d2a89b","0x7e12fc92ab953e233a304959a2a8474d96195e71efd9388fdceb1326a577811a","0x30618ef6b490c3cc9979c47163459db37c1a1e0aa5793c56accd417f9d89973b","0x614609f06ee24bae7408e45895b1a25e6b19a8159aeea7a95c9d1339d9ba286f","0x115ddc6d533620040791d241f01f1c5ae3d9d1a8f64b15af5e9793e4d9096e22","0xb7458c9323beeca2cd54f32a6af5671f3cd5a7a251aed9d82bdd6ebe5f56305b","0x573dd48a5ba7bf4cc3d49597cd7419f75ecc9897258f1ebadebd670446d0d358","0xcb6670918439f9698413b53f3b5336d82ca4be152fdefaacf45e052fff6262fc","0xf3fe2a8945abafd269ab97bfdc80b3dbff2202ffdce59a227f952874b966b230","0x989980707007533cc0840a079f77f261a2e818abae1a1ffd3af02f3fff1d35fd","0x886b6ae365fec996be8a9a2c31cf4cda97ff8352908be2c83f17abd66ef1591e","0xfd90df68706ef95a62b317de93d6899a9bd6c80416e42d007f5c30fcdedfce24","0x7af8491fbb0373886d9032bb74e0ef52ed9e100f260b79bd15f46126b38cbede","0x91d1e2cd55533cf7dd5de86c9aa73295e811b1279be193d429bbd6ba83810e16","0x6b65b3128c2104005a04923288fe2aa33a2477a4962bef70532f94cab582f2a7"]} ``` @@ -1003,14 +1001,12 @@ Returns an array of all logs matching filter with given id. #### Parameters - - `QUANTITY` - The filter id - - +- `QUANTITY` - The filter id ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getFilterLogs","params":["0x127e9eca4f7751fb4e5cb5291ad8b455"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"filter 0x35b64c227ce30e84fc5c7bd347be380e doesn't have a LogsSubscription type: got 5"}} ``` @@ -1020,20 +1016,18 @@ Returns an array of all logs matching a given filter object. #### Parameters - - Object containing: - - `fromBlock`: `QUANTITY|TAG` - (optional, default: `"latest"`) Integer block number, or `"latest"` for the last mined block or `"pending"`, `"earliest"` for not yet mined transactions. 
- `toBlock`: `QUANTITY|TAG` - (optional, default: `"latest"`) Integer block number, or `"latest"` for the last mined block or `"pending"`, `"earliest"` for not yet mined transactions. - `address`: `DATA|Array`, 20 Bytes - (optional) Contract address or a list of addresses from which logs should originate. - `topics`: Array of `DATA`, - (optional) Array of 32 Bytes `DATA` topics. Topics are order-dependent. Each topic can also be an array of `DATA` with “or” options. - `blockhash`: (optional, future) With the addition of [EIP-234](https://eips.ethereum.org/EIPS/eip-234), `blockHash` will be a new filter option which restricts the logs returned to the single block with the 32-byte hash `blockHash`. Using `blockHash` is equivalent to `fromBlock` = `toBlock` = the block number with hash `blockHash`. If `blockHash` is present in in the filter criteria, then neither `fromBlock` nor `toBlock` are allowed. - +- Object containing: + `fromBlock`: `QUANTITY|TAG` - (optional, default: `"latest"`) Integer block number, or `"latest"` for the last mined block or `"pending"`, `"earliest"` for not yet mined transactions. + `toBlock`: `QUANTITY|TAG` - (optional, default: `"latest"`) Integer block number, or `"latest"` for the last mined block or `"pending"`, `"earliest"` for not yet mined transactions. + `address`: `DATA|Array`, 20 Bytes - (optional) Contract address or a list of addresses from which logs should originate. + `topics`: Array of `DATA`, - (optional) Array of 32 Bytes `DATA` topics. Topics are order-dependent. Each topic can also be an array of `DATA` with “or” options. + `blockhash`: (optional, future) With the addition of [EIP-234](https://eips.ethereum.org/EIPS/eip-234), `blockHash` will be a new filter option which restricts the logs returned to the single block with the 32-byte hash `blockHash`. Using `blockHash` is equivalent to `fromBlock` = `toBlock` = the block number with hash `blockHash`. 
If `blockHash` is present in the filter criteria, then neither `fromBlock` nor `toBlock` are allowed. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getLogs","params":[{"topics":["0x775a94827b8fd9b519d36cd827093c664f93347070a554f65e4a6f56cd738898","0x0000000000000000000000000000000000000000000000000000000000000011"], "fromBlock":"latest"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":[]} ``` @@ -1042,9 +1036,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getLogs","params":[{"topics" Returns the account the mining rewards will be sent to. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_coinbase","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x7cB61D4117AE31a12E393a1Cfa3BaC666481D02E"} ``` @@ -1053,13 +1047,14 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_coinbase","params":[],"id":1 Returns whether the client is actively mining new blocks. -Always returns `false` as Cosmos EVM uses 'CometBFT' consensus instead of mining. + Always returns `false` as Cosmos EVM uses 'CometBFT' consensus instead of + mining. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_mining","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":false} ``` @@ -1068,13 +1063,14 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_mining","params":[],"id":1}' Returns the number of hashes per second that the node is mining with. -Always returns `0x0` as Cosmos EVM uses 'CometBFT' consensus instead of mining. + Always returns `0x0` as Cosmos EVM uses 'CometBFT' consensus instead of + mining. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_hashrate","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -1083,9 +1079,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_hashrate","params":[],"id":1 Returns the chain ID used for signing replay-protected transactions. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x40000"} ``` @@ -1094,7 +1090,8 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1} Returns the number of uncles in a block matching the given block hash. -Always returns `0x0` as Cosmos EVM does not have uncles due to using 'CometBFT' consensus. + Always returns `0x0` as Cosmos EVM does not have uncles due to using + 'CometBFT' consensus. #### Parameters @@ -1102,9 +1099,9 @@ Always returns `0x0` as Cosmos EVM does not have uncles due to using 'CometBFT' - Block hash ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleCountByBlockHash","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -1113,7 +1110,8 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleCountByBlockHash","p Returns the number of uncles in a block matching the given block number. -Always returns `0x0` as Cosmos EVM does not have uncles due to using 'CometBFT' consensus. + Always returns `0x0` as Cosmos EVM does not have uncles due to using + 'CometBFT' consensus. 
#### Parameters @@ -1121,9 +1119,9 @@ Always returns `0x0` as Cosmos EVM does not have uncles due to using 'CometBFT' - Block number ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleCountByBlockNumber","params":["latest"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -1132,7 +1130,8 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleCountByBlockNumber", Returns information about an uncle by block hash and uncle index position. -Always returns `null` as Cosmos EVM does not have uncles due to using 'CometBFT' consensus. + Always returns `null` as Cosmos EVM does not have uncles due to using + 'CometBFT' consensus. #### Parameters @@ -1141,9 +1140,9 @@ Always returns `null` as Cosmos EVM does not have uncles due to using 'CometBFT' - Uncle index position ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleByBlockHashAndIndex","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1152,7 +1151,8 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleByBlockHashAndIndex" Returns information about an uncle by block number and uncle index position. -Always returns `null` as Cosmos EVM does not have uncles due to using 'CometBFT' consensus. + Always returns `null` as Cosmos EVM does not have uncles due to using + 'CometBFT' consensus. 
#### Parameters @@ -1161,9 +1161,9 @@ Always returns `null` as Cosmos EVM does not have uncles due to using 'CometBFT' - Uncle index position ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleByBlockNumberAndIndex","params":["0x1", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1177,9 +1177,9 @@ Returns information about a transaction by block number and transaction index po - Transaction index position ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionByBlockNumberAndIndex","params":["0x1", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"blockHash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","blockNumber":"0x1","from":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d","gas":"0x4c4b40","gasPrice":"0x3b9aca00","hash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","input":"0x4f2be91f","nonce":"0x0","to":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","transactionIndex":"0x0","value":"0x0","v":"0xa96","r":"0xced57d973e58b0f634f776d57daf41d3d3387ceb347a3a72ca0746e5ec2b709e","s":"0x384e89e209a5eb147a2bac3a4e399507400ac7b29cd155531f9d6203a89db3f2"}} ``` @@ -1188,9 +1188,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionByBlockNumberA Returns the current maxPriorityFeePerGas per gas in wei. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_maxPriorityFeePerGas","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -1203,9 +1203,9 @@ Returns all transaction receipts for a given block. 
- Block number or block hash ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockReceipts","params":["latest"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":[{"blockHash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","blockNumber":"0xc","contractAddress":"0x0000000000000000000000000000000000000000","cumulativeGasUsed":null,"from":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d","gasUsed":"0x5289","logs":[],"logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","status":"0x1","to":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0"}]} ``` @@ -1218,9 +1218,9 @@ Returns the logs for a specific transaction. 
- Transaction hash ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionLogs","params":["0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":[{"address":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","topics":["0x64a55044d1f2eddebe1b90e8e2853e8e96931cefadbfa0b2ceb34bee36061941"],"data":"0x0000000000000000000000000000000000000000000000000000000000000002","blockNumber":"0xc","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0","blockHash":"0x0000000000000000000000000000000000000000000000000000000000000000","logIndex":"0x0","removed":false}]} ``` @@ -1233,9 +1233,9 @@ Fills the defaults (nonce, gas, gasPrice or 1559 fields) on a given unsigned tra - Transaction object with optional fields ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_fillTransaction","params":[{"from":"0x0123456789012345678901234567890123456789","to":"0x0123456789012345678901234567890123456789","value":"0x1"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"raw":"0x...","tx":{"nonce":"0x0","gasPrice":"0x3b9aca00","gas":"0x5208","to":"0x0123456789012345678901234567890123456789","value":"0x1","input":"0x","v":"0x0","r":"0x0","s":"0x0","hash":"0x..."}}} ``` @@ -1244,7 +1244,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_fillTransaction","params":[{ Resends a transaction with updated gas parameters. Removes the given transaction from the pool and reinserts it with the new gas price and limit. -**Implementation Note**: This method requires a transaction `nonce` parameter in the transaction object. Without the nonce, it will return an error: "missing transaction nonce in transaction spec". 
+ **Implementation Note**: This method requires a transaction `nonce` parameter + in the transaction object. Without the nonce, it will return an error: + "missing transaction nonce in transaction spec". #### Parameters @@ -1254,10 +1256,10 @@ Resends a transaction with updated gas parameters. Removes the given transaction - New gas limit (hex) ```shell -// Request - Note the required 'nonce' field in the transaction object +/ Request - Note the required 'nonce' field in the transaction object curl -X POST --data '{"jsonrpc":"2.0","method":"eth_resend","params":[{"from":"0x0123456789012345678901234567890123456789","to":"0x0123456789012345678901234567890123456789","value":"0x1","gas":"0x5208","gasPrice":"0x3b9aca00","nonce":"0x0"},"0x3b9aca01","0x5209"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result -{"jsonrpc":"2.0","id":1,"result":"0x..."} // Returns transaction hash +/ Result +{"jsonrpc":"2.0","id":1,"result":"0x..."} / Returns transaction hash ``` ### `eth_getProof` @@ -1270,14 +1272,14 @@ Returns the account- and storage-values of the specified account including the M #### Parameters - - Address of account or contract - - Array of storage positions (32-byte storage slot) - - Block Number (must be a specific number, not "latest") +- Address of account or contract +- Array of storage positions (32-byte storage slot) +- Block Number (must be a specific number, not "latest") ```shell -// Request - Note: using a specific block number instead of "latest" +/ Request - Note: using a specific block number instead of "latest" curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getProof","params":["0x1234567890123456789012345678901234567890",["0x0000000000000000000000000000000000000000000000000000000000000000","0x0000000000000000000000000000000000000000000000000000000000000001"],"0x1"],"id":1}' -H "Content-type:application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc": "2.0", "id": 1, "result": {"address": 
"0x1234567890123456789012345678901234567890", "accountProof": ["0xf90211a090dcaf88c40c7bbc95a912cbdde67c175767b31173df9ee4b0d733bfdd511c43a0babe369f6b12092f49181ae04ca173fb68d1a5456f18d20fa32cba73954052bda0473ecf8a7e36a829e75039a3b055e51b8332cbf03324ab4af2066bbd6fbf0021a0bbda34753d7aa6c38e603f360244e8f59611921d9e1f128372fec0d586d4f9e0a04e44caecff45c9891f74f6a2156735886eedf6f1a733628ebc802ec79d844648a0a5f3f2f7542148c973977c8a1e154c4300fec92f755f7846f1b734d3ab1d90e7a0e823850f50bf72baae9d1733a36a444ab65d0a6faaba404f0583ce0ca4dad92da0f7a00cbe7d4b30b11faea3ae61b7f1f2b315b61d9f6bd68bfe587ad0eeceb721a07117ef9fc932f1a88e908eaead8565c19b5645dc9e5b1b6e841c5edbdfd71681a069eb2de283f32c11f859d7bcf93da23990d3e662935ed4d6b39ce3673ec84472a0203d26456312bbc4da5cd293b75b840fc5045e493d6f904d180823ec22bfed8ea09287b5c21f2254af4e64fca76acc5cd87399c7f1ede818db4326c98ce2dc2208a06fc2d754e304c48ce6a517753c62b1a9c1d5925b89707486d7fc08919e0a94eca07b1c54f15e299bd58bdfef9741538c7828b5d7d11a489f9c20d052b3471df475a051f9dd3739a927c89e357580a4c97b40234aa01ed3d5e0390dc982a7975880a0a089d613f26159af43616fd9455bb461f4869bfede26f2130835ed067a8b967bfb80", 
"0xf90211a0395d87a95873cd98c21cf1df9421af03f7247880a2554e20738eec2c7507a494a0bcf6546339a1e7e14eb8fb572a968d217d2a0d1f3bc4257b22ef5333e9e4433ca012ae12498af8b2752c99efce07f3feef8ec910493be749acd63822c3558e6671a0dbf51303afdc36fc0c2d68a9bb05dab4f4917e7531e4a37ab0a153472d1b86e2a0ae90b50f067d9a2244e3d975233c0a0558c39ee152969f6678790abf773a9621a01d65cd682cc1be7c5e38d8da5c942e0a73eeaef10f387340a40a106699d494c3a06163b53d956c55544390c13634ea9aa75309f4fd866f312586942daf0f60fb37a058a52c1e858b1382a8893eb9c1f111f266eb9e21e6137aff0dddea243a567000a037b4b100761e02de63ea5f1fcfcf43e81a372dafb4419d126342136d329b7a7ba032472415864b08f808ba4374092003c8d7c40a9f7f9fe9cc8291f62538e1cc14a074e238ff5ec96b810364515551344100138916594d6af966170ff326a092fab0a0d31ac4eef14a79845200a496662e92186ca8b55e29ed0f9f59dbc6b521b116fea090607784fe738458b63c1942bba7c0321ae77e18df4961b2bc66727ea996464ea078f757653c1b63f72aff3dcc3f2a2e4c8cb4a9d36d1117c742833c84e20de994a0f78407de07f4b4cb4f899dfb95eedeb4049aeb5fc1635d65cf2f2f4dfd25d1d7a0862037513ba9d45354dd3e36264aceb2b862ac79d2050f14c95657e43a51b85c80", "0xf90171a04ad705ea7bf04339fa36b124fa221379bd5a38ffe9a6112cb2d94be3a437b879a08e45b5f72e8149c01efcb71429841d6a8879d4bbe27335604a5bff8dfdf85dcea00313d9b2f7c03733d6549ea3b810e5262ed844ea12f70993d87d3e0f04e3979ea0b59e3cdd6750fa8b15164612a5cb6567cdfb386d4e0137fccee5f35ab55d0efda0fe6db56e42f2057a071c980a778d9a0b61038f269dd74a0e90155b3f40f14364a08538587f2378a0849f9608942cf481da4120c360f8391bbcc225d811823c6432a026eac94e755534e16f9552e73025d6d9c30d1d7682a4cb5bd7741ddabfd48c50a041557da9a74ca68da793e743e81e2029b2835e1cc16e9e25bd0c1e89d4ccad6980a041dda0a40a21ade3a20fcd1a4abb2a42b74e9a32b02424ff8db4ea708a5e0fb9a09aaf8326a51f613607a8685f57458329b41e938bb761131a5747e066b81a0a16808080a022e6cef138e16d2272ef58434ddf49260dc1de1f8ad6dfca3da5d2a92aaaadc58080", "0xf851808080a009833150c367df138f1538689984b8a84fc55692d3d41fe4d1e5720ff5483a6980808080808080808080a0a319c1c415b271afc0adcb664e67738d103ac168e0bc0b7bd2da7966165cb9518080"], 
"balance": "0x0", "codeHash": "0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470", "nonce": "0x0", "storageHash": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421", "storageProof": [{"key": "0x0000000000000000000000000000000000000000000000000000000000000000", "value": "0x0", "proof": []}, {"key": "0x0000000000000000000000000000000000000000000000000000000000000001", "value": "0x0", "proof": []}]}} ``` @@ -1291,15 +1293,13 @@ subscribe using JSON-RPC notifications. This allows clients to wait for events i #### Parameters - - Subscription Name - - Optional Arguments - - +- Subscription Name +- Optional Arguments ```shell -// Request +/ Request {"id": 1, "method": "eth_subscribe", "params": ["newHeads", {"includeTransactions": true}]} -// Result +/ Result < {"jsonrpc":"2.0","result":"0x34da6f29e3e953af4d0c7c58658fd525","id":1} ``` @@ -1309,14 +1309,12 @@ Unsubscribe from an event using the subscription id #### Parameters - - Subscription ID - - +- Subscription ID ```shell -// Request +/ Request {"id": 1, "method": "eth_unsubscribe", "params": ["0x34da6f29e3e953af4d0c7c58658fd525"]} -// Result +/ Result {"jsonrpc":"2.0","result":true,"id":1} ``` @@ -1324,96 +1322,84 @@ Unsubscribe from an event using the subscription id ### `personal_importRawKey` - -**Private**: Requires authentication. - +**Private**: Requires authentication. Imports the given unencrypted private key (hex encoded string) into the key store, encrypting it with the passphrase. Returns the address of the new account. -**Implementation Note**: The private key must be a valid 64-character hex string (32 bytes) without the "0x" prefix. Invalid hex characters will return an error: "invalid hex character 'x' in private key". + **Implementation Note**: The private key must be a valid 64-character hex + string (32 bytes) without the "0x" prefix. Invalid hex characters will return + an error: "invalid hex character 'x' in private key". 
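The 64-hex-character constraint described in the note above can be checked client-side before calling `personal_importRawKey`, avoiding a round trip that would only fail. A minimal sketch; the helper name and regex are illustrative, not part of the API:

```javascript
// Validate a raw private key for personal_importRawKey:
// exactly 64 hex characters (32 bytes), with no "0x" prefix.
function isValidRawKey(key) {
  return typeof key === "string" && /^[0-9a-fA-F]{64}$/.test(key);
}

// Key taken from the request example above.
console.log(isValidRawKey("c5bd76cd0cd948de17a31261567d219576e992d9066fe1a6bca97496dec634e2")); // true
console.log(isValidRawKey("0xc5bd76cd0cd948de17a31261567d219576e992d9066fe1a6bca97496dec634e2")); // false: "0x" prefix
```

If the check fails, the node would reject the call with the "invalid hex character" error quoted above.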
#### Parameters (2) **1:** privkey `string` - - Required: [Y] Yes - - Format: 64 hex characters (32 bytes) without "0x" prefix +- Required: [Y] Yes +- Format: 64 hex characters (32 bytes) without "0x" prefix **2:** password `string` - - Required: [Y] Yes +- Required: [Y] Yes ```shell -// Request - Note: private key without "0x" prefix +/ Request - Note: private key without "0x" prefix curl -X POST --data '{"jsonrpc":"2.0","method":"personal_importRawKey","params":["c5bd76cd0cd948de17a31261567d219576e992d9066fe1a6bca97496dec634e2", "the key is this"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result -{"jsonrpc":"2.0","id":1,"result":"0x..."} // Returns the address +/ Result +{"jsonrpc":"2.0","id":1,"result":"0x..."} / Returns the address ``` ### `personal_listAccounts` - -**Private**: Requires authentication. - +**Private**: Requires authentication. Returns a list of addresses for accounts this node manages. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_listAccounts","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":["0x3b7252d007059ffc82d16d022da3cbf9992d2f70","0xddd64b4712f7c8f1ace3c145c950339eddaf221d","0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0"]} ``` ### `personal_lockAccount` - -**Private**: Requires authentication. - +**Private**: Requires authentication. Removes the private key with given address from memory. The account can no longer be used to send transactions. #### Parameters - - Account Address - - +- Account Address ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_lockAccount","params":["0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":true} ``` ### `personal_newAccount` - -**Private**: Requires authentication. 
- +**Private**: Requires authentication. Generates a new private key and stores it in the key store directory. The key file is encrypted with the given passphrase. It returns the address of the new account. #### Parameters - - Passphrase - - +- Passphrase ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_newAccount","params":["This is the passphrase"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xf0e4086ad1c6aab5d42161d5baaae2f9ad0571c0"} ``` ### `personal_unlockAccount` - -**Private**: Requires authentication. - +**Private**: Requires authentication. Decrypts the key with the given address from the key store. @@ -1423,104 +1409,93 @@ The account can be used with [`eth_sign`](https://www.google.com/search?q=%23eth #### Parameters - - Account Address - - Passphrase - - Duration - - +- Account Address +- Passphrase +- Duration ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_unlockAccount","params":["0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "secret passphrase", 30],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":true} ``` ### `personal_sendTransaction` - -**Private**: Requires authentication. - +**Private**: Requires authentication. Validates the given passphrase and submits the transaction. The transaction is the same argument as for [`eth_sendTransaction`](#eth_sendtransaction) and contains the `from` address. If the passphrase can be used to decrypt the private key belonging to `tx.from`, the transaction is verified, signed and sent onto the network. -The account is not unlocked globally in the node and cannot be used in other RPC calls. + The account is not unlocked globally in the node and cannot be used in other + RPC calls. 
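The JSON-RPC envelope for `personal_sendTransaction` can be assembled programmatically before being posted to the node. A minimal sketch; the helper name is illustrative, and the addresses and value are the ones used in the request example below:

```javascript
// Build a personal_sendTransaction JSON-RPC payload.
// params[0] is the transaction object; params[1] is the passphrase
// that decrypts the key belonging to tx.from.
function buildPersonalSendTx(tx, passphrase, id = 1) {
  return {
    jsonrpc: "2.0",
    method: "personal_sendTransaction",
    params: [tx, passphrase],
    id,
  };
}

const payload = buildPersonalSendTx(
  {
    from: "0x3b7252d007059ffc82d16d022da3cbf9992d2f70",
    to: "0xddd64b4712f7c8f1ace3c145c950339eddaf221d",
    value: "0x16345785d8a0000",
  },
  "passphrase"
);
console.log(JSON.stringify(payload));
```

The serialized payload is what the curl examples in this section pass via `--data`.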
#### Parameters - - Object containing: - - `from`: `DATA`, 20 Bytes - The address the transaction is send from. - `to`: `DATA`, 20 Bytes - (optional when creating new contract) The address the transaction is directed to. - `value`: QUANTITY - value sent with this transaction - - - Passphrase +- Object containing: + `from`: `DATA`, 20 Bytes - The address the transaction is sent from. + `to`: `DATA`, 20 Bytes - (optional when creating new contract) The address the transaction is directed to. + `value`: QUANTITY - value sent with this transaction +- Passphrase ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_sendTransaction","params":[{"from":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70","to":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d", "value":"0x16345785d8a0000"}, "passphrase"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xd2a31ec1b89615c8d1f4d08fe4e4182efa4a9c0d5758ace6676f485ea60e154c"} ``` ### `personal_sign` - -**Private**: Requires authentication. - +**Private**: Requires authentication. The sign method calculates an Ethereum-specific signature with: `sign(keccak256("\x19Ethereum Signed Message:\n" + len(message) + message))`. #### Parameters - - Message - - Account Address - - Password - - +- Message +- Account Address +- Password ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_sign","params":["0xdeadbeaf", "0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "password"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xf9ff74c86aefeb5f6019d77280bbb44fb695b4d45cfe97e6eed7acd62905f4a85034d5c68ed25a2e7a8eeb9baf1b8401e4f865d92ec48c1763bf649e354d900b1c"} ``` ### `personal_ecRecover` - -**Private**: Requires authentication. - +**Private**: Requires authentication. 
`ecRecover` returns the address associated with the private key that was used to calculate the signature in [`personal_sign`](#personal_sign). -**Implementation Note**: The signature must be exactly 65 bytes (130 hex characters). Signatures of other lengths will return an error: "signature must be 65 bytes long". + **Implementation Note**: The signature must be exactly 65 bytes (130 hex + characters). Signatures of other lengths will return an error: "signature must + be 65 bytes long". #### Parameters - - Message (hex encoded) - - Signature returned from [`personal_sign`](#personal_sign) (65 bytes) +- Message (hex encoded) +- Signature returned from [`personal_sign`](#personal_sign) (65 bytes) ```shell -// Request - Note: signature must be exactly 130 hex characters (65 bytes) +/ Request - Note: signature must be exactly 130 hex characters (65 bytes) curl -X POST --data '{"jsonrpc":"2.0","method":"personal_ecRecover","params":["0xdeadbeaf", "0xf9ff74c86aefeb5f6019d77280bbb44fb695b4d45cfe97e6eed7acd62905f4a85034d5c68ed25a2e7a8eeb9baf1b8401e4f865d92ec48c1763bf649e354d900b1c"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70"} ``` ### `personal_initializeWallet` - -**Private**: Requires authentication. - +**Private**: Requires authentication. Initializes a new wallet at the provided URL, by generating and returning a new private key. @@ -1530,7 +1505,7 @@ Parameters must be given by position. 1: url `string` - - Required: [Y] Yes +- Required: [Y] Yes #### Client Examples @@ -1554,16 +1529,14 @@ personal.initializeWallet(url); ### `personal_unpair` - -**Private**: Requires authentication. - +**Private**: Requires authentication. Unpair deletes a pairing between wallet and the node. 
#### Parameters (2) - - URL - - Pairing password +- URL +- Pairing password #### Client Examples @@ -1582,25 +1555,23 @@ wscat -c ws://localhost:8546 -x '{"jsonrpc": "2.0", "id": 1, "method": "personal Javascript Console ```javascript -personal.unpair(url,pin); +personal.unpair(url, pin); ``` ### `personal_listWallets` - -**Private**: Requires authentication. - +**Private**: Requires authentication. Returns a list of wallets this node manages. -Currently returns `null` as wallet-level management is not supported. + Currently returns `null` as wallet-level management is not supported. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_listWallets","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1612,14 +1583,12 @@ The `traceTransaction` debugging method will attempt to run the transaction in t #### Parameters - - Trace Config - - +- Trace Config ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceTransaction","params":["0xddecdb13226339681372b44e01df0fbc0f446fca6f834b2de5ecb1e569022ec8", {"tracer": "{data: [], fault: function(log) {}, step: function(log) { if(log.op.toString() == \"CALL\") this.data.push(log.stack.peek(0)); }, result: function() { return this.data; }}"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -//Result +/Result ["68410", "51470"] ``` @@ -1628,18 +1597,20 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceTransaction","params" The `traceBlockByNumber` endpoint accepts a block number and will replay the block that is already present in the database. -**Stub Implementation**: This method is implemented but currently returns an empty array for all requests. The tracing functionality is not fully operational in Cosmos EVM. + **Stub Implementation**: This method is implemented but currently returns an + empty array for all requests. 
The tracing functionality is not fully + operational in Cosmos EVM. #### Parameters - - Block number (hex string) - - Trace Config (optional) +- Block number (hex string) +- Trace Config (optional) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0xe", {"tracer": "{data: [], fault: function(log) {}, step: function(log) { if(log.op.toString() == \"CALL\") this.data.push(log.stack.peek(0)); }, result: function() { return this.data; }}"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result - Currently returns empty array +/ Result - Currently returns empty array {"jsonrpc":"2.0","id":1,"result":[]} ``` @@ -1648,18 +1619,20 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","param Similar to `debug_traceBlockByNumber`, this method accepts a block hash and will replay the block that is already present in the database. -**Stub Implementation**: This method is implemented but currently returns an empty array for all requests. The tracing functionality is not fully operational in Cosmos EVM. + **Stub Implementation**: This method is implemented but currently returns an + empty array for all requests. The tracing functionality is not fully + operational in Cosmos EVM. 
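Because both `debug_traceBlockByNumber` and `debug_traceBlockByHash` are stubs that currently return an empty array, a client that also targets full Geth nodes may want to flag the stub case explicitly rather than reading it as "block has no traces". A hedged sketch; the helper name and return shape are illustrative:

```javascript
// Interpret a debug_traceBlockBy* result from a node that may be running
// the Cosmos EVM stub implementation, which always returns [].
function interpretTraceResult(result) {
  if (Array.isArray(result) && result.length === 0) {
    // Could be an empty block, or the stub; surface the ambiguity.
    return { traced: false, note: "empty result (possible stub implementation)" };
  }
  return { traced: true, traces: result };
}

console.log(interpretTraceResult([]).traced);                 // false
console.log(interpretTraceResult([{ op: "CALL" }]).traced);   // true
```

This is only a client-side convention; the node itself does not distinguish the two cases in its response.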
#### Parameters - - Block hash (32-byte hash) - - Trace Config (optional) +- Block hash (32-byte hash) +- Trace Config (optional) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceBlockByHash","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4", {}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result - Currently returns empty array +/ Result - Currently returns empty array {"jsonrpc":"2.0","id":1,"result":[]} ``` @@ -1667,14 +1640,12 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceBlockByHash","params" Forces the Go runtime to perform a garbage collection and return unused memory to the OS. - -**Private**: Requires authentication. - +**Private**: Requires authentication. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_freeOSMemory","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1682,18 +1653,16 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_freeOSMemory","params":[], Sets the garbage collection target percentage. A negative value disables garbage collection. - -**Private**: Requires authentication. - +**Private**: Requires authentication. #### Parameters - - GC percentage (integer) +- GC percentage (integer) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setGCPercent","params":[100],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":100} ``` @@ -1701,14 +1670,12 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setGCPercent","params":[10 Returns detailed runtime memory statistics. - -**Private**: Requires authentication. - +**Private**: Requires authentication. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_memStats","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"Alloc":83328680,"TotalAlloc":451796592,"Sys":166452520,"Lookups":0,"Mallocs":4071615,"Frees":3772501,"HeapAlloc":83328680,"HeapSys":153452544,"HeapIdle":51568640,"HeapInuse":101883904,"HeapReleased":44720128,"HeapObjects":299114,"StackInuse":1736704,"StackSys":1736704,"MSpanInuse":1119520,"MSpanSys":1958400,"MCacheInuse":16912,"MCacheSys":31408,"BuckHashSys":1583603,"GCSys":5251008,"OtherSys":2438853,"NextGC":142217095,"LastGC":1754180652189080000,"PauseTotalNs":1266251,"NumGC":18,"NumForcedGC":1,"GCCPUFraction":0.0002128524091018015,"EnableGC":true,"DebugGC":false}} ``` @@ -1716,18 +1683,16 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_memStats","params":[],"id" Sets the rate of goroutine block profile data collection. A non-zero value enables block profiling. - -**Private**: Requires authentication. - +**Private**: Requires authentication. #### Parameters - - Profile rate (integer) +- Profile rate (integer) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setBlockProfileRate","params":[1],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1735,18 +1700,16 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setBlockProfileRate","para Writes a goroutine blocking profile to the specified file. - -**Private**: Requires authentication. - +**Private**: Requires authentication. 
#### Parameters - - File path (string) +- File path (string) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeBlockProfile","params":["block.prof"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1754,18 +1717,16 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeBlockProfile","params Writes an allocation profile to the specified file. - -**Private**: Requires authentication. - +**Private**: Requires authentication. #### Parameters - - File path (string) +- File path (string) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeMemProfile","params":["mem.prof"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1773,18 +1734,16 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeMemProfile","params": Writes a mutex contention profile to the specified file. - -**Private**: Requires authentication. - +**Private**: Requires authentication. #### Parameters - - File path (string) +- File path (string) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeMutexProfile","params":["mutex.prof"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1792,9 +1751,7 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeMutexProfile","params Turns on block profiling for the given duration and writes the profile data to disk. - -**Private**: Requires authentication. - +**Private**: Requires authentication. 
#### Parameters @@ -1802,9 +1759,9 @@ Turns on block profiling for the given duration and writes the profile data to d - Duration in seconds (number) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_blockProfile","params":["block.prof", 30],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1812,9 +1769,7 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_blockProfile","params":["b Turns on CPU profiling for the given duration and writes the profile data to disk. - -**Private**: Requires authentication. - +**Private**: Requires authentication. #### Parameters @@ -1822,9 +1777,9 @@ Turns on CPU profiling for the given duration and writes the profile data to dis - Duration in seconds (number) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_cpuProfile","params":["cpu.prof", 30],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1833,14 +1788,14 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_cpuProfile","params":["cpu Returns garbage collection statistics. -**Private**: Requires authentication. -**Cosmos-specific**: This method is unique to Cosmos EVM. + **Private**: Requires authentication. **Cosmos-specific**: This method is + unique to Cosmos EVM. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_gcStats","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"NumGC":10,"PauseTotal":1000000,"Pause":[100000,200000],"PauseEnd":[1234567890,1234567891],"...":"..."}} ``` @@ -1848,9 +1803,7 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_gcStats","params":[],"id": Turns on Go runtime tracing for the given duration and writes the trace data to disk. - -**Private**: Requires authentication. 
- +**Private**: Requires authentication. #### Parameters @@ -1858,9 +1811,9 @@ Turns on Go runtime tracing for the given duration and writes the trace data to - Duration in seconds (number) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_goTrace","params":["trace.out", 5],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1868,14 +1821,12 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_goTrace","params":["trace. Returns a printed representation of the stacks of all goroutines. - -**Private**: Requires authentication. - +**Private**: Requires authentication. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_stacks","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"goroutine 1 [running]:\n..."} ``` @@ -1883,18 +1834,16 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_stacks","params":[],"id":1 Starts indefinite CPU profiling, writing to the given file. - -**Private**: Requires authentication. - +**Private**: Requires authentication. #### Parameters - File path (string) - Output file for the profile ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_startCPUProfile","params":["cpu.prof"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1902,14 +1851,12 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_startCPUProfile","params": Stops an ongoing CPU profile. - -**Private**: Requires authentication. - +**Private**: Requires authentication. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_stopCPUProfile","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1917,9 +1864,7 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_stopCPUProfile","params":[ Turns on mutex profiling for the given duration and writes the profile data to disk. - -**Private**: Requires authentication. - +**Private**: Requires authentication. #### Parameters @@ -1927,9 +1872,9 @@ Turns on mutex profiling for the given duration and writes the profile data to d - Duration in seconds (number) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_mutexProfile","params":["mutex.prof", 10],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1938,8 +1883,8 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_mutexProfile","params":["m Sets the rate of mutex profiling. -**Private**: Requires authentication. -**Cosmos-specific**: This method is unique to Cosmos EVM. + **Private**: Requires authentication. **Cosmos-specific**: This method is + unique to Cosmos EVM. #### Parameters @@ -1947,9 +1892,9 @@ Sets the rate of mutex profiling. - Rate (number) - 0 to disable, 1 for full profiling ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setMutexProfileFraction","params":[1],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":0} ``` @@ -1957,12 +1902,13 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setMutexProfileFraction"," Returns the RLP encoding of the header of the block. - -**Private**: Requires authentication. - +**Private**: Requires authentication. -**Parameter Type**: This method requires the block number as a `uint64` (number type), not a hex string. 
Passing a string will return an error: "invalid argument 0: json: cannot unmarshal string into Go value of type uint64". + **Parameter Type**: This method requires the block number as a `uint64` + (number type), not a hex string. Passing a string will return an error: + "invalid argument 0: json: cannot unmarshal string into Go value of type + uint64". #### Parameters @@ -1970,9 +1916,9 @@ Returns the RLP encoding of the header of the block. - Block number (uint64 number, not hex string) ```shell -// Request - Note: block number as integer, not hex string +/ Request - Note: block number as integer, not hex string curl -X POST --data '{"jsonrpc":"2.0","method":"debug_getHeaderRlp","params":[100],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xf90..."} ``` @@ -1980,12 +1926,13 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_getHeaderRlp","params":[10 Returns the RLP encoding of the block. - -**Private**: Requires authentication. - +**Private**: Requires authentication. -**Parameter Type**: This method requires the block number as a `uint64` (number type), not a hex string. Passing a string will return an error: "invalid argument 0: json: cannot unmarshal string into Go value of type uint64". + **Parameter Type**: This method requires the block number as a `uint64` + (number type), not a hex string. Passing a string will return an error: + "invalid argument 0: json: cannot unmarshal string into Go value of type + uint64". #### Parameters @@ -1993,9 +1940,9 @@ Returns the RLP encoding of the block. 
- Block number (uint64 number, not hex string) ```shell -// Request - Note: block number as integer, not hex string +/ Request - Note: block number as integer, not hex string curl -X POST --data '{"jsonrpc":"2.0","method":"debug_getBlockRlp","params":[100],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xf90..."} ``` @@ -2003,12 +1950,13 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_getBlockRlp","params":[100 Returns a formatted string of the block. - -**Private**: Requires authentication. - +**Private**: Requires authentication. -**Parameter Type**: This method requires the block number as a `uint64` (number type), not a hex string. Passing a string will return an error: "invalid argument 0: json: cannot unmarshal string into Go value of type uint64". + **Parameter Type**: This method requires the block number as a `uint64` + (number type), not a hex string. Passing a string will return an error: + "invalid argument 0: json: cannot unmarshal string into Go value of type + uint64". #### Parameters @@ -2016,9 +1964,9 @@ Returns a formatted string of the block. - Block number (uint64 number, not hex string) ```shell -// Request - Note: block number as integer, not hex string +/ Request - Note: block number as integer, not hex string curl -X POST --data '{"jsonrpc":"2.0","method":"debug_printBlock","params":[100],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"Block #100\nParent: 0x...\nStateRoot: 0x...\n..."} ``` @@ -2026,9 +1974,7 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"debug_printBlock","params":[100] Returns the intermediate state roots for a transaction. - -**Private**: Requires authentication. - +**Private**: Requires authentication. #### Parameters @@ -2036,9 +1982,9 @@ Returns the intermediate state roots for a transaction. 
- Trace config (object, optional) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_intermediateRoots","params":["0x88df016429689c079f3b2f6ad39fa052532c56795b733da78a91ebe6a713944b", {}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":["0x...","0x..."]} ``` @@ -2073,7 +2019,7 @@ txpool.content(); #### Result ```json -{"jsonrpc":"2.0","id":1,"result":{"pending":{},"queued":{}}} +{ "jsonrpc": "2.0", "id": 1, "result": { "pending": {}, "queued": {} } } ``` ### `txpool_contentFrom` @@ -2107,7 +2053,7 @@ txpool.contentFrom("0x1234567890abcdef1234567890abcdef12345678"); #### Result ```json -{"jsonrpc":"2.0","id":1,"result":{"pending":{},"queued":{}}} +{ "jsonrpc": "2.0", "id": 1, "result": { "pending": {}, "queued": {} } } ``` ### `txpool_inspect` @@ -2139,7 +2085,7 @@ txpool.inspect(); #### Result ```json -{"jsonrpc":"2.0","id":1,"result":{"pending":{},"queued":{}}} +{ "jsonrpc": "2.0", "id": 1, "result": { "pending": {}, "queued": {} } } ``` ### `txpool_status` @@ -2171,13 +2117,14 @@ txpool.status(); #### Result ```json -{"jsonrpc":"2.0","id":1,"result":{"pending":"0x0","queued":"0x0"}} +{ "jsonrpc": "2.0", "id": 1, "result": { "pending": "0x0", "queued": "0x0" } } ``` ### `eth_createAccessList` -This method is fully implemented and supports EIP-2930 access list transactions. + This method is fully implemented and supports EIP-2930 access list + transactions. Creates an access list for a given transaction. This is useful for optimizing gas costs when calling contracts, as it allows the transaction to specify which storage slots will be accessed, potentially reducing gas costs for those accesses. @@ -2187,6 +2134,7 @@ The access list contains addresses and storage keys that the transaction plans t #### Parameters 1. 
**Transaction Object** - The transaction call object + - `from`: `DATA`, 20 Bytes - (optional) The address the transaction is sent from - `to`: `DATA`, 20 Bytes - The address the transaction is directed to - `gas`: `QUANTITY` - (optional) Integer of the gas provided for the transaction execution @@ -2234,11 +2182,14 @@ wscat -c ws://localhost:8546 -x '{ Javascript (web3.js) ```javascript -web3.eth.createAccessList({ - from: "0x8D97689C9818892B700e27F316cc3E41e17fBeb9", - to: "0xd3CdA913deB6f67967B99D67aCDFa1712C293601", - data: "0x70a08231000000000000000000000000d3CdA913deB6f67967B99D67aCDFa1712C293601" -}, "latest"); +web3.eth.createAccessList( + { + from: "0x8D97689C9818892B700e27F316cc3E41e17fBeb9", + to: "0xd3CdA913deB6f67967B99D67aCDFa1712C293601", + data: "0x70a08231000000000000000000000000d3CdA913deB6f67967B99D67aCDFa1712C293601", + }, + "latest" +); ``` #### Result diff --git a/docs/evm/next/changelog/release-notes.mdx b/docs/evm/next/changelog/release-notes.mdx index 8f1c091d..f2d6cd01 100644 --- a/docs/evm/next/changelog/release-notes.mdx +++ b/docs/evm/next/changelog/release-notes.mdx @@ -1,87 +1,100 @@ --- title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" mode: "center" --- - This page tracks all releases and changes from the [cosmos/evm](https://github.com/cosmos/evm) repository. - For the latest development updates, see the [UNRELEASED](https://github.com/cosmos/evm/blob/main/CHANGELOG.md#unreleased) section. + This page tracks all releases and changes from the + [cosmos/evm](https://github.com/cosmos/evm) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/evm/blob/main/CHANGELOG.md#unreleased) + section. 
## Features -* Add comprehensive Solidity-based end-to-end tests for precompiles ([#253](https://github.com/cosmos/evm/pull/253)) -* Add 4-node localnet infrastructure for testing multi-validator setups ([#301](https://github.com/cosmos/evm/pull/301)) -* Add system test framework for integration testing ([#304](https://github.com/cosmos/evm/pull/304)) -* Add txpool RPC namespace stubs in preparation for app-side mempool implementation ([#344](https://github.com/cosmos/evm/pull/344)) -* Enforce app creator returning application implement AppWithPendingTxStream in build time. ([#440](https://github.com/cosmos/evm/pull/440)) - -## Improvements - -* Enforce single EVM transaction per Cosmos transaction for security ([#294](https://github.com/cosmos/evm/pull/294)) -* Update dependencies for security and performance improvements ([#299](https://github.com/cosmos/evm/pull/299)) -* Preallocate EVM access_list for better performance ([#307](https://github.com/cosmos/evm/pull/307)) -* Fix EmitApprovalEvent to use owner address instead of precompile address ([#317](https://github.com/cosmos/evm/pull/317)) -* Fix gas cap calculation and fee rounding errors in ante handler benchmarks ([#345](https://github.com/cosmos/evm/pull/345)) -* Add loop break labels for optimization ([#347](https://github.com/cosmos/evm/pull/347)) -* Use larger CI runners for resource-intensive tests ([#370](https://github.com/cosmos/evm/pull/370)) -* Apply security audit patches ([#373](https://github.com/cosmos/evm/pull/373)) -* Apply audit-related commit 388b5c0 ([#377](https://github.com/cosmos/evm/pull/377)) -* Post-audit security fixes (batch 1) ([#382](https://github.com/cosmos/evm/pull/382)) -* Post-audit security fixes (batch 2) ([#388](https://github.com/cosmos/evm/pull/388)) -* Post-audit security fixes (batch 3) ([#389](https://github.com/cosmos/evm/pull/389)) -* Post-audit security fixes (batch 5) ([#392](https://github.com/cosmos/evm/pull/392)) -* Post-audit security fixes (batch 4) 
([#398](https://github.com/cosmos/evm/pull/398)) -* Prevent nil pointer by checking error in gov precompile FromResponse. ([#442](https://github.com/cosmos/evm/pull/442)) -* (Experimental) EVM-compatible appside mempool ([#387](https://github.com/cosmos/evm/pull/387)) -* Add revert error e2e tests for contract and precompile calls ([#476](https://github.com/cosmos/evm/pull/476)) - -## Bug Fixes - -* Fix compilation error in server/start.go ([#179](https://github.com/cosmos/evm/pull/179)) -* Use PriorityMempool with signer extractor to prevent missing signers error in tx execution ([#245](https://github.com/cosmos/evm/pull/245)) -* Align revert reason format with go-ethereum (return hex-encoded result) ([#289](https://github.com/cosmos/evm/pull/289)) -* Use proper address codecs in precompiles for bech32/hex conversion ([#291](https://github.com/cosmos/evm/pull/291)) -* Add sanity checks to trace_tx RPC endpoint ([#296](https://github.com/cosmos/evm/pull/296)) -* Fix estimate gas to handle missing fields for new transaction types ([#316](https://github.com/cosmos/evm/pull/316)) -* Fix error propagation in BlockHash RPCs and address test flakiness ([#330](https://github.com/cosmos/evm/pull/330)) -* Fix non-determinism in state transitions ([#332](https://github.com/cosmos/evm/pull/332)) -* Fix p256 precompile test flakiness ([#350](https://github.com/cosmos/evm/pull/350)) -* Fix precompile initialization for local node development script ([#376](https://github.com/cosmos/evm/pull/376)) -* Fix debug_traceTransaction RPC failing with block height mismatch errors ([#384](https://github.com/cosmos/evm/pull/384)) -* Align precompiles map with available static check to Prague. ([#441](https://github.com/cosmos/evm/pull/441)) -* Cleanup unused cancel function in filter. ([#452](https://github.com/cosmos/evm/pull/452)) -* Align multi decode functions instead of string contains check in HexAddressFromBech32String. 
([#454](https://github.com/cosmos/evm/pull/454)) -* Add pagination flags to `token-pairs` to improve query flexibility. ([#468](https://github.com/cosmos/evm/pull/468)) - -## Dependencies - -* Update `cosmossdk.io/log` to `v1.6.1` to support Go `v1.25.0+`. ([#459](https://github.com/cosmos/evm/pull/459)) -* Update Cosmos SDK to `v0.53.4` and CometBFT to `v0.38.18`. ([#435](https://github.com/cosmos/evm/pull/435)) - -## API Breaking - -* Remove non–go-ethereum JSON-RPC methods to align with Geth’s surface ([#456](https://github.com/cosmos/evm/pull/456)) -* Move `ante` logic from the `evmd` Go package to the `evm` package to ([#443](https://github.com/cosmos/evm/pull/443)) -* Align function and package names for consistency. ([#422](https://github.com/cosmos/evm/pull/422)) -* Remove evidence precompile due to lack of use cases ([#305](https://github.com/cosmos/evm/pull/305)) - -## API Breaking - -* Remove non–go-ethereum JSON-RPC methods to align with Geth’s surface ([#456](https://github.com/cosmos/evm/pull/456)) -* Move `ante` logic from the `evmd` Go package to the `evm` package to ([#443](https://github.com/cosmos/evm/pull/443)) -* Align function and package names for consistency. ([#422](https://github.com/cosmos/evm/pull/422)) -* Remove evidence precompile due to lack of use cases ([#305](https://github.com/cosmos/evm/pull/305)) +- Add comprehensive Solidity-based end-to-end tests for precompiles ([#253](https://github.com/cosmos/evm/pull/253)) +- Add 4-node localnet infrastructure for testing multi-validator setups ([#301](https://github.com/cosmos/evm/pull/301)) +- Add system test framework for integration testing ([#304](https://github.com/cosmos/evm/pull/304)) +- Add txpool RPC namespace stubs in preparation for app-side mempool implementation ([#344](https://github.com/cosmos/evm/pull/344)) +- Enforce app creator returning application implement AppWithPendingTxStream in build time. 
([#440](https://github.com/cosmos/evm/pull/440)) + +## Improvements + +- Enforce single EVM transaction per Cosmos transaction for security ([#294](https://github.com/cosmos/evm/pull/294)) +- Update dependencies for security and performance improvements ([#299](https://github.com/cosmos/evm/pull/299)) +- Preallocate EVM access_list for better performance ([#307](https://github.com/cosmos/evm/pull/307)) +- Fix EmitApprovalEvent to use owner address instead of precompile address ([#317](https://github.com/cosmos/evm/pull/317)) +- Fix gas cap calculation and fee rounding errors in ante handler benchmarks ([#345](https://github.com/cosmos/evm/pull/345)) +- Add loop break labels for optimization ([#347](https://github.com/cosmos/evm/pull/347)) +- Use larger CI runners for resource-intensive tests ([#370](https://github.com/cosmos/evm/pull/370)) +- Apply security audit patches ([#373](https://github.com/cosmos/evm/pull/373)) +- Apply audit-related commit 388b5c0 ([#377](https://github.com/cosmos/evm/pull/377)) +- Post-audit security fixes (batch 1) ([#382](https://github.com/cosmos/evm/pull/382)) +- Post-audit security fixes (batch 2) ([#388](https://github.com/cosmos/evm/pull/388)) +- Post-audit security fixes (batch 3) ([#389](https://github.com/cosmos/evm/pull/389)) +- Post-audit security fixes (batch 5) ([#392](https://github.com/cosmos/evm/pull/392)) +- Post-audit security fixes (batch 4) ([#398](https://github.com/cosmos/evm/pull/398)) +- Prevent nil pointer by checking error in gov precompile FromResponse. 
([#442](https://github.com/cosmos/evm/pull/442)) +- (Experimental) EVM-compatible appside mempool ([#387](https://github.com/cosmos/evm/pull/387)) +- Add revert error e2e tests for contract and precompile calls ([#476](https://github.com/cosmos/evm/pull/476)) + +## Bug Fixes + +- Fix compilation error in server/start.go ([#179](https://github.com/cosmos/evm/pull/179)) +- Use PriorityMempool with signer extractor to prevent missing signers error in tx execution ([#245](https://github.com/cosmos/evm/pull/245)) +- Align revert reason format with go-ethereum (return hex-encoded result) ([#289](https://github.com/cosmos/evm/pull/289)) +- Use proper address codecs in precompiles for bech32/hex conversion ([#291](https://github.com/cosmos/evm/pull/291)) +- Add sanity checks to trace_tx RPC endpoint ([#296](https://github.com/cosmos/evm/pull/296)) +- Fix estimate gas to handle missing fields for new transaction types ([#316](https://github.com/cosmos/evm/pull/316)) +- Fix error propagation in BlockHash RPCs and address test flakiness ([#330](https://github.com/cosmos/evm/pull/330)) +- Fix non-determinism in state transitions ([#332](https://github.com/cosmos/evm/pull/332)) +- Fix p256 precompile test flakiness ([#350](https://github.com/cosmos/evm/pull/350)) +- Fix precompile initialization for local node development script ([#376](https://github.com/cosmos/evm/pull/376)) +- Fix debug_traceTransaction RPC failing with block height mismatch errors ([#384](https://github.com/cosmos/evm/pull/384)) +- Align precompiles map with available static check to Prague. ([#441](https://github.com/cosmos/evm/pull/441)) +- Cleanup unused cancel function in filter. ([#452](https://github.com/cosmos/evm/pull/452)) +- Align multi decode functions instead of string contains check in HexAddressFromBech32String. ([#454](https://github.com/cosmos/evm/pull/454)) +- Add pagination flags to `token-pairs` to improve query flexibility. 
([#468](https://github.com/cosmos/evm/pull/468))
+
+## Dependencies
+
+- Update `cosmossdk.io/log` to `v1.6.1` to support Go `v1.25.0+`. ([#459](https://github.com/cosmos/evm/pull/459))
+- Update Cosmos SDK to `v0.53.4` and CometBFT to `v0.38.18`. ([#435](https://github.com/cosmos/evm/pull/435))
+
+## API Breaking
+
+- Remove non–go-ethereum JSON-RPC methods to align with Geth’s surface ([#456](https://github.com/cosmos/evm/pull/456))
+- Move `ante` logic from the `evmd` Go package to the `evm` package ([#443](https://github.com/cosmos/evm/pull/443))
+- Align function and package names for consistency. ([#422](https://github.com/cosmos/evm/pull/422))
+- Remove evidence precompile due to lack of use cases ([#305](https://github.com/cosmos/evm/pull/305))
+
 ---
-
+
  See the complete changelog on GitHub
-
+
  Report bugs or request features

diff --git a/docs/evm/next/documentation/concepts/accounts.mdx b/docs/evm/next/documentation/concepts/accounts.mdx
index 5617bc45..9bf580e6 100644
--- a/docs/evm/next/documentation/concepts/accounts.mdx
+++ b/docs/evm/next/documentation/concepts/accounts.mdx
@@ -42,20 +42,20 @@ The root HD path for EVM-based accounts is `m/44'/60'/0'/0`.
It is recommended t The custom Cosmos EVM [EthAccount](https://github.com/cosmos/evm/blob/v0.4.1/types/account.go#L28-L33) satisfies the `AccountI` interface from the Cosmos SDK auth module and includes additional fields that are required for Ethereum type addresses: ```go "EthAccountI Interface" expandable -// EthAccountI represents the interface of an EVM compatible account +/ EthAccountI represents the interface of an EVM compatible account type EthAccountI interface { authtypes.AccountI - // EthAddress returns the ethereum Address representation of the AccAddress + / EthAddress returns the ethereum Address representation of the AccAddress EthAddress() common.Address - // CodeHash is the keccak256 hash of the contract code (if any) + / CodeHash is the keccak256 hash of the contract code (if any) GetCodeHash() common.Hash - // SetCodeHash sets the code hash to the account fields + / SetCodeHash sets the code hash to the account fields SetCodeHash(code common.Hash) error - // Type returns the type of Ethereum Account (EOA or Contract) + / Type returns the type of Ethereum Account (EOA or Contract) Type() int8 } ``` diff --git a/docs/evm/next/documentation/concepts/chain-id.mdx b/docs/evm/next/documentation/concepts/chain-id.mdx index c72991ad..1bc305b6 100644 --- a/docs/evm/next/documentation/concepts/chain-id.mdx +++ b/docs/evm/next/documentation/concepts/chain-id.mdx @@ -24,7 +24,7 @@ The **Cosmos Chain ID** is a string identifier used by: **Example**: ```json -// In genesis.json +/ In genesis.json { "chain_id": "cosmosevm-1" } @@ -42,7 +42,7 @@ The **EVM Chain ID** is an integer used by: **Example**: ```go -// In app/app.go +/ In app/app.go const EVMChainID = 9000 ``` @@ -53,10 +53,10 @@ Both chain IDs must be configured when setting up your chain: ### In Your Application Code ```go "Chain ID Configuration" expandable -// app/app.go +/ app/app.go const ( - CosmosChainID = "cosmosevm-1" // String for Cosmos/IBC - EVMChainID = 9000 // Integer for EVM/Ethereum + 
CosmosChainID = "cosmosevm-1" / String for Cosmos/IBC + EVMChainID = 9000 / Integer for EVM/Ethereum ) ``` @@ -66,14 +66,14 @@ The Cosmos Chain ID is set in `genesis.json`: ```json "Genesis Chain ID Configuration" expandable { "chain_id": "cosmosevm-1", - // ... other genesis parameters + / ... other genesis parameters } ``` The EVM Chain ID is configured in the EVM module parameters: ```go "EVM Chain ID Initialization" expandable -// During chain initialization -evmtypes.DefaultChainConfig(9000) // Your EVM Chain ID +/ During chain initialization +evmtypes.DefaultChainConfig(9000) / Your EVM Chain ID ``` ## Important Considerations @@ -115,36 +115,3 @@ Here are some example configurations: | Devnet | `"cosmosevm-dev-1"` | `9002` | Development network | | Local | `"cosmosevm-local-1"` | `31337` | Local development | -## Troubleshooting - -### Common Issues - -1. **"Chain ID mismatch" errors** - - **Cause**: Using Cosmos Chain ID where EVM Chain ID is expected (or vice versa) - - **Solution**: Ensure you're using the correct type of chain ID for each context - -2. **MetaMask connection failures** - - **Cause**: Incorrect EVM Chain ID in wallet configuration - - **Solution**: Use the integer EVM Chain ID, not the string Cosmos Chain ID - -3. **IBC transfer failures** - - **Cause**: Using EVM Chain ID for IBC operations - - **Solution**: IBC always uses the Cosmos Chain ID (string format) - -4. 
**Smart contract deployment issues** - - **Cause**: EIP-155 replay protection using wrong chain ID - - **Solution**: Ensure your EVM Chain ID matches what's configured in the chain - -### Verification Commands - -To verify your chain IDs are correctly configured: - -```bash "Chain ID Verification Commands" expandable -# Check Cosmos Chain ID -curl -s http://localhost:26657/status | jq '.result.node_info.network' - -# Check EVM Chain ID -curl -X POST -H "Content-Type: application/json" \ - --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \ - http://localhost:8545 | jq '.result' -``` \ No newline at end of file diff --git a/docs/evm/next/documentation/concepts/encoding.mdx b/docs/evm/next/documentation/concepts/encoding.mdx index 4c1a0f71..4c63f8df 100644 --- a/docs/evm/next/documentation/concepts/encoding.mdx +++ b/docs/evm/next/documentation/concepts/encoding.mdx @@ -79,7 +79,7 @@ The `x/vm` module handles `MsgEthereumTx` encoding by converting to go-ethereum' ```go "Transaction Encoding Implementation" expandable -// TxEncoder overwrites sdk.TxEncoder to support MsgEthereumTx +/ TxEncoder overwrites sdk.TxEncoder to support MsgEthereumTx func (g txConfig) TxEncoder() sdk.TxEncoder { return func(tx sdk.Tx) ([]byte, error) { msg, ok := tx.(*evmtypes.MsgEthereumTx) @@ -90,7 +90,7 @@ func (g txConfig) TxEncoder() sdk.TxEncoder { } } -// TxDecoder overwrites sdk.TxDecoder to support MsgEthereumTx +/ TxDecoder overwrites sdk.TxDecoder to support MsgEthereumTx func (g txConfig) TxDecoder() sdk.TxDecoder { return func(txBytes []byte) (sdk.Tx, error) { tx := ðtypes.Transaction{} diff --git a/docs/evm/next/documentation/concepts/ibc.mdx b/docs/evm/next/documentation/concepts/ibc.mdx index 8ad5a63f..4f9a99d1 100644 --- a/docs/evm/next/documentation/concepts/ibc.mdx +++ b/docs/evm/next/documentation/concepts/ibc.mdx @@ -18,22 +18,22 @@ The `Packet` is the primary container for cross-chain communication in IBC v2. 
E ```typescript "IBC v2 Core Data Structures" expandable - // Main container sent between chains + / Main container sent between chains interface Packet { - sourceClientId: bytes; // Client ID for destination chain (stored on source) - destClientId: bytes; // Client ID for source chain (stored on destination) - sequence: uint64; // Monotonically increasing nonce per client-pair - timeout: uint64; // UNIX timestamp (seconds) for packet expiry - data: Payload[]; // Application payload(s) + sourceClientId: bytes; / Client ID for destination chain (stored on source) + destClientId: bytes; / Client ID for source chain (stored on destination) + sequence: uint64; / Monotonically increasing nonce per client-pair + timeout: uint64; / UNIX timestamp (seconds) for packet expiry + data: Payload[]; / Application payload(s) } - // Application-specific data + / Application-specific data interface Payload { - sourcePort: bytes; // Sending application identifier - destPort: bytes; // Receiving application identifier - version: string; // Application version - encoding: string; // MIME-type for decoding - value: bytes; // Opaque app data + sourcePort: bytes; / Sending application identifier + destPort: bytes; / Receiving application identifier + version: string; / Application version + encoding: string; / MIME-type for decoding + value: bytes; / Opaque app data } ``` diff --git a/docs/evm/next/documentation/concepts/mempool.mdx b/docs/evm/next/documentation/concepts/mempool.mdx index b9991a74..a9dfcb3e 100644 --- a/docs/evm/next/documentation/concepts/mempool.mdx +++ b/docs/evm/next/documentation/concepts/mempool.mdx @@ -52,17 +52,17 @@ The CheckTx handler processes transactions with special handling for nonce gaps **Nonce Gap Detection** - Transactions with future nonces are intercepted and queued locally: ```go CheckTx Nonce Gap Handling expandable -// From mempool/check_tx.go +/ From mempool/check_tx.go if err != nil { - // detect if there is a nonce gap error (only returned for 
EVM transactions) + / detect if there is a nonce gap error (only returned for EVM transactions) if errors.Is(err, ErrNonceGap) { - // send it to the mempool for further triage + / send it to the mempool for further triage err := mempool.InsertInvalidNonce(request.Tx) if err != nil { return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, false), nil } } - // anything else, return regular error + / anything else, return regular error return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, false), nil } ``` @@ -195,27 +195,27 @@ For chain developers looking to integrate the mempool into their Cosmos SDK chai **Before v0.5.0 (Pre-built Objects):** ```go Before v0.5.0 Configuration expandable -// Required pre-built pools +/ Required pre-built pools customTxPool := CreateCustomTxPool(...) customCosmosPool := CreateCustomCosmosPool(...) mempoolConfig := &evmmempool.EVMMempoolConfig{ - TxPool: customTxPool, // ← REMOVED in v0.5.0 - CosmosPool: customCosmosPool, // ← REMOVED in v0.5.0 + TxPool: customTxPool, / ← REMOVED in v0.5.0 + CosmosPool: customCosmosPool, / ← REMOVED in v0.5.0 AnteHandler: app.GetAnteHandler(), } ``` **After v0.5.0 (Configuration Objects):** ```go After v0.5.0 Configuration expandable -// Configuration-based approach +/ Configuration-based approach mempoolConfig := &evmmempool.EVMMempoolConfig{ - LegacyPoolConfig: &legacypool.Config{...}, // ← NEW in v0.5.0 - CosmosPoolConfig: &sdkmempool.PriorityNonceMempoolConfig{...}, // ← NEW in v0.5.0 + LegacyPoolConfig: &legacypool.Config{...}, / ← NEW in v0.5.0 + CosmosPoolConfig: &sdkmempool.PriorityNonceMempoolConfig{...}, / ← NEW in v0.5.0 AnteHandler: app.GetAnteHandler(), - BroadcastTxFn: customBroadcastFunc, // ← NEW: Optional custom broadcast - BlockGasLimit: 100_000_000, // ← NEW: Configurable gas limit - MinTip: uint256.NewInt(1000000000), // ← NEW: Minimum tip filter (optional) + BroadcastTxFn: customBroadcastFunc, / ← NEW: 
Optional custom broadcast + BlockGasLimit: 100_000_000, / ← NEW: Configurable gas limit + MinTip: uint256.NewInt(1000000000), / ← NEW: Minimum tip filter (optional) } ``` @@ -238,10 +238,10 @@ v0.5.0 introduces comprehensive configuration options for customizing mempool be #### Basic Configuration ```go Basic Mempool Configuration expandable -// Minimal configuration with defaults +/ Minimal configuration with defaults mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), - BlockGasLimit: 100_000_000, // or 0 for auto-default + BlockGasLimit: 100_000_000, / or 0 for auto-default } evmMempool := evmmempool.NewExperimentalEVMMempool( @@ -258,19 +258,19 @@ evmMempool := evmmempool.NewExperimentalEVMMempool( #### Advanced Configuration ```go Advanced Mempool Configuration expandable -// Custom configuration for high-throughput chains +/ Custom configuration for high-throughput chains mempoolConfig := &evmmempool.EVMMempoolConfig{ LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 32, // Transactions per account (default: 16) - GlobalSlots: 8192, // Total pending transactions (default: 5120) - AccountQueue: 128, // Queued per account (default: 64) - GlobalQueue: 2048, // Total queued transactions (default: 1024) - Lifetime: 6*time.Hour, // Transaction lifetime (default: 3h) - PriceLimit: 2, // Min gas price in wei (default: 1) - PriceBump: 15, // Replacement bump % (default: 10) - // Source: mempool/txpool/legacypool/legacypool.go:168-178 - Journal: "txpool.rlp", // Persistence file (default: "transactions.rlp") - Rejournal: 2*time.Hour, // Journal refresh interval (default: 1h) + AccountSlots: 32, / Transactions per account (default: 16) + GlobalSlots: 8192, / Total pending transactions (default: 5120) + AccountQueue: 128, / Queued per account (default: 64) + GlobalQueue: 2048, / Total queued transactions (default: 1024) + Lifetime: 6*time.Hour, / Transaction lifetime (default: 3h) + PriceLimit: 2, / Min gas price in wei (default: 1) + 
PriceBump: 15, / Replacement bump % (default: 10) + / Source: mempool/txpool/legacypool/legacypool.go:168-178 + Journal: "txpool.rlp", / Persistence file (default: "transactions.rlp") + Rejournal: 2*time.Hour, / Journal refresh interval (default: 1h) }, CosmosPoolConfig: &sdkmempool.PriorityNonceMempoolConfig[math.Int]{ TxPriority: sdkmempool.TxPriority[math.Int]{ @@ -280,17 +280,17 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ }, }, AnteHandler: app.GetAnteHandler(), - BroadcastTxFn: customBroadcastFunction, // Optional custom broadcast logic - BlockGasLimit: 200_000_000, // Custom block gas limit + BroadcastTxFn: customBroadcastFunction, / Optional custom broadcast logic + BlockGasLimit: 200_000_000, / Custom block gas limit } ``` #### Custom Priority Functions ```go Custom Priority Function Example expandable -// Example: Prioritize governance and staking transactions +/ Example: Prioritize governance and staking transactions func customPriorityFunction(goCtx context.Context, tx sdk.Tx) math.Int { - // High priority for governance proposals + / High priority for governance proposals for _, msg := range tx.GetMsgs() { if _, ok := msg.(*govtypes.MsgSubmitProposal); ok { return math.NewInt(1000000) @@ -300,7 +300,7 @@ func customPriorityFunction(goCtx context.Context, tx sdk.Tx) math.Int { } } - // Standard fee-based priority for other transactions + / Standard fee-based priority for other transactions feeTx, ok := tx.(sdk.FeeTx) if !ok { return math.ZeroInt() @@ -318,15 +318,15 @@ func customPriorityFunction(goCtx context.Context, tx sdk.Tx) math.Int { #### Custom Broadcast Functions ```go Custom Broadcast Function Example expandable -// Example: Rate-limited broadcasting +/ Example: Rate-limited broadcasting func customBroadcastFunction(txs []*ethtypes.Transaction) error { for i, tx := range txs { - // Rate limit: max 10 tx/second + / Rate limit: max 10 tx/second if i > 0 && i%10 == 0 { time.Sleep(1 * time.Second) } - // Custom broadcast logic + / Custom 
broadcast logic if err := broadcastTransaction(tx); err != nil { return fmt.Errorf("failed to broadcast tx %s: %w", tx.Hash(), err) } @@ -343,24 +343,24 @@ func customBroadcastFunction(txs []*ethtypes.Transaction) error { ```go LegacyPoolConfig Structure expandable type Config struct { - // Transaction pool capacity - AccountSlots uint64 // Executable transactions per account (default: 16) - GlobalSlots uint64 // Total executable transactions (default: 5120) - AccountQueue uint64 // Non-executable transactions per account (default: 64) - GlobalQueue uint64 // Total non-executable transactions (default: 1024) + / Transaction pool capacity + AccountSlots uint64 / Executable transactions per account (default: 16) + GlobalSlots uint64 / Total executable transactions (default: 5120) + AccountQueue uint64 / Non-executable transactions per account (default: 64) + GlobalQueue uint64 / Total non-executable transactions (default: 1024) - // Economic parameters - PriceLimit uint64 // Minimum gas price in wei (default: 1) - PriceBump uint64 // Minimum price bump % for replacement (default: 10) + / Economic parameters + PriceLimit uint64 / Minimum gas price in wei (default: 1) + PriceBump uint64 / Minimum price bump % for replacement (default: 10) - // Lifecycle management - Lifetime time.Duration // Transaction retention time (default: 3h) - Journal string // Persistence file (default: "transactions.rlp") - Rejournal time.Duration // Journal refresh interval (default: 1h) + / Lifecycle management + Lifetime time.Duration / Transaction retention time (default: 3h) + Journal string / Persistence file (default: "transactions.rlp") + Rejournal time.Duration / Journal refresh interval (default: 1h) - // Local transaction handling - Locals []common.Address // Addresses treated as local (no gas limit) - NoLocals bool // Disable local transaction handling + / Local transaction handling + Locals []common.Address / Addresses treated as local (no gas limit) + NoLocals bool / Disable 
local transaction handling } ``` @@ -370,16 +370,16 @@ type Config struct { ```go CosmosPoolConfig Structure expandable type PriorityNonceMempoolConfig[C comparable] struct { - TxPriority TxPriority[C] // Custom priority calculation function - OnRead func(tx sdk.Tx) // Callback when transaction is read - TxReplacement TxReplacement[C] // Transaction replacement rules + TxPriority TxPriority[C] / Custom priority calculation function + OnRead func(tx sdk.Tx) / Callback when transaction is read + TxReplacement TxReplacement[C] / Transaction replacement rules } -// Priority calculation interface +/ Priority calculation interface type TxPriority[C comparable] struct { - GetTxPriority func(context.Context, sdk.Tx) C // Calculate transaction priority - Compare func(a, b C) int // Compare two priorities - MinValue C // Minimum priority value + GetTxPriority func(context.Context, sdk.Tx) C / Calculate transaction priority + Compare func(a, b C) int / Compare two priorities + MinValue C / Minimum priority value } ``` @@ -389,13 +389,13 @@ type TxPriority[C comparable] struct { **Remove this code:** ```go Remove Pre-built Pools expandable -// Delete these lines from your app.go +/ Delete these lines from your app.go customTxPool := legacypool.New(config, blockchain) customCosmosPool := sdkmempool.NewPriorityMempool(priorityConfig) mempoolConfig := &evmmempool.EVMMempoolConfig{ - TxPool: customTxPool, // ← Remove - CosmosPool: customCosmosPool, // ← Remove + TxPool: customTxPool, / ← Remove + CosmosPool: customCosmosPool, / ← Remove } ``` @@ -403,20 +403,20 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ **Add this code:** ```go Add Configuration Objects expandable -// Replace with configuration-based approach +/ Replace with configuration-based approach mempoolConfig := &evmmempool.EVMMempoolConfig{ - // Convert custom pool settings to config + / Convert custom pool settings to config LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 32, // Previously 
customTxPool.AccountSlots - GlobalSlots: 8192, // Previously customTxPool.GlobalSlots - PriceLimit: 2, // Previously customTxPool.PriceLimit - // ... other settings from your old pool + AccountSlots: 32, / Previously customTxPool.AccountSlots + GlobalSlots: 8192, / Previously customTxPool.GlobalSlots + PriceLimit: 2, / Previously customTxPool.PriceLimit + / ... other settings from your old pool }, CosmosPoolConfig: &sdkmempool.PriorityNonceMempoolConfig[math.Int]{ - TxPriority: priorityConfig.TxPriority, // Reuse existing priority logic + TxPriority: priorityConfig.TxPriority, / Reuse existing priority logic }, AnteHandler: app.GetAnteHandler(), - BlockGasLimit: 100_000_000, // NEW: Must specify + BlockGasLimit: 100_000_000, / NEW: Must specify } ``` @@ -437,8 +437,8 @@ import ( **New Required Parameter:** ```go mempoolConfig := &evmmempool.EVMMempoolConfig{ - BlockGasLimit: 100_000_000, // ← REQUIRED in v0.5.0 (was optional) - AnteHandler: app.GetAnteHandler(), // ← Still required + BlockGasLimit: 100_000_000, / ← REQUIRED in v0.5.0 (was optional) + AnteHandler: app.GetAnteHandler(), / ← Still required } ``` @@ -453,17 +453,17 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ **Where to Add Configuration:** ```go Initialization Location expandable -// In app/app.go, during NewApp() function +/ In app/app.go, during NewApp() function func NewApp(...) *App { - // ... other initialization + / ... 
other initialization - // Configure mempool AFTER ante handler setup + / Configure mempool AFTER ante handler setup if evmtypes.GetChainConfig() != nil { mempoolConfig := &evmmempool.EVMMempoolConfig{ - AnteHandler: app.GetAnteHandler(), // Must be set first + AnteHandler: app.GetAnteHandler(), / Must be set first BlockGasLimit: 100_000_000, - MinTip: uint256.NewInt(1000000000), // 1 gwei minimum tip - // Add custom configs here + MinTip: uint256.NewInt(1000000000), / 1 gwei minimum tip + / Add custom configs here } app.EVMMempool = evmmempool.NewExperimentalEVMMempool( @@ -476,7 +476,7 @@ func NewApp(...) *App { mempoolConfig, ) - // Set as app mempool + / Set as app mempool app.SetMempool(app.EVMMempool) } @@ -488,36 +488,36 @@ func NewApp(...) *App { #### High-Throughput Chains ```go -// Optimize for maximum transaction volume +/ Optimize for maximum transaction volume LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 64, // More transactions per account - GlobalSlots: 16384, // Higher global capacity - AccountQueue: 256, // Larger queues + AccountSlots: 64, / More transactions per account + GlobalSlots: 16384, / Higher global capacity + AccountQueue: 256, / Larger queues GlobalQueue: 4096, - PriceLimit: 1, // Accept lower gas prices + PriceLimit: 1, / Accept lower gas prices } ``` #### Resource-Constrained Nodes ```go -// Optimize for lower memory usage +/ Optimize for lower memory usage LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 8, // Fewer transactions per account - GlobalSlots: 2048, // Lower global capacity - AccountQueue: 32, // Smaller queues + AccountSlots: 8, / Fewer transactions per account + GlobalSlots: 2048, / Lower global capacity + AccountQueue: 32, / Smaller queues GlobalQueue: 512, - Lifetime: 1*time.Hour, // Shorter retention + Lifetime: 1*time.Hour, / Shorter retention } ``` #### DeFi-Optimized Configuration ```go -// Balance throughput with MEV protection +/ Balance throughput with MEV protection LegacyPoolConfig: 
&legacypool.Config{ AccountSlots: 32, GlobalSlots: 8192, - PriceBump: 25, // Higher replacement threshold - Lifetime: 30*time.Minute, // Faster eviction for MEV + PriceBump: 25, / Higher replacement threshold + Lifetime: 30*time.Minute, / Faster eviction for MEV } ``` diff --git a/docs/evm/next/documentation/concepts/pending-state.mdx b/docs/evm/next/documentation/concepts/pending-state.mdx index fe5aafe8..20574986 100644 --- a/docs/evm/next/documentation/concepts/pending-state.mdx +++ b/docs/evm/next/documentation/concepts/pending-state.mdx @@ -55,8 +55,8 @@ The mempool provides dedicated RPC methods to inspect pending and queued transac Returns counts of pending and queued transactions: ```json { - "pending": "0x10", // 16 pending transactions - "queued": "0x5" // 5 queued transactions + "pending": "0x10", / 16 pending transactions + "queued": "0x5" / 5 queued transactions } ``` @@ -107,12 +107,12 @@ Additionally, the `txpool_*` namespace provides specialized methods for mempool ### Monitoring Transaction Status ```javascript -// Check if transaction is pending +/ Check if transaction is pending const tx = await provider.getTransaction(txHash); if (tx && !tx.blockNumber) { console.log("Transaction is pending"); - // Check pool status + / Check pool status const poolStatus = await provider.send("txpool_status", []); console.log(`Pool has ${poolStatus.pending} pending, ${poolStatus.queued} queued`); } @@ -121,25 +121,25 @@ if (tx && !tx.blockNumber) { ### Handling Nonce Gaps ```javascript -// Send transactions with nonce gaps (they'll be queued) -await wallet.sendTransaction({nonce: 100, ...}); // Executes immediately -await wallet.sendTransaction({nonce: 102, ...}); // Queued (gap at 101) -await wallet.sendTransaction({nonce: 101, ...}); // Fills gap, both execute +/ Send transactions with nonce gaps (they'll be queued) +await wallet.sendTransaction({nonce: 100, ...}); / Executes immediately +await wallet.sendTransaction({nonce: 102, ...}); / Queued (gap at 101) 
+await wallet.sendTransaction({nonce: 101, ...}); / Fills gap, both execute ``` ### Transaction Replacement ```javascript -// Speed up a pending transaction +/ Speed up a pending transaction const originalTx = await wallet.sendTransaction({ nonce: 100, gasPrice: parseUnits("20", "gwei") }); -// Replace with higher fee +/ Replace with higher fee const fasterTx = await wallet.sendTransaction({ - nonce: 100, // Same nonce - gasPrice: parseUnits("30", "gwei") // Higher fee + nonce: 100, / Same nonce + gasPrice: parseUnits("30", "gwei") / Higher fee }); ``` diff --git a/docs/evm/next/documentation/concepts/performance.mdx b/docs/evm/next/documentation/concepts/performance.mdx index 651db7ea..9e3ce6e2 100644 --- a/docs/evm/next/documentation/concepts/performance.mdx +++ b/docs/evm/next/documentation/concepts/performance.mdx @@ -17,19 +17,19 @@ v0.5.0 introduces intelligent gas estimation that dramatically improves performa **Performance Improvement:** ~90% faster gas estimation for simple ETH transfers ```solidity -// These transactions now estimate gas instantly (21000) -contract.call{value: 1 ether}(""); // Empty call to EOA -payable(recipient).transfer(amount); // Direct transfer +/ These transactions now estimate gas instantly (21000) +recipient.call{value: 1 ether}(""); / Empty-calldata value transfer to an EOA +payable(recipient).transfer(amount); / Direct transfer ``` **Implementation Details:** ```go -// Short-circuit logic for plain transfers -// Source: x/vm/keeper/grpc_query.go:414-422 +/ Short-circuit logic for plain transfers +/ Source: x/vm/keeper/grpc_query.go:414-422 if len(msg.Data) == 0 && msg.To != nil { acct := k.GetAccountWithoutBalance(ctx, *msg.To) if acct == nil || !acct.IsContract() { - // Return 21000 immediately without full execution + / Return 21000 immediately without full execution return &types.EstimateGasResponse{Gas: ethparams.TxGas}, nil } } @@ -40,15 +40,15 @@ if len(msg.Data) == 0 && msg.To != nil { **Performance Improvement:** Faster estimation for complex
transactions through better initial bounds ```go -// Uses actual execution results to optimize binary search -// Source: x/vm/keeper/grpc_query.go:450-452 +/ Uses actual execution results to optimize binary search +/ Source: x/vm/keeper/grpc_query.go:450-452 optimisticGasLimit := (result.MaxUsedGas + ethparams.CallStipend) * 64 / 63 -// Binary search starts with smarter bounds instead of blind search +/ Binary search starts with smarter bounds instead of blind search if optimisticGasLimit < hi { - // Test optimistic limit first + / Test optimistic limit first if !failed { - hi = optimisticGasLimit // Much faster convergence + hi = optimisticGasLimit / Much faster convergence } } ``` @@ -71,17 +71,17 @@ if optimisticGasLimit < hi { **What Changed:** Eliminated unnecessary EVM instance creation during balance checks ```go -// Before (v0.4.x): Created full EVM instance for balance checks +/ Before (v0.4.x): Created full EVM instance for balance checks func (d CanTransferDecorator) AnteHandle(...) { evm := d.evmKeeper.NewEVM(ctx, msg, cfg, tracer, stateDB) balance := evm.StateDB.GetBalance(msg.From) - // Heavy operation for simple balance check + / Heavy operation for simple balance check } -// After (v0.5.0): Direct keeper method +/ After (v0.5.0): Direct keeper method func CanTransfer(ctx sdk.Context, evmKeeper EVMKeeper, msg core.Message, ...) error { balance := evmKeeper.GetAccount(ctx, msg.From).Balance - // Lightweight direct access + / Lightweight direct access } ``` @@ -95,14 +95,14 @@ func CanTransfer(ctx sdk.Context, evmKeeper EVMKeeper, msg core.Message, ...) 
er **EIP Compliance:** Implements proper storage empty checks per Ethereum standards ```go -// Optimized storage operations with empty checks +/ Optimized storage operations with empty checks func (k Keeper) SetState(ctx sdk.Context, addr common.Address, key, value common.Hash) { if value == (common.Hash{}) { - // Optimized path for storage deletion + / Optimized path for storage deletion k.deleteState(ctx, addr, key) return } - // Standard storage set + / Standard storage set k.setState(ctx, addr, key, value) } ``` @@ -116,17 +116,17 @@ v0.5.0's configurable mempool enables fine-tuning for different performance requ #### High-Throughput Configuration ```go -// Optimized for maximum transaction volume +/ Optimized for maximum transaction volume mempoolConfig := &evmmempool.EVMMempoolConfig{ LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 64, // 4x more transactions per account - GlobalSlots: 16384, // 4x more pending transactions - AccountQueue: 256, // 4x larger queues + AccountSlots: 64, / 4x more transactions per account + GlobalSlots: 16384, / 4x more pending transactions + AccountQueue: 256, / 4x larger queues GlobalQueue: 4096, - PriceLimit: 1, // Accept lowest gas prices - Lifetime: 6*time.Hour, // Longer retention + PriceLimit: 1, / Accept lowest gas prices + Lifetime: 6*time.Hour, / Longer retention }, - BlockGasLimit: 200_000_000, // Higher gas limit + BlockGasLimit: 200_000_000, / Higher gas limit } ``` @@ -138,15 +138,15 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ #### Low-Latency Configuration ```go -// Optimized for fastest transaction processing +/ Optimized for fastest transaction processing mempoolConfig := &evmmempool.EVMMempoolConfig{ LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 4, // Faster iteration - GlobalSlots: 1024, // Smaller pools - AccountQueue: 16, // Quick processing + AccountSlots: 4, / Faster iteration + GlobalSlots: 1024, / Smaller pools + AccountQueue: 16, / Quick processing GlobalQueue: 256, - PriceLimit: 10, 
// Higher minimum price - Lifetime: 30*time.Minute, // Faster eviction + PriceLimit: 10, / Higher minimum price + Lifetime: 30*time.Minute, / Faster eviction }, } ``` @@ -161,30 +161,30 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ **Performance-Aware Priority:** Optimize transaction ordering for specific use cases ```go Custom Priority Functions expandable -// MEV-resistant priority function +/ MEV-resistant priority function func mevResistantPriority(goCtx context.Context, tx sdk.Tx) math.Int { - // Prioritize based on transaction age instead of just fees + / Prioritize based on transaction age instead of just fees feeTx, ok := tx.(sdk.FeeTx) if !ok { return math.ZeroInt() } basePriority := calculateFeePriority(feeTx) - agePriority := calculateAgePriority(tx) // Factor in submission time + agePriority := calculateAgePriority(tx) / Factor in submission time - // Balanced priority prevents MEV front-running + / Balanced priority prevents MEV front-running return basePriority.Add(agePriority) } -// High-frequency trading priority +/ High-frequency trading priority func hftPriority(goCtx context.Context, tx sdk.Tx) math.Int { - // Custom priority for time-sensitive protocols + / Custom priority for time-sensitive protocols for _, msg := range tx.GetMsgs() { if isArbitrageTx(msg) { - return math.NewInt(1000000) // Highest priority + return math.NewInt(1000000) / Highest priority } if isLiquidationTx(msg) { - return math.NewInt(500000) // High priority + return math.NewInt(500000) / High priority } } return standardPriority(tx) @@ -198,8 +198,8 @@ func hftPriority(goCtx context.Context, tx sdk.Tx) math.Int { **What Changed:** Improved block notification timing prevents funding errors ```go -// Before: Block notifications could arrive too late -// After: Optimized timing ensures mempool updates before transaction processing +/ Before: Block notifications could arrive too late +/ After: Optimized timing ensures mempool updates before transaction processing ``` 
**Benefits:** @@ -212,13 +212,13 @@ func hftPriority(goCtx context.Context, tx sdk.Tx) math.Int { **EVM Log Optimization:** EVM logs no longer emitted to cosmos-sdk events (commit fedc27f6) ```go -// Before: Double emission increased processing overhead -// 1. EVM logs stored in transient state -// 2. EVM logs also emitted as Cosmos events +/ Before: Double emission increased processing overhead +/ 1. EVM logs stored in transient state +/ 2. EVM logs also emitted as Cosmos events -// After: Single emission path -// 1. EVM logs stored in transient state only -// 2. No duplicate Cosmos event emission +/ After: Single emission path +/ 1. EVM logs stored in transient state only +/ 2. No duplicate Cosmos event emission ``` **Performance Impact:** @@ -262,14 +262,14 @@ max-open-connections = 50 **New RPC Method:** `eth_createAccessList` enables transaction cost optimization ```javascript -// Generate access list for gas optimization +/ Generate access list for gas optimization const accessList = await provider.send("eth_createAccessList", [{ to: contractAddress, data: contractInterface.encodeFunctionData("complexFunction", [arg1, arg2]), gasPrice: "0x" + gasPrice.toString(16) }, "latest"]); -// Use access list in transaction for reduced gas costs +/ Use access list in transaction for reduced gas costs const tx = await contract.complexFunction(arg1, arg2, { accessList: accessList.accessList, gasLimit: accessList.gasUsed @@ -285,58 +285,58 @@ const tx = await contract.complexFunction(arg1, arg2, { #### DeFi-Focused Chain ```go DeFi Chain Configuration expandable -// Optimized for DeFi protocols +/ Optimized for DeFi protocols mempoolConfig := &evmmempool.EVMMempoolConfig{ LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 32, // Higher per-account limit - PriceBump: 25, // Higher replacement threshold (MEV protection) - Lifetime: 30*time.Minute, // Fast eviction + AccountSlots: 32, / Higher per-account limit + PriceBump: 25, / Higher replacement threshold (MEV 
protection) + Lifetime: 30*time.Minute, / Fast eviction }, - BlockGasLimit: 150_000_000, // Optimized for complex contracts + BlockGasLimit: 150_000_000, / Optimized for complex contracts } -// VM parameters +/ VM parameters { - "history_serve_window": 4096, // Balanced for flash loan protocols + "history_serve_window": 4096, / Balanced for flash loan protocols } ``` #### Gaming Chain ```go Gaming Chain Configuration expandable -// Optimized for high transaction volume, low complexity +/ Optimized for high transaction volume, low complexity mempoolConfig := &evmmempool.EVMMempoolConfig{ LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 128, // Many small transactions per player - GlobalSlots: 32768, // Very high global capacity - PriceLimit: 1, // Accept very low gas prices - Lifetime: 10*time.Minute, // Fast turnover + AccountSlots: 128, / Many small transactions per player + GlobalSlots: 32768, / Very high global capacity + PriceLimit: 1, / Accept very low gas prices + Lifetime: 10*time.Minute, / Fast turnover }, } -// VM parameters +/ VM parameters { - "history_serve_window": 1024, // Lower storage overhead + "history_serve_window": 1024, / Lower storage overhead } ``` #### Enterprise/Private Chain ```go Enterprise Chain Configuration expandable -// Optimized for predictable, high-value transactions +/ Optimized for predictable, high-value transactions mempoolConfig := &evmmempool.EVMMempoolConfig{ LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 16, // Standard capacity - GlobalSlots: 4096, // Standard global capacity - PriceLimit: 100, // Higher minimum price - PriceBump: 50, // High replacement threshold - Lifetime: 24*time.Hour, // Long retention + AccountSlots: 16, / Standard capacity + GlobalSlots: 4096, / Standard global capacity + PriceLimit: 100, / Higher minimum price + PriceBump: 50, / High replacement threshold + Lifetime: 24*time.Hour, / Long retention }, } -// VM parameters +/ VM parameters { - "history_serve_window": 16384, // Extended 
history for audit trails + "history_serve_window": 16384, / Extended history for audit trails } ``` @@ -435,98 +435,15 @@ watch 'curl -s -X POST \ http://localhost:8545 | jq' ``` -## Troubleshooting Performance Issues - -### Common Performance Problems - -#### Slow Gas Estimation - -**Symptoms:** -- `eth_estimateGas` taking > 100ms consistently -- Timeouts during gas estimation - -**Solutions:** -```toml -# Increase timeout for complex contracts -[json-rpc] -evm-timeout = "10s" - -# Or optimize at chain level -[evm] -max-tx-gas-wanted = 5000000 # Lower ante gas limit -``` - -#### Mempool Congestion - -**Symptoms:** -- Transactions stuck in pending state -- "txpool is full" errors - -**Solutions:** -```go -// Increase mempool capacity -LegacyPoolConfig: &legacypool.Config{ - GlobalSlots: 8192, // 2x standard capacity - AccountSlots: 32, // 2x per-account limit - GlobalQueue: 2048, // 2x queue capacity -} -``` - -#### High Memory Usage - -**Symptoms:** -- Node memory usage consistently high -- Out-of-memory errors during high load - -**Solutions:** -```go -// Reduce mempool memory footprint -LegacyPoolConfig: &legacypool.Config{ - AccountSlots: 8, // Lower per-account limit - GlobalSlots: 2048, // Lower global capacity - Lifetime: 1*time.Hour, // Faster eviction -} -``` - -### Performance Monitoring - -#### Key Metrics to Track - -```bash Key Performance Metrics expandable -# Transaction pool health -txpool_pending_gauge -txpool_queued_gauge -txpool_pending_discard_meter - -# Gas estimation performance -eth_estimateGas_duration_histogram -eth_call_duration_histogram - -# RPC performance -jsonrpc_batch_requests_total -jsonrpc_batch_response_size_histogram -http_request_duration_seconds -``` - -#### Alerting Thresholds - -```yaml -# Recommended alerting thresholds -mempool_pending_transactions: > 1000 -mempool_queued_transactions: > 500 -gas_estimation_p99_latency: > 200ms -rpc_error_rate: > 5% -block_processing_time: > 2s -``` ## Advanced Optimization Techniques ### 
Custom Transaction Lifecycle ```go -// Implement custom broadcast function for performance +/ Implement custom broadcast function for performance BroadcastTxFn: func(txs []*ethtypes.Transaction) error { - // Batch broadcast for efficiency - return batchBroadcastTransactions(txs, batchSize=50) + / Batch broadcast for efficiency, 50 transactions per batch + return batchBroadcastTransactions(txs, 50) } ``` @@ -534,10 +451,10 @@ BroadcastTxFn: func(txs []*ethtypes.Transaction) error { ### State Preloading ```go -// Preload frequently accessed state +/ Preload frequently accessed state func (k Keeper) PreloadState(ctx sdk.Context, addresses []common.Address) { for _, addr := range addresses { - // Warm cache with frequently accessed accounts + / Warm cache with frequently accessed accounts k.GetAccount(ctx, addr) } } @@ -546,7 +463,7 @@ func (k Keeper) PreloadState(ctx sdk.Context, addresses []common.Address) { ### Gas Price Oracle Optimization ```go -// Custom gas price calculation for better estimation +/ Custom gas price calculation for better estimation func optimizedGasPrice(baseFee *big.Int, urgency string) *big.Int { switch urgency { case "immediate": @@ -578,14 +495,14 @@ go test -bench=BenchmarkRPC ./rpc/namespaces/ethereum/eth/ ### Load Testing Scripts ```javascript Load Testing Script expandable -// Ethereum tooling load test +/ Ethereum tooling load test const { ethers } = require("ethers"); async function loadTest() { const provider = new ethers.JsonRpcProvider("http://localhost:8545"); const wallet = new ethers.Wallet(privateKey, provider); - // Send 1000 transactions concurrently + / Send 1000 transactions concurrently const promises = []; for (let i = 0; i < 1000; i++) { promises.push( @@ -610,4 +527,4 @@ async function loadTest() { - **Gas Estimation PR:** [cosmos/evm#538](https://github.com/cosmos/evm/pull/538) - **Mempool Optimization PR:** [cosmos/evm#496](https://github.com/cosmos/evm/pull/496) - **Ante Handler Performance PR:** [cosmos/evm#352](https://github.com/cosmos/evm/pull/352) -- **Related Documentation:**
[Mempool Configuration](./mempool), [VM Module](../cosmos-sdk/modules/vm) +- **Related Documentation:** [Mempool Configuration](/docs/evm/next/documentation/concepts/mempool), [VM Module](/docs/evm/next/documentation/cosmos-sdk/modules/vm) diff --git a/docs/evm/next/documentation/concepts/precision-handling.mdx b/docs/evm/next/documentation/concepts/precision-handling.mdx index 85efd1f4..6bbc5337 100644 --- a/docs/evm/next/documentation/concepts/precision-handling.mdx +++ b/docs/evm/next/documentation/concepts/precision-handling.mdx @@ -230,21 +230,21 @@ This single multiplication and addition reconstructs the full 18-decimal balance ### Simple Scaling ``` -// Naive approach - loses precision +/ Naive approach - loses precision evmAmount = cosmosAmount * 10^12 ``` **Problems**: Loses sub-test amounts, rounding errors accumulate ### Fixed-Point Arithmetic ``` -// Complex fixed-point math +/ Complex fixed-point math amount = FixedPoint{mantissa: 123456, exponent: -18} ``` **Problems**: Complex implementation, performance overhead, compatibility issues ### Separate Balances ``` -// Maintain two separate balance systems +/ Maintain two separate balance systems cosmosBalance: 1000000 test evmBalance: 1000000000000000000 atest ``` @@ -252,7 +252,7 @@ evmBalance: 1000000000000000000 atest ### PreciseBank Solution ``` -// Elegant decomposition +/ Elegant decomposition totalBalance = integerPart * 10^12 + fractionalPart ``` **Advantages**: Simple, efficient, precise, compatible diff --git a/docs/evm/next/documentation/concepts/replay-protection.mdx b/docs/evm/next/documentation/concepts/replay-protection.mdx index 3b439035..11e88e7d 100644 --- a/docs/evm/next/documentation/concepts/replay-protection.mdx +++ b/docs/evm/next/documentation/concepts/replay-protection.mdx @@ -56,14 +56,14 @@ If absolutely necessary, unprotected transactions can be allowed through a two-s The EVM module parameter must be changed via governance proposal or genesis configuration: ```go -// In 
x/vm/types/params.go -DefaultAllowUnprotectedTxs = false // Default value +/ In x/vm/types/params.go +DefaultAllowUnprotectedTxs = false / Default value -// To enable in genesis.json +/ To enable in genesis.json { "vm": { "params": { - "allow_unprotected_txs": true // Must be set to true + "allow_unprotected_txs": true / Must be set to true } } } @@ -94,14 +94,14 @@ Both conditions must be met for unprotected transactions to be accepted: All modern Ethereum transaction types include chain ID: ```javascript -// EIP-1559 Dynamic Fee Transaction +/ EIP-1559 Dynamic Fee Transaction const tx = { type: 2, - chainId: 9000, // EVM Chain ID included + chainId: 9000, / EVM Chain ID included nonce: 0, maxFeePerGas: ..., maxPriorityFeePerGas: ..., - // ... other fields + / ... other fields } ``` @@ -110,7 +110,7 @@ const tx = { Pre-EIP-155 transactions without chain ID: ```javascript -// Legacy transaction without chain ID (rejected by default) +/ Legacy transaction without chain ID (rejected by default) const tx = { nonce: 0, gasPrice: ..., @@ -118,7 +118,7 @@ const tx = { to: ..., value: ..., data: ..., - // No chainId field + / No chainId field } ``` diff --git a/docs/evm/next/documentation/concepts/statedb.mdx b/docs/evm/next/documentation/concepts/statedb.mdx new file mode 100644 index 00000000..9e9add8a --- /dev/null +++ b/docs/evm/next/documentation/concepts/statedb.mdx @@ -0,0 +1,252 @@ +--- +title: "StateDB Architecture and Implementation" +description: "Deep dive into the EVM StateDB implementation, caching mechanisms, and implications for application builders" +--- + +# StateDB Architecture and Implementation + +## Overview + +The **StateDB** is a critical in-memory storage layer that manages state changes during a transaction's lifecycle. It ensures that updates are only committed to the underlying Cosmos state if the transaction succeeds entirely. 
It uses a sophisticated caching and journaling mechanism to handle complex execution flows and prevent state inconsistencies. + +This document provides a comprehensive technical overview of the StateDB implementation, its caching system, and practical considerations for application builders and chain integrators. + +--- + +## Core Functionality + +The StateDB acts as an intermediary between the Go Ethereum (Geth) execution environment and the Cosmos SDK's state machine. It provides a crucial bridge that enables EVM compatibility while maintaining Cosmos SDK's transaction semantics and state management principles. + +### Journaling System + +At its core, the StateDB uses a **journal** to track all state modifications made during a transaction, such as balance changes or nonce increments. This log of changes allows the system to undo operations if an error or revert occurs. + +The journal maintains a sequential record of every state modification, including: + +- **Account balance changes** +- **Nonce increments** +- **Code deployments** +- **Storage slot modifications** +- **Account creation and deletion** +- **Refund tracking** + +### Snapshot Mechanism + +The system can take a **snapshot** of the current state at any point during execution. A snapshot is essentially a bookmark of the journal's current index. This is vital for properly implementing EVM features like `try-catch` blocks, as it allows the EVM to revert to a specific "save point" without canceling the entire transaction. + +Key characteristics of snapshots: + +- **Lightweight**: Only stores the journal index, not full state data +- **Stackable**: Multiple snapshots can be taken and reverted in LIFO order +- **Atomic**: Enables partial transaction rollbacks while maintaining consistency + +--- + +## The Cache Stack Solution + +A key challenge in bridging the EVM and Cosmos SDK environments is the potential for **partial reversions**. 
A transaction may succeed in the EVM but subsequently fail during execution in a Cosmos SDK module, leading to an inconsistent state. To solve this, a **cache stack** was implemented to synchronize the EVM's state with the Cosmos SDK's state. + +### Cache Stack Operations + +#### 1. Pushing to the Stack + +When a snapshot is taken, a new **cache context** is created and pushed onto a stack. This involves creating a "multi-snapshot layer" composed of individual snapshots from each relevant module (e.g., bank, staking), which is more memory-efficient than deep copying the entire context. + +The cache stack implementation: + +```go +// Pseudo-code representation +type CacheStack struct { + layers []CacheContext + baseCtx sdk.Context +} + +func (cs *CacheStack) Snapshot() int { + cacheCtx, _ := cs.baseCtx.CacheContext() + cs.layers = append(cs.layers, cacheCtx) + return len(cs.layers) - 1 +} +``` + +#### 2. Popping from the Stack + +If a revert is triggered, layers are popped off the cache stack until the state is restored to the desired snapshot. This process ensures that all state changes made after the snapshot point are discarded. + +#### 3. Synchronization + +This layered approach ensures that if a chunk of EVM execution is reverted, any corresponding state changes made in the Cosmos SDK modules are also reverted, guaranteeing atomic execution across the EVM and Cosmos boundary. + +#### 4. Committing + +If the transaction is successful, the final cache context, containing all the "dirty states" (addresses with uncommitted changes), is written to the underlying keepers. This final commit phase applies all accumulated state changes to the persistent storage. + +--- + +## The Native Balance Handler + +To ensure consistency between the EVM and the native bank module, the system uses a **native balance handler**. Previously, manual journaling of native coin balance changes was required, creating a potential for error. 
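+To make the journal-and-snapshot bookkeeping described above concrete, here is a minimal, self-contained Go sketch. A snapshot is just the journal's current length, and a revert pops undo entries in LIFO order. All names here (`StateDB`, `Set`, `RevertToSnapshot`) are illustrative stand-ins: the real implementation journals typed entries (balance, nonce, storage, code) and layers SDK cache contexts, but the bookkeeping principle is the same:

```go
package main

import "fmt"

// entry records how to undo one state change.
type entry struct {
	key, prev string
	existed   bool
}

// StateDB is a toy model of the journal mechanism: every write
// appends an undo entry before mutating the in-memory state.
type StateDB struct {
	state   map[string]string
	journal []entry
}

func NewStateDB() *StateDB { return &StateDB{state: map[string]string{}} }

func (s *StateDB) Set(key, val string) {
	prev, ok := s.state[key]
	s.journal = append(s.journal, entry{key, prev, ok})
	s.state[key] = val
}

// Snapshot is just a bookmark: the journal's current index.
func (s *StateDB) Snapshot() int { return len(s.journal) }

// RevertToSnapshot undoes journal entries in LIFO order back to the bookmark.
func (s *StateDB) RevertToSnapshot(snap int) {
	for i := len(s.journal) - 1; i >= snap; i-- {
		e := s.journal[i]
		if e.existed {
			s.state[e.key] = e.prev
		} else {
			delete(s.state, e.key)
		}
	}
	s.journal = s.journal[:snap]
}

func main() {
	db := NewStateDB()
	db.Set("balance:alice", "100")
	snap := db.Snapshot() // bookmark before the inner call
	db.Set("balance:alice", "40")
	db.Set("balance:bob", "60")
	db.RevertToSnapshot(snap) // try-catch style partial rollback
	fmt.Println(db.state["balance:alice"]) // → 100
	_, ok := db.state["balance:bob"]
	fmt.Println(ok) // → false (bob's entry undone)
}
```

+Because a snapshot stores only an index rather than a copy of state, taking many nested snapshots is cheap; the cost is paid only on revert, proportional to the number of entries undone.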
+ +### Automated Balance Tracking + +The native balance handler automates this by listening for `coin_spent` and `coin_received` events from the SDK's event system and automatically adding the corresponding balance changes to the StateDB journal. This eliminates the need for manual balance tracking and reduces the likelihood of state inconsistencies. + +Key features: + +- **Event-driven**: Automatically responds to SDK banking events +- **Journal integration**: Balance changes are recorded in the StateDB journal +- **Atomic consistency**: Ensures EVM and native balance changes are synchronized + +### Security Considerations + +While effective, this mechanism's reliance on the event system introduces a degree of fragility; maliciously emitted events could potentially lead to state inconsistencies. Application builders should be aware of this potential attack vector and implement appropriate safeguards. + +Recommended mitigations: + +- **Event validation**: Verify the authenticity and source of banking events +- **Balance reconciliation**: Implement periodic checks to ensure EVM and native balances align +- **Access control**: Restrict which modules can emit balance-related events + +--- + +## Implementation Details + +### State Object Management + +The StateDB manages **state objects** for each account, which contain: + +- **Account information**: Address, nonce, balance, code hash +- **Storage trie**: Key-value storage for contract state +- **Dirty flag**: Indicates whether the object has been modified +- **Suicide flag**: Marks accounts for deletion + +### Memory Management + +The StateDB implementation includes several optimizations for memory efficiency: + +- **Lazy loading**: State objects are only loaded when accessed +- **Copy-on-write**: State objects are copied only when modified +- **Garbage collection**: Unreferenced state objects are cleaned up automatically + +### Transaction Lifecycle Integration + +The StateDB integrates closely with the EVM 
transaction lifecycle: + +1. **Transaction start**: Clean StateDB instance created +2. **Execution**: State changes tracked in journal and cache stack +3. **Success path**: Cache committed to persistent storage +4. **Failure path**: Journal reverted, cache discarded + +--- + +## Implications for Application Builders + +### Development Considerations + +When building applications that interact with the EVM module, developers should understand several key implications: + +#### State Consistency + +- **Atomic transactions**: EVM and Cosmos SDK state changes are atomic +- **Revert handling**: Failed transactions leave no state artifacts +- **Gas estimation**: State changes affect gas consumption calculations + +#### Performance Implications + +- **Memory usage**: Complex transactions with many state changes consume more memory +- **Cache depth**: Deep snapshot stacks can impact performance +- **Commit overhead**: Large dirty state sets require more time to commit + +#### Integration Patterns + +- **Event handling**: Native balance changes trigger automatic journal updates +- **Module interactions**: Other Cosmos SDK modules can observe EVM state changes +- **Cross-chain operations**: StateDB changes are visible to IBC and other protocols + +### Best Practices + +1. **Minimize state changes**: Reduce unnecessary state modifications to improve performance +2. **Handle reverts gracefully**: Design contracts to work correctly with EVM revert semantics +3. **Monitor memory usage**: Be aware of cache stack depth in complex execution flows +4. 
**Validate events**: Implement checks for balance-related events if using native tokens + +--- + +## Chain Integrator Considerations + +### Configuration Options + +Chain integrators can configure several StateDB-related parameters: + +- **Cache size limits**: Maximum memory usage for state caching +- **Journal capacity**: Maximum number of journal entries +- **Snapshot depth**: Maximum cache stack depth + +### Monitoring and Observability + +Key metrics to monitor: + +- **Cache hit rates**: Efficiency of state object caching +- **Journal size**: Memory usage of state change tracking +- **Revert frequency**: Rate of transaction reversions +- **Commit latency**: Time taken to persist state changes + +### Debugging and Troubleshooting + +Common issues and solutions: + +#### State Inconsistencies + +- **Symptom**: EVM and native balances don't match +- **Cause**: Event system manipulation or handler bugs +- **Solution**: Implement balance reconciliation checks + +#### Memory Issues + +- **Symptom**: High memory usage during complex transactions +- **Solution**: Tune cache parameters or optimize transaction patterns + +#### Performance Problems + +- **Symptom**: Slow transaction processing +- **Cause**: Deep cache stacks or large dirty state sets +- **Solution**: Monitor and optimize state access patterns + +--- + +## Advanced Topics + +### Custom State Extensions + +Application builders can extend the StateDB functionality through: + +- **Custom journal entries**: Track application-specific state changes +- **Event handlers**: React to specific state modifications +- **Cache extensions**: Implement custom caching strategies + +### Integration with Other Cosmos SDK Modules + +The StateDB integrates with various Cosmos SDK modules: + +- **Bank module**: Native token transfers and balance management +- **Staking module**: Validator rewards and delegation tracking +- **Governance module**: Proposal execution and voting +- **Authz module**: Authorization state management + 
+### Future Enhancements + +Planned improvements to the StateDB system include: + +- **Optimized memory usage**: Reduced memory footprint for large transactions +- **Enhanced observability**: Better metrics and debugging tools +- **Improved security**: Hardened event handling mechanisms +- **Performance optimizations**: Faster commit and revert operations + +--- + +## Conclusion + +The StateDB represents a sophisticated solution to the challenge of bridging EVM and Cosmos SDK state management. Its journaling, snapshot, and cache stack mechanisms ensure atomic transaction execution while maintaining the performance and flexibility required for production blockchain applications. + +Understanding the StateDB's implementation details is crucial for application builders and chain integrators who want to leverage the full power of the Cosmos EVM while avoiding common pitfalls related to state management and transaction atomicity. + +For developers building on the Cosmos EVM, the StateDB provides a robust foundation that handles the complexities of state synchronization, allowing focus on application logic rather than low-level state management concerns. 
\ No newline at end of file diff --git a/docs/evm/next/documentation/cosmos-sdk/modules/erc20.mdx b/docs/evm/next/documentation/cosmos-sdk/modules/erc20.mdx index 031f2171..f35bfe6e 100644 --- a/docs/evm/next/documentation/cosmos-sdk/modules/erc20.mdx +++ b/docs/evm/next/documentation/cosmos-sdk/modules/erc20.mdx @@ -52,10 +52,10 @@ The module maintains token pair mappings and allowances ([source](https://github ```go type TokenPair struct { - Erc20Address string // Hex address of ERC20 contract - Denom string // Cosmos coin denomination - Enabled bool // Conversion enable status - ContractOwner Owner // OWNER_MODULE or OWNER_EXTERNAL + Erc20Address string / Hex address of ERC20 contract + Denom string / Cosmos coin denomination + Enabled bool / Conversion enable status + ContractOwner Owner / OWNER_MODULE or OWNER_EXTERNAL } ``` @@ -103,9 +103,9 @@ Convert Cosmos coins to ERC20 tokens: ```protobuf message MsgConvertCoin { - Coin coin = 1; // Amount and denom to convert - string receiver = 2; // Hex address to receive ERC20 - string sender = 3; // Bech32 address of sender + Coin coin = 1; / Amount and denom to convert + string receiver = 2; / Hex address to receive ERC20 + string sender = 3; / Bech32 address of sender } ``` @@ -120,10 +120,10 @@ Convert ERC20 tokens to Cosmos coins: ```protobuf message MsgConvertERC20 { - string contract_address = 1; // ERC20 contract - string amount = 2; // Amount to convert - string receiver = 3; // Bech32 address - string sender = 4; // Hex address of sender + string contract_address = 1; / ERC20 contract + string amount = 2; / Amount to convert + string receiver = 3; / Bech32 address + string sender = 4; / Hex address of sender } ``` @@ -138,8 +138,8 @@ Enable/disable conversions for a token pair (governance only): ```protobuf message MsgToggleConversion { - string authority = 1; // Must be governance account - string token = 2; // Address or denom + string authority = 1; / Must be governance account + string token = 2; / 
Address or denom } ``` @@ -149,8 +149,8 @@ Update module parameters (governance only): ```protobuf message MsgUpdateParams { - string authority = 1; // Must be governance account - Params params = 2; // New parameters + string authority = 1; / Must be governance account + Params params = 2; / New parameters } ``` @@ -196,7 +196,7 @@ sequenceDiagram Automatically created for Cosmos coins at deterministic addresses: ```go -// Address derivation +/ Address derivation address = keccak256("erc20|")[:20] ``` @@ -231,20 +231,20 @@ Standard IBC transfer integration ([source](https://github.com/cosmos/evm/blob/v ```go -// Automatic registration and conversion on receive +/ Automatic registration and conversion on receive OnRecvPacket(packet) { - // Auto-register new IBC tokens (with "ibc/" prefix) + / Auto-register new IBC tokens (with "ibc/" prefix) if !tokenPairExists && hasPrefix(denom, "ibc/") { RegisterERC20Extension(denom) } - // Auto-convert to ERC20 for Ethereum addresses + / Auto-convert to ERC20 for Ethereum addresses if isEthereumAddress(receiver) { convertToERC20(voucher, receiver) } } -// Automatic conversion on acknowledgment +/ Automatic conversion on acknowledgment OnAcknowledgementPacket(packet, ack) { if wasConverted { convertBackToCosmos(refund) @@ -295,13 +295,13 @@ Enhanced IBC v2 support ([source](https://github.com/cosmos/evm/blob/v0.4.1/x/er ```protobuf service Query { - // Get module parameters + / Get module parameters rpc Params(QueryParamsRequest) returns (QueryParamsResponse); - // Get all token pairs + / Get all token pairs rpc TokenPairs(QueryTokenPairsRequest) returns (QueryTokenPairsResponse); - // Get specific token pair + / Get specific token pair rpc TokenPair(QueryTokenPairRequest) returns (QueryTokenPairResponse); } ``` @@ -347,14 +347,14 @@ evmd tx gov submit-proposal register-erc20-proposal.json --from mykey ### DeFi Protocol Integration ```javascript -// Using precompiled ERC20 interface for IBC tokens +/ Using precompiled ERC20 
interface for IBC tokens const ibcToken = new ethers.Contract( - "0x80b5a32e4f032b2a058b4f29ec95eefeeb87adcd", // Precompile address + "0x80b5a32e4f032b2a058b4f29ec95eefeeb87adcd", / Precompile address ERC20_ABI, signer ); -// Standard ERC20 operations work seamlessly +/ Standard ERC20 operations work seamlessly await ibcToken.approve(dexRouter, amount); await dexRouter.swapExactTokensForTokens(...); ``` @@ -362,19 +362,19 @@ await dexRouter.swapExactTokensForTokens(...); ### Automatic IBC Conversion ```javascript -// IBC transfer automatically converts to ERC20 +/ IBC transfer automatically converts to ERC20 const packet = { sender: "cosmos1...", - receiver: "0x...", // EVM address triggers conversion + receiver: "0x...", / EVM address triggers conversion token: { denom: "test", amount: "1000000" } }; -// Token arrives as ERC20 in receiver's EVM account +/ Token arrives as ERC20 in receiver's EVM account ``` ### Manual Conversion Flow ```javascript -// Convert native to ERC20 +/ Convert native to ERC20 await client.signAndBroadcast(address, [ { typeUrl: "/cosmos.evm.erc20.v1.MsgConvertCoin", @@ -410,7 +410,7 @@ await client.signAndBroadcast(address, [ 1. 
**Contract Validation** ```solidity - // Verify standard interface + / Verify standard interface IERC20(token).totalSupply(); IERC20(token).balanceOf(address(this)); ``` @@ -425,31 +425,6 @@ await client.signAndBroadcast(address, [ - Toggle specific token pairs - Have incident response plan -## Troubleshooting - -### Common Issues - -| Issue | Cause | Solution | -|-------|-------|----------| -| "token pair not found" | Token not registered | Register via governance | -| "token pair disabled" | Conversions toggled off | Enable via governance | -| "insufficient balance" | Low balance for conversion | Check balance in correct format | -| "invalid recipient" | Wrong address format | Use hex for EVM, bech32 for Cosmos | -| "module disabled" | enable_erc20 = false | Enable via governance | - -### Debug Commands - -```bash -# Check if token is registered -evmd query erc20 token-pair test - -# Check module parameters -evmd query erc20 params - -# Check account balance (both formats) -evmd query bank balances cosmos1... -evmd query vm balance 0x... 
-``` ## References diff --git a/docs/evm/next/documentation/cosmos-sdk/modules/feemarket.mdx b/docs/evm/next/documentation/cosmos-sdk/modules/feemarket.mdx index 9a8cbe53..81e58201 100644 --- a/docs/evm/next/documentation/cosmos-sdk/modules/feemarket.mdx +++ b/docs/evm/next/documentation/cosmos-sdk/modules/feemarket.mdx @@ -86,43 +86,43 @@ Adjust parameters to achieve specific network behaviors ([params validation](htt ```json Ethereum-Compatible expandable { "no_base_fee": false, - "base_fee_change_denominator": 8, // ±12.5% max per block - "elasticity_multiplier": 2, // 50% target utilization - "enable_height": 0, // Active from genesis - "base_fee": "1000000000000000000000", // Starting base fee - "min_gas_price": "1000000000000000000", // Price floor - "min_gas_multiplier": "500000000000000000" // 0.5 (anti-manipulation) + "base_fee_change_denominator": 8, / ±12.5% max per block + "elasticity_multiplier": 2, / 50% target utilization + "enable_height": 0, / Active from genesis + "base_fee": "1000000000000000000000", / Starting base fee + "min_gas_price": "1000000000000000000", / Price floor + "min_gas_multiplier": "500000000000000000" / 0.5 (anti-manipulation) } -// Behavior: Standard Ethereum, balanced for general use -// 100k gas tx: ~0.1 tokens at base, up to 10x during congestion +/ Behavior: Standard Ethereum, balanced for general use +/ 100k gas tx: ~0.1 tokens at base, up to 10x during congestion ``` ```json High-Volume-Stable expandable { "no_base_fee": false, - "base_fee_change_denominator": 16, // ±6.25% max per block - "elasticity_multiplier": 2, // 50% target utilization + "base_fee_change_denominator": 16, / ±6.25% max per block + "elasticity_multiplier": 2, / 50% target utilization "enable_height": 0, - "base_fee": "100000000000000000000", // Lower starting fee - "min_gas_price": "10000000000000000", // Lower floor + "base_fee": "100000000000000000000", / Lower starting fee + "min_gas_price": "10000000000000000", / Lower floor "min_gas_multiplier": 
"500000000000000000" } -// Behavior: Slow, predictable changes for DeFi/DEX chains -// 100k gas tx: ~0.01 tokens, gradual increases +/ Behavior: Slow, predictable changes for DeFi/DEX chains +/ 100k gas tx: ~0.01 tokens, gradual increases ``` ```json Aggressive-Anti-Spam expandable { "no_base_fee": false, - "base_fee_change_denominator": 4, // ±25% max per block - "elasticity_multiplier": 4, // 25% target utilization + "base_fee_change_denominator": 4, / ±25% max per block + "elasticity_multiplier": 4, / 25% target utilization "enable_height": 0, - "base_fee": "10000000000000000000000", // High starting fee - "min_gas_price": "10000000000000000000000", // High floor - "min_gas_multiplier": "1000000000000000000" // 1.0 (strict) + "base_fee": "10000000000000000000000", / High starting fee + "min_gas_price": "10000000000000000000000", / High floor + "min_gas_multiplier": "1000000000000000000" / 1.0 (strict) } -// Behavior: Rapid adjustments, high minimum for premium chains -// 100k gas tx: ~1 token minimum, aggressive scaling +/ Behavior: Rapid adjustments, high minimum for premium chains +/ 100k gas tx: ~1 token minimum, aggressive scaling ``` @@ -161,8 +161,8 @@ The module supports different token decimal configurations: Parameters can be updated through governance proposals ([msg server](https://github.com/cosmos/evm/blob/v0.4.1/x/feemarket/keeper/msg_server.go)): ```json Governance Parameter Update expandable -// MsgUpdateParams structure -// Source: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/tx.proto +/ MsgUpdateParams structure +/ Source: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/tx.proto { "@type": "/cosmos.evm.feemarket.v1.MsgUpdateParams", "authority": "cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn", @@ -226,15 +226,15 @@ Examples: ```javascript Wallet-Integration expandable -// Get current base fee from latest block +/ Get current base fee from latest block const block = await provider.getBlock('latest'); const 
baseFee = block.baseFeePerGas; -// Add buffer for inclusion certainty (20% buffer) +/ Add buffer for inclusion certainty (20% buffer) const maxFeePerGas = baseFee * 120n / 100n; const maxPriorityFeePerGas = ethers.parseUnits("2", "gwei"); -// Send EIP-1559 transaction +/ Send EIP-1559 transaction const tx = await wallet.sendTransaction({ type: 2, maxFeePerGas, @@ -246,24 +246,24 @@ const tx = await wallet.sendTransaction({ ``` ```javascript Fee-Monitoring expandable -// Get current gas price (includes base fee) +/ Get current gas price (includes base fee) const gasPrice = await provider.send('eth_gasPrice', []); console.log('Current gas price:', ethers.formatUnits(gasPrice, 'gwei'), 'gwei'); -// Get fee history +/ Get fee history const feeHistory = await provider.send('eth_feeHistory', [ - '0x10', // last 16 blocks + '0x10', / last 16 blocks 'latest', [25, 50, 75] ]); -// Process base fees from history +/ Process base fees from history const baseFees = feeHistory.baseFeePerGas.map(fee => ethers.formatUnits(fee, 'gwei') ); console.log('Base fee history:', baseFees); -// Check if fees are elevated +/ Check if fees are elevated const currentBaseFee = BigInt(feeHistory.baseFeePerGas[feeHistory.baseFeePerGas.length - 1]); const threshold = ethers.parseUnits('100', 'gwei'); const isHighFee = currentBaseFee > threshold; @@ -281,11 +281,11 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend] Returns the current base fee as hex string. In EIP-1559 mode, returns the base fee; in legacy mode, returns the median gas price. ```json - // Request + / Request {"jsonrpc":"2.0","method":"eth_gasPrice","params":[],"id":1} - // Response - {"jsonrpc":"2.0","id":1,"result":"0x3b9aca00"} // 1 gwei in hex + / Response + {"jsonrpc":"2.0","id":1,"result":"0x3b9aca00"} / 1 gwei in hex ``` @@ -293,11 +293,11 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend] Returns suggested priority fee (tip) per gas. 
In cosmos/evm with the experimental mempool, transactions are prioritized by effective gas tip (fees), with ties broken by arrival time. ```json - // Request + / Request {"jsonrpc":"2.0","method":"eth_maxPriorityFeePerGas","params":[],"id":1} - // Response - {"jsonrpc":"2.0","id":1,"result":"0x77359400"} // Suggested tip in wei + / Response + {"jsonrpc":"2.0","id":1,"result":"0x77359400"} / Suggested tip in wei ``` @@ -319,7 +319,7 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend] ```json eth_feeHistory Response expandable - // Request + / Request { "jsonrpc":"2.0", "method":"eth_feeHistory", @@ -327,7 +327,7 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend] "id":1 } - // Response + / Response { "jsonrpc":"2.0", "id":1, @@ -363,8 +363,8 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend] "result":{ "number": "0x1234", "hash": "0x...", - "baseFeePerGas": "0x3b9aca00", // Base fee for this block - // ... other block fields + "baseFeePerGas": "0x3b9aca00", / Base fee for this block + / ... 
other block fields } } ``` @@ -379,7 +379,7 @@ The fee market supports both legacy and EIP-1559 transaction types ([types](http ```json Legacy-Transaction { "type": "0x0", - "gasPrice": "0x3b9aca00", // Must be >= current base fee + "gasPrice": "0x3b9aca00", / Must be >= current base fee "gas": "0x5208", "to": "0x...", "value": "0x...", @@ -390,8 +390,8 @@ The fee market supports both legacy and EIP-1559 transaction types ([types](http ```json EIP-1559-Transaction { "type": "0x2", - "maxFeePerGas": "0x3b9aca00", // Maximum total fee willing to pay - "maxPriorityFeePerGas": "0x77359400", // Tip that affects transaction priority in mempool + "maxFeePerGas": "0x3b9aca00", / Maximum total fee willing to pay + "maxPriorityFeePerGas": "0x77359400", / Tip that affects transaction priority in mempool "gas": "0x5208", "to": "0x...", "value": "0x...", @@ -430,11 +430,11 @@ evmd query feemarket block-gas ```proto Query-Parameters expandable -// Request all module parameters -// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto +/ Request all module parameters +/ Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto grpcurl -plaintext localhost:9090 feemarket.Query/Params -// Response +/ Response { "params": { "no_base_fee": false, @@ -449,22 +449,22 @@ grpcurl -plaintext localhost:9090 feemarket.Query/Params ``` ```proto Query-Base-Fee -// Request current base fee -// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto +/ Request current base fee +/ Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto grpcurl -plaintext localhost:9090 feemarket.Query/BaseFee -// Response +/ Response { "base_fee": "1000000000000000000000" } ``` ```proto Query-Block-Gas -// Request block gas wanted -// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto +/ Request block gas wanted +/ Proto: 
github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto grpcurl -plaintext localhost:9090 feemarket.Query/BlockGas -// Response +/ Response { "gas": "5000000" } @@ -481,7 +481,7 @@ grpcurl -plaintext localhost:9090 feemarket.Query/BlockGas # Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto GET /cosmos/feemarket/v1/params -// Response +/ Response { "params": { "no_base_fee": false, @@ -500,7 +500,7 @@ GET /cosmos/feemarket/v1/params # Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto GET /cosmos/feemarket/v1/base_fee -// Response +/ Response { "base_fee": "1000000000000000000000" } @@ -511,7 +511,7 @@ GET /cosmos/feemarket/v1/base_fee # Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto GET /cosmos/feemarket/v1/block_gas -// Response +/ Response { "gas": "5000000" } @@ -550,15 +550,15 @@ At the beginning of each block, the module calculates and sets the new base fee func (k Keeper) BeginBlock(ctx sdk.Context) error { baseFee := k.CalculateBaseFee(ctx) - // Skip if base fee is nil (not enabled) + / Skip if base fee is nil (not enabled) if baseFee.IsNil() { return nil } - // Update the base fee in state + / Update the base fee in state k.SetBaseFee(ctx, baseFee) - // Emit event with new base fee + / Emit event with new base fee ctx.EventManager().EmitEvent( sdk.NewEvent( "fee_market", @@ -578,13 +578,13 @@ func (k Keeper) EndBlock(ctx sdk.Context) error { gasWanted := k.GetTransientGasWanted(ctx) gasUsed := ctx.BlockGasMeter().GasConsumedToLimit() - // Apply MinGasMultiplier to prevent manipulation - // gasWanted = max(gasWanted * MinGasMultiplier, gasUsed) + / Apply MinGasMultiplier to prevent manipulation + / gasWanted = max(gasWanted * MinGasMultiplier, gasUsed) minGasMultiplier := k.GetParams(ctx).MinGasMultiplier limitedGasWanted := NewDec(gasWanted).Mul(minGasMultiplier) updatedGasWanted := MaxDec(limitedGasWanted, NewDec(gasUsed)).TruncateInt() 
- // Store for next block's base fee calculation + / Store for next block's base fee calculation k.SetBlockGasWanted(ctx, updatedGasWanted) ctx.EventManager().EmitEvent( @@ -606,21 +606,21 @@ The calculation follows the EIP-1559 specification with Cosmos SDK adaptations ( ```go func CalcGasBaseFee(gasUsed, gasTarget, baseFeeChangeDenom uint64, baseFee, minUnitGas, minGasPrice LegacyDec) LegacyDec { - // No change if at target + / No change if at target if gasUsed == gasTarget { return baseFee } - // Calculate adjustment magnitude + / Calculate adjustment magnitude delta := abs(gasUsed - gasTarget) adjustment := baseFee.Mul(delta).Quo(gasTarget).Quo(baseFeeChangeDenom) if gasUsed > gasTarget { - // Increase base fee (minimum 1 unit) + / Increase base fee (minimum 1 unit) baseFeeDelta := MaxDec(adjustment, minUnitGas) return baseFee.Add(baseFeeDelta) } else { - // Decrease base fee (not below minGasPrice) + / Decrease base fee (not below minGasPrice) return MaxDec(baseFee.Sub(adjustment), minGasPrice) } } @@ -678,7 +678,7 @@ The primary ante handler that validates and processes EVM transaction fees ([mon ```go -// 1. Check mempool minimum fee (for pre-London transactions) +/ 1. Check mempool minimum fee (for pre-London transactions) if ctx.IsCheckTx() && !simulate && !isLondon { requiredFee := mempoolMinGasPrice.Mul(gasLimit) if fee.LT(requiredFee) { @@ -686,13 +686,13 @@ if ctx.IsCheckTx() && !simulate && !isLondon { } } -// 2. Calculate effective fee for EIP-1559 transactions +/ 2. Calculate effective fee for EIP-1559 transactions if ethTx.Type() >= DynamicFeeTxType && baseFee != nil { feeAmt = ethMsg.GetEffectiveFee(baseFee) fee = NewDecFromBigInt(feeAmt) } -// 3. Check global minimum gas price +/ 3. Check global minimum gas price if globalMinGasPrice.IsPositive() { requiredFee := globalMinGasPrice.Mul(gasLimit) if fee.LT(requiredFee) { @@ -700,7 +700,7 @@ if globalMinGasPrice.IsPositive() { } } -// 4. Track gas wanted for base fee calculation +/ 4. 
Track gas wanted for base fee calculation if isLondon && baseFeeEnabled { feeMarketKeeper.AddTransientGasWanted(ctx, gasWanted) } @@ -716,7 +716,7 @@ Validates transaction fees against the local mempool minimum ([source](https://g ```go func CheckMempoolFee(fee, mempoolMinGasPrice, gasLimit LegacyDec, isLondon bool) error { - // Skip for London-enabled chains (EIP-1559 handles this) + / Skip for London-enabled chains (EIP-1559 handles this) if isLondon { return nil } @@ -770,12 +770,12 @@ func CheckGasWanted(ctx Context, feeMarketKeeper FeeMarketKeeper, tx Tx, isLondo gasWanted := tx.GetGas() blockGasLimit := BlockGasLimit(ctx) - // Reject if tx exceeds block limit + / Reject if tx exceeds block limit if gasWanted > blockGasLimit { return ErrOutOfGas } - // Add to cumulative gas wanted for base fee adjustment + / Add to cumulative gas wanted for base fee adjustment feeMarketKeeper.AddTransientGasWanted(ctx, gasWanted) return nil } diff --git a/docs/evm/next/documentation/cosmos-sdk/modules/precisebank.mdx b/docs/evm/next/documentation/cosmos-sdk/modules/precisebank.mdx index a936da79..06d02bf2 100644 --- a/docs/evm/next/documentation/cosmos-sdk/modules/precisebank.mdx +++ b/docs/evm/next/documentation/cosmos-sdk/modules/precisebank.mdx @@ -49,16 +49,16 @@ The module provides a bank-compatible keeper ([source](https://github.com/cosmos ```go type Keeper interface { - // Query methods + / Query methods GetBalance(ctx, addr, denom) sdk.Coin SpendableCoins(ctx, addr) sdk.Coins - // Transfer methods + / Transfer methods SendCoins(ctx, from, to, amt) error SendCoinsFromModuleToAccount(ctx, module, to, amt) error SendCoinsFromAccountToModule(ctx, from, module, amt) error - // Mint/Burn methods + / Mint/Burn methods MintCoins(ctx, module, amt) error BurnCoins(ctx, module, amt) error } @@ -78,9 +78,9 @@ Automatic handling of "atest" denomination: Handles both integer and fractional components: ```go -// SendCoins automatically handles precision +/ SendCoins automatically 
handles precision keeper.SendCoins(ctx, from, to, sdk.NewCoins( - sdk.NewCoin("atest", sdk.NewInt(1500000000000)), // 0.0015 test + sdk.NewCoin("atest", sdk.NewInt(1500000000000)), / 0.0015 test )) ``` @@ -95,9 +95,9 @@ keeper.SendCoins(ctx, from, to, sdk.NewCoins( Creates new tokens with proper backing: ```go -// MintCoins with extended precision +/ MintCoins with extended precision keeper.MintCoins(ctx, moduleName, sdk.NewCoins( - sdk.NewCoin("atest", sdk.NewInt(1000000000000000000)), // 1 test + sdk.NewCoin("atest", sdk.NewInt(1000000000000000000)), / 1 test )) ``` @@ -111,9 +111,9 @@ keeper.MintCoins(ctx, moduleName, sdk.NewCoins( Removes tokens from circulation: ```go -// BurnCoins with extended precision +/ BurnCoins with extended precision keeper.BurnCoins(ctx, moduleName, sdk.NewCoins( - sdk.NewCoin("atest", sdk.NewInt(500000000000)), // 0.0005 test + sdk.NewCoin("atest", sdk.NewInt(500000000000)), / 0.0005 test )) ``` @@ -147,15 +147,15 @@ Standard bank events with extended precision amounts: ```protobuf service Query { - // Get total of all fractional balances + / Get total of all fractional balances rpc TotalFractionalBalances(QueryTotalFractionalBalancesRequest) returns (QueryTotalFractionalBalancesResponse); - // Get current remainder amount + / Get current remainder amount rpc Remainder(QueryRemainderRequest) returns (QueryRemainderResponse); - // Get fractional balance for an account + / Get fractional balance for an account rpc FractionalBalance(QueryFractionalBalanceRequest) returns (QueryFractionalBalanceResponse); } @@ -199,8 +199,8 @@ Replace bank keeper with precisebank keeper in app.go: ```go app.EvmKeeper = evmkeeper.NewKeeper( - app.PrecisebankKeeper, // Instead of app.BankKeeper - // ... other parameters + app.PrecisebankKeeper, / Instead of app.BankKeeper + / ... 
other parameters ) ``` @@ -209,10 +209,10 @@ app.EvmKeeper = evmkeeper.NewKeeper( Query extended balances through standard interface: ```go -// Automatically handles atest denomination +/ Automatically handles atest denomination balance := keeper.GetBalance(ctx, addr, "atest") -// Transfer with 18 decimal precision +/ Transfer with 18 decimal precision err := keeper.SendCoins(ctx, from, to, sdk.NewCoins(sdk.NewCoin("atest", amount))) ``` @@ -258,16 +258,16 @@ Critical invariants maintained by the module: 2. **Migration Path** ```go - // Deploy in passive mode + / Deploy in passive mode app.PrecisebankKeeper = precisebankkeeper.NewKeeper( app.BankKeeper, - // Fractional balances start at zero + / Fractional balances start at zero ) ``` 3. **Testing** ```go - // Verify fractional operations + / Verify fractional operations suite.Require().Equal( expectedFractional, keeper.GetFractionalBalance(ctx, addr), @@ -278,7 +278,7 @@ Critical invariants maintained by the module: 1. **Balance Queries** ```javascript - // Query in atest (18 decimals) + / Query in atest (18 decimals) const balance = await queryClient.precisebank.fractionalBalance({ address: "cosmos1..." }); @@ -286,7 +286,7 @@ Critical invariants maintained by the module: 2. 
**Precision Handling** ```javascript - // Convert between precisions + / Convert between precisions const testAmount = atestAmount / BigInt(10**12); const atestAmount = testAmount * BigInt(10**12); ``` @@ -325,28 +325,6 @@ Critical invariants maintained by the module: - Sparse storage (only non-zero fractions stored) - Batch operations maintain efficiency -## Troubleshooting - -### Common Issues - -| Issue | Cause | Solution | -|-------|-------|----------| -| "fractional overflow" | Fractional > 10^12 | Check calculation logic | -| "insufficient balance" | Including fractional | Verify full atest balance | -| "invariant violation" | Supply mismatch | Audit reserve and remainder | - -### Validation Commands - -```bash -# Verify module invariants -evmd query precisebank total-fractional-balances -evmd query precisebank remainder - -# Check specific account -evmd query bank balances cosmos1... --denom test -evmd query precisebank fractional-balance cosmos1... -``` - ## References ### Source Code diff --git a/docs/evm/next/documentation/cosmos-sdk/modules/vm.mdx b/docs/evm/next/documentation/cosmos-sdk/modules/vm.mdx index 631da81c..7650eba9 100644 --- a/docs/evm/next/documentation/cosmos-sdk/modules/vm.mdx +++ b/docs/evm/next/documentation/cosmos-sdk/modules/vm.mdx @@ -61,19 +61,19 @@ Controls EIP-2935 historical block hash storage depth: **Configuration Examples:** ```json { - "history_serve_window": 8192 // Default: full EIP-2935 compatibility + "history_serve_window": 8192 / Default: full EIP-2935 compatibility } ``` ```json { - "history_serve_window": 1024 // Performance optimized: reduced storage + "history_serve_window": 1024 / Performance optimized: reduced storage } ``` ```json { - "history_serve_window": 16384 // Extended history: maximum compatibility + "history_serve_window": 16384 / Extended history: maximum compatibility } ``` @@ -81,7 +81,7 @@ Controls EIP-2935 historical block hash storage depth: ```solidity contract HistoryExample { function 
getRecentBlockHash(uint256 blockNumber) public view returns (bytes32) { - // Works for blocks within history_serve_window range + / Works for blocks within history_serve_window range return blockhash(blockNumber); } } @@ -93,8 +93,8 @@ Granular control over contract operations: ```go type AccessControl struct { - Create AccessControlType // Contract deployment - Call AccessControlType // Contract execution + Create AccessControlType / Contract deployment + Call AccessControlType / Contract execution } ``` @@ -223,9 +223,9 @@ Primary message for EVM transactions ([source](https://github.com/cosmos/evm/blo ```protobuf message MsgEthereumTx { - google.protobuf.Any data = 1; // Transaction data (Legacy/AccessList/DynamicFee) - string hash = 3; // Transaction hash - string from = 4; // Sender address (derived from signature) + google.protobuf.Any data = 1; / Transaction data (Legacy/AccessList/DynamicFee) + string hash = 3; / Transaction hash + string from = 4; / Sender address (derived from signature) } ``` @@ -241,8 +241,8 @@ Governance message for parameter updates: ```protobuf message MsgUpdateParams { - string authority = 1; // Must be governance module - Params params = 2; // New parameters + string authority = 1; / Must be governance module + Params params = 2; / New parameters } ``` @@ -289,29 +289,29 @@ message MsgUpdateParams { ```protobuf service Query { - // Account queries + / Account queries rpc Account(QueryAccountRequest) returns (QueryAccountResponse); rpc CosmosAccount(QueryCosmosAccountRequest) returns (QueryCosmosAccountResponse); rpc ValidatorAccount(QueryValidatorAccountRequest) returns (QueryValidatorAccountResponse); - // State queries + / State queries rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse); rpc Storage(QueryStorageRequest) returns (QueryStorageResponse); rpc Code(QueryCodeRequest) returns (QueryCodeResponse); - // Module queries + / Module queries rpc Params(QueryParamsRequest) returns (QueryParamsResponse); rpc 
Config(QueryConfigRequest) returns (QueryConfigResponse); - // EVM operations + / EVM operations rpc EthCall(EthCallRequest) returns (MsgEthereumTxResponse); rpc EstimateGas(EthCallRequest) returns (EstimateGasResponse); - // Debugging + / Debugging rpc TraceTx(QueryTraceTxRequest) returns (QueryTraceTxResponse); rpc TraceBlock(QueryTraceBlockRequest) returns (QueryTraceBlockResponse); - // Fee queries + / Fee queries rpc BaseFee(QueryBaseFeeRequest) returns (QueryBaseFeeResponse); rpc GlobalMinGasPrice(QueryGlobalMinGasPriceRequest) returns (QueryGlobalMinGasPriceResponse); } @@ -428,7 +428,7 @@ type EvmHooks interface { ### Registration ```go -// In app.go +/ In app.go app.EvmKeeper = app.EvmKeeper.SetHooks( vmkeeper.NewMultiEvmHooks( app.Erc20Keeper.Hooks(), @@ -481,8 +481,8 @@ finalGasUsed = gasUsed - refund 3. **Fork Planning** ```json { - "shanghai_time": "1681338455", // Test first - "cancun_time": "1710338455" // Delay for safety + "shanghai_time": "1681338455", / Test first + "cancun_time": "1710338455" / Delay for safety } ``` @@ -490,14 +490,14 @@ finalGasUsed = gasUsed - refund 1. 
**Gas Optimization** ```solidity - // Cache storage reads + / Cache storage reads uint256 cached = storageVar; - // Use cached instead of storageVar + / Use cached instead of storageVar - // Pack structs + / Pack structs struct Packed { uint128 a; - uint128 b; // Single slot + uint128 b; / Single slot } ``` @@ -523,36 +523,11 @@ finalGasUsed = gasUsed - refund await contract.method(); } catch (error) { if (error.code === 'CALL_EXCEPTION') { - // Handle revert + / Handle revert } } ``` -## Troubleshooting - -### Common Issues - -| Issue | Cause | Solution | -|-------|-------|----------| -| "out of gas" | Insufficient gas limit | Increase gas limit or optimize contract | -| "nonce too low" | Stale nonce | Query current nonce, retry | -| "insufficient funds" | Low balance | Ensure balance > value + (gasLimit × gasPrice) | -| "contract creation failed" | CREATE disabled or restricted | Check access_control settings | -| "precompile not active" | Precompile not in active list | Enable via governance | - -### Debug Commands - -```bash -# Check EVM parameters -evmd query vm params - -# Trace failed transaction -evmd query vm trace-tx 0xFailedTxHash - -# Check account state -evmd query vm account 0xAddress -``` - ## References ### Source Code diff --git a/docs/evm/next/documentation/custom-improvement-proposals.mdx b/docs/evm/next/documentation/custom-improvement-proposals.mdx index 6d723daa..a7d332c4 100644 --- a/docs/evm/next/documentation/custom-improvement-proposals.mdx +++ b/docs/evm/next/documentation/custom-improvement-proposals.mdx @@ -34,7 +34,7 @@ To allow any Cosmos EVM user to define their own specific improvements without o Below, you will find an example of how the Cosmos EVM chain uses this functionality to modify the behavior of the `CREATE` and `CREATE2` opcodes. 
First, the modifier function has to be defined: ``` -// Enable0000 contains the logic to modify the CREATE and CREATE2 opcodes// constant gas value.func Enable0000(jt *vm.JumpTable) { multiplier := 10 currentValCreate := jt[vm.CREATE].GetConstantGas() jt[vm.CREATE].SetConstantGas(currentValCreate * multiplier) currentValCreate2 := jt[vm.CREATE2].GetConstantGas() jt[vm.CREATE2].SetConstantGas(currentValCreate2 * multiplier)} +/ Enable0000 contains the logic to modify the CREATE and CREATE2 opcodes/ constant gas value.func Enable0000(jt *vm.JumpTable) { multiplier := 10 currentValCreate := jt[vm.CREATE].GetConstantGas() jt[vm.CREATE].SetConstantGas(currentValCreate * multiplier) currentValCreate2 := jt[vm.CREATE2].GetConstantGas() jt[vm.CREATE2].SetConstantGas(currentValCreate2 * multiplier)} ``` Then, the function has to be associated with a name via a custom activator: diff --git a/docs/evm/next/documentation/evm-compatibility/eip-2935.mdx b/docs/evm/next/documentation/evm-compatibility/eip-2935.mdx index 8a43e4f2..fab36b7f 100644 --- a/docs/evm/next/documentation/evm-compatibility/eip-2935.mdx +++ b/docs/evm/next/documentation/evm-compatibility/eip-2935.mdx @@ -37,7 +37,7 @@ The `history_serve_window` parameter controls how many historical block hashes a { "vm": { "params": { - "history_serve_window": 8192 // Default: 8192 blocks + "history_serve_window": 8192 / Default: 8192 blocks } } } @@ -79,7 +79,7 @@ contract BlockHashExample { view returns (bytes32) { - // Works for blocks within history_serve_window range + / Works for blocks within history_serve_window range return blockhash(blockNumber); } @@ -88,7 +88,7 @@ contract BlockHashExample { } function getHistoricalRange() external view returns (uint256) { - // Can access block hashes for last 8192 blocks (default) + / Can access block hashes for last 8192 blocks (default) return 8192; } } @@ -108,11 +108,11 @@ contract VerifiableRandomness { mapping(bytes32 => RandomnessCommit) public commits; function
commitRandomness(bytes32 commitment) external { - // Commit to using future block hash as randomness source - uint256 revealBlock = block.number + 10; // Reveal 10 blocks later + / Commit to using future block hash as randomness source + uint256 revealBlock = block.number + 10; / Reveal 10 blocks later commits[commitment] = RandomnessCommit({ - blockHash: bytes32(0), // Will be filled during reveal + blockHash: bytes32(0), / Will be filled during reveal blockNumber: revealBlock, timestamp: block.timestamp, revealed: false @@ -127,14 +127,14 @@ contract VerifiableRandomness { require(!commit.revealed, "Already revealed"); require(block.number >= commit.blockNumber, "Too early to reveal"); - // Get block hash from historical storage (EIP-2935) + / Get block hash from historical storage (EIP-2935) bytes32 blockHash = blockhash(commit.blockNumber); require(blockHash != bytes32(0), "Block hash not available"); commit.blockHash = blockHash; commit.revealed = true; - // Generate verifiable randomness + / Generate verifiable randomness randomness = keccak256(abi.encodePacked( blockHash, commitment, @@ -150,14 +150,14 @@ contract VerifiableRandomness { ```solidity Block-Based State Transitions expandable contract BlockBasedLogic { - uint256 public constant EPOCH_LENGTH = 1000; // blocks + uint256 public constant EPOCH_LENGTH = 1000; / blocks mapping(uint256 => bytes32) public epochSeeds; function updateEpochSeed() external { uint256 currentEpoch = block.number / EPOCH_LENGTH; if (epochSeeds[currentEpoch] == bytes32(0)) { - // Use block hash from start of epoch as seed + / Use block hash from start of epoch as seed uint256 epochStartBlock = currentEpoch * EPOCH_LENGTH; bytes32 seedHash = blockhash(epochStartBlock); @@ -286,4 +286,4 @@ The implementation integrates seamlessly with the EVM: - **EIP-2935 Specification:** https://eips.ethereum.org/EIPS/eip-2935 - **Implementation PR:** [cosmos/evm#407](https://github.com/cosmos/evm/pull/407) - **System Contract Address:** 
`0x0aae40965e6800cd9b1f4b05ff21581047e3f91e` -- **Related Documentation:** [VM Module Parameters](../cosmos-sdk/modules/vm#parameters) \ No newline at end of file +- **Related Documentation:** [VM Module Parameters](/docs/evm/next/documentation/cosmos-sdk/modules/vm#parameters) \ No newline at end of file diff --git a/docs/evm/next/documentation/evm-compatibility/eip-7702.mdx b/docs/evm/next/documentation/evm-compatibility/eip-7702.mdx index 5c027ac1..455da9e2 100644 --- a/docs/evm/next/documentation/evm-compatibility/eip-7702.mdx +++ b/docs/evm/next/documentation/evm-compatibility/eip-7702.mdx @@ -56,24 +56,24 @@ EIP-7702 allows externally owned accounts (EOAs) to delegate their code executio ```go -// SetCodeTx transaction structure (EIP-7702 Type 4 transaction) -// This transaction type enables EOAs to temporarily delegate code execution +/ SetCodeTx transaction structure (EIP-7702 Type 4 transaction) +/ This transaction type enables EOAs to temporarily delegate code execution type SetCodeTx struct { - // Standard transaction fields - ChainID *uint256.Int // Chain ID for EIP-155 replay protection - Nonce uint64 // Account nonce (must be sequential) - GasTipCap *uint256.Int // Priority fee per gas (EIP-1559) - GasFeeCap *uint256.Int // Maximum fee per gas (EIP-1559) - Gas uint64 // Gas limit for transaction execution - To *common.Address // Recipient address (can be self for delegation) - Value *uint256.Int // ETH value to transfer - Data []byte // Contract call data to execute with delegated code - AccessList ethtypes.AccessList // EIP-2930 access list for gas optimization - - // EIP-7702 specific fields - AuthList []SetCodeAuthorization // List of code delegation authorizations - - // Transaction signature (signs entire transaction) + / Standard transaction fields + ChainID *uint256.Int / Chain ID for EIP-155 replay protection + Nonce uint64 / Account nonce (must be sequential) + GasTipCap *uint256.Int / Priority fee per gas (EIP-1559) + GasFeeCap *uint256.Int 
/ Maximum fee per gas (EIP-1559) + Gas uint64 / Gas limit for transaction execution + To *common.Address / Recipient address (can be self for delegation) + Value *uint256.Int / ETH value to transfer + Data []byte / Contract call data to execute with delegated code + AccessList ethtypes.AccessList / EIP-2930 access list for gas optimization + + / EIP-7702 specific fields + AuthList []SetCodeAuthorization / List of code delegation authorizations + + / Transaction signature (signs entire transaction) V *uint256.Int R *uint256.Int S *uint256.Int @@ -85,27 +85,27 @@ type SetCodeTx struct { ```go -// SetCodeAuthorization enables an EOA to authorize code delegation -// Each authorization is separately signed and validated +/ SetCodeAuthorization enables an EOA to authorize code delegation +/ Each authorization is separately signed and validated type SetCodeAuthorization struct { - // Delegation parameters - ChainID uint256.Int // Target chain ID (0 = valid on any chain) - Address common.Address // Contract address to delegate code from - Nonce uint64 // Authority's current nonce (prevents replay) - - // Authorization signature (separate from transaction signature) - // Signs: keccak256(0x05 || rlp(chainId, contractAddress, nonce)) - V uint8 // Recovery ID - R *uint256.Int // Signature R value - S *uint256.Int // Signature S value + / Delegation parameters + ChainID uint256.Int / Target chain ID (0 = valid on any chain) + Address common.Address / Contract address to delegate code from + Nonce uint64 / Authority's current nonce (prevents replay) + + / Authorization signature (separate from transaction signature) + / Signs: keccak256(0x05 || rlp(chainId, contractAddress, nonce)) + V uint8 / Recovery ID + R *uint256.Int / Signature R value + S *uint256.Int / Signature S value } -// Delegation Bytecode Format (automatically generated) -// When delegation is active, the EOA's code becomes: -// 0xEF0100 + contractAddress (23 bytes total) -// - 0xEF01: Delegation marker (EIP-3541 
compliance) -// - 0x00: Delegation version -// - contractAddress: 20 bytes of target contract +/ Delegation Bytecode Format (automatically generated) +/ When delegation is active, the EOA's code becomes: +/ 0xEF0100 + contractAddress (23 bytes total) +/ - 0xEF01: Delegation marker (EIP-3541 compliance) +/ - 0x00: Delegation version +/ - contractAddress: 20 bytes of target contract ``` @@ -118,35 +118,35 @@ type SetCodeAuthorization struct { ```javascript -// Example: Delegate EOA to execute multicall contract -// Source: Based on EIP-7702 specification examples +/ Example: Delegate EOA to execute multicall contract +/ Source: Based on EIP-7702 specification examples import { ethers } from "ethers"; async function createDelegatedMulticall() { const wallet = new ethers.Wallet(privateKey, provider); - const multicallAddress = "0x..."; // Deployed multicall contract + const multicallAddress = "0x..."; / Deployed multicall contract - // Step 1: Create authorization for code delegation + / Step 1: Create authorization for code delegation const authorization = { - chainId: 9000, // Your Cosmos EVM chain ID - address: multicallAddress, // Contract to delegate code from - nonce: await wallet.getNonce(), // Current account nonce + chainId: 9000, / Your Cosmos EVM chain ID + address: multicallAddress, / Contract to delegate code from + nonce: await wallet.getNonce(), / Current account nonce }; - // Step 2: Sign authorization (EIP-712 format) + / Step 2: Sign authorization (EIP-712 format) const authSignature = await signAuthorization(authorization, wallet); - // Step 3: Create SetCode transaction (Type 4) + / Step 3: Create SetCode transaction (Type 4) const tx = { - type: 4, // SetCodeTxType + type: 4, / SetCodeTxType chainId: 9000, - nonce: authorization.nonce + 1, // Must increment after authorization + nonce: authorization.nonce + 1, / Must increment after authorization gasLimit: 500000, gasFeeCap: ethers.parseUnits("20", "gwei"), gasTipCap: ethers.parseUnits("1", 
"gwei"), - to: wallet.address, // Self-delegation + to: wallet.address, / Self-delegation value: 0, - data: "0x", // No direct call data + data: "0x", / No direct call data accessList: [], authorizationList: [{ chainId: authorization.chainId, @@ -158,7 +158,7 @@ async function createDelegatedMulticall() { }] }; - // Step 4: Send transaction - EOA executes as multicall contract + / Step 4: Send transaction - EOA executes as multicall contract return await wallet.sendTransaction(tx); } ``` @@ -169,51 +169,51 @@ async function createDelegatedMulticall() { ```solidity -// Custom account logic for enhanced EOA functionality -// Source: Account abstraction pattern for EIP-7702 +/ Custom account logic for enhanced EOA functionality +/ Source: Account abstraction pattern for EIP-7702 pragma solidity ^0.8.0; contract CustomAccountLogic { - // Custom state for account logic + / Custom state for account logic mapping(address => uint256) public customNonces; mapping(address => mapping(address => bool)) public authorizedSpenders; - // Custom validation logic + / Custom validation logic function validateTransaction( address sender, bytes calldata signature, bytes calldata txData ) external view returns (bool) { - // Example: Multi-signature validation - // Example: Spending limit validation - // Example: Time-lock validation + / Example: Multi-signature validation + / Example: Spending limit validation + / Example: Time-lock validation - // Simplified validation - implement your custom logic + / Simplified validation - implement your custom logic return true; } - // Custom execution logic + / Custom execution logic function executeTransaction( address target, uint256 value, bytes calldata data ) external returns (bool success, bytes memory result) { - // Pre-execution hooks + / Pre-execution hooks require(authorizedSpenders[tx.origin][target] || target == tx.origin, "Unauthorized target"); - // Execute with custom logic + / Execute with custom logic (success, result) = 
target.call{value: value}(data); - // Post-execution hooks + / Post-execution hooks customNonces[tx.origin]++; - // Emit custom events + / Emit custom events emit CustomTransaction(tx.origin, target, value, success); return (success, result); } - // Authorization management + / Authorization management function authorizeSpender(address spender) external { authorizedSpenders[tx.origin][spender] = true; } @@ -228,48 +228,48 @@ contract CustomAccountLogic { ```javascript -// Example: Batch multiple DeFi operations in one transaction -// Source: DeFi batching pattern using EIP-7702 +/ Example: Batch multiple DeFi operations in one transaction +/ Source: DeFi batching pattern using EIP-7702 async function batchDeFiOperations() { - const batchExecutorAddress = "0x..."; // Deployed batch executor contract + const batchExecutorAddress = "0x..."; / Deployed batch executor contract - // Step 1: Authorization to use batch executor + / Step 1: Authorization to use batch executor const authorization = await signAuthorization({ chainId: 9000, address: batchExecutorAddress, nonce: await wallet.getNonce(), }, wallet); - // Step 2: Encode batch operations + / Step 2: Encode batch operations const operations = [ - // Operation 1: Approve token spending + / Operation 1: Approve token spending { target: tokenAddress, callData: erc20Interface.encodeFunctionData("approve", [spenderAddress, amount]) }, - // Operation 2: Stake tokens + / Operation 2: Stake tokens { target: stakingAddress, callData: stakingInterface.encodeFunctionData("stake", [amount]) }, - // Operation 3: Claim rewards + / Operation 3: Claim rewards { target: rewardsAddress, callData: rewardsInterface.encodeFunctionData("claimRewards", []) } ]; - // Step 3: Encode batch call data + / Step 3: Encode batch call data const batchCallData = batchInterface.encodeFunctionData("executeBatch", [operations]); - // Step 4: Create SetCode transaction + / Step 4: Create SetCode transaction const tx = { type: 4, authorizationList: 
[authorization], - to: wallet.address, // Self-delegation - data: batchCallData, // Execute batch operations - gasLimit: 800000, // Higher gas for multiple operations - // ... other tx fields + to: wallet.address, / Self-delegation + data: batchCallData, / Execute batch operations + gasLimit: 800000, / Higher gas for multiple operations + / ... other tx fields }; return await wallet.sendTransaction(tx); @@ -282,8 +282,8 @@ async function batchDeFiOperations() { ```solidity -// Enhanced wallet functionality using EIP-7702 -// Source: Smart wallet pattern implementation +/ Enhanced wallet functionality using EIP-7702 +/ Source: Smart wallet pattern implementation pragma solidity ^0.8.0; contract SmartWalletLogic { @@ -296,7 +296,7 @@ contract SmartWalletLogic { mapping(address => SpendingLimit) public spendingLimits; mapping(address => mapping(bytes32 => bool)) public executedTxHashes; - // Enhanced wallet functions + / Enhanced wallet functions function setDailySpendingLimit(uint256 limit) external { spendingLimits[tx.origin] = SpendingLimit({ dailyLimit: limit, @@ -310,30 +310,30 @@ contract SmartWalletLogic { uint256 value, bytes calldata data ) external returns (bytes memory) { - // Check spending limits + / Check spending limits SpendingLimit storage limit = spendingLimits[tx.origin]; - // Reset daily counter if new day + / Reset daily counter if new day uint256 currentDay = block.timestamp / 1 days; if (currentDay > limit.lastResetDay) { limit.spentToday = 0; limit.lastResetDay = currentDay; } - // Enforce spending limit + / Enforce spending limit require(limit.spentToday + value <= limit.dailyLimit, "Daily spending limit exceeded"); - // Execute transaction + / Execute transaction (bool success, bytes memory result) = target.call{value: value}(data); require(success, "Transaction execution failed"); - // Update spent amount + / Update spent amount limit.spentToday += value; return result; } - // Prevent replay attacks + / Prevent replay attacks function 
executeOnce( bytes32 txHash, address target, diff --git a/docs/evm/next/documentation/getting-started/development-environment.mdx b/docs/evm/next/documentation/getting-started/development-environment.mdx index 77420dd2..b2626488 100644 --- a/docs/evm/next/documentation/getting-started/development-environment.mdx +++ b/docs/evm/next/documentation/getting-started/development-environment.mdx @@ -10,7 +10,7 @@ Each person has their own preference and different tasks or scopes of work may c [Remix](https://remix.ethereum.org) is a full-featured IDE in a web app supporting all EVM-compatible networks out of the box. It is a convenient option for quick testing, or as a self-contained smart contract deployment interface. -[Read more..](docs/evm/next/documentation/getting-started/tooling-and-resources/remix.mdx) +[Read more..](/docs/evm/next/documentation/getting-started/tooling-and-resources/remix) diff --git a/docs/evm/next/documentation/getting-started/tooling-and-resources/hardhat.mdx b/docs/evm/next/documentation/getting-started/tooling-and-resources/hardhat.mdx index 73961ba9..21953b62 100644 --- a/docs/evm/next/documentation/getting-started/tooling-and-resources/hardhat.mdx +++ b/docs/evm/next/documentation/getting-started/tooling-and-resources/hardhat.mdx @@ -42,13 +42,13 @@ const config: HardhatUserConfig = { networks: { local: { url: "http://127.0.0.1:8545", - chainId: 4321, // Your EVM chain ID + chainId: 4321, / Your EVM chain ID accounts: process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [], gasPrice: 20000000000, }, testnet: { url: "", - chainId: 4321, // Your EVM chain ID + chainId: 4321, / Your EVM chain ID accounts: process.env.PRIVATE_KEY ?
[process.env.PRIVATE_KEY] : [], } }, @@ -63,7 +63,7 @@ const config: HardhatUserConfig = { customChains: [ { network: "cosmosEvmTestnet", - chainId: 4321, // Your EVM chain ID + chainId: 4321, / Your EVM chain ID urls: { apiURL: "", browserURL: "" @@ -85,13 +85,13 @@ Hardhat's first-class TypeScript support enables type-safe contract interactions Create a contract in the `contracts/` directory. For this example, we'll use a simple `LiquidStakingVault`. ```solidity title="contracts/LiquidStakingVault.sol" lines expandable -// contracts/LiquidStakingVault.sol -// SPDX-License-Identifier: MIT +/ contracts/LiquidStakingVault.sol +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.24; import "@openzeppelin/contracts/access/Ownable.sol"; -// Interface for the staking precompile +/ Interface for the staking precompile interface IStaking { function delegate(address validator, uint256 amount) external returns (bool); function undelegate(address validator, uint256 amount) external returns (bool, uint64); @@ -126,7 +126,7 @@ contract LiquidStakingVault is Ownable { Create type-safe tests in the `test/` directory. 
```typescript title="test/LiquidStakingVault.test.ts" lines expandable -// test/LiquidStakingVault.test.ts +/ test/LiquidStakingVault.test.ts import { expect } from "chai"; import { ethers } from "hardhat"; import { LiquidStakingVault } from "../typechain-types"; @@ -147,8 +147,8 @@ describe("LiquidStakingVault", function () { vault = await VaultFactory.deploy(VALIDATOR_ADDRESS); await vault.waitForDeployment(); - // Mock the staking precompile's delegate function to always return true - // This bytecode is a minimal contract that returns true for any call + / Mock the staking precompile's delegate function to always return true + / This bytecode is a minimal contract that returns true for any call const successBytecode = "0x6080604052348015600f57600080fd5b50600160005560016000f3"; await ethers.provider.send("hardhat_setCode", [ STAKING_PRECOMPILE, @@ -179,7 +179,7 @@ npx hardhat test Create a deployment script in the `scripts/` directory to deploy your contract to a live network. ```typescript title="scripts/deploy.ts" lines expandable -// scripts/deploy.ts +/ scripts/deploy.ts import { ethers, network } from "hardhat"; import { writeFileSync } from "fs"; @@ -197,7 +197,7 @@ async function main() { const vaultAddress = await vault.getAddress(); console.log("LiquidStakingVault deployed to:", vaultAddress); - // Save deployment information + / Save deployment information const deploymentInfo = { contractAddress: vaultAddress, deployer: deployer.address, @@ -210,11 +210,11 @@ async function main() { JSON.stringify(deploymentInfo, null, 2) ); - // Optional: Verify contract on Etherscan-compatible explorer + / Optional: Verify contract on Etherscan-compatible explorer if (network.name !== "local" && process.env.ETHERSCAN_API_KEY) { console.log("Waiting for block confirmations before verification..."); - // await vault.deploymentTransaction()?.wait(5); // Wait for 5 blocks - await new Promise(resolve => setTimeout(resolve, 30000)); // Or wait 30 seconds + / await 
vault.deploymentTransaction()?.wait(5); / Wait for 5 blocks + await new Promise(resolve => setTimeout(resolve, 30000)); / Or wait 30 seconds await hre.run("verify:verify", { address: vaultAddress, diff --git a/docs/evm/next/documentation/getting-started/tooling-and-resources/testing-and-fuzzing.mdx b/docs/evm/next/documentation/getting-started/tooling-and-resources/testing-and-fuzzing.mdx index 5749a95c..740862b0 100644 --- a/docs/evm/next/documentation/getting-started/tooling-and-resources/testing-and-fuzzing.mdx +++ b/docs/evm/next/documentation/getting-started/tooling-and-resources/testing-and-fuzzing.mdx @@ -97,7 +97,7 @@ tests/evm-tools-compatibility/ **Setup:** ```bash # Start comparison testing environment -cd tests/jsonrpc +cd scripts/test/jsonrpc # Build required Docker images make localnet-build-env @@ -121,12 +121,12 @@ cd simulator && go build . && ./simulator **Configuration:** ```go -// tests/jsonrpc/simulator/config/config.go:27-36 +/ scripts/test/jsonrpc/simulator/config/config.go:27-36 type Config struct { - EvmdHttpEndpoint string // Default: "http://localhost:8545" - EvmdWsEndpoint string // Default: "ws://localhost:8546" - GethHttpEndpoint string // Default: "http://localhost:8547" - GethWsEndpoint string // Default: "ws://localhost:8548" + EvmdHttpEndpoint string / Default: "http://localhost:8545" + EvmdWsEndpoint string / Default: "ws://localhost:8546" + GethHttpEndpoint string / Default: "http://localhost:8547" + GethWsEndpoint string / Default: "ws://localhost:8548" RichPrivKey string Timeout string } @@ -137,9 +137,9 @@ type Config struct { Fuzz testing generates random inputs to find edge cases that violate expected properties.
```solidity title="test/MyContract.t.sol (Foundry)" lines expandable -// test/MyContract.t.sol (Foundry) +/ test/MyContract.t.sol (Foundry) function test_myFunction_fuzz(uint256 amount) public { - vm.assume(amount > 0 && amount < 1e18); // constrain input + vm.assume(amount > 0 && amount < 1e18); / constrain input myContract.myFunction(amount); assertEq(myContract.someState(), expectedValue); } @@ -171,10 +171,10 @@ Tools that enhance test coverage, assert correctness, or catch common vulnerabil ```js lines expandable const { time, expectRevert } = require('@openzeppelin/test-helpers'); -// advance clock by one day +/ advance clock by one day await time.increase(time.duration.days(1)); -// expect revert with specific reason +/ expect revert with specific reason await expectRevert(contract.someFunction(), "Ownable: caller is not the owner"); ``` diff --git a/docs/evm/next/documentation/integration/erc20-precompiles-migration.mdx b/docs/evm/next/documentation/integration/erc20-precompiles-migration.mdx index b9689577..a7346217 100644 --- a/docs/evm/next/documentation/integration/erc20-precompiles-migration.mdx +++ b/docs/evm/next/documentation/integration/erc20-precompiles-migration.mdx @@ -47,11 +47,11 @@ Your chain needs this migration if you have: Add this essential migration logic to your existing upgrade handler: ```go -// In your upgrade handler +/ In your upgrade handler store := ctx.KVStore(storeKeys[erc20types.StoreKey]) -const addressLength = 42 // "0x" + 40 hex characters +const addressLength = 42 / "0x" + 40 hex characters -// Migrate dynamic precompiles (IBC tokens, token factory) +/ Migrate dynamic precompiles (IBC tokens, token factory) if oldData := store.Get([]byte("DynamicPrecompiles")); len(oldData) > 0 { for i := 0; i < len(oldData); i += addressLength { address := common.HexToAddress(string(oldData[i : i+addressLength])) @@ -60,7 +60,7 @@ if oldData := store.Get([]byte("DynamicPrecompiles")); len(oldData) > 0 { 
store.Delete([]byte("DynamicPrecompiles")) } -// Migrate native precompiles +/ Migrate native precompiles if oldData := store.Get([]byte("NativePrecompiles")); len(oldData) > 0 { for i := 0; i < len(oldData); i += addressLength { address := common.HexToAddress(string(oldData[i : i+addressLength])) @@ -70,8 +70,7 @@ if oldData := store.Get([]byte("NativePrecompiles")); len(oldData) > 0 { } ``` -
-Complete Implementation Example + ### Create Upgrade Handler @@ -103,13 +102,13 @@ func CreateUpgradeHandler( ctx := sdk.UnwrapSDKContext(c) ctx.Logger().Info("Starting v0.4.0 upgrade...") - // Run standard module migrations + / Run standard module migrations vm, err := mm.RunMigrations(ctx, configurator, vm) if err != nil { return vm, err } - // Migrate ERC20 precompiles + / Migrate ERC20 precompiles if err := migrateERC20Precompiles(ctx, storeKeys[erc20types.StoreKey], keepers.Erc20Keeper); err != nil { return vm, err } @@ -138,7 +137,7 @@ func migrateERC20Precompiles( erc20Keeper erc20keeper.Keeper, ) error { store := ctx.KVStore(storeKey) - const addressLength = 42 // "0x" + 40 hex characters + const addressLength = 42 / "0x" + 40 hex characters migrations := []struct { oldKey string @@ -183,7 +182,7 @@ func migrateERC20Precompiles( addressStr := string(oldData[i : i+addressLength]) address := common.HexToAddress(addressStr) - // Validate address + / Validate address if address == (common.Address{}) { ctx.Logger().Warn("Skipping zero address", "type", migration.description, @@ -192,7 +191,7 @@ func migrateERC20Precompiles( continue } - // Migrate to new storage + / Migrate to new storage migration.setter(ctx, address) migratedCount++ @@ -203,7 +202,7 @@ func migrateERC20Precompiles( ) } - // Clean up old storage + / Clean up old storage store.Delete([]byte(migration.oldKey)) ctx.Logger().Info("Migration complete", "type", migration.description, @@ -239,7 +238,7 @@ func (app *App) RegisterUpgradeHandlers() { ``` -
+
## Testing @@ -277,13 +276,13 @@ mantrachaind export | jq '.app_state.erc20.dynamic_precompiles' ```go tests/upgrade_test.go func TestERC20PrecompileMigration(t *testing.T) { - // Setup test environment + / Setup test environment app, ctx := setupTestApp(t) - // Create legacy storage entries + / Create legacy storage entries store := ctx.KVStore(app.keys[erc20types.StoreKey]) - // Add test addresses in old format + / Add test addresses in old format dynamicAddresses := []string{ "0x6eC942095eCD4948d9C094337ABd59Dc3c521005", "0x1234567890123456789012345678901234567890", @@ -294,15 +293,15 @@ func TestERC20PrecompileMigration(t *testing.T) { } store.Set([]byte("DynamicPrecompiles"), []byte(dynamicData)) - // Run migration + / Run migration err := migrateERC20Precompiles(ctx, app.keys[erc20types.StoreKey], app.Erc20Keeper) require.NoError(t, err) - // Verify migration + / Verify migration migratedAddresses := app.Erc20Keeper.GetDynamicPrecompiles(ctx) require.Len(t, migratedAddresses, len(dynamicAddresses)) - // Verify old storage is cleaned + / Verify old storage is cleaned oldData := store.Get([]byte("DynamicPrecompiles")) require.Nil(t, oldData) } diff --git a/docs/evm/next/documentation/integration/evm-module-integration.mdx b/docs/evm/next/documentation/integration/evm-module-integration.mdx index 75b4e688..30878bc8 100644 --- a/docs/evm/next/documentation/integration/evm-module-integration.mdx +++ b/docs/evm/next/documentation/integration/evm-module-integration.mdx @@ -4,7 +4,8 @@ description: "Integrating the Cosmos EVM module into Cosmos SDK v0.53.x chains" --- -Big thanks to Reece & the [Spawn](https://github.com/rollchains/spawn) team for their valuable contributions to this guide. + Big thanks to Reece & the [Spawn](https://github.com/rollchains/spawn) team + for their valuable contributions to this guide. This guide provides instructions for adding EVM compatibility to a new Cosmos SDK chain. It targets chains being built from scratch with EVM support. 
@@ -15,7 +16,8 @@ This guide provides instructions for adding EVM compatibility to a new Cosmos SD - Token decimal changes (from Cosmos standard 6 to Ethereum standard 18) impacting all existing balances - Asset migration where existing assets need to be initialized and mirrored in the EVM -Contact [Interchain Labs](https://share-eu1.hsforms.com/2g6yO-PVaRoKj50rUgG4Pjg2e2sca) for production chain upgrade guidance. +Contact [Cosmos Labs](https://cosmos.network/interest-form) for production chain upgrade guidance. + ## Prerequisites @@ -26,16 +28,18 @@ Contact [Interchain Labs](https://share-eu1.hsforms.com/2g6yO-PVaRoKj50rUgG4Pjg2 - Basic knowledge of Go and Cosmos SDK -Throughout this guide, `evmd` refers to your chain's binary (e.g., `gaiad`, `dydxd`, etc.). + Throughout this guide, `evmd` refers to your chain's binary (e.g., `gaiad`, + `dydxd`, etc.). ## Version Compatibility -These version numbers may change as development continues. Check [github.com/cosmos/evm](https://github.com/cosmos/evm) for the latest releases. + These version numbers may change as development continues. Check + [github.com/cosmos/evm](https://github.com/cosmos/evm) for the latest + releases. 
- ```go require ( github.com/cosmos/cosmos-sdk v0.53.0 @@ -44,7 +48,7 @@ require ( ) replace ( - // Use the Cosmos fork of go-ethereum + / Use the Cosmos fork of go-ethereum github.com/ethereum/go-ethereum => github.com/cosmos/go-ethereum v1.15.11-cosmos-0 ) ``` @@ -52,12 +56,12 @@ replace ( ## Step 1: Update Dependencies ```go "go.mod Dependencies" expandable -// go.mod +/ go.mod require ( github.com/cosmos/cosmos-sdk v0.53.0 github.com/ethereum/go-ethereum v1.15.10 - // for IBC functionality in EVM + / for IBC functionality in EVM github.com/cosmos/ibc-go/modules/capability v1.0.1 github.com/cosmos/ibc-go/v10 v10.3.0 ) @@ -73,15 +77,17 @@ Cosmos EVM requires two separate chain IDs: - **EVM Chain ID** (integer): Used for EVM transactions and EIP-155 tooling (e.g., 9000) -Ensure your EVM chain ID is not already in use by checking [chainlist.org](https://chainlist.org/). + Ensure your EVM chain ID is not already in use by checking + [chainlist.org](https://chainlist.org/). **Files to Update:** 1. `app/app.go`: Set chain ID constants + ```go -const CosmosChainID = "mychain-1" // Standard Cosmos format -const EVMChainID = 9000 // EIP-155 integer +const CosmosChainID = "mychain-1" / Standard Cosmos format +const EVMChainID = 9000 / EIP-155 integer ``` 2. Update `Makefile`, scripts, and `genesis.json` with correct chain IDs @@ -93,11 +99,13 @@ Use `eth_secp256k1` as the standard account type with coin type 60 for Ethereum **Files to Update:** 1. `app/app.go`: + ```go const CoinType uint32 = 60 ``` 2. `chain_registry.json`: + ```json "slip44": 60 ``` @@ -107,11 +115,13 @@ const CoinType uint32 = 60 Changing from 6 decimals (Cosmos convention) to 18 decimals (EVM standard) is highly recommended for full compatibility. 1. Set the denomination in `app/app.go`: + ```go const BaseDenomUnit int64 = 18 ``` 2. 
Update the `init()` function: + ```go "Power Reduction Initialization" expandable import ( "math/big" @@ -120,7 +130,7 @@ import ( ) func init() { - // Update power reduction for 18-decimal base unit + / Update power reduction for 18-decimal base unit sdk.DefaultPowerReduction = math.NewIntFromBigInt( new(big.Int).Exp(big.NewInt(10), big.NewInt(BaseDenomUnit), nil), ) @@ -130,7 +140,9 @@ func init() { ## Step 3: Handle EVM Decimal Precision -The mismatch between EVM's 18-decimal standard and Cosmos SDK's 6-decimal standard is critical. The default behavior (flooring) discards any value below 10^-6, causing asset loss and breaking DeFi applications. + The mismatch between EVM's 18-decimal standard and Cosmos SDK's 6-decimal + standard is critical. The default behavior (flooring) discards any value below + 10^-6, causing asset loss and breaking DeFi applications. ### Solution: x/precisebank Module @@ -138,6 +150,7 @@ The mismatch between EVM's 18-decimal standard and Cosmos SDK's 6-decimal standa The `x/precisebank` module wraps the native `x/bank` module to maintain fractional balances for EVM denominations, handling full 18-decimal precision without loss. 
**Benefits:** + - Lossless precision preventing invisible asset loss - High DApp compatibility ensuring DeFi protocols function correctly - Simple integration requiring minimal changes @@ -145,7 +158,7 @@ The `x/precisebank` module wraps the native `x/bank` module to maintain fraction **Integration in app.go:** ```go "PreciseBank Integration" expandable -// Initialize PreciseBankKeeper +/ Initialize PreciseBankKeeper app.PreciseBankKeeper = precisebankkeeper.NewKeeper( appCodec, keys[precisebanktypes.StoreKey], @@ -153,14 +166,14 @@ app.PreciseBankKeeper = precisebankkeeper.NewKeeper( authtypes.NewModuleAddress(govtypes.ModuleName).String(), ) -// Pass PreciseBankKeeper to EVMKeeper instead of BankKeeper +/ Pass PreciseBankKeeper to EVMKeeper instead of BankKeeper app.EVMKeeper = evmkeeper.NewKeeper( appCodec, keys[evmtypes.StoreKey], tkeys[evmtypes.TransientKey], authtypes.NewModuleAddress(govtypes.ModuleName), app.AccountKeeper, - app.PreciseBankKeeper, // Use PreciseBankKeeper here + app.PreciseBankKeeper, / Use PreciseBankKeeper here app.StakingKeeper, app.FeeMarketKeeper, &app.Erc20Keeper, @@ -178,6 +191,7 @@ The Cosmos EVM `x/erc20` module can automatically register ERC20 token pairs for 1. **Use the Extended IBC Transfer Module**: Import and use the transfer module from `github.com/cosmos/evm/x/ibc/transfer` 2. 
**Enable ERC20 Module Parameters** in genesis: + ```go erc20Params := erc20types.DefaultParams() erc20Params.EnableErc20 = true @@ -210,29 +224,29 @@ func NoOpEVMOptions(_ string) error { var sealed = false -// ChainsCoinInfo maps EVM chain IDs to coin configuration -// IMPORTANT: Uses uint64 EVM chain IDs as keys, not Cosmos chain ID strings +/ ChainsCoinInfo maps EVM chain IDs to coin configuration +/ IMPORTANT: Uses uint64 EVM chain IDs as keys, not Cosmos chain ID strings var ChainsCoinInfo = map[uint64]evmtypes.EvmCoinInfo{ - EVMChainID: { // Your numeric EVM chain ID (e.g., 9000) + EVMChainID: { / Your numeric EVM chain ID (e.g., 9000) Denom: BaseDenom, DisplayDenom: DisplayDenom, Decimals: evmtypes.EighteenDecimals, }, } -// EVMAppOptions sets up global configuration +/ EVMAppOptions sets up global configuration func EVMAppOptions(chainID string) error { if sealed { return nil } - // IMPORTANT: Lookup uses numeric EVMChainID, not Cosmos chainID string + / IMPORTANT: Lookup uses numeric EVMChainID, not Cosmos chainID string coinInfo, found := ChainsCoinInfo[EVMChainID] if !found { return fmt.Errorf("unknown EVM chain id: %d", EVMChainID) } - // Set denom info for the chain + / Set denom info for the chain if err := setBaseDenom(coinInfo); err != nil { return err } @@ -256,7 +270,7 @@ func EVMAppOptions(chainID string) error { return nil } -// setBaseDenom registers display and base denoms +/ setBaseDenom registers display and base denoms func setBaseDenom(ci evmtypes.EvmCoinInfo) error { if err := sdk.RegisterDenom(ci.DisplayDenom, math.LegacyOneDec()); err != nil { return err @@ -286,7 +300,7 @@ import ( stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" bankprecompile "github.com/cosmos/evm/precompiles/bank" "github.com/cosmos/evm/precompiles/bech32" - "github.com/cosmos/evm/precompiles/common" // v0.5.0+: Required for interfaces + "github.com/cosmos/evm/precompiles/common" / v0.5.0+: Required for interfaces distprecompile 
"github.com/cosmos/evm/precompiles/distribution" evidenceprecompile "github.com/cosmos/evm/precompiles/evidence" govprecompile "github.com/cosmos/evm/precompiles/gov" @@ -298,14 +312,14 @@ import ( transferkeeper "github.com/cosmos/evm/x/ibc/transfer/keeper" "github.com/cosmos/evm/x/vm/core/vm" evmkeeper "github.com/cosmos/evm/x/vm/keeper" - channelkeeper "github.com/cosmos/ibc-go/v10/modules/core/04-channel/keeper" // Updated to v10 + channelkeeper "github.com/cosmos/ibc-go/v10/modules/core/04-channel/keeper" / Updated to v10 "github.com/ethereum/go-ethereum/common" ) const bech32PrecompileBaseGas = 6_000 -// NewAvailableStaticPrecompiles returns all available static precompiled contracts -// v0.5.0+: Uses keeper interfaces instead of concrete types +/ NewAvailableStaticPrecompiles returns all available static precompiled contracts +/ v0.5.0+: Uses keeper interfaces instead of concrete types func NewAvailableStaticPrecompiles( stakingKeeper stakingkeeper.Keeper, distributionKeeper distributionkeeper.Keeper, @@ -318,7 +332,7 @@ func NewAvailableStaticPrecompiles( govKeeper govkeeper.Keeper, slashingKeeper slashingkeeper.Keeper, evidenceKeeper evidencekeeper.Keeper, - addressCodec address.Codec, // Required for v0.5.0+ + addressCodec address.Codec, / Required for v0.5.0+ ) map[common.Address]vm.PrecompiledContract { precompiles := maps.Clone(vm.PrecompiledContractsBerlin) @@ -329,63 +343,63 @@ func NewAvailableStaticPrecompiles( panic(fmt.Errorf("failed to instantiate bech32 precompile: %w", err)) } - // v0.5.0+: Cast concrete keepers to interfaces + / v0.5.0+: Cast concrete keepers to interfaces stakingPrecompile, err := stakingprecompile.NewPrecompile( - common.StakingKeeper(stakingKeeper), // Cast to interface + common.StakingKeeper(stakingKeeper), / Cast to interface ) if err != nil { panic(fmt.Errorf("failed to instantiate staking precompile: %w", err)) } distributionPrecompile, err := distprecompile.NewPrecompile( - 
common.DistributionKeeper(distributionKeeper), // Cast to interface - evmKeeper.Codec(), - addressCodec, // Required in v0.5.0+ + common.DistributionKeeper(distributionKeeper), / Cast to interface + evmKeeper.Codec(), + addressCodec, / Required in v0.5.0+ ) if err != nil { panic(fmt.Errorf("failed to instantiate distribution precompile: %w", err)) } - // ICS20 precompile - bankKeeper now first parameter in v0.5.0+ + / ICS20 precompile - bankKeeper now first parameter in v0.5.0+ ibcTransferPrecompile, err := ics20precompile.NewPrecompile( - common.BankKeeper(bankKeeper), // Now first parameter - common.StakingKeeper(stakingKeeper), // Cast to interface - common.TransferKeeper(transferKeeper), // Cast to interface - common.ChannelKeeper(channelKeeper), // Cast to interface + common.BankKeeper(bankKeeper), / Now first parameter + common.StakingKeeper(stakingKeeper), / Cast to interface + common.TransferKeeper(transferKeeper), / Cast to interface + common.ChannelKeeper(channelKeeper), / Cast to interface ) if err != nil { panic(fmt.Errorf("failed to instantiate ICS20 precompile: %w", err)) } bankPrecompile, err := bankprecompile.NewPrecompile( - common.BankKeeper(bankKeeper), // Cast to interface + common.BankKeeper(bankKeeper), / Cast to interface ) if err != nil { panic(fmt.Errorf("failed to instantiate bank precompile: %w", err)) } - // Gov precompile - now requires AddressCodec in v0.5.0+ + / Gov precompile - now requires AddressCodec in v0.5.0+ govPrecompile, err := govprecompile.NewPrecompile( - govKeeper, + govKeeper, evmKeeper.Codec(), - addressCodec, // Required in v0.5.0+ + addressCodec, / Required in v0.5.0+ ) if err != nil { panic(fmt.Errorf("failed to instantiate gov precompile: %w", err)) } slashingPrecompile, err := slashingprecompile.NewPrecompile( - common.SlashingKeeper(slashingKeeper), // Cast to interface + common.SlashingKeeper(slashingKeeper), / Cast to interface ) if err != nil { panic(fmt.Errorf("failed to instantiate slashing precompile: %w", 
err)) } - // Stateless precompiles + / Stateless precompiles precompiles[bech32Precompile.Address()] = bech32Precompile precompiles[p256Precompile.Address()] = p256Precompile - // Stateful precompiles + / Stateful precompiles precompiles[stakingPrecompile.Address()] = stakingPrecompile precompiles[distributionPrecompile.Address()] = distributionPrecompile precompiles[ibcTransferPrecompile.Address()] = ibcTransferPrecompile @@ -398,7 +412,7 @@ func NewAvailableStaticPrecompiles( **v0.5.0 Breaking Changes Applied Above:** -- Precompile constructors now use keeper interfaces (cast required) +- Precompile constructors now use keeper interfaces (cast required) - ICS20 precompile: `bankKeeper` now first parameter - Gov precompile: `AddressCodec` now required - Distribution precompile: Simplified parameter list @@ -412,7 +426,7 @@ func NewAvailableStaticPrecompiles( ```go "app.go EVM Imports" expandable import ( - // ... other imports + / ... other imports ante "github.com/your-repo/your-chain/ante" evmante "github.com/cosmos/evm/ante" evmcosmosante "github.com/cosmos/evm/ante/cosmos" @@ -432,12 +446,12 @@ import ( _ "github.com/cosmos/evm/x/vm/core/tracers/js" _ "github.com/cosmos/evm/x/vm/core/tracers/native" - // Replace default transfer with EVM's extended transfer module + / Replace default transfer with EVM's extended transfer module transfer "github.com/cosmos/evm/x/ibc/transfer" ibctransferkeeper "github.com/cosmos/evm/x/ibc/transfer/keeper" ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" - // Add authz for precompiles + / Add authz for precompiles authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" ) ``` @@ -446,7 +460,7 @@ import ( ```go var maccPerms = map[string][]string{ - // ... existing permissions + / ... 
existing permissions evmtypes.ModuleName: {authtypes.Minter, authtypes.Burner}, feemarkettypes.ModuleName: nil, erc20types.ModuleName: {authtypes.Minter, authtypes.Burner}, @@ -457,7 +471,7 @@ var maccPerms = map[string][]string{ ```go type ChainApp struct { - // ... existing fields + / ... existing fields FeeMarketKeeper feemarketkeeper.Keeper EVMKeeper *evmkeeper.Keeper Erc20Keeper erc20keeper.Keeper @@ -469,12 +483,12 @@ type ChainApp struct { ```go func NewChainApp( - // ... existing params + / ... existing params appOpts servertypes.AppOptions, - evmAppOptions EVMOptionsFn, // Add this parameter + evmAppOptions EVMOptionsFn, / Add this parameter baseAppOptions ...func(*baseapp.BaseApp), ) *ChainApp { - // ... + / ... } ``` @@ -491,7 +505,7 @@ txConfig := encodingConfig.TxConfig ```go "Store Keys Configuration" expandable keys := storetypes.NewKVStoreKeys( - // ... existing keys + / ... existing keys evmtypes.StoreKey, feemarkettypes.StoreKey, erc20types.StoreKey, @@ -507,11 +521,11 @@ tkeys := storetypes.NewTransientStoreKeys( ### Initialize Keepers (Critical Order) -Keepers must be initialized in exact order: FeeMarket → EVM → Erc20 → Transfer + Keepers must be initialized in exact order: FeeMarket → EVM → Erc20 → Transfer ```go "Keeper Initialization (Critical Order)" expandable -// Initialize AuthzKeeper if not already done +/ Initialize AuthzKeeper if not already done app.AuthzKeeper = authzkeeper.NewKeeper( keys[authz.StoreKey], appCodec, @@ -519,7 +533,7 @@ app.AuthzKeeper = authzkeeper.NewKeeper( app.AccountKeeper, ) -// Initialize FeeMarketKeeper +/ Initialize FeeMarketKeeper app.FeeMarketKeeper = feemarketkeeper.NewKeeper( appCodec, authtypes.NewModuleAddress(govtypes.ModuleName), @@ -528,7 +542,7 @@ app.FeeMarketKeeper = feemarketkeeper.NewKeeper( app.GetSubspace(feemarkettypes.ModuleName), ) -// Initialize EVMKeeper +/ Initialize EVMKeeper tracer := cast.ToString(appOpts.Get(srvflags.EVMTracer)) app.EVMKeeper = evmkeeper.NewKeeper( appCodec, @@ 
-539,12 +553,12 @@ app.EVMKeeper = evmkeeper.NewKeeper( app.BankKeeper, app.StakingKeeper, app.FeeMarketKeeper, - &app.Erc20Keeper, // Pass pointer for circular dependency + &app.Erc20Keeper, / Pass pointer for circular dependency tracer, app.GetSubspace(evmtypes.ModuleName), ) -// Initialize Erc20Keeper +/ Initialize Erc20Keeper app.Erc20Keeper = erc20keeper.NewKeeper( keys[erc20types.StoreKey], appCodec, @@ -554,10 +568,10 @@ app.Erc20Keeper = erc20keeper.NewKeeper( app.EVMKeeper, app.StakingKeeper, app.AuthzKeeper, - &app.TransferKeeper, // Pass pointer for circular dependency + &app.TransferKeeper, / Pass pointer for circular dependency ) -// Initialize extended TransferKeeper +/ Initialize extended TransferKeeper app.TransferKeeper = ibctransferkeeper.NewKeeper( appCodec, keys[ibctransfertypes.StoreKey], @@ -572,11 +586,11 @@ app.TransferKeeper = ibctransferkeeper.NewKeeper( authtypes.NewModuleAddress(govtypes.ModuleName).String(), ) -// CRITICAL: Wire IBC callbacks for automatic ERC20 registration +/ CRITICAL: Wire IBC callbacks for automatic ERC20 registration transferModule := transfer.NewIBCModule(app.TransferKeeper) app.Erc20Keeper.SetICS20Module(transferModule) -// Configure EVM Precompiles +/ Configure EVM Precompiles corePrecompiles := NewAvailableStaticPrecompiles( *app.StakingKeeper, app.DistrKeeper, @@ -597,7 +611,7 @@ app.EVMKeeper.WithStaticPrecompiles(corePrecompiles) ```go app.ModuleManager = module.NewManager( - // ... existing modules + / ... 
existing modules evm.NewAppModule(app.EVMKeeper, app.AccountKeeper, app.GetSubspace(evmtypes.ModuleName)), feemarket.NewAppModule(app.FeeMarketKeeper, app.GetSubspace(feemarkettypes.ModuleName)), erc20.NewAppModule(app.Erc20Keeper, app.AccountKeeper, app.GetSubspace(erc20types.ModuleName)), @@ -608,31 +622,31 @@ app.ModuleManager = module.NewManager( ### Update Module Ordering ```go "Module Ordering Configuration" expandable -// SetOrderBeginBlockers - EVM must come after feemarket +/ SetOrderBeginBlockers - EVM must come after feemarket app.ModuleManager.SetOrderBeginBlockers( - // ... other modules + / ... other modules erc20types.ModuleName, feemarkettypes.ModuleName, evmtypes.ModuleName, - // ... + / ... ) -// SetOrderEndBlockers +/ SetOrderEndBlockers app.ModuleManager.SetOrderEndBlockers( - // ... other modules + / ... other modules evmtypes.ModuleName, feemarkettypes.ModuleName, erc20types.ModuleName, - // ... + / ... ) -// SetOrderInitGenesis - feemarket must be before genutil +/ SetOrderInitGenesis - feemarket must be before genutil genesisModuleOrder := []string{ - // ... other modules + / ... other modules evmtypes.ModuleName, feemarkettypes.ModuleName, erc20types.ModuleName, - // ... + / ... } ``` @@ -650,7 +664,7 @@ options := ante.HandlerOptions{ ExtensionOptionChecker: cosmosevmtypes.HasDynamicFeeExtensionOption, MaxTxGasWanted: cast.ToUint64(appOpts.Get(srvflags.EVMMaxTxGasWanted)), TxFeeChecker: evmevmante.NewDynamicFeeChecker(app.FeeMarketKeeper), - // ... other options + / ... 
other options } anteHandler, err := ante.NewAnteHandler(options) @@ -666,12 +680,12 @@ app.SetAnteHandler(anteHandler) func (a *ChainApp) DefaultGenesis() map[string]json.RawMessage { genesis := a.BasicModuleManager.DefaultGenesis(a.appCodec) - // Add EVM genesis config + / Add EVM genesis config evmGenState := evmtypes.DefaultGenesisState() evmGenState.Params.ActiveStaticPrecompiles = evmtypes.AvailableStaticPrecompiles genesis[evmtypes.ModuleName] = a.appCodec.MustMarshalJSON(evmGenState) - // Add ERC20 genesis config + / Add ERC20 genesis config erc20GenState := erc20types.DefaultGenesisState() genesis[erc20types.ModuleName] = a.appCodec.MustMarshalJSON(erc20GenState) @@ -754,7 +768,7 @@ import ( cosmosante "github.com/cosmos/evm/ante/cosmos" ) -// newCosmosAnteHandler creates the default SDK ante handler for Cosmos transactions +/ newCosmosAnteHandler creates the default SDK ante handler for Cosmos transactions func newCosmosAnteHandler(options HandlerOptions) sdk.AnteHandler { return sdk.ChainAnteDecorators( ante.NewSetUpContextDecorator(), @@ -790,7 +804,7 @@ import ( evmante "github.com/cosmos/evm/ante/evm" ) -// newMonoEVMAnteHandler creates the sdk.AnteHandler for EVM transactions +/ newMonoEVMAnteHandler creates the sdk.AnteHandler for EVM transactions func newMonoEVMAnteHandler(options HandlerOptions) sdk.AnteHandler { return sdk.ChainAnteDecorators( evmante.NewEVMMonoDecorator( @@ -816,7 +830,7 @@ import ( "github.com/cosmos/evm/ante/evm" ) -// NewAnteHandler routes Ethereum or SDK transactions to the appropriate handler +/ NewAnteHandler routes Ethereum or SDK transactions to the appropriate handler func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { if err := options.Validate(); err != nil { return nil, err @@ -826,10 +840,10 @@ func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { var anteHandler sdk.AnteHandler if ethTx, ok := tx.(*evm.EthTx); ok { - // Handle as Ethereum transaction + / Handle as Ethereum 
transaction anteHandler = newMonoEVMAnteHandler(options) } else { - // Handle as normal Cosmos SDK transaction + / Handle as normal Cosmos SDK transaction anteHandler = newCosmosAnteHandler(options) } @@ -844,14 +858,14 @@ func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { ```go "cmd/evmd/commands.go Updates" expandable import ( - // Add imports + / Add imports evmcmd "github.com/cosmos/evm/client" evmserver "github.com/cosmos/evm/server" evmserverconfig "github.com/cosmos/evm/server/config" srvflags "github.com/cosmos/evm/server/flags" ) -// Define custom app config struct +/ Define custom app config struct type CustomAppConfig struct { serverconfig.Config EVM evmserverconfig.EVMConfig @@ -859,7 +873,7 @@ type CustomAppConfig struct { TLS evmserverconfig.TLSConfig } -// Update initAppConfig to include EVM config +/ Update initAppConfig to include EVM config func initAppConfig() (string, interface{}) { srvCfg, customAppTemplate := serverconfig.AppConfig(DefaultDenom) customAppConfig := CustomAppConfig{ @@ -872,9 +886,9 @@ func initAppConfig() (string, interface{}) { return customAppTemplate, customAppConfig } -// In initRootCmd, replace server.AddCommands with evmserver.AddCommands +/ In initRootCmd, replace server.AddCommands with evmserver.AddCommands func initRootCmd(...) { - // ... + / ... evmserver.AddCommands( rootCmd, evmserver.NewDefaultStartOptions(newApp, app.DefaultNodeHome), @@ -883,7 +897,7 @@ func initRootCmd(...) { ) rootCmd.AddCommand( - // ... existing commands + / ... existing commands evmcmd.KeyCommands(app.DefaultNodeHome, true), ) @@ -899,7 +913,7 @@ func initRootCmd(...) { ```go "cmd/evmd/root.go Updates" expandable import ( - // ... existing imports + / ... existing imports evmkeyring "github.com/cosmos/evm/crypto/keyring" evmtypes "github.com/cosmos/evm/x/vm/types" sdk "github.com/cosmos/cosmos-sdk/types" @@ -907,19 +921,19 @@ import ( ) func NewRootCmd() *cobra.Command { - // ... - // In client context setup: + / ... 
+ / In client context setup: clientCtx = clientCtx. WithKeyringOptions(evmkeyring.Option()). WithBroadcastMode(flags.FlagBroadcastMode). WithLedgerHasProtobuf(true) - // Update the coin type + / Update the coin type cfg := sdk.GetConfig() - cfg.SetCoinType(evmtypes.Bip44CoinType) // Sets coin type to 60 + cfg.SetCoinType(evmtypes.Bip44CoinType) / Sets coin type to 60 cfg.Seal() - // ... + / ... return rootCmd } ``` @@ -931,22 +945,22 @@ Sign Mode Textual is a new Cosmos SDK signing method that may not be compatible ### Option A: Disable Sign Mode Textual (Recommended for pure EVM compatibility) ```go "Sign Mode Textual Configuration" expandable -// In app.go +/ In app.go import ( authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" ) -// ... in NewChainApp, where you set up your txConfig: +/ ... in NewChainApp, where you set up your txConfig: txConfig := authtx.NewTxConfigWithOptions( appCodec, authtx.ConfigOptions{ - // Remove SignMode_SIGN_MODE_TEXTUAL from enabled sign modes + / Remove SignMode_SIGN_MODE_TEXTUAL from enabled sign modes EnabledSignModes: []signing.SignMode{ signing.SignMode_SIGN_MODE_DIRECT, signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON, signing.SignMode_SIGN_MODE_EIP_191, }, - // ... + / ... 
}, ) ``` diff --git a/docs/evm/next/documentation/integration/fresh-v0.5-integration.mdx b/docs/evm/next/documentation/integration/fresh-v0.5-integration.mdx index ab7f2ece..95ef2274 100644 --- a/docs/evm/next/documentation/integration/fresh-v0.5-integration.mdx +++ b/docs/evm/next/documentation/integration/fresh-v0.5-integration.mdx @@ -30,7 +30,7 @@ require ( ) replace ( - // Required: Use Cosmos fork of go-ethereum + / Required: Use Cosmos fork of go-ethereum github.com/ethereum/go-ethereum => github.com/cosmos/go-ethereum v1.15.11-cosmos-0 ) ``` @@ -38,20 +38,20 @@ replace ( ### Key Imports for app.go ```go Key Imports for app.go expandable -// EVM-specific imports +/ EVM-specific imports import ( - // EVM ante handlers (updated import path in v0.5.0) + / EVM ante handlers (updated import path in v0.5.0) evmante "github.com/cosmos/evm/ante" cosmosevmante "github.com/cosmos/evm/ante/evm" - // Configuration + / Configuration evmconfig "github.com/cosmos/evm/config" srvflags "github.com/cosmos/evm/server/flags" - // Mempool (new in v0.5.0) + / Mempool (new in v0.5.0) evmmempool "github.com/cosmos/evm/mempool" - // Modules + / Modules "github.com/cosmos/evm/x/erc20" erc20keeper "github.com/cosmos/evm/x/erc20/keeper" erc20types "github.com/cosmos/evm/x/erc20/types" @@ -64,11 +64,11 @@ import ( evmkeeper "github.com/cosmos/evm/x/vm/keeper" evmtypes "github.com/cosmos/evm/x/vm/types" - // Override IBC transfer for ERC20 support + / Override IBC transfer for ERC20 support "github.com/cosmos/evm/x/ibc/transfer" transferkeeper "github.com/cosmos/evm/x/ibc/transfer/keeper" - // Server interface + / Server interface evmserver "github.com/cosmos/evm/server" "github.com/ethereum/go-ethereum/common" @@ -80,47 +80,47 @@ import ( ### App Struct with Required Fields ```go App Struct with Required Fields expandable -// App extends BaseApp with EVM functionality +/ App extends BaseApp with EVM functionality type App struct { *baseapp.BaseApp - // Encoding + / Encoding legacyAmino 
*codec.LegacyAmino appCodec codec.Codec interfaceRegistry types.InterfaceRegistry txConfig client.TxConfig - // REQUIRED: Client context for EVM operations + / REQUIRED: Client context for EVM operations clientCtx client.Context - // REQUIRED: Pending transaction listeners for mempool integration + / REQUIRED: Pending transaction listeners for mempool integration pendingTxListeners []evmante.PendingTxListener - // Store keys + / Store keys keys map[string]*storetypes.KVStoreKey tkeys map[string]*storetypes.TransientStoreKey memKeys map[string]*storetypes.MemoryStoreKey - // Standard Cosmos SDK keepers + / Standard Cosmos SDK keepers AccountKeeper authkeeper.AccountKeeper BankKeeper bankkeeper.Keeper StakingKeeper *stakingkeeper.Keeper SlashingKeeper slashingkeeper.Keeper DistrKeeper distrkeeper.Keeper GovKeeper govkeeper.Keeper - // ... other standard keepers + / ... other standard keepers - // REQUIRED: EVM-specific keepers + / REQUIRED: EVM-specific keepers FeeMarketKeeper feemarketkeeper.Keeper EVMKeeper *evmkeeper.Keeper Erc20Keeper erc20keeper.Keeper - PreciseBankKeeper precisebankkeeper.Keeper // Critical for 18-decimal precision - TransferKeeper transferkeeper.Keeper // EVM-enhanced IBC transfer + PreciseBankKeeper precisebankkeeper.Keeper / Critical for 18-decimal precision + TransferKeeper transferkeeper.Keeper / EVM-enhanced IBC transfer - // REQUIRED: EVM mempool (new in v0.5.0) + / REQUIRED: EVM mempool (new in v0.5.0) EVMMempool *evmmempool.ExperimentalEVMMempool - // Module management + / Module management ModuleManager *module.Manager BasicModuleManager module.BasicManager sm *module.SimulationManager @@ -130,12 +130,12 @@ type App struct { ### Required Interface Methods ```go -// REQUIRED: SetClientCtx for EVM operations +/ REQUIRED: SetClientCtx for EVM operations func (app *App) SetClientCtx(clientCtx client.Context) { app.clientCtx = clientCtx } -// REQUIRED: AppWithPendingTxStream interface methods +/ REQUIRED: AppWithPendingTxStream interface 
methods func (app *App) RegisterPendingTxListener(listener func(common.Hash)) { app.pendingTxListeners = append(app.pendingTxListeners, listener) } @@ -150,14 +150,14 @@ func (app *App) onPendingTx(hash common.Hash) { ### App Constructor Return Type ```go -// CRITICAL: Must return evmserver.Application, not servertypes.Application +/ CRITICAL: Must return evmserver.Application, not servertypes.Application func NewApp( logger log.Logger, db dbm.DB, traceStore io.Writer, appOpts servertypes.AppOptions, -) evmserver.Application { // NOT servertypes.Application - // ... implementation +) evmserver.Application { / NOT servertypes.Application + / ... implementation return app } ``` @@ -167,41 +167,41 @@ func NewApp( ### Store Key Registration ```go Store Key Registration expandable -// Add EVM-specific store keys to your existing keys +/ Add EVM-specific store keys to your existing keys keys := storetypes.NewKVStoreKeys( - // Standard SDK keys... + / Standard SDK keys... authtypes.StoreKey, banktypes.StoreKey, - // ... other standard keys + / ... other standard keys - // REQUIRED: EVM module store keys + / REQUIRED: EVM module store keys evmtypes.StoreKey, feemarkettypes.StoreKey, erc20types.StoreKey, - precisebanktypes.StoreKey, // For 18-decimal precision - ibctransfertypes.StoreKey, // EVM-enhanced IBC transfer + precisebanktypes.StoreKey, / For 18-decimal precision + ibctransfertypes.StoreKey, / EVM-enhanced IBC transfer ) tkeys := storetypes.NewTransientStoreKeys( - evmtypes.TransientKey, // Required for EVM state transitions + evmtypes.TransientKey, / Required for EVM state transitions ) ``` ### Module Registration ```go Module Registration expandable -// Module manager with EVM modules +/ Module manager with EVM modules app.ModuleManager = module.NewManager( - // Standard SDK modules... + / Standard SDK modules... auth.NewAppModule(appCodec, app.AccountKeeper, ...), bank.NewAppModule(appCodec, app.BankKeeper, ...), - // ... other standard modules + / ... 
other standard modules - // IBC modules (use EVM-enhanced transfer) + / IBC modules (use EVM-enhanced transfer) ibc.NewAppModule(app.IBCKeeper), - transferModule, // EVM-enhanced, not standard IBC transfer + transferModule, / EVM-enhanced, not standard IBC transfer - // REQUIRED: EVM modules in dependency order + / REQUIRED: EVM modules in dependency order vm.NewAppModule(app.EVMKeeper, app.AccountKeeper, app.AccountKeeper.AddressCodec()), feemarket.NewAppModule(app.FeeMarketKeeper), erc20.NewAppModule(app.Erc20Keeper, app.AccountKeeper), @@ -211,72 +211,89 @@ app.ModuleManager = module.NewManager( ## Step 4: Keeper Initialization +### Why Initialization Order Matters + +The keeper initialization order is critical due to dependencies between modules: +- **PreciseBank** must be initialized before EVM keeper as it wraps the bank keeper to provide 18-decimal precision +- **FeeMarket** must exist before EVM keeper as EVM uses it for gas pricing +- **EVM keeper** must be initialized before ERC20 keeper due to a forward reference pattern +- **ERC20 keeper** requires a fully initialized EVM keeper to manage token contracts + ### Initialization Order (Critical) ```go Initialization Order (Critical) expandable -// 1. Standard SDK keepers first +/ 1. Standard SDK keepers first app.AccountKeeper = authkeeper.NewAccountKeeper(...) app.BankKeeper = bankkeeper.NewKeeper(...) app.StakingKeeper = stakingkeeper.NewKeeper(...) -// ... other SDK keepers +/ ... other SDK keepers -// 2. PreciseBank keeper (wraps bank keeper for 18-decimal precision) +/ 2. PreciseBank keeper - REQUIRED for 18-decimal precision +/ Why: Cosmos native tokens use 6 decimals, EVM uses 18. PreciseBank handles conversion. +/ Without this: EVM transactions will lose precision, breaking DeFi protocols app.PreciseBankKeeper = precisebankkeeper.NewKeeper( appCodec, keys[precisebanktypes.StoreKey], - app.BankKeeper, + app.BankKeeper, / Wraps the standard bank keeper app.AccountKeeper, ) -// 3. 
FeeMarket keeper (required by EVM) +/ 3. FeeMarket keeper - REQUIRED for EIP-1559 gas pricing +/ Why: Implements dynamic base fee adjustment based on block utilization +/ Without this: EVM will use static gas prices, vulnerable to spam app.FeeMarketKeeper = feemarketkeeper.NewKeeper( appCodec, - authtypes.NewModuleAddress(govtypes.ModuleName), + authtypes.NewModuleAddress(govtypes.ModuleName), / Gov can update params keys[feemarkettypes.StoreKey], - tkeys[feemarkettypes.TransientKey], + tkeys[feemarkettypes.TransientKey], / For temporary base fee calculations app.GetSubspace(feemarkettypes.ModuleName), ) -// 4. EVM keeper (MUST be before ERC20 keeper) +/ 4. EVM keeper - Core EVM functionality +/ CRITICAL: Must be before ERC20 keeper due to forward reference pattern tracer := cast.ToString(appOpts.Get(srvflags.EVMTracer)) app.EVMKeeper = evmkeeper.NewKeeper( appCodec, - keys[evmtypes.StoreKey], - tkeys[evmtypes.TransientKey], - keys, + keys[evmtypes.StoreKey], / Persistent EVM state + tkeys[evmtypes.TransientKey], / Temporary execution state + keys, / All store keys for precompile access authtypes.NewModuleAddress(govtypes.ModuleName), app.AccountKeeper, - app.PreciseBankKeeper, // Use PreciseBank for 18-decimal support - app.StakingKeeper, - app.FeeMarketKeeper, + app.PreciseBankKeeper, / MUST use PreciseBank, not BankKeeper + app.StakingKeeper, / For precompile access to staking + app.FeeMarketKeeper, / For dynamic gas pricing &app.ConsensusParamsKeeper, - &app.Erc20Keeper, // Forward reference - tracer, + &app.Erc20Keeper, / Forward reference - will be set below + tracer, / "" for production, "json" for debugging ) -// 5. ERC20 keeper (after EVM keeper) +/ 5. 
ERC20 keeper - Manages ERC20 token registration +/ Why after EVM: Needs EVM keeper to deploy token contracts +/ Why before Transfer: IBC transfer needs ERC20 for token mapping app.Erc20Keeper = erc20keeper.NewKeeper( keys[erc20types.StoreKey], appCodec, authtypes.NewModuleAddress(govtypes.ModuleName), app.AccountKeeper, - app.PreciseBankKeeper, - app.EVMKeeper, - app.StakingKeeper, + app.PreciseBankKeeper, / For token balance conversions + app.EVMKeeper, / To deploy and interact with contracts + app.StakingKeeper, / For staking derivative tokens ) -// 6. Enhanced IBC Transfer keeper for ERC20 support +/ 6. Enhanced IBC Transfer keeper - REPLACES standard IBC transfer +/ Why enhanced: Automatic ERC20 registration for IBC tokens +/ Without this: IBC tokens won't be accessible from EVM app.TransferKeeper = transferkeeper.NewKeeper( appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), - app.IBCKeeper.ChannelKeeper, - app.IBCKeeper.ChannelKeeper, - app.IBCKeeper.PortKeeper, + app.IBCKeeper.ChannelKeeper, / For IBC channel operations + app.IBCKeeper.ChannelKeeper, / Duplicate required by interface + app.IBCKeeper.PortKeeper, / For IBC port binding app.AccountKeeper, - app.PreciseBankKeeper, // Use PreciseBank + app.PreciseBankKeeper, / MUST use PreciseBank for 18-decimal support scopedTransferKeeper, - app.Erc20Keeper, // EVM integration + app.Erc20Keeper, / For automatic ERC20 registration of IBC tokens ) ``` @@ -298,17 +315,17 @@ func (app *App) setAnteHandler(txConfig client.TxConfig, maxGasWanted uint64) { SignModeHandler: txConfig.SignModeHandler(), SigGasConsumer: evmante.SigVerificationGasConsumer, - // v0.5.0 new parameters - MaxTxGasWanted: maxGasWanted, // From app.toml evm.max-tx-gas-wanted + / v0.5.0 new parameters + MaxTxGasWanted: maxGasWanted, / From app.toml evm.max-tx-gas-wanted TxFeeChecker: cosmosevmante.NewDynamicFeeChecker(app.FeeMarketKeeper), - PendingTxListener: app.onPendingTx, // Required for mempool 
integration + PendingTxListener: app.onPendingTx, / Required for mempool integration } if err := options.Validate(); err != nil { panic(fmt.Sprintf("ante handler options validation failed: %v", err)) } - // Import path changed from evmd/ante to evm/ante in v0.5.0 + / Import path changed from evmd/ante to evm/ante in v0.5.0 app.SetAnteHandler(evmante.NewAnteHandler(options)) } ``` @@ -316,49 +333,70 @@ func (app *App) setAnteHandler(txConfig client.TxConfig, maxGasWanted uint64) { ### Call During App Construction ```go -// In NewApp constructor, AFTER keeper initialization +/ In NewApp constructor, AFTER keeper initialization maxGasWanted := cast.ToUint64(appOpts.Get(srvflags.EVMMaxTxGasWanted)) app.setAnteHandler(app.txConfig, maxGasWanted) ``` ## Step 6: Mempool Integration (v0.5.0 Required) +### Why Custom Mempool is Required + +The standard Cosmos SDK mempool cannot handle Ethereum transactions because: +- **Nonce ordering**: Ethereum requires strict nonce ordering per account +- **Gas price dynamics**: EIP-1559 transactions use dynamic base fees +- **Transaction replacement**: Ethereum allows replacing pending transactions with higher gas +- **Dual transaction types**: Must handle both Cosmos and Ethereum transaction formats + +Without the EVM mempool: +- MetaMask transactions will be rejected +- Nonce gaps will cause transaction failures +- Gas estimation will be incorrect +- Transaction replacement won't work + ### Complete Mempool Setup ```go Complete Mempool Setup expandable -// In NewApp constructor, AFTER ante handler is set +/ In NewApp constructor, AFTER ante handler is set if cosmosevmtypes.GetChainConfig() != nil { - // Get configuration from app.toml and genesis.json + / Get configuration from app.toml and genesis.json blockGasLimit := evmconfig.GetBlockGasLimit(appOpts, logger) minTip := evmconfig.GetMinTip(appOpts, logger) - // Configure EVM mempool + / Configure EVM mempool with minimal required settings + / v0.5.0 uses smart defaults - only 
configure what you need mempoolConfig := &evmmempool.EVMMempoolConfig{ - AnteHandler: app.GetAnteHandler(), // Must be set first - BlockGasLimit: blockGasLimit, // From genesis consensus params - MinTip: minTip, // From app.toml evm.min-tip (v0.5.0) + AnteHandler: app.GetAnteHandler(), / Required: validates transactions + BlockGasLimit: blockGasLimit, / Default: 100M gas if not set + MinTip: minTip, / Default: 0 (no minimum tip) + / LegacyPoolConfig: nil, / Uses defaults: 10K tx capacity + / CosmosPoolConfig: nil, / Uses defaults: 5K tx capacity } - // Initialize EVM mempool + / Initialize EVM mempool + / This replaces the standard SDK mempool entirely evmMempool := evmmempool.NewExperimentalEVMMempool( - app.CreateQueryContext, + app.CreateQueryContext, / For state queries during validation logger, - app.EVMKeeper, - app.FeeMarketKeeper, - app.txConfig, - app.clientCtx, + app.EVMKeeper, / Access EVM state + app.FeeMarketKeeper, / Get current base fee + app.txConfig, / Transaction encoding/decoding + app.clientCtx, / Client context for RPC mempoolConfig, ) app.EVMMempool = evmMempool - // REQUIRED: Replace BaseApp mempool + / CRITICAL: Replace BaseApp mempool + / Without this: EVM transactions won't enter mempool app.SetMempool(evmMempool) - // REQUIRED: Set custom CheckTx handler + / CRITICAL: Set custom CheckTx handler + / Without this: EVM transaction validation fails checkTxHandler := evmmempool.NewCheckTxHandler(evmMempool) app.SetCheckTxHandler(checkTxHandler) - // REQUIRED: Set custom PrepareProposal handler + / CRITICAL: Set custom PrepareProposal handler + / Without this: EVM transactions won't be included in blocks abciProposalHandler := baseapp.NewDefaultProposalHandler(evmMempool, app) abciProposalHandler.SetSignerExtractionAdapter( evmmempool.NewEthSignerExtractionAdapter( @@ -369,12 +407,51 @@ if cosmosevmtypes.GetChainConfig() != nil { } ``` +### What Happens Without Each Component + +| Component | Impact if Missing | 
+|-----------|-------------------| +| SetMempool | EVM transactions rejected with "unsupported tx type" | +| CheckTxHandler | Transaction validation uses wrong rules, nonce errors | +| PrepareProposal | EVM transactions in mempool but never included in blocks | +| SignerExtractor | Cannot identify transaction sender, auth failures | + ## Step 7: Precompile Registration +### What Are Precompiles and When Do You Need Them? + +Precompiles are special contracts at fixed addresses that provide native Cosmos SDK functionality to EVM smart contracts. They execute native Go code instead of EVM bytecode, offering: +- **Gas efficiency**: 10-100x cheaper than Solidity implementations +- **Native integration**: Direct access to Cosmos SDK modules +- **Atomicity**: Operations complete in the same transaction + +#### Decision Tree for Precompiles + +| If You Need... | Required Precompiles | Optional Precompiles | +|----------------|---------------------|----------------------| +| Basic EVM only | None | Bech32, P256 | +| DeFi protocols | Bank | Staking, Distribution | +| Liquid staking | Bank, Staking | Distribution, Governance | +| Cross-chain assets | Bank, ICS20 | ERC20 | +| Full Cosmos features | All | Custom precompiles | + +#### Precompile Addresses and Functions + +| Precompile | Address | Key Functions | Gas Cost | +|------------|---------|---------------|----------| +| Bech32 | 0x0000...0100 | Address conversion | ~3,000 | +| P256 | 0x0000...0400 | Secp256r1 signatures | ~3,500 | +| Bank | 0x0000...0800 | Send, balance queries | ~25,000 | +| Staking | 0x0000...0801 | Delegate, undelegate | ~50,000 | +| Distribution | 0x0000...0802 | Withdraw rewards | ~30,000 | +| ICS20 | 0x0000...0804 | IBC transfers | ~100,000 | +| Governance | 0x0000...0805 | Vote, deposit | ~30,000 | +| Slashing | 0x0000...0806 | Unjail validator | ~50,000 | + ### Interface-Based Precompile Setup ```go Interface-Based Precompile Setup expandable -// Create precompile options with address codecs +/ 
Create precompile options with address codecs type Optionals struct { AddressCodec address.Codec ValidatorAddrCodec address.Codec @@ -389,11 +466,11 @@ func defaultOptionals() Optionals { } } -// Register static precompiles with interface-based constructors +/ Register static precompiles with interface-based constructors func NewAvailableStaticPrecompiles( stakingKeeper stakingkeeper.Keeper, distributionKeeper distributionkeeper.Keeper, - bankKeeper cmn.BankKeeper, // Interface, not concrete type + bankKeeper cmn.BankKeeper, / Interface, not concrete type erc20Keeper erc20keeper.Keeper, transferKeeper transferkeeper.Keeper, channelKeeper channelkeeper.Keeper, @@ -409,14 +486,14 @@ func NewAvailableStaticPrecompiles( precompiles := make(map[common.Address]vm.PrecompiledContract) - // Bank precompile + / Bank precompile bankPrecompile, err := bankprecompile.NewPrecompile(bankKeeper, cdc) if err != nil { return nil, fmt.Errorf("failed to instantiate bank precompile: %w", err) } precompiles[bankPrecompile.Address()] = bankPrecompile - // Distribution precompile + / Distribution precompile distributionPrecompile, err := distprecompile.NewPrecompile( cmn.DistributionKeeper(distributionKeeper), cdc, @@ -427,7 +504,7 @@ func NewAvailableStaticPrecompiles( } precompiles[distributionPrecompile.Address()] = distributionPrecompile - // Staking precompile + / Staking precompile stakingPrecompile, err := stakingprecompile.NewPrecompile( cmn.StakingKeeper(stakingKeeper), cdc, @@ -438,9 +515,9 @@ func NewAvailableStaticPrecompiles( } precompiles[stakingPrecompile.Address()] = stakingPrecompile - // ICS20 precompile (parameter order changed in v0.4.0) + / ICS20 precompile (parameter order changed in v0.4.0) ics20Precompile, err := ics20precompile.NewPrecompile( - bankKeeper, // bankKeeper FIRST (changed in v0.4.0) + bankKeeper, / bankKeeper FIRST (changed in v0.4.0) stakingKeeper, transferKeeper, channelKeeper, @@ -452,18 +529,18 @@ func NewAvailableStaticPrecompiles( } 
precompiles[ics20Precompile.Address()] = ics20Precompile - // Governance precompile (address codec required in v0.4.0) + / Governance precompile (address codec required in v0.4.0) govPrecompile, err := govprecompile.NewPrecompile( cmn.GovKeeper(govKeeper), cdc, - options.AddressCodec, // Required in v0.4.0 + options.AddressCodec, / Required in v0.4.0 ) if err != nil { return nil, fmt.Errorf("failed to instantiate gov precompile: %w", err) } precompiles[govPrecompile.Address()] = govPrecompile - // Slashing precompile + / Slashing precompile slashingPrecompile, err := slashingprecompile.NewPrecompile( cmn.SlashingKeeper(slashingKeeper), cdc, @@ -475,14 +552,14 @@ func NewAvailableStaticPrecompiles( } precompiles[slashingPrecompile.Address()] = slashingPrecompile - // Bech32 precompile + / Bech32 precompile bech32Precompile, err := bech32.NewPrecompile() if err != nil { return nil, fmt.Errorf("failed to instantiate bech32 precompile: %w", err) } precompiles[bech32Precompile.Address()] = bech32Precompile - // P256 precompile (secp256r1) + / P256 precompile (secp256r1) p256Precompile, err := p256.NewPrecompile() if err != nil { return nil, fmt.Errorf("failed to instantiate p256 precompile: %w", err) @@ -498,9 +575,9 @@ func NewAvailableStaticPrecompiles( ### Required CLI Wrapper ```go -// cmd/myapp/cmd/root.go +/ cmd/myapp/cmd/root.go -// REQUIRED: Wrapper for SDK commands that expect servertypes.Application +/ REQUIRED: Wrapper for SDK commands that expect servertypes.Application sdkAppCreatorWrapper := func( logger log.Logger, db dbm.DB, @@ -510,7 +587,7 @@ sdkAppCreatorWrapper := func( return ac.newApp(logger, db, traceStore, appOpts) } -// Use wrapper for pruning and snapshot commands +/ Use wrapper for pruning and snapshot commands rootCmd.AddCommand( pruning.Cmd(sdkAppCreatorWrapper, app.DefaultNodeHome), snapshot.Cmd(sdkAppCreatorWrapper), @@ -519,24 +596,44 @@ rootCmd.AddCommand( ## Step 9: Configuration Management +### Understanding app.toml EVM Parameters + 
+Each parameter controls specific EVM behavior with direct impact on your chain's operation: + ### app.toml EVM Section ```toml app.toml EVM Section expandable # EVM configuration (v0.5.0) [evm] -# Minimum priority fee for mempool (in wei) +# Minimum priority fee for mempool inclusion (in wei) +# 0 = accept any transaction (permissioned/test chains) +# 1000000000 (1 gwei) = standard for public chains (spam protection) +# Impact: Too low allows spam, too high excludes legitimate users min-tip = 0 -# Maximum gas for eth transactions in check tx mode +# Maximum gas allowed for a single transaction during CheckTx +# 0 = use block gas limit (default behavior) +# 50000000 = limit individual transactions to 50M gas +# Why limit: Prevent DoS from expensive CheckTx validation +# Impact: Too low blocks complex DeFi operations max-tx-gas-wanted = 0 -# EIP-155 chain ID (separate from Cosmos chain ID) +# EIP-155 chain ID for transaction signatures +# MUST be unique across all EVM chains to prevent replay attacks +# Calculate: hash(cosmos_chain_id) % 2^32 for uniqueness +# Common values: 1 (mainnet fork), 31337 (local dev), your custom ID evm-chain-id = 262144 -# EVM execution tracer (empty = disabled) +# EVM execution tracer for debugging +# "" = disabled (production - best performance) +# "json" = structured JSON output (debugging) +# "struct" = detailed struct logging (deep debugging) +# "access_list" = track state access (gas optimization) +# Impact: Tracing reduces performance by 20-50% tracer = "" -# SHA3 preimage tracking (not implemented yet) +# SHA3 preimage tracking (not implemented in v0.5.0) +# Reserved for future Ethereum compatibility cache-preimage = false [json-rpc] @@ -565,24 +662,50 @@ block-range-cap = 10000 "app_state": { "evm": { "params": { + # The denomination used for EVM transactions (18 decimal places) + # Must match your chain's base denomination with 'a' prefix + # Example: "uatom" → "aatom", "stake" → "astake" "evm_denom": "atoken", + + # Additional 
EIPs to activate beyond standard set + # [] = use defaults (sufficient for 99% of chains) + # [3855] = activate EIP-3855 (PUSH0 opcode) if needed + # Warning: Only add EIPs you fully understand "extra_eips": [], + + # IBC channels for cross-chain EVM calls (advanced feature) + # [] = no cross-chain EVM (recommended for most chains) + # ["channel-0"] = allow EVM calls over specified channel "evm_channels": [], + + # Access control for contract deployment and calls + # PERMISSIONLESS = anyone can deploy/call (public chains) + # RESTRICTED = only allowlisted addresses (private chains) + # FORBIDDEN = completely disabled (security lockdown) "access_control": { "create": {"access_type": "ACCESS_TYPE_PERMISSIONLESS"}, "call": {"access_type": "ACCESS_TYPE_PERMISSIONLESS"} }, + + # Precompiles to activate (must match registered precompiles) + # Order doesn't matter, but all addresses must be exact + # Missing precompile = contract calls fail with "no code" "active_static_precompiles": [ - "0x0000000000000000000000000000000000000100", - "0x0000000000000000000000000000000000000400", - "0x0000000000000000000000000000000000000800", - "0x0000000000000000000000000000000000000801", - "0x0000000000000000000000000000000000000802", - "0x0000000000000000000000000000000000000804", - "0x0000000000000000000000000000000000000805", - "0x0000000000000000000000000000000000000806", - "0x0000000000000000000000000000000000000807" + "0x0000000000000000000000000000000000000100", # Bech32 + "0x0000000000000000000000000000000000000400", # P256 + "0x0000000000000000000000000000000000000800", # Bank + "0x0000000000000000000000000000000000000801", # Staking + "0x0000000000000000000000000000000000000802", # Distribution + "0x0000000000000000000000000000000000000804", # ICS20 + "0x0000000000000000000000000000000000000805", # Governance + "0x0000000000000000000000000000000000000806", # Slashing + "0x0000000000000000000000000000000000000807" # Unknown/Custom ], + + # Number of blocks to serve for 
eth_getBlockByNumber + # 8192 = ~13 hours at 6s blocks (standard) + # 43200 = ~3 days (for archive nodes) + # Impact: Higher = more disk usage, better for indexers "history_serve_window": 8192 } }, @@ -612,7 +735,7 @@ block-range-cap = 10000 ### 18-Decimal Precision Setup ```go -// Set power reduction for 18-decimal base unit (EVM standard) +/ Set power reduction for 18-decimal base unit (EVM standard) func init() { sdk.DefaultPowerReduction = sdkmath.NewIntFromBigInt( new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil), @@ -623,7 +746,7 @@ func init() { ### Coin Type Configuration ```go -// Use coin type 60 for Ethereum compatibility +/ Use coin type 60 for Ethereum compatibility const CoinType uint32 = 60 ``` @@ -715,64 +838,6 @@ batch-request-limit = 100 tracer = "" ``` -## Common Integration Issues - -### Issue: Build Failure - Interface Not Satisfied - -``` -error: *App does not implement evmserver.Application -``` - -**Solution**: Ensure app constructor returns `evmserver.Application`: - -```diff -- func NewApp(...) servertypes.Application { -+ func NewApp(...) 
evmserver.Application { -``` - -### Issue: Runtime Panic - Missing Interface Methods - -``` -panic: App must implement AppWithPendingTxStream interface -``` - -**Solution**: Implement required interface methods: - -```go -func (app *App) RegisterPendingTxListener(listener func(common.Hash)) { - app.pendingTxListeners = append(app.pendingTxListeners, listener) -} -``` - -### Issue: Mempool Configuration Error - -``` -error: mempool config must not be nil -``` - -**Solution**: Ensure mempool configuration is properly set: - -```go Mempool Configuration Fix expandable -mempoolConfig := &evmmempool.EVMMempoolConfig{ - AnteHandler: app.GetAnteHandler(), // Must be set AFTER ante handler - BlockGasLimit: blockGasLimit, - MinTip: minTip, // From app.toml evm.min-tip (v0.5.0) -} -``` - -### Issue: JSON-RPC Methods Not Working - -``` -error: eth_createAccessList not found -``` - -**Solution**: Ensure indexer is enabled and mempool is configured: - -```toml -[json-rpc] -enable-indexer = true -api = "eth,net,web3,txpool" -``` ## Integration Checklist diff --git a/docs/evm/next/documentation/integration/mempool-integration.mdx b/docs/evm/next/documentation/integration/mempool-integration.mdx index d381a961..99ac14ad 100644 --- a/docs/evm/next/documentation/integration/mempool-integration.mdx +++ b/docs/evm/next/documentation/integration/mempool-integration.mdx @@ -15,9 +15,9 @@ Update your `app/app.go` to include the EVM mempool: ```go type App struct { *baseapp.BaseApp - // ... other keepers + / ... 
other keepers - // Cosmos EVM keepers + / Cosmos EVM keepers FeeMarketKeeper feemarketkeeper.Keeper EVMKeeper *evmkeeper.Keeper EVMMempool *evmmempool.ExperimentalEVMMempool @@ -34,7 +34,7 @@ Add the following configuration in your `NewApp` constructor: ```go -// Set the EVM priority nonce mempool +/ Set the EVM priority nonce mempool if evmtypes.GetChainConfig() != nil { mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), @@ -52,14 +52,14 @@ if evmtypes.GetChainConfig() != nil { ) app.EVMMempool = evmMempool - // Replace BaseApp mempool + / Replace BaseApp mempool app.SetMempool(evmMempool) - // Set custom CheckTx handler for nonce gap support + / Set custom CheckTx handler for nonce gap support checkTxHandler := evmmempool.NewCheckTxHandler(evmMempool) app.SetCheckTxHandler(checkTxHandler) - // Set custom PrepareProposal handler + / Set custom PrepareProposal handler abciProposalHandler := baseapp.NewDefaultProposalHandler(evmMempool, app) abciProposalHandler.SetSignerExtractionAdapter( evmmempool.NewEthSignerExtractionAdapter( @@ -84,7 +84,7 @@ The `EVMMempoolConfig` struct provides several configuration options for customi ```go mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), - BlockGasLimit: 100_000_000, // 100M gas limit + BlockGasLimit: 100_000_000, / 100M gas limit } ``` @@ -92,22 +92,22 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ ```go type EVMMempoolConfig struct { - // Required: AnteHandler for transaction validation + / Required: AnteHandler for transaction validation AnteHandler sdk.AnteHandler - // Required: Block gas limit for transaction selection + / Required: Block gas limit for transaction selection BlockGasLimit uint64 - // Optional: Custom legacy pool configuration (replaces TxPool) + / Optional: Custom legacy pool configuration (replaces TxPool) LegacyPoolConfig *legacypool.Config - // Optional: Custom Cosmos pool configuration (replaces CosmosPool) + / Optional: Custom 
Cosmos pool configuration (replaces CosmosPool) CosmosPoolConfig *sdkmempool.PriorityNonceMempoolConfig[math.Int] - // Optional: Custom broadcast function for promoted transactions + / Optional: Custom broadcast function for promoted transactions BroadcastTxFn func(txs []*ethtypes.Transaction) error - // Optional: Minimum tip required for EVM transactions + / Optional: Minimum tip required for EVM transactions MinTip *uint256.Int } ``` @@ -129,8 +129,8 @@ type EVMMempoolConfig struct { **Before (v0.4.x):** ```go mempoolConfig := &evmmempool.EVMMempoolConfig{ - TxPool: customTxPool, // ← REMOVED - CosmosPool: customCosmosPool, // ← REMOVED + TxPool: customTxPool, / ← REMOVED + CosmosPool: customCosmosPool, / ← REMOVED AnteHandler: app.GetAnteHandler(), BlockGasLimit: 100_000_000, } @@ -139,13 +139,13 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ **After (v0.5.0):** ```go mempoolConfig := &evmmempool.EVMMempoolConfig{ - LegacyPoolConfig: &legacypool.Config{ // ← NEW + LegacyPoolConfig: &legacypool.Config{ / ← NEW AccountSlots: 16, GlobalSlots: 5120, PriceLimit: 1, - // ... other config options + / ... other config options }, - CosmosPoolConfig: &sdkmempool.PriorityNonceMempoolConfig[math.Int]{ // ← NEW + CosmosPoolConfig: &sdkmempool.PriorityNonceMempoolConfig[math.Int]{ / ← NEW TxPriority: customPriorityConfig, }, AnteHandler: app.GetAnteHandler(), @@ -158,7 +158,7 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ The mempool uses a `PriorityNonceMempool` for Cosmos transactions by default. 
You can customize the priority calculation: ```go -// Define custom priority calculation for Cosmos transactions +/ Define custom priority calculation for Cosmos transactions priorityConfig := sdkmempool.PriorityNonceMempoolConfig[math.Int]{ TxPriority: sdkmempool.TxPriority[math.Int]{ GetTxPriority: func(goCtx context.Context, tx sdk.Tx) math.Int { @@ -167,20 +167,20 @@ priorityConfig := sdkmempool.PriorityNonceMempoolConfig[math.Int]{ return math.ZeroInt() } - // Get fee in bond denomination - bondDenom := "test" // or your chain's bond denom + / Get fee in bond denomination + bondDenom := "test" / or your chain's bond denom fee := feeTx.GetFee() found, coin := fee.Find(bondDenom) if !found { return math.ZeroInt() } - // Calculate gas price: fee_amount / gas_limit + / Calculate gas price: fee_amount / gas_limit gasPrice := coin.Amount.Quo(math.NewIntFromUint64(feeTx.GetGas())) return gasPrice }, Compare: func(a, b math.Int) int { - return a.BigInt().Cmp(b.BigInt()) // Higher values have priority + return a.BigInt().Cmp(b.BigInt()) / Higher values have priority }, MinValue: math.ZeroInt(), }, @@ -189,7 +189,7 @@ priorityConfig := sdkmempool.PriorityNonceMempoolConfig[math.Int]{ mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), BlockGasLimit: 100_000_000, - CosmosPoolConfig: &priorityConfig, // Pass config instead of pre-built pool + CosmosPoolConfig: &priorityConfig, / Pass config instead of pre-built pool } ``` @@ -198,7 +198,7 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ Different chains may require different gas limits based on their capacity: ```go -// Example: 50M gas limit for lower capacity chains +/ Example: 50M gas limit for lower capacity chains mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), BlockGasLimit: 50_000_000, @@ -210,7 +210,7 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ For best results, connect the mempool to CometBFT's EventBus so it can react to finalized blocks: 
```go -// After starting the CometBFT node +/ After starting the CometBFT node if m, ok := app.GetMempool().(*evmmempool.ExperimentalEVMMempool); ok { m.SetEventBus(bftNode.EventBus()) } @@ -238,9 +238,9 @@ Custom transaction validation that handles nonce gaps specially (`mempool/check_ **Special Handling**: On `ErrNonceGap` for EVM transactions: ```go if errors.Is(err, ErrNonceGap) { - // Route to local queue instead of rejecting + / Route to local queue instead of rejecting err := mempool.InsertInvalidNonce(request.Tx) - // Must intercept error and return success to EVM client + / Must intercept error and return success to EVM client return interceptedSuccess } ``` @@ -260,7 +260,7 @@ Standard Cosmos SDK mempool for non-EVM transactions with fee-based prioritizati **Default Priority Calculation**: ```go -// Calculate effective gas price +/ Calculate effective gas price priority = (fee_amount / gas_limit) - base_fee ``` @@ -284,12 +284,12 @@ The mempool handles different transaction types appropriately: During block building, both transaction types compete fairly based on their effective tips: ```go -// Simplified selection logic +/ Simplified selection logic func SelectTransactions() Iterator { - evmTxs := GetPendingEVMTransactions() // From local TxPool - cosmosTxs := GetPendingCosmosTransactions() // From Cosmos mempool + evmTxs := GetPendingEVMTransactions() / From local TxPool + cosmosTxs := GetPendingCosmosTransactions() / From Cosmos mempool - return NewUnifiedIterator(evmTxs, cosmosTxs) // Fee-based priority + return NewUnifiedIterator(evmTxs, cosmosTxs) / Fee-based priority } ``` @@ -305,10 +305,10 @@ func SelectTransactions() Iterator { Test that transactions with nonce gaps are properly queued: ```javascript -// Send transactions out of order -await wallet.sendTransaction({nonce: 100, ...}); // OK: Immediate execution -await wallet.sendTransaction({nonce: 102, ...}); // OK: Queued locally (gap) -await wallet.sendTransaction({nonce: 101, ...}); // OK: 
Fills gap, both execute +/ Send transactions out of order +await wallet.sendTransaction({nonce: 100, ...}); / OK: Immediate execution +await wallet.sendTransaction({nonce: 102, ...}); / OK: Queued locally (gap) +await wallet.sendTransaction({nonce: 101, ...}); / OK: Fills gap, both execute ``` ### Test Transaction Replacement @@ -316,18 +316,18 @@ await wallet.sendTransaction({nonce: 101, ...}); // OK: Fills gap, both execute Verify that higher-fee transactions replace lower-fee ones: ```javascript -// Send initial transaction +/ Send initial transaction const tx1 = await wallet.sendTransaction({ nonce: 100, gasPrice: parseUnits("20", "gwei") }); -// Replace with higher fee +/ Replace with higher fee const tx2 = await wallet.sendTransaction({ - nonce: 100, // Same nonce - gasPrice: parseUnits("30", "gwei") // Higher fee + nonce: 100, / Same nonce + gasPrice: parseUnits("30", "gwei") / Higher fee }); -// tx1 is replaced by tx2 +/ tx1 is replaced by tx2 ``` ### Verify Batch Deployments @@ -335,11 +335,11 @@ const tx2 = await wallet.sendTransaction({ Test typical deployment scripts (like Uniswap) that send many transactions at once: ```javascript -// Deploy multiple contracts in quick succession +/ Deploy multiple contracts in quick succession const factory = await Factory.deploy(); const router = await Router.deploy(factory.address); const multicall = await Multicall.deploy(); -// All transactions should queue and execute properly +/ All transactions should queue and execute properly ``` ## Monitoring and Debugging diff --git a/docs/evm/next/documentation/integration/migration-v0.3-to-v0.4.mdx b/docs/evm/next/documentation/integration/migration-v0.3-to-v0.4.mdx index 07c388f5..ed0d9319 100644 --- a/docs/evm/next/documentation/integration/migration-v0.3-to-v0.4.mdx +++ b/docs/evm/next/documentation/integration/migration-v0.3-to-v0.4.mdx @@ -55,7 +55,7 @@ Update your app's `newApp` to return an `evmserver.Application` rather than `ser ### Change the return type ```go 
-// cmd/myapp/cmd/root.go +/ cmd/myapp/cmd/root.go import ( evmserver "github.com/cosmos/evm/server" ) @@ -65,8 +65,8 @@ func (a appCreator) newApp( db dbm.DB, traceStore io.Writer, appOpts servertypes.AppOptions, -) evmserver.Application { // Changed from servertypes.Application - // ... +) evmserver.Application { / Changed from servertypes.Application + / ... } ``` @@ -75,7 +75,7 @@ func (a appCreator) newApp( Create a thin wrapper and use it for `pruning.Cmd` and `snapshot.Cmd`: ```go -// cmd/myapp/cmd/root.go +/ cmd/myapp/cmd/root.go sdkAppCreatorWrapper := func(l log.Logger, d dbm.DB, w io.Writer, ao servertypes.AppOptions) servertypes.Application { return ac.newApp(l, d, w, ao) } @@ -91,13 +91,13 @@ rootCmd.AddCommand( Add the clientCtx to your app object: ```go -// app/app.go +/ app/app.go import ( "github.com/cosmos/cosmos-sdk/client" ) type MyApp struct { - // ... existing fields + / ... existing fields clientCtx client.Context } @@ -113,7 +113,7 @@ func (app *MyApp) SetClientCtx(clientCtx client.Context) { Import the EVM ante package and geth common: ```go -// app/app.go +/ app/app.go import ( "github.com/cosmos/evm/ante" "github.com/ethereum/go-ethereum/common" @@ -125,9 +125,9 @@ import ( Add a new field for listeners: ```go -// app/app.go +/ app/app.go type MyApp struct { - // ... existing fields + / ... 
existing fields pendingTxListeners []ante.PendingTxListener } ``` @@ -137,7 +137,7 @@ type MyApp struct { Add a public method to register a listener by txHash: ```go -// app/app.go +/ app/app.go func (app *MyApp) RegisterPendingTxListener(listener func(common.Hash)) { app.pendingTxListeners = append(app.pendingTxListeners, listener) } @@ -148,7 +148,7 @@ func (app *MyApp) RegisterPendingTxListener(listener func(common.Hash)) { ### New imports ```go -// app/keepers/precompiles.go +/ app/keepers/precompiles.go import ( "cosmossdk.io/core/address" addresscodec "github.com/cosmos/cosmos-sdk/codec/address" @@ -161,11 +161,11 @@ import ( Create a small options container with sane defaults pulled from the app's bech32 config: ```go -// app/keepers/precompiles.go +/ app/keepers/precompiles.go type Optionals struct { - AddressCodec address.Codec // used by gov/staking - ValidatorAddrCodec address.Codec // used by slashing - ConsensusAddrCodec address.Codec // used by slashing + AddressCodec address.Codec / used by gov/staking + ValidatorAddrCodec address.Codec / used by slashing + ConsensusAddrCodec address.Codec / used by slashing } func defaultOptionals() Optionals { @@ -194,17 +194,17 @@ func WithConsensusAddrCodec(c address.Codec) Option { ### 4.3 Update the precompile factory to accept options ```go -// app/keepers/precompiles.go +/ app/keepers/precompiles.go func NewAvailableStaticPrecompiles( ctx context.Context, - // ... other params + / ... other params opts ...Option, ) map[common.Address]vm.PrecompiledContract { options := defaultOptionals() for _, opt := range opts { opt(&options) } - // ... rest of implementation + / ... rest of implementation } ``` @@ -220,7 +220,7 @@ func NewAvailableStaticPrecompiles( + stakingKeeper, transferKeeper, &channelKeeper, - // ... + / ... 
``` **Gov precompile** now requires an `AddressCodec`: @@ -246,16 +246,16 @@ Include this migration with your upgrade if your chain has: ### Implementation -For complete migration instructions, see: **[ERC20 Precompiles Migration Guide](./erc20-precompiles-migration)** +For complete migration instructions, see: **[ERC20 Precompiles Migration Guide](/docs/evm/next/documentation/integration/erc20-precompiles-migration)** Add this to your upgrade handler: ```go -// In your upgrade handler +/ In your upgrade handler store := ctx.KVStore(storeKeys[erc20types.StoreKey]) const addressLength = 42 -// Migrate dynamic precompiles +/ Migrate dynamic precompiles if oldData := store.Get([]byte("DynamicPrecompiles")); len(oldData) > 0 { for i := 0; i < len(oldData); i += addressLength { address := common.HexToAddress(string(oldData[i : i+addressLength])) @@ -264,7 +264,7 @@ if oldData := store.Get([]byte("DynamicPrecompiles")); len(oldData) > 0 { store.Delete([]byte("DynamicPrecompiles")) } -// Migrate native precompiles +/ Migrate native precompiles if oldData := store.Get([]byte("NativePrecompiles")); len(oldData) > 0 { for i := 0; i < len(oldData); i += addressLength { address := common.HexToAddress(string(oldData[i : i+addressLength])) @@ -325,14 +325,14 @@ mantrachaind export | jq '.app_state.erc20.dynamic_precompiles' **App listeners** ```go -// app/app.go +/ app/app.go import ( "github.com/cosmos/evm/ante" "github.com/ethereum/go-ethereum/common" ) type MyApp struct { - // ... + / ... 
pendingTxListeners []ante.PendingTxListener } @@ -344,7 +344,7 @@ func (app *MyApp) RegisterPendingTxListener(l func(common.Hash)) { **CLI wrapper** ```go -// cmd/myapp/cmd/root.go +// cmd/myapp/cmd/root.go sdkAppCreatorWrapper := func(l log.Logger, d dbm.DB, w io.Writer, ao servertypes.AppOptions) servertypes.Application { return ac.newApp(l, d, w, ao) } @@ -358,9 +358,9 @@ rootCmd.AddCommand( **Precompile options & usage** ```go -// app/keepers/precompiles.go +// app/keepers/precompiles.go opts := []Option{ - // override defaults only if you use non-standard prefixes/codecs + // override defaults only if you use non-standard prefixes/codecs WithAddressCodec(myAcctCodec), WithValidatorAddrCodec(myValCodec), WithConsensusAddrCodec(myConsCodec), diff --git a/docs/evm/next/documentation/integration/migration-v0.4-to-v0.5.mdx b/docs/evm/next/documentation/integration/migration-v0.4-to-v0.5.mdx index dae75049..fe4ea01e 100644 --- a/docs/evm/next/documentation/integration/migration-v0.4-to-v0.5.mdx +++ b/docs/evm/next/documentation/integration/migration-v0.4-to-v0.5.mdx @@ -57,7 +57,9 @@ go mod tidy ### BREAKING: allow_unprotected_txs Parameter Removed -The `allow_unprotected_txs` parameter has been removed from VM module parameters. Non-EIP-155 transaction acceptance is now managed per-node. +**What it was**: This parameter controlled whether to accept non-EIP-155 transactions (transactions without a chain ID in the signature). + +**Why removed**: Security concerns - non-EIP-155 transactions are vulnerable to replay attacks across chains. The v0.5.0 approach is to reject these by default, with node operators able to override via local configuration if absolutely necessary (not recommended).
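Whether a legacy transaction is replay-protected can be read off its signature's `v` value: EIP-155 folds the chain ID into `v` (`v = chainID*2 + 35` or `+ 36`), while pre-EIP-155 signatures use `v = 27` or `28`. A minimal illustrative sketch, not code from this repository:

```go
package main

import "fmt"

// isProtected reports whether a legacy transaction's signature v value
// carries EIP-155 replay protection (i.e. commits to a chain ID).
func isProtected(v uint64) bool {
	return v != 27 && v != 28
}

// chainIDFromV recovers the chain ID from a protected v value,
// since v = chainID*2 + 35 (or + 36). Returns 0 for unprotected txs.
func chainIDFromV(v uint64) uint64 {
	if !isProtected(v) {
		return 0
	}
	return (v - 35) / 2
}

func main() {
	fmt.Println(isProtected(27))     // pre-EIP-155 signature: replayable on any chain
	fmt.Println(chainIDFromV(37))    // 1*2 + 35 -> chain ID 1
	fmt.Println(chainIDFromV(18035)) // 9000*2 + 35 -> chain ID 9000
}
```

This is why rejecting `v = 27/28` by default is the safer posture: nothing in such a signature binds the transaction to a single chain.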
**Migration Required:** @@ -86,19 +88,27 @@ The `allow_unprotected_txs` parameter has been removed from VM module parameters If you have custom parameter validation logic, remove references to `allow_unprotected_txs`: ```diff -// Remove any code referencing AllowUnprotectedTxs - if !params.AllowUnprotectedTxs { -- // validation logic +- return errors.New("unprotected transactions not allowed") - } ``` +**Impact if not removed**: Genesis validation will fail with "unknown field" error. + ### NEW: history_serve_window Parameter -Added for EIP-2935 block hash storage support. +**Purpose**: Implements EIP-2935 (historical block hashes in state) for better Ethereum compatibility. + +**What it controls**: Number of recent block hashes accessible via the BLOCKHASH opcode and eth_getBlockByNumber. + +**Configuration guidance**: +- **Default**: `8192` blocks (~13 hours at 6s blocks) +- **Archive nodes**: `43200` (~3 days) +- **Light clients**: `256` (minimal history) + +**Storage impact**: Each block hash uses ~100 bytes. 8192 blocks = ~800KB additional state. -- **Default value:** `8192` blocks -- **Purpose:** Controls how many historical block hashes to serve -- **Range:** Must be > 0, recommended ≤ 8192 for optimal performance +**Why this matters**: Smart contracts often verify historical data using block hashes. Without sufficient history, these verifications fail. --- @@ -106,77 +116,158 @@ Added for EIP-2935 block hash storage support. ### BREAKING: Constructor Interface Updates -All precompile constructors now accept keeper interfaces instead of concrete implementations for better decoupling. +**The Problem v0.5.0 Solves**: +In v0.4.0, precompiles were tightly coupled to concrete keeper implementations. This made testing difficult and created import cycles. The v0.5.0 approach uses interfaces for clean dependency injection. -**New Interface Definitions:** +**Impact**: ALL precompile initializations must be updated. This is not optional. 
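The dependency-injection idea can be shown in a stripped-down sketch; all names here are hypothetical stand-ins, not the actual `precompiles/common` interfaces:

```go
package main

import "fmt"

// Narrow interface: only the methods the precompile actually calls.
type BankKeeper interface {
	GetBalance(addr, denom string) uint64
}

// Concrete keeper (stands in for the SDK bank keeper).
type RealBankKeeper struct{ balances map[string]uint64 }

func (k RealBankKeeper) GetBalance(addr, denom string) uint64 {
	return k.balances[addr+"/"+denom]
}

// The precompile depends only on the interface, not the concrete type,
// so there is no import of the keeper's package and no import cycle.
type BankPrecompile struct{ bank BankKeeper }

func NewBankPrecompile(bank BankKeeper) *BankPrecompile {
	return &BankPrecompile{bank: bank}
}

func (p *BankPrecompile) Balance(addr, denom string) uint64 {
	return p.bank.GetBalance(addr, denom)
}

func main() {
	keeper := RealBankKeeper{balances: map[string]uint64{"alice/atest": 42}}
	p := NewBankPrecompile(keeper) // implicit interface satisfaction
	fmt.Println(p.Balance("alice", "atest"))
}
```

Because Go interfaces are satisfied implicitly, the same constructor accepts the real keeper in production and any test double with the same methods.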
-```go New Interface Definitions expandable -// precompiles/common/interfaces.go -type BankKeeper interface { - IterateAccountBalances(ctx context.Context, account sdk.AccAddress, cb func(coin sdk.Coin) bool) - GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin - SendCoins(ctx context.Context, fromAddr sdk.AccAddress, toAddr sdk.AccAddress, amt sdk.Coins) error - // ... other methods -} +### What Changed and Why -type StakingKeeper interface { - BondDenom(ctx context.Context) (string, error) - GetDelegatorValidators(ctx context.Context, delegatorAddr sdk.AccAddress, maxRetrieve uint32) (stakingtypes.Validators, error) - // ... other methods -} +Each precompile constructor now uses interface types from `precompiles/common/interfaces.go`: + +```go +// v0.4.0 - Concrete types (problematic) +func NewPrecompile(keeper bankkeeper.Keeper) (*Precompile, error) + +// v0.5.0 - Interface types (clean) +func NewPrecompile(keeper common.BankKeeper) (*Precompile, error) ``` -**Migration Steps:** - -1. **Update Precompile Assembly:** - -```diff Precompile Assembly Update expandable -// evmd/precompiles.go or app/precompiles.go -func NewAvailableStaticPrecompiles( - // ... other params -) map[common.Address]vm.PrecompiledContract { - precompiles := make(map[common.Address]vm.PrecompiledContract) - - // Bank precompile - interface now required -- bankPrecompile, err := bankprecompile.NewPrecompile(bankKeeper, ...) -+ bankPrecompile, err := bankprecompile.NewPrecompile( -+ common.BankKeeper(bankKeeper), // Cast to interface -+ // ... 
other params -+ ) - - // Distribution precompile - simplified parameters -- distributionPrecompile, err := distributionprecompile.NewPrecompile( -- distributionKeeper, stakingKeeper, authzKeeper, cdc, options.AddressCodec -- ) -+ distributionPrecompile, err := distributionprecompile.NewPrecompile( -+ common.DistributionKeeper(distributionKeeper), -+ cdc, options.AddressCodec -+ ) - - // Staking precompile - keeper interface -- stakingPrecompile, err := stakingprecompile.NewPrecompile(stakingKeeper, ...) -+ stakingPrecompile, err := stakingprecompile.NewPrecompile( -+ common.StakingKeeper(stakingKeeper), -+ // ... other params -+ ) - - return precompiles -} +**Benefits of this change**: +- Eliminates import cycles between modules +- Enables proper mocking for tests +- Reduces compilation time +- Allows alternative keeper implementations + +### Migration Steps for Each Precompile + +#### Bank Precompile + +```diff +- bankPrecompile, err := bankprecompile.NewPrecompile( +- app.BankKeeper, // Direct concrete keeper +- appCodec, +- ) ++ bankPrecompile, err := bankprecompile.NewPrecompile( ++ common.BankKeeper(app.BankKeeper), // Cast to interface ++ appCodec, ++ ) ``` -2. **Update Custom Precompiles:** +**Why the cast is safe**: Your BankKeeper already implements all required methods.
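The conversion compiles only because the concrete keeper already satisfies the interface; a compile-time assertion makes that explicit and catches a missing method before anything is wired up (illustrative types, not the repo's):

```go
package main

import "fmt"

type BankKeeper interface {
	GetSupply(denom string) uint64
}

type Keeper struct{ supply map[string]uint64 }

func (k Keeper) GetSupply(denom string) uint64 { return k.supply[denom] }

// Compile-time proof that Keeper implements BankKeeper:
// if a required method is missing, this line fails to build.
var _ BankKeeper = Keeper{}

func main() {
	// The "cast" is just an interface conversion, checked by the compiler.
	var bk BankKeeper = Keeper{supply: map[string]uint64{"atest": 100}}
	fmt.Println(bk.GetSupply("atest"))
}
```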
-If you have custom precompiles, update their constructors to accept interfaces: +#### Distribution Precompile ```diff -// Custom precompile constructor -- func NewMyPrecompile(bankKeeper bankkeeper.Keeper, stakingKeeper stakingkeeper.Keeper) (*MyPrecompile, error) { -+ func NewMyPrecompile(bankKeeper common.BankKeeper, stakingKeeper common.StakingKeeper) (*MyPrecompile, error) { - return &MyPrecompile{ - bankKeeper: bankKeeper, - stakingKeeper: stakingKeeper, - }, nil -} +- distributionPrecompile, err := distributionprecompile.NewPrecompile( +- app.DistrKeeper, +- app.StakingKeeper, // REMOVED - not needed anymore +- app.AuthzKeeper, // REMOVED - not used +- appCodec, +- addressCodec, +- ) ++ distributionPrecompile, err := distributionprecompile.NewPrecompile( ++ common.DistributionKeeper(app.DistrKeeper), ++ appCodec, ++ addressCodec, ++ ) +``` + +**Why parameters were removed**: +- StakingKeeper: Was only used for validator queries, now handled differently +- AuthzKeeper: Never actually used in the precompile implementation + +#### Staking Precompile + +```diff +- stakingPrecompile, err := stakingprecompile.NewPrecompile( +- app.StakingKeeper, +- appCodec, +- addressCodec, +- ) ++ stakingPrecompile, err := stakingprecompile.NewPrecompile( ++ common.StakingKeeper(app.StakingKeeper), ++ appCodec, ++ addressCodec, ++ ) +``` + +#### ICS20 Precompile (Most Changed) + +```diff +- ics20Precompile, err := ics20precompile.NewPrecompile( +- app.TransferKeeper, +- app.ChannelKeeper, +- app.BankKeeper, +- app.StakingKeeper, +- app.EVMKeeper, // REMOVED - circular dependency! +- appCodec, +- addressCodec, +- ) ++ ics20Precompile, err := ics20precompile.NewPrecompile( ++ common.BankKeeper(app.BankKeeper), // Bank first now ++ common.StakingKeeper(app.StakingKeeper), ++ app.TransferKeeper, // Concrete still OK ++ app.ChannelKeeper, // Concrete still OK ++ appCodec, ++ addressCodec, ++ ) +``` + +**Critical change**: EVMKeeper removed to break circular dependency. 
The precompile no longer needs direct EVM access. + +#### Governance Precompile + +```diff +- govPrecompile, err := govprecompile.NewPrecompile( +- app.GovKeeper, +- appCodec, +- addressCodec, +- ) ++ govPrecompile, err := govprecompile.NewPrecompile( ++ common.GovKeeper(app.GovKeeper), ++ appCodec, ++ addressCodec, ++ ) +``` + +#### Slashing Precompile + +```diff +- slashingPrecompile, err := slashingprecompile.NewPrecompile( +- app.SlashingKeeper, +- appCodec, +- validatorAddressCodec, +- consensusAddressCodec, +- ) ++ slashingPrecompile, err := slashingprecompile.NewPrecompile( ++ common.SlashingKeeper(app.SlashingKeeper), ++ appCodec, ++ validatorAddressCodec, ++ consensusAddressCodec, ++ ) +``` + +### If You Have Custom Precompiles + +Update your custom precompile constructors to use interfaces: + +```diff +// Your custom precompile +- func NewMyPrecompile( +- bankKeeper bankkeeper.Keeper, +- stakingKeeper stakingkeeper.Keeper, +- ) (*MyPrecompile, error) { ++ func NewMyPrecompile( ++ bankKeeper common.BankKeeper, // Use interface ++ stakingKeeper common.StakingKeeper, // Use interface ++ ) (*MyPrecompile, error) { +``` + +**Testing benefit**: You can now use mock keepers: +```go +mockBank := &MockBankKeeper{} +precompile, _ := NewMyPrecompile(mockBank, mockStaking) ``` --- @@ -185,54 +276,87 @@ If you have custom precompiles, update their constructors to accept interfaces: ### Configuration-Based Architecture -The mempool now uses configuration objects instead of pre-built pools for better flexibility. +**The Problem**: In v0.4.0, you had to construct complete `TxPool` and `CosmosPool` instances with 20+ parameters each. This was error-prone and required deep understanding of Ethereum internals. -**Migration for Standard Setups:** +**The Solution**: v0.5.0 accepts configuration objects with smart defaults. You only configure what differs from standard behavior. 
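The "only configure what differs" behavior is typically implemented as a defaulting pass over the config struct, where zero values mean "use the default." A simplified sketch using the documented defaults (field names are stand-ins for the real config):

```go
package main

import "fmt"

type MempoolConfig struct {
	BlockGasLimit uint64 // 0 means "use default"
	GlobalSlots   uint64 // 0 means "use default"
}

// applyDefaults returns a copy with zero values replaced by the
// documented defaults (100M block gas, 10,000 global tx slots).
func applyDefaults(c MempoolConfig) MempoolConfig {
	if c.BlockGasLimit == 0 {
		c.BlockGasLimit = 100_000_000
	}
	if c.GlobalSlots == 0 {
		c.GlobalSlots = 10_000
	}
	return c
}

func main() {
	// Override only one field; the rest falls back to defaults.
	cfg := applyDefaults(MempoolConfig{GlobalSlots: 50_000})
	fmt.Println(cfg.BlockGasLimit, cfg.GlobalSlots)
}
```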
-If you use default mempool configuration, minimal changes are needed: +### Migration for Default Configurations -```go Standard Mempool Setup expandable -// Existing code continues to work +If you're using standard mempool settings, the migration is minimal: + +```go +// v0.4.0 and v0.5.0 - Same for basic usage mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), - BlockGasLimit: 100_000_000, // or 0 for default + BlockGasLimit: 0, // 0 means use default 100M } -evmMempool := evmmempool.NewExperimentalEVMMempool( - app.CreateQueryContext, - logger, - app.EVMKeeper, - app.FeeMarketKeeper, - app.txConfig, - app.clientCtx, - mempoolConfig -) ``` -**Migration for Advanced Setups:** +**What happens with defaults**: +- Transaction capacity: 10,000 EVM + 5,000 Cosmos transactions +- Gas pricing: 1 gwei minimum +- Nonce ordering: Strict per account +- Replacement rules: 10% gas increase required -If you previously built custom pools: +### Migration for Custom Configurations -```diff Advanced Mempool Configuration expandable - mempoolConfig := &evmmempool.EVMMempoolConfig{ -- TxPool: customTxPool, -- CosmosPool: customCosmosPool, +If you previously customized transaction pools: + +```diff +mempoolConfig := &evmmempool.EVMMempoolConfig{ +- // v0.4.0: Had to build entire pool +- TxPool: &txpool.TxPool{ +- config: txpool.Config{ +- Locals: []common.Address{...}, +- NoLocals: false, +- Journal: ".ethereum/transactions.rlp", +- Rejournal: time.Hour, +- PriceLimit: 1, +- PriceBump: 10, +- AccountSlots: 16, +- GlobalSlots: 10000, +- // ... 15+ more fields +- }, +- }, +- CosmosPool: customCosmosPool, + ++ // v0.5.0: Just provide overrides + LegacyPoolConfig: &legacypool.Config{ -+ PriceLimit: 2, -+ // ... 
other legacy pool settings -+ }, -+ CosmosPoolConfig: &sdkmempool.PriorityNonceMempoolConfig[math.Int]{ -+ TxPriority: sdkmempool.TxPriority[math.Int]{ -+ GetTxPriority: customPriorityFunc, -+ Compare: math.IntComparator, -+ MinValue: math.ZeroInt(), -+ }, ++ AccountSlots: 64, // Override: more txs per account ++ GlobalSlots: 50000, // Override: bigger pool ++ // Defaults used for everything else + }, - AnteHandler: app.GetAnteHandler(), - BroadcastTxFn: customBroadcastFunc, // optional - BlockGasLimit: 100_000_000, - } ++ CosmosPoolConfig: nil, // Use all defaults + + AnteHandler: app.GetAnteHandler(), + BlockGasLimit: 200_000_000, // Custom: higher gas limit +} +``` + +**Key configuration parameters**: + +| Parameter | Default | When to Override | +|-----------|---------|------------------| +| AccountSlots | 16 | High-frequency trading (set 64+) | +| GlobalSlots | 10,000 | High throughput chain (set 50,000+) | +| PriceLimit | 1 gwei | Private chain (set 0) | +| BlockGasLimit | 100M | Gaming/DeFi chain (adjust as needed) | + +### New MinTip Parameter + +v0.5.0 adds `MinTip` for spam protection: + +```go +mempoolConfig := &evmmempool.EVMMempoolConfig{ + // ... other config + MinTip: big.NewInt(1_000_000_000), // 1 gwei minimum tip +} ``` +**Purpose**: Prevents zero-fee spam transactions during high congestion. 
+**Default**: 0 (accept any fee) +**Recommendation**: Set to 1 gwei for public chains + --- ## 5) Ante Handler Changes @@ -255,14 +379,14 @@ The ante handler system has been optimized to remove unnecessary EVM instance cr If you have custom ante handlers that depend on EVM instance creation during balance checks: ```diff -// Custom ante handler example +// Custom ante handler example func (d MyCustomDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { - // Balance checking now optimized - no EVM instance creation + // Balance checking now optimized - no EVM instance creation - evm := d.evmKeeper.NewEVM(ctx, ...) - stateDB := evm.StateDB - balance := stateDB.GetBalance(address) - // Use keeper method instead + // Use keeper method instead + balance := d.evmKeeper.GetBalance(ctx, address) return next(ctx, tx, simulate) @@ -288,48 +412,91 @@ func (d MyCustomDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, ### BREAKING: Singleton Pattern Eliminated -The global mempool registry has been removed in favor of direct injection. +**The Problem with v0.4.0**: The global mempool singleton created several critical issues: +- **Testing nightmare**: Couldn't run parallel tests (global state conflicts) +- **Multi-chain impossible**: Couldn't run multiple chains in one process +- **Race conditions**: Concurrent access to global state +- **Memory leaks**: Global state never cleaned up -**What Was Removed:** -- `mempool.SetGlobalEVMMempool()` -- `mempool.GetGlobalEVMMempool()` -- Global mempool singleton pattern +**The v0.5.0 Solution**: Direct dependency injection - each component receives its mempool reference explicitly. -**Migration Steps:** +### Required Migration Steps -1. **Update JSON-RPC Server Initialization:** +#### 1.
Remove Global Mempool Registration -```diff JSON-RPC Server Update expandable -// server/start.go or equivalent -- mempool.SetGlobalEVMMempool(evmMempool) - - jsonRPCServer, err := jsonrpc.StartJSONRPC( - ctx, - clientCtx, - logger.With("module", "jsonrpc"), -+ evmMempool, // Pass mempool directly - config, - indexer, - ) +```diff +// In app.go or app initialization +evmMempool := evmmempool.NewExperimentalEVMMempool(...) + +- // DELETE THIS - no longer needed or supported +- if err := mempool.SetGlobalEVMMempool(evmMempool); err != nil { +- panic(err) +- } ``` -2. **Update RPC Backend:** +**Impact if not removed**: Compilation error - these functions no longer exist. -If you have custom RPC backends: +#### 2. Update JSON-RPC Server Initialization -```diff Custom RPC Backend Update expandable -// Custom RPC backend -func NewCustomBackend( - // ... other params -+ mempool *evmmempool.ExperimentalEVMMempool, -) *CustomBackend { -- mempool := mempool.GetGlobalEVMMempool() - - return &CustomBackend{ - mempool: mempool, - // ... other fields +The JSON-RPC server now requires explicit mempool injection: + +```diff +// In server/start.go or your server initialization +jsonRPCServer, err := jsonrpc.StartJSONRPC( + ctx, + clientCtx, + logger.With("module", "jsonrpc"), ++ evmMempool, // Pass mempool as parameter + config, + indexer, +) +``` + +**Why this change**: The JSON-RPC server needs mempool access for `txpool_` methods. Previously it used the global, now it's explicitly provided. + +#### 3. 
Update Custom Components + +If you have custom components that accessed the global mempool: + +```diff +// Custom component that used global mempool +type MyCustomService struct { +- // No mempool field - used global ++ mempool *evmmempool.ExperimentalEVMMempool +} + +- func NewMyCustomService() *MyCustomService { ++ func NewMyCustomService(mempool *evmmempool.ExperimentalEVMMempool) *MyCustomService { + return &MyCustomService{ +- // Used to call mempool.GetGlobalEVMMempool() when needed ++ mempool: mempool, } } + +func (s *MyCustomService) DoSomething() { +- pool := mempool.GetGlobalEVMMempool() +- if pool == nil { +- return errors.New("mempool not initialized") +- } ++ // Use s.mempool directly + pending := s.mempool.GetPendingTransactions() +} +``` + +### Testing Benefits + +The removal of global state enables proper testing: + +```go +// v0.5.0 - Can run parallel tests +func TestMempool(t *testing.T) { + t.Parallel() // Now safe! + + mempool1 := evmmempool.NewExperimentalEVMMempool(...) + mempool2 := evmmempool.NewExperimentalEVMMempool(...) 
+ + // Each test has isolated mempool +} ``` --- @@ -350,7 +517,7 @@ EIP-7702 enables externally owned accounts to temporarily execute smart contract **Usage Example:** ```javascript -// Enable EOA to execute as multicall contract +// Enable EOA to execute as multicall contract const authorization = await signAuthorization({ chainId: 9000, address: multicallContractAddress, @@ -358,7 +525,7 @@ const authorization = await signAuthorization({ }, wallet); const tx = { - type: 4, // SetCodeTxType + type: 4, // SetCodeTxType authorizationList: [authorization], to: wallet.address, data: multicall.interface.encodeFunctionData("batchCall", [calls]), @@ -390,17 +557,17 @@ EIP-2935 provides standardized access to historical block hashes via contract st **Configuration:** ```go -// Genesis parameter -"history_serve_window": 8192 // Default: 8192 blocks +// Genesis parameter +"history_serve_window": 8192 // Default: 8192 blocks ``` **Usage for Developers:** ```solidity -// Smart contract can now reliably access historical block hashes +// Smart contract can now reliably access historical block hashes contract HistoryExample { function getRecentBlockHash(uint256 blockNumber) public view returns (bytes32) { - // Works for blocks within history_serve_window range + // Works for blocks within history_serve_window range return blockhash(blockNumber); } } @@ -521,9 +688,9 @@ cast call --rpc-url http://localhost:8545 \ ### Integration Tests ```go Integration Test Example expandable -// Example integration test +// Example integration test func TestV050Migration(t *testing.T) { - // Test mempool configuration + // Test mempool configuration config := &evmmempool.EVMMempoolConfig{ AnteHandler: anteHandler, BlockGasLimit: 100_000_000, @@ -531,16 +698,16 @@ func TestV050Migration(t *testing.T) { mempool := evmmempool.NewExperimentalEVMMempool(...)
require.NotNil(t, mempool) - // Test precompile interfaces + // Test precompile interfaces bankPrecompile, err := bankprecompile.NewPrecompile( common.BankKeeper(bankKeeper), ) require.NoError(t, err) - // Test parameter validation + // Test parameter validation params := vmtypes.NewParams(...) require.Equal(t, uint64(8192), params.HistoryServeWindow) - require.False(t, hasAllowUnprotectedTxs(params)) // Should be removed + require.False(t, hasAllowUnprotectedTxs(params)) // Should be removed } ``` @@ -578,7 +745,7 @@ error: cannot use bankKeeper (type bankkeeper.Keeper) as type common.BankKeeper ```go bankPrecompile, err := bankprecompile.NewPrecompile( - common.BankKeeper(bankKeeper), // Add interface cast + common.BankKeeper(bankKeeper), // Add interface cast ) ``` @@ -619,9 +786,9 @@ error: undefined: mempool.GetGlobalEVMMempool **Solution:** Pass mempool directly instead of using global access: ```go -// Pass mempool as parameter +// Pass mempool as parameter func NewRPCService(mempool *evmmempool.ExperimentalEVMMempool) { - // Use injected mempool + // Use injected mempool } ``` @@ -640,6 +807,6 @@ v0.5.0 introduces significant improvements in performance, EVM compatibility, an Review all breaking changes carefully and test thoroughly before deploying to production.
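The global-to-injected refactor that several of the fixes above apply can be reduced to a runnable sketch; the types here are stand-ins, not the real mempool API:

```go
package main

import "fmt"

// Mempool is a stand-in for the EVM mempool.
type Mempool struct{ pending []string }

func (m *Mempool) Pending() []string { return m.pending }

// RPCService receives its mempool explicitly, so two services can
// use two isolated pools in one process - no shared global state.
type RPCService struct{ mempool *Mempool }

func NewRPCService(mempool *Mempool) *RPCService {
	return &RPCService{mempool: mempool}
}

func (s *RPCService) PendingCount() int { return len(s.mempool.Pending()) }

func main() {
	a := NewRPCService(&Mempool{pending: []string{"tx1", "tx2"}})
	b := NewRPCService(&Mempool{}) // independent instance
	fmt.Println(a.PendingCount(), b.PendingCount())
}
```

With injection, the dependency is visible in the constructor signature and there is no "not initialized" failure mode at call time.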
For additional support, refer to: -- [Precompiles Documentation](../smart-contracts/precompiles/overview) -- [EVM Compatibility Guide](../evm-compatibility/overview) -- [Performance Optimization Guide](../concepts/performance) +- [Precompiles Documentation](/docs/evm/next/documentation/smart-contracts/precompiles/overview) +- [EVM Compatibility Guide](/docs/evm/next/documentation/evm-compatibility/overview) +- [Performance Optimization Guide](/docs/evm/next/documentation/concepts/performance) diff --git a/docs/evm/next/documentation/integration/predeployed-contracts-integration.mdx b/docs/evm/next/documentation/integration/predeployed-contracts-integration.mdx index d374faeb..9fb31479 100644 --- a/docs/evm/next/documentation/integration/predeployed-contracts-integration.mdx +++ b/docs/evm/next/documentation/integration/predeployed-contracts-integration.mdx @@ -22,11 +22,11 @@ The most straightforward method for new chains or testnets. Contracts are deploy Using the evmd example application automatically includes default preinstalls: ```go -// Source: evmd/genesis.go:28-34 +// Source: evmd/genesis.go:28-34 func NewEVMGenesisState() *evmtypes.GenesisState { evmGenState := evmtypes.DefaultGenesisState() evmGenState.Params.ActiveStaticPrecompiles = evmtypes.AvailableStaticPrecompiles - evmGenState.Preinstalls = evmtypes.DefaultPreinstalls // Defined in x/vm/types/preinstall.go + evmGenState.Preinstalls = evmtypes.DefaultPreinstalls // Defined in x/vm/types/preinstall.go return evmGenState } ``` @@ -36,7 +36,7 @@ func NewEVMGenesisState() *evmtypes.GenesisState { To customize which contracts are deployed: ```json -// genesis.json +// genesis.json { "app_state": { "evm": { @@ -49,7 +49,7 @@ To customize which contracts are deployed: { "name": "Multicall3", "address": "0xcA11bde05977b3631167028862bE2a173976CA11", - "code": "0x6080604052..." // Full bytecode + "code": "0x6080604052..."
// Full bytecode } ] } @@ -122,7 +122,7 @@ func CreateUpgradeHandler( evmKeeper *evmkeeper.Keeper, ) upgradetypes.UpgradeHandler { return func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { - // Add preinstalls during upgrade + // Add preinstalls during upgrade if err := evmKeeper.AddPreinstalls(ctx, evmtypes.DefaultPreinstalls); err != nil { return nil, err } @@ -158,7 +158,7 @@ All preinstall deployments undergo strict validation: Predeployed contracts are stored in the chain state like regular contracts: ```go -// Actual deployment process from x/vm/keeper/preinstalls.go +// Actual deployment process from x/vm/keeper/preinstalls.go func (k *Keeper) AddPreinstalls(ctx sdk.Context, preinstalls []types.Preinstall) error { for _, preinstall := range preinstalls { address := common.HexToAddress(preinstall.Address) @@ -175,7 +175,7 @@ func (k *Keeper) AddPreinstalls(ctx sdk.Context, preinstalls []types.Preinstall) "preinstall %s has empty code hash", preinstall.Address) } - // Check for existing code hash conflicts + // Check for existing code hash conflicts existingCodeHash := k.GetCodeHash(ctx, address) if !types.IsEmptyCodeHash(existingCodeHash.Bytes()) && !bytes.Equal(existingCodeHash.Bytes(), codeHash) { @@ -183,13 +183,13 @@ "preinstall %s already has a different code hash", preinstall.Address) } - // Check that the account is not already set + // Check that the account is not already set if acc := k.accountKeeper.GetAccount(ctx, accAddress); acc != nil { return errorsmod.Wrapf(types.ErrInvalidPreinstall, "preinstall %s already has an account in account keeper", preinstall.Address) } - // Create account and store code + // Create account and store code account := k.accountKeeper.NewAccountWithAddress(ctx, accAddress) k.accountKeeper.SetAccount(ctx, account) k.SetCodeHash(ctx, address.Bytes(), codeHash) @@ -214,7 +214,7 @@ evmd query evm
account 0x4e59b44847b379578588920ca78fbf26c0b4956c ``` ```javascript -// Verify via Web3 +// Verify via Web3 const code = await provider.getCode("0x4e59b44847b379578588920ca78fbf26c0b4956c"); console.log("Deployed:", code !== "0x"); ``` @@ -261,41 +261,22 @@ The Safe Singleton Factory bytecode in the current DefaultPreinstalls may be inc To add custom contracts beyond the defaults: ```go -// Define custom preinstall +// Define custom preinstall customPreinstall := types.Preinstall{ Name: "MyCustomContract", Address: "0xYourChosenAddress", Code: "0xCompiledBytecode", } -// Validate before deployment +// Validate before deployment if err := customPreinstall.Validate(); err != nil { return err } -// Add via appropriate method (genesis, governance, or upgrade) +// Add via appropriate method (genesis, governance, or upgrade) preinstalls := append(evmtypes.DefaultPreinstalls, customPreinstall) ``` -## Troubleshooting - -### Common Issues - -| Issue | Cause | Solution | -|-------|-------|----------| -| "preinstall already has an account in account keeper" | Address collision | Choose different address | -| "preinstall has empty code hash" | Invalid or empty bytecode | Verify bytecode hex string is valid | -| "preinstall address is not a valid hex address" | Malformed address | Ensure 0x prefix and 40 hex chars | -| "invalid authority" | Wrong governance address | Use correct gov module account address | -| Contract not found after deployment | Wrong network | Verify chain ID and RPC endpoint | - -### Debugging Steps - -1. Check chain genesis configuration -2. Verify proposal passed and executed -3. Query contract code directly -4. Test with simple contract interaction -5.
Review chain logs for errors ## Further Resources diff --git a/docs/evm/next/documentation/integration/upgrade-handlers.mdx b/docs/evm/next/documentation/integration/upgrade-handlers.mdx index d6ab732f..39fb6d1a 100644 --- a/docs/evm/next/documentation/integration/upgrade-handlers.mdx +++ b/docs/evm/next/documentation/integration/upgrade-handlers.mdx @@ -27,7 +27,7 @@ Upgrade handlers are registered in your app's `RegisterUpgradeHandlers()` method ```go -// app/upgrades.go +// app/upgrades.go package app import ( @@ -40,18 +40,18 @@ import ( func (app *App) RegisterUpgradeHandlers() { app.UpgradeKeeper.SetUpgradeHandler( - "v1.0.0", // upgrade name (must match governance proposal) + "v1.0.0", // upgrade name (must match governance proposal) func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) sdkCtx.Logger().Info("Starting upgrade", "name", plan.Name) - // Run module migrations + // Run module migrations migrations, err := app.ModuleManager.RunMigrations(ctx, app.configurator, fromVM) if err != nil { return nil, err } - // Add custom migration logic here + // Add custom migration logic here sdkCtx.Logger().Info("Upgrade complete", "name", plan.Name) return migrations, nil @@ -66,8 +66,8 @@ func (app *App) RegisterUpgradeHandlers() { The upgrade handler receives and returns a `module.VersionMap` that tracks module versions: ```go -// fromVM contains the module versions before the upgrade -// The returned VersionMap contains the new versions after migration +// fromVM contains the module versions before the upgrade +// The returned VersionMap contains the new versions after migration migrations, err := app.ModuleManager.RunMigrations(ctx, app.configurator, fromVM) ``` @@ -96,7 +96,7 @@ app/ ```go -// app/upgrades/v1_0_0/constants.go +// app/upgrades/v1_0_0/constants.go package v1_0_0 import ( @@ -110,8 +110,8 @@ var Upgrade = upgrades.Upgrade{ UpgradeName: UpgradeName,
CreateUpgradeHandler: CreateUpgradeHandler, StoreUpgrades: storetypes.StoreUpgrades{ - Added: []string{}, // New modules - Deleted: []string{}, // Removed modules + Added: []string{}, // New modules + Deleted: []string{}, // Removed modules }, } ``` @@ -121,7 +121,7 @@ var Upgrade = upgrades.Upgrade{ ```go -// app/upgrades/v1_0_0/handler.go +// app/upgrades/v1_0_0/handler.go package v1_0_0 import ( @@ -142,13 +142,13 @@ func CreateUpgradeHandler( return func(c context.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { ctx := sdk.UnwrapSDKContext(c) - // Run module migrations + // Run module migrations vm, err := mm.RunMigrations(c, configurator, vm) if err != nil { return nil, err } - // Custom migrations + // Custom migrations if err := runCustomMigrations(ctx, keepers, storeKeys); err != nil { return nil, err } @@ -167,19 +167,19 @@ Migrating module parameters to new formats: ```go func migrateParams(ctx sdk.Context, keeper paramskeeper.Keeper) error { - // Get old params + // Get old params var oldParams v1.Params keeper.GetParamSet(ctx, &oldParams) - // Convert to new format + // Convert to new format newParams := v2.Params{ Field1: oldParams.Field1, Field2: convertField(oldParams.Field2), - // New field with default value + // New field with default value Field3: "default", } - // Set new params + // Set new params keeper.SetParams(ctx, newParams) return nil } @@ -193,7 +193,7 @@ Moving data between different storage locations: func migrateState(ctx sdk.Context, storeKey storetypes.StoreKey) error { store := ctx.KVStore(storeKey) - // Iterate over old storage + // Iterate over old storage iterator := storetypes.KVStorePrefixIterator(store, oldPrefix) defer iterator.Close() @@ -201,14 +201,14 @@ func migrateState(ctx sdk.Context, storeKey storetypes.StoreKey) error { oldKey := iterator.Key() value := iterator.Value() - // Transform key/value if needed + // Transform key/value if needed newKey := transformKey(oldKey) newValue :=
transformValue(value) - // Write to new location + // Write to new location store.Set(newKey, newValue) - // Delete old entry + // Delete old entry store.Delete(oldKey) } @@ -221,7 +221,7 @@ func migrateState(ctx sdk.Context, storeKey storetypes.StoreKey) error { Adding or removing modules during upgrade: ```go -// In constants.go +// In constants.go var Upgrade = upgrades.Upgrade{ UpgradeName: "v2.0.0", CreateUpgradeHandler: CreateUpgradeHandler, @@ -231,18 +231,18 @@ var Upgrade = upgrades.Upgrade{ }, } -// In handler.go +// In handler.go func CreateUpgradeHandler(...) upgradetypes.UpgradeHandler { return func(c context.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { - // Delete old module version + // Delete old module version delete(vm, "oldmodule") - // Initialize new module + // Initialize new module if err := newModuleKeeper.InitGenesis(ctx, defaultGenesis); err != nil { return nil, err } - // Run migrations + // Run migrations return mm.RunMigrations(c, configurator, vm) } } @@ -260,16 +260,16 @@ Make migrations idempotent when possible: ```go func migrateSomething(ctx sdk.Context, store sdk.KVStore) error { - // Check if migration already done + // Check if migration already done if store.Has(migrationCompleteKey) { ctx.Logger().Info("Migration already completed, skipping") return nil } - // Perform migration - // ... + // Perform migration + // ...
- // Mark as complete + / Mark as complete store.Set(migrationCompleteKey, []byte{1}) return nil } @@ -297,7 +297,7 @@ func migrate(ctx sdk.Context, keeper Keeper) error { } count++ - // Log progress for long migrations + / Log progress for long migrations if count%1000 == 0 { ctx.Logger().Info("Migration progress", "processed", count) } @@ -317,22 +317,22 @@ func TestUpgradeHandler(t *testing.T) { app := setupApp(t) ctx := app.NewContext(false, tmproto.Header{Height: 1}) - // Setup pre-upgrade state + / Setup pre-upgrade state setupOldState(t, ctx, app) - // Run upgrade handler + / Run upgrade handler _, err := v1_0_0.CreateUpgradeHandler( app.ModuleManager, app.configurator, &upgrades.UpgradeKeepers{ - // ... keepers + / ... keepers }, app.keys, )(ctx, upgradetypes.Plan{Name: "v1.0.0"}, app.ModuleManager.GetVersionMap()) require.NoError(t, err) - // Verify post-upgrade state + / Verify post-upgrade state verifyNewState(t, ctx, app) } ``` @@ -391,54 +391,11 @@ mantrachaind query upgrade applied v1.0.0 For Cosmos EVM chains, specific migrations include: -- **[ERC20 Precompiles Migration](./erc20-precompiles-migration)**: Required for v0.3.x to v0.4.0 +- **[ERC20 Precompiles Migration](/docs/evm/next/documentation/integration/erc20-precompiles-migration)**: Required for v0.3.x to v0.4.0 - **Fee Market Parameters**: Updating EIP-1559 parameters - **Custom Precompiles**: Registering new precompiled contracts - **EVM State**: Migrating account balances or contract storage -## Troubleshooting - -### Consensus Failure - -**Symptom:** Chain halts with consensus failure at upgrade height - -**Causes:** -- Validators running different binary versions -- Upgrade handler not registered -- Non-deterministic migration logic - -**Solution:** -- Ensure all validators have the same binary -- Verify upgrade handler is registered -- Review migration logic for non-determinism - -### Upgrade Panic - -**Symptom:** Node panics during upgrade - -**Causes:** -- Unhandled error in 
migration -- Missing required state -- Invalid type assertions - -**Solution:** -- Add comprehensive error handling -- Validate state before migration -- Use safe type conversions - -### State Corruption - -**Symptom:** Invalid state after upgrade - -**Causes:** -- Partial migration completion -- Incorrect data transformation -- Missing cleanup of old data - -**Solution:** -- Make migrations atomic -- Thoroughly test transformations -- Ensure old data is properly cleaned up ## References diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/bank.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/bank.mdx index 996d21c2..51f7c674 100644 --- a/docs/evm/next/documentation/smart-contracts/precompiles/bank.mdx +++ b/docs/evm/next/documentation/smart-contracts/precompiles/bank.mdx @@ -29,7 +29,7 @@ The Bank precompile provides ERC20-style access to native Cosmos SDK tokens, ena ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract BankExample { @@ -50,7 +50,7 @@ contract BankExample { return balances; } - // Convenience function to get caller's balances + / Convenience function to get caller's balances function getMyBalances() external view returns (Balance[] memory balances) { return this.getAccountBalances(msg.sender); } @@ -59,20 +59,20 @@ contract BankExample { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const precompileAddress = "0x0000000000000000000000000000000000000804"; const precompileAbi = [ "function balances(address account) view returns (tuple(address contractAddress, uint256 amount)[])" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Address to query -const 
accountAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Address to query +const accountAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getBalances() { try { @@ -116,7 +116,7 @@ curl -X POST --data '{ ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract BankExample { @@ -141,16 +141,16 @@ contract BankExample { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const precompileAddress = "0x0000000000000000000000000000000000000804"; const precompileAbi = [ "function totalSupply() view returns (tuple(address contractAddress, uint256 amount)[])" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); async function getTotalSupply() { @@ -194,7 +194,7 @@ curl -X POST --data '{ ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract BankExample { @@ -210,7 +210,7 @@ contract BankExample { return supply; } - // Helper function to check if token exists (has supply > 0) + / Helper function to check if token exists (has supply > 0) function tokenExists(address erc20Address) external view returns (bool) { uint256 supply = this.getTokenSupply(erc20Address); return supply > 0; @@ -220,20 +220,20 @@ contract BankExample { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const precompileAddress = "0x0000000000000000000000000000000000000804"; const precompileAbi = [ "function supplyOf(address 
erc20Address) view returns (uint256)" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// ERC20 address to query -const erc20Address = "0xdAC17F958D2ee523a2206206994597C13D831ec7"; // Placeholder for a token contract +/ ERC20 address to query +const erc20Address = "0xdAC17F958D2ee523a2206206994597C13D831ec7"; / Placeholder for a token contract async function getSupplyOf() { try { @@ -281,20 +281,20 @@ The Balance struct represents a token balance with its associated ERC20 contract ## Full Interface & ABI ```solidity title="Bank Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.18; -/// @dev The IBank contract's address. +/ @dev The IBank contract's address. address constant IBANK_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000804; -/// @dev The IBank contract's instance. +/ @dev The IBank contract's instance. IBank constant IBANK_CONTRACT = IBank(IBANK_PRECOMPILE_ADDRESS); -/// @dev Balance specifies the ERC20 contract address and the amount of tokens. +/ @dev Balance specifies the ERC20 contract address and the amount of tokens. struct Balance { - /// contractAddress defines the ERC20 contract address. + / contractAddress defines the ERC20 contract address. address contractAddress; - /// amount of tokens + / amount of tokens uint256 amount; } @@ -304,21 +304,21 @@ struct Balance { * @dev Interface for querying balances and supply from the Bank module. */ interface IBank { - /// @dev balances defines a method for retrieving all the native token balances - /// for a given account. - /// @param account the address of the account to query balances for. - /// @return balances the array of native token balances. + / @dev balances defines a method for retrieving all the native token balances + / for a given account. 
+ / @param account the address of the account to query balances for. + / @return balances the array of native token balances. function balances( address account ) external view returns (Balance[] memory balances); - /// @dev totalSupply defines a method for retrieving the total supply of all - /// native tokens. - /// @return totalSupply the supply as an array of native token balances + / @dev totalSupply defines a method for retrieving the total supply of all + / native tokens. + / @return totalSupply the supply as an array of native token balances function totalSupply() external view returns (Balance[] memory totalSupply); - /// @dev supplyOf defines a method for retrieving the total supply of a particular native coin. - /// @return totalSupply the supply as a uint256 + / @dev supplyOf defines a method for retrieving the total supply of a particular native coin. + / @return totalSupply the supply as a uint256 function supplyOf( address erc20Address ) external view returns (uint256 totalSupply); diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/bech32.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/bech32.mdx index 12f8c65b..f74a2980 100644 --- a/docs/evm/next/documentation/smart-contracts/precompiles/bech32.mdx +++ b/docs/evm/next/documentation/smart-contracts/precompiles/bech32.mdx @@ -33,10 +33,10 @@ Both methods use a configurable base gas amount that is set during chain initial ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; -// Interface for the Bech32 precompile +/ Interface for the Bech32 precompile interface IBech32 { function hexToBech32(address addr, string memory prefix) external returns (string memory bech32Address); function bech32ToHex(string memory bech32Address) external returns (address addr); @@ -66,12 +66,12 @@ contract Bech32Example { return bech32Address; } - // Convert multiple addresses with the same prefix + / Convert multiple 
addresses with the same prefix function batchHexToBech32(address[] calldata addresses, string calldata prefix) external returns (string[] memory bech32Addresses) { - bech32Addresses = new string[](addresses.length); + bech32Addresses = new string[](addresses.length); for (uint256 i = 0; i < addresses.length; i++) { require(addresses[i] != address(0), "Invalid address in batch"); @@ -82,7 +82,7 @@ contract Bech32Example { return bech32Addresses; } - // Get common address formats for a single hex address + / Get common address formats for a single hex address function getCommonFormats(address addr) external returns ( @@ -100,7 +100,7 @@ contract Bech32Example { return (cosmosAddr, evmosAddr, osmosisAddr); } - // Convert caller's address to bech32 format + / Convert caller's address to bech32 format function getMyBech32Address(string calldata prefix) external returns (string memory) @@ -112,20 +112,20 @@ contract Bech32Example { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const precompileAddress = "0x0000000000000000000000000000000000000400"; const precompileAbi = [ "function hexToBech32(address addr, string memory prefix) returns (string memory)" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const ethAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Inputs +const ethAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder const prefix = "cosmos"; async function convertToBech32() { @@ -171,10 +171,10 @@ curl -X POST --data '{ ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; -// Interface for the
Bech32 precompile +/ Interface for the Bech32 precompile interface IBech32 { function bech32ToHex(string memory bech32Address) external returns (address addr); function hexToBech32(address addr, string memory prefix) external returns (string memory bech32Address); @@ -204,12 +204,12 @@ contract Bech32Example { return hexAddress; } - // Convert multiple bech32 addresses to hex format + / Convert multiple bech32 addresses to hex format function batchBech32ToHex(string[] calldata bech32Addresses) external returns (address[] memory hexAddresses) { - hexAddresses = new address[](bech32Addresses.length); + hexAddresses = new address[](bech32Addresses.length); for (uint256 i = 0; i < bech32Addresses.length; i++) { require(bytes(bech32Addresses[i]).length > 0, "Invalid bech32 address"); @@ -221,7 +221,7 @@ contract Bech32Example { return hexAddresses; } - // Validate address conversion by converting back and forth + / Validate address conversion by converting back and forth function validateAddressConversion(address addr, string calldata prefix) external returns (bool isValid) @@ -238,7 +238,7 @@ contract Bech32Example { } } - // Check if two addresses (hex and bech32) represent the same account + / Check if two addresses (hex and bech32) represent the same account function areAddressesEqual(address hexAddr, string calldata bech32Addr) external returns (bool isEqual) @@ -251,7 +251,7 @@ contract Bech32Example { } } - // Safely convert bech32 to hex with error handling + / Safely convert bech32 to hex with error handling function safeBech32ToHex(string calldata bech32Address) external returns (bool success, address hexAddress) @@ -267,20 +267,20 @@ contract Bech32Example { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const
precompileAddress = "0x0000000000000000000000000000000000000400"; const precompileAbi = [ "function bech32ToHex(string memory bech32Address) returns (address)" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const bech32Address = "cosmos1mrdxhunfvjhe6lhdncp72dq46da2jcz9d9sh93"; // Placeholder +/ Input +const bech32Address = "cosmos1mrdxhunfvjhe6lhdncp72dq46da2jcz9d9sh93"; / Placeholder async function convertToHex() { try { @@ -319,13 +319,13 @@ curl -X POST --data '{ ## Full Interface & ABI ```solidity title="Bech32 Solidity Interface" lines -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.17; -/// @dev The Bech32I contract's address. +/ @dev The Bech32I contract's address. address constant Bech32_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000400; -/// @dev The Bech32I contract's instance. +/ @dev The Bech32I contract's instance. Bech32I constant BECH32_CONTRACT = Bech32I(Bech32_PRECOMPILE_ADDRESS); interface Bech32I { diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/callbacks.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/callbacks.mdx index ac028e1b..0e273d53 100644 --- a/docs/evm/next/documentation/smart-contracts/precompiles/callbacks.mdx +++ b/docs/evm/next/documentation/smart-contracts/precompiles/callbacks.mdx @@ -27,14 +27,14 @@ Critically, **only the IBC packet sender can set the callback**. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract CallbacksExample { - // Address of the authorized IBC module + / Address of the authorized IBC module address public immutable ibcModule; - // Mapping to track packet statuses + / Mapping to track packet statuses mapping(bytes32 => PacketStatus) public packetStatuses; mapping(address => uint256) public userBalances; @@ -67,14 +67,14 @@ contract CallbacksExample { ) external onlyIBC { bytes32 packetId = keccak256(abi.encodePacked(channelId, portId, sequence)); - // Ensure packet hasn't been processed already + / Ensure packet hasn't been processed already if (packetStatuses[packetId] != PacketStatus.None) { revert PacketAlreadyProcessed(); } packetStatuses[packetId] = PacketStatus.Acknowledged; - // Parse acknowledgement to determine success/failure + / Parse acknowledgement to determine success/failure bool success = _parseAcknowledgement(acknowledgement); if (success) { @@ -93,13 +93,13 @@ contract CallbacksExample { { if (acknowledgement.length == 0) return false; - // Check for error indicators in acknowledgement + / Check for error indicators in acknowledgement bytes5 errorPrefix = bytes5(acknowledgement); if (errorPrefix == bytes5("error")) { return false; } - return true; // Non-error acknowledgement indicates success + return true; / Non-error acknowledgement indicates success } function _handleSuccessfulAcknowledgement( @@ -107,14 +107,14 @@ contract CallbacksExample { bytes memory data, bytes memory acknowledgement ) internal { - // Parse packet data to get sender and amount + / Parse packet data to get sender and amount (address sender, uint256 amount, string memory operation) = _parsePacketData(data); if (keccak256(bytes(operation)) == keccak256(bytes("cross_chain_swap"))) { emit CrossChainOrderExecuted(packetId, sender, amount); } - // Credit any rewards or returns + / Credit any rewards or returns userBalances[sender] 
+= amount; } @@ -125,7 +125,7 @@ contract CallbacksExample { ) internal { (address sender, uint256 amount, ) = _parsePacketData(data); - // Issue refund for failed transaction + / Issue refund for failed transaction userBalances[sender] += amount; emit RefundIssued(sender, amount, packetId); } @@ -135,7 +135,7 @@ contract CallbacksExample { pure returns (address sender, uint256 amount, string memory operation) { - // Simplified parser - extract sender, amount, and operation from packet data + / Simplified parser - extract sender, amount, and operation from packet data if (data.length < 64) { revert InvalidPacketData(); } @@ -145,8 +145,8 @@ contract CallbacksExample { amount := mload(add(data, 64)) } - // Extract operation string (simplified) - operation = "cross_chain_swap"; // Default operation + / Extract operation string (simplified) + operation = "cross_chain_swap"; / Default operation return (sender, amount, operation); } } @@ -154,14 +154,14 @@ contract CallbacksExample { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Deploy a contract that implements IBC callbacks +/ Deploy a contract that implements IBC callbacks class IBCCallbackHandler { constructor(provider, signer, contractAddress) { this.provider = provider; this.signer = signer; this.contractAddress = contractAddress; - // ABI for the callback interface + / ABI for the callback interface this.abi = [ "function onPacketAcknowledgement(string channelId, string portId, uint64 sequence, bytes data, bytes acknowledgement)", "event PacketAcknowledged(bytes32 indexed packetId, string channelId, uint64 sequence)", @@ -171,7 +171,7 @@ class IBCCallbackHandler { this.contract = new ethers.Contract(contractAddress, this.abi, signer); } - // Listen for packet acknowledgement events + / Listen for packet acknowledgement events async listenForAcknowledgements() { console.log("Listening for packet acknowledgements..."); @@ -190,14 +190,14 @@ class IBCCallbackHandler { }); } - // Get 
packet status + / Get packet status async getPacketStatus(channelId, portId, sequence) { const packetId = ethers.solidityPackedKeccak256( ["string", "string", "uint64"], [channelId, portId, sequence] ); - // Assuming the contract has a packetStatuses mapping + / Assuming the contract has a packetStatuses mapping const statusAbi = ["function packetStatuses(bytes32) view returns (uint8)"]; const contract = new ethers.Contract(this.contractAddress, statusAbi, this.provider); @@ -212,7 +212,7 @@ class IBCCallbackHandler { } } -// Example usage +/ Example usage async function setupCallbackHandler() { const provider = new ethers.JsonRpcProvider(""); const signer = new ethers.Wallet("", provider); @@ -220,15 +220,15 @@ async function setupCallbackHandler() { const handler = new IBCCallbackHandler(provider, signer, contractAddress); - // Start listening for events + / Start listening for events await handler.listenForAcknowledgements(); - // Check a packet status + / Check a packet status const status = await handler.getPacketStatus("channel-0", "transfer", 123); console.log("Packet status:", status); } -// setupCallbackHandler(); +/ setupCallbackHandler(); ``` @@ -240,7 +240,7 @@ async function setupCallbackHandler() { ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract CallbacksExample { @@ -275,27 +275,27 @@ contract CallbacksExample { ) external onlyIBC { bytes32 packetId = keccak256(abi.encodePacked(channelId, portId, sequence)); - // Ensure packet hasn't been processed already + / Ensure packet hasn't been processed already if (packetStatuses[packetId] != PacketStatus.None) { revert PacketAlreadyProcessed(); } packetStatuses[packetId] = PacketStatus.TimedOut; - // Handle timeout by issuing refunds + / Handle timeout by issuing refunds _handleTimeout(packetId, data); emit PacketTimedOut(packetId, channelId, sequence); } function _handleTimeout(bytes32 packetId, bytes memory data) 
internal { - // Parse packet data to extract sender and amount for refund + / Parse packet data to extract sender and amount for refund (address sender, uint256 amount, string memory operation) = _parsePacketData(data); - // Issue full refund for timed out packets + / Issue full refund for timed out packets _issueRefund(sender, amount, packetId); - // Additional timeout-specific logic based on operation type + / Additional timeout-specific logic based on operation type if (keccak256(bytes(operation)) == keccak256(bytes("stake_remote"))) { _handleStakeTimeout(sender, amount, packetId); } else if (keccak256(bytes(operation)) == keccak256(bytes("cross_chain_swap"))) { @@ -309,18 +309,18 @@ contract CallbacksExample { } function _handleStakeTimeout(address user, uint256 amount, bytes32 packetId) internal { - // Handle staking timeout - might need to cancel staking plans - // Restore user's staking availability - userBalances[user] += amount; // Return staked amount + / Handle staking timeout - might need to cancel staking plans + / Restore user's staking availability + userBalances[user] += amount; / Return staked amount - // Additional staking-specific cleanup logic here + / Additional staking-specific cleanup logic here } function _handleSwapTimeout(address user, uint256 amount, bytes32 packetId) internal { - // Handle swap timeout - return original tokens + / Handle swap timeout - return original tokens userBalances[user] += amount; - // Additional swap-specific cleanup logic here + / Additional swap-specific cleanup logic here } function _parsePacketData(bytes memory data) @@ -337,11 +337,11 @@ contract CallbacksExample { amount := mload(add(data, 64)) } - operation = "timeout_operation"; // Default + operation = "timeout_operation"; / Default return (sender, amount, operation); } - // User functions to interact with refunds + / User functions to interact with refunds function withdraw(uint256 amount) external { require(userBalances[msg.sender] >= amount, 
"Insufficient balance"); userBalances[msg.sender] -= amount; @@ -365,14 +365,14 @@ contract CallbacksExample { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Handle timeout callbacks +/ Handle timeout callbacks class TimeoutHandler { constructor(provider, signer, contractAddress) { this.provider = provider; this.signer = signer; this.contractAddress = contractAddress; - // ABI for timeout handling + / ABI for timeout handling this.abi = [ "function onPacketTimeout(string channelId, string portId, uint64 sequence, bytes data)", "event PacketTimedOut(bytes32 indexed packetId, string channelId, uint64 sequence)", @@ -383,7 +383,7 @@ class TimeoutHandler { this.contract = new ethers.Contract(contractAddress, this.abi, signer); } - // Listen for timeout events + / Listen for timeout events async listenForTimeouts() { console.log("Listening for packet timeouts..."); @@ -400,13 +400,13 @@ class TimeoutHandler { console.log(` Amount: ${ethers.formatEther(amount)} ETH`); console.log(` Packet ID: ${packetId}`); - // Check new balance + / Check new balance const newBalance = await this.contract.userBalances(user); console.log(` New user balance: ${ethers.formatEther(newBalance)} ETH`); }); } - // Check user balance after refund + / Check user balance after refund async checkUserBalance(userAddress) { const balance = await this.contract.userBalances(userAddress); return { @@ -416,9 +416,9 @@ class TimeoutHandler { }; } - // Encode packet data for testing + / Encode packet data for testing static encodePacketData(sender, amount, operation = "cross_chain_swap") { - // Simple encoding for testing + / Simple encoding for testing return ethers.AbiCoder.defaultAbiCoder().encode( ["address", "uint256", "string"], [sender, amount, operation] @@ -426,7 +426,7 @@ class TimeoutHandler { } } -// Example usage +/ Example usage async function handleTimeouts() { const provider = new ethers.JsonRpcProvider(""); const signer = new ethers.Wallet("", provider); @@ 
-434,15 +434,15 @@ async function handleTimeouts() { const handler = new TimeoutHandler(provider, signer, contractAddress); - // Start listening + / Start listening await handler.listenForTimeouts(); - // Check balance after refund + / Check balance after refund const balance = await handler.checkUserBalance("0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb8"); console.log("User balance:", balance.formatted); } -// handleTimeouts(); +/ handleTimeouts(); ``` @@ -464,7 +464,7 @@ When implementing the Callbacks interface, consider the following security aspec ### Example Security Pattern ```solidity contract SecureIBCCallback is ICallbacks { - address constant IBC_MODULE = 0x...; // IBC module address + address constant IBC_MODULE = 0x...; / IBC module address modifier onlyIBC() { require(msg.sender == IBC_MODULE, "Unauthorized"); @@ -472,11 +472,11 @@ contract SecureIBCCallback is ICallbacks { } function onPacketAcknowledgement(...) external onlyIBC { - // Callback logic + / Callback logic } function onPacketTimeout(...) external onlyIBC { - // Timeout logic + / Timeout logic } } ``` @@ -484,19 +484,19 @@ contract SecureIBCCallback is ICallbacks { ## Full Solidity Interface & ABI ```solidity title="Callbacks Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.18; interface ICallbacks { - /// @dev Callback function to be called on the source chain - /// after the packet life cycle is completed and acknowledgement is processed - /// by source chain. The contract address is passed the packet information and acknowledgmeent - /// to execute the callback logic. 
- /// @param channelId the channnel identifier of the packet - /// @param portId the port identifier of the packet - /// @param sequence the sequence number of the packet - /// @param data the data of the packet - /// @param acknowledgement the acknowledgement of the packet + / @dev Callback function to be called on the source chain + / after the packet life cycle is completed and acknowledgement is processed + / by source chain. The contract address is passed the packet information and acknowledgement + / to execute the callback logic. + / @param channelId the channel identifier of the packet + / @param portId the port identifier of the packet + / @param sequence the sequence number of the packet + / @param data the data of the packet + / @param acknowledgement the acknowledgement of the packet function onPacketAcknowledgement( string memory channelId, string memory portId, @@ -505,14 +505,14 @@ interface ICallbacks { bytes memory acknowledgement ) external; - /// @dev Callback function to be called on the source chain - /// after the packet life cycle is completed and the packet is timed out - /// by source chain. The contract address is passed the packet information - /// to execute the callback logic. - /// @param channelId the channnel identifier of the packet - /// @param portId the port identifier of the packet - /// @param sequence the sequence number of the packet - /// @param data the data of the packet
+ / @param channelId the channel identifier of the packet + / @param portId the port identifier of the packet + / @param sequence the sequence number of the packet + / @param data the data of the packet function onPacketTimeout( string memory channelId, string memory portId, diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/distribution.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/distribution.mdx index b38e2489..49bdb8ba 100644 --- a/docs/evm/next/documentation/smart-contracts/precompiles/distribution.mdx +++ b/docs/evm/next/documentation/smart-contracts/precompiles/distribution.mdx @@ -29,7 +29,7 @@ Gas costs are approximated and may vary based on call complexity and chain setti The precompile defines the following constants for the various Cosmos SDK message types: ```solidity -// Transaction message type URLs +/ Transaction message type URLs string constant MSG_SET_WITHDRAWER_ADDRESS = "/cosmos.distribution.v1beta1.MsgSetWithdrawAddress"; string constant MSG_WITHDRAW_DELEGATOR_REWARD = "/cosmos.distribution.v1beta1.MsgWithdrawDelegatorReward"; string constant MSG_WITHDRAW_VALIDATOR_COMMISSION = "/cosmos.distribution.v1beta1.MsgWithdrawValidatorCommission"; @@ -46,7 +46,7 @@ The `maxRetrieve` parameter limits the number of validators from which to claim ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract DistributionClaimRewards { @@ -67,7 +67,7 @@ contract DistributionClaimRewards { } function claimAllRewards() external returns (bool success) { - // Use a high maxRetrieve value to claim from all validators + / Use a high maxRetrieve value to claim from all validators return claimRewards(100); } } @@ -83,7 +83,7 @@ Withdraws staking rewards from a single, specific validator.
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract DistributionExample { @@ -125,7 +125,7 @@ Sets or changes the withdrawal address for receiving staking rewards. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract DistributionSetWithdrawAddress { @@ -146,9 +146,9 @@ contract DistributionSetWithdrawAddress { } function resetWithdrawAddress() external returns (bool success) { - // Reset to delegator's own address by converting msg.sender to bech32 - // Note: In practice, you'd need to convert the EVM address to bech32 format - string memory selfAddress = "art1..."; // This would be the bech32 equivalent + / Reset to delegator's own address by converting msg.sender to bech32 + / Note: In practice, you'd need to convert the EVM address to bech32 format + string memory selfAddress = "art1..."; / This would be the bech32 equivalent return setWithdrawAddress(selfAddress); } } @@ -164,7 +164,7 @@ Withdraws a validator's accumulated commission rewards. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract DistributionWithdrawCommission { @@ -194,7 +194,7 @@ contract DistributionWithdrawCommission { return amount; } - // Helper function for validator operators to withdraw their own commission + / Helper function for validator operators to withdraw their own commission function withdrawMyCommission(string calldata myValidatorAddress) external returns (Coin[] memory) { return withdrawValidatorCommission(myValidatorAddress); } @@ -211,7 +211,7 @@ Sends tokens directly to the community pool. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract DistributionFundCommunityPool { @@ -232,7 +232,7 @@ contract DistributionFundCommunityPool { require(callSuccess, "Fund community pool call failed"); success = abi.decode(result, (bool)); - // Emit events for each coin funded + / Emit events for each coin funded for (uint i = 0; i < amount.length; i++) { emit CommunityPoolFunded(msg.sender, amount[i].denom, amount[i].amount, success); } @@ -241,7 +241,7 @@ contract DistributionFundCommunityPool { } function fundCommunityPoolSingleCoin(string calldata denom, uint256 amount) external returns (bool success) { - Coin[] memory coins = new Coin[](1); + Coin[] memory coins = new Coin[](1); coins[0] = Coin(denom, amount); return fundCommunityPool(coins); } @@ -258,7 +258,7 @@ Deposits tokens into a specific validator's rewards pool. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract DistributionDepositValidatorRewards { @@ -293,7 +293,7 @@ contract DistributionDepositValidatorRewards { require(callSuccess, "Deposit validator rewards pool call failed"); success = abi.decode(result, (bool)); - // Emit events for each coin deposited + / Emit events for each coin deposited for (uint i = 0; i < amount.length; i++) { emit ValidatorRewardsPoolDeposited( msg.sender, @@ -312,7 +312,7 @@ contract DistributionDepositValidatorRewards { string calldata denom, uint256 amount ) external returns (bool success) { - Coin[] memory coins = new Coin[](1); + Coin[] memory coins = new Coin[](1); coins[0] = Coin(denom, amount); return depositValidatorRewardsPool(validatorAddress, coins); } @@ -333,18 +333,18 @@ Retrieves comprehensive reward information for all of a delegator's positions.
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the relevant parts of the precompile +/ ABI definition for the relevant parts of the precompile const precompileAbi = [ "function delegationTotalRewards(address delegatorAddress) view returns (tuple(string validatorAddress, tuple(string denom, uint256 amount, uint8 precision)[] reward)[] rewards, tuple(string denom, uint256 amount, uint8 precision)[] total)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The address of the delegator to query -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Input: The address of the delegator to query +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getTotalRewards() { try { @@ -385,19 +385,19 @@ Queries pending rewards for a specific delegation. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function delegationRewards(address delegatorAddress, string memory validatorAddress) view returns (tuple(string denom, uint256 amount, uint8 precision)[] rewards)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder -const validatorAddress = "artvaloper1..."; // Placeholder +/ Inputs +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder +const validatorAddress = "artvaloper1..."; / Placeholder async function getDelegationRewards() { try { @@ -435,18 +435,18 @@ Retrieves a list of all validators from which a delegator has rewards. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function delegatorValidators(address delegatorAddress) view returns (string[] validators)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Input +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getDelegatorValidators() { try { @@ -484,18 +484,18 @@ Queries the address that can withdraw rewards for a given delegator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function delegatorWithdrawAddress(address delegatorAddress) view returns (string withdrawAddress)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Input +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getWithdrawAddress() { try { @@ -533,12 +533,12 @@ Queries the current balance of the community pool. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function communityPool() view returns (tuple(string denom, uint256 amount, uint8 precision)[] coins)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); @@ -579,18 +579,18 @@ Queries the accumulated commission for a specific validator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validatorCommission(string memory validatorAddress) view returns (tuple(string denom, uint256 amount, uint8 precision)[] commission)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const validatorAddress = "artvaloper1..."; // Placeholder +/ Input +const validatorAddress = "artvaloper1..."; / Placeholder async function getValidatorCommission() { try { @@ -628,18 +628,18 @@ Queries a validator's commission and self-delegation rewards. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validatorDistributionInfo(string memory validatorAddress) view returns (tuple(string operatorAddress, tuple(string denom, uint256 amount, uint8 precision)[] selfBondRewards, tuple(string denom, uint256 amount, uint8 precision)[] commission) distributionInfo)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const validatorAddress = "artvaloper1..."; // Placeholder +/ Input +const validatorAddress = "artvaloper1..."; / Placeholder async function getValidatorDistInfo() { try { @@ -677,18 +677,18 @@ Queries the outstanding rewards of a validator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validatorOutstandingRewards(string memory validatorAddress) view returns (tuple(string denom, uint256 amount, uint8 precision)[] rewards)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const validatorAddress = "artvaloper1..."; // Placeholder +/ Input +const validatorAddress = "artvaloper1..."; / Placeholder async function getOutstandingRewards() { try { @@ -726,20 +726,20 @@ Queries slashing events for a validator within a specific height range. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validatorSlashes(string validatorAddress, uint64 startingHeight, uint64 endingHeight, tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pageRequest) view returns (tuple(uint64 validatorPeriod, tuple(uint256 value, uint8 precision) fraction, int64 height)[] slashes, tuple(bytes nextKey, uint64 total) pageResponse)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const validatorAddress = "artvaloper1..."; // Placeholder -const startingHeight = 1000; // Starting block height -const endingHeight = 2000; // Ending block height +/ Inputs +const validatorAddress = "artvaloper1..."; / Placeholder +const startingHeight = 1000; / Starting block height +const endingHeight = 2000; / Ending block 
height const pageRequest = { key: "0x", offset: 0, @@ -786,15 +786,15 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="Distribution Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.17; import "../common/Types.sol"; -/// @dev The DistributionI contract's address. +/ @dev The DistributionI contract's address. address constant DISTRIBUTION_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000801; -/// @dev The DistributionI contract's instance. +/ @dev The DistributionI contract's instance. DistributionI constant DISTRIBUTION_CONTRACT = DistributionI( DISTRIBUTION_PRECOMPILE_ADDRESS ); @@ -815,10 +815,10 @@ struct DelegationDelegatorReward { DecCoin[] reward; } -/// @author Evmos Team -/// @title Distribution Precompile Contract -/// @dev The interface through which solidity contracts will interact with Distribution -/// @custom:address 0x0000000000000000000000000000000000000801 +/ @author Evmos Team +/ @title Distribution Precompile Contract +/ @dev The interface through which solidity contracts will interact with Distribution +/ @custom:address 0x0000000000000000000000000000000000000801 interface DistributionI { event ClaimRewards(address indexed delegatorAddress, uint256 amount); event SetWithdrawerAddress(address indexed caller, string withdrawerAddress); diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/erc20.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/erc20.mdx index 6f0f02f0..c3039b8e 100644 --- a/docs/evm/next/documentation/smart-contracts/precompiles/erc20.mdx +++ b/docs/evm/next/documentation/smart-contracts/precompiles/erc20.mdx @@ -52,7 +52,7 @@ Returns the total amount of tokens in existence. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -76,11 +76,11 @@ contract ERC20Example { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// The address of the ERC20 precompile is dynamic, assigned when a token pair is registered. +/ The address of the ERC20 precompile is dynamic, assigned when a token pair is registered. const erc20PrecompileAddress = ""; const provider = new ethers.JsonRpcProvider(""); -// A generic ERC20 ABI is sufficient for read-only calls +/ A generic ERC20 ABI is sufficient for read-only calls const erc20Abi = ["function totalSupply() view returns (uint256)"]; const contract = new ethers.Contract(erc20PrecompileAddress, erc20Abi, provider); @@ -118,7 +118,7 @@ Returns the token balance of a specified account. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -138,7 +138,7 @@ contract ERC20Example { return balance; } - // Get caller's balance + / Get caller's balance function getMyBalance() external view returns (uint256) { return this.getBalance(msg.sender); } @@ -148,7 +148,7 @@ contract ERC20Example { import { ethers } from "ethers"; const erc20PrecompileAddress = ""; -const accountAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +const accountAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder const provider = new ethers.JsonRpcProvider(""); const erc20Abi = ["function balanceOf(address account) view returns (uint256)"]; @@ -188,7 +188,7 @@ Moves tokens from the caller's account to a recipient. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -228,7 +228,7 @@ Moves tokens from one account to another using an allowance. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -269,7 +269,7 @@ Sets the allowance of a spender over the caller's tokens. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -296,7 +296,7 @@ contract ERC20Example { return success; } - // Helper function to approve maximum amount + / Helper function to approve maximum amount function approveMax(address spender) external returns (bool) { return this.approveSpender(spender, type(uint256).max); } @@ -313,7 +313,7 @@ Returns the remaining number of tokens that a spender is allowed to spend on beh ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -338,8 +338,8 @@ contract ERC20Example { import { ethers } from "ethers"; const erc20PrecompileAddress = ""; -const ownerAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder -const spenderAddress = "0x27f320b7280911c7987a421a8138997a48d4b315"; // Placeholder +const ownerAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder +const spenderAddress = "0x27f320b7280911c7987a421a8138997a48d4b315"; / Placeholder const provider = new ethers.JsonRpcProvider(""); const erc20Abi = ["function allowance(address owner, address spender) view returns (uint256)"]; @@ -379,7 +379,7 @@ Returns the name of the token. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -443,7 +443,7 @@ Returns the symbol of the token. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -511,7 +511,7 @@ Returns the number of decimals used for the token. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ERC20Example { @@ -573,8 +573,8 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="ERC20 Solidity Interface" lines expandable -// SPDX-License-Identifier: MIT -// OpenZeppelin Contracts (last updated v4.6.0) (token/ERC20/IERC20.sol) +/ SPDX-License-Identifier: MIT +/ OpenZeppelin Contracts (last updated v4.6.0) (token/ERC20/IERC20.sol) pragma solidity ^0.8.0; diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/governance.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/governance.mdx index 6aca2ce8..556ea7ed 100644 --- a/docs/evm/next/documentation/smart-contracts/precompiles/governance.mdx +++ b/docs/evm/next/documentation/smart-contracts/precompiles/governance.mdx @@ -30,7 +30,7 @@ Submits a new governance proposal. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceExample { @@ -51,7 +51,7 @@ contract GovernanceExample { require(jsonProposal.length > 0, "Proposal cannot be empty"); require(initialDeposit.length > 0, "Initial deposit required"); - // Call the governance precompile + / Call the governance precompile (bool success, bytes memory result) = GOVERNANCE_PRECOMPILE.call( abi.encodeWithSignature( "submitProposal(address,bytes,tuple(string,uint256)[])", @@ -68,7 +68,7 @@ contract GovernanceExample { return proposalId; } - // Helper function to create a coin struct + / Helper function to create a coin struct function createCoin(string memory denom, uint256 amount) external pure returns (Coin memory) { return Coin({denom: denom, amount: amount}); } @@ -81,7 +81,7 @@ Casts a single vote on an active proposal. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceExample { @@ -118,12 +118,12 @@ contract GovernanceExample { emit VoteCast(msg.sender, proposalId, option); } - // Convenience function to vote yes + / Convenience function to vote yes function voteYes(uint64 proposalId) external { this.vote(proposalId, VoteOption.Yes, ""); } - // Convenience function to vote no + / Convenience function to vote no function voteNo(uint64 proposalId) external { this.vote(proposalId, VoteOption.No, ""); } @@ -136,7 +136,7 @@ Casts a weighted vote, splitting voting power across multiple options. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceWeightedVote { @@ -169,8 +169,8 @@ contract GovernanceWeightedVote { require(proposalId > 0, "Invalid proposal ID"); require(options.length > 0, "Must provide vote options"); - // Validate that weights sum to 1.0 (represented as "1.000000000000000000") - // This is just a basic check - in practice you'd want more robust validation + / Validate that weights sum to 1.0 (represented as "1.000000000000000000") + / This is just a basic check - in practice you'd want more robust validation (bool callSuccess, bytes memory result) = GOVERNANCE_PRECOMPILE.call( abi.encodeWithSignature( @@ -190,7 +190,7 @@ contract GovernanceWeightedVote { return success; } - // Helper function to create a split vote (e.g., 70% Yes, 30% Abstain) + / Helper function to create a split vote (e.g., 70% Yes, 30% Abstain) function voteSplit( uint64 proposalId, uint256 yesPercent, @@ -198,9 +198,9 @@ contract GovernanceWeightedVote { ) external returns (bool) { require(yesPercent + abstainPercent == 100, "Percentages must sum to 100"); - WeightedVoteOption[] memory options = new WeightedVoteOption[](2); + WeightedVoteOption[] memory options = new 
WeightedVoteOption[](2); - // Convert percentages to decimal weights (e.g., 70% = "0.700000000000000000") + / Convert percentages to decimal weights (e.g., 70% = "0.700000000000000000") options[0] = WeightedVoteOption({ option: VoteOption.Yes, weight: string(abi.encodePacked("0.", _padPercentage(yesPercent))) @@ -214,13 +214,13 @@ return voteWeighted(proposalId, options, "Split vote"); } - // Helper function to pad percentage to 18 decimal places + / Helper function to pad percentage to 18 decimal places function _padPercentage(uint256 percent) internal pure returns (string memory) { require(percent <= 100, "Percent cannot exceed 100"); - if (percent == 100) return "000000000000000000"; // Special case for 100% + if (percent == 100) return "000000000000000000"; / Special case for 100% - // This is a simplified version - in practice you'd want more robust decimal handling + / This is a simplified version - in practice you'd want more robust decimal handling string memory percentStr = _toString(percent); if (percent < 10) { return string(abi.encodePacked("0", percentStr, "0000000000000000")); @@ -259,7 +259,7 @@ Adds funds to a proposal's deposit during the deposit period.
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceExample { @@ -289,7 +289,7 @@ contract GovernanceExample { bool depositSuccess = abi.decode(result, (bool)); require(depositSuccess, "Deposit was rejected"); - // Calculate total deposited amount for event + / Calculate total deposited amount for event uint256 totalAmount = 0; for (uint i = 0; i < amount.length; i++) { totalAmount += amount[i].amount; @@ -298,11 +298,11 @@ contract GovernanceExample { emit DepositMade(msg.sender, proposalId, totalAmount); } - // Convenience function to deposit native tokens + / Convenience function to deposit native tokens function depositNative(uint64 proposalId, uint256 amount, string memory denom) external payable { require(msg.value >= amount, "Insufficient value sent"); - Coin[] memory coins = new Coin[](1); + Coin[] memory coins = new Coin[](1); coins[0] = Coin({denom: denom, amount: amount}); this.deposit{value: amount}(proposalId, coins); @@ -316,7 +316,7 @@ Cancels a proposal that is still in its deposit period.
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceCancelProposal { @@ -338,10 +338,10 @@ contract GovernanceCancelProposal { return success; } - // Function to safely cancel a proposal with additional checks + / Function to safely cancel a proposal with additional checks function safeCancelProposal(uint64 proposalId) external returns (bool success) { - // First, check if the proposal exists and can be canceled - // Note: In practice, you'd want to call getProposal first to verify status + / First, check if the proposal exists and can be canceled + / Note: In practice, you'd want to call getProposal first to verify status success = cancelProposal(proposalId); require(success, "Proposal cancellation failed - check if you're the proposer and proposal is in deposit period"); @@ -363,7 +363,7 @@ Retrieves detailed information about a specific proposal. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceQueries { @@ -409,27 +409,27 @@ contract GovernanceQueries { return proposal; } - // Helper function to check if proposal is in voting period + / Helper function to check if proposal is in voting period function isProposalInVotingPeriod(uint64 proposalId) external view returns (bool) { ProposalData memory proposal = this.getProposal(proposalId); - return proposal.status == 2; // PROPOSAL_STATUS_VOTING_PERIOD + return proposal.status == 2; / PROPOSAL_STATUS_VOTING_PERIOD } } ``` ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile, including the complex return struct +/ ABI for the precompile, including the complex return struct const precompileAbi = [ "function getProposal(uint64 proposalId) view returns (tuple(uint64 id, address proposer, string metadata, uint64 submit_time, uint64 voting_start_time, uint64 voting_end_time, uint8 
status, tuple(string yes_count, string abstain_count, string no_count, string no_with_veto_count) final_tally_result, tuple(string denom, uint256 amount)[] total_deposit, string[] messages) proposal)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The ID of the proposal to query +/ Input: The ID of the proposal to query const proposalId = 1; async function getProposalDetails() { @@ -469,7 +469,7 @@ Retrieves a filtered and paginated list of proposals. ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceProposalsList { @@ -537,7 +537,7 @@ contract GovernanceProposalsList { return (proposals, pageResponse); } - // Helper function to get all active proposals (in voting period) + / Helper function to get all active proposals (in voting period) function getActiveProposals(uint64 limit) external view returns (ProposalData[] memory) { PageRequest memory pagination = PageRequest({ key: "", @@ -547,7 +547,7 @@ contract GovernanceProposalsList { reverse: false }); - uint32 votingStatus = 2; // PROPOSAL_STATUS_VOTING_PERIOD + uint32 votingStatus = 2; / PROPOSAL_STATUS_VOTING_PERIOD (ProposalData[] memory proposals,) = this.getProposals( votingStatus, address(0), @@ -562,32 +562,32 @@ contract GovernanceProposalsList { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile, including the complex return struct +/ ABI for the precompile, including the complex return struct const precompileAbi = [ "function getProposals(tuple(uint64 offset, bytes key, uint64 limit, bool count_total, bool reverse) pagination, uint8 proposal_status, address voter, address depositor) view returns (tuple(uint64 id, address proposer, 
string metadata, uint64 submit_time, uint64 voting_start_time, uint64 voting_end_time, uint8 status, tuple(string yes_count, string abstain_count, string no_count, string no_with_veto_count) final_tally_result, tuple(string denom, uint256 amount)[] total_deposit, string[] messages)[] proposals, tuple(bytes next_key, uint64 total) page_response)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs for filtering and pagination +/ Inputs for filtering and pagination const pagination = { offset: 0, - key: "0x", // Start from the beginning + key: "0x", / Start from the beginning limit: 10, count_total: true, reverse: false, }; -const proposalStatus = 0; // 0 for Unspecified, 1 for Deposit, 2 for Voting, etc. -const voterAddress = ethers.ZeroAddress; // Filter by voter, or ZeroAddress for none -const depositorAddress = ethers.ZeroAddress; // Filter by depositor, or ZeroAddress for none +const proposalStatus = 0; / 0 for Unspecified, 1 for Deposit, 2 for Voting, etc. +const voterAddress = ethers.ZeroAddress; / Filter by voter, or ZeroAddress for none +const depositorAddress = ethers.ZeroAddress; / Filter by depositor, or ZeroAddress for none async function getProposalsList() { try { const result = await contract.getProposals(pagination, proposalStatus, voterAddress, depositorAddress); - // The result object contains 'proposals' and 'page_response' + / The result object contains 'proposals' and 'page_response' console.log("Proposals:", JSON.stringify(result.proposals, null, 2)); console.log("Pagination Response:", result.page_response); } catch (error) { @@ -621,7 +621,7 @@ Retrieves the current or final vote tally for a proposal. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceTally { @@ -646,12 +646,12 @@ contract GovernanceTally { return tallyResult; } - // Helper function to check if proposal is passing + / Helper function to check if proposal is passing function isProposalPassing(uint64 proposalId) external view returns (bool) { TallyResultData memory tally = this.getTallyResult(proposalId); - // Convert string votes to numbers for comparison (simplified) - // In production, use proper decimal math libraries + / Convert string votes to numbers for comparison (simplified) + / In production, use proper decimal math libraries return keccak256(bytes(tally.yes)) > keccak256(bytes(tally.no)); } } @@ -659,17 +659,17 @@ contract GovernanceTally { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getTallyResult(uint64 proposalId) view returns (tuple(string yes_count, string abstain_count, string no_count, string no_with_veto_count) tally)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The ID of the proposal to get the tally for +/ Input: The ID of the proposal to get the tally for const proposalId = 1; async function getTally() { @@ -712,7 +712,7 @@ Retrieves the vote cast by a specific address on a proposal. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceVoteQuery { @@ -751,7 +751,7 @@ contract GovernanceVoteQuery { return vote; } - // Helper function to check if an address has voted + / Helper function to check if an address has voted function hasVoted(uint64 proposalId, address voter) external view returns (bool) { try this.getVote(proposalId, voter) returns (WeightedVote memory vote) { return vote.options.length > 0; @@ -764,19 +764,19 @@ contract GovernanceVoteQuery { ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getVote(uint64 proposalId, address voter) view returns (tuple(uint64 proposal_id, address voter, tuple(uint8 option, string weight)[] options, string metadata) vote)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs +/ Inputs const proposalId = 1; -const voterAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +const voterAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getVoteDetails() { try { @@ -813,7 +813,7 @@ Retrieves all votes cast on a proposal, with pagination. 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract GovernanceVotesList { @@ -871,7 +871,7 @@ contract GovernanceVotesList { return (votes, pageResponse); } - // Helper function to get all Yes votes + / Helper function to get all Yes votes function getYesVoters(uint64 proposalId, uint64 limit) external view returns (address[] memory) { PageRequest memory pagination = PageRequest({ key: "", @@ -890,7 +890,7 @@ } } - address[] memory yesVoters = new address[](yesCount); + address[] memory yesVoters = new address[](yesCount); uint256 index = 0; for (uint i = 0; i < votes.length; i++) { if (votes[i].options.length > 0 && votes[i].options[0].option == VoteOption.Yes) { @@ -905,17 +905,17 @@ ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getVotes(uint64 proposalId, tuple(uint64 offset, bytes key, uint64 limit, bool count_total, bool reverse) pagination) view returns (tuple(uint64 proposal_id, address voter, tuple(uint8 option, string weight)[] options, string metadata)[] votes, tuple(bytes next_key, uint64 total) page_response)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs +/ Inputs const proposalId = 1; const pagination = { offset: 0, @@ -961,7 +961,7 @@ Retrieves deposit information for a specific depositor on a proposal.
 ```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+/ SPDX-License-Identifier: MIT
 pragma solidity ^0.8.0;
 
 contract GovernanceDepositQuery {
@@ -991,7 +991,7 @@ contract GovernanceDepositQuery {
         return deposit;
     }
 
-    // Helper function to get total deposit amount for a specific denom
+    / Helper function to get total deposit amount for a specific denom
     function getDepositAmount(uint64 proposalId, address depositor, string memory denom) external view returns (uint256) {
         DepositData memory deposit = this.getDeposit(proposalId, depositor);
 
@@ -1008,19 +1008,19 @@ contract GovernanceDepositQuery {
 ```javascript Ethers.js expandable lines
 import { ethers } from "ethers";
 
-// ABI for the precompile
+/ ABI for the precompile
 const precompileAbi = [
   "function getDeposit(uint64 proposalId, address depositor) view returns (tuple(address depositor, tuple(string denom, uint256 amount)[] amount) deposit)"
 ];
 
-// Provider and contract setup
+/ Provider and contract setup
 const provider = new ethers.JsonRpcProvider("");
 const precompileAddress = "0x0000000000000000000000000000000000000805";
 const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);
 
-// Inputs
+/ Inputs
 const proposalId = 1;
-const depositorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder
+const depositorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder
 
 async function getDepositDetails() {
   try {
@@ -1057,7 +1057,7 @@ Retrieves all deposits made on a proposal, with pagination.
 ```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+/ SPDX-License-Identifier: MIT
 pragma solidity ^0.8.0;
 
 contract GovernanceDepositsList {
@@ -1106,7 +1106,7 @@ contract GovernanceDepositsList {
         return (deposits, pageResponse);
     }
 
-    // Helper function to get all depositors
+    / Helper function to get all depositors
     function getDepositors(uint64 proposalId, uint64 limit) external view returns (address[] memory) {
         PageRequest memory pagination = PageRequest({
             key: "",
@@ -1118,7 +1118,7 @@ contract GovernanceDepositsList {
 
         (DepositData[] memory deposits,) = this.getDeposits(proposalId, pagination);
 
-        address[] memory depositors = new address[](deposits.length);
+        address[] memory depositors = new address[](/docs/evm/next/documentation/smart-contracts/precompiles/deposits.length);
         for (uint i = 0; i < deposits.length; i++) {
             depositors[i] = deposits[i].depositor;
         }
@@ -1130,17 +1130,17 @@ contract GovernanceDepositsList {
 ```javascript Ethers.js expandable lines
 import { ethers } from "ethers";
 
-// ABI for the precompile
+/ ABI for the precompile
 const precompileAbi = [
   "function getDeposits(uint64 proposalId, tuple(uint64 offset, bytes key, uint64 limit, bool count_total, bool reverse) pagination) view returns (tuple(address depositor, tuple(string denom, uint256 amount)[] amount)[] deposits, tuple(bytes next_key, uint64 total) page_response)"
 ];
 
-// Provider and contract setup
+/ Provider and contract setup
 const provider = new ethers.JsonRpcProvider("");
 const precompileAddress = "0x0000000000000000000000000000000000000805";
 const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);
 
-// Inputs
+/ Inputs
 const proposalId = 1;
 const pagination = {
   offset: 0,
@@ -1186,7 +1186,7 @@ Retrieves current governance parameters.
 ```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+/ SPDX-License-Identifier: MIT
 pragma solidity ^0.8.0;
 
 contract GovernanceParams {
@@ -1226,7 +1226,7 @@ contract GovernanceParams {
         return params;
     }
 
-    // Helper function to get minimum deposit amount for a specific denom
+    / Helper function to get minimum deposit amount for a specific denom
     function getMinDepositForDenom(string memory denom) external view returns (uint256) {
         Params memory params = this.getParams();
 
@@ -1239,7 +1239,7 @@ contract GovernanceParams {
         return 0;
     }
 
-    // Helper function to get voting period in seconds
+    / Helper function to get voting period in seconds
     function getVotingPeriodSeconds() external view returns (int64) {
         Params memory params = this.getParams();
         return params.votingPeriod;
@@ -1249,12 +1249,12 @@ contract GovernanceParams {
 ```javascript Ethers.js expandable lines
 import { ethers } from "ethers";
 
-// ABI for the precompile
+/ ABI for the precompile
 const precompileAbi = [
   "function getParams() view returns (tuple(string[] min_deposit, string max_deposit_period, string voting_period, string yes_quorum, string veto_threshold, string min_initial_deposit_ratio, string proposal_cancel_ratio, string proposal_cancel_dest, string min_deposit_ratio) params)"
 ];
 
-// Provider and contract setup
+/ Provider and contract setup
 const provider = new ethers.JsonRpcProvider("");
 const precompileAddress = "0x0000000000000000000000000000000000000805";
 const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);
@@ -1296,7 +1296,7 @@ Retrieves the current governance constitution.
 ```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+/ SPDX-License-Identifier: MIT
 pragma solidity ^0.8.0;
 
 contract GovernanceConstitution {
@@ -1312,13 +1312,13 @@ contract GovernanceConstitution {
         return constitution;
     }
 
-    // Helper function to check if constitution is set
+    / Helper function to check if constitution is set
     function hasConstitution() external view returns (bool) {
         string memory constitution = this.getConstitution();
         return bytes(constitution).length > 0;
    }
 
-    // Helper function to get constitution length
+    / Helper function to get constitution length
     function getConstitutionLength() external view returns (uint256) {
         string memory constitution = this.getConstitution();
         return bytes(constitution).length;
@@ -1328,12 +1328,12 @@ contract GovernanceConstitution {
 ```javascript Ethers.js expandable lines
 import { ethers } from "ethers";
 
-// ABI for the precompile
+/ ABI for the precompile
 const precompileAbi = [
   "function getConstitution() view returns (string constitution)"
 ];
 
-// Provider and contract setup
+/ Provider and contract setup
 const provider = new ethers.JsonRpcProvider("");
 const precompileAddress = "0x0000000000000000000000000000000000000805";
 const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);
@@ -1370,34 +1370,34 @@ curl -X POST --data '{
 
 ## Full Solidity Interface & ABI
 
 ```solidity title="Governance Solidity Interface" lines expandable
-// SPDX-License-Identifier: LGPL-3.0-only
+/ SPDX-License-Identifier: LGPL-3.0-only
 pragma solidity >=0.8.17;
 
 import "../common/Types.sol";
 
-/// @dev The IGov contract's address.
+/ @dev The IGov contract's address.
 address constant GOV_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000805;
 
-/// @dev The IGov contract's instance.
+/ @dev The IGov contract's instance.
 IGov constant GOV_CONTRACT = IGov(GOV_PRECOMPILE_ADDRESS);
 
 /**
  * @dev VoteOption enumerates the valid vote options for a given governance proposal.
 */
 enum VoteOption {
-    // Unspecified defines a no-op vote option.
+    / Unspecified defines a no-op vote option.
     Unspecified,
-    // Yes defines a yes vote option.
+    / Yes defines a yes vote option.
     Yes,
-    // Abstain defines an abstain vote option.
+    / Abstain defines an abstain vote option.
     Abstain,
-    // No defines a no vote option.
+    / No defines a no vote option.
     No,
-    // NoWithVeto defines a no with veto vote option.
+    / NoWithVeto defines a no with veto vote option.
     NoWithVeto
 }
 
-/// @dev WeightedVote represents a vote on a governance proposal
+/ @dev WeightedVote represents a vote on a governance proposal
 struct WeightedVote {
     uint64 proposalId;
     address voter;
@@ -1405,20 +1405,20 @@ struct WeightedVote {
     string metadata;
 }
 
-/// @dev WeightedVoteOption represents a weighted vote option
+/ @dev WeightedVoteOption represents a weighted vote option
 struct WeightedVoteOption {
     VoteOption option;
     string weight;
 }
 
-/// @dev DepositData represents information about a deposit on a proposal
+/ @dev DepositData represents information about a deposit on a proposal
 struct DepositData {
     uint64 proposalId;
     address depositor;
     Coin[] amount;
 }
 
-/// @dev TallyResultData represents the tally result of a proposal
+/ @dev TallyResultData represents the tally result of a proposal
 struct TallyResultData {
     string yes;
     string abstain;
@@ -1426,7 +1426,7 @@ struct TallyResultData {
     string noWithVeto;
 }
 
-/// @dev ProposalData represents a governance proposal
+/ @dev ProposalData represents a governance proposal
 struct ProposalData {
     uint64 id;
     string[] messages;
@@ -1443,7 +1443,7 @@ struct ProposalData {
     address proposer;
 }
 
-/// @dev Params defines the governance parameters
+/ @dev Params defines the governance parameters
 struct Params {
     int64 votingPeriod;
     Coin[] minDeposit;
@@ -1463,78 +1463,78 @@ struct Params {
     string minDepositRatio;
 }
 
-/// @author The Evmos Core Team
-/// @title Gov Precompile Contract
-/// @dev The interface through which solidity contracts will interact with Gov
+/ @author The Evmos Core Team
+/ @title Gov Precompile Contract
+/ @dev The interface through which solidity contracts will interact with Gov
 interface IGov {
-    /// @dev SubmitProposal defines an Event emitted when a proposal is submitted.
-    /// @param proposer the address of the proposer
-    /// @param proposalId the proposal of id
+    / @dev SubmitProposal defines an Event emitted when a proposal is submitted.
+    / @param proposer the address of the proposer
+    / @param proposalId the proposal of id
     event SubmitProposal(address indexed proposer, uint64 proposalId);
 
-    /// @dev CancelProposal defines an Event emitted when a proposal is canceled.
-    /// @param proposer the address of the proposer
-    /// @param proposalId the proposal of id
+    / @dev CancelProposal defines an Event emitted when a proposal is canceled.
+    / @param proposer the address of the proposer
+    / @param proposalId the proposal of id
     event CancelProposal(address indexed proposer, uint64 proposalId);
 
-    /// @dev Deposit defines an Event emitted when a deposit is made.
-    /// @param depositor the address of the depositor
-    /// @param proposalId the proposal of id
-    /// @param amount the amount of the deposit
+    / @dev Deposit defines an Event emitted when a deposit is made.
+    / @param depositor the address of the depositor
+    / @param proposalId the proposal of id
+    / @param amount the amount of the deposit
     event Deposit(address indexed depositor, uint64 proposalId, Coin[] amount);
 
-    /// @dev Vote defines an Event emitted when a proposal voted.
-    /// @param voter the address of the voter
-    /// @param proposalId the proposal of id
-    /// @param option the option for voter
+    / @dev Vote defines an Event emitted when a proposal voted.
+    / @param voter the address of the voter
+    / @param proposalId the proposal of id
+    / @param option the option for voter
     event Vote(address indexed voter, uint64 proposalId, uint8 option);
 
-    /// @dev VoteWeighted defines an Event emitted when a proposal voted.
-    /// @param voter the address of the voter
-    /// @param proposalId the proposal of id
-    /// @param options the options for voter
+    / @dev VoteWeighted defines an Event emitted when a proposal voted.
+    / @param voter the address of the voter
+    / @param proposalId the proposal of id
+    / @param options the options for voter
     event VoteWeighted(
         address indexed voter,
         uint64 proposalId,
         WeightedVoteOption[] options
     );
 
-    /// TRANSACTIONS
+    / TRANSACTIONS
 
-    /// @notice submitProposal creates a new proposal from a protoJSON document.
-    /// @dev submitProposal defines a method to submit a proposal.
-    /// @param jsonProposal The JSON proposal
-    /// @param deposit The deposit for the proposal
-    /// @return proposalId The proposal id
+    / @notice submitProposal creates a new proposal from a protoJSON document.
+    / @dev submitProposal defines a method to submit a proposal.
+    / @param jsonProposal The JSON proposal
+    / @param deposit The deposit for the proposal
+    / @return proposalId The proposal id
     function submitProposal(
         address proposer,
         bytes calldata jsonProposal,
         Coin[] calldata deposit
     ) external returns (uint64 proposalId);
 
-    /// @dev cancelProposal defines a method to cancel a proposal.
-    /// @param proposalId The proposal id
-    /// @return success Whether the transaction was successful or not
+    / @dev cancelProposal defines a method to cancel a proposal.
+    / @param proposalId The proposal id
+    / @return success Whether the transaction was successful or not
     function cancelProposal(
         address proposer,
         uint64 proposalId
     ) external returns (bool success);
 
-    /// @dev deposit defines a method to add a deposit to a proposal.
-    /// @param proposalId The proposal id
-    /// @param amount The amount to deposit
+    / @dev deposit defines a method to add a deposit to a proposal.
+    / @param proposalId The proposal id
+    / @param amount The amount to deposit
     function deposit(
         address depositor,
         uint64 proposalId,
         Coin[] calldata amount
     ) external returns (bool success);
 
-    /// @dev vote defines a method to add a vote on a specific proposal.
-    /// @param voter The address of the voter
-    /// @param proposalId the proposal of id
-    /// @param option the option for voter
-    /// @param metadata the metadata for voter send
-    /// @return success Whether the transaction was successful or not
+    / @dev vote defines a method to add a vote on a specific proposal.
+    / @param voter The address of the voter
+    / @param proposalId the proposal of id
+    / @param option the option for voter
+    / @param metadata the metadata for voter send
+    / @return success Whether the transaction was successful or not
     function vote(
         address voter,
         uint64 proposalId,
@@ -1542,12 +1542,12 @@ interface IGov {
         string memory metadata
     ) external returns (bool success);
 
-    /// @dev voteWeighted defines a method to add a vote on a specific proposal.
-    /// @param voter The address of the voter
-    /// @param proposalId The proposal id
-    /// @param options The options for voter
-    /// @param metadata The metadata for voter send
-    /// @return success Whether the transaction was successful or not
+    / @dev voteWeighted defines a method to add a vote on a specific proposal.
+    / @param voter The address of the voter
+    / @param proposalId The proposal id
+    / @param options The options for voter
+    / @param metadata The metadata for voter send
+    / @return success Whether the transaction was successful or not
     function voteWeighted(
         address voter,
         uint64 proposalId,
@@ -1555,23 +1555,23 @@ interface IGov {
         string memory metadata
     ) external returns (bool success);
 
-    /// QUERIES
+    / QUERIES
 
-    /// @dev getVote returns the vote of a single voter for a
-    /// given proposalId.
-    /// @param proposalId The proposal id
-    /// @param voter The voter on the proposal
-    /// @return vote Voter's vote for the proposal
+    / @dev getVote returns the vote of a single voter for a
+    / given proposalId.
+    / @param proposalId The proposal id
+    / @param voter The voter on the proposal
+    / @return vote Voter's vote for the proposal
     function getVote(
         uint64 proposalId,
         address voter
     ) external view returns (WeightedVote memory vote);
 
-    /// @dev getVotes Returns the votes for a specific proposal.
-    /// @param proposalId The proposal id
-    /// @param pagination The pagination options
-    /// @return votes The votes for the proposal
-    /// @return pageResponse The pagination information
+    / @dev getVotes Returns the votes for a specific proposal.
+    / @param proposalId The proposal id
+    / @param pagination The pagination options
+    / @return votes The votes for the proposal
+    / @return pageResponse The pagination information
     function getVotes(
         uint64 proposalId,
         PageRequest calldata pagination
@@ -1580,20 +1580,20 @@ interface IGov {
         view
         returns (WeightedVote[] memory votes, PageResponse memory pageResponse);
 
-    /// @dev getDeposit returns the deposit of a single depositor for a given proposalId.
-    /// @param proposalId The proposal id
-    /// @param depositor The address of the depositor
-    /// @return deposit The deposit information
+    / @dev getDeposit returns the deposit of a single depositor for a given proposalId.
+    / @param proposalId The proposal id
+    / @param depositor The address of the depositor
+    / @return deposit The deposit information
     function getDeposit(
         uint64 proposalId,
         address depositor
     ) external view returns (DepositData memory deposit);
 
-    /// @dev getDeposits returns all deposits for a specific proposal.
-    /// @param proposalId The proposal id
-    /// @param pagination The pagination options
-    /// @return deposits The deposits for the proposal
-    /// @return pageResponse The pagination information
+    / @dev getDeposits returns all deposits for a specific proposal.
+    / @param proposalId The proposal id
+    / @param pagination The pagination options
+    / @return deposits The deposits for the proposal
+    / @return pageResponse The pagination information
     function getDeposits(
         uint64 proposalId,
         PageRequest calldata pagination
@@ -1605,27 +1605,27 @@ interface IGov {
             PageResponse memory pageResponse
         );
 
-    /// @dev getTallyResult returns the tally result of a proposal.
-    /// @param proposalId The proposal id
-    /// @return tallyResult The tally result of the proposal
+    / @dev getTallyResult returns the tally result of a proposal.
+    / @param proposalId The proposal id
+    / @return tallyResult The tally result of the proposal
     function getTallyResult(
        uint64 proposalId
    ) external view returns (TallyResultData memory tallyResult);
 
-    /// @dev getProposal returns the proposal details based on proposal id.
-    /// @param proposalId The proposal id
-    /// @return proposal The proposal data
+    / @dev getProposal returns the proposal details based on proposal id.
+    / @param proposalId The proposal id
+    / @return proposal The proposal data
     function getProposal(
         uint64 proposalId
     ) external view returns (ProposalData memory proposal);
 
-    /// @dev getProposals returns proposals with matching status.
-    /// @param proposalStatus The proposal status to filter by
-    /// @param voter The voter address to filter by, if any
-    /// @param depositor The depositor address to filter by, if any
-    /// @param pagination The pagination config
-    /// @return proposals The proposals matching the filter criteria
-    /// @return pageResponse The pagination information
+    / @dev getProposals returns proposals with matching status.
+    / @param proposalStatus The proposal status to filter by
+    / @param voter The voter address to filter by, if any
+    / @param depositor The depositor address to filter by, if any
+    / @param pagination The pagination config
+    / @return proposals The proposals matching the filter criteria
+    / @return pageResponse The pagination information
     function getProposals(
         uint32 proposalStatus,
         address voter,
@@ -1639,12 +1639,12 @@ interface IGov {
             PageResponse memory pageResponse
         );
 
-    /// @dev getParams returns the current governance parameters.
-    /// @return params The governance parameters
+    / @dev getParams returns the current governance parameters.
+    / @return params The governance parameters
     function getParams() external view returns (Params memory params);
 
-    /// @dev getConstitution returns the current constitution.
-    /// @return constitution The current constitution
+    / @dev getConstitution returns the current constitution.
+    / @return constitution The current constitution
     function getConstitution() external view returns (string memory constitution);
 }
 ```
diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/ics20.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/ics20.mdx
index b24ea1e3..e9761aa6 100644
--- a/docs/evm/next/documentation/smart-contracts/precompiles/ics20.mdx
+++ b/docs/evm/next/documentation/smart-contracts/precompiles/ics20.mdx
@@ -75,7 +75,7 @@ Initiates a cross-chain token transfer using the IBC protocol.
 ```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+/ SPDX-License-Identifier: MIT
 pragma solidity ^0.8.0;
 
 contract ICS20Example {
@@ -109,10 +109,10 @@ contract ICS20Example {
         require(bytes(receiver).length > 0, "Receiver address required");
         require(timeoutTimestamp > 0, "Timeout timestamp required");
 
-        // Default port for ICS20 transfers
+        / Default port for ICS20 transfers
         string memory sourcePort = "transfer";
 
-        // Disable height-based timeout (using timestamp instead)
+        / Disable height-based timeout (using timestamp instead)
         Height memory timeoutHeight = Height({
             revisionNumber: 0,
             revisionHeight: 0
@@ -140,19 +140,19 @@ contract ICS20Example {
         return sequence;
     }
 
-    // Helper function to calculate timeout (1 hour from now)
+    / Helper function to calculate timeout (1 hour from now)
     function calculateTimeout(uint256 durationSeconds) external view returns (uint64) {
-        return uint64((block.timestamp + durationSeconds) * 1e9); // Convert to nanoseconds
+        return uint64((block.timestamp + durationSeconds) * 1e9); / Convert to nanoseconds
     }
 
-    // Quick transfer with 1-hour timeout
+    / Quick transfer with 1-hour timeout
     function quickTransfer(
         string calldata sourceChannel,
         string calldata denom,
         uint256 amount,
         string calldata receiver
     ) external payable returns (uint64) {
-        uint64 timeoutTimestamp = this.calculateTimeout(3600); // 1 hour
+        uint64 timeoutTimestamp = this.calculateTimeout(3600); / 1 hour
         return this.transfer{value: msg.value}(
             sourceChannel,
             denom,
@@ -167,26 +167,26 @@ contract ICS20Example {
 ```javascript Ethers.js expandable lines
 import { ethers } from "ethers";
 
-// ABI definition for the transfer function
+/ ABI definition for the transfer function
 const precompileAbi = [
   "function transfer(string memory sourcePort, string memory sourceChannel, string memory denom, uint256 amount, address sender, string memory receiver, tuple(uint64 revisionNumber, uint64 revisionHeight) timeoutHeight, uint64 timeoutTimestamp, string memory memo) returns (uint64)"
 ];
 
-// Provider and contract setup
+/ Provider and contract setup
 const provider = new ethers.JsonRpcProvider("");
 const precompileAddress = "0x0000000000000000000000000000000000000802";
 const signer = new ethers.Wallet("", provider);
 const contract = new ethers.Contract(precompileAddress, precompileAbi, signer);
 
-// Transfer parameters
+/ Transfer parameters
 const sourcePort = "transfer";
 const sourceChannel = "channel-0";
-const denom = "test"; // Token denomination
-const amount = ethers.parseEther("1.0"); // Amount to transfer
+const denom = "test"; / Token denomination
+const amount = ethers.parseEther("1.0"); / Amount to transfer
 const sender = await signer.getAddress();
-const receiver = "cosmos1..."; // Destination address on target chain
-const timeoutHeight = { revisionNumber: 0, revisionHeight: 0 }; // Height timeout disabled
-const timeoutTimestamp = Math.floor(Date.now() / 1000) + 3600; // 1 hour from now
+const receiver = "cosmos1..."; / Destination address on target chain
+const timeoutHeight = { revisionNumber: 0, revisionHeight: 0 }; / Height timeout disabled
+const timeoutTimestamp = Math.floor(Date.now() / 1000) + 3600; / 1 hour from now
 const memo = "";
 
 async function transferTokens() {
@@ -211,7 +211,7 @@ async function transferTokens() {
     }
 }
 
-// transferTokens();
+/ transferTokens();
 ```
 
 ```bash cURL expandable lines
@@ -228,7 +228,7 @@ Queries denomination information for an IBC token by its hash.
 ```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+/ SPDX-License-Identifier: MIT
 pragma solidity ^0.8.0;
 
 contract ICS20DenomQuery {
@@ -256,13 +256,13 @@ contract ICS20DenomQuery {
         return denom;
     }
 
-    // Helper function to check if a denom is native (no trace)
+    / Helper function to check if a denom is native (no trace)
     function isNativeDenom(string memory hash) external view returns (bool) {
         Denom memory denom = this.getDenom(hash);
         return denom.trace.length == 0;
     }
 
-    // Helper function to get the base denomination
+    / Helper function to get the base denomination
     function getBaseDenom(string memory hash) external view returns (string memory) {
         Denom memory denom = this.getDenom(hash);
         return denom.base;
@@ -272,18 +272,18 @@ contract ICS20DenomQuery {
 ```javascript Ethers.js expandable lines
 import { ethers } from "ethers";
 
-// ABI definition for the function
+/ ABI definition for the function
 const precompileAbi = [
   "function denom(string memory hash) view returns (tuple(string base, tuple(string portId, string channelId)[] trace) denom)"
 ];
 
-// Provider and contract setup
+/ Provider and contract setup
 const provider = new ethers.JsonRpcProvider("");
 const precompileAddress = "0x0000000000000000000000000000000000000802";
 const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);
 
-// Input: The hash of the denomination to query
-const denomHash = "ibc/..."; // Placeholder for actual denomination hash
+/ Input: The hash of the denomination to query
+const denomHash = "ibc/..."; / Placeholder for actual denomination hash
 
 async function getDenom() {
   try {
@@ -318,7 +318,7 @@ Retrieves a paginated list of all denomination traces registered on the chain.
 ```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+/ SPDX-License-Identifier: MIT
 pragma solidity ^0.8.0;
 
 contract ICS20DenomsList {
@@ -364,7 +364,7 @@ contract ICS20DenomsList {
         return (denoms, pageResponse);
     }
 
-    // Helper function to get all IBC denoms (with trace)
+    / Helper function to get all IBC denoms (with trace)
     function getIBCDenoms(uint64 limit) external view returns (Denom[] memory) {
         PageRequest memory pageRequest = PageRequest({
             key: "",
@@ -376,7 +376,7 @@ contract ICS20DenomsList {
 
         (Denom[] memory allDenoms,) = this.getDenoms(pageRequest);
 
-        // Count IBC denoms (those with trace)
+        / Count IBC denoms (those with trace)
         uint256 ibcCount = 0;
         for (uint i = 0; i < allDenoms.length; i++) {
             if (allDenoms[i].trace.length > 0) {
@@ -384,8 +384,8 @@ contract ICS20DenomsList {
             }
         }
 
-        // Filter IBC denoms
-        Denom[] memory ibcDenoms = new Denom[](ibcCount);
+        / Filter IBC denoms
+        Denom[] memory ibcDenoms = new Denom[](/docs/evm/next/documentation/smart-contracts/precompiles/ibcCount);
         uint256 index = 0;
         for (uint i = 0; i < allDenoms.length; i++) {
             if (allDenoms[i].trace.length > 0) {
@@ -400,17 +400,17 @@ contract ICS20DenomsList {
 ```javascript Ethers.js expandable lines
 import { ethers } from "ethers";
 
-// ABI definition for the function
+/ ABI definition for the function
 const precompileAbi = [
   "function denoms(tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pageRequest) view returns (tuple(string base, tuple(string portId, string channelId)[] trace)[] denoms, tuple(bytes nextKey, uint64 total) pageResponse)"
 ];
 
-// Provider and contract setup
+/ Provider and contract setup
 const provider = new ethers.JsonRpcProvider("");
 const precompileAddress = "0x0000000000000000000000000000000000000802";
 const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);
 
-// Input for pagination
+/ Input for pagination
 const pagination = {
   key: "0x",
   offset: 0,
@@ -454,7 +454,7 @@ Computes the hash of a denomination trace path.
 
 ```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+/ SPDX-License-Identifier: MIT
 pragma solidity ^0.8.0;
 
 contract ICS20DenomHash {
@@ -472,13 +472,13 @@ contract ICS20DenomHash {
         return hash;
     }
 
-    // Helper function to build and hash a trace path
+    / Helper function to build and hash a trace path
     function buildAndHashTrace(
         string memory portId,
         string memory channelId,
         string memory baseDenom
     ) external view returns (string memory) {
-        // Build trace in format: "port/channel/denom"
+        / Build trace in format: "port/channel/denom"
         string memory trace = string(abi.encodePacked(
             portId,
             "/",
@@ -490,7 +490,7 @@ contract ICS20DenomHash {
         return this.getDenomHash(trace);
     }
 
-    // Helper function to verify if a hash matches a trace
+    / Helper function to verify if a hash matches a trace
     function verifyDenomHash(
         string memory trace,
         string memory expectedHash
@@ -503,18 +503,18 @@ contract ICS20DenomHash {
 ```javascript Ethers.js expandable lines
 import { ethers } from "ethers";
 
-// ABI definition for the function
+/ ABI definition for the function
 const precompileAbi = [
   "function denomHash(string memory trace) view returns (string memory hash)"
 ];
 
-// Provider and contract setup
+/ Provider and contract setup
 const provider = new ethers.JsonRpcProvider("");
 const precompileAddress = "0x0000000000000000000000000000000000000802";
 const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);
 
-// Input: The trace path to hash
-const tracePath = "transfer/channel-0/test"; // Placeholder for actual trace path
+/ Input: The trace path to hash
+const tracePath = "transfer/channel-0/test"; / Placeholder for actual trace path
 
 async function getDenomHash() {
   try {
@@ -547,46 +547,46 @@ curl -X POST --data '{
 
 ## Full Solidity Interface & ABI
 
 ```solidity title="ICS20 Solidity Interface" lines expandable
-// SPDX-License-Identifier: LGPL-3.0-only
+/ SPDX-License-Identifier: LGPL-3.0-only
 pragma solidity >=0.8.18;
 
 import "../common/Types.sol";
 
-/// @dev The ICS20I contract's address.
+/ @dev The ICS20I contract's address.
 address constant ICS20_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000802;
 
-/// @dev The ICS20 contract's instance.
+/ @dev The ICS20 contract's instance.
 ICS20I constant ICS20_CONTRACT = ICS20I(ICS20_PRECOMPILE_ADDRESS);
 
-/// @dev Denom contains the base denomination for ICS20 fungible tokens and the
-/// source tracing information path.
+/ @dev Denom contains the base denomination for ICS20 fungible tokens and the
+/ source tracing information path.
 struct Denom {
-    /// base denomination of the relayed fungible token.
+    / base denomination of the relayed fungible token.
     string base;
-    /// trace contains a list of hops for multi-hop transfers.
+    / trace contains a list of hops for multi-hop transfers.
    Hop[] trace;
 }
 
-/// @dev Hop defines a port ID, channel ID pair specifying where
-/// tokens must be forwarded next in a multi-hop transfer.
+/ @dev Hop defines a port ID, channel ID pair specifying where
+/ tokens must be forwarded next in a multi-hop transfer.
 struct Hop {
     string portId;
     string channelId;
 }
 
-/// @author Evmos Team
-/// @title ICS20 Transfer Precompiled Contract
-/// @dev The interface through which solidity contracts will interact with IBC Transfer (ICS20)
-/// @custom:address 0x0000000000000000000000000000000000000802
+/ @author Evmos Team
+/ @title ICS20 Transfer Precompiled Contract
+/ @dev The interface through which solidity contracts will interact with IBC Transfer (ICS20)
+/ @custom:address 0x0000000000000000000000000000000000000802
 interface ICS20I {
-    /// @dev Emitted when an ICS-20 transfer is executed.
-    /// @param sender The address of the sender.
-    /// @param receiver The address of the receiver.
-    /// @param sourcePort The source port of the IBC transaction, For v2 packets, leave it empty.
-    /// @param sourceChannel The source channel of the IBC transaction, For v2 packets, set the client ID.
-    /// @param denom The denomination of the tokens transferred.
-    /// @param amount The amount of tokens transferred.
-    /// @param memo The IBC transaction memo.
+    / @dev Emitted when an ICS-20 transfer is executed.
+    / @param sender The address of the sender.
+    / @param receiver The address of the receiver.
+    / @param sourcePort The source port of the IBC transaction, For v2 packets, leave it empty.
+    / @param sourceChannel The source channel of the IBC transaction, For v2 packets, set the client ID.
+    / @param denom The denomination of the tokens transferred.
+    / @param amount The amount of tokens transferred.
+    / @param memo The IBC transaction memo.
     event IBCTransfer(
         address indexed sender,
         string indexed receiver,
@@ -597,19 +597,19 @@ interface ICS20I {
         string memo
     );
 
-    /// @dev Transfer defines a method for performing an IBC transfer.
-    /// @param sourcePort the port on which the packet will be sent
-    /// @param sourceChannel the channel by which the packet will be sent
-    /// @param denom the denomination of the Coin to be transferred to the receiver
-    /// @param amount the amount of the Coin to be transferred to the receiver
-    /// @param sender the hex address of the sender
-    /// @param receiver the bech32 address of the receiver (hex addresses not yet supported)
-    /// @param timeoutHeight the timeout height relative to the current block height.
-    /// The timeout is disabled when set to 0
-    /// @param timeoutTimestamp the timeout timestamp in absolute nanoseconds since unix epoch.
-    /// The timeout is disabled when set to 0
-    /// @param memo optional memo
-    /// @return nextSequence sequence number of the transfer packet sent
+    / @dev Transfer defines a method for performing an IBC transfer.
+ / @param sourcePort the port on which the packet will be sent + / @param sourceChannel the channel by which the packet will be sent + / @param denom the denomination of the Coin to be transferred to the receiver + / @param amount the amount of the Coin to be transferred to the receiver + / @param sender the hex address of the sender + / @param receiver the bech32 address of the receiver (hex addresses not yet supported) + / @param timeoutHeight the timeout height relative to the current block height. + / The timeout is disabled when set to 0 + / @param timeoutTimestamp the timeout timestamp in absolute nanoseconds since unix epoch. + / The timeout is disabled when set to 0 + / @param memo optional memo + / @return nextSequence sequence number of the transfer packet sent function transfer( string memory sourcePort, string memory sourceChannel, @@ -622,8 +622,8 @@ interface ICS20I { string memory memo ) external returns (uint64 nextSequence); - /// @dev denoms Defines a method for returning all denoms. - /// @param pageRequest Defines the pagination parameters to for the request. + / @dev denoms Defines a method for returning all denoms. + / @param pageRequest Defines the pagination parameters to for the request. function denoms( PageRequest memory pageRequest ) @@ -634,12 +634,12 @@ interface ICS20I { PageResponse memory pageResponse ); - /// @dev Denom defines a method for returning a denom. + / @dev Denom defines a method for returning a denom. function denom( string memory hash ) external view returns (Denom memory denom); - /// @dev DenomHash defines a method for returning a hash of the denomination info. + / @dev DenomHash defines a method for returning a hash of the denomination info. 
    function denomHash(
        string memory trace
    ) external view returns (string memory hash);

diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/overview.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/overview.mdx
index b96027ea..34b06f4c 100644
--- a/docs/evm/next/documentation/smart-contracts/precompiles/overview.mdx
+++ b/docs/evm/next/documentation/smart-contracts/precompiles/overview.mdx
@@ -9,16 +9,16 @@ keywords: ['precompiles', 'precompiled contracts', 'cosmos sdk', 'evm', 'smart c

| Precompile | Address | Purpose | Reference |
| ---------------- | -------------------------------------------- | ---------------------------------------------------------------- | ------------------------- |
-| **Bank** | `0x0000000000000000000000000000000000000804` | ERC20-style access to native Cosmos SDK tokens | [Details](./bank) |
-| **Bech32** | `0x0000000000000000000000000000000000000400` | Address format conversion between Ethereum hex and Cosmos bech32 | [Details](./bech32) |
-| **Staking** | `0x0000000000000000000000000000000000000800` | Validator operations, delegation, and staking rewards | [Details](./staking) |
-| **Distribution** | `0x0000000000000000000000000000000000000801` | Staking rewards and community pool management | [Details](./distribution) |
-| **ERC20** | Dynamic per token | Standard ERC20 functionality for native Cosmos tokens | [Details](./erc20) |
-| **Governance** | `0x0000000000000000000000000000000000000805` | On-chain governance proposals and voting | [Details](./governance) |
-| **ICS20** | `0x0000000000000000000000000000000000000802` | Cross-chain token transfers via IBC | [Details](./ics20) |
-| **WERC20** | Dynamic per token | Wrapped native token functionality | [Details](./werc20) |
-| **Slashing** | `0x0000000000000000000000000000000000000806` | Validator slashing and jail management | [Details](./slashing) |
-| **P256** | `0x0000000000000000000000000000000000000100` | P-256 elliptic curve cryptographic operations | [Details](./p256) |
+| **Bank** | `0x0000000000000000000000000000000000000804` | ERC20-style access to native Cosmos SDK tokens | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/bank) |
+| **Bech32** | `0x0000000000000000000000000000000000000400` | Address format conversion between Ethereum hex and Cosmos bech32 | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/bech32) |
+| **Staking** | `0x0000000000000000000000000000000000000800` | Validator operations, delegation, and staking rewards | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/staking) |
+| **Distribution** | `0x0000000000000000000000000000000000000801` | Staking rewards and community pool management | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/distribution) |
+| **ERC20** | Dynamic per token | Standard ERC20 functionality for native Cosmos tokens | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/erc20) |
+| **Governance** | `0x0000000000000000000000000000000000000805` | On-chain governance proposals and voting | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/governance) |
+| **ICS20** | `0x0000000000000000000000000000000000000802` | Cross-chain token transfers via IBC | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/ics20) |
+| **WERC20** | Dynamic per token | Wrapped native token functionality | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/werc20) |
+| **Slashing** | `0x0000000000000000000000000000000000000806` | Validator slashing and jail management | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/slashing) |
+| **P256** | `0x0000000000000000000000000000000000000100` | P-256 elliptic curve cryptographic operations | [Details](/docs/evm/next/documentation/smart-contracts/precompiles/p256) |

## Configuration

@@ -40,13 +40,13 @@ Precompiled contracts provide direct, gas-efficient access to native Cosmos SDK

3.
Never assume 18-decimal precision

```solidity
-// WRONG: Assuming Ethereum's 18 decimals
-uint256 amount = 1 ether; // 1000000000000000000
-staking.delegate(validator, amount); // Delegates 1 trillion tokens!
+// WRONG: Assuming Ethereum's 18 decimals
+uint256 amount = 1 ether; // 1000000000000000000
+staking.delegate(validator, amount); // Delegates 1 trillion tokens!

-// CORRECT: Using chain's native 6 decimals
-uint256 amount = 1000000; // 1 token with 6 decimals
-staking.delegate(validator, amount); // Delegates 1 token
+// CORRECT: Using chain's native 6 decimals
+uint256 amount = 1000000; // 1 token with 6 decimals
+staking.delegate(validator, amount); // Delegates 1 token
```

@@ -60,15 +60,15 @@ Precompiles must be enabled in the genesis file to be available on the network.

"evm": {
    "params": {
        "active_static_precompiles": [
-            "0x0000000000000000000000000000000000000100", // P256
-            "0x0000000000000000000000000000000000000400", // Bech32
-            "0x0000000000000000000000000000000000000800", // Staking
-            "0x0000000000000000000000000000000000000801", // Distribution
-            "0x0000000000000000000000000000000000000802", // ICS20
-            "0x0000000000000000000000000000000000000804", // Bank
-            "0x0000000000000000000000000000000000000805", // Governance
-            "0x0000000000000000000000000000000000000806", // Slashing
-            "0x0000000000000000000000000000000000000807"  // Callbacks
+            "0x0000000000000000000000000000000000000100", // P256
+            "0x0000000000000000000000000000000000000400", // Bech32
+            "0x0000000000000000000000000000000000000800", // Staking
+            "0x0000000000000000000000000000000000000801", // Distribution
+            "0x0000000000000000000000000000000000000802", // ICS20
+            "0x0000000000000000000000000000000000000804", // Bank
+            "0x0000000000000000000000000000000000000805", // Governance
+            "0x0000000000000000000000000000000000000806", // Slashing
+            "0x0000000000000000000000000000000000000807"  // Callbacks
        ]
    }
}

@@ -92,20 +92,20 @@ All precompile constructors in v0.5.0 now require keeper interfaces instead of concrete types

**Before (v0.4.x):**

```go
-// Concrete keeper types required
+// Concrete keeper types required
bankPrecompile, err := bankprecompile.NewPrecompile(
-    bankKeeper,    // bankkeeper.Keeper concrete type
-    stakingKeeper, // stakingkeeper.Keeper concrete type
-    // ... other keepers
+    bankKeeper,    // bankkeeper.Keeper concrete type
+    stakingKeeper, // stakingkeeper.Keeper concrete type
+    // ... other keepers
)
```

**After (v0.5.0):**

```go
-// Keeper interfaces required (parameters accept interfaces directly)
+// Keeper interfaces required (parameters accept interfaces directly)
bankPrecompile, err := bankprecompile.NewPrecompile(
-    bankKeeper,  // common.BankKeeper interface
-    erc20Keeper, // erc20keeper.Keeper interface
+    bankKeeper,  // common.BankKeeper interface
+    erc20Keeper, // erc20keeper.Keeper interface
)
```

@@ -127,7 +127,7 @@ The following keeper interfaces are defined in [`precompiles/common/interfaces.g

**Custom Precompile Migration:**

```go
-// Before (v0.4.x)
+// Before (v0.4.x)
func NewCustomPrecompile(
    bankKeeper bankkeeper.Keeper,
    stakingKeeper stakingkeeper.Keeper,
@@ -138,10 +138,10 @@ func NewCustomPrecompile(
    }, nil
}

-// After (v0.5.0)
+// After (v0.5.0)
func NewCustomPrecompile(
-    bankKeeper common.BankKeeper,       // Interface instead of concrete type
-    stakingKeeper common.StakingKeeper, // Interface instead of concrete type
+    bankKeeper common.BankKeeper,       // Interface instead of concrete type
+    stakingKeeper common.StakingKeeper, // Interface instead of concrete type
) (*CustomPrecompile, error) {
    return &CustomPrecompile{
        bankKeeper: bankKeeper,
@@ -152,18 +152,18 @@

**Assembly Updates:**

```go
-// In your precompile assembly function (based on evmd/precompiles.go)
+// In your precompile assembly function (based on evmd/precompiles.go)
func NewAvailableStaticPrecompiles(
    stakingKeeper stakingkeeper.Keeper,
    distributionKeeper distributionkeeper.Keeper,
-    bankKeeper cmn.BankKeeper, // Already interface type
+    bankKeeper cmn.BankKeeper, // Already interface type
    erc20Keeper erc20keeper.Keeper,
-    // ... other keeper interfaces
+    // ... other keeper interfaces
    opts ...Option,
) map[common.Address]vm.PrecompiledContract {
    precompiles := make(map[common.Address]vm.PrecompiledContract)

-    // No casting needed - parameters are already interface types
+    // No casting needed - parameters are already interface types
    bankPrecompile, err := bankprecompile.NewPrecompile(bankKeeper, erc20Keeper)
    if err != nil {
        panic(fmt.Errorf("failed to instantiate bank precompile: %w", err))
@@ -186,8 +186,8 @@ Precompiles must be explicitly activated via the `active_static_precompiles` par

# Via genesis configuration
{
    "active_static_precompiles": [
-        "0x0000000000000000000000000000000000000804", // Bank
-        "0x0000000000000000000000000000000000000800"  // Staking
+        "0x0000000000000000000000000000000000000804", // Bank
+        "0x0000000000000000000000000000000000000800"  // Staking
    ]
}

@@ -198,7 +198,7 @@ evmd tx gov submit-proposal param-change-proposal.json --from validator

### Usage in Smart Contracts

```solidity
-// Import precompile interfaces
+// Import precompile interfaces
import "./IBank.sol";
import "./IStaking.sol";

@@ -222,8 +222,8 @@ All precompiles include comprehensive Solidity test suites for validation:

```bash
# Run precompile tests
-cd tests && npm test
+cd scripts/test && npm test

# Test specific precompile
-npx hardhat test test/Bank.test.js
+npx hardhat test scripts/test/Bank.test.js
```

diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/p256.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/p256.mdx
index b6054e73..82bd658f 100644
--- a/docs/evm/next/documentation/smart-contracts/precompiles/p256.mdx
+++ b/docs/evm/next/documentation/smart-contracts/precompiles/p256.mdx
@@ -38,7 +38,7 @@ The precompile exposes a single unnamed function that verifies P-256 signatures.
```solidity Solidity
-// P256 signature verification
+// P256 signature verification
address constant P256_PRECOMPILE = 0x0000000000000000000000000000000000000100;

function verifyP256Signature(
@@ -63,11 +63,11 @@ function verifyP256Signature(

```javascript Ethers.js
const ethers = require('ethers');

-// P256 precompile address
+// P256 precompile address
const P256_ADDRESS = '0x0000000000000000000000000000000000000100';

async function verifyP256Signature(provider, messageHash, r, s, x, y) {
-    // Encode the input data
+    // Encode the input data
    const input = ethers.utils.concat([
        messageHash,
        r,
@@ -76,13 +76,13 @@ async function verifyP256Signature(provider, messageHash, r, s, x, y) {
        y
    ]);

-    // Call the precompile
+    // Call the precompile
    const result = await provider.call({
        to: P256_ADDRESS,
        data: input
    });

-    // Check if signature is valid (result should be 0x00...01)
+    // Check if signature is valid (result should be 0x00...01)
    return result === '0x' + '00'.repeat(31) + '01';
}
```

@@ -152,11 +152,11 @@ contract WebAuthnWallet {
    ) external view returns (bool) {
        Credential memory cred = credentials[msg.sender];

-        // Compute challenge hash according to WebAuthn spec
+        // Compute challenge hash according to WebAuthn spec
        bytes32 clientDataHash = sha256(clientDataJSON);
        bytes32 messageHash = sha256(abi.encodePacked(authenticatorData, clientDataHash));

-        // Verify P-256 signature
+        // Verify P-256 signature
        bytes memory input = abi.encodePacked(
            messageHash,
            r,

diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/slashing.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/slashing.mdx
index b5bcda1f..35cc8b39 100644
--- a/docs/evm/next/documentation/smart-contracts/precompiles/slashing.mdx
+++ b/docs/evm/next/documentation/smart-contracts/precompiles/slashing.mdx
@@ -30,7 +30,7 @@ Allows a validator to unjail themselves after being jailed for downtime.
```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract SlashingExample {
@@ -51,7 +51,7 @@ contract SlashingExample {
    function unjailValidator(address validatorAddress) external returns (bool success) {
        require(validatorAddress != address(0), "Invalid validator address");

-        // Check signing info to prevent unjailing tombstoned validators
+        // Check signing info to prevent unjailing tombstoned validators
        SigningInfo memory info = this.getValidatorSigningInfo(validatorAddress);
        require(!info.tombstoned, "Tombstoned validators cannot be unjailed");
        require(info.jailedUntil > int64(int256(block.timestamp)), "Validator is not jailed");
@@ -89,7 +89,7 @@ contract SlashingExample {
    {
        SigningInfo memory info = this.getValidatorSigningInfo(consAddress);

-        // CRITICAL: Tombstoned validators can NEVER be unjailed
+        // CRITICAL: Tombstoned validators can NEVER be unjailed
        if (info.tombstoned) {
            return (false, "PERMANENT_TOMBSTONE: Validator can never be unjailed due to severe infractions (e.g., double-signing)");
        }
@@ -129,18 +129,18 @@ Returns the signing information for a specific validator.
```javascript Ethers.js expandable lines
import { ethers } from "ethers";

-// ABI definition for the function
+// ABI definition for the function
const precompileAbi = [
  "function getSigningInfo(address consAddress) view returns (tuple(address validatorAddress, int64 startHeight, int64 indexOffset, int64 jailedUntil, bool tombstoned, int64 missedBlocksCounter) signingInfo)"
];

-// Provider and contract setup
+// Provider and contract setup
const provider = new ethers.JsonRpcProvider("");
const precompileAddress = "0x0000000000000000000000000000000000000806";
const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);

-// Input: The consensus address of the validator
-const consAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder
+// Input: The consensus address of the validator
+const consAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder

async function getSigningInfo() {
    try {
@@ -177,17 +177,17 @@ Returns the signing information for all validators, with pagination support.

```javascript Ethers.js expandable lines
import { ethers } from "ethers";

-// ABI definition for the function
+// ABI definition for the function
const precompileAbi = [
  "function getSigningInfos(tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pagination) view returns (tuple(address validatorAddress, int64 startHeight, int64 indexOffset, int64 jailedUntil, bool tombstoned, int64 missedBlocksCounter)[] signingInfos, tuple(bytes nextKey, uint64 total) pageResponse)"
];

-// Provider and contract setup
+// Provider and contract setup
const provider = new ethers.JsonRpcProvider("");
const precompileAddress = "0x0000000000000000000000000000000000000806";
const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);

-// Input for pagination
+// Input for pagination
const pagination = {
    key: "0x",
    offset: 0,
@@ -232,12 +232,12 @@ Returns the current parameters for the slashing module.
```javascript Ethers.js expandable lines
import { ethers } from "ethers";

-// ABI definition for the function
+// ABI definition for the function
const precompileAbi = [
  "function getParams() view returns (tuple(int64 signedBlocksWindow, tuple(uint256 value, uint8 precision) minSignedPerWindow, int64 downtimeJailDuration, tuple(uint256 value, uint8 precision) slashFractionDoubleSign, tuple(uint256 value, uint8 precision) slashFractionDowntime) params)"
];

-// Provider and contract setup
+// Provider and contract setup
const provider = new ethers.JsonRpcProvider("");
const precompileAddress = "0x0000000000000000000000000000000000000806";
const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);
@@ -276,81 +276,81 @@ curl -X POST --data '{

## Full Solidity Interface & ABI

```solidity title="Slashing Solidity Interface" lines expandable
-// SPDX-License-Identifier: LGPL-3.0-only
+// SPDX-License-Identifier: LGPL-3.0-only
pragma solidity >=0.8.17;

import "../common/Types.sol";

-/// @dev The ISlashing contract's address.
+/// @dev The ISlashing contract's address.
address constant SLASHING_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000806;

-/// @dev The ISlashing contract's instance.
+/// @dev The ISlashing contract's instance.
ISlashing constant SLASHING_CONTRACT = ISlashing(SLASHING_PRECOMPILE_ADDRESS);

-/// @dev SigningInfo defines a validator's signing info for monitoring their
-/// liveness activity.
+/// @dev SigningInfo defines a validator's signing info for monitoring their
+/// liveness activity.
struct SigningInfo {
-    /// @dev Address of the validator
+    /// @dev Address of the validator
    address validatorAddress;
-    /// @dev Height at which validator was first a candidate OR was unjailed
+    /// @dev Height at which validator was first a candidate OR was unjailed
    int64 startHeight;
-    /// @dev Index offset into signed block bit array
+    /// @dev Index offset into signed block bit array
    int64 indexOffset;
-    /// @dev Timestamp until which validator is jailed due to liveness downtime
+    /// @dev Timestamp until which validator is jailed due to liveness downtime
    int64 jailedUntil;
-    /// @dev Whether or not a validator has been tombstoned (killed out of validator set)
+    /// @dev Whether or not a validator has been tombstoned (killed out of validator set)
    bool tombstoned;
-    /// @dev Missed blocks counter (to avoid scanning the array every time)
+    /// @dev Missed blocks counter (to avoid scanning the array every time)
    int64 missedBlocksCounter;
}

-/// @dev Params defines the parameters for the slashing module.
+/// @dev Params defines the parameters for the slashing module.
struct Params {
-    /// @dev SignedBlocksWindow defines how many blocks the validator should have signed
+    /// @dev SignedBlocksWindow defines how many blocks the validator should have signed
    int64 signedBlocksWindow;
-    /// @dev MinSignedPerWindow defines the minimum blocks signed per window to avoid slashing
+    /// @dev MinSignedPerWindow defines the minimum blocks signed per window to avoid slashing
    Dec minSignedPerWindow;
-    /// @dev DowntimeJailDuration defines how long the validator will be jailed for downtime
+    /// @dev DowntimeJailDuration defines how long the validator will be jailed for downtime
    int64 downtimeJailDuration;
-    /// @dev SlashFractionDoubleSign defines the percentage of slash for double sign
+    /// @dev SlashFractionDoubleSign defines the percentage of slash for double sign
    Dec slashFractionDoubleSign;
-    /// @dev SlashFractionDowntime defines the percentage of slash for downtime
+    /// @dev SlashFractionDowntime defines the percentage of slash for downtime
    Dec slashFractionDowntime;
}

-/// @author Evmos Team
-/// @title Slashing Precompiled Contract
-/// @dev The interface through which solidity contracts will interact with slashing.
-/// We follow this same interface including four-byte function selectors, in the precompile that
-/// wraps the pallet.
-/// @custom:address 0x0000000000000000000000000000000000000806
+/// @author Evmos Team
+/// @title Slashing Precompiled Contract
+/// @dev The interface through which solidity contracts will interact with slashing.
+/// We follow this same interface including four-byte function selectors, in the precompile that
+/// wraps the pallet.
+/// @custom:address 0x0000000000000000000000000000000000000806
interface ISlashing {
-    /// @dev Emitted when a validator is unjailed
-    /// @param validator The address of the validator
+    /// @dev Emitted when a validator is unjailed
+    /// @param validator The address of the validator
    event ValidatorUnjailed(address indexed validator);

-    /// @dev GetSigningInfo returns the signing info for a specific validator.
-    /// @param consAddress The validator consensus address
-    /// @return signingInfo The validator signing info
+    /// @dev GetSigningInfo returns the signing info for a specific validator.
+    /// @param consAddress The validator consensus address
+    /// @return signingInfo The validator signing info
    function getSigningInfo(
        address consAddress
    ) external view returns (SigningInfo memory signingInfo);

-    /// @dev GetSigningInfos returns the signing info for all validators.
-    /// @param pagination Pagination configuration for the query
-    /// @return signingInfos The list of validator signing info
-    /// @return pageResponse Pagination information for the response
+    /// @dev GetSigningInfos returns the signing info for all validators.
+    /// @param pagination Pagination configuration for the query
+    /// @return signingInfos The list of validator signing info
+    /// @return pageResponse Pagination information for the response
    function getSigningInfos(
        PageRequest calldata pagination
    ) external view returns (SigningInfo[] memory signingInfos, PageResponse memory pageResponse);

-    /// @dev Unjail allows validators to unjail themselves after being jailed for downtime
-    /// @param validatorAddress The validator operator address to unjail
-    /// @return success true if the unjail operation was successful
+    /// @dev Unjail allows validators to unjail themselves after being jailed for downtime
+    /// @param validatorAddress The validator operator address to unjail
+    /// @return success true if the unjail operation was successful
    function unjail(address validatorAddress) external returns (bool success);

-    /// @dev GetParams returns the slashing module parameters
-    /// @return params The slashing module parameters
+    /// @dev GetParams returns the slashing module parameters
+    /// @return params The slashing module parameters
    function getParams() external view returns (Params memory params);
}
```

diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/staking.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/staking.mdx
index 5d3b7f9e..05819bae 100644
--- a/docs/evm/next/documentation/smart-contracts/precompiles/staking.mdx
+++ b/docs/evm/next/documentation/smart-contracts/precompiles/staking.mdx
@@ -30,10 +30,10 @@ Creates a new validator.

```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

-// Interface for the Staking precompile
+// Interface for the Staking precompile
interface IStaking {
    struct Description {
        string moniker;
@@ -112,10 +112,10 @@ Edits an existing validator's parameters.
```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

-// Interface for the Staking precompile
+// Interface for the Staking precompile
interface IStaking {
    struct Description {
        string moniker;
@@ -173,10 +173,10 @@ Delegates tokens to a validator.

```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

-// Interface for the Staking precompile
+// Interface for the Staking precompile
interface IStaking {
    function delegate(address delegatorAddress, string memory validatorAddress, uint256 amount) external returns (bool success);
}
@@ -213,21 +213,21 @@ contract StakingExample {

```javascript Ethers.js expandable lines
import { ethers } from "ethers";

-// ABI definition for the delegate function
+// ABI definition for the delegate function
const precompileAbi = [
  "function delegate(address delegatorAddress, string memory validatorAddress, uint256 amount) returns (bool)"
];

-// Provider and contract setup
+// Provider and contract setup
const provider = new ethers.JsonRpcProvider("");
const precompileAddress = "0x0000000000000000000000000000000000000800";
const signer = new ethers.Wallet("", provider);
const contract = new ethers.Contract(precompileAddress, precompileAbi, signer);

-// Delegation parameters
+// Delegation parameters
const delegatorAddress = await signer.getAddress();
-const validatorAddress = "cosmosvaloper1..."; // Validator operator address
-const amount = ethers.parseEther("10.0"); // Amount to delegate in native token
+const validatorAddress = "cosmosvaloper1..."; // Validator operator address
+const amount = ethers.parseEther("10.0"); // Amount to delegate in native token

async function delegateTokens() {
    try {
@@ -245,7 +245,7 @@ async function delegateTokens() {
    }
}

-// delegateTokens();
+// delegateTokens();
```

```bash cURL expandable lines
@@ -260,10 +260,10 @@ Undelegates tokens from a validator.

```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

-// Interface for the Staking precompile
+// Interface for the Staking precompile
interface IStaking {
    function beginUnbonding(address delegatorAddress, string memory validatorAddress, uint256 amount) external returns (int64 completionTime);
}
@@ -296,7 +296,7 @@ contract StakingExample {
        }
    }

-    // Helper function to calculate days until completion
+    // Helper function to calculate days until completion
    function getDaysUntilCompletion(int64 completionTime) external view returns (uint256 days) {
        if (completionTime <= int64(block.timestamp)) {
            return 0;
@@ -316,10 +316,10 @@ Redelegates tokens from one validator to another.

```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

-// Interface for the Staking precompile
+// Interface for the Staking precompile
interface IStaking {
    function beginRedelegate(
        address delegatorAddress,
@@ -384,10 +384,10 @@ Cancels an unbonding delegation.

```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

-// Interface for the Staking precompile
+// Interface for the Staking precompile
interface IStaking {
    function cancelUnbondingDelegation(
        address delegatorAddress,
@@ -450,22 +450,22 @@ Queries information about a specific validator.

```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

-// Interface for the Staking precompile (matches ~/repos/evm ABI)
+// Interface for the Staking precompile (matches ~/repos/evm ABI)
interface IStaking {
    struct Validator {
        string operatorAddress;
        string consensusPubkey;
        bool jailed;
-        uint8 status; // BondStatus enum as uint8
+        uint8 status; // BondStatus enum as uint8
        uint256 tokens;
-        uint256 delegatorShares; // uint256
-        string description; // string, not a nested struct
+        uint256 delegatorShares; // uint256
+        string description; // string, not a nested struct
        int64 unbondingHeight;
-        int64 unbondingTime; // int64
-        uint256 commission; // uint256
+        int64 unbondingTime; // int64
+        uint256 commission; // uint256
        uint256 minSelfDelegation;
    }

@@ -503,18 +503,18 @@ contract StakingExample {

```javascript Ethers.js expandable lines
import { ethers } from "ethers";

-// ABI definition for the function
+// ABI definition for the function
const precompileAbi = [
  "function validator(address validatorAddress) view returns (tuple(string operatorAddress, string consensusPubkey, bool jailed, uint8 status, uint256 tokens, uint256 delegatorShares, string description, int64 unbondingHeight, int64 unbondingTime, uint256 commission, uint256 minSelfDelegation) validator)"
];

-// Provider and contract setup
+// Provider and contract setup
const provider = new ethers.JsonRpcProvider("");
const precompileAddress = "0x0000000000000000000000000000000000000800";
const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);

-// Input - use actual validator address from the network
-const validatorAddress = "0x7cB61D4117AE31a12E393a1Cfa3BaC666481D02E"; // Example validator address
+// Input - use actual validator address from the network
+const validatorAddress = "0x7cB61D4117AE31a12E393a1Cfa3BaC666481D02E"; // Example validator address

async function getValidator() {
    try {
@@ -522,7 +522,7 @@ async function getValidator() {
        console.log("Validator Info:", {
            operatorAddress: validator.operatorAddress,
            jailed: validator.jailed,
-            status: validator.status, // 0=Unspecified, 1=Unbonded, 2=Unbonding, 3=Bonded
+            status: validator.status, // 0=Unspecified, 1=Unbonded, 2=Unbonding, 3=Bonded
            tokens: ethers.formatEther(validator.tokens),
            delegatorShares: ethers.formatEther(validator.delegatorShares)
        });
@@ -558,17 +558,17 @@ Queries validators with optional status filtering and pagination.

```javascript Ethers.js expandable lines
import { ethers } from "ethers";

-// ABI definition for the function
+// ABI definition for the function
const precompileAbi = [
  "function validators(string memory status, tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pageRequest) view returns (tuple(string operatorAddress, string consensusPubkey, bool jailed, uint32 status, uint256 tokens, string delegatorShares, tuple(string moniker, string identity, string website, string securityContact, string details) description, int64 unbondingHeight, uint256 unbondingTime, tuple(tuple(string rate, string maxRate, string maxChangeRate) commissionRates, uint256 updateTime) commission, uint256 minSelfDelegation)[] validators, tuple(bytes nextKey, uint64 total) pageResponse)"
];

-// Provider and contract setup
+// Provider and contract setup
const provider = new ethers.JsonRpcProvider("");
const precompileAddress = "0x0000000000000000000000000000000000000800";
const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);

-// Inputs
+// Inputs
const status = "BOND_STATUS_BONDED";
const pagination = {
    key: "0x",
@@ -615,19 +615,19 @@ Queries the delegation amount between a delegator and a validator.

```javascript Ethers.js expandable lines
import { ethers } from "ethers";

-// ABI definition for the function
+// ABI definition for the function
const precompileAbi = [
  "function delegation(address delegatorAddress, string memory validatorAddress) view returns (uint256 shares, tuple(string denom, uint256 amount) balance)"
];

-// Provider and contract setup
+// Provider and contract setup
const provider = new ethers.JsonRpcProvider("");
const precompileAddress = "0x0000000000000000000000000000000000000800";
const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);

-// Inputs
-const delegatorAddress = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"; // Example delegator
-const validatorAddress = "cosmosvaloper10jmp6sgh4cc6zt3e8gw05wavvejgr5pw4xyrql"; // Example validator
+// Inputs
+const delegatorAddress = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"; // Example delegator
+const validatorAddress = "cosmosvaloper10jmp6sgh4cc6zt3e8gw05wavvejgr5pw4xyrql"; // Example validator

async function getDelegation() {
    try {
@@ -665,19 +665,19 @@ Queries unbonding delegation information.

```javascript Ethers.js expandable lines
import { ethers } from "ethers";

-// ABI definition for the function
+// ABI definition for the function
const precompileAbi = [
  "function unbondingDelegation(address delegatorAddress, string validatorAddress) view returns (tuple(string delegatorAddress, string validatorAddress, tuple(int64 creationHeight, int64 completionTime, uint256 initialBalance, uint256 balance, uint64 unbondingId, int64 unbondingOnHoldRefCount)[] entries) unbondingDelegation)"
];

-// Provider and contract setup
+// Provider and contract setup
const provider = new ethers.JsonRpcProvider("");
const precompileAddress = "0x0000000000000000000000000000000000000800";
const contract = new ethers.Contract(precompileAddress, precompileAbi, provider);

-// Inputs
-const delegatorAddress = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"; // Example delegator
-const validatorAddress = "cosmosvaloper10jmp6sgh4cc6zt3e8gw05wavvejgr5pw4xyrql"; // Example validator
+// Inputs
+const delegatorAddress = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"; // Example delegator
+const validatorAddress = "cosmosvaloper10jmp6sgh4cc6zt3e8gw05wavvejgr5pw4xyrql"; // Example validator

async function getUnbondingDelegation() {
    try {
@@ -712,10 +712,10 @@ Queries a specific redelegation.

```solidity Solidity expandable lines
-// SPDX-License-Identifier: MIT
+// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

-// Interface for the Staking precompile
+// Interface for the Staking precompile
interface IStaking {
    struct RedelegationEntry {
        int64 creationHeight;
@@ -778,7 +778,7 @@ contract StakingExample {
        entriesCount = redelegation.entries.length;
        hasActiveRedelegations = entriesCount > 0;

-        // Check if any redelegations are still active
+        // Check if any redelegations are still active
        for (uint256 i = 0; i < redelegation.entries.length; i++) {
            if (redelegation.entries[i].completionTime > int64(int256(block.timestamp))) {
                hasActiveRedelegations = true;
@@ -790,7 +790,7 @@ contract StakingExample {
        return (entriesCount, hasActiveRedelegations);
    }

-    // Check if redelegation is complete
+    // Check if redelegation is complete
    function isRedelegationComplete(RedelegationOutput memory redelegation) external view returns (bool) {
        for (uint i = 0; i < redelegation.entries.length; i++) {
            if (redelegation.entries[i].completionTime > int64(block.timestamp)) {
@@ -825,20 +825,20 @@ Queries redelegations with optional filters.
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function redelegations(address delegatorAddress, string srcValidatorAddress, string dstValidatorAddress, tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pageRequest) view returns (tuple(tuple(string delegatorAddress, string validatorSrcAddress, string validatorDstAddress, tuple(int64 creationHeight, int64 completionTime, uint256 initialBalance, uint256 sharesDst)[] entries) redelegation, tuple(tuple(int64 creationHeight, int64 completionTime, uint256 initialBalance, uint256 sharesDst) redelegationEntry, uint256 balance)[] entries)[] redelegations, tuple(bytes nextKey, uint64 total) pageResponse)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000800"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const delegatorAddress = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"; // Example delegator -const srcValidatorAddress = ""; // Empty string for all -const dstValidatorAddress = ""; // Empty string for all +/ Inputs +const delegatorAddress = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"; / Example delegator +const srcValidatorAddress = ""; / Empty string for all +const dstValidatorAddress = ""; / Empty string for all const pagination = { key: "0x", offset: 0, @@ -882,18 +882,18 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="Staking Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.17; import "../common/Types.sol"; -/// @dev The StakingI contract's address. +/ @dev The StakingI contract's address. 
address constant STAKING_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000800; -/// @dev The StakingI contract's instance. +/ @dev The StakingI contract's instance. StakingI constant STAKING_CONTRACT = StakingI(STAKING_PRECOMPILE_ADDRESS); -// BondStatus is the status of a validator. +/ BondStatus is the status of a validator. enum BondStatus { Unspecified, Unbonded, @@ -901,7 +901,7 @@ enum BondStatus { Bonded } -// Description contains a validator's description. +/ Description contains a validator's description. struct Description { string moniker; string identity; @@ -910,25 +910,25 @@ struct Description { string details; } -// CommissionRates defines the initial commission rates to be used for a validator +/ CommissionRates defines the initial commission rates to be used for a validator struct CommissionRates { uint256 rate; uint256 maxRate; uint256 maxChangeRate; } -// Commission defines a commission parameters for a given validator. +/ Commission defines a commission parameters for a given validator. struct Commission { CommissionRates commissionRates; uint256 updateTime; } -// Validator defines a validator, an account that can participate in consensus. +/ Validator defines a validator, an account that can participate in consensus. struct Validator { string operatorAddress; string consensusPubkey; bool jailed; - uint8 status; // BondStatus enum: 0=Unspecified, 1=Unbonded, 2=Unbonding, 3=Bonded + uint8 status; / BondStatus enum: 0=Unspecified, 1=Unbonded, 2=Unbonding, 3=Bonded uint256 tokens; uint256 delegatorShares; string description; @@ -938,24 +938,24 @@ struct Validator { uint256 minSelfDelegation; } -// Delegation represents the bond with tokens held by an account. It is -// owned by one delegator, and is associated with the voting power of one -// validator. +/ Delegation represents the bond with tokens held by an account. It is +/ owned by one delegator, and is associated with the voting power of one +/ validator. 
struct Delegation { address delegatorAddress; string validatorAddress; string shares; } -// UnbondingDelegation stores all of a single delegator's unbonding bonds -// for a single validator in an array. +/ UnbondingDelegation stores all of a single delegator's unbonding bonds +/ for a single validator in an array. struct UnbondingDelegation { address delegatorAddress; string validatorAddress; UnbondingDelegationEntry[] entries; } -// UnbondingDelegationEntry defines an unbonding object with relevant metadata. +/ UnbondingDelegationEntry defines an unbonding object with relevant metadata. struct UnbondingDelegationEntry { uint256 creationHeight; uint256 completionTime; @@ -963,7 +963,7 @@ struct UnbondingDelegationEntry { string balance; } -// RedelegationEntry defines a redelegation object with relevant metadata. +/ RedelegationEntry defines a redelegation object with relevant metadata. struct RedelegationEntry { uint256 creationHeight; uint256 completionTime; @@ -971,8 +971,8 @@ struct RedelegationEntry { string sharesDst; } -// Redelegation contains the list of a particular delegator's redelegating bonds -// from a particular source validator to a particular destination validator. +/ Redelegation contains the list of a particular delegator's redelegating bonds +/ from a particular source validator to a particular destination validator. struct Redelegation { address delegatorAddress; string validatorSrcAddress; @@ -980,36 +980,36 @@ struct Redelegation { RedelegationEntry[] entries; } -// DelegationResponse is equivalent to Delegation except that it contains a -// balance in addition to shares which is more suitable for client responses. +/ DelegationResponse is equivalent to Delegation except that it contains a +/ balance in addition to shares which is more suitable for client responses. 
struct DelegationResponse { Delegation delegation; Coin balance; } -// RedelegationEntryResponse is equivalent to a RedelegationEntry except that it -// contains a balance in addition to shares which is more suitable for client -// responses. +/ RedelegationEntryResponse is equivalent to a RedelegationEntry except that it +/ contains a balance in addition to shares which is more suitable for client +/ responses. struct RedelegationEntryResponse { RedelegationEntry redelegationEntry; string balance; } -// RedelegationResponse is equivalent to a Redelegation except that its entries -// contain a balance in addition to shares which is more suitable for client -// responses. +/ RedelegationResponse is equivalent to a Redelegation except that its entries +/ contain a balance in addition to shares which is more suitable for client +/ responses. struct RedelegationResponse { Redelegation redelegation; RedelegationEntryResponse[] entries; } -// Pool is used for tracking bonded and not-bonded token supply of the bond denomination. +/ Pool is used for tracking bonded and not-bonded token supply of the bond denomination. struct Pool { string notBondedTokens; string bondedTokens; } -// StakingParams defines the parameters for the staking module. +/ StakingParams defines the parameters for the staking module. 
struct Params { uint256 unbondingTime; uint256 maxValidators; @@ -1019,10 +1019,10 @@ struct Params { string minCommissionRate; } -/// @author The Evmos Core Team -/// @title Staking Precompile Contract -/// @dev The interface through which solidity contracts will interact with Staking -/// @custom:address 0x0000000000000000000000000000000000000800 +/ @author The Evmos Core Team +/ @title Staking Precompile Contract +/ @dev The interface through which solidity contracts will interact with Staking +/ @custom:address 0x0000000000000000000000000000000000000800 interface StakingI { event CreateValidator(string indexed validatorAddress, uint256 value); event EditValidator(string indexed validatorAddress); @@ -1031,7 +1031,7 @@ interface StakingI { event Redelegate(address indexed delegatorAddress, address indexed validatorSrcAddress, address indexed validatorDstAddress, uint256 amount, uint256 completionTime); event CancelUnbondingDelegation(address indexed delegatorAddress, address indexed validatorAddress, uint256 amount, uint256 creationHeight); - // Transactions + / Transactions function createValidator( Description calldata description, CommissionRates calldata commissionRates, @@ -1074,7 +1074,7 @@ interface StakingI { uint256 creationHeight ) external returns (bool); - // Queries + / Queries function validator( address validatorAddress ) external view returns (Validator memory); diff --git a/docs/evm/next/documentation/smart-contracts/precompiles/werc20.mdx b/docs/evm/next/documentation/smart-contracts/precompiles/werc20.mdx index 4720d3a9..8910cb51 100644 --- a/docs/evm/next/documentation/smart-contracts/precompiles/werc20.mdx +++ b/docs/evm/next/documentation/smart-contracts/precompiles/werc20.mdx @@ -51,19 +51,19 @@ For a comprehensive understanding of how single token representation works and i The ERC20 module creates a **unified token representation** that bridges native Cosmos tokens with ERC20 interfaces: ```go -// Simplified conceptual flow (not actual 
implementation) +/ Simplified conceptual flow (not actual implementation) func (k Keeper) ERC20Transfer(from, to common.Address, amount *big.Int) error { - // Convert EVM addresses to Cosmos addresses + / Convert EVM addresses to Cosmos addresses cosmosFrom := sdk.AccAddress(from.Bytes()) cosmosTo := sdk.AccAddress(to.Bytes()) - // Use bank module directly - no separate ERC20 state + / Use bank module directly - no separate ERC20 state coin := sdk.NewCoin(k.denom, sdk.NewIntFromBigInt(amount)) return k.bankKeeper.SendCoins(ctx, cosmosFrom, cosmosTo, sdk.Coins{coin}) } func (k Keeper) ERC20BalanceOf(account common.Address) *big.Int { - // Query bank module directly + / Query bank module directly cosmosAddr := sdk.AccAddress(account.Bytes()) balance := k.bankKeeper.GetBalance(ctx, cosmosAddr, k.denom) return balance.Amount.BigInt() @@ -75,17 +75,17 @@ func (k Keeper) ERC20BalanceOf(account common.Address) *big.Int { Since TEST and WTEST provide different interfaces to the same bank module token, deposit/withdraw functions exist for WETH interface compatibility: ```solidity -// These functions exist for WETH interface compatibility +/ These functions exist for WETH interface compatibility function deposit() external payable { - // Handles msg.value by sending received coins back to the caller - // Emits Deposit event for interface compatibility - // Your bank balance reflects the same amount accessible via ERC20 interface + / Handles msg.value by sending received coins back to the caller + / Emits Deposit event for interface compatibility + / Your bank balance reflects the same amount accessible via ERC20 interface } function withdraw(uint256 amount) external { - // No-op implementation that only emits Withdrawal event - // No actual token movement since bank balance is directly accessible - // Exists purely for WETH interface compatibility + / No-op implementation that only emits Withdrawal event + / No actual token movement since bank balance is directly accessible 
+ / Exists purely for WETH interface compatibility } ``` @@ -103,22 +103,22 @@ This is why `deposit()` and `withdraw()` are no-ops - there's no separate wrappe ### Real-World Example ```javascript -// User starts with 100 TEST in bank module +/ User starts with 100 TEST in bank module const testBalance = await bankPrecompile.balances(userAddress); -// Returns: [{denom: "atest", amount: "100000000000000000000"}] // 100 TEST (18 decimals) +/ Returns: [{denom: "atest", amount: "100000000000000000000"}] / 100 TEST (18 decimals) const wtestBalance = await wtest.balanceOf(userAddress); -// Returns: "100000000000000000000" // Same 100 TEST, accessed via ERC20 interface +/ Returns: "100000000000000000000" / Same 100 TEST, accessed via ERC20 interface -// User transfers 50 WTEST via ERC20 +/ User transfers 50 WTEST via ERC20 await wtest.transfer(recipientAddress, "50000000000000000000"); -// Check balances again +/ Check balances again const newTestBalance = await bankPrecompile.balances(userAddress); -// Returns: [{denom: "atest", amount: "50000000000000000000"}] // 50 TEST (18 decimals) remaining +/ Returns: [{denom: "atest", amount: "50000000000000000000"}] / 50 TEST (18 decimals) remaining const newWtestBalance = await wtest.balanceOf(userAddress); -// Returns: "50000000000000000000" // Same 50 TEST, both queries return identical values +/ Returns: "50000000000000000000" / Same 50 TEST, both queries return identical values ``` ## Methods @@ -174,13 +174,13 @@ Transfers tokens using the bank module (identical to native Cosmos transfer). 
```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; import "@openzeppelin/contracts/token/ERC20/IERC20.sol"; contract WERC20Example { - // WTEST contract address + / WTEST contract address address constant WTEST = 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE; IERC20 public immutable wtest; @@ -194,8 +194,8 @@ contract WERC20Example { require(to != address(0), "Invalid recipient"); require(amount > 0, "Amount must be greater than 0"); - // This directly moves tokens in the bank module - // No wrapping/unwrapping - same underlying token balance + / This directly moves tokens in the bank module + / No wrapping/unwrapping - same underlying token balance bool success = wtest.transfer(to, amount); require(success, "Transfer failed"); @@ -214,7 +214,7 @@ contract WERC20Example { return true; } - // Batch transfer example + / Batch transfer example function batchTransfer(address[] calldata recipients, uint256[] calldata amounts) external { require(recipients.length == amounts.length, "Arrays length mismatch"); @@ -238,7 +238,7 @@ const wtest = new ethers.Contract(wtestAddress, werc20Abi, signer); async function transferTokens() { try { const recipientAddress = "0x742d35Cc6634C0532925a3b844Bc9e7595f5b899"; - const amount = ethers.parseUnits("10.0", 18); // 10 TEST (18 decimals) + const amount = ethers.parseUnits("10.0", 18); / 10 TEST (18 decimals) const tx = await wtest.transfer(recipientAddress, amount); const receipt = await tx.wait(); @@ -325,18 +325,18 @@ const wtest = new ethers.Contract(wtestAddress, werc20Abi, signer); async function approveAndTransfer() { try { const spenderAddress = "0x742d35Cc6634C0532925a3b844Bc9e7595f5b899"; - const amount = ethers.parseUnits("50.0", 18); // 50 TEST (18 decimals) + const amount = ethers.parseUnits("50.0", 18); / 50 TEST (18 decimals) - // Approve spending + / Approve spending const approveTx = await wtest.approve(spenderAddress, amount); await 
approveTx.wait(); - // Check allowance + / Check allowance const allowance = await wtest.allowance(signer.address, spenderAddress); console.log("Allowance:", allowance.toString()); - // Transfer from (would be called by spender) - // const transferTx = await wtest.transferFrom(ownerAddress, recipientAddress, amount); + / Transfer from (would be called by spender) + / const transferTx = await wtest.transferFrom(ownerAddress, recipientAddress, amount); } catch (error) { console.error("Error:", error); } @@ -430,10 +430,10 @@ This function receives msg.value and immediately sends the coins back to the cal ```solidity Solidity expandable lines -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; -// Interface for WERC20 precompile +/ Interface for WERC20 precompile interface IWERC20 { event Deposit(address indexed dst, uint256 wad); event Withdrawal(address indexed src, uint256 wad); @@ -455,18 +455,18 @@ contract WERC20Example { function depositToWTEST() external payable { require(msg.value > 0, "Must send tokens to deposit"); - // Get balance before deposit + / Get balance before deposit uint256 balanceBefore = wtest.balanceOf(msg.sender); - // WERC20 deposit is a no-op for compatibility - // Your native token balance is immediately accessible as WTEST + / WERC20 deposit is a no-op for compatibility + / Your native token balance is immediately accessible as WTEST wtest.deposit{value: msg.value}(); - // Verify balance is now accessible via WTEST interface + / Verify balance is now accessible via WTEST interface uint256 balanceAfter = wtest.balanceOf(msg.sender); - // Both TEST and WTEST balances reflect the same bank module amount - // No actual wrapping occurred - same token, different interface + / Both TEST and WTEST balances reflect the same bank module amount + / No actual wrapping occurred - same token, different interface require(balanceAfter >= balanceBefore, "Deposit processed"); } @@ -474,14 +474,14 @@ contract 
WERC20Example { require(amount > 0, "Amount must be greater than 0"); require(wtest.balanceOf(msg.sender) >= amount, "Insufficient balance"); - // WERC20 withdraw is a no-op that emits event for compatibility - // Your bank balance remains accessible as both native TEST and WTEST + / WERC20 withdraw is a no-op that emits event for compatibility + / Your bank balance remains accessible as both native TEST and WTEST wtest.withdraw(amount); - // Tokens are still in bank module and accessible both ways + / Tokens are still in bank module and accessible both ways } - // Helper function to demonstrate balance consistency + / Helper function to demonstrate balance consistency function checkBalanceConsistency(address user) external view returns ( uint256 wtestBalance, string memory explanation @@ -492,19 +492,19 @@ contract WERC20Example { return (wtestBalance, explanation); } - // Example DeFi integration showing no wrapping needed + / Example DeFi integration showing no wrapping needed function addLiquidityWithDeposit() external payable { require(msg.value > 0, "Must send tokens"); - // Deposit via WERC20 interface (compatibility no-op) + / Deposit via WERC20 interface (compatibility no-op) wtest.deposit{value: msg.value}(); - // Your tokens are now accessible as WTEST for DeFi protocols - // No additional steps needed - same token, ERC20 interface available + / Your tokens are now accessible as WTEST for DeFi protocols + / No additional steps needed - same token, ERC20 interface available uint256 availableForDeFi = wtest.balanceOf(msg.sender); - // Use in DeFi protocols immediately - // wtest.transfer(defiProtocolAddress, availableForDeFi); + / Use in DeFi protocols immediately + / wtest.transfer(defiProtocolAddress, availableForDeFi); } } ``` @@ -529,85 +529,85 @@ contract LiquidityPool { } function addLiquidity(uint256 amount) external { - // This transfers from the user's bank balance + / This transfers from the user's bank balance WTEST.transferFrom(msg.sender, 
address(this), amount); - // Pool now has tokens in its bank balance - // No wrapping/unwrapping needed - it's all the same token! + / Pool now has tokens in its bank balance + / No wrapping/unwrapping needed - it's all the same token! } function removeLiquidity(uint256 amount) external { - // This transfers back to user's bank balance + / This transfers back to user's bank balance WTEST.transfer(msg.sender, amount); - // User can now use these tokens as native TEST - // or continue using WTEST interface - both access same balance + / User can now use these tokens as native TEST + / or continue using WTEST interface - both access same balance } } ``` ### Cross-Interface Balance Verification ```javascript -// Verify that both interfaces show the same balance +/ Verify that both interfaces show the same balance async function verifyBalanceConsistency(userAddress) { - // Query via bank precompile (native interface) + / Query via bank precompile (native interface) const bankBalance = await bankContract.balances(userAddress); const testAmount = bankBalance.find(b => b.denom === "test")?.amount || "0"; - // Query via WERC20 precompile (ERC20 interface) + / Query via WERC20 precompile (ERC20 interface) const wtestAmount = await wtest.balanceOf(userAddress); - // These will always be equal since the ERC20 balance is just - // an abstracted bank module balance query + / These will always be equal since the ERC20 balance is just + / an abstracted bank module balance query console.log(`Consistent balance: ${testAmount} (both TEST and WTEST)`); } ``` ### Working with IBC Tokens ```javascript -// IBC tokens work exactly the same way -const ibcTokenAddress = "0x..."; // Each IBC token gets its own WERC20 address +/ IBC tokens work exactly the same way +const ibcTokenAddress = "0x..."; / Each IBC token gets its own WERC20 address const ibcToken = new ethers.Contract(ibcTokenAddress, werc20Abi, signer); -// Check balance (same as bank module balance) +/ Check balance (same as bank 
module balance) const balance = await ibcToken.balanceOf(userAddress); -// Transfer IBC tokens via ERC20 interface +/ Transfer IBC tokens via ERC20 interface await ibcToken.transfer(recipientAddress, amount); -// Use in DeFi protocols just like any ERC20 token +/ Use in DeFi protocols just like any ERC20 token await defiProtocol.stake(ibcTokenAddress, amount); ``` ## Solidity Interface & ABI ```solidity title="WERC20 Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.18; import "@openzeppelin/contracts/token/ERC20/IERC20.sol"; -/// @title WERC20 Precompile Contract -/// @dev Provides ERC20 interface to native Cosmos tokens via bank module -/// @notice This is NOT a traditional wrapped token - both native and ERC20 interfaces access the same balance +/ @title WERC20 Precompile Contract +/ @dev Provides ERC20 interface to native Cosmos tokens via bank module +/ @notice This is NOT a traditional wrapped token - both native and ERC20 interfaces access the same balance interface IWERC20 is IERC20 { - /// @dev Emitted when deposit() is called (no-op for compatibility) - /// @param dst The address that called deposit - /// @param wad The amount specified (though no conversion occurs) + / @dev Emitted when deposit() is called (no-op for compatibility) + / @param dst The address that called deposit + / @param wad The amount specified (though no conversion occurs) event Deposit(address indexed dst, uint256 wad); - /// @dev Emitted when withdraw() is called (no-op for compatibility) - /// @param src The address that called withdraw - /// @param wad The amount specified (though no conversion occurs) + / @dev Emitted when withdraw() is called (no-op for compatibility) + / @param src The address that called withdraw + / @param wad The amount specified (though no conversion occurs) event Withdrawal(address indexed src, uint256 wad); - /// @dev No-op function for WETH compatibility - 
native tokens automatically update balance - /// @notice This function exists for interface compatibility but performs no conversion + / @dev No-op function for WETH compatibility - native tokens automatically update balance + / @notice This function exists for interface compatibility but performs no conversion function deposit() external payable; - /// @dev No-op function for WETH compatibility - native tokens always accessible - /// @param wad Amount to "withdraw" (no conversion performed) - /// @notice This function exists for interface compatibility but performs no conversion + / @dev No-op function for WETH compatibility - native tokens always accessible + / @param wad Amount to "withdraw" (no conversion performed) + / @notice This function exists for interface compatibility but performs no conversion function withdraw(uint256 wad) external; } ``` diff --git a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/create2.mdx b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/create2.mdx index eb98b146..5cd00c94 100644 --- a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/create2.mdx +++ b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/create2.mdx @@ -41,18 +41,18 @@ import { ethers } from "ethers"; const provider = new ethers.JsonRpcProvider("YOUR_RPC_URL"); const signer = new ethers.Wallet("YOUR_PRIVATE_KEY", provider); -// Create2 Factory address +/ Create2 Factory address const CREATE2_FACTORY = "0x4e59b44847b379578588920ca78fbf26c0b4956c"; -// Your contract bytecode (including constructor args) -const bytecode = "0x608060405234801561001057600080fd5b50..."; // Your compiled bytecode +/ Your contract bytecode (including constructor args) +const bytecode = "0x608060405234801561001057600080fd5b50..."; / Your compiled bytecode -// Choose a salt (32 bytes) -const salt = ethers.id("my-unique-salt-v1"); // Or use ethers.randomBytes(32) +/ Choose a salt (32 bytes) +const salt = 
ethers.id("my-unique-salt-v1"); / Or use ethers.randomBytes(32) -// Deploy using Create2 +/ Deploy using Create2 async function deployWithCreate2() { - // Compute the deployment address + / Compute the deployment address const deployAddress = ethers.getCreate2Address( CREATE2_FACTORY, salt, @@ -61,11 +61,11 @@ async function deployWithCreate2() { console.log("Contract will be deployed to:", deployAddress); - // Send deployment transaction + / Send deployment transaction const tx = await signer.sendTransaction({ to: CREATE2_FACTORY, - data: salt + bytecode.slice(2), // Concatenate salt and bytecode - gasLimit: 3000000, // Adjust based on your contract + data: salt + bytecode.slice(2), / Concatenate salt and bytecode + gasLimit: 3000000, / Adjust based on your contract }); console.log("Deployment tx:", tx.hash); @@ -79,7 +79,7 @@ deployWithCreate2(); ``` ```solidity "Solidity Create2 Deployer Contract" expandable -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract Create2Deployer { @@ -89,14 +89,14 @@ contract Create2Deployer { bytes32 salt, bytes memory bytecode ) external returns (address) { - // Prepare deployment data + / Prepare deployment data bytes memory deploymentData = abi.encodePacked(salt, bytecode); - // Deploy via Create2 factory + / Deploy via Create2 factory (bool success, bytes memory result) = CREATE2_FACTORY.call(deploymentData); require(success, "Create2 deployment failed"); - // Extract deployed address from return data + / Extract deployed address from return data address deployed = abi.decode(result, (address)); return deployed; } @@ -138,7 +138,7 @@ function computeCreate2Address(salt, bytecode) { return address; } -// Example usage +/ Example usage const salt = ethers.id("my-deployment-v1"); const bytecode = "0x608060405234801561001057600080fd5b50..."; const futureAddress = computeCreate2Address(salt, bytecode); @@ -152,7 +152,7 @@ console.log("Contract will deploy to:", futureAddress); Deploy 
contracts to the same address across multiple chains: ```javascript const salt = ethers.id("myapp-v1.0.0"); - // Same salt + bytecode = same address on all chains + / Same salt + bytecode = same address on all chains ``` @@ -176,7 +176,7 @@ console.log("Contract will deploy to:", futureAddress); ```solidity function deployChild(uint256 nonce) external { bytes32 salt = keccak256(abi.encode(msg.sender, nonce)); - // Deploy with predictable address + / Deploy with predictable address } ``` @@ -187,7 +187,7 @@ console.log("Contract will deploy to:", futureAddress); The Create2 factory contract is extremely minimal: ```assembly -// Entire contract bytecode (45 bytes) +/ Entire contract bytecode (45 bytes) 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe03601600081602082378035828234f58015156039578182fd5b8082525050506014600cf3 ``` @@ -210,17 +210,17 @@ This assembly code: Choose salts carefully for your use case: ```javascript "Salt Selection Strategies" expandable -// Sequential salts for multiple instances +/ Sequential salts for multiple instances const salt1 = ethers.solidityPacked(["uint256"], [1]); const salt2 = ethers.solidityPacked(["uint256"], [2]); -// User-specific salts +/ User-specific salts const userSalt = ethers.solidityPacked(["address", "uint256"], [userAddress, nonce]); -// Version-based salts +/ Version-based salts const versionSalt = ethers.id("v1.2.3"); -// Random salts for uniqueness +/ Random salts for uniqueness const randomSalt = ethers.randomBytes(32); ``` @@ -240,17 +240,6 @@ const randomSalt = ethers.randomBytes(32); | Gas Cost | Baseline | ~32,000 gas overhead | | Use Case | Standard deployments | Deterministic deployments | -## Troubleshooting - -Common issues and solutions: - -| Issue | Solution | -|-------|----------| -| "Create2 deployment failed" | Ensure sufficient gas and correct bytecode format | -| Address mismatch | Verify salt and bytecode are identical to computation | -| Contract already deployed | CREATE2 
can't deploy to the same address twice | -| Invalid bytecode | Ensure bytecode includes constructor arguments if needed | - ## Example: Multi-chain Token Deployment Deploy an ERC20 token to the same address across multiple chains: @@ -259,10 +248,10 @@ Deploy an ERC20 token to the same address across multiple chains: import { ethers } from "ethers"; async function deployTokenMultichain(chains) { - const bytecode = "0x..."; // ERC20 bytecode with constructor args + const bytecode = "0x..."; / ERC20 bytecode with constructor args const salt = ethers.id("MyToken-v1.0.0"); - // Compute address (same on all chains) + / Compute address (same on all chains) const tokenAddress = ethers.getCreate2Address( "0x4e59b44847b379578588920ca78fbf26c0b4956c", salt, @@ -271,19 +260,19 @@ async function deployTokenMultichain(chains) { console.log("Token will deploy to:", tokenAddress); - // Deploy on each chain + / Deploy on each chain for (const chain of chains) { const provider = new ethers.JsonRpcProvider(chain.rpc); const signer = new ethers.Wallet(privateKey, provider); - // Check if already deployed + / Check if already deployed const code = await provider.getCode(tokenAddress); if (code !== "0x") { console.log(`Already deployed on ${chain.name}`); continue; } - // Deploy + / Deploy const tx = await signer.sendTransaction({ to: "0x4e59b44847b379578588920ca78fbf26c0b4956c", data: salt + bytecode.slice(2), diff --git a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/multicall3.mdx b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/multicall3.mdx index c0f712a2..d3463dd4 100644 --- a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/multicall3.mdx +++ b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/multicall3.mdx @@ -64,29 +64,29 @@ import { ethers } from "ethers"; const provider = new ethers.JsonRpcProvider("YOUR_RPC_URL"); const MULTICALL3 = "0xcA11bde05977b3631167028862bE2a173976CA11"; -// Multicall3 ABI +/ 
Multicall3 ABI const multicallAbi = [ "function aggregate3(tuple(address target, bool allowFailure, bytes callData)[] calls) returns (tuple(bool success, bytes returnData)[] returnData)" ]; const multicall = new ethers.Contract(MULTICALL3, multicallAbi, provider); -// Example: Read multiple token balances +/ Example: Read multiple token balances async function getMultipleBalances(tokenAddress, accounts) { const erc20Abi = ["function balanceOf(address) view returns (uint256)"]; const iface = new ethers.Interface(erc20Abi); - // Prepare calls + / Prepare calls const calls = accounts.map(account => ({ target: tokenAddress, allowFailure: false, callData: iface.encodeFunctionData("balanceOf", [account]) })); - // Execute multicall + / Execute multicall const results = await multicall.aggregate3(calls); - // Decode results + / Decode results const balances = results.map((result, i) => { if (result.success) { return { @@ -102,7 +102,7 @@ async function getMultipleBalances(tokenAddress, accounts) { ``` ```solidity "Solidity Multi-Token Balance Reader" expandable -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; interface IMulticall3 { @@ -128,7 +128,7 @@ contract MultiTokenReader { address[] calldata tokens, address account ) external returns (uint256[] memory balances) { - IMulticall3.Call3[] memory calls = new IMulticall3.Call3[](tokens.length); + IMulticall3.Call3[] memory calls = new IMulticall3.Call3[](tokens.length); for (uint i = 0; i < tokens.length; i++) { calls[i] = IMulticall3.Call3({ @@ -139,7 +139,7 @@ contract MultiTokenReader { } IMulticall3.Result[] memory results = multicall.aggregate3(calls); - balances = new uint256[](results.length); + balances = new uint256[](results.length); for (uint i = 0; i < results.length; i++) { if (results[i].success) { @@ -156,28 +156,28 @@ contract
MultiTokenReader { Combine multiple DeFi operations atomically: ```javascript "Complex DeFi Swap and Stake Operation" expandable -// Swap and stake in one transaction +/ Swap and stake in one transaction async function swapAndStake(tokenIn, tokenOut, amountIn, poolAddress) { const calls = [ - // 1. Approve router + / 1. Approve router { target: tokenIn, allowFailure: false, callData: iface.encodeFunctionData("approve", [router, amountIn]) }, - // 2. Perform swap + / 2. Perform swap { target: router, allowFailure: false, callData: iface.encodeFunctionData("swap", [tokenIn, tokenOut, amountIn]) }, - // 3. Approve staking + / 3. Approve staking { target: tokenOut, allowFailure: false, callData: iface.encodeFunctionData("approve", [poolAddress, amountOut]) }, - // 4. Stake tokens + / 4. Stake tokens { target: poolAddress, allowFailure: false, @@ -196,14 +196,14 @@ Get consistent protocol state in one call: ```javascript async function getProtocolState(protocol) { const calls = [ - { target: protocol, callData: "0x18160ddd", allowFailure: false }, // totalSupply() - { target: protocol, callData: "0x313ce567", allowFailure: false }, // decimals() - { target: protocol, callData: "0x06fdde03", allowFailure: false }, // name() - { target: protocol, callData: "0x95d89b41", allowFailure: false }, // symbol() + { target: protocol, callData: "0x18160ddd", allowFailure: false }, / totalSupply() + { target: protocol, callData: "0x313ce567", allowFailure: false }, / decimals() + { target: protocol, callData: "0x06fdde03", allowFailure: false }, / name() + { target: protocol, callData: "0x95d89b41", allowFailure: false }, / symbol() ]; const results = await multicall.aggregate3(calls); - // All values from the same block + / All values from the same block return decodeResults(results); } ``` @@ -231,26 +231,26 @@ const calls = [ ]; await multicall.aggregate3Value(calls, { - value: ethers.parseEther("1.5") // Total ETH to send + value: ethers.parseEther("1.5") / Total ETH to send 
}); ``` ### Error Handling Strategies ```javascript "Error Handling Strategies" expandable -// Strict mode - revert if any call fails +/ Strict mode - revert if any call fails const strictCalls = calls.map(call => ({ ...call, allowFailure: false })); -// Permissive mode - continue even if some fail +/ Permissive mode - continue even if some fail const permissiveCalls = calls.map(call => ({ ...call, allowFailure: true })); -// Mixed mode - critical calls must succeed +/ Mixed mode - critical calls must succeed const mixedCalls = [ { ...criticalCall, allowFailure: false }, { ...optionalCall, allowFailure: true } @@ -314,14 +314,6 @@ Popular libraries with Multicall3 support: - **viem**: TypeScript alternative to ethers - **wagmi**: React hooks for Ethereum -## Troubleshooting - -| Issue | Solution | -|-------|----------| -| "Multicall3: call failed" | Check individual call success flags | -| Gas estimation failure | Increase gas limit or reduce batch size | -| Unexpected revert | One of the calls with `allowFailure: false` failed | -| Value mismatch | Ensure total value sent matches sum of individual values | ## Further Reading diff --git a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/overview.mdx b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/overview.mdx index 52e65a52..a6512f42 100644 --- a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/overview.mdx +++ b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/overview.mdx @@ -13,10 +13,10 @@ These contracts are included in `evmtypes.DefaultPreinstalls` and can be deploye | Contract | Address | Purpose | Documentation | |----------|---------|---------|---------------| -| **Create2** | `0x4e59b44847b379578588920ca78fbf26c0b4956c` | Deterministic contract deployment using CREATE2 opcode | [Details](./create2) | -| **Multicall3** | `0xcA11bde05977b3631167028862bE2a173976CA11` | Batch multiple contract calls in a single transaction | 
[Details](./multicall3) | -| **Permit2** | `0x000000000022D473030F116dDEE9F6B43aC78BA3` | Token approval and transfer management with signatures | [Details](./permit2) | -| **Safe Singleton Factory** | `0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7` | Deploy Safe multisig wallets at deterministic addresses | [Details](./safe-factory) | +| **Create2** | `0x4e59b44847b379578588920ca78fbf26c0b4956c` | Deterministic contract deployment using CREATE2 opcode | [Details](/docs/evm/next/documentation/smart-contracts/predeployed-contracts/create2) | +| **Multicall3** | `0xcA11bde05977b3631167028862bE2a173976CA11` | Batch multiple contract calls in a single transaction | [Details](/docs/evm/next/documentation/smart-contracts/predeployed-contracts/multicall3) | +| **Permit2** | `0x000000000022D473030F116dDEE9F6B43aC78BA3` | Token approval and transfer management with signatures | [Details](/docs/evm/next/documentation/smart-contracts/predeployed-contracts/permit2) | +| **Safe Singleton Factory** | `0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7` | Deploy Safe multisig wallets at deterministic addresses | [Details](/docs/evm/next/documentation/smart-contracts/predeployed-contracts/safe-factory) | Additional pre-deployable contracts can be incorporated into your project in a similar way, given that any dependencies are met. 
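Since the table above gives fixed addresses, a client can check whether a given chain actually ships these predeploys before relying on them. A minimal sketch of that check, using a stubbed `provider` object in place of a real ethers `JsonRpcProvider` (the `detectPredeploys` helper and the stub are illustrative, not part of any SDK; the addresses are the well-known ones from the table):

```javascript
// Well-known predeploy addresses (from the table above).
const PREDEPLOYS = {
  Create2: "0x4e59b44847b379578588920ca78fbf26c0b4956c",
  Multicall3: "0xcA11bde05977b3631167028862bE2a173976CA11",
};

// Returns, for each named predeploy, whether code exists at its address.
// `provider` only needs a getCode(address) -> hex string method, the same
// shape as ethers' JsonRpcProvider; "0x" means no contract is deployed.
async function detectPredeploys(provider, predeploys = PREDEPLOYS) {
  const found = {};
  for (const [name, address] of Object.entries(predeploys)) {
    const code = await provider.getCode(address);
    found[name] = code !== undefined && code !== "0x";
  }
  return found;
}

// Stub provider for illustration: pretends only Multicall3 is deployed.
const stubProvider = {
  async getCode(address) {
    return address === PREDEPLOYS.Multicall3 ? "0x60806040" : "0x";
  },
};

detectPredeploys(stubProvider).then((found) => {
  console.log(found); // { Create2: false, Multicall3: true }
});
```

Against a live node, the same function works unchanged with `new ethers.JsonRpcProvider(rpcUrl)` as the provider.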
@@ -24,7 +24,7 @@ Additional pre-deployable contracts can be incorporated into your project in a s ## Learn More - [Implementation](/docs/evm/next/documentation/integration/predeployed-contracts-integration) - Activate these contracts for your project -- [Create2](./create2) - Deterministic deployment factory documentation -- [Multicall3](./multicall3) - Batch operations contract documentation -- [Permit2](./permit2) - Advanced token approvals documentation -- [Safe Factory](./safe-factory) - Multisig wallet factory documentation \ No newline at end of file +- [Create2](/docs/evm/next/documentation/smart-contracts/predeployed-contracts/create2) - Deterministic deployment factory documentation +- [Multicall3](/docs/evm/next/documentation/smart-contracts/predeployed-contracts/multicall3) - Batch operations contract documentation +- [Permit2](/docs/evm/next/documentation/smart-contracts/predeployed-contracts/permit2) - Advanced token approvals documentation +- [Safe Factory](/docs/evm/next/documentation/smart-contracts/predeployed-contracts/safe-factory) - Multisig wallet factory documentation \ No newline at end of file diff --git a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/permit2.mdx b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/permit2.mdx index c528571d..b9eebccc 100644 --- a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/permit2.mdx +++ b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/permit2.mdx @@ -36,14 +36,14 @@ Permit2 is a universal token approval contract that enables signature-based appr ### Allowance Transfer ```solidity -// Grant permission via signature +/ Grant permission via signature function permit( address owner, PermitSingle memory permitSingle, bytes calldata signature ) external -// Transfer with existing permission +/ Transfer with existing permission function transferFrom( address from, address to, @@ -55,7 +55,7 @@ function transferFrom( ### Signature 
Transfer ```solidity -// One-time transfer via signature +/ One-time transfer via signature function permitTransferFrom( PermitTransferFrom memory permit, SignatureTransferDetails calldata transferDetails, @@ -75,7 +75,7 @@ import { AllowanceProvider, AllowanceTransfer } from "@uniswap/permit2-sdk"; const PERMIT2_ADDRESS = "0x000000000022D473030F116dDEE9F6B43aC78BA3"; -// Step 1: User approves Permit2 for token (one-time) +/ Step 1: User approves Permit2 for token (one-time) async function approvePermit2(tokenContract, signer) { const tx = await tokenContract.approve( PERMIT2_ADDRESS, @@ -85,33 +85,33 @@ async function approvePermit2(tokenContract, signer) { console.log("Permit2 approved for token"); } -// Step 2: Create and sign permit +/ Step 2: Create and sign permit async function createPermit(token, spender, amount, deadline, signer) { const permit = { details: { token: token, amount: amount, expiration: deadline, - nonce: 0, // Get current nonce from contract + nonce: 0, / Get current nonce from contract }, spender: spender, sigDeadline: deadline, }; - // Create permit data + / Create permit data const { domain, types, values } = AllowanceTransfer.getPermitData( permit, PERMIT2_ADDRESS, await signer.provider.getNetwork().then(n => n.chainId) ); - // Sign permit + / Sign permit const signature = await signer._signTypedData(domain, types, values); return { permit, signature }; } -// Step 3: Execute transfer with permit +/ Step 3: Execute transfer with permit async function transferWithPermit(permit, signature, transferDetails) { const permit2 = new ethers.Contract( PERMIT2_ADDRESS, @@ -119,14 +119,14 @@ async function transferWithPermit(permit, signature, transferDetails) { signer ); - // First, submit the permit + / First, submit the permit await permit2.permit( signer.address, permit, signature ); - // Then transfer + / Then transfer await permit2.transferFrom( transferDetails.from, transferDetails.to, @@ -137,7 +137,7 @@ async function 
transferWithPermit(permit, signature, transferDetails) { ``` ```solidity "Solidity Permit2 Integration Contract" expandable -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; interface IPermit2 { @@ -184,10 +184,10 @@ contract Permit2Integration { IPermit2.PermitSingle calldata permit, bytes calldata signature ) external { - // Submit permit + / Submit permit permit2.permit(from, permit, signature); - // Execute transfer + / Execute transfer permit2.transferFrom(from, to, amount, token); } } @@ -203,7 +203,7 @@ async function batchTransferWithPermit2(transfers, owner, signer) { const permits = []; const signatures = []; - // Prepare batch permits + / Prepare batch permits for (const transfer of transfers) { const permit = { details: { @@ -221,7 +221,7 @@ async function batchTransferWithPermit2(transfers, owner, signer) { signatures.push(signature); } - // Execute batch + / Execute batch const permit2 = new ethers.Contract(PERMIT2_ADDRESS, abi, signer); await permit2.permitBatch(owner, permits, signatures); } @@ -232,9 +232,9 @@ async function batchTransferWithPermit2(transfers, owner, signer) { Enable gasless token approvals using meta-transactions: ```javascript "Gasless Approval via Meta-transactions" expandable -// User signs permit off-chain +/ User signs permit off-chain async function createGaslessPermit(token, spender, amount, signer) { - const deadline = Math.floor(Date.now() / 1000) + 3600; // 1 hour + const deadline = Math.floor(Date.now() / 1000) + 3600; / 1 hour const permit = { details: { @@ -249,7 +249,7 @@ async function createGaslessPermit(token, spender, amount, signer) { const signature = await signPermit(permit, signer); - // Return data for relayer + / Return data for relayer return { permit, signature, @@ -257,7 +257,7 @@ async function createGaslessPermit(token, spender, amount, signer) { }; } -// Relayer submits transaction +/ Relayer submits transaction async function relayPermit(permitData, 
relayerSigner) { const permit2 = new ethers.Contract(PERMIT2_ADDRESS, abi, relayerSigner); @@ -283,7 +283,7 @@ struct PermitWitnessTransferFrom { address spender; uint256 nonce; uint256 deadline; - bytes32 witness; // Custom data hash + bytes32 witness; / Custom data hash } ``` @@ -292,10 +292,10 @@ struct PermitWitnessTransferFrom { Permit2 uses unordered nonces for flexibility: ```javascript -// Invalidate specific nonces +/ Invalidate specific nonces await permit2.invalidateNonces(token, spender, newNonce); -// Invalidate nonce range +/ Invalidate nonce range await permit2.invalidateUnorderedNonces(wordPos, mask); ``` @@ -305,10 +305,10 @@ Set appropriate expiration times: ```javascript const expirations = { - shortTerm: Math.floor(Date.now() / 1000) + 300, // 5 minutes - standard: Math.floor(Date.now() / 1000) + 3600, // 1 hour - longTerm: Math.floor(Date.now() / 1000) + 86400, // 1 day - maximum: 2n ** 48n - 1n, // Max allowed + shortTerm: Math.floor(Date.now() / 1000) + 300, / 5 minutes + standard: Math.floor(Date.now() / 1000) + 3600, / 1 hour + longTerm: Math.floor(Date.now() / 1000) + 86400, / 1 day + maximum: 2n ** 48n - 1n, / Max allowed }; ``` @@ -340,7 +340,7 @@ contract DEXWithPermit2 { PermitSingle calldata permit, bytes calldata signature ) external { - // Get tokens via permit + / Get tokens via permit permit2.permit(msg.sender, permit, signature); permit2.transferFrom( msg.sender, @@ -349,7 +349,7 @@ contract DEXWithPermit2 { permit.details.token ); - // Execute swap + / Execute swap _performSwap(params); } } @@ -360,12 +360,12 @@ contract DEXWithPermit2 { ```javascript "Payment Processor with Permit2" expandable class PaymentProcessor { async processPayment(order, permit, signature) { - // Verify order details + / Verify order details if (!this.verifyOrder(order)) { throw new Error("Invalid order"); } - // Process payment via Permit2 + / Process payment via Permit2 await this.permit2.permitTransferFrom( permit, { @@ -376,7 +376,7 @@ class 
PaymentProcessor { signature ); - // Fulfill order + / Fulfill order await this.fulfillOrder(order); } } diff --git a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/safe-factory.mdx b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/safe-factory.mdx index cd6e9c6a..3bbeff65 100644 --- a/docs/evm/next/documentation/smart-contracts/predeployed-contracts/safe-factory.mdx +++ b/docs/evm/next/documentation/smart-contracts/predeployed-contracts/safe-factory.mdx @@ -48,7 +48,7 @@ import { SafeFactory } from "@safe-global/safe-core-sdk"; const FACTORY_ADDRESS = "0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7"; -// Using Safe SDK +/ Using Safe SDK async function deploySafeWallet(owners, threshold, signer) { const safeFactory = await SafeFactory.create({ ethAdapter: new EthersAdapter({ ethers, signer }), @@ -58,18 +58,18 @@ async function deploySafeWallet(owners, threshold, signer) { const safeAccountConfig = { owners: owners, threshold: threshold, - // Optional parameters - fallbackHandler: "0x...", // Fallback handler address + / Optional parameters + fallbackHandler: "0x...", / Fallback handler address paymentToken: ethers.ZeroAddress, payment: 0, paymentReceiver: ethers.ZeroAddress }; - // Predict address before deployment + / Predict address before deployment const predictedAddress = await safeFactory.predictSafeAddress(safeAccountConfig); console.log("Safe will be deployed to:", predictedAddress); - // Deploy the Safe + / Deploy the Safe const safeSdk = await safeFactory.deploySafe({ safeAccountConfig }); const safeAddress = await safeSdk.getAddress(); @@ -77,7 +77,7 @@ async function deploySafeWallet(owners, threshold, signer) { return safeSdk; } -// Manual deployment without SDK +/ Manual deployment without SDK async function deployManually(signer) { const factory = new ethers.Contract( FACTORY_ADDRESS, @@ -85,22 +85,22 @@ async function deployManually(signer) { signer ); - // Prepare Safe proxy bytecode with initialization - const 
proxyBytecode = "0x..."; // Safe proxy bytecode + / Prepare Safe proxy bytecode with initialization + const proxyBytecode = "0x..."; / Safe proxy bytecode const salt = ethers.id("my-safe-v1"); - // Deploy + / Deploy const tx = await factory.deploy(proxyBytecode, salt); const receipt = await tx.wait(); - // Get deployed address from events + / Get deployed address from events const deployedAddress = receipt.logs[0].address; return deployedAddress; } ``` ```solidity "Solidity Safe Deployer Contract" expandable -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; interface ISafeFactory { @@ -118,28 +118,28 @@ contract SafeDeployer { bytes memory initializer, uint256 saltNonce ) external returns (address safe) { - // Prepare proxy creation code + / Prepare proxy creation code bytes memory deploymentData = abi.encodePacked( getProxyCreationCode(), uint256(uint160(safeSingleton)) ); - // Calculate salt + / Calculate salt bytes32 salt = keccak256(abi.encodePacked( keccak256(initializer), saltNonce )); - // Deploy via factory + / Deploy via factory safe = factory.deploy(deploymentData, salt); - // Initialize the Safe + / Initialize the Safe (bool success,) = safe.call(initializer); require(success, "Safe initialization failed"); } function getProxyCreationCode() pure returns (bytes memory) { - // Safe proxy bytecode + / Safe proxy bytecode return hex"608060405234801561001057600080fd5b50..."; } } @@ -185,19 +185,19 @@ async function deployCustomSafe(config, signer) { fallbackHandler } = config; - // Encode initialization data + / Encode initialization data const setupData = safeSingleton.interface.encodeFunctionData("setup", [ owners, threshold, - ethers.ZeroAddress, // to - "0x", // data + ethers.ZeroAddress, / to + "0x", / data fallbackHandler, - ethers.ZeroAddress, // paymentToken - 0, // payment - ethers.ZeroAddress // paymentReceiver + ethers.ZeroAddress, / paymentToken + 0, / payment + ethers.ZeroAddress / paymentReceiver ]); - // 
Deploy proxy pointing to singleton + / Deploy proxy pointing to singleton const proxyFactory = new ethers.Contract( PROXY_FACTORY_ADDRESS, proxyFactoryAbi, @@ -207,13 +207,13 @@ async function deployCustomSafe(config, signer) { const tx = await proxyFactory.createProxyWithNonce( SAFE_SINGLETON_ADDRESS, setupData, - Date.now() // saltNonce + Date.now() / saltNonce ); const receipt = await tx.wait(); const safeAddress = getSafeAddressFromReceipt(receipt); - // Enable modules if specified + / Enable modules if specified for (const module of modules) { await enableModule(safeAddress, module, signer); } @@ -237,7 +237,7 @@ async function deployOrgTreasury(orgId, chains) { const provider = new ethers.JsonRpcProvider(chain.rpc); const signer = new ethers.Wallet(deployerKey, provider); - // Same salt = same address on all chains + / Same salt = same address on all chains const address = await deploySafeWithSalt(salt, signer); results[chain.name] = address; } @@ -259,16 +259,16 @@ class CounterfactualSafe { } predictAddress() { - // Calculate address without deploying + / Calculate address without deploying return predictSafeAddress( this.owners, this.threshold, - 0 // saltNonce + 0 / saltNonce ); } async deploy(signer) { - // Only deploy when needed + / Only deploy when needed const code = await signer.provider.getCode(this.address); if (code !== "0x") { console.log("Already deployed"); @@ -291,14 +291,14 @@ class CounterfactualSafe { Deploy and enable Safe modules: ```javascript "Safe Module Deployment and Configuration" expandable -// Deploy module via factory +/ Deploy module via factory async function deployModule(moduleCode, salt) { const tx = await factory.deploy(moduleCode, salt); const receipt = await tx.wait(); return receipt.contractAddress; } -// Enable module on Safe +/ Enable module on Safe async function enableModule(safeAddress, moduleAddress, signer) { const safe = new ethers.Contract(safeAddress, safeAbi, signer); const tx = await 
safe.enableModule(moduleAddress); @@ -312,11 +312,11 @@ Deploy transaction guards for additional security: ```javascript "Deploy and Set Safe Transaction Guard" expandable async function deployAndSetGuard(safeAddress, guardCode, signer) { - // Deploy guard + / Deploy guard const salt = ethers.id(`guard-${safeAddress}`); const guardAddress = await factory.deploy(guardCode, salt); - // Set as Safe guard + / Set as Safe guard const safe = new ethers.Contract(safeAddress, safeAbi, signer); const tx = await safe.setGuard(guardAddress); await tx.wait(); @@ -330,7 +330,7 @@ async function deployAndSetGuard(safeAddress, guardCode, signer) { ### Salt Management ```javascript "Structured Salt Generation Patterns" expandable -// Structured salt generation +/ Structured salt generation function generateSalt(context, nonce) { return ethers.solidityPackedKeccak256( ["string", "uint256"], @@ -338,7 +338,7 @@ function generateSalt(context, nonce) { ); } -// Examples +/ Examples const userSalt = generateSalt(`user-${userId}`, 0); const orgSalt = generateSalt(`org-${orgId}`, iteration); const appSalt = generateSalt(`app-${appId}-${version}`, 0); @@ -375,14 +375,6 @@ async function deploySafeVersion(version, owners, threshold) { - **Singleton Verification**: Verify singleton contract before deployment - **Access Control**: The factory itself has no access control - anyone can deploy -## Troubleshooting - -| Issue | Solution | -|-------|----------| -| Address mismatch | Verify salt and bytecode are identical | -| Deployment fails | Check sufficient gas and valid bytecode | -| Safe not working | Ensure proper initialization after deployment | -| Cross-chain inconsistency | Verify same singleton and salt used | ## Related Contracts diff --git a/docs/evm/next/security-audit.mdx b/docs/evm/next/security-audit.mdx new file mode 100644 index 00000000..58ca1ab4 --- /dev/null +++ b/docs/evm/next/security-audit.mdx @@ -0,0 +1,72 @@ +--- +title: "Security Audit" +description: "External security 
audit report for the Cosmos EVM module" +icon: "shield-check" +keywords: ['security', 'audit', 'sherlock', 'vulnerability', 'assessment', 'evm', 'cosmos'] +--- + +## Overview + +The Cosmos EVM module underwent a comprehensive security audit conducted by Sherlock, a leading blockchain security firm. The audit was completed on July 28, 2025, providing an independent assessment of the module's security posture, code quality, and potential vulnerabilities. + +## Audit Details + +**Auditor**: Sherlock +**Audit Completion Date**: July 28, 2025 +**Report Version**: Final +**Pages**: 203 + +## Scope + +The security audit covered the entire EVM module implementation, including: + +- Core EVM execution environment +- Precompiled contracts and their integrations +- State management and storage mechanisms +- Transaction processing and gas metering +- Integration with Cosmos SDK modules +- Cross-chain functionality and IBC compatibility +- Security-critical components and access controls + +## Key Areas of Focus + +The audit specifically examined: + +1. **Smart Contract Security**: Analysis of precompiles and their interaction patterns +2. **State Consistency**: Verification of state transitions and atomicity guarantees +3. **Gas Economics**: Review of gas consumption and potential denial-of-service vectors +4. **Access Controls**: Examination of permission systems and authorization mechanisms +5. **Integration Points**: Assessment of module boundaries and cross-module communications +6. **Edge Cases**: Testing of boundary conditions and error handling paths + +## Accessing the Report + +The complete audit report is publicly available and can be accessed through the following link: + + + Download the complete 203-page security audit report conducted by Sherlock + + +## Recommendations + +Security audits are a critical component of blockchain development. 
While this audit provides confidence in the module's security, users and developers should: + +- Stay informed about any security advisories or updates +- Follow best practices when developing applications using the EVM module +- Report any suspected vulnerabilities through appropriate security channels +- Keep their deployments updated with the latest security patches + +## Continuous Security + +Security is an ongoing process. The Cosmos EVM team maintains a commitment to: + +- Regular security reviews and assessments +- Prompt response to security disclosures +- Transparent communication about security matters +- Collaboration with the security research community + +For security-related inquiries or to report potential vulnerabilities, please follow the [Cosmos Security Policy](https://github.com/cosmos/cosmos-sdk/security/policy). \ No newline at end of file diff --git a/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/index.mdx b/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/index.mdx index 371fc704..fa777305 100644 --- a/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/index.mdx +++ b/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/index.mdx @@ -37,7 +37,7 @@ More on Ethereum JSON-RPC: * All `trace_*` methods - Parity/OpenEthereum trace namespace * All `engine_*` methods - Post-merge Engine API - See the [methods page](./methods) for complete details. + See the [methods page](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods) for complete details. ## Enabling the JSON-RPC Server @@ -117,19 +117,19 @@ Response: ### Namespaces supported on Cosmos EVM -See the [methods](./methods) page for an exhaustive list and working examples. +See the [methods](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods) page for an exhaustive list and working examples. 
| Namespace | Description | Supported | Enabled by Default | |-----------|-------------|-----------|-------------------| -| [`eth`](./methods#eth-methods) | Core Ethereum JSON-RPC methods for interacting with the EVM | Y | Y | -| [`web3`](./methods#web3-methods) | Utility functions for the web3 client | Y | Y | -| [`net`](./methods#net-methods) | Network information about the node | Y | Y | -| [`txpool`](./methods#txpool-methods) | Transaction pool inspection | Y | N | -| [`debug`](./methods#debug-methods) | Debugging and tracing functionality | Y | N | -| [`personal`](./methods#personal-methods) | Private key management | Y | N | -| [`admin`](./methods#admin-methods) | Node administration | Y | N | -| [`miner`](./methods#miner-methods) | Mining operations (stub for PoS) | Y | N | +| [`eth`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#eth-methods) | Core Ethereum JSON-RPC methods for interacting with the EVM | Y | Y | +| [`web3`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#web3-methods) | Utility functions for the web3 client | Y | Y | +| [`net`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#net-methods) | Network information about the node | Y | Y | +| [`txpool`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#txpool-methods) | Transaction pool inspection | Y | N | +| [`debug`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#debug-methods) | Debugging and tracing functionality | Y | N | +| [`personal`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#personal-methods) | Private key management | Y | N | +| [`admin`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#admin-methods) | Node administration | Y | N | +| [`miner`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#miner-methods) | Mining operations (stub for PoS) | Y | N | | `clique` | Proof-of-Authority consensus | N | N | | `les` | Light Ethereum Subprotocol | N | N | @@ -141,7 +141,7 @@ See the [methods](./methods) page for an 
exhaustive list and working examples. ### Filters[​](#filters "Direct link to Filters") -Cosmos EVM also supports the Ethereum [JSON-RPC](./methods) filters calls to subscribe to [state logs](https://eth.wiki/json-rpc/API#eth_newfilter), [blocks](https://eth.wiki/json-rpc/API#eth_newblockfilter) or [pending transactions](https://eth.wiki/json-rpc/API#eth_newpendingtransactionfilter) changes. +Cosmos EVM also supports the Ethereum [JSON-RPC](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods) filters calls to subscribe to [state logs](https://eth.wiki/json-rpc/API#eth_newfilter), [blocks](https://eth.wiki/json-rpc/API#eth_newblockfilter) or [pending transactions](https://eth.wiki/json-rpc/API#eth_newpendingtransactionfilter) changes. Under the hood, it uses the CometBFT RPC client's event system to process subscriptions that are then formatted to Ethereum-compatible events. @@ -256,11 +256,11 @@ Examples: Several methods that query the state of the EVM accept a default block parameter. This allows you to specify the block height at which to perform the query. 
Methods supporting block parameter: -* [`eth_getBalance`](./methods#eth_getbalance) -* [`eth_getCode`](./methods#eth_getcode) -* [`eth_getTransactionCount`](./methods#eth_gettransactioncount) -* [`eth_getStorageAt`](./methods#eth_getstorageat) -* [`eth_call`](./methods#eth_call) +* [`eth_getBalance`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#eth_getbalance) +* [`eth_getCode`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#eth_getcode) +* [`eth_getTransactionCount`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#eth_gettransactioncount) +* [`eth_getStorageAt`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#eth_getstorageat) +* [`eth_call`](/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods#eth_call) The possible values for the `defaultBlock` parameter: * **Hex String** - A specific block number (e.g., `0xC9B3C0`) diff --git a/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods.mdx b/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods.mdx index b940d139..2ef97b54 100644 --- a/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods.mdx +++ b/docs/evm/v0.4.x/api-reference/ethereum-json-rpc/methods.mdx @@ -394,9 +394,9 @@ web3.sha3(input); Returns the current network id. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"net_version","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"8"} ``` @@ -405,9 +405,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"net_version","params":[],"id":1} Returns the number of peers currently connected to the client. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":23} ``` @@ -416,9 +416,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id": Returns if client is actively listening for network connections. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"net_listening","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":true} ``` @@ -429,9 +429,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"net_listening","params":[],"id": Returns the current ethereum protocol version. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_protocolVersion","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x3f"} ``` @@ -440,9 +440,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_protocolVersion","params":[] The sync status object may need to be different depending on the details of cometbft's sync protocol. However, the 'synced' result is simply a boolean, and can easily be derived from cometbft's internal sync state. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":false} ``` @@ -451,9 +451,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1} Returns the current gas price in the default EVM denomination parameter. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_gasPrice","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -462,9 +462,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_gasPrice","params":[],"id":1 Returns array of all eth accounts. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_accounts","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":["0x3b7252d007059ffc82d16d022da3cbf9992d2f70","0xddd64b4712f7c8f1ace3c145c950339eddaf221d","0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0"]} ``` @@ -473,9 +473,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_accounts","params":[],"id":1 Returns the current block height. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x66"} ``` @@ -491,9 +491,9 @@ Returns the account balance for a given account address and Block Number. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x36354d5575577c8000"} ``` @@ -510,9 +510,9 @@ Returns the storage address for a given account address. 
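`eth_getStorageAt` returns a 32-byte storage word as hex; interpreting it is a matter of integer conversion or byte slicing. A sketch using the empty slot from the sample response (the helper names are illustrative):

```python
word = "0x" + "00" * 32  # sample eth_getStorageAt result: an empty slot

def word_to_int(w):
    # A storage word read as an unsigned integer.
    return int(w, 16)

def word_to_address(w):
    # An address stored in a slot occupies the low 20 bytes of the word.
    return "0x" + w[-40:]
```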
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getStorageAt","params":["0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "0", "latest"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0000000000000000000000000000000000000000000000000000000000000000"} ``` @@ -528,9 +528,9 @@ Returns the total transaction count for a given account address and Block Number. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionCount","params":["0x7bf7b17da59880d9bcca24915679668db75f9397", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x8"} ``` @@ -545,9 +545,9 @@ Returns the total transaction count for a given block number. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockTransactionCountByNumber","params":["0x1"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result
{"jsonrpc":"2.0","id":1,"result":{"difficulty":null,"extraData":"0x0","gasLimit":"0xffffffff","gasUsed":"0x0","hash":"0x8101cc04aea3341a6d4b3ced715e3f38de1e72867d6c0db5f5247d1a42fbb085","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","miner":"0x0000000000000000000000000000000000000000","nonce":null,"number":"0x17d","parentHash":"0x70445488069d2584fea7d18c829e179322e2b2185b25430850deced481ca2e77","sha3Uncles":null,"size":"0x1df","stateRoot":"0x269bb17fe7adb8dd5f15f57b717979f82078d6b7a675c1ba1b0da2d27e415fcc","timestamp":"0x5f5ba97c","totalDifficulty":null,"transactions":[],"transactionsRoot":"0x","uncles":[]}} ``` @@ -562,9 +562,9 @@ Returns the total transaction count for a given block hash. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockTransactionCountByHash","params":["0x8101cc04aea3341a6d4b3ced715e3f38de1e72867d6c0db5f5247d1a42fbb085"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x3"} ``` @@ -580,9 +580,9 @@ Returns the code for a given account address and Block Number. 
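A common use of `eth_getCode` is telling contracts apart from externally owned accounts: an EOA returns `0x`, a contract returns its deployed bytecode. A sketch (the sample bytecode is truncated and purely illustrative):

```python
def is_contract(code_hex):
    # eth_getCode returns "0x" for an account with no deployed bytecode.
    return code_hex not in ("", "0x")

eoa_code = "0x"                 # typical result for an externally owned account
contract_code = "0xef616c92f3"  # truncated sample bytecode, illustrative only
```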
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getCode","params":["0x7bf7b17da59880d9bcca24915679668db75f9397", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xef616c92f3cfc9e92dc270d6acff9cea213cecc7020a76ee4395af09bdceb4837a1ebdb5735e11e7d3adb6104e0c3ac55180b4ddf5e54d022cc5e8837f6a4f971b"} ``` @@ -604,9 +604,9 @@ The address to sign with must be unlocked. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_sign","params":["0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "0xdeadbeaf"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x909809c76ed2a5d38733de39207d0f411222b9b49c64a192bf649cb13f63f37b45acb4f6939facb4f1c277bc70fb00407564140c0f18600ac44388f2c1dfd1dc1b"} ``` @@ -628,10 +628,10 @@ Signs typed structured data according to [EIP-712](https://eips.ethereum.org/EIP - `message`: Message to sign ```shell -// Request - Note: proper domain structure is required +/ Request - Note: proper domain structure is required curl -X POST --data '{"jsonrpc":"2.0","method":"eth_signTypedData","params":["0x3b7252d007059ffc82d16d022da3cbf9992d2f70", {"domain":{"name":"Example","version":"1","chainId":1,"verifyingContract":"0xCcCCccccCCCCcCCCCCCcCcCccCcCCCcCcccccccC"},"types":{"EIP712Domain":[{"name":"name","type":"string"},{"name":"version","type":"string"},{"name":"chainId","type":"uint256"},{"name":"verifyingContract","type":"address"}],"Person":[{"name":"name","type":"string"},{"name":"wallet","type":"address"}]},"primaryType":"Person","message":{"name":"Bob","wallet":"0xbBbBBBBbbBBBbbbBbbBbbbbBBbBbbbbBbBbbBBbB"}}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result -{"jsonrpc":"2.0","id":1,"result":"0x..."} // Returns signature +/ Result +{"jsonrpc":"2.0","id":1,"result":"0x..."} / Returns signature ``` ### `eth_sendTransaction` 
@@ -653,9 +653,9 @@ Sends transaction from given account to a given account. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_sendTransaction","params":[{"from":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "to":"0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "value":"0x16345785d8a0000", "gasLimit":"0x5208", "gasPrice":"0x55ae82600"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x33653249db68ebe5c7ae36d93c9b2abc10745c80a72f591e296f598e2d4709f6"} ``` @@ -670,9 +670,9 @@ Creates new message call transaction or a contract creation for signed transacti ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_sendRawTransaction","params":["0xf9ff74c86aefeb5f6019d77280bbb44fb695b4d45cfe97e6eed7acd62905f4a85034d5c68ed25a2e7a8eeb9baf1b8401e4f865d92ec48c1763bf649e354d900b1c"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0000000000000000000000000000000000000000000000000000000000000000"} ``` @@ -696,9 +696,9 @@ Executes a new message call immediately without creating a transaction on the bl ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_call","params":[{"from":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "to":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d", "gas":"0x5208", "gasPrice":"0x55ae82600", "value":"0x16345785d8a0000", "data": "0xd46e8dd67c5d32be8d46e8dd67c5d32be8058bb8eb970870f072445675058bb8eb970870f072445675"}, "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x"} ``` @@ -717,9 +717,9 @@ Returns an estimate value of the gas required to send the transaction. 
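Because `eth_estimateGas` is a simulation, clients commonly decode the hex estimate and add headroom before sending; the 20% margin here is an arbitrary illustration, and the estimate is the sample result shown below:

```python
estimate = int("0x1199b", 16)    # sample eth_estimateGas result: 72091
gas_limit = int(estimate * 1.2)  # 20% headroom, an arbitrary choice
gas_limit_hex = hex(gas_limit)   # hex-encoded for the transaction object
```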
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_estimateGas","params":[{"from":"0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "to":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "value":"0x16345785d8a00000"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x1199b"} ``` @@ -735,9 +735,9 @@ Returns information about a block by block number. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1", false],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"difficulty":null,"extraData":"0x0","gasLimit":"0xffffffff","gasUsed":null,"hash":"0xabac6416f737a0eb54f47495b60246d405d138a6a64946458cf6cbeae0d48465","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","miner":"0x0000000000000000000000000000000000000000","nonce":null,"number":"0x1","parentHash":"0x","sha3Uncles":null,"size":"0x9b","stateRoot":"0x","timestamp":"0x5f5bd3e5","totalDifficulty":null,"transactions":[],"transactionsRoot":"0x","uncles":[]}} ``` @@ -753,9 +753,9 @@ Returns the block info given the hash found in the command above and a bool. 
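Block timestamps are hex-encoded Unix seconds; converting one to a date is a quick sanity check. The value here is the `timestamp` field from the `eth_getBlockByNumber` sample above:

```python
from datetime import datetime, timezone

ts = int("0x5f5bd3e5", 16)  # timestamp field from the sample block above
block_time = datetime.fromtimestamp(ts, tz=timezone.utc)
```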
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockByHash","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4", false],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"difficulty":null,"extraData":"0x0","gasLimit":"0xffffffff","gasUsed":null,"hash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","logsBloom":"0x00000000100000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000040000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000002000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000","miner":"0x0000000000000000000000000000000000000000","nonce":null,"number":"0xc","parentHash":"0x404e58f31a9ede1b614b98701d6b0fbf1450f186842dbcf6426dd16811a5ca0d","sha3Uncles":null,"size":"0x307","stateRoot":"0x599ccdb111fc62c6398dc39be957df8e97bf8ab72ce6c06ff10641a92b754627","timestamp":"0x5f5fdbbd","totalDifficulty":null,"transactions":["0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615"],"transactionsRoot":"0x4764dba431128836fa919b83d314ba9cc000e75f38e1c31a60484409acea777b","uncles":[]}} ``` @@ -770,9 +770,9 @@ Returns transaction details given the ethereum tx something. 
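For legacy transactions, the `v` field in the returned object encodes the chain ID under EIP-155 (`v = chain_id * 2 + 35` or `+ 36`), so it can be recovered by inversion. A sketch using the `v` value from the sample transaction below:

```python
def chain_id_from_v(v_hex):
    # EIP-155: v = chain_id * 2 + 35 (or 36); invert with integer division.
    v = int(v_hex, 16)
    return (v - 35) // 2

cid = chain_id_from_v("0xa96")  # v field from the sample transaction below
```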
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionByHash","params":["0xec5fa15e1368d6ac314f9f64118c5794f076f63c02e66f97ea5fe1de761a8973"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"blockHash":"0x7a7398cc11d9c4c8e6f53e0c73824297aceafdab62db9e4b867a0da694384864","blockNumber":"0x188","from":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70","gas":"0x147ee","gasPrice":"0x3b9aca00","hash":"0xec5fa15e1368d6ac314f9f64118c5794f076f63c02e66f97ea5fe1de761a8973","input":"0x6dba746c","nonce":"0x18","to":"0xa655256f589060437e5ffe2246dec385d040f148","transactionIndex":"0x0","value":"0x0","v":"0xa96","r":"0x6db399d694a452fb4106419140a6e5dbbe6817743a0f6f695a651e6576e59a5e","s":"0x25dd6ab1f936d0280d2fed0caeb0ebe5b9a46de6d8cb08ad8fd2c88deb55fc31"}} ``` @@ -788,9 +788,9 @@ Returns transaction details given the block hash and the transaction index. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionByBlockHashAndIndex","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"blockHash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","blockNumber":"0xc","from":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d","gas":"0x4c4b40","gasPrice":"0x3b9aca00","hash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","input":"0x4f2be91f","nonce":"0x0","to":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","transactionIndex":"0x0","value":"0x0","v":"0xa96","r":"0xced57d973e58b0f634f776d57daf41d3d3387ceb347a3a72ca0746e5ec2b709e","s":"0x384e89e209a5eb147a2bac3a4e399507400ac7b29cd155531f9d6203a89db3f2"}} ``` @@ -812,9 +812,9 @@ Note: Tx Code from CometBFT and the Ethereum receipt status are switched: ```shell -// Request +/ Request curl -X POST --data 
'{"jsonrpc":"2.0","method":"eth_getTransactionReceipt","params":["0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea614"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"blockHash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","blockNumber":"0xc","contractAddress":"0x0000000000000000000000000000000000000000","cumulativeGasUsed":null,"from":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d","gasUsed":"0x5289","logs":[{"address":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","topics":["0x64a55044d1f2eddebe1b90e8e2853e8e96931cefadbfa0b2ceb34bee36061941"],"data":"0x0000000000000000000000000000000000000000000000000000000000000002","blockNumber":"0xc","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0","blockHash":"0x0000000000000000000000000000000000000000000000000000000000000000","logIndex":"0x0","removed":false},{"address":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","topics":["0x938d2ee5be9cfb0f7270ee2eff90507e94b37625d9d2b3a61c97d30a4560b829"],"data":"0x0000000000000000000000000000000000000000000000000000000000000002","blockNumber":"0xc","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0","blockHash":"0x0000000000000000000000000000000000000000000000000000000000000000","logIndex":"0x1","removed":false}],"logsBloom":"0x00000000100000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000040000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000002000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000","status":"0x1","to":"0x439c
697e0742a0ddb124a376efd62a72a94ac35a","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0"}} ``` @@ -829,9 +829,9 @@ Create new filter using topics of some kind. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newFilter","params":[{"topics":["0x0000000000000000000000000000000000000000000000000000000012341234"]}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xdc714a4a2e3c39dc0b0b84d66a3ccb00"} ``` @@ -840,9 +840,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newFilter","params":[{"topic Creates a filter in the node, to notify when a new block arrives. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newBlockFilter","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x3503de5f0c766c68f78a03a3b05036a5"} ``` @@ -851,9 +851,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newBlockFilter","params":[], Creates a filter in the node, to notify when new pending transactions arrive. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_newPendingTransactionFilter","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x9daacfb5893d946997d3801ea18e9902"} ``` @@ -868,9 +868,9 @@ Removes the filter with the given filter id. 
Returns true if the filter was succ ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_uninstallFilter","params":["0xb91b6608b61bf56288a661a1bd5eb34a"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":true} ``` @@ -885,9 +885,9 @@ Polling method for a filter, which returns an array of logs which occurred since ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getFilterChanges","params":["0x127e9eca4f7751fb4e5cb5291ad8b455"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":["0xc6f08d183a81e149896fc5317c872f9092068e88e956ca1864e9bd4c81c09b44","0x3ca6dfb5be15549d721d1b3d10c1bec50ed6217c9ac7b61df361fac9692a27e5","0x776fffac134171acb1ebf2e59856625501ad5ccc5c4c8fe0359e0d4dff8919f2","0x84123103704dbd738c089276ab2b04b5936330b24f6e78453c4ba8bf4848aaf9","0xffddbe5bd8e8aa41e44002daa9ea89ade9e6980a0d83f51d104cf16498827eca","0x53430e49963e8ae32605d8f22dec2e757a691e6436d593854ca4d9383eeab86a","0x975948058c9351a91fbec332ca00dda39d1a919f5f16b996a4c7e30c38ba423b","0x619e37e32024c8efef7f7220e6caff4ee1d682ea78b2ac91e0a6b30850dc0677","0x31a5d985a40d08303ac68000ce008df512bcd1a911c497415c97f0624b4a271a","0x91dcf1fce4503a8dbb3e6fb61073f25cd31d69c766ecba639fefde4436e59d07","0x606d9e0143cfdb410a6812c590a8135b5c6b5c59eec26d760d5cd930aa47257d","0xd3c00b859b29b20ba654415eef648ef58251389c73a138580db87675b0d5465f","0x954391f0eb50888be90489898016ebb54f750f612f3adec2a00854955d5e52d8","0x698905f06aff921a9e9fcef39b8b0d107747c3e6204d2ea79cf4c12debf8d253","0x9fcafec5721938a06eb8e2951ede4b6ef8fae54a8c8f85f3166ec9782a0032b5","0xaec6d3364e47a5716ba69e4705f3c705d017f81298859589591183bfea87be7a","0x91bf2ee13319b6eaca96ed89c126437b66c4df1b13560c6a9bb18556ee3b7e1f","0x4f426dc1fc0ea8149052033065b237892d2d34927b2d558ab50c5a7fb98d6e79","0xdd809fb07e5aab638fef5311371b4e2b27c9c9a6183fde0cdd2b7724f6d2a89b",
"0x7e12fc92ab953e233a304959a2a8474d96195e71efd9388fdceb1326a577811a","0x30618ef6b490c3cc9979c47163459db37c1a1e0aa5793c56accd417f9d89973b","0x614609f06ee24bae7408e45895b1a25e6b19a8159aeea7a95c9d1339d9ba286f","0x115ddc6d533620040791d241f01f1c5ae3d9d1a8f64b15af5e9793e4d9096e22","0xb7458c9323beeca2cd54f32a6af5671f3cd5a7a251aed9d82bdd6ebe5f56305b","0x573dd48a5ba7bf4cc3d49597cd7419f75ecc9897258f1ebadebd670446d0d358","0xcb6670918439f9698413b53f3b5336d82ca4be152fdefaacf45e052fff6262fc","0xf3fe2a8945abafd269ab97bfdc80b3dbff2202ffdce59a227f952874b966b230","0x989980707007533cc0840a079f77f261a2e818abae1a1ffd3af02f3fff1d35fd","0x886b6ae365fec996be8a9a2c31cf4cda97ff8352908be2c83f17abd66ef1591e","0xfd90df68706ef95a62b317de93d6899a9bd6c80416e42d007f5c30fcdedfce24","0x7af8491fbb0373886d9032bb74e0ef52ed9e100f260b79bd15f46126b38cbede","0x91d1e2cd55533cf7dd5de86c9aa73295e811b1279be193d429bbd6ba83810e16","0x6b65b3128c2104005a04923288fe2aa33a2477a4962bef70532f94cab582f2a7"]} ``` @@ -902,9 +902,9 @@ Returns an array of all logs matching filter with given id. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getFilterLogs","params":["0x127e9eca4f7751fb4e5cb5291ad8b455"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"filter 0x35b64c227ce30e84fc5c7bd347be380e doesn't have a LogsSubscription type: got 5"}} ``` @@ -925,9 +925,9 @@ Returns an array of all logs matching a given filter object. 
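The filter object combines a block range with positional topics (per the JSON-RPC spec, a `null` topic entry acts as a wildcard). A sketch mirroring the request below, with the sample topic values:

```python
# Filter object for eth_getLogs; topics match positionally.
log_filter = {
    "fromBlock": "latest",
    "topics": [
        "0x775a94827b8fd9b519d36cd827093c664f93347070a554f65e4a6f56cd738898",
        "0x0000000000000000000000000000000000000000000000000000000000000011",
    ],
}
params = [log_filter]  # params array for the eth_getLogs request
```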
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getLogs","params":[{"topics":["0x775a94827b8fd9b519d36cd827093c664f93347070a554f65e4a6f56cd738898","0x0000000000000000000000000000000000000000000000000000000000000011"], "fromBlock":"latest"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":[]} ``` @@ -936,9 +936,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getLogs","params":[{"topics" Returns the account the mining rewards will be sent to. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_coinbase","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x7cB61D4117AE31a12E393a1Cfa3BaC666481D02E"} ``` @@ -951,9 +951,9 @@ Always returns `false` as Cosmos EVM uses 'CometBFT' consensus instead of mining. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_mining","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":false} ``` @@ -966,9 +966,9 @@ Always returns `0x0` as Cosmos EVM uses 'CometBFT' consensus instead of mining. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_hashrate","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -977,9 +977,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_hashrate","params":[],"id":1 Returns the chain ID used for signing replay-protected transactions.
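Note that `eth_chainId` returns a hex quantity while `net_version` (earlier on this page) returns a decimal string, so the two decode differently. Using the sample results:

```python
chain_id = int("0x40000", 16)  # sample eth_chainId result below
net_id = int("8")              # sample net_version result, a decimal string
```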
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x40000"} ``` @@ -996,9 +996,9 @@ Always returns `0x0` as Cosmos EVM does not have uncles due to using 'CometBFT' - Block hash ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleCountByBlockHash","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -1015,9 +1015,9 @@ Always returns `0x0` as Cosmos EVM does not have uncles due to using 'CometBFT' - Block number ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleCountByBlockNumber","params":["latest"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -1035,9 +1035,9 @@ Always returns `null` as Cosmos EVM does not have uncles due to using 'CometBFT' - Uncle index position ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleByBlockHashAndIndex","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1055,9 +1055,9 @@ Always returns `null` as Cosmos EVM does not have uncles due to using 'CometBFT' - Uncle index position ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getUncleByBlockNumberAndIndex","params":["0x1", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1071,9 +1071,9 @@ Returns information about a transaction by block number and 
transaction index po - Transaction index position ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionByBlockNumberAndIndex","params":["0x1", "0x0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"blockHash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","blockNumber":"0x1","from":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d","gas":"0x4c4b40","gasPrice":"0x3b9aca00","hash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","input":"0x4f2be91f","nonce":"0x0","to":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","transactionIndex":"0x0","value":"0x0","v":"0xa96","r":"0xced57d973e58b0f634f776d57daf41d3d3387ceb347a3a72ca0746e5ec2b709e","s":"0x384e89e209a5eb147a2bac3a4e399507400ac7b29cd155531f9d6203a89db3f2"}} ``` @@ -1082,9 +1082,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionByBlockNumberA Returns the current maxPriorityFeePerGas per gas in wei. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_maxPriorityFeePerGas","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x0"} ``` @@ -1097,9 +1097,9 @@ Returns all transaction receipts for a given block. 
- Block number or block hash ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockReceipts","params":["latest"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":[{"blockHash":"0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4","blockNumber":"0xc","contractAddress":"0x0000000000000000000000000000000000000000","cumulativeGasUsed":null,"from":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d","gasUsed":"0x5289","logs":[],"logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","status":"0x1","to":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0"}]} ``` @@ -1112,9 +1112,9 @@ Returns the logs for a specific transaction. 
- Transaction hash ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionLogs","params":["0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":[{"address":"0x439c697e0742a0ddb124a376efd62a72a94ac35a","topics":["0x64a55044d1f2eddebe1b90e8e2853e8e96931cefadbfa0b2ceb34bee36061941"],"data":"0x0000000000000000000000000000000000000000000000000000000000000002","blockNumber":"0xc","transactionHash":"0xae64961cb206a9773a6e5efeb337773a6fd0a2085ce480a174135a029afea615","transactionIndex":"0x0","blockHash":"0x0000000000000000000000000000000000000000000000000000000000000000","logIndex":"0x0","removed":false}]} ``` @@ -1127,9 +1127,9 @@ Fills the defaults (nonce, gas, gasPrice or 1559 fields) on a given unsigned tra - Transaction object with optional fields ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"eth_fillTransaction","params":[{"from":"0x0123456789012345678901234567890123456789","to":"0x0123456789012345678901234567890123456789","value":"0x1"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"raw":"0x...","tx":{"nonce":"0x0","gasPrice":"0x3b9aca00","gas":"0x5208","to":"0x0123456789012345678901234567890123456789","value":"0x1","input":"0x","v":"0x0","r":"0x0","s":"0x0","hash":"0x..."}}} ``` @@ -1148,10 +1148,10 @@ Resends a transaction with updated gas parameters. 
Removes the given transaction - New gas limit (hex) ```shell -// Request - Note the required 'nonce' field in the transaction object +/ Request - Note the required 'nonce' field in the transaction object curl -X POST --data '{"jsonrpc":"2.0","method":"eth_resend","params":[{"from":"0x0123456789012345678901234567890123456789","to":"0x0123456789012345678901234567890123456789","value":"0x1","gas":"0x5208","gasPrice":"0x3b9aca00","nonce":"0x0"},"0x3b9aca01","0x5209"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result -{"jsonrpc":"2.0","id":1,"result":"0x..."} // Returns transaction hash +/ Result +{"jsonrpc":"2.0","id":1,"result":"0x..."} / Returns transaction hash ``` ### `eth_getProof` @@ -1169,9 +1169,9 @@ Returns the account- and storage-values of the specified account including the M - Block Number (must be a specific number, not "latest") ```shell -// Request - Note: using a specific block number instead of "latest" +/ Request - Note: using a specific block number instead of "latest" curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getProof","params":["0x1234567890123456789012345678901234567890",["0x0000000000000000000000000000000000000000000000000000000000000000","0x0000000000000000000000000000000000000000000000000000000000000001"],"0x1"],"id":1}' -H "Content-type:application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc": "2.0", "id": 1, "result": {"address": "0x1234567890123456789012345678901234567890", "accountProof": 
["0xf90211a090dcaf88c40c7bbc95a912cbdde67c175767b31173df9ee4b0d733bfdd511c43a0babe369f6b12092f49181ae04ca173fb68d1a5456f18d20fa32cba73954052bda0473ecf8a7e36a829e75039a3b055e51b8332cbf03324ab4af2066bbd6fbf0021a0bbda34753d7aa6c38e603f360244e8f59611921d9e1f128372fec0d586d4f9e0a04e44caecff45c9891f74f6a2156735886eedf6f1a733628ebc802ec79d844648a0a5f3f2f7542148c973977c8a1e154c4300fec92f755f7846f1b734d3ab1d90e7a0e823850f50bf72baae9d1733a36a444ab65d0a6faaba404f0583ce0ca4dad92da0f7a00cbe7d4b30b11faea3ae61b7f1f2b315b61d9f6bd68bfe587ad0eeceb721a07117ef9fc932f1a88e908eaead8565c19b5645dc9e5b1b6e841c5edbdfd71681a069eb2de283f32c11f859d7bcf93da23990d3e662935ed4d6b39ce3673ec84472a0203d26456312bbc4da5cd293b75b840fc5045e493d6f904d180823ec22bfed8ea09287b5c21f2254af4e64fca76acc5cd87399c7f1ede818db4326c98ce2dc2208a06fc2d754e304c48ce6a517753c62b1a9c1d5925b89707486d7fc08919e0a94eca07b1c54f15e299bd58bdfef9741538c7828b5d7d11a489f9c20d052b3471df475a051f9dd3739a927c89e357580a4c97b40234aa01ed3d5e0390dc982a7975880a0a089d613f26159af43616fd9455bb461f4869bfede26f2130835ed067a8b967bfb80", 
"0xf90211a0395d87a95873cd98c21cf1df9421af03f7247880a2554e20738eec2c7507a494a0bcf6546339a1e7e14eb8fb572a968d217d2a0d1f3bc4257b22ef5333e9e4433ca012ae12498af8b2752c99efce07f3feef8ec910493be749acd63822c3558e6671a0dbf51303afdc36fc0c2d68a9bb05dab4f4917e7531e4a37ab0a153472d1b86e2a0ae90b50f067d9a2244e3d975233c0a0558c39ee152969f6678790abf773a9621a01d65cd682cc1be7c5e38d8da5c942e0a73eeaef10f387340a40a106699d494c3a06163b53d956c55544390c13634ea9aa75309f4fd866f312586942daf0f60fb37a058a52c1e858b1382a8893eb9c1f111f266eb9e21e6137aff0dddea243a567000a037b4b100761e02de63ea5f1fcfcf43e81a372dafb4419d126342136d329b7a7ba032472415864b08f808ba4374092003c8d7c40a9f7f9fe9cc8291f62538e1cc14a074e238ff5ec96b810364515551344100138916594d6af966170ff326a092fab0a0d31ac4eef14a79845200a496662e92186ca8b55e29ed0f9f59dbc6b521b116fea090607784fe738458b63c1942bba7c0321ae77e18df4961b2bc66727ea996464ea078f757653c1b63f72aff3dcc3f2a2e4c8cb4a9d36d1117c742833c84e20de994a0f78407de07f4b4cb4f899dfb95eedeb4049aeb5fc1635d65cf2f2f4dfd25d1d7a0862037513ba9d45354dd3e36264aceb2b862ac79d2050f14c95657e43a51b85c80", "0xf90171a04ad705ea7bf04339fa36b124fa221379bd5a38ffe9a6112cb2d94be3a437b879a08e45b5f72e8149c01efcb71429841d6a8879d4bbe27335604a5bff8dfdf85dcea00313d9b2f7c03733d6549ea3b810e5262ed844ea12f70993d87d3e0f04e3979ea0b59e3cdd6750fa8b15164612a5cb6567cdfb386d4e0137fccee5f35ab55d0efda0fe6db56e42f2057a071c980a778d9a0b61038f269dd74a0e90155b3f40f14364a08538587f2378a0849f9608942cf481da4120c360f8391bbcc225d811823c6432a026eac94e755534e16f9552e73025d6d9c30d1d7682a4cb5bd7741ddabfd48c50a041557da9a74ca68da793e743e81e2029b2835e1cc16e9e25bd0c1e89d4ccad6980a041dda0a40a21ade3a20fcd1a4abb2a42b74e9a32b02424ff8db4ea708a5e0fb9a09aaf8326a51f613607a8685f57458329b41e938bb761131a5747e066b81a0a16808080a022e6cef138e16d2272ef58434ddf49260dc1de1f8ad6dfca3da5d2a92aaaadc58080", "0xf851808080a009833150c367df138f1538689984b8a84fc55692d3d41fe4d1e5720ff5483a6980808080808080808080a0a319c1c415b271afc0adcb664e67738d103ac168e0bc0b7bd2da7966165cb9518080"], 
"balance": "0x0", "codeHash": "0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470", "nonce": "0x0", "storageHash": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421", "storageProof": [{"key": "0x0000000000000000000000000000000000000000000000000000000000000000", "value": "0x0", "proof": []}, {"key": "0x0000000000000000000000000000000000000000000000000000000000000001", "value": "0x0", "proof": []}]}} ``` @@ -1191,9 +1191,9 @@ subscribe using JSON-RPC notifications. This allows clients to wait for events i ```shell -// Request +/ Request {"id": 1, "method": "eth_subscribe", "params": ["newHeads", {"includeTransactions": true}]} -// Result +/ Result < {"jsonrpc":"2.0","result":"0x34da6f29e3e953af4d0c7c58658fd525","id":1} ``` @@ -1208,9 +1208,9 @@ Unsubscribe from an event using the subscription id ```shell -// Request +/ Request {"id": 1, "method": "eth_unsubscribe", "params": ["0x34da6f29e3e953af4d0c7c58658fd525"]} -// Result +/ Result {"jsonrpc":"2.0","result":true,"id":1} ``` @@ -1240,10 +1240,10 @@ Imports the given unencrypted private key (hex encoded string) into the key stor - Required: [Y] Yes ```shell -// Request - Note: private key without "0x" prefix +/ Request - Note: private key without "0x" prefix curl -X POST --data '{"jsonrpc":"2.0","method":"personal_importRawKey","params":["c5bd76cd0cd948de17a31261567d219576e992d9066fe1a6bca97496dec634e2", "the key is this"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result -{"jsonrpc":"2.0","id":1,"result":"0x..."} // Returns the address +/ Result +{"jsonrpc":"2.0","id":1,"result":"0x..."} / Returns the address ``` ### `personal_listAccounts` @@ -1255,9 +1255,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"personal_importRawKey","params": Returns a list of addresses for accounts this node manages. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_listAccounts","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":["0x3b7252d007059ffc82d16d022da3cbf9992d2f70","0xddd64b4712f7c8f1ace3c145c950339eddaf221d","0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0"]} ``` @@ -1276,9 +1276,9 @@ Removes the private key with given address from memory. The account can no longe ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_lockAccount","params":["0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":true} ``` @@ -1297,9 +1297,9 @@ Generates a new private key and stores it in the key store directory. The key fi ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_newAccount","params":["This is the passphrase"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xf0e4086ad1c6aab5d42161d5baaae2f9ad0571c0"} ``` @@ -1324,9 +1324,9 @@ The account can be used with [`eth_sign`](https://www.google.com/search?q=%23eth ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_unlockAccount","params":["0x0f54f47bf9b8e317b214ccd6a7c3e38b893cd7f0", "secret passphrase", 30],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":true} ``` @@ -1357,9 +1357,9 @@ The account is not unlocked globally in the node and cannot be used in other RPC ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_sendTransaction","params":[{"from":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70","to":"0xddd64b4712f7c8f1ace3c145c950339eddaf221d", "value":"0x16345785d8a0000"}, "passphrase"],"id":1}' -H "Content-Type: 
application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xd2a31ec1b89615c8d1f4d08fe4e4182efa4a9c0d5758ace6676f485ea60e154c"} ``` @@ -1380,9 +1380,9 @@ The sign method calculates an Ethereum specific signature with: `sign(keccak256 ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_sign","params":["0xdeadbeaf", "0x3b7252d007059ffc82d16d022da3cbf9992d2f70", "password"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xf9ff74c86aefeb5f6019d77280bbb44fb695b4d45cfe97e6eed7acd62905f4a85034d5c68ed25a2e7a8eeb9baf1b8401e4f865d92ec48c1763bf649e354d900b1c"} ``` @@ -1404,9 +1404,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"personal_sign","params":["0xdead - Signature returned from [`personal_sign`](#personal_sign) (65 bytes) ```shell -// Request - Note: signature must be exactly 130 hex characters (65 bytes) +/ Request - Note: signature must be exactly 130 hex characters (65 bytes) curl -X POST --data '{"jsonrpc":"2.0","method":"personal_ecRecover","params":["0xdeadbeaf", "0xf9ff74c86aefeb5f6019d77280bbb44fb695b4d45cfe97e6eed7acd62905f4a85034d5c68ed25a2e7a8eeb9baf1b8401e4f865d92ec48c1763bf649e354d900b1c"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0x3b7252d007059ffc82d16d022da3cbf9992d2f70"} ``` @@ -1492,9 +1492,9 @@ Currently returns `null` as wallet-level management is not supported. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"personal_listWallets","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1511,9 +1511,9 @@ The `traceTransaction` debugging method will attempt to run the transaction in t ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceTransaction","params":["0xddecdb13226339681372b44e01df0fbc0f446fca6f834b2de5ecb1e569022ec8", {"tracer": "{data: [], fault: function(log) {}, step: function(log) { if(log.op.toString() == \"CALL\") this.data.push(log.stack.peek(0)); }, result: function() { return this.data; }}"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -//Result +/Result ["68410", "51470"] ``` @@ -1531,9 +1531,9 @@ The `traceBlockByNumber` endpoint accepts a block number and will replay the blo - Trace Config (optional) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0xe", {"tracer": "{data: [], fault: function(log) {}, step: function(log) { if(log.op.toString() == \"CALL\") this.data.push(log.stack.peek(0)); }, result: function() { return this.data; }}"}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result - Currently returns empty array +/ Result - Currently returns empty array {"jsonrpc":"2.0","id":1,"result":[]} ``` @@ -1551,9 +1551,9 @@ Similar to `debug_traceBlockByNumber`, this method accepts a block hash and will - Trace Config (optional) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_traceBlockByHash","params":["0x1b9911f57c13e5160d567ea6cf5b545413f96b95e43ec6e02787043351fb2cc4", {}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result - Currently returns empty array +/ Result - Currently returns empty array {"jsonrpc":"2.0","id":1,"result":[]} ``` @@ -1566,9 +1566,9 @@ Forces 
the Go runtime to perform a garbage collection and return unused memory t ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_freeOSMemory","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1585,9 +1585,9 @@ Sets the garbage collection target percentage. A negative value disables garbage - GC percentage (integer) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setGCPercent","params":[100],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":100} ``` @@ -1600,9 +1600,9 @@ Returns detailed runtime memory statistics. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_memStats","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"Alloc":83328680,"TotalAlloc":451796592,"Sys":166452520,"Lookups":0,"Mallocs":4071615,"Frees":3772501,"HeapAlloc":83328680,"HeapSys":153452544,"HeapIdle":51568640,"HeapInuse":101883904,"HeapReleased":44720128,"HeapObjects":299114,"StackInuse":1736704,"StackSys":1736704,"MSpanInuse":1119520,"MSpanSys":1958400,"MCacheInuse":16912,"MCacheSys":31408,"BuckHashSys":1583603,"GCSys":5251008,"OtherSys":2438853,"NextGC":142217095,"LastGC":1754180652189080000,"PauseTotalNs":1266251,"NumGC":18,"NumForcedGC":1,"GCCPUFraction":0.0002128524091018015,"EnableGC":true,"DebugGC":false}} ``` @@ -1619,9 +1619,9 @@ Sets the rate of goroutine block profile data collection. 
A non-zero value enabl - Profile rate (integer) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setBlockProfileRate","params":[1],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1638,9 +1638,9 @@ Writes a goroutine blocking profile to the specified file. - File path (string) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeBlockProfile","params":["block.prof"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1657,9 +1657,9 @@ Writes an allocation profile to the specified file. - File path (string) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeMemProfile","params":["mem.prof"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1676,9 +1676,9 @@ Writes a mutex contention profile to the specified file. 
- File path (string) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_writeMutexProfile","params":["mutex.prof"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1696,9 +1696,9 @@ Turns on block profiling for the given duration and writes the profile data to d - Duration in seconds (number) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_blockProfile","params":["block.prof", 30],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1716,9 +1716,9 @@ Turns on CPU profiling for the given duration and writes the profile data to dis - Duration in seconds (number) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_cpuProfile","params":["cpu.prof", 30],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1732,9 +1732,9 @@ Returns garbage collection statistics. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_gcStats","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":{"NumGC":10,"PauseTotal":1000000,"Pause":[100000,200000],"PauseEnd":[1234567890,1234567891],"...":"..."}} ``` @@ -1752,9 +1752,9 @@ Turns on Go runtime tracing for the given duration and writes the trace data to - Duration in seconds (number) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_goTrace","params":["trace.out", 5],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1767,9 +1767,9 @@ Returns a printed representation of the stacks of all goroutines. 
```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_stacks","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"goroutine 1 [running]:\n..."} ``` @@ -1786,9 +1786,9 @@ Starts indefinite CPU profiling, writing to the given file. - File path (string) - Output file for the profile ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_startCPUProfile","params":["cpu.prof"],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1801,9 +1801,9 @@ Stops an ongoing CPU profile. ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_stopCPUProfile","params":[],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1821,9 +1821,9 @@ Turns on mutex profiling for the given duration and writes the profile data to d - Duration in seconds (number) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_mutexProfile","params":["mutex.prof", 10],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":null} ``` @@ -1841,9 +1841,9 @@ Sets the rate of mutex profiling. - Rate (number) - 0 to disable, 1 for full profiling ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_setMutexProfileFraction","params":[1],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":0} ``` @@ -1864,9 +1864,9 @@ Returns the RLP encoding of the header of the block. 
- Block number (uint64 number, not hex string) ```shell -// Request - Note: block number as integer, not hex string +/ Request - Note: block number as integer, not hex string curl -X POST --data '{"jsonrpc":"2.0","method":"debug_getHeaderRlp","params":[100],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xf90..."} ``` @@ -1887,9 +1887,9 @@ Returns the RLP encoding of the block. - Block number (uint64 number, not hex string) ```shell -// Request - Note: block number as integer, not hex string +/ Request - Note: block number as integer, not hex string curl -X POST --data '{"jsonrpc":"2.0","method":"debug_getBlockRlp","params":[100],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"0xf90..."} ``` @@ -1910,9 +1910,9 @@ Returns a formatted string of the block. - Block number (uint64 number, not hex string) ```shell -// Request - Note: block number as integer, not hex string +/ Request - Note: block number as integer, not hex string curl -X POST --data '{"jsonrpc":"2.0","method":"debug_printBlock","params":[100],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":"Block #100\nParent: 0x...\nStateRoot: 0x...\n..."} ``` @@ -1930,9 +1930,9 @@ Returns the intermediate state roots for a transaction. 
- Trace config (object, optional) ```shell -// Request +/ Request curl -X POST --data '{"jsonrpc":"2.0","method":"debug_intermediateRoots","params":["0x88df016429689c079f3b2f6ad39fa052532c56795b733da78a91ebe6a713944b", {}],"id":1}' -H "Content-Type: application/json" http://localhost:8545 -// Result +/ Result {"jsonrpc":"2.0","id":1,"result":["0x...","0x..."]} ``` diff --git a/docs/evm/v0.4.x/changelog/release-notes.mdx b/docs/evm/v0.4.x/changelog/release-notes.mdx index 4fd9e370..e8d94030 100644 --- a/docs/evm/v0.4.x/changelog/release-notes.mdx +++ b/docs/evm/v0.4.x/changelog/release-notes.mdx @@ -1,87 +1,100 @@ --- title: "Release Notes" -description: "Release history and changelog for Cosmos EVM" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" --- - This page tracks all releases and changes from the [cosmos/evm](https://github.com/cosmos/evm) repository. - For the latest development updates, see the [UNRELEASED](https://github.com/cosmos/evm/blob/main/CHANGELOG.md#unreleased) section. + This page tracks all releases and changes from the + [cosmos/evm](https://github.com/cosmos/evm) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/evm/blob/main/CHANGELOG.md#unreleased) + section. ## Features -* Add comprehensive Solidity-based end-to-end tests for precompiles ([#253](https://github.com/cosmos/evm/pull/253)) -* Add 4-node localnet infrastructure for testing multi-validator setups ([#301](https://github.com/cosmos/evm/pull/301)) -* Add system test framework for integration testing ([#304](https://github.com/cosmos/evm/pull/304)) -* Add txpool RPC namespace stubs in preparation for app-side mempool implementation ([#344](https://github.com/cosmos/evm/pull/344)) -* Enforce app creator returning application implement AppWithPendingTxStream in build time. 
([#440](https://github.com/cosmos/evm/pull/440)) - -## Improvements - -* Enforce single EVM transaction per Cosmos transaction for security ([#294](https://github.com/cosmos/evm/pull/294)) -* Update dependencies for security and performance improvements ([#299](https://github.com/cosmos/evm/pull/299)) -* Preallocate EVM access_list for better performance ([#307](https://github.com/cosmos/evm/pull/307)) -* Fix EmitApprovalEvent to use owner address instead of precompile address ([#317](https://github.com/cosmos/evm/pull/317)) -* Fix gas cap calculation and fee rounding errors in ante handler benchmarks ([#345](https://github.com/cosmos/evm/pull/345)) -* Add loop break labels for optimization ([#347](https://github.com/cosmos/evm/pull/347)) -* Use larger CI runners for resource-intensive tests ([#370](https://github.com/cosmos/evm/pull/370)) -* Apply security audit patches ([#373](https://github.com/cosmos/evm/pull/373)) -* Apply audit-related commit 388b5c0 ([#377](https://github.com/cosmos/evm/pull/377)) -* Post-audit security fixes (batch 1) ([#382](https://github.com/cosmos/evm/pull/382)) -* Post-audit security fixes (batch 2) ([#388](https://github.com/cosmos/evm/pull/388)) -* Post-audit security fixes (batch 3) ([#389](https://github.com/cosmos/evm/pull/389)) -* Post-audit security fixes (batch 5) ([#392](https://github.com/cosmos/evm/pull/392)) -* Post-audit security fixes (batch 4) ([#398](https://github.com/cosmos/evm/pull/398)) -* Prevent nil pointer by checking error in gov precompile FromResponse. 
([#442](https://github.com/cosmos/evm/pull/442)) -* (Experimental) EVM-compatible appside mempool ([#387](https://github.com/cosmos/evm/pull/387)) -* Add revert error e2e tests for contract and precompile calls ([#476](https://github.com/cosmos/evm/pull/476)) - -## Bug Fixes - -* Fix compilation error in server/start.go ([#179](https://github.com/cosmos/evm/pull/179)) -* Use PriorityMempool with signer extractor to prevent missing signers error in tx execution ([#245](https://github.com/cosmos/evm/pull/245)) -* Align revert reason format with go-ethereum (return hex-encoded result) ([#289](https://github.com/cosmos/evm/pull/289)) -* Use proper address codecs in precompiles for bech32/hex conversion ([#291](https://github.com/cosmos/evm/pull/291)) -* Add sanity checks to trace_tx RPC endpoint ([#296](https://github.com/cosmos/evm/pull/296)) -* Fix estimate gas to handle missing fields for new transaction types ([#316](https://github.com/cosmos/evm/pull/316)) -* Fix error propagation in BlockHash RPCs and address test flakiness ([#330](https://github.com/cosmos/evm/pull/330)) -* Fix non-determinism in state transitions ([#332](https://github.com/cosmos/evm/pull/332)) -* Fix p256 precompile test flakiness ([#350](https://github.com/cosmos/evm/pull/350)) -* Fix precompile initialization for local node development script ([#376](https://github.com/cosmos/evm/pull/376)) -* Fix debug_traceTransaction RPC failing with block height mismatch errors ([#384](https://github.com/cosmos/evm/pull/384)) -* Align precompiles map with available static check to Prague. ([#441](https://github.com/cosmos/evm/pull/441)) -* Cleanup unused cancel function in filter. ([#452](https://github.com/cosmos/evm/pull/452)) -* Align multi decode functions instead of string contains check in HexAddressFromBech32String. ([#454](https://github.com/cosmos/evm/pull/454)) -* Add pagination flags to `token-pairs` to improve query flexibility. 
([#468](https://github.com/cosmos/evm/pull/468)) - -## Dependencies - -* Update `cosmossdk.io/log` to `v1.6.1` to support Go `v1.25.0+`. ([#459](https://github.com/cosmos/evm/pull/459)) -* Update Cosmos SDK to `v0.53.4` and CometBFT to `v0.38.18`. ([#435](https://github.com/cosmos/evm/pull/435)) - -## API Breaking - -* Remove non–go-ethereum JSON-RPC methods to align with Geth’s surface ([#456](https://github.com/cosmos/evm/pull/456)) -* Move `ante` logic from the `evmd` Go package to the `evm` package to ([#443](https://github.com/cosmos/evm/pull/443)) -* Align function and package names for consistency. ([#422](https://github.com/cosmos/evm/pull/422)) -* Remove evidence precompile due to lack of use cases ([#305](https://github.com/cosmos/evm/pull/305)) - -## API Breaking - -* Remove non–go-ethereum JSON-RPC methods to align with Geth’s surface ([#456](https://github.com/cosmos/evm/pull/456)) -* Move `ante` logic from the `evmd` Go package to the `evm` package to ([#443](https://github.com/cosmos/evm/pull/443)) -* Align function and package names for consistency. ([#422](https://github.com/cosmos/evm/pull/422)) -* Remove evidence precompile due to lack of use cases ([#305](https://github.com/cosmos/evm/pull/305)) +- Add comprehensive Solidity-based end-to-end tests for precompiles ([#253](https://github.com/cosmos/evm/pull/253)) +- Add 4-node localnet infrastructure for testing multi-validator setups ([#301](https://github.com/cosmos/evm/pull/301)) +- Add system test framework for integration testing ([#304](https://github.com/cosmos/evm/pull/304)) +- Add txpool RPC namespace stubs in preparation for app-side mempool implementation ([#344](https://github.com/cosmos/evm/pull/344)) +- Enforce app creator returning application implement AppWithPendingTxStream in build time. 
([#440](https://github.com/cosmos/evm/pull/440)) + +## Improvements + +- Enforce single EVM transaction per Cosmos transaction for security ([#294](https://github.com/cosmos/evm/pull/294)) +- Update dependencies for security and performance improvements ([#299](https://github.com/cosmos/evm/pull/299)) +- Preallocate EVM access_list for better performance ([#307](https://github.com/cosmos/evm/pull/307)) +- Fix EmitApprovalEvent to use owner address instead of precompile address ([#317](https://github.com/cosmos/evm/pull/317)) +- Fix gas cap calculation and fee rounding errors in ante handler benchmarks ([#345](https://github.com/cosmos/evm/pull/345)) +- Add loop break labels for optimization ([#347](https://github.com/cosmos/evm/pull/347)) +- Use larger CI runners for resource-intensive tests ([#370](https://github.com/cosmos/evm/pull/370)) +- Apply security audit patches ([#373](https://github.com/cosmos/evm/pull/373)) +- Apply audit-related commit 388b5c0 ([#377](https://github.com/cosmos/evm/pull/377)) +- Post-audit security fixes (batch 1) ([#382](https://github.com/cosmos/evm/pull/382)) +- Post-audit security fixes (batch 2) ([#388](https://github.com/cosmos/evm/pull/388)) +- Post-audit security fixes (batch 3) ([#389](https://github.com/cosmos/evm/pull/389)) +- Post-audit security fixes (batch 5) ([#392](https://github.com/cosmos/evm/pull/392)) +- Post-audit security fixes (batch 4) ([#398](https://github.com/cosmos/evm/pull/398)) +- Prevent nil pointer by checking error in gov precompile FromResponse. 
([#442](https://github.com/cosmos/evm/pull/442)) +- (Experimental) EVM-compatible appside mempool ([#387](https://github.com/cosmos/evm/pull/387)) +- Add revert error e2e tests for contract and precompile calls ([#476](https://github.com/cosmos/evm/pull/476)) + +## Bug Fixes + +- Fix compilation error in server/start.go ([#179](https://github.com/cosmos/evm/pull/179)) +- Use PriorityMempool with signer extractor to prevent missing signers error in tx execution ([#245](https://github.com/cosmos/evm/pull/245)) +- Align revert reason format with go-ethereum (return hex-encoded result) ([#289](https://github.com/cosmos/evm/pull/289)) +- Use proper address codecs in precompiles for bech32/hex conversion ([#291](https://github.com/cosmos/evm/pull/291)) +- Add sanity checks to trace_tx RPC endpoint ([#296](https://github.com/cosmos/evm/pull/296)) +- Fix estimate gas to handle missing fields for new transaction types ([#316](https://github.com/cosmos/evm/pull/316)) +- Fix error propagation in BlockHash RPCs and address test flakiness ([#330](https://github.com/cosmos/evm/pull/330)) +- Fix non-determinism in state transitions ([#332](https://github.com/cosmos/evm/pull/332)) +- Fix p256 precompile test flakiness ([#350](https://github.com/cosmos/evm/pull/350)) +- Fix precompile initialization for local node development script ([#376](https://github.com/cosmos/evm/pull/376)) +- Fix debug_traceTransaction RPC failing with block height mismatch errors ([#384](https://github.com/cosmos/evm/pull/384)) +- Align precompiles map with available static check to Prague. ([#441](https://github.com/cosmos/evm/pull/441)) +- Cleanup unused cancel function in filter. ([#452](https://github.com/cosmos/evm/pull/452)) +- Align multi decode functions instead of string contains check in HexAddressFromBech32String. ([#454](https://github.com/cosmos/evm/pull/454)) +- Add pagination flags to `token-pairs` to improve query flexibility. 
([#468](https://github.com/cosmos/evm/pull/468)) + +## Dependencies + +- Update `cosmossdk.io/log` to `v1.6.1` to support Go `v1.25.0+`. ([#459](https://github.com/cosmos/evm/pull/459)) +- Update Cosmos SDK to `v0.53.4` and CometBFT to `v0.38.18`. ([#435](https://github.com/cosmos/evm/pull/435)) + +## API Breaking + +- Remove non-go-ethereum JSON-RPC methods to align with Geth's surface ([#456](https://github.com/cosmos/evm/pull/456)) +- Move `ante` logic from the `evmd` Go package to the `evm` package ([#443](https://github.com/cosmos/evm/pull/443)) +- Align function and package names for consistency. ([#422](https://github.com/cosmos/evm/pull/422)) +- Remove evidence precompile due to lack of use cases ([#305](https://github.com/cosmos/evm/pull/305))
+ --- - + See the complete changelog on GitHub - + Report bugs or request features diff --git a/docs/evm/v0.4.x/devnet_ARCHIVED/connect-to-the-network.mdx b/docs/evm/v0.4.x/devnet_ARCHIVED/connect-to-the-network.mdx index 669ceae6..7c5bd052 100644 --- a/docs/evm/v0.4.x/devnet_ARCHIVED/connect-to-the-network.mdx +++ b/docs/evm/v0.4.x/devnet_ARCHIVED/connect-to-the-network.mdx @@ -38,7 +38,7 @@ To test out your dApp via a web UI using an extension-based wallet, you'll need ```javascript Add Chain Example const chainConfig = { - chainId: '0x1087', // 4231 in hex + chainId: '0x1087', / 4231 in hex chainName: 'Cosmos-EVM-Devnet', nativeCurrency: { name: 'ATOM', diff --git a/docs/evm/v0.4.x/documentation/concepts/accounts.mdx b/docs/evm/v0.4.x/documentation/concepts/accounts.mdx index 92a1b063..d45ceeb4 100644 --- a/docs/evm/v0.4.x/documentation/concepts/accounts.mdx +++ b/docs/evm/v0.4.x/documentation/concepts/accounts.mdx @@ -42,7 +42,7 @@ The root HD path for EVM-based accounts is `m/44'/60'/0'/0`. 
It is recommended t The custom Cosmos EVM [EthAccount](https://github.com/cosmos/evm/blob/v0.4.1/types/account.go#L28-L33) satisfies the `AccountI` interface from the Cosmos SDK auth module and includes additional fields that are required for Ethereum type addresses: ``` -// EthAccountI represents the interface of an EVM compatible accounttype EthAccountI interface { authtypes.AccountI // EthAddress returns the ethereum Address representation of the AccAddress EthAddress() common.Address // CodeHash is the keccak256 hash of the contract code (if any) GetCodeHash() common.Hash // SetCodeHash sets the code hash to the account fields SetCodeHash(code common.Hash) error // Type returns the type of Ethereum Account (EOA or Contract) Type() int8}
+/ EthAccountI represents the interface of an EVM compatible account
+type EthAccountI interface {
+	authtypes.AccountI
+
+	/ EthAddress returns the ethereum Address representation of the AccAddress
+	EthAddress() common.Address
+
+	/ CodeHash is the keccak256 hash of the contract code (if any)
+	GetCodeHash() common.Hash
+
+	/ SetCodeHash sets the code hash to the account fields
+	SetCodeHash(code common.Hash) error
+
+	/ Type returns the type of Ethereum Account (EOA or Contract)
+	Type() int8
+}
 ``` For more information on Ethereum accounts head over to the [x/vm module](/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/vm). 
diff --git a/docs/evm/v0.4.x/documentation/concepts/chain-id.mdx b/docs/evm/v0.4.x/documentation/concepts/chain-id.mdx
index 8be5ea33..e4150b89 100644
--- a/docs/evm/v0.4.x/documentation/concepts/chain-id.mdx
+++ b/docs/evm/v0.4.x/documentation/concepts/chain-id.mdx
@@ -24,7 +24,7 @@ The **Cosmos Chain ID** is a string identifier used by:
 **Example**:
 ```json
-// In genesis.json
+// In genesis.json
 {
   "chain_id": "cosmosevm-1"
 }
@@ -42,7 +42,7 @@ The **EVM Chain ID** is an integer used by:
 **Example**:
 ```go
-// In app/app.go
+// In app/app.go
 const EVMChainID = 9000
 ```
@@ -53,10 +53,10 @@ Both chain IDs must be configured when setting up your chain:
 ### In Your Application Code
 ```go
-// app/app.go
+// app/app.go
 const (
-  CosmosChainID = "cosmosevm-1" // String for Cosmos/IBC
-  EVMChainID = 9000 // Integer for EVM/Ethereum
+  CosmosChainID = "cosmosevm-1" // String for Cosmos/IBC
+  EVMChainID = 9000 // Integer for EVM/Ethereum
 )
 ```
@@ -66,14 +66,14 @@ The Cosmos Chain ID is set in `genesis.json`:
 ```json
 {
   "chain_id": "cosmosevm-1",
-  // ... other genesis parameters
+  // ... other genesis parameters
 }
 ```

 The EVM Chain ID is configured in the EVM module parameters:
 ```go
-// During chain initialization
-evmtypes.DefaultChainConfig(9000) // Your EVM Chain ID
+// During chain initialization
+evmtypes.DefaultChainConfig(9000) // Your EVM Chain ID
 ```
@@ -115,36 +115,3 @@ Here are some example configurations:
 | Devnet | `"cosmosevm-dev-1"` | `9002` | Development network |
 | Local | `"cosmosevm-local-1"` | `31337` | Local development |

-## Troubleshooting
-
-### Common Issues
-
-1. **"Chain ID mismatch" errors**
-   - **Cause**: Using Cosmos Chain ID where EVM Chain ID is expected (or vice versa)
-   - **Solution**: Ensure you're using the correct type of chain ID for each context
-
-2. **MetaMask connection failures**
-   - **Cause**: Incorrect EVM Chain ID in wallet configuration
-   - **Solution**: Use the integer EVM Chain ID, not the string Cosmos Chain ID
-
-3. **IBC transfer failures**
-   - **Cause**: Using EVM Chain ID for IBC operations
-   - **Solution**: IBC always uses the Cosmos Chain ID (string format)
-
-4. **Smart contract deployment issues**
-   - **Cause**: EIP-155 replay protection using wrong chain ID
-   - **Solution**: Ensure your EVM Chain ID matches what's configured in the chain
-
-### Verification Commands
-
-To verify your chain IDs are correctly configured:
-
-```bash
-# Check Cosmos Chain ID
-curl -s http://localhost:26657/status | jq '.result.node_info.network'
-
-# Check EVM Chain ID
-curl -X POST -H "Content-Type: application/json" \
-  --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
-  http://localhost:8545 | jq '.result'
-```
\ No newline at end of file
diff --git a/docs/evm/v0.4.x/documentation/concepts/encoding.mdx b/docs/evm/v0.4.x/documentation/concepts/encoding.mdx
index 32058f2b..8b47b174 100644
--- a/docs/evm/v0.4.x/documentation/concepts/encoding.mdx
+++ b/docs/evm/v0.4.x/documentation/concepts/encoding.mdx
@@ -79,7 +79,7 @@ The `x/vm` module handles `MsgEthereumTx` encoding by converting to go-ethereum'
 ```go TxEncoder
-// TxEncoder overwrites sdk.TxEncoder to support MsgEthereumTx
+// TxEncoder overwrites sdk.TxEncoder to support MsgEthereumTx
 func (g txConfig) TxEncoder() sdk.TxEncoder {
   return func(tx sdk.Tx) ([]byte, error) {
     msg, ok := tx.(*evmtypes.MsgEthereumTx)
@@ -92,7 +92,7 @@ func (g txConfig) TxEncoder() sdk.TxEncoder {
 ```

 ```go TxDecoder
-// TxDecoder overwrites sdk.TxDecoder to support MsgEthereumTx
+// TxDecoder overwrites sdk.TxDecoder to support MsgEthereumTx
 func (g txConfig) TxDecoder() sdk.TxDecoder {
   return func(txBytes []byte) (sdk.Tx, error) {
     tx := &ethtypes.Transaction{}
diff --git a/docs/evm/v0.4.x/documentation/concepts/ibc.mdx b/docs/evm/v0.4.x/documentation/concepts/ibc.mdx
index 624b09f9..322926c4 100644
--- a/docs/evm/v0.4.x/documentation/concepts/ibc.mdx
+++ b/docs/evm/v0.4.x/documentation/concepts/ibc.mdx
@@ -15,22 +15,22 @@ The `Packet` is the primary container for cross-chain communication. Each packet
 ```typescript packet.ts
-// The main data container sent between chains
+// The main data container sent between chains
 interface Packet {
-  sourceClientId: bytes; // Client ID for destination chain (stored on source)
-  destClientId: bytes; // Client ID for source chain (stored on destination)
-  sequence: uint64; // Monotonically increasing nonce for ordering
-  timeoutTimestamp: uint64; // UNIX timestamp (seconds) when the packet expires
-  data: Payload[]; // List of application payloads
+  sourceClientId: bytes; // Client ID for destination chain (stored on source)
+  destClientId: bytes; // Client ID for source chain (stored on destination)
+  sequence: uint64; // Monotonically increasing nonce for ordering
+  timeoutTimestamp: uint64; // UNIX timestamp (seconds) when the packet expires
+  data: Payload[]; // List of application payloads
 }

-// Application-specific data and routing information
+// Application-specific data and routing information
 interface Payload {
-  sourcePort: bytes; // Identifies sending application module
-  destPort: bytes; // Identifies receiving application module
-  version: string; // App-specific version for interpretation
-  encoding: string; // MIME-type for decoding value
-  value: bytes; // Opaque application data
+  sourcePort: bytes; // Identifies sending application module
+  destPort: bytes; // Identifies receiving application module
+  version: string; // App-specific version for interpretation
+  encoding: string; // MIME-type for decoding value
+  value: bytes; // Opaque application data
 }
 ```
diff --git a/docs/evm/v0.4.x/documentation/concepts/mempool.mdx b/docs/evm/v0.4.x/documentation/concepts/mempool.mdx
index bff1b570..7d621453 100644
--- a/docs/evm/v0.4.x/documentation/concepts/mempool.mdx
+++ b/docs/evm/v0.4.x/documentation/concepts/mempool.mdx
@@ -22,41 +22,41 @@ The EVM mempool enables seamless interaction with Ethereum tooling and deploymen
 DeFi protocols like Uniswap deploy multiple interdependent contracts in rapid succession. With the EVM mempool, these deployment scripts work without modification:
 ```javascript
-// Deploy Uniswap V3 contracts - sends many transactions at once
+// Deploy Uniswap V3 contracts - sends many transactions at once
 const factory = await UniswapV3Factory.deploy();
 const router = await SwapRouter.deploy(factory.address, WETH);
 const quoter = await Quoter.deploy(factory.address, WETH);
 const multicall = await UniswapInterfaceMulticall.deploy();

-// All transactions queue properly even if they arrive out of order
-// The mempool handles nonce gaps automatically
+// All transactions queue properly even if they arrive out of order
+// The mempool handles nonce gaps automatically
 ```

 ### Batch Transaction Submission
 ```javascript
-// Ethereum tooling sends multiple transactions
-await wallet.sendTransaction({nonce: 100, ...}); // OK: Immediate execution
-await wallet.sendTransaction({nonce: 101, ...}); // OK: Immediate execution
-await wallet.sendTransaction({nonce: 103, ...}); // OK: Queued locally (gap)
-await wallet.sendTransaction({nonce: 102, ...}); // OK: Fills gap, both execute
+// Ethereum tooling sends multiple transactions
+await wallet.sendTransaction({nonce: 100, ...}); // OK: Immediate execution
+await wallet.sendTransaction({nonce: 101, ...}); // OK: Immediate execution
+await wallet.sendTransaction({nonce: 103, ...}); // OK: Queued locally (gap)
+await wallet.sendTransaction({nonce: 102, ...}); // OK: Fills gap, both execute
 ```

 ### Transaction Replacement
 ```javascript
-// Speed up transaction with same nonce, higher fee
+// Speed up transaction with same nonce, higher fee
 const tx1 = await wallet.sendTransaction({
   nonce: 100,
   gasPrice: parseUnits("20", "gwei")
 });

-// Replace with higher fee
+// Replace with higher fee
 const tx2 = await wallet.sendTransaction({
-  nonce: 100, // Same nonce
-  gasPrice: parseUnits("30", "gwei") // Higher fee
+  nonce: 100, // Same nonce
+  gasPrice: parseUnits("30", "gwei") // Higher fee
 });

-// tx1 is replaced by tx2
+// tx1 is replaced by tx2
 ```
@@ -75,17 +75,17 @@ The CheckTx handler processes transactions with special handling for nonce gaps
 **Nonce Gap Detection** - Transactions with future nonces are intercepted and queued locally:
 ```go
-// From mempool/check_tx.go
+// From mempool/check_tx.go
 if err != nil {
-  // detect if there is a nonce gap error (only returned for EVM transactions)
+  // detect if there is a nonce gap error (only returned for EVM transactions)
   if errors.Is(err, ErrNonceGap) {
-    // send it to the mempool for further triage
+    // send it to the mempool for further triage
     err := mempool.InsertInvalidNonce(request.Tx)
     if err != nil {
       return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, false), nil
     }
   }
-  // anything else, return regular error
+  // anything else, return regular error
   return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, false), nil
 }
 ```
diff --git a/docs/evm/v0.4.x/documentation/concepts/overview.mdx b/docs/evm/v0.4.x/documentation/concepts/overview.mdx
index d31c168a..56a8739a 100644
--- a/docs/evm/v0.4.x/documentation/concepts/overview.mdx
+++ b/docs/evm/v0.4.x/documentation/concepts/overview.mdx
@@ -70,11 +70,11 @@ Instead of Ethereum's proof-of-stake:
 Contracts compile to EVM bytecode - a low-level instruction set:
 ```
-PUSH1 0x60 // Push 1 byte onto stack
-PUSH1 0x40 // Push another byte
-MSTORE // Store in memory
-CALLVALUE // Get msg.value
-DUP1 // Duplicate top stack item
+PUSH1 0x60 // Push 1 byte onto stack
+PUSH1 0x40 // Push another byte
+MSTORE // Store in memory
+CALLVALUE // Get msg.value
+DUP1 // Duplicate top stack item
 ```

 Each opcode has a fixed gas cost reflecting computational complexity.
diff --git a/docs/evm/v0.4.x/documentation/concepts/pending-state.mdx b/docs/evm/v0.4.x/documentation/concepts/pending-state.mdx
index a16687df..26f79433 100644
--- a/docs/evm/v0.4.x/documentation/concepts/pending-state.mdx
+++ b/docs/evm/v0.4.x/documentation/concepts/pending-state.mdx
@@ -55,8 +55,8 @@ The mempool provides dedicated RPC methods to inspect pending and queued transac
 Returns counts of pending and queued transactions:
 ```json
 {
-  "pending": "0x10", // 16 pending transactions
-  "queued": "0x5" // 5 queued transactions
+  "pending": "0x10", // 16 pending transactions
+  "queued": "0x5" // 5 queued transactions
 }
 ```
@@ -107,12 +107,12 @@ Additionally, the `txpool_*` namespace provides specialized methods for mempool
 ### Monitoring Transaction Status
 ```javascript
-// Check if transaction is pending
+// Check if transaction is pending
 const tx = await provider.getTransaction(txHash);
 if (tx && !tx.blockNumber) {
   console.log("Transaction is pending");

-  // Check pool status
+  // Check pool status
   const poolStatus = await provider.send("txpool_status", []);
   console.log(`Pool has ${poolStatus.pending} pending, ${poolStatus.queued} queued`);
 }
@@ -121,25 +121,25 @@ if (tx && !tx.blockNumber) {
 ```

 ### Handling Nonce Gaps
 ```javascript
-// Send transactions with nonce gaps (they'll be queued)
-await wallet.sendTransaction({nonce: 100, ...}); // Executes immediately
-await wallet.sendTransaction({nonce: 102, ...}); // Queued (gap at 101)
-await wallet.sendTransaction({nonce: 101, ...}); // Fills gap, both execute
+// Send transactions with nonce gaps (they'll be queued)
+await wallet.sendTransaction({nonce: 100, ...}); // Executes immediately
+await wallet.sendTransaction({nonce: 102, ...}); // Queued (gap at 101)
+await wallet.sendTransaction({nonce: 101, ...}); // Fills gap, both execute
 ```

 ### Transaction Replacement
 ```javascript
-// Speed up a pending transaction
+// Speed up a pending transaction
 const originalTx = await wallet.sendTransaction({
   nonce: 100,
   gasPrice: parseUnits("20", "gwei")
 });

-// Replace with higher fee
+// Replace with higher fee
 const fasterTx = await wallet.sendTransaction({
-  nonce: 100, // Same nonce
-  gasPrice: parseUnits("30", "gwei") // Higher fee
+  nonce: 100, // Same nonce
+  gasPrice: parseUnits("30", "gwei") // Higher fee
 });
 ```
diff --git a/docs/evm/v0.4.x/documentation/concepts/precision-handling.mdx b/docs/evm/v0.4.x/documentation/concepts/precision-handling.mdx
index 85efd1f4..6bbc5337 100644
--- a/docs/evm/v0.4.x/documentation/concepts/precision-handling.mdx
+++ b/docs/evm/v0.4.x/documentation/concepts/precision-handling.mdx
@@ -230,21 +230,21 @@ This single multiplication and addition reconstructs the full 18-decimal balance
 ### Simple Scaling
 ```
-// Naive approach - loses precision
+// Naive approach - loses precision
 evmAmount = cosmosAmount * 10^12
 ```
 **Problems**: Loses sub-test amounts, rounding errors accumulate

 ### Fixed-Point Arithmetic
 ```
-// Complex fixed-point math
+// Complex fixed-point math
 amount = FixedPoint{mantissa: 123456, exponent: -18}
 ```
 **Problems**: Complex implementation, performance overhead, compatibility issues

 ### Separate Balances
 ```
-// Maintain two separate balance systems
+// Maintain two separate balance systems
 cosmosBalance: 1000000 test
 evmBalance: 1000000000000000000 atest
 ```
@@ -252,7 +252,7 @@ evmBalance: 1000000000000000000 atest
 ### PreciseBank Solution
 ```
-// Elegant decomposition
+// Elegant decomposition
 totalBalance = integerPart * 10^12 + fractionalPart
 ```
 **Advantages**: Simple, efficient, precise, compatible
diff --git a/docs/evm/v0.4.x/documentation/concepts/replay-protection.mdx b/docs/evm/v0.4.x/documentation/concepts/replay-protection.mdx
index 78b2938c..720372d3 100644
--- a/docs/evm/v0.4.x/documentation/concepts/replay-protection.mdx
+++ b/docs/evm/v0.4.x/documentation/concepts/replay-protection.mdx
@@ -56,14 +56,14 @@ If absolutely necessary, unprotected transactions can be allowed through a two-s
 The EVM module parameter must be changed via governance proposal or genesis configuration:
 ```go
-// In x/vm/types/params.go
-DefaultAllowUnprotectedTxs = false // Default value
+// In x/vm/types/params.go
+DefaultAllowUnprotectedTxs = false // Default value

-// To enable in genesis.json
+// To enable in genesis.json
 {
   "vm": {
     "params": {
-      "allow_unprotected_txs": true // Must be set to true
+      "allow_unprotected_txs": true // Must be set to true
     }
   }
 }
@@ -94,14 +94,14 @@ Both conditions must be met for unprotected transactions to be accepted:
 All modern Ethereum transaction types include chain ID:
 ```javascript
-// EIP-1559 Dynamic Fee Transaction
+// EIP-1559 Dynamic Fee Transaction
 const tx = {
   type: 2,
-  chainId: 9000, // EVM Chain ID included
+  chainId: 9000, // EVM Chain ID included
   nonce: 0,
   maxFeePerGas: ...,
   maxPriorityFeePerGas: ...,
-  // ... other fields
+  // ... other fields
 }
 ```
@@ -110,7 +110,7 @@ const tx = {
 Pre-EIP-155 transactions without chain ID:
 ```javascript
-// Legacy transaction without chain ID (rejected by default)
+// Legacy transaction without chain ID (rejected by default)
 const tx = {
   nonce: 0,
   gasPrice: ...,
@@ -118,7 +118,7 @@ const tx = {
   to: ...,
   value: ...,
   data: ...,
-  // No chainId field
+  // No chainId field
 }
 ```
diff --git a/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/erc20.mdx b/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/erc20.mdx
index b038d6a4..f6083c5e 100644
--- a/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/erc20.mdx
+++ b/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/erc20.mdx
@@ -52,10 +52,10 @@ The module maintains token pair mappings and allowances ([source](https://github
 ```go
 type TokenPair struct {
-  Erc20Address string // Hex address of ERC20 contract
-  Denom string // Cosmos coin denomination
-  Enabled bool // Conversion enable status
-  ContractOwner Owner // OWNER_MODULE or OWNER_EXTERNAL
+  Erc20Address string // Hex address of ERC20 contract
+  Denom string // Cosmos coin denomination
+  Enabled bool // Conversion enable status
+  ContractOwner Owner // OWNER_MODULE or OWNER_EXTERNAL
 }
 ```
@@ -103,9 +103,9 @@ Convert Cosmos coins to ERC20 tokens:
 ```protobuf
 message MsgConvertCoin {
-  Coin coin = 1; // Amount and denom to convert
-  string receiver = 2; // Hex address to receive ERC20
-  string sender = 3; // Bech32 address of sender
+  Coin coin = 1; // Amount and denom to convert
+  string receiver = 2; // Hex address to receive ERC20
+  string sender = 3; // Bech32 address of sender
 }
 ```
@@ -120,10 +120,10 @@ Convert ERC20 tokens to Cosmos coins:
 ```protobuf
 message MsgConvertERC20 {
-  string contract_address = 1; // ERC20 contract
-  string amount = 2; // Amount to convert
-  string receiver = 3; // Bech32 address
-  string sender = 4; // Hex address of sender
+  string contract_address = 1; // ERC20 contract
+  string amount = 2; // Amount to convert
+  string receiver = 3; // Bech32 address
+  string sender = 4; // Hex address of sender
 }
 ```
@@ -138,8 +138,8 @@ Enable/disable conversions for a token pair (governance only):
 ```protobuf
 message MsgToggleConversion {
-  string authority = 1; // Must be governance account
-  string token = 2; // Address or denom
+  string authority = 1; // Must be governance account
+  string token = 2; // Address or denom
 }
 ```
@@ -149,8 +149,8 @@ Update module parameters (governance only):
 ```protobuf
 message MsgUpdateParams {
-  string authority = 1; // Must be governance account
-  Params params = 2; // New parameters
+  string authority = 1; // Must be governance account
+  Params params = 2; // New parameters
 }
 ```
@@ -196,7 +196,7 @@ sequenceDiagram
 Automatically created for Cosmos coins at deterministic addresses:
 ```go
-// Address derivation
+// Address derivation
 address = keccak256("erc20|")[:20]
 ```
@@ -230,20 +230,20 @@ interface IWERC20 is IERC20 {
 Standard IBC transfer integration ([source](https://github.com/cosmos/evm/blob/v0.4.1/x/erc20/ibc_middleware.go)):
 ```go
-// Automatic registration and conversion on receive
+// Automatic registration and conversion on receive
 OnRecvPacket(packet) {
-  // Auto-register new IBC tokens (with "ibc/" prefix)
+  // Auto-register new IBC tokens (with "ibc/" prefix)
   if !tokenPairExists && hasPrefix(denom, "ibc/") {
     RegisterERC20Extension(denom)
   }

-  // Auto-convert to ERC20 for Ethereum addresses
+  // Auto-convert to ERC20 for Ethereum addresses
   if isEthereumAddress(receiver) {
     convertToERC20(voucher, receiver)
   }
 }

-// Automatic conversion on acknowledgment
+// Automatic conversion on acknowledgment
 OnAcknowledgementPacket(packet, ack) {
   if wasConverted {
     convertBackToCosmos(refund)
@@ -293,13 +293,13 @@ Enhanced IBC v2 support ([source](https://github.com/cosmos/evm/blob/v0.4.1/x/er
 ```protobuf
 service Query {
-  // Get module parameters
+  // Get module parameters
   rpc Params(QueryParamsRequest) returns (QueryParamsResponse);

-  // Get all token pairs
+  // Get all token pairs
   rpc TokenPairs(QueryTokenPairsRequest) returns (QueryTokenPairsResponse);

-  // Get specific token pair
+  // Get specific token pair
   rpc TokenPair(QueryTokenPairRequest) returns (QueryTokenPairResponse);
 }
 ```
@@ -345,14 +345,14 @@ evmd tx gov submit-proposal register-erc20-proposal.json --from mykey
 ### DeFi Protocol Integration
 ```javascript
-// Using precompiled ERC20 interface for IBC tokens
+// Using precompiled ERC20 interface for IBC tokens
 const ibcToken = new ethers.Contract(
-  "0x80b5a32e4f032b2a058b4f29ec95eefeeb87adcd", // Precompile address
+  "0x80b5a32e4f032b2a058b4f29ec95eefeeb87adcd", // Precompile address
   ERC20_ABI,
   signer
 );

-// Standard ERC20 operations work seamlessly
+// Standard ERC20 operations work seamlessly
 await ibcToken.approve(dexRouter, amount);
 await dexRouter.swapExactTokensForTokens(...);
 ```
@@ -360,19 +360,19 @@ await dexRouter.swapExactTokensForTokens(...);
 ### Automatic IBC Conversion
 ```javascript
-// IBC transfer automatically converts to ERC20
+// IBC transfer automatically converts to ERC20
 const packet = {
   sender: "cosmos1...",
-  receiver: "0x...", // EVM address triggers conversion
+  receiver: "0x...", // EVM address triggers conversion
   token: { denom: "test", amount: "1000000" }
 };

-// Token arrives as ERC20 in receiver's EVM account
+// Token arrives as ERC20 in receiver's EVM account
 ```

 ### Manual Conversion Flow
 ```javascript
-// Convert native to ERC20
+// Convert native to ERC20
 await client.signAndBroadcast(address, [
   {
     typeUrl: "/cosmos.evm.erc20.v1.MsgConvertCoin",
@@ -408,7 +408,7 @@ await client.signAndBroadcast(address, [
 1. **Contract Validation**
    ```solidity
-   // Verify standard interface
+   // Verify standard interface
    IERC20(token).totalSupply();
    IERC20(token).balanceOf(address(this));
    ```
@@ -423,31 +423,6 @@ await client.signAndBroadcast(address, [
 - Toggle specific token pairs
 - Have incident response plan

-## Troubleshooting
-
-### Common Issues
-
-| Issue | Cause | Solution |
-|-------|-------|----------|
-| "token pair not found" | Token not registered | Register via governance |
-| "token pair disabled" | Conversions toggled off | Enable via governance |
-| "insufficient balance" | Low balance for conversion | Check balance in correct format |
-| "invalid recipient" | Wrong address format | Use hex for EVM, bech32 for Cosmos |
-| "module disabled" | enable_erc20 = false | Enable via governance |
-
-### Debug Commands
-
-```bash
-# Check if token is registered
-evmd query erc20 token-pair test
-
-# Check module parameters
-evmd query erc20 params
-
-# Check account balance (both formats)
-evmd query bank balances cosmos1...
-evmd query vm balance 0x...
-```

 ## References
diff --git a/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/feemarket.mdx b/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/feemarket.mdx
index 0ae140e3..e3a91f0c 100644
--- a/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/feemarket.mdx
+++ b/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/feemarket.mdx
@@ -86,43 +86,43 @@ Adjust parameters to achieve specific network behaviors ([params validation](htt
 ```json Ethereum-Compatible
 {
   "no_base_fee": false,
-  "base_fee_change_denominator": 8, // ±12.5% max per block
-  "elasticity_multiplier": 2, // 50% target utilization
-  "enable_height": 0, // Active from genesis
-  "base_fee": "1000000000000000000000", // Starting base fee
-  "min_gas_price": "1000000000000000000", // Price floor
-  "min_gas_multiplier": "500000000000000000" // 0.5 (anti-manipulation)
+  "base_fee_change_denominator": 8, // ±12.5% max per block
+  "elasticity_multiplier": 2, // 50% target utilization
+  "enable_height": 0, // Active from genesis
+  "base_fee": "1000000000000000000000", // Starting base fee
+  "min_gas_price": "1000000000000000000", // Price floor
+  "min_gas_multiplier": "500000000000000000" // 0.5 (anti-manipulation)
 }

-// Behavior: Standard Ethereum, balanced for general use
-// 100k gas tx: ~0.1 tokens at base, up to 10x during congestion
+// Behavior: Standard Ethereum, balanced for general use
+// 100k gas tx: ~0.1 tokens at base, up to 10x during congestion
 ```

 ```json High-Volume-Stable
 {
   "no_base_fee": false,
-  "base_fee_change_denominator": 16, // ±6.25% max per block
-  "elasticity_multiplier": 2, // 50% target utilization
+  "base_fee_change_denominator": 16, // ±6.25% max per block
+  "elasticity_multiplier": 2, // 50% target utilization
   "enable_height": 0,
-  "base_fee": "100000000000000000000", // Lower starting fee
-  "min_gas_price": "10000000000000000", // Lower floor
+  "base_fee": "100000000000000000000", // Lower starting fee
+  "min_gas_price": "10000000000000000", // Lower floor
   "min_gas_multiplier": "500000000000000000"
 }

-// Behavior: Slow, predictable changes for DeFi/DEX chains
-// 100k gas tx: ~0.01 tokens, gradual increases
+// Behavior: Slow, predictable changes for DeFi/DEX chains
+// 100k gas tx: ~0.01 tokens, gradual increases
 ```

 ```json Aggressive-Anti-Spam
 {
   "no_base_fee": false,
-  "base_fee_change_denominator": 4, // ±25% max per block
-  "elasticity_multiplier": 4, // 25% target utilization
+  "base_fee_change_denominator": 4, // ±25% max per block
+  "elasticity_multiplier": 4, // 25% target utilization
   "enable_height": 0,
-  "base_fee": "10000000000000000000000", // High starting fee
-  "min_gas_price": "10000000000000000000000", // High floor
-  "min_gas_multiplier": "1000000000000000000" // 1.0 (strict)
+  "base_fee": "10000000000000000000000", // High starting fee
+  "min_gas_price": "10000000000000000000000", // High floor
+  "min_gas_multiplier": "1000000000000000000" // 1.0 (strict)
 }

-// Behavior: Rapid adjustments, high minimum for premium chains
-// 100k gas tx: ~1 token minimum, aggressive scaling
+// Behavior: Rapid adjustments, high minimum for premium chains
+// 100k gas tx: ~1 token minimum, aggressive scaling
 ```
@@ -161,8 +161,8 @@ The module supports different token decimal configurations:
 Parameters can be updated through governance proposals ([msg server](https://github.com/cosmos/evm/blob/v0.4.1/x/feemarket/keeper/msg_server.go)):
 ```json
-// MsgUpdateParams structure
-// Source: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/tx.proto
+// MsgUpdateParams structure
+// Source: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/tx.proto
 {
   "@type": "/cosmos.evm.feemarket.v1.MsgUpdateParams",
   "authority": "cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn",
@@ -226,15 +226,15 @@ Examples:
 ```javascript Wallet-Integration
-// Get current base fee from latest block
+// Get current base fee from latest block
 const block = await provider.getBlock('latest');
 const baseFee = block.baseFeePerGas;

-// Add buffer for inclusion certainty (20% buffer)
+// Add buffer for inclusion certainty (20% buffer)
 const maxFeePerGas = baseFee * 120n / 100n;
 const maxPriorityFeePerGas = ethers.parseUnits("2", "gwei");

-// Send EIP-1559 transaction
+// Send EIP-1559 transaction
 const tx = await wallet.sendTransaction({
   type: 2,
   maxFeePerGas,
@@ -246,24 +246,24 @@ const tx = await wallet.sendTransaction({
 ```

 ```javascript Fee-Monitoring
-// Get current gas price (includes base fee)
+// Get current gas price (includes base fee)
 const gasPrice = await provider.send('eth_gasPrice', []);
 console.log('Current gas price:', ethers.formatUnits(gasPrice, 'gwei'), 'gwei');

-// Get fee history
+// Get fee history
 const feeHistory = await provider.send('eth_feeHistory', [
-  '0x10', // last 16 blocks
+  '0x10', // last 16 blocks
   'latest',
   [25, 50, 75]
 ]);

-// Process base fees from history
+// Process base fees from history
 const baseFees = feeHistory.baseFeePerGas.map(fee =>
   ethers.formatUnits(fee, 'gwei')
 );
 console.log('Base fee history:', baseFees);

-// Check if fees are elevated
+// Check if fees are elevated
 const currentBaseFee = BigInt(feeHistory.baseFeePerGas[feeHistory.baseFeePerGas.length - 1]);
 const threshold = ethers.parseUnits('100', 'gwei');
 const isHighFee = currentBaseFee > threshold;
@@ -281,11 +281,11 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend]
 Returns the current base fee as hex string. In EIP-1559 mode, returns the base fee; in legacy mode, returns the median gas price.
 ```json
-  // Request
+  // Request
   {"jsonrpc":"2.0","method":"eth_gasPrice","params":[],"id":1}

-  // Response
-  {"jsonrpc":"2.0","id":1,"result":"0x3b9aca00"} // 1 gwei in hex
+  // Response
+  {"jsonrpc":"2.0","id":1,"result":"0x3b9aca00"} // 1 gwei in hex
 ```
@@ -293,11 +293,11 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend]
 Returns suggested priority fee (tip) per gas. In cosmos/evm with the experimental mempool, transactions are prioritized by effective gas tip (fees), with ties broken by arrival time.
 ```json
-  // Request
+  // Request
   {"jsonrpc":"2.0","method":"eth_maxPriorityFeePerGas","params":[],"id":1}

-  // Response
-  {"jsonrpc":"2.0","id":1,"result":"0x77359400"} // Suggested tip in wei
+  // Response
+  {"jsonrpc":"2.0","id":1,"result":"0x77359400"} // Suggested tip in wei
 ```
@@ -319,7 +319,7 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend]
 ```json
-  // Request
+  // Request
   {
     "jsonrpc":"2.0",
     "method":"eth_feeHistory",
@@ -327,7 +327,7 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend]
     "id":1
   }

-  // Response
+  // Response
   {
     "jsonrpc":"2.0",
     "id":1,
@@ -363,8 +363,8 @@ The fee market integrates with standard Ethereum JSON-RPC methods ([RPC backend]
   "result":{
     "number": "0x1234",
     "hash": "0x...",
-    "baseFeePerGas": "0x3b9aca00", // Base fee for this block
-    // ... other block fields
+    "baseFeePerGas": "0x3b9aca00", // Base fee for this block
+    // ... other block fields
   }
 }
 ```
@@ -379,7 +379,7 @@ The fee market supports both legacy and EIP-1559 transaction types ([types](http
 ```json Legacy-Transaction
 {
   "type": "0x0",
-  "gasPrice": "0x3b9aca00", // Must be >= current base fee
+  "gasPrice": "0x3b9aca00", // Must be >= current base fee
   "gas": "0x5208",
   "to": "0x...",
   "value": "0x...",
@@ -390,8 +390,8 @@ The fee market supports both legacy and EIP-1559 transaction types ([types](http
 ```json EIP-1559-Transaction
 {
   "type": "0x2",
-  "maxFeePerGas": "0x3b9aca00", // Maximum total fee willing to pay
-  "maxPriorityFeePerGas": "0x77359400", // Tip that affects transaction priority in mempool
+  "maxFeePerGas": "0x3b9aca00", // Maximum total fee willing to pay
+  "maxPriorityFeePerGas": "0x77359400", // Tip that affects transaction priority in mempool
   "gas": "0x5208",
   "to": "0x...",
   "value": "0x...",
@@ -430,11 +430,11 @@ evmd query feemarket block-gas
 ```proto Query-Parameters
-// Request all module parameters
-// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
+// Request all module parameters
+// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
 grpcurl -plaintext localhost:9090 feemarket.Query/Params

-// Response
+// Response
 {
   "params": {
     "no_base_fee": false,
@@ -449,22 +449,22 @@ grpcurl -plaintext localhost:9090 feemarket.Query/Params
 ```

 ```proto Query-Base-Fee
-// Request current base fee
-// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
+// Request current base fee
+// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
 grpcurl -plaintext localhost:9090 feemarket.Query/BaseFee

-// Response
+// Response
 {
   "base_fee": "1000000000000000000000"
 }
 ```

 ```proto Query-Block-Gas
-// Request block gas wanted
-// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
+// Request block gas wanted
+// Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
 grpcurl -plaintext localhost:9090 feemarket.Query/BlockGas

-// Response
+// Response
 {
   "gas": "5000000"
 }
@@ -481,7 +481,7 @@ grpcurl -plaintext localhost:9090 feemarket.Query/BlockGas
 # Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
 GET /cosmos/feemarket/v1/params

-// Response
+// Response
 {
   "params": {
     "no_base_fee": false,
@@ -500,7 +500,7 @@ GET /cosmos/feemarket/v1/params
 # Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
 GET /cosmos/feemarket/v1/base_fee

-// Response
+// Response
 {
   "base_fee": "1000000000000000000000"
 }
@@ -511,7 +511,7 @@ GET /cosmos/feemarket/v1/base_fee
 # Proto: github.com/cosmos/evm/blob/v0.4.1/proto/cosmos/evm/feemarket/v1/query.proto
 GET /cosmos/feemarket/v1/block_gas

-// Response
+// Response
 {
   "gas": "5000000"
 }
@@ -550,15 +550,15 @@ At the beginning of each block, the module calculates and sets the new base fee
 func (k Keeper) BeginBlock(ctx sdk.Context) error {
   baseFee := k.CalculateBaseFee(ctx)

-  // Skip if base fee is nil (not enabled)
+  // Skip if base fee is nil (not enabled)
   if baseFee.IsNil() {
     return nil
   }

-  // Update the base fee in state
+  // Update the base fee in state
   k.SetBaseFee(ctx, baseFee)

-  // Emit event with new base fee
+  // Emit event with new base fee
   ctx.EventManager().EmitEvent(
     sdk.NewEvent(
       "fee_market",
@@ -578,13 +578,13 @@ func (k Keeper) EndBlock(ctx sdk.Context) error {
   gasWanted := k.GetTransientGasWanted(ctx)
   gasUsed := ctx.BlockGasMeter().GasConsumedToLimit()

-  // Apply MinGasMultiplier to prevent manipulation
-  // gasWanted = max(gasWanted * MinGasMultiplier, gasUsed)
+  // Apply MinGasMultiplier to prevent manipulation
+  // gasWanted = max(gasWanted * MinGasMultiplier, gasUsed)
   minGasMultiplier := k.GetParams(ctx).MinGasMultiplier
   limitedGasWanted := NewDec(gasWanted).Mul(minGasMultiplier)
   updatedGasWanted := MaxDec(limitedGasWanted, NewDec(gasUsed)).TruncateInt()

-  // Store for next block's base fee calculation
+  // Store for next block's base fee calculation
   k.SetBlockGasWanted(ctx, updatedGasWanted)

   ctx.EventManager().EmitEvent(
@@ -606,21 +606,21 @@ The calculation follows the EIP-1559 specification with Cosmos SDK adaptations (
 ```go
 func CalcGasBaseFee(gasUsed, gasTarget, baseFeeChangeDenom uint64, baseFee, minUnitGas, minGasPrice LegacyDec) LegacyDec {
-  // No change if at target
+  // No change if at target
   if gasUsed == gasTarget {
     return baseFee
   }

-  // Calculate adjustment magnitude
+  // Calculate adjustment magnitude
   delta := abs(gasUsed - gasTarget)
   adjustment := baseFee.Mul(delta).Quo(gasTarget).Quo(baseFeeChangeDenom)

   if gasUsed > gasTarget {
-    // Increase base fee (minimum 1 unit)
+    // Increase base fee (minimum 1 unit)
     baseFeeDelta := MaxDec(adjustment, minUnitGas)
     return baseFee.Add(baseFeeDelta)
   } else {
-    // Decrease base fee (not below minGasPrice)
+    // Decrease base fee (not below minGasPrice)
     return MaxDec(baseFee.Sub(adjustment), minGasPrice)
   }
 }
@@ -678,7 +678,7 @@ The primary ante handler that validates and processes EVM transaction fees ([mon
 ```go
-// 1. Check mempool minimum fee (for pre-London transactions)
+// 1. Check mempool minimum fee (for pre-London transactions)
 if ctx.IsCheckTx() && !simulate && !isLondon {
   requiredFee := mempoolMinGasPrice.Mul(gasLimit)
   if fee.LT(requiredFee) {
@@ -686,13 +686,13 @@ if ctx.IsCheckTx() && !simulate && !isLondon {
   }
 }

-// 2. Calculate effective fee for EIP-1559 transactions
+// 2. Calculate effective fee for EIP-1559 transactions
 if ethTx.Type() >= DynamicFeeTxType && baseFee != nil {
   feeAmt = ethMsg.GetEffectiveFee(baseFee)
   fee = NewDecFromBigInt(feeAmt)
 }

-// 3. Check global minimum gas price
+// 3. Check global minimum gas price
 if globalMinGasPrice.IsPositive() {
   requiredFee := globalMinGasPrice.Mul(gasLimit)
   if fee.LT(requiredFee) {
@@ -700,7 +700,7 @@ if globalMinGasPrice.IsPositive() {
   }
 }

-// 4. Track gas wanted for base fee calculation
+// 4. Track gas wanted for base fee calculation
 if isLondon && baseFeeEnabled {
   feeMarketKeeper.AddTransientGasWanted(ctx, gasWanted)
 }
@@ -716,7 +716,7 @@ Validates transaction fees against the local mempool minimum ([source](https://g
 ```go
 func CheckMempoolFee(fee, mempoolMinGasPrice, gasLimit LegacyDec, isLondon bool) error {
-  // Skip for London-enabled chains (EIP-1559 handles this)
+  // Skip for London-enabled chains (EIP-1559 handles this)
   if isLondon {
     return nil
   }
@@ -770,12 +770,12 @@ func CheckGasWanted(ctx Context, feeMarketKeeper FeeMarketKeeper, tx Tx, isLondo
   gasWanted := tx.GetGas()
   blockGasLimit := BlockGasLimit(ctx)

-  // Reject if tx exceeds block limit
+  // Reject if tx exceeds block limit
   if gasWanted > blockGasLimit {
     return ErrOutOfGas
   }

-  // Add to cumulative gas wanted for base fee adjustment
+  // Add to cumulative gas wanted for base fee adjustment
   feeMarketKeeper.AddTransientGasWanted(ctx, gasWanted)
   return nil
 }
diff --git a/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/precisebank.mdx b/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/precisebank.mdx
index e665e7ee..7aebe318 100644
--- a/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/precisebank.mdx
+++ b/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/precisebank.mdx
@@ -49,16 +49,16 @@ The module provides a bank-compatible keeper ([source](https://github.com/cosmos
 ```go
 type Keeper interface {
-  // Query methods
+  // Query methods
   GetBalance(ctx, addr, denom) sdk.Coin
   SpendableCoins(ctx, addr) sdk.Coins

-  // Transfer methods
+  // Transfer methods
   SendCoins(ctx, from, to, amt) error
   SendCoinsFromModuleToAccount(ctx, module, to, amt) error
   SendCoinsFromAccountToModule(ctx, from, module, amt) error

-  // Mint/Burn methods
+  // Mint/Burn methods
   MintCoins(ctx, module, amt) error
   BurnCoins(ctx, module, amt) error
 }
@@ -78,9 +78,9 @@ Automatic handling of "atest" denomination:
 Handles both integer and fractional components:
 ```go
-// SendCoins automatically handles precision
+// SendCoins automatically handles precision
 keeper.SendCoins(ctx, from, to, sdk.NewCoins(
-  sdk.NewCoin("atest", sdk.NewInt(1500000000000)), // 0.0015 test
+  sdk.NewCoin("atest", sdk.NewInt(1500000000000)), // 0.0015 test
 ))
 ```
@@ -95,9 +95,9 @@ keeper.SendCoins(ctx, from, to, sdk.NewCoins(
 Creates new tokens with proper backing:
 ```go
-// MintCoins with extended precision
+// MintCoins with extended precision
 keeper.MintCoins(ctx, moduleName, sdk.NewCoins(
-  sdk.NewCoin("atest", sdk.NewInt(1000000000000000000)), // 1 test
+  sdk.NewCoin("atest", sdk.NewInt(1000000000000000000)), // 1 test
 ))
 ```
@@ -111,9 +111,9 @@ keeper.MintCoins(ctx, moduleName, sdk.NewCoins(
 Removes tokens from circulation:
 ```go
-// BurnCoins with extended precision
+// BurnCoins with extended precision
 keeper.BurnCoins(ctx, moduleName, sdk.NewCoins(
-  sdk.NewCoin("atest", sdk.NewInt(500000000000)), // 0.0005 test
+  sdk.NewCoin("atest", sdk.NewInt(500000000000)), // 0.0005 test
 ))
 ```
@@ -147,15 +147,15 @@ Standard bank events with extended precision amounts:
 ```protobuf
 service Query {
-  // Get total of all fractional balances
+  // Get total of all fractional balances
   rpc TotalFractionalBalances(QueryTotalFractionalBalancesRequest)
       returns (QueryTotalFractionalBalancesResponse);

-  // Get current remainder amount
+  // Get current remainder amount
   rpc Remainder(QueryRemainderRequest)
       returns (QueryRemainderResponse);

-  // Get fractional balance for an account
+  // Get fractional balance for an account
   rpc FractionalBalance(QueryFractionalBalanceRequest)
       returns (QueryFractionalBalanceResponse);
 }
@@ -199,8 +199,8 @@ Replace bank keeper with precisebank keeper in app.go:
 ```go
 app.EvmKeeper = evmkeeper.NewKeeper(
-  app.PrecisebankKeeper, // Instead of app.BankKeeper
-  // ... other parameters
+  app.PrecisebankKeeper, // Instead of app.BankKeeper
+  // ... other parameters
 )
 ```
@@ -209,10 +209,10 @@ app.EvmKeeper = evmkeeper.NewKeeper(
 Query extended balances through standard interface:
 ```go
-// Automatically handles atest denomination
+// Automatically handles atest denomination
 balance := keeper.GetBalance(ctx, addr, "atest")

-// Transfer with 18 decimal precision
+// Transfer with 18 decimal precision
 err := keeper.SendCoins(ctx, from, to,
   sdk.NewCoins(sdk.NewCoin("atest", amount)))
 ```
@@ -258,16 +258,16 @@ Critical invariants maintained by the module:
 2. **Migration Path**
    ```go
-   // Deploy in passive mode
+   // Deploy in passive mode
    app.PrecisebankKeeper = precisebankkeeper.NewKeeper(
      app.BankKeeper,
-     // Fractional balances start at zero
+     // Fractional balances start at zero
    )
    ```

 3. **Testing**
    ```go
-   // Verify fractional operations
+   // Verify fractional operations
    suite.Require().Equal(
      expectedFractional,
      keeper.GetFractionalBalance(ctx, addr),
@@ -278,7 +278,7 @@ Critical invariants maintained by the module:
 1. **Balance Queries**
    ```javascript
-   // Query in atest (18 decimals)
+   // Query in atest (18 decimals)
    const balance = await queryClient.precisebank.fractionalBalance({
      address: "cosmos1..."
    });
@@ -286,7 +286,7 @@ Critical invariants maintained by the module:
 2. **Precision Handling**
    ```javascript
-   // Convert between precisions
+   // Convert between precisions
    const testAmount = atestAmount / BigInt(10**12);
    const atestAmount = testAmount * BigInt(10**12);
    ```
@@ -325,27 +325,6 @@ Critical invariants maintained by the module:
 - Sparse storage (only non-zero fractions stored)
 - Batch operations maintain efficiency

-## Troubleshooting
-
-### Common Issues
-
-| Issue | Cause | Solution |
-|-------|-------|----------|
-| "fractional overflow" | Fractional > 10^12 | Check calculation logic |
-| "insufficient balance" | Including fractional | Verify full atest balance |
-| "invariant violation" | Supply mismatch | Audit reserve and remainder |
-
-### Validation Commands
-
-```bash
-# Verify module invariants
-evmd query precisebank total-fractional-balances
-evmd query precisebank remainder
-
-# Check specific account
-evmd query bank balances cosmos1... --denom test
-evmd query precisebank fractional-balance cosmos1...
-```

 ## References
diff --git a/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/vm.mdx b/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/vm.mdx
index 39ca8846..54674f84 100644
--- a/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/vm.mdx
+++ b/docs/evm/v0.4.x/documentation/cosmos-sdk/modules/vm.mdx
@@ -52,8 +52,8 @@ Granular control over contract operations:
 ```go
 type AccessControl struct {
-  Create AccessControlType // Contract deployment
-  Call AccessControlType // Contract execution
+  Create AccessControlType // Contract deployment
+  Call AccessControlType // Contract execution
 }
 ```
@@ -177,9 +177,9 @@ Primary message for EVM transactions ([source](https://github.com/cosmos/evm/blo
 ```protobuf
 message MsgEthereumTx {
-  google.protobuf.Any data = 1; // Transaction data (Legacy/AccessList/DynamicFee)
-  string hash = 3; // Transaction hash
-  string from = 4; // Sender address (derived from signature)
+  google.protobuf.Any data = 1; // Transaction data (Legacy/AccessList/DynamicFee)
+  string hash = 3; /
Transaction hash + string from = 4; / Sender address (derived from signature) } ``` @@ -195,8 +195,8 @@ Governance message for parameter updates: ```protobuf message MsgUpdateParams { - string authority = 1; // Must be governance module - Params params = 2; // New parameters + string authority = 1; / Must be governance module + Params params = 2; / New parameters } ``` @@ -242,29 +242,29 @@ message MsgUpdateParams { ```protobuf service Query { - // Account queries + / Account queries rpc Account(QueryAccountRequest) returns (QueryAccountResponse); rpc CosmosAccount(QueryCosmosAccountRequest) returns (QueryCosmosAccountResponse); rpc ValidatorAccount(QueryValidatorAccountRequest) returns (QueryValidatorAccountResponse); - // State queries + / State queries rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse); rpc Storage(QueryStorageRequest) returns (QueryStorageResponse); rpc Code(QueryCodeRequest) returns (QueryCodeResponse); - // Module queries + / Module queries rpc Params(QueryParamsRequest) returns (QueryParamsResponse); rpc Config(QueryConfigRequest) returns (QueryConfigResponse); - // EVM operations + / EVM operations rpc EthCall(EthCallRequest) returns (MsgEthereumTxResponse); rpc EstimateGas(EthCallRequest) returns (EstimateGasResponse); - // Debugging + / Debugging rpc TraceTx(QueryTraceTxRequest) returns (QueryTraceTxResponse); rpc TraceBlock(QueryTraceBlockRequest) returns (QueryTraceBlockResponse); - // Fee queries + / Fee queries rpc BaseFee(QueryBaseFeeRequest) returns (QueryBaseFeeResponse); rpc GlobalMinGasPrice(QueryGlobalMinGasPriceRequest) returns (QueryGlobalMinGasPriceResponse); } @@ -379,7 +379,7 @@ type EvmHooks interface { ### Registration ```go -// In app.go +/ In app.go app.EvmKeeper = app.EvmKeeper.SetHooks( vmkeeper.NewMultiEvmHooks( app.Erc20Keeper.Hooks(), @@ -432,8 +432,8 @@ finalGasUsed = gasUsed - refund 3. 
**Fork Planning** ```json { - "shanghai_time": "1681338455", // Test first - "cancun_time": "1710338455" // Delay for safety + "shanghai_time": "1681338455", / Test first + "cancun_time": "1710338455" / Delay for safety } ``` @@ -441,14 +441,14 @@ finalGasUsed = gasUsed - refund 1. **Gas Optimization** ```solidity - // Cache storage reads + / Cache storage reads uint256 cached = storageVar; - // Use cached instead of storageVar + / Use cached instead of storageVar - // Pack structs + / Pack structs struct Packed { uint128 a; - uint128 b; // Single slot + uint128 b; / Single slot } ``` @@ -474,35 +474,11 @@ finalGasUsed = gasUsed - refund await contract.method(); } catch (error) { if (error.code === 'CALL_EXCEPTION') { - // Handle revert + / Handle revert } } ``` -## Troubleshooting - -### Common Issues - -| Issue | Cause | Solution | -|-------|-------|----------| -| "out of gas" | Insufficient gas limit | Increase gas limit or optimize contract | -| "nonce too low" | Stale nonce | Query current nonce, retry | -| "insufficient funds" | Low balance | Ensure balance > value + (gasLimit × gasPrice) | -| "contract creation failed" | CREATE disabled or restricted | Check access_control settings | -| "precompile not active" | Precompile not in active list | Enable via governance | - -### Debug Commands - -```bash -# Check EVM parameters -evmd query vm params - -# Trace failed transaction -evmd query vm trace-tx 0xFailedTxHash - -# Check account state -evmd query vm account 0xAddress -``` ## References diff --git a/docs/evm/v0.4.x/documentation/custom-improvement-proposals.mdx b/docs/evm/v0.4.x/documentation/custom-improvement-proposals.mdx index 6d723daa..a7d332c4 100644 --- a/docs/evm/v0.4.x/documentation/custom-improvement-proposals.mdx +++ b/docs/evm/v0.4.x/documentation/custom-improvement-proposals.mdx @@ -34,7 +34,7 @@ To allow any Cosmos EVM user to define their own specific improvements without o Below, you will find an example of how the Cosmos EVM chain uses this 
functionality to modify the behavior of the `CREATE` and `CREATE2` opcodes. First, the modifier function has to be defined: ```
-// Enable0000 contains the logic to modify the CREATE and CREATE2 opcodes// constant gas value.func Enable0000(jt *vm.JumpTable) { multiplier := 10 currentValCreate := jt[vm.CREATE].GetConstantGas() jt[vm.CREATE].SetConstantGas(currentValCreate * multiplier) currentValCreate2 := jt[vm.CREATE2].GetConstantGas() jt[vm.CREATE2].SetConstantGas(currentValCreate2 * multiplier)}
+/ Enable0000 contains the logic to modify the CREATE and CREATE2 opcodes
+/ constant gas value.
+func Enable0000(jt *vm.JumpTable) {
+	multiplier := 10
+	currentValCreate := jt[vm.CREATE].GetConstantGas()
+	jt[vm.CREATE].SetConstantGas(currentValCreate * multiplier)
+	currentValCreate2 := jt[vm.CREATE2].GetConstantGas()
+	jt[vm.CREATE2].SetConstantGas(currentValCreate2 * multiplier)
+}
``` Then, the function has to be associated with a name via a custom activator: diff --git a/docs/evm/v0.4.x/documentation/getting-started/development-environment.mdx b/docs/evm/v0.4.x/documentation/getting-started/development-environment.mdx index 1f6880c9..2f59e50d 100644 --- a/docs/evm/v0.4.x/documentation/getting-started/development-environment.mdx +++ b/docs/evm/v0.4.x/documentation/getting-started/development-environment.mdx @@ -10,7 +10,7 @@ Each person has their own preference and different tasks or scopes of work may c [Remix](https://remix.org) is a full-feature IDE in a web-app supporting all EVM compatible networks out of the box. A convenient option for quick testing, or as a self-contained smart contract deployment interface.
-[Read more..](docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/remix.mdx) +[Read more..](/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/remix) diff --git a/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/hardhat.mdx b/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/hardhat.mdx index 73961ba9..21953b62 100644 --- a/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/hardhat.mdx +++ b/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/hardhat.mdx @@ -42,13 +42,13 @@ const config: HardhatUserConfig = { networks: { local: { url: "http://127.0.0.1:8545", - chainId: 4321, // Your EVM chain ID + chainId: 4321, / Your EVM chain ID accounts: process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [], gasPrice: 20000000000, }, testnet: { url: "", - chainId: 4321, // Your EVM chain ID + chainId: 4321, / Your EVM chain ID accounts: process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [], } }, @@ -63,7 +63,7 @@ const config: HardhatUserConfig = { customChains: [ { network: "cosmosEvmTestnet", - chainId: 4321, // Your EVM chain ID + chainId: 4321, / Your EVM chain ID urls: { apiURL: "", browserURL: "" @@ -85,13 +85,13 @@ Hardhat's first-class TypeScript support enables type-safe contract interactions Create a contract in the `contracts/` directory. For this example, we'll use a simple `LiquidStakingVault`. 
```solidity title="contracts/LiquidStakingVault.sol" lines expandable -// contracts/LiquidStakingVault.sol -// SPDX-License-Identifier: MIT +/ contracts/LiquidStakingVault.sol +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.24; import "@openzeppelin/contracts/access/Ownable.sol"; -// Interface for the staking precompile +/ Interface for the staking precompile interface IStaking { function delegate(address validator, uint256 amount) external returns (bool); function undelegate(address validator, uint256 amount) external returns (bool, uint64); @@ -126,7 +126,7 @@ contract LiquidStakingVault is Ownable { Create type-safe tests in the `test/` directory. ```typescript title="test/LiquidStakingVault.test.ts" lines expandable -// test/LiquidStakingVault.test.ts +/ test/LiquidStakingVault.test.ts import { expect } from "chai"; import { ethers } from "hardhat"; import { LiquidStakingVault } from "../typechain-types"; @@ -147,8 +147,8 @@ describe("LiquidStakingVault", function () { vault = await VaultFactory.deploy(VALIDATOR_ADDRESS); await vault.waitForDeployment(); - // Mock the staking precompile's delegate function to always return true - // This bytecode is a minimal contract that returns true for any call + / Mock the staking precompile's delegate function to always return true + / This bytecode is a minimal contract that returns true for any call const successBytecode = "0x6080604052348015600f57600080fd5b50600160005560016000f3"; await ethers.provider.send("hardhat_setCode", [ STAKING_PRECOMPILE, @@ -179,7 +179,7 @@ npx hardhat test Create a deployment script in the `scripts/` directory to deploy your contract to a live network. 
```typescript title="scripts/deploy.ts" lines expandable -// scripts/deploy.ts +/ scripts/deploy.ts import { ethers, network } from "hardhat"; import { writeFileSync } from "fs"; @@ -197,7 +197,7 @@ async function main() { const vaultAddress = await vault.getAddress(); console.log("LiquidStakingVault deployed to:", vaultAddress); - // Save deployment information + / Save deployment information const deploymentInfo = { contractAddress: vaultAddress, deployer: deployer.address, @@ -210,11 +210,11 @@ async function main() { JSON.stringify(deploymentInfo, null, 2) ); - // Optional: Verify contract on Etherscan-compatible explorer + / Optional: Verify contract on Etherscan-compatible explorer if (network.name !== "local" && process.env.ETHERSCAN_API_KEY) { console.log("Waiting for block confirmations before verification..."); - // await vault.deploymentTransaction()?.wait(5); // Wait for 5 blocks - await new Promise(resolve => setTimeout(resolve, 30000)); // Or wait 30 seconds + / await vault.deploymentTransaction()?.wait(5); / Wait for 5 blocks + await new Promise(resolve => setTimeout(resolve, 30000)); / Or wait 30 seconds await hre.run("verify:verify", { address: vaultAddress, diff --git a/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/testing-and-fuzzing.mdx b/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/testing-and-fuzzing.mdx index 864b9bd7..564decbf 100644 --- a/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/testing-and-fuzzing.mdx +++ b/docs/evm/v0.4.x/documentation/getting-started/tooling-and-resources/testing-and-fuzzing.mdx @@ -53,9 +53,9 @@ These libraries are used across the stack to write tests, simulate transactions, Fuzz testing generates random inputs to find edge cases that violate expected properties. 
```solidity title="test/MyContract.t.sol (Foundry)" lines expandable -// test/MyContract.t.sol (Foundry) +/ test/MyContract.t.sol (Foundry) function test_myFunction_fuzz(uint256 amount) public { - vm.assume(amount > 0 && amount < 1e18); // constrain input + vm.assume(amount > 0 && amount < 1e18); / constrain input myContract.myFunction(amount); assertEq(myContract.someState(), expectedValue); } @@ -87,10 +87,10 @@ Tools that enhance test coverage, assert correctness, or catch common vulnerabil ```js lines expandable const { time, expectRevert } = require('@openzeppelin/test-helpers'); -// advance clock by one day +/ advance clock by one day await time.increase(time.duration.days(1)); -// expect revert with specific reason +/ expect revert with specific reason await expectRevert(contract.someFunction(), "Ownable: caller is not the owner"); ``` diff --git a/docs/evm/v0.4.x/documentation/integration/devnet/connect-to-the-network.mdx b/docs/evm/v0.4.x/documentation/integration/devnet/connect-to-the-network.mdx index a0dca814..c45c1f5f 100644 --- a/docs/evm/v0.4.x/documentation/integration/devnet/connect-to-the-network.mdx +++ b/docs/evm/v0.4.x/documentation/integration/devnet/connect-to-the-network.mdx @@ -38,7 +38,7 @@ To test out your dApp via a web UI using an extension-based wallet, you'll need ```javascript Add Chain Example const chainConfig = { - chainId: '0x1087', // 4231 in hex + chainId: '0x1087', / 4231 in hex chainName: 'Cosmos-EVM-Devnet', nativeCurrency: { name: 'ATOM', diff --git a/docs/evm/v0.4.x/documentation/integration/erc20-precompiles-migration.mdx b/docs/evm/v0.4.x/documentation/integration/erc20-precompiles-migration.mdx index b9689577..a7346217 100644 --- a/docs/evm/v0.4.x/documentation/integration/erc20-precompiles-migration.mdx +++ b/docs/evm/v0.4.x/documentation/integration/erc20-precompiles-migration.mdx @@ -47,11 +47,11 @@ Your chain needs this migration if you have: Add this essential migration logic to your existing upgrade handler: ```go 
-// In your upgrade handler +/ In your upgrade handler store := ctx.KVStore(storeKeys[erc20types.StoreKey]) -const addressLength = 42 // "0x" + 40 hex characters +const addressLength = 42 / "0x" + 40 hex characters -// Migrate dynamic precompiles (IBC tokens, token factory) +/ Migrate dynamic precompiles (IBC tokens, token factory) if oldData := store.Get([]byte("DynamicPrecompiles")); len(oldData) > 0 { for i := 0; i < len(oldData); i += addressLength { address := common.HexToAddress(string(oldData[i : i+addressLength])) @@ -60,7 +60,7 @@ if oldData := store.Get([]byte("DynamicPrecompiles")); len(oldData) > 0 { store.Delete([]byte("DynamicPrecompiles")) } -// Migrate native precompiles +/ Migrate native precompiles if oldData := store.Get([]byte("NativePrecompiles")); len(oldData) > 0 { for i := 0; i < len(oldData); i += addressLength { address := common.HexToAddress(string(oldData[i : i+addressLength])) @@ -70,8 +70,7 @@ if oldData := store.Get([]byte("NativePrecompiles")); len(oldData) > 0 { } ``` -
-Complete Implementation Example + ### Create Upgrade Handler @@ -103,13 +102,13 @@ func CreateUpgradeHandler( ctx := sdk.UnwrapSDKContext(c) ctx.Logger().Info("Starting v0.4.0 upgrade...") - // Run standard module migrations + / Run standard module migrations vm, err := mm.RunMigrations(ctx, configurator, vm) if err != nil { return vm, err } - // Migrate ERC20 precompiles + / Migrate ERC20 precompiles if err := migrateERC20Precompiles(ctx, storeKeys[erc20types.StoreKey], keepers.Erc20Keeper); err != nil { return vm, err } @@ -138,7 +137,7 @@ func migrateERC20Precompiles( erc20Keeper erc20keeper.Keeper, ) error { store := ctx.KVStore(storeKey) - const addressLength = 42 // "0x" + 40 hex characters + const addressLength = 42 / "0x" + 40 hex characters migrations := []struct { oldKey string @@ -183,7 +182,7 @@ func migrateERC20Precompiles( addressStr := string(oldData[i : i+addressLength]) address := common.HexToAddress(addressStr) - // Validate address + / Validate address if address == (common.Address{}) { ctx.Logger().Warn("Skipping zero address", "type", migration.description, @@ -192,7 +191,7 @@ func migrateERC20Precompiles( continue } - // Migrate to new storage + / Migrate to new storage migration.setter(ctx, address) migratedCount++ @@ -203,7 +202,7 @@ func migrateERC20Precompiles( ) } - // Clean up old storage + / Clean up old storage store.Delete([]byte(migration.oldKey)) ctx.Logger().Info("Migration complete", "type", migration.description, @@ -239,7 +238,7 @@ func (app *App) RegisterUpgradeHandlers() { ``` -
+
## Testing @@ -277,13 +276,13 @@ mantrachaind export | jq '.app_state.erc20.dynamic_precompiles' ```go tests/upgrade_test.go func TestERC20PrecompileMigration(t *testing.T) { - // Setup test environment + / Setup test environment app, ctx := setupTestApp(t) - // Create legacy storage entries + / Create legacy storage entries store := ctx.KVStore(app.keys[erc20types.StoreKey]) - // Add test addresses in old format + / Add test addresses in old format dynamicAddresses := []string{ "0x6eC942095eCD4948d9C094337ABd59Dc3c521005", "0x1234567890123456789012345678901234567890", @@ -294,15 +293,15 @@ func TestERC20PrecompileMigration(t *testing.T) { } store.Set([]byte("DynamicPrecompiles"), []byte(dynamicData)) - // Run migration + / Run migration err := migrateERC20Precompiles(ctx, app.keys[erc20types.StoreKey], app.Erc20Keeper) require.NoError(t, err) - // Verify migration + / Verify migration migratedAddresses := app.Erc20Keeper.GetDynamicPrecompiles(ctx) require.Len(t, migratedAddresses, len(dynamicAddresses)) - // Verify old storage is cleaned + / Verify old storage is cleaned oldData := store.Get([]byte("DynamicPrecompiles")) require.Nil(t, oldData) } diff --git a/docs/evm/v0.4.x/documentation/integration/evm-module-integration.mdx b/docs/evm/v0.4.x/documentation/integration/evm-module-integration.mdx index 8deb42c6..adb01073 100644 --- a/docs/evm/v0.4.x/documentation/integration/evm-module-integration.mdx +++ b/docs/evm/v0.4.x/documentation/integration/evm-module-integration.mdx @@ -4,7 +4,8 @@ description: "Integrating the Cosmos EVM module into Cosmos SDK v0.53.x chains" --- -Big thanks to Reece & the [Spawn](https://github.com/rollchains/spawn) team for their valuable contributions to this guide. + Big thanks to Reece & the [Spawn](https://github.com/rollchains/spawn) team + for their valuable contributions to this guide. This guide provides instructions for adding EVM compatibility to a new Cosmos SDK chain. 
It targets chains being built from scratch with EVM support. @@ -15,7 +16,8 @@ This guide provides instructions for adding EVM compatibility to a new Cosmos SD - Token decimal changes (from Cosmos standard 6 to Ethereum standard 18) impacting all existing balances - Asset migration where existing assets need to be initialized and mirrored in the EVM -Contact [Interchain Labs](https://share-eu1.hsforms.com/2g6yO-PVaRoKj50rUgG4Pjg2e2sca) for production chain upgrade guidance. +Contact [Cosmos Labs](https://cosmos.network/interest-form) for production chain upgrade guidance. + ## Prerequisites @@ -26,16 +28,18 @@ Contact [Interchain Labs](https://share-eu1.hsforms.com/2g6yO-PVaRoKj50rUgG4Pjg2 - Basic knowledge of Go and Cosmos SDK -Throughout this guide, `evmd` refers to your chain's binary (e.g., `gaiad`, `dydxd`, etc.). + Throughout this guide, `evmd` refers to your chain's binary (e.g., `gaiad`, + `dydxd`, etc.). ## Version Compatibility -These version numbers may change as development continues. Check [github.com/cosmos/evm](https://github.com/cosmos/evm) for the latest releases. + These version numbers may change as development continues. Check + [github.com/cosmos/evm](https://github.com/cosmos/evm) for the latest + releases. 
- ```go require ( github.com/cosmos/cosmos-sdk v0.53.0 @@ -44,7 +48,7 @@ require ( ) replace ( - // Use the Cosmos fork of go-ethereum + / Use the Cosmos fork of go-ethereum github.com/ethereum/go-ethereum => github.com/cosmos/go-ethereum v1.15.11-cosmos-0 ) ``` @@ -52,12 +56,12 @@ replace ( ## Step 1: Update Dependencies ```go -// go.mod +/ go.mod require ( github.com/cosmos/cosmos-sdk v0.53.0 github.com/ethereum/go-ethereum v1.15.10 - // for IBC functionality in EVM + / for IBC functionality in EVM github.com/cosmos/ibc-go/modules/capability v1.0.1 github.com/cosmos/ibc-go/v10 v10.2.0 ) @@ -73,15 +77,17 @@ Cosmos EVM requires two separate chain IDs: - **EVM Chain ID** (integer): Used for EVM transactions and EIP-155 tooling (e.g., 9000) -Ensure your EVM chain ID is not already in use by checking [chainlist.org](https://chainlist.org/). + Ensure your EVM chain ID is not already in use by checking + [chainlist.org](https://chainlist.org/). **Files to Update:** 1. `app/app.go`: Set chain ID constants + ```go -const CosmosChainID = "mychain-1" // Standard Cosmos format -const EVMChainID = 9000 // EIP-155 integer +const CosmosChainID = "mychain-1" / Standard Cosmos format +const EVMChainID = 9000 / EIP-155 integer ``` 2. Update `Makefile`, scripts, and `genesis.json` with correct chain IDs @@ -93,11 +99,13 @@ Use `eth_secp256k1` as the standard account type with coin type 60 for Ethereum **Files to Update:** 1. `app/app.go`: + ```go const CoinType uint32 = 60 ``` 2. `chain_registry.json`: + ```json "slip44": 60 ``` @@ -107,11 +115,13 @@ const CoinType uint32 = 60 Changing from 6 decimals (Cosmos convention) to 18 decimals (EVM standard) is highly recommended for full compatibility. 1. Set the denomination in `app/app.go`: + ```go const BaseDenomUnit int64 = 18 ``` 2. 
Update the `init()` function: + ```go import ( "math/big" @@ -120,7 +130,7 @@ import ( ) func init() { - // Update power reduction for 18-decimal base unit + / Update power reduction for 18-decimal base unit sdk.DefaultPowerReduction = math.NewIntFromBigInt( new(big.Int).Exp(big.NewInt(10), big.NewInt(BaseDenomUnit), nil), ) @@ -130,7 +140,9 @@ func init() { ## Step 3: Handle EVM Decimal Precision -The mismatch between EVM's 18-decimal standard and Cosmos SDK's 6-decimal standard is critical. The default behavior (flooring) discards any value below 10^-6, causing asset loss and breaking DeFi applications. + The mismatch between EVM's 18-decimal standard and Cosmos SDK's 6-decimal + standard is critical. The default behavior (flooring) discards any value below + 10^-6, causing asset loss and breaking DeFi applications. ### Solution: x/precisebank Module @@ -138,6 +150,7 @@ The mismatch between EVM's 18-decimal standard and Cosmos SDK's 6-decimal standa The `x/precisebank` module wraps the native `x/bank` module to maintain fractional balances for EVM denominations, handling full 18-decimal precision without loss. 
**Benefits:** + - Lossless precision preventing invisible asset loss - High DApp compatibility ensuring DeFi protocols function correctly - Simple integration requiring minimal changes @@ -145,7 +158,7 @@ The `x/precisebank` module wraps the native `x/bank` module to maintain fraction **Integration in app.go:** ```go -// Initialize PreciseBankKeeper +/ Initialize PreciseBankKeeper app.PreciseBankKeeper = precisebankkeeper.NewKeeper( appCodec, keys[precisebanktypes.StoreKey], @@ -153,14 +166,14 @@ app.PreciseBankKeeper = precisebankkeeper.NewKeeper( authtypes.NewModuleAddress(govtypes.ModuleName).String(), ) -// Pass PreciseBankKeeper to EVMKeeper instead of BankKeeper +/ Pass PreciseBankKeeper to EVMKeeper instead of BankKeeper app.EVMKeeper = evmkeeper.NewKeeper( appCodec, keys[evmtypes.StoreKey], tkeys[evmtypes.TransientKey], authtypes.NewModuleAddress(govtypes.ModuleName), app.AccountKeeper, - app.PreciseBankKeeper, // Use PreciseBankKeeper here + app.PreciseBankKeeper, / Use PreciseBankKeeper here app.StakingKeeper, app.FeeMarketKeeper, &app.Erc20Keeper, @@ -178,6 +191,7 @@ The Cosmos EVM `x/erc20` module can automatically register ERC20 token pairs for 1. **Use the Extended IBC Transfer Module**: Import and use the transfer module from `github.com/cosmos/evm/x/ibc/transfer` 2. 
**Enable ERC20 Module Parameters** in genesis: + ```go erc20Params := erc20types.DefaultParams() erc20Params.EnableErc20 = true @@ -210,29 +224,29 @@ func NoOpEVMOptions(_ string) error { var sealed = false -// ChainsCoinInfo maps EVM chain IDs to coin configuration -// IMPORTANT: Uses uint64 EVM chain IDs as keys, not Cosmos chain ID strings +/ ChainsCoinInfo maps EVM chain IDs to coin configuration +/ IMPORTANT: Uses uint64 EVM chain IDs as keys, not Cosmos chain ID strings var ChainsCoinInfo = map[uint64]evmtypes.EvmCoinInfo{ - EVMChainID: { // Your numeric EVM chain ID (e.g., 9000) + EVMChainID: { / Your numeric EVM chain ID (e.g., 9000) Denom: BaseDenom, DisplayDenom: DisplayDenom, Decimals: evmtypes.EighteenDecimals, }, } -// EVMAppOptions sets up global configuration +/ EVMAppOptions sets up global configuration func EVMAppOptions(chainID string) error { if sealed { return nil } - // IMPORTANT: Lookup uses numeric EVMChainID, not Cosmos chainID string + / IMPORTANT: Lookup uses numeric EVMChainID, not Cosmos chainID string coinInfo, found := ChainsCoinInfo[EVMChainID] if !found { return fmt.Errorf("unknown EVM chain id: %d", EVMChainID) } - // Set denom info for the chain + / Set denom info for the chain if err := setBaseDenom(coinInfo); err != nil { return err } @@ -256,7 +270,7 @@ func EVMAppOptions(chainID string) error { return nil } -// setBaseDenom registers display and base denoms +/ setBaseDenom registers display and base denoms func setBaseDenom(ci evmtypes.EvmCoinInfo) error { if err := sdk.RegisterDenom(ci.DisplayDenom, math.LegacyOneDec()); err != nil { return err @@ -302,7 +316,7 @@ import ( const bech32PrecompileBaseGas = 6_000 -// NewAvailableStaticPrecompiles returns all available static precompiled contracts +/ NewAvailableStaticPrecompiles returns all available static precompiled contracts func NewAvailableStaticPrecompiles( stakingKeeper stakingkeeper.Keeper, distributionKeeper distributionkeeper.Keeper, @@ -360,11 +374,11 @@ func 
NewAvailableStaticPrecompiles( panic(fmt.Errorf("failed to instantiate evidence precompile: %w", err)) } - // Stateless precompiles + / Stateless precompiles precompiles[bech32Precompile.Address()] = bech32Precompile precompiles[p256Precompile.Address()] = p256Precompile - // Stateful precompiles + / Stateful precompiles precompiles[stakingPrecompile.Address()] = stakingPrecompile precompiles[distributionPrecompile.Address()] = distributionPrecompile precompiles[ibcTransferPrecompile.Address()] = ibcTransferPrecompile @@ -383,7 +397,7 @@ func NewAvailableStaticPrecompiles( ```go import ( - // ... other imports + / ... other imports ante "github.com/your-repo/your-chain/ante" evmante "github.com/cosmos/evm/ante" evmcosmosante "github.com/cosmos/evm/ante/cosmos" @@ -403,12 +417,12 @@ import ( _ "github.com/cosmos/evm/x/vm/core/tracers/js" _ "github.com/cosmos/evm/x/vm/core/tracers/native" - // Replace default transfer with EVM's extended transfer module + / Replace default transfer with EVM's extended transfer module transfer "github.com/cosmos/evm/x/ibc/transfer" ibctransferkeeper "github.com/cosmos/evm/x/ibc/transfer/keeper" ibctransfertypes "github.com/cosmos/ibc-go/v8/modules/apps/transfer/types" - // Add authz for precompiles + / Add authz for precompiles authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" ) ``` @@ -417,7 +431,7 @@ import ( ```go var maccPerms = map[string][]string{ - // ... existing permissions + / ... existing permissions evmtypes.ModuleName: {authtypes.Minter, authtypes.Burner}, feemarkettypes.ModuleName: nil, erc20types.ModuleName: {authtypes.Minter, authtypes.Burner}, @@ -428,7 +442,7 @@ var maccPerms = map[string][]string{ ```go type ChainApp struct { - // ... existing fields + / ... existing fields FeeMarketKeeper feemarketkeeper.Keeper EVMKeeper *evmkeeper.Keeper Erc20Keeper erc20keeper.Keeper @@ -440,12 +454,12 @@ type ChainApp struct { ```go func NewChainApp( - // ... existing params + / ... 
existing params appOpts servertypes.AppOptions, - evmAppOptions EVMOptionsFn, // Add this parameter + evmAppOptions EVMOptionsFn, / Add this parameter baseAppOptions ...func(*baseapp.BaseApp), ) *ChainApp { - // ... + / ... } ``` @@ -462,7 +476,7 @@ txConfig := encodingConfig.TxConfig ```go keys := storetypes.NewKVStoreKeys( - // ... existing keys + / ... existing keys evmtypes.StoreKey, feemarkettypes.StoreKey, erc20types.StoreKey, @@ -478,11 +492,11 @@ tkeys := storetypes.NewTransientStoreKeys( ### Initialize Keepers (Critical Order) -Keepers must be initialized in exact order: FeeMarket → EVM → Erc20 → Transfer + Keepers must be initialized in exact order: FeeMarket → EVM → Erc20 → Transfer ```go -// Initialize AuthzKeeper if not already done +/ Initialize AuthzKeeper if not already done app.AuthzKeeper = authzkeeper.NewKeeper( keys[authz.StoreKey], appCodec, @@ -490,7 +504,7 @@ app.AuthzKeeper = authzkeeper.NewKeeper( app.AccountKeeper, ) -// Initialize FeeMarketKeeper +/ Initialize FeeMarketKeeper app.FeeMarketKeeper = feemarketkeeper.NewKeeper( appCodec, authtypes.NewModuleAddress(govtypes.ModuleName), @@ -499,7 +513,7 @@ app.FeeMarketKeeper = feemarketkeeper.NewKeeper( app.GetSubspace(feemarkettypes.ModuleName), ) -// Initialize EVMKeeper +/ Initialize EVMKeeper tracer := cast.ToString(appOpts.Get(srvflags.EVMTracer)) app.EVMKeeper = evmkeeper.NewKeeper( appCodec, @@ -510,12 +524,12 @@ app.EVMKeeper = evmkeeper.NewKeeper( app.BankKeeper, app.StakingKeeper, app.FeeMarketKeeper, - &app.Erc20Keeper, // Pass pointer for circular dependency + &app.Erc20Keeper, / Pass pointer for circular dependency tracer, app.GetSubspace(evmtypes.ModuleName), ) -// Initialize Erc20Keeper +/ Initialize Erc20Keeper app.Erc20Keeper = erc20keeper.NewKeeper( keys[erc20types.StoreKey], appCodec, @@ -525,10 +539,10 @@ app.Erc20Keeper = erc20keeper.NewKeeper( app.EVMKeeper, app.StakingKeeper, app.AuthzKeeper, - &app.TransferKeeper, // Pass pointer for circular dependency + 
&app.TransferKeeper, / Pass pointer for circular dependency ) -// Initialize extended TransferKeeper +/ Initialize extended TransferKeeper app.TransferKeeper = ibctransferkeeper.NewKeeper( appCodec, keys[ibctransfertypes.StoreKey], @@ -543,11 +557,11 @@ app.TransferKeeper = ibctransferkeeper.NewKeeper( authtypes.NewModuleAddress(govtypes.ModuleName).String(), ) -// CRITICAL: Wire IBC callbacks for automatic ERC20 registration +/ CRITICAL: Wire IBC callbacks for automatic ERC20 registration transferModule := transfer.NewIBCModule(app.TransferKeeper) app.Erc20Keeper.SetICS20Module(transferModule) -// Configure EVM Precompiles +/ Configure EVM Precompiles corePrecompiles := NewAvailableStaticPrecompiles( *app.StakingKeeper, app.DistrKeeper, @@ -568,7 +582,7 @@ app.EVMKeeper.WithStaticPrecompiles(corePrecompiles) ```go app.ModuleManager = module.NewManager( - // ... existing modules + / ... existing modules evm.NewAppModule(app.EVMKeeper, app.AccountKeeper, app.GetSubspace(evmtypes.ModuleName)), feemarket.NewAppModule(app.FeeMarketKeeper, app.GetSubspace(feemarkettypes.ModuleName)), erc20.NewAppModule(app.Erc20Keeper, app.AccountKeeper, app.GetSubspace(erc20types.ModuleName)), @@ -579,31 +593,31 @@ app.ModuleManager = module.NewManager( ### Update Module Ordering ```go -// SetOrderBeginBlockers - EVM must come after feemarket +/ SetOrderBeginBlockers - EVM must come after feemarket app.ModuleManager.SetOrderBeginBlockers( - // ... other modules + / ... other modules erc20types.ModuleName, feemarkettypes.ModuleName, evmtypes.ModuleName, - // ... + / ... ) -// SetOrderEndBlockers +/ SetOrderEndBlockers app.ModuleManager.SetOrderEndBlockers( - // ... other modules + / ... other modules evmtypes.ModuleName, feemarkettypes.ModuleName, erc20types.ModuleName, - // ... + / ... ) -// SetOrderInitGenesis - feemarket must be before genutil +/ SetOrderInitGenesis - feemarket must be before genutil genesisModuleOrder := []string{ - // ... other modules + / ... 
other modules evmtypes.ModuleName, feemarkettypes.ModuleName, erc20types.ModuleName, - // ... + / ... } ``` @@ -621,7 +635,7 @@ options := ante.HandlerOptions{ ExtensionOptionChecker: cosmosevmtypes.HasDynamicFeeExtensionOption, MaxTxGasWanted: cast.ToUint64(appOpts.Get(srvflags.EVMMaxTxGasWanted)), TxFeeChecker: evmevmante.NewDynamicFeeChecker(app.FeeMarketKeeper), - // ... other options + / ... other options } anteHandler, err := ante.NewAnteHandler(options) @@ -637,12 +651,12 @@ app.SetAnteHandler(anteHandler) func (a *ChainApp) DefaultGenesis() map[string]json.RawMessage { genesis := a.BasicModuleManager.DefaultGenesis(a.appCodec) - // Add EVM genesis config + / Add EVM genesis config evmGenState := evmtypes.DefaultGenesisState() evmGenState.Params.ActiveStaticPrecompiles = evmtypes.AvailableStaticPrecompiles genesis[evmtypes.ModuleName] = a.appCodec.MustMarshalJSON(evmGenState) - // Add ERC20 genesis config + / Add ERC20 genesis config erc20GenState := erc20types.DefaultGenesisState() genesis[erc20types.ModuleName] = a.appCodec.MustMarshalJSON(erc20GenState) @@ -725,7 +739,7 @@ import ( cosmosante "github.com/cosmos/evm/ante/cosmos" ) -// newCosmosAnteHandler creates the default SDK ante handler for Cosmos transactions +/ newCosmosAnteHandler creates the default SDK ante handler for Cosmos transactions func newCosmosAnteHandler(options HandlerOptions) sdk.AnteHandler { return sdk.ChainAnteDecorators( ante.NewSetUpContextDecorator(), @@ -761,7 +775,7 @@ import ( evmante "github.com/cosmos/evm/ante/evm" ) -// newMonoEVMAnteHandler creates the sdk.AnteHandler for EVM transactions +/ newMonoEVMAnteHandler creates the sdk.AnteHandler for EVM transactions func newMonoEVMAnteHandler(options HandlerOptions) sdk.AnteHandler { return sdk.ChainAnteDecorators( evmante.NewEVMMonoDecorator( @@ -787,7 +801,7 @@ import ( "github.com/cosmos/evm/ante/evm" ) -// NewAnteHandler routes Ethereum or SDK transactions to the appropriate handler +/ NewAnteHandler routes Ethereum or SDK 
transactions to the appropriate handler func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { if err := options.Validate(); err != nil { return nil, err @@ -797,10 +811,10 @@ func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { var anteHandler sdk.AnteHandler if ethTx, ok := tx.(*evm.EthTx); ok { - // Handle as Ethereum transaction + / Handle as Ethereum transaction anteHandler = newMonoEVMAnteHandler(options) } else { - // Handle as normal Cosmos SDK transaction + / Handle as normal Cosmos SDK transaction anteHandler = newCosmosAnteHandler(options) } @@ -815,14 +829,14 @@ func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { ```go import ( - // Add imports + / Add imports evmcmd "github.com/cosmos/evm/client" evmserver "github.com/cosmos/evm/server" evmserverconfig "github.com/cosmos/evm/server/config" srvflags "github.com/cosmos/evm/server/flags" ) -// Define custom app config struct +/ Define custom app config struct type CustomAppConfig struct { serverconfig.Config EVM evmserverconfig.EVMConfig @@ -830,7 +844,7 @@ type CustomAppConfig struct { TLS evmserverconfig.TLSConfig } -// Update initAppConfig to include EVM config +/ Update initAppConfig to include EVM config func initAppConfig() (string, interface{}) { srvCfg, customAppTemplate := serverconfig.AppConfig(DefaultDenom) customAppConfig := CustomAppConfig{ @@ -843,9 +857,9 @@ func initAppConfig() (string, interface{}) { return customAppTemplate, customAppConfig } -// In initRootCmd, replace server.AddCommands with evmserver.AddCommands +/ In initRootCmd, replace server.AddCommands with evmserver.AddCommands func initRootCmd(...) { - // ... + / ... evmserver.AddCommands( rootCmd, evmserver.NewDefaultStartOptions(newApp, app.DefaultNodeHome), @@ -854,7 +868,7 @@ func initRootCmd(...) { ) rootCmd.AddCommand( - // ... existing commands + / ... existing commands evmcmd.KeyCommands(app.DefaultNodeHome, true), ) @@ -870,7 +884,7 @@ func initRootCmd(...) 
{ ```go import ( - // ... existing imports + / ... existing imports evmkeyring "github.com/cosmos/evm/crypto/keyring" evmtypes "github.com/cosmos/evm/x/vm/types" sdk "github.com/cosmos/cosmos-sdk/types" @@ -878,19 +892,19 @@ import ( ) func NewRootCmd() *cobra.Command { - // ... - // In client context setup: + / ... + / In client context setup: clientCtx = clientCtx. WithKeyringOptions(evmkeyring.Option()). WithBroadcastMode(flags.FlagBroadcastMode). WithLedgerHasProtobuf(true) - // Update the coin type + / Update the coin type cfg := sdk.GetConfig() - cfg.SetCoinType(evmtypes.Bip44CoinType) // Sets coin type to 60 + cfg.SetCoinType(evmtypes.Bip44CoinType) / Sets coin type to 60 cfg.Seal() - // ... + / ... return rootCmd } ``` @@ -902,23 +916,23 @@ Sign Mode Textual is a new Cosmos SDK signing method that may not be compatible ### Option A: Disable Sign Mode Textual (Recommended for pure EVM compatibility) ```go -// In app.go +/ In app.go import ( "github.com/cosmos/cosmos-sdk/types/tx" "github.com/cosmos/cosmos-sdk/x/auth/tx" ) -// ... in NewChainApp, where you set up your txConfig: +/ ... in NewChainApp, where you set up your txConfig: txConfig := tx.NewTxConfigWithOptions( appCodec, tx.ConfigOptions{ - // Remove SignMode_SIGN_MODE_TEXTUAL from enabled sign modes + / Remove SignMode_SIGN_MODE_TEXTUAL from enabled sign modes EnabledSignModes: []signing.SignMode{ signing.SignMode_SIGN_MODE_DIRECT, signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON, signing.SignMode_SIGN_MODE_EIP_191, }, - // ... + / ... }, ) ``` @@ -966,4 +980,4 @@ evmd genesis validate-genesis --- -Check [github.com/cosmos/evm](https://github.com/cosmos/evm) for the latest updates or to open an issue. \ No newline at end of file +Check [github.com/cosmos/evm](https://github.com/cosmos/evm) for the latest updates or to open an issue. 
diff --git a/docs/evm/v0.4.x/documentation/integration/mempool-integration.mdx b/docs/evm/v0.4.x/documentation/integration/mempool-integration.mdx index 485ac426..81fe077b 100644 --- a/docs/evm/v0.4.x/documentation/integration/mempool-integration.mdx +++ b/docs/evm/v0.4.x/documentation/integration/mempool-integration.mdx @@ -29,9 +29,9 @@ Update your `app/app.go` to include the EVM mempool: ```go type App struct { *baseapp.BaseApp - // ... other keepers + / ... other keepers - // Cosmos EVM keepers + / Cosmos EVM keepers FeeMarketKeeper feemarketkeeper.Keeper EVMKeeper *evmkeeper.Keeper EVMMempool *evmmempool.ExperimentalEVMMempool @@ -47,7 +47,7 @@ The mempool must be initialized **after** the antehandler has been set in the ap Add the following configuration in your `NewApp` constructor: ```go -// Set the EVM priority nonce mempool +/ Set the EVM priority nonce mempool if evmtypes.GetChainConfig() != nil { mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), @@ -65,19 +65,19 @@ if evmtypes.GetChainConfig() != nil { ) app.EVMMempool = evmMempool - // Set the global mempool for RPC access + / Set the global mempool for RPC access if err := evmmempool.SetGlobalEVMMempool(evmMempool); err != nil { panic(err) } - // Replace BaseApp mempool + / Replace BaseApp mempool app.SetMempool(evmMempool) - // Set custom CheckTx handler for nonce gap support + / Set custom CheckTx handler for nonce gap support checkTxHandler := evmmempool.NewCheckTxHandler(evmMempool) app.SetCheckTxHandler(checkTxHandler) - // Set custom PrepareProposal handler + / Set custom PrepareProposal handler abciProposalHandler := baseapp.NewDefaultProposalHandler(evmMempool, app) abciProposalHandler.SetSignerExtractionAdapter( evmmempool.NewEthSignerExtractionAdapter( @@ -97,7 +97,7 @@ The `EVMMempoolConfig` struct provides several configuration options for customi ```go mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), - BlockGasLimit: 
100_000_000, // 100M gas limit + BlockGasLimit: 100_000_000, / 100M gas limit } ``` @@ -105,19 +105,19 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ ```go type EVMMempoolConfig struct { - // Required: AnteHandler for transaction validation + / Required: AnteHandler for transaction validation AnteHandler sdk.AnteHandler - // Required: Block gas limit for transaction selection + / Required: Block gas limit for transaction selection BlockGasLimit uint64 - // Optional: Custom TxPool (defaults to LegacyPool) + / Optional: Custom TxPool (defaults to LegacyPool) TxPool *txpool.TxPool - // Optional: Custom Cosmos pool (defaults to PriorityNonceMempool) + / Optional: Custom Cosmos pool (defaults to PriorityNonceMempool) CosmosPool sdkmempool.ExtMempool - // Optional: Custom broadcast function for promoted transactions + / Optional: Custom broadcast function for promoted transactions BroadCastTxFn func(txs []*ethtypes.Transaction) error } ``` @@ -127,7 +127,7 @@ type EVMMempoolConfig struct { The mempool uses a `PriorityNonceMempool` for Cosmos transactions by default. 
You can customize the priority calculation: ```go -// Define custom priority calculation for Cosmos transactions +/ Define custom priority calculation for Cosmos transactions priorityConfig := sdkmempool.PriorityNonceMempoolConfig[math.Int]{ TxPriority: sdkmempool.TxPriority[math.Int]{ GetTxPriority: func(goCtx context.Context, tx sdk.Tx) math.Int { @@ -136,20 +136,20 @@ priorityConfig := sdkmempool.PriorityNonceMempoolConfig[math.Int]{ return math.ZeroInt() } - // Get fee in bond denomination - bondDenom := "test" // or your chain's bond denom + / Get fee in bond denomination + bondDenom := "test" / or your chain's bond denom fee := feeTx.GetFee() found, coin := fee.Find(bondDenom) if !found { return math.ZeroInt() } - // Calculate gas price: fee_amount / gas_limit + / Calculate gas price: fee_amount / gas_limit gasPrice := coin.Amount.Quo(math.NewIntFromUint64(feeTx.GetGas())) return gasPrice }, Compare: func(a, b math.Int) int { - return a.BigInt().Cmp(b.BigInt()) // Higher values have priority + return a.BigInt().Cmp(b.BigInt()) / Higher values have priority }, MinValue: math.ZeroInt(), }, @@ -167,7 +167,7 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ Different chains may require different gas limits based on their capacity: ```go -// Example: 50M gas limit for lower capacity chains +/ Example: 50M gas limit for lower capacity chains mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), BlockGasLimit: 50_000_000, @@ -192,39 +192,39 @@ var DefaultConfig = Config{ Journal: "", Rejournal: time.Hour, - PriceLimit: 1, // 1 gwei minimum gas price - PriceBump: 10, // 10% minimum price bump + PriceLimit: 1, / 1 gwei minimum gas price + PriceBump: 10, / 10% minimum price bump - AccountSlots: 16, // 16 pending transactions per account - GlobalSlots: 4096, // 4096 total pending transactions - AccountQueue: 64, // 64 queued transactions per account - GlobalQueue: 1024, // 1024 total queued transactions + AccountSlots: 16, / 16 pending 
transactions per account + GlobalSlots: 4096, / 4096 total pending transactions + AccountQueue: 64, / 64 queued transactions per account + GlobalQueue: 1024, / 1024 total queued transactions - Lifetime: 3 * time.Hour, // 3 hour maximum queue time + Lifetime: 3 * time.Hour, / 3 hour maximum queue time } ``` **Custom Configuration Example:** ```go -// Create custom configuration in your app initialization +/ Create custom configuration in your app initialization customConfig := legacypool.Config{ Locals: []common.Address{}, NoLocals: false, Journal: "", Rejournal: time.Hour, - PriceLimit: 5, // 5 gwei minimum (higher than default) - PriceBump: 15, // 15% price bump (more aggressive) + PriceLimit: 5, / 5 gwei minimum (higher than default) + PriceBump: 15, / 15% price bump (more aggressive) - AccountSlots: 32, // 32 pending per account (double default) - GlobalSlots: 8192, // 8192 total pending (double default) - AccountQueue: 128, // 128 queued per account (double default) - GlobalQueue: 2048, // 2048 total queued (double default) + AccountSlots: 32, / 32 pending per account (double default) + GlobalSlots: 8192, / 8192 total pending (double default) + AccountQueue: 128, / 128 queued per account (double default) + GlobalQueue: 2048, / 2048 total queued (double default) - Lifetime: 6 * time.Hour, // 6 hour queue time (double default) + Lifetime: 6 * time.Hour, / 6 hour queue time (double default) } -// Use in mempool initialization +/ Use in mempool initialization customTxPool := legacypool.New(customConfig, blockChain, opts...) 
mempoolConfig := &evmmempool.EVMMempoolConfig{ AnteHandler: app.GetAnteHandler(), @@ -238,17 +238,17 @@ mempoolConfig := &evmmempool.EVMMempoolConfig{ For chains handling high transaction volumes: ```go -// File modification needed in mempool/txpool/legacypool/legacypool.go +/ File modification needed in mempool/txpool/legacypool/legacypool.go highThroughputConfig := legacypool.Config{ - PriceLimit: 0, // Accept zero gas price transactions - PriceBump: 5, // Lower bump requirement for faster replacement + PriceLimit: 0, / Accept zero gas price transactions + PriceBump: 5, / Lower bump requirement for faster replacement - AccountSlots: 64, // 4x more pending per account - GlobalSlots: 16384, // 4x more total pending - AccountQueue: 256, // 4x more queued per account - GlobalQueue: 4096, // 4x more total queued + AccountSlots: 64, / 4x more pending per account + GlobalSlots: 16384, / 4x more total pending + AccountQueue: 256, / 4x more queued per account + GlobalQueue: 4096, / 4x more total queued - Lifetime: 12 * time.Hour, // Longer queue retention + Lifetime: 12 * time.Hour, / Longer queue retention } ``` @@ -257,16 +257,16 @@ highThroughputConfig := legacypool.Config{ For resource-limited environments: ```go -// Conservative memory usage configuration +/ Conservative memory usage configuration conservativeConfig := legacypool.Config{ - PriceLimit: 10, // Higher minimum to reduce spam + PriceLimit: 10, / Higher minimum to reduce spam - AccountSlots: 8, // Half the default pending slots - GlobalSlots: 2048, // Half the default total pending - AccountQueue: 32, // Half the default queued slots - GlobalQueue: 512, // Half the default total queued + AccountSlots: 8, / Half the default pending slots + GlobalSlots: 2048, / Half the default total pending + AccountQueue: 32, / Half the default queued slots + GlobalQueue: 512, / Half the default total queued - Lifetime: time.Hour, // Shorter retention time + Lifetime: time.Hour, / Shorter retention time } ``` @@ -291,7 
+291,7 @@ type CustomTxPool struct { } type Config struct { - // Your custom configuration parameters + / Your custom configuration parameters MaxTxsPerAccount int MaxGlobalTxs int MinGasPriceGwei int64 @@ -304,7 +304,7 @@ func NewCustomPool(config Config, blockchain *core.BlockChain) *CustomTxPool { PriceBump: config.ReplacementThreshold, AccountSlots: uint64(config.MaxTxsPerAccount), GlobalSlots: uint64(config.MaxGlobalTxs), - // ... other parameters + / ... other parameters } pool := legacypool.New(legacyConfig, blockchain) @@ -314,24 +314,24 @@ func NewCustomPool(config Config, blockchain *core.BlockChain) *CustomTxPool { } } -// Add custom methods for advanced pool management +/ Add custom methods for advanced pool management func (p *CustomTxPool) SetDynamicPricing(enabled bool) { - // Implement dynamic gas pricing logic + / Implement dynamic gas pricing logic } func (p *CustomTxPool) GetPoolStatistics() PoolStats { - // Return detailed pool statistics + / Return detailed pool statistics return PoolStats{ PendingCount: p.Stats().Pending, QueuedCount: p.Stats().Queued, - // ... additional metrics + / ... 
additional metrics } } ``` **Integration in app.go:** ```go -// In your NewApp constructor +/ In your NewApp constructor customPoolConfig := mempool.Config{ MaxTxsPerAccount: 50, MaxGlobalTxs: 10000, @@ -369,9 +369,9 @@ Custom transaction validation that handles nonce gaps specially (`mempool/check_ **Special Handling**: On `ErrNonceGap` for EVM transactions: ```go if errors.Is(err, ErrNonceGap) { - // Route to local queue instead of rejecting + / Route to local queue instead of rejecting err := mempool.InsertInvalidNonce(request.Tx) - // Must intercept error and return success to EVM client + / Must intercept error and return success to EVM client return interceptedSuccess } ``` @@ -391,7 +391,7 @@ Standard Cosmos SDK mempool for non-EVM transactions with fee-based prioritizati **Default Priority Calculation**: ```go -// Calculate effective gas price +/ Calculate effective gas price priority = (fee_amount / gas_limit) - base_fee ``` @@ -415,12 +415,12 @@ The mempool handles different transaction types appropriately: During block building, both transaction types compete fairly based on their effective tips: ```go -// Simplified selection logic +/ Simplified selection logic func SelectTransactions() Iterator { - evmTxs := GetPendingEVMTransactions() // From local TxPool - cosmosTxs := GetPendingCosmosTransactions() // From Cosmos mempool + evmTxs := GetPendingEVMTransactions() / From local TxPool + cosmosTxs := GetPendingCosmosTransactions() / From Cosmos mempool - return NewUnifiedIterator(evmTxs, cosmosTxs) // Fee-based priority + return NewUnifiedIterator(evmTxs, cosmosTxs) / Fee-based priority } ``` @@ -436,10 +436,10 @@ func SelectTransactions() Iterator { Test that transactions with nonce gaps are properly queued: ```javascript -// Send transactions out of order -await wallet.sendTransaction({nonce: 100, ...}); // OK: Immediate execution -await wallet.sendTransaction({nonce: 102, ...}); // OK: Queued locally (gap) -await wallet.sendTransaction({nonce: 101, 
...}); // OK: Fills gap, both execute +/ Send transactions out of order +await wallet.sendTransaction({nonce: 100, ...}); / OK: Immediate execution +await wallet.sendTransaction({nonce: 102, ...}); / OK: Queued locally (gap) +await wallet.sendTransaction({nonce: 101, ...}); / OK: Fills gap, both execute ``` ### Test Transaction Replacement @@ -447,18 +447,18 @@ await wallet.sendTransaction({nonce: 101, ...}); // OK: Fills gap, both execute Verify that higher-fee transactions replace lower-fee ones: ```javascript -// Send initial transaction +/ Send initial transaction const tx1 = await wallet.sendTransaction({ nonce: 100, gasPrice: parseUnits("20", "gwei") }); -// Replace with higher fee +/ Replace with higher fee const tx2 = await wallet.sendTransaction({ - nonce: 100, // Same nonce - gasPrice: parseUnits("30", "gwei") // Higher fee + nonce: 100, / Same nonce + gasPrice: parseUnits("30", "gwei") / Higher fee }); -// tx1 is replaced by tx2 +/ tx1 is replaced by tx2 ``` ### Verify Batch Deployments @@ -466,11 +466,11 @@ const tx2 = await wallet.sendTransaction({ Test typical deployment scripts (like Uniswap) that send many transactions at once: ```javascript -// Deploy multiple contracts in quick succession +/ Deploy multiple contracts in quick succession const factory = await Factory.deploy(); const router = await Router.deploy(factory.address); const multicall = await Multicall.deploy(); -// All transactions should queue and execute properly +/ All transactions should queue and execute properly ``` ## Monitoring and Debugging diff --git a/docs/evm/v0.4.x/documentation/integration/migration-v0.3-to-v0.4.mdx b/docs/evm/v0.4.x/documentation/integration/migration-v0.3-to-v0.4.mdx index 07c388f5..ee29b7b2 100644 --- a/docs/evm/v0.4.x/documentation/integration/migration-v0.3-to-v0.4.mdx +++ b/docs/evm/v0.4.x/documentation/integration/migration-v0.3-to-v0.4.mdx @@ -55,7 +55,7 @@ Update your app's `newApp` to return an `evmserver.Application` rather than `ser ### Change 
the return type ```go -// cmd/myapp/cmd/root.go +/ cmd/myapp/cmd/root.go import ( evmserver "github.com/cosmos/evm/server" ) @@ -65,8 +65,8 @@ func (a appCreator) newApp( db dbm.DB, traceStore io.Writer, appOpts servertypes.AppOptions, -) evmserver.Application { // Changed from servertypes.Application - // ... +) evmserver.Application { / Changed from servertypes.Application + / ... } ``` @@ -75,7 +75,7 @@ func (a appCreator) newApp( Create a thin wrapper and use it for `pruning.Cmd` and `snapshot.Cmd`: ```go -// cmd/myapp/cmd/root.go +/ cmd/myapp/cmd/root.go sdkAppCreatorWrapper := func(l log.Logger, d dbm.DB, w io.Writer, ao servertypes.AppOptions) servertypes.Application { return ac.newApp(l, d, w, ao) } @@ -91,13 +91,13 @@ rootCmd.AddCommand( Add the clientCtx to your app object: ```go -// app/app.go +/ app/app.go import ( "github.com/cosmos/cosmos-sdk/client" ) type MyApp struct { - // ... existing fields + / ... existing fields clientCtx client.Context } @@ -113,7 +113,7 @@ func (app *MyApp) SetClientCtx(clientCtx client.Context) { Import the EVM ante package and geth common: ```go -// app/app.go +/ app/app.go import ( "github.com/cosmos/evm/ante" "github.com/ethereum/go-ethereum/common" @@ -125,9 +125,9 @@ import ( Add a new field for listeners: ```go -// app/app.go +/ app/app.go type MyApp struct { - // ... existing fields + / ... 
existing fields pendingTxListeners []ante.PendingTxListener } ``` @@ -137,7 +137,7 @@ type MyApp struct { Add a public method to register a listener by txHash: ```go -// app/app.go +/ app/app.go func (app *MyApp) RegisterPendingTxListener(listener func(common.Hash)) { app.pendingTxListeners = append(app.pendingTxListeners, listener) } @@ -148,7 +148,7 @@ func (app *MyApp) RegisterPendingTxListener(listener func(common.Hash)) { ### New imports ```go -// app/keepers/precompiles.go +/ app/keepers/precompiles.go import ( "cosmossdk.io/core/address" addresscodec "github.com/cosmos/cosmos-sdk/codec/address" @@ -161,11 +161,11 @@ import ( Create a small options container with sane defaults pulled from the app's bech32 config: ```go -// app/keepers/precompiles.go +/ app/keepers/precompiles.go type Optionals struct { - AddressCodec address.Codec // used by gov/staking - ValidatorAddrCodec address.Codec // used by slashing - ConsensusAddrCodec address.Codec // used by slashing + AddressCodec address.Codec / used by gov/staking + ValidatorAddrCodec address.Codec / used by slashing + ConsensusAddrCodec address.Codec / used by slashing } func defaultOptionals() Optionals { @@ -194,17 +194,17 @@ func WithConsensusAddrCodec(c address.Codec) Option { ### 4.3 Update the precompile factory to accept options ```go -// app/keepers/precompiles.go +/ app/keepers/precompiles.go func NewAvailableStaticPrecompiles( ctx context.Context, - // ... other params + / ... other params opts ...Option, ) map[common.Address]vm.PrecompiledContract { options := defaultOptionals() for _, opt := range opts { opt(&options) } - // ... rest of implementation + / ... rest of implementation } ``` @@ -220,7 +220,7 @@ func NewAvailableStaticPrecompiles( + stakingKeeper, transferKeeper, &channelKeeper, - // ... + / ... 
``` **Gov precompile** now requires an `AddressCodec`: @@ -246,16 +246,16 @@ Include this migration with your upgrade if your chain has: ### Implementation -For complete migration instructions, see: **[ERC20 Precompiles Migration Guide](./erc20-precompiles-migration)** +For complete migration instructions, see: **[ERC20 Precompiles Migration Guide](/docs/evm/v0.4.x/documentation/integration/erc20-precompiles-migration)** Add this to your upgrade handler: ```go -// In your upgrade handler +/ In your upgrade handler store := ctx.KVStore(storeKeys[erc20types.StoreKey]) const addressLength = 42 -// Migrate dynamic precompiles +/ Migrate dynamic precompiles if oldData := store.Get([]byte("DynamicPrecompiles")); len(oldData) > 0 { for i := 0; i < len(oldData); i += addressLength { address := common.HexToAddress(string(oldData[i : i+addressLength])) @@ -264,7 +264,7 @@ if oldData := store.Get([]byte("DynamicPrecompiles")); len(oldData) > 0 { store.Delete([]byte("DynamicPrecompiles")) } -// Migrate native precompiles +/ Migrate native precompiles if oldData := store.Get([]byte("NativePrecompiles")); len(oldData) > 0 { for i := 0; i < len(oldData); i += addressLength { address := common.HexToAddress(string(oldData[i : i+addressLength])) @@ -325,14 +325,14 @@ mantrachaind export | jq '.app_state.erc20.dynamic_precompiles' **App listeners** ```go -// app/app.go +/ app/app.go import ( "github.com/cosmos/evm/ante" "github.com/ethereum/go-ethereum/common" ) type MyApp struct { - // ... + / ... 
pendingTxListeners []ante.PendingTxListener } @@ -344,7 +344,7 @@ func (app *MyApp) RegisterPendingTxListener(l func(common.Hash)) { **CLI wrapper** ```go -// cmd/myapp/cmd/root.go +/ cmd/myapp/cmd/root.go sdkAppCreatorWrapper := func(l log.Logger, d dbm.DB, w io.Writer, ao servertypes.AppOptions) servertypes.Application { return ac.newApp(l, d, w, ao) } @@ -358,9 +358,9 @@ rootCmd.AddCommand( **Precompile options & usage** ```go -// app/keepers/precompiles.go +/ app/keepers/precompiles.go opts := []Option{ - // override defaults only if you use non-standard prefixes/codecs + / override defaults only if you use non-standard prefixes/codecs WithAddressCodec(myAcctCodec), WithValidatorAddrCodec(myValCodec), WithConsensusAddrCodec(myConsCodec), diff --git a/docs/evm/v0.4.x/documentation/integration/upgrade-handlers.mdx b/docs/evm/v0.4.x/documentation/integration/upgrade-handlers.mdx index 7a6dd20d..e842bc3d 100644 --- a/docs/evm/v0.4.x/documentation/integration/upgrade-handlers.mdx +++ b/docs/evm/v0.4.x/documentation/integration/upgrade-handlers.mdx @@ -26,7 +26,7 @@ Upgrade handlers are required when: Upgrade handlers are registered in your app's `RegisterUpgradeHandlers()` method: ```go -// app/upgrades.go +/ app/upgrades.go package app import ( @@ -39,18 +39,18 @@ import ( func (app *App) RegisterUpgradeHandlers() { app.UpgradeKeeper.SetUpgradeHandler( - "v1.0.0", // upgrade name (must match governance proposal) + "v1.0.0", / upgrade name (must match governance proposal) func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) sdkCtx.Logger().Info("Starting upgrade", "name", plan.Name) - // Run module migrations + / Run module migrations migrations, err := app.ModuleManager.RunMigrations(ctx, app.configurator, fromVM) if err != nil { return vm, err } - // Add custom migration logic here + / Add custom migration logic here sdkCtx.Logger().Info("Upgrade complete", "name", plan.Name) 
return migrations, nil @@ -64,8 +64,8 @@ func (app *App) RegisterUpgradeHandlers() { The upgrade handler receives and returns a `module.VersionMap` that tracks module versions: ```go -// fromVM contains the module versions before the upgrade -// The returned VersionMap contains the new versions after migration +/ fromVM contains the module versions before the upgrade +/ The returned VersionMap contains the new versions after migration migrations, err := app.ModuleManager.RunMigrations(ctx, app.configurator, fromVM) ``` @@ -93,7 +93,7 @@ app/ ### constants.go ```go -// app/upgrades/v1_0_0/constants.go +/ app/upgrades/v1_0_0/constants.go package v1_0_0 import ( @@ -107,8 +107,8 @@ var Upgrade = upgrades.Upgrade{ UpgradeName: UpgradeName, CreateUpgradeHandler: CreateUpgradeHandler, StoreUpgrades: storetypes.StoreUpgrades{ - Added: []string{}, // New modules - Deleted: []string{}, // Removed modules + Added: []string{}, / New modules + Deleted: []string{}, / Removed modules }, } ``` @@ -116,7 +116,7 @@ var Upgrade = upgrades.Upgrade{ ### handler.go ```go -// app/upgrades/v1_0_0/handler.go +/ app/upgrades/v1_0_0/handler.go package v1_0_0 import ( @@ -137,13 +137,13 @@ func CreateUpgradeHandler( return func(c context.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { ctx := sdk.UnwrapSDKContext(c) - // Run module migrations + / Run module migrations vm, err := mm.RunMigrations(c, configurator, vm) if err != nil { return nil, err } - // Custom migrations + / Custom migrations if err := runCustomMigrations(ctx, keepers, storeKeys); err != nil { return nil, err } @@ -161,19 +161,19 @@ Migrating module parameters to new formats: ```go func migrateParams(ctx sdk.Context, keeper paramskeeper.Keeper) error { - // Get old params + / Get old params var oldParams v1.Params keeper.GetParamSet(ctx, &oldParams) - // Convert to new format + / Convert to new format newParams := v2.Params{ Field1: oldParams.Field1, Field2: convertField(oldParams.Field2), 
- // New field with default value + / New field with default value Field3: "default", } - // Set new params + / Set new params keeper.SetParams(ctx, newParams) return nil } @@ -187,7 +187,7 @@ Moving data between different storage locations: func migrateState(ctx sdk.Context, storeKey storetypes.StoreKey) error { store := ctx.KVStore(storeKey) - // Iterate over old storage + / Iterate over old storage iterator := storetypes.KVStorePrefixIterator(store, oldPrefix) defer iterator.Close() @@ -195,14 +195,14 @@ func migrateState(ctx sdk.Context, storeKey storetypes.StoreKey) error { oldKey := iterator.Key() value := iterator.Value() - // Transform key/value if needed + / Transform key/value if needed newKey := transformKey(oldKey) newValue := transformValue(value) - // Write to new location + / Write to new location store.Set(newKey, newValue) - // Delete old entry + / Delete old entry store.Delete(oldKey) } @@ -215,7 +215,7 @@ func migrateState(ctx sdk.Context, storeKey storetypes.StoreKey) error { Adding or removing modules during upgrade: ```go -// In constants.go +/ In constants.go var Upgrade = upgrades.Upgrade{ UpgradeName: "v2.0.0", CreateUpgradeHandler: CreateUpgradeHandler, @@ -225,18 +225,18 @@ var Upgrade = upgrades.Upgrade{ }, } -// In handler.go +/ In handler.go func CreateUpgradeHandler(...) 
upgradetypes.UpgradeHandler { return func(c context.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { - // Delete old module version + / Delete old module version delete(vm, "oldmodule") - // Initialize new module + / Initialize new module if err := newModuleKeeper.InitGenesis(ctx, defaultGenesis); err != nil { return nil, err } - // Run migrations + / Run migrations return mm.RunMigrations(c, configurator, vm) } } @@ -254,16 +254,16 @@ Make migrations idempotent when possible: ```go func migrateSomething(ctx sdk.Context, store sdk.KVStore) error { - // Check if migration already done + / Check if migration already done if store.Has(migrationCompleteKey) { ctx.Logger().Info("Migration already completed, skipping") return nil } - // Perform migration - // ... + / Perform migration + / ... - // Mark as complete + / Mark as complete store.Set(migrationCompleteKey, []byte{1}) return nil } @@ -291,7 +291,7 @@ func migrate(ctx sdk.Context, keeper Keeper) error { } count++ - // Log progress for long migrations + / Log progress for long migrations if count%1000 == 0 { ctx.Logger().Info("Migration progress", "processed", count) } @@ -311,22 +311,22 @@ func TestUpgradeHandler(t *testing.T) { app := setupApp(t) ctx := app.NewContext(false, tmproto.Header{Height: 1}) - // Setup pre-upgrade state + / Setup pre-upgrade state setupOldState(t, ctx, app) - // Run upgrade handler + / Run upgrade handler _, err := v1_0_0.CreateUpgradeHandler( app.ModuleManager, app.configurator, &upgrades.UpgradeKeepers{ - // ... keepers + / ... 
keepers }, app.keys, )(ctx, upgradetypes.Plan{Name: "v1.0.0"}, app.ModuleManager.GetVersionMap()) require.NoError(t, err) - // Verify post-upgrade state + / Verify post-upgrade state verifyNewState(t, ctx, app) } ``` @@ -385,54 +385,11 @@ mantrachaind query upgrade applied v1.0.0 For Cosmos EVM chains, specific migrations include: -- **[ERC20 Precompiles Migration](./erc20-precompiles-migration)**: Required for v0.3.x to v0.4.0 +- **[ERC20 Precompiles Migration](/docs/evm/v0.4.x/documentation/integration/erc20-precompiles-migration)**: Required for v0.3.x to v0.4.0 - **Fee Market Parameters**: Updating EIP-1559 parameters - **Custom Precompiles**: Registering new precompiled contracts - **EVM State**: Migrating account balances or contract storage -## Troubleshooting - -### Consensus Failure - -**Symptom:** Chain halts with consensus failure at upgrade height - -**Causes:** -- Validators running different binary versions -- Upgrade handler not registered -- Non-deterministic migration logic - -**Solution:** -- Ensure all validators have the same binary -- Verify upgrade handler is registered -- Review migration logic for non-determinism - -### Upgrade Panic - -**Symptom:** Node panics during upgrade - -**Causes:** -- Unhandled error in migration -- Missing required state -- Invalid type assertions - -**Solution:** -- Add comprehensive error handling -- Validate state before migration -- Use safe type conversions - -### State Corruption - -**Symptom:** Invalid state after upgrade - -**Causes:** -- Partial migration completion -- Incorrect data transformation -- Missing cleanup of old data - -**Solution:** -- Make migrations atomic -- Thoroughly test transformations -- Ensure old data is properly cleaned up ## References diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bank.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bank.mdx index 7202ce0d..e364ef71 100644 --- 
a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bank.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bank.mdx @@ -31,20 +31,20 @@ The Bank precompile provides ERC20-style access to native Cosmos SDK tokens, ena ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const precompileAddress = "0x0000000000000000000000000000000000000804"; const precompileAbi = [ "function balances(address account) view returns (tuple(address contractAddress, uint256 amount)[])" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Address to query -const accountAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Address to query +const accountAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getBalances() { try { @@ -90,16 +90,16 @@ curl -X POST --data '{ ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const precompileAddress = "0x0000000000000000000000000000000000000804"; const precompileAbi = [ "function totalSupply() view returns (tuple(address contractAddress, uint256 amount)[])" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); async function getTotalSupply() { @@ -145,20 +145,20 @@ curl -X POST --data '{ ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI 
const precompileAddress = "0x0000000000000000000000000000000000000804"; const precompileAbi = [ "function supplyOf(address erc20Address) view returns (uint256)" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// ERC20 address to query -const erc20Address = "0xdAC17F958D2ee523a2206206994597C13D831ec7"; // Placeholder for a token contract +/ ERC20 address to query +const erc20Address = "0xdAC17F958D2ee523a2206206994597C13D831ec7"; / Placeholder for a token contract async function getSupplyOf() { try { @@ -206,20 +206,20 @@ The Balance struct represents a token balance with its associated ERC20 contract ## Full Interface & ABI ```solidity title="Bank Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.18; -/// @dev The IBank contract's address. +/ @dev The IBank contract's address. address constant IBANK_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000804; -/// @dev The IBank contract's instance. +/ @dev The IBank contract's instance. IBank constant IBANK_CONTRACT = IBank(IBANK_PRECOMPILE_ADDRESS); -/// @dev Balance specifies the ERC20 contract address and the amount of tokens. +/ @dev Balance specifies the ERC20 contract address and the amount of tokens. struct Balance { - /// contractAddress defines the ERC20 contract address. + / contractAddress defines the ERC20 contract address. address contractAddress; - /// amount of tokens + / amount of tokens uint256 amount; } @@ -229,21 +229,21 @@ struct Balance { * @dev Interface for querying balances and supply from the Bank module. */ interface IBank { - /// @dev balances defines a method for retrieving all the native token balances - /// for a given account. - /// @param account the address of the account to query balances for. - /// @return balances the array of native token balances. 
+ / @dev balances defines a method for retrieving all the native token balances + / for a given account. + / @param account the address of the account to query balances for. + / @return balances the array of native token balances. function balances( address account ) external view returns (Balance[] memory balances); - /// @dev totalSupply defines a method for retrieving the total supply of all - /// native tokens. - /// @return totalSupply the supply as an array of native token balances + / @dev totalSupply defines a method for retrieving the total supply of all + / native tokens. + / @return totalSupply the supply as an array of native token balances function totalSupply() external view returns (Balance[] memory totalSupply); - /// @dev supplyOf defines a method for retrieving the total supply of a particular native coin. - /// @return totalSupply the supply as a uint256 + / @dev supplyOf defines a method for retrieving the total supply of a particular native coin. + / @return totalSupply the supply as a uint256 function supplyOf( address erc20Address ) external view returns (uint256 totalSupply); diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bech32.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bech32.mdx index 5ec7bf3d..07940740 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bech32.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bech32.mdx @@ -35,20 +35,20 @@ Both methods use a configurable base gas amount that is set during chain initial ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const precompileAddress = "0x0000000000000000000000000000000000000400"; const precompileAbi = [ "function hexToBech32(address addr, string memory prefix) returns (string memory)" ]; -// Create a contract 
instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const ethAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Inputs +const ethAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder const prefix = "cosmos"; async function convertToBech32() { @@ -96,20 +96,20 @@ curl -X POST --data '{ ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// Connect to the network +/ Connect to the network const provider = new ethers.JsonRpcProvider(""); -// Precompile address and ABI +/ Precompile address and ABI const precompileAddress = "0x0000000000000000000000000000000000000400"; const precompileAbi = [ "function bech32ToHex(string memory bech32Address) returns (address)" ]; -// Create a contract instance +/ Create a contract instance const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const bech32Address = "cosmos1mrdxhunfvjhe6lhdncp72dq46da2jcz9d9sh93"; // Placeholder +/ Input +const bech32Address = "cosmos1mrdxhunfvjhe6lhdncp72dq46da2jcz9d9sh93"; / Placeholder async function convertToHex() { try { @@ -148,13 +148,13 @@ curl -X POST --data '{ ## Full Interface & ABI ```solidity title="Bech32 Solidity Interface" lines -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.17; -/// @dev The Bech32I contract's address. +/ @dev The Bech32I contract's address. address constant Bech32_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000400; -/// @dev The Bech32I contract's instance. +/ @dev The Bech32I contract's instance. 
Bech32I constant BECH32_CONTRACT = Bech32I(Bech32_PRECOMPILE_ADDRESS); interface Bech32I { diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/callbacks.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/callbacks.mdx index 464d5770..d10bee01 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/callbacks.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/callbacks.mdx @@ -49,7 +49,7 @@ When implementing the Callbacks interface, consider the following security aspec ### Example Security Pattern ```solidity contract SecureIBCCallback is ICallbacks { - address constant IBC_MODULE = 0x...; // IBC module address + address constant IBC_MODULE = 0x...; / IBC module address modifier onlyIBC() { require(msg.sender == IBC_MODULE, "Unauthorized"); @@ -57,11 +57,11 @@ contract SecureIBCCallback is ICallbacks { } function onPacketAcknowledgement(...) external onlyIBC { - // Callback logic + / Callback logic } function onPacketTimeout(...) external onlyIBC { - // Timeout logic + / Timeout logic } } ``` @@ -69,19 +69,19 @@ contract SecureIBCCallback is ICallbacks { ## Full Solidity Interface & ABI ```solidity title="Callbacks Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.18; interface ICallbacks { - /// @dev Callback function to be called on the source chain - /// after the packet life cycle is completed and acknowledgement is processed - /// by source chain. The contract address is passed the packet information and acknowledgmeent - /// to execute the callback logic. 
- /// @param channelId the channnel identifier of the packet - /// @param portId the port identifier of the packet - /// @param sequence the sequence number of the packet - /// @param data the data of the packet - /// @param acknowledgement the acknowledgement of the packet + / @dev Callback function to be called on the source chain + / after the packet life cycle is completed and acknowledgement is processed + / by source chain. The contract address is passed the packet information and acknowledgement + / to execute the callback logic. + / @param channelId the channel identifier of the packet + / @param portId the port identifier of the packet + / @param sequence the sequence number of the packet + / @param data the data of the packet + / @param acknowledgement the acknowledgement of the packet function onPacketAcknowledgement( string memory channelId, string memory portId, @@ -90,14 +90,14 @@ interface ICallbacks { bytes memory acknowledgement ) external; - /// @dev Callback function to be called on the source chain - /// after the packet life cycle is completed and the packet is timed out - /// by source chain. The contract address is passed the packet information - /// to execute the callback logic. + / @dev Callback function to be called on the source chain + / after the packet life cycle is completed and the packet is timed out + / by source chain. The contract address is passed the packet information + / to execute the callback logic.
+ / @param channelId the channel identifier of the packet + / @param portId the port identifier of the packet + / @param sequence the sequence number of the packet + / @param data the data of the packet function onPacketTimeout( string memory channelId, string memory portId, diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/distribution.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/distribution.mdx index cb1f863e..193d37b1 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/distribution.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/distribution.mdx @@ -29,7 +29,7 @@ Gas costs are approximated and may vary based on call complexity and chain setti The precompile defines the following constants for the various Cosmos SDK message types: ```solidity -// Transaction message type URLs +/ Transaction message type URLs string constant MSG_CLAIM_REWARDS = "/cosmos.distribution.v1beta1.MsgClaimRewards"; string constant MSG_SET_WITHDRAW_ADDRESS = "/cosmos.distribution.v1beta1.MsgSetWithdrawAddress"; string constant MSG_WITHDRAW_DELEGATOR_REWARD = "/cosmos.distribution.v1beta1.MsgWithdrawDelegatorReward"; @@ -68,18 +68,18 @@ Retrieves comprehensive reward information for all of a delegator's positions.
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the relevant parts of the precompile +/ ABI definition for the relevant parts of the precompile const precompileAbi = [ "function delegationTotalRewards(address delegatorAddress) view returns (tuple(string validatorAddress, tuple(string denom, uint256 amount, uint8 precision)[] reward)[] rewards, tuple(string denom, uint256 amount, uint8 precision)[] total)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The address of the delegator to query -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Input: The address of the delegator to query +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getTotalRewards() { try { @@ -120,19 +120,19 @@ Queries pending rewards for a specific delegation. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function delegationRewards(address delegatorAddress, string memory validatorAddress) view returns (tuple(string denom, uint256 amount, uint8 precision)[] rewards)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder -const validatorAddress = "artvaloper1..."; // Placeholder +/ Inputs +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder +const validatorAddress = "artvaloper1..."; / Placeholder async function getDelegationRewards() { try { @@ -170,18 +170,18 @@ Retrieves a list of all validators from which a delegator has rewards. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function delegatorValidators(address delegatorAddress) view returns (string[] validators)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Input +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getDelegatorValidators() { try { @@ -219,18 +219,18 @@ Queries the address that can withdraw rewards for a given delegator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function delegatorWithdrawAddress(address delegatorAddress) view returns (string withdrawAddress)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Input +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getWithdrawAddress() { try { @@ -268,12 +268,12 @@ Queries the current balance of the community pool. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function communityPool() view returns (tuple(string denom, uint256 amount, uint8 precision)[] coins)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); @@ -314,18 +314,18 @@ Queries the accumulated commission for a specific validator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validatorCommission(string memory validatorAddress) view returns (tuple(string denom, uint256 amount, uint8 precision)[] commission)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const validatorAddress = "artvaloper1..."; // Placeholder +/ Input +const validatorAddress = "artvaloper1..."; / Placeholder async function getValidatorCommission() { try { @@ -363,18 +363,18 @@ Queries a validator's commission and self-delegation rewards. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validatorDistributionInfo(string memory validatorAddress) view returns (tuple(string operatorAddress, tuple(string denom, uint256 amount, uint8 precision)[] selfBondRewards, tuple(string denom, uint256 amount, uint8 precision)[] commission) distributionInfo)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const validatorAddress = "artvaloper1..."; // Placeholder +/ Input +const validatorAddress = "artvaloper1..."; / Placeholder async function getValidatorDistInfo() { try { @@ -412,18 +412,18 @@ Queries the outstanding rewards of a validator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validatorOutstandingRewards(string memory validatorAddress) view returns (tuple(string denom, uint256 amount, uint8 precision)[] rewards)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const validatorAddress = "artvaloper1..."; // Placeholder +/ Input +const validatorAddress = "artvaloper1..."; / Placeholder async function getOutstandingRewards() { try { @@ -461,20 +461,20 @@ Queries slashing events for a validator within a specific height range. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validatorSlashes(string memory validatorAddress, uint64 startingHeight, uint64 endingHeight, tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pageRequest) view returns (tuple(uint64 validatorPeriod, tuple(uint256 amount, uint8 precision) fraction)[] slashes, tuple(bytes nextKey, uint64 total) pageResponse)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000801"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const validatorAddress = "artvaloper1..."; // Placeholder -const startingHeight = 1000; // Starting block height -const endingHeight = 2000; // Ending block height +/ Inputs +const validatorAddress = "artvaloper1..."; / Placeholder +const startingHeight = 1000; / Starting block height +const endingHeight = 2000; / Ending block height 
const pageRequest = { key: "0x", offset: 0, @@ -521,15 +521,15 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="Distribution Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.17; import "../common/Types.sol"; -/// @dev The DistributionI contract's address. +/ @dev The DistributionI contract's address. address constant DISTRIBUTION_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000801; -/// @dev The DistributionI contract's instance. +/ @dev The DistributionI contract's instance. DistributionI constant DISTRIBUTION_CONTRACT = DistributionI( DISTRIBUTION_PRECOMPILE_ADDRESS ); @@ -550,10 +550,10 @@ struct DelegationDelegatorReward { DecCoin[] reward; } -/// @author Evmos Team -/// @title Distribution Precompile Contract -/// @dev The interface through which solidity contracts will interact with Distribution -/// @custom:address 0x0000000000000000000000000000000000000801 +/ @author Evmos Team +/ @title Distribution Precompile Contract +/ @dev The interface through which solidity contracts will interact with Distribution +/ @custom:address 0x0000000000000000000000000000000000000801 interface DistributionI { event ClaimRewards(address indexed delegatorAddress, uint256 amount); event SetWithdrawerAddress(address indexed caller, string withdrawerAddress); diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/erc20.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/erc20.mdx index d9abe31c..13082c43 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/erc20.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/erc20.mdx @@ -54,11 +54,11 @@ Returns the total amount of tokens in existence. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// The address of the ERC20 precompile is dynamic, assigned when a token pair is registered. 
+/ The address of the ERC20 precompile is dynamic, assigned when a token pair is registered. const erc20PrecompileAddress = ""; const provider = new ethers.JsonRpcProvider(""); -// A generic ERC20 ABI is sufficient for read-only calls +/ A generic ERC20 ABI is sufficient for read-only calls const erc20Abi = ["function totalSupply() view returns (uint256)"]; const contract = new ethers.Contract(erc20PrecompileAddress, erc20Abi, provider); @@ -99,7 +99,7 @@ Returns the token balance of a specified account. import { ethers } from "ethers"; const erc20PrecompileAddress = ""; -const accountAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +const accountAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder const provider = new ethers.JsonRpcProvider(""); const erc20Abi = ["function balanceOf(address account) view returns (uint256)"]; @@ -151,8 +151,8 @@ Returns the remaining number of tokens that a spender is allowed to spend on beh import { ethers } from "ethers"; const erc20PrecompileAddress = ""; -const ownerAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder -const spenderAddress = "0x27f320b7280911c7987a421a8138997a48d4b315"; // Placeholder +const ownerAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder +const spenderAddress = "0x27f320b7280911c7987a421a8138997a48d4b315"; / Placeholder const provider = new ethers.JsonRpcProvider(""); const erc20Abi = ["function allowance(address owner, address spender) view returns (uint256)"]; @@ -320,8 +320,8 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="ERC20 Solidity Interface" lines expandable -// SPDX-License-Identifier: MIT -// OpenZeppelin Contracts (last updated v4.6.0) (token/ERC20/IERC20.sol) +/ SPDX-License-Identifier: MIT +/ OpenZeppelin Contracts (last updated v4.6.0) (token/ERC20/IERC20.sol) pragma solidity ^0.8.0; diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/governance.mdx 
b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/governance.mdx index e9b6385e..d3629c8e 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/governance.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/governance.mdx @@ -49,17 +49,17 @@ Retrieves detailed information about a specific proposal. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile, including the complex return struct +/ ABI for the precompile, including the complex return struct const precompileAbi = [ "function getProposal(uint64 proposalId) view returns (tuple(uint64 id, address proposer, string metadata, uint64 submit_time, uint64 voting_start_time, uint64 voting_end_time, uint8 status, tuple(string yes_count, string abstain_count, string no_count, string no_with_veto_count) final_tally_result, tuple(string denom, uint256 amount)[] total_deposit, string[] messages) proposal)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The ID of the proposal to query +/ Input: The ID of the proposal to query const proposalId = 1; async function getProposalDetails() { @@ -101,32 +101,32 @@ Retrieves a filtered and paginated list of proposals. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile, including the complex return struct +/ ABI for the precompile, including the complex return struct const precompileAbi = [ "function getProposals(uint32 proposal_status, address voter, address depositor, tuple(bytes key, uint64 offset, uint64 limit, bool count_total, bool reverse) pagination) view returns (tuple(uint64 id, string[] messages, uint32 status, tuple(string yes_count, string abstain_count, string no_count, string no_with_veto_count) final_tally_result, uint64 submit_time, uint64 deposit_end_time, tuple(string denom, uint256 amount)[] total_deposit, uint64 voting_start_time, uint64 voting_end_time, string metadata, string title, string summary, address proposer)[] proposals, tuple(bytes next_key, uint64 total) page_response)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs for filtering and pagination +/ Inputs for filtering and pagination const pagination = { offset: 0, - key: "0x", // Start from the beginning + key: "0x", / Start from the beginning limit: 10, count_total: true, reverse: true, }; -const proposalStatus = 4; // 0 for Unspecified, 1 for Deposit, 2 for Voting, etc. -const voterAddress = ethers.ZeroAddress; // Filter by voter, or ZeroAddress for none -const depositorAddress = ethers.ZeroAddress; // Filter by depositor, or ZeroAddress for none +const proposalStatus = 4; / 0 for Unspecified, 1 for Deposit, 2 for Voting, etc. 
+const voterAddress = ethers.ZeroAddress; / Filter by voter, or ZeroAddress for none +const depositorAddress = ethers.ZeroAddress; / Filter by depositor, or ZeroAddress for none async function getProposalsList() { try { const result = await contract.getProposals(proposalStatus, voterAddress, depositorAddress, pagination); - // The result object contains 'proposals' and 'page_response' + / The result object contains 'proposals' and 'page_response' console.log("Proposals:", JSON.stringify(result.proposals, null, 2)); console.log("Pagination Response:", result.page_response); } catch (error) { @@ -162,17 +162,17 @@ Retrieves the current or final vote tally for a proposal. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getTallyResult(uint64 proposalId) view returns (tuple(string yes_count, string abstain_count, string no_count, string no_with_veto_count) tally)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The ID of the proposal to get the tally for +/ Input: The ID of the proposal to get the tally for const proposalId = 1; async function getTally() { @@ -217,19 +217,19 @@ Retrieves the vote cast by a specific address on a proposal.
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getVote(uint64 proposalId, address voter) view returns (tuple(uint64 proposal_id, address voter, tuple(uint8 option, string weight)[] options, string metadata) vote)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs +/ Inputs const proposalId = 1; -const voterAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +const voterAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getVoteDetails() { try { @@ -268,17 +268,17 @@ Retrieves all votes cast on a proposal, with pagination. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getVotes(uint64 proposalId, tuple(bytes key, uint64 offset, uint64 limit, bool count_total, bool reverse) pagination) view returns (tuple(uint64 proposal_id, address voter, tuple(uint8 option, string weight)[] options, string metadata)[] votes, tuple(bytes next_key, uint64 total) page_response)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs +/ Inputs const proposalId = 1; const pagination = { offset: 0, @@ -326,19 +326,19 @@ Retrieves deposit information for a specific depositor on a proposal. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getDeposit(uint64 proposalId, address depositor) view returns (tuple(address depositor, tuple(string denom, uint256 amount)[] amount) deposit)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs +/ Inputs const proposalId = 1; -const depositorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +const depositorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getDepositDetails() { try { @@ -377,17 +377,17 @@ Retrieves all deposits made on a proposal, with pagination. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getDeposits(uint64 proposalId, tuple(bytes key, uint64 offset, uint64 limit, bool count_total, bool reverse) pagination) view returns (tuple(address depositor, tuple(string denom, uint256 amount)[] amount)[] deposits, tuple(bytes next_key, uint64 total) page_response)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs +/ Inputs const proposalId = 1; const pagination = { offset: 0, @@ -435,12 +435,12 @@ Retrieves current governance parameters. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getParams() view returns (tuple(int64 voting_period, tuple(string denom, uint256 amount)[] min_deposit, int64 max_deposit_period, string quorum, string threshold, string veto_threshold, string min_initial_deposit_ratio, string proposal_cancel_ratio, string proposal_cancel_dest, int64 expedited_voting_period, string expedited_threshold, tuple(string denom, uint256 amount)[] expedited_min_deposit, bool burn_vote_quorum, bool burn_proposal_deposit_prevote, bool burn_vote_veto, string min_deposit_ratio) params)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); @@ -484,12 +484,12 @@ Retrieves the current governance constitution. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI for the precompile +/ ABI for the precompile const precompileAbi = [ "function getConstitution() view returns (string constitution)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000805"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); @@ -526,34 +526,34 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="Governance Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.17; import "../common/Types.sol"; -/// @dev The IGov contract's address. +/ @dev The IGov contract's address. address constant GOV_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000805; -/// @dev The IGov contract's instance. 
+/ @dev The IGov contract's instance. IGov constant GOV_CONTRACT = IGov(GOV_PRECOMPILE_ADDRESS); /** * @dev VoteOption enumerates the valid vote options for a given governance proposal. */ enum VoteOption { - // Unspecified defines a no-op vote option. + / Unspecified defines a no-op vote option. Unspecified, - // Yes defines a yes vote option. + / Yes defines a yes vote option. Yes, - // Abstain defines an abstain vote option. + / Abstain defines an abstain vote option. Abstain, - // No defines a no vote option. + / No defines a no vote option. No, - // NoWithWeto defines a no with veto vote option. + / NoWithWeto defines a no with veto vote option. NoWithWeto } -/// @dev WeightedVote represents a vote on a governance proposal +/ @dev WeightedVote represents a vote on a governance proposal struct WeightedVote { uint64 proposalId; address voter; @@ -561,20 +561,20 @@ struct WeightedVote { string metadata; } -/// @dev WeightedVoteOption represents a weighted vote option +/ @dev WeightedVoteOption represents a weighted vote option struct WeightedVoteOption { VoteOption option; string weight; } -/// @dev DepositData represents information about a deposit on a proposal +/ @dev DepositData represents information about a deposit on a proposal struct DepositData { uint64 proposalId; address depositor; Coin[] amount; } -/// @dev TallyResultData represents the tally result of a proposal +/ @dev TallyResultData represents the tally result of a proposal struct TallyResultData { string yes; string abstain; @@ -582,7 +582,7 @@ struct TallyResultData { string noWithVeto; } -/// @dev ProposalData represents a governance proposal +/ @dev ProposalData represents a governance proposal struct ProposalData { uint64 id; string[] messages; @@ -599,7 +599,7 @@ struct ProposalData { address proposer; } -/// @dev Params defines the governance parameters +/ @dev Params defines the governance parameters struct Params { int64 votingPeriod; Coin[] minDeposit; @@ -619,78 +619,78 @@ struct 
Params { string minDepositRatio; } -/// @author The Evmos Core Team -/// @title Gov Precompile Contract -/// @dev The interface through which solidity contracts will interact with Gov +/ @author The Evmos Core Team +/ @title Gov Precompile Contract +/ @dev The interface through which solidity contracts will interact with Gov interface IGov { - /// @dev SubmitProposal defines an Event emitted when a proposal is submitted. - /// @param proposer the address of the proposer - /// @param proposalId the proposal of id + / @dev SubmitProposal defines an Event emitted when a proposal is submitted. + / @param proposer the address of the proposer + / @param proposalId the proposal of id event SubmitProposal(address indexed proposer, uint64 proposalId); - /// @dev CancelProposal defines an Event emitted when a proposal is canceled. - /// @param proposer the address of the proposer - /// @param proposalId the proposal of id + / @dev CancelProposal defines an Event emitted when a proposal is canceled. + / @param proposer the address of the proposer + / @param proposalId the proposal of id event CancelProposal(address indexed proposer, uint64 proposalId); - /// @dev Deposit defines an Event emitted when a deposit is made. - /// @param depositor the address of the depositor - /// @param proposalId the proposal of id - /// @param amount the amount of the deposit + / @dev Deposit defines an Event emitted when a deposit is made. + / @param depositor the address of the depositor + / @param proposalId the proposal of id + / @param amount the amount of the deposit event Deposit(address indexed depositor, uint64 proposalId, Coin[] amount); - /// @dev Vote defines an Event emitted when a proposal voted. - /// @param voter the address of the voter - /// @param proposalId the proposal of id - /// @param option the option for voter + / @dev Vote defines an Event emitted when a proposal voted. 
+ / @param voter the address of the voter + / @param proposalId the proposal of id + / @param option the option for voter event Vote(address indexed voter, uint64 proposalId, uint8 option); - /// @dev VoteWeighted defines an Event emitted when a proposal voted. - /// @param voter the address of the voter - /// @param proposalId the proposal of id - /// @param options the options for voter + / @dev VoteWeighted defines an Event emitted when a proposal voted. + / @param voter the address of the voter + / @param proposalId the proposal of id + / @param options the options for voter event VoteWeighted( address indexed voter, uint64 proposalId, WeightedVoteOption[] options ); - /// TRANSACTIONS + / TRANSACTIONS - /// @notice submitProposal creates a new proposal from a protoJSON document. - /// @dev submitProposal defines a method to submit a proposal. - /// @param jsonProposal The JSON proposal - /// @param deposit The deposit for the proposal - /// @return proposalId The proposal id + / @notice submitProposal creates a new proposal from a protoJSON document. + / @dev submitProposal defines a method to submit a proposal. + / @param jsonProposal The JSON proposal + / @param deposit The deposit for the proposal + / @return proposalId The proposal id function submitProposal( address proposer, bytes calldata jsonProposal, Coin[] calldata deposit ) external returns (uint64 proposalId); - /// @dev cancelProposal defines a method to cancel a proposal. - /// @param proposalId The proposal id - /// @return success Whether the transaction was successful or not + / @dev cancelProposal defines a method to cancel a proposal. + / @param proposalId The proposal id + / @return success Whether the transaction was successful or not function cancelProposal( address proposer, uint64 proposalId ) external returns (bool success); - /// @dev deposit defines a method to add a deposit to a proposal. 
- /// @param proposalId The proposal id - /// @param amount The amount to deposit + / @dev deposit defines a method to add a deposit to a proposal. + / @param proposalId The proposal id + / @param amount The amount to deposit function deposit( address depositor, uint64 proposalId, Coin[] calldata amount ) external returns (bool success); - /// @dev vote defines a method to add a vote on a specific proposal. - /// @param voter The address of the voter - /// @param proposalId the proposal of id - /// @param option the option for voter - /// @param metadata the metadata for voter send - /// @return success Whether the transaction was successful or not + / @dev vote defines a method to add a vote on a specific proposal. + / @param voter The address of the voter + / @param proposalId the proposal of id + / @param option the option for voter + / @param metadata the metadata for voter send + / @return success Whether the transaction was successful or not function vote( address voter, uint64 proposalId, @@ -698,12 +698,12 @@ interface IGov { string memory metadata ) external returns (bool success); - /// @dev voteWeighted defines a method to add a vote on a specific proposal. - /// @param voter The address of the voter - /// @param proposalId The proposal id - /// @param options The options for voter - /// @param metadata The metadata for voter send - /// @return success Whether the transaction was successful or not + / @dev voteWeighted defines a method to add a vote on a specific proposal. 
+ / @param voter The address of the voter + / @param proposalId The proposal id + / @param options The options for voter + / @param metadata The metadata for voter send + / @return success Whether the transaction was successful or not function voteWeighted( address voter, uint64 proposalId, @@ -711,23 +711,23 @@ interface IGov { string memory metadata ) external returns (bool success); - /// QUERIES + / QUERIES - /// @dev getVote returns the vote of a single voter for a - /// given proposalId. - /// @param proposalId The proposal id - /// @param voter The voter on the proposal - /// @return vote Voter's vote for the proposal + / @dev getVote returns the vote of a single voter for a + / given proposalId. + / @param proposalId The proposal id + / @param voter The voter on the proposal + / @return vote Voter's vote for the proposal function getVote( uint64 proposalId, address voter ) external view returns (WeightedVote memory vote); - /// @dev getVotes Returns the votes for a specific proposal. - /// @param proposalId The proposal id - /// @param pagination The pagination options - /// @return votes The votes for the proposal - /// @return pageResponse The pagination information + / @dev getVotes Returns the votes for a specific proposal. + / @param proposalId The proposal id + / @param pagination The pagination options + / @return votes The votes for the proposal + / @return pageResponse The pagination information function getVotes( uint64 proposalId, PageRequest calldata pagination @@ -736,20 +736,20 @@ interface IGov { view returns (WeightedVote[] memory votes, PageResponse memory pageResponse); - /// @dev getDeposit returns the deposit of a single depositor for a given proposalId. - /// @param proposalId The proposal id - /// @param depositor The address of the depositor - /// @return deposit The deposit information + / @dev getDeposit returns the deposit of a single depositor for a given proposalId. 
+ / @param proposalId The proposal id + / @param depositor The address of the depositor + / @return deposit The deposit information function getDeposit( uint64 proposalId, address depositor ) external view returns (DepositData memory deposit); - /// @dev getDeposits returns all deposits for a specific proposal. - /// @param proposalId The proposal id - /// @param pagination The pagination options - /// @return deposits The deposits for the proposal - /// @return pageResponse The pagination information + / @dev getDeposits returns all deposits for a specific proposal. + / @param proposalId The proposal id + / @param pagination The pagination options + / @return deposits The deposits for the proposal + / @return pageResponse The pagination information function getDeposits( uint64 proposalId, PageRequest calldata pagination @@ -761,27 +761,27 @@ interface IGov { PageResponse memory pageResponse ); - /// @dev getTallyResult returns the tally result of a proposal. - /// @param proposalId The proposal id - /// @return tallyResult The tally result of the proposal + / @dev getTallyResult returns the tally result of a proposal. + / @param proposalId The proposal id + / @return tallyResult The tally result of the proposal function getTallyResult( uint64 proposalId ) external view returns (TallyResultData memory tallyResult); - /// @dev getProposal returns the proposal details based on proposal id. - /// @param proposalId The proposal id - /// @return proposal The proposal data + / @dev getProposal returns the proposal details based on proposal id. + / @param proposalId The proposal id + / @return proposal The proposal data function getProposal( uint64 proposalId ) external view returns (ProposalData memory proposal); - /// @dev getProposals returns proposals with matching status. 
- /// @param proposalStatus The proposal status to filter by - /// @param voter The voter address to filter by, if any - /// @param depositor The depositor address to filter by, if any - /// @param pagination The pagination config - /// @return proposals The proposals matching the filter criteria - /// @return pageResponse The pagination information + / @dev getProposals returns proposals with matching status. + / @param proposalStatus The proposal status to filter by + / @param voter The voter address to filter by, if any + / @param depositor The depositor address to filter by, if any + / @param pagination The pagination config + / @return proposals The proposals matching the filter criteria + / @return pageResponse The pagination information function getProposals( uint32 proposalStatus, address voter, @@ -795,12 +795,12 @@ interface IGov { PageResponse memory pageResponse ); - /// @dev getParams returns the current governance parameters. - /// @return params The governance parameters + / @dev getParams returns the current governance parameters. + / @return params The governance parameters function getParams() external view returns (Params memory params); - /// @dev getConstitution returns the current constitution. - /// @return constitution The current constitution + / @dev getConstitution returns the current constitution. + / @return constitution The current constitution function getConstitution() external view returns (string memory constitution); } ``` diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/ics20.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/ics20.mdx index a629c249..62798838 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/ics20.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/ics20.mdx @@ -54,25 +54,25 @@ Initiates a cross-chain token transfer using the IBC protocol. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the transfer function +/ ABI definition for the transfer function const precompileAbi = [ "function transfer(string memory sourcePort, string memory sourceChannel, string memory token, address sender, string memory receiver, uint64 timeoutHeight, uint64 timeoutTimestamp, string memory memo) payable" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000802"; const signer = new ethers.Wallet("", provider); const contract = new ethers.Contract(precompileAddress, precompileAbi, signer); -// Transfer parameters +/ Transfer parameters const sourcePort = "transfer"; const sourceChannel = "channel-0"; -const token = "test"; // or ERC20 token address +const token = "test"; / or ERC20 token address const sender = await signer.getAddress(); -const receiver = "cosmos1..."; // Destination address on target chain -const timeoutHeight = 0; // 0 for no height timeout -const timeoutTimestamp = Math.floor(Date.now() / 1000) + 3600; // 1 hour from now +const receiver = "cosmos1..."; / Destination address on target chain +const timeoutHeight = 0; / 0 for no height timeout +const timeoutTimestamp = Math.floor(Date.now() / 1000) + 3600; / 1 hour from now const memo = ""; async function transferTokens() { @@ -86,7 +86,7 @@ async function transferTokens() { timeoutHeight, timeoutTimestamp, memo, - { value: ethers.parseEther("1.0") } // Amount to transfer + { value: ethers.parseEther("1.0") } / Amount to transfer ); console.log("Transfer initiated:", tx.hash); @@ -97,7 +97,7 @@ async function transferTokens() { } } -// transferTokens(); +/ transferTokens(); ``` ```bash cURL expandable lines @@ -116,18 +116,18 @@ Queries denomination information for an IBC token by its hash. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function denom(string memory hash) view returns (tuple(string base, tuple(string portId, string channelId)[] trace) denom)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000802"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The hash of the denomination to query -const denomHash = "ibc/..."; // Placeholder for actual denomination hash +/ Input: The hash of the denomination to query +const denomHash = "ibc/..."; / Placeholder for actual denomination hash async function getDenom() { try { @@ -164,17 +164,17 @@ Retrieves a paginated list of all denomination traces registered on the chain. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function denoms(tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pageRequest) view returns (tuple(string base, tuple(string portId, string channelId)[] trace)[] denoms, tuple(bytes nextKey, uint64 total) pageResponse)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000802"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input for pagination +/ Input for pagination const pagination = { key: "0x", offset: 0, @@ -220,18 +220,18 @@ Computes the hash of a denomination trace path. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function denomHash(string memory trace) view returns (string memory hash)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000802"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The trace path to hash -const tracePath = "transfer/channel-0/test"; // Placeholder for actual trace path +/ Input: The trace path to hash +const tracePath = "transfer/channel-0/test"; / Placeholder for actual trace path async function getDenomHash() { try { @@ -264,46 +264,46 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="ICS20 Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.18; import "../common/Types.sol"; -/// @dev The ICS20I contract's address. +/ @dev The ICS20I contract's address. address constant ICS20_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000802; -/// @dev The ICS20 contract's instance. +/ @dev The ICS20 contract's instance. ICS20I constant ICS20_CONTRACT = ICS20I(ICS20_PRECOMPILE_ADDRESS); -/// @dev Denom contains the base denomination for ICS20 fungible tokens and the -/// source tracing information path. +/ @dev Denom contains the base denomination for ICS20 fungible tokens and the +/ source tracing information path. struct Denom { - /// base denomination of the relayed fungible token. + / base denomination of the relayed fungible token. string base; - /// trace contains a list of hops for multi-hop transfers. + / trace contains a list of hops for multi-hop transfers. 
Hop[] trace; } -/// @dev Hop defines a port ID, channel ID pair specifying where -/// tokens must be forwarded next in a multi-hop transfer. +/ @dev Hop defines a port ID, channel ID pair specifying where +/ tokens must be forwarded next in a multi-hop transfer. struct Hop { string portId; string channelId; } -/// @author Evmos Team -/// @title ICS20 Transfer Precompiled Contract -/// @dev The interface through which solidity contracts will interact with IBC Transfer (ICS20) -/// @custom:address 0x0000000000000000000000000000000000000802 +/ @author Evmos Team +/ @title ICS20 Transfer Precompiled Contract +/ @dev The interface through which solidity contracts will interact with IBC Transfer (ICS20) +/ @custom:address 0x0000000000000000000000000000000000000802 interface ICS20I { - /// @dev Emitted when an ICS-20 transfer is executed. - /// @param sender The address of the sender. - /// @param receiver The address of the receiver. - /// @param sourcePort The source port of the IBC transaction, For v2 packets, leave it empty. - /// @param sourceChannel The source channel of the IBC transaction, For v2 packets, set the client ID. - /// @param denom The denomination of the tokens transferred. - /// @param amount The amount of tokens transferred. - /// @param memo The IBC transaction memo. + / @dev Emitted when an ICS-20 transfer is executed. + / @param sender The address of the sender. + / @param receiver The address of the receiver. + / @param sourcePort The source port of the IBC transaction, For v2 packets, leave it empty. + / @param sourceChannel The source channel of the IBC transaction, For v2 packets, set the client ID. + / @param denom The denomination of the tokens transferred. + / @param amount The amount of tokens transferred. + / @param memo The IBC transaction memo. event IBCTransfer( address indexed sender, string indexed receiver, @@ -314,19 +314,19 @@ interface ICS20I { string memo ); - /// @dev Transfer defines a method for performing an IBC transfer. 
- /// @param sourcePort the port on which the packet will be sent - /// @param sourceChannel the channel by which the packet will be sent - /// @param denom the denomination of the Coin to be transferred to the receiver - /// @param amount the amount of the Coin to be transferred to the receiver - /// @param sender the hex address of the sender - /// @param receiver the bech32 address of the receiver - /// @param timeoutHeight the timeout height relative to the current block height. - /// The timeout is disabled when set to 0 - /// @param timeoutTimestamp the timeout timestamp in absolute nanoseconds since unix epoch. - /// The timeout is disabled when set to 0 - /// @param memo optional memo - /// @return nextSequence sequence number of the transfer packet sent + / @dev Transfer defines a method for performing an IBC transfer. + / @param sourcePort the port on which the packet will be sent + / @param sourceChannel the channel by which the packet will be sent + / @param denom the denomination of the Coin to be transferred to the receiver + / @param amount the amount of the Coin to be transferred to the receiver + / @param sender the hex address of the sender + / @param receiver the bech32 address of the receiver + / @param timeoutHeight the timeout height relative to the current block height. + / The timeout is disabled when set to 0 + / @param timeoutTimestamp the timeout timestamp in absolute nanoseconds since unix epoch. + / The timeout is disabled when set to 0 + / @param memo optional memo + / @return nextSequence sequence number of the transfer packet sent function transfer( string memory sourcePort, string memory sourceChannel, @@ -339,8 +339,8 @@ interface ICS20I { string memory memo ) external returns (uint64 nextSequence); - /// @dev denoms Defines a method for returning all denoms. - /// @param pageRequest Defines the pagination parameters to for the request. + / @dev denoms Defines a method for returning all denoms. 
+ / @param pageRequest Defines the pagination parameters to for the request. function denoms( PageRequest memory pageRequest ) @@ -351,12 +351,12 @@ interface ICS20I { PageResponse memory pageResponse ); - /// @dev Denom defines a method for returning a denom. + / @dev Denom defines a method for returning a denom. function denom( string memory hash ) external view returns (Denom memory denom); - /// @dev DenomHash defines a method for returning a hash of the denomination info. + / @dev DenomHash defines a method for returning a hash of the denomination info. function denomHash( string memory trace ) external view returns (string memory hash); diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/index.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/index.mdx index 8fffdfab..ba5ede5b 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/index.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/index.mdx @@ -24,13 +24,13 @@ Precompiled contracts provide direct, gas-efficient access to native Cosmos SDK 3. Never assume 18-decimal precision ```solidity - // WRONG: Assuming Ethereum's 18 decimals - uint256 amount = 1 ether; // 1000000000000000000 - staking.delegate(validator, amount); // Delegates 1 trillion tokens! + / WRONG: Assuming Ethereum's 18 decimals + uint256 amount = 1 ether; / 1000000000000000000 + staking.delegate(validator, amount); / Delegates 1 trillion tokens! 
- // CORRECT: Using chain's native 6 decimals - uint256 amount = 1000000; // 1 token with 6 decimals - staking.delegate(validator, amount); // Delegates 1 token + / CORRECT: Using chain's native 6 decimals + uint256 amount = 1000000; / 1 token with 6 decimals + staking.delegate(validator, amount); / Delegates 1 token ``` diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/overview.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/overview.mdx index 70d4e146..6d6a99dd 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/overview.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/overview.mdx @@ -5,17 +5,193 @@ icon: "code" keywords: ['precompiles', 'precompiled contracts', 'cosmos sdk', 'evm', 'smart contracts', 'staking', 'governance', 'ibc', 'bank', 'distribution', 'slashing', 'evidence', 'bech32', 'p256', 'erc20', 'werc20', 'callbacks'] --- +## How Precompiles Work + +Precompiles are special contracts that exist at the protocol level rather than as deployed bytecode on the blockchain. They execute native code directly within the node software, offering significant performance advantages over regular smart contracts while maintaining compatibility with the EVM interface. + +### Architecture + +Precompiles operate through a unique architecture: + +1. **Native Implementation**: Unlike regular smart contracts that execute bytecode through the EVM interpreter, precompiles run optimized native code directly within the blockchain node +2. **Fixed Addresses**: Each precompile is assigned a specific address in the reserved range (typically `0x00...00` to `0x00...FF` for standard precompiles, with extended ranges for custom implementations) +3. **Gas Metering**: Precompiles use predetermined gas costs for operations, often much lower than equivalent EVM bytecode execution +4. 
**State Access**: They have direct access to the underlying blockchain state through native modules, bypassing EVM state management overhead + +### Building and Implementing Precompiles + +Creating custom precompiles involves several key steps: + +1. **Define the Interface**: Create a Solidity interface that defines the precompile's functions and events +2. **Implement Native Logic**: Write the actual implementation in the node's native language (typically Go for Cosmos chains) +3. **Register the Precompile**: Map the implementation to a specific address in the EVM configuration +4. **Gas Configuration**: Define appropriate gas costs for each operation based on computational complexity + +The implementation must handle: +- Input parsing and validation from EVM calldata +- State transitions through the native module system +- Result encoding back to EVM-compatible formats +- Error handling and reversion logic + +## Implementation Patterns + +Based on the Cosmos EVM precompile architecture, several critical patterns must be followed when implementing precompiles: + +### 1. The Run/run Wrapper Pattern + +Every precompile implements a public `Run` method that wraps a private `run` method. This pattern provides consistent error handling and return value formatting: + +```go +// Public Run method - entry point for EVM +func (p Precompile) Run(evm *vm.EVM, contract *vm.Contract, readOnly bool) (bz []byte, err error) { + bz, err = p.run(evm, contract, readOnly) + if err != nil { + return cmn.ReturnRevertError(evm, err) + } + return bz, nil +} + +// Private run method - actual implementation +func (p Precompile) run(evm *vm.EVM, contract *vm.Contract, readOnly bool) (bz []byte, err error) { + // Setup and method routing logic + ctx, stateDB, method, initialGas, args, err := p.RunSetup(evm, contract, readOnly, p.IsTransaction) + + // Balance handler for native token operations + p.GetBalanceHandler().BeforeBalanceChange(ctx) + + // Method execution with gas tracking + // ... 
+
+  // Finalize balance changes
+  if err = p.GetBalanceHandler().AfterBalanceChange(ctx, stateDB); err != nil {
+    return nil, err
+  }
+
+  return bz, nil
+}
+```
+
+### 2. Native Balance Handler
+
+Precompiles that modify native token balances must use the balance handler to track changes correctly. The handler monitors bank module events and synchronizes them with the EVM state:
+
+```go
+// Before any balance-changing operation
+p.GetBalanceHandler().BeforeBalanceChange(ctx)
+
+// Execute the operation that may change balances
+// ...
+
+// After the operation, sync balance changes with StateDB
+if err = p.GetBalanceHandler().AfterBalanceChange(ctx, stateDB); err != nil {
+  return nil, err
+}
+```
+
+The balance handler:
+- Records the event count before operations
+- Processes `CoinSpent` and `CoinReceived` events after operations
+- Updates the StateDB with `AddBalance` and `SubBalance` calls
+- Handles fractional balance changes for precise accounting
+- Bypasses blocked addresses to prevent authorization errors
+
+### 3. Required Structure
+
+Every precompile must embed the common `Precompile` struct and implement these components:
+
+```go
+type Precompile struct {
+  cmn.Precompile               // Embedded common precompile
+  stakingKeeper    Keeper      // Module keeper
+  stakingMsgServer MsgServer   // Message server for transactions
+  stakingQuerier   QueryServer // Query server for reads
+}
+
+// Required methods
+func (p Precompile) RequiredGas(input []byte) uint64
+func (p Precompile) Run(evm *vm.EVM, contract *vm.Contract, readOnly bool) ([]byte, error)
+func (p Precompile) IsTransaction(method *abi.Method) bool
+```
+
+### 4. Gas Management
+
+Precompiles use a two-phase gas management approach:
+
+```go
+// Initial gas tracking
+initialGas := ctx.GasMeter().GasConsumed()
+
+// Deferred gas error handler
+defer cmn.HandleGasError(ctx, contract, initialGas, &err)()
+
+// After method execution
+cost := ctx.GasMeter().GasConsumed() - initialGas
+if !contract.UseGas(cost, nil, tracing.GasChangeCallPrecompiledContract) {
+  return nil, vm.ErrOutOfGas
+}
+```
+
+### 5. State Management
+
+Precompiles must manage state transitions explicitly so that changes can be reverted if execution fails:
+
+```go
+// Take snapshot before changes
+snapshot := stateDB.MultiStoreSnapshot()
+events := ctx.EventManager().Events()
+
+// Add to journal for reversion
+err = stateDB.AddPrecompileFn(snapshot, events)
+
+// Commit cache context changes
+if err := stateDB.CommitWithCacheCtx(); err != nil {
+  return nil, err
+}
+```
+
+## Testing Precompiles
+
+### The Testing Challenge
+
+A significant challenge when working with precompiles is that their code doesn't exist as deployable bytecode. Traditional development tools like Foundry or Hardhat cannot directly access precompile implementations, since they simulate the EVM locally without the underlying node infrastructure. When these tools encounter a call to a precompile address, they find no deployed code and the call fails.
+
+### Using Foundry's Etch Cheatcode
+
+Foundry provides a powerful workaround through its `vm.etch` cheatcode, which allows you to inject bytecode at any address during testing. This enables simulation of precompile behavior by deploying mock implementations at the precompile addresses.
+
+Here's how to use `etch` effectively:
+
+```solidity
+// Deploy a mock implementation of the precompile
+MockStakingPrecompile mockStaking = new MockStakingPrecompile();
+
+// Use etch to place the mock bytecode at the precompile address
+vm.etch(0x0000000000000000000000000000000000000800, address(mockStaking).code);
+
+// Now calls to the precompile address will execute the mock implementation
+IStaking staking = IStaking(0x0000000000000000000000000000000000000800);
+staking.delegate(validator, amount); // This will work in tests
+```
+
+This approach allows you to:
+- Test smart contracts that interact with precompiles
+- Simulate various precompile responses and edge cases
+- Develop and debug locally without a full node setup
+- Maintain consistent testing workflows with other smart contract development
+
+For comprehensive testing, consider creating mock implementations that closely mirror the actual precompile behavior, including gas consumption patterns and error conditions. This helps ensure your contracts behave correctly when deployed to the actual chain.
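+The pattern above can be assembled into a complete Foundry test. The following is a minimal sketch, not the actual cosmos/evm test suite: `MockStakingPrecompile` and the trimmed `IStaking` interface are hypothetical stand-ins that record calls instead of reproducing real staking logic, with the `delegate` signature following the ABI shown in the staking reference:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity >=0.8.17;
+
+import "forge-std/Test.sol";
+
+// Hypothetical, trimmed interface; the real staking precompile exposes many more methods.
+interface IStaking {
+    function delegate(address delegatorAddress, string memory validatorAddress, uint256 amount) external payable;
+}
+
+// Hypothetical mock: records the last delegation instead of performing staking logic.
+contract MockStakingPrecompile is IStaking {
+    string public lastValidator;
+    uint256 public lastAmount;
+
+    function delegate(address, string memory validatorAddress, uint256 amount) external payable {
+        lastValidator = validatorAddress;
+        lastAmount = amount;
+    }
+}
+
+contract StakingPrecompileTest is Test {
+    address constant STAKING = 0x0000000000000000000000000000000000000800;
+
+    function setUp() public {
+        // Inject the mock's runtime bytecode at the precompile address
+        MockStakingPrecompile mock = new MockStakingPrecompile();
+        vm.etch(STAKING, address(mock).code);
+    }
+
+    function testDelegateRoutesToMock() public {
+        IStaking(STAKING).delegate(address(this), "cosmosvaloper1...", 1 ether);
+        // vm.etch copies only bytecode, so the mock's storage lives at the etched address
+        assertEq(MockStakingPrecompile(STAKING).lastAmount(), 1 ether);
+    }
+}
+```
+
+Run with `forge test`. Because `vm.etch` copies only runtime bytecode, any state the mock writes is stored at the precompile address itself, which is why the assertion reads `lastAmount` through the etched address rather than through the locally deployed `mock` instance.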
+
 ## Available Precompiles

 | Precompile       | Address                                      | Purpose                                                          | Reference                 |
 | ---------------- | -------------------------------------------- | ---------------------------------------------------------------- | ------------------------- |
-| **Bank**         | `0x0000000000000000000000000000000000000804` | ERC20-style access to native Cosmos SDK tokens                   | [Details](./bank)         |
-| **Bech32**       | `0x0000000000000000000000000000000000000400` | Address format conversion between Ethereum hex and Cosmos bech32 | [Details](./bech32)       |
-| **Staking**      | `0x0000000000000000000000000000000000000800` | Validator operations, delegation, and staking rewards            | [Details](./staking)      |
-| **Distribution** | `0x0000000000000000000000000000000000000801` | Staking rewards and community pool management                    | [Details](./distribution) |
-| **ERC20**        | Dynamic per token                            | Standard ERC20 functionality for native Cosmos tokens            | [Details](./erc20)        |
-| **Governance**   | `0x0000000000000000000000000000000000000805` | On-chain governance proposals and voting                         | [Details](./governance)   |
-| **ICS20**        | `0x0000000000000000000000000000000000000802` | Cross-chain token transfers via IBC                              | [Details](./ics20)        |
-| **WERC20**       | Dynamic per token                            | Wrapped native token functionality                               | [Details](./werc20)       |
-| **Slashing**     | `0x0000000000000000000000000000000000000806` | Validator slashing and jail management                           | [Details](./slashing)     |
-| **P256**         | `0x0000000000000000000000000000000000000100` | P-256 elliptic curve cryptographic operations                    | [Details](./p256)         |
+| **Bank**         | `0x0000000000000000000000000000000000000804` | ERC20-style access to native Cosmos SDK tokens                   | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bank)         |
+| **Bech32**       | `0x0000000000000000000000000000000000000400` | Address format conversion between Ethereum hex and Cosmos bech32 | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/bech32)       |
+| **Staking**      | `0x0000000000000000000000000000000000000800` | Validator operations, delegation, and staking rewards            | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/staking)      |
+| **Distribution** | `0x0000000000000000000000000000000000000801` | Staking rewards and community pool management                    | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/distribution) |
+| **ERC20**        | Dynamic per token                            | Standard ERC20 functionality for native Cosmos tokens            | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/erc20)        |
+| **Governance**   | `0x0000000000000000000000000000000000000805` | On-chain governance proposals and voting                         | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/governance)   |
+| **ICS20**        | `0x0000000000000000000000000000000000000802` | Cross-chain token transfers via IBC                              | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/ics20)        |
+| **WERC20**       | Dynamic per token                            | Wrapped native token functionality                               | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/werc20)       |
+| **Slashing**     | `0x0000000000000000000000000000000000000806` | Validator slashing and jail management                           | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/slashing)     |
+| **P256**         | `0x0000000000000000000000000000000000000100` | P-256 elliptic curve cryptographic operations                    | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/p256)         |
diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/p256.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/p256.mdx
index b6054e73..82bd658f 100644
--- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/p256.mdx
+++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/p256.mdx
@@ -38,7 +38,7 @@ The precompile exposes a single unnamed function that verifies P-256 signatures.
```solidity Solidity -// P256 signature verification +/ P256 signature verification address constant P256_PRECOMPILE = 0x0000000000000000000000000000000000000100; function verifyP256Signature( @@ -63,11 +63,11 @@ function verifyP256Signature( ```javascript Ethers.js const ethers = require('ethers'); -// P256 precompile address +/ P256 precompile address const P256_ADDRESS = '0x0000000000000000000000000000000000000100'; async function verifyP256Signature(provider, messageHash, r, s, x, y) { - // Encode the input data + / Encode the input data const input = ethers.utils.concat([ messageHash, r, @@ -76,13 +76,13 @@ async function verifyP256Signature(provider, messageHash, r, s, x, y) { y ]); - // Call the precompile + / Call the precompile const result = await provider.call({ to: P256_ADDRESS, data: input }); - // Check if signature is valid (result should be 0x00...01) + / Check if signature is valid (result should be 0x00...01) return result === '0x' + '00'.repeat(31) + '01'; } ``` @@ -152,11 +152,11 @@ contract WebAuthnWallet { ) external view returns (bool) { Credential memory cred = credentials[msg.sender]; - // Compute challenge hash according to WebAuthn spec + / Compute challenge hash according to WebAuthn spec bytes32 clientDataHash = sha256(clientDataJSON); bytes32 messageHash = sha256(abi.encodePacked(authenticatorData, clientDataHash)); - // Verify P-256 signature + / Verify P-256 signature bytes memory input = abi.encodePacked( messageHash, r, diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/slashing.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/slashing.mdx index 80c60dcf..079fdf5e 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/slashing.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/slashing.mdx @@ -37,18 +37,18 @@ Returns the signing information for a specific validator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function getSigningInfo(address consAddress) view returns (tuple(address validatorAddress, int64 startHeight, int64 indexOffset, int64 jailedUntil, bool tombstoned, int64 missedBlocksCounter) signingInfo)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000806"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input: The consensus address of the validator -const consAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder +/ Input: The consensus address of the validator +const consAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder async function getSigningInfo() { try { @@ -85,17 +85,17 @@ Returns the signing information for all validators, with pagination support. ```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function getSigningInfos(tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pagination) view returns (tuple(address validatorAddress, int64 startHeight, int64 indexOffset, int64 jailedUntil, bool tombstoned, int64 missedBlocksCounter)[] signingInfos, tuple(bytes nextKey, uint64 total) pageResponse)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000806"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input for pagination +/ Input for pagination const pagination = { key: "0x", offset: 0, @@ -140,12 +140,12 @@ Returns the current parameters for the slashing module. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function getParams() view returns (tuple(int64 signedBlocksWindow, tuple(uint256 value, uint8 precision) minSignedPerWindow, int64 downtimeJailDuration, tuple(uint256 value, uint8 precision) slashFractionDoubleSign, tuple(uint256 value, uint8 precision) slashFractionDowntime) params)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000806"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); @@ -184,81 +184,81 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="Slashing Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.17; import "../common/Types.sol"; -/// @dev The ISlashing contract's address. +/ @dev The ISlashing contract's address. address constant SLASHING_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000806; -/// @dev The ISlashing contract's instance. +/ @dev The ISlashing contract's instance. ISlashing constant SLASHING_CONTRACT = ISlashing(SLASHING_PRECOMPILE_ADDRESS); -/// @dev SigningInfo defines a validator's signing info for monitoring their -/// liveness activity. +/ @dev SigningInfo defines a validator's signing info for monitoring their +/ liveness activity. 
struct SigningInfo { - /// @dev Address of the validator + / @dev Address of the validator address validatorAddress; - /// @dev Height at which validator was first a candidate OR was unjailed + / @dev Height at which validator was first a candidate OR was unjailed int64 startHeight; - /// @dev Index offset into signed block bit array + / @dev Index offset into signed block bit array int64 indexOffset; - /// @dev Timestamp until which validator is jailed due to liveness downtime + / @dev Timestamp until which validator is jailed due to liveness downtime int64 jailedUntil; - /// @dev Whether or not a validator has been tombstoned (killed out of validator set) + / @dev Whether or not a validator has been tombstoned (killed out of validator set) bool tombstoned; - /// @dev Missed blocks counter (to avoid scanning the array every time) + / @dev Missed blocks counter (to avoid scanning the array every time) int64 missedBlocksCounter; } -/// @dev Params defines the parameters for the slashing module. +/ @dev Params defines the parameters for the slashing module. 
struct Params { - /// @dev SignedBlocksWindow defines how many blocks the validator should have signed + / @dev SignedBlocksWindow defines how many blocks the validator should have signed int64 signedBlocksWindow; - /// @dev MinSignedPerWindow defines the minimum blocks signed per window to avoid slashing + / @dev MinSignedPerWindow defines the minimum blocks signed per window to avoid slashing Dec minSignedPerWindow; - /// @dev DowntimeJailDuration defines how long the validator will be jailed for downtime + / @dev DowntimeJailDuration defines how long the validator will be jailed for downtime int64 downtimeJailDuration; - /// @dev SlashFractionDoubleSign defines the percentage of slash for double sign + / @dev SlashFractionDoubleSign defines the percentage of slash for double sign Dec slashFractionDoubleSign; - /// @dev SlashFractionDowntime defines the percentage of slash for downtime + / @dev SlashFractionDowntime defines the percentage of slash for downtime Dec slashFractionDowntime; } -/// @author Evmos Team -/// @title Slashing Precompiled Contract -/// @dev The interface through which solidity contracts will interact with slashing. -/// We follow this same interface including four-byte function selectors, in the precompile that -/// wraps the pallet. -/// @custom:address 0x0000000000000000000000000000000000000806 +/ @author Evmos Team +/ @title Slashing Precompiled Contract +/ @dev The interface through which solidity contracts will interact with slashing. +/ We follow this same interface including four-byte function selectors, in the precompile that +/ wraps the pallet. 
+/ @custom:address 0x0000000000000000000000000000000000000806 interface ISlashing { - /// @dev Emitted when a validator is unjailed - /// @param validator The address of the validator + / @dev Emitted when a validator is unjailed + / @param validator The address of the validator event ValidatorUnjailed(address indexed validator); - /// @dev GetSigningInfo returns the signing info for a specific validator. - /// @param consAddress The validator consensus address - /// @return signingInfo The validator signing info + / @dev GetSigningInfo returns the signing info for a specific validator. + / @param consAddress The validator consensus address + / @return signingInfo The validator signing info function getSigningInfo( address consAddress ) external view returns (SigningInfo memory signingInfo); - /// @dev GetSigningInfos returns the signing info for all validators. - /// @param pagination Pagination configuration for the query - /// @return signingInfos The list of validator signing info - /// @return pageResponse Pagination information for the response + / @dev GetSigningInfos returns the signing info for all validators. 
+ / @param pagination Pagination configuration for the query + / @return signingInfos The list of validator signing info + / @return pageResponse Pagination information for the response function getSigningInfos( PageRequest calldata pagination ) external view returns (SigningInfo[] memory signingInfos, PageResponse memory pageResponse); - /// @dev Unjail allows validators to unjail themselves after being jailed for downtime - /// @param validatorAddress The validator operator address to unjail - /// @return success true if the unjail operation was successful + / @dev Unjail allows validators to unjail themselves after being jailed for downtime + / @param validatorAddress The validator operator address to unjail + / @return success true if the unjail operation was successful function unjail(address validatorAddress) external returns (bool success); - /// @dev GetParams returns the slashing module parameters - /// @return params The slashing module parameters + / @dev GetParams returns the slashing module parameters + / @return params The slashing module parameters function getParams() external view returns (Params memory params); } ``` diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/staking.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/staking.mdx index 1cd5ce78..e95780c2 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/staking.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/staking.mdx @@ -38,21 +38,21 @@ Delegates tokens to a validator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the delegate function +/ ABI definition for the delegate function const precompileAbi = [ "function delegate(address delegatorAddress, string memory validatorAddress, uint256 amount) payable" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000800"; const signer = new ethers.Wallet("", provider); const contract = new ethers.Contract(precompileAddress, precompileAbi, signer); -// Delegation parameters +/ Delegation parameters const delegatorAddress = await signer.getAddress(); -const validatorAddress = "cosmosvaloper1..."; // Validator operator address -const amount = ethers.parseEther("10.0"); // Amount to delegate in native token +const validatorAddress = "cosmosvaloper1..."; / Validator operator address +const amount = ethers.parseEther("10.0"); / Amount to delegate in native token async function delegateTokens() { try { @@ -60,7 +60,7 @@ async function delegateTokens() { delegatorAddress, validatorAddress, amount, - { value: amount } // Must send the amount being delegated + { value: amount } / Must send the amount being delegated ); console.log("Delegation transaction:", tx.hash); @@ -71,7 +71,7 @@ async function delegateTokens() { } } -// delegateTokens(); +/ delegateTokens(); ``` ```bash cURL expandable lines @@ -99,18 +99,18 @@ Queries information about a specific validator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validator(string memory validatorAddress) view returns (tuple(string operatorAddress, string consensusPubkey, bool jailed, uint32 status, uint256 tokens, string delegatorShares, tuple(string moniker, string identity, string website, string securityContact, string details) description, int64 unbondingHeight, uint256 unbondingTime, tuple(tuple(string rate, string maxRate, string maxChangeRate) commissionRates, uint256 updateTime) commission, uint256 minSelfDelegation) validator)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000800"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Input -const validatorAddress = "cosmosvaloper1qypqxr72xk6plc6678ll42ephq2lgl7lc7225w"; // Placeholder +/ Input +const validatorAddress = "cosmosvaloper1qypqxr72xk6plc6678ll42ephq2lgl7lc7225w"; / Placeholder async function getValidator() { try { @@ -147,17 +147,17 @@ Queries validators with optional status filtering and pagination. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function validators(string memory status, tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pageRequest) view returns (tuple(string operatorAddress, string consensusPubkey, bool jailed, uint32 status, uint256 tokens, string delegatorShares, tuple(string moniker, string identity, string website, string securityContact, string details) description, int64 unbondingHeight, uint256 unbondingTime, tuple(tuple(string rate, string maxRate, string maxChangeRate) commissionRates, uint256 updateTime) commission, uint256 minSelfDelegation)[] validators, tuple(bytes nextKey, uint64 total) pageResponse)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000800"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs +/ Inputs const status = "Bonded"; const pagination = { key: "0x", @@ -204,19 +204,19 @@ Queries the delegation amount between a delegator and a validator. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function delegation(address delegatorAddress, string memory validatorAddress) view returns (uint256)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000800"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder -const validatorAddress = "cosmosvaloper1qypqxr72xk6plc6678ll42ephq2lgl7lc7225w"; // Placeholder +/ Inputs +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder +const validatorAddress = "cosmosvaloper1qypqxr72xk6plc6678ll42ephq2lgl7lc7225w"; / Placeholder async function getDelegation() { try { @@ -253,19 +253,19 @@ Queries unbonding delegation information. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function unbondingDelegation(address delegatorAddress, string memory validatorAddress) view returns (tuple(address delegatorAddress, string validatorAddress, tuple(uint256 creationHeight, uint256 completionTime, string initialBalance, string balance)[] entries) unbondingDelegation)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000800"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder -const validatorAddress = "cosmosvaloper1qypqxr72xk6plc6678ll42ephq2lgl7lc7225w"; // Placeholder +/ Inputs +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder +const validatorAddress = "cosmosvaloper1qypqxr72xk6plc6678ll42ephq2lgl7lc7225w"; / Placeholder async function getUnbondingDelegation() { try { @@ -305,20 +305,20 @@ Queries redelegations with optional filters. 
```javascript Ethers.js expandable lines import { ethers } from "ethers"; -// ABI definition for the function +/ ABI definition for the function const precompileAbi = [ "function redelegations(address delegatorAddress, string memory srcValidatorAddress, string memory dstValidatorAddress, tuple(bytes key, uint64 offset, uint64 limit, bool countTotal, bool reverse) pageRequest) view returns (tuple(tuple(address delegatorAddress, string validatorSrcAddress, string validatorDstAddress, tuple(uint256 creationHeight, uint256 completionTime, string initialBalance, string sharesDst)[] entries) redelegation, tuple(tuple(uint256 creationHeight, uint256 completionTime, string initialBalance, string sharesDst) redelegationEntry, string balance)[] entries)[] redelegations, tuple(bytes nextKey, uint64 total) pageResponse)" ]; -// Provider and contract setup +/ Provider and contract setup const provider = new ethers.JsonRpcProvider(""); const precompileAddress = "0x0000000000000000000000000000000000000800"; const contract = new ethers.Contract(precompileAddress, precompileAbi, provider); -// Inputs -const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; // Placeholder -const srcValidatorAddress = ""; // Empty string for all -const dstValidatorAddress = ""; // Empty string for all +/ Inputs +const delegatorAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"; / Placeholder +const srcValidatorAddress = ""; / Empty string for all +const dstValidatorAddress = ""; / Empty string for all const pagination = { key: "0x", offset: 0, @@ -362,18 +362,18 @@ curl -X POST --data '{ ## Full Solidity Interface & ABI ```solidity title="Staking Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.17; import "../common/Types.sol"; -/// @dev The StakingI contract's address. +/ @dev The StakingI contract's address. 
address constant STAKING_PRECOMPILE_ADDRESS = 0x0000000000000000000000000000000000000800; -/// @dev The StakingI contract's instance. +/ @dev The StakingI contract's instance. StakingI constant STAKING_CONTRACT = StakingI(STAKING_PRECOMPILE_ADDRESS); -// BondStatus is the status of a validator. +/ BondStatus is the status of a validator. enum BondStatus { Unspecified, Unbonded, @@ -381,7 +381,7 @@ enum BondStatus { Bonded } -// Description contains a validator's description. +/ Description contains a validator's description. struct Description { string moniker; string identity; @@ -390,20 +390,20 @@ struct Description { string details; } -// CommissionRates defines the initial commission rates to be used for a validator +/ CommissionRates defines the initial commission rates to be used for a validator struct CommissionRates { string rate; string maxRate; string maxChangeRate; } -// Commission defines a commission parameters for a given validator. +/ Commission defines a commission parameters for a given validator. struct Commission { CommissionRates commissionRates; uint256 updateTime; } -// Validator defines a validator, an account that can participate in consensus. +/ Validator defines a validator, an account that can participate in consensus. struct Validator { string operatorAddress; string consensusPubkey; @@ -418,24 +418,24 @@ struct Validator { uint256 minSelfDelegation; } -// Delegation represents the bond with tokens held by an account. It is -// owned by one delegator, and is associated with the voting power of one -// validator. +/ Delegation represents the bond with tokens held by an account. It is +/ owned by one delegator, and is associated with the voting power of one +/ validator. struct Delegation { address delegatorAddress; string validatorAddress; string shares; } -// UnbondingDelegation stores all of a single delegator's unbonding bonds -// for a single validator in an array. 
+/ UnbondingDelegation stores all of a single delegator's unbonding bonds +/ for a single validator in an array. struct UnbondingDelegation { address delegatorAddress; string validatorAddress; UnbondingDelegationEntry[] entries; } -// UnbondingDelegationEntry defines an unbonding object with relevant metadata. +/ UnbondingDelegationEntry defines an unbonding object with relevant metadata. struct UnbondingDelegationEntry { uint256 creationHeight; uint256 completionTime; @@ -443,7 +443,7 @@ struct UnbondingDelegationEntry { string balance; } -// RedelegationEntry defines a redelegation object with relevant metadata. +/ RedelegationEntry defines a redelegation object with relevant metadata. struct RedelegationEntry { uint256 creationHeight; uint256 completionTime; @@ -451,8 +451,8 @@ struct RedelegationEntry { string sharesDst; } -// Redelegation contains the list of a particular delegator's redelegating bonds -// from a particular source validator to a particular destination validator. +/ Redelegation contains the list of a particular delegator's redelegating bonds +/ from a particular source validator to a particular destination validator. struct Redelegation { address delegatorAddress; string validatorSrcAddress; @@ -460,36 +460,36 @@ struct Redelegation { RedelegationEntry[] entries; } -// DelegationResponse is equivalent to Delegation except that it contains a -// balance in addition to shares which is more suitable for client responses. +/ DelegationResponse is equivalent to Delegation except that it contains a +/ balance in addition to shares which is more suitable for client responses. struct DelegationResponse { Delegation delegation; Coin balance; } -// RedelegationEntryResponse is equivalent to a RedelegationEntry except that it -// contains a balance in addition to shares which is more suitable for client -// responses. 
+/ RedelegationEntryResponse is equivalent to a RedelegationEntry except that it +/ contains a balance in addition to shares which is more suitable for client +/ responses. struct RedelegationEntryResponse { RedelegationEntry redelegationEntry; string balance; } -// RedelegationResponse is equivalent to a Redelegation except that its entries -// contain a balance in addition to shares which is more suitable for client -// responses. +/ RedelegationResponse is equivalent to a Redelegation except that its entries +/ contain a balance in addition to shares which is more suitable for client +/ responses. struct RedelegationResponse { Redelegation redelegation; RedelegationEntryResponse[] entries; } -// Pool is used for tracking bonded and not-bonded token supply of the bond denomination. +/ Pool is used for tracking bonded and not-bonded token supply of the bond denomination. struct Pool { string notBondedTokens; string bondedTokens; } -// StakingParams defines the parameters for the staking module. +/ StakingParams defines the parameters for the staking module. 
struct Params { uint256 unbondingTime; uint256 maxValidators; @@ -499,10 +499,10 @@ struct Params { string minCommissionRate; } -/// @author The Evmos Core Team -/// @title Staking Precompile Contract -/// @dev The interface through which solidity contracts will interact with Staking -/// @custom:address 0x0000000000000000000000000000000000000800 +/ @author The Evmos Core Team +/ @title Staking Precompile Contract +/ @dev The interface through which solidity contracts will interact with Staking +/ @custom:address 0x0000000000000000000000000000000000000800 interface StakingI { event CreateValidator(string indexed validatorAddress, uint256 value); event EditValidator(string indexed validatorAddress); @@ -511,7 +511,7 @@ interface StakingI { event Redelegate(address indexed delegatorAddress, address indexed validatorSrcAddress, address indexed validatorDstAddress, uint256 amount, uint256 completionTime); event CancelUnbondingDelegation(address indexed delegatorAddress, address indexed validatorAddress, uint256 amount, uint256 creationHeight); - // Transactions + / Transactions function createValidator( Description calldata description, CommissionRates calldata commission, @@ -554,7 +554,7 @@ interface StakingI { uint256 creationHeight ) external returns (bool); - // Queries + / Queries function validator( string calldata validatorAddress ) external view returns (Validator memory); diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/werc20.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/werc20.mdx index 18a9d6d8..3cf354f5 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/werc20.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/precompiles/werc20.mdx @@ -51,19 +51,19 @@ For a comprehensive understanding of how single token representation works and i The ERC20 module creates a **unified token representation** that bridges native Cosmos tokens with ERC20 interfaces: ```go -// Simplified conceptual flow (not 
actual implementation) +/ Simplified conceptual flow (not actual implementation) func (k Keeper) ERC20Transfer(from, to common.Address, amount *big.Int) error { - // Convert EVM addresses to Cosmos addresses + / Convert EVM addresses to Cosmos addresses cosmosFrom := sdk.AccAddress(from.Bytes()) cosmosTo := sdk.AccAddress(to.Bytes()) - // Use bank module directly - no separate ERC20 state + / Use bank module directly - no separate ERC20 state coin := sdk.NewCoin(k.denom, sdk.NewIntFromBigInt(amount)) return k.bankKeeper.SendCoins(ctx, cosmosFrom, cosmosTo, sdk.Coins{coin}) } func (k Keeper) ERC20BalanceOf(account common.Address) *big.Int { - // Query bank module directly + / Query bank module directly cosmosAddr := sdk.AccAddress(account.Bytes()) balance := k.bankKeeper.GetBalance(ctx, cosmosAddr, k.denom) return balance.Amount.BigInt() @@ -75,15 +75,15 @@ func (k Keeper) ERC20BalanceOf(account common.Address) *big.Int { Since ATOM and WATOM represent the same underlying token, traditional deposit/withdraw operations are meaningless: ```solidity -// These functions exist for interface compatibility but do nothing +/ These functions exist for interface compatibility but do nothing function deposit() external payable { - // No-op: msg.value automatically increases your bank balance - // Your WATOM balance is identical to your ATOM balance + / No-op: msg.value automatically increases your bank balance + / Your WATOM balance is identical to your ATOM balance } function withdraw(uint256 amount) external { - // No-op: your ATOM balance is already accessible - // No conversion needed as they're the same token + / No-op: your ATOM balance is already accessible + / No conversion needed as they're the same token } ``` @@ -101,22 +101,22 @@ This is why `deposit()` and `withdraw()` are no-ops - there's no separate wrappe ### Real-World Example ```javascript -// User starts with 100 ATOM in bank module +/ User starts with 100 ATOM in bank module const atomBalance = await 
bankPrecompile.balances(userAddress); -// Returns: [{denom: "test", amount: "100000000"}] // 100 ATOM (6 decimals) +/ Returns: [{denom: "test", amount: "100000000"}] / 100 ATOM (6 decimals) const wtestBalance = await wtest.balanceOf(userAddress); -// Returns: "100000000" // Same 100 ATOM, accessed via ERC20 interface +/ Returns: "100000000" / Same 100 ATOM, accessed via ERC20 interface -// User transfers 50 WATOM via ERC20 +/ User transfers 50 WATOM via ERC20 await wtest.transfer(recipientAddress, "50000000"); -// Check balances again +/ Check balances again const newAtomBalance = await bankPrecompile.balances(userAddress); -// Returns: [{denom: "test", amount: "50000000"}] // 50 ATOM remaining +/ Returns: [{denom: "test", amount: "50000000"}] / 50 ATOM remaining const newWatomBalance = await wtest.balanceOf(userAddress); -// Returns: "50000000" // Same 50 ATOM, both queries return identical values +/ Returns: "50000000" / Same 50 ATOM, both queries return identical values ``` ## Methods @@ -184,7 +184,7 @@ const wtest = new ethers.Contract(wtestAddress, werc20Abi, signer); async function transferTokens() { try { const recipientAddress = "0x742d35Cc6634C0532925a3b844Bc9e7595f5b899"; - const amount = ethers.parseUnits("10.0", 6); // 10 ATOM (6 decimals) + const amount = ethers.parseUnits("10.0", 6); / 10 ATOM (6 decimals) const tx = await wtest.transfer(recipientAddress, amount); const receipt = await tx.wait(); @@ -271,18 +271,18 @@ const wtest = new ethers.Contract(wtestAddress, werc20Abi, signer); async function approveAndTransfer() { try { const spenderAddress = "0x742d35Cc6634C0532925a3b844Bc9e7595f5b899"; - const amount = ethers.parseUnits("50.0", 6); // 50 ATOM + const amount = ethers.parseUnits("50.0", 6); / 50 ATOM - // Approve spending + / Approve spending const approveTx = await wtest.approve(spenderAddress, amount); await approveTx.wait(); - // Check allowance + / Check allowance const allowance = await wtest.allowance(signer.address, spenderAddress); 
console.log("Allowance:", allowance.toString()); - // Transfer from (would be called by spender) - // const transferTx = await wtest.transferFrom(ownerAddress, recipientAddress, amount); + / Transfer from (would be called by spender) + / const transferTx = await wtest.transferFrom(ownerAddress, recipientAddress, amount); } catch (error) { console.error("Error:", error); } @@ -393,85 +393,85 @@ contract LiquidityPool { } function addLiquidity(uint256 amount) external { - // This transfers from the user's bank balance + / This transfers from the user's bank balance WATOM.transferFrom(msg.sender, address(this), amount); - // Pool now has tokens in its bank balance - // No wrapping/unwrapping needed - it's all the same token! + / Pool now has tokens in its bank balance + / No wrapping/unwrapping needed - it's all the same token! } function removeLiquidity(uint256 amount) external { - // This transfers back to user's bank balance + / This transfers back to user's bank balance WATOM.transfer(msg.sender, amount); - // User can now use these tokens as native ATOM - // or continue using WATOM interface - both access same balance + / User can now use these tokens as native ATOM + / or continue using WATOM interface - both access same balance } } ``` ### Cross-Interface Balance Verification ```javascript -// Verify that both interfaces show the same balance +/ Verify that both interfaces show the same balance async function verifyBalanceConsistency(userAddress) { - // Query via bank precompile (native interface) + / Query via bank precompile (native interface) const bankBalance = await bankContract.balances(userAddress); const atomAmount = bankBalance.find(b => b.denom === "test")?.amount || "0"; - // Query via WERC20 precompile (ERC20 interface) + / Query via WERC20 precompile (ERC20 interface) const wtestAmount = await wtest.balanceOf(userAddress); - // These will always be equal since the ERC20 balance is just - // an abstracted bank module balance query + / These will 
always be equal since the ERC20 balance is just + / an abstracted bank module balance query console.log(`Consistent balance: ${atomAmount} (both ATOM and WATOM)`); } ``` ### Working with IBC Tokens ```javascript -// IBC tokens work exactly the same way -const ibcTokenAddress = "0x..."; // Each IBC token gets its own WERC20 address +/ IBC tokens work exactly the same way +const ibcTokenAddress = "0x..."; / Each IBC token gets its own WERC20 address const ibcToken = new ethers.Contract(ibcTokenAddress, werc20Abi, signer); -// Check balance (same as bank module balance) +/ Check balance (same as bank module balance) const balance = await ibcToken.balanceOf(userAddress); -// Transfer IBC tokens via ERC20 interface +/ Transfer IBC tokens via ERC20 interface await ibcToken.transfer(recipientAddress, amount); -// Use in DeFi protocols just like any ERC20 token +/ Use in DeFi protocols just like any ERC20 token await defiProtocol.stake(ibcTokenAddress, amount); ``` ## Solidity Interface & ABI ```solidity title="WERC20 Solidity Interface" lines expandable -// SPDX-License-Identifier: LGPL-3.0-only +/ SPDX-License-Identifier: LGPL-3.0-only pragma solidity >=0.8.18; import "@openzeppelin/contracts/token/ERC20/IERC20.sol"; -/// @title WERC20 Precompile Contract -/// @dev Provides ERC20 interface to native Cosmos tokens via bank module -/// @notice This is NOT a traditional wrapped token - both native and ERC20 interfaces access the same balance +/ @title WERC20 Precompile Contract +/ @dev Provides ERC20 interface to native Cosmos tokens via bank module +/ @notice This is NOT a traditional wrapped token - both native and ERC20 interfaces access the same balance interface IWERC20 is IERC20 { - /// @dev Emitted when deposit() is called (no-op for compatibility) - /// @param dst The address that called deposit - /// @param wad The amount specified (though no conversion occurs) + / @dev Emitted when deposit() is called (no-op for compatibility) + / @param dst The address that 
called deposit + / @param wad The amount specified (though no conversion occurs) event Deposit(address indexed dst, uint256 wad); - /// @dev Emitted when withdraw() is called (no-op for compatibility) - /// @param src The address that called withdraw - /// @param wad The amount specified (though no conversion occurs) + / @dev Emitted when withdraw() is called (no-op for compatibility) + / @param src The address that called withdraw + / @param wad The amount specified (though no conversion occurs) event Withdrawal(address indexed src, uint256 wad); - /// @dev No-op function for WETH compatibility - native tokens automatically update balance - /// @notice This function exists for interface compatibility but performs no conversion + / @dev No-op function for WETH compatibility - native tokens automatically update balance + / @notice This function exists for interface compatibility but performs no conversion function deposit() external payable; - /// @dev No-op function for WETH compatibility - native tokens always accessible - /// @param wad Amount to "withdraw" (no conversion performed) - /// @notice This function exists for interface compatibility but performs no conversion + / @dev No-op function for WETH compatibility - native tokens always accessible + / @param wad Amount to "withdraw" (no conversion performed) + / @notice This function exists for interface compatibility but performs no conversion function withdraw(uint256 wad) external; } ``` diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/create2.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/create2.mdx index 4f757b3d..b61aac85 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/create2.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/create2.mdx @@ -41,18 +41,18 @@ import { ethers } from "ethers"; const provider = new ethers.JsonRpcProvider("YOUR_RPC_URL"); const signer = new 
ethers.Wallet("YOUR_PRIVATE_KEY", provider); -// Create2 Factory address +/ Create2 Factory address const CREATE2_FACTORY = "0x4e59b44847b379578588920ca78fbf26c0b4956c"; -// Your contract bytecode (including constructor args) -const bytecode = "0x608060405234801561001057600080fd5b50..."; // Your compiled bytecode +/ Your contract bytecode (including constructor args) +const bytecode = "0x608060405234801561001057600080fd5b50..."; / Your compiled bytecode -// Choose a salt (32 bytes) -const salt = ethers.id("my-unique-salt-v1"); // Or use ethers.randomBytes(32) +/ Choose a salt (32 bytes) +const salt = ethers.id("my-unique-salt-v1"); / Or use ethers.randomBytes(32) -// Deploy using Create2 +/ Deploy using Create2 async function deployWithCreate2() { - // Compute the deployment address + / Compute the deployment address const deployAddress = ethers.getCreate2Address( CREATE2_FACTORY, salt, @@ -61,11 +61,11 @@ async function deployWithCreate2() { console.log("Contract will be deployed to:", deployAddress); - // Send deployment transaction + / Send deployment transaction const tx = await signer.sendTransaction({ to: CREATE2_FACTORY, - data: salt + bytecode.slice(2), // Concatenate salt and bytecode - gasLimit: 3000000, // Adjust based on your contract + data: salt + bytecode.slice(2), / Concatenate salt and bytecode + gasLimit: 3000000, / Adjust based on your contract }); console.log("Deployment tx:", tx.hash); @@ -79,7 +79,7 @@ deployWithCreate2(); ``` ```solidity Solidity -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract Create2Deployer { @@ -89,14 +89,14 @@ contract Create2Deployer { bytes32 salt, bytes memory bytecode ) external returns (address) { - // Prepare deployment data + / Prepare deployment data bytes memory deploymentData = abi.encodePacked(salt, bytecode); - // Deploy via Create2 factory + / Deploy via Create2 factory (bool success, bytes memory result) = CREATE2_FACTORY.call(deploymentData); 
require(success, "Create2 deployment failed"); - // Extract deployed address from return data + / Extract deployed address from return data address deployed = abi.decode(result, (address)); return deployed; } @@ -138,7 +138,7 @@ function computeCreate2Address(salt, bytecode) { return address; } -// Example usage +/ Example usage const salt = ethers.id("my-deployment-v1"); const bytecode = "0x608060405234801561001057600080fd5b50..."; const futureAddress = computeCreate2Address(salt, bytecode); @@ -152,7 +152,7 @@ console.log("Contract will deploy to:", futureAddress); Deploy contracts to the same address across multiple chains: ```javascript const salt = ethers.id("myapp-v1.0.0"); - // Same salt + bytecode = same address on all chains + / Same salt + bytecode = same address on all chains ``` @@ -176,7 +176,7 @@ console.log("Contract will deploy to:", futureAddress); ```solidity function deployChild(uint256 nonce) external { bytes32 salt = keccak256(abi.encode(msg.sender, nonce)); - // Deploy with predictable address + / Deploy with predictable address } ``` @@ -187,7 +187,7 @@ console.log("Contract will deploy to:", futureAddress); The Create2 factory contract is extremely minimal: ```assembly -// Entire contract bytecode (45 bytes) +/ Entire contract bytecode (45 bytes) 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe03601600081602082378035828234f58015156039578182fd5b8082525050506014600cf3 ``` @@ -210,17 +210,17 @@ This assembly code: Choose salts carefully for your use case: ```javascript -// Sequential salts for multiple instances +/ Sequential salts for multiple instances const salt1 = ethers.solidityPacked(["uint256"], [1]); const salt2 = ethers.solidityPacked(["uint256"], [2]); -// User-specific salts +/ User-specific salts const userSalt = ethers.solidityPacked(["address", "uint256"], [userAddress, nonce]); -// Version-based salts +/ Version-based salts const versionSalt = ethers.id("v1.2.3"); -// Random salts for uniqueness +/ Random salts 
for uniqueness const randomSalt = ethers.randomBytes(32); ``` @@ -240,16 +240,6 @@ const randomSalt = ethers.randomBytes(32); | Gas Cost | Baseline | ~32,000 gas overhead | | Use Case | Standard deployments | Deterministic deployments | -## Troubleshooting - -Common issues and solutions: - -| Issue | Solution | -|-------|----------| -| "Create2 deployment failed" | Ensure sufficient gas and correct bytecode format | -| Address mismatch | Verify salt and bytecode are identical to computation | -| Contract already deployed | CREATE2 can't deploy to the same address twice | -| Invalid bytecode | Ensure bytecode includes constructor arguments if needed | ## Example: Multi-chain Token Deployment @@ -259,10 +249,10 @@ Deploy an ERC20 token to the same address across multiple chains: import { ethers } from "ethers"; async function deployTokenMultichain(chains) { - const bytecode = "0x..."; // ERC20 bytecode with constructor args + const bytecode = "0x..."; / ERC20 bytecode with constructor args const salt = ethers.id("MyToken-v1.0.0"); - // Compute address (same on all chains) + / Compute address (same on all chains) const tokenAddress = ethers.getCreate2Address( "0x4e59b44847b379578588920ca78fbf26c0b4956c", salt, @@ -271,19 +261,19 @@ async function deployTokenMultichain(chains) { console.log("Token will deploy to:", tokenAddress); - // Deploy on each chain + / Deploy on each chain for (const chain of chains) { const provider = new ethers.JsonRpcProvider(chain.rpc); const signer = new ethers.Wallet(privateKey, provider); - // Check if already deployed + / Check if already deployed const code = await provider.getCode(tokenAddress); if (code !== "0x") { console.log(`Already deployed on ${chain.name}`); continue; } - // Deploy + / Deploy const tx = await signer.sendTransaction({ to: "0x4e59b44847b379578588920ca78fbf26c0b4956c", data: salt + bytecode.slice(2), diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/implementation-guide.mdx 
b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/implementation-guide.mdx index cd7b4d1f..c0ba0ebf 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/implementation-guide.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/implementation-guide.mdx @@ -2,11 +2,23 @@ title: "Implementation Guide" description: "Deploying and managing predeployed contracts on Cosmos EVM" icon: "book-open" -keywords: ['predeployed', 'preinstalls', 'activation', 'genesis', 'governance', 'upgrade', 'implementation', 'deployment'] +keywords: + [ + "predeployed", + "preinstalls", + "activation", + "genesis", + "governance", + "upgrade", + "implementation", + "deployment", + ] --- -For conceptual understanding of predeployed contracts and how they differ from precompiles, see the [Predeployed Contracts concept page](/docs/evm/v0.4.x/documentation/concepts/predeployed-contracts). + For conceptual understanding of predeployed contracts and how they differ from + precompiles, see the [Predeployed Contracts concept + page](/docs/evm/v0.4.x/documentation/concepts/predeployed-contracts). ## Activation Methods @@ -22,7 +34,7 @@ The most straightforward method for new chains or testnets. Contracts are deploy Using the evmd example application automatically includes default preinstalls: ```go -// evmd/genesis.go +/ evmd/genesis.go func NewEVMGenesisState() *evmtypes.GenesisState { evmGenState := evmtypes.DefaultGenesisState() evmGenState.Params.ActiveStaticPrecompiles = evmtypes.AvailableStaticPrecompiles @@ -36,7 +48,7 @@ func NewEVMGenesisState() *evmtypes.GenesisState { To customize which contracts are deployed: ```json -// genesis.json +/ genesis.json { "app_state": { "evm": { @@ -49,7 +61,7 @@ To customize which contracts are deployed: { "name": "Multicall3", "address": "0xcA11bde05977b3631167028862bE2a173976CA11", - "code": "0x6080604052..." // Full bytecode + "code": "0x6080604052..." 
/ Full bytecode } ] } @@ -73,7 +85,10 @@ For chains already in production, use the `MsgRegisterPreinstalls` governance pr #### Proposal Structure -The `authority` field must be set to the governance module account address, which is typically derived from the gov module name. This is usually something like `cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn` for the standard gov module. + The `authority` field must be set to the governance module account address, + which is typically derived from the gov module name. This is usually something + like `cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn` for the standard gov + module. ```json @@ -122,7 +137,7 @@ func CreateUpgradeHandler( evmKeeper *evmkeeper.Keeper, ) upgradetypes.UpgradeHandler { return func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { - // Add preinstalls during upgrade + / Add preinstalls during upgrade if err := evmKeeper.AddPreinstalls(ctx, evmtypes.DefaultPreinstalls); err != nil { return nil, err } @@ -139,11 +154,13 @@ func CreateUpgradeHandler( All preinstall deployments undergo strict validation: 1. **Address Validation** + - Must be valid Ethereum address format (40 hex characters) - Cannot conflict with existing contracts - Should not overlap with precompile reserved addresses (typically 0x1-0x9FF) 2. 
**Code Validation** + - Must be valid EVM bytecode (hex encoded) - Cannot have empty code hash - Must pass bytecode verification @@ -158,7 +175,7 @@ All preinstall deployments undergo strict validation: Predeployed contracts are stored in the chain state like regular contracts: ```go -// Actual deployment process from x/vm/keeper/preinstalls.go +/ Actual deployment process from x/vm/keeper/preinstalls.go func (k *Keeper) AddPreinstalls(ctx sdk.Context, preinstalls []types.Preinstall) error { for _, preinstall := range preinstalls { address := common.HexToAddress(preinstall.Address) @@ -175,7 +192,7 @@ func (k *Keeper) AddPreinstalls(ctx sdk.Context, preinstalls []types.Preinstall) "preinstall %s has empty code hash", preinstall.Address) } - // Check for existing code hash conflicts + / Check for existing code hash conflicts existingCodeHash := k.GetCodeHash(ctx, address) if !types.IsEmptyCodeHash(existingCodeHash.Bytes()) && !bytes.Equal(existingCodeHash.Bytes(), codeHash) { @@ -183,13 +200,13 @@ func (k *Keeper) AddPreinstalls(ctx sdk.Context, preinstalls []types.Preinstall) "preinstall %s already has a different code hash", preinstall.Address) } - // Check that the account is not already set + / Check that the account is not already set if acc := k.accountKeeper.GetAccount(ctx, accAddress); acc != nil { return errorsmod.Wrapf(types.ErrInvalidPreinstall, "preinstall %s already has an account in account keeper", preinstall.Address) } - // Create account and store code + / Create account and store code account := k.accountKeeper.NewAccountWithAddress(ctx, accAddress) k.accountKeeper.SetAccount(ctx, account) k.SetCodeHash(ctx, address.Bytes(), codeHash) @@ -214,7 +231,7 @@ evmd query evm account 0x4e59b44847b379578588920ca78fbf26c0b4956c ``` ```javascript -// Verify via Web3 +/ Verify via Web3 const code = await provider.getCode("0x4e59b44847b379578588920ca78fbf26c0b4956c"); console.log("Deployed:", code !== "0x"); ``` @@ -250,56 +267,30 @@ console.log("Deployed:", 
code !== "0x"); - Don't modify standard contract addresses without strong justification - Don't deploy untested or unaudited bytecode -## Known Issues - - -The Safe Singleton Factory bytecode in the current DefaultPreinstalls may be incorrect (appears to be duplicate of Create2 bytecode). Verify the correct bytecode before deploying this contract in production. - - ## Adding Custom Preinstalls To add custom contracts beyond the defaults: ```go -// Define custom preinstall +/ Define custom preinstall customPreinstall := types.Preinstall{ Name: "MyCustomContract", Address: "0xYourChosenAddress", Code: "0xCompiledBytecode", } -// Validate before deployment +/ Validate before deployment if err := customPreinstall.Validate(); err != nil { return err } -// Add via appropriate method (genesis, governance, or upgrade) +/ Add via appropriate method (genesis, governance, or upgrade) preinstalls := append(evmtypes.DefaultPreinstalls, customPreinstall) ``` -## Troubleshooting - -### Common Issues - -| Issue | Cause | Solution | -|-------|-------|----------| -| "preinstall already has an account in account keeper" | Address collision | Choose different address | -| "preinstall has empty code hash" | Invalid or empty bytecode | Verify bytecode hex string is valid | -| "preinstall address is not a valid hex address" | Malformed address | Ensure 0x prefix and 40 hex chars | -| "invalid authority" | Wrong governance address | Use correct gov module account address | -| Contract not found after deployment | Wrong network | Verify chain ID and RPC endpoint | - -### Debugging Steps - -1. Check chain genesis configuration -2. Verify proposal passed and executed -3. Query contract code directly -4. Test with simple contract interaction -5. 
Review chain logs for errors - ## Further Resources - [EIP-1014: CREATE2 Specification](https://eips.ethereum.org/EIPS/eip-1014) - Understanding deterministic deployment - [Multicall3 Documentation](https://github.com/mds1/multicall) - Official Multicall3 repository - [Permit2 Introduction](https://blog.uniswap.org/permit2) - Uniswap's Permit2 design -- [Safe Contracts](https://github.com/safe-global/safe-contracts) - Safe multisig implementation \ No newline at end of file +- [Safe Contracts](https://github.com/safe-global/safe-contracts) - Safe multisig implementation diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/multicall3.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/multicall3.mdx index 07c239e0..2684bd2c 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/multicall3.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/multicall3.mdx @@ -64,29 +64,29 @@ import { ethers } from "ethers"; const provider = new ethers.JsonRpcProvider("YOUR_RPC_URL"); const MULTICALL3 = "0xcA11bde05977b3631167028862bE2a173976CA11"; -// Multicall3 ABI +/ Multicall3 ABI const multicallAbi = [ "function aggregate3(tuple(address target, bool allowFailure, bytes callData)[] calls) returns (tuple(bool success, bytes returnData)[] returnData)" ]; const multicall = new ethers.Contract(MULTICALL3, multicallAbi, provider); -// Example: Read multiple token balances +/ Example: Read multiple token balances async function getMultipleBalances(tokenAddress, accounts) { const erc20Abi = ["function balanceOf(address) view returns (uint256)"]; const iface = new ethers.Interface(erc20Abi); - // Prepare calls + / Prepare calls const calls = accounts.map(account => ({ target: tokenAddress, allowFailure: false, callData: iface.encodeFunctionData("balanceOf", [account]) })); - // Execute multicall + / Execute multicall const results = await multicall.aggregate3(calls); - // Decode results + 
/ Decode results const balances = results.map((result, i) => { if (result.success) { return { @@ -102,7 +102,7 @@ async function getMultipleBalances(tokenAddress, accounts) { ``` ```solidity Solidity -// SPDX-License-Identifier: MIT +/ SPDX-License-Identifier: MIT pragma solidity ^0.8.0; interface IMulticall3 { @@ -128,7 +128,7 @@ contract MultiTokenReader { address[] calldata tokens, address account ) external returns (uint256[] memory balances) { - IMulticall3.Call3[] memory calls = new IMulticall3.Call3[](tokens.length); + IMulticall3.Call3[] memory calls = new IMulticall3.Call3[](tokens.length); for (uint i = 0; i < tokens.length; i++) { calls[i] = IMulticall3.Call3({ @@ -139,7 +139,7 @@ contract MultiTokenReader { } IMulticall3.Result[] memory results = multicall.aggregate3(calls); - balances = new uint256[](results.length); + balances = new uint256[](results.length); for (uint i = 0; i < results.length; i++) { if (results[i].success) { @@ -156,28 +156,28 @@ contract MultiTokenReader { Combine multiple DeFi operations atomically: ```javascript -// Swap and stake in one transaction +/ Swap and stake in one transaction async function swapAndStake(tokenIn, tokenOut, amountIn, poolAddress) { const calls = [ - // 1. Approve router + / 1. Approve router { target: tokenIn, allowFailure: false, callData: iface.encodeFunctionData("approve", [router, amountIn]) }, - // 2. Perform swap + / 2. Perform swap { target: router, allowFailure: false, callData: iface.encodeFunctionData("swap", [tokenIn, tokenOut, amountIn]) }, - // 3. Approve staking + / 3. Approve staking { target: tokenOut, allowFailure: false, callData: iface.encodeFunctionData("approve", [poolAddress, amountOut]) }, - // 4.
Stake tokens { target: poolAddress, allowFailure: false, @@ -196,14 +196,14 @@ Get consistent protocol state in one call: ```javascript async function getProtocolState(protocol) { const calls = [ - { target: protocol, callData: "0x18160ddd", allowFailure: false }, // totalSupply() - { target: protocol, callData: "0x313ce567", allowFailure: false }, // decimals() - { target: protocol, callData: "0x06fdde03", allowFailure: false }, // name() - { target: protocol, callData: "0x95d89b41", allowFailure: false }, // symbol() + { target: protocol, callData: "0x18160ddd", allowFailure: false }, / totalSupply() + { target: protocol, callData: "0x313ce567", allowFailure: false }, / decimals() + { target: protocol, callData: "0x06fdde03", allowFailure: false }, / name() + { target: protocol, callData: "0x95d89b41", allowFailure: false }, / symbol() ]; const results = await multicall.aggregate3(calls); - // All values from the same block + / All values from the same block return decodeResults(results); } ``` @@ -231,26 +231,26 @@ const calls = [ ]; await multicall.aggregate3Value(calls, { - value: ethers.parseEther("1.5") // Total ETH to send + value: ethers.parseEther("1.5") / Total ETH to send }); ``` ### Error Handling Strategies ```javascript -// Strict mode - revert if any call fails +/ Strict mode - revert if any call fails const strictCalls = calls.map(call => ({ ...call, allowFailure: false })); -// Permissive mode - continue even if some fail +/ Permissive mode - continue even if some fail const permissiveCalls = calls.map(call => ({ ...call, allowFailure: true })); -// Mixed mode - critical calls must succeed +/ Mixed mode - critical calls must succeed const mixedCalls = [ { ...criticalCall, allowFailure: false }, { ...optionalCall, allowFailure: true } @@ -314,14 +314,6 @@ Popular libraries with Multicall3 support: - **viem**: TypeScript alternative to ethers - **wagmi**: React hooks for Ethereum -## Troubleshooting - -| Issue | Solution | -|-------|----------| -| 
"Multicall3: call failed" | Check individual call success flags | -| Gas estimation failure | Increase gas limit or reduce batch size | -| Unexpected revert | One of the calls with `allowFailure: false` failed | -| Value mismatch | Ensure total value sent matches sum of individual values | ## Further Reading diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/overview.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/overview.mdx index 319cfa49..0335edfa 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/overview.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/overview.mdx @@ -13,18 +13,18 @@ These contracts are included in `evmtypes.DefaultPreinstalls` and can be deploye | Contract | Address | Purpose | Documentation | |----------|---------|---------|---------------| -| **Create2** | `0x4e59b44847b379578588920ca78fbf26c0b4956c` | Deterministic contract deployment using CREATE2 opcode | [Details](./create2) | -| **Multicall3** | `0xcA11bde05977b3631167028862bE2a173976CA11` | Batch multiple contract calls in a single transaction | [Details](./multicall3) | -| **Permit2** | `0x000000000022D473030F116dDEE9F6B43aC78BA3` | Token approval and transfer management with signatures | [Details](./permit2) | -| **Safe Singleton Factory** | `0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7` | Deploy Safe multisig wallets at deterministic addresses | [Details](./safe-factory) | +| **Create2** | `0x4e59b44847b379578588920ca78fbf26c0b4956c` | Deterministic contract deployment using CREATE2 opcode | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/create2) | +| **Multicall3** | `0xcA11bde05977b3631167028862bE2a173976CA11` | Batch multiple contract calls in a single transaction | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/multicall3) | +| **Permit2** | `0x000000000022D473030F116dDEE9F6B43aC78BA3` | Token approval 
and transfer management with signatures | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/permit2) | +| **Safe Singleton Factory** | `0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7` | Deploy Safe multisig wallets at deterministic addresses | [Details](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/safe-factory) | Additional pre-deployable contracts can be incorporated into your project in a similar way, given that any dependencies are met. ## Learn More -- [Implementation](./implementation-guide) - Activate these contracts for your project -- [Create2](./create2) - Deterministic deployment factory documentation -- [Multicall3](./multicall3) - Batch operations contract documentation -- [Permit2](./permit2) - Advanced token approvals documentation -- [Safe Factory](./safe-factory) - Multisig wallet factory documentation \ No newline at end of file +- [Implementation](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/implementation-guide) - Activate these contracts for your project +- [Create2](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/create2) - Deterministic deployment factory documentation +- [Multicall3](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/multicall3) - Batch operations contract documentation +- [Permit2](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/permit2) - Advanced token approvals documentation +- [Safe Factory](/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/safe-factory) - Multisig wallet factory documentation \ No newline at end of file diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/permit2.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/permit2.mdx index e4ba41cf..7e1c6769 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/permit2.mdx +++ 
b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/permit2.mdx @@ -36,14 +36,14 @@ Permit2 is a universal token approval contract that enables signature-based appr ### Allowance Transfer ```solidity -// Grant permission via signature +/ Grant permission via signature function permit( address owner, PermitSingle memory permitSingle, bytes calldata signature ) external -// Transfer with existing permission +/ Transfer with existing permission function transferFrom( address from, address to, @@ -55,7 +55,7 @@ function transferFrom( ### Signature Transfer ```solidity -// One-time transfer via signature +/ One-time transfer via signature function permitTransferFrom( PermitTransferFrom memory permit, SignatureTransferDetails calldata transferDetails, @@ -75,7 +75,7 @@ import { AllowanceProvider, AllowanceTransfer } from "@uniswap/permit2-sdk"; const PERMIT2_ADDRESS = "0x000000000022D473030F116dDEE9F6B43aC78BA3"; -// Step 1: User approves Permit2 for token (one-time) +/ Step 1: User approves Permit2 for token (one-time) async function approvePermit2(tokenContract, signer) { const tx = await tokenContract.approve( PERMIT2_ADDRESS, @@ -85,33 +85,33 @@ async function approvePermit2(tokenContract, signer) { console.log("Permit2 approved for token"); } -// Step 2: Create and sign permit +/ Step 2: Create and sign permit async function createPermit(token, spender, amount, deadline, signer) { const permit = { details: { token: token, amount: amount, expiration: deadline, - nonce: 0, // Get current nonce from contract + nonce: 0, / Get current nonce from contract }, spender: spender, sigDeadline: deadline, }; - // Create permit data + / Create permit data const { domain, types, values } = AllowanceTransfer.getPermitData( permit, PERMIT2_ADDRESS, await signer.provider.getNetwork().then(n => n.chainId) ); - // Sign permit + / Sign permit const signature = await signer._signTypedData(domain, types, values); return { permit, signature }; } -// Step 3: Execute 
transfer with permit +// Step 3: Execute transfer with permit async function transferWithPermit(permit, signature, transferDetails) { const permit2 = new ethers.Contract( PERMIT2_ADDRESS, @@ -119,14 +119,14 @@ async function transferWithPermit(permit, signature, transferDetails) { signer ); - // First, submit the permit + // First, submit the permit await permit2.permit( signer.address, permit, signature ); - // Then transfer + // Then transfer await permit2.transferFrom( transferDetails.from, transferDetails.to, @@ -137,7 +137,7 @@ async function transferWithPermit(permit, signature, transferDetails) { ``` ```solidity Solidity -// SPDX-License-Identifier: MIT +// SPDX-License-Identifier: MIT pragma solidity ^0.8.0; interface IPermit2 { @@ -184,10 +184,10 @@ contract Permit2Integration { IPermit2.PermitSingle calldata permit, bytes calldata signature ) external { - // Submit permit + // Submit permit permit2.permit(from, permit, signature); - // Execute transfer + // Execute transfer permit2.transferFrom(from, to, amount, token); } } @@ -203,7 +203,7 @@ async function batchTransferWithPermit2(transfers, owner, signer) { const permits = []; const signatures = []; - // Prepare batch permits + // Prepare batch permits for (const transfer of transfers) { const permit = { details: { @@ -221,7 +221,7 @@ async function batchTransferWithPermit2(transfers, owner, signer) { signatures.push(signature); } - // Execute batch + // Execute batch const permit2 = new ethers.Contract(PERMIT2_ADDRESS, abi, signer); await permit2.permitBatch(owner, permits, signatures); } @@ -232,9 +232,9 @@ async function batchTransferWithPermit2(transfers, owner, signer) { Enable gasless token approvals using meta-transactions: ```javascript -// User signs permit off-chain +// User signs permit off-chain async function createGaslessPermit(token, spender, amount, signer) { - const deadline = Math.floor(Date.now() / 1000) + 3600; // 1 hour + const deadline = Math.floor(Date.now() / 1000) + 3600; // 1 hour const
permit = { details: { @@ -249,7 +249,7 @@ async function createGaslessPermit(token, spender, amount, signer) { const signature = await signPermit(permit, signer); - // Return data for relayer + // Return data for relayer return { permit, signature, @@ -257,7 +257,7 @@ }; } -// Relayer submits transaction +// Relayer submits transaction async function relayPermit(permitData, relayerSigner) { const permit2 = new ethers.Contract(PERMIT2_ADDRESS, abi, relayerSigner); @@ -283,7 +283,7 @@ struct PermitWitnessTransferFrom { address spender; uint256 nonce; uint256 deadline; - bytes32 witness; // Custom data hash + bytes32 witness; // Custom data hash } ``` @@ -292,10 +292,10 @@ struct PermitWitnessTransferFrom { Permit2 uses unordered nonces for flexibility: ```javascript -// Invalidate specific nonces +// Invalidate specific nonces await permit2.invalidateNonces(token, spender, newNonce); -// Invalidate nonce range +// Invalidate nonce range await permit2.invalidateUnorderedNonces(wordPos, mask); ``` @@ -305,10 +305,10 @@ Set appropriate expiration times: ```javascript const expirations = { - shortTerm: Math.floor(Date.now() / 1000) + 300, // 5 minutes - standard: Math.floor(Date.now() / 1000) + 3600, // 1 hour - longTerm: Math.floor(Date.now() / 1000) + 86400, // 1 day - maximum: 2n ** 48n - 1n, // Max allowed + shortTerm: Math.floor(Date.now() / 1000) + 300, // 5 minutes + standard: Math.floor(Date.now() / 1000) + 3600, // 1 hour + longTerm: Math.floor(Date.now() / 1000) + 86400, // 1 day + maximum: 2n ** 48n - 1n, // Max allowed }; ``` @@ -340,7 +340,7 @@ contract DEXWithPermit2 { PermitSingle calldata permit, bytes calldata signature ) external { - // Get tokens via permit + // Get tokens via permit permit2.permit(msg.sender, permit, signature); permit2.transferFrom( msg.sender, @@ -349,7 +349,7 @@ contract DEXWithPermit2 { permit.details.token ); - // Execute swap + // Execute swap _performSwap(params); } } @@
-360,12 +360,12 @@ contract DEXWithPermit2 { ```javascript class PaymentProcessor { async processPayment(order, permit, signature) { - // Verify order details + // Verify order details if (!this.verifyOrder(order)) { throw new Error("Invalid order"); } - // Process payment via Permit2 + // Process payment via Permit2 await this.permit2.permitTransferFrom( permit, { @@ -376,7 +376,7 @@ class PaymentProcessor { signature ); - // Fulfill order + // Fulfill order await this.fulfillOrder(order); } } diff --git a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/safe-factory.mdx b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/safe-factory.mdx index 527919f9..130dc570 100644 --- a/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/safe-factory.mdx +++ b/docs/evm/v0.4.x/documentation/smart-contracts/predeployed-contracts/safe-factory.mdx @@ -48,7 +48,7 @@ import { SafeFactory } from "@safe-global/safe-core-sdk"; const FACTORY_ADDRESS = "0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7"; -// Using Safe SDK +// Using Safe SDK async function deploySafeWallet(owners, threshold, signer) { const safeFactory = await SafeFactory.create({ ethAdapter: new EthersAdapter({ ethers, signer }), @@ -58,18 +58,18 @@ async function deploySafeWallet(owners, threshold, signer) { const safeAccountConfig = { owners: owners, threshold: threshold, - // Optional parameters - fallbackHandler: "0x...", // Fallback handler address + // Optional parameters + fallbackHandler: "0x...", // Fallback handler address paymentToken: ethers.ZeroAddress, payment: 0, paymentReceiver: ethers.ZeroAddress }; - // Predict address before deployment + // Predict address before deployment const predictedAddress = await safeFactory.predictSafeAddress(safeAccountConfig); console.log("Safe will be deployed to:", predictedAddress); - // Deploy the Safe + // Deploy the Safe const safeSdk = await safeFactory.deploySafe({ safeAccountConfig }); const safeAddress = await
safeSdk.getAddress(); @@ -77,7 +77,7 @@ async function deploySafeWallet(owners, threshold, signer) { return safeSdk; } -// Manual deployment without SDK +// Manual deployment without SDK async function deployManually(signer) { const factory = new ethers.Contract( FACTORY_ADDRESS, @@ -85,22 +85,22 @@ signer ); - // Prepare Safe proxy bytecode with initialization - const proxyBytecode = "0x..."; // Safe proxy bytecode + // Prepare Safe proxy bytecode with initialization + const proxyBytecode = "0x..."; // Safe proxy bytecode const salt = ethers.id("my-safe-v1"); - // Deploy + // Deploy const tx = await factory.deploy(proxyBytecode, salt); const receipt = await tx.wait(); - // Get deployed address from events + // Get deployed address from events const deployedAddress = receipt.logs[0].address; return deployedAddress; } ``` ```solidity Solidity -// SPDX-License-Identifier: MIT +// SPDX-License-Identifier: MIT pragma solidity ^0.8.0; interface ISafeFactory { @@ -118,28 +118,28 @@ contract SafeDeployer { bytes memory initializer, uint256 saltNonce ) external returns (address safe) { - // Prepare proxy creation code + // Prepare proxy creation code bytes memory deploymentData = abi.encodePacked( getProxyCreationCode(), uint256(uint160(safeSingleton)) ); - // Calculate salt + // Calculate salt bytes32 salt = keccak256(abi.encodePacked( keccak256(initializer), saltNonce )); - // Deploy via factory + // Deploy via factory safe = factory.deploy(deploymentData, salt); - // Initialize the Safe + // Initialize the Safe (bool success,) = safe.call(initializer); require(success, "Safe initialization failed"); } function getProxyCreationCode() pure returns (bytes memory) { - // Safe proxy bytecode + // Safe proxy bytecode return hex"608060405234801561001057600080fd5b50..."; } } @@ -185,19 +185,19 @@ async function deployCustomSafe(config, signer) { const { owners, threshold, modules, fallbackHandler } = config; - // Encode initialization data + // Encode initialization data const
setupData = safeSingleton.interface.encodeFunctionData("setup", [ owners, threshold, - ethers.ZeroAddress, // to - "0x", // data + ethers.ZeroAddress, // to + "0x", // data fallbackHandler, - ethers.ZeroAddress, // paymentToken - 0, // payment - ethers.ZeroAddress // paymentReceiver + ethers.ZeroAddress, // paymentToken + 0, // payment + ethers.ZeroAddress // paymentReceiver ]); - // Deploy proxy pointing to singleton + // Deploy proxy pointing to singleton const proxyFactory = new ethers.Contract( PROXY_FACTORY_ADDRESS, proxyFactoryAbi, @@ -207,13 +207,13 @@ const tx = await proxyFactory.createProxyWithNonce( SAFE_SINGLETON_ADDRESS, setupData, - Date.now() // saltNonce + Date.now() // saltNonce ); const receipt = await tx.wait(); const safeAddress = getSafeAddressFromReceipt(receipt); - // Enable modules if specified + // Enable modules if specified for (const module of modules) { await enableModule(safeAddress, module, signer); } @@ -237,7 +237,7 @@ async function deployOrgTreasury(orgId, chains) { const provider = new ethers.JsonRpcProvider(chain.rpc); const signer = new ethers.Wallet(deployerKey, provider); - // Same salt = same address on all chains + // Same salt = same address on all chains const address = await deploySafeWithSalt(salt, signer); results[chain.name] = address; } @@ -259,16 +259,16 @@ class CounterfactualSafe { } predictAddress() { - // Calculate address without deploying + // Calculate address without deploying return predictSafeAddress( this.owners, this.threshold, - 0 // saltNonce + 0 // saltNonce ); } async deploy(signer) { - // Only deploy when needed + // Only deploy when needed const code = await signer.provider.getCode(this.address); if (code !== "0x") { console.log("Already deployed"); @@ -291,14 +291,14 @@ class CounterfactualSafe { Deploy and enable Safe modules: ```javascript -// Deploy module via factory +// Deploy module via factory async function deployModule(moduleCode, salt) { const tx =
await factory.deploy(moduleCode, salt); const receipt = await tx.wait(); return receipt.contractAddress; } -// Enable module on Safe +// Enable module on Safe async function enableModule(safeAddress, moduleAddress, signer) { const safe = new ethers.Contract(safeAddress, safeAbi, signer); const tx = await safe.enableModule(moduleAddress); @@ -312,11 +312,11 @@ Deploy transaction guards for additional security: ```javascript async function deployAndSetGuard(safeAddress, guardCode, signer) { - // Deploy guard + // Deploy guard const salt = ethers.id(`guard-${safeAddress}`); const guardAddress = await factory.deploy(guardCode, salt); - // Set as Safe guard + // Set as Safe guard const safe = new ethers.Contract(safeAddress, safeAbi, signer); const tx = await safe.setGuard(guardAddress); await tx.wait(); @@ -330,7 +330,7 @@ async function deployAndSetGuard(safeAddress, guardCode, signer) { ### Salt Management ```javascript -// Structured salt generation +// Structured salt generation function generateSalt(context, nonce) { return ethers.solidityPackedKeccak256( ["string", "uint256"], @@ -338,7 +338,7 @@ ); } -// Examples +// Examples const userSalt = generateSalt(`user-${userId}`, 0); const orgSalt = generateSalt(`org-${orgId}`, iteration); const appSalt = generateSalt(`app-${appId}-${version}`, 0); @@ -375,14 +375,6 @@ async function deploySafeVersion(version, owners, threshold) { - **Singleton Verification**: Verify singleton contract before deployment - **Access Control**: The factory itself has no access control - anyone can deploy -## Troubleshooting - -| Issue | Solution | -|-------|----------| -| Address mismatch | Verify salt and bytecode are identical | -| Deployment fails | Check sufficient gas and valid bytecode | -| Safe not working | Ensure proper initialization after deployment | -| Cross-chain inconsistency | Verify same singleton and salt used | ## Related Contracts diff --git
a/docs/evm/v0.4.x/documentation/versioning-workflow.md b/docs/evm/v0.4.x/documentation/versioning-workflow.md index 83bf9220..3a7c09e6 100644 --- a/docs/evm/v0.4.x/documentation/versioning-workflow.md +++ b/docs/evm/v0.4.x/documentation/versioning-workflow.md @@ -242,30 +242,6 @@ import EIPCompatibilityTableStatic from '/snippets/eip-compatibility-table-stati - External dependencies captured - Self-contained frozen versions -## Troubleshooting - -### Common Issues - -1. **Script Permissions** - - ```bash - chmod +x scripts/*.sh - ``` - -2. **Missing versions.json** - - ```bash - echo '{"versions": ["main"]}' > versions.json - ``` - -3. **EIP Snapshot Fails** - - Check Google Sheets API access - - Verify network connectivity - - Manual snapshot: `node scripts/snapshot-eip-data.js v0.4.x` - -4. **Navigation Not Updated** - - Run: `node scripts/update-navigation.js v0.4.x` - - Manually check docs.json structure ## Best Practices diff --git a/docs/evm/v0.4.x/security-audit.mdx b/docs/evm/v0.4.x/security-audit.mdx new file mode 100644 index 00000000..58ca1ab4 --- /dev/null +++ b/docs/evm/v0.4.x/security-audit.mdx @@ -0,0 +1,72 @@ +--- +title: "Security Audit" +description: "External security audit report for the Cosmos EVM module" +icon: "shield-check" +keywords: ['security', 'audit', 'sherlock', 'vulnerability', 'assessment', 'evm', 'cosmos'] +--- + +## Overview + +The Cosmos EVM module underwent a comprehensive security audit conducted by Sherlock, a leading blockchain security firm. The audit was completed on July 28, 2025, providing an independent assessment of the module's security posture, code quality, and potential vulnerabilities. 
+ +## Audit Details + +**Auditor**: Sherlock +**Audit Completion Date**: July 28, 2025 +**Report Version**: Final +**Pages**: 203 + +## Scope + +The security audit covered the entire EVM module implementation, including: + +- Core EVM execution environment +- Precompiled contracts and their integrations +- State management and storage mechanisms +- Transaction processing and gas metering +- Integration with Cosmos SDK modules +- Cross-chain functionality and IBC compatibility +- Security-critical components and access controls + +## Key Areas of Focus + +The audit specifically examined: + +1. **Smart Contract Security**: Analysis of precompiles and their interaction patterns +2. **State Consistency**: Verification of state transitions and atomicity guarantees +3. **Gas Economics**: Review of gas consumption and potential denial-of-service vectors +4. **Access Controls**: Examination of permission systems and authorization mechanisms +5. **Integration Points**: Assessment of module boundaries and cross-module communications +6. **Edge Cases**: Testing of boundary conditions and error handling paths + +## Accessing the Report + +The complete audit report is publicly available and can be accessed through the following link: + + + Download the complete 203-page security audit report conducted by Sherlock + + +## Recommendations + +Security audits are a critical component of blockchain development. While this audit provides confidence in the module's security, users and developers should: + +- Stay informed about any security advisories or updates +- Follow best practices when developing applications using the EVM module +- Report any suspected vulnerabilities through appropriate security channels +- Keep their deployments updated with the latest security patches + +## Continuous Security + +Security is an ongoing process. 
The Cosmos EVM team maintains a commitment to: + +- Regular security reviews and assessments +- Prompt response to security disclosures +- Transparent communication about security matters +- Collaboration with the security research community + +For security-related inquiries or to report potential vulnerabilities, please follow the [Cosmos Security Policy](https://github.com/cosmos/cosmos-sdk/security/policy). \ No newline at end of file diff --git a/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png b/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png new file mode 100644 index 00000000..db2d1d31 Binary files /dev/null and b/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png differ diff --git a/docs/ibc/images/01-ibc/03-apps/images/packet_flow_v2.png b/docs/ibc/images/01-ibc/03-apps/images/packet_flow_v2.png new file mode 100644 index 00000000..781b426a Binary files /dev/null and b/docs/ibc/images/01-ibc/03-apps/images/packet_flow_v2.png differ diff --git a/docs/ibc/images/01-ibc/04-middleware/images/middleware-stack.png b/docs/ibc/images/01-ibc/04-middleware/images/middleware-stack.png new file mode 100644 index 00000000..1d54f808 Binary files /dev/null and b/docs/ibc/images/01-ibc/04-middleware/images/middleware-stack.png differ diff --git a/docs/ibc/images/02-apps/01-transfer/images/forwarding-3-chains-dark.png b/docs/ibc/images/02-apps/01-transfer/images/forwarding-3-chains-dark.png new file mode 100644 index 00000000..3da5c63f Binary files /dev/null and b/docs/ibc/images/02-apps/01-transfer/images/forwarding-3-chains-dark.png differ diff --git a/docs/ibc/images/02-apps/01-transfer/images/forwarding-3-chains-light.png b/docs/ibc/images/02-apps/01-transfer/images/forwarding-3-chains-light.png new file mode 100644 index 00000000..032919e8 Binary files /dev/null and b/docs/ibc/images/02-apps/01-transfer/images/forwarding-3-chains-light.png differ diff --git a/docs/ibc/images/02-apps/02-interchain-accounts/09-legacy/images/ica-pre-v6.png 
b/docs/ibc/images/02-apps/02-interchain-accounts/09-legacy/images/ica-pre-v6.png new file mode 100644 index 00000000..acd00d12 Binary files /dev/null and b/docs/ibc/images/02-apps/02-interchain-accounts/09-legacy/images/ica-pre-v6.png differ diff --git a/docs/ibc/images/02-apps/02-interchain-accounts/10-legacy/images/ica-pre-v6.png b/docs/ibc/images/02-apps/02-interchain-accounts/10-legacy/images/ica-pre-v6.png new file mode 100644 index 00000000..acd00d12 Binary files /dev/null and b/docs/ibc/images/02-apps/02-interchain-accounts/10-legacy/images/ica-pre-v6.png differ diff --git a/docs/ibc/images/02-apps/02-interchain-accounts/images/ica-v6.png b/docs/ibc/images/02-apps/02-interchain-accounts/images/ica-v6.png new file mode 100644 index 00000000..0ffa707c Binary files /dev/null and b/docs/ibc/images/02-apps/02-interchain-accounts/images/ica-v6.png differ diff --git a/docs/ibc/images/02-apps/02-interchain-accounts/images/send-interchain-tx.png b/docs/ibc/images/02-apps/02-interchain-accounts/images/send-interchain-tx.png new file mode 100644 index 00000000..ecaaa981 Binary files /dev/null and b/docs/ibc/images/02-apps/02-interchain-accounts/images/send-interchain-tx.png differ diff --git a/docs/ibc/images/03-middleware/01-ics29-fee/images/feeflow.png b/docs/ibc/images/03-middleware/01-ics29-fee/images/feeflow.png new file mode 100644 index 00000000..ba02071f Binary files /dev/null and b/docs/ibc/images/03-middleware/01-ics29-fee/images/feeflow.png differ diff --git a/docs/ibc/images/03-middleware/01-ics29-fee/images/msgpaypacket.png b/docs/ibc/images/03-middleware/01-ics29-fee/images/msgpaypacket.png new file mode 100644 index 00000000..1bd5deb0 Binary files /dev/null and b/docs/ibc/images/03-middleware/01-ics29-fee/images/msgpaypacket.png differ diff --git a/docs/ibc/images/03-middleware/01-ics29-fee/images/paypacketfeeasync.png b/docs/ibc/images/03-middleware/01-ics29-fee/images/paypacketfeeasync.png new file mode 100644 index 00000000..27c486a6 Binary files 
/dev/null and b/docs/ibc/images/03-middleware/01-ics29-fee/images/paypacketfeeasync.png differ diff --git a/docs/ibc/images/04-middleware/01-callbacks/images/callbackflow.svg b/docs/ibc/images/04-middleware/01-callbacks/images/callbackflow.svg new file mode 100644 index 00000000..2323889b --- /dev/null +++ b/docs/ibc/images/04-middleware/01-callbacks/images/callbackflow.svg @@ -0,0 +1,43 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/ibc/images/04-middleware/01-callbacks/images/ics4-callbackflow.svg b/docs/ibc/images/04-middleware/01-callbacks/images/ics4-callbackflow.svg new file mode 100644 index 00000000..032a83f7 --- /dev/null +++ b/docs/ibc/images/04-middleware/01-callbacks/images/ics4-callbackflow.svg @@ -0,0 +1,43 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/ibc/images/04-middleware/01-ics29-fee/images/feeflow.png b/docs/ibc/images/04-middleware/01-ics29-fee/images/feeflow.png new file mode 100644 index 00000000..ba02071f Binary files /dev/null and b/docs/ibc/images/04-middleware/01-ics29-fee/images/feeflow.png differ diff --git a/docs/ibc/images/04-middleware/01-ics29-fee/images/msgpaypacket.png b/docs/ibc/images/04-middleware/01-ics29-fee/images/msgpaypacket.png new file mode 100644 index 00000000..1bd5deb0 Binary files /dev/null and b/docs/ibc/images/04-middleware/01-ics29-fee/images/msgpaypacket.png differ diff --git a/docs/ibc/images/04-middleware/01-ics29-fee/images/paypacketfeeasync.png b/docs/ibc/images/04-middleware/01-ics29-fee/images/paypacketfeeasync.png new file mode 100644 index 00000000..27c486a6 Binary files /dev/null and b/docs/ibc/images/04-middleware/01-ics29-fee/images/paypacketfeeasync.png differ diff --git a/docs/ibc/images/04-middleware/02-callbacks/images/callbackflow.svg b/docs/ibc/images/04-middleware/02-callbacks/images/callbackflow.svg new file mode 100644 index 00000000..2323889b --- /dev/null +++ 
b/docs/ibc/images/04-middleware/02-callbacks/images/callbackflow.svg @@ -0,0 +1,43 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/ibc/images/04-middleware/02-callbacks/images/ics4-callbackflow.svg b/docs/ibc/images/04-middleware/02-callbacks/images/ics4-callbackflow.svg new file mode 100644 index 00000000..032a83f7 --- /dev/null +++ b/docs/ibc/images/04-middleware/02-callbacks/images/ics4-callbackflow.svg @@ -0,0 +1,43 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/ibc/images/04-migrations/images/auth-module-decision-tree.png b/docs/ibc/images/04-migrations/images/auth-module-decision-tree.png new file mode 100644 index 00000000..dc2c98e6 Binary files /dev/null and b/docs/ibc/images/04-migrations/images/auth-module-decision-tree.png differ diff --git a/docs/ibc/images/05-migrations/images/auth-module-decision-tree.png b/docs/ibc/images/05-migrations/images/auth-module-decision-tree.png new file mode 100644 index 00000000..dc2c98e6 Binary files /dev/null and b/docs/ibc/images/05-migrations/images/auth-module-decision-tree.png differ diff --git a/docs/ibc/images/images/ibcoverview-dark.svg b/docs/ibc/images/images/ibcoverview-dark.svg new file mode 100644 index 00000000..e36ce323 --- /dev/null +++ b/docs/ibc/images/images/ibcoverview-dark.svg @@ -0,0 +1,135 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/ibc/images/images/ibcoverview-light.svg b/docs/ibc/images/images/ibcoverview-light.svg new file mode 100644 index 00000000..10286c64 --- /dev/null +++ b/docs/ibc/images/images/ibcoverview-light.svg @@ -0,0 +1,135 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/ibc/images/static/img/IBC-go-cover.svg b/docs/ibc/images/static/img/IBC-go-cover.svg new file mode 100644 index 00000000..11cd367e --- /dev/null +++ b/docs/ibc/images/static/img/IBC-go-cover.svg @@ -0,0 +1,42 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/ibc/images/static/img/black-ibc-logo-400x400.svg b/docs/ibc/images/static/img/black-ibc-logo-400x400.svg new file mode 100644 index 00000000..c7a5478b --- /dev/null +++ b/docs/ibc/images/static/img/black-ibc-logo-400x400.svg @@ -0,0 +1,10 @@ + + + + + + + + + + diff --git a/docs/ibc/images/static/img/black-ibc-logo.svg b/docs/ibc/images/static/img/black-ibc-logo.svg new file mode 100644 index 00000000..71afdd3a --- /dev/null +++ b/docs/ibc/images/static/img/black-ibc-logo.svg @@ -0,0 +1,12 @@ + + + + + + + + + + + + diff --git a/docs/ibc/images/static/img/black-large-ibc-logo.svg b/docs/ibc/images/static/img/black-large-ibc-logo.svg new file mode 100644 index 00000000..2902c0ce --- /dev/null +++ b/docs/ibc/images/static/img/black-large-ibc-logo.svg @@ -0,0 +1,52 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/ibc/images/static/img/cosmos-logo-bw.svg b/docs/ibc/images/static/img/cosmos-logo-bw.svg new file mode 100644 index 00000000..f2575260 --- /dev/null +++ b/docs/ibc/images/static/img/cosmos-logo-bw.svg @@ -0,0 +1,8 @@ + + + + + + + + diff --git a/docs/ibc/images/static/img/ibc-go-docs-social-card.png b/docs/ibc/images/static/img/ibc-go-docs-social-card.png new file mode 100644 index 00000000..1f1017d9 Binary files /dev/null and b/docs/ibc/images/static/img/ibc-go-docs-social-card.png differ diff --git a/docs/ibc/images/static/img/ico-chevron.svg b/docs/ibc/images/static/img/ico-chevron.svg 
new file mode 100644 index 00000000..3f8e8fac --- /dev/null +++ b/docs/ibc/images/static/img/ico-chevron.svg @@ -0,0 +1,3 @@ + + + diff --git a/docs/ibc/images/static/img/icons/hi-coffee-icon.svg b/docs/ibc/images/static/img/icons/hi-coffee-icon.svg new file mode 100644 index 00000000..eef25bd1 --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-coffee-icon.svg @@ -0,0 +1,6 @@ + + + + + + diff --git a/docs/ibc/images/static/img/icons/hi-info-icon.svg b/docs/ibc/images/static/img/icons/hi-info-icon.svg new file mode 100644 index 00000000..ca3ca627 --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-info-icon.svg @@ -0,0 +1,5 @@ + + + + + \ No newline at end of file diff --git a/docs/ibc/images/static/img/icons/hi-note-icon.svg b/docs/ibc/images/static/img/icons/hi-note-icon.svg new file mode 100644 index 00000000..0c193026 --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-note-icon.svg @@ -0,0 +1,3 @@ + + + diff --git a/docs/ibc/images/static/img/icons/hi-prerequisite-icon.svg b/docs/ibc/images/static/img/icons/hi-prerequisite-icon.svg new file mode 100644 index 00000000..fb4c4d69 --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-prerequisite-icon.svg @@ -0,0 +1,6 @@ + + + + + + diff --git a/docs/ibc/images/static/img/icons/hi-reading-icon.svg b/docs/ibc/images/static/img/icons/hi-reading-icon.svg new file mode 100644 index 00000000..3662b303 --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-reading-icon.svg @@ -0,0 +1,3 @@ + + + diff --git a/docs/ibc/images/static/img/icons/hi-star-icon.svg b/docs/ibc/images/static/img/icons/hi-star-icon.svg new file mode 100644 index 00000000..0811b964 --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-star-icon.svg @@ -0,0 +1,3 @@ + + + diff --git a/docs/ibc/images/static/img/icons/hi-target-icon.svg b/docs/ibc/images/static/img/icons/hi-target-icon.svg new file mode 100644 index 00000000..98d94e8b --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-target-icon.svg @@ -0,0 +1,3 @@ + + + diff 
--git a/docs/ibc/images/static/img/icons/hi-tip-icon.svg b/docs/ibc/images/static/img/icons/hi-tip-icon.svg new file mode 100644 index 00000000..96fbbb37 --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-tip-icon.svg @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/docs/ibc/images/static/img/icons/hi-warn-icon.svg b/docs/ibc/images/static/img/icons/hi-warn-icon.svg new file mode 100644 index 00000000..9baeb30e --- /dev/null +++ b/docs/ibc/images/static/img/icons/hi-warn-icon.svg @@ -0,0 +1,5 @@ + + + + + \ No newline at end of file diff --git a/docs/ibc/images/static/img/spirograph-white.svg b/docs/ibc/images/static/img/spirograph-white.svg new file mode 100644 index 00000000..f4957303 --- /dev/null +++ b/docs/ibc/images/static/img/spirograph-white.svg @@ -0,0 +1,5 @@ + + + diff --git a/docs/ibc/images/static/img/white-cosmos-icon.svg b/docs/ibc/images/static/img/white-cosmos-icon.svg new file mode 100644 index 00000000..4e59ad83 --- /dev/null +++ b/docs/ibc/images/static/img/white-cosmos-icon.svg @@ -0,0 +1,3 @@ + + + diff --git a/docs/ibc/images/static/img/white-ibc-logo-400x400.svg b/docs/ibc/images/static/img/white-ibc-logo-400x400.svg new file mode 100644 index 00000000..a33ae7c2 --- /dev/null +++ b/docs/ibc/images/static/img/white-ibc-logo-400x400.svg @@ -0,0 +1,10 @@ + + + + + + + + + + diff --git a/docs/ibc/images/static/img/white-ibc-logo.svg b/docs/ibc/images/static/img/white-ibc-logo.svg new file mode 100644 index 00000000..c4ac0cbd --- /dev/null +++ b/docs/ibc/images/static/img/white-ibc-logo.svg @@ -0,0 +1,12 @@ + + + + + + + + + + + + diff --git a/docs/ibc/images/static/img/white-large-ibc-logo.svg b/docs/ibc/images/static/img/white-large-ibc-logo.svg new file mode 100644 index 00000000..7f349ead --- /dev/null +++ b/docs/ibc/images/static/img/white-large-ibc-logo.svg @@ -0,0 +1,52 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git 
a/docs/ibc/next/apps/interchain-accounts/active-channels.mdx b/docs/ibc/next/apps/interchain-accounts/active-channels.mdx new file mode 100644 index 00000000..af815ffd --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/active-channels.mdx @@ -0,0 +1,44 @@ +--- +title: Active Channels +description: The Interchain Accounts module uses either ORDERED or UNORDERED channels. +--- + +The Interchain Accounts module uses either [ORDERED or UNORDERED](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#ordering) channels. + +When using `ORDERED` channels, the order of transactions when sending packets from a controller to a host chain is maintained. + +When using `UNORDERED` channels, there is no guarantee that the order of transactions when sending packets from the controller to the host chain is maintained. If no ordering is specified in `MsgRegisterInterchainAccount`, then the default ordering for new ICA channels is `UNORDERED`. + +> A limitation when using ORDERED channels is that when a packet times out the channel will be closed. + +In the case of a channel closing, a controller chain needs to be able to regain access to the interchain account registered on this channel. `Active Channels` enable this functionality. + +When an Interchain Account is registered using `MsgRegisterInterchainAccount`, a new channel is created on a particular port. During the `OnChanOpenAck` and `OnChanOpenConfirm` steps (on controller & host chain respectively) the `Active Channel` for this interchain account is stored in state. + +It is possible to create a new channel using the same controller chain portID if the previously set `Active Channel` is now in a `CLOSED` state. 
This channel creation can be initialized programmatically by sending a new `MsgChannelOpenInit` message like so: + +```go +msg := channeltypes.NewMsgChannelOpenInit(portID, string(versionBytes), channeltypes.ORDERED, []string{ + connectionID +}, icatypes.HostPortID, authtypes.NewModuleAddress(icatypes.ModuleName).String()) + handler := keeper.msgRouter.Handler(msg) + +res, err := handler(ctx, msg) + if err != nil { + return err +} +``` + +Alternatively, any relayer operator may initiate a new channel handshake for this interchain account once the previously set `Active Channel` is in a `CLOSED` state. This is done by initiating the channel handshake on the controller chain using the same portID associated with the interchain account in question. + +It is important to note that once a channel has been opened for a given interchain account, new channels can not be opened for this account until the currently set `Active Channel` is set to `CLOSED`. + +## Future improvements + +Future versions of the ICS-27 protocol and the Interchain Accounts module will likely use a new channel type that provides ordering of packets without the channel closing in the event of a packet timing out, thus removing the need for `Active Channels` entirely. 
+The following is a list of issues which will provide the infrastructure to make this possible: + +* [IBC Channel Upgrades](https://github.com/cosmos/ibc-go/issues/1599) +* [Implement ORDERED\_ALLOW\_TIMEOUT logic in 04-channel](https://github.com/cosmos/ibc-go/issues/1661) +* [Add ORDERED\_ALLOW\_TIMEOUT as supported ordering in 03-connection](https://github.com/cosmos/ibc-go/issues/1662) +* [Allow ICA channels to be opened as ORDERED\_ALLOW\_TIMEOUT](https://github.com/cosmos/ibc-go/issues/1663) diff --git a/docs/ibc/next/apps/interchain-accounts/auth-modules.mdx b/docs/ibc/next/apps/interchain-accounts/auth-modules.mdx new file mode 100644 index 00000000..c75be3be --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/auth-modules.mdx @@ -0,0 +1,21 @@ +--- +title: Authentication Modules +--- + +## Synopsis + +Authentication modules enable application developers to perform custom logic when interacting with the Interchain Accounts controller submodule's `MsgServer`. + +The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified. There may exist many different types of authentication which are desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic. + +In ibc-go, authentication modules can communicate with the controller submodule by passing messages through `baseapp`'s `MsgServiceRouter`. To implement an authentication module, the `IBCModule` interface need not be fulfilled; it is only required to fulfill Cosmos SDK's `AppModuleBasic` interface, just like any regular Cosmos SDK application module. + +The authentication module must: + +- Authenticate interchain account owners. +- Track the associated interchain account address for an owner.
+- Send packets on behalf of an owner (after authentication). + +## Integration into `app.go` file + +To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/next/apps/interchain-accounts/integration#example-integration). diff --git a/docs/ibc/next/apps/interchain-accounts/client.mdx b/docs/ibc/next/apps/interchain-accounts/client.mdx new file mode 100644 index 00000000..fb058d88 --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/client.mdx @@ -0,0 +1,200 @@ +--- +title: Client +description: >- + A user can query and interact with the Interchain Accounts module using the + CLI. Use the --help flag to discover the available commands: +--- + +## CLI + +A user can query and interact with the Interchain Accounts module using the CLI. Use the `--help` flag to discover the available commands: + +```shell +simd query interchain-accounts --help +``` + +> Please note that this section does not document all the available commands, only those that warrant extra documentation beyond what fits in the command-line help. + +### Controller + +A user can query and interact with the controller submodule. + +#### Query + +The `query` commands allow users to query the controller submodule. + +```shell +simd query interchain-accounts controller --help +``` + +#### Transactions + +The `tx` commands allow users to interact with the controller submodule. + +```shell +simd tx interchain-accounts controller --help +``` + +#### `register` + +The `register` command allows users to register an interchain account on a host chain on the provided connection. + +```shell +simd tx interchain-accounts controller register [connection-id] [flags] +``` + +During registration a new channel is set up between controller and host. There are two flags available that influence the channel that is created: + +* `--version` to specify the (JSON-formatted) version string of the channel.
For example: `{\"version\":\"ics27-1\",\"encoding\":\"proto3\",\"tx_type\":\"sdk_multi_msg\",\"controller_connection_id\":\"connection-0\",\"host_connection_id\":\"connection-0\"}`. Passing a custom version string is useful if you want to specify, for example, the encoding format of the interchain accounts packet data (either `proto3` or `proto3json`). If not specified the controller submodule will generate a default version string. +* `--ordering` to specify the ordering of the channel. Available options are `order_ordered` and `order_unordered` (default if not specified). + +Example: + +```shell +simd tx interchain-accounts controller register connection-0 --ordering order_ordered --from cosmos1.. +``` + +#### `send-tx` + +The `send-tx` command allows users to send a transaction on the provided connection to be executed using an interchain account on the host chain. + +```shell +simd tx interchain-accounts controller send-tx [connection-id] [path/to/packet_msg.json] +``` + +Example: + +```shell +simd tx interchain-accounts controller send-tx connection-0 packet-data.json --from cosmos1.. +``` + +See below for example contents of `packet-data.json`. The CLI handler will unmarshal the following into `InterchainAccountPacketData` appropriately. + +```json +{ + "type": "TYPE_EXECUTE_TX", + "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw", + "memo": "" +} +``` + +Note the `data` field is a base64 encoded byte string as per the tx encoding agreed upon during the channel handshake. + +A helper CLI is provided in the host submodule which can be used to generate the packet data JSON using the counterparty chain's binary. See the [`generate-packet-data` command](#generate-packet-data) for an example. + +### Host + +A user can query and interact with the host submodule. 
+ +#### Query + +The `query` commands allow users to query the host submodule. + +```shell +simd query interchain-accounts host --help +``` + +#### Transactions + +The `tx` commands allow users to interact with the host submodule. + +```shell +simd tx interchain-accounts host --help +``` + +#### `generate-packet-data` + +The `generate-packet-data` command allows users to generate protobuf or proto3 JSON encoded interchain accounts packet data for input message(s). The packet data can then be used with the controller submodule's [`send-tx` command](#send-tx). The `--encoding` flag can be used to specify the encoding format (value must be either `proto3` or `proto3json`); if not specified, the default will be `proto3`. The `--memo` flag can be used to include a memo string in the interchain accounts packet data. + +```shell +simd tx interchain-accounts host generate-packet-data [message] +``` + +Example: + +```shell expandable +simd tx interchain-accounts host generate-packet-data '[{ + "@type":"/cosmos.bank.v1beta1.MsgSend", + "from_address":"cosmos15ccshhmp0gsx29qpqq6g4zmltnnvgmyu9ueuadh9y2nc5zj0szls5gtddz", + "to_address":"cosmos10h9stc5v6ntgeygf5xf945njqq5h32r53uquvw", + "amount": [ + { + "denom": "stake", + "amount": "1000" + } + ] +}]' --memo memo +``` + +The command accepts a single `sdk.Msg` or a list of `sdk.Msg`s that will be encoded into the output's `data` field. + +Example output: + +```json +{ + "type": "TYPE_EXECUTE_TX", + "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw", + "memo": "memo" +} +``` + +## gRPC + +A user can query the interchain account module using gRPC endpoints. + +### Controller + +A user can query the controller submodule using gRPC endpoints.
+ +#### `InterchainAccount` + +The `InterchainAccount` endpoint allows users to query the controller submodule for the interchain account address for a given owner on a particular connection. + +```shell +ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"owner":"cosmos1..","connection_id":"connection-0"}' \ + localhost:9090 \ + ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount +``` + +#### `Params` + +The `Params` endpoint allows users to query the current controller submodule parameters. + +```shell +ibc.applications.interchain_accounts.controller.v1.Query/Params +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + ibc.applications.interchain_accounts.controller.v1.Query/Params +``` + +### Host + +A user can query the host submodule using gRPC endpoints. + +#### `Params` + +The `Params` endpoint allows users to query the current host submodule parameters. + +```shell +ibc.applications.interchain_accounts.host.v1.Query/Params +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + ibc.applications.interchain_accounts.host.v1.Query/Params +``` diff --git a/docs/ibc/next/apps/interchain-accounts/development.mdx b/docs/ibc/next/apps/interchain-accounts/development.mdx new file mode 100644 index 00000000..3e9342fd --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/development.mdx @@ -0,0 +1,34 @@ +--- +title: Development Use Cases +--- + +The initial version of Interchain Accounts allowed for the controller submodule to be extended by providing it with an underlying application which would handle all packet callbacks. +That functionality is now being deprecated in favor of alternative approaches. +This document will outline potential use cases and redirect each use case to the appropriate documentation.
+ +## Custom authentication + +Interchain accounts may be associated with alternative types of authentication relative to the traditional public/private key signing. +If you wish to develop or use Interchain Accounts with a custom authentication module and do not need to execute custom logic on the packet callbacks, we recommend you use ibc-go v6 or greater and that your custom authentication module interacts with the controller submodule via the [`MsgServer`](/docs/ibc/next/apps/interchain-accounts/messages). + +If you wish to consume and execute custom logic in the packet callbacks, then please read the section [Packet callbacks](#packet-callbacks) below. + +## Redirection to a smart contract + +It may be desirable to allow smart contracts to control an interchain account. +To facilitate such an action, the controller submodule may be provided an underlying application which redirects to smart contract callers. +An improved design has been suggested in [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) which performs this action via middleware. + +Implementers of this use case are recommended to follow the ADR 008 approach. +The underlying application may continue to be used as a short term solution for ADR 008 and the [legacy API](/docs/ibc/next/apps/interchain-accounts/legacy/auth-modules) should continue to be utilized in such situations. + +## Packet callbacks + +If a developer requires access to packet callbacks for their use case, then they have the following options: + +1. Write a smart contract which is connected via an ADR 008 or equivalent IBC application (recommended). +2. Use the controller's underlying application to implement packet callback logic. + +In the first case, the smart contract should use the [`MsgServer`](/docs/ibc/next/apps/interchain-accounts/messages). + +In the second case, the underlying application should use the [legacy API](/docs/ibc/next/apps/interchain-accounts/legacy/keeper-api). 
diff --git a/docs/ibc/next/apps/interchain-accounts/integration.mdx b/docs/ibc/next/apps/interchain-accounts/integration.mdx new file mode 100644 index 00000000..2b42588b --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/integration.mdx @@ -0,0 +1,193 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate Interchain Accounts host and controller functionality to your chain. The following document only applies for Cosmos SDK chains. + +The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule which will be used. + +Chains who wish to support ICS-27 may elect to act as a host chain, a controller chain or both. Disabling host or controller functionality may be done statically by excluding the host or controller submodule entirely from the `app.go` file or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules (both custom or generic, such as the `x/gov` or `x/auth` Cosmos SDK modules) can send messages to the controller submodule's [`MsgServer`](/docs/ibc/next/apps/interchain-accounts/messages) to register interchain accounts and send packets to the interchain account. To accomplish this, the authentication module needs to be composed with `baseapp`'s `MsgServiceRouter`. 
+ +![ica-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/images/ica-v6.png) + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... + +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... 
+ +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.PortKeeper, + app.MsgServiceRouter(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.PortKeeper, app.AccountKeeper, + app.MsgServiceRouter(), app.GRPCQueryRouter(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter()) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddleware(app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +... + +/ Register Interchain Accounts and authentication module AppModule's +app.moduleManager = module.NewManager( + ... + icaModule, + icaAuthModule, +) + +... + +/ Add Interchain Accounts to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + icatypes.ModuleName, + ... 
+) + +/ Add Interchain Accounts to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add Interchain Accounts module InitGenesis logic +app.moduleManager.SetOrderInitGenesis( + ... + icatypes.ModuleName, + ... +) + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) + +paramskeeper.Keeper { + ... + paramsKeeper.Subspace(icahosttypes.SubModuleName) + +paramsKeeper.Subspace(icacontrollertypes.SubModuleName) + ... +} +``` + +If no custom authentication module is needed and a generic Cosmos SDK authentication module can be used, then from the sample integration code above all references to `ICAAuthKeeper` and `icaAuthModule` can be removed. That is, the following code would not be needed: + +```go +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter()) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) +``` + +### Using submodules exclusively + +As described above, the Interchain Accounts application module is structured to support the ability of exclusively enabling controller or host functionality. +This can be achieved by simply omitting either controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`. +Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/next/apps/interchain-accounts/parameters). + +The following snippets show basic examples of statically disabling submodules using `app.go`.
+ +#### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +#### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Optionally instantiate your custom authentication module if needed, or not otherwise +... + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(app.ICAControllerKeeper) + +/ Register controller route +ibcRouter.AddRoute(icacontrollertypes.SubModuleName, icaControllerStack) +``` diff --git a/docs/ibc/next/apps/interchain-accounts/legacy/auth-modules.mdx b/docs/ibc/next/apps/interchain-accounts/legacy/auth-modules.mdx new file mode 100644 index 00000000..526867a7 --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/legacy/auth-modules.mdx @@ -0,0 +1,306 @@ +--- +title: Authentication Modules +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. + +## Synopsis + +Authentication modules play the role of the `Base Application` as described in [ICS-30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware), and enable application developers to perform custom logic when working with the Interchain Accounts controller API. + +The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified. 
There may exist many different types of authentication which are desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic. + +In ibc-go, authentication modules are connected to the controller chain via a middleware stack. The controller submodule is implemented as [middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) and the authentication module is connected to the controller submodule as the base application of the middleware stack. To implement an authentication module, the `IBCModule` interface must be fulfilled. By implementing the controller submodule as middleware, any amount of authentication modules can be created and connected to the controller submodule without writing redundant code. + +The authentication module must: + +- Authenticate interchain account owners. +- Track the associated interchain account address for an owner. +- Send packets on behalf of an owner (after authentication). 
+ +## `IBCModule` implementation + +The following `IBCModule` callbacks must be implemented with appropriate custom logic: + +```go expandable +/ OnChanOpenInit implements the IBCModule interface +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / perform custom logic + + return version, nil +} + +/ OnChanOpenAck implements the IBCModule interface +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + / perform custom logic + + return nil +} + +/ OnChanCloseConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / perform custom logic + + return nil +} + +/ OnAcknowledgementPacket implements the IBCModule interface +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} + +/ OnTimeoutPacket implements the IBCModule interface. +func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} +``` + +The following functions must be defined to fulfill the `IBCModule` interface, but they will never be called by the controller submodule so they may error or panic. That is because in Interchain Accounts, the channel handshake is always initiated on the controller chain and packets are always sent to the host chain and never to the controller chain. 
+ +```go expandable +/ OnChanOpenTry implements the IBCModule interface +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + panic("UNIMPLEMENTED") +} + +/ OnChanOpenConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnChanCloseInit implements the IBCModule interface +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnRecvPacket implements the IBCModule interface. A successful acknowledgement +/ is returned if the packet data is successfully decoded and the receive application +/ logic returns without error. +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + panic("UNIMPLEMENTED") +} +``` + +## `OnAcknowledgementPacket` + +Controller chains will be able to access the acknowledgement written into the host chain state once a relayer relays the acknowledgement. +The acknowledgement bytes contain either the response of the execution of the message(s) on the host chain or an error. They will be passed to the auth module via the `OnAcknowledgementPacket` callback. Auth modules are expected to know how to decode the acknowledgement. 
+ +If the controller chain is connected to a host chain using the host module on ibc-go, it may interpret the acknowledgement bytes as follows: + +Begin by unmarshaling the acknowledgement into `sdk.TxMsgData`: + +```go +var ack channeltypes.Acknowledgement + if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil { + return err +} + txMsgData := &sdk.TxMsgData{ +} + if err := proto.Unmarshal(ack.GetResult(), txMsgData); err != nil { + return err +} +``` + +If the `txMsgData.Data` field is non-nil, the host chain is using SDK version `<=` v0.45. +The auth module should interpret the `txMsgData.Data` as follows: + +```go expandable +switch len(txMsgData.Data) { + case 0: + / see documentation below for SDK 0.46.x or greater +default: + for _, msgData := range txMsgData.Data { + if err := handler(msgData); err != nil { + return err +} + +} +... +} +``` + +A handler will be needed to interpret what actions to perform based on the message type sent. +A router could be used, or more simply a switch statement. + +```go expandable +func handler(msgData sdk.MsgData) + +error { + switch msgData.MsgType { + case sdk.MsgTypeURL(&banktypes.MsgSend{ +}): + msgResponse := &banktypes.MsgSendResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleBankSendMsg(msgResponse) + case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{ +}): + msgResponse := &stakingtypes.MsgDelegateResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleStakingDelegateMsg(msgResponse) + case sdk.MsgTypeURL(&transfertypes.MsgTransfer{ +}): + msgResponse := &transfertypes.MsgTransferResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleIBCTransferMsg(msgResponse) + +default: + return nil +} +``` + +If the `txMsgData.Data` is empty, the host chain is using SDK version > v0.45.
+The auth module should interpret the `txMsgData.MsgResponses` as follows: + +```go +... +/ switch statement from above + case 0: + for _, any := range txMsgData.MsgResponses { + if err := handleAny(any); err != nil { + return err +} + +} +} +``` + +A handler will be needed to interpret what actions to perform based on the type URL of the Any. +A router could be used, or more simply a switch statement. +It may be possible to deduplicate logic between `handler` and `handleAny`. + +```go expandable +func handleAny(any *codectypes.Any) + +error { + switch any.TypeUrl { + case sdk.MsgTypeURL(&banktypes.MsgSendResponse{ +}): + msgResponse, err := unpackBankMsgSendResponse(any) + if err != nil { + return err +} + +handleBankSendMsg(msgResponse) + case sdk.MsgTypeURL(&stakingtypes.MsgDelegateResponse{ +}): + msgResponse, err := unpackStakingDelegateResponse(any) + if err != nil { + return err +} + +handleStakingDelegateMsg(msgResponse) + case sdk.MsgTypeURL(&transfertypes.MsgTransferResponse{ +}): + msgResponse, err := unpackIBCTransferMsgResponse(any) + if err != nil { + return err +} + +handleIBCTransferMsg(msgResponse) + +default: + return nil +} +``` + +## Integration into `app.go` file + +To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/next/apps/interchain-accounts/legacy/integration#example-integration). diff --git a/docs/ibc/next/apps/interchain-accounts/legacy/integration.mdx b/docs/ibc/next/apps/interchain-accounts/legacy/integration.mdx new file mode 100644 index 00000000..eaf6d7ca --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/legacy/integration.mdx @@ -0,0 +1,196 @@ +--- +title: Integration +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. + +## Synopsis + +Learn how to integrate Interchain Accounts host and controller functionality to your chain. The following document only applies for Cosmos SDK chains.
+ +The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule which will be used. + +Chains who wish to support ICS-27 may elect to act as a host chain, a controller chain or both. Disabling host or controller functionality may be done statically by excluding the host or controller module entirely from the `app.go` file or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules are the base application of a middleware stack. The controller submodule is the middleware in this stack. + +![ica-pre-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/10-legacy/images/ica-pre-v6.png) + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... + +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... 
+ + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.MsgServiceRouter(), +) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ ICA auth IBC Module + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddlewareWithAuth(icaAuthIBCModule, app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostIBCModule). 
+ AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack + +... + +/ Register Interchain Accounts and authentication module AppModule's +app.moduleManager = module.NewManager( + ... + icaModule, + icaAuthModule, +) + +... + +/ Add Interchain Accounts module to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add Interchain Accounts module to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add Interchain Accounts module InitGenesis logic +app.moduleManager.SetOrderInitGenesis( + ... + icatypes.ModuleName, + ... +) + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) + +paramskeeper.Keeper { + ... + paramsKeeper.Subspace(icahosttypes.SubModuleName) + +paramsKeeper.Subspace(icacontrollertypes.SubModuleName) + ... +``` + +## Using submodules exclusively + +As described above, the Interchain Accounts application module is structured to support the ability of exclusively enabling controller or host functionality. +This can be achieved by simply omitting either controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`. +Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/next/apps/interchain-accounts/parameters). + +The following snippets show basic examples of statically disabling submodules using `app.go`.
+ +### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Create your Interchain Accounts authentication module, setting up the Keeper, AppModule and IBCModule appropriately +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddlewareWithAuth(icaAuthIBCModule, app.ICAControllerKeeper) + +/ Register controller and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack +``` diff --git a/docs/ibc/next/apps/interchain-accounts/legacy/keeper-api.mdx b/docs/ibc/next/apps/interchain-accounts/legacy/keeper-api.mdx new file mode 100644 index 00000000..26ac7b43 --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/legacy/keeper-api.mdx @@ -0,0 +1,122 @@ +--- +title: Keeper API +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. 
+ +The controller submodule keeper exposes two legacy functions that allow custom authentication modules, respectively, to register interchain accounts and to send packets to the interchain account. + +## `RegisterInterchainAccount` + +The authentication module can begin registering interchain accounts by calling `RegisterInterchainAccount`: + +```go +if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, connectionID, owner.String(), version, channeltypes.UNORDERED); err != nil { + return err +} + +return nil +``` + +The `version` argument is used to support ICS-29 fee middleware for relayer incentivization of ICS-27 packets. The `ordering` argument specifies the ordering of the channel that is created; if `NONE` is passed, then the default ordering will be `UNORDERED`. Consumers of `RegisterInterchainAccount` are expected to build the appropriate JSON-encoded version string themselves and pass it accordingly. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that the channel handshake can proceed.
+ +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(appVersion), channeltypes.UNORDERED); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS-29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS-29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(feeEnabledVersion), channeltypes.UNORDERED); err != nil { + return err +} +``` + +## `SendTx` + +The authentication module can attempt to send a packet by calling `SendTx`: + +```go expandable +/ Authenticate owner +/ perform custom logic + +/ Construct controller portID based on interchain account owner address +portID, err := icatypes.NewControllerPortID(owner.String()) + if err != nil { + return err +} + +/ Obtain data to be sent to the host 
chain. +/ In this example, the owner of the interchain account would like to send a bank MsgSend to the host chain. +/ The appropriate serialization function should be called. The host chain must be able to deserialize the transaction. +/ If the host chain is using the ibc-go host module, `SerializeCosmosTx` should be used. + msg := &banktypes.MsgSend{ + FromAddress: fromAddr, + ToAddress: toAddr, + Amount: amt +} + +data, err := icatypes.SerializeCosmosTx(keeper.cdc, []proto.Message{ + msg +}) + if err != nil { + return err +} + +/ Construct packet data + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: data, +} + +/ Obtain timeout timestamp +/ An appropriate timeout timestamp must be determined based on the usage of the interchain account. +/ If the packet times out, the channel will be closed requiring a new channel to be created. + timeoutTimestamp := obtainTimeoutTimestamp() + +/ Send the interchain accounts packet, returning the packet sequence +seq, err = keeper.icaControllerKeeper.SendTx(ctx, portID, packetData, timeoutTimestamp) +``` + +The data within an `InterchainAccountPacketData` must be serialized using a format supported by the host chain. +If the host chain is using the ibc-go host chain submodule, `SerializeCosmosTx` should be used. If the `InterchainAccountPacketData.Data` is serialized using a format not supported by the host chain, the packet will not be successfully received. 
diff --git a/docs/ibc/next/apps/interchain-accounts/messages.mdx b/docs/ibc/next/apps/interchain-accounts/messages.mdx new file mode 100644 index 00000000..6ff955db --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/messages.mdx @@ -0,0 +1,152 @@ +--- +title: Messages +description: >- + An Interchain Accounts channel handshake can be initiated using + MsgRegisterInterchainAccount: +--- + +## `MsgRegisterInterchainAccount` + +An Interchain Accounts channel handshake can be initiated using `MsgRegisterInterchainAccount`: + +```go +type MsgRegisterInterchainAccount struct { + Owner string + ConnectionID string + Version string + Ordering channeltypes.Order +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string or contains more than 2048 bytes. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). + +This message will construct a new `MsgChannelOpenInit` on chain and route it to the core IBC message server to initiate the opening step of the channel handshake. + +The controller submodule will generate a new port identifier. The caller is expected to provide an appropriate application version string. For example, this may be an ICS-27 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L11) type or an ICS-29 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0/proto/ibc/applications/fee/v1/metadata.proto#L11) type with a nested application version. +If the `Version` string is omitted, the controller submodule will construct a default version string in the `OnChanOpenInit` handshake callback. + +```go +type MsgRegisterInterchainAccountResponse struct { + ChannelID string + PortId string +} +``` + +The `ChannelID` and `PortID` are returned in the message response. 
+ +## `MsgSendTx` + +An Interchain Accounts transaction can be executed on a remote host chain by sending a `MsgSendTx` from the corresponding controller chain: + +```go +type MsgSendTx struct { + Owner string + ConnectionID string + PacketData InterchainAccountPacketData + RelativeTimeout uint64 +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string or contains more than 2048 bytes. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `PacketData` contains an `UNSPECIFIED` type enum, the length of `Data` bytes is zero or the `Memo` field exceeds 256 characters in length. +* `RelativeTimeout` is zero. + +This message will create a new IBC packet with the provided `PacketData` and send it via the channel associated with the `Owner` and `ConnectionID`. +The `PacketData` is expected to contain a list of serialized `[]sdk.Msg` in the form of `CosmosTx`. Please note the signer field of each `sdk.Msg` must be the interchain account address. +When the packet is relayed to the host chain, the `PacketData` is unmarshalled and the messages are authenticated and executed. + +```go +type MsgSendTxResponse struct { + Sequence uint64 +} +``` + +The packet `Sequence` is returned in the message response. + +### Queries + +It is possible to use [`MsgModuleQuerySafe`](https://github.com/cosmos/ibc-go/blob/eecfa5c09a4c38a5c9f2cc2a322d2286f45911da/proto/ibc/applications/interchain_accounts/host/v1/tx.proto#L41-L51) to execute a list of queries on the host chain. This message can be included in the list of encoded `sdk.Msg`s of `InterchainAccountPacketData`. The host chain will return the responses to all the queries in the acknowledgment.
Please note that only module safe queries can be executed ([deterministic queries that are safe to be called from within the state machine](https://docs.cosmos.network/main/build/building-modules/query-services#calling-queries-from-the-state-machine)). + +The queries available from Cosmos SDK are: + +```plaintext expandable +/cosmos.auth.v1beta1.Query/Accounts +/cosmos.auth.v1beta1.Query/Account +/cosmos.auth.v1beta1.Query/AccountAddressByID +/cosmos.auth.v1beta1.Query/Params +/cosmos.auth.v1beta1.Query/ModuleAccounts +/cosmos.auth.v1beta1.Query/ModuleAccountByName +/cosmos.auth.v1beta1.Query/AccountInfo +/cosmos.bank.v1beta1.Query/Balance +/cosmos.bank.v1beta1.Query/AllBalances +/cosmos.bank.v1beta1.Query/SpendableBalances +/cosmos.bank.v1beta1.Query/SpendableBalanceByDenom +/cosmos.bank.v1beta1.Query/TotalSupply +/cosmos.bank.v1beta1.Query/SupplyOf +/cosmos.bank.v1beta1.Query/Params +/cosmos.bank.v1beta1.Query/DenomMetadata +/cosmos.bank.v1beta1.Query/DenomMetadataByQueryString +/cosmos.bank.v1beta1.Query/DenomsMetadata +/cosmos.bank.v1beta1.Query/DenomOwners +/cosmos.bank.v1beta1.Query/SendEnabled +/cosmos.circuit.v1.Query/Account +/cosmos.circuit.v1.Query/Accounts +/cosmos.circuit.v1.Query/DisabledList +/cosmos.staking.v1beta1.Query/Validators +/cosmos.staking.v1beta1.Query/Validator +/cosmos.staking.v1beta1.Query/ValidatorDelegations +/cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations +/cosmos.staking.v1beta1.Query/Delegation +/cosmos.staking.v1beta1.Query/UnbondingDelegation +/cosmos.staking.v1beta1.Query/DelegatorDelegations +/cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +/cosmos.staking.v1beta1.Query/Redelegations +/cosmos.staking.v1beta1.Query/DelegatorValidators +/cosmos.staking.v1beta1.Query/DelegatorValidator +/cosmos.staking.v1beta1.Query/HistoricalInfo +/cosmos.staking.v1beta1.Query/Pool +/cosmos.staking.v1beta1.Query/Params +``` + +And the query available from ibc-go is: + +```plaintext 
+/ibc.core.client.v1.Query/VerifyMembership +``` + +The following code block shows an example of how `MsgModuleQuerySafe` can be used to query the account balance of an account on the host chain. The resulting packet data variable is used to set the `PacketData` of `MsgSendTx`. + +```go expandable +balanceQuery := banktypes.NewQueryBalanceRequest("cosmos1...", "uatom") + +queryBz, err := balanceQuery.Marshal() + +/ signer of message must be the interchain account on the host + queryMsg := icahosttypes.NewMsgModuleQuerySafe("cosmos2...", []icahosttypes.QueryRequest{ + { + Path: "/cosmos.bank.v1beta1.Query/Balance", + Data: queryBz, +}, +}) + +bz, err := icatypes.SerializeCosmosTx(cdc, []proto.Message{ + queryMsg +}, icatypes.EncodingProtobuf) + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: bz, + Memo: "", +} +``` + +## Atomicity + +As the Interchain Accounts module supports the execution of multiple transactions using the Cosmos SDK `Msg` interface, it provides the same atomicity guarantees as Cosmos SDK-based applications, leveraging the [`CacheMultiStore`](https://docs.cosmos.network/main/learn/advanced/store#cachemultistore) architecture provided by the [`Context`](https://docs.cosmos.network/main/learn/advanced/context.html) type. + +This provides atomic execution of transactions when using Interchain Accounts, where state changes are only committed if all `Msg`s succeed. diff --git a/docs/ibc/next/apps/interchain-accounts/overview.mdx b/docs/ibc/next/apps/interchain-accounts/overview.mdx new file mode 100644 index 00000000..d1f0fc8c --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/overview.mdx @@ -0,0 +1,46 @@ +--- +title: Overview +--- + + + Interchain Accounts is only compatible with IBC Classic, not IBC v2 + + +## Synopsis + +Learn about what the Interchain Accounts module is + +## What is the Interchain Accounts module? 
+ +Interchain Accounts is the Cosmos SDK implementation of the ICS-27 protocol, which enables cross-chain account management built upon IBC. + +- How does an interchain account differ from a regular account? + +Regular accounts use a private key to sign transactions. Interchain Accounts are instead controlled programmatically by counterparty chains via IBC packets. + +## Concepts + +`Host Chain`: The chain where the interchain account is registered. The host chain listens for IBC packets from a controller chain, which should contain instructions (e.g. Cosmos SDK messages) that the interchain account will execute. + +`Controller Chain`: The chain registering and controlling an account on a host chain. The controller chain sends IBC packets to the host chain to control the account. + +`Interchain Account`: An account on a host chain created using the ICS-27 protocol. An interchain account has all the capabilities of a normal account. However, rather than signing transactions with a private key, a controller chain will send IBC packets to the host chain which signal what transactions the interchain account should execute. + +`Authentication Module`: A custom application module on the controller chain that uses the Interchain Accounts module to build custom logic for the creation & management of interchain accounts. It can be either an IBC application module using the [legacy API](/docs/ibc/next/apps/interchain-accounts/legacy/keeper-api), or a regular Cosmos SDK application module sending messages to the controller submodule's `MsgServer` (this is the recommended approach from ibc-go v6 if access to packet callbacks is not needed). Please note that the legacy API will eventually be removed and IBC applications will not be able to use it in later releases. + +## SDK security model + +SDK modules on a chain are assumed to be trustworthy. For example, there are no checks to prevent an untrustworthy module from accessing the bank keeper.
+ +The implementation of ICS-27 in ibc-go uses this assumption in its security considerations. + +The implementation assumes other IBC application modules will not bind to ports within the ICS-27 namespace. + +## Channel Closure + +The provided interchain account host and controller implementations do not support `ChanCloseInit`. However, they do support `ChanCloseConfirm`. +This means that the host and controller modules cannot close channels, but they will confirm channel closures initiated by other implementations of ICS-27. + +In the event of a channel closing (due to a packet timeout in an ordered channel, for example), the interchain account associated with that channel can become accessible again if a new channel is created with a (JSON-formatted) version string that encodes the exact same `Metadata` information of the previous channel. The channel can be reopened using either [`MsgRegisterInterchainAccount`](/docs/ibc/next/apps/interchain-accounts/messages#msgregisterinterchainaccount) or `MsgChannelOpenInit`. If `MsgRegisterInterchainAccount` is used, then it is possible to leave the `version` field of the message empty, since it will be filled in by the controller submodule. If `MsgChannelOpenInit` is used, then the `version` field must be provided with the correct JSON-encoded `Metadata` string. See section [Understanding Active Channels](/docs/ibc/next/apps/interchain-accounts/active-channels#understanding-active-channels) for more information. + +When reopening a channel with the default controller submodule, the ordering of the channel cannot be changed. 
diff --git a/docs/ibc/next/apps/interchain-accounts/parameters.mdx b/docs/ibc/next/apps/interchain-accounts/parameters.mdx new file mode 100644 index 00000000..eb8f805b --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/parameters.mdx @@ -0,0 +1,63 @@ +--- +title: Parameters +description: >- + The Interchain Accounts module contains the following on-chain parameters, + logically separated for each distinct submodule: +--- + +The Interchain Accounts module contains the following on-chain parameters, logically separated for each distinct submodule: + +## Controller Submodule Parameters + +| Name | Type | Default Value | +| ------------------- | ---- | ------------- | +| `ControllerEnabled` | bool | `true` | + +### ControllerEnabled + +The `ControllerEnabled` parameter controls a chain's ability to service ICS-27 controller-specific logic. This includes the sending of Interchain Accounts packet data as well as the following ICS-26 callback handlers: + +* `OnChanOpenInit` +* `OnChanOpenAck` +* `OnChanCloseConfirm` +* `OnAcknowledgementPacket` +* `OnTimeoutPacket` + +## Host Submodule Parameters + +| Name | Type | Default Value | +| --------------- | --------- | ------------- | +| `HostEnabled` | bool | `true` | +| `AllowMessages` | \[]string | `["*"]` | + +### HostEnabled + +The `HostEnabled` parameter controls a chain's ability to service ICS-27 host-specific logic. This includes the following ICS-26 callback handlers: + +* `OnChanOpenTry` +* `OnChanOpenConfirm` +* `OnChanCloseConfirm` +* `OnRecvPacket` + +### AllowMessages + +The `AllowMessages` parameter provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute by defining an allowlist using the Protobuf message type URL format.
+ +For example, a Cosmos SDK-based chain that elects to provide hosted Interchain Accounts with the ability of governance voting and staking delegations will define its parameters as follows: + +```json +"params": { + "host_enabled": true, + "allow_messages": ["/cosmos.staking.v1beta1.MsgDelegate", + "/cosmos.gov.v1beta1.MsgVote"] +} +``` + +There is also a special wildcard `"*"` value which allows any type of message to be executed by the interchain account. This must be the only value in the `allow_messages` array. + +```json +"params": { + "host_enabled": true, + "allow_messages": ["*"] +} +``` diff --git a/docs/ibc/next/apps/interchain-accounts/tx-encoding.mdx b/docs/ibc/next/apps/interchain-accounts/tx-encoding.mdx new file mode 100644 index 00000000..f2b9ca97 --- /dev/null +++ b/docs/ibc/next/apps/interchain-accounts/tx-encoding.mdx @@ -0,0 +1,57 @@ +--- +title: Transaction Encoding +description: >- + When orchestrating an interchain account transaction, which comprises multiple + sdk.Msg objects represented as Any types, the transactions must be encoded as + bytes within InterchainAccountPacketData. +--- + +When orchestrating an interchain account transaction, which comprises multiple `sdk.Msg` objects represented as `Any` types, the transactions must be encoded as bytes within [`InterchainAccountPacketData`](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/packet.proto#L21-L26). + +```protobuf +/ InterchainAccountPacketData is comprised of a raw transaction, type of transaction and optional memo field. +message InterchainAccountPacketData { + Type type = 1; + bytes data = 2; + string memo = 3; +} +``` + +The `data` field must be encoded as a [`CosmosTx`](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/packet.proto#L28-L31). + +```protobuf +/ CosmosTx contains a list of sdk.Msg's. It should be used when sending transactions to an SDK host chain. 
+message CosmosTx { + repeated google.protobuf.Any messages = 1; +} +``` + +The encoding method for `CosmosTx` is determined during the channel handshake process. If the channel version [metadata's `encoding` field](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L22) is marked as `proto3`, then `CosmosTx` undergoes protobuf encoding. Conversely, if the field is set to `proto3json`, then [proto3 json](https://protobuf.dev/programming-guides/proto3/#json) encoding takes place, which generates a JSON representation of the protobuf message. + +## Protobuf Encoding + +Protobuf encoding serves as the standard encoding process for `CosmosTx`. This occurs if the channel handshake initiates with an empty channel version metadata or if the `encoding` field explicitly denotes `proto3`. In Golang, the protobuf encoding procedure utilizes the `proto.Marshal` function. Every protobuf autogenerated Golang type comes equipped with a `Marshal` method that can be employed to encode the message. + +## (Protobuf) JSON Encoding + +The proto3 JSON encoding presents an alternative encoding technique for `CosmosTx`. It is selected if the channel handshake begins with the channel version metadata `encoding` field labeled as `proto3json`. In Golang, the Proto3 canonical encoding in JSON is implemented by the `"github.com/cosmos/gogoproto/jsonpb"` package. Within Cosmos SDK, the `ProtoCodec` structure implements the `JSONCodec` interface, leveraging the `jsonpb` package. This method generates a JSON format as follows: + +```json expandable +{ + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1...", + "to_address": "cosmos1...", + "amount": [ + { + "denom": "uatom", + "amount": "1000000" + } + ] + } + ] +} +``` + +Here, the `"messages"` array is populated with transactions. 
Each transaction is represented as a JSON object with the `@type` field denoting the transaction type and the remaining fields representing the transaction's attributes. diff --git a/docs/ibc/next/apps/transfer/IBCv2-transfer.mdx b/docs/ibc/next/apps/transfer/IBCv2-transfer.mdx new file mode 100644 index 00000000..22092543 --- /dev/null +++ b/docs/ibc/next/apps/transfer/IBCv2-transfer.mdx @@ -0,0 +1,93 @@ +--- +title: IBC v2 Transfer +description: >- + Much of the core business logic of sending and receiving tokens between chains + is unchanged between IBC Classic and IBC v2. Some of the key differences to + pay attention to are detailed below. +--- + +Much of the core business logic of sending and receiving tokens between chains is unchanged between IBC Classic and IBC v2. Some of the key differences to pay attention to are detailed below. + +## No Channel Handshakes, New Packet Format and Encoding Support + +* IBC v2 does not establish a connection between applications with a channel handshake. Channel identifiers represent Client IDs and are included in the `Payload` + * The source and destination port must be `"transfer"` + * The channel IDs [must be valid client IDs](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/v2/ibc_module.go#L46-L47) of the format `{clientID}-{sequence}`, e.g. 08-wasm-007 +* The [`Payload`](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/packet.pb.go#L146-L158) contains the [`FungibleTokenPacketData`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/types/packet.pb.go#L28-L39) for a token transfer. + +The code snippet shows the `Payload` struct. + +```go expandable +/ Payload contains the source and destination ports and payload for the application (version, encoding, raw bytes) + +type Payload struct { + / specifies the source port of the packet, e.g.
transfer + SourcePort string `protobuf:"bytes,1,opt,name=source_port,json=sourcePort,proto3" json:"source_port,omitempty"` + / specifies the destination port of the packet, e.g. transfer + DestinationPort string `protobuf:"bytes,2,opt,name=destination_port,json=destinationPort,proto3" json:"destination_port,omitempty"` + / version of the specified application + Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + / the encoding used for the provided value, for transfer this could be JSON, protobuf or ABI + Encoding string `protobuf:"bytes,4,opt,name=encoding,proto3" json:"encoding,omitempty"` + / the raw bytes for the payload. + Value []byte `protobuf:"bytes,5,opt,name=value,proto3" json:"value,omitempty"` +} +``` + +The code snippet shows the structure of the `Payload` bytes for token transfer. + +```go expandable +/ FungibleTokenPacketData defines a struct for the packet payload +/ See FungibleTokenPacketData spec: +/ https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer#data-structures +type FungibleTokenPacketData struct { + / the token denomination to be transferred + Denom string `protobuf:"bytes,1,opt,name=denom,proto3" json:"denom,omitempty"` + / the token amount to be transferred + Amount string `protobuf:"bytes,2,opt,name=amount,proto3" json:"amount,omitempty"` + / the sender address + Sender string `protobuf:"bytes,3,opt,name=sender,proto3" json:"sender,omitempty"` + / the recipient address on the destination chain + Receiver string `protobuf:"bytes,4,opt,name=receiver,proto3" json:"receiver,omitempty"` + / optional memo + Memo string `protobuf:"bytes,5,opt,name=memo,proto3" json:"memo,omitempty"` +} +``` + +## Base Denoms cannot contain slashes + +With the new [`Denom`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/types/token.pb.go#L81-L87) struct, the base denom, i.e. uatom, is separated from the trace - the path the token has travelled.
The trace is presented as an array of [`Hop`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/types/token.pb.go#L136-L140)s. + +Because IBC v2 no longer uses channels, it is no longer possible to rely on a fixed format for an identifier, so using a base denom that contains a "/" is disallowed. + +## Changes to the application module interface + +Instead of implementing token transfer for `port.IBCModule`, IBC v2 uses the new application interface `api.IBCModule`. More information on the interface differences can be found in the [application section](/docs/ibc/next/ibc/apps/ibcv2apps). + +## MsgTransfer Entrypoint + +The `MsgTransfer` entrypoint has been kept in order to retain support for the common entrypoint integrated in most existing frontends. + +If `MsgTransfer` is used with a clientID as the `msg.SourceChannel` then the handler will automatically use the IBC v2 protocol. It will internally call the `MsgSendPacket` endpoint so that the execution flow is the same in the state machine for all IBC v2 packets while still presenting the same endpoint for users. + +Of course, we want to still retain support for sending v2 packets on existing channels. The denominations of tokens once they leave the origin chain are prefixed by the port and channel ID in IBC v1. Moreover, the transfer escrow accounts holding the original tokens are generated from the channel IDs. Thus, if we wish to interact with these remote tokens using IBC v2, we must still use the v1 channel identifiers that they were originally sent with. + +Thus, `MsgTransfer` has an additional `UseAliasing` boolean field to indicate that we wish to use the IBC v2 protocol while still using the old v1 channel identifiers. This enables users to interact with the same tokens, DEX pools, and cross-chain DeFi protocols using the same denominations that they had previously with the IBC v2 protocol.
To use the `MsgTransfer` with aliasing we can submit the message like so: + +```go expandable +MsgTransfer{ + SourcePort: "transfer", + SourceChannel: "channel-4", / note: we are using an existing v1 channel identifier + Token: "uatom", + Sender: senderAddr, + Receiver: receiverAddr, + TimeoutHeight: ZeroHeight, / note: IBC v2 does not use timeout height + TimeoutTimestamp: 100_000_000, + Memo: "", + UseAliasing: true, / set aliasing to true so the handler uses IBC v2 instead of IBC v1 +} +``` diff --git a/docs/ibc/next/apps/transfer/authorizations.mdx b/docs/ibc/next/apps/transfer/authorizations.mdx new file mode 100644 index 00000000..afdf023e --- /dev/null +++ b/docs/ibc/next/apps/transfer/authorizations.mdx @@ -0,0 +1,51 @@ +--- +title: Authorizations +--- + +`TransferAuthorization` implements the `Authorization` interface for `ibc.applications.transfer.v1.MsgTransfer`. It allows a granter to grant a grantee the privilege to submit `MsgTransfer` on its behalf. Please see the [Cosmos SDK docs](https://docs.cosmos.network/v0.47/modules/authz) for more details on granting privileges via the `x/authz` module. + +More specifically, the granter allows the grantee to transfer funds that belong to the granter over a specified channel. + +For the specified channel, the granter must be able to specify a spend limit of a specific denomination they wish to allow the grantee to be able to transfer. + +The granter may specify the list of addresses that they allow to receive funds. If empty, then all addresses are allowed. + +It takes: + +* a `SourcePort` and a `SourceChannel` which together comprise the unique transfer channel identifier over which authorized funds can be transferred. +* a `SpendLimit` that specifies the maximum amount of tokens the grantee can transfer. The `SpendLimit` is updated as the tokens are transferred, unless the sentinel value of the maximum value for a 256-bit unsigned integer (i.e.
2^256 - 1) is used for the amount, in which case the `SpendLimit` will not be updated (please be aware that using this sentinel value will grant the grantee the privilege to transfer **all** the tokens of a given denomination available in the granter's account). The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used. This `SpendLimit` may also be updated to increase or decrease the limit as the granter wishes. +* an `AllowList` that specifies the addresses that are allowed to receive funds. If this list is empty, then all addresses are allowed to receive funds from the `TransferAuthorization`. +* an `AllowedPacketData` list that specifies the memo strings that are allowed to be included in the memo field of the packet. If this list is empty, then only an empty memo is allowed (a `memo` field with non-empty content will be denied). If this list includes a single element equal to `"*"`, then any content in the `memo` field will be allowed.
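
The sentinel described above is simply the maximum value of a 256-bit unsigned integer. As an illustrative sketch (the authoritative helper is `UnboundedSpendLimit` in the transfer module's `types` package; this standalone version only demonstrates the value), it can be derived with the Go standard library:

```go
package main

import (
	"fmt"
	"math/big"
)

// unboundedSpendLimit computes 2^256 - 1, the sentinel amount that signals
// a TransferAuthorization never to decrement the SpendLimit.
// Illustrative only; in practice use the transfer module's UnboundedSpendLimit helper.
func unboundedSpendLimit() *big.Int {
	limit := new(big.Int).Lsh(big.NewInt(1), 256) // 2^256
	return limit.Sub(limit, big.NewInt(1))        // 2^256 - 1
}

func main() {
	fmt.Println(unboundedSpendLimit())
}
```

Granting this amount effectively authorizes transfer of the granter's entire balance of that denomination, so it should be used deliberately.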
+ +Setting a `TransferAuthorization` is expected to fail if: + +* the spend limit is nil +* the denomination of the spend limit is an invalid coin type +* the source port ID is invalid +* the source channel ID is invalid +* there are duplicate entries in the `AllowList` +* the `memo` field is not allowed by `AllowedPacketData` + +Below is the `TransferAuthorization` message: + +```go expandable +func NewTransferAuthorization(allocations ...Allocation) *TransferAuthorization { + return &TransferAuthorization{ + Allocations: allocations, +} +} + +type Allocation struct { + / the port on which the packet will be sent + SourcePort string + / the channel by which the packet will be sent + SourceChannel string + / spend limitation on the channel + SpendLimit sdk.Coins + / allow list of receivers, an empty allow list permits any receiver address + AllowList []string + / allow list of memo strings, an empty list prohibits all memo strings; + / a list only with "*" permits any memo string + AllowedPacketData []string +} +``` diff --git a/docs/ibc/next/apps/transfer/client.mdx b/docs/ibc/next/apps/transfer/client.mdx new file mode 100644 index 00000000..7ace97d4 --- /dev/null +++ b/docs/ibc/next/apps/transfer/client.mdx @@ -0,0 +1,92 @@ +--- +title: Client +description: >- + A user can query and interact with the transfer module using the CLI. Use the + --help flag to discover the available commands: +--- + +## CLI + +A user can query and interact with the `transfer` module using the CLI. Use the `--help` flag to discover the available commands: + +### Query + +The `query` commands allow users to query `transfer` state. + +```shell +simd query ibc-transfer --help +``` + +### Transactions + +The `tx` commands allow users to interact with the `transfer` module. + +```shell +simd tx ibc-transfer --help +``` + +#### `transfer` + +The `transfer` command allows users to execute cross-chain token transfers from the source port ID and channel ID on the sending chain.
+ +```shell +simd tx ibc-transfer transfer [src-port] [src-channel] [receiver] [coins] [flags] +``` + +The `coins` parameter accepts the amount and denomination (e.g. `100uatom`) of the tokens to be transferred. + +The additional flags that can be used with the command are: + +* `--packet-timeout-height` to specify the timeout block height in the format `{revision}-{height}`. The default value is `0-0`, which effectively disables the timeout. Timeout height can only be absolute, therefore this option must be used in combination with `--absolute-timeouts` set to true. On IBC v1 protocol, either `--packet-timeout-height` or `--packet-timeout-timestamp` must be set. On IBC v2 protocol `--packet-timeout-timestamp` must be set. +* `--packet-timeout-timestamp` to specify the timeout timestamp in nanoseconds. The timeout can be either relative (from the current UTC time) or absolute. The default value is 10 minutes (and thus relative). On IBC v1 protocol, either `--packet-timeout-height` or `--packet-timeout-timestamp` must be set. On IBC v2 protocol `--packet-timeout-timestamp` must be set. +* `--absolute-timeouts` to interpret the timeout timestamp as an absolute value (when set to true). The default value is false (and thus the timeout is considered relative to current UTC time). +* `--memo` to specify the memo string to be sent along with the transfer packet. If forwarding is used, then the memo string will be carried through the intermediary chains to the final destination. + +#### `total-escrow` + +The `total-escrow` command allows users to query the total amount in escrow for a particular coin denomination regardless of the transfer channel from where the coins were sent out. + +```shell +simd query ibc-transfer total-escrow [denom] [flags] +``` + +Example: + +```shell +simd query ibc-transfer total-escrow samoleans +``` + +Example Output: + +```shell +amount: "100" +``` + +## gRPC + +A user can query the `transfer` module using gRPC endpoints. 
+ +### `TotalEscrowForDenom` + +The `TotalEscrowForDenom` endpoint allows users to query the total amount in escrow for a particular coin denomination regardless of the transfer channel from where the coins were sent out. + +```shell +ibc.applications.transfer.v1.Query/TotalEscrowForDenom +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"samoleans"}' \ + localhost:9090 \ + ibc.applications.transfer.v1.Query/TotalEscrowForDenom +``` + +Example output: + +```shell +{ + "amount": "100" +} +``` diff --git a/docs/ibc/next/apps/transfer/events.mdx b/docs/ibc/next/apps/transfer/events.mdx new file mode 100644 index 00000000..9daa9182 --- /dev/null +++ b/docs/ibc/next/apps/transfer/events.mdx @@ -0,0 +1,54 @@ +--- +title: Events +--- + +## `MsgTransfer` + +| Type | Attribute Key | Attribute Value | +| ------------- | ------------- | --------------- | +| ibc\_transfer | sender | `{sender}` | +| ibc\_transfer | receiver | `{receiver}` | +| ibc\_transfer | denom | `{denom}` | +| ibc\_transfer | denom\_hash | `{denom\_hash}` | +| ibc\_transfer | amount | `{amount}` | +| ibc\_transfer | memo | `{memo}` | +| message | module | transfer | + +## `OnRecvPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ------------- | --------------- | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | denom\_hash | `{denom\_hash}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | success | `{ackSuccess}` | +| fungible\_token\_packet | error | `{ackError}` | +| message | module | transfer | + +## `OnAcknowledgePacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | 
`{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | denom\_hash | `{denom\_hash}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | acknowledgement | `{ack.String()}` | +| fungible\_token\_packet | success / error | `{ack.Response}` | +| message | module | transfer | + +## `OnTimeoutPacket` callback + +| Type | Attribute Key | Attribute Value | +| ------- | ---------------- | --------------- | +| timeout | refund\_receiver | `{receiver}` | +| timeout | refund\_tokens | `{jsonTokens}` | +| timeout | denom | `{denom}` | +| timeout | denom\_hash | `{denom\_hash}` | +| timeout | memo | `{memo}` | +| message | module | transfer | diff --git a/docs/ibc/next/apps/transfer/messages.mdx b/docs/ibc/next/apps/transfer/messages.mdx new file mode 100644 index 00000000..e9d6c7db --- /dev/null +++ b/docs/ibc/next/apps/transfer/messages.mdx @@ -0,0 +1,71 @@ +--- +title: Messages +description: 'A fungible token cross chain transfer is achieved by using the MsgTransfer:' +--- + +## `MsgTransfer` + +A fungible token cross chain transfer is achieved by using the `MsgTransfer`: + +```go expandable +type MsgTransfer struct { + SourcePort string + / with IBC v2 SourceChannel will be a client ID + SourceChannel string + Token sdk.Coin + Sender string + Receiver string + / If you are sending with IBC v1 protocol, either timeout_height or timeout_timestamp must be set. + / If you are sending with IBC v2 protocol, timeout_timestamp must be set, and timeout_height must be omitted. + TimeoutHeight ibcexported.Height + / Timeout timestamp in absolute nanoseconds since unix epoch. 
+ TimeoutTimestamp uint64 + / optional Memo field + Memo string + / optional Encoding field + Encoding string +} +``` + +This message is expected to fail if: + +* `SourcePort` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `SourceChannel` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `Token` is invalid: + * `Amount` is not positive. + * `Denom` is not a valid IBC denomination. +* `Sender` is empty. +* `Receiver` is empty or contains more than 2048 bytes. +* `Memo` contains more than 32768 bytes. +* `TimeoutHeight` and `TimeoutTimestamp` are both zero for IBC Classic. + * Note that `TimeoutHeight` as a concept is removed in IBC v2, hence this field must always be omitted and only `TimeoutTimestamp` used. + +This message will send a fungible token to the counterparty chain represented by the counterparty Channel End connected to the Channel End with the identifiers `SourcePort` and `SourceChannel`. Note that in IBC v2 a pair of clients are connected and the `SourceChannel` refers to the source `ClientID`. + +The denomination provided for transfer should correspond to the same denomination represented on this chain. The prefixes will be added as necessary by the receiving chain. + +If the `Amount` is set to the maximum value for a 256-bit unsigned integer (i.e. 2^256 - 1), then the whole balance of the corresponding denomination will be transferred. The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used. + +### Memo + +The memo field was added to allow applications and users to attach metadata to transfer packets. The field is optional and may be left empty.
When it is used to attach metadata for a particular middleware, the memo field should be represented as a JSON object where different middlewares use different JSON keys. + +For example, the following memo field is used by the callbacks middleware to attach a source callback to a transfer packet: + +```jsonc +{ + "src_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +You can find more information about other applications that use the memo field in the [chain registry](https://github.com/cosmos/chain-registry/blob/master/_memo_keys/ICS20_memo_keys.json). + +### Encoding + +In IBC v2, the encoding method used by an application has more flexibility, as it is specified within a `Payload` rather than negotiated and fixed during an IBC classic channel handshake. Certain encoding types may be more suited to specific blockchains; e.g. ABI encoding is more gas efficient to decode in an EVM than JSON or Protobuf. + +Within ibc-go, JSON, Protobuf and ABI encoding are supported; see the [transfer packet types](https://github.com/cosmos/ibc-go/blob/14bc17e26ad12cee6bdb99157a05296fcf58b762/modules/apps/transfer/types/packet.go#L36-L40). diff --git a/docs/ibc/next/apps/transfer/metrics.mdx b/docs/ibc/next/apps/transfer/metrics.mdx new file mode 100644 index 00000000..37fdc97a --- /dev/null +++ b/docs/ibc/next/apps/transfer/metrics.mdx @@ -0,0 +1,13 @@ +--- +title: Metrics +description: The IBC transfer application module exposes the following set of metrics. +--- + +The IBC transfer application module exposes the following set of [metrics](https://docs.cosmos.network/main/learn/advanced/telemetry).
+ +| Metric | Description | Unit | Type | +| :---------------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | +| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | +| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | +| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | +| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | diff --git a/docs/ibc/next/apps/transfer/overview.mdx b/docs/ibc/next/apps/transfer/overview.mdx new file mode 100644 index 00000000..d712b49b --- /dev/null +++ b/docs/ibc/next/apps/transfer/overview.mdx @@ -0,0 +1,140 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the token Transfer module is + +## What is the Transfer module? + +Transfer is the Cosmos SDK implementation of the [ICS-20](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer) protocol, which enables cross-chain fungible token transfers. + +## Concepts + +### Acknowledgements + +ICS20 uses the recommended acknowledgement format as specified by [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope). + +A successful receive of a transfer packet will result in a Result Acknowledgement being written +with the value `[]byte{byte(1)}` in the `Response` field. + +An unsuccessful receive of a transfer packet will result in an Error Acknowledgement being written +with the error message in the `Response` field. + +### Denomination trace + +The denomination trace corresponds to the information that allows a token to be traced back to its +origin chain. 
It contains a sequence of port and channel identifiers ordered from the most recent to +the oldest in the timeline of transfers. + + + When using transfer with IBC v2 connecting to e.g. Ethereum, the source + channel identifier will be the source client identifier instead. + + +This information is included on the token's base denomination field in the form of a hash to prevent an +unbounded denomination length. For example, the token `transfer/channelToA/uatom` will be displayed +as `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2`. The human readable denomination +is stored using `x/bank` module's [denom metadata](https://docs.cosmos.network/main/build/modules/bank#denom-metadata) +feature. You may display the human readable denominations by querying balances with the `--resolve-denom` flag, as in: + +```shell +simd query bank balances [address] --resolve-denom +``` + +Each send to any chain other than the one it was previously received from is a movement forwards in +the token's timeline. This causes a trace to be added to the token's history and the destination port +and destination channel to be prefixed to the denomination. In these instances the sender chain is +acting as the "source zone". When the token is sent back to the chain it previously received from, the +prefix is removed. This is a backwards movement in the token's timeline and the sender chain is +acting as the "sink zone". + +It is strongly recommended to understand the implications and context of the IBC token representations.
+ +## UX suggestions for clients + +For clients (wallets, exchanges, applications, block explorers, etc) that want to display the source of the token, it is recommended to use the following alternatives for each of the cases below: + +### Direct connection + +If the denomination trace contains a single identifier prefix pair (as in the example above), then +the easiest way to retrieve the chain and light client identifier is to map the trace information +directly. In summary, this requires querying the channel from the denomination trace identifiers, +and then the counterparty client state using the counterparty port and channel identifiers from the +retrieved channel. + +A general pseudo algorithm would look like the following: + +1. Query the full denomination trace. +2. Query the channel with the `portID/channelID` pair, which corresponds to the first destination of the + token. +3. Query the client state using the identifiers pair. Note that this query will return a `"Not +Found"` response if the current chain is not connected to this channel. +4. Retrieve the client identifier or chain identifier from the client state (eg: on + Tendermint clients) and store it locally. + +Using the gRPC gateway client service the steps above would be, with a given IBC token `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` stored on `chainB`: + +1. `GET /ibc/apps/transfer/v1/denom_traces/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` -> `{"path": "transfer/channelToA", "base_denom": "uatom"}` +2. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer/client_state"` -> `{"client_id": "clientA", "chain-id": "chainA", ...}` +3. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer"` -> `{"channel_id": "channelToA", port_id": "transfer", counterparty: {"channel_id": "channelToB", port_id": "transfer"}, ...}` +4. 
`GET /ibc/apps/transfer/v1/channels/channelToB/ports/transfer/client_state" -> {"client_id": "clientB", "chain-id": "chainB", ...}` + +Then, the token transfer chain path for the `uatom` denomination would be: `chainA` -> `chainB`. + +### Multiple hops + +The multiple channel hops case applies when the token has passed through multiple chains between the original source and final destination chains. + +The IBC protocol doesn't know the topology of the overall network (i.e connections between chains and identifier names between them). For this reason, in the multiple hops case, a particular chain in the timeline of the individual transfers can't query the chain and client identifiers of the other chains. + +Take for example the following sequence of transfers `A -> B -> C` for an IBC token, with a final prefix path (trace info) of `transfer/channelChainC/transfer/channelChainB`. What the paragraph above means is that even in the case that chain `C` is directly connected to chain `A`, querying the port and channel identifiers that chain `B` uses to connect to chain `A` (eg: `transfer/channelChainA`) can be completely different from the one that chain `C` uses to connect to chain `A` (eg: `transfer/channelToChainA`). + +Thus the proposed solution for clients that the IBC team recommends are the following: + +- **Connect to all chains**: Connecting to all the chains in the timeline would allow clients to + perform the queries outlined in the [direct connection](#direct-connection) section to each + relevant chain. By repeatedly following the port and channel denomination trace transfer timeline, + clients should always be able to find all the relevant identifiers. This comes at the tradeoff + that the client must connect to nodes on each of the chains in order to perform the queries. 
+- **Relayer as a Service (RaaS)**: A longer term solution is to use/create a relayer service that + could map the denomination trace to the chain path timeline for each token (i.e `origin chain -> +chain #1 -> ... -> chain #(n-1) -> final chain`). These services could provide merkle proofs in + order to allow clients to optionally verify the path timeline correctness for themselves by + running light clients. If the proofs are not verified, they should be considered as trusted + third-party services. Additionally, clients would be advised in the future to use RaaS providers that support the + largest number of connections between chains in the ecosystem. Unfortunately, none of the existing + public relayers (in [Golang](https://github.com/cosmos/relayer) and + [Rust](https://github.com/informalsystems/ibc-rs)) provide this service to clients. + + + At the time of writing, the only viable alternative for clients handling tokens + with multiple connection hops is to connect to all chains directly and + perform relevant queries to each of them in the sequence. + + +## Locked funds + +In some exceptional cases, the client state associated with a given channel cannot be updated. This causes funds from fungible tokens in that channel to be permanently locked, so they can no longer be transferred. + +To mitigate this, a client update governance proposal can be submitted to update the frozen client +with a new valid header. Once the proposal passes, the client state will be unfrozen and the funds +from the associated channels will then be unlocked. This mechanism only applies to clients that +allow updates via governance, such as Tendermint clients. + +In addition to this, it's important to mention that a token must be sent back along the exact route +that it took originally in order to return it to its original form on the source chain (eg: the +Cosmos Hub for the `uatom`).
Sending a token back to the same chain across a different channel will +**not** move the token back across its timeline. If a channel in the chain history closes before the +token can be sent back across that channel, then the token will not be returnable to its original +form. + +## Security considerations + +For safety, no other module must be capable of minting tokens with the `ibc/` prefix. The IBC +transfer module needs a subset of the denomination space that only it can create tokens in. + +## Channel Closure + +The IBC transfer module does not support channel closure. diff --git a/docs/ibc/next/apps/transfer/params.mdx b/docs/ibc/next/apps/transfer/params.mdx new file mode 100644 index 00000000..290641f1 --- /dev/null +++ b/docs/ibc/next/apps/transfer/params.mdx @@ -0,0 +1,89 @@ +--- +title: Params +description: 'The IBC transfer application module contains the following parameters:' +--- + +The IBC transfer application module contains the following parameters: + +| Name | Type | Default Value | +| ---------------- | ---- | ------------- | +| `SendEnabled` | bool | `true` | +| `ReceiveEnabled` | bool | `true` | + +The IBC transfer module stores its parameters under its `StoreKey` + +## `SendEnabled` + +The `SendEnabled` parameter controls send cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred from the chain, set the `SendEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. + + +Doing so will prevent the token from being transferred between any accounts in the blockchain. 
+ + +## `ReceiveEnabled` + +The transfers enabled parameter controls receive cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred to the chain, set the `ReceiveEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. + + +Doing so will prevent the token from being transferred between any accounts in the blockchain. + + +## Queries + +Current parameter values can be queried via a query message. + +{/* Turn it into a github code snippet in docusaurus: */} + +```protobuf +/ proto/ibc/applications/transfer/v1/query.proto + +/ QueryParamsRequest is the request type for the Query/Params RPC method. +message QueryParamsRequest {} + +/ QueryParamsResponse is the response type for the Query/Params RPC method. +message QueryParamsResponse { + / params defines the parameters of the module. + Params params = 1; +} +``` + +To execute the query in `simd`, you use the following command: + +```bash +simd query ibc-transfer params +``` + +## Changing Parameters + +To change the parameter values, you must make a governance proposal that executes the `MsgUpdateParams` message. + +{/* Turn it into a github code snippet in docusaurus: */} + +```protobuf expandable +/ proto/ibc/applications/transfer/v1/tx.proto + +/ MsgUpdateParams is the Msg/UpdateParams request type. +message MsgUpdateParams { + / signer address (it may be the address that controls the module, which defaults to x/gov unless overwritten). + string signer = 1; + + / params defines the transfer parameters to update. 
+ / + / NOTE: All parameters must be supplied. + Params params = 2 [(gogoproto.nullable) = false]; +} + +/ MsgUpdateParamsResponse defines the response structure for executing a +/ MsgUpdateParams message. +message MsgUpdateParamsResponse {} +``` diff --git a/docs/ibc/next/apps/transfer/state-transitions.mdx b/docs/ibc/next/apps/transfer/state-transitions.mdx new file mode 100644 index 00000000..fcc9169c --- /dev/null +++ b/docs/ibc/next/apps/transfer/state-transitions.mdx @@ -0,0 +1,35 @@ +--- +title: State Transitions +description: >- + A successful fungible token send has two state transitions depending on whether + the transfer is a movement forward or backwards in the token's timeline: +--- + +## Send fungible tokens + +A successful fungible token send has two state transitions depending on whether the transfer is a movement forward or backwards in the token's timeline: + +1. Sender chain is the source chain, *i.e.* a transfer to any chain other than the one it was previously received from is a movement forwards in the token's timeline. This results in the following state transitions: + + * The coins are transferred to an escrow address (i.e. locked) on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +2. Sender chain is the sink chain, *i.e.* the token is sent back to the chain it previously received from. This is a backwards movement in the token's timeline. This results in the following state transitions: + + * The coins (vouchers) are burned on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +## Receive fungible tokens + +A successful fungible token receive has two state transitions depending on whether the transfer is a movement forward or backwards in the token's timeline: + +1. Receiver chain is the source chain. This is a backwards movement in the token's timeline.
This results in the following state transitions: + + * The leftmost port and channel identifier pair is removed from the token denomination prefix. + * The tokens are unescrowed and sent to the receiving address. + +2. Receiver chain is the sink chain. This is a movement forwards in the token's timeline. This results in the following state transitions: + + * Token vouchers are minted by prefixing the destination port and channel identifiers to the trace information. + * The receiving chain stores the new trace information in the store (if not set already). + * The vouchers are sent to the receiving address. diff --git a/docs/ibc/next/apps/transfer/state.mdx b/docs/ibc/next/apps/transfer/state.mdx new file mode 100644 index 00000000..ff48f446 --- /dev/null +++ b/docs/ibc/next/apps/transfer/state.mdx @@ -0,0 +1,12 @@ +--- +title: State +description: >- + The IBC transfer application module keeps state of the port to which the + module is bound and the denomination trace information. +--- + +The IBC transfer application module keeps state of the port to which the module is bound and the denomination trace information. + +* `PortKey`: `0x01 -> ProtocolBuffer(string)` +* `DenomTraceKey`: `0x02 | []bytes(traceHash) -> ProtocolBuffer(Denom)` +* `DenomKey`: `0x03 | []bytes(traceHash) -> ProtocolBuffer(Denom)` diff --git a/docs/ibc/next/changelog/release-notes.mdx b/docs/ibc/next/changelog/release-notes.mdx index 604ed0db..458f2857 100644 --- a/docs/ibc/next/changelog/release-notes.mdx +++ b/docs/ibc/next/changelog/release-notes.mdx @@ -1,508 +1,2073 @@ --- -title: "Release Notes" -description: "Cosmos IBC release notes and changelog" -mode: "custom" +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" --- - - - {/* - - Guiding Principles: - - Changelogs are for humans, not machines. - - There should be an entry for every single version. - - The same types of changes should be grouped.
- - Versions and sections should be linkable. - - The latest version comes first. - - The release date of each version is displayed. - - Mention whether you follow Semantic Versioning. - - Usage: - - Change log entries are to be added to the Unreleased section under the - - appropriate stanza (see below). Each entry should ideally include a tag and - - the Github issue reference in the following format: - - * () \# message - - The issue numbers will later be link-ified during the release process so you do - - not have to worry about including a link manually, but you can if you wish. - - Types of changes (Stanzas): - - "Features" for new features. - - "Improvements" for changes in existing functionality. - - "Deprecated" for soon-to-be removed features. - - "Bug Fixes" for any bug fixes. - - "Client Breaking" for breaking CLI commands and REST routes used by end-users. - - "API Breaking" for breaking exported APIs used by developers building on SDK. - - "State Machine Breaking" for any changes that result in a different AppState given the same genesisState and txList. - - Ref: https://keepachangelog.com/en/1.0.0/ - - */} - - # Changelog - - ## [v10.3.0](https://github.com/cosmos/ibc-go/releases/tag/v10.3.0) - 2025-06-06 - - ### Features - - ### Dependencies - - * [\#8369](https://github.com/cosmos/ibc-go/pull/8369) Bump **github.com/CosmWasm/wasmvm** to **2.2.4** - - * [\#8369](https://github.com/cosmos/ibc-go/pull/8369) Bump **github.com/ethereum/go-ethereum** to **1.15.11** - - ### API Breaking - - ### State Machine Breaking - - ### Improvements - - * (core/api) [\#8303](https://github.com/cosmos/ibc-go/pull/8303) Prefix-based routing in IBCv2 Router - - - (apps/callbacks) [\#8353](https://github.com/cosmos/ibc-go/pull/8353) Add field in callbacks data for custom calldata - - ### Bug Fixes - - ### Testing API - - * [\#8371](https://github.com/cosmos/ibc-go/pull/8371) e2e: Create only necessary number of chains for e2e suite. 
- - * [\#8375](https://github.com/cosmos/ibc-go/pull/8375) feat: parse IBC v2 packets from ABCI events - - ## [v10.2.0](https://github.com/cosmos/ibc-go/releases/tag/v10.2.0) - 2025-04-30 - - ### Features - - * (light-clients/07-tendermint) [\#8185](https://github.com/cosmos/ibc-go/pull/8185) Allow scaling of trusting period for client upgrades - - ### Dependencies - - * [\#8254](https://github.com/cosmos/ibc-go/pull/8254) Bump **github.com/cosmos/cosmos-sdk** to **0.53.0** - - * [\#8326](https://github.com/cosmos/ibc-go/pull/8329) Bump **cosmossdk.io/x/upgrade** to **0.2.0** - - * [\#8326](https://github.com/cosmos/ibc-go/pull/8326) Bump **cosmossdk.io/api** to **0.9.2** - - * [\#8293](https://github.com/cosmos/ibc-go/pull/8293) Bump **cosmossdk.io/math** to **1.5.3** - - * [\#8254](https://github.com/cosmos/ibc-go/pull/8254) Bump **cosmossdk.io/core** to **0.11.3** - - * [\#8254](https://github.com/cosmos/ibc-go/pull/8254) Bump **cosmossdk.io/store** to **1.1.2** - - * [\#8254](https://github.com/cosmos/ibc-go/pull/8254) Bump **cosmossdk.io/x/tx** to **0.14.0** - - * [\#8253](https://github.com/cosmos/ibc-go/pull/8253) Bump **cosmossdk.io/errors** to **1.0.2** - - * [\#8253](https://github.com/cosmos/ibc-go/pull/8253) Bump **cosmossdk.io/log** to **1.5.1** - - * [\#8253](https://github.com/cosmos/ibc-go/pull/8253) Bump **github.com/cometbft/cometbft** to **0.38.17** - - * [\#8264](https://github.com/cosmos/ibc-go/pull/8264) Bump **github.com/prysmaticlabs/prysm** to **v5.3.0** - - ### Bug Fixes - - * [\#8287](https://github.com/cosmos/ibc-go/pull/8287) rename total_escrow REST query from `denoms` to `total_escrow` - - ## [v10.1.0](https://github.com/cosmos/ibc-go/releases/tag/v10.1.0) - 2025-03-14 - - ### Security Fixes - - * Fix [ISA-2025-001](https://github.com/cosmos/ibc-go/security/advisories/GHSA-4wf3-5qj9-368v) security vulnerability. - - * Fix [ASA-2025-004](https://github.com/cosmos/ibc-go/security/advisories/GHSA-jg6f-48ff-5xrw) security vulnerability. 
### Features

* (core) [\#7505](https://github.com/cosmos/ibc-go/pull/7505) Add IBC Eureka (IBC v2) implementation, enabling more efficient IBC packet handling without channel dependencies, bringing significant performance improvements.
* (apps/transfer) [\#7650](https://github.com/cosmos/ibc-go/pull/7650) Add support for transfer of entire balance for vesting accounts.
* (apps/wasm) [\#5079](https://github.com/cosmos/ibc-go/pull/5079) 08-wasm light client proxy module for wasm clients.
* (core/02-client) [\#7936](https://github.com/cosmos/ibc-go/pull/7936) Clientv2 module.
* (core/04-channel) [\#7933](https://github.com/cosmos/ibc-go/pull/7933) Channel-v2 genesis.
* (core/04-channel, core/api) [\#7934](https://github.com/cosmos/ibc-go/pull/7934) Callbacks Eureka.
* (light-clients/09-localhost) [\#6683](https://github.com/cosmos/ibc-go/pull/6683) Make 09-localhost stateless.
* (core, app) [\#6902](https://github.com/cosmos/ibc-go/pull/6902) Add channel version to core app callbacks.
### Dependencies

* [\#8181](https://github.com/cosmos/ibc-go/pull/8181) Bump **github.com/cosmos/cosmos-sdk** to **0.50.13**
* [\#7932](https://github.com/cosmos/ibc-go/pull/7932) Bump **go** to **1.23**
* [\#7330](https://github.com/cosmos/ibc-go/pull/7330) Bump **cosmossdk.io/api** to **0.7.6**
* [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump **cosmossdk.io/core** to **0.11.1**
* [\#7182](https://github.com/cosmos/ibc-go/pull/7182) Bump **cosmossdk.io/log** to **1.4.1**
* [\#7264](https://github.com/cosmos/ibc-go/pull/7264) Bump **cosmossdk.io/store** to **1.1.1**
* [\#7585](https://github.com/cosmos/ibc-go/pull/7585) Bump **cosmossdk.io/math** to **1.4.0**
* [\#7540](https://github.com/cosmos/ibc-go/pull/7540) Bump **github.com/cometbft/cometbft** to **0.38.15**
* [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump **cosmossdk.io/x/upgrade** to **0.1.4**
* [\#8124](https://github.com/cosmos/ibc-go/pull/8124) Bump **cosmossdk.io/x/tx** to **0.13.7**
* [\#7942](https://github.com/cosmos/ibc-go/pull/7942) Bump **github.com/cosmos/cosmos-db** to **1.1.1**
* [\#7224](https://github.com/cosmos/ibc-go/pull/7224) Bump **github.com/cosmos/ics23/go** to **0.11.0**

### API Breaking

* (core, apps) [\#7213](https://github.com/cosmos/ibc-go/pull/7213) Remove capabilities from `SendPacket`.
* (core, apps) [\#7225](https://github.com/cosmos/ibc-go/pull/7225) Remove capabilities from `WriteAcknowledgement`.
* (core, apps) [\#7232](https://github.com/cosmos/ibc-go/pull/7232) Remove capabilities from channel handshake methods.
* (core, apps) [\#7270](https://github.com/cosmos/ibc-go/pull/7270) Remove remaining dependencies on capability module.
* (core, apps) [\#4811](https://github.com/cosmos/ibc-go/pull/4811) Use expected interface for legacy params subspace.
* (core/04-channel) [\#7239](https://github.com/cosmos/ibc-go/pull/7239) Removed function `LookupModuleByChannel`.
* (core/05-port) [\#7252](https://github.com/cosmos/ibc-go/pull/7252) Removed function `LookupModuleByPort`.
* (core/24-host) [\#7239](https://github.com/cosmos/ibc-go/pull/7239) Removed function `ChannelCapabilityPath`.
* (apps/27-interchain-accounts) [\#7239](https://github.com/cosmos/ibc-go/pull/7239) The following functions have been removed: `AuthenticateCapability`, `ClaimCapability`.
* (apps/27-interchain-accounts) [\#7961](https://github.com/cosmos/ibc-go/pull/7961) Removed `absolute-timeouts` flag from `send-tx` in the ICA CLI.
* (apps/transfer) [\#7239](https://github.com/cosmos/ibc-go/pull/7239) The following functions have been removed: `BindPort`, `AuthenticateCapability`, `ClaimCapability`.
* (capability) [\#7279](https://github.com/cosmos/ibc-go/pull/7279) The module `capability` has been removed.
* (testing) [\#7305](https://github.com/cosmos/ibc-go/pull/7305) Added `TrustedValidators` map to `TestChain`. This removes the dependency on the `x/staking` module for retrieving trusted validator sets at a given height, and removes the `GetTrustedValidators` method from the `TestChain` struct.
* (23-commitment) [\#7486](https://github.com/cosmos/ibc-go/pull/7486) Remove unimplemented `BatchVerifyMembership` and `BatchVerifyNonMembership` functions.
* (core/02-client, light-clients) [\#5806](https://github.com/cosmos/ibc-go/pull/5806) Decouple light client routing from their encoding structure.
* (core/04-channel) [\#5991](https://github.com/cosmos/ibc-go/pull/5991) The client CLI `QueryLatestConsensusState` has been removed.
* (light-clients/06-solomachine) [\#6037](https://github.com/cosmos/ibc-go/pull/6037) Remove `Initialize` function from `ClientState` and move logic to `Initialize` function of `LightClientModule`.
* (light-clients/06-solomachine) [\#6230](https://github.com/cosmos/ibc-go/pull/6230) Remove `GetTimestampAtHeight`, `Status` and `UpdateStateOnMisbehaviour` functions from `ClientState` and move logic to functions of `LightClientModule`.
* (core/02-client) [\#6084](https://github.com/cosmos/ibc-go/pull/6084) Removed `stakingKeeper` as an argument to `NewKeeper` and replaced with a `ConsensusHost` implementation.
* (testing) [\#6070](https://github.com/cosmos/ibc-go/pull/6070) Remove `AssertEventsLegacy` function.
* (core) [\#6138](https://github.com/cosmos/ibc-go/pull/6138) Remove `Router` reference from IBC core keeper and use instead the router on the existing `PortKeeper` reference.
* (core/04-channel) [\#6023](https://github.com/cosmos/ibc-go/pull/6023) Remove emission of non-hexlified event attributes `packet_data` and `packet_ack`.
* (core) [\#6320](https://github.com/cosmos/ibc-go/pull/6320) Remove unnecessary `Proof` interface from `exported` package.
* (core/05-port) [\#6341](https://github.com/cosmos/ibc-go/pull/6341) Modify `UnmarshalPacketData` interface to take in the context, portID, and channelID. This allows packet data to be unmarshaled based on the channel version.
* (apps/27-interchain-accounts) [\#6433](https://github.com/cosmos/ibc-go/pull/6433) Use UNORDERED as the default ordering for new ICA channels.
* (apps/transfer) [\#6440](https://github.com/cosmos/ibc-go/pull/6440) Remove `GetPrefixedDenom`.
* (apps/transfer) [\#6508](https://github.com/cosmos/ibc-go/pull/6508) Remove the `DenomTrace` type.
* (apps/27-interchain-accounts) [\#6598](https://github.com/cosmos/ibc-go/pull/6598) Mark the `requests` repeated field of `MsgModuleQuerySafe` non-nullable.
* (23-commitment) [\#6644](https://github.com/cosmos/ibc-go/pull/6644) Introduce `commitment.v2.MerklePath` to include `repeated bytes` in favour of `repeated string`. This supports using merkle path keys which include non-UTF-8 encoded runes.
* (23-commitment) [\#6870](https://github.com/cosmos/ibc-go/pull/6870) Remove `commitment.v1.MerklePath` in favour of `commitment.v2.MerklePath`.
* (apps/27-interchain-accounts) [\#6749](https://github.com/cosmos/ibc-go/pull/6749) The ICA controller `NewIBCMiddleware` constructor function sets the auth module to nil by default.
* (core, core/02-client) [\#6763](https://github.com/cosmos/ibc-go/pull/6763) Move prometheus metric labels for 02-client and core into a separate `metrics` package on core.
* (core/02-client) [\#6777](https://github.com/cosmos/ibc-go/pull/6777) The `NewClientProposalHandler` of `02-client` has been removed.
* (core/types) [\#6794](https://github.com/cosmos/ibc-go/pull/6794) The composite interface `QueryServer` has been removed from package `core/types`. Please use the granular `QueryServer` interfaces provided by each core submodule.
* (light-clients/06-solomachine) [\#6888](https://github.com/cosmos/ibc-go/pull/6888) Remove `TypeClientMisbehaviour` constant and the `Type` method on `Misbehaviour`.
* (light-clients/06-solomachine, light-clients/07-tendermint) [\#6891](https://github.com/cosmos/ibc-go/pull/6891) The `VerifyMembership` and `VerifyNonMembership` functions of solomachine's `ClientState` have been made private. The `VerifyMembership`, `VerifyNonMembership`, `GetTimestampAtHeight`, `Status` and `Initialize` functions of tendermint's `ClientState` have been made private.
* (core/04-channel) [\#6902](https://github.com/cosmos/ibc-go/pull/6902) Add channel version to core application callbacks.
* (core/03-connection, core/02-client) [\#6937](https://github.com/cosmos/ibc-go/pull/6937) Remove `ConsensusHost` interface, also removing self client and consensus state validation in the connection handshake.
* (core/24-host) [\#6882](https://github.com/cosmos/ibc-go/issues/6882) All functions ending in `Path` have been removed from 24-host in favour of their sibling functions ending in `Key`.
* (23-commitment) [\#6633](https://github.com/cosmos/ibc-go/pull/6633) `MerklePath` has been changed to use `repeated bytes` in favour of `repeated strings`.
* [\#6923](https://github.com/cosmos/ibc-go/pull/6923) The JSON msg API for `VerifyMembershipMsg` and `VerifyNonMembershipMsg` payloads for client contract `SudoMsg` has been updated. The field `path` has been changed to `merkle_path`. This change requires updates to 08-wasm client contracts for integration.
* (apps/callbacks) [\#7000](https://github.com/cosmos/ibc-go/pull/7000) Add base application version to contract keeper callbacks.
* (light-clients/08-wasm) [\#5154](https://github.com/cosmos/ibc-go/pull/5154) Use bytes in wasm contract api instead of wrapped.
* (core, core/08-wasm) [\#5397](https://github.com/cosmos/ibc-go/pull/5397) Add coordinator Setup functions to the Path type.
* (core/02-client) [\#6863](https://github.com/cosmos/ibc-go/pull/6863) Remove `ClientStoreProvider` interface in favour of concrete type.
* (core/05-port) [\#6988](https://github.com/cosmos/ibc-go/pull/6988) Modify `UnmarshalPacketData` interface to return the underlying application version.
* (apps/27-interchain-accounts) [\#7053](https://github.com/cosmos/ibc-go/pull/7053) Remove ICS27 channel capability migration introduced in v6.
* (apps/27-interchain-accounts) [\#8002](https://github.com/cosmos/ibc-go/issues/8002) Remove ICS-29: fee middleware.
* (core/04-channel) [\#8053](https://github.com/cosmos/ibc-go/issues/8053) Remove channel upgradability.

### State Machine Breaking

* (light-clients/06-solomachine) [\#6313](https://github.com/cosmos/ibc-go/pull/6313) Fix: No-op to avoid panicking on `UpdateState` for invalid misbehaviour submissions.
* (apps/callbacks) [\#8014](https://github.com/cosmos/ibc-go/pull/8014) Callbacks will now return an error acknowledgement if the recvPacket callback fails. This reverts all app callback changes, whereas before only the callback changes were reverted. We also error on all callbacks if the callback data is set but malformed, whereas before the error was ignored and processing continued.
* (apps/callbacks) [\#5349](https://github.com/cosmos/ibc-go/pull/5349) Check if clients params are duplicates.
* (apps/transfer) [\#6268](https://github.com/cosmos/ibc-go/pull/6268) Use memo strings instead of JSON keys in `AllowedPacketData` of transfer authorization.
* (light-clients/07-tendermint) Fix: No-op to avoid panicking on `UpdateState` for invalid misbehaviour submissions.

### Improvements

* (testing) [\#7430](https://github.com/cosmos/ibc-go/pull/7430) Update the block proposer in test chains for each block.
* (apps/27-interchain-accounts) [\#5533](https://github.com/cosmos/ibc-go/pull/5533) ICA host sets the host connection ID on `OnChanOpenTry`, so that ICA controller implementations are not obliged to set the value on `OnChanOpenInit` if they are not able.
* (core/02-client, core/03-connection, apps/27-interchain-accounts) [\#6256](https://github.com/cosmos/ibc-go/pull/6256) Add length checking of array fields in messages.
* (light-clients/08-wasm) [\#5146](https://github.com/cosmos/ibc-go/pull/5146) Use global wasm VM instead of keeping an additional reference in keeper.
* (core/04-channels) [\#7935](https://github.com/cosmos/ibc-go/pull/7935) Limit payload size for both v1 and v2 packets.
* (core/runtime) [\#7601](https://github.com/cosmos/ibc-go/pull/7601) IBC core runtime env.
* (core/08-wasm) [\#5294](https://github.com/cosmos/ibc-go/pull/5294) Don't panic during any store operations.
* (apps) [\#5305](https://github.com/cosmos/ibc-go/pull/5305) Remove GetSigners from `sdk.Msg` implementations.
* (apps) [\#5778](https://github.com/cosmos/ibc-go/pull/5778) Use JSON for marshalling/unmarshalling transfer packet data.
* (core/08-wasm) [\#5785](https://github.com/cosmos/ibc-go/pull/5785) Allow module safe queries in ICA.
* (core/ante) [\#6278](https://github.com/cosmos/ibc-go/pull/6278) Performance: Exclude pruning from tendermint client updates in ante handler executions.
* (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) Performance: Skip app callbacks during RecvPacket execution in checkTx within the redundant relay ante handler.
* (core/ante) [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip redundant proof checking in RecvPacket execution in reCheckTx within the redundant relay ante handler.
* [\#6716](https://github.com/cosmos/ibc-go/pull/6716) Add `HasModule` to capability keeper to allow checking if a scoped module already exists.
### Bug Fixes

* (apps/27-interchain-accounts) [\#7277](https://github.com/cosmos/ibc-go/pull/7277) Use `GogoResolver` when populating module query safe allow list to avoid panics from unresolvable protobuf dependencies.
* (core/04-channel) [\#7342](https://github.com/cosmos/ibc-go/pull/7342) Read tx cmd flags, including the from address, to avoid an "Address cannot be empty" error when upgrading channels via the CLI.
* (core/03-connection) [\#7397](https://github.com/cosmos/ibc-go/pull/7397) Skip genesis validation of the connectionID for the localhost client.
* (apps/27-interchain-accounts) [\#6377](https://github.com/cosmos/ibc-go/pull/6377) Generate ICA simtest proposals only for provided keepers.

### Testing API

* [\#7688](https://github.com/cosmos/ibc-go/pull/7688) Added `SendMsgsWithSender` to `TestChain`.
* [\#7430](https://github.com/cosmos/ibc-go/pull/7430) Update block proposer in testing.
* [\#5493](https://github.com/cosmos/ibc-go/pull/5493) Add `IBCClientHeader` func for endpoint and update tests.
* [\#6685](https://github.com/cosmos/ibc-go/pull/6685) Configure relayers to watch only channels associated with an individual test.
* [\#6758](https://github.com/cosmos/ibc-go/pull/6758) Tokens are successfully forwarded from A to C through B.

## [v8.5.0](https://github.com/cosmos/ibc-go/releases/tag/v8.5.0) - 2024-08-30

### Dependencies

* [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump Cosmos SDK to v0.50.9.
* [\#7221](https://github.com/cosmos/ibc-go/pull/7221) Update ics23 to v0.11.0.

### State Machine Breaking

* (core/03-connection) [\#7129](https://github.com/cosmos/ibc-go/pull/7129) Remove verification of self client and consensus state from connection handshake.

## [v8.4.0](https://github.com/cosmos/ibc-go/releases/tag/v8.4.0) - 2024-07-29

### Improvements

* (core/04-channel) [\#6871](https://github.com/cosmos/ibc-go/pull/6871) Add channel ordering to write acknowledgement event.
### Features

* (apps/transfer) [\#6877](https://github.com/cosmos/ibc-go/pull/6877) Added the possibility to transfer the entire user balance of a particular denomination by using [`UnboundedSpendLimit`](https://github.com/cosmos/ibc-go/blob/beb2d93b58835ddb9ed8e7624988f1e66b849251/modules/apps/transfer/types/token.go#L56-L58) as the token amount.

### Bug Fixes

* (core/04-channel) [\#6935](https://github.com/cosmos/ibc-go/pull/6935) Check upgrade compatibility in `ChanUpgradeConfirm`.

## [v8.3.2](https://github.com/cosmos/ibc-go/releases/tag/v8.3.2) - 2024-06-20

### Dependencies

* [\#6614](https://github.com/cosmos/ibc-go/pull/6614) Bump Cosmos SDK to v0.50.7.

### Improvements

* (apps/27-interchain-accounts) [\#6436](https://github.com/cosmos/ibc-go/pull/6436) Refactor ICA host keeper instantiation method to avoid panic related to proto files.

## [v8.3.1](https://github.com/cosmos/ibc-go/releases/tag/v8.3.1) - 2024-05-22

### Improvements

* (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) Performance: Skip app callbacks during RecvPacket execution in checkTx within the redundant relay ante handler.
* (core/ante) [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip redundant proof checking in RecvPacket execution in reCheckTx within the redundant relay ante handler.
* (core/ante) [\#6306](https://github.com/cosmos/ibc-go/pull/6306) Performance: Skip misbehaviour checks in UpdateClient flow and skip signature checks in reCheckTx mode.

## [v8.3.0](https://github.com/cosmos/ibc-go/releases/tag/v8.3.0) - 2024-05-16

### Dependencies

* [\#6300](https://github.com/cosmos/ibc-go/pull/6300) Bump Cosmos SDK to v0.50.6 and CometBFT to v0.38.7.

### State Machine Breaking

* (light-clients/07-tendermint) [\#6276](https://github.com/cosmos/ibc-go/pull/6276) Fix: No-op to avoid panicking on `UpdateState` for invalid misbehaviour submissions.
### Improvements

* (apps/27-interchain-accounts, apps/transfer, apps/29-fee) [\#6253](https://github.com/cosmos/ibc-go/pull/6253) Allow channel handshake to succeed if fee middleware is wired up on one side, but not the other.
* (apps/27-interchain-accounts) [\#6251](https://github.com/cosmos/ibc-go/pull/6251) Use `UNORDERED` as the default ordering for new ICA channels.
* (apps/transfer) [\#6268](https://github.com/cosmos/ibc-go/pull/6268) Use memo strings instead of JSON keys in `AllowedPacketData` of transfer authorization.
* (core/ante) [\#6278](https://github.com/cosmos/ibc-go/pull/6278) Performance: Exclude pruning from tendermint client updates in ante handler executions.
* (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) Performance: Skip app callbacks during RecvPacket execution in checkTx within the redundant relay ante handler.
* (core/ante) [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip redundant proof checking in RecvPacket execution in reCheckTx within the redundant relay ante handler.

### Features

* (core) [\#6055](https://github.com/cosmos/ibc-go/pull/6055) Introduce a new interface `ConsensusHost` used to validate an IBC `ClientState` and `ConsensusState` against the host chain's underlying consensus parameters.
* (core/02-client) [\#5821](https://github.com/cosmos/ibc-go/pull/5821) Add rpc `VerifyMembershipProof` (querier approach for conditional clients).
* (core/04-channel) [\#5788](https://github.com/cosmos/ibc-go/pull/5788) Add `NewErrorAcknowledgementWithCodespace` to allow codespaces in ack errors.
* (apps/27-interchain-accounts) [\#5785](https://github.com/cosmos/ibc-go/pull/5785) Introduce a new tx message that the ICA host submodule can use to query the chain (only those marked with `module_query_safe`) and write the responses to the acknowledgement.
### Bug Fixes

* (apps/27-interchain-accounts) [\#6167](https://github.com/cosmos/ibc-go/pull/6167) Fixed an edge-case bug where migrating params for a pre-existing ICA module that implemented only controller functionality could panic when migrating params for the newly added host, and aligned controller param migration with host.
* (app/29-fee) [\#6255](https://github.com/cosmos/ibc-go/pull/6255) Delete refunded fees from state if some fee(s) cannot be refunded on channel closure.

## [v8.2.0](https://github.com/cosmos/ibc-go/releases/tag/v8.2.0) - 2024-04-05

### Dependencies

* [\#5975](https://github.com/cosmos/ibc-go/pull/5975) Bump Cosmos SDK to v0.50.5.

### Improvements

* (proto) [\#5987](https://github.com/cosmos/ibc-go/pull/5987) Add wasm proto files.

## [v8.1.0](https://github.com/cosmos/ibc-go/releases/tag/v8.1.0) - 2024-01-31

### Dependencies

* [\#5663](https://github.com/cosmos/ibc-go/pull/5663) Bump Cosmos SDK to v0.50.3 and CometBFT to v0.38.2.

### State Machine Breaking

* (apps/27-interchain-accounts) [\#5442](https://github.com/cosmos/ibc-go/pull/5442) Increase the maximum allowed length for the memo field of `InterchainAccountPacketData`.

### Improvements

* (core/02-client) [\#5429](https://github.com/cosmos/ibc-go/pull/5429) Add wildcard `"*"` to allow all clients in `AllowedClients` param.
* (core) [\#5541](https://github.com/cosmos/ibc-go/pull/5541) Enable emission of events on erroneous IBC application callbacks by appending a prefix to all event type and attribute keys.

### Features

* (core/04-channel) [\#1613](https://github.com/cosmos/ibc-go/pull/1613) Channel upgradability.
* (apps/transfer) [\#5280](https://github.com/cosmos/ibc-go/pull/5280) Add list of allowed packet data keys to `Allocation` of `TransferAuthorization`.
* (apps/27-interchain-accounts) [\#5633](https://github.com/cosmos/ibc-go/pull/5633) Allow setting new and upgrading existing ICA (ordered) channels to use unordered ordering.

### Bug Fixes

* (apps/27-interchain-accounts) [\#5343](https://github.com/cosmos/ibc-go/pull/5343) Add check if controller is enabled in `sendTx` before sending packet to host.
* (apps/29-fee) [\#5441](https://github.com/cosmos/ibc-go/pull/5441) Allow setting the relayer address as payee.

## [v8.0.1](https://github.com/cosmos/ibc-go/releases/tag/v8.0.1) - 2024-01-31

### Dependencies

* [\#5718](https://github.com/cosmos/ibc-go/pull/5718) Update Cosmos SDK to v0.50.3 and CometBFT to v0.38.2.

### Improvements

* (core) [\#5541](https://github.com/cosmos/ibc-go/pull/5541) Enable emission of events on erroneous IBC application callbacks by appending a prefix to all event type and attribute keys.

## [v8.0.0](https://github.com/cosmos/ibc-go/releases/tag/v8.0.0) - 2023-11-10

### Dependencies

* [\#5038](https://github.com/cosmos/ibc-go/pull/5038) Bump SDK v0.50.1 and cometBFT v0.38.
* [\#4398](https://github.com/cosmos/ibc-go/pull/4398) Update all modules to go 1.21.

### API Breaking

* (core) [\#4703](https://github.com/cosmos/ibc-go/pull/4703) Make `PortKeeper` field of `IBCKeeper` a pointer.
* (core/23-commitment) [\#4459](https://github.com/cosmos/ibc-go/pull/4459) Remove `Pretty` and `String` custom implementations of `MerklePath`.
* [\#3205](https://github.com/cosmos/ibc-go/pull/3205) Make event emission functions unexported.
* (apps/27-interchain-accounts, apps/transfer) [\#3253](https://github.com/cosmos/ibc-go/pull/3253) Rename `IsBound` to `HasCapability`.
* (apps/27-interchain-accounts, apps/transfer) [\#3303](https://github.com/cosmos/ibc-go/pull/3303) Make `HasCapability` private.
* [\#3856](https://github.com/cosmos/ibc-go/pull/3856) Add missing/remove unnecessary gogoproto directives.
* (apps/27-interchain-accounts) [\#3967](https://github.com/cosmos/ibc-go/pull/3967) Add encoding type as argument to ICA encoding/decoding functions.
* (core) [\#3867](https://github.com/cosmos/ibc-go/pull/3867) Remove unnecessary event attribute from INIT handshake msgs.
* (core/04-channel) [\#3806](https://github.com/cosmos/ibc-go/pull/3806) Remove unused `EventTypeTimeoutPacketOnClose`.
* (testing) [\#4018](https://github.com/cosmos/ibc-go/pull/4018) Allow failure expectations when using `chain.SendMsgs`.

### State Machine Breaking

* (apps/transfer, apps/27-interchain-accounts, app/29-fee) [\#4992](https://github.com/cosmos/ibc-go/pull/4992) Set validation for length of string fields.

### Improvements

* [\#3304](https://github.com/cosmos/ibc-go/pull/3304) Remove unnecessary defer func statements.
* (apps/29-fee) [\#3054](https://github.com/cosmos/ibc-go/pull/3054) Add page result to ics29-fee queries.
* (apps/27-interchain-accounts, apps/transfer) [\#3077](https://github.com/cosmos/ibc-go/pull/3077) Add debug level logging for the error message which is discarded when generating a failed acknowledgement.
* (core/03-connection) [\#3244](https://github.com/cosmos/ibc-go/pull/3244) Clean up 03-connection msg validate basic test.
* (core/02-client) [\#3514](https://github.com/cosmos/ibc-go/pull/3514) Add check for the client status in `CreateClient`.
* (apps/29-fee) [\#4100](https://github.com/cosmos/ibc-go/pull/4100) Add `MetadataFromVersion` to `29-fee` package `types`.
* (apps/29-fee) [\#4290](https://github.com/cosmos/ibc-go/pull/4290) Use `types.MetadataFromVersion` helper function for callback handlers.
* (core/04-channel) [\#4155](https://github.com/cosmos/ibc-go/pull/4155) Add `IsOpen` and `IsClosed` methods to `Channel` type.
* (core/03-connection) [\#4110](https://github.com/cosmos/ibc-go/pull/4110) Remove `Version` interface and casting functions from 03-connection.
* (core) [\#4835](https://github.com/cosmos/ibc-go/pull/4835) Use expected interface for legacy params subspace parameter of keeper constructor functions.

### Features

* (capability) [\#3097](https://github.com/cosmos/ibc-go/pull/3097) Migrate capability module from Cosmos SDK to ibc-go.
* (core/02-client) [\#3640](https://github.com/cosmos/ibc-go/pull/3640) Migrate client params to be self managed.
* (core/03-connection) [\#3650](https://github.com/cosmos/ibc-go/pull/3650) Migrate connection params to be self managed.
* (apps/transfer) [\#3553](https://github.com/cosmos/ibc-go/pull/3553) Migrate transfer parameters to be self managed.
* (apps/27-interchain-accounts) [\#3590](https://github.com/cosmos/ibc-go/pull/3590) Migrate ica/controller parameters to be self managed.
* (apps/27-interchain-accounts) [\#3520](https://github.com/cosmos/ibc-go/pull/3520) Migrate ica/host params to be self managed.
* (apps/transfer) [\#3104](https://github.com/cosmos/ibc-go/pull/3104) Add metadata for IBC tokens.
* [\#4620](https://github.com/cosmos/ibc-go/pull/4620) Migrate to gov v1 via the additions of `MsgRecoverClient` and `MsgIBCSoftwareUpgrade`. The legacy proposal types `ClientUpdateProposal` and `UpgradeProposal` have been deprecated and will be removed in the next major release.

### Bug Fixes

* (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order query service RPCs to fix availability of denom traces endpoint when no args are provided.
* (core/04-channel) [\#3357](https://github.com/cosmos/ibc-go/pull/3357) Handle unordered channels in `NextSequenceReceive` query.
* (e2e) [\#3402](https://github.com/cosmos/ibc-go/pull/3402) Allow retries for messages signed by relayer.
* (core/04-channel) [\#3417](https://github.com/cosmos/ibc-go/pull/3417) Add missing query for next sequence send.
* (testing) [\#4630](https://github.com/cosmos/ibc-go/pull/4630) Update `testconfig` to use revision formatted chain IDs.
* (core/04-channel) [\#4706](https://github.com/cosmos/ibc-go/pull/4706) Retrieve correct next send sequence for packets in unordered channels.
* (core/02-client) [\#4746](https://github.com/cosmos/ibc-go/pull/4746) Register implementations against `govtypes.Content` interface.
* (apps/27-interchain-accounts) [\#4944](https://github.com/cosmos/ibc-go/pull/4944) Add missing proto interface registration.
* (core/02-client) [\#5020](https://github.com/cosmos/ibc-go/pull/5020) Fix expect pointer error when unmarshalling misbehaviour file.

### Documentation

* [\#3133](https://github.com/cosmos/ibc-go/pull/3133) Add linter for markdown documents.
* [\#4693](https://github.com/cosmos/ibc-go/pull/4693) Migrate docs to docusaurus.

### Testing

* [\#3138](https://github.com/cosmos/ibc-go/pull/3138) Use `testing.TB` instead of `testing.T` to support benchmarks and fuzz tests.
* [\#3980](https://github.com/cosmos/ibc-go/pull/3980) Change `sdk.Events` usage to `[]abci.Event` in the testing package.
* [\#3986](https://github.com/cosmos/ibc-go/pull/3986) Add function `RelayPacketWithResults`.
* [\#4182](https://github.com/cosmos/ibc-go/pull/4182) Return current validator set when requesting current height in `GetValsAtHeight`.
* [\#4319](https://github.com/cosmos/ibc-go/pull/4319) Fix `TimeoutPacket` function to use counterparty `portID`/`channelID` in `GetNextSequenceRecv` query.
* [\#4180](https://github.com/cosmos/ibc-go/pull/4180) Remove unused function `simapp.SetupWithGenesisAccounts`.

### Miscellaneous Tasks

* (apps/27-interchain-accounts) [\#4677](https://github.com/cosmos/ibc-go/pull/4677) Remove ica store key.
* [\#4724](https://github.com/cosmos/ibc-go/pull/4724) Add `HasValidateBasic` compiler assertions to messages.
* [\#4725](https://github.com/cosmos/ibc-go/pull/4725) Add fzf selection for config files.
* [\#4741](https://github.com/cosmos/ibc-go/pull/4741) Panic with error.
* [\#3186](https://github.com/cosmos/ibc-go/pull/3186) Migrate all SDK errors to the new errors go module.
* [\#3216](https://github.com/cosmos/ibc-go/pull/3216) Modify `simapp` to fulfill the SDK `runtime.AppI` interface.
* [\#3290](https://github.com/cosmos/ibc-go/pull/3290) Remove `gogoproto` yaml tags from proto files.
* [\#3439](https://github.com/cosmos/ibc-go/pull/3439) Use nil pointer pattern to check for interface compliance.
* [\#3433](https://github.com/cosmos/ibc-go/pull/3433) Add tests for `acknowledgement.Acknowledgement()`.
* (core, apps/29-fee) [\#3462](https://github.com/cosmos/ibc-go/pull/3462) Add missing `nil` check and corresponding tests for query handlers.
* (light-clients/07-tendermint, light-clients/06-solomachine) [\#3571](https://github.com/cosmos/ibc-go/pull/3571) Delete unused `GetProofSpecs` functions.
* (core) [\#3616](https://github.com/cosmos/ibc-go/pull/3616) Add debug log for redundant relay.
* (core) [\#3892](https://github.com/cosmos/ibc-go/pull/3892) Add deprecated option to `create_localhost` field.
* (core) [\#3893](https://github.com/cosmos/ibc-go/pull/3893) Add deprecated option to `MsgSubmitMisbehaviour`.
* (apps/transfer, apps/29-fee) [\#4570](https://github.com/cosmos/ibc-go/pull/4570) Remove `GetSignBytes` from 29-fee and transfer msgs.
* [\#3630](https://github.com/cosmos/ibc-go/pull/3630) Add annotation to Msg service.

## [v7.8.0](https://github.com/cosmos/ibc-go/releases/tag/v7.8.0) - 2024-08-30

### State Machine Breaking

* (core/03-connection) [\#7128](https://github.com/cosmos/ibc-go/pull/7128) Remove verification of self client and consensus state from connection handshake.
## [v7.7.0](https://github.com/cosmos/ibc-go/releases/tag/v7.7.0) - 2024-07-29

### Dependencies

* [\#6943](https://github.com/cosmos/ibc-go/pull/6943) Update Cosmos SDK to v0.47.13.

### Features

* (apps/transfer) [\#6877](https://github.com/cosmos/ibc-go/pull/6877) Added the possibility to transfer the entire user balance of a particular denomination by using [`UnboundedSpendLimit`](https://github.com/cosmos/ibc-go/blob/715f00eef8727da41db25fdd4763b709bdbba07e/modules/apps/transfer/types/transfer_authorization.go#L253-L255) as the token amount.

### Bug Fixes

## [v7.6.0](https://github.com/cosmos/ibc-go/releases/tag/v7.6.0) - 2024-06-20

### State Machine Breaking

* (apps/transfer, apps/27-interchain-accounts, app/29-fee) [\#4992](https://github.com/cosmos/ibc-go/pull/4992) Set validation for length of string fields.

## [v7.5.2](https://github.com/cosmos/ibc-go/releases/tag/v7.5.2) - 2024-06-20

### Dependencies

* [\#6613](https://github.com/cosmos/ibc-go/pull/6613) Update Cosmos SDK to v0.47.12.

### Improvements

* (apps/27-interchain-accounts) [\#6436](https://github.com/cosmos/ibc-go/pull/6436) Refactor ICA host keeper instantiation method to avoid panic related to proto files.

## [v7.5.1](https://github.com/cosmos/ibc-go/releases/tag/v7.5.1) - 2024-05-22

### Improvements

* (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) Performance: Skip app callbacks during RecvPacket execution in checkTx within the redundant relay ante handler.
* (core/ante) [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip redundant proof checking in RecvPacket execution in reCheckTx within the redundant relay ante handler.
* (core/ante) [\#6306](https://github.com/cosmos/ibc-go/pull/6306) Performance: Skip misbehaviour checks in UpdateClient flow and skip signature checks in reCheckTx mode.
-
- ## [v7.5.0](https://github.com/cosmos/ibc-go/releases/tag/v7.5.0) - 2024-05-14
-
- ### Dependencies
-
- * [\#6254](https://github.com/cosmos/ibc-go/pull/6254) Update Cosmos SDK to v0.47.11 and CometBFT to v0.37.5.
-
- ### State Machine Breaking
-
- * (light-clients/07-tendermint) [\#6276](https://github.com/cosmos/ibc-go/pull/6276) Fix: No-op to avoid panicking on `UpdateState` for invalid misbehaviour submissions.
-
- ### Improvements
-
- * (apps/27-interchain-accounts) [\#6147](https://github.com/cosmos/ibc-go/pull/6147) Emit an event signalling that the host submodule is disabled.
-
- * (testing) [\#6180](https://github.com/cosmos/ibc-go/pull/6180) Add version to tm abci headers in ibctesting.
-
- * (apps/27-interchain-accounts, apps/transfer, apps/29-fee) [\#6253](https://github.com/cosmos/ibc-go/pull/6253) Allow channel handshake to succeed if fee middleware is wired up on one side, but not the other.
-
- * (apps/transfer) [\#6268](https://github.com/cosmos/ibc-go/pull/6268) Use memo strings instead of JSON keys in `AllowedPacketData` of transfer authorization.
-
- ### Features
-
- * (apps/27-interchain-accounts) [\#5633](https://github.com/cosmos/ibc-go/pull/5633) Allow new ICA channels to use unordered ordering.
-
- * (apps/27-interchain-accounts) [\#5785](https://github.com/cosmos/ibc-go/pull/5785) Introduce a new tx message that the ICA host submodule can use to query the chain (only queries marked with `module_query_safe`) and write the responses to the acknowledgement.
-
- ### Bug Fixes
-
- * (apps/29-fee) [\#6255](https://github.com/cosmos/ibc-go/pull/6255) Delete already refunded fees from state if some fee(s) cannot be refunded on channel closure.
-
- ## [v7.4.0](https://github.com/cosmos/ibc-go/releases/tag/v7.4.0) - 2024-04-05
-
- ## [v7.3.2](https://github.com/cosmos/ibc-go/releases/tag/v7.3.2) - 2024-01-31
-
- ### Dependencies
-
- * [\#5717](https://github.com/cosmos/ibc-go/pull/5717) Update Cosmos SDK to v0.47.8 and CometBFT to v0.37.4.
-
- ### Improvements
-
- * (core) [\#5541](https://github.com/cosmos/ibc-go/pull/5541) Enable emission of events on erroneous IBC application callbacks by appending a prefix to all event type and attribute keys.
-
- ### Bug Fixes
-
- * (apps/27-interchain-accounts) [\#4944](https://github.com/cosmos/ibc-go/pull/4944) Add missing proto interface registration.
-
- ## [v7.3.1](https://github.com/cosmos/ibc-go/releases/tag/v7.3.1) - 2023-10-20
-
- ### Dependencies
-
- * [\#4539](https://github.com/cosmos/ibc-go/pull/4539) Update Cosmos SDK to v0.47.5.
-
- ### Improvements
-
- * (apps/27-interchain-accounts) [\#4537](https://github.com/cosmos/ibc-go/pull/4537) Add argument to `generate-packet-data` cli to choose the encoding format for the messages in the ICA packet data.
-
- ### Bug Fixes
-
- * (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order query service RPCs to fix availability of denom traces endpoint when no args are provided.
-
- ## [v7.3.0](https://github.com/cosmos/ibc-go/releases/tag/v7.3.0) - 2023-08-31
-
- ### Dependencies
-
- * [\#4122](https://github.com/cosmos/ibc-go/pull/4122) Update Cosmos SDK to v0.47.4.
-
- ### Improvements
-
- * [\#4187](https://github.com/cosmos/ibc-go/pull/4187) Adds function `WithICS4Wrapper` to keepers to allow setting the middleware after the keeper's creation.
-
- * (light-clients/06-solomachine) [\#4429](https://github.com/cosmos/ibc-go/pull/4429) Remove IBC key from path of bytes signed by solomachine and do not escape the path.
-
- ### Features
-
- * (apps/27-interchain-accounts) [\#3796](https://github.com/cosmos/ibc-go/pull/3796) Adds support for json tx encoding for interchain accounts.
-
- * [\#4188](https://github.com/cosmos/ibc-go/pull/4188) Adds optional `PacketDataUnmarshaler` interface that allows a middleware to request the packet data to be unmarshaled by the base application.
-
- * [\#4199](https://github.com/cosmos/ibc-go/pull/4199) Adds optional `PacketDataProvider` interface for retrieving custom packet data stored on behalf of another application.
-
- * [\#4200](https://github.com/cosmos/ibc-go/pull/4200) Adds optional `PacketData` interface which an application's packet data may implement.
-
- ### Bug Fixes
-
- * (04-channel) [\#4476](https://github.com/cosmos/ibc-go/pull/4476) Use UTC time in log messages for packet timeout error.
-
- * (testing) [\#4483](https://github.com/cosmos/ibc-go/pull/4483) Use the correct revision height when querying trusted validator set.
-
- ## [v7.2.3](https://github.com/cosmos/ibc-go/releases/tag/v7.2.3) - 2024-01-31
-
- ### Dependencies
-
- * [\#5716](https://github.com/cosmos/ibc-go/pull/5716) Update Cosmos SDK to v0.47.8 and CometBFT to v0.37.4.
-
- ### Improvements
-
- * (core) [\#5541](https://github.com/cosmos/ibc-go/pull/5541) Enable emission of events on erroneous IBC application callbacks by appending a prefix to all event type and attribute keys.
-
- ## [v7.2.2](https://github.com/cosmos/ibc-go/releases/tag/v7.2.2) - 2023-10-20
-
- ### Dependencies
-
- * [\#4539](https://github.com/cosmos/ibc-go/pull/4539) Update Cosmos SDK to v0.47.5.
-
- ### Bug Fixes
-
- * (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order query service RPCs to fix availability of denom traces endpoint when no args are provided.
-
- ## [v7.2.1](https://github.com/cosmos/ibc-go/releases/tag/v7.2.1) - 2023-08-31
-
- ### Bug Fixes
-
- * (04-channel) [\#4476](https://github.com/cosmos/ibc-go/pull/4476) Use UTC time in log messages for packet timeout error.
-
- * (testing) [\#4483](https://github.com/cosmos/ibc-go/pull/4483) Use the correct revision height when querying trusted validator set.
-
- ## [v7.2.0](https://github.com/cosmos/ibc-go/releases/tag/v7.2.0) - 2023-06-22
-
- ### Dependencies
-
- * [\#3810](https://github.com/cosmos/ibc-go/pull/3810) Update Cosmos SDK to v0.47.3.
-
- * [\#3862](https://github.com/cosmos/ibc-go/pull/3862) Update CometBFT to v0.37.2.
-
- ### State Machine Breaking
-
- * [\#3907](https://github.com/cosmos/ibc-go/pull/3907) Re-implemented missing functions of `LegacyMsg` interface to fix transaction signing with ledger.
-
- ## [v7.1.0](https://github.com/cosmos/ibc-go/releases/tag/v7.1.0) - 2023-06-09
-
- ### Dependencies
-
- * [\#3542](https://github.com/cosmos/ibc-go/pull/3542) Update Cosmos SDK to v0.47.2 and CometBFT to v0.37.1.
-
- * [\#3457](https://github.com/cosmos/ibc-go/pull/3457) Update to ics23 v0.10.0.
-
- ### Improvements
-
- * (apps/transfer) [\#3454](https://github.com/cosmos/ibc-go/pull/3454) Support unlimited spending in transfer authorization when the max `uint256` value is provided as the limit.
-
- ### Features
-
- * (light-clients/09-localhost) [\#3229](https://github.com/cosmos/ibc-go/pull/3229) Implementation of v2 of localhost loopback client.
-
- * (apps/transfer) [\#3019](https://github.com/cosmos/ibc-go/pull/3019) Add state entry to keep track of total amount of tokens in escrow.
-
- ### Bug Fixes
-
- * (core/04-channel) [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly handle ordered channels in `UnreceivedPackets` query.
-
- * (core/04-channel) [\#3593](https://github.com/cosmos/ibc-go/pull/3593) `SendPacket` now correctly returns `ErrClientNotFound` instead of `ErrConsensusStateNotFound`.
-
- ## [v7.0.1](https://github.com/cosmos/ibc-go/releases/tag/v7.0.1) - 2023-05-25
-
- ### Bug Fixes
-
- * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly handle ordered channels in `UnreceivedPackets` query.
-
- ## [v7.0.0](https://github.com/cosmos/ibc-go/releases/tag/v7.0.0) - 2023-03-17
-
- ### Dependencies
-
- * [\#2672](https://github.com/cosmos/ibc-go/issues/2672) Update to cosmos-sdk v0.47.
-
- * [\#3175](https://github.com/cosmos/ibc-go/issues/3175) Migrate to cometbft v0.37.
-
- ### API Breaking
-
- * (core) [\#2897](https://github.com/cosmos/ibc-go/pull/2897) Remove legacy migrations required for upgrading from the Stargate release line to ibc-go >= v1.x.x.
-
- * (core/02-client) [\#2856](https://github.com/cosmos/ibc-go/pull/2856) Rename `IterateClients` to `IterateClientStates`. The function now takes a prefix argument which may be used for prefix iteration over the client store.
-
- * (light-clients/tendermint) [\#1768](https://github.com/cosmos/ibc-go/pull/1768) Removed the `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour` booleans as they are deprecated (see ADR026).
-
- * (06-solomachine) [\#1679](https://github.com/cosmos/ibc-go/pull/1679) Remove `types` sub-package from `06-solomachine` lightclient directory.
-
- * (07-tendermint) [\#1677](https://github.com/cosmos/ibc-go/pull/1677) Remove `types` sub-package from `07-tendermint` lightclient directory.
-
- * (06-solomachine) [\#1687](https://github.com/cosmos/ibc-go/pull/1687) Bump `06-solomachine` protobuf version from `v2` to `v3`.
-
- * (06-solomachine) [\#1687](https://github.com/cosmos/ibc-go/pull/1687) Removed `DataType` enum and associated message types from `06-solomachine`. `DataType` has been removed from `SignBytes` and `SignatureAndData` in favour of `path`.
-
- * (02-client) [\#598](https://github.com/cosmos/ibc-go/pull/598) The client state and consensus state return values have been removed from `VerifyUpgradeAndUpdateState`. Light client implementations must update the client state and consensus state after verifying a valid client upgrade.
-
- * (06-solomachine) [\#1100](https://github.com/cosmos/ibc-go/pull/1100) Remove `GetClientID` function from 06-solomachine `Misbehaviour` type.
-
- * (06-solomachine) [\#1100](https://github.com/cosmos/ibc-go/pull/1100) Deprecate `ClientId` field in 06-solomachine `Misbehaviour` type.
-
- * (07-tendermint) [\#1097](https://github.com/cosmos/ibc-go/pull/1097) Remove `GetClientID` function from 07-tendermint `Misbehaviour` type.
-
- * (07-tendermint) [\#1097](https://github.com/cosmos/ibc-go/pull/1097) Deprecate `ClientId` field in 07-tendermint `Misbehaviour` type.
-
- * (modules/core/exported) [\#1107](https://github.com/cosmos/ibc-go/pull/1107) Merge the `Header` and `Misbehaviour` interfaces into a single `ClientMessage` type.
-
- * (06-solomachine) [\#1906](https://github.com/cosmos/ibc-go/pull/1906/files) Removed `AllowUpdateAfterProposal` boolean as it has been deprecated (see 01_concepts of the solo machine spec for more details).
-
- * (07-tendermint) [\#1896](https://github.com/cosmos/ibc-go/pull/1896) Remove error return from `IterateConsensusStateAscending` in `07-tendermint`.
-
- * (apps/27-interchain-accounts) [\#2638](https://github.com/cosmos/ibc-go/pull/2638) Interchain accounts host and controller keepers now expect a keeper which fulfills the expected `exported.ScopedKeeper` interface for the capability keeper.
-
- * (06-solomachine) [\#2761](https://github.com/cosmos/ibc-go/pull/2761) Removed deprecated `ClientId` field from `Misbehaviour` and `allow_update_after_proposal` field from `ClientState`.
-
- * (apps) [\#3154](https://github.com/cosmos/ibc-go/pull/3154) Remove unused `ProposalContents` function.
-
- * (apps) [\#3149](https://github.com/cosmos/ibc-go/pull/3149) Remove legacy interface function `RandomizedParams`, which is no longer used.
-
- * (light-clients/06-solomachine) [\#2941](https://github.com/cosmos/ibc-go/pull/2941) Remove solomachine header sequence.
-
- * (core) [\#2982](https://github.com/cosmos/ibc-go/pull/2982) Moved the ibc module name into the exported package.
-
- ### State Machine Breaking
-
- * (06-solomachine) [\#2744](https://github.com/cosmos/ibc-go/pull/2744) `Misbehaviour.ValidateBasic()` now only enforces that signature data does not match when the signature paths are different.
-
- * (06-solomachine) [\#2748](https://github.com/cosmos/ibc-go/pull/2748) Add sentinel value for header path in 06-solomachine.
-
- * (apps/29-fee) [\#2942](https://github.com/cosmos/ibc-go/pull/2942) Check `x/bank` send enabled before escrowing fees.
-
- * (core/04-channel) [\#3009](https://github.com/cosmos/ibc-go/pull/3009) Change check to disallow optimistic sends.
-
- ### Improvements
-
- * (core) [\#3082](https://github.com/cosmos/ibc-go/pull/3082) Add `HasConnection` and `HasChannel` methods.
-
- * (tests) [\#2926](https://github.com/cosmos/ibc-go/pull/2926) Lint tests.
-
- * (apps/transfer) [\#2643](https://github.com/cosmos/ibc-go/pull/2643) Add amount, denom, and memo to transfer event emission.
-
- * (core) [\#2746](https://github.com/cosmos/ibc-go/pull/2746) Allow proof height to be zero for all core IBC `sdk.Msg` types that contain proofs.
-
- * (light-clients/06-solomachine) [\#2746](https://github.com/cosmos/ibc-go/pull/2746) Discard proofHeight for solo machines and use the solo machine sequence instead.
-
- * (modules/light-clients/07-tendermint) [\#1713](https://github.com/cosmos/ibc-go/pull/1713) Allow client upgrade proposals to update `TrustingPeriod`. See ADR-026 for context.
-
- * (modules/core/02-client) [\#1188](https://github.com/cosmos/ibc-go/pull/1188/files) Route `MsgSubmitMisbehaviour` to the `UpdateClient` keeper function. Deprecate the `SubmitMisbehaviour` endpoint.
-
- * (modules/core/02-client) [\#1208](https://github.com/cosmos/ibc-go/pull/1208) Replace `CheckHeaderAndUpdateState` usage in 02-client with calls to `VerifyClientMessage`, `CheckForMisbehaviour`, `UpdateStateOnMisbehaviour` and `UpdateState`.
-
- * (modules/light-clients/09-localhost) [\#1187](https://github.com/cosmos/ibc-go/pull/1187/) Remove the localhost light client implementation as it is not functional. An upgrade handler is provided in `modules/migrations/v5` to prune `09-localhost` clients and consensus states from the store.
-
- * (modules/core/02-client) [\#1186](https://github.com/cosmos/ibc-go/pull/1186) Remove `GetRoot` function from the ConsensusState interface in `02-client`. `GetRoot` is unused by core IBC.
-
- * (modules/core/02-client) [\#1196](https://github.com/cosmos/ibc-go/pull/1196) Add `VerifyClientMessage` to `ClientState` interface.
-
- * (modules/core/02-client) [\#1198](https://github.com/cosmos/ibc-go/pull/1198) Add `UpdateStateOnMisbehaviour` to `ClientState` interface.
-
- * (modules/core/02-client) [\#1170](https://github.com/cosmos/ibc-go/pull/1170) Update `ClientUpdateProposal` to set the client state in light client implementations' `CheckSubstituteAndUpdateState` methods.
-
- * (modules/core/02-client) [\#1197](https://github.com/cosmos/ibc-go/pull/1197) Add `CheckForMisbehaviour` to `ClientState` interface.
-
- * (modules/core/02-client) [\#1210](https://github.com/cosmos/ibc-go/pull/1210) Remove `CheckHeaderAndUpdateState` from `ClientState` interface & associated light client implementations.
-
- * (modules/core/02-client) [\#1212](https://github.com/cosmos/ibc-go/pull/1212) Remove `CheckMisbehaviourAndUpdateState` from `ClientState` interface & associated light client implementations.
-
- * (modules/core/exported) [\#1206](https://github.com/cosmos/ibc-go/pull/1206) Add new method `UpdateState` to `ClientState` interface.
-
- * (modules/core/02-client) [\#1741](https://github.com/cosmos/ibc-go/pull/1741) Emit a new `upgrade_chain` event upon setting upgrade consensus state.
-
- * (client) [\#724](https://github.com/cosmos/ibc-go/pull/724) `IsRevisionFormat` and `IsClientIDFormat` have been updated to disallow newlines before the dash used to separate the chainID and revision number, and the client type and client sequence.
-
- * (02-client/cli) [\#897](https://github.com/cosmos/ibc-go/pull/897) Remove `GetClientID()` from `Misbehaviour` interface. The submit client misbehaviour cli command now requires an explicit client id.
-
- * (06-solomachine) [\#1972](https://github.com/cosmos/ibc-go/pull/1972) The solo machine implementation of `ZeroCustomFields` now panics, as the function is only used for upgrades, which the solo machine does not support.
-
- * (light-clients/06-solomachine) Move `verifyMisbehaviour` function from update.go to misbehaviour_handle.go.
-
- * [\#2434](https://github.com/cosmos/ibc-go/pull/2478) Removed all `TypeMsg` constants.
-
- * (modules/core/exported) [\#2539](https://github.com/cosmos/ibc-go/pull/2539) Remove `GetVersions` from `ConnectionI` interface.
-
- * (core/02-connection) [\#2419](https://github.com/cosmos/ibc-go/pull/2419) Add optional proof data to proto definitions of `MsgConnectionOpenTry` and `MsgConnectionOpenAck` for host state machines that are unable to introspect their own consensus state.
-
- * (light-clients/07-tendermint) [\#3046](https://github.com/cosmos/ibc-go/pull/3046) Moved non-verification misbehaviour checks to `CheckForMisbehaviour`.
-
- * (apps/29-fee) [\#2975](https://github.com/cosmos/ibc-go/pull/2975) Add fee distribution events to ics29.
-
- * (light-clients/07-tendermint) [\#2965](https://github.com/cosmos/ibc-go/pull/2965) Prune expired `07-tendermint` consensus states on duplicate header updates.
-
- * (light-clients) [\#2736](https://github.com/cosmos/ibc-go/pull/2736) Update `VerifyMembership` and `VerifyNonMembership` methods to use the `Path` interface.
-
- * (light-clients) [\#3113](https://github.com/cosmos/ibc-go/pull/3113) Align light client module names.
-
- ### Features
-
- * (apps/transfer) [\#3079](https://github.com/cosmos/ibc-go/pull/3079) Added authz support for ics20.
-
- * (core/02-client) [\#2824](https://github.com/cosmos/ibc-go/pull/2824) Add genesis migrations for v6 to v7. The migration migrates the solo machine client state definition, removes all solo machine consensus states and removes the localhost client.
-
- * (core/24-host) [\#2856](https://github.com/cosmos/ibc-go/pull/2856) Add `PrefixedClientStorePath` and `PrefixedClientStoreKey` functions to 24-host.
-
- * (core/02-client) [\#2819](https://github.com/cosmos/ibc-go/pull/2819) Add automatic in-place store migrations to remove the localhost client and migrate existing solo machine definitions.
-
- * (light-clients/06-solomachine) [\#2826](https://github.com/cosmos/ibc-go/pull/2826) Add `AppModuleBasic` for the 06-solomachine client and remove solo machine type registration from core IBC. Chains must register the `AppModuleBasic` of light clients.
-
- * (light-clients/07-tendermint) [\#2825](https://github.com/cosmos/ibc-go/pull/2825) Add `AppModuleBasic` for the 07-tendermint client and remove tendermint type registration from core IBC. Chains must register the `AppModuleBasic` of light clients.
-
- * (light-clients/07-tendermint) [\#2800](https://github.com/cosmos/ibc-go/pull/2800) Add optional in-place store migration function to prune all expired tendermint consensus states.
-
- * (core/24-host) [\#2820](https://github.com/cosmos/ibc-go/pull/2820) Add `MustParseClientStatePath` which parses the clientID from a client state key path.
-
- * (testing/simapp) [\#2842](https://github.com/cosmos/ibc-go/pull/2842) Add the new upgrade handler for v6 -> v7 to simapp which prunes expired Tendermint consensus states.
-
- * (testing) [\#2829](https://github.com/cosmos/ibc-go/pull/2829) Add `AssertEvents` which asserts events against an expected event map.
-
- ### Bug Fixes
-
- * (testing) [\#3295](https://github.com/cosmos/ibc-go/pull/3295) The function `SetupWithGenesisValSet` will set the baseapp chainID before running `InitChain`.
-
- * (light-clients/solomachine) [\#1839](https://github.com/cosmos/ibc-go/pull/1839) Fixed usage of the new diversifier in validation of changing diversifiers for the solo machine. The current diversifier must sign over the new diversifier.
-
- * (light-clients/07-tendermint) [\#1674](https://github.com/cosmos/ibc-go/pull/1674) Submitted ClientState is zeroed out before checking the proof in order to prevent the proposal from containing information governance is not actually voting on.
-
- * (modules/core/02-client) [\#1676](https://github.com/cosmos/ibc-go/pull/1676) ClientState must be zeroed out for `UpgradeProposals` to pass validation. This prevents a proposal containing information governance is not actually voting on.
-
- * (core/02-client) [\#2510](https://github.com/cosmos/ibc-go/pull/2510) Fix client ID validation regex to conform closer to spec.
-
- * (apps/transfer) [\#3045](https://github.com/cosmos/ibc-go/pull/3045) Allow value with slashes in URL template.
-
- * (apps/27-interchain-accounts) [\#2601](https://github.com/cosmos/ibc-go/pull/2601) Remove bech32 check from owner address on ICA controller msgs RegisterInterchainAccount and SendTx.
-
- * (apps/transfer) [\#2651](https://github.com/cosmos/ibc-go/pull/2651) Skip emission of unpopulated memo field in ics20.
-
- * (apps/27-interchain-accounts) [\#2682](https://github.com/cosmos/ibc-go/pull/2682) Avoid race conditions in ics27 handshakes.
-
- * (light-clients/06-solomachine) [\#2741](https://github.com/cosmos/ibc-go/pull/2741) Added check for empty path in 06-solomachine.
-
- * (light-clients/07-tendermint) [\#3022](https://github.com/cosmos/ibc-go/pull/3022) Correctly close iterator in `07-tendermint` store.
-
- * (core/02-client) [\#3010](https://github.com/cosmos/ibc-go/pull/3010) Update `Paginate` to use `FilterPaginate` in `ClientStates` and `ConnectionChannels` grpc queries.
-
- ## [v6.3.0](https://github.com/cosmos/ibc-go/releases/tag/v6.3.0) - 2024-04-05
+
+
+ This page tracks all releases and changes from the
+ [cosmos/ibc-go](https://github.com/cosmos/ibc-go) repository. For the latest
+ development updates, see the
+ [UNRELEASED](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md#unreleased)
+ section.
+
+
+
+ ### Features
+
+ ### Dependencies
+
+ * [\#8369](https://github.com/cosmos/ibc-go/pull/8369) Bump **github.com/CosmWasm/wasmvm** to **2.2.4**
+ * [\#8369](https://github.com/cosmos/ibc-go/pull/8369) Bump **github.com/ethereum/go-ethereum** to **1.15.11**
+
+ ### API Breaking
+
+ ### State Machine Breaking
+
+ ### Improvements
+
+ * (core/api) [\#8303](https://github.com/cosmos/ibc-go/pull/8303) Prefix-based routing in IBCv2 Router
+ * (apps/callbacks) [\#8353](https://github.com/cosmos/ibc-go/pull/8353) Add field in callbacks data for custom calldata
+
+ ### Bug Fixes
+
+ ### Testing API
+
+ * [\#8371](https://github.com/cosmos/ibc-go/pull/8371) e2e: Create only necessary number of chains for e2e suite.
+ * [\#8375](https://github.com/cosmos/ibc-go/pull/8375) feat: parse IBC v2 packets from ABCI events
+
+
+
+ ### Features
+
+ * (light-clients/07-tendermint) [\#8185](https://github.com/cosmos/ibc-go/pull/8185) Allow scaling of trusting period for client upgrades
+
+ ### Dependencies
+
+ * [\#8254](https://github.com/cosmos/ibc-go/pull/8254) Bump **github.com/cosmos/cosmos-sdk** to **0.53.0**
+ * [\#8326](https://github.com/cosmos/ibc-go/pull/8329) Bump **cosmossdk.io/x/upgrade** to **0.2.0**
+ * [\#8326](https://github.com/cosmos/ibc-go/pull/8326) Bump **cosmossdk.io/api** to **0.9.2**
+ * [\#8293](https://github.com/cosmos/ibc-go/pull/8293) Bump **cosmossdk.io/math** to **1.5.3**
+ * [\#8254](https://github.com/cosmos/ibc-go/pull/8254) Bump **cosmossdk.io/core** to **0.11.3**
+ * [\#8254](https://github.com/cosmos/ibc-go/pull/8254) Bump **cosmossdk.io/store** to **1.1.2**
+ * [\#8254](https://github.com/cosmos/ibc-go/pull/8254) Bump **cosmossdk.io/x/tx** to **0.14.0**
+ * [\#8253](https://github.com/cosmos/ibc-go/pull/8253) Bump **cosmossdk.io/errors** to **1.0.2**
+ * [\#8253](https://github.com/cosmos/ibc-go/pull/8253) Bump **cosmossdk.io/log** to **1.5.1**
+ * [\#8253](https://github.com/cosmos/ibc-go/pull/8253) Bump **github.com/cometbft/cometbft** to **0.38.17**
+ * [\#8264](https://github.com/cosmos/ibc-go/pull/8264) Bump **github.com/prysmaticlabs/prysm** to **v5.3.0**
+
+ ### Bug Fixes
+
+ * [\#8287](https://github.com/cosmos/ibc-go/pull/8287) Rename total_escrow REST query from `denoms` to `total_escrow`
+
+
+
+ ### Security Fixes
+
+ * Fix [ISA-2025-001](https://github.com/cosmos/ibc-go/security/advisories/GHSA-4wf3-5qj9-368v) security vulnerability.
+ * Fix [ASA-2025-004](https://github.com/cosmos/ibc-go/security/advisories/GHSA-jg6f-48ff-5xrw) security vulnerability.
+
+ ### Features
+
+ * (core) [\#7505](https://github.com/cosmos/ibc-go/pull/7505) Add IBC Eureka (IBC v2) implementation, enabling more efficient IBC packet handling without channel dependencies, bringing significant performance improvements.
+ * (apps/transfer) [\#7650](https://github.com/cosmos/ibc-go/pull/7650) Add support for transfer of entire balance for vesting accounts.
+ * (apps/wasm) [\#5079](https://github.com/cosmos/ibc-go/pull/5079) 08-wasm light client proxy module for wasm clients.
+ * (core/02-client) [\#7936](https://github.com/cosmos/ibc-go/pull/7936) Clientv2 module.
+ * (core/04-channel) [\#7933](https://github.com/cosmos/ibc-go/pull/7933) Channel-v2 genesis.
+ * (core/04-channel, core/api) [\#7934](https://github.com/cosmos/ibc-go/pull/7934) Callbacks Eureka.
+ * (light-clients/09-localhost) [\#6683](https://github.com/cosmos/ibc-go/pull/6683) Make 09-localhost stateless.
+ * (core, app) [\#6902](https://github.com/cosmos/ibc-go/pull/6902) Add channel version to core app callbacks.
+
+ ### Dependencies
+
+ * [\#8181](https://github.com/cosmos/ibc-go/pull/8181) Bump **github.com/cosmos/cosmos-sdk** to **0.50.13**
+ * [\#7932](https://github.com/cosmos/ibc-go/pull/7932) Bump **go** to **1.23**
+ * [\#7330](https://github.com/cosmos/ibc-go/pull/7330) Bump **cosmossdk.io/api** to **0.7.6**
+ * [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump **cosmossdk.io/core** to **0.11.1**
+ * [\#7182](https://github.com/cosmos/ibc-go/pull/7182) Bump **cosmossdk.io/log** to **1.4.1**
+ * [\#7264](https://github.com/cosmos/ibc-go/pull/7264) Bump **cosmossdk.io/store** to **1.1.1**
+ * [\#7585](https://github.com/cosmos/ibc-go/pull/7585) Bump **cosmossdk.io/math** to **1.4.0**
+ * [\#7540](https://github.com/cosmos/ibc-go/pull/7540) Bump **github.com/cometbft/cometbft** to **0.38.15**
+ * [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump **cosmossdk.io/x/upgrade** to **0.1.4**
+ * [\#8124](https://github.com/cosmos/ibc-go/pull/8124) Bump **cosmossdk.io/x/tx** to **0.13.7**
+ * [\#7942](https://github.com/cosmos/ibc-go/pull/7942) Bump **github.com/cosmos/cosmos-db** to **1.1.1**
+ * [\#7224](https://github.com/cosmos/ibc-go/pull/7224) Bump **github.com/cosmos/ics23/go** to **0.11.0**
+
+ ### API Breaking
+
+ * (core, apps) [\#7213](https://github.com/cosmos/ibc-go/pull/7213) Remove capabilities from `SendPacket`.
+ * (core, apps) [\#7225](https://github.com/cosmos/ibc-go/pull/7225) Remove capabilities from `WriteAcknowledgement`.
+ * (core, apps) [\#7232](https://github.com/cosmos/ibc-go/pull/7232) Remove capabilities from channel handshake methods.
+ * (core, apps) [\#7270](https://github.com/cosmos/ibc-go/pull/7270) Remove remaining dependencies on capability module.
+ * (core, apps) [\#4811](https://github.com/cosmos/ibc-go/pull/4811) Use expected interface for legacy params subspace
+ * (core/04-channel) [\#7239](https://github.com/cosmos/ibc-go/pull/7239) Removed function `LookupModuleByChannel`
+ * (core/05-port) [\#7252](https://github.com/cosmos/ibc-go/pull/7252) Removed function `LookupModuleByPort`
+ * (core/24-host) [\#7239](https://github.com/cosmos/ibc-go/pull/7239) Removed function `ChannelCapabilityPath`
+ * (apps/27-interchain-accounts) [\#7239](https://github.com/cosmos/ibc-go/pull/7239) The following functions have been removed: `AuthenticateCapability`, `ClaimCapability`
+ * (apps/27-interchain-accounts) [\#7961](https://github.com/cosmos/ibc-go/pull/7961) Removed `absolute-timeouts` flag from `send-tx` in the ICA CLI.
+ * (apps/transfer) [\#7239](https://github.com/cosmos/ibc-go/pull/7239) The following functions have been removed: `BindPort`, `AuthenticateCapability`, `ClaimCapability`
+ * (capability) [\#7279](https://github.com/cosmos/ibc-go/pull/7279) The module `capability` has been removed.
+ * (testing) [\#7305](https://github.com/cosmos/ibc-go/pull/7305) Added `TrustedValidators` map to `TestChain`. This removes the dependency on the `x/staking` module for retrieving trusted validator sets at a given height, and removes the `GetTrustedValidators` method from the `TestChain` struct.
+ * (23-commitment) [\#7486](https://github.com/cosmos/ibc-go/pull/7486) Remove unimplemented `BatchVerifyMembership` and `BatchVerifyNonMembership` functions
+ * (core/02-client, light-clients) [\#5806](https://github.com/cosmos/ibc-go/pull/5806) Decouple light client routing from their encoding structure.
+ * (core/04-channel) [\#5991](https://github.com/cosmos/ibc-go/pull/5991) The client CLI `QueryLatestConsensusState` has been removed.
+ * (light-clients/06-solomachine) [\#6037](https://github.com/cosmos/ibc-go/pull/6037) Remove `Initialize` function from `ClientState` and move logic to `Initialize` function of `LightClientModule`.
+ * (light-clients/06-solomachine) [\#6230](https://github.com/cosmos/ibc-go/pull/6230) Remove `GetTimestampAtHeight`, `Status` and `UpdateStateOnMisbehaviour` functions from `ClientState` and move logic to functions of `LightClientModule`.
+ * (core/02-client) [\#6084](https://github.com/cosmos/ibc-go/pull/6084) Removed `stakingKeeper` as an argument to `NewKeeper` and replaced with a `ConsensusHost` implementation.
+ * (testing) [\#6070](https://github.com/cosmos/ibc-go/pull/6070) Remove `AssertEventsLegacy` function.
+ * (core) [\#6138](https://github.com/cosmos/ibc-go/pull/6138) Remove `Router` reference from IBC core keeper and use instead the router on the existing `PortKeeper` reference.
+ * (core/04-channel) [\#6023](https://github.com/cosmos/ibc-go/pull/6023) Remove emission of non-hexlified event attributes `packet_data` and `packet_ack`.
+ * (core) [\#6320](https://github.com/cosmos/ibc-go/pull/6320) Remove unnecessary `Proof` interface from `exported` package.
+ * (core/05-port) [\#6341](https://github.com/cosmos/ibc-go/pull/6341) Modify `UnmarshalPacketData` interface to take in the context, portID, and channelID. This allows packet data to be unmarshaled based on the channel version.
+ * (apps/27-interchain-accounts) [\#6433](https://github.com/cosmos/ibc-go/pull/6433) Use UNORDERED as the default ordering for new ICA channels.
+ * (apps/transfer) [\#6440](https://github.com/cosmos/ibc-go/pull/6440) Remove `GetPrefixedDenom`.
+ * (apps/transfer) [\#6508](https://github.com/cosmos/ibc-go/pull/6508) Remove the `DenomTrace` type.
+ * (apps/27-interchain-accounts) [\#6598](https://github.com/cosmos/ibc-go/pull/6598) Mark the `requests` repeated field of `MsgModuleQuerySafe` non-nullable.
+ * (23-commitment) [\#6644](https://github.com/cosmos/ibc-go/pull/6644) Introduce `commitment.v2.MerklePath` to include `repeated bytes` in favour of `repeated string`. This supports using merkle path keys which include non UTF-8 encoded runes.
+ * (23-commitment) [\#6870](https://github.com/cosmos/ibc-go/pull/6870) Remove `commitment.v1.MerklePath` in favour of `commitment.v2.MerklePath`.
+ * (apps/27-interchain-accounts) [\#6749](https://github.com/cosmos/ibc-go/pull/6749) The ICA controller `NewIBCMiddleware` constructor function sets the auth module to nil by default.
+ * (core, core/02-client) [\#6763](https://github.com/cosmos/ibc-go/pull/6763) Move prometheus metric labels for 02-client and core into a separate `metrics` package on core.
+ * (core/02-client) [\#6777](https://github.com/cosmos/ibc-go/pull/6777) The `NewClientProposalHandler` of `02-client` has been removed.
+ * (core/types) [\#6794](https://github.com/cosmos/ibc-go/pull/6794) The composite interface `QueryServer` has been removed from package `core/types`. Please use the granular `QueryServer` interfaces provided by each core submodule.
+ * (light-clients/06-solomachine) [\#6888](https://github.com/cosmos/ibc-go/pull/6888) Remove `TypeClientMisbehaviour` constant and the `Type` method on `Misbehaviour`.
+ * (light-clients/06-solomachine, light-clients/07-tendermint) [\#6891](https://github.com/cosmos/ibc-go/pull/6891) The `VerifyMembership` and `VerifyNonMembership` functions of solomachine's `ClientState` have been made private. The `VerifyMembership`, `VerifyNonMembership`, `GetTimestampAtHeight`, `Status` and `Initialize` functions of tendermint's `ClientState` have been made private.
+ * (core/04-channel) [\#6902](https://github.com/cosmos/ibc-go/pull/6902) Add channel version to core application callbacks.
* (core/03-connection, core/02-client) [\#6937](https://github.com/cosmos/ibc-go/pull/6937) Remove `ConsensusHost` interface, also removing self client and consensus state validation in the connection handshake.
* (core/24-host) [\#6882](https://github.com/cosmos/ibc-go/issues/6882) All functions ending in `Path` have been removed from 24-host in favour of their sibling functions ending in `Key`.
* (23-commitment) [\#6633](https://github.com/cosmos/ibc-go/pull/6633) `MerklePath` has been changed to use `repeated bytes` in favour of `repeated strings`.
* (23-commitment) [\#6644](https://github.com/cosmos/ibc-go/pull/6644) Introduce `commitment.v2.MerklePath` to include `repeated bytes` in favour of `repeated string`. This supports using merkle path keys which include non-UTF-8 encoded runes.
* (23-commitment) [\#6870](https://github.com/cosmos/ibc-go/pull/6870) Remove `commitment.v1.MerklePath` in favour of `commitment.v2.MerklePath`.
* [\#6923](https://github.com/cosmos/ibc-go/pull/6923) The JSON msg API for `VerifyMembershipMsg` and `VerifyNonMembershipMsg` payloads for client contract `SudoMsg` has been updated. The field `path` has been changed to `merkle_path`. This change requires updates to 08-wasm client contracts for integration.
* (apps/callbacks) [\#7000](https://github.com/cosmos/ibc-go/pull/7000) Add base application version to contract keeper callbacks.
* (light-clients/08-wasm) [\#5154](https://github.com/cosmos/ibc-go/pull/5154) Use bytes in wasm contract api instead of wrapped.
* (core, core/08-wasm) [\#5397](https://github.com/cosmos/ibc-go/pull/5397) Add coordinator Setup functions to the Path type.
* (core/05-port) [\#6341](https://github.com/cosmos/ibc-go/pull/6341) Modify `UnmarshalPacketData` interface to take in the context, portID, and channelID. This allows packet data to be unmarshaled based on the channel version.
* (core/02-client) [\#6863](https://github.com/cosmos/ibc-go/pull/6863) Remove `ClientStoreProvider` interface in favour of concrete type.
* (core/05-port) [\#6988](https://github.com/cosmos/ibc-go/pull/6988) Modify `UnmarshalPacketData` interface to return the underlying application version.
* (apps/27-interchain-accounts) [\#7053](https://github.com/cosmos/ibc-go/pull/7053) Remove ICS27 channel capability migration introduced in v6.
* (apps/27-interchain-accounts) [\#8002](https://github.com/cosmos/ibc-go/issues/8002) Remove ICS-29: fee middleware.
* (core/04-channel) [\#8053](https://github.com/cosmos/ibc-go/issues/8053) Remove channel upgradability.

### State Machine Breaking

* (light-clients/06-solomachine) [\#6313](https://github.com/cosmos/ibc-go/pull/6313) Fix: No-op to avoid panicking on `UpdateState` for invalid misbehaviour submissions.
* (apps/callbacks) [\#8014](https://github.com/cosmos/ibc-go/pull/8014) Callbacks will now return an error acknowledgement if the recvPacket callback fails. This reverts all app callback changes, whereas before we only reverted the callback changes. We also error on all callbacks if the callback data is set but malformed, whereas before we ignored the error and continued processing.
* (apps/callbacks) [\#5349](https://github.com/cosmos/ibc-go/pull/5349) Check if client params are duplicates.
* (apps/transfer) [\#6268](https://github.com/cosmos/ibc-go/pull/6268) Use memo strings instead of JSON keys in `AllowedPacketData` of transfer authorization.
* (light-clients/07-tendermint) Fix: No-op to avoid panicking on `UpdateState` for invalid misbehaviour submissions.

### Improvements

* (testing) [\#7430](https://github.com/cosmos/ibc-go/pull/7430) Update the block proposer in test chains for each block.
* (apps/27-interchain-accounts) [\#5533](https://github.com/cosmos/ibc-go/pull/5533) ICA host sets the host connection ID on `OnChanOpenTry`, so that ICA controller implementations are not obliged to set the value on `OnChanOpenInit` if they are not able.
* (core/02-client, core/03-connection, apps/27-interchain-accounts) [\#6256](https://github.com/cosmos/ibc-go/pull/6256) Add length checking of array fields in messages.
* (light-clients/08-wasm) [\#5146](https://github.com/cosmos/ibc-go/pull/5146) Use global wasm VM instead of keeping an additional reference in keeper.
* (core/04-channels) [\#7935](https://github.com/cosmos/ibc-go/pull/7935) Limit payload size for both v1 and v2 packets.
* (core/runtime) [\#7601](https://github.com/cosmos/ibc-go/pull/7601) IBC core runtime env.
* (core/08-wasm) [\#5294](https://github.com/cosmos/ibc-go/pull/5294) Don't panic during any store operations.
* (apps) [\#5305](https://github.com/cosmos/ibc-go/pull/5305) Remove GetSigners from `sdk.Msg` implementations.
* (apps) [\#5778](https://github.com/cosmos/ibc-go/pull/5778) Use json for marshalling/unmarshalling transfer packet data.
* (core/08-wasm) [\#5785](https://github.com/cosmos/ibc-go/pull/5785) Allow module safe queries in ICA.
* (core/ante) [\#6278](https://github.com/cosmos/ibc-go/pull/6278) Performance: Exclude pruning from tendermint client updates in ante handler executions.
* (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) Performance: Skip app callbacks during RecvPacket execution in checkTx within the redundant relay ante handler.
* (core/ante) [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip redundant proof checking in RecvPacket execution in reCheckTx within the redundant relay ante handler.
* [\#6716](https://github.com/cosmos/ibc-go/pull/6716) Add `HasModule` to capability keeper to allow checking if a scoped module already exists.
### Bug Fixes

* (apps/27-interchain-accounts) [\#7277](https://github.com/cosmos/ibc-go/pull/7277) Use `GogoResolver` when populating module query safe allow list to avoid panics from unresolvable protobuf dependencies.
* (core/04-channel) [\#7342](https://github.com/cosmos/ibc-go/pull/7342) Read Tx cmd flags, including the from address, to avoid an `Address cannot be empty` error when upgrading channels via the CLI.
* (core/03-connection) [\#7397](https://github.com/cosmos/ibc-go/pull/7397) Skip the genesis validation of connectionID for the localhost client.
* (apps/27-interchain-accounts) [\#6377](https://github.com/cosmos/ibc-go/pull/6377) Generate ICA simtest proposals only for provided keepers.

### Testing API

* [\#7688](https://github.com/cosmos/ibc-go/pull/7688) Added `SendMsgsWithSender` to `TestChain`.
* [\#7430](https://github.com/cosmos/ibc-go/pull/7430) Update block proposer in testing.
* [\#5493](https://github.com/cosmos/ibc-go/pull/5493) Add `IBCClientHeader` func for endpoint and update tests.
* [\#6685](https://github.com/cosmos/ibc-go/pull/6685) Configure relayers to watch only channels associated with an individual test.
* [\#6758](https://github.com/cosmos/ibc-go/pull/6758) Tokens are successfully forwarded from A to C through B.

### Dependencies

* [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump Cosmos SDK to v0.50.9.
* [\#7221](https://github.com/cosmos/ibc-go/pull/7221) Update ics23 to v0.11.0.

### State Machine Breaking

* (core/03-connection) [\#7129](https://github.com/cosmos/ibc-go/pull/7129) Remove verification of self client and consensus state from connection handshake.

### Improvements

* (core/04-channel) [\#6871](https://github.com/cosmos/ibc-go/pull/6871) Add channel ordering to write acknowledgement event.
### Features

* (apps/transfer) [\#6877](https://github.com/cosmos/ibc-go/pull/6877) Added the possibility to transfer the entire user balance of a particular denomination by using [__PROTECTED_0__](https://github.com/cosmos/ibc-go/blob/beb2d93b58835ddb9ed8e7624988f1e66b849251/modules/apps/transfer/types/token.go#L56-L58) as the token amount.

### Bug Fixes

* (core/04-channel) [\#6935](https://github.com/cosmos/ibc-go/pull/6935) Check upgrade compatibility in `ChanUpgradeConfirm`.

### Dependencies

* [\#6614](https://github.com/cosmos/ibc-go/pull/6614) Bump Cosmos SDK to v0.50.7.

### Improvements

* (apps/27-interchain-accounts) [\#6436](https://github.com/cosmos/ibc-go/pull/6436) Refactor ICA host keeper instantiation method to avoid panic related to proto files.

### Improvements

* (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) Performance: Skip app callbacks during RecvPacket execution in checkTx within the redundant relay ante handler.
* (core/ante) [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip redundant proof checking in RecvPacket execution in reCheckTx within the redundant relay ante handler.
* (core/ante) [\#6306](https://github.com/cosmos/ibc-go/pull/6306) Performance: Skip misbehaviour checks in UpdateClient flow and skip signature checks in reCheckTx mode.

### Dependencies

* [\#6300](https://github.com/cosmos/ibc-go/pull/6300) Bump Cosmos SDK to v0.50.6 and CometBFT to v0.38.7.

### State Machine Breaking

* (light-clients/07-tendermint) [\#6276](https://github.com/cosmos/ibc-go/pull/6276) Fix: No-op to avoid panicking on `UpdateState` for invalid misbehaviour submissions.

### Improvements

* (apps/27-interchain-accounts, apps/transfer, apps/29-fee) [\#6253](https://github.com/cosmos/ibc-go/pull/6253) Allow channel handshake to succeed if fee middleware is wired up on one side, but not the other.
* (apps/27-interchain-accounts) [\#6251](https://github.com/cosmos/ibc-go/pull/6251) Use `UNORDERED` as the default ordering for new ICA channels.
* (apps/transfer) [\#6268](https://github.com/cosmos/ibc-go/pull/6268) Use memo strings instead of JSON keys in `AllowedPacketData` of transfer authorization.
* (core/ante) [\#6278](https://github.com/cosmos/ibc-go/pull/6278) Performance: Exclude pruning from tendermint client updates in ante handler executions.
* (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) Performance: Skip app callbacks during RecvPacket execution in checkTx within the redundant relay ante handler.
* (core/ante) [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip redundant proof checking in RecvPacket execution in reCheckTx within the redundant relay ante handler.

### Features

* (core) [\#6055](https://github.com/cosmos/ibc-go/pull/6055) Introduce a new interface `ConsensusHost` used to validate an IBC `ClientState` and `ConsensusState` against the host chain's underlying consensus parameters.
* (core/02-client) [\#5821](https://github.com/cosmos/ibc-go/pull/5821) Add rpc `VerifyMembershipProof` (querier approach for conditional clients).
* (core/04-channel) [\#5788](https://github.com/cosmos/ibc-go/pull/5788) Add `NewErrorAcknowledgementWithCodespace` to allow codespaces in ack errors.
* (apps/27-interchain-accounts) [\#5785](https://github.com/cosmos/ibc-go/pull/5785) Introduce a new tx message that the ICA host submodule can use to query the chain (only queries marked with `module_query_safe`) and write the responses to the acknowledgement.
### Bug Fixes

* (apps/27-interchain-accounts) [\#6167](https://github.com/cosmos/ibc-go/pull/6167) Fixed an edge case bug where migrating params for a pre-existing ICA module which implemented controller functionality only could panic when migrating params for a newly added host, and align controller param migration with host.
* (apps/29-fee) [\#6255](https://github.com/cosmos/ibc-go/pull/6255) Delete refunded fees from state if some fee(s) cannot be refunded on channel closure.

### Dependencies

* [\#5975](https://github.com/cosmos/ibc-go/pull/5975) Bump Cosmos SDK to v0.50.5.

### Improvements

* (proto) [\#5987](https://github.com/cosmos/ibc-go/pull/5987) Add wasm proto files.

### Dependencies

* [\#5663](https://github.com/cosmos/ibc-go/pull/5663) Bump Cosmos SDK to v0.50.3 and CometBFT to v0.38.2.

### State Machine Breaking

* (apps/27-interchain-accounts) [\#5442](https://github.com/cosmos/ibc-go/pull/5442) Increase the maximum allowed length for the memo field of `InterchainAccountPacketData`.

### Improvements

* (core/02-client) [\#5429](https://github.com/cosmos/ibc-go/pull/5429) Add wildcard `"*"` to allow all clients in `AllowedClients` param.
* (core) [\#5541](https://github.com/cosmos/ibc-go/pull/5541) Enable emission of events on erroneous IBC application callbacks by appending a prefix to all event type and attribute keys.

### Features

* (core/04-channel) [\#1613](https://github.com/cosmos/ibc-go/pull/1613) Channel upgradability.
* (apps/transfer) [\#5280](https://github.com/cosmos/ibc-go/pull/5280) Add list of allowed packet data keys to `Allocation` of `TransferAuthorization`.
* (apps/27-interchain-accounts) [\#5633](https://github.com/cosmos/ibc-go/pull/5633) Allow setting new and upgrading existing ICA (ordered) channels to use unordered ordering.
### Bug Fixes

* (apps/27-interchain-accounts) [\#5343](https://github.com/cosmos/ibc-go/pull/5343) Add check if controller is enabled in `sendTx` before sending packet to host.
* (apps/29-fee) [\#5441](https://github.com/cosmos/ibc-go/pull/5441) Allow setting the relayer address as payee.

### Dependencies

* [\#5718](https://github.com/cosmos/ibc-go/pull/5718) Update Cosmos SDK to v0.50.3 and CometBFT to v0.38.2.

### Improvements

* (core) [\#5541](https://github.com/cosmos/ibc-go/pull/5541) Enable emission of events on erroneous IBC application callbacks by appending a prefix to all event type and attribute keys.

### Dependencies

* [\#5038](https://github.com/cosmos/ibc-go/pull/5038) Bump SDK v0.50.1 and cometBFT v0.38.
* [\#4398](https://github.com/cosmos/ibc-go/pull/4398) Update all modules to go 1.21.

### API Breaking

* (core) [\#4703](https://github.com/cosmos/ibc-go/pull/4703) Make `PortKeeper` field of `IBCKeeper` a pointer.
* (core/23-commitment) [\#4459](https://github.com/cosmos/ibc-go/pull/4459) Remove `Pretty` and `String` custom implementations of `MerklePath`.
* [\#3205](https://github.com/cosmos/ibc-go/pull/3205) Make event emission functions unexported.
* (apps/27-interchain-accounts, apps/transfer) [\#3253](https://github.com/cosmos/ibc-go/pull/3253) Rename `IsBound` to `HasCapability`.
* (apps/27-interchain-accounts, apps/transfer) [\#3303](https://github.com/cosmos/ibc-go/pull/3303) Make `HasCapability` private.
* [\#3856](https://github.com/cosmos/ibc-go/pull/3856) Add missing/remove unnecessary gogoproto directive.
* (apps/27-interchain-accounts) [\#3967](https://github.com/cosmos/ibc-go/pull/3967) Add encoding type as argument to ICA encoding/decoding functions.
* (core) [\#3867](https://github.com/cosmos/ibc-go/pull/3867) Remove unnecessary event attribute from INIT handshake msgs.
* (core/04-channel) [\#3806](https://github.com/cosmos/ibc-go/pull/3806) Remove unused `EventTypeTimeoutPacketOnClose`.
* (testing) [\#4018](https://github.com/cosmos/ibc-go/pull/4018) Allow failure expectations when using `chain.SendMsgs`.

### State Machine Breaking

* (apps/transfer, apps/27-interchain-accounts, apps/29-fee) [\#4992](https://github.com/cosmos/ibc-go/pull/4992) Set validation for length of string fields.

### Improvements

* [\#3304](https://github.com/cosmos/ibc-go/pull/3304) Remove unnecessary defer func statements.
* (apps/29-fee) [\#3054](https://github.com/cosmos/ibc-go/pull/3054) Add page result to ics29-fee queries.
* (apps/27-interchain-accounts, apps/transfer) [\#3077](https://github.com/cosmos/ibc-go/pull/3077) Add debug level logging for the error message which is discarded when generating a failed acknowledgement.
* (core/03-connection) [\#3244](https://github.com/cosmos/ibc-go/pull/3244) Clean up 03-connection msg validate basic test.
* (core/02-client) [\#3514](https://github.com/cosmos/ibc-go/pull/3514) Add check for the client status in `CreateClient`.
* (apps/29-fee) [\#4100](https://github.com/cosmos/ibc-go/pull/4100) Adding `MetadataFromVersion` to `29-fee` package `types`.
* (apps/29-fee) [\#4290](https://github.com/cosmos/ibc-go/pull/4290) Use `types.MetadataFromVersion` helper function for callback handlers.
* (core/04-channel) [\#4155](https://github.com/cosmos/ibc-go/pull/4155) Adding `IsOpen` and `IsClosed` methods to `Channel` type.
* (core/03-connection) [\#4110](https://github.com/cosmos/ibc-go/pull/4110) Remove `Version` interface and casting functions from 03-connection.
* (core) [\#4835](https://github.com/cosmos/ibc-go/pull/4835) Use expected interface for legacy params subspace parameter of keeper constructor functions.

### Features

* (capability) [\#3097](https://github.com/cosmos/ibc-go/pull/3097) Migrate capability module from Cosmos SDK to ibc-go.
* (core/02-client) [\#3640](https://github.com/cosmos/ibc-go/pull/3640) Migrate client params to be self managed.
* (core/03-connection) [\#3650](https://github.com/cosmos/ibc-go/pull/3650) Migrate connection params to be self managed.
* (apps/transfer) [\#3553](https://github.com/cosmos/ibc-go/pull/3553) Migrate transfer parameters to be self managed.
* (apps/27-interchain-accounts) [\#3590](https://github.com/cosmos/ibc-go/pull/3590) Migrate ica/controller parameters to be self managed.
* (apps/27-interchain-accounts) [\#3520](https://github.com/cosmos/ibc-go/pull/3520) Migrate ica/host params to be self managed.
* (apps/transfer) [\#3104](https://github.com/cosmos/ibc-go/pull/3104) Add metadata for IBC tokens.
* [\#4620](https://github.com/cosmos/ibc-go/pull/4620) Migrate to gov v1 via the additions of `MsgRecoverClient` and `MsgIBCSoftwareUpgrade`. The legacy proposal types `ClientUpdateProposal` and `UpgradeProposal` have been deprecated and will be removed in the next major release.

### Bug Fixes

* (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order query service RPCs to fix availability of denom traces endpoint when no args are provided.
* (core/04-channel) [\#3357](https://github.com/cosmos/ibc-go/pull/3357) Handle unordered channels in `NextSequenceReceive` query.
* (e2e) [\#3402](https://github.com/cosmos/ibc-go/pull/3402) Allow retries for messages signed by relayer.
* (core/04-channel) [\#3417](https://github.com/cosmos/ibc-go/pull/3417) Add missing query for next sequence send.
* (testing) [\#4630](https://github.com/cosmos/ibc-go/pull/4630) Update `testconfig` to use revision formatted chain IDs.
* (core/04-channel) [\#4706](https://github.com/cosmos/ibc-go/pull/4706) Retrieve correct next send sequence for packets in unordered channels.
* (core/02-client) [\#4746](https://github.com/cosmos/ibc-go/pull/4746) Register implementations against `govtypes.Content` interface.
* (apps/27-interchain-accounts) [\#4944](https://github.com/cosmos/ibc-go/pull/4944) Add missing proto interface registration.
* (core/02-client) [\#5020](https://github.com/cosmos/ibc-go/pull/5020) Fix expect pointer error when unmarshalling misbehaviour file.

### Documentation

* [\#3133](https://github.com/cosmos/ibc-go/pull/3133) Add linter for markdown documents.
* [\#4693](https://github.com/cosmos/ibc-go/pull/4693) Migrate docs to docusaurus.

### Testing

* [\#3138](https://github.com/cosmos/ibc-go/pull/3138) Use `testing.TB` instead of `testing.T` to support benchmarks and fuzz tests.
* [\#3980](https://github.com/cosmos/ibc-go/pull/3980) Change `sdk.Events` usage to `[]abci.Event` in the testing package.
* [\#3986](https://github.com/cosmos/ibc-go/pull/3986) Add function `RelayPacketWithResults`.
* [\#4182](https://github.com/cosmos/ibc-go/pull/4182) Return current validator set when requesting current height in `GetValsAtHeight`.
* [\#4319](https://github.com/cosmos/ibc-go/pull/4319) Fix in `TimeoutPacket` function to use counterparty `portID`/`channelID` in `GetNextSequenceRecv` query.
* [\#4180](https://github.com/cosmos/ibc-go/pull/4180) Remove unused function `simapp.SetupWithGenesisAccounts`.

### Miscellaneous Tasks

* (apps/27-interchain-accounts) [\#4677](https://github.com/cosmos/ibc-go/pull/4677) Remove ica store key.
* [\#4724](https://github.com/cosmos/ibc-go/pull/4724) Add `HasValidateBasic` compiler assertions to messages.
* [\#4725](https://github.com/cosmos/ibc-go/pull/4725) Add fzf selection for config files.
* [\#4741](https://github.com/cosmos/ibc-go/pull/4741) Panic with error.
* [\#3186](https://github.com/cosmos/ibc-go/pull/3186) Migrate all SDK errors to the new errors go module.
* [\#3216](https://github.com/cosmos/ibc-go/pull/3216) Modify `simapp` to fulfill the SDK `runtime.AppI` interface.
* [\#3290](https://github.com/cosmos/ibc-go/pull/3290) Remove `gogoproto` yaml tags from proto files.
* [\#3439](https://github.com/cosmos/ibc-go/pull/3439) Use nil pointer pattern to check for interface compliance.
* [\#3433](https://github.com/cosmos/ibc-go/pull/3433) Add tests for `acknowledgement.Acknowledgement()`.
* (core, apps/29-fee) [\#3462](https://github.com/cosmos/ibc-go/pull/3462) Add missing `nil` check and corresponding tests for query handlers.
* (light-clients/07-tendermint, light-clients/06-solomachine) [\#3571](https://github.com/cosmos/ibc-go/pull/3571) Delete unused `GetProofSpecs` functions.
* (core) [\#3616](https://github.com/cosmos/ibc-go/pull/3616) Add debug log for redundant relay.
* (core) [\#3892](https://github.com/cosmos/ibc-go/pull/3892) Add deprecated option to `create_localhost` field.
* (core) [\#3893](https://github.com/cosmos/ibc-go/pull/3893) Add deprecated option to `MsgSubmitMisbehaviour`.
* (apps/transfer, apps/29-fee) [\#4570](https://github.com/cosmos/ibc-go/pull/4570) Remove `GetSignBytes` from 29-fee and transfer msgs.
* [\#3630](https://github.com/cosmos/ibc-go/pull/3630) Add annotation to Msg service.

### State Machine Breaking

* (core/03-connection) [\#7128](https://github.com/cosmos/ibc-go/pull/7128) Remove verification of self client and consensus state from connection handshake.

### Dependencies

* [\#6943](https://github.com/cosmos/ibc-go/pull/6943) Update Cosmos SDK to v0.47.13.
### Features

* (apps/transfer) [\#6877](https://github.com/cosmos/ibc-go/pull/6877) Added the possibility to transfer the entire user balance of a particular denomination by using [__PROTECTED_0__](https://github.com/cosmos/ibc-go/blob/715f00eef8727da41db25fdd4763b709bdbba07e/modules/apps/transfer/types/transfer_authorization.go#L253-L255) as the token amount.

### Bug Fixes

### State Machine Breaking

* (apps/transfer, apps/27-interchain-accounts, apps/29-fee) [\#4992](https://github.com/cosmos/ibc-go/pull/4992) Set validation for length of string fields.

### Dependencies

* [\#6613](https://github.com/cosmos/ibc-go/pull/6613) Update Cosmos SDK to v0.47.12.

### Improvements

* (apps/27-interchain-accounts) [\#6436](https://github.com/cosmos/ibc-go/pull/6436) Refactor ICA host keeper instantiation method to avoid panic related to proto files.

### Improvements

* (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) Performance: Skip app callbacks during RecvPacket execution in checkTx within the redundant relay ante handler.
* (core/ante) [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip redundant proof checking in RecvPacket execution in reCheckTx within the redundant relay ante handler.
* (core/ante) [\#6306](https://github.com/cosmos/ibc-go/pull/6306) Performance: Skip misbehaviour checks in UpdateClient flow and skip signature checks in reCheckTx mode.

### Dependencies

* [\#6254](https://github.com/cosmos/ibc-go/pull/6254) Update Cosmos SDK to v0.47.11 and CometBFT to v0.37.5.

### State Machine Breaking

* (light-clients/07-tendermint) [\#6276](https://github.com/cosmos/ibc-go/pull/6276) Fix: No-op to avoid panicking on `UpdateState` for invalid misbehaviour submissions.

### Improvements

* (apps/27-interchain-accounts) [\#6147](https://github.com/cosmos/ibc-go/pull/6147) Emit an event signalling that the host submodule is disabled.
* (testing) [\#6180](https://github.com/cosmos/ibc-go/pull/6180) Add version to tm abci headers in ibctesting.
* (apps/27-interchain-accounts, apps/transfer, apps/29-fee) [\#6253](https://github.com/cosmos/ibc-go/pull/6253) Allow channel handshake to succeed if fee middleware is wired up on one side, but not the other.
* (apps/transfer) [\#6268](https://github.com/cosmos/ibc-go/pull/6268) Use memo strings instead of JSON keys in `AllowedPacketData` of transfer authorization.

### Features

* (apps/27-interchain-accounts) [\#5633](https://github.com/cosmos/ibc-go/pull/5633) Allow new ICA channels to use unordered ordering.
* (apps/27-interchain-accounts) [\#5785](https://github.com/cosmos/ibc-go/pull/5785) Introduce a new tx message that the ICA host submodule can use to query the chain (only queries marked with `module_query_safe`) and write the responses to the acknowledgement.

### Bug Fixes

* (apps/29-fee) [\#6255](https://github.com/cosmos/ibc-go/pull/6255) Delete already refunded fees from state if some fee(s) cannot be refunded on channel closure.

### Dependencies

* [\#5717](https://github.com/cosmos/ibc-go/pull/5717) Update Cosmos SDK to v0.47.8 and CometBFT to v0.37.4.

### Improvements

* (core) [\#5541](https://github.com/cosmos/ibc-go/pull/5541) Enable emission of events on erroneous IBC application callbacks by appending a prefix to all event type and attribute keys.

### Bug Fixes

* (apps/27-interchain-accounts) [\#4944](https://github.com/cosmos/ibc-go/pull/4944) Add missing proto interface registration.

### Dependencies

* [\#4539](https://github.com/cosmos/ibc-go/pull/4539) Update Cosmos SDK to v0.47.5.

### Improvements

* (apps/27-interchain-accounts) [\#4537](https://github.com/cosmos/ibc-go/pull/4537) Add argument to `generate-packet-data` cli to choose the encoding format for the messages in the ICA packet data.
### Bug Fixes

* (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order query service RPCs to fix availability of denom traces endpoint when no args are provided.

### Dependencies

* [\#4122](https://github.com/cosmos/ibc-go/pull/4122) Update Cosmos SDK to v0.47.4.

### Improvements

* [\#4187](https://github.com/cosmos/ibc-go/pull/4187) Adds function `WithICS4Wrapper` to keepers to allow setting the middleware after the keeper's creation.
* (light-clients/06-solomachine) [\#4429](https://github.com/cosmos/ibc-go/pull/4429) Remove IBC key from path of bytes signed by solomachine and do not escape the path.

### Features

* (apps/27-interchain-accounts) [\#3796](https://github.com/cosmos/ibc-go/pull/3796) Adds support for json tx encoding for interchain accounts.
* [\#4188](https://github.com/cosmos/ibc-go/pull/4188) Adds optional `PacketDataUnmarshaler` interface that allows a middleware to request the packet data to be unmarshaled by the base application.
* [\#4199](https://github.com/cosmos/ibc-go/pull/4199) Adds optional `PacketDataProvider` interface for retrieving custom packet data stored on behalf of another application.
* [\#4200](https://github.com/cosmos/ibc-go/pull/4200) Adds optional `PacketData` interface which an application's packet data may implement.

### Bug Fixes

* (04-channel) [\#4476](https://github.com/cosmos/ibc-go/pull/4476) Use UTC time in log messages for packet timeout error.
* (testing) [\#4483](https://github.com/cosmos/ibc-go/pull/4483) Use the correct revision height when querying trusted validator set.

### Dependencies

* [\#5716](https://github.com/cosmos/ibc-go/pull/5716) Update Cosmos SDK to v0.47.8 and CometBFT to v0.37.4.

### Improvements

* (core) [\#5541](https://github.com/cosmos/ibc-go/pull/5541) Enable emission of events on erroneous IBC application callbacks by appending a prefix to all event type and attribute keys.
### Dependencies

* [\#4539](https://github.com/cosmos/ibc-go/pull/4539) Update Cosmos SDK to v0.47.5.

### Bug Fixes

* (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order query service RPCs to fix availability of denom traces endpoint when no args are provided.

### Bug Fixes

* (04-channel) [\#4476](https://github.com/cosmos/ibc-go/pull/4476) Use UTC time in log messages for packet timeout error.
* (testing) [\#4483](https://github.com/cosmos/ibc-go/pull/4483) Use the correct revision height when querying trusted validator set.

### Dependencies

* [\#3810](https://github.com/cosmos/ibc-go/pull/3810) Update Cosmos SDK to v0.47.3.
* [\#3862](https://github.com/cosmos/ibc-go/pull/3862) Update CometBFT to v0.37.2.

### State Machine Breaking

* [\#3907](https://github.com/cosmos/ibc-go/pull/3907) Re-implemented missing functions of `LegacyMsg` interface to fix transaction signing with ledger.

### Dependencies

* [\#3542](https://github.com/cosmos/ibc-go/pull/3542) Update Cosmos SDK to v0.47.2 and CometBFT to v0.37.1.
* [\#3457](https://github.com/cosmos/ibc-go/pull/3457) Update to ics23 v0.10.0.

### Improvements

* (apps/transfer) [\#3454](https://github.com/cosmos/ibc-go/pull/3454) Support transfer authorization unlimited spending when the max `uint256` value is provided as limit.

### Features

* (light-clients/09-localhost) [\#3229](https://github.com/cosmos/ibc-go/pull/3229) Implementation of v2 of localhost loopback client.
* (apps/transfer) [\#3019](https://github.com/cosmos/ibc-go/pull/3019) Add state entry to keep track of total amount of tokens in escrow.

### Bug Fixes

* (core/04-channel) [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly handle ordered channels in `UnreceivedPackets` query.
* (core/04-channel) + [\#3593](https://github.com/cosmos/ibc-go/pull/3593) `SendPacket` now + correctly returns `ErrClientNotFound` in favour of + `ErrConsensusStateNotFound`. + + + + ### Bug Fixes * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly + handle ordered channels in `UnreceivedPackets` query. + + + + ### Dependencies * [\#2672](https://github.com/cosmos/ibc-go/issues/2672) + Update to cosmos-sdk v0.47. * + [\#3175](https://github.com/cosmos/ibc-go/issues/3175) Migrate to cometbft + v0.37. ### API Breaking * (core) + [\#2897](https://github.com/cosmos/ibc-go/pull/2897) Remove legacy migrations + required for upgrading from Stargate release line to ibc-go >= v1.x.x. * + (core/02-client) [\#2856](https://github.com/cosmos/ibc-go/pull/2856) Rename + `IterateClients` to `IterateClientStates`. The function now takes a prefix + argument which may be used for prefix iteration over the client store. * + (light-clients/tendermint)[\#1768](https://github.com/cosmos/ibc-go/pull/1768) + Removed `AllowUpdateAfterExpiry`, `AllowUpdateAfterMisbehaviour` booleans as + they are deprecated (see ADR026) * (06-solomachine) + [\#1679](https://github.com/cosmos/ibc-go/pull/1679) Remove `types` + sub-package from `06-solomachine` lightclient directory. * (07-tendermint) + [\#1677](https://github.com/cosmos/ibc-go/pull/1677) Remove `types` + sub-package from `07-tendermint` lightclient directory. * (06-solomachine) + [\#1687](https://github.com/cosmos/ibc-go/pull/1687) Bump `06-solomachine` + protobuf version from `v2` to `v3`. * (06-solomachine) + [\#1687](https://github.com/cosmos/ibc-go/pull/1687) Removed `DataType` enum + and associated message types from `06-solomachine`. `DataType` has been + removed from `SignBytes` and `SignatureAndData` in favour of `path`. * + (02-client) [\#598](https://github.com/cosmos/ibc-go/pull/598) The client + state and consensus state return value has been removed from + `VerifyUpgradeAndUpdateState`. 
Light client implementations must update the + client state and consensus state after verifying a valid client upgrade. * + (06-solomachine) [\#1100](https://github.com/cosmos/ibc-go/pull/1100) Remove + `GetClientID` function from 06-solomachine `Misbehaviour` type. * + (06-solomachine) [\#1100](https://github.com/cosmos/ibc-go/pull/1100) + Deprecate `ClientId` field in 06-solomachine `Misbehaviour` type. * + (07-tendermint) [\#1097](https://github.com/cosmos/ibc-go/pull/1097) Remove + `GetClientID` function from 07-tendermint `Misbehaviour` type. * + (07-tendermint) [\#1097](https://github.com/cosmos/ibc-go/pull/1097) Deprecate + `ClientId` field in 07-tendermint `Misbehaviour` type. * + (modules/core/exported) [\#1107](https://github.com/cosmos/ibc-go/pull/1107) + Merging the `Header` and `Misbehaviour` interfaces into a single + `ClientMessage` type. * + (06-solomachine)[\#1906](https://github.com/cosmos/ibc-go/pull/1906/files) + Removed `AllowUpdateAfterProposal` boolean as it has been deprecated (see + 01_concepts of the solo machine spec for more details). * (07-tendermint) + [\#1896](https://github.com/cosmos/ibc-go/pull/1896) Remove error return from + `IterateConsensusStateAscending` in `07-tendermint`. * + (apps/27-interchain-accounts) + [\#2638](https://github.com/cosmos/ibc-go/pull/2638) Interchain accounts host + and controller Keepers now expect a keeper which fulfills the expected + `exported.ScopedKeeper` interface for the capability keeper. * + (06-solomachine) [\#2761](https://github.com/cosmos/ibc-go/pull/2761) Removed + deprecated `ClientId` field from `Misbehaviour` and + `allow_update_after_proposal` field from `ClientState`. * (apps) + [\#3154](https://github.com/cosmos/ibc-go/pull/3154) Remove unused + `ProposalContents` function. * (apps) + [\#3149](https://github.com/cosmos/ibc-go/pull/3149) Remove legacy interface + function `RandomizedParams`, which is no longer used.
* + (light-clients/06-solomachine) + [\#2941](https://github.com/cosmos/ibc-go/pull/2941) Remove solomachine header + sequence. * (core) [\#2982](https://github.com/cosmos/ibc-go/pull/2982) Moved + the ibc module name into the exported package. ### State Machine Breaking * + (06-solomachine) [\#2744](https://github.com/cosmos/ibc-go/pull/2744) + `Misbehaviour.ValidateBasic()` now only enforces that signature data does not + match when the signature paths are different. * (06-solomachine) + [\#2748](https://github.com/cosmos/ibc-go/pull/2748) Adding sentinel value for + header path in 06-solomachine. * (apps/29-fee) + [\#2942](https://github.com/cosmos/ibc-go/pull/2942) Check `x/bank` send + enabled before escrowing fees. * (core/04-channel) + [\#3009](https://github.com/cosmos/ibc-go/pull/3009) Change check to disallow + optimistic sends. ### Improvements * (core) + [\#3082](https://github.com/cosmos/ibc-go/pull/3082) Add `HasConnection` and + `HasChannel` methods. * (tests) + [\#2926](https://github.com/cosmos/ibc-go/pull/2926) Lint tests * + (apps/transfer) [\#2643](https://github.com/cosmos/ibc-go/pull/2643) Add + amount, denom, and memo to transfer event emission. * (core) + [\#2746](https://github.com/cosmos/ibc-go/pull/2746) Allow proof height to be + zero for all core IBC `sdk.Msg` types that contain proofs. * + (light-clients/06-solomachine) + [\#2746](https://github.com/cosmos/ibc-go/pull/2746) Discard proofHeight for + solo machines and use the solo machine sequence instead. * + (modules/light-clients/07-tendermint) + [\#1713](https://github.com/cosmos/ibc-go/pull/1713) Allow client upgrade + proposals to update `TrustingPeriod`. See ADR-026 for context. * + (modules/core/02-client) + [\#1188](https://github.com/cosmos/ibc-go/pull/1188/files) Routing + `MsgSubmitMisbehaviour` to `UpdateClient` keeper function. Deprecating + `SubmitMisbehaviour` endpoint. 
* (modules/core/02-client) + [\#1208](https://github.com/cosmos/ibc-go/pull/1208) Replace + `CheckHeaderAndUpdateState` usage in 02-client with calls to + `VerifyClientMessage`, `CheckForMisbehaviour`, `UpdateStateOnMisbehaviour` and + `UpdateState`. * (modules/light-clients/09-localhost) + [\#1187](https://github.com/cosmos/ibc-go/pull/1187/) Removing localhost light + client implementation as it is not functional. An upgrade handler is provided + in `modules/migrations/v5` to prune `09-localhost` clients and consensus + states from the store. * (modules/core/02-client) + [\#1186](https://github.com/cosmos/ibc-go/pull/1186) Removing `GetRoot` + function from ConsensusState interface in `02-client`. `GetRoot` is unused by + core IBC. * (modules/core/02-client) + [\#1196](https://github.com/cosmos/ibc-go/pull/1196) Adding + VerifyClientMessage to ClientState interface. * (modules/core/02-client) + [\#1198](https://github.com/cosmos/ibc-go/pull/1198) Adding + UpdateStateOnMisbehaviour to ClientState interface. * (modules/core/02-client) + [\#1170](https://github.com/cosmos/ibc-go/pull/1170) Updating + `ClientUpdateProposal` to set client state in lightclient implementations + `CheckSubstituteAndUpdateState` methods. * (modules/core/02-client) + [\#1197](https://github.com/cosmos/ibc-go/pull/1197) Adding + `CheckForMisbehaviour` to `ClientState` interface. * (modules/core/02-client) + [\#1210](https://github.com/cosmos/ibc-go/pull/1210) Removing + `CheckHeaderAndUpdateState` from `ClientState` interface & associated light + client implementations. * (modules/core/02-client) + [\#1212](https://github.com/cosmos/ibc-go/pull/1212) Removing + `CheckMisbehaviourAndUpdateState` from `ClientState` interface & associated + light client implementations. * (modules/core/exported) + [\#1206](https://github.com/cosmos/ibc-go/pull/1206) Adding new method + `UpdateState` to `ClientState` interface. 
* (modules/core/02-client) + [\#1741](https://github.com/cosmos/ibc-go/pull/1741) Emitting a new + `upgrade_chain` event upon setting upgrade consensus state. * (client) + [\#724](https://github.com/cosmos/ibc-go/pull/724) `IsRevisionFormat` and + `IsClientIDFormat` have been updated to disallow newlines before the dash used + to separate the chainID and revision number, and the client type and client + sequence. * (02-client/cli) [\#897](https://github.com/cosmos/ibc-go/pull/897) + Remove `GetClientID()` from `Misbehaviour` interface. Submit client + misbehaviour cli command requires an explicit client id now. * + (06-solomachine) [\#1972](https://github.com/cosmos/ibc-go/pull/1972) Solo + machine implementation of `ZeroCustomFields` fn now panics as the fn is only + used for upgrades which solo machine does not support. * + (light-clients/06-solomachine) Moving `verifyMisbehaviour` function from + update.go to misbehaviour_handle.go. * + [\#2434](https://github.com/cosmos/ibc-go/pull/2478) Removed all `TypeMsg` + constants * (modules/core/exported) + [\#2539](https://github.com/cosmos/ibc-go/pull/2539) Removing `GetVersions` + from `ConnectionI` interface. * (core/02-connection) + [\#2419](https://github.com/cosmos/ibc-go/pull/2419) Add optional proof data + to proto definitions of `MsgConnectionOpenTry` and `MsgConnectionOpenAck` for + host state machines that are unable to introspect their own consensus state. * + (light-clients/07-tendermint) + [\#3046](https://github.com/cosmos/ibc-go/pull/3046) Moved non-verification + misbehaviour checks to `CheckForMisbehaviour`. * (apps/29-fee) + [\#2975](https://github.com/cosmos/ibc-go/pull/2975) Adding distribute fee + events to ics29. * (light-clients/07-tendermint) + [\#2965](https://github.com/cosmos/ibc-go/pull/2965) Prune expired + `07-tendermint` consensus states on duplicate header updates. 
* + (light-clients) [\#2736](https://github.com/cosmos/ibc-go/pull/2736) Updating + `VerifyMembership` and `VerifyNonMembership` methods to use `Path` interface. + * (light-clients) [\#3113](https://github.com/cosmos/ibc-go/pull/3113) Align + light client module names. ### Features * (apps/transfer) + [\#3079](https://github.com/cosmos/ibc-go/pull/3079) Added authz support for + ics20. * (core/02-client) [\#2824](https://github.com/cosmos/ibc-go/pull/2824) + Add genesis migrations for v6 to v7. The migration migrates the solo machine + client state definition, removes all solo machine consensus states and removes + the localhost client. * (core/24-host) + [\#2856](https://github.com/cosmos/ibc-go/pull/2856) Add + `PrefixedClientStorePath` and `PrefixedClientStoreKey` functions to 24-host * + (core/02-client) [\#2819](https://github.com/cosmos/ibc-go/pull/2819) Add + automatic in-place store migrations to remove the localhost client and migrate + existing solo machine definitions. * (light-clients/06-solomachine) + [\#2826](https://github.com/cosmos/ibc-go/pull/2826) Add `AppModuleBasic` for + the 06-solomachine client and remove solo machine type registration from core + IBC. Chains must register the `AppModuleBasic` of light clients. * + (light-clients/07-tendermint) + [\#2825](https://github.com/cosmos/ibc-go/pull/2825) Add `AppModuleBasic` for + the 07-tendermint client and remove tendermint type registration from core + IBC. Chains must register the `AppModuleBasic` of light clients. * + (light-clients/07-tendermint) + [\#2800](https://github.com/cosmos/ibc-go/pull/2800) Add optional in-place + store migration function to prune all expired tendermint consensus states. * + (core/24-host) [\#2820](https://github.com/cosmos/ibc-go/pull/2820) Add + `MustParseClientStatePath` which parses the clientID from a client state key + path. 
* (testing/simapp) [\#2842](https://github.com/cosmos/ibc-go/pull/2842) + Adding the new upgrade handler for v6 -> v7 to simapp which prunes expired + Tendermint consensus states. * (testing) + [\#2829](https://github.com/cosmos/ibc-go/pull/2829) Add `AssertEvents` which + asserts events against expected event map. ### Bug Fixes * (testing) + [\#3295](https://github.com/cosmos/ibc-go/pull/3295) The function + `SetupWithGenesisValSet` will set the baseapp chainID before running + `InitChain` * (light-clients/solomachine) + [\#1839](https://github.com/cosmos/ibc-go/pull/1839) Fixed usage of the new + diversifier in validation of changing diversifiers for the solo machine. The + current diversifier must sign over the new diversifier. * + (light-clients/07-tendermint) + [\#1674](https://github.com/cosmos/ibc-go/pull/1674) Submitted ClientState is + zeroed out before checking the proof in order to prevent the proposal from + containing information governance is not actually voting on. * + (modules/core/02-client)[\#1676](https://github.com/cosmos/ibc-go/pull/1676) + ClientState must be zeroed out for `UpgradeProposals` to pass validation. This + prevents a proposal containing information governance is not actually voting + on. * (core/02-client) [\#2510](https://github.com/cosmos/ibc-go/pull/2510) + Fix client ID validation regex to conform closer to spec. * (apps/transfer) + [\#3045](https://github.com/cosmos/ibc-go/pull/3045) Allow value with slashes + in URL template. * (apps/27-interchain-accounts) + [\#2601](https://github.com/cosmos/ibc-go/pull/2601) Remove bech32 check from + owner address on ICA controller msgs RegisterInterchainAccount and SendTx. * + (apps/transfer) [\#2651](https://github.com/cosmos/ibc-go/pull/2651) Skip + emission of unpopulated memo field in ics20. * (apps/27-interchain-accounts) + [\#2682](https://github.com/cosmos/ibc-go/pull/2682) Avoid race conditions in + ics27 handshakes. 
* (light-clients/06-solomachine) + [\#2741](https://github.com/cosmos/ibc-go/pull/2741) Added check for empty + path in 06-solomachine. * (light-clients/07-tendermint) + [\#3022](https://github.com/cosmos/ibc-go/pull/3022) Correctly close iterator + in `07-tendermint` store. * (core/02-client) + [\#3010](https://github.com/cosmos/ibc-go/pull/3010) Update `Paginate` to use + `FilterPaginate` in `ClientStates` and `ConnectionChannels` grpc queries. + + + + ### Bug Fixes * (apps/transfer) + [\#3045](https://github.com/cosmos/ibc-go/pull/3045) allow value with slashes + in URL template for `denom_traces` and `denom_hashes` queries. * + (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order + query service RPCs to fix availability of denom traces endpoint when no args + are provided. + + + + ### Dependencies * [\#3393](https://github.com/cosmos/ibc-go/pull/3393) Bump + Cosmos SDK to v0.46.12 and replace Tendermint with CometBFT v0.34.37. ### + Improvements * (core) [\#3082](https://github.com/cosmos/ibc-go/pull/3082) Add + `HasConnection` and `HasChannel` methods. * (apps/transfer) + [\#3454](https://github.com/cosmos/ibc-go/pull/3454) Support transfer + authorization unlimited spending when the max `uint256` value is provided as + limit. ### Features * [\#3079](https://github.com/cosmos/ibc-go/pull/3079) Add + authz support for ics20. ### Bug Fixes * + [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly handle ordered + channels in `UnreceivedPackets` query. + + + + ### Bug Fixes * (apps/transfer) + [\#3045](https://github.com/cosmos/ibc-go/pull/3045) allow value with slashes + in URL template for `denom_traces` and `denom_hashes` queries. * + (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order + query service RPCs to fix availability of denom traces endpoint when no args + are provided. 
+ + + + ### Bug Fixes * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly + handle ordered channels in `UnreceivedPackets` query. + + + + ### Dependencies * [\#2945](https://github.com/cosmos/ibc-go/pull/2945) Bump + Cosmos SDK to v0.46.7 and Tendermint to v0.34.24. ### State Machine Breaking * + (apps/29-fee) [\#2942](https://github.com/cosmos/ibc-go/pull/2942) Check + `x/bank` send enabled before escrowing fees. + + + + ### Dependencies * [\#2868](https://github.com/cosmos/ibc-go/pull/2868) Bump + ICS 23 to v0.9.0. * [\#2458](https://github.com/cosmos/ibc-go/pull/2458) Bump + Cosmos SDK to v0.46.2 * [\#2784](https://github.com/cosmos/ibc-go/pull/2784) + Bump Cosmos SDK to v0.46.6 and Tendermint to v0.34.23. ### API Breaking * + (apps/27-interchain-accounts) + [\#2607](https://github.com/cosmos/ibc-go/pull/2607) `SerializeCosmosTx` now + takes in a `[]proto.Message` instead of `[]sdk.Msg`. * (apps/transfer) + [\#2446](https://github.com/cosmos/ibc-go/pull/2446) Remove `SendTransfer` + function in favor of a private `sendTransfer` function. All IBC transfers must + be initiated with `MsgTransfer`. * (apps/29-fee) + [\#2395](https://github.com/cosmos/ibc-go/pull/2395) Remove param space from + ics29 NewKeeper function. The field was unused. * + (apps/27-interchain-accounts) + [\#2133](https://github.com/cosmos/ibc-go/pull/2133) Generates genesis protos + in a separate directory to avoid circular import errors. The protobuf package + name has changed for the genesis types. * (apps/27-interchain-accounts) + [\#2638](https://github.com/cosmos/ibc-go/pull/2638) Interchain accounts host + and controller Keepers now expect a keeper which fulfills the expected + `exported.ScopedKeeper` interface for the capability keeper. * (transfer) + [\#2638](https://github.com/cosmos/ibc-go/pull/2638) Transfer Keeper now + expects a keeper which fulfills the expected `exported.ScopedKeeper` interface + for the capability keeper.
* (05-port) + [\#2638](https://github.com/cosmos/ibc-go/pull/2638) Port Keeper now expects a + keeper which fulfills the expected `exported.ScopedKeeper` interface for the + capability keeper. * (04-channel) + [\#2638](https://github.com/cosmos/ibc-go/pull/2638) Channel Keeper now + expects a keeper which fulfills the expected `exported.ScopedKeeper` interface + for the capability keeper. * + (core/04-channel)[\#1703](https://github.com/cosmos/ibc-go/pull/1703) Update + `SendPacket` API to take in necessary arguments and construct rest of packet + rather than taking in entire packet. The generated packet sequence is returned + by the `SendPacket` function. * (modules/apps/27-interchain-accounts) + [\#2433](https://github.com/cosmos/ibc-go/pull/2450) Renamed + icatypes.PortPrefix to icatypes.ControllerPortPrefix & icatypes.PortID to + icatypes.HostPortID * (testing) + [\#2567](https://github.com/cosmos/ibc-go/pull/2567) Modify `SendPacket` API + of `Endpoint` to match the API of `SendPacket` in 04-channel. ### State + Machine Breaking * (apps/transfer) + [\#2651](https://github.com/cosmos/ibc-go/pull/2651) Introduce + `mustProtoMarshalJSON` for ics20 packet data marshalling which will skip + emission (marshalling) of the memo field if unpopulated (empty). * + (27-interchain-accounts) [\#2590](https://github.com/cosmos/ibc-go/pull/2590) + Removing port prefix requirement from the ICA host channel handshake * + (transfer) [\#2377](https://github.com/cosmos/ibc-go/pull/2377) Adding + `sequence` to `MsgTransferResponse`. * (light-clients/07-tendermint) + [\#2555](https://github.com/cosmos/ibc-go/pull/2555) Forbid negative values + for `TrustingPeriod`, `UnbondingPeriod` and `MaxClockDrift` (as specified in + ICS-07). * (core/04-channel) + [\#2973](https://github.com/cosmos/ibc-go/pull/2973) Write channel state + before invoking app callbacks in ack and confirm channel handshake steps. 
### + Improvements * (apps/27-interchain-accounts) + [\#2134](https://github.com/cosmos/ibc-go/pull/2134) Adding upgrade handler to + ICS27 `controller` submodule for migration of channel capabilities. This + upgrade handler migrates ownership of channel capabilities from the underlying + application to the ICS27 `controller` submodule. * + (apps/27-interchain-accounts) + [\#2102](https://github.com/cosmos/ibc-go/pull/2102) ICS27 controller + middleware now supports a nil underlying application. This allows chains to + make use of interchain accounts with existing auth mechanisms such as x/group + and x/gov. * (apps/27-interchain-accounts) + [\#2157](https://github.com/cosmos/ibc-go/pull/2157) Adding + `IsMiddlewareEnabled` functionality to enforce calls to ICS27 msg server to + *not* route to the underlying application. * (apps/27-interchain-accounts) + [\#2146](https://github.com/cosmos/ibc-go/pull/2146) ICS27 controller now + claims the channel capability passed via ibc core, and passes `nil` to the + underlying app callback. The channel capability arg in `SendTx` is now ignored + and looked up internally. * (apps/27-interchain-accounts) + [\#2177](https://github.com/cosmos/ibc-go/pull/2177) Adding + `IsMiddlewareEnabled` flag to interchain accounts `ActiveChannel` genesis + type. * (apps/27-interchain-accounts) + [\#2140](https://github.com/cosmos/ibc-go/pull/2140) Adding migration handler + to ICS27 `controller` submodule to assert ownership of channel capabilities + and set middleware enabled flag for existing channels. The ICS27 module + consensus version has been bumped from 1 to 2. * (core/04-channel) + [\#2304](https://github.com/cosmos/ibc-go/pull/2304) Adding + `GetAllChannelsWithPortPrefix` function which filters channels based on a + provided port prefix. 
* (apps/27-interchain-accounts) + [\#2248](https://github.com/cosmos/ibc-go/pull/2248) Adding call to underlying + app in `OnChanCloseConfirm` callback of the controller submodule and adding + relevant unit tests. * (apps/27-interchain-accounts) + [\#2251](https://github.com/cosmos/ibc-go/pull/2251) Adding `msgServer` struct + to controller submodule that embeds the `Keeper` struct. * + (apps/27-interchain-accounts) + [\#2290](https://github.com/cosmos/ibc-go/pull/2290) Changed `DefaultParams` + function in `host` submodule to allow all messages by default. Defined a + constant named `AllowAllHostMsgs` for `host` module to keep wildcard "*" + string which allows all messages. * (apps/27-interchain-accounts) + [\#2297](https://github.com/cosmos/ibc-go/pull/2297) Adding cli command to + generate ICS27 packet data. * (modules/core/keeper) + [\#1728](https://github.com/cosmos/ibc-go/pull/2399) Updated channel callback + errors to include portID & channelID for better identification of errors. * + (testing) [\#2657](https://github.com/cosmos/ibc-go/pull/2657) Carry + `ProposerAddress` through committed blocks. Allow `DefaultGenTxGas` to be + modified. * (core/03-connection) + [\#2745](https://github.com/cosmos/ibc-go/pull/2745) Adding `ConnectionParams` + grpc query and CLI to 03-connection. * (apps/29-fee) + [\#2786](https://github.com/cosmos/ibc-go/pull/2786) Save gas by checking key + existence with `KVStore`'s `Has` method. ### Features * + (apps/27-interchain-accounts) + [\#2147](https://github.com/cosmos/ibc-go/pull/2147) Adding a `SubmitTx` gRPC + endpoint for the ICS27 Controller module which allows owners of interchain + accounts to submit transactions. This replaces the previously existing need + for authentication modules to implement this standard functionality. * + (testing/simapp) [\#2190](https://github.com/cosmos/ibc-go/pull/2190) Adding + the new `x/group` cosmos-sdk module to simapp. 
* (apps/transfer) + [\#2595](https://github.com/cosmos/ibc-go/pull/2595) Adding optional memo + field to `FungibleTokenPacketData` and `MsgTransfer`. ### Bug Fixes * + (modules/core/keeper) [\#2403](https://github.com/cosmos/ibc-go/pull/2403) + Added a function in keeper to cater for blank pointers. * (apps/transfer) + [\#2679](https://github.com/cosmos/ibc-go/pull/2679) Check `x/bank` send + enabled. * (modules/core/keeper) + [\#2745](https://github.com/cosmos/ibc-go/pull/2745) Fix request wiring for + `UpgradedConsensusState` in core query server. + + + + ### Bug Fixes * (apps/transfer) + [\#3045](https://github.com/cosmos/ibc-go/pull/3045) allow value with slashes + in URL template for `denom_traces` and `denom_hashes` queries. * + (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order + query service RPCs to fix availability of denom traces endpoint when no args + are provided. + + + + ### Bug Fixes * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly + handle ordered channels in `UnreceivedPackets` query. + + + + ### Dependencies * [\#3354](https://github.com/cosmos/ibc-go/pull/3354) Bump + Cosmos SDK to v0.46.12 and replace Tendermint with CometBFT v0.34.27. + + + + ### Bug Fixes * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly + handle ordered channels in `UnreceivedPackets` query. + + + + ### Dependencies * [\#2868](https://github.com/cosmos/ibc-go/pull/2868) Bump + ICS 23 to v0.9.0. * [\#2944](https://github.com/cosmos/ibc-go/pull/2944) Bump + Cosmos SDK to v0.46.7 and Tendermint to v0.34.24. ### State Machine Breaking * + (apps/29-fee) [\#2942](https://github.com/cosmos/ibc-go/pull/2942) Check + `x/bank` send enabled before escrowing fees. ### Improvements * (apps/29-fee) + [\#2786](https://github.com/cosmos/ibc-go/pull/2786) Save gas by checking key + existence with `KVStore`'s `Has` method. 
+ + + + ### Dependencies * [\#2647](https://github.com/cosmos/ibc-go/pull/2647) Bump + Cosmos SDK to v0.46.4 and Tendermint to v0.34.22. ### State Machine Breaking * + (apps/transfer) [\#2651](https://github.com/cosmos/ibc-go/pull/2651) Introduce + `mustProtoMarshalJSON` for ics20 packet data marshalling which will skip + emission (marshalling) of the memo field if unpopulated (empty). * + (27-interchain-accounts) [\#2590](https://github.com/cosmos/ibc-go/pull/2590) + Removing port prefix requirement from the ICA host channel handshake * + (transfer) [\#2377](https://github.com/cosmos/ibc-go/pull/2377) Adding + `sequence` to `MsgTransferResponse`. ### Improvements * (testing) + [\#2657](https://github.com/cosmos/ibc-go/pull/2657) Carry `ProposerAddress` + through committed blocks. Allow `DefaultGenTxGas` to be modified. ### Features + * (apps/transfer) [\#2595](https://github.com/cosmos/ibc-go/pull/2595) Adding + optional memo field to `FungibleTokenPacketData` and `MsgTransfer`. ### Bug + Fixes * (apps/transfer) [\#2679](https://github.com/cosmos/ibc-go/pull/2679) + Check `x/bank` send enabled. + + + + ### Dependencies * [\#2623](https://github.com/cosmos/ibc-go/pull/2623) Bump + SDK version to v0.46.3 and Tendermint version to v0.34.22. + + + + ### Dependencies * [\#1653](https://github.com/cosmos/ibc-go/pull/1653) Bump + SDK version to v0.46 * [\#2124](https://github.com/cosmos/ibc-go/pull/2124) + Bump SDK version to v0.46.1 ### API Breaking * + (testing)[\#2028](https://github.com/cosmos/ibc-go/pull/2028) New interface + `ibctestingtypes.StakingKeeper` added and set for the testing app + `StakingKeeper` setup. * (core/04-channel) + [\#1418](https://github.com/cosmos/ibc-go/pull/1418) `NewPacketId` has been + renamed to `NewPacketID` to comply with go linting rules. 
* (core/ante) + [\#1418](https://github.com/cosmos/ibc-go/pull/1418) `AnteDecorator` has been + renamed to `RedundancyDecorator` to comply with go linting rules and to give + more clarity to the purpose of the Decorator. * (core/ante) + [\#1820](https://github.com/cosmos/ibc-go/pull/1820) `RedundancyDecorator` has + been renamed to `RedundantRelayDecorator` to make the name more explicit. * + (testing) [\#1418](https://github.com/cosmos/ibc-go/pull/1418) `MockIBCApp` + has been renamed to `IBCApp` and `MockEmptyAcknowledgement` has been renamed + to `EmptyAcknowledgement` to comply with go linting rules. * + (apps/27-interchain-accounts) + [\#2058](https://github.com/cosmos/ibc-go/pull/2058) Added `MessageRouter` + interface and replaced `*baseapp.MsgServiceRouter` with it. The controller and + host keepers of apps/27-interchain-accounts have been updated to use it. * + (apps/27-interchain-accounts)[\#2302](https://github.com/cosmos/ibc-go/pull/2302) + Handle unwrapping of channel version in interchain accounts channel reopening + handshake flow. The `host` submodule `Keeper` now requires an `ICS4Wrapper` + similarly to the `controller` submodule. ### Improvements * + (27-interchain-accounts) [\#1352](https://github.com/cosmos/ibc-go/pull/1352) + Add support for Cosmos-SDK simulation to ics27 module. * (linting) + [\#1418](https://github.com/cosmos/ibc-go/pull/1418) Fix linting errors, + resulting in compatibility with go1.18 linting style, golangci-lint 1.46.2 and + the revive linter. This caused breaking changes in core/04-channel, + core/ante, and the testing library. ### Features * + (apps/27-interchain-accounts) + [\#2193](https://github.com/cosmos/ibc-go/pull/2193) Adding + `InterchainAccount` gRPC query endpoint to ICS27 `controller` submodule to + allow users to retrieve registered interchain account addresses.
### Bug Fixes + * (27-interchain-accounts) + [\#2308](https://github.com/cosmos/ibc-go/pull/2308) Nil checks have been + added to ensure services are not registered for nil host or controller + keepers. * (makefile) [\#1785](https://github.com/cosmos/ibc-go/pull/1785) + Fetch the correct versions of protocol buffers dependencies from tendermint, + cosmos-sdk, and ics23. * + (modules/core/04-channel)[\#1919](https://github.com/cosmos/ibc-go/pull/1919) + Fixed formatting of sequence for packet "acknowledgement written" logs. + + + + ### Bug Fixes * (apps/transfer) + [\#3045](https://github.com/cosmos/ibc-go/pull/3045) allow value with slashes + in URL template for `denom_traces` and `denom_hashes` queries. * + (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order + query service RPCs to fix availability of denom traces endpoint when no args + are provided. + + + + ### Dependencies * [\#4738](https://github.com/cosmos/ibc-go/pull/4738) Bump + Cosmos SDK to v0.45.16. * [\#4782](https://github.com/cosmos/ibc-go/pull/4782) + Bump ics23 to v0.9.1. + + + + ### Bug Fixes * (apps/transfer) + [\#3045](https://github.com/cosmos/ibc-go/pull/3045) allow value with slashes + in URL template for `denom_traces` and `denom_hashes` queries. * + (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order + query service RPCs to fix availability of denom traces endpoint when no args + are provided. + + + + ### Bug Fixes * [\#3662](https://github.com/cosmos/ibc-go/pull/3662) Retract + v4.1.2 and v4.2.1. + + + + ### Bug Fixes * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly + handle ordered channels in `UnreceivedPackets` query. + + + + ### Dependencies * [\#3416](https://github.com/cosmos/ibc-go/pull/3416) Bump + Cosmos SDK to v0.45.15 and replace Tendermint with CometBFT v0.34.27. + + + + ### Bug Fixes * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly + handle ordered channels in `UnreceivedPackets` query. 
+ + + + ### Dependencies * [\#3049](https://github.com/cosmos/ibc-go/pull/3049) Bump + Cosmos SDK to v0.45.12. * [\#2868](https://github.com/cosmos/ibc-go/pull/2868) + Bump ics23 to v0.9.0. ### State Machine Breaking * (core/04-channel) + [\#2973](https://github.com/cosmos/ibc-go/pull/2973) Write channel state + before invoking app callbacks in ack and confirm channel handshake steps. ### + Improvements * (apps/29-fee) + [\#2786](https://github.com/cosmos/ibc-go/pull/2786) Save gas on + `IsFeeEnabled`. ### Bug Fixes * (apps/29-fee) + [\#2942](https://github.com/cosmos/ibc-go/pull/2942) Check `x/bank` send + enabled before escrowing fees. ### Documentation * + [\#2737](https://github.com/cosmos/ibc-go/pull/2737) Fix migration/docs for + ICA controller middleware. ### Miscellaneous Tasks * + [\#2772](https://github.com/cosmos/ibc-go/pull/2772) Integrated git cliff into + the code base to automate generation of changelogs. + + + + ### Bug Fixes * [\#3661](https://github.com/cosmos/ibc-go/pull/3661) Revert + state-machine breaking improvement from PR + [#2786](https://github.com/cosmos/ibc-go/pull/2786). + + + + ### Dependencies * [\#2868](https://github.com/cosmos/ibc-go/pull/2868) Bump + ICS 23 to v0.9.0. ### Improvements * (apps/29-fee) + [\#2786](https://github.com/cosmos/ibc-go/pull/2786) Save gas by checking key + existence with `KVStore`'s `Has` method. ### Bug Fixes * + [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly handle ordered + channels in `UnreceivedPackets` query. + + + + ### Dependencies * [\#2588](https://github.com/cosmos/ibc-go/pull/2588) Bump + SDK version to v0.45.10 and Tendermint to v0.34.22. ### State Machine Breaking + * (apps/transfer) [\#2651](https://github.com/cosmos/ibc-go/pull/2651) + Introduce `mustProtoMarshalJSON` for ics20 packet data marshalling which will + skip emission (marshalling) of the memo field if unpopulated (empty). 
* + (27-interchain-accounts) [\#2590](https://github.com/cosmos/ibc-go/pull/2590) + Removing port prefix requirement from the ICA host channel handshake * + (transfer) [\#2377](https://github.com/cosmos/ibc-go/pull/2377) Adding + `sequence` to `MsgTransferResponse`. ### Features * (apps/transfer) + [\#2595](https://github.com/cosmos/ibc-go/pull/2595) Adding optional memo + field to `FungibleTokenPacketData` and `MsgTransfer`. ### Bug Fixes * + (apps/transfer) [\#2679](https://github.com/cosmos/ibc-go/pull/2679) Check + `x/bank` send enabled. + + + + ### Bug Fixes * [\#3660](https://github.com/cosmos/ibc-go/pull/3660) Revert + state-machine breaking improvement from PR + [#2786](https://github.com/cosmos/ibc-go/pull/2786). + + + + ### Dependencies * [\#2868](https://github.com/cosmos/ibc-go/pull/2868) Bump + ICS 23 to v0.9.0. ### Improvements * (apps/29-fee) + [\#2786](https://github.com/cosmos/ibc-go/pull/2786) Save gas by checking key + existence with `KVStore`'s `Has` method. ### Bug Fixes * + [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly handle ordered + channels in `UnreceivedPackets` query. + + + + ### Dependencies * [\#2624](https://github.com/cosmos/ibc-go/pull/2624) Bump + SDK version to v0.45.10 and Tendermint to v0.34.22. + + + + ### Dependencies * [\#2288](https://github.com/cosmos/ibc-go/pull/2288) Bump + SDK version to v0.45.8 and Tendermint to v0.34.21. ### Features * + (apps/27-interchain-accounts) + [\#2193](https://github.com/cosmos/ibc-go/pull/2193) Adding + `InterchainAccount` gRPC query endpoint to ICS27 `controller` submodule to + allow users to retrieve registered interchain account addresses. ### Bug Fixes + * (27-interchain-accounts) + [\#2308](https://github.com/cosmos/ibc-go/pull/2308) Nil checks have been + added to ensure services are not registered for nil host or controller + keepers. + + + + ### Dependencies * [\#2287](https://github.com/cosmos/ibc-go/pull/2287) Bump + SDK version to v0.45.8 and Tendermint to v0.34.21. 
+
+
+
+ ### Dependencies
+ * [\#1627](https://github.com/cosmos/ibc-go/pull/1627) Bump Go version to 1.18
+ * [\#1905](https://github.com/cosmos/ibc-go/pull/1905) Bump SDK version to v0.45.7
+ ### API Breaking
+ * (core/04-channel) [\#1792](https://github.com/cosmos/ibc-go/pull/1792) Remove `PreviousChannelID` from `NewMsgChannelOpenTry` arguments. `MsgChannelOpenTry.ValidateBasic()` returns error if the deprecated `PreviousChannelID` is not empty.
+ * (core/03-connection) [\#1797](https://github.com/cosmos/ibc-go/pull/1797) Remove `PreviousConnectionID` from `NewMsgConnectionOpenTry` arguments. `MsgConnectionOpenTry.ValidateBasic()` returns error if the deprecated `PreviousConnectionID` is not empty.
+ * (modules/core/03-connection) [\#1672](https://github.com/cosmos/ibc-go/pull/1672) Remove crossing hellos from connection handshakes. The `PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated.
+ * (modules/core/04-channel) [\#1317](https://github.com/cosmos/ibc-go/pull/1317) Remove crossing hellos from channel handshakes. The `PreviousChannelId` in `MsgChannelOpenTry` has been deprecated.
+ * (transfer) [\#1250](https://github.com/cosmos/ibc-go/pull/1250) Deprecate `GetTransferAccount` since the `transfer` module account is never used.
+ * (channel) [\#1283](https://github.com/cosmos/ibc-go/pull/1283) The `OnChanOpenInit` application callback now returns a version string in line with the latest [spec changes](https://github.com/cosmos/ibc/pull/629).
+ * (modules/29-fee) [\#1338](https://github.com/cosmos/ibc-go/pull/1338) Renaming `Result` field in `IncentivizedAcknowledgement` to `AppAcknowledgement`.
+ * (modules/29-fee) [\#1343](https://github.com/cosmos/ibc-go/pull/1343) Renaming `KeyForwardRelayerAddress` to `KeyRelayerAddressForAsyncAck`, and `ParseKeyForwardRelayerAddress` to `ParseKeyRelayerAddressForAsyncAck`.
+ * (apps/27-interchain-accounts) [\#1432](https://github.com/cosmos/ibc-go/pull/1432) Updating `RegisterInterchainAccount` to include an additional `version` argument, supporting ICS29 fee middleware functionality in ICS27 interchain accounts.
+ * (apps/27-interchain-accounts) [\#1565](https://github.com/cosmos/ibc-go/pull/1565) Removing `NewErrorAcknowledgement` in favour of `channeltypes.NewErrorAcknowledgement`.
+ * (transfer) [\#1565](https://github.com/cosmos/ibc-go/pull/1565) Removing `NewErrorAcknowledgement` in favour of `channeltypes.NewErrorAcknowledgement`.
+ * (channel) [\#1565](https://github.com/cosmos/ibc-go/pull/1565) Updating `NewErrorAcknowledgement` to accept an error instead of a string and removing the possibility of non-deterministic writes to application state.
+ * (core/04-channel) [\#1636](https://github.com/cosmos/ibc-go/pull/1636) Removing `SplitChannelVersion` and `MergeChannelVersions` functions since they are not used.
+ ### State Machine Breaking
+ * (apps/transfer) [\#1907](https://github.com/cosmos/ibc-go/pull/1907) Blocked module account addresses are no longer allowed to send IBC transfers.
+ * (apps/27-interchain-accounts) [\#1882](https://github.com/cosmos/ibc-go/pull/1882) Explicitly check length of interchain account packet data in favour of nil check.
+ ### Improvements
+ * (app/20-transfer) [\#1680](https://github.com/cosmos/ibc-go/pull/1680) Adds migration to correct any malformed trace path information of tokens with denoms that contain slashes. The transfer module consensus version has been bumped to 2.
+ * (app/20-transfer) [\#1730](https://github.com/cosmos/ibc-go/pull/1730) Parse the ics20 denomination provided via a packet using the channel identifier format specified by ibc-go.
+ * (cleanup) [\#1335](https://github.com/cosmos/ibc-go/pull/1335/) `gofumpt -w -l .` to standardize the code layout more strictly than `go fmt ./...`
+ * (middleware) [\#1022](https://github.com/cosmos/ibc-go/pull/1022) Add `GetAppVersion` to the ICS4Wrapper interface. This function should be used by IBC applications to obtain their own version since the version set in the channel structure may be wrapped many times by middleware.
+ * (modules/core/04-channel) [\#1232](https://github.com/cosmos/ibc-go/pull/1232) Updating params on `NewPacketId` and moving to bottom of file.
+ * (app/29-fee) [\#1305](https://github.com/cosmos/ibc-go/pull/1305) Change version string for fee module to `ics29-1`
+ * (app/29-fee) [\#1341](https://github.com/cosmos/ibc-go/pull/1341) Check if the fee module is locked and if the fee module is enabled before refunding all fees
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (testing/simapp) [\#1397](https://github.com/cosmos/ibc-go/pull/1397) Adding mock module to maccperms and adding check to ensure mock module is not a blocked account address.
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (modules/light-clients/07-tendermint) [\#1713](https://github.com/cosmos/ibc-go/pull/1713) Allow client upgrade proposals to update `TrustingPeriod`. See ADR-026 for context.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+ ### Features
+ * [\#276](https://github.com/cosmos/ibc-go/pull/276) Adding the Fee Middleware module v1
+ * (apps/29-fee) [\#1229](https://github.com/cosmos/ibc-go/pull/1229) Adding CLI commands for getting all unrelayed incentivized packets and packet by packet-id.
+ * (apps/29-fee) [\#1224](https://github.com/cosmos/ibc-go/pull/1224) Adding Query/CounterpartyAddress and CLI to ICS29 fee middleware
+ * (apps/29-fee) [\#1225](https://github.com/cosmos/ibc-go/pull/1225) Adding Query/FeeEnabledChannel and Query/FeeEnabledChannels with CLIs to ICS29 fee middleware.
+ * (modules/apps/29-fee) [\#1230](https://github.com/cosmos/ibc-go/pull/1230) Adding CLI command for getting incentivized packets for a specific channel-id.
+ ### Bug Fixes
+ * (apps/29-fee) [\#1774](https://github.com/cosmos/ibc-go/pull/1774) Change non nil relayer assertion to non empty to avoid import/export issues for genesis upgrades.
+ * (apps/29-fee) [\#1278](https://github.com/cosmos/ibc-go/pull/1278) The URI path for the query to get all incentivized packets for a specific channel did not follow the same format as the rest of queries.
+ * (modules/core/04-channel) [\#1919](https://github.com/cosmos/ibc-go/pull/1919) Fixed formatting of sequence for packet "acknowledgement written" logs.
+
+
+
+ ### Dependencies
+ * [\#2589](https://github.com/cosmos/ibc-go/pull/2589) Bump SDK version to v0.45.10 and Tendermint to v0.34.22.
+ ### State Machine Breaking
+ * (apps/transfer) [\#2651](https://github.com/cosmos/ibc-go/pull/2651) Introduce `mustProtoMarshalJSON` for ics20 packet data marshalling which will skip emission (marshalling) of the memo field if unpopulated (empty).
+ * (27-interchain-accounts) [\#2590](https://github.com/cosmos/ibc-go/pull/2590) Removing port prefix requirement from the ICA host channel handshake
+ * (transfer) [\#2377](https://github.com/cosmos/ibc-go/pull/2377) Adding `sequence` to `MsgTransferResponse`.
+ ### Features
+ * (apps/transfer) [\#2595](https://github.com/cosmos/ibc-go/pull/2595) Adding optional memo field to `FungibleTokenPacketData` and `MsgTransfer`.
+ ### Bug Fixes
+ * (apps/transfer) [\#2679](https://github.com/cosmos/ibc-go/pull/2679) Check `x/bank` send enabled.
+
+
+
+ ### Dependencies
+ * [\#2621](https://github.com/cosmos/ibc-go/pull/2621) Bump SDK version to v0.45.10 and Tendermint to v0.34.22.
+
+
+
+ ### Dependencies
+ * [\#2286](https://github.com/cosmos/ibc-go/pull/2286) Bump SDK version to v0.45.8 and Tendermint to v0.34.21.
+ ### Features
+ * (apps/27-interchain-accounts) [\#2193](https://github.com/cosmos/ibc-go/pull/2193) Adding `InterchainAccount` gRPC query endpoint to ICS27 `controller` submodule to allow users to retrieve registered interchain account addresses.
+ ### Bug Fixes
+ * (27-interchain-accounts) [\#2308](https://github.com/cosmos/ibc-go/pull/2308) Nil checks have been added to ensure services are not registered for nil host or controller keepers.
+
+
+
+ ### Dependencies
+ * [\#2285](https://github.com/cosmos/ibc-go/pull/2285) Bump SDK version to v0.45.8 and Tendermint to v0.34.21.
+
+
+
+ ### Dependencies
+ * [\#1627](https://github.com/cosmos/ibc-go/pull/1627) Bump Go version to 1.18
+ * [\#1905](https://github.com/cosmos/ibc-go/pull/1905) Bump SDK version to v0.45.7
+ ### State Machine Breaking
+ * (apps/transfer) [\#1907](https://github.com/cosmos/ibc-go/pull/1907) Blocked module account addresses are no longer allowed to send IBC transfers.
+ * (apps/27-interchain-accounts) [\#1882](https://github.com/cosmos/ibc-go/pull/1882) Explicitly check length of interchain account packet data in favour of nil check.
+ ### Improvements
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (modules/light-clients/07-tendermint) [\#1713](https://github.com/cosmos/ibc-go/pull/1713) Allow client upgrade proposals to update `TrustingPeriod`. See ADR-026 for context.
+ * (app/20-transfer) [\#1680](https://github.com/cosmos/ibc-go/pull/1680) Adds migration to correct any malformed trace path information of tokens with denoms that contain slashes. The transfer module consensus version has been bumped to 2.
+ * (app/20-transfer) [\#1730](https://github.com/cosmos/ibc-go/pull/1730) Parse the ics20 denomination provided via a packet using the channel identifier format specified by ibc-go.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1919](https://github.com/cosmos/ibc-go/pull/1919) Fixed formatting of sequence for packet "acknowledgement written" logs.
+
+
+
+ ### Dependencies
+ * [\#1525](https://github.com/cosmos/ibc-go/pull/1525) Bump SDK version to v0.45.5
+ ### Improvements
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+
+
+
+ ### Dependencies
+ * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/04-channel) [\#1279](https://github.com/cosmos/ibc-go/pull/1279) Add selected channel version to MsgChanOpenInitResponse and MsgChanOpenTryResponse. Emit channel version during OpenInit/OpenTry
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ * (modules/light-clients/07-tendermint) [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for context.
+ ### Features
+ * (modules/core/02-client) [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding Query/ConsensusStateHeights gRPC for fetching the height of every consensus state associated with a client.
+ * (modules/apps/transfer) [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for getting an escrow account for a given port-id and channel-id.
+ * (modules/apps/27-interchain-accounts) [\#1512](https://github.com/cosmos/ibc-go/pull/1512) Allowing ICA modules to handle all message types with "*".
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+ * (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) Fixing the support for base denoms that contain slashes.
+
+
+
+ ### Improvements
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+
+
+
+ ### Dependencies
+ * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`.
+ `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+
+
+ ### Dependencies
+ * [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17
+ * [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1
+ * [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7
+ * (core) [\#709](https://github.com/cosmos/ibc-go/pull/709) Replace github.com/pkg/errors with stdlib errors
+ ### API Breaking
+ * (testing) [\#939](https://github.com/cosmos/ibc-go/pull/939) Support custom power reduction for testing.
+ * (modules/core/05-port) [\#1086](https://github.com/cosmos/ibc-go/pull/1086) Added `counterpartyChannelID` argument to IBCModule.OnChanOpenAck
+ * (channel) [\#848](https://github.com/cosmos/ibc-go/pull/848) Added `ChannelId` to MsgChannelOpenInitResponse
+ * (testing) [\#813](https://github.com/cosmos/ibc-go/pull/813) The `ack` argument to the testing function `RelayPacket` has been removed as it is no longer needed.
+ * (testing) [\#774](https://github.com/cosmos/ibc-go/pull/774) Added `ChainID` arg to `SetupWithGenesisValSet` on the testing app. `Coordinator` generated ChainIDs now start at index 1
+ * (transfer) [\#675](https://github.com/cosmos/ibc-go/pull/675) Transfer `NewKeeper` now takes in an ICS4Wrapper. The ICS4Wrapper may be the IBC Channel Keeper when ICS20 is not used in a middleware stack.
The ICS4Wrapper is required for applications wishing to connect middleware to ICS20. + * (core) [\#650](https://github.com/cosmos/ibc-go/pull/650) Modify `OnChanOpenTry` IBC application module callback to return the negotiated app version. The version passed into the `MsgChanOpenTry` has been deprecated and will be ignored by core IBC. + * (core) [\#629](https://github.com/cosmos/ibc-go/pull/629) Removes the `GetProofSpecs` from the ClientState interface. This function was previously unused by core IBC. + * (transfer) [\#517](https://github.com/cosmos/ibc-go/pull/517) Separates the ICS 26 callback functions from `AppModule` into a new type `IBCModule` for ICS 20 transfer. + * (modules/core/02-client) [\#536](https://github.com/cosmos/ibc-go/pull/536) `GetSelfConsensusState` return type changed from bool to error. + * (channel) [\#644](https://github.com/cosmos/ibc-go/pull/644) Removes `CounterpartyHops` function from the ChannelKeeper. + * (testing) [\#776](https://github.com/cosmos/ibc-go/pull/776) Adding helper fn to generate capability name for testing callbacks + * (testing) [\#892](https://github.com/cosmos/ibc-go/pull/892) IBC Mock modules store the scoped keeper and portID within the IBCMockApp. They also maintain reference to the AppModule to update the AppModule's list of IBC applications it references. Allows for the mock module to be reused as a base application in middleware stacks. + * (channel) [\#882](https://github.com/cosmos/ibc-go/pull/882) The `WriteAcknowledgement` API now takes `exported.Acknowledgement` instead of a byte array + * (modules/core/ante) [\#950](https://github.com/cosmos/ibc-go/pull/950) Replaces the channel keeper with the IBC keeper in the IBC `AnteDecorator` in order to execute the entire message and be able to reject redundant messages that are in the same block as the non-redundant messages. 
+ ### State Machine Breaking + * (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message. + ### Improvements + * (client) [\#888](https://github.com/cosmos/ibc-go/pull/888) Add `GetTimestampAtHeight` to `ClientState` + * (interchain-accounts) [\#1037](https://github.com/cosmos/ibc-go/pull/1037) Add a function `InitModule` to the interchain accounts `AppModule`. This function should be called within the upgrade handler when adding the interchain accounts module to a chain. It should be called in place of InitGenesis (set the consensus version in the version map). + * (testing) [\#942](https://github.com/cosmos/ibc-go/pull/942) `NewTestChain` will create 4 validators in validator set by default. A new constructor function `NewTestChainWithValSet` is provided for test writers who want custom control over the validator set of test chains. + * (testing) [\#904](https://github.com/cosmos/ibc-go/pull/904) Add `ParsePacketFromEvents` function to the testing package. Useful when sending/relaying packets via the testing package. + * (testing) [\#893](https://github.com/cosmos/ibc-go/pull/893) Support custom private keys for testing. + * (testing) [\#810](https://github.com/cosmos/ibc-go/pull/810) Additional testing function added to `Endpoint` type called `RecvPacketWithResult`. Performs the same functionality as the existing `RecvPacket` function but also returns the message result. `path.RelayPacket` no longer uses the provided acknowledgement argument and instead obtains the acknowledgement via MsgRecvPacket events. + * (connection) [\#721](https://github.com/cosmos/ibc-go/pull/721) Simplify connection handshake error messages when unpacking client state. 
+ * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+ * [\#383](https://github.com/cosmos/ibc-go/pull/383) Adds helper functions for merging and splitting middleware versions from the underlying app version.
+ * (modules/core/05-port) [\#288](https://github.com/cosmos/ibc-go/pull/288) Making the 05-port keeper function IsBound public. The IsBound function checks if the provided portID is already bound to a module.
+ * (client) [\#724](https://github.com/cosmos/ibc-go/pull/724) `IsRevisionFormat` and `IsClientIDFormat` have been updated to disallow newlines before the dash used to separate the chainID and revision number, and the client type and client sequence.
+ * (channel) [\#644](https://github.com/cosmos/ibc-go/pull/644) Adds `GetChannelConnection` to the ChannelKeeper. This function returns the connectionID and connection state associated with a channel.
+ * (channel) [\#647](https://github.com/cosmos/ibc-go/pull/647) Reorganizes channel handshake handling to set channel state after IBC application callbacks.
+ * (interchain-accounts) [\#1466](https://github.com/cosmos/ibc-go/pull/1466) Emit event when there is an acknowledgement during `OnRecvPacket`.
+ ### Features
+ * [\#432](https://github.com/cosmos/ibc-go/pull/432) Introduce `MockIBCApp` struct to the mock module. Allows the mock module to be reused to perform custom logic on each IBC App interface function. This might be useful when testing out IBC applications written as middleware.
+ * [\#380](https://github.com/cosmos/ibc-go/pull/380) Adding the Interchain Accounts module v1
+ * [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debugging
+ ### Bug Fixes
+ * (testing) [\#884](https://github.com/cosmos/ibc-go/pull/884) Add and use in simapp a custom ante handler that rejects redundant transactions
+ * (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+ * (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+ * (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+
+
+ ### Dependencies
+ * [\#2578](https://github.com/cosmos/ibc-go/pull/2578) Bump SDK version to v0.45.10 and Tendermint to v0.34.22.
+ ### State Machine Breaking
+ * (apps/transfer) [\#2651](https://github.com/cosmos/ibc-go/pull/2651) Introduce `mustProtoMarshalJSON` for ics20 packet data marshalling which will skip emission (marshalling) of the memo field if unpopulated (empty).
+ * (transfer) [\#2377](https://github.com/cosmos/ibc-go/pull/2377) Adding `sequence` to `MsgTransferResponse`.
+ ### Features
+ * (apps/transfer) [\#2595](https://github.com/cosmos/ibc-go/pull/2595) Adding optional memo field to `FungibleTokenPacketData` and `MsgTransfer`.
+ ### Bug Fixes
+ * (apps/transfer) [\#2679](https://github.com/cosmos/ibc-go/pull/2679) Check `x/bank` send enabled.
+
+
+
+ ### Dependencies
+ * [\#2622](https://github.com/cosmos/ibc-go/pull/2622) Bump SDK version to v0.45.10 and Tendermint to v0.34.22.
+
+
+
+ ### Dependencies
+ * [\#2284](https://github.com/cosmos/ibc-go/pull/2284) Bump SDK version to v0.45.8 and Tendermint to v0.34.21.
+
+
+
+ ### Dependencies
+ * [\#1627](https://github.com/cosmos/ibc-go/pull/1627) Bump Go version to 1.18
+ * [\#1905](https://github.com/cosmos/ibc-go/pull/1905) Bump SDK version to v0.45.7
+ ### State Machine Breaking
+ * (apps/transfer) [\#1907](https://github.com/cosmos/ibc-go/pull/1907) Blocked module account addresses are no longer allowed to send IBC transfers.
+ ### Improvements
+ * (modules/light-clients/07-tendermint) [\#1713](https://github.com/cosmos/ibc-go/pull/1713) Allow client upgrade proposals to update `TrustingPeriod`. See ADR-026 for context.
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (app/20-transfer) [\#1680](https://github.com/cosmos/ibc-go/pull/1680) Adds migration to correct any malformed trace path information of tokens with denoms that contain slashes. The transfer module consensus version has been bumped to 2.
+ * (app/20-transfer) [\#1730](https://github.com/cosmos/ibc-go/pull/1730) Parse the ics20 denomination provided via a packet using the channel identifier format specified by ibc-go.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1919](https://github.com/cosmos/ibc-go/pull/1919) Fixed formatting of sequence for packet "acknowledgement written" logs.
+
+
+
+ ### Dependencies
+ * [\#1525](https://github.com/cosmos/ibc-go/pull/1525) Bump SDK version to v0.45.5
+ ### Improvements
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+
+
+
+ ### Dependencies
+ * [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17
+ * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ * (modules/light-clients/07-tendermint) [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for context.
+ ### Features
+ * (modules/core/02-client) [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding Query/ConsensusStateHeights gRPC for fetching the height of every consensus state associated with a client.
+ * (modules/apps/transfer) [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for getting an escrow account for a given port-id and channel-id.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+ * (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) Fixing the support for base denoms that contain slashes.
+
+
+
+ ### Improvements
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+
+
+
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+
+
+ ### Dependencies
+ * [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1
+
+
+
+ ### Improvements
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+
+
+
+ ### Dependencies
+ * [\#1268](https://github.com/cosmos/ibc-go/pull/1268) Bump SDK version to v0.44.8 and Tendermint to version 0.34.19
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+
+
+ ### Dependencies
+ * [\#1084](https://github.com/cosmos/ibc-go/pull/1084) Bump SDK version to v0.44.6
+ * [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7
+ ### State Machine Breaking
+ * (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message.
+ ### Features
+ * [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debugging
+ ### Bug Fixes
+ * (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+ * (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+ * (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+
+
+ ### Improvements
+ * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+
+
+ ### Dependencies
+ * [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+ ### Bug Fixes
+ * (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+ ### Dependencies
+ * [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+ ### Improvements
+ * (02-client) [\#568](https://github.com/cosmos/ibc-go/pull/568) In IBC `transfer` cli command use local clock time as reference for relative timestamp timeout if greater than the block timestamp queried from the latest consensus state corresponding to the counterparty channel.
+ * [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+ ### Bug Fixes
+ * (02-client) [\#500](https://github.com/cosmos/ibc-go/pull/500) Fix IBC `update-client proposal` cli command to expect correct number of args.
+
+ ### Dependencies
+ * [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+ * [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+ ### API Breaking
+ * (core) [\#227](https://github.com/cosmos/ibc-go/pull/227) Remove sdk.Result from application callbacks
+ * (transfer) [\#350](https://github.com/cosmos/ibc-go/pull/350) Change FungibleTokenPacketData to use a string for the Amount field. This enables token transfers with amounts previously restricted by uint64. Up to the maximum uint256 value is supported.
+ ### Features
+ * [\#384](https://github.com/cosmos/ibc-go/pull/384) Added `NegotiateAppVersion` method to `IBCModule` interface supported by a gRPC query service in `05-port`. This provides routing of requests to the desired application module callback, which in turn performs application version negotiation.
+
+ ### Dependencies
+ * [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17
+ * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ * (modules/light-clients/07-tendermint) [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for context.
+ ### Features
+ * (modules/core/02-client) [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding Query/ConsensusStateHeights gRPC for fetching the height of every consensus state associated with a client.
+ * (modules/apps/transfer) [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for getting an escrow account for a given port-id and channel-id.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+ * (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) Fixing the support for base denoms that contain slashes.
+
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+ ### Dependencies
+ * [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1
+
+ ### Dependencies
+ * [\#1267](https://github.com/cosmos/ibc-go/pull/1267) Bump SDK version to v0.44.8 and Tendermint to version 0.34.19
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+ ### Dependencies
+ * [\#1073](https://github.com/cosmos/ibc-go/pull/1073) Bump SDK version to v0.44.6
+ * [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7
+ ### State Machine Breaking
+ * (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message.
+ ### Features
+ * [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debug
+ ### Bug Fixes
+ * (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+ * (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+ * (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+ ### Improvements
+ * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+ ### Dependencies
+ * [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+ ### Bug Fixes
+ * (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+ ### Dependencies
+ * [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+ ### Improvements
+ * [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+
+ ### Dependencies
+ * [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+ * [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+
+ ### Dependencies
+ * [\#485](https://github.com/cosmos/ibc-go/pull/485) Bump SDK version to v0.44.2
+
+ ### Dependencies
+ * [\#455](https://github.com/cosmos/ibc-go/pull/455) Bump SDK version to v0.44.1
+
+ ### State Machine Breaking
+ * (24-host) [\#344](https://github.com/cosmos/ibc-go/pull/344) Increase port identifier limit to 128 characters.
+ ### Improvements
+ * [\#373](https://github.com/cosmos/ibc-go/pull/375) Added optional field `PacketCommitmentSequences` to `QueryPacketAcknowledgementsRequest` to provide filtering of packet acknowledgements.
+ ### Features
+ * [\#372](https://github.com/cosmos/ibc-go/pull/372) New CLI command `query ibc client status ` to get the current activity status of a client.
+ ### Dependencies
+ * [\#386](https://github.com/cosmos/ibc-go/pull/386) Bump [tendermint](https://github.com/tendermint/tendermint) from v0.34.12 to v0.34.13.
+
+ ### Improvements
+ * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+ ### Dependencies
+ * [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+ ### Bug Fixes
+ * (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+ ### Dependencies
+ * [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+ ### Improvements
+ * [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+
+ ### Dependencies
+ * [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+ * [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+
+ * [\#485](https://github.com/cosmos/ibc-go/pull/485) Bump SDK version to v0.44.2
+
+ ### Dependencies
+ * [\#455](https://github.com/cosmos/ibc-go/pull/455) Bump SDK version to v0.44.1
+
+ ### Dependencies
+ * [\#367](https://github.com/cosmos/ibc-go/pull/367) Bump [cosmos-sdk](https://github.com/cosmos/cosmos-sdk) from 0.43 to 0.44.
+
+ ### Improvements
+ * [\#343](https://github.com/cosmos/ibc-go/pull/343) Create helper functions for publishing of packet sent and acknowledgement sent events.
+
+ ### Bug Fixes
+ * (07-tendermint) [\#241](https://github.com/cosmos/ibc-go/pull/241) Ensure tendermint client state latest height revision number matches chain id revision number.
+ * (07-tendermint) [\#234](https://github.com/cosmos/ibc-go/pull/234) Use sentinel value for the consensus state root set during a client upgrade. This prevents genesis validation from failing.
+ * (modules) [\#223](https://github.com/cosmos/ibc-go/pull/223) Use correct Prometheus format for metric labels.
+ * (06-solomachine) [\#214](https://github.com/cosmos/ibc-go/pull/214) Disable defensive timestamp check in SendPacket for solo machine clients.
+ * (07-tendermint) [\#210](https://github.com/cosmos/ibc-go/pull/210) Export all consensus metadata on genesis restarts for tendermint clients. + * (core) [\#200](https://github.com/cosmos/ibc-go/pull/200) Fixes incorrect export of IBC identifier sequences. Previously, the next identifier sequence for clients/connections/channels was not set during genesis export. This resulted in the next identifiers being generated on the new chain to reuse old identifiers (the sequences began again from 0). + * (02-client) [\#192](https://github.com/cosmos/ibc-go/pull/192) Fix IBC `query ibc client header` cli command. Support historical queries for query header/node-state commands. + * (modules/light-clients/06-solomachine) [\#153](https://github.com/cosmos/ibc-go/pull/153) Fix solo machine proof height sequence mismatch bug. + * (modules/light-clients/06-solomachine) [\#122](https://github.com/cosmos/ibc-go/pull/122) Fix solo machine merkle prefix casting bug. + * (modules/light-clients/06-solomachine) [\#120](https://github.com/cosmos/ibc-go/pull/120) Fix solo machine handshake verification bug. + * (modules/light-clients/06-solomachine) [\#153](https://github.com/cosmos/ibc-go/pull/153) fix solo machine connection handshake failure at `ConnectionOpenAck`. + ### API Breaking + * (04-channel) [\#220](https://github.com/cosmos/ibc-go/pull/220) Channel legacy handler functions were removed. Please use the MsgServer functions or directly call the channel keeper's handshake function. + * (modules) [\#206](https://github.com/cosmos/ibc-go/pull/206) Expose `relayer sdk.AccAddress` on `OnRecvPacket`, `OnAcknowledgementPacket`, `OnTimeoutPacket` module callbacks to enable incentivization. + * (02-client) [\#181](https://github.com/cosmos/ibc-go/pull/181) Remove 'InitialHeight' from UpdateClient Proposal. Only copy over latest consensus state from substitute client. 
+ * (06-solomachine) [\#169](https://github.com/cosmos/ibc-go/pull/169) Change FrozenSequence to boolean in solomachine ClientState. The solo machine proto package has been bumped from `v1` to `v2`. + * (module/core/02-client) [\#165](https://github.com/cosmos/ibc-go/pull/165) Remove GetFrozenHeight from the ClientState interface. + * (modules) [\#166](https://github.com/cosmos/ibc-go/pull/166) Remove GetHeight from the misbehaviour interface. The `consensus_height` attribute has been removed from Misbehaviour events. + * (modules) [\#162](https://github.com/cosmos/ibc-go/pull/162) Remove deprecated Handler types in core IBC and the ICS 20 transfer module. + * (modules/core) [\#161](https://github.com/cosmos/ibc-go/pull/161) Remove Type(), Route(), GetSignBytes() from 02-client, 03-connection, and 04-channel messages. + * (modules) [\#140](https://github.com/cosmos/ibc-go/pull/140) IsFrozen() client state interface changed to Status(). gRPC `ClientStatus` route added. + * (modules/core) [\#109](https://github.com/cosmos/ibc-go/pull/109) Remove connection and channel handshake CLI commands. + * (modules) [\#107](https://github.com/cosmos/ibc-go/pull/107) Modify OnRecvPacket callback to return an acknowledgement which indicates if it is successful or not. Callback state changes are discarded for unsuccessful acknowledgements only. + * (modules) [\#108](https://github.com/cosmos/ibc-go/pull/108) All message constructors take the signer as a string to prevent upstream bugs. The `String()` function for an SDK Acc Address relies on external context. + * (transfer) [\#275](https://github.com/cosmos/ibc-go/pull/275) Remove 'ChanCloseInit' function from transfer keeper. ICS20 does not close channels. + ### State Machine Breaking + * (modules/light-clients/07-tendermint) [\#99](https://github.com/cosmos/ibc-go/pull/99) Enforce maximum chain-id length for tendermint client. 
+ * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Allow a new form of misbehaviour that proves counterparty chain breaks time monotonicity, automatically enforce monotonicity in UpdateClient and freeze client if monotonicity is broken. + * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Freeze the client if there's a conflicting header submitted for an existing consensus state. + * (modules/core/02-client) [\#8405](https://github.com/cosmos/cosmos-sdk/pull/8405) Refactor IBC client update governance proposals to use a substitute client to update a frozen or expired client. + * (modules/core/02-client) [\#8673](https://github.com/cosmos/cosmos-sdk/pull/8673) IBC upgrade logic moved to 02-client and an IBC UpgradeProposal is added. + * (modules/core/03-connection) [\#171](https://github.com/cosmos/ibc-go/pull/171) Introduces a new parameter `MaxExpectedTimePerBlock` to allow connections to calculate and enforce a block delay that is proportional to time delay set by connection. + * (core) [\#268](https://github.com/cosmos/ibc-go/pull/268) Perform a no-op on redundant relay messages. Previous behaviour returned an error. Now no state change will occur and no error will be returned. + ### Improvements + * (04-channel) [\#220](https://github.com/cosmos/ibc-go/pull/220) Channel handshake events are now emitted with the channel keeper. + * (core/02-client) [\#205](https://github.com/cosmos/ibc-go/pull/205) Add in-place and genesis migrations from SDK v0.42.0 to ibc-go v1.0.0. Solo machine protobuf definitions are migrated from v1 to v2. All solo machine consensus states are pruned. All expired tendermint consensus states are pruned. + * (modules/core) [\#184](https://github.com/cosmos/ibc-go/pull/184) Improve error messages. Uses unique error codes to indicate already relayed packets. 
+ * (07-tendermint) [\#182](https://github.com/cosmos/ibc-go/pull/182) Remove duplicate checks in upgrade logic. + * (modules/core/04-channel) [\#7949](https://github.com/cosmos/cosmos-sdk/issues/7949) Standardized channel `Acknowledgement` moved to its own file. Codec registration redundancy removed. + * (modules/core/04-channel) [\#144](https://github.com/cosmos/ibc-go/pull/144) Introduced a `packet_data_hex` attribute to emit the hex-encoded packet data in events. This allows for raw binary (proto-encoded message) to be sent over events and decoded correctly on relayer. Original `packet_data` is DEPRECATED. All relayers and IBC event consumers are encouraged to switch to `packet_data_hex` as soon as possible. + * (core/04-channel) [\#197](https://github.com/cosmos/ibc-go/pull/197) Introduced a `packet_ack_hex` attribute to emit the hex-encoded acknowledgement in events. This allows for raw binary (proto-encoded message) to be sent over events and decoded correctly on relayer. Original `packet_ack` is DEPRECATED. All relayers and IBC event consumers are encouraged to switch to `packet_ack_hex` as soon as possible. + * (modules/light-clients/07-tendermint) [\#125](https://github.com/cosmos/ibc-go/pull/125) Implement efficient iteration of consensus states and pruning of earliest expired consensus state on UpdateClient. + * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Return early in case there's a duplicate update call to save Gas. + * (modules/core/ante) [\#235](https://github.com/cosmos/ibc-go/pull/235) Introduces a new IBC Antedecorator that will reject transactions that only contain redundant packet messages (and accompany UpdateClient msgs). This will prevent relayers from wasting fees by submitting messages for packets that have already been processed by previous relayer(s). The Antedecorator is only applied on CheckTx and RecheckTx and is therefore optional for each node. 
+ ### Features
+ * [\#198](https://github.com/cosmos/ibc-go/pull/198) New CLI command `query ibc-transfer escrow-address ` to get the escrow address for a channel; can be used to then query balance of escrowed tokens
+ ### Client Breaking Changes
+ * (02-client/cli) [\#196](https://github.com/cosmos/ibc-go/pull/196) Rename `node-state` cli command to `self-consensus-state`.
diff --git a/docs/ibc/next/ibc/apps/apps.mdx b/docs/ibc/next/ibc/apps/apps.mdx
new file mode 100644
index 00000000..5b8891e8
--- /dev/null
+++ b/docs/ibc/next/ibc/apps/apps.mdx
@@ -0,0 +1,481 @@
+---
+title: IBC Applications
+---
+
+This page is relevant for IBC Classic; navigate to the IBC v2 applications page for information on v2 apps.
+
+Learn how to configure your application to use IBC and send data packets to other chains.
+
+This document serves as a guide for developers who want to write their own Inter-blockchain
+Communication Protocol (IBC) applications for custom use cases.
+
+Due to the modular design of the IBC protocol, IBC
+application developers do not need to concern themselves with the low-level details of clients,
+connections, and proof verification; however, a brief explanation is given. The document then goes into detail on the abstraction layer most relevant for application
+developers (channels and ports), and describes how to define your own custom packets and
+`IBCModule` callbacks.
+
+To have your module interact over IBC you must: bind to a port (or ports), define your own packet data and acknowledgement structs as well as how to encode/decode them, and implement the
+`IBCModule` interface. Below is a more detailed explanation of how to write an IBC application
+module correctly.
+
+## Prerequisite Readings
+
+* [IBC Overview](/docs/ibc/next/ibc/overview)
+* [IBC default integration](/docs/ibc/next/ibc/integration)
+
+## Create a custom IBC application module
+
+### Implement `IBCModule` Interface and callbacks
+
+The Cosmos SDK expects all IBC modules to implement the [`IBCModule`
+interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This
+interface contains all of the callbacks IBC expects modules to implement. This section describes
+the callbacks that are called during channel handshake execution.
+
+Here are the channel handshake callbacks that modules are expected to implement:
+
+```go expandable
+// Called by IBC Handler on MsgOpenInit
+func (k Keeper) OnChanOpenInit(
+  ctx sdk.Context,
+  order channeltypes.Order,
+  connectionHops []string,
+  portID string,
+  channelID string,
+  counterparty channeltypes.Counterparty,
+  version string,
+) error {
+  // ... do custom initialization logic
+
+  // Use above arguments to determine if we want to abort handshake
+  // Examples: Abort if order == UNORDERED,
+  // Abort if version is unsupported
+  err := checkArguments(args)
+  return err
+}
+
+// Called by IBC Handler on MsgOpenTry
+func (k Keeper) OnChanOpenTry(
+  ctx sdk.Context,
+  order channeltypes.Order,
+  connectionHops []string,
+  portID,
+  channelID string,
+  counterparty channeltypes.Counterparty,
+  counterpartyVersion string,
+) (string, error) {
+  // ... do custom initialization logic
+
+  // Use above arguments to determine if we want to abort handshake
+  if err := checkArguments(args); err != nil {
+    return "", err
+  }
+
+  // Construct application version
+  // IBC applications must return the appropriate application version
+  // This can be a simple string or it can be a complex version constructed
+  // from the counterpartyVersion and other arguments.
+  // The version returned will be the channel version used for both channel ends.
+  appVersion := negotiateAppVersion(counterpartyVersion, args)
+  return appVersion, nil
+}
+
+// Called by IBC Handler on MsgOpenAck
+func (k Keeper) OnChanOpenAck(
+  ctx sdk.Context,
+  portID,
+  channelID string,
+  counterpartyVersion string,
+) error {
+  // ... do custom initialization logic
+
+  // Use above arguments to determine if we want to abort handshake
+  err := checkArguments(args)
+  return err
+}
+
+// Called by IBC Handler on MsgOpenConfirm
+func (k Keeper) OnChanOpenConfirm(
+  ctx sdk.Context,
+  portID,
+  channelID string,
+) error {
+  // ... do custom initialization logic
+
+  // Use above arguments to determine if we want to abort handshake
+  err := checkArguments(args)
+  return err
+}
+```
+
+The channel closing handshake will also invoke module callbacks that can return errors to abort the
+closing handshake. Closing a channel is a 2-step handshake: the initiating chain calls
+`ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`.
+
+```go expandable
+// Called by IBC Handler on MsgCloseInit
+func (k Keeper) OnChanCloseInit(
+  ctx sdk.Context,
+  portID,
+  channelID string,
+) error {
+  // ... do custom finalization logic
+
+  // Use above arguments to determine if we want to abort handshake
+  err := checkArguments(args)
+  return err
+}
+
+// Called by IBC Handler on MsgCloseConfirm
+func (k Keeper) OnChanCloseConfirm(
+  ctx sdk.Context,
+  portID,
+  channelID string,
+) error {
+  // ... do custom finalization logic
+
+  // Use above arguments to determine if we want to abort handshake
+  err := checkArguments(args)
+  return err
+}
+```
+
+#### Channel Handshake Version Negotiation
+
+Application modules are expected to verify versioning used during the channel handshake procedure.
+
+* `ChanOpenInit` callback should verify that the `MsgChanOpenInit.Version` is valid
+* `ChanOpenTry` callback should construct the application version used for both channel ends. If no application version can be constructed, it must return an error.
+* `ChanOpenAck` callback should verify that the `MsgChanOpenAck.CounterpartyVersion` is valid and supported.
+
+IBC expects application modules to perform application version negotiation in `OnChanOpenTry`. The negotiated version
+must be returned to core IBC. If the version cannot be negotiated, an error should be returned.
+
+Versions must be strings but can implement any versioning structure. If your application plans to
+have linear releases then semantic versioning is recommended. If your application plans to release
+various features in between major releases then it is advised to use the same versioning scheme
+as IBC. This versioning scheme specifies a version identifier and compatible feature set with
+that identifier. Valid version selection includes selecting a compatible version identifier with
+a subset of features supported by your application for that version. The struct used for this
+scheme can be found in `03-connection/types`.
+
+Since the version type is a string, applications have the ability to do simple version verification
+via string matching, or they can use the already implemented versioning system and pass the proto
+encoded version into each handshake call as necessary.
+
+ICS20 currently implements basic string matching with a single supported version.
+
+### ICS4Wrapper
+
+The IBC application interacts with core IBC through the `ICS4Wrapper` interface for any application-initiated actions like `SendPacket` and `WriteAcknowledgement`. This may be the IBC ChannelKeeper directly, or a middleware that sits between the application and the IBC ChannelKeeper.
+
+If the application is being wired with a custom middleware, the application **must** have its ICS4Wrapper set to the middleware directly above it on the stack through the following call:
+
+```go
+// SetICS4Wrapper sets the ICS4Wrapper. This function may be used after
+// the module's initialization to set the middleware which is above this
+// module in the IBC application stack.
+// The ICS4Wrapper **must** be used for sending packets and writing acknowledgements
+// to ensure that the middleware can intercept and process these calls.
+// Do not use the channel keeper directly to send packets or write acknowledgements
+// as this will bypass the middleware.
+SetICS4Wrapper(wrapper ICS4Wrapper)
+```
+
+### Custom Packets
+
+Modules connected by a channel must agree on what application data they are sending over the
+channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up
+to each application module to determine how to implement this agreement. However, for most
+applications this will happen as a version negotiation during the channel handshake. While more
+complex version negotiation is possible to implement inside the channel opening handshake, a very
+simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go).
+
+Thus, a module must define its custom packet data structure, along with a well-defined way to
+encode and decode it to and from `[]byte`.
+
+```go expandable
+// Custom packet data defined in application module
+type CustomPacketData struct {
+  // Custom fields ...
+}
+
+func EncodePacketData(packetData CustomPacketData) []byte {
+  // encode packetData to bytes
+}
+
+func DecodePacketData(encoded []byte) CustomPacketData {
+  // decode from bytes to packet data
+}
+```
+
+Then a module must encode its packet data before sending it through IBC.
+
+```go expandable
+// Sending custom application packet data
+data := EncodePacketData(customPacketData)
+packet.Data = data
+// Send packet to IBC, authenticating with channelCap
+sequence, err := IBCChannelKeeper.SendPacket(
+  ctx,
+  sourcePort,
+  sourceChannel,
+  timeoutHeight,
+  timeoutTimestamp,
+  data,
+)
+```
+
+A module receiving a packet must decode the `PacketData` into a structure it expects so that it can
+act on it.
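Such an encode/decode agreement can be exercised end-to-end outside the SDK. Below is a minimal, self-contained sketch; the struct fields and the JSON codec are illustrative assumptions (ibc-transfer, for instance, defines its own proto/JSON wire format), not a prescribed encoding:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CustomPacketData is a hypothetical application payload; both channel
// ends must agree on its fields and encoding.
type CustomPacketData struct {
	Sender   string `json:"sender"`
	Receiver string `json:"receiver"`
	Amount   uint64 `json:"amount"`
}

// EncodePacketData produces the []byte carried in packet.Data.
func EncodePacketData(d CustomPacketData) ([]byte, error) {
	return json.Marshal(d)
}

// DecodePacketData recovers the struct on the receiving module.
func DecodePacketData(bz []byte) (CustomPacketData, error) {
	var d CustomPacketData
	err := json.Unmarshal(bz, &d)
	return d, err
}

func main() {
	bz, err := EncodePacketData(CustomPacketData{Sender: "cosmos1...", Receiver: "cosmos1...", Amount: 100})
	if err != nil {
		panic(err)
	}
	d, err := DecodePacketData(bz)
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Amount) // round-trips the packet payload
}
```

Any codec works as long as both ends apply it identically; a mismatch surfaces only at decode time on the counterparty, which is why the agreement is usually pinned down during version negotiation.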
+
+```go
+// Receiving custom application packet data (in OnRecvPacket)
+packetData := DecodePacketData(packet.Data)
+// handle received custom packet data
+```
+
+#### Packet Flow Handling
+
+Just as IBC expected modules to implement callbacks for channel handshakes, IBC also expects modules
+to implement callbacks for handling the packet flow through a channel.
+
+Once a module A and module B are connected to each other, relayers can start relaying packets and
+acknowledgements back and forth on the channel.
+
+![IBC packet flow diagram](https://media.githubusercontent.com/media/cosmos/ibc/old/spec/ics-004-channel-and-packet-semantics/channel-state-machine.png)
+
+Briefly, a successful packet flow works as follows:
+
+1. module A sends a packet through the IBC module
+2. the packet is received by module B
+3. if module B writes an acknowledgement of the packet then module A will process the acknowledgement
+4. if the packet is not successfully received before the timeout, then module A processes the packet's timeout.
+
+##### Sending Packets
+
+Modules do not send packets through callbacks, since the modules initiate the action of sending
+packets to the IBC module, as opposed to other parts of the packet flow where msgs sent to the IBC
+module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a
+packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`.
+
+```go expandable
+// Sending custom application packet data
+data := EncodePacketData(customPacketData)
+// Send packet to IBC, authenticating with channelCap
+sequence, err := IBCChannelKeeper.SendPacket(
+  ctx,
+  sourcePort,
+  sourceChannel,
+  timeoutHeight,
+  timeoutTimestamp,
+  data,
+)
+```
+
+##### Receiving Packets
+
+To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets
+invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC
Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the Acknowledgement interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. + +The state changes that occurred during this callback will only be written if: + +* the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement +* if the acknowledgement returned is nil indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. + +```go expandable +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) + +ibcexported.Acknowledgement { + / Decode the packet data + packetData := DecodePacketData(packet.Data) + + / do application state changes based on packet data and return the acknowledgement + / NOTE: The acknowledgement will indicate to the IBC handler if the application + / state changes should be written via the `Success()` function. Application state + / changes are only written if the acknowledgement is successful or the acknowledgement + / returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + +return ack +} +``` + +The Acknowledgement interface: + +```go +/ Acknowledgement defines the interface used to return +/ acknowledgements in the OnRecvPacket callback. 
+type Acknowledgement interface {
+  Success() bool
+  Acknowledgement() []byte
+}
+```
+
+### Acknowledgements
+
+Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing.
+In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement
+will be written once the packet has been processed by the application, which may be well after the packet receipt.
+
+NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement
+for a packet as soon as it has been received from the IBC module.
+
+This acknowledgement can then be relayed back to the original sender chain, which can take action
+depending on the contents of the acknowledgement.
+
+Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and
+receive acknowledgements with the IBC modules as byte strings.
+
+Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an
+acknowledgement struct, along with encoding and decoding it, is very similar to the packet data
+example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope)
+specifies a recommended format for acknowledgements. This acknowledgement type can be imported from
+[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types).
+
+While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto):
+
+```proto expandable
+/ Acknowledgement is the recommended acknowledgement format to be used by
+/ app-specific protocols.
+/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental
+/ conflicts with other protobuf message formats used for acknowledgements.
+/ The first byte of any message with this format will be the non-ASCII values
+/ `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS:
+/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope
+message Acknowledgement {
+  / response contains either a result or an error and must be non-empty
+  oneof response {
+    bytes result = 21;
+    string error = 22;
+  }
+}
+```
+
+#### Acknowledging Packets
+
+After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can
+then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the
+acknowledgement are entirely up to the modules on the channel (just like the packet data); however, they
+may often contain information on whether the packet was successfully processed, along
+with some additional data that could be useful for remediation if the packet processing failed.
+
+Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and
+acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback
+is responsible for decoding the acknowledgement and processing it.
+
+```go expandable
+func OnAcknowledgementPacket(
+  ctx sdk.Context,
+  packet channeltypes.Packet,
+  acknowledgement []byte,
+) (*sdk.Result, error) {
+  / Decode acknowledgement
+  ack := DecodeAcknowledgement(acknowledgement)
+
+  / process ack
+  res, err := processAck(ack)
+
+  return res, err
+}
+```
+
+#### Timeout Packets
+
+If the timeout for a packet is reached before the packet is successfully received, or the
+counterparty channel end is closed before the packet is successfully received, then the receiving
+chain can no longer process it. Thus, the sending chain must process the timeout using
+`OnTimeoutPacket` to handle this situation. Again, the IBC module will verify that the timeout is
+indeed valid, so our module only needs to implement the state machine logic for what to do once a
+timeout is reached and the packet can no longer be received.
+
+```go
+func OnTimeoutPacket(
+  ctx sdk.Context,
+  packet channeltypes.Packet,
+) (*sdk.Result, error) {
+  / do custom timeout logic
+}
+```
+
+### Routing
+
+As mentioned above, modules must implement the IBC module interface (which contains both channel
+handshake callbacks and packet handling callbacks). The concrete implementation of this interface
+must be registered with the module name as a route on the IBC `Router`.
+
+```go expandable
+/ app.go
+func NewApp(...args) *App {
+  / ...
+
+  / Create static IBC router, add module routes, then set and seal it
+  ibcRouter := port.NewRouter()
+  ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule)
+  / Note: moduleCallbacks must implement IBCModule interface
+  ibcRouter.AddRoute(moduleName, moduleCallbacks)
+
+  / Setting Router will finalize all routes by sealing router
+  / No more routes can be added
+  app.IBCKeeper.SetRouter(ibcRouter)
+  / ...
+}
+```
+
+## Working Example
+
+For a real working example of an IBC application, you can look through the `ibc-transfer` module
+which implements everything discussed above.
+ +Here are the useful parts of the module to look at: + +[Binding to transfer +port](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/genesis.go) + +[Sending transfer +packets](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/relay.go) + +[Implementing IBC +callbacks](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/ibc_module.go) diff --git a/docs/ibc/next/ibc/apps/bindports.mdx b/docs/ibc/next/ibc/apps/bindports.mdx new file mode 100644 index 00000000..ad6d7929 --- /dev/null +++ b/docs/ibc/next/ibc/apps/bindports.mdx @@ -0,0 +1,108 @@ +--- +title: Bind ports +--- + +## Synopsis + +Learn what changes to make to bind modules to their ports on initialization. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/next/ibc/overview) +- [IBC default integration](/docs/ibc/next/ibc/integration) + + +Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented: + +> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather it refers to the application module the port binds. For IBC Modules built with the Cosmos SDK, it defaults to the module's name and for Cosmwasm contracts it defaults to the contract address. + +1. Add port ID to the `GenesisState` proto definition: + +```protobuf +message GenesisState { + string port_id = 1; + / other fields +} +``` + +2. Add port ID as a key to the module store: + +```go expandable +/ x//types/keys.go +const ( + / ModuleName defines the IBC Module name + ModuleName = "moduleName" + + / Version defines the current version the IBC + / module supports + Version = "moduleVersion-1" + + / PortID is the default port id that module binds to + PortID = "portID" + + / ... +) +``` + +3. 
Add port ID to `x//types/genesis.go`: + +```go expandable +/ in x//types/genesis.go + +/ DefaultGenesisState returns a GenesisState with "portID" as the default PortID. +func DefaultGenesisState() *GenesisState { + return &GenesisState{ + PortId: PortID, + / additional k-v fields +} +} + +/ Validate performs basic genesis state validation returning an error upon any +/ failure. +func (gs GenesisState) + +Validate() + +error { + if err := host.PortIdentifierValidator(gs.PortId); err != nil { + return err +} + /additional validations + + return gs.Params.Validate() +} +``` + +4. Set the port in the module keeper's for `InitGenesis`: + + + The capability module has been removed so port binding has also changed + + +```go expandable +/ SetPort sets the portID for the transfer module. Used in InitGenesis +func (k Keeper) + +SetPort(ctx sdk.Context, portID string) { + store := k.storeService.OpenKVStore(ctx) + if err := store.Set(types.PortKey, []byte(portID)); err != nil { + panic(err) +} +} + + / Initialize any other module state, like params with SetParams. +func (k Keeper) + +SetParams(ctx sdk.Context, params types.Params) { + store := k.storeService.OpenKVStore(ctx) + bz := k.cdc.MustMarshal(¶ms) + if err := store.Set([]byte(types.ParamsKey), bz); err != nil { + panic(err) +} +} + / ... +``` + +The module is set to the desired port. The setting and sealing happens during creation of the IBC router. diff --git a/docs/ibc/next/ibc/apps/ibcmodule.mdx b/docs/ibc/next/ibc/apps/ibcmodule.mdx new file mode 100644 index 00000000..5beead02 --- /dev/null +++ b/docs/ibc/next/ibc/apps/ibcmodule.mdx @@ -0,0 +1,398 @@ +--- +title: Implement IBCModule interface and callbacks +--- + +## Synopsis + +Learn how to implement the `IBCModule` interface and all of the callbacks it requires. + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). 
This interface contains all of the callbacks IBC expects modules to implement. They include callbacks related to the channel handshake and closing, as well as the packet callbacks (`OnRecvPacket`, `OnAcknowledgementPacket` and `OnTimeoutPacket`).
+
+```go
+/ IBCModule implements the ICS26 interface for the given keeper.
+/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go,
+/ but ultimately file structure is up to the developer
+type IBCModule struct {
+  keeper keeper.Keeper
+}
+```
+
+Additionally, in the `module.go` file, add the following line:
+
+```go
+var (
+  _ module.AppModule      = AppModule{}
+  _ module.AppModuleBasic = AppModuleBasic{}
+  / Add this line
+  _ porttypes.IBCModule   = IBCModule{}
+)
+```
+
+
+## Pre-requisite readings
+
+- [IBC Overview](/docs/ibc/next/ibc/overview)
+- [IBC default integration](/docs/ibc/next/ibc/integration)
+
+
+## Channel handshake callbacks
+
+This section will describe the callbacks that are called during channel handshake execution.
+
+Here are the channel handshake callbacks that modules are expected to implement:
+
+> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `checkArguments` and `negotiateAppVersion` functions.
+
+```go expandable
+/ Called by IBC Handler on MsgOpenInit
+func (im IBCModule) OnChanOpenInit(
+  ctx sdk.Context,
+  order channeltypes.Order,
+  connectionHops []string,
+  portID string,
+  channelID string,
+  counterparty channeltypes.Counterparty,
+  version string,
+) (string, error) {
+  / ... do custom initialization logic
+
+  / Use above arguments to determine if we want to abort handshake
+  / Examples:
+  / - Abort if order == UNORDERED,
+  / - Abort if version is unsupported
+  if err := checkArguments(args); err != nil {
+    return "", err
+  }
+
+  return version, nil
+}
+
+/ Called by IBC Handler on MsgOpenTry
+func (im IBCModule) OnChanOpenTry(
+  ctx sdk.Context,
+  order channeltypes.Order,
+  connectionHops []string,
+  portID,
+  channelID string,
+  counterparty channeltypes.Counterparty,
+  counterpartyVersion string,
+) (string, error) {
+  / ... do custom initialization logic
+
+  / Use above arguments to determine if we want to abort handshake
+  if err := checkArguments(args); err != nil {
+    return "", err
+  }
+
+  / Construct application version
+  / IBC applications must return the appropriate application version
+  / This can be a simple string or it can be a complex version constructed
+  / from the counterpartyVersion and other arguments.
+  / The version returned will be the channel version used for both channel ends.
+  appVersion := negotiateAppVersion(counterpartyVersion, args)
+
+  return appVersion, nil
+}
+
+/ Called by IBC Handler on MsgOpenAck
+func (im IBCModule) OnChanOpenAck(
+  ctx sdk.Context,
+  portID,
+  channelID string,
+  counterpartyVersion string,
+) error {
+  if counterpartyVersion != types.Version {
+    return sdkerrors.Wrapf(types.ErrInvalidVersion, "invalid counterparty version: %s, expected %s", counterpartyVersion, types.Version)
+  }
+
+  / do custom logic
+
+  return nil
+}
+
+/ Called by IBC Handler on MsgOpenConfirm
+func (im IBCModule) OnChanOpenConfirm(
+  ctx sdk.Context,
+  portID,
+  channelID string,
+) error {
+  / do custom logic
+
+  return nil
+}
+```
+
+### Channel closing callbacks
+
+The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. Closing a channel is a 2-step handshake: the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`.
+
+Currently, all IBC modules in this repository return an error for `OnChanCloseInit` to prevent the channels from closing. This is because any user can call `ChanCloseInit` by submitting a `MsgChannelCloseInit` transaction.
+
+```go expandable
+/ Called by IBC Handler on MsgCloseInit
+func (im IBCModule) OnChanCloseInit(
+  ctx sdk.Context,
+  portID,
+  channelID string,
+) error {
+  / ... do custom finalization logic
+
+  / Use above arguments to determine if we want to abort handshake
+  err := checkArguments(args)
+
+  return err
+}
+
+/ Called by IBC Handler on MsgCloseConfirm
+func (im IBCModule) OnChanCloseConfirm(
+  ctx sdk.Context,
+  portID,
+  channelID string,
+) error {
+  / ... do custom finalization logic
+
+  / Use above arguments to determine if we want to abort handshake
+  err := checkArguments(args)
+
+  return err
+}
+```
+
+### Channel handshake version negotiation
+
+Application modules are expected to verify the versioning used during the channel handshake procedure.
+
+- `OnChanOpenInit` will verify that the relayer-chosen parameters
+  are valid and perform any custom `INIT` logic.
+  It may return an error if the chosen parameters are invalid,
+  in which case the handshake is aborted.
+  If the provided version string is non-empty, `OnChanOpenInit` should return
+  the version string if valid or an error if the provided version is invalid.
+  **If the version string is empty, `OnChanOpenInit` is expected to
+  return a default version string representing the version(s)
+  it supports.**
+  If there is no default version string for the application,
+  it should return an error if the provided version is an empty string.
+- `OnChanOpenTry` will verify the relayer-chosen parameters along with the
+  counterparty-chosen version string and perform custom `TRY` logic.
+  If the relayer-chosen parameters
+  are invalid, the callback must return an error to abort the handshake.
+  If the counterparty-chosen version is not compatible with this module's
+  supported versions, the callback must return an error to abort the handshake.
+  If the versions are compatible, the try callback must select the final version
+  string and return it to core IBC.
+  `OnChanOpenTry` may also perform custom initialization logic.
+- `OnChanOpenAck` will error if the counterparty-selected version string
+  is invalid and abort the handshake. It may also perform custom ACK logic.
+
+Versions must be strings but can implement any versioning structure. If your application plans to
+have linear releases, then semantic versioning is recommended. If your application plans to release
+various features in between major releases, then it is advised to use the same versioning scheme
+as IBC. This versioning scheme specifies a version identifier and a compatible feature set with
+that identifier. Valid version selection includes selecting a compatible version identifier with
+a subset of features supported by your application for that version. The struct used for this
+scheme can be found in [03-connection/types](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection/types/version.go#L16).
+
+Since the version type is a string, applications can do simple version verification
+via string matching, or they can use the already implemented versioning system and pass the proto-encoded
+version into each handshake call as necessary.
+
+ICS20 currently implements basic string matching with a single supported version.
+
+## Packet callbacks
+
+Just as IBC expects modules to implement callbacks for channel handshakes, it also expects modules to implement callbacks for handling the packet flow through a channel, as defined in the `IBCModule` interface.
+ +Once a module A and module B are connected to each other, relayers can start relaying packets and acknowledgements back and forth on the channel. + +![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png) + +Briefly, a successful packet flow works as follows: + +1. Module A sends a packet through the IBC module +2. The packet is received by module B +3. If module B writes an acknowledgement of the packet then module A will process the acknowledgement +4. If the packet is not successfully received before the timeout, then module A processes the packet's timeout. + +### Sending packets + +Modules **do not send packets through callbacks**, since the modules initiate the action of sending packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC +module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `EncodePacketData(customPacketData)` function. + +```go expandable +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + +### Receiving packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets +invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC +keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. 
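The `EncodePacketData` and `DecodePacketData` helpers referenced in the pseudo code are left entirely to the developer. As a minimal, self-contained sketch — assuming a hypothetical `CustomPacketData` type and JSON encoding (production modules frequently use protobuf instead) — the pair could look like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CustomPacketData is a hypothetical packet data type; real modules define
// their own fields and agree on the encoding with the counterparty.
type CustomPacketData struct {
	Sender   string `json:"sender"`
	Receiver string `json:"receiver"`
	Amount   uint64 `json:"amount"`
}

// EncodePacketData marshals the packet data into the bytes placed in packet.Data.
func EncodePacketData(data CustomPacketData) ([]byte, error) {
	return json.Marshal(data)
}

// DecodePacketData unmarshals packet.Data back into the concrete type,
// returning an error for malformed bytes.
func DecodePacketData(bz []byte) (CustomPacketData, error) {
	var data CustomPacketData
	if err := json.Unmarshal(bz, &data); err != nil {
		return CustomPacketData{}, err
	}
	return data, nil
}

func main() {
	bz, _ := EncodePacketData(CustomPacketData{Sender: "alice", Receiver: "bob", Amount: 10})
	decoded, _ := DecodePacketData(bz)
	fmt.Println(decoded.Receiver, decoded.Amount)
}
```

Whatever scheme is chosen, both channel ends must use the same one, since the bytes are opaque to core IBC.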
+
+Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface.
+The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the
+acknowledgement back to the sender module.
+
+The state changes that occurred during this callback will only be written if:
+
+- the acknowledgement was successful, as indicated by the `Success()` function of the acknowledgement
+- the acknowledgement returned is nil, indicating that an asynchronous process is occurring
+
+NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes
+when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written
+for asynchronous acknowledgements.
+
+> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodePacketData(packet.Data)` function.
+
+```go expandable
+func (im IBCModule) OnRecvPacket(
+  ctx sdk.Context,
+  packet channeltypes.Packet,
+) ibcexported.Acknowledgement {
+  / Decode the packet data
+  packetData := DecodePacketData(packet.Data)
+
+  / do application state changes based on packet data and return the acknowledgement
+  / NOTE: The acknowledgement will indicate to the IBC handler if the application
+  / state changes should be written via the `Success()` function. Application state
+  / changes are only written if the acknowledgement is successful or the acknowledgement
+  / returned is nil, indicating that an asynchronous acknowledgement will occur.
+  ack := processPacket(ctx, packet, packetData)
+
+  return ack
+}
+```
+
+Reminder, the `Acknowledgement` interface:
+
+```go
+/ Acknowledgement defines the interface used to return
+/ acknowledgements in the OnRecvPacket callback.
+type Acknowledgement interface {
+  Success() bool
+  Acknowledgement() []byte
+}
+```
+
+### Acknowledging packets
+
+After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can
+then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the
+acknowledgement are entirely up to the modules on the channel (just like the packet data); however, they
+may often contain information on whether the packet was successfully processed, along
+with some additional data that could be useful for remediation if the packet processing failed.
+
+Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and
+acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback
+is responsible for decoding the acknowledgement and processing it.
+
+> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodeAcknowledgement(acknowledgement)` and `processAck(ack)` functions.
+
+```go expandable
+func (im IBCModule) OnAcknowledgementPacket(
+  ctx sdk.Context,
+  packet channeltypes.Packet,
+  acknowledgement []byte,
+) (*sdk.Result, error) {
+  / Decode acknowledgement
+  ack := DecodeAcknowledgement(acknowledgement)
+
+  / process ack
+  res, err := processAck(ack)
+
+  return res, err
+}
+```
+
+### Timeout packets
+
+If the timeout for a packet is reached before the packet is successfully received, or the
+counterparty channel end is closed before the packet is successfully received, then the receiving
+chain can no longer process it. Thus, the sending chain must process the timeout using
+`OnTimeoutPacket` to handle this situation. Again, the IBC module will verify that the timeout is
+indeed valid, so our module only needs to implement the state machine logic for what to do once a
+timeout is reached and the packet can no longer be received.
+
+```go
+func (im IBCModule) OnTimeoutPacket(
+  ctx sdk.Context,
+  packet channeltypes.Packet,
+) (*sdk.Result, error) {
+  / do custom timeout logic
+}
+```
+
+### Optional interfaces
+
+The following interfaces are optional and MAY be implemented by an IBCModule.
+
+#### PacketDataUnmarshaler
+
+The `PacketDataUnmarshaler` interface is defined as follows:
+
+```go
+/ PacketDataUnmarshaler defines an optional interface which allows a middleware to
+/ request the packet data to be unmarshaled by the base application.
+type PacketDataUnmarshaler interface {
+  / UnmarshalPacketData unmarshals the packet data into a concrete type.
+  / ctx, portID, channelID are provided as arguments, so that (if needed)
+  / the packet data can be unmarshaled based on the channel version.
+  / The version of the underlying app is also returned.
+  UnmarshalPacketData(ctx sdk.Context, portID, channelID string, bz []byte) (interface{}, string, error)
+}
+```
+
+The implementation of `UnmarshalPacketData` should unmarshal the bytes into the packet data type defined for an IBC stack.
+The base application of an IBC stack should unmarshal the bytes into its packet data type, while a middleware may simply defer the call to the underlying application.
+
+This interface allows middlewares to unmarshal packet data in order to make use of interfaces the packet data type implements.
+For example, the callbacks middleware makes use of this function to access packet data types which implement the `PacketData` and `PacketDataProvider` interfaces.
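Interfaces like the `Acknowledgement` interface shown earlier can be satisfied by a small concrete type. The following is a minimal, self-contained sketch; the `ExampleAck` name and JSON envelope are purely illustrative, since ibc-go already ships `channeltypes.Acknowledgement` for the recommended format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Acknowledgement mirrors the interface shown above, redeclared locally so
// the example is self-contained.
type Acknowledgement interface {
	Success() bool
	Acknowledgement() []byte
}

// ExampleAck is a hypothetical acknowledgement carrying either a result or an
// error string, JSON-encoded when handed back to the IBC handler.
type ExampleAck struct {
	Result []byte `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
}

// Success reports whether packet processing succeeded.
func (a ExampleAck) Success() bool { return a.Error == "" }

// Acknowledgement returns the bytes committed on chain and relayed back.
func (a ExampleAck) Acknowledgement() []byte {
	bz, err := json.Marshal(a)
	if err != nil {
		panic(err)
	}
	return bz
}

func main() {
	var ack Acknowledgement = ExampleAck{Error: "packet processing failed"}
	fmt.Println(ack.Success(), string(ack.Acknowledgement()))
}
```

Because the sender chain only sees the raw bytes, both ends must agree on this envelope just as they agree on the packet data encoding.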
diff --git a/docs/ibc/next/ibc/apps/ibcv2apps.mdx b/docs/ibc/next/ibc/apps/ibcv2apps.mdx new file mode 100644 index 00000000..93eda3a0 --- /dev/null +++ b/docs/ibc/next/ibc/apps/ibcv2apps.mdx @@ -0,0 +1,342 @@ +--- +title: IBC v2 Applications +--- + +## Synopsis + +Learn how to implement IBC v2 applications + +To build an IBC v2 application the following steps are required: + +1. [Implement the `IBCModule` interface](#implement-the-ibcmodule-interface) +2. [Bind Ports](#bind-ports) +3. [Implement the IBCModule Keeper](#implement-the-ibcmodule-keeper) +4. [Implement application payload and success acknowledgement](#packets-and-payloads) +5. [Set and Seal the IBC Router](#routing) + +Highlighted improvements for app developers with IBC v2: + +- No need to support channel handshake callbacks +- Flexibility on upgrading application versioning, no need to use channel upgradability to renegotiate an application version, simply support the application version on both sides of the connection. +- Flexibility to choose your desired encoding type. + +## Implement the `IBCModule` interface + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L9-L53). This interface contains all of the callbacks IBC expects modules to implement. Note that for IBC v2, an application developer no longer needs to implement callbacks for the channel handshake. Note that this interface is distinct from the [porttypes.IBCModule interface][porttypes.IBCModule] used for IBC Classic. + +```go +/ IBCModule implements the application interface given the keeper. 
+/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go, +/ but ultimately file structure is up to the developer +type IBCModule struct { + keeper keeper.Keeper +} +``` + +Additionally, in the `module.go` file, add the following line: + +```go +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + / Add this line + _ porttypes.IBCModule = IBCModule{ +} +) +``` + +### Packet callbacks + +IBC expects modules to implement callbacks for handling the packet lifecycle, as defined in the `IBCModule` interface. + +With IBC v2, modules are not directly connected. Instead a pair of clients are connected and register the counterparty clientID. Packets are routed to the relevant application module by the portID registered in the Router. Relayers send packets between the routers/packet handlers on each chain. + +![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow_v2.png) + +Briefly, a successful packet flow works as follows: + +1. A user sends a message to the IBC packet handler +2. The IBC packet handler validates the message, creates the packet and stores the commitment and returns the packet sequence number. The [`Payload`](https://github.com/cosmos/ibc-go/blob/fe25b216359fab71b3228461b05dbcdb1a554158/proto/ibc/core/channel/v2/packet.proto#L26-L38), which contains application specific data, is routed to the relevant application. +3. If the counterparty writes an acknowledgement of the packet then the sending chain will process the acknowledgement. +4. If the packet is not successfully received before the timeout, then the sending chain processes the packet's timeout. 
+
+#### Sending packets
+
+[`MsgSendPacket`](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/tx.pb.go#L69-L75) is sent by a user to the [channel v2 message server](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/keeper/msg_server.go), which calls `ChannelKeeperV2.SendPacket`. This validates the message, creates the packet, stores the commitment and returns the packet sequence number. The application must specify its own payload, which is sent with `MsgSendPacket`.
+
+An application developer needs to implement the custom logic the application executes when a packet is sent.
+
+```go expandable
+/ OnSendPacket logic
+func (im *IBCModule) OnSendPacket(
+  ctx sdk.Context,
+  sourceChannel string,
+  destinationChannel string,
+  sequence uint64,
+  payload channeltypesv2.Payload,
+  signer sdk.AccAddress,
+) error {
+  / implement any validation
+
+  / implement payload decoding and validation
+
+  / call the relevant keeper method for state changes as a result of application logic
+
+  / emit events or telemetry data
+
+  return nil
+}
+```
+
+#### Receiving packets
+
+To handle receiving packets, the module must implement the `OnRecvPacket` callback. An application module should validate and confirm support for the given version and encoding method used, as there is greater flexibility in IBC v2 to support a range of versions and encoding methods.
+The `OnRecvPacket` callback is invoked by the IBC module after the packet has been proven valid and correctly processed by the IBC
+keepers.
+Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state
+changes given the packet data, without worrying about whether the packet is valid or not.
+
+Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface.
+The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the
+acknowledgement back to the sender module.
+
+The state changes that occur during this callback depend on the result, which is one of:
+
+- the packet processing was successful, as indicated by `PacketStatus_Success`, and an `Acknowledgement()` will be written
+- the packet processing was unsuccessful, as indicated by `PacketStatus_Failure`, and an `ackErr` will be written
+
+Note that with IBC v2 the error acknowledgements are standardised and cannot be customised.
+
+```go
+func (im IBCModule) OnRecvPacket(
+  ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, payload channeltypesv2.Payload, relayer sdk.AccAddress,
+) channeltypesv2.RecvPacketResult {
+  / do application state changes based on payload and return the result
+  / state changes should be written via the `RecvPacketResult`
+
+  return recvResult
+}
+```
+
+#### Acknowledging packets
+
+After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can
+then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the
+acknowledgement are entirely up to the application developer.
+
+IBC will pass in the acknowledgements as `[]byte` to this callback. The callback
+is responsible for decoding the acknowledgement and processing it. The acknowledgement is serialised and deserialised using JSON.
+
+```go expandable
+func (im IBCModule) OnAcknowledgementPacket(
+  ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, acknowledgement []byte, payload channeltypesv2.Payload, relayer sdk.AccAddress,
+) error {
+  / check the type of the acknowledgement
+
+  / if not ackErr, unmarshal the JSON acknowledgement and unmarshal the packet payload
+
+  / perform any application specific logic for processing the acknowledgement
+
+  / emit events
+
+  return nil
+}
+```
+
+#### Timeout packets
+
+If the timeout for a packet is reached before the packet is successfully received, or the receiving
+chain can no longer process the packet, the sending chain must process the timeout using
+`OnTimeoutPacket`. Again, the IBC module will verify that the timeout is
+valid, so our module only needs to implement the state machine logic for what to do once a
+timeout is reached and the packet can no longer be received.
+
+```go
+func (im IBCModule) OnTimeoutPacket(
+  ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, payload channeltypesv2.Payload, relayer sdk.AccAddress,
+) error {
+  / unmarshal packet data
+
+  / do custom timeout logic, e.g. refund tokens for transfer
+}
+```
+
+#### PacketDataUnmarshaler
+
+IBC v2 applications are required to implement the `PacketDataUnmarshaler` interface, because the encoding type is specified by the `Payload` and multiple encoding types are supported.
+
+```go
+type PacketDataUnmarshaler interface {
+  / UnmarshalPacketData unmarshals the packet data into a concrete type.
+  / The payload is provided and the packet data interface is returned.
+  UnmarshalPacketData(payload channeltypesv2.Payload) (interface{}, error)
+}
+```
+
+## Bind Ports
+
+Currently, ports must be bound on app initialization.
In order to bind modules to their respective ports on initialization, the following needs to be implemented: + +> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather, it refers to the application module to which the port binds. For IBC modules built with the Cosmos SDK, it defaults to the module's name, and for CosmWasm contracts it defaults to the contract address. + +Add port ID to the `GenesisState` proto definition: + +```protobuf +message GenesisState { + string port_id = 1; + / other fields +} +``` + +You can see an example for transfer [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/applications/transfer/v1/genesis.proto). + +Add port ID as a key to the module store: + +```go +/ x//types/keys.go +const ( + / ModuleName defines the IBC Module name + ModuleName = "moduleName" + + / PortID is the default port id that module binds to + PortID = "portID" + + / ... +) +``` + +Note that with IBC v2, the version does not need to be added as a key (as required with IBC classic) because versioning of applications is now contained within the [packet Payload](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/packet.go#L23-L32). + +Add port ID to `x//types/genesis.go`: + +```go expandable +/ in x//types/genesis.go + +/ DefaultGenesisState returns a GenesisState +/ with the portID defined in keys.go +func DefaultGenesisState() *GenesisState { + return &GenesisState{ + PortId: PortID, + / additional k-v fields +} +} + +/ Validate performs basic genesis state validation +/ returning an error upon any failure. +func (gs GenesisState) Validate() error { + if err := host.PortIdentifierValidator(gs.PortId); err != nil { + return err +} + / additional validations + + return gs.Params.Validate() +} +``` + +Set the port in the module keeper for `InitGenesis`: + +```go expandable +/ SetPort sets the portID for the transfer module.
Used in InitGenesis +func (k Keeper) SetPort(ctx sdk.Context, portID string) { + store := k.storeService.OpenKVStore(ctx) + if err := store.Set(types.PortKey, []byte(portID)); err != nil { + panic(err) +} +} + + / Initialize any other module state, like params with SetParams. +func (k Keeper) SetParams(ctx sdk.Context, params types.Params) { + store := k.storeService.OpenKVStore(ctx) + bz := k.cdc.MustMarshal(&params) + if err := store.Set([]byte(types.ParamsKey), bz); err != nil { + panic(err) +} +} + / ... +``` + +The module is now bound to the desired port. The setting and sealing of routes happens during creation of the IBC router. + +## Implement the IBCModule Keeper + +More information on implementing the IBCModule keeper can be found in the [keepers section](/docs/ibc/next/ibc/apps/keeper). + +## Packets and Payloads + +Application developers need to define the `Payload` contained within an [IBC packet](https://github.com/cosmos/ibc-go/blob/fe25b216359fab71b3228461b05dbcdb1a554158/proto/ibc/core/channel/v2/packet.proto#L11-L24). Note that in IBC v2 the `timeoutHeight` has been removed and only `timeoutTimestamp` is used. A packet can contain multiple payloads in a list. Each Payload includes: + +```protobuf expandable +/ Payload contains the source and destination ports and payload for the application (version, encoding, raw bytes) + +message Payload { + / specifies the source port of the packet. + string source_port = 1; + / specifies the destination port of the packet. + string destination_port = 2; + / version of the specified application. + string version = 3; + / the encoding used for the provided value. + string encoding = 4; + / the raw bytes for the payload. + bytes value = 5; +} +``` + +Note that compared to IBC classic, where the application's version and encoding are negotiated during the channel handshake, IBC v2 provides enhanced flexibility. The application version and encoding used by the Payload are defined in the Payload itself.
An example Payload is illustrated below: + +```go expandable +type MyAppPayloadData struct { + Field1 string + Field2 uint64 +} + +/ Marshal your payload to bytes using your encoding +bz, err := json.Marshal(MyAppPayloadData{ + Field1: "example", Field2: 7 +}) + +/ Wrap it in a channel v2 Payload + payload := channeltypesv2.NewPayload( + sourcePort, + destPort, + "my-app-v1", / App version + channeltypesv2.EncodingJSON, / Encoding type, e.g. JSON, protobuf or ABI + bz, / Encoded data +) +``` + +It is also possible to define your own custom success acknowledgement, which is returned in the `RecvPacketResult` and relayed back to the sender if the packet is successfully received. Note that if the packet processing fails, it is not possible to define a custom error acknowledgement; a constant `ackErr` is returned instead. + +## Routing + +More information on implementing the IBC Router can be found in the [routing section](/docs/ibc/next/ibc/apps/routing). + +[porttypes.IBCModule]: https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/module.go diff --git a/docs/ibc/next/ibc/apps/keeper.mdx b/docs/ibc/next/ibc/apps/keeper.mdx new file mode 100644 index 00000000..e92880d9 --- /dev/null +++ b/docs/ibc/next/ibc/apps/keeper.mdx @@ -0,0 +1,75 @@ +--- +title: Keeper +--- + +## Synopsis + +Learn how to implement the IBC Module keeper. Relevant for IBC classic and v2. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/next/ibc/overview) +- [IBC default integration](/docs/ibc/next/ibc/integration) + + +In the previous sections, on channel handshake callbacks and port binding in `InitGenesis`, a reference was made to keeper methods that need to be implemented when creating a custom IBC module. Below is an overview of how to define an IBC module's keeper.
+ +> Note that some code has been left out for clarity; to get a full code overview, please refer to [the transfer module's keeper in the ibc-go repo](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/keeper.go). + +```go expandable +/ Keeper defines the IBC app module keeper +type Keeper struct { + storeKey sdk.StoreKey + cdc codec.BinaryCodec + paramSpace paramtypes.Subspace + + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + + / ... additional according to custom logic +} + +/ NewKeeper creates a new IBC app module Keeper instance +func NewKeeper( + / args +) Keeper { + / ... + + return Keeper{ + cdc: cdc, + storeKey: key, + paramSpace: paramSpace, + + channelKeeper: channelKeeper, + portKeeper: portKeeper, + + / ... additional according to custom logic +} +} + +/ GetPort returns the portID for the IBC app module. Used in ExportGenesis +func (k Keeper) GetPort(ctx sdk.Context) string { + store := ctx.KVStore(k.storeKey) + return string(store.Get(types.PortKey)) +} + +/ SetPort sets the portID for the IBC app module. Used in InitGenesis +func (k Keeper) SetPort(ctx sdk.Context, portID string) { + store := ctx.KVStore(k.storeKey) + store.Set(types.PortKey, []byte(portID)) +} + +/ ... additional according to custom logic +``` diff --git a/docs/ibc/next/ibc/apps/packets_acks.mdx b/docs/ibc/next/ibc/apps/packets_acks.mdx new file mode 100644 index 00000000..052bb470 --- /dev/null +++ b/docs/ibc/next/ibc/apps/packets_acks.mdx @@ -0,0 +1,163 @@ +--- +title: Define packets and acks +--- + +## Synopsis + +Learn how to define custom packet and acknowledgement structs and how to encode and decode them. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/next/ibc/overview) +- [IBC default integration](/docs/ibc/next/ibc/integration) + + + +## Custom packets + +Modules connected by a channel must agree on what application data they are sending over the +channel, as well as how they will encode/decode it.
This process is not specified by IBC as it is up +to each application module to determine how to implement this agreement. However, for most +applications this will happen as a version negotiation during the channel handshake. While more +complex version negotiation is possible to implement inside the channel opening handshake, a very +simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go). + +Thus, a module must define its custom packet data structure, along with a well-defined way to +encode and decode it to and from `[]byte`. + +```go expandable +/ Custom packet data defined in application module +type CustomPacketData struct { + / Custom fields ... +} + +func EncodePacketData(packetData CustomPacketData) []byte { + / encode packetData to bytes +} + +func DecodePacketData(encoded []byte) CustomPacketData { + / decode from bytes to packet data +} +``` + +> Note that the `CustomPacketData` struct is defined in the proto definition and then compiled by the protobuf compiler. + +Then a module must encode its packet data before sending it through IBC. + +```go expandable +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + +A module receiving a packet must decode the `PacketData` into a structure it expects so that it can +act on it. + +```go +/ Receiving custom application packet data (in OnRecvPacket) + packetData := DecodePacketData(packet.Data) +/ handle received custom packet data +``` + +### Optional interfaces + +The following interfaces are optional and MAY be implemented by a custom packet type. +They allow middlewares such as callbacks to access information stored within the packet data.
+ +#### PacketData interface + +The `PacketData` interface is defined as follows: + +```go +/ PacketData defines an optional interface which an application's packet data structure may implement. +type PacketData interface { + / GetPacketSender returns the sender address of the packet data. + / If the packet sender is unknown or undefined, an empty string should be returned. + GetPacketSender(sourcePortID string) string +} +``` + +The implementation of `GetPacketSender` should return the sender of the packet data. +If the packet sender is unknown or undefined, an empty string should be returned. + +This interface is intended to give IBC middlewares access to the packet sender of a packet data type. + +#### PacketDataProvider interface + +The `PacketDataProvider` interface is defined as follows: + +```go +/ PacketDataProvider defines an optional interface for retrieving custom packet data stored on behalf of another application. +/ An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information. +/ A short term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application. +/ This interface standardizes that behaviour. Upon realization of the ability for middlewares to define their own packet data types, this interface will be deprecated and removed with time. +type PacketDataProvider interface { + / GetCustomPacketData returns the packet data held on behalf of another application. + / The name the information is stored under should be provided as the key. + / If no custom packet data exists for the key, nil should be returned. + GetCustomPacketData(key string) interface{} +} +``` + +The implementation of `GetCustomPacketData` should return packet data held on behalf of another application (if present and supported).
+If this functionality is not supported, it should return nil. Otherwise it should return the packet data associated with the provided key. + +This interface gives IBC applications access to the packet data information embedded into the base packet data type. +Within transfer and interchain accounts, the embedded packet data is stored within the Memo field. + +Once all IBC applications within an IBC stack are capable of creating/maintaining their own packet data types, this interface function will be deprecated and removed. + +## Acknowledgements + +Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing. +In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement +will be written once the packet has been processed by the application, which may be well after the packet receipt. + +NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement +for a packet as soon as it has been received from the IBC module. + +This acknowledgement can then be relayed back to the original sender chain, which can take action +depending on the contents of the acknowledgement. + +Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and +receive acknowledgements to and from the IBC module as byte strings. + +Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an +acknowledgement struct, along with encoding and decoding it, is very similar to the packet data +example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope) +specifies a recommended format for acknowledgements. This acknowledgement type can be imported from +[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types).
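To make that encode/decode agreement concrete, here is a minimal sketch of a result/error acknowledgement struct with a JSON round-trip. This plain struct is illustrative only; real modules should use the canonical protobuf `Acknowledgement` from ibc-go's channel types.

```go
import "encoding/json"

// Acknowledgement mirrors the ICS-04 result/error envelope as a plain
// Go struct for illustration purposes.
type Acknowledgement struct {
	Result []byte `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
}

// Success reports whether the acknowledgement carries a result
// rather than an error.
func (a Acknowledgement) Success() bool {
	return a.Error == ""
}

// EncodeAcknowledgement and DecodeAcknowledgement implement the
// byte-string agreement between the two chains' modules.
func EncodeAcknowledgement(ack Acknowledgement) ([]byte, error) {
	return json.Marshal(ack)
}

func DecodeAcknowledgement(bz []byte) (Acknowledgement, error) {
	var ack Acknowledgement
	err := json.Unmarshal(bz, &ack)
	return ack, err
}
```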
+ +While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto): + +```protobuf expandable +/ Acknowledgement is the recommended acknowledgement format to be used by +/ app-specific protocols. +/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental +/ conflicts with other protobuf message formats used for acknowledgements. +/ The first byte of any message with this format will be the non-ASCII values +/ `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS: +/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope +message Acknowledgement { + / response contains either a result or an error and must be non-empty + oneof response { + bytes result = 21; + string error = 22; + } +} +``` diff --git a/docs/ibc/next/ibc/apps/routing.mdx b/docs/ibc/next/ibc/apps/routing.mdx new file mode 100644 index 00000000..862e5c9d --- /dev/null +++ b/docs/ibc/next/ibc/apps/routing.mdx @@ -0,0 +1,40 @@ +--- +title: Routing +--- + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/next/ibc/overview) +- [IBC default integration](/docs/ibc/next/ibc/integration) + + + +## Synopsis + +Learn how to hook a route to the IBC router for the custom IBC module. + +Modules must implement the `IBCModule` interface (which contains both channel +handshake callbacks for IBC classic only, and packet handling callbacks for IBC classic and v2). The concrete implementation of this interface +must be registered with the module name as a route on the IBC `Router`. + +```go expandable +/ app.go +func NewApp(...args) *App { + / ...
+ + / Create static IBC router, add module routes, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) + / Note: moduleCallbacks must implement IBCModule interface + ibcRouter.AddRoute(moduleName, moduleCallbacks) + + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / ... +} +``` diff --git a/docs/ibc/next/ibc/best-practices.mdx b/docs/ibc/next/ibc/best-practices.mdx new file mode 100644 index 00000000..96790665 --- /dev/null +++ b/docs/ibc/next/ibc/best-practices.mdx @@ -0,0 +1,20 @@ +--- +title: Best Practices +--- + +## Identifying legitimate channels + +Identifying which channel to use can be difficult as it requires verifying information about the chains you want to connect to. +Channels are based on a light client. A chain can be uniquely identified by its chain ID and validator set pairing. It is unsafe to rely only on the chain ID. +Any user can create a client with any chain ID, but only the chain with the correct validator set and chain ID can produce headers which would update that client. + +Which channel to use is based on social consensus. The desired channel should have the following properties: + +* based on a valid client (can only be updated by the chain it connects to) +* has sizable activity +* the underlying client is active + +To verify that a client is valid, you will need to obtain a header from the chain you want to connect to. This can be done by running a full node for that chain or relying on a trusted RPC endpoint. +Then you should query the light client you want to verify and obtain its latest consensus state. All consensus state fields must match the header queried for at the same height as the consensus state (root, timestamp, next validator set hash). + +Explorers and wallets are highly encouraged to follow this practice.
It is unsafe to algorithmically add new channels without following this process. diff --git a/docs/ibc/next/ibc/integration.mdx b/docs/ibc/next/ibc/integration.mdx new file mode 100644 index 00000000..949d259d --- /dev/null +++ b/docs/ibc/next/ibc/integration.mdx @@ -0,0 +1,370 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate IBC into your application + +This document outlines the required steps to integrate and configure the [IBC +module](https://github.com/cosmos/ibc-go/tree/main/modules/core) in your Cosmos SDK application and enable sending fungible token transfers to other chains. An [example app using ibc-go v10 is linked](https://github.com/gjermundgaraba/probe/tree/ibc/v10). + +## Integrating the IBC module + +Integrating the IBC module into your SDK-based application is straightforward. The general changes can be summarized in the following steps: + +- [Define additional `Keeper` fields for the new modules on the `App` type](#add-application-fields-to-app). +- [Add the module's `StoreKey`s and initialize their `Keeper`s](#configure-the-keepers). +- [Create Application Stacks with Middleware](#create-application-stacks-with-middleware) +- [Set up IBC router and add route for the `transfer` module](#register-module-routes-in-the-ibc-router). +- [Grant permissions to `transfer`'s `ModuleAccount`](#module-account-permissions). +- [Add the modules to the module `Manager`](#module-manager-and-simulationmanager). +- [Update the module `SimulationManager` to enable simulations](#module-manager-and-simulationmanager). +- [Integrate light client modules (e.g. `07-tendermint`)](#integrating-light-clients). +- [Add modules to `Begin/EndBlockers` and `InitGenesis`](#application-abci-ordering). + +### Add application fields to `App` + +We need to register the core `ibc` and `transfer` `Keeper`s.
To support the use of IBC v2, `transferv2` and `callbacksv2` must also be registered as follows: + +```go title="app.go" expandable +import ( + + / other imports + / ... + ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper" + ibctransferkeeper "github.com/cosmos/ibc-go/v10/modules/apps/transfer/keeper" + / ibc v2 imports + transferv2 "github.com/cosmos/ibc-go/v10/modules/apps/transfer/v2" + ibccallbacksv2 "github.com/cosmos/ibc-go/v10/modules/apps/callbacks/v2" +) + +type App struct { + / baseapp, keys and subspaces definitions + + / other keepers + / ... + IBCKeeper *ibckeeper.Keeper / IBC Keeper must be a pointer in the app, so we can SetRouter on it correctly + TransferKeeper ibctransferkeeper.Keeper / for cross-chain fungible token transfers + + / ... + / module and simulation manager definitions +} +``` + +### Configure the `Keeper`s + +Initialize the IBC `Keeper`s (for core `ibc` and `transfer` modules), and any additional modules you want to include. + + + **Notice** The capability module has been removed in ibc-go v10, therefore the + `ScopedKeeper` has also been removed + + +```go expandable +import ( + + / other imports + / ... + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + + ibcexported "github.com/cosmos/ibc-go/v10/modules/core/exported" + ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper" + "github.com/cosmos/ibc-go/v10/modules/apps/transfer" + ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" + ibctm "github.com/cosmos/ibc-go/v10/modules/light-clients/07-tendermint" +) + +func NewApp(...args) *App { + / define codecs and baseapp + + / ... 
other module keepers + + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[ibcexported.StoreKey]), + app.GetSubspace(ibcexported.ModuleName), + app.UpgradeKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Create Transfer Keeper + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[ibctransfertypes.StoreKey]), + app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, + app.IBCKeeper.ChannelKeeper, + app.MsgServiceRouter(), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / ... continues +} +``` + +### Create Application Stacks with Middleware + +Middleware stacks in IBC allow you to wrap an `IBCModule` with additional logic for packets and acknowledgements. This is a chain of handlers that execute in order. The transfer stack below shows how to wire up transfer to use packet forward middleware, and the callbacks middleware. Note that the order is important. + +```go expandable +/ Create Transfer Stack for IBC Classic + maxCallbackGas := uint64(10_000_000) + wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper) + +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) +/ callbacks wraps the transfer stack as its base app, and uses PacketForwardKeeper as the ICS4Wrapper +/ i.e. packet-forward-middleware is higher on the stack and sits between callbacks and the ibc channel keeper +/ Since this is the lowest level middleware of the transfer stack, it should be the first entrypoint for transfer keeper's +/ WriteAcknowledgement. 
+ cbStack := ibccallbacks.NewIBCMiddleware(transferStack, app.PacketForwardKeeper, wasmStackIBCHandler, maxCallbackGas) + +transferStack = packetforward.NewIBCMiddleware( + cbStack, + app.PacketForwardKeeper, + 0, / retries on timeout + packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp, +) +``` + +#### IBC v2 Application Stack + +For IBC v2, an example transfer stack is shown below. In this case the transfer stack is using the callbacks middleware. + +```go +/ Create IBC v2 transfer middleware stack +/ the callbacks gas limit is recommended to be 10M for use with wasm contracts + maxCallbackGas := uint64(10_000_000) + wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper) + +var ibcv2TransferStack ibcapi.IBCModule + ibcv2TransferStack = transferv2.NewIBCModule(app.TransferKeeper) + +ibcv2TransferStack = ibccallbacksv2.NewIBCMiddleware(transferv2.NewIBCModule(app.TransferKeeper), app.IBCKeeper.ChannelKeeperV2, wasmStackIBCHandler, app.IBCKeeper.ChannelKeeperV2, maxCallbackGas) +``` + +### Register module routes in the IBC `Router` + +IBC needs to know which module is bound to which port so that it can route packets to the +appropriate module and call the appropriate callbacks. The port to module name mapping is handled by +IBC's port `Keeper`. However, the mapping from module name to the relevant callbacks is accomplished +by the port +[`Router`](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/router.go) on the +`ibc` module. + +Adding the module routes allows the IBC handler to call the appropriate callback when processing a channel handshake or a packet. + +Currently, a `Router` is static so it must be initialized and set correctly on app initialization. +Once the `Router` has been set, no new routes can be added. + +```go title="app.go" expandable +import ( + + / other imports + / ... 
+ porttypes "github.com/cosmos/ibc-go/v10/modules/core/05-port/types" + ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" +) + +func NewApp(...args) *App { + / .. continuation from above + + / Create static IBC router, add transfer module route, then set and seal it + ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / ... continues +``` + +#### IBC v2 Router + +With IBC v2, there is a new [router](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/router.go) that needs to register the routes for a portID to a given IBCModule. It supports two kinds of routes: direct routes and prefix-based routes. The direct routes match one specific port ID to a module, while the prefix-based routes match any port ID with a specific prefix to a module. +For example, if a direct route named `someModule` exists, only messages addressed to exactly that port ID will be passed to the corresponding module. +However, if instead, `someModule` is a prefix-based route, port IDs like `someModuleRandomPort1`, `someModuleRandomPort2`, etc., will be passed to the module. +Note that the router will panic when you add a route that conflicts with an already existing route. This is also the case if you add a prefix-based route that conflicts with an existing direct route or vice versa. 
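To make the direct-versus-prefix matching rules concrete, the lookup behaviour can be sketched with a toy resolver. This is not the ibc-go implementation, only an illustration of the semantics described above:

```go
import "strings"

// resolveRoute illustrates the v2 router semantics: direct routes match an
// exact port ID, while prefix routes match any port ID beginning with the
// registered prefix. Direct routes are checked first.
func resolveRoute(direct, prefix map[string]string, portID string) (string, bool) {
	if module, ok := direct[portID]; ok {
		return module, true
	}
	for p, module := range prefix {
		if strings.HasPrefix(portID, p) {
			return module, true
		}
	}
	return "", false
}
```

With a direct route for `transfer` and a prefix route for `someModule`, port IDs such as `someModuleRandomPort1` resolve to `someModule`, while an unrelated port ID matches nothing.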
+ +```go +/ IBC v2 router creation + ibcRouterV2 := ibcapi.NewRouter() + +ibcRouterV2.AddRoute(ibctransfertypes.PortID, ibcv2TransferStack) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouterV2(ibcRouterV2) +``` + +### Module `Manager` and `SimulationManager` + +In order to use IBC, we need to add the new modules to the module `Manager` and to the `SimulationManager`, in case your application supports [simulations](https://docs.cosmos.network/main/learn/advanced/simulation). + +```go title="app.go" expandable +import ( + + / other imports + / ... + "github.com/cosmos/cosmos-sdk/types/module" + + ibc "github.com/cosmos/ibc-go/v10/modules/core" + "github.com/cosmos/ibc-go/v10/modules/apps/transfer" +) + +func NewApp(...args) *App { + / ... continuation from above + + app.ModuleManager = module.NewManager( + / other modules + / ... + / highlight-start ++ ibc.NewAppModule(app.IBCKeeper), ++ transfer.NewAppModule(app.TransferKeeper), + / highlight-end + ) + + / ... + + app.simulationManager = module.NewSimulationManagerFromAppModules( + / other modules + / ... + app.ModuleManager.Modules, + map[string]module.AppModuleSimulation{ +}, + ) + + / ... continues +``` + +### Module account permissions + +After that, we need to grant `Minter` and `Burner` permissions to +the `transfer` `ModuleAccount` to mint and burn relayed tokens. + +```go title="app.go" expandable +import ( + + / other imports + / ... + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + + / highlight-next-line ++ ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" +) + +/ app.go +var ( + / module account permissions + maccPerms = map[string][]string{ + / other module accounts permissions + / ... 
+ ibctransfertypes.ModuleName: { + authtypes.Minter, authtypes.Burner +}, +} +) +``` + +### Integrating light clients + +> Note that from v10 onwards, all light clients are expected to implement the [`LightClientInterface` interface](/docs/ibc/next/light-clients/developer-guide/light-client-module#implementing-the-lightclientmodule-interface) defined by core IBC, and have to be explicitly registered in a chain's app.go. This is in contrast to earlier versions of ibc-go when `07-tendermint` and `06-solomachine` were added out of the box. Follow the steps below to integrate the `07-tendermint` light client. + +All light clients must be registered with `module.Manager` in a chain's app.go file. The following code example shows how to instantiate `07-tendermint` light client module and register its `ibctm.AppModule`. + +```go title="app.go" expandable +import ( + + / other imports + / ... + "github.com/cosmos/cosmos-sdk/types/module" + / highlight-next-line ++ ibctm "github.com/cosmos/ibc-go/v10/modules/light-clients/07-tendermint" +) + +/ app.go +/ after sealing the IBC router + clientKeeper := app.IBCKeeper.ClientKeeper + storeProvider := app.IBCKeeper.ClientKeeper.GetStoreProvider() + tmLightClientModule := ibctm.NewLightClientModule(appCodec, storeProvider) + +clientKeeper.AddRoute(ibctm.ModuleName, &tmLightClientModule) +/ ... +app.ModuleManager = module.NewManager( + / ... + ibc.NewAppModule(app.IBCKeeper), + transfer.NewAppModule(app.TransferKeeper), / i.e ibc-transfer module + + / register light clients on IBC + / highlight-next-line ++ ibctm.NewAppModule(tmLightClientModule), +) +``` + +#### Allowed Clients Params + +The allowed clients parameter defines an allow list of client types supported by the chain. The +default value is a single-element list containing the [`AllowedClients`](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client/types/client.pb.go#L248-L253) wildcard (`"*"`). 
Alternatively, the parameter +may be set with a list of client types (e.g. `"06-solomachine","07-tendermint","09-localhost"`). +A client type that is not registered on this list will fail upon creation or on genesis validation. +Note that, since the client type is an arbitrary string, chains must not register two light clients +which return the same value for the `ClientType()` function, otherwise the allow list check can be +bypassed. + +### Application ABCI ordering + +One addition from IBC is the concept of `HistoricalInfo` which is stored in the Cosmos SDK `x/staking` module. The number of records stored by `x/staking` is controlled by the `HistoricalEntries` parameter which stores `HistoricalInfo` on a per-height basis. +Each entry contains the historical information for the `Header` and `ValidatorSet` of this chain which is stored +at each height during the `BeginBlock` call. The `HistoricalInfo` is required to introspect a blockchain's prior state at a given height in order to verify the light client `ConsensusState` during the +connection handshake. + +```go title="app.go" expandable +import ( + + / other imports + / ... + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ibcexported "github.com/cosmos/ibc-go/v10/modules/core/exported" + ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper" + ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" +) + +func NewApp(...args) *App { + / ... continuation from above + + / add x/staking, ibc and transfer modules to BeginBlockers + app.ModuleManager.SetOrderBeginBlockers( + / other modules ... + stakingtypes.ModuleName, + ibcexported.ModuleName, + ibctransfertypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + / other modules ... + stakingtypes.ModuleName, + ibcexported.ModuleName, + ibctransfertypes.ModuleName, + ) + + / ... + genesisModuleOrder := []string{ + / other modules + / ... 
+ ibcexported.ModuleName, + ibctransfertypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + + / ... continues +``` + +That's it! You have now wired up the IBC module and the `transfer` module, and are able to send fungible tokens across +different chains. If you want to have a broader view of the changes, take a look into the SDK's +[`SimApp`](https://github.com/cosmos/ibc-go/blob/main/testing/simapp/app.go). diff --git a/docs/ibc/next/ibc/middleware/develop.mdx b/docs/ibc/next/ibc/middleware/develop.mdx new file mode 100644 index 00000000..34fe3d75 --- /dev/null +++ b/docs/ibc/next/ibc/middleware/develop.mdx @@ -0,0 +1,535 @@ +--- +title: Create a custom IBC middleware +description: >- + IBC middleware will wrap over an underlying IBC application (a base + application or downstream middleware) and sits between core IBC and the base + application. +--- + +IBC middleware wraps an underlying IBC application (a base application or downstream middleware) and sits between core IBC and the base application. + + +Middleware developers must use the same serialization and deserialization method as in ibc-go's codec: transfertypes.ModuleCdc.\[Must]MarshalJSON + + +For middleware builders this means: + +```go +import transfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" + +/ i.e. transfertypes.ModuleCdc.[Must]MarshalJSON +func MarshalAsIBCDoes(ack channeltypes.Acknowledgement) ([]byte, error) { + return transfertypes.ModuleCdc.MarshalJSON(&ack) +} +``` + +The interfaces a middleware must implement are found [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go).
+
+```go
+/ Middleware implements the ICS26 Module interface
+type Middleware interface {
+ IBCModule / middleware has access to an underlying application which may be wrapped by more middleware
+ ICS4Wrapper / middleware has access to ICS4Wrapper which may be core IBC Channel Handler or a higher-level middleware that wraps this middleware.
+
+ / SetUnderlyingModule sets the underlying IBC module. This function may be used after
+ / the middleware's initialization to set the ibc module which is below this middleware.
+ SetUnderlyingApplication(IBCModule)
+}
+```
+
+An `IBCMiddleware` struct implementing the `Middleware` interface can be defined with its constructor as follows:
+
+```go expandable
+/ @ x/module_name/ibc_middleware.go
+
+/ IBCMiddleware implements the ICS26 callbacks and ICS4Wrapper for the fee middleware given the
+/ fee keeper and the underlying application.
+type IBCMiddleware struct {
+ keeper *keeper.Keeper
+}
+
+/ NewIBCMiddleware creates a new IBCMiddleware given the keeper and underlying application
+func NewIBCMiddleware(k *keeper.Keeper)
+
+IBCMiddleware {
+ return IBCMiddleware{
+ keeper: k,
+}
+}
+```
+
+## Implement `IBCModule` interface
+
+`IBCMiddleware` is a struct that implements the [ICS-26 `IBCModule` interface (`porttypes.IBCModule`)](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go#L14-L107). It is recommended to separate these callbacks into a separate file `ibc_middleware.go`.
+
+> Note how this is analogous to implementing the same interfaces for IBC applications that act as base applications.
+
+As will be mentioned in the [integration section](/docs/ibc/next/ibc/middleware/integration), this struct should be different from the struct that implements `AppModule` in case the middleware maintains its own internal state and processes separate SDK messages.
+
+The middleware must have access to the underlying application, and be called before it during all ICS-26 callbacks.
It may execute custom logic during these callbacks, and then call the underlying application's callback.
+
+> Middleware **may** choose not to call the underlying application's callback at all. Though these should generally be limited to error cases.
+
+The `IBCModule` interface consists of the channel handshake callbacks and packet callbacks. Most of the custom logic will be performed in the packet callbacks; in the case of the channel handshake callbacks, introducing the middleware requires consideration of the version negotiation.
+
+### Channel handshake callbacks
+
+#### Version negotiation
+
+In the case where the IBC middleware expects to speak to a compatible IBC middleware on the counterparty chain, it must use the channel handshake to negotiate the middleware version without interfering in the version negotiation of the underlying application.
+
+Middleware accomplishes this by formatting the version in a JSON-encoded string containing the middleware version and the application version. The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application. The format of the version is specified in ICS-30 as the following:
+
+```json
+{
+ "<middleware_version_key>": "<middleware_version_value>",
+ "app_version": "<application_version_value>"
+}
+```
+
+The `<middleware_version_key>` key in the JSON struct should be replaced by the actual name of the key for the corresponding middleware (e.g. `fee_version`).
+
+During the handshake callbacks, the middleware can unmarshal the version string and retrieve the middleware and application versions. It can do its negotiation logic on `<middleware_version_value>`, and pass the `<application_version_value>` to the underlying application.
+
+> **NOTE**: Middleware that does not need to negotiate with a counterparty middleware on the remote stack will not implement the version unmarshalling and negotiation, and will simply perform its own custom logic on the callbacks without relying on the counterparty behaving similarly.
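
As an illustration, the encode/decode round trip for this version format can be sketched as below. This is a self-contained sketch, not ibc-go code: the `Metadata` struct, the `middleware_version` key, and both helper functions are hypothetical stand-ins for the middleware-specific equivalents.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata is a hypothetical version struct following the ICS-30 format:
// one middleware-specific key plus "app_version". The json tag
// "middleware_version" stands in for the real key (e.g. "fee_version").
type Metadata struct {
	MiddlewareVersion string `json:"middleware_version"`
	AppVersion        string `json:"app_version"`
}

// encodeVersion builds the outer version string from the middleware
// version and the (possibly itself JSON-encoded) application version.
func encodeVersion(mwVersion, appVersion string) (string, error) {
	bz, err := json.Marshal(Metadata{MiddlewareVersion: mwVersion, AppVersion: appVersion})
	return string(bz), err
}

// decodeVersion recovers both components; on error the caller should
// treat the whole string as the app version and pass it through.
func decodeVersion(version string) (Metadata, error) {
	var m Metadata
	err := json.Unmarshal([]byte(version), &m)
	return m, err
}

func main() {
	v, _ := encodeVersion("mw-1", "ics20-1")
	fmt.Println(v) // {"middleware_version":"mw-1","app_version":"ics20-1"}

	m, _ := decodeVersion(v)
	fmt.Println(m.AppVersion) // ics20-1
}
```

A real middleware would run `decodeVersion` inside its handshake callbacks and fall back to passing the raw string to the underlying app when unmarshalling fails, as the callbacks below show.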
+ +#### `OnChanOpenInit` + +```go expandable +func (im IBCMiddleware) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + if version != "" { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + metadata, err := Unmarshal(version) + if err != nil { + / Since it is valid for fee version to not be specified, + / the above middleware version may be for another middleware. + / Pass the entire version string onto the underlying application. + return im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + counterparty, + version, + ) +} + +else { + metadata = { + / set middleware version to default value + MiddlewareVersion: defaultMiddlewareVersion, + / allow application to return its default version + AppVersion: "", +} + +} + +} + +doCustomLogic() + + / if the version string is empty, OnChanOpenInit is expected to return + / a default version string representing the version(s) + +it supports + appVersion, err := im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + counterparty, + metadata.AppVersion, / note we only pass app version here + ) + if err != nil { + return "", err +} + version := constructVersion(metadata.MiddlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L36-L83) an example implementation of this callback for the ICS-29 Fee Middleware module. 
+ +#### `OnChanOpenTry` + +```go expandable +func (im IBCMiddleware) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + cpMetadata, err := Unmarshal(counterpartyVersion) + if err != nil { + return app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + counterparty, + counterpartyVersion, + ) +} + +doCustomLogic() + + / Call the underlying application's OnChanOpenTry callback. + / The try callback must select the final app-specific version string and return it. + appVersion, err := app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + counterparty, + cpMetadata.AppVersion, / note we only pass counterparty app version here + ) + if err != nil { + return "", err +} + + / negotiate final middleware version + middlewareVersion := negotiateMiddlewareVersion(cpMetadata.MiddlewareVersion) + version := constructVersion(middlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L88-L125) an example implementation of this callback for the ICS-29 Fee Middleware module. + +#### `OnChanOpenAck` + +```go expandable +func (im IBCMiddleware) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyChannelID string, + counterpartyVersion string, +) + +error { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. 
+ cpMetadata, err := UnmarshalJSON(counterpartyVersion)
+ if err != nil {
+ return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, counterpartyVersion)
+}
+ if !isCompatible(cpMetadata.MiddlewareVersion) {
+ return error
+}
+
+doCustomLogic()
+
+ / call the underlying application's OnChanOpenAck callback
+ return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, cpMetadata.AppVersion)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L128-L153) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnChanOpenConfirm`
+
+```go
+func OnChanOpenConfirm(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+)
+
+error {
+ doCustomLogic()
+
+return app.OnChanOpenConfirm(ctx, portID, channelID)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L156-L163) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnChanCloseInit`
+
+```go
+func OnChanCloseInit(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+)
+
+error {
+ doCustomLogic()
+
+return app.OnChanCloseInit(ctx, portID, channelID)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L166-L188) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnChanCloseConfirm`
+
+```go
+func OnChanCloseConfirm(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+)
+
+error {
+ doCustomLogic()
+
+return app.OnChanCloseConfirm(ctx, portID, channelID)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L191-L213) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+### Packet callbacks
+
+The packet callbacks, just like the handshake callbacks, wrap the application's packet callbacks.
The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide degree of use cases, as a simple base application like token-transfer can be transformed for a variety of use cases by combining it with custom middleware.
+
+#### `OnRecvPacket`
+
+```go expandable
+func (im IBCMiddleware)
+
+OnRecvPacket(
+ ctx sdk.Context,
+ packet channeltypes.Packet,
+ relayer sdk.AccAddress,
+)
+
+ibcexported.Acknowledgement {
+ doCustomLogic(packet)
+ ack := app.OnRecvPacket(ctx, packet, relayer)
+
+doCustomLogic(ack) / middleware may modify outgoing ack
+
+ return ack
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L217-L238) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnAcknowledgementPacket`
+
+```go
+func (im IBCMiddleware)
+
+OnAcknowledgementPacket(
+ ctx sdk.Context,
+ packet channeltypes.Packet,
+ acknowledgement []byte,
+ relayer sdk.AccAddress,
+)
+
+error {
+ doCustomLogic(packet, ack)
+
+return app.OnAcknowledgementPacket(ctx, packet, ack, relayer)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L242-L293) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnTimeoutPacket`
+
+```go
+func (im IBCMiddleware)
+
+OnTimeoutPacket(
+ ctx sdk.Context,
+ packet channeltypes.Packet,
+ relayer sdk.AccAddress,
+)
+
+error {
+ doCustomLogic(packet)
+
+return app.OnTimeoutPacket(ctx, packet, relayer)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L297-L335) an example implementation of this callback for the ICS-29 Fee Middleware module.
+ +## ICS-04 wrappers + +Middleware must also wrap ICS-04 so that any communication from the application to the `channelKeeper` goes through the middleware first. Similar to the packet callbacks, the middleware may modify outgoing acknowledgements and packets in any way it wishes. + +To ensure optimal generalisability, the `ICS4Wrapper` abstraction serves to abstract away whether a middleware is the topmost middleware (and thus directly calling into the ICS-04 `channelKeeper`) or itself being wrapped by another middleware. + +Remember that middleware can be stateful or stateless. When defining the stateful middleware's keeper, the `ics4Wrapper` field is included. Then the appropriate keeper can be passed when instantiating the middleware's keeper in `app.go` + +```go +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + + ics4Wrapper porttypes.ICS4Wrapper + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + ... +} +``` + +For stateless middleware, the `ics4Wrapper` can be passed on directly without having to instantiate a keeper struct for the middleware. + +[The interface](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go#L110-L133) looks as follows: + +```go expandable +/ This is implemented by ICS4 and all middleware that are wrapping base application. +/ The base application will call `sendPacket` or `writeAcknowledgement` of the middleware directly above them +/ which will call the next middleware until it reaches the core IBC handler. 
+type ICS4Wrapper interface {
+ SendPacket(
+ ctx sdk.Context,
+ sourcePort string,
+ sourceChannel string,
+ timeoutHeight clienttypes.Height,
+ timeoutTimestamp uint64,
+ data []byte,
+ ) (sequence uint64, err error)
+
+WriteAcknowledgement(
+ ctx sdk.Context,
+ packet exported.PacketI,
+ ack exported.Acknowledgement,
+ )
+
+error
+
+ GetAppVersion(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+ ) (string, bool)
+}
+```
+
+:warning: In the following paragraphs, the methods are presented in pseudo code which has been kept general, not stating whether the middleware is stateful or stateless. Remember that when the middleware is stateful, `ics4Wrapper` can be accessed through the keeper.
+
+To clarify, check out the referenced implementations, where the `ics4Wrapper` methods in `ibc_middleware.go` simply call the equivalent keeper methods in which the actual logic resides.
+
+### `SendPacket`
+
+```go expandable
+func SendPacket(
+ ctx sdk.Context,
+ sourcePort string,
+ sourceChannel string,
+ timeoutHeight clienttypes.Height,
+ timeoutTimestamp uint64,
+ appData []byte,
+) (uint64, error) {
+ / middleware may modify data
+ data = doCustomLogic(appData)
+
+return ics4Wrapper.SendPacket(
+ ctx,
+ sourcePort,
+ sourceChannel,
+ timeoutHeight,
+ timeoutTimestamp,
+ data,
+ )
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L17-L27) an example implementation of this function for the ICS-29 Fee Middleware module.
+
+### `WriteAcknowledgement`
+
+```go expandable
+/ only called for async acks
+func WriteAcknowledgement(
+ ctx sdk.Context,
+ packet exported.PacketI,
+ ack exported.Acknowledgement,
+)
+
+error {
+ / middleware may modify acknowledgement
+ ack_bytes = doCustomLogic(ack)
+
+return ics4Wrapper.WriteAcknowledgement(ctx, packet, ack_bytes)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L31-L55) an example implementation of this function for the ICS-29 Fee Middleware module.
+
+### `GetAppVersion`
+
+```go expandable
+/ middleware must return the underlying application version
+func GetAppVersion(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+) (string, bool) {
+ version, found := ics4Wrapper.GetAppVersion(ctx, portID, channelID)
+ if !found {
+ return "", false
+}
+ if !MiddlewareEnabled {
+ return version, true
+}
+
+ / unwrap channel version
+ metadata, err := Unmarshal(version)
+ if err != nil {
+ panic(fmt.Errorf("unable to unmarshal version: %w", err))
+}
+
+return metadata.AppVersion, true
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L58-L74) an example implementation of this function for the ICS-29 Fee Middleware module.
+
+## Wiring Interface Requirements
+
+Middleware must also implement the following functions so that they can be called by the stack builder in order to correctly wire the application stack together: `SetUnderlyingApplication` and `SetICS4Wrapper`.
+
+```go expandable
+/ SetUnderlyingModule sets the underlying IBC module. This function may be used after
+/ the middleware's initialization to set the ibc module which is below this middleware.
+SetUnderlyingApplication(IBCModule)
+
+/ SetICS4Wrapper sets the ICS4Wrapper. This function may be used after
+/ the module's initialization to set the middleware which is above this
+/ module in the IBC application stack.
+/ The ICS4Wrapper **must** be used for sending packets and writing acknowledgements +/ to ensure that the middleware can intercept and process these calls. +/ Do not use the channel keeper directly to send packets or write acknowledgements +/ as this will bypass the middleware. +SetICS4Wrapper(wrapper ICS4Wrapper) +``` + +The middleware itself should have access to the `underlying app` (note this may be a base app or an application wrapped by layers of lower-level middleware(s)) and access to the higher layer `ICS4wrapper`. The `underlying app` gets called during the relayer initiated actions: `recvPacket`, `acknowledgePacket`, and `timeoutPacket`. The `ics4Wrapper` gets called on user-initiated actions like `sendPacket` and `writeAcknowledgement`. + +The functions above are used by the `StackBuilder` during application setup to wire the stack correctly. The stack must be wired first and have all of the wrappers and applications set correctly before transaction execution starts and packet processing begins. diff --git a/docs/ibc/next/ibc/middleware/developIBCv2.mdx b/docs/ibc/next/ibc/middleware/developIBCv2.mdx new file mode 100644 index 00000000..8ad976e5 --- /dev/null +++ b/docs/ibc/next/ibc/middleware/developIBCv2.mdx @@ -0,0 +1,286 @@ +--- +title: Create and integrate IBC v2 middleware +description: >- + 1. Create a custom IBC v2 middleware 2. Implement IBCModule interface 3. + WriteAckWrapper 4. Integrate IBC v2 Middleware 5. Security Model 6. Design + Principles +--- + +1. [Create a custom IBC v2 middleware](#create-a-custom-ibc-v2-middleware) +2. [Implement `IBCModule` interface](#implement-ibcmodule-interface) +3. [WriteAckWrapper](#writeackwrapper) +4. [Integrate IBC v2 Middleware](#integrate-ibc-v2-middleware) +5. [Security Model](#security-model) +6. 
[Design Principles](#design-principles) + +## Create a custom IBC v2 middleware + +IBC middleware will wrap over an underlying IBC application (a base application or downstream middleware) and sits between core IBC and the base application. + + +middleware developers must use the same serialization and deserialization method as in ibc-go's codec: transfertypes.ModuleCdc.\[Must]MarshalJSON + + +For middleware builders this means: + +```go +import transfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" +transfertypes.ModuleCdc.[Must]MarshalJSON +func MarshalAsIBCDoes(ack channeltypes.Acknowledgement) ([]byte, error) { + return transfertypes.ModuleCdc.MarshalJSON(&ack) +} +``` + +The interfaces a middleware must implement are found in [core/api](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L11). Note that this interface has changed from IBC classic. + +An `IBCMiddleware` struct implementing the `Middleware` interface, can be defined with its constructor as follows: + +```go expandable +/ @ x/module_name/ibc_middleware.go + +/ IBCMiddleware implements the IBCv2 middleware interface +type IBCMiddleware struct { + app api.IBCModule / underlying app or middleware + writeAckWrapper api. 
WriteAcknowledgementWrapper / writes acknowledgement for an async acknowledgement
+ PacketDataUnmarshaler api.PacketDataUnmarshaler / optional interface
+ keeper types.Keeper / required for stateful middleware
+ / Keeper may include middleware specific keeper and the ChannelKeeperV2
+
+ / additional middleware specific fields
+}
+
+/ NewIBCMiddleware creates a new IBCMiddleware given the keeper and underlying application
+func NewIBCMiddleware(app api.IBCModule,
+writeAckWrapper api.WriteAcknowledgementWrapper,
+k types.Keeper
+)
+
+IBCMiddleware {
+ return IBCMiddleware{
+ app: app,
+ writeAckWrapper: writeAckWrapper,
+ keeper: k,
+}
+}
+```
+
+
+The ICS4Wrapper has been removed in IBC v2 and there are no channel handshake callbacks; a writeAckWrapper has been added to the interface
+
+
+## Implement `IBCModule` interface
+
+`IBCMiddleware` is a struct that implements the [`IBCModule` interface (`api.IBCModule`)](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L11-L53). It is recommended to separate these callbacks into a separate file `ibc_middleware.go`.
+
+> Note how this is analogous to implementing the same interfaces for IBC applications that act as base applications.
+
+The middleware must have access to the underlying application, and be called before it during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback.
+
+> Middleware **may** choose not to call the underlying application's callback at all. Though these should generally be limited to error cases.
+
+The `IBCModule` interface consists of the packet callbacks where custom logic is performed.
+
+### Packet callbacks
+
+The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application.
This enables a wide degree of use cases, as a simple base application like token-transfer can be transformed for a variety of use cases by combining it with custom middleware, for example acting as a filter for which tokens can be sent and received.
+
+#### `OnRecvPacket`
+
+```go expandable
+func (im IBCMiddleware)
+
+OnRecvPacket(
+ ctx sdk.Context,
+ sourceClient string,
+ destinationClient string,
+ sequence uint64,
+ payload channeltypesv2.Payload,
+ relayer sdk.AccAddress,
+)
+
+channeltypesv2.RecvPacketResult {
+ / Middleware may choose to do custom preprocessing logic before calling the underlying app OnRecvPacket
+ / Middleware may choose to error early and return a RecvPacketResult Failure
+ / Middleware may choose to modify the payload before passing on to OnRecvPacket though this
+ / should only be done to support very advanced custom behavior
+ / Middleware MUST NOT modify client identifiers and sequence
+ doCustomPreProcessLogic()
+
+ / call underlying app OnRecvPacket
+ recvResult := im.app.OnRecvPacket(ctx, sourceClient, destinationClient, sequence, payload, relayer)
+ if recvResult.Status == PACKET_STATUS_FAILURE {
+ return recvResult
+}
+
+doCustomPostProcessLogic(recvResult) / middleware may modify recvResult
+
+ return recvResult
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L161-L230) an example implementation of this callback for the Callbacks Middleware module.
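
To make the token-filtering use case mentioned above concrete, here is a minimal, self-contained sketch of the kind of pre-processing check such a middleware might run before handing the packet to the underlying app. The `payload` type and `denomFilter` helper are simplified stand-ins, not the actual `channeltypesv2` definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// payload is a simplified stand-in for an IBC v2 transfer payload.
type payload struct {
	denom string
}

// denomFilter mimics the pre-processing step of a stateless filtering
// middleware: reject packets whose denom is not on the allow list;
// otherwise the underlying app's OnRecvPacket would be called next.
func denomFilter(allowed map[string]bool, p payload) error {
	if !allowed[p.denom] {
		return errors.New("denom " + p.denom + " not allowed")
	}
	return nil
}

func main() {
	allowed := map[string]bool{"uatom": true}
	fmt.Println(denomFilter(allowed, payload{denom: "uatom"}) == nil) // true: passed through
	fmt.Println(denomFilter(allowed, payload{denom: "shady"}))       // rejected before reaching the app
}
```

In a real middleware this check would sit at the top of `OnRecvPacket`, returning a failed `RecvPacketResult` instead of an error.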
+ +#### `OnAcknowledgementPacket` + +```go expandable +func (im IBCMiddleware) + +OnAcknowledgementPacket( + ctx sdk.Context, + sourceClient string, + destinationClient string, + sequence uint64, + acknowledgement []byte, + payload channeltypesv2.Payload, + relayer sdk.AccAddress, +) + +error { + / preprocessing logic may modify the acknowledgement before passing to + / the underlying app though this should only be done in advanced cases + / Middleware may return error early + / it MUST NOT change the identifiers of the clients or the sequence + doCustomPreProcessLogic(payload, acknowledgement) + + / call underlying app OnAcknowledgementPacket + err = im.app.OnAcknowledgementPacket( + sourceClient, destinationClient, sequence, + acknowledgement, payload, relayer + ) + if err != nil { + return err +} + + / may perform some post acknowledgement logic and return error here + return doCustomPostProcessLogic() +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L236-L302) an example implementation of this callback for the Callbacks Middleware module. 
+
+#### `OnTimeoutPacket`
+
+```go expandable
+func (im IBCMiddleware)
+
+OnTimeoutPacket(
+ ctx sdk.Context,
+ sourceClient string,
+ destinationClient string,
+ sequence uint64,
+ payload channeltypesv2.Payload,
+ relayer sdk.AccAddress,
+)
+
+error {
+ / Middleware may choose to do custom preprocessing logic before calling the underlying app OnTimeoutPacket
+ / Middleware may return error early
+ doCustomPreProcessLogic(payload)
+
+ / call underlying app OnTimeoutPacket
+ err = im.app.OnTimeoutPacket(
+ sourceClient, destinationClient, sequence,
+ payload, relayer
+ )
+ if err != nil {
+ return err
+}
+
+ / may perform some post timeout logic and return error here
+ return doCustomPostProcessLogic()
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L309-L367) an example implementation of this callback for the Callbacks Middleware module.
+
+### WriteAckWrapper
+
+Middleware must also wrap the `WriteAcknowledgement` interface so that any acknowledgement written by the application passes through the middleware first. This allows middleware to modify or delay writing an acknowledgement before it is committed to the IBC store.
+
+```go
+/ WithWriteAckWrapper sets the WriteAcknowledgementWrapper for the middleware.
+func (im *IBCMiddleware)
+
+WithWriteAckWrapper(writeAckWrapper api.WriteAcknowledgementWrapper) {
+ im.writeAckWrapper = writeAckWrapper
+}
+
+/ GetWriteAckWrapper returns the WriteAckWrapper
+func (im *IBCMiddleware)
+
+GetWriteAckWrapper()
+
+api.WriteAcknowledgementWrapper {
+ return im.writeAckWrapper
+}
+```
+
+### `WriteAcknowledgement`
+
+This is where the middleware acknowledgement handling is finalised.
An example is shown in the [callbacks middleware](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L369-L454)
+
+```go expandable
+/ WriteAcknowledgement facilitates acknowledgement being written asynchronously
+/ The call stack flows from the IBC application to the IBC core handler
+/ Thus this function is called by the IBC app or a lower-level middleware
+func (im IBCMiddleware)
+
+WriteAcknowledgement(
+ ctx sdk.Context,
+ clientID string,
+ sequence uint64,
+ ack channeltypesv2.Acknowledgement,
+)
+
+error {
+ doCustomPreProcessLogic() / may modify acknowledgement
+
+ return im.writeAckWrapper.WriteAcknowledgement(
+ ctx, clientID, sequence, ack,
+ )
+}
+```
+
+## Integrate IBC v2 Middleware
+
+Middleware should be registered within the module manager in `app.go`.
+
+The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application. Function calls from the application to IBC go through the bottom middleware in order to the top middleware and then to core IBC handlers. Thus the same set of middleware put in different orders may produce different effects.
+
+### Example Integration
+
+The example integration is detailed for an IBC v2 stack using transfer and the callbacks middleware.
+ +```go expandable +/ Middleware Stacks +/ initialising callbacks middleware + maxCallbackGas := uint64(10_000_000) + wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper) + +/ Create the transferv2 stack with transfer and callbacks middleware + var ibcv2TransferStack ibcapi.IBCModule + ibcv2TransferStack = transferv2.NewIBCModule(app.TransferKeeper) + +ibcv2TransferStack = ibccallbacksv2.NewIBCMiddleware(transferv2.NewIBCModule(app.TransferKeeper), app.IBCKeeper.ChannelKeeperV2, wasmStackIBCHandler, app.IBCKeeper.ChannelKeeperV2, maxCallbackGas) + +/ Create static IBC v2 router, add app routes, then set and seal it + ibcRouterV2 := ibcapi.NewRouter() + +ibcRouterV2.AddRoute(ibctransfertypes.PortID, ibcv2TransferStack) + +app.IBCKeeper.SetRouterV2(ibcRouterV2) +``` + +## Security Model + +IBC Middleware completely wraps all communication between IBC core and the application that it is wired with. Thus, the IBC Middleware has complete control to modify any packets and acknowledgements the underlying application receives or sends. Thus, if a chain chooses to wrap an application with a given middleware, that middleware is **completely trusted** and part of the application's security model. **Do not use middlewares that are untrusted.** + +## Design Principles + +The middleware follows a decorator pattern that wraps an underlying application's connection to the IBC core handlers. Thus, when implementing a middleware for a specific purpose, it is recommended to be as **unintrusive** as possible in the middleware design while still accomplishing the intended behavior. + +The least intrusive middleware is stateless. They simply read the ICS26 callback arguments before calling the underlying app's callback and error if the arguments are not acceptable (e.g. whitelisting packets). 
Stateful middleware that are used solely for erroring are also very simple to build; an example of this would be a rate-limiting middleware that prevents transfer outflows from getting too high within a certain time frame.
+
+Middleware that directly interferes with the payload or acknowledgement before passing control to the underlying app is far more intrusive to the underlying app's processing. This makes such middleware more error-prone to implement, as incorrect handling can cause the underlying app to break or, worse, execute unexpected behavior. Moreover, such middleware typically needs to be built for a specific underlying app rather than being generic. An example of this is the packet-forwarding middleware, which modifies the payload and is specifically built for transfer.
+
+Middleware that modifies the payload or acknowledgement such that it is no longer readable by the underlying application is the most complicated middleware. Since it is not readable by the underlying apps, if these middleware write additional state into payloads and acknowledgements that get committed to IBC core provable state, there MUST be an equivalent counterparty middleware that is able to parse and interpret this additional state while also converting the payload and acknowledgement back to a readable form for the underlying application on its side. Thus, such middleware requires deployment on both sides of an IBC connection or the packet processing will break. This is the hardest type of middleware to implement, integrate and deploy. Thus, it is not recommended unless absolutely necessary to fulfill the given use case.
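
The decorator pattern described above can be sketched in isolation. The interfaces here are deliberately simplified stand-ins (not the real `api.IBCModule`), to show how a stateless middleware intercepts a call, may error early, and otherwise delegates unchanged:

```go
package main

import "fmt"

// ibcModule is a pared-down stand-in for the IBC module interface.
type ibcModule interface {
	OnRecvPacket(data string) string
}

// baseApp plays the role of the base IBC application.
type baseApp struct{}

func (baseApp) OnRecvPacket(data string) string { return "ack:" + data }

// filterMiddleware wraps an underlying module, running its own check first.
type filterMiddleware struct {
	app ibcModule // underlying app or lower-level middleware
}

func (m filterMiddleware) OnRecvPacket(data string) string {
	if data == "" {
		return "error-ack" // middleware may error without calling the app
	}
	return m.app.OnRecvPacket(data) // otherwise pass through unchanged
}

func main() {
	// wiring: middleware -> base app, as a stack builder would do
	var stack ibcModule = filterMiddleware{app: baseApp{}}
	fmt.Println(stack.OnRecvPacket("pkt")) // ack:pkt
	fmt.Println(stack.OnRecvPacket(""))    // error-ack
}
```

Because the middleware only inspects its inputs and delegates, it stays generic over any underlying app, which is the least intrusive design discussed above.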
diff --git a/docs/ibc/next/ibc/middleware/integration.mdx b/docs/ibc/next/ibc/middleware/integration.mdx new file mode 100644 index 00000000..250d4aaf --- /dev/null +++ b/docs/ibc/next/ibc/middleware/integration.mdx @@ -0,0 +1,90 @@ +--- +title: Integrating IBC middleware into a chain +description: >-
+ Learn how to integrate IBC middleware(s) with a base application to your
+ chain. The following document only applies to Cosmos SDK chains.
+---
+
+Learn how to integrate IBC middleware(s) with a base application to your chain. The following document only applies to Cosmos SDK chains.
+
+If the middleware is maintaining its own state and/or processing SDK messages, then it should create and register its SDK module with the module manager in `app.go`.
+
+All middleware must be connected to the IBC router and wrap over an underlying base IBC application. An IBC application may be wrapped by many layers of middleware; only the top-layer middleware should be hooked to the IBC router, with all underlying middlewares and the application getting wrapped by it.
+
+The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application. Function calls from the application to IBC go through the bottom middleware in order to the top middleware and then to core IBC handlers. Thus the same set of middleware put in different orders may produce different effects.
+ +## Example integration + +```go expandable +/ app.go pseudocode + +/ middleware 1 and middleware 3 are stateful middleware, +/ perhaps implementing separate sdk.Msg and Handlers +/ NOTE: NewKeeper returns a pointer so that we can modify +/ the keepers later after initialization +/ They are all initialized to use the channelKeeper directly at the start +mw1Keeper := mw1.NewKeeper(storeKey1, ..., channelKeeper) / in stack 1 & 3 +/ middleware 2 is stateless +mw3Keeper1 := mw3.NewKeeper(storeKey3,..., channelKeeper) / in stack 1 +mw3Keeper2 := mw3.NewKeeper(storeKey3,..., channelKeeper) / in stack 2 + +/ Only create App Module **once** and register in app module +/ if the module maintains independent state and/or processes sdk.Msgs +app.moduleManager = module.NewManager( + ... + mw1.NewAppModule(mw1Keeper), + mw3.NewAppModule(mw3Keeper1), + mw3.NewAppModule(mw3Keeper2), + transfer.NewAppModule(transferKeeper), + custom.NewAppModule(customKeeper) +) + +/ NOTE: IBC Modules may be initialized any number of times provided they use a separate +/ Keeper and underlying port. + +customKeeper1 := custom.NewKeeper(..., KeeperCustom1, ...) + +customKeeper2 := custom.NewKeeper(..., KeeperCustom2, ...) + +/ initialize base IBC applications +/ if you want to create two different stacks with the same base application, +/ they must be given different Keepers and assigned different ports. + transferIBCModule := transfer.NewIBCModule(transferKeeper) + +customIBCModule1 := custom.NewIBCModule(customKeeper1, "portCustom1") + +customIBCModule2 := custom.NewIBCModule(customKeeper2, "portCustom2") + +/ create IBC stacks by combining middleware with base application +/ IBC Stack builders are initialized with the IBC ChannelKeeper which is the top-level ICS4Wrapper +/ NOTE: since middleware2 is stateless it does not require a Keeper +/ stack 1 contains mw1 -> mw3 -> transfer +stack1 := porttypes.NewStackBuilder(ibcChannelKeeper). + Base(transferIBCModule). + Next(mw3). + Next(mw1). 
+ Build() +// stack 2 contains mw3 -> mw2 -> custom1 +stack2 := porttypes.NewStackBuilder(ibcChannelKeeper). + Base(customIBCModule1). + Next(mw2). + Next(mw3). + Build() +// stack 3 contains mw2 -> mw1 -> custom2 +stack3 := porttypes.NewStackBuilder(ibcChannelKeeper). + Base(customIBCModule2). + Next(mw1). + Next(mw2). + Build() + +// associate each stack with the moduleName provided by the underlying Keeper + ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute("transfer", stack1) + +ibcRouter.AddRoute("custom1", stack2) + +ibcRouter.AddRoute("custom2", stack3) + +app.IBCKeeper.SetRouter(ibcRouter) +``` diff --git a/docs/ibc/next/ibc/middleware/overview.mdx b/docs/ibc/next/ibc/middleware/overview.mdx new file mode 100644 index 00000000..87bf8937 --- /dev/null +++ b/docs/ibc/next/ibc/middleware/overview.mdx @@ -0,0 +1,50 @@ +--- +title: IBC middleware +--- + +## Synopsis + +Learn how to write your own custom middleware to wrap an IBC application, and understand how to hook different middleware to IBC base applications to form different IBC application stacks. + +This documentation serves as a guide for middleware developers who want to write their own middleware and for chain developers who want to use IBC middleware on their chains. + +After going through the overview, they can consult, respectively: + +- [documentation on developing custom middleware](/docs/ibc/next/ibc/middleware/develop) +- [documentation on integrating middleware into a stack on a chain](/docs/ibc/next/ibc/middleware/integration) + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/next/ibc/overview) +- [IBC Integration](/docs/ibc/next/ibc/integration) +- [IBC Application Developer Guide](/docs/ibc/next/ibc/apps/apps) + + + +## Why middleware? + +IBC applications are designed to be self-contained modules that implement their own application-specific logic through a set of interfaces with the core IBC handlers.
These core IBC handlers, in turn, are designed to enforce the correctness properties of IBC (transport, authentication, ordering) while delegating all application-specific handling to the IBC application modules. **However, there are cases where some functionality may be desired by many applications, yet not appropriate to place in core IBC.** + +Middleware allows developers to define these extensions as separate modules that can wrap over the base application. The middleware can thus perform its own custom logic and pass data into the application so that the application may run its logic without being aware of the middleware's existence. This allows both the application and the middleware to implement their own isolated logic while still being able to run as part of a single packet flow. + +## Definitions + +`Middleware`: A self-contained module that sits between core IBC and an underlying IBC application during packet execution. All messages between core IBC and the underlying application must flow through the middleware, which may perform its own custom logic. + +`Underlying Application`: An underlying application is the application that is directly connected to the middleware in question. This underlying application may itself be middleware that is chained to a base application. + +`Base Application`: A base application is an IBC application that does not contain any middleware. It may be nested by zero or more middleware to form an application stack. + +`Application Stack (or stack)`: A stack is the complete set of application logic (middleware(s) + base application) that gets connected to core IBC. A stack may be just a base application, or it may be a series of middlewares that nest a base application. + +The diagram below gives an overview of a middleware stack consisting of two middleware (one stateless, the other stateful).
+ +![middleware-stack.png](/docs/ibc/images/01-ibc/04-middleware/images/middleware-stack.png) + +Keep in mind that: + +- **The order of the middleware matters** (more on how to correctly define your stack in the code will follow in the [integration section](/docs/ibc/next/ibc/middleware/integration)). +- Depending on the type of message, it will either be passed on from the base application up the middleware stack to core IBC or down the stack in the reverse situation (handshake and packet callbacks). +- IBC middleware wraps over an underlying IBC application and sits between core IBC and the application. It has complete control in modifying any message coming from IBC to the application, and any message coming from the application to core IBC. **Middleware must be completely trusted by chain developers who wish to integrate them**, as this gives them complete flexibility in modifying the application(s) they wrap. diff --git a/docs/ibc/next/ibc/overview.mdx b/docs/ibc/next/ibc/overview.mdx new file mode 100644 index 00000000..1a974f47 --- /dev/null +++ b/docs/ibc/next/ibc/overview.mdx @@ -0,0 +1,294 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about IBC, its components, and its use cases. + +## What is the Inter-Blockchain Communication Protocol (IBC)? + +This document serves as a guide for developers who want to write their own Inter-Blockchain +Communication Protocol (IBC) applications for custom use cases. + +> IBC applications must be written as self-contained modules. + +Due to the modular design of the IBC Protocol, IBC +application developers do not need to be concerned with the low-level details of clients, +connections, and proof verification. + +This brief explanation of the lower levels of the +stack gives application developers a broad understanding of the IBC +Protocol. Abstraction layer details for channels and ports are most relevant for application developers and describe how to define custom packets and `IBCModule` callbacks.
+ +The requirements to have your module interact over IBC are: + +- Bind to a port or ports. +- Define your packet data. +- Use the default acknowledgment struct provided by core IBC or optionally define a custom acknowledgment struct. +- Standardize an encoding of the packet data. +- Implement the `IBCModule` interface. +- Implement the `UpgradableModule` interface (optional). + +Read on for a detailed explanation of how to write a self-contained IBC application module. + +## Components overview + +### [Clients](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client) + +IBC clients are on-chain light clients. Each light client is identified by a unique client ID. +IBC clients track the consensus states of other blockchains, along with the proof spec necessary to +properly verify proofs against the client's consensus state. A client can be associated with any number +of connections to the counterparty chain. The client identifier is auto-generated using the client type +and the global client counter appended in the format: `{client-type}-{N}`. + +A `ClientState` should contain chain-specific and light-client-specific information necessary for verifying updates +and upgrades to the IBC client. The `ClientState` may contain information such as chain ID, latest height, proof specs, +unbonding periods or the status of the light client. The `ClientState` should not contain information that +is specific to a given block at a certain height; that is the function of the `ConsensusState`. Each `ConsensusState` +should be associated with a unique block and should be referenced using a height. IBC clients are given a +client-identifier-prefixed store in which to keep their associated client state and consensus states, along with +any metadata associated with the consensus states. Consensus states are stored using their associated height.
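As a rough illustration of the storage model described above (this is a hedged, self-contained sketch with invented names like `ClientStore`, not ibc-go's actual keeper code), a client's prefixed store can be thought of as a map from heights to consensus states:

```go
package main

import "fmt"

// Height and ConsensusState are simplified stand-ins for the real
// 02-client types; illustrative only.
type Height struct {
	RevisionNumber uint64
	RevisionHeight uint64
}

type ConsensusState struct {
	Timestamp uint64
	Root      []byte
}

// ClientStore is a hypothetical client-identifier-prefixed store:
// consensus states within it are keyed by height.
type ClientStore struct {
	clientID        string
	consensusStates map[Height]ConsensusState
}

func NewClientStore(clientID string) *ClientStore {
	return &ClientStore{clientID: clientID, consensusStates: map[Height]ConsensusState{}}
}

func (s *ClientStore) SetConsensusState(h Height, cs ConsensusState) {
	s.consensusStates[h] = cs
}

func (s *ClientStore) GetConsensusState(h Height) (ConsensusState, bool) {
	cs, ok := s.consensusStates[h]
	return cs, ok
}

func main() {
	store := NewClientStore("07-tendermint-0")
	store.SetConsensusState(Height{RevisionNumber: 1, RevisionHeight: 100},
		ConsensusState{Timestamp: 1700000000, Root: []byte{0x01}})
	cs, ok := store.GetConsensusState(Height{RevisionNumber: 1, RevisionHeight: 100})
	fmt.Println(ok, cs.Timestamp)
}
```

The important property this sketch captures is that each consensus state is addressed by a unique height within a per-client namespace.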
+ +The supported IBC clients are: + +- [Solo Machine light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine): Devices such as phones, browsers, or laptops. +- [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint): The default for Cosmos SDK-based chains. +- [Wasm client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/08-wasm): Proxy client useful for running light clients written in a Wasm-compilable language. +- [Localhost (loopback) client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/09-localhost): Useful for testing, simulation, and relaying packets to modules on the same application. + +### IBC client heights + +IBC Client Heights are represented by the struct: + +```go +type Height struct { + RevisionNumber uint64 + RevisionHeight uint64 +} +``` + +The `RevisionNumber` represents the revision of the chain that the height is representing. +A revision typically represents a continuous, monotonically increasing range of block-heights. +The `RevisionHeight` represents the height of the chain within the given revision. + +On any reset of the `RevisionHeight` (for example, when hard-forking a Tendermint chain), +the `RevisionNumber` will get incremented. This allows IBC clients to distinguish between a +block height `n` of a previous revision of the chain (at revision `p`) and block-height `n` of the current +revision of the chain (at revision `e`). + +`Height`s that share the same revision number can be compared by simply comparing their respective `RevisionHeight`s. +`Height`s that do not share the same revision number will only be compared using their respective `RevisionNumber`s. +Thus a height `h` with revision number `e+1` will always be greater than a height `g` with revision number `e`, +**REGARDLESS** of the difference in revision heights.
+ +For example: + +```go +Height{ + RevisionNumber: 3, + RevisionHeight: 0 +} > Height{ + RevisionNumber: 2, + RevisionHeight: 100000000000 +} +``` + +When a Tendermint chain is running a particular revision, relayers can simply submit headers and proofs with the revision number +given by the chain's `chainID`, and the revision height given by the Tendermint block height. When a chain updates using a hard-fork +and resets its block-height, it is responsible for updating its `chainID` to increment the revision number. +IBC Tendermint clients then verify the revision number against their `chainID` and treat the `RevisionHeight` as the Tendermint block-height. + +Tendermint chains wishing to use revisions to maintain persistent IBC connections even across height-resetting upgrades must format their `chainID`s +in the following manner: `{chainID}-{revision_number}`. On any height-resetting upgrade, the `chainID` **MUST** be updated with a higher revision number +than the previous value. + +For example: + +- Before upgrade `chainID`: `gaiamainnet-3` +- After upgrade `chainID`: `gaiamainnet-4` + +Clients that do not require revisions, such as the `06-solomachine` client, can simply hardcode `0` into the revision number whenever they +need to return an IBC height when implementing IBC interfaces and use the `RevisionHeight` exclusively. + +Other client types can implement their own logic to verify the IBC heights that relayers provide in their `Update`, `Misbehavior`, and +`Verify` functions respectively. + +The IBC interfaces expect an `ibcexported.Height` interface; however, all clients must use the concrete implementation provided in +`02-client/types` and reproduced above. + +### [Connections](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection) + +Connections encapsulate two [`ConnectionEnd`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/connection.proto#L17) +objects on two separate blockchains.
Each `ConnectionEnd` is associated with a client of the +other blockchain (for example, the counterparty blockchain). The connection handshake is responsible +for verifying that the light clients on each chain are correct for their respective counterparties. +Connections, once established, are responsible for facilitating all cross-chain verifications of IBC state. +A connection can be associated with any number of channels. + +The connection handshake is a 4-step handshake. Briefly, if a given chain A wants to open a connection with +chain B using already established light clients on both chains: + +1. chain A sends a `ConnectionOpenInit` message to signal a connection initialization attempt with chain B. +2. chain B sends a `ConnectionOpenTry` message to try opening the connection on chain A. +3. chain A sends a `ConnectionOpenAck` message to mark its connection end state as open. +4. chain B sends a `ConnectionOpenConfirm` message to mark its connection end state as open. + +#### Time delayed connections + +Connections can be opened with a time delay by setting the `delay_period` field (in nanoseconds) in the [`MsgConnectionOpenInit`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/tx.proto#L45). +The time delay is used to require that the underlying light clients have been updated to a certain height before commitment verification can be performed. + +`delayPeriod` is used in conjunction with the [`max_expected_time_per_block`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/connection.proto#L113) parameter of the connection submodule to determine the `blockDelay`, which is the number of blocks that the connection must be delayed by. + +When commitment verification is performed, the connection submodule will pass `delayPeriod` and `blockDelay` to the light client. It is up to the light client to determine whether it has been updated to the required height.
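The relationship between `delayPeriod`, `max_expected_time_per_block`, and `blockDelay` described above amounts to a ceiling division. The following is a hedged, self-contained sketch of that relationship (the function name `blockDelay` is illustrative, not ibc-go's exact code):

```go
package main

import "fmt"

// blockDelay returns the number of blocks a connection with the given
// time delay (nanoseconds) must wait, assuming each block takes at most
// expectedTimePerBlockNs nanoseconds: the time delay divided by the
// expected block time, rounded up.
func blockDelay(delayPeriodNs, expectedTimePerBlockNs uint64) uint64 {
	if expectedTimePerBlockNs == 0 {
		return 0
	}
	// ceiling division without floating point
	return (delayPeriodNs + expectedTimePerBlockNs - 1) / expectedTimePerBlockNs
}

func main() {
	// a 10s delay with a 3s expected block time requires waiting 4 blocks
	fmt.Println(blockDelay(10_000_000_000, 3_000_000_000))
}
```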
Only the following light clients in `ibc-go` support time delayed connections: + +- `07-tendermint` +- `08-wasm` (passed to the contract) + +### [Proofs](https://github.com/cosmos/ibc-go/blob/main/modules/core/23-commitment) and [paths](https://github.com/cosmos/ibc-go/blob/main/modules/core/24-host) + +In IBC, blockchains do not directly pass messages to each other over the network. Instead, to +communicate, a blockchain commits some state to a specifically defined path that is reserved for a +specific message type and a specific counterparty. For example, for storing a specific connectionEnd as part +of a handshake or a packet intended to be relayed to a module on the counterparty chain. A relayer +process monitors for updates to these paths and relays messages by submitting the data stored +under the path and a proof to the counterparty chain. + +Proofs are passed from core IBC to light clients as bytes. It is up to light client implementations to interpret these bytes appropriately. + +- The paths that all IBC implementations must use for committing IBC messages are defined in + [ICS-24 Host State Machine Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements). +- The proof format that all implementations must be able to produce and verify is defined in the [ICS-23 Proofs](https://github.com/cosmos/ics23) implementation. + +### [Ports](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port) + +An IBC module can bind to any number of ports. Each port must be identified by a unique `portID`. +Since IBC is designed to be secure with mutually distrusted modules operating on the same ledger, +binding a port returns a dynamic object capability. In order to take action on a particular port +(for example, an open channel with its port ID), a module must provide the dynamic object capability to the IBC +handler. This requirement prevents a malicious module from opening channels with ports it does not own.
Thus, +IBC modules are responsible for claiming the capability that is returned on `BindPort`. + +### [Channels](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +An IBC channel can be established between two IBC ports. Currently, a port is exclusively owned by a +single module. IBC packets are sent over channels. Just as IP packets contain the destination IP +address and IP port, and the source IP address and source IP port, IBC packets contain +the destination port ID and channel ID, and the source port ID and channel ID. This packet structure enables IBC to +correctly route packets to the destination module while allowing modules receiving packets to +know the sender module. + +A channel can be `ORDERED`, where packets from a sending module must be processed by the +receiving module in the order they were sent. Or a channel can be `UNORDERED`, where packets +from a sending module are processed in the order they arrive (which might differ from the order they were sent in). + +Modules can choose which channels they wish to communicate over; thus, IBC expects modules to +implement callbacks that are called during the channel handshake. These callbacks can perform custom +channel initialization logic. If any callback returns an error, the channel handshake fails. Thus, by +returning errors on callbacks, modules can programmatically reject and accept channels. + +The channel handshake is a 4-step handshake. Briefly, if a given chain A wants to open a channel with +chain B using an already established connection: + +1. chain A sends a `ChanOpenInit` message to signal a channel initialization attempt with chain B. +2. chain B sends a `ChanOpenTry` message to try opening the channel on chain A. +3. chain A sends a `ChanOpenAck` message to mark its channel end status as open. +4. chain B sends a `ChanOpenConfirm` message to mark its channel end status as open. + +If all handshake steps are successful, the channel is opened on both sides.
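To make the callback-based accept/reject mechanism concrete, here is a self-contained sketch (with simplified stand-in types, not ibc-go's actual `IBCModule` interface) of a module that rejects channels negotiating an unsupported version or ordering during its init callback:

```go
package main

import (
	"errors"
	"fmt"
)

// Order is a simplified stand-in for the channel ordering type.
type Order int

const (
	Unordered Order = iota
	Ordered
)

// myModule accepts only channels that negotiate the version it supports.
type myModule struct {
	supportedVersion string
}

// OnChanOpenInit is a simplified analogue of the handshake callback:
// returning an error aborts the channel handshake.
func (m myModule) OnChanOpenInit(order Order, portID, channelID, version string) (string, error) {
	if version != m.supportedVersion {
		return "", errors.New("unsupported channel version: " + version)
	}
	if order != Unordered {
		return "", errors.New("only UNORDERED channels are supported")
	}
	return version, nil
}

func main() {
	m := myModule{supportedVersion: "my-app-v1"}
	if _, err := m.OnChanOpenInit(Unordered, "transfer", "channel-0", "bad-version"); err != nil {
		fmt.Println("handshake rejected:", err)
	}
}
```

The real `IBCModule` callbacks take more parameters (context, connection hops, capability, counterparty), but the accept/reject contract is the same: a `nil` error lets the handshake proceed.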
At each step in the handshake, the module +associated with the `ChannelEnd` executes its callback. So +on `ChanOpenInit`, the module on chain A executes its callback `OnChanOpenInit`. + +The channel identifier is auto-derived in the format: `channel-{N}` where `N` is the next sequence to be used. + +#### Closing channels + +Closing a channel occurs in 2 handshake steps as defined in [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics). +Once a channel is closed, it cannot be reopened. The channel handshake steps are: + +**`ChanCloseInit`** closes a channel on the executing chain if + +- the channel exists and it is not already closed, +- the connection it exists upon is `OPEN`, +- the [IBC module callback `OnChanCloseInit`](/docs/ibc/next/ibc/apps/ibcmodule#channel-closing-callbacks) returns `nil`. + +`ChanCloseInit` can be initiated by any user by submitting a `MsgChannelCloseInit` transaction. +Note that channels are automatically closed when a packet times out on an `ORDERED` channel. +A timeout on an `ORDERED` channel skips the `ChanCloseInit` step and immediately closes the channel. + +**`ChanCloseConfirm`** is a response to a counterparty channel executing `ChanCloseInit`. The channel +on the executing chain closes if + +- the channel exists and is not already closed, +- the connection the channel exists upon is `OPEN`, +- the executing chain successfully verifies that the counterparty channel has been closed, +- the [IBC module callback `OnChanCloseConfirm`](/docs/ibc/next/ibc/apps/ibcmodule#channel-closing-callbacks) returns `nil`. + +Currently, none of the IBC applications provided in ibc-go support `ChanCloseInit`. + +### [Packets](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Modules communicate with each other by sending packets over IBC channels. All +IBC packets contain the destination `portID` and `channelID` along with the source `portID` and +`channelID`.
This packet structure allows modules to know the sender module of a given packet. IBC packets +contain a sequence to optionally enforce ordering. + +IBC packets also contain a `TimeoutHeight` and a `TimeoutTimestamp` that determine the deadline by which the receiving module must process a packet. + +Modules send custom application data to each other inside the `Data` `[]byte` field of the IBC packet. +Thus, packet data is opaque to IBC handlers. It is incumbent on a sender module to encode +their application-specific packet information into the `Data` field of packets. The receiver +module must decode that `Data` back to the original application data. + +### [Receipts and timeouts](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Since IBC works over a distributed network and relies on potentially faulty relayers to relay messages between ledgers, +IBC must handle the case where a packet does not get sent to its destination in a timely manner or at all. Packets must +specify a non-zero value for timeout height (`TimeoutHeight`) or timeout timestamp (`TimeoutTimestamp`) after which a packet can no longer be successfully received on the destination chain. + +- The `timeoutHeight` indicates a consensus height on the destination chain after which the packet is no longer to be processed, and instead counts as having timed out. +- The `timeoutTimestamp` indicates a timestamp on the destination chain after which the packet is no longer to be processed, and instead counts as having timed out. + +If the timeout passes without the packet being successfully received, the packet can no longer be +received on the destination chain. The sending module can timeout the packet and take appropriate actions. + +If the timeout is reached, then a proof of packet timeout can be submitted to the original chain.
The original chain can then perform +application-specific logic to timeout the packet, perhaps by rolling back the packet send changes (refunding senders any locked funds, etc.). + +- In `ORDERED` channels, a timeout of a single packet in the channel causes the channel to close. + + - If packet sequence `n` times out, then a packet at sequence `k > n` cannot be received without violating the contract of `ORDERED` channels that packets are processed in the order that they are sent. + - Since `ORDERED` channels enforce this invariant, a proof that sequence `n` has not been received on the destination chain by the specified timeout of packet `n` is sufficient to timeout packet `n` and close the channel. + +- In `UNORDERED` channels, the application-specific timeout logic for that packet is applied and the channel is not closed. + + - Packets can be received in any order. + - IBC writes a packet receipt for each sequence received in the `UNORDERED` channel. This receipt does not contain information; it is simply a marker intended to signify that the `UNORDERED` channel has received a packet at the specified sequence. + - To timeout a packet on an `UNORDERED` channel, a proof is required that a packet receipt **does not exist** for the packet's sequence by the specified timeout. + +For this reason, most modules should use `UNORDERED` channels as they require fewer liveness guarantees to function effectively for users of that channel. + +### [Acknowledgments](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Modules can also choose to write application-specific acknowledgments upon processing a packet. Acknowledgments can be written: + +- Synchronously on `OnRecvPacket` if the module processes packets as soon as they are received from the IBC module. +- Asynchronously if the module processes packets at some later point after receiving the packet.
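As an illustration of the success/failure acknowledgment pattern described above (modeled loosely on ICS 20's JSON result/error acknowledgment; the exact field names and helper functions here are assumptions, not ibc-go's types), a receiver could encode, and the original sender decode, acknowledgments like so:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Acknowledgment carries either a result (success) or an error string.
// Both sides must agree on this encoding during version negotiation.
type Acknowledgment struct {
	Result []byte `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
}

// Success reports whether packet processing succeeded on the receiver.
func (a Acknowledgment) Success() bool { return a.Error == "" }

// encodeAck is what the receiver writes on-chain; decodeAck is what the
// original sender runs on the relayed acknowledgment bytes.
func encodeAck(a Acknowledgment) []byte {
	bz, _ := json.Marshal(a)
	return bz
}

func decodeAck(bz []byte) (Acknowledgment, error) {
	var a Acknowledgment
	err := json.Unmarshal(bz, &a)
	return a, err
}

func main() {
	// a failed ack would trigger e.g. a refund in an ICS 20-like app
	bz := encodeAck(Acknowledgment{Error: "insufficient funds"})
	ack, _ := decodeAck(bz)
	fmt.Println(ack.Success())
}
```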
+ +This acknowledgment data is opaque to IBC much like the packet `Data` and is treated by IBC as a simple byte string `[]byte`. Receiver modules must encode their acknowledgment so that the sender module can decode it correctly. The encoding must be negotiated between the two parties during version negotiation in the channel handshake. + +The acknowledgment can encode whether the packet processing succeeded or failed, along with additional information that allows the sender module to take appropriate action. + +After the acknowledgment has been written by the receiving chain, a relayer relays the acknowledgment back to the original sender module. + +The original sender module then executes application-specific acknowledgment logic using the contents of the acknowledgment. + +- After an acknowledgment fails, packet-send changes can be rolled back (for example, refunding senders in ICS 20). +- After an acknowledgment is received successfully by the original sender chain, the corresponding packet commitment is deleted since it is no longer needed. + +## Further readings and specs + +If you want to learn more about IBC, check the following specifications: + +- [IBC specification overview](https://github.com/cosmos/ibc/blob/master/README.md) diff --git a/docs/ibc/next/ibc/permissioning.mdx b/docs/ibc/next/ibc/permissioning.mdx new file mode 100644 index 00000000..81acf5a6 --- /dev/null +++ b/docs/ibc/next/ibc/permissioning.mdx @@ -0,0 +1,27 @@ +--- +title: Permissioning +--- + +IBC is designed at its base level to be a permissionless protocol. This does not mean that chains cannot add permissioning on top of IBC. In ibc-go this can be accomplished by implementing and wiring an ante-decorator that checks if the IBC message is signed by a permissioned authority. If the signer address check passes, the tx can go through; otherwise it is rejected from the mempool.
+ +The antehandler runs before message-processing so it acts as a customizable filter that can reject messages before they get included in the block. The Cosmos SDK allows developers to write ante-decorators that can be stacked with others to add multiple independent customizable filters that run in sequence. Thus, chain developers that want to permission IBC messages are advised to implement their own custom permissioned IBC ante-decorator to add to the standard ante-decorator stack. + +## Best practices + +`MsgCreateClient`: permissioning the client creation is the most important for permissioned IBC. This will prevent malicious relayers from creating clients to fake chains. If a chain wants to control which chains are connected to it directly over IBC, the best way to do this is by controlling which clients get created. The permissioned authority can create clients only of counterparties that the chain approves of. The permissioned authority can be the governance account, however `MsgCreateClient` contains a consensus state that can be expired by the time governance passes the proposal to execute the message. Thus, if the voting period is longer than the unbonding period of the counterparty, it is advised to use a permissioned authority that can immediately execute the transaction (e.g. a trusted multisig). + +`MsgConnectionOpenInit`: permissioning this message will give the chain control over the connections that are opened and also will control which connection identifier is associated with which counterparty. + +`MsgConnectionOpenTry`: permissioning this message through a permissioned address check is ill-advised because it will prevent relayers from easily completing the handshake that was initialized on the counterparty. However, if the chain does want strict control of exactly which connections are opened, it can permission this message. Be aware, if two chains with strict permissions try to open a connection it may take much longer than expected. 
+ +`MsgChannelOpenInit`: permissioning this message will give the chain control over the channels that are opened and also will control which channel identifier is associated with which counterparty. + +`MsgChannelOpenTry`: permissioning this message through a permissioned address check is ill-advised because it will prevent relayers from easily completing the handshake that was initialized on the counterparty. However, if the chain does want strict control of exactly which channels are opened, it can permission this message. Be aware, if two chains with strict permissions try to open a channel it may take much longer than expected. + +It is not advised to permission any other message from ibc-go. Permissionless relayers should still be allowed to complete handshakes that were authorized by permissioned parties, and to relay user packets on channels that were also authorized by permissioned parties. This provides the maximum liveness provided by a permissionless relayer network with the safety guarantees provided by permissioned client, connection, and channel creation. + +## Genesis setup + +Chains that are starting up from genesis have the option of initializing authorized clients, connections and channels from genesis. This allows chains to automatically connect to desired chains with a desired identifier. + +Note: The chain must be launched soon after the genesis file is created so that the client creation does not occur with an expired consensus state. The connections and channels must also simply have their `INIT` messages executed so that relayers can complete the rest of the handshake. 
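The ante-decorator filter described at the start of this document can be sketched as follows. This is a hedged, self-contained toy with simplified message and decorator types and invented names (`permissionedIBCDecorator`, `AnteHandle` over a plain slice), not the actual Cosmos SDK `AnteDecorator` interface:

```go
package main

import (
	"errors"
	"fmt"
)

// msg is a simplified stand-in for an SDK message: its type URL and signer.
type msg struct {
	typeURL string
	signer  string
}

// permissionedIBCDecorator rejects selected IBC messages unless they are
// signed by the allow-listed authority (e.g. a trusted multisig).
type permissionedIBCDecorator struct {
	authority      string
	restrictedMsgs map[string]bool
}

// AnteHandle runs before message processing: returning an error keeps the
// tx out of the block; nil lets it proceed down the ante chain.
func (d permissionedIBCDecorator) AnteHandle(msgs []msg) error {
	for _, m := range msgs {
		if d.restrictedMsgs[m.typeURL] && m.signer != d.authority {
			return errors.New("unauthorized signer for " + m.typeURL)
		}
	}
	return nil
}

func main() {
	d := permissionedIBCDecorator{
		authority: "cosmos1trustedmultisig",
		restrictedMsgs: map[string]bool{
			"/ibc.core.client.v1.MsgCreateClient":           true,
			"/ibc.core.connection.v1.MsgConnectionOpenInit": true,
		},
	}
	err := d.AnteHandle([]msg{{typeURL: "/ibc.core.client.v1.MsgCreateClient", signer: "cosmos1random"}})
	fmt.Println(err)
}
```

In a real chain this logic would live in a decorator stacked into the standard ante-handler chain, while handshake-completing and packet-relay messages stay unrestricted, as recommended above.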
diff --git a/docs/ibc/next/ibc/relayer.mdx b/docs/ibc/next/ibc/relayer.mdx new file mode 100644 index 00000000..857fc9ed --- /dev/null +++ b/docs/ibc/next/ibc/relayer.mdx @@ -0,0 +1,48 @@ +--- +title: Relayer +--- + + + +## Pre-requisite readings + +* [IBC Overview](/docs/ibc/next/ibc/overview) +* [Events](https://docs.cosmos.network/v0.47/learn/advanced/events) + + + +## Events + +Events are emitted for every transaction processed by the base application to indicate the execution +of some logic clients may want to be aware of. This is extremely useful when relaying IBC packets. +Any message that uses IBC will emit events for the corresponding TAO logic executed as defined in +the IBC specification. + +In the SDK, it can be assumed that for every message there is an event emitted with the type `message`, +attribute key `action`, and an attribute value representing the type of message sent +(`channel_open_init` would be the attribute value for `MsgChannelOpenInit`). If a relayer queries +for transaction events, it can split message events using this event Type/Attribute Key pair. + +The Event Type `message` with the Attribute Key `module` may be emitted multiple times for a single +message due to application callbacks. It can be assumed that any TAO logic executed will result in +a module event emission with the attribute value `ibc_` (02-client emits `ibc_client`). + +### Subscribing with Tendermint + +Calling the Tendermint RPC method `Subscribe` via Tendermint's Websocket will return events using +Tendermint's internal representation of them. Instead of receiving back a list of events as they +were emitted, Tendermint will return the type `map[string][]string` which maps a string in the +form `.` to `attribute_value`. This causes extraction of the event +ordering to be non-trivial, but still possible. + +A relayer should use the `message.action` key to extract the number of messages in the transaction +and the type of IBC transactions sent. 
For every IBC transaction within the string array for +`message.action`, the necessary information should be extracted from the other event fields. If +`send_packet` appears at index 2 in the value for `message.action`, a relayer will need to use the +value at index 2 of the key `send_packet.packet_sequence`. This process should be repeated for each +piece of information needed to relay a packet. + +## Example Implementations + +* [Golang Relayer](https://github.com/cosmos/relayer) +* [Hermes](https://github.com/informalsystems/hermes) diff --git a/docs/ibc/next/ibc/upgrades/developer-guide.mdx b/docs/ibc/next/ibc/upgrades/developer-guide.mdx new file mode 100644 index 00000000..0aa79d7a --- /dev/null +++ b/docs/ibc/next/ibc/upgrades/developer-guide.mdx @@ -0,0 +1,9 @@ +--- +title: IBC Client Developer Guide to Upgrades +--- + +## Synopsis + +Learn how to implement upgrade functionality for your custom IBC client. + +Please see the section [Handling upgrades](/docs/ibc/next/light-clients/developer-guide/upgrades) from the light client developer guide for more information. diff --git a/docs/ibc/next/ibc/upgrades/genesis-restart.mdx b/docs/ibc/next/ibc/upgrades/genesis-restart.mdx new file mode 100644 index 00000000..6eb43bfd --- /dev/null +++ b/docs/ibc/next/ibc/upgrades/genesis-restart.mdx @@ -0,0 +1,46 @@ +--- +title: Genesis Restart Upgrades +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients using genesis restarts. + +**NOTE**: Regular genesis restarts are currently unsupported by relayers! + +## IBC Client Breaking Upgrades + +IBC client breaking upgrades are possible using genesis restarts. +It is highly recommended to use the in-place migrations instead of a genesis restart. +Genesis restarts should be used sparingly and as backup plans. + +Genesis restarts still require the usage of an IBC upgrade proposal in order to correctly upgrade counterparty clients. 
+ +### Step-by-Step Upgrade Process for SDK Chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the [IBC Client Breaking Upgrade List](/docs/ibc/next/ibc/upgrades/quick-guide#ibc-client-breaking-upgrades) and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a governance proposal with the [`MsgIBCSoftwareUpgrade`](https://buf.build/cosmos/ibc/docs/main:ibc.core.client.v1#ibc.core.client.v1.MsgIBCSoftwareUpgrade) which contains an `UpgradePlan` and a new IBC `ClientState` in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as `TrustingPeriod`). +2. Vote on and pass the governance proposal. +3. Halt the node after successful upgrade. +4. Export the genesis file. +5. Swap to the new binary. +6. Run migrations on the genesis file. +7. Remove the upgrade plan set by the governance proposal from the genesis file. This may be done by migrations. +8. Change desired chain-specific fields (chain id, unbonding period, etc). This may be done by migrations. +9. Reset the node's data. +10. Start the chain. + +Upon passing the governance proposal, the upgrade module will commit the `UpgradedClient` under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. 
They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +These steps are identical to the regular [IBC client breaking upgrade process](/docs/ibc/next/ibc/upgrades/quick-guide#step-by-step-upgrade-process-for-relayers-upgrading-counterparty-clients). + +## Non-IBC Client Breaking Upgrades + +While ibc-go supports genesis restarts which do not break IBC clients, relayers do not support this upgrade path. +Here is a tracking issue on [Hermes](https://github.com/informalsystems/ibc-rs/issues/1152). +Please do not attempt a regular genesis restart unless you have a tool to update counterparty clients correctly. diff --git a/docs/ibc/next/ibc/upgrades/intro.mdx b/docs/ibc/next/ibc/upgrades/intro.mdx new file mode 100644 index 00000000..594f3139 --- /dev/null +++ b/docs/ibc/next/ibc/upgrades/intro.mdx @@ -0,0 +1,13 @@ +--- +title: Upgrading IBC Chains Overview +description: >- + This directory contains information on how to upgrade an IBC chain without + breaking counterparty clients and connections. +--- + +This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections. + +IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise, there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving. Many chain upgrades may be irrelevant to IBC; however, some upgrades could potentially break counterparty clients if not handled correctly. Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client. + +1.
The [quick-guide](/docs/ibc/next/ibc/upgrades/quick-guide) describes how IBC-connected chains can perform client-breaking upgrades and how relayers can securely upgrade counterparty clients using the SDK. +2. The [developer-guide](/docs/ibc/next/ibc/upgrades/developer-guide) is a guide for developers intending to develop IBC client implementations with upgrade functionality. diff --git a/docs/ibc/next/ibc/upgrades/quick-guide.mdx b/docs/ibc/next/ibc/upgrades/quick-guide.mdx new file mode 100644 index 00000000..938254b9 --- /dev/null +++ b/docs/ibc/next/ibc/upgrades/quick-guide.mdx @@ -0,0 +1,54 @@ +--- +title: How to Upgrade IBC Chains and their Clients +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients. + +The information in this doc for upgrading chains is relevant to SDK chains. However, the guide for counterparty clients is relevant to any Tendermint client that enables upgrades. + +## IBC Client Breaking Upgrades + +IBC-connected chains must perform an IBC upgrade if their upgrade will break counterparty IBC clients. The current IBC protocol supports upgrading tendermint chains for a specific subset of IBC-client-breaking upgrades. Here is the exhaustive list of IBC client-breaking upgrades and whether the IBC protocol currently supports such upgrades. + +IBC currently does **NOT** support unplanned upgrades. All of the following upgrades must be planned and committed to in advance by the upgrading chain, in order for counterparty clients to maintain their connections securely. + +Note: Since upgrades are only implemented for Tendermint clients, this doc only discusses upgrades on Tendermint chains that would break counterparty IBC Tendermint Clients. + +1. Changing the Chain-ID: **Supported** +2. Changing the UnbondingPeriod: **Partially Supported**, chains may increase the unbonding period with no issues. However, decreasing the unbonding period may irreversibly break some counterparty clients. 
Thus, it is **not recommended** that chains reduce the unbonding period. +3. Changing the height (resetting to 0): **Supported**, so long as chains remember to increment the revision number in their chain-id. +4. Changing the ProofSpecs: **Supported**, this should be changed if the proof structure needed to verify IBC proofs is changed across the upgrade, e.g. switching from an IAVL store to a SimpleTree store. +5. Changing the UpgradePath: **Supported**, this might involve changing the key under which upgraded clients and consensus states are stored in the upgrade store, or even migrating the upgrade store itself. +6. Migrating the IBC store: **Unsupported**, as the IBC store location is negotiated by the connection. +7. Upgrading to a backwards compatible version of IBC: **Supported** +8. Upgrading to a non-backwards compatible version of IBC: **Unsupported**, as IBC version is negotiated on connection handshake. +9. Changing the Tendermint LightClient algorithm: **Partially Supported**. Changes to the light client algorithm that do not change the ClientState or ConsensusState struct may be supported, provided that the counterparty is also upgraded to support the new light client algorithm. Changes that require updating the ClientState and ConsensusState structs themselves are theoretically possible by providing a path to translate an older ClientState struct into the new ClientState struct; however, this is not currently implemented. + +## Step-by-Step Upgrade Process for SDK chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the list above and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1.
Create a governance proposal with the [`MsgIBCSoftwareUpgrade`](https://buf.build/cosmos/ibc/docs/main:ibc.core.client.v1#ibc.core.client.v1.MsgIBCSoftwareUpgrade) message which contains an `UpgradePlan` and a new IBC `ClientState` in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients (chain-specified parameters) and zero out any client-customizable fields (such as `TrustingPeriod`). +2. Vote on and pass the governance proposal. + +Upon passing the governance proposal, the upgrade module will commit the `UpgradedClient` under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +## Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +Once the upgrading chain has committed to upgrading, relayers must wait until the chain halts at the upgrade height before upgrading counterparty clients. This is because chains may reschedule or cancel upgrade plans before they occur. Thus, relayers must wait until the chain reaches the upgrade height and halts before they can be sure the upgrade will take place. + +The upgrade process for relayers upgrading the counterparty clients is as follows: + +1. Wait for the upgrading chain to reach the upgrade height and halt. +2. Query a full node for the proofs of `UpgradedClient` and `UpgradedConsensusState` at the last height of the old chain. +3.
Update the counterparty client to the last height of the old chain using the `UpdateClient` msg. +4. Submit an `UpgradeClient` msg to the counterparty chain with the `UpgradedClient`, `UpgradedConsensusState` and their respective proofs. +5. Submit an `UpdateClient` msg to the counterparty chain with a header from the new upgraded chain. + +The Tendermint client on the counterparty chain will verify that the upgrading chain did indeed commit to the upgraded client and upgraded consensus state at the upgrade height (since the upgrade height is included in the key). If the proofs are verified against the upgrade height, then the client will upgrade to the new client while retaining all of its client-customized fields. Thus, it will retain its old TrustingPeriod, TrustLevel, MaxClockDrift, etc., while adopting the new chain-specified fields such as UnbondingPeriod, ChainId, UpgradePath, etc. Note that this can lead to an invalid client since the old client-chosen fields may no longer be valid given the new chain-chosen fields. Upgrading chains should try to avoid these situations by not altering parameters that can break old clients. For an example, see the UnbondingPeriod example in the supported upgrades section. + +The upgraded consensus state will serve purely as a basis of trust for future `UpdateClientMsgs` and will not contain a consensus root to perform proof verification against. Thus, relayers must submit an `UpdateClientMsg` with a header from the new chain so that the connection can be used for proof verification again. diff --git a/docs/ibc/next/index.mdx b/docs/ibc/next/index.mdx deleted file mode 100644 index 5140437c..00000000 --- a/docs/ibc/next/index.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "IBC Overview" -description: "IBC-specific integration docs (coming soon)" ---- - - -The IBC documentation is under active development and will appear here soon. - - -For now, see EVM IBC references in Concepts and Precompiles.
- diff --git a/docs/ibc/next/intro.mdx b/docs/ibc/next/intro.mdx new file mode 100644 index 00000000..9d5a2dd0 --- /dev/null +++ b/docs/ibc/next/intro.mdx @@ -0,0 +1,43 @@ +--- +title: IBC-Go Documentation +description: >- + Welcome to the documentation for IBC-Go, the Golang implementation of the + Inter-Blockchain Communication Protocol! +--- + +Welcome to the documentation for IBC-Go, the Golang implementation of the Inter-Blockchain Communication Protocol! + +The Inter-Blockchain Communication Protocol (IBC) is a protocol that allows blockchains to talk to each other. Chains that speak IBC can share any type of data as long as it's encoded in bytes, enabling the industry’s most feature-rich cross-chain interactions. IBC can be used to build a wide range of cross-chain applications that include token transfers, atomic swaps, multi-chain smart contracts (with or without mutually comprehensible VMs), and cross-chain account control. IBC is secure and permissionless. + +The protocol realizes this interoperability by specifying a set of data structures, abstractions, and semantics that can be implemented by any distributed ledger that satisfies a small set of requirements. + + +**Notice** +Since ibc-go v10, there are two versions of the protocol in the same release: IBC classic and IBC v2. The protocols are separate - a connection uses either IBC classic or IBC v2. + + +## High-level overview of IBC v2 + +For a high-level overview of IBC v2, please refer to [this blog post](https://ibcprotocol.dev/blog/ibc-v2-announcement). For a more detailed understanding of the IBC v2 protocol, please refer to the [IBC v2 protocol specification](https://github.com/cosmos/ibc/tree/main/spec/IBC_V2). + +If you are interested in using the canonical deployment of IBC v2, connecting Cosmos chains and Ethereum, take a look at the [IBC Eureka](https://docs.skip.build/go/eureka/eureka-overview) documentation to get started.
+ +## High-level overview of IBC Classic + +The following diagram shows how IBC works at a high level: + +![Light Mode IBC Overview](/docs/ibc/images/images/ibcoverview-light.svg#gh-light-mode-only)![Dark Mode IBC Overview](/docs/ibc/images/images/ibcoverview-dark.svg#gh-dark-mode-only) + +The transport layer (TAO) provides the necessary infrastructure to establish secure connections and authenticate data packets between chains. The application layer builds on top of the transport layer and defines exactly how data packets should be packaged and interpreted by the sending and receiving chains. + +IBC provides a reliable, permissionless, and generic base layer (allowing for the secure relaying of data packets), while allowing for composability and modularity with separation of concerns by moving application designs (interpreting and acting upon the packet data) to a higher-level layer. This separation is reflected in the categories: + +* **IBC/TAO** comprises the Transport, Authentication, and Ordering of packets, i.e. the infrastructure layer. +* **IBC/APP** consists of the application handlers for the data packets being passed over the transport layer. These include but are not limited to fungible token transfers (ICS-20), NFT transfers (ICS-721), and interchain accounts (ICS-27). +* **Application module:** groups any application, middleware or smart contract that may wrap downstream application handlers to provide enhanced functionality. + +Note three crucial elements in the diagram: + +* The chains depend on relayers to communicate. [Relayers](https://github.com/cosmos/ibc/blob/main/spec/relayer/ics-018-relayer-algorithms/README.md) are the "physical" connection layer of IBC: off-chain processes responsible for relaying data between two chains running the IBC protocol by scanning the state of each chain, constructing appropriate datagrams, and executing them on the opposite chain as is allowed by the protocol. 
* Many relayers can serve one or more channels to send messages between the chains. +* Each side of the connection uses the light client of the other chain to quickly verify incoming messages. diff --git a/docs/ibc/next/light-clients/developer-guide/client-state.mdx b/docs/ibc/next/light-clients/developer-guide/client-state.mdx new file mode 100644 index 00000000..a28e664b --- /dev/null +++ b/docs/ibc/next/light-clients/developer-guide/client-state.mdx @@ -0,0 +1,16 @@ +--- +title: Client State interface +description: Learn how to implement the ClientState interface. +--- + +Learn how to implement the [`ClientState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L36) interface. + +## `ClientType` method + +`ClientType` should return a unique string identifier of the light client. This will be used when generating a client identifier. +The format is created as follows: `{client-type}-{N}` where `{N}` is the unique global nonce associated with a specific client (e.g. `07-tendermint-0`). + +## `Validate` method + +`Validate` should validate every client state field and should return an error if any value is invalid. The light client +implementer is in charge of determining which checks are required. See the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/client_state.go#L111) as a reference. diff --git a/docs/ibc/next/light-clients/developer-guide/consensus-state.mdx b/docs/ibc/next/light-clients/developer-guide/consensus-state.mdx new file mode 100644 index 00000000..4b7c30a6 --- /dev/null +++ b/docs/ibc/next/light-clients/developer-guide/consensus-state.mdx @@ -0,0 +1,24 @@ +--- +title: Consensus State interface +description: >- + A ConsensusState is the snapshot of the counterparty chain that an IBC client + uses to verify proofs (e.g. a block). +--- + +A `ConsensusState` is the snapshot of the counterparty chain that an IBC client uses to verify proofs (e.g.
a block). + +The further development of multiple types of IBC light clients and the difficulties presented by this generalization problem (see [ADR-006](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-006-02-client-refactor.md) for more information about this historical context) led to the design decision of each client keeping track of and setting its own `ClientState` and `ConsensusState`, as well as the simplification of client `ConsensusState` updates through the generalized `ClientMessage` interface. + +The below [`ConsensusState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L133) interface is a generalized interface for the types of information a `ConsensusState` could contain. For a reference `ConsensusState` implementation, please see the [Tendermint light client `ConsensusState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/consensus_state.go). + +## `ClientType` method + +This is the type of client consensus. It should be the same as the `ClientType` return value for the [corresponding `ClientState` implementation](/docs/ibc/next/light-clients/developer-guide/client-state). + +## `GetTimestamp` method + +`GetTimestamp` should return the timestamp (in nanoseconds) of the consensus state snapshot. This function has been deprecated and will be removed in a future release. + +## `ValidateBasic` method + +`ValidateBasic` should validate every consensus state field and should return an error if any value is invalid. The light client implementer is in charge of determining which checks are required.
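The methods above can be sketched with a minimal, hypothetical implementation. The `MockConsensusState` type, its fields, and its checks below are illustrative assumptions for this guide, not part of ibc-go; a real implementation would live in a light client module and satisfy the `ConsensusState` interface in `modules/core/exported`.

```go
package main

import (
	"errors"
	"fmt"
)

// MockConsensusState is a hypothetical, simplified consensus state used only
// to illustrate the interface methods described above.
type MockConsensusState struct {
	Root      []byte // commitment root used for proof verification
	Timestamp uint64 // unix time in nanoseconds
}

// ClientType returns the same identifier as the corresponding ClientState.
func (cs MockConsensusState) ClientType() string { return "09-mock" }

// GetTimestamp returns the snapshot's timestamp in nanoseconds.
func (cs MockConsensusState) GetTimestamp() uint64 { return cs.Timestamp }

// ValidateBasic checks every field; which checks are required is left to the
// light client implementer.
func (cs MockConsensusState) ValidateBasic() error {
	if len(cs.Root) == 0 {
		return errors.New("consensus state root cannot be empty")
	}
	if cs.Timestamp == 0 {
		return errors.New("consensus state timestamp cannot be zero")
	}
	return nil
}

func main() {
	cs := MockConsensusState{Root: []byte("app_hash"), Timestamp: 1}
	fmt.Println(cs.ClientType(), cs.ValidateBasic() == nil)
}
```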
diff --git a/docs/ibc/next/light-clients/developer-guide/light-client-module.mdx b/docs/ibc/next/light-clients/developer-guide/light-client-module.mdx new file mode 100644 index 00000000..28ab0449 --- /dev/null +++ b/docs/ibc/next/light-clients/developer-guide/light-client-module.mdx @@ -0,0 +1,71 @@ +--- +title: Light Client Module interface +description: Status must return the status of the client. +--- + +## `Status` method + +`Status` must return the status of the client. + +* An `Active` status indicates that clients are allowed to process packets. +* A `Frozen` status indicates that misbehaviour was detected in the counterparty chain and the client is not allowed to be used. +* An `Expired` status indicates that a client is not allowed to be used because it was not updated for longer than the trusting period. +* An `Unknown` status indicates that there was an error in determining the status of a client. + +All possible `Status` types can be found [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L22-L32). + +This field is returned in the response of the gRPC [`ibc.core.client.v1.Query/ClientStatus`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/types/query.pb.go#L665) endpoint. + +## `TimestampAtHeight` method + +`TimestampAtHeight` must return the timestamp for the consensus state associated with the provided height. +This value is used to facilitate timeouts by checking the packet timeout timestamp against the returned value. + +## `LatestHeight` method + +`LatestHeight` should return the latest block height that the client state represents. + +## `Initialize` method + +Clients must validate the initial consensus state, and set the initial client state and consensus state in the provided client store. +Clients may also store any necessary client-specific metadata. 
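As a rough sketch of these responsibilities, assuming a plain key/value map in place of the real client store (the function, keys, and store type here are all hypothetical, for illustration only):

```go
package main

import (
	"errors"
	"fmt"
)

// store is a stand-in for the client's key/value store; real light client
// modules receive a store scoped to the client from core IBC.
type store map[string][]byte

// initialize validates the initial consensus state and writes the initial
// client state and consensus state under illustrative keys, mirroring what
// an Initialize implementation is responsible for.
func initialize(clientStore store, clientState, consensusState []byte) error {
	if len(consensusState) == 0 {
		return errors.New("initial consensus state cannot be empty")
	}
	clientStore["clientState"] = clientState
	clientStore["consensusStates/1"] = consensusState
	// a client may also store any necessary client-specific metadata here
	return nil
}

func main() {
	s := store{}
	err := initialize(s, []byte("client"), []byte("consensus"))
	fmt.Println(err == nil, len(s))
}
```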
+ +`Initialize` is called when a [client is created](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L30). + +## `UpdateState` method + +`UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. See section [`UpdateState`](/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour#updatestate) for more information. + +## `UpdateStateOnMisbehaviour` method + +`UpdateStateOnMisbehaviour` should perform appropriate state changes on a client state given that misbehaviour has been detected and verified. See section [`UpdateStateOnMisbehaviour`](/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour#updatestateonmisbehaviour) for more information. + +## `VerifyMembership` method + +`VerifyMembership` must verify the existence of a value at a given commitment path at the specified height. For more information about membership proofs +see the [Existence and non-existence proofs section](/docs/ibc/next/light-clients/developer-guide/proofs). + +## `VerifyNonMembership` method + +`VerifyNonMembership` must verify the absence of a value at a given commitment path at a specified height. For more information about non-membership proofs +see the [Existence and non-existence proofs section](/docs/ibc/next/light-clients/developer-guide/proofs). + +## `VerifyClientMessage` method + +`VerifyClientMessage` must verify a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. +It must handle each type of `ClientMessage` appropriately. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` +will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned +if the ClientMessage fails to verify. 
See section [`VerifyClientMessage`](/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour#verifyclientmessage) for more information. + +## `CheckForMisbehaviour` method + +`CheckForMisbehaviour` checks for evidence of misbehaviour in a `Header` or `Misbehaviour` type. It assumes the `ClientMessage` +has already been verified. See section [`CheckForMisbehaviour`](/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour#checkformisbehaviour) for more information. + +## `RecoverClient` method + +`RecoverClient` is used to recover an expired or frozen client by updating the client with the state of a substitute client. The method must verify that the provided substitute may be used to update the subject client. See section [Implementing `RecoverClient`](/docs/ibc/next/light-clients/proposals#implementing-recoverclient) for more information. + +## `VerifyUpgradeAndUpdateState` method + +`VerifyUpgradeAndUpdateState` provides a path to upgrading clients given an upgraded `ClientState`, upgraded `ConsensusState` and proofs for each. See section [Implementing `VerifyUpgradeAndUpdateState`](/docs/ibc/next/light-clients/developer-guide/upgrades#implementing-verifyupgradeandupdatestate) for more information. diff --git a/docs/ibc/next/light-clients/developer-guide/overview.mdx b/docs/ibc/next/light-clients/developer-guide/overview.mdx new file mode 100644 index 00000000..5552788a --- /dev/null +++ b/docs/ibc/next/light-clients/developer-guide/overview.mdx @@ -0,0 +1,90 @@ +--- +title: Overview +--- + +## Synopsis + +Learn how to build IBC light client modules and fulfill the interfaces required to integrate with core IBC.
+ + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/next/ibc/overview) +- [IBC Transport, Authentication, and Ordering Layer - Clients](https://tutorials.cosmos.network/academy/3-ibc/4-clients.html) +- [ICS-002 Client Semantics](https://github.com/cosmos/ibc/tree/main/spec/core/ics-002-client-semantics) + + + +IBC uses light clients in order to provide trust-minimized interoperability between sovereign blockchains. Light clients operate under a strict set of rules which provide security guarantees for state updates and facilitate the ability to verify the state of a remote blockchain using merkle proofs. + +The following aims to provide a high-level IBC light client module developer guide. Access to IBC light clients is gated by the core IBC `MsgServer` which utilizes the abstractions set by the `02-client` submodule to call into a light client module. A light client module developer is only required to implement a set of interfaces as defined in the `modules/core/exported` package of ibc-go. + +A light client module developer should be concerned with four main interfaces: + +- [`LightClientModule`](#lightclientmodule) a module which manages many light client instances of a certain type. +- [`ClientState`](#clientstate) encapsulates the light client implementation and its semantics. +- [`ConsensusState`](#consensusstate) tracks consensus data used for verification of client updates, misbehaviour detection and proof verification of counterparty state. +- [`ClientMessage`](#clientmessage) used for submitting block headers for client updates and submission of misbehaviour evidence using conflicting headers. + +Throughout this guide, the `07-tendermint` light client module may be referred to as a reference example. + +## Concepts and vocabulary + +### `LightClientModule` + +`LightClientModule` is an interface defined by core IBC which allows for modular light client implementations.
All light client implementations _must_ implement the [`LightClientModule` interface](https://github.com/cosmos/ibc-go/blob/501a8462345da099144efe91d495bfcfa18d760d/modules/core/exported/client.go#L51) so that core IBC may redirect calls to the light client module. + +For example, a light client module may need to: + +- create clients +- update clients +- recover and upgrade clients +- verify membership and non-membership + +The methods which make up this interface are detailed at a more granular level in the [`LightClientModule` section of this guide](/docs/ibc/next/light-clients/developer-guide/light-client-module). + +Please refer to the `07-tendermint`'s [`LightClientModule` definition](https://github.com/cosmos/ibc-go/blob/501a8462345da099144efe91d495bfcfa18d760d/modules/light-clients/07-tendermint/light_client_module.go#L17) for more information. + +### `ClientState` + +`ClientState` is a term used to define the data structure which encapsulates opaque light client state. The `ClientState` contains all the information needed to verify a `ClientMessage` and perform membership and non-membership proof verification of counterparty state. This includes properties that refer to the remote state machine, the light client type and the specific light client instance. + +For example: + +- Constraints used for client updates. +- Constraints used for misbehaviour detection. +- Constraints used for state verification. +- Constraints used for client upgrades. + +The `ClientState` type maintained within the light client module _must_ implement the [`ClientState`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L36) interface defined in `modules/core/exported/client.go`. +The methods which make up this interface are detailed at a more granular level in the [`ClientState` section of this guide](/docs/ibc/next/light-clients/developer-guide/client-state).
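To ground these concepts, here is a minimal, hypothetical `ClientState` sketch. The `MockClientState` type and its field set are illustrative assumptions for this guide (not the actual `07-tendermint` definition, which is generated from protobuf and carries many more fields):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// MockClientState is a hypothetical, pared-down client state used only to
// illustrate the concepts above: properties of the remote state machine
// plus constraints used for client updates.
type MockClientState struct {
	ChainID        string        // identifies the remote state machine
	TrustingPeriod time.Duration // constraint used for client updates
	LatestHeight   uint64        // most recent verified height
}

// ClientType returns the unique light client type identifier.
func (cs MockClientState) ClientType() string { return "09-mock" }

// Validate checks every field and returns an error on invalid values.
func (cs MockClientState) Validate() error {
	if cs.ChainID == "" {
		return errors.New("chain id cannot be empty")
	}
	if cs.TrustingPeriod <= 0 {
		return errors.New("trusting period must be positive")
	}
	return nil
}

func main() {
	cs := MockClientState{ChainID: "testchain-1", TrustingPeriod: 2 * time.Hour, LatestHeight: 10}
	fmt.Println(cs.ClientType(), cs.Validate() == nil)
}
```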
Please refer to the `07-tendermint` light client module's [`ClientState` definition](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/proto/ibc/lightclients/tendermint/v1/tendermint.proto#L18) containing information such as chain ID, status, latest height, unbonding period and proof specifications. + +### `ConsensusState` + +`ConsensusState` is a term used to define the data structure which encapsulates consensus data at a particular point in time, i.e. a unique height or sequence number of a state machine. There must exist a single trusted `ConsensusState` for each height. `ConsensusState` generally contains a trusted root, validator set information and timestamp. + +For example, the `ConsensusState` of the `07-tendermint` light client module defines a trusted root which is used by the `ClientState` to perform verification of membership and non-membership commitment proofs, as well as the next validator set hash, which is used to verify that headers can be trusted in client updates. + +The `ConsensusState` type maintained within the light client module _must_ implement the [`ConsensusState`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L134) interface defined in `modules/core/exported/client.go`. +The methods which make up this interface are detailed at a more granular level in the [`ConsensusState` section of this guide](/docs/ibc/next/light-clients/developer-guide/consensus-state). + +### `Height` + +`Height` defines a monotonically increasing sequence number which provides ordering of consensus state data persisted through client updates. +IBC light client module developers are expected to use the [concrete type](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/proto/ibc/core/client/v1/client.proto#L89) provided by the `02-client` submodule.
This implements the expectations required by the [`Height`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L156) interface defined in `modules/core/exported/client.go`. + +### `ClientMessage` + +`ClientMessage` refers to the interface type [`ClientMessage`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L147) used for performing updates to a `ClientState` stored on chain. +This may be any concrete type which produces a change in state to the IBC client when verified. + +The following are considered as valid update scenarios: + +- A block header which when verified inserts a new `ConsensusState` at a unique height. +- A batch of block headers which when verified inserts `N` `ConsensusState` instances for `N` unique heights. +- Evidence of misbehaviour provided by two conflicting block headers. + +Learn more in the [Handling update and misbehaviour](/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour) section. diff --git a/docs/ibc/next/light-clients/developer-guide/proofs.mdx b/docs/ibc/next/light-clients/developer-guide/proofs.mdx new file mode 100644 index 00000000..235ed6c0 --- /dev/null +++ b/docs/ibc/next/light-clients/developer-guide/proofs.mdx @@ -0,0 +1,73 @@ +--- +title: Existence/Non-Existence Proofs +description: >- + IBC uses merkle proofs in order to verify the state of a remote counterparty + state machine given a trusted root, and ICS-23 is a general approach for + verifying merkle trees which is used in ibc-go. +--- + +IBC uses merkle proofs in order to verify the state of a remote counterparty state machine given a trusted root, and [ICS-23](https://github.com/cosmos/ics23/tree/master/go) is a general approach for verifying merkle trees which is used in ibc-go. + +Currently, all Cosmos SDK modules contain their own stores, which maintain the state of the application module in an IAVL (immutable AVL) binary merkle tree format. 
Specifically with regard to IBC, core IBC maintains its own IAVL store, and IBC apps (e.g. transfer) maintain their own dedicated stores. The Cosmos SDK multistore therefore creates a simple merkle tree of all of these IAVL trees, and from each of these individual IAVL tree root hashes it derives a root hash for the application state tree as a whole (the `AppHash`).

For the purposes of ibc-go, there are two types of proofs which are important: existence and non-existence proofs, terms which have been used interchangeably with membership and non-membership proofs. For the purposes of this guide, we will stick with "existence" and "non-existence".

## Existence proofs

Existence proofs are used in IBC transactions which involve verification of counterparty state for transactions which will result in the writing of provable state. For example, this includes verification of IBC store state for handshakes and packets.

Put simply, existence proofs prove that a particular key and value exist in the tree. Under the hood, an IBC existence proof is comprised of two proofs: an IAVL proof that the key exists in the IBC store/IBC root hash, and a proof that the IBC root hash exists in the multistore root hash.

## Non-existence proofs

Non-existence proofs verify the absence of data stored within counterparty state and are used to prove that a key does NOT exist in state. As stated above, these types of proofs can be used to time out packets by proving that the counterparty has not written a packet receipt into the store, meaning that a token transfer has NOT successfully occurred.

Some trees (e.g. SMT) may have a sentinel empty child for non-existent keys. In this case, the ICS-23 proof spec should include this `EmptyChild` so that ICS-23 handles the non-existence proof correctly.

In some cases, it is necessary to "mock" non-existence proofs if the counterparty does not have the ability to prove absence.
Since the verification method is designed to give complete control to client implementations, clients can support chains that do not provide absence proofs by verifying the existence of a non-empty sentinel `ABSENCE` value. In these special cases, the proof provided will be an ICS-23 `Existence` proof, and the client will verify that the `ABSENCE` value is stored under the given path for the given height.

## State verification methods: `VerifyMembership` and `VerifyNonMembership`

The state verification functions for all IBC data types have been consolidated into two generic methods, `VerifyMembership` and `VerifyNonMembership`.

From the [`LightClientModule` interface definition](https://github.com/cosmos/ibc-go/blob/main/modules/core/exported/client.go#L56), we find:

```go expandable
// VerifyMembership is a generic proof verification method which verifies
// a proof of the existence of a value at a given CommitmentPath at the
// specified height. The caller is expected to construct the full CommitmentPath
// from a CommitmentPrefix and a standardized path (as defined in ICS 24).
VerifyMembership(
	ctx sdk.Context,
	clientID string,
	height Height,
	delayTimePeriod uint64,
	delayBlockPeriod uint64,
	proof []byte,
	path Path,
	value []byte,
) error

// VerifyNonMembership is a generic proof verification method which verifies
// the absence of a given CommitmentPath at a specified height. The caller is
// expected to construct the full CommitmentPath from a CommitmentPrefix and
// a standardized path (as defined in ICS 24).
VerifyNonMembership(
	ctx sdk.Context,
	clientID string,
	height Height,
	delayTimePeriod uint64,
	delayBlockPeriod uint64,
	proof []byte,
	path Path,
) error
```

Both are expected to be provided with a standardised key path, `exported.Path`, as defined in [ICS-24 host requirements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements).
Membership verification requires callers to provide the value marshalled as `[]byte`. Delay period values should be zero for non-packet processing verification. A zero proof height is now allowed by core IBC and may be passed into `VerifyMembership` and `VerifyNonMembership`. Light clients are responsible for returning an error if a zero proof height is invalid behaviour. + +Please refer to the [ICS-23 implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/23-commitment/types/merkle.go#L131-L205) for a concrete example. diff --git a/docs/ibc/next/light-clients/developer-guide/proposals.mdx b/docs/ibc/next/light-clients/developer-guide/proposals.mdx new file mode 100644 index 00000000..ea7df0e6 --- /dev/null +++ b/docs/ibc/next/light-clients/developer-guide/proposals.mdx @@ -0,0 +1,32 @@ +--- +title: Handling Proposals +--- + +It is possible to update the client with the state of the substitute client through a governance proposal. This type of governance proposal is typically used to recover an expired or frozen client, as it can recover the entire state and therefore all existing channels built on top of the client. `RecoverClient` should be implemented to handle the proposal. + +## Implementing `RecoverClient` + +In the [`LightClientModule` interface](https://github.com/cosmos/ibc-go/blob/501a8462345da099144efe91d495bfcfa18d760d/modules/core/exported/client.go#L51), we find: + +```go +/ RecoverClient must verify that the provided substitute +/ may be used to update the subject client. The light client +/ must set the updated client and consensus states within +/ the clientStore for the subject client. +RecoverClient( + ctx sdk.Context, + clientID, + substituteClientID string, +) + +error +``` + +Prior to updating, this function must verify that: + +* the substitute client is the same type as the subject client. 
For a reference implementation, please see the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/light-clients/07-tendermint/proposal_handle.go#L34).
* the provided substitute may be used to update the subject client. This may mean that certain parameters must remain unaltered. For example, a [valid substitute Tendermint light client](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/light-clients/07-tendermint/proposal_handle.go#L86) must NOT change the chain ID, trust level, max clock drift, unbonding period, proof specs or upgrade path. Please note that `AllowUpdateAfterMisbehaviour` and `AllowUpdateAfterExpiry` have been deprecated (see ADR 026 for more information).

After these checks are performed, the function must [set the updated client and consensus states](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/proposal_handle.go#L77) within the client store for the subject client.

Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/light-clients/07-tendermint/proposal_handle.go#L79) for reference.

diff --git a/docs/ibc/next/light-clients/developer-guide/setup.mdx b/docs/ibc/next/light-clients/developer-guide/setup.mdx
new file mode 100644
index 00000000..8ed11a53
--- /dev/null
+++ b/docs/ibc/next/light-clients/developer-guide/setup.mdx
@@ -0,0 +1,157 @@
---
title: Setup
---

## Synopsis

Learn how to configure light client modules and create clients using core IBC and the `02-client` submodule.

The last step to finish the development of the light client is to implement the `AppModuleBasic` interface, so that the light client can be added to the chain's `app.go` alongside other light client types the chain enables.
Finally, a succinct rundown is given of the remaining steps to make the light client operational: getting the light client type passed through governance and creating the clients.

## Configuring a light client module

An IBC light client module must implement the [`AppModuleBasic`](https://github.com/cosmos/cosmos-sdk/blob/main/types/module/module.go#L50) interface in order to register its concrete types against the core IBC interfaces defined in `modules/core/exported`. This is accomplished via the `RegisterInterfaces` method, which provides the light client module with the opportunity to register codec types using the chain's `InterfaceRegistry`. Please refer to the [`07-tendermint` codec registration](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/codec.go#L11).

The `AppModuleBasic` interface may also be leveraged to install custom CLI handlers for light client module users. Light client modules can safely no-op for interface methods which they do not wish to implement.

Please refer to the [core IBC documentation](/docs/ibc/next/ibc/integration#integrating-light-clients) for how to configure additional light client modules alongside `07-tendermint` in `app.go`.

See below for an example of the `07-tendermint` implementation of `AppModuleBasic`.

```go expandable
var _ module.AppModuleBasic = AppModuleBasic{}

// AppModuleBasic defines the basic application module used by the tendermint light client.
// Only the RegisterInterfaces function needs to be implemented. All other functions perform
// a no-op.
type AppModuleBasic struct{}

// Name returns the tendermint module name.
func (AppModuleBasic) Name() string {
	return ModuleName
}

// RegisterLegacyAminoCodec performs a no-op. The Tendermint client does not support amino.
func (AppModuleBasic) RegisterLegacyAminoCodec(*codec.LegacyAmino) {}

// RegisterInterfaces registers module concrete types into protobuf Any. This allows core IBC
// to unmarshal tendermint light client types.
func (AppModuleBasic) RegisterInterfaces(registry codectypes.InterfaceRegistry) {
	RegisterInterfaces(registry)
}

// DefaultGenesis performs a no-op. Genesis is not supported for the tendermint light client.
func (AppModuleBasic) DefaultGenesis(cdc codec.JSONCodec) json.RawMessage {
	return nil
}

// ValidateGenesis performs a no-op. Genesis is not supported for the tendermint light client.
func (AppModuleBasic) ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) error {
	return nil
}

// RegisterGRPCGatewayRoutes performs a no-op.
func (AppModuleBasic) RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux) {}

// GetTxCmd performs a no-op. Please see the 02-client cli commands.
func (AppModuleBasic) GetTxCmd() *cobra.Command {
	return nil
}

// GetQueryCmd performs a no-op. Please see the 02-client cli commands.
func (AppModuleBasic) GetQueryCmd() *cobra.Command {
	return nil
}
```

## Creating clients

A client is created by executing a new `MsgCreateClient` transaction composed with a valid `ClientState` and initial `ConsensusState` encoded as protobuf `Any`s.
Generally, this is performed by an off-chain process known as an [IBC relayer](https://github.com/cosmos/ibc/tree/main/spec/relayer/ics-018-relayer-algorithms); however, this is not a strict requirement.

See below for a list of IBC relayer implementations:

- [cosmos/relayer](https://github.com/cosmos/relayer)
- [informalsystems/hermes](https://github.com/informalsystems/hermes)
- [confio/ts-relayer](https://github.com/confio/ts-relayer)

Stateless checks are performed within the [`ValidateBasic`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/types/msgs.go#L48) method of `MsgCreateClient`.
+ +```protobuf expandable +/ MsgCreateClient defines a message to create an IBC client +message MsgCreateClient { + option (gogoproto.goproto_getters) = false; + + / light client state + google.protobuf.Any client_state = 1 [(gogoproto.moretags) = "yaml:\"client_state\""]; + / consensus state associated with the client that corresponds to a given + / height. + google.protobuf.Any consensus_state = 2 [(gogoproto.moretags) = "yaml:\"consensus_state\""]; + / signer address + string signer = 3; +} +``` + +Leveraging protobuf `Any` encoding allows core IBC to [unpack](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/core/keeper/msg_server.go#L38) the `ClientState` into its respective interface type registered previously using the light client module's `RegisterInterfaces` method. + +Within the `02-client` submodule, the [`ClientState` is then initialized](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/core/02-client/keeper/client.go#L40-L42) with its own isolated key-value store, namespaced using a unique client identifier. + +In order to successfully create an IBC client using a new client type, it [must be supported](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L19-L25). Light client support in IBC is gated by on-chain governance. The allow list may be updated by submitting a new governance proposal to update the `02-client` parameter `AllowedClients`. 
See below for an example:

```shell
%s tx gov submit-proposal proposal.json --from <key_or_address>
```

where `proposal.json` contains:

```json expandable
{
  "title": "IBC Clients Param Change",
  "summary": "Update allowed clients",
  "messages": [
    {
      "@type": "/ibc.core.client.v1.MsgUpdateParams",
      "signer": "cosmos1...", // The gov module account address
      "params": {
        "allowed_clients": ["06-solomachine", "07-tendermint", "0x-new-client"]
      }
    }
  ],
  "metadata": "AQ==",
  "deposit": "100stake"
}
```

If the `AllowedClients` list contains a single element that is equal to the wildcard `"*"`, then all client types are allowed and it is thus not necessary to submit a governance proposal to update the parameter.

diff --git a/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour.mdx b/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour.mdx
new file mode 100644
index 00000000..f689fe45
--- /dev/null
+++ b/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour.mdx
@@ -0,0 +1,109 @@
---
title: Handling Updates and Misbehaviour
description: >-
  As mentioned before in the documentation about implementing the ConsensusState
  interface, ClientMessage is an interface used to update an IBC client. This
  update may be performed by:
---

As mentioned before in the documentation about [implementing the `ConsensusState` interface](/docs/ibc/next/light-clients/developer-guide/consensus-state), [`ClientMessage`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L147) is an interface used to update an IBC client. This update may be performed by:

* a single header,
* a batch of headers,
* evidence of misbehaviour,
* or any type which when verified produces a change to the consensus state of the IBC client.

This interface has been purposefully kept generic in order to give the maximum amount of flexibility to the light client implementer.
## Implementing the `ClientMessage` interface

Find the `ClientMessage` interface in `modules/core/exported`:

```go
type ClientMessage interface {
	proto.Message

	ClientType() string
	ValidateBasic() error
}
```

The `ClientMessage` will be passed to the client to be used in [`UpdateClient`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L48), which retrieves the `LightClientModule` by client type (parsed from the client ID available in `MsgUpdateClient`). This `LightClientModule` implements the [`LightClientModule` interface](/docs/ibc/next/light-clients/developer-guide/light-client-module) for its specific consensus type (e.g. Tendermint).

`UpdateClient` will then handle a number of cases including misbehaviour and/or updating the consensus state, utilizing the specific methods defined in the relevant `LightClientModule`.

```go
VerifyClientMessage(ctx sdk.Context, clientID string, clientMsg ClientMessage) error
CheckForMisbehaviour(ctx sdk.Context, clientID string, clientMsg ClientMessage) bool
UpdateStateOnMisbehaviour(ctx sdk.Context, clientID string, clientMsg ClientMessage)
UpdateState(ctx sdk.Context, clientID string, clientMsg ClientMessage) []Height
```

## Handling updates and misbehaviour

The functions for handling updates to a light client and evidence of misbehaviour are all found in the [`LightClientModule`](https://github.com/cosmos/ibc-go/blob/501a8462345da099144efe91d495bfcfa18d760d/modules/core/exported/client.go#L51) interface, and will be discussed below.

> It is important to note that `Misbehaviour` in this particular context refers to misbehaviour at the chain level intended to fool the light client. This will be defined by each light client.

## `VerifyClientMessage`

`VerifyClientMessage` must verify a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update.
To understand how to implement a `ClientMessage`, please refer to the [Implementing the `ClientMessage` interface](#implementing-the-clientmessage-interface) section.

It must handle each type of `ClientMessage` appropriately. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned if the `ClientMessage` fails to verify.

For an example of a `VerifyClientMessage` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/76730ff030b52a351096ee941b7e4da44af9f059/modules/light-clients/07-tendermint/update.go#L23).

## `CheckForMisbehaviour`

`CheckForMisbehaviour` checks for evidence of misbehaviour in a `Header` or `Misbehaviour` type. It assumes the `ClientMessage` has already been verified.

For an example of a `CheckForMisbehaviour` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/76730ff030b52a351096ee941b7e4da44af9f059/modules/light-clients/07-tendermint/misbehaviour_handle.go#L22).

> The Tendermint light client [defines `Misbehaviour`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/misbehaviour.go) as two different types of situations: either two conflicting `Header`s with the same height have been submitted to update a client's `ConsensusState` within the same trusting period, or two conflicting `Header`s have been submitted at different heights but the consensus states are not in the correct monotonic time ordering (BFT time violation). More explicitly, an update to a new height must have a timestamp greater than that of the previous consensus state, and, when inserting a consensus state at a past height, its timestamp must be less than those of the heights which come after it and greater than those of the heights which come before it.
+ +## `UpdateStateOnMisbehaviour` + +`UpdateStateOnMisbehaviour` should perform appropriate state changes on a client state given that misbehaviour has been detected and verified. This method should only be called when misbehaviour is detected, as it does not perform any misbehaviour checks. Notably, it should freeze the client so that calling the `Status` function on the associated client state no longer returns `Active`. + +For an example of a `UpdateStateOnMisbehaviour` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/76730ff030b52a351096ee941b7e4da44af9f059/modules/light-clients/07-tendermint/update.go#L202). + +## `UpdateState` + +`UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. It should perform a no-op on duplicate updates. + +It assumes the `ClientMessage` has already been verified. + +For an example of a `UpdateState` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/76730ff030b52a351096ee941b7e4da44af9f059/modules/light-clients/07-tendermint/update.go#L134). + +## Putting it all together + +The `02-client` `Keeper` module in ibc-go offers a reference as to how these functions will be used to [update the client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L48). 
```go expandable
clientModule, found := k.router.GetRoute(clientID)
if !found {
	return errorsmod.Wrap(types.ErrRouteNotFound, clientID)
}

if err := clientModule.VerifyClientMessage(ctx, clientID, clientMsg); err != nil {
	return err
}

foundMisbehaviour := clientModule.CheckForMisbehaviour(ctx, clientID, clientMsg)
if foundMisbehaviour {
	clientModule.UpdateStateOnMisbehaviour(ctx, clientID, clientMsg)
	// emit misbehaviour event
	return
}

clientModule.UpdateState(ctx, clientID, clientMsg) // expects no-op on duplicate header
// emit update event
return
```

diff --git a/docs/ibc/next/light-clients/developer-guide/upgrades.mdx b/docs/ibc/next/light-clients/developer-guide/upgrades.mdx
new file mode 100644
index 00000000..e22170ef
--- /dev/null
+++ b/docs/ibc/next/light-clients/developer-guide/upgrades.mdx
@@ -0,0 +1,60 @@
---
title: Handling Upgrades
---

It is vital that high-value IBC clients can upgrade along with their underlying chains to avoid disruption to the IBC ecosystem. Thus, IBC client developers will want to implement upgrade functionality to enable clients to maintain connections and channels even across chain upgrades.

## Implementing `VerifyUpgradeAndUpdateState`

The IBC protocol allows client implementations to provide a path to upgrading clients given the upgraded `ClientState`, upgraded `ConsensusState` and proofs for each. This path is provided in the `VerifyUpgradeAndUpdateState` method:

```go expandable
/ NOTE: proof heights are not included as upgrade to a new revision is expected to pass only on the last
/ height committed by the current revision. Clients are responsible for ensuring that the planned last
/ height of the current revision is somehow encoded in the proof verification process.
/ This is to ensure that no premature upgrades occur, since upgrade plans committed to by the counterparty
/ may be cancelled or modified before the last planned height.
/ If the upgrade is verified, the upgraded client and consensus states must be set in the client store.
func (l LightClientModule) VerifyUpgradeAndUpdateState(
	ctx sdk.Context,
	clientID string,
	newClient []byte,
	newConsState []byte,
	upgradeClientProof,
	upgradeConsensusStateProof []byte,
) error
```

> Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/light-clients/07-tendermint/light_client_module.go#L257) as an example implementation.

It is important to note that light clients **must** handle all management of client and consensus states, including the setting of the updated `ClientState` and `ConsensusState` in the client store. This can include verifying that the submitted upgraded `ClientState` is of a valid `ClientState` type, that the height of the upgraded client is greater than the height of the current client (in order to preserve BFT monotonic time), or that certain parameters which should not be changed have not been altered in the upgraded `ClientState`.

Developers must ensure that the `MsgUpgradeClient` does not pass until the last height of the old chain has been committed, and after the chain upgrades, the `MsgUpgradeClient` should pass once and only once on all counterparty clients.

### Upgrade path

Clients should have **prior knowledge of the merkle path** that the upgraded client and upgraded consensus states will use. The height at which the upgrade has occurred should also be encoded in the proof.

> The Tendermint client implementation accomplishes this by including an `UpgradePath` in the `ClientState` itself, which is used along with the upgrade height to construct the merkle path under which the client state and consensus state are committed.
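As a rough illustration of how such a merkle path might be derived, the sketch below joins a chain-committed upgrade path with the upgrade height and a well-known key. The layout mirrors the `upgrade/upgradedIBCState/{height}/upgradedClient` convention used by the Cosmos SDK upgrade module, but the helper itself is hypothetical rather than an ibc-go API:

```go
package main

import (
	"fmt"
	"strings"
)

// upgradeClientPath is a hypothetical helper that derives the commitment
// path for an upgraded client state from the chain-chosen upgrade path
// and the last height of the current revision.
func upgradeClientPath(upgradePath []string, upgradeHeight uint64) string {
	// Join the chain-committed upgrade path, then append the upgrade
	// height and the well-known key for the upgraded client state.
	return fmt.Sprintf("%s/%d/upgradedClient", strings.Join(upgradePath, "/"), upgradeHeight)
}

func main() {
	// The default Tendermint upgrade path is ["upgrade", "upgradedIBCState"].
	fmt.Println(upgradeClientPath([]string{"upgrade", "upgradedIBCState"}, 150))
	// prints: upgrade/upgradedIBCState/150/upgradedClient
}
```

Because the height is part of the path, a proof for this key can only verify against the root committed at the planned upgrade height, which is what prevents premature upgrades.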
## Chain-specific vs client-specific parameters

Developers should maintain the distinction between client parameters that are uniform across every valid light client of a chain (chain-chosen parameters), and client parameters that are customizable by each individual client (client-chosen parameters).

When upgrading a client, developers must ensure that the new client adopts all of the new chain-chosen parameters, while maintaining the client-chosen parameters from the previous version of the client.

## Security

Upgrades must adhere to the IBC Security Model. IBC does not rely on the assumption of honest relayers for correctness. Thus users should not have to rely on relayers to maintain client correctness and security (though honest relayers must exist to maintain relayer liveness). While relayers may choose any set of client parameters while creating a new `ClientState`, this still holds under the security model, since users can always choose a relayer-created client that suits their security and correctness needs, or create a client with their desired parameters if no such client exists.

However, when upgrading an existing client, one must keep in mind that there are already many users who depend on this client's particular parameters. **We cannot give the upgrading relayer free choice over these parameters once they have already been chosen. This would violate the security model**, since users who rely on the client would have to rely on the upgrading relayer to maintain the same level of security.
+ +Thus, developers must make sure that their upgrade mechanism allows clients to upgrade the chain-specified parameters whenever a chain upgrade changes these parameters (examples in the Tendermint client include `UnbondingPeriod`, `TrustingPeriod`, `ChainID`, `UpgradePath`, etc), while ensuring that the relayer submitting the `MsgUpgradeClient` cannot alter the client-chosen parameters that the users are relying upon (examples in Tendermint client include `TrustLevel`, `MaxClockDrift`, etc). + +### Document potential client parameter conflicts during upgrades + +Counterparty clients can upgrade securely by using all of the chain-chosen parameters from the chain-committed `UpgradedClient` and preserving all of the old client-chosen parameters. This enables chains to securely upgrade without relying on an honest relayer, however it can in some cases lead to an invalid final `ClientState` if the new chain-chosen parameters clash with the old client-chosen parameter. This can happen in the Tendermint client case if the upgrading chain lowers the `UnbondingPeriod` (chain-chosen) to a duration below that of a counterparty client's `TrustingPeriod` (client-chosen). Such cases should be clearly documented by developers, so that chains know which upgrades should be avoided to prevent this problem. The final upgraded client should also be validated in `VerifyUpgradeAndUpdateState` before returning to ensure that the client does not upgrade to an invalid `ClientState`. diff --git a/docs/ibc/next/light-clients/localhost/client-state.mdx b/docs/ibc/next/light-clients/localhost/client-state.mdx new file mode 100644 index 00000000..831bb962 --- /dev/null +++ b/docs/ibc/next/light-clients/localhost/client-state.mdx @@ -0,0 +1,6 @@ +--- +title: ClientState +description: The 09-localhost client is stateless and has no types. +--- + +The 09-localhost client is stateless and has no types. 
diff --git a/docs/ibc/next/light-clients/localhost/connection.mdx b/docs/ibc/next/light-clients/localhost/connection.mdx new file mode 100644 index 00000000..9c2fe5f5 --- /dev/null +++ b/docs/ibc/next/light-clients/localhost/connection.mdx @@ -0,0 +1,29 @@ +--- +title: Connection +description: >- + The 09-localhost light client module integrates with core IBC through a single + sentinel localhost connection. The sentinel ConnectionEnd is stored by default + in the core IBC store. +--- + +The 09-localhost light client module integrates with core IBC through a single sentinel localhost connection. +The sentinel `ConnectionEnd` is stored by default in the core IBC store. + +This enables channel handshakes to be initiated out of the box by supplying the localhost connection identifier (`connection-localhost`) in the `connectionHops` parameter of `MsgChannelOpenInit`. + +The `ConnectionEnd` is created and set in store via the `InitGenesis` handler of the 03-connection submodule in core IBC. +The `ConnectionEnd` and its `Counterparty` both reference the `09-localhost` client identifier, and share the localhost connection identifier `connection-localhost`. + +```go +/ CreateSentinelLocalhostConnection creates and sets the sentinel localhost connection end in the IBC store. +func (k Keeper) + +CreateSentinelLocalhostConnection(ctx sdk.Context) { + counterparty := types.NewCounterparty(exported.LocalhostClientID, exported.LocalhostConnectionID, commitmenttypes.NewMerklePrefix(k.GetCommitmentPrefix().Bytes())) + connectionEnd := types.NewConnectionEnd(types.OPEN, exported.LocalhostClientID, counterparty, types.GetCompatibleVersions(), 0) + +k.SetConnection(ctx, exported.LocalhostConnectionID, connectionEnd) +} +``` + +Note that connection handshakes are disallowed when using the `09-localhost` client type. 
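To illustrate, the sketch below mirrors the shape of a `MsgChannelOpenInit` that opens a channel over the sentinel localhost connection. The struct definitions are simplified stand-ins for ibc-go's `04-channel` types (in practice you would construct the message with the ibc-go channel types), used here only to show where `connection-localhost` is supplied:

```go
package main

import "fmt"

// Simplified stand-ins for ibc-go's 04-channel types; illustrative only.
type Channel struct {
	Ordering           string
	CounterpartyPortID string
	ConnectionHops     []string
	Version            string
}

type MsgChannelOpenInit struct {
	PortID  string
	Channel Channel
	Signer  string
}

// localhostChannelOpenInit builds a channel-open message routed over the
// sentinel localhost connection; the counterparty port lives on the same chain.
func localhostChannelOpenInit(portID, version, signer string) MsgChannelOpenInit {
	return MsgChannelOpenInit{
		PortID: portID,
		Channel: Channel{
			Ordering:           "ORDER_UNORDERED",
			CounterpartyPortID: portID,
			// The only localhost-specific detail: the sentinel connection ID.
			ConnectionHops: []string{"connection-localhost"},
			Version:        version,
		},
		Signer: signer,
	}
}

func main() {
	msg := localhostChannelOpenInit("transfer", "ics20-1", "cosmos1...")
	fmt.Println(msg.Channel.ConnectionHops[0]) // prints: connection-localhost
}
```

No `MsgConnectionOpenInit` (or any other handshake message) precedes this, since the sentinel `ConnectionEnd` already exists in the store.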
diff --git a/docs/ibc/next/light-clients/localhost/integration.mdx b/docs/ibc/next/light-clients/localhost/integration.mdx
new file mode 100644
index 00000000..f2faff57
--- /dev/null
+++ b/docs/ibc/next/light-clients/localhost/integration.mdx
@@ -0,0 +1,19 @@
---
title: Integration
description: >-
  The 09-localhost light client module registers codec types within the core IBC
  module. This differs from other light client module implementations which are
  expected to register codec types using the AppModuleBasic interface.
---

The 09-localhost light client module registers codec types within the core IBC module. This differs from other light client module implementations, which are expected to register codec types using the `AppModuleBasic` interface.

The localhost client is implicitly enabled by using the `AllowAllClients` wildcard (`"*"`) in the 02-client submodule default value for param [`allowed_clients`](https://github.com/cosmos/ibc-go/blob/v7.0.0/proto/ibc/core/client/v1/client.proto#L102).

```go
// DefaultAllowedClients are the default clients for the AllowedClients parameter.
// By default it allows all client types.
var DefaultAllowedClients = []string{AllowAllClients}
```

diff --git a/docs/ibc/next/light-clients/localhost/overview.mdx b/docs/ibc/next/light-clients/localhost/overview.mdx
new file mode 100644
index 00000000..c2c31979
--- /dev/null
+++ b/docs/ibc/next/light-clients/localhost/overview.mdx
@@ -0,0 +1,50 @@
---
title: Overview
---

## Overview

## Synopsis

Learn about the 09-localhost light client module.

The 09-localhost light client module implements a stateless localhost loopback client with the ability to send and
receive IBC packets to and from the same state machine.

### Context

In a multichain environment, application developers will be accustomed to developing cross-chain applications through IBC.
From their point of view, it should not matter whether they are interacting with multiple modules on the same chain or on different chains. The localhost client module enables a unified interface to interact with different applications on a single chain, using the familiar IBC application layer semantics.

### Implementation

There exists a localhost light client module which can be invoked with the client identifier `09-localhost`. The light client is stateless, so the `ClientState` is constructed on demand when required.

To supplement this, a [sentinel `ConnectionEnd` is stored in core IBC](/docs/ibc/next/light-clients/localhost/connection) state with the connection identifier `connection-localhost`. This enables IBC applications to create channels directly on top of the sentinel connection which leverage the 09-localhost loopback functionality.

[State verification](/docs/ibc/next/light-clients/localhost/state-verification) for channel state in handshakes or processing packets is reduced in complexity: the `09-localhost` client can simply compare bytes stored under the standardized key paths.

### Localhost vs _regular_ client

The localhost client aims to provide a unified approach to interacting with applications on a single chain, as the IBC application layer provides for cross-chain interactions. To achieve this unified interface though, there are a number of differences under the hood compared to a 'regular' IBC client (excluding `06-solomachine` and `09-localhost` itself).
+ +The table below lists some important differences: + +| | Regular client | Localhost | +| -------------------------------------------- | --------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Number of clients | Many instances of a client _type_ corresponding to different counterparties | A single sentinel client with the client identifier `09-localhost` | +| Client creation | Relayer (permissionless) | Implicitly made available by the 02-client submodule in core IBC | +| Client updates | Relayer submits headers using `MsgUpdateClient` | No client updates are required as the localhost implementation is stateless | +| Number of connections | Many connections, 1 (or more) per client | A single sentinel connection with the connection identifier `connection-localhost` | +| Connection creation | Connection handshake, provided underlying client | Sentinel `ConnectionEnd` is created and set in store in the `InitGenesis` handler of the 03-connection submodule in core IBC | +| Counterparty | Underlying client, representing another chain | Client with identifier `09-localhost` in same chain | +| `VerifyMembership` and `VerifyNonMembership` | Performs proof verification using consensus state roots | Performs state verification using key-value lookups in the core IBC store | +| `ClientState` storage | `ClientState` stored and directly provable with `VerifyMembership` | Stateless, so `ClientState` is not provable directly with `VerifyMembership` | diff --git a/docs/ibc/next/light-clients/localhost/state-verification.mdx b/docs/ibc/next/light-clients/localhost/state-verification.mdx new file mode 100644 index 00000000..f8acda5c --- /dev/null +++ b/docs/ibc/next/light-clients/localhost/state-verification.mdx @@ -0,0 +1,23 @@ +--- +title: State Verification +description: >- + The localhost client handles state verification 
through the LightClientModule
+  interface methods VerifyMembership and VerifyNonMembership by performing
+  read-only operations directly on the core IBC store.
+---
+
+The localhost client handles state verification through the `LightClientModule` interface methods `VerifyMembership` and `VerifyNonMembership` by performing read-only operations directly on the core IBC store.
+
+When verifying channel state in handshakes or processing packets, the `09-localhost` client can simply compare bytes stored under the standardized key paths defined by [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements).
+
+For existence proofs via `VerifyMembership`, the 09-localhost client will retrieve the value stored under the provided key path and compare it against the value provided by the caller. In contrast, non-existence proofs via `VerifyNonMembership` assert the absence of a value at the provided key path.
+
+Relayers are expected to provide a sentinel proof when sending IBC messages. Submission of nil or empty proofs is disallowed in core IBC messaging.
+The 09-localhost light client module defines a `SentinelProof` as a single byte. Localhost client state verification will fail if the sentinel proof value is not provided.
+
+```go
+var SentinelProof = []byte{0x01}
+```
+
+The `ClientState` of `09-localhost` is stateless, so it is not directly provable with `VerifyMembership` or `VerifyNonMembership`.
diff --git a/docs/ibc/next/light-clients/proposals.mdx b/docs/ibc/next/light-clients/proposals.mdx
new file mode 100644
index 00000000..2e47a281
--- /dev/null
+++ b/docs/ibc/next/light-clients/proposals.mdx
@@ -0,0 +1,115 @@
+---
+title: Governance Proposals
+---
+
+In uncommon situations, a highly valued client may become frozen or expire due to uncontrollable
+circumstances. A highly valued client might have hundreds of channels being actively used.
+Some of those channels might have a significant amount of locked tokens used for ICS 20.
+
+## Frozen Light Clients
+
+If one third of the validator set of the chain the client represents decides to collude,
+they can sign off on two valid but conflicting headers, each signed by the other one third
+of the honest validator set. The light client can now be updated with two valid, but conflicting,
+headers at the same height. The light client cannot know which header is trustworthy, and therefore
+evidence of such misbehaviour is likely to be submitted, resulting in a frozen light client.
+
+Frozen light clients cannot be updated under any circumstance except via a governance proposal.
+Since a quorum of validators can sign arbitrary state roots which may not be valid executions
+of the state machine, a governance proposal has been added to ease the complexity of unfreezing
+or updating clients which have become "stuck". Without this mechanism, validator sets would need
+to construct a state root to unfreeze the client. Unfreezing a client re-enables all of the channels
+built upon that client. This may result in recovery of otherwise lost funds.
+
+## Expired Light Clients
+
+Tendermint light clients may become expired if the trusting period has passed since their
+last update. This may occur if relayers stop submitting headers to update the clients.
+
+An unplanned upgrade by the counterparty chain may also result in expired clients. If the counterparty
+chain undergoes an unplanned upgrade, there may be no commitment to that upgrade signed by the validator
+set before the chain ID changes. In this situation, the validator set of the last valid update for the
+light client is never expected to produce another valid header since the chain ID has changed, which will
+ultimately lead the on-chain light client to become expired.
+
+# How to recover an expired client with a governance proposal
+
+> **Who is this information for?**
+> Although technically anyone can submit the governance proposal to recover an expired client, often it will be **relayer operators** (at least coordinating the submission).
+
+In the case that a highly valued light client is frozen, expired, or rendered non-updateable, a
+governance proposal may be submitted to update this client, known as the subject client. The
+proposal includes the client identifier for the subject and the client identifier for a substitute
+client. Light client implementations may implement custom updating logic, but in most cases,
+the subject will be updated to the latest consensus state of the substitute client, if the proposal passes.
+The substitute client is used as a "stand in" while the subject is on trial. It is best practice to create
+a substitute client *after* the subject has become frozen, to prevent the substitute from also becoming frozen.
+An active substitute client allows headers to be submitted during the voting period to prevent accidental expiry
+once the proposal passes.
+
+See also the relevant documentation: ADR-026, IBC client recovery mechanisms
+
+## Preconditions
+
+* There exists an active client (with a known client identifier) for the same counterparty chain as the expired client.
+* The governance deposit.
+
+## Steps
+
+### Step 1
+
+Check if the client is attached to the expected `chain_id`. For example, for an expired Tendermint client representing the Akash chain, the client state looks like this when queried:
+
+```text
+{
+  client_id: 07-tendermint-146
+  client_state:
+  '@type': /ibc.lightclients.tendermint.v1.ClientState
+  allow_update_after_expiry: true
+  allow_update_after_misbehaviour: true
+  chain_id: akashnet-2
+}
+```
+
+The client is attached to the expected Akash `chain_id`.
Note that although the parameters (`allow_update_after_expiry` and `allow_update_after_misbehaviour`) exist to signal intent, these parameters have been deprecated and will not enforce any checks on the revival of the client. See ADR-026 for more context on this deprecation.
+
+### Step 2
+
+Anyone can submit the governance proposal to recover the client by executing the following via CLI.
+If the chain is on an ibc-go version older than v8, please see the [relevant documentation](https://ibc.cosmos.network/v7/ibc/proposals).
+
+* From ibc-go v8 onwards
+
+  ```shell
+  tx gov submit-proposal [path-to-proposal-json]
+  ```
+
+  where `proposal.json` contains:
+
+  ```json expandable
+  {
+    "messages": [
+      {
+        "@type": "/ibc.core.client.v1.MsgRecoverClient",
+        "subject_client_id": "",
+        "substitute_client_id": "",
+        "signer": ""
+      }
+    ],
+    "metadata": "",
+    "deposit": "10stake",
+    "title": "My proposal",
+    "summary": "A short summary of my proposal",
+    "expedited": false
+  }
+  ```
+
+The `subject_client_id` identifier is the proposed client to be updated. This client must be either frozen or expired.
+
+The `substitute_client_id` identifier represents the substitute client. It carries all the state for the client which may be updated. It must have identical client and chain parameters to the client which may be updated (except for latest height, frozen height, and chain ID). It should be continually updated during the voting period.
+
+After this, all that remains is deciding who funds the governance deposit and ensuring the governance proposal passes. If it does, the client on trial will be updated to the latest state of the substitute.
+
+## Important considerations
+
+Please note that if the counterparty client is also expired, that client will also need to be updated. This process updates only one client.
diff --git a/docs/ibc/next/light-clients/solomachine/concepts.mdx b/docs/ibc/next/light-clients/solomachine/concepts.mdx
new file mode 100644
index 00000000..1026dbf9
--- /dev/null
+++ b/docs/ibc/next/light-clients/solomachine/concepts.mdx
@@ -0,0 +1,166 @@
+---
+title: Concepts
+description: >-
+  The ClientState for a solo machine light client stores the latest sequence,
+  the frozen sequence, the latest consensus state, and a client flag indicating
+  whether the client should be allowed to be updated after a governance proposal.
+---
+
+## Client State
+
+The `ClientState` for a solo machine light client stores the latest sequence, the frozen sequence,
+the latest consensus state, and a client flag indicating whether the client should be allowed to be updated
+after a governance proposal.
+
+If the client is not frozen then the frozen sequence is 0.
+
+## Consensus State
+
+The consensus state stores the public key, diversifier, and timestamp of the solo machine light client.
+
+The diversifier is used to prevent accidental misbehaviour if the same public key is used across
+different chains with the same client identifier. It should be unique to the chain the light client
+is used on.
+
+## Public Key
+
+The public key can be a single public key or a multi-signature public key. The public key type used
+must fulfill the tendermint public key interface (this will become the SDK public key interface in the
+near future). The public key must be registered on the application codec, otherwise encoding/decoding
+errors will arise. The public key stored in the consensus state is represented as a protobuf `Any`.
+This allows for flexibility in what other public key types can be supported in the future.
+
+## Counterparty Verification
+
+The solo machine light client can verify counterparty client state, consensus state, connection state,
+channel state, packet commitments, packet acknowledgements, packet receipt absence,
+and the next sequence receive.
At the end of each successful verification call, the light
+client sequence number will be incremented.
+
+Successful verification requires the current public key to sign over the proof.
+
+## Proofs
+
+A solo machine proof should verify that the solo machine public key signed
+over some specified data. The format for generating marshaled proofs for
+the SDK's implementation of the solo machine is as follows:
+
+1. Construct the data using the associated protobuf definition and marshal it.
+
+For example:
+
+```go
+data := &ClientStateData{
+	Path:        []byte(path.String()),
+	ClientState: protoAny,
+}
+
+dataBz, err := cdc.Marshal(data)
+```
+
+The helper functions `...DataBytes()` in [proof.go](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine/proof.go) handle this
+functionality.
+
+2. Construct the `SignBytes` and marshal it.
+
+For example:
+
+```go
+signBytes := &SignBytes{
+	Sequence:    sequence,
+	Timestamp:   timestamp,
+	Diversifier: diversifier,
+	DataType:    CLIENT,
+	Data:        dataBz,
+}
+
+signBz, err := cdc.Marshal(signBytes)
+```
+
+The helper functions `...SignBytes()` in [proof.go](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine/proof.go) handle this functionality.
+The `DataType` field is used to disambiguate what type of data was signed, to prevent potential
+proto encoding overlap.
+
+3. Sign the sign bytes. Embed the signatures into either `SingleSignatureData` or `MultiSignatureData`.
+   Convert the `SignatureData` to proto and marshal it.
+
+For example:
+
+```go
+sig, err := key.Sign(signBz)
+sigData := &signing.SingleSignatureData{
+	Signature: sig,
+}
+protoSigData := signing.SignatureDataToProto(sigData)
+
+bz, err := cdc.Marshal(protoSigData)
+```
+
+4. Construct a `TimestampedSignatureData` and marshal it. The marshaled result can be passed in
+   as the proof parameter to the verification functions.
+ +For example: + +```go +timestampedSignatureData := &solomachine.TimestampedSignatureData{ + SignatureData: sigData, + Timestamp: solomachine.Time, +} + +proof, err := cdc.Marshal(timestampedSignatureData) +``` + +NOTE: At the end of this process, the sequence associated with the key needs to be updated. +The sequence must be incremented each time proof is generated. + +## Updates By Header + +An update by a header will only succeed if: + +* the header provided is parseable to solo machine header +* the header sequence matches the current sequence +* the header timestamp is greater than or equal to the consensus state timestamp +* the currently registered public key generated the proof + +If the update is successful: + +* the public key is updated +* the diversifier is updated +* the timestamp is updated +* the sequence is incremented by 1 +* the new consensus state is set in the client state + +## Updates By Proposal + +An update by a governance proposal will only succeed if: + +* the substitute provided is parseable to solo machine client state +* the new consensus state public key does not equal the current consensus state public key + +If the update is successful: + +* the subject client state is updated to the substitute client state +* the subject consensus state is updated to the substitute consensus state +* the client is unfrozen (if it was previously frozen) + +NOTE: Previously, `AllowUpdateAfterProposal` was used to signal the update/recovery options for the solo machine client. However, this has now been deprecated because a code migration can overwrite the client and consensus states regardless of the value of this parameter. If governance would vote to overwrite a client or consensus state, it is likely that governance would also be willing to perform a code migration to do the same. 
+ +## Misbehaviour + +Misbehaviour handling will only succeed if: + +* the misbehaviour provided is parseable to solo machine misbehaviour +* the client is not already frozen +* the current public key signed over two unique data messages at the same sequence and diversifier. + +If the misbehaviour is successfully processed: + +* the client is frozen by setting the frozen sequence to the misbehaviour sequence + +NOTE: Misbehaviour processing is data processing order dependent. A misbehaving solo machine +could update to a new public key to prevent being frozen before misbehaviour is submitted. + +## Upgrades + +Upgrades to solo machine light clients are not supported since an entirely different type of +public key can be set using normal client updates. diff --git a/docs/ibc/next/light-clients/solomachine/solomachine.mdx b/docs/ibc/next/light-clients/solomachine/solomachine.mdx new file mode 100644 index 00000000..c277f1ee --- /dev/null +++ b/docs/ibc/next/light-clients/solomachine/solomachine.mdx @@ -0,0 +1,23 @@ +--- +title: Solomachine +description: >- + This paper defines the implementation of the ICS06 protocol on the Cosmos SDK. + For the general specification please refer to the ICS06 Specification. +--- + +## Abstract + +This paper defines the implementation of the ICS06 protocol on the Cosmos SDK. For the general +specification please refer to the [ICS06 Specification](https://github.com/cosmos/ibc/tree/master/spec/client/ics-006-solo-machine-client). + +This implementation of a solo machine light client supports single and multi-signature public +keys. The client is capable of handling public key updates by header and governance proposals. +The light client is capable of processing client misbehaviour. Proofs of the counterparty state +are generated by the solo machine client by signing over the desired state with a certain sequence, +diversifier, and timestamp. + +## Contents + +1. **[Concepts](/docs/ibc/next/light-clients/solomachine/concepts)** +2. 
**[State](/docs/ibc/next/light-clients/solomachine/state)**
+3. **[State Transitions](/docs/ibc/next/light-clients/solomachine/state_transitions)**
diff --git a/docs/ibc/next/light-clients/solomachine/state.mdx b/docs/ibc/next/light-clients/solomachine/state.mdx
new file mode 100644
index 00000000..7827215a
--- /dev/null
+++ b/docs/ibc/next/light-clients/solomachine/state.mdx
@@ -0,0 +1,10 @@
+---
+title: State
+description: >-
+  The solo machine light client will only store consensus states for each update
+  by a header or a governance proposal. The latest client state is also
+  maintained in the store.
+---
+
+The solo machine light client will only store consensus states for each update by a header
+or a governance proposal. The latest client state is also maintained in the store.
diff --git a/docs/ibc/next/light-clients/solomachine/state_transitions.mdx b/docs/ibc/next/light-clients/solomachine/state_transitions.mdx
new file mode 100644
index 00000000..e7a09a62
--- /dev/null
+++ b/docs/ibc/next/light-clients/solomachine/state_transitions.mdx
@@ -0,0 +1,38 @@
+---
+title: State Transitions
+description: 'Successful state verification by a solo machine light client will result in:'
+---
+
+## Client State Verification Functions
+
+Successful state verification by a solo machine light client will result in:
+
+* the sequence being incremented by 1.
+
+## Update By Header
+
+A successful update of a solo machine light client by a header will result in:
+
+* the public key being updated to the new public key provided by the header.
+* the diversifier being updated to the new diversifier provided by the header.
+* the timestamp being updated to the new timestamp provided by the header.
+* the sequence being incremented by 1.
+* the consensus state being updated (consensus state stores the public key, diversifier, and timestamp).
+
+## Update By Governance Proposal
+
+A successful update of a solo machine light client by a governance proposal will result in:
+
+* the client state being updated to the substitute client state.
+* the consensus state being updated to the substitute consensus state (consensus state stores the public key, diversifier, and timestamp).
+* the frozen sequence being set to zero (the client is unfrozen if it was previously frozen).
+
+## Upgrade
+
+Client upgrades are not supported for the solo machine light client. No state transition occurs.
+
+## Misbehaviour
+
+Successful misbehaviour processing of a solo machine light client will result in:
+
+* the frozen sequence being set to the sequence at which the misbehaviour occurred
diff --git a/docs/ibc/next/light-clients/tendermint/overview.mdx b/docs/ibc/next/light-clients/tendermint/overview.mdx
new file mode 100644
index 00000000..13845e9d
--- /dev/null
+++ b/docs/ibc/next/light-clients/tendermint/overview.mdx
@@ -0,0 +1,167 @@
+---
+title: Overview
+---
+
+## Overview
+
+## Synopsis
+
+Learn about the 07-tendermint light client module.
+
+The Tendermint client is the first and most widely deployed light client in IBC. It implements the IBC [light client module interface](https://github.com/cosmos/ibc-go/blob/v9.0.0-beta.1/modules/core/exported/client.go#L41-L123) to track a counterparty running [CometBFT](https://github.com/cometbft/cometbft) consensus.
+
+
+  Tendermint is the old name of CometBFT, which has been retained in IBC to avoid
+  expensive migration costs.
+
+
+The Tendermint client consists of two important structs that keep track of the state of the counterparty chain and allow for future updates. The `ClientState` struct contains all the parameters necessary for CometBFT header verification.
The `ConsensusState`, on the other hand, is a compressed view of a particular header of the counterparty chain. Unlike off-chain light clients, IBC does not store the full header. Instead, it stores only the information it needs to verify key/value pairs in the counterparty state (i.e. the header `AppHash`), and the information necessary to use the consensus state as the next root of trust to add a new consensus state to the client (i.e. the header `NextValidatorsHash` and `Timestamp`). The relayer provides the full trusted header on `UpdateClient`, which will get checked against the compressed root-of-trust consensus state. If the trusted header matches a previous consensus state, and the trusted header and new header pass the CometBFT light client update algorithm, then the new header is compressed into a consensus state and added to the IBC client.
+
+Each Tendermint client is composed of a single `ClientState` keyed on the client ID, and multiple consensus states which are keyed on both the client ID and header height. Relayers can use the consensus states to verify merkle proofs of packet commitments, acknowledgements, and receipts against the `AppHash` of the counterparty chain in order to enable verified packet flow.
+
+If a counterparty chain violates the CometBFT protocol in a way that is detectable to off-chain light clients, this misbehaviour can also be submitted to an IBC client by any off-chain actor. Upon verification of this misbehaviour, the Tendermint IBC client will freeze, preventing any further packet flow from this malicious chain. Governance or some other out-of-band protocol may then be used to unwind any damage that has already occurred.
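The keying scheme described above follows the ICS-24 standardized store paths. The sketch below illustrates the path layout; the helper names are illustrative, not the ibc-go `host` package API.

```go
package main

import "fmt"

// Illustrative helper (not the ibc-go API) building the ICS-24 path under
// which a client's ClientState is stored, keyed on the client ID.
func clientStatePath(clientID string) string {
	return fmt.Sprintf("clients/%s/clientState", clientID)
}

// Consensus states are additionally keyed on the header height, formatted
// as {revision_number}-{revision_height}.
func consensusStatePath(clientID string, revision, height uint64) string {
	return fmt.Sprintf("clients/%s/consensusStates/%d-%d", clientID, revision, height)
}

func main() {
	fmt.Println(clientStatePath("07-tendermint-0"))
	fmt.Println(consensusStatePath("07-tendermint-0", 2, 500))
}
```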
+
+## Initialization
+
+The Tendermint light client is initialized with a `ClientState` that contains parameters necessary for CometBFT header verification, along with a latest height and a `ConsensusState` that encapsulates the application state root of a trusted header that will serve to verify future incoming headers from the counterparty.
+
+```proto expandable
+message ClientState {
+  / human readable chain-id that will be included in header
+  / and signed over by the validator set
+  string chain_id = 1;
+  / trust level is the fraction of the trusted validator set
+  / that must sign over a new untrusted header before it is accepted
+  / it can be a minimum of 1/3 and a maximum of 2/3
+  / Note these are the bounds of liveness. 1/3 is the minimum
+  / honest stake needed to maintain liveness on a chain,
+  / requiring more than 2/3 to sign over the new header would
+  / break the BFT threshold of allowing 1/3 malicious validators
+  Fraction trust_level = 2;
+  / duration of the period since the LatestTimestamp during which the
+  / submitted headers are valid for update
+  google.protobuf.Duration trusting_period = 3;
+  / duration of the staking unbonding period
+  google.protobuf.Duration unbonding_period = 4;
+  / defines how much new (untrusted) header's Time can drift
+  / into the future relative to our local clock.
+  google.protobuf.Duration max_clock_drift = 5;
+
+  / Block height when the client was frozen due to a misbehaviour
+  ibc.core.client.v1.Height frozen_height = 6;
+  / Latest height the client was updated to
+  ibc.core.client.v1.Height latest_height = 7;
+
+  / Proof specifications used in verifying counterparty state
+  repeated cosmos.ics23.v1.ProofSpec proof_specs = 8;
+
+  / Path at which next upgraded client will be committed.
+  / Each element corresponds to the key for a single CommitmentProof in the
+  / chained proof. NOTE: ClientState must be stored under
+  / `{upgradePath}/{upgradeHeight}/clientState`, ConsensusState must be stored
+  / under `{upgradePath}/{upgradeHeight}/consensusState`. For SDK chains using
+  / the default upgrade module, upgrade_path should be []string{"upgrade",
+  / "upgradedIBCState"}
+  repeated string upgrade_path = 9;
+}
+```
+
+```proto expandable
+message ConsensusState {
+  / timestamp that corresponds to the block height in which the ConsensusState
+  / was stored.
+  google.protobuf.Timestamp timestamp = 1;
+  / commitment root (i.e app hash) that will be used
+  / to verify proofs of packet flow messages
+  ibc.core.commitment.v1.MerkleRoot root = 2;
+  / hash of the next validator set that will be used as
+  / a new updated source of trust to verify future updates
+  bytes next_validators_hash = 3;
+}
+```
+
+## Updates
+
+Once the initial client state and consensus state are submitted, future consensus states can be added to the client by submitting IBC [headers](https://github.com/cosmos/ibc-go/blob/v9.0.0-beta.1/proto/ibc/lightclients/tendermint/v1/tendermint.proto#L76-L94). These headers contain all necessary information to run the CometBFT light client protocol.
+
+```proto expandable
+message Header {
+  / this is the new signed header that we want to add
+  / as a new consensus state to the ibc client.
+  / the signed header contains the commit signatures of the `validator_set` below
+  .tendermint.types.SignedHeader signed_header = 1;
+
+  / the validator set which signed the new header
+  .tendermint.types.ValidatorSet validator_set = 2;
+  / the trusted height of the consensus state which we are updating from
+  ibc.core.client.v1.Height trusted_height = 3;
+  / the trusted validator set, the hash of the trusted validators must be equal to
+  / `next_validators_hash` of the current consensus state
+  .tendermint.types.ValidatorSet trusted_validators = 4;
+}
+```
+
+For detailed information on the CometBFT light client protocol and its safety properties, please refer to the [original Tendermint whitepaper](https://arxiv.org/abs/1807.04938).
+
+## Proofs
+
+As consensus states are added to the client, they can be used for proof verification by relayers wishing to prove packet flow messages against a particular height on the counterparty. This uses the `VerifyMembership` and `VerifyNonMembership` methods on the Tendermint client.
+
+```go expandable
+/ VerifyMembership is a generic proof verification method
+/ which verifies a proof of the existence of a value at a
+/ given CommitmentPath at the specified height. The caller
+/ is expected to construct the full CommitmentPath from a
+/ CommitmentPrefix and a standardized path (as defined in ICS 24).
+VerifyMembership(
+  ctx sdk.Context,
+  clientID string,
+  height Height,
+  delayTimePeriod uint64,
+  delayBlockPeriod uint64,
+  proof []byte,
+  path Path,
+  value []byte,
+) error
+
+/ VerifyNonMembership is a generic proof verification method
+/ which verifies the absence of a given CommitmentPath at a
+/ specified height. The caller is expected to construct the
+/ full CommitmentPath from a CommitmentPrefix and a standardized
+/ path (as defined in ICS 24).
+VerifyNonMembership(
+  ctx sdk.Context,
+  clientID string,
+  height Height,
+  delayTimePeriod uint64,
+  delayBlockPeriod uint64,
+  proof []byte,
+  path Path,
+) error
+```
+
+The Tendermint client is initialized with an ICS23 proof spec. This allows the Tendermint implementation to support many different merkle tree structures, so long as they can be represented in an [`ics23.ProofSpec`](https://github.com/cosmos/ics23/blob/go/v0.10.0/proto/cosmos/ics23/v1/proofs.proto#L145-L170).
+
+## Misbehaviour
+
+The Tendermint light client directly tracks consensus of a CometBFT counterparty chain. So long as the counterparty is Byzantine Fault Tolerant, that is to say, the malicious subset of the bonded validators does not exceed the trust level of the client, then the client is secure.
+
+In case the malicious subset of the validators exceeds the trust level of the client, the client can be deceived into accepting invalid blocks and the connection is no longer secure.
+
+The Tendermint client has some mitigations in place to prevent this. If there are two valid blocks signed by the counterparty validator set at the same height \[e.g. a valid block signed by an honest subset and an invalid block signed by a malicious one], then these conflicting headers can be submitted to the client as [misbehaviour](https://github.com/cosmos/ibc-go/blob/v9.0.0-beta.1/proto/ibc/lightclients/tendermint/v1/tendermint.proto#L65-L74). The client will verify the headers and freeze the client, preventing any future updates and proof verification from succeeding. This effectively halts communication with the compromised counterparty while out-of-band social consensus can unwind any damage done.
+
+Similarly, if the timestamps of the headers are not monotonically increasing, this can also be evidence of malicious behaviour and cause the client to freeze.
+ +Thus, any consensus faults that are detectable by a light client are part of the misbehaviour protocol and can be used to minimize the damage caused by a compromised counterparty chain. + +### Security model + +It is important to note that IBC is not a completely trustless protocol; it is **trust-minimized**. This means that the safety property of bilateral IBC communication between two chains is dependent on the safety properties of the two chains in question. If one of the chains is compromised completely, then the IBC connection to the other chain is liable to receive invalid packets from the malicious chain. For example, if a malicious validator set has taken over more than 2/3 of the validator power on a chain; that malicious validator set can create a single chain of blocks with arbitrary commitment roots and arbitrary commitments to the next validator set. This would seize complete control of the chain and prevent the honest subset from even being able to create a competing honest block. + +In this case, there is no ability for the IBC Tendermint client solely tracking CometBFT consensus to detect the misbehaviour and freeze the client. The IBC protocol would require out-of-band mechanisms to detect and fix such an egregious safety fault on the counterparty chain. Since the Tendermint light client is only tracking consensus and not also verifying the validity of state transitions, malicious behaviour from a validator set that is beyond the BFT fault threshold is an accepted risk of this light client implementation. + +The IBC protocol has principles of fault isolation (e.g. all tokens are prefixed by their channel, so tokens from different chains are not mutually fungible) and fault mitigation (e.g. ability to freeze the client if misbehaviour can be detected before complete malicious takeover) that make this risk as minimal as possible. 
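To make the trust-level discussion above concrete, here is a minimal sketch (not ibc-go code) of the fraction comparison a light client performs when deciding whether enough trusted voting power signed a new header. The function name and signature are illustrative.

```go
package main

import "fmt"

// Illustrative check: a header update from a trusted validator set is
// accepted only if the trusted voting power that signed the new header
// strictly exceeds trustLevel (numerator/denominator) of the total power.
func meetsTrustLevel(signedPower, totalPower, numerator, denominator int64) bool {
	// compare signedPower/totalPower > numerator/denominator without division
	return signedPower*denominator > totalPower*numerator
}

func main() {
	// with the minimum 1/3 trust level, 40 of 90 total power suffices
	fmt.Println(meetsTrustLevel(40, 90, 1, 3))
	// but exactly 30 of 90 does not, since strictly more than 1/3 is required
	fmt.Println(meetsTrustLevel(30, 90, 1, 3))
}
```

Raising the trust level toward 2/3 makes updates harder to forge but requires relayers to submit headers more often before the trusted set changes too much.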
diff --git a/docs/ibc/next/light-clients/wasm/client.mdx b/docs/ibc/next/light-clients/wasm/client.mdx
new file mode 100644
index 00000000..cdb0d80a
--- /dev/null
+++ b/docs/ibc/next/light-clients/wasm/client.mdx
@@ -0,0 +1,149 @@
+---
+title: Client
+description: >-
+  A user can query and interact with the 08-wasm module using the CLI. Use the
+  --help flag to discover the available commands:
+---
+
+## CLI
+
+A user can query and interact with the `08-wasm` module using the CLI. Use the `--help` flag to discover the available commands:
+
+### Transactions
+
+The `tx` commands allow users to interact with the `08-wasm` submodule.
+
+```shell
+simd tx ibc-wasm --help
+```
+
+#### `store-code`
+
+The `store-code` command allows users to submit a governance proposal with a `MsgStoreCode` to store the byte code of a Wasm light client contract.
+
+```shell
+simd tx ibc-wasm store-code [path/to/wasm-file] [flags]
+```
+
+`path/to/wasm-file` is the path to the `.wasm` or `.wasm.gz` file.
+
+#### `migrate-contract`
+
+The `migrate-contract` command allows users to broadcast a transaction with a `MsgMigrateContract` to migrate the contract for a given light client to a new byte code denoted by the given checksum.
+
+```shell
+simd tx ibc-wasm migrate-contract [client-id] [checksum] [migrate-msg]
+```
+
+The migrate message must not be empty and is expected to be a JSON-encoded string.
+
+### Query
+
+The `query` commands allow users to query `08-wasm` state.
+
+```shell
+simd query ibc-wasm --help
+```
+
+#### `checksums`
+
+The `checksums` command allows users to query the list of checksums of Wasm light client contracts stored in the Wasm VM via `MsgStoreCode`. The checksums are hex-encoded.
+ +```shell +simd query ibc-wasm checksums [flags] +``` + +Example: + +```shell +simd query ibc-wasm checksums +``` + +Example Output: + +```shell +checksums: +- c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64 +pagination: + next_key: null + total: "1" +``` + +#### `code` + +The `code` command allows users to query the Wasm byte code of a light client contract given the provided input checksum. + +```shell +simd query ibc-wasm code [checksum] +``` + +Example: + +```shell +simd query ibc-wasm code c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64 +``` + +Example Output: + +```shell +code: AGFzb...AqBBE= +``` + +## gRPC + +A user can query the `08-wasm` module using gRPC endpoints. + +### `Checksums` + +The `Checksums` endpoint allows users to query the list of checksums of Wasm light client contracts stored in the Wasm VM via the `MsgStoreCode`. + +```shell +ibc.lightclients.wasm.v1.Query/Checksums +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{}' \ + localhost:9090 \ + ibc.lightclients.wasm.v1.Query/Checksums +``` + +Example output: + +```shell +{ + "checksums": [ + "c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64" + ], + "pagination": { + "total": "1" + } +} +``` + +### `Code` + +The `Code` endpoint allows users to query the Wasm byte code of a light client contract given the provided input checksum.
+ +```shell +ibc.lightclients.wasm.v1.Query/Code +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"checksum":"c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64"}' \ + localhost:9090 \ + ibc.lightclients.wasm.v1.Query/Code +``` + +Example output: + +```shell +{ + "code": "AGFzb...AqBBE=" +} +``` diff --git a/docs/ibc/next/light-clients/wasm/concepts.mdx b/docs/ibc/next/light-clients/wasm/concepts.mdx new file mode 100644 index 00000000..bb97a1dc --- /dev/null +++ b/docs/ibc/next/light-clients/wasm/concepts.mdx @@ -0,0 +1,90 @@ +--- +title: Concepts +description: >- + Learn about the differences between a proxy light client and a Wasm light + client. +--- + +Learn about the differences between a proxy light client and a Wasm light client. + +## Proxy light client + +The `08-wasm` module is not a regular light client in the same sense as, for example, the 07-tendermint light client. `08-wasm` is instead a _proxy_ light client module, and this means that the module acts as a proxy to the actual implementations of light clients. The module will act as a wrapper for the actual light clients uploaded as Wasm byte code and will delegate all operations to them (i.e. `08-wasm` just passes through the requests to the Wasm light clients). Still, the `08-wasm` module implements all the required interfaces necessary to integrate with core IBC, so that 02-client can call into it as it would for any other light client module. These interfaces are `LightClientModule`, `ClientState`, `ConsensusState` and `ClientMessage`, and we will describe them in the context of `08-wasm` in the following sections. For more information about this set of interfaces, please read section [Overview of the light client module developer guide](/docs/ibc/next/light-clients/developer-guide/overview#overview). + +### `LightClientModule` + +The `08-wasm`'s `LightClientModule` data structure contains two fields: + +- `keeper` is the `08-wasm` module keeper.
+- `storeProvider` encapsulates the IBC core store key and provides access to isolated prefix stores for each client so they can read/write in separate namespaces. + +```go +type LightClientModule struct { + keeper wasmkeeper.Keeper + storeProvider exported.ClientStoreProvider +} +``` + +See section [`LightClientModule` of the light client module developer guide](/docs/ibc/next/light-clients/developer-guide/overview#lightclientmodule) for more information about the `LightClientModule` interface. + +### `ClientState` + +The `08-wasm`'s `ClientState` data structure contains three fields: + +- `Data` contains the bytes of the Protobuf-encoded client state of the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes for a [GRANDPA client state](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L35-L60). +- `Checksum` is the sha256 hash of the Wasm contract's byte code. This hash is used as an identifier to call the right contract. +- `LatestHeight` is the latest height of the counterparty state machine (i.e. the height of the blockchain), whose consensus state the light client tracks. + +```go +type ClientState struct { + / bytes encoding the client state of the underlying + / light client implemented as a Wasm contract + Data []byte + / sha256 hash of Wasm contract byte code + Checksum []byte + / latest height of the counterparty ledger + LatestHeight types.Height +} +``` + +See section [`ClientState` of the light client module developer guide](/docs/ibc/next/light-clients/developer-guide/overview#clientstate) for more information about the `ClientState` interface. 
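Since `Checksum` is defined as the sha256 hash of the contract byte code, the identifier can be recomputed directly from the `.wasm` file; a minimal sketch (the helper name is hypothetical):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// checksumOf returns the hex-encoded sha256 hash of a Wasm contract's byte
// code -- the value stored in ClientState.Checksum (and shown hex-encoded
// by the checksums CLI query).
func checksumOf(wasmByteCode []byte) string {
	sum := sha256.Sum256(wasmByteCode)
	return hex.EncodeToString(sum[:])
}

func main() {
	// Stand-in bytes; in practice this would be the contents of the .wasm file.
	code := []byte("example wasm byte code")
	fmt.Println(checksumOf(code)) // always 64 hex characters
}
```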
+ +### `ConsensusState` + +The `08-wasm`'s `ConsensusState` data structure maintains one field: + +- `Data` contains the bytes of the Protobuf-encoded consensus state of the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes for a [GRANDPA consensus state](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L87-L94). + +```go +type ConsensusState struct { + / bytes encoding the consensus state of the underlying light client + / implemented as a Wasm contract. + Data []byte +} +``` + +See section [`ConsensusState` of the light client module developer guide](/docs/ibc/next/light-clients/developer-guide/overview#consensusstate) for more information about the `ConsensusState` interface. + +### `ClientMessage` + +`ClientMessage` is used for performing updates to a `ClientState` stored on chain. The `08-wasm`'s `ClientMessage` data structure maintains one field: + +- `Data` contains the bytes of the Protobuf-encoded header(s) or misbehaviour for the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes of either [header](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L96-L104) or [misbehaviour](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L106-L112) for a GRANDPA light client. + +```go +type ClientMessage struct { + / bytes encoding the header(s) or misbehaviour for the underlying + / light client implemented as a Wasm contract.
+ Data []byte +} +``` + +See section [`ClientMessage` of the light client module developer guide](/docs/ibc/next/light-clients/developer-guide/overview#clientmessage) for more information about the `ClientMessage` interface. + +## Wasm light client + +The actual light client can be implemented in any language that compiles to Wasm and implements the interfaces of a [CosmWasm](https://docs.cosmwasm.com/docs/) contract. Even though in theory other languages could be used, in practice (at least for the time being) the most suitable language is Rust, since there is already good tooling for developing CosmWasm smart contracts in it. + +At the moment of writing there are two contracts available: one for [Tendermint](https://github.com/ComposableFi/composable-ibc/tree/master/light-clients/ics07-tendermint-cw) and one for [GRANDPA](https://github.com/ComposableFi/composable-ibc/tree/master/light-clients/ics10-grandpa-cw) (which is being used in production in [Composable Finance's Centauri bridge](https://github.com/ComposableFi/composable-ibc)). Others are in development (e.g. for Near). diff --git a/docs/ibc/next/light-clients/wasm/contracts.mdx b/docs/ibc/next/light-clients/wasm/contracts.mdx new file mode 100644 index 00000000..21945459 --- /dev/null +++ b/docs/ibc/next/light-clients/wasm/contracts.mdx @@ -0,0 +1,108 @@ +--- +title: Contracts +description: >- + Learn about the expected behaviour of Wasm light client contracts and their + interactions with 08-wasm. +--- + +Learn about the expected behaviour of Wasm light client contracts and their interactions with `08-wasm`. + +## API + +The `08-wasm` light client proxy performs calls to the Wasm light client via the Wasm VM. The calls require as input JSON-encoded payload messages that fall in the three categories described in the next sections. + +## `InstantiateMessage` + +This is the message sent to the contract's `instantiate` entry point.
It contains the bytes of the protobuf-encoded client and consensus states of the underlying light client, both provided in [`MsgCreateClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L40-L52). Please note that the bytes contained within the JSON message are represented as base64-encoded strings. + +```go +type InstantiateMessage struct { + ClientState []byte `json:"client_state"` + ConsensusState []byte `json:"consensus_state"` + Checksum []byte `json:"checksum"` +} +``` + +The Wasm light client contract is expected to store the client and consensus state in the corresponding keys of the client-prefixed store. + +## `QueryMsg` + +`QueryMsg` acts as a discriminated union type that is used to encode the messages that are sent to the contract's `query` entry point. Only one of the fields of the type should be set at a time, so that the other fields are omitted in the encoded JSON and the payload can be correctly translated to the corresponding element of the enumeration in Rust. + +```go +type QueryMsg struct { + Status *StatusMsg `json:"status,omitempty"` + TimestampAtHeight *TimestampAtHeightMsg `json:"timestamp_at_height,omitempty"` + VerifyClientMessage *VerifyClientMessageMsg `json:"verify_client_message,omitempty"` + CheckForMisbehaviour *CheckForMisbehaviourMsg `json:"check_for_misbehaviour,omitempty"` +} +``` + +```rust +#[cw_serde] +pub enum QueryMsg { + Status(StatusMsg), + TimestampAtHeight(TimestampAtHeightMsg), + VerifyClientMessage(VerifyClientMessageRaw), + CheckForMisbehaviour(CheckForMisbehaviourMsgRaw), +} +``` + +To learn what is expected from the Wasm light client contract when processing each message, please read the corresponding section of the [Light client developer guide](/docs/ibc/next/light-clients/developer-guide/overview): + +- For `StatusMsg`, see the section [`Status` method](/docs/ibc/next/light-clients/developer-guide/client-state#status-method).
+- For `TimestampAtHeightMsg`, see the section [`GetTimestampAtHeight` method](/docs/ibc/next/light-clients/developer-guide/client-state#gettimestampatheight-method). +- For `VerifyClientMessageMsg`, see the section [`VerifyClientMessage`](/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour#verifyclientmessage). +- For `CheckForMisbehaviourMsg`, see the section [`CheckForMisbehaviour` method](/docs/ibc/next/light-clients/developer-guide/client-state#checkformisbehaviour-method). + +## `SudoMsg` + +`SudoMsg` acts as a discriminated union type that is used to encode the messages that are sent to the contract's `sudo` entry point. Only one of the fields of the type should be set at a time, so that the other fields are omitted in the encoded JSON and the payload can be correctly translated to the corresponding element of the enumeration in Rust. + +The `sudo` entry point is able to perform state-changing writes in the client-prefixed store. + +```go +type SudoMsg struct { + UpdateState *UpdateStateMsg `json:"update_state,omitempty"` + UpdateStateOnMisbehaviour *UpdateStateOnMisbehaviourMsg `json:"update_state_on_misbehaviour,omitempty"` + VerifyUpgradeAndUpdateState *VerifyUpgradeAndUpdateStateMsg `json:"verify_upgrade_and_update_state,omitempty"` + VerifyMembership *VerifyMembershipMsg `json:"verify_membership,omitempty"` + VerifyNonMembership *VerifyNonMembershipMsg `json:"verify_non_membership,omitempty"` + MigrateClientStore *MigrateClientStoreMsg `json:"migrate_client_store,omitempty"` +} +``` + +```rust +#[cw_serde] +pub enum SudoMsg { + UpdateState(UpdateStateMsgRaw), + UpdateStateOnMisbehaviour(UpdateStateOnMisbehaviourMsgRaw), + VerifyUpgradeAndUpdateState(VerifyUpgradeAndUpdateStateMsgRaw), + VerifyMembership(VerifyMembershipMsgRaw), + VerifyNonMembership(VerifyNonMembershipMsgRaw), + MigrateClientStore(MigrateClientStoreMsgRaw), +} +``` + +To learn what is expected from the Wasm light client contract when processing each message, please
read the corresponding section of the [Light client developer guide](/docs/ibc/next/light-clients/developer-guide/overview): + +- For `UpdateStateMsg`, see the section [`UpdateState`](/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour#updatestate). +- For `UpdateStateOnMisbehaviourMsg`, see the section [`UpdateStateOnMisbehaviour`](/docs/ibc/next/light-clients/developer-guide/updates-and-misbehaviour#updatestateonmisbehaviour). +- For `VerifyUpgradeAndUpdateStateMsg`, see the section [Implementing `VerifyUpgradeAndUpdateState`](/docs/ibc/next/light-clients/developer-guide/upgrades#implementing-verifyupgradeandupdatestate). +- For `VerifyMembershipMsg`, see the section [`VerifyMembership` method](/docs/ibc/next/light-clients/developer-guide/client-state#verifymembership-method). +- For `VerifyNonMembershipMsg`, see the section [`VerifyNonMembership` method](/docs/ibc/next/light-clients/developer-guide/client-state#verifynonmembership-method). +- For `MigrateClientStoreMsg`, see the section [Implementing `CheckSubstituteAndUpdateState`](/docs/ibc/next/light-clients/developer-guide/proposals#implementing-checksubstituteandupdatestate). + +### Migration + +The `08-wasm` proxy light client exposes the `MigrateContract` RPC endpoint that can be used to migrate a given Wasm light client contract (specified by the client identifier) to a new Wasm byte code (specified by the hash of the byte code). The expected use case for this RPC endpoint is to enable contracts to migrate to new byte code in case the current byte code is found to have a bug or vulnerability. The Wasm byte code that contracts are migrated to has to be uploaded beforehand using `MsgStoreCode` and must implement the `migrate` entry point. See section [`MsgMigrateContract`](/docs/ibc/next/apps/interchain-accounts/messages#msgmigratecontract) for information about the request message for this RPC endpoint.
+ +## Expected behaviour + +The `08-wasm` proxy light client module expects the following behaviour from the Wasm light client contracts when executing messages that perform state-changing writes: + +- The contract must not delete the client state from the store. +- The contract must not change the client state to a client state of another type. +- The contract must not change the checksum in the client state. + +Any violation of these rules will result in an error returned from `08-wasm` that will abort the transaction. diff --git a/docs/ibc/next/light-clients/wasm/events.mdx b/docs/ibc/next/light-clients/wasm/events.mdx new file mode 100644 index 00000000..66223443 --- /dev/null +++ b/docs/ibc/next/light-clients/wasm/events.mdx @@ -0,0 +1,22 @@ +--- +title: Events +description: 'The 08-wasm module emits the following events:' +--- + +The `08-wasm` module emits the following events: + +## `MsgStoreCode` + +| Type | Attribute Key | Attribute Value | +| ----------------- | -------------- | ---------------------- | +| store\_wasm\_code | wasm\_checksum | `{hex.Encode(checksum)}` | +| message | module | 08-wasm | + +## `MsgMigrateContract` + +| Type | Attribute Key | Attribute Value | +| ----------------- | -------------- | ------------------------- | +| migrate\_contract | client\_id | `{clientId}` | +| migrate\_contract | wasm\_checksum | `{hex.Encode(checksum)}` | +| migrate\_contract | new\_checksum | `{hex.Encode(newChecksum)}` | +| message | module | 08-wasm | diff --git a/docs/ibc/next/light-clients/wasm/governance.mdx b/docs/ibc/next/light-clients/wasm/governance.mdx new file mode 100644 index 00000000..052c0923 --- /dev/null +++ b/docs/ibc/next/light-clients/wasm/governance.mdx @@ -0,0 +1,125 @@ +--- +title: Governance +description: >- + Learn how to upload Wasm light client byte code on a chain, and how to migrate + an existing Wasm light client contract.
+--- + +Learn how to upload Wasm light client byte code on a chain, and how to migrate an existing Wasm light client contract. + +## Setting an authority + +Both the storage of Wasm light client byte code as well as the migration of an existing Wasm light client contract are permissioned (i.e. only allowed to an authority such as governance). The designated authority is specified when instantiating `08-wasm`'s keeper: both [`NewKeeperWithVM`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47) and [`NewKeeperWithConfig`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L88-L96) constructor functions accept an `authority` argument that must be the address of the authorized actor. For example, in `app.go`, when instantiating the keeper, you can pass the address of the governance module: + +```go expandable +/ app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types" + ... 
+) + +/ app.go +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), / authority + wasmVM, + app.GRPCQueryRouter(), +) +``` + +## Storing new Wasm light client byte code + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to upload a new light client contract should contain the message [`MsgStoreCode`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L23-L30) with the base64-encoded byte code of the Wasm contract. Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable +{ + "title": "Upload IBC Wasm light client", + "summary": "Upload wasm client", + "messages": [ + { + "@type": "/ibc.lightclients.wasm.v1.MsgStoreCode", + "signer": "cosmos1...", / the authority address (e.g. the gov module account address) + "wasm_byte_code": "YWJ...PUB+" / standard base64 encoding of the Wasm contract byte code + } + ], + "metadata": "AQ==", + "deposit": "100stake" +} +``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal). + +Alternatively, the process of submitting the proposal may be simpler if you use the CLI command `store-code`. This CLI command accepts as argument the file of the Wasm light client contract and takes care of constructing the proposal message with `MsgStoreCode` and broadcasting it.
+ +## Migrating an existing Wasm light client contract + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to migrate an existing Wasm light client contract should contain the message [`MsgMigrateContract`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L52-L63) with the checksum of the Wasm byte code to migrate to. Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable +{ + "title": "Migrate IBC Wasm light client", + "summary": "Migrate wasm client", + "messages": [ + { + "@type": "/ibc.lightclients.wasm.v1.MsgMigrateContract", + "signer": "cosmos1...", / the authority address (e.g. the gov module account address) + "client_id": "08-wasm-1", / client identifier of the Wasm light client contract that will be migrated + "checksum": "a8ad...4dc0", / SHA-256 hash of the Wasm byte code to migrate to, previously stored with MsgStoreCode + "msg": "{}" / JSON-encoded message to be passed to the contract on migration + } + ], + "metadata": "AQ==", + "deposit": "100stake" +} +``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal). + +## Removing an existing checksum + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to remove a specific checksum from the list of allowed checksums should contain the message [`MsgRemoveChecksum`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L39-L46) with the checksum (of a corresponding Wasm byte code).
Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable +{ + "title": "Remove checksum of Wasm light client byte code", + "summary": "Remove checksum", + "messages": [ + { + "@type": "/ibc.lightclients.wasm.v1.MsgRemoveChecksum", + "signer": "cosmos1...", / the authority address (e.g. the gov module account address) + "checksum": "a8ad...4dc0" / SHA-256 hash of the Wasm byte code that should be removed from the list of allowed checksums + } + ], + "metadata": "AQ==", + "deposit": "100stake" +} +``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal). diff --git a/docs/ibc/next/light-clients/wasm/integration.mdx b/docs/ibc/next/light-clients/wasm/integration.mdx new file mode 100644 index 00000000..21f61b83 --- /dev/null +++ b/docs/ibc/next/light-clients/wasm/integration.mdx @@ -0,0 +1,414 @@ +--- +title: Integration +description: >- + Learn how to integrate the 08-wasm module in a chain binary and about the + recommended approaches depending on whether the x/wasm module is already used + in the chain. The following document only applies for Cosmos SDK chains. +--- + +Learn how to integrate the `08-wasm` module in a chain binary and about the recommended approaches depending on whether the [`x/wasm` module](https://github.com/CosmWasm/wasmd/tree/main/x/wasm) is already used in the chain. The following document only applies for Cosmos SDK chains. + +## Importing the `08-wasm` module + +`08-wasm` has no stable releases yet. To use it, you need to import the git commit that contains the module with the compatible versions of `ibc-go` and `wasmvm`.
To do so, run the following command with the desired git commit in your project: + +```sh +go get github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10 +``` + +## `app.go` setup + +The sample code below shows the relevant integration points in `app.go` required to set up the `08-wasm` module in a chain binary. Since `08-wasm` is a light client module itself, please check out as well the section [Integrating light clients](/docs/ibc/next/ibc/integration#integrating-light-clients) for more information: + +```go expandable +/ app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + + cmtos "github.com/cometbft/cometbft/libs/os" + + ibcwasm "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10" + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types" + ... +) + +... + +/ Register the AppModule for the 08-wasm module +ModuleBasics = module.NewBasicManager( + ... + ibcwasm.AppModuleBasic{ +}, + ... +) + +/ Add 08-wasm Keeper +type SimApp struct { + ... + WasmClientKeeper ibcwasmkeeper.Keeper + ... +} + +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + ... + keys := sdk.NewKVStoreKeys( + ... + ibcwasmtypes.StoreKey, + ) + + / Instantiate 08-wasm's keeper + / This sample code uses a constructor function that + / accepts a pointer to an existing instance of Wasm VM. + / This is the recommended approach when the chain + / also uses `x/wasm`, and then the Wasm VM instance + / can be shared. 
+ app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmVM, + app.GRPCQueryRouter(), + ) + wasmLightClientModule := wasm.NewLightClientModule(app.WasmClientKeeper) + +app.IBCKeeper.ClientKeeper.AddRoute(ibcwasmtypes.ModuleName, &wasmLightClientModule) + +app.ModuleManager = module.NewManager( + / SDK app modules + ... + ibcwasm.NewAppModule(app.WasmClientKeeper), + ) + +app.ModuleManager.SetOrderBeginBlockers( + ... + ibcwasmtypes.ModuleName, + ... + ) + +app.ModuleManager.SetOrderEndBlockers( + ... + ibcwasmtypes.ModuleName, + ... + ) + genesisModuleOrder := []string{ + ... + ibcwasmtypes.ModuleName, + ... +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + ... + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + ... + + / must be before Loading version + if manager := app.SnapshotManager(); manager != nil { + err := manager.RegisterExtensions( + ibcwasmkeeper.NewWasmSnapshotter(app.CommitMultiStore(), &app.WasmClientKeeper), + ) + if err != nil { + panic(fmt.Errorf("failed to register snapshot extension: %s", err)) +} + +} + ... + if loadLatest { + ... + ctx := app.BaseApp.NewUncachedContext(true, cmtproto.Header{ +}) + + / Initialize pinned codes in wasmvm as they are not persisted there + if err := app.WasmClientKeeper.InitializePinnedCodes(ctx); err != nil { + cmtos.Exit(fmt.Sprintf("failed initialize pinned codes %s", err)) +} + +} +} +``` + +## Keeper instantiation + +When it comes to instantiating `08-wasm`'s keeper, there are two recommended ways of doing it. Choosing one or the other will depend on whether the chain already integrates [`x/wasm`](https://github.com/CosmWasm/wasmd/tree/main/x/wasm) or not. 
+ +### If `x/wasm` is present + +If the chain where the module is integrated uses `x/wasm` then we recommend that both `08-wasm` and `x/wasm` share the same Wasm VM instance. Having two separate Wasm VM instances is still possible, but care should be taken to make sure that both instances do not share the directory where the VM stores blobs and various caches, otherwise unexpected behaviour is likely to happen (from `x/wasm` v0.51 and `08-wasm` v0.2.0+ibc-go-v8.3-wasmvm-v2.0 this will be forbidden anyway, since wasmvm v2.0.0 and above will not allow two different Wasm VM instances to share the same data folder). + +In order to share the Wasm VM instance, please follow the guideline below. Please note that this requires `x/wasm` v0.41 or above. + +- Instantiate the Wasm VM in `app.go` with the parameters of your choice. +- [Create an `Option` with this Wasm VM instance](https://github.com/CosmWasm/wasmd/blob/db93d7b6c7bb6f4a340d74b96a02cec885729b59/x/wasm/keeper/options.go#L21-L25). +- Add the option created in the previous step to a slice and [pass it to the `x/wasm NewKeeper` constructor function](https://github.com/CosmWasm/wasmd/blob/db93d7b6c7bb6f4a340d74b96a02cec885729b59/x/wasm/keeper/keeper_cgo.go#L36). +- Pass the pointer to the Wasm VM instance to `08-wasm` [`NewKeeperWithVM` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47). + +The code to set this up would look something like this: + +```go expandable +/ app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + + wasmvm "github.com/CosmWasm/wasmvm/v2" + wasmkeeper "github.com/CosmWasm/wasmd/x/wasm/keeper" + wasmtypes "github.com/CosmWasm/wasmd/x/wasm/types" + + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types" + ... +) + +...
+ +/ instantiate the Wasm VM with the chosen parameters +wasmer, err := wasmvm.NewVM( + dataDir, + availableCapabilities, + contractMemoryLimit, / default of 32 + contractDebugMode, + memoryCacheSize, +) + if err != nil { + panic(err) +} + +/ create an Option slice (or append to an existing one) +/ with the option to use a custom Wasm VM instance +wasmOpts = []wasmkeeper.Option{ + wasmkeeper.WithWasmEngine(wasmer), +} + +/ the keeper will use the provided Wasm VM instance, +/ instead of instantiating a new one +app.WasmKeeper = wasmkeeper.NewKeeper( + appCodec, + keys[wasmtypes.StoreKey], + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + distrkeeper.NewQuerier(app.DistrKeeper), + app.IBCKeeper.ChannelKeeper, + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, + scopedWasmKeeper, + app.TransferKeeper, + app.MsgServiceRouter(), + app.GRPCQueryRouter(), + wasmDir, + wasmConfig, + availableCapabilities, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmOpts..., +) + +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmer, / pass the Wasm VM instance to `08-wasm` keeper constructor + app.GRPCQueryRouter(), +) +... +``` + +### If `x/wasm` is not present + +If the chain does not use [`x/wasm`](https://github.com/CosmWasm/wasmd/tree/main/x/wasm), even though it is still possible to use the method above from the previous section +(e.g. 
instantiating a Wasm VM in `app.go` and passing it to `08-wasm`'s [`NewKeeperWithVM` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47)), since there would be no need in this case to share the Wasm VM instance with another module, you can use the [`NewKeeperWithConfig` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L88-L96) and provide the Wasm VM configuration parameters of your choice instead. A Wasm VM instance will be created in `NewKeeperWithConfig`. The parameters that can be set are: + +- `DataDir` is the [directory for Wasm blobs and various caches](https://github.com/CosmWasm/wasmvm/blob/v2.0.0/lib.go#L25). As an example, in `wasmd` this is set to the [`wasm` folder under the home directory](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L578). In the code snippet below we set this field to the `ibc_08-wasm_client_data` folder under the home directory. +- `SupportedCapabilities` is a [list of capabilities supported by the chain](https://github.com/CosmWasm/wasmvm/blob/v2.0.0/lib.go#L26). [`wasmd` sets this to all the available capabilities](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L586), but 08-wasm only requires `iterator`. +- `MemoryCacheSize` sets [the size in MiB of an in-memory cache for e.g. module caching](https://github.com/CosmWasm/wasmvm/blob/v2.0.0/lib.go#L29C16-L29C104). It is not consensus-critical and should be defined on a per-node basis, often in the range 100 to 1000 MB. [`wasmd` reads this value from its configuration](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L579). Default value is 256.
- `ContractDebugMode` is a [flag to enable/disable printing debug logs from the contract to STDOUT](https://github.com/CosmWasm/wasmvm/blob/v2.0.0/lib.go#L28). This should be false in production environments. The default value is false.

Another configuration parameter of the Wasm VM is the contract memory limit (in MiB), which is [set to 32](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L8), [following the example of `wasmd`](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/x/wasm/keeper/keeper.go#L32-L34). This parameter is not configurable by users of `08-wasm`.

The following sample code shows how the keeper would be constructed using this method:

```go expandable
/ app.go
import (
    ...
    "github.com/cosmos/cosmos-sdk/runtime"

    ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/keeper"
    ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types"
    ...
)

...

/ homePath is the path to the directory where the data
/ directory for Wasm blobs and caches will be created
wasmConfig := ibcwasmtypes.WasmConfig{
    DataDir:               filepath.Join(homePath, "ibc_08-wasm_client_data"),
    SupportedCapabilities: []string{"iterator"},
    ContractDebugMode:     false,
}

app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithConfig(
    appCodec,
    runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]),
    app.IBCKeeper.ClientKeeper,
    authtypes.NewModuleAddress(govtypes.ModuleName).String(),
    wasmConfig,
    app.GRPCQueryRouter(),
)
```

Also check out the [`WasmConfig` type definition](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L21-L31) for more information on each of the configurable parameters. Some parameters allow node-level configurations.
There is additionally the function [`DefaultWasmConfig`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L36-L42) available that returns a configuration with the default values. + +### Options + +The `08-wasm` module comes with an options API inspired by the one in `x/wasm`. +Currently the only option available is the `WithQueryPlugins` option, which allows registration of custom query plugins for the `08-wasm` module. The use of this API is optional and it is only required if the chain wants to register custom query plugins for the `08-wasm` module. + +#### `WithQueryPlugins` + +By default, the `08-wasm` module does not configure any querier options for light client contracts. However, it is possible to register custom query plugins for [`QueryRequest::Custom`](https://github.com/CosmWasm/cosmwasm/blob/v2.0.1/packages/std/src/query/mod.rs#L48) and [`QueryRequest::Stargate`](https://github.com/CosmWasm/cosmwasm/blob/v2.0.1/packages/std/src/query/mod.rs#L57-L65). + +Assuming that the keeper is not yet instantiated, the following sample code shows how to register query plugins for the `08-wasm` module. + +We first construct a [`QueryPlugins`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/querier.go#L78-L87) object with the desired query plugins: + +```go +queryPlugins := ibcwasmtypes.QueryPlugins { + Custom: MyCustomQueryPlugin(), + / `myAcceptList` is a `[]string` containing the list of gRPC query paths that the chain wants to allow for the `08-wasm` module to query. + / These queries must be registered in the chain's gRPC query router, be deterministic, and track their gas usage. + / The `AcceptListStargateQuerier` function will return a query plugin that will only allow queries for the paths in the `myAcceptList`. + / The query responses are encoded in protobuf unlike the implementation in `x/wasm`. 
+ Stargate: ibcwasmtypes.AcceptListStargateQuerier(myAcceptList), +} +``` + +Note that the `Stargate` querier appends the user defined accept list of query routes to a default list defined by the `08-wasm` module. +The `defaultAcceptList` defines a single query route: `"/ibc.core.client.v1.Query/VerifyMembership"`. This allows for light client smart contracts to delegate parts of their workflow to other light clients for auxiliary proof verification. For example, proof of inclusion of block and tx data by a data availability provider. + +```go +/ defaultAcceptList defines a set of default allowed queries made available to the Querier. +var defaultAcceptList = []string{ + "/ibc.core.client.v1.Query/VerifyMembership", +} +``` + +You may leave any of the fields in the `QueryPlugins` object as `nil` if you do not want to register a query plugin for that query type. + +Then, we pass the `QueryPlugins` object to the `WithQueryPlugins` option: + +```go +querierOption := ibcwasmkeeper.WithQueryPlugins(&queryPlugins) +``` + +Finally, we pass the option to the `NewKeeperWithConfig` or `NewKeeperWithVM` constructor function during [Keeper instantiation](#keeper-instantiation): + +```diff +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithConfig( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmConfig, + app.GRPCQueryRouter(), ++ querierOption, +) +``` + +```diff +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmer, / pass the Wasm VM instance to `08-wasm` keeper constructor + app.GRPCQueryRouter(), ++ querierOption, +) +``` + +## Updating `AllowedClients` + +If the chain's 02-client submodule parameter `AllowedClients` contains the single wildcard `"*"` element, then it is not necessary to do 
anything in order to allow the creation of `08-wasm` clients. However, if the parameter contains a list of client types (e.g. `["06-solomachine", "07-tendermint"]`), then, in order to use the `08-wasm` module, chains must update the [`AllowedClients` parameter](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/client.proto#L64) of core IBC. This can be configured directly in the application upgrade handler with the sample code below:

```go expandable
import (
    ...
    ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types"
    ...
)

...

func CreateWasmUpgradeHandler(
    mm *module.Manager,
    configurator module.Configurator,
    clientKeeper clientkeeper.Keeper,
) upgradetypes.UpgradeHandler {
    return func(goCtx context.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
        ctx := sdk.UnwrapSDKContext(goCtx)
        / explicitly update the IBC 02-client params, adding the wasm client type
        params := clientKeeper.GetParams(ctx)
        params.AllowedClients = append(params.AllowedClients, ibcwasmtypes.Wasm)
        clientKeeper.SetParams(ctx, params)
        return mm.RunMigrations(goCtx, configurator, vm)
    }
}
```

Alternatively, the parameter can be updated via a governance proposal (see the end of the [`Creating clients`](/docs/ibc/next/light-clients/developer-guide/setup#creating-clients) section for an example of how to do this).

## Adding the module to the store

As part of the upgrade migration you must also add the module to the upgrades store.

```go expandable
func (app SimApp) RegisterUpgradeHandlers() {
    ...
+ if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := storetypes.StoreUpgrades{ + Added: []string{ + ibcwasmtypes.ModuleName, +}, +} + + / configure store loader that checks if version == upgradeHeight and applies store upgrades + app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +} +``` + +## Adding snapshot support + +In order to use the `08-wasm` module chains are required to register the `WasmSnapshotter` extension in the snapshot manager. This snapshotter takes care of persisting the external state, in the form of contract code, of the Wasm VM instance to disk when the chain is snapshotted. [This code](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/testing/simapp/app.go#L775-L782) should be placed in `NewSimApp` function in `app.go`. + +## Pin byte codes at start + +Wasm byte codes should be pinned to the WasmVM cache on every application start, therefore [this code](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/testing/simapp/app.go#L825-L830) should be placed in `NewSimApp` function in `app.go`. diff --git a/docs/ibc/next/light-clients/wasm/messages.mdx b/docs/ibc/next/light-clients/wasm/messages.mdx new file mode 100644 index 00000000..182d1154 --- /dev/null +++ b/docs/ibc/next/light-clients/wasm/messages.mdx @@ -0,0 +1,73 @@ +--- +title: Messages +description: >- + Uploading the Wasm light client contract to the Wasm VM storage is achieved by + means of MsgStoreCode: +--- + +## `MsgStoreCode` + +Uploading the Wasm light client contract to the Wasm VM storage is achieved by means of `MsgStoreCode`: + +```go +type MsgStoreCode struct { + / signer address + Signer string + / wasm byte code of light client contract. 
It can be raw or gzip compressed + WasmByteCode []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `WasmByteCode` is empty or it exceeds the maximum size, currently set to 3MB. + +Only light client contracts stored using `MsgStoreCode` are allowed to be instantiated. An attempt to create a light client from contracts uploaded via other means (e.g. through `x/wasm` if the module shares the same Wasm VM instance with 08-wasm) will fail. Due to the idempotent nature of the Wasm VM's `StoreCode` function, it is possible to store the same byte code multiple times. + +When execution of `MsgStoreCode` succeeds, the checksum of the contract (i.e. the sha256 hash of the contract's byte code) is stored in an allow list. When a relayer submits [`MsgCreateClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L25-L37) with 08-wasm's `ClientState`, the client state includes the checksum of the Wasm byte code that should be called. Then 02-client calls [08-wasm's implementation of `Initialize` function](https://github.com/cosmos/ibc-go/blob/06fd8eb5ee1697e3b43be7528a6e42f5e4a4613c/modules/core/02-client/keeper/client.go#L40) (which is an interface function part of `LightClientModule`), and it will check that the checksum in the client state matches one of the checksums in the allow list. If a match is found, the light client is initialized; otherwise, the transaction is aborted. 
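As a rough illustration of the allow-list check described above: the checksum stored for a contract is simply the SHA-256 hash of its Wasm byte code. The following is a minimal sketch; the `checksumOf` helper is hypothetical and not part of the actual ibc-go API:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// checksumOf is a hypothetical helper mirroring how 08-wasm derives a
// contract checksum: the SHA-256 hash of the (uncompressed) Wasm byte code.
func checksumOf(wasmByteCode []byte) string {
	sum := sha256.Sum256(wasmByteCode)
	return hex.EncodeToString(sum[:])
}

func main() {
	// minimal 8-byte Wasm module header, for illustration only
	code := []byte("\x00asm\x01\x00\x00\x00")
	fmt.Println(checksumOf(code)) // 64 hex characters (32 bytes)
}
```

Conceptually, `Initialize` performs this hash-then-compare step: the checksum embedded in the submitted client state must equal one of the stored allow-list entries for client creation to proceed.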
+ +## `MsgMigrateContract` + +Migrating a contract to a new Wasm byte code is achieved by means of `MsgMigrateContract`: + +```go +type MsgMigrateContract struct { + / signer address + Signer string + / the client id of the contract + ClientId string + / the SHA-256 hash of the new wasm byte code for the contract + Checksum []byte + / the json-encoded migrate msg to be passed to the contract on migration + Msg []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `ClientId` is not a valid identifier prefixed by `08-wasm`. +* `Checksum` is not exactly 32 bytes long or it is not found in the list of allowed checksums (a new checksum is added to the list when executing `MsgStoreCode`), or it matches the current checksum of the contract. + +When a Wasm light client contract is migrated to a new Wasm byte code the checksum for the contract will be updated with the new checksum. + +## `MsgRemoveChecksum` + +Removing a checksum from the list of allowed checksums is achieved by means of `MsgRemoveChecksum`: + +```go +type MsgRemoveChecksum struct { + / signer address + Signer string + / Wasm byte code checksum to be removed from the store + Checksum []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `Checksum` is not exactly 32 bytes long or it is not found in the list of allowed checksums (a new checksum is added to the list when executing `MsgStoreCode`). + +When a checksum is removed from the list of allowed checksums, then the corresponding Wasm byte code will not be available for instantiation in [08-wasm's implementation of `Initialize` function](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/core/02-client/keeper/client.go#L36). 
diff --git a/docs/ibc/next/light-clients/wasm/migrations.mdx b/docs/ibc/next/light-clients/wasm/migrations.mdx
new file mode 100644
index 00000000..69d39bfe
--- /dev/null
+++ b/docs/ibc/next/light-clients/wasm/migrations.mdx
@@ -0,0 +1,235 @@
---
title: Migrations
description: This guide provides instructions for migrating 08-wasm versions.
---

This guide provides instructions for migrating 08-wasm versions.

Please note that the following releases are retracted. Please refer to the appropriate migrations section for upgrading.

```bash
v0.3.1-0.20240717085919-bb71eef0f3bf => v0.3.0+ibc-go-v8.3-wasmvm-v2.0
v0.2.1-0.20240717085554-570d057959e3 => v0.2.0+ibc-go-v7.6-wasmvm-v1.5
v0.2.1-0.20240523101951-4b45d1822fb6 => v0.2.0+ibc-go-v8.3-wasmvm-v2.0
v0.1.2-0.20240412103620-7ee2a2452b79 => v0.1.1+ibc-go-v7.3-wasmvm-v1.5
v0.1.1-0.20231213092650-57fcdb9a9a9d => v0.1.0+ibc-go-v8.0-wasmvm-v1.5
v0.1.1-0.20231213092633-b306e7a706e1 => v0.1.0+ibc-go-v7.3-wasmvm-v1.5
```

## From ibc-go v8.4.x to ibc-go v9.0.x

### Chains

* The `Initialize`, `Status`, `GetTimestampAtHeight`, `GetLatestHeight`, `VerifyMembership`, `VerifyNonMembership`, `VerifyClientMessage`, `UpdateState` and `UpdateStateOnMisbehaviour` functions in `ClientState` have been removed and all their logic has been moved to functions of the `LightClientModule`.
* The `MigrateContract` function has been removed from `ClientState`.
* The `VerifyMembershipMsg` and `VerifyNonMembershipMsg` payloads for `SudoMsg` have been modified. The `Path` field of both structs has been updated from `v1.MerklePath` to `v2.MerklePath`. The new `v2.MerklePath` field contains a `KeyPath` of `[][]byte` as opposed to `[]string`. This supports proving values stored under keys which contain non-utf8 encoded symbols. As a result, the JSON field `path` containing `key_path` of both messages will marshal elements as base64-encoded bytestrings.
This is a breaking change for 08-wasm client contracts, which should be migrated to correctly support deserialisation of the `v2.MerklePath` field.
* The `ExportMetadataMsg` struct has been removed and is no longer required for contracts to implement. Core IBC will handle exporting all key/value pairs written to the store by a light client contract.
* The `ZeroCustomFields` interface function has been removed from the `ClientState` interface. Core IBC only used this function to set tendermint client states when scheduling an IBC software upgrade. The interface function has been replaced by a type assertion.
* The `MaxWasmByteSize` function has been removed in favor of the `MaxWasmSize` constant.
* The `HasChecksum`, `GetAllChecksums` and `Logger` functions have been moved from the `types` package to methods on the `Keeper` type in the `keeper` package.
* The `InitializePinnedCodes` function has been moved to a method on the `Keeper` type in the `keeper` package.
* The `CustomQuerier`, `StargateQuerier` and `QueryPlugins` types have been moved from the `types` package to the `keeper` package.
* The `NewDefaultQueryPlugins`, `AcceptListStargateQuerier` and `RejectCustomQuerier` functions have been moved from the `types` package to the `keeper` package.
* The `NewDefaultQueryPlugins` function signature has changed to take an argument: `queryRouter ibcwasm.QueryRouter`.
* The `AcceptListStargateQuerier` function signature has changed to take an additional argument: `queryRouter ibcwasm.QueryRouter`.
* The `WithQueryPlugins` function signature has changed to take in the `QueryPlugins` type from the `keeper` package (previously from the `types` package).
* The `VMGasRegister` variable has been moved from the `types` package to the `keeper` package.
+ +## From v0.3.0+ibc-go-v8.3-wasmvm-v2.0 to v0.4.1-ibc-go-v8.4-wasmvm-v2.0 + +### Contract developers + +Contract developers are required to update their JSON API message structure for the `SudoMsg` payloads `VerifyMembershipMsg` and `VerifyNonMembershipMsg`. +The `path` field on both JSON API messages has been renamed to `merkle_path`. + +A migration is required for existing 08-wasm client contracts in order to correctly handle the deserialisation of these fields. + +## From v0.2.0+ibc-go-v7.3-wasmvm-v1.5 to v0.3.1-ibc-go-v7.4-wasmvm-v1.5 + +### Contract developers + +Contract developers are required to update their JSON API message structure for the `SudoMsg` payloads `VerifyMembershipMsg` and `VerifyNonMembershipMsg`. +The `path` field on both JSON API messages has been renamed to `merkle_path`. + +A migration is required for existing 08-wasm client contracts in order to correctly handle the deserialisation of these fields. + +## From v0.2.0+ibc-go-v8.3-wasmvm-v2.0 to v0.3.0-ibc-go-v8.3-wasmvm-v2.0 + +### Contract developers + +The `v0.3.0` release of 08-wasm for ibc-go `v8.3.x` and above introduces a breaking change for client contract developers. + +The contract API `SudoMsg` payloads `VerifyMembershipMsg` and `VerifyNonMembershipMsg` have been modified. +The encoding of the `Path` field of both structs has been updated from `v1.MerklePath` to `v2.MerklePath` to support proving values stored under keys which contain non-utf8 encoded symbols. + +As a result, the `Path` field now contains a `MerklePath` composed of `key_path` of `[][]byte` as opposed to `[]string`. The JSON field `path` containing `key_path` of both `VerifyMembershipMsg` and `VerifyNonMembershipMsg` structs will now marshal elements as base64 encoded bytestrings. See below for example JSON diff. 
```diff expandable
{
  "verify_membership": {
    "height": {
      "revision_height": 1
    },
    "delay_time_period": 0,
    "delay_block_period": 0,
    "proof": "dmFsaWQgcHJvb2Y=",
    "path": {
+     "key_path": ["L2liYw==","L2tleS9wYXRo"]
-     "key_path": ["/ibc","/key/path"]
    },
    "value": "dmFsdWU="
  }
}
```

A migration is required for existing 08-wasm client contracts in order to correctly handle the deserialisation of `key_path` from `[]string` to `[][]byte`.
Contract developers should familiarise themselves with the migration path offered by 08-wasm [here](/docs/ibc/next/light-clients/wasm/governance#migrating-an-existing-wasm-light-client-contract).

An example of the required changes in a client contract may look like:

```diff
#[cw_serde]
pub struct MerklePath {
+ pub key_path: Vec<Binary>,
- pub key_path: Vec<String>,
}
```

Please refer to the [`cosmwasm_std`](https://docs.rs/cosmwasm-std/2.0.4/cosmwasm_std/struct.Binary.html) documentation for more information.

## From v0.1.1+ibc-go-v7.3-wasmvm-v1.5 to v0.2.0-ibc-go-v7.3-wasmvm-v1.5

### Contract developers

The `v0.2.0` release of 08-wasm for ibc-go `v7.6.x` and above introduces a breaking change for client contract developers.

The contract API `SudoMsg` payloads `VerifyMembershipMsg` and `VerifyNonMembershipMsg` have been modified.
The encoding of the `Path` field of both structs has been updated from `v1.MerklePath` to `v2.MerklePath` to support proving values stored under keys which contain non-utf8 encoded symbols.

As a result, the `Path` field now contains a `MerklePath` composed of `key_path` of `[][]byte` as opposed to `[]string`. The JSON field `path` containing `key_path` of both `VerifyMembershipMsg` and `VerifyNonMembershipMsg` structs will now marshal elements as base64-encoded bytestrings. See below for example JSON diff.
```diff expandable
{
  "verify_membership": {
    "height": {
      "revision_height": 1
    },
    "delay_time_period": 0,
    "delay_block_period": 0,
    "proof": "dmFsaWQgcHJvb2Y=",
    "path": {
+     "key_path": ["L2liYw==","L2tleS9wYXRo"]
-     "key_path": ["/ibc","/key/path"]
    },
    "value": "dmFsdWU="
  }
}
```

A migration is required for existing 08-wasm client contracts in order to correctly handle the deserialisation of `key_path` from `[]string` to `[][]byte`.
Contract developers should familiarise themselves with the migration path offered by 08-wasm [here](/docs/ibc/next/light-clients/wasm/governance#migrating-an-existing-wasm-light-client-contract).

An example of the required changes in a client contract may look like:

```diff
#[cw_serde]
pub struct MerklePath {
+ pub key_path: Vec<Binary>,
- pub key_path: Vec<String>,
}
```

Please refer to the [`cosmwasm_std`](https://docs.rs/cosmwasm-std/2.0.4/cosmwasm_std/struct.Binary.html) documentation for more information.

## From ibc-go v7.3.x to ibc-go v8.0.x

### Chains

In the 08-wasm versions compatible with ibc-go v7.3.x and above from the v7 release line, the checksums of the uploaded Wasm bytecodes are all stored under a single key. From ibc-go v8.0.x the checksums are stored using [`collections.KeySet`](https://docs.cosmos.network/v0.50/build/packages/collections#keyset), whose full functionality became available in Cosmos SDK v0.50. There is therefore an [automatic migration handler](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/module.go#L115-L118) configured in the 08-wasm module to migrate the stored checksums to `collections.KeySet`.
+ +## From v0.1.0+ibc-go-v8.0-wasmvm-v1.5 to v0.2.0-ibc-go-v8.3-wasmvm-v2.0 + +The `WasmEngine` interface has been updated to reflect changes in the function signatures of Wasm VM: + +```diff expandable +type WasmEngine interface { +- StoreCode(code wasmvm.WasmCode) (wasmvm.Checksum, error) ++ StoreCode(code wasmvm.WasmCode, gasLimit uint64) (wasmvmtypes.Checksum, uint64, error) + + StoreCodeUnchecked(code wasmvm.WasmCode) (wasmvm.Checksum, error) + + Instantiate( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + info wasmvmtypes.MessageInfo, + initMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + Query( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + queryMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) ([]byte, uint64, error) ++ ) (*wasmvmtypes.QueryResult, uint64, error) + + Migrate( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + migrateMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + Sudo( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + sudoMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + GetCode(checksum wasmvm.Checksum) (wasmvm.WasmCode, error) + + Pin(checksum wasmvm.Checksum) error + + Unpin(checksum wasmvm.Checksum) error +} +``` + +Similar changes were required in the functions of 
the `MockWasmEngine` interface.

### Chains

The `SupportedCapabilities` field of `WasmConfig` is now of type `[]string`:

```diff
type WasmConfig struct {
  DataDir string
- SupportedCapabilities string
+ SupportedCapabilities []string
  ContractDebugMode bool
}
```
diff --git a/docs/ibc/next/light-clients/wasm/overview.mdx b/docs/ibc/next/light-clients/wasm/overview.mdx
new file mode 100644
index 00000000..4dcc5422
--- /dev/null
+++ b/docs/ibc/next/light-clients/wasm/overview.mdx
@@ -0,0 +1,22 @@
---
title: Overview
description: Learn about the 08-wasm light client proxy module.
---

## Overview

Learn about the `08-wasm` light client proxy module.

### Context

Traditionally, light clients used by ibc-go have been implemented only in Go, and since ibc-go v7 (with the release of the 02-client refactor), they are first-class Cosmos SDK modules. This means that updating existing light client implementations or adding support for new light clients is a multi-step, time-consuming process involving on-chain governance: it is necessary to modify the codebase of ibc-go (if the light client is part of its codebase), re-build chains' binaries, pass a governance proposal and have validators upgrade their nodes.

### Motivation

To break the limitation of being able to write light client implementations only in Go, the `08-wasm` module adds support for running light clients written in a Wasm-compilable language. The light client byte code implements the entry points of a [CosmWasm](https://docs.cosmwasm.com/docs/) smart contract, and runs inside a Wasm VM. The `08-wasm` module exposes a proxy light client interface that routes incoming messages to the appropriate handler function, inside the Wasm VM, for execution.

Adding a new light client to a chain is as simple as submitting a governance proposal with the message that stores the byte code of the light client contract. No coordinated upgrade is needed.
When the governance proposal passes and the message is executed, the contract is ready to be instantiated upon receiving a relayer-submitted `MsgCreateClient`. The process of creating a Wasm light client is the same as with a regular light client implemented in Go.

### Use cases

* Development of light clients for non-Cosmos ecosystem chains: state machines in other ecosystems are, in many cases, implemented in Rust, and thus there are probably libraries used in their light client implementations for which there is no equivalent in Go. This makes the development of a light client in Go very difficult, but relatively simple in Rust. Therefore, writing a CosmWasm smart contract in Rust that implements the light client algorithm requires less effort.
diff --git a/docs/ibc/next/link-rewrite-test.mdx b/docs/ibc/next/link-rewrite-test.mdx
deleted file mode 100644
index ed5080e7..00000000
--- a/docs/ibc/next/link-rewrite-test.mdx
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: "Link Rewrite Test"
-description: "Test page to verify versioned link rewriting during freeze"
----
-
-This page verifies that absolute doc links are rewritten from `next` to the frozen version.
-
-- Markdown link: [Go to IBC index](/docs/ibc/next/index)
-- Anchor link: IBC Index
-- Legacy doc path: Old doc path
-- - Release Notes (markdown): [IBC Release Notes](/docs/ibc/next/changelog/release-notes)
-- - Release Notes (anchor): IBC Release Notes
-
-If rewriting succeeds, in the frozen version these should point to `/docs/ibc/v0.1.0/...`.
diff --git a/docs/ibc/next/middleware/callbacks/callbacks-IBCv2.mdx b/docs/ibc/next/middleware/callbacks/callbacks-IBCv2.mdx
new file mode 100644
index 00000000..d370668b
--- /dev/null
+++ b/docs/ibc/next/middleware/callbacks/callbacks-IBCv2.mdx
@@ -0,0 +1,28 @@
---
title: Callbacks with IBC v2
description: >-
  This page highlights some of the differences between IBC v2 and IBC classic
  relevant for the callbacks middleware and how to use the module with IBC v2.
  More details on middleware for IBC v2 can be found in the middleware section.
---

This page highlights some of the differences between IBC v2 and IBC classic relevant for the callbacks middleware and how to use the module with IBC v2. More details on middleware for IBC v2 can be found in the [middleware section](/docs/ibc/next/ibc/middleware/developIBCv2).

## Interfaces

Some of the interface differences are:

* The callbacks middleware for IBC v2 requires the [`Underlying Application`](/docs/ibc/next/middleware/callbacks/overview) to implement the new [`CallbacksCompatibleModuleV2`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/types/callbacks.go#L53-L58) interface.
* `channeltypesv2.Payload` is now used instead of `channeltypes.Packet`.
* With IBC classic, the `OnRecvPacket` callback returns the `ack`, whereas v2 returns the `recvResult`, which is the [status of the packet](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/packet.pb.go#L26-L38): unspecified, success, failure or asynchronous.
* `api.WriteAcknowledgementWrapper` is used instead of `ICS4Wrapper.WriteAcknowledgement`. It is only needed if the lower-level application is going to write an asynchronous acknowledgement.

## Contract Developers

The wasmd contract keeper enables CosmWasm developers to use the callbacks middleware. The [cosmwasm documentation](https://cosmwasm.cosmos.network/ibc/extensions/callbacks) provides information for contract developers.
The IBC v2 callbacks implementation uses a `Payload` but reconstructs an IBC classic `Packet` to preserve the cosmwasm contract keeper interface. Additionally, contracts must now handle the IBC v2 `ErrorAcknowledgement` sentinel value in the case of a failure.

The callbacks middleware can be used for transfer + action workflows, for example a transfer and swap on receive. These workflows require knowledge of the IBC denom that has been received. To assist with parsing the ics20 packet, [helper functions](https://github.com/cosmos/solidity-ibc-eureka/blob/a8870b023e58622fb7b3f733572c684851f8e5ee/packages/cosmwasm/ibc-callbacks-helpers/src/ics20.rs#L7-L41) can be found in the solidity-ibc-eureka repository.

## Integration

An example integration of the callbacks middleware in a transfer stack that is using IBC v2 can be found in the [ibc-go integration section](/docs/ibc/next/ibc/integration).
diff --git a/docs/ibc/next/middleware/callbacks/end-users.mdx b/docs/ibc/next/middleware/callbacks/end-users.mdx
new file mode 100644
index 00000000..8c198b6c
--- /dev/null
+++ b/docs/ibc/next/middleware/callbacks/end-users.mdx
@@ -0,0 +1,95 @@
---
title: End Users
description: >-
  This section explains how to use the callbacks middleware from the perspective
  of an IBC Actor. Callbacks middleware provides two types of callbacks:
---

This section explains how to use the callbacks middleware from the perspective of an IBC Actor. Callbacks middleware provides two types of callbacks:

* Source callbacks:
  * `SendPacket` callback
  * `OnAcknowledgementPacket` callback
  * `OnTimeoutPacket` callback
* Destination callbacks:
  * `ReceivePacket` callback

For a given channel, the source callbacks are supported if the source chain has the callbacks middleware wired up in the channel's IBC stack. Similarly, the destination callbacks are supported if the destination chain has the callbacks middleware wired up in the channel's IBC stack.
+ + +Callbacks are always executed after the packet has been processed by the underlying IBC module. + + + +If the underlying application module is doing an asynchronous acknowledgement on packet receive (for example, if the [packet forward middleware](https://github.com/cosmos/ibc-apps/tree/main/middleware/packet-forward-middleware) is in the stack, and is being used by this packet), then the callbacks middleware will execute the `ReceivePacket` callback after the acknowledgement has been received. + + +## Source Callbacks + +Source callbacks are natively supported in the following ibc modules (if they are wrapped by the callbacks middleware): + +* `transfer` +* `icacontroller` + +To have your source callbacks be processed by the callbacks middleware, you must set the memo in the application's packet data to the following format: + +```jsonc +{ + "src_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +## Destination Callbacks + +Destination callbacks are natively only supported in the transfer module. Note that wrapping icahost is not supported. This is because icahost should be able to execute an arbitrary transaction anyway, and can call contracts or modules directly. + +To have your destination callbacks processed by the callbacks middleware, you must set the memo in the application's packet data to the following format: + +```jsonc +{ + "dest_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +Note that a packet can have both a source and destination callback. 
+ +```jsonc expandable +{ + "src_callback": { + "address": "callbackAddressString", + // optional + "gas_limit": "userDefinedGasLimitString" + }, + "dest_callback": { + "address": "callbackAddressString", + // optional + "gas_limit": "userDefinedGasLimitString" + } +} +``` + +## User Defined Gas Limit + +User defined gas limit was added for the following reasons: + +* To prevent callbacks from blocking the packet lifecycle. +* To prevent relayers from being able to DOS the callback execution by sending a packet with a low amount of gas. + + +There is a chain wide parameter that sets the maximum gas limit that a user can set for a callback. This is to prevent a user from setting a gas limit that is too high for relayers. If the `"gas_limit"` is not set in the packet memo, then the maximum gas limit is used. + + +These goals are achieved by creating a minimum gas amount required for callback execution. If the relayer provides at least the minimum gas limit for the callback execution, then the packet lifecycle will not be blocked if the callback runs out of gas during execution, and the callback cannot be retried. If the relayer does not provide the minimum amount of gas and the callback execution runs out of gas, the entire tx is reverted and it may be executed again. + + +The `SendPacket` callback is always reverted if the callback execution fails or returns an error for any reason. This is so that the packet is not sent if the callback execution fails. + diff --git a/docs/ibc/next/middleware/callbacks/events.mdx b/docs/ibc/next/middleware/callbacks/events.mdx new file mode 100644 index 00000000..cd0800c3 --- /dev/null +++ b/docs/ibc/next/middleware/callbacks/events.mdx @@ -0,0 +1,37 @@ +--- +title: Events +description: >- + An overview of all events related to the callbacks middleware. There are two + types of events, "ibc_src_callback" and "ibc_dest_callback". +--- + +An overview of all events related to the callbacks middleware.
There are two types of events, `"ibc_src_callback"` and `"ibc_dest_callback"`. + +## Shared Attributes + +Both of these event types share the following attributes: + +| **Attribute Key** | **Attribute Values** | **Optional** | +| :--------------------------: | :-----------------------------------------------------------------------------------------: | :----------------: | +| module | "ibccallbacks" | | +| callback\_type | **One of**: "send\_packet", "acknowledgement\_packet", "timeout\_packet", "receive\_packet" | | +| callback\_address | string | | +| callback\_exec\_gas\_limit | string (parsed from uint64) | | +| callback\_commit\_gas\_limit | string (parsed from uint64) | | +| packet\_sequence | string (parsed from uint64) | | +| callback\_result | **One of**: "success", "failure" | | +| callback\_error | string (parsed from callback err) | Yes, if err != nil | + +## `ibc_src_callback` Attributes + +| **Attribute Key** | **Attribute Values** | +| :------------------: | :----------------------: | +| packet\_src\_port | string (sourcePortID) | +| packet\_src\_channel | string (sourceChannelID) | + +## `ibc_dest_callback` Attributes + +| **Attribute Key** | **Attribute Values** | +| :-------------------: | :--------------------: | +| packet\_dest\_port | string (destPortID) | +| packet\_dest\_channel | string (destChannelID) | diff --git a/docs/ibc/next/middleware/callbacks/gas.mdx b/docs/ibc/next/middleware/callbacks/gas.mdx new file mode 100644 index 00000000..f55121b8 --- /dev/null +++ b/docs/ibc/next/middleware/callbacks/gas.mdx @@ -0,0 +1,75 @@ +--- +title: Gas Management +description: >- + Executing arbitrary code on a chain can be arbitrarily expensive. In general, + a callback may consume infinite gas (think of a callback that loops forever). + This is problematic for a few reasons: +--- + +## Overview + +Executing arbitrary code on a chain can be arbitrarily expensive. 
In general, a callback may consume infinite gas (think of a callback that loops forever). This is problematic for a few reasons: + +* It can block the packet lifecycle. +* It can be used to consume all of the relayer's funds and gas. +* A relayer can DOS the callback execution by sending a packet with a low amount of gas. + +To prevent these, the callbacks middleware introduces two gas limits: a chain wide gas limit (`maxCallbackGas`) and a user defined gas limit. + +### Chain Wide Gas Limit + +Since the callbacks middleware does not have a keeper, it does not use a governance parameter to set the chain wide gas limit. Instead, the chain wide gas limit is passed in as a parameter to the callbacks middleware during initialization. + +```go +// app.go +maxCallbackGas := uint64(10_000_000) + +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibccallbacks.NewIBCMiddleware(transferStack, app.MockContractKeeper, maxCallbackGas) + +// Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### User Defined Gas Limit + +The user defined gas limit is set by the IBC Actor during packet creation. The user defined gas limit is set in the packet memo. If the user defined gas limit is not set or if the user defined gas limit is greater than the chain wide gas limit, then the chain wide gas limit is used as the user defined gas limit.
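This selection rule, together with the enforcement rules described in the next section, reduces to a couple of minimum operations. The following is a minimal sketch of that arithmetic; the function and variable names are illustrative and not part of ibc-go:

```go
package main

import "fmt"

// resolveGasLimits sketches how the callbacks middleware resolves its gas
// limits: the user defined limit is capped by the chain wide limit (the
// result is the "commit gas limit"), and execution is further capped by the
// gas the relayer has left ("execution gas limit"). Retries are only allowed
// when the relayer supplied less gas than the commit gas limit.
func resolveGasLimits(chainWideGasLimit, userGasLimit, contextGasLimit uint64) (commitGasLimit, executionGasLimit uint64, retriesAllowed bool) {
	commitGasLimit = userGasLimit
	if userGasLimit == 0 || userGasLimit > chainWideGasLimit {
		// an unset or too-high user limit falls back to the chain wide limit
		commitGasLimit = chainWideGasLimit
	}
	executionGasLimit = commitGasLimit
	if contextGasLimit < commitGasLimit {
		executionGasLimit = contextGasLimit
		retriesAllowed = true
	}
	return commitGasLimit, executionGasLimit, retriesAllowed
}

func main() {
	// user asks for 2M gas, the relayer only has 1M left in the context
	commit, exec, retry := resolveGasLimits(10_000_000, 2_000_000, 1_000_000)
	fmt.Println(commit, exec, retry) // 2000000 1000000 true
}
```

With these numbers the callback is metered at 1M gas, and because the relayer provided less than the 2M commit gas limit, an out-of-gas failure would revert the tx so the packet can be retried with more gas.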
+ +```jsonc expandable +{ + "src_callback": { + "address": "callbackAddressString", + // optional + "gas_limit": "userDefinedGasLimitString" + }, + "dest_callback": { + "address": "callbackAddressString", + // optional + "gas_limit": "userDefinedGasLimitString" + } +} +``` + +## Gas Limit Enforcement + +During a callback execution, there are three types of gas limits that are enforced: + +* User defined gas limit +* Chain wide gas limit +* Context gas limit (amount of gas that the relayer has left for this execution) + +The chain wide gas limit is used as a maximum for the user defined gas limit, as explained in the [previous section](#user-defined-gas-limit). It may also be used as a default value if no user gas limit is provided. Therefore, we can ignore the chain wide gas limit for the rest of this section and work with the minimum of the chain wide gas limit and user defined gas limit. This minimum is called the commit gas limit. + +The gas limit enforcement is done by executing the callback inside a cached context with a new gas meter. The gas meter is initialized with the minimum of the commit gas limit and the context gas limit. This minimum is called the execution gas limit. We say that retries are allowed if `context gas limit < commit gas limit`. Otherwise, we say that retries are not allowed. + +If the callback execution fails due to an out of gas error, then the middleware checks if retries are allowed. If retries are not allowed, then it recovers from the out of gas error, consumes the execution gas limit from the original context, and continues with the packet life cycle. If retries are allowed, then it panics with an out of gas error to revert the entire tx. The packet can then be submitted again with a higher gas limit. The out of gas panic descriptor is shown below.
+ +```go +fmt.Sprintf("ibc %s callback out of gas; commitGasLimit: %d", callbackType, callbackData.CommitGasLimit) +``` + +If the callback execution does not fail due to an out of gas error, then the callbacks middleware does not block the packet life cycle regardless of whether retries are allowed or not. diff --git a/docs/ibc/next/middleware/callbacks/integration.mdx b/docs/ibc/next/middleware/callbacks/integration.mdx new file mode 100644 index 00000000..fc111cd4 --- /dev/null +++ b/docs/ibc/next/middleware/callbacks/integration.mdx @@ -0,0 +1,85 @@ +--- +title: Integration +description: >- + Learn how to integrate the callbacks middleware with IBC applications. The + following document is intended for developers building on top of the Cosmos + SDK and only applies for Cosmos SDK chains. +--- + +Learn how to integrate the callbacks middleware with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies to Cosmos SDK chains. + + +An example integration for an IBC v2 transfer stack using the callbacks middleware can be found in the [ibc-go module integration](/docs/ibc/next/ibc/integration) section. + + +The callbacks middleware is a minimal and stateless implementation of the IBC middleware interface. It does not have a keeper, nor does it store any state. It simply routes IBC middleware messages to the appropriate callback function, which is implemented by the secondary application. Therefore, it doesn't need to be registered as a module, nor does it need to be added to the module manager. It only needs to be added to the IBC application stack. + +## Pre-requisite Readings + +* [IBC middleware development](/docs/ibc/next/ibc/middleware/develop) +* [IBC middleware integration](/docs/ibc/next/ibc/middleware/integration) + +The callbacks middleware, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly.
+For Cosmos SDK chains this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application. + +## Configuring an application stack with the callbacks middleware + +As mentioned in [IBC middleware development](/docs/ibc/next/ibc/middleware/develop), an application stack may be composed of zero or more middlewares that nest a base application. +These layers form the complete set of application logic that enables developers to build composable and flexible IBC application stacks. +For example, an application stack may just be a single base application like `transfer`; however, the same application stack composed with `packet-forward-middleware` and `callbacks` will wrap the `transfer` base application twice, first with the callbacks module and then with the packet forward middleware. + +The callbacks middleware also **requires** a secondary application that will receive the callbacks to implement the [`ContractKeeper`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/types/expected_keepers.go#L12-L100). The wasmd contract keeper has been implemented [here](https://github.com/CosmWasm/wasmd/tree/main/x/wasm/keeper) and is referenced as the `WasmKeeper`. + +### Transfer + +See below for an example of how to create an application stack using `transfer`, `packet-forward-middleware`, and `callbacks`. Feel free to omit the `packet-forward-middleware` if you do not want to use it. +The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`. +The in-line comments describe the execution flow of packets between the application stack and IBC core.
+ +```go expandable +// Create Transfer Stack +// SendPacket, since it is originating from the application to core IBC: +// transferKeeper.SendPacket -> callbacks.SendPacket -> packetforward.SendPacket -> channel.SendPacket + +// RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way +// channel.RecvPacket -> packetforward.OnRecvPacket -> callbacks.OnRecvPacket -> transfer.OnRecvPacket + +// transfer stack contains (from top to bottom): +// - IBC Packet Forward Middleware +// - IBC Callbacks Middleware +// - Transfer + +// initialise the gas limit for callbacks, recommended to be 10M for use with cosmwasm contracts +maxCallbackGas := uint64(10_000_000) + +// the keepers for the callbacks middleware +wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper) + +// create IBC module from bottom to top of stack +// Create Transfer Stack +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +// callbacks wraps the transfer stack as its base app, and uses PacketForwardKeeper as the ICS4Wrapper +// i.e. packet-forward-middleware is higher on the stack and sits between callbacks and the ibc channel keeper +// Since this is the lowest level middleware of the transfer stack, it should be the first entrypoint for transfer keeper's +// WriteAcknowledgement. +cbStack := ibccallbacks.NewIBCMiddleware(transferStack, app.PacketForwardKeeper, wasmStackIBCHandler, maxCallbackGas) + +transferStack = packetforward.NewIBCMiddleware( + cbStack, + app.PacketForwardKeeper, + 0, + packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp, +) + +app.TransferKeeper.WithICS4Wrapper(cbStack) + +// Create static IBC router, add app routes, then set and seal it +ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) + +ibcRouter.AddRoute(wasmtypes.ModuleName, wasmStackIBCHandler) + +app.IBCKeeper.SetRouter(ibcRouter) +``` diff --git a/docs/ibc/next/middleware/callbacks/interfaces.mdx b/docs/ibc/next/middleware/callbacks/interfaces.mdx new file mode 100644 index 00000000..dde070b7 --- /dev/null +++ b/docs/ibc/next/middleware/callbacks/interfaces.mdx @@ -0,0 +1,179 @@ +--- +title: Interfaces +--- + +The callbacks middleware requires certain interfaces to be implemented by the underlying IBC applications and the secondary application. If you're simply wiring up the callbacks middleware to an existing IBC application stack and a secondary application such as `icacontroller` and `x/wasm`, you can skip this section. + +## Interfaces for developing the Underlying IBC Application + +### `PacketDataUnmarshaler` + +```go +// PacketDataUnmarshaler defines an optional interface which allows a middleware to +// request the packet data to be unmarshaled by the base application. +type PacketDataUnmarshaler interface { + // UnmarshalPacketData unmarshals the packet data into a concrete type + // ctx, portID, channelID are provided as arguments, so that (if needed) + // the packet data can be unmarshaled based on the channel version. + // The version of the underlying app is also returned.
+ UnmarshalPacketData(ctx sdk.Context, portID, channelID string, bz []byte) (interface{}, string, error) +} +``` + +The callbacks middleware **requires** the underlying IBC application to implement the [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/05-port/types/module.go#L142-L147) interface so that it can unmarshal the packet data bytes into the appropriate packet data type. This allows usage of interface functions implemented by the packet data type. The packet data type is expected to implement the `PacketDataProvider` interface (see section below), which is used to parse the callback data that is currently stored in the packet memo field for `transfer` and `ica` packets as a JSON string. See its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/ibc_module.go#L303-L313) and [`icacontroller`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/27-interchain-accounts/controller/ibc_middleware.go#L258-L268) modules for reference. + +If the underlying application is a middleware itself, then it can implement this interface by simply passing the function call to its underlying application. See its implementation in the [`fee middleware`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/29-fee/ibc_middleware.go#L368-L378) for reference. + +### `PacketDataProvider` + +```go +// PacketDataProvider defines an optional interface for retrieving custom packet data stored on behalf of another application. +// An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information. +// A short term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application. +// This interface standardizes that behaviour.
Upon realization of the ability for middlewares to define their own packet data types, this interface will be deprecated and removed with time. +type PacketDataProvider interface { + // GetCustomPacketData returns the packet data held on behalf of another application. + // The name the information is stored under should be provided as the key. + // If no custom packet data exists for the key, nil should be returned. + GetCustomPacketData(key string) interface{} +} +``` + +The callbacks middleware also **requires** the underlying IBC application's packet data type to implement the [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/exported/packet.go#L43-L52) interface. This interface is used to retrieve the callback data from the packet data (using the memo field in the case of `transfer` and `ica`). For example, see its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/types/packet.go#L85-L105) module. + +Since middlewares do not have packet types, they do not need to implement this interface. + +### `PacketData` + +```go +// PacketData defines an optional interface which an application's packet data structure may implement. +type PacketData interface { + // GetPacketSender returns the sender address of the packet data. + // If the packet sender is unknown or undefined, an empty string should be returned. + GetPacketSender(sourcePortID string) string +} +``` + +[`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/exported/packet.go#L36-L41) is an optional interface that can be implemented by the underlying IBC application's packet data type. It is used to retrieve the packet sender address from the packet data. The callbacks middleware uses this interface to retrieve the packet sender address and pass it to the callback function during a source callback.
If this interface is not implemented, then the callbacks middleware passes an empty string as the sender address. For example, see its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/types/packet.go#L74-L83) and [`ica`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/27-interchain-accounts/types/packet.go#L78-L92) modules. + +This interface was added so that secondary applications can retrieve the packet sender address to perform custom authorization logic if needed. + +Since middlewares do not have packet types, they do not need to implement this interface. + +## Interfaces for developing the Secondary Application + +### `ContractKeeper` + +The callbacks middleware requires the secondary application to implement the [`ContractKeeper`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/callbacks/types/expected_keepers.go#L11-L83) interface. The contract keeper will be invoked at each step of the packet lifecycle. When a packet is sent, if callback information is provided, the contract keeper will be invoked via the `IBCSendPacketCallback`. This allows the contract keeper to prevent packet sends when callback information is provided, for example if the sender is unauthorized to perform callbacks on the given information. If the packet send is successful, the contract keeper on the destination (if present) will be invoked when a packet has been received and the acknowledgement is written; this occurs via `IBCReceivePacketCallback`. At the end of the packet lifecycle, when processing acknowledgements or timeouts, the source contract keeper will be invoked either via `IBCOnAcknowledgementPacket` or `IBCOnTimeoutPacket`. Once a packet has been sent, each step of the packet lifecycle can be processed given that a relayer sets the gas limit to be more than or equal to the required `CommitGasLimit`. State changes performed in the callback will only be committed upon successful execution.
+ +```go expandable +// ContractKeeper defines the entry points exposed to the VM module which invokes a smart contract +type ContractKeeper interface { + // IBCSendPacketCallback is called in the source chain when a PacketSend is executed. The + // packetSenderAddress is determined by the underlying module, and may be empty if the sender is + // unknown or undefined. The contract is expected to handle the callback within the user defined + // gas limit, and handle any errors, or panics gracefully. + // This entry point is called with a cached context. If an error is returned, then the changes in + // this context will not be persisted, and the error will be propagated to the underlying IBC + // application, resulting in a packet send failure. + // + // Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + // validation on the origin of a given packet. It is recommended to perform the same validation + // on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + // defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. + // + // The version provided is the base application version for the given packet send. This allows + // contracts to determine how to unmarshal the packetData. + IBCSendPacketCallback( + cachedCtx sdk.Context, + sourcePort string, + sourceChannel string, + timeoutHeight clienttypes.Height, + timeoutTimestamp uint64, + packetData []byte, + contractAddress, + packetSenderAddress string, + version string, + ) error + // IBCOnAcknowledgementPacketCallback is called in the source chain when a packet acknowledgement + // is received. The packetSenderAddress is determined by the underlying module, and may be empty if + // the sender is unknown or undefined. The contract is expected to handle the callback within the + // user defined gas limit, and handle any errors, or panics gracefully. + // This entry point is called with a cached context. If an error is returned, then the changes in + // this context will not be persisted, but the packet lifecycle will not be blocked. + // + // Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + // validation on the origin of a given packet. It is recommended to perform the same validation + // on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + // defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. + // + // The version provided is the base application version for the given packet send. This allows + // contracts to determine how to unmarshal the packetData. + IBCOnAcknowledgementPacketCallback( + cachedCtx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, + contractAddress, + packetSenderAddress string, + version string, + ) error + // IBCOnTimeoutPacketCallback is called in the source chain when a packet is not received before + // the timeout height. The packetSenderAddress is determined by the underlying module, and may be + // empty if the sender is unknown or undefined. The contract is expected to handle the callback + // within the user defined gas limit, and handle any error, out of gas, or panics gracefully. + // This entry point is called with a cached context. If an error is returned, then the changes in + // this context will not be persisted, but the packet lifecycle will not be blocked. + // + // Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + // validation on the origin of a given packet. It is recommended to perform the same validation + // on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + // defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. + // + // The version provided is the base application version for the given packet send. This allows + // contracts to determine how to unmarshal the packetData. + IBCOnTimeoutPacketCallback( + cachedCtx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, + contractAddress, + packetSenderAddress string, + version string, + ) error + // IBCReceivePacketCallback is called in the destination chain when a packet acknowledgement is written. + // The contract is expected to handle the callback within the user defined gas limit, and handle any errors, + // out of gas, or panics gracefully. + // This entry point is called with a cached context. If an error is returned, then the changes in + // this context will not be persisted, but the packet lifecycle will not be blocked. + // + // The version provided is the base application version for the given packet send. This allows + // contracts to determine how to unmarshal the packetData. + IBCReceivePacketCallback( + cachedCtx sdk.Context, + packet ibcexported.PacketI, + ack ibcexported.Acknowledgement, + contractAddress string, + version string, + ) error +} +``` + +These are the callback entry points exposed to the secondary application. The secondary application is expected to execute its custom logic within these entry points. The callbacks middleware will handle the execution of these callbacks and revert the state if needed. + + +Note that the source callback entry points are provided with the `packetSenderAddress` and MAY choose to use this to perform validation on the origin of a given packet. It is recommended to perform the same validation on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks.
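The recommended sender validation can be sketched as below. The types here are simplified stand-ins, not the real ibc-go signatures (which also receive `sdk.Context`, packet data, timeout fields, and the app version); only the authorization check itself is shown:

```go
package main

import (
	"errors"
	"fmt"
)

// validateCallbackSender sketches the origin check a secondary application
// may perform in its source chain callbacks. Rejecting packets whose sender
// is unknown or is not the callback contract itself, and performing the same
// check in the SendPacket, AcknowledgementPacket, and TimeoutPacket
// callbacks, guards against incorrectly wired SendPacket ordering.
func validateCallbackSender(contractAddress, packetSenderAddress string) error {
	if packetSenderAddress == "" || packetSenderAddress != contractAddress {
		return errors.New("packet sender is not authorized to register this contract as callback")
	}
	return nil
}

func main() {
	// the contract registered itself as the callback: authorized
	fmt.Println(validateCallbackSender("cosmos1contract", "cosmos1contract"))
	// a different sender tries to target the contract: rejected
	fmt.Println(validateCallbackSender("cosmos1contract", "cosmos1mallory"))
}
```

A real `ContractKeeper` implementation would run this check before executing the contract logic in each source chain entry point, returning the error so that the cached context is discarded.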
+ diff --git a/docs/ibc/next/middleware/callbacks/overview.mdx b/docs/ibc/next/middleware/callbacks/overview.mdx new file mode 100644 index 00000000..440c4107 --- /dev/null +++ b/docs/ibc/next/middleware/callbacks/overview.mdx @@ -0,0 +1,49 @@ +--- +title: Overview +description: >- + Learn about what the Callbacks Middleware is, and how to build custom modules + that utilize the Callbacks Middleware functionality +--- + +Learn about what the Callbacks Middleware is, and how to build custom modules that utilize the Callbacks Middleware functionality. + +## What is the Callbacks Middleware? + +IBC was designed with callbacks between core IBC and IBC applications. IBC apps would send a packet to core IBC, and receive a callback on every step of that packet's lifecycle. This allows IBC applications to be built on top of core IBC, and to be able to execute custom logic on packet lifecycle events (e.g. unescrow tokens for ICS-20). + +This setup worked well for off-chain users interacting with IBC applications. However, we are now seeing the desire for secondary applications (e.g. smart contracts, modules) to call into IBC apps as part of their state machine logic and then perform actions on packet lifecycle events. + +The Callbacks Middleware allows for this functionality by allowing the packets of the underlying IBC applications to register callbacks to secondary applications for lifecycle events. These callbacks are then executed by the Callbacks Middleware when the corresponding packet lifecycle event occurs. + +After much discussion, the design was expanded into an ADR, and the Callbacks Middleware is an implementation of that ADR. + +## Concepts + +The Callbacks Middleware was built with smart contracts in mind, but can be used by any secondary application that wants to allow IBC packets to call into it. Think of the Callbacks Middleware as a bridge between core IBC and a secondary application.
+ +We have the following definitions: + +* `Underlying IBC application`: The IBC application that is wrapped by the Callbacks Middleware. This is the IBC application that is actually sending and receiving packet lifecycle events from core IBC. For example, the transfer module, or the ICA controller submodule. +* `IBC Actor`: An on-chain or off-chain entity that can initiate a packet on the underlying IBC application. For example, a smart contract, an off-chain user, or a module that sends a transfer packet are all IBC Actors. +* `Secondary application`: The application that is being called into by the Callbacks Middleware for packet lifecycle events. This is the application that is receiving the callback directly from the Callbacks Middleware module. For example, the `x/wasm` module. +* `Callback Actor`: The on-chain smart contract or module that is registered to receive callbacks from the secondary application. For example, a Wasm smart contract (gatekept by the `x/wasm` module). Note that the Callback Actor is not necessarily the same as the IBC Actor. For example, an off-chain user can initiate a packet on the underlying IBC application, but the Callback Actor could be a smart contract. The secondary application may want to check that the IBC Actor is allowed to call into the Callback Actor, for example, by checking that the IBC Actor is the same as the Callback Actor. +* `Callback Address`: Address of the Callback Actor. This is the address that the secondary application will call into when a packet lifecycle event occurs. For example, the address of the Wasm smart contract. +* `Maximum gas limit`: The maximum amount of gas that the Callbacks Middleware will allow the secondary application to use when it executes its custom logic. +* `User defined gas limit`: The amount of gas that the IBC Actor wants to allow the secondary application to use when it executes its custom logic.
This is the gas limit that the IBC Actor specifies when it sends a packet to the underlying IBC application. This cannot be greater than the maximum gas limit. + +Think of the secondary application as a bridge between the Callbacks Middleware and the Callback Actor. The secondary application is responsible for executing the custom logic of the Callback Actor when a packet lifecycle event occurs. The secondary application is also responsible for checking that the IBC Actor is allowed to call into the Callback Actor. + +Note that it is possible that the IBC Actor, Secondary Application, and Callback Actor are all the same entity, in which case the Callback Address should be the secondary application's module address. + +The following diagram shows how a typical `RecvPacket`, `AcknowledgementPacket`, and `TimeoutPacket` execution flow looks: +![callbacks-middleware](/docs/ibc/images/04-middleware/02-callbacks/images/callbackflow.svg) + +And the following diagram shows how a typical `SendPacket` and `WriteAcknowledgement` execution flow looks: +![callbacks-middleware](/docs/ibc/images/04-middleware/02-callbacks/images/ics4-callbackflow.svg) + +## Known Limitations + +* Callbacks are always executed after the underlying IBC application has executed its logic. +* The maximum gas limit is hardcoded manually during wiring. It requires a coordinated upgrade to change the maximum gas limit. +* The receive packet callback does not pass the relayer address to the secondary application. This is so that we can use the same callback for both synchronous and asynchronous acknowledgements. +* The receive packet callback does not pass the IBC Actor's address; this is because the IBC Actor lives on the counterparty chain and cannot be trusted.
diff --git a/docs/ibc/next/middleware/packet-forward-middleware/example-usage.mdx b/docs/ibc/next/middleware/packet-forward-middleware/example-usage.mdx new file mode 100644 index 00000000..e689e1e7 --- /dev/null +++ b/docs/ibc/next/middleware/packet-forward-middleware/example-usage.mdx @@ -0,0 +1,138 @@ +--- +title: Example Flows +description: >- + This document outlines some example flows leveraging packet forward middleware + and formats of the memo field. +--- + +This document outlines some example flows leveraging packet forward middleware and formats of the memo field. + +## Example Scenarios + +### Successful Transfer forwarding through chain B + +```mermaid +sequenceDiagram + autonumber + Chain A ->> Chain B: Send PFM transfer + Chain B ->> Chain C: Forward + Chain C ->> Chain B: ACK + Chain B ->> Chain A: ACK +``` + +### Memo for simple forward + +* The packet-forward-middleware is integrated on Chain B. +* The packet data `receiver` for the `MsgTransfer` on Chain A is set to `"pfm"` or some other invalid bech32 string. +* The packet `memo` is included in `MsgTransfer` by the user on Chain A. + +```json +{ + "forward": { + "receiver": "chain-c-bech32-address", + "port": "transfer", + "channel": "channel-123" + } +} +``` + +### Error on Forwarding Hop, Refund to A + +```mermaid +sequenceDiagram + autonumber + Chain A ->> Chain B: PFM transfer + Chain B ->> Chain C: Forward (errors) + Chain C ->> Chain B: ACK error + Chain B ->> Chain A: ACK error +``` + +### Forwarding with Retry and Timeout Logic + +```mermaid +sequenceDiagram + autonumber + Chain A ->> Chain B: PFM transfer + Chain B ->> Chain C: Forward + Chain C --x Chain B: Timeout + Chain B ->> Chain C: Retry forward + Chain C --x Chain B: Timeout + Chain B ->> Chain A: ACK error +``` + +### A -> B -> C full success + +1. `A` sends the packet over the underlying ICS-004 wrapper with the memo as is. +2. `B` receives the packet and parses it into an ICS-020 packet. +3.
`B` Validates the `forward` packet at this step; returns an `ACK` error if validation fails. +4. `B` If other middleware has not yet called ICS-020, calls it and returns an ACK error on failure. Tokens are minted or unescrowed here. +5. `B` Handles the denom. If the denom prefix is from `B`, removes it; if the denom prefix is from another chain, adds the `B` prefix. +6. `B` Takes the fee, creates a new ICS-004 packet with the timeout from `forward` for the next step, and the remaining inner `memo`. +7. `B` Sends the transfer to `C` with parameters obtained from `memo`. Tokens are burnt or escrowed here. +8. `B` Stores a tracking `in flight packet` under the next `(channel, port, ICS-20 transfer sequence)`; does not `ACK` the packet yet. +9. `C` Handles the ICS-020 packet as usual. +10. `B` On the ICS-020 ACK from `C`, finds the `in flight packet`, deletes it and writes the `ACK` for the original packet from `A`. +11. `A` Handles the ICS-020 `ACK` as usual. + +[Example](https://mintscan.io/osmosis-testnet/txs/FAB912347B8729FFCA92AC35E6B1E83BC8169DE7CC2C254A5A3F70C8EC35D771?height=3788973) of a USDC transfer from Osmosis -> Noble -> Sei + +### A -> B -> C with C error ACK + +10. `B` On the ICS-020 ACK from `C`, finds the `in flight packet` and deletes it. +11. `B` Burns or escrows tokens. +12. `B` Writes the error `ACK` for the original packet from `A`. +13. `A` Handles the ICS-020 error `ACK` as usual. +14. `C` Writes a success `ACK` for the packet from `B`. + +The behavior is the same in case of a timeout on `C`. + +### A packet times out on B before C times out the packet from B + +10. `A` Cannot time out because the `in flight packet` has proof on `B` of packet inclusion. +11. `B` Waits for an ACK or timeout from `C`. +12. `B` A timeout from `C` becomes a fail `ACK` on `B` for `A`. +13. `A` Receives a success or fail `ACK`, but not a timeout. + +In this case `A`'s assets `hang` until the final hop times out or ACKs. + +### Memo for Retry and Timeout Logic, with Nested Memo (2 forwards) + +* The packet-forward-middleware is integrated on Chain B and Chain C. +* The packet data `receiver` for the `MsgTransfer` on Chain A is set to `"pfm"` or some other invalid bech32 string.
+* The forward metadata `receiver` for the hop from Chain B to Chain C is set to `"pfm"` or some other invalid bech32 string. +* The packet `memo` is included in `MsgTransfer` by the user on Chain A. +* A packet timeout of 10 minutes and 2 retries is set for both forwards. + +In the case of a timeout after 10 minutes for either forward, the packet would be retried up to 2 times; afterwards, an error ack would be written to issue a refund on the prior chain. + +`next` is the `memo` to pass for the next transfer hop. Since the `memo` is intended to be a JSON string, it should be either JSON, which will be marshaled retaining key order, or an escaped JSON string, which will be passed through directly. + +`next` as JSON + +```json expandable +{ + "forward": { + "receiver": "pfm", + "port": "transfer", + "channel": "channel-123", + "timeout": "10m", + "retries": 2, + "next": { + "forward": { + "receiver": "chain-d-bech32-address", + "port": "transfer", + "channel": "channel-234", + "timeout": "10m", + "retries": 2 + } + } + } +} +``` + +## Intermediate Address Security + +Intermediate chains don’t need a valid receiver address. Instead, they derive a secure address from the packet’s sender and channel, preventing users from forwarding tokens to arbitrary accounts. + +To avoid accidental transfers to chains without PFM, use an invalid bech32 address (e.g., "pfm") for intermediate receivers. diff --git a/docs/ibc/next/middleware/packet-forward-middleware/integration.mdx b/docs/ibc/next/middleware/packet-forward-middleware/integration.mdx new file mode 100644 index 00000000..e8b01d21 --- /dev/null +++ b/docs/ibc/next/middleware/packet-forward-middleware/integration.mdx @@ -0,0 +1,181 @@ +--- +title: Integration +description: >- + This document provides instructions on integrating and configuring the Packet + Forward Middleware (PFM) within your existing chain implementation.
The + integration steps include the following: +--- + +This document provides instructions on integrating and configuring the Packet Forward Middleware (PFM) within your +existing chain implementation. +The integration steps include the following: + +1. [Import the PFM, initialize the PFM Module & Keeper, initialize the store keys and module params, and initialize the Begin/End Block logic and InitGenesis order](#example-integration-of-the-packet-forward-middleware) +2. [Configure the IBC application stack including the transfer module](#configuring-the-transfer-application-stack-with-packet-forward-middleware) +3. [Configuration of additional options such as timeout period, number of retries on timeout, refund timeout period, and fee percentage](#configurable-options-in-the-packet-forward-middleware) + +Integration of the PFM should take approximately 20 minutes. + +## Example integration of the Packet Forward Middleware + +```go expandable +/ app.go + +/ Import the packet forward middleware +import ( + + "github.com/cosmos/ibc-apps/middleware/packet-forward-middleware/v10/packetforward" + packetforwardkeeper "github.com/cosmos/ibc-apps/middleware/packet-forward-middleware/v10/packetforward/keeper" + packetforwardtypes "github.com/cosmos/ibc-apps/middleware/packet-forward-middleware/v10/packetforward/types" +) + +... + +/ Register the AppModule for the packet forward middleware module +ModuleBasics = module.NewBasicManager( + ... + packetforward.AppModuleBasic{ +}, + ... +) + +... + +/ Add packet forward middleware Keeper +type App struct { + ... + PacketForwardKeeper *packetforwardkeeper.Keeper + ... +} + +... + +/ Create store keys + keys := sdk.NewKVStoreKeys( + ... + packetforwardtypes.StoreKey, + ... +) + +... 
+ +/ Initialize the packet forward middleware Keeper +/ It's important to note that the PFM Keeper must be initialized before the Transfer Keeper +app.PacketForwardKeeper = packetforwardkeeper.NewKeeper( + appCodec, + keys[packetforwardtypes.StoreKey], + nil, / will be zero-value here, reference is set later on with SetTransferKeeper. + app.IBCKeeper.ChannelKeeper, + appKeepers.DistrKeeper, + app.BankKeeper, + app.IBCKeeper.ChannelKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) + +/ Initialize the transfer module Keeper +app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, + keys[ibctransfertypes.StoreKey], + app.GetSubspace(ibctransfertypes.ModuleName), + app.PacketForwardKeeper, + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, + app.AccountKeeper, + app.BankKeeper, + scopedTransferKeeper, +) + +app.PacketForwardKeeper.SetTransferKeeper(app.TransferKeeper) + +/ See the section below for configuring an application stack with the packet forward middleware + +... + +/ Register packet forward middleware AppModule +app.moduleManager = module.NewManager( + ... + packetforward.NewAppModule(app.PacketForwardKeeper, app.GetSubspace(packetforwardtypes.ModuleName)), +) + +... + +/ Add packet forward middleware to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + packetforwardtypes.ModuleName, + ... +) + +/ Add packet forward middleware to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + packetforwardtypes.ModuleName, + ... +) + +/ Add packet forward middleware to init genesis logic +app.moduleManager.SetOrderInitGenesis( + ... + packetforwardtypes.ModuleName, + ... +) + +/ Add packet forward middleware to init params keeper +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + ... + paramsKeeper.Subspace(packetforwardtypes.ModuleName).WithKeyTable(packetforwardtypes.ParamKeyTable()) + ... 
+} +``` + +## Configuring the transfer application stack with Packet Forward Middleware + +Here is an example of how to create an application stack using `transfer` and `packet-forward-middleware`. +The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`. +The in-line comments describe the execution flow of packets between the application stack and IBC core. + +For more information on configuring an IBC application stack see the ibc-go docs [here](https://github.com/cosmos/ibc-go/blob/e69a833de764fa0f5bdf0338d9452fd6e579a675/docs/docs/04-middleware/01-ics29-fee/02-integration.md#configuring-an-application-stack-with-fee-middleware). + +```go expandable +/ Create Transfer Stack +/ SendPacket, since it is originating from the application to core IBC: +/ transferKeeper.SendPacket -> packetforward.SendPacket -> channel.SendPacket + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way +/ channel.RecvPacket -> packetforward.OnRecvPacket -> transfer.OnRecvPacket + +/ transfer stack contains (from top to bottom): +/ - Packet Forward Middleware +/ - Transfer +var transferStack ibcporttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = packetforward.NewIBCMiddleware( + transferStack, + app.PacketForwardKeeper, + 0, / retries on timeout + packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp, / forward timeout +) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +## Configurable options in the Packet Forward Middleware + +The Packet Forward Middleware has several configurable options available when initializing the IBC application stack. 
+You can see these passed in as arguments to `packetforward.NewIBCMiddleware`; they include the number of retries that +will be performed on a forward timeout, the timeout period that will be used for a forward, and the timeout period that +will be used for performing refunds in the case that a forward is taking too long. + +Additionally, there is a fee percentage parameter that can be set in `InitGenesis`; this is an optional parameter that +can be used to take a fee from each forwarded packet, which will then be distributed to the community pool. In the +`OnRecvPacket` callback `ForwardTransferPacket` is invoked, which will attempt to subtract a fee from the forwarded +packet amount if the fee percentage is non-zero. + +* Retries On Timeout - how many times a forward will be re-attempted in the case of a timeout. +* Timeout Period - how long a forward can be in progress before giving up. +* Refund Timeout - how long a forward can be in progress before a refund is issued back to the original source chain. +* Fee Percentage - the percentage of the forwarded packet amount that will be subtracted and distributed to the community pool. diff --git a/docs/ibc/next/middleware/packet-forward-middleware/overview.mdx b/docs/ibc/next/middleware/packet-forward-middleware/overview.mdx new file mode 100644 index 00000000..d5603528 --- /dev/null +++ b/docs/ibc/next/middleware/packet-forward-middleware/overview.mdx @@ -0,0 +1,34 @@ +--- +title: Overview +--- + + +Packet forward middleware is only compatible with IBC classic, not IBC v2. + + +Learn about packet forward middleware, a middleware that can be used in combination with token transfers (ICS-20). + +## What is Packet Forward Middleware? + +Packet Forward Middleware enables multi-hop token transfers by forwarding IBC packets through intermediate chains, which may not be directly connected.
It supports: + +* **Path-Unwinding Functionality:** + Because the fungibility of tokens transferred between chains is determined by [the path the tokens have travelled](/docs/ibc/next/apps/transfer/overview#denomination-trace), i.e. the same token sent from chain A to chain B is not fungible with the same token sent from chain A to chain C and then to chain B, packet forward middleware also enables routing tokens back through their source before sending them on to the final destination. +* **Asynchronous Acknowledgements:** + Acknowledgements are only written to the origin chain after all forwarding steps succeed or fail, so users only need to monitor the source chain for the result. +* **Retry and Timeout Handling:** + The middleware can be configured to retry forwarding in the case that there was a timeout. +* **Forwarding across multiple chains with nested memos:** + Instructions on which route to take to forward a packet across more than one chain can be set within a nested JSON in the memo field. +* **Configurable Fee Deduction on Receive:** + Integrators of PFM can choose to deduct a percentage of tokens forwarded through their chain and distribute these tokens to the community pool. + +## How it works + +1. User initiates a `MsgTransfer` with a memo JSON payload containing forwarding instructions. + +2. Intermediate chains (with PFM enabled) parse the memo and forward the packet to the destination specified. + +3. Acknowledgements are passed back step-by-step to the origin chain after the final hop succeeds or fails, along the same path used for forwarding. + +In practice, it can be challenging to correctly format the memo for the desired route. It is recommended to use the Skip API to correctly format the memo needed in `MsgTransfer` to make this easy.
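Since the forwarding instructions in step 1 are plain JSON, the memo can also be generated programmatically. A minimal sketch in Go, where `ForwardMetadata` and `Envelope` are illustrative types mirroring the memo fields shown in this section's examples, not the middleware's own Go types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ForwardMetadata mirrors the memo fields used in this documentation's
// examples. It is an illustrative sketch, not the middleware's own type.
type ForwardMetadata struct {
	Receiver string    `json:"receiver"`
	Port     string    `json:"port"`
	Channel  string    `json:"channel"`
	Timeout  string    `json:"timeout,omitempty"`
	Retries  uint8     `json:"retries,omitempty"`
	Next     *Envelope `json:"next,omitempty"` // nested memo for a further hop
}

// Envelope wraps the metadata under the top-level "forward" key.
type Envelope struct {
	Forward ForwardMetadata `json:"forward"`
}

// buildMemo produces the memo string to attach to the MsgTransfer on chain A.
func buildMemo(receiver, channel string) string {
	bz, _ := json.Marshal(Envelope{Forward: ForwardMetadata{
		Receiver: receiver,
		Port:     "transfer",
		Channel:  channel,
	}})
	return string(bz)
}

func main() {
	fmt.Println(buildMemo("chain-c-bech32-address", "channel-123"))
}
```

Deeper routes are expressed by populating `Next` with another `Envelope`, matching the nested-memo examples above.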
diff --git a/docs/ibc/next/middleware/rate-limit-middleware/integration.mdx b/docs/ibc/next/middleware/rate-limit-middleware/integration.mdx new file mode 100644 index 00000000..c97f41be --- /dev/null +++ b/docs/ibc/next/middleware/rate-limit-middleware/integration.mdx @@ -0,0 +1,8 @@ +--- +title: Integration +description: >- + This section should be completed once the middleware wiring approach is + finalised. +--- + +This section should be completed once the middleware wiring approach is finalised. diff --git a/docs/ibc/next/middleware/rate-limit-middleware/overview.mdx b/docs/ibc/next/middleware/rate-limit-middleware/overview.mdx new file mode 100644 index 00000000..f88ae6a3 --- /dev/null +++ b/docs/ibc/next/middleware/rate-limit-middleware/overview.mdx @@ -0,0 +1,29 @@ +--- +title: Overview +description: >- + Learn about rate limit middleware, a middleware that can be used in + combination with token transfers (ICS-20) to control the amount of in and + outflows of assets in a certain time period. +--- + +Learn about rate limit middleware, a middleware that can be used in combination with token transfers (ICS-20) to control the amount of in and outflows of assets in a certain time period. + +## What is Rate Limit Middleware? + +The rate limit middleware enforces rate limits on IBC token transfers coming into and out of a chain. It supports: + +* **Risk Mitigation:** In case of a bug exploit, attack or economic failure of a connected chain, it limits the impact to the in/outflow specified for a given time period. +* **Token Filtering:** Through the use of a blacklist, the middleware can completely block tokens entering or leaving a domain, relevant for compliance or for giving asset issuers greater control over the domains tokens can be sent to. +* **Uninterrupted Packet Flow:** When desired, rate limits can be bypassed by using the whitelist, to avoid any restriction on asset in or outflows.
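As a rough illustration of the risk-mitigation bullet above, the quota check such a middleware performs can be sketched as a percentage comparison of net outflow against the channel value. The function and parameter names here are hypothetical; the real module tracks flows as integer state.

```go
package main

import "fmt"

// exceedsSendQuota sketches a rate-limit outflow check: a send is blocked once
// net outflow (outflow - inflow + amount) exceeds maxPercentSend percent of
// the channel value. Hypothetical names; not the actual module code.
func exceedsSendQuota(outflow, inflow, amount, channelValue, maxPercentSend int64) bool {
	net := outflow - inflow + amount
	// compare net*100 against channelValue*maxPercentSend to avoid division
	return net*100 > channelValue*maxPercentSend
}

func main() {
	// 10% send quota on a channel value of 1,000,000.
	fmt.Println(exceedsSendQuota(50_000, 0, 40_000, 1_000_000, 10)) // 9% net outflow: allowed
	fmt.Println(exceedsSendQuota(50_000, 0, 60_000, 1_000_000, 10)) // 11% net outflow: blocked
}
```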
+ +## How it works + +The rate limiting middleware determines whether tokens can flow into or out of a chain. The middleware does this by: + +1. Check transfer limits for an asset (Quota): When tokens are received or sent, the middleware determines whether the amount of tokens flowing in or out has exceeded the limit. + +2. Track in or outflow: When tokens enter or leave the chain, the amount transferred is tracked in state. + +3. Block or allow token flow: Depending on the limit, the middleware will either allow the tokens to pass through or block them. + +4. Handle failures: If the packet times out or fails to be delivered, the middleware ensures limits are correctly recorded. diff --git a/docs/ibc/next/middleware/rate-limit-middleware/setting-limits.mdx b/docs/ibc/next/middleware/rate-limit-middleware/setting-limits.mdx new file mode 100644 index 00000000..4cc0c79d --- /dev/null +++ b/docs/ibc/next/middleware/rate-limit-middleware/setting-limits.mdx @@ -0,0 +1,21 @@ +--- +title: Setting Rate Limits +description: >- + Rate limits are set through a governance-gated authority on a per-denom and + per-channel/client basis. To add a rate limit, the MsgAddRateLimit message + must be executed, which includes: +--- + +Rate limits are set through a governance-gated authority on a per-denom and per-channel/client basis. To add a rate limit, the [`MsgAddRateLimit`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/rate-limiting/types/msgs.go#L26-L34) message must be executed, which includes: + +* Denom: the asset that the rate limit should be applied to +* ChannelOrClientId: the channelID for use with IBC classic connections, or the clientID for use with IBC v2 connections +* MaxPercentSend: the outflow threshold as a percentage of the `channelValue`.
More explicitly, a packet being sent would exceed the threshold quota if: (Outflow - Inflow + Packet Amount) / channelValue is greater than MaxPercentSend +* MaxPercentRecv: the inflow threshold as a percentage of the `channelValue` +* DurationHours: the length of time after which the rate limits reset + +## Updating, Removing or Resetting Rate Limits + +* If rate limits were set too low or too high for a given channel/client, they can be updated with [`MsgUpdateRateLimit`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/rate-limiting/types/msgs.go#L81-L89). +* If rate limits are no longer needed, they can be removed with [`MsgRemoveRateLimit`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/rate-limiting/types/msgs.go#L136-L141). +* If the flow counter needs to be reset for a given rate limit, it is possible to do so with [`MsgResetRateLimit`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/rate-limiting/types/msgs.go#L169-L174). diff --git a/docs/ibc/next/migrations/migration.template.mdx b/docs/ibc/next/migrations/migration.template.mdx new file mode 100644 index 00000000..cc687cdf --- /dev/null +++ b/docs/ibc/next/migrations/migration.template.mdx @@ -0,0 +1,30 @@ +--- +description: This guide provides instructions for migrating to a new version of ibc-go. +--- + +This guide provides instructions for migrating to a new version of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Chains](#chains) +* [IBC Apps](#ibc-apps) +* [Relayers](#relayers) +* [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +* No relevant changes were made in this release. + +## IBC Apps + +* No relevant changes were made in this release. + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +* No relevant changes were made in this release.
diff --git a/docs/ibc/next/migrations/sdk-to-v1.mdx b/docs/ibc/next/migrations/sdk-to-v1.mdx new file mode 100644 index 00000000..28f07f1d --- /dev/null +++ b/docs/ibc/next/migrations/sdk-to-v1.mdx @@ -0,0 +1,194 @@ +--- +title: SDK v0.43 to IBC-Go v1 +description: >- + This file contains information on how to migrate from the IBC module contained + in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository + based on the 0.44 SDK version. +--- + +This file contains information on how to migrate from the IBC module contained in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository based on the 0.44 SDK version. + +## Import Changes + +The most obvious change is the import path renames. We need to change: + +* applications -> apps +* cosmos-sdk/x/ibc -> ibc-go + +On my GNU/Linux-based machine, I used the following commands, executed in order: + +```shell +grep -RiIl 'cosmos-sdk\/x\/ibc\/applications' | xargs sed -i 's/cosmos-sdk\/x\/ibc\/applications/ibc-go\/modules\/apps/g' +``` + +```shell +grep -RiIl 'cosmos-sdk\/x\/ibc' | xargs sed -i 's/cosmos-sdk\/x\/ibc/ibc-go\/modules/g' +``` + +ref: [explanation of the above commands](https://www.internalpointers.com/post/linux-find-and-replace-text-multiple-files) + +Executing these commands out of order will cause issues. + +Feel free to use your own method for modifying import names. + +NOTE: Updating to the `v0.44.0` SDK release and then running `go mod tidy` will cause a downgrade to `v0.42.0` in order to support the old IBC import paths. +Update the import paths before running `go mod tidy`. + +## Chain Upgrades + +Chains may choose to upgrade via an upgrade proposal or genesis upgrades. Both in-place store migrations and genesis migrations are supported. + +**WARNING**: Please read at least the quick guide for [IBC client upgrades](/docs/ibc/next/ibc/upgrades/quick-guide) before upgrading your chain.
It is highly recommended you do not change the chain-ID during an upgrade, otherwise you must follow the IBC client upgrade instructions. + +Both in-place store migrations and genesis migrations will: + +* migrate the solo machine client state from v1 to v2 protobuf definitions +* prune all solo machine consensus states +* prune all expired tendermint consensus states + +Chains must set a new connection parameter during either in place store migrations or genesis migration. The new parameter, max expected block time, is used to enforce packet processing delays on the receiving end of an IBC packet flow. Checkout the [docs](https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2) for more information. + +### In-Place Store Migrations + +The new chain binary will need to run migrations in the upgrade handler. The fromVM (previous module version) for the IBC module should be 1. This will allow migrations to be run for IBC updating the version from 1 to 2. + +Ex: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("my-upgrade-proposal", + func(ctx sdk.Context, _ upgradetypes.Plan, _ module.VersionMap) (module.VersionMap, error) { + / set max expected block time parameter. Replace the default with your expected value + / https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 + app.IBCKeeper.ConnectionKeeper.SetParams(ctx, ibcconnectiontypes.DefaultParams()) + fromVM := map[string]uint64{ + ... / other modules + "ibc": 1, + ... +} + +return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +### Genesis Migrations + +To perform genesis migrations, the following code must be added to your existing migration code. + +```go expandable +/ add imports as necessary +import ( + + ibcv100 "github.com/cosmos/ibc-go/modules/core/legacy/v100" + ibchost "github.com/cosmos/ibc-go/modules/core/24-host" +) + +... 
+ +/ add in migrate cmd function +/ expectedTimePerBlock is a new connection parameter +/ https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 +newGenState, err = ibcv100.MigrateGenesis(newGenState, clientCtx, *genDoc, expectedTimePerBlock) + if err != nil { + return err +} +``` + +**NOTE:** The genesis chain-id, time and height MUST be updated before migrating IBC, otherwise the tendermint consensus state will not be pruned. + +## IBC Keeper Changes + +The IBC Keeper now takes in the Upgrade Keeper. Please add the chains' Upgrade Keeper after the Staking Keeper: + +```diff +/ Create IBC Keeper +app.IBCKeeper = ibckeeper.NewKeeper( +- appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, scopedIBCKeeper, ++ appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, +) +``` + +## Proposals + +### UpdateClientProposal + +The `UpdateClient` has been modified to take in two client-identifiers and one initial height. + +### UpgradeProposal + +A new IBC proposal type has been added, `UpgradeProposal`. This handles an IBC (breaking) Upgrade. +The previous `UpgradedClientState` field in an Upgrade `Plan` has been deprecated in favor of this new proposal type. + +### Proposal Handler Registration + +The `ClientUpdateProposalHandler` has been renamed to `ClientProposalHandler`. +It handles both `UpdateClientProposal`s and `UpgradeProposal`s. + +Add this import: + +```diff ++ ibcclienttypes "github.com/cosmos/ibc-go/modules/core/02-client/types" +``` + +Please ensure the governance module adds the correct route: + +```diff +- AddRoute(ibchost.RouterKey, ibcclient.NewClientUpdateProposalHandler(app.IBCKeeper.ClientKeeper)) ++ AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper)) +``` + +NOTE: Simapp registration was incorrect in the 0.41.x releases. 
The `UpdateClient` proposal handler should be registered with the router key belonging to `ibc-go/core/02-client/types` +as shown in the diffs above. + +### Proposal CLI Registration + +Please ensure both proposal type CLI commands are registered on the governance module by adding the following arguments to `gov.NewAppModuleBasic()`: + +Add the following import: + +```diff ++ ibcclientclient "github.com/cosmos/ibc-go/modules/core/02-client/client" +``` + +Register the CLI commands: + +```diff +gov.NewAppModuleBasic( + paramsclient.ProposalHandler, distrclient.ProposalHandler, upgradeclient.ProposalHandler, upgradeclient.CancelProposalHandler, ++ ibcclientclient.UpdateClientProposalHandler, ibcclientclient.UpgradeProposalHandler, +), +``` + +REST routes are not supported for these proposals. + +## Proto file changes + +The gRPC querier service endpoints have changed slightly. The previous files used the `v1beta1` gRPC route; this has been updated to `v1`. + +The solo machine has replaced the `FrozenSequence` uint64 field with an `IsFrozen` boolean field. The package has been bumped from `v1` to `v2`. + +## IBC callback changes + +### OnRecvPacket + +Application developers need to update their `OnRecvPacket` callback logic. + +The `OnRecvPacket` callback has been modified to only return the acknowledgement. The acknowledgement returned must implement the `Acknowledgement` interface. The acknowledgement should indicate if it represents a successful processing of a packet by returning true on `Success()` and false in all other cases. A return value of false on `Success()` will result in all state changes which occurred in the callback being discarded. More information can be found in the [documentation](/docs/ibc/next/ibc/apps/apps#receiving-packets). + +The `OnRecvPacket`, `OnAcknowledgementPacket`, and `OnTimeoutPacket` callbacks are now passed the `sdk.AccAddress` of the relayer who relayed the IBC packet. Applications may use or ignore this information.
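A minimal sketch of an acknowledgement satisfying the contract described above. The interface shape follows the prose; the concrete `resultAck` type is illustrative, not ibc-go's own implementation.

```go
package main

import "fmt"

// Acknowledgement captures the contract described above: Success reports
// whether packet processing succeeded, and Acknowledgement returns the bytes
// to write on-chain. Sketch only; the real interface lives in ibc-go.
type Acknowledgement interface {
	Success() bool
	Acknowledgement() []byte
}

// resultAck is an illustrative implementation: empty err means success.
type resultAck struct {
	result []byte
	err    string
}

func (a resultAck) Success() bool { return a.err == "" }

func (a resultAck) Acknowledgement() []byte {
	if a.err != "" {
		return []byte(`{"error":"` + a.err + `"}`)
	}
	return []byte(`{"result":"` + string(a.result) + `"}`)
}

func main() {
	var ack Acknowledgement = resultAck{result: []byte("AQ==")}
	// A false Success() would cause the callback's state changes to be discarded.
	fmt.Println(ack.Success(), string(ack.Acknowledgement()))
}
```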
+ +## IBC Event changes + +The `packet_data` attribute has been deprecated in favor of `packet_data_hex`, in order to provide standardized encoding/decoding of packet data in events. While the `packet_data` event still exists, all relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_data_hex` as soon as possible. + +The `packet_ack` attribute has also been deprecated in favor of `packet_ack_hex` for the same reason stated above. All relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_ack_hex` as soon as possible. + +The `consensus_height` attribute has been removed in the Misbehaviour event emitted. IBC clients no longer have a frozen height and misbehaviour does not necessarily have an associated height. + +## Relevant SDK changes + +* (codec) [#9226](https://github.com/cosmos/cosmos-sdk/pull/9226) Rename codec interfaces and methods, to follow a general Go interfaces: + * `codec.Marshaler` → `codec.Codec` (this defines objects which serialize other objects) + * `codec.BinaryMarshaler` → `codec.BinaryCodec` + * `codec.JSONMarshaler` → `codec.JSONCodec` + * Removed `BinaryBare` suffix from `BinaryCodec` methods (`MarshalBinaryBare`, `UnmarshalBinaryBare`, ...) + * Removed `Binary` infix from `BinaryCodec` methods (`MarshalBinaryLengthPrefixed`, `UnmarshalBinaryLengthPrefixed`, ...) diff --git a/docs/ibc/next/migrations/support-denoms-with-slashes.mdx b/docs/ibc/next/migrations/support-denoms-with-slashes.mdx new file mode 100644 index 00000000..0a92b46f --- /dev/null +++ b/docs/ibc/next/migrations/support-denoms-with-slashes.mdx @@ -0,0 +1,90 @@ +--- +title: Support transfer of coins whose base denom contains slashes +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. 
+--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +This document is necessary when chains are upgrading from a version that does not support base denoms with slashes (e.g. v3.0.0) to a version that does (e.g. v3.2.0). All versions of ibc-go lower than v1.5.0 for the v1.x release line, v2.3.0 for the v2.x release line, and v3.1.0 for the v3.x release line do **NOT** support IBC token transfers of coins whose base denoms contain slashes. Therefore the in-place or genesis migrations described in this document are required when upgrading. + +If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect. + +E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`. + +This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes. + +To do so, chain binaries should include a migration script that will run when the chain upgrades from not supporting base denominations with slashes to supporting base denominations with slashes. + +## Chains + +### ICS20 - Transfer + +The transfer module will now support slashes in base denoms, so we must iterate over current traces to check if any of them are incorrectly formed and correct the trace information.
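To see why the stored trace above is wrong, consider how a full denom path should be split once slashes are supported: only the leading port/channel pairs belong to the trace path, and everything after them, slashes included, is the base denom. The sketch below is a simplified heuristic that assumes the `transfer` port; ibc-go's `ParseDenomTrace` implements the real rule.

```go
package main

import (
	"fmt"
	"strings"
)

// splitTrace is a simplified sketch of correct trace parsing: leading
// port/channel pairs form the trace path, and the remainder (which may itself
// contain slashes) is the base denom. It assumes the "transfer" port;
// ibc-go's ParseDenomTrace implements the real rule.
func splitTrace(fullPath string) (trace, baseDenom string) {
	parts := strings.Split(fullPath, "/")
	i := 0
	// port/channel pairs come in twos, e.g. "transfer"/"channel-0"
	for i+1 < len(parts) && parts[i] == "transfer" && strings.HasPrefix(parts[i+1], "channel-") {
		i += 2
	}
	return strings.Join(parts[:i], "/"), strings.Join(parts[i:], "/")
}

func main() {
	trace, base := splitTrace("transfer/channel-0/testcoin/testcoin/testcoin")
	fmt.Println(trace, base) // the base denom keeps its slashes
}
```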
+ +### Upgrade Proposal + +```go +app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / transfer module consensus version has been bumped to 2 + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +This is only necessary if there are denom traces in the store with incorrect trace information from previously received coins that had a slash in the base denom. However, it is recommended that any chain upgrading to support base denominations with slashes runs this code for safety. + +For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1680). + +### Genesis Migration + +If the chain chooses to add support for slashes in base denoms via genesis export, then the trace information must be corrected during genesis migration. + +The migration code required may look like: + +```go expandable +func migrateGenesisSlashedDenomsUpgrade(appState genutiltypes.AppMap, clientCtx client.Context, genDoc *tmtypes.GenesisDoc) (genutiltypes.AppMap, error) { + if appState[ibctransfertypes.ModuleName] != nil { + transferGenState := &ibctransfertypes.GenesisState{ +} + +clientCtx.Codec.MustUnmarshalJSON(appState[ibctransfertypes.ModuleName], transferGenState) + substituteTraces := make([]ibctransfertypes.DenomTrace, len(transferGenState.DenomTraces)) + for i, dt := range transferGenState.DenomTraces { + / replace all previous traces with the latest trace if validation passes + / note most traces will have same value + newTrace := ibctransfertypes.ParseDenomTrace(dt.GetFullDenomPath()) + if err := newTrace.Validate(); err != nil { + substituteTraces[i] = dt +} + +else { + substituteTraces[i] = newTrace +} + +} + +transferGenState.DenomTraces = substituteTraces + + / delete old genesis state + delete(appState, ibctransfertypes.ModuleName) + + / set new ibc transfer genesis state + 
appState[ibctransfertypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(transferGenState)
+}
+
+return appState, nil
+}
+```
+
+For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1528).

diff --git a/docs/ibc/next/migrations/support-stackbuilder.mdx b/docs/ibc/next/migrations/support-stackbuilder.mdx
new file mode 100644
index 00000000..6b4a1ab5
--- /dev/null
+++ b/docs/ibc/next/migrations/support-stackbuilder.mdx
@@ -0,0 +1,155 @@
+---
+title: >-
+  Support the new StackBuilder primitive for Wiring Middlewares in the chain
+  application
+---
+
+The StackBuilder struct is a new primitive for wiring middleware in a simpler and less error-prone manner. It is not a breaking change; thus, the existing method of wiring middleware still works, though it is highly recommended to transition to the new wiring method.
+
+Refer to the [integration guide](/docs/ibc/next/ibc/middleware/integration) to understand how to use this new primitive to improve middleware wiring in the chain application setup.
+
+# Migrations for Application Developers
+
+In order to be wired with the new StackBuilder primitive, applications and middlewares must implement new methods as part of their respective interfaces.
+
+IBC Applications must implement a new `SetICS4Wrapper` which will set the `ICS4Wrapper` through which the application will call `SendPacket` and `WriteAcknowledgement`. It is recommended that IBC applications are initialized first with the IBC ChannelKeeper directly, and then modified with a middleware ICS4Wrapper during the stack wiring.
+
+```go
+/ SetICS4Wrapper sets the ICS4Wrapper. This function may be used after
+/ the module's initialization to set the middleware which is above this
+/ module in the IBC application stack.
+/ The ICS4Wrapper **must** be used for sending packets and writing acknowledgements
+/ to ensure that the middleware can intercept and process these calls.
+/ Do not use the channel keeper directly to send packets or write acknowledgements
+/ as this will bypass the middleware.
+SetICS4Wrapper(wrapper ICS4Wrapper)
+```
+
+Many applications have a stateful keeper that executes the logic for sending packets and writing acknowledgements. In this case, the keeper in the application must be a **pointer** reference so that it can be modified in place after initialization.
+
+The initialization should be modified to no longer take in an additional `ics4Wrapper`, as this gets modified later by `SetICS4Wrapper`. The constructor function must also return a **pointer** reference so that it may be modified in-place by the stack builder.
+
+Below is an example IBCModule that supports the stack builder wiring.
+
+E.g.
+
+```go expandable
+type IBCModule struct {
+  keeper *keeper.Keeper
+}
+
+/ NewIBCModule creates a new IBCModule given the keeper
+func NewIBCModule(k *keeper.Keeper) *IBCModule {
+  return &IBCModule{
+    keeper: k,
+  }
+}
+
+/ SetICS4Wrapper sets the ICS4Wrapper. This function may be used after
+/ the module's initialization to set the middleware which is above this
+/ module in the IBC application stack.
+func (im IBCModule) SetICS4Wrapper(wrapper porttypes.ICS4Wrapper) {
+  if wrapper == nil {
+    panic("ICS4Wrapper cannot be nil")
+  }
+  im.keeper.WithICS4Wrapper(wrapper)
+}
+
+/ Keeper file that has ICS4Wrapper internal to its own struct
+
+/ Keeper defines the IBC fungible transfer keeper
+type Keeper struct {
+  ...
+  ics4Wrapper porttypes.ICS4Wrapper
+
+  / Keeper is initialized with ICS4Wrapper
+  / being equal to the top-level channelKeeper
+  / this can be changed by calling WithICS4Wrapper
+  / with a different middleware ICS4Wrapper
+  channelKeeper types.ChannelKeeper
+  ...
+}
+
+/ WithICS4Wrapper sets the ICS4Wrapper. This function may be used after
+/ the keeper's creation to set the middleware which is above this module
+/ in the IBC application stack.
+func (k *Keeper) WithICS4Wrapper(wrapper porttypes.ICS4Wrapper) {
+  k.ics4Wrapper = wrapper
+}
+```
+
+# Migration for Middleware Developers
+
+Since middleware itself implements the IBC application interface, it must also implement `SetICS4Wrapper` in the same way as IBC applications.
+
+Additionally, IBC middleware has an underlying IBC application that it calls into as well. Previously this application would be set in the middleware upon construction. With the stack builder primitive, the application is only set upon calling `stack.Build()`. Thus, middleware is additionally responsible for implementing the new method: `SetUnderlyingApplication`:
+
+```go
+/ SetUnderlyingApplication sets the underlying IBC module. This function may be used after
+/ the middleware's initialization to set the ibc module which is below this middleware.
+SetUnderlyingApplication(IBCModule)
+```
+
+The initialization should not include the ICS4Wrapper and application, as these get set later. The constructor function for middlewares **must** be modified to return a **pointer** reference so that it can be modified in place by the stack builder.
+
+Below is an example middleware setup:
+
+```go expandable
+/ IBCMiddleware implements the ICS26 callbacks
+type IBCMiddleware struct {
+  app porttypes.PacketUnmarshalerModule
+  ics4Wrapper porttypes.ICS4Wrapper
+
+  / this is a stateful middleware with its own internal keeper
+  mwKeeper *keeper.MiddlewareKeeper
+
+  / this is a middleware-specific field
+  mwField any
+}
+
+/ NewIBCMiddleware creates a new IBCMiddleware given the keeper and underlying application.
+/ NOTE: It **must** return a pointer reference so it can be
+/ modified in place by the stack builder
+/ NOTE: We do not pass in the underlying app and ICS4Wrapper here as this happens later
+func NewIBCMiddleware(
+  mwKeeper *keeper.MiddlewareKeeper, mwField any,
+) *IBCMiddleware {
+  return &IBCMiddleware{
+    mwKeeper: mwKeeper,
+    mwField:  mwField,
+  }
+}
+
+/ SetICS4Wrapper sets the ICS4Wrapper. This function may be used after the
+/ middleware's creation to set the middleware which is above this module in
+/ the IBC application stack.
+func (im *IBCMiddleware) SetICS4Wrapper(wrapper porttypes.ICS4Wrapper) {
+  if wrapper == nil {
+    panic("ICS4Wrapper cannot be nil")
+  }
+  im.mwKeeper.WithICS4Wrapper(wrapper)
+}
+
+/ SetUnderlyingApplication sets the underlying IBC module. This function may be used after
+/ the middleware's creation to set the ibc module which is below this middleware.
+func (im *IBCMiddleware) SetUnderlyingApplication(app porttypes.IBCModule) {
+  if app == nil {
+    panic(errors.New("underlying application cannot be nil"))
+  }
+  if im.app != nil {
+    panic(errors.New("underlying application already set"))
+  }
+  im.app = app
+}
+```

diff --git a/docs/ibc/next/migrations/v1-to-v2.mdx b/docs/ibc/next/migrations/v1-to-v2.mdx
new file mode 100644
index 00000000..ddd1bf1b
--- /dev/null
+++ b/docs/ibc/next/migrations/v1-to-v2.mdx
@@ -0,0 +1,59 @@
+---
+title: IBC-Go v1 to v2
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+```go
+github.com/cosmos/ibc-go -> github.com/cosmos/ibc-go/v2
+```
+
+## Chains
+
+* No relevant changes were made in this release.
+
+## IBC Apps
+
+A new function has been added to the app module interface:
+
+```go expandable
+/ NegotiateAppVersion performs application version negotiation given the provided channel ordering, connectionID, portID, counterparty and proposed version.
+/ An error is returned if version negotiation cannot be performed. For example, an application module implementing this interface
+/ may decide to return an error in the event of the proposed version being incompatible with its own
+NegotiateAppVersion(
+  ctx sdk.Context,
+  order channeltypes.Order,
+  connectionID string,
+  portID string,
+  counterparty channeltypes.Counterparty,
+  proposedVersion string,
+) (version string, err error)
+```
+
+This function should perform application version negotiation and return the negotiated version. If the version cannot be negotiated, an error should be returned. This function is only used on the client side.
+
+### `sdk.Result` removed
+
+`sdk.Result` has been removed as a return value in the application callbacks. Previously it was being discarded by core IBC and was thus unused.
+
+## Relayers
+
+A new gRPC has been added to 05-port, `AppVersion`. It returns the negotiated app version. This function should be used for the `ChanOpenTry` channel handshake step to decide upon the application version which should be set in the channel.
+
+## IBC Light Clients
+
+* No relevant changes were made in this release.
diff --git a/docs/ibc/next/migrations/v10-to-v11.mdx b/docs/ibc/next/migrations/v10-to-v11.mdx new file mode 100644 index 00000000..0b040d02 --- /dev/null +++ b/docs/ibc/next/migrations/v10-to-v11.mdx @@ -0,0 +1,60 @@ +--- +title: IBC-Go v10 to v11 +description: This guide provides instructions for migrating to a new version of ibc-go. +--- + +This guide provides instructions for migrating to a new version of ibc-go. + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +Diff examples are shown after the list of overall changes: + +* Chains will need to remove the `ParamSubspace` arg from all calls to `Keeper` constructors + +```diff + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[ibcexported.StoreKey]), +- app.GetSubspace(ibcexported.ModuleName), + app.UpgradeKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) +``` + +The transfer module, the packet forward middleware, and the rate limiting middleware support custom address codecs. This feature is primarily added to support Cosmos EVM for IBC transfers. 
In a standard Cosmos SDK app, they are wired as follows: + +```diff + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, ++ app.AccountKeeper.AddressCodec(), + runtime.NewKVStoreService(keys[ibctransfertypes.StoreKey]), + app.IBCKeeper.ChannelKeeper, + app.MsgServiceRouter(), + app.AccountKeeper, app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) +``` + +```diff + app.RateLimitKeeper = ratelimitkeeper.NewKeeper( + appCodec, ++ app.AccountKeeper.AddressCodec(), + runtime.NewKVStoreService(keys[ratelimittypes.StoreKey]), + app.IBCKeeper.ChannelKeeper, + app.IBCKeeper.ClientKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String() + ) +``` + +```diff + app.PFMKeeper = packetforwardkeeper.NewKeeper( + appCodec, ++ app.AccountKeeper.AddressCodec(), + runtime.NewKVStoreService(keys[packetforwardtypes.StoreKey]), + app.TransferKeeper, + app.IBCKeeper.ChannelKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String() + ) +``` diff --git a/docs/ibc/next/migrations/v2-to-v3.mdx b/docs/ibc/next/migrations/v2-to-v3.mdx new file mode 100644 index 00000000..4ee298cb --- /dev/null +++ b/docs/ibc/next/migrations/v2-to-v3.mdx @@ -0,0 +1,187 @@ +--- +title: IBC-Go v2 to v3 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. 
+ +```go +github.com/cosmos/ibc-go/v2 -> github.com/cosmos/ibc-go/v3 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS20 + +The `transferkeeper.NewKeeper(...)` now takes in an ICS4Wrapper. +The ICS4Wrapper should be the IBC Channel Keeper unless ICS 20 is being connected to a middleware application. + +### ICS27 + +ICS27 Interchain Accounts has been added as a supported IBC application of ibc-go. +Please see the [ICS27 documentation](/docs/ibc/next/apps/interchain-accounts/overview) for more information. + +### Upgrade Proposal + +If the chain will adopt ICS27, it must set the appropriate params during the execution of the upgrade handler in `app.go`: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("v3", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / set the ICS27 consensus version so InitGenesis is not run + fromVM[icatypes.ModuleName] = icamodule.ConsensusVersion() + + / create ICS27 Controller submodule params + controllerParams := icacontrollertypes.Params{ + ControllerEnabled: true, +} + + / create ICS27 Host submodule params + hostParams := icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... +}, +} + + / initialize ICS27 module + icamodule.InitModule(ctx, controllerParams, hostParams) + + ... + + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +The host and controller submodule params only need to be set if the chain integrates those submodules. +For example, if a chain chooses not to integrate a controller submodule, it may pass empty params into `InitModule`. 
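For instance, a chain integrating only the host submodule could pass zero-value controller params, which leave the controller submodule disabled. This is a sketch under the same upgrade-handler assumptions as the snippet above; the exact `AllowMessages` list is chain-specific.

```go
/ host-only integration: zero-value controller params leave the
/ controller submodule disabled
icamodule.InitModule(ctx, icacontrollertypes.Params{}, icahosttypes.Params{
  HostEnabled:   true,
  AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend"},
})
```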
+ +#### Add `StoreUpgrades` for ICS27 module + +For ICS27 it is also necessary to [manually add store upgrades](https://docs.cosmos.network/main/learn/advanced/upgrade#add-storeupgrades-for-new-modules) for the new ICS27 module and then configure the store loader to apply those upgrades in `app.go`: + +```go +if upgradeInfo.Name == "v3" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := store.StoreUpgrades{ + Added: []string{ + icacontrollertypes.StoreKey, icahosttypes.StoreKey +}, +} + +app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +``` + +This ensures that the new module's stores are added to the multistore before the migrations begin. +The host and controller submodule keys only need to be added if the chain integrates those submodules. +For example, if a chain chooses not to integrate a controller submodule, it does not need to add the controller key to the `Added` field. + +### Genesis migrations + +If the chain will adopt ICS27 and chooses to upgrade via a genesis export, then the ICS27 parameters must be set during genesis migration. + +The migration code required may look like: + +```go expandable +controllerGenesisState := icatypes.DefaultControllerGenesis() + / overwrite parameters as desired + controllerGenesisState.Params = icacontrollertypes.Params{ + ControllerEnabled: true, +} + hostGenesisState := icatypes.DefaultHostGenesis() + / overwrite parameters as desired + hostGenesisState.Params = icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... 
+},
+}
+ icaGenesisState := icatypes.NewGenesisState(controllerGenesisState, hostGenesisState)
+
+ / set new ics27 genesis state
+ appState[icatypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(icaGenesisState)
+```
+
+### Ante decorator
+
+The field of type `channelkeeper.Keeper` in the `AnteDecorator` structure has been replaced with a field of type `*keeper.Keeper`:
+
+```diff
+type AnteDecorator struct {
+- k channelkeeper.Keeper
++ k *keeper.Keeper
+}
+
+- func NewAnteDecorator(k channelkeeper.Keeper) AnteDecorator {
++ func NewAnteDecorator(k *keeper.Keeper) AnteDecorator {
+ return AnteDecorator{k: k}
+}
+```
+
+## IBC Apps
+
+### `OnChanOpenTry` must return negotiated application version
+
+The `OnChanOpenTry` application callback has been modified.
+The return signature now includes the application version.
+IBC applications must perform application version negotiation in `OnChanOpenTry` using the counterparty version.
+The negotiated application version then must be returned in `OnChanOpenTry` to core IBC.
+Core IBC will set this version in the TRYOPEN channel.
+
+### `OnChanOpenAck` will take additional `counterpartyChannelID` argument
+
+The `OnChanOpenAck` application callback has been modified.
+The arguments now include the counterparty channel id.
+
+### `NegotiateAppVersion` removed from `IBCModule` interface
+
+Previously this logic was handled by the `NegotiateAppVersion` function.
+Relayers would query this function before calling `ChanOpenTry`.
+Applications would then need to verify that the passed in version was correct.
+Now applications will perform this version negotiation during the channel handshake, thus removing the need for `NegotiateAppVersion`.
+
+### Channel state will not be set before application callback
+
+The channel handshake logic has been reorganized within core IBC.
+Channel state will not be set in state until after the application callback is performed.
+Applications must rely only on the passed in channel parameters instead of querying the channel keeper for channel state.
+
+### IBC application callbacks moved from `AppModule` to `IBCModule`
+
+Previously, IBC module callbacks were a part of the `AppModule` type.
+The recommended approach is to create an `IBCModule` type and move the IBC module callbacks from `AppModule` to `IBCModule` in a separate file `ibc_module.go`.
+
+The mock module Go API has been broken in this release by applying the above format.
+The IBC module callbacks have been moved from the mock module's `AppModule` into a new type `IBCModule`.
+
+As part of this release, the mock module now supports middleware testing. Please see the [README](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/README.md#middleware-testing) for more information.
+
+Please review the [mock](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/mock/ibc_module.go) and [transfer](https://github.com/cosmos/ibc-go/blob/v3.0.0/modules/apps/transfer/ibc_module.go) modules as examples. Additionally, [simapp](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/simapp/app.go) provides an example of how `IBCModule` types should now be added to the IBC router in favour of `AppModule`.
+
+### IBC testing package
+
+`TestChain`s are now created with chainIDs beginning from an index of 1. Any calls to `GetChainID(0)` will now fail. Please increment all calls to `GetChainID` by 1.
+
+## Relayers
+
+`AppVersion` gRPC has been removed.
+The `version` string in `MsgChanOpenTry` has been deprecated and will be ignored by core IBC.
+Relayers no longer need to determine the version to use on the `ChanOpenTry` step.
+IBC applications will determine the correct version using the counterparty version.
+
+## IBC Light Clients
+
+The `GetProofSpecs` function has been removed from the `ClientState` interface. This function was previously unused by core IBC. Light clients which don't use this function may remove it.
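The testing package change above is mechanical; a typical test suite setup would shift its chain indices along these lines (a sketch assuming the common `ibctesting` coordinator pattern; variable names may differ in your suite):

```diff
- suite.chainA = suite.coordinator.GetChain(ibctesting.GetChainID(0))
- suite.chainB = suite.coordinator.GetChain(ibctesting.GetChainID(1))
+ suite.chainA = suite.coordinator.GetChain(ibctesting.GetChainID(1))
+ suite.chainB = suite.coordinator.GetChain(ibctesting.GetChainID(2))
```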
diff --git a/docs/ibc/next/migrations/v3-to-v4.mdx b/docs/ibc/next/migrations/v3-to-v4.mdx
new file mode 100644
index 00000000..c2e57de8
--- /dev/null
+++ b/docs/ibc/next/migrations/v3-to-v4.mdx
@@ -0,0 +1,156 @@
+---
+title: IBC-Go v3 to v4
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+```go
+github.com/cosmos/ibc-go/v3 -> github.com/cosmos/ibc-go/v4
+```
+
+No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go.
+
+## Chains
+
+### ICS27 - Interchain Accounts
+
+The controller submodule now implements the 05-port `Middleware` interface instead of the 05-port `IBCModule` interface. Chains that integrate the controller submodule need to create it with the `NewIBCMiddleware` constructor function. For example:
+
+```diff
+- icacontroller.NewIBCModule(app.ICAControllerKeeper, icaAuthIBCModule)
++ icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper)
+```
+
+where `icaAuthIBCModule` is the Interchain Accounts authentication IBC Module.
+
+### ICS29 - Fee Middleware
+
+The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly.
+
+Please read the Fee Middleware integration documentation for an in-depth guide on how to configure the module correctly in order to incentivize IBC packets.
+
+Take a look at the following diff for an [example setup](https://github.com/cosmos/ibc-go/pull/1432/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08aL366) of how to incentivize ics27 channels.
+
+### Migration to fix support for base denoms with slashes
+
+As part of [v1.5.0](https://github.com/cosmos/ibc-go/releases/tag/v1.5.0), [v2.3.0](https://github.com/cosmos/ibc-go/releases/tag/v2.3.0) and [v3.1.0](https://github.com/cosmos/ibc-go/releases/tag/v3.1.0), a [migration handler code sample was documented](/docs/ibc/next/migrations/support-denoms-with-slashes#upgrade-proposal) that needs to run in order to correct the trace information of coins transferred using ICS20 whose base denom contains slashes.
+
+Based on feedback from the community, we now provide an improved solution to run the same migration that does not require copying a large piece of code over from the migration document, but instead requires only adding a one-line upgrade handler.
+
+If the chain will migrate to supporting base denoms with slashes, it must set the appropriate params during the execution of the upgrade handler in `app.go`:
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces",
+  func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+    / transfer module consensus version has been bumped to 2
+    return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+})
+```
+
+If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect.
+
+E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful.
However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`.
+
+This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes.
+
+## IBC Apps
+
+### ICS03 - Connection
+
+Crossing hellos have been removed from 03-connection handshake negotiation.
+`PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated and is no longer used by core IBC.
+
+`NewMsgConnectionOpenTry` no longer takes in the `PreviousConnectionId` as crossing hellos are no longer supported. A non-empty `PreviousConnectionId` will fail basic validation for this message.
+
+### ICS04 - Channel
+
+The `WriteAcknowledgement` API now takes the `exported.Acknowledgement` type instead of passing in the acknowledgement byte array directly.
+This is an API breaking change and as such IBC application developers will have to update any calls to `WriteAcknowledgement`.
+
+The `OnChanOpenInit` application callback has been modified.
+The return signature now includes the application version as detailed in the latest IBC [spec changes](https://github.com/cosmos/ibc/pull/629).
+
+The `NewErrorAcknowledgement` method signature has changed.
+It now accepts an `error` rather than a `string`. This was done in order to prevent accidental state changes.
+All error acknowledgements now contain a deterministic ABCI code and error message. It is the responsibility of the application developer to emit error details in events.
+
+Crossing hellos have been removed from 04-channel handshake negotiation.
+IBC applications no longer need to account for already claimed capabilities in the `OnChanOpenTry` callback. The capability provided by core IBC must be able to be claimed without error.
+`PreviousChannelId` in `MsgChannelOpenTry` has been deprecated and is no longer used by core IBC.
+ +`NewMsgChannelOpenTry` no longer takes in the `PreviousChannelId` as crossing hellos are no longer supported. A non-empty `PreviousChannelId` will fail basic validation for this message. + +### ICS27 - Interchain Accounts + +The `RegisterInterchainAccount` API has been modified to include an additional `version` argument. This change has been made in order to support ICS29 fee middleware, for relayer incentivization of ICS27 packets. +Consumers of the `RegisterInterchainAccount` are now expected to build the appropriate JSON encoded version string themselves and pass it accordingly. +This should be constructed within the interchain accounts authentication module which leverages the APIs exposed via the interchain accounts `controllerKeeper`. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, 
+ TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(feeEnabledVersion)); err != nil { + return err +} +``` + +## Relayers + +When using the `DenomTrace` gRPC, the full IBC denomination with the `ibc/` prefix may now be passed in. + +Crossing hellos are no longer supported by core IBC for 03-connection and 04-channel. The handshake should be completed in the logical 4 step process (INIT, TRY, ACK, CONFIRM). diff --git a/docs/ibc/next/migrations/v4-to-v5.mdx b/docs/ibc/next/migrations/v4-to-v5.mdx new file mode 100644 index 00000000..81b549de --- /dev/null +++ b/docs/ibc/next/migrations/v4-to-v5.mdx @@ -0,0 +1,440 @@ +--- +title: IBC-Go v4 to v5 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* [Chains](#chains) +* [IBC Apps](#ibc-apps) +* [Relayers](#relayers) +* [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. 
+ +```go +github.com/cosmos/ibc-go/v4 -> github.com/cosmos/ibc-go/v5 +``` + +## Chains + +### Ante decorator + +The `AnteDecorator` type in `core/ante` has been renamed to `RedundantRelayDecorator` (and the corresponding constructor function to `NewRedundantRelayDecorator`). Therefore in the function that creates the instance of the `sdk.AnteHandler` type (e.g. `NewAnteHandler`) the change would be like this: + +```diff expandable +func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { + / parameter validation + + anteDecorators := []sdk.AnteDecorator{ + / other ante decorators +- ibcante.NewAnteDecorator(opts.IBCkeeper), ++ ibcante.NewRedundantRelayDecorator(options.IBCKeeper), + } + + return sdk.ChainAnteDecorators(anteDecorators...), nil +} +``` + +The `AnteDecorator` was actually renamed twice, but in [this PR](https://github.com/cosmos/ibc-go/pull/1820) you can see the changes made for the final rename. + +## IBC Apps + +### Core + +The `key` parameter of the `NewKeeper` function in `modules/core/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + stakingKeeper clienttypes.StakingKeeper, + upgradeKeeper clienttypes.UpgradeKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) *Keeper +``` + +The `RegisterRESTRoutes` function in `modules/core` has been removed. 
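Because the store key type moved packages, the corresponding import usually changes as well. A sketch, assuming `sdk.StoreKey` was the only use of the `sdk` alias in that file; keep the `sdk` import if other types still need it:

```diff
 import (
-    sdk "github.com/cosmos/cosmos-sdk/types"
+    storetypes "github.com/cosmos/cosmos-sdk/store/types"
 )
```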
+ +### ICS03 - Connection + +The `key` parameter of the `NewKeeper` function in `modules/core/03-connection/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ck types.ClientKeeper +) Keeper +``` + +### ICS04 - Channel + +The function `NewPacketId` in `modules/core/04-channel/types` has been renamed to `NewPacketID`: + +```diff +- func NewPacketId( ++ func NewPacketID( + portID, + channelID string, + seq uint64 +) PacketId +``` + +The `key` parameter of the `NewKeeper` function in `modules/core/04-channel/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + clientKeeper types.ClientKeeper, + connectionKeeper types.ConnectionKeeper, + portKeeper types.PortKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +### ICS20 - Transfer + +The `key` parameter of the `NewKeeper` function in `modules/apps/transfer/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff expandable +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ics4Wrapper types.ICS4Wrapper, + channelKeeper types.ChannelKeeper, + portKeeper types.PortKeeper, + authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +The `amount` parameter of function `GetTransferCoin` in `modules/apps/transfer/types` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func GetTransferCoin( + portID, channelID, baseDenom string, +- amount sdk.Int ++ amount math.Int 
+) sdk.Coin
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/transfer` has been removed.
+
+### ICS27 - Interchain Accounts
+
+The `key` and `msgRouter` parameters of the `NewKeeper` functions in
+
+* `modules/apps/27-interchain-accounts/controller/keeper`
+* and `modules/apps/27-interchain-accounts/host/keeper`
+
+have changed type. The `key` parameter is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`), and the `msgRouter` parameter is now of type `*icatypes.MessageRouter` (where `icatypes` is an import alias for `"github.com/cosmos/ibc-go/v5/modules/apps/27-interchain-accounts/types"`):
+
+```diff expandable
+// NewKeeper creates a new interchain accounts controller Keeper instance
+func NewKeeper(
+  cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
+  ics4Wrapper icatypes.ICS4Wrapper,
+  channelKeeper icatypes.ChannelKeeper,
+  portKeeper icatypes.PortKeeper,
+  scopedKeeper capabilitykeeper.ScopedKeeper,
+- msgRouter *baseapp.MsgServiceRouter,
++ msgRouter *icatypes.MessageRouter,
+) Keeper
+```
+
+```diff expandable
+// NewKeeper creates a new interchain accounts host Keeper instance
+func NewKeeper(
+  cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
+  channelKeeper icatypes.ChannelKeeper,
+  portKeeper icatypes.PortKeeper,
+  accountKeeper icatypes.AccountKeeper,
+  scopedKeeper capabilitykeeper.ScopedKeeper,
+- msgRouter *baseapp.MsgServiceRouter,
++ msgRouter *icatypes.MessageRouter,
+) Keeper
+```
+
+The new `MessageRouter` interface is defined as:
+
+```go
+type MessageRouter interface {
+  Handler(msg sdk.Msg) baseapp.MsgServiceHandler
+}
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/27-interchain-accounts` has been removed.
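The lookup pattern behind the new `MessageRouter` interface can be sketched with a minimal, self-contained toy. All names here (`Msg`, `MsgServiceHandler`, `mapRouter`, `registerMsg`, `dispatch`) are hypothetical stand-ins for illustration, not SDK or ibc-go types:

```go
package main

import "fmt"

// Msg is a toy stand-in for sdk.Msg.
type Msg interface{ Route() string }

// MsgServiceHandler is a toy stand-in for baseapp.MsgServiceHandler.
type MsgServiceHandler func(msg Msg) (string, error)

// MessageRouter mirrors the shape of the interface above: given a message,
// return the handler registered for it (or nil if none exists).
type MessageRouter interface {
	Handler(msg Msg) MsgServiceHandler
}

// mapRouter is a hypothetical in-memory router keyed by message route.
type mapRouter map[string]MsgServiceHandler

func (r mapRouter) Handler(msg Msg) MsgServiceHandler { return r[msg.Route()] }

type registerMsg struct{ owner string }

func (m registerMsg) Route() string { return "register" }

// dispatch looks up the handler via the router instead of calling a keeper
// method directly, which is the pattern the MessageRouter enables.
func dispatch(router MessageRouter, msg Msg) (string, error) {
	handler := router.Handler(msg)
	if handler == nil {
		return "", fmt.Errorf("no handler for route %q", msg.Route())
	}
	return handler(msg)
}

func main() {
	router := mapRouter{
		"register": func(msg Msg) (string, error) {
			return "registered for " + msg.(registerMsg).owner, nil
		},
	}
	res, err := dispatch(router, registerMsg{owner: "alice"})
	fmt.Println(res, err)
}
```

The design point is that modules depend only on the narrow `MessageRouter` interface rather than on a concrete keeper, so authentication can be swapped without changing callers.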
+
+An additional parameter, `ics4Wrapper`, has been added to the `host` submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper`.
+This allows the `host` submodule to correctly unwrap the channel version for channel reopening handshakes in the `OnChanOpenTry` callback.
+
+```diff expandable
+func NewKeeper(
+  cdc codec.BinaryCodec,
+  key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
++ ics4Wrapper icatypes.ICS4Wrapper,
+  channelKeeper icatypes.ChannelKeeper,
+  portKeeper icatypes.PortKeeper,
+  accountKeeper icatypes.AccountKeeper,
+  scopedKeeper icatypes.ScopedKeeper,
+  msgRouter icatypes.MessageRouter,
+) Keeper
+```
+
+#### Cosmos SDK message handler responses in packet acknowledgement
+
+The construction of the transaction response of a message execution on the host chain has changed. The `Data` field in `sdk.TxMsgData` has been deprecated, and since Cosmos SDK 0.46 the `MsgResponses` field contains the message handler responses packed into `Any`s.
+
+For chains on Cosmos SDK 0.45 and below, the message response was constructed like this:
+
+```go expandable
+txMsgData := &sdk.TxMsgData{
+  Data: make([]*sdk.MsgData, len(msgs)),
+}
+
+for i, msg := range msgs {
+  // message validation
+
+  msgResponse, err := k.executeMsg(cacheCtx, msg)
+  // return if err != nil
+
+  txMsgData.Data[i] = &sdk.MsgData{
+    MsgType: sdk.MsgTypeURL(msg),
+    Data:    msgResponse,
+  }
+}
+
+// emit events
+
+txResponse, err := proto.Marshal(txMsgData)
+// return if err != nil
+
+return txResponse, nil
+```
+
+And for chains on Cosmos SDK 0.46 and above, it is now done like this:
+
+```go expandable
+txMsgData := &sdk.TxMsgData{
+  MsgResponses: make([]*codectypes.Any, len(msgs)),
+}
+
+for i, msg := range msgs {
+  // message validation
+
+  protoAny, err := k.executeMsg(cacheCtx, msg)
+  // return if err != nil
+
+  txMsgData.MsgResponses[i] = protoAny
+}
+
+// emit events
+
+txResponse, err := proto.Marshal(txMsgData)
+// return if err != nil
+
+return txResponse, nil
+```
+
+When handling the acknowledgement in the `OnAcknowledgementPacket` callback of a custom ICA controller module, the logic to handle the message handler response will differ depending on whether `txMsgData.Data` is empty or not.
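Before looking at the callback itself, the version-detection branch can be illustrated with a self-contained toy. The `txMsgData` struct and `responsesFrom` helper below are hypothetical stand-ins, not SDK types:

```go
package main

import "fmt"

// txMsgData is a toy stand-in for sdk.TxMsgData: on SDK 0.45 hosts the
// legacy Data field is populated; on SDK 0.46+ hosts MsgResponses is used.
type txMsgData struct {
	Data         [][]byte
	MsgResponses []string
}

// responsesFrom picks the right field based on whether the legacy Data
// field is empty, mirroring the switch statement in the callback.
func responsesFrom(ack txMsgData) []string {
	switch len(ack.Data) {
	case 0: // SDK 0.46 and above
		return ack.MsgResponses
	default: // SDK 0.45 and below
		out := make([]string, len(ack.Data))
		for i, d := range ack.Data {
			out[i] = string(d)
		}
		return out
	}
}

func main() {
	modern := txMsgData{MsgResponses: []string{"/cosmos.bank.v1beta1.MsgSendResponse"}}
	legacy := txMsgData{Data: [][]byte{[]byte("send-result")}}
	fmt.Println(responsesFrom(modern), responsesFrom(legacy))
}
```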
**Only controller chains on Cosmos SDK 0.46 or above will be able to write the logic needed to handle the response from a host chain on Cosmos SDK 0.46 or above.**
+
+```go expandable
+var ack channeltypes.Acknowledgement
+if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil {
+  return err
+}
+
+var txMsgData sdk.TxMsgData
+if err := proto.Unmarshal(ack.GetResult(), &txMsgData); err != nil {
+  return err
+}
+
+switch len(txMsgData.Data) {
+case 0: // for SDK 0.46 and above
+  for _, msgResponse := range txMsgData.MsgResponses {
+    // unmarshal msgResponse and execute logic based on the response
+  }
+
+  return nil
+default: // for SDK 0.45 and below
+  for _, msgData := range txMsgData.Data {
+    // unmarshal msgData and execute logic based on the response
+  }
+}
+```
+
+See the corresponding documentation about authentication modules for more information.
+
+### ICS29 - Fee Middleware
+
+The `key` parameter of the `NewKeeper` function in `modules/apps/29-fee` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`):
+
+```diff expandable
+func NewKeeper(
+  cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
+  ics4Wrapper types.ICS4Wrapper,
+  channelKeeper types.ChannelKeeper,
+  portKeeper types.PortKeeper,
+  authKeeper types.AccountKeeper,
+  bankKeeper types.BankKeeper,
+) Keeper
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/29-fee` has been removed.
+
+### IBC testing package
+
+The `MockIBCApp` type has been renamed to `IBCApp` (and the corresponding constructor function to `NewIBCApp`).
This has therefore resulted in:
+
+* The `IBCApp` field of the `*IBCModule` in `testing/mock` changing its type to `*IBCApp`:
+
+```diff
+type IBCModule struct {
+  appModule *AppModule
+- IBCApp *MockIBCApp // base application of an IBC middleware stack
++ IBCApp *IBCApp // base application of an IBC middleware stack
+}
+```
+
+* The `app` parameter to `NewIBCModule` in `testing/mock` changing its type to `*IBCApp`:
+
+```diff
+func NewIBCModule(
+  appModule *AppModule,
+- app *MockIBCApp
++ app *IBCApp
+) IBCModule
+```
+
+The `MockEmptyAcknowledgement` type has been renamed to `EmptyAcknowledgement` (and the corresponding constructor function to `NewEmptyAcknowledgement`).
+
+The `TestingApp` interface in `testing` has gone through some modifications:
+
+* The return type of the function `GetStakingKeeper` is no longer the concrete type `stakingkeeper.Keeper` (where `stakingkeeper` is an import alias for `"github.com/cosmos/cosmos-sdk/x/staking/keeper"`); it has been changed to the interface `ibctestingtypes.StakingKeeper` (where `ibctestingtypes` is an import alias for `"github.com/cosmos/ibc-go/v5/testing/types"`). See this [PR](https://github.com/cosmos/ibc-go/pull/2028) for more details. The `StakingKeeper` interface is defined as:
+
+```go
+type StakingKeeper interface {
+  GetHistoricalInfo(ctx sdk.Context, height int64) (stakingtypes.HistoricalInfo, bool)
+}
+```
+
+* The return type of the function `LastCommitID` has changed to `storetypes.CommitID` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`).
+
+See the following `git diff` for more details:
+
+```diff expandable
+type TestingApp interface {
+  abci.Application
+
+  // ibc-go additions
+  GetBaseApp() *baseapp.BaseApp
+- GetStakingKeeper() stakingkeeper.Keeper
++ GetStakingKeeper() ibctestingtypes.StakingKeeper
+  GetIBCKeeper() *keeper.Keeper
+  GetScopedIBCKeeper() capabilitykeeper.ScopedKeeper
+  GetTxConfig() client.TxConfig
+
+  // Implemented by SimApp
+  AppCodec() codec.Codec
+
+  // Implemented by BaseApp
+- LastCommitID() sdk.CommitID
++ LastCommitID() storetypes.CommitID
+  LastBlockHeight() int64
+}
+```
+
+The `powerReduction` parameter of the function `SetupWithGenesisValSet` in `testing` is now of type `math.Int` (`"cosmossdk.io/math"`):
+
+```diff
+func SetupWithGenesisValSet(
+  t *testing.T,
+  valSet *tmtypes.ValidatorSet,
+  genAccs []authtypes.GenesisAccount,
+  chainID string,
+- powerReduction sdk.Int,
++ powerReduction math.Int,
+  balances ...banktypes.Balance
+) TestingApp
+```
+
+The `accAmt` parameter of the functions
+
+* `AddTestAddrsFromPubKeys`,
+* `AddTestAddrs`
+* and `AddTestAddrsIncremental`
+
+in `testing/simapp` is now of type `math.Int` (`"cosmossdk.io/math"`):
+
+```diff expandable
+func AddTestAddrsFromPubKeys(
+  app *SimApp,
+  ctx sdk.Context,
+  pubKeys []cryptotypes.PubKey,
+- accAmt sdk.Int,
++ accAmt math.Int
+)
+func addTestAddrs(
+  app *SimApp,
+  ctx sdk.Context,
+  accNum int,
+- accAmt sdk.Int,
++ accAmt math.Int,
+  strategy GenerateAccountStrategy
+) []sdk.AccAddress
+func AddTestAddrsIncremental(
+  app *SimApp,
+  ctx sdk.Context,
+  accNum int,
+- accAmt sdk.Int,
++ accAmt math.Int
+) []sdk.AccAddress
+```
+
+The `RegisterRESTRoutes` function in `testing/mock` has been removed.
+
+## Relayers
+
+* No relevant changes were made in this release.
+ +## IBC Light Clients + +### ICS02 - Client + +The `key` parameter of the `NewKeeper` function in `modules/core/02-client/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + sk types.StakingKeeper, + uk types.UpgradeKeeper +) Keeper +``` diff --git a/docs/ibc/next/migrations/v5-to-v6.mdx b/docs/ibc/next/migrations/v5-to-v6.mdx new file mode 100644 index 00000000..cdc9a636 --- /dev/null +++ b/docs/ibc/next/migrations/v5-to-v6.mdx @@ -0,0 +1,301 @@ +--- +title: IBC-Go v5 to v6 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +## Chains + +The `ibc-go/v6` release introduces a new set of migrations for `27-interchain-accounts`. Ownership of ICS27 channel capabilities is transferred from ICS27 authentication modules and will now reside with the ICS27 controller submodule moving forward. + +For chains which contain a custom authentication module using the ICS27 controller submodule this requires a migration function to be included in the chain upgrade handler. 
A subsequent migration handler is run automatically, asserting that the ownership of ICS27 channel capabilities has been transferred successfully.
+
+This migration is not required for chains which *do not* contain a custom authentication module using the ICS27 controller submodule.
+
+This migration facilitates the addition of the ICS27 controller submodule `MsgServer` which provides a standardised approach to integrating existing forms of authentication such as `x/gov` and `x/group` provided by the Cosmos SDK.
+
+For more information please refer to the ICS27 controller submodule documentation.
+
+### Upgrade proposal
+
+Please refer to [PR #2383](https://github.com/cosmos/ibc-go/pull/2383) for integrating the ICS27 channel capability migration logic or follow the steps outlined below:
+
+1. Add the upgrade migration logic to chain distribution. This may be, for example, maintained under a package `app/upgrades/v6`.
+
+```go expandable
+package v6
+
+import (
+  "github.com/cosmos/cosmos-sdk/codec"
+  storetypes "github.com/cosmos/cosmos-sdk/store/types"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+  "github.com/cosmos/cosmos-sdk/types/module"
+  capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper"
+  upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"
+
+  v6 "github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/migrations/v6"
+)
+
+const (
+  UpgradeName = "v6"
+)
+
+func CreateUpgradeHandler(
+  mm *module.Manager,
+  configurator module.Configurator,
+  cdc codec.BinaryCodec,
+  capabilityStoreKey *storetypes.KVStoreKey,
+  capabilityKeeper *capabilitykeeper.Keeper,
+  moduleName string,
+) upgradetypes.UpgradeHandler {
+  return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+    if err := v6.MigrateICS27ChannelCapability(ctx, cdc, capabilityStoreKey, capabilityKeeper, moduleName); err != nil {
+      return nil, err
+    }
+
+    return mm.RunMigrations(ctx, configurator, vm)
+  }
+}
+```
+
+2.
Set the upgrade handler in `app.go`. The `moduleName` parameter refers to the authentication module's `ScopedKeeper` name. This is the name provided upon instantiation in `app.go` via the [`x/capability` keeper `ScopeToModule(moduleName string)`](https://github.com/cosmos/cosmos-sdk/blob/v0.46.1/x/capability/keeper/keeper.go#L70) method. [See here for an example in `simapp`](https://github.com/cosmos/ibc-go/blob/v5.0.0/testing/simapp/app.go#L304). + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler( + v6.UpgradeName, + v6.CreateUpgradeHandler( + app.mm, + app.configurator, + app.appCodec, + app.keys[capabilitytypes.ModuleName], + app.CapabilityKeeper, + >>>> moduleName <<<<, + ), +) +``` + +## IBC Apps + +### ICS27 - Interchain Accounts + +#### Controller APIs + +In previous releases of ibc-go, chain developers integrating the ICS27 interchain accounts controller functionality were expected to create a custom `Base Application` referred to as an authentication module, see the section [Building an authentication module](/docs/ibc/next/apps/interchain-accounts/auth-modules) from the documentation. + +The `Base Application` was intended to be composed with the ICS27 controller submodule `Keeper` and facilitate many forms of message authentication depending on a chain's particular use case. + +Prior to ibc-go v6 the controller submodule exposed only these two functions (to which we will refer as the legacy APIs): + +* [`RegisterInterchainAccount`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/account.go#L19) +* [`SendTx`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/relay.go#L18) + +However, these functions have now been deprecated in favour of the new controller submodule `MsgServer` and will be removed in later releases. 
+
+Both APIs remain functional and maintain backwards compatibility in ibc-go v6; however, consumers of these APIs are now recommended to follow the message passing paradigm outlined in Cosmos SDK [ADR 031](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-031) and [ADR 033](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-033). This is facilitated by the Cosmos SDK [`MsgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/main/baseapp/msg_service_router.go#L17), and chain developers creating custom application logic can now omit the ICS27 controller submodule `Keeper` from their module and instead depend on message routing.
+
+Depending on the use case, developers of custom authentication modules face one of three scenarios:
+
+![auth-module-decision-tree.png](/docs/ibc/images/04-migrations/images/auth-module-decision-tree.png)
+
+**My authentication module needs to access IBC packet callbacks**
+
+Application developers that wish to consume IBC packet callbacks and react upon packet acknowledgements **must** continue using the controller submodule's legacy APIs. The authentication modules will no longer need a `ScopedKeeper`, though, because the channel capability will be claimed by the controller submodule. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the authentication module's `ScopedKeeper` (`scopedICAAuthKeeper`) is no longer needed and can be removed from the argument list of the keeper constructor function, as shown here:
+
+```diff
+app.ICAAuthKeeper = icaauthkeeper.NewKeeper(
+  appCodec,
+  keys[icaauthtypes.StoreKey],
+  app.ICAControllerKeeper,
+- scopedICAAuthKeeper,
+)
+```
+
+Please note that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above.
Therefore the authentication module's `ScopedKeeper` cannot be completely removed from the chain code until the migration has run. + +In the future, the use of the legacy APIs for accessing packet callbacks will be replaced by IBC Actor Callbacks (see [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) for more details) and it will also be possible to access them with the `MsgServiceRouter`. + +**My authentication module does not need access to IBC packet callbacks** + +The authentication module can migrate from using the legacy APIs and it can be composed instead with the `MsgServiceRouter`, so that the authentication module is able to pass messages to the controller submodule's `MsgServer` to register interchain accounts and send packets to the interchain account. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the ICS27 controller submodule keeper (`ICAControllerKeeper`) and authentication module scoped keeper (`scopedICAAuthKeeper`) are not needed anymore and can be replaced with the `MsgServiceRouter`, as shown here: + +```diff +app.ICAAuthKeeper = icaauthkeeper.NewKeeper( + appCodec, + keys[icaauthtypes.StoreKey], +- app.ICAControllerKeeper, +- scopedICAAuthKeeper, ++ app.MsgServiceRouter(), +) +``` + +In your authentication module you can route messages to the controller submodule's `MsgServer` instead of using the legacy APIs. 
For example, for registering an interchain account:
+
+```diff expandable
+- if err := keeper.icaControllerKeeper.RegisterInterchainAccount(
+-   ctx,
+-   connectionID,
+-   owner.String(),
+-   version,
+- ); err != nil {
+-   return err
+- }
++ msg := controllertypes.NewMsgRegisterInterchainAccount(
++   connectionID,
++   owner.String(),
++   version,
++ )
++ handler := keeper.msgRouter.Handler(msg)
++ res, err := handler(ctx, msg)
++ if err != nil {
++   return err
++ }
+```
+
+where `controllertypes` is an import alias for `"github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/types"`.
+
+In addition, in this use case the authentication module does not need to implement the `IBCModule` interface anymore.
+
+**I do not need a custom authentication module anymore**
+
+If your authentication module does not have any extra functionality compared to the default authentication module added in ibc-go v6 (the `MsgServer`), or if you can use a generic authentication module, such as the `x/auth`, `x/gov` or `x/group` modules from the Cosmos SDK (v0.46 and later), then you can remove your authentication module completely and use instead the gRPC endpoints of the `MsgServer` or the CLI added in ibc-go v6.
+
+Please remember that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above.
+
+#### Host params
+
+The ICS27 host submodule default params have been updated to include the `AllowAllHostMsgs` wildcard `*`.
+This enables execution of any `sdk.Msg` type for ICS27 registered on the host chain `InterfaceRegistry`.
+
+```diff
+// AllowAllHostMsgs holds the string key that allows all message types on interchain accounts host module
+const AllowAllHostMsgs = "*"
+
+...
+
+// DefaultParams is the default parameter configuration for the host submodule
+func DefaultParams() Params {
+- return NewParams(DefaultHostEnabled, nil)
++ return NewParams(DefaultHostEnabled, []string{AllowAllHostMsgs})
+}
+```
+
+#### API breaking changes
+
+`SerializeCosmosTx` takes in a `[]proto.Message` instead of `[]sdk.Msg`. This allows for the serialization of proto messages without requiring the fulfillment of the `sdk.Msg` interface.
+
+The `27-interchain-accounts` genesis types have been moved to their own package: `modules/apps/27-interchain-accounts/genesis/types`.
+This change facilitates the addition of the ICS27 controller submodule `MsgServer` and avoids cyclic imports. This should have minimal disruption to chain developers integrating `27-interchain-accounts`.
+
+The ICS27 host submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper` now includes an additional parameter of type `ICS4Wrapper`.
+This provides the host submodule with the ability to correctly unwrap channel versions in the event of a channel reopening handshake.
+
+```diff
+func NewKeeper(
+  cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace,
+- channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper,
++ ics4Wrapper icatypes.ICS4Wrapper, channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper,
+  accountKeeper icatypes.AccountKeeper, scopedKeeper icatypes.ScopedKeeper, msgRouter icatypes.MessageRouter,
+) Keeper
+```
+
+### ICS29 - `NewKeeper` API change
+
+The `NewKeeper` function of ICS29 has been updated to remove the `paramSpace` parameter as it was unused.
+
+```diff
+func NewKeeper(
+- cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace,
+- ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper, portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper,
++ cdc codec.BinaryCodec, key storetypes.StoreKey,
++ ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper,
++ portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper,
+) Keeper {
+```
+
+### ICS20 - `SendTransfer` is no longer exported
+
+The `SendTransfer` function of ICS20 has been removed. IBC transfers should now be initiated with `MsgTransfer` and routed to the ICS20 `MsgServer`.
+
+See below for example:
+
+```go
+if handler := msgRouter.Handler(msgTransfer); handler != nil {
+  if err := msgTransfer.ValidateBasic(); err != nil {
+    return nil, err
+  }
+
+  res, err := handler(ctx, msgTransfer)
+  if err != nil {
+    return nil, err
+  }
+}
+```
+
+### ICS04 - `SendPacket` API change
+
+The `SendPacket` API has been simplified:
+
+```diff expandable
+// SendPacket is called by a module in order to send an IBC packet on a channel
+func (k Keeper) SendPacket(
+  ctx sdk.Context,
+  channelCap *capabilitytypes.Capability,
+- packet exported.PacketI,
+-) error {
++ sourcePort string,
++ sourceChannel string,
++ timeoutHeight clienttypes.Height,
++ timeoutTimestamp uint64,
++ data []byte,
++) (uint64, error) {
+```
+
+Callers no longer need to pass in a pre-constructed packet.
+The destination port/channel identifiers and the packet sequence will be determined by core IBC.
+`SendPacket` will return the packet sequence.
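The shift from caller-built packets to keeper-assigned sequences can be sketched with a minimal toy keeper. This is a hypothetical illustration only: it drops the context, capability and timeout-height parameters of the real API, and the `keeper` type below is not an ibc-go type:

```go
package main

import "fmt"

// keeper is a toy stand-in: core IBC now tracks the next send sequence
// per (port, channel) pair and returns it from SendPacket.
type keeper struct {
	nextSequence map[string]uint64
}

// SendPacket assigns the sequence itself instead of accepting a
// pre-constructed packet from the caller.
func (k *keeper) SendPacket(sourcePort, sourceChannel string, timeoutTimestamp uint64, data []byte) (uint64, error) {
	if len(data) == 0 {
		return 0, fmt.Errorf("packet data cannot be empty")
	}
	key := sourcePort + "/" + sourceChannel
	k.nextSequence[key]++
	return k.nextSequence[key], nil
}

func main() {
	k := &keeper{nextSequence: map[string]uint64{}}
	seq1, _ := k.SendPacket("transfer", "channel-0", 0, []byte("payload"))
	seq2, _ := k.SendPacket("transfer", "channel-0", 0, []byte("payload"))
	// sequences are assigned by the keeper, not the caller
	fmt.Println(seq1, seq2)
}
```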
+
+### IBC testing package
+
+The `SendPacket` API of the `Endpoint` type in `testing` has been simplified in the same way:
+
+```diff
+func (endpoint *Endpoint) SendPacket(
+- packet exported.PacketI,
+-) error {
++ timeoutHeight clienttypes.Height,
++ timeoutTimestamp uint64,
++ data []byte,
++) (uint64, error) {
+```
+
+Callers no longer need to pass in a pre-constructed packet. `SendPacket` will return the packet sequence.
+
+## Relayers
+
+* No relevant changes were made in this release.
+
+## IBC Light Clients
+
+* No relevant changes were made in this release.
diff --git a/docs/ibc/next/migrations/v6-to-v7.mdx b/docs/ibc/next/migrations/v6-to-v7.mdx
new file mode 100644
index 00000000..6f652e70
--- /dev/null
+++ b/docs/ibc/next/migrations/v6-to-v7.mdx
@@ -0,0 +1,358 @@
+---
+title: IBC-Go v6 to v7
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+## Chains
+
+Chains will perform automatic migrations to remove existing localhost clients and to migrate the solomachine to v3 of the protobuf definition.
+
+An optional upgrade handler has been added to prune expired tendermint consensus states. It may be used during any upgrade (from v7 onwards).
+Add the following to the upgrade handler in `app/app.go` to perform the optional state pruning.
+
+```go expandable
+import (
+  // ...
+  ibctmmigrations "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint/migrations"
+)
+
+// ...
+
+app.UpgradeKeeper.SetUpgradeHandler(
+  upgradeName,
+  func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+    // prune expired tendermint consensus states to save storage space
+    _, err := ibctmmigrations.PruneExpiredConsensusStates(ctx, app.Codec, app.IBCKeeper.ClientKeeper)
+    if err != nil {
+      return nil, err
+    }
+
+    return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+  },
+)
+```
+
+Check the logs to see how many consensus states are pruned.
+
+### Light client registration
+
+Chains must explicitly register the types of any light client modules they wish to integrate.
+
+#### Tendermint registration
+
+To register the tendermint client, modify the `app.go` file to include the tendermint `AppModuleBasic`:
+
+```diff expandable
+import (
+  // ...
++ ibctm "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint"
+)
+
+// ...
+
+ModuleBasics = module.NewBasicManager(
+  ...
+  ibc.AppModuleBasic{},
++ ibctm.AppModuleBasic{},
+  ...
+)
+```
+
+It may be useful to reference the [PR](https://github.com/cosmos/ibc-go/pull/2825) which added the `AppModuleBasic` for the tendermint client.
+
+#### Solo machine registration
+
+To register the solo machine client, modify the `app.go` file to include the solo machine `AppModuleBasic`:
+
+```diff expandable
+import (
+  // ...
++ solomachine "github.com/cosmos/ibc-go/v7/modules/light-clients/06-solomachine"
+)
+
+// ...
+
+ModuleBasics = module.NewBasicManager(
+  ...
+  ibc.AppModuleBasic{},
++ solomachine.AppModuleBasic{},
+  ...
+)
+```
+
+It may be useful to reference the [PR](https://github.com/cosmos/ibc-go/pull/2826) which added the `AppModuleBasic` for the solo machine client.
+ +### Testing package API + +The `SetChannelClosed` utility method in `testing/endpoint.go` has been updated to `SetChannelState`, which will take a `channeltypes.State` argument so that the `ChannelState` can be set to any of the possible channel states. + +## IBC Apps + +* No relevant changes were made in this release. + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +### `ClientState` interface changes + +The `VerifyUpgradeAndUpdateState` function has been modified. The client state and consensus state return values have been removed. + +Light clients **must** handle all management of client and consensus states including the setting of updated client state and consensus state in the client store. + +The `Initialize` method is now expected to set the initial client state, consensus state and any client-specific metadata in the provided store upon client creation. + +The `CheckHeaderAndUpdateState` method has been split into 4 new methods: + +* `VerifyClientMessage` verifies a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned if the `ClientMessage` fails to verify. + +* `CheckForMisbehaviour` checks for evidence of a misbehaviour in `Header` or `Misbehaviour` types. + +* `UpdateStateOnMisbehaviour` performs appropriate state changes on a `ClientState` given that misbehaviour has been detected and verified. + +* `UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. An error is returned if `ClientMessage` is of type `Misbehaviour`. Upon successful update, a list containing the updated consensus state height is returned. 
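The interaction between the four split methods can be sketched as a toy state machine. All types below (`clientMessage`, `clientState`, `updateClient`) are hypothetical stand-ins that only mirror the control flow, not ibc-go interfaces:

```go
package main

import "fmt"

// clientMessage is a toy stand-in for the new ClientMessage interface type.
type clientMessage struct {
	misbehaving bool
	height      uint64
}

// clientState is a toy stand-in for a light client's ClientState.
type clientState struct {
	frozen       bool
	latestHeight uint64
}

// verifyClientMessage rejects messages that cannot be trusted.
func (cs *clientState) verifyClientMessage(msg clientMessage) error {
	if msg.height == 0 && !msg.misbehaving {
		return fmt.Errorf("invalid client message")
	}
	return nil
}

// checkForMisbehaviour reports evidence of misbehaviour.
func (cs *clientState) checkForMisbehaviour(msg clientMessage) bool { return msg.misbehaving }

// updateStateOnMisbehaviour freezes the client once misbehaviour is verified.
func (cs *clientState) updateStateOnMisbehaviour(msg clientMessage) { cs.frozen = true }

// updateState stores the new state and returns the updated heights.
func (cs *clientState) updateState(msg clientMessage) []uint64 {
	cs.latestHeight = msg.height
	return []uint64{msg.height}
}

// updateClient mirrors how the 02-client submodule drives the split methods:
// verify first, then branch on misbehaviour, then update.
func updateClient(cs *clientState, msg clientMessage) ([]uint64, error) {
	if err := cs.verifyClientMessage(msg); err != nil {
		return nil, err
	}
	if cs.checkForMisbehaviour(msg) {
		cs.updateStateOnMisbehaviour(msg)
		return nil, nil
	}
	return cs.updateState(msg), nil
}

func main() {
	cs := &clientState{}
	heights, err := updateClient(cs, clientMessage{height: 5})
	fmt.Println(heights, cs.frozen, err)
}
```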
+
+The `CheckMisbehaviourAndUpdateState` function has been removed from the `ClientState` interface. This functionality is now encapsulated by the usage of `VerifyClientMessage`, `CheckForMisbehaviour` and `UpdateStateOnMisbehaviour`.
+
+The function `GetTimestampAtHeight` has been added to the `ClientState` interface. It should return the timestamp for a consensus state associated with the provided height.
+
+Prior to ibc-go/v7 the `ClientState` interface defined a method for each data type which was being verified in the counterparty state store.
+The state verification functions for all IBC data types have been consolidated into two generic methods, `VerifyMembership` and `VerifyNonMembership`.
+Both are expected to be provided with a standardised key path, `exported.Path`, as defined in [ICS 24 host requirements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). Membership verification requires callers to provide the marshalled value `[]byte`. Delay period values should be zero for non-packet processing verification. A zero proof height is now allowed by core IBC and may be passed into `VerifyMembership` and `VerifyNonMembership`. Light clients are responsible for returning an error if a zero proof height is invalid behaviour.
+
+See below for an example of how ibc-go now performs channel state verification.
+
+```go expandable
+merklePath := commitmenttypes.NewMerklePath(host.ChannelPath(portID, channelID))
+
+merklePath, err := commitmenttypes.ApplyPrefix(connection.GetCounterparty().GetPrefix(), merklePath)
+if err != nil {
+  return err
+}
+
+channelEnd, ok := channel.(channeltypes.Channel)
+if !ok {
+  return sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "invalid channel type %T", channel)
+}
+
+bz, err := k.cdc.Marshal(&channelEnd)
+if err != nil {
+  return err
+}
+
+if err := clientState.VerifyMembership(
+  ctx, clientStore, k.cdc, height,
+  0, 0, // skip delay period checks for non-packet processing verification
+  proof, merklePath, bz,
+); err != nil {
+  return sdkerrors.Wrapf(err, "failed channel state verification for client (%s)", clientID)
+}
+```
+
+### `Header` and `Misbehaviour`
+
+The `exported.Header` and `exported.Misbehaviour` interface types have been merged and renamed to the `ClientMessage` interface.
+
+The `GetHeight` function has been removed from `exported.Header` and thus is not included in the `ClientMessage` interface.
+
+### `ConsensusState`
+
+The `GetRoot` function has been removed from the consensus state interface since it was not used by core IBC.
+
+### Client keeper
+
+The keeper function `CheckMisbehaviourAndUpdateState` has been removed since the function `UpdateClient` can now handle updating the `ClientState` on a `ClientMessage` type, which can be any `Misbehaviour` implementation.
+
+### SDK message
+
+`MsgSubmitMisbehaviour` is deprecated since `MsgUpdateClient` can now submit a `ClientMessage` type, which can be any `Misbehaviour` implementation.
+
+The field `header` in `MsgUpdateClient` has been renamed to `client_message`.
+
+## Solomachine
+
+The `06-solomachine` client implementation has been simplified in ibc-go/v7. In-place store migrations have been added to migrate solomachine clients from `v2` to `v3`.
+ +### `ClientState` + +The `ClientState` protobuf message definition has been updated to remove the deprecated `bool` field `allow_update_after_proposal`. + +```diff +message ClientState { + option (gogoproto.goproto_getters) = false; + + uint64 sequence = 1; + bool is_frozen = 2 [(gogoproto.moretags) = "yaml:\"is_frozen\""]; + ConsensusState consensus_state = 3 [(gogoproto.moretags) = "yaml:\"consensus_state\""]; +- bool allow_update_after_proposal = 4 [(gogoproto.moretags) = "yaml:\"allow_update_after_proposal\""]; +} +``` + +### `Header` and `Misbehaviour` + +The `06-solomachine` protobuf message `Header` has been updated to remove the `sequence` field. This field was seen as redundant as the implementation can safely rely on the `sequence` value maintained within the `ClientState`. + +```diff expandable +message Header { + option (gogoproto.goproto_getters) = false; + +- uint64 sequence = 1; +- uint64 timestamp = 2; +- bytes signature = 3; +- google.protobuf.Any new_public_key = 4 [(gogoproto.moretags) = "yaml:\"new_public_key\""]; +- string new_diversifier = 5 [(gogoproto.moretags) = "yaml:\"new_diversifier\""]; ++ uint64 timestamp = 1; ++ bytes signature = 2; ++ google.protobuf.Any new_public_key = 3 [(gogoproto.moretags) = "yaml:\"new_public_key\""]; ++ string new_diversifier = 4 [(gogoproto.moretags) = "yaml:\"new_diversifier\""]; +} +``` + +Similarly, the `Misbehaviour` protobuf message has been updated to remove the `client_id` field. 
+ +```diff expandable +message Misbehaviour { + option (gogoproto.goproto_getters) = false; + +- string client_id = 1 [(gogoproto.moretags) = "yaml:\"client_id\""]; +- uint64 sequence = 2; +- SignatureAndData signature_one = 3 [(gogoproto.moretags) = "yaml:\"signature_one\""]; +- SignatureAndData signature_two = 4 [(gogoproto.moretags) = "yaml:\"signature_two\""]; ++ uint64 sequence = 1; ++ SignatureAndData signature_one = 2 [(gogoproto.moretags) = "yaml:\"signature_one\""]; ++ SignatureAndData signature_two = 3 [(gogoproto.moretags) = "yaml:\"signature_two\""]; +} +``` + +### `SignBytes` + +Most notably, the `SignBytes` protobuf definition has been modified to replace the `data_type` field with a new field, `path`. The `path` field is defined as `bytes` and represents a serialized [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements) standardized key path under which the `data` is stored. + +```diff +message SignBytes { + option (gogoproto.goproto_getters) = false; + + uint64 sequence = 1; + uint64 timestamp = 2; + string diversifier = 3; +- DataType data_type = 4 [(gogoproto.moretags) = "yaml:\"data_type\""]; ++ bytes path = 4; + bytes data = 5; +} +``` + +The `DataType` enum and all associated data types have been removed, greatly reducing the number of message definitions and complexity in constructing the `SignBytes` message type. Likewise, solomachine implementations must now use the serialized `path` value when constructing `SignatureAndData` for signature verification of `SignBytes` data. + +```diff +message SignatureAndData { + option (gogoproto.goproto_getters) = false; + + bytes signature = 1; +- DataType data_type = 2 [(gogoproto.moretags) = "yaml:\"data_type\""]; ++ bytes path = 2; + bytes data = 3; + uint64 timestamp = 4; +} +``` + +For more information, please refer to [ADR-007](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/docs/architecture/adr-007-solomachine-signbytes.md). 
+
+### IBC module constants
+
+IBC module constants have been moved from the `host` package to the `exported` package. Any usages will need to be updated.
+
+```diff expandable
+import (
+  / ...
+- host "github.com/cosmos/ibc-go/v7/modules/core/24-host"
++ ibcexported "github.com/cosmos/ibc-go/v7/modules/core/exported"
+  / ...
+)
+
+- host.ModuleName
++ ibcexported.ModuleName
+
+- host.StoreKey
++ ibcexported.StoreKey
+
+- host.QuerierRoute
++ ibcexported.QuerierRoute
+
+- host.RouterKey
++ ibcexported.RouterKey
+```
+
+## Upgrading to Cosmos SDK 0.47
+
+The following should be considered as complementary to [Cosmos SDK v0.47 UPGRADING.md](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc2/UPGRADING.md).
+
+### Protobuf
+
+Protobuf code generation, linting and formatting have been updated to leverage the `ghcr.io/cosmos/proto-builder:0.11.5` docker container. IBC protobuf definitions are now packaged and published to [buf.build/cosmos/ibc](https://buf.build/cosmos/ibc) via CI workflows. The `third_party/proto` directory has been removed in favour of dependency management using [buf.build](https://docs.buf.build/introduction).
+
+### App modules
+
+Legacy APIs of the `AppModule` interface have been removed from ibc-go modules. For example:
+
+```diff expandable
+- / Route implements the AppModule interface
+- func (am AppModule) Route() sdk.Route {
+-   return sdk.Route{}
+- }
+-
+- / QuerierRoute implements the AppModule interface
+- func (AppModule) QuerierRoute() string {
+-   return types.QuerierRoute
+- }
+-
+- / LegacyQuerierHandler implements the AppModule interface
+- func (am AppModule) LegacyQuerierHandler(*codec.LegacyAmino) sdk.Querier {
+-   return nil
+- }
+-
+- / ProposalContents doesn't return any content functions for governance proposals.
+- func (AppModule) ProposalContents(_ module.SimulationState) []simtypes.WeightedProposalContent {
+-   return nil
+- }
+```
+
+### Imports
+
+Imports for ics23 have been updated as the repository has been migrated from confio to cosmos.
+
+```diff
+import (
+  / ...
+- ics23 "github.com/confio/ics23/go"
++ ics23 "github.com/cosmos/ics23/go"
+  / ...
+)
+```
+
+Imports for gogoproto have been updated.
+
+```diff
+import (
+  / ...
+- "github.com/gogo/protobuf/proto"
++ "github.com/cosmos/gogoproto/proto"
+  / ...
+)
+```

diff --git a/docs/ibc/next/migrations/v7-to-v7_1.mdx b/docs/ibc/next/migrations/v7-to-v7_1.mdx
new file mode 100644
index 00000000..735ce786
--- /dev/null
+++ b/docs/ibc/next/migrations/v7-to-v7_1.mdx
@@ -0,0 +1,66 @@
+---
+title: IBC-Go v7 to v7.1
+description: This guide provides instructions for migrating to version v7.1.0 of ibc-go.
+---
+
+This guide provides instructions for migrating to version `v7.1.0` of ibc-go.
+
+There are four sections based on the four potential user groups of this document:
+
+* [Migrating from v7 to v7.1](#migrating-from-v7-to-v71)
+  * [Chains](#chains)
+  * [IBC Apps](#ibc-apps)
+  * [Relayers](#relayers)
+  * [IBC Light Clients](#ibc-light-clients)
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases.
+
+## Chains
+
+In the previous release of ibc-go, the localhost `v1` light client module was deprecated and removed. The ibc-go `v7.1.0` release introduces `v2` of the 09-localhost light client module.
+
+An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v7.2.0/modules/core/module.go#L127-L145) is configured in the core IBC module to set the localhost `ClientState` and sentinel `ConnectionEnd` in state.
+
+In order to use the 09-localhost client, chains must update the `AllowedClients` parameter in the 02-client submodule of core IBC.
This can be configured directly in the application upgrade handler or alternatively updated via the legacy governance parameter change proposal.
+We **strongly** recommend that chains perform this action so that intra-ledger communication can be carried out using the familiar IBC interfaces.
+
+See the upgrade handler code sample provided below or [follow this link](https://github.com/cosmos/ibc-go/blob/v7.2.0/testing/simapp/upgrades/upgrades.go#L85) for the upgrade handler used by the ibc-go simapp.
+
+```go expandable
+func CreateV7LocalhostUpgradeHandler(
+  mm *module.Manager,
+  configurator module.Configurator,
+  clientKeeper clientkeeper.Keeper,
+) upgradetypes.UpgradeHandler {
+  return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+    // explicitly update the IBC 02-client params, adding the localhost client type
+    params := clientKeeper.GetParams(ctx)
+    params.AllowedClients = append(params.AllowedClients, exported.Localhost)
+    clientKeeper.SetParams(ctx, params)
+
+    return mm.RunMigrations(ctx, configurator, vm)
+  }
+}
+```
+
+### Transfer migration
+
+An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v7.2.0/modules/apps/transfer/module.go#L111-L113) is configured in the transfer module to set the total amount in escrow for all denominations of coins that have been sent out. For each denomination a state entry is added with the total amount of coins in escrow, regardless of the channel from which they were transferred.
+
+## IBC Apps
+
+* No relevant changes were made in this release.
+
+## Relayers
+
+The event attribute `packet_connection` (`connectiontypes.AttributeKeyConnection`) has been deprecated.
+Please use the `connection_id` attribute (`connectiontypes.AttributeKeyConnectionID`), which is emitted by all channel events.
+Only send packet, receive packet, write acknowledgement, and acknowledge packet events used `packet_connection` previously.
+
+## IBC Light Clients
+
+* No relevant changes were made in this release.

diff --git a/docs/ibc/next/migrations/v7-to-v8.mdx b/docs/ibc/next/migrations/v7-to-v8.mdx
new file mode 100644
index 00000000..c0e19915
--- /dev/null
+++ b/docs/ibc/next/migrations/v7-to-v8.mdx
@@ -0,0 +1,217 @@
+---
+title: IBC-Go v7 to v8
+description: This guide provides instructions for migrating to version v8.0.0 of ibc-go.
+---
+
+This guide provides instructions for migrating to version `v8.0.0` of ibc-go.
+
+There are four sections based on the four potential user groups of this document:
+
+* [Migrating from v7 to v8](#migrating-from-v7-to-v8)
+  * [Chains](#chains)
+    * [Cosmos SDK v0.50 upgrade](#cosmos-sdk-v050-upgrade)
+    * [Authority](#authority)
+    * [Testing package](#testing-package)
+    * [Params migration](#params-migration)
+    * [Governance V1 migration](#governance-v1-migration)
+    * [Transfer migration](#transfer-migration)
+  * [IBC Apps](#ibc-apps)
+    * [ICS20 - Transfer](#ics20---transfer)
+    * [ICS27 - Interchain Accounts](#ics27---interchain-accounts)
+  * [Relayers](#relayers)
+  * [IBC Light Clients](#ibc-light-clients)
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases.
+
+## Chains
+
+The type of the `PortKeeper` field of the IBC keeper has been changed to `*portkeeper.Keeper`:
+
+```diff expandable
+/ Keeper defines each ICS keeper for IBC
+type Keeper struct {
+  / implements gRPC QueryServer interface
+  types.QueryServer
+
+  cdc codec.BinaryCodec
+
+  ClientKeeper clientkeeper.Keeper
+  ConnectionKeeper connectionkeeper.Keeper
+  ChannelKeeper channelkeeper.Keeper
+- PortKeeper portkeeper.Keeper
++ PortKeeper *portkeeper.Keeper
+  Router *porttypes.Router
+
+  authority string
+}
+```
+
+See [this PR](https://github.com/cosmos/ibc-go/pull/4703/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a) for the changes required in `app.go`.
+ +An extra parameter `totalEscrowed` of type `sdk.Coins` has been added to transfer module's [`NewGenesisState` function](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/types/genesis.go#L10). This parameter specifies the total amount of tokens that are in the module's escrow accounts. + +### Cosmos SDK v0.50 upgrade + +Version `v8.0.0` of ibc-go upgrades to Cosmos SDK v0.50. Please follow the [Cosmos SDK v0.50 upgrading guide](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/UPGRADING.md) to account for its API breaking changes. + +### Authority + +An authority identifier (e.g. an address) needs to be passed in the `NewKeeper` functions of the following keepers: + +* You must pass the `authority` to the ica/host keeper (implemented in [#3520](https://github.com/cosmos/ibc-go/pull/3520)). See [diff](https://github.com/cosmos/ibc-go/pull/3520/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff +/ app.go + +/ ICA Host keeper +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCFeeKeeper, / use ics29 fee as ics4Wrapper in middleware stack + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You must pass the `authority` to the ica/controller keeper (implemented in [#3590](https://github.com/cosmos/ibc-go/pull/3590)). 
See [diff](https://github.com/cosmos/ibc-go/pull/3590/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff +/ app.go + +/ ICA Controller keeper +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCFeeKeeper, / use ics29 fee as ics4Wrapper in middleware stack + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You must pass the `authority` to the ibctransfer keeper (implemented in [#3553](https://github.com/cosmos/ibc-go/pull/3553)). See [diff](https://github.com/cosmos/ibc-go/pull/3553/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff expandable +/ app.go + +/ Create Transfer Keeper and pass IBCFeeKeeper as expected Channel and PortKeeper +/ since fee middleware will wrap the IBCKeeper for underlying application. +app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCFeeKeeper, / ISC4 Wrapper: fee IBC middleware + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You should pass the `authority` to the IBC keeper (implemented in [#3640](https://github.com/cosmos/ibc-go/pull/3640) and [#3650](https://github.com/cosmos/ibc-go/pull/3650)). 
See [diff](https://github.com/cosmos/ibc-go/pull/3640/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a):
+
+```diff expandable
+/ app.go
+
+/ IBC Keepers
+app.IBCKeeper = ibckeeper.NewKeeper(
+  appCodec,
+  keys[ibcexported.StoreKey],
+  app.GetSubspace(ibcexported.ModuleName),
+  app.StakingKeeper,
+  app.UpgradeKeeper,
+  scopedIBCKeeper,
++ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+)
+```
+
+The authority determines the transaction signer allowed to execute certain messages (e.g. `MsgUpdateParams`).
+
+### Testing package
+
+* The function `SetupWithGenesisAccounts` has been removed.
+* The function [`RelayPacketWithResults`](https://github.com/cosmos/ibc-go/blob/v8.0.0/testing/path.go#L66) has been added. This function returns the result of the packet receive transaction, the acknowledgement written on the receiving chain, and an error if a relay step fails or the packet commitment does not exist on either chain.
+
+### Params migration
+
+Params are now self managed in the following submodules:
+
+* ica/controller [#3590](https://github.com/cosmos/ibc-go/pull/3590)
+* ica/host [#3520](https://github.com/cosmos/ibc-go/pull/3520)
+* ibc/connection [#3650](https://github.com/cosmos/ibc-go/pull/3650)
+* ibc/client [#3640](https://github.com/cosmos/ibc-go/pull/3640)
+* ibc/transfer [#3553](https://github.com/cosmos/ibc-go/pull/3553)
+
+Each module has a corresponding `MsgUpdateParams` message with a `Params` field which can be specified in full to update the module's `Params`.
+
+Legacy params subspaces must still be initialised in `app.go` in order to successfully migrate from `x/params` to the new self-contained approach. See [this](https://github.com/cosmos/ibc-go/blob/v8.0.0/testing/simapp/app.go#L1007-L1012) for reference.
+
+For new chains which do not rely on migration of parameters from `x/params`, an expected interface has been added for each module.
This allows chain developers to provide `nil` as the `legacySubspace` argument to `NewKeeper` functions. + +### Governance V1 migration + +Proposals have been migrated to [gov v1 messages](https://docs.cosmos.network/v0.50/modules/gov#messages) (see [#4620](https://github.com/cosmos/ibc-go/pull/4620)). The proposal `ClientUpdateProposal` has been deprecated and [`MsgRecoverClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L121-L134) should be used instead. Likewise, the proposal `UpgradeProposal` has been deprecated and [`MsgIBCSoftwareUpgrade`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L139-L154) should be used instead. Both proposals will be removed in the next major release. + +`MsgRecoverClient` and `MsgIBCSoftwareUpgrade` will only be allowed to be executed if the signer is the authority designated at the time of instantiating the IBC keeper. So please make sure that the correct authority is provided to the IBC keeper. + +Remove the `UpgradeProposalHandler` and `UpdateClientProposalHandler` from the `BasicModuleManager`: + +```diff expandable +app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +- ibcclientclient.UpdateClientProposalHandler, +- ibcclientclient.UpgradeProposalHandler, + }, + ), +}) +``` + +Support for in-flight legacy recover client proposals (i.e. `ClientUpdateProposal`) will be made for v8, but chains should use `MsgRecoverClient` only afterwards to avoid in-flight client recovery failing when upgrading to v9. See [this issue](https://github.com/cosmos/ibc-go/issues/4721) for more information. 
+ +Please note that ibc-go offers facilities to test an ibc-go upgrade: + +* All e2e tests of the repository can be [run with custom Docker chain images](https://github.com/cosmos/ibc-go/blob/c5bac5e03a0eae449b9efe0d312258115c1a1e85/e2e/README.md#running-tests-with-custom-images). +* An [importable workflow](https://github.com/cosmos/ibc-go/blob/c5bac5e03a0eae449b9efe0d312258115c1a1e85/e2e/README.md#importable-workflow) that can be used from any other repository to test chain upgrades. + +### Transfer migration + +An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/module.go#L136) is configured in the transfer module to set the [denomination metadata](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/proto/cosmos/bank/v1beta1/bank.proto#L96-L125) for the IBC denominations of all vouchers minted by the transfer module. + +## IBC Apps + +### ICS20 - Transfer + +* The function `IsBound` has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/keeper/keeper.go#L98) and made unexported. + +### ICS27 - Interchain Accounts + +* Functions [`SerializeCosmosTx`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/codec.go#L32) and [`DeserializeCosmosTx`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/codec.go#L76) now accept an extra parameter `encoding` of type `string` that specifies the format in which the transaction messages are marshaled. Both [protobuf and proto3 JSON formats](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/metadata.go#L14-L17) are supported. +* The function `IsBound` of controller submodule has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/controller/keeper/keeper.go#L111) and made unexported. 
+* The function `IsBound` of host submodule has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/host/keeper/keeper.go#L94) and made unexported. + +## Relayers + +* Getter functions in `MsgChannelOpenInitResponse`, `MsgChannelOpenTryResponse`, `MsgTransferResponse`, `MsgRegisterInterchainAccountResponse` and `MsgSendTxResponse` have been removed. The fields can be accessed directly. +* `channeltypes.EventTypeTimeoutPacketOnClose` (where `channeltypes` is an import alias for `"github.com/cosmos/ibc-go/v8/modules/core/04-channel"`) has been removed, since core IBC does not emit any event with this key. +* Attribute with key `counterparty_connection_id` has been removed from event with key `connectiontypes.EventTypeConnectionOpenInit` (where `connectiontypes` is an import alias for `"github.com/cosmos/ibc-go/v8/modules/core/03-connection/types"`) and attribute with key `counterparty_channel_id` has been removed from event with key `channeltypes.EventTypeChannelOpenInit` (where `channeltypes` is an import alias for `"github.com/cosmos/ibc-go/v8/modules/core/04-channel"`) since both (counterparty connection ID and counterparty channel ID) are empty on `ConnectionOpenInit` and `ChannelOpenInit` respectively. 
+* As part of the migration to [governance V1 messages](#governance-v1-migration) the following changes in events have been made: + +```diff expandable +/ IBC client events vars +var ( + EventTypeCreateClient = "create_client" + EventTypeUpdateClient = "update_client" + EventTypeUpgradeClient = "upgrade_client" + EventTypeSubmitMisbehaviour = "client_misbehaviour" +- EventTypeUpdateClientProposal = "update_client_proposal" +- EventTypeUpgradeClientProposal = "upgrade_client_proposal" ++ EventTypeRecoverClient = "recover_client" ++ EventTypeScheduleIBCSoftwareUpgrade = "schedule_ibc_software_upgrade" + EventTypeUpgradeChain = "upgrade_chain" +) +``` + +## IBC Light Clients + +* Functions `Pretty` and `String` of type `MerklePath` have been [removed](https://github.com/cosmos/ibc-go/pull/4459/files#diff-dd94ec1dde9b047c0cdfba204e30dad74a81de202e3b09ac5b42f493153811af). diff --git a/docs/ibc/next/migrations/v7_2-to-v7_3.mdx b/docs/ibc/next/migrations/v7_2-to-v7_3.mdx new file mode 100644 index 00000000..5341a493 --- /dev/null +++ b/docs/ibc/next/migrations/v7_2-to-v7_3.mdx @@ -0,0 +1,46 @@ +--- +title: IBC-Go v7.2 to v7.3 +description: This guide provides instructions for migrating to version v7.3.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v7.3.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v7.2 to v7.3](#migrating-from-v72-to-v73) + * [Chains](#chains) + * [IBC Apps](#ibc-apps) + * [Relayers](#relayers) + * [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +* No relevant changes were made in this release. + +## IBC Apps + +A set of interfaces have been added that IBC applications may optionally implement. 
Developers interested in integrating their applications with the [callbacks middleware](/docs/ibc/next/middleware/callbacks/overview) should implement these interfaces so that the callbacks middleware can retrieve the desired callback addresses on the source and destination chains and execute actions on packet lifecycle events. The interfaces are [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/05-port/types/module.go#L142-L147), [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/exported/packet.go#L43-L52) and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/exported/packet.go#L36-L41). + +Sample implementations are available for reference. For `transfer`: + +* [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/ibc_module.go#L303-L313), +* [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/types/packet.go#L85-L105) +* and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/types/packet.go#L74-L83). + +For `27-interchain-accounts`: + +* [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/controller/ibc_middleware.go#L258-L268), +* [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/types/packet.go#L94-L114) +* and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/types/packet.go#L78-L92). + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +### 06-solomachine + +Solo machines are now expected to sign data on a path that 1) does not include a connection prefix (e.g `ibc`) and 2) does not escape any characters. See PR [#4429](https://github.com/cosmos/ibc-go/pull/4429) for more details. 
We recommend **NOT** using the solo machine light client of versions lower than v7.3.0.

diff --git a/docs/ibc/next/migrations/v8-to-v8_1.mdx b/docs/ibc/next/migrations/v8-to-v8_1.mdx
new file mode 100644
index 00000000..0537d07d
--- /dev/null
+++ b/docs/ibc/next/migrations/v8-to-v8_1.mdx
@@ -0,0 +1,38 @@
+---
+title: IBC-Go v8 to v8.1
+description: This guide provides instructions for migrating to version v8.1.0 of ibc-go.
+---
+
+This guide provides instructions for migrating to version `v8.1.0` of ibc-go.
+
+There are four sections based on the four potential user groups of this document:
+
+* [Migrating from v8 to v8.1](#migrating-from-v8-to-v81)
+  * [Chains](#chains)
+  * [IBC apps](#ibc-apps)
+  * [Relayers](#relayers)
+  * [IBC light clients](#ibc-light-clients)
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases.
+
+## Chains
+
+### `04-channel` params migration
+
+Self-managed [params](https://github.com/cosmos/ibc-go/blob/v8.1.0/proto/ibc/core/channel/v1/channel.proto#L183-L187) have been added for the `04-channel` module. The params include the `upgrade_timeout`, which is used in channel upgradability to specify the interval of time during which the counterparty chain must flush all in-flight packets on its end and move to the `FLUSH_COMPLETE` state. An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.1.0/modules/core/module.go#L162-L166) is configured in the `04-channel` module that sets the default params (with a default upgrade timeout of 10 minutes). The module has a corresponding [`MsgUpdateParams` message](https://github.com/cosmos/ibc-go/blob/v8.1.0/proto/ibc/core/channel/v1/tx.proto#L435-L447) with a `Params` field which can be specified in full to update the module's `Params`.
+
+### Fee migration
+
+In ibc-go v8.1.0 an improved, more efficient escrow calculation of fees for packet incentivisation has been introduced (see [this issue](https://github.com/cosmos/ibc-go/issues/5509) for more information). Before v8.1.0 the amount escrowed was `(RecvFee + AckFee + TimeoutFee)`; from ibc-go v8.1.0, the calculation is changed to `Max(RecvFee + AckFee, TimeoutFee)`. In order to guarantee that the correct amount of fees are refunded for packets that are in-flight during the upgrade to ibc-go v8.1.0, an [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.1.0/modules/apps/29-fee/module.go#L113-L115) is configured in the `29-fee` module to refund the leftover fees (i.e. `(RecvFee + AckFee + TimeoutFee) - Max(RecvFee + AckFee, TimeoutFee)`) that otherwise would not be refunded when the packet lifecycle completes and the new calculation is used.
+
+## IBC apps
+
+* No relevant changes were made in this release.
+
+## Relayers
+
+* No relevant changes were made in this release.
+
+## IBC light clients
+
+* No relevant changes were made in this release.

diff --git a/docs/ibc/next/migrations/v8_1-to-v10.mdx b/docs/ibc/next/migrations/v8_1-to-v10.mdx
new file mode 100644
index 00000000..f39da766
--- /dev/null
+++ b/docs/ibc/next/migrations/v8_1-to-v10.mdx
@@ -0,0 +1,285 @@
+---
+title: IBC-Go v8.1 to v10
+description: This guide provides instructions for migrating to a new version of ibc-go.
+---
+
+This guide provides instructions for migrating to a new version of ibc-go.
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. In addition, for this release, the 08-wasm module has been released as v10, and the callbacks middleware has been moved into the ibc-go module itself.
+
+Diff examples are shown after the list of overall changes:
+
+* To add support for IBC v2, chains will need to wire up a new IBC v2 transfer stack
+* Chains will need to wire up the new light client modules
+* Chains will need to update Keeper construction calls to comply with the new signatures
+* Chains will need to remove the route for the legacy proposal handler for 02-client from their `app/app.go`
+* Chains will need to remove the capability keeper and all related setup, including the scoped keepers, from their `app/app.go`
+* Chains will need to remove the ibc fee middleware (29-fee)
+* Chains using the 08-wasm module will need to update their imports and usage of `github.com/cosmos/ibc-go/modules/light-clients/08-wasm/` to `github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10`
+* Chains using the callbacks middleware will need to update their imports and usage of `github.com/cosmos/ibc-go/modules/apps/callbacks` to `github.com/cosmos/ibc-go/v10/modules/apps/callbacks`
+
+To add IBC v2 support, wire up a new transfer stack. The example below shows the stack wired up with the IBC callbacks module:
+
+```diff
++ var ibcv2TransferStack ibcapi.IBCModule
++ ibcv2TransferStack = transferv2.NewIBCModule(app.TransferKeeper)
++ ibcv2TransferStack = ibccallbacksv2.NewIBCMiddleware(
++   transferv2.NewIBCModule(app.TransferKeeper),
++   app.IBCKeeper.ChannelKeeperV2,
++   wasmStackIBCHandler,
++   app.IBCKeeper.ChannelKeeperV2,
++   maxCallbackGas,
++ )
+```
+
+Wire up each light client as a separate module and add them to the client keeper router.
Example below for 07-tendermint and 08-wasm: + +```diff ++ / Light client modules ++ clientKeeper := app.IBCKeeper.ClientKeeper ++ storeProvider := app.IBCKeeper.ClientKeeper.GetStoreProvider() ++ ++ tmLightClientModule := ibctm.NewLightClientModule(appCodec, storeProvider) ++ clientKeeper.AddRoute(ibctm.ModuleName, &tmLightClientModule) ++ ++ wasmLightClientModule := ibcwasm.NewLightClientModule(app.WasmClientKeeper, storeProvider) ++ clientKeeper.AddRoute(ibcwasmtypes.ModuleName, &wasmLightClientModule) +``` + +Remove ibc fee module name (if used) from module account permissions: + +```diff + / app.go + ... + / module account permissions + var maccPerms = map[string][]string{ + ... +- ibcfeetypes.ModuleName: nil, + ... + } +``` + +Remove `CapabilityKeeper`, `IBCFeeKeeper` and all `capabilitykeeper.ScopedKeeper` Scoped keepers from the App struct: + +```diff expandable + / ChainApp extended ABCI application + type ChainApp struct { + ... +- CapabilityKeeper *capabilitykeeper.Keeper + ... +- IBCFeeKeeper ibcfeekeeper.Keeper + ... +- ScopedIBCKeeper capabilitykeeper.ScopedKeeper +- ScopedICAHostKeeper capabilitykeeper.ScopedKeeper +- ScopedICAControllerKeeper capabilitykeeper.ScopedKeeper +- ScopedTransferKeeper capabilitykeeper.ScopedKeeper +- ScopedIBCFeeKeeper capabilitykeeper.ScopedKeeper + ... + } + ... +- app.ScopedIBCKeeper = scopedIBCKeeper +- app.ScopedTransferKeeper = scopedTransferKeeper +- app.ScopedWasmKeeper = scopedWasmKeeper +- app.ScopedICAHostKeeper = scopedICAHostKeeper +- app.ScopedICAControllerKeeper = scopedICAControllerKeeper +``` + +Remove capability and ibc fee middleware store keys from the `NewKVStoreKeys` call: + +```diff +... + keys := storetypes.NewKVStoreKeys( + ... +- capabilitytypes.StoreKey, +- ibcfeetypes.StoreKey, + ... + } +``` + +Remove the in-memory store keys previously used by the capability module: + +```diff +- memKeys := storetypes.NewMemoryStoreKeys(capabilitytypes.MemStoreKey) +... 
+- app.MountMemoryStores(memKeys) +``` + +Remove creation of the capability keeper: + +```diff expandable +- / add capability keeper and ScopeToModule for ibc module +- app.CapabilityKeeper = capabilitykeeper.NewKeeper( +- appCodec, +- keys[capabilitytypes.StoreKey], +- memKeys[capabilitytypes.MemStoreKey], +- ) + +- scopedIBCKeeper := app.CapabilityKeeper.ScopeToModule(ibcexported.ModuleName) +- scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) +- scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) +- scopedTransferKeeper := app.CapabilityKeeper.ScopeToModule(ibctransfertypes.ModuleName) +- scopedWasmKeeper := app.CapabilityKeeper.ScopeToModule(wasmtypes.ModuleName) +- app.CapabilityKeeper.Seal() +``` + +Remove the legacy route for the client keeper: + +```diff +... + govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). +- AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+- AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper)) ++ AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)) +``` + +Update Core IBC Keeper constructor: + +```diff + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, +- keys[ibcexported.StoreKey], ++ runtime.NewKVStoreService(keys[ibcexported.StoreKey]), + app.GetSubspace(ibcexported.ModuleName), +- app.StakingKeeper, + app.UpgradeKeeper, +- scopedIBCKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) +``` + +Update IBC Transfer keeper constructor: + +```diff expandable + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, +- keys[ibctransfertypes.StoreKey], ++ runtime.NewKVStoreService(keys[ibctransfertypes.StoreKey]), + app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, + app.IBCKeeper.ChannelKeeper, +- app.IBCKeeper.PortKeeper, ++ app.MsgServiceRouter(), + app.AccountKeeper, + app.BankKeeper, +- scopedTransferKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) +``` + +Update ICA Host keeper constructor, notice the removal of the `WithQueryRouter` call in particular: + +```diff expandable + app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, +- keys[icahosttypes.StoreKey], ++ runtime.NewKVStoreService(keys[icahosttypes.StoreKey]), + app.GetSubspace(icahosttypes.SubModuleName), +- app.IBCFeeKeeper, / use ics29 fee as ics4Wrapper in middleware stack + app.IBCKeeper.ChannelKeeper, +- app.IBCKeeper.PortKeeper, ++ app.IBCKeeper.ChannelKeeper, + app.AccountKeeper, +- scopedICAHostKeeper, + app.MsgServiceRouter(), ++ app.GRPCQueryRouter(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) +- app.ICAHostKeeper.WithQueryRouter(app.GRPCQueryRouter()) +``` + +Remove IBC Fee Module keeper: + +```diff +- app.IBCFeeKeeper = ibcfeekeeper.NewKeeper( +- appCodec, keys[ibcfeetypes.StoreKey], +- app.IBCKeeper.ChannelKeeper, / may be replaced with IBC 
middleware +- app.IBCKeeper.ChannelKeeper, +- app.IBCKeeper.PortKeeper, app.AccountKeeper, app.BankKeeper, +- ) +``` + +Update Transfer stack to remove the fee middleware. The example below shows the correct way to wire up a middleware stack with the IBC callbacks middleware: + +```diff expandable + / Create Transfer Stack + var transferStack porttypes.IBCModule + transferStack = transfer.NewIBCModule(app.TransferKeeper) +- transferStack = ibccallbacks.NewIBCMiddleware(transferStack, app.IBCFeeKeeper, wasmStackIBCHandler, maxCallbackGas) +- transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) ++ / callbacks wraps the transfer stack as its base app, and uses PacketForwardKeeper as the ICS4Wrapper ++ / i.e. packet-forward-middleware is higher on the stack and sits between callbacks and the ibc channel keeper ++ / Since this is the lowest level middleware of the transfer stack, it should be the first entrypoint for transfer keeper's ++ / WriteAcknowledgement. ++ cbStack := ibccallbacks.NewIBCMiddleware(transferStack, app.PacketForwardKeeper, wasmStackIBCHandler, maxCallbackGas) +transferStack = packetforward.NewIBCMiddleware( +- transferStack, ++ cbStack, + app.PacketForwardKeeper, + 0, + packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp, + ) ++ app.TransferKeeper.WithICS4Wrapper(cbStack) +``` + +Remove ibc fee middleware and any empty IBCModule (often dubbed `noAuthzModule`) from the ICA Controller stack creation: + +```diff +- var noAuthzModule porttypes.IBCModule +- icaControllerStack = icacontroller.NewIBCMiddleware(noAuthzModule, app.ICAControllerKeeper) +- icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper) ++ icaControllerStack = icacontroller.NewIBCMiddleware(app.ICAControllerKeeper) +``` + +Remove ibc fee middleware from ICA Host stack creation: + +```diff + icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper) +- icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper) +``` + 
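+
+Taken together, the ICA portions of this migration reduce to the wiring below. This is a sketch assembled from the diffs above (variable names follow the examples in this guide), not a drop-in snippet:
+
+```go
+/ Post-migration ICA stacks: no scoped keepers, no fee middleware.
+icaControllerStack := icacontroller.NewIBCMiddleware(app.ICAControllerKeeper)
+icaHostStack := icahost.NewIBCModule(app.ICAHostKeeper)
+
+/ Register the stacks on the IBC router as before.
+ibcRouter := porttypes.NewRouter()
+ibcRouter.AddRoute(icacontrollertypes.SubModuleName, icaControllerStack).
+  AddRoute(icahosttypes.SubModuleName, icaHostStack)
+app.IBCKeeper.SetRouter(ibcRouter)
+```
+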
+Update the module manager creation by removing the capability module, fee module and updating the tendermint app module constructor: + +```diff + app.ModuleManager = module.NewManager( + ... +- capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + ... +- ibcfee.NewAppModule(app.IBCFeeKeeper), + ... +- ibctm.NewAppModule(), ++ ibctm.NewAppModule(tmLightClientModule), + ... + ) +``` + +Remove the capability module and ibc fee middleware from `SetOrderBeginBlockers`, `SetOrderEndBlockers`, `SetOrderInitGenesis` and `SetOrderExportGenesis`: + +```diff +- capabilitytypes.ModuleName, +- ibcfeetypes.ModuleName, +``` + +If you use 08-wasm, you will need to update the go module that is used for `QueryPlugins` and `AcceptListStargateQuerier`. + +```diff +- wasmLightClientQuerier := ibcwasmtypes.QueryPlugins{ ++ wasmLightClientQuerier := ibcwasmkeeper.QueryPlugins{ +- Stargate: ibcwasmtypes.AcceptListStargateQuerier([]string{ ++ Stargate: ibcwasmkeeper.AcceptListStargateQuerier([]string{ + "/ibc.core.client.v1.Query/ClientState", + "/ibc.core.client.v1.Query/ConsensusState", + "/ibc.core.connection.v1.Query/Connection", +- }), ++ }, app.GRPCQueryRouter()), + } +``` + +If you use 08-wasm, you will need to use the wasm client keeper rather than the go module to initialize pinned codes: + +```diff +- if err := ibcwasmkeeper.InitializePinnedCodes(ctx); err != nil { +- panic(fmt.Sprintf("ibcwasmkeeper failed initialize pinned codes %s", err)) ++ if err := app.WasmClientKeeper.InitializePinnedCodes(ctx); err != nil { ++ panic(fmt.Sprintf("WasmClientKeeper failed initialize pinned codes %s", err)) ++ } +``` diff --git a/docs/ibc/next/security-audits.mdx b/docs/ibc/next/security-audits.mdx new file mode 100644 index 00000000..3d6a52d1 --- /dev/null +++ b/docs/ibc/next/security-audits.mdx @@ -0,0 +1,176 @@ +--- +title: "Security Audits" +description: "Comprehensive security audit reports for IBC-Go protocol and features" +--- + +## Overview + +The IBC-Go protocol has 
undergone multiple comprehensive security audits by leading blockchain security firms. These audits cover various components and features of the IBC protocol, ensuring robust security across all major functionality areas. Each audit provides an independent assessment of code quality, potential vulnerabilities, and architectural design. + +## Available Audit Reports + +### IBC v2 Protocol Audit + +**Auditor**: Collaborative Audit Team +**Completion Date**: April 2025 +**Pages**: 74 +**Audited Commit**: `79218a531e769bb5c29022d50ef017bd81e4bd9b` +**Scope**: IBC v2 protocol implementation + +This comprehensive audit covers the IBC v2 protocol implementation that simplifies the IBC protocol by removing channel and connection handshakes, minimizing the application interface, and enabling connectivity with new domains like Ethereum while maintaining backward compatibility with existing IBC channels. + + + Complete security assessment of IBC v2 protocol implementation (74 pages) + + +### ICS-20 Token Transfer v2 + +**Auditor**: Atredis Partners +**Completion Date**: September 2024 +**Pages**: 41 +**Features Covered**: + +- Multi-denomination support +- Memo field enhancements +- Forwarding middleware +- Path unwinding capabilities + + + Security assessment of ICS-20 v2 token transfer features (41 pages) + + +### Channel Upgrades + +**Auditor**: Atredis Partners +**Completion Date**: March 2024 +**Version**: Report v1.1 +**Pages**: 38 +**Features Covered**: + +- Channel upgrade handshakes +- Timeout mechanisms +- State machine verification +- Upgrade cancellation logic + + + Assessment of IBC channel upgrade functionality (38 pages) + + +### 08-WASM Light Client + +**Multiple Audits Available**: + +##### Halborn Security Audit + +**Auditor**: Halborn +**Completion Date**: February 2023 +**Pages**: 55 +**Focus**: WASM light client implementation security + + + Halborn security assessment of WASM light client (55 pages) + + +##### Ethan Frey Review + +**Reviewer**: Ethan 
Frey +**Type**: Technical Review +**Focus**: WASM client architecture and implementation + + + Technical review of WASM client implementation + + +### Interchain Accounts (ICS-27) + +**Auditor**: Trail of Bits +**Pages**: 42 +**Features Covered**: + +- Controller and host chain implementations +- Authentication mechanisms +- Message routing and execution +- Security boundaries and access controls + + + Trail of Bits assessment of Interchain Accounts (42 pages) + + +## Key Security Areas + +These audits collectively cover: + +### Protocol Security + +- Core IBC protocol mechanics +- Handshake protocols and state machines +- Timeout and error handling +- Proof verification systems + +### Feature Security + +- Token transfer mechanisms +- Cross-chain account control +- Light client implementations +- Channel upgrade procedures + +### Implementation Security + +- Memory safety and resource management +- Cryptographic operations +- State consistency guarantees +- Access control and permissions + +## Recommendations for Developers + +When building with IBC-Go: + +1. **Review Relevant Audits**: Consult the audit reports for features you're implementing +2. **Follow Security Patterns**: Adopt the security practices recommended in the audits +3. **Test Thoroughly**: Include security testing based on audit findings +4. **Stay Updated**: Monitor for security advisories and updates +5. 
**Report Vulnerabilities**: Follow responsible disclosure practices + +## Continuous Security + +The IBC-Go team maintains an ongoing commitment to security through: + +- Regular audits of new features and major releases +- Rapid response to security disclosures +- Transparent communication via security advisories +- Active collaboration with security researchers +- Continuous improvement based on audit findings + +## Security Disclosure + +For security-related inquiries or to report potential vulnerabilities, please follow the [IBC-Go Security Policy](https://github.com/cosmos/ibc-go/security/policy). + +## Additional Resources + +- [IBC Protocol Specification](https://github.com/cosmos/ibc) +- [IBC-Go GitHub Repository](https://github.com/cosmos/ibc-go) +- [Security Best Practices](/docs/ibc/next/ibc/best-practices) diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/active-channels.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/active-channels.mdx new file mode 100644 index 00000000..af815ffd --- /dev/null +++ b/docs/ibc/v10.1.x/apps/interchain-accounts/active-channels.mdx @@ -0,0 +1,44 @@ +--- +title: Active Channels +description: The Interchain Accounts module uses either ORDERED or UNORDERED channels. +--- + +The Interchain Accounts module uses either [ORDERED or UNORDERED](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#ordering) channels. + +When using `ORDERED` channels, the order of transactions when sending packets from a controller to a host chain is maintained. + +When using `UNORDERED` channels, there is no guarantee that the order of transactions when sending packets from the controller to the host chain is maintained. If no ordering is specified in `MsgRegisterInterchainAccount`, then the default ordering for new ICA channels is `UNORDERED`. + +> A limitation when using ORDERED channels is that when a packet times out the channel will be closed. 
+
+In the case of a channel closing, a controller chain needs to be able to regain access to the interchain account registered on this channel. `Active Channels` enable this functionality.
+
+When an Interchain Account is registered using `MsgRegisterInterchainAccount`, a new channel is created on a particular port. During the `OnChanOpenAck` and `OnChanOpenConfirm` steps (on controller & host chain respectively) the `Active Channel` for this interchain account is stored in state.
+
+It is possible to create a new channel using the same controller chain portID if the previously set `Active Channel` is now in a `CLOSED` state. This channel creation can be initialized programmatically by sending a new `MsgChannelOpenInit` message like so:
+
+```go
+msg := channeltypes.NewMsgChannelOpenInit(
+  portID,
+  string(versionBytes),
+  channeltypes.ORDERED,
+  []string{connectionID},
+  icatypes.HostPortID,
+  authtypes.NewModuleAddress(icatypes.ModuleName).String(),
+)
+
+handler := keeper.msgRouter.Handler(msg)
+res, err := handler(ctx, msg)
+if err != nil {
+  return err
+}
+```
+
+Alternatively, any relayer operator may initiate a new channel handshake for this interchain account once the previously set `Active Channel` is in a `CLOSED` state. This is done by initiating the channel handshake on the controller chain using the same portID associated with the interchain account in question.
+
+It is important to note that once a channel has been opened for a given interchain account, new channels cannot be opened for this account until the currently set `Active Channel` is set to `CLOSED`.
+
+## Future improvements
+
+Future versions of the ICS-27 protocol and the Interchain Accounts module will likely use a new channel type that provides ordering of packets without the channel closing in the event of a packet timing out, thus removing the need for `Active Channels` entirely. 
+The following is a list of issues which will provide the infrastructure to make this possible:
+
+* [IBC Channel Upgrades](https://github.com/cosmos/ibc-go/issues/1599)
+* [Implement ORDERED\_ALLOW\_TIMEOUT logic in 04-channel](https://github.com/cosmos/ibc-go/issues/1661)
+* [Add ORDERED\_ALLOW\_TIMEOUT as supported ordering in 03-connection](https://github.com/cosmos/ibc-go/issues/1662)
+* [Allow ICA channels to be opened as ORDERED\_ALLOW\_TIMEOUT](https://github.com/cosmos/ibc-go/issues/1663)
diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/auth-modules.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/auth-modules.mdx
new file mode 100644
index 00000000..83028db7
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/interchain-accounts/auth-modules.mdx
@@ -0,0 +1,21 @@
+---
+title: Authentication Modules
+---
+
+## Synopsis
+
+Authentication modules enable application developers to perform custom logic when interacting with the Interchain Accounts controller submodule's `MsgServer`.
+
+The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified. There may exist many different types of authentication which are desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic.
+
+In ibc-go, authentication modules can communicate with the controller submodule by passing messages through `baseapp`'s `MsgServiceRouter`. To implement an authentication module, the `IBCModule` interface need not be fulfilled; it is only required to fulfill Cosmos SDK's `AppModuleBasic` interface, just like any regular Cosmos SDK application module.
+
+The authentication module must:
+
+- Authenticate interchain account owners.
+- Track the associated interchain account address for an owner. 
+- Send packets on behalf of an owner (after authentication).
+
+## Integration into `app.go` file
+
+To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/v10.1.x/apps/interchain-accounts/integration#example-integration).
diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/client.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/client.mdx
new file mode 100644
index 00000000..fb058d88
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/interchain-accounts/client.mdx
@@ -0,0 +1,200 @@
+---
+title: Client
+description: >-
+  A user can query and interact with the Interchain Accounts module using the
+  CLI. Use the --help flag to discover the available commands:
+---
+
+## CLI
+
+A user can query and interact with the Interchain Accounts module using the CLI. Use the `--help` flag to discover the available commands:
+
+```shell
+simd query interchain-accounts --help
+```
+
+> Please note that this section does not document all the available commands, only those that warrant extra documentation beyond the built-in command-line help.
+
+### Controller
+
+A user can query and interact with the controller submodule.
+
+#### Query
+
+The `query` commands allow users to query the controller submodule.
+
+```shell
+simd query interchain-accounts controller --help
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the controller submodule.
+
+```shell
+simd tx interchain-accounts controller --help
+```
+
+#### `register`
+
+The `register` command allows users to register an interchain account on a host chain on the provided connection.
+
+```shell
+simd tx interchain-accounts controller register [connection-id] [flags]
+```
+
+During registration a new channel is set up between controller and host. There are two flags available that influence the channel that is created:
+
+* `--version` to specify the (JSON-formatted) version string of the channel. 
For example: `{\"version\":\"ics27-1\",\"encoding\":\"proto3\",\"tx_type\":\"sdk_multi_msg\",\"controller_connection_id\":\"connection-0\",\"host_connection_id\":\"connection-0\"}`. Passing a custom version string is useful if you want to specify, for example, the encoding format of the interchain accounts packet data (either `proto3` or `proto3json`). If not specified the controller submodule will generate a default version string. +* `--ordering` to specify the ordering of the channel. Available options are `order_ordered` and `order_unordered` (default if not specified). + +Example: + +```shell +simd tx interchain-accounts controller register connection-0 --ordering order_ordered --from cosmos1.. +``` + +#### `send-tx` + +The `send-tx` command allows users to send a transaction on the provided connection to be executed using an interchain account on the host chain. + +```shell +simd tx interchain-accounts controller send-tx [connection-id] [path/to/packet_msg.json] +``` + +Example: + +```shell +simd tx interchain-accounts controller send-tx connection-0 packet-data.json --from cosmos1.. +``` + +See below for example contents of `packet-data.json`. The CLI handler will unmarshal the following into `InterchainAccountPacketData` appropriately. + +```json +{ + "type": "TYPE_EXECUTE_TX", + "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw", + "memo": "" +} +``` + +Note the `data` field is a base64 encoded byte string as per the tx encoding agreed upon during the channel handshake. + +A helper CLI is provided in the host submodule which can be used to generate the packet data JSON using the counterparty chain's binary. See the [`generate-packet-data` command](#generate-packet-data) for an example. + +### Host + +A user can query and interact with the host submodule. 
+
+#### Query
+
+The `query` commands allow users to query the host submodule.
+
+```shell
+simd query interchain-accounts host --help
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the host submodule.
+
+```shell
+simd tx interchain-accounts host --help
+```
+
+##### `generate-packet-data`
+
+The `generate-packet-data` command allows users to generate protobuf or proto3 JSON encoded interchain accounts packet data for input message(s). The packet data can then be used with the controller submodule's [`send-tx` command](#send-tx). The `--encoding` flag can be used to specify the encoding format (value must be either `proto3` or `proto3json`); if not specified, the default will be `proto3`. The `--memo` flag can be used to include a memo string in the interchain accounts packet data.
+
+```shell
+simd tx interchain-accounts host generate-packet-data [message]
+```
+
+Example:
+
+```shell expandable
+simd tx interchain-accounts host generate-packet-data '[{
+  "@type":"/cosmos.bank.v1beta1.MsgSend",
+  "from_address":"cosmos15ccshhmp0gsx29qpqq6g4zmltnnvgmyu9ueuadh9y2nc5zj0szls5gtddz",
+  "to_address":"cosmos10h9stc5v6ntgeygf5xf945njqq5h32r53uquvw",
+  "amount": [
+    {
+      "denom": "stake",
+      "amount": "1000"
+    }
+  ]
+}]' --memo memo
+```
+
+The command accepts a single `sdk.Msg` or a list of `sdk.Msg`s that will be encoded into the output's `data` field.
+
+Example output:
+
+```json
+{
+  "type": "TYPE_EXECUTE_TX",
+  "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw",
+  "memo": "memo"
+}
+```
+
+## gRPC
+
+A user can query the interchain account module using gRPC endpoints.
+
+### Controller
+
+A user can query the controller submodule using gRPC endpoints. 
+
+#### `InterchainAccount`
+
+The `InterchainAccount` endpoint allows users to query the controller submodule for the interchain account address for a given owner on a particular connection.
+
+```shell
+ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"owner":"cosmos1..","connection_id":"connection-0"}' \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount
+```
+
+#### `Params`
+
+The `Params` endpoint allows users to query the current controller submodule parameters.
+
+```shell
+ibc.applications.interchain_accounts.controller.v1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.controller.v1.Query/Params
+```
+
+### Host
+
+A user can query the host submodule using gRPC endpoints.
+
+#### `Params`
+
+The `Params` endpoint allows users to query the current host submodule parameters.
+
+```shell
+ibc.applications.interchain_accounts.host.v1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.host.v1.Query/Params
+```
diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/development.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/development.mdx
new file mode 100644
index 00000000..5a927bac
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/interchain-accounts/development.mdx
@@ -0,0 +1,34 @@
+---
+title: Development Use Cases
+---
+
+The initial version of Interchain Accounts allowed for the controller submodule to be extended by providing it with an underlying application which would handle all packet callbacks.
+That functionality is now being deprecated in favor of alternative approaches.
+This document will outline potential use cases and redirect each use case to the appropriate documentation. 
+ +## Custom authentication + +Interchain accounts may be associated with alternative types of authentication relative to the traditional public/private key signing. +If you wish to develop or use Interchain Accounts with a custom authentication module and do not need to execute custom logic on the packet callbacks, we recommend you use ibc-go v6 or greater and that your custom authentication module interacts with the controller submodule via the [`MsgServer`](/docs/ibc/v10.1.x/apps/interchain-accounts/messages). + +If you wish to consume and execute custom logic in the packet callbacks, then please read the section [Packet callbacks](#packet-callbacks) below. + +## Redirection to a smart contract + +It may be desirable to allow smart contracts to control an interchain account. +To facilitate such an action, the controller submodule may be provided an underlying application which redirects to smart contract callers. +An improved design has been suggested in [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) which performs this action via middleware. + +Implementers of this use case are recommended to follow the ADR 008 approach. +The underlying application may continue to be used as a short term solution for ADR 008 and the [legacy API](/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/auth-modules) should continue to be utilized in such situations. + +## Packet callbacks + +If a developer requires access to packet callbacks for their use case, then they have the following options: + +1. Write a smart contract which is connected via an ADR 008 or equivalent IBC application (recommended). +2. Use the controller's underlying application to implement packet callback logic. + +In the first case, the smart contract should use the [`MsgServer`](/docs/ibc/v10.1.x/apps/interchain-accounts/messages). + +In the second case, the underlying application should use the [legacy API](/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/keeper-api). 
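+
+As a hedged illustration of the `MsgServer` route (message constructor and field names assumed from recent ibc-go controller types; the handler-dispatch pattern mirrors the one used elsewhere in these docs):
+
+```go
+/ Hypothetical sketch: build a controller MsgSendTx and dispatch it via the
+/ msg service router. Names are assumptions, not a drop-in snippet.
+packetData := icatypes.InterchainAccountPacketData{
+  Type: icatypes.EXECUTE_TX,
+  Data: bz, / protobuf-encoded sdk.Msgs, as agreed during the channel handshake
+}
+msg := icacontrollertypes.NewMsgSendTx(owner, connectionID, relativeTimeoutNs, packetData)
+handler := msgRouter.Handler(msg)
+res, err := handler(ctx, msg)
+```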
diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/integration.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/integration.mdx
new file mode 100644
index 00000000..f2d4500f
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/interchain-accounts/integration.mdx
@@ -0,0 +1,193 @@
+---
+title: Integration
+---
+
+## Synopsis
+
+Learn how to integrate Interchain Accounts host and controller functionality into your chain. The following document applies only to Cosmos SDK chains.
+
+The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule which will be used.
+
+Chains that wish to support ICS-27 may elect to act as a host chain, a controller chain, or both. Disabling host or controller functionality may be done statically by excluding the host or controller submodule entirely from the `app.go` file, or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules.
+
+Interchain Account authentication modules (both custom or generic, such as the `x/gov`, `x/group` or `x/auth` Cosmos SDK modules) can send messages to the controller submodule's [`MsgServer`](/docs/ibc/v10.1.x/apps/interchain-accounts/messages) to register interchain accounts and send packets to the interchain account. To accomplish this, the authentication module needs to be composed with `baseapp`'s `MsgServiceRouter`. 
+ +![ica-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/images/ica-v6.png) + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... + +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... 
+ +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.PortKeeper, + app.MsgServiceRouter(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.PortKeeper, app.AccountKeeper, + app.MsgServiceRouter(), app.GRPCQueryRouter(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter()) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddleware(app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +... + +/ Register Interchain Accounts and authentication module AppModule's +app.moduleManager = module.NewManager( + ... + icaModule, + icaAuthModule, +) + +... + +/ Add Interchain Accounts to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + icatypes.ModuleName, + ... 
+)
+
+/ Add Interchain Accounts to end blocker logic
+app.moduleManager.SetOrderEndBlockers(
+  ...
+  icatypes.ModuleName,
+  ...
+)
+
+/ Add Interchain Accounts module InitGenesis logic
+app.moduleManager.SetOrderInitGenesis(
+  ...
+  icatypes.ModuleName,
+  ...
+)
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey)
+
+paramskeeper.Keeper {
+  ...
+  paramsKeeper.Subspace(icahosttypes.SubModuleName)
+
+paramsKeeper.Subspace(icacontrollertypes.SubModuleName)
+  ...
+}
+```
+
+If no custom authentication module is needed and a generic Cosmos SDK authentication module can be used, then from the sample integration code above all references to `ICAAuthKeeper` and `icaAuthModule` can be removed. That is, the following code would not be needed:
+
+```go
+/ Create your Interchain Accounts authentication module
+app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter())
+
+/ ICA auth AppModule
+  icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper)
+```
+
+### Using submodules exclusively
+
+As described above, the Interchain Accounts application module is structured to support exclusively enabling controller or host functionality.
+This can be achieved by simply omitting either the controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`.
+Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v10.1.x/apps/interchain-accounts/parameters).
+
+The following snippets show basic examples of statically disabling submodules using `app.go`. 
+
+#### Disabling controller chain functionality
+
+```go
+/ Create Interchain Accounts AppModule omitting the controller keeper
+icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper)
+
+/ Create host IBC Module
+icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper)
+
+/ Register host route
+ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule)
+```
+
+#### Disabling host chain functionality
+
+```go expandable
+/ Create Interchain Accounts AppModule omitting the host keeper
+icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil)
+
+/ Optionally instantiate your custom authentication module here, if needed
+...
+
+/ Create controller IBC application stack
+icaControllerStack := icacontroller.NewIBCMiddleware(app.ICAControllerKeeper)
+
+/ Register controller route
+ibcRouter.AddRoute(icacontrollertypes.SubModuleName, icaControllerStack)
+```
diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/auth-modules.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/auth-modules.mdx
new file mode 100644
index 00000000..7b25448e
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/auth-modules.mdx
@@ -0,0 +1,306 @@
+---
+title: Authentication Modules
+description: This document is deprecated and will be removed in future releases.
+---
+
+## Deprecation Notice
+
+**This document is deprecated and will be removed in future releases**.
+
+## Synopsis
+
+Authentication modules play the role of the `Base Application` as described in [ICS-30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware), and enable application developers to perform custom logic when working with the Interchain Accounts controller API.
+
+The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified.
Many different types of authentication may be desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic.
+
+In ibc-go, authentication modules are connected to the controller chain via a middleware stack. The controller submodule is implemented as [middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) and the authentication module is connected to the controller submodule as the base application of the middleware stack. To implement an authentication module, the `IBCModule` interface must be fulfilled. By implementing the controller submodule as middleware, any number of authentication modules can be created and connected to the controller submodule without writing redundant code.
+
+The authentication module must:
+
+- Authenticate interchain account owners.
+- Track the associated interchain account address for an owner.
+- Send packets on behalf of an owner (after authentication).
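The responsibilities above can be sketched in plain Go. The following is a minimal, self-contained illustration using an in-memory map as a stand-in for module state; a real authentication module would persist the owner-to-account mapping in its keeper's store and populate it from the channel handshake callbacks:

```go
package main

import (
	"errors"
	"fmt"
)

// icaRegistry is a simplified stand-in for the state an authentication
// module tracks: which owner controls which interchain account address.
type icaRegistry struct {
	// owner address -> interchain account address on the host chain
	accounts map[string]string
}

func newICARegistry() *icaRegistry {
	return &icaRegistry{accounts: make(map[string]string)}
}

// track records the interchain account address for an owner, e.g. once
// the address becomes known after the channel handshake completes.
func (r *icaRegistry) track(owner, icaAddress string) {
	r.accounts[owner] = icaAddress
}

// authenticate checks that the signer is a known owner before any packet
// is sent on its behalf, returning the associated interchain account.
func (r *icaRegistry) authenticate(signer string) (string, error) {
	ica, ok := r.accounts[signer]
	if !ok {
		return "", errors.New("unauthorized: no interchain account registered for signer")
	}
	return ica, nil
}

func main() {
	reg := newICARegistry()
	reg.track("cosmos1owner", "cosmos1hostica")

	ica, err := reg.authenticate("cosmos1owner")
	fmt.Println(ica, err)

	_, err = reg.authenticate("cosmos1stranger")
	fmt.Println(err != nil)
}
```

The map lookup is only a placeholder; the key point is that authentication and address tracking happen in the custom module, while packet sending is delegated to the controller submodule.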
+ +## `IBCModule` implementation + +The following `IBCModule` callbacks must be implemented with appropriate custom logic: + +```go expandable +/ OnChanOpenInit implements the IBCModule interface +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / perform custom logic + + return version, nil +} + +/ OnChanOpenAck implements the IBCModule interface +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + / perform custom logic + + return nil +} + +/ OnChanCloseConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / perform custom logic + + return nil +} + +/ OnAcknowledgementPacket implements the IBCModule interface +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} + +/ OnTimeoutPacket implements the IBCModule interface. +func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} +``` + +The following functions must be defined to fulfill the `IBCModule` interface, but they will never be called by the controller submodule so they may error or panic. That is because in Interchain Accounts, the channel handshake is always initiated on the controller chain and packets are always sent to the host chain and never to the controller chain. 
+ +```go expandable +/ OnChanOpenTry implements the IBCModule interface +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + panic("UNIMPLEMENTED") +} + +/ OnChanOpenConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnChanCloseInit implements the IBCModule interface +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnRecvPacket implements the IBCModule interface. A successful acknowledgement +/ is returned if the packet data is successfully decoded and the receive application +/ logic returns without error. +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + panic("UNIMPLEMENTED") +} +``` + +## `OnAcknowledgementPacket` + +Controller chains will be able to access the acknowledgement written into the host chain state once a relayer relays the acknowledgement. +The acknowledgement bytes contain either the response of the execution of the message(s) on the host chain or an error. They will be passed to the auth module via the `OnAcknowledgementPacket` callback. Auth modules are expected to know how to decode the acknowledgement. 
+
+If the controller chain is connected to a host chain using the host module on ibc-go, it may interpret the acknowledgement bytes as follows:
+
+Begin by unmarshaling the acknowledgement into `sdk.TxMsgData`:
+
+```go
+var ack channeltypes.Acknowledgement
+if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil {
+  return err
+}
+
+txMsgData := &sdk.TxMsgData{}
+if err := proto.Unmarshal(ack.GetResult(), txMsgData); err != nil {
+  return err
+}
+```
+
+If the `txMsgData.Data` field is non-nil, the host chain is using SDK version `<=` v0.45.
+The auth module should interpret the `txMsgData.Data` as follows:
+
+```go expandable
+switch len(txMsgData.Data) {
+case 0:
+  / see documentation below for SDK 0.46.x or greater
+default:
+  for _, msgData := range txMsgData.Data {
+    if err := handler(msgData); err != nil {
+      return err
+    }
+  }
+  ...
+}
+```
+
+A handler will be needed to interpret what actions to perform based on the message type sent.
+A router could be used, or more simply a switch statement.
+
+```go expandable
+func handler(msgData sdk.MsgData) error {
+  switch msgData.MsgType {
+  case sdk.MsgTypeURL(&banktypes.MsgSend{}):
+    msgResponse := &banktypes.MsgSendResponse{}
+    if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+      return err
+    }
+    handleBankSendMsg(msgResponse)
+  case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{}):
+    msgResponse := &stakingtypes.MsgDelegateResponse{}
+    if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+      return err
+    }
+    handleStakingDelegateMsg(msgResponse)
+  case sdk.MsgTypeURL(&transfertypes.MsgTransfer{}):
+    msgResponse := &transfertypes.MsgTransferResponse{}
+    if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+      return err
+    }
+    handleIBCTransferMsg(msgResponse)
+  default:
+    return nil
+  }
+  return nil
+}
+```
+
+If the `txMsgData.Data` is empty, the host chain is using SDK version > v0.45.
+The auth module should interpret the `txMsgData.MsgResponses` as follows:
+
+```go
+...
+/ switch statement from above
+case 0:
+  for _, any := range txMsgData.MsgResponses {
+    if err := handleAny(any); err != nil {
+      return err
+    }
+  }
+}
+```
+
+A handler will be needed to interpret what actions to perform based on the type URL of the `Any`.
+A router could be used, or more simply a switch statement.
+It may be possible to deduplicate logic between `handler` and `handleAny`.
+
+```go expandable
+func handleAny(any *codectypes.Any) error {
+  switch any.TypeUrl {
+  case sdk.MsgTypeURL(&banktypes.MsgSend{}):
+    msgResponse, err := unpackBankMsgSendResponse(any)
+    if err != nil {
+      return err
+    }
+    handleBankSendMsg(msgResponse)
+  case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{}):
+    msgResponse, err := unpackStakingDelegateResponse(any)
+    if err != nil {
+      return err
+    }
+    handleStakingDelegateMsg(msgResponse)
+  case sdk.MsgTypeURL(&transfertypes.MsgTransfer{}):
+    msgResponse, err := unpackIBCTransferMsgResponse(any)
+    if err != nil {
+      return err
+    }
+    handleIBCTransferMsg(msgResponse)
+  default:
+    return nil
+  }
+  return nil
+}
```
+
+## Integration into `app.go` file
+
+To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/integration#example-integration).
diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/integration.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/integration.mdx
new file mode 100644
index 00000000..302a4501
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/integration.mdx
@@ -0,0 +1,196 @@
+---
+title: Integration
+description: This document is deprecated and will be removed in future releases.
+---
+
+## Deprecation Notice
+
+**This document is deprecated and will be removed in future releases**.
+
+## Synopsis
+
+Learn how to integrate Interchain Accounts host and controller functionality into your chain. The following document only applies to Cosmos SDK chains.
+ +The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule which will be used. + +Chains who wish to support ICS-27 may elect to act as a host chain, a controller chain or both. Disabling host or controller functionality may be done statically by excluding the host or controller module entirely from the `app.go` file or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules are the base application of a middleware stack. The controller submodule is the middleware in this stack. + +![ica-pre-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/10-legacy/images/ica-pre-v6.png) + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... + +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... 
+ + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.MsgServiceRouter(), +) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ ICA auth IBC Module + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddlewareWithAuth(icaAuthIBCModule, app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostIBCModule). 
+  AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack
+
+...
+
+/ Register Interchain Accounts and authentication module AppModules
+app.moduleManager = module.NewManager(
+  ...
+  icaModule,
+  icaAuthModule,
+)
+
+...
+
+/ Add Interchain Accounts to begin blocker logic
+app.moduleManager.SetOrderBeginBlockers(
+  ...
+  icatypes.ModuleName,
+  ...
+)
+
+/ Add Interchain Accounts to end blocker logic
+app.moduleManager.SetOrderEndBlockers(
+  ...
+  icatypes.ModuleName,
+  ...
+)
+
+/ Add Interchain Accounts module InitGenesis logic
+app.moduleManager.SetOrderInitGenesis(
+  ...
+  icatypes.ModuleName,
+  ...
+)
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) paramskeeper.Keeper {
+  ...
+  paramsKeeper.Subspace(icahosttypes.SubModuleName)
+  paramsKeeper.Subspace(icacontrollertypes.SubModuleName)
+  ...
+}
+```
+
+## Using submodules exclusively
+
+As described above, the Interchain Accounts application module is structured to support exclusively enabling controller or host functionality.
+This can be achieved by simply omitting either the controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`.
+Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v10.1.x/apps/interchain-accounts/parameters).
+
+The following snippets show basic examples of statically disabling submodules using `app.go`.
+ +### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Create your Interchain Accounts authentication module, setting up the Keeper, AppModule and IBCModule appropriately +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddlewareWithAuth(icaAuthIBCModule, app.ICAControllerKeeper) + +/ Register controller and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack +``` diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/keeper-api.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/keeper-api.mdx new file mode 100644 index 00000000..26ac7b43 --- /dev/null +++ b/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/keeper-api.mdx @@ -0,0 +1,122 @@ +--- +title: Keeper API +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. 
+
+The controller submodule keeper exposes two legacy functions that allow custom authentication modules to, respectively, register interchain accounts and send packets to the interchain account.
+
+## `RegisterInterchainAccount`
+
+The authentication module can begin registering interchain accounts by calling `RegisterInterchainAccount`:
+
+```go
+if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, connectionID, owner.String(), version, channeltypes.UNORDERED); err != nil {
+  return err
+}
+
+return nil
+```
+
+The `version` argument is used to support ICS-29 fee middleware for relayer incentivization of ICS-27 packets. The `ordering` argument allows specifying the ordering of the channel that is created; if `NONE` is passed, then the default ordering will be `UNORDERED`. Consumers of `RegisterInterchainAccount` are expected to build the appropriate JSON-encoded version string themselves and pass it accordingly. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that the channel handshake can proceed.
+ +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(appVersion), channeltypes.UNORDERED); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS-29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS-29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(feeEnabledVersion), channeltypes.UNORDERED); err != nil { + return err +} +``` + +## `SendTx` + +The authentication module can attempt to send a packet by calling `SendTx`: + +```go expandable +/ Authenticate owner +/ perform custom logic + +/ Construct controller portID based on interchain account owner address +portID, err := icatypes.NewControllerPortID(owner.String()) + if err != nil { + return err +} + +/ Obtain data to be sent to the host 
chain. +/ In this example, the owner of the interchain account would like to send a bank MsgSend to the host chain. +/ The appropriate serialization function should be called. The host chain must be able to deserialize the transaction. +/ If the host chain is using the ibc-go host module, `SerializeCosmosTx` should be used. + msg := &banktypes.MsgSend{ + FromAddress: fromAddr, + ToAddress: toAddr, + Amount: amt +} + +data, err := icatypes.SerializeCosmosTx(keeper.cdc, []proto.Message{ + msg +}) + if err != nil { + return err +} + +/ Construct packet data + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: data, +} + +/ Obtain timeout timestamp +/ An appropriate timeout timestamp must be determined based on the usage of the interchain account. +/ If the packet times out, the channel will be closed requiring a new channel to be created. + timeoutTimestamp := obtainTimeoutTimestamp() + +/ Send the interchain accounts packet, returning the packet sequence +seq, err = keeper.icaControllerKeeper.SendTx(ctx, portID, packetData, timeoutTimestamp) +``` + +The data within an `InterchainAccountPacketData` must be serialized using a format supported by the host chain. +If the host chain is using the ibc-go host chain submodule, `SerializeCosmosTx` should be used. If the `InterchainAccountPacketData.Data` is serialized using a format not supported by the host chain, the packet will not be successfully received. 
diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/messages.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/messages.mdx new file mode 100644 index 00000000..6ff955db --- /dev/null +++ b/docs/ibc/v10.1.x/apps/interchain-accounts/messages.mdx @@ -0,0 +1,152 @@ +--- +title: Messages +description: >- + An Interchain Accounts channel handshake can be initiated using + MsgRegisterInterchainAccount: +--- + +## `MsgRegisterInterchainAccount` + +An Interchain Accounts channel handshake can be initiated using `MsgRegisterInterchainAccount`: + +```go +type MsgRegisterInterchainAccount struct { + Owner string + ConnectionID string + Version string + Ordering channeltypes.Order +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string or contains more than 2048 bytes. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). + +This message will construct a new `MsgChannelOpenInit` on chain and route it to the core IBC message server to initiate the opening step of the channel handshake. + +The controller submodule will generate a new port identifier. The caller is expected to provide an appropriate application version string. For example, this may be an ICS-27 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L11) type or an ICS-29 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0/proto/ibc/applications/fee/v1/metadata.proto#L11) type with a nested application version. +If the `Version` string is omitted, the controller submodule will construct a default version string in the `OnChanOpenInit` handshake callback. + +```go +type MsgRegisterInterchainAccountResponse struct { + ChannelID string + PortId string +} +``` + +The `ChannelID` and `PortID` are returned in the message response. 
+
+## `MsgSendTx`
+
+An Interchain Accounts transaction can be executed on a remote host chain by sending a `MsgSendTx` from the corresponding controller chain:
+
+```go
+type MsgSendTx struct {
+  Owner           string
+  ConnectionID    string
+  PacketData      InterchainAccountPacketData
+  RelativeTimeout uint64
+}
+```
+
+This message is expected to fail if:
+
+* `Owner` is an empty string or contains more than 2048 bytes.
+* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+* `PacketData` contains an `UNSPECIFIED` type enum, the length of `Data` bytes is zero, or the `Memo` field exceeds 256 characters in length.
+* `RelativeTimeout` is zero.
+
+This message will create a new IBC packet with the provided `PacketData` and send it via the channel associated with the `Owner` and `ConnectionID`.
+The `PacketData` is expected to contain a list of serialized `[]sdk.Msg` in the form of `CosmosTx`. Please note the signer field of each `sdk.Msg` must be the interchain account address.
+When the packet is relayed to the host chain, the `PacketData` is unmarshalled and the messages are authenticated and executed.
+
+```go
+type MsgSendTxResponse struct {
+  Sequence uint64
+}
+```
+
+The packet `Sequence` is returned in the message response.
+
+### Queries
+
+It is possible to use [`MsgModuleQuerySafe`](https://github.com/cosmos/ibc-go/blob/eecfa5c09a4c38a5c9f2cc2a322d2286f45911da/proto/ibc/applications/interchain_accounts/host/v1/tx.proto#L41-L51) to execute a list of queries on the host chain. This message can be included in the list of encoded `sdk.Msg`s of `InterchainAccountPacketData`. The host chain will return on the acknowledgement the responses for all the queries.
Please note that only module safe queries can be executed ([deterministic queries that are safe to be called from within the state machine](https://docs.cosmos.network/main/build/building-modules/query-services#calling-queries-from-the-state-machine)). + +The queries available from Cosmos SDK are: + +```plaintext expandable +/cosmos.auth.v1beta1.Query/Accounts +/cosmos.auth.v1beta1.Query/Account +/cosmos.auth.v1beta1.Query/AccountAddressByID +/cosmos.auth.v1beta1.Query/Params +/cosmos.auth.v1beta1.Query/ModuleAccounts +/cosmos.auth.v1beta1.Query/ModuleAccountByName +/cosmos.auth.v1beta1.Query/AccountInfo +/cosmos.bank.v1beta1.Query/Balance +/cosmos.bank.v1beta1.Query/AllBalances +/cosmos.bank.v1beta1.Query/SpendableBalances +/cosmos.bank.v1beta1.Query/SpendableBalanceByDenom +/cosmos.bank.v1beta1.Query/TotalSupply +/cosmos.bank.v1beta1.Query/SupplyOf +/cosmos.bank.v1beta1.Query/Params +/cosmos.bank.v1beta1.Query/DenomMetadata +/cosmos.bank.v1beta1.Query/DenomMetadataByQueryString +/cosmos.bank.v1beta1.Query/DenomsMetadata +/cosmos.bank.v1beta1.Query/DenomOwners +/cosmos.bank.v1beta1.Query/SendEnabled +/cosmos.circuit.v1.Query/Account +/cosmos.circuit.v1.Query/Accounts +/cosmos.circuit.v1.Query/DisabledList +/cosmos.staking.v1beta1.Query/Validators +/cosmos.staking.v1beta1.Query/Validator +/cosmos.staking.v1beta1.Query/ValidatorDelegations +/cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations +/cosmos.staking.v1beta1.Query/Delegation +/cosmos.staking.v1beta1.Query/UnbondingDelegation +/cosmos.staking.v1beta1.Query/DelegatorDelegations +/cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +/cosmos.staking.v1beta1.Query/Redelegations +/cosmos.staking.v1beta1.Query/DelegatorValidators +/cosmos.staking.v1beta1.Query/DelegatorValidator +/cosmos.staking.v1beta1.Query/HistoricalInfo +/cosmos.staking.v1beta1.Query/Pool +/cosmos.staking.v1beta1.Query/Params +``` + +And the query available from ibc-go is: + +```plaintext 
+/ibc.core.client.v1.Query/VerifyMembership +``` + +The following code block shows an example of how `MsgModuleQuerySafe` can be used to query the account balance of an account on the host chain. The resulting packet data variable is used to set the `PacketData` of `MsgSendTx`. + +```go expandable +balanceQuery := banktypes.NewQueryBalanceRequest("cosmos1...", "uatom") + +queryBz, err := balanceQuery.Marshal() + +/ signer of message must be the interchain account on the host + queryMsg := icahosttypes.NewMsgModuleQuerySafe("cosmos2...", []icahosttypes.QueryRequest{ + { + Path: "/cosmos.bank.v1beta1.Query/Balance", + Data: queryBz, +}, +}) + +bz, err := icatypes.SerializeCosmosTx(cdc, []proto.Message{ + queryMsg +}, icatypes.EncodingProtobuf) + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: bz, + Memo: "", +} +``` + +## Atomicity + +As the Interchain Accounts module supports the execution of multiple transactions using the Cosmos SDK `Msg` interface, it provides the same atomicity guarantees as Cosmos SDK-based applications, leveraging the [`CacheMultiStore`](https://docs.cosmos.network/main/learn/advanced/store#cachemultistore) architecture provided by the [`Context`](https://docs.cosmos.network/main/learn/advanced/context.html) type. + +This provides atomic execution of transactions when using Interchain Accounts, where state changes are only committed if all `Msg`s succeed. diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/overview.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/overview.mdx new file mode 100644 index 00000000..bd9232d6 --- /dev/null +++ b/docs/ibc/v10.1.x/apps/interchain-accounts/overview.mdx @@ -0,0 +1,46 @@ +--- +title: Overview +--- + + + Interchain Accounts is only compatible with IBC Classic, not IBC v2 + + +## Synopsis + +Learn about what the Interchain Accounts module is + +## What is the Interchain Accounts module? 
+
+Interchain Accounts is the Cosmos SDK implementation of the ICS-27 protocol, which enables cross-chain account management built upon IBC.
+
+- How does an interchain account differ from a regular account?
+
+Regular accounts use a private key to sign transactions. Interchain accounts are instead controlled programmatically by counterparty chains via IBC packets.
+
+## Concepts
+
+`Host Chain`: The chain where the interchain account is registered. The host chain listens for IBC packets from a controller chain which contain instructions (e.g. Cosmos SDK messages) that the interchain account will execute.
+
+`Controller Chain`: The chain registering and controlling an account on a host chain. The controller chain sends IBC packets to the host chain to control the account.
+
+`Interchain Account`: An account on a host chain created using the ICS-27 protocol. An interchain account has all the capabilities of a normal account. However, rather than signing transactions with a private key, a controller chain will send IBC packets to the host chain which signal what transactions the interchain account should execute.
+
+`Authentication Module`: A custom application module on the controller chain that uses the Interchain Accounts module to build custom logic for the creation & management of interchain accounts. It can be either an IBC application module using the [legacy API](/docs/ibc/v10.1.x/apps/interchain-accounts/legacy/keeper-api), or a regular Cosmos SDK application module sending messages to the controller submodule's `MsgServer` (this is the recommended approach from ibc-go v6 if access to packet callbacks is not needed). Please note that the legacy API will eventually be removed and IBC applications will not be able to use it in later releases.
+
+## SDK security model
+
+SDK modules on a chain are assumed to be trustworthy. For example, there are no checks to prevent an untrustworthy module from accessing the bank keeper.
+ +The implementation of ICS-27 in ibc-go uses this assumption in its security considerations. + +The implementation assumes other IBC application modules will not bind to ports within the ICS-27 namespace. + +## Channel Closure + +The provided interchain account host and controller implementations do not support `ChanCloseInit`. However, they do support `ChanCloseConfirm`. +This means that the host and controller modules cannot close channels, but they will confirm channel closures initiated by other implementations of ICS-27. + +In the event of a channel closing (due to a packet timeout in an ordered channel, for example), the interchain account associated with that channel can become accessible again if a new channel is created with a (JSON-formatted) version string that encodes the exact same `Metadata` information of the previous channel. The channel can be reopened using either [`MsgRegisterInterchainAccount`](/docs/ibc/v10.1.x/apps/interchain-accounts/messages#msgregisterinterchainaccount) or `MsgChannelOpenInit`. If `MsgRegisterInterchainAccount` is used, then it is possible to leave the `version` field of the message empty, since it will be filled in by the controller submodule. If `MsgChannelOpenInit` is used, then the `version` field must be provided with the correct JSON-encoded `Metadata` string. See section [Understanding Active Channels](/docs/ibc/v10.1.x/apps/interchain-accounts/active-channels#understanding-active-channels) for more information. + +When reopening a channel with the default controller submodule, the ordering of the channel cannot be changed. 
diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/parameters.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/parameters.mdx
new file mode 100644
index 00000000..eb8f805b
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/interchain-accounts/parameters.mdx
@@ -0,0 +1,63 @@
+---
+title: Parameters
+description: >-
+  The Interchain Accounts module contains the following on-chain parameters,
+  logically separated for each distinct submodule:
+---
+
+The Interchain Accounts module contains the following on-chain parameters, logically separated for each distinct submodule:
+
+## Controller Submodule Parameters
+
+| Name                | Type | Default Value |
+| ------------------- | ---- | ------------- |
+| `ControllerEnabled` | bool | `true`        |
+
+### ControllerEnabled
+
+The `ControllerEnabled` parameter controls a chain's ability to service ICS-27 controller specific logic. This includes the sending of Interchain Accounts packet data as well as the following ICS-26 callback handlers:
+
+* `OnChanOpenInit`
+* `OnChanOpenAck`
+* `OnChanCloseConfirm`
+* `OnAcknowledgementPacket`
+* `OnTimeoutPacket`
+
+## Host Submodule Parameters
+
+| Name            | Type      | Default Value |
+| --------------- | --------- | ------------- |
+| `HostEnabled`   | bool      | `true`        |
+| `AllowMessages` | \[]string | `["*"]`       |
+
+### HostEnabled
+
+The `HostEnabled` parameter controls a chain's ability to service ICS-27 host specific logic. This includes the following ICS-26 callback handlers:
+
+* `OnChanOpenTry`
+* `OnChanOpenConfirm`
+* `OnChanCloseConfirm`
+* `OnRecvPacket`
+
+### AllowMessages
+
+The `AllowMessages` parameter provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute by defining an allowlist using the Protobuf message type URL format.
+ +For example, a Cosmos SDK-based chain that elects to provide hosted Interchain Accounts with the ability of governance voting and staking delegations will define its parameters as follows: + +```json +"params": { + "host_enabled": true, + "allow_messages": ["/cosmos.staking.v1beta1.MsgDelegate", + "/cosmos.gov.v1beta1.MsgVote"] +} +``` + +There is also a special wildcard `"*"` value which allows any type of message to be executed by the interchain account. This must be the only value in the `allow_messages` array. + +```json +"params": { + "host_enabled": true, + "allow_messages": ["*"] +} +``` diff --git a/docs/ibc/v10.1.x/apps/interchain-accounts/tx-encoding.mdx b/docs/ibc/v10.1.x/apps/interchain-accounts/tx-encoding.mdx new file mode 100644 index 00000000..f2b9ca97 --- /dev/null +++ b/docs/ibc/v10.1.x/apps/interchain-accounts/tx-encoding.mdx @@ -0,0 +1,57 @@ +--- +title: Transaction Encoding +description: >- + When orchestrating an interchain account transaction, which comprises multiple + sdk.Msg objects represented as Any types, the transactions must be encoded as + bytes within InterchainAccountPacketData. +--- + +When orchestrating an interchain account transaction, which comprises multiple `sdk.Msg` objects represented as `Any` types, the transactions must be encoded as bytes within [`InterchainAccountPacketData`](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/packet.proto#L21-L26). + +```protobuf +/ InterchainAccountPacketData is comprised of a raw transaction, type of transaction and optional memo field. +message InterchainAccountPacketData { + Type type = 1; + bytes data = 2; + string memo = 3; +} +``` + +The `data` field must be encoded as a [`CosmosTx`](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/packet.proto#L28-L31). + +```protobuf +/ CosmosTx contains a list of sdk.Msg's. It should be used when sending transactions to an SDK host chain. 
+message CosmosTx { + repeated google.protobuf.Any messages = 1; +} +``` + +The encoding method for `CosmosTx` is determined during the channel handshake process. If the channel version [metadata's `encoding` field](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L22) is marked as `proto3`, then `CosmosTx` undergoes protobuf encoding. Conversely, if the field is set to `proto3json`, then [proto3 json](https://protobuf.dev/programming-guides/proto3/#json) encoding takes place, which generates a JSON representation of the protobuf message. + +## Protobuf Encoding + +Protobuf encoding serves as the standard encoding process for `CosmosTx`. This occurs if the channel handshake initiates with an empty channel version metadata or if the `encoding` field explicitly denotes `proto3`. In Golang, the protobuf encoding procedure utilizes the `proto.Marshal` function. Every protobuf autogenerated Golang type comes equipped with a `Marshal` method that can be employed to encode the message. + +## (Protobuf) JSON Encoding + +The proto3 JSON encoding presents an alternative encoding technique for `CosmosTx`. It is selected if the channel handshake begins with the channel version metadata `encoding` field labeled as `proto3json`. In Golang, the Proto3 canonical encoding in JSON is implemented by the `"github.com/cosmos/gogoproto/jsonpb"` package. Within Cosmos SDK, the `ProtoCodec` structure implements the `JSONCodec` interface, leveraging the `jsonpb` package. This method generates a JSON format as follows: + +```json expandable +{ + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1...", + "to_address": "cosmos1...", + "amount": [ + { + "denom": "uatom", + "amount": "1000000" + } + ] + } + ] +} +``` + +Here, the `"messages"` array is populated with transactions. 
Each transaction is represented as a JSON object with the `@type` field denoting the transaction type and the remaining fields representing the transaction's attributes.
diff --git a/docs/ibc/v10.1.x/apps/transfer/IBCv2-transfer.mdx b/docs/ibc/v10.1.x/apps/transfer/IBCv2-transfer.mdx
new file mode 100644
index 00000000..26e2e09e
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/transfer/IBCv2-transfer.mdx
@@ -0,0 +1,65 @@
+---
+title: IBC v2 Transfer
+description: >-
+  Much of the core business logic of sending and receiving tokens between chains
+  is unchanged between IBC Classic and IBC v2. Some of the key differences to
+  pay attention to are detailed below.
+---
+
+Much of the core business logic of sending and receiving tokens between chains is unchanged between IBC Classic and IBC v2. Some of the key differences to pay attention to are detailed below.
+
+## No Channel Handshakes, New Packet Format and Encoding Support
+
+* IBC v2 does not establish a connection between applications with a channel handshake. Channel identifiers represent Client IDs and are included in the `Payload`
+  * The source and destination port must be `"transfer"`
+  * The channel IDs [must be valid client IDs](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/v2/ibc_module.go#L46-L47) of the format `{clientID}-{sequence}`, e.g. 08-wasm-007
+* The [`Payload`](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/packet.pb.go#L146-L158) contains the [`FungibleTokenPacketData`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/types/packet.pb.go#L28-L39) for a token transfer.
+
+The code snippet shows the `Payload` struct.
+
+```go expandable
+/ Payload contains the source and destination ports and payload for the application (version, encoding, raw bytes)
+
+type Payload struct {
+	/ specifies the source port of the packet, e.g.
transfer
+	SourcePort string `protobuf:"bytes,1,opt,name=source_port,json=sourcePort,proto3" json:"source_port,omitempty"`
+	/ specifies the destination port of the packet, e.g. transfer
+	DestinationPort string `protobuf:"bytes,2,opt,name=destination_port,json=destinationPort,proto3" json:"destination_port,omitempty"`
+	/ version of the specified application
+	Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"`
+	/ the encoding used for the provided value, for transfer this could be JSON, protobuf or ABI
+	Encoding string `protobuf:"bytes,4,opt,name=encoding,proto3" json:"encoding,omitempty"`
+	/ the raw bytes for the payload.
+	Value []byte `protobuf:"bytes,5,opt,name=value,proto3" json:"value,omitempty"`
+}
+```
+
+The code snippet shows the structure of the `Payload` bytes for a token transfer.
+
+```go expandable
+/ FungibleTokenPacketData defines a struct for the packet payload
+/ See FungibleTokenPacketData spec:
+/ https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer#data-structures
+type FungibleTokenPacketData struct {
+	/ the token denomination to be transferred
+	Denom string `protobuf:"bytes,1,opt,name=denom,proto3" json:"denom,omitempty"`
+	/ the token amount to be transferred
+	Amount string `protobuf:"bytes,2,opt,name=amount,proto3" json:"amount,omitempty"`
+	/ the sender address
+	Sender string `protobuf:"bytes,3,opt,name=sender,proto3" json:"sender,omitempty"`
+	/ the recipient address on the destination chain
+	Receiver string `protobuf:"bytes,4,opt,name=receiver,proto3" json:"receiver,omitempty"`
+	/ optional memo
+	Memo string `protobuf:"bytes,5,opt,name=memo,proto3" json:"memo,omitempty"`
+}
+```
+
+## Base Denoms cannot contain slashes
+
+With the new [`Denom`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/types/token.pb.go#L81-L87) struct, the base denom, i.e. uatom, is separated from the trace - the path the token has travelled.
The trace is presented as an array of [`Hop`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/types/token.pb.go#L136-L140)s.
+
+Because IBC v2 no longer uses channels, it is no longer possible to rely on a fixed format for an identifier, so using a base denom that contains a "/" is disallowed.
+
+## Changes to the application module interface
+
+Instead of implementing token transfer for `port.IBCModule`, IBC v2 uses the new application interface `api.IBCModule`. More information on the interface differences can be found in the [application section](/docs/ibc/v10.1.x/ibc/apps/ibcv2apps).
diff --git a/docs/ibc/v10.1.x/apps/transfer/authorizations.mdx b/docs/ibc/v10.1.x/apps/transfer/authorizations.mdx
new file mode 100644
index 00000000..afdf023e
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/transfer/authorizations.mdx
@@ -0,0 +1,51 @@
+---
+title: Authorizations
+---
+
+`TransferAuthorization` implements the `Authorization` interface for `ibc.applications.transfer.v1.MsgTransfer`. It allows a granter to grant a grantee the privilege to submit `MsgTransfer` on its behalf. Please see the [Cosmos SDK docs](https://docs.cosmos.network/v0.47/modules/authz) for more details on granting privileges via the `x/authz` module.
+
+More specifically, the granter allows the grantee to transfer funds that belong to the granter over a specified channel.
+
+For the specified channel, the granter must specify a spend limit of a specific denomination they wish to allow the grantee to transfer.
+
+The granter may also specify a list of addresses that are allowed to receive funds. If empty, then all addresses are allowed.
+
+It takes:
+
+* a `SourcePort` and a `SourceChannel` which together comprise the unique transfer channel identifier over which authorized funds can be transferred.
+* a `SpendLimit` that specifies the maximum amount of tokens the grantee can transfer.
The `SpendLimit` is updated as the tokens are transferred, unless the sentinel value of the maximum value for a 256-bit unsigned integer (i.e. 2^256 - 1) is used for the amount, in which case the `SpendLimit` will not be updated (please be aware that using this sentinel value will grant the grantee the privilege to transfer **all** the tokens of a given denomination available at the granter's account). The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used. This `SpendLimit` may also be updated to increase or decrease the limit as the granter wishes.
+* an `AllowList` that specifies the list of addresses that are allowed to receive funds. If this list is empty, then all addresses are allowed to receive funds from the `TransferAuthorization`.
+* an `AllowedPacketData` list that specifies the list of memo strings that are allowed to be included in the memo field of the packet. If this list is empty, then only an empty memo is allowed (a `memo` field with non-empty content will be denied). If this list includes a single element equal to `"*"`, then any content in the `memo` field will be allowed.
+
+Setting a `TransferAuthorization` is expected to fail if:
+
+* the spend limit is nil
+* the denomination of the spend limit is an invalid coin type
+* the source port ID is invalid
+* the source channel ID is invalid
+* there are duplicate entries in the `AllowList`
+* the `memo` field is not allowed by `AllowedPacketData`
+
+Below are the `TransferAuthorization` constructor and the `Allocation` type:
+
+```go expandable
+func NewTransferAuthorization(allocations ...Allocation) *TransferAuthorization {
+	return &TransferAuthorization{
+		Allocations: allocations,
+	}
+}
+
+type Allocation struct {
+	/ the port on which the packet will be sent
+	SourcePort string
+	/ the channel by which the packet will be sent
+	SourceChannel string
+	/ spend limitation on the channel
+	SpendLimit sdk.Coins
+	/ allow list of receivers, an empty allow list permits any receiver address
+	AllowList []string
+	/ allow list of memo strings, an empty list prohibits all memo strings;
+	/ a list only with "*" permits any memo string
+	AllowedPacketData []string
+}
+```
diff --git a/docs/ibc/v10.1.x/apps/transfer/client.mdx b/docs/ibc/v10.1.x/apps/transfer/client.mdx
new file mode 100644
index 00000000..7ace97d4
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/transfer/client.mdx
@@ -0,0 +1,92 @@
+---
+title: Client
+description: >-
+  A user can query and interact with the transfer module using the CLI. Use the
+  --help flag to discover the available commands:
+---
+
+## CLI
+
+A user can query and interact with the `transfer` module using the CLI. Use the `--help` flag to discover the available commands:
+
+### Query
+
+The `query` commands allow users to query `transfer` state.
+
+```shell
+simd query ibc-transfer --help
+```
+
+### Transactions
+
+The `tx` commands allow users to interact with the `transfer` module.
+
+```shell
+simd tx ibc-transfer --help
+```
+
+#### `transfer`
+
+The `transfer` command allows users to execute cross-chain token transfers from the source port ID and channel ID on the sending chain.
+ +```shell +simd tx ibc-transfer transfer [src-port] [src-channel] [receiver] [coins] [flags] +``` + +The `coins` parameter accepts the amount and denomination (e.g. `100uatom`) of the tokens to be transferred. + +The additional flags that can be used with the command are: + +* `--packet-timeout-height` to specify the timeout block height in the format `{revision}-{height}`. The default value is `0-0`, which effectively disables the timeout. Timeout height can only be absolute, therefore this option must be used in combination with `--absolute-timeouts` set to true. On IBC v1 protocol, either `--packet-timeout-height` or `--packet-timeout-timestamp` must be set. On IBC v2 protocol `--packet-timeout-timestamp` must be set. +* `--packet-timeout-timestamp` to specify the timeout timestamp in nanoseconds. The timeout can be either relative (from the current UTC time) or absolute. The default value is 10 minutes (and thus relative). On IBC v1 protocol, either `--packet-timeout-height` or `--packet-timeout-timestamp` must be set. On IBC v2 protocol `--packet-timeout-timestamp` must be set. +* `--absolute-timeouts` to interpret the timeout timestamp as an absolute value (when set to true). The default value is false (and thus the timeout is considered relative to current UTC time). +* `--memo` to specify the memo string to be sent along with the transfer packet. If forwarding is used, then the memo string will be carried through the intermediary chains to the final destination. + +#### `total-escrow` + +The `total-escrow` command allows users to query the total amount in escrow for a particular coin denomination regardless of the transfer channel from where the coins were sent out. + +```shell +simd query ibc-transfer total-escrow [denom] [flags] +``` + +Example: + +```shell +simd query ibc-transfer total-escrow samoleans +``` + +Example Output: + +```shell +amount: "100" +``` + +## gRPC + +A user can query the `transfer` module using gRPC endpoints. 
+ +### `TotalEscrowForDenom` + +The `TotalEscrowForDenom` endpoint allows users to query the total amount in escrow for a particular coin denomination regardless of the transfer channel from where the coins were sent out. + +```shell +ibc.applications.transfer.v1.Query/TotalEscrowForDenom +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"samoleans"}' \ + localhost:9090 \ + ibc.applications.transfer.v1.Query/TotalEscrowForDenom +``` + +Example output: + +```shell +{ + "amount": "100" +} +``` diff --git a/docs/ibc/v10.1.x/apps/transfer/events.mdx b/docs/ibc/v10.1.x/apps/transfer/events.mdx new file mode 100644 index 00000000..9daa9182 --- /dev/null +++ b/docs/ibc/v10.1.x/apps/transfer/events.mdx @@ -0,0 +1,54 @@ +--- +title: Events +--- + +## `MsgTransfer` + +| Type | Attribute Key | Attribute Value | +| ------------- | ------------- | --------------- | +| ibc\_transfer | sender | `{sender}` | +| ibc\_transfer | receiver | `{receiver}` | +| ibc\_transfer | denom | `{denom}` | +| ibc\_transfer | denom\_hash | `{denom\_hash}` | +| ibc\_transfer | amount | `{amount}` | +| ibc\_transfer | memo | `{memo}` | +| message | module | transfer | + +## `OnRecvPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ------------- | --------------- | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | denom\_hash | `{denom\_hash}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | success | `{ackSuccess}` | +| fungible\_token\_packet | error | `{ackError}` | +| message | module | transfer | + +## `OnAcknowledgePacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver 
| `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | denom\_hash | `{denom\_hash}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | acknowledgement | `{ack.String()}` | +| fungible\_token\_packet | success / error | `{ack.Response}` | +| message | module | transfer | + +## `OnTimeoutPacket` callback + +| Type | Attribute Key | Attribute Value | +| ------- | ---------------- | --------------- | +| timeout | refund\_receiver | `{receiver}` | +| timeout | refund\_tokens | `{jsonTokens}` | +| timeout | denom | `{denom}` | +| timeout | denom\_hash | `{denom\_hash}` | +| timeout | memo | `{memo}` | +| message | module | transfer | diff --git a/docs/ibc/v10.1.x/apps/transfer/messages.mdx b/docs/ibc/v10.1.x/apps/transfer/messages.mdx new file mode 100644 index 00000000..9b373a26 --- /dev/null +++ b/docs/ibc/v10.1.x/apps/transfer/messages.mdx @@ -0,0 +1,71 @@ +--- +title: Messages +description: 'A fungible token cross chain transfer is achieved by using the MsgTransfer:' +--- + +## `MsgTransfer` + +A fungible token cross chain transfer is achieved by using the `MsgTransfer`: + +```go expandable +type MsgTransfer struct { + SourcePort string + / with IBC v2 SourceChannel will be a client ID + SourceChannel string + Token sdk.Coin + Sender string + Receiver string + / If you are sending with IBC v1 protocol, either timeout_height or timeout_timestamp must be set. + / If you are sending with IBC v2 protocol, timeout_timestamp must be set, and timeout_height must be omitted. + TimeoutHeight ibcexported.Height + / Timeout timestamp in absolute nanoseconds since unix epoch. 
+	TimeoutTimestamp uint64
+	/ optional Memo field
+	Memo string
+	/ optional Encoding field
+	Encoding string
+}
+```
+
+This message is expected to fail if:
+
+* `SourcePort` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+* `SourceChannel` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+* `Token` is invalid:
+  * `Amount` is not positive.
+  * `Denom` is not a valid IBC denomination as per [ADR 001 - Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md).
+* `Sender` is empty.
+* `Receiver` is empty or contains more than 2048 bytes.
+* `Memo` contains more than 32768 bytes.
+* `TimeoutHeight` and `TimeoutTimestamp` are both zero for IBC Classic.
+  * Note that `TimeoutHeight` as a concept is removed in IBC v2, hence it must always be omitted and only `TimeoutTimestamp` used.
+
+This message will send a fungible token to the counterparty chain represented by the counterparty Channel End connected to the Channel End with the identifiers `SourcePort` and `SourceChannel`. Note that in IBC v2 a pair of clients are connected and the `SourceChannel` is referring to the source `ClientID`.
+
+The denomination provided for transfer should correspond to the same denomination represented on this chain. The prefixes will be added as necessary by the receiving chain.
+
+If the `Amount` is set to the maximum value for a 256-bit unsigned integer (i.e. 2^256 - 1), then the whole balance of the corresponding denomination will be transferred. The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used.
+
+### Memo
+
+The memo field was added to allow applications and users to attach metadata to transfer packets.
The field is optional and may be left empty. When it is used to attach metadata for a particular middleware, the memo field should be represented as a json object where different middlewares use different json keys. + +For example, the following memo field is used by the callbacks middleware to attach a source callback to a transfer packet: + +```jsonc +{ + "src_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +You can find more information about other applications that use the memo field in the [chain registry](https://github.com/cosmos/chain-registry/blob/master/_memo_keys/ICS20_memo_keys.json). + +### Encoding + +In IBC v2, the encoding method used by an application has more flexibility as it is specified within a `Payload`, rather than negotiated and fixed during an IBC classic channel handshake. Certain encoding types may be more suited to specific blockchains, e.g. ABI encoding is more gas efficient to decode in an EVM than JSON or Protobuf. + +Within ibc-go, JSON, protobuf and ABI encoding are supported and can be used, see the [transfer packet types](https://github.com/cosmos/ibc-go/blob/14bc17e26ad12cee6bdb99157a05296fcf58b762/modules/apps/transfer/types/packet.go#L36-L40). diff --git a/docs/ibc/v10.1.x/apps/transfer/metrics.mdx b/docs/ibc/v10.1.x/apps/transfer/metrics.mdx new file mode 100644 index 00000000..37fdc97a --- /dev/null +++ b/docs/ibc/v10.1.x/apps/transfer/metrics.mdx @@ -0,0 +1,13 @@ +--- +title: Metrics +description: The IBC transfer application module exposes the following set of metrics. +--- + +The IBC transfer application module exposes the following set of [metrics](https://docs.cosmos.network/main/learn/advanced/telemetry). 
+ +| Metric | Description | Unit | Type | +| :---------------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | +| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | +| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | +| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | +| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | diff --git a/docs/ibc/v10.1.x/apps/transfer/overview.mdx b/docs/ibc/v10.1.x/apps/transfer/overview.mdx new file mode 100644 index 00000000..7c579c73 --- /dev/null +++ b/docs/ibc/v10.1.x/apps/transfer/overview.mdx @@ -0,0 +1,140 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the token Transfer module is + +## What is the Transfer module? + +Transfer is the Cosmos SDK implementation of the [ICS-20](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer) protocol, which enables cross-chain fungible token transfers. + +## Concepts + +### Acknowledgements + +ICS20 uses the recommended acknowledgement format as specified by [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope). + +A successful receive of a transfer packet will result in a Result Acknowledgement being written +with the value `[]byte{byte(1)}` in the `Response` field. + +An unsuccessful receive of a transfer packet will result in an Error Acknowledgement being written +with the error message in the `Response` field. + +### Denomination trace + +The denomination trace corresponds to the information that allows a token to be traced back to its +origin chain. 
It contains a sequence of port and channel identifiers ordered from the most recent to
+the oldest in the timeline of transfers.
+
+  When using transfer with IBC v2 connecting to e.g. Ethereum, the source
+  channel identifier will be the source client identifier instead.
+
+This information is included on the token's base denomination field in the form of a hash to prevent an
+unbounded denomination length. For example, the token `transfer/channelToA/uatom` will be displayed
+as `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2`. The human-readable denomination
+is stored using `x/bank` module's [denom metadata](https://docs.cosmos.network/main/build/modules/bank#denom-metadata)
+feature. You may display the human-readable denominations by querying balances with the `--resolve-denom` flag, as in:
+
+```shell
+simd query bank balances [address] --resolve-denom
+```
+
+Each send to any chain other than the one it was previously received from is a movement forwards in
+the token's timeline. This causes a trace to be added to the token's history and the destination port
+and destination channel to be prefixed to the denomination. In these instances the sender chain is
+acting as the "source zone". When the token is sent back to the chain it previously received from, the
+prefix is removed. This is a backwards movement in the token's timeline and the sender chain is
+acting as the "sink zone".
+
+It is strongly recommended to read the full details of [ADR 001: Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md) to understand the implications and context of the IBC token representations.
+ +## UX suggestions for clients + +For clients (wallets, exchanges, applications, block explorers, etc) that want to display the source of the token, it is recommended to use the following alternatives for each of the cases below: + +### Direct connection + +If the denomination trace contains a single identifier prefix pair (as in the example above), then +the easiest way to retrieve the chain and light client identifier is to map the trace information +directly. In summary, this requires querying the channel from the denomination trace identifiers, +and then the counterparty client state using the counterparty port and channel identifiers from the +retrieved channel. + +A general pseudo algorithm would look like the following: + +1. Query the full denomination trace. +2. Query the channel with the `portID/channelID` pair, which corresponds to the first destination of the + token. +3. Query the client state using the identifiers pair. Note that this query will return a `"Not +Found"` response if the current chain is not connected to this channel. +4. Retrieve the client identifier or chain identifier from the client state (eg: on + Tendermint clients) and store it locally. + +Using the gRPC gateway client service the steps above would be, with a given IBC token `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` stored on `chainB`: + +1. `GET /ibc/apps/transfer/v1/denom_traces/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` -> `{"path": "transfer/channelToA", "base_denom": "uatom"}` +2. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer/client_state"` -> `{"client_id": "clientA", "chain-id": "chainA", ...}` +3. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer"` -> `{"channel_id": "channelToA", port_id": "transfer", counterparty: {"channel_id": "channelToB", port_id": "transfer"}, ...}` +4. 
`GET /ibc/apps/transfer/v1/channels/channelToB/ports/transfer/client_state" -> {"client_id": "clientB", "chain-id": "chainB", ...}` + +Then, the token transfer chain path for the `uatom` denomination would be: `chainA` -> `chainB`. + +### Multiple hops + +The multiple channel hops case applies when the token has passed through multiple chains between the original source and final destination chains. + +The IBC protocol doesn't know the topology of the overall network (i.e connections between chains and identifier names between them). For this reason, in the multiple hops case, a particular chain in the timeline of the individual transfers can't query the chain and client identifiers of the other chains. + +Take for example the following sequence of transfers `A -> B -> C` for an IBC token, with a final prefix path (trace info) of `transfer/channelChainC/transfer/channelChainB`. What the paragraph above means is that even in the case that chain `C` is directly connected to chain `A`, querying the port and channel identifiers that chain `B` uses to connect to chain `A` (eg: `transfer/channelChainA`) can be completely different from the one that chain `C` uses to connect to chain `A` (eg: `transfer/channelToChainA`). + +Thus the proposed solution for clients that the IBC team recommends are the following: + +- **Connect to all chains**: Connecting to all the chains in the timeline would allow clients to + perform the queries outlined in the [direct connection](#direct-connection) section to each + relevant chain. By repeatedly following the port and channel denomination trace transfer timeline, + clients should always be able to find all the relevant identifiers. This comes at the tradeoff + that the client must connect to nodes on each of the chains in order to perform the queries. 
+- **Relayer as a Service (RaaS)**: A longer term solution is to use/create a relayer service that
+  could map the denomination trace to the chain path timeline for each token (i.e. `origin chain -> chain #1 -> ... -> chain #(n-1) -> final chain`). These services could provide merkle proofs in
+  order to allow clients to optionally verify the path timeline correctness for themselves by
+  running light clients. If the proofs are not verified, they should be considered as trusted third
+  party services. Additionally, clients would be advised in the future to use RaaS that support the
+  largest number of connections between chains in the ecosystem. Unfortunately, none of the existing
+  public relayers (in [Golang](https://github.com/cosmos/relayer) and
+  [Rust](https://github.com/informalsystems/ibc-rs)) provides this service to clients.
+
+  The only viable alternative for clients (at the time of writing) to trace tokens
+  with multiple connection hops is to connect to all chains directly and
+  perform relevant queries to each of them in the sequence.
+
+## Locked funds
+
+In some [exceptional cases](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md#exceptional-cases), a client state associated with a given channel cannot be updated. As a result, funds from fungible tokens in that channel will be permanently locked and can thus no longer be transferred.
+
+To mitigate this, a client update governance proposal can be submitted to update the frozen client
+with a new valid header. Once the proposal passes, the client state will be unfrozen and the funds
+from the associated channels will then be unlocked. This mechanism only applies to clients that
+allow updates via governance, such as Tendermint clients.

In addition to this, it's important to mention that a token must be sent back along the exact route that it took originally in order to return it to its original form on the source chain (e.g. the Cosmos Hub for `uatom`). Sending a token back to the same chain across a different channel will **not** move the token back across its timeline. If a channel in the chain history closes before the token can be sent back across that channel, then the token will not be returnable to its original form.

## Security considerations

For safety, no module other than the IBC transfer module may mint tokens with the `ibc/` prefix. The IBC transfer module needs a subset of the denomination space that only it can create tokens in.

## Channel Closure

The IBC transfer module does not support channel closure.

diff --git a/docs/ibc/v10.1.x/apps/transfer/params.mdx b/docs/ibc/v10.1.x/apps/transfer/params.mdx
new file mode 100644
index 00000000..290641f1
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/transfer/params.mdx
@@ -0,0 +1,89 @@
---
title: Params
description: 'The IBC transfer application module contains the following parameters:'
---

The IBC transfer application module contains the following parameters:

| Name             | Type | Default Value |
| ---------------- | ---- | ------------- |
| `SendEnabled`    | bool | `true`        |
| `ReceiveEnabled` | bool | `true`        |

The IBC transfer module stores its parameters under its `StoreKey`.

## `SendEnabled`

The `SendEnabled` parameter controls send cross-chain transfer capabilities for all fungible tokens.

To prevent a single token from being transferred from the chain, set the `SendEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following:

* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`.
* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal.

Doing so will prevent the token from being transferred between any accounts in the blockchain.

## `ReceiveEnabled`

The `ReceiveEnabled` parameter controls receive cross-chain transfer capabilities for all fungible tokens.

To prevent a single token from being transferred to the chain, set the `ReceiveEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following:

* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`.
* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal.

Doing so will prevent the token from being transferred between any accounts in the blockchain.

## Queries

Current parameter values can be queried via a query message.

```protobuf
/ proto/ibc/applications/transfer/v1/query.proto

/ QueryParamsRequest is the request type for the Query/Params RPC method.
message QueryParamsRequest {}

/ QueryParamsResponse is the response type for the Query/Params RPC method.
message QueryParamsResponse {
  / params defines the parameters of the module.
  Params params = 1;
}
```

To execute the query with `simd`, use the following command:

```bash
simd query ibc-transfer params
```

## Changing Parameters

To change the parameter values, you must make a governance proposal that executes the `MsgUpdateParams` message.

```protobuf expandable
/ proto/ibc/applications/transfer/v1/tx.proto

/ MsgUpdateParams is the Msg/UpdateParams request type.
message MsgUpdateParams {
  / signer address (it may be the address that controls the module, which defaults to x/gov unless overwritten).
  string signer = 1;

  / params defines the transfer parameters to update.
  /
  / NOTE: All parameters must be supplied.
  Params params = 2 [(gogoproto.nullable) = false];
}

/ MsgUpdateParamsResponse defines the response structure for executing a
/ MsgUpdateParams message.
message MsgUpdateParamsResponse {}
```

diff --git a/docs/ibc/v10.1.x/apps/transfer/state-transitions.mdx b/docs/ibc/v10.1.x/apps/transfer/state-transitions.mdx
new file mode 100644
index 00000000..fcc9169c
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/transfer/state-transitions.mdx
@@ -0,0 +1,35 @@
---
title: State Transitions
description: >-
  A successful fungible token send has two state transitions depending on
  whether the transfer is a movement forward or backward in the token's
  timeline:
---

## Send fungible tokens

A successful fungible token send has two state transitions depending on whether the transfer is a movement forward or backward in the token's timeline:

1. Sender chain is the source chain, *i.e.* a transfer to any chain other than the one it was previously received from is a movement forward in the token's timeline. This results in the following state transitions:

   * The coins are transferred to an escrow address (i.e. locked) on the sender chain.
   * The coins are transferred to the receiving chain through IBC TAO logic.

2. Sender chain is the sink chain, *i.e.* the token is sent back to the chain from which it was previously received. This is a backward movement in the token's timeline. This results in the following state transitions:

   * The coins (vouchers) are burned on the sender chain.
   * The coins are transferred to the receiving chain through IBC TAO logic.

## Receive fungible tokens

A successful fungible token receive has two state transitions depending on whether the transfer is a movement forward or backward in the token's timeline:

1. Receiver chain is the source chain. This is a backward movement in the token's timeline. This results in the following state transitions:

   * The leftmost port and channel identifier pair is removed from the token denomination prefix.
   * The tokens are unescrowed and sent to the receiving address.

2. Receiver chain is the sink chain. This is a forward movement in the token's timeline. This results in the following state transitions:

   * Token vouchers are minted by prefixing the destination port and channel identifiers to the trace information.
   * The receiving chain stores the new trace information in the store (if not set already).
   * The vouchers are sent to the receiving address.

diff --git a/docs/ibc/v10.1.x/apps/transfer/state.mdx b/docs/ibc/v10.1.x/apps/transfer/state.mdx
new file mode 100644
index 00000000..ff48f446
--- /dev/null
+++ b/docs/ibc/v10.1.x/apps/transfer/state.mdx
@@ -0,0 +1,12 @@
---
title: State
description: >-
  The IBC transfer application module keeps state of the port to which the
  module is bound and the denomination trace information.
---

The IBC transfer application module keeps state of the port to which the module is bound and the denomination trace information.
+ +* `PortKey`: `0x01 -> ProtocolBuffer(string)` +* `DenomTraceKey`: `0x02 | []bytes(traceHash) -> ProtocolBuffer(Denom)` +* `DenomKey` : `0x03 | []bytes(traceHash) -> ProtocolBuffer(Denom)` diff --git a/docs/ibc/v10.1.x/changelog/release-notes.mdx b/docs/ibc/v10.1.x/changelog/release-notes.mdx new file mode 100644 index 00000000..dc5c6328 --- /dev/null +++ b/docs/ibc/v10.1.x/changelog/release-notes.mdx @@ -0,0 +1,266 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + + This page tracks all releases and changes from the + [cosmos/ibc-go](https://github.com/cosmos/ibc-go) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md#unreleased) + section. + + + + + ### Dependencies * (light-clients/08-wasm) + [\#8264](https://github.com/cosmos/ibc-go/pull/8264) Bump + **github.com/prysmaticlabs/prysm** to **v5.3.0** ### Testing API * + [\#8154](https://github.com/cosmos/ibc-go/pull/8154) Enable creating test + chains with custom app creation functions + + + + ### Security Fixes * Fix + [ISA-2025-001](https://github.com/cosmos/ibc-go/security/advisories/GHSA-4wf3-5qj9-368v) + security vulnerability. * Fix + [ASA-2025-004](https://github.com/cosmos/ibc-go/security/advisories/GHSA-jg6f-48ff-5xrw) + security vulnerability. ### Features * (core) + [\#7505](https://github.com/cosmos/ibc-go/pull/7505) Add IBC Eureka (IBC v2) + implementation, enabling more efficient IBC packet handling without channel + dependencies, bringing significant performance improvements. * (apps/transfer) + [\#7650](https://github.com/cosmos/ibc-go/pull/7650) Add support for transfer + of entire balance for vesting accounts. * (apps/wasm) + [\#5079](https://github.com/cosmos/ibc-go/pull/5079) 08-wasm light client + proxy module for wasm clients by. * (core/02-client) + [\#7936](https://github.com/cosmos/ibc-go/pull/7936) Clientv2 module. 
* + (core/04-channel) [\#7933](https://github.com/cosmos/ibc-go/pull/7933) + Channel-v2 genesis. * (core/04-channel, core/api) + [\#7934](https://github.com/cosmos/ibc-go/pull/7934) - Callbacks Eureka. * + (light-clients/09-localhost) + [\#6683](https://github.com/cosmos/ibc-go/pull/6683) Make 09-localhost + stateless. * (core, app) [\#6902](https://github.com/cosmos/ibc-go/pull/6902) + Add channel version to core app callbacks. ### Dependencies * + [\#8181](https://github.com/cosmos/ibc-go/pull/8181) Bump + **github.com/cosmos/cosmos-sdk** to **0.50.13** * + [\#7932](https://github.com/cosmos/ibc-go/pull/7932) Bump **go** to **1.23** * + [\#7330](https://github.com/cosmos/ibc-go/pull/7330) Bump **cosmossdk.io/api** + to **0.7.6** * [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump + **cosmossdk.io/core** to **0.11.1** * + [\#7182](https://github.com/cosmos/ibc-go/pull/7182) Bump **cosmossdk.io/log** + to **1.4.1** * [\#7264](https://github.com/cosmos/ibc-go/pull/7264) Bump + **cosmossdk.io/store** to **1.1.1** * + [\#7585](https://github.com/cosmos/ibc-go/pull/7585) Bump + **cosmossdk.io/math** to **1.4.0** * + [\#7540](https://github.com/cosmos/ibc-go/pull/7540) Bump + **github.com/cometbft/cometbft** to **0.38.15** * + [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump + **cosmossdk.io/x/upgrade** to **0.1.4** * + [\#8124](https://github.com/cosmos/ibc-go/pull/8124) Bump + **cosmossdk.io/x/tx** to **0.13.7** * + [\#7942](https://github.com/cosmos/ibc-go/pull/7942) Bump + **github.com/cosmos/cosmos-db** to **1.1.1** * + [\#7224](https://github.com/cosmos/ibc-go/pull/7224) Bump + **github.com/cosmos/ics23/go** to **0.11.0** ### API Breaking * (core, apps) + [\#7213](https://github.com/cosmos/ibc-go/pull/7213) Remove capabilities from + `SendPacket`. * (core, apps) + [\#7225](https://github.com/cosmos/ibc-go/pull/7225) Remove capabilities from + `WriteAcknowledgement`. 
* (core, apps) + [\#7232](https://github.com/cosmos/ibc-go/pull/7232) Remove capabilities from + channel handshake methods. * (core, apps) + [\#7270](https://github.com/cosmos/ibc-go/pull/7270) Remove remaining + dependencies on capability module. * (core, apps) + [\#4811](https://github.com/cosmos/ibc-go/pull/4811) Use expected interface + for legacy params subspace * (core/04-channel) + [\#7239](https://github.com/cosmos/ibc-go/pull/7239) Removed function + `LookupModuleByChannel` * (core/05-port) + [\#7252](https://github.com/cosmos/ibc-go/pull/7252) Removed function + `LookupModuleByPort` * (core/24-host) + [\#7239](https://github.com/cosmos/ibc-go/pull/7239) Removed function + `ChannelCapabilityPath` * (apps/27-interchain-accounts) + [\#7239](https://github.com/cosmos/ibc-go/pull/7239) The following functions + have been removed: `AuthenticateCapability`, `ClaimCapability` * + (apps/27-interchain-accounts) + [\#7961](https://github.com/cosmos/ibc-go/pull/7961) Removed + `absolute-timeouts` flag from `send-tx` in the ICA CLI. * (apps/transfer) + [\#7239](https://github.com/cosmos/ibc-go/pull/7239) The following functions + have been removed: `BindPort`, `AuthenticateCapability`, `ClaimCapability` * + (capability) [\#7279](https://github.com/cosmos/ibc-go/pull/7279) The module + `capability` has been removed. * (testing) + [\#7305](https://github.com/cosmos/ibc-go/pull/7305) Added `TrustedValidators` + map to `TestChain`. This removes the dependency on the `x/staking` module for + retrieving trusted validator sets at a given height, and removes the + `GetTrustedValidators` method from the `TestChain` struct. * (23-commitment) + [\#7486](https://github.com/cosmos/ibc-go/pull/7486) Remove unimplemented + `BatchVerifyMembership` and `BatchVerifyNonMembership` functions * + (core/02-client, light-clients) + [\#5806](https://github.com/cosmos/ibc-go/pull/5806) Decouple light client + routing from their encoding structure. 
* (core/04-channel) + [\#5991](https://github.com/cosmos/ibc-go/pull/5991) The client CLI + `QueryLatestConsensusState` has been removed. * (light-clients/06-solomachine) + [\#6037](https://github.com/cosmos/ibc-go/pull/6037) Remove `Initialize` + function from `ClientState` and move logic to `Initialize` function of + `LightClientModule`. * (light-clients/06-solomachine) + [\#6230](https://github.com/cosmos/ibc-go/pull/6230) Remove + `GetTimestampAtHeight`, `Status` and `UpdateStateOnMisbehaviour` functions + from `ClientState` and move logic to functions of `LightClientModule`. * + (core/02-client) [\#6084](https://github.com/cosmos/ibc-go/pull/6084) Removed + `stakingKeeper` as an argument to `NewKeeper` and replaced with a + `ConsensusHost` implementation. * (testing) + [\#6070](https://github.com/cosmos/ibc-go/pull/6070) Remove + `AssertEventsLegacy` function. * (core) + [\#6138](https://github.com/cosmos/ibc-go/pull/6138) Remove `Router` reference + from IBC core keeper and use instead the router on the existing `PortKeeper` + reference. * (core/04-channel) + [\#6023](https://github.com/cosmos/ibc-go/pull/6023) Remove emission of + non-hexlified event attributes `packet_data` and `packet_ack`. * (core) + [\#6320](https://github.com/cosmos/ibc-go/pull/6320) Remove unnecessary + `Proof` interface from `exported` package. * (core/05-port) + [\#6341](https://github.com/cosmos/ibc-go/pull/6341) Modify + `UnmarshalPacketData` interface to take in the context, portID, and channelID. + This allows for packet data's to be unmarshaled based on the channel version. + * (apps/27-interchain-accounts) + [\#6433](https://github.com/cosmos/ibc-go/pull/6433) Use UNORDERED as the + default ordering for new ICA channels. * (apps/transfer) + [\#6440](https://github.com/cosmos/ibc-go/pull/6440) Remove + `GetPrefixedDenom`. * (apps/transfer) + [\#6508](https://github.com/cosmos/ibc-go/pull/6508) Remove the `DenomTrace` + type. 
* (apps/27-interchain-accounts) + [\#6598](https://github.com/cosmos/ibc-go/pull/6598) Mark the `requests` + repeated field of `MsgModuleQuerySafe` non-nullable. * (23-commmitment) + [\#6644](https://github.com/cosmos/ibc-go/pull/6644) Introduce + `commitment.v2.MerklePath` to include `repeated bytes` in favour of `repeated + string`. This supports using merkle path keys which include non UTF-8 encoded + runes. * (23-commmitment) [\#6870](https://github.com/cosmos/ibc-go/pull/6870) + Remove `commitment.v1.MerklePath` in favour of `commitment.v2.MerklePath`. * + (apps/27-interchain-accounts) + [\#6749](https://github.com/cosmos/ibc-go/pull/6749) The ICA controller + `NewIBCMiddleware` constructor function sets by default the auth module to + nil. * (core, core/02-client) + [\#6763](https://github.com/cosmos/ibc-go/pull/6763) Move prometheus metric + labels for 02-client and core into a separate `metrics` package on core. * + (core/02-client) [\#6777](https://github.com/cosmos/ibc-go/pull/6777) The + `NewClientProposalHandler` of `02-client` has been removed. * (core/types) + [\#6794](https://github.com/cosmos/ibc-go/pull/6794) The composite interface + `QueryServer` has been removed from package `core/types`. Please use the + granular `QueryServer` interfaces provided by each core submodule. * + (light-clients/06-solomachine) + [\#6888](https://github.com/cosmos/ibc-go/pull/6888) Remove + `TypeClientMisbehaviour` constant and the `Type` method on `Misbehaviour`. * + (light-clients/06-solomachine, light-clients/07-tendermint) + [\#6891](https://github.com/cosmos/ibc-go/pull/6891) The `VerifyMembership` + and `VerifyNonMembership` functions of solomachine's `ClientState` have been + made private. The `VerifyMembership`, `VerifyNonMembership`, + `GetTimestampAtHeight`, `Status` and `Initialize` functions of tendermint's + `ClientState` have been made private. 
* (core/04-channel) + [\#6902](https://github.com/cosmos/ibc-go/pull/6902) Add channel version to + core application callbacks. * (core/03-connection, core/02-client) + [\#6937](https://github.com/cosmos/ibc-go/pull/6937) Remove 'ConsensusHost' + interface, also removing self client and consensus state validation in the + connection handshake. * (core/24-host) + [\#6882](https://github.com/cosmos/ibc-go/issues/6882) All functions ending in + `Path` have been removed from 24-host in favour of their sybling functions + ending in `Key`. * (23-commmitment) + [\#6633](https://github.com/cosmos/ibc-go/pull/6633) MerklePath has been + changed to use `repeated bytes` in favour of `repeated strings`. * + (23-commmitment) [\#6644](https://github.com/cosmos/ibc-go/pull/6644) + Introduce `commitment.v2.MerklePath` to include `repeated bytes` in favour of + `repeated string`. This supports using merkle path keys which include non + UTF-8 encoded runes. * (23-commmitment) + [\#6870](https://github.com/cosmos/ibc-go/pull/6870) Remove + `commitment.v1.MerklePath` in favour of `commitment.v2.MerklePath`. * + [\#6923](https://github.com/cosmos/ibc-go/pull/6923) The JSON msg API for + `VerifyMembershipMsg` and `VerifyNonMembershipMsg` payloads for client + contract `SudoMsg` has been updated. The field `path` has been changed to + `merkle_path`. This change requires updates to 08-wasm client contracts for + integration. * (apps/callbacks) + [\#7000](https://github.com/cosmos/ibc-go/pull/7000) Add base application + version to contract keeper callbacks. * (light-clients/08-wasm) + [\#5154](https://github.com/cosmos/ibc-go/pull/5154) Use bytes in wasm + contract api instead of wrapped. * (core, core/08-wasm) + [\#5397](https://github.com/cosmos/ibc-go/pull/5397) Add coordinator Setup + functions to the Path type. * (core/05-port) + [\#6341](https://github.com/cosmos/ibc-go/pull/6341) Modify + `UnmarshalPacketData` interface to take in the context, portID, and channelID. 
+ This allows for packet data's to be unmarshaled based on the channel version. + * (core/02-client) [\#6863](https://github.com/cosmos/ibc-go/pull/6863) remove + ClientStoreProvider interface in favour of concrete type. * (core/05-port) + [\#6988](https://github.com/cosmos/ibc-go/pull/6988) Modify + `UnmarshalPacketData` interface to return the underlying application version. + * (apps/27-interchain-accounts) + [\#7053](https://github.com/cosmos/ibc-go/pull/7053) Remove ICS27 channel + capability migration introduced in v6. * (apps/27-interchain-accounts) + [\#8002](https://github.com/cosmos/ibc-go/issues/8002) Remove ICS-29: fee + middleware. * (core/04-channel) + [\#8053](https://github.com/cosmos/ibc-go/issues/8053) Remove channel + upgradability. ### State Machine Breaking * (light-clients/06-solomachine) + [\#6313](https://github.com/cosmos/ibc-go/pull/6313) Fix: No-op to avoid + panicking on `UpdateState` for invalid misbehaviour submissions. * + (apps/callbacks) [\#8014](https://github.com/cosmos/ibc-go/pull/8014) + Callbacks will now return an error acknowledgement if the recvPacket callback + fails. This reverts all app callback changes whereas before we only reverted + the callback changes. We also error on all callbacks if the callback data is + set but malformed whereas before we ignored the error and continued + processing. * (apps/callbacks) + [\#5349](https://github.com/cosmos/ibc-go/pull/5349) Check if clients params + are duplicates. * (apps/transfer) + [\#6268](https://github.com/cosmos/ibc-go/pull/6268) Use memo strings instead + of JSON keys in `AllowedPacketData` of transfer authorization. * + (light-clients/07-tendermint) Fix: No-op to avoid panicking on `UpdateState` + for invalid misbehaviour submissions. * (light-clients/06-solomachine) + [\#6313](https://github.com/cosmos/ibc-go/pull/6313) Fix: No-op to avoid + panicking on `UpdateState` for invalid misbehaviour submissions. 
### + Improvements * (testing)[\#7430](https://github.com/cosmos/ibc-go/pull/7430) + Update the block proposer in test chains for each block. * + (apps/27-interchain-accounts) + [\#5533](https://github.com/cosmos/ibc-go/pull/5533) ICA host sets the host + connection ID on `OnChanOpenTry`, so that ICA controller implementations are + not obliged to set the value on `OnChanOpenInit` if they are not able. * + (core/02-client, core/03-connection, apps/27-interchain-accounts) + [\#6256](https://github.com/cosmos/ibc-go/pull/6256) Add length checking of + array fields in messages. * (light-clients/08-wasm) + [\#5146](https://github.com/cosmos/ibc-go/pull/5146) Use global wasm VM + instead of keeping an additional reference in keeper. * (core/04-channels) + [\#7935](https://github.com/cosmos/ibc-go/pull/7935) Limit payload size for + both v1 and v2 packet. * (core/runtime) + [\#7601](https://github.com/cosmos/ibc-go/pull/7601) - IBC core runtime env. * + (core/08-wasm) [\#5294](https://github.com/cosmos/ibc-go/pull/5294) Don't + panic during any store operations. * (apps) + [\#5305](https://github.com/cosmos/ibc-go/pull/5305)- Remove GetSigners from + `sdk.Msg` implementations. * (apps) + [\#/5778](https://github.com/cosmos/ibc-go/pull/5778) Use json for + marshalling/unmarshalling transfer packet data. * (core/08-wasm) + [\#5785](https://github.com/cosmos/ibc-go/pull/5785) Allow module safe queries + in ICA. * (core/ante) [\#6278](https://github.com/cosmos/ibc-go/pull/6278) + Performance: Exclude pruning from tendermint client updates in ante handler + executions. * (core/ante) [\#6302](https://github.com/cosmos/ibc-go/pull/6302) + Performance: Skip app callbacks during RecvPacket execution in checkTx within + the redundant relay ante handler. * (core/ante) + [\#6280](https://github.com/cosmos/ibc-go/pull/6280) Performance: Skip + redundant proof checking in RecvPacket execution in reCheckTx within the + redundant relay ante handler. 
* [\#6716](https://github.com/cosmos/ibc-go/pull/6716) Add `HasModule` to capability keeper to allow checking if a scoped module already exists. ### Bug Fixes * (apps/27-interchain-accounts) [\#7277](https://github.com/cosmos/ibc-go/pull/7277) Use `GogoResolver` when populating the module query safe allow list to avoid panics from unresolvable protobuf dependencies. * (core/04-channel) [\#7342](https://github.com/cosmos/ibc-go/pull/7342) Read Tx cmd flags, including the from address, to avoid an "Address cannot be empty" error when upgrading channels via CLI. * (core/03-connection) [\#7397](https://github.com/cosmos/ibc-go/pull/7397) Skip the genesis validation of the connectionID for the localhost client. * (apps/27-interchain-accounts) [\#6377](https://github.com/cosmos/ibc-go/pull/6377) Generate ICA simtest proposals only for provided keepers. ### Testing API * [\#7688](https://github.com/cosmos/ibc-go/pull/7688) Added `SendMsgsWithSender` to `TestChain`. * [\#7430](https://github.com/cosmos/ibc-go/pull/7430) Update block proposer in testing. * [\#5493](https://github.com/cosmos/ibc-go/pull/5493) Add `IBCClientHeader` func for endpoint and update tests. * [\#6685](https://github.com/cosmos/ibc-go/pull/6685) Configure relayers to watch only channels associated with an individual test. * [\#6758](https://github.com/cosmos/ibc-go/pull/6758) Tokens are successfully forwarded from A to C through B.

diff --git a/docs/ibc/v10.1.x/ibc/apps/apps.mdx b/docs/ibc/v10.1.x/ibc/apps/apps.mdx
new file mode 100644
index 00000000..e85b25ef
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/apps/apps.mdx
@@ -0,0 +1,464 @@
---
title: IBC Applications
---

This page is relevant for IBC Classic. Navigate to the IBC v2 applications page for information on v2 apps.

Learn how to configure your application to use IBC and send data packets to other chains.

This document serves as a guide for developers who want to write their own Inter-Blockchain Communication Protocol (IBC) applications for custom use cases.

Due to the modular design of the IBC protocol, IBC application developers do not need to concern themselves with the low-level details of clients, connections, and proof verification; however, a brief explanation is given. The document then goes into detail on the abstraction layer most relevant for application developers (channels and ports), and describes how to define your own custom packets and `IBCModule` callbacks.

To have your module interact over IBC you must: bind to one or more ports, define your own packet data and acknowledgement structs as well as how to encode/decode them, and implement the `IBCModule` interface. Below is a more detailed explanation of how to write an IBC application module correctly.

## Pre-requisite readings

* [IBC Overview](/docs/ibc/v10.1.x/ibc/overview)
* [IBC default integration](/docs/ibc/v10.1.x/ibc/integration)

## Create a custom IBC application module

### Implement the `IBCModule` interface and callbacks

The Cosmos SDK expects all IBC modules to implement the [`IBCModule` interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This interface contains all of the callbacks IBC expects modules to implement. This section describes the callbacks that are called during channel handshake execution.

Here are the channel handshake callbacks that modules are expected to implement:

```go expandable
/ Called by IBC Handler on MsgOpenInit
func (k Keeper) OnChanOpenInit(ctx sdk.Context,
  order channeltypes.Order,
  connectionHops []string,
  portID string,
  channelID string,
  counterparty channeltypes.Counterparty,
  version string,
) error {
  / ... do custom initialization logic

  / Use above arguments to determine if we want to abort handshake
  / Examples: Abort if order == UNORDERED,
  / Abort if version is unsupported
  err := checkArguments(args)
  return err
}

/ Called by IBC Handler on MsgOpenTry
OnChanOpenTry(
  ctx sdk.Context,
  order channeltypes.Order,
  connectionHops []string,
  portID,
  channelID string,
  counterparty channeltypes.Counterparty,
  counterpartyVersion string,
) (string, error) {
  / ... do custom initialization logic

  / Use above arguments to determine if we want to abort handshake
  if err := checkArguments(args); err != nil {
    return "", err
  }

  / Construct application version
  / IBC applications must return the appropriate application version
  / This can be a simple string or it can be a complex version constructed
  / from the counterpartyVersion and other arguments.
  / The version returned will be the channel version used for both channel ends.
  appVersion := negotiateAppVersion(counterpartyVersion, args)
  return appVersion, nil
}

/ Called by IBC Handler on MsgOpenAck
OnChanOpenAck(
  ctx sdk.Context,
  portID,
  channelID string,
  counterpartyVersion string,
) error {
  / ... do custom initialization logic

  / Use above arguments to determine if we want to abort handshake
  err := checkArguments(args)
  return err
}

/ Called by IBC Handler on MsgOpenConfirm
OnChanOpenConfirm(
  ctx sdk.Context,
  portID,
  channelID string,
) error {
  / ... do custom initialization logic

  / Use above arguments to determine if we want to abort handshake
  err := checkArguments(args)
  return err
}
```

The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. Closing a channel is a 2-step handshake: the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`.

```go expandable
/ Called by IBC Handler on MsgCloseInit
OnChanCloseInit(
  ctx sdk.Context,
  portID,
  channelID string,
) error {
  / ... do custom finalization logic

  / Use above arguments to determine if we want to abort handshake
  err := checkArguments(args)
  return err
}

/ Called by IBC Handler on MsgCloseConfirm
OnChanCloseConfirm(
  ctx sdk.Context,
  portID,
  channelID string,
) error {
  / ... do custom finalization logic

  / Use above arguments to determine if we want to abort handshake
  err := checkArguments(args)
  return err
}
```

#### Channel Handshake Version Negotiation

Application modules are expected to verify the versioning used during the channel handshake procedure.

* The `ChanOpenInit` callback should verify that `MsgChanOpenInit.Version` is valid.
* The `ChanOpenTry` callback should construct the application version used for both channel ends. If no application version can be constructed, it must return an error.
* The `ChanOpenAck` callback should verify that `MsgChanOpenAck.CounterpartyVersion` is valid and supported.

IBC expects application modules to perform application version negotiation in `OnChanOpenTry`. The negotiated version must be returned to core IBC. If the version cannot be negotiated, an error should be returned.

Versions must be strings but can implement any versioning structure. If your application plans to have linear releases then semantic versioning is recommended. If your application plans to release various features in between major releases then it is advised to use the same versioning scheme as IBC. This versioning scheme specifies a version identifier and a compatible feature set with that identifier. Valid version selection includes selecting a compatible version identifier with a subset of features supported by your application for that version. The struct used for this scheme can be found in `03-connection/types`.
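A minimal sketch of the simple string-matching strategy is shown below. The function mirrors the `negotiateAppVersion` placeholder used in the handshake callbacks above, but the version identifiers and the exact behavior are illustrative assumptions, not ibc-go API:

```go
package main

import (
	"errors"
	"fmt"
)

// supportedVersions lists the application versions this hypothetical
// module understands, in order of preference (names are illustrative).
var supportedVersions = []string{"myapp-2", "myapp-1"}

// negotiateAppVersion implements simple string matching: if the
// counterparty proposed no version, propose our preferred one;
// otherwise accept the proposal only if we support it, and return an
// error (aborting the handshake) if we do not.
func negotiateAppVersion(counterpartyVersion string) (string, error) {
	if counterpartyVersion == "" {
		return supportedVersions[0], nil
	}
	for _, v := range supportedVersions {
		if v == counterpartyVersion {
			return v, nil
		}
	}
	return "", errors.New("unsupported counterparty version: " + counterpartyVersion)
}

func main() {
	v, _ := negotiateAppVersion("")
	fmt.Println("proposed:", v)
	v, _ = negotiateAppVersion("myapp-1")
	fmt.Println("accepted:", v)
	_, err := negotiateAppVersion("other-1")
	fmt.Println("rejected:", err != nil)
}
```

A module with a single supported version, such as ICS20, reduces this to one equality check against its version constant.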

Since the version type is a string, applications have the ability to do simple version verification via string matching, or they can use the already implemented versioning system and pass the proto-encoded version into each handshake call as necessary.

ICS20 currently implements basic string matching with a single supported version.

### Custom Packets

Modules connected by a channel must agree on what application data they are sending over the channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up to each application module to determine how to implement this agreement. However, for most applications this will happen as a version negotiation during the channel handshake. While more complex version negotiation is possible to implement inside the channel opening handshake, a very simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go).

Thus, a module must define its custom packet data structure, along with a well-defined way to encode and decode it to and from `[]byte`.

```go expandable
/ Custom packet data defined in application module
type CustomPacketData struct {
  / Custom fields ...
}

EncodePacketData(packetData CustomPacketData) []byte {
  / encode packetData to bytes
}

DecodePacketData(encoded []byte) (CustomPacketData) {
  / decode from bytes to packet data
}
```

Then a module must encode its packet data before sending it through IBC.

```go expandable
/ Sending custom application packet data
data := EncodePacketData(customPacketData)
packet.Data = data
/ Send packet to IBC
sequence, err := IBCChannelKeeper.SendPacket(
  ctx,
  sourcePort,
  sourceChannel,
  timeoutHeight,
  timeoutTimestamp,
  data,
)
```

A module receiving a packet must decode the `PacketData` into a structure it expects so that it can act on it.
+

```go
/ Receiving custom application packet data (in OnRecvPacket)
packetData := DecodePacketData(packet.Data)
/ handle received custom packet data
```

#### Packet Flow Handling

Just as IBC expects modules to implement callbacks for channel handshakes, IBC also expects modules
to implement callbacks for handling the packet flow through a channel.

Once a module A and module B are connected to each other, relayers can start relaying packets and
acknowledgements back and forth on the channel.

![IBC packet flow diagram](https://media.githubusercontent.com/media/cosmos/ibc/old/spec/ics-004-channel-and-packet-semantics/channel-state-machine.png)

Briefly, a successful packet flow works as follows:

1. module A sends a packet through the IBC module
2. the packet is received by module B
3. if module B writes an acknowledgement of the packet then module A will process the
   acknowledgement
4. if the packet is not successfully received before the timeout, then module A processes the
   packet's timeout.

##### Sending Packets

Modules do not send packets through callbacks, since the modules initiate the action of sending
packets to the IBC module, as opposed to other parts of the packet flow where msgs sent to the IBC
module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a
packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`.

```go expandable
/ Sending custom application packet data
data := EncodePacketData(customPacketData)
/ Send packet to IBC, authenticating with channelCap
sequence, err := IBCChannelKeeper.SendPacket(
  ctx,
  sourcePort,
  sourceChannel,
  timeoutHeight,
  timeoutTimestamp,
  data,
)
```

##### Receiving Packets

To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets
invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC
keepers.
Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the Acknowledgement interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. + +The state changes that occurred during this callback will only be written if: + +* the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement +* if the acknowledgement returned is nil indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. + +```go expandable +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) + +ibcexported.Acknowledgement { + / Decode the packet data + packetData := DecodePacketData(packet.Data) + + / do application state changes based on packet data and return the acknowledgement + / NOTE: The acknowledgement will indicate to the IBC handler if the application + / state changes should be written via the `Success()` function. Application state + / changes are only written if the acknowledgement is successful or the acknowledgement + / returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + +return ack +} +``` + +The Acknowledgement interface: + +```go +/ Acknowledgement defines the interface used to return +/ acknowledgements in the OnRecvPacket callback. 
type Acknowledgement interface {
  Success() bool
  Acknowledgement() []byte
}
```

### Acknowledgements

Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing.
In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement
will be written once the packet has been processed by the application which may be well after the packet receipt.

NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement
for a packet as soon as it has been received from the IBC module.

This acknowledgement can then be relayed back to the original sender chain, which can take action
depending on the contents of the acknowledgement.

Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and
receive acknowledgements with the IBC modules as byte strings.

Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an
acknowledgement struct along with encoding and decoding it, is very similar to the packet data
example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope)
specifies a recommended format for acknowledgements. This acknowledgement type can be imported from
[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types).

While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto):

```proto expandable
/ Acknowledgement is the recommended acknowledgement format to be used by
/ app-specific protocols.
+/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental +/ conflicts with other protobuf message formats used for acknowledgements. +/ The first byte of any message with this format will be the non-ASCII values +/ `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS: +/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope +message Acknowledgement { + / response contains either a result or an error and must be non-empty + oneof response { + bytes result = 21; + string error = 22; + } +} +``` + +#### Acknowledging Packets + +After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can +then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the +acknowledgement is entirely up to the modules on the channel (just like the packet data); however, it +may often contain information on whether the packet was successfully processed along +with some additional data that could be useful for remediation if the packet processing failed. + +Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and +acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback +is responsible for decoding the acknowledgement and processing it. + +```go expandable +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, +) (*sdk.Result, error) { + / Decode acknowledgement + ack := DecodeAcknowledgement(acknowledgement) + + / process ack + res, err := processAck(ack) + +return res, err +} +``` + +#### Timeout Packets + +If the timeout for a packet is reached before the packet is successfully received or the +counterparty channel end is closed before the packet is successfully received, then the receiving +chain can no longer process it. 
Thus, the sending chain must process the timeout using +`OnTimeoutPacket` to handle this situation. Again the IBC module will verify that the timeout is +indeed valid, so our module only needs to implement the state machine logic for what to do once a +timeout is reached and the packet can no longer be received. + +```go +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) (*sdk.Result, error) { + / do custom timeout logic +} +``` + +### Routing + +As mentioned above, modules must implement the IBC module interface (which contains both channel +handshake callbacks and packet handling callbacks). The concrete implementation of this interface +must be registered with the module name as a route on the IBC `Router`. + +```go expandable +/ app.go +func NewApp(...args) *App { +/ ... + +/ Create static IBC router, add module routes, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) +/ Note: moduleCallbacks must implement IBCModule interface +ibcRouter.AddRoute(moduleName, moduleCallbacks) + +/ Setting Router will finalize all routes by sealing router +/ No more routes can be added +app.IBCKeeper.SetRouter(ibcRouter) +``` + +## Working Example + +For a real working example of an IBC application, you can look through the `ibc-transfer` module +which implements everything discussed above. 
+

Here are the useful parts of the module to look at:

[Binding to transfer
port](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/genesis.go)

[Sending transfer
packets](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/relay.go)

[Implementing IBC
callbacks](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/ibc_module.go)
diff --git a/docs/ibc/v10.1.x/ibc/apps/bindports.mdx b/docs/ibc/v10.1.x/ibc/apps/bindports.mdx
new file mode 100644
index 00000000..325ae1a7
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/apps/bindports.mdx
@@ -0,0 +1,108 @@
---
title: Bind ports
---

## Synopsis

Learn what changes to make to bind modules to their ports on initialization.



## Pre-requisite readings

- [IBC Overview](/docs/ibc/v10.1.x/ibc/overview)
- [IBC default integration](/docs/ibc/v10.1.x/ibc/integration)


Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented:

> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather, it refers to the application module the port binds. For IBC Modules built with the Cosmos SDK, it defaults to the module's name and for CosmWasm contracts it defaults to the contract address.

1. Add port ID to the `GenesisState` proto definition:

```protobuf
message GenesisState {
  string port_id = 1;
  / other fields
}
```

2. Add port ID as a key to the module store:

```go expandable
/ x//types/keys.go
const (
  / ModuleName defines the IBC Module name
  ModuleName = "moduleName"

  / Version defines the current version the IBC
  / module supports
  Version = "moduleVersion-1"

  / PortID is the default port id that module binds to
  PortID = "portID"

  / ...
)
```

3.
Add port ID to `x//types/genesis.go`:

```go expandable
/ in x//types/genesis.go

/ DefaultGenesisState returns a GenesisState with "portID" as the default PortID.
func DefaultGenesisState() *GenesisState {
  return &GenesisState{
    PortId: PortID,
    / additional k-v fields
  }
}

/ Validate performs basic genesis state validation returning an error upon any
/ failure.
func (gs GenesisState) Validate() error {
  if err := host.PortIdentifierValidator(gs.PortId); err != nil {
    return err
  }
  / additional validations

  return gs.Params.Validate()
}
```

4. Set the port in the module keeper's `InitGenesis`:

   The capability module has been removed so port binding has also changed

```go expandable
/ SetPort sets the portID for the transfer module. Used in InitGenesis
func (k Keeper) SetPort(ctx sdk.Context, portID string) {
  store := k.storeService.OpenKVStore(ctx)
  if err := store.Set(types.PortKey, []byte(portID)); err != nil {
    panic(err)
  }
}

/ Initialize any other module state, like params with SetParams.
func (k Keeper) SetParams(ctx sdk.Context, params types.Params) {
  store := k.storeService.OpenKVStore(ctx)
  bz := k.cdc.MustMarshal(&params)
  if err := store.Set([]byte(types.ParamsKey), bz); err != nil {
    panic(err)
  }
}
/ ...
```

The module is set to the desired port. The setting and sealing happens during creation of the IBC router.
diff --git a/docs/ibc/v10.1.x/ibc/apps/ibcmodule.mdx b/docs/ibc/v10.1.x/ibc/apps/ibcmodule.mdx
new file mode 100644
index 00000000..3844fa1d
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/apps/ibcmodule.mdx
@@ -0,0 +1,398 @@
---
title: Implement IBCModule interface and callbacks
---

## Synopsis

Learn how to implement the `IBCModule` interface and all of the callbacks it requires.

The Cosmos SDK expects all IBC modules to implement the [`IBCModule`
interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go).
This interface contains all of the callbacks IBC expects modules to implement. They include callbacks related to channel handshake, closing and packet callbacks (`OnRecvPacket`, `OnAcknowledgementPacket` and `OnTimeoutPacket`).

```go
/ IBCModule implements the ICS26 interface for the given keeper.
/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go,
/ but ultimately file structure is up to the developer
type IBCModule struct {
  keeper keeper.Keeper
}
```

Additionally, in the `module.go` file, add the following line:

```go
var (
  _ module.AppModule      = AppModule{}
  _ module.AppModuleBasic = AppModuleBasic{}
  / Add this line
  _ porttypes.IBCModule = IBCModule{}
)
```



## Pre-requisite readings

- [IBC Overview](/docs/ibc/v10.1.x/ibc/overview)
- [IBC default integration](/docs/ibc/v10.1.x/ibc/integration)



## Channel handshake callbacks

This section will describe the callbacks that are called during channel handshake execution.

Here are the channel handshake callbacks that modules are expected to implement:

> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `checkArguments` and `negotiateAppVersion` functions.

```go expandable
/ Called by IBC Handler on MsgOpenInit
func (im IBCModule) OnChanOpenInit(ctx sdk.Context,
  order channeltypes.Order,
  connectionHops []string,
  portID string,
  channelID string,
  counterparty channeltypes.Counterparty,
  version string,
) (string, error) {
  / ...
do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + / Examples: + / - Abort if order == UNORDERED, + / - Abort if version is unsupported + if err := checkArguments(args); err != nil { + return "", err +} + +return version, nil +} + +/ Called by IBC Handler on MsgOpenTry +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + if err := checkArguments(args); err != nil { + return "", err +} + + / Construct application version + / IBC applications must return the appropriate application version + / This can be a simple string or it can be a complex version constructed + / from the counterpartyVersion and other arguments. + / The version returned will be the channel version used for both channel ends. + appVersion := negotiateAppVersion(counterpartyVersion, args) + +return appVersion, nil +} + +/ Called by IBC Handler on MsgOpenAck +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + if counterpartyVersion != types.Version { + return sdkerrors.Wrapf(types.ErrInvalidVersion, "invalid counterparty version: %s, expected %s", counterpartyVersion, types.Version) +} + + / do custom logic + + return nil +} + +/ Called by IBC Handler on MsgOpenConfirm +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / do custom logic + + return nil +} +``` + +### Channel closing callbacks + +The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. 
Closing a channel is a 2-step handshake, the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`. + +Currently, all IBC modules in this repository return an error for `OnChanCloseInit` to prevent the channels from closing. This is because any user can call `ChanCloseInit` by submitting a `MsgChannelCloseInit` transaction. + +```go expandable +/ Called by IBC Handler on MsgCloseInit +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgCloseConfirm +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} +``` + +### Channel handshake version negotiation + +Application modules are expected to verify versioning used during the channel handshake procedure. + +- `OnChanOpenInit` will verify that the relayer-chosen parameters + are valid and perform any custom `INIT` logic. + It may return an error if the chosen parameters are invalid + in which case the handshake is aborted. + If the provided version string is non-empty, `OnChanOpenInit` should return + the version string if valid or an error if the provided version is invalid. + **If the version string is empty, `OnChanOpenInit` is expected to + return a default version string representing the version(s) + it supports.** + If there is no default version string for the application, + it should return an error if the provided version is an empty string. +- `OnChanOpenTry` will verify the relayer-chosen parameters along with the + counterparty-chosen version string and perform custom `TRY` logic. 
+
  If the relayer-chosen parameters
  are invalid, the callback must return an error to abort the handshake.
  If the counterparty-chosen version is not compatible with this module's
  supported versions, the callback must return an error to abort the handshake.
  If the versions are compatible, the try callback must select the final version
  string and return it to core IBC.
  `OnChanOpenTry` may also perform custom initialization logic.
- `OnChanOpenAck` will error if the counterparty selected version string
  is invalid and abort the handshake. It may also perform custom ACK logic.

Versions must be strings but can implement any versioning structure. If your application plans to
have linear releases then semantic versioning is recommended. If your application plans to release
various features in between major releases then it is advised to use the same versioning scheme
as IBC. This versioning scheme specifies a version identifier and compatible feature set with
that identifier. Valid version selection includes selecting a compatible version identifier with
a subset of features supported by your application for that version. The struct used for this
scheme can be found in [03-connection/types](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection/types/version.go#L16).

Since the version type is a string, applications have the ability to do simple version verification
via string matching or they can use the already implemented versioning system and pass the proto
encoded version into each handshake call as necessary.

ICS20 currently implements basic string matching with a single supported version.

## Packet callbacks

Just as IBC expects modules to implement callbacks for channel handshakes, it also expects modules to implement callbacks for handling the packet flow through a channel, as defined in the `IBCModule` interface.
+ +Once a module A and module B are connected to each other, relayers can start relaying packets and acknowledgements back and forth on the channel. + +![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png) + +Briefly, a successful packet flow works as follows: + +1. Module A sends a packet through the IBC module +2. The packet is received by module B +3. If module B writes an acknowledgement of the packet then module A will process the acknowledgement +4. If the packet is not successfully received before the timeout, then module A processes the packet's timeout. + +### Sending packets + +Modules **do not send packets through callbacks**, since the modules initiate the action of sending packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC +module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `EncodePacketData(customPacketData)` function. + +```go expandable +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + +### Receiving packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets +invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC +keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. 
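As a concrete illustration of the pseudo `EncodePacketData`/`DecodePacketData` functions used throughout this page, here is a minimal self-contained sketch. The `CustomPacketData` type, its fields, and the choice of JSON encoding are assumptions for illustration, not part of ibc-go; a real module may use proto encoding instead.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CustomPacketData is a hypothetical application packet type.
// Field names and JSON encoding are illustrative assumptions.
type CustomPacketData struct {
	Sender string `json:"sender"`
	Amount uint64 `json:"amount"`
}

// EncodePacketData marshals the packet data into the []byte carried in packet.Data.
func EncodePacketData(d CustomPacketData) ([]byte, error) {
	return json.Marshal(d)
}

// DecodePacketData is the counterpart a module would call inside OnRecvPacket.
func DecodePacketData(bz []byte) (CustomPacketData, error) {
	var d CustomPacketData
	err := json.Unmarshal(bz, &d)
	return d, err
}

func main() {
	bz, err := EncodePacketData(CustomPacketData{Sender: "alice", Amount: 10})
	if err != nil {
		panic(err)
	}
	d, err := DecodePacketData(bz)
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Sender, d.Amount)
}
```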
+ +Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. + +The state changes that occurred during this callback will only be written if: + +- the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement +- if the acknowledgement returned is nil indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodePacketData(packet.Data)` function. + +```go expandable +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) + +ibcexported.Acknowledgement { + / Decode the packet data + packetData := DecodePacketData(packet.Data) + + / do application state changes based on packet data and return the acknowledgement + / NOTE: The acknowledgement will indicate to the IBC handler if the application + / state changes should be written via the `Success()` function. Application state + / changes are only written if the acknowledgement is successful or the acknowledgement + / returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + +return ack +} +``` + +Reminder, the `Acknowledgement` interface: + +```go +/ Acknowledgement defines the interface used to return +/ acknowledgements in the OnRecvPacket callback. 
+type Acknowledgement interface { + Success() + +bool + Acknowledgement() []byte +} +``` + +### Acknowledging packets + +After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can +then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the +acknowledgement is entirely up to the modules on the channel (just like the packet data); however, it +may often contain information on whether the packet was successfully processed along +with some additional data that could be useful for remediation if the packet processing failed. + +Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and +acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback +is responsible for decoding the acknowledgement and processing it. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodeAcknowledgement(acknowledgments)` and `processAck(ack)` functions. + +```go expandable +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, +) (*sdk.Result, error) { + / Decode acknowledgement + ack := DecodeAcknowledgement(acknowledgement) + + / process ack + res, err := processAck(ack) + +return res, err +} +``` + +### Timeout packets + +If the timeout for a packet is reached before the packet is successfully received or the +counterparty channel end is closed before the packet is successfully received, then the receiving +chain can no longer process it. Thus, the sending chain must process the timeout using +`OnTimeoutPacket` to handle this situation. 
Again the IBC module will verify that the timeout is
indeed valid, so our module only needs to implement the state machine logic for what to do once a
timeout is reached and the packet can no longer be received.

```go
func (im IBCModule) OnTimeoutPacket(
  ctx sdk.Context,
  packet channeltypes.Packet,
) (*sdk.Result, error) {
  / do custom timeout logic
}
```

### Optional interfaces

The following interfaces are optional and MAY be implemented by an IBCModule.

#### PacketDataUnmarshaler

The `PacketDataUnmarshaler` interface is defined as follows:

```go
/ PacketDataUnmarshaler defines an optional interface which allows a middleware to
/ request the packet data to be unmarshaled by the base application.
type PacketDataUnmarshaler interface {
  / UnmarshalPacketData unmarshals the packet data into a concrete type
  / ctx, portID, channelID are provided as arguments, so that (if needed)
  / the packet data can be unmarshaled based on the channel version.
  / The version of the underlying app is also returned.
  UnmarshalPacketData(ctx sdk.Context, portID, channelID string, bz []byte) (interface{}, string, error)
}
```

The implementation of `UnmarshalPacketData` should unmarshal the bytes into the packet data type defined for an IBC stack.
The base application of an IBC stack should unmarshal the bytes into its packet data type, while a middleware may simply defer the call to the underlying application.

This interface allows middlewares to unmarshal a packet data in order to make use of interfaces the packet data type implements.
For example, the callbacks middleware makes use of this function to access packet data types which implement the `PacketData` and `PacketDataProvider` interfaces.
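For illustration, a base application's `UnmarshalPacketData` might look like the following self-contained sketch. The `CustomPacketData` type, the JSON encoding, the returned version string, and the local `Context` stand-in for `sdk.Context` are all assumptions made so the sketch compiles on its own.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Context stands in for sdk.Context so this sketch is self-contained.
type Context struct{}

// CustomPacketData is a hypothetical concrete packet data type for this stack.
type CustomPacketData struct {
	Sender string `json:"sender"`
}

// IBCModule plays the role of the base application in the IBC stack.
type IBCModule struct{}

// UnmarshalPacketData decodes the raw bytes into the stack's concrete packet
// data type and returns it with the app version ("ics-xx-1" is a made-up
// placeholder). A middleware would instead defer this call to the underlying
// application.
func (im IBCModule) UnmarshalPacketData(ctx Context, portID, channelID string, bz []byte) (interface{}, string, error) {
	var data CustomPacketData
	if err := json.Unmarshal(bz, &data); err != nil {
		return nil, "", err
	}
	return data, "ics-xx-1", nil
}

func main() {
	data, version, err := IBCModule{}.UnmarshalPacketData(Context{}, "transfer", "channel-0", []byte(`{"sender":"alice"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(data.(CustomPacketData).Sender, version)
}
```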
diff --git a/docs/ibc/v10.1.x/ibc/apps/ibcv2apps.mdx b/docs/ibc/v10.1.x/ibc/apps/ibcv2apps.mdx new file mode 100644 index 00000000..2cc35897 --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/apps/ibcv2apps.mdx @@ -0,0 +1,342 @@ +--- +title: IBC v2 Applications +--- + +## Synopsis + +Learn how to implement IBC v2 applications + +To build an IBC v2 application the following steps are required: + +1. [Implement the `IBCModule` interface](#implement-the-ibcmodule-interface) +2. [Bind Ports](#bind-ports) +3. [Implement the IBCModule Keeper](#implement-the-ibcmodule-keeper) +4. [Implement application payload and success acknowledgement](#packets-and-payloads) +5. [Set and Seal the IBC Router](#routing) + +Highlighted improvements for app developers with IBC v2: + +- No need to support channel handshake callbacks +- Flexibility on upgrading application versioning, no need to use channel upgradability to renegotiate an application version, simply support the application version on both sides of the connection. +- Flexibility to choose your desired encoding type. + +## Implement the `IBCModule` interface + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L9-L53). This interface contains all of the callbacks IBC expects modules to implement. Note that for IBC v2, an application developer no longer needs to implement callbacks for the channel handshake. Note that this interface is distinct from the [porttypes.IBCModule interface][porttypes.IBCModule] used for IBC Classic. + +```go +/ IBCModule implements the application interface given the keeper. 
/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go,
/ but ultimately file structure is up to the developer
type IBCModule struct {
  keeper keeper.Keeper
}
```

Additionally, in the `module.go` file, add the following line:

```go
var (
  _ module.AppModule      = AppModule{}
  _ module.AppModuleBasic = AppModuleBasic{}
  / Add this line; note IBC v2 applications assert against the
  / api.IBCModule interface, not porttypes.IBCModule
  _ api.IBCModule = IBCModule{}
)
```

### Packet callbacks

IBC expects modules to implement callbacks for handling the packet lifecycle, as defined in the `IBCModule` interface.

With IBC v2, modules are not directly connected. Instead a pair of clients are connected and register the counterparty clientID. Packets are routed to the relevant application module by the portID registered in the Router. Relayers send packets between the routers/packet handlers on each chain.

![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow_v2.png)

Briefly, a successful packet flow works as follows:

1. A user sends a message to the IBC packet handler
2. The IBC packet handler validates the message, creates the packet and stores the commitment and returns the packet sequence number. The [`Payload`](https://github.com/cosmos/ibc-go/blob/fe25b216359fab71b3228461b05dbcdb1a554158/proto/ibc/core/channel/v2/packet.proto#L26-L38), which contains application specific data, is routed to the relevant application.
3. If the counterparty writes an acknowledgement of the packet then the sending chain will process the acknowledgement.
4. If the packet is not successfully received before the timeout, then the sending chain processes the packet's timeout.
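For reference, the `Payload` carried in step 2 has roughly the following shape. The field names follow the linked proto definition (source port, destination port, version, encoding, value); this local struct is a sketch, not the generated `channeltypesv2.Payload` type, and the example values are placeholders.

```go
package main

import "fmt"

// Payload sketches the IBC v2 payload: the application data (Value) travels
// alongside the port pair, the application version, and the encoding used
// for Value. See the channel v2 proto for the canonical generated type.
type Payload struct {
	SourcePort      string
	DestinationPort string
	Version         string
	Encoding        string
	Value           []byte
}

func main() {
	// Construct a hypothetical payload; values are illustrative only.
	p := Payload{
		SourcePort:      "transfer",
		DestinationPort: "transfer",
		Version:         "ics20-1",
		Encoding:        "application/json",
		Value:           []byte(`{"denom":"uatom","amount":"1"}`),
	}
	fmt.Println(p.SourcePort, p.Encoding, len(p.Value) > 0)
}
```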
+ +#### Sending packets + +[`MsgSendPacket`](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/tx.pb.go#L69-L75) is sent by a user to the [channel v2 message server](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/keeper/msg_server.go), which calls `ChannelKeeperV2.SendPacket`. This validates the message, creates the packet, stores the commitment and returns the packet sequence number. The application must specify its own payload which is used by the application and sent with `MsgSendPacket`. + +An application developer needs to implement the custom logic the application executes when a packet is sent. + +```go expandable +/ OnSendPacket logic +func (im *IBCModule) + +OnSendPacket( + ctx sdk.Context, + sourceChannel string, + destinationChannel string, + sequence uint64, + payload channeltypesv2.Payload, + signer sdk.AccAddress) + +error { + +/ implement any validation + +/ implement payload decoding and validation + +/ call the relevant keeper method for state changes as a result of application logic + +/ emit events or telemetry data + +return nil +} +``` + +#### Receiving packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. An application module should validate and confirm support for the given version and encoding method used as there is greater flexibility in IBC v2 to support a range of versions and encoding methods. +The `OnRecvPacket` callback is invoked by the IBC module after the packet has been proven to be valid and correctly processed by the IBC +keepers. +Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface. 
+The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the
+acknowledgement back to the sender module.
+
+The state changes that occur during this callback could be:
+
+- if the packet processing was successful, as indicated by `PacketStatus_Success`, an `Acknowledgement()` will be written
+- if the packet processing was unsuccessful, as indicated by `PacketStatus_Failure`, an `ackErr` will be written
+
+Note that with IBC v2 the error acknowledgements are standardised and cannot be customised.
+
+```go
+func (im IBCModule)
+
+OnRecvPacket(
+    ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, payload channeltypesv2.Payload, relayer sdk.AccAddress)
+
+channeltypesv2.RecvPacketResult {
+
+    / do application state changes based on payload and return the result
+    / state changes should be written via the `RecvPacketResult`
+
+    return recvResult
+}
+```
+
+#### Acknowledging packets
+
+After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can
+then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the
+acknowledgement are entirely up to the application developer.
+
+IBC will pass in the acknowledgements as `[]byte` to this callback. The callback
+is responsible for decoding the acknowledgement and processing it. The acknowledgement is serialised and deserialised using JSON.
+
+```go expandable
+func (im IBCModule)
+
+OnAcknowledgementPacket(
+    ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, acknowledgement []byte, payload channeltypesv2.Payload, relayer sdk.AccAddress)
+
+error {
+
+    / check the type of the acknowledgement
+
+    / if not ackErr, unmarshal the JSON acknowledgement and unmarshal packet payload
+
+    / perform any application specific logic for processing acknowledgement
+
+    / emit events
+
+    return nil
+}
+```
+
+#### Timeout packets
+
+If the timeout for a packet is reached before the packet is successfully received, or the receiving
+chain can no longer process the packet, the sending chain must process the timeout using
+`OnTimeoutPacket`. Again, the IBC module will verify that the timeout is
+valid, so our module only needs to implement the state machine logic for what to do once a
+timeout is reached and the packet can no longer be received.
+
+```go
+func (im IBCModule)
+
+OnTimeoutPacket(
+    ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, payload channeltypesv2.Payload, relayer sdk.AccAddress)
+
+error {
+
+    / unmarshal packet data
+
+    / do custom timeout logic, e.g. refund tokens for transfer
+
+    return nil
+}
+```
+
+#### PacketDataUnmarshaler
+
+IBC v2 applications are required to implement the `PacketDataUnmarshaler` interface because the encoding type is specified by the `Payload` and multiple encoding types are supported.
+
+```go
+type PacketDataUnmarshaler interface {
+    / UnmarshalPacketData unmarshals the packet data into a concrete type
+    / the payload is provided and the packet data interface is returned
+    UnmarshalPacketData(payload channeltypesv2.Payload) (interface{
+}, error)
+}
+```
+
+## Bind Ports
+
+Currently, ports must be bound on app initialization.
In order to bind modules to their respective ports on initialization, the following needs to be implemented:
+
+> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather, it refers to the application module the port binds to. For IBC Modules built with the Cosmos SDK, it defaults to the module's name, and for CosmWasm contracts it defaults to the contract address.
+
+Add port ID to the `GenesisState` proto definition:
+
+```protobuf
+message GenesisState {
+  string port_id = 1;
+  / other fields
+}
+```
+
+You can see an example for transfer [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/applications/transfer/v1/genesis.proto).
+
+Add port ID as a key to the module store:
+
+```go
+/ x//types/keys.go
+const (
+    / ModuleName defines the IBC Module name
+    ModuleName = "moduleName"
+
+    / PortID is the default port id that module binds to
+    PortID = "portID"
+
+    / ...
+)
+```
+
+Note that with IBC v2, the version does not need to be added as a key (as required with IBC classic) because versioning of applications is now contained within the [packet Payload](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/packet.go#L23-L32).
+
+Add port ID to `x//types/genesis.go`:
+
+```go expandable
+/ in x//types/genesis.go
+
+/ DefaultGenesisState returns a GenesisState
+/ with the portID defined in keys.go
+func DefaultGenesisState() *GenesisState {
+    return &GenesisState{
+        PortId: PortID,
+        / additional k-v fields
+}
+}
+
+/ Validate performs basic genesis state validation
+/ returning an error upon any failure.
+func (gs GenesisState)
+
+Validate()
+
+error {
+    if err := host.PortIdentifierValidator(gs.PortId); err != nil {
+        return err
+}
+    / additional validations
+
+    return gs.Params.Validate()
+}
+```
+
+Set the port in the module keeper, for use in `InitGenesis`:
+
+```go expandable
+/ SetPort sets the portID for the transfer module. Used in InitGenesis
+func (k Keeper)
+
+SetPort(ctx sdk.Context, portID string) {
+    store := k.storeService.OpenKVStore(ctx)
+    if err := store.Set(types.PortKey, []byte(portID)); err != nil {
+        panic(err)
+}
+}
+
+/ Initialize any other module state, like params with SetParams.
+func (k Keeper)
+
+SetParams(ctx sdk.Context, params types.Params) {
+    store := k.storeService.OpenKVStore(ctx)
+    bz := k.cdc.MustMarshal(&params)
+    if err := store.Set([]byte(types.ParamsKey), bz); err != nil {
+        panic(err)
+}
+}
+/ ...
+```
+
+The module is set to the desired port. The setting and sealing happen during creation of the IBC router.
+
+## Implement the IBCModule Keeper
+
+More information on implementing the IBCModule Keepers can be found in the [keepers section](/docs/ibc/v10.1.x/ibc/apps/keeper).
+
+## Packets and Payloads
+
+Application developers need to define the `Payload` contained within an [IBC packet](https://github.com/cosmos/ibc-go/blob/fe25b216359fab71b3228461b05dbcdb1a554158/proto/ibc/core/channel/v2/packet.proto#L11-L24). Note that in IBC v2 the `timeoutHeight` has been removed and only `timeoutTimestamp` is used. A packet can contain multiple payloads in a list. Each Payload includes:
+
+```protobuf expandable
+/ Payload contains the source and destination ports and payload for the application (version, encoding, raw bytes)
+message Payload {
+  / specifies the source port of the packet.
+  string source_port = 1;
+  / specifies the destination port of the packet.
+  string destination_port = 2;
+  / version of the specified application.
+  string version = 3;
+  / the encoding used for the provided value.
+  string encoding = 4;
+  / the raw bytes for the payload.
+  bytes value = 5;
+}
+```
+
+Note that compared to IBC classic, where the application's version and encoding are negotiated during the channel handshake, IBC v2 provides enhanced flexibility: the application version and encoding used by the Payload are defined in the Payload itself.
An example Payload is illustrated below:
+
+```go expandable
+type MyAppPayloadData struct {
+    Field1 string
+    Field2 uint64
+}
+
+/ Marshal your payload to bytes using your encoding
+bz, err := json.Marshal(MyAppPayloadData{
+    Field1: "example", Field2: 7
+})
+
+/ Wrap it in a channel v2 Payload
+payload := channeltypesv2.NewPayload(
+    sourcePort,
+    destPort,
+    "my-app-v1",                 / App version
+    channeltypesv2.EncodingJSON, / Encoding type, e.g. JSON, protobuf or ABI
+    bz,                          / Encoded data
+)
+```
+
+It is also possible to define your own custom success acknowledgement, which will be returned to the sender if the packet is successfully received, via the `RecvPacketResult`. Note that if the packet processing fails, it is not possible to define a custom error acknowledgement; a constant `ackErr` is returned.
+
+## Routing
+
+More information on implementing the IBC Router can be found in the [routing section](/docs/ibc/v10.1.x/ibc/apps/routing).
+
+[porttypes.IBCModule]: https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/module.go
diff --git a/docs/ibc/v10.1.x/ibc/apps/keeper.mdx b/docs/ibc/v10.1.x/ibc/apps/keeper.mdx
new file mode 100644
index 00000000..39434d0a
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/apps/keeper.mdx
@@ -0,0 +1,75 @@
+---
+title: Keeper
+---
+
+## Synopsis
+
+Learn how to implement the IBC Module keeper. Relevant for both IBC classic and v2.
+
+
+
+## Pre-requisite readings
+
+- [IBC Overview](/docs/ibc/v10.1.x/ibc/overview)
+- [IBC default integration](/docs/ibc/v10.1.x/ibc/integration)
+
+
+In the previous sections, on channel handshake callbacks and port binding in `InitGenesis`, a reference was made to keeper methods that need to be implemented when creating a custom IBC module. Below is an overview of how to define an IBC module's keeper.
+ +> Note that some code has been left out for clarity, to get a full code overview, please refer to [the transfer module's keeper in the ibc-go repo](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/keeper.go). + +```go expandable +/ Keeper defines the IBC app module keeper +type Keeper struct { + storeKey sdk.StoreKey + cdc codec.BinaryCodec + paramSpace paramtypes.Subspace + + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + + / ... additional according to custom logic +} + +/ NewKeeper creates a new IBC app module Keeper instance +func NewKeeper( + / args +) + +Keeper { + / ... + + return Keeper{ + cdc: cdc, + storeKey: key, + paramSpace: paramSpace, + + channelKeeper: channelKeeper, + portKeeper: portKeeper, + + / ... additional according to custom logic +} +} + +/ GetPort returns the portID for the IBC app module. Used in ExportGenesis +func (k Keeper) + +GetPort(ctx sdk.Context) + +string { + store := ctx.KVStore(k.storeKey) + +return string(store.Get(types.PortKey)) +} + +/ SetPort sets the portID for the IBC app module. Used in InitGenesis +func (k Keeper) + +SetPort(ctx sdk.Context, portID string) { + store := ctx.KVStore(k.storeKey) + +store.Set(types.PortKey, []byte(portID)) +} + +/ ... additional according to custom logic +``` diff --git a/docs/ibc/v10.1.x/ibc/apps/packets_acks.mdx b/docs/ibc/v10.1.x/ibc/apps/packets_acks.mdx new file mode 100644 index 00000000..ba4bb2dd --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/apps/packets_acks.mdx @@ -0,0 +1,163 @@ +--- +title: Define packets and acks +--- + +## Synopsis + +Learn how to define custom packet and acknowledgement structs and how to encode and decode them. 
+
+
+
+## Pre-requisite readings
+
+- [IBC Overview](/docs/ibc/v10.1.x/ibc/overview)
+- [IBC default integration](/docs/ibc/v10.1.x/ibc/integration)
+
+
+
+## Custom packets
+
+Modules connected by a channel must agree on what application data they are sending over the
+channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up
+to each application module to determine how to implement this agreement. However, for most
+applications this will happen as a version negotiation during the channel handshake. While more
+complex version negotiation is possible to implement inside the channel opening handshake, a very
+simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go).
+
+Thus, a module must define its custom packet data structure, along with a well-defined way to
+encode and decode it to and from `[]byte`.
+
+```go expandable
+/ Custom packet data defined in application module
+type CustomPacketData struct {
+    / Custom fields ...
+}
+
+func EncodePacketData(packetData CustomPacketData) []byte {
+    / encode packetData to bytes
+}
+
+func DecodePacketData(encoded []byte) CustomPacketData {
+    / decode from bytes to packet data
+}
+```
+
+> Note that the `CustomPacketData` struct is defined in the proto definition and then compiled by the protobuf compiler.
+
+Then a module must encode its packet data before sending it through IBC.
+
+```go expandable
+/ Sending custom application packet data
+data := EncodePacketData(customPacketData)
+
+/ Send packet to IBC
+sequence, err := IBCChannelKeeper.SendPacket(
+    ctx,
+    sourcePort,
+    sourceChannel,
+    timeoutHeight,
+    timeoutTimestamp,
+    data,
+)
+```
+
+A module receiving a packet must decode the `PacketData` into a structure it expects so that it can
+act on it.
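One possible JSON-based implementation of such an encode/decode pair is sketched below. The fields are invented for illustration; real modules typically generate the struct from a proto definition and may use protobuf encoding instead.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative packet data struct; field names are assumptions for the
// example, not a real module's schema.
type CustomPacketData struct {
	Sender string `json:"sender"`
	Amount uint64 `json:"amount"`
}

// EncodePacketData serializes the packet data to bytes for sending.
func EncodePacketData(pd CustomPacketData) ([]byte, error) {
	return json.Marshal(pd)
}

// DecodePacketData deserializes received bytes back into the struct.
func DecodePacketData(bz []byte) (CustomPacketData, error) {
	var pd CustomPacketData
	err := json.Unmarshal(bz, &pd)
	return pd, err
}

func main() {
	bz, err := EncodePacketData(CustomPacketData{Sender: "alice", Amount: 42})
	if err != nil {
		panic(err)
	}
	pd, err := DecodePacketData(bz)
	if err != nil {
		panic(err)
	}
	fmt.Println(pd.Sender, pd.Amount) // alice 42
}
```

Both sides of the channel must use the same pair: whatever the sender's `EncodePacketData` produces is exactly what the receiver's `DecodePacketData` must accept.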
+
+```go
+/ Receiving custom application packet data (in OnRecvPacket)
+packetData := DecodePacketData(packet.Data)
+
+/ handle received custom packet data
+```
+
+### Optional interfaces
+
+The following interfaces are optional and MAY be implemented by a custom packet type.
+They allow middlewares such as callbacks to access information stored within the packet data.
+
+#### PacketData interface
+
+The `PacketData` interface is defined as follows:
+
+```go
+/ PacketData defines an optional interface which an application's packet data structure may implement.
+type PacketData interface {
+    / GetPacketSender returns the sender address of the packet data.
+    / If the packet sender is unknown or undefined, an empty string should be returned.
+    GetPacketSender(sourcePortID string)
+
+string
+}
+```
+
+The implementation of `GetPacketSender` should return the sender of the packet data.
+If the packet sender is unknown or undefined, an empty string should be returned.
+
+This interface is intended to give IBC middlewares access to the packet sender of a packet data type.
+
+#### PacketDataProvider interface
+
+The `PacketDataProvider` interface is defined as follows:
+
+```go
+/ PacketDataProvider defines an optional interface for retrieving custom packet data stored on behalf of another application.
+/ An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information.
+/ A short term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application.
+/ This interface standardizes that behaviour. Upon realization of the ability for middlewares to define their own packet data types, this interface will be deprecated and removed with time.
+type PacketDataProvider interface {
+    / GetCustomPacketData returns the packet data held on behalf of another application.
+    / The name the information is stored under should be provided as the key.
+    / If no custom packet data exists for the key, nil should be returned.
+    GetCustomPacketData(key string)
+
+interface{
+}
+}
+```
+
+The implementation of `GetCustomPacketData` should return packet data held on behalf of another application (if present and supported).
+If this functionality is not supported, it should return nil. Otherwise it should return the packet data associated with the provided key.
+
+This interface gives IBC applications access to the packet data information embedded into the base packet data type.
+Within transfer and interchain accounts, the embedded packet data is stored within the Memo field.
+
+Once all IBC applications within an IBC stack are capable of creating/maintaining their own packet data types, this interface function will be deprecated and removed.
+
+## Acknowledgements
+
+Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing.
+In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement
+will be written once the packet has been processed by the application, which may be well after the packet receipt.
+
+NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement
+for a packet as soon as it has been received from the IBC module.
+
+This acknowledgement can then be relayed back to the original sender chain, which can take action
+depending on the contents of the acknowledgement.
+
+Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and
+receive acknowledgements with the IBC modules as byte strings.
+
+Thus, modules must agree on how to encode/decode acknowledgements.
The process of creating an
+acknowledgement struct, along with encoding and decoding it, is very similar to the packet data
+example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope)
+specifies a recommended format for acknowledgements. This acknowledgement type can be imported from
+[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types).
+
+While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto):
+
+```protobuf expandable
+/ Acknowledgement is the recommended acknowledgement format to be used by
+/ app-specific protocols.
+/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental
+/ conflicts with other protobuf message formats used for acknowledgements.
+/ The first byte of any message with this format will be the non-ASCII values
+/ `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS:
+/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope
+message Acknowledgement {
+  / response contains either a result or an error and must be non-empty
+  oneof response {
+    bytes result = 21;
+    string error = 22;
+  }
+}
+```
diff --git a/docs/ibc/v10.1.x/ibc/apps/routing.mdx b/docs/ibc/v10.1.x/ibc/apps/routing.mdx
new file mode 100644
index 00000000..96609026
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/apps/routing.mdx
@@ -0,0 +1,40 @@
+---
+title: Routing
+---
+
+
+
+## Pre-requisite readings
+
+- [IBC Overview](/docs/ibc/v10.1.x/ibc/overview)
+- [IBC default integration](/docs/ibc/v10.1.x/ibc/integration)
+
+
+
+## Synopsis
+
+Learn how to hook a route to the IBC router for the custom IBC module.
+
+As mentioned above, modules must implement the `IBCModule` interface (which contains both channel
+handshake callbacks for IBC classic only, and packet handling callbacks for IBC classic and v2). The concrete implementation of this interface
+must be registered with the module name as a route on the IBC `Router`.
+
+```go expandable
+/ app.go
+func NewApp(...args) *App {
+    / ...
+
+    / Create static IBC router, add module routes, then set and seal it
+    ibcRouter := port.NewRouter()
+
+    ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule)
+    / Note: moduleCallbacks must implement IBCModule interface
+    ibcRouter.AddRoute(moduleName, moduleCallbacks)
+
+    / Setting Router will finalize all routes by sealing router
+    / No more routes can be added
+    app.IBCKeeper.SetRouter(ibcRouter)
+
+    / ...
+}
+```
diff --git a/docs/ibc/v10.1.x/ibc/best-practices.mdx b/docs/ibc/v10.1.x/ibc/best-practices.mdx
new file mode 100644
index 00000000..96790665
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/best-practices.mdx
@@ -0,0 +1,20 @@
+---
+title: Best Practices
+---
+
+## Identifying legitimate channels
+
+Identifying which channel to use can be difficult, as it requires verifying information about the chains you want to connect to.
+Channels are based on a light client. A chain can be uniquely identified by its chain ID and validator set pairing; it is unsafe to rely only on the chain ID.
+Any user can create a client with any chain ID, but only the chain with the correct validator set and chain ID can produce headers which would update that client.
+
+Which channel to use is based on social consensus. The desired channel should have the following properties:
+
+* based on a valid client (can only be updated by the chain it connects to)
+* has sizable activity
+* the underlying client is active
+
+To verify that a client is valid, you will need to obtain a header from the chain you want to connect to.
This can be done by running a full node for that chain or relying on a trusted RPC address.
+Then you should query the light client you want to verify and obtain its latest consensus state. All consensus state fields must match the header queried for at the same height as the consensus state (root, timestamp, next validator set hash).
+
+Explorers and wallets are highly encouraged to follow this practice. It is unsafe to algorithmically add new channels without following this process.
diff --git a/docs/ibc/v10.1.x/ibc/integration.mdx b/docs/ibc/v10.1.x/ibc/integration.mdx
new file mode 100644
index 00000000..42716b91
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/integration.mdx
@@ -0,0 +1,367 @@
+---
+title: Integration
+---
+
+## Synopsis
+
+Learn how to integrate IBC into your application.
+
+This document outlines the required steps to integrate and configure the [IBC
+module](https://github.com/cosmos/ibc-go/tree/main/modules/core) in your Cosmos SDK application and enable sending fungible token transfers to other chains. An [example app using ibc-go v10 is linked](https://github.com/gjermundgaraba/probe/tree/ibc/v10).
+
+## Integrating the IBC module
+
+Integrating the IBC module into your SDK-based application is straightforward. The general changes can be summarized in the following steps:
+
+- [Define additional `Keeper` fields for the new modules on the `App` type](#add-application-fields-to-app).
+- [Add the module's `StoreKey`s and initialize their `Keeper`s](#configure-the-keepers).
+- [Create Application Stacks with Middleware](#create-application-stacks-with-middleware).
+- [Set up the IBC router and add a route for the `transfer` module](#register-module-routes-in-the-ibc-router).
+- [Grant permissions to `transfer`'s `ModuleAccount`](#module-account-permissions).
+- [Add the modules to the module `Manager`](#module-manager-and-simulationmanager).
+- [Update the module `SimulationManager` to enable simulations](#module-manager-and-simulationmanager).
+- [Integrate light client modules (e.g. `07-tendermint`)](#integrating-light-clients). +- [Add modules to `Begin/EndBlockers` and `InitGenesis`](#application-abci-ordering). + +### Add application fields to `App` + +We need to register the core `ibc` and `transfer` `Keeper`s. To support the use of IBC v2, `transferv2` and `callbacksv2` must also be registered as follows: + +```go title="app.go" expandable +import ( + + / other imports + / ... + ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper" + ibctransferkeeper "github.com/cosmos/ibc-go/v10/modules/apps/transfer/keeper" + / ibc v2 imports + transferv2 "github.com/cosmos/ibc-go/v10/modules/apps/transfer/v2" + ibccallbacksv2 "github.com/cosmos/ibc-go/v10/modules/apps/callbacks/v2" +) + +type App struct { + / baseapp, keys and subspaces definitions + + / other keepers + / ... + IBCKeeper *ibckeeper.Keeper / IBC Keeper must be a pointer in the app, so we can SetRouter on it correctly + TransferKeeper ibctransferkeeper.Keeper / for cross-chain fungible token transfers + + / ... + / module and simulation manager definitions +} +``` + +### Configure the `Keeper`s + +Initialize the IBC `Keeper`s (for core `ibc` and `transfer` modules), and any additional modules you want to include. + + + **Notice** The capability module has been removed in ibc-go v10, therefore the + `ScopedKeeper` has also been removed + + +```go expandable +import ( + + / other imports + / ... + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + + ibcexported "github.com/cosmos/ibc-go/v10/modules/core/exported" + ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper" + "github.com/cosmos/ibc-go/v10/modules/apps/transfer" + ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" + ibctm "github.com/cosmos/ibc-go/v10/modules/light-clients/07-tendermint" +) + +func NewApp(...args) *App { + / define codecs and baseapp + + / ... 
other module keepers + + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[ibcexported.StoreKey]), + app.GetSubspace(ibcexported.ModuleName), + app.UpgradeKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Create Transfer Keeper + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[ibctransfertypes.StoreKey]), + app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, + app.IBCKeeper.ChannelKeeper, + app.MsgServiceRouter(), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / ... continues +} +``` + +### Create Application Stacks with Middleware + +Middleware stacks in IBC allow you to wrap an `IBCModule` with additional logic for packets and acknowledgements. This is a chain of handlers that execute in order. The transfer stack below shows how to wire up transfer to use packet forward middleware, and the callbacks middleware. Note that the order is important. + +```go expandable +/ Create Transfer Stack for IBC Classic + maxCallbackGas := uint64(10_000_000) + wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper) + +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) +/ callbacks wraps the transfer stack as its base app, and uses PacketForwardKeeper as the ICS4Wrapper +/ i.e. packet-forward-middleware is higher on the stack and sits between callbacks and the ibc channel keeper +/ Since this is the lowest level middleware of the transfer stack, it should be the first entrypoint for transfer keeper's +/ WriteAcknowledgement. 
+ cbStack := ibccallbacks.NewIBCMiddleware(transferStack, app.PacketForwardKeeper, wasmStackIBCHandler, maxCallbackGas) + +transferStack = packetforward.NewIBCMiddleware( + cbStack, + app.PacketForwardKeeper, + 0, / retries on timeout + packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp, +) +``` + +#### IBC v2 Application Stack + +For IBC v2, an example transfer stack is shown below. In this case the transfer stack is using the callbacks middleware. + +```go +/ Create IBC v2 transfer middleware stack +/ the callbacks gas limit is recommended to be 10M for use with wasm contracts + maxCallbackGas := uint64(10_000_000) + wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper) + +var ibcv2TransferStack ibcapi.IBCModule + ibcv2TransferStack = transferv2.NewIBCModule(app.TransferKeeper) + +ibcv2TransferStack = ibccallbacksv2.NewIBCMiddleware(transferv2.NewIBCModule(app.TransferKeeper), app.IBCKeeper.ChannelKeeperV2, wasmStackIBCHandler, app.IBCKeeper.ChannelKeeperV2, maxCallbackGas) +``` + +### Register module routes in the IBC `Router` + +IBC needs to know which module is bound to which port so that it can route packets to the +appropriate module and call the appropriate callbacks. The port to module name mapping is handled by +IBC's port `Keeper`. However, the mapping from module name to the relevant callbacks is accomplished +by the port +[`Router`](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/router.go) on the +`ibc` module. + +Adding the module routes allows the IBC handler to call the appropriate callback when processing a channel handshake or a packet. + +Currently, a `Router` is static so it must be initialized and set correctly on app initialization. +Once the `Router` has been set, no new routes can be added. + +```go title="app.go" expandable +import ( + + / other imports + / ... 
+ porttypes "github.com/cosmos/ibc-go/v10/modules/core/05-port/types" + ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" +) + +func NewApp(...args) *App { + / .. continuation from above + + / Create static IBC router, add transfer module route, then set and seal it + ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / ... continues +``` + +#### IBC v2 Router + +With IBC v2, there is a new [router](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/router.go) that needs to register the routes for a portID to a given IBCModule. + +```go +/ IBC v2 router creation + ibcRouterV2 := ibcapi.NewRouter() + +ibcRouterV2.AddRoute(ibctransfertypes.PortID, ibcv2TransferStack) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouterV2(ibcRouterV2) +``` + +### Module `Manager` and `SimulationManager` + +In order to use IBC, we need to add the new modules to the module `Manager` and to the `SimulationManager`, in case your application supports [simulations](https://docs.cosmos.network/main/learn/advanced/simulation). + +```go title="app.go" expandable +import ( + + / other imports + / ... + "github.com/cosmos/cosmos-sdk/types/module" + + ibc "github.com/cosmos/ibc-go/v10/modules/core" + "github.com/cosmos/ibc-go/v10/modules/apps/transfer" +) + +func NewApp(...args) *App { + / ... continuation from above + + app.ModuleManager = module.NewManager( + / other modules + / ... + / highlight-start ++ ibc.NewAppModule(app.IBCKeeper), ++ transfer.NewAppModule(app.TransferKeeper), + / highlight-end + ) + + / ... + + app.simulationManager = module.NewSimulationManagerFromAppModules( + / other modules + / ... + app.ModuleManager.Modules, + map[string]module.AppModuleSimulation{ +}, + ) + + / ... 
continues +``` + +### Module account permissions + +After that, we need to grant `Minter` and `Burner` permissions to +the `transfer` `ModuleAccount` to mint and burn relayed tokens. + +```go title="app.go" expandable +import ( + + / other imports + / ... + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + + / highlight-next-line ++ ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" +) + +/ app.go +var ( + / module account permissions + maccPerms = map[string][]string{ + / other module accounts permissions + / ... + ibctransfertypes.ModuleName: { + authtypes.Minter, authtypes.Burner +}, +} +) +``` + +### Integrating light clients + +> Note that from v10 onwards, all light clients are expected to implement the [`LightClientInterface` interface](/docs/ibc/v10.1.x/light-clients/developer-guide/light-client-module#implementing-the-lightclientmodule-interface) defined by core IBC, and have to be explicitly registered in a chain's app.go. This is in contrast to earlier versions of ibc-go when `07-tendermint` and `06-solomachine` were added out of the box. Follow the steps below to integrate the `07-tendermint` light client. + +All light clients must be registered with `module.Manager` in a chain's app.go file. The following code example shows how to instantiate `07-tendermint` light client module and register its `ibctm.AppModule`. + +```go title="app.go" expandable +import ( + + / other imports + / ... + "github.com/cosmos/cosmos-sdk/types/module" + / highlight-next-line ++ ibctm "github.com/cosmos/ibc-go/v10/modules/light-clients/07-tendermint" +) + +/ app.go +/ after sealing the IBC router + clientKeeper := app.IBCKeeper.ClientKeeper + storeProvider := app.IBCKeeper.ClientKeeper.GetStoreProvider() + tmLightClientModule := ibctm.NewLightClientModule(appCodec, storeProvider) + +clientKeeper.AddRoute(ibctm.ModuleName, &tmLightClientModule) +/ ... +app.ModuleManager = module.NewManager( + / ... 
+ ibc.NewAppModule(app.IBCKeeper), + transfer.NewAppModule(app.TransferKeeper), / i.e ibc-transfer module + + / register light clients on IBC + / highlight-next-line ++ ibctm.NewAppModule(tmLightClientModule), +) +``` + +#### Allowed Clients Params + +The allowed clients parameter defines an allow list of client types supported by the chain. The +default value is a single-element list containing the [`AllowedClients`](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client/types/client.pb.go#L248-L253) wildcard (`"*"`). Alternatively, the parameter +may be set with a list of client types (e.g. `"06-solomachine","07-tendermint","09-localhost"`). +A client type that is not registered on this list will fail upon creation or on genesis validation. +Note that, since the client type is an arbitrary string, chains must not register two light clients +which return the same value for the `ClientType()` function, otherwise the allow list check can be +bypassed. + +### Application ABCI ordering + +One addition from IBC is the concept of `HistoricalInfo` which is stored in the Cosmos SDK `x/staking` module. The number of records stored by `x/staking` is controlled by the `HistoricalEntries` parameter which stores `HistoricalInfo` on a per-height basis. +Each entry contains the historical information for the `Header` and `ValidatorSet` of this chain which is stored +at each height during the `BeginBlock` call. The `HistoricalInfo` is required to introspect a blockchain's prior state at a given height in order to verify the light client `ConsensusState` during the +connection handshake. + +```go title="app.go" expandable +import ( + + / other imports + / ... + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ibcexported "github.com/cosmos/ibc-go/v10/modules/core/exported" + ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper" + ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" +) + +func NewApp(...args) *App { + / ... 
+
+ / add x/staking, ibc and transfer modules to BeginBlockers
+ app.ModuleManager.SetOrderBeginBlockers(
+ / other modules ...
+ stakingtypes.ModuleName,
+ ibcexported.ModuleName,
+ ibctransfertypes.ModuleName,
+ )
+
+app.ModuleManager.SetOrderEndBlockers(
+ / other modules ...
+ stakingtypes.ModuleName,
+ ibcexported.ModuleName,
+ ibctransfertypes.ModuleName,
+ )
+
+ / ...
+ genesisModuleOrder := []string{
+ / other modules
+ / ...
+ ibcexported.ModuleName,
+ ibctransfertypes.ModuleName,
+}
+
+app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...)
+
+ / ... continues
+```
+
+That's it! You have now wired up the IBC module and the `transfer` module, and are now able to send fungible tokens across
+different chains. If you want to have a broader view of the changes, take a look at the SDK's
+[`SimApp`](https://github.com/cosmos/ibc-go/blob/main/testing/simapp/app.go).
diff --git a/docs/ibc/v10.1.x/ibc/middleware/develop.mdx b/docs/ibc/v10.1.x/ibc/middleware/develop.mdx
new file mode 100644
index 00000000..be4ae031
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/middleware/develop.mdx
@@ -0,0 +1,510 @@
+---
+title: Create a custom IBC middleware
+description: >-
+  IBC middleware will wrap over an underlying IBC application (a base
+  application or downstream middleware) and sits between core IBC and the base
+  application.
+---
+
+IBC middleware will wrap over an underlying IBC application (a base application or downstream middleware) and sits between core IBC and the base application.
+
+
+Middleware developers must use the same serialization and deserialization method as in ibc-go's codec: transfertypes.ModuleCdc.\[Must]MarshalJSON
+
+
+For middleware builders this means:
+
+```go
+import transfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types"
+transfertypes.ModuleCdc.[Must]MarshalJSON
+func MarshalAsIBCDoes(ack channeltypes.Acknowledgement) ([]byte, error) {
+ return transfertypes.ModuleCdc.MarshalJSON(&ack)
+}
+```
+
+The interfaces a middleware must implement are found [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go).
+
+```go
+/ Middleware implements the ICS26 Module interface
+type Middleware interface {
+ IBCModule / middleware has access to an underlying application which may be wrapped by more middleware
+ ICS4Wrapper / middleware has access to ICS4Wrapper which may be core IBC Channel Handler or a higher-level middleware that wraps this middleware.
+}
+```
+
+An `IBCMiddleware` struct implementing the `Middleware` interface can be defined with its constructor as follows:
+
+```go expandable
+/ @ x/module_name/ibc_middleware.go
+
+/ IBCMiddleware implements the ICS26 callbacks and ICS4Wrapper for the fee middleware given the
+/ fee keeper and the underlying application.
+type IBCMiddleware struct {
+ app porttypes.IBCModule
+ keeper keeper.Keeper
+}
+
+/ NewIBCMiddleware creates a new IBCMiddleware given the keeper and underlying application
+func NewIBCMiddleware(app porttypes.IBCModule, k keeper.Keeper)
+
+IBCMiddleware {
+ return IBCMiddleware{
+ app: app,
+ keeper: k,
+}
+}
+```
+
+## Implement `IBCModule` interface
+
+`IBCMiddleware` is a struct that implements the [ICS-26 `IBCModule` interface (`porttypes.IBCModule`)](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go#L14-L107). It is recommended to separate these callbacks into a separate file `ibc_middleware.go`.
+
+> Note how this is analogous to implementing the same interfaces for IBC applications that act as base applications.
+
+As will be mentioned in the [integration section](/docs/ibc/v10.1.x/ibc/middleware/integration), this struct should be different from the struct that implements `AppModule` in case the middleware maintains its own internal state and processes separate SDK messages.
+
+The middleware must have access to the underlying application, and be called before it during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback.
+
+> Middleware **may** choose not to call the underlying application's callback at all. Though these should generally be limited to error cases.
+
+The `IBCModule` interface consists of the channel handshake callbacks and packet callbacks. Most of the custom logic will be performed in the packet callbacks; in the case of the channel handshake callbacks, introducing the middleware requires careful handling of version negotiation.
+
+### Channel handshake callbacks
+
+#### Version negotiation
+
+In the case where the IBC middleware expects to speak to a compatible IBC middleware on the counterparty chain, it must use the channel handshake to negotiate the middleware version without interfering in the version negotiation of the underlying application.
+
+Middleware accomplishes this by formatting the version in a JSON-encoded string containing the middleware version and the application version. The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application. The format of the version is specified in ICS-30 as the following:
+
+```json
+{
+  "<middleware_version_key>": "<middleware_version_value>",
+  "app_version": "<application_version_value>"
+}
+```
+
+The `<middleware_version_key>` key in the JSON struct should be replaced by the actual name of the key for the corresponding middleware (e.g. `fee_version`).
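To make the version format concrete, constructing and parsing an ICS-30 style version string can be sketched in plain Go. The `Metadata` struct and the `constructVersion`/`parseVersion` helpers below are illustrative only (using `fee_version` as a hypothetical middleware key); they are not ibc-go APIs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata is a hypothetical middleware version struct following the
// ICS-30 format: one middleware-specific key plus "app_version".
type Metadata struct {
	FeeVersion string `json:"fee_version"`
	AppVersion string `json:"app_version"`
}

// constructVersion builds the JSON-encoded channel version string from
// the negotiated middleware version and the app's version.
func constructVersion(mwVersion, appVersion string) (string, error) {
	bz, err := json.Marshal(Metadata{FeeVersion: mwVersion, AppVersion: appVersion})
	if err != nil {
		return "", err
	}
	return string(bz), nil
}

// parseVersion splits a channel version string back into its parts.
// A version that fails to unmarshal is not middleware-formatted and
// would be passed through to the underlying application unchanged.
func parseVersion(version string) (Metadata, error) {
	var md Metadata
	if err := json.Unmarshal([]byte(version), &md); err != nil {
		return Metadata{}, err
	}
	return md, nil
}

func main() {
	v, _ := constructVersion("fee-v1", "ics20-1")
	fmt.Println(v) // {"fee_version":"fee-v1","app_version":"ics20-1"}

	md, _ := parseVersion(v)
	fmt.Println(md.FeeVersion, md.AppVersion)
}
```

A version string that fails to unmarshal is simply handed to the underlying application as-is, which is exactly what the handshake callbacks below do on an unmarshalling error.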
+
+During the handshake callbacks, the middleware can unmarshal the version string and retrieve the middleware and application versions. It can do its negotiation logic on `<middleware_version_value>`, and pass the `<application_version_value>` to the underlying application.
+
+> **NOTE**: Middleware that does not need to negotiate with a counterparty middleware on the remote stack will not implement the version unmarshalling and negotiation, and will simply perform its own custom logic on the callbacks without relying on the counterparty behaving similarly.
+
+#### `OnChanOpenInit`
+
+```go expandable
+func (im IBCMiddleware)
+
+OnChanOpenInit(
+ ctx sdk.Context,
+ order channeltypes.Order,
+ connectionHops []string,
+ portID string,
+ channelID string,
+ counterparty channeltypes.Counterparty,
+ version string,
+) (string, error) {
+ if version != "" {
+ / try to unmarshal JSON-encoded version string and pass
+ / the app-specific version to app callback.
+ / otherwise, pass version directly to app callback.
+ metadata, err := Unmarshal(version)
+ if err != nil {
+ / Since it is valid for fee version to not be specified,
+ / the above middleware version may be for another middleware.
+ / Pass the entire version string onto the underlying application.
+ return im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + counterparty, + version, + ) +} + +else { + metadata = { + / set middleware version to default value + MiddlewareVersion: defaultMiddlewareVersion, + / allow application to return its default version + AppVersion: "", +} + +} + +} + +doCustomLogic() + + / if the version string is empty, OnChanOpenInit is expected to return + / a default version string representing the version(s) + +it supports + appVersion, err := im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + counterparty, + metadata.AppVersion, / note we only pass app version here + ) + if err != nil { + return "", err +} + version := constructVersion(metadata.MiddlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L36-L83) an example implementation of this callback for the ICS-29 Fee Middleware module. + +#### `OnChanOpenTry` + +```go expandable +func (im IBCMiddleware) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + cpMetadata, err := Unmarshal(counterpartyVersion) + if err != nil { + return app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + counterparty, + counterpartyVersion, + ) +} + +doCustomLogic() + + / Call the underlying application's OnChanOpenTry callback. + / The try callback must select the final app-specific version string and return it. 
+ appVersion, err := app.OnChanOpenTry(
+ ctx,
+ order,
+ connectionHops,
+ portID,
+ channelID,
+ counterparty,
+ cpMetadata.AppVersion, / note we only pass counterparty app version here
+ )
+ if err != nil {
+ return "", err
+}
+
+ / negotiate final middleware version
+ middlewareVersion := negotiateMiddlewareVersion(cpMetadata.MiddlewareVersion)
+ version := constructVersion(middlewareVersion, appVersion)
+
+return version, nil
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L88-L125) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnChanOpenAck`
+
+```go expandable
+func (im IBCMiddleware)
+
+OnChanOpenAck(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+ counterpartyChannelID string,
+ counterpartyVersion string,
+)
+
+error {
+ / try to unmarshal JSON-encoded version string and pass
+ / the app-specific version to app callback.
+ / otherwise, pass version directly to app callback.
+ cpMetadata, err := UnmarshalJSON(counterpartyVersion)
+ if err != nil {
+ return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, counterpartyVersion)
+}
+ if !isCompatible(cpMetadata.MiddlewareVersion) {
+ return error
+}
+
+doCustomLogic()
+
+ / call the underlying application's OnChanOpenAck callback
+ return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, cpMetadata.AppVersion)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L128-L153) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnChanOpenConfirm`
+
+```go
+func OnChanOpenConfirm(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+)
+
+error {
+ doCustomLogic()
+
+return app.OnChanOpenConfirm(ctx, portID, channelID)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L156-L163) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnChanCloseInit`
+
+```go
+func OnChanCloseInit(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+)
+
+error {
+ doCustomLogic()
+
+return app.OnChanCloseInit(ctx, portID, channelID)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L166-L188) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+#### `OnChanCloseConfirm`
+
+```go
+func OnChanCloseConfirm(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+)
+
+error {
+ doCustomLogic()
+
+return app.OnChanCloseConfirm(ctx, portID, channelID)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L191-L213) an example implementation of this callback for the ICS-29 Fee Middleware module.
+
+### Packet callbacks
+
+The packet callbacks, just like the handshake callbacks, wrap the application's packet callbacks. The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide degree of usecases, as a simple base application like token-transfer can be transformed for a variety of usecases by combining it with custom middleware.
+ +#### `OnRecvPacket` + +```go expandable +func (im IBCMiddleware) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + doCustomLogic(packet) + ack := app.OnRecvPacket(ctx, packet, relayer) + +doCustomLogic(ack) / middleware may modify outgoing ack + + return ack +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L217-L238) an example implementation of this callback for the ICS-29 Fee Middleware module. + +#### `OnAcknowledgementPacket` + +```go +func (im IBCMiddleware) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet, ack) + +return app.OnAcknowledgementPacket(ctx, packet, ack, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L242-L293) an example implementation of this callback for the ICS-29 Fee Middleware module. + +#### `OnTimeoutPacket` + +```go +func (im IBCMiddleware) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet) + +return app.OnTimeoutPacket(ctx, packet, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L297-L335) an example implementation of this callback for the ICS-29 Fee Middleware module. + +## ICS-04 wrappers + +Middleware must also wrap ICS-04 so that any communication from the application to the `channelKeeper` goes through the middleware first. Similar to the packet callbacks, the middleware may modify outgoing acknowledgements and packets in any way it wishes. 
+ +To ensure optimal generalisability, the `ICS4Wrapper` abstraction serves to abstract away whether a middleware is the topmost middleware (and thus directly calling into the ICS-04 `channelKeeper`) or itself being wrapped by another middleware. + +Remember that middleware can be stateful or stateless. When defining the stateful middleware's keeper, the `ics4Wrapper` field is included. Then the appropriate keeper can be passed when instantiating the middleware's keeper in `app.go` + +```go +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + + ics4Wrapper porttypes.ICS4Wrapper + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + ... +} +``` + +For stateless middleware, the `ics4Wrapper` can be passed on directly without having to instantiate a keeper struct for the middleware. + +[The interface](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go#L110-L133) looks as follows: + +```go expandable +/ This is implemented by ICS4 and all middleware that are wrapping base application. +/ The base application will call `sendPacket` or `writeAcknowledgement` of the middleware directly above them +/ which will call the next middleware until it reaches the core IBC handler. +type ICS4Wrapper interface { + SendPacket( + ctx sdk.Context, + sourcePort string, + sourceChannel string, + timeoutHeight clienttypes.Height, + timeoutTimestamp uint64, + data []byte, + ) (sequence uint64, err error) + +WriteAcknowledgement( + ctx sdk.Context, + packet exported.PacketI, + ack exported.Acknowledgement, + ) + +error + + GetAppVersion( + ctx sdk.Context, + portID, + channelID string, + ) (string, bool) +} +``` + +:warning: In the following paragraphs, the methods are presented in pseudo code which has been kept general, not stating whether the middleware is stateful or stateless. Remember that when the middleware is stateful, `ics4Wrapper` can be accessed through the keeper. 
+
+For clarity, check out the reference implementations provided: the `ics4Wrapper` methods in `ibc_middleware.go` simply call the equivalent keeper methods, where the actual logic resides.
+
+### `SendPacket`
+
+```go expandable
+func SendPacket(
+ ctx sdk.Context,
+ sourcePort string,
+ sourceChannel string,
+ timeoutHeight clienttypes.Height,
+ timeoutTimestamp uint64,
+ appData []byte,
+) (uint64, error) {
+ / middleware may modify data
+ data := doCustomLogic(appData)
+
+return ics4Wrapper.SendPacket(
+ ctx,
+ sourcePort,
+ sourceChannel,
+ timeoutHeight,
+ timeoutTimestamp,
+ data,
+ )
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L17-L27) an example implementation of this function for the ICS-29 Fee Middleware module.
+
+### `WriteAcknowledgement`
+
+```go expandable
+/ only called for async acks
+func WriteAcknowledgement(
+ ctx sdk.Context,
+ packet exported.PacketI,
+ ack exported.Acknowledgement,
+)
+
+error {
+ / middleware may modify acknowledgement
+ ack_bytes := doCustomLogic(ack)
+
+return ics4Wrapper.WriteAcknowledgement(ctx, packet, ack_bytes)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L31-L55) an example implementation of this function for the ICS-29 Fee Middleware module.
+
+### `GetAppVersion`
+
+```go expandable
+/ middleware must return the underlying application version
+func GetAppVersion(
+ ctx sdk.Context,
+ portID,
+ channelID string,
+) (string, bool) {
+ version, found := ics4Wrapper.GetAppVersion(ctx, portID, channelID)
+ if !found {
+ return "", false
+}
+ if !MiddlewareEnabled {
+ return version, true
+}
+
+ / unwrap channel version
+ metadata, err := Unmarshal(version)
+ if err != nil {
+ panic(fmt.Errorf("unable to unmarshal version: %w", err))
+}
+
+return metadata.AppVersion, true
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L58-L74) an example implementation of this function for the ICS-29 Fee Middleware module.
diff --git a/docs/ibc/v10.1.x/ibc/middleware/developIBCv2.mdx b/docs/ibc/v10.1.x/ibc/middleware/developIBCv2.mdx
new file mode 100644
index 00000000..6af37ec1
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/middleware/developIBCv2.mdx
@@ -0,0 +1,286 @@
+---
+title: Create and integrate IBC v2 middleware
+description: >-
+  1. Create a custom IBC v2 middleware 2. Implement IBCModule interface 3.
+  WriteAckWrapper 4. Integrate IBC v2 Middleware 5. Security Model 6. Design
+  Principles
+---
+
+1. [Create a custom IBC v2 middleware](#create-a-custom-ibc-v2-middleware)
+2. [Implement `IBCModule` interface](#implement-ibcmodule-interface)
+3. [WriteAckWrapper](#writeackwrapper)
+4. [Integrate IBC v2 Middleware](#integrate-ibc-v2-middleware)
+5. [Security Model](#security-model)
+6. [Design Principles](#design-principles)
+
+## Create a custom IBC v2 middleware
+
+IBC middleware will wrap over an underlying IBC application (a base application or downstream middleware) and sits between core IBC and the base application.
+
+
+Middleware developers must use the same serialization and deserialization method as in ibc-go's codec: transfertypes.ModuleCdc.\[Must]MarshalJSON
+
+
+For middleware builders this means:
+
+```go
+import transfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types"
+transfertypes.ModuleCdc.[Must]MarshalJSON
+func MarshalAsIBCDoes(ack channeltypes.Acknowledgement) ([]byte, error) {
+ return transfertypes.ModuleCdc.MarshalJSON(&ack)
+}
+```
+
+The interfaces a middleware must implement are found in [core/api](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L11). Note that this interface has changed from IBC classic.
+
+An `IBCMiddleware` struct implementing the `Middleware` interface can be defined with its constructor as follows:
+
+```go expandable
+/ @ x/module_name/ibc_middleware.go
+
+/ IBCMiddleware implements the IBC v2 middleware interface
+type IBCMiddleware struct {
+ app api.IBCModule / underlying app or middleware
+ writeAckWrapper api.WriteAcknowledgementWrapper / writes acknowledgement for an async acknowledgement
+ PacketDataUnmarshaler api.PacketDataUnmarshaler / optional interface
+ keeper types.Keeper / required for stateful middleware
+ / Keeper may include middleware specific keeper and the ChannelKeeperV2
+
+ / additional middleware specific fields
+}
+
+/ NewIBCMiddleware creates a new IBCMiddleware given the keeper and underlying application
+func NewIBCMiddleware(app api.IBCModule,
+writeAckWrapper api.WriteAcknowledgementWrapper,
+k types.Keeper,
+)
+
+IBCMiddleware {
+ return IBCMiddleware{
+ app: app,
+ writeAckWrapper: writeAckWrapper,
+ keeper: k,
+}
+}
+```
+
+
+The `ICS4Wrapper` has been removed in IBC v2 and there are no channel handshake callbacks; a `writeAckWrapper` has been added to the interface instead.
+
+
+## Implement `IBCModule` interface
+
+`IBCMiddleware` is a struct that implements the [`IBCModule` interface
(`api.IBCModule`)](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L11-L53). It is recommended to separate these callbacks into a separate file `ibc_middleware.go`. + +> Note how this is analogous to implementing the same interfaces for IBC applications that act as base applications. + +The middleware must have access to the underlying application, and be called before it during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback. + +> Middleware **may** choose not to call the underlying application's callback at all. Though these should generally be limited to error cases. + +The `IBCModule` interface consists of the packet callbacks where custom logic is performed. + +### Packet callbacks + +The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide degree of usecases, as a simple base application like token-transfer can be transformed for a variety of usecases by combining it with custom middleware, for example acting as a filter for which tokens can be sent and received. 
+ +#### `OnRecvPacket` + +```go expandable +func (im IBCMiddleware) + +OnRecvPacket( + ctx sdk.Context, + sourceClient string, + destinationClient string, + sequence uint64, + payload channeltypesv2.Payload, + relayer sdk.AccAddress, +) + +channeltypesv2.RecvPacketResult { + / Middleware may choose to do custom preprocessing logic before calling the underlying app OnRecvPacket + / Middleware may choose to error early and return a RecvPacketResult Failure + / Middleware may choose to modify the payload before passing on to OnRecvPacket though this + / should only be done to support very advanced custom behavior + / Middleware MUST NOT modify client identifiers and sequence + doCustomPreProcessLogic() + + / call underlying app OnRecvPacket + recvResult := im.app.OnRecvPacket(ctx, sourceClient, destinationClient, sequence, payload, relayer) + if recvResult.Status == PACKET_STATUS_FAILURE { + return recvResult +} + +doCustomPostProcessLogic(recvResult) / middleware may modify recvResult + + return recvResult +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L161-L230) an example implementation of this callback for the Callbacks Middleware module. 
+
+#### `OnAcknowledgementPacket`
+
+```go expandable
+func (im IBCMiddleware)
+
+OnAcknowledgementPacket(
+ ctx sdk.Context,
+ sourceClient string,
+ destinationClient string,
+ sequence uint64,
+ acknowledgement []byte,
+ payload channeltypesv2.Payload,
+ relayer sdk.AccAddress,
+)
+
+error {
+ / preprocessing logic may modify the acknowledgement before passing to
+ / the underlying app, though this should only be done in advanced cases
+ / Middleware may return error early
+ / it MUST NOT change the identifiers of the clients or the sequence
+ doCustomPreProcessLogic(payload, acknowledgement)
+
+ / call underlying app OnAcknowledgementPacket
+ err := im.app.OnAcknowledgementPacket(
+ ctx, sourceClient, destinationClient, sequence,
+ acknowledgement, payload, relayer,
+ )
+ if err != nil {
+ return err
+}
+
+ / may perform some post acknowledgement logic and return error here
+ return doCustomPostProcessLogic()
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L236-L302) an example implementation of this callback for the Callbacks Middleware module.
+
+#### `OnTimeoutPacket`
+
+```go expandable
+func (im IBCMiddleware)
+
+OnTimeoutPacket(
+ ctx sdk.Context,
+ sourceClient string,
+ destinationClient string,
+ sequence uint64,
+ payload channeltypesv2.Payload,
+ relayer sdk.AccAddress,
+)
+
+error {
+ / Middleware may choose to do custom preprocessing logic before calling the underlying app OnTimeoutPacket
+ / Middleware may return error early
+ doCustomPreProcessLogic(payload)
+
+ / call underlying app OnTimeoutPacket
+ err := im.app.OnTimeoutPacket(
+ ctx, sourceClient, destinationClient, sequence,
+ payload, relayer,
+ )
+ if err != nil {
+ return err
+}
+
+ / may perform some post timeout logic and return error here
+ return doCustomPostProcessLogic()
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L309-L367) an example implementation of this callback for the Callbacks Middleware module.
+
+### WriteAckWrapper
+
+Middleware must also wrap the `WriteAcknowledgement` interface so that any acknowledgement written by the application passes through the middleware first. This allows middleware to modify or delay writing an acknowledgement before it is committed to the IBC store.
+
+```go
+/ WithWriteAckWrapper sets the WriteAcknowledgementWrapper for the middleware.
+func (im *IBCMiddleware)
+
+WithWriteAckWrapper(writeAckWrapper api.WriteAcknowledgementWrapper) {
+ im.writeAckWrapper = writeAckWrapper
+}
+
+/ GetWriteAckWrapper returns the WriteAckWrapper
+func (im *IBCMiddleware)
+
+GetWriteAckWrapper()
+
+api.WriteAcknowledgementWrapper {
+ return im.writeAckWrapper
+}
+```
+
+### `WriteAcknowledgement`
+
+This is where the middleware acknowledgement handling is finalised.
An example is shown in the [callbacks middleware](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L369-L454).
+
+```go expandable
+/ WriteAcknowledgement facilitates an acknowledgement being written asynchronously
+/ The call stack flows from the IBC application to the IBC core handler
+/ Thus this function is called by the IBC app or a lower-level middleware
+func (im IBCMiddleware)
+
+WriteAcknowledgement(
+ ctx sdk.Context,
+ clientID string,
+ sequence uint64,
+ ack channeltypesv2.Acknowledgement,
+)
+
+error {
+ doCustomPreProcessLogic() / may modify acknowledgement
+
+ return im.writeAckWrapper.WriteAcknowledgement(
+ ctx, clientID, sequence, ack,
+ )
+}
+```
+
+## Integrate IBC v2 Middleware
+
+Middleware should be registered within the module manager in `app.go`.
+
+The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application, while function calls from the application to IBC go through the bottom middleware first, up to the top middleware, and then to the core IBC handlers. Thus the same set of middleware put in different orders may produce different effects.
+
+### Example Integration
+
+The example integration is detailed for an IBC v2 stack using transfer and the callbacks middleware.
+ +```go expandable +/ Middleware Stacks +/ initialising callbacks middleware + maxCallbackGas := uint64(10_000_000) + wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper) + +/ Create the transferv2 stack with transfer and callbacks middleware + var ibcv2TransferStack ibcapi.IBCModule + ibcv2TransferStack = transferv2.NewIBCModule(app.TransferKeeper) + +ibcv2TransferStack = ibccallbacksv2.NewIBCMiddleware(transferv2.NewIBCModule(app.TransferKeeper), app.IBCKeeper.ChannelKeeperV2, wasmStackIBCHandler, app.IBCKeeper.ChannelKeeperV2, maxCallbackGas) + +/ Create static IBC v2 router, add app routes, then set and seal it + ibcRouterV2 := ibcapi.NewRouter() + +ibcRouterV2.AddRoute(ibctransfertypes.PortID, ibcv2TransferStack) + +app.IBCKeeper.SetRouterV2(ibcRouterV2) +``` + +## Security Model + +IBC Middleware completely wraps all communication between IBC core and the application that it is wired with. Thus, the IBC Middleware has complete control to modify any packets and acknowledgements the underlying application receives or sends. Thus, if a chain chooses to wrap an application with a given middleware, that middleware is **completely trusted** and part of the application's security model. **Do not use middlewares that are untrusted.** + +## Design Principles + +The middleware follows a decorator pattern that wraps an underlying application's connection to the IBC core handlers. Thus, when implementing a middleware for a specific purpose, it is recommended to be as **unintrusive** as possible in the middleware design while still accomplishing the intended behavior. + +The least intrusive middleware is stateless. They simply read the ICS26 callback arguments before calling the underlying app's callback and error if the arguments are not acceptable (e.g. whitelisting packets). 
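The stateless filtering pattern can be sketched in a self-contained way. The `PacketHandler` interface and the types below are illustrative stand-ins for the real ibc-go callback interfaces, reduced to show only the decorator shape of a port allow-list:

```go
package main

import (
	"errors"
	"fmt"
)

// PacketHandler is an illustrative stand-in for an IBC app's packet
// callback interface; it is not an ibc-go type.
type PacketHandler interface {
	OnRecvPacket(sourcePort string, data []byte) error
}

// baseApp is a trivial base application that accepts every packet.
type baseApp struct{}

func (baseApp) OnRecvPacket(sourcePort string, data []byte) error {
	fmt.Printf("app received %d bytes on %s\n", len(data), sourcePort)
	return nil
}

// filterMiddleware is a stateless decorator: it inspects the callback
// arguments and errors out before the base app runs if the source port
// is not on its allow list.
type filterMiddleware struct {
	next    PacketHandler
	allowed map[string]bool
}

func (m filterMiddleware) OnRecvPacket(sourcePort string, data []byte) error {
	if !m.allowed[sourcePort] {
		return errors.New("port not allowed: " + sourcePort)
	}
	return m.next.OnRecvPacket(sourcePort, data)
}

func main() {
	stack := filterMiddleware{next: baseApp{}, allowed: map[string]bool{"transfer": true}}

	if err := stack.OnRecvPacket("transfer", []byte("hello")); err != nil {
		fmt.Println("unexpected:", err)
	}
	if err := stack.OnRecvPacket("custom", []byte("hello")); err != nil {
		fmt.Println("rejected as expected:", err)
	}
}
```

Because the middleware holds no state of its own, it needs no keeper; it only wraps the next handler in the stack.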
Stateful middleware that is used solely for erroring is also very simple to build; an example of this would be a rate-limiting middleware that prevents transfer outflows from getting too high within a certain time frame.
+
+Middleware that directly interferes with the payload or acknowledgement before passing control to the underlying app is far more intrusive to the underlying app's processing. This makes such middleware more error-prone to implement, as incorrect handling can cause the underlying app to break or, worse, execute unexpected behavior. Moreover, such middleware typically needs to be built for a specific underlying app rather than being generic. An example of this is the packet-forwarding middleware, which modifies the payload and is specifically built for transfer.
+
+Middleware that modifies the payload or acknowledgement such that it is no longer readable by the underlying application is the most complicated middleware. Since the data is not readable by the underlying apps, if such middleware writes additional state into payloads and acknowledgements that get committed to IBC core provable state, there MUST be an equivalent counterparty middleware that is able to parse and interpret this additional state while also converting the payload and acknowledgement back to a readable form for the underlying application on its side. Thus, such middleware requires deployment on both sides of an IBC connection or the packet processing will break. This is the hardest type of middleware to implement, integrate and deploy; it is not recommended unless absolutely necessary to fulfill the given use case.
diff --git a/docs/ibc/v10.1.x/ibc/middleware/integration.mdx b/docs/ibc/v10.1.x/ibc/middleware/integration.mdx
new file mode 100644
index 00000000..031bcb29
--- /dev/null
+++ b/docs/ibc/v10.1.x/ibc/middleware/integration.mdx
@@ -0,0 +1,74 @@
+---
+title: Integrating IBC middleware into a chain
+description: >-
+  Learn how to integrate IBC middleware(s) with a base application on your
+  chain. The following document only applies to Cosmos SDK chains.
+---
+
+Learn how to integrate IBC middleware(s) with a base application on your chain. The following document only applies to Cosmos SDK chains.
+
+If the middleware is maintaining its own state and/or processing SDK messages, then it should create and register its SDK module with the module manager in `app.go`.
+
+All middleware must be connected to the IBC router and wrap over an underlying base IBC application. An IBC application may be wrapped by many layers of middleware; only the top-layer middleware should be hooked to the IBC router, with all underlying middleware and the application wrapped by it.
+
+The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application, while function calls from the application to IBC go through the bottom middleware first, up to the top middleware, and then to the core IBC handlers. Thus the same set of middleware put in different orders may produce different effects.
+
+## Example integration
+
+```go expandable
+/ app.go pseudocode
+
+/ middleware 1 and middleware 3 are stateful middleware,
+/ perhaps implementing separate sdk.Msg and Handlers
+mw1Keeper := mw1.NewKeeper(storeKey1, ..., ics4Wrapper: channelKeeper, ...) / in stack 1 & 3
+/ middleware 2 is stateless
+mw3Keeper1 := mw3.NewKeeper(storeKey3, ..., ics4Wrapper: mw1Keeper, ...) / in stack 1
+mw3Keeper2 := mw3.NewKeeper(storeKey3, ..., ics4Wrapper: channelKeeper, ...)
// in stack 2 + +// Only create App Module **once** and register in the module manager +// if the module maintains independent state and/or processes sdk.Msgs +app.moduleManager = module.NewManager( + ... + mw1.NewAppModule(mw1Keeper), + mw3.NewAppModule(mw3Keeper1), + mw3.NewAppModule(mw3Keeper2), + transfer.NewAppModule(transferKeeper), + custom.NewAppModule(customKeeper) +) + +// NOTE: IBC Modules may be initialized any number of times provided they use a separate +// Keeper and underlying port. + +customKeeper1 := custom.NewKeeper(..., KeeperCustom1, ...) + +customKeeper2 := custom.NewKeeper(..., KeeperCustom2, ...) + +// initialize base IBC applications +// if you want to create two different stacks with the same base application, +// they must be given different Keepers and assigned different ports. +transferIBCModule := transfer.NewIBCModule(transferKeeper) + +customIBCModule1 := custom.NewIBCModule(customKeeper1, "portCustom1") + +customIBCModule2 := custom.NewIBCModule(customKeeper2, "portCustom2") + +// create IBC stacks by combining middleware with base application +// NOTE: since middleware2 is stateless it does not require a Keeper +// stack 1 contains mw1 -> mw3 -> transfer +stack1 := mw1.NewIBCMiddleware(mw3.NewIBCMiddleware(transferIBCModule, mw3Keeper1), mw1Keeper) +// stack 2 contains mw3 -> mw2 -> custom1 +stack2 := mw3.NewIBCMiddleware(mw2.NewIBCMiddleware(customIBCModule1), mw3Keeper2) +// stack 3 contains mw2 -> mw1 -> custom2 +stack3 := mw2.NewIBCMiddleware(mw1.NewIBCMiddleware(customIBCModule2, mw1Keeper)) + +// associate each stack with the moduleName provided by the underlying Keeper +ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute("transfer", stack1) + +ibcRouter.AddRoute("custom1", stack2) + +ibcRouter.AddRoute("custom2", stack3) + +app.IBCKeeper.SetRouter(ibcRouter) +``` diff --git a/docs/ibc/v10.1.x/ibc/middleware/overview.mdx b/docs/ibc/v10.1.x/ibc/middleware/overview.mdx new file mode 100644 index 00000000..dbac30e0 --- /dev/null +++
b/docs/ibc/v10.1.x/ibc/middleware/overview.mdx @@ -0,0 +1,50 @@ +--- +title: IBC middleware +--- + +## Synopsis + +Learn how to write your own custom middleware to wrap an IBC application, and understand how to hook different middleware to IBC base applications to form different IBC application stacks. + +This documentation serves as a guide for middleware developers who want to write their own middleware and for chain developers who want to use IBC middleware on their chains. + +After going through the overview, they can consult, respectively: + +- [documentation on developing custom middleware](/docs/ibc/v10.1.x/ibc/middleware/develop) +- [documentation on integrating middleware into a stack on a chain](/docs/ibc/v10.1.x/ibc/middleware/integration) + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v10.1.x/ibc/overview) +- [IBC Integration](/docs/ibc/v10.1.x/ibc/integration) +- [IBC Application Developer Guide](/docs/ibc/v10.1.x/ibc/apps/apps) + + + +## Why middleware? + +IBC applications are designed to be self-contained modules that implement their own application-specific logic through a set of interfaces with the core IBC handlers. These core IBC handlers, in turn, are designed to enforce the correctness properties of IBC (transport, authentication, ordering) while delegating all application-specific handling to the IBC application modules. **However, there are cases where some functionality may be desired by many applications, yet not appropriate to place in core IBC.** + +Middleware allows developers to define these extensions as separate modules that can wrap over the base application. The middleware can thus perform its own custom logic and pass data into the application so that the application may run its logic without being aware of the middleware's existence. This allows both the application and the middleware to implement their own isolated logic while still being able to run as part of a single packet flow.
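The wrapping idea above can be sketched minimally. This is a pared-down model under assumptions: the one-method `ibcModule` interface and the names `baseApp` and `loggingMiddleware` are hypothetical; the real `IBCModule` interface in ibc-go has many more callbacks and arguments.

```go
package main

import "fmt"

// ibcModule is a toy stand-in for the IBCModule callback interface.
type ibcModule interface {
	OnRecvPacket(data []byte) string // returns an acknowledgement string
}

// baseApp is a base application with no knowledge of any middleware.
type baseApp struct{}

func (baseApp) OnRecvPacket(data []byte) string {
	return "ack:" + string(data)
}

// loggingMiddleware wraps an underlying app, runs its own custom logic,
// then delegates unchanged -- the app never learns the middleware exists.
type loggingMiddleware struct {
	app ibcModule
}

func (m loggingMiddleware) OnRecvPacket(data []byte) string {
	fmt.Printf("middleware saw %d bytes\n", len(data))
	return m.app.OnRecvPacket(data) // pass through to the underlying app
}

func main() {
	var stack ibcModule = loggingMiddleware{app: baseApp{}}
	fmt.Println(stack.OnRecvPacket([]byte("hello"))) // ack:hello
}
```

Because the middleware satisfies the same interface as the application it wraps, core IBC can route to the stack exactly as it would route to a bare application.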
+ +## Definitions + +`Middleware`: A self-contained module that sits between core IBC and an underlying IBC application during packet execution. All messages between core IBC and the underlying application must flow through the middleware, which may perform its own custom logic. + +`Underlying Application`: An underlying application is the application that is directly connected to the middleware in question. This underlying application may itself be middleware that is chained to a base application. + +`Base Application`: A base application is an IBC application that does not contain any middleware. It may be wrapped by zero or more middleware to form an application stack. + +`Application Stack (or stack)`: A stack is the complete set of application logic (middleware(s) + base application) that gets connected to core IBC. A stack may be just a base application, or it may be a series of middleware that wrap a base application. + +The diagram below gives an overview of a middleware stack consisting of two middleware (one stateless, the other stateful). + +![middleware-stack.png](/docs/ibc/images/01-ibc/04-middleware/images/middleware-stack.png) + +Keep in mind that: + +- **The order of the middleware matters** (more on how to correctly define your stack in code follows in the [integration section](/docs/ibc/v10.1.x/ibc/middleware/integration)). +- Depending on the type of message, it will either be passed up the middleware stack from the base application to core IBC, or down the stack in the reverse direction (handshake and packet callbacks). +- IBC middleware wraps over an underlying IBC application and sits between core IBC and the application. It has complete control in modifying any message coming from IBC to the application, and any message coming from the application to core IBC. **Middleware must be completely trusted by chain developers who wish to integrate them**, as this gives them complete flexibility in modifying the application(s) they wrap.
diff --git a/docs/ibc/v10.1.x/ibc/overview.mdx b/docs/ibc/v10.1.x/ibc/overview.mdx new file mode 100644 index 00000000..71d8acdd --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/overview.mdx @@ -0,0 +1,294 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about IBC, its components, and its use cases. + +## What is the Inter-Blockchain Communication Protocol (IBC)? + +This document serves as a guide for developers who want to write their own Inter-Blockchain +Communication Protocol (IBC) applications for custom use cases. + +> IBC applications must be written as self-contained modules. + +Due to the modular design of the IBC Protocol, IBC +application developers do not need to be concerned with the low-level details of clients, +connections, and proof verification. + +This brief explanation of the lower levels of the +stack gives application developers a broad understanding of the IBC +Protocol. Abstraction layer details for channels and ports are most relevant for application developers and describe how to define custom packets and `IBCModule` callbacks. + +The requirements to have your module interact over IBC are: + +- Bind to a port or ports. +- Define your packet data. +- Use the default acknowledgment struct provided by core IBC or optionally define a custom acknowledgment struct. +- Standardize an encoding of the packet data. +- Implement the `IBCModule` interface. +- Implement the `UpgradableModule` interface (optional). + +Read on for a detailed explanation of how to write a self-contained IBC application module. + +## Components overview + +### [Clients](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client) + +IBC clients are on-chain light clients. Each light client is identified by a unique client ID. +IBC clients track the consensus states of other blockchains, along with the proof spec necessary to +properly verify proofs against the client's consensus state. A client can be associated with any number +of connections to the counterparty chain. 
The client identifier is auto-generated using the client type +and the global client counter appended in the format: `{client-type}-{N}`. + +A `ClientState` should contain chain-specific and light-client-specific information necessary for verifying updates +and upgrades to the IBC client. The `ClientState` may contain information such as chain ID, latest height, proof specs, +unbonding periods, or the status of the light client. The `ClientState` should not contain information that +is specific to a given block at a certain height; that is the function of the `ConsensusState`. Each `ConsensusState` +should be associated with a unique block and should be referenced using a height. IBC clients are given a +client-identifier-prefixed store to store their associated client state and consensus states along with +any metadata associated with the consensus states. Consensus states are stored using their associated height. + +The supported IBC clients are: + +- [Solo Machine light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine): Devices such as phones, browsers, or laptops. +- [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint): The default for Cosmos SDK-based chains. +- [Wasm client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/08-wasm): Proxy client useful for running light clients written in a Wasm-compilable language. +- [Localhost (loopback) client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/09-localhost): Useful for testing, simulation, and relaying packets to modules on the same application. + +### IBC client heights + +IBC client heights are represented by the struct: + +```go +type Height struct { + RevisionNumber uint64 + RevisionHeight uint64 +} +``` + +The `RevisionNumber` represents the revision of the chain that the height is representing.
+A revision typically represents a continuous, monotonically increasing range of block-heights. +The `RevisionHeight` represents the height of the chain within the given revision. + +On any reset of the `RevisionHeight` (for example, when hard-forking a Tendermint chain), +the `RevisionNumber` will get incremented. This allows IBC clients to distinguish between a +block height `n` of a previous revision of the chain (at revision `p`) and block-height `n` of the current +revision of the chain (at revision `e`). + +`Height`s that share the same revision number can be compared by simply comparing their respective `RevisionHeight`s. +`Height`s that do not share the same revision number will only be compared using their respective `RevisionNumber`s. +Thus, a height `h` with revision number `e+1` will always be greater than a height `g` with revision number `e`, +**REGARDLESS** of the difference in revision heights. + +For example: + +```go +Height{ + RevisionNumber: 3, + RevisionHeight: 0 +} > Height{ + RevisionNumber: 2, + RevisionHeight: 100000000000 +} +``` + +When a Tendermint chain is running a particular revision, relayers can simply submit headers and proofs with the revision number +given by the chain's `chainID`, and the revision height given by the Tendermint block height. When a chain updates using a hard-fork +and resets its block-height, it is responsible for updating its `chainID` to increment the revision number. +IBC Tendermint clients then verify the revision number against their `chainID` and treat the `RevisionHeight` as the Tendermint block-height. + +Tendermint chains wishing to use revisions to maintain persistent IBC connections even across height-resetting upgrades must format their `chainID`s +in the following manner: `{chainID}-{revision_number}`. On any height-resetting upgrade, the `chainID` **MUST** be updated with a higher revision number +than the previous value.
+ +For example: + +- Before upgrade `chainID`: `gaiamainnet-3` +- After upgrade `chainID`: `gaiamainnet-4` + +Clients that do not require revisions, such as the `06-solomachine` client, can simply hardcode `0` into the revision number whenever they +need to return an IBC height when implementing IBC interfaces, and use the `RevisionHeight` exclusively. + +Other client types can implement their own logic to verify the IBC heights that relayers provide in their `Update`, `Misbehavior`, and +`Verify` functions, respectively. + +The IBC interfaces expect an `ibcexported.Height` interface; however, all clients must use the concrete implementation provided in +`02-client/types` and reproduced above. + +### [Connections](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection) + +Connections encapsulate two [`ConnectionEnd`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/connection.proto#L17) +objects on two separate blockchains. Each `ConnectionEnd` is associated with a client of the +other blockchain (that is, the counterparty blockchain). The connection handshake is responsible +for verifying that the light clients on each chain are correct for their respective counterparties. +Connections, once established, are responsible for facilitating all cross-chain verifications of IBC state. +A connection can be associated with any number of channels. + +The connection handshake is a 4-step handshake. Briefly, if a given chain A wants to open a connection with +chain B using already established light clients on both chains: + +1. chain A sends a `ConnectionOpenInit` message to signal a connection initialization attempt with chain B. +2. chain B sends a `ConnectionOpenTry` message to try opening the connection on chain A. +3. chain A sends a `ConnectionOpenAck` message to mark its connection end state as open. +4. chain B sends a `ConnectionOpenConfirm` message to mark its connection end state as open.
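The four handshake steps above can be sketched as a toy state machine. The message and state names mirror the docs; everything else (the `connectionEnd` struct, the `handle` method) is an illustrative assumption, and the real 03-connection submodule additionally verifies proofs of the counterparty's state at each step.

```go
package main

import "fmt"

// connectionEnd models one side of the handshake; "" means uninitialized.
type connectionEnd struct{ state string }

// handle advances the end's state for a valid message, or errors.
func (c *connectionEnd) handle(msg string) error {
	switch {
	case msg == "ConnectionOpenInit" && c.state == "":
		c.state = "INIT" // step 1, on chain A
	case msg == "ConnectionOpenTry" && c.state == "":
		c.state = "TRYOPEN" // step 2, on chain B
	case msg == "ConnectionOpenAck" && c.state == "INIT":
		c.state = "OPEN" // step 3, on chain A
	case msg == "ConnectionOpenConfirm" && c.state == "TRYOPEN":
		c.state = "OPEN" // step 4, on chain B
	default:
		return fmt.Errorf("msg %s invalid in state %q", msg, c.state)
	}
	return nil
}

func main() {
	a, b := &connectionEnd{}, &connectionEnd{}
	steps := []struct {
		end *connectionEnd
		msg string
	}{
		{a, "ConnectionOpenInit"}, {b, "ConnectionOpenTry"},
		{a, "ConnectionOpenAck"}, {b, "ConnectionOpenConfirm"},
	}
	for _, s := range steps {
		if err := s.end.handle(s.msg); err != nil {
			panic(err)
		}
	}
	fmt.Println(a.state, b.state) // OPEN OPEN
}
```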
+ +#### Time delayed connections + +Connections can be opened with a time delay by setting the `delay_period` field (in nanoseconds) in the [`MsgConnectionOpenInit`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/tx.proto#L45). +The time delay is used to require that the underlying light clients have been updated to a certain height before commitment verification can be performed. + +`delayPeriod` is used in conjunction with the [`max_expected_time_per_block`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/connection.proto#L113) parameter of the connection submodule to determine the `blockDelay`, which is the number of blocks that the connection must be delayed by. + +When commitment verification is performed, the connection submodule will pass `delayPeriod` and `blockDelay` to the light client. It is up to the light client to determine whether it has been updated to the required height. Only the following light clients in `ibc-go` support time delayed connections: + +- `07-tendermint` +- `08-wasm` (passed to the contract) + +### [Proofs](https://github.com/cosmos/ibc-go/blob/main/modules/core/23-commitment) and [paths](https://github.com/cosmos/ibc-go/blob/main/modules/core/24-host) + +In IBC, blockchains do not directly pass messages to each other over the network. Instead, to +communicate, a blockchain commits some state to a specifically defined path that is reserved for a +specific message type and a specific counterparty. For example, a chain stores a specific `ConnectionEnd` as part +of a handshake, or a packet intended to be relayed to a module on the counterparty chain. A relayer +process monitors for updates to these paths and relays messages by submitting the data stored +under the path and a proof to the counterparty chain. + +Proofs are passed from core IBC to light clients as bytes. It is up to light client implementations to interpret these bytes appropriately.
+ +- The paths that all IBC implementations must use for committing IBC messages are defined in + [ICS-24 Host State Machine Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements). +- The proof format that all implementations must be able to produce and verify is defined in the [ICS-23](https://github.com/cosmos/ics23) implementation. + +### [Ports](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port) + +An IBC module can bind to any number of ports. Each port must be identified by a unique `portID`. +Since IBC is designed to be secure with mutually distrusted modules operating on the same ledger, +binding a port returns a dynamic object capability. In order to take action on a particular port +(for example, to open a channel with its port ID), a module must provide the dynamic object capability to the IBC +handler. This requirement prevents a malicious module from opening channels with ports it does not own. Thus, +IBC modules are responsible for claiming the capability that is returned on `BindPort`. + +### [Channels](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +An IBC channel can be established between two IBC ports. Currently, a port is exclusively owned by a +single module. IBC packets are sent over channels. Just as IP packets contain the destination IP +address and IP port, and the source IP address and source IP port, IBC packets contain +the destination port ID and channel ID, and the source port ID and channel ID. This packet structure enables IBC to +correctly route packets to the destination module while allowing modules receiving packets to +know the sender module. + +A channel can be `ORDERED`, where packets from a sending module must be processed by the +receiving module in the order they were sent, or `UNORDERED`, where packets +from a sending module are processed in the order they arrive (which might differ from the order they were sent).
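The two delivery guarantees can be contrasted with a toy receive routine. This is a sketch under assumptions: the `channelEnd` struct and `recvPacket` method are hypothetical, and the real 04-channel logic additionally verifies commitment proofs and timeouts.

```go
package main

import "fmt"

// channelEnd models receive-side bookkeeping for the two orderings:
// ORDERED accepts only the next expected sequence; UNORDERED accepts
// any sequence once, recording it via a receipt.
type channelEnd struct {
	ordered      bool
	nextSequence uint64          // next expected sequence (ORDERED only)
	receipts     map[uint64]bool // sequences already received (UNORDERED)
}

func (c *channelEnd) recvPacket(seq uint64) error {
	if c.ordered {
		if seq != c.nextSequence {
			return fmt.Errorf("expected sequence %d, got %d", c.nextSequence, seq)
		}
		c.nextSequence++
		return nil
	}
	if c.receipts[seq] {
		return fmt.Errorf("sequence %d already received", seq)
	}
	c.receipts[seq] = true // the receipt is a bare marker, no data
	return nil
}

func main() {
	ordered := &channelEnd{ordered: true, nextSequence: 1}
	fmt.Println(ordered.recvPacket(2)) // error: out of order
	unordered := &channelEnd{receipts: map[uint64]bool{}}
	fmt.Println(unordered.recvPacket(2)) // accepted: any order is fine
}
```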
+ +Modules can choose which channels they wish to communicate over; thus, IBC expects modules to +implement callbacks that are called during the channel handshake. These callbacks can perform custom +channel initialization logic. If any callback returns an error, the channel handshake fails. Thus, by +returning errors on callbacks, modules can programmatically reject or accept channels. + +The channel handshake is a 4-step handshake. Briefly, if a given chain A wants to open a channel with +chain B using an already established connection: + +1. chain A sends a `ChanOpenInit` message to signal a channel initialization attempt with chain B. +2. chain B sends a `ChanOpenTry` message to try opening the channel on chain A. +3. chain A sends a `ChanOpenAck` message to mark its channel end status as open. +4. chain B sends a `ChanOpenConfirm` message to mark its channel end status as open. + +If all handshake steps are successful, the channel is opened on both sides. At each step in the handshake, the module +associated with the `ChannelEnd` executes its callback. So +on `ChanOpenInit`, the module on chain A executes its callback `OnChanOpenInit`. + +The channel identifier is auto-derived in the format: `channel-{N}` where `N` is the next sequence to be used. + +#### Closing channels + +Closing a channel occurs in 2 handshake steps as defined in [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics). +Once a channel is closed, it cannot be reopened. The channel closing steps are: + +**`ChanCloseInit`** closes a channel on the executing chain if + +- the channel exists and is not already closed, +- the connection it exists upon is `OPEN`, +- the [IBC module callback `OnChanCloseInit`](/docs/ibc/v10.1.x/ibc/apps/ibcmodule#channel-closing-callbacks) returns `nil`. + +`ChanCloseInit` can be initiated by any user by submitting a `MsgChannelCloseInit` transaction.
+Note that channels are automatically closed when a packet times out on an `ORDERED` channel. +A timeout on an `ORDERED` channel skips the `ChanCloseInit` step and immediately closes the channel. + +**`ChanCloseConfirm`** is a response to a counterparty channel executing `ChanCloseInit`. The channel +on the executing chain closes if + +- the channel exists and is not already closed, +- the connection the channel exists upon is `OPEN`, +- the executing chain successfully verifies that the counterparty channel has been closed, +- the [IBC module callback `OnChanCloseConfirm`](/docs/ibc/v10.1.x/ibc/apps/ibcmodule#channel-closing-callbacks) returns `nil`. + +Currently, none of the IBC applications provided in ibc-go support `ChanCloseInit`. + +### [Packets](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Modules communicate with each other by sending packets over IBC channels. All +IBC packets contain the destination `portID` and `channelID` along with the source `portID` and +`channelID`. This packet structure allows modules to know the sender module of a given packet. IBC packets +contain a sequence to optionally enforce ordering. + +IBC packets also contain a `TimeoutHeight` and a `TimeoutTimestamp` that determine the deadline by which the receiving module must process a packet. + +Modules send custom application data to each other inside the `Data` `[]byte` field of the IBC packet. +Thus, packet data is opaque to IBC handlers. It is incumbent on the sender module to encode +its application-specific packet information into the `Data` field of packets. The receiver +module must decode that `Data` back to the original application data.
+ +### [Receipts and timeouts](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Since IBC works over a distributed network and relies on potentially faulty relayers to relay messages between ledgers, +IBC must handle the case where a packet does not get sent to its destination in a timely manner or at all. Packets must +specify a non-zero value for timeout height (`TimeoutHeight`) or timeout timestamp (`TimeoutTimestamp`) after which a packet can no longer be successfully received on the destination chain. + +- The `timeoutHeight` indicates a consensus height on the destination chain after which the packet is no longer to be processed, and instead counts as having timed out. +- The `timeoutTimestamp` indicates a timestamp on the destination chain after which the packet is no longer to be processed, and instead counts as having timed out. + +If the timeout passes without the packet being successfully received, the packet can no longer be +received on the destination chain. The sending module can timeout the packet and take appropriate actions. + +If the timeout is reached, then a proof of packet timeout can be submitted to the original chain. The original chain can then perform +application-specific logic to timeout the packet, perhaps by rolling back the packet send changes (refunding senders any locked funds, etc.). + +- In `ORDERED` channels, a timeout of a single packet in the channel causes the channel to close. + + - If packet sequence `n` times out, then a packet at sequence `k > n` cannot be received without violating the contract of `ORDERED` channels that packets are processed in the order that they are sent. + - Since `ORDERED` channels enforce this invariant, a proof that sequence `n` has not been received on the destination chain by the specified timeout of packet `n` is sufficient to timeout packet `n` and close the channel.
+ +- In `UNORDERED` channels, the application-specific timeout logic for that packet is applied and the channel is not closed. + + - Packets can be received in any order. + - IBC writes a packet receipt for each sequence received in the `UNORDERED` channel. This receipt contains no information; it is simply a marker intended to signify that the `UNORDERED` channel has received a packet at the specified sequence. + - To timeout a packet on an `UNORDERED` channel, a proof is required that a packet receipt **does not exist** for the packet's sequence by the specified timeout. + +For this reason, most modules should use `UNORDERED` channels, as they require fewer liveness guarantees to function effectively for users of that channel. + +### [Acknowledgments](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Modules can also choose to write application-specific acknowledgments upon processing a packet. Acknowledgments can be written: + +- Synchronously on `OnRecvPacket`, if the module processes packets as soon as they are received from the IBC module. +- Asynchronously, if the module processes packets at some later point after receiving them. + +This acknowledgment data is opaque to IBC, much like the packet `Data`, and is treated by IBC as a simple byte string `[]byte`. Receiver modules must encode their acknowledgment so that the sender module can decode it correctly. The encoding must be negotiated between the two parties during version negotiation in the channel handshake. + +The acknowledgment can encode whether the packet processing succeeded or failed, along with additional information that allows the sender module to take appropriate action. + +After the acknowledgment has been written by the receiving chain, a relayer relays the acknowledgment back to the original sender module. + +The original sender module then executes application-specific acknowledgment logic using the contents of the acknowledgment.
+ +- After an acknowledgement fails, packet-send changes can be rolled back (for example, refunding senders in ICS 20). +- After an acknowledgment is received successfully on the original sender chain, the corresponding packet commitment is deleted since it is no longer needed. + +## Further readings and specs + +If you want to learn more about IBC, check the following specifications: + +- [IBC specification overview](https://github.com/cosmos/ibc/blob/master/README.md) diff --git a/docs/ibc/v10.1.x/ibc/permissioning.mdx b/docs/ibc/v10.1.x/ibc/permissioning.mdx new file mode 100644 index 00000000..81acf5a6 --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/permissioning.mdx @@ -0,0 +1,27 @@ +--- +title: Permissioning +--- + +IBC is designed at its base level to be a permissionless protocol. This does not mean that chains cannot add permissioning on top of IBC. In ibc-go this can be accomplished by implementing and wiring an ante-decorator that checks whether the IBC message is signed by a permissioned authority. If the signer address check passes, the tx can go through; otherwise it is rejected from the mempool. + +The antehandler runs before message processing, so it acts as a customizable filter that can reject messages before they get included in the block. The Cosmos SDK allows developers to write ante-decorators that can be stacked with others to add multiple independent customizable filters that run in sequence. Thus, chain developers who want to permission IBC messages are advised to implement their own custom permissioned IBC ante-decorator and add it to the standard ante-decorator stack. + +## Best practices + +`MsgCreateClient`: permissioning client creation is the most important step for permissioned IBC. This will prevent malicious relayers from creating clients to fake chains. If a chain wants to control which chains are connected to it directly over IBC, the best way to do this is by controlling which clients get created.
The permissioned authority can create clients only for counterparties that the chain approves of. The permissioned authority can be the governance account; however, `MsgCreateClient` contains a consensus state that may have expired by the time governance passes the proposal to execute the message. Thus, if the voting period is longer than the unbonding period of the counterparty, it is advised to use a permissioned authority that can immediately execute the transaction (e.g., a trusted multisig). + +`MsgConnectionOpenInit`: permissioning this message will give the chain control over the connections that are opened and will also control which connection identifier is associated with which counterparty. + +`MsgConnectionOpenTry`: permissioning this message through a permissioned address check is ill-advised because it will prevent relayers from easily completing a handshake that was initialized on the counterparty. However, if the chain does want strict control of exactly which connections are opened, it can permission this message. Be aware that if two chains with strict permissions try to open a connection, it may take much longer than expected. + +`MsgChannelOpenInit`: permissioning this message will give the chain control over the channels that are opened and will also control which channel identifier is associated with which counterparty. + +`MsgChannelOpenTry`: permissioning this message through a permissioned address check is ill-advised because it will prevent relayers from easily completing a handshake that was initialized on the counterparty. However, if the chain does want strict control of exactly which channels are opened, it can permission this message. Be aware that if two chains with strict permissions try to open a channel, it may take much longer than expected. + +It is not advised to permission any other message from ibc-go.
Permissionless relayers should still be allowed to complete handshakes that were authorized by permissioned parties, and to relay user packets on channels that were also authorized by permissioned parties. This combines the maximum liveness provided by a permissionless relayer network with the safety guarantees provided by permissioned client, connection, and channel creation. + +## Genesis setup + +Chains that are starting up from genesis have the option of initializing authorized clients, connections, and channels from genesis. This allows chains to automatically connect to desired chains with a desired identifier. + +Note: The chain must be launched soon after the genesis file is created so that the client creation does not occur with an expired consensus state. The connections and channels must also simply have their `INIT` messages executed so that relayers can complete the rest of the handshake. diff --git a/docs/ibc/v10.1.x/ibc/relayer.mdx b/docs/ibc/v10.1.x/ibc/relayer.mdx new file mode 100644 index 00000000..7736b30d --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/relayer.mdx @@ -0,0 +1,48 @@ +--- +title: Relayer +--- + + + +## Pre-requisite readings + +* [IBC Overview](/docs/ibc/v10.1.x/ibc/overview) +* [Events](https://docs.cosmos.network/main/learn/advanced/events) + + + +## Events + +Events are emitted for every transaction processed by the base application to indicate the execution +of some logic clients may want to be aware of. This is extremely useful when relaying IBC packets. +Any message that uses IBC will emit events for the corresponding TAO logic executed, as defined in +the IBC events documentation. + +In the SDK, it can be assumed that for every message there is an event emitted with the type `message`, +attribute key `action`, and an attribute value representing the type of message sent +(`channel_open_init` would be the attribute value for `MsgChannelOpenInit`).
If a relayer queries +for transaction events, it can split message events using this event Type/Attribute Key pair. + +The Event Type `message` with the Attribute Key `module` may be emitted multiple times for a single +message due to application callbacks. It can be assumed that any TAO logic executed will result in +a module event emission with the attribute value `ibc_<submodulename>` (02-client emits `ibc_client`). + +### Subscribing with Tendermint + +Calling the Tendermint RPC method `Subscribe` via Tendermint's Websocket will return events using +Tendermint's internal representation of them. Instead of receiving back a list of events as they +were emitted, Tendermint will return the type `map[string][]string`, which maps a string in the +form `<event_type>.<attribute_key>` to `attribute_value`. This causes extraction of the event +ordering to be non-trivial, but still possible. + +A relayer should use the `message.action` key to extract the number of messages in the transaction +and the type of IBC transactions sent. For every IBC transaction within the string array for +`message.action`, the necessary information should be extracted from the other event fields. If +`send_packet` appears at index 2 in the value for `message.action`, a relayer will need to use the +value at index 2 of the key `send_packet.packet_sequence`. This process should be repeated for each +piece of information needed to relay a packet. + +## Example Implementations + +* [Golang Relayer](https://github.com/cosmos/relayer) +* [Hermes](https://github.com/informalsystems/hermes) diff --git a/docs/ibc/v10.1.x/ibc/upgrades/developer-guide.mdx b/docs/ibc/v10.1.x/ibc/upgrades/developer-guide.mdx new file mode 100644 index 00000000..40a04f0c --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/upgrades/developer-guide.mdx @@ -0,0 +1,9 @@ +--- +title: IBC Client Developer Guide to Upgrades +--- + +## Synopsis + +Learn how to implement upgrade functionality for your custom IBC client.
+ +Please see the section [Handling upgrades](/docs/ibc/v10.1.x/light-clients/developer-guide/upgrades) from the light client developer guide for more information. diff --git a/docs/ibc/v10.1.x/ibc/upgrades/genesis-restart.mdx b/docs/ibc/v10.1.x/ibc/upgrades/genesis-restart.mdx new file mode 100644 index 00000000..bc26105c --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/upgrades/genesis-restart.mdx @@ -0,0 +1,46 @@ +--- +title: Genesis Restart Upgrades +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients using genesis restarts. + +**NOTE**: Regular genesis restarts are currently unsupported by relayers! + +## IBC Client Breaking Upgrades + +IBC client breaking upgrades are possible using genesis restarts. +It is highly recommended to use the in-place migrations instead of a genesis restart. +Genesis restarts should be used sparingly and as backup plans. + +Genesis restarts still require the usage of an IBC upgrade proposal in order to correctly upgrade counterparty clients. + +### Step-by-Step Upgrade Process for SDK Chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the [IBC Client Breaking Upgrade List](/docs/ibc/v10.1.x/ibc/upgrades/quick-guide#ibc-client-breaking-upgrades) and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a governance proposal with the [`MsgIBCSoftwareUpgrade`](https://buf.build/cosmos/ibc/docs/main:ibc.core.client.v1#ibc.core.client.v1.MsgIBCSoftwareUpgrade) which contains an `UpgradePlan` and a new IBC `ClientState` in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as `TrustingPeriod`). +2. Vote on and pass the governance proposal. +3. 
Halt the node after successful upgrade. +4. Export the genesis file. +5. Swap to the new binary. +6. Run migrations on the genesis file. +7. Remove the upgrade plan set by the governance proposal from the genesis file. This may be done by migrations. +8. Change desired chain-specific fields (chain id, unbonding period, etc). This may be done by migrations. +9. Reset the node's data. +10. Start the chain. + +Upon passing the governance proposal, the upgrade module will commit the `UpgradedClient` under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +These steps are identical to the regular [IBC client breaking upgrade process](/docs/ibc/v10.1.x/ibc/upgrades/quick-guide#step-by-step-upgrade-process-for-relayers-upgrading-counterparty-clients). + +## Non-IBC Client Breaking Upgrades + +While ibc-go supports genesis restarts which do not break IBC clients, relayers do not support this upgrade path. +Here is a tracking issue on [Hermes](https://github.com/informalsystems/ibc-rs/issues/1152). +Please do not attempt a regular genesis restart unless you have a tool to update counterparty clients correctly.
diff --git a/docs/ibc/v10.1.x/ibc/upgrades/intro.mdx b/docs/ibc/v10.1.x/ibc/upgrades/intro.mdx new file mode 100644 index 00000000..d03ed363 --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/upgrades/intro.mdx @@ -0,0 +1,13 @@ +--- +title: Upgrading IBC Chains Overview +description: >- + This directory contains information on how to upgrade an IBC chain without + breaking counterparty clients and connections. +--- + +This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections. + +IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving. Many chain upgrades may be irrelevant to IBC, however some upgrades could potentially break counterparty clients if not handled correctly. Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client. + +1. The [quick-guide](/docs/ibc/v10.1.x/ibc/upgrades/quick-guide) describes how IBC-connected chains can perform client-breaking upgrades and how relayers can securely upgrade counterparty clients using the SDK. +2. The [developer-guide](/docs/ibc/v10.1.x/ibc/upgrades/developer-guide) is a guide for developers intending to develop IBC client implementations with upgrade functionality. diff --git a/docs/ibc/v10.1.x/ibc/upgrades/quick-guide.mdx b/docs/ibc/v10.1.x/ibc/upgrades/quick-guide.mdx new file mode 100644 index 00000000..938254b9 --- /dev/null +++ b/docs/ibc/v10.1.x/ibc/upgrades/quick-guide.mdx @@ -0,0 +1,54 @@ +--- +title: How to Upgrade IBC Chains and their Clients +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients. + +The information in this doc for upgrading chains is relevant to SDK chains. 
However, the guide for counterparty clients is relevant to any Tendermint client that enables upgrades. + +## IBC Client Breaking Upgrades + +IBC-connected chains must perform an IBC upgrade if their upgrade will break counterparty IBC clients. The current IBC protocol supports upgrading Tendermint chains for a specific subset of IBC-client-breaking upgrades. Here is the exhaustive list of IBC client-breaking upgrades and whether the IBC protocol currently supports such upgrades. + +IBC currently does **NOT** support unplanned upgrades. All of the following upgrades must be planned and committed to in advance by the upgrading chain, in order for counterparty clients to maintain their connections securely. + +Note: Since upgrades are only implemented for Tendermint clients, this doc only discusses upgrades on Tendermint chains that would break counterparty IBC Tendermint Clients. + +1. Changing the Chain-ID: **Supported** +2. Changing the UnbondingPeriod: **Partially Supported**, chains may increase the unbonding period with no issues. However, decreasing the unbonding period may irreversibly break some counterparty clients. Thus, it is **not recommended** that chains reduce the unbonding period. +3. Changing the height (resetting to 0): **Supported**, so long as chains remember to increment the revision number in their chain-id. +4. Changing the ProofSpecs: **Supported**, this should be changed if the proof structure needed to verify IBC proofs is changed across the upgrade. Ex: Switching from an IAVL store to a SimpleTree store. +5. Changing the UpgradePath: **Supported**, this might involve changing the key under which upgraded clients and consensus states are stored in the upgrade store, or even migrating the upgrade store itself. +6. Migrating the IBC store: **Unsupported**, as the IBC store location is negotiated by the connection. +7. Upgrading to a backwards compatible version of IBC: **Supported** +8.
Upgrading to a non-backwards compatible version of IBC: **Unsupported**, as IBC version is negotiated on connection handshake. +9. Changing the Tendermint LightClient algorithm: **Partially Supported**. Changes to the light client algorithm that do not change the ClientState or ConsensusState struct may be supported, provided that the counterparty is also upgraded to support the new light client algorithm. Changes that require updating the ClientState and ConsensusState structs themselves are theoretically possible by providing a path to translate an older ClientState struct into the new ClientState struct; however this is not currently implemented. + +## Step-by-Step Upgrade Process for SDK chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the list above and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a governance proposal with the [`MsgIBCSoftwareUpgrade`](https://buf.build/cosmos/ibc/docs/main:ibc.core.client.v1#ibc.core.client.v1.MsgIBCSoftwareUpgrade) message which contains an `UpgradePlan` and a new IBC `ClientState` in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients (chain-specified parameters) and zero out any client-customizable fields (such as `TrustingPeriod`). +2. Vote on and pass the governance proposal. + +Upon passing the governance proposal, the upgrade module will commit the `UpgradedClient` under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. 
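The two committed paths above follow a fixed layout, which can be sketched as plain key builders. The helper names are illustrative, not the `x/upgrade` module's API:

```go
package main

import "fmt"

// upgradedClientKey builds the store path under which the upgrade module
// commits the UpgradedClient, as described in the text above.
func upgradedClientKey(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedClient", upgradeHeight)
}

// upgradedConsStateKey builds the store path for the initial consensus state
// of the next chain, committed on the block right before the upgrade height.
func upgradedConsStateKey(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedConsState", upgradeHeight)
}

func main() {
	fmt.Println(upgradedClientKey(100))    // upgrade/UpgradedIBCState/100/upgradedClient
	fmt.Println(upgradedConsStateKey(100)) // upgrade/UpgradedIBCState/100/upgradedConsState
}
```

Because the upgrade height is part of the key, a counterparty client can verify that the upgrade was committed at exactly that height.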
+ +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +## Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +Once the upgrading chain has committed to upgrading, relayers must wait till the chain halts at the upgrade height before upgrading counterparty clients. This is because chains may reschedule or cancel upgrade plans before they occur. Thus, relayers must wait till the chain reaches the upgrade height and halts before they can be sure the upgrade will take place. + +Thus, the upgrade process for relayers trying to upgrade the counterparty clients is as follows: + +1. Wait for the upgrading chain to reach the upgrade height and halt +2. Query a full node for the proofs of `UpgradedClient` and `UpgradedConsensusState` at the last height of the old chain. +3. Update the counterparty client to the last height of the old chain using the `UpdateClient` msg. +4. Submit an `UpgradeClient` msg to the counterparty chain with the `UpgradedClient`, `UpgradedConsensusState` and their respective proofs. +5. Submit an `UpdateClient` msg to the counterparty chain with a header from the new upgraded chain. + +The Tendermint client on the counterparty chain will verify that the upgrading chain did indeed commit to the upgraded client and upgraded consensus state at the upgrade height (since the upgrade height is included in the key). If the proofs are verified against the upgrade height, then the client will upgrade to the new client while retaining all of its client-customized fields. Thus, it will retain its old TrustingPeriod, TrustLevel, MaxClockDrift, etc; while adopting the new chain-specified fields such as UnbondingPeriod, ChainId, UpgradePath, etc. 
Note, this can lead to an invalid client since the old client-chosen fields may no longer be valid given the new chain-chosen fields. Upgrading chains should try to avoid these situations by not altering parameters that can break old clients. For an example, see the UnbondingPeriod example in the supported upgrades section. + +The upgraded consensus state will serve purely as a basis of trust for future `UpdateClientMsgs` and will not contain a consensus root to perform proof verification against. Thus, relayers must submit an `UpdateClientMsg` with a header from the new chain so that the connection can be used for proof verification again. diff --git a/docs/ibc/v10.1.x/intro.mdx b/docs/ibc/v10.1.x/intro.mdx new file mode 100644 index 00000000..9d5a2dd0 --- /dev/null +++ b/docs/ibc/v10.1.x/intro.mdx @@ -0,0 +1,43 @@ +--- +title: IBC-Go Documentation +description: >- + Welcome to the documentation for IBC-Go, the Golang implementation of the + Inter-Blockchain Communication Protocol! +--- + +Welcome to the documentation for IBC-Go, the Golang implementation of the Inter-Blockchain Communication Protocol! + +The Inter-Blockchain Communication Protocol (IBC) is a protocol that allows blockchains to talk to each other. Chains that speak IBC can share any type of data as long as it's encoded in bytes, enabling the industry’s most feature-rich cross-chain interactions. IBC can be used to build a wide range of cross-chain applications that include token transfers, atomic swaps, multi-chain smart contracts (with or without mutually comprehensible VMs), and cross-chain account control. IBC is secure and permissionless. + +The protocol realizes this interoperability by specifying a set of data structures, abstractions, and semantics that can be implemented by any distributed ledger that satisfies a small set of requirements. + + +**Notice** +Since ibc-go v10, there are two versions of the protocol in the same release: IBC classic and IBC v2. 
The protocols are separate - a connection uses either IBC classic or IBC v2. + + +## High-level overview of IBC v2 + +For a high level overview of IBC v2, please refer to [this blog post.](https://ibcprotocol.dev/blog/ibc-v2-announcement) For a more detailed understanding of the IBC v2 protocol, please refer to the [IBC v2 protocol specification.](https://github.com/cosmos/ibc/tree/main/spec/IBC_V2) + +If you are interested in using the canonical deployment of IBC v2, connecting Cosmos chains and Ethereum, take a look at the [IBC Eureka](https://docs.skip.build/go/eureka/eureka-overview) documentation to get started. + +## High-level overview of IBC Classic + +The following diagram shows how IBC works at a high level: + +![Light Mode IBC Overview](/docs/ibc/images/images/ibcoverview-light.svg#gh-light-mode-only)![Dark Mode IBC Overview](/docs/ibc/images/images/ibcoverview-dark.svg#gh-dark-mode-only) + +The transport layer (TAO) provides the necessary infrastructure to establish secure connections and authenticate data packets between chains. The application layer builds on top of the transport layer and defines exactly how data packets should be packaged and interpreted by the sending and receiving chains. + +IBC provides a reliable, permissionless, and generic base layer (allowing for the secure relaying of data packets), while allowing for composability and modularity with separation of concerns by moving application designs (interpreting and acting upon the packet data) to a higher-level layer. This separation is reflected in the categories: + +* **IBC/TAO** comprises the Transport, Authentication, and Ordering of packets, i.e. the infrastructure layer. +* **IBC/APP** consists of the application handlers for the data packets being passed over the transport layer. These include but are not limited to fungible token transfers (ICS-20), NFT transfers (ICS-721), and interchain accounts (ICS-27).
+* **Application module:** groups any application, middleware or smart contract that may wrap downstream application handlers to provide enhanced functionality. + +Note three crucial elements in the diagram: + +* The chains depend on relayers to communicate. [Relayers](https://github.com/cosmos/ibc/blob/main/spec/relayer/ics-018-relayer-algorithms/README.md) are the "physical" connection layer of IBC: off-chain processes responsible for relaying data between two chains running the IBC protocol by scanning the state of each chain, constructing appropriate datagrams, and executing them on the opposite chain as is allowed by the protocol. +* Many relayers can serve one or more channels to send messages between the chains. +* Each side of the connection uses the light client of the other chain to quickly verify incoming messages. diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/client-state.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/client-state.mdx new file mode 100644 index 00000000..a28e664b --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/developer-guide/client-state.mdx @@ -0,0 +1,16 @@ +--- +title: Client State interface +description: Learn how to implement the ClientState interface. +--- + +Learn how to implement the [`ClientState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L36) interface. + +## `ClientType` method + +`ClientType` should return a unique string identifier of the light client. This will be used when generating a client identifier. +The format is created as follows: `{client-type}-{N}` where `{N}` is the unique global nonce associated with a specific client (e.g `07-tendermint-0`). + +## `Validate` method + +`Validate` should validate every client state field and should return an error if any value is invalid. The light client +implementer is in charge of determining which checks are required. 
See the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/client_state.go#L111) as a reference. diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/consensus-state.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/consensus-state.mdx new file mode 100644 index 00000000..aad8d294 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/developer-guide/consensus-state.mdx @@ -0,0 +1,24 @@ +--- +title: Consensus State interface +description: >- + A ConsensusState is the snapshot of the counterparty chain that an IBC client + uses to verify proofs (e.g. a block). +--- + +A `ConsensusState` is the snapshot of the counterparty chain that an IBC client uses to verify proofs (e.g. a block). + +The further development of multiple types of IBC light clients and the difficulties presented by this generalization problem (see [ADR-006](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-006-02-client-refactor.md) for more information about this historical context) led to the design decision of each client keeping track of and setting its own `ClientState` and `ConsensusState`, as well as the simplification of client `ConsensusState` updates through the generalized `ClientMessage` interface. + +The below [`ConsensusState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L133) interface is a generalized interface for the types of information a `ConsensusState` could contain. For a reference `ConsensusState` implementation, please see the [Tendermint light client `ConsensusState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/consensus_state.go). + +## `ClientType` method + +This is the type of client consensus. It should be the same as the `ClientType` return value for the [corresponding `ClientState` implementation](/docs/ibc/v10.1.x/light-clients/developer-guide/client-state).
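This invariant can be shown with stub types. The types below are hypothetical stand-ins, not ibc-go's concrete implementations:

```go
package main

import "fmt"

// Stub ClientState: reports the light client type used to build client
// identifiers such as "07-tendermint-0".
type ClientState struct{}

func (ClientState) ClientType() string { return "07-tendermint" }

// Stub ConsensusState: must report the same client type as its ClientState.
type ConsensusState struct{}

func (ConsensusState) ClientType() string { return "07-tendermint" }

func main() {
	cs, cons := ClientState{}, ConsensusState{}
	fmt.Println(cs.ClientType() == cons.ClientType()) // true
}
```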
+ +## `GetTimestamp` method + +`GetTimestamp` should return the timestamp (in nanoseconds) of the consensus state snapshot. This function has been deprecated and will be removed in a future release. + +## `ValidateBasic` method + +`ValidateBasic` should validate every consensus state field and should return an error if any value is invalid. The light client implementer is in charge of determining which checks are required. diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/light-client-module.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/light-client-module.mdx new file mode 100644 index 00000000..e757985e --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/developer-guide/light-client-module.mdx @@ -0,0 +1,71 @@ +--- +title: Light Client Module interface +description: Status must return the status of the client. +--- + +## `Status` method + +`Status` must return the status of the client. + +* An `Active` status indicates that clients are allowed to process packets. +* A `Frozen` status indicates that misbehaviour was detected in the counterparty chain and the client is not allowed to be used. +* An `Expired` status indicates that a client is not allowed to be used because it was not updated for longer than the trusting period. +* An `Unknown` status indicates that there was an error in determining the status of a client. + +All possible `Status` types can be found [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L22-L32). + +This field is returned in the response of the gRPC [`ibc.core.client.v1.Query/ClientStatus`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/types/query.pb.go#L665) endpoint. + +## `TimestampAtHeight` method + +`TimestampAtHeight` must return the timestamp for the consensus state associated with the provided height. +This value is used to facilitate timeouts by checking the packet timeout timestamp against the returned value. 
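That timeout check can be sketched as follows. The helper is illustrative only and does not mirror the actual ibc-go method signature, which resolves the consensus timestamp from a client store and height:

```go
package main

import "fmt"

// hasTimedOut reports whether a packet's timestamp timeout has elapsed,
// comparing the consensus state timestamp at the proof height against the
// packet's timeout timestamp (both in nanoseconds). A zero timeout timestamp
// means no timestamp-based timeout was set.
func hasTimedOut(consensusTimestampNs, timeoutTimestampNs uint64) bool {
	return timeoutTimestampNs != 0 && consensusTimestampNs >= timeoutTimestampNs
}

func main() {
	fmt.Println(hasTimedOut(2_000, 1_000)) // true
	fmt.Println(hasTimedOut(500, 1_000))   // false
}
```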
+ +## `LatestHeight` method + +`LatestHeight` should return the latest block height that the client state represents. + +## `Initialize` method + +Clients must validate the initial consensus state, and set the initial client state and consensus state in the provided client store. +Clients may also store any necessary client-specific metadata. + +`Initialize` is called when a [client is created](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L30). + +## `UpdateState` method + +`UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. See section [`UpdateState`](/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour#updatestate) for more information. + +## `UpdateStateOnMisbehaviour` method + +`UpdateStateOnMisbehaviour` should perform appropriate state changes on a client state given that misbehaviour has been detected and verified. See section [`UpdateStateOnMisbehaviour`](/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour#updatestateonmisbehaviour) for more information. + +## `VerifyMembership` method + +`VerifyMembership` must verify the existence of a value at a given commitment path at the specified height. For more information about membership proofs +see the [Existence and non-existence proofs section](/docs/ibc/v10.1.x/light-clients/developer-guide/proofs). + +## `VerifyNonMembership` method + +`VerifyNonMembership` must verify the absence of a value at a given commitment path at a specified height. For more information about non-membership proofs +see the [Existence and non-existence proofs section](/docs/ibc/v10.1.x/light-clients/developer-guide/proofs). + +## `VerifyClientMessage` method + +`VerifyClientMessage` must verify a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. +It must handle each type of `ClientMessage` appropriately. 
Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` +will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned +if the `ClientMessage` fails to verify. See section [`VerifyClientMessage`](/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour#verifyclientmessage) for more information. + +## `CheckForMisbehaviour` method + +`CheckForMisbehaviour` checks for evidence of misbehaviour in a `Header` or `Misbehaviour` type. It assumes the `ClientMessage` +has already been verified. See section [`CheckForMisbehaviour`](/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour#checkformisbehaviour) for more information. + +## `RecoverClient` method + +`RecoverClient` is used to recover an expired or frozen client by updating the client with the state of a substitute client. The method must verify that the provided substitute may be used to update the subject client. See section [Implementing `RecoverClient`](/docs/ibc/v10.1.x/light-clients/proposals#implementing-recoverclient) for more information. + +## `VerifyUpgradeAndUpdateState` method + +`VerifyUpgradeAndUpdateState` provides a path to upgrading clients given an upgraded `ClientState`, upgraded `ConsensusState` and proofs for each. See section [Implementing `VerifyUpgradeAndUpdateState`](/docs/ibc/v10.1.x/light-clients/developer-guide/upgrades#implementing-verifyupgradeandupdatestate) for more information. diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/overview.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/overview.mdx new file mode 100644 index 00000000..55281e10 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/developer-guide/overview.mdx @@ -0,0 +1,90 @@ +--- +title: Overview +--- + +## Synopsis + +Learn how to build IBC light client modules and fulfill the interfaces required to integrate with core IBC.
+ + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v10.1.x/ibc/overview) +- [IBC Transport, Authentication, and Ordering Layer - Clients](https://tutorials.cosmos.network/academy/3-ibc/4-clients.html) +- [ICS-002 Client Semantics](https://github.com/cosmos/ibc/tree/main/spec/core/ics-002-client-semantics) + + + +IBC uses light clients in order to provide trust-minimized interoperability between sovereign blockchains. Light clients operate under a strict set of rules which provide security guarantees for state updates and facilitate the ability to verify the state of a remote blockchain using merkle proofs. + +The following aims to provide a high level IBC light client module developer guide. Access to IBC light clients is gated by the core IBC `MsgServer` which utilizes the abstractions set by the `02-client` submodule to call into a light client module. A light client module developer is only required to implement a set of interfaces as defined in the `modules/core/exported` package of ibc-go. + +A light client module developer should be concerned with four main interfaces: + +- [`LightClientModule`](#lightclientmodule) a module which manages many light client instances of a certain type. +- [`ClientState`](#clientstate) encapsulates the light client implementation and its semantics. +- [`ConsensusState`](#consensusstate) tracks consensus data used for verification of client updates, misbehaviour detection and proof verification of counterparty state. +- [`ClientMessage`](#clientmessage) used for submitting block headers for client updates and submission of misbehaviour evidence using conflicting headers. + +Throughout this guide the `07-tendermint` light client module may be referred to as a reference example. + +## Concepts and vocabulary + +### `LightClientModule` + +`LightClientModule` is an interface defined by core IBC which allows for modular light client implementations.
All light client implementations _must_ implement the [`LightClientModule` interface](https://github.com/cosmos/ibc-go/blob/501a8462345da099144efe91d495bfcfa18d760d/modules/core/exported/client.go#L51) so that core IBC may redirect calls to the light client module. + +For example, a light client module may need to: + +- create clients +- update clients +- recover and upgrade clients +- verify membership and non-membership + +The methods which make up this interface are detailed at a more granular level in the [`LightClientModule` section of this guide](/docs/ibc/v10.1.x/light-clients/developer-guide/light-client-module). + +Please refer to the `07-tendermint` light client module's [`LightClientModule` definition](https://github.com/cosmos/ibc-go/blob/501a8462345da099144efe91d495bfcfa18d760d/modules/light-clients/07-tendermint/light_client_module.go#L17) for more information. + +### `ClientState` + +`ClientState` is a term used to define the data structure which encapsulates opaque light client state. The `ClientState` contains all the information needed to verify a `ClientMessage` and perform membership and non-membership proof verification of counterparty state. This includes properties that refer to the remote state machine, the light client type and the specific light client instance. + +For example: + +- Constraints used for client updates. +- Constraints used for misbehaviour detection. +- Constraints used for state verification. +- Constraints used for client upgrades. + +The `ClientState` type maintained within the light client module _must_ implement the [`ClientState`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L36) interface defined in `modules/core/exported/client.go`. +The methods which make up this interface are detailed at a more granular level in the [`ClientState` section of this guide](/docs/ibc/v10.1.x/light-clients/developer-guide/client-state).
+ +Please refer to the `07-tendermint` light client module's [`ClientState` definition](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/proto/ibc/lightclients/tendermint/v1/tendermint.proto#L18) containing information such as chain ID, status, latest height, unbonding period and proof specifications. + +### `ConsensusState` + +`ConsensusState` is a term used to define the data structure which encapsulates consensus data at a particular point in time, i.e. a unique height or sequence number of a state machine. There must exist a single trusted `ConsensusState` for each height. `ConsensusState` generally contains a trusted root, validator set information and timestamp. + +For example, the `ConsensusState` of the `07-tendermint` light client module defines a trusted root which is used by the `ClientState` to perform verification of membership and non-membership commitment proofs, as well as the next validator set hash used for verifying headers can be trusted in client updates. + +The `ConsensusState` type maintained within the light client module _must_ implement the [`ConsensusState`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L134) interface defined in `modules/core/exported/client.go`. +The methods which make up this interface are detailed at a more granular level in the [`ConsensusState` section of this guide](/docs/ibc/v10.1.x/light-clients/developer-guide/consensus-state). + +### `Height` + +`Height` defines a monotonically increasing sequence number which provides ordering of consensus state data persisted through client updates. +IBC light client module developers are expected to use the [concrete type](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/proto/ibc/core/client/v1/client.proto#L89) provided by the `02-client` submodule. 
This implements the expectations required by the [`Height`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L156) interface defined in `modules/core/exported/client.go`. + +### `ClientMessage` + +`ClientMessage` refers to the interface type [`ClientMessage`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L147) used for performing updates to a `ClientState` stored on chain. +This may be any concrete type which produces a change in state to the IBC client when verified. + +The following are considered as valid update scenarios: + +- A block header which when verified inserts a new `ConsensusState` at a unique height. +- A batch of block headers which when verified inserts `N` `ConsensusState` instances for `N` unique heights. +- Evidence of misbehaviour provided by two conflicting block headers. + +Learn more in the [Handling update and misbehaviour](/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour) section. diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/proofs.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/proofs.mdx new file mode 100644 index 00000000..235ed6c0 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/developer-guide/proofs.mdx @@ -0,0 +1,73 @@ +--- +title: Existence/Non-Existence Proofs +description: >- + IBC uses merkle proofs in order to verify the state of a remote counterparty + state machine given a trusted root, and ICS-23 is a general approach for + verifying merkle trees which is used in ibc-go. +--- + +IBC uses merkle proofs in order to verify the state of a remote counterparty state machine given a trusted root, and [ICS-23](https://github.com/cosmos/ics23/tree/master/go) is a general approach for verifying merkle trees which is used in ibc-go. + +Currently, all Cosmos SDK modules contain their own stores, which maintain the state of the application module in an IAVL (immutable AVL) binary merkle tree format. 
Specifically with regard to IBC, core IBC maintains its own IAVL store, and IBC apps (e.g. transfer) maintain their own dedicated stores. The Cosmos SDK multistore therefore creates a simple merkle tree of all of these IAVL trees, and from each of these individual IAVL tree root hashes it derives a root hash for the application state tree as a whole (the `AppHash`). + +For the purposes of ibc-go, there are two types of proofs which are important: existence and non-existence proofs, terms which have been used interchangeably with membership and non-membership proofs. For the purposes of this guide, we will stick with "existence" and "non-existence". + +## Existence proofs + +Existence proofs are used in IBC transactions which involve verification of counterparty state for transactions which will result in the writing of provable state. For example, this includes verification of IBC store state for handshakes and packets. + +Put simply, existence proofs prove that a particular key and value exists in the tree. Under the hood, an IBC existence proof is comprised of two proofs: an IAVL proof that the key exists in IBC store/IBC root hash, and a proof that the IBC root hash exists in the multistore root hash. + +## Non-existence proofs + +Non-existence proofs verify the absence of data stored within counterparty state and are used to prove that a key does NOT exist in state. As stated above, these types of proofs can be used to timeout packets by proving that the counterparty has not written a packet receipt into the store, meaning that a token transfer has NOT successfully occurred. + +Some trees (e.g. SMT) may have a sentinel empty child for non-existent keys. In this case, the ICS-23 proof spec should include this `EmptyChild` so that ICS-23 handles the non-existence proof correctly. + +In some cases, there is a necessity to "mock" non-existence proofs if the counterparty does not have ability to prove absence. 
Since the verification method is designed to give complete control to client implementations, clients can support chains that do not provide absence proofs by verifying the existence of a non-empty sentinel `ABSENCE` value. In these special cases, the proof provided will be an ICS-23 `Existence` proof, and the client will verify that the `ABSENCE` value is stored under the given path for the given height.
+
+## State verification methods: `VerifyMembership` and `VerifyNonMembership`
+
+The state verification functions for all IBC data types have been consolidated into two generic methods, `VerifyMembership` and `VerifyNonMembership`.
+
+From the [`LightClientModule` interface definition](https://github.com/cosmos/ibc-go/blob/main/modules/core/exported/client.go#L56), we find:
+
+```go expandable
+/ VerifyMembership is a generic proof verification method which verifies
+/ a proof of the existence of a value at a given CommitmentPath at the
+/ specified height. The caller is expected to construct the full CommitmentPath
+/ from a CommitmentPrefix and a standardized path (as defined in ICS 24).
+VerifyMembership(
+	ctx sdk.Context,
+	clientID string,
+	height Height,
+	delayTimePeriod uint64,
+	delayBlockPeriod uint64,
+	proof []byte,
+	path Path,
+	value []byte,
+) error
+
+/ VerifyNonMembership is a generic proof verification method which verifies
+/ the absence of a given CommitmentPath at a specified height. The caller is
+/ expected to construct the full CommitmentPath from a CommitmentPrefix and
+/ a standardized path (as defined in ICS 24).
+VerifyNonMembership(
+	ctx sdk.Context,
+	clientID string,
+	height Height,
+	delayTimePeriod uint64,
+	delayBlockPeriod uint64,
+	proof []byte,
+	path Path,
+) error
+```
+
+Both are expected to be provided with a standardised key path, `exported.Path`, as defined in [ICS-24 host requirements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements).
Membership verification requires callers to provide the value marshalled as `[]byte`. Delay period values should be zero for non-packet processing verification. A zero proof height is now allowed by core IBC and may be passed into `VerifyMembership` and `VerifyNonMembership`. Light clients are responsible for returning an error if a zero proof height is invalid behaviour. + +Please refer to the [ICS-23 implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/23-commitment/types/merkle.go#L131-L205) for a concrete example. diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/proposals.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/proposals.mdx new file mode 100644 index 00000000..ea7df0e6 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/developer-guide/proposals.mdx @@ -0,0 +1,32 @@ +--- +title: Handling Proposals +--- + +It is possible to update the client with the state of the substitute client through a governance proposal. This type of governance proposal is typically used to recover an expired or frozen client, as it can recover the entire state and therefore all existing channels built on top of the client. `RecoverClient` should be implemented to handle the proposal. + +## Implementing `RecoverClient` + +In the [`LightClientModule` interface](https://github.com/cosmos/ibc-go/blob/501a8462345da099144efe91d495bfcfa18d760d/modules/core/exported/client.go#L51), we find: + +```go +/ RecoverClient must verify that the provided substitute +/ may be used to update the subject client. The light client +/ must set the updated client and consensus states within +/ the clientStore for the subject client. +RecoverClient( + ctx sdk.Context, + clientID, + substituteClientID string, +) + +error +``` + +Prior to updating, this function must verify that: + +* the substitute client is the same type as the subject client. 
For a reference implementation, please see the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/light-clients/07-tendermint/proposal_handle.go#L34).
+* the provided substitute may be used to update the subject client. This may mean that certain parameters must remain unaltered. For example, a [valid substitute Tendermint light client](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/light-clients/07-tendermint/proposal_handle.go#L86) must NOT change the chain ID, trust level, max clock drift, unbonding period, proof specs or upgrade path. Please note that `AllowUpdateAfterMisbehaviour` and `AllowUpdateAfterExpiry` have been deprecated (see ADR 026 for more information).
+
+After these checks are performed, the function must [set the updated client and consensus states](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/proposal_handle.go#L77) within the client store for the subject client.
+
+Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/light-clients/07-tendermint/proposal_handle.go#L79) for reference.
diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/setup.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/setup.mdx
new file mode 100644
index 00000000..48d6fa0c
--- /dev/null
+++ b/docs/ibc/v10.1.x/light-clients/developer-guide/setup.mdx
@@ -0,0 +1,157 @@
+---
+title: Setup
+---
+
+## Synopsis
+
+Learn how to configure light client modules and create clients using core IBC and the `02-client` submodule.
+
+The last step in developing the light client is to implement the `AppModuleBasic` interface to allow it to be added to the chain's `app.go` alongside other light client types the chain enables.
+
+Finally, a succinct rundown is given of the remaining steps to make the light client operational, getting the light client type passed through governance and creating the clients.
+
+## Configuring a light client module
+
+An IBC light client module must implement the [`AppModuleBasic`](https://github.com/cosmos/cosmos-sdk/blob/main/types/module/module.go#L50) interface in order to register its concrete types against the core IBC interfaces defined in `modules/core/exported`. This is accomplished via the `RegisterInterfaces` method which provides the light client module with the opportunity to register codec types using the chain's `InterfaceRegistry`. Please refer to the [`07-tendermint` codec registration](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/codec.go#L11).
+
+The `AppModuleBasic` interface may also be leveraged to install custom CLI handlers for light client module users. Light client modules can safely no-op for interface methods which they do not wish to implement.
+
+Please refer to the [core IBC documentation](/docs/ibc/v10.1.x/ibc/integration#integrating-light-clients) for how to configure additional light client modules alongside `07-tendermint` in `app.go`.
+
+See below for an example of the `07-tendermint` implementation of `AppModuleBasic`.
+
+```go expandable
+var _ module.AppModuleBasic = AppModuleBasic{}
+
+/ AppModuleBasic defines the basic application module used by the tendermint light client.
+/ Only the RegisterInterfaces function needs to be implemented. All other functions perform
+/ a no-op.
+type AppModuleBasic struct{}
+
+/ Name returns the tendermint module name.
+func (AppModuleBasic) Name() string {
+	return ModuleName
+}
+
+/ RegisterLegacyAminoCodec performs a no-op. The Tendermint client does not support amino.
+func (AppModuleBasic) RegisterLegacyAminoCodec(*codec.LegacyAmino) {}
+
+/ RegisterInterfaces registers module concrete types into protobuf Any. This allows core IBC
+/ to unmarshal tendermint light client types.
+func (AppModuleBasic) RegisterInterfaces(registry codectypes.InterfaceRegistry) {
+	RegisterInterfaces(registry)
+}
+
+/ DefaultGenesis performs a no-op. Genesis is not supported for the tendermint light client.
+func (AppModuleBasic) DefaultGenesis(cdc codec.JSONCodec) json.RawMessage {
+	return nil
+}
+
+/ ValidateGenesis performs a no-op. Genesis is not supported for the tendermint light client.
+func (AppModuleBasic) ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) error {
+	return nil
+}
+
+/ RegisterGRPCGatewayRoutes performs a no-op.
+func (AppModuleBasic) RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux) {}
+
+/ GetTxCmd performs a no-op. Please see the 02-client cli commands.
+func (AppModuleBasic) GetTxCmd() *cobra.Command {
+	return nil
+}
+
+/ GetQueryCmd performs a no-op. Please see the 02-client cli commands.
+func (AppModuleBasic) GetQueryCmd() *cobra.Command {
+	return nil
+}
+```
+
+## Creating clients
+
+A client is created by executing a new `MsgCreateClient` transaction composed with a valid `ClientState` and initial `ConsensusState` encoded as protobuf `Any`s.
+Generally, this is performed by an off-chain process known as an [IBC relayer](https://github.com/cosmos/ibc/tree/main/spec/relayer/ics-018-relayer-algorithms); however, this is not a strict requirement.
+
+See below for a list of IBC relayer implementations:
+
+- [cosmos/relayer](https://github.com/cosmos/relayer)
+- [informalsystems/hermes](https://github.com/informalsystems/hermes)
+- [confio/ts-relayer](https://github.com/confio/ts-relayer)
+
+Stateless checks are performed within the [`ValidateBasic`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/types/msgs.go#L48) method of `MsgCreateClient`.
+ +```protobuf expandable +/ MsgCreateClient defines a message to create an IBC client +message MsgCreateClient { + option (gogoproto.goproto_getters) = false; + + / light client state + google.protobuf.Any client_state = 1 [(gogoproto.moretags) = "yaml:\"client_state\""]; + / consensus state associated with the client that corresponds to a given + / height. + google.protobuf.Any consensus_state = 2 [(gogoproto.moretags) = "yaml:\"consensus_state\""]; + / signer address + string signer = 3; +} +``` + +Leveraging protobuf `Any` encoding allows core IBC to [unpack](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/core/keeper/msg_server.go#L38) the `ClientState` into its respective interface type registered previously using the light client module's `RegisterInterfaces` method. + +Within the `02-client` submodule, the [`ClientState` is then initialized](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/core/02-client/keeper/client.go#L40-L42) with its own isolated key-value store, namespaced using a unique client identifier. + +In order to successfully create an IBC client using a new client type, it [must be supported](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L19-L25). Light client support in IBC is gated by on-chain governance. The allow list may be updated by submitting a new governance proposal to update the `02-client` parameter `AllowedClients`. 
+
+See below for an example:
+
+```shell
+%s tx gov submit-proposal <path/to/proposal.json> --from <key_or_address>
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "title": "IBC Clients Param Change",
+  "summary": "Update allowed clients",
+  "messages": [
+    {
+      "@type": "/ibc.core.client.v1.MsgUpdateParams",
+      "signer": "cosmos1...", / The gov module account address
+      "params": {
+        "allowed_clients": ["06-solomachine", "07-tendermint", "0x-new-client"]
+      }
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "100stake"
+}
+```
+
+If the `AllowedClients` list contains a single element that is equal to the wildcard `"*"`, then all client types are allowed and it is thus not necessary to submit a governance proposal to update the parameter.
diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour.mdx
new file mode 100644
index 00000000..dd3e0f4b
--- /dev/null
+++ b/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour.mdx
@@ -0,0 +1,109 @@
+---
+title: Handling Updates and Misbehaviour
+description: >-
+  As mentioned before in the documentation about implementing the ConsensusState
+  interface, ClientMessage is an interface used to update an IBC client. This
+  update may be performed by:
+---
+
+As mentioned before in the documentation about [implementing the `ConsensusState` interface](/docs/ibc/v10.1.x/light-clients/developer-guide/consensus-state), [`ClientMessage`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L147) is an interface used to update an IBC client. This update may be performed by:
+
+* a single header,
+* a batch of headers,
+* evidence of misbehaviour,
+* or any type which when verified produces a change to the consensus state of the IBC client.
+
+This interface has been purposefully kept generic in order to give the maximum amount of flexibility to the light client implementer.
+
+## Implementing the `ClientMessage` interface
+
+Find the `ClientMessage` interface in `modules/core/exported`:
+
+```go
+type ClientMessage interface {
+	proto.Message
+
+	ClientType() string
+	ValidateBasic() error
+}
+```
+
+The `ClientMessage` will be passed to the client to be used in [`UpdateClient`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L48), which retrieves the `LightClientModule` by client type (parsed from the client ID available in `MsgUpdateClient`). This `LightClientModule` implements the [`LightClientModule` interface](/docs/ibc/v10.1.x/light-clients/developer-guide/light-client-module) for its specific consensus type (e.g. Tendermint).
+
+`UpdateClient` will then handle a number of cases including misbehaviour and/or updating the consensus state, utilizing the specific methods defined in the relevant `LightClientModule`.
+
+```go
+VerifyClientMessage(ctx sdk.Context, clientID string, clientMsg ClientMessage) error
+CheckForMisbehaviour(ctx sdk.Context, clientID string, clientMsg ClientMessage) bool
+UpdateStateOnMisbehaviour(ctx sdk.Context, clientID string, clientMsg ClientMessage)
+UpdateState(ctx sdk.Context, clientID string, clientMsg ClientMessage) []Height
+```
+
+## Handling updates and misbehaviour
+
+The functions for handling updates to a light client and evidence of misbehaviour are all found in the [`LightClientModule`](https://github.com/cosmos/ibc-go/blob/501a8462345da099144efe91d495bfcfa18d760d/modules/core/exported/client.go#L51) interface, and will be discussed below.
+
+> It is important to note that `Misbehaviour` in this particular context is referring to misbehaviour on the chain level intended to fool the light client. This will be defined by each light client.
+
+## `VerifyClientMessage`
+
+`VerifyClientMessage` must verify a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update.
To understand how to implement a `ClientMessage`, please refer to the [Implementing the `ClientMessage` interface](#implementing-the-clientmessage-interface) section.
+
+It must handle each type of `ClientMessage` appropriately. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned if the `ClientMessage` fails to verify.
+
+For an example of a `VerifyClientMessage` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/76730ff030b52a351096ee941b7e4da44af9f059/modules/light-clients/07-tendermint/update.go#L23).
+
+## `CheckForMisbehaviour`
+
+`CheckForMisbehaviour` checks for evidence of misbehaviour in a `Header` or `Misbehaviour` type. It assumes the `ClientMessage` has already been verified.
+
+For an example of a `CheckForMisbehaviour` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/76730ff030b52a351096ee941b7e4da44af9f059/modules/light-clients/07-tendermint/misbehaviour_handle.go#L22).
+
+> The Tendermint light client [defines `Misbehaviour`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/misbehaviour.go) as covering two types of situations: two conflicting `Header`s with the same height submitted to update a client's `ConsensusState` within the same trusting period, or two conflicting `Header`s submitted at different heights whose consensus states are not in the correct monotonic time ordering (a BFT time violation). More explicitly, an update to a new height must carry a timestamp greater than that of the previous consensus state, and a consensus state inserted at a past height must carry a timestamp less than those of the heights that come after it and greater than those of the heights that come before it.
+
+## `UpdateStateOnMisbehaviour`
+
+`UpdateStateOnMisbehaviour` should perform appropriate state changes on a client state given that misbehaviour has been detected and verified. This method should only be called when misbehaviour is detected, as it does not perform any misbehaviour checks. Notably, it should freeze the client so that calling the `Status` function on the associated client state no longer returns `Active`.
+
+For an example of an `UpdateStateOnMisbehaviour` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/76730ff030b52a351096ee941b7e4da44af9f059/modules/light-clients/07-tendermint/update.go#L202).
+
+## `UpdateState`
+
+`UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. It should perform a no-op on duplicate updates.
+
+It assumes the `ClientMessage` has already been verified.
+
+For an example of an `UpdateState` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/76730ff030b52a351096ee941b7e4da44af9f059/modules/light-clients/07-tendermint/update.go#L134).
+
+## Putting it all together
+
+The `02-client` `Keeper` module in ibc-go offers a reference as to how these functions will be used to [update the client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L48).
+ +```go expandable +clientModule, found := k.router.GetRoute(clientID) + if !found { + return errorsmod.Wrap(types.ErrRouteNotFound, clientID) +} + if err := clientModule.VerifyClientMessage(ctx, clientID, clientMsg); err != nil { + return err +} + foundMisbehaviour := clientModule.CheckForMisbehaviour(ctx, clientID, clientMsg) + if foundMisbehaviour { + clientModule.UpdateStateOnMisbehaviour(ctx, clientID, clientMsg) + / emit misbehaviour event + return +} + +clientModule.UpdateState(ctx, clientID, clientMsg) / expects no-op on duplicate header +/ emit update event +return +``` diff --git a/docs/ibc/v10.1.x/light-clients/developer-guide/upgrades.mdx b/docs/ibc/v10.1.x/light-clients/developer-guide/upgrades.mdx new file mode 100644 index 00000000..e22170ef --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/developer-guide/upgrades.mdx @@ -0,0 +1,60 @@ +--- +title: Handling Upgrades +--- + +It is vital that high-value IBC clients can upgrade along with their underlying chains to avoid disruption to the IBC ecosystem. Thus, IBC client developers will want to implement upgrade functionality to enable clients to maintain connections and channels even across chain upgrades. + +## Implementing `VerifyUpgradeAndUpdateState` + +The IBC protocol allows client implementations to provide a path to upgrading clients given the upgraded `ClientState`, upgraded `ConsensusState` and proofs for each. This path is provided in the `VerifyUpgradeAndUpdateState` method: + +```go expandable +/ NOTE: proof heights are not included as upgrade to a new revision is expected to pass only on the last +/ height committed by the current revision. Clients are responsible for ensuring that the planned last +/ height of the current revision is somehow encoded in the proof verification process. +/ This is to ensure that no premature upgrades occur, since upgrade plans committed to by the counterparty +/ may be cancelled or modified before the last planned height. 
+/ If the upgrade is verified, the upgraded client and consensus states must be set in the client store.
+func (l LightClientModule) VerifyUpgradeAndUpdateState(
+	ctx sdk.Context,
+	clientID string,
+	newClient []byte,
+	newConsState []byte,
+	upgradeClientProof,
+	upgradeConsensusStateProof []byte,
+) error
+```
+
+> Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/47162061bcbfe74df791161059715a635e31c604/modules/light-clients/07-tendermint/light_client_module.go#L257) as an example implementation.
+
+It is important to note that light clients **must** handle all management of client and consensus states including the setting of updated `ClientState` and `ConsensusState` in the client store. This can include verifying that the submitted upgraded `ClientState` is of a valid `ClientState` type, that the height of the upgraded client is greater than the height of the current client (in order to preserve BFT monotonic time), or that certain parameters which should not be changed have not been altered in the upgraded `ClientState`.
+
+Developers must ensure that the `MsgUpgradeClient` does not pass until the last height of the old chain has been committed, and after the chain upgrades, the `MsgUpgradeClient` should pass once and only once on all counterparty clients.
+
+### Upgrade path
+
+Clients should have **prior knowledge of the merkle path** that the upgraded client and upgraded consensus states will use. The height at which the upgrade has occurred should also be encoded in the proof.
+
+> The Tendermint client implementation accomplishes this by including an `UpgradePath` in the `ClientState` itself, which is used along with the upgrade height to construct the merkle path under which the client state and consensus state are committed.
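As a rough illustration of the note above, a client could derive the commitment key for the upgraded client state by joining the `UpgradePath` with the last height of the current revision. The key layout below follows the Tendermint client's `upgradedIBCState`/`upgradedClient` convention as an assumption; it is not a fixed ibc-go API.

```go
package main

import (
	"fmt"
	"strings"
)

// upgradeKey sketches how a light client might combine a chain's UpgradePath
// with the last height of the current revision to derive the key under which
// the upgraded client state is committed. The "upgradedClient" suffix is the
// Tendermint client's convention; other clients may commit under other keys.
func upgradeKey(upgradePath []string, lastHeight uint64, suffix string) string {
	return fmt.Sprintf("%s/%d/%s", strings.Join(upgradePath, "/"), lastHeight, suffix)
}

func main() {
	// A typical Tendermint-style upgrade path stored in the ClientState.
	path := []string{"upgrade", "upgradedIBCState"}
	fmt.Println(upgradeKey(path, 150, "upgradedClient"))
	// upgrade/upgradedIBCState/150/upgradedClient
}
```

Because the height is part of the key, a proof can only verify against the state committed at the planned last height of the revision, which is what prevents premature upgrades.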
+
+## Chain-specific vs client-specific client parameters
+
+Developers should maintain the distinction between client parameters that are uniform across every valid light client of a chain (chain-chosen parameters), and client parameters that are customizable by each individual client (client-chosen parameters).
+
+When upgrading a client, developers must ensure that the new client adopts all of the new chain-chosen parameters, while preserving the client-chosen parameters from the previous version of the client.
+
+## Security
+
+Upgrades must adhere to the IBC Security Model. IBC does not rely on the assumption of honest relayers for correctness. Thus, users should not have to rely on relayers to maintain client correctness and security (though honest relayers must exist to maintain relayer liveness). While relayers may choose any set of client parameters while creating a new `ClientState`, this still holds under the security model since users can always choose a relayer-created client that suits their security and correctness needs or create a client with their desired parameters if no such client exists.
+
+However, when upgrading an existing client, one must keep in mind that there are already many users who depend on this client's particular parameters. **We cannot give the upgrading relayer free choice over these parameters once they have already been chosen. This would violate the security model** since users who rely on the client would have to rely on the upgrading relayer to maintain the same level of security.
+ +Thus, developers must make sure that their upgrade mechanism allows clients to upgrade the chain-specified parameters whenever a chain upgrade changes these parameters (examples in the Tendermint client include `UnbondingPeriod`, `TrustingPeriod`, `ChainID`, `UpgradePath`, etc), while ensuring that the relayer submitting the `MsgUpgradeClient` cannot alter the client-chosen parameters that the users are relying upon (examples in Tendermint client include `TrustLevel`, `MaxClockDrift`, etc). + +### Document potential client parameter conflicts during upgrades + +Counterparty clients can upgrade securely by using all of the chain-chosen parameters from the chain-committed `UpgradedClient` and preserving all of the old client-chosen parameters. This enables chains to securely upgrade without relying on an honest relayer, however it can in some cases lead to an invalid final `ClientState` if the new chain-chosen parameters clash with the old client-chosen parameter. This can happen in the Tendermint client case if the upgrading chain lowers the `UnbondingPeriod` (chain-chosen) to a duration below that of a counterparty client's `TrustingPeriod` (client-chosen). Such cases should be clearly documented by developers, so that chains know which upgrades should be avoided to prevent this problem. The final upgraded client should also be validated in `VerifyUpgradeAndUpdateState` before returning to ensure that the client does not upgrade to an invalid `ClientState`. diff --git a/docs/ibc/v10.1.x/light-clients/localhost/client-state.mdx b/docs/ibc/v10.1.x/light-clients/localhost/client-state.mdx new file mode 100644 index 00000000..831bb962 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/localhost/client-state.mdx @@ -0,0 +1,6 @@ +--- +title: ClientState +description: The 09-localhost client is stateless and has no types. +--- + +The 09-localhost client is stateless and has no types. 
diff --git a/docs/ibc/v10.1.x/light-clients/localhost/connection.mdx b/docs/ibc/v10.1.x/light-clients/localhost/connection.mdx
new file mode 100644
index 00000000..9c2fe5f5
--- /dev/null
+++ b/docs/ibc/v10.1.x/light-clients/localhost/connection.mdx
@@ -0,0 +1,29 @@
+---
+title: Connection
+description: >-
+  The 09-localhost light client module integrates with core IBC through a single
+  sentinel localhost connection. The sentinel ConnectionEnd is stored by default
+  in the core IBC store.
+---
+
+The 09-localhost light client module integrates with core IBC through a single sentinel localhost connection.
+The sentinel `ConnectionEnd` is stored by default in the core IBC store.
+
+This enables channel handshakes to be initiated out of the box by supplying the localhost connection identifier (`connection-localhost`) in the `connectionHops` parameter of `MsgChannelOpenInit`.
+
+The `ConnectionEnd` is created and set in store via the `InitGenesis` handler of the 03-connection submodule in core IBC.
+The `ConnectionEnd` and its `Counterparty` both reference the `09-localhost` client identifier, and share the localhost connection identifier `connection-localhost`.
+
+```go
+/ CreateSentinelLocalhostConnection creates and sets the sentinel localhost connection end in the IBC store.
+func (k Keeper) CreateSentinelLocalhostConnection(ctx sdk.Context) {
+	counterparty := types.NewCounterparty(exported.LocalhostClientID, exported.LocalhostConnectionID, commitmenttypes.NewMerklePrefix(k.GetCommitmentPrefix().Bytes()))
+	connectionEnd := types.NewConnectionEnd(types.OPEN, exported.LocalhostClientID, counterparty, types.GetCompatibleVersions(), 0)
+	k.SetConnection(ctx, exported.LocalhostConnectionID, connectionEnd)
+}
+```
+
+Note that connection handshakes are disallowed when using the `09-localhost` client type.
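To make the handshake flow above concrete, the sketch below mirrors how a localhost channel handshake is initiated by supplying the sentinel connection identifier in `connectionHops`. The struct is a simplified illustration, not the real `channeltypes.MsgChannelOpenInit`, and the port, version, and signer values are placeholders.

```go
package main

import "fmt"

// msgChannelOpenInit mirrors only the fields of MsgChannelOpenInit that
// matter for a localhost handshake; it is an illustrative struct.
type msgChannelOpenInit struct {
	PortID             string
	Version            string
	ConnectionHops     []string
	CounterpartyPortID string
	Signer             string
}

func main() {
	// Pointing ConnectionHops at the sentinel localhost connection is all
	// that is needed to route the handshake over the 09-localhost client.
	msg := msgChannelOpenInit{
		PortID:             "transfer",
		Version:            "ics20-1",
		ConnectionHops:     []string{"connection-localhost"},
		CounterpartyPortID: "transfer",
		Signer:             "cosmos1...", // placeholder signer address
	}
	fmt.Println(msg.ConnectionHops[0]) // prints: connection-localhost
}
```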
diff --git a/docs/ibc/v10.1.x/light-clients/localhost/integration.mdx b/docs/ibc/v10.1.x/light-clients/localhost/integration.mdx
new file mode 100644
index 00000000..f2faff57
--- /dev/null
+++ b/docs/ibc/v10.1.x/light-clients/localhost/integration.mdx
@@ -0,0 +1,19 @@
+---
+title: Integration
+description: >-
+  The 09-localhost light client module registers codec types within the core IBC
+  module. This differs from other light client module implementations which are
+  expected to register codec types using the AppModuleBasic interface.
+---
+
+The 09-localhost light client module registers codec types within the core IBC module. This differs from other light client module implementations which are expected to register codec types using the `AppModuleBasic` interface.
+
+The localhost client is implicitly enabled by using the `AllowAllClients` wildcard (`"*"`) in the 02-client submodule default value for param [`allowed_clients`](https://github.com/cosmos/ibc-go/blob/v7.0.0/proto/ibc/core/client/v1/client.proto#L102).
+
+```go
+/ DefaultAllowedClients are the default clients for the AllowedClients parameter.
+/ By default it allows all client types.
+var DefaultAllowedClients = []string{AllowAllClients}
+```
diff --git a/docs/ibc/v10.1.x/light-clients/localhost/overview.mdx b/docs/ibc/v10.1.x/light-clients/localhost/overview.mdx
new file mode 100644
index 00000000..25292942
--- /dev/null
+++ b/docs/ibc/v10.1.x/light-clients/localhost/overview.mdx
@@ -0,0 +1,50 @@
+---
+title: Overview
+---
+
+## Synopsis
+
+Learn about the 09-localhost light client module.
+
+The 09-localhost light client module implements a stateless localhost loopback client with the ability to send and
+receive IBC packets to and from the same state machine.
+
+### Context
+
+In a multichain environment, application developers will be accustomed to developing cross-chain applications through IBC.
+From their point of view, whether they are interacting with multiple modules on the same chain or on different
+chains should not matter. The localhost client module enables a unified interface to interact with different
+applications on a single chain, using the familiar IBC application layer semantics.
+
+### Implementation
+
+There exists a localhost light client module which can be invoked with the client identifier `09-localhost`. The light
+client is stateless, so the `ClientState` is constructed on demand when required.
+
+To supplement this, a [sentinel `ConnectionEnd` is stored in core IBC](/docs/ibc/v10.1.x/light-clients/localhost/connection) state with the connection
+identifier `connection-localhost`. This enables IBC applications to create channels directly on top of the sentinel
+connection which leverage the 09-localhost loopback functionality.
+
+[State verification](/docs/ibc/v10.1.x/light-clients/localhost/state-verification) for channel state in handshakes or processing packets is reduced in
+complexity, since the `09-localhost` client can simply compare bytes stored under the standardized key paths.
+
+### Localhost vs _regular_ client
+
+The localhost client aims to provide a unified approach to interacting with applications on a single chain, as the IBC
+application layer provides for cross-chain interactions. To achieve this unified interface though, there are a number of
+differences under the hood compared to a 'regular' IBC client (excluding `06-solomachine` and `09-localhost` itself).
+ +The table below lists some important differences: + +| | Regular client | Localhost | +| -------------------------------------------- | --------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Number of clients | Many instances of a client _type_ corresponding to different counterparties | A single sentinel client with the client identifier `09-localhost` | +| Client creation | Relayer (permissionless) | Implicitly made available by the 02-client submodule in core IBC | +| Client updates | Relayer submits headers using `MsgUpdateClient` | No client updates are required as the localhost implementation is stateless | +| Number of connections | Many connections, 1 (or more) per client | A single sentinel connection with the connection identifier `connection-localhost` | +| Connection creation | Connection handshake, provided underlying client | Sentinel `ConnectionEnd` is created and set in store in the `InitGenesis` handler of the 03-connection submodule in core IBC | +| Counterparty | Underlying client, representing another chain | Client with identifier `09-localhost` in same chain | +| `VerifyMembership` and `VerifyNonMembership` | Performs proof verification using consensus state roots | Performs state verification using key-value lookups in the core IBC store | +| `ClientState` storage | `ClientState` stored and directly provable with `VerifyMembership` | Stateless, so `ClientState` is not provable directly with `VerifyMembership` | diff --git a/docs/ibc/v10.1.x/light-clients/localhost/state-verification.mdx b/docs/ibc/v10.1.x/light-clients/localhost/state-verification.mdx new file mode 100644 index 00000000..f8acda5c --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/localhost/state-verification.mdx @@ -0,0 +1,23 @@ +--- +title: State Verification +description: >- + The localhost client handles state 
verification through the LightClientModule + interface methods VerifyMembership and VerifyNonMembership by performing + read-only operations directly on the core IBC store. +--- + +The localhost client handles state verification through the `LightClientModule` interface methods `VerifyMembership` and `VerifyNonMembership` by performing read-only operations directly on the core IBC store. + +When verifying channel state in handshakes or processing packets the `09-localhost` client can simply compare bytes stored under the standardized key paths defined by [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). + +For existence proofs via `VerifyMembership` the 09-localhost client will retrieve the value stored under the provided key path and compare it against the value provided by the caller. In contrast, non-existence proofs via `VerifyNonMembership` assert the absence of a value at the provided key path. + +Relayers are expected to provide a sentinel proof when sending IBC messages. Submission of nil or empty proofs is disallowed in core IBC messaging. +The 09-localhost light client module defines a `SentinelProof` as a single byte. Localhost client state verification will fail if the sentinel proof value is not provided. + +```go +var SentinelProof = []byte{0x01 +} +``` + +The `ClientState` of `09-localhost` is stateless, so it is not directly provable with `VerifyMembership` or `VerifyNonMembership`. diff --git a/docs/ibc/v10.1.x/light-clients/proposals.mdx b/docs/ibc/v10.1.x/light-clients/proposals.mdx new file mode 100644 index 00000000..2e47a281 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/proposals.mdx @@ -0,0 +1,115 @@ +--- +title: Governance Proposals +--- + +In uncommon situations, a highly valued client may become frozen or expire due to uncontrollable +circumstances. A highly valued client might have hundreds of channels being actively used. 
+Some of those channels might have a significant amount of locked tokens used for ICS 20. + +## Frozen Light Clients + +If one third of the validator set of the chain the client represents decides to collude, +they can sign off on two valid but conflicting headers each signed by the other one third +of the honest validator set. The light client can now be updated with two valid, but conflicting +headers at the same height. The light client cannot know which header is trustworthy, and therefore +evidence of such misbehaviour is likely to be submitted, resulting in a frozen light client. + +Frozen light clients cannot be updated under any circumstance except via a governance proposal. +Since a quorum of validators can sign arbitrary state roots which may not be valid executions +of the state machine, a governance proposal has been added to ease the complexity of unfreezing +or updating clients which have become "stuck". Without this mechanism, validator sets would need +to construct a state root to unfreeze the client. Unfreezing clients re-enables all of the channels +built upon that client. This may result in recovery of otherwise lost funds. + +## Expired Light Clients + +Tendermint light clients may become expired if the trusting period has passed since their +last update. This may occur if relayers stop submitting headers to update the clients. + +An unplanned upgrade by the counterparty chain may also result in expired clients. If the counterparty +chain undergoes an unplanned upgrade, there may be no commitment to that upgrade signed by the validator +set before the chain ID changes. In this situation, the validator set of the last valid update for the +light client is never expected to produce another valid header since the chain ID has changed, which will +ultimately lead the on-chain light client to become expired.
+ +# How to recover an expired client with a governance proposal + +> **Who is this information for?** +> Although technically anyone can submit the governance proposal to recover an expired client, often it will be **relayer operators** (at least coordinating the submission). + +In the case that a highly valued light client is frozen, expired, or rendered non-updateable, a +governance proposal may be submitted to update this client, known as the subject client. The +proposal includes the client identifier for the subject and the client identifier for a substitute +client. Light client implementations may implement custom updating logic, but in most cases, +the subject will be updated to the latest consensus state of the substitute client, if the proposal passes. +The substitute client is used as a "stand in" while the subject is on trial. It is best practice to create +a substitute client *after* the subject has become frozen to prevent the substitute from also becoming frozen. +An active substitute client allows headers to be submitted during the voting period to prevent accidental expiry +once the proposal passes. + +See also the relevant documentation: ADR-026, IBC client recovery mechanisms + +## Preconditions + +* There exists an active client (with a known client identifier) for the same counterparty chain as the expired client. +* The governance deposit. + +## Steps + +### Step 1 + +Check if the client is attached to the expected `chain_id`. For example, for an expired Tendermint client representing the Akash chain the client state looks like this on querying the client state: + +```text +{ + client_id: 07-tendermint-146 + client_state: + '@type': /ibc.lightclients.tendermint.v1.ClientState + allow_update_after_expiry: true + allow_update_after_misbehaviour: true + chain_id: akashnet-2 +} +``` + +The client is attached to the expected Akash `chain_id`.
Note that although the parameters (`allow_update_after_expiry` and `allow_update_after_misbehaviour`) exist to signal intent, these parameters have been deprecated and will not enforce any checks on the revival of the client. See ADR-026 for more context on this deprecation. + +### Step 2 + +Anyone can submit the governance proposal to recover the client by executing the following via CLI. +If the chain is on an ibc-go version older than v8, please see the [relevant documentation](https://ibc.cosmos.network/v7/ibc/proposals). + +* From ibc-go v8 onwards + + ```shell + tx gov submit-proposal [path-to-proposal-json] + ``` + + where `proposal.json` contains: + + ```json expandable + { + "messages": [ + { + "@type": "/ibc.core.client.v1.MsgRecoverClient", + "subject_client_id": "", + "substitute_client_id": "", + "signer": "" + } + ], + "metadata": "", + "deposit": "10stake", + "title": "My proposal", + "summary": "A short summary of my proposal", + "expedited": false + } + ``` + +The `` identifier is the proposed client to be updated. This client must be either frozen or expired. + +The `` represents a substitute client. It carries all the state for the client which may be updated. It must have identical client and chain parameters to the client which may be updated (except for latest height, frozen height, and chain ID). It should be continually updated during the voting period. + +After this, all that remains is deciding who funds the governance deposit and ensuring the governance proposal passes. If it does, the client on trial will be updated to the latest state of the substitute. + +## Important considerations + +Please note that if the counterparty client is also expired, that client will also need to update. This process updates only one client.
diff --git a/docs/ibc/v10.1.x/light-clients/solomachine/concepts.mdx b/docs/ibc/v10.1.x/light-clients/solomachine/concepts.mdx new file mode 100644 index 00000000..1026dbf9 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/solomachine/concepts.mdx @@ -0,0 +1,166 @@ +--- +title: Concepts +description: >- + The ClientState for a solo machine light client stores the latest sequence, + the frozen sequence, the latest consensus state, and a client flag indicating + if the client should be allowed to be updated after a governance proposal. +--- + +## Client State + +The `ClientState` for a solo machine light client stores the latest sequence, the frozen sequence, +the latest consensus state, and a client flag indicating if the client should be allowed to be updated +after a governance proposal. + +If the client is not frozen then the frozen sequence is 0. + +## Consensus State + +The consensus state stores the public key, diversifier, and timestamp of the solo machine light client. + +The diversifier is used to prevent accidental misbehaviour if the same public key is used across +different chains with the same client identifier. It should be unique to the chain the light client +is used on. + +## Public Key + +The public key can be a single public key or a multi-signature public key. The public key type used +must fulfill the tendermint public key interface (this will become the SDK public key interface in the +near future). The public key must be registered on the application codec, otherwise encoding/decoding +errors will arise. The public key stored in the consensus state is represented as a protobuf `Any`. +This allows for flexibility in what other public key types can be supported in the future. + +## Counterparty Verification + +The solo machine light client can verify counterparty client state, consensus state, connection state, +channel state, packet commitments, packet acknowledgements, packet receipt absence, +and the next sequence receive.
At the end of each successful verification call the light +client sequence number will be incremented. + +Successful verification requires the current public key to sign over the proof. + +## Proofs + +A solo machine proof should verify that the solomachine public key signed +over some specified data. The format for generating marshaled proofs for +the SDK's implementation of solo machine is as follows: + +1. Construct the data using the associated protobuf definition and marshal it. + +For example: + +```go +data := &ClientStateData{ + Path: []byte(path.String()), + ClientState: protoAny, +} + +dataBz, err := cdc.Marshal(data) +``` + +The helper functions `...DataBytes()` in [proof.go](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine/proof.go) handle this +functionality. + +2. Construct the `SignBytes` and marshal it. + +For example: + +```go +signBytes := &SignBytes{ + Sequence: sequence, + Timestamp: timestamp, + Diversifier: diversifier, + DataType: CLIENT, + Data: dataBz, +} + +signBz, err := cdc.Marshal(signBytes) +``` + +The helper functions `...SignBytes()` in [proof.go](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine/proof.go) handle this functionality. +The `DataType` field is used to disambiguate what type of data was signed to prevent potential +proto encoding overlap. + +3. Sign the sign bytes. Embed the signatures into either `SingleSignatureData` or `MultiSignatureData`. + Convert the `SignatureData` to proto and marshal it. + +For example: + +```go +sig, err := key.Sign(signBz) + sigData := &signing.SingleSignatureData{ + Signature: sig, +} + protoSigData := signing.SignatureDataToProto(sigData) + +bz, err := cdc.Marshal(protoSigData) +``` + +4. Construct a `TimestampedSignatureData` and marshal it. The marshaled result can be passed in + as the proof parameter to the verification functions. 
+ +For example: + +```go +timestampedSignatureData := &solomachine.TimestampedSignatureData{ + SignatureData: sigData, + Timestamp: solomachine.Time, +} + +proof, err := cdc.Marshal(timestampedSignatureData) +``` + +NOTE: At the end of this process, the sequence associated with the key needs to be updated. +The sequence must be incremented each time proof is generated. + +## Updates By Header + +An update by a header will only succeed if: + +* the header provided is parseable to solo machine header +* the header sequence matches the current sequence +* the header timestamp is greater than or equal to the consensus state timestamp +* the currently registered public key generated the proof + +If the update is successful: + +* the public key is updated +* the diversifier is updated +* the timestamp is updated +* the sequence is incremented by 1 +* the new consensus state is set in the client state + +## Updates By Proposal + +An update by a governance proposal will only succeed if: + +* the substitute provided is parseable to solo machine client state +* the new consensus state public key does not equal the current consensus state public key + +If the update is successful: + +* the subject client state is updated to the substitute client state +* the subject consensus state is updated to the substitute consensus state +* the client is unfrozen (if it was previously frozen) + +NOTE: Previously, `AllowUpdateAfterProposal` was used to signal the update/recovery options for the solo machine client. However, this has now been deprecated because a code migration can overwrite the client and consensus states regardless of the value of this parameter. If governance would vote to overwrite a client or consensus state, it is likely that governance would also be willing to perform a code migration to do the same. 
+ +## Misbehaviour + +Misbehaviour handling will only succeed if: + +* the misbehaviour provided is parseable to solo machine misbehaviour +* the client is not already frozen +* the current public key signed over two unique data messages at the same sequence and diversifier. + +If the misbehaviour is successfully processed: + +* the client is frozen by setting the frozen sequence to the misbehaviour sequence + +NOTE: Misbehaviour processing is data processing order dependent. A misbehaving solo machine +could update to a new public key to prevent being frozen before misbehaviour is submitted. + +## Upgrades + +Upgrades to solo machine light clients are not supported since an entirely different type of +public key can be set using normal client updates. diff --git a/docs/ibc/v10.1.x/light-clients/solomachine/solomachine.mdx b/docs/ibc/v10.1.x/light-clients/solomachine/solomachine.mdx new file mode 100644 index 00000000..7d415109 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/solomachine/solomachine.mdx @@ -0,0 +1,23 @@ +--- +title: Solomachine +description: >- + This paper defines the implementation of the ICS06 protocol on the Cosmos SDK. + For the general specification please refer to the ICS06 Specification. +--- + +## Abstract + +This paper defines the implementation of the ICS06 protocol on the Cosmos SDK. For the general +specification please refer to the [ICS06 Specification](https://github.com/cosmos/ibc/tree/master/spec/client/ics-006-solo-machine-client). + +This implementation of a solo machine light client supports single and multi-signature public +keys. The client is capable of handling public key updates by header and governance proposals. +The light client is capable of processing client misbehaviour. Proofs of the counterparty state +are generated by the solo machine client by signing over the desired state with a certain sequence, +diversifier, and timestamp. + +## Contents + +1. 
**[Concepts](/docs/ibc/v10.1.x/light-clients/solomachine/concepts)** +2. **[State](/docs/ibc/v10.1.x/light-clients/solomachine/state)** +3. **[State Transitions](/docs/ibc/v10.1.x/light-clients/solomachine/state_transitions)** diff --git a/docs/ibc/v10.1.x/light-clients/solomachine/state.mdx b/docs/ibc/v10.1.x/light-clients/solomachine/state.mdx new file mode 100644 index 00000000..7827215a --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/solomachine/state.mdx @@ -0,0 +1,10 @@ +--- +title: State +description: >- + The solo machine light client will only store consensus states for each update + by a header or a governance proposal. The latest client state is also + maintained in the store. +--- + +The solo machine light client will only store consensus states for each update by a header +or a governance proposal. The latest client state is also maintained in the store. diff --git a/docs/ibc/v10.1.x/light-clients/solomachine/state_transitions.mdx b/docs/ibc/v10.1.x/light-clients/solomachine/state_transitions.mdx new file mode 100644 index 00000000..e7a09a62 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/solomachine/state_transitions.mdx @@ -0,0 +1,38 @@ +--- +title: State Transitions +description: 'Successful state verification by a solo machine light client will result in:' +--- + +## Client State Verification Functions + +Successful state verification by a solo machine light client will result in: + +* the sequence being incremented by 1. + +## Update By Header + +A successful update of a solo machine light client by a header will result in: + +* the public key being updated to the new public key provided by the header. +* the diversifier being updated to the new diversifier provided by the header. +* the timestamp being updated to the new timestamp provided by the header.
+* the sequence being incremented by 1 +* the consensus state being updated (consensus state stores the public key, diversifier, and timestamp) + +## Update By Governance Proposal + +A successful update of a solo machine light client by a governance proposal will result in: + +* the client state being updated to the substitute client state +* the consensus state being updated to the substitute consensus state (consensus state stores the public key, diversifier, and timestamp) +* the frozen sequence being set to zero (client is unfrozen if it was previously frozen). + +## Upgrade + +Client upgrades are not supported for the solo machine light client. No state transition occurs. + +## Misbehaviour + +Successful misbehaviour processing of a solo machine light client will result in: + +* the frozen sequence being set to the sequence the misbehaviour occurred at diff --git a/docs/ibc/v10.1.x/light-clients/tendermint/overview.mdx b/docs/ibc/v10.1.x/light-clients/tendermint/overview.mdx new file mode 100644 index 00000000..13845e9d --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/tendermint/overview.mdx @@ -0,0 +1,167 @@ +--- +title: Overview +--- + +## Overview + +## Synopsis + +Learn about the 07-tendermint light client module. + +The Tendermint client is the first and most widely deployed light client in IBC. It implements the IBC [light client module interface](https://github.com/cosmos/ibc-go/blob/v9.0.0-beta.1/modules/core/exported/client.go#L41-L123) to track a counterparty running [CometBFT](https://github.com/cometbft/cometbft) consensus. + + + Tendermint is the old name of CometBFT which has been retained in IBC to avoid + expensive migration costs. + + +The Tendermint client consists of two important structs that keep track of the state of the counterparty chain and allow for future updates. The `ClientState` struct contains all the parameters necessary for CometBFT header verification.
The `ConsensusState`, on the other hand, is a compressed view of a particular header of the counterparty chain. Unlike off-chain light clients, IBC does not store the full header. Instead, it stores only the information it needs to prove verification of key/value pairs in the counterparty state (i.e. the header `AppHash`), and the information necessary to use the consensus state as the next root of trust to add a new consensus state to the client (i.e. the header `NextValidatorsHash` and `Timestamp`). The relayer provides the full trusted header on `UpdateClient`, which will get checked against the compressed root-of-trust consensus state. If the trusted header matches a previous consensus state, and the trusted header and new header pass the CometBFT light client update algorithm, then the new header is compressed into a consensus state and added to the IBC client. + +Each Tendermint client is composed of a single `ClientState` keyed on the client ID, and multiple consensus states which are keyed on both the client ID and header height. Relayers can use the consensus states to verify merkle proofs of packet commitments, acknowledgements, and receipts against the `AppHash` of the counterparty chain in order to enable verified packet flow.
+ +## Initialization + +The Tendermint light client is initialized with a `ClientState` that contains parameters necessary for CometBFT header verification along with a latest height and `ConsensusState` that encapsulates the application state root of a trusted header that will serve to verify future incoming headers from the counterparty. + +```proto expandable +message ClientState { + / human readable chain-id that will be included in header + / and signed over by the validator set + string chain_id = 1; + / trust level is the fraction of the trusted validator set + / that must sign over a new untrusted header before it is accepted + / it can be a minimum of 1/3 and a maximum of 2/3 + / Note these are the bounds of liveness. 1/3 is the minimum + / honest stake needed to maintain liveness on a chain, + / requiring more than 2/3 to sign over the new header would + / break the BFT threshold of allowing 1/3 malicious validators + Fraction trust_level = 2; + / duration of the period since the LatestTimestamp during which the + / submitted headers are valid for update + google.protobuf.Duration trusting_period = 3; + / duration of the staking unbonding period + google.protobuf.Duration unbonding_period = 4; + / defines how much new (untrusted) header's Time can drift + / into the future relative to our local clock. + google.protobuf.Duration max_clock_drift = 5; + + / Block height when the client was frozen due to a misbehaviour + ibc.core.client.v1.Height frozen_height = 6; + / Latest height the client was updated to + ibc.core.client.v1.Height latest_height = 7; + + / Proof specifications used in verifying counterparty state + repeated cosmos.ics23.v1.ProofSpec proof_specs = 8; + + / Path at which next upgraded client will be committed. + / Each element corresponds to the key for a single CommitmentProof in the + / chained proof. 
NOTE: ClientState must stored under + / `{upgradePath}/{upgradeHeight}/clientState` ConsensusState must be stored + / under `{upgradepath}/{upgradeHeight}/consensusState` For SDK chains using + / the default upgrade module, upgrade_path should be []string{"upgrade", + / "upgradedIBCState"}` + repeated string upgrade_path = 9; +} +``` + +```proto expandable +message ConsensusState { + / timestamp that corresponds to the block height in which the ConsensusState + / was stored. + google.protobuf.Timestamp timestamp = 1; + / commitment root (i.e app hash) that will be used + / to verify proofs of packet flow messages + ibc.core.commitment.v1.MerkleRoot root = 2; + / hash of the next validator set that will be used as + / a new updated source of trust to verify future updates + bytes next_validators_hash = 3; +} +``` + +## Updates + +Once the initial client state and consensus state are submitted, future consensus states can be added to the client by submitting IBC [headers](https://github.com/cosmos/ibc-go/blob/v9.0.0-beta.1/proto/ibc/lightclients/tendermint/v1/tendermint.proto#L76-L94). These headers contain all necessary information to run the CometBFT light client protocol. + +```proto expandable +message Header { + / this is the new signed header that we want to add + / as a new consensus state to the ibc client. 
+ / the signed header contains the commit signatures of the `validator_set` below + .tendermint.types.SignedHeader signed_header = 1; + + / the validator set which signed the new header + .tendermint.types.ValidatorSet validator_set = 2; + / the trusted height of the consensus state which we are updating from + ibc.core.client.v1.Height trusted_height = 3; + / the trusted validator set, the hash of the trusted validators must be equal to + / `next_validators_hash` of the current consensus state + .tendermint.types.ValidatorSet trusted_validators = 4; +} +``` + +For detailed information on the CometBFT light client protocol and its safety properties please refer to the [original Tendermint whitepaper](https://arxiv.org/abs/1807.04938). + +## Proofs + +As consensus states are added to the client, they can be used for proof verification by relayers wishing to prove packet flow messages against a particular height on the counterparty. This uses the `VerifyMembership` and `VerifyNonMembership` methods on the Tendermint client. + +```go expandable +/ VerifyMembership is a generic proof verification method +/which verifies a proof of the existence of a value at a +/ given CommitmentPath at the specified height. The caller +/ is expected to construct the full CommitmentPath from a +/ CommitmentPrefix and a standardized path (as defined in ICS 24). +VerifyMembership( + ctx sdk.Context, + clientID string, + height Height, + delayTimePeriod uint64, + delayBlockPeriod uint64, + proof []byte, + path Path, + value []byte, +) + +error + +/ VerifyNonMembership is a generic proof verification method +/ which verifies the absence of a given CommitmentPath at a +/ specified height. The caller is expected to construct the +/ full CommitmentPath from a CommitmentPrefix and a standardized +/ path (as defined in ICS 24). 
+VerifyNonMembership( + ctx sdk.Context, + clientID string, + height Height, + delayTimePeriod uint64, + delayBlockPeriod uint64, + proof []byte, + path Path, +) + +error +``` + +The Tendermint client is initialized with an ICS23 proof spec. This allows the Tendermint implementation to support many different merkle tree structures so long as they can be represented in an [`ics23.ProofSpec`](https://github.com/cosmos/ics23/blob/go/v0.10.0/proto/cosmos/ics23/v1/proofs.proto#L145-L170). + +## Misbehaviour + +The Tendermint light client directly tracks consensus of a CometBFT counterparty chain. So long as the counterparty is Byzantine Fault Tolerant, that is to say, the malicious subset of the bonded validators does not exceed the trust level of the client, then the client is secure. + +In case the malicious subset of the validators exceeds the trust level of the client, then the client can be deceived into accepting invalid blocks and the connection is no longer secure. + +The Tendermint client has some mitigations in place to prevent this. If there are two valid blocks signed by the counterparty validator set at the same height \[e.g. a valid block signed by an honest subset and an invalid block signed by a malicious one], then these conflicting headers can be submitted to the client as [misbehaviour](https://github.com/cosmos/ibc-go/blob/v9.0.0-beta.1/proto/ibc/lightclients/tendermint/v1/tendermint.proto#L65-L74). The client will verify the headers and freeze the client; preventing any future updates and proof verification from succeeding. This effectively halts communication with the compromised counterparty while out-of-band social consensus can unwind any damage done. + +Similarly, if the timestamps of the headers are not monotonically increasing, this can also be evidence of malicious behaviour and cause the client to freeze. 
+ +Thus, any consensus faults that are detectable by a light client are part of the misbehaviour protocol and can be used to minimize the damage caused by a compromised counterparty chain. + +### Security model + +It is important to note that IBC is not a completely trustless protocol; it is **trust-minimized**. This means that the safety property of bilateral IBC communication between two chains is dependent on the safety properties of the two chains in question. If one of the chains is compromised completely, then the IBC connection to the other chain is liable to receive invalid packets from the malicious chain. For example, if a malicious validator set has taken over more than 2/3 of the validator power on a chain; that malicious validator set can create a single chain of blocks with arbitrary commitment roots and arbitrary commitments to the next validator set. This would seize complete control of the chain and prevent the honest subset from even being able to create a competing honest block. + +In this case, there is no ability for the IBC Tendermint client solely tracking CometBFT consensus to detect the misbehaviour and freeze the client. The IBC protocol would require out-of-band mechanisms to detect and fix such an egregious safety fault on the counterparty chain. Since the Tendermint light client is only tracking consensus and not also verifying the validity of state transitions, malicious behaviour from a validator set that is beyond the BFT fault threshold is an accepted risk of this light client implementation. + +The IBC protocol has principles of fault isolation (e.g. all tokens are prefixed by their channel, so tokens from different chains are not mutually fungible) and fault mitigation (e.g. ability to freeze the client if misbehaviour can be detected before complete malicious takeover) that make this risk as minimal as possible. 
diff --git a/docs/ibc/v10.1.x/light-clients/wasm/client.mdx b/docs/ibc/v10.1.x/light-clients/wasm/client.mdx new file mode 100644 index 00000000..cdb0d80a --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/wasm/client.mdx @@ -0,0 +1,149 @@ +--- +title: Client +description: >- + A user can query and interact with the 08-wasm module using the CLI. Use the + --help flag to discover the available commands: +--- + +## CLI + +A user can query and interact with the `08-wasm` module using the CLI. Use the `--help` flag to discover the available commands: + +### Transactions + +The `tx` commands allow users to interact with the `08-wasm` submodule. + +```shell +simd tx ibc-wasm --help +``` + +#### `store-code` + +The `store-code` command allows users to submit a governance proposal with a `MsgStoreCode` to store the byte code of a Wasm light client contract. + +```shell +simd tx ibc-wasm store-code [path/to/wasm-file] [flags] +``` + +`path/to/wasm-file` is the path to the `.wasm` or `.wasm.gz` file. + +#### `migrate-contract` + +The `migrate-contract` command allows users to broadcast a transaction with a `MsgMigrateContract` to migrate the contract for a given light client to a new byte code denoted by the given checksum. + +```shell +simd tx ibc-wasm migrate-contract [client-id] [checksum] [migrate-msg] +``` + +The migrate message must not be empty and is expected to be a JSON-encoded string. + +### Query + +The `query` commands allow users to query `08-wasm` state. + +```shell +simd query ibc-wasm --help +``` + +#### `checksums` + +The `checksums` command allows users to query the list of checksums of Wasm light client contracts stored in the Wasm VM via the `MsgStoreCode`. The checksums are hex-encoded.
+ +```shell +simd query ibc-wasm checksums [flags] +``` + +Example: + +```shell +simd query ibc-wasm checksums +``` + +Example Output: + +```shell +checksums: +- c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64 +pagination: + next_key: null + total: "1" +``` + +#### `code` + +The `code` command allows users to query the Wasm byte code of a light client contract given the provided input checksum. + +```shell +simd query ibc-wasm code [checksum] [flags] +``` + +Example: + +```shell +simd query ibc-wasm code c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64 +``` + +Example Output: + +```shell +code: AGFzb...AqBBE= +``` + +## gRPC + +A user can query the `08-wasm` module using gRPC endpoints. + +### `Checksums` + +The `Checksums` endpoint allows users to query the list of checksums of Wasm light client contracts stored in the Wasm VM via the `MsgStoreCode`. + +```shell +ibc.lightclients.wasm.v1.Query/Checksums +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{}' \ + localhost:9090 \ + ibc.lightclients.wasm.v1.Query/Checksums +``` + +Example output: + +```shell +{ + "checksums": [ + "c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64" + ], + "pagination": { + "total": "1" + } +} +``` + +### `Code` + +The `Code` endpoint allows users to query the Wasm byte code of a light client contract given the provided input checksum.
+ +```shell +ibc.lightclients.wasm.v1.Query/Code +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"checksum":"c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64"}' \ + localhost:9090 \ + ibc.lightclients.wasm.v1.Query/Code +``` + +Example output: + +```shell +{ + "code": "AGFzb...AqBBE=" +} +``` diff --git a/docs/ibc/v10.1.x/light-clients/wasm/concepts.mdx b/docs/ibc/v10.1.x/light-clients/wasm/concepts.mdx new file mode 100644 index 00000000..f7e260d1 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/wasm/concepts.mdx @@ -0,0 +1,90 @@ +--- +title: Concepts +description: >- + Learn about the differences between a proxy light client and a Wasm light + client. +--- + +Learn about the differences between a proxy light client and a Wasm light client. + +## Proxy light client + +The `08-wasm` module is not a regular light client in the same sense as, for example, the 07-tendermint light client. `08-wasm` is instead a _proxy_ light client module, and this means that the module acts as a proxy to the actual implementations of light clients. The module acts as a wrapper for the actual light clients uploaded as Wasm byte code and delegates all operations to them (i.e. `08-wasm` just passes through the requests to the Wasm light clients). Still, the `08-wasm` module implements all the required interfaces necessary to integrate with core IBC, so that 02-client can call into it as it would for any other light client module. These interfaces are `LightClientModule`, `ClientState`, `ConsensusState` and `ClientMessage`, and we will describe them in the context of `08-wasm` in the following sections. For more information about this set of interfaces, please read section [Overview of the light client module developer guide](/docs/ibc/v10.1.x/light-clients/developer-guide/overview#overview). + +### `LightClientModule` + +The `08-wasm`'s `LightClientModule` data structure contains two fields: + +- `keeper` is the `08-wasm` module keeper.
+- `storeProvider` encapsulates the IBC core store key and provides access to isolated prefix stores for each client so they can read/write in separate namespaces. + +```go +type LightClientModule struct { + keeper wasmkeeper.Keeper + storeProvider exported.ClientStoreProvider +} +``` + +See section [`LightClientModule` of the light client module developer guide](/docs/ibc/v10.1.x/light-clients/developer-guide/overview#lightclientmodule) for more information about the `LightClientModule` interface. + +### `ClientState` + +The `08-wasm`'s `ClientState` data structure contains three fields: + +- `Data` contains the bytes of the Protobuf-encoded client state of the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes for a [GRANDPA client state](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L35-L60). +- `Checksum` is the sha256 hash of the Wasm contract's byte code. This hash is used as an identifier to call the right contract. +- `LatestHeight` is the latest height of the counterparty state machine (i.e. the height of the blockchain), whose consensus state the light client tracks. + +```go +type ClientState struct { + / bytes encoding the client state of the underlying + / light client implemented as a Wasm contract + Data []byte + / sha256 hash of Wasm contract byte code + Checksum []byte + / latest height of the counterparty ledger + LatestHeight types.Height +} +``` + +See section [`ClientState` of the light client module developer guide](/docs/ibc/v10.1.x/light-clients/developer-guide/overview#clientstate) for more information about the `ClientState` interface. 
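As an illustrative aside (this helper is not part of ibc-go), the `Checksum` field is simply the SHA-256 digest of the raw Wasm byte code, the same value that is shown hex-encoded in the `checksums` CLI query output:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// checksumOf derives the identifier that 08-wasm uses to route calls to a
// stored contract: the SHA-256 hash of the raw Wasm byte code.
func checksumOf(wasmByteCode []byte) string {
	sum := sha256.Sum256(wasmByteCode)
	return hex.EncodeToString(sum[:])
}

func main() {
	// 8-byte header of an empty Wasm module, standing in for real byte code.
	code := []byte("\x00asm\x01\x00\x00\x00")
	checksum := checksumOf(code)
	fmt.Println(len(checksum)) // 64: a hex-encoded 32-byte digest
	fmt.Println(checksum)
}
```

Because the checksum is derived from the byte code alone, two chains that store the same contract will compute the same identifier.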
+ +### `ConsensusState` + +The `08-wasm`'s `ConsensusState` data structure maintains one field: + +- `Data` contains the bytes of the Protobuf-encoded consensus state of the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes for a [GRANDPA consensus state](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L87-L94). + +```go +type ConsensusState struct { + / bytes encoding the consensus state of the underlying light client + / implemented as a Wasm contract. + Data []byte +} +``` + +See section [`ConsensusState` of the light client module developer guide](/docs/ibc/v10.1.x/light-clients/developer-guide/overview#consensusstate) for more information about the `ConsensusState` interface. + +### `ClientMessage` + +`ClientMessage` is used for performing updates to a `ClientState` stored on chain. The `08-wasm`'s `ClientMessage` data structure maintains one field: + +- `Data` contains the bytes of the Protobuf-encoded header(s) or misbehaviour for the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes of either [header](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L96-L104) or [misbehaviour](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L106-L112) for a GRANDPA light client. + +```go +type ClientMessage struct { + / bytes encoding the header(s) or misbehaviour for the underlying light client + / implemented as a Wasm contract.
+ Data []byte +} +``` + +See section [`ClientMessage` of the light client module developer guide](/docs/ibc/v10.1.x/light-clients/developer-guide/overview#clientmessage) for more information about the `ClientMessage` interface. + +## Wasm light client + +The actual light client can be implemented in any language that compiles to Wasm and implements the interfaces of a [CosmWasm](https://docs.cosmwasm.com/docs/) contract. Even though in theory other languages could be used, in practice (at least for the time being) the most suitable language to use would be Rust, since there is already good support for it for developing CosmWasm smart contracts. + +At the moment of writing there are two contracts available: one for [Tendermint](https://github.com/ComposableFi/composable-ibc/tree/master/light-clients/ics07-tendermint-cw) and one for [GRANDPA](https://github.com/ComposableFi/composable-ibc/tree/master/light-clients/ics10-grandpa-cw) (which is being used in production in [Composable Finance's Centauri bridge](https://github.com/ComposableFi/composable-ibc)). Others are in development (e.g. for Near). diff --git a/docs/ibc/v10.1.x/light-clients/wasm/contracts.mdx b/docs/ibc/v10.1.x/light-clients/wasm/contracts.mdx new file mode 100644 index 00000000..a9af3d7d --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/wasm/contracts.mdx @@ -0,0 +1,108 @@ +--- +title: Contracts +description: >- + Learn about the expected behaviour of Wasm light client contracts and the + interface between them and 08-wasm. +--- + +Learn about the expected behaviour of Wasm light client contracts and the interface between them and `08-wasm`. + +## API + +The `08-wasm` light client proxy performs calls to the Wasm light client via the Wasm VM. The calls require as input JSON-encoded payload messages that fall in the three categories described in the next sections. + +## `InstantiateMessage` + +This is the message sent to the contract's `instantiate` entry point.
It contains the bytes of the protobuf-encoded client and consensus states of the underlying light client, both provided in [`MsgCreateClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L40-L52). Please note that the bytes contained within the JSON message are represented as base64-encoded strings. + +```go +type InstantiateMessage struct { + ClientState []byte `json:"client_state"` + ConsensusState []byte `json:"consensus_state"` + Checksum []byte `json:"checksum"` +} +``` + +The Wasm light client contract is expected to store the client and consensus state in the corresponding keys of the client-prefixed store. + +## `QueryMsg` + +`QueryMsg` acts as a discriminated union type that is used to encode the messages that are sent to the contract's `query` entry point. Only one of the fields of the type should be set at a time, so that the other fields are omitted in the encoded JSON and the payload can be correctly translated to the corresponding element of the enumeration in Rust. + +```go +type QueryMsg struct { + Status *StatusMsg `json:"status,omitempty"` + TimestampAtHeight *TimestampAtHeightMsg `json:"timestamp_at_height,omitempty"` + VerifyClientMessage *VerifyClientMessageMsg `json:"verify_client_message,omitempty"` + CheckForMisbehaviour *CheckForMisbehaviourMsg `json:"check_for_misbehaviour,omitempty"` +} +``` + +```rust +#[cw_serde] +pub enum QueryMsg { + Status(StatusMsg), + TimestampAtHeight(TimestampAtHeightMsg), + VerifyClientMessage(VerifyClientMessageRaw), + CheckForMisbehaviour(CheckForMisbehaviourMsgRaw), +} +``` + +To learn what is expected from the Wasm light client contract when processing each message, please read the corresponding section of the [Light client developer guide](/docs/ibc/v10.1.x/light-clients/developer-guide/overview): + +- For `StatusMsg`, see the section [`Status` method](/docs/ibc/v10.1.x/light-clients/developer-guide/client-state#status-method).
+- For `TimestampAtHeightMsg`, see the section [`GetTimestampAtHeight` method](/docs/ibc/v10.1.x/light-clients/developer-guide/client-state#gettimestampatheight-method). +- For `VerifyClientMessageMsg`, see the section [`VerifyClientMessage`](/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour#verifyclientmessage). +- For `CheckForMisbehaviourMsg`, see the section [`CheckForMisbehaviour` method](/docs/ibc/v10.1.x/light-clients/developer-guide/client-state#checkformisbehaviour-method). + +## `SudoMsg` + +`SudoMsg` acts as a discriminated union type that is used to encode the messages that are sent to the contract's `sudo` entry point. Only one of the fields of the type should be set at a time, so that the other fields are omitted in the encoded JSON and the payload can be correctly translated to the corresponding element of the enumeration in Rust. + +The `sudo` entry point is able to perform state-changing writes in the client-prefixed store. + +```go +type SudoMsg struct { + UpdateState *UpdateStateMsg `json:"update_state,omitempty"` + UpdateStateOnMisbehaviour *UpdateStateOnMisbehaviourMsg `json:"update_state_on_misbehaviour,omitempty"` + VerifyUpgradeAndUpdateState *VerifyUpgradeAndUpdateStateMsg `json:"verify_upgrade_and_update_state,omitempty"` + VerifyMembership *VerifyMembershipMsg `json:"verify_membership,omitempty"` + VerifyNonMembership *VerifyNonMembershipMsg `json:"verify_non_membership,omitempty"` + MigrateClientStore *MigrateClientStoreMsg `json:"migrate_client_store,omitempty"` +} +``` + +```rust +#[cw_serde] +pub enum SudoMsg { + UpdateState(UpdateStateMsgRaw), + UpdateStateOnMisbehaviour(UpdateStateOnMisbehaviourMsgRaw), + VerifyUpgradeAndUpdateState(VerifyUpgradeAndUpdateStateMsgRaw), + VerifyMembership(VerifyMembershipMsgRaw), + VerifyNonMembership(VerifyNonMembershipMsgRaw), + MigrateClientStore(MigrateClientStoreMsgRaw), +} +``` + +To learn what is expected from the Wasm light client contract when processing each
message, please read the corresponding section of the [Light client developer guide](/docs/ibc/v10.1.x/light-clients/developer-guide/overview): + +- For `UpdateStateMsg`, see the section [`UpdateState`](/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour#updatestate). +- For `UpdateStateOnMisbehaviourMsg`, see the section [`UpdateStateOnMisbehaviour`](/docs/ibc/v10.1.x/light-clients/developer-guide/updates-and-misbehaviour#updatestateonmisbehaviour). +- For `VerifyUpgradeAndUpdateStateMsg`, see the section [Implementing `VerifyUpgradeAndUpdateState`](/docs/ibc/v10.1.x/light-clients/developer-guide/upgrades#implementing-verifyupgradeandupdatestate). +- For `VerifyMembershipMsg`, see the section [`VerifyMembership` method](/docs/ibc/v10.1.x/light-clients/developer-guide/client-state#verifymembership-method). +- For `VerifyNonMembershipMsg`, see the section [`VerifyNonMembership` method](/docs/ibc/v10.1.x/light-clients/developer-guide/client-state#verifynonmembership-method). +- For `MigrateClientStoreMsg`, see the section [Implementing `CheckSubstituteAndUpdateState`](/docs/ibc/v10.1.x/light-clients/developer-guide/proposals#implementing-checksubstituteandupdatestate). + +### Migration + +The `08-wasm` proxy light client exposes the `MigrateContract` RPC endpoint that can be used to migrate a given Wasm light client contract (specified by the client identifier) to a new Wasm byte code (specified by the hash of the byte code). The expected use case for this RPC endpoint is to enable contracts to migrate to new byte code in case the current byte code is found to have a bug or vulnerability. The Wasm byte code that contracts are migrated to has to be uploaded beforehand using `MsgStoreCode` and must implement the `migrate` entry point. See section [`MsgMigrateContract`](/docs/ibc/v10.1.x/apps/interchain-accounts/messages#msgmigratecontract) for information about the request message for this RPC endpoint.
+ +## Expected behaviour + +The `08-wasm` proxy light client module expects the following behaviour from the Wasm light client contracts when executing messages that perform state-changing writes: + +- The contract must not delete the client state from the store. +- The contract must not change the client state to a client state of another type. +- The contract must not change the checksum in the client state. + +Any violation of these rules will result in an error returned from `08-wasm` that will abort the transaction. diff --git a/docs/ibc/v10.1.x/light-clients/wasm/events.mdx b/docs/ibc/v10.1.x/light-clients/wasm/events.mdx new file mode 100644 index 00000000..66223443 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/wasm/events.mdx @@ -0,0 +1,22 @@ +--- +title: Events +description: 'The 08-wasm module emits the following events:' +--- + +The `08-wasm` module emits the following events: + +## `MsgStoreCode` + +| Type | Attribute Key | Attribute Value | +| ----------------- | -------------- | ---------------------- | +| store\_wasm\_code | wasm\_checksum | `{hex.Encode(checksum)}` | +| message | module | 08-wasm | + +## `MsgMigrateContract` + +| Type | Attribute Key | Attribute Value | +| ----------------- | -------------- | ------------------------- | +| migrate\_contract | client\_id | `{clientId}` | +| migrate\_contract | wasm\_checksum | `{hex.Encode(checksum)}` | +| migrate\_contract | new\_checksum | `{hex.Encode(newChecksum)}` | +| message | module | 08-wasm | diff --git a/docs/ibc/v10.1.x/light-clients/wasm/governance.mdx b/docs/ibc/v10.1.x/light-clients/wasm/governance.mdx new file mode 100644 index 00000000..5f916d4a --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/wasm/governance.mdx @@ -0,0 +1,125 @@ +--- +title: Governance +description: >- + Learn how to upload Wasm light client byte code on a chain, and how to migrate + an existing Wasm light client contract.
+--- + +Learn how to upload Wasm light client byte code on a chain, and how to migrate an existing Wasm light client contract. + +## Setting an authority + +Both the storage of Wasm light client byte code as well as the migration of an existing Wasm light client contract are permissioned (i.e. only allowed to an authority such as governance). The designated authority is specified when instantiating `08-wasm`'s keeper: both [`NewKeeperWithVM`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47) and [`NewKeeperWithConfig`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L88-L96) constructor functions accept an `authority` argument that must be the address of the authorized actor. For example, in `app.go`, when instantiating the keeper, you can pass the address of the governance module: + +```go expandable +/ app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types" + ... 
+) + +/ app.go +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), / authority + wasmVM, + app.GRPCQueryRouter(), +) +``` + +## Storing new Wasm light client byte code + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to upload a new light client contract should contain the message [`MsgStoreCode`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L23-L30) with the base64-encoded byte code of the Wasm contract. Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable +{ + "title": "Upload IBC Wasm light client", + "summary": "Upload wasm client", + "messages": [ + { + "@type": "/ibc.lightclients.wasm.v1.MsgStoreCode", + "signer": "cosmos1...", / the authority address (e.g. the gov module account address) + "wasm_byte_code": "YWJ...PUB+" / standard base64 encoding of the Wasm contract byte code + } + ], + "metadata": "AQ==", + "deposit": "100stake" +} +``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal). + +Alternatively, the process of submitting the proposal may be simpler if you use the CLI command `store-code`. This CLI command accepts as argument the file of the Wasm light client contract and takes care of constructing the proposal message with `MsgStoreCode` and broadcasting it. See section [`store-code`](/docs/ibc/v10.1.x/light-clients/wasm/client#store-code) for more information.
+ +## Migrating an existing Wasm light client contract + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to migrate an existing Wasm light client contract should contain the message [`MsgMigrateContract`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L52-L63) with the checksum of the Wasm byte code to migrate to. Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable +{ + "title": "Migrate IBC Wasm light client", + "summary": "Migrate wasm client", + "messages": [ + { + "@type": "/ibc.lightclients.wasm.v1.MsgMigrateContract", + "signer": "cosmos1...", / the authority address (e.g. the gov module account address) + "client_id": "08-wasm-1", / client identifier of the Wasm light client contract that will be migrated + "checksum": "a8ad...4dc0", / SHA-256 hash of the Wasm byte code to migrate to, previously stored with MsgStoreCode + "msg": "{}" / JSON-encoded message to be passed to the contract on migration + } + ], + "metadata": "AQ==", + "deposit": "100stake" +} +``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal). + +## Removing an existing checksum + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to remove a specific checksum from the list of allowed checksums should contain the message [`MsgRemoveChecksum`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L39-L46) with the checksum (of a corresponding Wasm byte code).
Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable +{ + "title": "Remove checksum of Wasm light client byte code", + "summary": "Remove checksum", + "messages": [ + { + "@type": "/ibc.lightclients.wasm.v1.MsgRemoveChecksum", + "signer": "cosmos1...", / the authority address (e.g. the gov module account address) + "checksum": "a8ad...4dc0" / SHA-256 hash of the Wasm byte code that should be removed from the list of allowed checksums + } + ], + "metadata": "AQ==", + "deposit": "100stake" +} +``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal). diff --git a/docs/ibc/v10.1.x/light-clients/wasm/integration.mdx b/docs/ibc/v10.1.x/light-clients/wasm/integration.mdx new file mode 100644 index 00000000..42da84b3 --- /dev/null +++ b/docs/ibc/v10.1.x/light-clients/wasm/integration.mdx @@ -0,0 +1,414 @@ +--- +title: Integration +description: >- + Learn how to integrate the 08-wasm module in a chain binary and about the + recommended approaches depending on whether the x/wasm module is already used + in the chain. The following document only applies for Cosmos SDK chains. +--- + +Learn how to integrate the `08-wasm` module in a chain binary and about the recommended approaches depending on whether the [`x/wasm` module](https://github.com/CosmWasm/wasmd/tree/main/x/wasm) is already used in the chain. The following document only applies for Cosmos SDK chains. + +## Importing the `08-wasm` module + +`08-wasm` has no stable releases yet. To use it, you need to import the git commit that contains the module with the compatible versions of `ibc-go` and `wasmvm`.
To do so, run the following command with the desired git commit in your project: + +```sh +go get github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10@<commit-hash> +``` + +## `app.go` setup + +The sample code below shows the relevant integration points in `app.go` required to set up the `08-wasm` module in a chain binary. Since `08-wasm` is a light client module itself, please check out as well the section [Integrating light clients](/docs/ibc/v10.1.x/ibc/integration#integrating-light-clients) for more information: + +```go expandable +/ app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + + cmtos "github.com/cometbft/cometbft/libs/os" + + ibcwasm "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10" + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types" + ... +) + +... + +/ Register the AppModule for the 08-wasm module +ModuleBasics = module.NewBasicManager( + ... + ibcwasm.AppModuleBasic{ +}, + ... +) + +/ Add 08-wasm Keeper +type SimApp struct { + ... + WasmClientKeeper ibcwasmkeeper.Keeper + ... +} + +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + ... + keys := sdk.NewKVStoreKeys( + ... + ibcwasmtypes.StoreKey, + ) + + / Instantiate 08-wasm's keeper + / This sample code uses a constructor function that + / accepts a pointer to an existing instance of Wasm VM. + / This is the recommended approach when the chain + / also uses `x/wasm`, and then the Wasm VM instance + / can be shared.
+ app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmVM, + app.GRPCQueryRouter(), + ) + wasmLightClientModule := wasm.NewLightClientModule(app.WasmClientKeeper) + +app.IBCKeeper.ClientKeeper.AddRoute(ibcwasmtypes.ModuleName, &wasmLightClientModule) + +app.ModuleManager = module.NewManager( + / SDK app modules + ... + ibcwasm.NewAppModule(app.WasmClientKeeper), + ) + +app.ModuleManager.SetOrderBeginBlockers( + ... + ibcwasmtypes.ModuleName, + ... + ) + +app.ModuleManager.SetOrderEndBlockers( + ... + ibcwasmtypes.ModuleName, + ... + ) + genesisModuleOrder := []string{ + ... + ibcwasmtypes.ModuleName, + ... +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + ... + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + ... + + / must be before Loading version + if manager := app.SnapshotManager(); manager != nil { + err := manager.RegisterExtensions( + ibcwasmkeeper.NewWasmSnapshotter(app.CommitMultiStore(), &app.WasmClientKeeper), + ) + if err != nil { + panic(fmt.Errorf("failed to register snapshot extension: %s", err)) +} + +} + ... + if loadLatest { + ... + ctx := app.BaseApp.NewUncachedContext(true, cmtproto.Header{ +}) + + / Initialize pinned codes in wasmvm as they are not persisted there + if err := app.WasmClientKeeper.InitializePinnedCodes(ctx); err != nil { + cmtos.Exit(fmt.Sprintf("failed initialize pinned codes %s", err)) +} + +} +} +``` + +## Keeper instantiation + +When it comes to instantiating `08-wasm`'s keeper, there are two recommended ways of doing it. Choosing one or the other will depend on whether the chain already integrates [`x/wasm`](https://github.com/CosmWasm/wasmd/tree/main/x/wasm) or not. 
+ +### If `x/wasm` is present + +If the chain where the module is integrated uses `x/wasm` then we recommend that both `08-wasm` and `x/wasm` share the same Wasm VM instance. Having two separate Wasm VM instances is still possible, but care should be taken to make sure that both instances do not share the directory where the VM stores blobs and various caches, otherwise unexpected behaviour is likely to happen (from `x/wasm` v0.51 and `08-wasm` v0.2.0+ibc-go-v8.3-wasmvm-v2.0 this will be forbidden anyway, since wasmvm v2.0.0 and above will not allow two different Wasm VM instances to share the same data folder). + +In order to share the Wasm VM instance, please follow the guidelines below. Please note that this requires `x/wasm` v0.41 or above. + +- Instantiate the Wasm VM in `app.go` with the parameters of your choice. +- [Create an `Option` with this Wasm VM instance](https://github.com/CosmWasm/wasmd/blob/db93d7b6c7bb6f4a340d74b96a02cec885729b59/x/wasm/keeper/options.go#L21-L25). +- Add the option created in the previous step to a slice and [pass it to the `x/wasm NewKeeper` constructor function](https://github.com/CosmWasm/wasmd/blob/db93d7b6c7bb6f4a340d74b96a02cec885729b59/x/wasm/keeper/keeper_cgo.go#L36). +- Pass the pointer to the Wasm VM instance to `08-wasm` [`NewKeeperWithVM` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47). + +The code to set this up would look something like this: + +```go expandable +/ app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + + wasmvm "github.com/CosmWasm/wasmvm/v2" + wasmkeeper "github.com/CosmWasm/wasmd/x/wasm/keeper" + wasmtypes "github.com/CosmWasm/wasmd/x/wasm/types" + + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types" + ... +) + +...
+ +/ instantiate the Wasm VM with the chosen parameters +wasmer, err := wasmvm.NewVM( + dataDir, + availableCapabilities, + contractMemoryLimit, / default of 32 + contractDebugMode, + memoryCacheSize, +) + if err != nil { + panic(err) +} + +/ create an Option slice (or append to an existing one) +/ with the option to use a custom Wasm VM instance +wasmOpts = []wasmkeeper.Option{ + wasmkeeper.WithWasmEngine(wasmer), +} + +/ the keeper will use the provided Wasm VM instance, +/ instead of instantiating a new one +app.WasmKeeper = wasmkeeper.NewKeeper( + appCodec, + keys[wasmtypes.StoreKey], + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + distrkeeper.NewQuerier(app.DistrKeeper), + app.IBCKeeper.ChannelKeeper, + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, + scopedWasmKeeper, + app.TransferKeeper, + app.MsgServiceRouter(), + app.GRPCQueryRouter(), + wasmDir, + wasmConfig, + availableCapabilities, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmOpts..., +) + +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmer, / pass the Wasm VM instance to `08-wasm` keeper constructor + app.GRPCQueryRouter(), +) +... +``` + +### If `x/wasm` is not present + +If the chain does not use [`x/wasm`](https://github.com/CosmWasm/wasmd/tree/main/x/wasm), even though it is still possible to use the method above from the previous section +(e.g. 
instantiating a Wasm VM in `app.go` and passing it to `08-wasm`'s [`NewKeeperWithVM` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47)), you can instead use the [`NewKeeperWithConfig` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L88-L96) and provide the Wasm VM configuration parameters of your choice, since there is no need in this case to share the Wasm VM instance with another module. A Wasm VM instance will be created in `NewKeeperWithConfig`. The parameters that can be set are: + +- `DataDir` is the [directory for Wasm blobs and various caches](https://github.com/CosmWasm/wasmvm/blob/v2.0.0/lib.go#L25). As an example, in `wasmd` this is set to the [`wasm` folder under the home directory](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L578). In the code snippet below we set this field to the `ibc_08-wasm_client_data` folder under the home directory. +- `SupportedCapabilities` is a [list of capabilities supported by the chain](https://github.com/CosmWasm/wasmvm/blob/v2.0.0/lib.go#L26). [`wasmd` sets this to all the available capabilities](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L586), but 08-wasm only requires `iterator`. +- `MemoryCacheSize` sets [the size in MiB of an in-memory cache for e.g. module caching](https://github.com/CosmWasm/wasmvm/blob/v2.0.0/lib.go#L29C16-L29C104). It is not consensus-critical and should be defined on a per-node basis, often in the range 100 to 1000 MB. [`wasmd` reads this value from the configuration](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L579). Default value is 256.
- `ContractDebugMode` is a [flag to enable/disable printing debug logs from the contract to STDOUT](https://github.com/CosmWasm/wasmvm/blob/v2.0.0/lib.go#L28). This should be false in production environments. The default value is false.

Another configuration parameter of the Wasm VM is the contract memory limit (in MiB), which is [set to 32](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L8), [following the example of `wasmd`](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/x/wasm/keeper/keeper.go#L32-L34). This parameter is not configurable by users of `08-wasm`.

The following sample code shows how the keeper would be constructed using this method:

```go expandable
// app.go
import (
	...
	"github.com/cosmos/cosmos-sdk/runtime"

	ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/keeper"
	ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types"
	...
)

...

// homePath is the path to the directory where the data
// directory for Wasm blobs and caches will be created
wasmConfig := ibcwasmtypes.WasmConfig{
	DataDir:               filepath.Join(homePath, "ibc_08-wasm_client_data"),
	SupportedCapabilities: []string{"iterator"},
	ContractDebugMode:     false,
}

app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithConfig(
	appCodec,
	runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]),
	app.IBCKeeper.ClientKeeper,
	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
	wasmConfig,
	app.GRPCQueryRouter(),
)
```

See also the [`WasmConfig` type definition](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L21-L31) for more information on each of the configurable parameters. Some parameters allow node-level configurations.
The function [`DefaultWasmConfig`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L36-L42) is also available; it returns a configuration with the default values.

### Options

The `08-wasm` module comes with an options API inspired by the one in `x/wasm`.
Currently the only available option is `WithQueryPlugins`, which allows registration of custom query plugins for the `08-wasm` module. Use of this API is optional: it is only needed if the chain wants to register custom query plugins for the `08-wasm` module.

#### `WithQueryPlugins`

By default, the `08-wasm` module does not configure any querier options for light client contracts. However, it is possible to register custom query plugins for [`QueryRequest::Custom`](https://github.com/CosmWasm/cosmwasm/blob/v2.0.1/packages/std/src/query/mod.rs#L48) and [`QueryRequest::Stargate`](https://github.com/CosmWasm/cosmwasm/blob/v2.0.1/packages/std/src/query/mod.rs#L57-L65).

Assuming that the keeper is not yet instantiated, the following sample code shows how to register query plugins for the `08-wasm` module.

We first construct a [`QueryPlugins`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/querier.go#L78-L87) object with the desired query plugins:

```go
queryPlugins := ibcwasmtypes.QueryPlugins{
	Custom: MyCustomQueryPlugin(),
	// `myAcceptList` is a `[]string` containing the list of gRPC query paths that the chain wants to allow for the `08-wasm` module to query.
	// These queries must be registered in the chain's gRPC query router, be deterministic, and track their gas usage.
	// The `AcceptListStargateQuerier` function will return a query plugin that will only allow queries for the paths in `myAcceptList`.
	// The query responses are encoded in protobuf, unlike the implementation in `x/wasm`.
	Stargate: ibcwasmtypes.AcceptListStargateQuerier(myAcceptList),
}
```

Note that the `Stargate` querier appends the user-defined accept list of query routes to a default list defined by the `08-wasm` module.
The `defaultAcceptList` defines a single query route: `"/ibc.core.client.v1.Query/VerifyMembership"`. This allows light client smart contracts to delegate parts of their workflow to other light clients for auxiliary proof verification, for example proof of inclusion of block and tx data by a data availability provider.

```go
// defaultAcceptList defines a set of default allowed queries made available to the Querier.
var defaultAcceptList = []string{
	"/ibc.core.client.v1.Query/VerifyMembership",
}
```

You may leave any of the fields in the `QueryPlugins` object as `nil` if you do not want to register a query plugin for that query type.

Then, we pass the `QueryPlugins` object to the `WithQueryPlugins` option:

```go
querierOption := ibcwasmkeeper.WithQueryPlugins(&queryPlugins)
```

Finally, we pass the option to the `NewKeeperWithConfig` or `NewKeeperWithVM` constructor function during [Keeper instantiation](#keeper-instantiation):

```diff
app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithConfig(
  appCodec,
  runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]),
  app.IBCKeeper.ClientKeeper,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
  wasmConfig,
  app.GRPCQueryRouter(),
+ querierOption,
)
```

```diff
app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM(
  appCodec,
  runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]),
  app.IBCKeeper.ClientKeeper,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
  wasmer, // pass the Wasm VM instance to `08-wasm` keeper constructor
  app.GRPCQueryRouter(),
+ querierOption,
)
```

## Updating `AllowedClients`

If the chain's 02-client submodule parameter `AllowedClients` contains the single wildcard `"*"` element, then it is not necessary to do
anything in order to allow the creation of `08-wasm` clients. However, if the parameter contains a list of client types (e.g. `["06-solomachine", "07-tendermint"]`), then in order to use the `08-wasm` module, chains must update the [`AllowedClients` parameter](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/client.proto#L64) of core IBC. This can be configured directly in the application upgrade handler with the sample code below:

```go expandable
import (
	...
	ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10/types"
	...
)

...

func CreateWasmUpgradeHandler(
	mm *module.Manager,
	configurator module.Configurator,
	clientKeeper clientkeeper.Keeper,
) upgradetypes.UpgradeHandler {
	return func(goCtx context.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
		ctx := sdk.UnwrapSDKContext(goCtx)

		// explicitly update the IBC 02-client params, adding the wasm client type
		params := clientKeeper.GetParams(ctx)
		params.AllowedClients = append(params.AllowedClients, ibcwasmtypes.Wasm)
		clientKeeper.SetParams(ctx, params)

		return mm.RunMigrations(goCtx, configurator, vm)
	}
}
```

Alternatively, the parameter can be updated via a governance proposal (see the bottom of the [`Creating clients`](/docs/ibc/v10.1.x/light-clients/developer-guide/setup#creating-clients) section for an example of how to do this).

## Adding the module to the store

As part of the upgrade migration you must also add the module to the upgrades store.

```go expandable
func (app SimApp) RegisterUpgradeHandlers() {
	...
	if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
		storeUpgrades := storetypes.StoreUpgrades{
			Added: []string{
				ibcwasmtypes.ModuleName,
			},
		}

		// configure store loader that checks if version == upgradeHeight and applies store upgrades
		app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
	}
}
```

## Adding snapshot support

In order to use the `08-wasm` module, chains are required to register the `WasmSnapshotter` extension in the snapshot manager. This snapshotter takes care of persisting the external state, in the form of contract code, of the Wasm VM instance to disk when the chain is snapshotted. [This code](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/testing/simapp/app.go#L775-L782) should be placed in the `NewSimApp` function in `app.go`.

## Pin byte codes at start

Wasm byte codes should be pinned to the WasmVM cache on every application start; therefore [this code](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/testing/simapp/app.go#L825-L830) should be placed in the `NewSimApp` function in `app.go`.

diff --git a/docs/ibc/v10.1.x/light-clients/wasm/messages.mdx b/docs/ibc/v10.1.x/light-clients/wasm/messages.mdx
new file mode 100644
index 00000000..182d1154
--- /dev/null
+++ b/docs/ibc/v10.1.x/light-clients/wasm/messages.mdx
@@ -0,0 +1,73 @@

---
title: Messages
description: >-
  Uploading the Wasm light client contract to the Wasm VM storage is achieved by
  means of MsgStoreCode:
---

## `MsgStoreCode`

Uploading the Wasm light client contract to the Wasm VM storage is achieved by means of `MsgStoreCode`:

```go
type MsgStoreCode struct {
	// signer address
	Signer string
	// wasm byte code of light client contract.
	// It can be raw or gzip compressed
	WasmByteCode []byte
}
```

This message is expected to fail if:

* `Signer` is an invalid Bech32 address, or it does not match the designated authority address.
* `WasmByteCode` is empty or it exceeds the maximum size, currently set to 3MB.

Only light client contracts stored using `MsgStoreCode` are allowed to be instantiated. An attempt to create a light client from contracts uploaded via other means (e.g. through `x/wasm` if the module shares the same Wasm VM instance with 08-wasm) will fail. Due to the idempotent nature of the Wasm VM's `StoreCode` function, it is possible to store the same byte code multiple times.

When execution of `MsgStoreCode` succeeds, the checksum of the contract (i.e. the sha256 hash of the contract's byte code) is stored in an allow list. When a relayer submits [`MsgCreateClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L25-L37) with 08-wasm's `ClientState`, the client state includes the checksum of the Wasm byte code that should be called. Then 02-client calls [08-wasm's implementation of the `Initialize` function](https://github.com/cosmos/ibc-go/blob/06fd8eb5ee1697e3b43be7528a6e42f5e4a4613c/modules/core/02-client/keeper/client.go#L40) (an interface function of `LightClientModule`), which checks that the checksum in the client state matches one of the checksums in the allow list. If a match is found, the light client is initialized; otherwise, the transaction is aborted.
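The checksum bookkeeping described above can be sketched in a few lines of Go. Note that the in-memory `allowList` and the `storeCode`/`initialize` helpers below are illustrative stand-ins for the keeper's store and message handlers, not the actual 08-wasm API:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// allowList mimics the set of checksums recorded by MsgStoreCode.
var allowList = map[string]struct{}{}

// storeCode hashes the byte code and records the checksum, mirroring the
// idempotent behaviour described above: storing the same byte code twice
// yields the same checksum.
func storeCode(byteCode []byte) string {
	sum := sha256.Sum256(byteCode)
	checksum := hex.EncodeToString(sum[:])
	allowList[checksum] = struct{}{}
	return checksum
}

// initialize succeeds only if the client state's checksum is allow-listed.
func initialize(checksum string) error {
	if _, ok := allowList[checksum]; !ok {
		return fmt.Errorf("checksum %s not found in allow list", checksum)
	}
	return nil
}

func main() {
	checksum := storeCode([]byte("wasm byte code"))
	fmt.Println(initialize(checksum) == nil)   // true: stored contract is allow-listed
	fmt.Println(initialize("deadbeef") != nil) // true: unknown checksum is rejected
}
```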
## `MsgMigrateContract`

Migrating a contract to a new Wasm byte code is achieved by means of `MsgMigrateContract`:

```go
type MsgMigrateContract struct {
	// signer address
	Signer string
	// the client id of the contract
	ClientId string
	// the SHA-256 hash of the new wasm byte code for the contract
	Checksum []byte
	// the json-encoded migrate msg to be passed to the contract on migration
	Msg []byte
}
```

This message is expected to fail if:

* `Signer` is an invalid Bech32 address, or it does not match the designated authority address.
* `ClientId` is not a valid identifier prefixed by `08-wasm`.
* `Checksum` is not exactly 32 bytes long, it is not found in the list of allowed checksums (a new checksum is added to the list when executing `MsgStoreCode`), or it matches the current checksum of the contract.

When a Wasm light client contract is migrated to a new Wasm byte code, the checksum for the contract will be updated with the new checksum.

## `MsgRemoveChecksum`

Removing a checksum from the list of allowed checksums is achieved by means of `MsgRemoveChecksum`:

```go
type MsgRemoveChecksum struct {
	// signer address
	Signer string
	// Wasm byte code checksum to be removed from the store
	Checksum []byte
}
```

This message is expected to fail if:

* `Signer` is an invalid Bech32 address, or it does not match the designated authority address.
* `Checksum` is not exactly 32 bytes long or it is not found in the list of allowed checksums (a new checksum is added to the list when executing `MsgStoreCode`).

When a checksum is removed from the list of allowed checksums, the corresponding Wasm byte code will not be available for instantiation in [08-wasm's implementation of the `Initialize` function](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/core/02-client/keeper/client.go#L36).
diff --git a/docs/ibc/v10.1.x/light-clients/wasm/migrations.mdx b/docs/ibc/v10.1.x/light-clients/wasm/migrations.mdx
new file mode 100644
index 00000000..2bc73f43
--- /dev/null
+++ b/docs/ibc/v10.1.x/light-clients/wasm/migrations.mdx
@@ -0,0 +1,235 @@

---
title: Migrations
description: This guide provides instructions for migrating 08-wasm versions.
---

This guide provides instructions for migrating 08-wasm versions.

Please note that the following releases are retracted; refer to the appropriate migrations section for upgrading.

```bash
v0.3.1-0.20240717085919-bb71eef0f3bf => v0.3.0+ibc-go-v8.3-wasmvm-v2.0
v0.2.1-0.20240717085554-570d057959e3 => v0.2.0+ibc-go-v7.6-wasmvm-v1.5
v0.2.1-0.20240523101951-4b45d1822fb6 => v0.2.0+ibc-go-v8.3-wasmvm-v2.0
v0.1.2-0.20240412103620-7ee2a2452b79 => v0.1.1+ibc-go-v7.3-wasmvm-v1.5
v0.1.1-0.20231213092650-57fcdb9a9a9d => v0.1.0+ibc-go-v8.0-wasmvm-v1.5
v0.1.1-0.20231213092633-b306e7a706e1 => v0.1.0+ibc-go-v7.3-wasmvm-v1.5
```

## From ibc-go v8.4.x to ibc-go v9.0.x

### Chains

* The `Initialize`, `Status`, `GetTimestampAtHeight`, `GetLatestHeight`, `VerifyMembership`, `VerifyNonMembership`, `VerifyClientMessage`, `UpdateState` and `UpdateStateOnMisbehaviour` functions in `ClientState` have been removed and all their logic has been moved to functions of the `LightClientModule`.
* The `MigrateContract` function has been removed from `ClientState`.
* The `VerifyMembershipMsg` and `VerifyNonMembershipMsg` payloads for `SudoMsg` have been modified. The `Path` field of both structs has been updated from `v1.MerklePath` to `v2.MerklePath`. The new `v2.MerklePath` field contains a `KeyPath` of `[][]byte` as opposed to `[]string`. This supports proving values stored under keys which contain non-utf8 encoded symbols. As a result, the JSON field `path` containing `key_path` of both messages will marshal elements as base64-encoded bytestrings.
This is a breaking change for 08-wasm client contracts, which should be migrated to correctly support deserialisation of the `v2.MerklePath` field.
* The `ExportMetadataMsg` struct has been removed and is no longer required for contracts to implement. Core IBC will handle exporting all key/value pairs written to the store by a light client contract.
* The `ZeroCustomFields` interface function has been removed from the `ClientState` interface. Core IBC only used this function to set tendermint client states when scheduling an IBC software upgrade. The interface function has been replaced by a type assertion.
* The `MaxWasmByteSize` function has been removed in favor of the `MaxWasmSize` constant.
* The `HasChecksum`, `GetAllChecksums` and `Logger` functions have been moved from the `types` package to a method on the `Keeper` type in the `keeper` package.
* The `InitializePinnedCodes` function has been moved to a method on the `Keeper` type in the `keeper` package.
* The `CustomQuerier`, `StargateQuerier` and `QueryPlugins` types have been moved from the `types` package to the `keeper` package.
* The `NewDefaultQueryPlugins`, `AcceptListStargateQuerier` and `RejectCustomQuerier` functions have been moved from the `types` package to the `keeper` package.
* The `NewDefaultQueryPlugins` function signature has changed to take an argument: `queryRouter ibcwasm.QueryRouter`.
* The `AcceptListStargateQuerier` function signature has changed to take an additional argument: `queryRouter ibcwasm.QueryRouter`.
* The `WithQueryPlugins` function signature has changed to take in the `QueryPlugins` type from the `keeper` package (previously from the `types` package).
* The `VMGasRegister` variable has been moved from the `types` package to the `keeper` package.
+ +## From v0.3.0+ibc-go-v8.3-wasmvm-v2.0 to v0.4.1-ibc-go-v8.4-wasmvm-v2.0 + +### Contract developers + +Contract developers are required to update their JSON API message structure for the `SudoMsg` payloads `VerifyMembershipMsg` and `VerifyNonMembershipMsg`. +The `path` field on both JSON API messages has been renamed to `merkle_path`. + +A migration is required for existing 08-wasm client contracts in order to correctly handle the deserialisation of these fields. + +## From v0.2.0+ibc-go-v7.3-wasmvm-v1.5 to v0.3.1-ibc-go-v7.4-wasmvm-v1.5 + +### Contract developers + +Contract developers are required to update their JSON API message structure for the `SudoMsg` payloads `VerifyMembershipMsg` and `VerifyNonMembershipMsg`. +The `path` field on both JSON API messages has been renamed to `merkle_path`. + +A migration is required for existing 08-wasm client contracts in order to correctly handle the deserialisation of these fields. + +## From v0.2.0+ibc-go-v8.3-wasmvm-v2.0 to v0.3.0-ibc-go-v8.3-wasmvm-v2.0 + +### Contract developers + +The `v0.3.0` release of 08-wasm for ibc-go `v8.3.x` and above introduces a breaking change for client contract developers. + +The contract API `SudoMsg` payloads `VerifyMembershipMsg` and `VerifyNonMembershipMsg` have been modified. +The encoding of the `Path` field of both structs has been updated from `v1.MerklePath` to `v2.MerklePath` to support proving values stored under keys which contain non-utf8 encoded symbols. + +As a result, the `Path` field now contains a `MerklePath` composed of `key_path` of `[][]byte` as opposed to `[]string`. The JSON field `path` containing `key_path` of both `VerifyMembershipMsg` and `VerifyNonMembershipMsg` structs will now marshal elements as base64 encoded bytestrings. See below for example JSON diff. 
```diff expandable
{
  "verify_membership": {
    "height": {
      "revision_height": 1
    },
    "delay_time_period": 0,
    "delay_block_period": 0,
    "proof":"dmFsaWQgcHJvb2Y=",
    "path": {
+      "key_path":["L2liYw==","L2tleS9wYXRo"]
-      "key_path":["/ibc","/key/path"]
    },
    "value":"dmFsdWU="
  }
}
```

A migration is required for existing 08-wasm client contracts in order to correctly handle the deserialisation of `key_path` from `[]string` to `[][]byte`.
Contract developers should familiarise themselves with the migration path offered by 08-wasm [here](/docs/ibc/v10.1.x/light-clients/wasm/governance#migrating-an-existing-wasm-light-client-contract).

An example of the required changes in a client contract may look like:

```diff
#[cw_serde]
pub struct MerklePath {
+ pub key_path: Vec<Binary>,
- pub key_path: Vec<String>,
}
```

Please refer to the [`cosmwasm_std`](https://docs.rs/cosmwasm-std/2.0.4/cosmwasm_std/struct.Binary.html) documentation for more information.

## From v0.1.1+ibc-go-v7.3-wasmvm-v1.5 to v0.2.0-ibc-go-v7.3-wasmvm-v1.5

### Contract developers

The `v0.2.0` release of 08-wasm for ibc-go `v7.6.x` and above introduces a breaking change for client contract developers.

The contract API `SudoMsg` payloads `VerifyMembershipMsg` and `VerifyNonMembershipMsg` have been modified.
The encoding of the `Path` field of both structs has been updated from `v1.MerklePath` to `v2.MerklePath` to support proving values stored under keys which contain non-utf8 encoded symbols.

As a result, the `Path` field now contains a `MerklePath` composed of `key_path` of `[][]byte` as opposed to `[]string`. The JSON field `path` containing `key_path` of both `VerifyMembershipMsg` and `VerifyNonMembershipMsg` structs will now marshal elements as base64-encoded bytestrings. See below for an example JSON diff.
```diff expandable
{
  "verify_membership": {
    "height": {
      "revision_height": 1
    },
    "delay_time_period": 0,
    "delay_block_period": 0,
    "proof":"dmFsaWQgcHJvb2Y=",
    "path": {
+      "key_path":["L2liYw==","L2tleS9wYXRo"]
-      "key_path":["/ibc","/key/path"]
    },
    "value":"dmFsdWU="
  }
}
```

A migration is required for existing 08-wasm client contracts in order to correctly handle the deserialisation of `key_path` from `[]string` to `[][]byte`.
Contract developers should familiarise themselves with the migration path offered by 08-wasm [here](/docs/ibc/v10.1.x/light-clients/wasm/governance#migrating-an-existing-wasm-light-client-contract).

An example of the required changes in a client contract may look like:

```diff
#[cw_serde]
pub struct MerklePath {
+ pub key_path: Vec<Binary>,
- pub key_path: Vec<String>,
}
```

Please refer to the [`cosmwasm_std`](https://docs.rs/cosmwasm-std/2.0.4/cosmwasm_std/struct.Binary.html) documentation for more information.

## From ibc-go v7.3.x to ibc-go v8.0.x

### Chains

In the 08-wasm versions compatible with ibc-go v7.3.x and above from the v7 release line, the checksums of the uploaded Wasm bytecodes are all stored under a single key. From ibc-go v8.0.x the checksums are stored using [`collections.KeySet`](https://docs.cosmos.network/v0.50/build/packages/collections#keyset), whose full functionality became available in Cosmos SDK v0.50. There is therefore an [automatic migration handler](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/module.go#L115-L118) configured in the 08-wasm module to migrate the stored checksums to `collections.KeySet`.
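The semantics gained from `collections.KeySet` can be illustrated with a toy map-backed set. This sketch is not the Cosmos SDK API; it only shows why a key-per-checksum layout gives constant-time membership checks and removals, and makes duplicates impossible, compared to keeping all checksums in one list under a single key:

```go
package main

import "fmt"

// checksumSet illustrates key-set semantics: each checksum is stored
// under its own key, so Set is idempotent and Has/Remove are O(1).
type checksumSet map[string]struct{}

func (s checksumSet) Set(checksum string)      { s[checksum] = struct{}{} }
func (s checksumSet) Has(checksum string) bool { _, ok := s[checksum]; return ok }
func (s checksumSet) Remove(checksum string)   { delete(s, checksum) }

func main() {
	set := checksumSet{}
	set.Set("abc123")
	set.Set("abc123") // idempotent: still a single entry
	fmt.Println(set.Has("abc123"), len(set)) // true 1
	set.Remove("abc123")
	fmt.Println(set.Has("abc123")) // false
}
```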
+ +## From v0.1.0+ibc-go-v8.0-wasmvm-v1.5 to v0.2.0-ibc-go-v8.3-wasmvm-v2.0 + +The `WasmEngine` interface has been updated to reflect changes in the function signatures of Wasm VM: + +```diff expandable +type WasmEngine interface { +- StoreCode(code wasmvm.WasmCode) (wasmvm.Checksum, error) ++ StoreCode(code wasmvm.WasmCode, gasLimit uint64) (wasmvmtypes.Checksum, uint64, error) + + StoreCodeUnchecked(code wasmvm.WasmCode) (wasmvm.Checksum, error) + + Instantiate( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + info wasmvmtypes.MessageInfo, + initMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + Query( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + queryMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) ([]byte, uint64, error) ++ ) (*wasmvmtypes.QueryResult, uint64, error) + + Migrate( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + migrateMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + Sudo( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + sudoMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + GetCode(checksum wasmvm.Checksum) (wasmvm.WasmCode, error) + + Pin(checksum wasmvm.Checksum) error + + Unpin(checksum wasmvm.Checksum) error +} +``` + +Similar changes were required in the functions of 
the `MockWasmEngine` interface.

### Chains

The `SupportedCapabilities` field of `WasmConfig` is now of type `[]string`:

```diff
type WasmConfig struct {
  DataDir string
- SupportedCapabilities string
+ SupportedCapabilities []string
  ContractDebugMode bool
}
```

diff --git a/docs/ibc/v10.1.x/light-clients/wasm/overview.mdx b/docs/ibc/v10.1.x/light-clients/wasm/overview.mdx
new file mode 100644
index 00000000..3cf3177f
--- /dev/null
+++ b/docs/ibc/v10.1.x/light-clients/wasm/overview.mdx
@@ -0,0 +1,22 @@

---
title: Overview
description: Learn about the 08-wasm light client proxy module.
---

## Overview

Learn about the `08-wasm` light client proxy module.

### Context

Traditionally, light clients used by ibc-go have been implemented only in Go, and since ibc-go v7 (with the release of the 02-client refactor) they are first-class Cosmos SDK modules. This means that updating existing light client implementations or adding support for new light clients is a multi-step, time-consuming process involving on-chain governance: it is necessary to modify the codebase of ibc-go (if the light client is part of its codebase), re-build chains' binaries, pass a governance proposal and have validators upgrade their nodes.

### Motivation

To break the limitation of being able to write light client implementations only in Go, the `08-wasm` module adds support for running light clients written in a Wasm-compilable language. The light client byte code implements the entry points of a [CosmWasm](https://docs.cosmwasm.com/docs/) smart contract, and runs inside a Wasm VM. The `08-wasm` module exposes a proxy light client interface that routes incoming messages to the appropriate handler function, inside the Wasm VM, for execution.

Adding a new light client to a chain is as simple as submitting a governance proposal with the message that stores the byte code of the light client contract. No coordinated upgrade is needed.
When the governance proposal passes and the message is executed, the contract is ready to be instantiated upon receiving a relayer-submitted `MsgCreateClient`. The process of creating a Wasm light client is the same as with a regular light client implemented in Go.

### Use cases

* Development of light clients for non-Cosmos ecosystem chains: state machines in other ecosystems are, in many cases, implemented in Rust, so their light client implementations likely rely on libraries for which there is no equivalent in Go. This makes developing such a light client in Go very difficult, but relatively simple in Rust. Writing a CosmWasm smart contract in Rust that implements the light client algorithm is therefore a lower-effort option.

diff --git a/docs/ibc/v10.1.x/middleware/callbacks/callbacks-IBCv2.mdx b/docs/ibc/v10.1.x/middleware/callbacks/callbacks-IBCv2.mdx
new file mode 100644
index 00000000..fa3fddc3
--- /dev/null
+++ b/docs/ibc/v10.1.x/middleware/callbacks/callbacks-IBCv2.mdx
@@ -0,0 +1,28 @@

---
title: Callbacks with IBC v2
description: >-
  This page highlights some of the differences between IBC v2 and IBC classic
  relevant for the callbacks middleware and how to use the module with IBC v2.
  More details on middleware for IBC v2 can be found in the middleware section.
---

This page highlights some of the differences between IBC v2 and IBC classic relevant for the callbacks middleware and how to use the module with IBC v2. More details on middleware for IBC v2 can be found in the [middleware section](/docs/ibc/v10.1.x/ibc/middleware/developIBCv2).

## Interfaces

Some of the interface differences are:

* The callbacks middleware for IBC v2 requires the [`Underlying Application`](/docs/ibc/v10.1.x/middleware/callbacks/overview) to implement the new [`CallbacksCompatibleModuleV2`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/types/callbacks.go#L53-L58) interface.
* `channeltypesv2.Payload` is now used instead of `channeltypes.Packet`.
* With IBC classic, the `OnRecvPacket` callback returns the `ack`, whereas v2 returns the `recvResult`, which is the [status of the packet](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/packet.pb.go#L26-L38): unspecified, success, failure or asynchronous.
* `api.WriteAcknowledgementWrapper` is used instead of `ICS4Wrapper.WriteAcknowledgement`. It is only needed if the lower level application is going to write an asynchronous acknowledgement.

## Contract Developers

The wasmd contract keeper enables CosmWasm developers to use the callbacks middleware. The [CosmWasm documentation](https://cosmwasm.cosmos.network/ibc/extensions/callbacks) provides information for contract developers. The IBC v2 callbacks implementation uses a `Payload` but reconstructs an IBC classic `Packet` to preserve the CosmWasm contract keeper interface. Additionally, contracts must now handle the IBC v2 `ErrorAcknowledgement` sentinel value in the case of a failure.

The callbacks middleware can be used for transfer + action workflows, for example a transfer and swap on receive. These workflows require knowledge of the IBC denom that has been received. To assist with parsing the ICS20 packet, [helper functions](https://github.com/cosmos/solidity-ibc-eureka/blob/a8870b023e58622fb7b3f733572c684851f8e5ee/packages/cosmwasm/ibc-callbacks-helpers/src/ics20.rs#L7-L41) can be found in the solidity-ibc-eureka repository.
+ +## Integration + +An example integration of the callbacks middleware in a transfer stack that is using IBC v2 can be found in the [ibc-go integration section](/docs/ibc/v10.1.x/ibc/integration) diff --git a/docs/ibc/v10.1.x/middleware/callbacks/end-users.mdx b/docs/ibc/v10.1.x/middleware/callbacks/end-users.mdx new file mode 100644 index 00000000..8c198b6c --- /dev/null +++ b/docs/ibc/v10.1.x/middleware/callbacks/end-users.mdx @@ -0,0 +1,95 @@ +--- +title: End Users +description: >- + This section explains how to use the callbacks middleware from the perspective + of an IBC Actor. Callbacks middleware provides two types of callbacks: +--- + +This section explains how to use the callbacks middleware from the perspective of an IBC Actor. Callbacks middleware provides two types of callbacks: + +* Source callbacks: + * `SendPacket` callback + * `OnAcknowledgementPacket` callback + * `OnTimeoutPacket` callback +* Destination callbacks: + * `ReceivePacket` callback + +For a given channel, the source callbacks are supported if the source chain has the callbacks middleware wired up in the channel's IBC stack. Similarly, the destination callbacks are supported if the destination chain has the callbacks middleware wired up in the channel's IBC stack. + + +Callbacks are always executed after the packet has been processed by the underlying IBC module. + + + +If the underlying application module is doing an asynchronous acknowledgement on packet receive (for example, if the [packet forward middleware](https://github.com/cosmos/ibc-apps/tree/main/middleware/packet-forward-middleware) is in the stack, and is being used by this packet), then the callbacks middleware will execute the `ReceivePacket` callback after the acknowledgement has been received. 
## Source Callbacks

Source callbacks are natively supported in the following IBC modules (if they are wrapped by the callbacks middleware):

* `transfer`
* `icacontroller`

To have your source callbacks processed by the callbacks middleware, you must set the memo in the application's packet data to the following format:

```jsonc
{
  "src_callback": {
    "address": "callbackAddressString",
    // optional
    "gas_limit": "userDefinedGasLimitString",
  }
}
```

## Destination Callbacks

Destination callbacks are natively only supported in the transfer module. Note that wrapping icahost is not supported. This is because icahost should be able to execute an arbitrary transaction anyway, and can call contracts or modules directly.

To have your destination callbacks processed by the callbacks middleware, you must set the memo in the application's packet data to the following format:

```jsonc
{
  "dest_callback": {
    "address": "callbackAddressString",
    // optional
    "gas_limit": "userDefinedGasLimitString",
  }
}
```

Note that a packet can have both a source and destination callback.

```jsonc expandable
{
  "src_callback": {
    "address": "callbackAddressString",
    // optional
    "gas_limit": "userDefinedGasLimitString",
  },
  "dest_callback": {
    "address": "callbackAddressString",
    // optional
    "gas_limit": "userDefinedGasLimitString",
  }
}
```

# User Defined Gas Limit

The user defined gas limit was added for the following reasons:

* To prevent callbacks from blocking the packet lifecycle.
* To prevent relayers from being able to DOS the callback execution by sending a packet with a low amount of gas.

There is a chain-wide parameter that sets the maximum gas limit that a user can set for a callback. This is to prevent a user from setting a gas limit that is too high for relayers. If the `"gas_limit"` is not set in the packet memo, then the maximum gas limit is used.
+
+These goals are achieved by creating a minimum gas amount required for callback execution. If the relayer provides at least the minimum gas limit for the callback execution, then the packet lifecycle will not be blocked if the callback runs out of gas during execution, and the callback cannot be retried. If the relayer does not provide the minimum amount of gas and the callback execution runs out of gas, the entire tx is reverted and it may be executed again.
+
+`SendPacket` callback is always reverted if the callback execution fails or returns an error for any reason. This is so that the packet is not sent if the callback execution fails.
+
diff --git a/docs/ibc/v10.1.x/middleware/callbacks/events.mdx b/docs/ibc/v10.1.x/middleware/callbacks/events.mdx
new file mode 100644
index 00000000..cd0800c3
--- /dev/null
+++ b/docs/ibc/v10.1.x/middleware/callbacks/events.mdx
@@ -0,0 +1,37 @@
+---
+title: Events
+description: >-
+  An overview of all events related to the callbacks middleware. There are two
+  types of events, "ibc_src_callback" and "ibc_dest_callback".
+---
+
+An overview of all events related to the callbacks middleware. There are two types of events, `"ibc_src_callback"` and `"ibc_dest_callback"`.
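As a hedged illustration only, here is how an off-chain consumer might pick these two event types out of a transaction result. The `Event` struct below is a plain stand-in for the ABCI/SDK event type, and `callbackEvents` is a hypothetical helper, not part of ibc-go:

```go
package main

import "fmt"

// Event is a simplified stand-in for an ABCI event; the real callbacks
// middleware emits events through the Cosmos SDK event manager.
type Event struct {
	Type       string
	Attributes map[string]string
}

// callbackEvents filters a transaction's events down to the two event
// types emitted by the callbacks middleware.
func callbackEvents(events []Event) []Event {
	var out []Event
	for _, e := range events {
		if e.Type == "ibc_src_callback" || e.Type == "ibc_dest_callback" {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	txEvents := []Event{
		{Type: "ibc_transfer", Attributes: map[string]string{"amount": "100"}},
		{Type: "ibc_src_callback", Attributes: map[string]string{
			"callback_type":   "send_packet",
			"callback_result": "success",
		}},
	}
	for _, e := range callbackEvents(txEvents) {
		fmt.Println(e.Type, e.Attributes["callback_result"]) // ibc_src_callback success
	}
}
```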
+ +## Shared Attributes + +Both of these event types share the following attributes: + +| **Attribute Key** | **Attribute Values** | **Optional** | +| :--------------------------: | :-----------------------------------------------------------------------------------------: | :----------------: | +| module | "ibccallbacks" | | +| callback\_type | **One of**: "send\_packet", "acknowledgement\_packet", "timeout\_packet", "receive\_packet" | | +| callback\_address | string | | +| callback\_exec\_gas\_limit | string (parsed from uint64) | | +| callback\_commit\_gas\_limit | string (parsed from uint64) | | +| packet\_sequence | string (parsed from uint64) | | +| callback\_result | **One of**: "success", "failure" | | +| callback\_error | string (parsed from callback err) | Yes, if err != nil | + +## `ibc_src_callback` Attributes + +| **Attribute Key** | **Attribute Values** | +| :------------------: | :----------------------: | +| packet\_src\_port | string (sourcePortID) | +| packet\_src\_channel | string (sourceChannelID) | + +## `ibc_dest_callback` Attributes + +| **Attribute Key** | **Attribute Values** | +| :-------------------: | :--------------------: | +| packet\_dest\_port | string (destPortID) | +| packet\_dest\_channel | string (destChannelID) | diff --git a/docs/ibc/v10.1.x/middleware/callbacks/gas.mdx b/docs/ibc/v10.1.x/middleware/callbacks/gas.mdx new file mode 100644 index 00000000..f55121b8 --- /dev/null +++ b/docs/ibc/v10.1.x/middleware/callbacks/gas.mdx @@ -0,0 +1,75 @@ +--- +title: Gas Management +description: >- + Executing arbitrary code on a chain can be arbitrarily expensive. In general, + a callback may consume infinite gas (think of a callback that loops forever). + This is problematic for a few reasons: +--- + +## Overview + +Executing arbitrary code on a chain can be arbitrarily expensive. In general, a callback may consume infinite gas (think of a callback that loops forever). 
This is problematic for a few reasons: + +* It can block the packet lifecycle. +* It can be used to consume all of the relayer's funds and gas. +* A relayer can DOS the callback execution by sending a packet with a low amount of gas. + +To prevent these, the callbacks middleware introduces two gas limits: a chain wide gas limit (`maxCallbackGas`) and a user defined gas limit. + +### Chain Wide Gas Limit + +Since the callbacks middleware does not have a keeper, it does not use a governance parameter to set the chain wide gas limit. Instead, the chain wide gas limit is passed in as a parameter to the callbacks middleware during initialization. + +```go +/ app.go + maxCallbackGas := uint64(10_000_000) + +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibccallbacks.NewIBCMiddleware(transferStack, app.MockContractKeeper, maxCallbackGas) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### User Defined Gas Limit + +The user defined gas limit is set by the IBC Actor during packet creation. The user defined gas limit is set in the packet memo. If the user defined gas limit is not set or if the user defined gas limit is greater than the chain wide gas limit, then the chain wide gas limit is used as the user defined gas limit. 
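That fallback rule can be sketched as a small pure function. This is a hypothetical helper for illustration, not the actual ibc-go implementation: an unset (zero) or too-large user defined gas limit falls back to the chain wide maximum.

```go
package main

import "fmt"

// effectiveGasLimit illustrates the rule described above: if the user
// defined gas limit is unset (zero) or greater than the chain wide limit,
// the chain wide limit is used instead. Hypothetical helper; not ibc-go code.
func effectiveGasLimit(userGasLimit, maxCallbackGas uint64) uint64 {
	if userGasLimit == 0 || userGasLimit > maxCallbackGas {
		return maxCallbackGas
	}
	return userGasLimit
}

func main() {
	const maxCallbackGas = 10_000_000
	fmt.Println(effectiveGasLimit(0, maxCallbackGas))          // unset: falls back to the chain wide limit
	fmt.Println(effectiveGasLimit(50_000, maxCallbackGas))     // within bounds: kept as-is
	fmt.Println(effectiveGasLimit(20_000_000, maxCallbackGas)) // too high: capped to the chain wide limit
}
```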
+
+```jsonc expandable
+{
+  "src_callback": {
+    "address": "callbackAddressString",
+    / optional
+    "gas_limit": "userDefinedGasLimitString",
+  },
+  "dest_callback": {
+    "address": "callbackAddressString",
+    / optional
+    "gas_limit": "userDefinedGasLimitString",
+  }
+}
+```
+
+## Gas Limit Enforcement
+
+During a callback execution, there are three types of gas limits that are enforced:
+
+* User defined gas limit
+* Chain wide gas limit
+* Context gas limit (amount of gas that the relayer has left for this execution)
+
+Chain wide gas limit is used as a maximum to the user defined gas limit as explained in the [previous section](#user-defined-gas-limit). It may also be used as a default value if no user gas limit is provided. Therefore, we can ignore the chain wide gas limit for the rest of this section and work with the minimum of the chain wide gas limit and user defined gas limit. This minimum is called the commit gas limit.
+
+The gas limit enforcement is done by executing the callback inside a cached context with a new gas meter. The gas meter is initialized with the minimum of the commit gas limit and the context gas limit. This minimum is called the execution gas limit. We say that retries are allowed if `context gas limit < commit gas limit`. Otherwise, we say that retries are not allowed.
+
+If the callback execution fails due to an out of gas error, then the middleware checks if retries are allowed. If retries are not allowed, then it recovers from the out of gas error, consumes the execution gas limit from the original context, and continues with the packet life cycle. If retries are allowed, then it panics with an out of gas error to revert the entire tx. The packet can then be submitted again with a higher gas limit. The out of gas panic descriptor is shown below.
+
+```go
+fmt.Sprintf("ibc %s callback out of gas; commitGasLimit: %d", callbackType, callbackData.CommitGasLimit)
+```
+
+If the callback execution does not fail due to an out of gas error, then the callbacks middleware does not block the packet life cycle regardless of whether retries are allowed or not.
diff --git a/docs/ibc/v10.1.x/middleware/callbacks/integration.mdx b/docs/ibc/v10.1.x/middleware/callbacks/integration.mdx
new file mode 100644
index 00000000..884aaf71
--- /dev/null
+++ b/docs/ibc/v10.1.x/middleware/callbacks/integration.mdx
@@ -0,0 +1,85 @@
+---
+title: Integration
+description: >-
+  Learn how to integrate the callbacks middleware with IBC applications. The
+  following document is intended for developers building on top of the Cosmos
+  SDK and only applies for Cosmos SDK chains.
+---
+
+Learn how to integrate the callbacks middleware with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies for Cosmos SDK chains.
+
+An example integration for an IBC v2 transfer stack using the callbacks middleware can be found in the [ibc-go module integration](/docs/ibc/v10.1.x/ibc/integration) section.
+
+The callbacks middleware is a minimal and stateless implementation of the IBC middleware interface. It does not have a keeper, nor does it store any state. It simply routes IBC middleware messages to the appropriate callback function, which is implemented by the secondary application. Therefore, it doesn't need to be registered as a module, nor does it need to be added to the module manager. It only needs to be added to the IBC application stack.
+ +## Pre-requisite Readings + +* [IBC middleware development](/docs/ibc/v10.1.x/ibc/middleware/develop) +* [IBC middleware integration](/docs/ibc/v10.1.x/ibc/middleware/integration) + +The callbacks middleware, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly. +For Cosmos SDK chains this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application. + +## Configuring an application stack with the callbacks middleware + +As mentioned in [IBC middleware development](/docs/ibc/v10.1.x/ibc/middleware/develop) an application stack may be composed of many or no middlewares that nest a base application. +These layers form the complete set of application logic that enable developers to build composable and flexible IBC application stacks. +For example, an application stack may just be a single base application like `transfer`, however, the same application stack composed with `packet-forward-middleware` and `callbacks` will nest the `transfer` base application twice by wrapping it with the callbacks module and then packet forward middleware. + +The callbacks middleware also **requires** a secondary application that will receive the callbacks to implement the [`ContractKeeper`](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/types/expected_keepers.go#L12-L100). The wasmd contract keeper has been implemented [here](https://github.com/CosmWasm/wasmd/tree/main/x/wasm/keeper) and is referenced as the `WasmKeeper`. + +### Transfer + +See below for an example of how to create an application stack using `transfer`, `packet-forward-middleware`, and `callbacks`. Feel free to omit the `packet-forward-middleware` if you do not want to use it. +The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`. 
+The in-line comments describe the execution flow of packets between the application stack and IBC core. + +```go expandable +/ Create Transfer Stack +/ SendPacket, since it is originating from the application to core IBC: +/ transferKeeper.SendPacket -> callbacks.SendPacket -> feeKeeper.SendPacket -> channel.SendPacket + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way +/ channel.RecvPacket -> fee.OnRecvPacket -> callbacks.OnRecvPacket -> transfer.OnRecvPacket + +/ transfer stack contains (from top to bottom): +/ - IBC Packet Forward Middleware +/ - IBC Callbacks Middleware +/ - Transfer + +/ initialise the gas limit for callbacks, recommended to be 10M for use with cosmwasm contracts + maxCallbackGas := uint64(10_000_000) + +/ the keepers for the callbacks middleware + wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper) + +/ create IBC module from bottom to top of stack +/ Create Transfer Stack + var transferStack porttypes.IBCModule + transferStack = transfer.NewIBCModule(app.TransferKeeper) +/ callbacks wraps the transfer stack as its base app, and uses PacketForwardKeeper as the ICS4Wrapper +/ i.e. packet-forward-middleware is higher on the stack and sits between callbacks and the ibc channel keeper +/ Since this is the lowest level middleware of the transfer stack, it should be the first entrypoint for transfer keeper's +/ WriteAcknowledgement. 
+ cbStack := ibccallbacks.NewIBCMiddleware(transferStack, app.PacketForwardKeeper, wasmStackIBCHandler, maxCallbackGas) + +transferStack = packetforward.NewIBCMiddleware( + cbStack, + app.PacketForwardKeeper, + 0, + packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp, + ) + +app.TransferKeeper.WithICS4Wrapper(cbStack) + +/ Create static IBC router, add app routes, then set and seal it + ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) + +ibcRouter.AddRoute(wasmtypes.ModuleName, wasmStackIBCHandler) + +app.IBCKeeper.SetRouter(ibcRouter) +``` diff --git a/docs/ibc/v10.1.x/middleware/callbacks/interfaces.mdx b/docs/ibc/v10.1.x/middleware/callbacks/interfaces.mdx new file mode 100644 index 00000000..dde070b7 --- /dev/null +++ b/docs/ibc/v10.1.x/middleware/callbacks/interfaces.mdx @@ -0,0 +1,179 @@ +--- +title: Interfaces +--- + +The callbacks middleware requires certain interfaces to be implemented by the underlying IBC applications and the secondary application. If you're simply wiring up the callbacks middleware to an existing IBC application stack and a secondary application such as `icacontroller` and `x/wasm`, you can skip this section. + +## Interfaces for developing the Underlying IBC Application + +### `PacketDataUnmarshaler` + +```go +/ PacketDataUnmarshaler defines an optional interface which allows a middleware to +/ request the packet data to be unmarshaled by the base application. +type PacketDataUnmarshaler interface { + / UnmarshalPacketData unmarshals the packet data into a concrete type + / ctx, portID, channelID are provided as arguments, so that (if needed) + / the packet data can be unmarshaled based on the channel version. + / The version of the underlying app is also returned. 
+	UnmarshalPacketData(ctx sdk.Context, portID, channelID string, bz []byte) (interface{}, string, error)
+}
+```
+
+The callbacks middleware **requires** the underlying ibc application to implement the [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/05-port/types/module.go#L142-L147) interface so that it can unmarshal the packet data bytes into the appropriate packet data type. This allows usage of interface functions implemented by the packet data type. The packet data type is expected to implement the `PacketDataProvider` interface (see section below), which is used to parse the callback data that is currently stored in the packet memo field for `transfer` and `ica` packets as a JSON string. See its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/ibc_module.go#L303-L313) and [`icacontroller`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/27-interchain-accounts/controller/ibc_middleware.go#L258-L268) modules for reference.
+
+If the underlying application is a middleware itself, then it can implement this interface by simply passing the function call to its underlying application. See its implementation in the [`fee middleware`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/29-fee/ibc_middleware.go#L368-L378) for reference.
+
+### `PacketDataProvider`
+
+```go
+/ PacketDataProvider defines an optional interface for retrieving custom packet data stored on behalf of another application.
+/ An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information.
+/ A short term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application.
+/ This interface standardizes that behaviour. Upon realization of the ability for middlewares to define their own packet data types, this interface will be deprecated and removed with time.
+type PacketDataProvider interface {
+	/ GetCustomPacketData returns the packet data held on behalf of another application.
+	/ The name the information is stored under should be provided as the key.
+	/ If no custom packet data exists for the key, nil should be returned.
+	GetCustomPacketData(key string) interface{}
+}
+```
+
+The callbacks middleware also **requires** the underlying ibc application's packet data type to implement the [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/exported/packet.go#L43-L52) interface. This interface is used to retrieve the callback data from the packet data (using the memo field in the case of `transfer` and `ica`). For example, see its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/types/packet.go#L85-L105) module.
+
+Since middlewares do not have packet types, they do not need to implement this interface.
+
+### `PacketData`
+
+```go
+/ PacketData defines an optional interface which an application's packet data structure may implement.
+type PacketData interface {
+	/ GetPacketSender returns the sender address of the packet data.
+	/ If the packet sender is unknown or undefined, an empty string should be returned.
+	GetPacketSender(sourcePortID string) string
+}
+```
+
+[`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/exported/packet.go#L36-L41) is an optional interface that can be implemented by the underlying ibc application's packet data type. It is used to retrieve the packet sender address from the packet data. The callbacks middleware uses this interface to retrieve the packet sender address and pass it to the callback function during a source callback.
+If this interface is not implemented, then the callbacks middleware passes an empty string as the sender address. For example, see its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/types/packet.go#L74-L83) and [`ica`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/27-interchain-accounts/types/packet.go#L78-L92) modules.
+
+This interface was added so that secondary applications can retrieve the packet sender address to perform custom authorization logic if needed.
+
+Since middlewares do not have packet types, they do not need to implement this interface.
+
+## Interfaces for developing the Secondary Application
+
+### `ContractKeeper`
+
+The callbacks middleware requires the secondary application to implement the [`ContractKeeper`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/callbacks/types/expected_keepers.go#L11-L83) interface. The contract keeper will be invoked at each step of the packet lifecycle. When a packet is sent, if callback information is provided, the contract keeper will be invoked via the `IBCSendPacketCallback`. This allows the contract keeper to prevent packet sends when callback information is provided, for example if the sender is unauthorized to perform callbacks on the given information. If the packet send is successful, the contract keeper on the destination (if present) will be invoked when a packet has been received and the acknowledgement is written; this will occur via `IBCReceivePacketCallback`. At the end of the packet lifecycle, when processing acknowledgements or timeouts, the source contract keeper will be invoked either via `IBCOnAcknowledgementPacket` or `IBCOnTimeoutPacket`. Once a packet has been sent, each step of the packet lifecycle can be processed given that a relayer sets the gas limit to be more than or equal to the required `CommitGasLimit`. State changes performed in the callback will only be committed upon successful execution.
+ +```go expandable +/ ContractKeeper defines the entry points exposed to the VM module which invokes a smart contract +type ContractKeeper interface { + / IBCSendPacketCallback is called in the source chain when a PacketSend is executed. The + / packetSenderAddress is determined by the underlying module, and may be empty if the sender is + / unknown or undefined. The contract is expected to handle the callback within the user defined + / gas limit, and handle any errors, or panics gracefully. + / This entry point is called with a cached context. If an error is returned, then the changes in + / this context will not be persisted, and the error will be propagated to the underlying IBC + / application, resulting in a packet send failure. + / + / Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + / validation on the origin of a given packet. It is recommended to perform the same validation + / on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + / defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. + / + / The version provided is the base application version for the given packet send. This allows + / contracts to determine how to unmarshal the packetData. + IBCSendPacketCallback( + cachedCtx sdk.Context, + sourcePort string, + sourceChannel string, + timeoutHeight clienttypes.Height, + timeoutTimestamp uint64, + packetData []byte, + contractAddress, + packetSenderAddress string, + version string, + ) + +error + / IBCOnAcknowledgementPacketCallback is called in the source chain when a packet acknowledgement + / is received. The packetSenderAddress is determined by the underlying module, and may be empty if + / the sender is unknown or undefined. The contract is expected to handle the callback within the + / user defined gas limit, and handle any errors, or panics gracefully. + / This entry point is called with a cached context. 
If an error is returned, then the changes in + / this context will not be persisted, but the packet lifecycle will not be blocked. + / + / Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + / validation on the origin of a given packet. It is recommended to perform the same validation + / on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + / defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. + / + / The version provided is the base application version for the given packet send. This allows + / contracts to determine how to unmarshal the packetData. + IBCOnAcknowledgementPacketCallback( + cachedCtx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, + contractAddress, + packetSenderAddress string, + version string, + ) + +error + / IBCOnTimeoutPacketCallback is called in the source chain when a packet is not received before + / the timeout height. The packetSenderAddress is determined by the underlying module, and may be + / empty if the sender is unknown or undefined. The contract is expected to handle the callback + / within the user defined gas limit, and handle any error, out of gas, or panics gracefully. + / This entry point is called with a cached context. If an error is returned, then the changes in + / this context will not be persisted, but the packet lifecycle will not be blocked. + / + / Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + / validation on the origin of a given packet. It is recommended to perform the same validation + / on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + / defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. + / + / The version provided is the base application version for the given packet send. 
This allows + / contracts to determine how to unmarshal the packetData. + IBCOnTimeoutPacketCallback( + cachedCtx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, + contractAddress, + packetSenderAddress string, + version string, + ) + +error + / IBCReceivePacketCallback is called in the destination chain when a packet acknowledgement is written. + / The contract is expected to handle the callback within the user defined gas limit, and handle any errors, + / out of gas, or panics gracefully. + / This entry point is called with a cached context. If an error is returned, then the changes in + / this context will not be persisted, but the packet lifecycle will not be blocked. + / + / The version provided is the base application version for the given packet send. This allows + / contracts to determine how to unmarshal the packetData. + IBCReceivePacketCallback( + cachedCtx sdk.Context, + packet ibcexported.PacketI, + ack ibcexported.Acknowledgement, + contractAddress string, + version string, + ) + +error +} +``` + +These are the callback entry points exposed to the secondary application. The secondary application is expected to execute its custom logic within these entry points. The callbacks middleware will handle the execution of these callbacks and revert the state if needed. + + +Note that the source callback entry points are provided with the `packetSenderAddress` and MAY choose to use this to perform validation on the origin of a given packet. It is recommended to perform the same validation on all source chain callbacks (SendPacket, AcknowledgePacket, TimeoutPacket). This defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. 
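The recommended origin validation can be sketched with a toy secondary application. Plain Go types stand in for the SDK types, only the send-packet entry point is modeled, and all names here are hypothetical, not the real `ContractKeeper` interface:

```go
package main

import (
	"errors"
	"fmt"
)

// mockContractKeeper is a toy secondary application that only allows
// callbacks when the packet sender is the callback contract itself,
// mirroring the origin validation recommended above.
type mockContractKeeper struct{}

// IBCSendPacketCallback models (in shape only) the send-packet entry point:
// returning an error here would cause the packet send to fail.
func (mockContractKeeper) IBCSendPacketCallback(contractAddress, packetSenderAddress string) error {
	if packetSenderAddress != contractAddress {
		return errors.New("packet sender is not the callback contract")
	}
	return nil
}

func main() {
	k := mockContractKeeper{}
	fmt.Println(k.IBCSendPacketCallback("cosmos1contract", "cosmos1contract")) // <nil>
	fmt.Println(k.IBCSendPacketCallback("cosmos1contract", "cosmos1someoneelse"))
}
```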
+ diff --git a/docs/ibc/v10.1.x/middleware/callbacks/overview.mdx b/docs/ibc/v10.1.x/middleware/callbacks/overview.mdx new file mode 100644 index 00000000..440c4107 --- /dev/null +++ b/docs/ibc/v10.1.x/middleware/callbacks/overview.mdx @@ -0,0 +1,49 @@ +--- +title: Overview +description: >- + Learn about what the Callbacks Middleware is, and how to build custom modules + that utilize the Callbacks Middleware functionality +--- + +Learn about what the Callbacks Middleware is, and how to build custom modules that utilize the Callbacks Middleware functionality + +## What is the Callbacks Middleware? + +IBC was designed with callbacks between core IBC and IBC applications. IBC apps would send a packet to core IBC, and receive a callback on every step of that packet's lifecycle. This allows IBC applications to be built on top of core IBC, and to be able to execute custom logic on packet lifecycle events (e.g. unescrow tokens for ICS-20). + +This setup worked well for off-chain users interacting with IBC applications. However, we are now seeing the desire for secondary applications (e.g. smart contracts, modules) to call into IBC apps as part of their state machine logic and then do some actions on packet lifecycle events. + +The Callbacks Middleware allows for this functionality by allowing the packets of the underlying IBC applications to register callbacks to secondary applications for lifecycle events. These callbacks are then executed by the Callbacks Middleware when the corresponding packet lifecycle event occurs. + +After much discussion, the design was expanded to an ADR, and the Callbacks Middleware is an implementation of that ADR. + +## Concepts + +Callbacks Middleware was built with smart contracts in mind, but can be used by any secondary application that wants to allow IBC packets to call into it. Think of the Callbacks Middleware as a bridge between core IBC and a secondary application. 
+ +We have the following definitions: + +* `Underlying IBC application`: The IBC application that is wrapped by the Callbacks Middleware. This is the IBC application that is actually sending and receiving packet lifecycle events from core IBC. For example, the transfer module, or the ICA controller submodule. +* `IBC Actor`: IBC Actor is an on-chain or off-chain entity that can initiate a packet on the underlying IBC application. For example, a smart contract, an off-chain user, or a module that sends a transfer packet are all IBC Actors. +* `Secondary application`: The application that is being called into by the Callbacks Middleware for packet lifecycle events. This is the application that is receiving the callback directly from the Callbacks Middleware module. For example, the `x/wasm` module. +* `Callback Actor`: The on-chain smart contract or module that is registered to receive callbacks from the secondary application. For example, a Wasm smart contract (gatekeeped by the `x/wasm` module). Note that the Callback Actor is not necessarily the same as the IBC Actor. For example, an off-chain user can initiate a packet on the underlying IBC application, but the Callback Actor could be a smart contract. The secondary application may want to check that the IBC Actor is allowed to call into the Callback Actor, for example, by checking that the IBC Actor is the same as the Callback Actor. +* `Callback Address`: Address of the Callback Actor. This is the address that the secondary application will call into when a packet lifecycle event occurs. For example, the address of the Wasm smart contract. +* `Maximum gas limit`: The maximum amount of gas that the Callbacks Middleware will allow the secondary application to use when it executes its custom logic. +* `User defined gas limit`: The amount of gas that the IBC Actor wants to allow the secondary application to use when it executes its custom logic. 
This is the gas limit that the IBC Actor specifies when it sends a packet to the underlying IBC application. This cannot be greater than the maximum gas limit. + +Think of the secondary application as a bridge between the Callbacks Middleware and the Callback Actor. The secondary application is responsible for executing the custom logic of the Callback Actor when a packet lifecycle event occurs. The secondary application is also responsible for checking that the IBC Actor is allowed to call into the Callback Actor. + +Note that it is possible that the IBC Actor, Secondary Application, and Callback Actor are all the same entity. In which case, the Callback Address should be the secondary application's module address. + +The following diagram shows how a typical `RecvPacket`, `AcknowledgementPacket`, and `TimeoutPacket` execution flow would look like: +![callbacks-middleware](/docs/ibc/images/04-middleware/02-callbacks/images/callbackflow.svg) + +And the following diagram shows how a typical `SendPacket` and `WriteAcknowledgement` execution flow would look like: +![callbacks-middleware](/docs/ibc/images/04-middleware/02-callbacks/images/ics4-callbackflow.svg) + +## Known Limitations + +* Callbacks are always executed after the underlying IBC application has executed its logic. +* Maximum gas limit is hardcoded manually during wiring. It requires a coordinated upgrade to change the maximum gas limit. +* The receive packet callback does not pass the relayer address to the secondary application. This is so that we can use the same callback for both synchronous and asynchronous acknowledgements. +* The receive packet callback does not pass IBC Actor's address, this is because the IBC Actor lives in the counterparty chain and cannot be trusted. 
diff --git a/docs/ibc/v10.1.x/migrations/migration.template.mdx b/docs/ibc/v10.1.x/migrations/migration.template.mdx
new file mode 100644
index 00000000..cc687cdf
--- /dev/null
+++ b/docs/ibc/v10.1.x/migrations/migration.template.mdx
@@ -0,0 +1,30 @@
+---
+description: This guide provides instructions for migrating to a new version of ibc-go.
+---
+
+This guide provides instructions for migrating to a new version of ibc-go.
+
+There are four sections based on the four potential user groups of this document:
+
+* [Chains](#chains)
+* [IBC Apps](#ibc-apps)
+* [Relayers](#relayers)
+* [IBC Light Clients](#ibc-light-clients)
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases.
+
+## Chains
+
+* No relevant changes were made in this release.
+
+## IBC Apps
+
+* No relevant changes were made in this release.
+
+## Relayers
+
+* No relevant changes were made in this release.
+
+## IBC Light Clients
+
+* No relevant changes were made in this release.
diff --git a/docs/ibc/v10.1.x/migrations/sdk-to-v1.mdx b/docs/ibc/v10.1.x/migrations/sdk-to-v1.mdx
new file mode 100644
index 00000000..cf0f72cd
--- /dev/null
+++ b/docs/ibc/v10.1.x/migrations/sdk-to-v1.mdx
@@ -0,0 +1,194 @@
+---
+title: SDK v0.43 to IBC-Go v1
+description: >-
+  This file contains information on how to migrate from the IBC module contained
+  in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository
+  based on the 0.44 SDK version.
+---
+
+This file contains information on how to migrate from the IBC module contained in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository based on the 0.44 SDK version.
+
+## Import Changes
+
+The most obvious changes are the import name changes.
We need to change: + +* applications -> apps +* cosmos-sdk/x/ibc -> ibc-go + +On my GNU/Linux based machine I used the following commands, executed in order: + +```shell +grep -RiIl 'cosmos-sdk\/x\/ibc\/applications' | xargs sed -i 's/cosmos-sdk\/x\/ibc\/applications/ibc-go\/modules\/apps/g' +``` + +```shell +grep -RiIl 'cosmos-sdk\/x\/ibc' | xargs sed -i 's/cosmos-sdk\/x\/ibc/ibc-go\/modules/g' +``` + +ref: [explanation of the above commands](https://www.internalpointers.com/post/linux-find-and-replace-text-multiple-files) + +Executing these commands out of order will cause issues. + +Feel free to use your own method for modifying import names. + +NOTE: Updating to the `v0.44.0` SDK release and then running `go mod tidy` will cause a downgrade to `v0.42.0` in order to support the old IBC import paths. +Update the import paths before running `go mod tidy`. + +## Chain Upgrades + +Chains may choose to upgrade via an upgrade proposal or genesis upgrades. Both in-place store migrations and genesis migrations are supported. + +**WARNING**: Please read at least the quick guide for [IBC client upgrades](/docs/ibc/v10.1.x/ibc/upgrades/quick-guide) before upgrading your chain. It is highly recommended you do not change the chain-ID during an upgrade, otherwise you must follow the IBC client upgrade instructions. + +Both in-place store migrations and genesis migrations will: + +* migrate the solo machine client state from v1 to v2 protobuf definitions +* prune all solo machine consensus states +* prune all expired tendermint consensus states + +Chains must set a new connection parameter during either in-place store migrations or genesis migration. The new parameter, max expected block time, is used to enforce packet processing delays on the receiving end of an IBC packet flow. Check out the [docs](https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2) for more information.
+ +### In-Place Store Migrations + +The new chain binary will need to run migrations in the upgrade handler. The fromVM (previous module version) for the IBC module should be 1. This will allow migrations to be run for IBC updating the version from 1 to 2. + +Ex: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("my-upgrade-proposal", + func(ctx sdk.Context, _ upgradetypes.Plan, _ module.VersionMap) (module.VersionMap, error) { + / set max expected block time parameter. Replace the default with your expected value + / https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 + app.IBCKeeper.ConnectionKeeper.SetParams(ctx, ibcconnectiontypes.DefaultParams()) + fromVM := map[string]uint64{ + ... / other modules + "ibc": 1, + ... +} + +return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +### Genesis Migrations + +To perform genesis migrations, the following code must be added to your existing migration code. + +```go expandable +/ add imports as necessary +import ( + + ibcv100 "github.com/cosmos/ibc-go/modules/core/legacy/v100" + ibchost "github.com/cosmos/ibc-go/modules/core/24-host" +) + +... + +/ add in migrate cmd function +/ expectedTimePerBlock is a new connection parameter +/ https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 +newGenState, err = ibcv100.MigrateGenesis(newGenState, clientCtx, *genDoc, expectedTimePerBlock) + if err != nil { + return err +} +``` + +**NOTE:** The genesis chain-id, time and height MUST be updated before migrating IBC, otherwise the tendermint consensus state will not be pruned. + +## IBC Keeper Changes + +The IBC Keeper now takes in the Upgrade Keeper. 
Please add the chains' Upgrade Keeper after the Staking Keeper: + +```diff +/ Create IBC Keeper +app.IBCKeeper = ibckeeper.NewKeeper( +- appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, scopedIBCKeeper, ++ appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, +) +``` + +## Proposals + +### UpdateClientProposal + +The `UpdateClient` has been modified to take in two client-identifiers and one initial height. + +### UpgradeProposal + +A new IBC proposal type has been added, `UpgradeProposal`. This handles an IBC (breaking) Upgrade. +The previous `UpgradedClientState` field in an Upgrade `Plan` has been deprecated in favor of this new proposal type. + +### Proposal Handler Registration + +The `ClientUpdateProposalHandler` has been renamed to `ClientProposalHandler`. +It handles both `UpdateClientProposal`s and `UpgradeProposal`s. + +Add this import: + +```diff ++ ibcclienttypes "github.com/cosmos/ibc-go/modules/core/02-client/types" +``` + +Please ensure the governance module adds the correct route: + +```diff +- AddRoute(ibchost.RouterKey, ibcclient.NewClientUpdateProposalHandler(app.IBCKeeper.ClientKeeper)) ++ AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper)) +``` + +NOTE: Simapp registration was incorrect in the 0.41.x releases. The `UpdateClient` proposal handler should be registered with the router key belonging to `ibc-go/core/02-client/types` +as shown in the diffs above. 
+ +### Proposal CLI Registration + +Please ensure both proposal type CLI commands are registered on the governance module by adding the following arguments to `gov.NewAppModuleBasic()`: + +Add the following import: + +```diff ++ ibcclientclient "github.com/cosmos/ibc-go/modules/core/02-client/client" +``` + +Register the cli commands: + +```diff +gov.NewAppModuleBasic( + paramsclient.ProposalHandler, distrclient.ProposalHandler, upgradeclient.ProposalHandler, upgradeclient.CancelProposalHandler, ++ ibcclientclient.UpdateClientProposalHandler, ibcclientclient.UpgradeProposalHandler, +), +``` + +REST routes are not supported for these proposals. + +## Proto file changes + +The gRPC querier service endpoints have changed slightly. The previous files used the `v1beta1` gRPC route; this has been updated to `v1`. + +The solo machine has replaced the `FrozenSequence` `uint64` field with an `IsFrozen` boolean field. The package has been bumped from `v1` to `v2`. + +## IBC callback changes + +### OnRecvPacket + +Application developers need to update their `OnRecvPacket` callback logic. + +The `OnRecvPacket` callback has been modified to only return the acknowledgement. The acknowledgement returned must implement the `Acknowledgement` interface. The acknowledgement should indicate if it represents a successful processing of a packet by returning true on `Success()` and false in all other cases. A return value of false on `Success()` will result in all state changes which occurred in the callback being discarded. More information can be found in the [documentation](/docs/ibc/v10.1.x/ibc/apps/apps#receiving-packets). + +The `OnRecvPacket`, `OnAcknowledgementPacket`, and `OnTimeoutPacket` callbacks are now passed the `sdk.AccAddress` of the relayer who relayed the IBC packet. Applications may use or ignore this information.
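The `Success()` contract can be illustrated with a self-contained sketch. The interface and types below are stand-ins mirroring the shape of ibc-go's `Acknowledgement`, not the actual definitions:

```go
package main

import "fmt"

// Acknowledgement mirrors the interface an OnRecvPacket callback must return:
// Success() reporting false tells core IBC to discard the callback's state changes.
type Acknowledgement interface {
	Success() bool
	Acknowledgement() []byte
}

// resultAck represents successful packet processing.
type resultAck struct{ result []byte }

func (a resultAck) Success() bool           { return true }
func (a resultAck) Acknowledgement() []byte { return a.result }

// errorAck represents failed packet processing.
type errorAck struct{ err string }

func (a errorAck) Success() bool           { return false }
func (a errorAck) Acknowledgement() []byte { return []byte(a.err) }

// commitOrRevert models core IBC's decision: state changes made in the
// callback survive only when the returned acknowledgement reports success.
func commitOrRevert(ack Acknowledgement) string {
	if ack.Success() {
		return "state committed"
	}
	return "state discarded"
}

func main() {
	fmt.Println(commitOrRevert(resultAck{result: []byte{0x01}}))
	fmt.Println(commitOrRevert(errorAck{err: "invalid packet data"}))
}
```

The key point is that the application never reverts state itself; returning an acknowledgement whose `Success()` is false is what triggers the rollback.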
+ +## IBC Event changes + +The `packet_data` attribute has been deprecated in favor of `packet_data_hex`, in order to provide standardized encoding/decoding of packet data in events. While the `packet_data` event still exists, all relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_data_hex` as soon as possible. + +The `packet_ack` attribute has also been deprecated in favor of `packet_ack_hex` for the same reason stated above. All relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_ack_hex` as soon as possible. + +The `consensus_height` attribute has been removed in the Misbehaviour event emitted. IBC clients no longer have a frozen height and misbehaviour does not necessarily have an associated height. + +## Relevant SDK changes + +* (codec) [#9226](https://github.com/cosmos/cosmos-sdk/pull/9226) Rename codec interfaces and methods, to follow a general Go interfaces: + * `codec.Marshaler` → `codec.Codec` (this defines objects which serialize other objects) + * `codec.BinaryMarshaler` → `codec.BinaryCodec` + * `codec.JSONMarshaler` → `codec.JSONCodec` + * Removed `BinaryBare` suffix from `BinaryCodec` methods (`MarshalBinaryBare`, `UnmarshalBinaryBare`, ...) + * Removed `Binary` infix from `BinaryCodec` methods (`MarshalBinaryLengthPrefixed`, `UnmarshalBinaryLengthPrefixed`, ...) diff --git a/docs/ibc/v10.1.x/migrations/support-denoms-with-slashes.mdx b/docs/ibc/v10.1.x/migrations/support-denoms-with-slashes.mdx new file mode 100644 index 00000000..0a92b46f --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/support-denoms-with-slashes.mdx @@ -0,0 +1,90 @@ +--- +title: Support transfer of coins whose base denom contains slashes +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. 
+--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +This document is necessary when chains are upgrading from a version that does not support base denoms with slashes (e.g. v3.0.0) to a version that does (e.g. v3.2.0). All versions of ibc-go smaller than v1.5.0 for the v1.x release line, v2.3.0 for the v2.x release line, and v3.1.0 for the v3.x release line do **NOT** support IBC token transfers of coins whose base denoms contain slashes. Therefore the in-place or genesis migrations described in this document are required when upgrading. + +If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect. + +E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`. + +This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes. + +To do so, chain binaries should include a migration script that will run when the chain upgrades from not supporting base denominations with slashes to supporting base denominations with slashes. + +## Chains + +### ICS20 - Transfer + +The transfer module will now support slashes in base denoms, so we must iterate over current traces to check if any of them are incorrectly formed and correct the trace information.
+ +### Upgrade Proposal + +```go +app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / transfer module consensus version has been bumped to 2 + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +This is only necessary if there are denom traces in the store with incorrect trace information from previously received coins that had a slash in the base denom. However, it is recommended that any chain upgrading to support base denominations with slashes runs this code for safety. + +For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1680). + +### Genesis Migration + +If the chain chooses to add support for slashes in base denoms via genesis export, then the trace information must be corrected during genesis migration. + +The migration code required may look like: + +```go expandable +func migrateGenesisSlashedDenomsUpgrade(appState genutiltypes.AppMap, clientCtx client.Context, genDoc *tmtypes.GenesisDoc) (genutiltypes.AppMap, error) { + if appState[ibctransfertypes.ModuleName] != nil { + transferGenState := &ibctransfertypes.GenesisState{ +} + +clientCtx.Codec.MustUnmarshalJSON(appState[ibctransfertypes.ModuleName], transferGenState) + substituteTraces := make([]ibctransfertypes.DenomTrace, len(transferGenState.DenomTraces)) + for i, dt := range transferGenState.DenomTraces { + / replace all previous traces with the latest trace if validation passes + / note most traces will have same value + newTrace := ibctransfertypes.ParseDenomTrace(dt.GetFullDenomPath()) + if err := newTrace.Validate(); err != nil { + substituteTraces[i] = dt +} + +else { + substituteTraces[i] = newTrace +} + +} + +transferGenState.DenomTraces = substituteTraces + + / delete old genesis state + delete(appState, ibctransfertypes.ModuleName) + + / set new ibc transfer genesis state + 
appState[ibctransfertypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(transferGenState) +} + +return appState, nil +} +``` + +For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1528). diff --git a/docs/ibc/v10.1.x/migrations/v1-to-v2.mdx b/docs/ibc/v10.1.x/migrations/v1-to-v2.mdx new file mode 100644 index 00000000..ddd1bf1b --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v1-to-v2.mdx @@ -0,0 +1,59 @@ +--- +title: IBC-Go v1 to v2 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go -> github.com/cosmos/ibc-go/v2 +``` + +## Chains + +* No relevant changes were made in this release. + +## IBC Apps + +A new function has been added to the app module interface: + +```go expandable +/ NegotiateAppVersion performs application version negotiation given the provided channel ordering, connectionID, portID, counterparty and proposed version. +/ An error is returned if version negotiation cannot be performed. 
For example, an application module implementing this interface +/ may decide to return an error in the event of the proposed version being incompatible with its own +NegotiateAppVersion( + ctx sdk.Context, + order channeltypes.Order, + connectionID string, + portID string, + counterparty channeltypes.Counterparty, + proposedVersion string, +) (version string, err error) +``` + +This function should perform application version negotiation and return the negotiated version. If the version cannot be negotiated, an error should be returned. This function is only used on the client side. + +### `sdk.Result` removed + +`sdk.Result` has been removed as a return value in the application callbacks. Previously it was being discarded by core IBC and was thus unused. + +## Relayers + +A new gRPC has been added to 05-port, `AppVersion`. It returns the negotiated app version. This function should be used for the `ChanOpenTry` channel handshake step to decide upon the application version which should be set in the channel. + +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v10.1.x/migrations/v2-to-v3.mdx b/docs/ibc/v10.1.x/migrations/v2-to-v3.mdx new file mode 100644 index 00000000..5a41f895 --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v2-to-v3.mdx @@ -0,0 +1,187 @@ +--- +title: IBC-Go v2 to v3 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here.
+ +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v2 -> github.com/cosmos/ibc-go/v3 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS20 + +The `transferkeeper.NewKeeper(...)` now takes in an ICS4Wrapper. +The ICS4Wrapper should be the IBC Channel Keeper unless ICS 20 is being connected to a middleware application. + +### ICS27 + +ICS27 Interchain Accounts has been added as a supported IBC application of ibc-go. +Please see the [ICS27 documentation](/docs/ibc/v10.1.x/apps/interchain-accounts/overview) for more information. + +### Upgrade Proposal + +If the chain will adopt ICS27, it must set the appropriate params during the execution of the upgrade handler in `app.go`: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("v3", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / set the ICS27 consensus version so InitGenesis is not run + fromVM[icatypes.ModuleName] = icamodule.ConsensusVersion() + + / create ICS27 Controller submodule params + controllerParams := icacontrollertypes.Params{ + ControllerEnabled: true, +} + + / create ICS27 Host submodule params + hostParams := icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... +}, +} + + / initialize ICS27 module + icamodule.InitModule(ctx, controllerParams, hostParams) + + ... + + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +The host and controller submodule params only need to be set if the chain integrates those submodules. +For example, if a chain chooses not to integrate a controller submodule, it may pass empty params into `InitModule`. 
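The host-only case can be sketched with stand-in types — the structs below are illustrative placeholders for the real ICS27 controller and host `Params`, not the actual definitions. Leaving the controller params at their zero value keeps the controller submodule disabled:

```go
package main

import "fmt"

// Stand-in param structs; the real types live in the ICS27 controller and
// host submodule packages. Zero values illustrate the "empty params" case.
type ControllerParams struct{ ControllerEnabled bool }

type HostParams struct {
	HostEnabled   bool
	AllowMessages []string
}

func main() {
	// Host-only chain: controller params left empty, so ControllerEnabled
	// stays false and the controller submodule remains disabled.
	controllerParams := ControllerParams{}
	hostParams := HostParams{
		HostEnabled:   true,
		AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend"},
	}
	fmt.Println(controllerParams.ControllerEnabled, hostParams.HostEnabled)
}
```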
+ +#### Add `StoreUpgrades` for ICS27 module + +For ICS27 it is also necessary to [manually add store upgrades](https://docs.cosmos.network/main/learn/advanced/upgrade#add-storeupgrades-for-new-modules) for the new ICS27 module and then configure the store loader to apply those upgrades in `app.go`: + +```go +if upgradeInfo.Name == "v3" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := store.StoreUpgrades{ + Added: []string{ + icacontrollertypes.StoreKey, icahosttypes.StoreKey +}, +} + +app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +``` + +This ensures that the new module's stores are added to the multistore before the migrations begin. +The host and controller submodule keys only need to be added if the chain integrates those submodules. +For example, if a chain chooses not to integrate a controller submodule, it does not need to add the controller key to the `Added` field. + +### Genesis migrations + +If the chain will adopt ICS27 and chooses to upgrade via a genesis export, then the ICS27 parameters must be set during genesis migration. + +The migration code required may look like: + +```go expandable +controllerGenesisState := icatypes.DefaultControllerGenesis() + / overwrite parameters as desired + controllerGenesisState.Params = icacontrollertypes.Params{ + ControllerEnabled: true, +} + hostGenesisState := icatypes.DefaultHostGenesis() + / overwrite parameters as desired + hostGenesisState.Params = icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... 
+}, +} + icaGenesisState := icatypes.NewGenesisState(controllerGenesisState, hostGenesisState) + + / set new ics27 genesis state + appState[icatypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(icaGenesisState) +``` + +### Ante decorator + +The field of type `channelkeeper.Keeper` in the `AnteDecorator` structure has been replaced with a field of type `*keeper.Keeper`: + +```diff +type AnteDecorator struct { +- k channelkeeper.Keeper ++ k *keeper.Keeper +} + +- func NewAnteDecorator(k channelkeeper.Keeper) AnteDecorator { ++ func NewAnteDecorator(k *keeper.Keeper) AnteDecorator { + return AnteDecorator{k: k} +} +``` + +## IBC Apps + +### `OnChanOpenTry` must return negotiated application version + +The `OnChanOpenTry` application callback has been modified. +The return signature now includes the application version. +IBC applications must perform application version negotiation in `OnChanOpenTry` using the counterparty version. +The negotiated application version then must be returned in `OnChanOpenTry` to core IBC. +Core IBC will set this version in the TRYOPEN channel. + +### `OnChanOpenAck` will take additional `counterpartyChannelID` argument + +The `OnChanOpenAck` application callback has been modified. +The arguments now include the counterparty channel id. + +### `NegotiateAppVersion` removed from `IBCModule` interface + +Previously this logic was handled by the `NegotiateAppVersion` function. +Relayers would query this function before calling `ChanOpenTry`. +Applications would then need to verify that the passed in version was correct. +Now applications will perform this version negotiation during the channel handshake, thus removing the need for `NegotiateAppVersion`. + +### Channel state will not be set before application callback + +The channel handshake logic has been reorganized within core IBC. +Channel state will not be set in state after the application callback is performed. 
+Applications must rely only on the passed in channel parameters instead of querying the channel keeper for channel state. + +### IBC application callbacks moved from `AppModule` to `IBCModule` + +Previously, IBC module callbacks were a part of the `AppModule` type. +The recommended approach is to create an `IBCModule` type and move the IBC module callbacks from `AppModule` to `IBCModule` in a separate file `ibc_module.go`. + +The mock module Go API has been broken in this release by applying the above format. +The IBC module callbacks have been moved from the mock module's `AppModule` into a new type `IBCModule`. + +As part of this release, the mock module now supports middleware testing. Please see the [README](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/README.md#middleware-testing) for more information. + +Please review the [mock](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/mock/ibc_module.go) and [transfer](https://github.com/cosmos/ibc-go/blob/v3.0.0/modules/apps/transfer/ibc_module.go) modules as examples. Additionally, [simapp](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/simapp/app.go) provides an example of how `IBCModule` types should now be added to the IBC router in favour of `AppModule`. + +### IBC testing package + +`TestChain`s are now created with chainIDs beginning from an index of 1. Any calls to `GetChainID(0)` will now fail. Please increment all calls to `GetChainID` by 1. + +## Relayers + +`AppVersion` gRPC has been removed. +The `version` string in `MsgChanOpenTry` has been deprecated and will be ignored by core IBC. +Relayers no longer need to determine the version to use on the `ChanOpenTry` step. +IBC applications will determine the correct version using the counterparty version. + +## IBC Light Clients + +The `GetProofSpecs` function has been removed from the `ClientState` interface. This function was previously unused by core IBC. Light clients which don't use this function may remove it.
diff --git a/docs/ibc/v10.1.x/migrations/v3-to-v4.mdx b/docs/ibc/v10.1.x/migrations/v3-to-v4.mdx new file mode 100644 index 00000000..9b79e667 --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v3-to-v4.mdx @@ -0,0 +1,156 @@ +--- +title: IBC-Go v3 to v4 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v3 -> github.com/cosmos/ibc-go/v4 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS27 - Interchain Accounts + +The controller submodule now implements the 05-port `Middleware` interface instead of the 05-port `IBCModule` interface. Chains that integrate the controller submodule need to create it with the `NewIBCMiddleware` constructor function. For example: + +```diff +- icacontroller.NewIBCModule(app.ICAControllerKeeper, icaAuthIBCModule) ++ icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) +``` + +where `icaAuthIBCModule` is the Interchain Accounts authentication IBC Module. + +### ICS29 - Fee Middleware + +The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly.
+ +Please read the Fee Middleware integration documentation for an in-depth guide on how to configure the module correctly in order to incentivize IBC packets. + +Take a look at the following diff for an [example setup](https://github.com/cosmos/ibc-go/pull/1432/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08aL366) of how to incentivize ics27 channels. + +### Migration to fix support for base denoms with slashes + +As part of [v1.5.0](https://github.com/cosmos/ibc-go/releases/tag/v1.5.0), [v2.3.0](https://github.com/cosmos/ibc-go/releases/tag/v2.3.0) and [v3.1.0](https://github.com/cosmos/ibc-go/releases/tag/v3.1.0), a [migration handler code sample was documented](/docs/ibc/v10.1.x/migrations/support-denoms-with-slashes#upgrade-proposal) that needs to run in order to correct the trace information of coins transferred using ICS20 whose base denom contains slashes. + +Based on feedback from the community, we now provide an improved solution to run the same migration that does not require copying a large piece of code over from the migration document, but instead requires only adding a one-line upgrade handler. + +If the chain will migrate to supporting base denoms with slashes, it must set the appropriate params during the execution of the upgrade handler in `app.go`: + +```go +app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / transfer module consensus version has been bumped to 2 + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect. + +E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful.
However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`. + +This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes. + +## IBC Apps + +### ICS03 - Connection + +Crossing hellos have been removed from 03-connection handshake negotiation. +`PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated and is no longer used by core IBC. + +`NewMsgConnectionOpenTry` no longer takes in the `PreviousConnectionId` as crossing hellos are no longer supported. A non-empty `PreviousConnectionId` will fail basic validation for this message. + +### ICS04 - Channel + +The `WriteAcknowledgement` API now takes the `exported.Acknowledgement` type instead of passing in the acknowledgement byte array directly. +This is an API breaking change and as such IBC application developers will have to update any calls to `WriteAcknowledgement`. + +The `OnChanOpenInit` application callback has been modified. +The return signature now includes the application version as detailed in the latest IBC [spec changes](https://github.com/cosmos/ibc/pull/629). + +The `NewErrorAcknowledgement` method signature has changed. +It now accepts an `error` rather than a `string`. This was done in order to prevent accidental state changes. +All error acknowledgements now contain a deterministic ABCI code and error message. It is the responsibility of the application developer to emit error details in events. + +Crossing hellos have been removed from 04-channel handshake negotiation. +IBC applications no longer need to account for already claimed capabilities in the `OnChanOpenTry` callback. The capability provided by core IBC must be able to be claimed with error. +`PreviousChannelId` in `MsgChannelOpenTry` has been deprecated and is no longer used by core IBC.
+ +`NewMsgChannelOpenTry` no longer takes in the `PreviousChannelId` as crossing hellos are no longer supported. A non-empty `PreviousChannelId` will fail basic validation for this message. + +### ICS27 - Interchain Accounts + +The `RegisterInterchainAccount` API has been modified to include an additional `version` argument. This change has been made in order to support ICS29 fee middleware, for relayer incentivization of ICS27 packets. +Consumers of the `RegisterInterchainAccount` are now expected to build the appropriate JSON encoded version string themselves and pass it accordingly. +This should be constructed within the interchain accounts authentication module which leverages the APIs exposed via the interchain accounts `controllerKeeper`. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, 
+ TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(feeEnabledVersion)); err != nil { + return err +} +``` + +## Relayers + +When using the `DenomTrace` gRPC, the full IBC denomination with the `ibc/` prefix may now be passed in. + +Crossing hellos are no longer supported by core IBC for 03-connection and 04-channel. The handshake should be completed in the logical 4 step process (INIT, TRY, ACK, CONFIRM). diff --git a/docs/ibc/v10.1.x/migrations/v4-to-v5.mdx b/docs/ibc/v10.1.x/migrations/v4-to-v5.mdx new file mode 100644 index 00000000..81b549de --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v4-to-v5.mdx @@ -0,0 +1,440 @@ +--- +title: IBC-Go v4 to v5 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* [Chains](#chains) +* [IBC Apps](#ibc-apps) +* [Relayers](#relayers) +* [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. 
+ +```go +github.com/cosmos/ibc-go/v4 -> github.com/cosmos/ibc-go/v5 +``` + +## Chains + +### Ante decorator + +The `AnteDecorator` type in `core/ante` has been renamed to `RedundantRelayDecorator` (and the corresponding constructor function to `NewRedundantRelayDecorator`). Therefore in the function that creates the instance of the `sdk.AnteHandler` type (e.g. `NewAnteHandler`) the change would be like this: + +```diff expandable +func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { + / parameter validation + + anteDecorators := []sdk.AnteDecorator{ + / other ante decorators +- ibcante.NewAnteDecorator(opts.IBCkeeper), ++ ibcante.NewRedundantRelayDecorator(options.IBCKeeper), + } + + return sdk.ChainAnteDecorators(anteDecorators...), nil +} +``` + +The `AnteDecorator` was actually renamed twice, but in [this PR](https://github.com/cosmos/ibc-go/pull/1820) you can see the changes made for the final rename. + +## IBC Apps + +### Core + +The `key` parameter of the `NewKeeper` function in `modules/core/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + stakingKeeper clienttypes.StakingKeeper, + upgradeKeeper clienttypes.UpgradeKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) *Keeper +``` + +The `RegisterRESTRoutes` function in `modules/core` has been removed. 
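The `sdk.StoreKey` → `storetypes.StoreKey` change recurs across many of the `NewKeeper` functions in this and the following sections. The underlying pattern — a keeper holding its store key behind an interface rather than a concrete type — can be sketched without any SDK dependencies (all names below are simplified stand-ins, not the real SDK types):

```go
package main

import "fmt"

// StoreKey is a stand-in for the storetypes.StoreKey interface: keepers hold
// the key and use it to access their own isolated store.
type StoreKey interface {
	Name() string
}

// KVStoreKey is a concrete key type, analogous to storetypes.KVStoreKey.
type KVStoreKey struct{ name string }

func (k *KVStoreKey) Name() string { return k.name }

// Keeper shows the post-migration constructor shape: it accepts the
// interface type rather than a concrete key type.
type Keeper struct {
	key StoreKey
}

func NewKeeper(key StoreKey) Keeper {
	return Keeper{key: key}
}

func main() {
	k := NewKeeper(&KVStoreKey{name: "ibc"})
	fmt.Println(k.key.Name()) // ibc
}
```

Because the constructor accepts an interface, the concrete key type can move between packages (as it did from `sdk` to `store/types`) without changing the keeper's shape.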
+ +### ICS03 - Connection + +The `key` parameter of the `NewKeeper` function in `modules/core/03-connection/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ck types.ClientKeeper +) Keeper +``` + +### ICS04 - Channel + +The function `NewPacketId` in `modules/core/04-channel/types` has been renamed to `NewPacketID`: + +```diff +- func NewPacketId( ++ func NewPacketID( + portID, + channelID string, + seq uint64 +) PacketId +``` + +The `key` parameter of the `NewKeeper` function in `modules/core/04-channel/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + clientKeeper types.ClientKeeper, + connectionKeeper types.ConnectionKeeper, + portKeeper types.PortKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +### ICS20 - Transfer + +The `key` parameter of the `NewKeeper` function in `modules/apps/transfer/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff expandable +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ics4Wrapper types.ICS4Wrapper, + channelKeeper types.ChannelKeeper, + portKeeper types.PortKeeper, + authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +The `amount` parameter of function `GetTransferCoin` in `modules/apps/transfer/types` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func GetTransferCoin( + portID, channelID, baseDenom string, +- amount sdk.Int ++ amount math.Int 
) sdk.Coin
```

The `RegisterRESTRoutes` function in `modules/apps/transfer` has been removed.

### ICS27 - Interchain Accounts

The `key` and `msgRouter` parameters of the `NewKeeper` functions in

* `modules/apps/27-interchain-accounts/controller/keeper`
* and `modules/apps/27-interchain-accounts/host/keeper`

have changed type. The `key` parameter is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`), and the `msgRouter` parameter is now of type `*icatypes.MessageRouter` (where `icatypes` is an import alias for `"github.com/cosmos/ibc-go/v5/modules/apps/27-interchain-accounts/types"`):

```diff expandable
/ NewKeeper creates a new interchain accounts controller Keeper instance
func NewKeeper(
  cdc codec.BinaryCodec,
- key sdk.StoreKey,
+ key storetypes.StoreKey,
  paramSpace paramtypes.Subspace,
  ics4Wrapper icatypes.ICS4Wrapper,
  channelKeeper icatypes.ChannelKeeper,
  portKeeper icatypes.PortKeeper,
  scopedKeeper capabilitykeeper.ScopedKeeper,
- msgRouter *baseapp.MsgServiceRouter,
+ msgRouter *icatypes.MessageRouter,
) Keeper
```

```diff expandable
/ NewKeeper creates a new interchain accounts host Keeper instance
func NewKeeper(
  cdc codec.BinaryCodec,
- key sdk.StoreKey,
+ key storetypes.StoreKey,
  paramSpace paramtypes.Subspace,
  channelKeeper icatypes.ChannelKeeper,
  portKeeper icatypes.PortKeeper,
  accountKeeper icatypes.AccountKeeper,
  scopedKeeper capabilitykeeper.ScopedKeeper,
- msgRouter *baseapp.MsgServiceRouter,
+ msgRouter *icatypes.MessageRouter,
) Keeper
```

The new `MessageRouter` interface is defined as:

```go
type MessageRouter interface {
	Handler(msg sdk.Msg) baseapp.MsgServiceHandler
}
```

The `RegisterRESTRoutes` function in `modules/apps/27-interchain-accounts` has been removed.
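The routing that the `MessageRouter` interface enables can be sketched without SDK dependencies. Everything below (`Msg`, `MsgServiceHandler`, `Router`, `MsgRegisterAccount`) is a simplified, hypothetical stand-in for the corresponding SDK and ibc-go types:

```go
package main

import "fmt"

// Msg is a simplified stand-in for sdk.Msg.
type Msg interface{ TypeURL() string }

// MsgServiceHandler is a stand-in for baseapp.MsgServiceHandler.
type MsgServiceHandler func(msg Msg) (string, error)

// Router satisfies the shape of the MessageRouter interface:
// Handler returns the handler registered for the message's type URL,
// or nil if none is registered.
type Router struct {
	routes map[string]MsgServiceHandler
}

func (r *Router) Handler(msg Msg) MsgServiceHandler {
	return r.routes[msg.TypeURL()]
}

// MsgRegisterAccount is a toy message type for the example.
type MsgRegisterAccount struct{ Owner string }

func (m MsgRegisterAccount) TypeURL() string { return "/toy.MsgRegisterAccount" }

func main() {
	router := &Router{routes: map[string]MsgServiceHandler{
		"/toy.MsgRegisterAccount": func(msg Msg) (string, error) {
			return "registered account for " + msg.(MsgRegisterAccount).Owner, nil
		},
	}}

	msg := MsgRegisterAccount{Owner: "cosmos1aaa"}
	if handler := router.Handler(msg); handler != nil {
		res, _ := handler(msg)
		fmt.Println(res)
	}
}
```

A keeper holding this narrower interface can be wired with either the real `MsgServiceRouter` or a test double such as the toy router above.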
An additional parameter, `ics4Wrapper`, has been added to the `host` submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper`.
This allows the `host` submodule to correctly unwrap the channel version for channel reopening handshakes in the `OnChanOpenTry` callback.

```diff expandable
func NewKeeper(
  cdc codec.BinaryCodec,
  key storetypes.StoreKey,
  paramSpace paramtypes.Subspace,
+ ics4Wrapper icatypes.ICS4Wrapper,
  channelKeeper icatypes.ChannelKeeper,
  portKeeper icatypes.PortKeeper,
  accountKeeper icatypes.AccountKeeper,
  scopedKeeper icatypes.ScopedKeeper,
  msgRouter icatypes.MessageRouter,
) Keeper
```

#### Cosmos SDK message handler responses in packet acknowledgement

The construction of the transaction response of a message execution on the host chain has changed. The `Data` field in `sdk.TxMsgData` has been deprecated, and since Cosmos SDK 0.46 the `MsgResponses` field contains the message handler responses packed into `Any`s.
+ +For chains on Cosmos SDK 0.45 and below, the message response was constructed like this: + +```go expandable +txMsgData := &sdk.TxMsgData{ + Data: make([]*sdk.MsgData, len(msgs)), +} + for i, msg := range msgs { + / message validation + + msgResponse, err := k.executeMsg(cacheCtx, msg) + / return if err != nil + + txMsgData.Data[i] = &sdk.MsgData{ + MsgType: sdk.MsgTypeURL(msg), + Data: msgResponse, +} +} + +/ emit events + +txResponse, err := proto.Marshal(txMsgData) +/ return if err != nil + +return txResponse, nil +``` + +And for chains on Cosmos SDK 0.46 and above, it is now done like this: + +```go expandable +txMsgData := &sdk.TxMsgData{ + MsgResponses: make([]*codectypes.Any, len(msgs)), +} + for i, msg := range msgs { + / message validation + + protoAny, err := k.executeMsg(cacheCtx, msg) + / return if err != nil + + txMsgData.MsgResponses[i] = protoAny +} + +/ emit events + +txResponse, err := proto.Marshal(txMsgData) +/ return if err != nil + +return txResponse, nil +``` + +When handling the acknowledgement in the `OnAcknowledgementPacket` callback of a custom ICA controller module, then depending on whether `txMsgData.Data` is empty or not, the logic to handle the message handler response will be different. 
**Only controller chains on Cosmos SDK 0.46 or above will be able to write the logic needed to handle the response from a host chain on Cosmos SDK 0.46 or above.**

```go expandable
var ack channeltypes.Acknowledgement
if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil {
	return err
}

var txMsgData sdk.TxMsgData
if err := proto.Unmarshal(ack.GetResult(), &txMsgData); err != nil {
	return err
}

switch len(txMsgData.Data) {
case 0: / for SDK 0.46 and above
	for _, msgResponse := range txMsgData.MsgResponses {
		/ unmarshal msgResponse and execute logic based on the response
	}

	return nil
default: / for SDK 0.45 and below
	for _, msgData := range txMsgData.Data {
		/ unmarshal msgData and execute logic based on the response
	}
}
```

See the corresponding documentation about authentication modules for more information.

### ICS29 - Fee Middleware

The `key` parameter of the `NewKeeper` function in `modules/apps/29-fee` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`):

```diff expandable
func NewKeeper(
  cdc codec.BinaryCodec,
- key sdk.StoreKey,
+ key storetypes.StoreKey,
  paramSpace paramtypes.Subspace,
  ics4Wrapper types.ICS4Wrapper,
  channelKeeper types.ChannelKeeper,
  portKeeper types.PortKeeper,
  authKeeper types.AccountKeeper,
  bankKeeper types.BankKeeper,
) Keeper
```

The `RegisterRESTRoutes` function in `modules/apps/29-fee` has been removed.

### IBC testing package

The `MockIBCApp` type has been renamed to `IBCApp` (and the corresponding constructor function to `NewIBCApp`).
This has therefore resulted in:

* The `IBCApp` field of the `*IBCModule` in `testing/mock` changing its type to `*IBCApp`:

```diff
type IBCModule struct {
  appModule *AppModule
- IBCApp *MockIBCApp / base application of an IBC middleware stack
+ IBCApp *IBCApp / base application of an IBC middleware stack
}
```

* The `app` parameter to `NewIBCModule` in `testing/mock` changing its type to `*IBCApp`:

```diff
func NewIBCModule(
  appModule *AppModule,
- app *MockIBCApp
+ app *IBCApp
) IBCModule
```

The `MockEmptyAcknowledgement` type has been renamed to `EmptyAcknowledgement` (and the corresponding constructor function to `NewEmptyAcknowledgement`).

The `TestingApp` interface in `testing` has gone through some modifications:

* The return type of the function `GetStakingKeeper` is no longer the concrete type `stakingkeeper.Keeper` (where `stakingkeeper` is an import alias for `"github.com/cosmos/cosmos-sdk/x/staking/keeper"`); it has been changed to the interface `ibctestingtypes.StakingKeeper` (where `ibctestingtypes` is an import alias for `"github.com/cosmos/ibc-go/v5/testing/types"`). See this [PR](https://github.com/cosmos/ibc-go/pull/2028) for more details. The `StakingKeeper` interface is defined as:

```go
type StakingKeeper interface {
	GetHistoricalInfo(ctx sdk.Context, height int64) (stakingtypes.HistoricalInfo, bool)
}
```

* The return type of the function `LastCommitID` has changed to `storetypes.CommitID` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`).
+ +See the following `git diff` for more details: + +```diff expandable +type TestingApp interface { + abci.Application + + / ibc-go additions + GetBaseApp() *baseapp.BaseApp +- GetStakingKeeper() stakingkeeper.Keeper ++ GetStakingKeeper() ibctestingtypes.StakingKeeper + GetIBCKeeper() *keeper.Keeper + GetScopedIBCKeeper() capabilitykeeper.ScopedKeeper + GetTxConfig() client.TxConfig + + / Implemented by SimApp + AppCodec() codec.Codec + + / Implemented by BaseApp +- LastCommitID() sdk.CommitID ++ LastCommitID() storetypes.CommitID + LastBlockHeight() int64 +} +``` + +The `powerReduction` parameter of the function `SetupWithGenesisValSet` in `testing` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func SetupWithGenesisValSet( + t *testing.T, + valSet *tmtypes.ValidatorSet, + genAccs []authtypes.GenesisAccount, + chainID string, +- powerReduction sdk.Int, ++ powerReduction math.Int, + balances ...banktypes.Balance +) TestingApp +``` + +The `accAmt` parameter of the functions + +* `AddTestAddrsFromPubKeys` , +* `AddTestAddrs` +* and `AddTestAddrsIncremental` + +in `testing/simapp` are now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff expandable +func AddTestAddrsFromPubKeys( + app *SimApp, + ctx sdk.Context, + pubKeys []cryptotypes.PubKey, +- accAmt sdk.Int, ++ accAmt math.Int +) +func addTestAddrs( + app *SimApp, + ctx sdk.Context, + accNum int, +- accAmt sdk.Int, ++ accAmt math.Int, + strategy GenerateAccountStrategy +) []sdk.AccAddress +func AddTestAddrsIncremental( + app *SimApp, + ctx sdk.Context, + accNum int, +- accAmt sdk.Int, ++ accAmt math.Int +) []sdk.AccAddress +``` + +The `RegisterRESTRoutes` function in `testing/mock` has been removed. + +## Relayers + +* No relevant changes were made in this release. 
+ +## IBC Light Clients + +### ICS02 - Client + +The `key` parameter of the `NewKeeper` function in `modules/core/02-client/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + sk types.StakingKeeper, + uk types.UpgradeKeeper +) Keeper +``` diff --git a/docs/ibc/v10.1.x/migrations/v5-to-v6.mdx b/docs/ibc/v10.1.x/migrations/v5-to-v6.mdx new file mode 100644 index 00000000..40fd9a7c --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v5-to-v6.mdx @@ -0,0 +1,301 @@ +--- +title: IBC-Go v5 to v6 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +## Chains + +The `ibc-go/v6` release introduces a new set of migrations for `27-interchain-accounts`. Ownership of ICS27 channel capabilities is transferred from ICS27 authentication modules and will now reside with the ICS27 controller submodule moving forward. + +For chains which contain a custom authentication module using the ICS27 controller submodule this requires a migration function to be included in the chain upgrade handler. 
A subsequent migration handler is run automatically, asserting the ownership of ICS27 channel capabilities has been transferred successfully. + +This migration is not required for chains which *do not* contain a custom authentication module using the ICS27 controller submodule. + +This migration facilitates the addition of the ICS27 controller submodule `MsgServer` which provides a standardised approach to integrating existing forms of authentication such as `x/gov` and `x/group` provided by the Cosmos SDK. + +For more information please refer to the ICS27 controller submodule documentation. + +### Upgrade proposal + +Please refer to [PR #2383](https://github.com/cosmos/ibc-go/pull/2383) for integrating the ICS27 channel capability migration logic or follow the steps outlined below: + +1. Add the upgrade migration logic to chain distribution. This may be, for example, maintained under a package `app/upgrades/v6`. + +```go expandable +package v6 + +import ( + + "github.com/cosmos/cosmos-sdk/codec" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" + + v6 "github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/migrations/v6" +) + +const ( + UpgradeName = "v6" +) + +func CreateUpgradeHandler( + mm *module.Manager, + configurator module.Configurator, + cdc codec.BinaryCodec, + capabilityStoreKey *storetypes.KVStoreKey, + capabilityKeeper *capabilitykeeper.Keeper, + moduleName string, +) + +upgradetypes.UpgradeHandler { + return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { + if err := v6.MigrateICS27ChannelCapability(ctx, cdc, capabilityStoreKey, capabilityKeeper, moduleName); err != nil { + return nil, err +} + +return mm.RunMigrations(ctx, configurator, vm) +} +} +``` + +2. 
Set the upgrade handler in `app.go`. The `moduleName` parameter refers to the authentication module's `ScopedKeeper` name. This is the name provided upon instantiation in `app.go` via the [`x/capability` keeper `ScopeToModule(moduleName string)`](https://github.com/cosmos/cosmos-sdk/blob/v0.46.1/x/capability/keeper/keeper.go#L70) method. [See here for an example in `simapp`](https://github.com/cosmos/ibc-go/blob/v5.0.0/testing/simapp/app.go#L304). + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler( + v6.UpgradeName, + v6.CreateUpgradeHandler( + app.mm, + app.configurator, + app.appCodec, + app.keys[capabilitytypes.ModuleName], + app.CapabilityKeeper, + >>>> moduleName <<<<, + ), +) +``` + +## IBC Apps + +### ICS27 - Interchain Accounts + +#### Controller APIs + +In previous releases of ibc-go, chain developers integrating the ICS27 interchain accounts controller functionality were expected to create a custom `Base Application` referred to as an authentication module, see the section [Building an authentication module](/docs/ibc/v10.1.x/apps/interchain-accounts/auth-modules) from the documentation. + +The `Base Application` was intended to be composed with the ICS27 controller submodule `Keeper` and facilitate many forms of message authentication depending on a chain's particular use case. + +Prior to ibc-go v6 the controller submodule exposed only these two functions (to which we will refer as the legacy APIs): + +* [`RegisterInterchainAccount`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/account.go#L19) +* [`SendTx`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/relay.go#L18) + +However, these functions have now been deprecated in favour of the new controller submodule `MsgServer` and will be removed in later releases. 
+ +Both APIs remain functional and maintain backwards compatibility in ibc-go v6, however consumers of these APIs are now recommended to follow the message passing paradigm outlined in Cosmos SDK [ADR 031](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-031) and [ADR 033](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-033). This is facilitated by the Cosmos SDK [`MsgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/main/baseapp/msg_service_router.go#L17) and chain developers creating custom application logic can now omit the ICS27 controller submodule `Keeper` from their module and instead depend on message routing. + +Depending on the use case, developers of custom authentication modules face one of three scenarios: + +![auth-module-decision-tree.png](/docs/ibc/images/04-migrations/images/auth-module-decision-tree.png) + +**My authentication module needs to access IBC packet callbacks** + +Application developers that wish to consume IBC packet callbacks and react upon packet acknowledgements **must** continue using the controller submodule's legacy APIs. The authentication modules will not need a `ScopedKeeper` anymore, though, because the channel capability will be claimed by the controller submodule. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the authentication module's `ScopedKeeper` (`scopedICAAuthKeeper`) is not needed anymore and can be removed for the argument list of the keeper constructor function, as shown here: + +```diff +app.ICAAuthKeeper = icaauthkeeper.NewKeeper( + appCodec, + keys[icaauthtypes.StoreKey], + app.ICAControllerKeeper, +- scopedICAAuthKeeper, +) +``` + +Please note that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above. 
Therefore the authentication module's `ScopedKeeper` cannot be completely removed from the chain code until the migration has run. + +In the future, the use of the legacy APIs for accessing packet callbacks will be replaced by IBC Actor Callbacks (see [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) for more details) and it will also be possible to access them with the `MsgServiceRouter`. + +**My authentication module does not need access to IBC packet callbacks** + +The authentication module can migrate from using the legacy APIs and it can be composed instead with the `MsgServiceRouter`, so that the authentication module is able to pass messages to the controller submodule's `MsgServer` to register interchain accounts and send packets to the interchain account. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the ICS27 controller submodule keeper (`ICAControllerKeeper`) and authentication module scoped keeper (`scopedICAAuthKeeper`) are not needed anymore and can be replaced with the `MsgServiceRouter`, as shown here: + +```diff +app.ICAAuthKeeper = icaauthkeeper.NewKeeper( + appCodec, + keys[icaauthtypes.StoreKey], +- app.ICAControllerKeeper, +- scopedICAAuthKeeper, ++ app.MsgServiceRouter(), +) +``` + +In your authentication module you can route messages to the controller submodule's `MsgServer` instead of using the legacy APIs. 
For example, for registering an interchain account: + +```diff expandable +- if err := keeper.icaControllerKeeper.RegisterInterchainAccount( +- ctx, +- connectionID, +- owner.String(), +- version, +- ); err != nil { +- return err +- } ++ msg := controllertypes.NewMsgRegisterInterchainAccount( ++ connectionID, ++ owner.String(), ++ version, ++ ) ++ handler := keeper.msgRouter.Handler(msg) ++ res, err := handler(ctx, msg) ++ if err != nil { ++ return err ++ } +``` + +where `controllertypes` is an import alias for `"github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/types"`. + +In addition, in this use case the authentication module does not need to implement the `IBCModule` interface anymore. + +**I do not need a custom authentication module anymore** + +If your authentication module does not have any extra functionality compared to the default authentication module added in ibc-go v6 (the `MsgServer`), or if you can use a generic authentication module, such as the `x/auth`, `x/gov` or `x/group` modules from the Cosmos SDK (v0.46 and later), then you can remove your authentication module completely and use instead the gRPC endpoints of the `MsgServer` or the CLI added in ibc-go v6. + +Please remember that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above. + +#### Host params + +The ICS27 host submodule default params have been updated to include the `AllowAllHostMsgs` wildcard `*`. +This enables execution of any `sdk.Msg` type for ICS27 registered on the host chain `InterfaceRegistry`. + +```diff +/ AllowAllHostMsgs holds the string key that allows all message types on interchain accounts host module +const AllowAllHostMsgs = "*" + +... 
+ +/ DefaultParams is the default parameter configuration for the host submodule +func DefaultParams() Params { +- return NewParams(DefaultHostEnabled, nil) ++ return NewParams(DefaultHostEnabled, []string{AllowAllHostMsgs}) +} +``` + +#### API breaking changes + +`SerializeCosmosTx` takes in a `[]proto.Message` instead of `[]sdk.Message`. This allows for the serialization of proto messages without requiring the fulfillment of the `sdk.Msg` interface. + +The `27-interchain-accounts` genesis types have been moved to their own package: `modules/apps/27-interchain-accounts/genesis/types`. +This change facilitates the addition of the ICS27 controller submodule `MsgServer` and avoids cyclic imports. This should have minimal disruption to chain developers integrating `27-interchain-accounts`. + +The ICS27 host submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper` now includes an additional parameter of type `ICS4Wrapper`. +This provides the host submodule with the ability to correctly unwrap channel versions in the event of a channel reopening handshake. + +```diff +func NewKeeper( + cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace, +- channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper, ++ ics4Wrapper icatypes.ICS4Wrapper, channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper, + accountKeeper icatypes.AccountKeeper, scopedKeeper icatypes.ScopedKeeper, msgRouter icatypes.MessageRouter, +) Keeper +``` + +### ICS29 - `NewKeeper` API change + +The `NewKeeper` function of ICS29 has been updated to remove the `paramSpace` parameter as it was unused. 
+ +```diff +func NewKeeper( +- cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace, +- ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper, portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper, ++ cdc codec.BinaryCodec, key storetypes.StoreKey, ++ ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper, ++ portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper, +) Keeper { +``` + +### ICS20 - `SendTransfer` is no longer exported + +The `SendTransfer` function of ICS20 has been removed. IBC transfers should now be initiated with `MsgTransfer` and routed to the ICS20 `MsgServer`. + +See below for example: + +```go +if handler := msgRouter.Handler(msgTransfer); handler != nil { + if err := msgTransfer.ValidateBasic(); err != nil { + return nil, err +} + +res, err := handler(ctx, msgTransfer) + if err != nil { + return nil, err +} +} +``` + +### ICS04 - `SendPacket` API change + +The `SendPacket` API has been simplified: + +```diff expandable +/ SendPacket is called by a module in order to send an IBC packet on a channel +func (k Keeper) SendPacket( + ctx sdk.Context, + channelCap *capabilitytypes.Capability, +- packet exported.PacketI, +-) error { ++ sourcePort string, ++ sourceChannel string, ++ timeoutHeight clienttypes.Height, ++ timeoutTimestamp uint64, ++ data []byte, ++) (uint64, error) { +``` + +Callers no longer need to pass in a pre-constructed packet. +The destination port/channel identifiers and the packet sequence will be determined by core IBC. +`SendPacket` will return the packet sequence. 
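The new sequence-assignment behaviour can be illustrated with a self-contained toy keeper. The `Keeper` below is a simplified stand-in for the real 04-channel keeper, which additionally verifies the channel and capability, commits the packet, and emits events:

```go
package main

import (
	"errors"
	"fmt"
)

// Keeper is a toy stand-in for the 04-channel keeper; it tracks only the
// next send sequence per (port, channel) pair.
type Keeper struct {
	nextSequence map[string]uint64
}

func NewKeeper() *Keeper {
	return &Keeper{nextSequence: map[string]uint64{}}
}

// SendPacket mirrors the new call shape: callers pass the raw packet fields
// and core IBC determines and returns the sequence.
func (k *Keeper) SendPacket(sourcePort, sourceChannel string, timeoutTimestamp uint64, data []byte) (uint64, error) {
	if len(data) == 0 {
		return 0, errors.New("empty packet data")
	}
	key := sourcePort + "/" + sourceChannel
	seq := k.nextSequence[key]
	if seq == 0 {
		seq = 1
	}
	k.nextSequence[key] = seq + 1
	return seq, nil
}

func main() {
	k := NewKeeper()
	seq1, _ := k.SendPacket("transfer", "channel-0", 0, []byte("payload"))
	seq2, _ := k.SendPacket("transfer", "channel-0", 0, []byte("payload"))
	fmt.Println(seq1, seq2) // 1 2
}
```

The first two packets on `transfer/channel-0` receive sequences 1 and 2, while a different channel starts again at 1 — sequence bookkeeping now lives entirely inside core IBC rather than with the caller.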
+ +### IBC testing package + +The `SendPacket` API has been simplified: + +```diff expandable +/ SendPacket is called by a module in order to send an IBC packet on a channel +func (k Keeper) SendPacket( + ctx sdk.Context, + channelCap *capabilitytypes.Capability, +- packet exported.PacketI, +-) error { ++ sourcePort string, ++ sourceChannel string, ++ timeoutHeight clienttypes.Height, ++ timeoutTimestamp uint64, ++ data []byte, ++) (uint64, error) { +``` + +Callers no longer need to pass in a pre-constructed packet. `SendPacket` will return the packet sequence. + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v10.1.x/migrations/v6-to-v7.mdx b/docs/ibc/v10.1.x/migrations/v6-to-v7.mdx new file mode 100644 index 00000000..6f652e70 --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v6-to-v7.mdx @@ -0,0 +1,358 @@ +--- +title: IBC-Go v6 to v7 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +## Chains + +Chains will perform automatic migrations to remove existing localhost clients and to migrate the solomachine to v3 of the protobuf definition. + +An optional upgrade handler has been added to prune expired tendermint consensus states. It may be used during any upgrade (from v7 onwards). 
Add the following to the upgrade handler in `app/app.go` to perform the optional state pruning.

```go expandable
import (
	/ ...
	ibctmmigrations "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint/migrations"
)

/ ...

app.UpgradeKeeper.SetUpgradeHandler(
	upgradeName,
	func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
		/ prune expired tendermint consensus states to save storage space
		_, err := ibctmmigrations.PruneExpiredConsensusStates(ctx, app.Codec, app.IBCKeeper.ClientKeeper)
		if err != nil {
			return nil, err
		}

		return app.mm.RunMigrations(ctx, app.configurator, fromVM)
	},
)
```

Check the logs to see how many consensus states are pruned.

### Light client registration

Chains must explicitly register the types of any light client modules they wish to integrate.

#### Tendermint registration

To register the tendermint client, modify the `app.go` file to include the tendermint `AppModuleBasic`:

```diff expandable
import (
  / ...
+ ibctm "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint"
)

/ ...

ModuleBasics = module.NewBasicManager(
  ...
  ibc.AppModuleBasic{},
+ ibctm.AppModuleBasic{},
  ...
)
```

It may be useful to reference the [PR](https://github.com/cosmos/ibc-go/pull/2825) which added the `AppModuleBasic` for the tendermint client.

#### Solo machine registration

To register the solo machine client, modify the `app.go` file to include the solo machine `AppModuleBasic`:

```diff expandable
import (
  / ...
+ solomachine "github.com/cosmos/ibc-go/v7/modules/light-clients/06-solomachine"
)

/ ...

ModuleBasics = module.NewBasicManager(
  ...
  ibc.AppModuleBasic{},
+ solomachine.AppModuleBasic{},
  ...
)
```

It may be useful to reference the [PR](https://github.com/cosmos/ibc-go/pull/2826) which added the `AppModuleBasic` for the solo machine client.
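Both registrations feed the chain's module manager, and the gatekeeping effect can be sketched with a dependency-free toy — the `LightClientModule`, `BasicManager`, and module types below are simplified stand-ins for the SDK's `BasicManager` and `AppModuleBasic`, not the real types:

```go
package main

import "fmt"

// LightClientModule is a minimal stand-in for a light client AppModuleBasic.
type LightClientModule interface{ Name() string }

type tendermintModule struct{}

func (tendermintModule) Name() string { return "07-tendermint" }

type solomachineModule struct{}

func (solomachineModule) Name() string { return "06-solomachine" }

// BasicManager sketches the explicit-registration requirement: only client
// types added here are usable by the chain.
type BasicManager struct {
	modules map[string]LightClientModule
}

func NewBasicManager(mods ...LightClientModule) BasicManager {
	m := BasicManager{modules: map[string]LightClientModule{}}
	for _, mod := range mods {
		m.modules[mod.Name()] = mod
	}
	return m
}

func (m BasicManager) IsRegistered(name string) bool {
	_, ok := m.modules[name]
	return ok
}

func main() {
	mm := NewBasicManager(tendermintModule{}, solomachineModule{})
	fmt.Println(mm.IsRegistered("07-tendermint")) // true
	fmt.Println(mm.IsRegistered("09-localhost")) // false: removed clients are simply not registered
}
```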
+ +### Testing package API + +The `SetChannelClosed` utility method in `testing/endpoint.go` has been updated to `SetChannelState`, which will take a `channeltypes.State` argument so that the `ChannelState` can be set to any of the possible channel states. + +## IBC Apps + +* No relevant changes were made in this release. + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +### `ClientState` interface changes + +The `VerifyUpgradeAndUpdateState` function has been modified. The client state and consensus state return values have been removed. + +Light clients **must** handle all management of client and consensus states including the setting of updated client state and consensus state in the client store. + +The `Initialize` method is now expected to set the initial client state, consensus state and any client-specific metadata in the provided store upon client creation. + +The `CheckHeaderAndUpdateState` method has been split into 4 new methods: + +* `VerifyClientMessage` verifies a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned if the `ClientMessage` fails to verify. + +* `CheckForMisbehaviour` checks for evidence of a misbehaviour in `Header` or `Misbehaviour` types. + +* `UpdateStateOnMisbehaviour` performs appropriate state changes on a `ClientState` given that misbehaviour has been detected and verified. + +* `UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. An error is returned if `ClientMessage` is of type `Misbehaviour`. Upon successful update, a list containing the updated consensus state height is returned. 
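The intended call order of these four methods can be sketched with toy types — `toyClient` and `updateClient` below are illustrative stand-ins, not the actual core IBC implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// ClientMessage is a stand-in for the new exported.ClientMessage interface.
type ClientMessage interface{}

type Header struct{ Height uint64 }

type Misbehaviour struct{}

// ClientState sketches the four methods that replaced CheckHeaderAndUpdateState.
type ClientState interface {
	VerifyClientMessage(msg ClientMessage) error
	CheckForMisbehaviour(msg ClientMessage) bool
	UpdateStateOnMisbehaviour(msg ClientMessage)
	UpdateState(msg ClientMessage) ([]uint64, error)
}

// updateClient mirrors the order described above: verify the message first,
// then let the misbehaviour check decide which update path runs.
func updateClient(cs ClientState, msg ClientMessage) ([]uint64, error) {
	if err := cs.VerifyClientMessage(msg); err != nil {
		return nil, err
	}
	if cs.CheckForMisbehaviour(msg) {
		cs.UpdateStateOnMisbehaviour(msg)
		return nil, nil
	}
	return cs.UpdateState(msg)
}

// toyClient is a trivial ClientState implementation for illustration.
type toyClient struct {
	latestHeight uint64
	frozen       bool
}

func (c *toyClient) VerifyClientMessage(msg ClientMessage) error {
	switch msg.(type) {
	case Header, Misbehaviour:
		return nil
	default:
		return errors.New("unknown client message")
	}
}

func (c *toyClient) CheckForMisbehaviour(msg ClientMessage) bool {
	_, ok := msg.(Misbehaviour)
	return ok
}

func (c *toyClient) UpdateStateOnMisbehaviour(ClientMessage) {
	c.frozen = true
}

func (c *toyClient) UpdateState(msg ClientMessage) ([]uint64, error) {
	header, ok := msg.(Header)
	if !ok {
		return nil, errors.New("cannot update state with a non-header message")
	}
	c.latestHeight = header.Height
	return []uint64{header.Height}, nil
}

func main() {
	client := &toyClient{}
	heights, _ := updateClient(client, Header{Height: 5})
	fmt.Println(heights, client.frozen) // [5] false

	_, _ = updateClient(client, Misbehaviour{})
	fmt.Println(client.frozen) // true
}
```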
+ +The `CheckMisbehaviourAndUpdateState` function has been removed from the `ClientState` interface. This functionality is now encapsulated by `VerifyClientMessage`, `CheckForMisbehaviour` and `UpdateStateOnMisbehaviour`. + +The function `GetTimestampAtHeight` has been added to the `ClientState` interface. It should return the timestamp for a consensus state associated with the provided height. + +Prior to ibc-go/v7, the `ClientState` interface defined a method for each data type which was verified in the counterparty state store. +The state verification functions for all IBC data types have been consolidated into two generic methods, `VerifyMembership` and `VerifyNonMembership`. +Both are expected to be provided with a standardised key path, `exported.Path`, as defined in [ICS 24 host requirements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). Membership verification requires callers to provide the marshalled value `[]byte`. Delay period values should be zero for non-packet processing verification. A zero proof height is now allowed by core IBC and may be passed into `VerifyMembership` and `VerifyNonMembership`. Light clients are responsible for returning an error if a zero proof height is invalid. + +See below for an example of how ibc-go now performs channel state verification.
+ +```go expandable +merklePath := commitmenttypes.NewMerklePath(host.ChannelPath(portID, channelID)) + +merklePath, err := commitmenttypes.ApplyPrefix(connection.GetCounterparty().GetPrefix(), merklePath) + if err != nil { + return err +} + +channelEnd, ok := channel.(channeltypes.Channel) + if !ok { + return sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "invalid channel type %T", channel) +} + +bz, err := k.cdc.Marshal(&channelEnd) + if err != nil { + return err +} + if err := clientState.VerifyMembership( + ctx, clientStore, k.cdc, height, + 0, 0, / skip delay period checks for non-packet processing verification + proof, merklePath, bz, +); err != nil { + return sdkerrors.Wrapf(err, "failed channel state verification for client (%s)", clientID) +} +``` + +### `Header` and `Misbehaviour` + +The `exported.Header` and `exported.Misbehaviour` interface types have been merged and renamed to the `ClientMessage` interface. + +The `GetHeight` function has been removed from `exported.Header` and is therefore not included in the `ClientMessage` interface. + +### `ConsensusState` + +The `GetRoot` function has been removed from the `ConsensusState` interface since it was not used by core IBC. + +### Client keeper + +The keeper function `CheckMisbehaviourAndUpdateState` has been removed since `UpdateClient` can now update the `ClientState` from a `ClientMessage`, which may be any `Misbehaviour` implementation. + +### SDK message + +`MsgSubmitMisbehaviour` is deprecated since `MsgUpdateClient` can now submit a `ClientMessage` type, which may be any `Misbehaviour` implementation. + +The field `header` in `MsgUpdateClient` has been renamed to `client_message`. + +## Solomachine + +The `06-solomachine` client implementation has been simplified in ibc-go/v7. In-place store migrations have been added to migrate solomachine clients from `v2` to `v3`.
+ +### `ClientState` + +The `ClientState` protobuf message definition has been updated to remove the deprecated `bool` field `allow_update_after_proposal`. + +```diff +message ClientState { + option (gogoproto.goproto_getters) = false; + + uint64 sequence = 1; + bool is_frozen = 2 [(gogoproto.moretags) = "yaml:\"is_frozen\""]; + ConsensusState consensus_state = 3 [(gogoproto.moretags) = "yaml:\"consensus_state\""]; +- bool allow_update_after_proposal = 4 [(gogoproto.moretags) = "yaml:\"allow_update_after_proposal\""]; +} +``` + +### `Header` and `Misbehaviour` + +The `06-solomachine` protobuf message `Header` has been updated to remove the `sequence` field. This field was seen as redundant as the implementation can safely rely on the `sequence` value maintained within the `ClientState`. + +```diff expandable +message Header { + option (gogoproto.goproto_getters) = false; + +- uint64 sequence = 1; +- uint64 timestamp = 2; +- bytes signature = 3; +- google.protobuf.Any new_public_key = 4 [(gogoproto.moretags) = "yaml:\"new_public_key\""]; +- string new_diversifier = 5 [(gogoproto.moretags) = "yaml:\"new_diversifier\""]; ++ uint64 timestamp = 1; ++ bytes signature = 2; ++ google.protobuf.Any new_public_key = 3 [(gogoproto.moretags) = "yaml:\"new_public_key\""]; ++ string new_diversifier = 4 [(gogoproto.moretags) = "yaml:\"new_diversifier\""]; +} +``` + +Similarly, the `Misbehaviour` protobuf message has been updated to remove the `client_id` field. 
+ +```diff expandable +message Misbehaviour { + option (gogoproto.goproto_getters) = false; + +- string client_id = 1 [(gogoproto.moretags) = "yaml:\"client_id\""]; +- uint64 sequence = 2; +- SignatureAndData signature_one = 3 [(gogoproto.moretags) = "yaml:\"signature_one\""]; +- SignatureAndData signature_two = 4 [(gogoproto.moretags) = "yaml:\"signature_two\""]; ++ uint64 sequence = 1; ++ SignatureAndData signature_one = 2 [(gogoproto.moretags) = "yaml:\"signature_one\""]; ++ SignatureAndData signature_two = 3 [(gogoproto.moretags) = "yaml:\"signature_two\""]; +} +``` + +### `SignBytes` + +Most notably, the `SignBytes` protobuf definition has been modified to replace the `data_type` field with a new field, `path`. The `path` field is defined as `bytes` and represents a serialized [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements) standardized key path under which the `data` is stored. + +```diff +message SignBytes { + option (gogoproto.goproto_getters) = false; + + uint64 sequence = 1; + uint64 timestamp = 2; + string diversifier = 3; +- DataType data_type = 4 [(gogoproto.moretags) = "yaml:\"data_type\""]; ++ bytes path = 4; + bytes data = 5; +} +``` + +The `DataType` enum and all associated data types have been removed, greatly reducing the number of message definitions and complexity in constructing the `SignBytes` message type. Likewise, solomachine implementations must now use the serialized `path` value when constructing `SignatureAndData` for signature verification of `SignBytes` data. + +```diff +message SignatureAndData { + option (gogoproto.goproto_getters) = false; + + bytes signature = 1; +- DataType data_type = 2 [(gogoproto.moretags) = "yaml:\"data_type\""]; ++ bytes path = 2; + bytes data = 3; + uint64 timestamp = 4; +} +``` + +For more information, please refer to [ADR-007](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/docs/architecture/adr-007-solomachine-signbytes.md). 
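To make the new `path` field concrete, here is a self-contained sketch; `signBytes` and `clientStatePath` below are local stand-ins, not the generated proto types, and real implementations should build paths with ibc-go's host/commitment helpers rather than by hand:

```go
package main

import "fmt"

// signBytes mirrors the shape of the updated SignBytes message for
// illustration only; it is not the generated proto type.
type signBytes struct {
	Sequence    uint64
	Timestamp   uint64
	Diversifier string
	Path        []byte // serialized ICS-24 key path (replaces the DataType enum)
	Data        []byte
}

// clientStatePath builds an ICS-24 style key path for a client state.
// Hypothetical helper, shown for shape only.
func clientStatePath(clientID string) []byte {
	return []byte(fmt.Sprintf("clients/%s/clientState", clientID))
}

func main() {
	sb := signBytes{
		Sequence:    1,
		Timestamp:   10,
		Diversifier: "diversifier",
		Path:        clientStatePath("07-tendermint-0"),
		Data:        []byte{0x01},
	}
	fmt.Println(string(sb.Path)) // clients/07-tendermint-0/clientState
}
```

The same serialized path bytes are then reused in `SignatureAndData` when verifying a signature over the `SignBytes` payload.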
+ +### IBC module constants + +IBC module constants have been moved from the `host` package to the `exported` package. Any usages will need to be updated. + +```diff expandable +import ( + / ... +- host "github.com/cosmos/ibc-go/v7/modules/core/24-host" ++ ibcexported "github.com/cosmos/ibc-go/v7/modules/core/exported" + / ... +) + +- host.ModuleName ++ ibcexported.ModuleName + +- host.StoreKey ++ ibcexported.StoreKey + +- host.QuerierRoute ++ ibcexported.QuerierRoute + +- host.RouterKey ++ ibcexported.RouterKey +``` + +## Upgrading to Cosmos SDK 0.47 + +The following should be considered as complementary to [Cosmos SDK v0.47 UPGRADING.md](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc2/UPGRADING.md). + +### Protobuf + +Protobuf code generation, linting and formatting have been updated to leverage the `ghcr.io/cosmos/proto-builder:0.11.5` docker container. IBC protobuf definitions are now packaged and published to [buf.build/cosmos/ibc](https://buf.build/cosmos/ibc) via CI workflows. The `third_party/proto` directory has been removed in favour of dependency management using [buf.build](https://docs.buf.build/introduction). + +### App modules + +Legacy APIs of the `AppModule` interface have been removed from ibc-go modules. For example: + +```diff expandable +- / Route implements the AppModule interface +- func (am AppModule) Route() sdk.Route { +- return sdk.Route{} +- } +- +- / QuerierRoute implements the AppModule interface +- func (AppModule) QuerierRoute() string { +- return types.QuerierRoute +- } +- +- / LegacyQuerierHandler implements the AppModule interface +- func (am AppModule) LegacyQuerierHandler(*codec.LegacyAmino) sdk.Querier { +- return nil +- } +- +- / ProposalContents doesn't return any content functions for governance proposals.
+- func (AppModule) ProposalContents(_ module.SimulationState) []simtypes.WeightedProposalContent { +- return nil +- } +``` + +### Imports + +Imports for ics23 have been updated as the repository has been migrated from confio to cosmos. + +```diff +import ( + / ... +- ics23 "github.com/confio/ics23/go" ++ ics23 "github.com/cosmos/ics23/go" + / ... +) +``` + +Imports for gogoproto have been updated. + +```diff +import ( + / ... +- "github.com/gogo/protobuf/proto" ++ "github.com/cosmos/gogoproto/proto" + / ... +) +``` diff --git a/docs/ibc/v10.1.x/migrations/v7-to-v7_1.mdx b/docs/ibc/v10.1.x/migrations/v7-to-v7_1.mdx new file mode 100644 index 00000000..735ce786 --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v7-to-v7_1.mdx @@ -0,0 +1,66 @@ +--- +title: IBC-Go v7 to v7.1 +description: This guide provides instructions for migrating to version v7.1.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v7.1.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v7 to v7.1](#migrating-from-v7-to-v71) + * [Chains](#chains) + * [IBC Apps](#ibc-apps) + * [Relayers](#relayers) + * [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +In the previous release of ibc-go, the localhost `v1` light client module was deprecated and removed. The ibc-go `v7.1.0` release introduces `v2` of the 09-localhost light client module. + +An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v7.2.0/modules/core/module.go#L127-L145) is configured in the core IBC module to set the localhost `ClientState` and sentinel `ConnectionEnd` in state. + +In order to use the 09-localhost client, chains must update the `AllowedClients` parameter in the 02-client submodule of core IBC.
This can be configured directly in the application upgrade handler or alternatively updated via the legacy governance parameter change proposal. +We **strongly** recommend that chains perform this action so that intra-ledger communication can be carried out using the familiar IBC interfaces. + +See the upgrade handler code sample provided below or [follow this link](https://github.com/cosmos/ibc-go/blob/v7.2.0/testing/simapp/upgrades/upgrades.go#L85) for the upgrade handler used by the ibc-go simapp. + +```go expandable +func CreateV7LocalhostUpgradeHandler( + mm *module.Manager, + configurator module.Configurator, + clientKeeper clientkeeper.Keeper, +) + +upgradetypes.UpgradeHandler { + return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { + / explicitly update the IBC 02-client params, adding the localhost client type + params := clientKeeper.GetParams(ctx) + +params.AllowedClients = append(params.AllowedClients, exported.Localhost) + +clientKeeper.SetParams(ctx, params) + +return mm.RunMigrations(ctx, configurator, vm) +} +} +``` + +### Transfer migration + +An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v7.2.0/modules/apps/transfer/module.go#L111-L113) is configured in the transfer module to set the total amount in escrow for all denominations of coins that have been sent out. For each denomination, a state entry is added with the total amount of coins in escrow, regardless of the channel from which they were transferred. + +## IBC Apps + +* No relevant changes were made in this release. + +## Relayers + +The event attribute `packet_connection` (`connectiontypes.AttributeKeyConnection`) has been deprecated. +Please use the `connection_id` attribute (`connectiontypes.AttributeKeyConnectionID`), which is emitted by all channel events. +Only send packet, receive packet, write acknowledgement, and acknowledge packet events used `packet_connection` previously.
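For relayer authors, the rename can be handled with a small fallback when parsing events. The sketch below is illustrative and stdlib-only; `event` is a stand-in for an ABCI event, and only the attribute keys `connection_id` and `packet_connection` come from the migration notes above:

```go
package main

import "fmt"

// event is a minimal stand-in for an ABCI event: a type plus key/value attributes.
type event struct {
	typ   string
	attrs map[string]string
}

// connectionID reads the connection from a channel event via the
// `connection_id` attribute, falling back to the deprecated
// `packet_connection` key for events emitted by older nodes.
func connectionID(e event) string {
	if v, ok := e.attrs["connection_id"]; ok {
		return v
	}
	return e.attrs["packet_connection"] // deprecated
}

func main() {
	e := event{typ: "send_packet", attrs: map[string]string{"connection_id": "connection-0"}}
	fmt.Println(connectionID(e)) // connection-0
}
```

The fallback can be dropped once all connected nodes emit the new attribute.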
+ +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v10.1.x/migrations/v7-to-v8.mdx b/docs/ibc/v10.1.x/migrations/v7-to-v8.mdx new file mode 100644 index 00000000..c0e19915 --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v7-to-v8.mdx @@ -0,0 +1,217 @@ +--- +title: IBC-Go v7 to v8 +description: This guide provides instructions for migrating to version v8.0.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v8.0.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v7 to v8](#migrating-from-v7-to-v8) + * [Chains](#chains) + * [Cosmos SDK v0.50 upgrade](#cosmos-sdk-v050-upgrade) + * [Authority](#authority) + * [Testing package](#testing-package) + * [Params migration](#params-migration) + * [Governance V1 migration](#governance-v1-migration) + * [Transfer migration](#transfer-migration) + * [IBC Apps](#ibc-apps) + * [ICS20 - Transfer](#ics20---transfer) + * [ICS27 - Interchain Accounts](#ics27---interchain-accounts) + * [Relayers](#relayers) + * [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +The type of the `PortKeeper` field of the IBC keeper has been changed to `*portkeeper.Keeper`: + +```diff expandable +/ Keeper defines each ICS keeper for IBC +type Keeper struct { + / implements gRPC QueryServer interface + types.QueryServer + + cdc codec.BinaryCodec + + ClientKeeper clientkeeper.Keeper + ConnectionKeeper connectionkeeper.Keeper + ChannelKeeper channelkeeper.Keeper +- PortKeeper portkeeper.Keeper ++ PortKeeper *portkeeper.Keeper + Router *porttypes.Router + + authority string +} +``` + +See [this PR](https://github.com/cosmos/ibc-go/pull/4703/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a) for the changes required in `app.go`.
+ +An extra parameter `totalEscrowed` of type `sdk.Coins` has been added to the transfer module's [`NewGenesisState` function](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/types/genesis.go#L10). This parameter specifies the total amount of tokens that are in the module's escrow accounts. + +### Cosmos SDK v0.50 upgrade + +Version `v8.0.0` of ibc-go upgrades to Cosmos SDK v0.50. Please follow the [Cosmos SDK v0.50 upgrading guide](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/UPGRADING.md) to account for its API breaking changes. + +### Authority + +An authority identifier (e.g. an address) needs to be passed in the `NewKeeper` functions of the following keepers: + +* You must pass the `authority` to the ica/host keeper (implemented in [#3520](https://github.com/cosmos/ibc-go/pull/3520)). See [diff](https://github.com/cosmos/ibc-go/pull/3520/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff +/ app.go + +/ ICA Host keeper +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCFeeKeeper, / use ics29 fee as ics4Wrapper in middleware stack + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You must pass the `authority` to the ica/controller keeper (implemented in [#3590](https://github.com/cosmos/ibc-go/pull/3590)).
See [diff](https://github.com/cosmos/ibc-go/pull/3590/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff +/ app.go + +/ ICA Controller keeper +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCFeeKeeper, / use ics29 fee as ics4Wrapper in middleware stack + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You must pass the `authority` to the ibctransfer keeper (implemented in [#3553](https://github.com/cosmos/ibc-go/pull/3553)). See [diff](https://github.com/cosmos/ibc-go/pull/3553/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff expandable +/ app.go + +/ Create Transfer Keeper and pass IBCFeeKeeper as expected Channel and PortKeeper +/ since fee middleware will wrap the IBCKeeper for underlying application. +app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCFeeKeeper, / ISC4 Wrapper: fee IBC middleware + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You should pass the `authority` to the IBC keeper (implemented in [#3640](https://github.com/cosmos/ibc-go/pull/3640) and [#3650](https://github.com/cosmos/ibc-go/pull/3650)). 
See [diff](https://github.com/cosmos/ibc-go/pull/3640/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff expandable +/ app.go + +/ IBC Keepers +app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, + keys[ibcexported.StoreKey], + app.GetSubspace(ibcexported.ModuleName), + app.StakingKeeper, + app.UpgradeKeeper, + scopedIBCKeeper, ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +The authority determines the transaction signer allowed to execute certain messages (e.g. `MsgUpdateParams`). + +### Testing package + +* The function `SetupWithGenesisAccounts` has been removed. +* The function [`RelayPacketWithResults`](https://github.com/cosmos/ibc-go/blob/v8.0.0/testing/path.go#L66) has been added. This function returns the result of the packet receive transaction, the acknowledgement written on the receiving chain, and an error if a relay step fails or the packet commitment does not exist on either chain. + +### Params migration + +Params are now self-managed in the following submodules: + +* ica/controller [#3590](https://github.com/cosmos/ibc-go/pull/3590) +* ica/host [#3520](https://github.com/cosmos/ibc-go/pull/3520) +* ibc/connection [#3650](https://github.com/cosmos/ibc-go/pull/3650) +* ibc/client [#3640](https://github.com/cosmos/ibc-go/pull/3640) +* ibc/transfer [#3553](https://github.com/cosmos/ibc-go/pull/3553) + +Each module has a corresponding `MsgUpdateParams` message with a `Params` field which can be specified in full to update the module's `Params`. + +Legacy params subspaces must still be initialised in `app.go` in order to successfully migrate from `x/params` to the new self-contained approach. See [this](https://github.com/cosmos/ibc-go/blob/v8.0.0/testing/simapp/app.go#L1007-L1012) for reference. + +For new chains which do not rely on migration of parameters from `x/params`, an expected interface has been added for each module.
This allows chain developers to provide `nil` as the `legacySubspace` argument to `NewKeeper` functions. + +### Governance V1 migration + +Proposals have been migrated to [gov v1 messages](https://docs.cosmos.network/v0.50/modules/gov#messages) (see [#4620](https://github.com/cosmos/ibc-go/pull/4620)). The proposal `ClientUpdateProposal` has been deprecated and [`MsgRecoverClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L121-L134) should be used instead. Likewise, the proposal `UpgradeProposal` has been deprecated and [`MsgIBCSoftwareUpgrade`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L139-L154) should be used instead. Both proposals will be removed in the next major release. + +`MsgRecoverClient` and `MsgIBCSoftwareUpgrade` can only be executed if the signer is the authority designated at the time of instantiating the IBC keeper, so please make sure that the correct authority is provided to the IBC keeper. + +Remove the `UpgradeProposalHandler` and `UpdateClientProposalHandler` from the `BasicModuleManager`: + +```diff expandable +app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +- ibcclientclient.UpdateClientProposalHandler, +- ibcclientclient.UpgradeProposalHandler, + }, + ), +}) +``` + +Support for in-flight legacy client recovery proposals (i.e. `ClientUpdateProposal`) is retained in v8, but chains should use `MsgRecoverClient` only afterwards to avoid in-flight client recovery failing when upgrading to v9. See [this issue](https://github.com/cosmos/ibc-go/issues/4721) for more information.
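For orientation, the shape of a client recovery message is roughly the following. `msgRecoverClient` is a local illustration mirroring the fields in the linked `tx.proto`, not the generated Go type, and the placeholder signer is not a real address:

```go
package main

import "fmt"

// msgRecoverClient is a local stand-in mirroring MsgRecoverClient's fields:
// subject/substitute client identifiers plus the authority signer.
type msgRecoverClient struct {
	SubjectClientID    string // frozen or expired client to be recovered
	SubstituteClientID string // active client whose latest state is adopted
	Signer             string // must match the authority set on the IBC keeper
}

func main() {
	m := msgRecoverClient{
		SubjectClientID:    "07-tendermint-0",
		SubstituteClientID: "07-tendermint-1",
		Signer:             "cosmos1...", // placeholder authority address
	}
	fmt.Printf("recover %s using %s\n", m.SubjectClientID, m.SubstituteClientID)
}
```

In practice the message is submitted through a gov v1 proposal so that the gov module account, as the configured authority, ends up as the signer.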
+ +Please note that ibc-go offers facilities to test an ibc-go upgrade: + +* All e2e tests of the repository can be [run with custom Docker chain images](https://github.com/cosmos/ibc-go/blob/c5bac5e03a0eae449b9efe0d312258115c1a1e85/e2e/README.md#running-tests-with-custom-images). +* An [importable workflow](https://github.com/cosmos/ibc-go/blob/c5bac5e03a0eae449b9efe0d312258115c1a1e85/e2e/README.md#importable-workflow) can be used from any other repository to test chain upgrades. + +### Transfer migration + +An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/module.go#L136) is configured in the transfer module to set the [denomination metadata](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/proto/cosmos/bank/v1beta1/bank.proto#L96-L125) for the IBC denominations of all vouchers minted by the transfer module. + +## IBC Apps + +### ICS20 - Transfer + +* The function `IsBound` has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/keeper/keeper.go#L98) and made unexported. + +### ICS27 - Interchain Accounts + +* Functions [`SerializeCosmosTx`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/codec.go#L32) and [`DeserializeCosmosTx`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/codec.go#L76) now accept an extra parameter `encoding` of type `string` that specifies the format in which the transaction messages are marshaled. Both [protobuf and proto3 JSON formats](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/metadata.go#L14-L17) are supported. +* The function `IsBound` of the controller submodule has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/controller/keeper/keeper.go#L111) and made unexported.
+* The function `IsBound` of the host submodule has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/host/keeper/keeper.go#L94) and made unexported. + +## Relayers + +* Getter functions in `MsgChannelOpenInitResponse`, `MsgChannelOpenTryResponse`, `MsgTransferResponse`, `MsgRegisterInterchainAccountResponse` and `MsgSendTxResponse` have been removed. The fields can be accessed directly. +* `channeltypes.EventTypeTimeoutPacketOnClose` (where `channeltypes` is an import alias for `"github.com/cosmos/ibc-go/v8/modules/core/04-channel/types"`) has been removed, since core IBC does not emit any event with this key. +* The attribute with key `counterparty_connection_id` has been removed from the event with key `connectiontypes.EventTypeConnectionOpenInit` (where `connectiontypes` is an import alias for `"github.com/cosmos/ibc-go/v8/modules/core/03-connection/types"`), and the attribute with key `counterparty_channel_id` has been removed from the event with key `channeltypes.EventTypeChannelOpenInit` (where `channeltypes` is an import alias for `"github.com/cosmos/ibc-go/v8/modules/core/04-channel/types"`), since both (the counterparty connection ID and the counterparty channel ID) are empty on `ConnectionOpenInit` and `ChannelOpenInit` respectively.
+* As part of the migration to [governance V1 messages](#governance-v1-migration), the following changes in events have been made: + +```diff expandable +/ IBC client events vars +var ( + EventTypeCreateClient = "create_client" + EventTypeUpdateClient = "update_client" + EventTypeUpgradeClient = "upgrade_client" + EventTypeSubmitMisbehaviour = "client_misbehaviour" +- EventTypeUpdateClientProposal = "update_client_proposal" +- EventTypeUpgradeClientProposal = "upgrade_client_proposal" ++ EventTypeRecoverClient = "recover_client" ++ EventTypeScheduleIBCSoftwareUpgrade = "schedule_ibc_software_upgrade" + EventTypeUpgradeChain = "upgrade_chain" +) +``` + +## IBC Light Clients + +* Functions `Pretty` and `String` of type `MerklePath` have been [removed](https://github.com/cosmos/ibc-go/pull/4459/files#diff-dd94ec1dde9b047c0cdfba204e30dad74a81de202e3b09ac5b42f493153811af). diff --git a/docs/ibc/v10.1.x/migrations/v7_2-to-v7_3.mdx b/docs/ibc/v10.1.x/migrations/v7_2-to-v7_3.mdx new file mode 100644 index 00000000..9cc47182 --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v7_2-to-v7_3.mdx @@ -0,0 +1,46 @@ +--- +title: IBC-Go v7.2 to v7.3 +description: This guide provides instructions for migrating to version v7.3.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v7.3.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v7.2 to v7.3](#migrating-from-v72-to-v73) + * [Chains](#chains) + * [IBC Apps](#ibc-apps) + * [Relayers](#relayers) + * [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +* No relevant changes were made in this release. + +## IBC Apps + +A set of interfaces has been added that IBC applications may optionally implement.
Developers interested in integrating their applications with the [callbacks middleware](/docs/ibc/v10.1.x/middleware/callbacks/overview) should implement these interfaces so that the callbacks middleware can retrieve the desired callback addresses on the source and destination chains and execute actions on packet lifecycle events. The interfaces are [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/05-port/types/module.go#L142-L147), [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/exported/packet.go#L43-L52) and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/exported/packet.go#L36-L41). + +Sample implementations are available for reference. For `transfer`: + +* [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/ibc_module.go#L303-L313), +* [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/types/packet.go#L85-L105) +* and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/types/packet.go#L74-L83). + +For `27-interchain-accounts`: + +* [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/controller/ibc_middleware.go#L258-L268), +* [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/types/packet.go#L94-L114) +* and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/types/packet.go#L78-L92). + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +### 06-solomachine + +Solo machines are now expected to sign data on a path that 1) does not include a connection prefix (e.g. `ibc`) and 2) does not escape any characters. See PR [#4429](https://github.com/cosmos/ibc-go/pull/4429) for more details.
We recommend **NOT** using the solo machine light client from versions lower than v7.3.0. diff --git a/docs/ibc/v10.1.x/migrations/v8-to-v8_1.mdx b/docs/ibc/v10.1.x/migrations/v8-to-v8_1.mdx new file mode 100644 index 00000000..0537d07d --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v8-to-v8_1.mdx @@ -0,0 +1,38 @@ +--- +title: IBC-Go v8 to v8.1 +description: This guide provides instructions for migrating to version v8.1.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v8.1.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v8 to v8.1](#migrating-from-v8-to-v81) + * [Chains](#chains) + * [IBC apps](#ibc-apps) + * [Relayers](#relayers) + * [IBC light clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +### `04-channel` params migration + +Self-managed [params](https://github.com/cosmos/ibc-go/blob/v8.1.0/proto/ibc/core/channel/v1/channel.proto#L183-L187) have been added for the `04-channel` module. The params include the `upgrade_timeout` that is used in channel upgradability to specify the interval of time during which the counterparty chain must flush all in-flight packets on its end and move to the `FLUSH_COMPLETE` state. An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.1.0/modules/core/module.go#L162-L166) is configured in the `04-channel` module that sets the default params (with a default upgrade timeout of 10 minutes). The module has a corresponding [`MsgUpdateParams` message](https://github.com/cosmos/ibc-go/blob/v8.1.0/proto/ibc/core/channel/v1/tx.proto#L435-L447) with a `Params` field which can be specified in full to update the module's `Params`.
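Roughly, the new params carry a single upgrade timeout. The sketch below is a local illustration, not the generated `channeltypes` code, and it assumes the 10-minute default is expressed as a timestamp-based timeout in nanoseconds:

```go
package main

import (
	"fmt"
	"time"
)

// channelParams loosely mirrors the shape of the linked 04-channel Params
// proto for illustration; the real type lives in channeltypes.
type channelParams struct {
	UpgradeTimeoutNanos uint64 // timestamp-based timeout, in nanoseconds
}

// defaultChannelParams returns the assumed 10-minute default set by the
// automatic migration handler.
func defaultChannelParams() channelParams {
	return channelParams{UpgradeTimeoutNanos: uint64(10 * time.Minute)}
}

func main() {
	fmt.Println(defaultChannelParams().UpgradeTimeoutNanos) // 600000000000
}
```

A chain that wants a different flush window would submit the module's `MsgUpdateParams` with the full `Params` value, since partial updates are not supported.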
+ +### Fee migration + +In ibc-go v8.1.0, an improved, more efficient escrow calculation of fees for packet incentivisation has been introduced (see [this issue](https://github.com/cosmos/ibc-go/issues/5509) for more information). Before v8.1.0, the amount escrowed was `(RecvFee + AckFee + TimeoutFee)`; from ibc-go v8.1.0, the calculation is changed to `Max(RecvFee + AckFee, TimeoutFee)`. In order to guarantee that the correct amount of fees are refunded for packets that are in-flight during the upgrade to ibc-go v8.1.0, an [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.1.0/modules/apps/29-fee/module.go#L113-L115) is configured in the `29-fee` module to refund the leftover fees (i.e. `(RecvFee + AckFee + TimeoutFee) - Max(RecvFee + AckFee, TimeoutFee)`) that otherwise would not be refunded when the packet lifecycle completes and the new calculation is used. + +## IBC apps + +* No relevant changes were made in this release. + +## Relayers + +* No relevant changes were made in this release. + +## IBC light clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v10.1.x/migrations/v8_1-to-v10.mdx b/docs/ibc/v10.1.x/migrations/v8_1-to-v10.mdx new file mode 100644 index 00000000..f39da766 --- /dev/null +++ b/docs/ibc/v10.1.x/migrations/v8_1-to-v10.mdx @@ -0,0 +1,285 @@ +--- +title: IBC-Go v8.1 to v10 +description: This guide provides instructions for migrating to a new version of ibc-go. +--- + +This guide provides instructions for migrating to a new version of ibc-go. + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. In addition, for this release, the 08-wasm module has been released as v10, and the callbacks middleware has been moved into the ibc-go module itself.
+ +Diff examples are shown after the list of overall changes: + +* To add support for IBC v2, chains will need to wire up a new IBC v2 transfer stack +* Chains will need to wire up the new light client modules +* Chains will need to update keeper construction calls to comply with the new signatures +* Chains will need to remove the route for the legacy proposal handler for 02-client from their `app/app.go` +* Chains will need to remove the capability keeper and all related setup, including the scoped keepers, from their `app/app.go` +* Chains will need to remove the ibc fee middleware (29-fee) +* Chains using the 08-wasm module will need to update their imports and usage of `github.com/cosmos/ibc-go/modules/light-clients/08-wasm/` to `github.com/cosmos/ibc-go/modules/light-clients/08-wasm/v10` +* Chains using the callbacks middleware will need to update their imports and usage of `github.com/cosmos/ibc-go/modules/apps/callbacks` to `github.com/cosmos/ibc-go/v10/modules/apps/callbacks` + +To add IBC v2 support, wire up a new transfer stack. The example below shows it wired up with the IBC callbacks module: + +```diff ++ var ibcv2TransferStack ibcapi.IBCModule ++ ibcv2TransferStack = transferv2.NewIBCModule(app.TransferKeeper) ++ ibcv2TransferStack = ibccallbacksv2.NewIBCMiddleware( ++ transferv2.NewIBCModule(app.TransferKeeper), ++ app.IBCKeeper.ChannelKeeperV2, ++ wasmStackIBCHandler, ++ app.IBCKeeper.ChannelKeeperV2, ++ maxCallbackGas, ++ ) +``` + +Wire up each light client as a separate module and add them to the client keeper router.
Example below for 07-tendermint and 08-wasm: + +```diff ++ / Light client modules ++ clientKeeper := app.IBCKeeper.ClientKeeper ++ storeProvider := app.IBCKeeper.ClientKeeper.GetStoreProvider() ++ ++ tmLightClientModule := ibctm.NewLightClientModule(appCodec, storeProvider) ++ clientKeeper.AddRoute(ibctm.ModuleName, &tmLightClientModule) ++ ++ wasmLightClientModule := ibcwasm.NewLightClientModule(app.WasmClientKeeper, storeProvider) ++ clientKeeper.AddRoute(ibcwasmtypes.ModuleName, &wasmLightClientModule) +``` + +Remove ibc fee module name (if used) from module account permissions: + +```diff + / app.go + ... + / module account permissions + var maccPerms = map[string][]string{ + ... +- ibcfeetypes.ModuleName: nil, + ... + } +``` + +Remove `CapabilityKeeper`, `IBCFeeKeeper` and all `capabilitykeeper.ScopedKeeper` Scoped keepers from the App struct: + +```diff expandable + / ChainApp extended ABCI application + type ChainApp struct { + ... +- CapabilityKeeper *capabilitykeeper.Keeper + ... +- IBCFeeKeeper ibcfeekeeper.Keeper + ... +- ScopedIBCKeeper capabilitykeeper.ScopedKeeper +- ScopedICAHostKeeper capabilitykeeper.ScopedKeeper +- ScopedICAControllerKeeper capabilitykeeper.ScopedKeeper +- ScopedTransferKeeper capabilitykeeper.ScopedKeeper +- ScopedIBCFeeKeeper capabilitykeeper.ScopedKeeper + ... + } + ... +- app.ScopedIBCKeeper = scopedIBCKeeper +- app.ScopedTransferKeeper = scopedTransferKeeper +- app.ScopedWasmKeeper = scopedWasmKeeper +- app.ScopedICAHostKeeper = scopedICAHostKeeper +- app.ScopedICAControllerKeeper = scopedICAControllerKeeper +``` + +Remove capability and ibc fee middleware store keys from the `NewKVStoreKeys` call: + +```diff +... + keys := storetypes.NewKVStoreKeys( + ... +- capabilitytypes.StoreKey, +- ibcfeetypes.StoreKey, + ... + } +``` + +Remove the in-memory store keys previously used by the capability module: + +```diff +- memKeys := storetypes.NewMemoryStoreKeys(capabilitytypes.MemStoreKey) +... 
+- app.MountMemoryStores(memKeys) +``` + +Remove creation of the capability keeper: + +```diff expandable +- / add capability keeper and ScopeToModule for ibc module +- app.CapabilityKeeper = capabilitykeeper.NewKeeper( +- appCodec, +- keys[capabilitytypes.StoreKey], +- memKeys[capabilitytypes.MemStoreKey], +- ) + +- scopedIBCKeeper := app.CapabilityKeeper.ScopeToModule(ibcexported.ModuleName) +- scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) +- scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) +- scopedTransferKeeper := app.CapabilityKeeper.ScopeToModule(ibctransfertypes.ModuleName) +- scopedWasmKeeper := app.CapabilityKeeper.ScopeToModule(wasmtypes.ModuleName) +- app.CapabilityKeeper.Seal() +``` + +Remove the legacy route for the client keeper: + +```diff +... + govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). +- AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+- AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper)) ++ AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)) +``` + +Update Core IBC Keeper constructor: + +```diff + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, +- keys[ibcexported.StoreKey], ++ runtime.NewKVStoreService(keys[ibcexported.StoreKey]), + app.GetSubspace(ibcexported.ModuleName), +- app.StakingKeeper, + app.UpgradeKeeper, +- scopedIBCKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) +``` + +Update IBC Transfer keeper constructor: + +```diff expandable + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, +- keys[ibctransfertypes.StoreKey], ++ runtime.NewKVStoreService(keys[ibctransfertypes.StoreKey]), + app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, + app.IBCKeeper.ChannelKeeper, +- app.IBCKeeper.PortKeeper, ++ app.MsgServiceRouter(), + app.AccountKeeper, + app.BankKeeper, +- scopedTransferKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) +``` + +Update ICA Host keeper constructor, notice the removal of the `WithQueryRouter` call in particular: + +```diff expandable + app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, +- keys[icahosttypes.StoreKey], ++ runtime.NewKVStoreService(keys[icahosttypes.StoreKey]), + app.GetSubspace(icahosttypes.SubModuleName), +- app.IBCFeeKeeper, / use ics29 fee as ics4Wrapper in middleware stack + app.IBCKeeper.ChannelKeeper, +- app.IBCKeeper.PortKeeper, ++ app.IBCKeeper.ChannelKeeper, + app.AccountKeeper, +- scopedICAHostKeeper, + app.MsgServiceRouter(), ++ app.GRPCQueryRouter(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) +- app.ICAHostKeeper.WithQueryRouter(app.GRPCQueryRouter()) +``` + +Remove IBC Fee Module keeper: + +```diff +- app.IBCFeeKeeper = ibcfeekeeper.NewKeeper( +- appCodec, keys[ibcfeetypes.StoreKey], +- app.IBCKeeper.ChannelKeeper, / may be replaced with IBC 
middleware +- app.IBCKeeper.ChannelKeeper, +- app.IBCKeeper.PortKeeper, app.AccountKeeper, app.BankKeeper, +- ) +``` + +Update Transfer stack to remove the fee middleware. The example below shows the correct way to wire up a middleware stack with the IBC callbacks middleware: + +```diff expandable + / Create Transfer Stack + var transferStack porttypes.IBCModule + transferStack = transfer.NewIBCModule(app.TransferKeeper) +- transferStack = ibccallbacks.NewIBCMiddleware(transferStack, app.IBCFeeKeeper, wasmStackIBCHandler, maxCallbackGas) +- transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) ++ / callbacks wraps the transfer stack as its base app, and uses PacketForwardKeeper as the ICS4Wrapper ++ / i.e. packet-forward-middleware is higher on the stack and sits between callbacks and the ibc channel keeper ++ / Since this is the lowest level middleware of the transfer stack, it should be the first entrypoint for transfer keeper's ++ / WriteAcknowledgement. ++ cbStack := ibccallbacks.NewIBCMiddleware(transferStack, app.PacketForwardKeeper, wasmStackIBCHandler, maxCallbackGas) +transferStack = packetforward.NewIBCMiddleware( +- transferStack, ++ cbStack, + app.PacketForwardKeeper, + 0, + packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp, + ) ++ app.TransferKeeper.WithICS4Wrapper(cbStack) +``` + +Remove ibc fee middleware and any empty IBCModule (often dubbed `noAuthzModule`) from the ICA Controller stack creation: + +```diff +- var noAuthzModule porttypes.IBCModule +- icaControllerStack = icacontroller.NewIBCMiddleware(noAuthzModule, app.ICAControllerKeeper) +- icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper) ++ icaControllerStack = icacontroller.NewIBCMiddleware(app.ICAControllerKeeper) +``` + +Remove ibc fee middleware from ICA Host stack creation: + +```diff + icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper) +- icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper) +``` + 
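With 29-fee removed from every stack, the fee middleware imports and aliases can also be dropped from `app.go` (v8 import paths and aliases shown for illustration; adjust to your chain's actual imports):

```diff
- ibcfee "github.com/cosmos/ibc-go/v8/modules/apps/29-fee"
- ibcfeekeeper "github.com/cosmos/ibc-go/v8/modules/apps/29-fee/keeper"
- ibcfeetypes "github.com/cosmos/ibc-go/v8/modules/apps/29-fee/types"
```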
+Update the module manager creation by removing the capability module, fee module and updating the tendermint app module constructor: + +```diff + app.ModuleManager = module.NewManager( + ... +- capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + ... +- ibcfee.NewAppModule(app.IBCFeeKeeper), + ... +- ibctm.NewAppModule(), ++ ibctm.NewAppModule(tmLightClientModule), + ... + ) +``` + +Remove the capability module and ibc fee middleware from `SetOrderBeginBlockers`, `SetOrderEndBlockers`, `SetOrderInitGenesis` and `SetOrderExportGenesis`: + +```diff +- capabilitytypes.ModuleName, +- ibcfeetypes.ModuleName, +``` + +If you use 08-wasm, you will need to update the go module that is used for `QueryPlugins` and `AcceptListStargateQuerier`. + +```diff +- wasmLightClientQuerier := ibcwasmtypes.QueryPlugins{ ++ wasmLightClientQuerier := ibcwasmkeeper.QueryPlugins{ +- Stargate: ibcwasmtypes.AcceptListStargateQuerier([]string{ ++ Stargate: ibcwasmkeeper.AcceptListStargateQuerier([]string{ + "/ibc.core.client.v1.Query/ClientState", + "/ibc.core.client.v1.Query/ConsensusState", + "/ibc.core.connection.v1.Query/Connection", +- }), ++ }, app.GRPCQueryRouter()), + } +``` + +If you use 08-wasm, you will need to use the wasm client keeper rather than the go module to initialize pinned codes: + +```diff +- if err := ibcwasmkeeper.InitializePinnedCodes(ctx); err != nil { +- panic(fmt.Sprintf("ibcwasmkeeper failed initialize pinned codes %s", err)) ++ if err := app.WasmClientKeeper.InitializePinnedCodes(ctx); err != nil { ++ panic(fmt.Sprintf("WasmClientKeeper failed initialize pinned codes %s", err)) ++ } +``` diff --git a/docs/ibc/v4.6.x/apps/interchain-accounts/active-channels.mdx b/docs/ibc/v4.6.x/apps/interchain-accounts/active-channels.mdx new file mode 100644 index 00000000..6cbefa72 --- /dev/null +++ b/docs/ibc/v4.6.x/apps/interchain-accounts/active-channels.mdx @@ -0,0 +1,29 @@ +--- +title: Active Channels +description: >- + The Interchain Accounts module uses 
ORDERED channels to maintain the order of + transactions when sending packets from a controller to a host chain. A + limitation when using ORDERED channels is that when a packet times out the + channel will be closed. +--- + +The Interchain Accounts module uses [ORDERED channels](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#ordering) to maintain the order of transactions when sending packets from a controller to a host chain. A limitation when using ORDERED channels is that when a packet times out the channel will be closed. + +In the case of a channel closing, a controller chain needs to be able to regain access to the interchain account registered on this channel. `Active Channels` enable this functionality. Future versions of the ICS-27 protocol and the Interchain Accounts module will likely use a new +channel type that provides ordering of packets without the channel closing on timing out, thus removing the need for `Active Channels` entirely. + +When an Interchain Account is registered using the `RegisterInterchainAccount` API, a new channel is created on a particular port. During the `OnChanOpenAck` and `OnChanOpenConfirm` steps (controller & host chain) the `Active Channel` for this interchain account +is stored in state. + +It is possible to create a new channel using the same controller chain portID if the previously set `Active Channel` is now in a `CLOSED` state. This channel creation can be initialized programmatically by sending a new `MsgChannelOpenInit` message like so: + +```go +msg := channeltypes.NewMsgChannelOpenInit(portID, string(versionBytes), channeltypes.ORDERED, []string{ + connectionID +}, icatypes.PortID, icatypes.ModuleName) + handler := k.msgRouter.Handler(msg) +``` + +Alternatively, any relayer operator may initiate a new channel handshake for this interchain account once the previously set `Active Channel` is in a `CLOSED` state. 
This is done by initiating the channel handshake on the controller chain using the same portID associated with the interchain account in question. + +It is important to note that once a channel has been opened for a given Interchain Account, new channels can not be opened for this account until the currently set `Active Channel` is set to `CLOSED`. diff --git a/docs/ibc/v4.6.x/apps/interchain-accounts/auth-modules.mdx b/docs/ibc/v4.6.x/apps/interchain-accounts/auth-modules.mdx new file mode 100644 index 00000000..40e6106c --- /dev/null +++ b/docs/ibc/v4.6.x/apps/interchain-accounts/auth-modules.mdx @@ -0,0 +1,442 @@ +--- +title: Authentication Modules +--- + +## Synopsis + +Authentication modules play the role of the `Base Application` as described in [ICS30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware), and enable application developers to perform custom logic when working with the Interchain Accounts controller API. + +The controller submodule is used for account registration and packet sending. +It executes only logic required of all controllers of interchain accounts. +The type of authentication used to manage the interchain accounts remains unspecified. +There may exist many different types of authentication which are desirable for different use cases. +Thus the purpose of the authentication module is to wrap the controller module with custom authentication logic. + +In ibc-go, authentication modules are connected to the controller chain via a middleware stack. +The controller module is implemented as [middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) and the authentication module is connected to the controller module as the base application of the middleware stack. +To implement an authentication module, the `IBCModule` interface must be fulfilled. 
+By implementing the controller module as middleware, any amount of authentication modules can be created and connected to the controller module without writing redundant code. + +The authentication module must: + +- Authenticate interchain account owners +- Track the associated interchain account address for an owner +- Claim the channel capability in `OnChanOpenInit` +- Send packets on behalf of an owner (after authentication) + +## IBCModule implementation + +The following `IBCModule` callbacks must be implemented with appropriate custom logic: + +```go expandable +/ OnChanOpenInit implements the IBCModule interface +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / the authentication module *must* claim the channel capability on OnChanOpenInit + if err := im.keeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return version, err +} + + / perform custom logic + + return version, nil +} + +/ OnChanOpenAck implements the IBCModule interface +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + / perform custom logic + + return nil +} + +/ OnChanCloseConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / perform custom logic + + return nil +} + +/ OnAcknowledgementPacket implements the IBCModule interface +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} + +/ OnTimeoutPacket implements the IBCModule interface. 
+func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} +``` + +**Note**: The channel capability must be claimed by the authentication module in `OnChanOpenInit` otherwise the authentication module will not be able to send packets on the channel created for the associated interchain account. + +The following functions must be defined to fulfill the `IBCModule` interface, but they will never be called by the controller module so they may error or panic. + +```go expandable +/ OnChanOpenTry implements the IBCModule interface +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + panic("UNIMPLEMENTED") +} + +/ OnChanOpenConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnChanCloseInit implements the IBCModule interface +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnRecvPacket implements the IBCModule interface. A successful acknowledgement +/ is returned if the packet data is successfully decoded and the receive application +/ logic returns without error. 
+func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + panic("UNIMPLEMENTED") +} +``` + +## `RegisterInterchainAccount` + +The authentication module can begin registering interchain accounts by calling `RegisterInterchainAccount`: + +```go +if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, connectionID, owner.String(), version); err != nil { + return err +} + +return nil +``` + +The `version` argument is used to support ICS29 fee middleware for relayer incentivization of ICS27 packets. Consumers of the `RegisterInterchainAccount` are expected to build the appropriate JSON encoded version string themselves and pass it accordingly. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: 
icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(feeEnabledVersion)); err != nil { + return err +} +``` + +## `SendTx` + +The authentication module can attempt to send a packet by calling `SendTx`: + +```go expandable +/ Authenticate owner +/ perform custom logic + +/ Construct controller portID based on interchain account owner address +portID, err := icatypes.NewControllerPortID(owner.String()) + if err != nil { + return err +} + +channelID, found := keeper.icaControllerKeeper.GetActiveChannelID(ctx, portID) + if !found { + return sdkerrors.Wrapf(icatypes.ErrActiveChannelNotFound, "failed to retrieve active channel for port %s", portID) +} + +/ Obtain the channel capability, claimed in OnChanOpenInit +chanCap, found := keeper.scopedKeeper.GetCapability(ctx, host.ChannelCapabilityPath(portID, channelID)) + if !found { + return sdkerrors.Wrap(channeltypes.ErrChannelCapabilityNotFound, "module does not own channel capability") +} + +/ Obtain data to be sent to the host chain. +/ In this example, the owner of the interchain account would like to send a bank MsgSend to the host chain. +/ The appropriate serialization function should be called. The host chain must be able to deserialize the transaction. +/ If the host chain is using the ibc-go host module, `SerializeCosmosTx` should be used. 
+ msg := &banktypes.MsgSend{ + FromAddress: fromAddr, + ToAddress: toAddr, + Amount: amt +} + +data, err := icatypes.SerializeCosmosTx(keeper.cdc, []sdk.Msg{ + msg +}) + if err != nil { + return err +} + +/ Construct packet data + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: data, +} + +/ Obtain timeout timestamp +/ An appropriate timeout timestamp must be determined based on the usage of the interchain account. +/ If the packet times out, the channel will be closed requiring a new channel to be created + timeoutTimestamp := obtainTimeoutTimestamp() + +/ Send the interchain accounts packet, returning the packet sequence +seq, err = keeper.icaControllerKeeper.SendTx(ctx, chanCap, portID, packetData, timeoutTimestamp) +``` + +The data within an `InterchainAccountPacketData` must be serialized using a format supported by the host chain. +If the host chain is using the ibc-go host chain submodule, `SerializeCosmosTx` should be used. If the `InterchainAccountPacketData.Data` is serialized using a format not supported by the host chain, the packet will not be successfully received. + +## `OnAcknowledgementPacket` + +Controller chains will be able to access the acknowledgement written into the host chain state once a relayer relays the acknowledgement. +The acknowledgement bytes will be passed to the auth module via the `OnAcknowledgementPacket` callback. +Auth modules are expected to know how to decode the acknowledgement.
+ +If the controller chain is connected to a host chain using the host module on ibc-go, it may interpret the acknowledgement bytes as follows: + +Begin by unmarshaling the acknowledgement into sdk.TxMsgData: + +```go +var ack channeltypes.Acknowledgement + if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil { + return err +} + txMsgData := &sdk.TxMsgData{ +} + if err := proto.Unmarshal(ack.GetResult(), txMsgData); err != nil { + return err +} +``` + +If the txMsgData.Data field is non-nil, the host chain is using SDK version `<=` v0.45. +The auth module should interpret the txMsgData.Data as follows: + +```go expandable +switch len(txMsgData.Data) { + case 0: + / see documentation below for SDK 0.46.x or greater +default: + for _, msgData := range txMsgData.Data { + if err := handler(msgData); err != nil { + return err +} + +} +... +} +``` + +A handler will be needed to interpret what actions to perform based on the message type sent. +A router could be used, or more simply a switch statement. + +```go expandable +func handler(msgData sdk.MsgData) + +error { + switch msgData.MsgType { + case sdk.MsgTypeURL(&banktypes.MsgSend{ +}): + msgResponse := &banktypes.MsgSendResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleBankSendMsg(msgResponse) + case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{ +}): + msgResponse := &stakingtypes.MsgDelegateResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleStakingDelegateMsg(msgResponse) + case sdk.MsgTypeURL(&transfertypes.MsgTransfer{ +}): + msgResponse := &transfertypes.MsgTransferResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleIBCTransferMsg(msgResponse) + +default: + return nil +} +``` + +If the txMsgData.Data is empty, the host chain is using SDK version > v0.45.
+The auth module should interpret the txMsgData.MsgResponses as follows: + +```go +... +/ switch statement from above + case 0: + for _, any := range txMsgData.MsgResponses { + if err := handleAny(any); err != nil { + return err +} + +} +} +``` + +A handler will be needed to interpret what actions to perform based on the type URL of the Any. +A router could be used, or more simply a switch statement. +It may be possible to deduplicate logic between `handler` and `handleAny`. + +```go expandable +func handleAny(any *codectypes.Any) + +error { + switch any.TypeURL { + case sdk.MsgTypeURL(&banktypes.MsgSend{}): + msgResponse, err := unpackBankMsgSendResponse(any) + if err != nil { + return err +} + +handleBankSendMsg(msgResponse) + case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{}): + msgResponse, err := unpackStakingDelegateResponse(any) + if err != nil { + return err +} + +handleStakingDelegateMsg(msgResponse) + case sdk.MsgTypeURL(&transfertypes.MsgTransfer{}): + msgResponse, err := unpackIBCTransferMsgResponse(any) + if err != nil { + return err +} + +handleIBCTransferMsg(msgResponse) + +default: + return nil +} +``` + +### Integration into `app.go` file + +To integrate the authentication module into your chain, please follow the steps outlined above in [app.go integration](/docs/ibc/v4.6.x/apps/interchain-accounts/integration#example-integration). diff --git a/docs/ibc/v4.6.x/apps/interchain-accounts/integration.mdx b/docs/ibc/v4.6.x/apps/interchain-accounts/integration.mdx new file mode 100644 index 00000000..d2576de6 --- /dev/null +++ b/docs/ibc/v4.6.x/apps/interchain-accounts/integration.mdx @@ -0,0 +1,193 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate Interchain Accounts host and controller functionality into your chain. The following document only applies for Cosmos SDK chains. + +The Interchain Accounts module contains two submodules. Each submodule has its own IBC application.
The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule which will be used. + +Chains who wish to support ICS27 may elect to act as a host chain, a controller chain or both. Disabling host or controller functionality may be done statically by excluding the host or controller module entirely from the `app.go` file or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules are the base application of a middleware stack. The controller submodule is the middleware in this stack. + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... + +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... 
+ icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +/ Create the scoped keepers for each submodule keeper and authentication keeper + scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) + scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) + scopedICAAuthKeeper := app.CapabilityKeeper.ScopeToModule(icaauthtypes.ModuleName) + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), +) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper, scopedICAAuthKeeper) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ ICA auth IBC Module + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). 
+ AddRoute(icahosttypes.SubModuleName, icaHostIBCModule). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack + +... + +/ Register Interchain Accounts and authentication module AppModules +app.moduleManager = module.NewManager( + ... + icaModule, + icaAuthModule, +) + +... + +/ Add Interchain Accounts module to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add Interchain Accounts module to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add Interchain Accounts module InitGenesis logic +app.moduleManager.SetOrderInitGenesis( + ... + icatypes.ModuleName, + ... +) + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) + +paramskeeper.Keeper { + ... + paramsKeeper.Subspace(icahosttypes.SubModuleName) + +paramsKeeper.Subspace(icacontrollertypes.SubModuleName) + ... +``` + +### Using submodules exclusively + +As described above, the Interchain Accounts application module is structured to support the ability of exclusively enabling controller or host functionality. +This can be achieved by simply omitting either controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`. +Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v4.6.x/apps/interchain-accounts/parameters). + +The following snippets show basic examples of statically disabling submodules using `app.go`.
+ +#### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +#### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Create your Interchain Accounts authentication module, setting up the Keeper, AppModule and IBCModule appropriately +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper, scopedICAAuthKeeper) + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + +/ Register controller and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack +``` diff --git a/docs/ibc/v4.6.x/apps/interchain-accounts/overview.mdx b/docs/ibc/v4.6.x/apps/interchain-accounts/overview.mdx new file mode 100644 index 00000000..8cd67665 --- /dev/null +++ b/docs/ibc/v4.6.x/apps/interchain-accounts/overview.mdx @@ -0,0 +1,41 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the Interchain Accounts module is, and how to build custom modules that utilize Interchain Accounts functionality + +## What is the Interchain Accounts module? + +Interchain Accounts is the Cosmos SDK implementation of the ICS-27 protocol, which enables cross-chain account management built upon IBC. 
Chains using the Interchain Accounts module can programmatically create accounts on other chains and control these accounts via IBC transactions.
+
+Interchain Accounts exposes a simple-to-use API, which means IBC application developers do not require in-depth knowledge of the underlying low-level details of IBC or the ICS-27 protocol.
+
+Developers looking to build upon Interchain Accounts must write custom logic in their own IBC application modules, called authentication modules.
+
+- How is an interchain account different from a regular account?
+
+Regular accounts use a private key to sign transactions on-chain. Interchain Accounts are instead controlled programmatically by separate chains via IBC transactions. Interchain Accounts are implemented as sub-accounts of the Interchain Accounts module account.
+
+## Concepts
+
+`Host Chain`: The chain where the interchain account is registered. The host chain listens for IBC packets from a controller chain containing instructions (e.g. Cosmos SDK messages) that the interchain account will execute.
+
+`Controller Chain`: The chain registering and controlling an account on a host chain. The controller chain sends IBC packets to the host chain to control the account. A controller chain must have at least one interchain accounts authentication module in order to act as a controller chain.
+
+`Authentication Module`: A custom IBC application module on the controller chain that uses the Interchain Accounts module API to build custom logic for the creation & management of interchain accounts. For a controller chain to utilize the Interchain Accounts module functionality, an authentication module is required.
+
+`Interchain Account`: An account on a host chain. An interchain account has all the capabilities of a normal account.
However, rather than signing transactions with a private key, a controller chain's authentication module sends IBC packets to the host chain signaling which transactions the interchain account should execute.
+
+## SDK Security Model
+
+SDK modules on a chain are assumed to be trustworthy. For example, there are no checks to prevent an untrustworthy module from accessing the bank keeper.
+
+The implementation of ICS27 on ibc-go uses this assumption in its security considerations. The implementation assumes the authentication module will not try to open channels on owner addresses it does not control.
+
+The implementation assumes other IBC application modules will not bind to ports within the ICS27 namespace.
+
+## Known Bugs
+
+- Fee-enabled Interchain Accounts channels cannot be reopened in case of closure due to packet timeout. Regular (non fee-enabled) channels can be reopened. A fix for this bug has been implemented but, since it is API breaking, it is only available from v5.x. See [this PR](https://github.com/cosmos/ibc-go/pull/2302) for more details.
diff --git a/docs/ibc/v4.6.x/apps/interchain-accounts/parameters.mdx b/docs/ibc/v4.6.x/apps/interchain-accounts/parameters.mdx
new file mode 100644
index 00000000..0890101b
--- /dev/null
+++ b/docs/ibc/v4.6.x/apps/interchain-accounts/parameters.mdx
@@ -0,0 +1,63 @@
+---
+title: Parameters
+description: >-
+  The Interchain Accounts module contains the following on-chain parameters,
+  logically separated for each distinct submodule:
+---
+
+The Interchain Accounts module contains the following on-chain parameters, logically separated for each distinct submodule:
+
+## Controller Submodule Parameters
+
+| Key                 | Type | Default Value |
+| ------------------- | ---- | ------------- |
+| `ControllerEnabled` | bool | `true`        |
+
+### ControllerEnabled
+
+The `ControllerEnabled` parameter controls a chain's ability to service ICS-27 controller specific logic.
This includes the sending of Interchain Accounts packet data as well as the following ICS-26 callback handlers:
+
+* `OnChanOpenInit`
+* `OnChanOpenAck`
+* `OnChanCloseConfirm`
+* `OnAcknowledgementPacket`
+* `OnTimeoutPacket`
+
+## Host Submodule Parameters
+
+| Key             | Type      | Default Value |
+| --------------- | --------- | ------------- |
+| `HostEnabled`   | bool      | `true`        |
+| `AllowMessages` | \[]string | `[]`          |
+
+### HostEnabled
+
+The `HostEnabled` parameter controls a chain's ability to service ICS-27 host specific logic. This includes the following ICS-26 callback handlers:
+
+* `OnChanOpenTry`
+* `OnChanOpenConfirm`
+* `OnChanCloseConfirm`
+* `OnRecvPacket`
+
+### AllowMessages
+
+The `AllowMessages` parameter provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute, by defining an allowlist using the Protobuf message TypeURL format.
+
+For example, a Cosmos SDK based chain that elects to provide hosted interchain accounts with the ability of governance voting and staking delegations will define its parameters as follows:
+
+```json
+"params": {
+    "host_enabled": true,
+    "allow_messages": ["/cosmos.staking.v1beta1.MsgDelegate", "/cosmos.gov.v1beta1.MsgVote"]
+}
+```
+
+There is also a special wildcard `"*"` message type which allows any type of message to be executed by the interchain account. This must be the only message in the `allow_messages` array.
+
+```json
+"params": {
+    "host_enabled": true,
+    "allow_messages": ["*"]
+}
+```
diff --git a/docs/ibc/v4.6.x/apps/interchain-accounts/transactions.mdx b/docs/ibc/v4.6.x/apps/interchain-accounts/transactions.mdx
new file mode 100644
index 00000000..8f1aa68b
--- /dev/null
+++ b/docs/ibc/v4.6.x/apps/interchain-accounts/transactions.mdx
@@ -0,0 +1,21 @@
+---
+title: Transactions
+---
+
+## Synopsis
+
+Learn about Interchain Accounts transaction execution
+
+## Executing a transaction
+
+As described in [Authentication Modules](/docs/ibc/v4.6.x/apps/interchain-accounts/auth-modules#trysendtx), transactions are executed using the Interchain Accounts controller API and require a `Base Application` as outlined in [ICS30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) to facilitate authentication. The method of authentication remains unspecified to provide flexibility for the authentication module developer.
+
+Transactions are executed via the ICS27 [`SendTx` API](/docs/ibc/v4.6.x/apps/interchain-accounts/auth-modules#trysendtx). This must be invoked through an Interchain Accounts authentication module and follows the outlined path of execution below. Packet relaying semantics provided by the IBC core transport, authentication, and ordering (IBC/TAO) layer are omitted for brevity.
+
+![send-interchain-tx.png](/docs/ibc/images/02-apps/02-interchain-accounts/images/send-interchain-tx.png)
+
+## Atomicity
+
+As the Interchain Accounts module supports the execution of multiple transactions using the Cosmos SDK `Msg` interface, it provides the same atomicity guarantees as Cosmos SDK-based applications, leveraging the [`CacheMultiStore`](https://docs.cosmos.network/main/learn/advanced/store#cachemultistore) architecture provided by the [`Context`](https://docs.cosmos.network/main/learn/advanced/context.html) type.
+ +This provides atomic execution of transactions when using Interchain Accounts, where state changes are only committed if all `Msg`s succeed. diff --git a/docs/ibc/v4.6.x/apps/transfer/events.mdx b/docs/ibc/v4.6.x/apps/transfer/events.mdx new file mode 100644 index 00000000..1ae39041 --- /dev/null +++ b/docs/ibc/v4.6.x/apps/transfer/events.mdx @@ -0,0 +1,49 @@ +--- +title: Events +--- + +## `MsgTransfer` + +| Type | Attribute Key | Attribute Value | +| ------------- | ------------- | --------------- | +| ibc\_transfer | sender | `{sender}` | +| ibc\_transfer | receiver | `{receiver}` | +| message | module | transfer | + +## `OnRecvPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | success | `{ackSuccess}` | +| fungible\_token\_packet | error | `{ackError}` | +| denomination\_trace | trace\_hash | `{hex\_hash}` | +| denomination\_trace | denom | `{voucherDenom}` | + +## `OnAcknowledgePacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | acknowledgement | `{ack.String()}` | +| fungible\_token\_packet | success / error | `{ack.Response}` | + +## `OnTimeoutPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ---------------- | 
--------------- |
+| fungible\_token\_packet | module           | transfer        |
+| fungible\_token\_packet | refund\_receiver | `{receiver}`    |
+| fungible\_token\_packet | denom            | `{denom}`       |
+| fungible\_token\_packet | amount           | `{amount}`      |
+| fungible\_token\_packet | memo             | `{memo}`        |
diff --git a/docs/ibc/v4.6.x/apps/transfer/messages.mdx b/docs/ibc/v4.6.x/apps/transfer/messages.mdx
new file mode 100644
index 00000000..58410ca1
--- /dev/null
+++ b/docs/ibc/v4.6.x/apps/transfer/messages.mdx
@@ -0,0 +1,42 @@
+---
+title: Messages
+description: 'A fungible token cross chain transfer is achieved by using the MsgTransfer:'
+---
+
+## `MsgTransfer`
+
+A fungible token cross-chain transfer is achieved by using the `MsgTransfer`:
+
+```go
+type MsgTransfer struct {
+	SourcePort       string
+	SourceChannel    string
+	Token            sdk.Coin
+	Sender           string
+	Receiver         string
+	TimeoutHeight    ibcexported.Height
+	TimeoutTimestamp uint64
+	Memo             string
+}
+```
+
+This message is expected to fail if:
+
+* `SourcePort` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+* `SourceChannel` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+* `Token` is invalid (denom is invalid or amount is negative):
+  * `Token.Amount` is not positive.
+  * `Token.Denom` is not a valid IBC denomination as per [ADR 001 - Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md).
+* `Sender` is empty.
+* `Receiver` is empty.
+* `TimeoutHeight` and `TimeoutTimestamp` are both zero.
+
+This message will send a fungible token to the counterparty chain represented by the counterparty Channel End connected to the Channel End with the identifiers `SourcePort` and `SourceChannel`.
+
+The denomination provided for transfer should correspond to the same denomination represented on this chain. The prefixes will be added as necessary by the receiving chain.
+
+### Memo
+
+The memo field was added to allow applications and users to attach metadata to transfer packets. The field is optional and may be left empty. When it is used to attach metadata for a particular middleware, the memo field should be represented as a JSON object where different middlewares use different JSON keys.
+
+You can find more information about applications that use the memo field in the [chain registry](https://github.com/cosmos/chain-registry/blob/master/_memo_keys/ICS20_memo_keys.json).
diff --git a/docs/ibc/v4.6.x/apps/transfer/metrics.mdx b/docs/ibc/v4.6.x/apps/transfer/metrics.mdx
new file mode 100644
index 00000000..7ef82c88
--- /dev/null
+++ b/docs/ibc/v4.6.x/apps/transfer/metrics.mdx
@@ -0,0 +1,13 @@
+---
+title: Metrics
+description: The IBC transfer application module exposes the following set of metrics.
+---
+
+The IBC transfer application module exposes the following set of metrics.
+ +| Metric | Description | Unit | Type | +| :---------------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | +| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | +| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | +| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | +| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | diff --git a/docs/ibc/v4.6.x/apps/transfer/overview.mdx b/docs/ibc/v4.6.x/apps/transfer/overview.mdx new file mode 100644 index 00000000..90fcfee1 --- /dev/null +++ b/docs/ibc/v4.6.x/apps/transfer/overview.mdx @@ -0,0 +1,125 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the token Transfer module is + +## What is the Transfer module? + +Transfer is the Cosmos SDK implementation of the [ICS-20](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer) protocol, which enables cross-chain fungible token transfers. + +## Concepts + +### Acknowledgements + +ICS20 uses the recommended acknowledgement format as specified by [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope). + +A successful receive of a transfer packet will result in a Result Acknowledgement being written +with the value `[]byte{byte(1)}` in the `Response` field. + +An unsuccessful receive of a transfer packet will result in an Error Acknowledgement being written +with the error message in the `Response` field. + +### Denomination trace + +The denomination trace corresponds to the information that allows a token to be traced back to its +origin chain. 
It contains a sequence of port and channel identifiers ordered from the most recent to +the oldest in the timeline of transfers. + +This information is included on the token denomination field in the form of a hash to prevent an +unbounded denomination length. For example, the token `transfer/channelToA/uatom` will be displayed +as `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2`. + +Each send to any chain other than the one it was previously received from is a movement forwards in +the token's timeline. This causes trace to be added to the token's history and the destination port +and destination channel to be prefixed to the denomination. In these instances the sender chain is +acting as the "source zone". When the token is sent back to the chain it previously received from, the +prefix is removed. This is a backwards movement in the token's timeline and the sender chain is +acting as the "sink zone". + +It is strongly recommended to read the full details of [ADR 001: Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md) to understand the implications and context of the IBC token representations. + +## UX suggestions for clients + +For clients (wallets, exchanges, applications, block explorers, etc) that want to display the source of the token, it is recommended to use the following alternatives for each of the cases below: + +### Direct connection + +If the denomination trace contains a single identifier prefix pair (as in the example above), then +the easiest way to retrieve the chain and light client identifier is to map the trace information +directly. In summary, this requires querying the channel from the denomination trace identifiers, +and then the counterparty client state using the counterparty port and channel identifiers from the +retrieved channel. + +A general pseudo algorithm would look like the following: + +1. Query the full denomination trace. +2. 
Query the channel with the `portID/channelID` pair, which corresponds to the first destination of the
+   token.
+3. Query the client state using the identifiers pair. Note that this query will return a `"Not Found"` response if the current chain is not connected to this channel.
+4. Retrieve the client identifier or chain identifier from the client state (e.g. on
+   Tendermint clients) and store it locally.
+
+Using the gRPC gateway client service, the steps above would be, with a given IBC token `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` stored on `chainB`:
+
+1. `GET /ibc/apps/transfer/v1/denom_traces/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` -> `{"path": "transfer/channelToA", "base_denom": "uatom"}`
+2. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer/client_state` -> `{"client_id": "clientA", "chain-id": "chainA", ...}`
+3. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer` -> `{"channel_id": "channelToA", "port_id": "transfer", "counterparty": {"channel_id": "channelToB", "port_id": "transfer"}, ...}`
+4. `GET /ibc/apps/transfer/v1/channels/channelToB/ports/transfer/client_state` -> `{"client_id": "clientB", "chain-id": "chainB", ...}`
+
+Then, the token transfer chain path for the `uatom` denomination would be: `chainA` -> `chainB`.
+
+### Multiple hops
+
+The multiple channel hops case applies when the token has passed through multiple chains between the original source and final destination chains.
+
+The IBC protocol doesn't know the topology of the overall network (i.e. connections between chains and identifier names between them). For this reason, in the multiple hops case, a particular chain in the timeline of the individual transfers can't query the chain and client identifiers of the other chains.
+
+Take for example the following sequence of transfers `A -> B -> C` for an IBC token, with a final prefix path (trace info) of `transfer/channelChainC/transfer/channelChainB`.
What the paragraph above means is that even in the case that chain `C` is directly connected to chain `A`, querying the port and channel identifiers that chain `B` uses to connect to chain `A` (e.g. `transfer/channelChainA`) can be completely different from the one that chain `C` uses to connect to chain `A` (e.g. `transfer/channelToChainA`).
+
+Thus, the IBC team recommends the following solutions for clients:
+
+- **Connect to all chains**: Connecting to all the chains in the timeline would allow clients to
+  perform the queries outlined in the [direct connection](#direct-connection) section on each
+  relevant chain. By repeatedly following the port and channel denomination trace transfer timeline,
+  clients should always be able to find all the relevant identifiers. This comes at the tradeoff
+  that the client must connect to nodes on each of the chains in order to perform the queries.
+- **Relayer as a Service (RaaS)**: A longer term solution is to use/create a relayer service that
+  could map the denomination trace to the chain path timeline for each token (i.e. `origin chain ->
+  chain #1 -> ... -> chain #(n-1) -> final chain`). These services could provide merkle proofs in
+  order to allow clients to optionally verify the path timeline correctness for themselves by
+  running light clients. If the proofs are not verified, they should be considered trusted
+  third-party services. Additionally, clients would be advised in the future to use RaaS that
+  support the largest number of connections between chains in the ecosystem. Unfortunately, none of
+  the existing public relayers (in [Golang](https://github.com/cosmos/relayer) and
+  [Rust](https://github.com/informalsystems/ibc-rs)) provide this service to clients.
+
+  The only viable alternative for clients (at the time of writing) to trace tokens
+  with multiple connection hops is to connect to all chains directly and
+  perform relevant queries to each of them in the sequence.
+
+
+## Locked funds
+
+In some [exceptional cases](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md#exceptional-cases), a client state associated with a given channel cannot be updated. This means that funds from fungible tokens in that channel will be permanently locked and can no longer be transferred.
+
+To mitigate this, a client update governance proposal can be submitted to update the frozen client
+with a new valid header. Once the proposal passes, the client state will be unfrozen and the funds
+from the associated channels will then be unlocked. This mechanism only applies to clients that
+allow updates via governance, such as Tendermint clients.
+
+In addition to this, it's important to mention that a token must be sent back along the exact route
+that it took originally in order to return it to its original form on the source chain (e.g. the
+Cosmos Hub for the `uatom`). Sending a token back to the same chain across a different channel will
+**not** move the token back across its timeline. If a channel in the chain history closes before the
+token can be sent back across that channel, then the token will not be returnable to its original
+form.
+
+## Security considerations
+
+For safety, no other module must be capable of minting tokens with the `ibc/` prefix. The IBC
+transfer module needs a subset of the denomination space that only it can create tokens in.
diff --git a/docs/ibc/v4.6.x/apps/transfer/params.mdx b/docs/ibc/v4.6.x/apps/transfer/params.mdx new file mode 100644 index 00000000..f1c78927 --- /dev/null +++ b/docs/ibc/v4.6.x/apps/transfer/params.mdx @@ -0,0 +1,29 @@ +--- +title: Params +description: 'The IBC transfer application module contains the following parameters:' +--- + +The IBC transfer application module contains the following parameters: + +| Key | Type | Default Value | +| ---------------- | ---- | ------------- | +| `SendEnabled` | bool | `true` | +| `ReceiveEnabled` | bool | `true` | + +## `SendEnabled` + +The transfers enabled parameter controls send cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred from the chain, set the `SendEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. + +## `ReceiveEnabled` + +The transfers enabled parameter controls receive cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred to the chain, set the `ReceiveEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. 
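For reference, a transfer module params fragment with receives disabled might look like the following sketch. The `send_enabled`/`receive_enabled` key names follow the proto JSON convention; the exact placement within app state is an assumption here:

```json
"transfer": {
  "params": {
    "send_enabled": true,
    "receive_enabled": false
  }
}
```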
diff --git a/docs/ibc/v4.6.x/apps/transfer/state-transitions.mdx b/docs/ibc/v4.6.x/apps/transfer/state-transitions.mdx
new file mode 100644
index 00000000..fcc9169c
--- /dev/null
+++ b/docs/ibc/v4.6.x/apps/transfer/state-transitions.mdx
@@ -0,0 +1,35 @@
+---
+title: State Transitions
+description: >-
+  A successful fungible token send has two state transitions depending on
+  whether the transfer is a movement forward or backwards in the token's
+  timeline:
+---
+
+## Send fungible tokens
+
+A successful fungible token send has two state transitions depending on whether the transfer is a movement forward or backwards in the token's timeline:
+
+1. Sender chain is the source chain, *i.e.* a transfer to any chain other than the one it was previously received from is a movement forwards in the token's timeline. This results in the following state transitions:
+
+   * The coins are transferred to an escrow address (i.e. locked) on the sender chain.
+   * The coins are transferred to the receiving chain through IBC TAO logic.
+
+2. Sender chain is the sink chain, *i.e.* the token is sent back to the chain it previously received from. This is a backwards movement in the token's timeline. This results in the following state transitions:
+
+   * The coins (vouchers) are burned on the sender chain.
+   * The coins are transferred to the receiving chain through IBC TAO logic.
+
+## Receive fungible tokens
+
+A successful fungible token receive has two state transitions depending on whether the transfer is a movement forward or backwards in the token's timeline:
+
+1. Receiver chain is the source chain. This is a backwards movement in the token's timeline. This results in the following state transitions:
+
+   * The leftmost port and channel identifier pair is removed from the token denomination prefix.
+   * The tokens are unescrowed and sent to the receiving address.
+
+2. Receiver chain is the sink chain. This is a movement forwards in the token's timeline.
This results in the following state transitions:
+
+   * Token vouchers are minted by prefixing the destination port and channel identifiers to the trace information.
+   * The receiving chain stores the new trace information in the store (if not set already).
+   * The vouchers are sent to the receiving address.
diff --git a/docs/ibc/v4.6.x/apps/transfer/state.mdx b/docs/ibc/v4.6.x/apps/transfer/state.mdx
new file mode 100644
index 00000000..76c7af41
--- /dev/null
+++ b/docs/ibc/v4.6.x/apps/transfer/state.mdx
@@ -0,0 +1,12 @@
+---
+title: State
+description: >-
+  The IBC transfer application module keeps state of the port to which the
+  module is bound and the denomination trace information as outlined in ADR
+  001.
+---
+
+The IBC transfer application module keeps state of the port to which the module is bound and the denomination trace information as outlined in [ADR 001](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md).
+
+* `Port`: `0x01 -> ProtocolBuffer(string)`
+* `DenomTrace`: `0x02 | []bytes(traceHash) -> ProtocolBuffer(DenomTrace)`
diff --git a/docs/ibc/v4.6.x/changelog/release-notes.mdx b/docs/ibc/v4.6.x/changelog/release-notes.mdx
new file mode 100644
index 00000000..cd8ee193
--- /dev/null
+++ b/docs/ibc/v4.6.x/changelog/release-notes.mdx
@@ -0,0 +1,686 @@
+---
+title: "Release Notes"
+description: "Release notes generated from the project `CHANGELOG.md`"
+mode: "center"
+---
+
+
+
+  This page tracks all releases and changes from the
+  [cosmos/ibc-go](https://github.com/cosmos/ibc-go) repository. For the latest
+  development updates, see the
+  [UNRELEASED](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md#unreleased)
+  section.
+
+
+
+
+  ### Bug Fixes
+
+  * (apps/transfer) [\#3045](https://github.com/cosmos/ibc-go/pull/3045) allow value with slashes in URL template for `denom_traces` and `denom_hashes` queries.
* (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order query service RPCs to fix availability of denom traces endpoint when no args are provided.
+
+
+
+  ### Dependencies
+
+  * [\#4738](https://github.com/cosmos/ibc-go/pull/4738) Bump Cosmos SDK to v0.45.16.
+  * [\#4782](https://github.com/cosmos/ibc-go/pull/4782) Bump ics23 to v0.9.1.
+
+
+
+  ### Bug Fixes
+
+  * [\#3662](https://github.com/cosmos/ibc-go/pull/3662) Retract v4.1.2 and v4.2.1.
+
+
+
+  ### Bug Fixes
+
+  * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly handle ordered channels in `UnreceivedPackets` query.
+
+
+
+  ### Dependencies
+
+  * [\#3524](https://github.com/cosmos/ibc-go/pull/3524) Update protos and Makefile
+  * [\#3416](https://github.com/cosmos/ibc-go/pull/3416) Bump Cosmos SDK to v0.45.15 and replace Tendermint with CometBFT v0.34.27.
+
+
+
+  ### Dependencies
+
+  * Bump Cosmos SDK to v0.45.12 ([#3049](https://github.com/cosmos/ibc-go/issues/3049))
+  * Bump ics23 to v0.9.0 ([#2868](https://github.com/cosmos/ibc-go/issues/2868)) ([#2877](https://github.com/cosmos/ibc-go/issues/2877))
+
+  ### State Machine Breaking
+
+  * Write channel state before invoking app callbacks in ack and confirm channel handshake steps ([#2973](https://github.com/cosmos/ibc-go/issues/2973))
+
+  ### Improvements
+
+  * Save gas on IsFeeEnabled ([#2786](https://github.com/cosmos/ibc-go/issues/2786))
+
+  ### Bug Fixes
+
+  * Check `x/bank` send enabled before escrowing fees ([#2942](https://github.com/cosmos/ibc-go/issues/2942)) ([#2952](https://github.com/cosmos/ibc-go/issues/2952))
+
+  ### Documentation
+
+  * Fix migration/docs for ICA controller middleware ([#2737](https://github.com/cosmos/ibc-go/issues/2737)) ([#2763](https://github.com/cosmos/ibc-go/issues/2763))
+
+  ### Miscellaneous Tasks
+
+  * Integrated git cliff into the code base to automate generation of changelogs ([#2772](https://github.com/cosmos/ibc-go/issues/2772))
+  * Updating CHANGELOG and cliff toml
+
+
+
+  ### Dependencies
+
+  * [\#2588](https://github.com/cosmos/ibc-go/pull/2588) Bump SDK version to v0.45.10 and Tendermint to v0.34.22.
+
+  ### State Machine Breaking
+
+  * (apps/transfer) [\#2651](https://github.com/cosmos/ibc-go/pull/2651) Introduce `mustProtoMarshalJSON` for ics20 packet data marshalling which will skip emission (marshalling) of the memo field if unpopulated (empty).
+  * (27-interchain-accounts) [\#2580](https://github.com/cosmos/ibc-go/issues/2580) Removing port prefix requirement from the ICA host channel handshake
+  * (transfer) [\#2377](https://github.com/cosmos/ibc-go/pull/2377) Adding `sequence` to `MsgTransferResponse`.
+
+  ### Features
+
+  * (apps/transfer) [\#2595](https://github.com/cosmos/ibc-go/pull/2595) Adding optional memo field to `FungibleTokenPacketData` and `MsgTransfer`.
+
+  ### Bug Fixes
+
+  * (apps/transfer) [\#2679](https://github.com/cosmos/ibc-go/pull/2679) Check `x/bank` send enabled.
+
+
+
+  ### Dependencies
+
+  * [\#2288](https://github.com/cosmos/ibc-go/pull/2288) Bump SDK version to v0.45.8 and Tendermint to v0.34.21.
+
+  ### Features
+
+  * (apps/27-interchain-accounts) [\#2193](https://github.com/cosmos/ibc-go/pull/2193) Adding `InterchainAccount` gRPC query endpoint to ICS27 `controller` submodule to allow users to retrieve registered interchain account addresses.
+
+  ### Bug Fixes
+
+  * (27-interchain-accounts) [\#2308](https://github.com/cosmos/ibc-go/pull/2308) Nil checks have been added to ensure services are not registered for nil host or controller keepers.
+
+
+
+  ### Dependencies
+
+  * [\#1627](https://github.com/cosmos/ibc-go/pull/1627) Bump Go version to 1.18
+  * [\#1905](https://github.com/cosmos/ibc-go/pull/1905) Bump SDK version to v0.45.7
+
+  ### API Breaking
+
+  * (core/04-channel) [\#1792](https://github.com/cosmos/ibc-go/pull/1792) Remove `PreviousChannelID` from `NewMsgChannelOpenTry` arguments. `MsgChannelOpenTry.ValidateBasic()` returns error if the deprecated `PreviousChannelID` is not empty.
* (core/03-connection) [\#1797](https://github.com/cosmos/ibc-go/pull/1797) Remove `PreviousConnectionID` from `NewMsgConnectionOpenTry` arguments. `MsgConnectionOpenTry.ValidateBasic()` returns error if the deprecated `PreviousConnectionID` is not empty.
+  * (modules/core/03-connection) [\#1672](https://github.com/cosmos/ibc-go/pull/1672) Remove crossing hellos from connection handshakes. The `PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated.
+  * (modules/core/04-channel) [\#1317](https://github.com/cosmos/ibc-go/pull/1317) Remove crossing hellos from channel handshakes. The `PreviousChannelId` in `MsgChannelOpenTry` has been deprecated.
+  * (transfer) [\#1250](https://github.com/cosmos/ibc-go/pull/1250) Deprecate `GetTransferAccount` since the `transfer` module account is never used.
+  * (channel) [\#1283](https://github.com/cosmos/ibc-go/pull/1283) The `OnChanOpenInit` application callback now returns a version string in line with the latest [spec changes](https://github.com/cosmos/ibc/pull/629).
+  * (modules/29-fee) [\#1338](https://github.com/cosmos/ibc-go/pull/1338) Renaming `Result` field in `IncentivizedAcknowledgement` to `AppAcknowledgement`.
+  * (modules/29-fee) [\#1343](https://github.com/cosmos/ibc-go/pull/1343) Renaming `KeyForwardRelayerAddress` to `KeyRelayerAddressForAsyncAck`, and `ParseKeyForwardRelayerAddress` to `ParseKeyRelayerAddressForAsyncAck`.
+  * (apps/27-interchain-accounts) [\#1432](https://github.com/cosmos/ibc-go/pull/1432) Updating `RegisterInterchainAccount` to include an additional `version` argument, supporting ICS29 fee middleware functionality in ICS27 interchain accounts.
+  * (apps/27-interchain-accounts) [\#1565](https://github.com/cosmos/ibc-go/pull/1565) Removing `NewErrorAcknowledgement` in favour of `channeltypes.NewErrorAcknowledgement`.
* (transfer) [\#1565](https://github.com/cosmos/ibc-go/pull/1565) Removing `NewErrorAcknowledgement` in favour of `channeltypes.NewErrorAcknowledgement`.
+ * (channel) [\#1565](https://github.com/cosmos/ibc-go/pull/1565) Updating `NewErrorAcknowledgement` to accept an error instead of a string and removing the possibility of non-deterministic writes to application state.
+ * (core/04-channel) [\#1636](https://github.com/cosmos/ibc-go/pull/1636) Removing `SplitChannelVersion` and `MergeChannelVersions` functions since they are not used.
+ ### State Machine Breaking
+ * (apps/transfer) [\#1907](https://github.com/cosmos/ibc-go/pull/1907) Blocked module account addresses are no longer allowed to send IBC transfers.
+ * (apps/27-interchain-accounts) [\#1882](https://github.com/cosmos/ibc-go/pull/1882) Explicitly check length of interchain account packet data in favour of nil check.
+ ### Improvements
+ * (app/20-transfer) [\#1680](https://github.com/cosmos/ibc-go/pull/1680) Adds migration to correct any malformed trace path information of tokens with denoms that contain slashes. The transfer module consensus version has been bumped to 2.
+ * (app/20-transfer) [\#1730](https://github.com/cosmos/ibc-go/pull/1730) Parse the ics20 denomination provided via a packet using the channel identifier format specified by ibc-go.
+ * (cleanup) [\#1335](https://github.com/cosmos/ibc-go/pull/1335/) `gofumpt -w -l .` to standardize the code layout more strictly than `go fmt ./...`
+ * (middleware) [\#1022](https://github.com/cosmos/ibc-go/pull/1022) Add `GetAppVersion` to the ICS4Wrapper interface. This function should be used by IBC applications to obtain their own version since the version set in the channel structure may be wrapped many times by middleware.
+ * (modules/core/04-channel) [\#1232](https://github.com/cosmos/ibc-go/pull/1232) Updating params on `NewPacketId` and moving to bottom of file.
* (app/29-fee) [\#1305](https://github.com/cosmos/ibc-go/pull/1305) Change version string for fee module to `ics29-1`
+ * (app/29-fee) [\#1341](https://github.com/cosmos/ibc-go/pull/1341) Check if the fee module is locked and if the fee module is enabled before refunding all fees
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (testing/simapp) [\#1397](https://github.com/cosmos/ibc-go/pull/1397) Adding mock module to maccperms and adding check to ensure mock module is not a blocked account address.
+ * (core/02-client) [\#1570](https://github.com/cosmos/ibc-go/pull/1570) Emitting an event when handling an upgrade client proposal.
+ * (modules/light-clients/07-tendermint) [\#1713](https://github.com/cosmos/ibc-go/pull/1713) Allow client upgrade proposals to update `TrustingPeriod`. See ADR-026 for context.
+ * (core/client) [\#1740](https://github.com/cosmos/ibc-go/pull/1740) Add `cosmos_proto.implements_interface` to adhere to guidelines in [Cosmos SDK ADR 019](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-019-protocol-buffer-state-encoding#safe-usage-of-any) for annotating `google.protobuf.Any` types
+ ### Features
+ * [\#276](https://github.com/cosmos/ibc-go/pull/276) Adding the Fee Middleware module v1
+ * (apps/29-fee) [\#1229](https://github.com/cosmos/ibc-go/pull/1229) Adding CLI commands for getting all unrelayed incentivized packets and packet by packet-id.
+ * (apps/29-fee) [\#1224](https://github.com/cosmos/ibc-go/pull/1224) Adding Query/CounterpartyAddress and CLI to ICS29 fee middleware
+ * (apps/29-fee) [\#1225](https://github.com/cosmos/ibc-go/pull/1225) Adding Query/FeeEnabledChannel and Query/FeeEnabledChannels with CLIs to ICS29 fee middleware.
* (modules/apps/29-fee) [\#1230](https://github.com/cosmos/ibc-go/pull/1230) Adding CLI command for getting incentivized packets for a specific channel-id.
+ ### Bug Fixes
+ * (apps/29-fee) [\#1774](https://github.com/cosmos/ibc-go/pull/1774) Change non-nil relayer assertion to non-empty to avoid import/export issues for genesis upgrades.
+ * (apps/29-fee) [\#1278](https://github.com/cosmos/ibc-go/pull/1278) The URI path for the query to get all incentivized packets for a specific channel did not follow the same format as the rest of queries.
+ * (modules/core/04-channel) [\#1919](https://github.com/cosmos/ibc-go/pull/1919) Fixed formatting of sequence for packet "acknowledgement written" logs.
+
+
+
+ ### Dependencies
+ * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/04-channel) [\#1279](https://github.com/cosmos/ibc-go/pull/1279) Add selected channel version to MsgChanOpenInitResponse and MsgChanOpenTryResponse. Emit channel version during OpenInit/OpenTry
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
* (modules/light-clients/07-tendermint) [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for context.
+ ### Features
+ * (modules/core/02-client) [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding Query/ConsensusStateHeights gRPC for fetching the height of every consensus state associated with a client.
+ * (modules/apps/transfer) [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for getting an escrow account for a given port-id and channel-id.
+ * (modules/apps/27-interchain-accounts) [\#1512](https://github.com/cosmos/ibc-go/pull/1512) Allowing ICA modules to handle all message types with "*".
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+ * (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) Fixing the support for base denoms that contain slashes.
+
+
+
+ ### Dependencies
+ * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
* (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+
+
+ ### Dependencies
+ * [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17
+ * [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1
+ * [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7
+ * (core) [\#709](https://github.com/cosmos/ibc-go/pull/709) Replace github.com/pkg/errors with stdlib errors
+ ### API Breaking
+ * (testing) [\#939](https://github.com/cosmos/ibc-go/pull/939) Support custom power reduction for testing.
+ * (modules/core/05-port) [\#1086](https://github.com/cosmos/ibc-go/pull/1086) Added `counterpartyChannelID` argument to IBCModule.OnChanOpenAck
+ * (channel) [\#848](https://github.com/cosmos/ibc-go/pull/848) Added `ChannelId` to MsgChannelOpenInitResponse
+ * (testing) [\#813](https://github.com/cosmos/ibc-go/pull/813) The `ack` argument to the testing function `RelayPacket` has been removed as it is no longer needed.
+ * (testing) [\#774](https://github.com/cosmos/ibc-go/pull/774) Added `ChainID` arg to `SetupWithGenesisValSet` on the testing app. `Coordinator` generated ChainIDs now start at index 1
+ * (transfer) [\#675](https://github.com/cosmos/ibc-go/pull/675) Transfer `NewKeeper` now takes in an ICS4Wrapper. The ICS4Wrapper may be the IBC Channel Keeper when ICS20 is not used in a middleware stack. The ICS4Wrapper is required for applications wishing to connect middleware to ICS20.
+ * (core) [\#650](https://github.com/cosmos/ibc-go/pull/650) Modify `OnChanOpenTry` IBC application module callback to return the negotiated app version. The version passed into the `MsgChanOpenTry` has been deprecated and will be ignored by core IBC.
+ * (core) [\#629](https://github.com/cosmos/ibc-go/pull/629) Removes the `GetProofSpecs` from the ClientState interface. This function was previously unused by core IBC. + * (transfer) [\#517](https://github.com/cosmos/ibc-go/pull/517) Separates the ICS 26 callback functions from `AppModule` into a new type `IBCModule` for ICS 20 transfer. + * (modules/core/02-client) [\#536](https://github.com/cosmos/ibc-go/pull/536) `GetSelfConsensusState` return type changed from bool to error. + * (channel) [\#644](https://github.com/cosmos/ibc-go/pull/644) Removes `CounterpartyHops` function from the ChannelKeeper. + * (testing) [\#776](https://github.com/cosmos/ibc-go/pull/776) Adding helper fn to generate capability name for testing callbacks + * (testing) [\#892](https://github.com/cosmos/ibc-go/pull/892) IBC Mock modules store the scoped keeper and portID within the IBCMockApp. They also maintain reference to the AppModule to update the AppModule's list of IBC applications it references. Allows for the mock module to be reused as a base application in middleware stacks. + * (channel) [\#882](https://github.com/cosmos/ibc-go/pull/882) The `WriteAcknowledgement` API now takes `exported.Acknowledgement` instead of a byte array + * (modules/core/ante) [\#950](https://github.com/cosmos/ibc-go/pull/950) Replaces the channel keeper with the IBC keeper in the IBC `AnteDecorator` in order to execute the entire message and be able to reject redundant messages that are in the same block as the non-redundant messages. + ### State Machine Breaking + * (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message. + ### Improvements + * (interchain-accounts) [\#1037](https://github.com/cosmos/ibc-go/pull/1037) Add a function `InitModule` to the interchain accounts `AppModule`. 
This function should be called within the upgrade handler when adding the interchain accounts module to a chain. It should be called in place of InitGenesis (set the consensus version in the version map).
+ * (testing) [\#942](https://github.com/cosmos/ibc-go/pull/942) `NewTestChain` will create 4 validators in validator set by default. A new constructor function `NewTestChainWithValSet` is provided for test writers who want custom control over the validator set of test chains.
+ * (testing) [\#904](https://github.com/cosmos/ibc-go/pull/904) Add `ParsePacketFromEvents` function to the testing package. Useful when sending/relaying packets via the testing package.
+ * (testing) [\#893](https://github.com/cosmos/ibc-go/pull/893) Support custom private keys for testing.
+ * (testing) [\#810](https://github.com/cosmos/ibc-go/pull/810) Additional testing function added to `Endpoint` type called `RecvPacketWithResult`. Performs the same functionality as the existing `RecvPacket` function but also returns the message result. `path.RelayPacket` no longer uses the provided acknowledgement argument and instead obtains the acknowledgement via MsgRecvPacket events.
+ * (connection) [\#721](https://github.com/cosmos/ibc-go/pull/721) Simplify connection handshake error messages when unpacking client state.
+ * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+ * [\#383](https://github.com/cosmos/ibc-go/pull/383) Adds helper functions for merging and splitting middleware versions from the underlying app version.
+ * (modules/core/05-port) [\#288](https://github.com/cosmos/ibc-go/issues/288) Making the 05-port keeper function IsBound public. The IsBound function checks if the provided portID is already bound to a module.
+ * (channel) [\#644](https://github.com/cosmos/ibc-go/pull/644) Adds `GetChannelConnection` to the ChannelKeeper. This function returns the connectionID and connection state associated with a channel.
+ * (channel) [\#647](https://github.com/cosmos/ibc-go/pull/647) Reorganizes channel handshake handling to set channel state after IBC application callbacks.
+ * (client) [\#724](https://github.com/cosmos/ibc-go/pull/724) `IsRevisionFormat` and `IsClientIDFormat` have been updated to disallow newlines before the dash used to separate the chainID and revision number, and the client type and client sequence.
+ * (interchain-accounts) [\#1466](https://github.com/cosmos/ibc-go/pull/1466) Emit event when there is an acknowledgement during `OnRecvPacket`.
+ ### Features
+ * [\#432](https://github.com/cosmos/ibc-go/pull/432) Introduce `MockIBCApp` struct to the mock module. Allows the mock module to be reused to perform custom logic on each IBC App interface function. This might be useful when testing out IBC applications written as middleware.
+ * [\#380](https://github.com/cosmos/ibc-go/pull/380) Adding the Interchain Accounts module v1
+ * [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debugging
+ ### Bug Fixes
+ * (testing) [\#884](https://github.com/cosmos/ibc-go/pull/884) Add and use in simapp a custom ante handler that rejects redundant transactions
+ * (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+ * (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+ * (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+
+
+ ### Dependencies
+ * [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17
+ * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
* (modules/light-clients/07-tendermint) [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for context.
+ ### Features
+ * (modules/core/02-client) [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding Query/ConsensusStateHeights gRPC for fetching the height of every consensus state associated with a client.
+ * (modules/apps/transfer) [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for getting an escrow account for a given port-id and channel-id.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+ * (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) Fixing the support for base denoms that contain slashes.
+
+
+
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+
+
+ ### Dependencies
+ * [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1
+
+
+
+ ### Dependencies
+ * [\#1268](https://github.com/cosmos/ibc-go/pull/1268) Bump SDK version to v0.44.8 and Tendermint to version 0.34.19
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+
+
+ ### Dependencies
+ * [\#1084](https://github.com/cosmos/ibc-go/pull/1084) Bump SDK version to v0.44.6
+ * [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7
+ ### State Machine Breaking
+ * (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message.
+ ### Features
+ * [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debugging
+ ### Bug Fixes
+ * (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+ * (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+ * (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+
+
+ ### Improvements
+ * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+
+
+ ### Dependencies
+ * [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+ ### Bug Fixes
+ * (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+
+
+ ### Dependencies
+ * [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+ ### Improvements
+ * (02-client) [\#568](https://github.com/cosmos/ibc-go/pull/568) In IBC `transfer` cli command use local clock time as reference for relative timestamp timeout if greater than the block timestamp queried from the latest consensus state corresponding to the counterparty channel.
+ * [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+ ### Bug Fixes
+ * (02-client) [\#500](https://github.com/cosmos/ibc-go/pull/500) Fix IBC `update-client proposal` cli command to expect correct number of args.
+
+
+
+ ### Dependencies
+ * [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+ * [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+ ### API Breaking
+ * (core) [\#227](https://github.com/cosmos/ibc-go/pull/227) Remove sdk.Result from application callbacks
+ * (transfer) [\#350](https://github.com/cosmos/ibc-go/pull/350) Change FungibleTokenPacketData to use a string for the Amount field. This enables token transfers with amounts previously restricted by uint64. Up to the maximum uint256 value is supported.
+ ### Features
+ * [\#384](https://github.com/cosmos/ibc-go/pull/384) Added `NegotiateAppVersion` method to `IBCModule` interface supported by a gRPC query service in `05-port`. This provides routing of requests to the desired application module callback, which in turn performs application version negotiation.
+
+
+
+ ### Dependencies
+ * [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17
+ * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
* (modules/light-clients/07-tendermint) [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for context.
+ ### Features
+ * (modules/core/02-client) [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding Query/ConsensusStateHeights gRPC for fetching the height of every consensus state associated with a client.
+ * (modules/apps/transfer) [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for getting an escrow account for a given port-id and channel-id.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+ * (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) Fixing the support for base denoms that contain slashes.
+
+
+
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+
+
+ ### Dependencies
+ * [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1
+
+
+
+ ### Dependencies
+ * [\#1267](https://github.com/cosmos/ibc-go/pull/1267) Bump SDK version to v0.44.8 and Tendermint to version 0.34.19
+ ### Improvements
+ * (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+ * (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+ * (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+ * (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+ * (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+ ### Bug Fixes
+ * (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output
+
+
+
+ ### Dependencies
+ * [\#1073](https://github.com/cosmos/ibc-go/pull/1073) Bump SDK version to v0.44.6
+ * [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7
+ ### State Machine Breaking
+ * (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message.
+ ### Features
+ * [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debugging
+ ### Bug Fixes
+ * (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+ * (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+ * (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+
+
+ ### Improvements
+ * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+
+
+ ### Dependencies
+ * [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+ ### Bug Fixes
+ * (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+
+
+ ### Dependencies
+ * [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+ ### Improvements
+ * [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+
+
+
+ ### Dependencies
+ * [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+ * [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+
+
+
+ ### Dependencies
+ * [\#485](https://github.com/cosmos/ibc-go/pull/485) Bump SDK version to v0.44.2
+
+
+
+ ### Dependencies
+ * [\#455](https://github.com/cosmos/ibc-go/pull/455) Bump SDK version to v0.44.1
+
+
+
+ ### State Machine Breaking
+ * (24-host) [\#344](https://github.com/cosmos/ibc-go/pull/344) Increase port identifier limit to 128 characters.
+ ### Improvements
+ * [\#375](https://github.com/cosmos/ibc-go/pull/375) Added optional field `PacketCommitmentSequences` to `QueryPacketAcknowledgementsRequest` to provide filtering of packet acknowledgements.
+ ### Features
+ * [\#372](https://github.com/cosmos/ibc-go/pull/372) New CLI command `query ibc client status ` to get the current activity status of a client.
+ ### Dependencies
+ * [\#386](https://github.com/cosmos/ibc-go/pull/386) Bump [tendermint](https://github.com/tendermint/tendermint) from v0.34.12 to v0.34.13.
+
+
+
+ ### Improvements
+ * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+
+
+ ### Dependencies
+ * [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+ ### Bug Fixes
+ * (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+
+
+ ### Dependencies
+ * [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+ ### Improvements
+ * [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+
+
+
+ ### Dependencies
+ * [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+ * [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+
+
+
+ ### Dependencies
+ * [\#485](https://github.com/cosmos/ibc-go/pull/485) Bump SDK version to v0.44.2
+
+
+
+ ### Dependencies
+ * [\#455](https://github.com/cosmos/ibc-go/pull/455) Bump SDK version to v0.44.1
+
+
+
+ ### Dependencies
+ * [\#367](https://github.com/cosmos/ibc-go/pull/367) Bump [cosmos-sdk](https://github.com/cosmos/cosmos-sdk) from 0.43 to 0.44.
+
+
+
+ ### Improvements
+ * [\#343](https://github.com/cosmos/ibc-go/pull/343) Create helper functions for publishing of packet sent and acknowledgement sent events.
+
+
+
+ ### Bug Fixes
+ * (07-tendermint) [\#241](https://github.com/cosmos/ibc-go/pull/241) Ensure tendermint client state latest height revision number matches chain id revision number.
+ * (07-tendermint) [\#234](https://github.com/cosmos/ibc-go/pull/234) Use sentinel value for the consensus state root set during a client upgrade. This prevents genesis validation from failing.
+ * (modules) [\#223](https://github.com/cosmos/ibc-go/pull/223) Use correct Prometheus format for metric labels.
+ * (06-solomachine) [\#214](https://github.com/cosmos/ibc-go/pull/214) Disable defensive timestamp check in SendPacket for solo machine clients.
+ * (07-tendermint) [\#210](https://github.com/cosmos/ibc-go/pull/210) Export all consensus metadata on genesis restarts for tendermint clients.
+ * (core) [\#200](https://github.com/cosmos/ibc-go/pull/200) Fixes incorrect export of IBC identifier sequences. Previously, the next identifier sequence for clients/connections/channels was not set during genesis export. This resulted in the next identifiers being generated on the new chain to reuse old identifiers (the sequences began again from 0).
+ * (02-client) [\#192](https://github.com/cosmos/ibc-go/pull/192) Fix IBC `query ibc client header` cli command.
Support historical queries for query header/node-state commands. + * (modules/light-clients/06-solomachine) [\#153](https://github.com/cosmos/ibc-go/pull/153) Fix solo machine proof height sequence mismatch bug. + * (modules/light-clients/06-solomachine) [\#122](https://github.com/cosmos/ibc-go/pull/122) Fix solo machine merkle prefix casting bug. + * (modules/light-clients/06-solomachine) [\#120](https://github.com/cosmos/ibc-go/pull/120) Fix solo machine handshake verification bug. + * (modules/light-clients/06-solomachine) [\#153](https://github.com/cosmos/ibc-go/pull/153) fix solo machine connection handshake failure at `ConnectionOpenAck`. + ### API Breaking + * (04-channel) [\#220](https://github.com/cosmos/ibc-go/pull/220) Channel legacy handler functions were removed. Please use the MsgServer functions or directly call the channel keeper's handshake function. + * (modules) [\#206](https://github.com/cosmos/ibc-go/pull/206) Expose `relayer sdk.AccAddress` on `OnRecvPacket`, `OnAcknowledgementPacket`, `OnTimeoutPacket` module callbacks to enable incentivization. + * (02-client) [\#181](https://github.com/cosmos/ibc-go/pull/181) Remove 'InitialHeight' from UpdateClient Proposal. Only copy over latest consensus state from substitute client. + * (06-solomachine) [\#169](https://github.com/cosmos/ibc-go/pull/169) Change FrozenSequence to boolean in solomachine ClientState. The solo machine proto package has been bumped from `v1` to `v2`. + * (module/core/02-client) [\#165](https://github.com/cosmos/ibc-go/pull/165) Remove GetFrozenHeight from the ClientState interface. + * (modules) [\#166](https://github.com/cosmos/ibc-go/pull/166) Remove GetHeight from the misbehaviour interface. The `consensus_height` attribute has been removed from Misbehaviour events. + * (modules) [\#162](https://github.com/cosmos/ibc-go/pull/162) Remove deprecated Handler types in core IBC and the ICS 20 transfer module. 
+ * (modules/core) [\#161](https://github.com/cosmos/ibc-go/pull/161) Remove Type(), Route(), GetSignBytes() from 02-client, 03-connection, and 04-channel messages. + * (modules) [\#140](https://github.com/cosmos/ibc-go/pull/140) IsFrozen() client state interface changed to Status(). gRPC `ClientStatus` route added. + * (modules/core) [\#109](https://github.com/cosmos/ibc-go/pull/109) Remove connection and channel handshake CLI commands. + * (modules) [\#107](https://github.com/cosmos/ibc-go/pull/107) Modify OnRecvPacket callback to return an acknowledgement which indicates if it is successful or not. Callback state changes are discarded for unsuccessful acknowledgements only. + * (modules) [\#108](https://github.com/cosmos/ibc-go/pull/108) All message constructors take the signer as a string to prevent upstream bugs. The `String()` function for an SDK Acc Address relies on external context. + * (transfer) [\#275](https://github.com/cosmos/ibc-go/pull/275) Remove 'ChanCloseInit' function from transfer keeper. ICS20 does not close channels. + ### State Machine Breaking + * (modules/light-clients/07-tendermint) [\#99](https://github.com/cosmos/ibc-go/pull/99) Enforce maximum chain-id length for tendermint client. + * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Allow a new form of misbehaviour that proves counterparty chain breaks time monotonicity, automatically enforce monotonicity in UpdateClient and freeze client if monotonicity is broken. + * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Freeze the client if there's a conflicting header submitted for an existing consensus state. + * (modules/core/02-client) [\#8405](https://github.com/cosmos/cosmos-sdk/pull/8405) Refactor IBC client update governance proposals to use a substitute client to update a frozen or expired client. 
+ * (modules/core/02-client) [\#8673](https://github.com/cosmos/cosmos-sdk/pull/8673) IBC upgrade logic moved to 02-client and an IBC UpgradeProposal is added. + * (modules/core/03-connection) [\#171](https://github.com/cosmos/ibc-go/pull/171) Introduces a new parameter `MaxExpectedTimePerBlock` to allow connections to calculate and enforce a block delay that is proportional to time delay set by connection. + * (core) [\#268](https://github.com/cosmos/ibc-go/pull/268) Perform a no-op on redundant relay messages. Previous behaviour returned an error. Now no state change will occur and no error will be returned. + ### Improvements + * (04-channel) [\#220](https://github.com/cosmos/ibc-go/pull/220) Channel handshake events are now emitted with the channel keeper. + * (core/02-client) [\#205](https://github.com/cosmos/ibc-go/pull/205) Add in-place and genesis migrations from SDK v0.42.0 to ibc-go v1.0.0. Solo machine protobuf defintions are migrated from v1 to v2. All solo machine consensus states are pruned. All expired tendermint consensus states are pruned. + * (modules/core) [\#184](https://github.com/cosmos/ibc-go/pull/184) Improve error messages. Uses unique error codes to indicate already relayed packets. + * (07-tendermint) [\#182](https://github.com/cosmos/ibc-go/pull/182) Remove duplicate checks in upgrade logic. + * (modules/core/04-channel) [\#7949](https://github.com/cosmos/cosmos-sdk/issues/7949) Standardized channel `Acknowledgement` moved to its own file. Codec registration redundancy removed. + * (modules/core/04-channel) [\#144](https://github.com/cosmos/ibc-go/pull/144) Introduced a `packet_data_hex` attribute to emit the hex-encoded packet data in events. This allows for raw binary (proto-encoded message) to be sent over events and decoded correctly on relayer. Original `packet_data` is DEPRECATED. All relayers and IBC event consumers are encouraged to switch to `packet_data_hex` as soon as possible. 
+ * (core/04-channel) [\#197](https://github.com/cosmos/ibc-go/pull/197) Introduced a `packet_ack_hex` attribute to emit the hex-encoded acknowledgement in events. This allows for raw binary (proto-encoded message) to be sent over events and decoded correctly on relayer. Original `packet_ack` is DEPRECATED. All relayers and IBC event consumers are encouraged to switch to `packet_ack_hex` as soon as possible. + * (modules/light-clients/07-tendermint) [\#125](https://github.com/cosmos/ibc-go/pull/125) Implement efficient iteration of consensus states and pruning of earliest expired consensus state on UpdateClient. + * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Return early in case there's a duplicate update call to save Gas. + * (modules/core/ante) [\#235](https://github.com/cosmos/ibc-go/pull/235) Introduces a new IBC Antedecorator that will reject transactions that only contain redundant packet messages (and accompany UpdateClient msgs). This will prevent relayers from wasting fees by submitting messages for packets that have already been processed by previous relayer(s). The Antedecorator is only applied on CheckTx and RecheckTx and is therefore optional for each node. + ### Features + * [\#198](https://github.com/cosmos/ibc-go/pull/198) New CLI command `query ibc-transfer escrow-address ` to get the escrow address for a channel; can be used to then query balance of escrowed tokens + ### Client Breaking Changes + * (02-client/cli) [\#196](https://github.com/cosmos/ibc-go/pull/196) Rename `node-state` cli command to `self-consensus-state`. + diff --git a/docs/ibc/v4.6.x/ibc/apps/apps.mdx b/docs/ibc/v4.6.x/ibc/apps/apps.mdx new file mode 100644 index 00000000..7ec95352 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/apps/apps.mdx @@ -0,0 +1,51 @@ +--- +title: IBC Applications +--- + +## Synopsis + +Learn how to build custom IBC application modules that enable packets to be sent to and received from other IBC-enabled chains. 
+ +This document serves as a guide for developers who want to write their own Inter-blockchain Communication Protocol (IBC) applications for custom use cases. + +Due to the modular design of the IBC protocol, IBC application developers do not need to concern themselves with the low-level details of clients, connections, and proof verification. Nevertheless, an overview of these low-level concepts can be found in [the Overview section](/docs/ibc/v4.6.x/ibc/overview). +The document goes into detail on the abstraction layer most relevant for application developers (channels and ports), and describes how to define your own custom packets, `IBCModule` callbacks and more to make an application module IBC-ready. + +**To have your module interact over IBC you must:** + +- implement the `IBCModule` interface, i.e.: + - channel (opening) handshake callbacks + - channel closing handshake callbacks + - packet callbacks +- bind to one or more ports +- add keeper methods +- define your own packet data and acknowledgement structs as well as how to encode/decode them +- add a route to the IBC router + +The following sections provide a more detailed explanation of how to write an IBC application +module, corresponding to the listed steps. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v4.6.x/ibc/overview) +- [IBC default integration](/docs/ibc/v4.6.x/ibc/integration) + + + +## Working example + +For a real working example of an IBC application, you can look through the `ibc-transfer` module +which implements everything discussed in this section. 
+ +Here are the useful parts of the module to look at: + +[Binding to transfer +port](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/genesis.go) + +[Sending transfer +packets](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/relay.go) + +[Implementing IBC +callbacks](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/ibc_module.go) diff --git a/docs/ibc/v4.6.x/ibc/apps/bindports.mdx b/docs/ibc/v4.6.x/ibc/apps/bindports.mdx new file mode 100644 index 00000000..f12d34b8 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/apps/bindports.mdx @@ -0,0 +1,134 @@ +--- +title: Bind ports +--- + +## Synopsis + +Learn what changes to make to bind modules to their ports on initialization. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v4.6.x/ibc/overview) +- [IBC default integration](/docs/ibc/v4.6.x/ibc/integration) + + +Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented: + +> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather it refers to the application module the port binds to. For IBC modules built with the Cosmos SDK, it defaults to the module's name and for CosmWasm contracts it defaults to the contract address. + +1. Add port ID to the `GenesisState` proto definition: + + ```protobuf + message GenesisState { + string port_id = 1; + / other fields + } + ``` + +2. Add port ID as a key to the module store: + + ```go expandable + / x//types/keys.go + const ( + / ModuleName defines the IBC Module name + ModuleName = "moduleName" + + / Version defines the current version the IBC + / module supports + Version = "moduleVersion-1" + + / PortID is the default port id that module binds to + PortID = "portID" + + / ... + ) + ``` + +3. 
Add port ID to `x//types/genesis.go`: + + ```go expandable + / in x//types/genesis.go + + / DefaultGenesisState returns a GenesisState with "transfer" as the default PortID. + func DefaultGenesisState() *GenesisState { + return &GenesisState{ + PortId: PortID, + / additional k-v fields + } + } + + / Validate performs basic genesis state validation returning an error upon any + / failure. + func (gs GenesisState) + + Validate() + + error { + if err := host.PortIdentifierValidator(gs.PortId); err != nil { + return err + } + /additional validations + + return gs.Params.Validate() + } + ``` + +4. Bind to port(s) in the module keeper's `InitGenesis`: + + ```go expandable + / InitGenesis initializes the ibc-module state and binds to PortID. + func (k Keeper) + + InitGenesis(ctx sdk.Context, state types.GenesisState) { + k.SetPort(ctx, state.PortId) + + / ... + + / Only try to bind to port if it is not already bound, since we may already own + / port capability from capability InitGenesis + if !k.IsBound(ctx, state.PortId) { + / transfer module binds to the transfer port on InitChain + / and claims the returned capability + err := k.BindPort(ctx, state.PortId) + if err != nil { + panic(fmt.Sprintf("could not claim port capability: %v", err)) + } + + } + + / ... + } + ``` + + With: + + ```go expandable + / IsBound checks if the module is already bound to the desired port + func (k Keeper) + + IsBound(ctx sdk.Context, portID string) + + bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + + return ok + } + + / BindPort defines a wrapper function for the port Keeper's function in + / order to expose it to module's InitGenesis function + func (k Keeper) + + BindPort(ctx sdk.Context, portID string) + + error { + cap := k.portKeeper.BindPort(ctx, portID) + + return k.ClaimCapability(ctx, cap, host.PortPath(portID)) + } + ``` + + The module binds to the desired port(s) and returns the capabilities. 
+ + In the above, we find references to keeper methods that wrap other keeper functionality. The next section defines the keeper methods that need to be implemented. diff --git a/docs/ibc/v4.6.x/ibc/apps/ibcmodule.mdx b/docs/ibc/v4.6.x/ibc/apps/ibcmodule.mdx new file mode 100644 index 00000000..dd0b0fb8 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/apps/ibcmodule.mdx @@ -0,0 +1,384 @@ +--- +title: Implement `IBCModule` interface and callbacks +--- + +## Synopsis + +Learn how to implement the `IBCModule` interface and all of the callbacks it requires. + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This interface contains all of the callbacks IBC expects modules to implement. These include the channel handshake and closing callbacks, as well as the packet callbacks (`OnRecvPacket`, `OnAcknowledgementPacket` and `OnTimeoutPacket`). + +```go +/ IBCModule implements the ICS26 interface for the given keeper. +/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go, +/ but ultimately file structure is up to the developer +type IBCModule struct { + keeper keeper.Keeper +} +``` + +Additionally, in the `module.go` file, add the following line: + +```go +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + / Add this line + _ porttypes.IBCModule = IBCModule{ +} +) +``` + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v4.6.x/ibc/overview) +- [IBC default integration](/docs/ibc/v4.6.x/ibc/integration) + + + +## Channel handshake callbacks + +This section will describe the callbacks that are called during channel handshake execution. Among other things, it will claim channel capabilities passed on from core IBC. For a refresher on capabilities, check [the Overview section](/docs/ibc/v4.6.x/ibc/overview#capabilities). 
+ +Here are the channel handshake callbacks that modules are expected to implement: + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to provide a custom implementation. E.g. the `checkArguments` and `negotiateAppVersion` functions. + +```go expandable +/ Called by IBC Handler on MsgOpenInit +func (im IBCModule) + +OnChanOpenInit(ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + / Examples: + / - Abort if order == UNORDERED, + / - Abort if version is unsupported + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenInit must claim the channelCapability that IBC passes into the callback + if err := im.keeper.ClaimCapability(ctx, channelCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return "", err +} + +return version, nil +} + +/ Called by IBC Handler on MsgOpenTry +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / ... 
do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenTry must claim the channelCapability that IBC passes into the callback + if err := im.keeper.scopedKeeper.ClaimCapability(ctx, channelCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return "", err +} + + / Construct application version + / IBC applications must return the appropriate application version + / This can be a simple string or it can be a complex version constructed + / from the counterpartyVersion and other arguments. + / The version returned will be the channel version used for both channel ends. + appVersion := negotiateAppVersion(counterpartyVersion, args) + +return appVersion, nil +} + +/ Called by IBC Handler on MsgOpenAck +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + if counterpartyVersion != types.Version { + return sdkerrors.Wrapf(types.ErrInvalidVersion, "invalid counterparty version: %s, expected %s", counterpartyVersion, types.Version) +} + + / do custom logic + + return nil +} + +/ Called by IBC Handler on MsgOpenConfirm +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / do custom logic + + return nil +} +``` + +The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. Closing a channel is a 2-step handshake: the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`. + +```go expandable +/ Called by IBC Handler on MsgCloseInit +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... 
do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgCloseConfirm +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} +``` + +### Channel handshake version negotiation + +Application modules are expected to verify versioning used during the channel handshake procedure. + +- `OnChanOpenInit` will verify that the relayer-chosen parameters + are valid and perform any custom `INIT` logic. + It may return an error if the chosen parameters are invalid + in which case the handshake is aborted. + If the provided version string is non-empty, `OnChanOpenInit` should return + the version string if valid or an error if the provided version is invalid. + **If the version string is empty, `OnChanOpenInit` is expected to + return a default version string representing the version(s) + it supports.** + If there is no default version string for the application, + it should return an error if the provided version is an empty string. +- `OnChanOpenTry` will verify the relayer-chosen parameters along with the + counterparty-chosen version string and perform custom `TRY` logic. + If the relayer-chosen parameters + are invalid, the callback must return an error to abort the handshake. + If the counterparty-chosen version is not compatible with this module's + supported versions, the callback must return an error to abort the handshake. + If the versions are compatible, the try callback must select the final version + string and return it to core IBC. + `OnChanOpenTry` may also perform custom initialization logic. +- `OnChanOpenAck` will error if the counterparty selected version string + is invalid and abort the handshake. It may also perform custom ACK logic. 
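The version checks performed in the callbacks above can be sketched as plain string matching. The helper below is a hypothetical, self-contained sketch (the `Version` constant and the `negotiateAppVersion` name are illustrative, not ibc-go APIs), assuming a module that supports a single version:

```go
package main

import (
	"errors"
	"fmt"
)

// Version is the single version this hypothetical module supports.
const Version = "mymodule-1"

// negotiateAppVersion mirrors the pseudo-code helper used in the handshake
// callbacks above: an empty proposal yields the default version, an exact
// match is accepted, and anything else aborts the handshake with an error.
func negotiateAppVersion(proposedVersion string) (string, error) {
	if proposedVersion == "" {
		// OnChanOpenInit with an empty version: return the default version string.
		return Version, nil
	}
	if proposedVersion != Version {
		return "", errors.New("invalid version: " + proposedVersion)
	}
	return proposedVersion, nil
}

func main() {
	v, _ := negotiateAppVersion("")
	fmt.Println(v) // prints the default version ("mymodule-1")

	if _, err := negotiateAppVersion("other-1"); err != nil {
		fmt.Println("handshake aborted:", err)
	}
}
```

A production module would typically call such a helper from `OnChanOpenInit` and `OnChanOpenTry` and compare the counterparty version in `OnChanOpenAck`, as the callbacks above do.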
+ +Versions must be strings but can implement any versioning structure. If your application plans to +have linear releases then semantic versioning is recommended. If your application plans to release +various features in between major releases then it is advised to use the same versioning scheme +as IBC. This versioning scheme specifies a version identifier and compatible feature set with +that identifier. Valid version selection includes selecting a compatible version identifier with +a subset of features supported by your application for that version. The struct used for this +scheme can be found in [03-connection/types](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection/types/version.go#L16). + +Since the version type is a string, applications have the ability to do simple version verification +via string matching or they can use the already implemented versioning system and pass the proto +encoded version into each handshake call as necessary. + +ICS20 currently implements basic string matching with a single supported version. + +## Packet callbacks + +Just as IBC expects modules to implement callbacks for channel handshakes, it also expects modules to implement callbacks for handling the packet flow through a channel, as defined in the `IBCModule` interface. + +Once module A and module B are connected to each other, relayers can start relaying packets and acknowledgements back and forth on the channel. + +![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png) + +Briefly, a successful packet flow works as follows: + +1. module A sends a packet through the IBC module +2. the packet is received by module B +3. if module B writes an acknowledgement of the packet then module A will process the + acknowledgement +4. if the packet is not successfully received before the timeout, then module A processes the + packet's timeout. 
+ +### Sending packets + +Modules **do not send packets through callbacks**, since the modules initiate the action of sending packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC +module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `EncodePacketData(customPacketData)` function. + +```go +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) + +packet.Data = data +/ Send packet to IBC, authenticating with channelCap +IBCChannelKeeper.SendPacket(ctx, channelCap, packet) +``` + + + In order to prevent modules from sending packets on channels they do not own, + IBC expects modules to pass in the correct channel capability for the packet's + source channel. + + +### Receiving packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets +invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC +keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. 
+ +The state changes that occurred during this callback will only be written if: + +- the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement +- the acknowledgement returned is nil, indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodePacketData(packet.Data)` function. + +```go expandable +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) + +ibcexported.Acknowledgement { + / Decode the packet data + packetData := DecodePacketData(packet.Data) + + / do application state changes based on packet data and return the acknowledgement + / NOTE: The acknowledgement will indicate to the IBC handler if the application + / state changes should be written via the `Success()` function. Application state + / changes are only written if the acknowledgement is successful or the acknowledgement + / returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + +return ack +} +``` + +As a reminder, the `Acknowledgement` interface: + +```go +/ Acknowledgement defines the interface used to return +/ acknowledgements in the OnRecvPacket callback. +type Acknowledgement interface { + Success() + +bool + Acknowledgement() []byte +} +``` + +### Acknowledging packets + +After a module writes an acknowledgement, a relayer can relay the acknowledgement back to the sender module. The sender module can +then process the acknowledgement using the `OnAcknowledgementPacket` callback. 
The contents of the +acknowledgement are entirely up to the modules on the channel (just like the packet data); however, it +may often contain information on whether the packet was successfully processed along +with some additional data that could be useful for remediation if the packet processing failed. + +Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and +acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback +is responsible for decoding the acknowledgement and processing it. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodeAcknowledgement(acknowledgement)` and `processAck(ack)` functions. + +```go expandable +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, +) (*sdk.Result, error) { + / Decode acknowledgement + ack := DecodeAcknowledgement(acknowledgement) + + / process ack + res, err := processAck(ack) + +return res, err +} +``` + +### Timeout packets + +If the timeout for a packet is reached before the packet is successfully received or the +counterparty channel end is closed before the packet is successfully received, then the receiving +chain can no longer process it. Thus, the sending chain must process the timeout using +`OnTimeoutPacket` to handle this situation. Again, the IBC module will verify that the timeout is +indeed valid, so our module only needs to implement the state machine logic for what to do once a +timeout is reached and the packet can no longer be received. 
+ +```go +func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) (*sdk.Result, error) { + / do custom timeout logic + + return nil, nil +} +``` diff --git a/docs/ibc/v4.6.x/ibc/apps/keeper.mdx b/docs/ibc/v4.6.x/ibc/apps/keeper.mdx new file mode 100644 index 00000000..1f6c57da --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/apps/keeper.mdx @@ -0,0 +1,119 @@ +--- +title: Keeper +--- + +## Synopsis + +Learn how to implement the IBC Module keeper. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v4.6.x/ibc/overview) +- [IBC default integration](/docs/ibc/v4.6.x/ibc/integration) + + +In the previous sections, on channel handshake callbacks and port binding in `InitGenesis`, a reference was made to keeper methods that need to be implemented when creating a custom IBC module. Below is an overview of how to define an IBC module's keeper. + +> Note that some code has been left out for clarity; to get a full code overview, please refer to [the transfer module's keeper in the ibc-go repo](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/keeper.go). + +```go expandable +/ Keeper defines the IBC app module keeper +type Keeper struct { + storeKey sdk.StoreKey + cdc codec.BinaryCodec + paramSpace paramtypes.Subspace + + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + scopedKeeper capabilitykeeper.ScopedKeeper + + / ... additional according to custom logic +} + +/ NewKeeper creates a new IBC app module Keeper instance +func NewKeeper( + / args +) + +Keeper { + / ... + + return Keeper{ + cdc: cdc, + storeKey: key, + paramSpace: paramSpace, + + channelKeeper: channelKeeper, + portKeeper: portKeeper, + scopedKeeper: scopedKeeper, + + / ... 
additional according to custom logic +} +} + +/ IsBound checks if the IBC app module is already bound to the desired port +func (k Keeper) + +IsBound(ctx sdk.Context, portID string) + +bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + +return ok +} + +/ BindPort defines a wrapper function for the port Keeper's function in +/ order to expose it to module's InitGenesis function +func (k Keeper) + +BindPort(ctx sdk.Context, portID string) + +error { + cap := k.portKeeper.BindPort(ctx, portID) + +return k.ClaimCapability(ctx, cap, host.PortPath(portID)) +} + +/ GetPort returns the portID for the IBC app module. Used in ExportGenesis +func (k Keeper) + +GetPort(ctx sdk.Context) + +string { + store := ctx.KVStore(k.storeKey) + +return string(store.Get(types.PortKey)) +} + +/ SetPort sets the portID for the IBC app module. Used in InitGenesis +func (k Keeper) + +SetPort(ctx sdk.Context, portID string) { + store := ctx.KVStore(k.storeKey) + +store.Set(types.PortKey, []byte(portID)) +} + +/ AuthenticateCapability wraps the scopedKeeper's AuthenticateCapability function +func (k Keeper) + +AuthenticateCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +bool { + return k.scopedKeeper.AuthenticateCapability(ctx, cap, name) +} + +/ ClaimCapability allows the IBC app module to claim a capability that core IBC +/ passes to it +func (k Keeper) + +ClaimCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +error { + return k.scopedKeeper.ClaimCapability(ctx, cap, name) +} + +/ ... additional according to custom logic +``` diff --git a/docs/ibc/v4.6.x/ibc/apps/packets_acks.mdx b/docs/ibc/v4.6.x/ibc/apps/packets_acks.mdx new file mode 100644 index 00000000..3e834836 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/apps/packets_acks.mdx @@ -0,0 +1,104 @@ +--- +title: Define packets and acks +--- + +## Synopsis + +Learn how to define custom packet and acknowledgement structs and how to encode and decode them. 
+ + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v4.6.x/ibc/overview) +- [IBC default integration](/docs/ibc/v4.6.x/ibc/integration) + + + +## Custom packets + +Modules connected by a channel must agree on what application data they are sending over the +channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up +to each application module to determine how to implement this agreement. However, for most +applications this will happen as a version negotiation during the channel handshake. While more +complex version negotiation is possible to implement inside the channel opening handshake, a very +simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go). + +Thus, a module must define its custom packet data structure, along with a well-defined way to +encode and decode it to and from `[]byte`. + +```go expandable +/ Custom packet data defined in application module +type CustomPacketData struct { + / Custom fields ... +} + +func EncodePacketData(packetData CustomPacketData) []byte { + / encode packetData to bytes +} + +func DecodePacketData(encoded []byte) (CustomPacketData) { + / decode from bytes to packet data +} +``` + +> Note that the `CustomPacketData` struct is defined in the proto definition and then compiled by the protobuf compiler. + +Then a module must encode its packet data before sending it through IBC. + +```go +/ Sending custom application packet data + data := EncodePacketData(customPacketData) + +packet.Data = data +IBCChannelKeeper.SendPacket(ctx, packet) +``` + +A module receiving a packet must decode the `PacketData` into a structure it expects so that it can 
+ +```go +/ Receiving custom application packet data (in OnRecvPacket) + packetData := DecodePacketData(packet.Data) +/ handle received custom packet data +``` + +## Acknowledgements + +Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing. +In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement +will be written once the packet has been processed by the application, which may be well after the packet receipt. + +NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement +for a packet as soon as it has been received from the IBC module. + +This acknowledgement can then be relayed back to the original sender chain, which can take action +depending on the contents of the acknowledgement. + +Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and +receive acknowledgements with the IBC modules as byte strings. + +Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an +acknowledgement struct, along with encoding and decoding it, is very similar to the packet data +example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope) +specifies a recommended format for acknowledgements. This acknowledgement type can be imported from +[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types). + +While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto): + +```protobuf expandable +/ Acknowledgement is the recommended acknowledgement format to be used by +/ app-specific protocols.
+/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental +/ conflicts with other protobuf message formats used for acknowledgements. +/ The first byte of any message with this format will be the non-ASCII values +/ `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS: +/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope +message Acknowledgement { + / response contains either a result or an error and must be non-empty + oneof response { + bytes result = 21; + string error = 22; + } +} +``` diff --git a/docs/ibc/v4.6.x/ibc/apps/routing.mdx b/docs/ibc/v4.6.x/ibc/apps/routing.mdx new file mode 100644 index 00000000..043a65f9 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/apps/routing.mdx @@ -0,0 +1,40 @@ +--- +title: Routing +--- + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v4.6.x/ibc/overview) +- [IBC default integration](/docs/ibc/v4.6.x/ibc/integration) + + + +## Synopsis + +Learn how to hook a route to the IBC router for the custom IBC module. + +As mentioned above, modules must implement the `IBCModule` interface (which contains both channel +handshake callbacks and packet handling callbacks). The concrete implementation of this interface +must be registered with the module name as a route on the IBC `Router`. + +```go expandable +/ app.go +func NewApp(...args) *App { +/ ... + +/ Create static IBC router, add module routes, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) +/ Note: moduleCallbacks must implement IBCModule interface +ibcRouter.AddRoute(moduleName, moduleCallbacks) + +/ Setting Router will finalize all routes by sealing router +/ No more routes can be added +app.IBCKeeper.SetRouter(ibcRouter) + +/ ... 
+} +``` diff --git a/docs/ibc/v4.6.x/ibc/integration.mdx b/docs/ibc/v4.6.x/ibc/integration.mdx new file mode 100644 index 00000000..3ab57303 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/integration.mdx @@ -0,0 +1,224 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate IBC to your application and send data packets to other chains. + +This document outlines the required steps to integrate and configure the [IBC +module](https://github.com/cosmos/ibc-go/tree/main/modules/core) to your Cosmos SDK application and +send fungible token transfers to other chains. + +## Integrating the IBC module + +Integrating the IBC module to your SDK-based application is straightforward. The general changes can be summarized in the following steps: + +- Add required modules to the `module.BasicManager` +- Define additional `Keeper` fields for the new modules on the `App` type +- Add the module's `StoreKeys` and initialize their `Keepers` +- Set up corresponding routers and routes for the `ibc` module +- Add the modules to the module `Manager` +- Add modules to `Begin/EndBlockers` and `InitGenesis` +- Update the module `SimulationManager` to enable simulations + +### Module `BasicManager` and `ModuleAccount` permissions + +The first step is to add the following modules to the `BasicManager`: `x/capability`, `x/ibc`, +and `x/ibc-transfer`. After that, we need to grant `Minter` and `Burner` permissions to +the `ibc-transfer` `ModuleAccount` to mint and burn relayed tokens. + +```go expandable +/ app.go +var ( + + ModuleBasics = module.NewBasicManager( + / ... + capability.AppModuleBasic{ +}, + ibc.AppModuleBasic{ +}, + transfer.AppModuleBasic{ +}, / i.e ibc-transfer module + ) + + / module account permissions + maccPerms = map[string][]string{ + / other module accounts permissions + / ... 
+ ibctransfertypes.ModuleName: { + authtypes.Minter, authtypes.Burner +}, +) +``` + +### Application fields + +Then, we need to register the `Keepers` as follows: + +```go expandable +/ app.go +type App struct { + / baseapp, keys and subspaces definitions + + / other keepers + / ... + IBCKeeper *ibckeeper.Keeper / IBC Keeper must be a pointer in the app, so we can SetRouter on it correctly + TransferKeeper ibctransferkeeper.Keeper / for cross-chain fungible token transfers + + / make scoped keepers public for test purposes + ScopedIBCKeeper capabilitykeeper.ScopedKeeper + ScopedTransferKeeper capabilitykeeper.ScopedKeeper + + / ... + / module and simulation manager definitions +} +``` + +### Configure the `Keepers` + +During initialization, besides initializing the IBC `Keepers` (for the `x/ibc`, and +`x/ibc-transfer` modules), we need to grant specific capabilities through the capability module +`ScopedKeepers` so that we can authenticate the object-capability permissions for each of the IBC +channels. + +```go expandable +func NewApp(...args) *App { + / define codecs and baseapp + + / add capability keeper and ScopeToModule for ibc module + app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + + / grant capabilities for the ibc and ibc-transfer modules + scopedIBCKeeper := app.CapabilityKeeper.ScopeToModule(ibchost.ModuleName) + scopedTransferKeeper := app.CapabilityKeeper.ScopeToModule(ibctransfertypes.ModuleName) + + / ... 
other modules keepers + + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, + ) + + / Create Transfer Keepers + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, + ) + transferModule := transfer.NewAppModule(app.TransferKeeper) + + / .. continues +} +``` + +### Register `Routers` + +IBC needs to know which module is bound to which port so that it can route packets to the +appropriate module and call the appropriate callbacks. The port to module name mapping is handled by +IBC's port `Keeper`. However, the mapping from module name to the relevant callbacks is accomplished +by the port +[`Router`](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/router.go) on the +IBC module. + +Adding the module routes allows the IBC handler to call the appropriate callback when processing a +channel handshake or a packet. + +Currently, a `Router` is static so it must be initialized and set correctly on app initialization. +Once the `Router` has been set, no new routes can be added. + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + / Create static IBC router, add ibc-transfer module route, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / .. continues +``` + +### Module Managers + +In order to use IBC, we need to add the new modules to the module `Manager` and to the `SimulationManager` in case your application supports simulations. 
+ +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + app.mm = module.NewManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / ... + + app.sm = module.NewSimulationManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / .. continues +``` + +### Application ABCI Ordering + +One addition from IBC is the concept of `HistoricalEntries`, which are stored on the staking module. +Each entry contains the historical information for the `Header` and `ValidatorSet` of this chain, which is stored +at each height during the `BeginBlock` call. This historical info is required to introspect past heights in order to verify the light client `ConsensusState` during the +connection handshake. + +The IBC module also has +[`BeginBlock`](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client/abci.go) logic. This is optional, as it is only required if your application uses the localhost client to connect two different modules from the same chain. + + + Only register the ibc module to the `SetOrderBeginBlockers` if your + application will use the localhost (*aka* loopback) client. + + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + / add staking and ibc modules to BeginBlockers + app.mm.SetOrderBeginBlockers( + / other modules ... + stakingtypes.ModuleName, ibchost.ModuleName, + ) + + / ... + + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. + app.mm.SetOrderInitGenesis( + capabilitytypes.ModuleName, + / other modules ... + ibchost.ModuleName, ibctransfertypes.ModuleName, + ) + + / ..
continues +``` + + + **IMPORTANT**: The capability module **must** be declared first in + `SetOrderInitGenesis` + + +That's it! You have now wired up the IBC module and are able to send fungible tokens across +different chains. If you want to have a broader view of the changes, take a look at the SDK's +[`SimApp`](https://github.com/cosmos/ibc-go/blob/main/testing/simapp/app.go). diff --git a/docs/ibc/v4.6.x/ibc/middleware/develop.mdx b/docs/ibc/v4.6.x/ibc/middleware/develop.mdx new file mode 100644 index 00000000..f461f1b5 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/middleware/develop.mdx @@ -0,0 +1,434 @@ +--- +title: IBC middleware +--- + +## Synopsis + +Learn how to write your own custom middleware to wrap an IBC application, and understand how to hook different middleware to IBC base applications to form different IBC application stacks. + +This document serves as a guide for middleware developers who want to write their own middleware and for chain developers who want to use IBC middleware on their chains. + +IBC applications are designed to be self-contained modules that implement their own application-specific logic through a set of interfaces with the core IBC handlers. These core IBC handlers, in turn, are designed to enforce the correctness properties of IBC (transport, authentication, ordering) while delegating all application-specific handling to the IBC application modules. However, there are cases where some functionality may be desired by many applications, yet not appropriate to place in core IBC. + +Middleware allows developers to define the extensions as separate modules that can wrap over the base application. This middleware can thus perform its own custom logic, and pass data into the application so that it may run its logic without being aware of the middleware's existence. This allows both the application and the middleware to implement their own isolated logic while still being able to run as part of a single packet flow.
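This wrapping relationship can be sketched with deliberately simplified, non-SDK types; the one-method `IBCModule` interface, `baseApp`, and `loggingMiddleware` below are illustrative stand-ins for the real interfaces defined later in this document, not ibc-go APIs.

```go
package main

import "fmt"

// IBCModule is a drastically simplified stand-in for the ICS-26 interface:
// one callback that receives packet bytes and returns acknowledgement bytes.
type IBCModule interface {
	OnRecvPacket(data []byte) []byte
}

// baseApp is a stand-in base application, unaware of any middleware.
type baseApp struct{}

func (baseApp) OnRecvPacket(data []byte) []byte {
	return append([]byte("ack:"), data...)
}

// loggingMiddleware wraps an underlying IBCModule: it runs its own custom
// logic, then delegates to the application, which never sees the middleware.
type loggingMiddleware struct {
	app IBCModule
}

func (m loggingMiddleware) OnRecvPacket(data []byte) []byte {
	fmt.Printf("middleware saw %d packet bytes\n", len(data)) // custom logic
	return m.app.OnRecvPacket(data)                           // delegate to underlying app
}

func main() {
	// Core IBC would talk only to the outermost layer of the stack.
	var stack IBCModule = loggingMiddleware{app: baseApp{}}
	fmt.Println(string(stack.OnRecvPacket([]byte("hello"))))
}
```

Because the middleware satisfies the same interface as the application, stacks compose: another middleware could wrap `stack` the same way.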
+ + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v4.6.x/ibc/overview) +- [IBC Integration](/docs/ibc/v4.6.x/ibc/integration) +- [IBC Application Developer Guide](/docs/ibc/v4.6.x/ibc/apps/apps) + + + +## Definitions + +`Middleware`: A self-contained module that sits between core IBC and an underlying IBC application during packet execution. All messages between core IBC and the underlying application must flow through the middleware, which may perform its own custom logic. + +`Underlying Application`: An underlying application is the application that is directly connected to the middleware in question. This underlying application may itself be middleware that is chained to a base application. + +`Base Application`: A base application is an IBC application that does not contain any middleware. It may be nested by zero or more middleware to form an application stack. + +`Application Stack (or stack)`: A stack is the complete set of application logic (middleware(s) + base application) that gets connected to core IBC. A stack may be just a base application, or it may be a series of middlewares that nest a base application. + +## Create a custom IBC middleware + +IBC middleware wraps over an underlying IBC application and sits between core IBC and the application. It has complete control in modifying any message coming from IBC to the application, and any message coming from the application to core IBC. Thus, middleware must be completely trusted by the chain developers who wish to integrate it; in return, this trust gives the middleware complete flexibility in modifying the application(s) it wraps. + +### Interfaces + +```go +/ Middleware implements the ICS26 Module interface +type Middleware interface { + porttypes.IBCModule / middleware has access to an underlying application which may be wrapped by more middleware + ICS4Wrapper / middleware has access to an ICS4Wrapper which may be core IBC Channel Handler or a higher-level middleware that wraps this middleware.
+} +``` + +```go +/ This is implemented by ICS4 and all middleware that are wrapping a base application. +/ The base application will call `sendPacket` or `writeAcknowledgement` of the middleware directly above them +/ which will call the next middleware until it reaches the core IBC handler. +type ICS4Wrapper interface { + SendPacket(ctx sdk.Context, chanCap *capabilitytypes.Capability, packet exported.Packet) error + WriteAcknowledgement(ctx sdk.Context, chanCap *capabilitytypes.Capability, packet exported.Packet, ack exported.Acknowledgement) error + GetAppVersion(ctx sdk.Context, portID, channelID string) (string, bool) +} +``` + +### Implement `IBCModule` interface and callbacks + +The `IBCModule` is a struct that implements the [ICS-26 interface (`porttypes.IBCModule`)](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/module.go#L11-L106). It is recommended to separate these callbacks into a separate file `ibc_module.go`. As will be mentioned in the [integration section](/docs/ibc/v4.6.x/ibc/middleware/integration), this struct should be different from the struct that implements `AppModule`, in case the middleware maintains its own internal state and processes separate SDK messages. + +The middleware must have access to the underlying application, and be called first during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback. Middleware **may** choose not to call the underlying application's callback at all, though such cases should generally be limited to error handling. + +In the case where the IBC middleware expects to speak to a compatible IBC middleware on the counterparty chain, it must use the channel handshake to negotiate the middleware version without interfering with the version negotiation of the underlying application.
+ +Middleware accomplishes this by formatting the version in a JSON-encoded string containing the middleware version and the application version. The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application. The format of the version is specified in ICS-30 as the following: + +```json +{ + "<middleware_version_key>": "<middleware_version_value>", + "app_version": "<application_version_value>" +} +``` + +The `<middleware_version_key>` key in the JSON struct should be replaced by the actual name of the key for the corresponding middleware (e.g. `fee_version`). + +During the handshake callbacks, the middleware can unmarshal the version string and retrieve the middleware and application versions. It can do its negotiation logic on `<middleware_version_value>`, and pass the `<application_version_value>` to the underlying application. + +The middleware should simply pass the capability in the callback arguments along to the underlying application so that it may be claimed by the base application. The base application will then pass the capability up the stack in order to authenticate an outgoing packet/acknowledgement. + +In the case where the middleware wishes to send a packet or acknowledgement without the involvement of the underlying application, it should be given access to the same `scopedKeeper` as the base application so that it can retrieve the capabilities by itself. + +### Handshake callbacks + +#### `OnChanOpenInit` + +```go expandable +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + if version != "" { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback.
+ metadata, err := Unmarshal(version) + if err != nil { + / Since it is valid for fee version to not be specified, + / the above middleware version may be for another middleware. + / Pass the entire version string onto the underlying application. + return im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + version, + ) +} + +else { + metadata = { + / set middleware version to default value + MiddlewareVersion: defaultMiddlewareVersion, + / allow application to return its default version + AppVersion: "", +} + +} + +doCustomLogic() + + / if the version string is empty, OnChanOpenInit is expected to return + / a default version string representing the version(s) + +it supports + appVersion, err := im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + metadata.AppVersion, / note we only pass app version here + ) + if err != nil { + return "", err +} + version := constructVersion(metadata.MiddlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L34-L82) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenTry` + +```go expandable +func OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. 
+ cpMetadata, err := Unmarshal(counterpartyVersion) + if err != nil { + return app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + counterpartyVersion, + ) +} + +doCustomLogic() + + / Call the underlying application's OnChanOpenTry callback. + / The try callback must select the final app-specific version string and return it. + appVersion, err := app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + cpMetadata.AppVersion, / note we only pass counterparty app version here + ) + if err != nil { + return "", err +} + + / negotiate final middleware version + middlewareVersion := negotiateMiddlewareVersion(cpMetadata.MiddlewareVersion) + version := constructVersion(middlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L84-L124) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenAck` + +```go expandable +func OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyChannelID string, + counterpartyVersion string, +) + +error { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. 
+ cpMetadata, err = UnmarshalJSON(counterpartyVersion) + if err != nil { + return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, counterpartyVersion) +} + if !isCompatible(cpMetadata.MiddlewareVersion) { + return error +} + +doCustomLogic() + + / call the underlying application's OnChanOpenAck callback + return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, cpMetadata.AppVersion) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L126-L152) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenConfirm` + +```go +func OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanOpenConfirm(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L154-L162) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanCloseInit` + +```go +func OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanCloseInit(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L164-L187) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanCloseConfirm` + +```go +func OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanCloseConfirm(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L189-L212) an example implementation of this callback for the ICS29 Fee Middleware module.
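The version construction and unmarshalling used throughout the handshake callbacks above can be sketched as a self-contained round trip. The `Metadata` struct, the `fee_version` key, and the fallback behaviour here are assumptions modelled on the ICS-30 envelope format, not the exact ICS29 implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata mirrors the ICS-30 version envelope: a middleware-specific key
// (assumed here to be "fee_version") plus the wrapped application version.
type Metadata struct {
	FeeVersion string `json:"fee_version"`
	AppVersion string `json:"app_version"`
}

// constructVersion builds the JSON-encoded channel version string.
func constructVersion(mwVersion, appVersion string) (string, error) {
	bz, err := json.Marshal(Metadata{FeeVersion: mwVersion, AppVersion: appVersion})
	return string(bz), err
}

// unmarshalVersion attempts to decode the envelope; on failure the whole
// string is treated as the underlying application's version, which is the
// fallback branch taken in the callbacks above.
func unmarshalVersion(version string) (Metadata, bool) {
	var m Metadata
	if err := json.Unmarshal([]byte(version), &m); err != nil || m.FeeVersion == "" {
		return Metadata{AppVersion: version}, false
	}
	return m, true
}

func main() {
	v, _ := constructVersion("ics29-1", "ics20-1")
	fmt.Println(v)
	m, ok := unmarshalVersion(v)
	fmt.Println(m.FeeVersion, m.AppVersion, ok)
}
```

The fallback matters because a plain application version such as `ics20-1` is not valid JSON for this envelope, so the middleware passes it through untouched.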
+ +**NOTE**: Middleware that does not need to negotiate with a counterparty middleware on the remote stack will not implement the version unmarshalling and negotiation, and will simply perform its own custom logic on the callbacks without relying on the counterparty behaving similarly. + +### Packet callbacks + +The packet callbacks, just like the handshake callbacks, wrap the application's packet callbacks. The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide range of use cases, as a simple base application like token-transfer can be transformed for a variety of use cases by combining it with custom middleware. + +#### `OnRecvPacket` + +```go expandable +func OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + doCustomLogic(packet) + ack := app.OnRecvPacket(ctx, packet, relayer) + +doCustomLogic(ack) / middleware may modify outgoing ack + return ack +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L214-L237) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnAcknowledgementPacket` + +```go +func OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet, acknowledgement) + +return app.OnAcknowledgementPacket(ctx, packet, acknowledgement, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L239-L292) an example implementation of this callback for the ICS29 Fee Middleware module.
+ +#### `OnTimeoutPacket` + +```go +func OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet) + +return app.OnTimeoutPacket(ctx, packet, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L294-L334) an example implementation of this callback for the ICS29 Fee Middleware module. + +### ICS-4 wrappers + +Middleware must also wrap ICS-4 so that any communication from the application to the `channelKeeper` goes through the middleware first. Similar to the packet callbacks, the middleware may modify outgoing acknowledgements and packets in any way it wishes. + +#### `SendPacket` + +```go +func SendPacket( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + appPacket exported.PacketI, +) { + / middleware may modify packet + packet = doCustomLogic(appPacket) + +return ics4Keeper.SendPacket(ctx, chanCap, packet) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L336-L343) an example implementation of this function for the ICS29 Fee Middleware module. + +#### `WriteAcknowledgement` + +```go expandable +/ only called for async acks +func WriteAcknowledgement( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + packet exported.PacketI, + ack exported.Acknowledgement, +) { + / middleware may modify acknowledgement + ack_bytes = doCustomLogic(ack) + +return ics4Keeper.WriteAcknowledgement(packet, ack_bytes) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L345-L353) an example implementation of this function for the ICS29 Fee Middleware module. 
+ +#### `GetAppVersion` + +```go expandable +/ middleware must return the underlying application version +func GetAppVersion( + ctx sdk.Context, + portID, + channelID string, +) (string, bool) { + version, found := ics4Keeper.GetAppVersion(ctx, portID, channelID) + if !found { + return "", false +} + if !MiddlewareEnabled { + return version, true +} + + / unwrap channel version + metadata, err := Unmarshal(version) + if err != nil { + panic(fmt.Errorf("unable to unmarshal version: %w", err)) +} + +return metadata.AppVersion, true +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L355-L358) an example implementation of this function for the ICS29 Fee Middleware module. diff --git a/docs/ibc/v4.6.x/ibc/middleware/integration.mdx b/docs/ibc/v4.6.x/ibc/middleware/integration.mdx new file mode 100644 index 00000000..3b942b53 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/middleware/integration.mdx @@ -0,0 +1,78 @@ +--- +title: Integrating IBC middleware into a chain +description: >- + Learn how to integrate IBC middleware(s) with a base application into your + chain. The following document only applies to Cosmos SDK chains. +--- + +Learn how to integrate IBC middleware(s) with a base application into your chain. The following document only applies to Cosmos SDK chains. + +If the middleware is maintaining its own state and/or processing SDK messages, then it should create and register its SDK module **only once** with the module manager in `app.go`. + +All middleware must be connected to the IBC router and wrap over an underlying base IBC application. An IBC application may be wrapped by many layers of middleware; only the top-layer middleware should be hooked to the IBC router, with all underlying middlewares and the application wrapped by it.
+ +The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application, while function calls from the application to IBC go from the bottom middleware up through the top middleware and then to core IBC handlers. Thus, the same set of middleware put in different orders may produce different effects. + +## Example integration + +```go expandable +/ app.go + +/ middleware 1 and middleware 3 are stateful middleware, +/ perhaps implementing separate sdk.Msg and Handlers +mw1Keeper := mw1.NewKeeper(storeKey1) + +mw3Keeper := mw3.NewKeeper(storeKey3) + +/ Only create App Module **once** and register in app module +/ if the module maintains independent state and/or processes sdk.Msgs +app.moduleManager = module.NewManager( + ... + mw1.NewAppModule(mw1Keeper), + mw3.NewAppModule(mw3Keeper), + transfer.NewAppModule(transferKeeper), + custom.NewAppModule(customKeeper) +) + +mw1IBCModule := mw1.NewIBCModule(mw1Keeper) + +mw2IBCModule := mw2.NewIBCModule() / middleware2 is stateless middleware +mw3IBCModule := mw3.NewIBCModule(mw3Keeper) + scopedKeeperTransfer := capabilityKeeper.NewScopedKeeper("transfer") + +scopedKeeperCustom1 := capabilityKeeper.NewScopedKeeper("custom1") + +scopedKeeperCustom2 := capabilityKeeper.NewScopedKeeper("custom2") + +/ NOTE: IBC Modules may be initialized any number of times provided they use a separate +/ scopedKeeper and underlying port. + +/ initialize base IBC applications +/ if you want to create two different stacks with the same base application, +/ they must be given different scopedKeepers and assigned different ports.
+ transferIBCModule := transfer.NewIBCModule(transferKeeper) + +customIBCModule1 := custom.NewIBCModule(customKeeper, "portCustom1") + +customIBCModule2 := custom.NewIBCModule(customKeeper, "portCustom2") + +/ create IBC stacks by combining middleware with base application +/ NOTE: since middleware2 is stateless it does not require a Keeper +/ stack 1 contains mw1 -> mw3 -> transfer +stack1 := mw1.NewIBCMiddleware(mw3.NewIBCMiddleware(transferIBCModule, mw3Keeper), mw1Keeper) +/ stack 2 contains mw3 -> mw2 -> custom1 +stack2 := mw3.NewIBCMiddleware(mw2.NewIBCMiddleware(customIBCModule1), mw3Keeper) +/ stack 3 contains mw2 -> mw1 -> custom2 +stack3 := mw2.NewIBCMiddleware(mw1.NewIBCMiddleware(customIBCModule2, mw1Keeper)) + +/ associate each stack with the moduleName provided by the underlying scopedKeeper + ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute("transfer", stack1) + +ibcRouter.AddRoute("custom1", stack2) + +ibcRouter.AddRoute("custom2", stack3) + +app.IBCKeeper.SetRouter(ibcRouter) +``` diff --git a/docs/ibc/v4.6.x/ibc/overview.mdx b/docs/ibc/v4.6.x/ibc/overview.mdx new file mode 100644 index 00000000..3a2b5e14 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/overview.mdx @@ -0,0 +1,297 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about IBC, its components, and IBC use cases. + +## What is the Inter-Blockchain Communication Protocol (IBC)? + +This document serves as a guide for developers who want to write their own Inter-Blockchain +Communication protocol (IBC) applications for custom use cases. + +> IBC applications must be written as self-contained modules. + +Due to the modular design of the IBC protocol, IBC +application developers do not need to be concerned with the low-level details of clients, +connections, and proof verification. + +This brief explanation of the lower levels of the +stack gives application developers a broad understanding of the IBC +protocol. 
Abstraction layer details for channels and ports are most relevant for application developers and describe how to define custom packets and `IBCModule` callbacks.
+
+The requirements to have your module interact over IBC are:
+
+- Bind to a port or ports.
+- Define your packet data.
+- Use the default acknowledgment struct provided by core IBC or optionally define a custom acknowledgment struct.
+- Standardize an encoding of the packet data.
+- Implement the `IBCModule` interface.
+
+Read on for a detailed explanation of how to write a self-contained IBC application module.
+
+## Components Overview
+
+### [Clients](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client)
+
+IBC clients are on-chain light clients. Each light client is identified by a unique client-id.
+IBC clients track the consensus states of other blockchains, along with the proof spec necessary to
+properly verify proofs against the client's consensus state. A client can be associated with any number
+of connections to the counterparty chain. The client identifier is auto-generated using the client type
+and the global client counter appended in the format: `{client-type}-{N}`.
+
+A `ClientState` should contain chain-specific and light-client-specific information necessary for verifying updates
+and upgrades to the IBC client. The `ClientState` may contain information such as chain-id, latest height, proof specs,
+unbonding periods, or the status of the light client. The `ClientState` should not contain information that
+is specific to a given block at a certain height; this is the function of the `ConsensusState`. Each `ConsensusState`
+should be associated with a unique block and should be referenced using a height. IBC clients are given a
+client identifier prefixed store to store their associated client state and consensus states along with
+any metadata associated with the consensus states. Consensus states are stored using their associated height.
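+
+To make the split between the two concepts concrete, here is an illustrative sketch (field names are hypothetical, loosely inspired by the Tendermint client, and are not the actual ibc-go definitions):
+
+```go
+import "time"
+
+// ExampleClientState holds chain- and light-client-specific data needed to
+// verify updates and upgrades; it is NOT tied to any single block.
+type ExampleClientState struct {
+	ChainID         string
+	LatestHeight    uint64
+	UnbondingPeriod time.Duration
+	Frozen          bool
+}
+
+// ExampleConsensusState captures data about one specific block; it is stored
+// and referenced by the height at which that block was committed.
+type ExampleConsensusState struct {
+	Timestamp          time.Time
+	Root               []byte // commitment root used for proof verification
+	NextValidatorsHash []byte
+}
+```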
+ +The supported IBC clients are: + +- [Solo Machine light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine): Devices such as phones, browsers, or laptops. +- [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint): The default for Cosmos SDK-based chains. +- [Localhost (loopback) client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/09-localhost): Useful for + testing, simulation, and relaying packets to modules on the same application. + +### IBC Client Heights + +IBC Client Heights are represented by the struct: + +```go +type Height struct { + RevisionNumber uint64 + RevisionHeight uint64 +} +``` + +The `RevisionNumber` represents the revision of the chain that the height is representing. +A revision typically represents a continuous, monotonically increasing range of block-heights. +The `RevisionHeight` represents the height of the chain within the given revision. + +On any reset of the `RevisionHeight`—for example, when hard-forking a Tendermint chain— +the `RevisionNumber` will get incremented. This allows IBC clients to distinguish between a +block-height `n` of a previous revision of the chain (at revision `p`) and block-height `n` of the current +revision of the chain (at revision `e`). + +`Height`s that share the same revision number can be compared by simply comparing their respective `RevisionHeight`s. +`Height`s that do not share the same revision number will only be compared using their respective `RevisionNumber`s. +Thus a height `h` with revision number `e+1` will always be greater than a height `g` with revision number `e`, +**REGARDLESS** of the difference in revision heights. 
+
+Ex:
+
+```go
+Height{
+  RevisionNumber: 3,
+  RevisionHeight: 0
+} > Height{
+  RevisionNumber: 2,
+  RevisionHeight: 100000000000
+}
+```
+
+When a Tendermint chain is running a particular revision, relayers can simply submit headers and proofs with the revision number
+given by the chain's `chainID`, and the revision height given by the Tendermint block height. When a chain updates using a hard-fork
+and resets its block-height, it is responsible for updating its `chainID` to increment the revision number.
+IBC Tendermint clients then verify the revision number against their `chainID` and treat the `RevisionHeight` as the Tendermint block-height.
+
+Tendermint chains wishing to use revisions to maintain persistent IBC connections even across height-resetting upgrades must format their `chainID`s
+in the following manner: `{chainID}-{revision_number}`. On any height-resetting upgrade, the `chainID` **MUST** be updated with a higher revision number
+than the previous value.
+
+Ex:
+
+- Before upgrade `chainID`: `gaiamainnet-3`
+- After upgrade `chainID`: `gaiamainnet-4`
+
+Clients that do not require revisions, such as the solo-machine client, simply hardcode `0` into the revision number whenever they
+need to return an IBC height when implementing IBC interfaces and use the `RevisionHeight` exclusively.
+
+Other client-types can implement their own logic to verify the IBC heights that relayers provide in their `Update`, `Misbehavior`, and
+`Verify` functions respectively.
+
+The IBC interfaces expect an `ibcexported.Height` interface; however, all clients must use the concrete implementation provided in
+`02-client/types` and reproduced above.
+
+### [Connections](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection)
+
+Connections encapsulate two `ConnectionEnd` objects on two separate blockchains. Each
+`ConnectionEnd` is associated with a client of the other blockchain (for example, the counterparty blockchain).
+The connection handshake is responsible for verifying that the light clients on each chain are
+correct for their respective counterparties. Connections, once established, are responsible for
+facilitating all cross-chain verifications of IBC state. A connection can be associated with any
+number of channels.
+
+### [Proofs](https://github.com/cosmos/ibc-go/blob/main/modules/core/23-commitment) and [Paths](https://github.com/cosmos/ibc-go/blob/main/modules/core/24-host)
+
+In IBC, blockchains do not directly pass messages to each other over the network. Instead, to
+communicate, a blockchain commits some state to a specifically defined path that is reserved for a
+specific message type and a specific counterparty: for example, a specific `connectionEnd` stored as part
+of a handshake, or a packet intended to be relayed to a module on the counterparty chain. A relayer
+process monitors for updates to these paths and relays messages by submitting the data stored
+under the path and a proof to the counterparty chain.
+
+Proofs are passed from core IBC to light-clients as bytes. It is up to the light client implementation to interpret these bytes appropriately.
+
+- The paths that all IBC implementations must use for committing IBC messages are defined in
+  [ICS-24 Host State Machine Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements).
+- The proof format that all implementations must be able to produce and verify is defined in the [ICS-23 Proofs](https://github.com/confio/ics23) implementation.
+
+### Capabilities
+
+IBC is intended to work in execution environments where modules do not necessarily trust each
+other. Thus, IBC must authenticate module actions on ports and channels so that only modules with the
+appropriate permissions can use them.
+
+This module authentication is accomplished using a [dynamic
+capability store](/docs/common/pages/adr-comprehensive#adr-003-dynamic-capability-store).
Upon binding to a port or
+creating a channel for a module, IBC returns a dynamic capability that the module must claim in
+order to use that port or channel. The dynamic capability module prevents other modules from using that port or channel since
+they do not own the appropriate capability.
+
+While this background information is useful, IBC modules do not need to interact at all with
+these lower-level abstractions. The relevant abstraction layer for IBC application developers is
+that of channels and ports. IBC applications must be written as self-contained **modules**.
+
+A module on one blockchain can communicate with other modules on other blockchains by sending,
+receiving, and acknowledging packets through channels that are uniquely identified by the
+`(channelID, portID)` tuple.
+
+A useful analogy is to consider IBC modules as internet applications on
+a computer. A channel can then be conceptualized as an IP connection, with the IBC portID being
+analogous to an IP port and the IBC channelID being analogous to an IP address. Thus, a single
+instance of an IBC module can communicate on the same port with any number of other modules and
+IBC correctly routes all packets to the relevant module using the `(channelID, portID)` tuple. An
+IBC module can also communicate with another IBC module over multiple ports, with each
+`(portID<->portID)` packet stream being sent on a different unique channel.
+
+### [Ports](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port)
+
+An IBC module can bind to any number of ports. Each port must be identified by a unique `portID`.
+Since IBC is designed to be secure with mutually distrusted modules operating on the same ledger,
+binding a port returns a dynamic object capability. In order to take action on a particular port
+(for example, to open a channel with its portID), a module must provide the dynamic object capability to the IBC
+handler.
This requirement prevents a malicious module from opening channels with ports it does not own. Thus,
+IBC modules are responsible for claiming the capability that is returned on `BindPort`.
+
+### [Channels](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+An IBC channel can be established between two IBC ports. Currently, a port is exclusively owned by a
+single module. IBC packets are sent over channels. Just as IP packets contain the destination IP
+address and IP port, and the source IP address and source IP port, IBC packets contain
+the destination portID and channelID, and the source portID and channelID. This packet structure enables IBC to
+correctly route packets to the destination module while allowing modules receiving packets to
+know the sender module.
+
+A channel can be `ORDERED`, where packets from a sending module must be processed by the
+receiving module in the order they were sent, or it can be `UNORDERED`, where packets
+from a sending module are processed in the order they arrive (which might differ from the order they were sent).
+
+Modules can choose which channels they wish to communicate over; IBC therefore expects modules to
+implement callbacks that are called during the channel handshake. These callbacks can perform custom
+channel initialization logic. If any callback returns an error, the channel handshake fails. Thus, by
+returning errors on callbacks, modules can programmatically accept or reject channels.
+
+The channel handshake is a 4-step handshake. Briefly, if a given chain A wants to open a channel with
+chain B using an already established connection:
+
+1. chain A sends a `ChanOpenInit` message to signal a channel initialization attempt with chain B.
+2. chain B sends a `ChanOpenTry` message to try opening the channel on chain A.
+3. chain A sends a `ChanOpenAck` message to mark its channel end status as open.
+4. chain B sends a `ChanOpenConfirm` message to mark its channel end status as open.
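+
+The four steps can be sketched as a toy state machine (the state names follow ICS 04; this is an illustration, not the actual ibc-go implementation):
+
+```go
+// Channel end states, as named in ICS 04.
+type State int
+
+const (
+	UNINITIALIZED State = iota
+	INIT
+	TRYOPEN
+	OPEN
+)
+
+// runHandshake walks both channel ends through the 4-step handshake and
+// returns the final state of each end.
+func runHandshake() (chainA, chainB State) {
+	chainA = INIT    // 1. ChanOpenInit on chain A
+	chainB = TRYOPEN // 2. ChanOpenTry on chain B
+	chainA = OPEN    // 3. ChanOpenAck on chain A
+	chainB = OPEN    // 4. ChanOpenConfirm on chain B
+	return chainA, chainB
+}
+```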
+
+If all handshake steps are successful, the channel is opened on both sides. At each step in the handshake, the module
+associated with the `ChannelEnd` executes its callback. So
+on `ChanOpenInit`, the module on chain A executes its callback `OnChanOpenInit`.
+
+The channel identifier is auto-derived in the format: `channel-{N}`, where `N` is the next sequence to be used.
+
+Just as ports came with dynamic capabilities, channel initialization returns a dynamic capability
+that the module **must** claim so that it can pass in a capability to authenticate channel actions
+like sending packets. The channel capability is passed into the callback on the first parts of the
+handshake; either `OnChanOpenInit` on the initializing chain or `OnChanOpenTry` on the other chain.
+
+#### Closing channels
+
+Closing a channel occurs in 2 handshake steps as defined in [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics).
+
+`ChanCloseInit` closes a channel on the executing chain if the channel exists, is not
+already closed, and the connection it exists upon is OPEN. Channels can only be closed by a
+calling module or in the case of a packet timeout on an ORDERED channel.
+
+`ChanCloseConfirm` is a response to a counterparty channel executing `ChanCloseInit`. The channel
+on the executing chain closes if the channel exists, the channel is not already closed,
+the connection the channel exists upon is OPEN, and the executing chain successfully verifies
+that the counterparty channel has been closed.
+
+### [Packets](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Modules communicate with each other by sending packets over IBC channels. All
+IBC packets contain the destination `portID` and `channelID` along with the source `portID` and
+`channelID`. This packet structure allows modules to know the sender module of a given packet. IBC packets
+contain a sequence to optionally enforce ordering.
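+
+Putting these fields together, an IBC packet has roughly the following shape (modeled on `channeltypes.Packet`; shown for illustration, including the timeout fields discussed next):
+
+```go
+// Height identifies a block height across revisions, as reproduced earlier.
+type Height struct {
+	RevisionNumber uint64
+	RevisionHeight uint64
+}
+
+// Packet mirrors the shape of an IBC packet: routing metadata, opaque
+// application data, a sequence for ordering, and timeout fields.
+type Packet struct {
+	Sequence           uint64 // enforces ordering on ORDERED channels
+	SourcePort         string
+	SourceChannel      string
+	DestinationPort    string
+	DestinationChannel string
+	Data               []byte // opaque, application-specific bytes
+	TimeoutHeight      Height
+	TimeoutTimestamp   uint64
+}
+```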
+
+IBC packets also contain a `TimeoutHeight` and a `TimeoutTimestamp` that determine the deadline by which the receiving module must process a packet.
+
+Modules send custom application data to each other inside the `Data []byte` field of the IBC packet.
+Thus, packet data is opaque to IBC handlers. It is incumbent on a sender module to encode
+their application-specific packet information into the `Data` field of packets. The receiver
+module must decode that `Data` back to the original application data.
+
+### [Receipts and Timeouts](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Since IBC works over a distributed network and relies on potentially faulty relayers to relay messages between ledgers,
+IBC must handle the case where a packet does not get sent to its destination in a timely manner or at all. Packets must
+specify a non-zero value for timeout height (`TimeoutHeight`) or timeout timestamp (`TimeoutTimestamp`) after which a packet can no longer be successfully received on the destination chain.
+
+- The `timeoutHeight` indicates a consensus height on the destination chain after which the packet will no longer be processed, and instead counts as having timed-out.
+- The `timeoutTimestamp` indicates a timestamp on the destination chain after which the packet will no longer be processed, and instead counts as having timed-out.
+
+If the timeout passes without the packet being successfully received, the packet can no longer be
+received on the destination chain. The sending module can timeout the packet and take appropriate actions.
+
+If the timeout is reached, then a proof of packet timeout can be submitted to the original chain. The original chain can then perform
+application-specific logic to timeout the packet, perhaps by rolling back the packet send changes (refunding senders any locked funds, etc.).
+
+- In ORDERED channels, a timeout of a single packet in the channel causes the channel to close.
+
+  - If packet sequence `n` times out, then a packet at sequence `k > n` cannot be received without violating the contract of ORDERED channels that packets are processed in the order that they are sent.
+  - Since ORDERED channels enforce this invariant, a proof that sequence `n` has not been received on the destination chain by the specified timeout of packet `n` is sufficient to timeout packet `n` and close the channel.
+
+- In UNORDERED channels, the application-specific timeout logic for that packet is applied and the channel is not closed.
+
+  - Packets can be received in any order.
+
+  - IBC writes a packet receipt for each sequence received in the UNORDERED channel. This receipt does not contain information; it is simply a marker intended to signify that the UNORDERED channel has received a packet at the specified sequence.
+
+  - To timeout a packet on an UNORDERED channel, a proof is required that a packet receipt **does not exist** for the packet's sequence by the specified timeout.
+
+For this reason, most modules should use UNORDERED channels as they require fewer liveness guarantees to function effectively for users of that channel.
+
+### [Acknowledgments](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Modules can also choose to write application-specific acknowledgments upon processing a packet. Acknowledgments can be written:
+
+- Synchronously on `OnRecvPacket` if the module processes packets as soon as they are received from the IBC module.
+- Asynchronously if the module processes packets at some later point after receiving the packet.
+
+This acknowledgment data is opaque to IBC much like the packet `Data` and is treated by IBC as a simple byte string `[]byte`. Receiver modules must encode their acknowledgment so that the sender module can decode it correctly. The encoding must be negotiated between the two parties during version negotiation in the channel handshake.
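+
+For illustration, a toy acknowledgment type mirroring the result/error shape of the default acknowledgment struct provided by core IBC (the real type is protobuf-defined; JSON is used here only as one possible encoding choice):
+
+```go
+import "encoding/json"
+
+// Acknowledgement carries either an application-defined success result or an
+// error string; exactly one of the two fields is expected to be set.
+type Acknowledgement struct {
+	Result []byte `json:"result,omitempty"`
+	Error  string `json:"error,omitempty"`
+}
+
+// Success reports whether packet processing succeeded.
+func (a Acknowledgement) Success() bool {
+	return a.Error == ""
+}
+
+// Encode marshals the acknowledgment into the opaque []byte handed to IBC.
+func (a Acknowledgement) Encode() ([]byte, error) {
+	return json.Marshal(a)
+}
+```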
+
+The acknowledgment can encode whether the packet processing succeeded or failed, along with additional information that allows the sender module to take appropriate action.
+
+After the acknowledgment has been written by the receiving chain, a relayer relays the acknowledgment back to the original sender module.
+
+The original sender module then executes application-specific acknowledgment logic using the contents of the acknowledgment.
+
+- After a failed acknowledgment, packet-send changes can be rolled back (for example, refunding senders in ICS20).
+
+- After a successful acknowledgment is received on the original sender chain, the corresponding packet commitment is deleted since it is no longer needed.
+
+## Further Readings and Specs
+
+If you want to learn more about IBC, check the following specifications:
+
+- [IBC specification overview](https://github.com/cosmos/ibc/blob/master/README.md)
diff --git a/docs/ibc/v4.6.x/ibc/proposals.mdx b/docs/ibc/v4.6.x/ibc/proposals.mdx
new file mode 100644
index 00000000..1e2d8d2f
--- /dev/null
+++ b/docs/ibc/v4.6.x/ibc/proposals.mdx
@@ -0,0 +1,86 @@
+---
+title: Governance Proposals
+---
+
+In uncommon situations, a highly valued client may become frozen due to uncontrollable
+circumstances. A highly valued client might have hundreds of channels being actively used.
+Some of those channels might have a significant amount of locked tokens used for ICS 20.
+
+If one third of the validator set of the chain the client represents decides to collude,
+they can sign off on two valid but conflicting headers each signed by the other one third
+of the honest validator set. The light client can now be updated with two valid, but conflicting
+headers at the same height. The light client cannot know which header is trustworthy, and therefore
+evidence of such misbehaviour is likely to be submitted, resulting in a frozen light client.
+
+Frozen light clients cannot be updated under any circumstance except via a governance proposal.
+Since a quorum of validators can sign arbitrary state roots which may not be valid executions
+of the state machine, a governance proposal has been added to ease the complexity of unfreezing
+or updating clients which have become "stuck". Without this mechanism, validator sets would need
+to construct a state root to unfreeze the client. Unfreezing clients re-enables all of the channels
+built upon that client. This may result in recovery of otherwise lost funds.
+
+Tendermint light clients may become expired if the trusting period has passed since their
+last update. This may occur if relayers stop submitting headers to update the clients.
+
+An unplanned upgrade by the counterparty chain may also result in expired clients. If the counterparty
+chain undergoes an unplanned upgrade, there may be no commitment to that upgrade signed by the validator
+set before the chain-id changes. In this situation, the validator set of the last valid update for the
+light client is never expected to produce another valid header since the chain-id has changed, which will
+ultimately lead the on-chain light client to become expired.
+
+In the case that a highly valued light client is frozen, expired, or rendered non-updateable, a
+governance proposal may be submitted to update this client, known as the subject client. The
+proposal includes the client identifier for the subject and the client identifier for a substitute
+client. Light client implementations may implement custom updating logic, but in most cases,
+the subject will be updated to the latest consensus state of the substitute client, if the proposal passes.
+The substitute client is used as a "stand-in" while the subject is on trial. It is best practice to create
+a substitute client *after* the subject has become frozen to prevent the substitute from also becoming frozen.
+An active substitute client allows headers to be submitted during the voting period to prevent accidental expiry
+once the proposal passes.
+
+## How to recover an expired client with a governance proposal
+
+See also the relevant documentation: [ADR-026, IBC client recovery mechanisms](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md)
+
+### Preconditions
+
+* The chain is updated with ibc-go `>=` v1.1.0.
+* The client identifier of an active client for the same counterparty chain.
+* The governance deposit.
+
+## Steps
+
+### Step 1
+
+Check if the client is attached to the expected `chain-id`. For example, for an expired Tendermint client representing the Akash chain, the client state looks like this on querying the client state:
+
+```text
+{
+  client_id: 07-tendermint-146
+  client_state:
+    '@type': /ibc.lightclients.tendermint.v1.ClientState
+    allow_update_after_expiry: true
+    allow_update_after_misbehaviour: true
+    chain_id: akashnet-2
+}
+```
+
+The client is attached to the expected Akash `chain-id`. Note that although the parameters (`allow_update_after_expiry` and `allow_update_after_misbehaviour`) exist to signal intent, these parameters have been deprecated and will not enforce any checks on the revival of the client. See ADR-026 for more context on this deprecation.
+
+### Step 2
+
+If the chain has been updated to ibc-go `>=` v1.1.0, anyone can submit the governance proposal to recover the client by executing this via cli:
+
+```bash
+<chain-binary> tx gov submit-proposal update-client <expired-client-id> <active-client-id>
+```
+
+The `<active-client-id>` should be a client identifier on the same chain as the expired or frozen client. This client should connect to the same counterparty chain as the expired or frozen client. This means: use the active client that is currently being used to relay packets between the two chains as the replacement client.
+
+After this, it is just a question of who funds the governance deposit and if the chain in question votes yes.
+
+## Important considerations
+
+Please note that from ibc-go v1.0.0 onwards, transactions to expired clients are no longer allowed, so please update to at least this version to prevent similar issues in the future.
+
+Please also note that if the client on the other end of the transaction is also expired, that client will also need to be updated. This process updates only one client.
diff --git a/docs/ibc/v4.6.x/ibc/proto-docs.mdx b/docs/ibc/v4.6.x/ibc/proto-docs.mdx
new file mode 100644
index 00000000..d6f7f3d0
--- /dev/null
+++ b/docs/ibc/v4.6.x/ibc/proto-docs.mdx
@@ -0,0 +1,6 @@
+---
+title: Protobuf Documentation
+description: See ibc-go v4.4.x Buf Protobuf documentation.
+---
+
+See [ibc-go v4.4.x Buf Protobuf documentation](https://github.com/cosmos/ibc-go/blob/release/v4.4.x/docs/ibc/proto-docs.md).
diff --git a/docs/ibc/v4.6.x/ibc/relayer.mdx b/docs/ibc/v4.6.x/ibc/relayer.mdx
new file mode 100644
index 00000000..0e612478
--- /dev/null
+++ b/docs/ibc/v4.6.x/ibc/relayer.mdx
@@ -0,0 +1,48 @@
+---
+title: Relayer
+---
+
+
+
+## Pre-requisite readings
+
+* [IBC Overview](/docs/ibc/v4.6.x/ibc/overview)
+* Events
+
+
+
+## Events
+
+Events are emitted for every transaction processed by the base application to indicate the execution
+of some logic clients may want to be aware of. This is extremely useful when relaying IBC packets.
+Any message that uses IBC will emit events for the corresponding TAO logic executed as defined in
+the [IBC events document]().
+
+In the SDK, it can be assumed that for every message there is an event emitted with the type `message`,
+attribute key `action`, and an attribute value representing the type of message sent
+(`channel_open_init` would be the attribute value for `MsgChannelOpenInit`). If a relayer queries
+for transaction events, it can split message events using this event Type/Attribute Key pair.
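+
+One way to sketch that splitting step (a toy helper over the flattened event map; the function name and pairing logic are illustrative, not a prescribed relayer API):
+
+```go
+// sendPacketSequences pairs each send_packet entry in message.action with the
+// corresponding entry of send_packet.packet_sequence, in occurrence order.
+func sendPacketSequences(events map[string][]string) []string {
+	var seqs []string
+	seen := 0
+	for _, action := range events["message.action"] {
+		if action != "send_packet" {
+			continue
+		}
+		if seen < len(events["send_packet.packet_sequence"]) {
+			seqs = append(seqs, events["send_packet.packet_sequence"][seen])
+		}
+		seen++
+	}
+	return seqs
+}
+```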
+
+The Event Type `message` with the Attribute Key `module` may be emitted multiple times for a single
+message due to application callbacks. It can be assumed that any TAO logic executed will result in
+a module event emission with the attribute value `ibc_<submodulename>` (02-client emits `ibc_client`).
+
+### Subscribing with Tendermint
+
+Calling the Tendermint RPC method `Subscribe` via Tendermint's Websocket will return events using
+Tendermint's internal representation of them. Instead of receiving back a list of events as they
+were emitted, Tendermint will return the type `map[string][]string`, which maps a string in the
+form `<event_type>.<attribute_key>` to `attribute_value`. This causes extraction of the event
+ordering to be non-trivial, but still possible.
+
+A relayer should use the `message.action` key to extract the number of messages in the transaction
+and the type of IBC transactions sent. For every IBC transaction within the string array for
+`message.action`, the necessary information should be extracted from the other event fields. If
+`send_packet` appears at index 2 in the value for `message.action`, a relayer will need to use the
+value at index 2 of the key `send_packet.packet_sequence`. This process should be repeated for each
+piece of information needed to relay a packet.
+
+## Example Implementations
+
+* [Golang Relayer](https://github.com/cosmos/relayer)
+* [Hermes](https://github.com/informalsystems/hermes)
diff --git a/docs/ibc/v4.6.x/ibc/roadmap.mdx b/docs/ibc/v4.6.x/ibc/roadmap.mdx
new file mode 100644
index 00000000..9797c46b
--- /dev/null
+++ b/docs/ibc/v4.6.x/ibc/roadmap.mdx
@@ -0,0 +1,57 @@
+---
+title: Roadmap
+description: 'Last update: March 31, 2022'
+---
+
+*Last update: March 31, 2022*
+
+This document endeavours to inform the wider IBC community about plans and priorities for work on ibc-go by the team at Interchain GmbH. It is intended to broadly inform all users of ibc-go, including developers and operators of IBC, relayer, chain and wallet applications.
+
+This roadmap should be read as a high-level guide, rather than a commitment to schedules and deliverables. The degree of specificity is inversely proportional to the timeline. We will update this document periodically to reflect the status and plans.
+
+## Q2 - 2022
+
+At a high level we will focus on:
+
+* Finishing the implementation of [relayer incentivisation](https://github.com/orgs/cosmos/projects/7/views/8).
+* Finishing the [refactoring of 02-client](https://github.com/cosmos/ibc-go/milestone/16).
+* Finishing the upgrade to Cosmos SDK v0.46 and Tendermint v0.35.
+* Implementing and testing the changes needed to support the [transition to SMT storage](https://github.com/cosmos/ibc-go/milestone/21) in the Cosmos SDK.
+* Designing the implementation and scoping the engineering work for [channel upgradability](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics/UPGRADES.md).
+* Improving the project's documentation and writing guides for [light client](https://github.com/cosmos/ibc-go/issues/59) and middleware implementation.
+* Working on [core backlog issues](https://github.com/cosmos/ibc-go/milestone/8).
+* Spending time on expanding and deepening our knowledge of IBC, but also other parts of the Cosmos stack.
+* And last, but not least, onboarding new members to the team.
+
+For a detailed view of each iteration's planned work, please check out our [project board](https://github.com/orgs/cosmos/projects/7).
+
+### Release schedule
+
+#### **April**
+
+In the first half of the month we will probably cut:
+
+* Alpha/beta pre-releases with the upgrade to SDK 0.46 and Tendermint v0.35.
+* [Alpha](https://github.com/cosmos/ibc-go/milestone/5) pre-release with the implementation of relayer incentivisation.
+
+In the second half, and depending on the date of the final release of Cosmos SDK 0.46, we will probably cut the final release with the upgrade to SDK 0.46 and Tendermint v0.35, and also a [beta](https://github.com/cosmos/ibc-go/milestone/23) pre-release with the implementation of relayer incentivisation.
+
+In the second half of the month we also plan to do a second internal audit of the implementation of relayer incentivisation, and issues will most likely be created from the audit. Depending on the nature and type of the issues we create, those would be released in a second beta pre-release or in a [release candidate](https://github.com/cosmos/ibc-go/milestone/24).
+
+#### **May**
+
+In the first half we will probably start cutting release candidates with relayer incentivisation and the 02-client refactor. The final release would most likely come out at the end of the month or beginning of June.
+
+#### **June**
+
+At the end of the month or beginning of Q3 we will probably cut patch or minor releases on all the supported release lines with the [small features and core improvements](https://github.com/cosmos/ibc-go/milestone/8) that we work on during the quarter.
+
+## Q3 - 2022
+
+We will most likely start the implementation of [channel upgradability](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics/UPGRADES.md). At the end of Q2 or maybe beginning of Q3 we might also work on designing the implementation and scoping the engineering work to add support for [ordered channels that can timeout](https://github.com/cosmos/ibc/pull/636), and we could potentially work on this feature also in Q3.
+
+We will also probably do an audit of the implementation of the [CCV application](https://github.com/cosmos/interchain-security/tree/main/x/ccv) for Interchain Security.
+
+### Release schedule
+
+In this quarter we will make the final release to support the migration to SMT storage.
diff --git a/docs/ibc/v4.6.x/ibc/upgrades/developer-guide.mdx b/docs/ibc/v4.6.x/ibc/upgrades/developer-guide.mdx
new file mode 100644
index 00000000..9e1cc251
--- /dev/null
+++ b/docs/ibc/v4.6.x/ibc/upgrades/developer-guide.mdx
@@ -0,0 +1,52 @@
+---
+title: IBC Client Developer Guide to Upgrades
+---
+
+## Synopsis
+
+Learn how to implement upgrade functionality for your custom IBC client.
+
+As mentioned in the [README](/docs/ibc/v4.6.x/intro), it is vital that high-value IBC clients can upgrade along with their underlying chains to avoid disruption to the IBC ecosystem. Thus, IBC client developers will want to implement upgrade functionality to enable clients to maintain connections and channels even across chain upgrades.
+
+The IBC protocol allows client implementations to provide a path to upgrading clients given the upgraded client state, upgraded consensus state and proofs for each.
+
+```go expandable
+// Upgrade functions
+// NOTE: proof heights are not included as upgrade to a new revision is expected to pass only on the last
+// height committed by the current revision. Clients are responsible for ensuring that the planned last
+// height of the current revision is somehow encoded in the proof verification process.
+// This is to ensure that no premature upgrades occur, since upgrade plans committed to by the counterparty
+// may be cancelled or modified before the last planned height.
+VerifyUpgradeAndUpdateState(
+    ctx sdk.Context,
+    cdc codec.BinaryCodec,
+    store sdk.KVStore,
+    newClient ClientState,
+    newConsState ConsensusState,
+    proofUpgradeClient,
+    proofUpgradeConsState []byte,
+) (upgradedClient ClientState, upgradedConsensus ConsensusState, err error)
+```
+
+Note that the clients should have prior knowledge of the merkle path that the upgraded client and upgraded consensus states will use. The height at which the upgrade has occurred should also be encoded in the proof.
The Tendermint client implementation accomplishes this by including an `UpgradePath` in the ClientState itself, which is used along with the upgrade height to construct the merkle path under which the client state and consensus state are committed. + +Developers must ensure that the `UpgradeClientMsg` does not pass until the last height of the old chain has been committed, and after the chain upgrades, the `UpgradeClientMsg` should pass once and only once on all counterparty clients. + +Developers must ensure that the new client adopts all of the new Client parameters that must be uniform across every valid light client of a chain (chain-chosen parameters), while maintaining the Client parameters that are customizable by each individual client (client-chosen parameters) from the previous version of the client. + +Upgrades must adhere to the IBC Security Model. IBC does not rely on the assumption of honest relayers for correctness. Thus users should not have to rely on relayers to maintain client correctness and security (though honest relayers must exist to maintain relayer liveness). While relayers may choose any set of client parameters while creating a new `ClientState`, this still holds under the security model since users can always choose a relayer-created client that suits their security and correctness needs or create a Client with their desired parameters if no such client exists. + +However, when upgrading an existing client, one must keep in mind that there are already many users who depend on this client's particular parameters. We cannot give the upgrading relayer free choice over these parameters once they have already been chosen. This would violate the security model since users who rely on the client would have to rely on the upgrading relayer to maintain the same level of security. 
Thus, developers must make sure that their upgrade mechanism allows clients to upgrade the chain-specified parameters whenever a chain upgrade changes these parameters (examples in the Tendermint client include `UnbondingPeriod`, `TrustingPeriod`, `ChainID`, `UpgradePath`, etc.), while ensuring that the relayer submitting the `UpgradeClientMsg` cannot alter the client-chosen parameters that the users are relying upon (examples in Tendermint client include `TrustLevel`, `MaxClockDrift`, etc). + +Developers should maintain the distinction between Client parameters that are uniform across every valid light client of a chain (chain-chosen parameters), and Client parameters that are customizable by each individual client (client-chosen parameters); since this distinction is necessary to implement the `ZeroCustomFields` method in the `ClientState` interface: + +```go +/ Utility function that zeroes out any client customizable fields in client state +/ Ledger enforced fields are maintained while all custom fields are zero values +/ Used to verify upgrades +ZeroCustomFields() + +ClientState +``` + +Counterparty clients can upgrade securely by using all of the chain-chosen parameters from the chain-committed `UpgradedClient` and preserving all of the old client-chosen parameters. This enables chains to securely upgrade without relying on an honest relayer, however it can in some cases lead to an invalid final `ClientState` if the new chain-chosen parameters clash with the old client-chosen parameter. This can happen in the Tendermint client case if the upgrading chain lowers the `UnbondingPeriod` (chain-chosen) to a duration below that of a counterparty client's `TrustingPeriod` (client-chosen). Such cases should be clearly documented by developers, so that chains know which upgrades should be avoided to prevent this problem. 
The final upgraded client should also be validated in `VerifyUpgradeAndUpdateState` before returning to ensure that the client does not upgrade to an invalid `ClientState`. diff --git a/docs/ibc/v4.6.x/ibc/upgrades/genesis-restart.mdx b/docs/ibc/v4.6.x/ibc/upgrades/genesis-restart.mdx new file mode 100644 index 00000000..76685e17 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/upgrades/genesis-restart.mdx @@ -0,0 +1,46 @@ +--- +title: Genesis Restart Upgrades +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients using genesis restarts. + +**NOTE**: Regular genesis restarts are currently unsupported by relayers! + +## IBC Client Breaking Upgrades + +IBC client breaking upgrades are possible using genesis restarts. +It is highly recommended to use the in-place migrations instead of a genesis restart. +Genesis restarts should be used sparingly and as backup plans. + +Genesis restarts still require the usage of an IBC upgrade proposal in order to correctly upgrade counterparty clients. + +### Step-by-Step Upgrade Process for SDK Chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the [IBC Client Breaking Upgrade List](/docs/ibc/v4.6.x/ibc/upgrades/quick-guide#ibc-client-breaking-upgrades) and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a 02-client [`UpgradeProposal`](https://github.com/cosmos/ibc-go/blob/v4.4.2/proto/ibc/core/client/v1/client.proto#L58-L77) with an `UpgradePlan` and a new IBC ClientState in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as TrustingPeriod). +2. Vote on and pass the `UpgradeProposal` +3. Halt the node after successful upgrade. +4. 
Export the genesis file. +5. Swap to the new binary. +6. Run migrations on the genesis file. +7. Remove the `UpgradeProposal` plan from the genesis file. This may be done by migrations. +8. Change desired chain-specific fields (chain ID, unbonding period, etc.). This may be done by migrations. +9. Reset the node's data. +10. Start the chain. + +Upon the `UpgradeProposal` passing, the upgrade module will commit the UpgradedClient under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +#### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +These steps are identical to the regular [IBC client breaking upgrade process](/docs/ibc/v4.6.x/ibc/upgrades/quick-guide#step-by-step-upgrade-process-for-relayers-upgrading-counterparty-clients). + +### Non-IBC Client Breaking Upgrades + +While ibc-go supports genesis restarts that do not break IBC clients, relayers do not support this upgrade path. +Here is a tracking issue on [Hermes](https://github.com/informalsystems/ibc-rs/issues/1152). +Please do not attempt a regular genesis restart unless you have a tool to update counterparty clients correctly. 
diff --git a/docs/ibc/v4.6.x/ibc/upgrades/intro.mdx b/docs/ibc/v4.6.x/ibc/upgrades/intro.mdx new file mode 100644 index 00000000..4153337f --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/upgrades/intro.mdx @@ -0,0 +1,15 @@ +--- +title: Upgrading IBC Chains Overview +description: >- + This directory contains information on how to upgrade an IBC chain without + breaking counterparty clients and connections. +--- + +### Upgrading IBC Chains Overview + +This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections. + +IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving. Many chain upgrades may be irrelevant to IBC, however some upgrades could potentially break counterparty clients if not handled correctly. Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client. + +1. The [quick-guide](/docs/ibc/v4.6.x/ibc/upgrades/quick-guide) describes how IBC-connected chains can perform client-breaking upgrades and how relayers can securely upgrade counterparty clients using the SDK. +2. The [developer-guide](/docs/ibc/v4.6.x/ibc/upgrades/developer-guide) is a guide for developers intending to develop IBC client implementations with upgrade functionality. diff --git a/docs/ibc/v4.6.x/ibc/upgrades/quick-guide.mdx b/docs/ibc/v4.6.x/ibc/upgrades/quick-guide.mdx new file mode 100644 index 00000000..eea09dd3 --- /dev/null +++ b/docs/ibc/v4.6.x/ibc/upgrades/quick-guide.mdx @@ -0,0 +1,54 @@ +--- +title: How to Upgrade IBC Chains and their Clients +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients. 
+ +The information in this doc for upgrading chains is relevant to SDK chains. However, the guide for counterparty clients is relevant to any Tendermint client that enables upgrades. + +## IBC Client Breaking Upgrades + +IBC-connected chains must perform an IBC upgrade if their upgrade will break counterparty IBC clients. The current IBC protocol supports upgrading Tendermint chains for a specific subset of IBC-client-breaking upgrades. Here is the exhaustive list of IBC client-breaking upgrades and whether the IBC protocol currently supports such upgrades. + +IBC currently does **NOT** support unplanned upgrades. All of the following upgrades must be planned and committed to in advance by the upgrading chain, in order for counterparty clients to maintain their connections securely. + +Note: Since upgrades are only implemented for Tendermint clients, this doc only discusses upgrades on Tendermint chains that would break counterparty IBC Tendermint Clients. + +1. Changing the Chain-ID: **Supported** +2. Changing the UnbondingPeriod: **Partially Supported**, chains may increase the unbonding period with no issues. However, decreasing the unbonding period may irreversibly break some counterparty clients. Thus, it is **not recommended** that chains reduce the unbonding period. +3. Changing the height (resetting to 0): **Supported**, so long as chains remember to increment the revision number in their chain-id. +4. Changing the ProofSpecs: **Supported**, this should be changed if the proof structure needed to verify IBC proofs is changed across the upgrade, e.g. switching from an IAVL store to a SimpleTree store. +5. Changing the UpgradePath: **Supported**, this might involve changing the key under which upgraded clients and consensus states are stored in the upgrade store, or even migrating the upgrade store itself. +6. Migrating the IBC store: **Unsupported**, as the IBC store location is negotiated by the connection. +7. 
Upgrading to a backwards compatible version of IBC: **Supported** +8. Upgrading to a non-backwards compatible version of IBC: **Unsupported**, as IBC version is negotiated on connection handshake. +9. Changing the Tendermint LightClient algorithm: **Partially Supported**. Changes to the light client algorithm that do not change the ClientState or ConsensusState struct may be supported, provided that the counterparty is also upgraded to support the new light client algorithm. Changes that require updating the ClientState and ConsensusState structs themselves are theoretically possible by providing a path to translate an older ClientState struct into the new ClientState struct; however, this is not currently implemented. + +### Step-by-Step Upgrade Process for SDK Chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the list above and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a 02-client [`UpgradeProposal`](https://github.com/cosmos/ibc-go/blob/v4.4.2/proto/ibc/core/client/v1/client.proto#L58-L77) with an `UpgradePlan` and a new IBC ClientState in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as TrustingPeriod). +2. Vote on and pass the `UpgradeProposal` + +Upon the `UpgradeProposal` passing, the upgrade module will commit the UpgradedClient under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. 
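As a small sketch of the key layout quoted above (the helper functions are illustrative, not part of the ibc-go API):

```go
package main

import "fmt"

// upgradedClientPath builds the upgrade-store path under which the chain
// commits the zeroed-out UpgradedClient for a planned upgrade height.
func upgradedClientPath(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedClient", upgradeHeight)
}

// upgradedConsStatePath builds the path for the initial consensus state of
// the post-upgrade chain, committed on the block right before the upgrade height.
func upgradedConsStatePath(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedConsState", upgradeHeight)
}

func main() {
	fmt.Println(upgradedClientPath(150))  // upgrade/UpgradedIBCState/150/upgradedClient
	fmt.Println(upgradedConsStatePath(150)) // upgrade/UpgradedIBCState/150/upgradedConsState
}
```

Relayers query proofs against exactly these height-parameterized paths when upgrading counterparty clients.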
+ +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +Once the upgrading chain has committed to upgrading, relayers must wait till the chain halts at the upgrade height before upgrading counterparty clients. This is because chains may reschedule or cancel upgrade plans before they occur. Thus, relayers must wait till the chain reaches the upgrade height and halts before they can be sure the upgrade will take place. + +Thus, the upgrade process for relayers trying to upgrade the counterparty clients is as follows: + +1. Wait for the upgrading chain to reach the upgrade height and halt +2. Query a full node for the proofs of `UpgradedClient` and `UpgradedConsensusState` at the last height of the old chain. +3. Update the counterparty client to the last height of the old chain using the `UpdateClient` msg. +4. Submit an `UpgradeClient` msg to the counterparty chain with the `UpgradedClient`, `UpgradedConsensusState` and their respective proofs. +5. Submit an `UpdateClient` msg to the counterparty chain with a header from the new upgraded chain. + +The Tendermint client on the counterparty chain will verify that the upgrading chain did indeed commit to the upgraded client and upgraded consensus state at the upgrade height (since the upgrade height is included in the key). If the proofs are verified against the upgrade height, then the client will upgrade to the new client while retaining all of its client-customized fields. Thus, it will retain its old TrustingPeriod, TrustLevel, MaxClockDrift, etc; while adopting the new chain-specified fields such as UnbondingPeriod, ChainId, UpgradePath, etc. 
Note, this can lead to an invalid client since the old client-chosen fields may no longer be valid given the new chain-chosen fields. Upgrading chains should try to avoid these situations by not altering parameters that can break old clients. For an example, see the UnbondingPeriod example in the supported upgrades section. + +The upgraded consensus state will serve purely as a basis of trust for future `UpdateClientMsgs` and will not contain a consensus root to perform proof verification against. Thus, relayers must submit an `UpdateClientMsg` with a header from the new chain so that the connection can be used for proof verification again. diff --git a/docs/ibc/v4.6.x/intro.mdx b/docs/ibc/v4.6.x/intro.mdx new file mode 100644 index 00000000..579a6862 --- /dev/null +++ b/docs/ibc/v4.6.x/intro.mdx @@ -0,0 +1,17 @@ +--- +title: IBC-Go Documentation +--- + + +This version of ibc-go is not supported anymore. Please upgrade to the latest version. + + +Welcome to the IBC-Go documentation! + +The Inter-Blockchain Communication protocol (IBC) is an end-to-end, connection-oriented, stateful protocol for reliable, ordered, and authenticated communication between heterogeneous blockchains arranged in an unknown and dynamic topology. + +IBC is a protocol that allows blockchains to talk to each other. + +The protocol realizes this interoperability by specifying a set of data structures, abstractions, and semantics that can be implemented by any distributed ledger that satisfies a small set of requirements. + +IBC can be used to build a wide range of cross-chain applications that include token transfers, atomic swaps, multi-chain smart contracts (with or without mutually comprehensible VMs), and data and code sharding of various kinds. 
diff --git a/docs/ibc/v4.6.x/middleware/ics29-fee/end-users.mdx b/docs/ibc/v4.6.x/middleware/ics29-fee/end-users.mdx new file mode 100644 index 00000000..1c7a071a --- /dev/null +++ b/docs/ibc/v4.6.x/middleware/ics29-fee/end-users.mdx @@ -0,0 +1,30 @@ +--- +title: End Users +--- + +## Synopsis + +Learn how to incentivize IBC packets using the ICS29 Fee Middleware module. + +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v4.6.x/middleware/ics29-fee/overview) + +## Summary + +Different types of end users: + +- CLI users who want to manually incentivize IBC packets +- Client developers + +The Fee Middleware module allows end users to add a 'tip' to each IBC packet which will incentivize relayer operators to relay packets between chains. gRPC endpoints are exposed for client developers as well as a simple CLI for manually incentivizing IBC packets. + +## CLI Users + +For an in depth guide on how to use the ICS29 Fee Middleware module using the CLI please take a look at the [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers#asynchronous-incentivization-of-a-fungible-token-transfer) on the `ibc-go` repo. + +## Client developers + +Client developers can read more about the relevant ICS29 message types in the [Fee messages section](/docs/ibc/v4.6.x/middleware/ics29-fee/msgs). + +[CosmJS](https://github.com/cosmos/cosmjs) is a useful client library for signing and broadcasting Cosmos SDK messages. 
diff --git a/docs/ibc/v4.6.x/middleware/ics29-fee/events.mdx b/docs/ibc/v4.6.x/middleware/ics29-fee/events.mdx new file mode 100644 index 00000000..7d6dfc42 --- /dev/null +++ b/docs/ibc/v4.6.x/middleware/ics29-fee/events.mdx @@ -0,0 +1,37 @@ +--- +title: Events +--- + +## Synopsis + +An overview of all events related to ICS-29 + +## `MsgPayPacketFee`, `MsgPayPacketFeeAsync` + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| incentivized_ibc_packet | port_id | `{portID}` | +| incentivized_ibc_packet | channel_id | `{channelID}` | +| incentivized_ibc_packet | packet_sequence | `{sequence}` | +| incentivized_ibc_packet | recv_fee | `{recvFee}` | +| incentivized_ibc_packet | ack_fee | `{ackFee}` | +| incentivized_ibc_packet | timeout_fee | `{timeoutFee}` | +| message | module | fee-ibc | + +## `RegisterPayee` + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| register_payee | relayer | `{relayer}` | +| register_payee | payee | `{payee}` | +| register_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | + +## `RegisterCounterpartyPayee` + +| Type | Attribute Key | Attribute Value | +| --------------------------- | ------------------ | --------------------- | +| register_counterparty_payee | relayer | `{relayer}` | +| register_counterparty_payee | counterparty_payee | `{counterpartyPayee}` | +| register_counterparty_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | diff --git a/docs/ibc/v4.6.x/middleware/ics29-fee/fee-distribution.mdx b/docs/ibc/v4.6.x/middleware/ics29-fee/fee-distribution.mdx new file mode 100644 index 00000000..acc75da8 --- /dev/null +++ b/docs/ibc/v4.6.x/middleware/ics29-fee/fee-distribution.mdx @@ -0,0 +1,108 @@ +--- +title: Fee Distribution +--- + +## Synopsis + +Learn about payee registration for the distribution of packet fees. The following document is intended for relayer operators. 
+ +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v4.6.x/middleware/ics29-fee/overview) + +Packet fees are divided into 3 distinct amounts in order to compensate relayer operators for packet relaying on fee-enabled IBC channels. + +- `RecvFee`: The sum of all packet receive fees distributed to a payee for successful execution of `MsgRecvPacket`. +- `AckFee`: The sum of all packet acknowledgement fees distributed to a payee for successful execution of `MsgAcknowledgement`. +- `TimeoutFee`: The sum of all packet timeout fees distributed to a payee for successful execution of `MsgTimeout`. + +## Register a counterparty payee address for forward relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v4.6.x/middleware/ics29-fee/overview#concepts), the forward relayer describes the actor who performs the submission of `MsgRecvPacket` on the destination chain. +Fee distribution for incentivized packet relays takes place on the packet source chain. + +> Relayer operators are expected to register a counterparty payee address, in order to be compensated accordingly with `RecvFee`s upon completion of a packet lifecycle. + +The counterparty payee address registered on the destination chain is encoded into the packet acknowledgement and communicated as such to the source chain for fee distribution. +**If a counterparty payee is not registered for the forward relayer on the destination chain, the escrowed fees will be refunded upon fee distribution.** + +### Relayer operator actions + +A transaction must be submitted **to the destination chain** including a `CounterpartyPayee` address of an account on the source chain. +The transaction must be signed by the `Relayer`. + +Note: If a module account address is used as the `CounterpartyPayee` but the module has been set as a blocked address in the `BankKeeper`, the refunding to the module account will fail. 
This is because many modules use invariants to compare internal tracking of module account balances against the actual balance of the account stored in the `BankKeeper`. If a token transfer to the module account occurs without going through this module and updating the account balance of the module on the `BankKeeper`, then invariants may break and unknown behaviour could occur depending on the module implementation. Therefore, if it is desirable to use a module account that is currently blocked, the module developers should be consulted to gauge the possibility of removing the module account from the blocked list. + +```go +type MsgRegisterCounterpartyPayee struct { + / unique port identifier + PortId string + / unique channel identifier + ChannelId string + / the relayer address + Relayer string + / the counterparty payee address + CounterpartyPayee string +} +``` + +> This message is expected to fail if: +> +> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `Relayer` is an invalid address. +> - `CounterpartyPayee` is empty. + +See below for an example CLI command: + +```bash +simd tx ibc-fee register-counterparty-payee transfer channel-0 \ +cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \ +osmo1v5y0tz01llxzf4c2afml8s3awue0ymju22wxx2 \ +--from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh +``` + +## Register an alternative payee address for reverse and timeout relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v4.6.x/middleware/ics29-fee/overview#concepts), the reverse relayer describes the actor who performs the submission of `MsgAcknowledgement` on the source chain. 
+Similarly, the timeout relayer describes the actor who performs the submission of `MsgTimeout` (or `MsgTimeoutOnClose`) on the source chain. + +> Relayer operators **may choose** to register an optional payee address, in order to be compensated accordingly with `AckFee`s and `TimeoutFee`s upon completion of a packet lifecycle. + +If a payee is not registered for the reverse or timeout relayer on the source chain, then fee distribution assumes the default behaviour, where fees are paid out to the relayer account which delivers `MsgAcknowledgement` or `MsgTimeout`/`MsgTimeoutOnClose`. + +### Relayer operator actions + +A transaction must be submitted **to the source chain** including a `Payee` address of an account on the source chain. +The transaction must be signed by the `Relayer`. + +Note: If a module account address is used as the `Payee`, it is recommended to [turn off invariant checks](https://github.com/cosmos/ibc-go/blob/71d7480c923f4227453e8a80f51be01ae7ee845e/testing/simapp/app.go#L659) for that module. + +```go +type MsgRegisterPayee struct { + / unique port identifier + PortId string + / unique channel identifier + ChannelId string + / the relayer address + Relayer string + / the payee address + Payee string +} +``` + +> This message is expected to fail if: +> +> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `Relayer` is an invalid address. +> - `Payee` is an invalid address. 
+ +See below for an example CLI command: + +```bash +simd tx ibc-fee register-payee transfer channel-0 \ +cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \ +cosmos153lf4zntqt33a4v0sm5cytrxyqn78q7kz8j8x5 \ +--from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh +``` diff --git a/docs/ibc/v4.6.x/middleware/ics29-fee/integration.mdx b/docs/ibc/v4.6.x/middleware/ics29-fee/integration.mdx new file mode 100644 index 00000000..814c9b2b --- /dev/null +++ b/docs/ibc/v4.6.x/middleware/ics29-fee/integration.mdx @@ -0,0 +1,174 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to configure the Fee Middleware module with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies for Cosmos SDK chains. + +## Pre-requisite Readings + +- [IBC middleware development](/docs/ibc/v4.6.x/ibc/middleware/develop) +- [IBC middleware integration](/docs/ibc/v4.6.x/ibc/middleware/integration) + +The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly. +For Cosmos SDK chains this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application. + +## Example integration of the Fee Middleware module + +```go expandable +/ app.go + +/ Register the AppModule for the fee middleware module +ModuleBasics = module.NewBasicManager( + ... + ibcfee.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the fee middleware module +maccPerms = map[string][]string{ + ... + ibcfeetypes.ModuleName: nil, +} + +... + +/ Add fee middleware Keeper +type App struct { + ... + + IBCFeeKeeper ibcfeekeeper.Keeper + + ... +} + +... + +/ Create store keys + keys := sdk.NewKVStoreKeys( + ... + ibcfeetypes.StoreKey, + ... +) + +... 
+ +app.IBCFeeKeeper = ibcfeekeeper.NewKeeper( + appCodec, keys[ibcfeetypes.StoreKey], + app.IBCKeeper.ChannelKeeper, / may be replaced with IBC middleware + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, app.AccountKeeper, app.BankKeeper, +) + +/ See the section below for configuring an application stack with the fee middleware module + +... + +/ Register fee middleware AppModule +app.moduleManager = module.NewManager( + ... + ibcfee.NewAppModule(app.IBCFeeKeeper), +) + +... + +/ Add fee middleware to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +/ Add fee middleware to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +/ Add fee middleware to init genesis logic +app.moduleManager.SetOrderInitGenesis( + ... + ibcfeetypes.ModuleName, + ... +) +``` + +## Configuring an application stack with Fee Middleware + +As mentioned in [IBC middleware development](/docs/ibc/v4.6.x/ibc/middleware/develop) an application stack may be composed of many or no middlewares that nest a base application. +These layers form the complete set of application logic that enable developers to build composable and flexible IBC application stacks. +For example, an application stack may be just a single base application like `transfer`, however, the same application stack composed with `29-fee` will nest the `transfer` base application +by wrapping it with the Fee Middleware module. + +### Transfer + +See below for an example of how to create an application stack using `transfer` and `29-fee`. +The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`. +The in-line comments describe the execution flow of packets between the application stack and IBC core. 
+ +```go expandable +/ Create Transfer Stack +/ SendPacket, since it is originating from the application to core IBC: +/ transferKeeper.SendPacket -> fee.SendPacket -> channel.SendPacket + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way +/ channel.RecvPacket -> fee.OnRecvPacket -> transfer.OnRecvPacket + +/ transfer stack contains (from top to bottom): +/ - IBC Fee Middleware +/ - Transfer + +/ create IBC module from bottom to top of stack +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### Interchain Accounts + +See below for an example of how to create an application stack using `27-interchain-accounts` and `29-fee`. +The following `icaControllerStack` and `icaHostStack` are configured in `app/app.go` and added to the IBC `Router` with the associated authentication module. +The in-line comments describe the execution flow of packets between the application stack and IBC core. 
+ +```go expandable +/ Create Interchain Accounts Stack +/ SendPacket, since it is originating from the application to core IBC: +/ icaAuthModuleKeeper.SendTx -> icaController.SendPacket -> fee.SendPacket -> channel.SendPacket + +/ initialize ICA module with mock module as the authentication module on the controller side +var icaControllerStack porttypes.IBCModule +icaControllerStack = ibcmock.NewIBCModule(&mockModule, ibcmock.NewMockIBCApp("", scopedICAMockKeeper)) + +app.ICAAuthModule = icaControllerStack.(ibcmock.IBCModule) + +icaControllerStack = icacontroller.NewIBCMiddleware(icaControllerStack, app.ICAControllerKeeper) + +icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper) + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is: +/ channel.RecvPacket -> fee.OnRecvPacket -> icaHost.OnRecvPacket + +var icaHostStack porttypes.IBCModule +icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper) + +icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper) + +/ Add authentication module, controller and host to IBC router +ibcRouter. + / the ICA Controller middleware needs to be explicitly added to the IBC Router because the + / ICA controller module owns the port capability for ICA. The ICA authentication module + / owns the channel capability. + AddRoute(ibcmock.ModuleName+icacontrollertypes.SubModuleName, icaControllerStack). / ica with mock auth module stack route to ica (top level of middleware stack) + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostStack) 
```

diff --git a/docs/ibc/v4.6.x/middleware/ics29-fee/msgs.mdx b/docs/ibc/v4.6.x/middleware/ics29-fee/msgs.mdx
new file mode 100644
index 00000000..8aea528d
--- /dev/null
+++ b/docs/ibc/v4.6.x/middleware/ics29-fee/msgs.mdx
---
title: Fee Messages
---

## Synopsis

Learn about the different ways to pay for fees, how the fees are paid out, and what happens when not enough escrowed fees are available for payout.

## Escrowing fees

The fee middleware module exposes two different ways to pay fees for relaying IBC packets:

1. `MsgPayPacketFee`, which enables the escrowing of fees for a packet at the next sequence send and should be combined into one `MultiMsgTx` with the message that will be paid for.

   Note that the `Relayers` field has been set up to allow for an optional whitelist of relayers permitted to receive this fee; however, this feature has not yet been enabled.

   ```go expandable
   type MsgPayPacketFee struct {
     // fee encapsulates the recv, ack and timeout fees associated with an IBC packet
     Fee Fee
     // the source port unique identifier
     SourcePortId string
     // the source channel unique identifier
     SourceChannelId string
     // account address to refund fee if necessary
     Signer string
     // optional list of relayers permitted to receive the packet fee
     Relayers []string
   }
   ```

   The `Fee` message contained in this synchronous fee payment method configures different fees which will be paid out for `MsgRecvPacket`, `MsgAcknowledgement`, and `MsgTimeout`/`MsgTimeoutOnClose`.

   ```go
   type Fee struct {
     RecvFee    types.Coins
     AckFee     types.Coins
     TimeoutFee types.Coins
   }
   ```

The diagram below shows the `MultiMsgTx` with the `MsgTransfer` coming from a token transfer message, along with `MsgPayPacketFee`.

![msgpaypacket.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/msgpaypacket.png)

2.
`MsgPayPacketFeeAsync`, which enables the asynchronous escrowing of fees for a specified packet:

   Note that a packet can be 'topped up' multiple times with additional fees of any coin denomination by broadcasting multiple `MsgPayPacketFeeAsync` messages.

   ```go
   type MsgPayPacketFeeAsync struct {
     // unique packet identifier comprised of the channel ID, port ID and sequence
     PacketId channeltypes.PacketId
     // the packet fee associated with a particular IBC packet
     PacketFee PacketFee
   }
   ```

   where the `PacketFee` also specifies the `Fee` to be paid as well as the refund address for fees which are not paid out

   ```go
   type PacketFee struct {
     Fee           Fee
     RefundAddress string
     Relayers      []string
   }
   ```

The diagram below shows how multiple `MsgPayPacketFeeAsync` can be broadcast asynchronously. Escrowing of the fee associated with a packet can be carried out by any party because ICS-29 does not dictate a particular fee payer. In fact, chains can choose to simply not expose this fee payment to end users at all and rely on a different module account or even the community pool as the source of relayer incentives.

![paypacketfeeasync.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/paypacketfeeasync.png)

Please see our [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers) for example flows on how to use these messages to incentivise a token transfer channel using a CLI.

## Paying out the escrowed fees

The following diagram examines the packet flow for an incentivized token transfer and the various scenarios for paying out the escrowed fees. We assume that the relayers have registered their counterparty address, detailed in the [Fee distribution section](/docs/ibc/v4.6.x/middleware/ics29-fee/fee-distribution).
![feeflow.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/feeflow.png)

- In the case of a successful transaction, `RecvFee` will be paid out to the designated counterparty payee address which has been registered on the receiver chain and sent back with the `MsgAcknowledgement`, `AckFee` will be paid out to the relayer address which has submitted the `MsgAcknowledgement` on the sending chain (or the registered payee in case one has been registered for the relayer address), and `TimeoutFee` will be reimbursed to the account which escrowed the fee.
- In the case of a timeout transaction, `RecvFee` and `AckFee` will be reimbursed. The `TimeoutFee` will be paid to the `Timeout Relayer` (who submits the timeout message to the source chain).

> Please note that fee payments are built on the assumption that sender chains are the source of incentives: the chain that sends the packets is the same chain where fee payments will occur. Please see the [Fee distribution section](/docs/ibc/v4.6.x/middleware/ics29-fee/fee-distribution) to understand the flow for registering payee and counterparty payee (fee receiving) addresses.

## A locked fee middleware module

The fee middleware module can become locked if the situation arises that the escrow account for the fees does not have sufficient funds to pay out the fees which have been escrowed for each packet. _This situation indicates a severe bug._ In this case, the fee module will be locked until manual intervention fixes the issue.

> A locked fee module will simply skip fee logic and continue on to the underlying packet flow. A channel with a locked fee module will temporarily function as a fee disabled channel, and the locking of a fee module will not affect the continued flow of packets over the channel.
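The payout rules above can be sketched with simplified stand-in types. Note these are illustrative assumptions, not the actual ibc-go types: the `Fee` fields mirror the `Fee` struct shown earlier but use plain integers instead of `sdk.Coins`, and the addresses are hypothetical.

```go
package main

import "fmt"

// Fee mirrors the three fee components escrowed per packet
// (simplified to plain integers instead of sdk.Coins).
type Fee struct {
	RecvFee, AckFee, TimeoutFee int64
}

// Payout maps an address to the amount it receives once the packet
// lifecycle completes.
type Payout map[string]int64

// settleOnAck models the successful-acknowledgement case: RecvFee goes to
// the counterparty payee registered on the receiver chain, AckFee to the
// reverse relayer, and the TimeoutFee is refunded to the fee payer.
func settleOnAck(f Fee, counterpartyPayee, reverseRelayer, refundAddr string) Payout {
	return Payout{
		counterpartyPayee: f.RecvFee,
		reverseRelayer:    f.AckFee,
		refundAddr:        f.TimeoutFee,
	}
}

// settleOnTimeout models the timeout case: TimeoutFee goes to the timeout
// relayer, while RecvFee and AckFee are refunded.
func settleOnTimeout(f Fee, timeoutRelayer, refundAddr string) Payout {
	return Payout{
		timeoutRelayer: f.TimeoutFee,
		refundAddr:     f.RecvFee + f.AckFee,
	}
}

func main() {
	fee := Fee{RecvFee: 50, AckFee: 30, TimeoutFee: 20}
	fmt.Println(settleOnAck(fee, "payeeB", "relayerA", "alice"))     // map[alice:20 payeeB:50 relayerA:30]
	fmt.Println(settleOnTimeout(fee, "relayerT", "alice"))           // map[alice:80 relayerT:20]
}
```

In both branches every escrowed coin is either paid out or refunded, which is exactly the invariant whose violation locks the fee module as described above.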
diff --git a/docs/ibc/v4.6.x/middleware/ics29-fee/overview.mdx b/docs/ibc/v4.6.x/middleware/ics29-fee/overview.mdx new file mode 100644 index 00000000..0ceed671 --- /dev/null +++ b/docs/ibc/v4.6.x/middleware/ics29-fee/overview.mdx @@ -0,0 +1,49 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the Fee Middleware module is, and how to build custom modules that utilize the Fee Middleware functionality + +## What is the Fee Middleware module? + +IBC does not depend on relayer operators for transaction verification. However, the relayer infrastructure ensures liveness of the Interchain network — operators listen for packets sent through channels opened between chains, and perform the vital service of ferrying these packets (and proof of the transaction on the sending chain/receipt on the receiving chain) to the clients on each side of the channel. + +Though relaying is permissionless and completely decentralized and accessible, it does come with operational costs. Running full nodes to query transaction proofs and paying for transaction fees associated with IBC packets are two of the primary cost burdens which have driven the overall discussion on **a general, in-protocol incentivization mechanism for relayers**. + +Initially, a [simple proposal](https://github.com/cosmos/ibc/pull/577/files) was created to incentivize relaying on ICS20 token transfers on the destination chain. However, the proposal was specific to ICS20 token transfers and would have to be reimplemented in this format on every other IBC application module. + +After much discussion, the proposal was expanded to a [general incentivisation design](https://github.com/cosmos/ibc/tree/master/spec/app/ics-029-fee-payment) that can be adopted by any ICS application protocol as [middleware](/docs/ibc/v4.6.x/ibc/middleware/develop). 
+ +## Concepts + +ICS29 fee payments in this middleware design are built on the assumption that sender chains are the source of incentives — the chain on which packets are incentivized is the chain that distributes fees to relayer operators. However, as part of the IBC packet flow, messages have to be submitted on both sender and destination chains. This introduces the requirement of a mapping of relayer operator's addresses on both chains. + +To achieve the stated requirements, the **fee middleware module has two main groups of functionality**: + +- Registering of relayer addresses associated with each party involved in relaying the packet on the source chain. This registration process can be automated on start up of relayer infrastructure and happens only once, not every packet flow. + + This is described in the [Fee distribution section](/docs/ibc/v4.6.x/middleware/ics29-fee/fee-distribution). + +- Escrowing fees by any party which will be paid out to each rightful party on completion of the packet lifecycle. + + This is described in the [Fee messages section](/docs/ibc/v4.6.x/middleware/ics29-fee/msgs). + +We complete the introduction by giving a list of definitions of relevant terminology. + +`Forward relayer`: The relayer that submits the `MsgRecvPacket` message for a given packet (on the destination chain). + +`Reverse relayer`: The relayer that submits the `MsgAcknowledgement` message for a given packet (on the source chain). + +`Timeout relayer`: The relayer that submits the `MsgTimeout` or `MsgTimeoutOnClose` messages for a given packet (on the source chain). + +`Payee`: The account address on the source chain to be paid on completion of the packet lifecycle. The packet lifecycle on the source chain completes with the receipt of a `MsgTimeout`/`MsgTimeoutOnClose` or a `MsgAcknowledgement`. + +`Counterparty payee`: The account address to be paid on completion of the packet lifecycle on the destination chain. 
The packet lifecycle on the destination chain completes with a successful `MsgRecvPacket`.

`Refund address`: The address of the account paying for the incentivization of packet relaying. The account is refunded timeout fees upon successful acknowledgement. In the event of a packet timeout, both acknowledgement and receive fees are refunded.

## Known Limitations

The first version of fee payments middleware will only support incentivisation of new channels; however, channel upgradeability will enable incentivisation of all existing channels.

diff --git a/docs/ibc/v4.6.x/migrations/sdk-to-v1.mdx b/docs/ibc/v4.6.x/migrations/sdk-to-v1.mdx
new file mode 100644
index 00000000..61334c5e
--- /dev/null
+++ b/docs/ibc/v4.6.x/migrations/sdk-to-v1.mdx
---
title: SDK v0.43 to IBC-Go v1
description: >-
  This file contains information on how to migrate from the IBC module contained
  in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository
  based on the 0.44 SDK version.
---

This file contains information on how to migrate from the IBC module contained in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository based on the 0.44 SDK version.

## Import Changes

The most obvious change is the renaming of import paths. We need to change:

* applications -> apps
* cosmos-sdk/x/ibc -> ibc-go

On my GNU/Linux based machine I used the following commands, executed in order:

```bash
grep -RiIl 'cosmos-sdk\/x\/ibc\/applications' | xargs sed -i 's/cosmos-sdk\/x\/ibc\/applications/ibc-go\/modules\/apps/g'
```

```bash
grep -RiIl 'cosmos-sdk\/x\/ibc' | xargs sed -i 's/cosmos-sdk\/x\/ibc/ibc-go\/modules/g'
```

ref: [explanation of the above commands](https://www.internalpointers.com/post/linux-find-and-replace-text-multiple-files)

Executing these commands out of order will cause issues.

Feel free to use your own method for modifying import names.
NOTE: Updating to the `v0.44.0` SDK release and then running `go mod tidy` will cause a downgrade to `v0.42.0` in order to support the old IBC import paths.
Update the import paths before running `go mod tidy`.

## Chain Upgrades

Chains may choose to upgrade via an upgrade proposal or genesis upgrades. Both in-place store migrations and genesis migrations are supported.

**WARNING**: Please read at least the quick guide for [IBC client upgrades](/docs/ibc/v4.6.x/ibc/upgrades/intro) before upgrading your chain. It is highly recommended you do not change the chain-ID during an upgrade, otherwise you must follow the IBC client upgrade instructions.

Both in-place store migrations and genesis migrations will:

* migrate the solo machine client state from v1 to v2 protobuf definitions
* prune all solo machine consensus states
* prune all expired tendermint consensus states

Chains must set a new connection parameter during either in-place store migrations or genesis migration. The new parameter, max expected block time, is used to enforce packet processing delays on the receiving end of an IBC packet flow. Check out the [docs](https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2) for more information.

### In-Place Store Migrations

The new chain binary will need to run migrations in the upgrade handler. The fromVM (previous module version) for the IBC module should be 1. This will allow migrations to be run for IBC, updating the version from 1 to 2.

Ex:

```go expandable
app.UpgradeKeeper.SetUpgradeHandler("my-upgrade-proposal",
    func(ctx sdk.Context, _ upgradetypes.Plan, _ module.VersionMap) (module.VersionMap, error) {
        // set max expected block time parameter. Replace the default with your expected value
        // https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2
        app.IBCKeeper.ConnectionKeeper.SetParams(ctx, ibcconnectiontypes.DefaultParams())

        fromVM := map[string]uint64{
            ... // other modules
            "ibc": 1,
            ...
        }

        return app.mm.RunMigrations(ctx, app.configurator, fromVM)
    })
```

### Genesis Migrations

To perform genesis migrations, the following code must be added to your existing migration code.

```go expandable
// add imports as necessary
import (
    ibcv100 "github.com/cosmos/ibc-go/modules/core/legacy/v100"
    ibchost "github.com/cosmos/ibc-go/modules/core/24-host"
)

...

// add in migrate cmd function
// expectedTimePerBlock is a new connection parameter
// https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2
newGenState, err = ibcv100.MigrateGenesis(newGenState, clientCtx, *genDoc, expectedTimePerBlock)
if err != nil {
    return err
}
```

**NOTE:** The genesis chain-id, time and height MUST be updated before migrating IBC, otherwise the tendermint consensus state will not be pruned.

## IBC Keeper Changes

The IBC Keeper now takes in the Upgrade Keeper. Please add the chain's Upgrade Keeper after the Staking Keeper:

```diff
  // Create IBC Keeper
  app.IBCKeeper = ibckeeper.NewKeeper(
-   appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, scopedIBCKeeper,
+   appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper,
  )
```

## Proposals

### UpdateClientProposal

The `UpdateClient` proposal has been modified to take in two client identifiers and one initial height. Please see the [documentation](/docs/ibc/v4.6.x/ibc/proposals) for more information.

### UpgradeProposal

A new IBC proposal type has been added, `UpgradeProposal`. This handles an IBC (breaking) upgrade.
The previous `UpgradedClientState` field in an Upgrade `Plan` has been deprecated in favor of this new proposal type.

### Proposal Handler Registration

The `ClientUpdateProposalHandler` has been renamed to `ClientProposalHandler`.
It handles both `UpdateClientProposal`s and `UpgradeProposal`s.

Add this import:

```diff
+ ibcclienttypes "github.com/cosmos/ibc-go/modules/core/02-client/types"
```

Please ensure the governance module adds the correct route:

```diff
- AddRoute(ibchost.RouterKey, ibcclient.NewClientUpdateProposalHandler(app.IBCKeeper.ClientKeeper))
+ AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper))
```

NOTE: Simapp registration was incorrect in the 0.41.x releases. The `UpdateClient` proposal handler should be registered with the router key belonging to `ibc-go/core/02-client/types`
as shown in the diffs above.

### Proposal CLI Registration

Please ensure both proposal type CLI commands are registered on the governance module by adding the following arguments to `gov.NewAppModuleBasic()`:

Add the following import:

```diff
+ ibcclientclient "github.com/cosmos/ibc-go/modules/core/02-client/client"
```

Register the cli commands:

```diff
  gov.NewAppModuleBasic(
    paramsclient.ProposalHandler, distrclient.ProposalHandler, upgradeclient.ProposalHandler, upgradeclient.CancelProposalHandler,
+   ibcclientclient.UpdateClientProposalHandler, ibcclientclient.UpgradeProposalHandler,
  ),
```

REST routes are not supported for these proposals.

## Proto file changes

The gRPC querier service endpoints have changed slightly. The previous files used the `v1beta1` gRPC route; this has been updated to `v1`.

The solo machine has replaced the `FrozenSequence` uint64 field with an `IsFrozen` boolean field. The package has been bumped from `v1` to `v2`.

## IBC callback changes

### OnRecvPacket

Application developers need to update their `OnRecvPacket` callback logic.
+ +The `OnRecvPacket` callback has been modified to only return the acknowledgement. The acknowledgement returned must implement the `Acknowledgement` interface. The acknowledgement should indicate if it represents a successful processing of a packet by returning true on `Success()` and false in all other cases. A return value of false on `Success()` will result in all state changes which occurred in the callback being discarded. More information can be found in the [documentation](/docs/ibc/v4.6.x/ibc/apps/ibcmodule#receiving-packets). + +The `OnRecvPacket`, `OnAcknowledgementPacket`, and `OnTimeoutPacket` callbacks are now passed the `sdk.AccAddress` of the relayer who relayed the IBC packet. Applications may use or ignore this information. + +## IBC Event changes + +The `packet_data` attribute has been deprecated in favor of `packet_data_hex`, in order to provide standardized encoding/decoding of packet data in events. While the `packet_data` event still exists, all relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_data_hex` as soon as possible. + +The `packet_ack` attribute has also been deprecated in favor of `packet_ack_hex` for the same reason stated above. All relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_ack_hex` as soon as possible. + +The `consensus_height` attribute has been removed in the Misbehaviour event emitted. IBC clients no longer have a frozen height and misbehaviour does not necessarily have an associated height. 
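Event consumers migrating to the hex attributes can decode them with standard-library hex decoding. A minimal sketch follows; the attribute value is a hypothetical ICS20 payload, not taken from a real event:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// decodePacketDataHex converts the hex-encoded packet_data_hex event
// attribute back into the raw packet data bytes.
func decodePacketDataHex(attr string) (string, error) {
	raw, err := hex.DecodeString(attr)
	if err != nil {
		return "", err
	}
	return string(raw), nil
}

func main() {
	// hypothetical attribute value: hex encoding of an ICS20 packet data JSON
	attr := hex.EncodeToString([]byte(`{"amount":"100","denom":"uatom"}`))
	data, err := decodePacketDataHex(attr)
	if err != nil {
		panic(err)
	}
	fmt.Println(data) // {"amount":"100","denom":"uatom"}
}
```

The same decoding applies to `packet_ack_hex`; the standardized hex encoding is what removes the ambiguity the deprecated raw attributes had.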
## Relevant SDK changes

* (codec) [#9226](https://github.com/cosmos/cosmos-sdk/pull/9226) Rename codec interfaces and methods to follow general Go interface conventions:
  * `codec.Marshaler` → `codec.Codec` (this defines objects which serialize other objects)
  * `codec.BinaryMarshaler` → `codec.BinaryCodec`
  * `codec.JSONMarshaler` → `codec.JSONCodec`
  * Removed `BinaryBare` suffix from `BinaryCodec` methods (`MarshalBinaryBare`, `UnmarshalBinaryBare`, ...)
  * Removed `Binary` infix from `BinaryCodec` methods (`MarshalBinaryLengthPrefixed`, `UnmarshalBinaryLengthPrefixed`, ...)

diff --git a/docs/ibc/v4.6.x/migrations/support-denoms-with-slashes.mdx b/docs/ibc/v4.6.x/migrations/support-denoms-with-slashes.mdx
new file mode 100644
index 00000000..6a698566
--- /dev/null
+++ b/docs/ibc/v4.6.x/migrations/support-denoms-with-slashes.mdx
---
title: Support transfer of coins whose base denom contains slashes
description: >-
  This document is intended to highlight significant changes which may require
  more information than presented in the CHANGELOG. Any changes that must be
  done by a user of ibc-go should be documented here.
---

This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
Any changes that must be done by a user of ibc-go should be documented here.

There are four sections based on the four potential user groups of this document:

* Chains
* IBC Apps
* Relayers
* IBC Light Clients

This document is necessary when chains are upgrading from a version that does not support base denoms with slashes (e.g. v3.0.0) to a version that does (e.g. v3.2.0). All versions of ibc-go smaller than v1.5.0 for the v1.x release line, v2.3.0 for the v2.x release line, and v3.1.0 for the v3.x release line do **NOT** support IBC token transfers of coins whose base denoms contain slashes.
Therefore, the in-place or genesis migrations described in this document are required when upgrading.

If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect.

E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`.

This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes.

To do so, chain binaries should include a migration script that will run when the chain upgrades from not supporting base denominations with slashes to supporting base denominations with slashes.

## Chains

### ICS20 - Transfer

The transfer module will now support slashes in base denoms, so we must iterate over current traces to check if any of them are incorrectly formed and correct the trace information.

### Upgrade Proposal

```go
app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces",
    func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
        // transfer module consensus version has been bumped to 2
        return app.mm.RunMigrations(ctx, app.configurator, fromVM)
    })
```

This is only necessary if there are denom traces in the store with incorrect trace information from previously received coins that had a slash in the base denom. However, it is recommended that any chain upgrading to support base denominations with slashes runs this code for safety.

For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1680).
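The trace correction described above can be sketched with a self-contained stand-in for the real `ParseDenomTrace`. This is a simplification under a stated assumption: it consumes leading `{port}/{channel}` pairs only while the channel element has the canonical `channel-` prefix, and treats everything after as the base denom.

```go
package main

import (
	"fmt"
	"strings"
)

// reparseDenomPath splits a full denom path into its trace (port/channel
// pairs) and base denom, allowing the base denom itself to contain slashes.
// Simplified stand-in for ibc-go's ParseDenomTrace: it stops consuming
// pairs as soon as the would-be channel element lacks the "channel-" prefix.
func reparseDenomPath(fullPath string) (trace, baseDenom string) {
	parts := strings.Split(fullPath, "/")
	i := 0
	for i+1 < len(parts) && strings.HasPrefix(parts[i+1], "channel-") {
		i += 2
	}
	return strings.Join(parts[:i], "/"), strings.Join(parts[i:], "/")
}

func main() {
	// before the fix, this path was mis-stored as
	// Trace: "transfer/channel-0/testcoin/testcoin", BaseDenom: "testcoin"
	trace, base := reparseDenomPath("transfer/channel-0/testcoin/testcoin/testcoin")
	fmt.Println(trace) // transfer/channel-0
	fmt.Println(base)  // testcoin/testcoin/testcoin
}
```

A base denom that itself contains a `channel-` segment would still be ambiguous under this heuristic, which is why the real migration re-parses and validates each stored trace rather than guessing from the path alone.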
### Genesis Migration

If the chain chooses to add support for slashes in base denoms via genesis export, then the trace information must be corrected during genesis migration.

The migration code required may look like:

```go expandable
func migrateGenesisSlashedDenomsUpgrade(appState genutiltypes.AppMap, clientCtx client.Context, genDoc *tmtypes.GenesisDoc) (genutiltypes.AppMap, error) {
    if appState[ibctransfertypes.ModuleName] != nil {
        transferGenState := &ibctransfertypes.GenesisState{}
        clientCtx.Codec.MustUnmarshalJSON(appState[ibctransfertypes.ModuleName], transferGenState)

        substituteTraces := make([]ibctransfertypes.DenomTrace, len(transferGenState.DenomTraces))
        for i, dt := range transferGenState.DenomTraces {
            // replace all previous traces with the latest trace if validation passes
            // note most traces will have same value
            newTrace := ibctransfertypes.ParseDenomTrace(dt.GetFullDenomPath())
            if err := newTrace.Validate(); err != nil {
                substituteTraces[i] = dt
            } else {
                substituteTraces[i] = newTrace
            }
        }
        transferGenState.DenomTraces = substituteTraces

        // delete old genesis state
        delete(appState, ibctransfertypes.ModuleName)

        // set new ibc transfer genesis state
        appState[ibctransfertypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(transferGenState)
    }

    return appState, nil
}
```

For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1528).

diff --git a/docs/ibc/v4.6.x/migrations/v1-to-v2.mdx b/docs/ibc/v4.6.x/migrations/v1-to-v2.mdx
new file mode 100644
index 00000000..035b8c94
--- /dev/null
+++ b/docs/ibc/v4.6.x/migrations/v1-to-v2.mdx
---
title: IBC-Go v1 to v2
description: >-
  This document is intended to highlight significant changes which may require
  more information than presented in the CHANGELOG. Any changes that must be
  done by a user of ibc-go should be documented here.
---

This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
Any changes that must be done by a user of ibc-go should be documented here.

There are four sections based on the four potential user groups of this document:

* Chains
* IBC Apps
* Relayers
* IBC Light Clients

**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.

```go
github.com/cosmos/ibc-go -> github.com/cosmos/ibc-go/v2
```

## Chains

* No relevant changes were made in this release.

## IBC Apps

A new function has been added to the app module interface:

```go expandable
// NegotiateAppVersion performs application version negotiation given the provided channel ordering, connectionID, portID, counterparty and proposed version.
// An error is returned if version negotiation cannot be performed. For example, an application module implementing this interface
// may decide to return an error in the event of the proposed version being incompatible with its own
NegotiateAppVersion(
    ctx sdk.Context,
    order channeltypes.Order,
    connectionID string,
    portID string,
    counterparty channeltypes.Counterparty,
    proposedVersion string,
) (version string, err error)
```

This function should perform application version negotiation and return the negotiated version. If the version cannot be negotiated, an error should be returned. This function is only used on the client side.

### sdk.Result removed

`sdk.Result` has been removed as a return value in the application callbacks. Previously it was being discarded by core IBC and was thus unused.

## Relayers

A new gRPC has been added to 05-port, `AppVersion`. It returns the negotiated app version. This function should be used for the `ChanOpenTry` channel handshake step to decide upon the application version which should be set in the channel.
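The negotiation expected from `NegotiateAppVersion` implementers can be sketched as follows. The supported-version set, the fallback default, and the helper itself are assumptions for illustration, not the actual interface method:

```go
package main

import (
	"errors"
	"fmt"
)

// negotiateAppVersion sketches the version negotiation an app module might
// perform: accept the proposed version if it is supported, fall back to a
// default when the counterparty proposes nothing, and error otherwise.
func negotiateAppVersion(proposed string, supported map[string]bool, defaultVersion string) (string, error) {
	if proposed == "" {
		return defaultVersion, nil
	}
	if supported[proposed] {
		return proposed, nil
	}
	return "", errors.New("proposed version is incompatible: " + proposed)
}

func main() {
	supported := map[string]bool{"ics20-1": true}
	v, err := negotiateAppVersion("ics20-1", supported, "ics20-1")
	fmt.Println(v, err) // ics20-1 <nil>
}
```

The error branch is what allows a relayer querying the `AppVersion` gRPC to learn, before `ChanOpenTry`, that a proposed version would be rejected.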
+ +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v4.6.x/migrations/v2-to-v3.mdx b/docs/ibc/v4.6.x/migrations/v2-to-v3.mdx new file mode 100644 index 00000000..8d4df030 --- /dev/null +++ b/docs/ibc/v4.6.x/migrations/v2-to-v3.mdx @@ -0,0 +1,187 @@ +--- +title: IBC-Go v2 to v3 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v2 -> github.com/cosmos/ibc-go/v3 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS20 + +The `transferkeeper.NewKeeper(...)` now takes in an ICS4Wrapper. +The ICS4Wrapper should be the IBC Channel Keeper unless ICS 20 is being connected to a middleware application. + +### ICS27 + +ICS27 Interchain Accounts has been added as a supported IBC application of ibc-go. +Please see the [ICS27 documentation](/docs/ibc/v4.6.x/apps/interchain-accounts/overview) for more information. 
### Upgrade Proposal

If the chain will adopt ICS27, it must set the appropriate params during the execution of the upgrade handler in `app.go`:

```go expandable
app.UpgradeKeeper.SetUpgradeHandler("v3",
    func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
        // set the ICS27 consensus version so InitGenesis is not run
        fromVM[icatypes.ModuleName] = icamodule.ConsensusVersion()

        // create ICS27 Controller submodule params
        controllerParams := icacontrollertypes.Params{
            ControllerEnabled: true,
        }

        // create ICS27 Host submodule params
        hostParams := icahosttypes.Params{
            HostEnabled:   true,
            AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ...},
        }

        // initialize ICS27 module
        icamodule.InitModule(ctx, controllerParams, hostParams)

        ...

        return app.mm.RunMigrations(ctx, app.configurator, fromVM)
    })
```

The host and controller submodule params only need to be set if the chain integrates those submodules.
For example, if a chain chooses not to integrate a controller submodule, it may pass empty params into `InitModule`.

#### Add `StoreUpgrades` for ICS27 module

For ICS27 it is also necessary to [manually add store upgrades](https://docs.cosmos.network/main/learn/advanced/upgrade#add-storeupgrades-for-new-modules) for the new ICS27 module and then configure the store loader to apply those upgrades in `app.go`:

```go
if upgradeInfo.Name == "v3" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
    storeUpgrades := store.StoreUpgrades{
        Added: []string{icacontrollertypes.StoreKey, icahosttypes.StoreKey},
    }

    app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
}
```

This ensures that the new module's stores are added to the multistore before the migrations begin.
The host and controller submodule keys only need to be added if the chain integrates those submodules.
For example, if a chain chooses not to integrate a controller submodule, it does not need to add the controller key to the `Added` field.

### Genesis migrations

If the chain will adopt ICS27 and chooses to upgrade via a genesis export, then the ICS27 parameters must be set during genesis migration.

The migration code required may look like:

```go expandable
controllerGenesisState := icatypes.DefaultControllerGenesis()
// overwrite parameters as desired
controllerGenesisState.Params = icacontrollertypes.Params{
    ControllerEnabled: true,
}

hostGenesisState := icatypes.DefaultHostGenesis()
// overwrite parameters as desired
hostGenesisState.Params = icahosttypes.Params{
    HostEnabled:   true,
    AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ...},
}

icaGenesisState := icatypes.NewGenesisState(controllerGenesisState, hostGenesisState)

// set new ics27 genesis state
appState[icatypes.ModuleName] = clientCtx.JSONCodec.MustMarshalJSON(icaGenesisState)
```

### Ante decorator

The field of type `channelkeeper.Keeper` in the `AnteDecorator` structure has been replaced with a field of type `*keeper.Keeper`:

```diff
type AnteDecorator struct {
- k channelkeeper.Keeper
+ k *keeper.Keeper
}

- func NewAnteDecorator(k channelkeeper.Keeper) AnteDecorator {
+ func NewAnteDecorator(k *keeper.Keeper) AnteDecorator {
  return AnteDecorator{k: k}
}
```

## IBC Apps

### `OnChanOpenTry` must return negotiated application version

The `OnChanOpenTry` application callback has been modified.
The return signature now includes the application version.
IBC applications must perform application version negotiation in `OnChanOpenTry` using the counterparty version.
The negotiated application version then must be returned in `OnChanOpenTry` to core IBC.
Core IBC will set this version in the TRYOPEN channel.
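The new callback contract can be sketched with a stand-in function; the signature and version strings below are illustrative assumptions, not the actual `porttypes.IBCModule` method:

```go
package main

import (
	"errors"
	"fmt"
)

// onChanOpenTry sketches the modified callback behaviour: the application
// receives the counterparty's proposed version and must return the version
// that core IBC will set in the TRYOPEN channel.
func onChanOpenTry(counterpartyVersion, ownVersion string) (string, error) {
	// empty counterparty version: propose our own version
	if counterpartyVersion == "" {
		return ownVersion, nil
	}
	// only accept a counterparty version we actually support
	if counterpartyVersion != ownVersion {
		return "", errors.New("unsupported counterparty version: " + counterpartyVersion)
	}
	return counterpartyVersion, nil
}

func main() {
	v, err := onChanOpenTry("ics27-1", "ics27-1")
	fmt.Println(v, err) // ics27-1 <nil>
}
```

Returning the version from the callback, rather than trusting the relayer-supplied `version` string, is what lets core IBC deprecate that field in `MsgChanOpenTry`.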
### `OnChanOpenAck` will take additional `counterpartyChannelID` argument

The `OnChanOpenAck` application callback has been modified.
The arguments now include the counterparty channel id.

### `NegotiateAppVersion` removed from `IBCModule` interface

Previously this logic was handled by the `NegotiateAppVersion` function.
Relayers would query this function before calling `ChanOpenTry`.
Applications would then need to verify that the passed in version was correct.
Now applications will perform this version negotiation during the channel handshake, thus removing the need for `NegotiateAppVersion`.

### Channel state will not be set before application callback

The channel handshake logic has been reorganized within core IBC.
Channel state will not be set in state until after the application callback is performed.
Applications must rely only on the passed in channel parameters instead of querying the channel keeper for channel state.

### IBC application callbacks moved from `AppModule` to `IBCModule`

Previously, IBC module callbacks were a part of the `AppModule` type.
The recommended approach is to create an `IBCModule` type and move the IBC module callbacks from `AppModule` to `IBCModule` in a separate file `ibc_module.go`.

The mock module go API has been broken in this release by applying the above format.
The IBC module callbacks have been moved from the mock module's `AppModule` into a new type `IBCModule`.

As part of this release, the mock module now supports middleware testing. Please see the [README](https://github.com/cosmos/ibc-go/blob/v4.4.0/testing/README.md#middleware-testing) for more information.

Please review the [mock](https://github.com/cosmos/ibc-go/blob/v4.4.0/testing/mock/ibc_module.go) and [transfer](https://github.com/cosmos/ibc-go/blob/v4.4.0/modules/apps/transfer/ibc_module.go) modules as examples.
Additionally, [simapp](https://github.com/cosmos/ibc-go/blob/v4.4.0/testing/simapp/app.go) provides an example of how `IBCModule` types should now be added to the IBC router in favour of `AppModule`.
+
+### IBC testing package
+
+`TestChain`s are now created with chainIDs beginning from an index of 1. Any calls to `GetChainID(0)` will now fail. Please increment all calls to `GetChainID` by 1.
+
+## Relayers
+
+`AppVersion` gRPC has been removed.
+The `version` string in `MsgChanOpenTry` has been deprecated and will be ignored by core IBC.
+Relayers no longer need to determine the version to use on the `ChanOpenTry` step.
+IBC applications will determine the correct version using the counterparty version.
+
+## IBC Light Clients
+
+The `GetProofSpecs` function has been removed from the `ClientState` interface. This function was previously unused by core IBC. Light clients that don't use this function may remove it. diff --git a/docs/ibc/v4.6.x/migrations/v3-to-v4.mdx b/docs/ibc/v4.6.x/migrations/v3-to-v4.mdx new file mode 100644 index 00000000..de5e2dde --- /dev/null +++ b/docs/ibc/v4.6.x/migrations/v3-to-v4.mdx @@ -0,0 +1,156 @@ +--- +title: IBC-Go v3 to v4 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- +
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+```go
+github.com/cosmos/ibc-go/v3 -> github.com/cosmos/ibc-go/v4
+```
+
+No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go.
+
+## Chains
+
+### ICS27 - Interchain Accounts
+
+The controller submodule now implements the 05-port `Middleware` interface instead of the 05-port `IBCModule` interface. Chains that integrate the controller submodule need to create it with the `NewIBCMiddleware` constructor function. For example:
+
+```diff
+- icacontroller.NewIBCModule(app.ICAControllerKeeper, icaAuthIBCModule)
++ icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper)
+```
+
+where `icaAuthIBCModule` is the Interchain Accounts authentication IBC Module.
+
+### ICS29 - Fee Middleware
+
+The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly.
+
+Please read the Fee Middleware [integration documentation](/docs/ibc/v4.6.x/middleware/ics29-fee/integration) for an in-depth guide on how to configure the module correctly in order to incentivize IBC packets.
+
+Take a look at the following diff for an [example setup](https://github.com/cosmos/ibc-go/pull/1432/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08aL366) of how to incentivize ics27 channels.
+
+### Migration to fix support for base denoms with slashes
+
+As part of [v1.5.0](https://github.com/cosmos/ibc-go/releases/tag/v1.5.0), [v2.3.0](https://github.com/cosmos/ibc-go/releases/tag/v2.3.0) and [v3.1.0](https://github.com/cosmos/ibc-go/releases/tag/v3.1.0), a [migration handler code sample was documented](/docs/ibc/v4.6.x/migrations/support-denoms-with-slashes#upgrade-proposal) that needs to run in order to correct the trace information of coins transferred using ICS20 whose base denom contains slashes.
+
+Based on feedback from the community, we now provide an improved solution for running the same migration that does not require copying a large piece of code over from the migration document, but instead requires only adding a one-line upgrade handler.
+
+If the chain will migrate to supporting base denoms with slashes, it must set the appropriate params during the execution of the upgrade handler in `app.go`:
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces",
+    func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+        / transfer module consensus version has been bumped to 2
+        return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+})
+```
+
+If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect.
+
+E.g. if a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`.
+
+This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes.
+
+## IBC Apps
+
+### ICS03 - Connection
+
+Crossing hellos have been removed from 03-connection handshake negotiation.
+`PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated and is no longer used by core IBC.
+
+`NewMsgConnectionOpenTry` no longer takes in the `PreviousConnectionId` as crossing hellos are no longer supported. A non-empty `PreviousConnectionId` will fail basic validation for this message.
+
+### ICS04 - Channel
+
+The `WriteAcknowledgement` API now takes the `exported.Acknowledgement` type instead of passing in the acknowledgement byte array directly.
+This is an API breaking change, and as such IBC application developers will have to update any calls to `WriteAcknowledgement`.
+
+The `OnChanOpenInit` application callback has been modified.
+The return signature now includes the application version as detailed in the latest IBC [spec changes](https://github.com/cosmos/ibc/pull/629).
+
+The `NewErrorAcknowledgement` method signature has changed.
+It now accepts an `error` rather than a `string`. This was done in order to prevent accidental state changes.
+All error acknowledgements now contain a deterministic ABCI code and error message. It is the responsibility of the application developer to emit error details in events.
+
+Crossing hellos have been removed from 04-channel handshake negotiation.
+IBC applications no longer need to account for already claimed capabilities in the `OnChanOpenTry` callback. The capability provided by core IBC must be able to be claimed without error.
+`PreviousChannelId` in `MsgChannelOpenTry` has been deprecated and is no longer used by core IBC.
+
+`NewMsgChannelOpenTry` no longer takes in the `PreviousChannelId` as crossing hellos are no longer supported. A non-empty `PreviousChannelId` will fail basic validation for this message.
+
+### ICS27 - Interchain Accounts
+
+The `RegisterInterchainAccount` API has been modified to include an additional `version` argument. This change has been made in order to support ICS29 fee middleware, for relayer incentivization of ICS27 packets.
+Consumers of the `RegisterInterchainAccount` API are now expected to build the appropriate JSON encoded version string themselves and pass it accordingly.
+This should be constructed within the interchain accounts authentication module, which leverages the APIs exposed via the interchain accounts `controllerKeeper`.
If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(feeEnabledVersion)); err != nil { + return err +} +``` + +## Relayers + +When using the `DenomTrace` gRPC, the full IBC denomination with the `ibc/` prefix may now be passed in. + +Crossing hellos are no longer supported by core IBC for 03-connection and 04-channel. 
The handshake should be completed in the logical 4 step process (INIT, TRY, ACK, CONFIRM). diff --git a/docs/ibc/v5.4.x/apps/interchain-accounts/active-channels.mdx b/docs/ibc/v5.4.x/apps/interchain-accounts/active-channels.mdx new file mode 100644 index 00000000..6cbefa72 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/interchain-accounts/active-channels.mdx @@ -0,0 +1,29 @@ +--- +title: Active Channels +description: >- + The Interchain Accounts module uses ORDERED channels to maintain the order of + transactions when sending packets from a controller to a host chain. A + limitation when using ORDERED channels is that when a packet times out the + channel will be closed. +--- + +The Interchain Accounts module uses [ORDERED channels](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#ordering) to maintain the order of transactions when sending packets from a controller to a host chain. A limitation when using ORDERED channels is that when a packet times out the channel will be closed. + +In the case of a channel closing, a controller chain needs to be able to regain access to the interchain account registered on this channel. `Active Channels` enable this functionality. Future versions of the ICS-27 protocol and the Interchain Accounts module will likely use a new +channel type that provides ordering of packets without the channel closing on timing out, thus removing the need for `Active Channels` entirely. + +When an Interchain Account is registered using the `RegisterInterchainAccount` API, a new channel is created on a particular port. During the `OnChanOpenAck` and `OnChanOpenConfirm` steps (controller & host chain) the `Active Channel` for this interchain account +is stored in state. + +It is possible to create a new channel using the same controller chain portID if the previously set `Active Channel` is now in a `CLOSED` state. 
This channel creation can be initialized programmatically by sending a new `MsgChannelOpenInit` message like so: + +```go +msg := channeltypes.NewMsgChannelOpenInit(portID, string(versionBytes), channeltypes.ORDERED, []string{ + connectionID +}, icatypes.PortID, icatypes.ModuleName) + handler := k.msgRouter.Handler(msg) +``` + +Alternatively, any relayer operator may initiate a new channel handshake for this interchain account once the previously set `Active Channel` is in a `CLOSED` state. This is done by initiating the channel handshake on the controller chain using the same portID associated with the interchain account in question. + +It is important to note that once a channel has been opened for a given Interchain Account, new channels can not be opened for this account until the currently set `Active Channel` is set to `CLOSED`. diff --git a/docs/ibc/v5.4.x/apps/interchain-accounts/auth-modules.mdx b/docs/ibc/v5.4.x/apps/interchain-accounts/auth-modules.mdx new file mode 100644 index 00000000..a7bc33ae --- /dev/null +++ b/docs/ibc/v5.4.x/apps/interchain-accounts/auth-modules.mdx @@ -0,0 +1,442 @@ +--- +title: Authentication Modules +--- + +## Synopsis + +Authentication modules play the role of the `Base Application` as described in [ICS30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware), and enable application developers to perform custom logic when working with the Interchain Accounts controller API. + +The controller submodule is used for account registration and packet sending. +It executes only logic required of all controllers of interchain accounts. +The type of authentication used to manage the interchain accounts remains unspecified. +There may exist many different types of authentication which are desirable for different use cases. +Thus the purpose of the authentication module is to wrap the controller module with custom authentication logic. 
+ +In ibc-go, authentication modules are connected to the controller chain via a middleware stack. +The controller module is implemented as [middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) and the authentication module is connected to the controller module as the base application of the middleware stack. +To implement an authentication module, the `IBCModule` interface must be fulfilled. +By implementing the controller module as middleware, any amount of authentication modules can be created and connected to the controller module without writing redundant code. + +The authentication module must: + +- Authenticate interchain account owners +- Track the associated interchain account address for an owner +- Claim the channel capability in `OnChanOpenInit` +- Send packets on behalf of an owner (after authentication) + +## IBCModule implementation + +The following `IBCModule` callbacks must be implemented with appropriate custom logic: + +```go expandable +/ OnChanOpenInit implements the IBCModule interface +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / the authentication module *must* claim the channel capability on OnChanOpenInit + if err := im.keeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return version, err +} + + / perform custom logic + + return version, nil +} + +/ OnChanOpenAck implements the IBCModule interface +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + / perform custom logic + + return nil +} + +/ OnChanCloseConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / 
perform custom logic + + return nil +} + +/ OnAcknowledgementPacket implements the IBCModule interface +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} + +/ OnTimeoutPacket implements the IBCModule interface. +func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} +``` + +**Note**: The channel capability must be claimed by the authentication module in `OnChanOpenInit` otherwise the authentication module will not be able to send packets on the channel created for the associated interchain account. + +The following functions must be defined to fulfill the `IBCModule` interface, but they will never be called by the controller module so they may error or panic. + +```go expandable +/ OnChanOpenTry implements the IBCModule interface +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + panic("UNIMPLEMENTED") +} + +/ OnChanOpenConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnChanCloseInit implements the IBCModule interface +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnRecvPacket implements the IBCModule interface. A successful acknowledgement +/ is returned if the packet data is successfully decoded and the receive application +/ logic returns without error. 
+func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + panic("UNIMPLEMENTED") +} +``` + +## `RegisterInterchainAccount` + +The authentication module can begin registering interchain accounts by calling `RegisterInterchainAccount`: + +```go +if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, connectionID, owner.String(), version); err != nil { + return err +} + +return nil +``` + +The `version` argument is used to support ICS29 fee middleware for relayer incentivization of ICS27 packets. Consumers of the `RegisterInterchainAccount` are expected to build the appropriate JSON encoded version string themselves and pass it accordingly. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: 
icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(feeEnabledVersion)); err != nil { + return err +} +``` + +## `SendTx` + +The authentication module can attempt to send a packet by calling `SendTx`: + +```go expandable +/ Authenticate owner +/ perform custom logic + +/ Construct controller portID based on interchain account owner address +portID, err := icatypes.NewControllerPortID(owner.String()) + if err != nil { + return err +} + +channelID, found := keeper.icaControllerKeeper.GetActiveChannelID(ctx, portID) + if !found { + return sdkerrors.Wrapf(icatypes.ErrActiveChannelNotFound, "failed to retrieve active channel for port %s", portID) +} + +/ Obtain the channel capability, claimed in OnChanOpenInit +chanCap, found := keeper.scopedKeeper.GetCapability(ctx, host.ChannelCapabilityPath(portID, channelID)) + if !found { + return sdkerrors.Wrap(channeltypes.ErrChannelCapabilityNotFound, "module does not own channel capability") +} + +/ Obtain data to be sent to the host chain. +/ In this example, the owner of the interchain account would like to send a bank MsgSend to the host chain. +/ The appropriate serialization function should be called. The host chain must be able to deserialize the transaction. +/ If the host chain is using the ibc-go host module, `SerializeCosmosTx` should be used. 
+ msg := &banktypes.MsgSend{
+    FromAddress: fromAddr,
+    ToAddress:   toAddr,
+    Amount:      amt,
+}
+
+data, err := icatypes.SerializeCosmosTx(keeper.cdc, []sdk.Msg{
+    msg,
+})
+ if err != nil {
+    return err
+}
+
+/ Construct packet data
+ packetData := icatypes.InterchainAccountPacketData{
+    Type: icatypes.EXECUTE_TX,
+    Data: data,
+}
+
+/ Obtain timeout timestamp
+/ An appropriate timeout timestamp must be determined based on the usage of the interchain account.
+/ If the packet times out, the channel will be closed, requiring a new channel to be created
+ timeoutTimestamp := obtainTimeoutTimestamp()
+
+/ Send the interchain accounts packet, returning the packet sequence
+seq, err = keeper.icaControllerKeeper.SendTx(ctx, chanCap, portID, packetData, timeoutTimestamp)
+```
+
+The data within an `InterchainAccountPacketData` must be serialized using a format supported by the host chain.
+If the host chain is using the ibc-go host chain submodule, `SerializeCosmosTx` should be used. If the `InterchainAccountPacketData.Data` is serialized using a format not supported by the host chain, the packet will not be successfully received.
+
+## `OnAcknowledgementPacket`
+
+Controller chains will be able to access the acknowledgement written into the host chain state once a relayer relays the acknowledgement.
+The acknowledgement bytes will be passed to the auth module via the `OnAcknowledgementPacket` callback.
+Auth modules are expected to know how to decode the acknowledgement.
+
+If the controller chain is connected to a host chain using the host module on ibc-go, it may interpret the acknowledgement bytes as follows:
+
+Begin by unmarshaling the acknowledgement into `sdk.TxMsgData`:
+
+```go
+var ack channeltypes.Acknowledgement
+if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil {
+    return err
+}
+
+txMsgData := &sdk.TxMsgData{}
+if err := proto.Unmarshal(ack.GetResult(), txMsgData); err != nil {
+    return err
+}
+```
+
+If the `txMsgData.Data` field is non-nil, the host chain is using SDK version `<=` v0.45.
+The auth module should interpret the `txMsgData.Data` as follows:
+
+```go expandable
+switch len(txMsgData.Data) {
+case 0:
+    / see documentation below for SDK 0.46.x or greater
+default:
+    for _, msgData := range txMsgData.Data {
+        if err := handler(msgData); err != nil {
+            return err
+        }
+    }
+}
+```
+
+A handler will be needed to interpret what actions to perform based on the message type sent.
+A router could be used, or more simply a switch statement.
+
+```go expandable
+func handler(msgData sdk.MsgData) error {
+    switch msgData.MsgType {
+    case sdk.MsgTypeURL(&banktypes.MsgSend{}):
+        msgResponse := &banktypes.MsgSendResponse{}
+        if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+            return err
+        }
+        handleBankSendMsg(msgResponse)
+    case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{}):
+        msgResponse := &stakingtypes.MsgDelegateResponse{}
+        if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+            return err
+        }
+        handleStakingDelegateMsg(msgResponse)
+    case sdk.MsgTypeURL(&transfertypes.MsgTransfer{}):
+        msgResponse := &transfertypes.MsgTransferResponse{}
+        if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+            return err
+        }
+        handleIBCTransferMsg(msgResponse)
+    default:
+        return nil
+    }
+    return nil
+}
+```
+
+If the `txMsgData.Data` is empty, the host chain is using SDK version > v0.45.
+The auth module should interpret the `txMsgData.MsgResponses` as follows:
+
+```go
+...
+/ switch statement from above
+case 0:
+    for _, any := range txMsgData.MsgResponses {
+        if err := handleAny(any); err != nil {
+            return err
+        }
+    }
+}
+```
+
+A handler will be needed to interpret what actions to perform based on the type URL of the `Any`.
+A router could be used, or more simply a switch statement.
+It may be possible to deduplicate logic between `handler` and `handleAny`.
+
+```go expandable
+func handleAny(any *codectypes.Any) error {
+    switch any.TypeUrl {
+    case sdk.MsgTypeURL(&banktypes.MsgSend{}):
+        msgResponse, err := unpackBankMsgSendResponse(any)
+        if err != nil {
+            return err
+        }
+        handleBankSendMsg(msgResponse)
+    case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{}):
+        msgResponse, err := unpackStakingDelegateResponse(any)
+        if err != nil {
+            return err
+        }
+        handleStakingDelegateMsg(msgResponse)
+    case sdk.MsgTypeURL(&transfertypes.MsgTransfer{}):
+        msgResponse, err := unpackIBCTransferMsgResponse(any)
+        if err != nil {
+            return err
+        }
+        handleIBCTransferMsg(msgResponse)
+    default:
+        return nil
+    }
+    return nil
+}
+```
+
+### Integration into `app.go` file
+
+To integrate the authentication module into your chain, please follow the steps outlined above in [app.go integration](/docs/ibc/v5.4.x/apps/interchain-accounts/integration#example-integration). diff --git a/docs/ibc/v5.4.x/apps/interchain-accounts/integration.mdx b/docs/ibc/v5.4.x/apps/interchain-accounts/integration.mdx new file mode 100644 index 00000000..50a8797b --- /dev/null +++ b/docs/ibc/v5.4.x/apps/interchain-accounts/integration.mdx @@ -0,0 +1,194 @@ +--- +title: Integration +--- +
+## Synopsis
+
+Learn how to integrate Interchain Accounts host and controller functionality into your chain. The following document applies only to Cosmos SDK chains.
+
+The Interchain Accounts module contains two submodules. Each submodule has its own IBC application.
The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule that will be used.
+
+Chains that wish to support ICS27 may elect to act as a host chain, a controller chain, or both. Disabling host or controller functionality may be done statically by excluding the host or controller module entirely from the `app.go` file, or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules.
+
+Interchain Account authentication modules are the base application of a middleware stack. The controller submodule is the middleware in this stack.
+
+## Example integration
+
+```go expandable
+/ app.go
+
+/ Register the AppModule for the Interchain Accounts module and the authentication module
+/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module
+ModuleBasics = module.NewBasicManager(
+    ...
+    ica.AppModuleBasic{},
+    icaauth.AppModuleBasic{},
+    ...
+)
+
+...
+
+/ Add module account permissions for the Interchain Accounts module
+/ Only necessary for host chain functionality
+/ Each Interchain Account created on the host chain is derived from the module account created
+maccPerms = map[string][]string{
+    ...
+    icatypes.ModuleName: nil,
+}
+
+...
+
+/ Add Interchain Accounts Keepers for each submodule used and the authentication module
+/ If a submodule is being statically disabled, the associated Keeper does not need to be added.
+type App struct {
+    ...
+
+    ICAControllerKeeper icacontrollerkeeper.Keeper
+    ICAHostKeeper       icahostkeeper.Keeper
+    ICAAuthKeeper       icaauthkeeper.Keeper
+
+    ...
+}
+
+...
+
+/ Create store keys for each submodule Keeper and the authentication module
+ keys := sdk.NewKVStoreKeys(
+    ...
+ icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +/ Create the scoped keepers for each submodule keeper and authentication keeper + scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) + scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) + scopedICAAuthKeeper := app.CapabilityKeeper.ScopeToModule(icaauthtypes.ModuleName) + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), +) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper, scopedICAAuthKeeper) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ ICA auth IBC Module + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. 
+    AddRoute(icacontrollertypes.SubModuleName, icaControllerStack).
+    AddRoute(icahosttypes.SubModuleName, icaHostIBCModule).
+    AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack
+
+...
+
+/ Register Interchain Accounts and authentication module AppModules
+app.moduleManager = module.NewManager(
+    ...
+    icaModule,
+    icaAuthModule,
+)
+
+...
+
+/ Add the Interchain Accounts module to begin blocker logic
+app.moduleManager.SetOrderBeginBlockers(
+    ...
+    icatypes.ModuleName,
+    ...
+)
+
+/ Add the Interchain Accounts module to end blocker logic
+app.moduleManager.SetOrderEndBlockers(
+    ...
+    icatypes.ModuleName,
+    ...
+)
+
+/ Add Interchain Accounts module InitGenesis logic
+app.moduleManager.SetOrderInitGenesis(
+    ...
+    icatypes.ModuleName,
+    ...
+)
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) paramskeeper.Keeper {
+    ...
+    paramsKeeper.Subspace(icahosttypes.SubModuleName)
+    paramsKeeper.Subspace(icacontrollertypes.SubModuleName)
+    ...
+```
+
+### Using submodules exclusively
+
+As described above, the Interchain Accounts application module is structured to support the ability of exclusively enabling controller or host functionality.
+This can be achieved by simply omitting either the controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`.
+Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v5.4.x/apps/interchain-accounts/parameters).
+
+The following snippets show basic examples of statically disabling submodules using `app.go`.
+ +#### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +#### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Create your Interchain Accounts authentication module, setting up the Keeper, AppModule and IBCModule appropriately +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper, scopedICAAuthKeeper) + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + +/ Register controller and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack +``` diff --git a/docs/ibc/v5.4.x/apps/interchain-accounts/overview.mdx b/docs/ibc/v5.4.x/apps/interchain-accounts/overview.mdx new file mode 100644 index 00000000..3051d193 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/interchain-accounts/overview.mdx @@ -0,0 +1,37 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the Interchain Accounts module is, and how to build custom modules that utilize Interchain Accounts functionality + +## What is the Interchain Accounts module? + +Interchain Accounts is the Cosmos SDK implementation of the ICS-27 protocol, which enables cross-chain account management built upon IBC. 
Chains using the Interchain Accounts module can programmatically create accounts on other chains and control these accounts via IBC transactions. + +Interchain Accounts exposes a simple-to-use API, which means IBC application developers do not need in-depth knowledge of the underlying low-level details of IBC or the ICS-27 protocol. + +Developers looking to build upon Interchain Accounts must write custom logic in their own IBC application modules, called authentication modules. + +- How is an interchain account different from a regular account? + +Regular accounts use a private key to sign transactions on-chain. Interchain Accounts are instead controlled programmatically by separate chains via IBC transactions. Interchain Accounts are implemented as sub-accounts of the interchain accounts module account. + +## Concepts + +`Host Chain`: The chain where the interchain account is registered. The host chain listens for IBC packets from a controller chain containing instructions (e.g. Cosmos SDK messages) that the interchain account will execute. + +`Controller Chain`: The chain registering and controlling an account on a host chain. The controller chain sends IBC packets to the host chain to control the account. A controller chain must have at least one interchain accounts authentication module in order to act as a controller chain. + +`Authentication Module`: A custom IBC application module on the controller chain that uses the Interchain Accounts module API to build custom logic for the creation & management of interchain accounts. For a controller chain to utilize the interchain accounts module functionality, an authentication module is required. + +`Interchain Account`: An account on a host chain. An interchain account has all the capabilities of a normal account.
However, rather than signing transactions with a private key, a controller chain's authentication module will send IBC packets to the host chain that signal which transactions the interchain account should execute. + +## SDK Security Model + +SDK modules on a chain are assumed to be trustworthy. For example, there are no checks to prevent an untrustworthy module from accessing the bank keeper. + +The implementation of ICS27 on ibc-go uses this assumption in its security considerations. The implementation assumes the authentication module will not try to open channels on owner addresses it does not control. + +The implementation assumes other IBC application modules will not bind to ports within the ICS27 namespace. diff --git a/docs/ibc/v5.4.x/apps/interchain-accounts/parameters.mdx b/docs/ibc/v5.4.x/apps/interchain-accounts/parameters.mdx new file mode 100644 index 00000000..0890101b --- /dev/null +++ b/docs/ibc/v5.4.x/apps/interchain-accounts/parameters.mdx @@ -0,0 +1,63 @@ +--- +title: Parameters +description: >- + The Interchain Accounts module contains the following on-chain parameters, + logically separated for each distinct submodule: +--- + +The Interchain Accounts module contains the following on-chain parameters, logically separated for each distinct submodule: + +## Controller Submodule Parameters + +| Key | Type | Default Value | +| ------------------- | ---- | ------------- | +| `ControllerEnabled` | bool | `true` | + +### ControllerEnabled + +The `ControllerEnabled` parameter controls a chain's ability to service ICS-27 controller-specific logic.
This includes the sending of Interchain Accounts packet data as well as the following ICS-26 callback handlers: + +* `OnChanOpenInit` +* `OnChanOpenAck` +* `OnChanCloseConfirm` +* `OnAcknowledgementPacket` +* `OnTimeoutPacket` + +## Host Submodule Parameters + +| Key | Type | Default Value | +| --------------- | --------- | ------------- | +| `HostEnabled` | bool | `true` | +| `AllowMessages` | \[]string | `[]` | + +### HostEnabled + +The `HostEnabled` parameter controls a chain's ability to service ICS-27 host-specific logic. This includes the following ICS-26 callback handlers: + +* `OnChanOpenTry` +* `OnChanOpenConfirm` +* `OnChanCloseConfirm` +* `OnRecvPacket` + +### AllowMessages + +The `AllowMessages` parameter provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute by defining an allowlist using the Protobuf message TypeURL format. + +For example, a Cosmos SDK-based chain that elects to provide hosted Interchain Accounts with the ability to perform governance voting and staking delegations will define its parameters as follows: + +```json +"params": { + "host_enabled": true, + "allow_messages": ["/cosmos.staking.v1beta1.MsgDelegate", + "/cosmos.gov.v1beta1.MsgVote"] +} +``` + +There is also a special wildcard `"*"` message type which allows any type of message to be executed by the interchain account. This must be the only message in the `allow_messages` array.
+ +```json +"params": { + "host_enabled": true, + "allow_messages": ["*"] +} +``` diff --git a/docs/ibc/v5.4.x/apps/interchain-accounts/transactions.mdx b/docs/ibc/v5.4.x/apps/interchain-accounts/transactions.mdx new file mode 100644 index 00000000..6dbad9c1 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/interchain-accounts/transactions.mdx @@ -0,0 +1,21 @@ +--- +title: Transactions +--- + +## Synopsis + +Learn about Interchain Accounts transaction execution + +## Executing a transaction + +As described in [Authentication Modules](/docs/ibc/v5.4.x/apps/interchain-accounts/auth-modules#trysendtx), transactions are executed using the interchain accounts controller API and require a `Base Application` as outlined in [ICS-30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) to facilitate authentication. The method of authentication remains unspecified to provide flexibility for the authentication module developer. + +Transactions are executed via the ICS27 [`SendTx` API](/docs/ibc/v5.4.x/apps/interchain-accounts/auth-modules#trysendtx). This must be invoked through an Interchain Accounts authentication module and follows the outlined path of execution below. Packet relaying semantics provided by the IBC core transport, authentication, and ordering (IBC/TAO) layer are omitted for brevity. + +![send-interchain-tx.png](/docs/ibc/images/02-apps/02-interchain-accounts/images/send-interchain-tx.png) + +## Atomicity + +As the Interchain Accounts module supports the execution of multiple transactions using the Cosmos SDK `Msg` interface, it provides the same atomicity guarantees as Cosmos SDK-based applications, leveraging the [`CacheMultiStore`](https://docs.cosmos.network/main/learn/advanced/store#cachemultistore) architecture provided by the [`Context`](https://docs.cosmos.network/main/learn/advanced/context.html) type.
+ +This provides atomic execution of transactions when using Interchain Accounts, where state changes are only committed if all `Msg`s succeed. diff --git a/docs/ibc/v5.4.x/apps/transfer/events.mdx b/docs/ibc/v5.4.x/apps/transfer/events.mdx new file mode 100644 index 00000000..1ae39041 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/transfer/events.mdx @@ -0,0 +1,49 @@ +--- +title: Events +--- + +## `MsgTransfer` + +| Type | Attribute Key | Attribute Value | +| ------------- | ------------- | --------------- | +| ibc\_transfer | sender | `{sender}` | +| ibc\_transfer | receiver | `{receiver}` | +| message | module | transfer | + +## `OnRecvPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | success | `{ackSuccess}` | +| fungible\_token\_packet | error | `{ackError}` | +| denomination\_trace | trace\_hash | `{hex\_hash}` | +| denomination\_trace | denom | `{voucherDenom}` | + +## `OnAcknowledgePacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | acknowledgement | `{ack.String()}` | +| fungible\_token\_packet | success / error | `{ack.Response}` | + +## `OnTimeoutPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ---------------- | 
--------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | refund\_receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | diff --git a/docs/ibc/v5.4.x/apps/transfer/messages.mdx b/docs/ibc/v5.4.x/apps/transfer/messages.mdx new file mode 100644 index 00000000..58410ca1 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/transfer/messages.mdx @@ -0,0 +1,42 @@ +--- +title: Messages +description: 'A fungible token cross chain transfer is achieved by using the MsgTransfer:' +--- + +## `MsgTransfer` + +A fungible token cross chain transfer is achieved by using the `MsgTransfer`: + +```go +type MsgTransfer struct { + SourcePort string + SourceChannel string + Token sdk.Coin + Sender string + Receiver string + TimeoutHeight ibcexported.Height + TimeoutTimestamp uint64 + Memo string +} +``` + +This message is expected to fail if: + +* `SourcePort` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `SourceChannel` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `Token` is invalid (denom is invalid or amount is negative) + * `Token.Amount` is not positive. + * `Token.Denom` is not a valid IBC denomination as per [ADR 001 - Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md). +* `Sender` is empty. +* `Receiver` is empty. +* `TimeoutHeight` and `TimeoutTimestamp` are both zero. + +This message will send a fungible token to the counterparty chain represented by the counterparty Channel End connected to the Channel End with the identifiers `SourcePort` and `SourceChannel`.
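The stateless checks above can be sketched with simplified stand-in types; the field types and the `validateBasic` helper here are illustrative, not the actual ibc-go implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// Coin and Height are simplified stand-ins for sdk.Coin and ibcexported.Height.
type Coin struct {
	Denom  string
	Amount int64
}

type Height struct {
	RevisionNumber, RevisionHeight uint64
}

func (h Height) IsZero() bool { return h.RevisionNumber == 0 && h.RevisionHeight == 0 }

// MsgTransfer mirrors the message definition above, with simplified field types.
type MsgTransfer struct {
	SourcePort       string
	SourceChannel    string
	Token            Coin
	Sender           string
	Receiver         string
	TimeoutHeight    Height
	TimeoutTimestamp uint64
	Memo             string
}

// validateBasic sketches the stateless failure conditions listed above.
func (m MsgTransfer) validateBasic() error {
	if m.SourcePort == "" {
		return errors.New("invalid source port")
	}
	if m.SourceChannel == "" {
		return errors.New("invalid source channel")
	}
	if m.Token.Denom == "" || m.Token.Amount <= 0 {
		return errors.New("invalid token")
	}
	if m.Sender == "" {
		return errors.New("empty sender")
	}
	if m.Receiver == "" {
		return errors.New("empty receiver")
	}
	if m.TimeoutHeight.IsZero() && m.TimeoutTimestamp == 0 {
		return errors.New("timeout height and timeout timestamp are both zero")
	}
	return nil
}

func main() {
	msg := MsgTransfer{
		SourcePort:       "transfer",
		SourceChannel:    "channel-0",
		Token:            Coin{Denom: "uatom", Amount: 100},
		Sender:           "cosmos1sender",
		Receiver:         "cosmos1receiver",
		TimeoutTimestamp: 1,
	}
	fmt.Println(msg.validateBasic()) // <nil>
}
```

Note that real identifier validation (the 24-host naming rules) is stricter than the empty-string checks sketched here.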
+ +The denomination provided for transfer should correspond to the same denomination represented on this chain. The necessary prefixes will be added by the receiving chain. + +### Memo + +The memo field was added to allow applications and users to attach metadata to transfer packets. The field is optional and may be left empty. When it is used to attach metadata for a particular middleware, the memo field should be represented as a JSON object where different middlewares use different JSON keys. + +You can find more information about applications that use the memo field in the [chain registry](https://github.com/cosmos/chain-registry/blob/master/_memo_keys/ICS20_memo_keys.json). diff --git a/docs/ibc/v5.4.x/apps/transfer/metrics.mdx b/docs/ibc/v5.4.x/apps/transfer/metrics.mdx new file mode 100644 index 00000000..7ef82c88 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/transfer/metrics.mdx @@ -0,0 +1,13 @@ +--- +title: Metrics +description: The IBC transfer application module exposes the following set of metrics. +--- + +The IBC transfer application module exposes the following set of metrics.
+ +| Metric | Description | Unit | Type | +| :---------------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | +| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | +| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | +| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | +| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | diff --git a/docs/ibc/v5.4.x/apps/transfer/overview.mdx b/docs/ibc/v5.4.x/apps/transfer/overview.mdx new file mode 100644 index 00000000..90fcfee1 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/transfer/overview.mdx @@ -0,0 +1,125 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the token Transfer module is + +## What is the Transfer module? + +Transfer is the Cosmos SDK implementation of the [ICS-20](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer) protocol, which enables cross-chain fungible token transfers. + +## Concepts + +### Acknowledgements + +ICS20 uses the recommended acknowledgement format as specified by [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope). + +A successful receive of a transfer packet will result in a Result Acknowledgement being written +with the value `[]byte{byte(1)}` in the `Response` field. + +An unsuccessful receive of a transfer packet will result in an Error Acknowledgement being written +with the error message in the `Response` field. + +### Denomination trace + +The denomination trace corresponds to the information that allows a token to be traced back to its +origin chain. 
It contains a sequence of port and channel identifiers ordered from the most recent to +the oldest in the timeline of transfers. + +This information is included on the token denomination field in the form of a hash to prevent an +unbounded denomination length. For example, the token `transfer/channelToA/uatom` will be displayed +as `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2`. + +Each send to any chain other than the one it was previously received from is a movement forwards in +the token's timeline. This causes a trace to be added to the token's history and the destination port +and destination channel to be prefixed to the denomination. In these instances, the sender chain is +acting as the "source zone". When the token is sent back to the chain it previously received from, the +prefix is removed. This is a backwards movement in the token's timeline and the sender chain is +acting as the "sink zone". + +It is strongly recommended to read the full details of [ADR 001: Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md) to understand the implications and context of the IBC token representations. + +## UX suggestions for clients + +For clients (wallets, exchanges, applications, block explorers, etc.) that want to display the source of the token, it is recommended to use the following alternatives for each of the cases below: + +### Direct connection + +If the denomination trace contains a single identifier prefix pair (as in the example above), then +the easiest way to retrieve the chain and light client identifier is to map the trace information +directly. In summary, this requires querying the channel from the denomination trace identifiers, +and then the counterparty client state using the counterparty port and channel identifiers from the +retrieved channel. + +A general pseudo algorithm would look like the following: + +1. Query the full denomination trace. +2.
Query the channel with the `portID/channelID` pair, which corresponds to the first destination of the + token. +3. Query the client state using the identifiers pair. Note that this query will return a `"Not +Found"` response if the current chain is not connected to this channel. +4. Retrieve the client identifier or chain identifier from the client state (e.g. on + Tendermint clients) and store it locally. + +Using the gRPC gateway client service, the steps above with a given IBC token `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` stored on `chainB` would be: + +1. `GET /ibc/apps/transfer/v1/denom_traces/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` -> `{"path": "transfer/channelToA", "base_denom": "uatom"}` +2. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer/client_state` -> `{"client_id": "clientA", "chain-id": "chainA", ...}` +3. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer` -> `{"channel_id": "channelToA", "port_id": "transfer", "counterparty": {"channel_id": "channelToB", "port_id": "transfer"}, ...}` +4. `GET /ibc/apps/transfer/v1/channels/channelToB/ports/transfer/client_state` -> `{"client_id": "clientB", "chain-id": "chainB", ...}` + +Then, the token transfer chain path for the `uatom` denomination would be: `chainA` -> `chainB`. + +### Multiple hops + +The multiple channel hops case applies when the token has passed through multiple chains between the original source and final destination chains. + +The IBC protocol doesn't know the topology of the overall network (i.e. connections between chains and identifier names between them). For this reason, in the multiple hops case, a particular chain in the timeline of the individual transfers can't query the chain and client identifiers of the other chains. + +Take for example the following sequence of transfers `A -> B -> C` for an IBC token, with a final prefix path (trace info) of `transfer/channelChainC/transfer/channelChainB`.
What the paragraph above means is that even in the case that chain `C` is directly connected to chain `A`, querying the port and channel identifiers that chain `B` uses to connect to chain `A` (e.g. `transfer/channelChainA`) can be completely different from the one that chain `C` uses to connect to chain `A` (e.g. `transfer/channelToChainA`). + +Thus, the solutions for clients that the IBC team recommends are the following: + +- **Connect to all chains**: Connecting to all the chains in the timeline would allow clients to + perform the queries outlined in the [direct connection](#direct-connection) section to each + relevant chain. By repeatedly following the port and channel denomination trace transfer timeline, + clients should always be able to find all the relevant identifiers. This comes at the tradeoff + that the client must connect to nodes on each of the chains in order to perform the queries. +- **Relayer as a Service (RaaS)**: A longer term solution is to use/create a relayer service that + could map the denomination trace to the chain path timeline for each token (i.e. `origin chain -> +chain #1 -> ... -> chain #(n-1) -> final chain`). These services could provide Merkle proofs in + order to allow clients to optionally verify the path timeline correctness for themselves by + running light clients. If the proofs are not verified, these services should be considered trusted + third parties. Additionally, clients would be advised in the future to use RaaS providers that support the + largest number of connections between chains in the ecosystem. Unfortunately, none of the existing + public relayers (in [Golang](https://github.com/cosmos/relayer) and + [Rust](https://github.com/informalsystems/ibc-rs)) provide this service to clients. + + + The only viable alternative for clients (at the time of writing) for tracing tokens + with multiple connection hops is to connect to all chains directly and + perform relevant queries to each of them in the sequence.
+ + +## Locked funds + +In some [exceptional cases](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md#exceptional-cases), a client state associated with a given channel cannot be updated. This means that funds from fungible tokens in that channel will be permanently locked and can no longer be transferred. + +To mitigate this, a client update governance proposal can be submitted to update the frozen client +with a new valid header. Once the proposal passes, the client state will be unfrozen and the funds +from the associated channels will then be unlocked. This mechanism only applies to clients that +allow updates via governance, such as Tendermint clients. + +In addition to this, it's important to mention that a token must be sent back along the exact route +that it took originally in order to return it to its original form on the source chain (e.g. the +Cosmos Hub for `uatom`). Sending a token back to the same chain across a different channel will +**not** move the token back across its timeline. If a channel in the chain history closes before the +token can be sent back across that channel, then the token will not be returnable to its original +form. + +## Security considerations + +For safety, no other module must be capable of minting tokens with the `ibc/` prefix. The IBC +transfer module needs a subset of the denomination space that only it can create tokens in.
diff --git a/docs/ibc/v5.4.x/apps/transfer/params.mdx b/docs/ibc/v5.4.x/apps/transfer/params.mdx new file mode 100644 index 00000000..f1c78927 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/transfer/params.mdx @@ -0,0 +1,29 @@ +--- +title: Params +description: 'The IBC transfer application module contains the following parameters:' +--- + +The IBC transfer application module contains the following parameters: + +| Key | Type | Default Value | +| ---------------- | ---- | ------------- | +| `SendEnabled` | bool | `true` | +| `ReceiveEnabled` | bool | `true` | + +## `SendEnabled` + +The transfers enabled parameter controls send cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred from the chain, set the `SendEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. + +## `ReceiveEnabled` + +The transfers enabled parameter controls receive cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred to the chain, set the `ReceiveEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. 
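The two-level gating described above, a module-wide transfer switch combined with the bank module's per-denomination switch, can be sketched as follows; the names here are illustrative and not the actual keeper APIs:

```go
package main

import "fmt"

// transferParams mirrors the module-wide switch documented above.
type transferParams struct {
	SendEnabled bool
}

// bankSendEnabled stands in for the bank module's per-denomination
// SendEnabled entries (a simplification of the real keeper interface).
var bankSendEnabled = map[string]bool{
	"uatom": false, // this denom has been disabled, e.g. via governance
}

// canSend reports whether an IBC transfer of denom may proceed: the
// module-wide switch must be on, and the denom must not be disabled in
// the bank module.
func canSend(p transferParams, denom string) bool {
	if !p.SendEnabled {
		return false
	}
	if enabled, ok := bankSendEnabled[denom]; ok {
		return enabled
	}
	return true // denoms without an explicit entry default to enabled
}

func main() {
	p := transferParams{SendEnabled: true}
	fmt.Println(canSend(p, "uatom"), canSend(p, "stake")) // false true
}
```

This is why the docs above say to keep the IBC `SendEnabled` parameter set to `true` while disabling a single denomination in the bank module.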
diff --git a/docs/ibc/v5.4.x/apps/transfer/state-transitions.mdx b/docs/ibc/v5.4.x/apps/transfer/state-transitions.mdx new file mode 100644 index 00000000..fcc9169c --- /dev/null +++ b/docs/ibc/v5.4.x/apps/transfer/state-transitions.mdx @@ -0,0 +1,35 @@ +--- +title: State Transitions +description: >- + A successful fungible token send has two state transitions depending on whether the + transfer is a movement forward or backwards in the token's timeline: +--- + +## Send fungible tokens + +A successful fungible token send has two state transitions depending on whether the transfer is a movement forward or backwards in the token's timeline: + +1. Sender chain is the source chain, *i.e.* a transfer to any chain other than the one it was previously received from is a movement forwards in the token's timeline. This results in the following state transitions: + + * The coins are transferred to an escrow address (i.e. locked) on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +2. Sender chain is the sink chain, *i.e.* the token is sent back to the chain it previously received from. This is a backwards movement in the token's timeline. This results in the following state transitions: + + * The coins (vouchers) are burned on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +## Receive fungible tokens + +A successful fungible token receive has two state transitions depending on whether the transfer is a movement forward or backwards in the token's timeline: + +1. Receiver chain is the source chain. This is a backwards movement in the token's timeline. This results in the following state transitions: + + * The leftmost port and channel identifier pair is removed from the token denomination prefix. + * The tokens are unescrowed and sent to the receiving address. + +2. Receiver chain is the sink chain. This is a movement forwards in the token's timeline.
This results in the following state transitions: + + * Token vouchers are minted by prefixing the destination port and channel identifiers to the trace information. + * The receiving chain stores the new trace information in the store (if not set already). + * The vouchers are sent to the receiving address. diff --git a/docs/ibc/v5.4.x/apps/transfer/state.mdx b/docs/ibc/v5.4.x/apps/transfer/state.mdx new file mode 100644 index 00000000..76c7af41 --- /dev/null +++ b/docs/ibc/v5.4.x/apps/transfer/state.mdx @@ -0,0 +1,12 @@ +--- +title: State +description: >- + The IBC transfer application module keeps state of the port to which the + module is bound and the denomination trace information as outlined in ADR + 001. +--- + +The IBC transfer application module keeps state of the port to which the module is bound and the denomination trace information as outlined in [ADR 001](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md). + +* `Port`: `0x01 -> ProtocolBuffer(string)` +* `DenomTrace`: `0x02 | []bytes(traceHash) -> ProtocolBuffer(DenomTrace)` diff --git a/docs/ibc/v5.4.x/changelog/release-notes.mdx b/docs/ibc/v5.4.x/changelog/release-notes.mdx new file mode 100644 index 00000000..c0c17b3f --- /dev/null +++ b/docs/ibc/v5.4.x/changelog/release-notes.mdx @@ -0,0 +1,596 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + + This page tracks all releases and changes from the + [cosmos/ibc-go](https://github.com/cosmos/ibc-go) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md#unreleased) + section. + + + + + ### Bug Fixes * (apps/transfer) + [\#3045](https://github.com/cosmos/ibc-go/pull/3045) allow value with slashes + in URL template for `denom_traces` and `denom_hashes` queries.
* + (apps/transfer) [\#4709](https://github.com/cosmos/ibc-go/pull/4709) Order + query service RPCs to fix availability of denom traces endpoint when no args + are provided. + + + + ### Bug Fixes * [\#3346](https://github.com/cosmos/ibc-go/pull/3346) Properly + handle ordered channels in `UnreceivedPackets` query. + + + + ### Dependencies * [\#3354](https://github.com/cosmos/ibc-go/pull/3354) Bump + Cosmos SDK to v0.46.12 and replace Tendermint with CometBFT v0.34.27. + + + + ### Dependencies * [\#2868](https://github.com/cosmos/ibc-go/pull/2868) Bump + ICS 23 to v0.9.0. * [\#2944](https://github.com/cosmos/ibc-go/pull/2944) Bump + Cosmos SDK to v0.46.7 and Tendermint to v0.34.24. ### State Machine Breaking * + (apps/29-fee) [\#2942](https://github.com/cosmos/ibc-go/pull/2942) Check + `x/bank` send enabled before escrowing fees. ### Improvements * (apps/29-fee) + [\#2786](https://github.com/cosmos/ibc-go/pull/2786) Save gas by checking key + existence with `KVStore`'s `Has` method. + + + + ### Dependencies * [\#2647](https://github.com/cosmos/ibc-go/pull/2647) Bump + Cosmos SDK to v0.46.4 and Tendermint to v0.34.22. ### State Machine Breaking * + (apps/transfer) [\#2651](https://github.com/cosmos/ibc-go/pull/2651) Introduce + `mustProtoMarshalJSON` for ics20 packet data marshalling which will skip + emission (marshalling) of the memo field if unpopulated (empty). * + (27-interchain-accounts) + [\#2580](https://github.com/cosmos/ibc-go/issues/2580) Removing port prefix + requirement from the ICA host channel handshake * (transfer) + [\#2377](https://github.com/cosmos/ibc-go/pull/2377) Adding `sequence` to + `MsgTransferResponse`. ### Improvements * (testing) + [\#2657](https://github.com/cosmos/ibc-go/pull/2657) Carry `ProposerAddress` + through committed blocks. Allow `DefaultGenTxGas` to be modified. ### Features + * (apps/transfer) [\#2595](https://github.com/cosmos/ibc-go/pull/2595) Adding + optional memo field to `FungibleTokenPacketData` and `MsgTransfer`. 
### Bug + Fixes * (apps/transfer) [\#2679](https://github.com/cosmos/ibc-go/pull/2679) + Check `x/bank` send enabled. + + + + ### Dependencies * [\#1653](https://github.com/cosmos/ibc-go/pull/1653) Bump + SDK version to v0.46 * [\#2124](https://github.com/cosmos/ibc-go/pull/2124) + Bump SDK version to v0.46.1 ### API Breaking * + (testing)[\#2028](https://github.com/cosmos/ibc-go/pull/2028) New interface + `ibctestingtypes.StakingKeeper` added and set for the testing app + `StakingKeeper` setup. * (core/04-channel) + [\#1418](https://github.com/cosmos/ibc-go/pull/1418) `NewPacketId` has been + renamed to `NewPacketID` to comply with go linting rules. * (core/ante) + [\#1418](https://github.com/cosmos/ibc-go/pull/1418) `AnteDecorator` has been + renamed to `RedundancyDecorator` to comply with go linting rules and to give + more clarity to the purpose of the Decorator. * (core/ante) + [\#1820](https://github.com/cosmos/ibc-go/pull/1418) `RedundancyDecorator` has + been renamed to `RedundantRelayDecorator` to make the name more explicit. * + (testing) [\#1418](https://github.com/cosmos/ibc-go/pull/1418) `MockIBCApp` + has been renamed to `IBCApp` and `MockEmptyAcknowledgement` has been renamed + to `EmptyAcknowledgement` to comply with go linting rules * + (apps/27-interchain-accounts) + [\#2058](https://github.com/cosmos/ibc-go/pull/2058) Added `MessageRouter` + interface and replaced `*baseapp.MsgServiceRouter` with it. The controller and + host keepers of apps/27-interchain-accounts have been updated to use it. * + (apps/27-interchain-accounts)[\#2302](https://github.com/cosmos/ibc-go/pull/2302) + Handle unwrapping of channel version in interchain accounts channel reopening + handshake flow. The `host` submodule `Keeper` now requires an `ICS4Wrapper` + similarly to the `controller` submodule. ### Improvements * + (27-interchain-accounts) + [\#1352](https://github.com/cosmos/ibc-go/issues/1352) Add support for + Cosmos-SDK simulation to ics27 module.
* (linting) + [\#1418](https://github.com/cosmos/ibc-go/pull/1418) Fix linting errors, + resulting in compatibility with go1.18 linting style, golangci-lint 1.46.2 and the + revive linter. This caused breaking changes in core/04-channel, core/ante, + and the testing library. ### Features * (apps/27-interchain-accounts) + [\#2193](https://github.com/cosmos/ibc-go/pull/2193) Adding + `InterchainAccount` gRPC query endpoint to ICS27 `controller` submodule to + allow users to retrieve registered interchain account addresses. ### Bug Fixes + * (27-interchain-accounts) + [\#2308](https://github.com/cosmos/ibc-go/pull/2308) Nil checks have been + added to ensure services are not registered for nil host or controller + keepers. * (makefile) [\#1785](https://github.com/cosmos/ibc-go/pull/1785) + Fetch the correct versions of protocol buffers dependencies from tendermint, + cosmos-sdk, and ics23. * + (modules/core/04-channel)[\#1919](https://github.com/cosmos/ibc-go/pull/1919) + Fixed formatting of sequence for packet "acknowledgement written" logs.
* (transfer) + [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address + from `fungible_token_packet` events in `OnRecvPacket` and + `OnAcknowledgementPacket`. * (modules/core/04-channel) + [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close + event when an ordered channel is closed. * + (modules/light-clients/07-tendermint) + [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating + `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for + context. ### Features * (modules/core/02-client) + [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding + Query/ConsensusStateHeights gRPC for fetching the height of every consensus + state associated with a client. * (modules/apps/transfer) + [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for + getting an escrow account for a given port-id and channel-id. * + (modules/apps/27-interchain-accounts) + [\#1512](https://github.com/cosmos/ibc-go/pull/1512) Allowing ICA modules to + handle all message types with "*". ### Bug Fixes * (modules/core/04-channel) + [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call + `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log + output * (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) + Fixing the support for base denoms that contain slashes. + + + + ### Dependencies * [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump + SDK version to v0.45.4 ### Improvements * (transfer) + [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now + takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`. + * (modules/core/04-channel) + [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> + string` performance in `Logger`. * (modules/core/keeper) + [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the + keepers passed into `ibckeeper.NewKeeper`. 
`ibckeeper.NewKeeper` now panics if + any of the keepers passed in is empty. * (transfer) + [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address + from `fungible_token_packet` events in `OnRecvPacket` and + `OnAcknowledgementPacket`. * (modules/core/04-channel) + [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close + event when an ordered channel is closed. ### Bug Fixes * + (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) + Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` + log output + + + + ### Dependencies + * [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17 + * [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1 + * [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7 + * (core) [\#709](https://github.com/cosmos/ibc-go/pull/709) Replace github.com/pkg/errors with stdlib errors + ### API Breaking + * (testing) [\#939](https://github.com/cosmos/ibc-go/pull/939) Support custom power reduction for testing. + * (modules/core/05-port) [\#1086](https://github.com/cosmos/ibc-go/pull/1086) Added `counterpartyChannelID` argument to IBCModule.OnChanOpenAck + * (channel) [\#848](https://github.com/cosmos/ibc-go/pull/848) Added `ChannelId` to MsgChannelOpenInitResponse + * (testing) [\#813](https://github.com/cosmos/ibc-go/pull/813) The `ack` argument to the testing function `RelayPacket` has been removed as it is no longer needed. + * (testing) [\#774](https://github.com/cosmos/ibc-go/pull/774) Added `ChainID` arg to `SetupWithGenesisValSet` on the testing app. `Coordinator` generated ChainIDs now starts at index 1 + * (transfer) [\#675](https://github.com/cosmos/ibc-go/pull/675) Transfer `NewKeeper` now takes in an ICS4Wrapper. The ICS4Wrapper may be the IBC Channel Keeper when ICS20 is not used in a middleware stack. 
The ICS4Wrapper is required for applications wishing to connect middleware to ICS20. + * (core) [\#650](https://github.com/cosmos/ibc-go/pull/650) Modify `OnChanOpenTry` IBC application module callback to return the negotiated app version. The version passed into the `MsgChanOpenTry` has been deprecated and will be ignored by core IBC. + * (core) [\#629](https://github.com/cosmos/ibc-go/pull/629) Removes the `GetProofSpecs` from the ClientState interface. This function was previously unused by core IBC. + * (transfer) [\#517](https://github.com/cosmos/ibc-go/pull/517) Separates the ICS 26 callback functions from `AppModule` into a new type `IBCModule` for ICS 20 transfer. + * (modules/core/02-client) [\#536](https://github.com/cosmos/ibc-go/pull/536) `GetSelfConsensusState` return type changed from bool to error. + * (channel) [\#644](https://github.com/cosmos/ibc-go/pull/644) Removes `CounterpartyHops` function from the ChannelKeeper. + * (testing) [\#776](https://github.com/cosmos/ibc-go/pull/776) Adding helper fn to generate capability name for testing callbacks + * (testing) [\#892](https://github.com/cosmos/ibc-go/pull/892) IBC Mock modules store the scoped keeper and portID within the IBCMockApp. They also maintain reference to the AppModule to update the AppModule's list of IBC applications it references. Allows for the mock module to be reused as a base application in middleware stacks. + * (channel) [\#882](https://github.com/cosmos/ibc-go/pull/882) The `WriteAcknowledgement` API now takes `exported.Acknowledgement` instead of a byte array + * (modules/core/ante) [\#950](https://github.com/cosmos/ibc-go/pull/950) Replaces the channel keeper with the IBC keeper in the IBC `AnteDecorator` in order to execute the entire message and be able to reject redundant messages that are in the same block as the non-redundant messages. 
+ ### State Machine Breaking + * (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message. + ### Improvements + * (interchain-accounts) [\#1037](https://github.com/cosmos/ibc-go/pull/1037) Add a function `InitModule` to the interchain accounts `AppModule`. This function should be called within the upgrade handler when adding the interchain accounts module to a chain. It should be called in place of InitGenesis (set the consensus version in the version map). + * (testing) [\#942](https://github.com/cosmos/ibc-go/pull/942) `NewTestChain` will create 4 validators in validator set by default. A new constructor function `NewTestChainWithValSet` is provided for test writers who want custom control over the validator set of test chains. + * (testing) [\#904](https://github.com/cosmos/ibc-go/pull/904) Add `ParsePacketFromEvents` function to the testing package. Useful when sending/relaying packets via the testing package. + * (testing) [\#893](https://github.com/cosmos/ibc-go/pull/893) Support custom private keys for testing. + * (testing) [\#810](https://github.com/cosmos/ibc-go/pull/810) Additional testing function added to `Endpoint` type called `RecvPacketWithResult`. Performs the same functionality as the existing `RecvPacket` function but also returns the message result. `path.RelayPacket` no longer uses the provided acknowledgement argument and instead obtains the acknowledgement via MsgRecvPacket events. + * (connection) [\#721](https://github.com/cosmos/ibc-go/pull/721) Simplify connection handshake error messages when unpacking client state. + * (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts. 
+* [\#383](https://github.com/cosmos/ibc-go/pull/383) Adds helper functions for merging and splitting middleware versions from the underlying app version.
+* (modules/core/05-port) [\#288](https://github.com/cosmos/ibc-go/issues/288) Making the 05-port keeper function IsBound public. The IsBound function checks if the provided portID is already bound to a module.
+* (channel) [\#644](https://github.com/cosmos/ibc-go/pull/644) Adds `GetChannelConnection` to the ChannelKeeper. This function returns the connectionID and connection state associated with a channel.
+* (channel) [\#647](https://github.com/cosmos/ibc-go/pull/647) Reorganizes channel handshake handling to set channel state after IBC application callbacks.
+* (client) [\#724](https://github.com/cosmos/ibc-go/pull/724) `IsRevisionFormat` and `IsClientIDFormat` have been updated to disallow newlines before the dash used to separate the chainID and revision number, and the client type and client sequence.
+* (interchain-accounts) [\#1466](https://github.com/cosmos/ibc-go/pull/1466) Emit event when there is an acknowledgement during `OnRecvPacket`.
+### Features
+* [\#432](https://github.com/cosmos/ibc-go/pull/432) Introduce `MockIBCApp` struct to the mock module. Allows the mock module to be reused to perform custom logic on each IBC App interface function. This might be useful when testing out IBC applications written as middleware.
+* [\#380](https://github.com/cosmos/ibc-go/pull/380) Adding the Interchain Accounts module v1
+* [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debugging
+### Bug Fixes
+* (testing) [\#884](https://github.com/cosmos/ibc-go/pull/884) Add and use in simapp a custom ante handler that rejects redundant transactions
+* (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+* (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+* (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+### Dependencies
+* [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17
+* [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+### Improvements
+* (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+* (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+* (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+* (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+* (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+* (modules/light-clients/07-tendermint) [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for context.
+### Features
+* (modules/core/02-client) [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding Query/ConsensusStateHeights gRPC for fetching the height of every consensus state associated with a client.
+* (modules/apps/transfer) [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for getting an escrow account for a given port-id and channel-id.
+### Bug Fixes
+* (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output.
+* (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) Fixing the support for base denoms that contain slashes.
+
+### Improvements
+* (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+* (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+* (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+* (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+* (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+### Bug Fixes
+* (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output.
+
+### Dependencies
+* [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1
+
+### Dependencies
+* [\#1268](https://github.com/cosmos/ibc-go/pull/1268) Bump SDK version to v0.44.8 and Tendermint to version 0.34.19
+### Improvements
+* (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+* (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+* (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+* (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+### Bug Fixes
+* (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output.
+
+### Dependencies
+* [\#1084](https://github.com/cosmos/ibc-go/pull/1084) Bump SDK version to v0.44.6
+* [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7
+### State Machine Breaking
+* (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message.
+### Features
+* [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debugging
+### Bug Fixes
+* (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+* (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+* (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+### Improvements
+* (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+### Dependencies
+* [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+### Bug Fixes
+* (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+### Dependencies
+* [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+### Improvements
+* (02-client) [\#568](https://github.com/cosmos/ibc-go/pull/568) In IBC `transfer` cli command use local clock time as reference for relative timestamp timeout if greater than the block timestamp queried from the latest consensus state corresponding to the counterparty channel.
+* [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+### Bug Fixes
+* (02-client) [\#500](https://github.com/cosmos/ibc-go/pull/500) Fix IBC `update-client proposal` cli command to expect correct number of args.
+
+### Dependencies
+* [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+* [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+### API Breaking
+* (core) [\#227](https://github.com/cosmos/ibc-go/pull/227) Remove sdk.Result from application callbacks
+* (transfer) [\#350](https://github.com/cosmos/ibc-go/pull/350) Change FungibleTokenPacketData to use a string for the Amount field. This enables token transfers with amounts previously restricted by uint64. Up to the maximum uint256 value is supported.
+### Features
+* [\#384](https://github.com/cosmos/ibc-go/pull/384) Added `NegotiateAppVersion` method to `IBCModule` interface supported by a gRPC query service in `05-port`. This provides routing of requests to the desired application module callback, which in turn performs application version negotiation.
+
+### Dependencies
+* [\#404](https://github.com/cosmos/ibc-go/pull/404) Bump Go version to 1.17
+* [\#1300](https://github.com/cosmos/ibc-go/pull/1300) Bump SDK version to v0.45.4
+### Improvements
+* (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+* (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+* (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+* (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+* (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+* (modules/light-clients/07-tendermint) [\#1118](https://github.com/cosmos/ibc-go/pull/1118) Deprecating `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour`. See ADR-026 for context.
+### Features
+* (modules/core/02-client) [\#1336](https://github.com/cosmos/ibc-go/pull/1336) Adding Query/ConsensusStateHeights gRPC for fetching the height of every consensus state associated with a client.
+* (modules/apps/transfer) [\#1416](https://github.com/cosmos/ibc-go/pull/1416) Adding gRPC endpoint for getting an escrow account for a given port-id and channel-id.
+### Bug Fixes
+* (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output.
+* (apps/transfer) [\#1451](https://github.com/cosmos/ibc-go/pull/1451) Fixing the support for base denoms that contain slashes.
+
+### Improvements
+* (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+* (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+* (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+* (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+* (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+### Bug Fixes
+* (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output.
+
+### Dependencies
+* [\#851](https://github.com/cosmos/ibc-go/pull/851) Bump SDK version to v0.45.1
+
+### Dependencies
+* [\#1267](https://github.com/cosmos/ibc-go/pull/1267) Bump SDK version to v0.44.8 and Tendermint to version 0.34.19
+### Improvements
+* (transfer) [\#1342](https://github.com/cosmos/ibc-go/pull/1342) `DenomTrace` grpc now takes in either an `ibc denom` or a `hash` instead of only accepting a `hash`.
+* (modules/core/04-channel) [\#1160](https://github.com/cosmos/ibc-go/pull/1160) Improve `uint64 -> string` performance in `Logger`.
+* (modules/core/keeper) [\#1284](https://github.com/cosmos/ibc-go/pull/1284) Add sanity check for the keepers passed into `ibckeeper.NewKeeper`. `ibckeeper.NewKeeper` now panics if any of the keepers passed in is empty.
+* (transfer) [\#1414](https://github.com/cosmos/ibc-go/pull/1414) Emitting Sender address from `fungible_token_packet` events in `OnRecvPacket` and `OnAcknowledgementPacket`.
+* (modules/core/04-channel) [\#1464](https://github.com/cosmos/ibc-go/pull/1464) Emit a channel close event when an ordered channel is closed.
+### Bug Fixes
+* (modules/core/04-channel) [\#1130](https://github.com/cosmos/ibc-go/pull/1130) Call `packet.GetSequence()` rather than passing func in `WriteAcknowledgement` log output.
+
+### Dependencies
+* [\#1073](https://github.com/cosmos/ibc-go/pull/1073) Bump SDK version to v0.44.6
+* [\#948](https://github.com/cosmos/ibc-go/pull/948) Bump ics23/go to v0.7
+### State Machine Breaking
+* (transfer) [\#818](https://github.com/cosmos/ibc-go/pull/818) Error acknowledgements returned from Transfer `OnRecvPacket` now include a deterministic ABCI code and error message.
+### Features
+* [\#679](https://github.com/cosmos/ibc-go/pull/679) New CLI command `query ibc-transfer denom-hash ` to get the denom hash for a denom trace; this might be useful for debugging
+### Bug Fixes
+* (client) [\#941](https://github.com/cosmos/ibc-go/pull/941) Classify client states without consensus states as expired
+* (transfer) [\#978](https://github.com/cosmos/ibc-go/pull/978) Support base denoms with slashes in denom validation
+* (channel) [\#995](https://github.com/cosmos/ibc-go/pull/995) Call `packet.GetSequence()` rather than passing func in `AcknowledgePacket` log output
+
+### Improvements
+* (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+### Dependencies
+* [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+### Bug Fixes
+* (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+### Dependencies
+* [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+### Improvements
+* [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+
+### Dependencies
+* [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+* [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+
+### Dependencies
+* [\#485](https://github.com/cosmos/ibc-go/pull/485) Bump SDK version to v0.44.2
+
+### Dependencies
+* [\#455](https://github.com/cosmos/ibc-go/pull/455) Bump SDK version to v0.44.1
+
+### State Machine Breaking
+* (24-host) [\#344](https://github.com/cosmos/ibc-go/pull/344) Increase port identifier limit to 128 characters.
+### Improvements
+* [\#373](https://github.com/cosmos/ibc-go/pull/375) Added optional field `PacketCommitmentSequences` to `QueryPacketAcknowledgementsRequest` to provide filtering of packet acknowledgements.
+### Features
+* [\#372](https://github.com/cosmos/ibc-go/pull/372) New CLI command `query ibc client status ` to get the current activity status of a client.
+### Dependencies
+* [\#386](https://github.com/cosmos/ibc-go/pull/386) Bump [tendermint](https://github.com/tendermint/tendermint) from v0.34.12 to v0.34.13.
+
+### Improvements
+* (channel) [\#692](https://github.com/cosmos/ibc-go/pull/692) Minimize channel logging by only emitting the packet sequence, source port/channel, destination port/channel upon packet receives, acknowledgements and timeouts.
+
+### Dependencies
+* [\#589](https://github.com/cosmos/ibc-go/pull/589) Bump SDK version to v0.44.5
+### Bug Fixes
+* (modules/core) [\#603](https://github.com/cosmos/ibc-go/pull/603) Fix module name emitted as part of `OnChanOpenInit` event. Replacing `connection` module name with `channel`.
+
+### Dependencies
+* [\#567](https://github.com/cosmos/ibc-go/pull/567) Bump SDK version to v0.44.4
+### Improvements
+* [\#583](https://github.com/cosmos/ibc-go/pull/583) Move third_party/proto/confio/proofs.proto to third_party/proto/proofs.proto to enable proto service reflection. Migrate `buf` from v1beta1 to v1.
+
+### Dependencies
+* [\#489](https://github.com/cosmos/ibc-go/pull/489) Bump Tendermint to v0.34.14
+* [\#503](https://github.com/cosmos/ibc-go/pull/503) Bump SDK version to v0.44.3
+
+* [\#485](https://github.com/cosmos/ibc-go/pull/485) Bump SDK version to v0.44.2
+
+### Dependencies
+* [\#455](https://github.com/cosmos/ibc-go/pull/455) Bump SDK version to v0.44.1
+
+### Dependencies
+* [\#367](https://github.com/cosmos/ibc-go/pull/367) Bump [cosmos-sdk](https://github.com/cosmos/cosmos-sdk) from 0.43 to 0.44.
+
+### Improvements
+* [\#343](https://github.com/cosmos/ibc-go/pull/343) Create helper functions for publishing of packet sent and acknowledgement sent events.
+
+### Bug Fixes
+* (07-tendermint) [\#241](https://github.com/cosmos/ibc-go/pull/241) Ensure tendermint client state latest height revision number matches chain id revision number.
+* (07-tendermint) [\#234](https://github.com/cosmos/ibc-go/pull/234) Use sentinel value for the consensus state root set during a client upgrade. This prevents genesis validation from failing.
+* (modules) [\#223](https://github.com/cosmos/ibc-go/pull/223) Use correct Prometheus format for metric labels.
+* (06-solomachine) [\#214](https://github.com/cosmos/ibc-go/pull/214) Disable defensive timestamp check in SendPacket for solo machine clients.
+* (07-tendermint) [\#210](https://github.com/cosmos/ibc-go/pull/210) Export all consensus metadata on genesis restarts for tendermint clients.
+* (core) [\#200](https://github.com/cosmos/ibc-go/pull/200) Fixes incorrect export of IBC identifier sequences. Previously, the next identifier sequence for clients/connections/channels was not set during genesis export. This resulted in the next identifiers being generated on the new chain to reuse old identifiers (the sequences began again from 0).
+* (02-client) [\#192](https://github.com/cosmos/ibc-go/pull/192) Fix IBC `query ibc client header` cli command.
Support historical queries for query header/node-state commands. + * (modules/light-clients/06-solomachine) [\#153](https://github.com/cosmos/ibc-go/pull/153) Fix solo machine proof height sequence mismatch bug. + * (modules/light-clients/06-solomachine) [\#122](https://github.com/cosmos/ibc-go/pull/122) Fix solo machine merkle prefix casting bug. + * (modules/light-clients/06-solomachine) [\#120](https://github.com/cosmos/ibc-go/pull/120) Fix solo machine handshake verification bug. + * (modules/light-clients/06-solomachine) [\#153](https://github.com/cosmos/ibc-go/pull/153) fix solo machine connection handshake failure at `ConnectionOpenAck`. + ### API Breaking + * (04-channel) [\#220](https://github.com/cosmos/ibc-go/pull/220) Channel legacy handler functions were removed. Please use the MsgServer functions or directly call the channel keeper's handshake function. + * (modules) [\#206](https://github.com/cosmos/ibc-go/pull/206) Expose `relayer sdk.AccAddress` on `OnRecvPacket`, `OnAcknowledgementPacket`, `OnTimeoutPacket` module callbacks to enable incentivization. + * (02-client) [\#181](https://github.com/cosmos/ibc-go/pull/181) Remove 'InitialHeight' from UpdateClient Proposal. Only copy over latest consensus state from substitute client. + * (06-solomachine) [\#169](https://github.com/cosmos/ibc-go/pull/169) Change FrozenSequence to boolean in solomachine ClientState. The solo machine proto package has been bumped from `v1` to `v2`. + * (module/core/02-client) [\#165](https://github.com/cosmos/ibc-go/pull/165) Remove GetFrozenHeight from the ClientState interface. + * (modules) [\#166](https://github.com/cosmos/ibc-go/pull/166) Remove GetHeight from the misbehaviour interface. The `consensus_height` attribute has been removed from Misbehaviour events. + * (modules) [\#162](https://github.com/cosmos/ibc-go/pull/162) Remove deprecated Handler types in core IBC and the ICS 20 transfer module. 
+ * (modules/core) [\#161](https://github.com/cosmos/ibc-go/pull/161) Remove Type(), Route(), GetSignBytes() from 02-client, 03-connection, and 04-channel messages. + * (modules) [\#140](https://github.com/cosmos/ibc-go/pull/140) IsFrozen() client state interface changed to Status(). gRPC `ClientStatus` route added. + * (modules/core) [\#109](https://github.com/cosmos/ibc-go/pull/109) Remove connection and channel handshake CLI commands. + * (modules) [\#107](https://github.com/cosmos/ibc-go/pull/107) Modify OnRecvPacket callback to return an acknowledgement which indicates if it is successful or not. Callback state changes are discarded for unsuccessful acknowledgements only. + * (modules) [\#108](https://github.com/cosmos/ibc-go/pull/108) All message constructors take the signer as a string to prevent upstream bugs. The `String()` function for an SDK Acc Address relies on external context. + * (transfer) [\#275](https://github.com/cosmos/ibc-go/pull/275) Remove 'ChanCloseInit' function from transfer keeper. ICS20 does not close channels. + ### State Machine Breaking + * (modules/light-clients/07-tendermint) [\#99](https://github.com/cosmos/ibc-go/pull/99) Enforce maximum chain-id length for tendermint client. + * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Allow a new form of misbehaviour that proves counterparty chain breaks time monotonicity, automatically enforce monotonicity in UpdateClient and freeze client if monotonicity is broken. + * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Freeze the client if there's a conflicting header submitted for an existing consensus state. + * (modules/core/02-client) [\#8405](https://github.com/cosmos/cosmos-sdk/pull/8405) Refactor IBC client update governance proposals to use a substitute client to update a frozen or expired client. 
+* (modules/core/02-client) [\#8673](https://github.com/cosmos/cosmos-sdk/pull/8673) IBC upgrade logic moved to 02-client and an IBC UpgradeProposal is added.
+* (modules/core/03-connection) [\#171](https://github.com/cosmos/ibc-go/pull/171) Introduces a new parameter `MaxExpectedTimePerBlock` to allow connections to calculate and enforce a block delay that is proportional to time delay set by connection.
+* (core) [\#268](https://github.com/cosmos/ibc-go/pull/268) Perform a no-op on redundant relay messages. Previous behaviour returned an error. Now no state change will occur and no error will be returned.
+### Improvements
+* (04-channel) [\#220](https://github.com/cosmos/ibc-go/pull/220) Channel handshake events are now emitted with the channel keeper.
+* (core/02-client) [\#205](https://github.com/cosmos/ibc-go/pull/205) Add in-place and genesis migrations from SDK v0.42.0 to ibc-go v1.0.0. Solo machine protobuf definitions are migrated from v1 to v2. All solo machine consensus states are pruned. All expired tendermint consensus states are pruned.
+* (modules/core) [\#184](https://github.com/cosmos/ibc-go/pull/184) Improve error messages. Uses unique error codes to indicate already relayed packets.
+* (07-tendermint) [\#182](https://github.com/cosmos/ibc-go/pull/182) Remove duplicate checks in upgrade logic.
+* (modules/core/04-channel) [\#7949](https://github.com/cosmos/cosmos-sdk/issues/7949) Standardized channel `Acknowledgement` moved to its own file. Codec registration redundancy removed.
+* (modules/core/04-channel) [\#144](https://github.com/cosmos/ibc-go/pull/144) Introduced a `packet_data_hex` attribute to emit the hex-encoded packet data in events. This allows for raw binary (proto-encoded message) to be sent over events and decoded correctly on the relayer. The original `packet_data` is DEPRECATED. All relayers and IBC event consumers are encouraged to switch to `packet_data_hex` as soon as possible.
+ * (core/04-channel) [\#197](https://github.com/cosmos/ibc-go/pull/197) Introduced a `packet_ack_hex` attribute to emit the hex-encoded acknowledgement in events. This allows for raw binary (proto-encoded message) to be sent over events and decoded correctly on relayer. Original `packet_ack` is DEPRECATED. All relayers and IBC event consumers are encouraged to switch to `packet_ack_hex` as soon as possible. + * (modules/light-clients/07-tendermint) [\#125](https://github.com/cosmos/ibc-go/pull/125) Implement efficient iteration of consensus states and pruning of earliest expired consensus state on UpdateClient. + * (modules/light-clients/07-tendermint) [\#141](https://github.com/cosmos/ibc-go/pull/141) Return early in case there's a duplicate update call to save Gas. + * (modules/core/ante) [\#235](https://github.com/cosmos/ibc-go/pull/235) Introduces a new IBC Antedecorator that will reject transactions that only contain redundant packet messages (and accompany UpdateClient msgs). This will prevent relayers from wasting fees by submitting messages for packets that have already been processed by previous relayer(s). The Antedecorator is only applied on CheckTx and RecheckTx and is therefore optional for each node. + ### Features + * [\#198](https://github.com/cosmos/ibc-go/pull/198) New CLI command `query ibc-transfer escrow-address ` to get the escrow address for a channel; can be used to then query balance of escrowed tokens + ### Client Breaking Changes + * (02-client/cli) [\#196](https://github.com/cosmos/ibc-go/pull/196) Rename `node-state` cli command to `self-consensus-state`. + diff --git a/docs/ibc/v5.4.x/ibc/apps/apps.mdx b/docs/ibc/v5.4.x/ibc/apps/apps.mdx new file mode 100644 index 00000000..2aba84ea --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/apps/apps.mdx @@ -0,0 +1,51 @@ +--- +title: IBC Applications +--- + +## Synopsis + +Learn how to build custom IBC application modules that enable packets to be sent to and received from other IBC-enabled chains. 
+ +This document serves as a guide for developers who want to write their own Inter-blockchain Communication Protocol (IBC) applications for custom use cases. + +Due to the modular design of the IBC protocol, IBC application developers do not need to concern themselves with the low-level details of clients, connections, and proof verification. Nevertheless, an overview of these low-level concepts can be found in [the Overview section](/docs/ibc/v5.4.x/ibc/overview). +The document goes into detail on the abstraction layer most relevant for application developers (channels and ports), and describes how to define your own custom packets, `IBCModule` callbacks and more to make an application module IBC ready. + +**To have your module interact over IBC you must:** + +- implement the `IBCModule` interface, i.e.: + - channel (opening) handshake callbacks + - channel closing handshake callbacks + - packet callbacks +- bind to a port(s) +- add keeper methods +- define your own packet data and acknowledgement structs as well as how to encode/decode them +- add a route to the IBC router + +The following sections provide a more detailed explanation of how to write an IBC application +module correctly corresponding to the listed steps. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v5.4.x/ibc/overview) +- [IBC default integration](/docs/ibc/v5.4.x/middleware/ics29-fee/integration) + + + +## Working example + +For a real working example of an IBC application, you can look through the `ibc-transfer` module +which implements everything discussed in this section. 
+ +Here are the useful parts of the module to look at: + +[Binding to transfer +port](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/genesis.go) + +[Sending transfer +packets](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/relay.go) + +[Implementing IBC +callbacks](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/ibc_module.go) diff --git a/docs/ibc/v5.4.x/ibc/apps/bindports.mdx b/docs/ibc/v5.4.x/ibc/apps/bindports.mdx new file mode 100644 index 00000000..3ddc9147 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/apps/bindports.mdx @@ -0,0 +1,134 @@ +--- +title: Bind ports +--- + +## Synopsis + +Learn what changes to make to bind modules to their ports on initialization. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v5.4.x/ibc/overview) +- [IBC default integration](/docs/ibc/v5.4.x/middleware/ics29-fee/integration) + + +Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented: + +> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather, it refers to the application module the port binds to. For IBC Modules built with the Cosmos SDK, it defaults to the module's name and for CosmWasm contracts it defaults to the contract address. + +1. Add port ID to the `GenesisState` proto definition: + + ```protobuf + message GenesisState { + string port_id = 1; + / other fields + } + ``` + +2. Add port ID as a key to the module store: + + ```go expandable + / x//types/keys.go + const ( + / ModuleName defines the IBC Module name + ModuleName = "moduleName" + + / Version defines the current version the IBC + / module supports + Version = "moduleVersion-1" + + / PortID is the default port id that module binds to + PortID = "portID" + + / ... + ) + ``` + +3. 
Add port ID to `x//types/genesis.go`: + + ```go expandable + / in x//types/genesis.go + + / DefaultGenesisState returns a GenesisState with "transfer" as the default PortID. + func DefaultGenesisState() *GenesisState { + return &GenesisState{ + PortId: PortID, + / additional k-v fields + } + } + + / Validate performs basic genesis state validation returning an error upon any + / failure. + func (gs GenesisState) + + Validate() + + error { + if err := host.PortIdentifierValidator(gs.PortId); err != nil { + return err + } + /additional validations + + return gs.Params.Validate() + } + ``` + +4. Bind to port(s) in the module keeper's `InitGenesis`: + + ```go expandable + / InitGenesis initializes the ibc-module state and binds to PortID. + func (k Keeper) + + InitGenesis(ctx sdk.Context, state types.GenesisState) { + k.SetPort(ctx, state.PortId) + + / ... + + / Only try to bind to port if it is not already bound, since we may already own + / port capability from capability InitGenesis + if !k.IsBound(ctx, state.PortId) { + / transfer module binds to the transfer port on InitChain + / and claims the returned capability + err := k.BindPort(ctx, state.PortId) + if err != nil { + panic(fmt.Sprintf("could not claim port capability: %v", err)) + } + + } + + / ... + } + ``` + + With: + + ```go expandable + / IsBound checks if the module is already bound to the desired port + func (k Keeper) + + IsBound(ctx sdk.Context, portID string) + + bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + + return ok + } + + / BindPort defines a wrapper function for the port Keeper's function in + / order to expose it to module's InitGenesis function + func (k Keeper) + + BindPort(ctx sdk.Context, portID string) + + error { + cap := k.portKeeper.BindPort(ctx, portID) + + return k.ClaimCapability(ctx, cap, host.PortPath(portID)) + } + ``` + + The module binds to the desired port(s) and returns the capabilities. 
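As a standalone illustration of the bind-only-if-not-bound guard used in `InitGenesis` above, the pattern can be sketched with a minimal in-memory stand-in. The `portRegistry` type below is hypothetical, not an ibc-go API:

```go
package main

import "fmt"

// portRegistry is an in-memory stand-in for the capability keeper,
// used only to illustrate the IsBound/BindPort pattern; it is not an
// ibc-go type.
type portRegistry struct {
	claimed map[string]bool
}

func newPortRegistry() *portRegistry {
	return &portRegistry{claimed: make(map[string]bool)}
}

// IsBound reports whether the port has already been claimed.
func (r *portRegistry) IsBound(portID string) bool {
	return r.claimed[portID]
}

// BindPort claims the port; binding twice is an error, which is why
// InitGenesis checks IsBound first.
func (r *portRegistry) BindPort(portID string) error {
	if r.claimed[portID] {
		return fmt.Errorf("port %s already bound", portID)
	}
	r.claimed[portID] = true
	return nil
}

func main() {
	r := newPortRegistry()
	// Mirror the InitGenesis guard: only bind if not already bound.
	if !r.IsBound("transfer") {
		if err := r.BindPort("transfer"); err != nil {
			panic(fmt.Sprintf("could not claim port capability: %v", err))
		}
	}
	fmt.Println(r.IsBound("transfer")) // true
}
```

In a real module the second bind attempt never panics, because the `IsBound` check skips binding when the capability was already claimed in the capability module's own `InitGenesis`.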
+ + The above references keeper methods that wrap other keeper functionality; the next section defines the keeper methods that need to be implemented. diff --git a/docs/ibc/v5.4.x/ibc/apps/ibcmodule.mdx b/docs/ibc/v5.4.x/ibc/apps/ibcmodule.mdx new file mode 100644 index 00000000..e78d5a94 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/apps/ibcmodule.mdx @@ -0,0 +1,384 @@ +--- +title: Implement `IBCModule` interface and callbacks +--- + +## Synopsis + +Learn how to implement the `IBCModule` interface and all of the callbacks it requires. + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This interface contains all of the callbacks IBC expects modules to implement. They include callbacks related to channel handshake, closing and packet callbacks (`OnRecvPacket`, `OnAcknowledgementPacket` and `OnTimeoutPacket`). + +```go +/ IBCModule implements the ICS26 interface for the given keeper. +/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go, +/ but ultimately file structure is up to the developer +type IBCModule struct { + keeper keeper.Keeper +} +``` + +Additionally, in the `module.go` file, add the following line: + +```go +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + / Add this line + _ porttypes.IBCModule = IBCModule{ +} +) +``` + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v5.4.x/ibc/overview) +- [IBC default integration](/docs/ibc/v5.4.x/middleware/ics29-fee/integration) + + + +## Channel handshake callbacks + +This section will describe the callbacks that are called during channel handshake execution. Among other things, it will claim channel capabilities passed on from core IBC. For a refresher on capabilities, check [the Overview section](/docs/ibc/v5.4.x/ibc/overview#capabilities).
+ +Here are the channel handshake callbacks that modules are expected to implement: + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `checkArguments` and `negotiateAppVersion` functions. + +```go expandable +/ Called by IBC Handler on MsgOpenInit +func (im IBCModule) + +OnChanOpenInit(ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + / Examples: + / - Abort if order == UNORDERED, + / - Abort if version is unsupported + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenInit must claim the channelCapability that IBC passes into the callback + if err := im.keeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return "", err +} + +return version, nil +} + +/ Called by IBC Handler on MsgOpenTry +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / ... 
do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenTry must claim the channelCapability that IBC passes into the callback + if err := im.keeper.scopedKeeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return "", err +} + + / Construct application version + / IBC applications must return the appropriate application version + / This can be a simple string or it can be a complex version constructed + / from the counterpartyVersion and other arguments. + / The version returned will be the channel version used for both channel ends. + appVersion := negotiateAppVersion(counterpartyVersion, args) + +return appVersion, nil +} + +/ Called by IBC Handler on MsgOpenAck +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + if counterpartyVersion != types.Version { + return sdkerrors.Wrapf(types.ErrInvalidVersion, "invalid counterparty version: %s, expected %s", counterpartyVersion, types.Version) +} + + / do custom logic + + return nil +} + +/ Called by IBC Handler on MsgOpenConfirm +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / do custom logic + + return nil +} +``` + +The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. Closing a channel is a 2-step handshake: the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`. + +```go expandable +/ Called by IBC Handler on MsgCloseInit +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ...
do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgCloseConfirm +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} +``` + +### Channel handshake version negotiation + +Application modules are expected to verify versioning used during the channel handshake procedure. + +- `OnChanOpenInit` will verify that the relayer-chosen parameters + are valid and perform any custom `INIT` logic. + It may return an error if the chosen parameters are invalid + in which case the handshake is aborted. + If the provided version string is non-empty, `OnChanOpenInit` should return + the version string if valid or an error if the provided version is invalid. + **If the version string is empty, `OnChanOpenInit` is expected to + return a default version string representing the version(s) + it supports.** + If there is no default version string for the application, + it should return an error if the provided version is an empty string. +- `OnChanOpenTry` will verify the relayer-chosen parameters along with the + counterparty-chosen version string and perform custom `TRY` logic. + If the relayer-chosen parameters + are invalid, the callback must return an error to abort the handshake. + If the counterparty-chosen version is not compatible with this module's + supported versions, the callback must return an error to abort the handshake. + If the versions are compatible, the try callback must select the final version + string and return it to core IBC. + `OnChanOpenTry` may also perform custom initialization logic. +- `OnChanOpenAck` will error if the counterparty selected version string + is invalid and abort the handshake. It may also perform custom ACK logic. 
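The version checks described in the callbacks above can be sketched in a self-contained way with the simple string-matching strategy that ICS20 uses. `negotiateAppVersion` and the `example-1` version string below are hypothetical illustrations, not ibc-go identifiers:

```go
package main

import (
	"errors"
	"fmt"
)

// Version is the single version this hypothetical module supports,
// analogous to ICS20's "ics20-1".
const Version = "example-1"

// negotiateAppVersion sketches string-matching negotiation: an empty
// proposal yields the default version, an exact match is accepted, and
// anything else aborts the handshake with an error.
func negotiateAppVersion(proposed string) (string, error) {
	if proposed == "" {
		// OnChanOpenInit with an empty version string: return the default.
		return Version, nil
	}
	if proposed != Version {
		return "", errors.New("unsupported version: " + proposed)
	}
	return proposed, nil
}

func main() {
	v, _ := negotiateAppVersion("")
	fmt.Println(v) // example-1
	if _, err := negotiateAppVersion("example-2"); err != nil {
		fmt.Println("handshake aborted:", err)
	}
}
```

In a real module, `OnChanOpenInit` and `OnChanOpenTry` would call such a helper with the relayer- or counterparty-chosen version and return its error to abort the handshake.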
+ +Versions must be strings but can implement any versioning structure. If your application plans to +have linear releases then semantic versioning is recommended. If your application plans to release +various features in between major releases then it is advised to use the same versioning scheme +as IBC. This versioning scheme specifies a version identifier and compatible feature set with +that identifier. Valid version selection includes selecting a compatible version identifier with +a subset of features supported by your application for that version. The struct used for this +scheme can be found in [03-connection/types](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection/types/version.go#L16). + +Since the version type is a string, applications have the ability to do simple version verification +via string matching or they can use the already implemented versioning system and pass the proto +encoded version into each handshake call as necessary. + +ICS20 currently implements basic string matching with a single supported version. + +## Packet callbacks + +Just as IBC expects modules to implement callbacks for channel handshakes, it also expects modules to implement callbacks for handling the packet flow through a channel, as defined in the `IBCModule` interface. + +Once a module A and module B are connected to each other, relayers can start relaying packets and acknowledgements back and forth on the channel. + +![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png) + +Briefly, a successful packet flow works as follows: + +1. module A sends a packet through the IBC module +2. the packet is received by module B +3. if module B writes an acknowledgement of the packet then module A will process the + acknowledgement +4. if the packet is not successfully received before the timeout, then module A processes the + packet's timeout.
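The happy-path steps above can be sketched end to end with stand-in types. None of the types or functions below are ibc-go APIs; they only make the ordering of the callbacks concrete:

```go
package main

import "fmt"

// Stand-in types that only illustrate the ordering of the packet flow.
type Packet struct{ Data []byte }

type Ack struct {
	Success bool
	Result  []byte
}

// onRecvPacket plays the role of module B's OnRecvPacket: process the
// packet and write an acknowledgement.
func onRecvPacket(p Packet) Ack {
	return Ack{Success: true, Result: append([]byte("got:"), p.Data...)}
}

// onAcknowledgementPacket plays the role of module A's
// OnAcknowledgementPacket: act on the relayed acknowledgement.
func onAcknowledgementPacket(ack Ack) string {
	if ack.Success {
		return "delivered"
	}
	return "failed"
}

func main() {
	// 1. module A sends a packet (through the IBC module, elided here).
	p := Packet{Data: []byte("hello")}
	// 2. a relayer delivers the packet to module B, which writes an ack.
	ack := onRecvPacket(p)
	// 3. the relayer carries the ack back and module A processes it.
	fmt.Println(onAcknowledgementPacket(ack)) // delivered
}
```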
+ +### Sending packets + +Modules **do not send packets through callbacks**, since the modules initiate the action of sending packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC +module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `EncodePacketData(customPacketData)` function. + +```go +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) + +packet.Data = data +/ Send packet to IBC, authenticating with channelCap +IBCChannelKeeper.SendPacket(ctx, channelCap, packet) +``` + + + In order to prevent modules from sending packets on channels they do not own, + IBC expects modules to pass in the correct channel capability for the packet's + source channel. + + +### Receiving packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets +invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC +keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. 
+ +The state changes that occurred during this callback will only be written if: + +- the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement +- if the acknowledgement returned is nil indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodePacketData(packet.Data)` function. + +```go expandable +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) + +ibcexported.Acknowledgement { + / Decode the packet data + packetData := DecodePacketData(packet.Data) + + / do application state changes based on packet data and return the acknowledgement + / NOTE: The acknowledgement will indicate to the IBC handler if the application + / state changes should be written via the `Success()` function. Application state + / changes are only written if the acknowledgement is successful or the acknowledgement + / returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + +return ack +} +``` + +Reminder, the `Acknowledgement` interface: + +```go +/ Acknowledgement defines the interface used to return +/ acknowledgements in the OnRecvPacket callback. +type Acknowledgement interface { + Success() + +bool + Acknowledgement() []byte +} +``` + +### Acknowledging packets + +After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can +then process the acknowledgement using the `OnAcknowledgementPacket` callback. 
The contents of the +acknowledgement are entirely up to the modules on the channel (just like the packet data); however, it +may often contain information on whether the packet was successfully processed along +with some additional data that could be useful for remediation if the packet processing failed. + +Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and +acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback +is responsible for decoding the acknowledgement and processing it. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodeAcknowledgement(acknowledgments)` and `processAck(ack)` functions. + +```go expandable +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, +) (*sdk.Result, error) { + / Decode acknowledgement + ack := DecodeAcknowledgement(acknowledgement) + + / process ack + res, err := processAck(ack) + +return res, err +} +``` + +### Timeout packets + +If the timeout for a packet is reached before the packet is successfully received or the +counterparty channel end is closed before the packet is successfully received, then the receiving +chain can no longer process it. Thus, the sending chain must process the timeout using +`OnTimeoutPacket` to handle this situation. Again, the IBC module will verify that the timeout is +indeed valid, so our module only needs to implement the state machine logic for what to do once a +timeout is reached and the packet can no longer be received.
+ +```go +func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) (*sdk.Result, error) { + / do custom timeout logic +} +``` diff --git a/docs/ibc/v5.4.x/ibc/apps/keeper.mdx b/docs/ibc/v5.4.x/ibc/apps/keeper.mdx new file mode 100644 index 00000000..d3b6faa9 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/apps/keeper.mdx @@ -0,0 +1,119 @@ +--- +title: Keeper +--- + +## Synopsis + +Learn how to implement the IBC Module keeper. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v5.4.x/ibc/overview) +- [IBC default integration](/docs/ibc/v5.4.x/middleware/ics29-fee/integration) + + +In the previous sections, on channel handshake callbacks and port binding in `InitGenesis`, a reference was made to keeper methods that need to be implemented when creating a custom IBC module. Below is an overview of how to define an IBC module's keeper. + +> Note that some code has been left out for clarity, to get a full code overview, please refer to [the transfer module's keeper in the ibc-go repo](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/keeper.go). + +```go expandable +/ Keeper defines the IBC app module keeper +type Keeper struct { + storeKey sdk.StoreKey + cdc codec.BinaryCodec + paramSpace paramtypes.Subspace + + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + scopedKeeper capabilitykeeper.ScopedKeeper + + / ... additional according to custom logic +} + +/ NewKeeper creates a new IBC app module Keeper instance +func NewKeeper( + / args +) + +Keeper { + / ... + + return Keeper{ + cdc: cdc, + storeKey: key, + paramSpace: paramSpace, + + channelKeeper: channelKeeper, + portKeeper: portKeeper, + scopedKeeper: scopedKeeper, + + / ... 
additional according to custom logic +} +} + +/ IsBound checks if the IBC app module is already bound to the desired port +func (k Keeper) + +IsBound(ctx sdk.Context, portID string) + +bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + +return ok +} + +/ BindPort defines a wrapper function for the port Keeper's function in +/ order to expose it to module's InitGenesis function +func (k Keeper) + +BindPort(ctx sdk.Context, portID string) + +error { + cap := k.portKeeper.BindPort(ctx, portID) + +return k.ClaimCapability(ctx, cap, host.PortPath(portID)) +} + +/ GetPort returns the portID for the IBC app module. Used in ExportGenesis +func (k Keeper) + +GetPort(ctx sdk.Context) + +string { + store := ctx.KVStore(k.storeKey) + +return string(store.Get(types.PortKey)) +} + +/ SetPort sets the portID for the IBC app module. Used in InitGenesis +func (k Keeper) + +SetPort(ctx sdk.Context, portID string) { + store := ctx.KVStore(k.storeKey) + +store.Set(types.PortKey, []byte(portID)) +} + +/ AuthenticateCapability wraps the scopedKeeper's AuthenticateCapability function +func (k Keeper) + +AuthenticateCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +bool { + return k.scopedKeeper.AuthenticateCapability(ctx, cap, name) +} + +/ ClaimCapability allows the IBC app module to claim a capability that core IBC +/ passes to it +func (k Keeper) + +ClaimCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +error { + return k.scopedKeeper.ClaimCapability(ctx, cap, name) +} + +/ ... additional according to custom logic +``` diff --git a/docs/ibc/v5.4.x/ibc/apps/packets_acks.mdx b/docs/ibc/v5.4.x/ibc/apps/packets_acks.mdx new file mode 100644 index 00000000..a28e60ee --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/apps/packets_acks.mdx @@ -0,0 +1,104 @@ +--- +title: Define packets and acks +--- + +## Synopsis + +Learn how to define custom packet and acknowledgement structs and how to encode and decode them. 
+ + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v5.4.x/ibc/overview) +- [IBC default integration](/docs/ibc/v5.4.x/middleware/ics29-fee/integration) + + + +## Custom packets + +Modules connected by a channel must agree on what application data they are sending over the +channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up +to each application module to determine how to implement this agreement. However, for most +applications this will happen as a version negotiation during the channel handshake. While more +complex version negotiation is possible to implement inside the channel opening handshake, a very +simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go). + +Thus, a module must define its custom packet data structure, along with a well-defined way to +encode and decode it to and from `[]byte`. + +```go expandable +/ Custom packet data defined in application module +type CustomPacketData struct { + / Custom fields ... +} + +EncodePacketData(packetData CustomPacketData) []byte { + / encode packetData to bytes +} + +DecodePacketData(encoded []byte) (CustomPacketData) { + / decode from bytes to packet data +} +``` + +> Note that the `CustomPacketData` struct is defined in the proto definition and then compiled by the protobuf compiler. + +Then a module must encode its packet data before sending it through IBC. + +```go +/ Sending custom application packet data + data := EncodePacketData(customPacketData) + +packet.Data = data +IBCChannelKeeper.SendPacket(ctx, packet) +``` + +A module receiving a packet must decode the `PacketData` into a structure it expects so that it can +act on it. 
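To make the encode/decode agreement above concrete, here is a self-contained sketch. JSON is used purely to keep the example runnable; real modules typically define `CustomPacketData` in protobuf, and the field names here are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CustomPacketData is a hypothetical application payload. Real modules
// define this struct in protobuf; JSON is used here only so the sketch
// stays self-contained.
type CustomPacketData struct {
	Sender string `json:"sender"`
	Amount uint64 `json:"amount"`
}

// EncodePacketData produces the []byte carried in packet.Data.
func EncodePacketData(d CustomPacketData) ([]byte, error) {
	return json.Marshal(d)
}

// DecodePacketData recovers the payload on the receiving side.
func DecodePacketData(bz []byte) (CustomPacketData, error) {
	var d CustomPacketData
	err := json.Unmarshal(bz, &d)
	return d, err
}

func main() {
	bz, err := EncodePacketData(CustomPacketData{Sender: "cosmos1sender", Amount: 42})
	if err != nil {
		panic(err)
	}
	d, err := DecodePacketData(bz)
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Sender, d.Amount) // cosmos1sender 42
}
```

Whatever encoding is chosen, both channel ends must apply the same pair of functions, since IBC itself treats `packet.Data` as opaque bytes.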
+ +```go +/ Receiving custom application packet data (in OnRecvPacket) + packetData := DecodePacketData(packet.Data) +/ handle received custom packet data +``` + +## Acknowledgements + +Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing. +In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement +will be written once the packet has been processed by the application which may be well after the packet receipt. + +NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement +for a packet as soon as it has been received from the IBC module. + +This acknowledgement can then be relayed back to the original sender chain, which can take action +depending on the contents of the acknowledgement. + +Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and +receive acknowledgements with the IBC modules as byte strings. + +Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an +acknowledgement struct, along with encoding and decoding it, is very similar to the packet data +example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope) +specifies a recommended format for acknowledgements. This acknowledgement type can be imported from +[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types). + +While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto): + +```protobuf expandable +/ Acknowledgement is the recommended acknowledgement format to be used by +/ app-specific protocols.
+/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental +/ conflicts with other protobuf message formats used for acknowledgements. +/ The first byte of any message with this format will be the non-ASCII values +/ `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS: +/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope +message Acknowledgement { + / response contains either a result or an error and must be non-empty + oneof response { + bytes result = 21; + string error = 22; + } +} +``` diff --git a/docs/ibc/v5.4.x/ibc/apps/routing.mdx b/docs/ibc/v5.4.x/ibc/apps/routing.mdx new file mode 100644 index 00000000..915fdd5e --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/apps/routing.mdx @@ -0,0 +1,40 @@ +--- +title: Routing +--- + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v5.4.x/ibc/overview) +- [IBC default integration](/docs/ibc/v5.4.x/middleware/ics29-fee/integration) + + + +## Synopsis + +Learn how to hook a route to the IBC router for the custom IBC module. + +As mentioned above, modules must implement the `IBCModule` interface (which contains both channel +handshake callbacks and packet handling callbacks). The concrete implementation of this interface +must be registered with the module name as a route on the IBC `Router`. + +```go expandable +/ app.go +func NewApp(...args) *App { +/ ... + +/ Create static IBC router, add module routes, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) +/ Note: moduleCallbacks must implement IBCModule interface +ibcRouter.AddRoute(moduleName, moduleCallbacks) + +/ Setting Router will finalize all routes by sealing router +/ No more routes can be added +app.IBCKeeper.SetRouter(ibcRouter) + +/ ... 
+} +``` diff --git a/docs/ibc/v5.4.x/ibc/integration.mdx b/docs/ibc/v5.4.x/ibc/integration.mdx new file mode 100644 index 00000000..3ab57303 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/integration.mdx @@ -0,0 +1,224 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate IBC to your application and send data packets to other chains. + +This document outlines the required steps to integrate and configure the [IBC +module](https://github.com/cosmos/ibc-go/tree/main/modules/core) to your Cosmos SDK application and +send fungible token transfers to other chains. + +## Integrating the IBC module + +Integrating the IBC module to your SDK-based application is straightforward. The general changes can be summarized in the following steps: + +- Add required modules to the `module.BasicManager` +- Define additional `Keeper` fields for the new modules on the `App` type +- Add the module's `StoreKeys` and initialize their `Keepers` +- Set up corresponding routers and routes for the `ibc` module +- Add the modules to the module `Manager` +- Add modules to `Begin/EndBlockers` and `InitGenesis` +- Update the module `SimulationManager` to enable simulations + +### Module `BasicManager` and `ModuleAccount` permissions + +The first step is to add the following modules to the `BasicManager`: `x/capability`, `x/ibc`, +and `x/ibc-transfer`. After that, we need to grant `Minter` and `Burner` permissions to +the `ibc-transfer` `ModuleAccount` to mint and burn relayed tokens. + +```go expandable +/ app.go +var ( + + ModuleBasics = module.NewBasicManager( + / ... + capability.AppModuleBasic{ +}, + ibc.AppModuleBasic{ +}, + transfer.AppModuleBasic{ +}, / i.e ibc-transfer module + ) + + / module account permissions + maccPerms = map[string][]string{ + / other module accounts permissions + / ... 
+ ibctransfertypes.ModuleName: { + authtypes.Minter, authtypes.Burner +}, +} +) +``` + +### Application fields + +Then, we need to register the `Keepers` as follows: + +```go expandable +/ app.go +type App struct { + / baseapp, keys and subspaces definitions + + / other keepers + / ... + IBCKeeper *ibckeeper.Keeper / IBC Keeper must be a pointer in the app, so we can SetRouter on it correctly + TransferKeeper ibctransferkeeper.Keeper / for cross-chain fungible token transfers + + / make scoped keepers public for test purposes + ScopedIBCKeeper capabilitykeeper.ScopedKeeper + ScopedTransferKeeper capabilitykeeper.ScopedKeeper + + / ... + / module and simulation manager definitions +} +``` + +### Configure the `Keepers` + +During initialization, besides initializing the IBC `Keepers` (for the `x/ibc` and +`x/ibc-transfer` modules), we need to grant specific capabilities through the capability module +`ScopedKeepers` so that we can authenticate the object-capability permissions for each of the IBC +channels. + +```go expandable +func NewApp(...args) *App { + / define codecs and baseapp + + / add capability keeper and ScopeToModule for ibc module + app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + + / grant capabilities for the ibc and ibc-transfer modules + scopedIBCKeeper := app.CapabilityKeeper.ScopeToModule(ibchost.ModuleName) + scopedTransferKeeper := app.CapabilityKeeper.ScopeToModule(ibctransfertypes.ModuleName) + + / ...
other modules keepers + + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, + ) + + / Create Transfer Keepers + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, + ) + transferModule := transfer.NewAppModule(app.TransferKeeper) + + / .. continues +} +``` + +### Register `Routers` + +IBC needs to know which module is bound to which port so that it can route packets to the +appropriate module and call the appropriate callbacks. The port to module name mapping is handled by +IBC's port `Keeper`. However, the mapping from module name to the relevant callbacks is accomplished +by the port +[`Router`](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/router.go) on the +IBC module. + +Adding the module routes allows the IBC handler to call the appropriate callback when processing a +channel handshake or a packet. + +Currently, a `Router` is static so it must be initialized and set correctly on app initialization. +Once the `Router` has been set, no new routes can be added. + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + / Create static IBC router, add ibc-transfer module route, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / .. continues +``` + +### Module Managers + +In order to use IBC, we need to add the new modules to the module `Manager` and to the `SimulationManager` in case your application supports simulations. 
+ +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + app.mm = module.NewManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / ... + + app.sm = module.NewSimulationManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / .. continues +``` + +### Application ABCI Ordering + +One addition from IBC is the concept of `HistoricalEntries` which are stored on the staking module. +Each entry contains the historical information for the `Header` and `ValidatorSet` of this chain which is stored +at each height during the `BeginBlock` call. The historical info is required to introspect the +past historical info at any given height in order to verify the light client `ConsensusState` during the +connection handshake. + +The IBC module also has +[`BeginBlock`](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client/abci.go) logic as well. This is optional as it is only required if your application uses the localhost client to connect two different modules from the same chain. + + + Only register the ibc module to the `SetOrderBeginBlockers` if your + application will use the localhost (*aka* loopback) client. + + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + / add staking and ibc modules to BeginBlockers + app.mm.SetOrderBeginBlockers( + / other modules ... + stakingtypes.ModuleName, ibchost.ModuleName, + ) + + / ... + + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. + app.mm.SetOrderInitGenesis( + capabilitytypes.ModuleName, + / other modules ... + ibchost.ModuleName, ibctransfertypes.ModuleName, + ) + + / .. 
continues +``` + + + **IMPORTANT**: The capability module **must** be declared first in + `SetOrderInitGenesis`. + + +That's it! You have now wired up the IBC module and are able to send fungible tokens across +different chains. If you want a broader view of the changes, take a look at the SDK's +[`SimApp`](https://github.com/cosmos/ibc-go/blob/main/testing/simapp/app.go). diff --git a/docs/ibc/v5.4.x/ibc/middleware/develop.mdx b/docs/ibc/v5.4.x/ibc/middleware/develop.mdx new file mode 100644 index 00000000..36693888 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/middleware/develop.mdx @@ -0,0 +1,434 @@ +--- +title: IBC middleware +--- + +## Synopsis + +Learn how to write your own custom middleware to wrap an IBC application, and understand how to hook different middleware to IBC base applications to form different IBC application stacks. + +This document serves as a guide for middleware developers who want to write their own middleware and for chain developers who want to use IBC middleware on their chains. + +IBC applications are designed to be self-contained modules that implement their own application-specific logic through a set of interfaces with the core IBC handlers. These core IBC handlers, in turn, are designed to enforce the correctness properties of IBC (transport, authentication, ordering) while delegating all application-specific handling to the IBC application modules. However, there are cases where some functionality may be desired by many applications, yet not appropriate to place in core IBC. + +Middleware allows developers to define such extensions as separate modules that can wrap over the base application. This middleware can thus perform its own custom logic, and pass data into the application so that it may run its logic without being aware of the middleware's existence. This allows both the application and the middleware to implement their own isolated logic while still being able to run as part of a single packet flow.
+ + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v5.4.x/ibc/overview) +- [IBC Integration](/docs/ibc/v5.4.x/middleware/ics29-fee/integration) +- [IBC Application Developer Guide](/docs/ibc/v5.4.x/ibc/apps/apps) + + + +## Definitions + +`Middleware`: A self-contained module that sits between core IBC and an underlying IBC application during packet execution. All messages between core IBC and the underlying application must flow through the middleware, which may perform its own custom logic. + +`Underlying Application`: An underlying application is the application that is directly connected to the middleware in question. This underlying application may itself be middleware that is chained to a base application. + +`Base Application`: A base application is an IBC application that does not contain any middleware. It may be wrapped by zero or more middleware to form an application stack. + +`Application Stack (or stack)`: A stack is the complete set of application logic (middleware(s) + base application) that gets connected to core IBC. A stack may be just a base application, or it may be a series of middlewares that nest a base application. + +## Create a custom IBC middleware + +IBC middleware wraps an underlying IBC application and sits between core IBC and the application. It has complete control in modifying any message coming from IBC to the application, and any message coming from the application to core IBC. Thus, middleware must be completely trusted by the chain developers who wish to integrate it; however, this also gives it complete flexibility in modifying the application(s) it wraps.
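To make the wrapping relationship concrete, here is a minimal, self-contained sketch. `PacketHandler`, `BaseApp`, and `LoggingMiddleware` are illustrative stand-ins, not the actual ibc-go interfaces:

```go
package main

import "fmt"

// PacketHandler is a toy stand-in for the ICS-26 callback interface that
// both base applications and middleware implement.
type PacketHandler interface {
	OnRecvPacket(data string) string
}

// BaseApp is a toy base application with no knowledge of any middleware.
type BaseApp struct{}

func (BaseApp) OnRecvPacket(data string) string {
	return "ack(" + data + ")"
}

// LoggingMiddleware wraps an underlying handler: it runs its own logic,
// then delegates to the layer underneath, as described above.
type LoggingMiddleware struct {
	app PacketHandler
}

func (m LoggingMiddleware) OnRecvPacket(data string) string {
	fmt.Println("middleware saw packet:", data)
	return m.app.OnRecvPacket(data) // delegate to the underlying app
}

func main() {
	// stack = middleware wrapping a base application; core IBC would only
	// ever talk to the outermost layer.
	var stack PacketHandler = LoggingMiddleware{app: BaseApp{}}
	fmt.Println(stack.OnRecvPacket("hello"))
}
```

Core IBC holds only the outermost layer (`stack`); each layer decides whether and how to delegate downward, which is what lets the base application stay unaware of the middleware.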
+ +### Interfaces + +```go +/ Middleware implements the ICS26 Module interface +type Middleware interface { + porttypes.IBCModule / middleware has access to an underlying application which may be wrapped by more middleware + ICS4Wrapper / middleware has access to an ICS4Wrapper which may be the core IBC Channel Handler or a higher-level middleware that wraps this middleware. +} +``` + +```go +/ This is implemented by ICS4 and all middleware that are wrapping a base application. +/ The base application will call `sendPacket` or `writeAcknowledgement` of the middleware directly above them +/ which will call the next middleware until it reaches the core IBC handler. +type ICS4Wrapper interface { + SendPacket(ctx sdk.Context, chanCap *capabilitytypes.Capability, packet exported.Packet) error + WriteAcknowledgement(ctx sdk.Context, chanCap *capabilitytypes.Capability, packet exported.Packet, ack exported.Acknowledgement) error + GetAppVersion(ctx sdk.Context, portID, channelID string) (string, bool) +} +``` + +### Implement `IBCModule` interface and callbacks + +The `IBCModule` is a struct that implements the [ICS-26 interface (`porttypes.IBCModule`)](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/module.go#L11-L106). It is recommended to separate these callbacks into a separate file `ibc_module.go`. As will be mentioned in the [integration section](/docs/ibc/v5.4.x/ibc/middleware/integration), this struct should be different from the struct that implements `AppModule`, in case the middleware maintains its own internal state and processes separate SDK messages. + +The middleware must have access to the underlying application, and be called before it during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback. Middleware **may** choose not to call the underlying application's callback at all, though this should generally be limited to error cases.
+ +In the case where the IBC middleware expects to speak to a compatible IBC middleware on the counterparty chain, it must use the channel handshake to negotiate the middleware version without interfering in the version negotiation of the underlying application. + +Middleware accomplishes this by formatting the version in a JSON-encoded string containing the middleware version and the application version. The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application. The format of the version is specified in ICS-30 as the following: + +```json +{ + "<middleware_version_key>": "<middleware_version_value>", + "app_version": "<application_version_value>" +} +``` + +The `<middleware_version_key>` key in the JSON struct should be replaced by the actual name of the key for the corresponding middleware (e.g. `fee_version`). + +During the handshake callbacks, the middleware can unmarshal the version string and retrieve the middleware and application versions. It can do its negotiation logic on `<middleware_version_value>`, and pass the `<application_version_value>` to the underlying application. + +The middleware should simply pass the capability in the callback arguments along to the underlying application so that it may be claimed by the base application. The base application will then pass the capability up the stack in order to authenticate an outgoing packet/acknowledgement. + +In the case where the middleware wishes to send a packet or acknowledgment without the involvement of the underlying application, it should be given access to the same `scopedKeeper` as the base application so that it can retrieve the capabilities by itself.
+ +### Handshake callbacks + +#### `OnChanOpenInit` + +```go expandable +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + if version != "" { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + metadata, err := Unmarshal(version) + if err != nil { + / Since it is valid for fee version to not be specified, + / the above middleware version may be for another middleware. + / Pass the entire version string onto the underlying application. + return im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + version, + ) +} + +else { + metadata = { + / set middleware version to default value + MiddlewareVersion: defaultMiddlewareVersion, + / allow application to return its default version + AppVersion: "", +} + +} + +doCustomLogic() + + / if the version string is empty, OnChanOpenInit is expected to return + / a default version string representing the version(s) + +it supports + appVersion, err := im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + metadata.AppVersion, / note we only pass app version here + ) + if err != nil { + return "", err +} + version := constructVersion(metadata.MiddlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L34-L82) an example implementation of this callback for the ICS29 Fee Middleware module. 
+ +#### `OnChanOpenTry` + +```go expandable +func OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + cpMetadata, err := Unmarshal(counterpartyVersion) + if err != nil { + return app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + counterpartyVersion, + ) +} + +doCustomLogic() + + / Call the underlying application's OnChanOpenTry callback. + / The try callback must select the final app-specific version string and return it. + appVersion, err := app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + cpMetadata.AppVersion, / note we only pass counterparty app version here + ) + if err != nil { + return "", err +} + + / negotiate final middleware version + middlewareVersion := negotiateMiddlewareVersion(cpMetadata.MiddlewareVersion) + version := constructVersion(middlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L84-L124) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenAck` + +```go expandable +func OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyChannelID string, + counterpartyVersion string, +) + +error { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. 
+ cpMetadata, err := UnmarshalJSON(counterpartyVersion) + if err != nil { + return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, counterpartyVersion) +} + if !isCompatible(cpMetadata.MiddlewareVersion) { + return error +} + +doCustomLogic() + + / call the underlying application's OnChanOpenAck callback + return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, cpMetadata.AppVersion) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L126-L152) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenConfirm` + +```go +func OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanOpenConfirm(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L154-L162) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanCloseInit` + +```go +func OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanCloseInit(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L164-L187) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanCloseConfirm` + +```go +func OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanCloseConfirm(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L189-L212) an example implementation of this callback for the ICS29 Fee Middleware module.
+ +**NOTE**: Middleware that does not need to negotiate with a counterparty middleware on the remote stack will not implement the version unmarshalling and negotiation, and will simply perform its own custom logic on the callbacks without relying on the counterparty behaving similarly. + +### Packet callbacks + +The packet callbacks just like the handshake callbacks wrap the application's packet callbacks. The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide degree of usecases, as a simple base application like token-transfer can be transformed for a variety of usecases by combining it with custom middleware. + +#### `OnRecvPacket` + +```go expandable +func OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + doCustomLogic(packet) + ack := app.OnRecvPacket(ctx, packet, relayer) + +doCustomLogic(ack) / middleware may modify outgoing ack + return ack +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L214-L237) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnAcknowledgementPacket` + +```go +func OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet, ack) + +return app.OnAcknowledgementPacket(ctx, packet, ack, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L239-L292) an example implementation of this callback for the ICS29 Fee Middleware module. 
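The packet-callback pattern above can also be sketched as a plain function wrapper. `RecvFunc` and the string packet data are illustrative stand-ins for `channeltypes.Packet` and the real callback signatures:

```go
package main

import (
	"fmt"
	"strings"
)

// RecvFunc is a toy stand-in for an OnRecvPacket callback.
type RecvFunc func(data string) (ack string)

// withRewriter wraps an underlying receive callback: it rewrites the
// incoming packet data before the app sees it, then rewrites the outgoing
// ack, mirroring the doCustomLogic calls in the pseudocode above.
func withRewriter(app RecvFunc) RecvFunc {
	return func(data string) string {
		data = strings.TrimPrefix(data, "mw:") // modify incoming data
		ack := app(data)                       // underlying app callback
		return ack + "+mw"                     // modify outgoing ack
	}
}

func main() {
	base := func(data string) string { return "ack(" + data + ")" }
	stack := withRewriter(base)
	fmt.Println(stack("mw:transfer")) // ack(transfer)+mw
}
```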
+ +#### `OnTimeoutPacket` + +```go +func OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet) + +return app.OnTimeoutPacket(ctx, packet, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L294-L334) an example implementation of this callback for the ICS29 Fee Middleware module. + +### ICS-4 wrappers + +Middleware must also wrap ICS-4 so that any communication from the application to the `channelKeeper` goes through the middleware first. Similar to the packet callbacks, the middleware may modify outgoing acknowledgements and packets in any way it wishes. + +#### `SendPacket` + +```go +func SendPacket( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + appPacket exported.PacketI, +) { + / middleware may modify packet + packet = doCustomLogic(appPacket) + +return ics4Keeper.SendPacket(ctx, chanCap, packet) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L336-L343) an example implementation of this function for the ICS29 Fee Middleware module. + +#### `WriteAcknowledgement` + +```go expandable +/ only called for async acks +func WriteAcknowledgement( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + packet exported.PacketI, + ack exported.Acknowledgement, +) { + / middleware may modify acknowledgement + ack_bytes = doCustomLogic(ack) + +return ics4Keeper.WriteAcknowledgement(packet, ack_bytes) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L345-L353) an example implementation of this function for the ICS29 Fee Middleware module. 
+ +#### `GetAppVersion` + +```go expandable +/ middleware must return the underlying application version +func GetAppVersion( + ctx sdk.Context, + portID, + channelID string, +) (string, bool) { + version, found := ics4Keeper.GetAppVersion(ctx, portID, channelID) + if !found { + return "", false +} + if !MiddlewareEnabled { + return version, true +} + + / unwrap channel version + metadata, err := Unmarshal(version) + if err != nil { + panic(fmt.Errorf("unable to unmarshal version: %w", err)) +} + +return metadata.AppVersion, true +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L355-L358) an example implementation of this function for the ICS29 Fee Middleware module. diff --git a/docs/ibc/v5.4.x/ibc/middleware/integration.mdx b/docs/ibc/v5.4.x/ibc/middleware/integration.mdx new file mode 100644 index 00000000..3b942b53 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/middleware/integration.mdx @@ -0,0 +1,78 @@ +--- +title: Integrating IBC middleware into a chain +description: >- + Learn how to integrate IBC middleware(s) with a base application to your + chain. The following document only applies to Cosmos SDK chains. +--- + +Learn how to integrate IBC middleware(s) with a base application to your chain. The following document only applies to Cosmos SDK chains. + +If the middleware is maintaining its own state and/or processing SDK messages, then it should create and register its SDK module **only once** with the module manager in `app.go`. + +All middleware must be connected to the IBC router and wrap over an underlying base IBC application. An IBC application may be wrapped by many layers of middleware; only the top-layer middleware should be hooked to the IBC router, with all underlying middlewares and the application wrapped by it.
+ +The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application, while function calls from the application to IBC go from the bottom middleware up to the top middleware and then to the core IBC handlers. Thus, the same set of middleware put in different orders may produce different effects. + +## Example integration + +```go expandable +/ app.go + +/ middleware 1 and middleware 3 are stateful middleware, +/ perhaps implementing separate sdk.Msg and Handlers +mw1Keeper := mw1.NewKeeper(storeKey1) + +mw3Keeper := mw3.NewKeeper(storeKey3) + +/ Only create App Module **once** and register in app module +/ if the module maintains independent state and/or processes sdk.Msgs +app.moduleManager = module.NewManager( + ... + mw1.NewAppModule(mw1Keeper), + mw3.NewAppModule(mw3Keeper), + transfer.NewAppModule(transferKeeper), + custom.NewAppModule(customKeeper) +) + +mw1IBCModule := mw1.NewIBCModule(mw1Keeper) + +mw2IBCModule := mw2.NewIBCModule() / middleware2 is stateless middleware +mw3IBCModule := mw3.NewIBCModule(mw3Keeper) + scopedKeeperTransfer := capabilityKeeper.NewScopedKeeper("transfer") + +scopedKeeperCustom1 := capabilityKeeper.NewScopedKeeper("custom1") + +scopedKeeperCustom2 := capabilityKeeper.NewScopedKeeper("custom2") + +/ NOTE: IBC Modules may be initialized any number of times provided they use a separate +/ scopedKeeper and underlying port. + +/ initialize base IBC applications +/ if you want to create two different stacks with the same base application, +/ they must be given different scopedKeepers and assigned different ports.
+ transferIBCModule := transfer.NewIBCModule(transferKeeper) + +customIBCModule1 := custom.NewIBCModule(customKeeper, "portCustom1") + +customIBCModule2 := custom.NewIBCModule(customKeeper, "portCustom2") + +/ create IBC stacks by combining middleware with base application +/ NOTE: since middleware2 is stateless it does not require a Keeper +/ stack 1 contains mw1 -> mw3 -> transfer +stack1 := mw1.NewIBCMiddleware(mw3.NewIBCMiddleware(transferIBCModule, mw3Keeper), mw1Keeper) +/ stack 2 contains mw3 -> mw2 -> custom1 +stack2 := mw3.NewIBCMiddleware(mw2.NewIBCMiddleware(customIBCModule1), mw3Keeper) +/ stack 3 contains mw2 -> mw1 -> custom2 +stack3 := mw2.NewIBCMiddleware(mw1.NewIBCMiddleware(customIBCModule2, mw1Keeper)) + +/ associate each stack with the moduleName provided by the underlying scopedKeeper + ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute("transfer", stack1) + +ibcRouter.AddRoute("custom1", stack2) + +ibcRouter.AddRoute("custom2", stack3) + +app.IBCKeeper.SetRouter(ibcRouter) +``` diff --git a/docs/ibc/v5.4.x/ibc/overview.mdx b/docs/ibc/v5.4.x/ibc/overview.mdx new file mode 100644 index 00000000..3a2b5e14 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/overview.mdx @@ -0,0 +1,297 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about IBC, its components, and IBC use cases. + +## What is the Inter-Blockchain Communication Protocol (IBC)? + +This document serves as a guide for developers who want to write their own Inter-Blockchain +Communication protocol (IBC) applications for custom use cases. + +> IBC applications must be written as self-contained modules. + +Due to the modular design of the IBC protocol, IBC +application developers do not need to be concerned with the low-level details of clients, +connections, and proof verification. + +This brief explanation of the lower levels of the +stack gives application developers a broad understanding of the IBC +protocol. 
Abstraction layer details for channels and ports are most relevant for application developers and describe how to define custom packets and `IBCModule` callbacks. + +The requirements to have your module interact over IBC are: + +- Bind to a port or ports. +- Define your packet data. +- Use the default acknowledgment struct provided by core IBC or optionally define a custom acknowledgment struct. +- Standardize an encoding of the packet data. +- Implement the `IBCModule` interface. + +Read on for a detailed explanation of how to write a self-contained IBC application module. + +## Components Overview + +### [Clients](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client) + +IBC clients are on-chain light clients. Each light client is identified by a unique client-id. +IBC clients track the consensus states of other blockchains, along with the proof spec necessary to +properly verify proofs against the client's consensus state. A client can be associated with any number +of connections to the counterparty chain. The client identifier is auto-generated using the client type +and the global client counter appended in the format: `{client-type}-{N}`. + +A `ClientState` should contain chain-specific and light-client-specific information necessary for verifying updates +and upgrades to the IBC client. The `ClientState` may contain information such as chain-id, latest height, proof specs, +unbonding periods or the status of the light client. The `ClientState` should not contain information that +is specific to a given block at a certain height; this is the function of the `ConsensusState`. Each `ConsensusState` +should be associated with a unique block and should be referenced using a height. IBC clients are given a +client-identifier-prefixed store to store their associated client state and consensus states along with +any metadata associated with the consensus states. Consensus states are stored using their associated height.
+ +The supported IBC clients are: + +- [Solo Machine light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine): Devices such as phones, browsers, or laptops. +- [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint): The default for Cosmos SDK-based chains. +- [Localhost (loopback) client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/09-localhost): Useful for + testing, simulation, and relaying packets to modules on the same application. + +### IBC Client Heights + +IBC Client Heights are represented by the struct: + +```go +type Height struct { + RevisionNumber uint64 + RevisionHeight uint64 +} +``` + +The `RevisionNumber` represents the revision of the chain that the height is representing. +A revision typically represents a continuous, monotonically increasing range of block-heights. +The `RevisionHeight` represents the height of the chain within the given revision. + +On any reset of the `RevisionHeight`—for example, when hard-forking a Tendermint chain— +the `RevisionNumber` will get incremented. This allows IBC clients to distinguish between a +block-height `n` of a previous revision of the chain (at revision `p`) and block-height `n` of the current +revision of the chain (at revision `e`). + +`Height`s that share the same revision number can be compared by simply comparing their respective `RevisionHeight`s. +`Height`s that do not share the same revision number will only be compared using their respective `RevisionNumber`s. +Thus a height `h` with revision number `e+1` will always be greater than a height `g` with revision number `e`, +**REGARDLESS** of the difference in revision heights. 
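This comparison rule can be written out directly. The `Height` struct matches the one above; the `GT` method is an illustrative sketch (ibc-go's actual implementation lives in `02-client/types`):

```go
package main

import "fmt"

// Height mirrors the struct shown above.
type Height struct {
	RevisionNumber uint64
	RevisionHeight uint64
}

// GT reports whether h > other: revision numbers dominate, and
// RevisionHeight only breaks ties within the same revision.
func (h Height) GT(other Height) bool {
	if h.RevisionNumber != other.RevisionNumber {
		return h.RevisionNumber > other.RevisionNumber
	}
	return h.RevisionHeight > other.RevisionHeight
}

func main() {
	a := Height{RevisionNumber: 3, RevisionHeight: 0}
	b := Height{RevisionNumber: 2, RevisionHeight: 100000000000}
	fmt.Println(a.GT(b)) // true: revision number wins regardless of height
}
```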
+ +Ex: + +```go +Height{ + RevisionNumber: 3, + RevisionHeight: 0 +} > Height{ + RevisionNumber: 2, + RevisionHeight: 100000000000 +} +``` + +When a Tendermint chain is running a particular revision, relayers can simply submit headers and proofs with the revision number +given by the chain's `chainID`, and the revision height given by the Tendermint block height. When a chain updates using a hard-fork +and resets its block-height, it is responsible for updating its `chainID` to increment the revision number. +IBC Tendermint clients then verify the revision number against their `chainID` and treat the `RevisionHeight` as the Tendermint block-height. + +Tendermint chains wishing to use revisions to maintain persistent IBC connections even across height-resetting upgrades must format their `chainID`s +in the following manner: `{chainID}-{revision_number}`. On any height-resetting upgrade, the `chainID` **MUST** be updated with a higher revision number +than the previous value. + +Ex: + +- Before upgrade `chainID`: `gaiamainnet-3` +- After upgrade `chainID`: `gaiamainnet-4` + +Clients that do not require revisions, such as the solo-machine client, simply hardcode `0` into the revision number whenever they +need to return an IBC height when implementing IBC interfaces and use the `RevisionHeight` exclusively. + +Other client-types can implement their own logic to verify the IBC heights that relayers provide in their `Update`, `Misbehavior`, and +`Verify` functions respectively. + +The IBC interfaces expect an `ibcexported.Height` interface; however, all clients must use the concrete implementation provided in +`02-client/types` and reproduced above. + +### [Connections](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection) + +Connections encapsulate two `ConnectionEnd` objects on two separate blockchains. Each +`ConnectionEnd` is associated with a client of the other blockchain (for example, the counterparty blockchain).
+The connection handshake is responsible for verifying that the light clients on each chain are
+correct for their respective counterparties. Connections, once established, are responsible for
+facilitating all cross-chain verifications of IBC state. A connection can be associated with any
+number of channels.
+
+### [Proofs](https://github.com/cosmos/ibc-go/blob/main/modules/core/23-commitment) and [Paths](https://github.com/cosmos/ibc-go/blob/main/modules/core/24-host)
+
+In IBC, blockchains do not directly pass messages to each other over the network. Instead, to
+communicate, a blockchain commits some state to a specifically defined path that is reserved for a
+specific message type and a specific counterparty. For example, for storing a specific connectionEnd as part
+of a handshake or a packet intended to be relayed to a module on the counterparty chain. A relayer
+process monitors for updates to these paths and relays messages by submitting the data stored
+under the path and a proof to the counterparty chain.
+
+Proofs are passed from core IBC to light-clients as bytes. It is up to the light client implementation to interpret these bytes appropriately.
+
+- The paths that all IBC implementations must use for committing IBC messages are defined in
+  [ICS-24 Host State Machine Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements).
+- The proof format that all implementations must be able to produce and verify is defined in the [ICS-23 Proofs](https://github.com/confio/ics23) implementation.
+
+### Capabilities
+
+IBC is intended to work in execution environments where modules do not necessarily trust each
+other. Thus, IBC must authenticate module actions on ports and channels so that only modules with the
+appropriate permissions can use them.
+
+This module authentication is accomplished using a [dynamic
+capability store](/docs/common/pages/adr-comprehensive#adr-003-dynamic-capability-store).
Upon binding to a port or
+creating a channel for a module, IBC returns a dynamic capability that the module must claim in
+order to use that port or channel. The dynamic capability module prevents other modules from using that port or channel since
+they do not own the appropriate capability.
+
+While this background information is useful, IBC modules do not need to interact at all with
+these lower-level abstractions. The relevant abstraction layer for IBC application developers is
+that of channels and ports. IBC applications must be written as self-contained **modules**.
+
+A module on one blockchain can communicate with other modules on other blockchains by sending,
+receiving, and acknowledging packets through channels that are uniquely identified by the
+`(channelID, portID)` tuple.
+
+A useful analogy is to consider IBC modules as internet applications on
+a computer. A channel can then be conceptualized as an IP connection, with the IBC portID being
+analogous to an IP port and the IBC channelID being analogous to an IP address. Thus, a single
+instance of an IBC module can communicate on the same port with any number of other modules and
+IBC correctly routes all packets to the relevant module using the `(channelID, portID)` tuple. An
+IBC module can also communicate with another IBC module over multiple ports, with each
+`(portID<->portID)` packet stream being sent on a different unique channel.
+
+### [Ports](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port)
+
+An IBC module can bind to any number of ports. Each port must be identified by a unique `portID`.
+Since IBC is designed to be secure with mutually distrusted modules operating on the same ledger,
+binding a port returns a dynamic object capability. In order to take action on a particular port
+(for example, to open a channel with its portID), a module must provide the dynamic object capability to the IBC
+handler.
This requirement prevents a malicious module from opening channels with ports it does not own. Thus,
+IBC modules are responsible for claiming the capability that is returned on `BindPort`.
+
+### [Channels](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+An IBC channel can be established between two IBC ports. Currently, a port is exclusively owned by a
+single module. IBC packets are sent over channels. Just as IP packets contain the destination IP
+address and IP port, and the source IP address and source IP port, IBC packets contain
+the destination portID and channelID, and the source portID and channelID. This packet structure enables IBC to
+correctly route packets to the destination module while allowing modules receiving packets to
+know the sender module.
+
+A channel can be `ORDERED`, where packets from a sending module must be processed by the
+receiving module in the order they were sent. Or a channel can be `UNORDERED`, where packets
+from a sending module are processed in the order they arrive (might be in a different order than they were sent).
+
+Modules can choose which channels they wish to communicate over; thus, IBC expects modules to
+implement callbacks that are called during the channel handshake. These callbacks can perform custom
+channel initialization logic. If any callback returns an error, the channel handshake fails. Thus, by
+returning errors on callbacks, modules can programmatically reject or accept channels.
+
+The channel handshake is a 4-step handshake. Briefly, if a given chain A wants to open a channel with
+chain B using an already established connection:
+
+1. chain A sends a `ChanOpenInit` message to signal a channel initialization attempt with chain B.
+2. chain B sends a `ChanOpenTry` message to try opening the channel on chain A.
+3. chain A sends a `ChanOpenAck` message to mark its channel end status as open.
+4. chain B sends a `ChanOpenConfirm` message to mark its channel end status as open.
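The handshake callbacks a module implements can be sketched as a Go interface. The callback names follow ibc-go's `IBCModule` interface, but the signatures here are heavily abbreviated for illustration (the real interface takes more parameters: ordering, connection hops, capabilities, counterparty, etc.), and `pickyModule` is a hypothetical module that rejects unknown versions:

```go
package main

import (
	"errors"
	"fmt"
)

// ChannelHandshakeCallbacks sketches the callbacks a module implements
// for the 4-step channel handshake. Signatures are abbreviated.
type ChannelHandshakeCallbacks interface {
	OnChanOpenInit(portID, channelID, version string) error                      // step 1, chain A
	OnChanOpenTry(portID, channelID, counterpartyVersion string) (string, error) // step 2, chain B
	OnChanOpenAck(portID, channelID, counterpartyVersion string) error           // step 3, chain A
	OnChanOpenConfirm(portID, channelID string) error                            // step 4, chain B
}

// pickyModule rejects any handshake that does not use its expected version;
// returning an error from a callback aborts the handshake.
type pickyModule struct{ version string }

func (m pickyModule) OnChanOpenInit(portID, channelID, version string) error {
	if version != m.version {
		return errors.New("unsupported version")
	}
	return nil
}

func (m pickyModule) OnChanOpenTry(portID, channelID, counterpartyVersion string) (string, error) {
	if counterpartyVersion != m.version {
		return "", errors.New("unsupported counterparty version")
	}
	return m.version, nil
}

func (m pickyModule) OnChanOpenAck(portID, channelID, counterpartyVersion string) error { return nil }
func (m pickyModule) OnChanOpenConfirm(portID, channelID string) error                  { return nil }

func main() {
	var cb ChannelHandshakeCallbacks = pickyModule{version: "ics20-1"}
	fmt.Println(cb.OnChanOpenInit("transfer", "channel-0", "ics20-1") == nil) // true
}
```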
+
+If all handshake steps are successful, the channel is opened on both sides. At each step in the handshake, the module
+associated with the `ChannelEnd` executes its callback. So
+on `ChanOpenInit`, the module on chain A executes its callback `OnChanOpenInit`.
+
+The channel identifier is auto-derived in the format: `channel-{N}`, where `N` is the next sequence to be used.
+
+Just as ports came with dynamic capabilities, channel initialization returns a dynamic capability
+that the module **must** claim so that it can pass in a capability to authenticate channel actions
+like sending packets. The channel capability is passed into the callback on the first parts of the
+handshake; either `OnChanOpenInit` on the initializing chain or `OnChanOpenTry` on the other chain.
+
+#### Closing channels
+
+Closing a channel occurs in 2 handshake steps as defined in [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics).
+
+`ChanCloseInit` closes a channel on the executing chain if the channel exists, is not
+already closed, and the connection it exists upon is OPEN. Channels can only be closed by a
+calling module or in the case of a packet timeout on an ORDERED channel.
+
+`ChanCloseConfirm` is a response to a counterparty channel executing `ChanCloseInit`. The channel
+on the executing chain closes if the channel exists, the channel is not already closed,
+the connection the channel exists upon is OPEN, and the executing chain successfully verifies
+that the counterparty channel has been closed.
+
+### [Packets](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Modules communicate with each other by sending packets over IBC channels. All
+IBC packets contain the destination `portID` and `channelID` along with the source `portID` and
+`channelID`. This packet structure allows modules to know the sender module of a given packet. IBC packets
+contain a sequence to optionally enforce ordering.
+
+IBC packets also contain a `TimeoutHeight` and a `TimeoutTimestamp` that determine the deadline by which the receiving module must process a packet.
+
+Modules send custom application data to each other inside the `Data []byte` field of the IBC packet.
+Thus, packet data is opaque to IBC handlers. It is incumbent on a sender module to encode
+their application-specific packet information into the `Data` field of packets. The receiver
+module must decode that `Data` back to the original application data.
+
+### [Receipts and Timeouts](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Since IBC works over a distributed network and relies on potentially faulty relayers to relay messages between ledgers,
+IBC must handle the case where a packet does not get sent to its destination in a timely manner or at all. Packets must
+specify a non-zero value for timeout height (`TimeoutHeight`) or timeout timestamp (`TimeoutTimestamp`) after which a packet can no longer be successfully received on the destination chain.
+
+- The `timeoutHeight` indicates a consensus height on the destination chain after which the packet will no longer be processed, and instead counts as having timed out.
+- The `timeoutTimestamp` indicates a timestamp on the destination chain after which the packet will no longer be processed, and instead counts as having timed out.
+
+If the timeout passes without the packet being successfully received, the packet can no longer be
+received on the destination chain. The sending module can timeout the packet and take appropriate actions.
+
+If the timeout is reached, then a proof of packet timeout can be submitted to the original chain. The original chain can then perform
+application-specific logic to timeout the packet, perhaps by rolling back the packet send changes (refunding senders any locked funds, etc.).
+
+- In ORDERED channels, a timeout of a single packet in the channel causes the channel to close.
+
+  - If packet sequence `n` times out, then a packet at sequence `k > n` cannot be received without violating the contract of ORDERED channels that packets are processed in the order that they are sent.
+  - Since ORDERED channels enforce this invariant, a proof that sequence `n` has not been received on the destination chain by the specified timeout of packet `n` is sufficient to timeout packet `n` and close the channel.
+
+- In UNORDERED channels, the application-specific timeout logic for that packet is applied and the channel is not closed.
+
+  - Packets can be received in any order.
+
+  - IBC writes a packet receipt for each sequence received in the UNORDERED channel. This receipt does not contain information; it is simply a marker intended to signify that the UNORDERED channel has received a packet at the specified sequence.
+
+  - To timeout a packet on an UNORDERED channel, a proof is required that a packet receipt **does not exist** for the packet's sequence by the specified timeout.
+
+For this reason, most modules should use UNORDERED channels as they require fewer liveness guarantees to function effectively for users of that channel.
+
+### [Acknowledgments](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Modules can also choose to write application-specific acknowledgments upon processing a packet. Acknowledgments can be written:
+
+- Synchronously on `OnRecvPacket`, if the module processes packets as soon as they are received from the IBC module.
+- Asynchronously, if the module processes packets at some later point after receiving them.
+
+This acknowledgment data is opaque to IBC much like the packet `Data` and is treated by IBC as a simple byte string `[]byte`. Receiver modules must encode their acknowledgment so that the sender module can decode it correctly. The encoding must be negotiated between the two parties during version negotiation in the channel handshake.
+
+The acknowledgment can encode whether the packet processing succeeded or failed, along with additional information that allows the sender module to take appropriate action.
+
+After the acknowledgment has been written by the receiving chain, a relayer relays the acknowledgment back to the original sender module.
+
+The original sender module then executes application-specific acknowledgment logic using the contents of the acknowledgment.
+
+- After an acknowledgment fails, packet-send changes can be rolled back (for example, refunding senders in ICS20).
+
+- After an acknowledgment is received successfully on the original sender chain, the corresponding packet commitment is deleted since it is no longer needed.
+
+## Further Readings and Specs
+
+If you want to learn more about IBC, check the following specifications:
+
+- [IBC specification overview](https://github.com/cosmos/ibc/blob/master/README.md)
diff --git a/docs/ibc/v5.4.x/ibc/proposals.mdx b/docs/ibc/v5.4.x/ibc/proposals.mdx
new file mode 100644
index 00000000..769981a1
--- /dev/null
+++ b/docs/ibc/v5.4.x/ibc/proposals.mdx
@@ -0,0 +1,125 @@
+---
+title: Governance Proposals
+---
+
+In uncommon situations, a highly valued client may become frozen due to uncontrollable
+circumstances. A highly valued client might have hundreds of channels being actively used.
+Some of those channels might have a significant amount of locked tokens used for ICS 20.
+
+If one third of the validator set of the chain the client represents decides to collude,
+they can sign off on two valid but conflicting headers each signed by the other one third
+of the honest validator set. The light client can now be updated with two valid, but conflicting
+headers at the same height. The light client cannot know which header is trustworthy, and therefore
+evidence of such misbehaviour is likely to be submitted, resulting in a frozen light client.
+
+Frozen light clients cannot be updated under any circumstance except via a governance proposal.
+Since a quorum of validators can sign arbitrary state roots which may not be valid executions
+of the state machine, a governance proposal has been added to ease the complexity of unfreezing
+or updating clients which have become "stuck". Without this mechanism, validator sets would need
+to construct a state root to unfreeze the client. Unfreezing clients re-enables all of the channels
+built upon that client. This may result in recovery of otherwise lost funds.
+
+Tendermint light clients may become expired if the trusting period has passed since their
+last update. This may occur if relayers stop submitting headers to update the clients.
+
+An unplanned upgrade by the counterparty chain may also result in expired clients. If the counterparty
+chain undergoes an unplanned upgrade, there may be no commitment to that upgrade signed by the validator
+set before the chain-id changes. In this situation, the validator set of the last valid update for the
+light client is never expected to produce another valid header since the chain-id has changed, which will
+ultimately lead the on-chain light client to become expired.
+
+In the case that a highly valued light client is frozen, expired, or rendered non-updateable, a
+governance proposal may be submitted to update this client, known as the subject client. The
+proposal includes the client identifier for the subject and the client identifier for a substitute
+client. Light client implementations may implement custom updating logic, but in most cases,
+the subject will be updated to the latest consensus state of the substitute client, if the proposal passes.
+The substitute client is used as a "stand-in" while the subject is on trial. It is best practice to create
+a substitute client *after* the subject has become frozen to prevent the substitute from also becoming frozen.
+An active substitute client allows headers to be submitted during the voting period to prevent accidental expiry
+once the proposal passes.
+
+## How to recover an expired client with a governance proposal
+
+See also the relevant documentation: [ADR-026, IBC client recovery mechanisms](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md)
+
+> **Who is this information for?**
+> Although technically anyone can submit the governance proposal to recover an expired client, often it will be **relayer operators** (at least coordinating the submission).
+
+### Preconditions
+
+* The chain is updated with ibc-go `>=` v1.1.0.
+* There exists an active client (with a known client identifier) for the same counterparty chain as the expired client.
+* The governance deposit.
+
+## Steps
+
+### Step 1
+
+Check if the client is attached to the expected `chain-id`. For example, for an expired Tendermint client representing the Akash chain, the client state looks like this when queried:
+
+```text
+{
+  client_id: 07-tendermint-146
+  client_state:
+    '@type': /ibc.lightclients.tendermint.v1.ClientState
+    allow_update_after_expiry: true
+    allow_update_after_misbehaviour: true
+    chain_id: akashnet-2
+}
+```
+
+The client is attached to the expected Akash `chain-id`. Note that although the parameters (`allow_update_after_expiry` and `allow_update_after_misbehaviour`) exist to signal intent, these parameters have been deprecated and will not enforce any checks on the revival of the client. See ADR-026 for more context on this deprecation.
+
+### Step 2
+
+If the chain has been updated to ibc-go `>=` v1.1.0, anyone can submit the governance proposal to recover the client via the CLI.
+
+> Note that the Cosmos SDK has updated how governance proposals are submitted in SDK v0.46, which now requires passing a .json proposal file
+
+* From SDK v0.46.x onwards
+
+  ```bash
+  tx gov submit-proposal [path-to-proposal-json]
+  ```
+
+  where `proposal.json` contains:
+
+  ```json expandable
+  {
+    "messages": [
+      {
+        "@type": "/ibc.core.client.v1.ClientUpdateProposal",
+        "title": "title_string",
+        "description": "description_string",
+        "subject_client_id": "expired_client_id_string",
+        "substitute_client_id": "active_client_id_string"
+      }
+    ],
+    "metadata": "",
+    "deposit": "10stake"
+  }
+  ```
+
+  Alternatively, there is a legacy command (though it is no longer recommended):
+
+  ```bash
+  tx gov submit-legacy-proposal update-client
+  ```
+
+* Until SDK v0.45.x
+
+  ```bash
+  tx gov submit-proposal update-client
+  ```
+
+The `subject_client_id` identifier is the proposed client to be updated. This client must be either frozen or expired.
+
+The `substitute_client_id` represents a substitute client. It carries all the state for the client which may be updated. It must have identical client and chain parameters to the client which may be updated (except for latest height, frozen height, and chain ID). It should be continually updated during the voting period.
+
+After this, all that remains is deciding who funds the governance deposit and ensuring the governance proposal passes. If it does, the client on trial will be updated to the latest state of the substitute.
+
+## Important considerations
+
+Please note that from ibc-go v1.0.0, transactions to expired clients are no longer allowed, so please update to at least this version to prevent similar issues in the future.
+
+Please also note that if the client on the other end of the transaction is also expired, that client will also need to be updated. This process updates only one client.
diff --git a/docs/ibc/v5.4.x/ibc/proto-docs.mdx b/docs/ibc/v5.4.x/ibc/proto-docs.mdx
new file mode 100644
index 00000000..efa5d947
--- /dev/null
+++ b/docs/ibc/v5.4.x/ibc/proto-docs.mdx
@@ -0,0 +1,6 @@
+---
+title: Protobuf Documentation
+description: See ibc-go v5.3.x Buf Protobuf documentation.
+---
+
+See [ibc-go v5.3.x Buf Protobuf documentation](https://github.com/cosmos/ibc-go/blob/release/v5.3.x/docs/ibc/proto-docs.md).
diff --git a/docs/ibc/v5.4.x/ibc/relayer.mdx b/docs/ibc/v5.4.x/ibc/relayer.mdx
new file mode 100644
index 00000000..521192b0
--- /dev/null
+++ b/docs/ibc/v5.4.x/ibc/relayer.mdx
@@ -0,0 +1,48 @@
+---
+title: Relayer
+---
+
+
+
+## Pre-requisite readings
+
+* [IBC Overview](/docs/ibc/v5.4.x/ibc/overview)
+* Events
+
+
+
+## Events
+
+Events are emitted for every transaction processed by the base application to indicate the execution
+of some logic clients may want to be aware of. This is extremely useful when relaying IBC packets.
+Any message that uses IBC will emit events for the corresponding TAO logic executed as defined in
+the [IBC events document](/docs/ibc/v5.4.x/middleware/ics29-fee/events).
+
+In the SDK, it can be assumed that for every message there is an event emitted with the type `message`,
+attribute key `action`, and an attribute value representing the type of message sent
+(`channel_open_init` would be the attribute value for `MsgChannelOpenInit`). If a relayer queries
+for transaction events, it can split message events using this event Type/Attribute Key pair.
+
+The Event Type `message` with the Attribute Key `module` may be emitted multiple times for a single
+message due to application callbacks. It can be assumed that any TAO logic executed will result in
+a module event emission with the attribute value `ibc_{submodulename}` (02-client emits `ibc_client`).
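The splitting step described above can be sketched as follows; the `Event` representation here is a simplified stand-in for what a node actually returns, and the key names follow the `message`/`action` convention just described:

```go
package main

import "fmt"

// Event is a simplified stand-in for an ABCI event: a type plus
// key/value attributes.
type Event struct {
	Type       string
	Attributes map[string]string
}

// messageActions collects the "action" attribute value from every
// "message"-typed event, yielding one entry per message in the
// transaction.
func messageActions(events []Event) []string {
	var actions []string
	for _, ev := range events {
		if ev.Type != "message" {
			continue
		}
		if action, ok := ev.Attributes["action"]; ok {
			actions = append(actions, action)
		}
	}
	return actions
}

func main() {
	events := []Event{
		{Type: "message", Attributes: map[string]string{"action": "channel_open_init"}},
		{Type: "message", Attributes: map[string]string{"module": "ibc_channel"}},
		{Type: "message", Attributes: map[string]string{"action": "update_client"}},
	}
	fmt.Println(messageActions(events)) // [channel_open_init update_client]
}
```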
+
+### Subscribing with Tendermint
+
+Calling the Tendermint RPC method `Subscribe` via Tendermint's Websocket will return events using
+Tendermint's internal representation of them. Instead of receiving back a list of events as they
+were emitted, Tendermint will return the type `map[string][]string` which maps a string in the
+form `{event_type}.{attribute_key}` to `attribute_value`. This causes extraction of the event
+ordering to be non-trivial, but still possible.
+
+A relayer should use the `message.action` key to extract the number of messages in the transaction
+and the type of IBC transactions sent. For every IBC transaction within the string array for
+`message.action`, the necessary information should be extracted from the other event fields. If
+`send_packet` appears at index 2 in the value for `message.action`, a relayer will need to use the
+value at index 2 of the key `send_packet.packet_sequence`. This process should be repeated for each
+piece of information needed to relay a packet.
+
+## Example Implementations
+
+* [Golang Relayer](https://github.com/cosmos/relayer)
+* [Hermes](https://github.com/informalsystems/hermes)
diff --git a/docs/ibc/v5.4.x/ibc/roadmap.mdx b/docs/ibc/v5.4.x/ibc/roadmap.mdx
new file mode 100644
index 00000000..01ab8797
--- /dev/null
+++ b/docs/ibc/v5.4.x/ibc/roadmap.mdx
@@ -0,0 +1,56 @@
+---
+title: Roadmap
+description: 'Latest update: July 7, 2022'
+---
+
+*Latest update: July 7, 2022*
+
+This document endeavours to inform the wider IBC community about plans and priorities for work on ibc-go by the team at Interchain GmbH. It is intended to broadly inform all users of ibc-go, including developers and operators of IBC, relayer, chain and wallet applications.
+
+This roadmap should be read as a high-level guide, rather than a commitment to schedules and deliverables. The degree of specificity is inversely proportional to the timeline. We will update this document periodically to reflect the status and plans.
+
+## Q3 - 2022
+
+At a high level we will focus on:
+
+### Features
+
+* Releasing [v4.0.0](https://github.com/cosmos/ibc-go/milestone/26), which includes the ICS-29 Fee Middleware module.
+* Finishing and releasing the [refactoring of 02-client](https://github.com/cosmos/ibc-go/milestone/16). This refactor will make the development of light clients easier.
+* Starting the implementation of channel upgradability (see [epic](https://github.com/cosmos/ibc-go/issues/1599) and [alpha milestone](https://github.com/cosmos/ibc-go/milestone/29)) with the goal of cutting an alpha1 pre-release by the end of the quarter. Channel upgradability will allow chains to renegotiate an existing channel to take advantage of new features without having to create a new channel, thus preserving all existing packet state processed on the channel.
+* Implementing the new [`ORDERED_ALLOW_TIMEOUT` channel type](https://github.com/cosmos/ibc-go/milestone/31) and hopefully releasing it as well. This new channel type will allow packets on an ordered channel to timeout without causing the closure of the channel.
+
+### Testing and infrastructure
+
+* Adding [automated e2e tests](https://github.com/cosmos/ibc-go/milestone/32) to the repo's CI.
+
+### Documentation and backlog
+
+* Finishing and releasing the upgrade to Cosmos SDK v0.46.
+* Writing the [light client implementation guide](https://github.com/cosmos/ibc-go/issues/59).
+* Working on [core backlog issues](https://github.com/cosmos/ibc-go/milestone/28).
+* Depending on the timeline of the Cosmos SDK, implementing and testing the changes needed to support the [transition to SMT storage](https://github.com/cosmos/ibc-go/milestone/21).
+
+We have also received a lot of feedback to improve Interchain Accounts and we might also work on a few things, but this will depend on priorities and availability.
+
+For a detailed view of each iteration's planned work, please check out our [project board](https://github.com/orgs/cosmos/projects/7).
+
+### Release schedule
+
+#### **July**
+
+We will probably cut at least one more release candidate of v4.0.0 before the final release, which should happen around the end of the month.
+
+For the Rho upgrade of the Cosmos Hub we will also release a new minor version of v3 with SDK 0.46.
+
+#### **August**
+
+In the first half we will probably start cutting release candidates for the 02-client refactor. Final release would most likely come out at the end of the month or beginning of September.
+
+#### **September**
+
+We might cut some pre-releases for the new channel type, and by the end of the month we expect to cut the first alpha pre-release for channel upgradability.
+
+## Q4 - 2022
+
+We will continue the implementation and cut the final release of [channel upgradability](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics/UPGRADES.md). At the end of Q3 or maybe beginning of Q4 we might also work on designing the implementation and scoping the engineering work to add support for [multihop channels](https://github.com/cosmos/ibc/pull/741/files), so that we could start the implementation of this feature during Q4 (but this is still to be decided).
diff --git a/docs/ibc/v5.4.x/ibc/upgrades/developer-guide.mdx b/docs/ibc/v5.4.x/ibc/upgrades/developer-guide.mdx
new file mode 100644
index 00000000..9e1cc251
--- /dev/null
+++ b/docs/ibc/v5.4.x/ibc/upgrades/developer-guide.mdx
@@ -0,0 +1,52 @@
+---
+title: IBC Client Developer Guide to Upgrades
+---
+
+## Synopsis
+
+Learn how to implement upgrade functionality for your custom IBC client.
+
+As mentioned in the [README](/docs/ibc/v4.6.x/intro), it is vital that high-value IBC clients can upgrade along with their underlying chains to avoid disruption to the IBC ecosystem. Thus, IBC client developers will want to implement upgrade functionality to enable clients to maintain connections and channels even across chain upgrades.
+
+The IBC protocol allows client implementations to provide a path to upgrading clients given the upgraded client state, upgraded consensus state and proofs for each.
+
+```go expandable
+// Upgrade functions
+// NOTE: proof heights are not included as upgrade to a new revision is expected to pass only on the last
+// height committed by the current revision. Clients are responsible for ensuring that the planned last
+// height of the current revision is somehow encoded in the proof verification process.
+// This is to ensure that no premature upgrades occur, since upgrade plans committed to by the counterparty
+// may be cancelled or modified before the last planned height.
+VerifyUpgradeAndUpdateState(
+  ctx sdk.Context,
+  cdc codec.BinaryCodec,
+  store sdk.KVStore,
+  newClient ClientState,
+  newConsState ConsensusState,
+  proofUpgradeClient,
+  proofUpgradeConsState []byte,
+) (upgradedClient ClientState, upgradedConsensus ConsensusState, err error)
+```
+
+Note that the clients should have prior knowledge of the merkle path that the upgraded client and upgraded consensus states will use. The height at which the upgrade has occurred should also be encoded in the proof. The Tendermint client implementation accomplishes this by including an `UpgradePath` in the ClientState itself, which is used along with the upgrade height to construct the merkle path under which the client state and consensus state are committed.
+
+Developers must ensure that the `UpgradeClientMsg` does not pass until the last height of the old chain has been committed, and after the chain upgrades, the `UpgradeClientMsg` should pass once and only once on all counterparty clients.
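For intuition, here is a sketch of how such an upgrade path can resolve to concrete store keys. The `upgrade/UpgradedIBCState/{upgradeHeight}/...` layout below is the one used by the SDK upgrade module; in the Tendermint client the actual prefix is derived from the client's `UpgradePath`, so treat this as illustrative:

```go
package main

import "fmt"

// upgradedClientKey returns the store key under which the upgraded
// client state is expected to be committed, assuming the
// upgrade/UpgradedIBCState/{upgradeHeight}/... layout.
func upgradedClientKey(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedClient", upgradeHeight)
}

// upgradedConsStateKey is the analogous key for the upgraded
// consensus state.
func upgradedConsStateKey(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedConsState", upgradeHeight)
}

func main() {
	fmt.Println(upgradedClientKey(500))    // upgrade/UpgradedIBCState/500/upgradedClient
	fmt.Println(upgradedConsStateKey(500)) // upgrade/UpgradedIBCState/500/upgradedConsState
}
```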
+ +Developers must ensure that the new client adopts all of the new Client parameters that must be uniform across every valid light client of a chain (chain-chosen parameters), while maintaining the Client parameters that are customizable by each individual client (client-chosen parameters) from the previous version of the client. + +Upgrades must adhere to the IBC Security Model. IBC does not rely on the assumption of honest relayers for correctness. Thus users should not have to rely on relayers to maintain client correctness and security (though honest relayers must exist to maintain relayer liveness). While relayers may choose any set of client parameters while creating a new `ClientState`, this still holds under the security model since users can always choose a relayer-created client that suits their security and correctness needs or create a Client with their desired parameters if no such client exists. + +However, when upgrading an existing client, one must keep in mind that there are already many users who depend on this client's particular parameters. We cannot give the upgrading relayer free choice over these parameters once they have already been chosen. This would violate the security model since users who rely on the client would have to rely on the upgrading relayer to maintain the same level of security. Thus, developers must make sure that their upgrade mechanism allows clients to upgrade the chain-specified parameters whenever a chain upgrade changes these parameters (examples in the Tendermint client include `UnbondingPeriod`, `TrustingPeriod`, `ChainID`, `UpgradePath`, etc.), while ensuring that the relayer submitting the `UpgradeClientMsg` cannot alter the client-chosen parameters that the users are relying upon (examples in Tendermint client include `TrustLevel`, `MaxClockDrift`, etc). 
+
+Developers should maintain the distinction between Client parameters that are uniform across every valid light client of a chain (chain-chosen parameters), and Client parameters that are customizable by each individual client (client-chosen parameters); since this distinction is necessary to implement the `ZeroCustomFields` method in the `ClientState` interface:
+
+```go
+// Utility function that zeroes out any client customizable fields in client state
+// Ledger enforced fields are maintained while all custom fields are zero values
+// Used to verify upgrades
+ZeroCustomFields() ClientState
+```
+
+Counterparty clients can upgrade securely by using all of the chain-chosen parameters from the chain-committed `UpgradedClient` and preserving all of the old client-chosen parameters. This enables chains to securely upgrade without relying on an honest relayer; however, it can in some cases lead to an invalid final `ClientState` if the new chain-chosen parameters clash with the old client-chosen parameters. This can happen in the Tendermint client case if the upgrading chain lowers the `UnbondingPeriod` (chain-chosen) to a duration below that of a counterparty client's `TrustingPeriod` (client-chosen). Such cases should be clearly documented by developers, so that chains know which upgrades should be avoided to prevent this problem. The final upgraded client should also be validated in `VerifyUpgradeAndUpdateState` before returning to ensure that the client does not upgrade to an invalid `ClientState`.
diff --git a/docs/ibc/v5.4.x/ibc/upgrades/genesis-restart.mdx b/docs/ibc/v5.4.x/ibc/upgrades/genesis-restart.mdx
new file mode 100644
index 00000000..bfcfd809
--- /dev/null
+++ b/docs/ibc/v5.4.x/ibc/upgrades/genesis-restart.mdx
@@ -0,0 +1,46 @@
+---
+title: Genesis Restart Upgrades
+---
+
+## Synopsis
+
+Learn how to upgrade your chain and counterparty clients using genesis restarts.
+
+**NOTE**: Regular genesis restarts are currently unsupported by relayers!
+ +## IBC Client Breaking Upgrades + +IBC client breaking upgrades are possible using genesis restarts. +It is highly recommended to use the in-place migrations instead of a genesis restart. +Genesis restarts should be used sparingly and as backup plans. + +Genesis restarts still require the usage of an IBC upgrade proposal in order to correctly upgrade counterparty clients. + +### Step-by-Step Upgrade Process for SDK Chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the [IBC Client Breaking Upgrade List](/docs/ibc/v5.4.x/ibc/upgrades/quick-guide#ibc-client-breaking-upgrades) and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a 02-client [`UpgradeProposal`](https://github.com/cosmos/ibc-go/blob/v5.3.1/proto/ibc/core/client/v1/client.proto#L58-L77) with an `UpgradePlan` and a new IBC ClientState in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as TrustingPeriod). +2. Vote on and pass the `UpgradeProposal` +3. Halt the node after successful upgrade. +4. Export the genesis file. +5. Swap to the new binary. +6. Run migrations on the genesis file. +7. Remove the `UpgradeProposal` plan from the genesis file. This may be done by migrations. +8. Change desired chain-specific fields (chain id, unbonding period, etc). This may be done by migrations. +9. Reset the node's data. +10. Start the chain. + +Upon the `UpgradeProposal` passing, the upgrade module will commit the UpgradedClient under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. 
On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +#### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +These steps are identical to the regular [IBC client breaking upgrade process](/docs/ibc/v5.4.x/ibc/upgrades/quick-guide#step-by-step-upgrade-process-for-relayers-upgrading-counterparty-clients). + +### Non-IBC Client Breaking Upgrades + +While ibc-go supports genesis restarts which do not break IBC clients, relayers do not support this upgrade path. +Here is a tracking issue on [Hermes](https://github.com/informalsystems/ibc-rs/issues/1152). +Please do not attempt a regular genesis restart unless you have a tool to update counterparty clients correctly. diff --git a/docs/ibc/v5.4.x/ibc/upgrades/intro.mdx b/docs/ibc/v5.4.x/ibc/upgrades/intro.mdx new file mode 100644 index 00000000..d46b3730 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/upgrades/intro.mdx @@ -0,0 +1,15 @@ +--- +title: 00 Intro +description: >- + This directory contains information on how to upgrade an IBC chain without + breaking counterparty clients and connections. +--- + +### Upgrading IBC Chains Overview + +This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections. + +IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving.
+ +Many chain upgrades may be irrelevant to IBC; however, some upgrades could potentially break counterparty clients if not handled correctly. Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client. + +1. The [quick-guide](/docs/ibc/v5.4.x/ibc/upgrades/quick-guide) describes how IBC-connected chains can perform client-breaking upgrades and how relayers can securely upgrade counterparty clients using the SDK. +2. The [developer-guide](/docs/ibc/v5.4.x/ibc/upgrades/developer-guide) is a guide for developers intending to develop IBC client implementations with upgrade functionality. diff --git a/docs/ibc/v5.4.x/ibc/upgrades/quick-guide.mdx b/docs/ibc/v5.4.x/ibc/upgrades/quick-guide.mdx new file mode 100644 index 00000000..a8281c82 --- /dev/null +++ b/docs/ibc/v5.4.x/ibc/upgrades/quick-guide.mdx @@ -0,0 +1,54 @@ +--- +title: How to Upgrade IBC Chains and their Clients +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients. + +The information in this doc for upgrading chains is relevant to SDK chains. However, the guide for counterparty clients is relevant to any Tendermint client that enables upgrades. + +## IBC Client Breaking Upgrades + +IBC-connected chains must perform an IBC upgrade if their upgrade will break counterparty IBC clients. The current IBC protocol supports upgrading Tendermint chains for a specific subset of IBC-client-breaking upgrades. Here is the exhaustive list of IBC client-breaking upgrades and whether the IBC protocol currently supports such upgrades. + +IBC currently does **NOT** support unplanned upgrades. All of the following upgrades must be planned and committed to in advance by the upgrading chain, in order for counterparty clients to maintain their connections securely.
+ +Note: Since upgrades are only implemented for Tendermint clients, this doc only discusses upgrades on Tendermint chains that would break counterparty IBC Tendermint Clients. + +1. Changing the Chain-ID: **Supported** +2. Changing the UnbondingPeriod: **Partially Supported**, chains may increase the unbonding period with no issues. However, decreasing the unbonding period may irreversibly break some counterparty clients. Thus, it is **not recommended** that chains reduce the unbonding period. +3. Changing the height (resetting to 0): **Supported**, so long as chains remember to increment the revision number in their chain-id. +4. Changing the ProofSpecs: **Supported**, this should be changed if the proof structure needed to verify IBC proofs is changed across the upgrade. Ex: Switching from an IAVL store to a SimpleTree store. +5. Changing the UpgradePath: **Supported**, this might involve changing the key under which upgraded clients and consensus states are stored in the upgrade store, or even migrating the upgrade store itself. +6. Migrating the IBC store: **Unsupported**, as the IBC store location is negotiated by the connection. +7. Upgrading to a backwards compatible version of IBC: **Supported** +8. Upgrading to a non-backwards compatible version of IBC: **Unsupported**, as IBC version is negotiated on connection handshake. +9. Changing the Tendermint LightClient algorithm: **Partially Supported**. Changes to the light client algorithm that do not change the ClientState or ConsensusState struct may be supported, provided that the counterparty is also upgraded to support the new light client algorithm. Changes that require updating the ClientState and ConsensusState structs themselves are theoretically possible by providing a path to translate an older ClientState struct into the new ClientState struct; however, this is not currently implemented.
+ +### Step-by-Step Upgrade Process for SDK chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the list above and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a 02-client [`UpgradeProposal`](https://github.com/cosmos/ibc-go/blob/v5.3.1/proto/ibc/core/client/v1/client.proto#L58-L77) with an `UpgradePlan` and a new IBC ClientState in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as TrustingPeriod). +2. Vote on and pass the `UpgradeProposal` + +Upon the `UpgradeProposal` passing, the upgrade module will commit the UpgradedClient under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +Once the upgrading chain has committed to upgrading, relayers must wait until the chain halts at the upgrade height before upgrading counterparty clients. This is because chains may reschedule or cancel upgrade plans before they occur. Thus, relayers must wait until the chain reaches the upgrade height and halts before they can be sure the upgrade will take place.
+ +Thus, the upgrade process for relayers trying to upgrade the counterparty clients is as follows: + +1. Wait for the upgrading chain to reach the upgrade height and halt +2. Query a full node for the proofs of `UpgradedClient` and `UpgradedConsensusState` at the last height of the old chain. +3. Update the counterparty client to the last height of the old chain using the `UpdateClient` msg. +4. Submit an `UpgradeClient` msg to the counterparty chain with the `UpgradedClient`, `UpgradedConsensusState` and their respective proofs. +5. Submit an `UpdateClient` msg to the counterparty chain with a header from the new upgraded chain. + +The Tendermint client on the counterparty chain will verify that the upgrading chain did indeed commit to the upgraded client and upgraded consensus state at the upgrade height (since the upgrade height is included in the key). If the proofs are verified against the upgrade height, then the client will upgrade to the new client while retaining all of its client-customized fields. Thus, it will retain its old TrustingPeriod, TrustLevel, MaxClockDrift, etc; while adopting the new chain-specified fields such as UnbondingPeriod, ChainId, UpgradePath, etc. Note, this can lead to an invalid client since the old client-chosen fields may no longer be valid given the new chain-chosen fields. Upgrading chains should try to avoid these situations by not altering parameters that can break old clients. For an example, see the UnbondingPeriod example in the supported upgrades section. + +The upgraded consensus state will serve purely as a basis of trust for future `UpdateClientMsgs` and will not contain a consensus root to perform proof verification against. Thus, relayers must submit an `UpdateClientMsg` with a header from the new chain so that the connection can be used for proof verification again. 
diff --git a/docs/ibc/v5.4.x/intro.mdx b/docs/ibc/v5.4.x/intro.mdx new file mode 100644 index 00000000..579a6862 --- /dev/null +++ b/docs/ibc/v5.4.x/intro.mdx @@ -0,0 +1,17 @@ +--- +title: IBC-Go Documentation +--- + + +This version of ibc-go is not supported anymore. Please upgrade to the latest version. + + +Welcome to the IBC-Go documentation! + +The Inter-Blockchain Communication protocol (IBC) is an end-to-end, connection-oriented, stateful protocol for reliable, ordered, and authenticated communication between heterogeneous blockchains arranged in an unknown and dynamic topology. + +IBC is a protocol that allows blockchains to talk to each other. + +The protocol realizes this interoperability by specifying a set of data structures, abstractions, and semantics that can be implemented by any distributed ledger that satisfies a small set of requirements. + +IBC can be used to build a wide range of cross-chain applications that include token transfers, atomic swaps, multi-chain smart contracts (with or without mutually comprehensible VMs), and data and code sharding of various kinds. diff --git a/docs/ibc/v5.4.x/middleware/ics29-fee/end-users.mdx b/docs/ibc/v5.4.x/middleware/ics29-fee/end-users.mdx new file mode 100644 index 00000000..3b7ea2a3 --- /dev/null +++ b/docs/ibc/v5.4.x/middleware/ics29-fee/end-users.mdx @@ -0,0 +1,30 @@ +--- +title: End Users +--- + +## Synopsis + +Learn how to incentivize IBC packets using the ICS29 Fee Middleware module. + +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v5.4.x/middleware/ics29-fee/overview) + +## Summary + +Different types of end users: + +- CLI users who want to manually incentivize IBC packets +- Client developers + +The Fee Middleware module allows end users to add a 'tip' to each IBC packet which will incentivize relayer operators to relay packets between chains. gRPC endpoints are exposed for client developers as well as a simple CLI for manually incentivizing IBC packets. 
+ +## CLI Users + +For an in-depth guide on how to use the ICS29 Fee Middleware module using the CLI, please take a look at the [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers#asynchronous-incentivization-of-a-fungible-token-transfer) on the `ibc-go` repo. + +## Client developers + +Client developers can read more about the relevant ICS29 message types in the [Fee messages section](/docs/ibc/v5.4.x/middleware/ics29-fee/msgs). + +[CosmJS](https://github.com/cosmos/cosmjs) is a useful client library for signing and broadcasting Cosmos SDK messages. diff --git a/docs/ibc/v5.4.x/middleware/ics29-fee/events.mdx b/docs/ibc/v5.4.x/middleware/ics29-fee/events.mdx new file mode 100644 index 00000000..7d6dfc42 --- /dev/null +++ b/docs/ibc/v5.4.x/middleware/ics29-fee/events.mdx @@ -0,0 +1,37 @@ +--- +title: Events +--- + +## Synopsis + +An overview of all events related to ICS-29 + +## `MsgPayPacketFee`, `MsgPayPacketFeeAsync` + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| incentivized_ibc_packet | port_id | `{portID}` | +| incentivized_ibc_packet | channel_id | `{channelID}` | +| incentivized_ibc_packet | packet_sequence | `{sequence}` | +| incentivized_ibc_packet | recv_fee | `{recvFee}` | +| incentivized_ibc_packet | ack_fee | `{ackFee}` | +| incentivized_ibc_packet | timeout_fee | `{timeoutFee}` | +| message | module | fee-ibc | + +## `RegisterPayee` + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| register_payee | relayer | `{relayer}` | +| register_payee | payee | `{payee}` | +| register_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | + +## `RegisterCounterpartyPayee` + +| Type | Attribute Key | Attribute Value | +| --------------------------- | ------------------ | --------------------- | +| register_counterparty_payee | relayer | `{relayer}` | +| register_counterparty_payee | counterparty_payee |
`{counterpartyPayee}` | +| register_counterparty_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | diff --git a/docs/ibc/v5.4.x/middleware/ics29-fee/fee-distribution.mdx b/docs/ibc/v5.4.x/middleware/ics29-fee/fee-distribution.mdx new file mode 100644 index 00000000..ca1ed919 --- /dev/null +++ b/docs/ibc/v5.4.x/middleware/ics29-fee/fee-distribution.mdx @@ -0,0 +1,108 @@ +--- +title: Fee Distribution +--- + +## Synopsis + +Learn about payee registration for the distribution of packet fees. The following document is intended for relayer operators. + +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v5.4.x/middleware/ics29-fee/overview) + +Packet fees are divided into 3 distinct amounts in order to compensate relayer operators for packet relaying on fee enabled IBC channels. + +- `RecvFee`: The sum of all packet receive fees distributed to a payee for successful execution of `MsgRecvPacket`. +- `AckFee`: The sum of all packet acknowledgement fees distributed to a payee for successful execution of `MsgAcknowledgement`. +- `TimeoutFee`: The sum of all packet timeout fees distributed to a payee for successful execution of `MsgTimeout`. + +## Register a counterparty payee address for forward relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v5.4.x/middleware/ics29-fee/overview#concepts), the forward relayer describes the actor who performs the submission of `MsgRecvPacket` on the destination chain. +Fee distribution for incentivized packet relays takes place on the packet source chain. + +> Relayer operators are expected to register a counterparty payee address, in order to be compensated accordingly with `RecvFee`s upon completion of a packet lifecycle. + +The counterparty payee address registered on the destination chain is encoded into the packet acknowledgement and communicated as such to the source chain for fee distribution. 
+ +**If a counterparty payee is not registered for the forward relayer on the destination chain, the escrowed fees will be refunded upon fee distribution.** + +### Relayer operator actions + +A transaction must be submitted **to the destination chain** including a `CounterpartyPayee` address of an account on the source chain. +The transaction must be signed by the `Relayer`. + +Note: If a module account address is used as the `CounterpartyPayee` but the module has been set as a blocked address in the `BankKeeper`, the refunding to the module account will fail. This is because many modules use invariants to compare internal tracking of module account balances against the actual balance of the account stored in the `BankKeeper`. If a token transfer to the module account occurs without going through this module and updating the account balance of the module on the `BankKeeper`, then invariants may break and unknown behaviour could occur depending on the module implementation. Therefore, if it is desirable to use a module account that is currently blocked, the module developers should be consulted to gauge the possibility of removing the module account from the blocked list. + +```go +type MsgRegisterCounterpartyPayee struct { + // unique port identifier + PortId string + // unique channel identifier + ChannelId string + // the relayer address + Relayer string + // the counterparty payee address + CounterpartyPayee string +} +``` + +> This message is expected to fail if: +> +> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `Relayer` is an invalid address. +> - `CounterpartyPayee` is empty.
+ +See below for an example CLI command: + +```bash +simd tx ibc-fee register-counterparty-payee transfer channel-0 \ +cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \ +osmo1v5y0tz01llxzf4c2afml8s3awue0ymju22wxx2 \ +--from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh +``` + +## Register an alternative payee address for reverse and timeout relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v5.4.x/middleware/ics29-fee/overview#concepts), the reverse relayer describes the actor who performs the submission of `MsgAcknowledgement` on the source chain. +Similarly, the timeout relayer describes the actor who performs the submission of `MsgTimeout` (or `MsgTimeoutOnClose`) on the source chain. + +> Relayer operators **may choose** to register an optional payee address, in order to be compensated accordingly with `AckFee`s and `TimeoutFee`s upon completion of a packet lifecycle. + +If a payee is not registered for the reverse or timeout relayer on the source chain, then fee distribution assumes the default behaviour, where fees are paid out to the relayer account which delivers `MsgAcknowledgement` or `MsgTimeout`/`MsgTimeoutOnClose`. + +### Relayer operator actions + +A transaction must be submitted **to the source chain** including a `Payee` address of an account on the source chain. +The transaction must be signed by the `Relayer`. + +Note: If a module account address is used as the `Payee`, it is recommended to [turn off invariant checks](https://github.com/cosmos/ibc-go/blob/71d7480c923f4227453e8a80f51be01ae7ee845e/testing/simapp/app.go#L659) for that module.
+ +```go +type MsgRegisterPayee struct { + // unique port identifier + PortId string + // unique channel identifier + ChannelId string + // the relayer address + Relayer string + // the payee address + Payee string +} +``` + +> This message is expected to fail if: +> +> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `Relayer` is an invalid address. +> - `Payee` is an invalid address. + +See below for an example CLI command: + +```bash +simd tx ibc-fee register-payee transfer channel-0 \ +cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \ +cosmos153lf4zntqt33a4v0sm5cytrxyqn78q7kz8j8x5 \ +--from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh +``` diff --git a/docs/ibc/v5.4.x/middleware/ics29-fee/integration.mdx b/docs/ibc/v5.4.x/middleware/ics29-fee/integration.mdx new file mode 100644 index 00000000..47994cd5 --- /dev/null +++ b/docs/ibc/v5.4.x/middleware/ics29-fee/integration.mdx @@ -0,0 +1,174 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to configure the Fee Middleware module with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies to Cosmos SDK chains. + +## Pre-requisite Readings + +- [IBC middleware development](/docs/ibc/v5.4.x/ibc/middleware/develop) +- [IBC middleware integration](/docs/ibc/v5.4.x/ibc/middleware/integration) + +The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly. +For Cosmos SDK chains, this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application.
+ +## Example integration of the Fee Middleware module + +```go expandable +// app.go + +// Register the AppModule for the fee middleware module +ModuleBasics = module.NewBasicManager( + ... + ibcfee.AppModuleBasic{}, + ... +) + +... + +// Add module account permissions for the fee middleware module +maccPerms = map[string][]string{ + ... + ibcfeetypes.ModuleName: nil, +} + +... + +// Add fee middleware Keeper +type App struct { + ... + + IBCFeeKeeper ibcfeekeeper.Keeper + + ... +} + +... + +// Create store keys + keys := sdk.NewKVStoreKeys( + ... + ibcfeetypes.StoreKey, + ... +) + +... + +app.IBCFeeKeeper = ibcfeekeeper.NewKeeper( + appCodec, keys[ibcfeetypes.StoreKey], + app.IBCKeeper.ChannelKeeper, // may be replaced with IBC middleware + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, app.AccountKeeper, app.BankKeeper, +) + +// See the section below for configuring an application stack with the fee middleware module + +... + +// Register fee middleware AppModule +app.moduleManager = module.NewManager( + ... + ibcfee.NewAppModule(app.IBCFeeKeeper), +) + +... + +// Add fee middleware to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +// Add fee middleware to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +// Add fee middleware to init genesis logic +app.moduleManager.SetOrderInitGenesis( + ... + ibcfeetypes.ModuleName, + ... +) +``` + +## Configuring an application stack with Fee Middleware + +As mentioned in [IBC middleware development](/docs/ibc/v5.4.x/ibc/middleware/develop), an application stack may be composed of many or no middlewares that nest a base application. +These layers form the complete set of application logic that enables developers to build composable and flexible IBC application stacks.
+ +For example, an application stack may be just a single base application like `transfer`; however, the same application stack composed with `29-fee` will nest the `transfer` base application +by wrapping it with the Fee Middleware module. + +### Transfer + +See below for an example of how to create an application stack using `transfer` and `29-fee`. +The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`. +The in-line comments describe the execution flow of packets between the application stack and IBC core. + +```go expandable +// Create Transfer Stack +// SendPacket, since it is originating from the application to core IBC: +// transferKeeper.SendPacket -> fee.SendPacket -> channel.SendPacket + +// RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way +// channel.RecvPacket -> fee.OnRecvPacket -> transfer.OnRecvPacket + +// transfer stack contains (from top to bottom): +// - IBC Fee Middleware +// - Transfer + +// create IBC module from bottom to top of stack +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) + +// Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### Interchain Accounts + +See below for an example of how to create an application stack using `27-interchain-accounts` and `29-fee`. +The following `icaControllerStack` and `icaHostStack` are configured in `app/app.go` and added to the IBC `Router` with the associated authentication module. +The in-line comments describe the execution flow of packets between the application stack and IBC core.
+ +```go expandable +// Create Interchain Accounts Stack +// SendPacket, since it is originating from the application to core IBC: +// icaAuthModuleKeeper.SendTx -> icaController.SendPacket -> fee.SendPacket -> channel.SendPacket + +// initialize ICA module with mock module as the authentication module on the controller side +var icaControllerStack porttypes.IBCModule +icaControllerStack = ibcmock.NewIBCModule(&mockModule, ibcmock.NewMockIBCApp("", scopedICAMockKeeper)) + +app.ICAAuthModule = icaControllerStack.(ibcmock.IBCModule) + +icaControllerStack = icacontroller.NewIBCMiddleware(icaControllerStack, app.ICAControllerKeeper) + +icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper) + +// RecvPacket, message that originates from core IBC and goes down to app, the flow is: +// channel.RecvPacket -> fee.OnRecvPacket -> icaHost.OnRecvPacket + +var icaHostStack porttypes.IBCModule +icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper) + +icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper) + +// Add authentication module, controller and host to IBC router +ibcRouter. + // the ICA Controller middleware needs to be explicitly added to the IBC Router because the + // ICA controller module owns the port capability for ICA. The ICA authentication module + // owns the channel capability. + AddRoute(ibcmock.ModuleName+icacontrollertypes.SubModuleName, icaControllerStack). // ica with mock auth module stack route to ica (top level of middleware stack) + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostStack)
+``` diff --git a/docs/ibc/v5.4.x/middleware/ics29-fee/msgs.mdx b/docs/ibc/v5.4.x/middleware/ics29-fee/msgs.mdx new file mode 100644 index 00000000..cf9834ed --- /dev/null +++ b/docs/ibc/v5.4.x/middleware/ics29-fee/msgs.mdx @@ -0,0 +1,90 @@ +--- +title: Fee Messages +--- + +## Synopsis + +Learn about the different ways to pay for fees, how the fees are paid out and what happens when not enough escrowed fees are available for payout + +## Escrowing fees + +The fee middleware module exposes two different ways to pay fees for relaying IBC packets: + +1. `MsgPayPacketFee`, which enables the escrowing of fees for a packet at the next sequence send and should be combined into one `MultiMsgTx` with the message that will be paid for. + + Note that the `Relayers` field has been set up to allow for an optional whitelist of relayers permitted to receive this fee; however, this feature has not yet been enabled. + + ```go expandable + type MsgPayPacketFee struct { + // fee encapsulates the recv, ack and timeout fees associated with an IBC packet + Fee Fee + // the source port unique identifier + SourcePortId string + // the source channel unique identifier + SourceChannelId string + // account address to refund fee if necessary + Signer string + // optional list of relayers permitted to receive the packet fee + Relayers []string + } + ``` + + The `Fee` message contained in this synchronous fee payment method configures different fees which will be paid out for `MsgRecvPacket`, `MsgAcknowledgement`, and `MsgTimeout`/`MsgTimeoutOnClose`. + + ```go + type Fee struct { + RecvFee types.Coins + AckFee types.Coins + TimeoutFee types.Coins + } + ``` + +The diagram below shows the `MultiMsgTx` with the `MsgTransfer` coming from a token transfer message, along with `MsgPayPacketFee`. + +![msgpaypacket.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/msgpaypacket.png) + +2.
`MsgPayPacketFeeAsync`, which enables the asynchronous escrowing of fees for a specified packet: + + Note that a packet can be 'topped up' multiple times with additional fees of any coin denomination by broadcasting multiple `MsgPayPacketFeeAsync` messages. + + ```go + type MsgPayPacketFeeAsync struct { + // unique packet identifier comprised of the channel ID, port ID and sequence + PacketId channeltypes.PacketId + // the packet fee associated with a particular IBC packet + PacketFee PacketFee + } + ``` + + where the `PacketFee` also specifies the `Fee` to be paid as well as the refund address for fees which are not paid out: + + ```go + type PacketFee struct { + Fee Fee + RefundAddress string + Relayers []string + } + ``` + +The diagram below shows how multiple `MsgPayPacketFeeAsync` can be broadcast asynchronously. Escrowing of the fee associated with a packet can be carried out by any party because ICS-29 does not dictate a particular fee payer. In fact, chains can choose to simply not expose this fee payment to end users at all and rely on a different module account or even the community pool as the source of relayer incentives. + +![paypacketfeeasync.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/paypacketfeeasync.png) + +Please see our [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers) for example flows on how to use these messages to incentivize a token transfer channel using a CLI. + +## Paying out the escrowed fees + +The following diagram takes a look at the packet flow for an incentivized token transfer and investigates the several scenarios for paying out the escrowed fees. We assume that the relayers have registered their counterparty address, detailed in the [Fee distribution section](/docs/ibc/v5.4.x/middleware/ics29-fee/fee-distribution).
+ +![feeflow.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/feeflow.png) + +- In the case of a successful transaction, `RecvFee` will be paid out to the designated counterparty payee address which has been registered on the receiver chain and sent back with the `MsgAcknowledgement`, `AckFee` will be paid out to the relayer address which has submitted the `MsgAcknowledgement` on the sending chain (or the registered payee in case one has been registered for the relayer address), and `TimeoutFee` will be reimbursed to the account which escrowed the fee. +- In the case of a timeout transaction, `RecvFee` and `AckFee` will be reimbursed. The `TimeoutFee` will be paid to the `Timeout Relayer` (who submits the timeout message to the source chain). + +> Please note that fee payments are built on the assumption that sender chains are the source of incentives — the chain that sends the packets is the same chain where fee payments will occur; please see the [Fee distribution section](/docs/ibc/v5.4.x/middleware/ics29-fee/fee-distribution) to understand the flow for registering payee and counterparty payee (fee receiving) addresses. + +## A locked fee middleware module + +The fee middleware module can become locked if the situation arises that the escrow account for the fees does not have sufficient funds to pay out the fees which have been escrowed for each packet. _This situation indicates a severe bug._ In this case, the fee module will be locked until manual intervention fixes the issue. + +> A locked fee module will simply skip fee logic and continue on to the underlying packet flow. A channel with a locked fee module will temporarily function as a fee-disabled channel, and the locking of a fee module will not affect the continued flow of packets over the channel.
diff --git a/docs/ibc/v5.4.x/middleware/ics29-fee/overview.mdx b/docs/ibc/v5.4.x/middleware/ics29-fee/overview.mdx new file mode 100644 index 00000000..7d67f53a --- /dev/null +++ b/docs/ibc/v5.4.x/middleware/ics29-fee/overview.mdx @@ -0,0 +1,49 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the Fee Middleware module is, and how to build custom modules that utilize the Fee Middleware functionality + +## What is the Fee Middleware module? + +IBC does not depend on relayer operators for transaction verification. However, the relayer infrastructure ensures liveness of the Interchain network — operators listen for packets sent through channels opened between chains, and perform the vital service of ferrying these packets (and proof of the transaction on the sending chain/receipt on the receiving chain) to the clients on each side of the channel. + +Though relaying is permissionless and completely decentralized and accessible, it does come with operational costs. Running full nodes to query transaction proofs and paying for transaction fees associated with IBC packets are two of the primary cost burdens which have driven the overall discussion on **a general, in-protocol incentivization mechanism for relayers**. + +Initially, a [simple proposal](https://github.com/cosmos/ibc/pull/577/files) was created to incentivize relaying on ICS20 token transfers on the destination chain. However, the proposal was specific to ICS20 token transfers and would have to be reimplemented in this format on every other IBC application module. + +After much discussion, the proposal was expanded to a [general incentivisation design](https://github.com/cosmos/ibc/tree/master/spec/app/ics-029-fee-payment) that can be adopted by any ICS application protocol as [middleware](/docs/ibc/v5.4.x/ibc/middleware/develop). 
+ +## Concepts + +ICS29 fee payments in this middleware design are built on the assumption that sender chains are the source of incentives — the chain on which packets are incentivized is the chain that distributes fees to relayer operators. However, as part of the IBC packet flow, messages have to be submitted on both sender and destination chains. This introduces the requirement of a mapping of relayer operators' addresses on both chains. + +To achieve the stated requirements, the **fee middleware module has two main groups of functionality**: + +- Registering of relayer addresses associated with each party involved in relaying the packet on the source chain. This registration process can be automated on startup of relayer infrastructure and happens only once, not on every packet flow. + + This is described in the [Fee distribution section](/docs/ibc/v5.4.x/middleware/ics29-fee/fee-distribution). + +- Escrowing fees by any party which will be paid out to each rightful party on completion of the packet lifecycle. + + This is described in the [Fee messages section](/docs/ibc/v5.4.x/middleware/ics29-fee/msgs). + +We complete the introduction by giving a list of definitions of relevant terminology. + +`Forward relayer`: The relayer that submits the `MsgRecvPacket` message for a given packet (on the destination chain). + +`Reverse relayer`: The relayer that submits the `MsgAcknowledgement` message for a given packet (on the source chain). + +`Timeout relayer`: The relayer that submits the `MsgTimeout` or `MsgTimeoutOnClose` messages for a given packet (on the source chain). + +`Payee`: The account address on the source chain to be paid on completion of the packet lifecycle. The packet lifecycle on the source chain completes with the receipt of a `MsgTimeout`/`MsgTimeoutOnClose` or a `MsgAcknowledgement`. + +`Counterparty payee`: The account address to be paid on completion of the packet lifecycle on the destination chain.
The packet lifecycle on the destination chain completes with a successful `MsgRecvPacket`. + +`Refund address`: The address of the account paying for the incentivization of packet relaying. The account is refunded timeout fees upon successful acknowledgement. In the event of a packet timeout, both acknowledgement and receive fees are refunded. + +## Known Limitations + +The first version of fee payments middleware will only support incentivisation of new channels; however, channel upgradeability will enable incentivisation of all existing channels. diff --git a/docs/ibc/v5.4.x/migrations/sdk-to-v1.mdx b/docs/ibc/v5.4.x/migrations/sdk-to-v1.mdx new file mode 100644 index 00000000..69ef154c --- /dev/null +++ b/docs/ibc/v5.4.x/migrations/sdk-to-v1.mdx @@ -0,0 +1,195 @@ +--- +title: SDK v0.43 to IBC-Go v1 +description: >- + This file contains information on how to migrate from the IBC module contained + in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository + based on the 0.44 SDK version. +--- + +This file contains information on how to migrate from the IBC module contained in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository based on the 0.44 SDK version. + +## Import Changes + +The most obvious change is the import name changes. We need to change: + +* applications -> apps +* cosmos-sdk/x/ibc -> ibc-go + +On my GNU/Linux based machine I used the following commands, executed in order: + +```bash +grep -RiIl 'cosmos-sdk\/x\/ibc\/applications' | xargs sed -i 's/cosmos-sdk\/x\/ibc\/applications/ibc-go\/modules\/apps/g' +``` + +```bash +grep -RiIl 'cosmos-sdk\/x\/ibc' | xargs sed -i 's/cosmos-sdk\/x\/ibc/ibc-go\/modules/g' +``` + +ref: [explanation of the above commands](https://www.internalpointers.com/post/linux-find-and-replace-text-multiple-files) + +Executing these commands out of order will cause issues. + +Feel free to use your own method for modifying import names.
+ +NOTE: Updating to the `v0.44.0` SDK release and then running `go mod tidy` will cause a downgrade to `v0.42.0` in order to support the old IBC import paths. +Update the import paths before running `go mod tidy`. + +## Chain Upgrades + +Chains may choose to upgrade via an upgrade proposal or genesis upgrades. Both in-place store migrations and genesis migrations are supported. + +**WARNING**: Please read at least the quick guide for [IBC client upgrades](/docs/ibc/v5.4.x/ibc/upgrades/intro) before upgrading your chain. It is highly recommended you do not change the chain-ID during an upgrade, otherwise you must follow the IBC client upgrade instructions. + +Both in-place store migrations and genesis migrations will: + +* migrate the solo machine client state from v1 to v2 protobuf definitions +* prune all solo machine consensus states +* prune all expired tendermint consensus states + +Chains must set a new connection parameter during either in-place store migrations or genesis migration. The new parameter, max expected block time, is used to enforce packet processing delays on the receiving end of an IBC packet flow. Check out the [docs](https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2) for more information. + +### In-Place Store Migrations + +The new chain binary will need to run migrations in the upgrade handler. The fromVM (previous module version) for the IBC module should be 1. This will allow migrations to be run for IBC updating the version from 1 to 2. + +Ex: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("my-upgrade-proposal", + func(ctx sdk.Context, _ upgradetypes.Plan, _ module.VersionMap) (module.VersionMap, error) { + / set max expected block time parameter.
Replace the default with your expected value + / https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 + app.IBCKeeper.ConnectionKeeper.SetParams(ctx, ibcconnectiontypes.DefaultParams()) + fromVM := map[string]uint64{ + ... / other modules + "ibc": 1, + ... +} + +return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +### Genesis Migrations + +To perform genesis migrations, the following code must be added to your existing migration code. + +```go expandable +/ add imports as necessary +import ( + + ibcv100 "github.com/cosmos/ibc-go/modules/core/legacy/v100" + ibchost "github.com/cosmos/ibc-go/modules/core/24-host" +) + +... + +/ add in migrate cmd function +/ expectedTimePerBlock is a new connection parameter +/ https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 +newGenState, err = ibcv100.MigrateGenesis(newGenState, clientCtx, *genDoc, expectedTimePerBlock) + if err != nil { + return err +} +``` + +**NOTE:** The genesis chain-id, time and height MUST be updated before migrating IBC, otherwise the tendermint consensus state will not be pruned. + +## IBC Keeper Changes + +The IBC Keeper now takes in the Upgrade Keeper. Please add the chains' Upgrade Keeper after the Staking Keeper: + +```diff + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( +- appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, scopedIBCKeeper, ++ appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, + ) + +``` + +## Proposals + +### UpdateClientProposal + +The `UpdateClient` has been modified to take in two client-identifiers and one initial height. Please see the [documentation](/docs/ibc/v5.4.x/ibc/proposals) for more information. + +### UpgradeProposal + +A new IBC proposal type has been added, `UpgradeProposal`. This handles an IBC (breaking) Upgrade. 
+The previous `UpgradedClientState` field in an Upgrade `Plan` has been deprecated in favor of this new proposal type. + +### Proposal Handler Registration + +The `ClientUpdateProposalHandler` has been renamed to `ClientProposalHandler`. +It handles both `UpdateClientProposal`s and `UpgradeProposal`s. + +Add this import: + +```diff ++ ibcclienttypes "github.com/cosmos/ibc-go/modules/core/02-client/types" +``` + +Please ensure the governance module adds the correct route: + +```diff +- AddRoute(ibchost.RouterKey, ibcclient.NewClientUpdateProposalHandler(app.IBCKeeper.ClientKeeper)) ++ AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper)) +``` + +NOTE: Simapp registration was incorrect in the 0.41.x releases. The `UpdateClient` proposal handler should be registered with the router key belonging to `ibc-go/core/02-client/types` +as shown in the diffs above. + +### Proposal CLI Registration + +Please ensure both proposal type CLI commands are registered on the governance module by adding the following arguments to `gov.NewAppModuleBasic()`: + +Add the following import: + +```diff ++ ibcclientclient "github.com/cosmos/ibc-go/modules/core/02-client/client" +``` + +Register the CLI commands: + +```diff + gov.NewAppModuleBasic( + paramsclient.ProposalHandler, distrclient.ProposalHandler, upgradeclient.ProposalHandler, upgradeclient.CancelProposalHandler, ++ ibcclientclient.UpdateClientProposalHandler, ibcclientclient.UpgradeProposalHandler, + ), +``` + +REST routes are not supported for these proposals. + +## Proto file changes + +The gRPC querier service endpoints have changed slightly. The previous files used the `v1beta1` gRPC route; this has been updated to `v1`. + +The solo machine has replaced the `FrozenSequence` uint64 field with an `IsFrozen` boolean field. The package has been bumped from `v1` to `v2`. + +## IBC callback changes + +### OnRecvPacket + +Application developers need to update their `OnRecvPacket` callback logic.
+ +The `OnRecvPacket` callback has been modified to only return the acknowledgement. The acknowledgement returned must implement the `Acknowledgement` interface. The acknowledgement should indicate whether it represents successful processing of a packet by returning true on `Success()` and false in all other cases. A return value of false on `Success()` will result in all state changes which occurred in the callback being discarded. More information can be found in the [documentation](/docs/ibc/v5.4.x/ibc/apps/ibcmodule#receiving-packets). + +The `OnRecvPacket`, `OnAcknowledgementPacket`, and `OnTimeoutPacket` callbacks are now passed the `sdk.AccAddress` of the relayer who relayed the IBC packet. Applications may use or ignore this information. + +## IBC Event changes + +The `packet_data` attribute has been deprecated in favor of `packet_data_hex`, in order to provide standardized encoding/decoding of packet data in events. While the `packet_data` event still exists, all relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_data_hex` as soon as possible. + +The `packet_ack` attribute has also been deprecated in favor of `packet_ack_hex` for the same reason stated above. All relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_ack_hex` as soon as possible. + +The `consensus_height` attribute has been removed from the emitted Misbehaviour event. IBC clients no longer have a frozen height and misbehaviour does not necessarily have an associated height.
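To make the new callback shape concrete, here is a minimal, self-contained sketch. The `Acknowledgement` interface is simplified from the one in core IBC's `exported` package, and the helper types are hypothetical:

```go
package main

import "fmt"

// Acknowledgement is a simplified stand-in for ibc-go's
// exported.Acknowledgement interface.
type Acknowledgement interface {
	Success() bool
	Acknowledgement() []byte
}

// resultAck reports success and carries the result bytes.
type resultAck struct{ result []byte }

func (a resultAck) Success() bool           { return true }
func (a resultAck) Acknowledgement() []byte { return a.result }

// errorAck reports failure; returning false from Success() signals core IBC
// to discard all state changes made in the OnRecvPacket callback.
type errorAck struct{ err string }

func (a errorAck) Success() bool           { return false }
func (a errorAck) Acknowledgement() []byte { return []byte(a.err) }

// onRecvPacket illustrates the new single-return-value callback shape.
func onRecvPacket(packetData []byte) Acknowledgement {
	if len(packetData) == 0 {
		return errorAck{err: "empty packet data"}
	}
	return resultAck{result: []byte("ok")}
}

func main() {
	fmt.Println(onRecvPacket([]byte("data")).Success()) // true
	fmt.Println(onRecvPacket(nil).Success())            // false
}
```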
+ +## Relevant SDK changes + +* (codec) [#9226](https://github.com/cosmos/cosmos-sdk/pull/9226) Rename codec interfaces and methods to follow general Go interface naming conventions: + * `codec.Marshaler` → `codec.Codec` (this defines objects which serialize other objects) + * `codec.BinaryMarshaler` → `codec.BinaryCodec` + * `codec.JSONMarshaler` → `codec.JSONCodec` + * Removed `BinaryBare` suffix from `BinaryCodec` methods (`MarshalBinaryBare`, `UnmarshalBinaryBare`, ...) + * Removed `Binary` infix from `BinaryCodec` methods (`MarshalBinaryLengthPrefixed`, `UnmarshalBinaryLengthPrefixed`, ...) diff --git a/docs/ibc/v5.4.x/migrations/support-denoms-with-slashes.mdx b/docs/ibc/v5.4.x/migrations/support-denoms-with-slashes.mdx new file mode 100644 index 00000000..6a698566 --- /dev/null +++ b/docs/ibc/v5.4.x/migrations/support-denoms-with-slashes.mdx @@ -0,0 +1,90 @@ +--- +title: Support transfer of coins whose base denom contains slashes +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +This document is necessary when chains are upgrading from a version that does not support base denoms with slashes (e.g. v3.0.0) to a version that does (e.g. v3.2.0). All versions of ibc-go smaller than v1.5.0 for the v1.x release line, v2.3.0 for the v2.x release line, and v3.1.0 for the v3.x release line do **NOT** support IBC token transfers of coins whose base denoms contain slashes.
Therefore the in-place or genesis migrations described in this document are required when upgrading. + +If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect. + +E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`. + +This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes. + +To do so, chain binaries should include a migration script that will run when the chain upgrades from not supporting base denominations with slashes to supporting base denominations with slashes. + +## Chains + +### ICS20 - Transfer + +The transfer module will now support slashes in base denoms, so we must iterate over current traces to check if any of them are incorrectly formed and correct the trace information. + +### Upgrade Proposal + +```go +app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / transfer module consensus version has been bumped to 2 + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +This is only necessary if there are denom traces in the store with incorrect trace information from previously received coins that had a slash in the base denom. However, it is recommended that any chain upgrading to support base denominations with slashes runs this code for safety. + +For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1680).
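To make the failure mode concrete, the following self-contained sketch (not the actual transfer module code) contrasts the old parsing, which folded extra path segments into the trace, with slash-aware parsing:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDenomTrace splits a full denom path into (trace, baseDenom), treating
// consecutive {port}/{channel} pairs as the trace. This is a simplified
// illustration of transfer-module parsing, not the ibc-go implementation.
func parseDenomTrace(fullPath string, slashAware bool) (trace, base string) {
	parts := strings.Split(fullPath, "/")
	i := 0
	for i+1 < len(parts) && parts[i] == "transfer" && strings.HasPrefix(parts[i+1], "channel-") {
		i += 2
	}
	if !slashAware && i < len(parts) {
		// Old behaviour: everything up to the last segment is (wrongly)
		// folded into the trace, leaving only the final segment as base denom.
		return strings.Join(parts[:len(parts)-1], "/"), parts[len(parts)-1]
	}
	return strings.Join(parts[:i], "/"), strings.Join(parts[i:], "/")
}

func main() {
	full := "transfer/channel-0/testcoin/testcoin/testcoin"
	t, b := parseDenomTrace(full, false)
	fmt.Printf("old: Trace=%q BaseDenom=%q\n", t, b)
	t, b = parseDenomTrace(full, true)
	fmt.Printf("new: Trace=%q BaseDenom=%q\n", t, b)
}
```

The "old" branch reproduces exactly the incorrect `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"` result described above.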
+ +### Genesis Migration + +If the chain chooses to add support for slashes in base denoms via genesis export, then the trace information must be corrected during genesis migration. + +The migration code required may look like: + +```go expandable +func migrateGenesisSlashedDenomsUpgrade(appState genutiltypes.AppMap, clientCtx client.Context, genDoc *tmtypes.GenesisDoc) (genutiltypes.AppMap, error) { + if appState[ibctransfertypes.ModuleName] != nil { + transferGenState := &ibctransfertypes.GenesisState{ +} + +clientCtx.Codec.MustUnmarshalJSON(appState[ibctransfertypes.ModuleName], transferGenState) + substituteTraces := make([]ibctransfertypes.DenomTrace, len(transferGenState.DenomTraces)) + for i, dt := range transferGenState.DenomTraces { + / replace all previous traces with the latest trace if validation passes + / note most traces will have same value + newTrace := ibctransfertypes.ParseDenomTrace(dt.GetFullDenomPath()) + if err := newTrace.Validate(); err != nil { + substituteTraces[i] = dt +} + +else { + substituteTraces[i] = newTrace +} + +} + +transferGenState.DenomTraces = substituteTraces + + / delete old genesis state + delete(appState, ibctransfertypes.ModuleName) + + / set new ibc transfer genesis state + appState[ibctransfertypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(transferGenState) +} + +return appState, nil +} +``` + +For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1528). diff --git a/docs/ibc/v5.4.x/migrations/v1-to-v2.mdx b/docs/ibc/v5.4.x/migrations/v1-to-v2.mdx new file mode 100644 index 00000000..035b8c94 --- /dev/null +++ b/docs/ibc/v5.4.x/migrations/v1-to-v2.mdx @@ -0,0 +1,60 @@ +--- +title: IBC-Go v1 to v2 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. 
+--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go -> github.com/cosmos/ibc-go/v2 +``` + +## Chains + +* No relevant changes were made in this release. + +## IBC Apps + +A new function has been added to the app module interface: + +```go expandable +/ NegotiateAppVersion performs application version negotiation given the provided channel ordering, connectionID, portID, counterparty and proposed version. + / An error is returned if version negotiation cannot be performed. For example, an application module implementing this interface + / may decide to return an error in the event of the proposed version being incompatible with its own + NegotiateAppVersion( + ctx sdk.Context, + order channeltypes.Order, + connectionID string, + portID string, + counterparty channeltypes.Counterparty, + proposedVersion string, + ) (version string, err error) +} +``` + +This function should perform application version negotiation and return the negotiated version. If the version cannot be negotiated, an error should be returned. This function is only used on the client side. + +### sdk.Result removed + +`sdk.Result` has been removed as a return value in the application callbacks. Previously it was being discarded by core IBC and was thus unused. + +## Relayers + +A new gRPC has been added to 05-port, `AppVersion`. It returns the negotiated app version. This function should be used for the `ChanOpenTry` channel handshake step to decide upon the application version which should be set in the channel.
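As an illustration of the negotiation contract (with a deliberately simplified signature, omitting the ordering, connection, port, and counterparty arguments), an implementation might look like:

```go
package main

import "fmt"

// negotiateAppVersion is a toy sketch of application version negotiation:
// accept the proposed version if the application supports it, fall back to a
// default when no version is proposed, and error otherwise. The real
// interface method also receives the channel ordering, connection ID,
// port ID and counterparty.
func negotiateAppVersion(supported map[string]bool, defaultVersion, proposedVersion string) (string, error) {
	if proposedVersion == "" {
		return defaultVersion, nil
	}
	if supported[proposedVersion] {
		return proposedVersion, nil
	}
	return "", fmt.Errorf("unsupported version: %s", proposedVersion)
}

func main() {
	supported := map[string]bool{"ics20-1": true}
	v, err := negotiateAppVersion(supported, "ics20-1", "ics20-1")
	fmt.Println(v, err)
	_, err = negotiateAppVersion(supported, "ics20-1", "ics99-9")
	fmt.Println(err)
}
```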
+ +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v5.4.x/migrations/v2-to-v3.mdx b/docs/ibc/v5.4.x/migrations/v2-to-v3.mdx new file mode 100644 index 00000000..acd4c881 --- /dev/null +++ b/docs/ibc/v5.4.x/migrations/v2-to-v3.mdx @@ -0,0 +1,187 @@ +--- +title: IBC-Go v2 to v3 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v2 -> github.com/cosmos/ibc-go/v3 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS20 + +The `transferkeeper.NewKeeper(...)` now takes in an ICS4Wrapper. +The ICS4Wrapper should be the IBC Channel Keeper unless ICS 20 is being connected to a middleware application. + +### ICS27 + +ICS27 Interchain Accounts has been added as a supported IBC application of ibc-go. +Please see the [ICS27 documentation](/docs/ibc/v5.4.x/apps/interchain-accounts/overview) for more information. 
+ +### Upgrade Proposal + +If the chain will adopt ICS27, it must set the appropriate params during the execution of the upgrade handler in `app.go`: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("v3", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / set the ICS27 consensus version so InitGenesis is not run + fromVM[icatypes.ModuleName] = icamodule.ConsensusVersion() + + / create ICS27 Controller submodule params + controllerParams := icacontrollertypes.Params{ + ControllerEnabled: true, +} + + / create ICS27 Host submodule params + hostParams := icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... +}, +} + + / initialize ICS27 module + icamodule.InitModule(ctx, controllerParams, hostParams) + + ... + + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +The host and controller submodule params only need to be set if the chain integrates those submodules. +For example, if a chain chooses not to integrate a controller submodule, it may pass empty params into `InitModule`. + +#### Add `StoreUpgrades` for ICS27 module + +For ICS27 it is also necessary to [manually add store upgrades](https://docs.cosmos.network/main/learn/advanced/upgrade#add-storeupgrades-for-new-modules) for the new ICS27 module and then configure the store loader to apply those upgrades in `app.go`: + +```go +if upgradeInfo.Name == "v3" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := store.StoreUpgrades{ + Added: []string{ + icacontrollertypes.StoreKey, icahosttypes.StoreKey +}, +} + +app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +``` + +This ensures that the new module's stores are added to the multistore before the migrations begin. +The host and controller submodule keys only need to be added if the chain integrates those submodules. 
+For example, if a chain chooses not to integrate a controller submodule, it does not need to add the controller key to the `Added` field. + +### Genesis migrations + +If the chain will adopt ICS27 and chooses to upgrade via a genesis export, then the ICS27 parameters must be set during genesis migration. + +The migration code required may look like: + +```go expandable +controllerGenesisState := icatypes.DefaultControllerGenesis() + / overwrite parameters as desired + controllerGenesisState.Params = icacontrollertypes.Params{ + ControllerEnabled: true, +} + hostGenesisState := icatypes.DefaultHostGenesis() + / overwrite parameters as desired + hostGenesisState.Params = icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... +}, +} + icaGenesisState := icatypes.NewGenesisState(controllerGenesisState, hostGenesisState) + + / set new ics27 genesis state + appState[icatypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(icaGenesisState) +``` + +### Ante decorator + +The field of type `channelkeeper.Keeper` in the `AnteDecorator` structure has been replaced with a field of type `*keeper.Keeper`: + +```diff +type AnteDecorator struct { +- k channelkeeper.Keeper ++ k *keeper.Keeper +} + +- func NewAnteDecorator(k channelkeeper.Keeper) AnteDecorator { ++ func NewAnteDecorator(k *keeper.Keeper) AnteDecorator { + return AnteDecorator{k: k} +} +``` + +## IBC Apps + +### `OnChanOpenTry` must return negotiated application version + +The `OnChanOpenTry` application callback has been modified. +The return signature now includes the application version. +IBC applications must perform application version negotiation in `OnChanOpenTry` using the counterparty version. +The negotiated application version then must be returned in `OnChanOpenTry` to core IBC. +Core IBC will set this version in the TRYOPEN channel. 
+ +### `OnChanOpenAck` will take additional `counterpartyChannelID` argument + +The `OnChanOpenAck` application callback has been modified. +The arguments now include the counterparty channel id. + +### `NegotiateAppVersion` removed from `IBCModule` interface + +Previously this logic was handled by the `NegotiateAppVersion` function. +Relayers would query this function before calling `ChanOpenTry`. +Applications would then need to verify that the passed in version was correct. +Now applications will perform this version negotiation during the channel handshake, thus removing the need for `NegotiateAppVersion`. + +### Channel state will not be set before application callback + +The channel handshake logic has been reorganized within core IBC. +Channel state will not be set in state until after the application callback is performed. +Applications must rely only on the passed in channel parameters instead of querying the channel keeper for channel state. + +### IBC application callbacks moved from `AppModule` to `IBCModule` + +Previously, IBC module callbacks were a part of the `AppModule` type. +The recommended approach is to create an `IBCModule` type and move the IBC module callbacks from `AppModule` to `IBCModule` in a separate file `ibc_module.go`. + +The mock module Go API has been broken in this release by applying the above format. +The IBC module callbacks have been moved from the mock module's `AppModule` into a new type `IBCModule`. + +As part of this release, the mock module now supports middleware testing. Please see the [README](https://github.com/cosmos/ibc-go/blob/v5.3.0/testing/README.md#middleware-testing) for more information. + +Please review the [mock](https://github.com/cosmos/ibc-go/blob/v5.3.0/testing/mock/ibc_module.go) and [transfer](https://github.com/cosmos/ibc-go/blob/v5.3.0/modules/apps/transfer/ibc_module.go) modules as examples.
Additionally, [simapp](https://github.com/cosmos/ibc-go/blob/v5.3.0/testing/simapp/app.go) provides an example of how `IBCModule` types should now be added to the IBC router in favour of `AppModule`. + +### IBC testing package + +`TestChain`s are now created with chain IDs beginning from an index of 1. Any calls to `GetChainID(0)` will now fail. Please increment all calls to `GetChainID` by 1. + +## Relayers + +`AppVersion` gRPC has been removed. +The `version` string in `MsgChanOpenTry` has been deprecated and will be ignored by core IBC. +Relayers no longer need to determine the version to use on the `ChanOpenTry` step. +IBC applications will determine the correct version using the counterparty version. + +## IBC Light Clients + +The `GetProofSpecs` function has been removed from the `ClientState` interface. This function was previously unused by core IBC. Light clients which don't use this function may remove it. diff --git a/docs/ibc/v5.4.x/migrations/v3-to-v4.mdx b/docs/ibc/v5.4.x/migrations/v3-to-v4.mdx new file mode 100644 index 00000000..a92505c6 --- /dev/null +++ b/docs/ibc/v5.4.x/migrations/v3-to-v4.mdx @@ -0,0 +1,156 @@ +--- +title: IBC-Go v3 to v4 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+ +```go +github.com/cosmos/ibc-go/v3 -> github.com/cosmos/ibc-go/v4 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS27 - Interchain Accounts + +The controller submodule now implements the 05-port `Middleware` interface instead of the 05-port `IBCModule` interface. Chains that integrate the controller submodule need to create it with the `NewIBCMiddleware` constructor function. For example: + +```diff +- icacontroller.NewIBCModule(app.ICAControllerKeeper, icaAuthIBCModule) ++ icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) +``` + +where `icaAuthIBCModule` is the Interchain Accounts authentication IBC Module. + +### ICS29 - Fee Middleware + +The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly. + +Please read the Fee Middleware [integration documentation](/docs/ibc/v5.4.x/middleware/ics29-fee/integration) for an in-depth guide on how to configure the module correctly in order to incentivize IBC packets. + +Take a look at the following diff for an [example setup](https://github.com/cosmos/ibc-go/pull/1432/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08aL366) of how to incentivize ics27 channels. + +### Migration to fix support for base denoms with slashes + +As part of [v1.5.0](https://github.com/cosmos/ibc-go/releases/tag/v1.5.0), [v2.3.0](https://github.com/cosmos/ibc-go/releases/tag/v2.3.0) and [v3.1.0](https://github.com/cosmos/ibc-go/releases/tag/v3.1.0) a [migration handler code sample was documented](/docs/ibc/v5.4.x/migrations/support-denoms-with-slashes#upgrade-proposal) that needs to run in order to correct the trace information of coins transferred using ICS20 whose base denom contains slashes.
+ +Based on feedback from the community we now provide an improved solution to run the same migration that does not require copying a large piece of code over from the migration document, but instead requires only adding a one-line upgrade handler. + +If the chain will migrate to supporting base denoms with slashes, it must set the appropriate params during the execution of the upgrade handler in `app.go`: + +```go +app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / transfer module consensus version has been bumped to 2 + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect. + +E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`. + +This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes. + +## IBC Apps + +### ICS03 - Connection + +Crossing hellos have been removed from 03-connection handshake negotiation. +`PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated and is no longer used by core IBC. + +`NewMsgConnectionOpenTry` no longer takes in the `PreviousConnectionId` as crossing hellos are no longer supported. A non-empty `PreviousConnectionId` will fail basic validation for this message. + +### ICS04 - Channel + +The `WriteAcknowledgement` API now takes the `exported.Acknowledgement` type instead of passing in the acknowledgement byte array directly.
+This is an API breaking change and as such IBC application developers will have to update any calls to `WriteAcknowledgement`.
+
+The `OnChanOpenInit` application callback has been modified.
+The return signature now includes the application version as detailed in the latest IBC [spec changes](https://github.com/cosmos/ibc/pull/629).
+
+The `NewErrorAcknowledgement` method signature has changed.
+It now accepts an `error` rather than a `string`. This was done in order to prevent accidental state changes.
+All error acknowledgements now contain a deterministic ABCI code and error message. It is the responsibility of the application developer to emit error details in events.
+
+Crossing hellos have been removed from 04-channel handshake negotiation.
+IBC applications no longer need to account for already claimed capabilities in the `OnChanOpenTry` callback. The capability provided by core IBC must be able to be claimed without error.
+`PreviousChannelId` in `MsgChannelOpenTry` has been deprecated and is no longer used by core IBC.
+
+`NewMsgChannelOpenTry` no longer takes in the `PreviousChannelId` as crossing hellos are no longer supported. A non-empty `PreviousChannelId` will fail basic validation for this message.
+
+### ICS27 - Interchain Accounts
+
+The `RegisterInterchainAccount` API has been modified to include an additional `version` argument. This change has been made in order to support ICS29 fee middleware, for relayer incentivization of ICS27 packets.
+Consumers of the `RegisterInterchainAccount` API are now expected to build the appropriate JSON encoded version string themselves and pass it accordingly.
+This should be constructed within the interchain accounts authentication module which leverages the APIs exposed via the interchain accounts `controllerKeeper`.
If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(feeEnabledVersion)); err != nil { + return err +} +``` + +## Relayers + +When using the `DenomTrace` gRPC, the full IBC denomination with the `ibc/` prefix may now be passed in. + +Crossing hellos are no longer supported by core IBC for 03-connection and 04-channel. 
The handshake should be completed in the logical 4 step process (INIT, TRY, ACK, CONFIRM). diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/active-channels.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/active-channels.mdx new file mode 100644 index 00000000..3dc4ef00 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/interchain-accounts/active-channels.mdx @@ -0,0 +1,42 @@ +--- +title: Active Channels +description: >- + The Interchain Accounts module uses ORDERED channels to maintain the order of + transactions when sending packets from a controller to a host chain. A + limitation when using ORDERED channels is that when a packet times out the + channel will be closed. +--- + +The Interchain Accounts module uses [ORDERED channels](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#ordering) to maintain the order of transactions when sending packets from a controller to a host chain. A limitation when using ORDERED channels is that when a packet times out the channel will be closed. + +In the case of a channel closing, a controller chain needs to be able to regain access to the interchain account registered on this channel. `Active Channels` enable this functionality. + +When an Interchain Account is registered using `MsgRegisterInterchainAccount`, a new channel is created on a particular port. During the `OnChanOpenAck` and `OnChanOpenConfirm` steps (on controller & host chain respectively) the `Active Channel` for this interchain account is stored in state. + +It is possible to create a new channel using the same controller chain portID if the previously set `Active Channel` is now in a `CLOSED` state. 
This channel creation can be initialized programmatically by sending a new `MsgChannelOpenInit` message like so: + +```go +msg := channeltypes.NewMsgChannelOpenInit(portID, string(versionBytes), channeltypes.ORDERED, []string{ + connectionID +}, icatypes.HostPortID, authtypes.NewModuleAddress(icatypes.ModuleName).String()) + handler := keeper.msgRouter.Handler(msg) + +res, err := handler(ctx, msg) + if err != nil { + return err +} +``` + +Alternatively, any relayer operator may initiate a new channel handshake for this interchain account once the previously set `Active Channel` is in a `CLOSED` state. This is done by initiating the channel handshake on the controller chain using the same portID associated with the interchain account in question. + +It is important to note that once a channel has been opened for a given interchain account, new channels can not be opened for this account until the currently set `Active Channel` is set to `CLOSED`. + +## Future improvements + +Future versions of the ICS-27 protocol and the Interchain Accounts module will likely use a new channel type that provides ordering of packets without the channel closing in the event of a packet timing out, thus removing the need for `Active Channels` entirely. 
+The following is a list of issues which will provide the infrastructure to make this possible:
+
+* [IBC Channel Upgrades](https://github.com/cosmos/ibc-go/issues/1599)
+* [Implement ORDERED\_ALLOW\_TIMEOUT logic in 04-channel](https://github.com/cosmos/ibc-go/issues/1661)
+* [Add ORDERED\_ALLOW\_TIMEOUT as supported ordering in 03-connection](https://github.com/cosmos/ibc-go/issues/1662)
+* [Allow ICA channels to be opened as ORDERED\_ALLOW\_TIMEOUT](https://github.com/cosmos/ibc-go/issues/1663)
diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/auth-modules.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/auth-modules.mdx
new file mode 100644
index 00000000..0febbd4e
--- /dev/null
+++ b/docs/ibc/v6.3.x/apps/interchain-accounts/auth-modules.mdx
@@ -0,0 +1,21 @@
+---
+title: Authentication Modules
+---
+
+## Synopsis
+
+Authentication modules enable application developers to perform custom logic when interacting with the Interchain Accounts controller submodule's `MsgServer`.
+
+The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified. There may exist many different types of authentication which are desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic.
+
+In ibc-go, authentication modules can communicate with the controller submodule by passing messages through `baseapp`'s `MsgServiceRouter`. To implement an authentication module, the `IBCModule` interface need not be fulfilled; it is only required to fulfill Cosmos SDK's `AppModuleBasic` interface, just like any regular Cosmos SDK application module.
+
+The authentication module must:
+
+- Authenticate interchain account owners.
+- Track the associated interchain account address for an owner.
+- Send packets on behalf of an owner (after authentication).
+
+## Integration into `app.go` file
+
+To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/v6.3.x/apps/interchain-accounts/integration#example-integration).
diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/client.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/client.mdx
new file mode 100644
index 00000000..7efeca2b
--- /dev/null
+++ b/docs/ibc/v6.3.x/apps/interchain-accounts/client.mdx
@@ -0,0 +1,181 @@
+---
+title: Client
+description: >-
+  A user can query and interact with the Interchain Accounts module using the
+  CLI. Use the --help flag to discover the available commands:
+---
+
+## CLI
+
+A user can query and interact with the Interchain Accounts module using the CLI. Use the `--help` flag to discover the available commands:
+
+```shell
+simd query interchain-accounts --help
+```
+
+> Please note that this section does not document all the available commands, but only the ones that warrant extra documentation beyond what fits in the command-line help.
+
+### Controller
+
+A user can query and interact with the controller submodule.
+
+#### Query
+
+The `query` commands allow users to query the controller submodule.
+
+```shell
+simd query interchain-accounts controller --help
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the controller submodule.
+
+```shell
+simd tx interchain-accounts controller --help
+```
+
+#### `send-tx`
+
+The `send-tx` command allows users to send a transaction on the provided connection to be executed using an interchain account on the host chain.
+
+```shell
+simd tx interchain-accounts controller send-tx [connection-id] [path/to/packet_msg.json]
+```
+
+Example:
+
+```shell
+simd tx interchain-accounts controller send-tx connection-0 packet-data.json --from cosmos1..
+```
+
+See below for example contents of `packet-data.json`.
The CLI handler will unmarshal the following into `InterchainAccountPacketData` appropriately.
+
+```json
+{
+  "type": "TYPE_EXECUTE_TX",
+  "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw",
+  "memo": ""
+}
+```
+
+Note the `data` field is a base64 encoded byte string as per the [proto3 JSON encoding specification](https://developers.google.com/protocol-buffers/docs/proto3#json).
+
+A helper CLI is provided in the host submodule which can be used to generate the packet data JSON using the counterparty chain's binary. See the [`generate-packet-data` command](#generate-packet-data) for an example.
+
+### Host
+
+A user can query and interact with the host submodule.
+
+#### Query
+
+The `query` commands allow users to query the host submodule.
+
+```shell
+simd query interchain-accounts host --help
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the host submodule.
+
+```shell
+simd tx interchain-accounts host --help
+```
+
+##### `generate-packet-data`
+
+The `generate-packet-data` command allows users to generate interchain accounts packet data for input message(s). The packet data can then be used with the controller submodule's [`send-tx` command](#send-tx).
+
+```shell
+simd tx interchain-accounts host generate-packet-data [message]
+```
+
+Example:
+
+```shell expandable
+simd tx interchain-accounts host generate-packet-data '[{
+  "@type":"/cosmos.bank.v1beta1.MsgSend",
+  "from_address":"cosmos15ccshhmp0gsx29qpqq6g4zmltnnvgmyu9ueuadh9y2nc5zj0szls5gtddz",
+  "to_address":"cosmos10h9stc5v6ntgeygf5xf945njqq5h32r53uquvw",
+  "amount": [
+    {
+      "denom": "stake",
+      "amount": "1000"
+    }
+  ]
+}]' --memo memo
+```
+
+The command accepts a single `sdk.Msg` or a list of `sdk.Msg`s that will be encoded into the output's `data` field.
+
+Example output:
+
+```json
+{
+  "type": "TYPE_EXECUTE_TX",
+  "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw",
+  "memo": "memo"
+}
+```
+
+## gRPC
+
+A user can query the interchain account module using gRPC endpoints.
+
+### Controller
+
+A user can query the controller submodule using gRPC endpoints.
+
+#### `InterchainAccount`
+
+The `InterchainAccount` endpoint allows users to query the controller submodule for the interchain account address for a given owner on a particular connection.
+
+```shell
+ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"owner":"cosmos1..","connection_id":"connection-0"}' \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount
+```
+
+#### `Params`
+
+The `Params` endpoint allows users to query the current controller submodule parameters.
+
+```shell
+ibc.applications.interchain_accounts.controller.v1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.controller.v1.Query/Params
+```
+
+### Host
+
+A user can query the host submodule using gRPC endpoints.
+
+#### `Params`
+
+The `Params` endpoint allows users to query the current host submodule parameters.
+ +```shell +ibc.applications.interchain_accounts.host.v1.Query/Params +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + ibc.applications.interchain_accounts.host.v1.Query/Params +``` diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/development.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/development.mdx new file mode 100644 index 00000000..61e701f3 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/interchain-accounts/development.mdx @@ -0,0 +1,34 @@ +--- +title: Development Use Cases +--- + +The initial version of Interchain Accounts allowed for the controller submodule to be extended by providing it with an underlying application which would handle all packet callbacks. +That functionality is now being deprecated in favor of alternative approaches. +This document will outline potential use cases and redirect each use case to the appropriate documentation. + +## Custom authentication + +Interchain accounts may be associated with alternative types of authentication relative to the traditional public/private key signing. +If you wish to develop or use Interchain Accounts with a custom authentication module and do not need to execute custom logic on the packet callbacks, we recommend you use ibc-go v6 or greater and that your custom authentication module interacts with the controller submodule via the [`MsgServer`](/docs/ibc/v6.3.x/apps/interchain-accounts/messages). + +If you wish to consume and execute custom logic in the packet callbacks, then please read the section [Packet callbacks](#packet-callbacks) below. + +## Redirection to a smart contract + +It may be desirable to allow smart contracts to control an interchain account. +To facilitate such an action, the controller submodule may be provided an underlying application which redirects to smart contract callers. +An improved design has been suggested in [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) which performs this action via middleware. 
+ +Implementers of this use case are recommended to follow the ADR 008 approach. +The underlying application may continue to be used as a short term solution for ADR 008 and the [legacy API](/docs/ibc/v6.3.x/apps/interchain-accounts/auth-modules) should continue to be utilized in such situations. + +## Packet callbacks + +If a developer requires access to packet callbacks for their use case, then they have the following options: + +1. Write a smart contract which is connected via an ADR 008 or equivalent IBC application (recommended). +2. Use the controller's underlying application to implement packet callback logic. + +In the first case, the smart contract should use the [`MsgServer`](/docs/ibc/v6.3.x/apps/interchain-accounts/messages). + +In the second case, the underlying application should use the [legacy API](/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/keeper-api). diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/integration.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/integration.mdx new file mode 100644 index 00000000..53e45efb --- /dev/null +++ b/docs/ibc/v6.3.x/apps/interchain-accounts/integration.mdx @@ -0,0 +1,198 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate Interchain Accounts host and controller functionality to your chain. The following document only applies for Cosmos SDK chains. + +The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule which will be used. + +Chains who wish to support ICS-27 may elect to act as a host chain, a controller chain or both. 
Disabling host or controller functionality may be done statically by excluding the host or controller submodule entirely from the `app.go` file or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules (both custom or generic, such as the `x/gov`, `x/group` or `x/auth` Cosmos SDK modules) can send messages to the controller submodule's [`MsgServer`](/docs/ibc/v6.3.x/apps/interchain-accounts/messages) to register interchain accounts and send packets to the interchain account. To accomplish this, the authentication module needs to be composed with `baseapp`'s `MsgServiceRouter`. + +![ica-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/images/ica-v6.png) + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... + +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... 
+ icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +/ Create the scoped keepers for each submodule keeper and authentication keeper + scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) + scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) + scopedICAAuthKeeper := app.CapabilityKeeper.ScopeToModule(icaauthtypes.ModuleName) + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), +) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter()) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddleware(nil, app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +... 
+
+/ Register Interchain Accounts and authentication module AppModules
+app.moduleManager = module.NewManager(
+    ...
+    icaModule,
+    icaAuthModule,
+)
+
+...
+
+/ Add Interchain Accounts to begin blocker logic
+app.moduleManager.SetOrderBeginBlockers(
+    ...
+    icatypes.ModuleName,
+    ...
+)
+
+/ Add Interchain Accounts to end blocker logic
+app.moduleManager.SetOrderEndBlockers(
+    ...
+    icatypes.ModuleName,
+    ...
+)
+
+/ Add Interchain Accounts module InitGenesis logic
+app.moduleManager.SetOrderInitGenesis(
+    ...
+    icatypes.ModuleName,
+    ...
+)
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) paramskeeper.Keeper {
+    ...
+    paramsKeeper.Subspace(icahosttypes.SubModuleName)
+    paramsKeeper.Subspace(icacontrollertypes.SubModuleName)
+    ...
+}
+```
+
+If no custom authentication module is needed and a generic Cosmos SDK authentication module can be used, then all references to `ICAAuthKeeper` and `icaAuthModule` in the sample integration code above can be removed. In that case, the following code is not needed:
+
+```go
+/ Create your Interchain Accounts authentication module
+app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter())
+
+/ ICA auth AppModule
+icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper)
+```
+
+### Using submodules exclusively
+
+As described above, the Interchain Accounts application module is structured to support exclusively enabling controller or host functionality.
+This can be achieved by simply omitting either the controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`.
+Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v6.3.x/apps/interchain-accounts/parameters).
+ +The following snippets show basic examples of statically disabling submodules using `app.go`. + +#### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +#### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Optionally instantiate your custom authentication module if needed, or not otherwise +... + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(nil, app.ICAControllerKeeper) + +/ Register controller route +ibcRouter.AddRoute(icacontrollertypes.SubModuleName, icaControllerStack) +``` diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/auth-modules.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/auth-modules.mdx new file mode 100644 index 00000000..38620885 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/auth-modules.mdx @@ -0,0 +1,312 @@ +--- +title: Authentication Modules +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. + +## Synopsis + +Authentication modules play the role of the `Base Application` as described in [ICS-30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware), and enable application developers to perform custom logic when working with the Interchain Accounts controller API. + +The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. 
The type of authentication used to manage the interchain accounts remains unspecified. There may exist many different types of authentication which are desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic.
+
+In ibc-go, authentication modules are connected to the controller chain via a middleware stack. The controller submodule is implemented as [middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) and the authentication module is connected to the controller submodule as the base application of the middleware stack. To implement an authentication module, the `IBCModule` interface must be fulfilled. By implementing the controller submodule as middleware, any number of authentication modules can be created and connected to the controller submodule without writing redundant code.
+
+The authentication module must:
+
+- Authenticate interchain account owners.
+- Track the associated interchain account address for an owner.
+- Send packets on behalf of an owner (after authentication).
+
+> Please note that since ibc-go v6 the channel capability is claimed by the controller submodule and therefore it is not required for authentication modules to claim the capability in the `OnChanOpenInit` callback. When the authentication module sends packets on the channel created for the associated interchain account it can pass a `nil` capability to the legacy function `SendTx` of the controller keeper (see [section `SendTx`](/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/keeper-api#sendtx) below for more information).
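+
+Following the note above, a minimal sketch of a legacy `SendTx` call from an authentication module might look like this (the keeper wiring, `packetData`, and `timeoutTimestamp` are assumptions for illustration):

```go
/ Since ibc-go v6 the channel capability is claimed by the controller
/ submodule, so the authentication module can pass a nil capability.
seq, err := keeper.icaControllerKeeper.SendTx(ctx, nil, connectionID, portID, packetData, timeoutTimestamp)
if err != nil {
    return err
}
```

The returned sequence (`seq`) identifies the packet sent on the interchain account's channel.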
+ +## `IBCModule` implementation + +The following `IBCModule` callbacks must be implemented with appropriate custom logic: + +```go expandable +/ OnChanOpenInit implements the IBCModule interface +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / since ibc-go v6 the authentication module *must not* claim the channel capability on OnChanOpenInit + + / perform custom logic + + return version, nil +} + +/ OnChanOpenAck implements the IBCModule interface +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + / perform custom logic + + return nil +} + +/ OnChanCloseConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / perform custom logic + + return nil +} + +/ OnAcknowledgementPacket implements the IBCModule interface +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} + +/ OnTimeoutPacket implements the IBCModule interface. +func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} +``` + +The following functions must be defined to fulfill the `IBCModule` interface, but they will never be called by the controller submodule so they may error or panic. That is because in Interchain Accounts, the channel handshake is always initiated on the controller chain and packets are always sent to the host chain and never to the controller chain. 
+ +```go expandable +/ OnChanOpenTry implements the IBCModule interface +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + panic("UNIMPLEMENTED") +} + +/ OnChanOpenConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnChanCloseInit implements the IBCModule interface +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnRecvPacket implements the IBCModule interface. A successful acknowledgement +/ is returned if the packet data is successfully decoded and the receive application +/ logic returns without error. +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + panic("UNIMPLEMENTED") +} +``` + +## `OnAcknowledgementPacket` + +Controller chains will be able to access the acknowledgement written into the host chain state once a relayer relays the acknowledgement. +The acknowledgement bytes contain either the response of the execution of the message(s) on the host chain or an error. They will be passed to the auth module via the `OnAcknowledgementPacket` callback. Auth modules are expected to know how to decode the acknowledgement. 
+
+If the controller chain is connected to a host chain using the host module on ibc-go, it may interpret the acknowledgement bytes as follows:
+
+Begin by unmarshaling the acknowledgement into `sdk.TxMsgData`:
+
+```go
+var ack channeltypes.Acknowledgement
+if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil {
+    return err
+}
+
+txMsgData := &sdk.TxMsgData{}
+if err := proto.Unmarshal(ack.GetResult(), txMsgData); err != nil {
+    return err
+}
+```
+
+If the `txMsgData.Data` field is non-nil, the host chain is using SDK version `<=` v0.45.
+The auth module should interpret the `txMsgData.Data` as follows:
+
+```go expandable
+switch len(txMsgData.Data) {
+case 0:
+    / see documentation below for SDK 0.46.x or greater
+default:
+    for _, msgData := range txMsgData.Data {
+        if err := handler(msgData); err != nil {
+            return err
+        }
+    }
+    ...
+}
+```
+
+A handler will be needed to interpret what actions to perform based on the message type sent.
+A router could be used, or more simply a switch statement.
+
+```go expandable
+func handler(msgData sdk.MsgData) error {
+    switch msgData.MsgType {
+    case sdk.MsgTypeURL(&banktypes.MsgSend{}):
+        msgResponse := &banktypes.MsgSendResponse{}
+        if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+            return err
+        }
+        handleBankSendMsg(msgResponse)
+    case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{}):
+        msgResponse := &stakingtypes.MsgDelegateResponse{}
+        if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+            return err
+        }
+        handleStakingDelegateMsg(msgResponse)
+    case sdk.MsgTypeURL(&transfertypes.MsgTransfer{}):
+        msgResponse := &transfertypes.MsgTransferResponse{}
+        if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil {
+            return err
+        }
+        handleIBCTransferMsg(msgResponse)
+    }
+    return nil
+}
+```
+
+If the `txMsgData.Data` is empty, the host chain is using SDK version > v0.45.
+The auth module should interpret the `txMsgData.MsgResponses` as follows:
+
+```go
+...
+/ switch statement from above
+case 0:
+	for _, any := range txMsgData.MsgResponses {
+		if err := handleAny(any); err != nil {
+			return err
+		}
+	}
+}
+```
+
+A handler will be needed to interpret what actions to perform based on the type URL of the `Any`.
+A router could be used, or more simply a switch statement.
+It may be possible to deduplicate logic between `handler` and `handleAny`.
+
+```go expandable
+func handleAny(any *codectypes.Any) error {
+	switch any.TypeUrl {
+	case sdk.MsgTypeURL(&banktypes.MsgSend{}):
+		msgResponse, err := unpackBankMsgSendResponse(any)
+		if err != nil {
+			return err
+		}
+		handleBankSendMsg(msgResponse)
+	case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{}):
+		msgResponse, err := unpackStakingDelegateResponse(any)
+		if err != nil {
+			return err
+		}
+		handleStakingDelegateMsg(msgResponse)
+	case sdk.MsgTypeURL(&transfertypes.MsgTransfer{}):
+		msgResponse, err := unpackIBCTransferMsgResponse(any)
+		if err != nil {
+			return err
+		}
+		handleIBCTransferMsg(msgResponse)
+	default:
+		return nil
+	}
+	return nil
+}
+```
+
+## Integration into `app.go` file
+
+To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/integration#example-integration).
diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/integration.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/integration.mdx
new file mode 100644
index 00000000..9509cebf
--- /dev/null
+++ b/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/integration.mdx
@@ -0,0 +1,202 @@
+---
+title: Integration
+description: This document is deprecated and will be removed in future releases.
+---
+
+## Deprecation Notice
+
+**This document is deprecated and will be removed in future releases**.
+
+## Synopsis
+
+Learn how to integrate Interchain Accounts host and controller functionality into your chain. The following document applies only to Cosmos SDK chains.
+
+The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule that will be used.
+
+Chains that wish to support ICS-27 may elect to act as a host chain, a controller chain or both. Disabling host or controller functionality may be done statically by excluding the host or controller module entirely from the `app.go` file, or dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules.
+
+Interchain Account authentication modules are the base application of a middleware stack. The controller submodule is the middleware in this stack.
+
+![ica-pre-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/10-legacy/images/ica-pre-v6.png)
+
+> Please note that since ibc-go v6 the channel capability is claimed by the controller submodule and therefore it is not required for authentication modules to claim the capability in the `OnChanOpenInit` callback. Therefore the custom authentication module does not need a scoped keeper anymore.
+
+## Example integration
+
+```go expandable
+/ app.go
+
+/ Register the AppModule for the Interchain Accounts module and the authentication module
+/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module
+ModuleBasics = module.NewBasicManager(
+	...
+	ica.AppModuleBasic{},
+	icaauth.AppModuleBasic{},
+	...
+)
+
+...
+
+/ Add module account permissions for the Interchain Accounts module
+/ Only necessary for host chain functionality
+/ Each Interchain Account created on the host chain is derived from the module account created
+maccPerms = map[string][]string{
+	...
+	icatypes.ModuleName: nil,
+}
+
+...
+ +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +/ Create the scoped keepers for each submodule keeper and authentication keeper + scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) + scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), +) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ ICA auth IBC Module + 
icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper)
+
+/ Create controller IBC application stack and host IBC module as desired
+icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper)
+icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper)
+
+/ Register host and authentication routes
+ibcRouter.
+	AddRoute(icacontrollertypes.SubModuleName, icaControllerStack).
+	AddRoute(icahosttypes.SubModuleName, icaHostIBCModule).
+	AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack
+
+...
+
+/ Register Interchain Accounts and authentication module AppModules
+app.moduleManager = module.NewManager(
+	...
+	icaModule,
+	icaAuthModule,
+)
+
+...
+
+/ Add the Interchain Accounts module to begin blocker logic
+app.moduleManager.SetOrderBeginBlockers(
+	...
+	icatypes.ModuleName,
+	...
+)
+
+/ Add the Interchain Accounts module to end blocker logic
+app.moduleManager.SetOrderEndBlockers(
+	...
+	icatypes.ModuleName,
+	...
+)
+
+/ Add Interchain Accounts module InitGenesis logic
+app.moduleManager.SetOrderInitGenesis(
+	...
+	icatypes.ModuleName,
+	...
+)
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) paramskeeper.Keeper {
+	...
+	paramsKeeper.Subspace(icahosttypes.SubModuleName)
+	paramsKeeper.Subspace(icacontrollertypes.SubModuleName)
+	...
+```
+
+## Using submodules exclusively
+
+As described above, the Interchain Accounts application module is structured to support exclusively enabling controller or host functionality.
+This can be achieved by simply omitting either the controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`.
+Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v6.3.x/apps/interchain-accounts/parameters). + +The following snippets show basic examples of statically disabling submodules using `app.go`. + +### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Create your Interchain Accounts authentication module, setting up the Keeper, AppModule and IBCModule appropriately +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + +/ Register controller and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack +``` diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/keeper-api.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/keeper-api.mdx new file mode 100644 index 00000000..bd336aa1 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/keeper-api.mdx @@ -0,0 +1,124 @@ +--- +title: Keeper API +description: This document is deprecated and will be removed in future releases. 
+---
+
+## Deprecation Notice
+
+**This document is deprecated and will be removed in future releases**.
+
+The controller submodule keeper exposes two legacy functions that allow custom authentication modules to, respectively, register interchain accounts and send packets to the interchain account.
+
+## `RegisterInterchainAccount`
+
+The authentication module can begin registering interchain accounts by calling `RegisterInterchainAccount`:
+
+```go
+if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, connectionID, owner.String(), version); err != nil {
+	return err
+}
+
+return nil
+```
+
+The `version` argument is used to support ICS-29 fee middleware for relayer incentivization of ICS-27 packets. Consumers of `RegisterInterchainAccount` are expected to build the appropriate JSON encoded version string themselves and pass it accordingly. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that the channel handshake can proceed.
+ +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS-29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS-29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(feeEnabledVersion)); err != nil { + return err +} +``` + +## `SendTx` + +The authentication module can attempt to send a packet by calling `SendTx`: + +```go expandable +/ Authenticate owner +/ perform custom logic + +/ Construct controller portID based on interchain account owner address +portID, err := icatypes.NewControllerPortID(owner.String()) + if err != nil { + return err +} + +/ Obtain data to be sent to the host chain. 
+/ In this example, the owner of the interchain account would like to send a bank MsgSend to the host chain. +/ The appropriate serialization function should be called. The host chain must be able to deserialize the transaction. +/ If the host chain is using the ibc-go host module, `SerializeCosmosTx` should be used. + msg := &banktypes.MsgSend{ + FromAddress: fromAddr, + ToAddress: toAddr, + Amount: amt +} + +data, err := icatypes.SerializeCosmosTx(keeper.cdc, []proto.Message{ + msg +}) + if err != nil { + return err +} + +/ Construct packet data + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: data, +} + +/ Obtain timeout timestamp +/ An appropriate timeout timestamp must be determined based on the usage of the interchain account. +/ If the packet times out, the channel will be closed requiring a new channel to be created. + timeoutTimestamp := obtainTimeoutTimestamp() + +/ Send the interchain accounts packet, returning the packet sequence +/ A nil channel capability can be passed, since the controller submodule (and not the authentication module) +/ claims the channel capability since ibc-go v6. +seq, err = keeper.icaControllerKeeper.SendTx(ctx, nil, portID, packetData, timeoutTimestamp) +``` + +The data within an `InterchainAccountPacketData` must be serialized using a format supported by the host chain. +If the host chain is using the ibc-go host chain submodule, `SerializeCosmosTx` should be used. If the `InterchainAccountPacketData.Data` is serialized using a format not supported by the host chain, the packet will not be successfully received. 
diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/messages.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/messages.mdx new file mode 100644 index 00000000..1ae54888 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/interchain-accounts/messages.mdx @@ -0,0 +1,74 @@ +--- +title: Messages +description: >- + An Interchain Accounts channel handshake can be initiated using + MsgRegisterInterchainAccount: +--- + +## `MsgRegisterInterchainAccount` + +An Interchain Accounts channel handshake can be initiated using `MsgRegisterInterchainAccount`: + +```go +type MsgRegisterInterchainAccount struct { + Owner string + ConnectionID string + Version string +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). + +This message will construct a new `MsgChannelOpenInit` on chain and route it to the core IBC message server to initiate the opening step of the channel handshake. + +The controller submodule will generate a new port identifier and claim the associated port capability. The caller is expected to provide an appropriate application version string. For example, this may be an ICS-27 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0-rc0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L11) type or an ICS-29 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0-rc0/proto/ibc/applications/fee/v1/metadata.proto#L11) type with a nested application version. +If the `Version` string is omitted, the controller submodule will construct a default version string in the `OnChanOpenInit` handshake callback. + +```go +type MsgRegisterInterchainAccountResponse struct { + ChannelID string +} +``` + +The `ChannelID` is returned in the message response. 
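The failure conditions above can be mirrored in a small stateless validation sketch. The struct comes from the message definition above; the identifier check is a loose approximation of the ICS-24 rules, assumed here for illustration (ibc-go's actual `ValidateBasic` is stricter):

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
	"strings"
)

type MsgRegisterInterchainAccount struct {
	Owner        string
	ConnectionID string
	Version      string
}

// validIdentifier loosely approximates ICS-24: lowercase alphanumerics
// plus a small set of separators, within a bounded length.
var validIdentifier = regexp.MustCompile(`^[a-z0-9._+\-#\[\]<>]{2,64}$`)

func (msg MsgRegisterInterchainAccount) ValidateBasic() error {
	if strings.TrimSpace(msg.Owner) == "" {
		return errors.New("owner address cannot be empty")
	}
	if !validIdentifier.MatchString(msg.ConnectionID) {
		return errors.New("invalid connection identifier")
	}
	// Version may be empty: the controller submodule fills in a default
	// during the OnChanOpenInit handshake callback.
	return nil
}

func main() {
	msg := MsgRegisterInterchainAccount{Owner: "cosmos1...", ConnectionID: "connection-0"}
	fmt.Println(msg.ValidateBasic())
}
```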
+ +## `MsgSendTx` + +An Interchain Accounts transaction can be executed on a remote host chain by sending a `MsgSendTx` from the corresponding controller chain: + +```go +type MsgSendTx struct { + Owner string + ConnectionID string + PacketData InterchainAccountPacketData + RelativeTimeout uint64 +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `PacketData` contains an `UNSPECIFIED` type enum, the length of `Data` bytes is zero or the `Memo` field exceeds 256 characters in length. +* `RelativeTimeout` is zero. + +This message will create a new IBC packet with the provided `PacketData` and send it via the channel associated with the `Owner` and `ConnectionID`. +The `PacketData` is expected to contain a list of serialized `[]sdk.Msg` in the form of `CosmosTx`. Please note the signer field of each `sdk.Msg` must be the interchain account address. +When the packet is relayed to the host chain, the `PacketData` is unmarshalled and the messages are authenticated and executed. + +```go +type MsgSendTxResponse struct { + Sequence uint64 +} +``` + +The packet `Sequence` is returned in the message response. + +## Atomicity + +As the Interchain Accounts module supports the execution of multiple transactions using the Cosmos SDK `Msg` interface, it provides the same atomicity guarantees as Cosmos SDK-based applications, leveraging the [`CacheMultiStore`](https://docs.cosmos.network/main/learn/advanced/store#cachemultistore) architecture provided by the [`Context`](https://docs.cosmos.network/main/learn/advanced/context.html) type. + +This provides atomic execution of transactions when using Interchain Accounts, where state changes are only committed if all `Msg`s succeed. 
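The `MsgSendTx` failure conditions listed above can be sketched the same way. The types below are minimal local stand-ins for the ibc-go definitions (an assumption for illustration), reduced to the fields the rules reference:

```go
package main

import (
	"errors"
	"fmt"
)

const (
	UNSPECIFIED = iota // zero value: rejected
	EXECUTE_TX
)

type InterchainAccountPacketData struct {
	Type int
	Data []byte // serialized CosmosTx containing []sdk.Msg
	Memo string
}

type MsgSendTx struct {
	Owner           string
	ConnectionID    string
	PacketData      InterchainAccountPacketData
	RelativeTimeout uint64
}

func (msg MsgSendTx) ValidateBasic() error {
	if msg.Owner == "" {
		return errors.New("owner address cannot be empty")
	}
	if msg.ConnectionID == "" {
		return errors.New("connection identifier cannot be empty")
	}
	if msg.PacketData.Type == UNSPECIFIED {
		return errors.New("packet data type cannot be unspecified")
	}
	if len(msg.PacketData.Data) == 0 {
		return errors.New("packet data cannot be empty")
	}
	if len(msg.PacketData.Memo) > 256 {
		return errors.New("memo must not exceed 256 characters")
	}
	if msg.RelativeTimeout == 0 {
		return errors.New("relative timeout cannot be zero")
	}
	return nil
}

func main() {
	msg := MsgSendTx{
		Owner:           "cosmos1owner",
		ConnectionID:    "connection-0",
		PacketData:      InterchainAccountPacketData{Type: EXECUTE_TX, Data: []byte("serialized CosmosTx")},
		RelativeTimeout: 600_000_000_000, // ten minutes, in nanoseconds
	}
	fmt.Println(msg.ValidateBasic())
}
```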
diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/overview.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/overview.mdx
new file mode 100644
index 00000000..c4634acc
--- /dev/null
+++ b/docs/ibc/v6.3.x/apps/interchain-accounts/overview.mdx
@@ -0,0 +1,33 @@
+---
+title: Overview
+---
+
+## Synopsis
+
+Learn about what the Interchain Accounts module is
+
+## What is the Interchain Accounts module?
+
+Interchain Accounts is the Cosmos SDK implementation of the ICS-27 protocol, which enables cross-chain account management built upon IBC.
+
+- How does an interchain account differ from a regular account?
+
+Regular accounts use a private key to sign transactions. Interchain Accounts are instead controlled programmatically by counterparty chains via IBC packets.
+
+## Concepts
+
+`Host Chain`: The chain where the interchain account is registered. The host chain listens for IBC packets from a controller chain that contain instructions (e.g. Cosmos SDK messages) which the interchain account will execute.
+
+`Controller Chain`: The chain registering and controlling an account on a host chain. The controller chain sends IBC packets to the host chain to control the account.
+
+`Interchain Account`: An account on a host chain created using the ICS-27 protocol. An interchain account has all the capabilities of a normal account. However, rather than signing transactions with a private key, a controller chain sends IBC packets to the host chain that signal which transactions the interchain account should execute.
+
+`Authentication Module`: A custom application module on the controller chain that uses the Interchain Accounts module to build custom logic for the creation & management of interchain accounts.
It can be either an IBC application module using the [legacy API](/docs/ibc/v6.3.x/apps/interchain-accounts/legacy/keeper-api), or a regular Cosmos SDK application module sending messages to the controller submodule's `MsgServer` (this is the recommended approach from ibc-go v6 if access to packet callbacks is not needed). Please note that the legacy API will eventually be removed and IBC applications will not be able to use it in later releases.
+
+## SDK security model
+
+SDK modules on a chain are assumed to be trustworthy. For example, there are no checks to prevent an untrustworthy module from accessing the bank keeper.
+
+The implementation of ICS-27 in ibc-go uses this assumption in its security considerations.
+
+The implementation assumes other IBC application modules will not bind to ports within the ICS-27 namespace.
diff --git a/docs/ibc/v6.3.x/apps/interchain-accounts/parameters.mdx b/docs/ibc/v6.3.x/apps/interchain-accounts/parameters.mdx
new file mode 100644
index 00000000..089fda71
--- /dev/null
+++ b/docs/ibc/v6.3.x/apps/interchain-accounts/parameters.mdx
@@ -0,0 +1,63 @@
+---
+title: Parameters
+description: >-
+  The Interchain Accounts module contains the following on-chain parameters,
+  logically separated for each distinct submodule:
+---
+
+The Interchain Accounts module contains the following on-chain parameters, logically separated for each distinct submodule:
+
+## Controller Submodule Parameters
+
+| Key                 | Type | Default Value |
+| ------------------- | ---- | ------------- |
+| `ControllerEnabled` | bool | `true`        |
+
+### ControllerEnabled
+
+The `ControllerEnabled` parameter controls a chain's ability to service ICS-27 controller-specific logic.
This includes the sending of Interchain Accounts packet data as well as the following ICS-26 callback handlers:
+
+* `OnChanOpenInit`
+* `OnChanOpenAck`
+* `OnChanCloseConfirm`
+* `OnAcknowledgementPacket`
+* `OnTimeoutPacket`
+
+## Host Submodule Parameters
+
+| Key             | Type      | Default Value |
+| --------------- | --------- | ------------- |
+| `HostEnabled`   | bool      | `true`        |
+| `AllowMessages` | \[]string | `["*"]`       |
+
+### HostEnabled
+
+The `HostEnabled` parameter controls a chain's ability to service ICS-27 host-specific logic. This includes the following ICS-26 callback handlers:
+
+* `OnChanOpenTry`
+* `OnChanOpenConfirm`
+* `OnChanCloseConfirm`
+* `OnRecvPacket`
+
+### AllowMessages
+
+The `AllowMessages` parameter provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute by defining an allowlist using the Protobuf message type URL format.
+
+For example, a Cosmos SDK-based chain that elects to provide hosted Interchain Accounts with the ability of governance voting and staking delegations will define its parameters as follows:
+
+```json
+"params": {
+  "host_enabled": true,
+  "allow_messages": ["/cosmos.staking.v1beta1.MsgDelegate",
+  "/cosmos.gov.v1beta1.MsgVote"]
+}
+```
+
+There is also a special wildcard `"*"` value which allows any type of message to be executed by the interchain account. This must be the only value in the `allow_messages` array.
+
+```json
+"params": {
+  "host_enabled": true,
+  "allow_messages": ["*"]
+}
+```
diff --git a/docs/ibc/v6.3.x/apps/transfer/authorizations.mdx b/docs/ibc/v6.3.x/apps/transfer/authorizations.mdx
new file mode 100644
index 00000000..85eda938
--- /dev/null
+++ b/docs/ibc/v6.3.x/apps/transfer/authorizations.mdx
@@ -0,0 +1,48 @@
+---
+title: Authorizations
+---
+
+`TransferAuthorization` implements the `Authorization` interface for `ibc.applications.transfer.v1.MsgTransfer`.
It allows a granter to grant a grantee the privilege to submit `MsgTransfer` on its behalf. Please see the [Cosmos SDK docs](https://docs.cosmos.network/v0.47/modules/authz) for more details on granting privileges via the `x/authz` module.
+
+More specifically, the granter allows the grantee to transfer funds that belong to the granter over a specified channel.
+
+For the specified channel, the granter can set a spend limit of a specific denomination that they wish to allow the grantee to transfer.
+
+The granter can also specify the list of addresses that are allowed to receive funds. If the list is empty, then all addresses are allowed.
+
+It takes:
+
+* a `SourcePort` and a `SourceChannel` which together comprise the unique transfer channel identifier over which authorized funds can be transferred.
+
+* a `SpendLimit` that specifies the maximum amount of tokens the grantee can transfer. The `SpendLimit` is updated as the tokens are transferred, unless the sentinel value of the maximum value for a 256-bit unsigned integer (i.e. 2^256 - 1) is used for the amount, in which case the `SpendLimit` will not be updated (please be aware that using this sentinel value will grant the grantee the privilege to transfer **all** the tokens of a given denomination available at the granter's account). The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used. This `SpendLimit` may also be updated to increase or decrease the limit as the granter wishes.
+
+* an `AllowList` that specifies the addresses that are allowed to receive funds. If this list is empty, then all addresses are allowed to receive funds from the `TransferAuthorization`.
+ +Setting a `TransferAuthorization` is expected to fail if: + +* the spend limit is nil +* the denomination of the spend limit is an invalid coin type +* the source port ID is invalid +* the source channel ID is invalid +* there are duplicate entries in the `AllowList` + +Below is the `TransferAuthorization` message: + +```golang expandable +func NewTransferAuthorization(allocations ...Allocation) *TransferAuthorization { + return &TransferAuthorization{ + Allocations: allocations, +} +} + +type Allocation struct { + / the port on which the packet will be sent + SourcePort string + / the channel by which the packet will be sent + SourceChannel string + / spend limitation on the channel + SpendLimit sdk.Coins + / allow list of receivers, an empty allow list permits any receiver address + AllowList []string +} +``` diff --git a/docs/ibc/v6.3.x/apps/transfer/events.mdx b/docs/ibc/v6.3.x/apps/transfer/events.mdx new file mode 100644 index 00000000..1ae39041 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/transfer/events.mdx @@ -0,0 +1,49 @@ +--- +title: Events +--- + +## `MsgTransfer` + +| Type | Attribute Key | Attribute Value | +| ------------- | ------------- | --------------- | +| ibc\_transfer | sender | `{sender}` | +| ibc\_transfer | receiver | `{receiver}` | +| message | module | transfer | + +## `OnRecvPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | success | `{ackSuccess}` | +| fungible\_token\_packet | error | `{ackError}` | +| denomination\_trace | trace\_hash | `{hex\_hash}` | +| denomination\_trace | denom | `{voucherDenom}` | + +## `OnAcknowledgePacket` 
callback
+
+| Type                    | Attribute Key   | Attribute Value  |
+| ----------------------- | --------------- | ---------------- |
+| fungible\_token\_packet | module          | transfer         |
+| fungible\_token\_packet | sender          | `{sender}`       |
+| fungible\_token\_packet | receiver        | `{receiver}`     |
+| fungible\_token\_packet | denom           | `{denom}`        |
+| fungible\_token\_packet | amount          | `{amount}`       |
+| fungible\_token\_packet | memo            | `{memo}`         |
+| fungible\_token\_packet | acknowledgement | `{ack.String()}` |
+| fungible\_token\_packet | success / error | `{ack.Response}` |
+
+## `OnTimeoutPacket` callback
+
+| Type                    | Attribute Key    | Attribute Value |
+| ----------------------- | ---------------- | --------------- |
+| fungible\_token\_packet | module           | transfer        |
+| fungible\_token\_packet | refund\_receiver | `{receiver}`    |
+| fungible\_token\_packet | denom            | `{denom}`       |
+| fungible\_token\_packet | amount           | `{amount}`      |
+| fungible\_token\_packet | memo             | `{memo}`        |
diff --git a/docs/ibc/v6.3.x/apps/transfer/messages.mdx b/docs/ibc/v6.3.x/apps/transfer/messages.mdx
new file mode 100644
index 00000000..58410ca1
--- /dev/null
+++ b/docs/ibc/v6.3.x/apps/transfer/messages.mdx
@@ -0,0 +1,42 @@
+---
+title: Messages
+description: 'A fungible token cross chain transfer is achieved by using the MsgTransfer:'
+---
+
+## `MsgTransfer`
+
+A fungible token cross chain transfer is achieved by using the `MsgTransfer`:
+
+```go
+type MsgTransfer struct {
+	SourcePort       string
+	SourceChannel    string
+	Token            sdk.Coin
+	Sender           string
+	Receiver         string
+	TimeoutHeight    ibcexported.Height
+	TimeoutTimestamp uint64
+	Memo             string
+}
+```
+
+This message is expected to fail if:
+
+* `SourcePort` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+* `SourceChannel` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+* `Token` is invalid:
+  * `Token.Amount` is not positive.
+  * `Token.Denom` is not a valid IBC denomination as per [ADR 001 - Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md).
+* `Sender` is empty.
+* `Receiver` is empty.
+* `TimeoutHeight` and `TimeoutTimestamp` are both zero.
+
+This message will send a fungible token to the counterparty chain represented by the counterparty Channel End connected to the Channel End with the identifiers `SourcePort` and `SourceChannel`.
+
+The denomination provided for transfer should correspond to the same denomination represented on this chain. The prefixes will be added as necessary by the receiving chain.
+
+### Memo
+
+The memo field was added to allow applications and users to attach metadata to transfer packets. The field is optional and may be left empty. When it is used to attach metadata for a particular middleware, the memo field should be represented as a JSON object where different middlewares use different JSON keys.
+
+You can find more information about applications that use the memo field in the [chain registry](https://github.com/cosmos/chain-registry/blob/master/_memo_keys/ICS20_memo_keys.json).
diff --git a/docs/ibc/v6.3.x/apps/transfer/metrics.mdx b/docs/ibc/v6.3.x/apps/transfer/metrics.mdx
new file mode 100644
index 00000000..7ef82c88
--- /dev/null
+++ b/docs/ibc/v6.3.x/apps/transfer/metrics.mdx
@@ -0,0 +1,13 @@
+---
+title: Metrics
+description: The IBC transfer application module exposes the following set of metrics.
+---
+
+The IBC transfer application module exposes the following set of metrics.
+ +| Metric | Description | Unit | Type | +| :---------------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | +| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | +| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | +| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | +| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | diff --git a/docs/ibc/v6.3.x/apps/transfer/overview.mdx b/docs/ibc/v6.3.x/apps/transfer/overview.mdx new file mode 100644 index 00000000..90fcfee1 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/transfer/overview.mdx @@ -0,0 +1,125 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the token Transfer module is + +## What is the Transfer module? + +Transfer is the Cosmos SDK implementation of the [ICS-20](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer) protocol, which enables cross-chain fungible token transfers. + +## Concepts + +### Acknowledgements + +ICS20 uses the recommended acknowledgement format as specified by [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope). + +A successful receive of a transfer packet will result in a Result Acknowledgement being written +with the value `[]byte{byte(1)}` in the `Response` field. + +An unsuccessful receive of a transfer packet will result in an Error Acknowledgement being written +with the error message in the `Response` field. + +### Denomination trace + +The denomination trace corresponds to the information that allows a token to be traced back to its +origin chain. 
It contains a sequence of port and channel identifiers ordered from the most recent to +the oldest in the timeline of transfers. + +This information is included on the token denomination field in the form of a hash to prevent an +unbounded denomination length. For example, the token `transfer/channelToA/uatom` will be displayed +as `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2`. + +Each send to any chain other than the one it was previously received from is a movement forwards in +the token's timeline. This causes trace to be added to the token's history and the destination port +and destination channel to be prefixed to the denomination. In these instances the sender chain is +acting as the "source zone". When the token is sent back to the chain it previously received from, the +prefix is removed. This is a backwards movement in the token's timeline and the sender chain is +acting as the "sink zone". + +It is strongly recommended to read the full details of [ADR 001: Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md) to understand the implications and context of the IBC token representations. + +## UX suggestions for clients + +For clients (wallets, exchanges, applications, block explorers, etc) that want to display the source of the token, it is recommended to use the following alternatives for each of the cases below: + +### Direct connection + +If the denomination trace contains a single identifier prefix pair (as in the example above), then +the easiest way to retrieve the chain and light client identifier is to map the trace information +directly. In summary, this requires querying the channel from the denomination trace identifiers, +and then the counterparty client state using the counterparty port and channel identifiers from the +retrieved channel. + +A general pseudo algorithm would look like the following: + +1. Query the full denomination trace. +2. 
Query the channel with the `portID/channelID` pair, which corresponds to the first destination of the + token. +3. Query the client state using the identifiers pair. Note that this query will return a `"Not +Found"` response if the current chain is not connected to this channel. +4. Retrieve the client identifier or chain identifier from the client state (e.g. on + Tendermint clients) and store it locally. + +Using the gRPC gateway client service, the steps above would be as follows, with a given IBC token `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` stored on `chainB`: + +1. `GET /ibc/apps/transfer/v1/denom_traces/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` -> `{"path": "transfer/channelToA", "base_denom": "uatom"}` +2. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer/client_state` -> `{"client_id": "clientA", "chain-id": "chainA", ...}` +3. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer` -> `{"channel_id": "channelToA", "port_id": "transfer", "counterparty": {"channel_id": "channelToB", "port_id": "transfer"}, ...}` +4. `GET /ibc/apps/transfer/v1/channels/channelToB/ports/transfer/client_state` -> `{"client_id": "clientB", "chain-id": "chainB", ...}` + +Then, the token transfer chain path for the `uatom` denomination would be: `chainA` -> `chainB`. + +### Multiple hops + +The multiple channel hops case applies when the token has passed through multiple chains between the original source and final destination chains. + +The IBC protocol doesn't know the topology of the overall network (i.e. connections between chains and identifier names between them). For this reason, in the multiple hops case, a particular chain in the timeline of the individual transfers can't query the chain and client identifiers of the other chains. + +Take for example the following sequence of transfers `A -> B -> C` for an IBC token, with a final prefix path (trace info) of `transfer/channelChainC/transfer/channelChainB`.
What the paragraph above means is that even if chain `C` is directly connected to chain `A`, the port and channel identifiers that chain `B` uses to connect to chain `A` (e.g. `transfer/channelChainA`) can be completely different from the ones that chain `C` uses to connect to chain `A` (e.g. `transfer/channelToChainA`). + +Thus, the IBC team recommends the following solutions for clients: + +- **Connect to all chains**: Connecting to all the chains in the timeline would allow clients to + perform the queries outlined in the [direct connection](#direct-connection) section on each + relevant chain. By repeatedly following the port and channel denomination trace transfer timeline, + clients should always be able to find all the relevant identifiers. This comes at the tradeoff + that the client must connect to nodes on each of the chains in order to perform the queries. +- **Relayer as a Service (RaaS)**: A longer term solution is to use/create a relayer service that + could map the denomination trace to the chain path timeline for each token (i.e. `origin chain -> +chain #1 -> ... -> chain #(n-1) -> final chain`). These services could provide Merkle proofs in + order to allow clients to optionally verify the path timeline correctness for themselves by + running light clients. If the proofs are not verified, they should be considered trusted + third-party services. Additionally, clients would be advised in the future to use RaaS that + support the largest number of connections between chains in the ecosystem. Unfortunately, none of + the existing public relayers (in [Golang](https://github.com/cosmos/relayer) and + [Rust](https://github.com/informalsystems/ibc-rs)) provide this service to clients. + + + The only viable alternative for clients (at the time of writing) to trace tokens + with multiple connection hops is to connect to all chains directly and + perform the relevant queries on each of them in sequence.
+ + +## Locked funds + +In some [exceptional cases](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md#exceptional-cases), a client state associated with a given channel cannot be updated. As a result, funds from fungible tokens in that channel will be permanently locked and can no longer be transferred. + +To mitigate this, a client update governance proposal can be submitted to update the frozen client +with a new valid header. Once the proposal passes, the client state will be unfrozen and the funds +from the associated channels will then be unlocked. This mechanism only applies to clients that +allow updates via governance, such as Tendermint clients. + +In addition, it's important to mention that a token must be sent back along the exact route +that it took originally in order to return it to its original form on the source chain (e.g. the +Cosmos Hub for `uatom`). Sending a token back to the same chain across a different channel will +**not** move the token back across its timeline. If a channel in the chain history closes before the +token can be sent back across that channel, then the token will not be returnable to its original +form. + +## Security considerations + +For safety, no other module must be capable of minting tokens with the `ibc/` prefix. The IBC +transfer module needs a subset of the denomination space that only it can create tokens in.
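
The hashed denomination described in the denomination trace section above can be reproduced in a few lines of Go. This is an illustrative sketch of the ADR 001 scheme (SHA-256 over the full trace path), not the ibc-go implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// IBCDenom derives the `ibc/...` denomination for a token by hashing the
// full denomination trace path (trace + "/" + base denom) with SHA-256,
// as described in ADR 001. Illustrative sketch, not the ibc-go code.
func IBCDenom(tracePath, baseDenom string) string {
	hash := sha256.Sum256([]byte(tracePath + "/" + baseDenom))
	return "ibc/" + strings.ToUpper(hex.EncodeToString(hash[:]))
}

func main() {
	// Per the example in the denomination trace section, this should
	// print the ibc/7F1D... denom for transfer/channelToA/uatom.
	fmt.Println(IBCDenom("transfer/channelToA", "uatom"))
}
```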
diff --git a/docs/ibc/v6.3.x/apps/transfer/params.mdx b/docs/ibc/v6.3.x/apps/transfer/params.mdx new file mode 100644 index 00000000..f1c78927 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/transfer/params.mdx @@ -0,0 +1,29 @@ +--- +title: Params +description: 'The IBC transfer application module contains the following parameters:' +--- + +The IBC transfer application module contains the following parameters: + +| Key | Type | Default Value | +| ---------------- | ---- | ------------- | +| `SendEnabled` | bool | `true` | +| `ReceiveEnabled` | bool | `true` | + +## `SendEnabled` + +The transfers enabled parameter controls send cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred from the chain, set the `SendEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. + +## `ReceiveEnabled` + +The transfers enabled parameter controls receive cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred to the chain, set the `ReceiveEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. 
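
As a rough sketch, the two parameters above can be modeled as a struct carrying the defaults from the table. The field names mirror the table and are assumptions, not a copy of the ibc-go source:

```go
package main

import "fmt"

// Params mirrors the transfer module parameter table above. Sketch only;
// see the transfer module's types package for the canonical definition.
type Params struct {
	SendEnabled    bool
	ReceiveEnabled bool
}

// DefaultParams returns the default values listed in the table.
func DefaultParams() Params {
	return Params{SendEnabled: true, ReceiveEnabled: true}
}

func main() {
	fmt.Printf("%+v\n", DefaultParams()) // {SendEnabled:true ReceiveEnabled:true}
}
```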
diff --git a/docs/ibc/v6.3.x/apps/transfer/state-transitions.mdx b/docs/ibc/v6.3.x/apps/transfer/state-transitions.mdx new file mode 100644 index 00000000..fcc9169c --- /dev/null +++ b/docs/ibc/v6.3.x/apps/transfer/state-transitions.mdx @@ -0,0 +1,35 @@ +--- +title: State Transitions +description: >- + A successful fungible token send has two state transitions depending if the + transfer is a movement forward or backwards in the token's timeline: +--- + +## Send fungible tokens + +A successful fungible token send has two state transitions depending if the transfer is a movement forward or backwards in the token's timeline: + +1. Sender chain is the source chain, *i.e* a transfer to any chain other than the one it was previously received from is a movement forwards in the token's timeline. This results in the following state transitions: + + * The coins are transferred to an escrow address (i.e locked) on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +2. Sender chain is the sink chain, *i.e* the token is sent back to the chain it previously received from. This is a backwards movement in the token's timeline. This results in the following state transitions: + + * The coins (vouchers) are burned on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +## Receive fungible tokens + +A successful fungible token receive has two state transitions depending if the transfer is a movement forward or backwards in the token's timeline: + +1. Receiver chain is the source chain. This is a backwards movement in the token's timeline. This results in the following state transitions: + + * The leftmost port and channel identifier pair is removed from the token denomination prefix. + * The tokens are unescrowed and sent to the receiving address. + +2. Receiver chain is the sink chain. This is a movement forwards in the token's timeline. 
This results in the following state transitions: + + * Token vouchers are minted by prefixing the destination port and channel identifiers to the trace information. + * The receiving chain stores the new trace information in the store (if not set already). + * The vouchers are sent to the receiving address. diff --git a/docs/ibc/v6.3.x/apps/transfer/state.mdx b/docs/ibc/v6.3.x/apps/transfer/state.mdx new file mode 100644 index 00000000..76c7af41 --- /dev/null +++ b/docs/ibc/v6.3.x/apps/transfer/state.mdx @@ -0,0 +1,12 @@ +--- +title: State +description: >- + The IBC transfer application module keeps state of the port to which the + module is bound and the denomination trace information as outlined in ADR + 001. +--- + +The IBC transfer application module keeps state of the port to which the module is bound and the denomination trace information as outlined in [ADR 001](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md). + +* `Port`: `0x01 -> ProtocolBuffer(string)` +* `DenomTrace`: `0x02 | []bytes(traceHash) -> ProtocolBuffer(DenomTrace)` diff --git a/docs/ibc/v6.3.x/changelog/release-notes.mdx b/docs/ibc/v6.3.x/changelog/release-notes.mdx new file mode 100644 index 00000000..560d697a --- /dev/null +++ b/docs/ibc/v6.3.x/changelog/release-notes.mdx @@ -0,0 +1,21 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + + This page tracks all releases and changes from the + [cosmos/ibc-go](https://github.com/cosmos/ibc-go) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md#unreleased) + section. + + + + + ### Improvements * (testing) + [\#6179](https://github.com/cosmos/ibc-go/pull/6179) Add `Version` to + tendermint headers in ibctesting.
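
The forward/backward (source/sink) distinction that drives the transfer state transitions above reduces to a prefix check on the token denomination. A hedged sketch of the rule; the function name is illustrative, not the exact ibc-go API:

```go
package main

import (
	"fmt"
	"strings"
)

// senderChainIsSource reports whether a send is a forward movement in the
// token's timeline. It is, unless the denom carries the voucher prefix of
// the channel end it is being sent over, i.e. the token is being returned
// to the chain it previously arrived from (the sink case).
func senderChainIsSource(sourcePort, sourceChannel, denom string) bool {
	voucherPrefix := sourcePort + "/" + sourceChannel + "/"
	return !strings.HasPrefix(denom, voucherPrefix)
}

func main() {
	// Native token leaving its home chain: escrowed on send.
	fmt.Println(senderChainIsSource("transfer", "channel-0", "uatom")) // true
	// Voucher returning over the channel it arrived on: burned on send.
	fmt.Println(senderChainIsSource("transfer", "channel-0", "transfer/channel-0/uatom")) // false
}
```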
+ diff --git a/docs/ibc/v6.3.x/ibc/apps/apps.mdx b/docs/ibc/v6.3.x/ibc/apps/apps.mdx new file mode 100644 index 00000000..e5d61824 --- /dev/null +++ b/docs/ibc/v6.3.x/ibc/apps/apps.mdx @@ -0,0 +1,51 @@ +--- +title: IBC Applications +--- + +## Synopsis + +Learn how to build custom IBC application modules that enable packets to be sent to and received from other IBC-enabled chains. + +This document serves as a guide for developers who want to write their own Inter-blockchain Communication Protocol (IBC) applications for custom use cases. + +Due to the modular design of the IBC protocol, IBC application developers do not need to concern themselves with the low-level details of clients, connections, and proof verification. Nevertheless, an overview of these low-level concepts can be found in [the Overview section](/docs/ibc/v6.3.x/ibc/overview). +The document goes into detail on the abstraction layer most relevant for application developers (channels and ports), and describes how to define your own custom packets, `IBCModule` callbacks and more to make an application module IBC ready. + +**To have your module interact over IBC you must:** + +- implement the `IBCModule` interface, i.e.: + - channel (opening) handshake callbacks + - channel closing handshake callbacks + - packet callbacks +- bind to a port(s) +- add keeper methods +- define your own packet data and acknowledgement structs as well as how to encode/decode them +- add a route to the IBC router + +The following sections provide a more detailed explanation of how to write an IBC application +module correctly corresponding to the listed steps. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v6.3.x/ibc/overview) +- [IBC default integration](/docs/ibc/v6.3.x/middleware/ics29-fee/integration) + + + +## Working example + +For a real working example of an IBC application, you can look through the `ibc-transfer` module +which implements everything discussed in this section. 
+ +Here are the useful parts of the module to look at: + +[Binding to transfer +port](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/genesis.go) + +[Sending transfer +packets](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/relay.go) + +[Implementing IBC +callbacks](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/ibc_module.go) diff --git a/docs/ibc/v6.3.x/ibc/apps/bindports.mdx b/docs/ibc/v6.3.x/ibc/apps/bindports.mdx new file mode 100644 index 00000000..fbc86de8 --- /dev/null +++ b/docs/ibc/v6.3.x/ibc/apps/bindports.mdx @@ -0,0 +1,134 @@ +--- +title: Bind ports +--- + +## Synopsis + +Learn what changes to make to bind modules to their ports on initialization. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v6.3.x/ibc/overview) +- [IBC default integration](/docs/ibc/v6.3.x/middleware/ics29-fee/integration) + + +Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented: + +> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather it refers to the application module the port binds. For IBC Modules built with the Cosmos SDK, it defaults to the module's name and for Cosmwasm contracts it defaults to the contract address. + +1. Add port ID to the `GenesisState` proto definition: + + ```protobuf + message GenesisState { + string port_id = 1; + / other fields + } + ``` + +2. Add port ID as a key to the module store: + + ```go expandable + / x//types/keys.go + const ( + / ModuleName defines the IBC Module name + ModuleName = "moduleName" + + / Version defines the current version the IBC + / module supports + Version = "moduleVersion-1" + + / PortID is the default port id that module binds to + PortID = "portID" + + / ... + ) + ``` + +3. 
Add port ID to `x//types/genesis.go`: + + ```go expandable + / in x//types/genesis.go + + / DefaultGenesisState returns a GenesisState with "transfer" as the default PortID. + func DefaultGenesisState() *GenesisState { + return &GenesisState{ + PortId: PortID, + / additional k-v fields + } + } + + / Validate performs basic genesis state validation returning an error upon any + / failure. + func (gs GenesisState) + + Validate() + + error { + if err := host.PortIdentifierValidator(gs.PortId); err != nil { + return err + } + /additional validations + + return gs.Params.Validate() + } + ``` + +4. Bind to port(s) in the module keeper's `InitGenesis`: + + ```go expandable + / InitGenesis initializes the ibc-module state and binds to PortID. + func (k Keeper) + + InitGenesis(ctx sdk.Context, state types.GenesisState) { + k.SetPort(ctx, state.PortId) + + / ... + + / Only try to bind to port if it is not already bound, since we may already own + / port capability from capability InitGenesis + if !k.IsBound(ctx, state.PortId) { + / transfer module binds to the transfer port on InitChain + / and claims the returned capability + err := k.BindPort(ctx, state.PortId) + if err != nil { + panic(fmt.Sprintf("could not claim port capability: %v", err)) + } + + } + + / ... + } + ``` + + With: + + ```go expandable + / IsBound checks if the module is already bound to the desired port + func (k Keeper) + + IsBound(ctx sdk.Context, portID string) + + bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + + return ok + } + + / BindPort defines a wrapper function for the port Keeper's function in + / order to expose it to module's InitGenesis function + func (k Keeper) + + BindPort(ctx sdk.Context, portID string) + + error { + cap := k.portKeeper.BindPort(ctx, portID) + + return k.ClaimCapability(ctx, cap, host.PortPath(portID)) + } + ``` + + The module binds to the desired port(s) and returns the capabilities. 
+ + In the above we find reference to keeper methods that wrap other keeper functionality, in the next section the keeper methods that need to be implemented will be defined. diff --git a/docs/ibc/v6.3.x/ibc/apps/ibcmodule.mdx b/docs/ibc/v6.3.x/ibc/apps/ibcmodule.mdx new file mode 100644 index 00000000..8b07fe4b --- /dev/null +++ b/docs/ibc/v6.3.x/ibc/apps/ibcmodule.mdx @@ -0,0 +1,390 @@ +--- +title: Implement `IBCModule` interface and callbacks +--- + +## Synopsis + +Learn how to implement the `IBCModule` interface and all of the callbacks it requires. + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This interface contains all of the callbacks IBC expects modules to implement. They include callbacks related to channel handshake, closing and packet callbacks (`OnRecvPacket`, `OnAcknowledgementPacket` and `OnTimeoutPacket`). + +```go +/ IBCModule implements the ICS26 interface for given the keeper. +/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go, +/ but ultimately file structure is up to the developer +type IBCModule struct { + keeper keeper.Keeper +} +``` + +Additionally, in the `module.go` file, add the following line: + +```go +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + / Add this line + _ porttypes.IBCModule = IBCModule{ +} +) +``` + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v6.3.x/ibc/overview) +- [IBC default integration](/docs/ibc/v6.3.x/ibc/integration) + + + +## Channel handshake callbacks + +This section will describe the callbacks that are called during channel handshake execution. Among other things, it will claim channel capabilities passed on from core IBC. For a refresher on capabilities, check [the Overview section](/docs/ibc/v6.3.x/ibc/overview#capabilities). 
+ +Here are the channel handshake callbacks that modules are expected to implement: + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `checkArguments` and `negotiateAppVersion` functions. + +```go expandable +/ Called by IBC Handler on MsgOpenInit +func (im IBCModule) + +OnChanOpenInit(ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + / Examples: + / - Abort if order == UNORDERED, + / - Abort if version is unsupported + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenInit must claim the channelCapability that IBC passes into the callback + if err := im.keeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return "", err +} + +return version, nil +} + +/ Called by IBC Handler on MsgOpenTry +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / ... 
do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenTry must claim the channelCapability that IBC passes into the callback + if err := im.keeper.scopedKeeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return err +} + + / Construct application version + / IBC applications must return the appropriate application version + / This can be a simple string or it can be a complex version constructed + / from the counterpartyVersion and other arguments. + / The version returned will be the channel version used for both channel ends. + appVersion := negotiateAppVersion(counterpartyVersion, args) + +return appVersion, nil +} + +/ Called by IBC Handler on MsgOpenAck +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + if counterpartyVersion != types.Version { + return sdkerrors.Wrapf(types.ErrInvalidVersion, "invalid counterparty version: %s, expected %s", counterpartyVersion, types.Version) +} + + / do custom logic + + return nil +} + +/ Called by IBC Handler on MsgOpenConfirm +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / do custom logic + + return nil +} +``` + +The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. Closing a channel is a 2-step handshake, the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`. + +```go expandable +/ Called by IBC Handler on MsgCloseInit +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... 
do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgCloseConfirm +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} +``` + +### Channel handshake version negotiation + +Application modules are expected to verify versioning used during the channel handshake procedure. + +- `OnChanOpenInit` will verify that the relayer-chosen parameters + are valid and perform any custom `INIT` logic. + It may return an error if the chosen parameters are invalid + in which case the handshake is aborted. + If the provided version string is non-empty, `OnChanOpenInit` should return + the version string if valid or an error if the provided version is invalid. + **If the version string is empty, `OnChanOpenInit` is expected to + return a default version string representing the version(s) + it supports.** + If there is no default version string for the application, + it should return an error if the provided version is an empty string. +- `OnChanOpenTry` will verify the relayer-chosen parameters along with the + counterparty-chosen version string and perform custom `TRY` logic. + If the relayer-chosen parameters + are invalid, the callback must return an error to abort the handshake. + If the counterparty-chosen version is not compatible with this module's + supported versions, the callback must return an error to abort the handshake. + If the versions are compatible, the try callback must select the final version + string and return it to core IBC. + `OnChanOpenTry` may also perform custom initialization logic. +- `OnChanOpenAck` will error if the counterparty selected version string + is invalid and abort the handshake. It may also perform custom ACK logic. 
+ +Versions must be strings but can implement any versioning structure. If your application plans to +have linear releases then semantic versioning is recommended. If your application plans to release +various features in between major releases then it is advised to use the same versioning scheme +as IBC. This versioning scheme specifies a version identifier and compatible feature set with +that identifier. Valid version selection includes selecting a compatible version identifier with +a subset of features supported by your application for that version. The struct used for this +scheme can be found in [03-connection/types](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection/types/version.go#L16). + +Since the version type is a string, applications have the ability to do simple version verification +via string matching or they can use the already implemented versioning system and pass the proto +encoded version into each handshake call as necessary. + +ICS20 currently implements basic string matching with a single supported version. + +## Packet callbacks + +Just as IBC expects modules to implement callbacks for channel handshakes, it also expects modules to implement callbacks for handling the packet flow through a channel, as defined in the `IBCModule` interface. + +Once module A and module B are connected to each other, relayers can start relaying packets and acknowledgements back and forth on the channel. + +![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png) + +Briefly, a successful packet flow works as follows: + +1. module A sends a packet through the IBC module +2. the packet is received by module B +3. if module B writes an acknowledgement of the packet then module A will process the + acknowledgement +4. if the packet is not successfully received before the timeout, then module A processes the + packet's timeout.
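
Returning briefly to version negotiation: the basic string matching against a single supported version mentioned above (the ICS20 approach) can be sketched as follows, where `Version` and the function shape are illustrative assumptions:

```go
package main

import "fmt"

// Version is the single version string this hypothetical module supports,
// analogous to ICS20's "ics20-1". Name and value are assumptions.
const Version = "mymodule-1"

// negotiateAppVersion does basic string matching: an empty proposed
// version selects the default, anything else must match exactly.
func negotiateAppVersion(proposed string) (string, error) {
	if proposed == "" {
		return Version, nil
	}
	if proposed != Version {
		return "", fmt.Errorf("invalid version: %s, expected %s", proposed, Version)
	}
	return proposed, nil
}

func main() {
	v, _ := negotiateAppVersion("")
	fmt.Println(v) // mymodule-1
}
```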
+ +### Sending packets + +Modules **do not send packets through callbacks**, since the modules initiate the action of sending packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC +module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `EncodePacketData(customPacketData)` function. + +```go expandable +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + channelCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + + + In order to prevent modules from sending packets on channels they do not own, + IBC expects modules to pass in the correct channel capability for the packet's + source channel. + + +### Receiving packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets +invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC +keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. 
+ +The state changes that occurred during this callback will only be written if: + +- the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement +- if the acknowledgement returned is nil indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodePacketData(packet.Data)` function. + +```go expandable +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) + +ibcexported.Acknowledgement { + / Decode the packet data + packetData := DecodePacketData(packet.Data) + + / do application state changes based on packet data and return the acknowledgement + / NOTE: The acknowledgement will indicate to the IBC handler if the application + / state changes should be written via the `Success()` function. Application state + / changes are only written if the acknowledgement is successful or the acknowledgement + / returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + +return ack +} +``` + +Reminder, the `Acknowledgement` interface: + +```go +/ Acknowledgement defines the interface used to return +/ acknowledgements in the OnRecvPacket callback. +type Acknowledgement interface { + Success() + +bool + Acknowledgement() []byte +} +``` + +### Acknowledging packets + +After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can +then process the acknowledgement using the `OnAcknowledgementPacket` callback. 
The contents of the +acknowledgement are entirely up to the modules on the channel (just like the packet data); however, they +may often contain information on whether the packet was successfully processed along +with some additional data that could be useful for remediation if the packet processing failed. + +Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and +acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback +is responsible for decoding the acknowledgement and processing it. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodeAcknowledgement(acknowledgement)` and `processAck(ack)` functions. + +```go expandable +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, +) (*sdk.Result, error) { + / Decode acknowledgement + ack := DecodeAcknowledgement(acknowledgement) + + / process ack + res, err := processAck(ack) + +return res, err +} +``` + +### Timeout packets + +If the timeout for a packet is reached before the packet is successfully received or the +counterparty channel end is closed before the packet is successfully received, then the receiving +chain can no longer process it. Thus, the sending chain must process the timeout using +`OnTimeoutPacket` to handle this situation. Again, the IBC module will verify that the timeout is +indeed valid, so our module only needs to implement the state machine logic for what to do once a +timeout is reached and the packet can no longer be received.
+ +```go +func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) (*sdk.Result, error) { + / do custom timeout logic +} +``` diff --git a/docs/ibc/v6.3.x/ibc/apps/keeper.mdx b/docs/ibc/v6.3.x/ibc/apps/keeper.mdx new file mode 100644 index 00000000..2ad08966 --- /dev/null +++ b/docs/ibc/v6.3.x/ibc/apps/keeper.mdx @@ -0,0 +1,119 @@ +--- +title: Keeper +--- + +## Synopsis + +Learn how to implement the IBC Module keeper. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v6.3.x/ibc/overview) +- [IBC default integration](/docs/ibc/v6.3.x/ibc/integration) + + +In the previous sections on channel handshake callbacks and port binding in `InitGenesis`, a reference was made to keeper methods that need to be implemented when creating a custom IBC module. Below is an overview of how to define an IBC module's keeper. + +> Note that some code has been left out for clarity; to get a full code overview, please refer to [the transfer module's keeper in the ibc-go repo](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/keeper.go). + +```go expandable +/ Keeper defines the IBC app module keeper +type Keeper struct { + storeKey sdk.StoreKey + cdc codec.BinaryCodec + paramSpace paramtypes.Subspace + + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + scopedKeeper capabilitykeeper.ScopedKeeper + + / ... additional according to custom logic +} + +/ NewKeeper creates a new IBC app module Keeper instance +func NewKeeper( + / args +) + +Keeper { + / ... + + return Keeper{ + cdc: cdc, + storeKey: key, + paramSpace: paramSpace, + + channelKeeper: channelKeeper, + portKeeper: portKeeper, + scopedKeeper: scopedKeeper, + + / ... 
additional according to custom logic +} +} + +/ IsBound checks if the IBC app module is already bound to the desired port +func (k Keeper) + +IsBound(ctx sdk.Context, portID string) + +bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + +return ok +} + +/ BindPort defines a wrapper function for the port Keeper's function in +/ order to expose it to module's InitGenesis function +func (k Keeper) + +BindPort(ctx sdk.Context, portID string) + +error { + cap := k.portKeeper.BindPort(ctx, portID) + +return k.ClaimCapability(ctx, cap, host.PortPath(portID)) +} + +/ GetPort returns the portID for the IBC app module. Used in ExportGenesis +func (k Keeper) + +GetPort(ctx sdk.Context) + +string { + store := ctx.KVStore(k.storeKey) + +return string(store.Get(types.PortKey)) +} + +/ SetPort sets the portID for the IBC app module. Used in InitGenesis +func (k Keeper) + +SetPort(ctx sdk.Context, portID string) { + store := ctx.KVStore(k.storeKey) + +store.Set(types.PortKey, []byte(portID)) +} + +/ AuthenticateCapability wraps the scopedKeeper's AuthenticateCapability function +func (k Keeper) + +AuthenticateCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +bool { + return k.scopedKeeper.AuthenticateCapability(ctx, cap, name) +} + +/ ClaimCapability allows the IBC app module to claim a capability that core IBC +/ passes to it +func (k Keeper) + +ClaimCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +error { + return k.scopedKeeper.ClaimCapability(ctx, cap, name) +} + +/ ... additional according to custom logic +``` diff --git a/docs/ibc/v6.3.x/ibc/apps/packets_acks.mdx b/docs/ibc/v6.3.x/ibc/apps/packets_acks.mdx new file mode 100644 index 00000000..f4c33d60 --- /dev/null +++ b/docs/ibc/v6.3.x/ibc/apps/packets_acks.mdx @@ -0,0 +1,113 @@ +--- +title: Define packets and acks +--- + +## Synopsis + +Learn how to define custom packet and acknowledgement structs and how to encode and decode them. 
+ + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v6.3.x/ibc/overview) +- [IBC default integration](/docs/ibc/v6.3.x/ibc/integration) + + + +## Custom packets + +Modules connected by a channel must agree on what application data they are sending over the +channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up +to each application module to determine how to implement this agreement. However, for most +applications this will happen as a version negotiation during the channel handshake. While more +complex version negotiation is possible to implement inside the channel opening handshake, a very +simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go). + +Thus, a module must define its custom packet data structure, along with a well-defined way to +encode and decode it to and from `[]byte`. + +```go expandable +/ Custom packet data defined in application module +type CustomPacketData struct { + / Custom fields ... +} + +EncodePacketData(packetData CustomPacketData) []byte { + / encode packetData to bytes +} + +DecodePacketData(encoded []byte) (CustomPacketData) { + / decode from bytes to packet data +} +``` + +> Note that the `CustomPacketData` struct is defined in the proto definition and then compiled by the protobuf compiler. + +Then a module must encode its packet data before sending it through IBC. 
+ +```go expandable +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + channelCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + +A module receiving a packet must decode the `PacketData` into a structure it expects so that it can +act on it. + +```go +/ Receiving custom application packet data (in OnRecvPacket) + packetData := DecodePacketData(packet.Data) +/ handle received custom packet data +``` + +## Acknowledgements + +Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing. +In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement +will be written once the packet has been processed by the application, which may be well after the packet receipt. + +NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement +for a packet as soon as it has been received from the IBC module. + +This acknowledgement can then be relayed back to the original sender chain, which can take action +depending on the contents of the acknowledgement. + +Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and +receive acknowledgements with the IBC modules as byte strings. + +Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an +acknowledgement struct, along with encoding and decoding it, is very similar to the packet data +example above.
[ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope) +specifies a recommended format for acknowledgements. This acknowledgement type can be imported from +[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types). + +While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto): + +```protobuf expandable +/ Acknowledgement is the recommended acknowledgement format to be used by +/ app-specific protocols. +/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental +/ conflicts with other protobuf message formats used for acknowledgements. +/ The first byte of any message with this format will be the non-ASCII values +/ `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS: +/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope +message Acknowledgement { + / response contains either a result or an error and must be non-empty + oneof response { + bytes result = 21; + string error = 22; + } +} +``` diff --git a/docs/ibc/v6.3.x/ibc/apps/routing.mdx b/docs/ibc/v6.3.x/ibc/apps/routing.mdx new file mode 100644 index 00000000..90f5a3b1 --- /dev/null +++ b/docs/ibc/v6.3.x/ibc/apps/routing.mdx @@ -0,0 +1,40 @@ +--- +title: Routing +--- + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v6.3.x/ibc/overview) +- [IBC default integration](/docs/ibc/v6.3.x/ibc/integration) + + + +## Synopsis + +Learn how to hook a route to the IBC router for the custom IBC module. + +As mentioned above, modules must implement the `IBCModule` interface (which contains both channel +handshake callbacks and packet handling callbacks).
The concrete implementation of this interface +must be registered with the module name as a route on the IBC `Router`. + +```go expandable +/ app.go +func NewApp(...args) *App { +/ ... + +/ Create static IBC router, add module routes, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) +/ Note: moduleCallbacks must implement IBCModule interface +ibcRouter.AddRoute(moduleName, moduleCallbacks) + +/ Setting Router will finalize all routes by sealing router +/ No more routes can be added +app.IBCKeeper.SetRouter(ibcRouter) + +/ ... +} +``` diff --git a/docs/ibc/v6.3.x/ibc/integration.mdx b/docs/ibc/v6.3.x/ibc/integration.mdx new file mode 100644 index 00000000..3ab57303 --- /dev/null +++ b/docs/ibc/v6.3.x/ibc/integration.mdx @@ -0,0 +1,224 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate IBC to your application and send data packets to other chains. + +This document outlines the required steps to integrate and configure the [IBC +module](https://github.com/cosmos/ibc-go/tree/main/modules/core) to your Cosmos SDK application and +send fungible token transfers to other chains. + +## Integrating the IBC module + +Integrating the IBC module to your SDK-based application is straightforward. The general changes can be summarized in the following steps: + +- Add required modules to the `module.BasicManager` +- Define additional `Keeper` fields for the new modules on the `App` type +- Add the module's `StoreKeys` and initialize their `Keepers` +- Set up corresponding routers and routes for the `ibc` module +- Add the modules to the module `Manager` +- Add modules to `Begin/EndBlockers` and `InitGenesis` +- Update the module `SimulationManager` to enable simulations + +### Module `BasicManager` and `ModuleAccount` permissions + +The first step is to add the following modules to the `BasicManager`: `x/capability`, `x/ibc`, +and `x/ibc-transfer`. 
After that, we need to grant `Minter` and `Burner` permissions to +the `ibc-transfer` `ModuleAccount` to mint and burn relayed tokens. + +```go expandable +/ app.go +var ( + + ModuleBasics = module.NewBasicManager( + / ... + capability.AppModuleBasic{ +}, + ibc.AppModuleBasic{ +}, + transfer.AppModuleBasic{ +}, / i.e. the ibc-transfer module + ) + + / module account permissions + maccPerms = map[string][]string{ + / other module accounts permissions + / ... + ibctransfertypes.ModuleName: { + authtypes.Minter, authtypes.Burner +}, +} +) +``` + +### Application fields + +Then, we need to register the `Keepers` as follows: + +```go expandable +/ app.go +type App struct { + / baseapp, keys and subspaces definitions + + / other keepers + / ... + IBCKeeper *ibckeeper.Keeper / IBC Keeper must be a pointer in the app, so we can SetRouter on it correctly + TransferKeeper ibctransferkeeper.Keeper / for cross-chain fungible token transfers + + / make scoped keepers public for test purposes + ScopedIBCKeeper capabilitykeeper.ScopedKeeper + ScopedTransferKeeper capabilitykeeper.ScopedKeeper + + / ... + / module and simulation manager definitions +} +``` + +### Configure the `Keepers` + +During initialization, besides initializing the IBC `Keepers` (for the `x/ibc` and +`x/ibc-transfer` modules), we need to grant specific capabilities through the capability module +`ScopedKeepers` so that we can authenticate the object-capability permissions for each of the IBC +channels. + +```go expandable +func NewApp(...args) *App { + / define codecs and baseapp + + / add capability keeper and ScopeToModule for ibc module + app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + + / grant capabilities for the ibc and ibc-transfer modules + scopedIBCKeeper := app.CapabilityKeeper.ScopeToModule(ibchost.ModuleName) + scopedTransferKeeper := app.CapabilityKeeper.ScopeToModule(ibctransfertypes.ModuleName) + + / ... 
other modules keepers + + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, + ) + + / Create Transfer Keepers + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, + ) + transferModule := transfer.NewAppModule(app.TransferKeeper) + + / .. continues +} +``` + +### Register `Routers` + +IBC needs to know which module is bound to which port so that it can route packets to the +appropriate module and call the appropriate callbacks. The port to module name mapping is handled by +IBC's port `Keeper`. However, the mapping from module name to the relevant callbacks is accomplished +by the port +[`Router`](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/router.go) on the +IBC module. + +Adding the module routes allows the IBC handler to call the appropriate callback when processing a +channel handshake or a packet. + +Currently, a `Router` is static so it must be initialized and set correctly on app initialization. +Once the `Router` has been set, no new routes can be added. + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + / Create static IBC router, add ibc-transfer module route, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / .. continues +``` + +### Module Managers + +In order to use IBC, we need to add the new modules to the module `Manager` and to the `SimulationManager` in case your application supports simulations. 
+ +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + app.mm = module.NewManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / ... + + app.sm = module.NewSimulationManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / .. continues +``` + +### Application ABCI Ordering + +One addition from IBC is the concept of `HistoricalEntries`, which are stored on the staking module. +Each entry contains the historical information for the `Header` and `ValidatorSet` of this chain, which is stored +at each height during the `BeginBlock` call. The historical info is required to introspect the +past historical info at any given height in order to verify the light client `ConsensusState` during the +connection handshake. + +The IBC module also has +[`BeginBlock`](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client/abci.go) logic. This is optional as it is only required if your application uses the localhost client to connect two different modules from the same chain. + + + Only register the ibc module to the `SetOrderBeginBlockers` if your + application will use the localhost (*aka* loopback) client. + + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + / add staking and ibc modules to BeginBlockers + app.mm.SetOrderBeginBlockers( + / other modules ... + stakingtypes.ModuleName, ibchost.ModuleName, + ) + + / ... + + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. + app.mm.SetOrderInitGenesis( + capabilitytypes.ModuleName, + / other modules ... + ibchost.ModuleName, ibctransfertypes.ModuleName, + ) + + / .. 
continues +``` + + + **IMPORTANT**: The capability module **must** be declared first in + `SetOrderInitGenesis`. + + +That's it! You have now wired up the IBC module and can now send fungible tokens across +different chains. If you want to have a broader view of the changes, take a look at the SDK's +[`SimApp`](https://github.com/cosmos/ibc-go/blob/main/testing/simapp/app.go). diff --git a/docs/ibc/v6.3.x/ibc/middleware/develop.mdx b/docs/ibc/v6.3.x/ibc/middleware/develop.mdx new file mode 100644 index 00000000..ba4aff6d --- /dev/null +++ b/docs/ibc/v6.3.x/ibc/middleware/develop.mdx @@ -0,0 +1,465 @@ +--- +title: IBC middleware +--- + +## Synopsis + +Learn how to write your own custom middleware to wrap an IBC application, and understand how to hook different middleware to IBC base applications to form different IBC application stacks. + +This document serves as a guide for middleware developers who want to write their own middleware and for chain developers who want to use IBC middleware on their chains. + +IBC applications are designed to be self-contained modules that implement their own application-specific logic through a set of interfaces with the core IBC handlers. These core IBC handlers, in turn, are designed to enforce the correctness properties of IBC (transport, authentication, ordering) while delegating all application-specific handling to the IBC application modules. However, there are cases where some functionality may be desired by many applications, yet not appropriate to place in core IBC. + +Middleware allows developers to define the extensions as separate modules that can wrap over the base application. This middleware can thus perform its own custom logic, and pass data into the application so that it may run its logic without being aware of the middleware's existence. This allows both the application and the middleware to implement their own isolated logic while still being able to run as part of a single packet flow.
+ + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v6.3.x/ibc/overview) +- [IBC Integration](/docs/ibc/v6.3.x/ibc/integration) +- [IBC Application Developer Guide](/docs/ibc/v6.3.x/ibc/apps/apps) + + + +## Definitions + +`Middleware`: A self-contained module that sits between core IBC and an underlying IBC application during packet execution. All messages between core IBC and the underlying application must flow through the middleware, which may perform its own custom logic. + +`Underlying Application`: An underlying application is the application that is directly connected to the middleware in question. This underlying application may itself be middleware that is chained to a base application. + +`Base Application`: A base application is an IBC application that does not contain any middleware. It may be nested by zero or more middleware to form an application stack. + +`Application Stack (or stack)`: A stack is the complete set of application logic (middleware(s) + base application) that gets connected to core IBC. A stack may be just a base application, or it may be a series of middlewares that nest a base application. + +## Create a custom IBC middleware + +IBC middleware wraps an underlying IBC application and sits between core IBC and the application. It has complete control over modifying any message coming from IBC to the application, and any message coming from the application to core IBC. Thus, middleware must be completely trusted by chain developers who wish to integrate them; however, this gives them complete flexibility in modifying the application(s) they wrap. + +### Interfaces + +```go +/ Middleware implements the ICS26 Module interface +type Middleware interface { + porttypes.IBCModule / middleware has access to an underlying application which may be wrapped by more middleware + ics4Wrapper: ICS4Wrapper / middleware has access to ICS4Wrapper which may be core IBC Channel Handler or a higher-level middleware that wraps this middleware.
+} +``` + +```go expandable +/ This is implemented by ICS4 and all middleware that are wrapping a base application. +/ The base application will call `sendPacket` or `writeAcknowledgement` of the middleware directly above them +/ which will call the next middleware until it reaches the core IBC handler. +type ICS4Wrapper interface { + SendPacket( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + sourcePort string, + sourceChannel string, + timeoutHeight clienttypes.Height, + timeoutTimestamp uint64, + data []byte, + ) (sequence uint64, err error) + + WriteAcknowledgement( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + packet exported.PacketI, + ack exported.Acknowledgement, + ) error + + GetAppVersion( + ctx sdk.Context, + portID, + channelID string, + ) (string, bool) +} +``` + +### Implement `IBCModule` interface and callbacks + +The `IBCModule` is a struct that implements the [ICS-26 interface (`porttypes.IBCModule`)](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/module.go#L11-L106). It is recommended to separate these callbacks into a separate file `ibc_module.go`. As will be mentioned in the [integration section](/docs/ibc/v6.3.x/ibc/middleware/integration), this struct should be different from the struct that implements `AppModule` in case the middleware maintains its own internal state and processes separate SDK messages. + +The middleware must have access to the underlying application, and it is called before the underlying application during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback. Middleware **may** choose not to call the underlying application's callback at all, though this should generally be limited to error cases.
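The wrap-and-delegate pattern described above can be sketched in isolation. This is a minimal sketch, not ibc-go code: `PacketHandler`, `IBCMiddleware`, and `baseApp` are illustrative stand-ins for `porttypes.IBCModule`, a middleware struct, and a base application, reduced to a single callback.

```go
package main

import "fmt"

// PacketHandler is a hypothetical stand-in for the porttypes.IBCModule
// interface; only one callback is shown to keep the sketch short.
type PacketHandler interface {
	OnRecvPacket(data []byte) string
}

// IBCMiddleware wraps an underlying application (which may itself be
// middleware) and runs its own logic around each callback.
type IBCMiddleware struct {
	app PacketHandler // the wrapped underlying application
}

func NewIBCMiddleware(app PacketHandler) IBCMiddleware {
	return IBCMiddleware{app: app}
}

// OnRecvPacket performs custom middleware logic, then delegates to the
// wrapped application's callback, and may modify the result on the way out.
func (im IBCMiddleware) OnRecvPacket(data []byte) string {
	// custom pre-processing would go here
	return "mw(" + im.app.OnRecvPacket(data) + ")"
}

// baseApp is a toy base application at the bottom of the stack.
type baseApp struct{}

func (baseApp) OnRecvPacket(data []byte) string { return "app:" + string(data) }

func main() {
	// Build a two-layer stack: middleware wrapping the base application.
	stack := NewIBCMiddleware(baseApp{})
	fmt.Println(stack.OnRecvPacket([]byte("ping"))) // mw(app:ping)
}
```

Because the middleware itself satisfies the same interface, stacks of arbitrary depth compose the same way: `NewIBCMiddleware(NewIBCMiddleware(baseApp{}))` is also a valid `PacketHandler`.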
+ +In the case where the IBC middleware expects to speak to a compatible IBC middleware on the counterparty chain, it must use the channel handshake to negotiate the middleware version without interfering in the version negotiation of the underlying application. + +Middleware accomplishes this by formatting the version in a JSON-encoded string containing the middleware version and the application version. The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application. The format of the version is specified in ICS-30 as the following: + +```json +{ + "<middleware_version_key>": "<middleware_version_value>", + "app_version": "<application version>" +} +``` + +The `<middleware_version_key>` key in the JSON struct should be replaced by the actual name of the key for the corresponding middleware (e.g. `fee_version`). + +During the handshake callbacks, the middleware can unmarshal the version string and retrieve the middleware and application versions. It can do its negotiation logic on `<middleware_version_value>`, and pass the `<application version>` to the underlying application. + +The middleware should simply pass the capability in the callback arguments along to the underlying application so that it may be claimed by the base application. The base application will then pass the capability up the stack in order to authenticate an outgoing packet/acknowledgement. + +In the case where the middleware wishes to send a packet or acknowledgment without the involvement of the underlying application, it should be given access to the same `scopedKeeper` as the base application so that it can retrieve the capabilities by itself.
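The JSON version envelope described above can be exercised with plain `encoding/json`. This is a minimal sketch assuming a hypothetical fee-style middleware whose key is `fee_version`; `constructVersion` and `parseVersion` are illustrative names mirroring the pseudo-code helpers used in the handshake callbacks, not ibc-go functions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata is a hypothetical middleware version envelope following the
// ICS-30 format. "fee_version" is an example key; each middleware
// chooses its own.
type Metadata struct {
	FeeVersion string `json:"fee_version"`
	AppVersion string `json:"app_version"`
}

// constructVersion packs the middleware and app versions into a single
// JSON-encoded channel version string.
func constructVersion(mwVersion, appVersion string) string {
	bz, _ := json.Marshal(Metadata{FeeVersion: mwVersion, AppVersion: appVersion})
	return string(bz)
}

// parseVersion attempts to unwrap a JSON-encoded version string; on
// failure the caller should pass the raw version straight to the
// underlying application, as the handshake callbacks do.
func parseVersion(version string) (Metadata, error) {
	var md Metadata
	err := json.Unmarshal([]byte(version), &md)
	return md, err
}

func main() {
	v := constructVersion("ics29-1", "ics20-1")
	fmt.Println(v) // {"fee_version":"ics29-1","app_version":"ics20-1"}

	md, err := parseVersion(v)
	fmt.Println(md.AppVersion, err)
}
```

Note the fallback behavior: when `parseVersion` fails (e.g. on a bare `"ics20-1"` string), the middleware treats the whole string as the application version, which is what allows stacks with and without this middleware to interoperate.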
+ +### Handshake callbacks + +#### `OnChanOpenInit` + +```go expandable +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + if version != "" { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + metadata, err := Unmarshal(version) + if err != nil { + / Since it is valid for fee version to not be specified, + / the above middleware version may be for another middleware. + / Pass the entire version string onto the underlying application. + return im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + version, + ) +} + +else { + metadata = { + / set middleware version to default value + MiddlewareVersion: defaultMiddlewareVersion, + / allow application to return its default version + AppVersion: "", +} + +} + +doCustomLogic() + + / if the version string is empty, OnChanOpenInit is expected to return + / a default version string representing the version(s) + +it supports + appVersion, err := im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + metadata.AppVersion, / note we only pass app version here + ) + if err != nil { + return "", err +} + version := constructVersion(metadata.MiddlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L34-L82) an example implementation of this callback for the ICS29 Fee Middleware module. 
+ +#### `OnChanOpenTry` + +```go expandable +func OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + cpMetadata, err := Unmarshal(counterpartyVersion) + if err != nil { + return app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + counterpartyVersion, + ) +} + +doCustomLogic() + + / Call the underlying application's OnChanOpenTry callback. + / The try callback must select the final app-specific version string and return it. + appVersion, err := app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + cpMetadata.AppVersion, / note we only pass counterparty app version here + ) + if err != nil { + return "", err +} + + / negotiate final middleware version + middlewareVersion := negotiateMiddlewareVersion(cpMetadata.MiddlewareVersion) + version := constructVersion(middlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L84-L124) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenAck` + +```go expandable +func OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyChannelID string, + counterpartyVersion string, +) + +error { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. 
+ cpMetadata, err := UnmarshalJSON(counterpartyVersion) + if err != nil { + return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, counterpartyVersion) +} + if !isCompatible(cpMetadata.MiddlewareVersion) { + return error +} + +doCustomLogic() + + / call the underlying application's OnChanOpenAck callback + return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, cpMetadata.AppVersion) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L126-L152) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenConfirm` + +```go +func OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanOpenConfirm(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L154-L162) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanCloseInit` + +```go +func OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanCloseInit(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L164-L187) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanCloseConfirm` + +```go +func OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + doCustomLogic() + +return app.OnChanCloseConfirm(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L189-L212) an example implementation of this callback for the ICS29 Fee Middleware module.
+ +**NOTE**: Middleware that does not need to negotiate with a counterparty middleware on the remote stack will not implement the version unmarshalling and negotiation, and will simply perform its own custom logic on the callbacks without relying on the counterparty behaving similarly. + +### Packet callbacks + +The packet callbacks, just like the handshake callbacks, wrap the application's packet callbacks. The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide range of use cases, as a simple base application like token-transfer can be transformed for a variety of use cases by combining it with custom middleware. + +#### `OnRecvPacket` + +```go expandable +func OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + doCustomLogic(packet) + ack := app.OnRecvPacket(ctx, packet, relayer) + +doCustomLogic(ack) / middleware may modify outgoing ack + return ack +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L214-L237) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnAcknowledgementPacket` + +```go +func OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet, ack) + +return app.OnAcknowledgementPacket(ctx, packet, ack, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L239-L292) an example implementation of this callback for the ICS29 Fee Middleware module.
+ +#### `OnTimeoutPacket` + +```go +func OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet) + +return app.OnTimeoutPacket(ctx, packet, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L294-L334) an example implementation of this callback for the ICS29 Fee Middleware module. + +### ICS-4 wrappers + +Middleware must also wrap ICS-4 so that any communication from the application to the `channelKeeper` goes through the middleware first. Similar to the packet callbacks, the middleware may modify outgoing acknowledgements and packets in any way it wishes. + +#### `SendPacket` + +```go expandable +func SendPacket( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + sourcePort string, + sourceChannel string, + timeoutHeight clienttypes.Height, + timeoutTimestamp uint64, + appData []byte, +) { + / middleware may modify data + data = doCustomLogic(appData) + +return ics4Keeper.SendPacket( + ctx, + chanCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, + ) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L336-L343) an example implementation of this function for the ICS29 Fee Middleware module. 
+
+#### `WriteAcknowledgement`
+
+```go expandable
+// only called for async acks
+func WriteAcknowledgement(
+    ctx sdk.Context,
+    chanCap *capabilitytypes.Capability,
+    packet exported.PacketI,
+    ack exported.Acknowledgement,
+) error {
+    // middleware may modify acknowledgement
+    ackBytes := doCustomLogic(ack)
+
+    return ics4Keeper.WriteAcknowledgement(ctx, chanCap, packet, ackBytes)
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L345-L353) for an example implementation of this function for the ICS29 Fee Middleware module.
+
+#### `GetAppVersion`
+
+```go expandable
+// middleware must return the underlying application version
+func GetAppVersion(
+    ctx sdk.Context,
+    portID,
+    channelID string,
+) (string, bool) {
+    version, found := ics4Keeper.GetAppVersion(ctx, portID, channelID)
+    if !found {
+        return "", false
+    }
+
+    if !MiddlewareEnabled {
+        return version, true
+    }
+
+    // unwrap channel version
+    metadata, err := Unmarshal(version)
+    if err != nil {
+        panic(fmt.Errorf("unable to unmarshal version: %w", err))
+    }
+
+    return metadata.AppVersion, true
+}
+```
+
+See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L355-L358) for an example implementation of this function for the ICS29 Fee Middleware module.
diff --git a/docs/ibc/v6.3.x/ibc/middleware/integration.mdx b/docs/ibc/v6.3.x/ibc/middleware/integration.mdx
new file mode 100644
index 00000000..3b942b53
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/middleware/integration.mdx
@@ -0,0 +1,78 @@
+---
+title: Integrating IBC middleware into a chain
+description: >-
+  Learn how to integrate IBC middleware(s) with a base application into your
+  chain. The following document only applies to Cosmos SDK chains.
+---
+
+Learn how to integrate IBC middleware(s) with a base application into your chain. The following document only applies to Cosmos SDK chains.
+
+If the middleware is maintaining its own state and/or processing SDK messages, then it should create and register its SDK module **only once** with the module manager in `app.go`.
+
+All middleware must be connected to the IBC router and wrap over an underlying base IBC application. An IBC application may be wrapped by many layers of middleware; only the top-layer middleware should be hooked to the IBC router, with all underlying middlewares and the application getting wrapped by it.
+
+The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application, while function calls from the application to IBC go through the bottom middleware in order to the top middleware and then to core IBC handlers. Thus the same set of middleware put in different orders may produce different effects.
+
+## Example integration
+
+```go expandable
+// app.go
+
+// middleware 1 and middleware 3 are stateful middleware,
+// perhaps implementing separate sdk.Msg and Handlers
+mw1Keeper := mw1.NewKeeper(storeKey1)
+mw3Keeper := mw3.NewKeeper(storeKey3)
+
+// Only create the App Module **once** and register it in the module manager
+// if the module maintains independent state and/or processes sdk.Msgs
+app.moduleManager = module.NewManager(
+    ...
+    mw1.NewAppModule(mw1Keeper),
+    mw3.NewAppModule(mw3Keeper),
+    transfer.NewAppModule(transferKeeper),
+    custom.NewAppModule(customKeeper),
+)
+
+mw1IBCModule := mw1.NewIBCModule(mw1Keeper)
+mw2IBCModule := mw2.NewIBCModule() // middleware2 is stateless middleware
+mw3IBCModule := mw3.NewIBCModule(mw3Keeper)
+
+scopedKeeperTransfer := capabilityKeeper.NewScopedKeeper("transfer")
+scopedKeeperCustom1 := capabilityKeeper.NewScopedKeeper("custom1")
+scopedKeeperCustom2 := capabilityKeeper.NewScopedKeeper("custom2")
+
+// NOTE: IBC Modules may be initialized any number of times provided they use a separate
+// scopedKeeper and underlying port.
+
+// initialize base IBC applications
+// if you want to create two different stacks with the same base application,
+// they must be given different scopedKeepers and assigned different ports.
+transferIBCModule := transfer.NewIBCModule(transferKeeper)
+customIBCModule1 := custom.NewIBCModule(customKeeper, "portCustom1")
+customIBCModule2 := custom.NewIBCModule(customKeeper, "portCustom2")
+
+// create IBC stacks by combining middleware with base application
+// NOTE: since middleware2 is stateless it does not require a Keeper
+// stack 1 contains mw1 -> mw3 -> transfer
+stack1 := mw1.NewIBCMiddleware(mw3.NewIBCMiddleware(transferIBCModule, mw3Keeper), mw1Keeper)
+// stack 2 contains mw3 -> mw2 -> custom1
+stack2 := mw3.NewIBCMiddleware(mw2.NewIBCMiddleware(customIBCModule1), mw3Keeper)
+// stack 3 contains mw2 -> mw1 -> custom2
+stack3 := mw2.NewIBCMiddleware(mw1.NewIBCMiddleware(customIBCModule2, mw1Keeper))
+
+// associate each stack with the moduleName provided by the underlying scopedKeeper
+ibcRouter := porttypes.NewRouter()
+ibcRouter.AddRoute("transfer", stack1)
+ibcRouter.AddRoute("custom1", stack2)
+ibcRouter.AddRoute("custom2", stack3)
+app.IBCKeeper.SetRouter(ibcRouter)
+```
diff --git a/docs/ibc/v6.3.x/ibc/overview.mdx b/docs/ibc/v6.3.x/ibc/overview.mdx
new file mode 100644
index 00000000..3a2b5e14
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/overview.mdx
@@ -0,0 +1,297 @@
+---
+title: Overview
+---
+
+## Synopsis
+
+Learn about IBC, its components, and IBC use cases.
+
+## What is the Inter-Blockchain Communication Protocol (IBC)?
+
+This document serves as a guide for developers who want to write their own Inter-Blockchain Communication protocol (IBC) applications for custom use cases.
+
+> IBC applications must be written as self-contained modules.
+
+Due to the modular design of the IBC protocol, IBC application developers do not need to be concerned with the low-level details of clients, connections, and proof verification.
+
+This brief explanation of the lower levels of the stack gives application developers a broad understanding of the IBC protocol. Abstraction layer details for channels and ports are most relevant for application developers and describe how to define custom packets and `IBCModule` callbacks.
+
+The requirements to have your module interact over IBC are:
+
+- Bind to a port or ports.
+- Define your packet data.
+- Use the default acknowledgment struct provided by core IBC or optionally define a custom acknowledgment struct.
+- Standardize an encoding of the packet data.
+- Implement the `IBCModule` interface.
+
+Read on for a detailed explanation of how to write a self-contained IBC application module.
+
+## Components Overview
+
+### [Clients](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client)
+
+IBC clients are on-chain light clients. Each light client is identified by a unique client-id. IBC clients track the consensus states of other blockchains, along with the proof spec necessary to properly verify proofs against the client's consensus state. A client can be associated with any number of connections to the counterparty chain. The client identifier is auto-generated using the client type and the global client counter appended in the format: `{client-type}-{N}`.
+
+A `ClientState` should contain chain-specific and light-client-specific information necessary for verifying updates and upgrades to the IBC client. The `ClientState` may contain information such as chain-id, latest height, proof specs, unbonding periods or the status of the light client. The `ClientState` should not contain information that is specific to a given block at a certain height; this is the function of the `ConsensusState`. Each `ConsensusState` should be associated with a unique block and should be referenced using a height.
IBC clients are given a +client identifier prefixed store to store their associated client state and consensus states along with +any metadata associated with the consensus states. Consensus states are stored using their associated height. + +The supported IBC clients are: + +- [Solo Machine light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine): Devices such as phones, browsers, or laptops. +- [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint): The default for Cosmos SDK-based chains. +- [Localhost (loopback) client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/09-localhost): Useful for + testing, simulation, and relaying packets to modules on the same application. + +### IBC Client Heights + +IBC Client Heights are represented by the struct: + +```go +type Height struct { + RevisionNumber uint64 + RevisionHeight uint64 +} +``` + +The `RevisionNumber` represents the revision of the chain that the height is representing. +A revision typically represents a continuous, monotonically increasing range of block-heights. +The `RevisionHeight` represents the height of the chain within the given revision. + +On any reset of the `RevisionHeight`—for example, when hard-forking a Tendermint chain— +the `RevisionNumber` will get incremented. This allows IBC clients to distinguish between a +block-height `n` of a previous revision of the chain (at revision `p`) and block-height `n` of the current +revision of the chain (at revision `e`). + +`Height`s that share the same revision number can be compared by simply comparing their respective `RevisionHeight`s. +`Height`s that do not share the same revision number will only be compared using their respective `RevisionNumber`s. +Thus a height `h` with revision number `e+1` will always be greater than a height `g` with revision number `e`, +**REGARDLESS** of the difference in revision heights. 
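The revision-first comparison rule can be sketched as a small, self-contained Go helper. This is an illustration only; the real comparison logic lives in `02-client/types`, and the `GT` method here is a local reimplementation:

```go
package main

import "fmt"

// Height mirrors the 02-client/types struct shown above.
type Height struct {
	RevisionNumber uint64
	RevisionHeight uint64
}

// GT reports whether h is strictly greater than other: revision numbers
// are compared first, and revision heights only break ties within the
// same revision.
func (h Height) GT(other Height) bool {
	if h.RevisionNumber != other.RevisionNumber {
		return h.RevisionNumber > other.RevisionNumber
	}
	return h.RevisionHeight > other.RevisionHeight
}

func main() {
	a := Height{RevisionNumber: 3, RevisionHeight: 0}
	b := Height{RevisionNumber: 2, RevisionHeight: 100000000000}
	fmt.Println(a.GT(b)) // true: the revision number dominates
}
```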
+
+Ex:
+
+```go
+Height{
+    RevisionNumber: 3,
+    RevisionHeight: 0,
+} > Height{
+    RevisionNumber: 2,
+    RevisionHeight: 100000000000,
+}
+```
+
+When a Tendermint chain is running a particular revision, relayers can simply submit headers and proofs with the revision number given by the chain's `chainID`, and the revision height given by the Tendermint block height. When a chain updates using a hard-fork and resets its block-height, it is responsible for updating its `chainID` to increment the revision number. IBC Tendermint clients then verify the revision number against their `chainID` and treat the `RevisionHeight` as the Tendermint block-height.
+
+Tendermint chains wishing to use revisions to maintain persistent IBC connections even across height-resetting upgrades must format their `chainID`s in the following manner: `{chainID}-{revision_number}`. On any height-resetting upgrade, the `chainID` **MUST** be updated with a higher revision number than the previous value.
+
+Ex:
+
+- Before upgrade `chainID`: `gaiamainnet-3`
+- After upgrade `chainID`: `gaiamainnet-4`
+
+Clients that do not require revisions, such as the solo-machine client, simply hardcode `0` into the revision number whenever they need to return an IBC height when implementing IBC interfaces and use the `RevisionHeight` exclusively.
+
+Other client-types can implement their own logic to verify the IBC heights that relayers provide in their `Update`, `Misbehavior`, and `Verify` functions respectively.
+
+The IBC interfaces expect an `ibcexported.Height` interface; however, all clients must use the concrete implementation provided in `02-client/types` and reproduced above.
+
+### [Connections](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection)
+
+Connections encapsulate two `ConnectionEnd` objects on two separate blockchains. Each `ConnectionEnd` is associated with a client of the other blockchain (for example, the counterparty blockchain).
+The connection handshake is responsible for verifying that the light clients on each chain are correct for their respective counterparties. Connections, once established, are responsible for facilitating all cross-chain verifications of IBC state. A connection can be associated with any number of channels.
+
+### [Proofs](https://github.com/cosmos/ibc-go/blob/main/modules/core/23-commitment) and [Paths](https://github.com/cosmos/ibc-go/blob/main/modules/core/24-host)
+
+In IBC, blockchains do not directly pass messages to each other over the network. Instead, to communicate, a blockchain commits some state to a specifically defined path that is reserved for a specific message type and a specific counterparty; for example, for storing a specific `connectionEnd` as part of a handshake, or a packet intended to be relayed to a module on the counterparty chain. A relayer process monitors for updates to these paths and relays messages by submitting the data stored under the path and a proof to the counterparty chain.
+
+Proofs are passed from core IBC to light-clients as bytes. It is up to the light client implementation to interpret these bytes appropriately.
+
+- The paths that all IBC implementations must use for committing IBC messages are defined in [ICS-24 Host State Machine Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements).
+- The proof format that all implementations must be able to produce and verify is defined in the [ICS-23 Proofs](https://github.com/confio/ics23) implementation.
+
+### Capabilities
+
+IBC is intended to work in execution environments where modules do not necessarily trust each other. Thus, IBC must authenticate module actions on ports and channels so that only modules with the appropriate permissions can use them.
+
+This module authentication is accomplished using a [dynamic capability store](/docs/common/pages/adr-comprehensive#adr-003-dynamic-capability-store).
Upon binding to a port or creating a channel for a module, IBC returns a dynamic capability that the module must claim in order to use that port or channel. The dynamic capability module prevents other modules from using that port or channel since they do not own the appropriate capability.
+
+While this background information is useful, IBC modules do not need to interact at all with these lower-level abstractions. The relevant abstraction layer for IBC application developers is that of channels and ports. IBC applications must be written as self-contained **modules**.
+
+A module on one blockchain can communicate with other modules on other blockchains by sending, receiving, and acknowledging packets through channels that are uniquely identified by the `(channelID, portID)` tuple.
+
+A useful analogy is to consider IBC modules as internet applications on a computer. A channel can then be conceptualized as an IP connection, with the IBC portID being analogous to an IP port and the IBC channelID being analogous to an IP address. Thus, a single instance of an IBC module can communicate on the same port with any number of other modules and IBC correctly routes all packets to the relevant module using the `(channelID, portID)` tuple. An IBC module can also communicate with another IBC module over multiple ports, with each `(portID<->portID)` packet stream being sent on a different unique channel.
+
+### [Ports](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port)
+
+An IBC module can bind to any number of ports. Each port must be identified by a unique `portID`. Since IBC is designed to be secure with mutually distrusted modules operating on the same ledger, binding a port returns a dynamic object capability. In order to take action on a particular port (for example, opening a channel with its portID), a module must provide the dynamic object capability to the IBC handler.
This requirement prevents a malicious module from opening channels with ports it does not own. Thus, IBC modules are responsible for claiming the capability that is returned on `BindPort`.
+
+### [Channels](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+An IBC channel can be established between two IBC ports. Currently, a port is exclusively owned by a single module. IBC packets are sent over channels. Just as IP packets contain the destination IP address and IP port, and the source IP address and source IP port, IBC packets contain the destination portID and channelID, and the source portID and channelID. This packet structure enables IBC to correctly route packets to the destination module while allowing modules receiving packets to know the sender module.
+
+A channel can be `ORDERED`, where packets from a sending module must be processed by the receiving module in the order they were sent. Or a channel can be `UNORDERED`, where packets from a sending module are processed in the order they arrive (which might differ from the order they were sent).
+
+Modules can choose which channels they wish to communicate over; thus, IBC expects modules to implement callbacks that are called during the channel handshake. These callbacks can perform custom channel initialization logic. If any callback returns an error, the channel handshake fails. Thus, by returning errors on callbacks, modules can programmatically reject and accept channels.
+
+The channel handshake is a 4-step handshake. Briefly, if a given chain A wants to open a channel with chain B using an already established connection:
+
+1. chain A sends a `ChanOpenInit` message to signal a channel initialization attempt with chain B.
+2. chain B sends a `ChanOpenTry` message to try opening the channel on chain A.
+3. chain A sends a `ChanOpenAck` message to mark its channel end status as open.
+4. chain B sends a `ChanOpenConfirm` message to mark its channel end status as open.
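Because the handshake aborts as soon as any callback errors, channel acceptance can be expressed as plain validation logic. The sketch below uses stand-in types and a hypothetical `mymodule-1` version string rather than real ibc-go definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the channel ordering enum (illustrative only).
type Order int

const (
	ORDERED Order = iota
	UNORDERED
)

// validateChannelParams mimics the checks an OnChanOpenInit-style callback
// might run during step 1 of the handshake: returning an error is how a
// module programmatically rejects a channel.
func validateChannelParams(order Order, version string) (string, error) {
	if order != UNORDERED {
		return "", errors.New("this module only supports UNORDERED channels")
	}
	if version != "mymodule-1" { // hypothetical version string
		return "", fmt.Errorf("unsupported version: %s", version)
	}
	return version, nil
}

func main() {
	if _, err := validateChannelParams(ORDERED, "mymodule-1"); err != nil {
		fmt.Println("handshake rejected:", err)
	}
	if v, err := validateChannelParams(UNORDERED, "mymodule-1"); err == nil {
		fmt.Println("handshake accepted with version", v)
	}
}
```

In a real module this validation would live inside the `IBCModule` callbacks, alongside claiming the channel capability.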
+
+If all handshake steps are successful, the channel is opened on both sides. At each step in the handshake, the module associated with the `ChannelEnd` executes its callback. So on `ChanOpenInit`, the module on chain A executes its callback `OnChanOpenInit`.
+
+The channel identifier is auto-derived in the format: `channel-{N}` where `N` is the next sequence to be used.
+
+Just as ports came with dynamic capabilities, channel initialization returns a dynamic capability that the module **must** claim so that it can pass in a capability to authenticate channel actions like sending packets. The channel capability is passed into the callback on the first parts of the handshake; either `OnChanOpenInit` on the initializing chain or `OnChanOpenTry` on the other chain.
+
+#### Closing channels
+
+Closing a channel occurs in 2 handshake steps as defined in [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics).
+
+`ChanCloseInit` closes a channel on the executing chain if the channel exists, is not already closed, and the connection it exists upon is OPEN. Channels can only be closed by a calling module or in the case of a packet timeout on an ORDERED channel.
+
+`ChanCloseConfirm` is a response to a counterparty channel executing `ChanCloseInit`. The channel on the executing chain closes if the channel exists, the channel is not already closed, the connection the channel exists upon is OPEN, and the executing chain successfully verifies that the counterparty channel has been closed.
+
+### [Packets](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Modules communicate with each other by sending packets over IBC channels. All IBC packets contain the destination `portID` and `channelID` along with the source `portID` and `channelID`. This packet structure allows modules to know the sender module of a given packet. IBC packets contain a sequence to optionally enforce ordering.
+
+IBC packets also contain a `TimeoutHeight` and a `TimeoutTimestamp` that determine the deadline by which the receiving module must process a packet.
+
+Modules send custom application data to each other inside the `Data []byte` field of the IBC packet. Thus, packet data is opaque to IBC handlers. It is incumbent on a sender module to encode their application-specific packet information into the `Data` field of packets. The receiver module must decode that `Data` back to the original application data.
+
+### [Receipts and Timeouts](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Since IBC works over a distributed network and relies on potentially faulty relayers to relay messages between ledgers, IBC must handle the case where a packet does not get sent to its destination in a timely manner or at all. Packets must specify a non-zero value for timeout height (`TimeoutHeight`) or timeout timestamp (`TimeoutTimestamp`) after which a packet can no longer be successfully received on the destination chain.
+
+- The `timeoutHeight` indicates a consensus height on the destination chain after which the packet will no longer be processed, and will instead count as having timed out.
+- The `timeoutTimestamp` indicates a timestamp on the destination chain after which the packet will no longer be processed, and will instead count as having timed out.
+
+If the timeout passes without the packet being successfully received, the packet can no longer be received on the destination chain. The sending module can timeout the packet and take appropriate actions.
+
+If the timeout is reached, then a proof of packet timeout can be submitted to the original chain. The original chain can then perform application-specific logic to timeout the packet, perhaps by rolling back the packet send changes (refunding senders any locked funds, etc.).
+
+- In ORDERED channels, a timeout of a single packet in the channel causes the channel to close.
+
+  - If packet sequence `n` times out, then a packet at sequence `k > n` cannot be received without violating the contract of ORDERED channels that packets are processed in the order that they are sent.
+  - Since ORDERED channels enforce this invariant, a proof that sequence `n` has not been received on the destination chain by the specified timeout of packet `n` is sufficient to timeout packet `n` and close the channel.
+
+- In UNORDERED channels, the application-specific timeout logic for that packet is applied and the channel is not closed.
+
+  - Packets can be received in any order.
+
+  - IBC writes a packet receipt for each sequence received in the UNORDERED channel. This receipt does not contain information; it is simply a marker intended to signify that the UNORDERED channel has received a packet at the specified sequence.
+
+  - To timeout a packet on an UNORDERED channel, a proof is required that a packet receipt **does not exist** for the packet's sequence by the specified timeout.
+
+For this reason, most modules should use UNORDERED channels as they require fewer liveness guarantees to function effectively for users of that channel.
+
+### [Acknowledgments](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Modules can also choose to write application-specific acknowledgments upon processing a packet. Acknowledgments can be written:
+
+- Synchronously on `OnRecvPacket` if the module processes packets as soon as they are received from the IBC module.
+- Asynchronously if the module processes packets at some later point after receiving the packet.
+
+This acknowledgment data is opaque to IBC much like the packet `Data` and is treated by IBC as a simple byte string `[]byte`. Receiver modules must encode their acknowledgment so that the sender module can decode it correctly. The encoding must be negotiated between the two parties during version negotiation in the channel handshake.
+
+The acknowledgment can encode whether the packet processing succeeded or failed, along with additional information that allows the sender module to take appropriate action.
+
+After the acknowledgment has been written by the receiving chain, a relayer relays the acknowledgment back to the original sender module.
+
+The original sender module then executes application-specific acknowledgment logic using the contents of the acknowledgment.
+
+- If the acknowledgment indicates a failure, packet-send changes can be rolled back (for example, refunding senders in ICS20).
+
+- After an acknowledgment is received successfully on the original sender chain, the corresponding packet commitment is deleted since it is no longer needed.
+
+## Further Readings and Specs
+
+If you want to learn more about IBC, check the following specifications:
+
+- [IBC specification overview](https://github.com/cosmos/ibc/blob/master/README.md)
diff --git a/docs/ibc/v6.3.x/ibc/proposals.mdx b/docs/ibc/v6.3.x/ibc/proposals.mdx
new file mode 100644
index 00000000..769981a1
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/proposals.mdx
@@ -0,0 +1,125 @@
+---
+title: Governance Proposals
+---
+
+In uncommon situations, a highly valued client may become frozen due to uncontrollable circumstances. A highly valued client might have hundreds of channels being actively used. Some of those channels might have a significant amount of locked tokens used for ICS 20.
+
+If one third of the validator set of the chain the client represents decides to collude, they can sign off on two valid but conflicting headers, each signed by the other one third of the honest validator set. The light client can now be updated with two valid but conflicting headers at the same height. The light client cannot know which header is trustworthy, and therefore evidence of such misbehaviour is likely to be submitted, resulting in a frozen light client.
+
+Frozen light clients cannot be updated under any circumstance except via a governance proposal. Since a quorum of validators can sign arbitrary state roots which may not be valid executions of the state machine, a governance proposal has been added to ease the complexity of unfreezing or updating clients which have become "stuck". Without this mechanism, validator sets would need to construct a state root to unfreeze the client. Unfreezing clients re-enables all of the channels built upon that client. This may result in recovery of otherwise lost funds.
+
+Tendermint light clients may become expired if the trusting period has passed since their last update. This may occur if relayers stop submitting headers to update the clients.
+
+An unplanned upgrade by the counterparty chain may also result in expired clients. If the counterparty chain undergoes an unplanned upgrade, there may be no commitment to that upgrade signed by the validator set before the chain-id changes. In this situation, the validator set of the last valid update for the light client is never expected to produce another valid header since the chain-id has changed, which will ultimately lead the on-chain light client to become expired.
+
+In the case that a highly valued light client is frozen, expired, or rendered non-updateable, a governance proposal may be submitted to update this client, known as the subject client. The proposal includes the client identifier for the subject and the client identifier for a substitute client. Light client implementations may implement custom updating logic, but in most cases, the subject will be updated to the latest consensus state of the substitute client, if the proposal passes. The substitute client is used as a "stand-in" while the subject is on trial. It is best practice to create a substitute client *after* the subject has become frozen to prevent the substitute from also becoming frozen.
+An active substitute client allows headers to be submitted during the voting period to prevent accidental expiry once the proposal passes.
+
+## How to recover an expired client with a governance proposal
+
+See also the relevant documentation: [ADR-026, IBC client recovery mechanisms](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md)
+
+> **Who is this information for?**
+> Although technically anyone can submit the governance proposal to recover an expired client, often it will be **relayer operators** (at least coordinating the submission).
+
+### Preconditions
+
+* The chain is updated with ibc-go `>=` v1.1.0.
+* There exists an active client (with a known client identifier) for the same counterparty chain as the expired client.
+* The governance deposit.
+
+## Steps
+
+### Step 1
+
+Check if the client is attached to the expected `chain-id`. For example, for an expired Tendermint client representing the Akash chain, querying the client state returns:
+
+```text
+{
+  client_id: 07-tendermint-146
+  client_state:
+    '@type': /ibc.lightclients.tendermint.v1.ClientState
+    allow_update_after_expiry: true
+    allow_update_after_misbehaviour: true
+    chain_id: akashnet-2
+}
+```
+
+The client is attached to the expected Akash `chain-id`. Note that although the parameters (`allow_update_after_expiry` and `allow_update_after_misbehaviour`) exist to signal intent, these parameters have been deprecated and will not enforce any checks on the revival of the client. See ADR-026 for more context on this deprecation.
+
+### Step 2
+
+If the chain has been updated to ibc-go `>=` v1.1.0, anyone can submit the governance proposal to recover the client by executing the following via CLI.
+
+> Note that the Cosmos SDK has updated how governance proposals are submitted in SDK v0.46, now requiring a `.json` proposal file to be passed
+
+* From SDK v0.46.x onwards
+
+  ```bash
+  tx gov submit-proposal [path-to-proposal-json]
+  ```
+
+  where `proposal.json` contains:
+
+  ```json expandable
+  {
+    "messages": [
+      {
+        "@type": "/ibc.core.client.v1.ClientUpdateProposal",
+        "title": "title_string",
+        "description": "description_string",
+        "subject_client_id": "expired_client_id_string",
+        "substitute_client_id": "active_client_id_string"
+      }
+    ],
+    "metadata": "",
+    "deposit": "10stake"
+  }
+  ```
+
+  Alternatively, there is a legacy command (no longer recommended):
+
+  ```bash
+  tx gov submit-legacy-proposal update-client [subject-client-id] [substitute-client-id]
+  ```
+
+* Until SDK v0.45.x
+
+  ```bash
+  tx gov submit-proposal update-client [subject-client-id] [substitute-client-id]
+  ```
+
+The `subject-client-id` identifier is the proposed client to be updated. This client must be either frozen or expired.
+
+The `substitute-client-id` identifier represents a substitute client. It carries all the state for the client which may be updated. It must have identical client and chain parameters to the client which may be updated (except for latest height, frozen height, and chain ID). It should be continually updated during the voting period.
+
+After this, all that remains is deciding who funds the governance deposit and ensuring the governance proposal passes. If it does, the client on trial will be updated to the latest state of the substitute.
+
+## Important considerations
+
+Please note that from ibc-go v1.0.0 onwards transactions can no longer be sent to expired clients, so please update to at least this version to prevent similar issues in the future.
+
+Please also note that if the client on the other end of the transaction is also expired, that client will also need to be updated. This process updates only one client.
diff --git a/docs/ibc/v6.3.x/ibc/proto-docs.mdx b/docs/ibc/v6.3.x/ibc/proto-docs.mdx
new file mode 100644
index 00000000..75a9b9c5
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/proto-docs.mdx
@@ -0,0 +1,6 @@
+---
+title: Protobuf Documentation
+description: See ibc-go v6.2.x Buf Protobuf documentation.
+---
+
+See [ibc-go v6.2.x Buf Protobuf documentation](https://github.com/cosmos/ibc-go/blob/release/v6.2.x/docs/ibc/proto-docs.md).
diff --git a/docs/ibc/v6.3.x/ibc/relayer.mdx b/docs/ibc/v6.3.x/ibc/relayer.mdx
new file mode 100644
index 00000000..cd3fe079
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/relayer.mdx
@@ -0,0 +1,48 @@
+---
+title: Relayer
+---
+
+
+
+## Pre-requisite readings
+
+* [IBC Overview](/docs/ibc/v6.3.x/ibc/overview)
+* Events
+
+
+
+## Events
+
+Events are emitted for every transaction processed by the base application to indicate the execution of some logic clients may want to be aware of. This is extremely useful when relaying IBC packets. Any message that uses IBC will emit events for the corresponding TAO logic executed as defined in the [IBC events document](/docs/ibc/v6.3.x/middleware/ics29-fee/events).
+
+In the SDK, it can be assumed that for every message there is an event emitted with the type `message`, attribute key `action`, and an attribute value representing the type of message sent (`channel_open_init` would be the attribute value for `MsgChannelOpenInit`). If a relayer queries for transaction events, it can split message events using this event Type/Attribute Key pair.
+
+The Event Type `message` with the Attribute Key `module` may be emitted multiple times for a single message due to application callbacks. It can be assumed that any TAO logic executed will result in a module event emission with the attribute value `ibc_<submodulename>` (02-client emits `ibc_client`).
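To make the event splitting concrete, here is a hedged Go sketch of a relayer-side helper. It assumes Tendermint's flattened `map[string][]string` subscription encoding (keys of the form `event_type.attribute_key`) and pairs entries by index; `sendPacketSequences` is a hypothetical name, not a real relayer API:

```go
package main

import "fmt"

// sendPacketSequences extracts the packet sequence for every send_packet
// entry in message.action, reading the value at the matching index of
// send_packet.packet_sequence. Illustrative sketch only.
func sendPacketSequences(events map[string][]string) []string {
	var seqs []string
	for i, action := range events["message.action"] {
		if action != "send_packet" {
			continue
		}
		if list := events["send_packet.packet_sequence"]; i < len(list) {
			seqs = append(seqs, list[i])
		}
	}
	return seqs
}

func main() {
	events := map[string][]string{
		"message.action":              {"update_client", "send_packet"},
		"send_packet.packet_sequence": {"placeholder", "4"},
	}
	fmt.Println(sendPacketSequences(events))
}
```

Real relayers repeat this kind of index-matched lookup for every attribute they need to build and submit the packet message.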
+
+### Subscribing with Tendermint
+
+Calling the Tendermint RPC method `Subscribe` via Tendermint's Websocket will return events using
+Tendermint's internal representation of them. Instead of receiving back a list of events as they
+were emitted, Tendermint will return the type `map[string][]string` which maps a string in the
+form `{eventType}.{eventAttribute}` to `{attributeValue}`. This causes extraction of the event
+ordering to be non-trivial, but still possible.
+
+A relayer should use the `message.action` key to extract the number of messages in the transaction
+and the type of IBC transactions sent. For every IBC transaction within the string array for
+`message.action`, the necessary information should be extracted from the other event fields. If
+`send_packet` appears at index 2 in the value for `message.action`, a relayer will need to use the
+value at index 2 of the key `send_packet.packet_sequence`. This process should be repeated for each
+piece of information needed to relay a packet.
+
+## Example Implementations
+
+* [Golang Relayer](https://github.com/cosmos/relayer)
+* [Hermes](https://github.com/informalsystems/hermes)
diff --git a/docs/ibc/v6.3.x/ibc/roadmap.mdx b/docs/ibc/v6.3.x/ibc/roadmap.mdx
new file mode 100644
index 00000000..01ab8797
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/roadmap.mdx
@@ -0,0 +1,56 @@
+---
+title: Roadmap
+description: 'Latest update: July 7, 2022'
+---
+
+*Latest update: July 7, 2022*
+
+This document endeavours to inform the wider IBC community about plans and priorities for work on ibc-go by the team at Interchain GmbH. It is intended to broadly inform all users of ibc-go, including developers and operators of IBC, relayer, chain and wallet applications.
+
+This roadmap should be read as a high-level guide, rather than a commitment to schedules and deliverables. The degree of specificity is inversely proportional to the timeline. We will update this document periodically to reflect the status and plans.
+
+## Q3 - 2022
+
+At a high level we will focus on:
+
+### Features
+
+* Releasing [v4.0.0](https://github.com/cosmos/ibc-go/milestone/26), which includes the ICS-29 Fee Middleware module.
+* Finishing and releasing the [refactoring of 02-client](https://github.com/cosmos/ibc-go/milestone/16). This refactor will make the development of light clients easier.
+* Starting the implementation of channel upgradability (see [epic](https://github.com/cosmos/ibc-go/issues/1599) and [alpha milestone](https://github.com/cosmos/ibc-go/milestone/29)) with the goal of cutting an alpha1 pre-release by the end of the quarter. Channel upgradability will allow chains to renegotiate an existing channel to take advantage of new features without having to create a new channel, thus preserving all existing packet state processed on the channel.
+* Implementing the new [`ORDERED_ALLOW_TIMEOUT` channel type](https://github.com/cosmos/ibc-go/milestone/31) and hopefully releasing it as well. This new channel type will allow packets on an ordered channel to timeout without causing the closure of the channel.
+
+### Testing and infrastructure
+
+* Adding [automated e2e tests](https://github.com/cosmos/ibc-go/milestone/32) to the repo's CI.
+
+### Documentation and backlog
+
+* Finishing and releasing the upgrade to Cosmos SDK v0.46.
+* Writing the [light client implementation guide](https://github.com/cosmos/ibc-go/issues/59).
+* Working on [core backlog issues](https://github.com/cosmos/ibc-go/milestone/28).
+* Depending on the timeline of the Cosmos SDK, implementing and testing the changes needed to support the [transition to SMT storage](https://github.com/cosmos/ibc-go/milestone/21).
+
+We have also received a lot of feedback to improve Interchain Accounts, and we might work on a few of those improvements as well, but that will depend on priorities and availability.
+
+For a detailed view of each iteration's planned work, please check out our [project board](https://github.com/orgs/cosmos/projects/7).
+
+### Release schedule
+
+#### **July**
+
+We will probably cut at least one more release candidate of v4.0.0 before the final release, which should happen around the end of the month.
+
+For the Rho upgrade of the Cosmos Hub we will also release a new minor version of v3 with SDK 0.46.
+
+#### **August**
+
+In the first half we will probably start cutting release candidates for the 02-client refactor. The final release would most likely come out at the end of the month or beginning of September.
+
+#### **September**
+
+We might cut some pre-releases for the new channel type, and by the end of the month we expect to cut the first alpha pre-release for channel upgradability.
+
+## Q4 - 2022
+
+We will continue the implementation and cut the final release of [channel upgradability](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics/UPGRADES.md). At the end of Q3 or maybe beginning of Q4 we might also work on designing the implementation and scoping the engineering work to add support for [multihop channels](https://github.com/cosmos/ibc/pull/741/files), so that we could start the implementation of this feature during Q4 (but this is still to be decided).
diff --git a/docs/ibc/v6.3.x/ibc/upgrades/developer-guide.mdx b/docs/ibc/v6.3.x/ibc/upgrades/developer-guide.mdx
new file mode 100644
index 00000000..9e1cc251
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/upgrades/developer-guide.mdx
@@ -0,0 +1,52 @@
+---
+title: IBC Client Developer Guide to Upgrades
+---
+
+## Synopsis
+
+Learn how to implement upgrade functionality for your custom IBC client.
+
+As mentioned in the [README](/docs/ibc/v6.3.x/intro), it is vital that high-value IBC clients can upgrade along with their underlying chains to avoid disruption to the IBC ecosystem. Thus, IBC client developers will want to implement upgrade functionality to enable clients to maintain connections and channels even across chain upgrades.
+
+The IBC protocol allows client implementations to provide a path to upgrading clients given the upgraded client state, upgraded consensus state and proofs for each.
+
+```go expandable
+// Upgrade functions
+// NOTE: proof heights are not included as upgrade to a new revision is expected to pass only on the last
+// height committed by the current revision. Clients are responsible for ensuring that the planned last
+// height of the current revision is somehow encoded in the proof verification process.
+// This is to ensure that no premature upgrades occur, since upgrade plans committed to by the counterparty
+// may be cancelled or modified before the last planned height.
+VerifyUpgradeAndUpdateState(
+  ctx sdk.Context,
+  cdc codec.BinaryCodec,
+  store sdk.KVStore,
+  newClient ClientState,
+  newConsState ConsensusState,
+  proofUpgradeClient,
+  proofUpgradeConsState []byte,
+) (upgradedClient ClientState, upgradedConsensus ConsensusState, err error)
+```
+
+Note that the clients should have prior knowledge of the merkle path that the upgraded client and upgraded consensus states will use. The height at which the upgrade has occurred should also be encoded in the proof. The Tendermint client implementation accomplishes this by including an `UpgradePath` in the ClientState itself, which is used along with the upgrade height to construct the merkle path under which the client state and consensus state are committed.
+
+Developers must ensure that the `UpgradeClientMsg` does not pass until the last height of the old chain has been committed, and after the chain upgrades, the `UpgradeClientMsg` should pass once and only once on all counterparty clients.
+ +Developers must ensure that the new client adopts all of the new Client parameters that must be uniform across every valid light client of a chain (chain-chosen parameters), while maintaining the Client parameters that are customizable by each individual client (client-chosen parameters) from the previous version of the client. + +Upgrades must adhere to the IBC Security Model. IBC does not rely on the assumption of honest relayers for correctness. Thus users should not have to rely on relayers to maintain client correctness and security (though honest relayers must exist to maintain relayer liveness). While relayers may choose any set of client parameters while creating a new `ClientState`, this still holds under the security model since users can always choose a relayer-created client that suits their security and correctness needs or create a Client with their desired parameters if no such client exists. + +However, when upgrading an existing client, one must keep in mind that there are already many users who depend on this client's particular parameters. We cannot give the upgrading relayer free choice over these parameters once they have already been chosen. This would violate the security model since users who rely on the client would have to rely on the upgrading relayer to maintain the same level of security. Thus, developers must make sure that their upgrade mechanism allows clients to upgrade the chain-specified parameters whenever a chain upgrade changes these parameters (examples in the Tendermint client include `UnbondingPeriod`, `TrustingPeriod`, `ChainID`, `UpgradePath`, etc.), while ensuring that the relayer submitting the `UpgradeClientMsg` cannot alter the client-chosen parameters that the users are relying upon (examples in Tendermint client include `TrustLevel`, `MaxClockDrift`, etc). 
+
+Developers should maintain the distinction between Client parameters that are uniform across every valid light client of a chain (chain-chosen parameters), and Client parameters that are customizable by each individual client (client-chosen parameters), since this distinction is necessary to implement the `ZeroCustomFields` method in the `ClientState` interface:
+
+```go
+// Utility function that zeroes out any client customizable fields in client state
+// Ledger enforced fields are maintained while all custom fields are zero values
+// Used to verify upgrades
+ZeroCustomFields() ClientState
+```
+
+Counterparty clients can upgrade securely by using all of the chain-chosen parameters from the chain-committed `UpgradedClient` and preserving all of the old client-chosen parameters. This enables chains to securely upgrade without relying on an honest relayer; however, it can in some cases lead to an invalid final `ClientState` if the new chain-chosen parameters clash with the old client-chosen parameters. This can happen in the Tendermint client case if the upgrading chain lowers the `UnbondingPeriod` (chain-chosen) to a duration below that of a counterparty client's `TrustingPeriod` (client-chosen). Such cases should be clearly documented by developers, so that chains know which upgrades should be avoided to prevent this problem. The final upgraded client should also be validated in `VerifyUpgradeAndUpdateState` before returning to ensure that the client does not upgrade to an invalid `ClientState`.
diff --git a/docs/ibc/v6.3.x/ibc/upgrades/genesis-restart.mdx b/docs/ibc/v6.3.x/ibc/upgrades/genesis-restart.mdx
new file mode 100644
index 00000000..f069bd15
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/upgrades/genesis-restart.mdx
@@ -0,0 +1,46 @@
+---
+title: Genesis Restart Upgrades
+---
+
+## Synopsis
+
+Learn how to upgrade your chain and counterparty clients using genesis restarts.
+
+**NOTE**: Regular genesis restarts are currently unsupported by relayers!
+ +## IBC Client Breaking Upgrades + +IBC client breaking upgrades are possible using genesis restarts. +It is highly recommended to use the in-place migrations instead of a genesis restart. +Genesis restarts should be used sparingly and as backup plans. + +Genesis restarts still require the usage of an IBC upgrade proposal in order to correctly upgrade counterparty clients. + +### Step-by-Step Upgrade Process for SDK Chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the [IBC Client Breaking Upgrade List](/docs/ibc/v6.3.x/ibc/upgrades/quick-guide#ibc-client-breaking-upgrades) and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a 02-client [`UpgradeProposal`](https://github.com/cosmos/ibc-go/blob/v6.2.0/proto/ibc/core/client/v1/client.proto#L58-L77) with an `UpgradePlan` and a new IBC ClientState in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as TrustingPeriod). +2. Vote on and pass the `UpgradeProposal` +3. Halt the node after successful upgrade. +4. Export the genesis file. +5. Swap to the new binary. +6. Run migrations on the genesis file. +7. Remove the `UpgradeProposal` plan from the genesis file. This may be done by migrations. +8. Change desired chain-specific fields (chain id, unbonding period, etc). This may be done by migrations. +9. Reset the node's data. +10. Start the chain. + +Upon the `UpgradeProposal` passing, the upgrade module will commit the UpgradedClient under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. 
On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`.
+
+Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client.
+
+#### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients
+
+These steps are identical to the regular [IBC client breaking upgrade process](/docs/ibc/v6.3.x/ibc/upgrades/quick-guide#step-by-step-upgrade-process-for-relayers-upgrading-counterparty-clients).
+
+### Non-IBC Client Breaking Upgrades
+
+While ibc-go supports genesis restarts which do not break IBC clients, relayers do not support this upgrade path.
+Here is a tracking issue on [Hermes](https://github.com/informalsystems/ibc-rs/issues/1152).
+Please do not attempt a regular genesis restart unless you have a tool to update counterparty clients correctly.
diff --git a/docs/ibc/v6.3.x/ibc/upgrades/intro.mdx b/docs/ibc/v6.3.x/ibc/upgrades/intro.mdx
new file mode 100644
index 00000000..f58e85ee
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/upgrades/intro.mdx
@@ -0,0 +1,15 @@
+---
+title: Upgrading IBC Chains Overview
+description: >-
+  This directory contains information on how to upgrade an IBC chain without
+  breaking counterparty clients and connections.
+---
+
+### Upgrading IBC Chains Overview
+
+This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections.
+
+IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving.
Many chain upgrades may be irrelevant to IBC; however, some upgrades could potentially break counterparty clients if not handled correctly. Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client.
+
+1. The [quick-guide](/docs/ibc/v6.3.x/ibc/upgrades/quick-guide) describes how IBC-connected chains can perform client-breaking upgrades and how relayers can securely upgrade counterparty clients using the SDK.
+2. The [developer-guide](/docs/ibc/v6.3.x/ibc/upgrades/developer-guide) is a guide for developers intending to develop IBC client implementations with upgrade functionality.
diff --git a/docs/ibc/v6.3.x/ibc/upgrades/quick-guide.mdx b/docs/ibc/v6.3.x/ibc/upgrades/quick-guide.mdx
new file mode 100644
index 00000000..c8fdd58c
--- /dev/null
+++ b/docs/ibc/v6.3.x/ibc/upgrades/quick-guide.mdx
@@ -0,0 +1,54 @@
+---
+title: How to Upgrade IBC Chains and their Clients
+---
+
+## Synopsis
+
+Learn how to upgrade your chain and counterparty clients.
+
+The information in this doc for upgrading chains is relevant to SDK chains. However, the guide for counterparty clients is relevant to any Tendermint client that enables upgrades.
+
+## IBC Client Breaking Upgrades
+
+IBC-connected chains must perform an IBC upgrade if their upgrade will break counterparty IBC clients. The current IBC protocol supports upgrading Tendermint chains for a specific subset of IBC-client-breaking upgrades. Here is the exhaustive list of IBC client-breaking upgrades and whether the IBC protocol currently supports such upgrades.
+
+IBC currently does **NOT** support unplanned upgrades. All of the following upgrades must be planned and committed to in advance by the upgrading chain, in order for counterparty clients to maintain their connections securely.
+
+Note: Since upgrades are only implemented for Tendermint clients, this doc only discusses upgrades on Tendermint chains that would break counterparty IBC Tendermint Clients.
+
+1. Changing the Chain-ID: **Supported**
+2. Changing the UnbondingPeriod: **Partially Supported**, chains may increase the unbonding period with no issues. However, decreasing the unbonding period may irreversibly break some counterparty clients. Thus, it is **not recommended** that chains reduce the unbonding period.
+3. Changing the height (resetting to 0): **Supported**, so long as chains remember to increment the revision number in their chain-id.
+4. Changing the ProofSpecs: **Supported**, this should be changed if the proof structure needed to verify IBC proofs is changed across the upgrade. Ex: switching from an IAVL store to a SimpleTree store.
+5. Changing the UpgradePath: **Supported**, this might involve changing the key under which upgraded clients and consensus states are stored in the upgrade store, or even migrating the upgrade store itself.
+6. Migrating the IBC store: **Unsupported**, as the IBC store location is negotiated by the connection.
+7. Upgrading to a backwards compatible version of IBC: **Supported**
+8. Upgrading to a non-backwards compatible version of IBC: **Unsupported**, as IBC version is negotiated on connection handshake.
+9. Changing the Tendermint LightClient algorithm: **Partially Supported**. Changes to the light client algorithm that do not change the ClientState or ConsensusState struct may be supported, provided that the counterparty is also upgraded to support the new light client algorithm. Changes that require updating the ClientState and ConsensusState structs themselves are theoretically possible by providing a path to translate an older ClientState struct into the new ClientState struct; however, this is not currently implemented.
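The revision bookkeeping behind item 3 can be sketched in a few lines of Go: the revision is the trailing `-{revision}` suffix of the chain ID, and a chain resetting its height to 0 is expected to bump it. This is an illustrative helper under that assumption, not a reproduction of ibc-go's own chain-ID parsing:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRevision extracts the revision number from a chain ID of the
// form {name}-{revision}, returning 0 when no numeric suffix exists.
func parseRevision(chainID string) uint64 {
	i := strings.LastIndex(chainID, "-")
	if i < 0 {
		return 0
	}
	rev, err := strconv.ParseUint(chainID[i+1:], 10, 64)
	if err != nil {
		return 0
	}
	return rev
}

func main() {
	// A chain resetting its height to 0 should bump the revision, e.g. 4 -> 5.
	fmt.Println(parseRevision("cosmoshub-4"), parseRevision("cosmoshub-5"))
}
```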
+ +### Step-by-Step Upgrade Process for SDK chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the list above and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a 02-client [`UpgradeProposal`](https://github.com/cosmos/ibc-go/blob/v6.2.0/proto/ibc/core/client/v1/client.proto#L58-L77) with an `UpgradePlan` and a new IBC ClientState in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as TrustingPeriod). +2. Vote on and pass the `UpgradeProposal` + +Upon the `UpgradeProposal` passing, the upgrade module will commit the UpgradedClient under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +Once the upgrading chain has committed to upgrading, relayers must wait till the chain halts at the upgrade height before upgrading counterparty clients. This is because chains may reschedule or cancel upgrade plans before they occur. Thus, relayers must wait till the chain reaches the upgrade height and halts before they can be sure the upgrade will take place. 
+ +Thus, the upgrade process for relayers trying to upgrade the counterparty clients is as follows: + +1. Wait for the upgrading chain to reach the upgrade height and halt +2. Query a full node for the proofs of `UpgradedClient` and `UpgradedConsensusState` at the last height of the old chain. +3. Update the counterparty client to the last height of the old chain using the `UpdateClient` msg. +4. Submit an `UpgradeClient` msg to the counterparty chain with the `UpgradedClient`, `UpgradedConsensusState` and their respective proofs. +5. Submit an `UpdateClient` msg to the counterparty chain with a header from the new upgraded chain. + +The Tendermint client on the counterparty chain will verify that the upgrading chain did indeed commit to the upgraded client and upgraded consensus state at the upgrade height (since the upgrade height is included in the key). If the proofs are verified against the upgrade height, then the client will upgrade to the new client while retaining all of its client-customized fields. Thus, it will retain its old TrustingPeriod, TrustLevel, MaxClockDrift, etc; while adopting the new chain-specified fields such as UnbondingPeriod, ChainId, UpgradePath, etc. Note, this can lead to an invalid client since the old client-chosen fields may no longer be valid given the new chain-chosen fields. Upgrading chains should try to avoid these situations by not altering parameters that can break old clients. For an example, see the UnbondingPeriod example in the supported upgrades section. + +The upgraded consensus state will serve purely as a basis of trust for future `UpdateClientMsgs` and will not contain a consensus root to perform proof verification against. Thus, relayers must submit an `UpdateClientMsg` with a header from the new chain so that the connection can be used for proof verification again. 
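The commitment paths used in the upgrade process above depend only on the upgrade height, so they can be sketched as plain string construction. These helpers are illustrative, not the actual ibc-go API:

```go
package main

import "fmt"

// upgradedClientKey returns the upgrade-store path under which the
// UpgradedClient is committed for a given upgrade height.
func upgradedClientKey(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedClient", upgradeHeight)
}

// upgradedConsStateKey returns the upgrade-store path under which the
// UpgradedConsensusState is committed for a given upgrade height.
func upgradedConsStateKey(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedConsState", upgradeHeight)
}

func main() {
	fmt.Println(upgradedClientKey(500))
	fmt.Println(upgradedConsStateKey(500))
}
```

Because the height is part of the key, a counterparty client verifying the proofs is also verifying that the upgrade was committed at exactly the planned upgrade height.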
diff --git a/docs/ibc/v6.3.x/intro.mdx b/docs/ibc/v6.3.x/intro.mdx new file mode 100644 index 00000000..579a6862 --- /dev/null +++ b/docs/ibc/v6.3.x/intro.mdx @@ -0,0 +1,17 @@ +--- +title: IBC-Go Documentation +--- + + +This version of ibc-go is not supported anymore. Please upgrade to the latest version. + + +Welcome to the IBC-Go documentation! + +The Inter-Blockchain Communication protocol (IBC) is an end-to-end, connection-oriented, stateful protocol for reliable, ordered, and authenticated communication between heterogeneous blockchains arranged in an unknown and dynamic topology. + +IBC is a protocol that allows blockchains to talk to each other. + +The protocol realizes this interoperability by specifying a set of data structures, abstractions, and semantics that can be implemented by any distributed ledger that satisfies a small set of requirements. + +IBC can be used to build a wide range of cross-chain applications that include token transfers, atomic swaps, multi-chain smart contracts (with or without mutually comprehensible VMs), and data and code sharding of various kinds. diff --git a/docs/ibc/v6.3.x/middleware/ics29-fee/end-users.mdx b/docs/ibc/v6.3.x/middleware/ics29-fee/end-users.mdx new file mode 100644 index 00000000..ba87d27a --- /dev/null +++ b/docs/ibc/v6.3.x/middleware/ics29-fee/end-users.mdx @@ -0,0 +1,30 @@ +--- +title: End Users +--- + +## Synopsis + +Learn how to incentivize IBC packets using the ICS29 Fee Middleware module. + +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v6.3.x/middleware/ics29-fee/overview) + +## Summary + +Different types of end users: + +- CLI users who want to manually incentivize IBC packets +- Client developers + +The Fee Middleware module allows end users to add a 'tip' to each IBC packet which will incentivize relayer operators to relay packets between chains. gRPC endpoints are exposed for client developers as well as a simple CLI for manually incentivizing IBC packets. 
+ +## CLI Users + +For an in depth guide on how to use the ICS29 Fee Middleware module using the CLI please take a look at the [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers#asynchronous-incentivization-of-a-fungible-token-transfer) on the `ibc-go` repo. + +## Client developers + +Client developers can read more about the relevant ICS29 message types in the [Fee messages section](/docs/ibc/v6.3.x/middleware/ics29-fee/msgs). + +[CosmJS](https://github.com/cosmos/cosmjs) is a useful client library for signing and broadcasting Cosmos SDK messages. diff --git a/docs/ibc/v6.3.x/middleware/ics29-fee/events.mdx b/docs/ibc/v6.3.x/middleware/ics29-fee/events.mdx new file mode 100644 index 00000000..7d6dfc42 --- /dev/null +++ b/docs/ibc/v6.3.x/middleware/ics29-fee/events.mdx @@ -0,0 +1,37 @@ +--- +title: Events +--- + +## Synopsis + +An overview of all events related to ICS-29 + +## `MsgPayPacketFee`, `MsgPayPacketFeeAsync` + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| incentivized_ibc_packet | port_id | `{portID}` | +| incentivized_ibc_packet | channel_id | `{channelID}` | +| incentivized_ibc_packet | packet_sequence | `{sequence}` | +| incentivized_ibc_packet | recv_fee | `{recvFee}` | +| incentivized_ibc_packet | ack_fee | `{ackFee}` | +| incentivized_ibc_packet | timeout_fee | `{timeoutFee}` | +| message | module | fee-ibc | + +## `RegisterPayee` + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| register_payee | relayer | `{relayer}` | +| register_payee | payee | `{payee}` | +| register_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | + +## `RegisterCounterpartyPayee` + +| Type | Attribute Key | Attribute Value | +| --------------------------- | ------------------ | --------------------- | +| register_counterparty_payee | relayer | `{relayer}` | +| register_counterparty_payee | counterparty_payee | 
`{counterpartyPayee}` | +| register_counterparty_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | diff --git a/docs/ibc/v6.3.x/middleware/ics29-fee/fee-distribution.mdx b/docs/ibc/v6.3.x/middleware/ics29-fee/fee-distribution.mdx new file mode 100644 index 00000000..45bcb250 --- /dev/null +++ b/docs/ibc/v6.3.x/middleware/ics29-fee/fee-distribution.mdx @@ -0,0 +1,108 @@ +--- +title: Fee Distribution +--- + +## Synopsis + +Learn about payee registration for the distribution of packet fees. The following document is intended for relayer operators. + +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v6.3.x/middleware/ics29-fee/overview) + +Packet fees are divided into 3 distinct amounts in order to compensate relayer operators for packet relaying on fee enabled IBC channels. + +- `RecvFee`: The sum of all packet receive fees distributed to a payee for successful execution of `MsgRecvPacket`. +- `AckFee`: The sum of all packet acknowledgement fees distributed to a payee for successful execution of `MsgAcknowledgement`. +- `TimeoutFee`: The sum of all packet timeout fees distributed to a payee for successful execution of `MsgTimeout`. + +## Register a counterparty payee address for forward relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v6.3.x/middleware/ics29-fee/overview#concepts), the forward relayer describes the actor who performs the submission of `MsgRecvPacket` on the destination chain. +Fee distribution for incentivized packet relays takes place on the packet source chain. + +> Relayer operators are expected to register a counterparty payee address, in order to be compensated accordingly with `RecvFee`s upon completion of a packet lifecycle. + +The counterparty payee address registered on the destination chain is encoded into the packet acknowledgement and communicated as such to the source chain for fee distribution. 
+
+**If a counterparty payee is not registered for the forward relayer on the destination chain, the escrowed fees will be refunded upon fee distribution.**
+
+### Relayer operator actions
+
+A transaction must be submitted **to the destination chain** including a `CounterpartyPayee` address of an account on the source chain.
+The transaction must be signed by the `Relayer`.
+
+Note: If a module account address is used as the `CounterpartyPayee` but the module has been set as a blocked address in the `BankKeeper`, the refunding to the module account will fail. This is because many modules use invariants to compare internal tracking of module account balances against the actual balance of the account stored in the `BankKeeper`. If a token transfer to the module account occurs without going through this module and updating the account balance of the module on the `BankKeeper`, then invariants may break and unknown behaviour could occur depending on the module implementation. Therefore, if it is desirable to use a module account that is currently blocked, the module developers should be consulted to gauge the possibility of removing the module account from the blocked list.
+
+```go
+type MsgRegisterCounterpartyPayee struct {
+  // unique port identifier
+  PortId string
+  // unique channel identifier
+  ChannelId string
+  // the relayer address
+  Relayer string
+  // the counterparty payee address
+  CounterpartyPayee string
+}
+```
+
+> This message is expected to fail if:
+>
+> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+> - `Relayer` is an invalid address.
+> - `CounterpartyPayee` is empty.
+ +See below for an example CLI command: + +```bash +simd tx ibc-fee register-counterparty-payee transfer channel-0 \ +cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \ +osmo1v5y0tz01llxzf4c2afml8s3awue0ymju22wxx2 \ +--from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh +``` + +## Register an alternative payee address for reverse and timeout relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v6.3.x/middleware/ics29-fee/overview#concepts), the reverse relayer describes the actor who performs the submission of `MsgAcknowledgement` on the source chain. +Similarly the timeout relayer describes the actor who performs the submission of `MsgTimeout` (or `MsgTimeoutOnClose`) on the source chain. + +> Relayer operators **may choose** to register an optional payee address, in order to be compensated accordingly with `AckFee`s and `TimeoutFee`s upon completion of a packet life cycle. + +If a payee is not registered for the reverse or timeout relayer on the source chain, then fee distribution assumes the default behaviour, where fees are paid out to the relayer account which delivers `MsgAcknowledgement` or `MsgTimeout`/`MsgTimeoutOnClose`. + +### Relayer operator actions + +A transaction must be submitted **to the source chain** including a `Payee` address of an account on the source chain. +The transaction must be signed by the `Relayer`. + +Note: If a module account address is used as the `Payee` it is recommended to [turn off invariant checks](https://github.com/cosmos/ibc-go/blob/71d7480c923f4227453e8a80f51be01ae7ee845e/testing/simapp/app.go#L659) for that module. 
+
+```go
+type MsgRegisterPayee struct {
+  / unique port identifier
+  PortId string
+  / unique channel identifier
+  ChannelId string
+  / the relayer address
+  Relayer string
+  / the payee address
+  Payee string
+}
+```
+
+> This message is expected to fail if:
+>
+> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+> - `Relayer` is an invalid address.
+> - `Payee` is an invalid address.
+
+See below for an example CLI command:
+
+```bash
+simd tx ibc-fee register-payee transfer channel-0 \
+cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \
+cosmos153lf4zntqt33a4v0sm5cytrxyqn78q7kz8j8x5 \
+--from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh
+```
diff --git a/docs/ibc/v6.3.x/middleware/ics29-fee/integration.mdx b/docs/ibc/v6.3.x/middleware/ics29-fee/integration.mdx
new file mode 100644
index 00000000..c704e13d
--- /dev/null
+++ b/docs/ibc/v6.3.x/middleware/ics29-fee/integration.mdx
@@ -0,0 +1,174 @@
+---
+title: Integration
+---
+
+## Synopsis
+
+Learn how to configure the Fee Middleware module with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies to Cosmos SDK chains.
+
+## Pre-requisite Readings
+
+- [IBC middleware development](/docs/ibc/v6.3.x/ibc/middleware/develop)
+- [IBC middleware integration](/docs/ibc/v6.3.x/ibc/middleware/integration)
+
+The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly.
+For Cosmos SDK chains this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application.
+ +## Example integration of the Fee Middleware module + +```go expandable +/ app.go + +/ Register the AppModule for the fee middleware module +ModuleBasics = module.NewBasicManager( + ... + ibcfee.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the fee middleware module +maccPerms = map[string][]string{ + ... + ibcfeetypes.ModuleName: nil, +} + +... + +/ Add fee middleware Keeper +type App struct { + ... + + IBCFeeKeeper ibcfeekeeper.Keeper + + ... +} + +... + +/ Create store keys + keys := sdk.NewKVStoreKeys( + ... + ibcfeetypes.StoreKey, + ... +) + +... + +app.IBCFeeKeeper = ibcfeekeeper.NewKeeper( + appCodec, keys[ibcfeetypes.StoreKey], + app.IBCKeeper.ChannelKeeper, / may be replaced with IBC middleware + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, app.AccountKeeper, app.BankKeeper, +) + +/ See the section below for configuring an application stack with the fee middleware module + +... + +/ Register fee middleware AppModule +app.moduleManager = module.NewManager( + ... + ibcfee.NewAppModule(app.IBCFeeKeeper), +) + +... + +/ Add fee middleware to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +/ Add fee middleware to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +/ Add fee middleware to init genesis logic +app.moduleManager.SetOrderInitGenesis( + ... + ibcfeetypes.ModuleName, + ... +) +``` + +## Configuring an application stack with Fee Middleware + +As mentioned in [IBC middleware development](/docs/ibc/v6.3.x/ibc/middleware/develop) an application stack may be composed of many or no middlewares that nest a base application. +These layers form the complete set of application logic that enable developers to build composable and flexible IBC application stacks. 
+For example, an application stack may be just a single base application like `transfer`; however, the same application stack composed with `29-fee` will nest the `transfer` base application
+by wrapping it with the Fee Middleware module.
+
+### Transfer
+
+See below for an example of how to create an application stack using `transfer` and `29-fee`.
+The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`.
+The in-line comments describe the execution flow of packets between the application stack and IBC core.
+
+```go expandable
+/ Create Transfer Stack
+/ SendPacket, since it is originating from the application to core IBC:
+/ transferKeeper.SendPacket -> fee.SendPacket -> channel.SendPacket
+
+/ RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way
+/ channel.RecvPacket -> fee.OnRecvPacket -> transfer.OnRecvPacket
+
+/ transfer stack contains (from top to bottom):
+/ - IBC Fee Middleware
+/ - Transfer
+
+/ create IBC module from bottom to top of stack
+var transferStack porttypes.IBCModule
+transferStack = transfer.NewIBCModule(app.TransferKeeper)
+
+transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper)
+
+/ Add transfer stack to IBC Router
+ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack)
+```
+
+### Interchain Accounts
+
+See below for an example of how to create an application stack using `27-interchain-accounts` and `29-fee`.
+The following `icaControllerStack` and `icaHostStack` are configured in `app/app.go` and added to the IBC `Router` with the associated authentication module.
+The in-line comments describe the execution flow of packets between the application stack and IBC core.
+
+```go expandable
+/ Create Interchain Accounts Stack
+/ SendPacket, since it is originating from the application to core IBC:
+/ icaAuthModuleKeeper.SendTx -> icaController.SendPacket -> fee.SendPacket -> channel.SendPacket
+
+/ initialize ICA module with mock module as the authentication module on the controller side
+var icaControllerStack porttypes.IBCModule
+icaControllerStack = ibcmock.NewIBCModule(&mockModule, ibcmock.NewMockIBCApp("", scopedICAMockKeeper))
+
+app.ICAAuthModule = icaControllerStack.(ibcmock.IBCModule)
+
+icaControllerStack = icacontroller.NewIBCMiddleware(icaControllerStack, app.ICAControllerKeeper)
+
+icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper)
+
+/ RecvPacket, message that originates from core IBC and goes down to app, the flow is:
+/ channel.RecvPacket -> fee.OnRecvPacket -> icaHost.OnRecvPacket
+
+var icaHostStack porttypes.IBCModule
+icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper)
+
+icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper)
+
+/ Add authentication module, controller and host to IBC router
+ibcRouter.
+  / the ICA Controller middleware needs to be explicitly added to the IBC Router because the
+  / ICA controller module owns the port capability for ICA. The ICA authentication module
+  / owns the channel capability.
+  AddRoute(ibcmock.ModuleName+icacontrollertypes.SubModuleName, icaControllerStack). / ica with mock auth module stack route to ica (top level of middleware stack)
+  AddRoute(icacontrollertypes.SubModuleName, icaControllerStack).
+  AddRoute(icahosttypes.SubModuleName, icaHostStack)
+```
diff --git a/docs/ibc/v6.3.x/middleware/ics29-fee/msgs.mdx b/docs/ibc/v6.3.x/middleware/ics29-fee/msgs.mdx
new file mode 100644
index 00000000..42096549
--- /dev/null
+++ b/docs/ibc/v6.3.x/middleware/ics29-fee/msgs.mdx
@@ -0,0 +1,90 @@
+---
+title: Fee Messages
+---
+
+## Synopsis
+
+Learn about the different ways to pay for fees, how the fees are paid out and what happens when not enough escrowed fees are available for payout
+
+## Escrowing fees
+
+The fee middleware module exposes two different ways to pay fees for relaying IBC packets:
+
+1. `MsgPayPacketFee`, which enables the escrowing of fees for a packet at the next sequence send and should be combined into one `MultiMsgTx` with the message that will be paid for.
+
+   Note that the `Relayers` field has been set up to allow for an optional whitelist of relayers permitted to receive this fee; however, this feature is not yet enabled.
+
+   ```go expandable
+   type MsgPayPacketFee struct {
+     / fee encapsulates the recv, ack and timeout fees associated with an IBC packet
+     Fee Fee
+     / the source port unique identifier
+     SourcePortId string
+     / the source channel unique identifier
+     SourceChannelId string
+     / account address to refund fee if necessary
+     Signer string
+     / optional list of relayers permitted to receive the packet fee
+     Relayers []string
+   }
+   ```
+
+   The `Fee` message contained in this synchronous fee payment method configures different fees which will be paid out for `MsgRecvPacket`, `MsgAcknowledgement`, and `MsgTimeout`/`MsgTimeoutOnClose`.
+
+   ```go
+   type Fee struct {
+     RecvFee types.Coins
+     AckFee types.Coins
+     TimeoutFee types.Coins
+   }
+   ```
+
+The diagram below shows the `MultiMsgTx` with the `MsgTransfer` coming from a token transfer message, along with `MsgPayPacketFee`.
+
+![msgpaypacket.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/msgpaypacket.png)
+
+2. 
`MsgPayPacketFeeAsync`, which enables the asynchronous escrowing of fees for a specified packet:
+
+   Note that a packet can be 'topped up' multiple times with additional fees of any coin denomination by broadcasting multiple `MsgPayPacketFeeAsync` messages.
+
+   ```go
+   type MsgPayPacketFeeAsync struct {
+     / unique packet identifier comprised of the channel ID, port ID and sequence
+     PacketId channeltypes.PacketId
+     / the packet fee associated with a particular IBC packet
+     PacketFee PacketFee
+   }
+   ```
+
+   where the `PacketFee` also specifies the `Fee` to be paid as well as the refund address for fees which are not paid out
+
+   ```go
+   type PacketFee struct {
+     Fee Fee
+     RefundAddress string
+     Relayers []string
+   }
+   ```
+
+The diagram below shows how multiple `MsgPayPacketFeeAsync` can be broadcast asynchronously. Escrowing of the fee associated with a packet can be carried out by any party because ICS-29 does not dictate a particular fee payer. In fact, chains can choose to simply not expose this fee payment to end users at all and rely on a different module account or even the community pool as the source of relayer incentives.
+
+![paypacketfeeasync.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/paypacketfeeasync.png)
+
+Please see our [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers) for example flows on how to use these messages to incentivise a token transfer channel using a CLI.
+
+## Paying out the escrowed fees
+
+The following diagram walks through the packet flow for an incentivized token transfer and examines the different scenarios for paying out the escrowed fees. We assume that the relayers have registered their counterparty address, detailed in the [Fee distribution section](/docs/ibc/v6.3.x/middleware/ics29-fee/fee-distribution).
+
+![feeflow.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/feeflow.png)
+
+- In the case of a successful transaction, `RecvFee` will be paid out to the designated counterparty payee address which has been registered on the receiver chain and sent back with the `MsgAcknowledgement`, `AckFee` will be paid out to the relayer address which has submitted the `MsgAcknowledgement` on the sending chain (or the registered payee in case one has been registered for the relayer address), and `TimeoutFee` will be reimbursed to the account which escrowed the fee.
+- In case of a timeout transaction, `RecvFee` and `AckFee` will be reimbursed. The `TimeoutFee` will be paid to the `Timeout Relayer` (who submits the timeout message to the source chain).
+
+> Please note that fee payments are built on the assumption that sender chains are the source of incentives: the chain that sends the packets is the same chain where fee payments will occur. Please see the [Fee distribution section](/docs/ibc/v6.3.x/middleware/ics29-fee/fee-distribution) to understand the flow for registering payee and counterparty payee (fee receiving) addresses.
+
+## A locked fee middleware module
+
+The fee middleware module can become locked if the situation arises that the escrow account for the fees does not have sufficient funds to pay out the fees which have been escrowed for each packet. _This situation indicates a severe bug._ In this case, the fee module will be locked until manual intervention fixes the issue.
+
+> A locked fee module will simply skip fee logic and continue on to the underlying packet flow. A channel with a locked fee module will temporarily function as a fee-disabled channel, and the locking of a fee module will not affect the continued flow of packets over the channel.
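+
+The payout rules described above can be sketched in plain Go. This is a conceptual model only: addresses are plain strings, amounts are integers, and the real module pays `sdk.Coins` out of a module escrow account rather than returning a map.
+
+```go
+package main
+
+import "fmt"
+
+// Fee mirrors the three fee buckets escrowed per packet (amounts simplified to ints).
+type Fee struct {
+	RecvFee, AckFee, TimeoutFee int
+}
+
+// distribute models the two lifecycle outcomes: on success, the forward and
+// reverse relayers are paid and the timeout fee is refunded; on timeout, the
+// receive and ack fees are refunded and the timeout relayer is paid.
+func distribute(f Fee, timedOut bool, counterpartyPayee, reversePayee, timeoutRelayer, refundAddr string) map[string]int {
+	out := map[string]int{}
+	if timedOut {
+		// receive and ack never happened: refund those fees, pay the timeout relayer
+		out[refundAddr] += f.RecvFee + f.AckFee
+		out[timeoutRelayer] += f.TimeoutFee
+		return out
+	}
+	// successful lifecycle: pay forward and reverse relayers, refund the timeout fee
+	out[counterpartyPayee] += f.RecvFee
+	out[reversePayee] += f.AckFee
+	out[refundAddr] += f.TimeoutFee
+	return out
+}
+
+func main() {
+	f := Fee{RecvFee: 50, AckFee: 30, TimeoutFee: 20}
+	fmt.Println(distribute(f, false, "cpPayee", "revPayee", "toRelayer", "refund"))
+	fmt.Println(distribute(f, true, "cpPayee", "revPayee", "toRelayer", "refund"))
+}
+```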
diff --git a/docs/ibc/v6.3.x/middleware/ics29-fee/overview.mdx b/docs/ibc/v6.3.x/middleware/ics29-fee/overview.mdx
new file mode 100644
index 00000000..22df7a95
--- /dev/null
+++ b/docs/ibc/v6.3.x/middleware/ics29-fee/overview.mdx
@@ -0,0 +1,49 @@
+---
+title: Overview
+---
+
+## Synopsis
+
+Learn about what the Fee Middleware module is, and how to build custom modules that utilize the Fee Middleware functionality
+
+## What is the Fee Middleware module?
+
+IBC does not depend on relayer operators for transaction verification. However, the relayer infrastructure ensures liveness of the Interchain network: operators listen for packets sent through channels opened between chains, and perform the vital service of ferrying these packets (and proof of the transaction on the sending chain/receipt on the receiving chain) to the clients on each side of the channel.
+
+Though relaying is permissionless, decentralized, and accessible, it does come with operational costs. Running full nodes to query transaction proofs and paying for transaction fees associated with IBC packets are two of the primary cost burdens which have driven the overall discussion on **a general, in-protocol incentivization mechanism for relayers**.
+
+Initially, a [simple proposal](https://github.com/cosmos/ibc/pull/577/files) was created to incentivize relaying on ICS20 token transfers on the destination chain. However, the proposal was specific to ICS20 token transfers and would have to be reimplemented in this format on every other IBC application module.
+
+After much discussion, the proposal was expanded to a [general incentivisation design](https://github.com/cosmos/ibc/tree/master/spec/app/ics-029-fee-payment) that can be adopted by any ICS application protocol as [middleware](/docs/ibc/v6.3.x/ibc/middleware/develop).
+
+## Concepts
+
+ICS29 fee payments in this middleware design are built on the assumption that sender chains are the source of incentives: the chain on which packets are incentivized is the chain that distributes fees to relayer operators. However, as part of the IBC packet flow, messages have to be submitted on both sender and destination chains. This introduces the requirement of a mapping of relayer operators' addresses on both chains.
+
+To achieve the stated requirements, the **fee middleware module has two main groups of functionality**:
+
+- Registering of relayer addresses associated with each party involved in relaying the packet on the source chain. This registration process can be automated on start up of relayer infrastructure and happens only once, not every packet flow.
+
+  This is described in the [Fee distribution section](/docs/ibc/v6.3.x/middleware/ics29-fee/fee-distribution).
+
+- Escrowing fees by any party which will be paid out to each rightful party on completion of the packet lifecycle.
+
+  This is described in the [Fee messages section](/docs/ibc/v6.3.x/middleware/ics29-fee/msgs).
+
+We complete the introduction by giving a list of definitions of relevant terminology.
+
+`Forward relayer`: The relayer that submits the `MsgRecvPacket` message for a given packet (on the destination chain).
+
+`Reverse relayer`: The relayer that submits the `MsgAcknowledgement` message for a given packet (on the source chain).
+
+`Timeout relayer`: The relayer that submits the `MsgTimeout` or `MsgTimeoutOnClose` messages for a given packet (on the source chain).
+
+`Payee`: The account address on the source chain to be paid on completion of the packet lifecycle. The packet lifecycle on the source chain completes with the receipt of a `MsgTimeout`/`MsgTimeoutOnClose` or a `MsgAcknowledgement`.
+
+`Counterparty payee`: The account address to be paid on completion of the packet lifecycle on the destination chain.
The packet lifecycle on the destination chain completes with a successful `MsgRecvPacket`.
+
+`Refund address`: The address of the account paying for the incentivization of packet relaying. The account is refunded timeout fees upon successful acknowledgement. In the event of a packet timeout, both acknowledgement and receive fees are refunded.
+
+## Known Limitations
+
+The first version of fee payments middleware will only support incentivisation of new channels; however, channel upgradeability will enable incentivisation of all existing channels.
diff --git a/docs/ibc/v6.3.x/migrations/sdk-to-v1.mdx b/docs/ibc/v6.3.x/migrations/sdk-to-v1.mdx
new file mode 100644
index 00000000..623c8195
--- /dev/null
+++ b/docs/ibc/v6.3.x/migrations/sdk-to-v1.mdx
@@ -0,0 +1,195 @@
+---
+title: SDK v0.43 to IBC-Go v1
+description: >-
+  This file contains information on how to migrate from the IBC module contained
+  in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository
+  based on the 0.44 SDK version.
+---
+
+This file contains information on how to migrate from the IBC module contained in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository based on the 0.44 SDK version.
+
+## Import Changes
+
+The most obvious change is to the import names. We need to change:
+
+* applications -> apps
+* cosmos-sdk/x/ibc -> ibc-go
+
+On my GNU/Linux based machine I used the following commands, executed in order:
+
+```bash
+grep -RiIl 'cosmos-sdk\/x\/ibc\/applications' | xargs sed -i 's/cosmos-sdk\/x\/ibc\/applications/ibc-go\/modules\/apps/g'
+```
+
+```bash
+grep -RiIl 'cosmos-sdk\/x\/ibc' | xargs sed -i 's/cosmos-sdk\/x\/ibc/ibc-go\/modules/g'
+```
+
+ref: [explanation of the above commands](https://www.internalpointers.com/post/linux-find-and-replace-text-multiple-files)
+
+Executing these commands out of order will cause issues.
+
+Feel free to use your own method for modifying import names.
+
+NOTE: Updating to the `v0.44.0` SDK release and then running `go mod tidy` will cause a downgrade to `v0.42.0` in order to support the old IBC import paths.
+Update the import paths before running `go mod tidy`.
+
+## Chain Upgrades
+
+Chains may choose to upgrade via an upgrade proposal or genesis upgrades. Both in-place store migrations and genesis migrations are supported.
+
+**WARNING**: Please read at least the quick guide for [IBC client upgrades](/docs/ibc/v6.3.x/ibc/upgrades/intro) before upgrading your chain. It is highly recommended you do not change the chain-ID during an upgrade, otherwise you must follow the IBC client upgrade instructions.
+
+Both in-place store migrations and genesis migrations will:
+
+* migrate the solo machine client state from v1 to v2 protobuf definitions
+* prune all solo machine consensus states
+* prune all expired tendermint consensus states
+
+Chains must set a new connection parameter during either in-place store migrations or genesis migration. The new parameter, max expected block time, is used to enforce packet processing delays on the receiving end of an IBC packet flow. Check out the [docs](https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2) for more information.
+
+### In-Place Store Migrations
+
+The new chain binary will need to run migrations in the upgrade handler. The fromVM (previous module version) for the IBC module should be 1. This will allow migrations to be run for IBC, updating the version from 1 to 2.
+
+Ex:
+
+```go expandable
+app.UpgradeKeeper.SetUpgradeHandler("my-upgrade-proposal",
+  func(ctx sdk.Context, _ upgradetypes.Plan, _ module.VersionMap) (module.VersionMap, error) {
+    / set max expected block time parameter. 
Replace the default with your expected value + / https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 + app.IBCKeeper.ConnectionKeeper.SetParams(ctx, ibcconnectiontypes.DefaultParams()) + fromVM := map[string]uint64{ + ... / other modules + "ibc": 1, + ... +} + +return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +### Genesis Migrations + +To perform genesis migrations, the following code must be added to your existing migration code. + +```go expandable +/ add imports as necessary +import ( + + ibcv100 "github.com/cosmos/ibc-go/modules/core/legacy/v100" + ibchost "github.com/cosmos/ibc-go/modules/core/24-host" +) + +... + +/ add in migrate cmd function +/ expectedTimePerBlock is a new connection parameter +/ https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 +newGenState, err = ibcv100.MigrateGenesis(newGenState, clientCtx, *genDoc, expectedTimePerBlock) + if err != nil { + return err +} +``` + +**NOTE:** The genesis chain-id, time and height MUST be updated before migrating IBC, otherwise the tendermint consensus state will not be pruned. + +## IBC Keeper Changes + +The IBC Keeper now takes in the Upgrade Keeper. Please add the chains' Upgrade Keeper after the Staking Keeper: + +```diff + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( +- appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, scopedIBCKeeper, ++ appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, + ) + +``` + +## Proposals + +### UpdateClientProposal + +The `UpdateClient` has been modified to take in two client-identifiers and one initial height. Please see the [documentation](/docs/ibc/v6.3.x/ibc/proposals) for more information. + +### UpgradeProposal + +A new IBC proposal type has been added, `UpgradeProposal`. This handles an IBC (breaking) Upgrade. 
+The previous `UpgradedClientState` field in an Upgrade `Plan` has been deprecated in favor of this new proposal type.
+
+### Proposal Handler Registration
+
+The `ClientUpdateProposalHandler` has been renamed to `ClientProposalHandler`.
+It handles both `UpdateClientProposal`s and `UpgradeProposal`s.
+
+Add this import:
+
+```diff
++ ibcclienttypes "github.com/cosmos/ibc-go/modules/core/02-client/types"
+```
+
+Please ensure the governance module adds the correct route:
+
+```diff
+- AddRoute(ibchost.RouterKey, ibcclient.NewClientUpdateProposalHandler(app.IBCKeeper.ClientKeeper))
++ AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper))
+```
+
+NOTE: Simapp registration was incorrect in the 0.41.x releases. The `UpdateClient` proposal handler should be registered with the router key belonging to `ibc-go/core/02-client/types`
+as shown in the diffs above.
+
+### Proposal CLI Registration
+
+Please ensure both proposal type CLI commands are registered on the governance module by adding the following arguments to `gov.NewAppModuleBasic()`:
+
+Add the following import:
+
+```diff
++ ibcclientclient "github.com/cosmos/ibc-go/modules/core/02-client/client"
+```
+
+Register the cli commands:
+
+```diff
+ gov.NewAppModuleBasic(
+ paramsclient.ProposalHandler, distrclient.ProposalHandler, upgradeclient.ProposalHandler, upgradeclient.CancelProposalHandler,
++ ibcclientclient.UpdateClientProposalHandler, ibcclientclient.UpgradeProposalHandler,
+ ),
+```
+
+REST routes are not supported for these proposals.
+
+## Proto file changes
+
+The gRPC querier service endpoints have changed slightly. The previous files used the `v1beta1` gRPC route; this has been updated to `v1`.
+
+The solo machine has replaced the `FrozenSequence` uint64 field with an `IsFrozen` boolean field. The package has been bumped from `v1` to `v2`.
+
+## IBC callback changes
+
+### OnRecvPacket
+
+Application developers need to update their `OnRecvPacket` callback logic.
+ +The `OnRecvPacket` callback has been modified to only return the acknowledgement. The acknowledgement returned must implement the `Acknowledgement` interface. The acknowledgement should indicate if it represents a successful processing of a packet by returning true on `Success()` and false in all other cases. A return value of false on `Success()` will result in all state changes which occurred in the callback being discarded. More information can be found in the [documentation](/docs/ibc/v6.3.x/ibc/apps/ibcmodule#receiving-packets). + +The `OnRecvPacket`, `OnAcknowledgementPacket`, and `OnTimeoutPacket` callbacks are now passed the `sdk.AccAddress` of the relayer who relayed the IBC packet. Applications may use or ignore this information. + +## IBC Event changes + +The `packet_data` attribute has been deprecated in favor of `packet_data_hex`, in order to provide standardized encoding/decoding of packet data in events. While the `packet_data` event still exists, all relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_data_hex` as soon as possible. + +The `packet_ack` attribute has also been deprecated in favor of `packet_ack_hex` for the same reason stated above. All relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_ack_hex` as soon as possible. + +The `consensus_height` attribute has been removed in the Misbehaviour event emitted. IBC clients no longer have a frozen height and misbehaviour does not necessarily have an associated height. 
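+
+The hex attributes described above can be consumed with standard library tooling. A minimal sketch of decoding a `packet_data_hex` attribute back into raw packet data bytes (the JSON payload here is illustrative, not a real emitted event):
+
+```go
+package main
+
+import (
+	"encoding/hex"
+	"fmt"
+)
+
+func main() {
+	// What a chain would emit as the packet_data_hex attribute value:
+	// the raw packet data bytes, hex-encoded.
+	packetData := []byte(`{"amount":"100","denom":"uatom"}`)
+	attr := hex.EncodeToString(packetData)
+
+	// An event consumer hex-decodes the attribute to recover the bytes.
+	decoded, err := hex.DecodeString(attr)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(string(decoded))
+}
+```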
+
+## Relevant SDK changes
+
+* (codec) [#9226](https://github.com/cosmos/cosmos-sdk/pull/9226) Rename codec interfaces and methods to follow general Go interface conventions:
+  * `codec.Marshaler` → `codec.Codec` (this defines objects which serialize other objects)
+  * `codec.BinaryMarshaler` → `codec.BinaryCodec`
+  * `codec.JSONMarshaler` → `codec.JSONCodec`
+  * Removed `BinaryBare` suffix from `BinaryCodec` methods (`MarshalBinaryBare`, `UnmarshalBinaryBare`, ...)
+  * Removed `Binary` infix from `BinaryCodec` methods (`MarshalBinaryLengthPrefixed`, `UnmarshalBinaryLengthPrefixed`, ...)
diff --git a/docs/ibc/v6.3.x/migrations/support-denoms-with-slashes.mdx b/docs/ibc/v6.3.x/migrations/support-denoms-with-slashes.mdx
new file mode 100644
index 00000000..6a698566
--- /dev/null
+++ b/docs/ibc/v6.3.x/migrations/support-denoms-with-slashes.mdx
@@ -0,0 +1,90 @@
+---
+title: Support transfer of coins whose base denom contains slashes
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+This document is necessary when chains are upgrading from a version that does not support base denoms with slashes (e.g. v3.0.0) to a version that does (e.g. v3.2.0). All versions of ibc-go smaller than v1.5.0 for the v1.x release line, v2.3.0 for the v2.x release line, and v3.1.0 for the v3.x release line do **NOT** support IBC token transfers of coins whose base denoms contain slashes. 
Therefore, the in-place or genesis migrations described in this document are required when upgrading.
+
+If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect.
+
+E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`.
+
+This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes.
+
+To do so, chain binaries should include a migration script that will run when the chain upgrades from not supporting base denominations with slashes to supporting base denominations with slashes.
+
+## Chains
+
+### ICS20 - Transfer
+
+The transfer module will now support slashes in base denoms, so we must iterate over current traces to check if any of them are incorrectly formed and correct the trace information.
+
+### Upgrade Proposal
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces",
+  func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+    / transfer module consensus version has been bumped to 2
+    return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+})
+```
+
+This is only necessary if there are denom traces in the store with incorrect trace information from previously received coins that had a slash in the base denom. However, it is recommended that any chain upgrading to support base denominations with slashes runs this code for safety.
+
+For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1680).
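+
+The corrected split between the trace path and a slashed base denom can be sketched as follows. This is an illustrative simplification of the real `ParseDenomTrace`: the port check is reduced to the literal `transfer` and the channel check to a `channel-` prefix, whereas the actual parser validates proper port and channel identifiers.
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// parseDenomTrace consumes leading path segments only in valid
+// {port}/{channel} pairs; everything after the last such pair is the
+// base denom, which may itself contain slashes.
+func parseDenomTrace(fullPath string) (trace, baseDenom string) {
+	parts := strings.Split(fullPath, "/")
+	i := 0
+	for i+1 < len(parts) && parts[i] == "transfer" && strings.HasPrefix(parts[i+1], "channel-") {
+		i += 2 // consume one {port}/{channel} hop
+	}
+	return strings.Join(parts[:i], "/"), strings.Join(parts[i:], "/")
+}
+
+func main() {
+	// The example from the text: a base denom with slashes received over one hop.
+	trace, base := parseDenomTrace("transfer/channel-0/testcoin/testcoin/testcoin")
+	fmt.Printf("Trace: %q, BaseDenom: %q\n", trace, base)
+}
+```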
+ +### Genesis Migration + +If the chain chooses to add support for slashes in base denoms via genesis export, then the trace information must be corrected during genesis migration. + +The migration code required may look like: + +```go expandable +func migrateGenesisSlashedDenomsUpgrade(appState genutiltypes.AppMap, clientCtx client.Context, genDoc *tmtypes.GenesisDoc) (genutiltypes.AppMap, error) { + if appState[ibctransfertypes.ModuleName] != nil { + transferGenState := &ibctransfertypes.GenesisState{ +} + +clientCtx.Codec.MustUnmarshalJSON(appState[ibctransfertypes.ModuleName], transferGenState) + substituteTraces := make([]ibctransfertypes.DenomTrace, len(transferGenState.DenomTraces)) + for i, dt := range transferGenState.DenomTraces { + / replace all previous traces with the latest trace if validation passes + / note most traces will have same value + newTrace := ibctransfertypes.ParseDenomTrace(dt.GetFullDenomPath()) + if err := newTrace.Validate(); err != nil { + substituteTraces[i] = dt +} + +else { + substituteTraces[i] = newTrace +} + +} + +transferGenState.DenomTraces = substituteTraces + + / delete old genesis state + delete(appState, ibctransfertypes.ModuleName) + + / set new ibc transfer genesis state + appState[ibctransfertypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(transferGenState) +} + +return appState, nil +} +``` + +For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1528). diff --git a/docs/ibc/v6.3.x/migrations/v1-to-v2.mdx b/docs/ibc/v6.3.x/migrations/v1-to-v2.mdx new file mode 100644 index 00000000..035b8c94 --- /dev/null +++ b/docs/ibc/v6.3.x/migrations/v1-to-v2.mdx @@ -0,0 +1,60 @@ +--- +title: IBC-Go v1 to v2 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. 
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+```go
+github.com/cosmos/ibc-go -> github.com/cosmos/ibc-go/v2
+```
+
+## Chains
+
+* No relevant changes were made in this release.
+
+## IBC Apps
+
+A new function has been added to the app module interface:
+
+```go expandable
+/ NegotiateAppVersion performs application version negotiation given the provided channel ordering, connectionID, portID, counterparty and proposed version.
+/ An error is returned if version negotiation cannot be performed. For example, an application module implementing this interface
+/ may decide to return an error in the event of the proposed version being incompatible with its own
+NegotiateAppVersion(
+  ctx sdk.Context,
+  order channeltypes.Order,
+  connectionID string,
+  portID string,
+  counterparty channeltypes.Counterparty,
+  proposedVersion string,
+) (version string, err error)
+```
+
+This function should perform application version negotiation and return the negotiated version. If the version cannot be negotiated, an error should be returned. This function is only used on the client side.
+
+### sdk.Result removed
+
+`sdk.Result` has been removed as a return value in the application callbacks. Previously it was being discarded by core IBC and was thus unused.
+
+## Relayers
+
+A new gRPC endpoint, `AppVersion`, has been added to 05-port. It returns the negotiated app version. This function should be used for the `ChanOpenTry` channel handshake step to decide upon the application version which should be set in the channel.
+ +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v6.3.x/migrations/v2-to-v3.mdx b/docs/ibc/v6.3.x/migrations/v2-to-v3.mdx new file mode 100644 index 00000000..f02fa88a --- /dev/null +++ b/docs/ibc/v6.3.x/migrations/v2-to-v3.mdx @@ -0,0 +1,187 @@ +--- +title: IBC-Go v2 to v3 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v2 -> github.com/cosmos/ibc-go/v3 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS20 + +The `transferkeeper.NewKeeper(...)` now takes in an ICS4Wrapper. +The ICS4Wrapper should be the IBC Channel Keeper unless ICS 20 is being connected to a middleware application. + +### ICS27 + +ICS27 Interchain Accounts has been added as a supported IBC application of ibc-go. +Please see the [ICS27 documentation](/docs/ibc/v6.3.x/apps/interchain-accounts/overview) for more information. 
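The ICS20 change above (`transferkeeper.NewKeeper(...)` taking an ICS4Wrapper) usually amounts to one extra argument in `app.go`. The sketch below is an assumption based on simapp-style wiring; the keeper field names (`app.IBCKeeper.ChannelKeeper`, `scopedTransferKeeper`) are illustrative, not prescribed by this migration:

```diff
 app.TransferKeeper = ibctransferkeeper.NewKeeper(
   appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName),
+  app.IBCKeeper.ChannelKeeper, / ICS4Wrapper: the channel keeper unless ICS20 sits under middleware
   app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper,
   app.AccountKeeper, app.BankKeeper, scopedTransferKeeper,
 )
```

If ICS20 is wrapped by a middleware application (e.g. fee middleware in later releases), the middleware keeper is passed as the ICS4Wrapper instead.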
+
+### Upgrade Proposal
+
+If the chain will adopt ICS27, it must set the appropriate params during the execution of the upgrade handler in `app.go`:
+
+```go expandable
+app.UpgradeKeeper.SetUpgradeHandler("v3",
+  func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+    / set the ICS27 consensus version so InitGenesis is not run
+    fromVM[icatypes.ModuleName] = icamodule.ConsensusVersion()
+
+    / create ICS27 Controller submodule params
+    controllerParams := icacontrollertypes.Params{
+      ControllerEnabled: true,
+    }
+
+    / create ICS27 Host submodule params
+    hostParams := icahosttypes.Params{
+      HostEnabled:   true,
+      AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ...},
+    }
+
+    / initialize ICS27 module
+    icamodule.InitModule(ctx, controllerParams, hostParams)
+
+    ...
+
+    return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+})
+```
+
+The host and controller submodule params only need to be set if the chain integrates those submodules.
+For example, if a chain chooses not to integrate a controller submodule, it may pass empty params into `InitModule`.
+
+#### Add `StoreUpgrades` for ICS27 module
+
+For ICS27 it is also necessary to [manually add store upgrades](https://docs.cosmos.network/main/learn/advanced/upgrade#add-storeupgrades-for-new-modules) for the new ICS27 module and then configure the store loader to apply those upgrades in `app.go`:
+
+```go
+if upgradeInfo.Name == "v3" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
+  storeUpgrades := store.StoreUpgrades{
+    Added: []string{icacontrollertypes.StoreKey, icahosttypes.StoreKey},
+  }
+
+  app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
+}
+```
+
+This ensures that the new module's stores are added to the multistore before the migrations begin.
+The host and controller submodule keys only need to be added if the chain integrates those submodules.
For example, if a chain chooses not to integrate a controller submodule, it does not need to add the controller key to the `Added` field.
+
+### Genesis migrations
+
+If the chain will adopt ICS27 and chooses to upgrade via a genesis export, then the ICS27 parameters must be set during genesis migration.
+
+The migration code required may look like:
+
+```go expandable
+controllerGenesisState := icatypes.DefaultControllerGenesis()
+/ overwrite parameters as desired
+controllerGenesisState.Params = icacontrollertypes.Params{
+  ControllerEnabled: true,
+}
+
+hostGenesisState := icatypes.DefaultHostGenesis()
+/ overwrite parameters as desired
+hostGenesisState.Params = icahosttypes.Params{
+  HostEnabled:   true,
+  AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ...},
+}
+
+icaGenesisState := icatypes.NewGenesisState(controllerGenesisState, hostGenesisState)
+
+/ set new ics27 genesis state
+appState[icatypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(icaGenesisState)
+```
+
+### Ante decorator
+
+The field of type `channelkeeper.Keeper` in the `AnteDecorator` structure has been replaced with a field of type `*keeper.Keeper`:
+
+```diff
+type AnteDecorator struct {
+- k channelkeeper.Keeper
++ k *keeper.Keeper
+}
+
+- func NewAnteDecorator(k channelkeeper.Keeper) AnteDecorator {
++ func NewAnteDecorator(k *keeper.Keeper) AnteDecorator {
+  return AnteDecorator{k: k}
+}
+```
+
+## IBC Apps
+
+### `OnChanOpenTry` must return negotiated application version
+
+The `OnChanOpenTry` application callback has been modified.
+The return signature now includes the application version.
+IBC applications must perform application version negotiation in `OnChanOpenTry` using the counterparty version.
+The negotiated application version then must be returned in `OnChanOpenTry` to core IBC.
+Core IBC will set this version in the TRYOPEN channel.
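As an illustrative sketch only (the parameter list is abbreviated and may differ between ibc-go releases), the `OnChanOpenTry` change typically looks like this in an application module:

```diff
 func (im IBCModule) OnChanOpenTry(
   ctx sdk.Context,
   order channeltypes.Order,
   connectionHops []string,
   portID,
   channelID string,
   chanCap *capabilitytypes.Capability,
   counterparty channeltypes.Counterparty,
   counterpartyVersion string,
-) error {
+) (string, error) {
+  / negotiate the application version using counterpartyVersion
+  / and return the result to core IBC
```

The returned string is what core IBC writes into the TRYOPEN channel.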
+
+### `OnChanOpenAck` will take additional `counterpartyChannelID` argument
+
+The `OnChanOpenAck` application callback has been modified.
+The arguments now include the counterparty channel id.
+
+### `NegotiateAppVersion` removed from `IBCModule` interface
+
+Previously this logic was handled by the `NegotiateAppVersion` function.
+Relayers would query this function before calling `ChanOpenTry`.
+Applications would then need to verify that the passed in version was correct.
+Now applications will perform this version negotiation during the channel handshake, thus removing the need for `NegotiateAppVersion`.
+
+### Channel state will not be set before application callback
+
+The channel handshake logic has been reorganized within core IBC.
+Channel state will not be set in state after the application callback is performed.
+Applications must rely only on the passed in channel parameters instead of querying the channel keeper for channel state.
+
+### IBC application callbacks moved from `AppModule` to `IBCModule`
+
+Previously, IBC module callbacks were part of the `AppModule` type.
+The recommended approach is to create an `IBCModule` type and move the IBC module callbacks from `AppModule` to `IBCModule` in a separate file `ibc_module.go`.
+
+The mock module go API has been broken in this release by applying the above format.
+The IBC module callbacks have been moved from the mock module's `AppModule` into a new type `IBCModule`.
+
+As part of this release, the mock module now supports middleware testing. Please see the [README](https://github.com/cosmos/ibc-go/blob/v6.1.0/testing/README.md#middleware-testing) for more information.
+
+Please review the [mock](https://github.com/cosmos/ibc-go/blob/v6.1.0/testing/mock/ibc_module.go) and [transfer](https://github.com/cosmos/ibc-go/blob/v6.1.0/modules/apps/transfer/ibc_module.go) modules as examples.
Additionally, [simapp](https://github.com/cosmos/ibc-go/blob/v6.1.0/testing/simapp/app.go) provides an example of how `IBCModule` types should now be added to the IBC router in favour of `AppModule`.
+
+### IBC testing package
+
+`TestChain`s are now created with chainIDs beginning from an index of 1. Any calls to `GetChainID(0)` will now fail. Please increment all calls to `GetChainID` by 1.
+
+## Relayers
+
+`AppVersion` gRPC has been removed.
+The `version` string in `MsgChanOpenTry` has been deprecated and will be ignored by core IBC.
+Relayers no longer need to determine the version to use on the `ChanOpenTry` step.
+IBC applications will determine the correct version using the counterparty version.
+
+## IBC Light Clients
+
+The `GetProofSpecs` function has been removed from the `ClientState` interface. This function was previously unused by core IBC. Light clients which don't use this function may remove it. diff --git a/docs/ibc/v6.3.x/migrations/v3-to-v4.mdx b/docs/ibc/v6.3.x/migrations/v3-to-v4.mdx new file mode 100644 index 00000000..e35e836a --- /dev/null +++ b/docs/ibc/v6.3.x/migrations/v3-to-v4.mdx @@ -0,0 +1,156 @@ +--- +title: IBC-Go v3 to v4 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+```go
+github.com/cosmos/ibc-go/v3 -> github.com/cosmos/ibc-go/v4
+```
+
+No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go.
+
+## Chains
+
+### ICS27 - Interchain Accounts
+
+The controller submodule now implements the 05-port `Middleware` interface instead of the 05-port `IBCModule` interface. Chains that integrate the controller submodule need to create it with the `NewIBCMiddleware` constructor function. For example:
+
+```diff
+- icacontroller.NewIBCModule(app.ICAControllerKeeper, icaAuthIBCModule)
++ icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper)
+```
+
+where `icaAuthIBCModule` is the Interchain Accounts authentication IBC Module.
+
+### ICS29 - Fee Middleware
+
+The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly.
+
+Please read the Fee Middleware [integration documentation](/docs/ibc/v6.3.x/middleware/ics29-fee/integration) for an in depth guide on how to configure the module correctly in order to incentivize IBC packets.
+
+Take a look at the following diff for an [example setup](https://github.com/cosmos/ibc-go/pull/1432/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08aL366) of how to incentivize ics27 channels.
+
+### Migration to fix support for base denoms with slashes
+
+As part of [v1.5.0](https://github.com/cosmos/ibc-go/releases/tag/v1.5.0), [v2.3.0](https://github.com/cosmos/ibc-go/releases/tag/v2.3.0) and [v3.1.0](https://github.com/cosmos/ibc-go/releases/tag/v3.1.0), a [migration handler code sample was documented](/docs/ibc/v6.3.x/migrations/support-denoms-with-slashes#upgrade-proposal) that needs to run in order to correct the trace information of coins transferred using ICS20 whose base denom contains slashes.
+
+Based on feedback from the community, we now provide an improved solution to run the same migration that does not require copying a large piece of code over from the migration document, but instead requires only adding a one-line upgrade handler.
+
+If the chain will migrate to supporting base denoms with slashes, it must set the appropriate params during the execution of the upgrade handler in `app.go`:
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces",
+  func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+    / transfer module consensus version has been bumped to 2
+    return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+})
+```
+
+If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect.
+
+E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`.
+
+This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes.
+
+## IBC Apps
+
+### ICS03 - Connection
+
+Crossing hellos have been removed from 03-connection handshake negotiation.
+`PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated and is no longer used by core IBC.
+
+`NewMsgConnectionOpenTry` no longer takes in the `PreviousConnectionId` as crossing hellos are no longer supported. A non-empty `PreviousConnectionId` will fail basic validation for this message.
+
+### ICS04 - Channel
+
+The `WriteAcknowledgement` API now takes the `exported.Acknowledgement` type instead of passing in the acknowledgement byte array directly.
+This is an API breaking change and as such IBC application developers will have to update any calls to `WriteAcknowledgement`.
+
+The `OnChanOpenInit` application callback has been modified.
+The return signature now includes the application version as detailed in the latest IBC [spec changes](https://github.com/cosmos/ibc/pull/629).
+
+The `NewErrorAcknowledgement` method signature has changed.
+It now accepts an `error` rather than a `string`. This was done in order to prevent accidental state changes.
+All error acknowledgements now contain a deterministic ABCI code and error message. It is the responsibility of the application developer to emit error details in events.
+
+Crossing hellos have been removed from 04-channel handshake negotiation.
+IBC applications no longer need to account for already claimed capabilities in the `OnChanOpenTry` callback. The capability provided by core IBC must be able to be claimed without error.
+`PreviousChannelId` in `MsgChannelOpenTry` has been deprecated and is no longer used by core IBC.
+
+`NewMsgChannelOpenTry` no longer takes in the `PreviousChannelId` as crossing hellos are no longer supported. A non-empty `PreviousChannelId` will fail basic validation for this message.
+
+### ICS27 - Interchain Accounts
+
+The `RegisterInterchainAccount` API has been modified to include an additional `version` argument. This change has been made in order to support ICS29 fee middleware, for relayer incentivization of ICS27 packets.
+Consumers of `RegisterInterchainAccount` are now expected to build the appropriate JSON encoded version string themselves and pass it accordingly.
+This should be constructed within the interchain accounts authentication module which leverages the APIs exposed via the interchain accounts `controllerKeeper`.
If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(feeEnabledVersion)); err != nil { + return err +} +``` + +## Relayers + +When using the `DenomTrace` gRPC, the full IBC denomination with the `ibc/` prefix may now be passed in. + +Crossing hellos are no longer supported by core IBC for 03-connection and 04-channel. 
The handshake should be completed in the logical 4-step process (INIT, TRY, ACK, CONFIRM). diff --git a/docs/ibc/v6.3.x/migrations/v4-to-v5.mdx b/docs/ibc/v6.3.x/migrations/v4-to-v5.mdx new file mode 100644 index 00000000..fbe6dab2 --- /dev/null +++ b/docs/ibc/v6.3.x/migrations/v4-to-v5.mdx @@ -0,0 +1,440 @@ +--- +title: IBC-Go v4 to v5 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* [Chains](#chains)
+* [IBC Apps](#ibc-apps)
+* [Relayers](#relayers)
+* [IBC Light Clients](#ibc-light-clients)
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+```go
+github.com/cosmos/ibc-go/v4 -> github.com/cosmos/ibc-go/v5
+```
+
+## Chains
+
+### Ante decorator
+
+The `AnteDecorator` type in `core/ante` has been renamed to `RedundantRelayDecorator` (and the corresponding constructor function to `NewRedundantRelayDecorator`). Therefore, in the function that creates the instance of the `sdk.AnteHandler` type (e.g.
`NewAnteHandler`) the change would look like this:
+
+```diff expandable
+func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) {
+  / parameter validation
+
+  anteDecorators := []sdk.AnteDecorator{
+    / other ante decorators
+-   ibcante.NewAnteDecorator(options.IBCKeeper),
++   ibcante.NewRedundantRelayDecorator(options.IBCKeeper),
+  }
+
+  return sdk.ChainAnteDecorators(anteDecorators...), nil
+}
+```
+
+The `AnteDecorator` was actually renamed twice, but in [this PR](https://github.com/cosmos/ibc-go/pull/1820) you can see the changes made for the final rename.
+
+## IBC Apps
+
+### Core
+
+The `key` parameter of the `NewKeeper` function in `modules/core/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`):
+
+```diff
+func NewKeeper(
+  cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
+  stakingKeeper clienttypes.StakingKeeper,
+  upgradeKeeper clienttypes.UpgradeKeeper,
+  scopedKeeper capabilitykeeper.ScopedKeeper,
+) *Keeper
+```
+
+The `RegisterRESTRoutes` function in `modules/core` has been removed.
+ +### ICS03 - Connection + +The `key` parameter of the `NewKeeper` function in `modules/core/03-connection/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ck types.ClientKeeper +) Keeper +``` + +### ICS04 - Channel + +The function `NewPacketId` in `modules/core/04-channel/types` has been renamed to `NewPacketID`: + +```diff +- func NewPacketId( ++ func NewPacketID( + portID, + channelID string, + seq uint64 +) PacketId +``` + +The `key` parameter of the `NewKeeper` function in `modules/core/04-channel/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + clientKeeper types.ClientKeeper, + connectionKeeper types.ConnectionKeeper, + portKeeper types.PortKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +### ICS20 - Transfer + +The `key` parameter of the `NewKeeper` function in `modules/apps/transfer/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff expandable +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ics4Wrapper types.ICS4Wrapper, + channelKeeper types.ChannelKeeper, + portKeeper types.PortKeeper, + authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +The `amount` parameter of function `GetTransferCoin` in `modules/apps/transfer/types` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func GetTransferCoin( + portID, channelID, baseDenom string, +- amount sdk.Int ++ amount math.Int 
+) sdk.Coin
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/transfer` has been removed.
+
+### ICS27 - Interchain Accounts
+
+The `key` and `msgRouter` parameters of the `NewKeeper` functions in
+
+* `modules/apps/27-interchain-accounts/controller/keeper`
+* and `modules/apps/27-interchain-accounts/host/keeper`
+
+have changed type. The `key` parameter is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`), and the `msgRouter` parameter is now of type `*icatypes.MessageRouter` (where `icatypes` is an import alias for `"github.com/cosmos/ibc-go/v5/modules/apps/27-interchain-accounts/types"`):
+
+```diff expandable
+/ NewKeeper creates a new interchain accounts controller Keeper instance
+func NewKeeper(
+  cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
+  ics4Wrapper icatypes.ICS4Wrapper,
+  channelKeeper icatypes.ChannelKeeper,
+  portKeeper icatypes.PortKeeper,
+  scopedKeeper capabilitykeeper.ScopedKeeper,
+- msgRouter *baseapp.MsgServiceRouter,
++ msgRouter *icatypes.MessageRouter,
+) Keeper
+```
+
+```diff expandable
+/ NewKeeper creates a new interchain accounts host Keeper instance
+func NewKeeper(
+  cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
+  channelKeeper icatypes.ChannelKeeper,
+  portKeeper icatypes.PortKeeper,
+  accountKeeper icatypes.AccountKeeper,
+  scopedKeeper capabilitykeeper.ScopedKeeper,
+- msgRouter *baseapp.MsgServiceRouter,
++ msgRouter *icatypes.MessageRouter,
+) Keeper
+```
+
+The new `MessageRouter` interface is defined as:
+
+```go
+type MessageRouter interface {
+  Handler(msg sdk.Msg) baseapp.MsgServiceHandler
+}
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/27-interchain-accounts` has been removed.
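A cheap way to confirm that the router you pass still satisfies an interface like `MessageRouter` is a compile-time assertion. The sketch below uses stand-in types so it runs without the Cosmos SDK; the names (`Msg`, `serviceRouter`) are invented here purely to show the idiom:

```go
package main

import "fmt"

// Stand-ins for sdk.Msg and baseapp.MsgServiceHandler; only the shape matters.
type Msg interface{ Route() string }

type MsgServiceHandler func(msg Msg) (string, error)

// MessageRouter mirrors the shape of the interface described above.
type MessageRouter interface {
	Handler(msg Msg) MsgServiceHandler
}

// serviceRouter is a stand-in for a concrete router implementation.
type serviceRouter struct{}

func (serviceRouter) Handler(msg Msg) MsgServiceHandler {
	return func(m Msg) (string, error) { return m.Route(), nil }
}

// Compile-time assertion: the build fails if serviceRouter ever stops
// satisfying MessageRouter after a signature change.
var _ MessageRouter = serviceRouter{}

type bankMsg struct{}

func (bankMsg) Route() string { return "bank" }

func main() {
	h := serviceRouter{}.Handler(bankMsg{})
	res, _ := h(bankMsg{})
	fmt.Println(res)
}
```

The same one-line `var _ ... = ...` assertion can be placed next to any keeper wiring to catch interface drift at build time rather than at runtime.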
+
+An additional parameter, `ics4Wrapper`, has been added to the `host` submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper`.
+This allows the `host` submodule to correctly unwrap the channel version for channel reopening handshakes in the `OnChanOpenTry` callback.
+
+```diff expandable
+func NewKeeper(
+  cdc codec.BinaryCodec,
+  key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
++ ics4Wrapper icatypes.ICS4Wrapper,
+  channelKeeper icatypes.ChannelKeeper,
+  portKeeper icatypes.PortKeeper,
+  accountKeeper icatypes.AccountKeeper,
+  scopedKeeper icatypes.ScopedKeeper,
+  msgRouter icatypes.MessageRouter,
+) Keeper
+```
+
+#### Cosmos SDK message handler responses in packet acknowledgement
+
+The construction of the transaction response of a message execution on the host chain has changed. The `Data` field in `sdk.TxMsgData` has been deprecated, and since Cosmos SDK 0.46 the `MsgResponses` field contains the message handler responses packed into `Any`s.
+
+For chains on Cosmos SDK 0.45 and below, the message response was constructed like this:
+
+```go expandable
+txMsgData := &sdk.TxMsgData{
+  Data: make([]*sdk.MsgData, len(msgs)),
+}
+
+for i, msg := range msgs {
+  / message validation
+
+  msgResponse, err := k.executeMsg(cacheCtx, msg)
+  / return if err != nil
+
+  txMsgData.Data[i] = &sdk.MsgData{
+    MsgType: sdk.MsgTypeURL(msg),
+    Data:    msgResponse,
+  }
+}
+
+/ emit events
+
+txResponse, err := proto.Marshal(txMsgData)
+/ return if err != nil
+
+return txResponse, nil
+```
+
+And for chains on Cosmos SDK 0.46 and above, it is now done like this:
+
+```go expandable
+txMsgData := &sdk.TxMsgData{
+  MsgResponses: make([]*codectypes.Any, len(msgs)),
+}
+
+for i, msg := range msgs {
+  / message validation
+
+  any, err := k.executeMsg(cacheCtx, msg)
+  / return if err != nil
+
+  txMsgData.MsgResponses[i] = any
+}
+
+/ emit events
+
+txResponse, err := proto.Marshal(txMsgData)
+/ return if err != nil
+
+return txResponse, nil
+```
+
+When handling the acknowledgement in the `OnAcknowledgementPacket` callback of a custom ICA controller module, the logic used to handle the message handler response depends on whether `txMsgData.Data` is empty or not.
**Only controller chains on Cosmos SDK 0.46 or above will be able to write the logic needed to handle the response from a host chain on Cosmos SDK 0.46 or above.**
+
+```go expandable
+var ack channeltypes.Acknowledgement
+if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil {
+  return err
+}
+
+var txMsgData sdk.TxMsgData
+if err := proto.Unmarshal(ack.GetResult(), &txMsgData); err != nil {
+  return err
+}
+
+switch len(txMsgData.Data) {
+case 0: / for SDK 0.46 and above
+  for _, msgResponse := range txMsgData.MsgResponses {
+    / unmarshal msgResponse and execute logic based on the response
+  }
+
+  return nil
+default: / for SDK 0.45 and below
+  for _, msgData := range txMsgData.Data {
+    / unmarshal msgData and execute logic based on the response
+  }
+}
+```
+
+See the corresponding documentation about authentication modules for more information.
+
+### ICS29 - Fee Middleware
+
+The `key` parameter of the `NewKeeper` function in `modules/apps/29-fee` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`):
+
+```diff expandable
+func NewKeeper(
+  cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+  paramSpace paramtypes.Subspace,
+  ics4Wrapper types.ICS4Wrapper,
+  channelKeeper types.ChannelKeeper,
+  portKeeper types.PortKeeper,
+  authKeeper types.AccountKeeper,
+  bankKeeper types.BankKeeper,
+) Keeper
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/29-fee` has been removed.
+
+### IBC testing package
+
+The `MockIBCApp` type has been renamed to `IBCApp` (and the corresponding constructor function to `NewIBCApp`).
This has therefore resulted in:
+
+* The `IBCApp` field of the `*IBCModule` in `testing/mock` changing its type to `*IBCApp`:
+
+```diff
+type IBCModule struct {
+  appModule *AppModule
+- IBCApp *MockIBCApp / base application of an IBC middleware stack
++ IBCApp *IBCApp / base application of an IBC middleware stack
+}
+```
+
+* The `app` parameter to `*NewIBCModule` in `testing/mock` changing its type to `*IBCApp`:
+
+```diff
+func NewIBCModule(
+  appModule *AppModule,
+- app *MockIBCApp
++ app *IBCApp
+) IBCModule
+```
+
+The `MockEmptyAcknowledgement` type has been renamed to `EmptyAcknowledgement` (and the corresponding constructor function to `NewEmptyAcknowledgement`).
+
+The `TestingApp` interface in `testing` has gone through some modifications:
+
+* The return type of the function `GetStakingKeeper` is no longer the concrete type `stakingkeeper.Keeper` (where `stakingkeeper` is an import alias for `"github.com/cosmos/cosmos-sdk/x/staking/keeper"`); it has been changed to the interface `ibctestingtypes.StakingKeeper` (where `ibctestingtypes` is an import alias for `"github.com/cosmos/ibc-go/v5/testing/types"`). See this [PR](https://github.com/cosmos/ibc-go/pull/2028) for more details. The `StakingKeeper` interface is defined as:
+
+```go
+type StakingKeeper interface {
+  GetHistoricalInfo(ctx sdk.Context, height int64) (stakingtypes.HistoricalInfo, bool)
+}
+```
+
+* The return type of the function `LastCommitID` has changed to `storetypes.CommitID` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`).
+ +See the following `git diff` for more details: + +```diff expandable +type TestingApp interface { + abci.Application + + / ibc-go additions + GetBaseApp() *baseapp.BaseApp +- GetStakingKeeper() stakingkeeper.Keeper ++ GetStakingKeeper() ibctestingtypes.StakingKeeper + GetIBCKeeper() *keeper.Keeper + GetScopedIBCKeeper() capabilitykeeper.ScopedKeeper + GetTxConfig() client.TxConfig + + / Implemented by SimApp + AppCodec() codec.Codec + + / Implemented by BaseApp +- LastCommitID() sdk.CommitID ++ LastCommitID() storetypes.CommitID + LastBlockHeight() int64 +} +``` + +The `powerReduction` parameter of the function `SetupWithGenesisValSet` in `testing` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func SetupWithGenesisValSet( + t *testing.T, + valSet *tmtypes.ValidatorSet, + genAccs []authtypes.GenesisAccount, + chainID string, +- powerReduction sdk.Int, ++ powerReduction math.Int, + balances ...banktypes.Balance +) TestingApp +``` + +The `accAmt` parameter of the functions + +* `AddTestAddrsFromPubKeys` , +* `AddTestAddrs` +* and `AddTestAddrsIncremental` + +in `testing/simapp` are now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff expandable +func AddTestAddrsFromPubKeys( + app *SimApp, + ctx sdk.Context, + pubKeys []cryptotypes.PubKey, +- accAmt sdk.Int, ++ accAmt math.Int +) +func addTestAddrs( + app *SimApp, + ctx sdk.Context, + accNum int, +- accAmt sdk.Int, ++ accAmt math.Int, + strategy GenerateAccountStrategy +) []sdk.AccAddress +func AddTestAddrsIncremental( + app *SimApp, + ctx sdk.Context, + accNum int, +- accAmt sdk.Int, ++ accAmt math.Int +) []sdk.AccAddress +``` + +The `RegisterRESTRoutes` function in `testing/mock` has been removed. + +## Relayers + +* No relevant changes were made in this release. 
+ +## IBC Light Clients + +### ICS02 - Client + +The `key` parameter of the `NewKeeper` function in `modules/core/02-client/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + sk types.StakingKeeper, + uk types.UpgradeKeeper +) Keeper +``` diff --git a/docs/ibc/v6.3.x/migrations/v5-to-v6.mdx b/docs/ibc/v6.3.x/migrations/v5-to-v6.mdx new file mode 100644 index 00000000..cf96d71e --- /dev/null +++ b/docs/ibc/v6.3.x/migrations/v5-to-v6.mdx @@ -0,0 +1,301 @@ +--- +title: IBC-Go v5 to v6 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +## Chains + +The `ibc-go/v6` release introduces a new set of migrations for `27-interchain-accounts`. Ownership of ICS27 channel capabilities is transferred from ICS27 authentication modules and will now reside with the ICS27 controller submodule moving forward. + +For chains which contain a custom authentication module using the ICS27 controller submodule this requires a migration function to be included in the chain upgrade handler. 
A subsequent migration handler is run automatically, asserting the ownership of ICS27 channel capabilities has been transferred successfully.
+
+This migration is not required for chains which *do not* contain a custom authentication module using the ICS27 controller submodule.
+
+This migration facilitates the addition of the ICS27 controller submodule `MsgServer` which provides a standardised approach to integrating existing forms of authentication such as `x/gov` and `x/group` provided by the Cosmos SDK.
+
+For more information please refer to the ICS27 controller submodule documentation.
+
+### Upgrade proposal
+
+Please refer to [PR #2383](https://github.com/cosmos/ibc-go/pull/2383) for integrating the ICS27 channel capability migration logic or follow the steps outlined below:
+
+1. Add the upgrade migration logic to chain distribution. This may be, for example, maintained under a package `app/upgrades/v6`.
+
+```go expandable
+package v6
+
+import (
+	"github.com/cosmos/cosmos-sdk/codec"
+	storetypes "github.com/cosmos/cosmos-sdk/store/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/types/module"
+	capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper"
+	upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"
+
+	v6 "github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/migrations/v6"
+)
+
+const (
+	UpgradeName = "v6"
+)
+
+func CreateUpgradeHandler(
+	mm *module.Manager,
+	configurator module.Configurator,
+	cdc codec.BinaryCodec,
+	capabilityStoreKey *storetypes.KVStoreKey,
+	capabilityKeeper *capabilitykeeper.Keeper,
+	moduleName string,
+) upgradetypes.UpgradeHandler {
+	return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+		if err := v6.MigrateICS27ChannelCapability(ctx, cdc, capabilityStoreKey, capabilityKeeper, moduleName); err != nil {
+			return nil, err
+		}
+
+		return mm.RunMigrations(ctx, configurator, vm)
+	}
+}
+```
+
+2.
Set the upgrade handler in `app.go`. The `moduleName` parameter refers to the authentication module's `ScopedKeeper` name. This is the name provided upon instantiation in `app.go` via the [`x/capability` keeper `ScopeToModule(moduleName string)`](https://github.com/cosmos/cosmos-sdk/blob/v0.46.1/x/capability/keeper/keeper.go#L70) method. [See here for an example in `simapp`](https://github.com/cosmos/ibc-go/blob/v5.0.0/testing/simapp/app.go#L304). + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler( + v6.UpgradeName, + v6.CreateUpgradeHandler( + app.mm, + app.configurator, + app.appCodec, + app.keys[capabilitytypes.ModuleName], + app.CapabilityKeeper, + >>>> moduleName <<<<, + ), +) +``` + +## IBC Apps + +### ICS27 - Interchain Accounts + +#### Controller APIs + +In previous releases of ibc-go, chain developers integrating the ICS27 interchain accounts controller functionality were expected to create a custom `Base Application` referred to as an authentication module, see the section [Building an authentication module](/docs/ibc/v6.3.x/apps/interchain-accounts/auth-modules) from the documentation. + +The `Base Application` was intended to be composed with the ICS27 controller submodule `Keeper` and facilitate many forms of message authentication depending on a chain's particular use case. + +Prior to ibc-go v6 the controller submodule exposed only these two functions (to which we will refer as the legacy APIs): + +* [`RegisterInterchainAccount`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/account.go#L19) +* [`SendTx`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/relay.go#L18) + +However, these functions have now been deprecated in favour of the new controller submodule `MsgServer` and will be removed in later releases. 
+
+Both APIs remain functional and maintain backwards compatibility in ibc-go v6; however, consumers of these APIs are now recommended to follow the message passing paradigm outlined in Cosmos SDK [ADR 031](/docs/sdk/next/documentation/legacy/adr-comprehensive#adr-031) and [ADR 033](/docs/sdk/next/documentation/legacy/adr-comprehensive#adr-033). This is facilitated by the Cosmos SDK [`MsgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/main/baseapp/msg_service_router.go#L17), and chain developers creating custom application logic can now omit the ICS27 controller submodule `Keeper` from their module and instead depend on message routing.
+
+Depending on the use case, developers of custom authentication modules face one of three scenarios:
+
+![auth-module-decision-tree.png](/docs/ibc/images/04-migrations/images/auth-module-decision-tree.png)
+
+**My authentication module needs to access IBC packet callbacks**
+
+Application developers that wish to consume IBC packet callbacks and react upon packet acknowledgements **must** continue using the controller submodule's legacy APIs. The authentication modules will not need a `ScopedKeeper` anymore, though, because the channel capability will be claimed by the controller submodule. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the authentication module's `ScopedKeeper` (`scopedICAAuthKeeper`) is not needed anymore and can be removed from the argument list of the keeper constructor function, as shown here:
+
+```diff
+app.ICAAuthKeeper = icaauthkeeper.NewKeeper(
+	appCodec,
+	keys[icaauthtypes.StoreKey],
+	app.ICAControllerKeeper,
+- scopedICAAuthKeeper,
+)
+```
+
+Please note that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above.
Therefore the authentication module's `ScopedKeeper` cannot be completely removed from the chain code until the migration has run. + +In the future, the use of the legacy APIs for accessing packet callbacks will be replaced by IBC Actor Callbacks (see [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) for more details) and it will also be possible to access them with the `MsgServiceRouter`. + +**My authentication module does not need access to IBC packet callbacks** + +The authentication module can migrate from using the legacy APIs and it can be composed instead with the `MsgServiceRouter`, so that the authentication module is able to pass messages to the controller submodule's `MsgServer` to register interchain accounts and send packets to the interchain account. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the ICS27 controller submodule keeper (`ICAControllerKeeper`) and authentication module scoped keeper (`scopedICAAuthKeeper`) are not needed anymore and can be replaced with the `MsgServiceRouter`, as shown here: + +```diff +app.ICAAuthKeeper = icaauthkeeper.NewKeeper( + appCodec, + keys[icaauthtypes.StoreKey], +- app.ICAControllerKeeper, +- scopedICAAuthKeeper, ++ app.MsgServiceRouter(), +) +``` + +In your authentication module you can route messages to the controller submodule's `MsgServer` instead of using the legacy APIs. 
For example, for registering an interchain account: + +```diff expandable +- if err := keeper.icaControllerKeeper.RegisterInterchainAccount( +- ctx, +- connectionID, +- owner.String(), +- version, +- ); err != nil { +- return err +- } ++ msg := controllertypes.NewMsgRegisterInterchainAccount( ++ connectionID, ++ owner.String(), ++ version, ++ ) ++ handler := keeper.msgRouter.Handler(msg) ++ res, err := handler(ctx, msg) ++ if err != nil { ++ return err ++ } +``` + +where `controllertypes` is an import alias for `"github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/types"`. + +In addition, in this use case the authentication module does not need to implement the `IBCModule` interface anymore. + +**I do not need a custom authentication module anymore** + +If your authentication module does not have any extra functionality compared to the default authentication module added in ibc-go v6 (the `MsgServer`), or if you can use a generic authentication module, such as the `x/auth`, `x/gov` or `x/group` modules from the Cosmos SDK (v0.46 and later), then you can remove your authentication module completely and use instead the gRPC endpoints of the `MsgServer` or the CLI added in ibc-go v6. + +Please remember that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above. + +#### Host params + +The ICS27 host submodule default params have been updated to include the `AllowAllHostMsgs` wildcard `*`. +This enables execution of any `sdk.Msg` type for ICS27 registered on the host chain `InterfaceRegistry`. + +```diff +/ AllowAllHostMsgs holds the string key that allows all message types on interchain accounts host module +const AllowAllHostMsgs = "*" + +... 
+
+/ DefaultParams is the default parameter configuration for the host submodule
+func DefaultParams() Params {
+- return NewParams(DefaultHostEnabled, nil)
++ return NewParams(DefaultHostEnabled, []string{AllowAllHostMsgs})
+}
+```
+
+#### API breaking changes
+
+`SerializeCosmosTx` takes in a `[]proto.Message` instead of `[]sdk.Message`. This allows for the serialization of proto messages without requiring the fulfillment of the `sdk.Msg` interface.
+
+The `27-interchain-accounts` genesis types have been moved to their own package: `modules/apps/27-interchain-accounts/genesis/types`.
+This change facilitates the addition of the ICS27 controller submodule `MsgServer` and avoids cyclic imports. This should have minimal disruption to chain developers integrating `27-interchain-accounts`.
+
+The ICS27 host submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper` now includes an additional parameter of type `ICS4Wrapper`.
+This provides the host submodule with the ability to correctly unwrap channel versions in the event of a channel reopening handshake.
+
+```diff
+func NewKeeper(
+	cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace,
+- channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper,
++ ics4Wrapper icatypes.ICS4Wrapper, channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper,
+	accountKeeper icatypes.AccountKeeper, scopedKeeper icatypes.ScopedKeeper, msgRouter icatypes.MessageRouter,
+) Keeper
+```
+
+### ICS29 - `NewKeeper` API change
+
+The `NewKeeper` function of ICS29 has been updated to remove the `paramSpace` parameter as it was unused.
+
+```diff
+func NewKeeper(
+- cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace,
+- ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper, portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper,
++ cdc codec.BinaryCodec, key storetypes.StoreKey,
++ ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper,
++ portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper,
+) Keeper {
+```
+
+### ICS20 - `SendTransfer` is no longer exported
+
+The `SendTransfer` function of ICS20 has been removed. IBC transfers should now be initiated with `MsgTransfer` and routed to the ICS20 `MsgServer`.
+
+See below for an example:
+
+```go
+if handler := msgRouter.Handler(msgTransfer); handler != nil {
+	if err := msgTransfer.ValidateBasic(); err != nil {
+		return nil, err
+	}
+
+	res, err := handler(ctx, msgTransfer)
+	if err != nil {
+		return nil, err
+	}
+}
+```
+
+### ICS04 - `SendPacket` API change
+
+The `SendPacket` API has been simplified:
+
+```diff expandable
+/ SendPacket is called by a module in order to send an IBC packet on a channel
+func (k Keeper) SendPacket(
+	ctx sdk.Context,
+	channelCap *capabilitytypes.Capability,
+- packet exported.PacketI,
+-) error {
++ sourcePort string,
++ sourceChannel string,
++ timeoutHeight clienttypes.Height,
++ timeoutTimestamp uint64,
++ data []byte,
++) (uint64, error) {
+```
+
+Callers no longer need to pass in a pre-constructed packet.
+The destination port/channel identifiers and the packet sequence will be determined by core IBC.
+`SendPacket` will return the packet sequence.
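The new calling convention can be illustrated with a small, self-contained sketch. Note that `channelKeeper` and `Height` below are our own stand-ins for illustration, not ibc-go types: the point is only that the caller passes source identifiers, timeout, and payload, and receives the assigned sequence back.

```go
package main

import "fmt"

// Height is a stand-in for clienttypes.Height, for illustration only.
type Height struct {
	RevisionNumber uint64
	RevisionHeight uint64
}

// channelKeeper mimics the shape of the simplified SendPacket API: the
// caller supplies only the source identifiers, timeout, and payload, and
// the keeper assigns and returns the packet sequence.
type channelKeeper struct {
	nextSequence map[string]uint64 // keyed by sourcePort/sourceChannel
}

func (k *channelKeeper) SendPacket(
	sourcePort string,
	sourceChannel string,
	timeoutHeight Height,
	timeoutTimestamp uint64,
	data []byte,
) (uint64, error) {
	key := sourcePort + "/" + sourceChannel
	k.nextSequence[key]++ // core IBC tracks the next send sequence per channel
	return k.nextSequence[key], nil
}

func main() {
	k := &channelKeeper{nextSequence: map[string]uint64{}}
	seq, err := k.SendPacket("transfer", "channel-0", Height{1, 1000}, 0, []byte("payload"))
	fmt.Println(seq, err) // the caller receives the assigned sequence
}
```

In real ibc-go code the returned sequence can be stored by the application if it needs to correlate acknowledgements with sent packets.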
+ +### IBC testing package + +The `SendPacket` API has been simplified: + +```diff expandable +/ SendPacket is called by a module in order to send an IBC packet on a channel +func (k Keeper) SendPacket( + ctx sdk.Context, + channelCap *capabilitytypes.Capability, +- packet exported.PacketI, +-) error { ++ sourcePort string, ++ sourceChannel string, ++ timeoutHeight clienttypes.Height, ++ timeoutTimestamp uint64, ++ data []byte, ++) (uint64, error) { +``` + +Callers no longer need to pass in a pre-constructed packet. `SendPacket` will return the packet sequence. + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/active-channels.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/active-channels.mdx new file mode 100644 index 00000000..11674060 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/interchain-accounts/active-channels.mdx @@ -0,0 +1,44 @@ +--- +title: Active Channels +description: The Interchain Accounts module uses either ORDERED or UNORDERED channels. +--- + +The Interchain Accounts module uses either [ORDERED or UNORDERED](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#ordering) channels. + +When using `ORDERED` channels, the order of transactions when sending packets from a controller to a host chain is maintained. + +When using `UNORDERED` channels, there is no guarantee that the order of transactions when sending packets from the controller to the host chain is maintained. Since ibc-go v7.5.0, the default ordering for new ICA channels is `UNORDERED`, if no ordering is specified in `MsgRegisterInterchainAccount` (previously the default ordering was `ORDERED`). + +> A limitation when using ORDERED channels is that when a packet times out the channel will be closed. 
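The difference between the two orderings, and the ORDERED-channel limitation noted above, can be sketched with a toy model. This is illustrative only and not ibc-go's actual channel state machine; all names below are our own:

```go
package main

import "fmt"

type order int

const (
	ORDERED order = iota
	UNORDERED
)

// channel is a toy model of ICS-004 ordering semantics.
type channel struct {
	ordering    order
	nextRecvSeq uint64
	received    map[uint64]bool
	closed      bool
}

// recvPacket applies the ordering rule: ORDERED channels must receive
// sequences strictly in order, UNORDERED channels accept any new sequence.
func (c *channel) recvPacket(seq uint64) error {
	if c.closed {
		return fmt.Errorf("channel closed")
	}
	switch c.ordering {
	case ORDERED:
		if seq != c.nextRecvSeq {
			return fmt.Errorf("expected sequence %d, got %d", c.nextRecvSeq, seq)
		}
		c.nextRecvSeq++
	case UNORDERED:
		if c.received[seq] {
			return fmt.Errorf("sequence %d already received", seq)
		}
		c.received[seq] = true
	}
	return nil
}

// timeoutPacket models the limitation quoted above: an ORDERED channel is
// closed when any packet times out, while an UNORDERED channel stays open.
func (c *channel) timeoutPacket() {
	if c.ordering == ORDERED {
		c.closed = true
	}
}

func main() {
	ord := &channel{ordering: ORDERED, nextRecvSeq: 1}
	fmt.Println(ord.recvPacket(2)) // out of order: rejected
	unord := &channel{ordering: UNORDERED, received: map[uint64]bool{}}
	fmt.Println(unord.recvPacket(2)) // any new sequence: accepted
}
```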
+
+In the case of a channel closing, a controller chain needs to be able to regain access to the interchain account registered on this channel. `Active Channels` enable this functionality.
+
+When an Interchain Account is registered using `MsgRegisterInterchainAccount`, a new channel is created on a particular port. During the `OnChanOpenAck` and `OnChanOpenConfirm` steps (on controller & host chain respectively) the `Active Channel` for this interchain account is stored in state.
+
+It is possible to create a new channel using the same controller chain portID if the previously set `Active Channel` is now in a `CLOSED` state. This channel creation can be initialized programmatically by sending a new `MsgChannelOpenInit` message like so:
+
+```go
+msg := channeltypes.NewMsgChannelOpenInit(
+	portID,
+	string(versionBytes),
+	channeltypes.ORDERED,
+	[]string{connectionID},
+	icatypes.HostPortID,
+	authtypes.NewModuleAddress(icatypes.ModuleName).String(),
+)
+
+handler := keeper.msgRouter.Handler(msg)
+
+res, err := handler(ctx, msg)
+if err != nil {
+	return err
+}
+```
+
+Alternatively, any relayer operator may initiate a new channel handshake for this interchain account once the previously set `Active Channel` is in a `CLOSED` state. This is done by initiating the channel handshake on the controller chain using the same portID associated with the interchain account in question.
+
+It is important to note that once a channel has been opened for a given interchain account, new channels cannot be opened for this account until the currently set `Active Channel` is set to `CLOSED`.
+
+## Future improvements
+
+Future versions of the ICS-27 protocol and the Interchain Accounts module will likely use a new channel type that provides ordering of packets without the channel closing in the event of a packet timing out, thus removing the need for `Active Channels` entirely.
+The following is a list of issues which will provide the infrastructure to make this possible:
+
+* [IBC Channel Upgrades](https://github.com/cosmos/ibc-go/issues/1599)
+* [Implement ORDERED\_ALLOW\_TIMEOUT logic in 04-channel](https://github.com/cosmos/ibc-go/issues/1661)
+* [Add ORDERED\_ALLOW\_TIMEOUT as supported ordering in 03-connection](https://github.com/cosmos/ibc-go/issues/1662)
+* [Allow ICA channels to be opened as ORDERED\_ALLOW\_TIMEOUT](https://github.com/cosmos/ibc-go/issues/1663)
diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/auth-modules.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/auth-modules.mdx
new file mode 100644
index 00000000..77aba31d
--- /dev/null
+++ b/docs/ibc/v7.8.x/apps/interchain-accounts/auth-modules.mdx
@@ -0,0 +1,21 @@
+---
+title: Authentication Modules
+---
+
+## Synopsis
+
+Authentication modules enable application developers to perform custom logic when interacting with the Interchain Accounts controller submodule's `MsgServer`.
+
+The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified. There may exist many different types of authentication which are desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic.
+
+In ibc-go, authentication modules can communicate with the controller submodule by passing messages through `baseapp`'s `MsgServiceRouter`. To implement an authentication module, the `IBCModule` interface need not be fulfilled; it is only required to fulfill Cosmos SDK's `AppModuleBasic` interface, just like any regular Cosmos SDK application module.
+
+The authentication module must:
+
+- Authenticate interchain account owners.
+- Track the associated interchain account address for an owner.
+- Send packets on behalf of an owner (after authentication).
+
+## Integration into `app.go` file
+
+To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/v7.8.x/apps/interchain-accounts/integration#example-integration).
diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/client.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/client.mdx
new file mode 100644
index 00000000..b90c3cea
--- /dev/null
+++ b/docs/ibc/v7.8.x/apps/interchain-accounts/client.mdx
@@ -0,0 +1,181 @@
+---
+title: Client
+description: >-
+  A user can query and interact with the Interchain Accounts module using the
+  CLI. Use the --help flag to discover the available commands:
+---
+
+## CLI
+
+A user can query and interact with the Interchain Accounts module using the CLI. Use the `--help` flag to discover the available commands:
+
+```shell
+simd query interchain-accounts --help
+```
+
+> Please note that this section does not document all the available commands, but only the ones that require extra documentation beyond what fits in the command-line help.
+
+### Controller
+
+A user can query and interact with the controller submodule.
+
+#### Query
+
+The `query` commands allow users to query the controller submodule.
+
+```shell
+simd query interchain-accounts controller --help
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the controller submodule.
+
+```shell
+simd tx interchain-accounts controller --help
+```
+
+#### `send-tx`
+
+The `send-tx` command allows users to send a transaction on the provided connection to be executed using an interchain account on the host chain.
+
+```shell
+simd tx interchain-accounts controller send-tx [connection-id] [path/to/packet_msg.json]
+```
+
+Example:
+
+```shell
+simd tx interchain-accounts controller send-tx connection-0 packet-data.json --from cosmos1..
+```
+
+See below for example contents of `packet-data.json`.
The CLI handler will unmarshal the following into `InterchainAccountPacketData` appropriately.
+
+```json
+{
+  "type": "TYPE_EXECUTE_TX",
+  "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw",
+  "memo": ""
+}
+```
+
+Note the `data` field is a base64 encoded byte string as per the tx encoding agreed upon during the channel handshake.
+
+A helper CLI is provided in the host submodule which can be used to generate the packet data JSON using the counterparty chain's binary. See the [`generate-packet-data` command](#generate-packet-data) for an example.
+
+### Host
+
+A user can query and interact with the host submodule.
+
+#### Query
+
+The `query` commands allow users to query the host submodule.
+
+```shell
+simd query interchain-accounts host --help
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the host submodule.
+
+```shell
+simd tx interchain-accounts host --help
+```
+
+##### `generate-packet-data`
+
+The `generate-packet-data` command allows users to generate protobuf or proto3 JSON encoded interchain accounts packet data for input message(s). The packet data can then be used with the controller submodule's [`send-tx` command](#send-tx). The `--encoding` flag can be used to specify the encoding format (value must be either `proto3` or `proto3json`); if not specified, the default will be `proto3`. The `--memo` flag can be used to include a memo string in the interchain accounts packet data.
+
+```shell
+simd tx interchain-accounts host generate-packet-data [message]
+```
+
+Example:
+
+```shell expandable
+simd tx interchain-accounts host generate-packet-data '[{
+  "@type":"/cosmos.bank.v1beta1.MsgSend",
+  "from_address":"cosmos15ccshhmp0gsx29qpqq6g4zmltnnvgmyu9ueuadh9y2nc5zj0szls5gtddz",
+  "to_address":"cosmos10h9stc5v6ntgeygf5xf945njqq5h32r53uquvw",
+  "amount": [
+    {
+      "denom": "stake",
+      "amount": "1000"
+    }
+  ]
+}]' --memo memo
+```
+
+The command accepts a single `sdk.Msg` or a list of `sdk.Msg`s that will be encoded into the output's `data` field.
+
+Example output:
+
+```json
+{
+  "type": "TYPE_EXECUTE_TX",
+  "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw",
+  "memo": "memo"
+}
+```
+
+## gRPC
+
+A user can query the interchain account module using gRPC endpoints.
+
+### Controller
+
+A user can query the controller submodule using gRPC endpoints.
+
+#### `InterchainAccount`
+
+The `InterchainAccount` endpoint allows users to query the controller submodule for the interchain account address for a given owner on a particular connection.
+
+```shell
+ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"owner":"cosmos1..","connection_id":"connection-0"}' \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount
+```
+
+#### `Params`
+
+The `Params` endpoint allows users to query the current controller submodule parameters.
+
+```shell
+ibc.applications.interchain_accounts.controller.v1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.controller.v1.Query/Params
+```
+
+### Host
+
+A user can query the host submodule using gRPC endpoints.
+
+#### `Params`
+
+The `Params` endpoint allows users to query the current host submodule parameters.
+
+```shell
+ibc.applications.interchain_accounts.host.v1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.host.v1.Query/Params
+```
diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/development.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/development.mdx
new file mode 100644
index 00000000..27395285
--- /dev/null
+++ b/docs/ibc/v7.8.x/apps/interchain-accounts/development.mdx
@@ -0,0 +1,34 @@
+---
+title: Development Use Cases
+---
+
+The initial version of Interchain Accounts allowed for the controller submodule to be extended by providing it with an underlying application which would handle all packet callbacks.
+That functionality is now being deprecated in favor of alternative approaches.
+This document will outline potential use cases and redirect each use case to the appropriate documentation.
+
+## Custom authentication
+
+Interchain accounts may be associated with alternative types of authentication relative to the traditional public/private key signing.
+If you wish to develop or use Interchain Accounts with a custom authentication module and do not need to execute custom logic on the packet callbacks, we recommend you use ibc-go v6 or greater and that your custom authentication module interacts with the controller submodule via the [`MsgServer`](/docs/ibc/v7.8.x/apps/interchain-accounts/messages).
+
+If you wish to consume and execute custom logic in the packet callbacks, then please read the section [Packet callbacks](#packet-callbacks) below.
+
+## Redirection to a smart contract
+
+It may be desirable to allow smart contracts to control an interchain account.
+To facilitate such an action, the controller submodule may be provided an underlying application which redirects to smart contract callers.
+An improved design has been suggested in [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) which performs this action via middleware.
+
+Implementers of this use case are recommended to follow the ADR 008 approach.
+The underlying application may continue to be used as a short term solution for ADR 008 and the [legacy API](/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/auth-modules) should continue to be utilized in such situations.
+
+## Packet callbacks
+
+If a developer requires access to packet callbacks for their use case, then they have the following options:
+
+1. Write a smart contract which is connected via an ADR 008 or equivalent IBC application (recommended).
+2. Use the controller's underlying application to implement packet callback logic.
+
+In the first case, the smart contract should use the [`MsgServer`](/docs/ibc/v7.8.x/apps/interchain-accounts/messages).
+
+In the second case, the underlying application should use the [legacy API](/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/keeper-api).
diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/integration.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/integration.mdx
new file mode 100644
index 00000000..fd88340e
--- /dev/null
+++ b/docs/ibc/v7.8.x/apps/interchain-accounts/integration.mdx
@@ -0,0 +1,202 @@
+---
+title: Integration
+---
+
+## Synopsis
+
+Learn how to integrate Interchain Accounts host and controller functionality into your chain. The following document only applies to Cosmos SDK chains.
+
+The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule which will be used.
+
+Chains that wish to support ICS-27 may elect to act as a host chain, a controller chain or both.
Disabling host or controller functionality may be done statically by excluding the host or controller submodule entirely from the `app.go` file or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules (both custom or generic, such as the `x/gov`, `x/group` or `x/auth` Cosmos SDK modules) can send messages to the controller submodule's [`MsgServer`](/docs/ibc/v7.8.x/apps/interchain-accounts/messages) to register interchain accounts and send packets to the interchain account. To accomplish this, the authentication module needs to be composed with `baseapp`'s `MsgServiceRouter`. + +![ica-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/images/ica-v6.png) + +> Please note that since ibc-go v7.5.0 it is mandatory to register the gRPC query router after the creation of the host submodule's keeper; otherwise, nodes will not start. The query router is used to execute on the host query messages encoded in the ICA packet data. Please check the sample integration code below for more details. + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... + +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. 
+type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +/ Create the scoped keepers for each submodule keeper and authentication keeper + scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) + scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) + scopedICAAuthKeeper := app.CapabilityKeeper.ScopeToModule(icaauthtypes.ModuleName) + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper.WithQueryRouter(app.GRPCQueryRouter()) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter()) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + 
icaControllerStack := icacontroller.NewIBCMiddleware(nil, app.ICAControllerKeeper)
+	icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper)
+
+/ Register host and authentication routes
+ibcRouter.
+	AddRoute(icacontrollertypes.SubModuleName, icaControllerStack).
+	AddRoute(icahosttypes.SubModuleName, icaHostIBCModule)
+...
+
+/ Register Interchain Accounts and authentication module AppModules
+app.moduleManager = module.NewManager(
+	...
+	icaModule,
+	icaAuthModule,
+)
+
+...
+
+/ Add Interchain Accounts to begin blocker logic
+app.moduleManager.SetOrderBeginBlockers(
+	...
+	icatypes.ModuleName,
+	...
+)
+
+/ Add Interchain Accounts to end blocker logic
+app.moduleManager.SetOrderEndBlockers(
+	...
+	icatypes.ModuleName,
+	...
+)
+
+/ Add Interchain Accounts module InitGenesis logic
+app.moduleManager.SetOrderInitGenesis(
+	...
+	icatypes.ModuleName,
+	...
+)
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) paramskeeper.Keeper {
+	...
+	paramsKeeper.Subspace(icahosttypes.SubModuleName)
+	paramsKeeper.Subspace(icacontrollertypes.SubModuleName)
+	...
+}
+```
+
+If no custom authentication module is needed and a generic Cosmos SDK authentication module can be used, then from the sample integration code above all references to `ICAAuthKeeper` and `icaAuthModule` can be removed. That is, the following code would not be needed:
+
+```go
+/ Create your Interchain Accounts authentication module
+app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter())
+
+/ ICA auth AppModule
+icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper)
+```
+
+### Using submodules exclusively
+
+As described above, the Interchain Accounts application module is structured to support the ability to exclusively enable controller or host functionality.
+This can be achieved by simply omitting either controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`. +Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v7.8.x/apps/interchain-accounts/parameters). + +The following snippets show basic examples of statically disabling submodules using `app.go`. + +#### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +#### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Optionally instantiate your custom authentication module if needed, or not otherwise +... + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(nil, app.ICAControllerKeeper) + +/ Register controller route +ibcRouter.AddRoute(icacontrollertypes.SubModuleName, icaControllerStack) +``` diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/auth-modules.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/auth-modules.mdx new file mode 100644 index 00000000..ee725f0f --- /dev/null +++ b/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/auth-modules.mdx @@ -0,0 +1,312 @@ +--- +title: Authentication Modules +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. 
+ +## Synopsis + +Authentication modules play the role of the `Base Application` as described in [ICS-30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware), and enable application developers to perform custom logic when working with the Interchain Accounts controller API. + +The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified. There may exist many different types of authentication which are desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic. + +In ibc-go, authentication modules are connected to the controller chain via a middleware stack. The controller submodule is implemented as [middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) and the authentication module is connected to the controller submodule as the base application of the middleware stack. To implement an authentication module, the `IBCModule` interface must be fulfilled. By implementing the controller submodule as middleware, any amount of authentication modules can be created and connected to the controller submodule without writing redundant code. + +The authentication module must: + +- Authenticate interchain account owners. +- Track the associated interchain account address for an owner. +- Send packets on behalf of an owner (after authentication). + +> Please note that since ibc-go v6 the channel capability is claimed by the controller submodule and therefore it is not required for authentication modules to claim the capability in the `OnChanOpenInit` callback. 
When the authentication module sends packets on the channel created for the associated interchain account it can pass a `nil` capability to the legacy function `SendTx` of the controller keeper (see section [`SendTx`](/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/keeper-api#sendtx) for more information). + +## `IBCModule` implementation + +The following `IBCModule` callbacks must be implemented with appropriate custom logic: + +```go expandable +/ OnChanOpenInit implements the IBCModule interface +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / since ibc-go v6 the authentication module *must not* claim the channel capability on OnChanOpenInit + + / perform custom logic + + return version, nil +} + +/ OnChanOpenAck implements the IBCModule interface +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + / perform custom logic + + return nil +} + +/ OnChanCloseConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / perform custom logic + + return nil +} + +/ OnAcknowledgementPacket implements the IBCModule interface +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} + +/ OnTimeoutPacket implements the IBCModule interface. 
+func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} +``` + +The following functions must be defined to fulfill the `IBCModule` interface, but they will never be called by the controller submodule so they may error or panic. That is because in Interchain Accounts, the channel handshake is always initiated on the controller chain and packets are always sent to the host chain and never to the controller chain. + +```go expandable +/ OnChanOpenTry implements the IBCModule interface +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + panic("UNIMPLEMENTED") +} + +/ OnChanOpenConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnChanCloseInit implements the IBCModule interface +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnRecvPacket implements the IBCModule interface. A successful acknowledgement +/ is returned if the packet data is successfully decoded and the receive application +/ logic returns without error. +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + panic("UNIMPLEMENTED") +} +``` + +## `OnAcknowledgementPacket` + +Controller chains will be able to access the acknowledgement written into the host chain state once a relayer relays the acknowledgement. +The acknowledgement bytes contain either the response of the execution of the message(s) on the host chain or an error. 
They will be passed to the auth module via the `OnAcknowledgementPacket` callback. Auth modules are expected to know how to decode the acknowledgement. + +If the controller chain is connected to a host chain using the host module on ibc-go, it may interpret the acknowledgement bytes as follows: + +Begin by unmarshaling the acknowledgement into `sdk.TxMsgData`: + +```go +var ack channeltypes.Acknowledgement + if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil { + return err +} + txMsgData := &sdk.TxMsgData{ +} + if err := proto.Unmarshal(ack.GetResult(), txMsgData); err != nil { + return err +} +``` + +If the `txMsgData.Data` field is non-nil, the host chain is using SDK version `<=` v0.45. +The auth module should interpret the `txMsgData.Data` as follows: + +```go expandable +switch len(txMsgData.Data) { + case 0: + / see documentation below for SDK 0.46.x or greater +default: + for _, msgData := range txMsgData.Data { + if err := handler(msgData); err != nil { + return err +} + +} +... +} +``` + +A handler will be needed to interpret what actions to perform based on the message type sent. +A router could be used, or more simply a switch statement.
+ +```go expandable +func handler(msgData sdk.MsgData) + +error { + switch msgData.MsgType { + case sdk.MsgTypeURL(&banktypes.MsgSend{ +}): + msgResponse := &banktypes.MsgSendResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleBankSendMsg(msgResponse) + case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{ +}): + msgResponse := &stakingtypes.MsgDelegateResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleStakingDelegateMsg(msgResponse) + case sdk.MsgTypeURL(&transfertypes.MsgTransfer{ +}): + msgResponse := &transfertypes.MsgTransferResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleIBCTransferMsg(msgResponse) + +default: + return nil +} + +return nil +} +``` + +If the `txMsgData.Data` is empty, the host chain is using SDK version > v0.45. +The auth module should interpret the `txMsgData.MsgResponses` as follows: + +```go +... +/ switch statement from above + case 0: + for _, any := range txMsgData.MsgResponses { + if err := handleAny(any); err != nil { + return err +} + +} +} +``` + +A handler will be needed to interpret what actions to perform based on the type URL of the Any. +A router could be used, or more simply a switch statement. +It may be possible to deduplicate logic between `handler` and `handleAny`.
+ +```go expandable +func handleAny(any *codectypes.Any) + +error { + switch any.TypeURL { + case sdk.MsgTypeURL(&banktypes.MsgSendResponse{ +}): + msgResponse, err := unpackBankMsgSendResponse(any) + if err != nil { + return err +} + +handleBankSendMsg(msgResponse) + case sdk.MsgTypeURL(&stakingtypes.MsgDelegateResponse{ +}): + msgResponse, err := unpackStakingDelegateResponse(any) + if err != nil { + return err +} + +handleStakingDelegateMsg(msgResponse) + case sdk.MsgTypeURL(&transfertypes.MsgTransferResponse{ +}): + msgResponse, err := unpackIBCTransferMsgResponse(any) + if err != nil { + return err +} + +handleIBCTransferMsg(msgResponse) + +default: + return nil +} + +return nil +} +``` + +## Integration into `app.go` file + +To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/integration#example-integration). diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/integration.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/integration.mdx new file mode 100644 index 00000000..ddcbfda4 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/integration.mdx @@ -0,0 +1,205 @@ +--- +title: Integration +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. + +## Synopsis + +Learn how to integrate Interchain Accounts host and controller functionality into your chain. The following document only applies to Cosmos SDK chains. + +The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule that will be used. + +Chains that wish to support ICS-27 may elect to act as a host chain, a controller chain, or both.
Disabling host or controller functionality may be done statically by excluding the host or controller module entirely from the `app.go` file or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules are the base application of a middleware stack. The controller submodule is the middleware in this stack. + +![ica-pre-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/10-legacy/images/ica-pre-v6.png) + +> Please note that since ibc-go v6 the channel capability is claimed by the controller submodule and therefore it is not required for authentication modules to claim the capability in the `OnChanOpenInit` callback. Therefore the custom authentication module does not need a scoped keeper anymore. +> Please note that since ibc-go v7.5.0 it is mandatory to register the gRPC query router after the creation of the host submodule's keeper; otherwise, nodes will not start. The query router is used to execute on the host query messages encoded in the ICA packet data. Please check the sample integration code below for more details. + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... 
+ +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +/ Create the scoped keepers for each submodule keeper and authentication keeper + scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) + scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper.WithQueryRouter(app.GRPCQueryRouter()) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + +/ ICA auth AppModule + icaAuthModule := 
icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ ICA auth IBC Module + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostIBCModule). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack + +... + +/ Register Interchain Accounts and authentication module AppModule's +app.moduleManager = module.NewManager( + ... + icaModule, + icaAuthModule, +) + +... + +/ Add fee middleware to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add fee middleware to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add Interchain Accounts module InitGenesis logic +app.moduleManager.SetOrderInitGenesis( + ... + icatypes.ModuleName, + ... +) + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) + +paramskeeper.Keeper { + ... + paramsKeeper.Subspace(icahosttypes.SubModuleName) + +paramsKeeper.Subspace(icacontrollertypes.SubModuleName) + ... +``` + +## Using submodules exclusively + +As described above, the Interchain Accounts application module is structured to support the ability of exclusively enabling controller or host functionality. +This can be achieved by simply omitting either controller or host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`. 
+Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v7.8.x/apps/interchain-accounts/parameters). + +The following snippets show basic examples of statically disabling submodules using `app.go`. + +### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Create your Interchain Accounts authentication module, setting up the Keeper, AppModule and IBCModule appropriately +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + +/ Register controller and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack +``` diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/keeper-api.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/keeper-api.mdx new file mode 100644 index 00000000..ab95c6f2 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/keeper-api.mdx @@ -0,0 +1,126 @@ +--- +title: Keeper API +description: This document is deprecated and will be removed in future releases. 
+--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. + +The controller submodule keeper exposes two legacy functions that allow custom authentication modules, respectively, to register interchain accounts and to send packets to the interchain account. + +## `RegisterInterchainAccount` + +The authentication module can begin registering interchain accounts by calling `RegisterInterchainAccount`: + +```go +if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, connectionID, owner.String(), version); err != nil { + return err +} + +return nil +``` + +The `version` argument is used to support ICS-29 fee middleware for relayer incentivization of ICS-27 packets. Consumers of `RegisterInterchainAccount` are expected to build the appropriate JSON encoded version string themselves and pass it accordingly. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that the channel handshake can proceed.
+ +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS-29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS-29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(feeEnabledVersion)); err != nil { + return err +} +``` + +> Since ibc-go v7.5.0 the default ordering of new ICA channels created when invoking `RegisterInterchainAccount` has changed from `ORDERED` to `UNORDERED`. If this default behaviour does not meet your use case, please use the function `RegisterInterchainAccountWithOrdering` (available since ibc-go v7.5.0), which takes an extra parameter that can be used to specify the ordering of the channel. 
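As a plain-Go illustration of the shape of such a version string, the sketch below marshals a stand-in struct with `encoding/json`. The `icaMetadata` struct, its JSON field names, and the literal values (`ics27-1`, `proto3`, `sdk_multi_msg`) are assumptions written for this example to mirror the `icatypes.Metadata` fields shown above; a real integration should marshal `icatypes.Metadata` with the module codec as in the snippets above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// icaMetadata mirrors the ICS-27 Metadata fields for illustration only;
// it is not the canonical icatypes.Metadata type.
type icaMetadata struct {
	Version                string `json:"version"`
	ControllerConnectionID string `json:"controller_connection_id"`
	HostConnectionID       string `json:"host_connection_id"`
	Encoding               string `json:"encoding"`
	TxType                 string `json:"tx_type"`
}

// appVersionJSON builds a JSON app version string for the given connection pair.
func appVersionJSON(controllerConnectionID, hostConnectionID string) (string, error) {
	m := icaMetadata{
		Version:                "ics27-1",
		ControllerConnectionID: controllerConnectionID,
		HostConnectionID:       hostConnectionID,
		Encoding:               "proto3",
		TxType:                 "sdk_multi_msg",
	}
	bz, err := json.Marshal(m)
	if err != nil {
		return "", err
	}
	return string(bz), nil
}

func main() {
	v, err := appVersionJSON("connection-0", "connection-1")
	if err != nil {
		panic(err)
	}
	fmt.Println(v)
}
```

The resulting string is what would be passed as the `version` argument of `RegisterInterchainAccount` (or nested as `AppVersion` inside an ICS-29 `Metadata` for fee-enabled channels).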
+ +## `SendTx` + +The authentication module can attempt to send a packet by calling `SendTx`: + +```go expandable +/ Authenticate owner +/ perform custom logic + +/ Construct controller portID based on interchain account owner address +portID, err := icatypes.NewControllerPortID(owner.String()) + if err != nil { + return err +} + +/ Obtain data to be sent to the host chain. +/ In this example, the owner of the interchain account would like to send a bank MsgSend to the host chain. +/ The appropriate serialization function should be called. The host chain must be able to deserialize the transaction. +/ If the host chain is using the ibc-go host module, `SerializeCosmosTx` should be used. + msg := &banktypes.MsgSend{ + FromAddress: fromAddr, + ToAddress: toAddr, + Amount: amt +} + +data, err := icatypes.SerializeCosmosTx(keeper.cdc, []proto.Message{ + msg +}) + if err != nil { + return err +} + +/ Construct packet data + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: data, +} + +/ Obtain timeout timestamp +/ An appropriate timeout timestamp must be determined based on the usage of the interchain account. +/ If the packet times out, the channel will be closed requiring a new channel to be created. + timeoutTimestamp := obtainTimeoutTimestamp() + +/ Send the interchain accounts packet, returning the packet sequence +/ A nil channel capability can be passed, since the controller submodule (and not the authentication module) +/ claims the channel capability since ibc-go v6. +seq, err = keeper.icaControllerKeeper.SendTx(ctx, nil, portID, packetData, timeoutTimestamp) +``` + +The data within an `InterchainAccountPacketData` must be serialized using a format supported by the host chain. +If the host chain is using the ibc-go host chain submodule, `SerializeCosmosTx` should be used. If the `InterchainAccountPacketData.Data` is serialized using a format not supported by the host chain, the packet will not be successfully received. 
diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/messages.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/messages.mdx new file mode 100644 index 00000000..79ae3299 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/interchain-accounts/messages.mdx @@ -0,0 +1,142 @@ +--- +title: Messages +description: >- + An Interchain Accounts channel handshake can be initiated using + MsgRegisterInterchainAccount: +--- + +## `MsgRegisterInterchainAccount` + +An Interchain Accounts channel handshake can be initiated using `MsgRegisterInterchainAccount`: + +```go +type MsgRegisterInterchainAccount struct { + Owner string + ConnectionID string + Version string + Ordering channeltypes.Order +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string or contains more than 2048 bytes. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). + +This message will construct a new `MsgChannelOpenInit` on chain and route it to the core IBC message server to initiate the opening step of the channel handshake. + +The controller submodule will generate a new port identifier and claim the associated port capability. The caller is expected to provide an appropriate application version string. For example, this may be an ICS-27 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L11) type or an ICS-29 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0/proto/ibc/applications/fee/v1/metadata.proto#L11) type with a nested application version. +If the `Version` string is omitted, the controller submodule will construct a default version string in the `OnChanOpenInit` handshake callback. + +```go +type MsgRegisterInterchainAccountResponse struct { + ChannelID string + PortID string +} +``` + +The `ChannelID` and `PortID` are returned in the message response. 
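The stated failure conditions on `Owner` can be illustrated with a small stand-alone check. `validateOwner` and the `maxOwnerLength` constant below are written for this sketch only; they mirror the documented rules, not the module's actual validation code.

```go
package main

import (
	"errors"
	"fmt"
)

// maxOwnerLength mirrors the documented 2048-byte limit on the Owner field.
const maxOwnerLength = 2048

// validateOwner sketches the documented failure conditions for
// MsgRegisterInterchainAccount: an empty owner, or one longer than
// 2048 bytes, is rejected.
func validateOwner(owner string) error {
	if owner == "" {
		return errors.New("owner address cannot be empty")
	}
	if len(owner) > maxOwnerLength {
		return errors.New("owner address exceeds maximum length")
	}
	return nil
}

func main() {
	fmt.Println(validateOwner("cosmos1exampleowner") == nil) // a well-formed owner passes
	fmt.Println(validateOwner("") == nil)                    // an empty owner fails
}
```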
+ +## `MsgSendTx` + +An Interchain Accounts transaction can be executed on a remote host chain by sending a `MsgSendTx` from the corresponding controller chain: + +```go +type MsgSendTx struct { + Owner string + ConnectionID string + PacketData InterchainAccountPacketData + RelativeTimeout uint64 +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string or contains more than 2048 bytes. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `PacketData` contains an `UNSPECIFIED` type enum, the length of the `Data` bytes is zero, or the `Memo` field exceeds 256 characters in length. +* `RelativeTimeout` is zero. + +This message will create a new IBC packet with the provided `PacketData` and send it via the channel associated with the `Owner` and `ConnectionID`. +The `PacketData` is expected to contain a list of serialized `[]sdk.Msg` in the form of `CosmosTx`. Please note the signer field of each `sdk.Msg` must be the interchain account address. +When the packet is relayed to the host chain, the `PacketData` is unmarshalled and the messages are authenticated and executed. + +```go +type MsgSendTxResponse struct { + Sequence uint64 +} +``` + +The packet `Sequence` is returned in the message response. + +### Queries + +It is possible to use [`MsgModuleQuerySafe`](https://github.com/cosmos/ibc-go/blob/v7.5.0/proto/ibc/applications/interchain_accounts/host/v1/tx.proto#L32-L39) to execute a list of queries on the host chain. This message can be included in the list of encoded `sdk.Msg`s of `InterchainAccountPacketData`. The host chain will return the responses for all the queries in the acknowledgement.
Please note that only module safe queries can be executed ([deterministic queries that are safe to be called from within the state machine](https://docs.cosmos.network/main/build/building-modules/query-services#calling-queries-from-the-state-machine)). + +The queries available from Cosmos SDK are: + +```plaintext expandable +/cosmos.staking.v1beta1.Query/Validators +/cosmos.staking.v1beta1.Query/Validator +/cosmos.staking.v1beta1.Query/ValidatorDelegations +/cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations +/cosmos.staking.v1beta1.Query/Delegation +/cosmos.staking.v1beta1.Query/UnbondingDelegation +/cosmos.staking.v1beta1.Query/DelegatorDelegations +/cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +/cosmos.staking.v1beta1.Query/Redelegations +/cosmos.staking.v1beta1.Query/DelegatorValidators +/cosmos.staking.v1beta1.Query/DelegatorValidator +/cosmos.staking.v1beta1.Query/HistoricalInfo +/cosmos.staking.v1beta1.Query/Pool +/cosmos.staking.v1beta1.Query/Params +/cosmos.bank.v1beta1.Query/Balance +/cosmos.bank.v1beta1.Query/AllBalances +/cosmos.bank.v1beta1.Query/SpendableBalances +/cosmos.bank.v1beta1.Query/SpendableBalanceByDenom +/cosmos.bank.v1beta1.Query/TotalSupply +/cosmos.bank.v1beta1.Query/SupplyOf +/cosmos.bank.v1beta1.Query/Params +/cosmos.bank.v1beta1.Query/DenomMetadata +/cosmos.bank.v1beta1.Query/DenomsMetadata +/cosmos.bank.v1beta1.Query/DenomOwners +/cosmos.bank.v1beta1.Query/SendEnabled +/cosmos.auth.v1beta1.Query/Accounts +/cosmos.auth.v1beta1.Query/Account +/cosmos.auth.v1beta1.Query/AccountAddressByID +/cosmos.auth.v1beta1.Query/Params +/cosmos.auth.v1beta1.Query/ModuleAccounts +/cosmos.auth.v1beta1.Query/ModuleAccountByName +/cosmos.auth.v1beta1.Query/AccountInfo +``` + +The following code block shows an example of how `MsgModuleQuerySafe` can be used to query the account balance of an account on the host chain. The resulting packet data variable is used to set the `PacketData` of `MsgSendTx`.
+ +```go expandable +balanceQuery := banktypes.NewQueryBalanceRequest("cosmos1...", "uatom") + +queryBz, err := balanceQuery.Marshal() + +/ signer of message must be the interchain account on the host + queryMsg := icahosttypes.NewMsgModuleQuerySafe("cosmos2...", []*icahosttypes.QueryRequest{ + { + Path: "/cosmos.bank.v1beta1.Query/Balance", + Data: queryBz, +}, +}) + +bz, err := icatypes.SerializeCosmosTx(cdc, []proto.Message{ + queryMsg +}, icatypes.EncodingProtobuf) + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: bz, + Memo: "", +} +``` + +## Atomicity + +As the Interchain Accounts module supports the execution of multiple transactions using the Cosmos SDK `Msg` interface, it provides the same atomicity guarantees as Cosmos SDK-based applications, leveraging the [`CacheMultiStore`](https://docs.cosmos.network/main/learn/advanced/store#cachemultistore) architecture provided by the [`Context`](https://docs.cosmos.network/main/learn/advanced/context.html) type. + +This provides atomic execution of transactions when using Interchain Accounts, where state changes are only committed if all `Msg`s succeed. diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/overview.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/overview.mdx new file mode 100644 index 00000000..e60a3654 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/interchain-accounts/overview.mdx @@ -0,0 +1,33 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the Interchain Accounts module is + +## What is the Interchain Accounts module? + +Interchain Accounts is the Cosmos SDK implementation of the ICS-27 protocol, which enables cross-chain account management built upon IBC. + +- How does an interchain account differ from a regular account? + +Regular accounts use a private key to sign transactions. Interchain Accounts are instead controlled programmatically by counterparty chains via IBC packets. 
+ +## Concepts + +`Host Chain`: The chain where the interchain account is registered. The host chain listens for IBC packets from a controller chain, which should contain instructions (e.g. Cosmos SDK messages) that the interchain account will execute. + +`Controller Chain`: The chain registering and controlling an account on a host chain. The controller chain sends IBC packets to the host chain to control the account. + +`Interchain Account`: An account on a host chain created using the ICS-27 protocol. An interchain account has all the capabilities of a normal account. However, rather than signing transactions with a private key, a controller chain will send IBC packets to the host chain signaling which transactions the interchain account should execute. + +`Authentication Module`: A custom application module on the controller chain that uses the Interchain Accounts module to build custom logic for the creation & management of interchain accounts. It can be either an IBC application module using the [legacy API](/docs/ibc/v7.8.x/apps/interchain-accounts/legacy/keeper-api), or a regular Cosmos SDK application module sending messages to the controller submodule's `MsgServer` (this is the recommended approach from ibc-go v6 if access to packet callbacks is not needed). Please note that the legacy API will eventually be removed and IBC applications will not be able to use it in later releases. + +## SDK security model + +SDK modules on a chain are assumed to be trustworthy. For example, there are no checks to prevent an untrustworthy module from accessing the bank keeper. + +The implementation of ICS-27 in ibc-go uses this assumption in its security considerations. + +The implementation assumes other IBC application modules will not bind to ports within the ICS-27 namespace.
diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/parameters.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/parameters.mdx new file mode 100644 index 00000000..089fda71 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/interchain-accounts/parameters.mdx @@ -0,0 +1,63 @@ +--- +title: Parameters +description: >- + The Interchain Accounts module contains the following on-chain parameters, + logically separated for each distinct submodule: +--- + +The Interchain Accounts module contains the following on-chain parameters, logically separated for each distinct submodule: + +## Controller Submodule Parameters + +| Key | Type | Default Value | | ------------------- | ---- | ------------- | | `ControllerEnabled` | bool | `true` | + +### ControllerEnabled + +The `ControllerEnabled` parameter controls a chain's ability to service ICS-27 controller-specific logic. This includes the sending of Interchain Accounts packet data as well as the following ICS-26 callback handlers: + +* `OnChanOpenInit` +* `OnChanOpenAck` +* `OnChanCloseConfirm` +* `OnAcknowledgementPacket` +* `OnTimeoutPacket` + +## Host Submodule Parameters + +| Key | Type | Default Value | | --------------- | --------- | ------------- | | `HostEnabled` | bool | `true` | | `AllowMessages` | \[]string | `["*"]` | + +### HostEnabled + +The `HostEnabled` parameter controls a chain's ability to service ICS-27 host-specific logic. This includes the following ICS-26 callback handlers: + +* `OnChanOpenTry` +* `OnChanOpenConfirm` +* `OnChanCloseConfirm` +* `OnRecvPacket` + +### AllowMessages + +The `AllowMessages` parameter provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute by defining an allowlist using the Protobuf message type URL format.
+ +For example, a Cosmos SDK-based chain that elects to provide hosted Interchain Accounts with the ability to vote on governance proposals and delegate stake will define its parameters as follows: + +```json +"params": { + "host_enabled": true, + "allow_messages": ["/cosmos.staking.v1beta1.MsgDelegate", + "/cosmos.gov.v1beta1.MsgVote"] +} +``` + +There is also a special wildcard `"*"` value which allows any type of message to be executed by the interchain account. This must be the only value in the `allow_messages` array. + +```json +"params": { + "host_enabled": true, + "allow_messages": ["*"] +} +``` diff --git a/docs/ibc/v7.8.x/apps/interchain-accounts/tx-encoding.mdx b/docs/ibc/v7.8.x/apps/interchain-accounts/tx-encoding.mdx new file mode 100644 index 00000000..f2b9ca97 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/interchain-accounts/tx-encoding.mdx @@ -0,0 +1,57 @@ +--- +title: Transaction Encoding +description: >- + When orchestrating an interchain account transaction, which comprises multiple + sdk.Msg objects represented as Any types, the transactions must be encoded as + bytes within InterchainAccountPacketData. +--- + +When orchestrating an interchain account transaction, which comprises multiple `sdk.Msg` objects represented as `Any` types, the transactions must be encoded as bytes within [`InterchainAccountPacketData`](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/packet.proto#L21-L26). + +```protobuf +/ InterchainAccountPacketData is comprised of a raw transaction, type of transaction and optional memo field. +message InterchainAccountPacketData { + Type type = 1; + bytes data = 2; + string memo = 3; +} +``` + +The `data` field must be encoded as a [`CosmosTx`](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/packet.proto#L28-L31). + +```protobuf +/ CosmosTx contains a list of sdk.Msg's. It should be used when sending transactions to an SDK host chain.
+message CosmosTx { + repeated google.protobuf.Any messages = 1; +} +``` + +The encoding method for `CosmosTx` is determined during the channel handshake process. If the channel version [metadata's `encoding` field](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L22) is marked as `proto3`, then `CosmosTx` undergoes protobuf encoding. Conversely, if the field is set to `proto3json`, then [proto3 json](https://protobuf.dev/programming-guides/proto3/#json) encoding takes place, which generates a JSON representation of the protobuf message. + +## Protobuf Encoding + +Protobuf encoding serves as the standard encoding process for `CosmosTx`. This occurs if the channel handshake initiates with an empty channel version metadata or if the `encoding` field explicitly denotes `proto3`. In Golang, the protobuf encoding procedure utilizes the `proto.Marshal` function. Every protobuf autogenerated Golang type comes equipped with a `Marshal` method that can be employed to encode the message. + +## (Protobuf) JSON Encoding + +The proto3 JSON encoding presents an alternative encoding technique for `CosmosTx`. It is selected if the channel handshake begins with the channel version metadata `encoding` field labeled as `proto3json`. In Golang, the Proto3 canonical encoding in JSON is implemented by the `"github.com/cosmos/gogoproto/jsonpb"` package. Within Cosmos SDK, the `ProtoCodec` structure implements the `JSONCodec` interface, leveraging the `jsonpb` package. This method generates a JSON format as follows: + +```json expandable +{ + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1...", + "to_address": "cosmos1...", + "amount": [ + { + "denom": "uatom", + "amount": "1000000" + } + ] + } + ] +} +``` + +Here, the `"messages"` array is populated with transactions. 
Each transaction is represented as a JSON object with the `@type` field denoting the transaction type and the remaining fields representing the transaction's attributes. diff --git a/docs/ibc/v7.8.x/apps/transfer/authorizations.mdx b/docs/ibc/v7.8.x/apps/transfer/authorizations.mdx new file mode 100644 index 00000000..e262ba53 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/authorizations.mdx @@ -0,0 +1,50 @@ +--- +title: Authorizations +--- + +`TransferAuthorization` implements the `Authorization` interface for `ibc.applications.transfer.v1.MsgTransfer`. It allows a granter to grant a grantee the privilege to submit `MsgTransfer` on its behalf. Please see the [Cosmos SDK docs](https://docs.cosmos.network/v0.47/modules/authz) for more details on granting privileges via the `x/authz` module. + +More specifically, the granter allows the grantee to transfer funds that belong to the granter over a specified channel. + +For the specified channel, the granter must specify a spend limit for each denomination they wish to allow the grantee to transfer. + +The granter may also specify a list of addresses that are allowed to receive funds. If empty, then all addresses are allowed. + +It takes: + +* a `SourcePort` and a `SourceChannel` which together comprise the unique transfer channel identifier over which authorized funds can be transferred. + +* a `SpendLimit` that specifies the maximum amount of tokens the grantee can transfer. The `SpendLimit` is updated as the tokens are transferred, unless the sentinel value of the maximum value for a 256-bit unsigned integer (i.e. 2^256 - 1) is used for the amount, in which case the `SpendLimit` will not be updated (please be aware that using this sentinel value will grant the grantee the privilege to transfer **all** the tokens of a given denomination available at the granter's account).
The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used. This `SpendLimit` may also be updated to increase or decrease the limit as the granter wishes. + +* an `AllowList` list that specifies the list of addresses that are allowed to receive funds. If this list is empty, then all addresses are allowed to receive funds from the `TransferAuthorization`. + +* an `AllowedPacketData` list that specifies the list of memo strings that are allowed to be included in the memo field of the packet. If this list is empty, then only an empty memo is allowed (a `memo` field with non-empty content will be denied). If this list includes a single element equal to `"*"`, then any content in the `memo` field will be allowed. + +Setting a `TransferAuthorization` is expected to fail if: + +* the spend limit is nil +* the denomination of the spend limit is an invalid coin type +* the source port ID is invalid +* the source channel ID is invalid +* there are duplicate entries in the `AllowList` + +Below is the `TransferAuthorization` message: + +```go expandable +func NewTransferAuthorization(allocations ...Allocation) *TransferAuthorization { + return &TransferAuthorization{ + Allocations: allocations, +} +} + +type Allocation struct { + / the port on which the packet will be sent + SourcePort string + / the channel by which the packet will be sent + SourceChannel string + / spend limitation on the channel + SpendLimit sdk.Coins + / allow list of receivers, an empty allow list permits any receiver address + AllowList []string +} +``` diff --git a/docs/ibc/v7.8.x/apps/transfer/client.mdx b/docs/ibc/v7.8.x/apps/transfer/client.mdx new file mode 100644 index 00000000..5137789c --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/client.mdx @@ -0,0 +1,67 @@ +--- +title: Client +description: >- + A user can query and interact with the transfer module using the CLI. 
Use the + --help flag to discover the available commands: +--- + +## CLI + +A user can query and interact with the `transfer` module using the CLI. Use the `--help` flag to discover the available commands: + +### Query + +The `query` commands allow users to query `transfer` state. + +```shell +simd query ibc-transfer --help +``` + +#### `total-escrow` + +The `total-escrow` command allows users to query the total amount in escrow for a particular coin denomination regardless of the transfer channel from where the coins were sent out. + +```shell +simd query ibc-transfer total-escrow [denom] [flags] +``` + +Example: + +```shell +simd query ibc-transfer total-escrow samoleans +``` + +Example Output: + +```shell +amount: "100" +``` + +## gRPC + +A user can query the `transfer` module using gRPC endpoints. + +### `TotalEscrowForDenom` + +The `TotalEscrowForDenom` endpoint allows users to query the total amount in escrow for a particular coin denomination regardless of the transfer channel from where the coins were sent out. 
+ +```shell +ibc.applications.transfer.v1.Query/TotalEscrowForDenom +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"samoleans"}' \ + localhost:9090 \ + ibc.applications.transfer.v1.Query/TotalEscrowForDenom +``` + +Example output: + +```shell +{ + "amount": "100" +} +``` diff --git a/docs/ibc/v7.8.x/apps/transfer/events.mdx b/docs/ibc/v7.8.x/apps/transfer/events.mdx new file mode 100644 index 00000000..799424b6 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/events.mdx @@ -0,0 +1,52 @@ +--- +title: Events +--- + +## `MsgTransfer` + +| Type | Attribute Key | Attribute Value | +| ------------- | ------------- | --------------- | +| ibc\_transfer | sender | `{sender}` | +| ibc\_transfer | receiver | `{receiver}` | +| ibc\_transfer | amount | `{amount}` | +| ibc\_transfer | denom | `{denom}` | +| ibc\_transfer | memo | `{memo}` | +| message | module | transfer | + +## `OnRecvPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | success | `{ackSuccess}` | +| fungible\_token\_packet | error | `{ackError}` | +| denomination\_trace | trace\_hash | `{hex\_hash}` | +| denomination\_trace | denom | `{voucherDenom}` | + +## `OnAcknowledgePacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | 
+| fungible\_token\_packet | acknowledgement | `{ack.String()}` | +| fungible\_token\_packet | success / error | `{ack.Response}` | + +## `OnTimeoutPacket` callback + +| Type | Attribute Key | Attribute Value | | ----------------------- | ---------------- | --------------- | | fungible\_token\_packet | module | transfer | | fungible\_token\_packet | refund\_receiver | `{receiver}` | | fungible\_token\_packet | denom | `{denom}` | | fungible\_token\_packet | amount | `{amount}` | | fungible\_token\_packet | memo | `{memo}` | diff --git a/docs/ibc/v7.8.x/apps/transfer/messages.mdx b/docs/ibc/v7.8.x/apps/transfer/messages.mdx new file mode 100644 index 00000000..189efaa1 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/messages.mdx @@ -0,0 +1,57 @@ +--- +title: Messages +description: 'A fungible token cross chain transfer is achieved by using the MsgTransfer:' +--- + +## `MsgTransfer` + +A fungible token cross chain transfer is achieved by using the `MsgTransfer`: + +```go +type MsgTransfer struct { + SourcePort string + SourceChannel string + Token sdk.Coin + Sender string + Receiver string + TimeoutHeight ibcexported.Height + TimeoutTimestamp uint64 + Memo string +} +``` + +This message is expected to fail if: + +* `SourcePort` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `SourceChannel` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `Token` is invalid (denom is invalid or amount is negative) + * `Token.Amount` is not positive. + * `Token.Denom` is not a valid IBC denomination as per [ADR 001 - Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md). +* `Sender` is empty. +* `Receiver` is empty or contains more than 2048 bytes.
+* `Memo` contains more than 32768 bytes. +* `TimeoutHeight` and `TimeoutTimestamp` are both zero. + +This message will send a fungible token to the counterparty chain represented by the counterparty Channel End connected to the Channel End with the identifiers `SourcePort` and `SourceChannel`. + +The denomination provided for transfer should correspond to the same denomination represented on this chain. The prefixes will be added as necessary by the receiving chain. + +If the `Amount` is set to the maximum value for a 256-bit unsigned integer (i.e. 2^256 - 1), then the whole balance of the corresponding denomination will be transferred. The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used. + +### Memo + +The memo field was added to allow applications and users to attach metadata to transfer packets. The field is optional and may be left empty. When it is used to attach metadata for a particular middleware, the memo field should be represented as a JSON object where different middlewares use different JSON keys. + +For example, the following memo field is used by the [callbacks middleware](/docs/ibc/v7.8.x/middleware/callbacks/overview) to attach a source callback to a transfer packet: + +```jsonc +{ + "src_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +You can find more information about other applications that use the memo field in the [chain registry](https://github.com/cosmos/chain-registry/blob/master/_memo_keys/ICS20_memo_keys.json). diff --git a/docs/ibc/v7.8.x/apps/transfer/metrics.mdx b/docs/ibc/v7.8.x/apps/transfer/metrics.mdx new file mode 100644 index 00000000..37fdc97a --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/metrics.mdx @@ -0,0 +1,13 @@ +--- +title: Metrics +description: The IBC transfer application module exposes the following set of metrics.
+--- + +The IBC transfer application module exposes the following set of [metrics](https://docs.cosmos.network/main/learn/advanced/telemetry). + +| Metric | Description | Unit | Type | +| :---------------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | +| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | +| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | +| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | +| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | diff --git a/docs/ibc/v7.8.x/apps/transfer/overview.mdx b/docs/ibc/v7.8.x/apps/transfer/overview.mdx new file mode 100644 index 00000000..235aa2aa --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/overview.mdx @@ -0,0 +1,121 @@ +--- +title: Overview +--- + +## What is the Transfer module? + +Transfer is the Cosmos SDK implementation of the [ICS-20](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer) protocol, which enables cross-chain fungible token transfers. + +## Concepts + +### Acknowledgements + +ICS20 uses the recommended acknowledgement format as specified by [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope). + +A successful receive of a transfer packet will result in a Result Acknowledgement being written +with the value `[]byte{byte(1)}` in the `Response` field. + +An unsuccessful receive of a transfer packet will result in an Error Acknowledgement being written +with the error message in the `Response` field. 
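+
+The two acknowledgement cases above can be illustrated with a small sketch. The struct below is a simplified stand-in for the ICS-04 acknowledgement envelope (the JSON field names are assumed from its proto-JSON rendering, for illustration only): a successful receive carries the base64-encoded byte `0x01`, while a failed receive carries the error message.
+
```go
package main

import (
	"encoding/json"
	"fmt"
)

// acknowledgement is a simplified sketch of the ICS-04 acknowledgement
// envelope used by ICS-20; exactly one of the two fields is set.
type acknowledgement struct {
	Result []byte `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
}

// successAck renders the Result acknowledgement written on a successful
// receive: the single byte 0x01, which JSON-encodes []byte as base64.
func successAck() string {
	bz, _ := json.Marshal(acknowledgement{Result: []byte{byte(1)}})
	return string(bz)
}

// errorAck renders the Error acknowledgement carrying the failure message.
func errorAck(msg string) string {
	bz, _ := json.Marshal(acknowledgement{Error: msg})
	return string(bz)
}

func main() {
	fmt.Println(successAck())                   // {"result":"AQ=="}
	fmt.Println(errorAck("insufficient funds")) // {"error":"insufficient funds"}
}
```
+
+Note that `"AQ=="` is simply the base64 encoding of `[]byte{byte(1)}` mentioned above.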
+ +### Denomination trace + +The denomination trace corresponds to the information that allows a token to be traced back to its +origin chain. It contains a sequence of port and channel identifiers ordered from the most recent to +the oldest in the timeline of transfers. + +This information is included in the token denomination field in the form of a hash to prevent an +unbounded denomination length. For example, the token `transfer/channelToA/uatom` will be displayed +as `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2`. + +Each send to any chain other than the one it was previously received from is a movement forwards in +the token's timeline. This causes a trace to be added to the token's history and the destination port +and destination channel to be prefixed to the denomination. In these instances the sender chain is +acting as the "source zone". When the token is sent back to the chain it was previously received from, the +prefix is removed. This is a backwards movement in the token's timeline and the sender chain is +acting as the "sink zone". + +It is strongly recommended to read the full details of [ADR 001: Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md) to understand the implications and context of the IBC token representations. + +## UX suggestions for clients + +For clients (wallets, exchanges, applications, block explorers, etc.) that want to display the source of the token, it is recommended to use the following alternatives for each of the cases below: + +### Direct connection + +If the denomination trace contains a single identifier prefix pair (as in the example above), then +the easiest way to retrieve the chain and light client identifier is to map the trace information +directly.
In summary, this requires querying the channel from the denomination trace identifiers, +and then the counterparty client state using the counterparty port and channel identifiers from the +retrieved channel. + +A general pseudo algorithm would look like the following: + +1. Query the full denomination trace. +2. Query the channel with the `portID/channelID` pair, which corresponds to the first destination of the + token. +3. Query the client state using the identifiers pair. Note that this query will return a `"Not +Found"` response if the current chain is not connected to this channel. +4. Retrieve the client identifier or chain identifier from the client state (eg: on + Tendermint clients) and store it locally. + +Using the gRPC gateway client service the steps above would be, with a given IBC token `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` stored on `chainB`: + +1. `GET /ibc/apps/transfer/v1/denom_traces/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` -> `{"path": "transfer/channelToA", "base_denom": "uatom"}` +2. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer/client_state` -> `{"client_id": "clientA", "chain-id": "chainA", ...}` +3. `GET /ibc/apps/transfer/v1/channels/channelToA/ports/transfer` -> `{"channel_id": "channelToA", "port_id": "transfer", "counterparty": {"channel_id": "channelToB", "port_id": "transfer"}, ...}` +4. `GET /ibc/apps/transfer/v1/channels/channelToB/ports/transfer/client_state` -> `{"client_id": "clientB", "chain-id": "chainB", ...}` + +Then, the token transfer chain path for the `uatom` denomination would be: `chainA` -> `chainB`. + +### Multiple hops + +The multiple channel hops case applies when the token has passed through multiple chains between the original source and final destination chains. + +The IBC protocol doesn't know the topology of the overall network (i.e. connections between chains and identifier names between them).
For this reason, in the multiple hops case, a particular chain in the timeline of the individual transfers can't query the chain and client identifiers of the other chains. + +Take for example the following sequence of transfers `A -> B -> C` for an IBC token, with a final prefix path (trace info) of `transfer/channelChainC/transfer/channelChainB`. What the paragraph above means is that even in the case that chain `C` is directly connected to chain `A`, the port and channel identifiers that chain `B` uses to connect to chain `A` (eg: `transfer/channelChainA`) can be completely different from the ones that chain `C` uses to connect to chain `A` (eg: `transfer/channelToChainA`). + +Thus, the solutions that the IBC team proposes for clients are the following: + +- **Connect to all chains**: Connecting to all the chains in the timeline would allow clients to + perform the queries outlined in the [direct connection](#direct-connection) section to each + relevant chain. By repeatedly following the port and channel denomination trace transfer timeline, + clients should always be able to find all the relevant identifiers. This comes at the tradeoff + that the client must connect to nodes on each of the chains in order to perform the queries. +- **Relayer as a Service (RaaS)**: A longer-term solution is to use/create a relayer service that + could map the denomination trace to the chain path timeline for each token (i.e. `origin chain -> +chain #1 -> ... -> chain #(n-1) -> final chain`). These services could provide merkle proofs in + order to allow clients to optionally verify the path timeline correctness for themselves by + running light clients. If the proofs are not verified, they should be considered as + trusted third-party services. Additionally, clients would be advised in the future to use RaaS that support the + largest number of connections between chains in the ecosystem.
Unfortunately, none of the existing + public relayers (in [Golang](https://github.com/cosmos/relayer) and + [Rust](https://github.com/informalsystems/ibc-rs)) provide this service to clients. + + + The only viable alternative for clients (at the time of writing) to handle tokens + with multiple connection hops is to connect to all chains directly and + perform relevant queries to each of them in the sequence. + + +## Locked funds + +In some [exceptional cases](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md#exceptional-cases), a client state associated with a given channel cannot be updated. This means that funds from fungible tokens in that channel will be permanently locked and thus can no longer be transferred. + +To mitigate this, a client update governance proposal can be submitted to update the frozen client +with a new valid header. Once the proposal passes, the client state will be unfrozen and the funds +from the associated channels will then be unlocked. This mechanism only applies to clients that +allow updates via governance, such as Tendermint clients. + +In addition to this, it's important to mention that a token must be sent back along the exact route +that it took originally in order to return it to its original form on the source chain (eg: the +Cosmos Hub for the `uatom`). Sending a token back to the same chain across a different channel will +**not** move the token back across its timeline. If a channel in the chain history closes before the +token can be sent back across that channel, then the token will not be returnable to its original +form. + +## Security considerations + +For safety, no other module must be capable of minting tokens with the `ibc/` prefix. The IBC +transfer module needs a subset of the denomination space that only it can create tokens in.
diff --git a/docs/ibc/v7.8.x/apps/transfer/params.mdx b/docs/ibc/v7.8.x/apps/transfer/params.mdx new file mode 100644 index 00000000..f1c78927 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/params.mdx @@ -0,0 +1,29 @@ +--- +title: Params +description: 'The IBC transfer application module contains the following parameters:' +--- + +The IBC transfer application module contains the following parameters: + +| Key | Type | Default Value | +| ---------------- | ---- | ------------- | +| `SendEnabled` | bool | `true` | +| `ReceiveEnabled` | bool | `true` | + +## `SendEnabled` + +The transfers enabled parameter controls send cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred from the chain, set the `SendEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. + +## `ReceiveEnabled` + +The transfers enabled parameter controls receive cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred to the chain, set the `ReceiveEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. 
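
Put together, the two parameters above surface in the transfer module's section of the genesis state. The fragment below is purely illustrative; the `port_id`/`denom_traces` fields and snake_case parameter names are assumed from the defaults table above, not taken from a real export:

```json
"transfer": {
  "port_id": "transfer",
  "denom_traces": [],
  "params": {
    "send_enabled": true,
    "receive_enabled": true
  }
}
```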
diff --git a/docs/ibc/v7.8.x/apps/transfer/state-transitions.mdx b/docs/ibc/v7.8.x/apps/transfer/state-transitions.mdx new file mode 100644 index 00000000..fcc9169c --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/state-transitions.mdx @@ -0,0 +1,35 @@ +--- +title: State Transitions +description: >- + A successful fungible token send has two state transitions depending on whether the + transfer is a movement forwards or backwards in the token's timeline: +--- + +## Send fungible tokens + +A successful fungible token send has two state transitions depending on whether the transfer is a movement forwards or backwards in the token's timeline: + +1. Sender chain is the source chain, *i.e.* a transfer to any chain other than the one it was previously received from is a movement forwards in the token's timeline. This results in the following state transitions: + + * The coins are transferred to an escrow address (i.e. locked) on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +2. Sender chain is the sink chain, *i.e.* the token is sent back to the chain it was previously received from. This is a backwards movement in the token's timeline. This results in the following state transitions: + + * The coins (vouchers) are burned on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +## Receive fungible tokens + +A successful fungible token receive has two state transitions depending on whether the transfer is a movement forwards or backwards in the token's timeline: + +1. Receiver chain is the source chain. This is a backwards movement in the token's timeline. This results in the following state transitions: + + * The leftmost port and channel identifier pair is removed from the token denomination prefix. + * The tokens are unescrowed and sent to the receiving address. + +2. Receiver chain is the sink chain. This is a movement forwards in the token's timeline.
This results in the following state transitions: + + * Token vouchers are minted by prefixing the destination port and channel identifiers to the trace information. + * The receiving chain stores the new trace information in the store (if not set already). + * The vouchers are sent to the receiving address. diff --git a/docs/ibc/v7.8.x/apps/transfer/state.mdx b/docs/ibc/v7.8.x/apps/transfer/state.mdx new file mode 100644 index 00000000..76c7af41 --- /dev/null +++ b/docs/ibc/v7.8.x/apps/transfer/state.mdx @@ -0,0 +1,12 @@ +--- +title: State +description: >- + The IBC transfer application module keeps state of the port to which the + module is bound and the denomination trace information as outlined in ADR + 001. +--- + +The IBC transfer application module keeps state of the port to which the module is bound and the denomination trace information as outlined in [ADR 001](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md). + +* `Port`: `0x01 -> ProtocolBuffer(string)` +* `DenomTrace`: `0x02 | []bytes(traceHash) -> ProtocolBuffer(DenomTrace)` diff --git a/docs/ibc/v7.8.x/changelog/release-notes.mdx b/docs/ibc/v7.8.x/changelog/release-notes.mdx new file mode 100644 index 00000000..4c3193fc --- /dev/null +++ b/docs/ibc/v7.8.x/changelog/release-notes.mdx @@ -0,0 +1,21 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + + This page tracks all releases and changes from the + [cosmos/ibc-go](https://github.com/cosmos/ibc-go) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md#unreleased) + section. + + + + + ### State Machine Breaking * (core/03-connection) + [\#7128](https://github.com/cosmos/ibc-go/pull/7128) Remove verification of + self client and consensus state from connection handshake.
+ diff --git a/docs/ibc/v7.8.x/ibc/apps/apps.mdx b/docs/ibc/v7.8.x/ibc/apps/apps.mdx new file mode 100644 index 00000000..f944733e --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/apps/apps.mdx @@ -0,0 +1,51 @@ +--- +title: IBC Applications +--- + +## Synopsis + +Learn how to build custom IBC application modules that enable packets to be sent to and received from other IBC-enabled chains. + +This document serves as a guide for developers who want to write their own Inter-blockchain Communication Protocol (IBC) applications for custom use cases. + +Due to the modular design of the IBC protocol, IBC application developers do not need to concern themselves with the low-level details of clients, connections, and proof verification. Nevertheless, an overview of these low-level concepts can be found in [the Overview section](/docs/ibc/v7.8.x/ibc/overview). +The document goes into detail on the abstraction layer most relevant for application developers (channels and ports), and describes how to define your own custom packets, `IBCModule` callbacks and more to make an application module IBC ready. + +**To have your module interact over IBC you must:** + +- implement the `IBCModule` interface, i.e.: + - channel (opening) handshake callbacks + - channel closing handshake callbacks + - packet callbacks +- bind to a port(s) +- add keeper methods +- define your own packet data and acknowledgement structs as well as how to encode/decode them +- add a route to the IBC router + +The following sections provide a more detailed explanation of how to write an IBC application +module correctly, corresponding to the listed steps. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +- [IBC default integration](/docs/ibc/v7.8.x/ibc/integration) + + + +## Working example + +For a real working example of an IBC application, you can look through the `ibc-transfer` module +which implements everything discussed in this section.
+ +Here are the useful parts of the module to look at: + +[Binding to transfer +port](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/genesis.go) + +[Sending transfer +packets](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/relay.go) + +[Implementing IBC +callbacks](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/ibc_module.go) diff --git a/docs/ibc/v7.8.x/ibc/apps/bindports.mdx b/docs/ibc/v7.8.x/ibc/apps/bindports.mdx new file mode 100644 index 00000000..7bee595a --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/apps/bindports.mdx @@ -0,0 +1,134 @@ +--- +title: Bind ports +--- + +## Synopsis + +Learn what changes to make to bind modules to their ports on initialization. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +- [IBC default integration](/docs/ibc/v7.8.x/light-clients/wasm/integration) + + +Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented: + +> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather, it refers to the application module to which the port binds. For IBC Modules built with the Cosmos SDK, it defaults to the module's name and for CosmWasm contracts it defaults to the contract address. + +1. Add port ID to the `GenesisState` proto definition: + + ```protobuf + message GenesisState { + string port_id = 1; + / other fields + } + ``` + +2. Add port ID as a key to the module store: + + ```go expandable + / x//types/keys.go + const ( + / ModuleName defines the IBC Module name + ModuleName = "moduleName" + + / Version defines the current version the IBC + / module supports + Version = "moduleVersion-1" + + / PortID is the default port id that module binds to + PortID = "portID" + + / ... + ) + ``` + +3.
Add port ID to `x//types/genesis.go`: + + ```go expandable + / in x//types/genesis.go + + / DefaultGenesisState returns a GenesisState with "transfer" as the default PortID. + func DefaultGenesisState() *GenesisState { + return &GenesisState{ + PortId: PortID, + / additional k-v fields + } + } + + / Validate performs basic genesis state validation returning an error upon any + / failure. + func (gs GenesisState) + + Validate() + + error { + if err := host.PortIdentifierValidator(gs.PortId); err != nil { + return err + } + /additional validations + + return gs.Params.Validate() + } + ``` + +4. Bind to port(s) in the module keeper's `InitGenesis`: + + ```go expandable + / InitGenesis initializes the ibc-module state and binds to PortID. + func (k Keeper) + + InitGenesis(ctx sdk.Context, state types.GenesisState) { + k.SetPort(ctx, state.PortId) + + / ... + + / Only try to bind to port if it is not already bound, since we may already own + / port capability from capability InitGenesis + if !k.IsBound(ctx, state.PortId) { + / transfer module binds to the transfer port on InitChain + / and claims the returned capability + err := k.BindPort(ctx, state.PortId) + if err != nil { + panic(fmt.Sprintf("could not claim port capability: %v", err)) + } + + } + + / ... + } + ``` + + With: + + ```go expandable + / IsBound checks if the module is already bound to the desired port + func (k Keeper) + + IsBound(ctx sdk.Context, portID string) + + bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + + return ok + } + + / BindPort defines a wrapper function for the port Keeper's function in + / order to expose it to module's InitGenesis function + func (k Keeper) + + BindPort(ctx sdk.Context, portID string) + + error { + cap := k.portKeeper.BindPort(ctx, portID) + + return k.ClaimCapability(ctx, cap, host.PortPath(portID)) + } + ``` + + The module binds to the desired port(s) and returns the capabilities. 
+ + In the above we find reference to keeper methods that wrap other keeper functionality; in the next section, the keeper methods that need to be implemented will be defined. diff --git a/docs/ibc/v7.8.x/ibc/apps/ibcmodule.mdx b/docs/ibc/v7.8.x/ibc/apps/ibcmodule.mdx new file mode 100644 index 00000000..be4ce698 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/apps/ibcmodule.mdx @@ -0,0 +1,414 @@ +--- +title: Implement `IBCModule` interface and callbacks +--- + +## Synopsis + +Learn how to implement the `IBCModule` interface and all of the callbacks it requires. + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This interface contains all of the callbacks IBC expects modules to implement. These include callbacks related to the channel handshake and channel closing, as well as the packet callbacks (`OnRecvPacket`, `OnAcknowledgementPacket` and `OnTimeoutPacket`). + +```go +/ IBCModule implements the ICS26 interface for the given keeper. +/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go, +/ but ultimately file structure is up to the developer +type IBCModule struct { + keeper keeper.Keeper +} +``` + +Additionally, in the `module.go` file, add the following line: + +```go +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + / Add this line + _ porttypes.IBCModule = IBCModule{ +} +) +``` + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +- [IBC default integration](/docs/ibc/v7.8.x/ibc/integration) + + + +## Channel handshake callbacks + +This section will describe the callbacks that are called during channel handshake execution. Among other things, it will claim channel capabilities passed on from core IBC. For a refresher on capabilities, check [the Overview section](/docs/ibc/v7.8.x/ibc/overview#capabilities).
+ +Here are the channel handshake callbacks that modules are expected to implement: + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `checkArguments` and `negotiateAppVersion` functions. + +```go expandable +/ Called by IBC Handler on MsgOpenInit +func (im IBCModule) + +OnChanOpenInit(ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + / Examples: + / - Abort if order == UNORDERED, + / - Abort if version is unsupported + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenInit must claim the channelCapability that IBC passes into the callback + if err := im.keeper.ClaimCapability(ctx, channelCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return "", err +} + +return version, nil +} + +/ Called by IBC Handler on MsgOpenTry +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / ...
do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenTry must claim the channelCapability that IBC passes into the callback + if err := im.keeper.ClaimCapability(ctx, channelCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return "", err +} + + / Construct application version + / IBC applications must return the appropriate application version + / This can be a simple string or it can be a complex version constructed + / from the counterpartyVersion and other arguments. + / The version returned will be the channel version used for both channel ends. + appVersion := negotiateAppVersion(counterpartyVersion, args) + +return appVersion, nil +} + +/ Called by IBC Handler on MsgOpenAck +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + if counterpartyVersion != types.Version { + return sdkerrors.Wrapf(types.ErrInvalidVersion, "invalid counterparty version: %s, expected %s", counterpartyVersion, types.Version) +} + + / do custom logic + + return nil +} + +/ Called by IBC Handler on MsgOpenConfirm +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / do custom logic + + return nil +} +``` + +The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. Closing a channel is a 2-step handshake: the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`. + +```go expandable +/ Called by IBC Handler on MsgCloseInit +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ...
do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgCloseConfirm +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} +``` + +### Channel handshake version negotiation + +Application modules are expected to verify versioning used during the channel handshake procedure. + +- `OnChanOpenInit` will verify that the relayer-chosen parameters + are valid and perform any custom `INIT` logic. + It may return an error if the chosen parameters are invalid, + in which case the handshake is aborted. + If the provided version string is non-empty, `OnChanOpenInit` should return + the version string if valid or an error if the provided version is invalid. + **If the version string is empty, `OnChanOpenInit` is expected to + return a default version string representing the version(s) + it supports.** + If there is no default version string for the application, + it should return an error if the provided version is an empty string. +- `OnChanOpenTry` will verify the relayer-chosen parameters along with the + counterparty-chosen version string and perform custom `TRY` logic. + If the relayer-chosen parameters + are invalid, the callback must return an error to abort the handshake. + If the counterparty-chosen version is not compatible with this module's + supported versions, the callback must return an error to abort the handshake. + If the versions are compatible, the try callback must select the final version + string and return it to core IBC. + `OnChanOpenTry` may also perform custom initialization logic. +- `OnChanOpenAck` will error if the counterparty-selected version string + is invalid and abort the handshake. It may also perform custom ACK logic.
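These rules can be sketched with a minimal string-matching negotiation, in the spirit of a module that supports a single version (the `Version` constant and `negotiateVersion` helper below are illustrative, not part of ibc-go):

```go
package main

import (
	"errors"
	"fmt"
)

// Version is the single version string this hypothetical module supports.
const Version = "mymodule-1"

// negotiateVersion models the INIT-side rule: an empty proposed version
// means "use the module's default", a matching version is accepted, and
// anything else aborts the handshake with an error.
func negotiateVersion(proposed string) (string, error) {
	if proposed == "" {
		return Version, nil
	}
	if proposed != Version {
		return "", errors.New("invalid version: " + proposed)
	}
	return proposed, nil
}

func main() {
	v, _ := negotiateVersion("")
	fmt.Println(v) // the default version is returned for an empty proposal

	if _, err := negotiateVersion("other-1"); err != nil {
		fmt.Println("handshake aborted:", err)
	}
}
```

A TRY-side helper would follow the same shape, taking the counterparty version as input and returning the final channel version or an error.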
+ +Versions must be strings but can implement any versioning structure. If your application plans to +have linear releases, then semantic versioning is recommended. If your application plans to release +various features in between major releases, then it is advised to use the same versioning scheme +as IBC. This versioning scheme specifies a version identifier and compatible feature set with +that identifier. Valid version selection includes selecting a compatible version identifier with +a subset of features supported by your application for that version. The struct used for this +scheme can be found in [03-connection/types](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection/types/version.go#L16). + +Since the version type is a string, applications have the ability to do simple version verification +via string matching or they can use the already implemented versioning system and pass the proto +encoded version into each handshake call as necessary. + +ICS20 currently implements basic string matching with a single supported version. + +## Packet callbacks + +Just as IBC expects modules to implement callbacks for channel handshakes, it also expects modules to implement callbacks for handling the packet flow through a channel, as defined in the `IBCModule` interface. + +Once module A and module B are connected to each other, relayers can start relaying packets and acknowledgements back and forth on the channel. + +![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png) + +Briefly, a successful packet flow works as follows: + +1. module A sends a packet through the IBC module +2. the packet is received by module B +3. if module B writes an acknowledgement of the packet then module A will process the + acknowledgement +4. if the packet is not successfully received before the timeout, then module A processes the + packet's timeout.
+ +### Sending packets + +Modules **do not send packets through callbacks**, since the modules initiate the action of sending packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC +module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `EncodePacketData(customPacketData)` function. + +```go expandable +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + channelCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + + + In order to prevent modules from sending packets on channels they do not own, + IBC expects modules to pass in the correct channel capability for the packet's + source channel. + + +### Receiving packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets +invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC +keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. 
+ +The state changes that occurred during this callback will only be written if: + +- the acknowledgement was successful, as indicated by the `Success()` function of the acknowledgement +- the acknowledgement returned is nil, indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodePacketData(packet.Data)` function. + +```go expandable +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) + +ibcexported.Acknowledgement { + / Decode the packet data + packetData := DecodePacketData(packet.Data) + + / do application state changes based on packet data and return the acknowledgement + / NOTE: The acknowledgement will indicate to the IBC handler if the application + / state changes should be written via the `Success()` function. Application state + / changes are only written if the acknowledgement is successful or the acknowledgement + / returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + +return ack +} +``` + +Reminder, the `Acknowledgement` interface: + +```go +/ Acknowledgement defines the interface used to return +/ acknowledgements in the OnRecvPacket callback. +type Acknowledgement interface { + Success() + +bool + Acknowledgement() []byte +} +``` + +### Acknowledging packets + +After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can +then process the acknowledgement using the `OnAcknowledgementPacket` callback.
The contents of the +acknowledgement are entirely up to the modules on the channel (just like the packet data); however, it +may often contain information on whether the packet was successfully processed along +with some additional data that could be useful for remediation if the packet processing failed. + +Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and +acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback +is responsible for decoding the acknowledgement and processing it. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `DecodeAcknowledgement(acknowledgement)` and `processAck(ack)` functions. + +```go expandable +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, +) (*sdk.Result, error) { + / Decode acknowledgement + ack := DecodeAcknowledgement(acknowledgement) + + / process ack + res, err := processAck(ack) + +return res, err +} +``` + +### Timeout packets + +If the timeout for a packet is reached before the packet is successfully received or the +counterparty channel end is closed before the packet is successfully received, then the receiving +chain can no longer process it. Thus, the sending chain must process the timeout using +`OnTimeoutPacket` to handle this situation. Again, the IBC module will verify that the timeout is +indeed valid, so our module only needs to implement the state machine logic for what to do once a +timeout is reached and the packet can no longer be received. + +```go +func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) (*sdk.Result, error) { + / do custom timeout logic + + return nil, nil +} +``` + +### Optional interfaces + +The following interface is optional and MAY be implemented by an IBCModule.
+ +#### PacketDataUnmarshaler + +The `PacketDataUnmarshaler` interface is defined as follows: + +```go +/ PacketDataUnmarshaler defines an optional interface which allows a middleware to +/ request the packet data to be unmarshaled by the base application. +type PacketDataUnmarshaler interface { + / UnmarshalPacketData unmarshals the packet data into a concrete type + UnmarshalPacketData([]byte) (interface{ +}, error) +} +``` + +The implementation of `UnmarshalPacketData` should unmarshal the bytes into the packet data type defined for an IBC stack. +The base application of an IBC stack should unmarshal the bytes into its packet data type, while a middleware may simply defer the call to the underlying application. + +This interface allows middlewares to unmarshal packet data in order to make use of interfaces the packet data type implements. +For example, the callbacks middleware makes use of this function to access packet data types which implement the `PacketData` and `PacketDataProvider` interfaces. diff --git a/docs/ibc/v7.8.x/ibc/apps/keeper.mdx b/docs/ibc/v7.8.x/ibc/apps/keeper.mdx new file mode 100644 index 00000000..66617d55 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/apps/keeper.mdx @@ -0,0 +1,119 @@ +--- +title: Keeper +--- + +## Synopsis + +Learn how to implement the IBC Module keeper. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +- [IBC default integration](/docs/ibc/v7.8.x/light-clients/wasm/integration) + + +In the previous sections, on channel handshake callbacks and port binding in `InitGenesis`, a reference was made to keeper methods that need to be implemented when creating a custom IBC module. Below is an overview of how to define an IBC module's keeper. + +> Note that some code has been left out for clarity; to get a full code overview, please refer to [the transfer module's keeper in the ibc-go repo](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/keeper.go).
+ +```go expandable +/ Keeper defines the IBC app module keeper +type Keeper struct { + storeKey sdk.StoreKey + cdc codec.BinaryCodec + paramSpace paramtypes.Subspace + + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + scopedKeeper capabilitykeeper.ScopedKeeper + + / ... additional according to custom logic +} + +/ NewKeeper creates a new IBC app module Keeper instance +func NewKeeper( + / args +) + +Keeper { + / ... + + return Keeper{ + cdc: cdc, + storeKey: key, + paramSpace: paramSpace, + + channelKeeper: channelKeeper, + portKeeper: portKeeper, + scopedKeeper: scopedKeeper, + + / ... additional according to custom logic +} +} + +/ IsBound checks if the IBC app module is already bound to the desired port +func (k Keeper) + +IsBound(ctx sdk.Context, portID string) + +bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + +return ok +} + +/ BindPort defines a wrapper function for the port Keeper's function in +/ order to expose it to module's InitGenesis function +func (k Keeper) + +BindPort(ctx sdk.Context, portID string) + +error { + cap := k.portKeeper.BindPort(ctx, portID) + +return k.ClaimCapability(ctx, cap, host.PortPath(portID)) +} + +/ GetPort returns the portID for the IBC app module. Used in ExportGenesis +func (k Keeper) + +GetPort(ctx sdk.Context) + +string { + store := ctx.KVStore(k.storeKey) + +return string(store.Get(types.PortKey)) +} + +/ SetPort sets the portID for the IBC app module. 
Used in InitGenesis +func (k Keeper) + +SetPort(ctx sdk.Context, portID string) { + store := ctx.KVStore(k.storeKey) + +store.Set(types.PortKey, []byte(portID)) +} + +/ AuthenticateCapability wraps the scopedKeeper's AuthenticateCapability function +func (k Keeper) + +AuthenticateCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +bool { + return k.scopedKeeper.AuthenticateCapability(ctx, cap, name) +} + +/ ClaimCapability allows the IBC app module to claim a capability that core IBC +/ passes to it +func (k Keeper) + +ClaimCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +error { + return k.scopedKeeper.ClaimCapability(ctx, cap, name) +} + +/ ... additional according to custom logic +``` diff --git a/docs/ibc/v7.8.x/ibc/apps/packets_acks.mdx b/docs/ibc/v7.8.x/ibc/apps/packets_acks.mdx new file mode 100644 index 00000000..da21d64c --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/apps/packets_acks.mdx @@ -0,0 +1,166 @@ +--- +title: Define packets and acks +--- + +## Synopsis + +Learn how to define custom packet and acknowledgement structs and how to encode and decode them. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +- [IBC default integration](/docs/ibc/v7.8.x/ibc/integration) + + + +## Custom packets + +Modules connected by a channel must agree on what application data they are sending over the +channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up +to each application module to determine how to implement this agreement. However, for most +applications this will happen as a version negotiation during the channel handshake. While more +complex version negotiation is possible to implement inside the channel opening handshake, a very +simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go).
+ +Thus, a module must define its custom packet data structure, along with a well-defined way to +encode and decode it to and from `[]byte`. + +```go expandable +/ Custom packet data defined in application module +type CustomPacketData struct { + / Custom fields ... +} + +func EncodePacketData(packetData CustomPacketData) []byte { + / encode packetData to bytes +} + +func DecodePacketData(encoded []byte) CustomPacketData { + / decode from bytes to packet data +} +``` + +> Note that the `CustomPacketData` struct is defined in the proto definition and then compiled by the protobuf compiler. + +Then a module must encode its packet data before sending it through IBC. + +```go expandable +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + channelCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + +A module receiving a packet must decode the `PacketData` into a structure it expects so that it can +act on it. + +```go +/ Receiving custom application packet data (in OnRecvPacket) + packetData := DecodePacketData(packet.Data) +/ handle received custom packet data +``` + +### Optional interfaces + +The following interfaces are optional and MAY be implemented by a custom packet type. +They allow middlewares, such as the callbacks middleware, to access information stored within the packet data. + +#### PacketData interface + +The `PacketData` interface is defined as follows: + +```go +/ PacketData defines an optional interface which an application's packet data structure may implement. +type PacketData interface { + / GetPacketSender returns the sender address of the packet data. + / If the packet sender is unknown or undefined, an empty string should be returned.
+ GetPacketSender(sourcePortID string) + +string +} +``` + +The implementation of `GetPacketSender` should return the sender of the packet data. +If the packet sender is unknown or undefined, an empty string should be returned. + +This interface is intended to give IBC middlewares access to the packet sender of a packet data type. + +#### PacketDataProvider interface + +The `PacketDataProvider` interface is defined as follows: + +```go +/ PacketDataProvider defines an optional interface for retrieving custom packet data stored on behalf of another application. +/ An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information. +/ A short-term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application. +/ This interface standardizes that behaviour. Upon realization of the ability for middlewares to define their own packet data types, this interface will be deprecated and removed with time. +type PacketDataProvider interface { + / GetCustomPacketData returns the packet data held on behalf of another application. + / The name the information is stored under should be provided as the key. + / If no custom packet data exists for the key, nil should be returned. + GetCustomPacketData(key string) + +interface{ +} +} +``` + +The implementation of `GetCustomPacketData` should return packet data held on behalf of another application (if present and supported). +If this functionality is not supported, it should return nil. Otherwise, it should return the packet data associated with the provided key. + +This interface gives IBC applications access to the packet data information embedded into the base packet data type. +Within transfer and interchain accounts, the embedded packet data is stored within the Memo field.
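To make the memo-based pattern concrete, here is a self-contained sketch; the `PacketData` struct below is a simplified stand-in (not the actual transfer type), and the JSON memo layout is an assumption for illustration only:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PacketData is a simplified stand-in for a transfer-style packet data type
// that carries information for other applications in its Memo field.
type PacketData struct {
	Denom  string `json:"denom"`
	Amount string `json:"amount"`
	Memo   string `json:"memo,omitempty"`
}

// GetCustomPacketData interprets the memo as a JSON object and returns the
// value stored under the given key, or nil if the memo is empty, is not
// valid JSON, or the key is absent — mirroring the interface's
// "return nil if unsupported" rule.
func (pd PacketData) GetCustomPacketData(key string) interface{} {
	if len(pd.Memo) == 0 {
		return nil
	}
	var memo map[string]interface{}
	if err := json.Unmarshal([]byte(pd.Memo), &memo); err != nil {
		return nil
	}
	return memo[key] // nil when the key is absent
}

func main() {
	pd := PacketData{
		Denom:  "uatom",
		Amount: "100",
		Memo:   `{"callbacks": {"address": "cosmos1abc"}}`,
	}
	fmt.Println(pd.GetCustomPacketData("callbacks") != nil) // true
	fmt.Println(pd.GetCustomPacketData("missing") == nil)   // true
}
```

A middleware can then retrieve its own entry by key without the base application needing to know about it.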
+ +Once all IBC applications within an IBC stack are capable of creating/maintaining their own packet data types, this interface function will be deprecated and removed. + +## Acknowledgements + +Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing. +In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement +will be written once the packet has been processed by the application, which may be well after the packet receipt. + +NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement +for a packet as soon as it has been received from the IBC module. + +This acknowledgement can then be relayed back to the original sender chain, which can take action +depending on the contents of the acknowledgement. + +Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and +receive acknowledgements to and from the IBC module as byte strings. + +Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an +acknowledgement struct, along with encoding and decoding it, is very similar to the packet data +example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope) +specifies a recommended format for acknowledgements. This acknowledgement type can be imported from +[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types). + +While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto): + +```protobuf expandable +/ Acknowledgement is the recommended acknowledgement format to be used by +/ app-specific protocols.
+/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental +/ conflicts with other protobuf message formats used for acknowledgements. +/ The first byte of any message with this format will be the non-ASCII values +/ `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS: +/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope +message Acknowledgement { + / response contains either a result or an error and must be non-empty + oneof response { + bytes result = 21; + string error = 22; + } +} +``` diff --git a/docs/ibc/v7.8.x/ibc/apps/routing.mdx b/docs/ibc/v7.8.x/ibc/apps/routing.mdx new file mode 100644 index 00000000..0f2c9a5a --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/apps/routing.mdx @@ -0,0 +1,40 @@ +--- +title: Routing +--- + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +- [IBC default integration](/docs/ibc/v7.8.x/light-clients/wasm/integration) + + + +## Synopsis + +Learn how to hook a route to the IBC router for the custom IBC module. + +As mentioned above, modules must implement the `IBCModule` interface (which contains both channel +handshake callbacks and packet handling callbacks). The concrete implementation of this interface +must be registered with the module name as a route on the IBC `Router`. + +```go expandable +/ app.go +func NewApp(...args) *App { +/ ... + +/ Create static IBC router, add module routes, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) +/ Note: moduleCallbacks must implement IBCModule interface +ibcRouter.AddRoute(moduleName, moduleCallbacks) + +/ Setting Router will finalize all routes by sealing router +/ No more routes can be added +app.IBCKeeper.SetRouter(ibcRouter) + +/ ... 
+} +``` diff --git a/docs/ibc/v7.8.x/ibc/integration.mdx b/docs/ibc/v7.8.x/ibc/integration.mdx new file mode 100644 index 00000000..663f5a2d --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/integration.mdx @@ -0,0 +1,231 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate IBC into your application and send data packets to other chains. + +This document outlines the required steps to integrate and configure the [IBC +module](https://github.com/cosmos/ibc-go/tree/main/modules/core) in your Cosmos SDK application and +send fungible token transfers to other chains. + +## Integrating the IBC module + +Integrating the IBC module into your SDK-based application is straightforward. The general changes can be summarized in the following steps: + +- Add required modules to the `module.BasicManager` +- Define additional `Keeper` fields for the new modules on the `App` type +- Add the module's `StoreKey`s and initialize their `Keeper`s +- Set up corresponding routers and routes for the `ibc` module +- Add the modules to the module `Manager` +- Add modules to `Begin/EndBlockers` and `InitGenesis` +- Update the module `SimulationManager` to enable simulations + +### Module `BasicManager` and `ModuleAccount` permissions + +The first step is to add the following modules to the `BasicManager`: `x/capability`, `x/ibc`, +and `x/ibc-transfer`. After that, we need to grant `Minter` and `Burner` permissions to +the `ibc-transfer` `ModuleAccount` to mint and burn relayed tokens. + +### Integrating light clients + +> Note that from v7 onwards, all light clients have to be explicitly registered in a chain's app.go and follow the steps listed below. +> This is in contrast to earlier versions of ibc-go when `07-tendermint` and `06-solomachine` were added out of the box. + +All light clients must be registered with `module.BasicManager` in a chain's app.go file. + +The following code example shows how to register the existing `ibctm.AppModuleBasic{}` light client implementation.
+ +```diff expandable + +import ( + ... ++ ibctm "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint" + ... +) + +/ app.go +var ( + + ModuleBasics = module.NewBasicManager( + / ... + capability.AppModuleBasic{}, + ibc.AppModuleBasic{}, + transfer.AppModuleBasic{}, / i.e ibc-transfer module + + / register light clients on IBC ++ ibctm.AppModuleBasic{}, + ) + + / module account permissions + maccPerms = map[string][]string{ + / other module accounts permissions + / ... + ibctransfertypes.ModuleName: {authtypes.Minter, authtypes.Burner}, + } +) +``` + +### Application fields + +Then, we need to register the `Keepers` as follows: + +```go expandable +/ app.go +type App struct { + / baseapp, keys and subspaces definitions + + / other keepers + / ... + IBCKeeper *ibckeeper.Keeper / IBC Keeper must be a pointer in the app, so we can SetRouter on it correctly + TransferKeeper ibctransferkeeper.Keeper / for cross-chain fungible token transfers + + / make scoped keepers public for test purposes + ScopedIBCKeeper capabilitykeeper.ScopedKeeper + ScopedTransferKeeper capabilitykeeper.ScopedKeeper + + / ... + / module and simulation manager definitions +} +``` + +### Configure the `Keepers` + +During initialization, besides initializing the IBC `Keepers` (for the `x/ibc`, and +`x/ibc-transfer` modules), we need to grant specific capabilities through the capability module +`ScopedKeepers` so that we can authenticate the object-capability permissions for each of the IBC +channels. 
+ +```go expandable +func NewApp(...args) *App { + / define codecs and baseapp + + / add capability keeper and ScopeToModule for ibc module + app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + + / grant capabilities for the ibc and ibc-transfer modules + scopedIBCKeeper := app.CapabilityKeeper.ScopeToModule(ibcexported.ModuleName) + scopedTransferKeeper := app.CapabilityKeeper.ScopeToModule(ibctransfertypes.ModuleName) + + / ... other modules keepers + + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, keys[ibcexported.StoreKey], app.GetSubspace(ibcexported.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, + ) + + / Create Transfer Keepers + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, + ) + transferModule := transfer.NewAppModule(app.TransferKeeper) + + / .. continues +} +``` + +### Register `Routers` + +IBC needs to know which module is bound to which port so that it can route packets to the +appropriate module and call the appropriate callbacks. The port to module name mapping is handled by +IBC's port `Keeper`. However, the mapping from module name to the relevant callbacks is accomplished +by the port +[`Router`](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/router.go) on the +IBC module. + +Adding the module routes allows the IBC handler to call the appropriate callback when processing a +channel handshake or a packet. + +Currently, a `Router` is static so it must be initialized and set correctly on app initialization. +Once the `Router` has been set, no new routes can be added. + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. 
continuation from above + + / Create static IBC router, add ibc-transfer module route, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / .. continues +``` + +### Module Managers + +In order to use IBC, we need to add the new modules to the module `Manager` and to the `SimulationManager` in case your application supports [simulations](https://docs.cosmos.network/main/learn/advanced/simulation). + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + app.mm = module.NewManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / ... + + app.sm = module.NewSimulationManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / .. continues +``` + +### Application ABCI Ordering + +One addition from IBC is the concept of `HistoricalEntries` which are stored on the staking module. +Each entry contains the historical information for the `Header` and `ValidatorSet` of this chain which is stored +at each height during the `BeginBlock` call. The historical info is required to introspect the +past historical info at any given height in order to verify the light client `ConsensusState` during the +connection handshake. + +```go expandable +/ app.go +func NewApp(...args) *App { + / .. continuation from above + + / add staking and ibc modules to BeginBlockers + app.mm.SetOrderBeginBlockers( + / other modules ... + stakingtypes.ModuleName, ibcexported.ModuleName, + ) + + / ... 
+ + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. + app.mm.SetOrderInitGenesis( + capabilitytypes.ModuleName, + / other modules ... + ibcexported.ModuleName, ibctransfertypes.ModuleName, + ) + + / .. continues +``` + + + **IMPORTANT**: The capability module **must** be declared first in + `SetOrderInitGenesis` + + +That's it! You have now wired up the IBC module and are now able to send fungible tokens across +different chains. If you want to have a broader view of the changes take a look into the SDK's +[`SimApp`](https://github.com/cosmos/ibc-go/blob/main/testing/simapp/app.go). diff --git a/docs/ibc/v7.8.x/ibc/middleware/develop.mdx b/docs/ibc/v7.8.x/ibc/middleware/develop.mdx new file mode 100644 index 00000000..be33425b --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/middleware/develop.mdx @@ -0,0 +1,499 @@ +--- +title: IBC middleware +--- + +## Synopsis + +Learn how to write your own custom middleware to wrap an IBC application, and understand how to hook different middleware to IBC base applications to form different IBC application stacks + +This document serves as a guide for middleware developers who want to write their own middleware and for chain developers who want to use IBC middleware on their chains. + +IBC applications are designed to be self-contained modules that implement their own application-specific logic through a set of interfaces with the core IBC handlers. These core IBC handlers, in turn, are designed to enforce the correctness properties of IBC (transport, authentication, ordering) while delegating all application-specific handling to the IBC application modules. However, there are cases where some functionality may be desired by many applications, yet not appropriate to place in core IBC. 
+ +Middleware allows developers to define the extensions as separate modules that can wrap over the base application. This middleware can thus perform its own custom logic, and pass data into the application so that it may run its logic without being aware of the middleware's existence. This allows both the application and the middleware to implement their own isolated logic while still being able to run as part of a single packet flow. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +- [IBC Integration](/docs/ibc/v7.8.x/ibc/integration) +- [IBC Application Developer Guide](/docs/ibc/v7.8.x/ibc/apps/apps) + + + +## Definitions + +`Middleware`: A self-contained module that sits between core IBC and an underlying IBC application during packet execution. All messages between core IBC and the underlying application must flow through the middleware, which may perform its own custom logic. + +`Underlying Application`: An underlying application is the application that is directly connected to the middleware in question. This underlying application may itself be middleware that is chained to a base application. + +`Base Application`: A base application is an IBC application that does not contain any middleware. It may be wrapped by zero or more middlewares to form an application stack. + +`Application Stack (or stack)`: A stack is the complete set of application logic (middleware(s) + base application) that gets connected to core IBC. A stack may be just a base application, or it may be a series of middlewares that nest a base application. + +## Create a custom IBC middleware + +IBC middleware wraps over an underlying IBC application and sits between core IBC and the application. It has complete control in modifying any message coming from IBC to the application, and any message coming from the application to core IBC.
Thus, middleware must be completely trusted by chain developers who wish to integrate them; however, this gives them complete flexibility in modifying the application(s) they wrap. + + + middleware developers must use the same serialization and deserialization + method as in ibc-go's codec: transfertypes.ModuleCdc.\[Must]MarshalJSON + + +For middleware builders this means: + +```go +import transfertypes "github.com/cosmos/ibc-go/v7/modules/apps/transfer/types" + +/ marshal the acknowledgement with the transfer module codec, exactly as ibc-go does +func MarshalAsIBCDoes(ack channeltypes.Acknowledgement) ([]byte, error) { + return transfertypes.ModuleCdc.MarshalJSON(&ack) +} +``` + +### Interfaces + +```go +/ Middleware implements the ICS26 Module interface +type Middleware interface { + porttypes.IBCModule / middleware has access to an underlying application which may be wrapped by more middleware + ICS4Wrapper / middleware has access to ICS4Wrapper which may be core IBC Channel Handler or a higher-level middleware that wraps this middleware. +} +``` + +```go expandable +/ This is implemented by ICS4 and all middleware that are wrapping a base application. +/ The base application will call `sendPacket` or `writeAcknowledgement` of the middleware directly above them +/ which will call the next middleware until it reaches the core IBC handler.
+type ICS4Wrapper interface { + SendPacket( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + sourcePort string, + sourceChannel string, + timeoutHeight clienttypes.Height, + timeoutTimestamp uint64, + data []byte, + ) (sequence uint64, err error) + + WriteAcknowledgement( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + packet exported.PacketI, + ack exported.Acknowledgement, + ) error + + GetAppVersion( + ctx sdk.Context, + portID, + channelID string, + ) (string, bool) +} +``` + +### Implement `IBCModule` interface and callbacks + +The `IBCModule` is a struct that implements the [ICS-26 interface (`porttypes.IBCModule`)](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/module.go#L11-L106). It is recommended to separate these callbacks into a separate file `ibc_module.go`. As will be mentioned in the [integration section](/docs/ibc/v7.8.x/ibc/middleware/integration), this struct should be different than the struct that implements `AppModule` in case the middleware maintains its own internal state and processes separate SDK messages. + +The middleware must have access to the underlying application, and be called before during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback. Middleware **may** choose not to call the underlying application's callback at all. Though these should generally be limited to error cases. + +In the case where the IBC middleware expects to speak to a compatible IBC middleware on the counterparty chain, they must use the channel handshake to negotiate the middleware version without interfering in the version negotiation of the underlying application. + +Middleware accomplishes this by formatting the version in a JSON-encoded string containing the middleware version and the application version. 
The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application. The format of the version is specified in ICS-30 as the following: + +```json +{ + "<middleware_version_key>": "<middleware_version_value>", + "app_version": "<application_version_value>" +} +``` + +The `<middleware_version_key>` key in the JSON struct should be replaced by the actual name of the key for the corresponding middleware (e.g. `fee_version`). + +During the handshake callbacks, the middleware can unmarshal the version string and retrieve the middleware and application versions. It can do its negotiation logic on `<middleware_version_value>`, and pass the `<application_version_value>` to the underlying application. + +The middleware should simply pass the capability in the callback arguments along to the underlying application so that it may be claimed by the base application. The base application will then pass the capability up the stack in order to authenticate an outgoing packet/acknowledgement. + +In the case where the middleware wishes to send a packet or acknowledgment without the involvement of the underlying application, it should be given access to the same `scopedKeeper` as the base application so that it can retrieve the capabilities by itself.
+ / Pass the entire version string onto the underlying application. + return im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + version, + ) +} + +else { + metadata = { + / set middleware version to default value + MiddlewareVersion: defaultMiddlewareVersion, + / allow application to return its default version + AppVersion: "", +} + +} + +doCustomLogic() + + / if the version string is empty, OnChanOpenInit is expected to return + / a default version string representing the version(s) + +it supports + appVersion, err := im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + metadata.AppVersion, / note we only pass app version here + ) + if err != nil { + return "", err +} + version := constructVersion(metadata.MiddlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L34-L82) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenTry` + +```go expandable +func OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + cpMetadata, err := Unmarshal(counterpartyVersion) + if err != nil { + return app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + counterpartyVersion, + ) +} + +doCustomLogic() + + / Call the underlying application's OnChanOpenTry callback. + / The try callback must select the final app-specific version string and return it. 
+ appVersion, err := app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + cpMetadata.AppVersion, / note we only pass counterparty app version here + ) + if err != nil { + return "", err +} + + / negotiate final middleware version + middlewareVersion := negotiateMiddlewareVersion(cpMetadata.MiddlewareVersion) + version := constructVersion(middlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L84-L124) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanOpenAck` + +```go expandable +func OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyChannelID string, + counterpartyVersion string, +) + +error { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + cpMetadata, err = UnmarshalJSON(counterpartyVersion) + if err != nil { + return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, counterpartyVersion) +} + if !isCompatible(cpMetadata.MiddlewareVersion) { + return error +} + +doCustomLogic() + + / call the underlying application's OnChanOpenTry callback + return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, cpMetadata.AppVersion) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L126-L152) an example implementation of this callback for the ICS29 Fee Middleware module. 
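The version wrapping and unwrapping used by the handshake callbacks above can be sketched with the standard library alone. The `Metadata` struct below uses the fee middleware's `fee_version` key purely as an example, and `constructVersion`/`unmarshalVersion` are hypothetical helpers, not ibc-go functions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata mirrors the ICS-30 version envelope, here with the fee
// middleware's key name chosen for illustration.
type Metadata struct {
	FeeVersion string `json:"fee_version"`
	AppVersion string `json:"app_version"`
}

// constructVersion wraps the middleware and app versions into a single
// JSON-encoded channel version string.
func constructVersion(middlewareVersion, appVersion string) (string, error) {
	bz, err := json.Marshal(Metadata{FeeVersion: middlewareVersion, AppVersion: appVersion})
	if err != nil {
		return "", err
	}
	return string(bz), nil
}

// unmarshalVersion parses a channel version string as middleware metadata;
// on error the caller should pass the whole string to the underlying app.
func unmarshalVersion(version string) (Metadata, error) {
	var m Metadata
	err := json.Unmarshal([]byte(version), &m)
	return m, err
}

func main() {
	version, _ := constructVersion("ics29-1", "ics20-1")
	fmt.Println(version) // {"fee_version":"ics29-1","app_version":"ics20-1"}

	metadata, _ := unmarshalVersion(version)
	fmt.Println(metadata.AppVersion)
}
```

Note that unmarshalling a bare application version such as `ics20-1` fails (it is not a JSON object), which is exactly the error path the callbacks use to fall back to passing the raw version to the underlying application.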
+ +#### `OnChanOpenConfirm` + +```go +func OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) error { + doCustomLogic() + return app.OnChanOpenConfirm(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L154-L162) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanCloseInit` + +```go +func OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) error { + doCustomLogic() + return app.OnChanCloseInit(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L164-L187) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnChanCloseConfirm` + +```go +func OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) error { + doCustomLogic() + return app.OnChanCloseConfirm(ctx, portID, channelID) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L189-L212) an example implementation of this callback for the ICS29 Fee Middleware module. + +**NOTE**: Middleware that does not need to negotiate with a counterparty middleware on the remote stack will not implement the version unmarshalling and negotiation, and will simply perform its own custom logic on the callbacks without relying on the counterparty behaving similarly. + +### Packet callbacks + +The packet callbacks, just like the handshake callbacks, wrap the application's packet callbacks. The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application.
This enables a wide range of use cases, as a simple base application like token-transfer can be transformed for a variety of use cases by combining it with custom middleware. + +#### `OnRecvPacket` + +```go expandable +func OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) ibcexported.Acknowledgement { + doCustomLogic(packet) + ack := app.OnRecvPacket(ctx, packet, relayer) + doCustomLogic(ack) / middleware may modify outgoing ack + return ack +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L214-L237) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnAcknowledgementPacket` + +```go +func OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) error { + doCustomLogic(packet, ack) + return app.OnAcknowledgementPacket(ctx, packet, ack, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L239-L292) an example implementation of this callback for the ICS29 Fee Middleware module. + +#### `OnTimeoutPacket` + +```go +func OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) error { + doCustomLogic(packet) + return app.OnTimeoutPacket(ctx, packet, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L294-L334) an example implementation of this callback for the ICS29 Fee Middleware module. + +### ICS-4 wrappers + +Middleware must also wrap ICS-4 so that any communication from the application to the `channelKeeper` goes through the middleware first. Similar to the packet callbacks, the middleware may modify outgoing acknowledgements and packets in any way it wishes.
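Before looking at the individual wrapper functions, the call chain itself can be modeled with a toy, stdlib-only sketch. `ics4Wrapper`, `tagMiddleware`, and `coreIBC` below are hypothetical stand-ins for the ibc-go types, showing how each layer forwards `SendPacket` to the layer above it:

```go
package main

import "fmt"

// ics4Wrapper is a pared-down stand-in for the ICS4Wrapper interface:
// each layer forwards SendPacket to the layer above it.
type ics4Wrapper interface {
	SendPacket(data []byte) []byte
}

// coreIBC stands in for the core IBC channel handler at the top of the stack.
type coreIBC struct{}

func (coreIBC) SendPacket(data []byte) []byte { return data }

// tagMiddleware is a hypothetical middleware that prefixes outgoing packet
// data with its tag before handing it to the wrapper above it.
type tagMiddleware struct {
	tag string
	ics4Wrapper
}

func (m tagMiddleware) SendPacket(data []byte) []byte {
	// custom logic: middleware may modify outgoing data
	return m.ics4Wrapper.SendPacket(append([]byte(m.tag+"/"), data...))
}

func main() {
	// the application calls the middleware directly above it (mw2),
	// which calls mw1, which calls core IBC
	stack := tagMiddleware{"mw2", tagMiddleware{"mw1", coreIBC{}}}
	fmt.Println(string(stack.SendPacket([]byte("appData"))))
}
```

Each layer sees, and may modify, the data before it reaches the channel keeper, which is why the order in which middlewares are stacked matters.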
+ +#### `SendPacket` + +```go expandable +func SendPacket( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + sourcePort string, + sourceChannel string, + timeoutHeight clienttypes.Height, + timeoutTimestamp uint64, + appData []byte, +) { + / middleware may modify data + data = doCustomLogic(appData) + +return ics4Keeper.SendPacket( + ctx, + chanCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, + ) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L336-L343) an example implementation of this function for the ICS29 Fee Middleware module. + +#### `WriteAcknowledgement` + +```go expandable +/ only called for async acks +func WriteAcknowledgement( + ctx sdk.Context, + chanCap *capabilitytypes.Capability, + packet exported.PacketI, + ack exported.Acknowledgement, +) { + / middleware may modify acknowledgement + ack_bytes = doCustomLogic(ack) + +return ics4Keeper.WriteAcknowledgement(packet, ack_bytes) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L345-L353) an example implementation of this function for the ICS29 Fee Middleware module. 
+ +#### `GetAppVersion` + +```go expandable +/ middleware must return the underlying application version +func GetAppVersion( + ctx sdk.Context, + portID, + channelID string, +) (string, bool) { + version, found := ics4Keeper.GetAppVersion(ctx, portID, channelID) + if !found { + return "", false +} + if !MiddlewareEnabled { + return version, true +} + + / unwrap channel version + metadata, err := Unmarshal(version) + if err != nil { + panic(fmt.Errorf("unable to unmarshal version: %w", err)) +} + +return metadata.AppVersion, true +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/48a6ae512b4ea42c29fdf6c6f5363f50645591a2/modules/apps/29-fee/ibc_middleware.go#L355-L358) an example implementation of this function for the ICS29 Fee Middleware module. diff --git a/docs/ibc/v7.8.x/ibc/middleware/integration.mdx b/docs/ibc/v7.8.x/ibc/middleware/integration.mdx new file mode 100644 index 00000000..3b942b53 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/middleware/integration.mdx @@ -0,0 +1,78 @@ +--- +title: Integrating IBC middleware into a chain +description: >- + Learn how to integrate IBC middleware(s) with a base application to your + chain. The following document only applies for Cosmos SDK chains. +--- + +Learn how to integrate IBC middleware(s) with a base application to your chain. The following document only applies for Cosmos SDK chains.
+ +If the middleware is maintaining its own state and/or processing SDK messages, then it should create and register its SDK module **only once** with the module manager in `app.go`. + +All middleware must be connected to the IBC router and wrap over an underlying base IBC application. An IBC application may be wrapped by many layers of middleware; only the top-layer middleware should be hooked to the IBC router, with all underlying middlewares and the application wrapped by it. + +The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application, while function calls from the application to IBC go through the bottom middleware first, then up to the top middleware, and then to the core IBC handlers. Thus, the same set of middleware put in different orders may produce different effects. + +## Example integration + +```go expandable +/ app.go + +/ middleware 1 and middleware 3 are stateful middleware, +/ perhaps implementing separate sdk.Msg and Handlers +mw1Keeper := mw1.NewKeeper(storeKey1) + +mw3Keeper := mw3.NewKeeper(storeKey3) + +/ Only create App Module **once** and register in app module +/ if the module maintains independent state and/or processes sdk.Msgs +app.moduleManager = module.NewManager( + ... + mw1.NewAppModule(mw1Keeper), + mw3.NewAppModule(mw3Keeper), + transfer.NewAppModule(transferKeeper), + custom.NewAppModule(customKeeper) +) + +mw1IBCModule := mw1.NewIBCModule(mw1Keeper) + +mw2IBCModule := mw2.NewIBCModule() / middleware2 is stateless middleware +mw3IBCModule := mw3.NewIBCModule(mw3Keeper) + scopedKeeperTransfer := capabilityKeeper.NewScopedKeeper("transfer") + +scopedKeeperCustom1 := capabilityKeeper.NewScopedKeeper("custom1") + +scopedKeeperCustom2 := capabilityKeeper.NewScopedKeeper("custom2") + +/ NOTE: IBC Modules may be initialized any number of times provided they use a separate +/ scopedKeeper and underlying port.
+ +/ initialize base IBC applications +/ if you want to create two different stacks with the same base application, +/ they must be given different scopedKeepers and assigned different ports. + transferIBCModule := transfer.NewIBCModule(transferKeeper) + +customIBCModule1 := custom.NewIBCModule(customKeeper, "portCustom1") + +customIBCModule2 := custom.NewIBCModule(customKeeper, "portCustom2") + +/ create IBC stacks by combining middleware with base application +/ NOTE: since middleware2 is stateless it does not require a Keeper +/ stack 1 contains mw1 -> mw3 -> transfer +stack1 := mw1.NewIBCMiddleware(mw3.NewIBCMiddleware(transferIBCModule, mw3Keeper), mw1Keeper) +/ stack 2 contains mw3 -> mw2 -> custom1 +stack2 := mw3.NewIBCMiddleware(mw2.NewIBCMiddleware(customIBCModule1), mw3Keeper) +/ stack 3 contains mw2 -> mw1 -> custom2 +stack3 := mw2.NewIBCMiddleware(mw1.NewIBCMiddleware(customIBCModule2, mw1Keeper)) + +/ associate each stack with the moduleName provided by the underlying scopedKeeper + ibcRouter := porttypes.NewRouter() + +ibcRouter.AddRoute("transfer", stack1) + +ibcRouter.AddRoute("custom1", stack2) + +ibcRouter.AddRoute("custom2", stack3) + +app.IBCKeeper.SetRouter(ibcRouter) +``` diff --git a/docs/ibc/v7.8.x/ibc/overview.mdx b/docs/ibc/v7.8.x/ibc/overview.mdx new file mode 100644 index 00000000..dc35ffec --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/overview.mdx @@ -0,0 +1,293 @@ +--- +title: Overview +--- + +## What is the Inter-Blockchain Communication Protocol (IBC)? + +This document serves as a guide for developers who want to write their own Inter-Blockchain +Communication protocol (IBC) applications for custom use cases. + +> IBC applications must be written as self-contained modules. + +Due to the modular design of the IBC protocol, IBC +application developers do not need to be concerned with the low-level details of clients, +connections, and proof verification. 
+ +This brief explanation of the lower levels of the +stack gives application developers a broad understanding of the IBC +protocol. Abstraction layer details for channels and ports are most relevant for application developers and describe how to define custom packets and `IBCModule` callbacks. + +The requirements to have your module interact over IBC are: + +- Bind to a port or ports. +- Define your packet data. +- Use the default acknowledgment struct provided by core IBC or optionally define a custom acknowledgment struct. +- Standardize an encoding of the packet data. +- Implement the `IBCModule` interface. + +Read on for a detailed explanation of how to write a self-contained IBC application module. + +## Components Overview + +### [Clients](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client) + +IBC clients are on-chain light clients. Each light client is identified by a unique client-id. +IBC clients track the consensus states of other blockchains, along with the proof spec necessary to +properly verify proofs against the client's consensus state. A client can be associated with any number +of connections to the counterparty chain. The client identifier is auto-generated using the client type +and the global client counter appended in the format: `{client-type}-{N}`. + +A `ClientState` should contain chain-specific and light-client-specific information necessary for verifying updates +and upgrades to the IBC client. The `ClientState` may contain information such as chain-id, latest height, proof specs, +unbonding periods or the status of the light client. The `ClientState` should not contain information that +is specific to a given block at a certain height; this is the function of the `ConsensusState`. Each `ConsensusState` +should be associated with a unique block and should be referenced using a height.
IBC clients are given a +client identifier prefixed store to store their associated client state and consensus states along with +any metadata associated with the consensus states. Consensus states are stored using their associated height. + +The supported IBC clients are: + +- [Solo Machine light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine): Devices such as phones, browsers, or laptops. +- [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint): The default for Cosmos SDK-based chains. +- [Localhost (loopback) client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/09-localhost): Useful for + testing, simulation, and relaying packets to modules on the same application. + +### IBC Client Heights + +IBC Client Heights are represented by the struct: + +```go +type Height struct { + RevisionNumber uint64 + RevisionHeight uint64 +} +``` + +The `RevisionNumber` represents the revision of the chain that the height represents. +A revision typically represents a continuous, monotonically increasing range of block-heights. +The `RevisionHeight` represents the height of the chain within the given revision. + +On any reset of the `RevisionHeight`—for example, when hard-forking a Tendermint chain—the `RevisionNumber` will get incremented. This allows IBC clients to distinguish between a +block-height `n` of a previous revision of the chain (at revision `p`) and block-height `n` of the current +revision of the chain (at revision `e`). + +`Height`s that share the same revision number can be compared by simply comparing their respective `RevisionHeight`s. +`Height`s that do not share the same revision number will only be compared using their respective `RevisionNumber`s. +Thus a height `h` with revision number `e+1` will always be greater than a height `g` with revision number `e`, +**REGARDLESS** of the difference in revision heights.
+ +Ex: + +```go +Height{ + RevisionNumber: 3, + RevisionHeight: 0 +} > Height{ + RevisionNumber: 2, + RevisionHeight: 100000000000 +} +``` + +When a Tendermint chain is running a particular revision, relayers can simply submit headers and proofs with the revision number +given by the chain's `chainID`, and the revision height given by the Tendermint block height. When a chain updates using a hard-fork +and resets its block-height, it is responsible for updating its `chainID` to increment the revision number. +IBC Tendermint clients then verify the revision number against their `chainID` and treat the `RevisionHeight` as the Tendermint block-height. + +Tendermint chains wishing to use revisions to maintain persistent IBC connections even across height-resetting upgrades must format their `chainID`s +in the following manner: `{chainID}-{revision_number}`. On any height-resetting upgrade, the `chainID` **MUST** be updated with a higher revision number +than the previous value. + +Ex: + +- Before upgrade `chainID`: `gaiamainnet-3` +- After upgrade `chainID`: `gaiamainnet-4` + +Clients that do not require revisions, such as the solo-machine client, simply hardcode `0` into the revision number whenever they +need to return an IBC height when implementing IBC interfaces and use the `RevisionHeight` exclusively. + +Other client types can implement their own logic to verify the IBC heights that relayers provide in their `Update`, `Misbehavior`, and +`Verify` functions respectively. + +The IBC interfaces expect an `ibcexported.Height` interface; however, all clients must use the concrete implementation provided in +`02-client/types` and reproduced above. + +### [Connections](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection) + +Connections encapsulate two `ConnectionEnd` objects on two separate blockchains. Each +`ConnectionEnd` is associated with a client of the other blockchain (for example, the counterparty blockchain).
+The connection handshake is responsible for verifying that the light clients on each chain are +correct for their respective counterparties. Connections, once established, are responsible for +facilitating all cross-chain verifications of IBC state. A connection can be associated with any +number of channels. + +### [Proofs](https://github.com/cosmos/ibc-go/blob/main/modules/core/23-commitment) and [Paths](https://github.com/cosmos/ibc-go/blob/main/modules/core/24-host) + +In IBC, blockchains do not directly pass messages to each other over the network. Instead, to +communicate, a blockchain commits some state to a specifically defined path that is reserved for a +specific message type and a specific counterparty. For example, for storing a specific connectionEnd as part +of a handshake or a packet intended to be relayed to a module on the counterparty chain. A relayer +process monitors for updates to these paths and relays messages by submitting the data stored +under the path and a proof to the counterparty chain. + +Proofs are passed from core IBC to light-clients as bytes. It is up to the light client implementation to interpret these bytes appropriately. + +- The paths that all IBC implementations must use for committing IBC messages are defined in + [ICS-24 Host State Machine Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements). +- The proof format that all implementations must be able to produce and verify is defined in [ICS-23 Proofs](https://github.com/cosmos/ics23). + +### Capabilities + +IBC is intended to work in execution environments where modules do not necessarily trust each +other. Thus, IBC must authenticate module actions on ports and channels so that only modules with the +appropriate permissions can use them. + +This module authentication is accomplished using a [dynamic +capability store](/docs/common/pages/adr-comprehensive#adr-003-dynamic-capability-store).
Upon binding to a port or +creating a channel for a module, IBC returns a dynamic capability that the module must claim in +order to use that port or channel. The dynamic capability module prevents other modules from using that port or channel since +they do not own the appropriate capability. + +While this background information is useful, IBC modules do not need to interact at all with +these lower-level abstractions. The relevant abstraction layer for IBC application developers is +that of channels and ports. IBC applications must be written as self-contained **modules**. + +A module on one blockchain can communicate with other modules on other blockchains by sending, +receiving, and acknowledging packets through channels that are uniquely identified by the +`(channelID, portID)` tuple. + +A useful analogy is to consider IBC modules as internet applications on +a computer. A channel can then be conceptualized as an IP connection, with the IBC portID being +analogous to an IP port and the IBC channelID being analogous to an IP address. Thus, a single +instance of an IBC module can communicate on the same port with any number of other modules and +IBC correctly routes all packets to the relevant module using the `(channelID, portID)` tuple. An +IBC module can also communicate with another IBC module over multiple ports, with each +`(portID<->portID)` packet stream being sent on a different unique channel. + +### [Ports](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port) + +An IBC module can bind to any number of ports. Each port must be identified by a unique `portID`. +Since IBC is designed to be secure with mutually distrusted modules operating on the same ledger, +binding a port returns a dynamic object capability. In order to take action on a particular port +(for example, to open a channel with its portID), a module must provide the dynamic object capability to the IBC +handler.
This requirement prevents a malicious module from opening channels with ports it does not own. Thus, +IBC modules are responsible for claiming the capability that is returned on `BindPort`. + +### [Channels](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +An IBC channel can be established between two IBC ports. Currently, a port is exclusively owned by a +single module. IBC packets are sent over channels. Just as IP packets contain the destination IP +address and IP port, and the source IP address and source IP port, IBC packets contain +the destination portID and channelID, and the source portID and channelID. This packet structure enables IBC to +correctly route packets to the destination module while allowing modules receiving packets to +know the sender module. + +A channel can be `ORDERED`, where packets from a sending module must be processed by the +receiving module in the order they were sent. Or a channel can be `UNORDERED`, where packets +from a sending module are processed in the order they arrive (which might be a different order than they were sent in). + +Modules can choose which channels they wish to communicate over; thus, IBC expects modules to +implement callbacks that are called during the channel handshake. These callbacks can do custom +channel initialization logic. If any callback returns an error, the channel handshake fails. Thus, by +returning errors on callbacks, modules can programmatically reject and accept channels. + +The channel handshake is a 4-step handshake. Briefly, if a given chain A wants to open a channel with +chain B using an already established connection: + +1. chain A sends a `ChanOpenInit` message to signal a channel initialization attempt with chain B. +2. chain B sends a `ChanOpenTry` message to try opening the channel on chain A. +3. chain A sends a `ChanOpenAck` message to mark its channel end status as open. +4. chain B sends a `ChanOpenConfirm` message to mark its channel end status as open.
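As noted above, modules accept or reject channels by returning errors from their handshake callbacks. A minimal sketch of that idea (a hypothetical module with a deliberately simplified signature; the real `IBCModule` callbacks also receive an `sdk.Context`, a channel capability, and counterparty metadata):

```go
import "fmt"

// expectedVersion is a hypothetical version string for this example module.
const expectedVersion = "mymodule-1"

// onChanOpenInit runs on chain A during step 1 of the handshake.
// Returning an error aborts the handshake; returning nil (plus the
// negotiated version) accepts it.
func onChanOpenInit(order, portID, proposedVersion string) (string, error) {
	if order != "UNORDERED" {
		return "", fmt.Errorf("channel must be UNORDERED, got %s", order)
	}
	if proposedVersion != expectedVersion {
		return "", fmt.Errorf("unsupported version: %s", proposedVersion)
	}
	return proposedVersion, nil
}
```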
+ +If all handshake steps are successful, the channel is opened on both sides. At each step in the handshake, the module +associated with the `ChannelEnd` executes its callback. So +on `ChanOpenInit`, the module on chain A executes its callback `OnChanOpenInit`. + +The channel identifier is auto-derived in the format: `channel-{N}` where N is the next sequence to be used. + +Just as ports came with dynamic capabilities, channel initialization returns a dynamic capability +that the module **must** claim so that it can pass in a capability to authenticate channel actions +like sending packets. The channel capability is passed into the callback on the first part of the +handshake; either `OnChanOpenInit` on the initializing chain or `OnChanOpenTry` on the other chain. + +#### Closing channels + +Closing a channel occurs in 2 handshake steps as defined in [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics). + +`ChanCloseInit` closes a channel on the executing chain if the channel exists, is not +already closed, and the connection it exists upon is OPEN. Channels can only be closed by a +calling module or in the case of a packet timeout on an ORDERED channel. + +`ChanCloseConfirm` is a response to a counterparty channel executing `ChanCloseInit`. The channel +on the executing chain closes if the channel exists, the channel is not already closed, +the connection the channel exists upon is OPEN, and the executing chain successfully verifies +that the counterparty channel has been closed. + +### [Packets](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Modules communicate with each other by sending packets over IBC channels. All +IBC packets contain the destination `portID` and `channelID` along with the source `portID` and +`channelID`. This packet structure allows modules to know the sender module of a given packet. IBC packets +contain a sequence to optionally enforce ordering.
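The packet fields just described can be sketched as a struct (a simplified sketch mirroring the shape of the `Packet` type in `04-channel`; the timeout fields are discussed next):

```go
// Simplified sketch of the IBC packet shape described above.
type Packet struct {
	Sequence           uint64 // optionally enforces ordering
	SourcePort         string
	SourceChannel      string
	DestinationPort    string
	DestinationChannel string
	Data               []byte // opaque, application-defined payload
	TimeoutHeight      Height // deadline as a destination-chain height
	TimeoutTimestamp   uint64 // deadline as a destination-chain timestamp
}

// Height as defined in the IBC Client Heights section above.
type Height struct {
	RevisionNumber uint64
	RevisionHeight uint64
}
```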
+ +IBC packets also contain a `TimeoutHeight` and a `TimeoutTimestamp` that determine the deadline by which the receiving module must process the packet. + +Modules send custom application data to each other inside the `Data []byte` field of the IBC packet. +Thus, packet data is opaque to IBC handlers. It is incumbent on a sender module to encode +its application-specific packet information into the `Data` field of packets. The receiver +module must decode that `Data` back to the original application data. + +### [Receipts and Timeouts](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Since IBC works over a distributed network and relies on potentially faulty relayers to relay messages between ledgers, +IBC must handle the case where a packet does not get sent to its destination in a timely manner or at all. Packets must +specify a non-zero value for timeout height (`TimeoutHeight`) or timeout timestamp (`TimeoutTimestamp`) after which a packet can no longer be successfully received on the destination chain. + +- The `TimeoutHeight` indicates a consensus height on the destination chain after which the packet will no longer be processed, and instead counts as having timed-out. +- The `TimeoutTimestamp` indicates a timestamp on the destination chain after which the packet will no longer be processed, and instead counts as having timed-out. + +If the timeout passes without the packet being successfully received, the packet can no longer be +received on the destination chain. The sending module can timeout the packet and take appropriate actions. + +If the timeout is reached, then a proof of packet timeout can be submitted to the original chain. The original chain can then perform +application-specific logic to timeout the packet, perhaps by rolling back the packet send changes (refunding senders any locked funds, etc.). + +- In ORDERED channels, a timeout of a single packet in the channel causes the channel to close.
+ + - If packet sequence `n` times out, then a packet at sequence `k > n` cannot be received without violating the contract of ORDERED channels that packets are processed in the order that they are sent. + - Since ORDERED channels enforce this invariant, a proof that sequence `n` has not been received on the destination chain by the specified timeout of packet `n` is sufficient to timeout packet `n` and close the channel. + +- In UNORDERED channels, the application-specific timeout logic for that packet is applied and the channel is not closed. + + - Packets can be received in any order. + + - IBC writes a packet receipt for each sequence received on the UNORDERED channel. This receipt does not contain information; it is simply a marker intended to signify that the UNORDERED channel has received a packet at the specified sequence. + + - To timeout a packet on an UNORDERED channel, a proof is required that a packet receipt **does not exist** for the packet's sequence by the specified timeout. + +For this reason, most modules should use UNORDERED channels as they require fewer liveness guarantees to function effectively for users of that channel. + +### [Acknowledgments](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel) + +Modules can also choose to write application-specific acknowledgments upon processing a packet. Acknowledgments can be written: + +- Synchronously on `OnRecvPacket` if the module processes packets as soon as they are received from the IBC module. +- Asynchronously if the module processes packets at some later point after receiving the packet. + +This acknowledgment data is opaque to IBC much like the packet `Data` and is treated by IBC as a simple byte string `[]byte`. Receiver modules must encode their acknowledgment so that the sender module can decode it correctly. The encoding must be negotiated between the two parties during version negotiation in the channel handshake.
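As an illustration of a negotiated acknowledgment encoding, here is a hedged sketch using JSON (the actual `Acknowledgement` type in ibc-go's `04-channel` is a protobuf message with a result/error oneof; the JSON encoding below is an assumption for this example only):

```go
import "encoding/json"

// Acknowledgement is a simplified success-or-error acknowledgment.
type Acknowledgement struct {
	Result []byte `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
}

// encodeAck produces the opaque []byte that core IBC stores and a
// relayer later carries back to the sender chain.
func encodeAck(ack Acknowledgement) ([]byte, error) {
	return json.Marshal(ack)
}

// decodeAck is run by the original sender module to recover the
// receiver's result and decide whether to roll back state.
func decodeAck(bz []byte) (Acknowledgement, error) {
	var ack Acknowledgement
	err := json.Unmarshal(bz, &ack)
	return ack, err
}
```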
+ +The acknowledgment can encode whether the packet processing succeeded or failed, along with additional information that allows the sender module to take appropriate action. + +After the acknowledgment has been written by the receiving chain, a relayer relays the acknowledgment back to the original sender module. + +The original sender module then executes application-specific acknowledgment logic using the contents of the acknowledgment. + +- If the acknowledgment indicates that packet processing failed, packet-send changes can be rolled back (for example, refunding senders in ICS 20). + +- After an acknowledgment is received successfully by the original sender chain, the corresponding packet commitment is deleted since it is no longer needed. + +## Further Readings and Specs + +If you want to learn more about IBC, check the following specifications: + +- [IBC specification overview](https://github.com/cosmos/ibc/blob/master/README.md) diff --git a/docs/ibc/v7.8.x/ibc/proposals.mdx b/docs/ibc/v7.8.x/ibc/proposals.mdx new file mode 100644 index 00000000..7d328b57 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/proposals.mdx @@ -0,0 +1,127 @@ +--- +title: Governance Proposals +--- + +In uncommon situations, a highly valued client may become frozen due to uncontrollable +circumstances. A highly valued client might have hundreds of channels being actively used. +Some of those channels might have a significant amount of locked tokens used for ICS 20. + +If one third of the validator set of the chain the client represents decides to collude, +they can sign off on two valid but conflicting headers each signed by the other one third +of the honest validator set. The light client can now be updated with two valid, but conflicting +headers at the same height. The light client cannot know which header is trustworthy and therefore +evidence of such misbehaviour is likely to be submitted, resulting in a frozen light client.
+ +Frozen light clients cannot be updated under any circumstance except via a governance proposal. +Since a quorum of validators can sign arbitrary state roots which may not be valid executions +of the state machine, a governance proposal has been added to ease the complexity of unfreezing +or updating clients which have become "stuck". Without this mechanism, validator sets would need +to construct a state root to unfreeze the client. Unfreezing clients re-enables all of the channels +built upon that client. This may result in recovery of otherwise lost funds. + +Tendermint light clients may become expired if the trusting period has passed since their +last update. This may occur if relayers stop submitting headers to update the clients. + +An unplanned upgrade by the counterparty chain may also result in expired clients. If the counterparty +chain undergoes an unplanned upgrade, there may be no commitment to that upgrade signed by the validator +set before the chain-id changes. In this situation, the validator set of the last valid update for the +light client is never expected to produce another valid header since the chain-id has changed, which will +ultimately lead the on-chain light client to become expired. + +In the case that a highly valued light client is frozen, expired, or rendered non-updateable, a +governance proposal may be submitted to update this client, known as the subject client. The +proposal includes the client identifier for the subject and the client identifier for a substitute +client. Light client implementations may implement custom updating logic, but in most cases, +the subject will be updated to the latest consensus state of the substitute client, if the proposal passes. +The substitute client is used as a "stand in" while the subject is on trial. It is best practice to create +a substitute client *after* the subject has become frozen to avoid the substitute also becoming frozen.
+An active substitute client allows headers to be submitted during the voting period to prevent accidental expiry +once the proposal passes. + +*Note*: two of these parameters, `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehavior`, have been deprecated, and will both be set to `false` upon upgrades even if they were previously set to `true`. These parameters will no longer play a role in restricting a client upgrade. Please see ADR 026 for more details. + +## How to recover an expired client with a governance proposal + +See also the relevant documentation: [ADR-026, IBC client recovery mechanisms](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md) + +> **Who is this information for?** +> Although technically anyone can submit the governance proposal to recover an expired client, often it will be **relayer operators** (at least coordinating the submission). + +### Preconditions + +* The chain is updated with ibc-go `>=` v1.1.0. +* There exists an active client (with a known client identifier) for the same counterparty chain as the expired client. +* Sufficient tokens for the governance deposit. + +## Steps + +### Step 1 + +Check if the client is attached to the expected `chain-id`. For example, for an expired Tendermint client representing the Akash chain the client state looks like this on querying the client state: + +```text +{ + client_id: 07-tendermint-146 + client_state: + '@type': /ibc.lightclients.tendermint.v1.ClientState + allow_update_after_expiry: true + allow_update_after_misbehaviour: true + chain_id: akashnet-2 +} +``` + +The client is attached to the expected Akash `chain-id`. Note that although the parameters (`allow_update_after_expiry` and `allow_update_after_misbehaviour`) exist to signal intent, these parameters have been deprecated and will not enforce any checks on the revival of the client. See ADR-026 for more context on this deprecation.
+ +### Step 2 + +If the chain has been updated to ibc-go `>=` v1.1.0, anyone can submit the governance proposal to recover the client by executing this via CLI. + +> Note that the Cosmos SDK has updated how governance proposals are submitted in SDK v0.46, now requiring a `.json` proposal file to be passed + +* From SDK v0.46.x onwards + + ```bash + tx gov submit-proposal [path-to-proposal-json] + ``` + + where `proposal.json` contains: + + ```json expandable + { + "messages": [ + { + "@type": "/ibc.core.client.v1.ClientUpdateProposal", + "title": "title_string", + "description": "description_string", + "subject_client_id": "expired_client_id_string", + "substitute_client_id": "active_client_id_string" + } + ], + "metadata": "", + "deposit": "10stake" + } + ``` + + Alternatively, there is a legacy command (no longer recommended): + + ```bash + tx gov submit-legacy-proposal update-client [subject-client-id] [substitute-client-id] + ``` + +* Until SDK v0.45.x + + ```bash + tx gov submit-proposal update-client [subject-client-id] [substitute-client-id] + ``` + +The `subject-client-id` identifier is the proposed client to be updated. This client must be either frozen or expired. + +The `substitute-client-id` represents a substitute client. It carries all the state for the client which may be updated. It must have identical client and chain parameters to the client which may be updated (except for latest height, frozen height, and chain ID). It should be continually updated during the voting period. + +After this, all that remains is deciding who funds the governance deposit and ensuring the governance proposal passes. If it does, the client on trial will be updated to the latest state of the substitute. + +## Important considerations + +Please note that from v1.0.0 of ibc-go it will not be allowed for transactions to go to expired clients anymore, so please update to at least this version to prevent similar issues in the future. + +Please also note that if the client on the other end of the transaction is also expired, that client will also need to update.
This process updates only one client. diff --git a/docs/ibc/v7.8.x/ibc/proto-docs.mdx b/docs/ibc/v7.8.x/ibc/proto-docs.mdx new file mode 100644 index 00000000..f8aa3e50 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/proto-docs.mdx @@ -0,0 +1,6 @@ +--- +title: Protobuf Documentation +description: See ibc-go Buf Protobuf documentation. +--- + +See [ibc-go Buf Protobuf documentation](https://buf.build/cosmos/ibc/tags/main). diff --git a/docs/ibc/v7.8.x/ibc/relayer.mdx b/docs/ibc/v7.8.x/ibc/relayer.mdx new file mode 100644 index 00000000..a7a170c2 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/relayer.mdx @@ -0,0 +1,48 @@ +--- +title: Relayer +--- + + + +## Pre-requisite readings + +* [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +* Events + + + +## Events + +Events are emitted for every transaction processed by the base application to indicate the execution +of some logic clients may want to be aware of. This is extremely useful when relaying IBC packets. +Any message that uses IBC will emit events for the corresponding TAO logic executed as defined in +the [IBC events document](/docs/ibc/v7.8.x/light-clients/wasm/events). + +In the SDK, it can be assumed that for every message there is an event emitted with the type `message`, +attribute key `action`, and an attribute value representing the type of message sent +(`channel_open_init` would be the attribute value for `MsgChannelOpenInit`). If a relayer queries +for transaction events, it can split message events using this event Type/Attribute Key pair. + +The Event Type `message` with the Attribute Key `module` may be emitted multiple times for a single +message due to application callbacks. It can be assumed that any TAO logic executed will result in +a module event emission with an attribute value of the form `ibc_` followed by the submodule name (02-client emits `ibc_client`). + +### Subscribing with Tendermint + +Calling the Tendermint RPC method `Subscribe` via Tendermint's Websocket will return events using +Tendermint's internal representation of them.
Instead of receiving back a list of events as they +were emitted, Tendermint will return the type `map[string][]string` which maps a string in the +form `{event_type}.{attribute_key}` to `attribute_value`. This causes extraction of the event +ordering to be non-trivial, but still possible. + +A relayer should use the `message.action` key to extract the number of messages in the transaction +and the type of IBC transactions sent. For every IBC transaction within the string array for +`message.action`, the necessary information should be extracted from the other event fields. If +`send_packet` appears at index 2 in the value for `message.action`, a relayer will need to use the +value at index 2 of the key `send_packet.packet_sequence`. This process should be repeated for each +piece of information needed to relay a packet. + +## Example Implementations + +* [Golang Relayer](https://github.com/cosmos/relayer) +* [Hermes](https://github.com/informalsystems/hermes) diff --git a/docs/ibc/v7.8.x/ibc/roadmap.mdx b/docs/ibc/v7.8.x/ibc/roadmap.mdx new file mode 100644 index 00000000..81f3189b --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/roadmap.mdx @@ -0,0 +1,70 @@ +--- +title: Roadmap +description: 'Latest update: December 21st, 2022' +--- + +*Latest update: December 21st, 2022* + +This document endeavours to inform the wider IBC community about plans and priorities for work on ibc-go by the team at Interchain GmbH. It is intended to broadly inform all users of ibc-go, including developers and operators of IBC, relayer, chain and wallet applications. + +This roadmap should be read as a high-level guide, rather than a commitment to schedules and deliverables. The degree of specificity is inversely proportional to the timeline. We will update this document periodically to reflect the status and plans. For the latest expected release timelines, please check [here](https://github.com/cosmos/ibc-go/wiki/Release-timeline).
+ +## v7.0.0 + +### 02-client refactor + +This refactor will make the development of light clients easier. The ibc-go implementation will finally align with the spec and light clients will be required to set their own client and consensus states. This will allow more flexibility for light clients to manage their own internal storage and do batch updates. See [ADR 006](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-006-02-client-refactor.md) for more information. + +Follow the progress with the [beta](https://github.com/cosmos/ibc-go/milestone/25) and [RC](https://github.com/cosmos/ibc-go/milestone/27) milestones or in the [project board](https://github.com/orgs/cosmos/projects/7/views/14). + +### Upgrade Cosmos SDK v0.47 + +Follow the progress with the [milestone](https://github.com/cosmos/ibc-go/milestone/36). + +### Add `authz` support to 20-transfer + +Authz goes cross chain: users can grant permission for their tokens to be transferred to another chain on their behalf. See [this issue](https://github.com/cosmos/ibc-go/issues/2431) for more details. + +## v7.1.0 + +Because it is so important to have an ibc-go release compatible with the latest Cosmos SDK release, a couple of features will take a little longer and be released in [v7.1.0](https://github.com/cosmos/ibc-go/milestone/37). + +### Localhost connection + +This feature will add support for applications on a chain to communicate with applications on the same chain using the existing standard interface to communicate with applications on remote chains. This is a powerful UX improvement, particularly for those users interested in interacting with multiple smart contracts on a single chain through one interface. + +For more details, see the design proposal and discussion [here](https://github.com/cosmos/ibc-go/discussions/2191). + +A special shout out to Strangelove for their substantial contribution on this feature. 
+ +### Support for Wasm light clients + +We will add support for Wasm light clients. The first Wasm client developed with the ibc-go/v7 02-client refactor and stored as Wasm bytecode will be the GRANDPA light client used for Cosmos x Substrate IBC connections. This feature will also be used for a NEAR light client in the future. + +This feature was developed by Composable and Strangelove but will be upstreamed into ibc-go. + +## v8.0.0 + +### Channel upgradability + +Channel upgradability will allow chains to renegotiate an existing channel to take advantage of new features without having to create a new channel, thus preserving all existing packet state processed on the channel. + +Follow the progress with the [alpha milestone](https://github.com/cosmos/ibc-go/milestone/29) or the [project board](https://github.com/orgs/cosmos/projects/7/views/17). + +### Path unwinding + +This feature will allow tokens with non-native denoms to be sent back automatically to their native chains before being sent to a final destination chain. This will allow tokens to reach a final destination with the fewest possible hops from their native chain. + +For more details, see this [discussion](https://github.com/cosmos/ibc/discussions/824). + +*** + +This roadmap is also available as a [project board](https://github.com/orgs/cosmos/projects/7/views/25). + +For the latest expected release timelines, please check [here](https://github.com/cosmos/ibc-go/wiki/Release-timeline). + +For the latest information on the progress of the work or the decisions made that might influence the roadmap, please follow our [engineering updates](https://github.com/cosmos/ibc-go/wiki/Engineering-updates). + +*** + +**Note**: release version numbers may be subject to change.
diff --git a/docs/ibc/v7.8.x/ibc/troubleshooting.mdx b/docs/ibc/v7.8.x/ibc/troubleshooting.mdx new file mode 100644 index 00000000..3b58f016 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/troubleshooting.mdx @@ -0,0 +1,13 @@ +--- +title: Troubleshooting +description: >- + If it is being reported that a client state is unauthorized, this is due to + the client type not being present in the AllowedClients array. +--- + +## Unauthorized client states + +If it is being reported that a client state is unauthorized, this is due to the client type not being present +in the [`AllowedClients`](https://github.com/cosmos/ibc-go/blob/v6.0.0/modules/core/02-client/types/client.pb.go#L345) array. + +Unless the client type is present in this array, all usage of clients of this type will be prevented. diff --git a/docs/ibc/v7.8.x/ibc/upgrades/developer-guide.mdx b/docs/ibc/v7.8.x/ibc/upgrades/developer-guide.mdx new file mode 100644 index 00000000..377d53ba --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/upgrades/developer-guide.mdx @@ -0,0 +1,9 @@ +--- +title: IBC Client Developer Guide to Upgrades +--- + +## Synopsis + +Learn how to implement upgrade functionality for your custom IBC client. + +Please see the section [Handling upgrades](/docs/ibc/v7.8.x/light-clients/developer-guide/upgrades) from the light client developer guide for more information. diff --git a/docs/ibc/v7.8.x/ibc/upgrades/genesis-restart.mdx b/docs/ibc/v7.8.x/ibc/upgrades/genesis-restart.mdx new file mode 100644 index 00000000..06d858ad --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/upgrades/genesis-restart.mdx @@ -0,0 +1,46 @@ +--- +title: Genesis Restart Upgrades +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients using genesis restarts. + +**NOTE**: Regular genesis restarts are currently unsupported by relayers! + +## IBC Client Breaking Upgrades + +IBC client breaking upgrades are possible using genesis restarts. +It is highly recommended to use the in-place migrations instead of a genesis restart. 
+Genesis restarts should be used sparingly and as backup plans. + +Genesis restarts still require the usage of an IBC upgrade proposal in order to correctly upgrade counterparty clients. + +### Step-by-Step Upgrade Process for SDK Chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the [IBC Client Breaking Upgrade List](/docs/ibc/v7.8.x/ibc/upgrades/quick-guide#ibc-client-breaking-upgrades) and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a 02-client [`UpgradeProposal`](https://github.com/cosmos/ibc-go/blob/v7.3.0/proto/ibc/core/client/v1/client.proto#L58-L77) with an `UpgradePlan` and a new IBC ClientState in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as TrustingPeriod). +2. Vote on and pass the `UpgradeProposal` +3. Halt the node after successful upgrade. +4. Export the genesis file. +5. Swap to the new binary. +6. Run migrations on the genesis file. +7. Remove the `UpgradeProposal` plan from the genesis file. This may be done by migrations. +8. Change desired chain-specific fields (chain id, unbonding period, etc). This may be done by migrations. +9. Reset the node's data. +10. Start the chain. + +Upon the `UpgradeProposal` passing, the upgrade module will commit the UpgradedClient under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. 
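The node-operator half of the process above (steps 3–10) can be sketched as pseudocode shell commands. The binary names (`simd`, `new-simd`), the migration target, the home directory, and the `jq` filter below are all illustrative assumptions — the actual commands, flags, and genesis fields are chain-specific:

```shell
# 3-4. after the chain halts at the upgrade height, export state
simd export > exported_genesis.json

# 5-6. install the new binary, then run genesis migrations
# (migration command names are chain-specific)
new-simd migrate v2 exported_genesis.json > migrated_genesis.json

# 7-8. strip the upgrade plan and adjust chain-specific fields,
# e.g. bumping the revision number in the chain-id
jq 'del(.app_state.upgrade) | .chain_id = "mychain-2"' \
  migrated_genesis.json > genesis.json

# 9-10. reset local state and restart with the new genesis
new-simd tendermint unsafe-reset-all
cp genesis.json ~/.simd/config/genesis.json
new-simd start
```

Some chains perform steps 7 and 8 inside the migration command itself, in which case the `jq` edit is unnecessary.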
+ +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +#### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +These steps are identical to the regular [IBC client breaking upgrade process](/docs/ibc/v7.8.x/ibc/upgrades/quick-guide#step-by-step-upgrade-process-for-relayers-upgrading-counterparty-clients). + +### Non-IBC Client Breaking Upgrades + +While ibc-go supports genesis restarts which do not break IBC clients, relayers do not support this upgrade path. +Here is a tracking issue on [Hermes](https://github.com/informalsystems/ibc-rs/issues/1152). +Please do not attempt a regular genesis restart unless you have a tool to update counterparty clients correctly. diff --git a/docs/ibc/v7.8.x/ibc/upgrades/intro.mdx b/docs/ibc/v7.8.x/ibc/upgrades/intro.mdx new file mode 100644 index 00000000..e9af9046 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/upgrades/intro.mdx @@ -0,0 +1,15 @@ +--- +title: Upgrading IBC Chains Overview +description: >- + This directory contains information on how to upgrade an IBC chain without + breaking counterparty clients and connections. +--- + +### Upgrading IBC Chains Overview + +This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections. + +IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving. Many chain upgrades may be irrelevant to IBC; however, some upgrades could potentially break counterparty clients if not handled correctly.
Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client. + +1. The [quick-guide](/docs/ibc/v7.8.x/ibc/upgrades/quick-guide) describes how IBC-connected chains can perform client-breaking upgrades and how relayers can securely upgrade counterparty clients using the SDK. +2. The [developer-guide](/docs/ibc/v7.8.x/ibc/upgrades/developer-guide) is a guide for developers intending to develop IBC client implementations with upgrade functionality. diff --git a/docs/ibc/v7.8.x/ibc/upgrades/quick-guide.mdx b/docs/ibc/v7.8.x/ibc/upgrades/quick-guide.mdx new file mode 100644 index 00000000..79d737a2 --- /dev/null +++ b/docs/ibc/v7.8.x/ibc/upgrades/quick-guide.mdx @@ -0,0 +1,54 @@ +--- +title: How to Upgrade IBC Chains and their Clients +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients. + +The information in this doc for upgrading chains is relevant to SDK chains. However, the guide for counterparty clients is relevant to any Tendermint client that enables upgrades. + +## IBC Client Breaking Upgrades + +IBC-connected chains must perform an IBC upgrade if their upgrade will break counterparty IBC clients. The current IBC protocol supports upgrading tendermint chains for a specific subset of IBC-client-breaking upgrades. Here is the exhaustive list of IBC client-breaking upgrades and whether the IBC protocol currently supports such upgrades. + +IBC currently does **NOT** support unplanned upgrades. All of the following upgrades must be planned and committed to in advance by the upgrading chain, in order for counterparty clients to maintain their connections securely. + +Note: Since upgrades are only implemented for Tendermint clients, this doc only discusses upgrades on Tendermint chains that would break counterparty IBC Tendermint Clients. + +1. Changing the Chain-ID: **Supported** +2. 
Changing the UnbondingPeriod: **Partially Supported**, chains may increase the unbonding period with no issues. However, decreasing the unbonding period may irreversibly break some counterparty clients. Thus, it is **not recommended** that chains reduce the unbonding period. +3. Changing the height (resetting to 0): **Supported**, so long as chains remember to increment the revision number in their chain-id. +4. Changing the ProofSpecs: **Supported**, this should be changed if the proof structure needed to verify IBC proofs is changed across the upgrade. Ex: switching from an IAVL store to a SimpleTree store +5. Changing the UpgradePath: **Supported**, this might involve changing the key under which upgraded clients and consensus states are stored in the upgrade store, or even migrating the upgrade store itself. +6. Migrating the IBC store: **Unsupported**, as the IBC store location is negotiated by the connection. +7. Upgrading to a backwards compatible version of IBC: **Supported** +8. Upgrading to a non-backwards compatible version of IBC: **Unsupported**, as IBC version is negotiated on connection handshake. +9. Changing the Tendermint LightClient algorithm: **Partially Supported**. Changes to the light client algorithm that do not change the ClientState or ConsensusState struct may be supported, provided that the counterparty is also upgraded to support the new light client algorithm. Changes that require updating the ClientState and ConsensusState structs themselves are theoretically possible by providing a path to translate an older ClientState struct into the new ClientState struct; however, this is not currently implemented. + +### Step-by-Step Upgrade Process for SDK chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the list above and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1.
Create a 02-client [`UpgradeProposal`](https://github.com/cosmos/ibc-go/blob/v7.3.0/proto/ibc/core/client/v1/client.proto#L58-L77) with an `UpgradePlan` and a new IBC ClientState in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as TrustingPeriod). +2. Vote on and pass the `UpgradeProposal` + +Upon the `UpgradeProposal` passing, the upgrade module will commit the UpgradedClient under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +Once the upgrading chain has committed to upgrading, relayers must wait until the chain halts at the upgrade height before upgrading counterparty clients. This is because chains may reschedule or cancel upgrade plans before they occur. Thus, relayers must wait until the chain reaches the upgrade height and halts before they can be sure the upgrade will take place. + +The upgrade process for relayers trying to upgrade the counterparty clients is as follows: + +1. Wait for the upgrading chain to reach the upgrade height and halt +2. Query a full node for the proofs of `UpgradedClient` and `UpgradedConsensusState` at the last height of the old chain. +3. Update the counterparty client to the last height of the old chain using the `UpdateClient` msg. +4.
Submit an `UpgradeClient` msg to the counterparty chain with the `UpgradedClient`, `UpgradedConsensusState` and their respective proofs. +5. Submit an `UpdateClient` msg to the counterparty chain with a header from the new upgraded chain. + +The Tendermint client on the counterparty chain will verify that the upgrading chain did indeed commit to the upgraded client and upgraded consensus state at the upgrade height (since the upgrade height is included in the key). If the proofs are verified against the upgrade height, then the client will upgrade to the new client while retaining all of its client-customized fields. Thus, it will retain its old TrustingPeriod, TrustLevel, MaxClockDrift, etc; while adopting the new chain-specified fields such as UnbondingPeriod, ChainId, UpgradePath, etc. Note, this can lead to an invalid client since the old client-chosen fields may no longer be valid given the new chain-chosen fields. Upgrading chains should try to avoid these situations by not altering parameters that can break old clients. For an example, see the UnbondingPeriod example in the supported upgrades section. + +The upgraded consensus state will serve purely as a basis of trust for future `UpdateClientMsgs` and will not contain a consensus root to perform proof verification against. Thus, relayers must submit an `UpdateClientMsg` with a header from the new chain so that the connection can be used for proof verification again. diff --git a/docs/ibc/v7.8.x/intro.mdx b/docs/ibc/v7.8.x/intro.mdx new file mode 100644 index 00000000..7327c539 --- /dev/null +++ b/docs/ibc/v7.8.x/intro.mdx @@ -0,0 +1,14 @@ +--- +title: IBC-Go Documentation +description: Welcome to the IBC-Go documentation! +--- + +Welcome to the IBC-Go documentation! + +The Inter-Blockchain Communication protocol (IBC) is an end-to-end, connection-oriented, stateful protocol for reliable, ordered, and authenticated communication between heterogeneous blockchains arranged in an unknown and dynamic topology. 
+ +IBC is a protocol that allows blockchains to talk to each other. + +The protocol realizes this interoperability by specifying a set of data structures, abstractions, and semantics that can be implemented by any distributed ledger that satisfies a small set of requirements. + +IBC can be used to build a wide range of cross-chain applications that include token transfers, atomic swaps, multi-chain smart contracts (with or without mutually comprehensible VMs), and data and code sharding of various kinds. diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/client-state.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/client-state.mdx new file mode 100644 index 00000000..49f0f71b --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/client-state.mdx @@ -0,0 +1,77 @@ +--- +title: Client State interface +description: >- + Learn how to implement the ClientState interface. This list of methods + described here does not include all methods of the interface. Some methods are + explained in detail in the relevant sections of the guide. +--- + +Learn how to implement the [`ClientState`](https://github.com/cosmos/ibc-go/blob/v6.0.0/modules/core/exported/client.go#L40) interface. This list of methods described here does not include all methods of the interface. Some methods are explained in detail in the relevant sections of the guide. + +## `ClientType` method + +`ClientType` should return a unique string identifier of the light client. This will be used when generating a client identifier. +The format is created as follows: `ClientType-{N}` where `{N}` is the unique global nonce associated with a specific client. + +## `GetLatestHeight` method + +`GetLatestHeight` should return the latest block height that the client state represents. + +## `Validate` method + +`Validate` should validate every client state field and should return an error if any value is invalid. The light client +implementer is in charge of determining which checks are required. 
See the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/v6.0.0/modules/light-clients/07-tendermint/types/client_state.go#L101) as a reference. + +## `Status` method + +`Status` must return the status of the client. + +* An `Active` status indicates that clients are allowed to process packets. +* A `Frozen` status indicates that misbehaviour was detected in the counterparty chain and the client is not allowed to be used. +* An `Expired` status indicates that a client is not allowed to be used because it was not updated for longer than the trusting period. +* An `Unknown` status indicates that there was an error in determining the status of a client. + +All possible `Status` types can be found [here](https://github.com/cosmos/ibc-go/blob/v6.0.0/modules/core/exported/client.go#L26-L36). + +This field is returned in the response of the gRPC [`ibc.core.client.v1.Query/ClientStatus`](https://github.com/cosmos/ibc-go/blob/v6.0.0/modules/core/02-client/types/query.pb.go#L665) endpoint. + +## `ZeroCustomFields` method + +`ZeroCustomFields` should return a copy of the light client with all client-customizable fields set to their zero value. It should not mutate the fields of the light client. +This method is used when [scheduling upgrades](https://github.com/cosmos/ibc-go/blob/v6.0.0/modules/core/02-client/keeper/proposal.go#L89). Upgrades are used to upgrade chain-specific fields. +In the Tendermint case, this may be the chain ID or the unbonding period. +For more information about client upgrades see the [Handling upgrades](/docs/ibc/v7.8.x/light-clients/developer-guide/upgrades) section. + +## `GetTimestampAtHeight` method + +`GetTimestampAtHeight` must return the timestamp for the consensus state associated with the provided height. +This value is used to facilitate timeouts by checking the packet timeout timestamp against the returned value.
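As a toy illustration of how the returned timestamp feeds into timeout handling: the consensus state timestamp at the proof height is compared against the packet's timeout timestamp, both in nanoseconds. A minimal sketch with illustrative names (not taken from ibc-go):

```go
package main

import "fmt"

// hasTimedOut sketches the timestamp-timeout comparison: a packet has
// timed out once the consensus state timestamp at the proof height is
// greater than or equal to the packet's timeout timestamp. A timeout
// timestamp of zero means no timestamp-based timeout is set.
func hasTimedOut(consensusTimestampNs, timeoutTimestampNs uint64) bool {
	return timeoutTimestampNs != 0 && consensusTimestampNs >= timeoutTimestampNs
}

func main() {
	fmt.Println(hasTimedOut(2_000_000_000, 1_000_000_000)) // consensus time is past the timeout: true
	fmt.Println(hasTimedOut(500_000_000, 1_000_000_000))   // not yet elapsed: false
}
```

Real packet timeouts also consider a timeout height; this sketch only shows the timestamp half.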
+ +## `Initialize` method + +Clients must validate the initial consensus state, and set the initial client state and consensus state in the provided client store. +Clients may also store any necessary client-specific metadata. + +`Initialize` is called when a [client is created](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client/keeper/client.go#L32). + +## `VerifyMembership` method + +`VerifyMembership` must verify the existence of a value at a given commitment path at the specified height. For more information about membership proofs +see the [Existence and non-existence proofs section](/docs/ibc/v7.8.x/light-clients/developer-guide/proofs). + +## `VerifyNonMembership` method + +`VerifyNonMembership` must verify the absence of a value at a given commitment path at a specified height. For more information about non-membership proofs +see the [Existence and non-existence proofs section](/docs/ibc/v7.8.x/light-clients/developer-guide/proofs). + +## `VerifyClientMessage` method + +`VerifyClientMessage` must verify a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. +It must handle each type of `ClientMessage` appropriately. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` +will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned +if the `ClientMessage` fails to verify. + +## `CheckForMisbehaviour` method + +`CheckForMisbehaviour` checks for evidence of misbehaviour in a `Header` or `Misbehaviour` type. It assumes the `ClientMessage` +has already been verified.
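Tying back to the `ClientType` section at the top of this page, the `ClientType-{N}` identifier scheme can be sketched in a few lines of Go. The helper names here are illustrative; the real formatting and parsing helpers live in the 02-client types package:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// formatClientIdentifier builds an identifier following the
// `ClientType-{N}` scheme, where N is a global nonce.
func formatClientIdentifier(clientType string, sequence uint64) string {
	return fmt.Sprintf("%s-%d", clientType, sequence)
}

// parseClientSequence recovers the nonce from a client identifier by
// splitting on the last dash (client types themselves contain dashes,
// e.g. "07-tendermint").
func parseClientSequence(clientID string) (uint64, error) {
	i := strings.LastIndex(clientID, "-")
	if i < 0 {
		return 0, fmt.Errorf("invalid client identifier %s", clientID)
	}
	return strconv.ParseUint(clientID[i+1:], 10, 64)
}

func main() {
	id := formatClientIdentifier("07-tendermint", 42)
	fmt.Println(id) // 07-tendermint-42
}
```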
diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/consensus-state.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/consensus-state.mdx new file mode 100644 index 00000000..48bfc7d2 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/consensus-state.mdx @@ -0,0 +1,24 @@ +--- +title: Consensus State interface +description: >- + A ConsensusState is the snapshot of the counterparty chain that an IBC client + uses to verify proofs (e.g. a block). +--- + +A `ConsensusState` is the snapshot of the counterparty chain that an IBC client uses to verify proofs (e.g. a block). + +The further development of multiple types of IBC light clients and the difficulties presented by this generalization problem (see [ADR-006](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-006-02-client-refactor.md) for more information about this historical context) led to the design decision of each client keeping track of and setting its own `ClientState` and `ConsensusState`, as well as the simplification of client `ConsensusState` updates through the generalized `ClientMessage` interface. + +The below [`ConsensusState`](https://github.com/cosmos/ibc-go/blob/main/modules/core/exported/client.go#L134) interface is a generalized interface for the types of information a `ConsensusState` could contain. For a reference `ConsensusState` implementation, please see the [Tendermint light client `ConsensusState`](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint/consensus_state.go). + +## `ClientType` method + +This is the type of client consensus. It should be the same as the `ClientType` return value for the [corresponding `ClientState` implementation](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state). + +## `GetTimestamp` method + +`GetTimestamp` should return the timestamp (in nanoseconds) of the consensus state snapshot.
+ +## `ValidateBasic` method + +`ValidateBasic` should validate every consensus state field and should return an error if any value is invalid. The light client implementer is in charge of determining which checks are required. diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/genesis.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/genesis.mdx new file mode 100644 index 00000000..18929e26 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/genesis.mdx @@ -0,0 +1,36 @@ +--- +title: Handling Genesis +--- + +## Synopsis + +Learn how to implement the `ExportMetadata` interface. + +## Pre-requisite readings + +- [Cosmos SDK module genesis](https://docs.cosmos.network/v0.47/building-modules/genesis) + +`ClientState` instances are provided their own isolated and namespaced client store upon initialisation. `ClientState` implementations may choose to store any amount of arbitrary metadata in order to verify counterparty consensus state and perform light client updates correctly. + +The `ExportMetadata` method of the [`ClientState` interface](https://github.com/cosmos/ibc-go/blob/e650be91614ced7be687c30eb42714787a3bbc59/modules/core/exported/client.go) provides light client modules with the ability to persist metadata in genesis exports. + +```go +ExportMetadata(clientStore sdk.KVStore) []GenesisMetadata +``` + +`ExportMetadata` is provided the client store and returns an array of `GenesisMetadata`. For maximum flexibility, `GenesisMetadata` is defined as a simple interface containing two distinct `Key` and `Value` accessor methods. + +```go +type GenesisMetadata interface { + // returns the store key that contains metadata without clientID-prefix + GetKey() []byte + // returns metadata value + GetValue() []byte +} +``` + +This allows `ClientState` instances to retrieve and export any number of key-value pairs which are maintained within the store in their raw `[]byte` form.
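A concrete `GenesisMetadata` implementation can be very small. The sketch below is illustrative only — the type and function names are not the actual ibc-go identifiers, and the client store is modelled as a plain map rather than an `sdk.KVStore`:

```go
package main

import "fmt"

// GenesisMetadata mirrors the two-accessor interface shown above.
type GenesisMetadata interface {
	GetKey() []byte
	GetValue() []byte
}

// keyValuePair is a minimal concrete GenesisMetadata implementation.
type keyValuePair struct {
	key   []byte
	value []byte
}

func (kv keyValuePair) GetKey() []byte   { return kv.key }
func (kv keyValuePair) GetValue() []byte { return kv.value }

// exportMetadata sketches how a light client might walk its namespaced
// store (modelled here as a map) and emit raw key-value genesis metadata.
func exportMetadata(store map[string][]byte) []GenesisMetadata {
	gm := make([]GenesisMetadata, 0, len(store))
	for k, v := range store {
		gm = append(gm, keyValuePair{key: []byte(k), value: v})
	}
	return gm
}

func main() {
	store := map[string][]byte{"iterateConsensusStates/1": []byte("height-1")}
	for _, md := range exportMetadata(store) {
		fmt.Printf("%s => %s\n", md.GetKey(), md.GetValue())
	}
}
```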
+ +When a chain is started with a `genesis.json` file which contains `ClientState` metadata (for example, when performing manual upgrades using an exported `genesis.json`) the `02-client` submodule of core IBC will handle setting the key-value pairs within their respective client stores. [See `02-client` `InitGenesis`](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/core/02-client/genesis.go#L18-L22). + +Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/light-clients/07-tendermint/genesis.go#L12) for an example. diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/overview.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/overview.mdx new file mode 100644 index 00000000..f4ea53fb --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/overview.mdx @@ -0,0 +1,74 @@ +--- +title: Overview +--- + +## Synopsis + +Learn how to build IBC light client modules and fulfill the interfaces required to integrate with core IBC. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v7.8.x/ibc/overview) +- [IBC Transport, Authentication, and Ordering Layer - Clients](https://tutorials.cosmos.network/academy/3-ibc/4-clients.html) +- [ICS-002 Client Semantics](https://github.com/cosmos/ibc/tree/main/spec/core/ics-002-client-semantics) + + + +IBC uses light clients in order to provide trust-minimized interoperability between sovereign blockchains. Light clients operate under a strict set of rules which provide security guarantees for state updates and facilitate the ability to verify the state of a remote blockchain using merkle proofs. + +The following aims to provide a high level IBC light client module developer guide. Access to IBC light clients is gated by the core IBC `MsgServer` which utilizes the abstractions set by the `02-client` submodule to call into a light client module.
A light client module developer is only required to implement a set of interfaces as defined in the `modules/core/exported` package of ibc-go. + +A light client module developer should be concerned with three main interfaces: + +- [`ClientState`](#clientstate) encapsulates the light client implementation and its semantics. +- [`ConsensusState`](#consensusstate) tracks consensus data used for verification of client updates, misbehaviour detection and proof verification of counterparty state. +- [`ClientMessage`](#clientmessage) used for submitting block headers for client updates and submission of misbehaviour evidence using conflicting headers. + +Throughout this guide the `07-tendermint` light client module may be referred to as a reference example. + +## Concepts and vocabulary + +### `ClientState` + +`ClientState` is a term used to define the data structure which encapsulates opaque light client state. The `ClientState` contains all the information needed to verify a `ClientMessage` and perform membership and non-membership proof verification of counterparty state. This includes properties that refer to the remote state machine, the light client type and the specific light client instance. + +For example: + +- Constraints used for client updates. +- Constraints used for misbehaviour detection. +- Constraints used for state verification. +- Constraints used for client upgrades. + +The `ClientState` type maintained within the light client module _must_ implement the [`ClientState`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L36) interface defined in `modules/core/exported/client.go`. +The methods which make up this interface are detailed at a more granular level in the [ClientState section of this guide](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state).
+ +Please refer to the `07-tendermint` light client module's [`ClientState` definition](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/proto/ibc/lightclients/tendermint/v1/tendermint.proto#L18) containing information such as chain ID, status, latest height, unbonding period and proof specifications. + +### `ConsensusState` + +`ConsensusState` is a term used to define the data structure which encapsulates consensus data at a particular point in time, i.e. a unique height or sequence number of a state machine. There must exist a single trusted `ConsensusState` for each height. `ConsensusState` generally contains a trusted root, validator set information and timestamp. + +For example, the `ConsensusState` of the `07-tendermint` light client module defines a trusted root which is used by the `ClientState` to perform verification of membership and non-membership commitment proofs, as well as the next validator set hash used for verifying that headers can be trusted in client updates. + +The `ConsensusState` type maintained within the light client module _must_ implement the [`ConsensusState`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L134) interface defined in `modules/core/exported/client.go`. +The methods which make up this interface are detailed at a more granular level in the [`ConsensusState` section of this guide](/docs/ibc/v7.8.x/light-clients/developer-guide/consensus-state). + +### `Height` + +`Height` defines a monotonically increasing sequence number which provides ordering of consensus state data persisted through client updates. +IBC light client module developers are expected to use the [concrete type](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/proto/ibc/core/client/v1/client.proto#L89) provided by the `02-client` submodule.
This implements the expectations required by the [`Height`](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/core/exported/client.go#L157) interface defined in `modules/core/exported/client.go`. + +### `ClientMessage` + +`ClientMessage` refers to the interface type [`ClientMessage`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L148) used for performing updates to a `ClientState` stored on chain. +This may be any concrete type which produces a change in state to the IBC client when verified. + +The following are considered as valid update scenarios: + +- A block header which when verified inserts a new `ConsensusState` at a unique height. +- A batch of block headers which when verified inserts `N` `ConsensusState` instances for `N` unique heights. +- Evidence of misbehaviour provided by two conflicting block headers. + +Learn more in the [Handling update and misbehaviour](/docs/ibc/v7.8.x/light-clients/developer-guide/updates-and-misbehaviour) section. diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/proofs.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/proofs.mdx new file mode 100644 index 00000000..b17fe8ac --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/proofs.mdx @@ -0,0 +1,68 @@ +--- +title: Existence/Non-Existence Proofs +description: >- + IBC uses merkle proofs in order to verify the state of a remote counterparty + state machine given a trusted root, and ICS-23 is a general approach for + verifying merkle trees which is used in ibc-go. +--- + +IBC uses merkle proofs in order to verify the state of a remote counterparty state machine given a trusted root, and [ICS-23](https://github.com/cosmos/ics23/tree/master/go) is a general approach for verifying merkle trees which is used in ibc-go. + +Currently, all Cosmos SDK modules contain their own stores, which maintain the state of the application module in an IAVL (immutable AVL) binary merkle tree format. 
Specifically with regard to IBC, core IBC maintains its own IAVL store, and IBC apps (e.g. transfer) maintain their own dedicated stores. The Cosmos SDK multistore therefore creates a simple merkle tree of all of these IAVL trees, and from each of these individual IAVL tree root hashes it derives a root hash for the application state tree as a whole (the `AppHash`). + +For the purposes of ibc-go, there are two types of proofs which are important: existence and non-existence proofs, terms which have been used interchangeably with membership and non-membership proofs. For the purposes of this guide, we will stick with "existence" and "non-existence". + +## Existence proofs + +Existence proofs are used in IBC transactions which involve verification of counterparty state for transactions which will result in the writing of provable state. For example, this includes verification of IBC store state for handshakes and packets. + +Put simply, existence proofs prove that a particular key and value exists in the tree. Under the hood, an IBC existence proof is comprised of two proofs: an IAVL proof that the key exists in IBC store/IBC root hash, and a proof that the IBC root hash exists in the multistore root hash. + +## Non-existence proofs + +Non-existence proofs verify the absence of data stored within counterparty state and are used to prove that a key does NOT exist in state. As stated above, these types of proofs can be used to time out packets by proving that the counterparty has not written a packet receipt into the store, meaning that a token transfer has NOT successfully occurred. + +Some trees (e.g. SMT) may have a sentinel empty child for non-existent keys. In this case, the ICS-23 proof spec should include this `EmptyChild` so that ICS-23 handles the non-existence proof correctly. + +In some cases, there is a necessity to "mock" non-existence proofs if the counterparty does not have the ability to prove absence.
Since the verification method is designed to give complete control to client implementations, clients can support chains that do not provide absence proofs by verifying the existence of a non-empty sentinel `ABSENCE` value. In these special cases, the proof provided will be an ICS-23 `Existence` proof, and the client will verify that the `ABSENCE` value is stored under the given path for the given height. + +## State verification methods: `VerifyMembership` and `VerifyNonMembership` + +The state verification functions for all IBC data types have been consolidated into two generic methods, `VerifyMembership` and `VerifyNonMembership`. + +From the [`ClientState` interface definition](https://github.com/cosmos/ibc-go/blob/e650be91614ced7be687c30eb42714787a3bbc59/modules/core/exported/client.go#L68-L91), we find: + +```go expandable +/ VerifyMembership is a generic proof verification method which verifies a proof of the existence of a value at a given CommitmentPath at the specified height. +/ The caller is expected to construct the full CommitmentPath from a CommitmentPrefix and a standardized path (as defined in ICS 24). +VerifyMembership( + ctx sdk.Context, + clientStore sdk.KVStore, + cdc codec.BinaryCodec, + height Height, + delayTimePeriod uint64, + delayBlockPeriod uint64, + proof []byte, + path Path, + value []byte, +) + +error + +/ VerifyNonMembership is a generic proof verification method which verifies the absence of a given CommitmentPath at a specified height. +/ The caller is expected to construct the full CommitmentPath from a CommitmentPrefix and a standardized path (as defined in ICS 24). +VerifyNonMembership( + ctx sdk.Context, + clientStore sdk.KVStore, + cdc codec.BinaryCodec, + height Height, + delayTimePeriod uint64, + delayBlockPeriod uint64, + proof []byte, + path Path, +) + +error +``` + +Both are expected to be provided with a standardised key path, `exported.Path`, as defined in [ICS-24 host requirements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). Membership verification requires callers to provide the value marshalled as `[]byte`. Delay period values should be zero for non-packet processing verification.
A zero proof height is now allowed by core IBC and may be passed into `VerifyMembership` and `VerifyNonMembership`. Light clients are responsible for returning an error if a zero proof height is invalid behaviour. + +Please refer to the [ICS-23 implementation](https://github.com/cosmos/ibc-go/blob/e093d85b533ab3572b32a7de60b88a0816bed4af/modules/core/23-commitment/types/merkle.go#L131-L205) for a concrete example. diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/proposals.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/proposals.mdx new file mode 100644 index 00000000..d497ae9c --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/proposals.mdx @@ -0,0 +1,31 @@ +--- +title: Handling Proposals +--- + +It is possible to update the client with the state of the substitute client through a governance proposal. [This type of governance proposal](/docs/ibc/v7.8.x/light-clients/developer-guide/proposals) is typically used to recover an expired or frozen client, as it can recover the entire state and therefore all existing channels built on top of the client. `CheckSubstituteAndUpdateState` should be implemented to handle the proposal. + +## Implementing `CheckSubstituteAndUpdateState` + +In the [`ClientState` interface](https://github.com/cosmos/ibc-go/blob/e650be91614ced7be687c30eb42714787a3bbc59/modules/core/exported/client.go), we find: + +```go +/ CheckSubstituteAndUpdateState must verify that the provided substitute may be used to update the subject client. +/ The light client must set the updated client and consensus states within the clientStore for the subject client. +CheckSubstituteAndUpdateState( + ctx sdk.Context, + cdc codec.BinaryCodec, + subjectClientStore, + substituteClientStore sdk.KVStore, + substituteClient ClientState) + +error +``` + +Prior to updating, this function must verify that: + +* the substitute client is the same type as the subject client.
For a reference implementation, please see the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/light-clients/07-tendermint/proposal_handle.go#L32). +* the provided substitute may be used to update the subject client. This may mean that certain parameters must remain unaltered. For example, a [valid substitute Tendermint light client](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/light-clients/07-tendermint/proposal_handle.go#L84) must NOT change the chain ID, trust level, max clock drift, unbonding period, proof specs or upgrade path. Please note that `AllowUpdateAfterMisbehaviour` and `AllowUpdateAfterExpiry` have been deprecated (see ADR 026 for more information). + +After these checks are performed, the function must [set the updated client and consensus states](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/light-clients/07-tendermint/proposal_handle.go#L77) within the client store for the subject client. + +Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/light-clients/07-tendermint/proposal_handle.go#L27) for reference. diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/setup.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/setup.mdx new file mode 100644 index 00000000..e799561a --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/setup.mdx @@ -0,0 +1,152 @@ +--- +title: Setup +--- + +## Synopsis + +Learn how to configure light client modules and create clients using core IBC and the `02-client` submodule. + +The last step to finish the development of the light client is to implement the `AppModuleBasic` interface to allow it to be added to the chain's `app.go` alongside other light client types the chain enables.
+ +Finally, a succinct rundown is given of the remaining steps to make the light client operational: getting the light client type passed through governance and creating the clients. + +## Configuring a light client module + +An IBC light client module must implement the [`AppModuleBasic`](https://github.com/cosmos/cosmos-sdk/blob/main/types/module/module.go#L50) interface in order to register its concrete types against the core IBC interfaces defined in `modules/core/exported`. This is accomplished via the `RegisterInterfaces` method which provides the light client module with the opportunity to register codec types using the chain's `InterfaceRegistry`. Please refer to the [`07-tendermint` codec registration](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/light-clients/07-tendermint/codec.go#L11). + +The `AppModuleBasic` interface may also be leveraged to install custom CLI handlers for light client module users. Light client modules can safely no-op for interface methods which they do not wish to implement. + +Please refer to the [core IBC documentation](/docs/ibc/v7.8.x/ibc/integration#integrating-light-clients) for how to configure additional light client modules alongside `07-tendermint` in `app.go`. + +See below for an example of the `07-tendermint` implementation of `AppModuleBasic`. + +```go expandable +var _ module.AppModuleBasic = AppModuleBasic{ +} + +/ AppModuleBasic defines the basic application module used by the tendermint light client. +/ Only the RegisterInterfaces function needs to be implemented. All other functions perform +/ a no-op. +type AppModuleBasic struct{ +} + +/ Name returns the tendermint module name. +func (AppModuleBasic) + +Name() + +string { + return ModuleName +} + +/ RegisterLegacyAminoCodec performs a no-op. The Tendermint client does not support amino. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(*codec.LegacyAmino) { +} + +/ RegisterInterfaces registers module concrete types into protobuf Any.
This allows core IBC +/ to unmarshal tendermint light client types. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + RegisterInterfaces(registry) +} + +/ DefaultGenesis performs a no-op. Genesis is not supported for the tendermint light client. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return nil +} + +/ ValidateGenesis performs a no-op. Genesis is not supported for the tendermint light client. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + return nil +} + +/ RegisterGRPCGatewayRoutes performs a no-op. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux) { +} + +/ GetTxCmd performs a no-op. Please see the 02-client cli commands. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd performs a no-op. Please see the 02-client cli commands. +func (AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return nil +} +``` + +## Creating clients + +A client is created by executing a new `MsgCreateClient` transaction composed with a valid `ClientState` and initial `ConsensusState` encoded as protobuf `Any`s. +Generally, this is performed by an off-chain process known as an [IBC relayer](https://github.com/cosmos/ibc/tree/main/spec/relayer/ics-018-relayer-algorithms); however, this is not a strict requirement. + +See below for a list of IBC relayer implementations: + +- [cosmos/relayer](https://github.com/cosmos/relayer) +- [informalsystems/hermes](https://github.com/informalsystems/hermes) +- [confio/ts-relayer](https://github.com/confio/ts-relayer) + +Stateless checks are performed within the [`ValidateBasic`](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/core/02-client/types/msgs.go#L48) method of `MsgCreateClient`.
+ +```protobuf expandable +/ MsgCreateClient defines a message to create an IBC client +message MsgCreateClient { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + / light client state + google.protobuf.Any client_state = 1 [(gogoproto.moretags) = "yaml:\"client_state\""]; + / consensus state associated with the client that corresponds to a given + / height. + google.protobuf.Any consensus_state = 2 [(gogoproto.moretags) = "yaml:\"consensus_state\""]; + / signer address + string signer = 3; +} +``` + +Leveraging protobuf `Any` encoding allows core IBC to [unpack](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/core/keeper/msg_server.go#L28-L36) both the `ClientState` and `ConsensusState` into their respective interface types registered previously using the light client module's `RegisterInterfaces` method. + +Within the `02-client` submodule, the [`ClientState` is then initialized](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/core/02-client/keeper/client.go#L30-L34) with its own isolated key-value store, namespaced using a unique client identifier. + +In order to successfully create an IBC client using a new client type, it [must be supported](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/core/02-client/keeper/client.go#L18-L24). Light client support in IBC is gated by on-chain governance. The allow list may be updated by submitting a new governance proposal to update the `02-client` parameter `AllowedClients`. 
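Conceptually, the `AllowedClients` gate reduces to a simple membership check on the `02-client` parameter before a client of a given type may be created. The sketch below is illustrative only; `isAllowedClient` and the hard-coded list are not ibc-go APIs.

```go
package main

import "fmt"

// isAllowedClient reports whether a client type is present in the
// 02-client AllowedClients parameter (illustrative stand-in for the
// on-chain check gating client creation).
func isAllowedClient(allowedClients []string, clientType string) bool {
	for _, c := range allowedClients {
		if c == clientType {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"06-solomachine", "07-tendermint"}
	fmt.Println(isAllowedClient(allowed, "07-tendermint")) // true
	fmt.Println(isAllowedClient(allowed, "0x-new-client")) // false: requires a governance proposal first
}
```

A new client type fails this check until governance adds it to the allow list, which is what the parameter-change proposal below accomplishes.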
{/* - TODO: update when params are managed by ibc-go - [Link](https://github.com/cosmos/ibc-go/issues/2010) */} +See below for an example: + +```shell +%s tx gov submit-proposal param-change proposal.json --from= +``` + +where `proposal.json` contains: + +```json expandable +{ + "title": "IBC Clients Param Change", + "description": "Update allowed clients", + "changes": [ + { + "subspace": "ibc", + "key": "AllowedClients", + "value": ["06-solomachine", "07-tendermint", "0x-new-client"] + } + ], + "deposit": "1000stake" +} +``` diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/updates-and-misbehaviour.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/updates-and-misbehaviour.mdx new file mode 100644 index 00000000..74a7c2f4 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/updates-and-misbehaviour.mdx @@ -0,0 +1,106 @@ +--- +title: Handling Updates and Misbehaviour +description: >- + As mentioned before in the documentation about implementing the ConsensusState + interface, ClientMessage is an interface used to update an IBC client. This + update may be performed by: +--- + +As mentioned before in the documentation about [implementing the `ConsensusState` interface](/docs/ibc/v7.8.x/light-clients/developer-guide/consensus-state), [`ClientMessage`](https://github.com/cosmos/ibc-go/blob/main/modules/core/exported/client.go#L145) is an interface used to update an IBC client. This update may be performed by: + +* a single header +* a batch of headers +* evidence of misbehaviour, +* or any type which when verified produces a change to the consensus state of the IBC client. + +This interface has been purposefully kept generic in order to give the maximum amount of flexibility to the light client implementer.
+ +## Implementing the `ClientMessage` interface + +Find the `ClientMessage` interface in `modules/core/exported`: + +```go +type ClientMessage interface { + proto.Message + + ClientType() + +string + ValidateBasic() + +error +} +``` + +The `ClientMessage` will be passed to the client to be used in [`UpdateClient`](https://github.com/cosmos/ibc-go/blob/57da75a70145409247e85365b64a4b2fc6ddad2f/modules/core/02-client/keeper/client.go#L53), which retrieves the `ClientState` by client ID (available in `MsgUpdateClient`). This `ClientState` implements the [`ClientState` interface](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state) for its specific consensus type (e.g. Tendermint). + +`UpdateClient` will then handle a number of cases including misbehaviour and/or updating the consensus state, utilizing the specific methods defined in the relevant `ClientState`. + +```go +VerifyClientMessage(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg ClientMessage) + +error +CheckForMisbehaviour(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg ClientMessage) + +bool +UpdateStateOnMisbehaviour(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg ClientMessage) + +UpdateState(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg ClientMessage) []Height +``` + +## Handling updates and misbehaviour + +The functions for handling updates to a light client and evidence of misbehaviour are all found in the [`ClientState`](https://github.com/cosmos/ibc-go/blob/v6.0.0/modules/core/exported/client.go#L40) interface, and will be discussed below. + +> It is important to note that `Misbehaviour` in this particular context is referring to misbehaviour on the chain level intended to fool the light client. This will be defined by each light client. + +## `VerifyClientMessage` + +`VerifyClientMessage` must verify a `ClientMessage`.
A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. To understand how to implement a `ClientMessage`, please refer to the [Implementing the `ClientMessage` interface](#implementing-the-clientmessage-interface) section. + +It must handle each type of `ClientMessage` appropriately. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned if the `ClientMessage` fails to verify. + +For an example of a `VerifyClientMessage` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint/update.go#L20). + +## `CheckForMisbehaviour` + +`CheckForMisbehaviour` checks for evidence of misbehaviour in a `Header` or `Misbehaviour` type. It assumes the `ClientMessage` has already been verified. + +For an example of a `CheckForMisbehaviour` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint/misbehaviour_handle.go#L18). + +> The Tendermint light client [defines `Misbehaviour`](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint/misbehaviour.go) as two types of situations: two conflicting `Header`s with the same height submitted to update a client's `ConsensusState` within the same trusting period, or two conflicting `Header`s submitted at different heights whose consensus states are not in the correct monotonic time ordering (BFT time violation). More explicitly, an update to a new height must have a timestamp greater than the previous consensus state, and, if a consensus state is inserted at a past height, its time must be less than those heights which come after and greater than heights which come before.
+ +## `UpdateStateOnMisbehaviour` + +`UpdateStateOnMisbehaviour` should perform appropriate state changes on a client state given that misbehaviour has been detected and verified. This method should only be called when misbehaviour is detected, as it does not perform any misbehaviour checks. Notably, it should freeze the client so that calling the `Status` function on the associated client state no longer returns `Active`. + +For an example of an `UpdateStateOnMisbehaviour` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint/update.go#L197). + +## `UpdateState` + +`UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. It should perform a no-op on duplicate updates. + +It assumes the `ClientMessage` has already been verified. + +For an example of an `UpdateState` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint/update.go#L131). + +## Putting it all together + +The `02-client` `Keeper` module in ibc-go offers a reference as to how these functions will be used to [update the client](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client/keeper/client.go#L48).
+ +```go expandable +if err := clientState.VerifyClientMessage(clientMessage); err != nil { + return err +} + +foundMisbehaviour := clientState.CheckForMisbehaviour(clientMessage) + if foundMisbehaviour { + clientState.UpdateStateOnMisbehaviour(clientMessage) + / emit misbehaviour event + return +} + +clientState.UpdateState(clientMessage) / expects no-op on duplicate header +/ emit update event +return +``` diff --git a/docs/ibc/v7.8.x/light-clients/developer-guide/upgrades.mdx b/docs/ibc/v7.8.x/light-clients/developer-guide/upgrades.mdx new file mode 100644 index 00000000..a2568fa4 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/developer-guide/upgrades.mdx @@ -0,0 +1,69 @@ +--- +title: Handling Upgrades +--- + +It is vital that high-value IBC clients can upgrade along with their underlying chains to avoid disruption to the IBC ecosystem. Thus, IBC client developers will want to implement upgrade functionality to enable clients to maintain connections and channels even across chain upgrades. + +## Implementing `VerifyUpgradeAndUpdateState` + +The IBC protocol allows client implementations to provide a path to upgrading clients given the upgraded `ClientState`, upgraded `ConsensusState` and proofs for each. This path is provided in the `VerifyUpgradeAndUpdateState` method: + +```go expandable +/ NOTE: proof heights are not included as upgrade to a new revision is expected to pass only on the last height committed by the current revision. Clients are responsible for ensuring that the planned last height of the current revision is somehow encoded in the proof verification process. +/ This is to ensure that no premature upgrades occur, since upgrade plans committed to by the counterparty may be cancelled or modified before the last planned height. +/ If the upgrade is verified, the upgraded client and consensus states must be set in the client store.
+func (cs ClientState) + +VerifyUpgradeAndUpdateState( + ctx sdk.Context, + cdc codec.BinaryCodec, + store sdk.KVStore, + newClient ClientState, + newConsState ConsensusState, + proofUpgradeClient, + proofUpgradeConsState []byte, +) + +error +``` + +> Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/modules/light-clients/07-tendermint/upgrade.go#L27) as an example for implementation. + +It is important to note that light clients **must** handle all management of client and consensus states including the setting of updated `ClientState` and `ConsensusState` in the client store. This can include verifying that the submitted upgraded `ClientState` is of a valid `ClientState` type, that the height of the upgraded client is not greater than the height of the current client (in order to preserve BFT monotonic time), or that certain parameters which should not be changed have not been altered in the upgraded `ClientState`. + +Developers must ensure that the `MsgUpgradeClient` does not pass until the last height of the old chain has been committed, and after the chain upgrades, the `MsgUpgradeClient` should pass once and only once on all counterparty clients. + +### Upgrade path + +Clients should have **prior knowledge of the merkle path** that the upgraded client and upgraded consensus states will use. The height at which the upgrade has occurred should also be encoded in the proof. + +> The Tendermint client implementation accomplishes this by including an `UpgradePath` in the `ClientState` itself, which is used along with the upgrade height to construct the merkle path under which the client state and consensus state are committed. 
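One way to combine a configured upgrade path with the last height of the current revision is to join them into the store paths under which the upgraded states are committed. The sketch below mirrors the *shape* used by the Tendermint client; the exact key layout is client-specific, and `upgradeKeys` with its `upgradedClient`/`upgradedConsState` leaf names is an illustrative assumption, not ibc-go code.

```go
package main

import (
	"fmt"
	"strings"
)

// upgradeKeys derives the store paths under which an upgraded client state
// and consensus state are expected to be committed, given the client's
// configured upgrade path and the planned last height of the current revision.
func upgradeKeys(upgradePath []string, lastHeight uint64) (clientKey, consStateKey string) {
	prefix := strings.Join(upgradePath, "/")
	clientKey = fmt.Sprintf("%s/%d/upgradedClient", prefix, lastHeight)
	consStateKey = fmt.Sprintf("%s/%d/upgradedConsState", prefix, lastHeight)
	return clientKey, consStateKey
}

func main() {
	c, s := upgradeKeys([]string{"upgrade", "upgradedIBCState"}, 500)
	fmt.Println(c) // upgrade/upgradedIBCState/500/upgradedClient
	fmt.Println(s) // upgrade/upgradedIBCState/500/upgradedConsState
}
```

Because the height is baked into the key, a proof can only verify against states committed at the planned last height, which is how premature upgrades are prevented.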
+ +## Chain specific vs client specific client parameters + +Developers should maintain the distinction between client parameters that are uniform across every valid light client of a chain (chain-chosen parameters), and client parameters that are customizable by each individual client (client-chosen parameters); since this distinction is necessary to implement the `ZeroCustomFields` method in the [`ClientState` interface](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state): + +```go +/ Utility function that zeroes out any client customizable fields in client state +/ Ledger enforced fields are maintained while all custom fields are zero values +/ Used to verify upgrades +func (cs ClientState) + +ZeroCustomFields() + +ClientState +``` + +Developers must ensure that the new client adopts all of the new client parameters that must be uniform across every valid light client of a chain (chain-chosen parameters), while maintaining the client parameters that are customizable by each individual client (client-chosen parameters) from the previous version of the client. `ZeroCustomFields` is a useful utility function to ensure only chain specific fields are updated during upgrades. + +## Security + +Upgrades must adhere to the IBC Security Model. IBC does not rely on the assumption of honest relayers for correctness. Thus users should not have to rely on relayers to maintain client correctness and security (though honest relayers must exist to maintain relayer liveness). While relayers may choose any set of client parameters while creating a new `ClientState`, this still holds under the security model since users can always choose a relayer-created client that suits their security and correctness needs or create a client with their desired parameters if no such client exists. + +However, when upgrading an existing client, one must keep in mind that there are already many users who depend on this client's particular parameters. 
**We cannot give the upgrading relayer free choice over these parameters once they have already been chosen. This would violate the security model** since users who rely on the client would have to rely on the upgrading relayer to maintain the same level of security. + +Thus, developers must make sure that their upgrade mechanism allows clients to upgrade the chain-specified parameters whenever a chain upgrade changes these parameters (examples in the Tendermint client include `UnbondingPeriod`, `TrustingPeriod`, `ChainID`, `UpgradePath`, etc), while ensuring that the relayer submitting the `MsgUpgradeClient` cannot alter the client-chosen parameters that the users are relying upon (examples in Tendermint client include `TrustLevel`, `MaxClockDrift`, etc). The previous paragraph discusses how `ZeroCustomFields` helps achieve this. + +### Document potential client parameter conflicts during upgrades + +Counterparty clients can upgrade securely by using all of the chain-chosen parameters from the chain-committed `UpgradedClient` and preserving all of the old client-chosen parameters. This enables chains to securely upgrade without relying on an honest relayer, however it can in some cases lead to an invalid final `ClientState` if the new chain-chosen parameters clash with the old client-chosen parameter. This can happen in the Tendermint client case if the upgrading chain lowers the `UnbondingPeriod` (chain-chosen) to a duration below that of a counterparty client's `TrustingPeriod` (client-chosen). Such cases should be clearly documented by developers, so that chains know which upgrades should be avoided to prevent this problem. The final upgraded client should also be validated in `VerifyUpgradeAndUpdateState` before returning to ensure that the client does not upgrade to an invalid `ClientState`. 
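The chain-chosen vs client-chosen split discussed above can be illustrated with a small sketch: zero out the client-chosen fields of the committed upgraded state, then carry the existing client's choices over. The `clientParams` struct and both helpers are illustrative stand-ins, not ibc-go types, and the field split shown (e.g. `TrustLevel` as client-chosen) merely follows the Tendermint examples named in the text.

```go
package main

import "fmt"

// clientParams is an illustrative split of client parameters:
// chain-chosen fields must track the upgraded chain, while client-chosen
// fields must be carried over from the pre-upgrade client untouched.
type clientParams struct {
	// chain-chosen
	ChainID         string
	UnbondingPeriod uint64
	// client-chosen
	TrustLevel    string
	MaxClockDrift uint64
}

// zeroCustomFields returns a copy with all client-chosen fields zeroed,
// mirroring the role ZeroCustomFields plays during upgrade verification.
func zeroCustomFields(p clientParams) clientParams {
	p.TrustLevel = ""
	p.MaxClockDrift = 0
	return p
}

// mergeUpgraded combines the chain-chosen fields committed by the upgrading
// chain with the client-chosen fields of the existing client, so the relayer
// submitting the upgrade cannot alter what users rely on.
func mergeUpgraded(upgraded, existing clientParams) clientParams {
	merged := zeroCustomFields(upgraded)
	merged.TrustLevel = existing.TrustLevel
	merged.MaxClockDrift = existing.MaxClockDrift
	return merged
}

func main() {
	existing := clientParams{ChainID: "chain-1", UnbondingPeriod: 100, TrustLevel: "1/3", MaxClockDrift: 10}
	upgraded := clientParams{ChainID: "chain-2", UnbondingPeriod: 50, TrustLevel: "2/3", MaxClockDrift: 99}

	final := mergeUpgraded(upgraded, existing)
	fmt.Println(final.ChainID, final.UnbondingPeriod)  // chain-2 50
	fmt.Println(final.TrustLevel, final.MaxClockDrift) // 1/3 10
}
```

Note how a parameter conflict of the kind described above (e.g. an upgraded `UnbondingPeriod` shorter than a preserved `TrustingPeriod`) would surface only after the merge, which is why the final merged state should be validated before being stored.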
diff --git a/docs/ibc/v7.8.x/light-clients/localhost/client-state.mdx b/docs/ibc/v7.8.x/light-clients/localhost/client-state.mdx new file mode 100644 index 00000000..8ec6f1b7 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/localhost/client-state.mdx @@ -0,0 +1,70 @@ +--- +title: ClientState +description: >- + The 09-localhost ClientState maintains a single field used to track the latest + sequence of the state machine i.e. the height of the blockchain. +--- + +The 09-localhost `ClientState` maintains a single field used to track the latest sequence of the state machine i.e. the height of the blockchain. + +```go +type ClientState struct { + / the latest height of the blockchain + LatestHeight clienttypes.Height +} +``` + +The 09-localhost `ClientState` is instantiated in the `InitGenesis` handler of the 02-client submodule in core IBC. +It calls `CreateLocalhostClient`, declaring a new `ClientState` and initializing it with its own client prefixed store. + +```go +func (k Keeper) + +CreateLocalhostClient(ctx sdk.Context) + +error { + var clientState localhost.ClientState + return clientState.Initialize(ctx, k.cdc, k.ClientStore(ctx, exported.LocalhostClientID), nil) +} +``` + +It is possible to disable the localhost client by removing the `09-localhost` entry from the `allowed_clients` list through governance. + +## Client updates + +The latest height is updated periodically through the ABCI [`BeginBlock`](https://docs.cosmos.network/v0.47/building-modules/beginblock-endblock) interface of the 02-client submodule in core IBC. + +[See `BeginBlocker` in abci.go.](https://github.com/cosmos/ibc-go/blob/v7.8.0/modules/core/02-client/abci.go#L12) + +```go +func BeginBlocker(ctx sdk.Context, k keeper.Keeper) { + / ... 
+ if clientState, found := k.GetClientState(ctx, exported.Localhost); found { + if k.GetClientStatus(ctx, clientState, exported.Localhost) == exported.Active { + k.UpdateLocalhostClient(ctx, clientState) +} + +} +} +``` + +The above calls into the 09-localhost `UpdateState` method of the `ClientState`. +It retrieves the current block height from the application context and sets the `LatestHeight` of the 09-localhost client. + +```go +func (cs ClientState) + +UpdateState(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg exported.ClientMessage) []exported.Height { + height := clienttypes.GetSelfHeight(ctx) + +cs.LatestHeight = height + + clientStore.Set(host.ClientStateKey(), clienttypes.MustMarshalClientState(cdc, &cs)) + +return []exported.Height{ + height +} +} +``` + +Note that the 09-localhost `ClientState` is not updated through the 02-client interface leveraged by conventional IBC light clients. diff --git a/docs/ibc/v7.8.x/light-clients/localhost/connection.mdx b/docs/ibc/v7.8.x/light-clients/localhost/connection.mdx new file mode 100644 index 00000000..828276d8 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/localhost/connection.mdx @@ -0,0 +1,29 @@ +--- +title: Connection +description: >- + The 09-localhost light client module integrates with core IBC through a single + sentinel localhost connection. The sentinel ConnectionEnd is stored by default + in the core IBC store. +--- + +The 09-localhost light client module integrates with core IBC through a single sentinel localhost connection. +The sentinel `ConnectionEnd` is stored by default in the core IBC store. + +This enables channel handshakes to be initiated out of the box by supplying the localhost connection identifier (`connection-localhost`) in the `connectionHops` parameter of `MsgChannelOpenInit`. + +The `ConnectionEnd` is created and set in store via the `InitGenesis` handler of the 03-connection submodule in core IBC.
+The `ConnectionEnd` and its `Counterparty` both reference the `09-localhost` client identifier, and share the localhost connection identifier `connection-localhost`. + +```go +/ CreateSentinelLocalhostConnection creates and sets the sentinel localhost connection end in the IBC store. +func (k Keeper) + +CreateSentinelLocalhostConnection(ctx sdk.Context) { + counterparty := types.NewCounterparty(exported.LocalhostClientID, exported.LocalhostConnectionID, commitmenttypes.NewMerklePrefix(k.GetCommitmentPrefix().Bytes())) + connectionEnd := types.NewConnectionEnd(types.OPEN, exported.LocalhostClientID, counterparty, types.ExportedVersionsToProto(types.GetCompatibleVersions()), 0) + +k.SetConnection(ctx, exported.LocalhostConnectionID, connectionEnd) +} +``` + +Note that connection handshakes are disallowed when using the `09-localhost` client type. diff --git a/docs/ibc/v7.8.x/light-clients/localhost/integration.mdx b/docs/ibc/v7.8.x/light-clients/localhost/integration.mdx new file mode 100644 index 00000000..42c11aa9 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/localhost/integration.mdx @@ -0,0 +1,20 @@ +--- +title: Integration +description: >- + The 09-localhost light client module registers codec types within the core IBC + module. This differs from other light client module implementations which are + expected to register codec types using the AppModuleBasic interface. +--- + +The 09-localhost light client module registers codec types within the core IBC module. This differs from other light client module implementations which are expected to register codec types using the `AppModuleBasic` interface. + +The localhost client is added to the 02-client submodule param [`allowed_clients`](https://github.com/cosmos/ibc-go/blob/v7.0.0-rc0/proto/ibc/core/client/v1/client.proto#L102) by default in ibc-go. + +```go +var ( + / DefaultAllowedClients are the default clients for the AllowedClients parameter. 
+ DefaultAllowedClients = []string{ + exported.Solomachine, exported.Tendermint, exported.Localhost +} +) +``` diff --git a/docs/ibc/v7.8.x/light-clients/localhost/overview.mdx b/docs/ibc/v7.8.x/light-clients/localhost/overview.mdx new file mode 100644 index 00000000..cbbb6e9c --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/localhost/overview.mdx @@ -0,0 +1,40 @@ +--- +title: Overview +--- + +## Overview + +## Synopsis + +Learn about the 09-localhost light client module. + +The 09-localhost light client module implements a localhost loopback client with the ability to send and receive IBC packets to and from the same state machine. + +### Context + +In a multichain environment, application developers are used to developing cross-chain applications through IBC. From their point of view, whether they are interacting with multiple modules on the same chain or on different chains should not matter. The localhost client module enables a unified interface to interact with different applications on a single chain, using the familiar IBC application layer semantics. + +### Implementation + +There exists a [single sentinel `ClientState`](/docs/ibc/v7.8.x/light-clients/localhost/client-state) instance with the client identifier `09-localhost`. + +To supplement this, a [sentinel `ConnectionEnd` is stored in core IBC](/docs/ibc/v7.8.x/light-clients/localhost/connection) state with the connection identifier `connection-localhost`. This enables IBC applications to create channels directly on top of the sentinel connection which leverage the 09-localhost loopback functionality. + +[State verification](/docs/ibc/v7.8.x/light-clients/localhost/state-verification) for channel state in handshakes or processing packets is reduced in complexity: the `09-localhost` client can simply compare bytes stored under the standardized key paths.
+ +### Localhost vs _regular_ client + +The localhost client aims to provide a unified approach to interacting with applications on a single chain, as the IBC application layer provides for cross-chain interactions. To achieve this unified interface though, there are a number of differences under the hood compared to a 'regular' IBC client (excluding `06-solomachine` and `09-localhost` itself). + +The table below lists some important differences: + +| | Regular client | Localhost | +| -------------------------------------------- | --------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Number of clients | Many instances of a client _type_ corresponding to different counterparties | A single sentinel client with the client identifier `09-localhost` | +| Client creation | Relayer (permissionless) | `ClientState` is instantiated in the `InitGenesis` handler of the 02-client submodule in core IBC | +| Client updates | Relayer submits headers using `MsgUpdateClient` | Latest height is updated periodically through the ABCI [`BeginBlock`](https://docs.cosmos.network/v0.47/building-modules/beginblock-endblock) interface of the 02-client submodule in core IBC | +| Number of connections | Many connections, 1 (or more) per client | A single sentinel connection with the connection identifier `connection-localhost` | +| Connection creation | Connection handshake, provided underlying client | Sentinel `ConnectionEnd` is created and set in store in the `InitGenesis` handler of the 03-connection submodule in core IBC | +| Counterparty | Underlying client, representing another chain | Client with identifier `09-localhost` in same chain | +| `VerifyMembership` and `VerifyNonMembership` | Performs proof verification using consensus state roots | Performs state 
verification using key-value lookups in the core IBC store | +| Integration | Expected to register codec types using the `AppModuleBasic` interface | Registers codec types within the core IBC module | diff --git a/docs/ibc/v7.8.x/light-clients/localhost/state-verification.mdx b/docs/ibc/v7.8.x/light-clients/localhost/state-verification.mdx new file mode 100644 index 00000000..4dff4ac3 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/localhost/state-verification.mdx @@ -0,0 +1,21 @@ +--- +title: State Verification +description: >- + The localhost client handles state verification through the ClientState + interface methods VerifyMembership and VerifyNonMembership by performing + read-only operations directly on the core IBC store. +--- + +The localhost client handles state verification through the `ClientState` interface methods `VerifyMembership` and `VerifyNonMembership` by performing read-only operations directly on the core IBC store. + +When verifying channel state in handshakes or processing packets the `09-localhost` client can simply compare bytes stored under the standardized key paths defined by [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). + +For existence proofs via `VerifyMembership` the 09-localhost client will retrieve the value stored under the provided key path and compare it against the value provided by the caller. In contrast, non-existence proofs via `VerifyNonMembership` assert the absence of a value at the provided key path. + +Relayers are expected to provide a sentinel proof when sending IBC messages. Submission of nil or empty proofs is disallowed in core IBC messaging. +The 09-localhost light client module defines a `SentinelProof` as a single byte. Localhost client state verification will fail if the sentinel proof value is not provided.
+ +```go +var SentinelProof = []byte{0x01} +``` diff --git a/docs/ibc/v7.8.x/light-clients/solomachine/concepts.mdx b/docs/ibc/v7.8.x/light-clients/solomachine/concepts.mdx new file mode 100644 index 00000000..96940f2c --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/solomachine/concepts.mdx @@ -0,0 +1,166 @@ +--- +title: Concepts
description: >- + The ClientState for a solo machine light client stores the latest sequence, + the frozen sequence, the latest consensus state, and a client flag indicating if + the client should be allowed to be updated after a governance proposal. +--- + +## Client State + +The `ClientState` for a solo machine light client stores the latest sequence, the frozen sequence, +the latest consensus state, and a client flag indicating if the client should be allowed to be updated +after a governance proposal. + +If the client is not frozen then the frozen sequence is 0. + +## Consensus State + +The consensus state stores the public key, diversifier, and timestamp of the solo machine light client. + +The diversifier is used to prevent accidental misbehaviour if the same public key is used across +different chains with the same client identifier. It should be unique to the chain the light client +is used on. + +## Public Key + +The public key can be a single public key or a multi-signature public key. The public key type used +must fulfill the tendermint public key interface (this will become the SDK public key interface in the +near future). The public key must be registered on the application codec otherwise encoding/decoding +errors will arise. The public key stored in the consensus state is represented as a protobuf `Any`. +This allows for flexibility in what other public key types can be supported in the future.
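The "must be registered on the application codec" requirement can be illustrated with a toy model of a protobuf `Any`. Everything here is an assumption for demonstration (the `anyValue` type, the `registry` map, and the type URLs are hypothetical), not the real Cosmos SDK codec API:

```go
package main

import (
	"fmt"
)

// anyValue mimics a protobuf Any: a type URL plus opaque bytes.
type anyValue struct {
	TypeURL string
	Value   []byte
}

// registry maps type URLs to decoders, mirroring how public key types must be
// registered on the application codec before they can be unpacked.
var registry = map[string]func([]byte) (string, error){}

// unpack resolves the decoder for a type URL; an unregistered type is the
// "encoding/decoding errors will arise" case mentioned above.
func unpack(a anyValue) (string, error) {
	dec, ok := registry[a.TypeURL]
	if !ok {
		return "", fmt.Errorf("type %s not registered", a.TypeURL)
	}
	return dec(a.Value)
}

func main() {
	registry["/cosmos.crypto.secp256k1.PubKey"] = func(bz []byte) (string, error) {
		return fmt.Sprintf("secp256k1 key (%d bytes)", len(bz)), nil
	}
	pk, err := unpack(anyValue{TypeURL: "/cosmos.crypto.secp256k1.PubKey", Value: make([]byte, 33)})
	fmt.Println(pk, err)
	// An unregistered key type fails to decode.
	_, err = unpack(anyValue{TypeURL: "/cosmos.crypto.ed25519.PubKey"})
	fmt.Println(err)
}
```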
+ +## Counterparty Verification + +The solo machine light client can verify counterparty client state, consensus state, connection state, +channel state, packet commitments, packet acknowledgements, packet receipt absence, +and the next sequence receive. At the end of each successful verification call the light +client sequence number will be incremented. + +Successful verification requires the current public key to sign over the proof. + +## Proofs + +A solo machine proof should verify that the solomachine public key signed +over some specified data. The format for generating marshaled proofs for +the SDK's implementation of solo machine is as follows: + +1. Construct the data using the associated protobuf definition and marshal it. + +For example: + +```go +data := &ClientStateData{ + Path: []byte(path.String()), + ClientState: any, +} + +dataBz, err := cdc.Marshal(data) +``` + +The helper functions `...DataBytes()` in [proof.go](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine/proof.go) handle this +functionality. + +2. Construct the `SignBytes` and marshal it. + +For example: + +```go +signBytes := &SignBytes{ + Sequence: sequence, + Timestamp: timestamp, + Diversifier: diversifier, + DataType: CLIENT, + Data: dataBz, +} + +signBz, err := cdc.Marshal(signBytes) +``` + +The helper functions `...SignBytes()` in [proof.go](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine/proof.go) handle this functionality. +The `DataType` field is used to disambiguate what type of data was signed to prevent potential +proto encoding overlap. + +3. Sign the sign bytes. Embed the signatures into either `SingleSignatureData` or `MultiSignatureData`. + Convert the `SignatureData` to proto and marshal it. 
+ +For example: + +```go +sig, err := key.Sign(signBz) + +sigData := &signing.SingleSignatureData{ + Signature: sig, +} + +protoSigData := signing.SignatureDataToProto(sigData) + +bz, err := cdc.Marshal(protoSigData) +``` + +4. Construct a `TimestampedSignatureData` and marshal it. The marshaled result can be passed in + as the proof parameter to the verification functions. + +For example: + +```go +timestampedSignatureData := &solomachine.TimestampedSignatureData{ + SignatureData: sigData, + Timestamp: solomachine.Time, +} + +proof, err := cdc.Marshal(timestampedSignatureData) +``` + +NOTE: At the end of this process, the sequence associated with the key needs to be updated. +The sequence must be incremented each time proof is generated. + +## Updates By Header + +An update by a header will only succeed if: + +* the header provided is parseable to a solo machine header +* the header sequence matches the current sequence +* the header timestamp is greater than or equal to the consensus state timestamp +* the currently registered public key generated the proof + +If the update is successful: + +* the public key is updated +* the diversifier is updated +* the timestamp is updated +* the sequence is incremented by 1 +* the new consensus state is set in the client state + +## Updates By Proposal + +An update by a governance proposal will only succeed if: + +* the substitute provided is parseable to a solo machine client state +* the new consensus state public key does not equal the current consensus state public key + +If the update is successful: + +* the subject client state is updated to the substitute client state +* the subject consensus state is updated to the substitute consensus state +* the client is unfrozen (if it was previously frozen) + +NOTE: Previously, `AllowUpdateAfterProposal` was used to signal the update/recovery options for the solo machine client.
However, this has now been deprecated because a code migration can overwrite the client and consensus states regardless of the value of this parameter. If governance would vote to overwrite a client or consensus state, it is likely that governance would also be willing to perform a code migration to do the same. + +## Misbehaviour + +Misbehaviour handling will only succeed if: + +* the misbehaviour provided is parseable to solo machine misbehaviour +* the client is not already frozen +* the current public key signed over two unique data messages at the same sequence and diversifier. + +If the misbehaviour is successfully processed: + +* the client is frozen by setting the frozen sequence to the misbehaviour sequence + +NOTE: Misbehaviour processing is data processing order dependent. A misbehaving solo machine +could update to a new public key to prevent being frozen before misbehaviour is submitted. + +## Upgrades + +Upgrades to solo machine light clients are not supported since an entirely different type of +public key can be set using normal client updates. diff --git a/docs/ibc/v7.8.x/light-clients/solomachine/solomachine.mdx b/docs/ibc/v7.8.x/light-clients/solomachine/solomachine.mdx new file mode 100644 index 00000000..3e04609b --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/solomachine/solomachine.mdx @@ -0,0 +1,23 @@ +--- +title: Solomachine +description: >- + This paper defines the implementation of the ICS06 protocol on the Cosmos SDK. + For the general specification please refer to the ICS06 Specification. +--- + +## Abstract + +This paper defines the implementation of the ICS06 protocol on the Cosmos SDK. For the general +specification please refer to the [ICS06 Specification](https://github.com/cosmos/ibc/tree/master/spec/client/ics-006-solo-machine-client). + +This implementation of a solo machine light client supports single and multi-signature public +keys. The client is capable of handling public key updates by header and governance proposals. 
+The light client is capable of processing client misbehaviour. Proofs of the counterparty state +are generated by the solo machine client by signing over the desired state with a certain sequence, +diversifier, and timestamp. + +## Contents + +1. **[Concepts](/docs/ibc/v7.8.x/light-clients/solomachine/concepts)** +2. **[State](/docs/ibc/v7.8.x/light-clients/solomachine/state)** +3. **[State Transitions](/docs/ibc/v7.8.x/light-clients/solomachine/state_transitions)** diff --git a/docs/ibc/v7.8.x/light-clients/solomachine/state.mdx b/docs/ibc/v7.8.x/light-clients/solomachine/state.mdx new file mode 100644 index 00000000..7827215a --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/solomachine/state.mdx @@ -0,0 +1,10 @@ +--- +title: State +description: >- + The solo machine light client will only store consensus states for each update + by a header or a governance proposal. The latest client state is also + maintained in the store. +--- + +The solo machine light client will only store consensus states for each update by a header +or a governance proposal. The latest client state is also maintained in the store. diff --git a/docs/ibc/v7.8.x/light-clients/solomachine/state_transitions.mdx b/docs/ibc/v7.8.x/light-clients/solomachine/state_transitions.mdx new file mode 100644 index 00000000..e7a09a62 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/solomachine/state_transitions.mdx @@ -0,0 +1,38 @@ +--- +title: State Transitions +description: 'Successful state verification by a solo machine light client will result in:' +--- + +## Client State Verification Functions + +Successful state verification by a solo machine light client will result in: + +* the sequence being incremented by 1. + +## Update By Header + +A successful update of a solo machine light client by a header will result in: + +* the public key being updated to the new public key provided by the header. +* the diversifier being updated to the new diversifier provided by the header.
+* the timestamp being updated to the new timestamp provided by the header. +* the sequence being incremented by 1. +* the consensus state being updated (consensus state stores the public key, diversifier, and timestamp). + +## Update By Governance Proposal + +A successful update of a solo machine light client by a governance proposal will result in: + +* the client state being updated to the substitute client state +* the consensus state being updated to the substitute consensus state (consensus state stores the public key, diversifier, and timestamp) +* the frozen sequence being set to zero (client is unfrozen if it was previously frozen). + +## Upgrade + +Client upgrades are not supported for the solo machine light client. No state transition occurs. + +## Misbehaviour + +Successful misbehaviour processing of a solo machine light client will result in: + +* the frozen sequence being set to the sequence the misbehaviour occurred at diff --git a/docs/ibc/v7.8.x/light-clients/wasm/client.mdx b/docs/ibc/v7.8.x/light-clients/wasm/client.mdx new file mode 100644 index 00000000..cdb0d80a --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/wasm/client.mdx @@ -0,0 +1,149 @@ +--- +title: Client +description: >- + A user can query and interact with the 08-wasm module using the CLI. Use the + --help flag to discover the available commands: +--- + +## CLI + +A user can query and interact with the `08-wasm` module using the CLI. Use the `--help` flag to discover the available commands: + +### Transactions + +The `tx` commands allow users to interact with the `08-wasm` submodule. + +```shell +simd tx ibc-wasm --help +``` + +#### `store-code` + +The `store-code` command allows users to submit a governance proposal with a `MsgStoreCode` to store the byte code of a Wasm light client contract. + +```shell +simd tx ibc-wasm store-code [path/to/wasm-file] [flags] +``` + +`path/to/wasm-file` is the path to the `.wasm` or `.wasm.gz` file.
+ +#### `migrate-contract` + +The `migrate-contract` command allows users to broadcast a transaction with a `MsgMigrateContract` to migrate the contract for a given light client to a new byte code denoted by the given checksum. + +```shell +simd tx ibc-wasm migrate-contract [client-id] [checksum] [migrate-msg] +``` + +The migrate message must not be empty and is expected to be a JSON-encoded string. + +### Query + +The `query` commands allow users to query `08-wasm` state. + +```shell +simd query ibc-wasm --help +``` + +#### `checksums` + +The `checksums` command allows users to query the list of checksums of Wasm light client contracts stored in the Wasm VM via the `MsgStoreCode`. The checksums are hex-encoded. + +```shell +simd query ibc-wasm checksums [flags] +``` + +Example: + +```shell +simd query ibc-wasm checksums +``` + +Example Output: + +```shell +checksums: +- c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64 +pagination: + next_key: null + total: "1" +``` + +#### `code` + +The `code` command allows users to query the Wasm byte code of a light client contract given the provided input checksum. + +```shell +simd query ibc-wasm code [checksum] +``` + +Example: + +```shell +simd query ibc-wasm code c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64 +``` + +Example Output: + +```shell +code: AGFzb...AqBBE= +``` + +## gRPC + +A user can query the `08-wasm` module using gRPC endpoints. + +### `Checksums` + +The `Checksums` endpoint allows users to query the list of checksums of Wasm light client contracts stored in the Wasm VM via the `MsgStoreCode`.
+ +```shell +ibc.lightclients.wasm.v1.Query/Checksums +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{}' \ + localhost:9090 \ + ibc.lightclients.wasm.v1.Query/Checksums +``` + +Example output: + +```shell +{ + "checksums": [ + "c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64" + ], + "pagination": { + "total": "1" + } +} +``` + +### `Code` + +The `Code` endpoint allows users to query the Wasm byte code of a light client contract given the provided input checksum. + +```shell +ibc.lightclients.wasm.v1.Query/Code +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"checksum":"c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64"}' \ + localhost:9090 \ + ibc.lightclients.wasm.v1.Query/Code +``` + +Example output: + +```shell +{ + "code": "AGFzb...AqBBE=" +} +``` diff --git a/docs/ibc/v7.8.x/light-clients/wasm/concepts.mdx b/docs/ibc/v7.8.x/light-clients/wasm/concepts.mdx new file mode 100644 index 00000000..a885090b --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/wasm/concepts.mdx @@ -0,0 +1,74 @@ +--- +title: Concepts +description: >- + Learn about the differences between a proxy light client and a Wasm light + client. +--- + +Learn about the differences between a proxy light client and a Wasm light client. + +## Proxy light client + +The `08-wasm` module is not a regular light client in the same sense as, for example, the 07-tendermint light client. `08-wasm` is instead a _proxy_ light client module, and this means that the module acts as a proxy to the actual implementations of light clients. The module will act as a wrapper for the actual light clients uploaded as Wasm byte code and will delegate all operations to them (i.e. `08-wasm` just passes through the requests to the Wasm light clients). Still, the `08-wasm` module implements all the required interfaces necessary to integrate with core IBC, so that 02-client can call into it as it would for any other light client module.
These interfaces are `ClientState`, `ConsensusState` and `ClientMessage`, and we will describe them in the context of `08-wasm` in the following sections. For more information about this set of interfaces, please read section [Overview of the light client module developer guide](/docs/ibc/v7.8.x/light-clients/developer-guide/overview#overview). + +### `ClientState` + +The `08-wasm`'s `ClientState` data structure contains three fields: + +- `Data` contains the bytes of the Protobuf-encoded client state of the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes for a [GRANDPA client state](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L35-L60). +- `Checksum` is the sha256 hash of the Wasm contract's byte code. This hash is used as an identifier to call the right contract. +- `LatestHeight` is the latest height of the counterparty state machine (i.e. the height of the blockchain), whose consensus state the light client tracks. + +```go +type ClientState struct { + / bytes encoding the client state of the underlying + / light client implemented as a Wasm contract + Data []byte + / sha256 hash of Wasm contract byte code + Checksum []byte + / latest height of the counterparty ledger + LatestHeight types.Height +} +``` + +See section [`ClientState` of the light client module developer guide](/docs/ibc/v7.8.x/light-clients/developer-guide/overview#clientstate) for more information about the `ClientState` interface. + +### `ConsensusState` + +The `08-wasm`'s `ConsensusState` data structure maintains one field: + +- `Data` contains the bytes of the Protobuf-encoded consensus state of the underlying light client implemented as a Wasm contract. 
For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes for a [GRANDPA consensus state](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L87-L94). + +```go +type ConsensusState struct { + / bytes encoding the consensus state of the underlying light client + / implemented as a Wasm contract. + Data []byte +} +``` + +See section [`ConsensusState` of the light client module developer guide](/docs/ibc/v7.8.x/light-clients/developer-guide/overview#consensusstate) for more information about the `ConsensusState` interface. + +### `ClientMessage` + +`ClientMessage` is used for performing updates to a `ClientState` stored on chain. The `08-wasm`'s `ClientMessage` data structure maintains one field: + +- `Data` contains the bytes of the Protobuf-encoded header(s) or misbehaviour for the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes of either [header](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L96-L104) or [misbehaviour](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L106-L112) for a GRANDPA light client. + +```go +type ClientMessage struct { + / bytes encoding the header(s) or misbehaviour for the underlying light client + / implemented as a Wasm contract. + Data []byte +} +``` + +See section [`ClientMessage` of the light client module developer guide](/docs/ibc/v7.8.x/light-clients/developer-guide/overview#clientmessage) for more information about the `ClientMessage` interface.
+ +## Wasm light client + +The actual light client can be implemented in any language that compiles to Wasm and implements the interfaces of a [CosmWasm](https://docs.cosmwasm.com/docs/) contract. Even though in theory other languages could be used, in practice (at least for the time being) the most suitable language to use would be Rust, since there is already good support for it for developing CosmWasm smart contracts. + +At the moment of writing there are two contracts available: one for [Tendermint](https://github.com/ComposableFi/composable-ibc/tree/master/light-clients/ics07-tendermint-cw) and one for [GRANDPA](https://github.com/ComposableFi/composable-ibc/tree/master/light-clients/ics10-grandpa-cw) (which is being used in production in [Composable Finance's Centauri bridge](https://github.com/ComposableFi/composable-ibc)). And there are others in development (e.g. for Near). diff --git a/docs/ibc/v7.8.x/light-clients/wasm/contracts.mdx b/docs/ibc/v7.8.x/light-clients/wasm/contracts.mdx new file mode 100644 index 00000000..3b597b54 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/wasm/contracts.mdx @@ -0,0 +1,111 @@ +--- +title: Contracts +description: >- + Learn about the expected behaviour of Wasm light client contracts and the + interaction between them and 08-wasm. +--- + +Learn about the expected behaviour of Wasm light client contracts and the interaction between them and `08-wasm`. + +## API + +The `08-wasm` light client proxy performs calls to the Wasm light client via the Wasm VM. The calls require as input JSON-encoded payload messages that fall in the three categories described in the next sections. + +## `InstantiateMessage` + +This is the message sent to the contract's `instantiate` entry point. It contains the bytes of the protobuf-encoded client and consensus states of the underlying light client, both provided in [`MsgCreateClient`](https://github.com/cosmos/ibc-go/blob/v7.3.0/proto/ibc/core/client/v1/tx.proto#L44-L54).
Please note that the bytes contained within the JSON message are represented as base64-encoded strings. + +```go +type InstantiateMessage struct { + ClientState []byte `json:"client_state"` + ConsensusState []byte `json:"consensus_state"` + Checksum []byte `json:"checksum"` +} +``` + +The Wasm light client contract is expected to store the client and consensus state in the corresponding keys of the client-prefixed store. + +## `QueryMsg` + +`QueryMsg` acts as a discriminated union type that is used to encode the messages that are sent to the contract's `query` entry point. Only one of the fields of the type should be set at a time, so that the other fields are omitted in the encoded JSON and the payload can be correctly translated to the corresponding element of the enumeration in Rust. + +```go +type QueryMsg struct { + Status *StatusMsg `json:"status,omitempty"` + ExportMetadata *ExportMetadataMsg `json:"export_metadata,omitempty"` + TimestampAtHeight *TimestampAtHeightMsg `json:"timestamp_at_height,omitempty"` + VerifyClientMessage *VerifyClientMessageMsg `json:"verify_client_message,omitempty"` + CheckForMisbehaviour *CheckForMisbehaviourMsg `json:"check_for_misbehaviour,omitempty"` +} +``` + +```rust +#[cw_serde] +pub enum QueryMsg { + Status(StatusMsg), + ExportMetadata(ExportMetadataMsg), + TimestampAtHeight(TimestampAtHeightMsg), + VerifyClientMessage(VerifyClientMessageRaw), + CheckForMisbehaviour(CheckForMisbehaviourMsgRaw), +} +``` + +To learn what is expected from the Wasm light client contract when processing each message, please read the corresponding section of the [Light client developer guide](/docs/ibc/v7.8.x/light-clients/developer-guide/overview): + +- For `StatusMsg`, see the section [`Status` method](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state#status-method). +- For `ExportMetadataMsg`, see the section [Genesis metadata](/docs/ibc/v7.8.x/light-clients/developer-guide/genesis#genesis-metadata).
+- For `TimestampAtHeightMsg`, see the section [`GetTimestampAtHeight` method](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state#gettimestampatheight-method). +- For `VerifyClientMessageMsg`, see the section [`VerifyClientMessage`](/docs/ibc/v7.8.x/light-clients/developer-guide/updates-and-misbehaviour#verifyclientmessage). +- For `CheckForMisbehaviourMsg`, see the section [`CheckForMisbehaviour` method](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state#checkformisbehaviour-method). + +## `SudoMsg` + +`SudoMsg` acts as a discriminated union type that is used to encode the messages that are sent to the contract's `sudo` entry point. Only one of the fields of the type should be set at a time, so that the other fields are omitted in the encoded JSON and the payload can be correctly translated to the corresponding element of the enumeration in Rust. + +The `sudo` entry point is able to perform state-changing writes in the client-prefixed store. + +```go +type SudoMsg struct { + UpdateState *UpdateStateMsg `json:"update_state,omitempty"` + UpdateStateOnMisbehaviour *UpdateStateOnMisbehaviourMsg `json:"update_state_on_misbehaviour,omitempty"` + VerifyUpgradeAndUpdateState *VerifyUpgradeAndUpdateStateMsg `json:"verify_upgrade_and_update_state,omitempty"` + VerifyMembership *VerifyMembershipMsg `json:"verify_membership,omitempty"` + VerifyNonMembership *VerifyNonMembershipMsg `json:"verify_non_membership,omitempty"` + MigrateClientStore *MigrateClientStoreMsg `json:"migrate_client_store,omitempty"` +} +``` + +```rust +#[cw_serde] +pub enum SudoMsg { + UpdateState(UpdateStateMsgRaw), + UpdateStateOnMisbehaviour(UpdateStateOnMisbehaviourMsgRaw), + VerifyUpgradeAndUpdateState(VerifyUpgradeAndUpdateStateMsgRaw), + VerifyMembership(VerifyMembershipMsgRaw), + VerifyNonMembership(VerifyNonMembershipMsgRaw), + MigrateClientStore(MigrateClientStoreMsgRaw), +} +``` + +To learn what is expected from the Wasm light client contract when processing each message,
please read the corresponding section of the [Light client developer guide](/docs/ibc/v7.8.x/light-clients/developer-guide/overview): + +- For `UpdateStateMsg`, see the section [`UpdateState`](/docs/ibc/v7.8.x/light-clients/developer-guide/updates-and-misbehaviour#updatestate). +- For `UpdateStateOnMisbehaviourMsg`, see the section [`UpdateStateOnMisbehaviour`](/docs/ibc/v7.8.x/light-clients/developer-guide/updates-and-misbehaviour#updatestateonmisbehaviour). +- For `VerifyUpgradeAndUpdateStateMsg`, see the section [Implementing `VerifyUpgradeAndUpdateState`](/docs/ibc/v7.8.x/light-clients/developer-guide/upgrades#implementing-verifyupgradeandupdatestate). +- For `VerifyMembershipMsg`, see the section [`VerifyMembership` method](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state#verifymembership-method). +- For `VerifyNonMembershipMsg`, see the section [`VerifyNonMembership` method](/docs/ibc/v7.8.x/light-clients/developer-guide/client-state#verifynonmembership-method). +- For `MigrateClientStoreMsg`, see the section [Implementing `CheckSubstituteAndUpdateState`](/docs/ibc/v7.8.x/light-clients/developer-guide/proposals#implementing-checksubstituteandupdatestate). + +### Migration + +The `08-wasm` proxy light client exposes the `MigrateContract` RPC endpoint that can be used to migrate a given Wasm light client contract (specified by the client identifier) to a new Wasm byte code (specified by the hash of the byte code). The expected use case for this RPC endpoint is to enable contracts to migrate to new byte code in case the current byte code is found to have a bug or vulnerability. The Wasm byte code that contracts are migrated to has to be uploaded beforehand using `MsgStoreCode` and must implement the `migrate` entry point. See section [`MsgMigrateContract`](/docs/ibc/v7.8.x/apps/interchain-accounts/messages#msgmigratecontract) for information about the request message for this RPC endpoint.
+ +## Expected behaviour + +The `08-wasm` proxy light client module expects the following behaviour from the Wasm light client contracts when executing messages that perform state-changing writes: + +- The contract must not delete the client state from the store. +- The contract must not change the client state to a client state of another type. +- The contract must not change the checksum in the client state. + +Any violation of these rules will result in an error returned from `08-wasm` that will abort the transaction. diff --git a/docs/ibc/v7.8.x/light-clients/wasm/events.mdx b/docs/ibc/v7.8.x/light-clients/wasm/events.mdx new file mode 100644 index 00000000..66223443 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/wasm/events.mdx @@ -0,0 +1,22 @@ +--- +title: Events +description: 'The 08-wasm module emits the following events:' +--- + +The `08-wasm` module emits the following events: + +## `MsgStoreCode` + +| Type | Attribute Key | Attribute Value | +| ----------------- | -------------- | ---------------------- | +| store\_wasm\_code | wasm\_checksum | `{hex.Encode(checksum)}` | +| message | module | 08-wasm | + +## `MsgMigrateContract` + +| Type | Attribute Key | Attribute Value | +| ----------------- | -------------- | ------------------------- | +| migrate\_contract | client\_id | `{clientId}` | +| migrate\_contract | wasm\_checksum | `{hex.Encode(checksum)}` | +| migrate\_contract | new\_checksum | `{hex.Encode(newChecksum)}` | +| message | module | 08-wasm | diff --git a/docs/ibc/v7.8.x/light-clients/wasm/governance.mdx b/docs/ibc/v7.8.x/light-clients/wasm/governance.mdx new file mode 100644 index 00000000..e2b22b55 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/wasm/governance.mdx @@ -0,0 +1,125 @@ +--- +title: Governance +description: >- + Learn how to upload Wasm light client byte code on a chain, and how to migrate + an existing Wasm light client contract.
+--- + +Learn how to upload Wasm light client byte code on a chain, and how to migrate an existing Wasm light client contract. + +## Setting an authority + +Both the storage of Wasm light client byte code and the migration of an existing Wasm light client contract are permissioned (i.e. only allowed to an authority such as governance). The designated authority is specified when instantiating `08-wasm`'s keeper: both [`NewKeeperWithVM`](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/keeper/keeper.go#L38-L46) and [`NewKeeperWithConfig`](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/keeper/keeper.go#L82-L90) constructor functions accept an `authority` argument that must be the address of the authorized actor. For example, in `app.go`, when instantiating the keeper, you can pass the address of the governance module: + +```go expandable +// app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types" + ... +) + +// app.go +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + keys[wasmtypes.StoreKey], + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), // authority + wasmVM, + app.GRPCQueryRouter(), +) +``` + +## Storing new Wasm light client byte code + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to upload a new light client contract should contain the message [`MsgStoreCode`](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/proto/ibc/lightclients/wasm/v1/tx.proto#L22-L30) with the base64-encoded byte code of the Wasm contract.
Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable
{
  "title": "Upload IBC Wasm light client",
  "summary": "Upload wasm client",
  "messages": [
    {
      "@type": "/ibc.lightclients.wasm.v1.MsgStoreCode",
      "signer": "cosmos1...", // the authority address (e.g. the gov module account address)
      "wasm_byte_code": "YWJ...PUB+" // standard base64 encoding of the Wasm contract byte code
    }
  ],
  "metadata": "AQ==",
  "deposit": "100stake"
}
``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal). + +Alternatively, the process of submitting the proposal may be simpler if you use the CLI command `store-code`. This CLI command accepts the file of the Wasm light client contract as an argument and takes care of constructing the proposal message with `MsgStoreCode` and broadcasting it. See section [`store-code`](/docs/ibc/v7.8.x/light-clients/wasm/client#store-code) for more information. + +## Migrating an existing Wasm light client contract + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to migrate an existing Wasm light client contract should contain the message [`MsgMigrateContract`](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/proto/ibc/lightclients/wasm/v1/tx.proto#L51-L63) with the checksum of the Wasm byte code to migrate to. Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable
{
  "title": "Migrate IBC Wasm light client",
  "summary": "Migrate wasm client",
  "messages": [
    {
      "@type": "/ibc.lightclients.wasm.v1.MsgMigrateContract",
      "signer": "cosmos1...", // the authority address (e.g. the gov module account address)
      "client_id": "08-wasm-1", // client identifier of the Wasm light client contract that will be migrated
      "checksum": "a8ad...4dc0", // SHA-256 hash of the Wasm byte code to migrate to, previously stored with MsgStoreCode
      "msg": "{}" // JSON-encoded message to be passed to the contract on migration
    }
  ],
  "metadata": "AQ==",
  "deposit": "100stake"
}
``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal). + +## Removing an existing checksum + +If governance is the allowed authority, the governance v1 proposal that needs to be submitted to remove a specific checksum from the list of allowed checksums should contain the message [`MsgRemoveChecksum`](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/proto/ibc/lightclients/wasm/v1/tx.proto#L38-L46) with the checksum (of a corresponding Wasm byte code). Use the following CLI command and JSON as an example: + +```shell +simd tx gov submit-proposal proposal.json --from <key_or_address> +``` + +where `proposal.json` contains: + +```json expandable
{
  "title": "Remove checksum of Wasm light client byte code",
  "summary": "Remove checksum",
  "messages": [
    {
      "@type": "/ibc.lightclients.wasm.v1.MsgRemoveChecksum",
      "signer": "cosmos1...", // the authority address (e.g. the gov module account address)
      "checksum": "a8ad...4dc0" // SHA-256 hash of the Wasm byte code that should be removed from the list of allowed checksums
    }
  ],
  "metadata": "AQ==",
  "deposit": "100stake"
}
``` + +To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal).
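As an illustration of the encoding expected in the `wasm_byte_code` field, the sketch below (a hypothetical helper, not part of ibc-go) assembles the `MsgStoreCode` entry of a proposal by standard base64-encoding the contract byte code, which is how proto JSON renders `bytes` fields:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// buildStoreCodeMessage assembles the MsgStoreCode entry of a gov proposal.
// The contract byte code is standard base64-encoded, matching the proto
// JSON encoding of the wasm_byte_code bytes field.
func buildStoreCodeMessage(signer string, byteCode []byte) (string, error) {
	msg := map[string]string{
		"@type":          "/ibc.lightclients.wasm.v1.MsgStoreCode",
		"signer":         signer,
		"wasm_byte_code": base64.StdEncoding.EncodeToString(byteCode),
	}
	out, err := json.Marshal(msg)
	return string(out), err
}

func main() {
	// 0x00 0x61 0x73 0x6d ("\x00asm") is the 4-byte magic prefix of a Wasm blob.
	msg, _ := buildStoreCodeMessage("cosmos1...", []byte{0x00, 0x61, 0x73, 0x6d})
	fmt.Println(msg)
}
```

The resulting object can then be placed in the `messages` array of `proposal.json` shown above.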
diff --git a/docs/ibc/v7.8.x/light-clients/wasm/integration.mdx b/docs/ibc/v7.8.x/light-clients/wasm/integration.mdx new file mode 100644 index 00000000..58c1e883 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/wasm/integration.mdx @@ -0,0 +1,402 @@ +--- +title: Integration +description: >- + Learn how to integrate the 08-wasm module in a chain binary and about the + recommended approaches depending on whether the x/wasm module is already used + in the chain. The following document only applies to Cosmos SDK chains. +--- + +Learn how to integrate the `08-wasm` module in a chain binary and about the recommended approaches depending on whether the [`x/wasm` module](https://github.com/CosmWasm/wasmd/tree/main/x/wasm) is already used in the chain. The following document only applies to Cosmos SDK chains. + +## Importing the `08-wasm` module + +`08-wasm` has no stable releases yet. To use it, you need to import the git commit that contains the module with the compatible versions of `ibc-go` and `wasmvm`. To do so, run the following command with the desired git commit in your project: + +```sh +go get github.com/cosmos/ibc-go/modules/light-clients/08-wasm@7ee2a2452b79d0bc8316dc622a1243afa058e8cb +``` + +You can find the version matrix [here](/docs/ibc/v7.8.x/light-clients/wasm/integration#importing-the-08-wasm-module). + +## `app.go` setup + +The sample code below shows the relevant integration points in `app.go` required to set up the `08-wasm` module in a chain binary. Since `08-wasm` is a light client module itself, please also check out the section [Integrating light clients](/docs/ibc/v7.8.x/ibc/integration#integrating-light-clients) for more information: + +```go expandable +// app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + + cmtos "github.com/cometbft/cometbft/libs/os" + + ibcwasm "github.com/cosmos/ibc-go/modules/light-clients/08-wasm" + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types" + ... +) + +... + +// Register the AppModule for the 08-wasm module +ModuleBasics = module.NewBasicManager( + ... + ibcwasm.AppModuleBasic{ +}, + ... +) + +// Add 08-wasm Keeper +type SimApp struct { + ... + WasmClientKeeper ibcwasmkeeper.Keeper + ... +} + +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + ... + keys := sdk.NewKVStoreKeys( + ... + ibcwasmtypes.StoreKey, + ) + + // Instantiate 08-wasm's keeper + // This sample code uses a constructor function that + // accepts a pointer to an existing instance of Wasm VM. + // This is the recommended approach when the chain + // also uses `x/wasm`, and then the Wasm VM instance + // can be shared. + app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + keys[wasmtypes.StoreKey], + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmVM, + app.GRPCQueryRouter(), + ) + +app.ModuleManager = module.NewManager( + // SDK app modules + ... + ibcwasm.NewAppModule(app.WasmClientKeeper), + ) + +app.ModuleManager.SetOrderBeginBlockers( + ... + ibcwasmtypes.ModuleName, + ... + ) + +app.ModuleManager.SetOrderEndBlockers( + ... + ibcwasmtypes.ModuleName, + ... + ) + genesisModuleOrder := []string{ + ... + ibcwasmtypes.ModuleName, + ... +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + ... + + // initialize BaseApp + app.SetInitChainer(app.InitChainer) + ... + + // must be before Loading version + if manager := app.SnapshotManager(); manager != nil { + err := manager.RegisterExtensions( + ibcwasmkeeper.NewWasmSnapshotter(app.CommitMultiStore(), &app.WasmClientKeeper), + ) + if err != nil { + panic(fmt.Errorf("failed to register snapshot extension: %s", err)) +} + +} + ... + if loadLatest { + ... + ctx := app.BaseApp.NewUncachedContext(true, cmtproto.Header{ +}) + + // Initialize pinned codes in wasmvm as they are not persisted there + if err := ibcwasmkeeper.InitializePinnedCodes(ctx, app.appCodec); err != nil { + cmtos.Exit(fmt.Sprintf("failed initialize pinned codes %s", err)) +} + +} +} +``` + +## Keeper instantiation + +When it comes to instantiating `08-wasm`'s keeper there are two recommended ways of doing it. Choosing one or the other will depend on whether the chain already integrates [`x/wasm`](https://github.com/CosmWasm/wasmd/tree/main/x/wasm) or not. + +### If `x/wasm` is present + +If the chain where the module is integrated uses `x/wasm` then we recommend that both `08-wasm` and `x/wasm` share the same Wasm VM instance. Having two separate Wasm VM instances is still possible, but care should be taken to make sure that both instances do not share the directory where the VM stores blobs and various caches, otherwise unexpected behaviour is likely to happen. + +In order to share the Wasm VM instance, please follow the guideline below. Please note that this requires `x/wasm` v0.41 or above. + +- Instantiate the Wasm VM in `app.go` with the parameters of your choice. +- [Create an `Option` with this Wasm VM instance](https://github.com/CosmWasm/wasmd/blob/db93d7b6c7bb6f4a340d74b96a02cec885729b59/x/wasm/keeper/options.go#L21-L25). +- Add the option created in the previous step to a slice and [pass it to the `x/wasm` `NewKeeper` constructor function](https://github.com/CosmWasm/wasmd/blob/db93d7b6c7bb6f4a340d74b96a02cec885729b59/x/wasm/keeper/keeper_cgo.go#L36).
+- Pass the pointer to the Wasm VM instance to `08-wasm` [`NewKeeperWithVM` constructor function](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/keeper/keeper.go#L38-L46). + +The code to set this up would look something like this: + +```go expandable +// app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + + wasmvm "github.com/CosmWasm/wasmvm" + wasmkeeper "github.com/CosmWasm/wasmd/x/wasm/keeper" + wasmtypes "github.com/CosmWasm/wasmd/x/wasm/types" + + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types" + ... +) + +... + +// instantiate the Wasm VM with the chosen parameters +wasmer, err := wasmvm.NewVM( + dataDir, + availableCapabilities, + contractMemoryLimit, + contractDebugMode, + memoryCacheSize, +) + if err != nil { + panic(err) +} + +// create an Option slice (or append to an existing one) +// with the option to use a custom Wasm VM instance +wasmOpts = []wasmkeeper.Option{ + wasmkeeper.WithWasmEngine(wasmer), +} + +// the keeper will use the provided Wasm VM instance, +// instead of instantiating a new one +app.WasmKeeper = wasmkeeper.NewKeeper( + appCodec, + keys[wasmtypes.StoreKey], + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + distrkeeper.NewQuerier(app.DistrKeeper), + app.IBCFeeKeeper, // ICS4 Wrapper: fee IBC middleware + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, + scopedWasmKeeper, + app.TransferKeeper, + app.MsgServiceRouter(), + app.GRPCQueryRouter(), + wasmDir, + wasmConfig, + availableCapabilities, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmOpts..., +) + +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + keys[ibcwasmtypes.StoreKey], + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmer, // pass the Wasm VM instance to `08-wasm` keeper constructor + app.GRPCQueryRouter(), +) +... +``` + +### If `x/wasm` is not present + +If the chain does not use [`x/wasm`](https://github.com/CosmWasm/wasmd/tree/main/x/wasm), it is still possible to use the method described in the previous section (i.e. instantiating a Wasm VM in `app.go` and passing it to 08-wasm's [`NewKeeperWithVM` constructor function](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/keeper/keeper.go#L38-L46)). However, since in this case there is no need to share the Wasm VM instance with another module, you can instead use the [`NewKeeperWithConfig` constructor function](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/keeper/keeper.go#L82-L90) and provide the Wasm VM configuration parameters of your choice. A Wasm VM instance will be created in `NewKeeperWithConfig`. The parameters that can be set are: + +- `DataDir` is the [directory for Wasm blobs and various caches](https://github.com/CosmWasm/wasmvm/blob/1638725b25d799f078d053391945399cb35664b1/lib.go#L25). In `wasmd` this is set to the [`wasm` folder under the home directory](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L578). +- `SupportedCapabilities` is a comma-separated [list of capabilities supported by the chain](https://github.com/CosmWasm/wasmvm/blob/1638725b25d799f078d053391945399cb35664b1/lib.go#L26). [`wasmd` sets this to all the available capabilities](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L586), but 08-wasm only requires `iterator`. +- `MemoryCacheSize` sets [the size in MiB of an in-memory cache for e.g. module caching](https://github.com/CosmWasm/wasmvm/blob/1638725b25d799f078d053391945399cb35664b1/lib.go#L29C16-L29C104). It is not consensus-critical and should be defined on a per-node basis, often in the range 100 to 1000 MB. [`wasmd` reads this value from the app configuration](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L579). Default value is 256. +- `ContractDebugMode` is a [flag to enable/disable printing debug logs from the contract to STDOUT](https://github.com/CosmWasm/wasmvm/blob/1638725b25d799f078d053391945399cb35664b1/lib.go#L28). This should be false in production environments. Default value is false. + +Another configuration parameter of the Wasm VM is the contract memory limit (in MiB), which is [set to 32](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/types/config.go#L8), [following the example of `wasmd`](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/x/wasm/keeper/keeper.go#L32-L34). This parameter is not configurable by users of `08-wasm`. + +The following sample code shows how the keeper would be constructed using this method: + +```go expandable +// app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types" + ... +) + +... + +// homePath is the path to the directory where the data +// directory for Wasm blobs and caches will be created + wasmConfig := ibcwasmtypes.WasmConfig{ + DataDir: filepath.Join(homePath, "ibc_08-wasm_client_data"), + SupportedCapabilities: "iterator", + ContractDebugMode: false, +} + +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithConfig( + appCodec, + keys[ibcwasmtypes.StoreKey], + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmConfig, + app.GRPCQueryRouter(), +) +``` + +Also check out the [`WasmConfig` type definition](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/types/config.go#L21-L31) for more information on each of the configurable parameters.
Some parameters allow node-level configurations. There is additionally the function [`DefaultWasmConfig`](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/types/config.go#L36) available that returns a configuration with the default values. + +### Options + +The `08-wasm` module comes with an options API inspired by the one in `x/wasm`. +Currently the only option available is `WithQueryPlugins`, which allows the registration of custom query plugins for the `08-wasm` module. The use of this API is optional; it is only needed if the chain wants to register custom query plugins for the `08-wasm` module. + +#### `WithQueryPlugins` + +By default, the `08-wasm` module does not support any queries. However, it is possible to register custom query plugins for [`QueryRequest::Custom`](https://github.com/CosmWasm/cosmwasm/blob/v1.5.0/packages/std/src/query/mod.rs#L45) and [`QueryRequest::Stargate`](https://github.com/CosmWasm/cosmwasm/blob/v1.5.0/packages/std/src/query/mod.rs#L54-L61). + +Assuming that the keeper is not yet instantiated, the following sample code shows how to register query plugins for the `08-wasm` module. + +We first construct a [`QueryPlugins`](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/types/querier.go#L78-L87) object with the desired query plugins: + +```go +queryPlugins := ibcwasmtypes.QueryPlugins{ + Custom: MyCustomQueryPlugin(), + // `myAcceptList` is a `[]string` containing the list of gRPC query paths that the chain wants to allow for the `08-wasm` module to query. + // These queries must be registered in the chain's gRPC query router, be deterministic, and track their gas usage. + // The `AcceptListStargateQuerier` function will return a query plugin that will only allow queries for the paths in `myAcceptList`. + // The query responses are encoded in protobuf, unlike the implementation in `x/wasm`. + Stargate: ibcwasmtypes.AcceptListStargateQuerier(myAcceptList), +} +``` + +You may leave any of the fields in the `QueryPlugins` object as `nil` if you do not want to register a query plugin for that query type. + +Then, we pass the `QueryPlugins` object to the `WithQueryPlugins` option: + +```go +querierOption := ibcwasmtypes.WithQueryPlugins(&queryPlugins) +``` + +Finally, we pass the option to the `NewKeeperWithConfig` or `NewKeeperWithVM` constructor function during [Keeper instantiation](#keeper-instantiation): + +```diff +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithConfig( + appCodec, + keys[ibcwasmtypes.StoreKey], + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmConfig, + app.GRPCQueryRouter(), ++ querierOption, +) +``` + +```diff +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + keys[wasmtypes.StoreKey], + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmer, // pass the Wasm VM instance to `08-wasm` keeper constructor + app.GRPCQueryRouter(), ++ querierOption, +) +``` + +## Updating `AllowedClients` + +In order to use the `08-wasm` module, chains must update the [`AllowedClients` parameter in the 02-client submodule](https://github.com/cosmos/ibc-go/blob/v7.3.0/proto/ibc/core/client/v1/client.proto#L104) of core IBC. This can be configured directly in the application upgrade handler with the sample code below: + +```go expandable +import ( + + ... + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types" + ... +) + +... + +func CreateWasmUpgradeHandler( + mm *module.Manager, + configurator module.Configurator, + clientKeeper clientkeeper.Keeper, +) upgradetypes.UpgradeHandler { + return func(goCtx context.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + // explicitly update the IBC 02-client params, adding the wasm client type + params := clientKeeper.GetParams(ctx) + +params.AllowedClients = append(params.AllowedClients, ibcwasmtypes.Wasm) + +clientKeeper.SetParams(ctx, params) + +return mm.RunMigrations(goCtx, configurator, vm) +} +} +``` + +Alternatively, the parameter can be updated via a governance proposal (see the bottom of the section [`Creating clients`](/docs/ibc/v7.8.x/light-clients/developer-guide/setup#creating-clients) for an example of how to do this). + +## Adding the module to the store + +As part of the upgrade migration you must also add the module to the upgrades store. + +```go expandable +func (app SimApp) RegisterUpgradeHandlers() { + + ... + if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := storetypes.StoreUpgrades{ + Added: []string{ + ibcwasmtypes.ModuleName, +}, +} + + // configure store loader that checks if version == upgradeHeight and applies store upgrades + app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +} +``` + +## Adding snapshot support + +In order to use the `08-wasm` module, chains are required to register the `WasmSnapshotter` extension in the snapshot manager. This snapshotter takes care of persisting the external state, in the form of contract code, of the Wasm VM instance to disk when the chain is snapshotted. [This code](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/testing/simapp/app.go#L747-L755) should be placed in the `NewSimApp` function in `app.go`.
+ +## Pin byte codes at start + +Wasm byte codes should be pinned to the WasmVM cache on every application start; therefore, [this code](https://github.com/cosmos/ibc-go/blob/b306e7a706e1f84a5e11af0540987bd68de9bae5/modules/light-clients/08-wasm/testing/simapp/app.go#L786-L791) should be placed in the `NewSimApp` function in `app.go`. diff --git a/docs/ibc/v7.8.x/light-clients/wasm/messages.mdx b/docs/ibc/v7.8.x/light-clients/wasm/messages.mdx new file mode 100644 index 00000000..bf92c476 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/wasm/messages.mdx @@ -0,0 +1,73 @@ +--- +title: Messages +description: >- + Uploading the Wasm light client contract to the Wasm VM storage is achieved by + means of MsgStoreCode: +--- + +## `MsgStoreCode` + +Uploading the Wasm light client contract to the Wasm VM storage is achieved by means of `MsgStoreCode`: + +```go +type MsgStoreCode struct { + // signer address + Signer string + // wasm byte code of light client contract. It can be raw or gzip compressed + WasmByteCode []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `WasmByteCode` is empty or it exceeds the maximum size, currently set to 3MB. + +Only light client contracts stored using `MsgStoreCode` are allowed to be instantiated. An attempt to create a light client from contracts uploaded via other means (e.g. through `x/wasm` if the module shares the same Wasm VM instance with 08-wasm) will fail. Due to the idempotent nature of the Wasm VM's `StoreCode` function, it is possible to store the same byte code multiple times. + +When execution of `MsgStoreCode` succeeds, the checksum of the contract (i.e. the SHA-256 hash of the contract's byte code) is stored in an allow list.
When a relayer submits [`MsgCreateClient`](https://github.com/cosmos/ibc-go/blob/v7.3.0/proto/ibc/core/client/v1/tx.proto#L25-L37) with 08-wasm's `ClientState`, the client state includes the checksum of the Wasm byte code that should be called. Then 02-client calls [08-wasm's implementation of the `Initialize` function](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/02-client/keeper/client.go#L34) (which is part of the `ClientState` interface), and it will check that the checksum in the client state matches one of the checksums in the allow list. If a match is found, the light client is initialized; otherwise, the transaction is aborted. + +## `MsgMigrateContract` + +Migrating a contract to a new Wasm byte code is achieved by means of `MsgMigrateContract`: + +```go +type MsgMigrateContract struct { + // signer address + Signer string + // the client id of the contract + ClientId string + // the SHA-256 hash of the new wasm byte code for the contract + Checksum []byte + // the json-encoded migrate msg to be passed to the contract on migration + Msg []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `ClientId` is not a valid identifier prefixed by `08-wasm`. +* `Checksum` is not exactly 32 bytes long or it is not found in the list of allowed checksums (a new checksum is added to the list when executing `MsgStoreCode`), or it matches the current checksum of the contract. + +When a Wasm light client contract is migrated to a new Wasm byte code, the checksum for the contract will be updated with the new checksum.
+ +## `MsgRemoveChecksum` + +Removing a checksum from the list of allowed checksums is achieved by means of `MsgRemoveChecksum`: + +```go +type MsgRemoveChecksum struct { + // signer address + Signer string + // Wasm byte code checksum to be removed from the store + Checksum []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `Checksum` is not exactly 32 bytes long or it is not found in the list of allowed checksums (a new checksum is added to the list when executing `MsgStoreCode`). + +When a checksum is removed from the list of allowed checksums, the corresponding Wasm byte code will not be available for instantiation in [08-wasm's implementation of the `Initialize` function](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/02-client/keeper/client.go#L34). diff --git a/docs/ibc/v7.8.x/light-clients/wasm/overview.mdx b/docs/ibc/v7.8.x/light-clients/wasm/overview.mdx new file mode 100644 index 00000000..56293398 --- /dev/null +++ b/docs/ibc/v7.8.x/light-clients/wasm/overview.mdx @@ -0,0 +1,22 @@ +--- +title: Overview +description: Learn about the 08-wasm light client proxy module. +--- + +## Overview + +Learn about the `08-wasm` light client proxy module. + +### Context + +Traditionally, light clients used by ibc-go have been implemented only in Go, and since ibc-go v7 (with the release of the 02-client refactor), they are [first-class Cosmos SDK modules](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-010-light-clients-as-sdk-modules.md). This means that updating existing light client implementations or adding support for new light clients is a multi-step, time-consuming process involving on-chain governance: it is necessary to modify the codebase of ibc-go (if the light client is part of its codebase), re-build chains' binaries, pass a governance proposal and have validators upgrade their nodes.
+ +### Motivation + +To break the limitation of being able to write light client implementations only in Go, the `08-wasm` module adds support for running light clients written in a Wasm-compilable language. The light client byte code implements the entry points of a [CosmWasm](https://docs.cosmwasm.com/docs/) smart contract, and runs inside a Wasm VM. The `08-wasm` module exposes a proxy light client interface that routes incoming messages to the appropriate handler function, inside the Wasm VM, for execution. + +Adding a new light client to a chain is as simple as submitting a governance proposal with the message that stores the byte code of the light client contract. No coordinated upgrade is needed. When the governance proposal passes and the message is executed, the contract is ready to be instantiated upon receiving a relayer-submitted `MsgCreateClient`. The process of creating a Wasm light client is the same as with a regular light client implemented in Go. + +### Use cases + +* Development of light clients for non-Cosmos ecosystem chains: state machines in other ecosystems are, in many cases, implemented in Rust, and thus there are probably libraries used in their light client implementations for which there is no equivalent in Go. This makes the development of a light client in Go very difficult, but relatively simple in Rust; writing a CosmWasm smart contract in Rust that implements the light client algorithm therefore requires much less effort. diff --git a/docs/ibc/v7.8.x/middleware/callbacks/end-users.mdx b/docs/ibc/v7.8.x/middleware/callbacks/end-users.mdx new file mode 100644 index 00000000..8c198b6c --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/callbacks/end-users.mdx @@ -0,0 +1,95 @@ +--- +title: End Users +description: >- + This section explains how to use the callbacks middleware from the perspective + of an IBC Actor.
Callbacks middleware provides two types of callbacks: +--- + +This section explains how to use the callbacks middleware from the perspective of an IBC Actor. Callbacks middleware provides two types of callbacks: + +* Source callbacks: + * `SendPacket` callback + * `OnAcknowledgementPacket` callback + * `OnTimeoutPacket` callback +* Destination callbacks: + * `ReceivePacket` callback + +For a given channel, the source callbacks are supported if the source chain has the callbacks middleware wired up in the channel's IBC stack. Similarly, the destination callbacks are supported if the destination chain has the callbacks middleware wired up in the channel's IBC stack. + + +Callbacks are always executed after the packet has been processed by the underlying IBC module. + + + +If the underlying application module is doing an asynchronous acknowledgement on packet receive (for example, if the [packet forward middleware](https://github.com/cosmos/ibc-apps/tree/main/middleware/packet-forward-middleware) is in the stack, and is being used by this packet), then the callbacks middleware will execute the `ReceivePacket` callback after the acknowledgement has been received. + + +## Source Callbacks + +Source callbacks are natively supported in the following IBC modules (if they are wrapped by the callbacks middleware): + +* `transfer` +* `icacontroller` + +To have your source callbacks processed by the callbacks middleware, you must set the memo in the application's packet data to the following format: + +```jsonc
{
  "src_callback": {
    "address": "callbackAddressString",
    // optional
    "gas_limit": "userDefinedGasLimitString"
  }
}
``` + +## Destination Callbacks + +Destination callbacks are natively only supported in the transfer module. Note that wrapping icahost is not supported. This is because icahost should be able to execute an arbitrary transaction anyway, and can call contracts or modules directly.
+ +To have your destination callbacks processed by the callbacks middleware, you must set the memo in the application's packet data to the following format: + +```jsonc +{ + "dest_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +Note that a packet can have both a source and destination callback. + +```jsonc expandable +{ + "src_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + }, + "dest_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +## User Defined Gas Limit + +The user defined gas limit was added for the following reasons: + +* To prevent callbacks from blocking the packet lifecycle. +* To prevent relayers from being able to DOS the callback execution by sending a packet with a low amount of gas. + + +There is a chain wide parameter that sets the maximum gas limit that a user can set for a callback. This is to prevent a user from setting a gas limit that is too high for relayers. If the `"gas_limit"` is not set in the packet memo, then the maximum gas limit is used. + + +These goals are achieved by creating a minimum gas amount required for callback execution. If the relayer provides at least the minimum gas limit for the callback execution, then the packet lifecycle will not be blocked if the callback runs out of gas during execution, and the callback cannot be retried. If the relayer does not provide the minimum amount of gas and the callback execution runs out of gas, the entire tx is reverted and it may be executed again. + + +The `SendPacket` callback is always reverted if the callback execution fails or returns an error for any reason. This is so that the packet is not sent if the callback execution fails.
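The memo formats above can also be produced programmatically. Below is a minimal, self-contained Go sketch; the `buildCallbackMemo` helper and its struct names are illustrative, and only the JSON keys (`src_callback`, `dest_callback`, `address`, `gas_limit`) come from the middleware:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// callbackEntry mirrors the fields the callbacks middleware reads from the memo.
type callbackEntry struct {
	Address  string `json:"address"`
	GasLimit string `json:"gas_limit,omitempty"` // optional; omitted when empty
}

// buildCallbackMemo assembles a packet memo carrying source and/or
// destination callback entries. Either entry may be nil.
func buildCallbackMemo(src, dest *callbackEntry) (string, error) {
	memo := make(map[string]*callbackEntry)
	if src != nil {
		memo["src_callback"] = src
	}
	if dest != nil {
		memo["dest_callback"] = dest
	}
	bz, err := json.Marshal(memo)
	if err != nil {
		return "", err
	}
	return string(bz), nil
}

func main() {
	memo, _ := buildCallbackMemo(
		&callbackEntry{Address: "callbackAddressString", GasLimit: "userDefinedGasLimitString"},
		&callbackEntry{Address: "callbackAddressString"},
	)
	fmt.Println(memo)
}
```

The resulting string is what would be placed in the `memo` field of the transfer (or ICA) packet data.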
+ diff --git a/docs/ibc/v7.8.x/middleware/callbacks/events.mdx b/docs/ibc/v7.8.x/middleware/callbacks/events.mdx new file mode 100644 index 00000000..cd0800c3 --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/callbacks/events.mdx @@ -0,0 +1,37 @@ +--- +title: Events +description: >- + An overview of all events related to the callbacks middleware. There are two + types of events, "ibcsrccallback" and "ibcdestcallback". +--- + +An overview of all events related to the callbacks middleware. There are two types of events, `"ibc_src_callback"` and `"ibc_dest_callback"`. + +## Shared Attributes + +Both of these event types share the following attributes: + +| **Attribute Key** | **Attribute Values** | **Optional** | +| :--------------------------: | :-----------------------------------------------------------------------------------------: | :----------------: | +| module | "ibccallbacks" | | +| callback\_type | **One of**: "send\_packet", "acknowledgement\_packet", "timeout\_packet", "receive\_packet" | | +| callback\_address | string | | +| callback\_exec\_gas\_limit | string (parsed from uint64) | | +| callback\_commit\_gas\_limit | string (parsed from uint64) | | +| packet\_sequence | string (parsed from uint64) | | +| callback\_result | **One of**: "success", "failure" | | +| callback\_error | string (parsed from callback err) | Yes, if err != nil | + +## `ibc_src_callback` Attributes + +| **Attribute Key** | **Attribute Values** | +| :------------------: | :----------------------: | +| packet\_src\_port | string (sourcePortID) | +| packet\_src\_channel | string (sourceChannelID) | + +## `ibc_dest_callback` Attributes + +| **Attribute Key** | **Attribute Values** | +| :-------------------: | :--------------------: | +| packet\_dest\_port | string (destPortID) | +| packet\_dest\_channel | string (destChannelID) | diff --git a/docs/ibc/v7.8.x/middleware/callbacks/gas.mdx b/docs/ibc/v7.8.x/middleware/callbacks/gas.mdx new file mode 100644 index 00000000..461e8714 --- 
/dev/null +++ b/docs/ibc/v7.8.x/middleware/callbacks/gas.mdx @@ -0,0 +1,79 @@ +--- +title: Gas Management +description: >- + Executing arbitrary code on a chain can be arbitrarily expensive. In general, + a callback may consume infinite gas (think of a callback that loops forever). + This is problematic for a few reasons: +--- + +## Overview + +Executing arbitrary code on a chain can be arbitrarily expensive. In general, a callback may consume infinite gas (think of a callback that loops forever). This is problematic for a few reasons: + +* It can block the packet lifecycle. +* It can be used to consume all of the relayer's funds and gas. +* A relayer can DOS the callback execution by sending a packet with a low amount of gas. + +To prevent these, the callbacks middleware introduces two gas limits: a chain wide gas limit (`maxCallbackGas`) and a user defined gas limit. + +### Chain Wide Gas Limit + +Since the callbacks middleware does not have a keeper, it does not use a governance parameter to set the chain wide gas limit. Instead, the chain wide gas limit is passed in as a parameter to the callbacks middleware during initialization. + +```go expandable +/ app.go + maxCallbackGas := uint64(1_000_000) + +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) + +transferStack = ibccallbacks.NewIBCMiddleware(transferStack, app.IBCFeeKeeper, app.MockContractKeeper, maxCallbackGas) +/ Since the callbacks middleware itself is an ics4wrapper, it needs to be passed to the transfer keeper +app.TransferKeeper.WithICS4Wrapper(transferStack.(porttypes.ICS4Wrapper)) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### User Defined Gas Limit + +The user defined gas limit is set by the IBC Actor during packet creation. The user defined gas limit is set in the packet memo. 
If the user defined gas limit is not set or if the user defined gas limit is greater than the chain wide gas limit, then the chain wide gas limit is used as the user defined gas limit. + +```jsonc expandable +{ + "src_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + }, + "dest_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +## Gas Limit Enforcement + +During a callback execution, there are three types of gas limits that are enforced: + +* User defined gas limit +* Chain wide gas limit +* Context gas limit (amount of gas that the relayer has left for this execution) + +The chain wide gas limit is used as an upper bound on the user defined gas limit as explained in the [previous section](#user-defined-gas-limit). It may also be used as a default value if no user gas limit is provided. Therefore, we can ignore the chain wide gas limit for the rest of this section and work with the minimum of the chain wide gas limit and user defined gas limit. This minimum is called the commit gas limit. + +The gas limit enforcement is done by executing the callback inside a cached context with a new gas meter. The gas meter is initialized with the minimum of the commit gas limit and the context gas limit. This minimum is called the execution gas limit. We say that retries are allowed if `context gas limit < commit gas limit`. Otherwise, we say that retries are not allowed. + +If the callback execution fails due to an out of gas error, then the middleware checks if retries are allowed. If retries are not allowed, then it recovers from the out of gas error, consumes the execution gas limit from the original context, and continues with the packet life cycle. If retries are allowed, then it panics with an out of gas error to revert the entire tx. The packet can then be submitted again with a higher gas limit. The out of gas panic descriptor is shown below.
+ +```go +fmt.Sprintf("ibc %s callback out of gas; commitGasLimit: %d", callbackType, callbackData.CommitGasLimit) +``` + +If the callback execution does not fail due to an out of gas error then the callbacks middleware does not block the packet life cycle regardless of whether retries are allowed or not. diff --git a/docs/ibc/v7.8.x/middleware/callbacks/integration.mdx b/docs/ibc/v7.8.x/middleware/callbacks/integration.mdx new file mode 100644 index 00000000..4ff51f73 --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/callbacks/integration.mdx @@ -0,0 +1,116 @@ +--- +title: Integration +description: >- + Learn how to integrate the callbacks middleware with IBC applications. The + following document is intended for developers building on top of the Cosmos + SDK and only applies for Cosmos SDK chains. +--- + +Learn how to integrate the callbacks middleware with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies for Cosmos SDK chains. + +The callbacks middleware is a minimal and stateless implementation of the IBC middleware interface. It does not have a keeper, nor does it store any state. It simply routes IBC middleware messages to the appropriate callback function, which is implemented by the secondary application. Therefore, it doesn't need to be registered as a module, nor does it need to be added to the module manager. It only needs to be added to the IBC application stack. + +## Pre-requisite Readings + +* [IBC middleware development](/docs/ibc/v7.8.x/ibc/middleware/develop) +* [IBC middleware integration](/docs/ibc/v7.8.x/ibc/middleware/integration) + +The callbacks middleware, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly. +For Cosmos SDK chains this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application.
+ +## Importing the callbacks middleware + +The callbacks middleware has no stable releases yet. To use it, you need to import the git commit that contains the module with the compatible version of `ibc-go`. To do so, run the following command with the desired git commit in your project: + +```sh +go get github.com/cosmos/ibc-go/modules/apps/callbacks@17cf1260a9cdc5292512acc9bcf6336ef0d917f4 +``` + +You can find the version matrix [here](#importing-the-callbacks-middleware). + +## Configuring an application stack with the callbacks middleware + +As mentioned in [IBC middleware development](/docs/ibc/v7.8.x/ibc/middleware/develop), an application stack may be composed of many or no middlewares that nest a base application. +These layers form the complete set of application logic that enables developers to build composable and flexible IBC application stacks. +For example, an application stack may just be a single base application like `transfer`; however, the same application stack composed with `29-fee` and `callbacks` will nest the `transfer` base application twice by wrapping it with the Fee Middleware module and then the callbacks middleware. + +The callbacks middleware also **requires** a secondary application that will receive the callbacks to implement the [`ContractKeeper`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/callbacks/types/expected_keepers.go#L11-L83). Since the wasm module does not yet support the callbacks middleware, we will use the `mockContractKeeper` module in the examples below. You should replace this with a module that implements `ContractKeeper`. + +### Transfer + +See below for an example of how to create an application stack using `transfer`, `29-fee`, and `callbacks`. Feel free to omit the `29-fee` middleware if you do not want to use it. +The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`.
+The in-line comments describe the execution flow of packets between the application stack and IBC core. + +```go expandable +/ Create Transfer Stack +/ SendPacket, since it is originating from the application to core IBC: +/ transferKeeper.SendPacket -> callbacks.SendPacket -> feeKeeper.SendPacket -> channel.SendPacket + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way +/ channel.RecvPacket -> fee.OnRecvPacket -> callbacks.OnRecvPacket -> transfer.OnRecvPacket + +/ transfer stack contains (from top to bottom): +/ - IBC Fee Middleware +/ - IBC Callbacks Middleware +/ - Transfer + +/ create IBC module from bottom to top of stack +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibccallbacks.NewIBCMiddleware(transferStack, app.IBCFeeKeeper, app.MockContractKeeper, maxCallbackGas) + +transferICS4Wrapper := transferStack.(porttypes.ICS4Wrapper) + +transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) +/ Since the callbacks middleware itself is an ics4wrapper, it needs to be passed to the transfer keeper +app.TransferKeeper.WithICS4Wrapper(transferICS4Wrapper) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + + +The usage of `WithICS4Wrapper` after `transferStack`'s configuration is critical! It allows the callbacks middleware to do `SendPacket` callbacks and asynchronous `ReceivePacket` callbacks. You must do this regardless of whether you are using the `29-fee` middleware or not. 
+ + +### Interchain Accounts Controller + +```go expandable +/ Create Interchain Accounts Stack +/ SendPacket, since it is originating from the application to core IBC: +/ icaControllerKeeper.SendTx -> callbacks.SendPacket -> fee.SendPacket -> channel.SendPacket + +/ initialize ICA module with mock module as the authentication module on the controller side +var icaControllerStack porttypes.IBCModule +icaControllerStack = ibcmock.NewIBCModule(&mockModule, ibcmock.NewIBCApp("", scopedICAMockKeeper)) + +app.ICAAuthModule = icaControllerStack.(ibcmock.IBCModule) + +icaControllerStack = icacontroller.NewIBCMiddleware(icaControllerStack, app.ICAControllerKeeper) + +icaControllerStack = ibccallbacks.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper, app.MockContractKeeper, maxCallbackGas) + +icaICS4Wrapper := icaControllerStack.(porttypes.ICS4Wrapper) + +icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper) +/ Since the callbacks middleware itself is an ics4wrapper, it needs to be passed to the ica controller keeper +app.ICAControllerKeeper.WithICS4Wrapper(icaICS4Wrapper) + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is: +/ channel.RecvPacket -> fee.OnRecvPacket -> icaHost.OnRecvPacket + +var icaHostStack porttypes.IBCModule +icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper) + +icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper) + +/ Add ICA host and controller to IBC router +ibcRouter. +AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). +AddRoute(icahosttypes.SubModuleName, icaHostStack) +``` + + +The usage of `WithICS4Wrapper` here is also critical!
+ diff --git a/docs/ibc/v7.8.x/middleware/callbacks/interfaces.mdx b/docs/ibc/v7.8.x/middleware/callbacks/interfaces.mdx new file mode 100644 index 00000000..a2d82540 --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/callbacks/interfaces.mdx @@ -0,0 +1,160 @@ +--- +title: Interfaces +--- + +The callbacks middleware requires certain interfaces to be implemented by the underlying IBC applications and the secondary application. If you're simply wiring up the callbacks middleware to an existing IBC application stack and a secondary application such as `icacontroller` and `x/wasm`, you can skip this section. + +## Interfaces for developing the Underlying IBC Application + +### `PacketDataUnmarshaler` + +```go +/ PacketDataUnmarshaler defines an optional interface which allows a middleware to +/ request the packet data to be unmarshaled by the base application. +type PacketDataUnmarshaler interface { + / UnmarshalPacketData unmarshals the packet data into a concrete type + UnmarshalPacketData([]byte) (interface{ +}, error) +} +``` + +The callbacks middleware **requires** the underlying ibc application to implement the [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/05-port/types/module.go#L142-L147) interface so that it can unmarshal the packet data bytes into the appropriate packet data type. This allows usage of interface functions implemented by the packet data type. The packet data type is expected to implement the `PacketDataProvider` interface (see section below), which is used to parse the callback data that is currently stored in the packet memo field for `transfer` and `ica` packets as a JSON string. See its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/ibc_module.go#L303-L313) and [`icacontroller`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/27-interchain-accounts/controller/ibc_middleware.go#L258-L268) modules for reference. 
+ +If the underlying application is a middleware itself, then it can implement this interface by simply passing the function call to its underlying application. See its implementation in the [`fee middleware`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/29-fee/ibc_middleware.go#L368-L378) for reference. + +### `PacketDataProvider` + +```go +/ PacketDataProvider defines an optional interface for retrieving custom packet data stored on behalf of another application. +/ An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information. +/ A short term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application. +/ This interface standardizes that behaviour. Upon realization of the ability for middlewares to define their own packet data types, this interface will be deprecated and removed with time. +type PacketDataProvider interface { + / GetCustomPacketData returns the packet data held on behalf of another application. + / The name the information is stored under should be provided as the key. + / If no custom packet data exists for the key, nil should be returned. + GetCustomPacketData(key string) + +interface{ +} +} +``` + +The callbacks middleware also **requires** the underlying ibc application's packet data type to implement the [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/exported/packet.go#L43-L52) interface. This interface is used to retrieve the callback data from the packet data (using the memo field in the case of `transfer` and `ica`). For example, see its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/types/packet.go#L85-L105) module. + +Since middlewares do not have packet types, they do not need to implement this interface.
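As an illustration of the contract above, a toy packet data type (invented here; the linked `transfer` implementation is the authoritative reference) can satisfy `GetCustomPacketData` by parsing its memo as JSON:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// demoPacketData stands in for an application packet data type
// that carries opaque middleware data in its memo field.
type demoPacketData struct {
	Memo string
}

// GetCustomPacketData returns the memo entry stored under key,
// or nil if the memo is empty, not valid JSON, or the key is absent,
// mirroring the PacketDataProvider contract described above.
func (d demoPacketData) GetCustomPacketData(key string) interface{} {
	if len(d.Memo) == 0 {
		return nil
	}
	jsonObject := make(map[string]interface{})
	if err := json.Unmarshal([]byte(d.Memo), &jsonObject); err != nil {
		return nil
	}
	return jsonObject[key]
}

func main() {
	pd := demoPacketData{Memo: `{"src_callback": {"address": "cosmos1abc"}}`}
	fmt.Println(pd.GetCustomPacketData("src_callback"))
}
```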
+ +### `PacketData` + +```go +/ PacketData defines an optional interface which an application's packet data structure may implement. +type PacketData interface { + / GetPacketSender returns the sender address of the packet data. + / If the packet sender is unknown or undefined, an empty string should be returned. + GetPacketSender(sourcePortID string) + +string +} +``` + +[`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/exported/packet.go#L36-L41) is an optional interface that can be implemented by the underlying ibc application's packet data type. It is used to retrieve the packet sender address from the packet data. The callbacks middleware uses this interface to retrieve the packet sender address and pass it to the callback function during a source callback. If this interface is not implemented, then the callbacks middleware passes an empty string as the sender address. For example, see its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/types/packet.go#L74-L83) and [`ica`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/27-interchain-accounts/types/packet.go#L78-L92) modules. + +This interface was added so that secondary applications can retrieve the packet sender address to perform custom authorization logic if needed. + +Since middlewares do not have packet types, they do not need to implement this interface. + +## Interfaces for developing the Secondary Application + +### `ContractKeeper` + +The callbacks middleware requires the secondary application to implement the [`ContractKeeper`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/callbacks/types/expected_keepers.go#L11-L83) interface. The contract keeper will be invoked at each step of the packet lifecycle. When a packet is sent, if callback information is provided, the contract keeper will be invoked via the `IBCSendPacketCallback`.
This allows the contract keeper to prevent packet sends when callback information is provided, for example if the sender is unauthorized to perform callbacks on the given information. If the packet send is successful, the contract keeper on the destination (if present) will be invoked when a packet has been received and the acknowledgement is written, this will occur via `IBCReceivePacketCallback`. At the end of the packet lifecycle, when processing acknowledgements or timeouts, the source contract keeper will be invoked either via `IBCOnAcknowledgementPacket` or `IBCOnTimeoutPacket`. Once a packet has been sent, each step of the packet lifecycle can be processed given that a relayer sets the gas limit to be more than or equal to the required `CommitGasLimit`. State changes performed in the callback will only be committed upon successful execution. + +```go expandable +/ ContractKeeper defines the entry points exposed to the VM module which invokes a smart contract +type ContractKeeper interface { + / IBCSendPacketCallback is called in the source chain when a PacketSend is executed. The + / packetSenderAddress is determined by the underlying module, and may be empty if the sender is + / unknown or undefined. The contract is expected to handle the callback within the user defined + / gas limit, and handle any errors, or panics gracefully. + / This entry point is called with a cached context. If an error is returned, then the changes in + / this context will not be persisted, and the error will be propagated to the underlying IBC + / application, resulting in a packet send failure. + / + / Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + / validation on the origin of a given packet. It is recommended to perform the same validation + / on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + / defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. 
+ IBCSendPacketCallback( + cachedCtx sdk.Context, + sourcePort string, + sourceChannel string, + timeoutHeight clienttypes.Height, + timeoutTimestamp uint64, + packetData []byte, + contractAddress, + packetSenderAddress string, + ) + +error + / IBCOnAcknowledgementPacketCallback is called in the source chain when a packet acknowledgement + / is received. The packetSenderAddress is determined by the underlying module, and may be empty if + / the sender is unknown or undefined. The contract is expected to handle the callback within the + / user defined gas limit, and handle any errors, or panics gracefully. + / This entry point is called with a cached context. If an error is returned, then the changes in + / this context will not be persisted, but the packet lifecycle will not be blocked. + / + / Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + / validation on the origin of a given packet. It is recommended to perform the same validation + / on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + / defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. + IBCOnAcknowledgementPacketCallback( + cachedCtx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, + contractAddress, + packetSenderAddress string, + ) + +error + / IBCOnTimeoutPacketCallback is called in the source chain when a packet is not received before + / the timeout height. The packetSenderAddress is determined by the underlying module, and may be + / empty if the sender is unknown or undefined. The contract is expected to handle the callback + / within the user defined gas limit, and handle any error, out of gas, or panics gracefully. + / This entry point is called with a cached context. If an error is returned, then the changes in + / this context will not be persisted, but the packet lifecycle will not be blocked. 
+ / + / Implementations are provided with the packetSenderAddress and MAY choose to use this to perform + / validation on the origin of a given packet. It is recommended to perform the same validation + / on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This + / defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. + IBCOnTimeoutPacketCallback( + cachedCtx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, + contractAddress, + packetSenderAddress string, + ) + +error + / IBCReceivePacketCallback is called in the destination chain when a packet acknowledgement is written. + / The contract is expected to handle the callback within the user defined gas limit, and handle any errors, + / out of gas, or panics gracefully. + / This entry point is called with a cached context. If an error is returned, then the changes in + / this context will not be persisted, but the packet lifecycle will not be blocked. + IBCReceivePacketCallback( + cachedCtx sdk.Context, + packet ibcexported.PacketI, + ack ibcexported.Acknowledgement, + contractAddress string, + ) + +error +} +``` + +These are the callback entry points exposed to the secondary application. The secondary application is expected to execute its custom logic within these entry points. The callbacks middleware will handle the execution of these callbacks and revert the state if needed. + + +Note that the source callback entry points are provided with the `packetSenderAddress` and MAY choose to use this to perform validation on the origin of a given packet. It is recommended to perform the same validation on all source chain callbacks (SendPacket, AcknowledgePacket, TimeoutPacket). This defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks. 
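The origin validation recommended in the comments above can be distilled into a few lines. The sketch below omits the SDK types and uses illustrative names; it only demonstrates the check a `ContractKeeper` implementation might run at the top of each source chain callback:

```go
package main

import (
	"errors"
	"fmt"
)

// errUnauthorized is returned when the packet sender may not
// trigger callbacks on the given contract.
var errUnauthorized = errors.New("packet sender is not authorized to run callbacks on this contract")

// validateCallbackOrigin requires the packet sender to equal the callback
// address before any source chain callback (SendPacket,
// AcknowledgementPacket, TimeoutPacket) executes. An empty sender means
// the underlying module could not determine it, so it is rejected
// defensively. This is one possible policy, not the only valid one.
func validateCallbackOrigin(packetSenderAddress, contractAddress string) error {
	if packetSenderAddress == "" || packetSenderAddress != contractAddress {
		return errUnauthorized
	}
	return nil
}

func main() {
	fmt.Println(validateCallbackOrigin("cosmos1abc", "cosmos1abc")) // authorized: nil error
	fmt.Println(validateCallbackOrigin("", "cosmos1abc"))           // rejected: empty sender
}
```

Applying the same policy uniformly across all three source chain callbacks guards against exploits from incorrectly wired `SendPacket` ordering, as the interface comments describe.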
+ diff --git a/docs/ibc/v7.8.x/middleware/callbacks/overview.mdx b/docs/ibc/v7.8.x/middleware/callbacks/overview.mdx new file mode 100644 index 00000000..440c4107 --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/callbacks/overview.mdx @@ -0,0 +1,49 @@ +--- +title: Overview +description: >- + Learn about what the Callbacks Middleware is, and how to build custom modules + that utilize the Callbacks Middleware functionality +--- + +Learn about what the Callbacks Middleware is, and how to build custom modules that utilize the Callbacks Middleware functionality. + +## What is the Callbacks Middleware? + +IBC was designed with callbacks between core IBC and IBC applications. IBC apps would send a packet to core IBC, and receive a callback on every step of that packet's lifecycle. This allows IBC applications to be built on top of core IBC, and to be able to execute custom logic on packet lifecycle events (e.g. unescrow tokens for ICS-20). + +This setup worked well for off-chain users interacting with IBC applications. However, we are now seeing the desire for secondary applications (e.g. smart contracts, modules) to call into IBC apps as part of their state machine logic and then do some actions on packet lifecycle events. + +The Callbacks Middleware allows for this functionality by allowing the packets of the underlying IBC applications to register callbacks to secondary applications for lifecycle events. These callbacks are then executed by the Callbacks Middleware when the corresponding packet lifecycle event occurs. + +After much discussion, the design was expanded into an ADR, and the Callbacks Middleware is an implementation of that ADR. + +## Concepts + +The Callbacks Middleware was built with smart contracts in mind, but can be used by any secondary application that wants to allow IBC packets to call into it. Think of the Callbacks Middleware as a bridge between core IBC and a secondary application.
+ +We have the following definitions: + +* `Underlying IBC application`: The IBC application that is wrapped by the Callbacks Middleware. This is the IBC application that is actually sending and receiving packet lifecycle events from core IBC. For example, the transfer module, or the ICA controller submodule. +* `IBC Actor`: IBC Actor is an on-chain or off-chain entity that can initiate a packet on the underlying IBC application. For example, a smart contract, an off-chain user, or a module that sends a transfer packet are all IBC Actors. +* `Secondary application`: The application that is being called into by the Callbacks Middleware for packet lifecycle events. This is the application that is receiving the callback directly from the Callbacks Middleware module. For example, the `x/wasm` module. +* `Callback Actor`: The on-chain smart contract or module that is registered to receive callbacks from the secondary application. For example, a Wasm smart contract (gatekeeped by the `x/wasm` module). Note that the Callback Actor is not necessarily the same as the IBC Actor. For example, an off-chain user can initiate a packet on the underlying IBC application, but the Callback Actor could be a smart contract. The secondary application may want to check that the IBC Actor is allowed to call into the Callback Actor, for example, by checking that the IBC Actor is the same as the Callback Actor. +* `Callback Address`: Address of the Callback Actor. This is the address that the secondary application will call into when a packet lifecycle event occurs. For example, the address of the Wasm smart contract. +* `Maximum gas limit`: The maximum amount of gas that the Callbacks Middleware will allow the secondary application to use when it executes its custom logic. +* `User defined gas limit`: The amount of gas that the IBC Actor wants to allow the secondary application to use when it executes its custom logic. 
This is the gas limit that the IBC Actor specifies when it sends a packet to the underlying IBC application. This cannot be greater than the maximum gas limit. + +Think of the secondary application as a bridge between the Callbacks Middleware and the Callback Actor. The secondary application is responsible for executing the custom logic of the Callback Actor when a packet lifecycle event occurs. The secondary application is also responsible for checking that the IBC Actor is allowed to call into the Callback Actor. + +Note that it is possible that the IBC Actor, Secondary Application, and Callback Actor are all the same entity, in which case the Callback Address should be the secondary application's module address. + +The following diagram shows what a typical `RecvPacket`, `AcknowledgementPacket`, and `TimeoutPacket` execution flow looks like: +![callbacks-middleware](/docs/ibc/images/04-middleware/02-callbacks/images/callbackflow.svg) + +And the following diagram shows what a typical `SendPacket` and `WriteAcknowledgement` execution flow looks like: +![callbacks-middleware](/docs/ibc/images/04-middleware/02-callbacks/images/ics4-callbackflow.svg) + +## Known Limitations + +* Callbacks are always executed after the underlying IBC application has executed its logic. +* Maximum gas limit is hardcoded manually during wiring. It requires a coordinated upgrade to change the maximum gas limit. +* The receive packet callback does not pass the relayer address to the secondary application. This is so that we can use the same callback for both synchronous and asynchronous acknowledgements. +* The receive packet callback does not pass the IBC Actor's address, because the IBC Actor lives on the counterparty chain and cannot be trusted.
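The interaction between the maximum gas limit and the user defined gas limit described above reduces to a minimum computation, together with the retry rule from the gas management documentation. A small Go sketch of these semantics (function names are illustrative; `0` stands for an unset user limit):

```go
package main

import "fmt"

// commitGasLimit resolves the gas limit a callback is entitled to:
// the user defined limit capped by the chain wide maximum, with the
// maximum used as the default when the user set no limit.
func commitGasLimit(userGasLimit, maxCallbackGas uint64) uint64 {
	if userGasLimit == 0 || userGasLimit > maxCallbackGas {
		return maxCallbackGas
	}
	return userGasLimit
}

// retriesAllowed reports whether an out-of-gas callback should revert the
// whole tx so the packet can be relayed again: this happens only when the
// relayer supplied less gas than the commit gas limit.
func retriesAllowed(ctxGasRemaining, commitGas uint64) bool {
	return ctxGasRemaining < commitGas
}

func main() {
	maxGas := uint64(1_000_000)
	fmt.Println(commitGasLimit(0, maxGas))         // unset user limit: defaults to the chain wide maximum
	fmt.Println(commitGasLimit(2_000_000, maxGas)) // capped at the chain wide maximum
	fmt.Println(retriesAllowed(500_000, commitGasLimit(200_000, maxGas)))
}
```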
diff --git a/docs/ibc/v7.8.x/middleware/ics29-fee/end-users.mdx b/docs/ibc/v7.8.x/middleware/ics29-fee/end-users.mdx new file mode 100644 index 00000000..4c7ad2af --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/ics29-fee/end-users.mdx @@ -0,0 +1,30 @@ +--- +title: End Users +--- + +## Synopsis + +Learn how to incentivize IBC packets using the ICS29 Fee Middleware module. + +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v7.8.x/middleware/ics29-fee/overview) + +## Summary + +Different types of end users: + +- CLI users who want to manually incentivize IBC packets +- Client developers + +The Fee Middleware module allows end users to add a 'tip' to each IBC packet which will incentivize relayer operators to relay packets between chains. gRPC endpoints are exposed for client developers as well as a simple CLI for manually incentivizing IBC packets. + +## CLI Users + +For an in depth guide on how to use the ICS29 Fee Middleware module using the CLI please take a look at the [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers#asynchronous-incentivization-of-a-fungible-token-transfer) on the `ibc-go` repo. + +## Client developers + +Client developers can read more about the relevant ICS29 message types in the [Fee messages section](/docs/ibc/v7.8.x/middleware/ics29-fee/msgs). + +[CosmJS](https://github.com/cosmos/cosmjs) is a useful client library for signing and broadcasting Cosmos SDK messages. 
diff --git a/docs/ibc/v7.8.x/middleware/ics29-fee/events.mdx b/docs/ibc/v7.8.x/middleware/ics29-fee/events.mdx new file mode 100644 index 00000000..7d6dfc42 --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/ics29-fee/events.mdx @@ -0,0 +1,37 @@ +--- +title: Events +--- + +## Synopsis + +An overview of all events related to ICS-29 + +## `MsgPayPacketFee`, `MsgPayPacketFeeAsync` + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| incentivized_ibc_packet | port_id | `{portID}` | +| incentivized_ibc_packet | channel_id | `{channelID}` | +| incentivized_ibc_packet | packet_sequence | `{sequence}` | +| incentivized_ibc_packet | recv_fee | `{recvFee}` | +| incentivized_ibc_packet | ack_fee | `{ackFee}` | +| incentivized_ibc_packet | timeout_fee | `{timeoutFee}` | +| message | module | fee-ibc | + +## `RegisterPayee` + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| register_payee | relayer | `{relayer}` | +| register_payee | payee | `{payee}` | +| register_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | + +## `RegisterCounterpartyPayee` + +| Type | Attribute Key | Attribute Value | +| --------------------------- | ------------------ | --------------------- | +| register_counterparty_payee | relayer | `{relayer}` | +| register_counterparty_payee | counterparty_payee | `{counterpartyPayee}` | +| register_counterparty_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | diff --git a/docs/ibc/v7.8.x/middleware/ics29-fee/fee-distribution.mdx b/docs/ibc/v7.8.x/middleware/ics29-fee/fee-distribution.mdx new file mode 100644 index 00000000..9d2e0795 --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/ics29-fee/fee-distribution.mdx @@ -0,0 +1,108 @@ +--- +title: Fee Distribution +--- + +## Synopsis + +Learn about payee registration for the distribution of packet fees. The following document is intended for relayer operators. 
+ +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v7.8.x/middleware/ics29-fee/overview) + +Packet fees are divided into 3 distinct amounts in order to compensate relayer operators for packet relaying on fee enabled IBC channels. + +- `RecvFee`: The sum of all packet receive fees distributed to a payee for successful execution of `MsgRecvPacket`. +- `AckFee`: The sum of all packet acknowledgement fees distributed to a payee for successful execution of `MsgAcknowledgement`. +- `TimeoutFee`: The sum of all packet timeout fees distributed to a payee for successful execution of `MsgTimeout`. + +## Register a counterparty payee address for forward relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v7.8.x/middleware/ics29-fee/overview#concepts), the forward relayer describes the actor who performs the submission of `MsgRecvPacket` on the destination chain. +Fee distribution for incentivized packet relays takes place on the packet source chain. + +> Relayer operators are expected to register a counterparty payee address, in order to be compensated accordingly with `RecvFee`s upon completion of a packet lifecycle. + +The counterparty payee address registered on the destination chain is encoded into the packet acknowledgement and communicated as such to the source chain for fee distribution. +**If a counterparty payee is not registered for the forward relayer on the destination chain, the escrowed fees will be refunded upon fee distribution.** + +### Relayer operator actions + +A transaction must be submitted **to the destination chain** including a `CounterpartyPayee` address of an account on the source chain. +The transaction must be signed by the `Relayer`. + +Note: If a module account address is used as the `CounterpartyPayee` but the module has been set as a blocked address in the `BankKeeper`, the refunding to the module account will fail.
This is because many modules use invariants to compare internal tracking of module account balances against the actual balance of the account stored in the `BankKeeper`. If a token transfer to the module account occurs without going through this module and updating the account balance of the module on the `BankKeeper`, then invariants may break and unknown behaviour could occur depending on the module implementation. Therefore, if it is desirable to use a module account that is currently blocked, the module developers should be consulted to gauge the possibility of removing the module account from the blocked list. + +```go +type MsgRegisterCounterpartyPayee struct { + / unique port identifier + PortId string + / unique channel identifier + ChannelId string + / the relayer address + Relayer string + / the counterparty payee address + CounterpartyPayee string +} +``` + +> This message is expected to fail if: +> +> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `Relayer` is an invalid address. +> - `CounterpartyPayee` is empty or contains more than 2048 bytes. + +See below for an example CLI command: + +```bash +simd tx ibc-fee register-counterparty-payee transfer channel-0 \ +cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \ +osmo1v5y0tz01llxzf4c2afml8s3awue0ymju22wxx2 \ +--from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh +``` + +## Register an alternative payee address for reverse and timeout relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v7.8.x/middleware/ics29-fee/overview#concepts), the reverse relayer describes the actor who performs the submission of `MsgAcknowledgement` on the source chain.
+Similarly, the timeout relayer describes the actor who performs the submission of `MsgTimeout` (or `MsgTimeoutOnClose`) on the source chain. + +> Relayer operators **may choose** to register an optional payee address, in order to be compensated accordingly with `AckFee`s and `TimeoutFee`s upon completion of a packet lifecycle. + +If a payee is not registered for the reverse or timeout relayer on the source chain, then fee distribution assumes the default behaviour, where fees are paid out to the relayer account which delivers `MsgAcknowledgement` or `MsgTimeout`/`MsgTimeoutOnClose`. + +### Relayer operator actions + +A transaction must be submitted **to the source chain** including a `Payee` address of an account on the source chain. +The transaction must be signed by the `Relayer`. + +Note: If a module account address is used as the `Payee` it is recommended to [turn off invariant checks](https://github.com/cosmos/ibc-go/blob/71d7480c923f4227453e8a80f51be01ae7ee845e/testing/simapp/app.go#L659) for that module. + +```go +type MsgRegisterPayee struct { + / unique port identifier + PortId string + / unique channel identifier + ChannelId string + / the relayer address + Relayer string + / the payee address + Payee string +} +``` + +> This message is expected to fail if: +> +> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +> - `Relayer` is an invalid address (see [Cosmos SDK Addresses](https://github.com/cosmos/cosmos-sdk/blob/main/docs/learn/beginner/03-accounts.md#addresses)). +> - `Payee` is an invalid address (see [Cosmos SDK Addresses](https://github.com/cosmos/cosmos-sdk/blob/main/docs/learn/beginner/03-accounts.md#addresses)).
+ +See below for an example CLI command: + +```bash +simd tx ibc-fee register-payee transfer channel-0 \ +cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \ +cosmos153lf4zntqt33a4v0sm5cytrxyqn78q7kz8j8x5 \ +--from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh +``` diff --git a/docs/ibc/v7.8.x/middleware/ics29-fee/integration.mdx b/docs/ibc/v7.8.x/middleware/ics29-fee/integration.mdx new file mode 100644 index 00000000..02c7e698 --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/ics29-fee/integration.mdx @@ -0,0 +1,174 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to configure the Fee Middleware module with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies for Cosmos SDK chains. + +## Pre-requisite Readings + +- [IBC middleware development](/docs/ibc/v7.8.x/ibc/middleware/develop) +- [IBC middleware integration](/docs/ibc/v7.8.x/ibc/middleware/integration) + +The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly. +For Cosmos SDK chains this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application. + +## Example integration of the Fee Middleware module + +```go expandable +/ app.go + +/ Register the AppModule for the fee middleware module +ModuleBasics = module.NewBasicManager( + ... + ibcfee.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the fee middleware module +maccPerms = map[string][]string{ + ... + ibcfeetypes.ModuleName: nil, +} + +... + +/ Add fee middleware Keeper +type App struct { + ... + + IBCFeeKeeper ibcfeekeeper.Keeper + + ... +} + +... + +/ Create store keys + keys := sdk.NewKVStoreKeys( + ... + ibcfeetypes.StoreKey, + ... +) + +... 
+ +app.IBCFeeKeeper = ibcfeekeeper.NewKeeper( + appCodec, keys[ibcfeetypes.StoreKey], + app.IBCKeeper.ChannelKeeper, / may be replaced with IBC middleware + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, app.AccountKeeper, app.BankKeeper, +) + +/ See the section below for configuring an application stack with the fee middleware module + +... + +/ Register fee middleware AppModule +app.moduleManager = module.NewManager( + ... + ibcfee.NewAppModule(app.IBCFeeKeeper), +) + +... + +/ Add fee middleware to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +/ Add fee middleware to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +/ Add fee middleware to init genesis logic +app.moduleManager.SetOrderInitGenesis( + ... + ibcfeetypes.ModuleName, + ... +) +``` + +## Configuring an application stack with Fee Middleware + +As mentioned in [IBC middleware development](/docs/ibc/v7.8.x/ibc/middleware/develop) an application stack may be composed of many or no middlewares that nest a base application. +These layers form the complete set of application logic that enable developers to build composable and flexible IBC application stacks. +For example, an application stack may be just a single base application like `transfer`, however, the same application stack composed with `29-fee` will nest the `transfer` base application +by wrapping it with the Fee Middleware module. + +### Transfer + +See below for an example of how to create an application stack using `transfer` and `29-fee`. +The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`. +The in-line comments describe the execution flow of packets between the application stack and IBC core. 
+ +```go expandable +/ Create Transfer Stack +/ SendPacket, since it is originating from the application to core IBC: +/ transferKeeper.SendPacket -> fee.SendPacket -> channel.SendPacket + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way +/ channel.RecvPacket -> fee.OnRecvPacket -> transfer.OnRecvPacket + +/ transfer stack contains (from top to bottom): +/ - IBC Fee Middleware +/ - Transfer + +/ create IBC module from bottom to top of stack +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### Interchain Accounts + +See below for an example of how to create an application stack using `27-interchain-accounts` and `29-fee`. +The following `icaControllerStack` and `icaHostStack` are configured in `app/app.go` and added to the IBC `Router` with the associated authentication module. +The in-line comments describe the execution flow of packets between the application stack and IBC core. 
+ +```go expandable +/ Create Interchain Accounts Stack +/ SendPacket, since it is originating from the application to core IBC: +/ icaAuthModuleKeeper.SendTx -> icaController.SendPacket -> fee.SendPacket -> channel.SendPacket + +/ initialize ICA module with mock module as the authentication module on the controller side +var icaControllerStack porttypes.IBCModule +icaControllerStack = ibcmock.NewIBCModule(&mockModule, ibcmock.NewMockIBCApp("", scopedICAMockKeeper)) + +app.ICAAuthModule = icaControllerStack.(ibcmock.IBCModule) + +icaControllerStack = icacontroller.NewIBCMiddleware(icaControllerStack, app.ICAControllerKeeper) + +icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper) + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is: +/ channel.RecvPacket -> fee.OnRecvPacket -> icaHost.OnRecvPacket + +var icaHostStack porttypes.IBCModule +icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper) + +icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper) + +/ Add authentication module, controller and host to IBC router +ibcRouter. + / the ICA Controller middleware needs to be explicitly added to the IBC Router because the + / ICA controller module owns the port capability for ICA. The ICA authentication module + / owns the channel capability. + AddRoute(ibcmock.ModuleName+icacontrollertypes.SubModuleName, icaControllerStack). / ica with mock auth module stack route to ica (top level of middleware stack) + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostStack)
+``` diff --git a/docs/ibc/v7.8.x/middleware/ics29-fee/msgs.mdx b/docs/ibc/v7.8.x/middleware/ics29-fee/msgs.mdx new file mode 100644 index 00000000..b9a8763a --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/ics29-fee/msgs.mdx @@ -0,0 +1,90 @@ +--- +title: Fee Messages +--- + +## Synopsis + +Learn about the different ways to pay for fees, how the fees are paid out and what happens when not enough escrowed fees are available for payout + +## Escrowing fees + +The fee middleware module exposes two different ways to pay fees for relaying IBC packets: + +1. `MsgPayPacketFee`, which enables the escrowing of fees for a packet at the next sequence send and should be combined into one `MultiMsgTx` with the message that will be paid for. + + Note that the `Relayers` field has been set up to allow for an optional whitelist of relayers permitted to receive this fee, however, this feature has not yet been enabled at this time. + + ```go expandable + type MsgPayPacketFee struct{ + / fee encapsulates the recv, ack and timeout fees associated with an IBC packet + Fee Fee + / the source port unique identifier + SourcePortId string + / the source channel unique identifier + SourceChannelId string + / account address to refund fee if necessary + Signer string + / optional list of relayers permitted to the receive packet fee + Relayers []string + } + ``` + + The `Fee` message contained in this synchronous fee payment method configures different fees which will be paid out for `MsgRecvPacket`, `MsgAcknowledgement`, and `MsgTimeout`/`MsgTimeoutOnClose`. + + ```go + type Fee struct { + RecvFee types.Coins + AckFee types.Coins + TimeoutFee types.Coins + } + ``` + +The diagram below shows the `MultiMsgTx` with the `MsgTransfer` coming from a token transfer message, along with `MsgPayPacketFee`. + +![msgpaypacket.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/msgpaypacket.png) + +2. 
`MsgPayPacketFeeAsync`, which enables the asynchronous escrowing of fees for a specified packet: + + Note that a packet can be 'topped up' multiple times with additional fees of any coin denomination by broadcasting multiple `MsgPayPacketFeeAsync` messages. + + ```go + type MsgPayPacketFeeAsync struct { + / unique packet identifier comprised of the channel ID, port ID and sequence + PacketId channeltypes.PacketId + / the packet fee associated with a particular IBC packet + PacketFee PacketFee + } + ``` + + where the `PacketFee` also specifies the `Fee` to be paid as well as the refund address for fees which are not paid out. + + ```go + type PacketFee struct { + Fee Fee + RefundAddress string + Relayers []string + } + ``` + +The diagram below shows how multiple `MsgPayPacketFeeAsync` can be broadcasted asynchronously. Escrowing of the fee associated with a packet can be carried out by any party because ICS-29 does not dictate a particular fee payer. In fact, chains can choose to simply not expose this fee payment to end users at all and rely on a different module account or even the community pool as the source of relayer incentives. + +![paypacketfeeasync.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/paypacketfeeasync.png) + +Please see our [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers) for example flows on how to use these messages to incentivise a token transfer channel using a CLI. + +## Paying out the escrowed fees + +The following diagram takes a look at the packet flow for an incentivized token transfer and investigates the several scenarios for paying out the escrowed fees. We assume that the relayers have registered their counterparty address, detailed in the [Fee distribution section](/docs/ibc/v7.8.x/middleware/ics29-fee/fee-distribution).
+ +![feeflow.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/feeflow.png) + +- In the case of a successful transaction, `RecvFee` will be paid out to the designated counterparty payee address which has been registered on the receiver chain and sent back with the `MsgAcknowledgement`, `AckFee` will be paid out to the relayer address which has submitted the `MsgAcknowledgement` on the sending chain (or the registered payee in case one has been registered for the relayer address), and `TimeoutFee` will be reimbursed to the account which escrowed the fee. +- In case of a timeout transaction, `RecvFee` and `AckFee` will be reimbursed. The `TimeoutFee` will be paid to the `Timeout Relayer` (who submits the timeout message to the source chain). + +> Please note that fee payments are built on the assumption that sender chains are the source of incentives — the chain that sends the packets is the same chain where fee payments will occur -- please see the [Fee distribution section](/docs/ibc/v7.8.x/middleware/ics29-fee/fee-distribution) to understand the flow for registering payee and counterparty payee (fee receiving) addresses. + +## A locked fee middleware module + +The fee middleware module can become locked if the situation arises that the escrow account for the fees does not have sufficient funds to pay out the fees which have been escrowed for each packet. _This situation indicates a severe bug._ In this case, the fee module will be locked until manual intervention fixes the issue. + +> A locked fee module will simply skip fee logic and continue on to the underlying packet flow. A channel with a locked fee module will temporarily function as a fee disabled channel, and the locking of a fee module will not affect the continued flow of packets over the channel. 
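The payout scenarios described above can be condensed into a small sketch. The types and the `distribute` function below are hypothetical illustrations; the real logic lives in the fee middleware keeper and operates on `sdk.Coins` per denomination:

```go
package main

import "fmt"

// Fee mirrors the three escrowed fee amounts (denominations omitted for brevity).
type Fee struct {
	RecvFee, AckFee, TimeoutFee int64
}

// Payout maps addresses to the amounts they receive once the packet lifecycle completes.
type Payout map[string]int64

// distribute applies the two scenarios: on acknowledgement, RecvFee goes to
// the forward relayer's registered counterparty payee, AckFee to the reverse
// relayer (or its registered payee), and TimeoutFee is refunded. On timeout,
// RecvFee and AckFee are refunded and TimeoutFee goes to the timeout relayer.
func distribute(fee Fee, timedOut bool, counterpartyPayee, reversePayee, timeoutRelayer, refundAddr string) Payout {
	p := Payout{}
	if timedOut {
		p[timeoutRelayer] += fee.TimeoutFee
		p[refundAddr] += fee.RecvFee + fee.AckFee
	} else {
		p[counterpartyPayee] += fee.RecvFee
		p[reversePayee] += fee.AckFee
		p[refundAddr] += fee.TimeoutFee
	}
	return p
}

func main() {
	fee := Fee{RecvFee: 100, AckFee: 50, TimeoutFee: 70}
	fmt.Println(distribute(fee, false, "payeeB", "payeeA", "relayerT", "refunder")) // acknowledgement case
	fmt.Println(distribute(fee, true, "payeeB", "payeeA", "relayerT", "refunder"))  // timeout case
}
```

Note that only one of the three fees is ever paid to a relayer per completed lifecycle on each side; the remainder is refunded to the escrowing account.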
diff --git a/docs/ibc/v7.8.x/middleware/ics29-fee/overview.mdx b/docs/ibc/v7.8.x/middleware/ics29-fee/overview.mdx new file mode 100644 index 00000000..93eafbd6 --- /dev/null +++ b/docs/ibc/v7.8.x/middleware/ics29-fee/overview.mdx @@ -0,0 +1,49 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the Fee Middleware module is, and how to build custom modules that utilize the Fee Middleware functionality + +## What is the Fee Middleware module? + +IBC does not depend on relayer operators for transaction verification. However, the relayer infrastructure ensures liveness of the Interchain network — operators listen for packets sent through channels opened between chains, and perform the vital service of ferrying these packets (and proof of the transaction on the sending chain/receipt on the receiving chain) to the clients on each side of the channel. + +Though relaying is permissionless and completely decentralized and accessible, it does come with operational costs. Running full nodes to query transaction proofs and paying for transaction fees associated with IBC packets are two of the primary cost burdens which have driven the overall discussion on **a general, in-protocol incentivization mechanism for relayers**. + +Initially, a [simple proposal](https://github.com/cosmos/ibc/pull/577/files) was created to incentivize relaying on ICS20 token transfers on the destination chain. However, the proposal was specific to ICS20 token transfers and would have to be reimplemented in this format on every other IBC application module. + +After much discussion, the proposal was expanded to a [general incentivisation design](https://github.com/cosmos/ibc/tree/master/spec/app/ics-029-fee-payment) that can be adopted by any ICS application protocol as [middleware](/docs/ibc/v7.8.x/ibc/middleware/develop). 
+ +## Concepts + +ICS29 fee payments in this middleware design are built on the assumption that sender chains are the source of incentives — the chain on which packets are incentivized is the chain that distributes fees to relayer operators. However, as part of the IBC packet flow, messages have to be submitted on both sender and destination chains. This introduces the requirement of a mapping of relayer operator's addresses on both chains. + +To achieve the stated requirements, the **fee middleware module has two main groups of functionality**: + +- Registering of relayer addresses associated with each party involved in relaying the packet on the source chain. This registration process can be automated on start up of relayer infrastructure and happens only once, not every packet flow. + + This is described in the [Fee distribution section](/docs/ibc/v7.8.x/middleware/ics29-fee/fee-distribution). + +- Escrowing fees by any party which will be paid out to each rightful party on completion of the packet lifecycle. + + This is described in the [Fee messages section](/docs/ibc/v7.8.x/middleware/ics29-fee/msgs). + +We complete the introduction by giving a list of definitions of relevant terminology. + +`Forward relayer`: The relayer that submits the `MsgRecvPacket` message for a given packet (on the destination chain). + +`Reverse relayer`: The relayer that submits the `MsgAcknowledgement` message for a given packet (on the source chain). + +`Timeout relayer`: The relayer that submits the `MsgTimeout` or `MsgTimeoutOnClose` messages for a given packet (on the source chain). + +`Payee`: The account address on the source chain to be paid on completion of the packet lifecycle. The packet lifecycle on the source chain completes with the receipt of a `MsgTimeout`/`MsgTimeoutOnClose` or a `MsgAcknowledgement`. + +`Counterparty payee`: The account address to be paid on completion of the packet lifecycle on the destination chain. 
The packet lifecycle on the destination chain completes with a successful `MsgRecvPacket`. + +`Refund address`: The address of the account paying for the incentivization of packet relaying. The account is refunded timeout fees upon successful acknowledgement. In the event of a packet timeout, both acknowledgement and receive fees are refunded. + +## Known Limitations + +The first version of fee payments middleware will only support incentivisation of new channels, however, channel upgradeability will enable incentivisation of all existing channels. diff --git a/docs/ibc/v7.8.x/migrations/sdk-to-v1.mdx b/docs/ibc/v7.8.x/migrations/sdk-to-v1.mdx new file mode 100644 index 00000000..eafee368 --- /dev/null +++ b/docs/ibc/v7.8.x/migrations/sdk-to-v1.mdx @@ -0,0 +1,195 @@ +--- +title: SDK v0.43 to IBC-Go v1 +description: >- + This file contains information on how to migrate from the IBC module contained + in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository + based on the 0.44 SDK version. +--- + +This file contains information on how to migrate from the IBC module contained in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository based on the 0.44 SDK version. + +## Import Changes + +The most obvious changes are the import name changes. We need to change: + +* applications -> apps +* cosmos-sdk/x/ibc -> ibc-go + +On my GNU/Linux based machine I used the following commands, executed in order: + +```bash +grep -RiIl 'cosmos-sdk\/x\/ibc\/applications' | xargs sed -i 's/cosmos-sdk\/x\/ibc\/applications/ibc-go\/modules\/apps/g' +``` + +```bash +grep -RiIl 'cosmos-sdk\/x\/ibc' | xargs sed -i 's/cosmos-sdk\/x\/ibc/ibc-go\/modules/g' +``` + +ref: [explanation of the above commands](https://www.internalpointers.com/post/linux-find-and-replace-text-multiple-files) + +Executing these commands out of order will cause issues. + +Feel free to use your own method for modifying import names.
+ +NOTE: Updating to the `v0.44.0` SDK release and then running `go mod tidy` will cause a downgrade to `v0.42.0` in order to support the old IBC import paths. +Update the import paths before running `go mod tidy`. + +## Chain Upgrades + +Chains may choose to upgrade via an upgrade proposal or genesis upgrades. Both in-place store migrations and genesis migrations are supported. + +**WARNING**: Please read at least the quick guide for [IBC client upgrades](/docs/ibc/v7.8.x/ibc/upgrades/intro) before upgrading your chain. It is highly recommended you do not change the chain-ID during an upgrade, otherwise you must follow the IBC client upgrade instructions. + +Both in-place store migrations and genesis migrations will: + +* migrate the solo machine client state from v1 to v2 protobuf definitions +* prune all solo machine consensus states +* prune all expired tendermint consensus states + +Chains must set a new connection parameter during either in place store migrations or genesis migration. The new parameter, max expected block time, is used to enforce packet processing delays on the receiving end of an IBC packet flow. Checkout the [docs](https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2) for more information. + +### In-Place Store Migrations + +The new chain binary will need to run migrations in the upgrade handler. The fromVM (previous module version) for the IBC module should be 1. This will allow migrations to be run for IBC updating the version from 1 to 2. + +Ex: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("my-upgrade-proposal", + func(ctx sdk.Context, _ upgradetypes.Plan, _ module.VersionMap) (module.VersionMap, error) { + / set max expected block time parameter. 
Replace the default with your expected value + / https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 + app.IBCKeeper.ConnectionKeeper.SetParams(ctx, ibcconnectiontypes.DefaultParams()) + fromVM := map[string]uint64{ + ... / other modules + "ibc": 1, + ... +} + +return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +### Genesis Migrations + +To perform genesis migrations, the following code must be added to your existing migration code. + +```go expandable +/ add imports as necessary +import ( + + ibcv100 "github.com/cosmos/ibc-go/modules/core/legacy/v100" + ibchost "github.com/cosmos/ibc-go/modules/core/24-host" +) + +... + +/ add in migrate cmd function +/ expectedTimePerBlock is a new connection parameter +/ https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 +newGenState, err = ibcv100.MigrateGenesis(newGenState, clientCtx, *genDoc, expectedTimePerBlock) + if err != nil { + return err +} +``` + +**NOTE:** The genesis chain-id, time and height MUST be updated before migrating IBC, otherwise the tendermint consensus state will not be pruned. + +## IBC Keeper Changes + +The IBC Keeper now takes in the Upgrade Keeper. Please add the chains' Upgrade Keeper after the Staking Keeper: + +```diff + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( +- appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, scopedIBCKeeper, ++ appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, + ) + +``` + +## Proposals + +### UpdateClientProposal + +The `UpdateClient` has been modified to take in two client-identifiers and one initial height. Please see the [documentation](/docs/ibc/v7.8.x/ibc/proposals) for more information. + +### UpgradeProposal + +A new IBC proposal type has been added, `UpgradeProposal`. This handles an IBC (breaking) Upgrade. 
+The previous `UpgradedClientState` field in an Upgrade `Plan` has been deprecated in favor of this new proposal type. + +### Proposal Handler Registration + +The `ClientUpdateProposalHandler` has been renamed to `ClientProposalHandler`. +It handles both `UpdateClientProposal`s and `UpgradeProposal`s. + +Add this import: + +```diff ++ ibcclienttypes "github.com/cosmos/ibc-go/modules/core/02-client/types" +``` + +Please ensure the governance module adds the correct route: + +```diff +- AddRoute(ibchost.RouterKey, ibcclient.NewClientUpdateProposalHandler(app.IBCKeeper.ClientKeeper)) ++ AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper)) +``` + +NOTE: Simapp registration was incorrect in the 0.41.x releases. The `UpdateClient` proposal handler should be registered with the router key belonging to `ibc-go/core/02-client/types` +as shown in the diffs above. + +### Proposal CLI Registration + +Please ensure both proposal type CLI commands are registered on the governance module by adding the following arguments to `gov.NewAppModuleBasic()`: + +Add the following import: + +```diff ++ ibcclientclient "github.com/cosmos/ibc-go/modules/core/02-client/client" +``` + +Register the cli commands: + +```diff + gov.NewAppModuleBasic( + paramsclient.ProposalHandler, distrclient.ProposalHandler, upgradeclient.ProposalHandler, upgradeclient.CancelProposalHandler, ++ ibcclientclient.UpdateClientProposalHandler, ibcclientclient.UpgradeProposalHandler, + ), +``` + +REST routes are not supported for these proposals. + +## Proto file changes + +The gRPC querier service endpoints have changed slightly. The previous files used `v1beta1` gRPC route, this has been updated to `v1`. + +The solo machine has replaced the FrozenSequence uint64 field with a IsFrozen boolean field. The package has been bumped from `v1` to `v2` + +## IBC callback changes + +### OnRecvPacket + +Application developers need to update their `OnRecvPacket` callback logic. 
+ +The `OnRecvPacket` callback has been modified to only return the acknowledgement. The acknowledgement returned must implement the `Acknowledgement` interface. The acknowledgement should indicate if it represents a successful processing of a packet by returning true on `Success()` and false in all other cases. A return value of false on `Success()` will result in all state changes which occurred in the callback being discarded. More information can be found in the [documentation](/docs/ibc/v7.8.x/ibc/apps/ibcmodule#receiving-packets). + +The `OnRecvPacket`, `OnAcknowledgementPacket`, and `OnTimeoutPacket` callbacks are now passed the `sdk.AccAddress` of the relayer who relayed the IBC packet. Applications may use or ignore this information. + +## IBC Event changes + +The `packet_data` attribute has been deprecated in favor of `packet_data_hex`, in order to provide standardized encoding/decoding of packet data in events. While the `packet_data` event still exists, all relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_data_hex` as soon as possible. + +The `packet_ack` attribute has also been deprecated in favor of `packet_ack_hex` for the same reason stated above. All relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_ack_hex` as soon as possible. + +The `consensus_height` attribute has been removed in the Misbehaviour event emitted. IBC clients no longer have a frozen height and misbehaviour does not necessarily have an associated height. 
+
+## Relevant SDK changes
+
+* (codec) [#9226](https://github.com/cosmos/cosmos-sdk/pull/9226) Rename codec interfaces and methods to follow general Go interface naming conventions:
+  * `codec.Marshaler` → `codec.Codec` (this defines objects which serialize other objects)
+  * `codec.BinaryMarshaler` → `codec.BinaryCodec`
+  * `codec.JSONMarshaler` → `codec.JSONCodec`
+  * Removed `BinaryBare` suffix from `BinaryCodec` methods (`MarshalBinaryBare`, `UnmarshalBinaryBare`, ...)
+  * Removed `Binary` infix from `BinaryCodec` methods (`MarshalBinaryLengthPrefixed`, `UnmarshalBinaryLengthPrefixed`, ...)

diff --git a/docs/ibc/v7.8.x/migrations/support-denoms-with-slashes.mdx b/docs/ibc/v7.8.x/migrations/support-denoms-with-slashes.mdx
new file mode 100644
index 00000000..6a698566
--- /dev/null
+++ b/docs/ibc/v7.8.x/migrations/support-denoms-with-slashes.mdx
@@ -0,0 +1,90 @@
+---
+title: Support transfer of coins whose base denom contains slashes
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+This document is necessary when chains are upgrading from a version that does not support base denoms with slashes (e.g. v3.0.0) to a version that does (e.g. v3.2.0). All versions of ibc-go lower than v1.5.0 for the v1.x release line, v2.3.0 for the v2.x release line, and v3.1.0 for the v3.x release line do **NOT** support IBC token transfers of coins whose base denoms contain slashes.
Therefore the in-place or genesis migrations described in this document are required when upgrading.
+
+If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may succeed; however, the trace information will be incorrect.
+
+E.g., if a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`.
+
+This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes.
+
+To do so, chain binaries should include a migration script that will run when the chain upgrades from not supporting base denominations with slashes to supporting base denominations with slashes.
+
+## Chains
+
+### ICS20 - Transfer
+
+The transfer module will now support slashes in base denoms, so we must iterate over current traces to check if any of them are incorrectly formed and correct the trace information.
+
+### Upgrade Proposal
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces",
+  func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+    / transfer module consensus version has been bumped to 2
+    return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+  })
+```
+
+This is only necessary if there are denom traces in the store with incorrect trace information from previously received coins that had a slash in the base denom. However, it is recommended that any chain upgrading to support base denominations with slashes runs this code for safety.
+
+For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1680).
+
+### Genesis Migration
+
+If the chain chooses to add support for slashes in base denoms via genesis export, then the trace information must be corrected during genesis migration.
+
+The migration code required may look like:
+
+```go expandable
+func migrateGenesisSlashedDenomsUpgrade(appState genutiltypes.AppMap, clientCtx client.Context, genDoc *tmtypes.GenesisDoc) (genutiltypes.AppMap, error) {
+  if appState[ibctransfertypes.ModuleName] != nil {
+    transferGenState := &ibctransfertypes.GenesisState{}
+    clientCtx.Codec.MustUnmarshalJSON(appState[ibctransfertypes.ModuleName], transferGenState)
+
+    substituteTraces := make([]ibctransfertypes.DenomTrace, len(transferGenState.DenomTraces))
+    for i, dt := range transferGenState.DenomTraces {
+      / replace all previous traces with the latest trace if validation passes
+      / note most traces will have same value
+      newTrace := ibctransfertypes.ParseDenomTrace(dt.GetFullDenomPath())
+      if err := newTrace.Validate(); err != nil {
+        substituteTraces[i] = dt
+      } else {
+        substituteTraces[i] = newTrace
+      }
+    }
+
+    transferGenState.DenomTraces = substituteTraces
+
+    / delete old genesis state
+    delete(appState, ibctransfertypes.ModuleName)
+
+    / set new ibc transfer genesis state
+    appState[ibctransfertypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(transferGenState)
+  }
+
+  return appState, nil
+}
+```
+
+For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1528).

diff --git a/docs/ibc/v7.8.x/migrations/v1-to-v2.mdx b/docs/ibc/v7.8.x/migrations/v1-to-v2.mdx
new file mode 100644
index 00000000..035b8c94
--- /dev/null
+++ b/docs/ibc/v7.8.x/migrations/v1-to-v2.mdx
@@ -0,0 +1,60 @@
+---
+title: IBC-Go v1 to v2
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+```go
+github.com/cosmos/ibc-go -> github.com/cosmos/ibc-go/v2
+```
+
+## Chains
+
+* No relevant changes were made in this release.
+
+## IBC Apps
+
+A new function has been added to the app module interface:
+
+```go expandable
+/ NegotiateAppVersion performs application version negotiation given the provided channel ordering, connectionID, portID, counterparty and proposed version.
+/ An error is returned if version negotiation cannot be performed. For example, an application module implementing this interface
+/ may decide to return an error in the event of the proposed version being incompatible with its own
+NegotiateAppVersion(
+  ctx sdk.Context,
+  order channeltypes.Order,
+  connectionID string,
+  portID string,
+  counterparty channeltypes.Counterparty,
+  proposedVersion string,
+) (version string, err error)
+```
+
+This function should perform application version negotiation and return the negotiated version. If the version cannot be negotiated, an error should be returned. This function is only used on the client side.
+
+### sdk.Result removed
+
+`sdk.Result` has been removed as a return value in the application callbacks. Previously it was being discarded by core IBC and was thus unused.
+
+## Relayers
+
+A new gRPC endpoint, `AppVersion`, has been added to 05-port. It returns the negotiated app version. This endpoint should be used during the `ChanOpenTry` channel handshake step to decide upon the application version which should be set in the channel.
+ +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v7.8.x/migrations/v2-to-v3.mdx b/docs/ibc/v7.8.x/migrations/v2-to-v3.mdx new file mode 100644 index 00000000..d9cf670b --- /dev/null +++ b/docs/ibc/v7.8.x/migrations/v2-to-v3.mdx @@ -0,0 +1,187 @@ +--- +title: IBC-Go v2 to v3 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v2 -> github.com/cosmos/ibc-go/v3 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS20 + +The `transferkeeper.NewKeeper(...)` now takes in an ICS4Wrapper. +The ICS4Wrapper should be the IBC Channel Keeper unless ICS 20 is being connected to a middleware application. + +### ICS27 + +ICS27 Interchain Accounts has been added as a supported IBC application of ibc-go. +Please see the [ICS27 documentation](/docs/ibc/v7.8.x/apps/interchain-accounts/overview) for more information. 
+ +### Upgrade Proposal + +If the chain will adopt ICS27, it must set the appropriate params during the execution of the upgrade handler in `app.go`: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("v3", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / set the ICS27 consensus version so InitGenesis is not run + fromVM[icatypes.ModuleName] = icamodule.ConsensusVersion() + + / create ICS27 Controller submodule params + controllerParams := icacontrollertypes.Params{ + ControllerEnabled: true, +} + + / create ICS27 Host submodule params + hostParams := icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... +}, +} + + / initialize ICS27 module + icamodule.InitModule(ctx, controllerParams, hostParams) + + ... + + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +The host and controller submodule params only need to be set if the chain integrates those submodules. +For example, if a chain chooses not to integrate a controller submodule, it may pass empty params into `InitModule`. + +#### Add `StoreUpgrades` for ICS27 module + +For ICS27 it is also necessary to [manually add store upgrades](https://docs.cosmos.network/main/learn/advanced/upgrade#add-storeupgrades-for-new-modules) for the new ICS27 module and then configure the store loader to apply those upgrades in `app.go`: + +```go +if upgradeInfo.Name == "v3" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := store.StoreUpgrades{ + Added: []string{ + icacontrollertypes.StoreKey, icahosttypes.StoreKey +}, +} + +app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +``` + +This ensures that the new module's stores are added to the multistore before the migrations begin. +The host and controller submodule keys only need to be added if the chain integrates those submodules. 
+For example, if a chain chooses not to integrate a controller submodule, it does not need to add the controller key to the `Added` field. + +### Genesis migrations + +If the chain will adopt ICS27 and chooses to upgrade via a genesis export, then the ICS27 parameters must be set during genesis migration. + +The migration code required may look like: + +```go expandable +controllerGenesisState := icatypes.DefaultControllerGenesis() + / overwrite parameters as desired + controllerGenesisState.Params = icacontrollertypes.Params{ + ControllerEnabled: true, +} + hostGenesisState := icatypes.DefaultHostGenesis() + / overwrite parameters as desired + hostGenesisState.Params = icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... +}, +} + icaGenesisState := icatypes.NewGenesisState(controllerGenesisState, hostGenesisState) + + / set new ics27 genesis state + appState[icatypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(icaGenesisState) +``` + +### Ante decorator + +The field of type `channelkeeper.Keeper` in the `AnteDecorator` structure has been replaced with a field of type `*keeper.Keeper`: + +```diff +type AnteDecorator struct { +- k channelkeeper.Keeper ++ k *keeper.Keeper +} + +- func NewAnteDecorator(k channelkeeper.Keeper) AnteDecorator { ++ func NewAnteDecorator(k *keeper.Keeper) AnteDecorator { + return AnteDecorator{k: k} +} +``` + +## IBC Apps + +### `OnChanOpenTry` must return negotiated application version + +The `OnChanOpenTry` application callback has been modified. +The return signature now includes the application version. +IBC applications must perform application version negotiation in `OnChanOpenTry` using the counterparty version. +The negotiated application version then must be returned in `OnChanOpenTry` to core IBC. +Core IBC will set this version in the TRYOPEN channel. 
+
+### `OnChanOpenAck` will take additional `counterpartyChannelID` argument
+
+The `OnChanOpenAck` application callback has been modified.
+The arguments now include the counterparty channel id.
+
+### `NegotiateAppVersion` removed from `IBCModule` interface
+
+Previously this logic was handled by the `NegotiateAppVersion` function.
+Relayers would query this function before calling `ChanOpenTry`.
+Applications would then need to verify that the passed in version was correct.
+Now applications will perform this version negotiation during the channel handshake, thus removing the need for `NegotiateAppVersion`.
+
+### Channel state will not be set before application callback
+
+The channel handshake logic has been reorganized within core IBC.
+Channel state will not be set in state until after the application callback is performed.
+Applications must rely only on the passed in channel parameters instead of querying the channel keeper for channel state.
+
+### IBC application callbacks moved from `AppModule` to `IBCModule`
+
+Previously, IBC module callbacks were a part of the `AppModule` type.
+The recommended approach is to create an `IBCModule` type and move the IBC module callbacks from `AppModule` to `IBCModule` in a separate file `ibc_module.go`.
+
+The mock module Go API has been broken in this release by applying the above format.
+The IBC module callbacks have been moved from the mock module's `AppModule` into a new type, `IBCModule`.
+
+As part of this release, the mock module now supports middleware testing. Please see the [README](https://github.com/cosmos/ibc-go/blob/v7.0.0/testing/README.md#middleware-testing) for more information.
+
+Please review the [mock](https://github.com/cosmos/ibc-go/blob/v7.0.0/testing/mock/ibc_module.go) and [transfer](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/transfer/ibc_module.go) modules as examples.
Additionally, [simapp](https://github.com/cosmos/ibc-go/blob/v7.0.0/testing/simapp/app.go) provides an example of how `IBCModule` types should now be added to the IBC router in favour of `AppModule`.
+
+### IBC testing package
+
+`TestChain`s are now created with chain IDs beginning from an index of 1. Any calls to `GetChainID(0)` will now fail. Please increment all calls to `GetChainID` by 1.
+
+## Relayers
+
+`AppVersion` gRPC has been removed.
+The `version` string in `MsgChanOpenTry` has been deprecated and will be ignored by core IBC.
+Relayers no longer need to determine the version to use on the `ChanOpenTry` step.
+IBC applications will determine the correct version using the counterparty version.
+
+## IBC Light Clients
+
+The `GetProofSpecs` function has been removed from the `ClientState` interface. This function was previously unused by core IBC. Light clients which don't use this function may remove it.

diff --git a/docs/ibc/v7.8.x/migrations/v3-to-v4.mdx b/docs/ibc/v7.8.x/migrations/v3-to-v4.mdx
new file mode 100644
index 00000000..e775c910
--- /dev/null
+++ b/docs/ibc/v7.8.x/migrations/v3-to-v4.mdx
@@ -0,0 +1,156 @@
+---
+title: IBC-Go v3 to v4
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+```go
+github.com/cosmos/ibc-go/v3 -> github.com/cosmos/ibc-go/v4
+```
+
+No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go.
+
+## Chains
+
+### ICS27 - Interchain Accounts
+
+The controller submodule now implements the 05-port `Middleware` interface instead of the 05-port `IBCModule` interface. Chains that integrate the controller submodule need to create it with the `NewIBCMiddleware` constructor function. For example:
+
+```diff
+- icacontroller.NewIBCModule(app.ICAControllerKeeper, icaAuthIBCModule)
++ icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper)
+```
+
+where `icaAuthIBCModule` is the Interchain Accounts authentication IBC Module.
+
+### ICS29 - Fee Middleware
+
+The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly.
+
+Please read the Fee Middleware [integration documentation](/docs/ibc/v7.8.x/light-clients/wasm/integration) for an in-depth guide on how to configure the module correctly in order to incentivize IBC packets.
+
+Take a look at the following diff for an [example setup](https://github.com/cosmos/ibc-go/pull/1432/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08aL366) of how to incentivize ics27 channels.
+
+### Migration to fix support for base denoms with slashes
+
+As part of [v1.5.0](https://github.com/cosmos/ibc-go/releases/tag/v1.5.0), [v2.3.0](https://github.com/cosmos/ibc-go/releases/tag/v2.3.0) and [v3.1.0](https://github.com/cosmos/ibc-go/releases/tag/v3.1.0) a [migration handler code sample was documented](/docs/ibc/v7.8.x/migrations/support-denoms-with-slashes#upgrade-proposal) that needs to run in order to correct the trace information of coins transferred using ICS20 whose base denom contains slashes.
+
+Based on feedback from the community, we now provide an improved solution for running the same migration that does not require copying a large piece of code over from the migration document, but instead requires only adding a one-line upgrade handler.
+
+If the chain will migrate to supporting base denoms with slashes, it must set the appropriate params during the execution of the upgrade handler in `app.go`:
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces",
+  func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+    / transfer module consensus version has been bumped to 2
+    return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+  })
+```
+
+If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may succeed; however, the trace information will be incorrect.
+
+E.g., if a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`.
+
+This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes.
+
+## IBC Apps
+
+### ICS03 - Connection
+
+Crossing hellos have been removed from 03-connection handshake negotiation.
+`PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated and is no longer used by core IBC.
+
+`NewMsgConnectionOpenTry` no longer takes in the `PreviousConnectionId` as crossing hellos are no longer supported. A non-empty `PreviousConnectionId` will fail basic validation for this message.
+
+### ICS04 - Channel
+
+The `WriteAcknowledgement` API now takes the `exported.Acknowledgement` type instead of passing in the acknowledgement byte array directly.
+This is an API breaking change and as such IBC application developers will have to update any calls to `WriteAcknowledgement`.
+
+The `OnChanOpenInit` application callback has been modified.
+The return signature now includes the application version as detailed in the latest IBC [spec changes](https://github.com/cosmos/ibc/pull/629).
+
+The `NewErrorAcknowledgement` method signature has changed.
+It now accepts an `error` rather than a `string`. This was done in order to prevent accidental state changes.
+All error acknowledgements now contain a deterministic ABCI code and error message. It is the responsibility of the application developer to emit error details in events.
+
+Crossing hellos have been removed from 04-channel handshake negotiation.
+IBC applications no longer need to account for already claimed capabilities in the `OnChanOpenTry` callback. The capability provided by core IBC must be able to be claimed with error.
+`PreviousChannelId` in `MsgChannelOpenTry` has been deprecated and is no longer used by core IBC.
+
+`NewMsgChannelOpenTry` no longer takes in the `PreviousChannelId` as crossing hellos are no longer supported. A non-empty `PreviousChannelId` will fail basic validation for this message.
+
+### ICS27 - Interchain Accounts
+
+The `RegisterInterchainAccount` API has been modified to include an additional `version` argument. This change has been made in order to support ICS29 fee middleware, for relayer incentivization of ICS27 packets.
+Consumers of the `RegisterInterchainAccount` are now expected to build the appropriate JSON encoded version string themselves and pass it accordingly.
+This should be constructed within the interchain accounts authentication module which leverages the APIs exposed via the interchain accounts `controllerKeeper`.
If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(feeEnabledVersion)); err != nil { + return err +} +``` + +## Relayers + +When using the `DenomTrace` gRPC, the full IBC denomination with the `ibc/` prefix may now be passed in. + +Crossing hellos are no longer supported by core IBC for 03-connection and 04-channel. 
The handshake should be completed in the logical 4 step process (INIT, TRY, ACK, CONFIRM). diff --git a/docs/ibc/v7.8.x/migrations/v4-to-v5.mdx b/docs/ibc/v7.8.x/migrations/v4-to-v5.mdx new file mode 100644 index 00000000..3ae26b1b --- /dev/null +++ b/docs/ibc/v7.8.x/migrations/v4-to-v5.mdx @@ -0,0 +1,440 @@ +--- +title: IBC-Go v4 to v5 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* [Chains](#chains) +* [IBC Apps](#ibc-apps) +* [Relayers](#relayers) +* [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v4 -> github.com/cosmos/ibc-go/v5 +``` + +## Chains + +### Ante decorator + +The `AnteDecorator` type in `core/ante` has been renamed to `RedundantRelayDecorator` (and the corresponding constructor function to `NewRedundantRelayDecorator`). Therefore in the function that creates the instance of the `sdk.AnteHandler` type (e.g. 
`NewAnteHandler`) the change would be like this: + +```diff expandable +func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { + / parameter validation + + anteDecorators := []sdk.AnteDecorator{ + / other ante decorators +- ibcante.NewAnteDecorator(opts.IBCkeeper), ++ ibcante.NewRedundantRelayDecorator(options.IBCKeeper), + } + + return sdk.ChainAnteDecorators(anteDecorators...), nil +} +``` + +The `AnteDecorator` was actually renamed twice, but in [this PR](https://github.com/cosmos/ibc-go/pull/1820) you can see the changes made for the final rename. + +## IBC Apps + +### Core + +The `key` parameter of the `NewKeeper` function in `modules/core/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + stakingKeeper clienttypes.StakingKeeper, + upgradeKeeper clienttypes.UpgradeKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) *Keeper +``` + +The `RegisterRESTRoutes` function in `modules/core` has been removed. 
+ +### ICS03 - Connection + +The `key` parameter of the `NewKeeper` function in `modules/core/03-connection/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ck types.ClientKeeper +) Keeper +``` + +### ICS04 - Channel + +The function `NewPacketId` in `modules/core/04-channel/types` has been renamed to `NewPacketID`: + +```diff +- func NewPacketId( ++ func NewPacketID( + portID, + channelID string, + seq uint64 +) PacketId +``` + +The `key` parameter of the `NewKeeper` function in `modules/core/04-channel/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + clientKeeper types.ClientKeeper, + connectionKeeper types.ConnectionKeeper, + portKeeper types.PortKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +### ICS20 - Transfer + +The `key` parameter of the `NewKeeper` function in `modules/apps/transfer/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff expandable +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ics4Wrapper types.ICS4Wrapper, + channelKeeper types.ChannelKeeper, + portKeeper types.PortKeeper, + authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +The `amount` parameter of function `GetTransferCoin` in `modules/apps/transfer/types` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func GetTransferCoin( + portID, channelID, baseDenom string, +- amount sdk.Int ++ amount math.Int 
+
+) sdk.Coin
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/transfer` has been removed.
+
+### ICS27 - Interchain Accounts
+
+The `key` and `msgRouter` parameters of the `NewKeeper` functions in
+
+* `modules/apps/27-interchain-accounts/controller/keeper`
+* and `modules/apps/27-interchain-accounts/host/keeper`
+
+have changed type. The `key` parameter is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`), and the `msgRouter` parameter is now of type `*icatypes.MessageRouter` (where `icatypes` is an import alias for `"github.com/cosmos/ibc-go/v5/modules/apps/27-interchain-accounts/types"`):
+
+```diff expandable
+/ NewKeeper creates a new interchain accounts controller Keeper instance
+func NewKeeper(
+ cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+ paramSpace paramtypes.Subspace,
+ ics4Wrapper icatypes.ICS4Wrapper,
+ channelKeeper icatypes.ChannelKeeper,
+ portKeeper icatypes.PortKeeper,
+ scopedKeeper capabilitykeeper.ScopedKeeper,
+- msgRouter *baseapp.MsgServiceRouter,
++ msgRouter *icatypes.MessageRouter,
+) Keeper
+```
+
+```diff expandable
+/ NewKeeper creates a new interchain accounts host Keeper instance
+func NewKeeper(
+ cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+ paramSpace paramtypes.Subspace,
+ channelKeeper icatypes.ChannelKeeper,
+ portKeeper icatypes.PortKeeper,
+ accountKeeper icatypes.AccountKeeper,
+ scopedKeeper capabilitykeeper.ScopedKeeper,
+- msgRouter *baseapp.MsgServiceRouter,
++ msgRouter *icatypes.MessageRouter,
+) Keeper
+```
+
+The new `MessageRouter` interface is defined as:
+
+```go
+type MessageRouter interface {
+  Handler(msg sdk.Msg) baseapp.MsgServiceHandler
+}
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/27-interchain-accounts` has been removed.
+
+An additional parameter, `ics4Wrapper`, has been added to the `host` submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper`.
+This allows the `host` submodule to correctly unwrap the channel version for channel reopening handshakes in the `OnChanOpenTry` callback.
+
+```diff expandable
+func NewKeeper(
+ cdc codec.BinaryCodec,
+ key storetypes.StoreKey,
+ paramSpace paramtypes.Subspace,
++ ics4Wrapper icatypes.ICS4Wrapper,
+ channelKeeper icatypes.ChannelKeeper,
+ portKeeper icatypes.PortKeeper,
+ accountKeeper icatypes.AccountKeeper,
+ scopedKeeper icatypes.ScopedKeeper,
+ msgRouter icatypes.MessageRouter,
+) Keeper
+```
+
+#### Cosmos SDK message handler responses in packet acknowledgement
+
+The construction of the transaction response of a message execution on the host chain has changed. The `Data` field in `sdk.TxMsgData` has been deprecated; since Cosmos SDK 0.46, the `MsgResponses` field contains the message handler responses packed into `Any`s.
+ +For chains on Cosmos SDK 0.45 and below, the message response was constructed like this: + +```go expandable +txMsgData := &sdk.TxMsgData{ + Data: make([]*sdk.MsgData, len(msgs)), +} + for i, msg := range msgs { + / message validation + + msgResponse, err := k.executeMsg(cacheCtx, msg) + / return if err != nil + + txMsgData.Data[i] = &sdk.MsgData{ + MsgType: sdk.MsgTypeURL(msg), + Data: msgResponse, +} +} + +/ emit events + +txResponse, err := proto.Marshal(txMsgData) +/ return if err != nil + +return txResponse, nil +``` + +And for chains on Cosmos SDK 0.46 and above, it is now done like this: + +```go expandable +txMsgData := &sdk.TxMsgData{ + MsgResponses: make([]*codectypes.Any, len(msgs)), +} + for i, msg := range msgs { + / message validation + + any, err := k.executeMsg(cacheCtx, msg) + / return if err != nil + + txMsgData.MsgResponses[i] = any +} + +/ emit events + +txResponse, err := proto.Marshal(txMsgData) +/ return if err != nil + +return txResponse, nil +``` + +When handling the acknowledgement in the `OnAcknowledgementPacket` callback of a custom ICA controller module, then depending on whether `txMsgData.Data` is empty or not, the logic to handle the message handler response will be different. 
**Only controller chains on Cosmos SDK 0.46 or above will be able to write the logic needed to handle the response from a host chain on Cosmos SDK 0.46 or above.**
+
+```go expandable
+var ack channeltypes.Acknowledgement
+ if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil {
+ return err
+}
+
+var txMsgData sdk.TxMsgData
+ if err := proto.Unmarshal(ack.GetResult(), &txMsgData); err != nil {
+ return err
+}
+ switch len(txMsgData.Data) {
+ case 0: / for SDK 0.46 and above
+ for _, msgResponse := range txMsgData.MsgResponses {
+ / unmarshal msgResponse and execute logic based on the response
+}
+
+return nil
+default: / for SDK 0.45 and below
+ for _, msgData := range txMsgData.Data {
+ / unmarshal msgData and execute logic based on the response
+}
+}
+```
+
+See [ADR-03](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-003-ics27-acknowledgement.md#next-major-version-format) for more information or the [corresponding documentation about authentication modules](/docs/ibc/v7.8.x/apps/interchain-accounts/auth-modules#onacknowledgementpacket).
+
+### ICS29 - Fee Middleware
+
+The `key` parameter of the `NewKeeper` function in `modules/apps/29-fee` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`):
+
+```diff expandable
+func NewKeeper(
+ cdc codec.BinaryCodec,
+- key sdk.StoreKey,
++ key storetypes.StoreKey,
+ paramSpace paramtypes.Subspace,
+ ics4Wrapper types.ICS4Wrapper,
+ channelKeeper types.ChannelKeeper,
+ portKeeper types.PortKeeper,
+ authKeeper types.AccountKeeper,
+ bankKeeper types.BankKeeper,
+) Keeper
+```
+
+The `RegisterRESTRoutes` function in `modules/apps/29-fee` has been removed.
+
+### IBC testing package
+
+The `MockIBCApp` type has been renamed to `IBCApp` (and the corresponding constructor function to `NewIBCApp`).
As a result:
+
+* The `IBCApp` field of the `*IBCModule` in `testing/mock` has changed its type to `*IBCApp`:
+
+```diff
+type IBCModule struct {
+ appModule *AppModule
+- IBCApp *MockIBCApp / base application of an IBC middleware stack
++ IBCApp *IBCApp / base application of an IBC middleware stack
+}
+```
+
+* The `app` parameter of `NewIBCModule` in `testing/mock` has changed its type to `*IBCApp`:
+
+```diff
+func NewIBCModule(
+ appModule *AppModule,
+- app *MockIBCApp
++ app *IBCApp
+) IBCModule
+```
+
+The `MockEmptyAcknowledgement` type has been renamed to `EmptyAcknowledgement` (and the corresponding constructor function to `NewEmptyAcknowledgement`).
+
+The `TestingApp` interface in `testing` has gone through some modifications:
+
+* The return type of the function `GetStakingKeeper` is no longer the concrete type `stakingkeeper.Keeper` (where `stakingkeeper` is an import alias for `"github.com/cosmos/cosmos-sdk/x/staking/keeper"`); it has been changed to the interface `ibctestingtypes.StakingKeeper` (where `ibctestingtypes` is an import alias for `"github.com/cosmos/ibc-go/v5/testing/types"`). See this [PR](https://github.com/cosmos/ibc-go/pull/2028) for more details. The `StakingKeeper` interface is defined as:
+
+```go
+type StakingKeeper interface {
+ GetHistoricalInfo(ctx sdk.Context, height int64) (stakingtypes.HistoricalInfo, bool)
+}
+```
+
+* The return type of the function `LastCommitID` has changed to `storetypes.CommitID` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`).
+ +See the following `git diff` for more details: + +```diff expandable +type TestingApp interface { + abci.Application + + / ibc-go additions + GetBaseApp() *baseapp.BaseApp +- GetStakingKeeper() stakingkeeper.Keeper ++ GetStakingKeeper() ibctestingtypes.StakingKeeper + GetIBCKeeper() *keeper.Keeper + GetScopedIBCKeeper() capabilitykeeper.ScopedKeeper + GetTxConfig() client.TxConfig + + / Implemented by SimApp + AppCodec() codec.Codec + + / Implemented by BaseApp +- LastCommitID() sdk.CommitID ++ LastCommitID() storetypes.CommitID + LastBlockHeight() int64 +} +``` + +The `powerReduction` parameter of the function `SetupWithGenesisValSet` in `testing` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func SetupWithGenesisValSet( + t *testing.T, + valSet *tmtypes.ValidatorSet, + genAccs []authtypes.GenesisAccount, + chainID string, +- powerReduction sdk.Int, ++ powerReduction math.Int, + balances ...banktypes.Balance +) TestingApp +``` + +The `accAmt` parameter of the functions + +* `AddTestAddrsFromPubKeys` , +* `AddTestAddrs` +* and `AddTestAddrsIncremental` + +in `testing/simapp` are now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff expandable +func AddTestAddrsFromPubKeys( + app *SimApp, + ctx sdk.Context, + pubKeys []cryptotypes.PubKey, +- accAmt sdk.Int, ++ accAmt math.Int +) +func addTestAddrs( + app *SimApp, + ctx sdk.Context, + accNum int, +- accAmt sdk.Int, ++ accAmt math.Int, + strategy GenerateAccountStrategy +) []sdk.AccAddress +func AddTestAddrsIncremental( + app *SimApp, + ctx sdk.Context, + accNum int, +- accAmt sdk.Int, ++ accAmt math.Int +) []sdk.AccAddress +``` + +The `RegisterRESTRoutes` function in `testing/mock` has been removed. + +## Relayers + +* No relevant changes were made in this release. 
+ +## IBC Light Clients + +### ICS02 - Client + +The `key` parameter of the `NewKeeper` function in `modules/core/02-client/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + sk types.StakingKeeper, + uk types.UpgradeKeeper +) Keeper +``` diff --git a/docs/ibc/v7.8.x/migrations/v5-to-v6.mdx b/docs/ibc/v7.8.x/migrations/v5-to-v6.mdx new file mode 100644 index 00000000..c38c1657 --- /dev/null +++ b/docs/ibc/v7.8.x/migrations/v5-to-v6.mdx @@ -0,0 +1,301 @@ +--- +title: IBC-Go v5 to v6 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +## Chains + +The `ibc-go/v6` release introduces a new set of migrations for `27-interchain-accounts`. Ownership of ICS27 channel capabilities is transferred from ICS27 authentication modules and will now reside with the ICS27 controller submodule moving forward. + +For chains which contain a custom authentication module using the ICS27 controller submodule this requires a migration function to be included in the chain upgrade handler. 
A subsequent migration handler is run automatically, asserting the ownership of ICS27 channel capabilities has been transferred successfully.
+
+This migration is not required for chains which *do not* contain a custom authentication module using the ICS27 controller submodule.
+
+This migration facilitates the addition of the ICS27 controller submodule `MsgServer` which provides a standardised approach to integrating existing forms of authentication such as `x/gov` and `x/group` provided by the Cosmos SDK.
+
+For more information please refer to [ADR 009](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-009-v6-ics27-msgserver.md).
+
+### Upgrade proposal
+
+Please refer to [PR #2383](https://github.com/cosmos/ibc-go/pull/2383) for integrating the ICS27 channel capability migration logic or follow the steps outlined below:
+
+1. Add the upgrade migration logic to chain distribution. This may be, for example, maintained under a package `app/upgrades/v6`.
+
+```go expandable
+package v6
+
+import (
+
+ "github.com/cosmos/cosmos-sdk/codec"
+ storetypes "github.com/cosmos/cosmos-sdk/store/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/module"
+ capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper"
+ upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"
+
+ v6 "github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/migrations/v6"
+)
+
+const (
+ UpgradeName = "v6"
+)
+
+func CreateUpgradeHandler(
+ mm *module.Manager,
+ configurator module.Configurator,
+ cdc codec.BinaryCodec,
+ capabilityStoreKey *storetypes.KVStoreKey,
+ capabilityKeeper *capabilitykeeper.Keeper,
+ moduleName string,
+) upgradetypes.UpgradeHandler {
+ return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+ if err := v6.MigrateICS27ChannelCapability(ctx, cdc, capabilityStoreKey, capabilityKeeper, moduleName); err != nil {
+ return nil, err
+ }
+
+ return mm.RunMigrations(ctx, configurator, vm)
+ }
+}
+```
+
+2. Set the upgrade handler in `app.go`. The `moduleName` parameter refers to the authentication module's `ScopedKeeper` name. This is the name provided upon instantiation in `app.go` via the [`x/capability` keeper `ScopeToModule(moduleName string)`](https://github.com/cosmos/cosmos-sdk/blob/v0.46.1/x/capability/keeper/keeper.go#L70) method. [See here for an example in `simapp`](https://github.com/cosmos/ibc-go/blob/v5.0.0/testing/simapp/app.go#L304).
+
+```go expandable
+app.UpgradeKeeper.SetUpgradeHandler(
+ v6.UpgradeName,
+ v6.CreateUpgradeHandler(
+ app.mm,
+ app.configurator,
+ app.appCodec,
+ app.keys[capabilitytypes.ModuleName],
+ app.CapabilityKeeper,
+ >>>> moduleName <<<<,
+ ),
+)
+```
+
+## IBC Apps
+
+### ICS27 - Interchain Accounts
+
+#### Controller APIs
+
+In previous releases of ibc-go, chain developers integrating the ICS27 interchain accounts controller functionality were expected to create a custom `Base Application` referred to as an authentication module; see the section [Building an authentication module](/docs/ibc/v6.3.x/apps/interchain-accounts/auth-modules) from the documentation.
+
+The `Base Application` was intended to be composed with the ICS27 controller submodule `Keeper` and facilitate many forms of message authentication depending on a chain's particular use case.
+
+Prior to ibc-go v6 the controller submodule exposed only these two functions (to which we will refer as the legacy APIs):
+
+* [`RegisterInterchainAccount`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/account.go#L19)
+* [`SendTx`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/relay.go#L18)
+
+However, these functions have now been deprecated in favour of the new controller submodule `MsgServer` and will be removed in later releases.
+
+Both APIs remain functional and maintain backwards compatibility in ibc-go v6; however, consumers of these APIs are now recommended to follow the message passing paradigm outlined in Cosmos SDK [ADR 031](/docs/sdk/next/documentation/legacy/adr-comprehensive#adr-031) and [ADR 033](/docs/sdk/next/documentation/legacy/adr-comprehensive#adr-033). This is facilitated by the Cosmos SDK [`MsgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/main/baseapp/msg_service_router.go#L17): chain developers creating custom application logic can now omit the ICS27 controller submodule `Keeper` from their module and instead depend on message routing.
+
+Depending on the use case, developers of custom authentication modules face one of three scenarios:
+
+![auth-module-decision-tree.png](/docs/ibc/images/04-migrations/images/auth-module-decision-tree.png)
+
+**My authentication module needs to access IBC packet callbacks**
+
+Application developers that wish to consume IBC packet callbacks and react upon packet acknowledgements **must** continue using the controller submodule's legacy APIs. The authentication modules will not need a `ScopedKeeper` anymore, though, because the channel capability will be claimed by the controller submodule. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the authentication module's `ScopedKeeper` (`scopedICAAuthKeeper`) is not needed anymore and can be removed from the argument list of the keeper constructor function, as shown here:
+
+```diff
+app.ICAAuthKeeper = icaauthkeeper.NewKeeper(
+ appCodec,
+ keys[icaauthtypes.StoreKey],
+ app.ICAControllerKeeper,
+- scopedICAAuthKeeper,
+)
+```
+
+Please note that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above.
Therefore the authentication module's `ScopedKeeper` cannot be completely removed from the chain code until the migration has run. + +In the future, the use of the legacy APIs for accessing packet callbacks will be replaced by IBC Actor Callbacks (see [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) for more details) and it will also be possible to access them with the `MsgServiceRouter`. + +**My authentication module does not need access to IBC packet callbacks** + +The authentication module can migrate from using the legacy APIs and it can be composed instead with the `MsgServiceRouter`, so that the authentication module is able to pass messages to the controller submodule's `MsgServer` to register interchain accounts and send packets to the interchain account. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the ICS27 controller submodule keeper (`ICAControllerKeeper`) and authentication module scoped keeper (`scopedICAAuthKeeper`) are not needed anymore and can be replaced with the `MsgServiceRouter`, as shown here: + +```diff +app.ICAAuthKeeper = icaauthkeeper.NewKeeper( + appCodec, + keys[icaauthtypes.StoreKey], +- app.ICAControllerKeeper, +- scopedICAAuthKeeper, ++ app.MsgServiceRouter(), +) +``` + +In your authentication module you can route messages to the controller submodule's `MsgServer` instead of using the legacy APIs. 
For example, for registering an interchain account: + +```diff expandable +- if err := keeper.icaControllerKeeper.RegisterInterchainAccount( +- ctx, +- connectionID, +- owner.String(), +- version, +- ); err != nil { +- return err +- } ++ msg := controllertypes.NewMsgRegisterInterchainAccount( ++ connectionID, ++ owner.String(), ++ version, ++ ) ++ handler := keeper.msgRouter.Handler(msg) ++ res, err := handler(ctx, msg) ++ if err != nil { ++ return err ++ } +``` + +where `controllertypes` is an import alias for `"github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/types"`. + +In addition, in this use case the authentication module does not need to implement the `IBCModule` interface anymore. + +**I do not need a custom authentication module anymore** + +If your authentication module does not have any extra functionality compared to the default authentication module added in ibc-go v6 (the `MsgServer`), or if you can use a generic authentication module, such as the `x/auth`, `x/gov` or `x/group` modules from the Cosmos SDK (v0.46 and later), then you can remove your authentication module completely and use instead the gRPC endpoints of the `MsgServer` or the CLI added in ibc-go v6. + +Please remember that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above. + +#### Host params + +The ICS27 host submodule default params have been updated to include the `AllowAllHostMsgs` wildcard `*`. +This enables execution of any `sdk.Msg` type for ICS27 registered on the host chain `InterfaceRegistry`. + +```diff +/ AllowAllHostMsgs holds the string key that allows all message types on interchain accounts host module +const AllowAllHostMsgs = "*" + +... 
+
+/ DefaultParams is the default parameter configuration for the host submodule
+func DefaultParams() Params {
+- return NewParams(DefaultHostEnabled, nil)
++ return NewParams(DefaultHostEnabled, []string{AllowAllHostMsgs})
+}
+```
+
+#### API breaking changes
+
+`SerializeCosmosTx` takes in a `[]proto.Message` instead of `[]sdk.Msg`. This allows for the serialization of proto messages without requiring the fulfillment of the `sdk.Msg` interface.
+
+The `27-interchain-accounts` genesis types have been moved to their own package: `modules/apps/27-interchain-accounts/genesis/types`.
+This change facilitates the addition of the ICS27 controller submodule `MsgServer` and avoids cyclic imports. This should have minimal disruption to chain developers integrating `27-interchain-accounts`.
+
+The ICS27 host submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper` now includes an additional parameter of type `ICS4Wrapper`.
+This provides the host submodule with the ability to correctly unwrap channel versions in the event of a channel reopening handshake.
+
+```diff
+func NewKeeper(
+ cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace,
+- channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper,
++ ics4Wrapper icatypes.ICS4Wrapper, channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper,
+ accountKeeper icatypes.AccountKeeper, scopedKeeper icatypes.ScopedKeeper, msgRouter icatypes.MessageRouter,
+) Keeper
+```
+
+### ICS29 - `NewKeeper` API change
+
+The `NewKeeper` function of ICS29 has been updated to remove the `paramSpace` parameter as it was unused.
+ +```diff +func NewKeeper( +- cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace, +- ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper, portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper, ++ cdc codec.BinaryCodec, key storetypes.StoreKey, ++ ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper, ++ portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper, +) Keeper { +``` + +### ICS20 - `SendTransfer` is no longer exported + +The `SendTransfer` function of ICS20 has been removed. IBC transfers should now be initiated with `MsgTransfer` and routed to the ICS20 `MsgServer`. + +See below for example: + +```go +if handler := msgRouter.Handler(msgTransfer); handler != nil { + if err := msgTransfer.ValidateBasic(); err != nil { + return nil, err +} + +res, err := handler(ctx, msgTransfer) + if err != nil { + return nil, err +} +} +``` + +### ICS04 - `SendPacket` API change + +The `SendPacket` API has been simplified: + +```diff expandable +/ SendPacket is called by a module in order to send an IBC packet on a channel +func (k Keeper) SendPacket( + ctx sdk.Context, + channelCap *capabilitytypes.Capability, +- packet exported.PacketI, +-) error { ++ sourcePort string, ++ sourceChannel string, ++ timeoutHeight clienttypes.Height, ++ timeoutTimestamp uint64, ++ data []byte, ++) (uint64, error) { +``` + +Callers no longer need to pass in a pre-constructed packet. +The destination port/channel identifiers and the packet sequence will be determined by core IBC. +`SendPacket` will return the packet sequence. 
+
+### IBC testing package
+
+The `SendPacket` API of the testing `Endpoint` type has been simplified:
+
+```diff expandable
+/ SendPacket is called by a module in order to send an IBC packet on a channel
+func (endpoint *Endpoint) SendPacket(
+- packet exported.PacketI,
+-) error {
++ timeoutHeight clienttypes.Height,
++ timeoutTimestamp uint64,
++ data []byte,
++) (uint64, error) {
+```
+
+Callers no longer need to pass in a pre-constructed packet. `SendPacket` will return the packet sequence.
+
+## Relayers
+
+* No relevant changes were made in this release.
+
+## IBC Light Clients
+
+* No relevant changes were made in this release.
diff --git a/docs/ibc/v7.8.x/migrations/v6-to-v7.mdx b/docs/ibc/v7.8.x/migrations/v6-to-v7.mdx
new file mode 100644
index 00000000..ca76d0bc
--- /dev/null
+++ b/docs/ibc/v7.8.x/migrations/v6-to-v7.mdx
@@ -0,0 +1,358 @@
+---
+title: IBC-Go v6 to v7
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases.
+
+## Chains
+
+Chains will perform automatic migrations to remove existing localhost clients and to migrate the solomachine to v3 of the protobuf definition.
+
+An optional upgrade handler has been added to prune expired tendermint consensus states. It may be used during any upgrade (from v7 onwards).
+
+Add the following to the function call to the upgrade handler in `app/app.go`, to perform the optional state pruning.
+
+```go expandable
+import (
+
+ / ...
+ ibctmmigrations "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint/migrations"
+)
+
+/ ...
+
+app.UpgradeKeeper.SetUpgradeHandler(
+ upgradeName,
+ func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+ / prune expired tendermint consensus states to save storage space
+ _, err := ibctmmigrations.PruneExpiredConsensusStates(ctx, app.Codec, app.IBCKeeper.ClientKeeper)
+ if err != nil {
+ return nil, err
+ }
+
+ return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+ },
+)
+```
+
+Check the logs to see how many consensus states are pruned.
+
+### Light client registration
+
+Chains must explicitly register the types of any light client modules they wish to integrate.
+
+#### Tendermint registration
+
+To register the tendermint client, modify the `app.go` file to include the tendermint `AppModuleBasic`:
+
+```diff expandable
+import (
+ / ...
++ ibctm "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint"
+)
+
+/ ...
+
+ModuleBasics = module.NewBasicManager(
+ ...
+ ibc.AppModuleBasic{},
++ ibctm.AppModuleBasic{},
+ ...
+)
+```
+
+It may be useful to reference the [PR](https://github.com/cosmos/ibc-go/pull/2825) which added the `AppModuleBasic` for the tendermint client.
+
+#### Solo machine registration
+
+To register the solo machine client, modify the `app.go` file to include the solo machine `AppModuleBasic`:
+
+```diff expandable
+import (
+ / ...
++ solomachine "github.com/cosmos/ibc-go/v7/modules/light-clients/06-solomachine"
+)
+
+/ ...
+
+ModuleBasics = module.NewBasicManager(
+ ...
+ ibc.AppModuleBasic{},
++ solomachine.AppModuleBasic{},
+ ...
+)
+```
+
+It may be useful to reference the [PR](https://github.com/cosmos/ibc-go/pull/2826) which added the `AppModuleBasic` for the solo machine client.
+ +### Testing package API + +The `SetChannelClosed` utility method in `testing/endpoint.go` has been updated to `SetChannelState`, which will take a `channeltypes.State` argument so that the `ChannelState` can be set to any of the possible channel states. + +## IBC Apps + +* No relevant changes were made in this release. + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +### `ClientState` interface changes + +The `VerifyUpgradeAndUpdateState` function has been modified. The client state and consensus state return values have been removed. + +Light clients **must** handle all management of client and consensus states including the setting of updated client state and consensus state in the client store. + +The `Initialize` method is now expected to set the initial client state, consensus state and any client-specific metadata in the provided store upon client creation. + +The `CheckHeaderAndUpdateState` method has been split into 4 new methods: + +* `VerifyClientMessage` verifies a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned if the `ClientMessage` fails to verify. + +* `CheckForMisbehaviour` checks for evidence of a misbehaviour in `Header` or `Misbehaviour` types. + +* `UpdateStateOnMisbehaviour` performs appropriate state changes on a `ClientState` given that misbehaviour has been detected and verified. + +* `UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. An error is returned if `ClientMessage` is of type `Misbehaviour`. Upon successful update, a list containing the updated consensus state height is returned. 
+ +The `CheckMisbehaviourAndUpdateState` function has been removed from `ClientState` interface. This functionality is now encapsulated by the usage of `VerifyClientMessage`, `CheckForMisbehaviour`, `UpdateStateOnMisbehaviour`. + +The function `GetTimestampAtHeight` has been added to the `ClientState` interface. It should return the timestamp for a consensus state associated with the provided height. + +Prior to ibc-go/v7 the `ClientState` interface defined a method for each data type which was being verified in the counterparty state store. +The state verification functions for all IBC data types have been consolidated into two generic methods, `VerifyMembership` and `VerifyNonMembership`. +Both are expected to be provided with a standardised key path, `exported.Path`, as defined in [ICS 24 host requirements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). Membership verification requires callers to provide the marshalled value `[]byte`. Delay period values should be zero for non-packet processing verification. A zero proof height is now allowed by core IBC and may be passed into `VerifyMembership` and `VerifyNonMembership`. Light clients are responsible for returning an error if a zero proof height is invalid behaviour. + +See below for an example of how ibc-go now performs channel state verification. 
+ +```go expandable +merklePath := commitmenttypes.NewMerklePath(host.ChannelPath(portID, channelID)) + +merklePath, err := commitmenttypes.ApplyPrefix(connection.GetCounterparty().GetPrefix(), merklePath) + if err != nil { + return err +} + +channelEnd, ok := channel.(channeltypes.Channel) + if !ok { + return sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "invalid channel type %T", channel) +} + +bz, err := k.cdc.Marshal(&channelEnd) + if err != nil { + return err +} + if err := clientState.VerifyMembership( + ctx, clientStore, k.cdc, height, + 0, 0, / skip delay period checks for non-packet processing verification + proof, merklePath, bz, +); err != nil { + return sdkerrors.Wrapf(err, "failed channel state verification for client (%s)", clientID) +} +``` + +### `Header` and `Misbehaviour` + +`exported.Header` and `exported.Misbehaviour` interface types have been merged and renamed to `ClientMessage` interface. + +`GetHeight` function has been removed from `exported.Header` and thus is not included in the `ClientMessage` interface + +### `ConsensusState` + +The `GetRoot` function has been removed from consensus state interface since it was not used by core IBC. + +### Client keeper + +Keeper function `CheckMisbehaviourAndUpdateState` has been removed since function `UpdateClient` can now handle updating `ClientState` on `ClientMessage` type which can be any `Misbehaviour` implementations. + +### SDK message + +`MsgSubmitMisbehaviour` is deprecated since `MsgUpdateClient` can now submit a `ClientMessage` type which can be any `Misbehaviour` implementations. + +The field `header` in `MsgUpdateClient` has been renamed to `client_message`. + +## Solomachine + +The `06-solomachine` client implementation has been simplified in ibc-go/v7. In-place store migrations have been added to migrate solomachine clients from `v2` to `v3`. 
+ +### `ClientState` + +The `ClientState` protobuf message definition has been updated to remove the deprecated `bool` field `allow_update_after_proposal`. + +```diff +message ClientState { + option (gogoproto.goproto_getters) = false; + + uint64 sequence = 1; + bool is_frozen = 2 [(gogoproto.moretags) = "yaml:\"is_frozen\""]; + ConsensusState consensus_state = 3 [(gogoproto.moretags) = "yaml:\"consensus_state\""]; +- bool allow_update_after_proposal = 4 [(gogoproto.moretags) = "yaml:\"allow_update_after_proposal\""]; +} +``` + +### `Header` and `Misbehaviour` + +The `06-solomachine` protobuf message `Header` has been updated to remove the `sequence` field. This field was seen as redundant as the implementation can safely rely on the `sequence` value maintained within the `ClientState`. + +```diff expandable +message Header { + option (gogoproto.goproto_getters) = false; + +- uint64 sequence = 1; +- uint64 timestamp = 2; +- bytes signature = 3; +- google.protobuf.Any new_public_key = 4 [(gogoproto.moretags) = "yaml:\"new_public_key\""]; +- string new_diversifier = 5 [(gogoproto.moretags) = "yaml:\"new_diversifier\""]; ++ uint64 timestamp = 1; ++ bytes signature = 2; ++ google.protobuf.Any new_public_key = 3 [(gogoproto.moretags) = "yaml:\"new_public_key\""]; ++ string new_diversifier = 4 [(gogoproto.moretags) = "yaml:\"new_diversifier\""]; +} +``` + +Similarly, the `Misbehaviour` protobuf message has been updated to remove the `client_id` field. 
+ +```diff expandable +message Misbehaviour { + option (gogoproto.goproto_getters) = false; + +- string client_id = 1 [(gogoproto.moretags) = "yaml:\"client_id\""]; +- uint64 sequence = 2; +- SignatureAndData signature_one = 3 [(gogoproto.moretags) = "yaml:\"signature_one\""]; +- SignatureAndData signature_two = 4 [(gogoproto.moretags) = "yaml:\"signature_two\""]; ++ uint64 sequence = 1; ++ SignatureAndData signature_one = 2 [(gogoproto.moretags) = "yaml:\"signature_one\""]; ++ SignatureAndData signature_two = 3 [(gogoproto.moretags) = "yaml:\"signature_two\""]; +} +``` + +### `SignBytes` + +Most notably, the `SignBytes` protobuf definition has been modified to replace the `data_type` field with a new field, `path`. The `path` field is defined as `bytes` and represents a serialized [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements) standardized key path under which the `data` is stored. + +```diff +message SignBytes { + option (gogoproto.goproto_getters) = false; + + uint64 sequence = 1; + uint64 timestamp = 2; + string diversifier = 3; +- DataType data_type = 4 [(gogoproto.moretags) = "yaml:\"data_type\""]; ++ bytes path = 4; + bytes data = 5; +} +``` + +The `DataType` enum and all associated data types have been removed, greatly reducing the number of message definitions and complexity in constructing the `SignBytes` message type. Likewise, solomachine implementations must now use the serialized `path` value when constructing `SignatureAndData` for signature verification of `SignBytes` data. + +```diff +message SignatureAndData { + option (gogoproto.goproto_getters) = false; + + bytes signature = 1; +- DataType data_type = 2 [(gogoproto.moretags) = "yaml:\"data_type\""]; ++ bytes path = 2; + bytes data = 3; + uint64 timestamp = 4; +} +``` + +For more information, please refer to [ADR-007](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/docs/architecture/adr-007-solomachine-signbytes.md). 
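To make the change concrete, the new `path` bytes are simply a serialized ICS-24 standardized key path. The sketch below is illustrative only: the `signBytesPath` helper and the client identifier are hypothetical, not part of ibc-go, and the `clients/{identifier}/clientState` shape follows the ICS-24 host requirements.

```go
package main

import "fmt"

// signBytesPath builds an ICS-24 standardized key path for a client state and
// serializes it to bytes, as now expected by the SignBytes `path` field.
// This helper is an illustration, not the ibc-go implementation.
func signBytesPath(clientID string) []byte {
	return []byte(fmt.Sprintf("clients/%s/clientState", clientID))
}

func main() {
	fmt.Println(string(signBytesPath("07-tendermint-0")))
	// clients/07-tendermint-0/clientState
}
```

The same serialized path bytes are then reused in `SignatureAndData` when verifying the resulting signature.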
+
+### IBC module constants
+
+IBC module constants have been moved from the `host` package to the `exported` package. Any usages will need to be updated.
+
+```diff expandable
+import (
+  / ...
+- host "github.com/cosmos/ibc-go/v7/modules/core/24-host"
++ ibcexported "github.com/cosmos/ibc-go/v7/modules/core/exported"
+  / ...
+)
+
+- host.ModuleName
++ ibcexported.ModuleName
+
+- host.StoreKey
++ ibcexported.StoreKey
+
+- host.QuerierRoute
++ ibcexported.QuerierRoute
+
+- host.RouterKey
++ ibcexported.RouterKey
+```
+
+## Upgrading to Cosmos SDK 0.47
+
+The following should be considered as complementary to [Cosmos SDK v0.47 UPGRADING.md](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc2/UPGRADING.md).
+
+### Protobuf
+
+Protobuf code generation, linting and formatting have been updated to leverage the `ghcr.io/cosmos/proto-builder:0.11.5` docker container. IBC protobuf definitions are now packaged and published to [buf.build/cosmos/ibc](https://buf.build/cosmos/ibc) via CI workflows. The `third_party/proto` directory has been removed in favour of dependency management using [buf.build](https://docs.buf.build/introduction).
+
+### App modules
+
+Legacy APIs of the `AppModule` interface have been removed from ibc-go modules. For example:
+
+```diff expandable
+- / Route implements the AppModule interface
+- func (am AppModule) Route() sdk.Route {
+-   return sdk.Route{}
+- }
+-
+- / QuerierRoute implements the AppModule interface
+- func (AppModule) QuerierRoute() string {
+-   return types.QuerierRoute
+- }
+-
+- / LegacyQuerierHandler implements the AppModule interface
+- func (am AppModule) LegacyQuerierHandler(*codec.LegacyAmino) sdk.Querier {
+-   return nil
+- }
+-
+- / ProposalContents doesn't return any content functions for governance proposals.
+- func (AppModule) ProposalContents(_ module.SimulationState) []simtypes.WeightedProposalContent {
+-   return nil
+- }
+```
+
+### Imports
+
+Imports for ics23 have been updated as the repository has been migrated from confio to cosmos.
+
+```diff
+import (
+  / ...
+- ics23 "github.com/confio/ics23/go"
++ ics23 "github.com/cosmos/ics23/go"
+  / ...
+)
+```
+
+Imports for gogoproto have been updated.
+
+```diff
+import (
+  / ...
+- "github.com/gogo/protobuf/proto"
++ "github.com/cosmos/gogoproto/proto"
+  / ...
+)
+```
diff --git a/docs/ibc/v7.8.x/migrations/v7-to-v7_1.mdx b/docs/ibc/v7.8.x/migrations/v7-to-v7_1.mdx
new file mode 100644
index 00000000..e644c728
--- /dev/null
+++ b/docs/ibc/v7.8.x/migrations/v7-to-v7_1.mdx
@@ -0,0 +1,68 @@
+---
+title: IBC-Go v7 to v7.1
+description: This guide provides instructions for migrating to version v7.1.0 of ibc-go.
+---
+
+This guide provides instructions for migrating to version `v7.1.0` of ibc-go.
+
+There are four sections based on the four potential user groups of this document:
+
+* [Migrating from v7 to v7.1](#migrating-from-v7-to-v71)
+  * [Chains](#chains)
+  * [IBC Apps](#ibc-apps)
+  * [Relayers](#relayers)
+  * [IBC Light Clients](#ibc-light-clients)
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases.
+
+## Chains
+
+### 09-localhost migration
+
+In the previous release of ibc-go, the localhost `v1` light client module was deprecated and removed. The ibc-go `v7.1.0` release introduces `v2` of the 09-localhost light client module.
+
+An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v7.2.0/modules/core/module.go#L127-L145) is configured in the core IBC module to set the localhost `ClientState` and sentinel `ConnectionEnd` in state.
+
+In order to use the 09-localhost client, chains must update the `AllowedClients` parameter in the 02-client submodule of core IBC.
This can be configured directly in the application upgrade handler or alternatively updated via the legacy governance parameter change proposal.
+We **strongly** recommend that chains perform this action so that intra-ledger communication can be carried out using the familiar IBC interfaces.
+
+See the upgrade handler code sample provided below or [follow this link](https://github.com/cosmos/ibc-go/blob/v7.2.0/testing/simapp/upgrades/upgrades.go#L85) for the upgrade handler used by the ibc-go simapp.
+
+```go expandable
+func CreateV7LocalhostUpgradeHandler(
+  mm *module.Manager,
+  configurator module.Configurator,
+  clientKeeper clientkeeper.Keeper,
+) upgradetypes.UpgradeHandler {
+  return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+    / explicitly update the IBC 02-client params, adding the localhost client type
+    params := clientKeeper.GetParams(ctx)
+    params.AllowedClients = append(params.AllowedClients, exported.Localhost)
+    clientKeeper.SetParams(ctx, params)
+
+    return mm.RunMigrations(ctx, configurator, vm)
+  }
+}
+```
+
+### Transfer migration
+
+An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v7.2.0/modules/apps/transfer/module.go#L111-L113) is configured in the transfer module to set the total amount in escrow for all denominations of coins that have been sent out. For each denomination a state entry is added with the total amount of coins in escrow regardless of the channel from which they were transferred.
+
+## IBC Apps
+
+* No relevant changes were made in this release.
+
+## Relayers
+
+The event attribute `packet_connection` (`connectiontypes.AttributeKeyConnection`) has been deprecated.
+Please use the `connection_id` attribute (`connectiontypes.AttributeKeyConnectionID`), which is emitted by all channel events.
+Only send packet, receive packet, write acknowledgement, and acknowledge packet events used `packet_connection` previously.
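As a minimal illustration of this switch on the relayer side, an event filter only needs to look up the new attribute key. The `attribute` type below is a plain stand-in for an ABCI event attribute, not the cometbft type, and the filter itself is hypothetical:

```go
package main

import "fmt"

// attribute is a minimal stand-in for an ABCI event attribute.
type attribute struct {
	Key, Value string
}

// connectionID extracts the connection identifier from channel event
// attributes using the new `connection_id` key rather than the deprecated
// `packet_connection` key.
func connectionID(attrs []attribute) (string, bool) {
	for _, a := range attrs {
		if a.Key == "connection_id" {
			return a.Value, true
		}
	}
	return "", false
}

func main() {
	attrs := []attribute{
		{Key: "packet_src_channel", Value: "channel-0"},
		{Key: "connection_id", Value: "connection-0"},
	}
	id, ok := connectionID(attrs)
	fmt.Println(id, ok)
	// connection-0 true
}
```

Relayers that still match on `packet_connection` will keep working until the attribute is removed, but should migrate to `connection_id` now since it is emitted by all channel events.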
+ +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v7.8.x/migrations/v7_2-to-v7_3.mdx b/docs/ibc/v7.8.x/migrations/v7_2-to-v7_3.mdx new file mode 100644 index 00000000..c962dea6 --- /dev/null +++ b/docs/ibc/v7.8.x/migrations/v7_2-to-v7_3.mdx @@ -0,0 +1,46 @@ +--- +title: IBC-Go v7.2 to v7.3 +description: This guide provides instructions for migrating to version v7.3.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v7.3.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v7.2 to v7.3](#migrating-from-v72-to-v73) + * [Chains](#chains) + * [IBC Apps](#ibc-apps) + * [Relayers](#relayers) + * [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +* No relevant changes were made in this release. + +## IBC Apps + +A set of interfaces have been added that IBC applications may optionally implement. Developers interested in integrating their applications with the [callbacks middleware](/docs/ibc/v7.8.x/middleware/callbacks/overview) should implement these interfaces so that the callbacks middleware can retrieve the desired callback addresses on the source and destination chains and execute actions on packet lifecycle events. The interfaces are [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/05-port/types/module.go#L142-L147), [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/exported/packet.go#L43-L52) and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/exported/packet.go#L36-L41). + +Sample implementations are available for reference. 
For `transfer`: + +* [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/ibc_module.go#L303-L313), +* [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/types/packet.go#L85-L105) +* and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/types/packet.go#L74-L83). + +For `27-interchain-accounts`: + +* [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/controller/ibc_middleware.go#L258-L268), +* [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/types/packet.go#L94-L114) +* and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/types/packet.go#L78-L92). + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +### 06-solomachine + +Solo machines are now expected to sign data on a path that 1) does not include a connection prefix (e.g `ibc`) and 2) does not escape any characters. See PR [#4429](https://github.com/cosmos/ibc-go/pull/4429) for more details. We recommend **NOT** using the solo machine light client of versions lower than v7.3.0. diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/active-channels.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/active-channels.mdx new file mode 100644 index 00000000..b77fe38a --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/active-channels.mdx @@ -0,0 +1,44 @@ +--- +title: Active Channels +description: The Interchain Accounts module uses either ORDERED or UNORDERED channels. +--- + +The Interchain Accounts module uses either [ORDERED or UNORDERED](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#ordering) channels. + +When using `ORDERED` channels, the order of transactions when sending packets from a controller to a host chain is maintained. 
+ +When using `UNORDERED` channels, there is no guarantee that the order of transactions when sending packets from the controller to the host chain is maintained. Since ibc-go v8.3.0, the default ordering for new ICA channels is `UNORDERED`, if no ordering is specified in `MsgRegisterInterchainAccount` (previously the default ordering was `ORDERED`). + +> A limitation when using ORDERED channels is that when a packet times out the channel will be closed. + +In the case of a channel closing, a controller chain needs to be able to regain access to the interchain account registered on this channel. `Active Channels` enable this functionality. + +When an Interchain Account is registered using `MsgRegisterInterchainAccount`, a new channel is created on a particular port. During the `OnChanOpenAck` and `OnChanOpenConfirm` steps (on controller & host chain respectively) the `Active Channel` for this interchain account is stored in state. + +It is possible to create a new channel using the same controller chain portID if the previously set `Active Channel` is now in a `CLOSED` state. This channel creation can be initialized programmatically by sending a new `MsgChannelOpenInit` message like so: + +```go +msg := channeltypes.NewMsgChannelOpenInit(portID, string(versionBytes), channeltypes.ORDERED, []string{ + connectionID +}, icatypes.HostPortID, authtypes.NewModuleAddress(icatypes.ModuleName).String()) + handler := keeper.msgRouter.Handler(msg) + +res, err := handler(ctx, msg) + if err != nil { + return err +} +``` + +Alternatively, any relayer operator may initiate a new channel handshake for this interchain account once the previously set `Active Channel` is in a `CLOSED` state. This is done by initiating the channel handshake on the controller chain using the same portID associated with the interchain account in question. 
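For reference, the controller portID associated with an interchain account owner follows the `icacontroller-` prefix convention (exposed in ibc-go's `icatypes` package). The sketch below shows only the string shape; it is a plain-string illustration, not the `icatypes` helper itself:

```go
package main

import "fmt"

// controllerPortID mirrors the "icacontroller-" + owner-address convention
// used for controller chain ports. Illustrative sketch only; use the icatypes
// helpers in a real integration.
func controllerPortID(owner string) string {
	return "icacontroller-" + owner
}

func main() {
	fmt.Println(controllerPortID("cosmos1..."))
	// icacontroller-cosmos1...
}
```

Because the portID is derived deterministically from the owner, reopening a channel for the same interchain account reuses the same portID as the original handshake.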
+
+It is important to note that once a channel has been opened for a given interchain account, new channels cannot be opened for this account until the currently set `Active Channel` is set to `CLOSED`.
+
+## Future improvements
+
+Future versions of the ICS-27 protocol and the Interchain Accounts module will likely use a new channel type that provides ordering of packets without the channel closing in the event of a packet timing out, thus removing the need for `Active Channels` entirely.
+The following is a list of issues which will provide the infrastructure to make this possible:
+
+* [IBC Channel Upgrades](https://github.com/cosmos/ibc-go/issues/1599)
+* [Implement ORDERED\_ALLOW\_TIMEOUT logic in 04-channel](https://github.com/cosmos/ibc-go/issues/1661)
+* [Add ORDERED\_ALLOW\_TIMEOUT as supported ordering in 03-connection](https://github.com/cosmos/ibc-go/issues/1662)
+* [Allow ICA channels to be opened as ORDERED\_ALLOW\_TIMEOUT](https://github.com/cosmos/ibc-go/issues/1663)
diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/auth-modules.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/auth-modules.mdx
new file mode 100644
index 00000000..c19d5ca9
--- /dev/null
+++ b/docs/ibc/v8.5.x/apps/interchain-accounts/auth-modules.mdx
@@ -0,0 +1,21 @@
+---
+title: Authentication Modules
+---
+
+## Synopsis
+
+Authentication modules enable application developers to perform custom logic when interacting with the Interchain Accounts controller submodule's `MsgServer`.
+
+The controller submodule is used for account registration and packet sending. It executes only logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified. There may exist many different types of authentication which are desirable for different use cases. Thus the purpose of the authentication module is to wrap the controller submodule with custom authentication logic.
+
+In ibc-go, authentication modules can communicate with the controller submodule by passing messages through `baseapp`'s `MsgServiceRouter`. To implement an authentication module, the `IBCModule` interface need not be fulfilled; it is only required to fulfill Cosmos SDK's `AppModuleBasic` interface, just like any regular Cosmos SDK application module.
+
+The authentication module must:
+
+- Authenticate interchain account owners.
+- Track the associated interchain account address for an owner.
+- Send packets on behalf of an owner (after authentication).
+
+## Integration into `app.go` file
+
+To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/v8.5.x/apps/interchain-accounts/integration#example-integration).
diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/client.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/client.mdx
new file mode 100644
index 00000000..fb058d88
--- /dev/null
+++ b/docs/ibc/v8.5.x/apps/interchain-accounts/client.mdx
@@ -0,0 +1,200 @@
+---
+title: Client
+description: >-
+  A user can query and interact with the Interchain Accounts module using the
+  CLI. Use the --help flag to discover the available commands:
+---
+
+## CLI
+
+A user can query and interact with the Interchain Accounts module using the CLI. Use the `--help` flag to discover the available commands:
+
+```shell
+simd query interchain-accounts --help
+```
+
+> Please note that this section does not document all the available commands, only the ones that warrant extra documentation beyond what fits in the command-line help.
+
+### Controller
+
+A user can query and interact with the controller submodule.
+
+#### Query
+
+The `query` commands allow users to query the controller submodule.
+
+```shell
+simd query interchain-accounts controller --help
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the controller submodule.
+ +```shell +simd tx interchain-accounts controller --help +``` + +#### `register` + +The `register` command allows users to register an interchain account on a host chain on the provided connection. + +```shell +simd tx interchain-accounts controller register [connection-id] [flags] +``` + +During registration a new channel is set up between controller and host. There are two flags available that influence the channel that is created: + +* `--version` to specify the (JSON-formatted) version string of the channel. For example: `{\"version\":\"ics27-1\",\"encoding\":\"proto3\",\"tx_type\":\"sdk_multi_msg\",\"controller_connection_id\":\"connection-0\",\"host_connection_id\":\"connection-0\"}`. Passing a custom version string is useful if you want to specify, for example, the encoding format of the interchain accounts packet data (either `proto3` or `proto3json`). If not specified the controller submodule will generate a default version string. +* `--ordering` to specify the ordering of the channel. Available options are `order_ordered` and `order_unordered` (default if not specified). + +Example: + +```shell +simd tx interchain-accounts controller register connection-0 --ordering order_ordered --from cosmos1.. +``` + +#### `send-tx` + +The `send-tx` command allows users to send a transaction on the provided connection to be executed using an interchain account on the host chain. + +```shell +simd tx interchain-accounts controller send-tx [connection-id] [path/to/packet_msg.json] +``` + +Example: + +```shell +simd tx interchain-accounts controller send-tx connection-0 packet-data.json --from cosmos1.. +``` + +See below for example contents of `packet-data.json`. The CLI handler will unmarshal the following into `InterchainAccountPacketData` appropriately. 
+ +```json +{ + "type": "TYPE_EXECUTE_TX", + "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw", + "memo": "" +} +``` + +Note the `data` field is a base64 encoded byte string as per the tx encoding agreed upon during the channel handshake. + +A helper CLI is provided in the host submodule which can be used to generate the packet data JSON using the counterparty chain's binary. See the [`generate-packet-data` command](#generate-packet-data) for an example. + +### Host + +A user can query and interact with the host submodule. + +#### Query + +The `query` commands allow users to query the host submodule. + +```shell +simd query interchain-accounts host --help +``` + +#### Transactions + +The `tx` commands allow users to interact with the controller submodule. + +```shell +simd tx interchain-accounts host --help +``` + +##### `generate-packet-data` + +The `generate-packet-data` command allows users to generate protobuf or proto3 JSON encoded interchain accounts packet data for input message(s). The packet data can then be used with the controller submodule's [`send-tx` command](#send-tx). The `--encoding` flag can be used to specify the encoding format (value must be either `proto3` or `proto3json`); if not specified, the default will be `proto3`. The `--memo` flag can be used to include a memo string in the interchain accounts packet data. 
+
+```shell
+simd tx interchain-accounts host generate-packet-data [message]
+```
+
+Example:
+
+```shell expandable
+simd tx interchain-accounts host generate-packet-data '[{
+  "@type":"/cosmos.bank.v1beta1.MsgSend",
+  "from_address":"cosmos15ccshhmp0gsx29qpqq6g4zmltnnvgmyu9ueuadh9y2nc5zj0szls5gtddz",
+  "to_address":"cosmos10h9stc5v6ntgeygf5xf945njqq5h32r53uquvw",
+  "amount": [
+    {
+      "denom": "stake",
+      "amount": "1000"
+    }
+  ]
+}]' --memo memo
+```
+
+The command accepts a single `sdk.Msg` or a list of `sdk.Msg`s that will be encoded into the output's `data` field.
+
+Example output:
+
+```json
+{
+  "type": "TYPE_EXECUTE_TX",
+  "data": "CqIBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEoEBCkFjb3Ntb3MxNWNjc2hobXAwZ3N4MjlxcHFxNmc0em1sdG5udmdteXU5dWV1YWRoOXkybmM1emowc3psczVndGRkehItY29zbW9zMTBoOXN0YzV2Nm50Z2V5Z2Y1eGY5NDVuanFxNWgzMnI1M3VxdXZ3Gg0KBXN0YWtlEgQxMDAw",
+  "memo": "memo"
+}
+```
+
+## gRPC
+
+A user can query the interchain account module using gRPC endpoints.
+
+### Controller
+
+A user can query the controller submodule using gRPC endpoints.
+
+#### `InterchainAccount`
+
+The `InterchainAccount` endpoint allows users to query the controller submodule for the interchain account address for a given owner on a particular connection.
+
+```shell
+ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"owner":"cosmos1..","connection_id":"connection-0"}' \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.controller.v1.Query/InterchainAccount
+```
+
+#### `Params`
+
+The `Params` endpoint allows users to query the current controller submodule parameters.
+
+```shell
+ibc.applications.interchain_accounts.controller.v1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.controller.v1.Query/Params
+```
+
+### Host
+
+A user can query the host submodule using gRPC endpoints.
+
+#### `Params`
+
+The `Params` endpoint allows users to query the current host submodule parameters.
+
+```shell
+ibc.applications.interchain_accounts.host.v1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  ibc.applications.interchain_accounts.host.v1.Query/Params
+```
diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/development.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/development.mdx
new file mode 100644
index 00000000..45f9d371
--- /dev/null
+++ b/docs/ibc/v8.5.x/apps/interchain-accounts/development.mdx
@@ -0,0 +1,34 @@
+---
+title: Development Use Cases
+---
+
+The initial version of Interchain Accounts allowed for the controller submodule to be extended by providing it with an underlying application which would handle all packet callbacks.
+That functionality is now being deprecated in favor of alternative approaches.
+This document will outline potential use cases and redirect each use case to the appropriate documentation.
+
+## Custom authentication
+
+Interchain accounts may be associated with alternative types of authentication relative to the traditional public/private key signing.
+If you wish to develop or use Interchain Accounts with a custom authentication module and do not need to execute custom logic on the packet callbacks, we recommend you use ibc-go v6 or greater and that your custom authentication module interacts with the controller submodule via the [`MsgServer`](/docs/ibc/v8.5.x/apps/interchain-accounts/messages).
+
+If you wish to consume and execute custom logic in the packet callbacks, then please read the section [Packet callbacks](#packet-callbacks) below.
+
+## Redirection to a smart contract
+
+It may be desirable to allow smart contracts to control an interchain account.
+To facilitate such an action, the controller submodule may be provided an underlying application which redirects to smart contract callers.
+An improved design has been suggested in [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) which performs this action via middleware. + +Implementers of this use case are recommended to follow the ADR 008 approach. +The underlying application may continue to be used as a short term solution for ADR 008 and the [legacy API](/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/auth-modules) should continue to be utilized in such situations. + +## Packet callbacks + +If a developer requires access to packet callbacks for their use case, then they have the following options: + +1. Write a smart contract which is connected via an ADR 008 or equivalent IBC application (recommended). +2. Use the controller's underlying application to implement packet callback logic. + +In the first case, the smart contract should use the [`MsgServer`](/docs/ibc/v8.5.x/apps/interchain-accounts/messages). + +In the second case, the underlying application should use the [legacy API](/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/keeper-api). diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/integration.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/integration.mdx new file mode 100644 index 00000000..9722dc38 --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/integration.mdx @@ -0,0 +1,202 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate Interchain Accounts host and controller functionality to your chain. The following document only applies for Cosmos SDK chains. + +The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule which will be used. + +Chains who wish to support ICS-27 may elect to act as a host chain, a controller chain or both. 
Disabling host or controller functionality may be done statically by excluding the host or controller submodule entirely from the `app.go` file or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules (both custom or generic, such as the `x/gov`, `x/group` or `x/auth` Cosmos SDK modules) can send messages to the controller submodule's [`MsgServer`](/docs/ibc/v8.5.x/apps/interchain-accounts/messages) to register interchain accounts and send packets to the interchain account. To accomplish this, the authentication module needs to be composed with `baseapp`'s `MsgServiceRouter`. + +![ica-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/images/ica-v6.png) + +> Please note that since ibc-go v8.3.0 it is mandatory to register the gRPC query router after the creation of the host submodule's keeper; otherwise, nodes will not start. The query router is used to execute on the host query messages encoded in the ICA packet data. Please check the sample integration code below for more details. + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: No `icaauth` exists, this must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account created +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +... + +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. 
+type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +/ Create the scoped keepers for each submodule keeper and authentication keeper + scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) + scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) + scopedICAAuthKeeper := app.CapabilityKeeper.ScopeToModule(icaauthtypes.ModuleName) + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper.WithQueryRouter(app.GRPCQueryRouter()) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter()) + +/ ICA auth AppModule + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + 
 icaControllerStack := icacontroller.NewIBCMiddleware(nil, app.ICAControllerKeeper)
+ icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper)
+
+/ Register host and authentication routes
+ibcRouter.
+  AddRoute(icacontrollertypes.SubModuleName, icaControllerStack).
+  AddRoute(icahosttypes.SubModuleName, icaHostIBCModule)
+...
+
+/ Register Interchain Accounts and authentication module AppModule's
+app.moduleManager = module.NewManager(
+  ...
+  icaModule,
+  icaAuthModule,
+)
+
+...
+
+/ Add Interchain Accounts to begin blocker logic
+app.moduleManager.SetOrderBeginBlockers(
+  ...
+  icatypes.ModuleName,
+  ...
+)
+
+/ Add Interchain Accounts to end blocker logic
+app.moduleManager.SetOrderEndBlockers(
+  ...
+  icatypes.ModuleName,
+  ...
+)
+
+/ Add Interchain Accounts module InitGenesis logic
+app.moduleManager.SetOrderInitGenesis(
+  ...
+  icatypes.ModuleName,
+  ...
+)
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) paramskeeper.Keeper {
+  ...
+  paramsKeeper.Subspace(icahosttypes.SubModuleName)
+  paramsKeeper.Subspace(icacontrollertypes.SubModuleName)
+  ...
+}
+```
+
+If no custom authentication module is needed and a generic Cosmos SDK authentication module can be used, then from the sample integration code above all references to `ICAAuthKeeper` and `icaAuthModule` can be removed. In that case, the following code is not needed:
+
+```go
+/ Create your Interchain Accounts authentication module
+app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.MsgServiceRouter())
+
+/ ICA auth AppModule
+ icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper)
+```
+
+### Using submodules exclusively
+
+As described above, the Interchain Accounts application module is structured to support the ability to exclusively enable controller or host functionality.
+This can be achieved by omitting either the controller or the host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`. +Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v8.5.x/apps/interchain-accounts/parameters). + +The following snippets show basic examples of statically disabling submodules using `app.go`. + +#### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +#### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Optionally instantiate your custom authentication module if needed +... + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(nil, app.ICAControllerKeeper) + +/ Register controller route +ibcRouter.AddRoute(icacontrollertypes.SubModuleName, icaControllerStack) +``` diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/auth-modules.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/auth-modules.mdx new file mode 100644 index 00000000..5dc4c984 --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/auth-modules.mdx @@ -0,0 +1,312 @@ +--- +title: Authentication Modules +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**.
+ +## Synopsis + +Authentication modules play the role of the `Base Application` as described in [ICS-30 IBC Middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware), and enable application developers to perform custom logic when working with the Interchain Accounts controller API. + +The controller submodule is used for account registration and packet sending. It executes only the logic required of all controllers of interchain accounts. The type of authentication used to manage the interchain accounts remains unspecified. There may exist many different types of authentication which are desirable for different use cases. Thus, the purpose of the authentication module is to wrap the controller submodule with custom authentication logic. + +In ibc-go, authentication modules are connected to the controller chain via a middleware stack. The controller submodule is implemented as [middleware](https://github.com/cosmos/ibc/tree/master/spec/app/ics-030-middleware) and the authentication module is connected to the controller submodule as the base application of the middleware stack. To implement an authentication module, the `IBCModule` interface must be fulfilled. By implementing the controller submodule as middleware, any number of authentication modules can be created and connected to the controller submodule without writing redundant code. + +The authentication module must: + +- Authenticate interchain account owners. +- Track the associated interchain account address for an owner. +- Send packets on behalf of an owner (after authentication). + +> Please note that since ibc-go v6 the channel capability is claimed by the controller submodule and therefore it is not required for authentication modules to claim the capability in the `OnChanOpenInit` callback.
When the authentication module sends packets on the channel created for the associated interchain account it can pass a `nil` capability to the legacy function `SendTx` of the controller keeper (see section [`SendTx`](/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/keeper-api#sendtx) for more information). + +## `IBCModule` implementation + +The following `IBCModule` callbacks must be implemented with appropriate custom logic: + +```go expandable +/ OnChanOpenInit implements the IBCModule interface +func (im IBCModule) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / since ibc-go v6 the authentication module *must not* claim the channel capability on OnChanOpenInit + + / perform custom logic + + return version, nil +} + +/ OnChanOpenAck implements the IBCModule interface +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + / perform custom logic + + return nil +} + +/ OnChanCloseConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / perform custom logic + + return nil +} + +/ OnAcknowledgementPacket implements the IBCModule interface +func (im IBCModule) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} + +/ OnTimeoutPacket implements the IBCModule interface. 
+func (im IBCModule) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + / perform custom logic + + return nil +} +``` + +The following functions must be defined to fulfill the `IBCModule` interface, but they will never be called by the controller submodule so they may error or panic. That is because in Interchain Accounts, the channel handshake is always initiated on the controller chain and packets are always sent to the host chain and never to the controller chain. + +```go expandable +/ OnChanOpenTry implements the IBCModule interface +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + chanCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + panic("UNIMPLEMENTED") +} + +/ OnChanOpenConfirm implements the IBCModule interface +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnChanCloseInit implements the IBCModule interface +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + panic("UNIMPLEMENTED") +} + +/ OnRecvPacket implements the IBCModule interface. A successful acknowledgement +/ is returned if the packet data is successfully decoded and the receive application +/ logic returns without error. +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +ibcexported.Acknowledgement { + panic("UNIMPLEMENTED") +} +``` + +## `OnAcknowledgementPacket` + +Controller chains will be able to access the acknowledgement written into the host chain state once a relayer relays the acknowledgement. +The acknowledgement bytes contain either the response of the execution of the message(s) on the host chain or an error. 
They will be passed to the auth module via the `OnAcknowledgementPacket` callback. Auth modules are expected to know how to decode the acknowledgement. + +If the controller chain is connected to a host chain using the host module on ibc-go, it may interpret the acknowledgement bytes as follows: + +Begin by unmarshaling the acknowledgement into `sdk.TxMsgData`: + +```go +var ack channeltypes.Acknowledgement + if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil { + return err +} + txMsgData := &sdk.TxMsgData{ +} + if err := proto.Unmarshal(ack.GetResult(), txMsgData); err != nil { + return err +} +``` + +If the `txMsgData.Data` field is non-nil, the host chain is using SDK version `<=` v0.45. +The auth module should interpret the `txMsgData.Data` as follows: + +```go expandable +switch len(txMsgData.Data) { + case 0: + / see documentation below for SDK 0.46.x or greater +default: + for _, msgData := range txMsgData.Data { + if err := handler(msgData); err != nil { + return err +} + +} +... +} +``` + +A handler will be needed to interpret what actions to perform based on the message type sent. +A router could be used, or more simply a switch statement.
+ +```go expandable +func handler(msgData sdk.MsgData) + +error { + switch msgData.MsgType { + case sdk.MsgTypeURL(&banktypes.MsgSend{ +}): + msgResponse := &banktypes.MsgSendResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleBankSendMsg(msgResponse) + case sdk.MsgTypeURL(&stakingtypes.MsgDelegate{ +}): + msgResponse := &stakingtypes.MsgDelegateResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleStakingDelegateMsg(msgResponse) + case sdk.MsgTypeURL(&transfertypes.MsgTransfer{ +}): + msgResponse := &transfertypes.MsgTransferResponse{ +} + if err := proto.Unmarshal(msgData.Data, msgResponse); err != nil { + return err +} + +handleIBCTransferMsg(msgResponse) + +default: + return nil +} + +return nil +} +``` + +If the `txMsgData.Data` is empty, the host chain is using SDK version > v0.45. +The auth module should interpret the `txMsgData.MsgResponses` as follows: + +```go +... +/ switch statement from above + case 0: + for _, any := range txMsgData.MsgResponses { + if err := handleAny(any); err != nil { + return err +} + +} +} +``` + +A handler will be needed to interpret what actions to perform based on the type URL of the Any. +A router could be used, or more simply a switch statement. +It may be possible to deduplicate logic between `handler` and `handleAny`.
+ +```go expandable +func handleAny(any *codectypes.Any) + +error { + switch any.TypeURL { + case sdk.MsgTypeURL(&banktypes.MsgSendResponse{ +}): + msgResponse, err := unpackBankMsgSendResponse(any) + if err != nil { + return err +} + +handleBankSendMsg(msgResponse) + case sdk.MsgTypeURL(&stakingtypes.MsgDelegateResponse{ +}): + msgResponse, err := unpackStakingDelegateResponse(any) + if err != nil { + return err +} + +handleStakingDelegateMsg(msgResponse) + case sdk.MsgTypeURL(&transfertypes.MsgTransferResponse{ +}): + msgResponse, err := unpackIBCTransferMsgResponse(any) + if err != nil { + return err +} + +handleIBCTransferMsg(msgResponse) + +default: + return nil +} + +return nil +} +``` + +## Integration into `app.go` file + +To integrate the authentication module into your chain, please follow the steps outlined in [`app.go` integration](/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/integration#example-integration). diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/integration.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/integration.mdx new file mode 100644 index 00000000..53c29d5e --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/integration.mdx @@ -0,0 +1,205 @@ +--- +title: Integration +description: This document is deprecated and will be removed in future releases. +--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. + +## Synopsis + +Learn how to integrate Interchain Accounts host and controller functionality into your chain. The following document applies only to Cosmos SDK chains. + +The Interchain Accounts module contains two submodules. Each submodule has its own IBC application. The Interchain Accounts module should be registered as an `AppModule` in the same way all SDK modules are registered on a chain, but each submodule should create its own `IBCModule` as necessary. A route should be added to the IBC router for each submodule that will be used. + +Chains that wish to support ICS-27 may elect to act as a host chain, a controller chain, or both.
Disabling host or controller functionality may be done statically by excluding the host or controller module entirely from the `app.go` file, or it may be done dynamically by taking advantage of the on-chain parameters which enable or disable the host or controller submodules. + +Interchain Account authentication modules are the base application of a middleware stack. The controller submodule is the middleware in this stack. + +![ica-pre-v6.png](/docs/ibc/images/02-apps/02-interchain-accounts/10-legacy/images/ica-pre-v6.png) + +> Please note that since ibc-go v6 the channel capability is claimed by the controller submodule and therefore it is not required for authentication modules to claim the capability in the `OnChanOpenInit` callback, so the custom authentication module no longer needs a scoped keeper. +> Please note that since ibc-go v8.3.0 it is mandatory to register the gRPC query router after the creation of the host submodule's keeper; otherwise, nodes will not start. The query router is used to execute, on the host, the query messages encoded in the ICA packet data. Please check the sample integration code below for more details. + +## Example integration + +```go expandable +/ app.go + +/ Register the AppModule for the Interchain Accounts module and the authentication module +/ Note: no `icaauth` module exists; it must be substituted with an actual Interchain Accounts authentication module +ModuleBasics = module.NewBasicManager( + ... + ica.AppModuleBasic{ +}, + icaauth.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the Interchain Accounts module +/ Only necessary for host chain functionality +/ Each Interchain Account created on the host chain is derived from the module account +maccPerms = map[string][]string{ + ... + icatypes.ModuleName: nil, +} + +...
+ +/ Add Interchain Accounts Keepers for each submodule used and the authentication module +/ If a submodule is being statically disabled, the associated Keeper does not need to be added. +type App struct { + ... + + ICAControllerKeeper icacontrollerkeeper.Keeper + ICAHostKeeper icahostkeeper.Keeper + ICAAuthKeeper icaauthkeeper.Keeper + + ... +} + +... + +/ Create store keys for each submodule Keeper and the authentication module + keys := sdk.NewKVStoreKeys( + ... + icacontrollertypes.StoreKey, + icahosttypes.StoreKey, + icaauthtypes.StoreKey, + ... +) + +... + +/ Create the scoped keepers for each submodule keeper and authentication keeper + scopedICAControllerKeeper := app.CapabilityKeeper.ScopeToModule(icacontrollertypes.SubModuleName) + scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName) + +... + +/ Create the Keeper for each submodule +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCKeeper.ChannelKeeper, / may be replaced with middleware such as ics29 fee + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), +) + +app.ICAHostKeeper.WithQueryRouter(app.GRPCQueryRouter()) + +/ Create Interchain Accounts AppModule + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, &app.ICAHostKeeper) + +/ Create your Interchain Accounts authentication module +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + +/ ICA auth AppModule + icaAuthModule := 
icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + +/ ICA auth IBC Module + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack and host IBC module as desired + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icahosttypes.SubModuleName, icaHostIBCModule). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note: the authentication module is routed to the top level of the middleware stack + +... + +/ Register Interchain Accounts and authentication module AppModules +app.moduleManager = module.NewManager( + ... + icaModule, + icaAuthModule, +) + +... + +/ Add Interchain Accounts to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add Interchain Accounts to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + icatypes.ModuleName, + ... +) + +/ Add Interchain Accounts module InitGenesis logic +app.moduleManager.SetOrderInitGenesis( + ... + icatypes.ModuleName, + ... +) + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) + +paramskeeper.Keeper { + ... + paramsKeeper.Subspace(icahosttypes.SubModuleName) + +paramsKeeper.Subspace(icacontrollertypes.SubModuleName) + ... +} +``` + +## Using submodules exclusively + +As described above, the Interchain Accounts application module is structured to support enabling controller or host functionality exclusively. +This can be achieved by omitting either the controller or the host `Keeper` from the Interchain Accounts `NewAppModule` constructor function, and mounting only the desired submodule via the `IBCRouter`.
+Alternatively, submodules can be enabled and disabled dynamically using [on-chain parameters](/docs/ibc/v8.5.x/apps/interchain-accounts/parameters). + +The following snippets show basic examples of statically disabling submodules using `app.go`. + +### Disabling controller chain functionality + +```go +/ Create Interchain Accounts AppModule omitting the controller keeper + icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper) + +/ Create host IBC Module + icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper) + +/ Register host route +ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule) +``` + +### Disabling host chain functionality + +```go expandable +/ Create Interchain Accounts AppModule omitting the host keeper + icaModule := ica.NewAppModule(&app.ICAControllerKeeper, nil) + +/ Create your Interchain Accounts authentication module, setting up the Keeper, AppModule and IBCModule appropriately +app.ICAAuthKeeper = icaauthkeeper.NewKeeper(appCodec, keys[icaauthtypes.StoreKey], app.ICAControllerKeeper) + icaAuthModule := icaauth.NewAppModule(appCodec, app.ICAAuthKeeper) + icaAuthIBCModule := icaauth.NewIBCModule(app.ICAAuthKeeper) + +/ Create controller IBC application stack + icaControllerStack := icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) + +/ Register controller and authentication routes +ibcRouter. + AddRoute(icacontrollertypes.SubModuleName, icaControllerStack). + AddRoute(icaauthtypes.ModuleName, icaControllerStack) / Note, the authentication module is routed to the top level of the middleware stack +``` diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/keeper-api.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/keeper-api.mdx new file mode 100644 index 00000000..d4c5045e --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/keeper-api.mdx @@ -0,0 +1,126 @@ +--- +title: Keeper API +description: This document is deprecated and will be removed in future releases. 
+--- + +## Deprecation Notice + +**This document is deprecated and will be removed in future releases**. + +The controller submodule keeper exposes two legacy functions that allow custom authentication modules, respectively, to register interchain accounts and to send packets to them. + +## `RegisterInterchainAccount` + +The authentication module can begin registering interchain accounts by calling `RegisterInterchainAccount`: + +```go +if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, connectionID, owner.String(), version); err != nil { + return err +} + +return nil +``` + +The `version` argument is used to support ICS-29 fee middleware for relayer incentivization of ICS-27 packets. Consumers of `RegisterInterchainAccount` are expected to build the appropriate JSON-encoded version string themselves and pass it accordingly. If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that the channel handshake can proceed.
+ +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS-29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS-29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := keeper.icaControllerKeeper.RegisterInterchainAccount(ctx, controllerConnectionID, owner.String(), string(feeEnabledVersion)); err != nil { + return err +} +``` + +> Since ibc-go v8.3.0 the default ordering of new ICA channels created when invoking `RegisterInterchainAccount` has changed from `ORDERED` to `UNORDERED`. If this default behaviour does not meet your use case, please use the function `RegisterInterchainAccountWithOrdering` (available since ibc-go v8.3.0), which takes an extra parameter that can be used to specify the ordering of the channel. 
+ +## `SendTx` + +The authentication module can attempt to send a packet by calling `SendTx`: + +```go expandable +/ Authenticate owner +/ perform custom logic + +/ Construct controller portID based on interchain account owner address +portID, err := icatypes.NewControllerPortID(owner.String()) + if err != nil { + return err +} + +/ Obtain data to be sent to the host chain. +/ In this example, the owner of the interchain account would like to send a bank MsgSend to the host chain. +/ The appropriate serialization function should be called. The host chain must be able to deserialize the transaction. +/ If the host chain is using the ibc-go host module, `SerializeCosmosTx` should be used. + msg := &banktypes.MsgSend{ + FromAddress: fromAddr, + ToAddress: toAddr, + Amount: amt +} + +data, err := icatypes.SerializeCosmosTx(keeper.cdc, []proto.Message{ + msg +}) + if err != nil { + return err +} + +/ Construct packet data + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: data, +} + +/ Obtain timeout timestamp +/ An appropriate timeout timestamp must be determined based on the usage of the interchain account. +/ If the packet times out, the channel will be closed requiring a new channel to be created. + timeoutTimestamp := obtainTimeoutTimestamp() + +/ Send the interchain accounts packet, returning the packet sequence +/ A nil channel capability can be passed, since the controller submodule (and not the authentication module) +/ claims the channel capability since ibc-go v6. +seq, err = keeper.icaControllerKeeper.SendTx(ctx, nil, portID, packetData, timeoutTimestamp) +``` + +The data within an `InterchainAccountPacketData` must be serialized using a format supported by the host chain. +If the host chain is using the ibc-go host chain submodule, `SerializeCosmosTx` should be used. If the `InterchainAccountPacketData.Data` is serialized using a format not supported by the host chain, the packet will not be successfully received. 
diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/messages.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/messages.mdx new file mode 100644 index 00000000..6d35091f --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/messages.mdx @@ -0,0 +1,152 @@ +--- +title: Messages +description: >- + An Interchain Accounts channel handshake can be initiated using + MsgRegisterInterchainAccount: +--- + +## `MsgRegisterInterchainAccount` + +An Interchain Accounts channel handshake can be initiated using `MsgRegisterInterchainAccount`: + +```go +type MsgRegisterInterchainAccount struct { + Owner string + ConnectionID string + Version string + Ordering channeltypes.Order +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string or contains more than 2048 bytes. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). + +This message will construct a new `MsgChannelOpenInit` on chain and route it to the core IBC message server to initiate the opening step of the channel handshake. + +The controller submodule will generate a new port identifier and claim the associated port capability. The caller is expected to provide an appropriate application version string. For example, this may be an ICS-27 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L11) type or an ICS-29 JSON encoded [`Metadata`](https://github.com/cosmos/ibc-go/blob/v6.0.0/proto/ibc/applications/fee/v1/metadata.proto#L11) type with a nested application version. +If the `Version` string is omitted, the controller submodule will construct a default version string in the `OnChanOpenInit` handshake callback. + +```go +type MsgRegisterInterchainAccountResponse struct { + ChannelID string + PortId string +} +``` + +The `ChannelID` and `PortID` are returned in the message response. 
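The failure conditions listed above can be sketched as a small runnable check. The struct and `validateBasic` helper below are local stand-ins, not ibc-go's actual `ValidateBasic` (which lives in the controller types), and the identifier pattern is a deliberate simplification of the ICS-24 rules:

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
)

// Local stand-in mirroring the validation rules described above.
type msgRegisterInterchainAccount struct {
	Owner        string
	ConnectionID string
}

// Simplified identifier check; real ICS-24 validation permits a broader
// character set and length range.
var connectionIDFormat = regexp.MustCompile(`^connection-[0-9]+$`)

func (m msgRegisterInterchainAccount) validateBasic() error {
	// Owner must be non-empty and at most 2048 bytes.
	if m.Owner == "" || len(m.Owner) > 2048 {
		return errors.New("owner must be non-empty and at most 2048 bytes")
	}
	// ConnectionID must be a valid identifier.
	if !connectionIDFormat.MatchString(m.ConnectionID) {
		return errors.New("invalid connection identifier")
	}
	return nil
}

func main() {
	ok := msgRegisterInterchainAccount{Owner: "cosmos1...", ConnectionID: "connection-0"}
	bad := msgRegisterInterchainAccount{Owner: "", ConnectionID: "connection-0"}
	fmt.Println(ok.validateBasic(), bad.validateBasic())
}
```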
+ +## `MsgSendTx` + +An Interchain Accounts transaction can be executed on a remote host chain by sending a `MsgSendTx` from the corresponding controller chain: + +```go +type MsgSendTx struct { + Owner string + ConnectionID string + PacketData InterchainAccountPacketData + RelativeTimeout uint64 +} +``` + +This message is expected to fail if: + +* `Owner` is an empty string or contains more than 2048 bytes. +* `ConnectionID` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `PacketData` contains an `UNSPECIFIED` type enum, the length of `Data` bytes is zero, or the `Memo` field exceeds 256 characters in length. +* `RelativeTimeout` is zero. + +This message will create a new IBC packet with the provided `PacketData` and send it via the channel associated with the `Owner` and `ConnectionID`. +The `PacketData` is expected to contain a list of serialized `[]sdk.Msg` in the form of `CosmosTx`. Please note that the signer field of each `sdk.Msg` must be the interchain account address. +When the packet is relayed to the host chain, the `PacketData` is unmarshalled and the messages are authenticated and executed. + +```go +type MsgSendTxResponse struct { + Sequence uint64 +} +``` + +The packet `Sequence` is returned in the message response. + +### Queries + +It is possible to use [`MsgModuleQuerySafe`](https://github.com/cosmos/ibc-go/blob/eecfa5c09a4c38a5c9f2cc2a322d2286f45911da/proto/ibc/applications/interchain_accounts/host/v1/tx.proto#L41-L51) to execute a list of queries on the host chain. This message can be included in the list of encoded `sdk.Msg`s of `InterchainAccountPacketData`. The host chain will return the responses for all the queries in the acknowledgement.
Please note that only module safe queries can be executed ([deterministic queries that are safe to be called from within the state machine](https://docs.cosmos.network/main/build/building-modules/query-services#calling-queries-from-the-state-machine)). + +The queries available from Cosmos SDK are: + +```plaintext expandable +/cosmos.auth.v1beta1.Query/Accounts +/cosmos.auth.v1beta1.Query/Account +/cosmos.auth.v1beta1.Query/AccountAddressByID +/cosmos.auth.v1beta1.Query/Params +/cosmos.auth.v1beta1.Query/ModuleAccounts +/cosmos.auth.v1beta1.Query/ModuleAccountByName +/cosmos.auth.v1beta1.Query/AccountInfo +/cosmos.bank.v1beta1.Query/Balance +/cosmos.bank.v1beta1.Query/AllBalances +/cosmos.bank.v1beta1.Query/SpendableBalances +/cosmos.bank.v1beta1.Query/SpendableBalanceByDenom +/cosmos.bank.v1beta1.Query/TotalSupply +/cosmos.bank.v1beta1.Query/SupplyOf +/cosmos.bank.v1beta1.Query/Params +/cosmos.bank.v1beta1.Query/DenomMetadata +/cosmos.bank.v1beta1.Query/DenomMetadataByQueryString +/cosmos.bank.v1beta1.Query/DenomsMetadata +/cosmos.bank.v1beta1.Query/DenomOwners +/cosmos.bank.v1beta1.Query/SendEnabled +/cosmos.circuit.v1.Query/Account +/cosmos.circuit.v1.Query/Accounts +/cosmos.circuit.v1.Query/DisabledList +/cosmos.staking.v1beta1.Query/Validators +/cosmos.staking.v1beta1.Query/Validator +/cosmos.staking.v1beta1.Query/ValidatorDelegations +/cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations +/cosmos.staking.v1beta1.Query/Delegation +/cosmos.staking.v1beta1.Query/UnbondingDelegation +/cosmos.staking.v1beta1.Query/DelegatorDelegations +/cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +/cosmos.staking.v1beta1.Query/Redelegations +/cosmos.staking.v1beta1.Query/DelegatorValidators +/cosmos.staking.v1beta1.Query/DelegatorValidator +/cosmos.staking.v1beta1.Query/HistoricalInfo +/cosmos.staking.v1beta1.Query/Pool +/cosmos.staking.v1beta1.Query/Params +``` + +And the query available from ibc-go is: + +```plaintext 
+/ibc.core.client.v1.Query/VerifyMembership +``` + +The following code block shows an example of how `MsgModuleQuerySafe` can be used to query the account balance of an account on the host chain. The resulting packet data variable is used to set the `PacketData` of `MsgSendTx`. + +```go expandable +balanceQuery := banktypes.NewQueryBalanceRequest("cosmos1...", "uatom") + +queryBz, err := balanceQuery.Marshal() + +/ signer of message must be the interchain account on the host + queryMsg := icahosttypes.NewMsgModuleQuerySafe("cosmos2...", []*icahosttypes.QueryRequest{ + { + Path: "/cosmos.bank.v1beta1.Query/Balance", + Data: queryBz, +}, +}) + +bz, err := icatypes.SerializeCosmosTx(cdc, []proto.Message{ + queryMsg +}, icatypes.EncodingProtobuf) + packetData := icatypes.InterchainAccountPacketData{ + Type: icatypes.EXECUTE_TX, + Data: bz, + Memo: "", +} +``` + +## Atomicity + +As the Interchain Accounts module supports the execution of multiple transactions using the Cosmos SDK `Msg` interface, it provides the same atomicity guarantees as Cosmos SDK-based applications, leveraging the [`CacheMultiStore`](https://docs.cosmos.network/main/learn/advanced/store#cachemultistore) architecture provided by the [`Context`](https://docs.cosmos.network/main/learn/advanced/context.html) type. + +This provides atomic execution of transactions when using Interchain Accounts, where state changes are only committed if all `Msg`s succeed. diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/overview.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/overview.mdx new file mode 100644 index 00000000..747d3489 --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/overview.mdx @@ -0,0 +1,42 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the Interchain Accounts module is + +## What is the Interchain Accounts module? + +Interchain Accounts is the Cosmos SDK implementation of the ICS-27 protocol, which enables cross-chain account management built upon IBC. 
+ +### How does an interchain account differ from a regular account? + +Regular accounts use a private key to sign transactions. Interchain Accounts are instead controlled programmatically by counterparty chains via IBC packets. + +## Concepts + +`Host Chain`: The chain where the interchain account is registered. The host chain listens for IBC packets from a controller chain containing instructions (e.g. Cosmos SDK messages) that the interchain account will execute. + +`Controller Chain`: The chain registering and controlling an account on a host chain. The controller chain sends IBC packets to the host chain to control the account. + +`Interchain Account`: An account on a host chain created using the ICS-27 protocol. An interchain account has all the capabilities of a normal account. However, rather than signing transactions with a private key, a controller chain will send IBC packets to the host chain to signal which transactions the interchain account should execute. + +`Authentication Module`: A custom application module on the controller chain that uses the Interchain Accounts module to build custom logic for the creation & management of interchain accounts. It can be either an IBC application module using the [legacy API](/docs/ibc/v8.5.x/apps/interchain-accounts/legacy/keeper-api), or a regular Cosmos SDK application module sending messages to the controller submodule's `MsgServer` (this is the recommended approach from ibc-go v6 if access to packet callbacks is not needed). Please note that the legacy API will eventually be removed and IBC applications will not be able to use it in later releases. + +## SDK security model + +SDK modules on a chain are assumed to be trustworthy. For example, there are no checks to prevent an untrustworthy module from accessing the bank keeper. + +The implementation of ICS-27 in ibc-go uses this assumption in its security considerations.
+ +The implementation assumes other IBC application modules will not bind to ports within the ICS-27 namespace. + +## Channel Closure + +The provided interchain account host and controller implementations do not support `ChanCloseInit`. However, they do support `ChanCloseConfirm`. +This means that the host and controller modules cannot close channels, but they will confirm channel closures initiated by other implementations of ICS-27. + +In the event of a channel closing (due to a packet timeout in an ordered channel, for example), the interchain account associated with that channel can become accessible again if a new channel is created with a (JSON-formatted) version string that encodes the exact same `Metadata` information of the previous channel. The channel can be reopened using either [`MsgRegisterInterchainAccount`](/docs/ibc/v8.5.x/apps/interchain-accounts/messages#msgregisterinterchainaccount) or `MsgChannelOpenInit`. If `MsgRegisterInterchainAccount` is used, then it is possible to leave the `version` field of the message empty, since it will be filled in by the controller submodule. If `MsgChannelOpenInit` is used, then the `version` field must be provided with the correct JSON-encoded `Metadata` string. See section [Understanding Active Channels](/docs/ibc/v8.5.x/apps/interchain-accounts/active-channels#understanding-active-channels) for more information. + +When reopening a channel with the default controller submodule, the ordering of the channel cannot be changed. In order to change the ordering of the channel, the channel has to go through a [channel upgrade handshake](/docs/ibc/v8.5.x/ibc/channel-upgrades) or reopen the channel with a custom controller implementation. 
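When reopening a channel this way, the JSON-encoded `Metadata` version string must match the previous channel's metadata exactly. As an illustrative sketch (the field names follow the ICS-27 `Metadata` proto definition; the connection identifiers and address below are placeholders):

```json
{
  "version": "ics27-1",
  "controller_connection_id": "connection-0",
  "host_connection_id": "connection-0",
  "address": "cosmos1...",
  "encoding": "proto3",
  "tx_type": "sdk_multi_msg"
}
```

All of these fields must encode the exact same information as the previous channel's `Metadata` for the interchain account to become accessible again.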
diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/parameters.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/parameters.mdx new file mode 100644 index 00000000..eb8f805b --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/parameters.mdx @@ -0,0 +1,63 @@ +--- +title: Parameters +description: >- + The Interchain Accounts module contains the following on-chain parameters, + logically separated for each distinct submodule: +--- + +The Interchain Accounts module contains the following on-chain parameters, logically separated for each distinct submodule: + +## Controller Submodule Parameters + +| Name | Type | Default Value | +| ------------------- | ---- | ------------- | +| `ControllerEnabled` | bool | `true` | + +### ControllerEnabled + +The `ControllerEnabled` parameter controls a chain's ability to service ICS-27 controller-specific logic. This includes the sending of Interchain Accounts packet data as well as the following ICS-26 callback handlers: + +* `OnChanOpenInit` +* `OnChanOpenAck` +* `OnChanCloseConfirm` +* `OnAcknowledgementPacket` +* `OnTimeoutPacket` + +## Host Submodule Parameters + +| Name | Type | Default Value | +| --------------- | --------- | ------------- | +| `HostEnabled` | bool | `true` | +| `AllowMessages` | \[]string | `["*"]` | + +### HostEnabled + +The `HostEnabled` parameter controls a chain's ability to service ICS-27 host-specific logic. This includes the following ICS-26 callback handlers: + +* `OnChanOpenTry` +* `OnChanOpenConfirm` +* `OnChanCloseConfirm` +* `OnRecvPacket` + +### AllowMessages + +The `AllowMessages` parameter provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute by defining an allowlist using the Protobuf message type URL format.
+ +For example, a Cosmos SDK-based chain that elects to provide hosted Interchain Accounts with the ability of governance voting and staking delegations will define its parameters as follows: + +```json +"params": { + "host_enabled": true, + "allow_messages": ["/cosmos.staking.v1beta1.MsgDelegate", + "/cosmos.gov.v1beta1.MsgVote"] +} +``` + +There is also a special wildcard `"*"` value which allows any type of message to be executed by the interchain account. This must be the only value in the `allow_messages` array. + +```json +"params": { + "host_enabled": true, + "allow_messages": ["*"] +} +``` diff --git a/docs/ibc/v8.5.x/apps/interchain-accounts/tx-encoding.mdx b/docs/ibc/v8.5.x/apps/interchain-accounts/tx-encoding.mdx new file mode 100644 index 00000000..f2b9ca97 --- /dev/null +++ b/docs/ibc/v8.5.x/apps/interchain-accounts/tx-encoding.mdx @@ -0,0 +1,57 @@ +--- +title: Transaction Encoding +description: >- + When orchestrating an interchain account transaction, which comprises multiple + sdk.Msg objects represented as Any types, the transactions must be encoded as + bytes within InterchainAccountPacketData. +--- + +When orchestrating an interchain account transaction, which comprises multiple `sdk.Msg` objects represented as `Any` types, the transactions must be encoded as bytes within [`InterchainAccountPacketData`](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/packet.proto#L21-L26). + +```protobuf +/ InterchainAccountPacketData is comprised of a raw transaction, type of transaction and optional memo field. +message InterchainAccountPacketData { + Type type = 1; + bytes data = 2; + string memo = 3; +} +``` + +The `data` field must be encoded as a [`CosmosTx`](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/packet.proto#L28-L31). + +```protobuf +/ CosmosTx contains a list of sdk.Msg's. It should be used when sending transactions to an SDK host chain. 
+message CosmosTx { + repeated google.protobuf.Any messages = 1; +} +``` + +The encoding method for `CosmosTx` is determined during the channel handshake process. If the channel version [metadata's `encoding` field](https://github.com/cosmos/ibc-go/blob/v7.2.0/proto/ibc/applications/interchain_accounts/v1/metadata.proto#L22) is marked as `proto3`, then `CosmosTx` undergoes protobuf encoding. Conversely, if the field is set to `proto3json`, then [proto3 json](https://protobuf.dev/programming-guides/proto3/#json) encoding takes place, which generates a JSON representation of the protobuf message. + +## Protobuf Encoding + +Protobuf encoding serves as the standard encoding process for `CosmosTx`. This occurs if the channel handshake initiates with an empty channel version metadata or if the `encoding` field explicitly denotes `proto3`. In Golang, the protobuf encoding procedure utilizes the `proto.Marshal` function. Every protobuf autogenerated Golang type comes equipped with a `Marshal` method that can be employed to encode the message. + +## (Protobuf) JSON Encoding + +The proto3 JSON encoding presents an alternative encoding technique for `CosmosTx`. It is selected if the channel handshake begins with the channel version metadata `encoding` field labeled as `proto3json`. In Golang, the Proto3 canonical encoding in JSON is implemented by the `"github.com/cosmos/gogoproto/jsonpb"` package. Within Cosmos SDK, the `ProtoCodec` structure implements the `JSONCodec` interface, leveraging the `jsonpb` package. This method generates a JSON format as follows: + +```json expandable +{ + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1...", + "to_address": "cosmos1...", + "amount": [ + { + "denom": "uatom", + "amount": "1000000" + } + ] + } + ] +} +``` + +Here, the `"messages"` array is populated with transactions. 
Each transaction is represented as a JSON object with the `@type` field denoting the transaction type and the remaining fields representing the transaction's attributes. diff --git a/docs/ibc/v8.5.x/apps/transfer/authorizations.mdx b/docs/ibc/v8.5.x/apps/transfer/authorizations.mdx new file mode 100644 index 00000000..2c166595 --- /dev/null +++ b/docs/ibc/v8.5.x/apps/transfer/authorizations.mdx @@ -0,0 +1,54 @@ +--- +title: Authorizations +--- + +`TransferAuthorization` implements the `Authorization` interface for `ibc.applications.transfer.v1.MsgTransfer`. It allows a granter to grant a grantee the privilege to submit `MsgTransfer` on its behalf. Please see the [Cosmos SDK docs](https://docs.cosmos.network/v0.47/modules/authz) for more details on granting privileges via the `x/authz` module. + +More specifically, the granter allows the grantee to transfer funds that belong to the granter over a specified channel. + +For the specified channel, the granter must be able to specify a spend limit of a specific denomination they wish to allow the grantee to be able to transfer. + +The granter may be able to specify the list of addresses that they allow to receive funds. If empty, then all addresses are allowed. + +It takes: + +* a `SourcePort` and a `SourceChannel` which together comprise the unique transfer channel identifier over which authorized funds can be transferred. + +* a `SpendLimit` that specifies the maximum amount of tokens the grantee can transfer. The `SpendLimit` is updated as the tokens are transferred, unless the sentinel value of the maximum value for a 256-bit unsigned integer (i.e. 2^256 - 1) is used for the amount, in which case the `SpendLimit` will not be updated (please be aware that using this sentinel value will grant the grantee the privilege to transfer **all** the tokens of a given denomination available at the granter's account). 
The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used. This `SpendLimit` may also be updated to increase or decrease the limit as the granter wishes. + +* an `AllowList` list that specifies the list of addresses that are allowed to receive funds. If this list is empty, then all addresses are allowed to receive funds from the `TransferAuthorization`. + +* an `AllowedPacketData` list that specifies the list of memo strings that are allowed to be included in the memo field of the packet. If this list is empty, then only an empty memo is allowed (a `memo` field with non-empty content will be denied). If this list includes a single element equal to `"*"`, then any content in the `memo` field will be allowed. + +Setting a `TransferAuthorization` is expected to fail if: + +* the spend limit is nil +* the denomination of the spend limit is an invalid coin type +* the source port ID is invalid +* the source channel ID is invalid +* there are duplicate entries in the `AllowList` +* the `memo` field is not allowed by `AllowedPacketData` + +Below is the `TransferAuthorization` message: + +```go expandable +func NewTransferAuthorization(allocations ...Allocation) *TransferAuthorization { + return &TransferAuthorization{ + Allocations: allocations, +} +} + +type Allocation struct { + / the port on which the packet will be sent + SourcePort string + / the channel by which the packet will be sent + SourceChannel string + / spend limitation on the channel + SpendLimit sdk.Coins + / allow list of receivers, an empty allow list permits any receiver address + AllowList []string + / allow list of packet data keys, an empty list prohibits all packet data keys; + / a list only with "*" permits any packet data key + AllowedPacketData []string +} +``` diff --git a/docs/ibc/v8.5.x/apps/transfer/client.mdx b/docs/ibc/v8.5.x/apps/transfer/client.mdx new file mode 100644 index 00000000..5137789c --- /dev/null +++ 
b/docs/ibc/v8.5.x/apps/transfer/client.mdx @@ -0,0 +1,67 @@ +--- +title: Client +description: >- + A user can query and interact with the transfer module using the CLI. Use the + --help flag to discover the available commands: +--- + +## CLI + +A user can query and interact with the `transfer` module using the CLI. Use the `--help` flag to discover the available commands: + +### Query + +The `query` commands allow users to query `transfer` state. + +```shell +simd query ibc-transfer --help +``` + +#### `total-escrow` + +The `total-escrow` command allows users to query the total amount in escrow for a particular coin denomination regardless of the transfer channel from where the coins were sent out. + +```shell +simd query ibc-transfer total-escrow [denom] [flags] +``` + +Example: + +```shell +simd query ibc-transfer total-escrow samoleans +``` + +Example Output: + +```shell +amount: "100" +``` + +## gRPC + +A user can query the `transfer` module using gRPC endpoints. + +### `TotalEscrowForDenom` + +The `TotalEscrowForDenom` endpoint allows users to query the total amount in escrow for a particular coin denomination regardless of the transfer channel from where the coins were sent out. 
+ +```shell +ibc.applications.transfer.v1.Query/TotalEscrowForDenom +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"samoleans"}' \ + localhost:9090 \ + ibc.applications.transfer.v1.Query/TotalEscrowForDenom +``` + +Example output: + +```shell +{ + "amount": "100" +} +``` diff --git a/docs/ibc/v8.5.x/apps/transfer/events.mdx b/docs/ibc/v8.5.x/apps/transfer/events.mdx new file mode 100644 index 00000000..799424b6 --- /dev/null +++ b/docs/ibc/v8.5.x/apps/transfer/events.mdx @@ -0,0 +1,52 @@ +--- +title: Events +--- + +## `MsgTransfer` + +| Type | Attribute Key | Attribute Value | +| ------------- | ------------- | --------------- | +| ibc\_transfer | sender | `{sender}` | +| ibc\_transfer | receiver | `{receiver}` | +| ibc\_transfer | amount | `{amount}` | +| ibc\_transfer | denom | `{denom}` | +| ibc\_transfer | memo | `{memo}` | +| message | module | transfer | + +## `OnRecvPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | +| fungible\_token\_packet | success | `{ackSuccess}` | +| fungible\_token\_packet | error | `{ackError}` | +| denomination\_trace | trace\_hash | `{hex\_hash}` | +| denomination\_trace | denom | `{voucherDenom}` | + +## `OnAcknowledgePacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | sender | `{sender}` | +| fungible\_token\_packet | receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | 
+| fungible\_token\_packet | acknowledgement | `{ack.String()}` | +| fungible\_token\_packet | success / error | `{ack.Response}` | + +## `OnTimeoutPacket` callback + +| Type | Attribute Key | Attribute Value | +| ----------------------- | ---------------- | --------------- | +| fungible\_token\_packet | module | transfer | +| fungible\_token\_packet | refund\_receiver | `{receiver}` | +| fungible\_token\_packet | denom | `{denom}` | +| fungible\_token\_packet | amount | `{amount}` | +| fungible\_token\_packet | memo | `{memo}` | diff --git a/docs/ibc/v8.5.x/apps/transfer/messages.mdx b/docs/ibc/v8.5.x/apps/transfer/messages.mdx new file mode 100644 index 00000000..160bf41c --- /dev/null +++ b/docs/ibc/v8.5.x/apps/transfer/messages.mdx @@ -0,0 +1,57 @@ +--- +title: Messages +description: 'A fungible token cross-chain transfer is achieved by using the MsgTransfer:' +--- + +## `MsgTransfer` + +A fungible token cross-chain transfer is achieved by using the `MsgTransfer`: + +```go +type MsgTransfer struct { + SourcePort string + SourceChannel string + Token sdk.Coin + Sender string + Receiver string + TimeoutHeight ibcexported.Height + TimeoutTimestamp uint64 + Memo string +} +``` + +This message is expected to fail if: + +* `SourcePort` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `SourceChannel` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)). +* `Token` is invalid: + * `Token.Amount` is not positive. + * `Token.Denom` is not a valid IBC denomination as per [ADR 001 - Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md). +* `Sender` is empty. +* `Receiver` is empty or contains more than 2048 bytes.
+* `Memo` contains more than 32768 bytes. +* `TimeoutHeight` and `TimeoutTimestamp` are both zero. + +This message will send a fungible token to the counterparty chain represented by the counterparty Channel End connected to the Channel End with the identifiers `SourcePort` and `SourceChannel`. + +The denomination provided for transfer should correspond to the same denomination represented on this chain. The prefixes will be added as necessary by the receiving chain. + +If the `Amount` is set to the maximum value for a 256-bit unsigned integer (i.e. 2^256 - 1), then the whole balance of the corresponding denomination will be transferred. The helper function `UnboundedSpendLimit` in the `types` package of the `transfer` module provides the sentinel value that can be used. + +### Memo + +The memo field was added to allow applications and users to attach metadata to transfer packets. The field is optional and may be left empty. When it is used to attach metadata for a particular middleware, the memo field should be represented as a JSON object where different middlewares use different JSON keys. + +For example, the following memo field is used by the [callbacks middleware](/docs/ibc/v8.5.x/middleware/callbacks/overview) to attach a source callback to a transfer packet: + +```jsonc +{ + "src_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString" + } +} +``` + +You can find more information about other applications that use the memo field in the [chain registry](https://github.com/cosmos/chain-registry/blob/master/_memo_keys/ICS20_memo_keys.json). diff --git a/docs/ibc/v8.5.x/apps/transfer/metrics.mdx b/docs/ibc/v8.5.x/apps/transfer/metrics.mdx new file mode 100644 index 00000000..37fdc97a --- /dev/null +++ b/docs/ibc/v8.5.x/apps/transfer/metrics.mdx @@ -0,0 +1,13 @@ +--- +title: Metrics +description: The IBC transfer application module exposes the following set of metrics.
+--- + +The IBC transfer application module exposes the following set of [metrics](https://docs.cosmos.network/main/learn/advanced/telemetry). + +| Metric | Description | Unit | Type | +| :---------------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | +| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | +| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | +| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | +| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | diff --git a/docs/ibc/v8.5.x/apps/transfer/overview.mdx b/docs/ibc/v8.5.x/apps/transfer/overview.mdx new file mode 100644 index 00000000..e1c7cb0f --- /dev/null +++ b/docs/ibc/v8.5.x/apps/transfer/overview.mdx @@ -0,0 +1,135 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the token Transfer module is + +## What is the Transfer module? + +Transfer is the Cosmos SDK implementation of the [ICS-20](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer) protocol, which enables cross-chain fungible token transfers. + +## Concepts + +### Acknowledgements + +ICS20 uses the recommended acknowledgement format as specified by [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope). + +A successful receive of a transfer packet will result in a Result Acknowledgement being written +with the value `[]byte{byte(1)}` in the `Response` field. + +An unsuccessful receive of a transfer packet will result in an Error Acknowledgement being written +with the error message in the `Response` field. 
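To make the envelope concrete, here is a minimal, standard-library-only Go sketch of the two acknowledgement shapes. This models the ICS 04 JSON envelope for illustration only; it does not use the actual ibc-go `channeltypes` helpers:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ack models the ICS 04 acknowledgement envelope: exactly one of
// Result or Error is set.
type ack struct {
	Result []byte `json:"result,omitempty"` // []byte marshals to base64 in JSON
	Error  string `json:"error,omitempty"`
}

// resultAck returns the JSON written for a successful receive.
func resultAck(result []byte) string {
	bz, err := json.Marshal(ack{Result: result})
	if err != nil {
		panic(err)
	}
	return string(bz)
}

// errorAck returns the JSON written for a failed receive.
func errorAck(msg string) string {
	bz, err := json.Marshal(ack{Error: msg})
	if err != nil {
		panic(err)
	}
	return string(bz)
}

func main() {
	// Successful ICS-20 receive: the Response field carries []byte{byte(1)},
	// which base64-encodes to "AQ==".
	fmt.Println(resultAck([]byte{byte(1)})) // {"result":"AQ=="}

	// Failed receive: the Response field carries the error message.
	fmt.Println(errorAck("insufficient funds")) // {"error":"insufficient funds"}
}
```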
+ +### Denomination trace + +The denomination trace corresponds to the information that allows a token to be traced back to its +origin chain. It contains a sequence of port and channel identifiers ordered from the most recent to +the oldest in the timeline of transfers. + +This information is included on the token's base denomination field in the form of a hash to prevent an +unbounded denomination length. For example, the token `transfer/channelToA/uatom` will be displayed +as `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2`. The human readable denomination +is stored using `x/bank` module's [denom metadata](https://docs.cosmos.network/main/build/modules/bank#denom-metadata) +feature. You may display the human readable denominations by querying balances with the `--resolve-denom` flag, as in: + +```shell +simd query bank balances [address] --resolve-denom +``` + +Each send to any chain other than the one it was previously received from is a movement forwards in +the token's timeline. This causes trace to be added to the token's history and the destination port +and destination channel to be prefixed to the denomination. In these instances the sender chain is +acting as the "source zone". When the token is sent back to the chain it previously received from, the +prefix is removed. This is a backwards movement in the token's timeline and the sender chain is +acting as the "sink zone". + +It is strongly recommended to read the full details of [ADR 001: Coin Source Tracing](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md) to understand the implications and context of the IBC token representations. 
+ +## UX suggestions for clients + +For clients (wallets, exchanges, applications, block explorers, etc.) that want to display the source of the token, it is recommended to use the following alternatives for each of the cases below: + +### Direct connection + +If the denomination trace contains a single identifier prefix pair (as in the example above), then the easiest way to retrieve the chain and light client identifier is to map the trace information directly. In summary, this requires querying the channel from the denomination trace identifiers, and then the counterparty client state using the counterparty port and channel identifiers from the retrieved channel. + +A general pseudo-algorithm would look like the following: + +1. Query the full denomination trace. +2. Query the channel with the `portID/channelID` pair, which corresponds to the first destination of the token. +3. Query the client state using the identifiers pair. Note that this query will return a `"Not Found"` response if the current chain is not connected to this channel. +4. Retrieve the client identifier or chain identifier from the client state (e.g. on Tendermint clients) and store it locally. + +Using the gRPC gateway client service the steps above would be, with a given IBC token `ibc/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` stored on `chainB`: + +1. `GET /ibc/apps/transfer/v1/denom_traces/7F1D3FCF4AE79E1554D670D1AD949A9BA4E4A3C76C63093E17E446A46061A7A2` -> `{"path": "transfer/channelToA", "base_denom": "uatom"}` +2. `GET /ibc/core/channel/v1/channels/channelToA/ports/transfer/client_state` -> `{"client_id": "clientA", "chain-id": "chainA", ...}` +3. `GET /ibc/core/channel/v1/channels/channelToA/ports/transfer` -> `{"channel_id": "channelToA", "port_id": "transfer", "counterparty": {"channel_id": "channelToB", "port_id": "transfer"}, ...}` +4. `GET /ibc/core/channel/v1/channels/channelToB/ports/transfer/client_state` -> `{"client_id": "clientB", "chain-id": "chainB", ...}` + +Then, the token transfer chain path for the `uatom` denomination would be: `chainA` -> `chainB`. + +### Multiple hops + +The multiple channel hops case applies when the token has passed through multiple chains between the original source and final destination chains. + +The IBC protocol doesn't know the topology of the overall network (i.e. the connections between chains and the identifier names between them). For this reason, in the multiple hops case, a particular chain in the timeline of the individual transfers can't query the chain and client identifiers of the other chains. + +Take for example the following sequence of transfers `A -> B -> C` for an IBC token, with a final prefix path (trace info) of `transfer/channelChainC/transfer/channelChainB`. What the paragraph above means is that even if chain `C` is directly connected to chain `A`, the port and channel identifiers that chain `B` uses to connect to chain `A` (e.g. `transfer/channelChainA`) can be completely different from the ones that chain `C` uses to connect to chain `A` (e.g. `transfer/channelToChainA`). + +Thus, the solutions that the IBC team recommends for clients are the following: + +- **Connect to all chains**: Connecting to all the chains in the timeline would allow clients to perform the queries outlined in the [direct connection](#direct-connection) section on each relevant chain. By repeatedly following the port and channel denomination trace transfer timeline, clients should always be able to find all the relevant identifiers. This comes at the tradeoff that the client must connect to nodes on each of the chains in order to perform the queries.
+- **Relayer as a Service (RaaS)**: A longer-term solution is to use/create a relayer service that could map the denomination trace to the chain path timeline for each token (i.e. `origin chain -> chain #1 -> ... -> chain #(n-1) -> final chain`). These services could provide merkle proofs in order to allow clients to optionally verify the path timeline correctness for themselves by running light clients. If the proofs are not verified, they should be considered as trusted third-party services. Additionally, clients would be advised to use RaaS providers that support the largest number of connections between chains in the ecosystem. Unfortunately, none of the existing public relayers (in [Golang](https://github.com/cosmos/relayer) and [Rust](https://github.com/informalsystems/ibc-rs)) provide this service to clients. + + + The only viable alternative for clients (at the time of writing) to trace tokens with multiple connection hops is to connect to all chains directly and perform the relevant queries on each of them in the sequence. + + +## Locked funds + +In some [exceptional cases](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md#exceptional-cases), a client state associated with a given channel cannot be updated. This causes funds from fungible tokens in that channel to be permanently locked, so they can no longer be transferred. + +To mitigate this, a client update governance proposal can be submitted to update the frozen client with a new valid header. Once the proposal passes, the client state will be unfrozen and the funds from the associated channels will then be unlocked. This mechanism only applies to clients that allow updates via governance, such as Tendermint clients.
+ +In addition to this, it's important to mention that a token must be sent back along the exact route +that it took originally in order to return it to its original form on the source chain (eg: the +Cosmos Hub for the `uatom`). Sending a token back to the same chain across a different channel will +**not** move the token back across its timeline. If a channel in the chain history closes before the +token can be sent back across that channel, then the token will not be returnable to its original +form. + +## Security considerations + +For safety, no other module must be capable of minting tokens with the `ibc/` prefix. The IBC +transfer module needs a subset of the denomination space that only it can create tokens in. + +## Channel Closure + +The IBC transfer module does not support channel closure. diff --git a/docs/ibc/v8.5.x/apps/transfer/params.mdx b/docs/ibc/v8.5.x/apps/transfer/params.mdx new file mode 100644 index 00000000..1df8573f --- /dev/null +++ b/docs/ibc/v8.5.x/apps/transfer/params.mdx @@ -0,0 +1,89 @@ +--- +title: Params +description: 'The IBC transfer application module contains the following parameters:' +--- + +The IBC transfer application module contains the following parameters: + +| Name | Type | Default Value | +| ---------------- | ---- | ------------- | +| `SendEnabled` | bool | `true` | +| `ReceiveEnabled` | bool | `true` | + +The IBC transfer module stores its parameters in its keeper with the prefix of `0x03`. + +## `SendEnabled` + +The `SendEnabled` parameter controls send cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred from the chain, set the `SendEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. 
+* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. + + +Doing so will prevent the token from being transferred between any accounts in the blockchain. + + +## `ReceiveEnabled` + +The transfers enabled parameter controls receive cross-chain transfer capabilities for all fungible tokens. + +To prevent a single token from being transferred to the chain, set the `ReceiveEnabled` parameter to `true` and then, depending on the Cosmos SDK version, do one of the following: + +* For Cosmos SDK v0.46.x or earlier, set the bank module's [`SendEnabled` parameter](https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/bank/spec/05_params.md#sendenabled) for the denomination to `false`. +* For Cosmos SDK versions above v0.46.x, set the bank module's `SendEnabled` entry for the denomination to `false` using `MsgSetSendEnabled` as a governance proposal. + + +Doing so will prevent the token from being transferred between any accounts in the blockchain. + + +## Queries + +Current parameter values can be queried via a query message. + +{/* Turn it into a github code snippet in docusaurus: */} + +```protobuf +/ proto/ibc/applications/transfer/v1/query.proto + +/ QueryParamsRequest is the request type for the Query/Params RPC method. +message QueryParamsRequest {} + +/ QueryParamsResponse is the response type for the Query/Params RPC method. +message QueryParamsResponse { + / params defines the parameters of the module. + Params params = 1; +} +``` + +To execute the query in `simd`, you use the following command: + +```bash +simd query ibc-transfer params +``` + +## Changing Parameters + +To change the parameter values, you must make a governance proposal that executes the `MsgUpdateParams` message. 
+ +```protobuf expandable +// proto/ibc/applications/transfer/v1/tx.proto + +// MsgUpdateParams is the Msg/UpdateParams request type. +message MsgUpdateParams { + // signer address (it may be the address that controls the module, which defaults to x/gov unless overwritten). + string signer = 1; + + // params defines the transfer parameters to update. + // + // NOTE: All parameters must be supplied. + Params params = 2 [(gogoproto.nullable) = false]; +} + +// MsgUpdateParamsResponse defines the response structure for executing a +// MsgUpdateParams message. +message MsgUpdateParamsResponse {} +``` diff --git a/docs/ibc/v8.5.x/apps/transfer/state-transitions.mdx b/docs/ibc/v8.5.x/apps/transfer/state-transitions.mdx new file mode 100644 index 00000000..fcc9169c --- /dev/null +++ b/docs/ibc/v8.5.x/apps/transfer/state-transitions.mdx @@ -0,0 +1,35 @@ +--- +title: State Transitions +description: >- + A successful fungible token send has two state transitions depending on + whether the transfer is a movement forward or backward in the token's + timeline: +--- + +## Send fungible tokens + +A successful fungible token send has two state transitions depending on whether the transfer is a movement forward or backward in the token's timeline: + +1. Sender chain is the source chain, *i.e.* a transfer to any chain other than the one it was previously received from is a movement forward in the token's timeline. This results in the following state transitions: + + * The coins are transferred to an escrow address (i.e. locked) on the sender chain. + * The coins are transferred to the receiving chain through IBC TAO logic. + +2. Sender chain is the sink chain, *i.e.* the token is sent back to the chain it previously received from. This is a backward movement in the token's timeline. This results in the following state transitions: + + * The coins (vouchers) are burned on the sender chain.
+ * The coins are transferred to the receiving chain through IBC TAO logic. + +## Receive fungible tokens + +A successful fungible token receive has two state transitions depending on whether the transfer is a movement forward or backward in the token's timeline: + +1. Receiver chain is the source chain. This is a backward movement in the token's timeline. This results in the following state transitions: + + * The leftmost port and channel identifier pair is removed from the token denomination prefix. + * The tokens are unescrowed and sent to the receiving address. + +2. Receiver chain is the sink chain. This is a movement forward in the token's timeline. This results in the following state transitions: + + * Token vouchers are minted by prefixing the destination port and channel identifiers to the trace information. + * The receiving chain stores the new trace information in the store (if not set already). + * The vouchers are sent to the receiving address. diff --git a/docs/ibc/v8.5.x/apps/transfer/state.mdx b/docs/ibc/v8.5.x/apps/transfer/state.mdx new file mode 100644 index 00000000..76c7af41 --- /dev/null +++ b/docs/ibc/v8.5.x/apps/transfer/state.mdx @@ -0,0 +1,12 @@ +--- +title: State +description: >- + The IBC transfer application module keeps state of the port to which the + module is bound and the denomination trace information as outlined in ADR + 001. +--- + +The IBC transfer application module keeps state of the port to which the module is bound and the denomination trace information as outlined in [ADR 001](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md).
+ +* `Port`: `0x01 -> ProtocolBuffer(string)` +* `DenomTrace`: `0x02 | []bytes(traceHash) -> ProtocolBuffer(DenomTrace)` diff --git a/docs/ibc/v8.5.x/changelog/release-notes.mdx b/docs/ibc/v8.5.x/changelog/release-notes.mdx new file mode 100644 index 00000000..3b124178 --- /dev/null +++ b/docs/ibc/v8.5.x/changelog/release-notes.mdx @@ -0,0 +1,37 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + + This page tracks all releases and changes from the + [cosmos/ibc-go](https://github.com/cosmos/ibc-go) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md#unreleased) + section. + + + + + ### Testing * [\#7430](https://github.com/cosmos/ibc-go/pull/7430) Update the + block proposer in test chains for each block. ### Bug Fixes * + (core/03-connection) [\#7397](https://github.com/cosmos/ibc-go/pull/7397) Skip + the genesis validation connectionID for localhost client. + + + + ### Bug Fixes * (apps/27-interchain-accounts) + [\#7277](https://github.com/cosmos/ibc-go/pull/7277) Use `GogoResolver` when + populating module query safe allow list to avoid panics from unresolvable + protobuf dependencies. + + + + ### Dependencies * [\#6828](https://github.com/cosmos/ibc-go/pull/6828) Bump + Cosmos SDK to v0.50.9. * [\#7222](https://github.com/cosmos/ibc-go/pull/7221) + Update ics23 to v0.11.0. ### State Machine Breaking * (core/03-connection) + [\#7129](https://github.com/cosmos/ibc-go/pull/7129) Remove verification of + self client and consensus state from connection handshake. + diff --git a/docs/ibc/v8.5.x/ibc/apps/apps.mdx b/docs/ibc/v8.5.x/ibc/apps/apps.mdx new file mode 100644 index 00000000..7338d9ea --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/apps/apps.mdx @@ -0,0 +1,519 @@ +--- +title: IBC Applications +description: >- + Learn how to configure your application to use IBC and send data packets to + other chains. 
+--- + +Learn how to configure your application to use IBC and send data packets to other chains. + +This document serves as a guide for developers who want to write their own Inter-blockchain +Communication Protocol (IBC) applications for custom use cases. + +Due to the modular design of the IBC protocol, IBC +application developers do not need to concern themselves with the low-level details of clients, +connections, and proof verification. Nevertheless, a brief explanation of the lower levels of the +stack is given so that application developers may have a high-level understanding of the IBC +protocol. Then the document goes into detail on the abstraction layer most relevant for application +developers (channels and ports), and describes how to define your own custom packets and +`IBCModule` callbacks. + +To have your module interact over IBC you must: bind to one or more ports, define your own packet data and acknowledgement structs as well as how to encode/decode them, and implement the +`IBCModule` interface. Below is a more detailed explanation of how to write an IBC application +module correctly. + + + +## Pre-requisite readings + +* [IBC Overview](/docs/ibc/v8.5.x/ibc/overview) +* [IBC default integration](/docs/ibc/v8.5.x/ibc/integration) + + + +## Create a custom IBC application module + +### Implement `IBCModule` Interface and callbacks + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This +interface contains all of the callbacks IBC expects modules to implement. This section will describe +the callbacks that are called during channel handshake execution.
+ +Here are the channel handshake callbacks that modules are expected to implement: + +```go expandable +/ Called by IBC Handler on MsgOpenInit +func (k Keeper) + +OnChanOpenInit(ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) + +error { + / OpenInit must claim the channelCapability that IBC passes into the callback + if err := k.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return err +} + + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + / Examples: Abort if order == UNORDERED, + / Abort if version is unsupported + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgOpenTry +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / OpenTry must claim the channelCapability that IBC passes into the callback + if err := k.scopedKeeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return err +} + + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + if err := checkArguments(args); err != nil { + return err +} + + / Construct application version + / IBC applications must return the appropriate application version + / This can be a simple string or it can be a complex version constructed + / from the counterpartyVersion and other arguments. + / The version returned will be the channel version used for both channel ends. 
+ appVersion := negotiateAppVersion(counterpartyVersion, args) + +return appVersion, nil +} + +/ Called by IBC Handler on MsgOpenAck +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgOpenConfirm +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} +``` + +The channel closing handshake will also invoke module callbacks that can return errors to abort the +closing handshake. Closing a channel is a 2-step handshake, the initiating chain calls +`ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`. + +```go expandable +/ Called by IBC Handler on MsgCloseInit +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgCloseConfirm +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} +``` + +#### Channel Handshake Version Negotiation + +Application modules are expected to verify versioning used during the channel handshake procedure. + +* `ChanOpenInit` callback should verify that the `MsgChanOpenInit.Version` is valid +* `ChanOpenTry` callback should construct the application version used for both channel ends. If no application version can be constructed, it must return an error. 
+* `ChanOpenAck` callback should verify that the `MsgChanOpenAck.CounterpartyVersion` is valid and supported. + +IBC expects application modules to perform application version negotiation in `OnChanOpenTry`. The negotiated version +must be returned to core IBC. If the version cannot be negotiated, an error should be returned. + +Versions must be strings but can implement any versioning structure. If your application plans to +have linear releases then semantic versioning is recommended. If your application plans to release +various features in between major releases then it is advised to use the same versioning scheme +as IBC. This versioning scheme specifies a version identifier and compatible feature set with +that identifier. Valid version selection includes selecting a compatible version identifier with +a subset of features supported by your application for that version. The struct used for this +scheme can be found in `03-connection/types`. + +Since the version type is a string, applications have the ability to do simple version verification +via string matching or they can use the already implemented versioning system and pass the proto +encoded version into each handshake call as necessary. + +ICS20 currently implements basic string matching with a single supported version. + +### Bind Ports + +Currently, ports must be bound on app initialization. A module may bind to ports in `InitGenesis` +like so: + +```go expandable +func InitGenesis(ctx sdk.Context, keeper keeper.Keeper, state types.GenesisState) { + / ...
other initialization logic + + / Only try to bind to port if it is not already bound, since we may already own + / port capability from capability InitGenesis + if !hasCapability(ctx, state.PortID) { + / module binds to desired ports on InitChain + / and claims returned capabilities + cap1 := keeper.IBCPortKeeper.BindPort(ctx, port1) + +cap2 := keeper.IBCPortKeeper.BindPort(ctx, port2) + +cap3 := keeper.IBCPortKeeper.BindPort(ctx, port3) + + / NOTE: The module's scoped capability keeper must be private + keeper.scopedKeeper.ClaimCapability(cap1) + +keeper.scopedKeeper.ClaimCapability(cap2) + +keeper.scopedKeeper.ClaimCapability(cap3) +} + + / ... more initialization logic +} +``` + +### Custom Packets + +Modules connected by a channel must agree on what application data they are sending over the +channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up +to each application module to determine how to implement this agreement. However, for most +applications this will happen as a version negotiation during the channel handshake. While more +complex version negotiation is possible to implement inside the channel opening handshake, a very +simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go). + +Thus, a module must define its custom packet data structure, along with a well-defined way to +encode and decode it to and from `[]byte`. + +```go expandable +/ Custom packet data defined in application module +type CustomPacketData struct { + / Custom fields ... +} + +EncodePacketData(packetData CustomPacketData) []byte { + / encode packetData to bytes +} + +DecodePacketData(encoded []byte) (CustomPacketData) { + / decode from bytes to packet data +} +``` + +Then a module must encode its packet data before sending it through IBC. 
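As a concrete, self-contained sketch of such an encode/decode pair, the following uses JSON purely for illustration (the struct, its fields, and the JSON encoding are assumptions, not a prescribed format; many IBC apps use protobuf instead):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CustomPacketData is a hypothetical application packet payload.
type CustomPacketData struct {
	Sender string `json:"sender"`
	Amount uint64 `json:"amount"`
}

// EncodePacketData marshals the packet data into the bytes placed in packet.Data.
func EncodePacketData(packetData CustomPacketData) ([]byte, error) {
	return json.Marshal(packetData)
}

// DecodePacketData unmarshals packet.Data back into the application struct.
func DecodePacketData(encoded []byte) (CustomPacketData, error) {
	var pd CustomPacketData
	err := json.Unmarshal(encoded, &pd)
	return pd, err
}

func main() {
	bz, err := EncodePacketData(CustomPacketData{Sender: "cosmos1...", Amount: 42})
	if err != nil {
		panic(err)
	}
	pd, err := DecodePacketData(bz)
	if err != nil {
		panic(err)
	}
	fmt.Println(pd.Sender, pd.Amount) // prints: cosmos1... 42
}
```

Whatever encoding is chosen, both channel ends must apply it identically; ICS20, for instance, JSON-encodes its `FungibleTokenPacketData`.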
+ +```go expandable +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) + +packet.Data = data +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + channelCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + +A module receiving a packet must decode the `PacketData` into a structure it expects so that it can +act on it. + +```go +/ Receiving custom application packet data (in OnRecvPacket) + packetData := DecodePacketData(packet.Data) +/ handle received custom packet data +``` + +#### Packet Flow Handling + +Just as IBC expected modules to implement callbacks for channel handshakes, IBC also expects modules +to implement callbacks for handling the packet flow through a channel. + +Once a module A and module B are connected to each other, relayers can start relaying packets and +acknowledgements back and forth on the channel. + +![IBC packet flow diagram](https://media.githubusercontent.com/media/cosmos/ibc/old/spec/ics-004-channel-and-packet-semantics/channel-state-machine.png) + +Briefly, a successful packet flow works as follows: + +1. module A sends a packet through the IBC module +2. the packet is received by module B +3. if module B writes an acknowledgement of the packet then module A will process the + acknowledgement +4. if the packet is not successfully received before the timeout, then module A processes the + packet's timeout. + +##### Sending Packets + +Modules do not send packets through callbacks, since the modules initiate the action of sending +packets to the IBC module, as opposed to other parts of the packet flow where msgs sent to the IBC +module must trigger execution on the port-bound module through the use of callbacks. 
Thus, to send a +packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`. + +```go expandable +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + channelCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + + +In order to prevent modules from sending packets on channels they do not own, IBC expects +modules to pass in the correct channel capability for the packet's source channel. + + +##### Receiving Packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets +invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC +keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the Acknowledgement interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. + +The state changes that occurred during this callback will only be written if: + +* the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement +* if the acknowledgement returned is nil indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. 
+ +```go expandable +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) ibcexported.Acknowledgement { + // Decode the packet data + packetData := DecodePacketData(packet.Data) + + // do application state changes based on packet data and return the acknowledgement + // NOTE: The acknowledgement will indicate to the IBC handler if the application + // state changes should be written via the `Success()` function. Application state + // changes are only written if the acknowledgement is successful or the acknowledgement + // returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + return ack +} +``` + +The Acknowledgement interface: + +```go +// Acknowledgement defines the interface used to return +// acknowledgements in the OnRecvPacket callback. +type Acknowledgement interface { + Success() bool + Acknowledgement() []byte +} +``` + +### Acknowledgements + +Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing. +In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement +will be written once the packet has been processed by the application, which may be well after the packet receipt. + +NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement +for a packet as soon as it has been received from the IBC module. + +This acknowledgement can then be relayed back to the original sender chain, which can take action +depending on the contents of the acknowledgement. + +Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and +receive acknowledgements with the IBC modules as byte strings. + +Thus, modules must agree on how to encode/decode acknowledgements.
The process of creating an +acknowledgement struct, along with encoding and decoding it, is very similar to the packet data +example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope) +specifies a recommended format for acknowledgements. This acknowledgement type can be imported from +[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types). + +While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto): + +```proto expandable +// Acknowledgement is the recommended acknowledgement format to be used by +// app-specific protocols. +// NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental +// conflicts with other protobuf message formats used for acknowledgements. +// The first byte of any message with this format will be the non-ASCII values +// `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS: +// https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope +message Acknowledgement { + // response contains either a result or an error and must be non-empty + oneof response { + bytes result = 21; + string error = 22; + } +} +``` + +#### Acknowledging Packets + +After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can +then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the +acknowledgement are entirely up to the modules on the channel (just like the packet data); however, they +may often contain information on whether the packet was successfully processed along +with some additional data that could be useful for remediation if the packet processing failed.
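To make the opaque-bytes contract concrete, here is a minimal, hypothetical acknowledgement type satisfying the `Success()`/`Acknowledgement()` shape. It is JSON-encoded purely for illustration; the recommended ICS 04 envelope is protobuf-encoded in practice:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CustomAck is an illustrative acknowledgement payload agreed upon by both modules.
type CustomAck struct {
	Result string `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
}

// Success reports whether packet processing succeeded on the receiving chain.
func (a CustomAck) Success() bool {
	return a.Error == ""
}

// Acknowledgement returns the bytes that core IBC commits and a relayer carries back.
func (a CustomAck) Acknowledgement() []byte {
	bz, err := json.Marshal(a)
	if err != nil {
		panic(err)
	}
	return bz
}

// DecodeAck is what the sender's OnAcknowledgementPacket would apply to the raw bytes.
func DecodeAck(bz []byte) (CustomAck, error) {
	var ack CustomAck
	err := json.Unmarshal(bz, &ack)
	return ack, err
}

func main() {
	ack := CustomAck{Result: "ok"}
	decoded, err := DecodeAck(ack.Acknowledgement())
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded.Success()) // prints: true
}
```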
+ +Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and +acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback +is responsible for decoding the acknowledgement and processing it. + +```go expandable +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, +) (*sdk.Result, error) { + / Decode acknowledgement + ack := DecodeAcknowledgement(acknowledgement) + + / process ack + res, err := processAck(ack) + +return res, err +} +``` + +#### Timeout Packets + +If the timeout for a packet is reached before the packet is successfully received or the +counterparty channel end is closed before the packet is successfully received, then the receiving +chain can no longer process it. Thus, the sending chain must process the timeout using +`OnTimeoutPacket` to handle this situation. Again the IBC module will verify that the timeout is +indeed valid, so our module only needs to implement the state machine logic for what to do once a +timeout is reached and the packet can no longer be received. + +```go +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) (*sdk.Result, error) { + / do custom timeout logic +} +``` + +### Routing + +As mentioned above, modules must implement the IBC module interface (which contains both channel +handshake callbacks and packet handling callbacks). The concrete implementation of this interface +must be registered with the module name as a route on the IBC `Router`. + +```go expandable +/ app.go +func NewApp(...args) *App { +/ ... 
+ +/ Create static IBC router, add module routes, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) +/ Note: moduleCallbacks must implement IBCModule interface +ibcRouter.AddRoute(moduleName, moduleCallbacks) + +/ Setting Router will finalize all routes by sealing router +/ No more routes can be added +app.IBCKeeper.SetRouter(ibcRouter) +``` + +## Working Example + +For a real working example of an IBC application, you can look through the `ibc-transfer` module +which implements everything discussed above. + +Here are the useful parts of the module to look at: + +[Binding to transfer +port](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/genesis.go) + +[Sending transfer +packets](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/relay.go) + +[Implementing IBC +callbacks](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/ibc_module.go) diff --git a/docs/ibc/v8.5.x/ibc/apps/bindports.mdx b/docs/ibc/v8.5.x/ibc/apps/bindports.mdx new file mode 100644 index 00000000..ff005830 --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/apps/bindports.mdx @@ -0,0 +1,134 @@ +--- +title: Bind ports +--- + +## Synopsis + +Learn what changes to make to bind modules to their ports on initialization. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v8.5.x/ibc/overview) +- [IBC default integration](/docs/ibc/v8.5.x/ibc/integration) + + +Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented: + +> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather, it refers to the application module to which the port is bound. For IBC modules built with the Cosmos SDK, it defaults to the module's name, and for CosmWasm contracts it defaults to the contract address. + +1.
Add port ID to the `GenesisState` proto definition: + +```protobuf +message GenesisState { + string port_id = 1; + / other fields +} +``` + +1. Add port ID as a key to the module store: + +```go expandable +/ x//types/keys.go +const ( + / ModuleName defines the IBC Module name + ModuleName = "moduleName" + + / Version defines the current version the IBC + / module supports + Version = "moduleVersion-1" + + / PortID is the default port id that module binds to + PortID = "portID" + + / ... +) +``` + +1. Add port ID to `x//types/genesis.go`: + +```go expandable +/ in x//types/genesis.go + +/ DefaultGenesisState returns a GenesisState with "transfer" as the default PortID. +func DefaultGenesisState() *GenesisState { + return &GenesisState{ + PortId: PortID, + / additional k-v fields +} +} + +/ Validate performs basic genesis state validation returning an error upon any +/ failure. +func (gs GenesisState) + +Validate() + +error { + if err := host.PortIdentifierValidator(gs.PortId); err != nil { + return err +} + /additional validations + + return gs.Params.Validate() +} +``` + +1. Bind to port(s) in the module keeper's `InitGenesis`: + +```go expandable +/ InitGenesis initializes the ibc-module state and binds to PortID. +func (k Keeper) + +InitGenesis(ctx sdk.Context, state types.GenesisState) { + k.SetPort(ctx, state.PortId) + + / ... + + / Only try to bind to port if it is not already bound, since we may already own + / port capability from capability InitGenesis + if !k.hasCapability(ctx, state.PortId) { + / transfer module binds to the transfer port on InitChain + / and claims the returned capability + err := k.BindPort(ctx, state.PortId) + if err != nil { + panic(fmt.Sprintf("could not claim port capability: %v", err)) +} + +} + + / ... 
+} +``` + +With: + +```go expandable +// IsBound checks if the module is already bound to the desired port
+func (k Keeper) IsBound(ctx sdk.Context, portID string) bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + return ok +} + +// BindPort defines a wrapper function for the port Keeper's function in +// order to expose it to the module's InitGenesis function +func (k Keeper) BindPort(ctx sdk.Context, portID string) error { + cap := k.portKeeper.BindPort(ctx, portID) + return k.ClaimCapability(ctx, cap, host.PortPath(portID)) +} +``` + +The module binds to the desired port(s) and returns the capabilities. + +The code above references keeper methods that wrap other keeper functionality; the next section defines the keeper methods that need to be implemented. diff --git a/docs/ibc/v8.5.x/ibc/apps/ibcmodule.mdx b/docs/ibc/v8.5.x/ibc/apps/ibcmodule.mdx new file mode 100644 index 00000000..47f4498f --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/apps/ibcmodule.mdx @@ -0,0 +1,416 @@ +--- +title: Implement IBCModule interface and callbacks +--- + +## Synopsis + +Learn how to implement the `IBCModule` interface and all of the callbacks it requires. + +The Cosmos SDK expects all IBC modules to implement the [`IBCModule` +interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This interface contains all of the callbacks IBC expects modules to implement. They include callbacks related to channel handshake, closing and packet callbacks (`OnRecvPacket`, `OnAcknowledgementPacket` and `OnTimeoutPacket`). + +```go +/ IBCModule implements the ICS26 interface for the given keeper.
+/ The implementation of the IBCModule interface could for example be in a file called ibc_module.go, +/ but ultimately file structure is up to the developer +type IBCModule struct { + keeper keeper.Keeper +} +``` + +Additionally, in the `module.go` file, add the following line: + +```go +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + / Add this line + _ porttypes.IBCModule = IBCModule{ +} +) +``` + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v8.5.x/ibc/overview) +- [IBC default integration](/docs/ibc/v8.5.x/ibc/integration) + + + +## Channel handshake callbacks + +This section will describe the callbacks that are called during channel handshake execution. Among other things, it will claim channel capabilities passed on from core IBC. For a refresher on capabilities, check [the Overview section](/docs/ibc/v8.5.x/ibc/overview#capabilities). + +Here are the channel handshake callbacks that modules are expected to implement: + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `checkArguments` and `negotiateAppVersion` functions. + +```go expandable +/ Called by IBC Handler on MsgOpenInit +func (im IBCModule) + +OnChanOpenInit(ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + / ... 
do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + / Examples: + / - Abort if order == UNORDERED, + / - Abort if version is unsupported + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenInit must claim the channelCapability that IBC passes into the callback + if err := im.keeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return "", err +} + +return version, nil +} + +/ Called by IBC Handler on MsgOpenTry +func (im IBCModule) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / ... do custom initialization logic + + / Use above arguments to determine if we want to abort handshake + if err := checkArguments(args); err != nil { + return "", err +} + + / OpenTry must claim the channelCapability that IBC passes into the callback + if err := im.keeper.scopedKeeper.ClaimCapability(ctx, chanCap, host.ChannelCapabilityPath(portID, channelID)); err != nil { + return err +} + + / Construct application version + / IBC applications must return the appropriate application version + / This can be a simple string or it can be a complex version constructed + / from the counterpartyVersion and other arguments. + / The version returned will be the channel version used for both channel ends. 
+ appVersion := negotiateAppVersion(counterpartyVersion, args) + +return appVersion, nil +} + +/ Called by IBC Handler on MsgOpenAck +func (im IBCModule) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyVersion string, +) + +error { + if counterpartyVersion != types.Version { + return sdkerrors.Wrapf(types.ErrInvalidVersion, "invalid counterparty version: %s, expected %s", counterpartyVersion, types.Version) +} + + / do custom logic + + return nil +} + +/ Called by IBC Handler on MsgOpenConfirm +func (im IBCModule) + +OnChanOpenConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / do custom logic + + return nil +} +``` + +### Channel closing callbacks + +The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. Closing a channel is a 2-step handshake, the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`. + +Currently, all IBC modules in this repository return an error for `OnChanCloseInit` to prevent the channels from closing. This is because any user can call `ChanCloseInit` by submitting a `MsgChannelCloseInit` transaction. + +```go expandable +/ Called by IBC Handler on MsgCloseInit +func (im IBCModule) + +OnChanCloseInit( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} + +/ Called by IBC Handler on MsgCloseConfirm +func (im IBCModule) + +OnChanCloseConfirm( + ctx sdk.Context, + portID, + channelID string, +) + +error { + / ... do custom finalization logic + + / Use above arguments to determine if we want to abort handshake + err := checkArguments(args) + +return err +} +``` + +### Channel handshake version negotiation + +Application modules are expected to verify versioning used during the channel handshake procedure. 
+ +- `OnChanOpenInit` will verify that the relayer-chosen parameters + are valid and perform any custom `INIT` logic. + It may return an error if the chosen parameters are invalid + in which case the handshake is aborted. + If the provided version string is non-empty, `OnChanOpenInit` should return + the version string if valid or an error if the provided version is invalid. + **If the version string is empty, `OnChanOpenInit` is expected to + return a default version string representing the version(s) + it supports.** + If there is no default version string for the application, + it should return an error if the provided version is an empty string. +- `OnChanOpenTry` will verify the relayer-chosen parameters along with the + counterparty-chosen version string and perform custom `TRY` logic. + If the relayer-chosen parameters + are invalid, the callback must return an error to abort the handshake. + If the counterparty-chosen version is not compatible with this module's + supported versions, the callback must return an error to abort the handshake. + If the versions are compatible, the try callback must select the final version + string and return it to core IBC. + `OnChanOpenTry` may also perform custom initialization logic. +- `OnChanOpenAck` will error if the counterparty selected version string + is invalid and abort the handshake. It may also perform custom ACK logic. + +Versions must be strings but can implement any versioning structure. If your application plans to +have linear releases then semantic versioning is recommended. If your application plans to release +various features in between major releases then it is advised to use the same versioning scheme +as IBC. This versioning scheme specifies a version identifier and compatible feature set with +that identifier. Valid version selection includes selecting a compatible version identifier with +a subset of features supported by your application for that version. 
The struct used for this
+scheme can be found in [03-connection/types](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection/types/version.go#L16).
+
+Since the version type is a string, applications have the ability to do simple version verification
+via string matching or they can use the already implemented versioning system and pass the proto
+encoded version into each handshake call as necessary.
+
+ICS20 currently implements basic string matching with a single supported version.
+
+## Packet callbacks
+
+Just as IBC expects modules to implement callbacks for channel handshakes, it also expects modules to implement callbacks for handling the packet flow through a channel, as defined in the `IBCModule` interface.
+
+Once a module A and module B are connected to each other, relayers can start relaying packets and acknowledgements back and forth on the channel.
+
+![IBC packet flow diagram](/docs/ibc/images/01-ibc/03-apps/images/packet_flow.png)
+
+Briefly, a successful packet flow works as follows:
+
+1. Module A sends a packet through the IBC module
+2. The packet is received by module B
+3. If module B writes an acknowledgement of the packet then module A will process the acknowledgement
+4. If the packet is not successfully received before the timeout, then module A processes the packet's timeout.
+
+### Sending packets
+
+Modules **do not send packets through callbacks**, since the modules initiate the action of sending packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC
+module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`.
+
+> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `EncodePacketData(customPacketData)` function.
+ +```go expandable +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + channelCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + + + In order to prevent modules from sending packets on channels they do not own, + IBC expects modules to pass in the correct channel capability for the packet's + source channel. + + +### Receiving packets + +To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets +invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC +keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state +changes given the packet data without worrying about whether the packet is valid or not. + +Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface. +The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the +acknowledgement back to the sender module. + +The state changes that occurred during this callback will only be written if: + +- the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement +- if the acknowledgement returned is nil indicating that an asynchronous process is occurring + +NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes +when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written +for asynchronous acknowledgements. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. 
the `DecodePacketData(packet.Data)` function. + +```go expandable +func (im IBCModule) + +OnRecvPacket( + ctx sdk.Context, + packet channeltypes.Packet, +) + +ibcexported.Acknowledgement { + / Decode the packet data + packetData := DecodePacketData(packet.Data) + + / do application state changes based on packet data and return the acknowledgement + / NOTE: The acknowledgement will indicate to the IBC handler if the application + / state changes should be written via the `Success()` function. Application state + / changes are only written if the acknowledgement is successful or the acknowledgement + / returned is nil indicating that an asynchronous acknowledgement will occur. + ack := processPacket(ctx, packet, packetData) + +return ack +} +``` + +Reminder, the `Acknowledgement` interface: + +```go +/ Acknowledgement defines the interface used to return +/ acknowledgements in the OnRecvPacket callback. +type Acknowledgement interface { + Success() + +bool + Acknowledgement() []byte +} +``` + +### Acknowledging packets + +After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can +then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the +acknowledgement is entirely up to the modules on the channel (just like the packet data); however, it +may often contain information on whether the packet was successfully processed along +with some additional data that could be useful for remediation if the packet processing failed. + +Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and +acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback +is responsible for decoding the acknowledgement and processing it. + +> Note that some of the code below is _pseudo code_, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. 
the `DecodeAcknowledgement(acknowledgement)` and `processAck(ack)` functions.
+
+```go expandable
+func (im IBCModule)
+
+OnAcknowledgementPacket(
+ ctx sdk.Context,
+ packet channeltypes.Packet,
+ acknowledgement []byte,
+) (*sdk.Result, error) {
+ / Decode acknowledgement
+ ack := DecodeAcknowledgement(acknowledgement)
+
+ / process ack
+ res, err := processAck(ack)
+
+return res, err
+}
+```
+
+### Timeout packets
+
+If the timeout for a packet is reached before the packet is successfully received or the
+counterparty channel end is closed before the packet is successfully received, then the receiving
+chain can no longer process it. Thus, the sending chain must process the timeout using
+`OnTimeoutPacket` to handle this situation. Again the IBC module will verify that the timeout is
+indeed valid, so our module only needs to implement the state machine logic for what to do once a
+timeout is reached and the packet can no longer be received.
+
+```go
+func (im IBCModule)
+
+OnTimeoutPacket(
+ ctx sdk.Context,
+ packet channeltypes.Packet,
+) (*sdk.Result, error) {
+ / do custom timeout logic
+
+ return nil, nil
+}
+```
+
+### Optional interfaces
+
+The following interface is optional and MAY be implemented by an IBCModule.
+
+#### PacketDataUnmarshaler
+
+The `PacketDataUnmarshaler` interface is defined as follows:
+
+```go
+/ PacketDataUnmarshaler defines an optional interface which allows a middleware to
+/ request the packet data to be unmarshaled by the base application.
+type PacketDataUnmarshaler interface {
+ / UnmarshalPacketData unmarshals the packet data into a concrete type
+ UnmarshalPacketData([]byte) (interface{
+}, error)
+}
+```
+
+The implementation of `UnmarshalPacketData` should unmarshal the bytes into the packet data type defined for an IBC stack.
+The base application of an IBC stack should unmarshal the bytes into its packet data type, while a middleware may simply defer the call to the underlying application.
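To make the deferral pattern concrete, here is a minimal, self-contained sketch of a base application and a middleware implementing `UnmarshalPacketData`. The `customPacketData` type and the JSON encoding are illustrative assumptions, not part of the interface:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PacketDataUnmarshaler mirrors the optional interface described above.
type PacketDataUnmarshaler interface {
	UnmarshalPacketData([]byte) (interface{}, error)
}

// customPacketData is a hypothetical packet data type for this sketch;
// JSON is an illustrative encoding (real apps commonly use protobuf).
type customPacketData struct {
	Sender string `json:"sender"`
	Memo   string `json:"memo"`
}

// baseApp is the base application of the stack: it knows the concrete type
// and performs the actual unmarshaling.
type baseApp struct{}

func (baseApp) UnmarshalPacketData(bz []byte) (interface{}, error) {
	var pd customPacketData
	if err := json.Unmarshal(bz, &pd); err != nil {
		return nil, err
	}
	return pd, nil
}

// middleware does not know the concrete packet data type, so it simply
// defers the call to the underlying application.
type middleware struct {
	app PacketDataUnmarshaler
}

func (m middleware) UnmarshalPacketData(bz []byte) (interface{}, error) {
	return m.app.UnmarshalPacketData(bz)
}

func main() {
	stack := middleware{app: baseApp{}}

	pd, err := stack.UnmarshalPacketData([]byte(`{"sender":"alice","memo":"hello"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(pd.(customPacketData).Sender) // prints: alice
}
```

A middleware stack can be arbitrarily deep; each layer forwards the call until the base application performs the actual decoding.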
+ +This interface allows middlewares to unmarshal a packet data in order to make use of interfaces the packet data type implements. +For example, the callbacks middleware makes use of this function to access packet data types which implement the `PacketData` and `PacketDataProvider` interfaces. diff --git a/docs/ibc/v8.5.x/ibc/apps/keeper.mdx b/docs/ibc/v8.5.x/ibc/apps/keeper.mdx new file mode 100644 index 00000000..2f239283 --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/apps/keeper.mdx @@ -0,0 +1,119 @@ +--- +title: Keeper +--- + +## Synopsis + +Learn how to implement the IBC Module keeper. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v8.5.x/ibc/overview) +- [IBC default integration](/docs/ibc/v8.5.x/ibc/integration) + + +In the previous sections, on channel handshake callbacks and port binding in `InitGenesis`, a reference was made to keeper methods that need to be implemented when creating a custom IBC module. Below is an overview of how to define an IBC module's keeper. + +> Note that some code has been left out for clarity, to get a full code overview, please refer to [the transfer module's keeper in the ibc-go repo](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/keeper.go). + +```go expandable +/ Keeper defines the IBC app module keeper +type Keeper struct { + storeKey sdk.StoreKey + cdc codec.BinaryCodec + paramSpace paramtypes.Subspace + + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + scopedKeeper capabilitykeeper.ScopedKeeper + + / ... additional according to custom logic +} + +/ NewKeeper creates a new IBC app module Keeper instance +func NewKeeper( + / args +) + +Keeper { + / ... + + return Keeper{ + cdc: cdc, + storeKey: key, + paramSpace: paramSpace, + + channelKeeper: channelKeeper, + portKeeper: portKeeper, + scopedKeeper: scopedKeeper, + + / ... 
additional according to custom logic +} +} + +/ hasCapability checks if the IBC app module owns the port capability for the desired port +func (k Keeper) + +hasCapability(ctx sdk.Context, portID string) + +bool { + _, ok := k.scopedKeeper.GetCapability(ctx, host.PortPath(portID)) + +return ok +} + +/ BindPort defines a wrapper function for the port Keeper's function in +/ order to expose it to module's InitGenesis function +func (k Keeper) + +BindPort(ctx sdk.Context, portID string) + +error { + cap := k.portKeeper.BindPort(ctx, portID) + +return k.ClaimCapability(ctx, cap, host.PortPath(portID)) +} + +/ GetPort returns the portID for the IBC app module. Used in ExportGenesis +func (k Keeper) + +GetPort(ctx sdk.Context) + +string { + store := ctx.KVStore(k.storeKey) + +return string(store.Get(types.PortKey)) +} + +/ SetPort sets the portID for the IBC app module. Used in InitGenesis +func (k Keeper) + +SetPort(ctx sdk.Context, portID string) { + store := ctx.KVStore(k.storeKey) + +store.Set(types.PortKey, []byte(portID)) +} + +/ AuthenticateCapability wraps the scopedKeeper's AuthenticateCapability function +func (k Keeper) + +AuthenticateCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +bool { + return k.scopedKeeper.AuthenticateCapability(ctx, cap, name) +} + +/ ClaimCapability allows the IBC app module to claim a capability that core IBC +/ passes to it +func (k Keeper) + +ClaimCapability(ctx sdk.Context, cap *capabilitytypes.Capability, name string) + +error { + return k.scopedKeeper.ClaimCapability(ctx, cap, name) +} + +/ ... 
additional according to custom logic +``` diff --git a/docs/ibc/v8.5.x/ibc/apps/packets_acks.mdx b/docs/ibc/v8.5.x/ibc/apps/packets_acks.mdx new file mode 100644 index 00000000..572b0aff --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/apps/packets_acks.mdx @@ -0,0 +1,166 @@ +--- +title: Define packets and acks +--- + +## Synopsis + +Learn how to define custom packet and acknowledgement structs and how to encode and decode them. + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v8.5.x/ibc/overview) +- [IBC default integration](/docs/ibc/v8.5.x/ibc/integration) + + + +## Custom packets + +Modules connected by a channel must agree on what application data they are sending over the +channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up +to each application module to determine how to implement this agreement. However, for most +applications this will happen as a version negotiation during the channel handshake. While more +complex version negotiation is possible to implement inside the channel opening handshake, a very +simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go). + +Thus, a module must define its custom packet data structure, along with a well-defined way to +encode and decode it to and from `[]byte`. + +```go expandable +/ Custom packet data defined in application module +type CustomPacketData struct { + / Custom fields ... +} + +EncodePacketData(packetData CustomPacketData) []byte { + / encode packetData to bytes +} + +DecodePacketData(encoded []byte) (CustomPacketData) { + / decode from bytes to packet data +} +``` + +> Note that the `CustomPacketData` struct is defined in the proto definition and then compiled by the protobuf compiler. + +Then a module must encode its packet data before sending it through IBC. 
+ +```go expandable +/ retrieve the dynamic capability for this channel + channelCap := scopedKeeper.GetCapability(ctx, channelCapName) +/ Sending custom application packet data + data := EncodePacketData(customPacketData) +/ Send packet to IBC, authenticating with channelCap +sequence, err := IBCChannelKeeper.SendPacket( + ctx, + channelCap, + sourcePort, + sourceChannel, + timeoutHeight, + timeoutTimestamp, + data, +) +``` + +A module receiving a packet must decode the `PacketData` into a structure it expects so that it can +act on it. + +```go +/ Receiving custom application packet data (in OnRecvPacket) + packetData := DecodePacketData(packet.Data) +/ handle received custom packet data +``` + +### Optional interfaces + +The following interfaces are optional and MAY be implemented by a custom packet type. +They allow middlewares such as callbacks to access information stored within the packet data. + +#### PacketData interface + +The `PacketData` interface is defined as follows: + +```go +/ PacketData defines an optional interface which an application's packet data structure may implement. +type PacketData interface { + / GetPacketSender returns the sender address of the packet data. + / If the packet sender is unknown or undefined, an empty string should be returned. + GetPacketSender(sourcePortID string) + +string +} +``` + +The implementation of `GetPacketSender` should return the sender of the packet data. +If the packet sender is unknown or undefined, an empty string should be returned. + +This interface is intended to give IBC middlewares access to the packet sender of a packet data type. + +#### PacketDataProvider interface + +The `PacketDataProvider` interface is defined as follows: + +```go +/ PacketDataProvider defines an optional interfaces for retrieving custom packet data stored on behalf of another application. 
+/ An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information.
+/ A short term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application.
+/ This interface standardizes that behaviour. Upon realization of the ability for middlewares to define their own packet data types, this interface will be deprecated and removed with time.
+type PacketDataProvider interface {
+ / GetCustomPacketData returns the packet data held on behalf of another application.
+ / The name the information is stored under should be provided as the key.
+ / If no custom packet data exists for the key, nil should be returned.
+ GetCustomPacketData(key string)
+
+interface{
+}
+}
+```
+
+The implementation of `GetCustomPacketData` should return packet data held on behalf of another application (if present and supported).
+If this functionality is not supported, it should return nil. Otherwise it should return the packet data associated with the provided key.
+
+This interface gives IBC applications access to the packet data information embedded into the base packet data type.
+Within transfer and interchain accounts, the embedded packet data is stored within the Memo field.
+
+Once all IBC applications within an IBC stack are capable of creating/maintaining their own packet data types, this interface function will be deprecated and removed.
+
+## Acknowledgements
+
+Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing.
+In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement
+will be written once the packet has been processed by the application which may be well after the packet receipt.
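The agreement on a concrete acknowledgement struct and its encoding can be sketched as follows. The `customAck` type, its fields, and the JSON encoding are illustrative assumptions; modules may instead use the protobuf envelope recommended by ICS 04:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// customAck is a hypothetical acknowledgement struct for this sketch; both
// channel ends must agree on its layout. JSON is an illustrative encoding.
type customAck struct {
	Success bool   `json:"success"`
	Error   string `json:"error,omitempty"`
}

// EncodeAck produces the opaque []byte that IBC passes between modules.
func EncodeAck(ack customAck) ([]byte, error) {
	return json.Marshal(ack)
}

// DecodeAck recovers the struct on the sender side,
// e.g. inside OnAcknowledgementPacket.
func DecodeAck(bz []byte) (customAck, error) {
	var ack customAck
	err := json.Unmarshal(bz, &ack)
	return ack, err
}

func main() {
	bz, err := EncodeAck(customAck{Success: true})
	if err != nil {
		panic(err)
	}

	ack, err := DecodeAck(bz)
	if err != nil {
		panic(err)
	}
	fmt.Println(ack.Success) // prints: true
}
```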
+
+NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement
+for a packet as soon as it has been received from the IBC module.
+
+This acknowledgement can then be relayed back to the original sender chain, which can take action
+depending on the contents of the acknowledgement.
+
+Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and
+receive acknowledgements with the IBC modules as byte strings.
+
+Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an
+acknowledgement struct along with encoding and decoding it is very similar to the packet data
+example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope)
+specifies a recommended format for acknowledgements. This acknowledgement type can be imported from
+[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types).
+
+While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto):
+
+```protobuf expandable
+/ Acknowledgement is the recommended acknowledgement format to be used by
+/ app-specific protocols.
+/ NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental
+/ conflicts with other protobuf message formats used for acknowledgements.
+/ The first byte of any message with this format will be the non-ASCII values
+/ `0xaa` (result) or `0xb2` (error).
Implemented as defined by ICS: +/ https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope +message Acknowledgement { + / response contains either a result or an error and must be non-empty + oneof response { + bytes result = 21; + string error = 22; + } +} +``` diff --git a/docs/ibc/v8.5.x/ibc/apps/routing.mdx b/docs/ibc/v8.5.x/ibc/apps/routing.mdx new file mode 100644 index 00000000..c8a18494 --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/apps/routing.mdx @@ -0,0 +1,40 @@ +--- +title: Routing +--- + + + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v8.5.x/ibc/overview) +- [IBC default integration](/docs/ibc/v8.5.x/ibc/integration) + + + +## Synopsis + +Learn how to hook a route to the IBC router for the custom IBC module. + +As mentioned above, modules must implement the `IBCModule` interface (which contains both channel +handshake callbacks and packet handling callbacks). The concrete implementation of this interface +must be registered with the module name as a route on the IBC `Router`. + +```go expandable +/ app.go +func NewApp(...args) *App { + / ... + + / Create static IBC router, add module routes, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) + / Note: moduleCallbacks must implement IBCModule interface + ibcRouter.AddRoute(moduleName, moduleCallbacks) + + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / ... 
+} +``` diff --git a/docs/ibc/v8.5.x/ibc/capability-module.mdx b/docs/ibc/v8.5.x/ibc/capability-module.mdx new file mode 100644 index 00000000..09bebcba --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/capability-module.mdx @@ -0,0 +1,138 @@ +--- +title: Capability Module +description: >- + modules/capability is an implementation of a Cosmos SDK module, per ADR 003, + that allows for provisioning, tracking, and authenticating multi-owner + capabilities at runtime. +--- + +## Overview + +`modules/capability` is an implementation of a Cosmos SDK module, per [ADR 003](/docs/common/pages/adr-comprehensive#adr-003-dynamic-capability-store), that allows for provisioning, tracking, and authenticating multi-owner capabilities at runtime. + +The keeper maintains two states: persistent and ephemeral in-memory. The persistent +store maintains a globally unique auto-incrementing index and a mapping from +capability index to a set of capability owners that are defined as a module and +capability name tuple. The in-memory ephemeral state keeps track of the actual +capabilities, represented as addresses in local memory, with both forward and reverse indexes. +The forward index maps module name and capability tuples to the capability name. The +reverse index maps between the module and capability name and the capability itself. + +The keeper allows the creation of "scoped" sub-keepers which are tied to a particular +module by name. Scoped keepers must be created at application initialization and +passed to modules, which can then use them to claim capabilities they receive and +retrieve capabilities which they own by name, in addition to creating new capabilities +& authenticating capabilities passed by other modules. A scoped keeper cannot escape its scope, +so a module cannot interfere with or inspect capabilities owned by other modules. + +The keeper provides no other core functionality that can be found in other modules +like queriers, REST and CLI handlers, and genesis state. 
+
+## Initialization
+
+During application initialization, the keeper must be instantiated with a persistent
+store key and an in-memory store key.
+
+```go expandable
+type App struct {
+ / ...
+
+ capabilityKeeper *capability.Keeper
+}
+
+func NewApp(...) *App {
+ / ...
+
+ app.capabilityKeeper = capabilitykeeper.NewKeeper(codec, persistentStoreKey, memStoreKey)
+}
+```
+
+After the keeper is created, it can be used to create scoped sub-keepers which
+are passed to other modules that can create, authenticate, and claim capabilities.
+After all the necessary scoped keepers are created and the state is loaded, the
+main capability keeper must be sealed to prevent further scoped keepers from
+being created.
+
+```go expandable
+func NewApp(...) *App {
+ / ...
+
+ / Creating a scoped keeper
+ scopedIBCKeeper := app.capabilityKeeper.ScopeToModule(ibchost.ModuleName)
+
+ / Seal the capability keeper to prevent any further modules from creating scoped
+ / sub-keepers.
+ app.capabilityKeeper.Seal()
+
+return app
+}
+```
+
+## Contents
+
+* [`modules/capability`](#capability-module)
+ * [Overview](#overview)
+ * [Initialization](#initialization)
+ * [Contents](#contents)
+ * [Concepts](#concepts)
+ * [Capabilities](#capabilities)
+ * [Stores](#stores)
+ * [State](#state)
+ * [Persisted KV store](#persisted-kv-store)
+ * [In-memory KV store](#in-memory-kv-store)
+
+## Concepts
+
+### Capabilities
+
+Capabilities are multi-owner. A scoped keeper can create a capability via `NewCapability`
+which creates a new unique, unforgeable object-capability reference. The newly
+created capability is automatically persisted; the calling module need not call
+`ClaimCapability`. Calling `NewCapability` will create the capability with the
+calling module and name as a tuple, which is treated as the capability's first owner.
+
+Capabilities can be claimed by other modules which add them as owners.
`ClaimCapability` +allows a module to claim a capability key which it has received from another +module so that future `GetCapability` calls will succeed. `ClaimCapability` MUST +be called if a module which receives a capability wishes to access it by name in +the future. Again, capabilities are multi-owner, so if multiple modules have a +single Capability reference, they will all own it. If a module receives a capability +from another module but does not call `ClaimCapability`, it may use it in the executing +transaction but will not be able to access it afterwards. + +`AuthenticateCapability` can be called by any module to check that a capability +does in fact correspond to a particular name (the name can be un-trusted user input) +with which the calling module previously associated it. + +`GetCapability` allows a module to fetch a capability which it has previously +claimed by name. The module is not allowed to retrieve capabilities which it does +not own. + +### Stores + +* MemStore +* KeyStore + +## State + +### Persisted KV store + +1. Global unique capability index +2. Capability owners + +Indexes: + +* Unique index: `[]byte("index") -> []byte(currentGlobalIndex)` +* Capability Index: `[]byte("capability_index") | []byte(index) -> ProtocolBuffer(CapabilityOwners)` + +### In-memory KV store + +1. Initialized flag +2. Mapping between the module and capability tuple and the capability name +3. 
Mapping between the module and capability name and its index + +Indexes: + +* Initialized flag: `[]byte("mem_initialized")` +* RevCapabilityKey: `[]byte(moduleName + "/rev/" + capabilityName) -> []byte(index)` +* FwdCapabilityKey: `[]byte(moduleName + "/fwd/" + capabilityPointerAddress) -> []byte(capabilityName)` diff --git a/docs/ibc/v8.5.x/ibc/channel-upgrades.mdx b/docs/ibc/v8.5.x/ibc/channel-upgrades.mdx new file mode 100644 index 00000000..2883948f --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/channel-upgrades.mdx @@ -0,0 +1,410 @@ +--- +title: Channel Upgrades +--- + +## Synopsis + +Learn how to upgrade existing IBC channels. + +Channel upgradability is an IBC-level protocol that allows chains to leverage new application and channel features without having to create new channels or perform a network-wide upgrade. + +Prior to this feature, developers who wanted to update an application module or add a middleware to their application flow would need to create a new channel in order to use the updated application feature/middleware, resulting in a loss of the accumulated state/liquidity, token fungibility (as the channel ID is encoded in the IBC denom), and any other larger network effects of losing usage of the existing channel from relayers monitoring, etc. + +With channel upgradability, applications will be able to implement features such as but not limited to: potentially adding [denom metadata to tokens](https://github.com/cosmos/ibc/discussions/719), or utilizing the [fee middleware](https://github.com/cosmos/ibc/tree/main/spec/app/ics-029-fee-payment), all while maintaining the channels on which they currently operate. + +This document outlines the channel upgrade feature, and the multiple steps used in the upgrade process. + +## Channel Upgrade Handshake + +Channel upgrades will be achieved using a handshake process that is designed to be similar to the standard connection/channel opening handshake. 
+ +```go expandable +type Channel struct { + / current state of the channel end + State State `protobuf:"varint,1,opt,name=state,proto3,enum=ibc.core.channel.v1.State" json:"state,omitempty"` + / whether the channel is ordered or unordered + Ordering Order `protobuf:"varint,2,opt,name=ordering,proto3,enum=ibc.core.channel.v1.Order" json:"ordering,omitempty"` + / counterparty channel end + Counterparty Counterparty `protobuf:"bytes,3,opt,name=counterparty,proto3" json:"counterparty"` + / list of connection identifiers, in order, along which packets sent on + / this channel will travel + ConnectionHops []string `protobuf:"bytes,4,rep,name=connection_hops,json=connectionHops,proto3" json:"connection_hops,omitempty"` + / opaque channel version, which is agreed upon during the handshake + Version string `protobuf:"bytes,5,opt,name=version,proto3" json:"version,omitempty"` + / upgrade sequence indicates the latest upgrade attempt performed by this channel + / the value of 0 indicates the channel has never been upgraded + UpgradeSequence uint64 `protobuf:"varint,6,opt,name=upgrade_sequence,json=upgradeSequence,proto3" json:"upgrade_sequence,omitempty"` +} +``` + +The version, connection hops, and channel ordering are fields in this channel struct which can be changed. For example, the fee middleware can be added to an application module by updating the version string [shown here](https://github.com/cosmos/ibc-go/blob/v8.1.0/modules/apps/29-fee/types/metadata.pb.go#L29). However, although connection hops can change in a channel upgrade, both sides must still be each other's counterparty. This is enforced by the upgrade protocol and upgrade attempts which try to alter an expected counterparty will fail. + +On a high level, successful handshake process for channel upgrades works as follows: + +1. The chain initiating the upgrade process will propose an upgrade. +2. 
If the counterparty agrees with the proposal, it will block sends and begin flushing any in-flight packets on its channel end. This flushing process will be covered in more detail below. +3. Upon successful completion of the previous step, the initiating chain will also block packet sends and begin flushing any in-flight packets on its channel end. +4. Once both channel ends have completed flushing packets within the upgrade timeout window, both channel ends can be opened and upgraded to the new channel fields. + +Each handshake step will be documented below in greater detail. + +## Initializing a Channel Upgrade + +A channel upgrade is initialised by submitting the `MsgChannelUpgradeInit` message, which can be submitted only by the chain itself upon governance authorization. It is possible to upgrade the channel ordering, the channel connection hops, and the channel version, which can be found in the `UpgradeFields`. + +```go +type MsgChannelUpgradeInit struct { + PortId string + ChannelId string + Fields UpgradeFields + Signer string +} +``` + +As part of the handling of the `MsgChannelUpgradeInit` message, the application's `OnChanUpgradeInit` callback will be triggered as well. + +After this message is handled successfully, the channel's upgrade sequence will be incremented. This upgrade sequence will serve as a nonce for the upgrade process to provide replay protection. + + + Initiating an upgrade in the same block as opening a channel may potentially + prevent the counterparty channel from also opening. + + +### Governance gating on `ChanUpgradeInit` + +The message signer for `MsgChannelUpgradeInit` must be the address which has been designated as the `authority` of the `IBCKeeper`. If this proposal passes, the counterparty's channel will upgrade by default. 
+ +If chains want to initiate the upgrade of many channels, they will need to submit a governance proposal with multiple `MsgChannelUpgradeInit` messages, one for each channel they would like to upgrade, again with the message signer as the designated `authority` of the `IBCKeeper`. The `upgrade-channels` CLI command can be used to submit a proposal that initiates the upgrade of multiple channels; see section [Upgrading channels with the CLI](#upgrading-channels-with-the-cli) below for more information. + +## Channel State and Packet Flushing + +`FLUSHING` and `FLUSHCOMPLETE` are additional channel states which have been added to enable the upgrade feature. + +With these additions, the full set of channel states is: + +```go expandable +const ( + / Default State + UNINITIALIZED State = 0 + / A channel has just started the opening handshake. + INIT State = 1 + / A channel has acknowledged the handshake step on the counterparty chain. + TRYOPEN State = 2 + / A channel has completed the handshake. Open channels are + / ready to send and receive packets. + OPEN State = 3 + / A channel has been closed and can no longer be used to send or receive + / packets. + CLOSED State = 4 + / A channel has just accepted the upgrade handshake attempt and is flushing in-flight packets. + FLUSHING State = 5 + / A channel has just completed flushing any in-flight packets. 
+ FLUSHCOMPLETE State = 6 +) +``` + +These are found in `State` on the `Channel` struct: + +```go expandable +type Channel struct { + / current state of the channel end + State State `protobuf:"varint,1,opt,name=state,proto3,enum=ibc.core.channel.v1.State" json:"state,omitempty"` + / whether the channel is ordered or unordered + Ordering Order `protobuf:"varint,2,opt,name=ordering,proto3,enum=ibc.core.channel.v1.Order" json:"ordering,omitempty"` + / counterparty channel end + Counterparty Counterparty `protobuf:"bytes,3,opt,name=counterparty,proto3" json:"counterparty"` + / list of connection identifiers, in order, along which packets sent on + / this channel will travel + ConnectionHops []string `protobuf:"bytes,4,rep,name=connection_hops,json=connectionHops,proto3" json:"connection_hops,omitempty"` + / opaque channel version, which is agreed upon during the handshake + Version string `protobuf:"bytes,5,opt,name=version,proto3" json:"version,omitempty"` + / upgrade sequence indicates the latest upgrade attempt performed by this channel + / the value of 0 indicates the channel has never been upgraded + UpgradeSequence uint64 `protobuf:"varint,6,opt,name=upgrade_sequence,json=upgradeSequence,proto3" json:"upgrade_sequence,omitempty"` +} +``` + +`startFlushing` is the specific method which is called in `ChanUpgradeTry` and `ChanUpgradeAck` to update the state on the channel end. This will set the timeout on the upgrade and update the channel state to `FLUSHING` which will block the upgrade from continuing until all in-flight packets have been flushed. + +This will also set the upgrade timeout for the counterparty (i.e. the timeout before which the counterparty chain must move from `FLUSHING` to `FLUSHCOMPLETE`; if it doesn't then the chain will cancel the upgrade and write an error receipt). The timeout is a relative time duration in nanoseconds that can be changed with `MsgUpdateParams` and by default is 10 minutes. 
+ +The state will change to `FLUSHCOMPLETE` once there are no in-flight packets left and the channel end is ready to move to `OPEN`. This flush state will also have an impact on how a channel upgrade can be cancelled, as detailed below. + +All other parameters will remain the same during the upgrade handshake until the upgrade handshake completes. When the channel is reset to `OPEN` on a successful upgrade handshake, the relevant fields on the channel end will be switched over to the `UpgradeFields` specified in the upgrade. + + + +When transitioning a channel from UNORDERED to ORDERED, new packet sends from the channel end which upgrades first will be incapable of being timed out until the counterparty has finished upgrading. + + + + + +Due to the addition of new channel states, packets can still be received and processed in both `FLUSHING` and `FLUSHCOMPLETE` states. +Packets can also be acknowledged in the `FLUSHING` state. Acknowledging will **not** be possible when the channel is in the `FLUSHCOMPLETE` state, since all packets sent from that channel end have been flushed. +Application developers should consider these new states when implementing application logic that relies on the channel state. +It is still only possible to send packets when the channel is in the `OPEN` state, but sending is disallowed when the channel enters `FLUSHING` and `FLUSHCOMPLETE`. When the channel reopens, sending will be possible again. + + + +## Cancelling a Channel Upgrade + +Channel upgrade cancellation is performed by submitting a `MsgChannelUpgradeCancel` message. 
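For reference, the cancellation message in ibc-go v8 has roughly the following shape (field names are from the author's reading of the channel types and should be double-checked against the release in use):

```go
type MsgChannelUpgradeCancel struct {
	PortId            string
	ChannelId         string
	/ error receipt of the upgrade attempt on the counterparty chain
	ErrorReceipt      ErrorReceipt
	ProofErrorReceipt []byte
	ProofHeight       types.Height
	Signer            string
}
```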
+ +It is possible for the authority to cancel an in-progress channel upgrade if the following are true: + +- The signer is the authority +- The channel state has not reached `FLUSHCOMPLETE` +- If the channel state has reached `FLUSHCOMPLETE`, an existence proof of an `ErrorReceipt` on the counterparty chain is provided at our upgrade sequence or greater + +It is possible for a relayer to cancel an in-progress channel upgrade if the following are true: + +- An existence proof of an `ErrorReceipt` on the counterparty chain is provided at our upgrade sequence or greater + +> Note: if the signer is the authority, e.g. the `gov` address, no `ErrorReceipt` or proof is required if the current channel state is not in `FLUSHCOMPLETE`. +> These can be left empty in the `MsgChannelUpgradeCancel` message in that case. + +Upon cancelling a channel upgrade, an `ErrorReceipt` will be written with the channel's current upgrade sequence, and +the channel will move back to `OPEN` state keeping its original parameters. + +It will then be possible to re-initiate an upgrade by sending a `MsgChannelUpgradeInit` message. + + + +Sequentially performing an upgrade cancellation, upgrade initialization, and another upgrade cancellation in a single block while the counterparty is in `FLUSHCOMPLETE` will lead to corrupted state. +The counterparty will be unable to cancel its upgrade attempt and will require a manual migration. +When the counterparty is in `FLUSHCOMPLETE`, cancelling its upgrade requires a proof that this chain cancelled its current upgrade attempt. +When this cancellation is followed by an initialization and cancellation in the same block, it results in the proof of cancellation existing only for the next upgrade attempt, not the current. + + + +## Timing Out a Channel Upgrade + +Timing out an outstanding channel upgrade may be necessary during the packet flushing stage of the channel upgrade process.
As stated above, with `ChanUpgradeTry` or `ChanUpgradeAck`, the channel state has been changed from `OPEN` to `FLUSHING`, so no new packets will be allowed to be sent over this channel while flushing. If upgrades cannot be performed in a timely manner (due to unforeseen flushing issues), upgrade timeouts allow the channel to avoid blocking packet sends indefinitely. If flushing exceeds the time limit set in the `UpgradeTimeout` channel `Params`, the upgrade process will need to be timed out to abort the upgrade attempt and resume normal channel processing. + +Channel upgrades require setting a valid timeout value in the channel `Params` before submitting a `MsgChannelUpgradeTry` or `MsgChannelUpgradeAck` message (by default, 10 minutes): + +```go +type Params struct { + UpgradeTimeout Timeout +} +``` + +A valid Timeout contains either one or both of a timestamp and block height (sequence). + +```go +type Timeout struct { + / block height after which the packet or upgrade times out + Height types.Height + / block timestamp (in nanoseconds) after which the packet or upgrade times out + Timestamp uint64 +} +``` + +This timeout will then be set as a field on the `Upgrade` struct itself when flushing is started. + +```go +type Upgrade struct { + Fields UpgradeFields + Timeout Timeout + NextSequenceSend uint64 +} +``` + +If the timeout has been exceeded during flushing, a chain can then submit the `MsgChannelUpgradeTimeout` to time out the channel upgrade process: + +```go +type MsgChannelUpgradeTimeout struct { + PortId string + ChannelId string + CounterpartyChannel Channel + ProofChannel []byte + ProofHeight types.Height + Signer string +} +``` + +An `ErrorReceipt` will be written with the channel's current upgrade sequence, and the channel will move back to `OPEN` state keeping its original parameters.
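The elapsed check behind `MsgChannelUpgradeTimeout` can be sketched in a few lines. The snippet below uses simplified stand-ins for the `Height` and `Timeout` types above and is not the actual ibc-go implementation:

```go
package main

import "fmt"

// Simplified stand-ins for ibc-go's Height and Timeout types.
type Height struct {
	RevisionNumber, RevisionHeight uint64
}

type Timeout struct {
	Height    Height // zero value means no height bound is set
	Timestamp uint64 // nanoseconds; zero means no timestamp bound is set
}

// Elapsed reports whether either configured bound has been crossed at the
// given counterparty height and block time (in nanoseconds).
func (t Timeout) Elapsed(height Height, now uint64) bool {
	heightSet := t.Height != (Height{})
	if heightSet && (height.RevisionNumber > t.Height.RevisionNumber ||
		(height.RevisionNumber == t.Height.RevisionNumber && height.RevisionHeight >= t.Height.RevisionHeight)) {
		return true
	}
	return t.Timestamp != 0 && now >= t.Timestamp
}

func main() {
	timeout := Timeout{Timestamp: 1_000}
	fmt.Println(timeout.Elapsed(Height{}, 999))   // false
	fmt.Println(timeout.Elapsed(Height{}, 1_000)) // true
}
```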
+ +Note that timing out a channel upgrade will end the upgrade process, and a new `MsgChannelUpgradeInit` will have to be submitted via governance in order to restart the upgrade process. + +## Pruning Acknowledgements + +Acknowledgements can be pruned by broadcasting the `MsgPruneAcknowledgements` message. + +> Note: It is only possible to prune acknowledgements after a channel has been upgraded, so pruning will fail +> if the channel has not yet been upgraded. + +```protobuf +/ MsgPruneAcknowledgements defines the request type for the PruneAcknowledgements rpc. +message MsgPruneAcknowledgements { + option (cosmos.msg.v1.signer) = "signer"; + option (gogoproto.goproto_getters) = false; + + string port_id = 1; + string channel_id = 2; + uint64 limit = 3; + string signer = 4; +} +``` + +The `port_id` and `channel_id` specify the port and channel to act on, and the `limit` specifies the upper bound for the number +of acknowledgements and packet receipts to prune. + +### CLI Usage + +Acknowledgements can be pruned via the cli with the `prune-acknowledgements` command. + +```bash +simd tx ibc channel prune-acknowledgements [port] [channel] [limit] +``` + +## IBC App Recommendations + +IBC application callbacks should be primarily used to validate data fields and do compatibility checks. Application developers +should be aware that callbacks will be invoked before any core ibc state changes are written. + +`OnChanUpgradeInit` should validate the proposed version, order, and connection hops, and should return the application version to upgrade to. + +`OnChanUpgradeTry` should validate the proposed version (provided by the counterparty), order, and connection hops. The desired upgrade version should be returned. + +`OnChanUpgradeAck` should validate the version proposed by the counterparty. + +`OnChanUpgradeOpen` should perform any logic associated with changing of the channel fields. 
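To make the recommendations above concrete, a version-validating `OnChanUpgradeInit` for a hypothetical application might look like the following sketch (callback signature as in ibc-go v8's `porttypes.IBCModule`; the `types` constants and error helper are assumptions):

```go
func (im IBCModule) OnChanUpgradeInit(
	ctx sdk.Context,
	portID, channelID string,
	proposedOrder channeltypes.Order,
	proposedConnectionHops []string,
	proposedVersion string,
) (string, error) {
	/ validate only; core IBC has not yet written any upgrade state
	if proposedVersion != types.Version {
		return "", errorsmod.Wrapf(types.ErrInvalidVersion, "expected %s, got %s", types.Version, proposedVersion)
	}

	/ return the application version to upgrade to
	return proposedVersion, nil
}
```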
+ +> IBC applications should not attempt to process any packet data under the new conditions until after `OnChanUpgradeOpen` +> has been executed, as up until this point it is still possible for the upgrade handshake to fail and for the channel +> to remain in the pre-upgraded state. + +## Upgrade an existing transfer application stack to use 29-fee middleware + +### Wire up the transfer stack and middleware in app.go + +In app.go, the existing transfer stack must be wrapped with the fee middleware. + +```golang expandable +import ( + + / ... + ibcfee "github.com/cosmos/ibc-go/v8/modules/apps/29-fee" + ibctransferkeeper "github.com/cosmos/ibc-go/v8/modules/apps/transfer/keeper" + transfer "github.com/cosmos/ibc-go/v8/modules/apps/transfer" + porttypes "github.com/cosmos/ibc-go/v8/modules/core/05-port/types" + / ... +) + +type App struct { + / ... + TransferKeeper ibctransferkeeper.Keeper + IBCFeeKeeper ibcfeekeeper.Keeper + / .. +} + +/ ... + +app.IBCFeeKeeper = ibcfeekeeper.NewKeeper( + appCodec, keys[ibcfeetypes.StoreKey], + app.IBCKeeper.ChannelKeeper, / may be replaced with IBC middleware + app.IBCKeeper.ChannelKeeper, + app.IBCKeeper.PortKeeper, app.AccountKeeper, app.BankKeeper, +) + +/ Create Transfer Keeper and pass IBCFeeKeeper as expected Channel and PortKeeper +/ since fee middleware will wrap the IBCKeeper for underlying application. 
+app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCFeeKeeper, / ICS4 Wrapper: fee IBC middleware + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) + ibcRouter := porttypes.NewRouter() + +/ create IBC module from bottom to top of stack +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### Submit a governance proposal to execute a MsgChannelUpgradeInit message + +> This process can be performed with the new CLI that has been added, +> outlined [here](#upgrading-channels-with-the-cli). + +Only the configured authority for the ibc module is able to initiate a channel upgrade by submitting a `MsgChannelUpgradeInit` message. + +Execute a governance proposal specifying the relevant fields to perform a channel upgrade. + +Update the following json sample, and copy the contents into `proposal.json`. + +```json expandable +{ + "title": "Channel upgrade init", + "summary": "Channel upgrade init", + "messages": [ + { + "@type": "/ibc.core.channel.v1.MsgChannelUpgradeInit", + "signer": "", + "port_id": "transfer", + "channel_id": "channel-...", + "fields": { + "ordering": "ORDER_UNORDERED", + "connection_hops": ["connection-0"], + "version": "{\"fee_version\":\"ics29-1\",\"app_version\":\"ics20-1\"}" + } + } + ], + "metadata": "", + "deposit": "10stake" +} +``` + +> Note: ensure the correct `fields.version` is specified. This is the new version that the channels will be upgraded to.
+ +### Submit the proposal + +```shell +simd tx gov submit-proposal proposal.json --from +``` + +## Upgrading channels with the CLI + +A new CLI has been added which enables either: + +- submitting a governance proposal which contains a `MsgChannelUpgradeInit` for every channel to be upgraded, or +- generating a `proposal.json` file which contains the proposal contents to be edited/submitted at a later date. + +The following example would submit a governance proposal with the specified deposit, title and summary which would +contain a `MsgChannelUpgradeInit` for all `OPEN` channels whose port matches the regular expression `transfer`. + +> Note: by adding the `--json` flag, the command would instead output the contents of the proposal which could be +> stored in a `proposal.json` file to be edited and submitted at a later date. + +```bash +simd tx ibc channel upgrade-channels "{\"fee_version\":\"ics29-1\",\"app_version\":\"ics20-1\"}" \ + --deposit "10stake" \ + --title "Channel Upgrades Governance Proposal" \ + --summary "Upgrade all transfer channels to be fee enabled" \ + --port-pattern "transfer" +``` + +It is also possible to explicitly list a comma separated string of channel IDs. It is important to note that the +regular expression matching specified by `--port-pattern` (which defaults to `transfer`) still applies. + +For example, the following command would generate the contents of a `proposal.json` file which would attempt to upgrade +channels with a port ID of `transfer` and a channel ID of `channel-0`, `channel-1` or `channel-2`.
+ +```bash +simd tx ibc channel upgrade-channels "{\"fee_version\":\"ics29-1\",\"app_version\":\"ics20-1\"}" \ + --deposit "10stake" \ + --title "Channel Upgrades Governance Proposal" \ + --summary "Upgrade all transfer channels to be fee enabled" \ + --port-pattern "transfer" \ + --channel-ids "channel-0,channel-1,channel-2" \ + --json +``` diff --git a/docs/ibc/v8.5.x/ibc/integration.mdx b/docs/ibc/v8.5.x/ibc/integration.mdx new file mode 100644 index 00000000..86bd9098 --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/integration.mdx @@ -0,0 +1,234 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to integrate IBC to your application and send data packets to other chains. + +This document outlines the required steps to integrate and configure the [IBC +module](https://github.com/cosmos/ibc-go/tree/main/modules/core) to your Cosmos SDK application and +send fungible token transfers to other chains. + +## Integrating the IBC module + +Integrating the IBC module to your SDK-based application is straightforward. The general changes can be summarized in the following steps: + +- Add required modules to the `module.BasicManager` +- Define additional `Keeper` fields for the new modules on the `App` type +- Add the module's `StoreKey`s and initialize their `Keeper`s +- Set up corresponding routers and routes for the `ibc` module +- Add the modules to the module `Manager` +- Add modules to `Begin/EndBlockers` and `InitGenesis` +- Update the module `SimulationManager` to enable simulations + +### Module `BasicManager` and `ModuleAccount` permissions + +The first step is to add the following modules to the `BasicManager`: `x/capability`, `x/ibc`, +and `x/ibc-transfer`. After that, we need to grant `Minter` and `Burner` permissions to +the `ibc-transfer` `ModuleAccount` to mint and burn relayed tokens. + +### Integrating light clients + +> Note that from v7 onwards, all light clients have to be explicitly registered in a chain's app.go and follow the steps listed below. 
+> This is in contrast to earlier versions of ibc-go when `07-tendermint` and `06-solomachine` were added out of the box. + +All light clients must be registered with `module.BasicManager` in a chain's app.go file. + +The following code example shows how to register the existing `ibctm.AppModuleBasic{}` light client implementation. + +```go expandable +import ( + + ... + / highlight-next-line ++ ibctm "github.com/cosmos/ibc-go/v8/modules/light-clients/07-tendermint" + ... +) + +/ app.go +var ( + ModuleBasics = module.NewBasicManager( + / ... + capability.AppModuleBasic{ +}, + ibc.AppModuleBasic{ +}, + transfer.AppModuleBasic{ +}, / i.e ibc-transfer module + + / register light clients on IBC + / highlight-next-line ++ ibctm.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + / other module accounts permissions + / ... + ibctransfertypes.ModuleName: { + authtypes.Minter, authtypes.Burner +}, +} +) +``` + +### Application fields + +Then, we need to register the `Keepers` as follows: + +```go title="app.go" expandable +type App struct { + / baseapp, keys and subspaces definitions + + / other keepers + / ... + IBCKeeper *ibckeeper.Keeper / IBC Keeper must be a pointer in the app, so we can SetRouter on it correctly + TransferKeeper ibctransferkeeper.Keeper / for cross-chain fungible token transfers + + / make scoped keepers public for test purposes + ScopedIBCKeeper capabilitykeeper.ScopedKeeper + ScopedTransferKeeper capabilitykeeper.ScopedKeeper + + / ... + / module and simulation manager definitions +} +``` + +### Configure the `Keepers` + +During initialization, besides initializing the IBC `Keepers` (for the `x/ibc`, and +`x/ibc-transfer` modules), we need to grant specific capabilities through the capability module +`ScopedKeepers` so that we can authenticate the object-capability permissions for each of the IBC +channels. 
+ +```go expandable +func NewApp(...args) *App { + / define codecs and baseapp + + / add capability keeper and ScopeToModule for ibc module + app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + + / grant capabilities for the ibc and ibc-transfer modules + scopedIBCKeeper := app.CapabilityKeeper.ScopeToModule(ibcexported.ModuleName) + scopedTransferKeeper := app.CapabilityKeeper.ScopeToModule(ibctransfertypes.ModuleName) + + / ... other modules keepers + + / Create IBC Keeper + app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, keys[ibcexported.StoreKey], app.GetSubspace(ibcexported.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, + ) + + / Create Transfer Keepers + app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, + ) + transferModule := transfer.NewAppModule(app.TransferKeeper) + + / .. continues +} +``` + +### Register `Routers` + +IBC needs to know which module is bound to which port so that it can route packets to the +appropriate module and call the appropriate callbacks. The port to module name mapping is handled by +IBC's port `Keeper`. However, the mapping from module name to the relevant callbacks is accomplished +by the port +[`Router`](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/router.go) on the +IBC module. + +Adding the module routes allows the IBC handler to call the appropriate callback when processing a +channel handshake or a packet. + +Currently, a `Router` is static so it must be initialized and set correctly on app initialization. +Once the `Router` has been set, no new routes can be added. + +```go title="app.go" expandable +func NewApp(...args) *App { + / .. 
continuation from above + + / Create static IBC router, add ibc-transfer module route, then set and seal it + ibcRouter := port.NewRouter() + +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule) + / Setting Router will finalize all routes by sealing router + / No more routes can be added + app.IBCKeeper.SetRouter(ibcRouter) + + / .. continues +``` + +### Module Managers + +In order to use IBC, we need to add the new modules to the module `Manager` and to the `SimulationManager` in case your application supports [simulations](https://docs.cosmos.network/main/learn/advanced/simulation). + +```go title="app.go" expandable +func NewApp(...args) *App { + / .. continuation from above + + app.mm = module.NewManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / ... + + app.sm = module.NewSimulationManager( + / other modules + / ... + capability.NewAppModule(appCodec, *app.CapabilityKeeper), + ibc.NewAppModule(app.IBCKeeper), + transferModule, + ) + + / .. continues +``` + +### Application ABCI Ordering + +One addition from IBC is the concept of `HistoricalEntries` which are stored on the staking module. +Each entry contains the historical information for the `Header` and `ValidatorSet` of this chain which is stored +at each height during the `BeginBlock` call. The historical info is required to introspect the +past historical info at any given height in order to verify the light client `ConsensusState` during the +connection handshake. + +```go title="app.go" expandable +func NewApp(...args) *App { + / .. continuation from above + + / add staking and ibc modules to BeginBlockers + app.mm.SetOrderBeginBlockers( + / other modules ... + stakingtypes.ModuleName, ibcexported.ModuleName, + ) + + / ... 
+ + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. + app.mm.SetOrderInitGenesis( + capabilitytypes.ModuleName, + / other modules ... + ibcexported.ModuleName, ibctransfertypes.ModuleName, + ) + + / .. continues +``` + + + **IMPORTANT**: The capability module **must** be declared first in + `SetOrderInitGenesis` + + +That's it! You have now wired up the IBC module and are now able to send fungible tokens across +different chains. If you want to have a broader view of the changes take a look into the SDK's +[`SimApp`](https://github.com/cosmos/ibc-go/blob/main/testing/simapp/app.go). diff --git a/docs/ibc/v8.5.x/ibc/middleware/develop.mdx b/docs/ibc/v8.5.x/ibc/middleware/develop.mdx new file mode 100644 index 00000000..10001cf3 --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/middleware/develop.mdx @@ -0,0 +1,527 @@ +--- +title: Create a custom IBC middleware +description: >- + IBC middleware will wrap over an underlying IBC application (a base + application or downstream middleware) and sits between core IBC and the base + application. +--- + +IBC middleware will wrap over an underlying IBC application (a base application or downstream middleware) and sits between core IBC and the base application. + + +middleware developers must use the same serialization and deserialization method as in ibc-go's codec: transfertypes.ModuleCdc.\[Must]MarshalJSON + + +For middleware builders this means: + +```go +import transfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types" +transfertypes.ModuleCdc.[Must]MarshalJSON +func MarshalAsIBCDoes(ack channeltypes.Acknowledgement) ([]byte, error) { + return transfertypes.ModuleCdc.MarshalJSON(&ack) +} +``` + +The interfaces a middleware must implement are found [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go). 
+ +```go +/ Middleware implements the ICS26 Module interface +type Middleware interface { + IBCModule / middleware has access to an underlying application which may be wrapped by more middleware + ICS4Wrapper / middleware has access to ICS4Wrapper which may be core IBC Channel Handler or a higher-level middleware that wraps this middleware. +} +``` + +An `IBCMiddleware` struct implementing the `Middleware` interface, can be defined with its constructor as follows: + +```go expandable +/ @ x/module_name/ibc_middleware.go + +/ IBCMiddleware implements the ICS26 callbacks and ICS4Wrapper for the fee middleware given the +/ fee keeper and the underlying application. +type IBCMiddleware struct { + app porttypes.IBCModule + keeper keeper.Keeper +} + +/ NewIBCMiddleware creates a new IBCMiddleware given the keeper and underlying application +func NewIBCMiddleware(app porttypes.IBCModule, k keeper.Keeper) + +IBCMiddleware { + return IBCMiddleware{ + app: app, + keeper: k, +} +} +``` + +## Implement `IBCModule` interface + +`IBCMiddleware` is a struct that implements the [ICS-26 `IBCModule` interface (`porttypes.IBCModule`)](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go#L14-L107). It is recommended to separate these callbacks into a separate file `ibc_middleware.go`. + +> Note how this is analogous to implementing the same interfaces for IBC applications that act as base applications. + +As will be mentioned in the [integration section](/docs/ibc/v8.5.x/ibc/middleware/integration), this struct should be different than the struct that implements `AppModule` in case the middleware maintains its own internal state and processes separate SDK messages. + +The middleware must have access to the underlying application, and be called before it during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback. 
+ +> Middleware **may** choose not to call the underlying application's callback at all. Though these should generally be limited to error cases. + +The `IBCModule` interface consists of the channel handshake callbacks and packet callbacks. Most of the custom logic will be performed in the packet callbacks; in the case of the channel handshake callbacks, introducing the middleware requires consideration of version negotiation and the passing of capabilities. + +### Channel handshake callbacks + +#### Version negotiation + +In the case where the IBC middleware expects to speak to a compatible IBC middleware on the counterparty chain, it must use the channel handshake to negotiate the middleware version without interfering in the version negotiation of the underlying application. + +Middleware accomplishes this by formatting the version in a JSON-encoded string containing the middleware version and the application version. The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application. The format of the version is specified in ICS-30 as the following: + +```json +{ + "<middleware_version_key>": "<middleware_version_value>", + "app_version": "<application_version_value>" +} +``` + +The `<middleware_version_key>` key in the JSON struct should be replaced by the actual name of the key for the corresponding middleware (e.g. `fee_version`). + +During the handshake callbacks, the middleware can unmarshal the version string and retrieve the middleware and application versions. It can do its negotiation logic on `<middleware_version_value>`, and pass the `<application_version_value>` to the underlying application. + +> **NOTE**: Middleware that does not need to negotiate with a counterparty middleware on the remote stack will not implement the version unmarshalling and negotiation, and will simply perform its own custom logic on the callbacks without relying on the counterparty behaving similarly.
+ +#### `OnChanOpenInit` + +```go expandable +func (im IBCMiddleware) + +OnChanOpenInit( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID string, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + version string, +) (string, error) { + if version != "" { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + metadata, err := Unmarshal(version) + if err != nil { + / Since it is valid for fee version to not be specified, + / the above middleware version may be for another middleware. + / Pass the entire version string onto the underlying application. + return im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + version, + ) +} + +else { + metadata = { + / set middleware version to default value + MiddlewareVersion: defaultMiddlewareVersion, + / allow application to return its default version + AppVersion: "", +} + +} + +} + +doCustomLogic() + + / if the version string is empty, OnChanOpenInit is expected to return + / a default version string representing the version(s) + +it supports + appVersion, err := im.app.OnChanOpenInit( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + metadata.AppVersion, / note we only pass app version here + ) + if err != nil { + return "", err +} + version := constructVersion(metadata.MiddlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L36-L83) an example implementation of this callback for the ICS-29 Fee Middleware module. 
+ +#### `OnChanOpenTry` + +```go expandable +func (im IBCMiddleware) + +OnChanOpenTry( + ctx sdk.Context, + order channeltypes.Order, + connectionHops []string, + portID, + channelID string, + channelCap *capabilitytypes.Capability, + counterparty channeltypes.Counterparty, + counterpartyVersion string, +) (string, error) { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. + cpMetadata, err := Unmarshal(counterpartyVersion) + if err != nil { + return app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + counterpartyVersion, + ) +} + +doCustomLogic() + + / Call the underlying application's OnChanOpenTry callback. + / The try callback must select the final app-specific version string and return it. + appVersion, err := app.OnChanOpenTry( + ctx, + order, + connectionHops, + portID, + channelID, + channelCap, + counterparty, + cpMetadata.AppVersion, / note we only pass counterparty app version here + ) + if err != nil { + return "", err +} + + / negotiate final middleware version + middlewareVersion := negotiateMiddlewareVersion(cpMetadata.MiddlewareVersion) + version := constructVersion(middlewareVersion, appVersion) + +return version, nil +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L88-L125) an example implementation of this callback for the ICS-29 Fee Middleware module. + +#### `OnChanOpenAck` + +```go expandable +func (im IBCMiddleware) + +OnChanOpenAck( + ctx sdk.Context, + portID, + channelID string, + counterpartyChannelID string, + counterpartyVersion string, +) + +error { + / try to unmarshal JSON-encoded version string and pass + / the app-specific version to app callback. + / otherwise, pass version directly to app callback. 
    cpMetadata, err := UnmarshalJSON(counterpartyVersion)
    if err != nil {
        return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, counterpartyVersion)
    }
    if !isCompatible(cpMetadata.MiddlewareVersion) {
        return error
    }

    doCustomLogic()

    / call the underlying application's OnChanOpenAck callback
    return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, cpMetadata.AppVersion)
}
```

See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L128-L153) for an example implementation of this callback for the ICS-29 Fee Middleware module.

#### `OnChanOpenConfirm`

```go
func OnChanOpenConfirm(
    ctx sdk.Context,
    portID,
    channelID string,
) error {
    doCustomLogic()

    return app.OnChanOpenConfirm(ctx, portID, channelID)
}
```

See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L156-L163) for an example implementation of this callback for the ICS-29 Fee Middleware module.

#### `OnChanCloseInit`

```go
func OnChanCloseInit(
    ctx sdk.Context,
    portID,
    channelID string,
) error {
    doCustomLogic()

    return app.OnChanCloseInit(ctx, portID, channelID)
}
```

See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L166-L188) for an example implementation of this callback for the ICS-29 Fee Middleware module.

#### `OnChanCloseConfirm`

```go
func OnChanCloseConfirm(
    ctx sdk.Context,
    portID,
    channelID string,
) error {
    doCustomLogic()

    return app.OnChanCloseConfirm(ctx, portID, channelID)
}
```

See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L191-L213) for an example implementation of this callback for the ICS-29 Fee Middleware module.

#### Capabilities

The middleware should simply pass the capability in the callback arguments along to the underlying application so that it may be claimed by the base application.
The base application will then pass the capability up the stack in order to authenticate an outgoing packet/acknowledgement, as described in the [`ICS4Wrapper` section](#ics-04-wrappers).

In the case where the middleware wishes to send a packet or acknowledgement without the involvement of the underlying application, it should be given access to the same `scopedKeeper` as the base application so that it can retrieve the capabilities by itself.

### Packet callbacks

The packet callbacks, just like the handshake callbacks, wrap the application's packet callbacks, and they are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide range of use cases, as a simple base application like token transfer can be adapted to a variety of use cases by combining it with custom middleware.

#### `OnRecvPacket`

```go expandable
func (im IBCMiddleware) OnRecvPacket(
    ctx sdk.Context,
    packet channeltypes.Packet,
    relayer sdk.AccAddress,
) ibcexported.Acknowledgement {
    doCustomLogic(packet)

    ack := app.OnRecvPacket(ctx, packet, relayer)

    doCustomLogic(ack) / middleware may modify outgoing ack

    return ack
}
```

See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L217-L238) for an example implementation of this callback for the ICS-29 Fee Middleware module.
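As an illustration of how a packet callback may short-circuit the underlying application (per the note at the top of this section), here is a self-contained toy sketch. The `Packet`, `Ack`, `App`, and `denyMiddleware` types are hypothetical stand-ins for the real ibc-go types, kept minimal so the control flow is visible:

```go
package main

import "fmt"

// Toy stand-ins for channeltypes.Packet and the acknowledgement type.
type Packet struct{ Data []byte }

type Ack struct {
	Success bool
	Result  string
}

// App is the slice of the IBCModule interface relevant to this sketch.
type App interface {
	OnRecvPacket(p Packet) Ack
}

// baseApp always acknowledges successfully, echoing the packet data.
type baseApp struct{}

func (baseApp) OnRecvPacket(p Packet) Ack {
	return Ack{Success: true, Result: string(p.Data)}
}

// denyMiddleware is a hypothetical middleware that rejects packets whose
// data matches a denylist before the underlying app ever sees them.
type denyMiddleware struct {
	app    App
	denied map[string]bool
}

func (m denyMiddleware) OnRecvPacket(p Packet) Ack {
	if m.denied[string(p.Data)] {
		// short-circuit: the underlying app callback is never called
		return Ack{Success: false, Result: "denied by middleware"}
	}
	return m.app.OnRecvPacket(p)
}

func main() {
	stack := denyMiddleware{app: baseApp{}, denied: map[string]bool{"bad": true}}
	fmt.Println(stack.OnRecvPacket(Packet{Data: []byte("ok")}))
	fmt.Println(stack.OnRecvPacket(Packet{Data: []byte("bad")}))
}
```

The same decorator shape is how real middleware wraps an `IBCModule`: the middleware holds the wrapped app and decides, per callback, whether and with what arguments to delegate.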
+ +#### `OnAcknowledgementPacket` + +```go +func (im IBCMiddleware) + +OnAcknowledgementPacket( + ctx sdk.Context, + packet channeltypes.Packet, + acknowledgement []byte, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet, ack) + +return app.OnAcknowledgementPacket(ctx, packet, ack, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L242-L293) an example implementation of this callback for the ICS-29 Fee Middleware module. + +#### `OnTimeoutPacket` + +```go +func (im IBCMiddleware) + +OnTimeoutPacket( + ctx sdk.Context, + packet channeltypes.Packet, + relayer sdk.AccAddress, +) + +error { + doCustomLogic(packet) + +return app.OnTimeoutPacket(ctx, packet, relayer) +} +``` + +See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L297-L335) an example implementation of this callback for the ICS-29 Fee Middleware module. + +## ICS-04 wrappers + +Middleware must also wrap ICS-04 so that any communication from the application to the `channelKeeper` goes through the middleware first. Similar to the packet callbacks, the middleware may modify outgoing acknowledgements and packets in any way it wishes. + +To ensure optimal generalisability, the `ICS4Wrapper` abstraction serves to abstract away whether a middleware is the topmost middleware (and thus directly calling into the ICS-04 `channelKeeper`) or itself being wrapped by another middleware. + +Remember that middleware can be stateful or stateless. When defining the stateful middleware's keeper, the `ics4Wrapper` field is included. Then the appropriate keeper can be passed when instantiating the middleware's keeper in `app.go` + +```go +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + + ics4Wrapper porttypes.ICS4Wrapper + channelKeeper types.ChannelKeeper + portKeeper types.PortKeeper + ... 
}
```

For stateless middleware, the `ics4Wrapper` can be passed on directly without having to instantiate a keeper struct for the middleware.

[The interface](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go#L110-L133) looks as follows:

```go expandable
/ This is implemented by ICS4 and all middleware that wrap a base application.
/ The base application will call `sendPacket` or `writeAcknowledgement` of the middleware directly above it,
/ which will call the next middleware until it reaches the core IBC handler.
type ICS4Wrapper interface {
    SendPacket(
        ctx sdk.Context,
        chanCap *capabilitytypes.Capability,
        sourcePort string,
        sourceChannel string,
        timeoutHeight clienttypes.Height,
        timeoutTimestamp uint64,
        data []byte,
    ) (sequence uint64, err error)

    WriteAcknowledgement(
        ctx sdk.Context,
        chanCap *capabilitytypes.Capability,
        packet exported.PacketI,
        ack exported.Acknowledgement,
    ) error

    GetAppVersion(
        ctx sdk.Context,
        portID,
        channelID string,
    ) (string, bool)
}
```

:warning: In the following paragraphs, the methods are presented in general pseudo code that does not state whether the middleware is stateful or stateless. Remember that when the middleware is stateful, `ics4Wrapper` can be accessed through the keeper.

For an actual implementation, check the referenced examples, where the `ics4Wrapper` methods in `ibc_middleware.go` simply call the equivalent keeper methods in which the actual logic resides.
### `SendPacket`

```go expandable
func SendPacket(
    ctx sdk.Context,
    chanCap *capabilitytypes.Capability,
    sourcePort string,
    sourceChannel string,
    timeoutHeight clienttypes.Height,
    timeoutTimestamp uint64,
    appData []byte,
) (uint64, error) {
    / middleware may modify data
    data = doCustomLogic(appData)

    return ics4Wrapper.SendPacket(
        ctx,
        chanCap,
        sourcePort,
        sourceChannel,
        timeoutHeight,
        timeoutTimestamp,
        data,
    )
}
```

See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L17-L27) for an example implementation of this function for the ICS-29 Fee Middleware module.

### `WriteAcknowledgement`

```go expandable
/ only called for async acks
func WriteAcknowledgement(
    ctx sdk.Context,
    chanCap *capabilitytypes.Capability,
    packet exported.PacketI,
    ack exported.Acknowledgement,
) error {
    / middleware may modify acknowledgement
    ack_bytes = doCustomLogic(ack)

    return ics4Wrapper.WriteAcknowledgement(ctx, chanCap, packet, ack_bytes)
}
```

See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L31-L55) for an example implementation of this function for the ICS-29 Fee Middleware module.

### `GetAppVersion`

```go expandable
/ middleware must return the underlying application version
func GetAppVersion(
    ctx sdk.Context,
    portID,
    channelID string,
) (string, bool) {
    version, found := ics4Wrapper.GetAppVersion(ctx, portID, channelID)
    if !found {
        return "", false
    }
    if !MiddlewareEnabled {
        return version, true
    }

    / unwrap channel version
    metadata, err := Unmarshal(version)
    if err != nil {
        panic(fmt.Errorf("unable to unmarshal version: %w", err))
    }

    return metadata.AppVersion, true
}
```

See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L58-L74) for an example implementation of this function for the ICS-29 Fee Middleware module.
diff --git a/docs/ibc/v8.5.x/ibc/middleware/integration.mdx b/docs/ibc/v8.5.x/ibc/middleware/integration.mdx
new file mode 100644
index 00000000..d2419b7b
--- /dev/null
+++ b/docs/ibc/v8.5.x/ibc/middleware/integration.mdx
@@ -0,0 +1,79 @@
---
title: Integrating IBC middleware into a chain
description: >-
  Learn how to integrate IBC middleware(s) with a base application on your
  chain. The following document only applies to Cosmos SDK chains.
---

Learn how to integrate IBC middleware(s) with a base application on your chain. The following document only applies to Cosmos SDK chains.

If the middleware is maintaining its own state and/or processing SDK messages, then it should create and register its SDK module with the module manager in `app.go`.

All middleware must be connected to the IBC router and wrap over an underlying base IBC application. An IBC application may be wrapped by many layers of middleware; only the top-layer middleware should be hooked to the IBC router, with all underlying middlewares and the application being wrapped by it.

The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware to the bottom middleware and then to the application, while function calls from the application to IBC go from the bottom middleware up to the top middleware and then to the core IBC handlers. Thus, the same set of middleware arranged in different orders may produce different effects.

## Example integration

```go expandable
/ app.go pseudocode

/ middleware 1 and middleware 3 are stateful middleware,
/ perhaps implementing separate sdk.Msg and Handlers
mw1Keeper := mw1.NewKeeper(storeKey1, ..., ics4Wrapper: channelKeeper, ...) / in stack 1 & 3
/ middleware 2 is stateless
mw3Keeper1 := mw3.NewKeeper(storeKey3, ..., ics4Wrapper: mw1Keeper, ...) / in stack 1
mw3Keeper2 := mw3.NewKeeper(storeKey3, ..., ics4Wrapper: channelKeeper, ...)
/ in stack 2 + +/ Only create App Module **once** and register in app module +/ if the module maintains independent state and/or processes sdk.Msgs +app.moduleManager = module.NewManager( + ... + mw1.NewAppModule(mw1Keeper), + mw3.NewAppModule(mw3Keeper1), + mw3.NewAppModule(mw3Keeper2), + transfer.NewAppModule(transferKeeper), + custom.NewAppModule(customKeeper) +) + scopedKeeperTransfer := capabilityKeeper.NewScopedKeeper("transfer") + +scopedKeeperCustom1 := capabilityKeeper.NewScopedKeeper("custom1") + +scopedKeeperCustom2 := capabilityKeeper.NewScopedKeeper("custom2") + +/ NOTE: IBC Modules may be initialized any number of times provided they use a separate +/ scopedKeeper and underlying port. + +customKeeper1 := custom.NewKeeper(..., scopedKeeperCustom1, ...) + +customKeeper2 := custom.NewKeeper(..., scopedKeeperCustom2, ...) + +/ initialize base IBC applications +/ if you want to create two different stacks with the same base application, +/ they must be given different scopedKeepers and assigned different ports. 
transferIBCModule := transfer.NewIBCModule(transferKeeper)
customIBCModule1 := custom.NewIBCModule(customKeeper1, "portCustom1")
customIBCModule2 := custom.NewIBCModule(customKeeper2, "portCustom2")

/ create IBC stacks by combining middleware with base application
/ NOTE: since middleware2 is stateless it does not require a Keeper
/ stack 1 contains mw1 -> mw3 -> transfer
stack1 := mw1.NewIBCMiddleware(mw3.NewIBCMiddleware(transferIBCModule, mw3Keeper1), mw1Keeper)
/ stack 2 contains mw3 -> mw2 -> custom1
stack2 := mw3.NewIBCMiddleware(mw2.NewIBCMiddleware(customIBCModule1), mw3Keeper2)
/ stack 3 contains mw2 -> mw1 -> custom2
stack3 := mw2.NewIBCMiddleware(mw1.NewIBCMiddleware(customIBCModule2, mw1Keeper))

/ associate each stack with the moduleName provided by the underlying scopedKeeper
ibcRouter := porttypes.NewRouter()
ibcRouter.AddRoute("transfer", stack1)
ibcRouter.AddRoute("custom1", stack2)
ibcRouter.AddRoute("custom2", stack3)

app.IBCKeeper.SetRouter(ibcRouter)
```

diff --git a/docs/ibc/v8.5.x/ibc/middleware/overview.mdx b/docs/ibc/v8.5.x/ibc/middleware/overview.mdx
new file mode 100644
index 00000000..f3d2ced9
--- /dev/null
+++ b/docs/ibc/v8.5.x/ibc/middleware/overview.mdx
@@ -0,0 +1,50 @@
---
title: IBC middleware
---

## Synopsis

Learn how to write your own custom middleware to wrap an IBC application, and understand how to hook different middleware to IBC base applications to form different IBC application stacks.

This documentation serves as a guide for middleware developers who want to write their own middleware and for chain developers who want to use IBC middleware on their chains.
After going through the overview, they can consult respectively:

- [documentation on developing custom middleware](/docs/ibc/v8.5.x/ibc/middleware/develop)
- [documentation on integrating middleware into a stack on a chain](/docs/ibc/v8.5.x/ibc/middleware/integration)

## Pre-requisite readings

- [IBC Overview](/docs/ibc/v8.5.x/ibc/overview)
- [IBC Integration](/docs/ibc/v8.5.x/light-clients/wasm/integration)
- [IBC Application Developer Guide](/docs/ibc/v8.5.x/ibc/apps/apps)

## Why middleware?

IBC applications are designed to be self-contained modules that implement their own application-specific logic through a set of interfaces with the core IBC handlers. These core IBC handlers, in turn, are designed to enforce the correctness properties of IBC (transport, authentication, ordering) while delegating all application-specific handling to the IBC application modules. **However, there are cases where some functionality may be desired by many applications, yet not appropriate to place in core IBC.**

Middleware allows developers to define these extensions as separate modules that can wrap over the base application. The middleware can thus perform its own custom logic and pass data into the application so that the application may run its logic without being aware of the middleware's existence. This allows both the application and the middleware to implement their own isolated logic while still being able to run as part of a single packet flow.
`Base Application`: A base application is an IBC application that does not contain any middleware. It may be nested by zero or more middleware to form an application stack.

`Application Stack (or stack)`: A stack is the complete set of application logic (middleware(s) + base application) that gets connected to core IBC. A stack may be just a base application, or it may be a series of middlewares that nest a base application.

The diagram below gives an overview of a middleware stack consisting of two middleware (one stateless, the other stateful).

![middleware-stack.png](/docs/ibc/images/01-ibc/04-middleware/images/middleware-stack.png)

Keep in mind that:

- **The order of the middleware matters** (more on how to correctly define your stack in the code will follow in the [integration section](/docs/ibc/v8.5.x/ibc/middleware/integration)).
- Depending on the type of message, it will either be passed on from the base application up the middleware stack to core IBC, or down the stack in the reverse direction (handshake and packet callbacks).
- IBC middleware wraps over an underlying IBC application and sits between core IBC and the application. It has complete control in modifying any message coming from IBC to the application, and any message coming from the application to core IBC. **Middleware must be completely trusted by chain developers who wish to integrate them**, as this gives them complete flexibility in modifying the application(s) they wrap.

diff --git a/docs/ibc/v8.5.x/ibc/overview.mdx b/docs/ibc/v8.5.x/ibc/overview.mdx
new file mode 100644
index 00000000..778645eb
--- /dev/null
+++ b/docs/ibc/v8.5.x/ibc/overview.mdx
@@ -0,0 +1,329 @@
---
title: Overview
---

## Synopsis

Learn about IBC, its components, and its use cases.

## What is the Inter-Blockchain Communication Protocol (IBC)?
+ +This document serves as a guide for developers who want to write their own Inter-Blockchain +Communication Protocol (IBC) applications for custom use cases. + +> IBC applications must be written as self-contained modules. + +Due to the modular design of the IBC Protocol, IBC +application developers do not need to be concerned with the low-level details of clients, +connections, and proof verification. + +This brief explanation of the lower levels of the +stack gives application developers a broad understanding of the IBC +Protocol. Abstraction layer details for channels and ports are most relevant for application developers and describe how to define custom packets and `IBCModule` callbacks. + +The requirements to have your module interact over IBC are: + +- Bind to a port or ports. +- Define your packet data. +- Use the default acknowledgment struct provided by core IBC or optionally define a custom acknowledgment struct. +- Standardize an encoding of the packet data. +- Implement the `IBCModule` interface. + +Read on for a detailed explanation of how to write a self-contained IBC application module. + +## Components Overview + +### [Clients](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client) + +IBC clients are on-chain light clients. Each light client is identified by a unique client-id. +IBC clients track the consensus states of other blockchains, along with the proof spec necessary to +properly verify proofs against the client's consensus state. A client can be associated with any number +of connections to the counterparty chain. The client identifier is auto generated using the client type +and the global client counter appended in the format: `{client-type}-{N}`. + +A `ClientState` should contain chain specific and light client specific information necessary for verifying updates +and upgrades to the IBC client. 
The `ClientState` may contain information such as chain-id, latest height, proof specs,
unbonding periods, or the status of the light client. The `ClientState` should not contain information that
is specific to a given block at a certain height; that is the function of the `ConsensusState`. Each `ConsensusState`
should be associated with a unique block and should be referenced using a height. IBC clients are given a
client-identifier-prefixed store to store their associated client state and consensus states along with
any metadata associated with the consensus states. Consensus states are stored using their associated height.

The supported IBC clients are:

- [Solo Machine light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine): Devices such as phones, browsers, or laptops.
- [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint): The default for Cosmos SDK-based chains.
- [Localhost (loopback) client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/09-localhost): Useful for
  testing, simulation, and relaying packets to modules on the same application.

### IBC Client Heights

IBC Client Heights are represented by the struct:

```go
type Height struct {
    RevisionNumber uint64
    RevisionHeight uint64
}
```

The `RevisionNumber` represents the revision of the chain that the height is representing.
A revision typically represents a continuous, monotonically increasing range of block-heights.
The `RevisionHeight` represents the height of the chain within the given revision.

On any reset of the `RevisionHeight`—for example, when hard-forking a Tendermint chain—
the `RevisionNumber` will get incremented. This allows IBC clients to distinguish between a
block-height `n` of a previous revision of the chain (at revision `p`) and block-height `n` of the current
revision of the chain (at revision `e`).
`Height`s that share the same revision number can be compared by simply comparing their respective `RevisionHeight`s.
`Height`s that do not share the same revision number will only be compared using their respective `RevisionNumber`s.
Thus, a height `h` with revision number `e+1` will always be greater than a height `g` with revision number `e`,
**REGARDLESS** of the difference in revision heights.

Ex:

```go
Height{
    RevisionNumber: 3,
    RevisionHeight: 0
} > Height{
    RevisionNumber: 2,
    RevisionHeight: 100000000000
}
```

When a Tendermint chain is running a particular revision, relayers can simply submit headers and proofs with the revision number
given by the chain's `chainID`, and the revision height given by the Tendermint block height. When a chain updates using a hard-fork
and resets its block-height, it is responsible for updating its `chainID` to increment the revision number.
IBC Tendermint clients then verify the revision number against the `chainID` and treat the `RevisionHeight` as the Tendermint block-height.

Tendermint chains wishing to use revisions to maintain persistent IBC connections even across height-resetting upgrades must format their `chainID`s
in the following manner: `{chainID}-{revision_number}`. On any height-resetting upgrade, the `chainID` **MUST** be updated with a higher revision number
than the previous value.

Ex:

- Before upgrade `chainID`: `gaiamainnet-3`
- After upgrade `chainID`: `gaiamainnet-4`

Clients that do not require revisions, such as the solo-machine client, simply hardcode `0` into the revision number whenever they
need to return an IBC height when implementing IBC interfaces, and use the `RevisionHeight` exclusively.

Other client-types can implement their own logic to verify the IBC heights that relayers provide in their `Update`, `Misbehavior`, and
`Verify` functions respectively.
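The `{chainID}-{revision_number}` convention above can be sketched in a few lines. `parseRevision` is a hypothetical helper written only to illustrate the format (ibc-go ships its own parsing logic in `02-client/types`, which this does not claim to reproduce):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRevision extracts the revision number from a chainID formatted as
// {chainID}-{revision_number}. ChainIDs without a numeric suffix (such as
// solo-machine identifiers) map to revision 0.
func parseRevision(chainID string) uint64 {
	i := strings.LastIndex(chainID, "-")
	if i < 0 {
		return 0
	}
	rev, err := strconv.ParseUint(chainID[i+1:], 10, 64)
	if err != nil {
		return 0
	}
	return rev
}

func main() {
	fmt.Println(parseRevision("gaiamainnet-3")) // revision before the upgrade
	fmt.Println(parseRevision("gaiamainnet-4")) // revision after the upgrade
	fmt.Println(parseRevision("solomachine"))   // no revision suffix
}
```

Under this convention, the height-resetting upgrade from `gaiamainnet-3` to `gaiamainnet-4` is visible to clients purely through the chainID suffix.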
The IBC interfaces expect an `ibcexported.Height` interface; however, all clients must use the concrete implementation provided in
`02-client/types` and reproduced above.

### [Connections](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection)

Connections encapsulate two [`ConnectionEnd`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/connection.proto#L17)
objects on two separate blockchains. Each `ConnectionEnd` is associated with a client of the
other blockchain (that is, the counterparty blockchain). The connection handshake is responsible
for verifying that the light clients on each chain are correct for their respective counterparties.
Connections, once established, are responsible for facilitating all cross-chain verifications of IBC state.
A connection can be associated with any number of channels.

The connection handshake is a 4-step handshake. Briefly, if a given chain A wants to open a connection with
chain B using already established light-clients on both chains:

1. chain A sends a `ConnectionOpenInit` message to signal a connection initialization attempt with chain B.
2. chain B sends a `ConnectionOpenTry` message to try opening the connection on chain A.
3. chain A sends a `ConnectionOpenAck` message to mark its connection end state as open.
4. chain B sends a `ConnectionOpenConfirm` message to mark its connection end state as open.

#### Time Delayed Connections

Connections can be opened with a time delay by setting the `delayPeriod` field (in nanoseconds) in the [`MsgConnectionOpenInit`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/tx.proto#L45).
The time delay is used to require that the underlying light clients have been updated to a certain height before commitment verification can be performed.
`delayPeriod` is used in conjunction with the [`max_expected_time_per_block`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/connection.proto#L113) parameter of the connection submodule to determine the `blockDelay`, which is the number of blocks that the connection must be delayed by.

When commitment verification is performed, the connection submodule will pass `delayPeriod` and `blockDelay` to the light client. It is up to the light client to determine whether it has been updated to the required height. Only the following light clients in `ibc-go` support time-delayed connections:

- `07-tendermint`
- `08-wasm` (passed to the contract)

### [Proofs](https://github.com/cosmos/ibc-go/blob/main/modules/core/23-commitment) and [Paths](https://github.com/cosmos/ibc-go/blob/main/modules/core/24-host)

In IBC, blockchains do not directly pass messages to each other over the network. Instead, to
communicate, a blockchain commits some state to a specifically defined path that is reserved for a
specific message type and a specific counterparty. For example, for storing a specific connectionEnd as part
of a handshake or a packet intended to be relayed to a module on the counterparty chain. A relayer
process monitors for updates to these paths and relays messages by submitting the data stored
under the path and a proof to the counterparty chain.

Proofs are passed from core IBC to light-clients as bytes. It is up to the light client implementation to interpret these bytes appropriately.

- The paths that all IBC implementations must use for committing IBC messages are defined in
  [ICS-24 Host State Machine Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements).
- The proof format that all implementations must be able to produce and verify is defined in the [ICS-23](https://github.com/cosmos/ics23) implementation.
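To give a feel for what these reserved paths look like, here is a small sketch with hypothetical helpers that mirror the ICS-24 layout for channel ends and packet commitments (ibc-go's real path constructors live in the `24-host` package; the formats below follow the ICS-24 specification):

```go
package main

import "fmt"

// channelPath returns the reserved store path for a channel end,
// following the ICS-24 layout.
func channelPath(portID, channelID string) string {
	return fmt.Sprintf("channelEnds/ports/%s/channels/%s", portID, channelID)
}

// packetCommitmentPath returns the reserved store path under which a
// packet commitment is written, keyed by sequence.
func packetCommitmentPath(portID, channelID string, sequence uint64) string {
	return fmt.Sprintf("commitments/ports/%s/channels/%s/sequences/%d", portID, channelID, sequence)
}

func main() {
	fmt.Println(channelPath("transfer", "channel-0"))
	fmt.Println(packetCommitmentPath("transfer", "channel-0", 1))
}
```

A relayer watches these deterministic paths on one chain and submits the stored value plus a proof of it to the counterparty.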
### Capabilities

IBC is intended to work in execution environments where modules do not necessarily trust each
other. Thus, IBC must authenticate module actions on ports and channels so that only modules with the
appropriate permissions can use them.

This module authentication is accomplished using a [dynamic
capability store](/docs/common/pages/adr-comprehensive#adr-003-dynamic-capability-store). Upon binding to a port or
creating a channel for a module, IBC returns a dynamic capability that the module must claim in
order to use that port or channel. The dynamic capability module prevents other modules from using that port or channel since
they do not own the appropriate capability.

While this background information is useful, IBC modules do not need to interact at all with
these lower-level abstractions. The relevant abstraction layer for IBC application developers is
that of channels and ports. IBC applications must be written as self-contained **modules**.

A module on one blockchain can communicate with other modules on other blockchains by sending,
receiving, and acknowledging packets through channels that are uniquely identified by the
`(channelID, portID)` tuple.

A useful analogy is to consider IBC modules as internet applications on
a computer. A channel can then be conceptualized as an IP connection, with the IBC portID being
analogous to an IP port and the IBC channelID being analogous to an IP address. Thus, a single
instance of an IBC module can communicate on the same port with any number of other modules and
IBC correctly routes all packets to the relevant module using the `(channelID, portID)` tuple. An
IBC module can also communicate with another IBC module over multiple ports, with each
`(portID<->portID)` packet stream being sent on a different unique channel.

### [Ports](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port)

An IBC module can bind to any number of ports.
Each port must be identified by a unique `portID`.
Since IBC is designed to be secure with mutually distrusted modules operating on the same ledger,
binding a port returns a dynamic object capability. In order to take action on a particular port
(for example, opening a channel with its portID), a module must provide the dynamic object capability to the IBC
handler. This requirement prevents a malicious module from opening channels with ports it does not own. Thus,
IBC modules are responsible for claiming the capability that is returned on `BindPort`.

### [Channels](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)

An IBC channel can be established between two IBC ports. Currently, a port is exclusively owned by a
single module. IBC packets are sent over channels. Just as IP packets contain the destination IP
address and IP port, and the source IP address and source IP port, IBC packets contain
the destination portID and channelID, and the source portID and channelID. This packet structure enables IBC to
correctly route packets to the destination module while allowing modules receiving packets to
know the sender module.

A channel can be `ORDERED`, where packets from a sending module must be processed by the
receiving module in the order they were sent. Alternatively, a channel can be `UNORDERED`, where packets
from a sending module are processed in the order they arrive (which might differ from the order in which they were sent).

Modules can choose which channels they wish to communicate over; thus, IBC expects modules to
implement callbacks that are called during the channel handshake. These callbacks can do custom
channel initialization logic. If any callback returns an error, the channel handshake fails. Thus, by
returning errors on callbacks, modules can programmatically reject or accept channels.

The channel handshake is a 4-step handshake.
Briefly, if a given chain A wants to open a channel with
+chain B using an already established connection:
+
+1. chain A sends a `ChanOpenInit` message to signal a channel initialization attempt with chain B.
+2. chain B sends a `ChanOpenTry` message to try opening the channel on chain A.
+3. chain A sends a `ChanOpenAck` message to mark its channel end status as open.
+4. chain B sends a `ChanOpenConfirm` message to mark its channel end status as open.
+
+If all handshake steps are successful, the channel is opened on both sides. At each step in the handshake, the module
+associated with the `ChannelEnd` executes its callback. So
+on `ChanOpenInit`, the module on chain A executes its callback `OnChanOpenInit`.
+
+The channel identifier is automatically derived in the format `channel-{N}`, where `N` is the next sequence to be used.
+
+Just as ports came with dynamic capabilities, channel initialization returns a dynamic capability
+that the module **must** claim so that it can pass in the capability to authenticate channel actions
+like sending packets. The channel capability is passed into the callback on the first parts of the
+handshake; either `OnChanOpenInit` on the initializing chain or `OnChanOpenTry` on the other chain.
+
+#### Closing channels
+
+Closing a channel occurs in 2 handshake steps as defined in [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics).
+Once a channel is closed, it cannot be reopened. The channel handshake steps are:
+
+**`ChanCloseInit`** closes a channel on the executing chain if
+
+- the channel exists and it is not already closed,
+- the connection it exists upon is OPEN,
+- the [IBC module callback `OnChanCloseInit`](/docs/ibc/v8.5.x/ibc/apps/ibcmodule#channel-closing-callbacks) returns `nil`.
+
+`ChanCloseInit` can be initiated by any user by submitting a `MsgChannelCloseInit` transaction.
+Note that channels are automatically closed when a packet times out on an `ORDERED` channel.
+A timeout on an `ORDERED` channel skips the `ChanCloseInit` step and immediately closes the channel.
+
+**`ChanCloseConfirm`** is a response to a counterparty channel executing `ChanCloseInit`. The channel
+on the executing chain closes if
+
+- the channel exists and is not already closed,
+- the connection the channel exists upon is OPEN,
+- the executing chain successfully verifies that the counterparty channel has been closed,
+- the [IBC module callback `OnChanCloseConfirm`](/docs/ibc/v8.5.x/ibc/apps/ibcmodule#channel-closing-callbacks) returns `nil`.
+
+Currently, none of the IBC applications provided in ibc-go support `ChanCloseInit`.
+
+### [Packets](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Modules communicate with each other by sending packets over IBC channels. All
+IBC packets contain the destination `portID` and `channelID` along with the source `portID` and
+`channelID`. This packet structure allows modules to know the sender module of a given packet. IBC packets
+contain a sequence to optionally enforce ordering.
+
+IBC packets also contain a `TimeoutHeight` and a `TimeoutTimestamp` that determine the deadline before which the receiving module must process a packet.
+
+Modules send custom application data to each other inside the `Data []byte` field of the IBC packet.
+Thus, packet data is opaque to IBC handlers. It is incumbent on a sender module to encode
+their application-specific packet information into the `Data` field of packets. The receiver
+module must decode that `Data` back to the original application data.
+
+### [Receipts and Timeouts](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Since IBC works over a distributed network and relies on potentially faulty relayers to relay messages between ledgers,
+IBC must handle the case where a packet does not get sent to its destination in a timely manner or at all.
Packets must
+specify a non-zero value for timeout height (`TimeoutHeight`) or timeout timestamp (`TimeoutTimestamp`) after which a packet can no longer be successfully received on the destination chain.
+
+- The `timeoutHeight` indicates a consensus height on the destination chain after which the packet will no longer be processed and will instead count as having timed out.
+- The `timeoutTimestamp` indicates a timestamp on the destination chain after which the packet will no longer be processed and will instead count as having timed out.
+
+If the timeout passes without the packet being successfully received, the packet can no longer be
+received on the destination chain. The sending module can timeout the packet and take appropriate actions.
+
+If the timeout is reached, then a proof of packet timeout can be submitted to the original chain. The original chain can then perform
+application-specific logic to timeout the packet, perhaps by rolling back the packet send changes (refunding senders any locked funds, etc.).
+
+- In ORDERED channels, a timeout of a single packet in the channel causes the channel to close.
+
+  - If packet sequence `n` times out, then a packet at sequence `k > n` cannot be received without violating the contract of ORDERED channels that packets are processed in the order that they are sent.
+  - Since ORDERED channels enforce this invariant, a proof that sequence `n` has not been received on the destination chain by the specified timeout of packet `n` is sufficient to timeout packet `n` and close the channel.
+
+- In UNORDERED channels, the application-specific timeout logic for that packet is applied and the channel is not closed.
+
+  - Packets can be received in any order.
+
+  - IBC writes a packet receipt for each sequence received in the UNORDERED channel. This receipt does not contain information; it is simply a marker intended to signify that the UNORDERED channel has received a packet at the specified sequence.
+
+  - To timeout a packet on an UNORDERED channel, a proof is required that a packet receipt **does not exist** for the packet's sequence by the specified timeout.
+
+For this reason, most modules should use UNORDERED channels as they require fewer liveness guarantees to function effectively for users of that channel.
+
+### [Acknowledgments](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
+
+Modules can also choose to write application-specific acknowledgments upon processing a packet. Acknowledgments can be done:
+
+- Synchronously on `OnRecvPacket` if the module processes packets as soon as they are received from the IBC module.
+- Asynchronously if the module processes packets at some later point after receiving the packet.
+
+This acknowledgment data is opaque to IBC much like the packet `Data` and is treated by IBC as a simple byte string `[]byte`. Receiver modules must encode their acknowledgment so that the sender module can decode it correctly. The encoding must be negotiated between the two parties during version negotiation in the channel handshake.
+
+The acknowledgment can encode whether the packet processing succeeded or failed, along with additional information that allows the sender module to take appropriate action.
+
+After the acknowledgment has been written by the receiving chain, a relayer relays the acknowledgment back to the original sender module.
+
+The original sender module then executes application-specific acknowledgment logic using the contents of the acknowledgment.
+
+- If an acknowledgment indicates failure, packet-send changes can be rolled back (for example, refunding senders in ICS20).
+
+- After an acknowledgment is received successfully on the original sender chain, the corresponding packet commitment is deleted since it is no longer needed.
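To illustrate the success-or-error shape described above, an ICS-20-style acknowledgment envelope can be sketched in Go as a small JSON structure. This is a simplified model for illustration only, not the actual ibc-go channel types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Acknowledgement is a hypothetical envelope mirroring the shape of an
// ICS-20-style acknowledgment: exactly one of Result or Error is set.
type Acknowledgement struct {
	Result []byte `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
}

// Success reports whether packet processing succeeded.
func (a Acknowledgement) Success() bool {
	return a.Error == ""
}

func main() {
	// A successful acknowledgment carrying an opaque result byte.
	ok := Acknowledgement{Result: []byte{1}}
	bz, _ := json.Marshal(ok)
	fmt.Println(string(bz), ok.Success())

	// A failed acknowledgment carrying an application error string.
	fail := Acknowledgement{Error: "insufficient funds"}
	bz, _ = json.Marshal(fail)
	fmt.Println(string(bz), fail.Success())
}
```

Because the encoding is negotiated during the channel handshake, both sides must agree on this shape before the sender can interpret the `Error` field and roll back its packet-send changes.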
+
+## Further Readings and Specs
+
+If you want to learn more about IBC, check the following specifications:
+
+- [IBC specification overview](https://github.com/cosmos/ibc/blob/master/README.md)
diff --git a/docs/ibc/v8.5.x/ibc/proposals.mdx b/docs/ibc/v8.5.x/ibc/proposals.mdx
new file mode 100644
index 00000000..4712e439
--- /dev/null
+++ b/docs/ibc/v8.5.x/ibc/proposals.mdx
@@ -0,0 +1,113 @@
+---
+title: Governance Proposals
+---
+
+In uncommon situations, a highly valued client may become frozen due to uncontrollable
+circumstances. A highly valued client might have hundreds of channels being actively used.
+Some of those channels might have a significant amount of locked tokens used for ICS 20.
+
+If one-third of the validator set of the chain the client represents decides to collude,
+they can sign off on two valid but conflicting headers, each signed by the other one-third
+of the honest validator set. The light client can now be updated with two valid but conflicting
+headers at the same height. The light client cannot know which header is trustworthy, and therefore
+evidence of such misbehaviour is likely to be submitted, resulting in a frozen light client.
+
+Frozen light clients cannot be updated under any circumstance except via a governance proposal.
+Since a quorum of validators can sign arbitrary state roots which may not be valid executions
+of the state machine, a governance proposal has been added to ease the complexity of unfreezing
+or updating clients which have become "stuck". Without this mechanism, validator sets would need
+to construct a state root to unfreeze the client. Unfreezing clients re-enables all of the channels
+built upon that client. This may result in recovery of otherwise lost funds.
+
+Tendermint light clients may become expired if the trusting period has passed since their
+last update. This may occur if relayers stop submitting headers to update the clients.
+
+An unplanned upgrade by the counterparty chain may also result in expired clients. If the counterparty
+chain undergoes an unplanned upgrade, there may be no commitment to that upgrade signed by the validator
+set before the chain ID changes. In this situation, the validator set of the last valid update for the
+light client is never expected to produce another valid header since the chain ID has changed, which will
+ultimately lead the on-chain light client to become expired.
+
+In the case that a highly valued light client is frozen, expired, or rendered non-updateable, a
+governance proposal may be submitted to update this client, known as the subject client. The
+proposal includes the client identifier for the subject and the client identifier for a substitute
+client. Light client implementations may implement custom updating logic, but in most cases,
+the subject will be updated to the latest consensus state of the substitute client, if the proposal passes.
+The substitute client is used as a "stand in" while the subject is on trial. It is best practice to create
+a substitute client *after* the subject has become frozen, to avoid the substitute also becoming frozen.
+An active substitute client allows headers to be submitted during the voting period to prevent accidental expiry
+once the proposal passes.
+
+*Note*: two of these parameters, `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehavior`, have been deprecated, and will both be set to `false` upon upgrades even if they were previously set to `true`. These parameters will no longer play a role in restricting a client upgrade. Please see ADR 026 for more details.
+
+# How to recover an expired client with a governance proposal
+
+See also the relevant documentation: [ADR-026, IBC client recovery mechanisms](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-026-ibc-client-recovery-mechanisms.md)
+
+> **Who is this information for?**
+> Although technically anyone can submit the governance proposal to recover an expired client, often it will be **relayer operators** (at least coordinating the submission).
+
+## Preconditions
+
+* There exists an active client (with a known client identifier) for the same counterparty chain as the expired client.
+* The governance deposit.
+
+## Steps
+
+### Step 1
+
+Check if the client is attached to the expected `chain_id`. For example, for an expired Tendermint client representing the Akash chain, querying the client state returns:
+
+```text
+{
+  client_id: 07-tendermint-146
+  client_state:
+    '@type': /ibc.lightclients.tendermint.v1.ClientState
+    allow_update_after_expiry: true
+    allow_update_after_misbehaviour: true
+    chain_id: akashnet-2
+}
+```
+
+The client is attached to the expected Akash `chain_id`. Note that although the parameters (`allow_update_after_expiry` and `allow_update_after_misbehaviour`) exist to signal intent, these parameters have been deprecated and will not enforce any checks on the revival of the client. See ADR-026 for more context on this deprecation.
+
+### Step 2
+
+Anyone can submit the governance proposal to recover the client by executing the following via CLI.
+If the chain is on an ibc-go version older than v8, please see the [relevant documentation](https://ibc.cosmos.network/v7/ibc/proposals).
+
+* From ibc-go v8 onwards
+
+  ```shell
+  tx gov submit-proposal [path-to-proposal-json]
+  ```
+
+  where `proposal.json` contains:
+
+  ```json expandable
+  {
+    "messages": [
+      {
+        "@type": "/ibc.core.client.v1.MsgRecoverClient",
+        "subject_client_id": "",
+        "substitute_client_id": "",
+        "signer": ""
+      }
+    ],
+    "metadata": "",
+    "deposit": "10stake",
+    "title": "My proposal",
+    "summary": "A short summary of my proposal",
+    "expedited": false
+  }
+  ```
+
+The `subject_client_id` identifier is the proposed client to be updated. This client must be either frozen or expired.
+
+The `substitute_client_id` represents a substitute client. It carries all the state for the client which may be updated. It must have identical client and chain parameters to the client which may be updated (except for latest height, frozen height, and chain ID). It should be continually updated during the voting period.
+
+After this, all that remains is deciding who funds the governance deposit and ensuring the governance proposal passes. If it does, the client on trial will be updated to the latest state of the substitute.
+
+## Important considerations
+
+Please note that if the counterparty client is also expired, that client will also need to be updated. This process updates only one client.
diff --git a/docs/ibc/v8.5.x/ibc/proto-docs.mdx b/docs/ibc/v8.5.x/ibc/proto-docs.mdx
new file mode 100644
index 00000000..d5e6f469
--- /dev/null
+++ b/docs/ibc/v8.5.x/ibc/proto-docs.mdx
@@ -0,0 +1,6 @@
+---
+title: Protobuf Documentation
+description: See ibc-go Buf Protobuf documentation.
+---
+
+See [ibc-go Buf Protobuf documentation](https://buf.build/cosmos/ibc/docs/main).
diff --git a/docs/ibc/v8.5.x/ibc/relayer.mdx b/docs/ibc/v8.5.x/ibc/relayer.mdx new file mode 100644 index 00000000..dc370780 --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/relayer.mdx @@ -0,0 +1,48 @@ +--- +title: Relayer +--- + + + +## Pre-requisite readings + +* [IBC Overview](/docs/ibc/v8.5.x/ibc/overview) +* [Events](https://docs.cosmos.network/main/learn/advanced/events) + + + +## Events + +Events are emitted for every transaction processed by the base application to indicate the execution +of some logic clients may want to be aware of. This is extremely useful when relaying IBC packets. +Any message that uses IBC will emit events for the corresponding TAO logic executed as defined in +the [IBC events document](). + +In the SDK, it can be assumed that for every message there is an event emitted with the type `message`, +attribute key `action`, and an attribute value representing the type of message sent +(`channel_open_init` would be the attribute value for `MsgChannelOpenInit`). If a relayer queries +for transaction events, it can split message events using this event Type/Attribute Key pair. + +The Event Type `message` with the Attribute Key `module` may be emitted multiple times for a single +message due to application callbacks. It can be assumed that any TAO logic executed will result in +a module event emission with the attribute value `ibc_` (02-client emits `ibc_client`). + +### Subscribing with Tendermint + +Calling the Tendermint RPC method `Subscribe` via Tendermint's Websocket will return events using +Tendermint's internal representation of them. Instead of receiving back a list of events as they +were emitted, Tendermint will return the type `map[string][]string` which maps a string in the +form `.` to `attribute_value`. This causes extraction of the event +ordering to be non-trivial, but still possible. + +A relayer should use the `message.action` key to extract the number of messages in the transaction +and the type of IBC transactions sent. 
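Applied to the flattened `map[string][]string` shape, this index-alignment technique can be sketched as follows. The event data and helper name are hypothetical; a production relayer would need additional bookkeeping:

```go
package main

import "fmt"

// packetSequenceFor applies the index-alignment technique: find the position
// of `action` inside message.action, then read the value at that same
// position under "<action>.packet_sequence".
func packetSequenceFor(events map[string][]string, action string) (string, bool) {
	for i, a := range events["message.action"] {
		if a == action {
			seqs := events[action+".packet_sequence"]
			if i < len(seqs) {
				return seqs[i], true
			}
			return "", false
		}
	}
	return "", false
}

func main() {
	// A hypothetical transaction whose three messages all emitted
	// send_packet events.
	events := map[string][]string{
		"message.action":              {"send_packet", "send_packet", "send_packet"},
		"send_packet.packet_sequence": {"3", "4", "5"},
	}
	seq, ok := packetSequenceFor(events, "send_packet")
	fmt.Println(seq, ok) // sequence of the first matching message
}
```

The same lookup is repeated per attribute (timeout, data, channel identifiers) to assemble everything needed to relay a given packet.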
For every IBC transaction within the string array for
+`message.action`, the necessary information should be extracted from the other event fields. If
+`send_packet` appears at index 2 in the value for `message.action`, a relayer will need to use the
+value at index 2 of the key `send_packet.packet_sequence`. This process should be repeated for each
+piece of information needed to relay a packet.
+
+## Example Implementations
+
+* [Golang Relayer](https://github.com/cosmos/relayer)
+* [Hermes](https://github.com/informalsystems/hermes)
diff --git a/docs/ibc/v8.5.x/ibc/roadmap.mdx b/docs/ibc/v8.5.x/ibc/roadmap.mdx
new file mode 100644
index 00000000..88561031
--- /dev/null
+++ b/docs/ibc/v8.5.x/ibc/roadmap.mdx
@@ -0,0 +1,50 @@
+---
+title: Roadmap
+description: 'Latest update: December 4th, 2023'
+---
+
+*Latest update: December 4th, 2023*
+
+This document endeavours to inform the wider IBC community about plans and priorities for work on ibc-go by the team at Interchain GmbH. It is intended to broadly inform all users of ibc-go, including developers and operators of IBC, relayer, chain and wallet applications.
+
+This roadmap should be read as a high-level guide, rather than a commitment to schedules and deliverables. The degree of specificity is inversely proportional to the timeline. We will update this document periodically to reflect the status and plans. For the latest expected release timelines, please check [here](https://github.com/cosmos/ibc-go/wiki/Release-timeline).
+
+## 08-wasm/v0.1.0
+
+Follow the progress with the [milestone](https://github.com/cosmos/ibc-go/milestone/40).
+
+The first release of this new module will add support for Wasm light clients. The first Wasm client developed on top of the ibc-go/v7 02-client refactor and stored as Wasm bytecode will be the GRANDPA light client used for Cosmos x Substrate IBC connections. This feature will also be used for a NEAR light client in the future.
+
+This feature has been developed by Composable and Strangelove.
+
+## v8.1.0
+
+### Channel upgradability
+
+Channel upgradability will allow chains to renegotiate an existing channel to take advantage of new features without having to create a new channel, thus preserving all existing packet state processed on the channel. This feature will enable, for example, the adoption on existing channels of features like [path unwinding](https://github.com/cosmos/ibc/discussions/824) or fee middleware.
+
+Follow the progress with the [alpha milestone](https://github.com/cosmos/ibc-go/milestone/29) or the [project board](https://github.com/orgs/cosmos/projects/7/views/17).
+
+## v9.0.0
+
+### Conditional clients
+
+Conditional clients are light clients which depend on another client in order to verify or update state. Conditional clients are essential for integration with modular blockchains which break up consensus and state management, such as rollups. Currently, light clients maintain a single provable store. There is a unidirectional communication channel with 02-client: the 02-client module will call into the light client, without allowing the light client to call into the 02-client module. But modular blockchains break up a logical blockchain into many constituent parts, so in order to accurately represent these chains, and to account for the various types of shared security primitives that are emerging, we need to introduce dependencies between clients. In the case of optimistic rollups, for example, in order to prove execution (allowing for fraud proofs), you must prove data availability and sequencing. A potential solution to this problem is that a light client may optionally specify a list of dependencies; the 02-client module would then look up a read-only provable store for each dependency and provide these to the conditional client to perform verification.
Please see [this issue](https://github.com/cosmos/ibc-go/issues/5112) for more details. + +## v10.0.0 + +### Multihop channels + +Multihop channels specify a way to route messages across a path of IBC enabled blockchains utilizing multiple pre-existing IBC connections. The current IBC protocol defines messaging in a point-to-point paradigm which allows message passing between two directly connected IBC chains, but as more IBC enabled chains come into existence there becomes a need to relay IBC packets across chains because IBC connections may not exist between the two chains wishing to exchange messages. IBC connections may not exist for a variety of reasons which could include economic inviability since connections require client state to be continuously exchanged between connection ends which carries a cost. Please see the [ICS 33 spec](https://github.com/cosmos/ibc/blob/main/spec/core/ics-033-multi-hop/README.md) for more information. + +*** + +This roadmap is also available as a [project board](https://github.com/orgs/cosmos/projects/7/views/25). + +For the latest expected release timelines, please check [here](https://github.com/cosmos/ibc-go/wiki/Release-timeline). + +For the latest information on the progress of the work or the decisions made that might influence the roadmap, please follow the [Announcements](https://github.com/cosmos/ibc-go/discussions/categories/announcements) category in the Discussions tab of the repository. + +*** + +**Note**: release version numbers may be subject to change. diff --git a/docs/ibc/v8.5.x/ibc/troubleshooting.mdx b/docs/ibc/v8.5.x/ibc/troubleshooting.mdx new file mode 100644 index 00000000..b147160f --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/troubleshooting.mdx @@ -0,0 +1,13 @@ +--- +title: Troubleshooting +description: >- + If it is being reported that a client state is unauthorized, this is due to + the client type not being present in the AllowedClients array. 
+--- + +## Unauthorized client states + +If it is being reported that a client state is unauthorized, this is due to the client type not being present +in the [`AllowedClients`](https://github.com/cosmos/ibc-go/blob/v6.0.0/modules/core/02-client/types/client.pb.go#L345) array. + +Unless the client type is present in this array or the `AllowAllClients` wildcard (`"*"`) is used, all usage of clients of this type will be prevented. diff --git a/docs/ibc/v8.5.x/ibc/upgrades/developer-guide.mdx b/docs/ibc/v8.5.x/ibc/upgrades/developer-guide.mdx new file mode 100644 index 00000000..5fde10fe --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/upgrades/developer-guide.mdx @@ -0,0 +1,9 @@ +--- +title: IBC Client Developer Guide to Upgrades +--- + +## Synopsis + +Learn how to implement upgrade functionality for your custom IBC client. + +Please see the section [Handling upgrades](/docs/ibc/v8.5.x/light-clients/developer-guide/upgrades) from the light client developer guide for more information. diff --git a/docs/ibc/v8.5.x/ibc/upgrades/genesis-restart.mdx b/docs/ibc/v8.5.x/ibc/upgrades/genesis-restart.mdx new file mode 100644 index 00000000..79d814e9 --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/upgrades/genesis-restart.mdx @@ -0,0 +1,46 @@ +--- +title: Genesis Restart Upgrades +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients using genesis restarts. + +**NOTE**: Regular genesis restarts are currently unsupported by relayers! + +## IBC Client Breaking Upgrades + +IBC client breaking upgrades are possible using genesis restarts. +It is highly recommended to use the in-place migrations instead of a genesis restart. +Genesis restarts should be used sparingly and as backup plans. + +Genesis restarts still require the usage of an IBC upgrade proposal in order to correctly upgrade counterparty clients. 
+ +### Step-by-Step Upgrade Process for SDK Chains + +If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the [IBC Client Breaking Upgrade List](/docs/ibc/v8.5.x/ibc/upgrades/quick-guide#ibc-client-breaking-upgrades) and then execute the upgrade process described below in order to prevent counterparty clients from breaking. + +1. Create a governance proposal with the [`MsgIBCSoftwareUpgrade`](https://buf.build/cosmos/ibc/docs/main:ibc.core.client.v1#ibc.core.client.v1.MsgIBCSoftwareUpgrade) which contains an `UpgradePlan` and a new IBC `ClientState` in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as `TrustingPeriod`). +2. Vote on and pass the governance proposal. +3. Halt the node after successful upgrade. +4. Export the genesis file. +5. Swap to the new binary. +6. Run migrations on the genesis file. +7. Remove the upgrade plan set by the governance proposal from the genesis file. This may be done by migrations. +8. Change desired chain-specific fields (chain id, unbonding period, etc). This may be done by migrations. +9. Reset the node's data. +10. Start the chain. + +Upon passing the governance proposal, the upgrade module will commit the `UpgradedClient` under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. 
They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client.
+
+### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients
+
+These steps are identical to the regular [IBC client breaking upgrade process](/docs/ibc/v8.5.x/ibc/upgrades/quick-guide#step-by-step-upgrade-process-for-relayers-upgrading-counterparty-clients).
+
+## Non-IBC Client Breaking Upgrades
+
+While ibc-go supports genesis restarts which do not break IBC clients, relayers do not support this upgrade path.
+Here is a tracking issue on [Hermes](https://github.com/informalsystems/ibc-rs/issues/1152).
+Please do not attempt a regular genesis restart unless you have a tool to update counterparty clients correctly.
diff --git a/docs/ibc/v8.5.x/ibc/upgrades/intro.mdx b/docs/ibc/v8.5.x/ibc/upgrades/intro.mdx
new file mode 100644
index 00000000..202931a8
--- /dev/null
+++ b/docs/ibc/v8.5.x/ibc/upgrades/intro.mdx
@@ -0,0 +1,13 @@
+---
+title: Upgrading IBC Chains Overview
+description: >-
+  This directory contains information on how to upgrade an IBC chain without
+  breaking counterparty clients and connections.
+---
+
+This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections.
+
+IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving. Many chain upgrades may be irrelevant to IBC; however, some upgrades could potentially break counterparty clients if not handled correctly. Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client.
+
+1. 
The [quick-guide](/docs/ibc/v8.5.x/ibc/upgrades/quick-guide) describes how IBC-connected chains can perform client-breaking upgrades and how relayers can securely upgrade counterparty clients using the SDK. +2. The [developer-guide](/docs/ibc/v8.5.x/ibc/upgrades/developer-guide) is a guide for developers intending to develop IBC client implementations with upgrade functionality. diff --git a/docs/ibc/v8.5.x/ibc/upgrades/quick-guide.mdx b/docs/ibc/v8.5.x/ibc/upgrades/quick-guide.mdx new file mode 100644 index 00000000..938254b9 --- /dev/null +++ b/docs/ibc/v8.5.x/ibc/upgrades/quick-guide.mdx @@ -0,0 +1,54 @@ +--- +title: How to Upgrade IBC Chains and their Clients +--- + +## Synopsis + +Learn how to upgrade your chain and counterparty clients. + +The information in this doc for upgrading chains is relevant to SDK chains. However, the guide for counterparty clients is relevant to any Tendermint client that enables upgrades. + +## IBC Client Breaking Upgrades + +IBC-connected chains must perform an IBC upgrade if their upgrade will break counterparty IBC clients. The current IBC protocol supports upgrading tendermint chains for a specific subset of IBC-client-breaking upgrades. Here is the exhaustive list of IBC client-breaking upgrades and whether the IBC protocol currently supports such upgrades. + +IBC currently does **NOT** support unplanned upgrades. All of the following upgrades must be planned and committed to in advance by the upgrading chain, in order for counterparty clients to maintain their connections securely. + +Note: Since upgrades are only implemented for Tendermint clients, this doc only discusses upgrades on Tendermint chains that would break counterparty IBC Tendermint Clients. + +1. Changing the Chain-ID: **Supported** +2. Changing the UnbondingPeriod: **Partially Supported**, chains may increase the unbonding period with no issues. However, decreasing the unbonding period may irreversibly break some counterparty clients. 
Thus, it is **not recommended** that chains reduce the unbonding period.
+3. Changing the height (resetting to 0): **Supported**, so long as chains remember to increment the revision number in their chain-id.
+4. Changing the ProofSpecs: **Supported**, this should be changed if the proof structure needed to verify IBC proofs is changed across the upgrade. For example, switching from an IAVL store to a SimpleTree store.
+5. Changing the UpgradePath: **Supported**, this might involve changing the key under which upgraded clients and consensus states are stored in the upgrade store, or even migrating the upgrade store itself.
+6. Migrating the IBC store: **Unsupported**, as the IBC store location is negotiated by the connection.
+7. Upgrading to a backwards compatible version of IBC: **Supported**
+8. Upgrading to a non-backwards compatible version of IBC: **Unsupported**, as the IBC version is negotiated on connection handshake.
+9. Changing the Tendermint LightClient algorithm: **Partially Supported**. Changes to the light client algorithm that do not change the ClientState or ConsensusState struct may be supported, provided that the counterparty is also upgraded to support the new light client algorithm. Changes that require updating the ClientState and ConsensusState structs themselves are theoretically possible by providing a path to translate an older ClientState struct into the new ClientState struct; however this is not currently implemented.
+
+## Step-by-Step Upgrade Process for SDK chains
+
+If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must ensure that the upgrade is first supported by IBC using the list above and then execute the upgrade process described below in order to prevent counterparty clients from breaking.
+
+1. 
Create a governance proposal with the [`MsgIBCSoftwareUpgrade`](https://buf.build/cosmos/ibc/docs/main:ibc.core.client.v1#ibc.core.client.v1.MsgIBCSoftwareUpgrade) message which contains an `UpgradePlan` and a new IBC `ClientState` in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients (chain-specified parameters) and zero out any client-customizable fields (such as `TrustingPeriod`). +2. Vote on and pass the governance proposal. + +Upon passing the governance proposal, the upgrade module will commit the `UpgradedClient` under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`. + +Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client. + +## Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients + +Once the upgrading chain has committed to upgrading, relayers must wait till the chain halts at the upgrade height before upgrading counterparty clients. This is because chains may reschedule or cancel upgrade plans before they occur. Thus, relayers must wait till the chain reaches the upgrade height and halts before they can be sure the upgrade will take place. + +Thus, the upgrade process for relayers trying to upgrade the counterparty clients is as follows: + +1. Wait for the upgrading chain to reach the upgrade height and halt +2. Query a full node for the proofs of `UpgradedClient` and `UpgradedConsensusState` at the last height of the old chain. +3. 
Update the counterparty client to the last height of the old chain using the `UpdateClient` msg. +4. Submit an `UpgradeClient` msg to the counterparty chain with the `UpgradedClient`, `UpgradedConsensusState` and their respective proofs. +5. Submit an `UpdateClient` msg to the counterparty chain with a header from the new upgraded chain. + +The Tendermint client on the counterparty chain will verify that the upgrading chain did indeed commit to the upgraded client and upgraded consensus state at the upgrade height (since the upgrade height is included in the key). If the proofs are verified against the upgrade height, then the client will upgrade to the new client while retaining all of its client-customized fields. Thus, it will retain its old TrustingPeriod, TrustLevel, MaxClockDrift, etc; while adopting the new chain-specified fields such as UnbondingPeriod, ChainId, UpgradePath, etc. Note, this can lead to an invalid client since the old client-chosen fields may no longer be valid given the new chain-chosen fields. Upgrading chains should try to avoid these situations by not altering parameters that can break old clients. For an example, see the UnbondingPeriod example in the supported upgrades section. + +The upgraded consensus state will serve purely as a basis of trust for future `UpdateClientMsgs` and will not contain a consensus root to perform proof verification against. Thus, relayers must submit an `UpdateClientMsg` with a header from the new chain so that the connection can be used for proof verification again. diff --git a/docs/ibc/v8.5.x/intro.mdx b/docs/ibc/v8.5.x/intro.mdx new file mode 100644 index 00000000..5973bd54 --- /dev/null +++ b/docs/ibc/v8.5.x/intro.mdx @@ -0,0 +1,37 @@ +--- +title: IBC-Go Documentation +description: >- + Welcome to the documentation for IBC-Go, the Golang implementation of the + Inter-Blockchain Communication Protocol! Looking for information on ibc-rs? + Click here to go to the ibc-rs github repo. 
+--- + +Welcome to the documentation for IBC-Go, the Golang implementation of the Inter-Blockchain Communication Protocol! Looking for information on ibc-rs? [Click here to go to the ibc-rs github repo](https://github.com/cosmos/ibc-rs). + +The Inter-Blockchain Communication Protocol (IBC) is an end-to-end, connection-oriented, stateful protocol for reliable, ordered, and authenticated communication between heterogeneous blockchains arranged in an unknown and dynamic topology. + +IBC is a protocol that allows blockchains to talk to each other. Chains that speak IBC can share any type of data as long as it's encoded in bytes, enabling the industry’s most feature-rich cross-chain interactions. IBC is secure and permissionless. + +The protocol realizes this interoperability by specifying a set of data structures, abstractions, and semantics that can be implemented by any distributed ledger that satisfies a small set of requirements. + +IBC can be used to build a wide range of cross-chain applications that include token transfers, atomic swaps, multi-chain smart contracts (with or without mutually comprehensible VMs), cross-chain account control, and data and code sharding of various kinds. + +## High-level overview of IBC + +The following diagram shows how IBC works at a high level: + +![Light Mode IBC Overview](/docs/ibc/images/images/ibcoverview-light.svg#gh-light-mode-only)![Dark Mode IBC Overview](/docs/ibc/images/images/ibcoverview-dark.svg#gh-dark-mode-only) + +The transport layer (TAO) provides the necessary infrastructure to establish secure connections and authenticate data packets between chains. The application layer builds on top of the transport layer and defines exactly how data packets should be packaged and interpreted by the sending and receiving chains. 
+ +IBC provides a reliable, permissionless, and generic base layer (allowing for the secure relaying of data packets), while allowing for composability and modularity with separation of concerns by moving application designs (interpreting and acting upon the packet data) to a higher-level layer. This separation is reflected in the categories: + +* **IBC/TAO** comprises the Transport, Authentication, and Ordering of packets, i.e. the infrastructure layer. +* **IBC/APP** consists of the application handlers for the data packets being passed over the transport layer. These include but are not limited to fungible token transfers (ICS-20), NFT transfers (ICS-721), and interchain accounts (ICS-27). +* **Application module:** groups any application, middleware or smart contract that may wrap downstream application handlers to provide enhanced functionality. + +Note three crucial elements in the diagram: + +* The chains depend on relayers to communicate. [Relayers](https://github.com/cosmos/ibc/blob/main/spec/relayer/ics-018-relayer-algorithms/README.md) are the "physical" connection layer of IBC: off-chain processes responsible for relaying data between two chains running the IBC protocol by scanning the state of each chain, constructing appropriate datagrams, and executing them on the opposite chain as is allowed by the protocol. +* Many relayers can serve one or more channels to send messages between the chains. +* Each side of the connection uses the light client of the other chain to quickly verify incoming messages. diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/client-state.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/client-state.mdx new file mode 100644 index 00000000..9ca4191d --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/developer-guide/client-state.mdx @@ -0,0 +1,77 @@ +--- +title: Client State interface +description: >- + Learn how to implement the ClientState interface. 
This list of methods + described here does not include all methods of the interface. Some methods are + explained in detail in the relevant sections of the guide. +--- + +Learn how to implement the [`ClientState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L36) interface. This list of methods described here does not include all methods of the interface. Some methods are explained in detail in the relevant sections of the guide. + +## `ClientType` method + +`ClientType` should return a unique string identifier of the light client. This will be used when generating a client identifier. +The format is created as follows: `ClientType-{N}` where `{N}` is the unique global nonce associated with a specific client. + +## `GetLatestHeight` method + +`GetLatestHeight` should return the latest block height that the client state represents. + +## `Validate` method + +`Validate` should validate every client state field and should return an error if any value is invalid. The light client +implementer is in charge of determining which checks are required. See the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/client_state.go#L111) as a reference. + +## `Status` method + +`Status` must return the status of the client. + +* An `Active` status indicates that clients are allowed to process packets. +* A `Frozen` status indicates that misbehaviour was detected in the counterparty chain and the client is not allowed to be used. +* An `Expired` status indicates that a client is not allowed to be used because it was not updated for longer than the trusting period. +* An `Unknown` status indicates that there was an error in determining the status of a client. + +All possible `Status` types can be found [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L22-L32). 
+ +This field is returned in the response of the gRPC [`ibc.core.client.v1.Query/ClientStatus`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/types/query.pb.go#L665) endpoint. + +## `ZeroCustomFields` method + +`ZeroCustomFields` should return a copy of the light client with all client-customizable fields set to their zero value. It should not mutate the fields of the light client. +This method is used when [scheduling upgrades](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/proposal.go#L82). Upgrades are used to upgrade chain-specific fields. +In the Tendermint case, this may be the chain ID or the unbonding period. +For more information about client upgrades see the [Handling upgrades](/docs/ibc/v8.5.x/light-clients/developer-guide/upgrades) section. + +## `GetTimestampAtHeight` method + +`GetTimestampAtHeight` must return the timestamp for the consensus state associated with the provided height. +This value is used to facilitate timeouts by checking the packet timeout timestamp against the returned value. + +## `Initialize` method + +Clients must validate the initial consensus state, and set the initial client state and consensus state in the provided client store. +Clients may also store any necessary client-specific metadata. + +`Initialize` is called when a [client is created](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L30). + +## `VerifyMembership` method + +`VerifyMembership` must verify the existence of a value at a given commitment path at the specified height. For more information about membership proofs +see the [Existence and non-existence proofs section](/docs/ibc/v8.5.x/light-clients/developer-guide/proofs). + +## `VerifyNonMembership` method + +`VerifyNonMembership` must verify the absence of a value at a given commitment path at a specified height.
For more information about non-membership proofs +see the [Existence and non-existence proofs section](/docs/ibc/v8.5.x/light-clients/developer-guide/proofs). + +## `VerifyClientMessage` method + +`VerifyClientMessage` must verify a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. +It must handle each type of `ClientMessage` appropriately. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` +will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned +if the `ClientMessage` fails to verify. + +## `CheckForMisbehaviour` method + +`CheckForMisbehaviour` checks for evidence of misbehaviour in a `Header` or `Misbehaviour` type. It assumes the `ClientMessage` +has already been verified. diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/consensus-state.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/consensus-state.mdx new file mode 100644 index 00000000..8494fa18 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/developer-guide/consensus-state.mdx @@ -0,0 +1,26 @@ +--- +title: Consensus State interface +description: >- + A ConsensusState is the snapshot of the counterparty chain that an IBC client + uses to verify proofs (e.g. a block). +--- + +A `ConsensusState` is the snapshot of the counterparty chain that an IBC client uses to verify proofs (e.g. a block). + +The further development of multiple types of IBC light clients and the difficulties presented by this generalization problem (see [ADR-006](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-006-02-client-refactor.md) for more information about this historical context) led to the design decision that each client keeps track of and sets its own `ClientState` and `ConsensusState`, as well as the simplification of client `ConsensusState` updates through the generalized `ClientMessage` interface.
+ +The below [`ConsensusState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L133) interface is a generalized interface for the types of information a `ConsensusState` could contain. For a reference `ConsensusState` implementation, please see the [Tendermint light client `ConsensusState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/consensus_state.go). + +## `ClientType` method + +This is the type of client consensus. It should be the same as the `ClientType` return value for the [corresponding `ClientState` implementation](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state). + +## `GetTimestamp` method + +*Deprecated*: soon to be removed from interface + +`GetTimestamp` should return the timestamp (in nanoseconds) of the consensus state snapshot. + +## `ValidateBasic` method + +`ValidateBasic` should validate every consensus state field and should return an error if any value is invalid. The light client implementer is in charge of determining which checks are required. diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/genesis.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/genesis.mdx new file mode 100644 index 00000000..5519a77c --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/developer-guide/genesis.mdx @@ -0,0 +1,40 @@ +--- +title: Handling Genesis +--- + +## Synopsis + +Learn how to implement the `ExportMetadata` interface + + + +## Pre-requisite readings + +- [Cosmos SDK module genesis](https://docs.cosmos.network/v0.47/building-modules/genesis) + + + +`ClientState` instances are provided their own isolated and namespaced client store upon initialisation. `ClientState` implementations may choose to store any amount of arbitrary metadata in order to verify counterparty consensus state and perform light client updates correctly. 
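The isolated, namespaced client store described above can be illustrated with a toy prefix-store sketch. The `kvStore` and `prefixStore` types are hypothetical stand-ins for `sdk.KVStore` and the client-ID-prefixed store that core IBC hands to each `ClientState`:

```go
package main

import "fmt"

// kvStore is a toy key-value store standing in for sdk.KVStore.
type kvStore map[string][]byte

// prefixStore namespaces every key under a client identifier, sketching how
// each ClientState receives an isolated client store.
type prefixStore struct {
	prefix string
	db     kvStore
}

func (s prefixStore) Set(key string, value []byte) { s.db[s.prefix+key] = value }
func (s prefixStore) Get(key string) []byte        { return s.db[s.prefix+key] }

func main() {
	db := kvStore{}
	a := prefixStore{prefix: "clients/07-tendermint-0/", db: db}
	b := prefixStore{prefix: "clients/07-tendermint-1/", db: db}

	a.Set("clientState", []byte("A"))
	b.Set("clientState", []byte("B"))

	// The same relative key resolves to a different entry per client namespace.
	fmt.Println(string(a.Get("clientState")), string(b.Get("clientState"))) // A B
}
```

Metadata a light client persists through this store is exactly what `ExportMetadata` later surfaces during genesis export.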
+ +The `ExportMetadata` method of the [`ClientState` interface](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L47) provides light client modules with the ability to persist metadata in genesis exports. + +```go +ExportMetadata(clientStore sdk.KVStore) []GenesisMetadata +``` + +`ExportMetadata` is provided the client store and returns an array of `GenesisMetadata`. For maximum flexibility, `GenesisMetadata` is defined as a simple interface containing two distinct `Key` and `Value` accessor methods. + +```go +type GenesisMetadata interface { + / return store key that contains metadata without clientID-prefix + GetKey() []byte + / returns metadata value + GetValue() []byte +} +``` + +This allows `ClientState` instances to retrieve and export any number of key-value pairs which are maintained within the store in their raw `[]byte` form. + +When a chain is started with a `genesis.json` file which contains `ClientState` metadata (for example, when performing manual upgrades using an exported `genesis.json`) the `02-client` submodule of core IBC will handle setting the key-value pairs within their respective client stores. [See `02-client` `InitGenesis`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/genesis.go#L18-L22). + +Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/genesis.go#L12) for an example. diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/overview.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/overview.mdx new file mode 100644 index 00000000..93a90839 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/developer-guide/overview.mdx @@ -0,0 +1,74 @@ +--- +title: Overview +--- + +## Synopsis + +Learn how to build IBC light client modules and fulfill the interfaces required to integrate with core IBC. 
+ + +## Pre-requisite readings + +- [IBC Overview](/docs/ibc/v8.5.x/ibc/overview) +- [IBC Transport, Authentication, and Ordering Layer - Clients](https://tutorials.cosmos.network/academy/3-ibc/4-clients.html) +- [ICS-002 Client Semantics](https://github.com/cosmos/ibc/tree/main/spec/core/ics-002-client-semantics) + + + +IBC uses light clients in order to provide trust-minimized interoperability between sovereign blockchains. Light clients operate under a strict set of rules which provide security guarantees for state updates and facilitate the ability to verify the state of a remote blockchain using merkle proofs. + +The following aims to provide a high-level IBC light client module developer guide. Access to IBC light clients is gated by the core IBC `MsgServer`, which utilizes the abstractions set by the `02-client` submodule to call into a light client module. A light client module developer is only required to implement a set of interfaces as defined in the `modules/core/exported` package of ibc-go. + +A light client module developer should be concerned with three main interfaces: + +- [`ClientState`](#clientstate) encapsulates the light client implementation and its semantics. +- [`ConsensusState`](#consensusstate) tracks consensus data used for verification of client updates, misbehaviour detection and proof verification of counterparty state. +- [`ClientMessage`](#clientmessage) used for submitting block headers for client updates and submission of misbehaviour evidence using conflicting headers. + +Throughout this guide, the `07-tendermint` light client module may be referred to as a reference example. + +## Concepts and vocabulary + +### `ClientState` + +`ClientState` is a term used to define the data structure which encapsulates opaque light client state. The `ClientState` contains all the information needed to verify a `ClientMessage` and perform membership and non-membership proof verification of counterparty state.
This includes properties that refer to the remote state machine, the light client type and the specific light client instance. + +For example: + +- Constraints used for client updates. +- Constraints used for misbehaviour detection. +- Constraints used for state verification. +- Constraints used for client upgrades. + +The `ClientState` type maintained within the light client module _must_ implement the [`ClientState`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L36) interface defined in `core/modules/exported/client.go`. +The methods which make up this interface are detailed at a more granular level in the [ClientState section of this guide](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state). + +Please refer to the `07-tendermint` light client module's [`ClientState` definition](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/proto/ibc/lightclients/tendermint/v1/tendermint.proto#L18) containing information such as chain ID, status, latest height, unbonding period and proof specifications. + +### `ConsensusState` + +`ConsensusState` is a term used to define the data structure which encapsulates consensus data at a particular point in time, i.e. a unique height or sequence number of a state machine. There must exist a single trusted `ConsensusState` for each height. `ConsensusState` generally contains a trusted root, validator set information and timestamp. + +For example, the `ConsensusState` of the `07-tendermint` light client module defines a trusted root which is used by the `ClientState` to perform verification of membership and non-membership commitment proofs, as well as the next validator set hash used for verifying headers can be trusted in client updates. 
+ +The `ConsensusState` type maintained within the light client module _must_ implement the [`ConsensusState`](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/modules/core/exported/client.go#L134) interface defined in `modules/core/exported/client.go`. +The methods which make up this interface are detailed at a more granular level in the [`ConsensusState` section of this guide](/docs/ibc/v8.5.x/light-clients/developer-guide/consensus-state). + +### `Height` + +`Height` defines a monotonically increasing sequence number which provides ordering of consensus state data persisted through client updates. +IBC light client module developers are expected to use the [concrete type](https://github.com/cosmos/ibc-go/tree/02-client-refactor-beta1/proto/ibc/core/client/v1/client.proto#L89) provided by the `02-client` submodule. This implements the expectations required by the [`Height`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L156) interface defined in `modules/core/exported/client.go`. + +### `ClientMessage` + +`ClientMessage` refers to the interface type [`ClientMessage`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L147) used for performing updates to a `ClientState` stored on chain. +This may be any concrete type which produces a change in state to the IBC client when verified. + +The following are considered as valid update scenarios: + +- A block header which when verified inserts a new `ConsensusState` at a unique height. +- A batch of block headers which when verified inserts `N` `ConsensusState` instances for `N` unique heights. +- Evidence of misbehaviour provided by two conflicting block headers. + +Learn more in the [Handling update and misbehaviour](/docs/ibc/v8.5.x/light-clients/developer-guide/updates-and-misbehaviour) section. 
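The `Height` ordering described above can be sketched in a few lines. This is a simplified stand-in for the `02-client` concrete `Height` type, which pairs a revision number with a block height within that revision:

```go
package main

import "fmt"

// height is a toy version of the 02-client Height type.
type height struct {
	RevisionNumber uint64
	RevisionHeight uint64
}

// lt orders heights: revisions compare first, then heights within a revision.
// A chain that resets its block height to 0 increments the revision number in
// its chain-id, so ordering stays monotonic across upgrades.
func (h height) lt(other height) bool {
	if h.RevisionNumber != other.RevisionNumber {
		return h.RevisionNumber < other.RevisionNumber
	}
	return h.RevisionHeight < other.RevisionHeight
}

func main() {
	preUpgrade := height{RevisionNumber: 1, RevisionHeight: 500000}
	postUpgrade := height{RevisionNumber: 2, RevisionHeight: 1} // height reset after upgrade
	fmt.Println(preUpgrade.lt(postUpgrade), postUpgrade.lt(preUpgrade)) // true false
}
```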
diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/proofs.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/proofs.mdx new file mode 100644 index 00000000..7e2f939a --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/developer-guide/proofs.mdx @@ -0,0 +1,68 @@ +--- +title: Existence/Non-Existence Proofs +description: >- + IBC uses merkle proofs in order to verify the state of a remote counterparty + state machine given a trusted root, and ICS-23 is a general approach for + verifying merkle trees which is used in ibc-go. +--- + +IBC uses merkle proofs in order to verify the state of a remote counterparty state machine given a trusted root, and [ICS-23](https://github.com/cosmos/ics23/tree/master/go) is a general approach for verifying merkle trees which is used in ibc-go. + +Currently, all Cosmos SDK modules contain their own stores, which maintain the state of the application module in an IAVL (immutable AVL) binary merkle tree format. Specifically with regard to IBC, core IBC maintains its own IAVL store, and IBC apps (e.g. transfer) maintain their own dedicated stores. The Cosmos SDK multistore therefore creates a simple merkle tree of all of these IAVL trees, and from each of these individual IAVL tree root hashes it derives a root hash for the application state tree as a whole (the `AppHash`). + +For the purposes of ibc-go, there are two types of proofs which are important: existence and non-existence proofs, terms which have been used interchangeably with membership and non-membership proofs. For the purposes of this guide, we will stick with "existence" and "non-existence". + +## Existence proofs + +Existence proofs are used in IBC transactions which involve verification of counterparty state for transactions which will result in the writing of provable state. For example, this includes verification of IBC store state for handshakes and packets. + +Put simply, existence proofs prove that a particular key and value exists in the tree. 
Under the hood, an IBC existence proof is comprised of two proofs: an IAVL proof that the key exists in IBC store/IBC root hash, and a proof that the IBC root hash exists in the multistore root hash. + +## Non-existence proofs + +Non-existence proofs verify the absence of data stored within counterparty state and are used to prove that a key does NOT exist in state. As stated above, these types of proofs can be used to timeout packets by proving that the counterparty has not written a packet receipt into the store, meaning that a token transfer has NOT successfully occurred. + +Some trees (e.g. SMT) may have a sentinel empty child for non-existent keys. In this case, the ICS-23 proof spec should include this `EmptyChild` so that ICS-23 handles the non-existence proof correctly. + +In some cases, there is a necessity to "mock" non-existence proofs if the counterparty does not have ability to prove absence. Since the verification method is designed to give complete control to client implementations, clients can support chains that do not provide absence proofs by verifying the existence of a non-empty sentinel `ABSENCE` value. In these special cases, the proof provided will be an ICS-23 `Existence` proof, and the client will verify that the `ABSENCE` value is stored under the given path for the given height. + +## State verification methods: `VerifyMembership` and `VerifyNonMembership` + +The state verification functions for all IBC data types have been consolidated into two generic methods, `VerifyMembership` and `VerifyNonMembership`. 
+ +From the [`ClientState` interface definition](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L68-L91), we find: + +```go expandable +VerifyMembership( + ctx sdk.Context, + clientStore sdk.KVStore, + cdc codec.BinaryCodec, + height Height, + delayTimePeriod uint64, + delayBlockPeriod uint64, + proof []byte, + path Path, + value []byte, +) + +error + +/ VerifyNonMembership is a generic proof verification method which verifies the absence of a given CommitmentPath at a specified height. +/ The caller is expected to construct the full CommitmentPath from a CommitmentPrefix and a standardized path (as defined in ICS 24). +VerifyNonMembership( + ctx sdk.Context, + clientStore sdk.KVStore, + cdc codec.BinaryCodec, + height Height, + delayTimePeriod uint64, + delayBlockPeriod uint64, + proof []byte, + path Path, +) + +error +``` + +Both are expected to be provided with a standardised key path, `exported.Path`, as defined in [ICS-24 host requirements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). Membership verification requires callers to provide the value marshalled as `[]byte`. Delay period values should be zero for non-packet processing verification. A zero proof height is now allowed by core IBC and may be passed into `VerifyMembership` and `VerifyNonMembership`. Light clients are responsible for returning an error if a zero proof height is invalid behaviour. + +Please refer to the [ICS-23 implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/23-commitment/types/merkle.go#L131-L205) for a concrete example. 
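The two-layer structure of an existence proof (key/value → IBC store root → multistore `AppHash`) can be illustrated with a toy hash sketch. This uses plain SHA-256 concatenation, not real IAVL or ICS-23 proofs, and the key path shown is just an example ICS-24-style commitment path:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hash combines two nodes, standing in for a merkle inner node.
func hash(a, b []byte) []byte {
	h := sha256.Sum256(append(append([]byte{}, a...), b...))
	return h[:]
}

func main() {
	// Layer 1: the key/value commits (with siblings) to the IBC store root.
	leaf := hash([]byte("commitments/ports/transfer/channels/channel-0/sequences/1"), []byte("packet-commitment"))
	sibling := []byte("other-ibc-leaf")
	ibcRoot := hash(leaf, sibling)

	// Layer 2: the IBC store root commits (with the other store roots) to the AppHash.
	transferRoot := []byte("transfer-store-root")
	appHash := hash(ibcRoot, transferRoot)

	// A verifier recomputes both layers from the proof and compares against
	// the trusted AppHash from the light client's consensus state.
	recomputed := hash(hash(leaf, sibling), transferRoot)
	fmt.Println(string(appHash) == string(recomputed)) // true
}
```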
diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/proposals.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/proposals.mdx new file mode 100644 index 00000000..a166dd92 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/developer-guide/proposals.mdx @@ -0,0 +1,32 @@ +--- +title: Handling Proposals +--- + +It is possible to update the client with the state of the substitute client through a governance proposal. [This type of governance proposal](/docs/ibc/v8.5.x/light-clients/developer-guide/proposals) is typically used to recover an expired or frozen client, as it can recover the entire state and therefore all existing channels built on top of the client. `CheckSubstituteAndUpdateState` should be implemented to handle the proposal. + +## Implementing `CheckSubstituteAndUpdateState` + +In the [`ClientState` interface](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go), we find: + +```go +/ CheckSubstituteAndUpdateState must verify that the provided substitute may be used to update the subject client. +/ The light client must set the updated client and consensus states within the clientStore for the subject client. +CheckSubstituteAndUpdateState( + ctx sdk.Context, + cdc codec.BinaryCodec, + subjectClientStore, + substituteClientStore sdk.KVStore, + substituteClient ClientState, +) + +error +``` + +Prior to updating, this function must verify that: + +* the substitute client is the same type as the subject client. For a reference implementation, please see the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/proposal_handle.go#L32). +* the provided substitute may be used to update the subject client. This may mean that certain parameters must remain unaltered.
For example, a [valid substitute Tendermint light client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/proposal_handle.go#L84) must NOT change the chain ID, trust level, max clock drift, unbonding period, proof specs or upgrade path. Please note that `AllowUpdateAfterMisbehaviour` and `AllowUpdateAfterExpiry` have been deprecated (see ADR 026 for more information). + +After these checks are performed, the function must [set the updated client and consensus states](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/proposal_handle.go#L77) within the client store for the subject client. + +Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/proposal_handle.go#L27) for reference. diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/setup.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/setup.mdx new file mode 100644 index 00000000..27e599fb --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/developer-guide/setup.mdx @@ -0,0 +1,157 @@ +--- +title: Setup +--- + +## Synopsis + +Learn how to configure light client modules and create clients using core IBC and the `02-client` submodule. + +A last step to finish the development of the light client is to implement the `AppModuleBasic` interface to allow it to be added to the chain's `app.go` alongside other light client types the chain enables. + +Finally, a succinct rundown is given of the remaining steps to make the light client operational: getting the light client type passed through governance and creating the clients. + +## Configuring a light client module + +An IBC light client module must implement the [`AppModuleBasic`](https://github.com/cosmos/cosmos-sdk/blob/main/types/module/module.go#L50) interface in order to register its concrete types against the core IBC interfaces defined in `modules/core/exported`.
This is accomplished via the `RegisterInterfaces` method which provides the light client module with the opportunity to register codec types using the chain's `InterfaceRegistry`. Please refer to the [`07-tendermint` codec registration](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/codec.go#L11). + +The `AppModuleBasic` interface may also be leveraged to install custom CLI handlers for light client module users. Light client modules can safely no-op for interface methods which they do not wish to implement. + +Please refer to the [core IBC documentation](/docs/ibc/v8.5.x/ibc/integration#integrating-light-clients) for how to configure additional light client modules alongside `07-tendermint` in `app.go`. + +See below for an example of the `07-tendermint` implementation of `AppModuleBasic`. + +```go expandable +var _ module.AppModuleBasic = AppModuleBasic{ +} + +/ AppModuleBasic defines the basic application module used by the tendermint light client. +/ Only the RegisterInterfaces function needs to be implemented. All other functions perform +/ a no-op. +type AppModuleBasic struct{ +} + +/ Name returns the tendermint module name. +func (AppModuleBasic) + +Name() + +string { + return ModuleName +} + +/ RegisterLegacyAminoCodec performs a no-op. The Tendermint client does not support amino. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(*codec.LegacyAmino) { +} + +/ RegisterInterfaces registers module concrete types into protobuf Any. This allows core IBC +/ to unmarshal tendermint light client types. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + RegisterInterfaces(registry) +} + +/ DefaultGenesis performs a no-op. Genesis is not supported for the tendermint light client. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return nil +} + +/ ValidateGenesis performs a no-op. Genesis is not supported for the tendermint light client.
+func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + return nil +} + +/ RegisterGRPCGatewayRoutes performs a no-op. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux) { +} + +/ GetTxCmd performs a no-op. Please see the 02-client cli commands. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd performs a no-op. Please see the 02-client cli commands. +func (AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return nil +} +``` + +## Creating clients + +A client is created by executing a new `MsgCreateClient` transaction composed with a valid `ClientState` and initial `ConsensusState` encoded as protobuf `Any`s. +Generally, this is performed by an off-chain process known as an [IBC relayer](https://github.com/cosmos/ibc/tree/main/spec/relayer/ics-018-relayer-algorithms) however, this is not a strict requirement. + +See below for a list of IBC relayer implementations: + +- [cosmos/relayer](https://github.com/cosmos/relayer) +- [informalsystems/hermes](https://github.com/informalsystems/hermes) +- [confio/ts-relayer](https://github.com/confio/ts-relayer) + +Stateless checks are performed within the [`ValidateBasic`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/types/msgs.go#L48) method of `MsgCreateClient`. + +```protobuf expandable +/ MsgCreateClient defines a message to create an IBC client +message MsgCreateClient { + option (gogoproto.goproto_getters) = false; + + / light client state + google.protobuf.Any client_state = 1 [(gogoproto.moretags) = "yaml:\"client_state\""]; + / consensus state associated with the client that corresponds to a given + / height. 
+ google.protobuf.Any consensus_state = 2 [(gogoproto.moretags) = "yaml:\"consensus_state\""];
+ / signer address
+ string signer = 3;
+}
+```
+
+Leveraging protobuf `Any` encoding allows core IBC to [unpack](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/keeper/msg_server.go#L28-L36) both the `ClientState` and `ConsensusState` into their respective interface types registered previously using the light client module's `RegisterInterfaces` method.
+
+Within the `02-client` submodule, the [`ClientState` is then initialized](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L30-L32) with its own isolated key-value store, namespaced using a unique client identifier.
+
+In order to successfully create an IBC client using a new client type, it [must be supported](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L19-L25). Light client support in IBC is gated by on-chain governance. The allow list may be updated by submitting a new governance proposal to update the `02-client` parameter `AllowedClients`.
+
+See below for an example:
+
+```shell
+%s tx gov submit-proposal proposal.json --from <key_or_address>
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "title": "IBC Clients Param Change",
+  "summary": "Update allowed clients",
+  "messages": [
+    {
+      "@type": "/ibc.core.client.v1.MsgUpdateParams",
+      "signer": "cosmos1...", / The gov module account address
+      "params": {
+        "allowed_clients": ["06-solomachine", "07-tendermint", "0x-new-client"]
+      }
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "100stake"
+}
+```
+
+If the `AllowedClients` list contains a single element that is equal to the wildcard `"*"`, then all client types are allowed and it is thus not necessary to submit a governance proposal to update the parameter.
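The allow-list semantics described above, including the `"*"` wildcard, can be sketched as follows. This is a simplified, self-contained illustration; `isAllowedClient` is a hypothetical helper, not the actual `02-client` implementation.

```go
package main

import "fmt"

// isAllowedClient reports whether a client type may be created.
// A list containing the single wildcard "*" permits every client type;
// otherwise the client type must appear explicitly in the list.
func isAllowedClient(clientType string, allowedClients []string) bool {
	if len(allowedClients) == 1 && allowedClients[0] == "*" {
		return true
	}
	for _, allowed := range allowedClients {
		if allowed == clientType {
			return true
		}
	}
	return false
}

func main() {
	params := []string{"06-solomachine", "07-tendermint"}
	fmt.Println(isAllowedClient("0x-new-client", params))        // false
	fmt.Println(isAllowedClient("0x-new-client", []string{"*"})) // true
}
```

A new client type therefore either needs to be added to the parameter via governance, or the chain must already run with the wildcard default.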
diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/updates-and-misbehaviour.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/updates-and-misbehaviour.mdx
new file mode 100644
index 00000000..4f2ec6ca
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/developer-guide/updates-and-misbehaviour.mdx
@@ -0,0 +1,105 @@
+---
+title: Handling Updates and Misbehaviour
+description: >-
+  As mentioned before in the documentation about implementing the ConsensusState
+  interface, ClientMessage is an interface used to update an IBC client. This
+  update may be performed by:
+---
+
+As mentioned before in the documentation about [implementing the `ConsensusState` interface](/docs/ibc/v8.5.x/light-clients/developer-guide/consensus-state), [`ClientMessage`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L147) is an interface used to update an IBC client. This update may be performed by:
+
+* a single header,
+* a batch of headers,
+* evidence of misbehaviour,
+* or any type which, when verified, produces a change to the consensus state of the IBC client.
+
+This interface has been purposefully kept generic in order to give the maximum amount of flexibility to the light client implementer.
+
+## Implementing the `ClientMessage` interface
+
+Find the `ClientMessage` interface in `modules/core/exported`:
+
+```go
+type ClientMessage interface {
+	proto.Message
+
+	ClientType() string
+	ValidateBasic() error
+}
+```
+
+The `ClientMessage` will be passed to the client to be used in [`UpdateClient`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L48), which retrieves the `ClientState` by client ID (available in `MsgUpdateClient`). This `ClientState` implements the [`ClientState` interface](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state) for its specific consensus type (e.g. Tendermint).
+
+`UpdateClient` will then handle a number of cases including misbehaviour and/or updating the consensus state, utilizing the specific methods defined in the relevant `ClientState`.
+
+```go
+VerifyClientMessage(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg ClientMessage) error
+CheckForMisbehaviour(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg ClientMessage) bool
+UpdateStateOnMisbehaviour(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg ClientMessage)
+UpdateState(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg ClientMessage) []Height
+```
+
+## Handling updates and misbehaviour
+
+The functions for handling updates to a light client and evidence of misbehaviour are all found in the [`ClientState`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L36) interface, and will be discussed below.
+
+> It is important to note that `Misbehaviour` in this particular context is referring to misbehaviour on the chain level intended to fool the light client. This will be defined by each light client.
+
+## `VerifyClientMessage`
+
+`VerifyClientMessage` must verify a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. To understand how to implement a `ClientMessage`, please refer to the [Implementing the `ClientMessage` interface](#implementing-the-clientmessage-interface) section.
+
+It must handle each type of `ClientMessage` appropriately. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned if the `ClientMessage` fails to verify.
+
+For an example of a `VerifyClientMessage` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/update.go#L20).
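The dispatch over message kinds described above can be sketched with a type switch. This is a simplified, hypothetical example: the types below stand in for the real protobuf-generated `Header` and `Misbehaviour`, and `verifyHeader` is a placeholder for actual signature and commit verification, not the Tendermint implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the protobuf-generated message types of a
// hypothetical light client.
type ClientMessage interface{ ClientType() string }

type Header struct{ Height uint64 }

type Misbehaviour struct{ Header1, Header2 Header }

func (Header) ClientType() string       { return "07-tendermint" }
func (Misbehaviour) ClientType() string { return "07-tendermint" }

// verifyClientMessage dispatches on the concrete message type, since a
// VerifyClientMessage implementation must handle each ClientMessage kind
// appropriately and reject anything it does not recognize.
func verifyClientMessage(msg ClientMessage) error {
	switch m := msg.(type) {
	case Header:
		return verifyHeader(m)
	case Misbehaviour:
		if err := verifyHeader(m.Header1); err != nil {
			return err
		}
		return verifyHeader(m.Header2)
	default:
		return errors.New("unknown client message type")
	}
}

// verifyHeader is a placeholder for real header verification.
func verifyHeader(h Header) error {
	if h.Height == 0 {
		return errors.New("invalid header height")
	}
	return nil
}

func main() {
	fmt.Println(verifyClientMessage(Header{Height: 5}) == nil) // true
}
```

Once a message passes this function, subsequent calls to `CheckForMisbehaviour` and `UpdateState` may trust its contents.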
+
+## `CheckForMisbehaviour`
+
+`CheckForMisbehaviour` checks for evidence of misbehaviour in a `Header` or `Misbehaviour` type. It assumes the `ClientMessage` has already been verified.
+
+For an example of a `CheckForMisbehaviour` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/misbehaviour_handle.go#L19).
+
+> The Tendermint light client [defines `Misbehaviour`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/misbehaviour.go) as two different types of situations: a situation where two conflicting `Header`s with the same height have been submitted to update a client's `ConsensusState` within the same trusting period, or a situation where two conflicting `Header`s have been submitted at different heights but the consensus states are not in the correct monotonic time ordering (a BFT time violation). More explicitly, an update to a new height must have a timestamp greater than the previous consensus state, or, if inserting a consensus state at a past height, its timestamp must be less than those at heights which come after and greater than those at heights which come before.
+
+## `UpdateStateOnMisbehaviour`
+
+`UpdateStateOnMisbehaviour` should perform appropriate state changes on a client state given that misbehaviour has been detected and verified. This method should only be called when misbehaviour is detected, as it does not perform any misbehaviour checks. Notably, it should freeze the client so that calling the `Status` function on the associated client state no longer returns `Active`.
+
+For an example of an `UpdateStateOnMisbehaviour` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/update.go#L199).
+
+## `UpdateState`
+
+`UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`.
It should perform a no-op on duplicate updates.
+
+It assumes the `ClientMessage` has already been verified.
+
+For an example of an `UpdateState` implementation, please check the [Tendermint light client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/update.go#L131).
+
+## Putting it all together
+
+The `02-client` `Keeper` module in ibc-go offers a reference as to how these functions will be used to [update the client](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/02-client/keeper/client.go#L48).
+
+```go expandable
+if err := clientState.VerifyClientMessage(clientMessage); err != nil {
+	return err
+}
+
+foundMisbehaviour := clientState.CheckForMisbehaviour(clientMessage)
+if foundMisbehaviour {
+	clientState.UpdateStateOnMisbehaviour(clientMessage)
+	/ emit misbehaviour event
+	return
+}
+
+clientState.UpdateState(clientMessage) / expects no-op on duplicate header
+/ emit update event
+return
+```
diff --git a/docs/ibc/v8.5.x/light-clients/developer-guide/upgrades.mdx b/docs/ibc/v8.5.x/light-clients/developer-guide/upgrades.mdx
new file mode 100644
index 00000000..35f227ac
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/developer-guide/upgrades.mdx
@@ -0,0 +1,69 @@
+---
+title: Handling Upgrades
+---
+
+It is vital that high-value IBC clients can upgrade along with their underlying chains to avoid disruption to the IBC ecosystem. Thus, IBC client developers will want to implement upgrade functionality to enable clients to maintain connections and channels even across chain upgrades.
+
+## Implementing `VerifyUpgradeAndUpdateState`
+
+The IBC protocol allows client implementations to provide a path to upgrading clients given the upgraded `ClientState`, upgraded `ConsensusState` and proofs for each.
This path is provided in the `VerifyUpgradeAndUpdateState` method:
+
+```go expandable
+/ NOTE: proof heights are not included as upgrade to a new revision is expected to pass only on the last height committed by the current revision. Clients are responsible for ensuring that the planned last height of the current revision is somehow encoded in the proof verification process.
+/ This is to ensure that no premature upgrades occur, since upgrade plans committed to by the counterparty may be cancelled or modified before the last planned height.
+/ If the upgrade is verified, the upgraded client and consensus states must be set in the client store.
+func (cs ClientState) VerifyUpgradeAndUpdateState(
+	ctx sdk.Context,
+	cdc codec.BinaryCodec,
+	store sdk.KVStore,
+	newClient ClientState,
+	newConsState ConsensusState,
+	upgradeClientProof,
+	upgradeConsensusStateProof []byte,
+) error
+```
+
+> Please refer to the [Tendermint light client implementation](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/light-clients/07-tendermint/upgrade.go#L27) as an example for implementation.
+
+It is important to note that light clients **must** handle all management of client and consensus states including the setting of updated `ClientState` and `ConsensusState` in the client store. This can include verifying that the submitted upgraded `ClientState` is of a valid `ClientState` type, that the height of the upgraded client is greater than the height of the current client (in order to preserve BFT monotonic time), or that certain parameters which should not be changed have not been altered in the upgraded `ClientState`.
+
+Developers must ensure that the `MsgUpgradeClient` does not pass until the last height of the old chain has been committed, and after the chain upgrades, the `MsgUpgradeClient` should pass once and only once on all counterparty clients.
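A couple of the checks mentioned above can be sketched in isolation. This is a hedged, self-contained illustration with stand-in types (`Height` here mimics `clienttypes.Height`, and `validateUpgradedClient` is a hypothetical helper, not ibc-go code):

```go
package main

import (
	"errors"
	"fmt"
)

// Height is a simplified stand-in for clienttypes.Height.
type Height struct {
	RevisionNumber, RevisionHeight uint64
}

// GT reports whether h is strictly greater than o, comparing revision
// number first and revision height second.
func (h Height) GT(o Height) bool {
	return h.RevisionNumber > o.RevisionNumber ||
		(h.RevisionNumber == o.RevisionNumber && h.RevisionHeight > o.RevisionHeight)
}

// validateUpgradedClient sketches two checks a light client might perform:
// the upgraded height must exceed the current height (monotonicity), and a
// required chain-chosen parameter (here, a chain ID stand-in) must be set.
func validateUpgradedClient(currentHeight, upgradedHeight Height, upgradedChainID string) error {
	if !upgradedHeight.GT(currentHeight) {
		return errors.New("upgraded client height must be greater than current client height")
	}
	if upgradedChainID == "" {
		return errors.New("upgraded client must set a chain ID")
	}
	return nil
}

func main() {
	current := Height{RevisionNumber: 1, RevisionHeight: 100}
	upgraded := Height{RevisionNumber: 2, RevisionHeight: 1}
	fmt.Println(validateUpgradedClient(current, upgraded, "newchain-2") == nil) // true
}
```

Real implementations perform these checks alongside proof verification against the last committed height of the old revision.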
+
+### Upgrade path
+
+Clients should have **prior knowledge of the merkle path** that the upgraded client and upgraded consensus states will use. The height at which the upgrade has occurred should also be encoded in the proof.
+
+> The Tendermint client implementation accomplishes this by including an `UpgradePath` in the `ClientState` itself, which is used along with the upgrade height to construct the merkle path under which the client state and consensus state are committed.
+
+## Chain specific vs client specific client parameters
+
+Developers should maintain the distinction between client parameters that are uniform across every valid light client of a chain (chain-chosen parameters), and client parameters that are customizable by each individual client (client-chosen parameters), since this distinction is necessary to implement the `ZeroCustomFields` method in the [`ClientState` interface](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state):
+
+```go
+/ Utility function that zeroes out any client customizable fields in client state
+/ Ledger enforced fields are maintained while all custom fields are zero values
+/ Used to verify upgrades
+func (cs ClientState) ZeroCustomFields() ClientState
+```
+
+Developers must ensure that the new client adopts all of the new client parameters that must be uniform across every valid light client of a chain (chain-chosen parameters), while maintaining the client parameters that are customizable by each individual client (client-chosen parameters) from the previous version of the client. `ZeroCustomFields` is a useful utility function to ensure only chain-specific fields are updated during upgrades.
+
+## Security
+
+Upgrades must adhere to the IBC Security Model. IBC does not rely on the assumption of honest relayers for correctness. Thus users should not have to rely on relayers to maintain client correctness and security (though honest relayers must exist to maintain relayer liveness).
While relayers may choose any set of client parameters while creating a new `ClientState`, this still holds under the security model since users can always choose a relayer-created client that suits their security and correctness needs or create a client with their desired parameters if no such client exists. + +However, when upgrading an existing client, one must keep in mind that there are already many users who depend on this client's particular parameters. **We cannot give the upgrading relayer free choice over these parameters once they have already been chosen. This would violate the security model** since users who rely on the client would have to rely on the upgrading relayer to maintain the same level of security. + +Thus, developers must make sure that their upgrade mechanism allows clients to upgrade the chain-specified parameters whenever a chain upgrade changes these parameters (examples in the Tendermint client include `UnbondingPeriod`, `TrustingPeriod`, `ChainID`, `UpgradePath`, etc), while ensuring that the relayer submitting the `MsgUpgradeClient` cannot alter the client-chosen parameters that the users are relying upon (examples in Tendermint client include `TrustLevel`, `MaxClockDrift`, etc). The previous paragraph discusses how `ZeroCustomFields` helps achieve this. + +### Document potential client parameter conflicts during upgrades + +Counterparty clients can upgrade securely by using all of the chain-chosen parameters from the chain-committed `UpgradedClient` and preserving all of the old client-chosen parameters. This enables chains to securely upgrade without relying on an honest relayer, however it can in some cases lead to an invalid final `ClientState` if the new chain-chosen parameters clash with the old client-chosen parameter. This can happen in the Tendermint client case if the upgrading chain lowers the `UnbondingPeriod` (chain-chosen) to a duration below that of a counterparty client's `TrustingPeriod` (client-chosen). 
Such cases should be clearly documented by developers, so that chains know which upgrades should be avoided to prevent this problem. The final upgraded client should also be validated in `VerifyUpgradeAndUpdateState` before returning to ensure that the client does not upgrade to an invalid `ClientState`. diff --git a/docs/ibc/v8.5.x/light-clients/localhost/client-state.mdx b/docs/ibc/v8.5.x/light-clients/localhost/client-state.mdx new file mode 100644 index 00000000..482c7ca8 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/localhost/client-state.mdx @@ -0,0 +1,70 @@ +--- +title: ClientState +description: >- + The 09-localhost ClientState maintains a single field used to track the latest + sequence of the state machine i.e. the height of the blockchain. +--- + +The 09-localhost `ClientState` maintains a single field used to track the latest sequence of the state machine i.e. the height of the blockchain. + +```go +type ClientState struct { + / the latest height of the blockchain + LatestHeight clienttypes.Height +} +``` + +The 09-localhost `ClientState` is instantiated in the `InitGenesis` handler of the 02-client submodule in core IBC. +It calls `CreateLocalhostClient`, declaring a new `ClientState` and initializing it with its own client prefixed store. + +```go +func (k Keeper) + +CreateLocalhostClient(ctx sdk.Context) + +error { + var clientState localhost.ClientState + return clientState.Initialize(ctx, k.cdc, k.ClientStore(ctx, exported.LocalhostClientID), nil) +} +``` + +It is possible to disable the localhost client by removing the `09-localhost` entry from the `allowed_clients` list through governance. + +## Client updates + +The latest height is updated periodically through the ABCI [`BeginBlock`](https://docs.cosmos.network/v0.47/building-modules/beginblock-endblock) interface of the 02-client submodule in core IBC. 
+
+[See `BeginBlocker` in abci.go.](https://github.com/cosmos/ibc-go/blob/v8.5.0/modules/core/02-client/abci.go#L12)
+
+```go
+func BeginBlocker(ctx sdk.Context, k keeper.Keeper) {
+	/ ...
+	if clientState, found := k.GetClientState(ctx, exported.Localhost); found {
+		if k.GetClientStatus(ctx, clientState, exported.Localhost) == exported.Active {
+			k.UpdateLocalhostClient(ctx, clientState)
+		}
+	}
+}
+```
+
+The above calls into the 09-localhost `UpdateState` method of the `ClientState`.
+It retrieves the current block height from the application context and sets the `LatestHeight` of the 09-localhost client.
+
+```go
+func (cs ClientState) UpdateState(ctx sdk.Context, cdc codec.BinaryCodec, clientStore sdk.KVStore, clientMsg exported.ClientMessage) []exported.Height {
+	height := clienttypes.GetSelfHeight(ctx)
+	cs.LatestHeight = height
+
+	clientStore.Set(host.ClientStateKey(), clienttypes.MustMarshalClientState(cdc, &cs))
+
+	return []exported.Height{height}
+}
+```
+
+Note that the 09-localhost `ClientState` is not updated through the 02-client interface leveraged by conventional IBC light clients.
diff --git a/docs/ibc/v8.5.x/light-clients/localhost/connection.mdx b/docs/ibc/v8.5.x/light-clients/localhost/connection.mdx
new file mode 100644
index 00000000..9c2fe5f5
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/localhost/connection.mdx
@@ -0,0 +1,29 @@
+---
+title: Connection
+description: >-
+  The 09-localhost light client module integrates with core IBC through a single
+  sentinel localhost connection. The sentinel ConnectionEnd is stored by default
+  in the core IBC store.
+---
+
+The 09-localhost light client module integrates with core IBC through a single sentinel localhost connection.
+The sentinel `ConnectionEnd` is stored by default in the core IBC store.
+ +This enables channel handshakes to be initiated out of the box by supplying the localhost connection identifier (`connection-localhost`) in the `connectionHops` parameter of `MsgChannelOpenInit`. + +The `ConnectionEnd` is created and set in store via the `InitGenesis` handler of the 03-connection submodule in core IBC. +The `ConnectionEnd` and its `Counterparty` both reference the `09-localhost` client identifier, and share the localhost connection identifier `connection-localhost`. + +```go +/ CreateSentinelLocalhostConnection creates and sets the sentinel localhost connection end in the IBC store. +func (k Keeper) + +CreateSentinelLocalhostConnection(ctx sdk.Context) { + counterparty := types.NewCounterparty(exported.LocalhostClientID, exported.LocalhostConnectionID, commitmenttypes.NewMerklePrefix(k.GetCommitmentPrefix().Bytes())) + connectionEnd := types.NewConnectionEnd(types.OPEN, exported.LocalhostClientID, counterparty, types.GetCompatibleVersions(), 0) + +k.SetConnection(ctx, exported.LocalhostConnectionID, connectionEnd) +} +``` + +Note that connection handshakes are disallowed when using the `09-localhost` client type. diff --git a/docs/ibc/v8.5.x/light-clients/localhost/integration.mdx b/docs/ibc/v8.5.x/light-clients/localhost/integration.mdx new file mode 100644 index 00000000..f2faff57 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/localhost/integration.mdx @@ -0,0 +1,19 @@ +--- +title: Integration +description: >- + The 09-localhost light client module registers codec types within the core IBC + module. This differs from other light client module implementations which are + expected to register codec types using the AppModuleBasic interface. +--- + +The 09-localhost light client module registers codec types within the core IBC module. This differs from other light client module implementations which are expected to register codec types using the `AppModuleBasic` interface. 
+ +The localhost client is implicitly enabled by using the `AllowAllClients` wildcard (`"*"`) in the 02-client submodule default value for param [`allowed_clients`](https://github.com/cosmos/ibc-go/blob/v7.0.0/proto/ibc/core/client/v1/client.proto#L102). + +```go +/ DefaultAllowedClients are the default clients for the AllowedClients parameter. +/ By default it allows all client types. +var DefaultAllowedClients = []string{ + AllowAllClients +} +``` diff --git a/docs/ibc/v8.5.x/light-clients/localhost/overview.mdx b/docs/ibc/v8.5.x/light-clients/localhost/overview.mdx new file mode 100644 index 00000000..5534a5a8 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/localhost/overview.mdx @@ -0,0 +1,40 @@ +--- +title: Overview +--- + +## Overview + +## Synopsis + +Learn about the 09-localhost light client module. + +The 09-localhost light client module implements a localhost loopback client with the ability to send and receive IBC packets to and from the same state machine. + +### Context + +In a multichain environment, application developers will be used to developing cross-chain applications through IBC. From their point of view, whether or not they are interacting with multiple modules on the same chain or on different chains should not matter. The localhost client module enables a unified interface to interact with different applications on a single chain, using the familiar IBC application layer semantics. + +### Implementation + +There exists a [single sentinel `ClientState`](/docs/ibc/v8.5.x/light-clients/localhost/client-state) instance with the client identifier `09-localhost`. + +To supplement this, a [sentinel `ConnectionEnd` is stored in core IBC](/docs/ibc/v8.5.x/light-clients/localhost/connection) state with the connection identifier `connection-localhost`. This enables IBC applications to create channels directly on top of the sentinel connection which leverage the 09-localhost loopback functionality. 
+ +[State verification](/docs/ibc/v8.5.x/light-clients/localhost/state-verification) for channel state in handshakes or processing packets is reduced in complexity, the `09-localhost` client can simply compare bytes stored under the standardized key paths. + +### Localhost vs _regular_ client + +The localhost client aims to provide a unified approach to interacting with applications on a single chain, as the IBC application layer provides for cross-chain interactions. To achieve this unified interface though, there are a number of differences under the hood compared to a 'regular' IBC client (excluding `06-solomachine` and `09-localhost` itself). + +The table below lists some important differences: + +| | Regular client | Localhost | +| -------------------------------------------- | --------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Number of clients | Many instances of a client _type_ corresponding to different counterparties | A single sentinel client with the client identifier `09-localhost` | +| Client creation | Relayer (permissionless) | `ClientState` is instantiated in the `InitGenesis` handler of the 02-client submodule in core IBC | +| Client updates | Relayer submits headers using `MsgUpdateClient` | Latest height is updated periodically through the ABCI [`BeginBlock`](https://docs.cosmos.network/v0.47/building-modules/beginblock-endblock) interface of the 02-client submodule in core IBC | +| Number of connections | Many connections, 1 (or more) per client | A single sentinel connection with the connection identifier `connection-localhost` | +| Connection creation | Connection handshake, provided underlying client | Sentinel `ConnectionEnd` is created and set in store in the `InitGenesis` handler of the 03-connection submodule in core 
IBC | +| Counterparty | Underlying client, representing another chain | Client with identifier `09-localhost` in same chain | +| `VerifyMembership` and `VerifyNonMembership` | Performs proof verification using consensus state roots | Performs state verification using key-value lookups in the core IBC store | +| Integration | Expected to register codec types using the `AppModuleBasic` interface | Registers codec types within the core IBC module | diff --git a/docs/ibc/v8.5.x/light-clients/localhost/state-verification.mdx b/docs/ibc/v8.5.x/light-clients/localhost/state-verification.mdx new file mode 100644 index 00000000..5cd636cf --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/localhost/state-verification.mdx @@ -0,0 +1,21 @@ +--- +title: State Verification +description: >- + The localhost client handles state verification through the ClientState + interface methods VerifyMembership and VerifyNonMembership by performing + read-only operations directly on the core IBC store. +--- + +The localhost client handles state verification through the `ClientState` interface methods `VerifyMembership` and `VerifyNonMembership` by performing read-only operations directly on the core IBC store. + +When verifying channel state in handshakes or processing packets the `09-localhost` client can simply compare bytes stored under the standardized key paths defined by [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). + +For existence proofs via `VerifyMembership` the 09-localhost client will retrieve the value stored under the provided key path and compare it against the value provided by the caller. In contrast, non-existence proofs via `VerifyNonMembership` assert the absence of a value at the provided key path. + +Relayers are expected to provide a sentinel proof when sending IBC messages. Submission of nil or empty proofs is disallowed in core IBC messaging. +The 09-localhost light client module defines a `SentinelProof` as a single byte. 
Localhost client state verification will fail if the sentinel proof value is not provided. + +```go +var SentinelProof = []byte{0x01 +} +``` diff --git a/docs/ibc/v8.5.x/light-clients/solomachine/concepts.mdx b/docs/ibc/v8.5.x/light-clients/solomachine/concepts.mdx new file mode 100644 index 00000000..1026dbf9 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/solomachine/concepts.mdx @@ -0,0 +1,166 @@ +--- +title: Concepts +description: >- + The ClientState for a solo machine light client stores the latest sequence, + the frozen sequence, the latest consensus state, and client flag indicating if + the client should be allowed to be updated after a governance proposal. +--- + +## Client State + +The `ClientState` for a solo machine light client stores the latest sequence, the frozen sequence, +the latest consensus state, and client flag indicating if the client should be allowed to be updated +after a governance proposal. + +If the client is not frozen then the frozen sequence is 0. + +## Consensus State + +The consensus states stores the public key, diversifier, and timestamp of the solo machine light client. + +The diversifier is used to prevent accidental misbehaviour if the same public key is used across +different chains with the same client identifier. It should be unique to the chain the light client +is used on. + +## Public Key + +The public key can be a single public key or a multi-signature public key. The public key type used +must fulfill the tendermint public key interface (this will become the SDK public key interface in the +near future). The public key must be registered on the application codec otherwise encoding/decoding +errors will arise. The public key stored in the consensus state is represented as a protobuf `Any`. +This allows for flexibility in what other public key types can be supported in the future. 
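The registration requirement above can be illustrated with a toy `Any`-style registry. This sketch uses only stdlib types and hypothetical type URLs in place of the SDK's actual `codectypes` machinery; it demonstrates why an unregistered public key type produces a decoding error.

```go
package main

import (
	"errors"
	"fmt"
)

// Any is a minimal stand-in for protobuf Any: a type URL plus raw bytes.
type Any struct {
	TypeURL string
	Value   []byte
}

// registry maps type URLs to unmarshal functions, mimicking an
// interface registry on an application codec.
var registry = map[string]func([]byte) (any, error){}

func register(typeURL string, fn func([]byte) (any, error)) {
	registry[typeURL] = fn
}

// unpack fails for unregistered type URLs, which is the failure mode the
// text warns about when a public key type is not registered on the codec.
func unpack(a Any) (any, error) {
	fn, ok := registry[a.TypeURL]
	if !ok {
		return nil, errors.New("unable to resolve type URL " + a.TypeURL)
	}
	return fn(a.Value)
}

func main() {
	register("/cosmos.crypto.secp256k1.PubKey", func(b []byte) (any, error) { return b, nil })

	_, err := unpack(Any{TypeURL: "/cosmos.crypto.secp256k1.PubKey"})
	fmt.Println(err == nil) // true

	_, err = unpack(Any{TypeURL: "/example.unregistered.PubKey"})
	fmt.Println(err != nil) // true: unregistered type fails to decode
}
```

This is why a new multi-signature or single key type must be registered before a solo machine consensus state carrying it can be decoded.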
+ +## Counterparty Verification + +The solo machine light client can verify counterparty client state, consensus state, connection state, +channel state, packet commitments, packet acknowledgements, packet receipt absence, +and the next sequence receive. At the end of each successful verification call the light +client sequence number will be incremented. + +Successful verification requires the current public key to sign over the proof. + +## Proofs + +A solo machine proof should verify that the solomachine public key signed +over some specified data. The format for generating marshaled proofs for +the SDK's implementation of solo machine is as follows: + +1. Construct the data using the associated protobuf definition and marshal it. + +For example: + +```go +data := &ClientStateData{ + Path: []byte(path.String()), + ClientState: protoAny, +} + +dataBz, err := cdc.Marshal(data) +``` + +The helper functions `...DataBytes()` in [proof.go](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine/proof.go) handle this +functionality. + +2. Construct the `SignBytes` and marshal it. + +For example: + +```go +signBytes := &SignBytes{ + Sequence: sequence, + Timestamp: timestamp, + Diversifier: diversifier, + DataType: CLIENT, + Data: dataBz, +} + +signBz, err := cdc.Marshal(signBytes) +``` + +The helper functions `...SignBytes()` in [proof.go](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine/proof.go) handle this functionality. +The `DataType` field is used to disambiguate what type of data was signed to prevent potential +proto encoding overlap. + +3. Sign the sign bytes. Embed the signatures into either `SingleSignatureData` or `MultiSignatureData`. + Convert the `SignatureData` to proto and marshal it. 
+ +For example: + +```go +sig, err := key.Sign(signBz) + sigData := &signing.SingleSignatureData{ + Signature: sig, +} + protoSigData := signing.SignatureDataToProto(sigData) + +bz, err := cdc.Marshal(protoSigData) +``` + +4. Construct a `TimestampedSignatureData` and marshal it. The marshaled result can be passed in + as the proof parameter to the verification functions. + +For example: + +```go +timestampedSignatureData := &solomachine.TimestampedSignatureData{ + SignatureData: sigData, + Timestamp: solomachine.Time, +} + +proof, err := cdc.Marshal(timestampedSignatureData) +``` + +NOTE: At the end of this process, the sequence associated with the key needs to be updated. +The sequence must be incremented each time proof is generated. + +## Updates By Header + +An update by a header will only succeed if: + +* the header provided is parseable to solo machine header +* the header sequence matches the current sequence +* the header timestamp is greater than or equal to the consensus state timestamp +* the currently registered public key generated the proof + +If the update is successful: + +* the public key is updated +* the diversifier is updated +* the timestamp is updated +* the sequence is incremented by 1 +* the new consensus state is set in the client state + +## Updates By Proposal + +An update by a governance proposal will only succeed if: + +* the substitute provided is parseable to solo machine client state +* the new consensus state public key does not equal the current consensus state public key + +If the update is successful: + +* the subject client state is updated to the substitute client state +* the subject consensus state is updated to the substitute consensus state +* the client is unfrozen (if it was previously frozen) + +NOTE: Previously, `AllowUpdateAfterProposal` was used to signal the update/recovery options for the solo machine client. 
However, this has now been deprecated because a code migration can overwrite the client and consensus states regardless of the value of this parameter. If governance would vote to overwrite a client or consensus state, it is likely that governance would also be willing to perform a code migration to do the same. + +## Misbehaviour + +Misbehaviour handling will only succeed if: + +* the misbehaviour provided is parseable to solo machine misbehaviour +* the client is not already frozen +* the current public key signed over two unique data messages at the same sequence and diversifier. + +If the misbehaviour is successfully processed: + +* the client is frozen by setting the frozen sequence to the misbehaviour sequence + +NOTE: Misbehaviour processing is data processing order dependent. A misbehaving solo machine +could update to a new public key to prevent being frozen before misbehaviour is submitted. + +## Upgrades + +Upgrades to solo machine light clients are not supported since an entirely different type of +public key can be set using normal client updates. diff --git a/docs/ibc/v8.5.x/light-clients/solomachine/solomachine.mdx b/docs/ibc/v8.5.x/light-clients/solomachine/solomachine.mdx new file mode 100644 index 00000000..cc8af9c4 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/solomachine/solomachine.mdx @@ -0,0 +1,23 @@ +--- +title: Solomachine +description: >- + This paper defines the implementation of the ICS06 protocol on the Cosmos SDK. + For the general specification please refer to the ICS06 Specification. +--- + +## Abstract + +This paper defines the implementation of the ICS06 protocol on the Cosmos SDK. For the general +specification please refer to the [ICS06 Specification](https://github.com/cosmos/ibc/tree/master/spec/client/ics-006-solo-machine-client). + +This implementation of a solo machine light client supports single and multi-signature public +keys. The client is capable of handling public key updates by header and governance proposals. 
+The light client is capable of processing client misbehaviour. Proofs of the counterparty state
+are generated by the solo machine client by signing over the desired state with a certain sequence,
+diversifier, and timestamp.
+
+## Contents
+
+1. **[Concepts](/docs/ibc/v8.5.x/light-clients/solomachine/concepts)**
+2. **[State](/docs/ibc/v8.5.x/light-clients/solomachine/state)**
+3. **[State Transitions](/docs/ibc/v8.5.x/light-clients/solomachine/state_transitions)**
diff --git a/docs/ibc/v8.5.x/light-clients/solomachine/state.mdx b/docs/ibc/v8.5.x/light-clients/solomachine/state.mdx
new file mode 100644
index 00000000..7827215a
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/solomachine/state.mdx
@@ -0,0 +1,10 @@
+---
+title: State
+description: >-
+  The solo machine light client will only store consensus states for each update
+  by a header or a governance proposal. The latest client state is also
+  maintained in the store.
+---
+
+The solo machine light client will only store consensus states for each update by a header
+or a governance proposal. The latest client state is also maintained in the store.
diff --git a/docs/ibc/v8.5.x/light-clients/solomachine/state_transitions.mdx b/docs/ibc/v8.5.x/light-clients/solomachine/state_transitions.mdx
new file mode 100644
index 00000000..e7a09a62
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/solomachine/state_transitions.mdx
@@ -0,0 +1,38 @@
+---
+title: State Transitions
+description: 'Successful state verification by a solo machine light client will result in:'
+---
+
+## Client State Verification Functions
+
+Successful state verification by a solo machine light client will result in:
+
+* the sequence being incremented by 1.
+
+## Update By Header
+
+A successful update of a solo machine light client by a header will result in:
+
+* the public key being updated to the new public key provided by the header.
+* the diversifier being updated to the new diversifier provided by the header.
+* the timestamp being updated to the new timestamp provided by the header.
+* the sequence being incremented by 1.
+* the consensus state being updated (consensus state stores the public key, diversifier, and timestamp).
+
+## Update By Governance Proposal
+
+A successful update of a solo machine light client by a governance proposal will result in:
+
+* the client state being updated to the substitute client state.
+* the consensus state being updated to the substitute consensus state (consensus state stores the public key, diversifier, and timestamp).
+* the frozen sequence being set to zero (client is unfrozen if it was previously frozen).
+
+## Upgrade
+
+Client upgrades are not supported for the solo machine light client. No state transition occurs.
+
+## Misbehaviour
+
+Successful misbehaviour processing of a solo machine light client will result in:
+
+* the frozen sequence being set to the sequence the misbehaviour occurred at.
diff --git a/docs/ibc/v8.5.x/light-clients/wasm/client.mdx b/docs/ibc/v8.5.x/light-clients/wasm/client.mdx
new file mode 100644
index 00000000..9f3d4651
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/wasm/client.mdx
@@ -0,0 +1,149 @@
+---
+title: Client
+description: >-
+  A user can query and interact with the 08-wasm module using the CLI. Use the
+  --help flag to discover the available commands:
+---
+
+## CLI
+
+A user can query and interact with the `08-wasm` module using the CLI. Use the `--help` flag to discover the available commands:
+
+### Transactions
+
+The `tx` commands allow users to interact with the `08-wasm` submodule.
+
+```shell
+simd tx ibc-wasm --help
+```
+
+#### `store-code`
+
+The `store-code` command allows users to submit a governance proposal with a `MsgStoreCode` to store the byte code of a Wasm light client contract.
+
+```shell
+simd tx ibc-wasm store-code [path/to/wasm-file] [flags]
+```
+
+`path/to/wasm-file` is the path to the `.wasm` or `.wasm.gz` file.
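As an aside, the two accepted file formats can be told apart programmatically. The sketch below is purely illustrative (the `isGzip` helper is not part of the module): a gzipped `.wasm.gz` payload starts with the gzip magic bytes, while raw Wasm starts with the `\x00asm` header.

```go
package main

import (
	"bytes"
	"fmt"
)

// isGzip reports whether data starts with the gzip magic bytes (0x1f 0x8b).
// A .wasm.gz payload starts with these; raw Wasm byte code starts with "\x00asm".
func isGzip(data []byte) bool {
	return bytes.HasPrefix(data, []byte{0x1f, 0x8b})
}

func main() {
	rawWasm := []byte{0x00, 0x61, 0x73, 0x6d} // "\x00asm" magic header
	gzipped := []byte{0x1f, 0x8b, 0x08}       // gzip header prefix

	fmt.Println(isGzip(rawWasm), isGzip(gzipped)) // false true
}
```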
+
+#### `migrate-contract`
+
+The `migrate-contract` command allows users to broadcast a transaction with a `MsgMigrateContract` to migrate the contract for a given light client to a new byte code denoted by the given checksum.
+
+```shell
+simd tx ibc-wasm migrate-contract [client-id] [checksum] [migrate-msg]
+```
+
+The migrate message must not be empty and is expected to be a JSON-encoded string.
+
+### Query
+
+The `query` commands allow users to query `08-wasm` state.
+
+```shell
+simd query ibc-wasm --help
+```
+
+#### `checksums`
+
+The `checksums` command allows users to query the list of checksums of Wasm light client contracts stored in the Wasm VM via the `MsgStoreCode`. The checksums are hex-encoded.
+
+```shell
+simd query ibc-wasm checksums [flags]
+```
+
+Example:
+
+```shell
+simd query ibc-wasm checksums
+```
+
+Example Output:
+
+```shell
+checksums:
+- c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64
+pagination:
+  next_key: null
+  total: "1"
+```
+
+#### `code`
+
+The `code` command allows users to query the Wasm byte code of a light client contract given the provided input checksum.
+
+```shell
+simd query ibc-wasm code [checksum]
+```
+
+Example:
+
+```shell
+simd query ibc-wasm code c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64
+```
+
+Example Output:
+
+```shell
+code: AGFzb...AqBBE=
+```
+
+## gRPC
+
+A user can query the `08-wasm` module using gRPC endpoints.
+
+### `Checksums`
+
+The `Checksums` endpoint allows users to query the list of checksums of Wasm light client contracts stored in the Wasm VM via the `MsgStoreCode`.
+
+```shell
+ibc.lightclients.wasm.v1.Query/Checksums
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{}' \
+  localhost:9090 \
+  ibc.lightclients.wasm.v1.Query/Checksums
+```
+
+Example output:
+
+```shell
+{
+  "checksums": [
+    "c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64"
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+### `Code`
+
+The `Code` endpoint allows users to query the Wasm byte code of a light client contract given the provided input checksum.
+
+```shell
+ibc.lightclients.wasm.v1.Query/Code
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"checksum":"c64f75091a6195b036f472cd8c9f19a56780b9eac3c3de7ced0ec2e29e985b64"}' \
+  localhost:9090 \
+  ibc.lightclients.wasm.v1.Query/Code
+```
+
+Example output:
+
+```shell
+{
+  "code": "AGFzb...AqBBE="
+}
+```
diff --git a/docs/ibc/v8.5.x/light-clients/wasm/concepts.mdx b/docs/ibc/v8.5.x/light-clients/wasm/concepts.mdx
new file mode 100644
index 00000000..c03b1817
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/wasm/concepts.mdx
@@ -0,0 +1,74 @@
+---
+title: Concepts
+description: >-
+  Learn about the differences between a proxy light client and a Wasm light
+  client.
+---
+
+Learn about the differences between a proxy light client and a Wasm light client.
+
+## Proxy light client
+
+The `08-wasm` module is not a regular light client in the same sense as, for example, the 07-tendermint light client. `08-wasm` is instead a _proxy_ light client module: it acts as a proxy to the actual light client implementations. The module acts as a wrapper for the light clients uploaded as Wasm byte code and delegates all operations to them (i.e. `08-wasm` just passes through the requests to the Wasm light clients). Still, the `08-wasm` module implements all the required interfaces necessary to integrate with core IBC, so that 02-client can call into it as it would for any other light client module.
These interfaces are `ClientState`, `ConsensusState` and `ClientMessage`, and we will describe them in the context of `08-wasm` in the following sections. For more information about this set of interfaces, please read section [Overview of the light client module developer guide](/docs/ibc/v8.5.x/light-clients/developer-guide/overview#overview). + +### `ClientState` + +The `08-wasm`'s `ClientState` data structure contains three fields: + +- `Data` contains the bytes of the Protobuf-encoded client state of the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes for a [GRANDPA client state](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L35-L60). +- `Checksum` is the sha256 hash of the Wasm contract's byte code. This hash is used as an identifier to call the right contract. +- `LatestHeight` is the latest height of the counterparty state machine (i.e. the height of the blockchain), whose consensus state the light client tracks. + +```go +type ClientState struct { + / bytes encoding the client state of the underlying + / light client implemented as a Wasm contract + Data []byte + / sha256 hash of Wasm contract byte code + Checksum []byte + / latest height of the counterparty ledger + LatestHeight types.Height +} +``` + +See section [`ClientState` of the light client module developer guide](/docs/ibc/v8.5.x/light-clients/developer-guide/overview#clientstate) for more information about the `ClientState` interface. + +### `ConsensusState` + +The `08-wasm`'s `ConsensusState` data structure maintains one field: + +- `Data` contains the bytes of the Protobuf-encoded consensus state of the underlying light client implemented as a Wasm contract. 
For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes for a [GRANDPA consensus state](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L87-L94).
+
+```go
+type ConsensusState struct {
+	/ bytes encoding the consensus state of the underlying light client
+	/ implemented as a Wasm contract.
+	Data []byte
+}
+```
+
+See section [`ConsensusState` of the light client module developer guide](/docs/ibc/v8.5.x/light-clients/developer-guide/overview#consensusstate) for more information about the `ConsensusState` interface.
+
+### `ClientMessage`
+
+`ClientMessage` is used for performing updates to a `ClientState` stored on chain. The `08-wasm`'s `ClientMessage` data structure maintains one field:
+
+- `Data` contains the bytes of the Protobuf-encoded header(s) or misbehaviour for the underlying light client implemented as a Wasm contract. For example, if the Wasm light client contract implements the GRANDPA light client algorithm, then `Data` will contain the bytes of either [header](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L96-L104) or [misbehaviour](https://github.com/ComposableFi/composable-ibc/blob/02ce69e2843e7986febdcf795f69a757ce569272/light-clients/ics10-grandpa/src/proto/grandpa.proto#L106-L112) for a GRANDPA light client.
+
+```go
+type ClientMessage struct {
+	/ bytes encoding the header(s) or misbehaviour for the underlying light client
+	/ implemented as a Wasm contract.
+	Data []byte
+}
+```
+
+See section [`ClientMessage` of the light client module developer guide](/docs/ibc/v8.5.x/light-clients/developer-guide/overview#clientmessage) for more information about the `ClientMessage` interface.
+
+## Wasm light client
+
+The actual light client can be implemented in any language that compiles to Wasm and implements the interfaces of a [CosmWasm](https://docs.cosmwasm.com/docs/) contract. Even though in theory other languages could be used, in practice (at least for the time being) the most suitable language is Rust, since there is already good support for developing CosmWasm smart contracts in it.
+
+At the time of writing there are two contracts available: one for [Tendermint](https://github.com/ComposableFi/composable-ibc/tree/master/light-clients/ics07-tendermint-cw) and one for [GRANDPA](https://github.com/ComposableFi/composable-ibc/tree/master/light-clients/ics10-grandpa-cw) (which is being used in production in [Composable Finance's Centauri bridge](https://github.com/ComposableFi/composable-ibc)). Others (e.g. for Near) are in development.
diff --git a/docs/ibc/v8.5.x/light-clients/wasm/contracts.mdx b/docs/ibc/v8.5.x/light-clients/wasm/contracts.mdx
new file mode 100644
index 00000000..980716f2
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/wasm/contracts.mdx
@@ -0,0 +1,111 @@
+---
+title: Contracts
+description: >-
+  Learn about the expected behaviour of Wasm light client contracts and the
+  interface between them and 08-wasm.
+---
+
+Learn about the expected behaviour of Wasm light client contracts and the interface between them and `08-wasm`.
+
+## API
+
+The `08-wasm` light client proxy performs calls to the Wasm light client via the Wasm VM. The calls require as input JSON-encoded payload messages that fall in the three categories described in the next sections.
+
+## `InstantiateMessage`
+
+This is the message sent to the contract's `instantiate` entry point. It contains the bytes of the protobuf-encoded client and consensus states of the underlying light client, both provided in [`MsgCreateClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L40-L52).
Please note that the bytes contained within the JSON message are represented as base64-encoded strings.
+
+```go
+type InstantiateMessage struct {
+	ClientState    []byte `json:"client_state"`
+	ConsensusState []byte `json:"consensus_state"`
+	Checksum       []byte `json:"checksum"`
+}
+```
+
+The Wasm light client contract is expected to store the client and consensus state in the corresponding keys of the client-prefixed store.
+
+## `QueryMsg`
+
+`QueryMsg` acts as a discriminated union type that is used to encode the messages that are sent to the contract's `query` entry point. Only one of the fields of the type should be set at a time, so that the other fields are omitted in the encoded JSON and the payload can be correctly translated to the corresponding element of the enumeration in Rust.
+
+```go
+type QueryMsg struct {
+	Status               *StatusMsg               `json:"status,omitempty"`
+	ExportMetadata       *ExportMetadataMsg       `json:"export_metadata,omitempty"`
+	TimestampAtHeight    *TimestampAtHeightMsg    `json:"timestamp_at_height,omitempty"`
+	VerifyClientMessage  *VerifyClientMessageMsg  `json:"verify_client_message,omitempty"`
+	CheckForMisbehaviour *CheckForMisbehaviourMsg `json:"check_for_misbehaviour,omitempty"`
+}
+```
+
+```rust
+#[cw_serde]
+pub enum QueryMsg {
+    Status(StatusMsg),
+    ExportMetadata(ExportMetadataMsg),
+    TimestampAtHeight(TimestampAtHeightMsg),
+    VerifyClientMessage(VerifyClientMessageRaw),
+    CheckForMisbehaviour(CheckForMisbehaviourMsgRaw),
+}
+```
+
+To learn what is expected from the Wasm light client contract when processing each message, please read the corresponding section of the [Light client developer guide](/docs/ibc/v8.5.x/light-clients/developer-guide/overview):
+
+- For `StatusMsg`, see the section [`Status` method](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state#status-method).
+- For `ExportMetadataMsg`, see the section [Genesis metadata](/docs/ibc/v8.5.x/light-clients/developer-guide/genesis#genesis-metadata).
+- For `TimestampAtHeightMsg`, see the section [`GetTimestampAtHeight` method](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state#gettimestampatheight-method).
+- For `VerifyClientMessageMsg`, see the section [`VerifyClientMessage`](/docs/ibc/v8.5.x/light-clients/developer-guide/updates-and-misbehaviour#verifyclientmessage).
+- For `CheckForMisbehaviourMsg`, see the section [`CheckForMisbehaviour` method](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state#checkformisbehaviour-method).
+
+## `SudoMsg`
+
+`SudoMsg` acts as a discriminated union type that is used to encode the messages that are sent to the contract's `sudo` entry point. Only one of the fields of the type should be set at a time, so that the other fields are omitted in the encoded JSON and the payload can be correctly translated to the corresponding element of the enumeration in Rust.
+
+The `sudo` entry point is able to perform state-changing writes in the client-prefixed store.
+
+```go
+type SudoMsg struct {
+	UpdateState                 *UpdateStateMsg                 `json:"update_state,omitempty"`
+	UpdateStateOnMisbehaviour   *UpdateStateOnMisbehaviourMsg   `json:"update_state_on_misbehaviour,omitempty"`
+	VerifyUpgradeAndUpdateState *VerifyUpgradeAndUpdateStateMsg `json:"verify_upgrade_and_update_state,omitempty"`
+	VerifyMembership            *VerifyMembershipMsg            `json:"verify_membership,omitempty"`
+	VerifyNonMembership         *VerifyNonMembershipMsg         `json:"verify_non_membership,omitempty"`
+	MigrateClientStore          *MigrateClientStoreMsg          `json:"migrate_client_store,omitempty"`
+}
+```
+
+```rust
+#[cw_serde]
+pub enum SudoMsg {
+    UpdateState(UpdateStateMsgRaw),
+    UpdateStateOnMisbehaviour(UpdateStateOnMisbehaviourMsgRaw),
+    VerifyUpgradeAndUpdateState(VerifyUpgradeAndUpdateStateMsgRaw),
+    VerifyMembership(VerifyMembershipMsgRaw),
+    VerifyNonMembership(VerifyNonMembershipMsgRaw),
+    MigrateClientStore(MigrateClientStoreMsgRaw),
+}
+```
+
+To learn what is expected from the Wasm light client contract when processing each message,
please read the corresponding section of the [Light client developer guide](/docs/ibc/v8.5.x/light-clients/developer-guide/overview):
+
+- For `UpdateStateMsg`, see the section [`UpdateState`](/docs/ibc/v8.5.x/light-clients/developer-guide/updates-and-misbehaviour#updatestate).
+- For `UpdateStateOnMisbehaviourMsg`, see the section [`UpdateStateOnMisbehaviour`](/docs/ibc/v8.5.x/light-clients/developer-guide/updates-and-misbehaviour#updatestateonmisbehaviour).
+- For `VerifyUpgradeAndUpdateStateMsg`, see the section [Implementing `VerifyUpgradeAndUpdateState`](/docs/ibc/v8.5.x/light-clients/developer-guide/upgrades#implementing-verifyupgradeandupdatestate).
+- For `VerifyMembershipMsg`, see the section [`VerifyMembership` method](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state#verifymembership-method).
+- For `VerifyNonMembershipMsg`, see the section [`VerifyNonMembership` method](/docs/ibc/v8.5.x/light-clients/developer-guide/client-state#verifynonmembership-method).
+- For `MigrateClientStoreMsg`, see the section [Implementing `CheckSubstituteAndUpdateState`](/docs/ibc/v8.5.x/light-clients/developer-guide/proposals#implementing-checksubstituteandupdatestate).
+
+### Migration
+
+The `08-wasm` proxy light client exposes the `MigrateContract` RPC endpoint that can be used to migrate a given Wasm light client contract (specified by the client identifier) to a new Wasm byte code (specified by the hash of the byte code). The expected use case for this RPC endpoint is to enable contracts to migrate to new byte code in case the current byte code is found to have a bug or vulnerability. The Wasm byte code that contracts are migrated to has to be uploaded beforehand using `MsgStoreCode`, and it must implement the `migrate` entry point. See section [`MsgMigrateContract`](/docs/ibc/v8.5.x/apps/interchain-accounts/messages#msgmigratecontract) for information about the request message for this RPC endpoint.
+
+## Expected behaviour
+
+The `08-wasm` proxy light client module expects the following behaviour from the Wasm light client contracts when executing messages that perform state-changing writes:
+
+- The contract must not delete the client state from the store.
+- The contract must not change the client state to a client state of another type.
+- The contract must not change the checksum in the client state.
+
+Any violation of these rules will result in an error returned from `08-wasm` that will abort the transaction.
diff --git a/docs/ibc/v8.5.x/light-clients/wasm/events.mdx b/docs/ibc/v8.5.x/light-clients/wasm/events.mdx
new file mode 100644
index 00000000..66223443
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/wasm/events.mdx
@@ -0,0 +1,22 @@
+---
+title: Events
+description: 'The 08-wasm module emits the following events:'
+---
+
+The `08-wasm` module emits the following events:
+
+## `MsgStoreCode`
+
+| Type              | Attribute Key  | Attribute Value          |
+| ----------------- | -------------- | ------------------------ |
+| store\_wasm\_code | wasm\_checksum | `{hex.Encode(checksum)}` |
+| message           | module         | 08-wasm                  |
+
+## `MsgMigrateContract`
+
+| Type              | Attribute Key  | Attribute Value             |
+| ----------------- | -------------- | --------------------------- |
+| migrate\_contract | client\_id     | `{clientId}`                |
+| migrate\_contract | wasm\_checksum | `{hex.Encode(checksum)}`    |
+| migrate\_contract | new\_checksum  | `{hex.Encode(newChecksum)}` |
+| message           | module         | 08-wasm                     |
diff --git a/docs/ibc/v8.5.x/light-clients/wasm/governance.mdx b/docs/ibc/v8.5.x/light-clients/wasm/governance.mdx
new file mode 100644
index 00000000..5c563db9
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/wasm/governance.mdx
@@ -0,0 +1,125 @@
+---
+title: Governance
+description: >-
+  Learn how to upload Wasm light client byte code on a chain, and how to migrate
+  an existing Wasm light client contract.
+---
+
+Learn how to upload Wasm light client byte code on a chain, and how to migrate an existing Wasm light client contract.
+
+## Setting an authority
+
+Both the storage of Wasm light client byte code and the migration of an existing Wasm light client contract are permissioned (i.e. only allowed to an authority such as governance). The designated authority is specified when instantiating `08-wasm`'s keeper: both [`NewKeeperWithVM`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47) and [`NewKeeperWithConfig`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L88-L96) constructor functions accept an `authority` argument that must be the address of the authorized actor. For example, in `app.go`, when instantiating the keeper, you can pass the address of the governance module:
+
+```go expandable
+/ app.go
+import (
+
+	...
+	"github.com/cosmos/cosmos-sdk/runtime"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+	govtypes "github.com/cosmos/cosmos-sdk/x/gov/types"
+
+	ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/keeper"
+	ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types"
+	...
+)
+
+/ app.go
+app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM(
+	appCodec,
+	runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]),
+	app.IBCKeeper.ClientKeeper,
+	authtypes.NewModuleAddress(govtypes.ModuleName).String(), / authority
+	wasmVM,
+	app.GRPCQueryRouter(),
+)
+```
+
+## Storing new Wasm light client byte code
+
+If governance is the allowed authority, the governance v1 proposal that needs to be submitted to upload a new light client contract should contain the message [`MsgStoreCode`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L23-L30) with the base64-encoded byte code of the Wasm contract. Use the following CLI command and JSON as an example:
+
+```shell
+simd tx gov submit-proposal proposal.json --from <key_or_address>
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "title": "Upload IBC Wasm light client",
+  "summary": "Upload wasm client",
+  "messages": [
+    {
+      "@type": "/ibc.lightclients.wasm.v1.MsgStoreCode",
+      "signer": "cosmos1...", / the authority address (e.g. the gov module account address)
+      "wasm_byte_code": "YWJ...PUB+" / standard base64 encoding of the Wasm contract byte code
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "100stake"
+}
+```
+
+To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal).
+
+Alternatively, the process of submitting the proposal may be simpler if you use the CLI command `store-code`. This CLI command accepts as argument the file of the Wasm light client contract and takes care of constructing the proposal message with `MsgStoreCode` and broadcasting it. See section [`store-code`](/docs/ibc/v8.5.x/light-clients/wasm/client#store-code) for more information.
+
+## Migrating an existing Wasm light client contract
+
+If governance is the allowed authority, the governance v1 proposal that needs to be submitted to migrate an existing Wasm light client contract should contain the message [`MsgMigrateContract`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L52-L63) with the checksum of the Wasm byte code to migrate to. Use the following CLI command and JSON as an example:
+
+```shell
+simd tx gov submit-proposal proposal.json --from <key_or_address>
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "title": "Migrate IBC Wasm light client",
+  "summary": "Migrate wasm client",
+  "messages": [
+    {
+      "@type": "/ibc.lightclients.wasm.v1.MsgMigrateContract",
+      "signer": "cosmos1...", / the authority address (e.g. the gov module account address)
+      "client_id": "08-wasm-1", / client identifier of the Wasm light client contract that will be migrated
+      "checksum": "a8ad...4dc0", / SHA-256 hash of the Wasm byte code to migrate to, previously stored with MsgStoreCode
+      "msg": "{}" / JSON-encoded message to be passed to the contract on migration
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "100stake"
+}
+```
+
+To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal).
+
+## Removing an existing checksum
+
+If governance is the allowed authority, the governance v1 proposal that needs to be submitted to remove a specific checksum from the list of allowed checksums should contain the message [`MsgRemoveChecksum`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/proto/ibc/lightclients/wasm/v1/tx.proto#L39-L46) with the checksum (of a corresponding Wasm byte code).
Use the following CLI command and JSON as an example:
+
+```shell
+simd tx gov submit-proposal proposal.json --from <key_or_address>
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "title": "Remove checksum of Wasm light client byte code",
+  "summary": "Remove checksum",
+  "messages": [
+    {
+      "@type": "/ibc.lightclients.wasm.v1.MsgRemoveChecksum",
+      "signer": "cosmos1...", / the authority address (e.g. the gov module account address)
+      "checksum": "a8ad...4dc0" / SHA-256 hash of the Wasm byte code that should be removed from the list of allowed checksums
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "100stake"
+}
+```
+
+To learn more about the `submit-proposal` CLI command, please check out [the relevant section in Cosmos SDK documentation](https://docs.cosmos.network/main/modules/gov#submit-proposal).
diff --git a/docs/ibc/v8.5.x/light-clients/wasm/integration.mdx b/docs/ibc/v8.5.x/light-clients/wasm/integration.mdx
new file mode 100644
index 00000000..93984274
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/wasm/integration.mdx
@@ -0,0 +1,412 @@
+---
+title: Integration
+description: >-
+  Learn how to integrate the 08-wasm module in a chain binary and about the
+  recommended approaches depending on whether the x/wasm module is already used
+  in the chain. The following document only applies to Cosmos SDK chains.
+---
+
+Learn how to integrate the `08-wasm` module in a chain binary and about the recommended approaches depending on whether the [`x/wasm` module](https://github.com/CosmWasm/wasmd/tree/main/x/wasm) is already used in the chain. The following document only applies to Cosmos SDK chains.
+
+## Importing the `08-wasm` module
+
+`08-wasm` has no stable releases yet. To use it, you need to import the git commit that contains the module with the compatible versions of `ibc-go` and `wasmvm`.
+To do so, run the following command with the desired git commit in your project:
+
+```sh
+go get github.com/cosmos/ibc-go/modules/light-clients/08-wasm@57fcdb9a9a9db9b206f7df2f955866dc4e10fef4
+```
+
+You can find the version matrix [here](/docs/ibc/v8.5.x/light-clients/wasm/integration#importing-the-08-wasm-module).
+
+## `app.go` setup
+
+The sample code below shows the relevant integration points in `app.go` required to set up the `08-wasm` module in a chain binary. Since `08-wasm` is a light client module itself, please also check out the section [Integrating light clients](/docs/ibc/v8.5.x/ibc/integration#integrating-light-clients) for more information:
+
+```go expandable
+/ app.go
+import (
+
+	...
+	"github.com/cosmos/cosmos-sdk/runtime"
+
+	cmtos "github.com/cometbft/cometbft/libs/os"
+
+	ibcwasm "github.com/cosmos/ibc-go/modules/light-clients/08-wasm"
+	ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/keeper"
+	ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types"
+	...
+)
+
+...
+
+/ Register the AppModule for the 08-wasm module
+ModuleBasics = module.NewBasicManager(
+	...
+	ibcwasm.AppModuleBasic{},
+	...
+)
+
+/ Add 08-wasm Keeper
+type SimApp struct {
+	...
+	WasmClientKeeper ibcwasmkeeper.Keeper
+	...
+}
+
+func NewSimApp(
+	logger log.Logger,
+	db dbm.DB,
+	traceStore io.Writer,
+	loadLatest bool,
+	appOpts servertypes.AppOptions,
+	baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+	...
+	keys := sdk.NewKVStoreKeys(
+		...
+		ibcwasmtypes.StoreKey,
+	)
+
+	/ Instantiate 08-wasm's keeper
+	/ This sample code uses a constructor function that
+	/ accepts a pointer to an existing instance of Wasm VM.
+	/ This is the recommended approach when the chain
+	/ also uses `x/wasm`, and then the Wasm VM instance
+	/ can be shared.
+	app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM(
+		appCodec,
+		runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]),
+		app.IBCKeeper.ClientKeeper,
+		authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+		wasmVM,
+		app.GRPCQueryRouter(),
+	)
+
+	app.ModuleManager = module.NewManager(
+		/ SDK app modules
+		...
+		ibcwasm.NewAppModule(app.WasmClientKeeper),
+	)
+
+	app.ModuleManager.SetOrderBeginBlockers(
+		...
+		ibcwasmtypes.ModuleName,
+		...
+	)
+
+	app.ModuleManager.SetOrderEndBlockers(
+		...
+		ibcwasmtypes.ModuleName,
+		...
+	)
+
+	genesisModuleOrder := []string{
+		...
+		ibcwasmtypes.ModuleName,
+		...
+	}
+
+	app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...)
+	app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...)
+	...
+
+	/ initialize BaseApp
+	app.SetInitChainer(app.InitChainer)
+	...
+
+	/ must be before Loading version
+	if manager := app.SnapshotManager(); manager != nil {
+		err := manager.RegisterExtensions(
+			ibcwasmkeeper.NewWasmSnapshotter(app.CommitMultiStore(), &app.WasmClientKeeper),
+		)
+		if err != nil {
+			panic(fmt.Errorf("failed to register snapshot extension: %s", err))
+		}
+	}
+	...
+
+	if loadLatest {
+		...
+		ctx := app.BaseApp.NewUncachedContext(true, cmtproto.Header{})
+
+		/ Initialize pinned codes in wasmvm as they are not persisted there
+		if err := ibcwasmkeeper.InitializePinnedCodes(ctx); err != nil {
+			cmtos.Exit(fmt.Sprintf("failed initialize pinned codes %s", err))
+		}
+	}
+}
+```
+
+## Keeper instantiation
+
+When it comes to instantiating `08-wasm`'s keeper there are two recommended ways of doing it. Choosing one or the other will depend on whether the chain already integrates [`x/wasm`](https://github.com/CosmWasm/wasmd/tree/main/x/wasm) or not.
+
+### If `x/wasm` is present
+
+If the chain where the module is integrated uses `x/wasm` then we recommend that both `08-wasm` and `x/wasm` share the same Wasm VM instance.
Having two separate Wasm VM instances is still possible, but care should be taken to make sure that both instances do not share the directory where the VM stores blobs and various caches; otherwise unexpected behaviour is likely to happen.

In order to share the Wasm VM instance, please follow the guidelines below. Please note that this requires `x/wasm` v0.41 or above.

- Instantiate the Wasm VM in `app.go` with the parameters of your choice.
- [Create an `Option` with this Wasm VM instance](https://github.com/CosmWasm/wasmd/blob/db93d7b6c7bb6f4a340d74b96a02cec885729b59/x/wasm/keeper/options.go#L21-L25).
- Add the option created in the previous step to a slice and [pass it to the `x/wasm` `NewKeeper` constructor function](https://github.com/CosmWasm/wasmd/blob/db93d7b6c7bb6f4a340d74b96a02cec885729b59/x/wasm/keeper/keeper_cgo.go#L36).
- Pass the pointer to the Wasm VM instance to `08-wasm`'s [`NewKeeperWithVM` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47).

The code to set this up would look something like this:

```go expandable
/ app.go
import (
	...
	"github.com/cosmos/cosmos-sdk/runtime"

	wasmvm "github.com/CosmWasm/wasmvm"
	wasmkeeper "github.com/CosmWasm/wasmd/x/wasm/keeper"
	wasmtypes "github.com/CosmWasm/wasmd/x/wasm/types"

	ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/keeper"
	ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types"
	...
)

...
+ +/ instantiate the Wasm VM with the chosen parameters +wasmer, err := wasmvm.NewVM( + dataDir, + availableCapabilities, + contractMemoryLimit, / default of 32 + contractDebugMode, + memoryCacheSize, +) + if err != nil { + panic(err) +} + +/ create an Option slice (or append to an existing one) +/ with the option to use a custom Wasm VM instance +wasmOpts = []wasmkeeper.Option{ + wasmkeeper.WithWasmEngine(wasmer), +} + +/ the keeper will use the provided Wasm VM instance, +/ instead of instantiating a new one +app.WasmKeeper = wasmkeeper.NewKeeper( + appCodec, + keys[wasmtypes.StoreKey], + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + distrkeeper.NewQuerier(app.DistrKeeper), + app.IBCFeeKeeper, / ISC4 Wrapper: fee IBC middleware + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, + scopedWasmKeeper, + app.TransferKeeper, + app.MsgServiceRouter(), + app.GRPCQueryRouter(), + wasmDir, + wasmConfig, + availableCapabilities, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmOpts..., +) + +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmer, / pass the Wasm VM instance to `08-wasm` keeper constructor + app.GRPCQueryRouter(), +) +... +``` + +### If `x/wasm` is not present + +If the chain does not use [`x/wasm`](https://github.com/CosmWasm/wasmd/tree/main/x/wasm), even though it is still possible to use the method above from the previous section +(e.g. 
instantiating a Wasm VM in `app.go` and passing it to 08-wasm's [`NewKeeperWithVM` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L39-L47)), there is no need in this case to share the Wasm VM instance with another module, so you can instead use the [`NewKeeperWithConfig` constructor function](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/keeper/keeper.go#L88-L96) and provide the Wasm VM configuration parameters of your choice. A Wasm VM instance will be created in `NewKeeperWithConfig`. The parameters that can be set are:

- `DataDir` is the [directory for Wasm blobs and various caches](https://github.com/CosmWasm/wasmvm/blob/1638725b25d799f078d053391945399cb35664b1/lib.go#L25). In `wasmd` this is set to the [`wasm` folder under the home directory](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L578).
- `SupportedCapabilities` is a comma-separated [list of capabilities supported by the chain](https://github.com/CosmWasm/wasmvm/blob/1638725b25d799f078d053391945399cb35664b1/lib.go#L26). [`wasmd` sets this to all the available capabilities](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L586), but 08-wasm only requires `iterator`.
- `MemoryCacheSize` sets [the size in MiB of an in-memory cache for e.g. module caching](https://github.com/CosmWasm/wasmvm/blob/1638725b25d799f078d053391945399cb35664b1/lib.go#L29C16-L29C104). It is not consensus-critical and should be defined on a per-node basis, often in the range of 100 to 1000 MB. [`wasmd` reads this value from the node configuration](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/app/app.go#L579). Default value is 256.
+- `ContractDebugMode` is a [flag to enable/disable printing debug logs from the contract to STDOUT](https://github.com/CosmWasm/wasmvm/blob/1638725b25d799f078d053391945399cb35664b1/lib.go#L28). This should be false in production environments. Default value is false. + +Another configuration parameter of the Wasm VM is the contract memory limit (in MiB), which is [set to 32](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L8), [following the example of `wasmd`](https://github.com/CosmWasm/wasmd/blob/36416def20effe47fb77f29f5ba35a003970fdba/x/wasm/keeper/keeper.go#L32-L34). This parameter is not configurable by users of `08-wasm`. + +The following sample code shows how the keeper would be constructed using this method: + +```go expandable +/ app.go +import ( + + ... + "github.com/cosmos/cosmos-sdk/runtime" + + ibcwasmkeeper "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/keeper" + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types" + ... +) + +... + +/ homePath is the path to the directory where the data +/ directory for Wasm blobs and caches will be created + wasmConfig := ibcwasmtypes.WasmConfig{ + DataDir: filepath.Join(homePath, "ibc_08-wasm_client_data"), + SupportedCapabilities: "iterator", + ContractDebugMode: false, +} + +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithConfig( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmConfig, + app.GRPCQueryRouter(), +) +``` + +Check out also the [`WasmConfig` type definition](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L21-L31) for more information on each of the configurable parameters. Some parameters allow node-level configurations. 
There is additionally the function [`DefaultWasmConfig`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/config.go#L36-L42) available that returns a configuration with the default values. + +### Options + +The `08-wasm` module comes with an options API inspired by the one in `x/wasm`. +Currently the only option available is the `WithQueryPlugins` option, which allows registration of custom query plugins for the `08-wasm` module. The use of this API is optional and it is only required if the chain wants to register custom query plugins for the `08-wasm` module. + +#### `WithQueryPlugins` + +By default, the `08-wasm` module does not configure any querier options for light client contracts. However, it is possible to register custom query plugins for [`QueryRequest::Custom`](https://github.com/CosmWasm/cosmwasm/blob/v2.0.1/packages/std/src/query/mod.rs#L48) and [`QueryRequest::Stargate`](https://github.com/CosmWasm/cosmwasm/blob/v2.0.1/packages/std/src/query/mod.rs#L57-L65). + +Assuming that the keeper is not yet instantiated, the following sample code shows how to register query plugins for the `08-wasm` module. + +We first construct a [`QueryPlugins`](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/types/querier.go#L78-L87) object with the desired query plugins: + +```go +queryPlugins := ibcwasmtypes.QueryPlugins { + Custom: MyCustomQueryPlugin(), + / `myAcceptList` is a `[]string` containing the list of gRPC query paths that the chain wants to allow for the `08-wasm` module to query. + / These queries must be registered in the chain's gRPC query router, be deterministic, and track their gas usage. + / The `AcceptListStargateQuerier` function will return a query plugin that will only allow queries for the paths in the `myAcceptList`. + / The query responses are encoded in protobuf unlike the implementation in `x/wasm`. 
+ Stargate: ibcwasmtypes.AcceptListStargateQuerier(myAcceptList), +} +``` + +Note that the `Stargate` querier appends the user defined accept list of query routes to a default list defined by the `08-wasm` module. +The `defaultAcceptList` defines a single query route: `"/ibc.core.client.v1.Query/VerifyMembership"`. This allows for light client smart contracts to delegate parts of their workflow to other light clients for auxiliary proof verification. For example, proof of inclusion of block and tx data by a data availability provider. + +```go +/ defaultAcceptList defines a set of default allowed queries made available to the Querier. +var defaultAcceptList = []string{ + "/ibc.core.client.v1.Query/VerifyMembership", +} +``` + +You may leave any of the fields in the `QueryPlugins` object as `nil` if you do not want to register a query plugin for that query type. + +Then, we pass the `QueryPlugins` object to the `WithQueryPlugins` option: + +```go +querierOption := ibcwasmkeeper.WithQueryPlugins(&queryPlugins) +``` + +Finally, we pass the option to the `NewKeeperWithConfig` or `NewKeeperWithVM` constructor function during [Keeper instantiation](#keeper-instantiation): + +```diff +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithConfig( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmConfig, + app.GRPCQueryRouter(), ++ querierOption, +) +``` + +```diff +app.WasmClientKeeper = ibcwasmkeeper.NewKeeperWithVM( + appCodec, + runtime.NewKVStoreService(keys[ibcwasmtypes.StoreKey]), + app.IBCKeeper.ClientKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + wasmer, / pass the Wasm VM instance to `08-wasm` keeper constructor + app.GRPCQueryRouter(), ++ querierOption, +) +``` + +## Updating `AllowedClients` + +If the chain's 02-client submodule parameter `AllowedClients` contains the single wildcard `"*"` element, then it is not necessary to do 
anything in order to allow the creation of `08-wasm` clients. However, if the parameter contains a list of client types (e.g. `["06-solomachine", "07-tendermint"]`), then in order to use the `08-wasm` module chains must update the [`AllowedClients` parameter](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/client.proto#L64) of core IBC. This can be configured directly in the application upgrade handler with the sample code below: + +```go expandable +import ( + + ... + ibcwasmtypes "github.com/cosmos/ibc-go/modules/light-clients/08-wasm/types" + ... +) + +... + +func CreateWasmUpgradeHandler( + mm *module.Manager, + configurator module.Configurator, + clientKeeper clientkeeper.Keeper, +) + +upgradetypes.UpgradeHandler { + return func(goCtx context.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + / explicitly update the IBC 02-client params, adding the wasm client type + params := clientKeeper.GetParams(ctx) + +params.AllowedClients = append(params.AllowedClients, ibcwasmtypes.Wasm) + +clientKeeper.SetParams(ctx, params) + +return mm.RunMigrations(goCtx, configurator, vm) +} +} +``` + +Or alternatively the parameter can be updated via a governance proposal (see at the bottom of section [`Creating clients`](/docs/ibc/v8.5.x/light-clients/developer-guide/setup#creating-clients) for an example of how to do this). + +## Adding the module to the store + +As part of the upgrade migration you must also add the module to the upgrades store. + +```go expandable +func (app SimApp) + +RegisterUpgradeHandlers() { + + ... 
+ if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := storetypes.StoreUpgrades{ + Added: []string{ + ibcwasmtypes.ModuleName, +}, +} + + / configure store loader that checks if version == upgradeHeight and applies store upgrades + app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +} +``` + +## Adding snapshot support + +In order to use the `08-wasm` module chains are required to register the `WasmSnapshotter` extension in the snapshot manager. This snapshotter takes care of persisting the external state, in the form of contract code, of the Wasm VM instance to disk when the chain is snapshotted. [This code](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/testing/simapp/app.go#L775-L782) should be placed in `NewSimApp` function in `app.go`. + +## Pin byte codes at start + +Wasm byte codes should be pinned to the WasmVM cache on every application start, therefore [this code](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/testing/simapp/app.go#L825-L830) should be placed in `NewSimApp` function in `app.go`. diff --git a/docs/ibc/v8.5.x/light-clients/wasm/messages.mdx b/docs/ibc/v8.5.x/light-clients/wasm/messages.mdx new file mode 100644 index 00000000..5eca2913 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/wasm/messages.mdx @@ -0,0 +1,73 @@ +--- +title: Messages +description: >- + Uploading the Wasm light client contract to the Wasm VM storage is achieved by + means of MsgStoreCode: +--- + +## `MsgStoreCode` + +Uploading the Wasm light client contract to the Wasm VM storage is achieved by means of `MsgStoreCode`: + +```go +type MsgStoreCode struct { + / signer address + Signer string + / wasm byte code of light client contract. 
It can be raw or gzip compressed + WasmByteCode []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `WasmByteCode` is empty or it exceeds the maximum size, currently set to 3MB. + +Only light client contracts stored using `MsgStoreCode` are allowed to be instantiated. An attempt to create a light client from contracts uploaded via other means (e.g. through `x/wasm` if the module shares the same Wasm VM instance with 08-wasm) will fail. Due to the idempotent nature of the Wasm VM's `StoreCode` function, it is possible to store the same byte code multiple times. + +When execution of `MsgStoreCode` succeeds, the checksum of the contract (i.e. the sha256 hash of the contract's byte code) is stored in an allow list. When a relayer submits [`MsgCreateClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L25-L37) with 08-wasm's `ClientState`, the client state includes the checksum of the Wasm byte code that should be called. Then 02-client calls [08-wasm's implementation of `Initialize` function](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/core/02-client/keeper/client.go#L36) (which is an interface function part of `ClientState`), and it will check that the checksum in the client state matches one of the checksums in the allow list. If a match is found, the light client is initialized; otherwise, the transaction is aborted. 
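The checksum and allow-list mechanics described above can be sketched with the standard library alone. The `allowList` type below is a simplified, hypothetical stand-in for the set of checksums the module actually persists in its store; only the sha256 relationship between byte code and checksum is taken directly from the text:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// checksum returns the sha256 hash of a contract's byte code,
// which is what gets stored in the allow list when MsgStoreCode succeeds.
func checksum(wasmByteCode []byte) []byte {
	h := sha256.Sum256(wasmByteCode)
	return h[:]
}

// allowList is a simplified stand-in for the module's stored checksum set.
type allowList map[string]struct{}

func (a allowList) add(code []byte) {
	a[hex.EncodeToString(checksum(code))] = struct{}{}
}

// isAllowed mirrors the check performed during Initialize: the checksum
// carried in the client state must match one of the stored checksums.
func (a allowList) isAllowed(clientStateChecksum []byte) bool {
	_, ok := a[hex.EncodeToString(clientStateChecksum)]
	return ok
}

func main() {
	code := []byte("wasm byte code")
	list := allowList{}
	list.add(code)

	fmt.Println(list.isAllowed(checksum(code)))            // true: client can be initialized
	fmt.Println(list.isAllowed(checksum([]byte("other")))) // false: MsgCreateClient is rejected
}
```

Storing the same byte code twice simply re-adds the same checksum, which is why `StoreCode` is idempotent.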
+ +## `MsgMigrateContract` + +Migrating a contract to a new Wasm byte code is achieved by means of `MsgMigrateContract`: + +```go +type MsgMigrateContract struct { + / signer address + Signer string + / the client id of the contract + ClientId string + / the SHA-256 hash of the new wasm byte code for the contract + Checksum []byte + / the json-encoded migrate msg to be passed to the contract on migration + Msg []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `ClientId` is not a valid identifier prefixed by `08-wasm`. +* `Checksum` is not exactly 32 bytes long or it is not found in the list of allowed checksums (a new checksum is added to the list when executing `MsgStoreCode`), or it matches the current checksum of the contract. + +When a Wasm light client contract is migrated to a new Wasm byte code the checksum for the contract will be updated with the new checksum. + +## `MsgRemoveChecksum` + +Removing a checksum from the list of allowed checksums is achieved by means of `MsgRemoveChecksum`: + +```go +type MsgRemoveChecksum struct { + / signer address + Signer string + / Wasm byte code checksum to be removed from the store + Checksum []byte +} +``` + +This message is expected to fail if: + +* `Signer` is an invalid Bech32 address, or it does not match the designated authority address. +* `Checksum` is not exactly 32 bytes long or it is not found in the list of allowed checksums (a new checksum is added to the list when executing `MsgStoreCode`). + +When a checksum is removed from the list of allowed checksums, then the corresponding Wasm byte code will not be available for instantiation in [08-wasm's implementation of `Initialize` function](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/core/02-client/keeper/client.go#L36). 
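The stateless parts of the failure conditions listed above can be illustrated with a small validation sketch. This is not the module's actual implementation: the function name is hypothetical, and the signer/authority and allow-list checks are stateful and happen elsewhere; only the `08-wasm` prefix rule and the 32-byte checksum rule from the text are modeled here:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

var (
	errInvalidClientID = errors.New("client id must be prefixed by 08-wasm")
	errInvalidChecksum = errors.New("checksum must be exactly 32 bytes")
)

// validateMigrateContract is a hypothetical ValidateBasic-style check
// mirroring the stateless rules for MsgMigrateContract described above.
func validateMigrateContract(clientID string, checksum []byte) error {
	if !strings.HasPrefix(clientID, "08-wasm-") {
		return errInvalidClientID
	}
	if len(checksum) != 32 {
		return errInvalidChecksum
	}
	return nil
}

func main() {
	fmt.Println(validateMigrateContract("08-wasm-0", make([]byte, 32)))       // <nil>
	fmt.Println(validateMigrateContract("07-tendermint-0", make([]byte, 32))) // wrong client type
	fmt.Println(validateMigrateContract("08-wasm-0", make([]byte, 31)))       // checksum too short
}
```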
diff --git a/docs/ibc/v8.5.x/light-clients/wasm/migrations.mdx b/docs/ibc/v8.5.x/light-clients/wasm/migrations.mdx
new file mode 100644
index 00000000..66e1f2b9
--- /dev/null
+++ b/docs/ibc/v8.5.x/light-clients/wasm/migrations.mdx
@@ -0,0 +1,99 @@

---
title: Migrations
description: This guide provides instructions for migrating 08-wasm versions.
---

This guide provides instructions for migrating 08-wasm versions.

## From ibc-go v7.3.x to ibc-go v8.0.x

### Chains

In the 08-wasm versions compatible with ibc-go v7.3.x and above from the v7 release line, the checksums of the uploaded Wasm bytecodes are all stored under a single key. From ibc-go v8.0.x the checksums are stored using [`collections.KeySet`](https://docs.cosmos.network/v0.50/build/packages/collections#keyset), whose full functionality became available in Cosmos SDK v0.50. There is therefore an [automatic migration handler](https://github.com/cosmos/ibc-go/blob/57fcdb9a9a9db9b206f7df2f955866dc4e10fef4/modules/light-clients/08-wasm/module.go#L115-L118) configured in the 08-wasm module to migrate the stored checksums to `collections.KeySet`.
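Conceptually, the migration reshapes one aggregate entry into one store entry per checksum, which is the layout a `KeySet` uses. The rough stdlib-only sketch below illustrates that reshaping with a plain map standing in for the KV store; the key names and the concatenated legacy encoding are illustrative assumptions, not the module's actual wire format:

```go
package main

import "fmt"

// migrateChecksums reshapes a legacy value holding all checksums under a
// single key into one entry per checksum, as a KeySet does.
// Checksums are sha256 hashes, so each one is exactly 32 bytes.
func migrateChecksums(store map[string][]byte, legacyKey string) {
	legacy := store[legacyKey]
	for i := 0; i+32 <= len(legacy); i += 32 {
		checksum := legacy[i : i+32]
		// KeySet semantics: the checksum itself is the key, the value is empty.
		store["checksums/"+string(checksum)] = []byte{}
	}
	delete(store, legacyKey)
}

func main() {
	a := make([]byte, 32) // two fake 32-byte checksums
	b := make([]byte, 32)
	b[0] = 1

	store := map[string][]byte{"legacy": append(append([]byte{}, a...), b...)}
	migrateChecksums(store, "legacy")
	fmt.Println(len(store)) // one entry per checksum
}
```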
+ +## From v0.1.0+ibc-go-v8.0-wasmvm-v1.5 to v0.2.0-ibc-go-v8.3-wasmvm-v2.0 + +The `WasmEngine` interface has been updated to reflect changes in the function signatures of Wasm VM: + +```diff expandable +type WasmEngine interface { +- StoreCode(code wasmvm.WasmCode) (wasmvm.Checksum, error) ++ StoreCode(code wasmvm.WasmCode, gasLimit uint64) (wasmvmtypes.Checksum, uint64, error) + + StoreCodeUnchecked(code wasmvm.WasmCode) (wasmvm.Checksum, error) + + Instantiate( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + info wasmvmtypes.MessageInfo, + initMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + Query( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + queryMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) ([]byte, uint64, error) ++ ) (*wasmvmtypes.QueryResult, uint64, error) + + Migrate( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + migrateMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + Sudo( + checksum wasmvm.Checksum, + env wasmvmtypes.Env, + sudoMsg []byte, + store wasmvm.KVStore, + goapi wasmvm.GoAPI, + querier wasmvm.Querier, + gasMeter wasmvm.GasMeter, + gasLimit uint64, + deserCost wasmvmtypes.UFraction, +- ) (*wasmvmtypes.Response, uint64, error) ++ ) (*wasmvmtypes.ContractResult, uint64, error) + + GetCode(checksum wasmvm.Checksum) (wasmvm.WasmCode, error) + + Pin(checksum wasmvm.Checksum) error + + Unpin(checksum wasmvm.Checksum) error +} +``` + +Similar changes were required in the functions of the 
`MockWasmEngine` interface. + +### Chains + +The `SupportedCapabilities` field of `WasmConfig` is now of type `[]string`: + +```diff +type WasmConfig struct { + DataDir string +- SupportedCapabilities string ++ SupportedCapabilities []string + ContractDebugMode bool +} +``` diff --git a/docs/ibc/v8.5.x/light-clients/wasm/overview.mdx b/docs/ibc/v8.5.x/light-clients/wasm/overview.mdx new file mode 100644 index 00000000..56293398 --- /dev/null +++ b/docs/ibc/v8.5.x/light-clients/wasm/overview.mdx @@ -0,0 +1,22 @@ +--- +title: Overview +description: Learn about the 08-wasm light client proxy module. +--- + +## Overview + +Learn about the `08-wasm` light client proxy module. + +### Context + +Traditionally, light clients used by ibc-go have been implemented only in Go, and since ibc-go v7 (with the release of the 02-client refactor), they are [first-class Cosmos SDK modules](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-010-light-clients-as-sdk-modules.md). This means that updating existing light client implementations or adding support for new light clients is a multi-step, time-consuming process involving on-chain governance: it is necessary to modify the codebase of ibc-go (if the light client is part of its codebase), re-build chains' binaries, pass a governance proposal and have validators upgrade their nodes. + +### Motivation + +To break the limitation of being able to write light client implementations only in Go, the `08-wasm` adds support to run light clients written in a Wasm-compilable language. The light client byte code implements the entry points of a [CosmWasm](https://docs.cosmwasm.com/docs/) smart contract, and runs inside a Wasm VM. The `08-wasm` module exposes a proxy light client interface that routes incoming messages to the appropriate handler function, inside the Wasm VM, for execution. 
Adding a new light client to a chain is as simple as submitting a governance proposal with the message that stores the byte code of the light client contract. No coordinated upgrade is needed. When the governance proposal passes and the message is executed, the contract is ready to be instantiated upon receiving a relayer-submitted `MsgCreateClient`. The process of creating a Wasm light client is the same as with a regular light client implemented in Go.

### Use cases

* Development of light clients for non-Cosmos ecosystem chains: state machines in other ecosystems are, in many cases, implemented in Rust, and thus there are probably libraries used in their light client implementations for which there is no equivalent in Go. This makes the development of a light client in Go very difficult, but relatively simple in Rust. Writing a CosmWasm smart contract in Rust that implements the light client algorithm therefore requires comparatively little effort.

diff --git a/docs/ibc/v8.5.x/middleware/callbacks/end-users.mdx b/docs/ibc/v8.5.x/middleware/callbacks/end-users.mdx
new file mode 100644
index 00000000..8c198b6c
--- /dev/null
+++ b/docs/ibc/v8.5.x/middleware/callbacks/end-users.mdx
@@ -0,0 +1,95 @@

---
title: End Users
description: >-
  This section explains how to use the callbacks middleware from the perspective
  of an IBC Actor. Callbacks middleware provides two types of callbacks:
---

This section explains how to use the callbacks middleware from the perspective of an IBC Actor. Callbacks middleware provides two types of callbacks:

* Source callbacks:
  * `SendPacket` callback
  * `OnAcknowledgementPacket` callback
  * `OnTimeoutPacket` callback
* Destination callbacks:
  * `ReceivePacket` callback

For a given channel, the source callbacks are supported if the source chain has the callbacks middleware wired up in the channel's IBC stack.
Similarly, the destination callbacks are supported if the destination chain has the callbacks middleware wired up in the channel's IBC stack. + + +Callbacks are always executed after the packet has been processed by the underlying IBC module. + + + +If the underlying application module is doing an asynchronous acknowledgement on packet receive (for example, if the [packet forward middleware](https://github.com/cosmos/ibc-apps/tree/main/middleware/packet-forward-middleware) is in the stack, and is being used by this packet), then the callbacks middleware will execute the `ReceivePacket` callback after the acknowledgement has been received. + + +## Source Callbacks + +Source callbacks are natively supported in the following ibc modules (if they are wrapped by the callbacks middleware): + +* `transfer` +* `icacontroller` + +To have your source callbacks be processed by the callbacks middleware, you must set the memo in the application's packet data to the following format: + +```jsonc +{ + "src_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +## Destination Callbacks + +Destination callbacks are natively only supported in the transfer module. Note that wrapping icahost is not supported. This is because icahost should be able to execute an arbitrary transaction anyway, and can call contracts or modules directly. + +To have your destination callbacks processed by the callbacks middleware, you must set the memo in the application's packet data to the following format: + +```jsonc +{ + "dest_callback": { + "address": "callbackAddressString", + / optional + "gas_limit": "userDefinedGasLimitString", + } +} +``` + +Note that a packet can have both a source and destination callback. 
```jsonc expandable
{
  "src_callback": {
    "address": "callbackAddressString",
    / optional
    "gas_limit": "userDefinedGasLimitString"
  },
  "dest_callback": {
    "address": "callbackAddressString",
    / optional
    "gas_limit": "userDefinedGasLimitString"
  }
}
```

## User Defined Gas Limit

The user defined gas limit was added for the following reasons:

* To prevent callbacks from blocking the packet lifecycle.
* To prevent relayers from being able to DOS the callback execution by sending a packet with a low amount of gas.

There is a chain wide parameter that sets the maximum gas limit that a user can set for a callback. This is to prevent a user from setting a gas limit that is too high for relayers. If the `"gas_limit"` is not set in the packet memo, then the maximum gas limit is used.

These goals are achieved by creating a minimum gas amount required for callback execution. If the relayer provides at least the minimum gas limit for the callback execution, then the packet lifecycle will not be blocked if the callback runs out of gas during execution, and the callback cannot be retried. If the relayer does not provide the minimum amount of gas and the callback execution runs out of gas, the entire tx is reverted and it may be executed again.

The `SendPacket` callback is always reverted if the callback execution fails or returns an error for any reason. This is so that the packet is not sent if the callback execution fails.

diff --git a/docs/ibc/v8.5.x/middleware/callbacks/events.mdx b/docs/ibc/v8.5.x/middleware/callbacks/events.mdx
new file mode 100644
index 00000000..cd0800c3
--- /dev/null
+++ b/docs/ibc/v8.5.x/middleware/callbacks/events.mdx
@@ -0,0 +1,37 @@

---
title: Events
description: >-
  An overview of all events related to the callbacks middleware. There are two
  types of events, "ibcsrccallback" and "ibcdestcallback".
---

An overview of all events related to the callbacks middleware.
There are two types of events, `"ibc_src_callback"` and `"ibc_dest_callback"`. + +## Shared Attributes + +Both of these event types share the following attributes: + +| **Attribute Key** | **Attribute Values** | **Optional** | +| :--------------------------: | :-----------------------------------------------------------------------------------------: | :----------------: | +| module | "ibccallbacks" | | +| callback\_type | **One of**: "send\_packet", "acknowledgement\_packet", "timeout\_packet", "receive\_packet" | | +| callback\_address | string | | +| callback\_exec\_gas\_limit | string (parsed from uint64) | | +| callback\_commit\_gas\_limit | string (parsed from uint64) | | +| packet\_sequence | string (parsed from uint64) | | +| callback\_result | **One of**: "success", "failure" | | +| callback\_error | string (parsed from callback err) | Yes, if err != nil | + +## `ibc_src_callback` Attributes + +| **Attribute Key** | **Attribute Values** | +| :------------------: | :----------------------: | +| packet\_src\_port | string (sourcePortID) | +| packet\_src\_channel | string (sourceChannelID) | + +## `ibc_dest_callback` Attributes + +| **Attribute Key** | **Attribute Values** | +| :-------------------: | :--------------------: | +| packet\_dest\_port | string (destPortID) | +| packet\_dest\_channel | string (destChannelID) | diff --git a/docs/ibc/v8.5.x/middleware/callbacks/gas.mdx b/docs/ibc/v8.5.x/middleware/callbacks/gas.mdx new file mode 100644 index 00000000..461e8714 --- /dev/null +++ b/docs/ibc/v8.5.x/middleware/callbacks/gas.mdx @@ -0,0 +1,79 @@ +--- +title: Gas Management +description: >- + Executing arbitrary code on a chain can be arbitrarily expensive. In general, + a callback may consume infinite gas (think of a callback that loops forever). + This is problematic for a few reasons: +--- + +## Overview + +Executing arbitrary code on a chain can be arbitrarily expensive. 
In general, a callback may consume infinite gas (think of a callback that loops forever). This is problematic for a few reasons: + +* It can block the packet lifecycle. +* It can be used to consume all of the relayer's funds and gas. +* A relayer can DOS the callback execution by sending a packet with a low amount of gas. + +To prevent these, the callbacks middleware introduces two gas limits: a chain wide gas limit (`maxCallbackGas`) and a user defined gas limit. + +### Chain Wide Gas Limit + +Since the callbacks middleware does not have a keeper, it does not use a governance parameter to set the chain wide gas limit. Instead, the chain wide gas limit is passed in as a parameter to the callbacks middleware during initialization. + +```go expandable +/ app.go + maxCallbackGas := uint64(1_000_000) + +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) + +transferStack = ibccallbacks.NewIBCMiddleware(transferStack, app.IBCFeeKeeper, app.MockContractKeeper, maxCallbackGas) +/ Since the callbacks middleware itself is an ics4wrapper, it needs to be passed to the transfer keeper +app.TransferKeeper.WithICS4Wrapper(transferStack.(porttypes.ICS4Wrapper)) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### User Defined Gas Limit + +The user defined gas limit is set by the IBC Actor during packet creation. The user defined gas limit is set in the packet memo. If the user defined gas limit is not set or if the user defined gas limit is greater than the chain wide gas limit, then the chain wide gas limit is used as the user defined gas limit. 
+
+```jsonc expandable
+{
+  "src_callback": {
+    "address": "callbackAddressString",
+    // optional
+    "gas_limit": "userDefinedGasLimitString"
+  },
+  "dest_callback": {
+    "address": "callbackAddressString",
+    // optional
+    "gas_limit": "userDefinedGasLimitString"
+  }
+}
+```
+
+## Gas Limit Enforcement
+
+During a callback execution, there are three types of gas limits that are enforced:
+
+* User defined gas limit
+* Chain wide gas limit
+* Context gas limit (the amount of gas that the relayer has left for this execution)
+
+The chain wide gas limit is used as a cap on the user defined gas limit, as explained in the [previous section](#user-defined-gas-limit). It may also be used as a default value if no user gas limit is provided. Therefore, we can ignore the chain wide gas limit for the rest of this section and work with the minimum of the chain wide gas limit and the user defined gas limit. This minimum is called the commit gas limit.
+
+The gas limit enforcement is done by executing the callback inside a cached context with a new gas meter. The gas meter is initialized with the minimum of the commit gas limit and the context gas limit. This minimum is called the execution gas limit. We say that retries are allowed if `context gas limit < commit gas limit`. Otherwise, we say that retries are not allowed.
+
+If the callback execution fails due to an out of gas error, then the middleware checks whether retries are allowed. If retries are not allowed, it recovers from the out of gas error, consumes the execution gas limit from the original context, and continues with the packet life cycle. If retries are allowed, it panics with an out of gas error to revert the entire tx. The packet can then be submitted again with a higher gas limit. The out of gas panic descriptor is shown below.
+
+```go
+fmt.Sprintf("ibc %s callback out of gas; commitGasLimit: %d", callbackType, callbackData.CommitGasLimit)
+```
+
+If the callback execution does not fail due to an out of gas error, then the callbacks middleware does not block the packet life cycle regardless of whether retries are allowed or not.
diff --git a/docs/ibc/v8.5.x/middleware/callbacks/integration.mdx b/docs/ibc/v8.5.x/middleware/callbacks/integration.mdx
new file mode 100644
index 00000000..72cb5671
--- /dev/null
+++ b/docs/ibc/v8.5.x/middleware/callbacks/integration.mdx
@@ -0,0 +1,116 @@
+---
+title: Integration
+description: >-
+  Learn how to integrate the callbacks middleware with IBC applications. The
+  following document is intended for developers building on top of the Cosmos
+  SDK and only applies for Cosmos SDK chains.
+---
+
+Learn how to integrate the callbacks middleware with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies for Cosmos SDK chains.
+
+The callbacks middleware is a minimal and stateless implementation of the IBC middleware interface. It does not have a keeper, nor does it store any state. It simply routes IBC middleware messages to the appropriate callback function, which is implemented by the secondary application. Therefore, it doesn't need to be registered as a module, nor does it need to be added to the module manager. It only needs to be added to the IBC application stack.
+
+## Pre-requisite Readings
+
+* [IBC middleware development](/docs/ibc/v8.5.x/ibc/middleware/develop)
+* [IBC middleware integration](/docs/ibc/v8.5.x/ibc/middleware/integration)
+
+The callbacks middleware, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly.
+For Cosmos SDK chains this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application.
+
+## Importing the callbacks middleware
+
+The callbacks middleware has no stable releases yet. To use it, you need to import the git commit that contains the module with the compatible version of `ibc-go`. To do so, run the following command with the desired git commit in your project:
+
+```sh
+go get github.com/cosmos/ibc-go/modules/apps/callbacks@342c00b0f8bd7feeebf0780f208a820b0faf90d1
+```
+
+You can find the version matrix [here](#importing-the-callbacks-middleware).
+
+## Configuring an application stack with the callbacks middleware
+
+As mentioned in [IBC middleware development](/docs/ibc/v8.5.x/ibc/middleware/develop), an application stack may be composed of many or no middlewares that nest a base application.
+These layers form the complete set of application logic that enables developers to build composable and flexible IBC application stacks.
+For example, an application stack may just be a single base application like `transfer`; however, the same application stack composed with `29-fee` and `callbacks` will nest the `transfer` base application twice by wrapping it with the Fee Middleware module and then the callbacks middleware.
+
+The callbacks middleware also **requires** a secondary application that will receive the callbacks to implement the [`ContractKeeper`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/callbacks/types/expected_keepers.go#L11-L83). Since the wasm module does not yet support the callbacks middleware, we will use the `mockContractKeeper` module in the examples below. You should replace this with a module that implements `ContractKeeper`.
+
+### Transfer
+
+See below for an example of how to create an application stack using `transfer`, `29-fee`, and `callbacks`. Feel free to omit the `29-fee` middleware if you do not want to use it.
+The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`.
+The in-line comments describe the execution flow of packets between the application stack and IBC core.
+
+```go expandable
+// Create Transfer Stack
+// SendPacket, since it is originating from the application to core IBC:
+// transferKeeper.SendPacket -> callbacks.SendPacket -> feeKeeper.SendPacket -> channel.SendPacket
+
+// RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way
+// channel.RecvPacket -> fee.OnRecvPacket -> callbacks.OnRecvPacket -> transfer.OnRecvPacket
+
+// transfer stack contains (from top to bottom):
+// - IBC Fee Middleware
+// - IBC Callbacks Middleware
+// - Transfer
+
+// create IBC module from bottom to top of stack
+var transferStack porttypes.IBCModule
+transferStack = transfer.NewIBCModule(app.TransferKeeper)
+
+transferStack = ibccallbacks.NewIBCMiddleware(transferStack, app.IBCFeeKeeper, app.MockContractKeeper, maxCallbackGas)
+
+transferICS4Wrapper := transferStack.(porttypes.ICS4Wrapper)
+
+transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper)
+
+// Since the callbacks middleware itself is an ics4wrapper, it needs to be passed to the transfer keeper
+app.TransferKeeper.WithICS4Wrapper(transferICS4Wrapper)
+
+// Add transfer stack to IBC Router
+ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack)
+```
+
+
+The usage of `WithICS4Wrapper` after `transferStack`'s configuration is critical! It allows the callbacks middleware to do `SendPacket` callbacks and asynchronous `ReceivePacket` callbacks. You must do this regardless of whether you are using the `29-fee` middleware or not.
+
+
+### Interchain Accounts Controller
+
+```go expandable
+// Create Interchain Accounts Stack
+// SendPacket, since it is originating from the application to core IBC:
+// icaControllerKeeper.SendTx -> callbacks.SendPacket -> fee.SendPacket -> channel.SendPacket
+
+// initialize ICA module with mock module as the authentication module on the controller side
+var icaControllerStack porttypes.IBCModule
+icaControllerStack = ibcmock.NewIBCModule(&mockModule, ibcmock.NewIBCApp("", scopedICAMockKeeper))
+
+app.ICAAuthModule = icaControllerStack.(ibcmock.IBCModule)
+
+icaControllerStack = icacontroller.NewIBCMiddleware(icaControllerStack, app.ICAControllerKeeper)
+
+icaControllerStack = ibccallbacks.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper, app.MockContractKeeper, maxCallbackGas)
+
+icaICS4Wrapper := icaControllerStack.(porttypes.ICS4Wrapper)
+
+icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper)
+
+// Since the callbacks middleware itself is an ics4wrapper, it needs to be passed to the ica controller keeper
+app.ICAControllerKeeper.WithICS4Wrapper(icaICS4Wrapper)
+
+// RecvPacket, message that originates from core IBC and goes down to app, the flow is:
+// channel.RecvPacket -> fee.OnRecvPacket -> icaHost.OnRecvPacket
+
+var icaHostStack porttypes.IBCModule
+icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper)
+
+icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper)
+
+// Add ICA host and controller to IBC router
+ibcRouter.
+	AddRoute(icacontrollertypes.SubModuleName, icaControllerStack).
+	AddRoute(icahosttypes.SubModuleName, icaHostStack)
+```
+
+
+The usage of `WithICS4Wrapper` here is also critical!
+
diff --git a/docs/ibc/v8.5.x/middleware/callbacks/interfaces.mdx b/docs/ibc/v8.5.x/middleware/callbacks/interfaces.mdx
new file mode 100644
index 00000000..a2d82540
--- /dev/null
+++ b/docs/ibc/v8.5.x/middleware/callbacks/interfaces.mdx
@@ -0,0 +1,160 @@
+---
+title: Interfaces
+---
+
+The callbacks middleware requires certain interfaces to be implemented by the underlying IBC applications and the secondary application. If you're simply wiring up the callbacks middleware to an existing IBC application stack and a secondary application such as `icacontroller` and `x/wasm`, you can skip this section.
+
+## Interfaces for developing the Underlying IBC Application
+
+### `PacketDataUnmarshaler`
+
+```go
+// PacketDataUnmarshaler defines an optional interface which allows a middleware to
+// request the packet data to be unmarshaled by the base application.
+type PacketDataUnmarshaler interface {
+	// UnmarshalPacketData unmarshals the packet data into a concrete type
+	UnmarshalPacketData([]byte) (interface{}, error)
+}
+```
+
+The callbacks middleware **requires** the underlying IBC application to implement the [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/05-port/types/module.go#L142-L147) interface so that it can unmarshal the packet data bytes into the appropriate packet data type. This allows usage of interface functions implemented by the packet data type. The packet data type is expected to implement the `PacketDataProvider` interface (see section below), which is used to parse the callback data that is currently stored in the packet memo field for `transfer` and `ica` packets as a JSON string. See its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/ibc_module.go#L303-L313) and [`icacontroller`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/27-interchain-accounts/controller/ibc_middleware.go#L258-L268) modules for reference.
+
+If the underlying application is a middleware itself, then it can implement this interface by simply passing the function call to its underlying application. See its implementation in the [`fee middleware`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/29-fee/ibc_middleware.go#L368-L378) for reference.
+
+### `PacketDataProvider`
+
+```go
+// PacketDataProvider defines an optional interface for retrieving custom packet data stored on behalf of another application.
+// An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information.
+// A short term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application.
+// This interface standardizes that behaviour. Upon realization of the ability for middlewares to define their own packet data types, this interface will be deprecated and removed with time.
+type PacketDataProvider interface {
+	// GetCustomPacketData returns the packet data held on behalf of another application.
+	// The name the information is stored under should be provided as the key.
+	// If no custom packet data exists for the key, nil should be returned.
+	GetCustomPacketData(key string) interface{}
+}
+```
+
+The callbacks middleware also **requires** the underlying IBC application's packet data type to implement the [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/exported/packet.go#L43-L52) interface. This interface is used to retrieve the callback data from the packet data (using the memo field in the case of `transfer` and `ica`). For example, see its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/types/packet.go#L85-L105) module.
+
+Since middlewares do not have packet types, they do not need to implement this interface.
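As an illustration, a minimal packet data type could satisfy `PacketDataProvider` by parsing its memo field as JSON, mirroring what `transfer` does. This is a hedged sketch with hypothetical type names, not the actual ibc-go code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// examplePacketData is a hypothetical packet data type whose memo may carry
// JSON data stored on behalf of another application, e.g.
// {"src_callback": {"address": "...", "gas_limit": "..."}}.
type examplePacketData struct {
	Memo string
}

// GetCustomPacketData returns the value stored under key in the memo,
// or nil if the memo is empty, is not valid JSON, or lacks the key.
func (p examplePacketData) GetCustomPacketData(key string) interface{} {
	if p.Memo == "" {
		return nil
	}
	var memo map[string]interface{}
	if err := json.Unmarshal([]byte(p.Memo), &memo); err != nil {
		return nil
	}
	return memo[key] // nil when the key is absent
}

func main() {
	pd := examplePacketData{Memo: `{"src_callback": {"address": "cosmos1..."}}`}
	fmt.Println(pd.GetCustomPacketData("src_callback"))
	fmt.Println(pd.GetCustomPacketData("dest_callback"))
}
```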
+
+### `PacketData`
+
+```go
+// PacketData defines an optional interface which an application's packet data structure may implement.
+type PacketData interface {
+	// GetPacketSender returns the sender address of the packet data.
+	// If the packet sender is unknown or undefined, an empty string should be returned.
+	GetPacketSender(sourcePortID string) string
+}
+```
+
+[`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/core/exported/packet.go#L36-L41) is an optional interface that can be implemented by the underlying IBC application's packet data type. It is used to retrieve the packet sender address from the packet data. The callbacks middleware uses this interface to retrieve the packet sender address and pass it to the callback function during a source callback. If this interface is not implemented, then the callbacks middleware passes an empty string as the sender address. For example, see its implementation in the [`transfer`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/transfer/types/packet.go#L74-L83) and [`ica`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/27-interchain-accounts/types/packet.go#L78-L92) modules.
+
+This interface was added so that secondary applications can retrieve the packet sender address to perform custom authorization logic if needed.
+
+Since middlewares do not have packet types, they do not need to implement this interface.
+
+## Interfaces for developing the Secondary Application
+
+### `ContractKeeper`
+
+The callbacks middleware requires the secondary application to implement the [`ContractKeeper`](https://github.com/cosmos/ibc-go/blob/v7.3.0/modules/apps/callbacks/types/expected_keepers.go#L11-L83) interface. The contract keeper will be invoked at each step of the packet lifecycle. When a packet is sent, if callback information is provided, the contract keeper will be invoked via the `IBCSendPacketCallback`.
This allows the contract keeper to prevent packet sends when callback information is provided, for example if the sender is unauthorized to perform callbacks on the given information. If the packet send is successful, the contract keeper on the destination (if present) will be invoked when the packet has been received and the acknowledgement is written; this occurs via `IBCReceivePacketCallback`. At the end of the packet lifecycle, when processing acknowledgements or timeouts, the source contract keeper is invoked either via `IBCOnAcknowledgementPacket` or `IBCOnTimeoutPacket`. Once a packet has been sent, each step of the packet lifecycle can be processed provided that a relayer sets the gas limit to be greater than or equal to the required `CommitGasLimit`. State changes performed in the callback will only be committed upon successful execution.
+
+```go expandable
+// ContractKeeper defines the entry points exposed to the VM module which invokes a smart contract
+type ContractKeeper interface {
+	// IBCSendPacketCallback is called in the source chain when a PacketSend is executed. The
+	// packetSenderAddress is determined by the underlying module, and may be empty if the sender is
+	// unknown or undefined. The contract is expected to handle the callback within the user defined
+	// gas limit, and handle any errors, or panics gracefully.
+	// This entry point is called with a cached context. If an error is returned, then the changes in
+	// this context will not be persisted, and the error will be propagated to the underlying IBC
+	// application, resulting in a packet send failure.
+	//
+	// Implementations are provided with the packetSenderAddress and MAY choose to use this to perform
+	// validation on the origin of a given packet. It is recommended to perform the same validation
+	// on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This
+	// defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks.
+	IBCSendPacketCallback(
+		cachedCtx sdk.Context,
+		sourcePort string,
+		sourceChannel string,
+		timeoutHeight clienttypes.Height,
+		timeoutTimestamp uint64,
+		packetData []byte,
+		contractAddress,
+		packetSenderAddress string,
+	) error
+	// IBCOnAcknowledgementPacketCallback is called in the source chain when a packet acknowledgement
+	// is received. The packetSenderAddress is determined by the underlying module, and may be empty if
+	// the sender is unknown or undefined. The contract is expected to handle the callback within the
+	// user defined gas limit, and handle any errors, or panics gracefully.
+	// This entry point is called with a cached context. If an error is returned, then the changes in
+	// this context will not be persisted, but the packet lifecycle will not be blocked.
+	//
+	// Implementations are provided with the packetSenderAddress and MAY choose to use this to perform
+	// validation on the origin of a given packet. It is recommended to perform the same validation
+	// on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This
+	// defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks.
+	IBCOnAcknowledgementPacketCallback(
+		cachedCtx sdk.Context,
+		packet channeltypes.Packet,
+		acknowledgement []byte,
+		relayer sdk.AccAddress,
+		contractAddress,
+		packetSenderAddress string,
+	) error
+	// IBCOnTimeoutPacketCallback is called in the source chain when a packet is not received before
+	// the timeout height. The packetSenderAddress is determined by the underlying module, and may be
+	// empty if the sender is unknown or undefined. The contract is expected to handle the callback
+	// within the user defined gas limit, and handle any error, out of gas, or panics gracefully.
+	// This entry point is called with a cached context. If an error is returned, then the changes in
+	// this context will not be persisted, but the packet lifecycle will not be blocked.
+	//
+	// Implementations are provided with the packetSenderAddress and MAY choose to use this to perform
+	// validation on the origin of a given packet. It is recommended to perform the same validation
+	// on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This
+	// defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks.
+	IBCOnTimeoutPacketCallback(
+		cachedCtx sdk.Context,
+		packet channeltypes.Packet,
+		relayer sdk.AccAddress,
+		contractAddress,
+		packetSenderAddress string,
+	) error
+	// IBCReceivePacketCallback is called in the destination chain when a packet acknowledgement is written.
+	// The contract is expected to handle the callback within the user defined gas limit, and handle any errors,
+	// out of gas, or panics gracefully.
+	// This entry point is called with a cached context. If an error is returned, then the changes in
+	// this context will not be persisted, but the packet lifecycle will not be blocked.
+	IBCReceivePacketCallback(
+		cachedCtx sdk.Context,
+		packet ibcexported.PacketI,
+		ack ibcexported.Acknowledgement,
+		contractAddress string,
+	) error
+}
+```
+
+These are the callback entry points exposed to the secondary application. The secondary application is expected to execute its custom logic within these entry points. The callbacks middleware will handle the execution of these callbacks and revert the state if needed.
+
+
+Note that the source callback entry points are provided with the `packetSenderAddress` and MAY choose to use this to perform validation on the origin of a given packet. It is recommended to perform the same validation on all source chain callbacks (SendPacket, AcknowledgePacket, TimeoutPacket). This defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks.
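To make the shape of a secondary application concrete, below is a minimal no-op sketch of the send-packet entry point. It uses simplified stand-in types (the real interface takes `sdk.Context`, `clienttypes.Height`, and so on), and every name in it is illustrative rather than ibc-go's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// context is a stand-in for sdk.Context in this sketch.
type context struct{ gasLimit uint64 }

// exampleKeeper is a hypothetical secondary application. A real one
// (e.g. x/wasm) would dispatch to the contract at contractAddress.
type exampleKeeper struct {
	authorized map[string]bool // senders allowed to trigger callbacks
}

// IBCSendPacketCallback mirrors the first ContractKeeper entry point:
// returning an error here aborts the packet send.
func (k exampleKeeper) IBCSendPacketCallback(
	cachedCtx context,
	sourcePort, sourceChannel string,
	packetData []byte,
	contractAddress, packetSenderAddress string,
) error {
	// validate the origin of the packet, as the interface docs recommend
	if !k.authorized[packetSenderAddress] {
		return errors.New("unauthorized callback sender")
	}
	fmt.Printf("callback for %s on %s/%s\n", contractAddress, sourcePort, sourceChannel)
	return nil
}

func main() {
	k := exampleKeeper{authorized: map[string]bool{"cosmos1sender": true}}
	err := k.IBCSendPacketCallback(context{gasLimit: 50_000}, "transfer", "channel-0", nil, "cosmos1contract", "cosmos1sender")
	fmt.Println(err)
}
```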
+ diff --git a/docs/ibc/v8.5.x/middleware/callbacks/overview.mdx b/docs/ibc/v8.5.x/middleware/callbacks/overview.mdx new file mode 100644 index 00000000..440c4107 --- /dev/null +++ b/docs/ibc/v8.5.x/middleware/callbacks/overview.mdx @@ -0,0 +1,49 @@ +--- +title: Overview +description: >- + Learn about what the Callbacks Middleware is, and how to build custom modules + that utilize the Callbacks Middleware functionality +--- + +Learn about what the Callbacks Middleware is, and how to build custom modules that utilize the Callbacks Middleware functionality + +## What is the Callbacks Middleware? + +IBC was designed with callbacks between core IBC and IBC applications. IBC apps would send a packet to core IBC, and receive a callback on every step of that packet's lifecycle. This allows IBC applications to be built on top of core IBC, and to be able to execute custom logic on packet lifecycle events (e.g. unescrow tokens for ICS-20). + +This setup worked well for off-chain users interacting with IBC applications. However, we are now seeing the desire for secondary applications (e.g. smart contracts, modules) to call into IBC apps as part of their state machine logic and then do some actions on packet lifecycle events. + +The Callbacks Middleware allows for this functionality by allowing the packets of the underlying IBC applications to register callbacks to secondary applications for lifecycle events. These callbacks are then executed by the Callbacks Middleware when the corresponding packet lifecycle event occurs. + +After much discussion, the design was expanded to an ADR, and the Callbacks Middleware is an implementation of that ADR. + +## Concepts + +Callbacks Middleware was built with smart contracts in mind, but can be used by any secondary application that wants to allow IBC packets to call into it. Think of the Callbacks Middleware as a bridge between core IBC and a secondary application. 
+ +We have the following definitions: + +* `Underlying IBC application`: The IBC application that is wrapped by the Callbacks Middleware. This is the IBC application that is actually sending and receiving packet lifecycle events from core IBC. For example, the transfer module, or the ICA controller submodule. +* `IBC Actor`: IBC Actor is an on-chain or off-chain entity that can initiate a packet on the underlying IBC application. For example, a smart contract, an off-chain user, or a module that sends a transfer packet are all IBC Actors. +* `Secondary application`: The application that is being called into by the Callbacks Middleware for packet lifecycle events. This is the application that is receiving the callback directly from the Callbacks Middleware module. For example, the `x/wasm` module. +* `Callback Actor`: The on-chain smart contract or module that is registered to receive callbacks from the secondary application. For example, a Wasm smart contract (gatekeeped by the `x/wasm` module). Note that the Callback Actor is not necessarily the same as the IBC Actor. For example, an off-chain user can initiate a packet on the underlying IBC application, but the Callback Actor could be a smart contract. The secondary application may want to check that the IBC Actor is allowed to call into the Callback Actor, for example, by checking that the IBC Actor is the same as the Callback Actor. +* `Callback Address`: Address of the Callback Actor. This is the address that the secondary application will call into when a packet lifecycle event occurs. For example, the address of the Wasm smart contract. +* `Maximum gas limit`: The maximum amount of gas that the Callbacks Middleware will allow the secondary application to use when it executes its custom logic. +* `User defined gas limit`: The amount of gas that the IBC Actor wants to allow the secondary application to use when it executes its custom logic. 
This is the gas limit that the IBC Actor specifies when it sends a packet to the underlying IBC application. This cannot be greater than the maximum gas limit.
+
+Think of the secondary application as a bridge between the Callbacks Middleware and the Callback Actor. The secondary application is responsible for executing the custom logic of the Callback Actor when a packet lifecycle event occurs. The secondary application is also responsible for checking that the IBC Actor is allowed to call into the Callback Actor.
+
+Note that it is possible that the IBC Actor, Secondary Application, and Callback Actor are all the same entity; in that case, the Callback Address should be the secondary application's module address.
+
+The following diagram shows what a typical `RecvPacket`, `AcknowledgementPacket`, and `TimeoutPacket` execution flow looks like:
+![callbacks-middleware](/docs/ibc/images/04-middleware/02-callbacks/images/callbackflow.svg)
+
+And the following diagram shows what a typical `SendPacket` and `WriteAcknowledgement` execution flow looks like:
+![callbacks-middleware](/docs/ibc/images/04-middleware/02-callbacks/images/ics4-callbackflow.svg)
+
+## Known Limitations
+
+* Callbacks are always executed after the underlying IBC application has executed its logic.
+* The maximum gas limit is hardcoded manually during wiring. It requires a coordinated upgrade to change the maximum gas limit.
+* The receive packet callback does not pass the relayer address to the secondary application. This is so that we can use the same callback for both synchronous and asynchronous acknowledgements.
+* The receive packet callback does not pass the IBC Actor's address; this is because the IBC Actor lives on the counterparty chain and cannot be trusted.
diff --git a/docs/ibc/v8.5.x/middleware/ics29-fee/end-users.mdx b/docs/ibc/v8.5.x/middleware/ics29-fee/end-users.mdx
new file mode 100644
index 00000000..39724e4f
--- /dev/null
+++ b/docs/ibc/v8.5.x/middleware/ics29-fee/end-users.mdx
@@ -0,0 +1,34 @@
+---
+title: End Users
+---
+
+## Synopsis
+
+Learn how to incentivize IBC packets using the ICS29 Fee Middleware module.
+
+
+
+## Pre-requisite readings
+
+- [Fee Middleware](/docs/ibc/v8.5.x/middleware/ics29-fee/overview)
+
+
+
+## Summary
+
+Different types of end users:
+
+- CLI users who want to manually incentivize IBC packets
+- Client developers
+
+The Fee Middleware module allows end users to add a 'tip' to each IBC packet, which will incentivize relayer operators to relay packets between chains. gRPC endpoints are exposed for client developers, as well as a simple CLI for manually incentivizing IBC packets.
+
+## CLI Users
+
+For an in-depth guide on how to use the ICS29 Fee Middleware module using the CLI, please take a look at the [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers#asynchronous-incentivization-of-a-fungible-token-transfer) on the `ibc-go` repo.
+
+## Client developers
+
+Client developers can read more about the relevant ICS29 message types in the [Fee messages section](/docs/ibc/v8.5.x/middleware/ics29-fee/msgs).
+
+[CosmJS](https://github.com/cosmos/cosmjs) is a useful client library for signing and broadcasting Cosmos SDK messages.
diff --git a/docs/ibc/v8.5.x/middleware/ics29-fee/events.mdx b/docs/ibc/v8.5.x/middleware/ics29-fee/events.mdx new file mode 100644 index 00000000..7d6dfc42 --- /dev/null +++ b/docs/ibc/v8.5.x/middleware/ics29-fee/events.mdx @@ -0,0 +1,37 @@ +--- +title: Events +--- + +## Synopsis + +An overview of all events related to ICS-29 + +## `MsgPayPacketFee`, `MsgPayPacketFeeAsync` + +| Type | Attribute Key | Attribute Value | +| ----------------------- | --------------- | --------------- | +| incentivized_ibc_packet | port_id | `{portID}` | +| incentivized_ibc_packet | channel_id | `{channelID}` | +| incentivized_ibc_packet | packet_sequence | `{sequence}` | +| incentivized_ibc_packet | recv_fee | `{recvFee}` | +| incentivized_ibc_packet | ack_fee | `{ackFee}` | +| incentivized_ibc_packet | timeout_fee | `{timeoutFee}` | +| message | module | fee-ibc | + +## `RegisterPayee` + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| register_payee | relayer | `{relayer}` | +| register_payee | payee | `{payee}` | +| register_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | + +## `RegisterCounterpartyPayee` + +| Type | Attribute Key | Attribute Value | +| --------------------------- | ------------------ | --------------------- | +| register_counterparty_payee | relayer | `{relayer}` | +| register_counterparty_payee | counterparty_payee | `{counterpartyPayee}` | +| register_counterparty_payee | channel_id | `{channelID}` | +| message | module | fee-ibc | diff --git a/docs/ibc/v8.5.x/middleware/ics29-fee/fee-distribution.mdx b/docs/ibc/v8.5.x/middleware/ics29-fee/fee-distribution.mdx new file mode 100644 index 00000000..aa705588 --- /dev/null +++ b/docs/ibc/v8.5.x/middleware/ics29-fee/fee-distribution.mdx @@ -0,0 +1,112 @@ +--- +title: Fee Distribution +--- + +## Synopsis + +Learn about payee registration for the distribution of packet fees. The following document is intended for relayer operators. 
+ + + +## Pre-requisite readings + +- [Fee Middleware](/docs/ibc/v8.5.x/middleware/ics29-fee/overview) + + + +Packet fees are divided into 3 distinct amounts in order to compensate relayer operators for packet relaying on fee enabled IBC channels. + +- `RecvFee`: The sum of all packet receive fees distributed to a payee for successful execution of `MsgRecvPacket`. +- `AckFee`: The sum of all packet acknowledgement fees distributed to a payee for successful execution of `MsgAcknowledgement`. +- `TimeoutFee`: The sum of all packet timeout fees distributed to a payee for successful execution of `MsgTimeout`. + +## Register a counterparty payee address for forward relaying + +As mentioned in [ICS29 Concepts](/docs/ibc/v8.5.x/middleware/ics29-fee/overview#concepts), the forward relayer describes the actor who performs the submission of `MsgRecvPacket` on the destination chain. +Fee distribution for incentivized packet relays takes place on the packet source chain. + +> Relayer operators are expected to register a counterparty payee address, in order to be compensated accordingly with `RecvFee`s upon completion of a packet lifecycle. + +The counterparty payee address registered on the destination chain is encoded into the packet acknowledgement and communicated as such to the source chain for fee distribution. +**If a counterparty payee is not registered for the forward relayer on the destination chain, the escrowed fees will be refunded upon fee distribution.** + +### Relayer operator actions + +A transaction must be submitted **to the destination chain** including a `CounterpartyPayee` address of an account on the source chain. +The transaction must be signed by the `Relayer`. + +Note: If a module account address is used as the `CounterpartyPayee` but the module has been set as a blocked address in the `BankKeeper`, the refunding to the module account will fail. 
This is because many modules use invariants to compare internal tracking of module account balances against the actual balance of the account stored in the `BankKeeper`. If a token transfer to the module account occurs without going through this module and updating the account balance of the module on the `BankKeeper`, then invariants may break and unknown behaviour could occur depending on the module implementation. Therefore, if it is desirable to use a module account that is currently blocked, the module developers should be consulted to gauge the possibility of removing the module account from the blocked list.
+
+```go
+type MsgRegisterCounterpartyPayee struct {
+	// unique port identifier
+	PortId string
+	// unique channel identifier
+	ChannelId string
+	// the relayer address
+	Relayer string
+	// the counterparty payee address
+	CounterpartyPayee string
+}
+```
+
+> This message is expected to fail if:
+>
+> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+> - `Relayer` is an invalid address.
+> - `CounterpartyPayee` is empty or contains more than 2048 bytes.
+
+See below for an example CLI command:
+
+```bash
+simd tx ibc-fee register-counterparty-payee transfer channel-0 \
+  cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \
+  osmo1v5y0tz01llxzf4c2afml8s3awue0ymju22wxx2 \
+  --from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh
+```
+
+## Register an alternative payee address for reverse and timeout relaying
+
+As mentioned in [ICS29 Concepts](/docs/ibc/v8.5.x/middleware/ics29-fee/overview#concepts), the reverse relayer describes the actor who performs the submission of `MsgAcknowledgement` on the source chain.
+Similarly, the timeout relayer describes the actor who performs the submission of `MsgTimeout` (or `MsgTimeoutOnClose`) on the source chain.
+
+> Relayer operators **may choose** to register an optional payee address, in order to be compensated accordingly with `AckFee`s and `TimeoutFee`s upon completion of a packet life cycle.
+
+If a payee is not registered for the reverse or timeout relayer on the source chain, then fee distribution assumes the default behaviour, where fees are paid out to the relayer account which delivers `MsgAcknowledgement` or `MsgTimeout`/`MsgTimeoutOnClose`.
+
+### Relayer operator actions
+
+A transaction must be submitted **to the source chain** including a `Payee` address of an account on the source chain.
+The transaction must be signed by the `Relayer`.
+
+Note: If a module account address is used as the `Payee`, it is recommended to [turn off invariant checks](https://github.com/cosmos/ibc-go/blob/v7.0.0/testing/simapp/app.go#L727) for that module.
+
+```go
+type MsgRegisterPayee struct {
+  / unique port identifier
+  PortId string
+  / unique channel identifier
+  ChannelId string
+  / the relayer address
+  Relayer string
+  / the payee address
+  Payee string
+}
+```
+
+> This message is expected to fail if:
+>
+> - `PortId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+> - `ChannelId` is invalid (see [24-host naming requirements](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#paths-identifiers-separators)).
+> - `Relayer` is an invalid address.
+> - `Payee` is an invalid address.
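The failure conditions listed above can be approximated in a standalone validation helper. This is an illustrative sketch only — the helper and its checks are hypothetical simplifications; the real module validates identifiers against the ICS-024 host requirements and Bech32-decodes addresses:

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
	"strings"
)

// identifierRe approximates the ICS-024 identifier character set.
var identifierRe = regexp.MustCompile(`^[a-zA-Z0-9\.\_\+\-\#\[\]\<\>]+$`)

// validIdentifier approximates 24-host validation: an allowed character
// set plus minimum and maximum length bounds.
func validIdentifier(id string, minLen, maxLen int) bool {
	return len(id) >= minLen && len(id) <= maxLen && identifierRe.MatchString(id)
}

// validateRegisterPayee mirrors the documented failure conditions for
// MsgRegisterPayee. Address checks are stubbed as non-empty checks;
// real code Bech32-decodes the addresses.
func validateRegisterPayee(portID, channelID, relayer, payee string) error {
	if !validIdentifier(portID, 2, 128) {
		return errors.New("invalid port identifier")
	}
	if !validIdentifier(channelID, 8, 64) {
		return errors.New("invalid channel identifier")
	}
	if strings.TrimSpace(relayer) == "" {
		return errors.New("invalid relayer address")
	}
	if strings.TrimSpace(payee) == "" {
		return errors.New("invalid payee address")
	}
	return nil
}

func main() {
	err := validateRegisterPayee("transfer", "channel-0",
		"cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh",
		"cosmos153lf4zntqt33a4v0sm5cytrxyqn78q7kz8j8x5")
	fmt.Println(err == nil)
}
```

A message failing any of these checks is rejected before reaching the keeper, so no state transition occurs.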
+ +See below for an example CLI command: + +```bash +simd tx ibc-fee register-payee transfer channel-0 \ + cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh \ + cosmos153lf4zntqt33a4v0sm5cytrxyqn78q7kz8j8x5 \ + --from cosmos1rsp837a4kvtgp2m4uqzdge0zzu6efqgucm0qdh +``` diff --git a/docs/ibc/v8.5.x/middleware/ics29-fee/integration.mdx b/docs/ibc/v8.5.x/middleware/ics29-fee/integration.mdx new file mode 100644 index 00000000..bddc7a79 --- /dev/null +++ b/docs/ibc/v8.5.x/middleware/ics29-fee/integration.mdx @@ -0,0 +1,178 @@ +--- +title: Integration +--- + +## Synopsis + +Learn how to configure the Fee Middleware module with IBC applications. The following document is intended for developers building on top of the Cosmos SDK and only applies for Cosmos SDK chains. + + + +## Pre-requisite Readings + +- [IBC middleware development](/docs/ibc/v8.5.x/ibc/middleware/develop) +- [IBC middleware integration](/docs/ibc/v8.5.x/ibc/middleware/integration) + + + +The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly. +For Cosmos SDK chains this setup is done via the `app/app.go` file, where modules are constructed and configured in order to bootstrap the blockchain application. + +## Example integration of the Fee Middleware module + +```go expandable +/ app.go + +/ Register the AppModule for the fee middleware module +ModuleBasics = module.NewBasicManager( + ... + ibcfee.AppModuleBasic{ +}, + ... +) + +... + +/ Add module account permissions for the fee middleware module +maccPerms = map[string][]string{ + ... + ibcfeetypes.ModuleName: nil, +} + +... + +/ Add fee middleware Keeper +type App struct { + ... + + IBCFeeKeeper ibcfeekeeper.Keeper + + ... +} + +... + +/ Create store keys + keys := sdk.NewKVStoreKeys( + ... + ibcfeetypes.StoreKey, + ... +) + +... 
+ +app.IBCFeeKeeper = ibcfeekeeper.NewKeeper( + appCodec, keys[ibcfeetypes.StoreKey], + app.IBCKeeper.ChannelKeeper, / may be replaced with IBC middleware + app.IBCKeeper.ChannelKeeper, + &app.IBCKeeper.PortKeeper, app.AccountKeeper, app.BankKeeper, +) + +/ See the section below for configuring an application stack with the fee middleware module + +... + +/ Register fee middleware AppModule +app.moduleManager = module.NewManager( + ... + ibcfee.NewAppModule(app.IBCFeeKeeper), +) + +... + +/ Add fee middleware to begin blocker logic +app.moduleManager.SetOrderBeginBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +/ Add fee middleware to end blocker logic +app.moduleManager.SetOrderEndBlockers( + ... + ibcfeetypes.ModuleName, + ... +) + +/ Add fee middleware to init genesis logic +app.moduleManager.SetOrderInitGenesis( + ... + ibcfeetypes.ModuleName, + ... +) +``` + +## Configuring an application stack with Fee Middleware + +As mentioned in [IBC middleware development](/docs/ibc/v8.5.x/ibc/middleware/develop) an application stack may be composed of many or no middlewares that nest a base application. +These layers form the complete set of application logic that enable developers to build composable and flexible IBC application stacks. +For example, an application stack may be just a single base application like `transfer`, however, the same application stack composed with `29-fee` will nest the `transfer` base application +by wrapping it with the Fee Middleware module. + +### Transfer + +See below for an example of how to create an application stack using `transfer` and `29-fee`. +The following `transferStack` is configured in `app/app.go` and added to the IBC `Router`. +The in-line comments describe the execution flow of packets between the application stack and IBC core. 
+ +```go expandable +/ Create Transfer Stack +/ SendPacket, since it is originating from the application to core IBC: +/ transferKeeper.SendPacket -> fee.SendPacket -> channel.SendPacket + +/ RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way +/ channel.RecvPacket -> fee.OnRecvPacket -> transfer.OnRecvPacket + +/ transfer stack contains (from top to bottom): +/ - IBC Fee Middleware +/ - Transfer + +/ create IBC module from bottom to top of stack +var transferStack porttypes.IBCModule +transferStack = transfer.NewIBCModule(app.TransferKeeper) + +transferStack = ibcfee.NewIBCMiddleware(transferStack, app.IBCFeeKeeper) + +/ Add transfer stack to IBC Router +ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack) +``` + +### Interchain Accounts + +See below for an example of how to create an application stack using `27-interchain-accounts` and `29-fee`. +The following `icaControllerStack` and `icaHostStack` are configured in `app/app.go` and added to the IBC `Router` with the associated authentication module. +The in-line comments describe the execution flow of packets between the application stack and IBC core. 
+
+```go expandable
+/ Create Interchain Accounts Stack
+/ SendPacket, since it is originating from the application to core IBC:
+/ icaAuthModuleKeeper.SendTx -> icaController.SendPacket -> fee.SendPacket -> channel.SendPacket
+
+/ initialize ICA module with mock module as the authentication module on the controller side
+var icaControllerStack porttypes.IBCModule
+icaControllerStack = ibcmock.NewIBCModule(&mockModule, ibcmock.NewMockIBCApp("", scopedICAMockKeeper))
+
+app.ICAAuthModule = icaControllerStack.(ibcmock.IBCModule)
+
+icaControllerStack = icacontroller.NewIBCMiddleware(icaControllerStack, app.ICAControllerKeeper)
+
+icaControllerStack = ibcfee.NewIBCMiddleware(icaControllerStack, app.IBCFeeKeeper)
+
+/ RecvPacket, message that originates from core IBC and goes down to app, the flow is:
+/ channel.RecvPacket -> fee.OnRecvPacket -> icaHost.OnRecvPacket
+
+var icaHostStack porttypes.IBCModule
+icaHostStack = icahost.NewIBCModule(app.ICAHostKeeper)
+
+icaHostStack = ibcfee.NewIBCMiddleware(icaHostStack, app.IBCFeeKeeper)
+
+/ Add authentication module, controller and host to IBC router
+ibcRouter.
+  / the ICA Controller middleware needs to be explicitly added to the IBC Router because the
+  / ICA controller module owns the port capability for ICA. The ICA authentication module
+  / owns the channel capability.
+  AddRoute(ibcmock.ModuleName+icacontrollertypes.SubModuleName, icaControllerStack). / ica with mock auth module stack route to ica (top level of middleware stack)
+  AddRoute(icacontrollertypes.SubModuleName, icaControllerStack).
+  AddRoute(icahosttypes.SubModuleName, icaHostStack)
+```
diff --git a/docs/ibc/v8.5.x/middleware/ics29-fee/msgs.mdx b/docs/ibc/v8.5.x/middleware/ics29-fee/msgs.mdx
new file mode 100644
index 00000000..fe07ad77
--- /dev/null
+++ b/docs/ibc/v8.5.x/middleware/ics29-fee/msgs.mdx
@@ -0,0 +1,92 @@
+---
+title: Fee Messages
+---
+
+## Synopsis
+
+Learn about the different ways to pay for fees, how the fees are paid out, and what happens when not enough escrowed fees are available for payout.
+
+## Escrowing fees
+
+The fee middleware module exposes two different ways to pay fees for relaying IBC packets:
+
+### `MsgPayPacketFee`
+
+`MsgPayPacketFee` enables the escrowing of fees for a packet at the next sequence send and should be combined into one `MultiMsgTx` with the message that will be paid for. Note that the `Relayers` field has been set up to allow for an optional whitelist of relayers permitted to receive this fee; however, this feature has not yet been enabled.
+
+```go expandable
+type MsgPayPacketFee struct{
+  / fee encapsulates the recv, ack and timeout fees associated with an IBC packet
+  Fee Fee
+  / the source port unique identifier
+  SourcePortId string
+  / the source channel unique identifier
+  SourceChannelId string
+  / account address to refund fee if necessary
+  Signer string
+  / optional list of relayers permitted to receive the packet fee
+  Relayers []string
+}
+```
+
+The `Fee` message contained in this synchronous fee payment method configures different fees which will be paid out for `MsgRecvPacket`, `MsgAcknowledgement`, and `MsgTimeout`/`MsgTimeoutOnClose`.
+The amount of fees escrowed in total is the denomwise maximum of `RecvFee + AckFee` and `TimeoutFee`. This is because we do not know whether the packet will be successfully received and acknowledged or whether it will time out.
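The denomwise-maximum escrow rule can be sketched as a standalone helper. This is a simplified model using integer amounts keyed by denom; `Coins` here is a hypothetical stand-in for `sdk.Coins`, which the real module uses:

```go
package main

import "fmt"

// Coins is a simplified stand-in for sdk.Coins: amounts keyed by denom.
type Coins map[string]int64

// add returns the denomwise sum of a and b.
func add(a, b Coins) Coins {
	out := Coins{}
	for d, amt := range a {
		out[d] += amt
	}
	for d, amt := range b {
		out[d] += amt
	}
	return out
}

// maxCoins returns the denomwise maximum of a and b.
func maxCoins(a, b Coins) Coins {
	out := Coins{}
	for d, amt := range a {
		out[d] = amt
	}
	for d, amt := range b {
		if amt > out[d] {
			out[d] = amt
		}
	}
	return out
}

// escrowTotal computes the amount escrowed for a packet:
// the denomwise maximum of RecvFee+AckFee and TimeoutFee.
func escrowTotal(recvFee, ackFee, timeoutFee Coins) Coins {
	return maxCoins(add(recvFee, ackFee), timeoutFee)
}

func main() {
	recv := Coins{"stake": 50}
	ack := Coins{"stake": 25}
	timeout := Coins{"stake": 100}
	// RecvFee+AckFee = 75stake, TimeoutFee = 100stake, so 100stake is escrowed
	fmt.Println(escrowTotal(recv, ack, timeout)["stake"])
}
```

Whichever outcome occurs, any escrowed excess over the fees actually paid out is refunded at distribution time.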
+
+```go
+type Fee struct {
+  RecvFee types.Coins
+  AckFee types.Coins
+  TimeoutFee types.Coins
+}
+```
+
+The diagram below shows the `MultiMsgTx` with the `MsgTransfer` coming from a token transfer message, along with `MsgPayPacketFee`.
+
+![msgpaypacket.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/msgpaypacket.png)
+
+### `MsgPayPacketFeeAsync`
+
+`MsgPayPacketFeeAsync` enables the asynchronous escrowing of fees for a specified packet. Note that a packet can be 'topped up' multiple times with additional fees of any coin denomination by broadcasting multiple `MsgPayPacketFeeAsync` messages.
+
+```go
+type MsgPayPacketFeeAsync struct {
+  / unique packet identifier comprised of the channel ID, port ID and sequence
+  PacketId channeltypes.PacketId
+  / the packet fee associated with a particular IBC packet
+  PacketFee PacketFee
+}
+```
+
+where the `PacketFee` also specifies the `Fee` to be paid as well as the refund address for fees which are not paid out:
+
+```go
+type PacketFee struct {
+  Fee Fee
+  RefundAddress string
+  Relayers []string
+}
+```
+
+The diagram below shows how multiple `MsgPayPacketFeeAsync` can be broadcast asynchronously. Escrowing of the fee associated with a packet can be carried out by any party because ICS-29 does not dictate a particular fee payer. In fact, chains can choose to simply not expose this fee payment to end users at all and rely on a different module account or even the community pool as the source of relayer incentives.
+
+![paypacketfeeasync.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/paypacketfeeasync.png)
+
+Please see our [wiki](https://github.com/cosmos/ibc-go/wiki/Fee-enabled-fungible-token-transfers) for example flows on how to use these messages to incentivise a token transfer channel using a CLI.
+
+## Paying out the escrowed fees
+
+The following diagram shows the packet flow for an incentivized token transfer and examines the several scenarios for paying out the escrowed fees.
We assume that the relayers have registered their counterparty address, detailed in the [Fee distribution section](/docs/ibc/v8.5.x/middleware/ics29-fee/fee-distribution). + +![feeflow.png](/docs/ibc/images/04-middleware/01-ics29-fee/images/feeflow.png) + +- In the case of a successful transaction, `RecvFee` will be paid out to the designated counterparty payee address which has been registered on the receiver chain and sent back with the `MsgAcknowledgement`, `AckFee` will be paid out to the relayer address which has submitted the `MsgAcknowledgement` on the sending chain (or the registered payee in case one has been registered for the relayer address), and the remaining fees (if any) will be reimbursed to the account which escrowed the fee. (The reimbursed amount equals `EscrowedAmount - (RecvFee + AckFee)`). + +- In case of a timeout transaction, the `TimeoutFee` will be paid to the `Timeout Relayer` (who submits the timeout message to the source chain), and the remaining fees (if any) will be reimbursed to the account which escrowed the fee. (The reimbursed amount equals `EscrowedAmount - (TimeoutFee)`). + +> Please note that fee payments are built on the assumption that sender chains are the source of incentives — the chain that sends the packets is the same chain where fee payments will occur -- please see the [Fee distribution section](/docs/ibc/v8.5.x/middleware/ics29-fee/fee-distribution) to understand the flow for registering payee and counterparty payee (fee receiving) addresses. + +## A locked fee middleware module + +The fee middleware module can become locked if the situation arises that the escrow account for the fees does not have sufficient funds to pay out the fees which have been escrowed for each packet. _This situation indicates a severe bug._ In this case, the fee module will be locked until manual intervention fixes the issue. + +> A locked fee module will simply skip fee logic and continue on to the underlying packet flow. 
A channel with a locked fee module will temporarily function as a fee disabled channel, and the locking of a fee module will not affect the continued flow of packets over the channel. diff --git a/docs/ibc/v8.5.x/middleware/ics29-fee/overview.mdx b/docs/ibc/v8.5.x/middleware/ics29-fee/overview.mdx new file mode 100644 index 00000000..03cad7d9 --- /dev/null +++ b/docs/ibc/v8.5.x/middleware/ics29-fee/overview.mdx @@ -0,0 +1,49 @@ +--- +title: Overview +--- + +## Synopsis + +Learn about what the Fee Middleware module is, and how to build custom modules that utilize the Fee Middleware functionality + +## What is the Fee Middleware module? + +IBC does not depend on relayer operators for transaction verification. However, the relayer infrastructure ensures liveness of the Interchain network — operators listen for packets sent through channels opened between chains, and perform the vital service of ferrying these packets (and proof of the transaction on the sending chain/receipt on the receiving chain) to the clients on each side of the channel. + +Though relaying is permissionless and completely decentralized and accessible, it does come with operational costs. Running full nodes to query transaction proofs and paying for transaction fees associated with IBC packets are two of the primary cost burdens which have driven the overall discussion on **a general, in-protocol incentivization mechanism for relayers**. + +Initially, a [simple proposal](https://github.com/cosmos/ibc/pull/577/files) was created to incentivize relaying on ICS20 token transfers on the destination chain. However, the proposal was specific to ICS20 token transfers and would have to be reimplemented in this format on every other IBC application module. 
+After much discussion, the proposal was expanded to a [general incentivisation design](https://github.com/cosmos/ibc/tree/master/spec/app/ics-029-fee-payment) that can be adopted by any ICS application protocol as [middleware](/docs/ibc/v8.5.x/ibc/middleware/develop).
+
+## Concepts
+
+ICS29 fee payments in this middleware design are built on the assumption that sender chains are the source of incentives — the chain on which packets are incentivized is the chain that distributes fees to relayer operators. However, as part of the IBC packet flow, messages have to be submitted on both sender and destination chains. This introduces the requirement of a mapping of relayer operators' addresses on both chains.
+
+To achieve the stated requirements, the **fee middleware module has two main groups of functionality**:
+
+- Registering of relayer addresses associated with each party involved in relaying the packet on the source chain. This registration process can be automated on start-up of relayer infrastructure and happens only once, not every packet flow.
+
+  This is described in the [Fee distribution section](/docs/ibc/v8.5.x/middleware/ics29-fee/fee-distribution).
+
+- Escrowing fees by any party which will be paid out to each rightful party on completion of the packet lifecycle.
+
+  This is described in the [Fee messages section](/docs/ibc/v8.5.x/middleware/ics29-fee/msgs).
+
+We complete the introduction by giving a list of definitions of relevant terminology.
+
+`Forward relayer`: The relayer that submits the `MsgRecvPacket` message for a given packet (on the destination chain).
+
+`Reverse relayer`: The relayer that submits the `MsgAcknowledgement` message for a given packet (on the source chain).
+
+`Timeout relayer`: The relayer that submits the `MsgTimeout` or `MsgTimeoutOnClose` messages for a given packet (on the source chain).
+
+`Payee`: The account address on the source chain to be paid on completion of the packet lifecycle.
The packet lifecycle on the source chain completes with the receipt of a `MsgTimeout`/`MsgTimeoutOnClose` or a `MsgAcknowledgement`.
+
+`Counterparty payee`: The account address to be paid on completion of the packet lifecycle on the destination chain. The packet lifecycle on the destination chain completes with a successful `MsgRecvPacket`.
+
+`Refund address`: The address of the account paying for the incentivization of packet relaying. The account is refunded timeout fees upon successful acknowledgement. In the event of a packet timeout, both acknowledgement and receive fees are refunded.
+
+## Known Limitations
+
+The first version of fee payments middleware will only support incentivisation of new channels; however, channel upgradeability will enable incentivisation of all existing channels.
diff --git a/docs/ibc/v8.5.x/migrations/migration.template.mdx b/docs/ibc/v8.5.x/migrations/migration.template.mdx
new file mode 100644
index 00000000..cc687cdf
--- /dev/null
+++ b/docs/ibc/v8.5.x/migrations/migration.template.mdx
@@ -0,0 +1,30 @@
+---
+description: This guide provides instructions for migrating to a new version of ibc-go.
+---
+
+This guide provides instructions for migrating to a new version of ibc-go.
+
+There are four sections based on the four potential user groups of this document:
+
+* [Chains](#chains)
+* [IBC Apps](#ibc-apps)
+* [Relayers](#relayers)
+* [IBC Light Clients](#ibc-light-clients)
+
+**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases.
+
+## Chains
+
+* No relevant changes were made in this release.
+
+## IBC Apps
+
+* No relevant changes were made in this release.
+
+## Relayers
+
+* No relevant changes were made in this release.
+
+## IBC Light Clients
+
+* No relevant changes were made in this release.
diff --git a/docs/ibc/v8.5.x/migrations/sdk-to-v1.mdx b/docs/ibc/v8.5.x/migrations/sdk-to-v1.mdx
new file mode 100644
index 00000000..5d2818dc
--- /dev/null
+++ b/docs/ibc/v8.5.x/migrations/sdk-to-v1.mdx
@@ -0,0 +1,194 @@
+---
+title: SDK v0.43 to IBC-Go v1
+description: >-
+  This file contains information on how to migrate from the IBC module contained
+  in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository
+  based on the 0.44 SDK version.
+---
+
+This file contains information on how to migrate from the IBC module contained in the SDK 0.41.x and 0.42.x lines to the IBC module in the ibc-go repository based on the 0.44 SDK version.
+
+## Import Changes
+
+The most obvious change is the renaming of imports. We need to change:
+
+* applications -> apps
+* cosmos-sdk/x/ibc -> ibc-go
+
+On my GNU/Linux based machine I used the following commands, executed in order:
+
+```shell
+grep -RiIl 'cosmos-sdk\/x\/ibc\/applications' | xargs sed -i 's/cosmos-sdk\/x\/ibc\/applications/ibc-go\/modules\/apps/g'
+```
+
+```shell
+grep -RiIl 'cosmos-sdk\/x\/ibc' | xargs sed -i 's/cosmos-sdk\/x\/ibc/ibc-go\/modules/g'
+```
+
+ref: [explanation of the above commands](https://www.internalpointers.com/post/linux-find-and-replace-text-multiple-files)
+
+Executing these commands out of order will cause issues.
+
+Feel free to use your own method for modifying import names.
+
+NOTE: Updating to the `v0.44.0` SDK release and then running `go mod tidy` will cause a downgrade to `v0.42.0` in order to support the old IBC import paths.
+Update the import paths before running `go mod tidy`.
+
+## Chain Upgrades
+
+Chains may choose to upgrade via an upgrade proposal or genesis upgrades. Both in-place store migrations and genesis migrations are supported.
+
+**WARNING**: Please read at least the quick guide for [IBC client upgrades](/docs/ibc/v8.5.x/ibc/upgrades/quick-guide) before upgrading your chain.
It is highly recommended you do not change the chain-ID during an upgrade, otherwise you must follow the IBC client upgrade instructions.
+
+Both in-place store migrations and genesis migrations will:
+
+* migrate the solo machine client state from v1 to v2 protobuf definitions
+* prune all solo machine consensus states
+* prune all expired tendermint consensus states
+
+Chains must set a new connection parameter during either in-place store migrations or genesis migration. The new parameter, max expected block time, is used to enforce packet processing delays on the receiving end of an IBC packet flow. Check out the [docs](https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2) for more information.
+
+### In-Place Store Migrations
+
+The new chain binary will need to run migrations in the upgrade handler. The fromVM (previous module version) for the IBC module should be 1. This will allow migrations to be run for IBC, updating the version from 1 to 2.
+
+Ex:
+
+```go expandable
+app.UpgradeKeeper.SetUpgradeHandler("my-upgrade-proposal",
+  func(ctx sdk.Context, _ upgradetypes.Plan, _ module.VersionMap) (module.VersionMap, error) {
+    / set max expected block time parameter. Replace the default with your expected value
+    / https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2
+    app.IBCKeeper.ConnectionKeeper.SetParams(ctx, ibcconnectiontypes.DefaultParams())
+    fromVM := map[string]uint64{
+      ... / other modules
+      "ibc": 1,
+      ...
+    }
+
+    return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+  })
+```
+
+### Genesis Migrations
+
+To perform genesis migrations, the following code must be added to your existing migration code.
+
+```go expandable
+/ add imports as necessary
+import (
+  ibcv100 "github.com/cosmos/ibc-go/modules/core/legacy/v100"
+  ibchost "github.com/cosmos/ibc-go/modules/core/24-host"
+)
+
+...
+ +/ add in migrate cmd function +/ expectedTimePerBlock is a new connection parameter +/ https://github.com/cosmos/ibc-go/blob/release/v1.0.x/docs/ibc/proto-docs.md#params-2 +newGenState, err = ibcv100.MigrateGenesis(newGenState, clientCtx, *genDoc, expectedTimePerBlock) + if err != nil { + return err +} +``` + +**NOTE:** The genesis chain-id, time and height MUST be updated before migrating IBC, otherwise the tendermint consensus state will not be pruned. + +## IBC Keeper Changes + +The IBC Keeper now takes in the Upgrade Keeper. Please add the chains' Upgrade Keeper after the Staking Keeper: + +```diff +/ Create IBC Keeper +app.IBCKeeper = ibckeeper.NewKeeper( +- appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, scopedIBCKeeper, ++ appCodec, keys[ibchost.StoreKey], app.GetSubspace(ibchost.ModuleName), app.StakingKeeper, app.UpgradeKeeper, scopedIBCKeeper, +) +``` + +## Proposals + +### UpdateClientProposal + +The `UpdateClient` has been modified to take in two client-identifiers and one initial height. Please see the [documentation](/docs/ibc/v8.5.x/ibc/proposals) for more information. + +### UpgradeProposal + +A new IBC proposal type has been added, `UpgradeProposal`. This handles an IBC (breaking) Upgrade. +The previous `UpgradedClientState` field in an Upgrade `Plan` has been deprecated in favor of this new proposal type. + +### Proposal Handler Registration + +The `ClientUpdateProposalHandler` has been renamed to `ClientProposalHandler`. +It handles both `UpdateClientProposal`s and `UpgradeProposal`s. 
+
+Add this import:
+
+```diff
++ ibcclienttypes "github.com/cosmos/ibc-go/modules/core/02-client/types"
+```
+
+Please ensure the governance module adds the correct route:
+
+```diff
+- AddRoute(ibchost.RouterKey, ibcclient.NewClientUpdateProposalHandler(app.IBCKeeper.ClientKeeper))
++ AddRoute(ibcclienttypes.RouterKey, ibcclient.NewClientProposalHandler(app.IBCKeeper.ClientKeeper))
+```
+
+NOTE: Simapp registration was incorrect in the 0.41.x releases. The `UpdateClient` proposal handler should be registered with the router key belonging to `ibc-go/core/02-client/types`,
+as shown in the diffs above.
+
+### Proposal CLI Registration
+
+Please ensure both proposal type CLI commands are registered on the governance module by adding the following arguments to `gov.NewAppModuleBasic()`:
+
+Add the following import:
+
+```diff
++ ibcclientclient "github.com/cosmos/ibc-go/modules/core/02-client/client"
+```
+
+Register the cli commands:
+
+```diff
+gov.NewAppModuleBasic(
+  paramsclient.ProposalHandler, distrclient.ProposalHandler, upgradeclient.ProposalHandler, upgradeclient.CancelProposalHandler,
++ ibcclientclient.UpdateClientProposalHandler, ibcclientclient.UpgradeProposalHandler,
+),
+```
+
+REST routes are not supported for these proposals.
+
+## Proto file changes
+
+The gRPC querier service endpoints have changed slightly. The previous files used the `v1beta1` gRPC route; this has been updated to `v1`.
+
+The solo machine has replaced the `FrozenSequence` uint64 field with an `IsFrozen` boolean field. The package has been bumped from `v1` to `v2`.
+
+## IBC callback changes
+
+### OnRecvPacket
+
+Application developers need to update their `OnRecvPacket` callback logic.
+
+The `OnRecvPacket` callback has been modified to only return the acknowledgement. The acknowledgement returned must implement the `Acknowledgement` interface.
The acknowledgement should indicate if it represents a successful processing of a packet by returning true on `Success()` and false in all other cases. A return value of false on `Success()` will result in all state changes which occurred in the callback being discarded. More information can be found in the [documentation](/docs/ibc/v8.5.x/ibc/apps/apps#receiving-packets). + +The `OnRecvPacket`, `OnAcknowledgementPacket`, and `OnTimeoutPacket` callbacks are now passed the `sdk.AccAddress` of the relayer who relayed the IBC packet. Applications may use or ignore this information. + +## IBC Event changes + +The `packet_data` attribute has been deprecated in favor of `packet_data_hex`, in order to provide standardized encoding/decoding of packet data in events. While the `packet_data` event still exists, all relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_data_hex` as soon as possible. + +The `packet_ack` attribute has also been deprecated in favor of `packet_ack_hex` for the same reason stated above. All relayers and IBC Event consumers are strongly encouraged to switch over to using `packet_ack_hex` as soon as possible. + +The `consensus_height` attribute has been removed in the Misbehaviour event emitted. IBC clients no longer have a frozen height and misbehaviour does not necessarily have an associated height. + +## Relevant SDK changes + +* (codec) [#9226](https://github.com/cosmos/cosmos-sdk/pull/9226) Rename codec interfaces and methods, to follow a general Go interfaces: + * `codec.Marshaler` → `codec.Codec` (this defines objects which serialize other objects) + * `codec.BinaryMarshaler` → `codec.BinaryCodec` + * `codec.JSONMarshaler` → `codec.JSONCodec` + * Removed `BinaryBare` suffix from `BinaryCodec` methods (`MarshalBinaryBare`, `UnmarshalBinaryBare`, ...) + * Removed `Binary` infix from `BinaryCodec` methods (`MarshalBinaryLengthPrefixed`, `UnmarshalBinaryLengthPrefixed`, ...) 
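As noted in the event changes above, `packet_data_hex` carries the hex encoding of the raw packet data bytes, which avoids the encoding ambiguity of the deprecated `packet_data` attribute. A minimal sketch of producing and consuming the attribute (the helper names here are hypothetical, not ibc-go APIs):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// encodePacketDataAttr renders packet data both the way the deprecated
// packet_data attribute did (raw string) and as packet_data_hex.
func encodePacketDataAttr(data []byte) (deprecated, hexAttr string) {
	return string(data), hex.EncodeToString(data)
}

// decodePacketDataHex recovers the original packet data bytes from
// a packet_data_hex event attribute value.
func decodePacketDataHex(attr string) ([]byte, error) {
	return hex.DecodeString(attr)
}

func main() {
	data := []byte(`{"amount":"100","denom":"stake"}`)
	_, hexAttr := encodePacketDataAttr(data)
	decoded, err := decodePacketDataHex(hexAttr)
	fmt.Println(err == nil && string(decoded) == string(data))
}
```

Event consumers switching over only need to add the hex decode step; the decoded bytes are identical to what `packet_data` previously carried.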
diff --git a/docs/ibc/v8.5.x/migrations/support-denoms-with-slashes.mdx b/docs/ibc/v8.5.x/migrations/support-denoms-with-slashes.mdx
new file mode 100644
index 00000000..0a92b46f
--- /dev/null
+++ b/docs/ibc/v8.5.x/migrations/support-denoms-with-slashes.mdx
@@ -0,0 +1,90 @@
+---
+title: Support transfer of coins whose base denom contains slashes
+description: >-
+  This document is intended to highlight significant changes which may require
+  more information than presented in the CHANGELOG. Any changes that must be
+  done by a user of ibc-go should be documented here.
+---
+
+This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG.
+Any changes that must be done by a user of ibc-go should be documented here.
+
+There are four sections based on the four potential user groups of this document:
+
+* Chains
+* IBC Apps
+* Relayers
+* IBC Light Clients
+
+This document is necessary when chains are upgrading from a version that does not support base denoms with slashes (e.g. v3.0.0) to a version that does (e.g. v3.2.0). All versions of ibc-go earlier than v1.5.0 for the v1.x release line, v2.3.0 for the v2.x release line, and v3.1.0 for the v3.x release line do **NOT** support IBC token transfers of coins whose base denoms contain slashes. Therefore the in-place or genesis migrations described in this document are required when upgrading.
+
+If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass; however, the trace information will be incorrect.
+
+E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`.
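The mangling described above happens because the old parser assumed the base denom could not contain slashes and took only the final path element as the base denom. A simplified sketch contrasting the two behaviours (the `channel-` prefix check is a rough stand-in for the real channel-identifier validation):

```go
package main

import (
	"fmt"
	"strings"
)

// naiveParse models the old behaviour: only the final path element is
// taken as the base denom, so base denoms containing slashes are mangled.
func naiveParse(fullPath string) (trace, baseDenom string) {
	parts := strings.Split(fullPath, "/")
	return strings.Join(parts[:len(parts)-1], "/"), parts[len(parts)-1]
}

// slashAwareParse models the fixed behaviour: leading port/channel pairs
// form the trace and everything remaining is the base denom.
func slashAwareParse(fullPath string) (trace, baseDenom string) {
	parts := strings.Split(fullPath, "/")
	i := 0
	for i+1 < len(parts) && strings.HasPrefix(parts[i+1], "channel-") {
		i += 2
	}
	return strings.Join(parts[:i], "/"), strings.Join(parts[i:], "/")
}

func main() {
	path := "transfer/channel-0/testcoin/testcoin/testcoin"
	t1, b1 := naiveParse(path)
	t2, b2 := slashAwareParse(path)
	fmt.Println(t1, b1) // transfer/channel-0/testcoin/testcoin testcoin
	fmt.Println(t2, b2) // transfer/channel-0 testcoin/testcoin/testcoin
}
```

The migration described below re-parses each stored trace with the slash-aware logic so the corrected trace and base denom replace the mangled ones.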
+ +This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes. + +To do so, chain binaries should include a migration script that will run when the chain upgrades from not supporting base denominations with slashes to supporting base denominations with slashes. + +## Chains + +### ICS20 - Transfer + +The transfer module will now support slashes in base denoms, so we must iterate over current traces to check if any of them are incorrectly formed and correct the trace information. + +### Upgrade Proposal + +```go +app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / transfer module consensus version has been bumped to 2 + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +This is only necessary if there are denom traces in the store with incorrect trace information from previously received coins that had a slash in the base denom. However, it is recommended that any chain upgrading to support base denominations with slashes runs this code for safety. + +For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1680). + +### Genesis Migration + +If the chain chooses to add support for slashes in base denoms via genesis export, then the trace information must be corrected during genesis migration. 
+ +The migration code required may look like: + +```go expandable +func migrateGenesisSlashedDenomsUpgrade(appState genutiltypes.AppMap, clientCtx client.Context, genDoc *tmtypes.GenesisDoc) (genutiltypes.AppMap, error) { + if appState[ibctransfertypes.ModuleName] != nil { + transferGenState := &ibctransfertypes.GenesisState{ +} + +clientCtx.Codec.MustUnmarshalJSON(appState[ibctransfertypes.ModuleName], transferGenState) + substituteTraces := make([]ibctransfertypes.DenomTrace, len(transferGenState.DenomTraces)) + for i, dt := range transferGenState.DenomTraces { + / replace all previous traces with the latest trace if validation passes + / note most traces will have same value + newTrace := ibctransfertypes.ParseDenomTrace(dt.GetFullDenomPath()) + if err := newTrace.Validate(); err != nil { + substituteTraces[i] = dt +} + +else { + substituteTraces[i] = newTrace +} + +} + +transferGenState.DenomTraces = substituteTraces + + / delete old genesis state + delete(appState, ibctransfertypes.ModuleName) + + / set new ibc transfer genesis state + appState[ibctransfertypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(transferGenState) +} + +return appState, nil +} +``` + +For a more detailed sample, please check out the code changes in [this pull request](https://github.com/cosmos/ibc-go/pull/1528). diff --git a/docs/ibc/v8.5.x/migrations/v1-to-v2.mdx b/docs/ibc/v8.5.x/migrations/v1-to-v2.mdx new file mode 100644 index 00000000..ddd1bf1b --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v1-to-v2.mdx @@ -0,0 +1,59 @@ +--- +title: IBC-Go v1 to v2 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. 
+Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go -> github.com/cosmos/ibc-go/v2 +``` + +## Chains + +* No relevant changes were made in this release. + +## IBC Apps + +A new function has been added to the app module interface: + +```go expandable +/ NegotiateAppVersion performs application version negotiation given the provided channel ordering, connectionID, portID, counterparty and proposed version. +/ An error is returned if version negotiation cannot be performed. For example, an application module implementing this interface +/ may decide to return an error in the event of the proposed version being incompatible with it's own +NegotiateAppVersion( + ctx sdk.Context, + order channeltypes.Order, + connectionID string, + portID string, + counterparty channeltypes.Counterparty, + proposedVersion string, +) (version string, err error) +``` + +This function should perform application version negotiation and return the negotiated version. If the version cannot be negotiated, an error should be returned. This function is only used on the client side. + +### `sdk.Result` removed + +`sdk.Result` has been removed as a return value in the application callbacks. Previously it was being discarded by core IBC and was thus unused. + +## Relayers + +A new gRPC has been added to 05-port, `AppVersion`. It returns the negotiated app version. This function should be used for the `ChanOpenTry` channel handshake step to decide upon the application version which should be set in the channel. + +## IBC Light Clients + +* No relevant changes were made in this release. 
diff --git a/docs/ibc/v8.5.x/migrations/v2-to-v3.mdx b/docs/ibc/v8.5.x/migrations/v2-to-v3.mdx new file mode 100644 index 00000000..07e12848 --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v2-to-v3.mdx @@ -0,0 +1,187 @@ +--- +title: IBC-Go v2 to v3 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v2 -> github.com/cosmos/ibc-go/v3 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS20 + +The `transferkeeper.NewKeeper(...)` now takes in an ICS4Wrapper. +The ICS4Wrapper should be the IBC Channel Keeper unless ICS 20 is being connected to a middleware application. + +### ICS27 + +ICS27 Interchain Accounts has been added as a supported IBC application of ibc-go. +Please see the [ICS27 documentation](/docs/ibc/v8.5.x/apps/interchain-accounts/overview) for more information. 
+ +### Upgrade Proposal + +If the chain will adopt ICS27, it must set the appropriate params during the execution of the upgrade handler in `app.go`: + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler("v3", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / set the ICS27 consensus version so InitGenesis is not run + fromVM[icatypes.ModuleName] = icamodule.ConsensusVersion() + + / create ICS27 Controller submodule params + controllerParams := icacontrollertypes.Params{ + ControllerEnabled: true, +} + + / create ICS27 Host submodule params + hostParams := icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... +}, +} + + / initialize ICS27 module + icamodule.InitModule(ctx, controllerParams, hostParams) + + ... + + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +The host and controller submodule params only need to be set if the chain integrates those submodules. +For example, if a chain chooses not to integrate a controller submodule, it may pass empty params into `InitModule`. + +#### Add `StoreUpgrades` for ICS27 module + +For ICS27 it is also necessary to [manually add store upgrades](https://docs.cosmos.network/main/learn/advanced/upgrade#add-storeupgrades-for-new-modules) for the new ICS27 module and then configure the store loader to apply those upgrades in `app.go`: + +```go +if upgradeInfo.Name == "v3" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := store.StoreUpgrades{ + Added: []string{ + icacontrollertypes.StoreKey, icahosttypes.StoreKey +}, +} + +app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +``` + +This ensures that the new module's stores are added to the multistore before the migrations begin. +The host and controller submodule keys only need to be added if the chain integrates those submodules. 
+For example, if a chain chooses not to integrate a controller submodule, it does not need to add the controller key to the `Added` field. + +### Genesis migrations + +If the chain will adopt ICS27 and chooses to upgrade via a genesis export, then the ICS27 parameters must be set during genesis migration. + +The migration code required may look like: + +```go expandable +controllerGenesisState := icatypes.DefaultControllerGenesis() + / overwrite parameters as desired + controllerGenesisState.Params = icacontrollertypes.Params{ + ControllerEnabled: true, +} + hostGenesisState := icatypes.DefaultHostGenesis() + / overwrite parameters as desired + hostGenesisState.Params = icahosttypes.Params{ + HostEnabled: true, + AllowMessages: []string{"/cosmos.bank.v1beta1.MsgSend", ... +}, +} + icaGenesisState := icatypes.NewGenesisState(controllerGenesisState, hostGenesisState) + + / set new ics27 genesis state + appState[icatypes.ModuleName] = clientCtx.Codec.MustMarshalJSON(icaGenesisState) +``` + +### Ante decorator + +The field of type `channelkeeper.Keeper` in the `AnteDecorator` structure has been replaced with a field of type `*keeper.Keeper`: + +```diff +type AnteDecorator struct { +- k channelkeeper.Keeper ++ k *keeper.Keeper +} + +- func NewAnteDecorator(k channelkeeper.Keeper) AnteDecorator { ++ func NewAnteDecorator(k *keeper.Keeper) AnteDecorator { + return AnteDecorator{k: k} +} +``` + +## IBC Apps + +### `OnChanOpenTry` must return negotiated application version + +The `OnChanOpenTry` application callback has been modified. +The return signature now includes the application version. +IBC applications must perform application version negotiation in `OnChanOpenTry` using the counterparty version. +The negotiated application version then must be returned in `OnChanOpenTry` to core IBC. +Core IBC will set this version in the TRYOPEN channel. 
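The version-negotiation responsibility this places on applications can be sketched with a minimal, stdlib-only example (the supported-version list and helper name are illustrative; real applications parse and validate their own version format):

```go
package main

import (
	"errors"
	"fmt"
)

// negotiateVersion models what an application's OnChanOpenTry must now do:
// inspect the counterparty's proposed version and return the version core IBC
// should write into the TRYOPEN channel, or an error to abort the handshake.
func negotiateVersion(counterpartyVersion string, supported []string) (string, error) {
	for _, v := range supported {
		if v == counterpartyVersion {
			return v, nil
		}
	}
	return "", errors.New("unsupported counterparty version: " + counterpartyVersion)
}

func main() {
	version, err := negotiateVersion("ics20-1", []string{"ics20-1"})
	fmt.Println(version, err) // ics20-1 <nil>

	_, err = negotiateVersion("ics20-9", []string{"ics20-1"})
	fmt.Println(err != nil) // true
}
```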
+ +### `OnChanOpenAck` will take additional `counterpartyChannelID` argument + +The `OnChanOpenAck` application callback has been modified. +The arguments now include the counterparty channel ID. + +### `NegotiateAppVersion` removed from `IBCModule` interface + +Previously this logic was handled by the `NegotiateAppVersion` function. +Relayers would query this function before calling `ChanOpenTry`. +Applications would then need to verify that the passed-in version was correct. +Now applications will perform this version negotiation during the channel handshake, thus removing the need for `NegotiateAppVersion`. + +### Channel state will not be set before application callback + +The channel handshake logic has been reorganized within core IBC. +Channel state is not written to state until after the application callback is performed. +Applications must rely only on the passed-in channel parameters instead of querying the channel keeper for channel state. + +### IBC application callbacks moved from `AppModule` to `IBCModule` + +Previously, IBC module callbacks were part of the `AppModule` type. +The recommended approach is to create an `IBCModule` type and move the IBC module callbacks from `AppModule` to `IBCModule` in a separate file `ibc_module.go`. + +The mock module Go API has been broken in this release by applying the above format. +The IBC module callbacks have been moved from the mock module's `AppModule` into a new type `IBCModule`. + +As part of this release, the mock module now supports middleware testing. Please see the [README](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/README.md#middleware-testing) for more information. + +Please review the [mock](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/mock/ibc_module.go) and [transfer](https://github.com/cosmos/ibc-go/blob/v3.0.0/modules/apps/transfer/ibc_module.go) modules as examples.
Additionally, [simapp](https://github.com/cosmos/ibc-go/blob/v3.0.0/testing/simapp/app.go) provides an example of how `IBCModule` types should now be added to the IBC router in favour of `AppModule`. + +### IBC testing package + +`TestChain`s are now created with chainID's beginning from an index of 1. Any calls to `GetChainID(0)` will now fail. Please increment all calls to `GetChainID` by 1. + +## Relayers + +`AppVersion` gRPC has been removed. +The `version` string in `MsgChanOpenTry` has been deprecated and will be ignored by core IBC. +Relayers no longer need to determine the version to use on the `ChanOpenTry` step. +IBC applications will determine the correct version using the counterparty version. + +## IBC Light Clients + +The `GetProofSpecs` function has been removed from the `ClientState` interface. This function was previously unused by core IBC. Light clients which don't use this function may remove it. diff --git a/docs/ibc/v8.5.x/migrations/v3-to-v4.mdx b/docs/ibc/v8.5.x/migrations/v3-to-v4.mdx new file mode 100644 index 00000000..e409b67c --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v3-to-v4.mdx @@ -0,0 +1,156 @@ +--- +title: IBC-Go v3 to v4 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. 
+ +```go +github.com/cosmos/ibc-go/v3 -> github.com/cosmos/ibc-go/v4 +``` + +No genesis or in-place migrations are required when upgrading from v1 or v2 of ibc-go. + +## Chains + +### ICS27 - Interchain Accounts + +The controller submodule now implements the 05-port `Middleware` interface instead of the 05-port `IBCModule` interface. Chains that integrate the controller submodule need to create it with the `NewIBCMiddleware` constructor function. For example: + +```diff +- icacontroller.NewIBCModule(app.ICAControllerKeeper, icaAuthIBCModule) ++ icacontroller.NewIBCMiddleware(icaAuthIBCModule, app.ICAControllerKeeper) +``` + +where `icaAuthIBCModule` is the Interchain Accounts authentication IBC module. + +### ICS29 - Fee Middleware + +The Fee Middleware module, as the name suggests, plays the role of an IBC middleware and as such must be configured by chain developers to route and handle IBC messages correctly. + +Please read the Fee Middleware [integration documentation](/docs/ibc/v8.5.x/light-clients/wasm/integration) for an in-depth guide on how to configure the module correctly in order to incentivize IBC packets. + +Take a look at the following diff for an [example setup](https://github.com/cosmos/ibc-go/pull/1432/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08aL366) of how to incentivize ICS27 channels. + +### Migration to fix support for base denoms with slashes + +As part of [v1.5.0](https://github.com/cosmos/ibc-go/releases/tag/v1.5.0), [v2.3.0](https://github.com/cosmos/ibc-go/releases/tag/v2.3.0) and [v3.1.0](https://github.com/cosmos/ibc-go/releases/tag/v3.1.0) a [migration handler code sample was documented](/docs/ibc/v8.5.x/migrations/support-denoms-with-slashes#upgrade-proposal) that needs to run in order to correct the trace information of coins transferred using ICS20 whose base denom contains slashes.
+ +Based on feedback from the community we add now an improved solution to run the same migration that does not require copying a large piece of code over from the migration document, but instead requires only adding a one-line upgrade handler. + +If the chain will migrate to supporting base denoms with slashes, it must set the appropriate params during the execution of the upgrade handler in `app.go`: + +```go +app.UpgradeKeeper.SetUpgradeHandler("MigrateTraces", + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / transfer module consensus version has been bumped to 2 + return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}) +``` + +If a chain receives coins of a base denom with slashes before it upgrades to supporting it, the receive may pass however the trace information will be incorrect. + +E.g. If a base denom of `testcoin/testcoin/testcoin` is sent to a chain that does not support slashes in the base denom, the receive will be successful. However, the trace information stored on the receiving chain will be: `Trace: "transfer/{channel-id}/testcoin/testcoin", BaseDenom: "testcoin"`. + +This incorrect trace information must be corrected when the chain does upgrade to fully supporting denominations with slashes. + +## IBC Apps + +### ICS03 - Connection + +Crossing hellos have been removed from 03-connection handshake negotiation. +`PreviousConnectionId` in `MsgConnectionOpenTry` has been deprecated and is no longer used by core IBC. + +`NewMsgConnectionOpenTry` no longer takes in the `PreviousConnectionId` as crossing hellos are no longer supported. A non-empty `PreviousConnectionId` will fail basic validation for this message. + +### ICS04 - Channel + +The `WriteAcknowledgement` API now takes the `exported.Acknowledgement` type instead of passing in the acknowledgement byte array directly. 
+This is an API-breaking change and as such IBC application developers will have to update any calls to `WriteAcknowledgement`. + +The `OnChanOpenInit` application callback has been modified. +The return signature now includes the application version as detailed in the latest IBC [spec changes](https://github.com/cosmos/ibc/pull/629). + +The `NewErrorAcknowledgement` method signature has changed. +It now accepts an `error` rather than a `string`. This was done in order to prevent accidental state changes. +All error acknowledgements now contain a deterministic ABCI code and error message. It is the responsibility of the application developer to emit error details in events. + +Crossing hellos have been removed from 04-channel handshake negotiation. +IBC applications no longer need to account for already-claimed capabilities in the `OnChanOpenTry` callback. The capability provided by core IBC must be able to be claimed without error. +`PreviousChannelId` in `MsgChannelOpenTry` has been deprecated and is no longer used by core IBC. + +`NewMsgChannelOpenTry` no longer takes in the `PreviousChannelId` as crossing hellos are no longer supported. A non-empty `PreviousChannelId` will fail basic validation for this message. + +### ICS27 - Interchain Accounts + +The `RegisterInterchainAccount` API has been modified to include an additional `version` argument. This change has been made in order to support ICS29 fee middleware for relayer incentivization of ICS27 packets. +Consumers of the `RegisterInterchainAccount` API are now expected to build the appropriate JSON-encoded version string themselves and pass it accordingly. +This should be constructed within the interchain accounts authentication module which leverages the APIs exposed via the interchain accounts `controllerKeeper`.
If an empty string is passed in the `version` argument, then the version will be initialized to a default value in the `OnChanOpenInit` callback of the controller's handler, so that channel handshake can proceed. + +The following code snippet illustrates how to construct an appropriate interchain accounts `Metadata` and encode it as a JSON bytestring: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(appVersion)); err != nil { + return err +} +``` + +Similarly, if the application stack is configured to route through ICS29 fee middleware and a fee enabled channel is desired, construct the appropriate ICS29 `Metadata` type: + +```go expandable +icaMetadata := icatypes.Metadata{ + Version: icatypes.Version, + ControllerConnectionId: controllerConnectionID, + HostConnectionId: hostConnectionID, + Encoding: icatypes.EncodingProtobuf, + TxType: icatypes.TxTypeSDKMultiMsg, +} + +appVersion, err := icatypes.ModuleCdc.MarshalJSON(&icaMetadata) + if err != nil { + return err +} + feeMetadata := feetypes.Metadata{ + AppVersion: string(appVersion), + FeeVersion: feetypes.Version, +} + +feeEnabledVersion, err := feetypes.ModuleCdc.MarshalJSON(&feeMetadata) + if err != nil { + return err +} + if err := k.icaControllerKeeper.RegisterInterchainAccount(ctx, msg.ConnectionId, msg.Owner, string(feeEnabledVersion)); err != nil { + return err +} +``` + +## Relayers + +When using the `DenomTrace` gRPC, the full IBC denomination with the `ibc/` prefix may now be passed in. + +Crossing hellos are no longer supported by core IBC for 03-connection and 04-channel. 
The handshake should be completed in the logical 4 step process (INIT, TRY, ACK, CONFIRM). diff --git a/docs/ibc/v8.5.x/migrations/v4-to-v5.mdx b/docs/ibc/v8.5.x/migrations/v4-to-v5.mdx new file mode 100644 index 00000000..81b549de --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v4-to-v5.mdx @@ -0,0 +1,440 @@ +--- +title: IBC-Go v4 to v5 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* [Chains](#chains) +* [IBC Apps](#ibc-apps) +* [Relayers](#relayers) +* [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +```go +github.com/cosmos/ibc-go/v4 -> github.com/cosmos/ibc-go/v5 +``` + +## Chains + +### Ante decorator + +The `AnteDecorator` type in `core/ante` has been renamed to `RedundantRelayDecorator` (and the corresponding constructor function to `NewRedundantRelayDecorator`). Therefore in the function that creates the instance of the `sdk.AnteHandler` type (e.g. 
`NewAnteHandler`) the change would be like this: + +```diff expandable +func NewAnteHandler(options HandlerOptions) (sdk.AnteHandler, error) { + / parameter validation + + anteDecorators := []sdk.AnteDecorator{ + / other ante decorators +- ibcante.NewAnteDecorator(opts.IBCkeeper), ++ ibcante.NewRedundantRelayDecorator(options.IBCKeeper), + } + + return sdk.ChainAnteDecorators(anteDecorators...), nil +} +``` + +The `AnteDecorator` was actually renamed twice, but in [this PR](https://github.com/cosmos/ibc-go/pull/1820) you can see the changes made for the final rename. + +## IBC Apps + +### Core + +The `key` parameter of the `NewKeeper` function in `modules/core/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + stakingKeeper clienttypes.StakingKeeper, + upgradeKeeper clienttypes.UpgradeKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) *Keeper +``` + +The `RegisterRESTRoutes` function in `modules/core` has been removed. 
+ +### ICS03 - Connection + +The `key` parameter of the `NewKeeper` function in `modules/core/03-connection/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ck types.ClientKeeper +) Keeper +``` + +### ICS04 - Channel + +The function `NewPacketId` in `modules/core/04-channel/types` has been renamed to `NewPacketID`: + +```diff +- func NewPacketId( ++ func NewPacketID( + portID, + channelID string, + seq uint64 +) PacketId +``` + +The `key` parameter of the `NewKeeper` function in `modules/core/04-channel/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + clientKeeper types.ClientKeeper, + connectionKeeper types.ConnectionKeeper, + portKeeper types.PortKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +### ICS20 - Transfer + +The `key` parameter of the `NewKeeper` function in `modules/apps/transfer/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff expandable +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ics4Wrapper types.ICS4Wrapper, + channelKeeper types.ChannelKeeper, + portKeeper types.PortKeeper, + authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +) Keeper +``` + +The `amount` parameter of function `GetTransferCoin` in `modules/apps/transfer/types` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func GetTransferCoin( + portID, channelID, baseDenom string, +- amount sdk.Int ++ amount math.Int 
+) sdk.Coin +``` + +The `RegisterRESTRoutes` function in `modules/apps/transfer` has been removed. + +### ICS27 - Interchain Accounts + +The `key` and `msgRouter` parameters of the `NewKeeper` functions in + +* `modules/apps/27-interchain-accounts/controller/keeper` +* and `modules/apps/27-interchain-accounts/host/keeper` + +have changed type. The `key` parameter is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`), and the `msgRouter` parameter is now of type `*icatypes.MessageRouter` (where `icatypes` is an import alias for `"github.com/cosmos/ibc-go/v5/modules/apps/27-interchain-accounts/types"`): + +```diff expandable +/ NewKeeper creates a new interchain accounts controller Keeper instance +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ics4Wrapper icatypes.ICS4Wrapper, + channelKeeper icatypes.ChannelKeeper, + portKeeper icatypes.PortKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +- msgRouter *baseapp.MsgServiceRouter, ++ msgRouter *icatypes.MessageRouter, +) Keeper +``` + +```diff expandable +/ NewKeeper creates a new interchain accounts host Keeper instance +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + channelKeeper icatypes.ChannelKeeper, + portKeeper icatypes.PortKeeper, + accountKeeper icatypes.AccountKeeper, + scopedKeeper capabilitykeeper.ScopedKeeper, +- msgRouter *baseapp.MsgServiceRouter, ++ msgRouter *icatypes.MessageRouter, +) Keeper +``` + +The new `MessageRouter` interface is defined as: + +```go +type MessageRouter interface { + Handler(msg sdk.Msg) baseapp.MsgServiceHandler +} +``` + +The `RegisterRESTRoutes` function in `modules/apps/27-interchain-accounts` has been removed.
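As a toy illustration of what such a router does — given a message, look up the handler registered for its type URL — here is a stdlib-only sketch with simplified stand-ins for `sdk.Msg` and `baseapp.MsgServiceHandler` (all names illustrative):

```go
package main

import "fmt"

// Msg is a simplified stand-in for sdk.Msg.
type Msg interface{ TypeURL() string }

// Handler is a simplified stand-in for baseapp.MsgServiceHandler.
type Handler func(msg Msg) (string, error)

// Router satisfies the spirit of the MessageRouter interface: given a
// message, return the handler registered for its type URL (nil if none).
type Router struct{ routes map[string]Handler }

func (r Router) Handler(msg Msg) Handler { return r.routes[msg.TypeURL()] }

type MsgSend struct{ Amount string }

func (MsgSend) TypeURL() string { return "/cosmos.bank.v1beta1.MsgSend" }

func main() {
	router := Router{routes: map[string]Handler{
		"/cosmos.bank.v1beta1.MsgSend": func(msg Msg) (string, error) {
			return "handled " + msg.(MsgSend).Amount, nil
		},
	}}
	h := router.Handler(MsgSend{Amount: "10atom"})
	res, _ := h(MsgSend{Amount: "10atom"})
	fmt.Println(res) // handled 10atom
}
```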
+ +An additional parameter, `ics4Wrapper` has been added to the `host` submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper`. +This allows the `host` submodule to correctly unwrap the channel version for channel reopening handshakes in the `OnChanOpenTry` callback. + +```diff expandable +func NewKeeper( + cdc codec.BinaryCodec, + key storetypes.StoreKey, + paramSpace paramtypes.Subspace, ++ ics4Wrapper icatypes.ICS4Wrapper, + channelKeeper icatypes.ChannelKeeper, + portKeeper icatypes.PortKeeper, + accountKeeper icatypes.AccountKeeper, + scopedKeeper icatypes.ScopedKeeper, + msgRouter icatypes.MessageRouter, +) Keeper +``` + +#### Cosmos SDK message handler responses in packet acknowledgement + +The construction of the transaction response of a message execution on the host chain has changed. The `Data` field in the `sdk.TxMsgData` has been deprecated and since Cosmos SDK 0.46 the `MsgResponses` field contains the message handler responses packed into `Any`s. 
+ +For chains on Cosmos SDK 0.45 and below, the message response was constructed like this: + +```go expandable +txMsgData := &sdk.TxMsgData{ + Data: make([]*sdk.MsgData, len(msgs)), +} + for i, msg := range msgs { + / message validation + + msgResponse, err := k.executeMsg(cacheCtx, msg) + / return if err != nil + + txMsgData.Data[i] = &sdk.MsgData{ + MsgType: sdk.MsgTypeURL(msg), + Data: msgResponse, +} +} + +/ emit events + +txResponse, err := proto.Marshal(txMsgData) +/ return if err != nil + +return txResponse, nil +``` + +And for chains on Cosmos SDK 0.46 and above, it is now done like this: + +```go expandable +txMsgData := &sdk.TxMsgData{ + MsgResponses: make([]*codectypes.Any, len(msgs)), +} + for i, msg := range msgs { + / message validation + + protoAny, err := k.executeMsg(cacheCtx, msg) + / return if err != nil + + txMsgData.MsgResponses[i] = protoAny +} + +/ emit events + +txResponse, err := proto.Marshal(txMsgData) +/ return if err != nil + +return txResponse, nil +``` + +When handling the acknowledgement in the `OnAcknowledgementPacket` callback of a custom ICA controller module, then depending on whether `txMsgData.Data` is empty or not, the logic to handle the message handler response will be different. 
**Only controller chains on Cosmos SDK 0.46 or above will be able to write the logic needed to handle the response from a host chain on Cosmos SDK 0.46 or above.** + +```go expandable +var ack channeltypes.Acknowledgement + if err := channeltypes.SubModuleCdc.UnmarshalJSON(acknowledgement, &ack); err != nil { + return err +} + +var txMsgData sdk.TxMsgData + if err := proto.Unmarshal(ack.GetResult(), &txMsgData); err != nil { + return err +} + switch len(txMsgData.Data) { + case 0: / for SDK 0.46 and above + for _, msgResponse := range txMsgData.MsgResponses { + / unmarshal msgResponse and execute logic based on the response +} + +return nil +default: / for SDK 0.45 and below + for _, msgData := range txMsgData.Data { + / unmarshal msgData and execute logic based on the response +} +} +``` + +See the corresponding documentation about authentication modules for more information. + +### ICS29 - Fee Middleware + +The `key` parameter of the `NewKeeper` function in `modules/apps/29-fee` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff expandable +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + ics4Wrapper types.ICS4Wrapper, + channelKeeper types.ChannelKeeper, + portKeeper types.PortKeeper, + authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, +) Keeper +``` + +The `RegisterRESTRoutes` function in `modules/apps/29-fee` has been removed. + +### IBC testing package + +The `MockIBCApp` type has been renamed to `IBCApp` (and the corresponding constructor function to `NewIBCApp`).
This has resulted therefore in: + +* The `IBCApp` field of the `*IBCModule` in `testing/mock` to change its type as well to `*IBCApp`: + +```diff +type IBCModule struct { + appModule *AppModule +- IBCApp *MockIBCApp / base application of an IBC middleware stack ++ IBCApp *IBCApp / base application of an IBC middleware stack +} +``` + +* The `app` parameter to `*NewIBCModule` in `testing/mock` to change its type as well to `*IBCApp`: + +```diff +func NewIBCModule( + appModule *AppModule, +- app *MockIBCApp ++ app *IBCApp +) IBCModule +``` + +The `MockEmptyAcknowledgement` type has been renamed to `EmptyAcknowledgement` (and the corresponding constructor function to `NewEmptyAcknowledgement`). + +The `TestingApp` interface in `testing` has gone through some modifications: + +* The return type of the function `GetStakingKeeper` is not the concrete type `stakingkeeper.Keeper` anymore (where `stakingkeeper` is an import alias for `"github.com/cosmos/cosmos-sdk/x/staking/keeper"`), but it has been changed to the interface `ibctestingtypes.StakingKeeper` (where `ibctestingtypes` is an import alias for `""github.com/cosmos/ibc-go/v5/testing/types"`). See this [PR](https://github.com/cosmos/ibc-go/pull/2028) for more details. The `StakingKeeper` interface is defined as: + +```go +type StakingKeeper interface { + GetHistoricalInfo(ctx sdk.Context, height int64) (stakingtypes.HistoricalInfo, bool) +} +``` + +* The return type of the function `LastCommitID` has changed to `storetypes.CommitID` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`). 
+ +See the following `git diff` for more details: + +```diff expandable +type TestingApp interface { + abci.Application + + / ibc-go additions + GetBaseApp() *baseapp.BaseApp +- GetStakingKeeper() stakingkeeper.Keeper ++ GetStakingKeeper() ibctestingtypes.StakingKeeper + GetIBCKeeper() *keeper.Keeper + GetScopedIBCKeeper() capabilitykeeper.ScopedKeeper + GetTxConfig() client.TxConfig + + / Implemented by SimApp + AppCodec() codec.Codec + + / Implemented by BaseApp +- LastCommitID() sdk.CommitID ++ LastCommitID() storetypes.CommitID + LastBlockHeight() int64 +} +``` + +The `powerReduction` parameter of the function `SetupWithGenesisValSet` in `testing` is now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff +func SetupWithGenesisValSet( + t *testing.T, + valSet *tmtypes.ValidatorSet, + genAccs []authtypes.GenesisAccount, + chainID string, +- powerReduction sdk.Int, ++ powerReduction math.Int, + balances ...banktypes.Balance +) TestingApp +``` + +The `accAmt` parameter of the functions + +* `AddTestAddrsFromPubKeys` , +* `AddTestAddrs` +* and `AddTestAddrsIncremental` + +in `testing/simapp` are now of type `math.Int` (`"cosmossdk.io/math"`): + +```diff expandable +func AddTestAddrsFromPubKeys( + app *SimApp, + ctx sdk.Context, + pubKeys []cryptotypes.PubKey, +- accAmt sdk.Int, ++ accAmt math.Int +) +func addTestAddrs( + app *SimApp, + ctx sdk.Context, + accNum int, +- accAmt sdk.Int, ++ accAmt math.Int, + strategy GenerateAccountStrategy +) []sdk.AccAddress +func AddTestAddrsIncremental( + app *SimApp, + ctx sdk.Context, + accNum int, +- accAmt sdk.Int, ++ accAmt math.Int +) []sdk.AccAddress +``` + +The `RegisterRESTRoutes` function in `testing/mock` has been removed. + +## Relayers + +* No relevant changes were made in this release. 
+ +## IBC Light Clients + +### ICS02 - Client + +The `key` parameter of the `NewKeeper` function in `modules/core/02-client/keeper` is now of type `storetypes.StoreKey` (where `storetypes` is an import alias for `"github.com/cosmos/cosmos-sdk/store/types"`): + +```diff +func NewKeeper( + cdc codec.BinaryCodec, +- key sdk.StoreKey, ++ key storetypes.StoreKey, + paramSpace paramtypes.Subspace, + sk types.StakingKeeper, + uk types.UpgradeKeeper +) Keeper +``` diff --git a/docs/ibc/v8.5.x/migrations/v5-to-v6.mdx b/docs/ibc/v8.5.x/migrations/v5-to-v6.mdx new file mode 100644 index 00000000..6d64fad5 --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v5-to-v6.mdx @@ -0,0 +1,301 @@ +--- +title: IBC-Go v5 to v6 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +## Chains + +The `ibc-go/v6` release introduces a new set of migrations for `27-interchain-accounts`. Ownership of ICS27 channel capabilities is transferred from ICS27 authentication modules and will now reside with the ICS27 controller submodule moving forward. + +For chains which contain a custom authentication module using the ICS27 controller submodule this requires a migration function to be included in the chain upgrade handler. 
A subsequent migration handler is run automatically, asserting the ownership of ICS27 channel capabilities has been transferred successfully. + +This migration is not required for chains which *do not* contain a custom authentication module using the ICS27 controller submodule. + +This migration facilitates the addition of the ICS27 controller submodule `MsgServer` which provides a standardised approach to integrating existing forms of authentication such as `x/gov` and `x/group` provided by the Cosmos SDK. + +For more information please refer to [ADR 009](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-009-v6-ics27-msgserver.md). + +### Upgrade proposal + +Please refer to [PR #2383](https://github.com/cosmos/ibc-go/pull/2383) for integrating the ICS27 channel capability migration logic or follow the steps outlined below: + +1. Add the upgrade migration logic to chain distribution. This may be, for example, maintained under a package `app/upgrades/v6`. + +```go expandable +package v6 + +import ( + + "github.com/cosmos/cosmos-sdk/codec" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" + + v6 "github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/migrations/v6" +) + +const ( + UpgradeName = "v6" +) + +func CreateUpgradeHandler( + mm *module.Manager, + configurator module.Configurator, + cdc codec.BinaryCodec, + capabilityStoreKey *storetypes.KVStoreKey, + capabilityKeeper *capabilitykeeper.Keeper, + moduleName string, +) + +upgradetypes.UpgradeHandler { + return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { + if err := v6.MigrateICS27ChannelCapability(ctx, cdc, capabilityStoreKey, capabilityKeeper, moduleName); err != nil { + return nil, err +} + +return 
mm.RunMigrations(ctx, configurator, vm) +} +} +``` + +2. Set the upgrade handler in `app.go`. The `moduleName` parameter refers to the authentication module's `ScopedKeeper` name. This is the name provided upon instantiation in `app.go` via the [`x/capability` keeper `ScopeToModule(moduleName string)`](https://github.com/cosmos/cosmos-sdk/blob/v0.46.1/x/capability/keeper/keeper.go#L70) method. [See here for an example in `simapp`](https://github.com/cosmos/ibc-go/blob/v5.0.0/testing/simapp/app.go#L304). + +```go expandable +app.UpgradeKeeper.SetUpgradeHandler( + v6.UpgradeName, + v6.CreateUpgradeHandler( + app.mm, + app.configurator, + app.appCodec, + app.keys[capabilitytypes.ModuleName], + app.CapabilityKeeper, + >>>> moduleName <<<<, + ), +) +``` + +## IBC Apps + +### ICS27 - Interchain Accounts + +#### Controller APIs + +In previous releases of ibc-go, chain developers integrating the ICS27 interchain accounts controller functionality were expected to create a custom `Base Application` referred to as an authentication module, see the section [Building an authentication module](/docs/ibc/v8.5.x/apps/interchain-accounts/auth-modules) from the documentation. + +The `Base Application` was intended to be composed with the ICS27 controller submodule `Keeper` and facilitate many forms of message authentication depending on a chain's particular use case. + +Prior to ibc-go v6 the controller submodule exposed only these two functions (to which we will refer as the legacy APIs): + +* [`RegisterInterchainAccount`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/account.go#L19) +* [`SendTx`](https://github.com/cosmos/ibc-go/blob/v5.0.0/modules/apps/27-interchain-accounts/controller/keeper/relay.go#L18) + +However, these functions have now been deprecated in favour of the new controller submodule `MsgServer` and will be removed in later releases. 
+ +Both APIs remain functional and maintain backwards compatibility in ibc-go v6; however, consumers of these APIs are now recommended to follow the message passing paradigm outlined in Cosmos SDK [ADR 031](/docs/sdk/next/documentation/legacy/adr-comprehensive#adr-031) and [ADR 033](/docs/sdk/next/documentation/legacy/adr-comprehensive#adr-033). This is facilitated by the Cosmos SDK [`MsgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/main/baseapp/msg_service_router.go#L17), and chain developers creating custom application logic can now omit the ICS27 controller submodule `Keeper` from their module and instead depend on message routing. + +Depending on the use case, developers of custom authentication modules face one of three scenarios: + +![auth-module-decision-tree.png](/docs/ibc/images/04-migrations/images/auth-module-decision-tree.png) + +**My authentication module needs to access IBC packet callbacks** + +Application developers that wish to consume IBC packet callbacks and react upon packet acknowledgements **must** continue using the controller submodule's legacy APIs. The authentication modules will not need a `ScopedKeeper` anymore, though, because the channel capability will be claimed by the controller submodule. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the authentication module's `ScopedKeeper` (`scopedICAAuthKeeper`) is not needed anymore and can be removed from the argument list of the keeper constructor function, as shown here: + +```diff +app.ICAAuthKeeper = icaauthkeeper.NewKeeper( + appCodec, + keys[icaauthtypes.StoreKey], + app.ICAControllerKeeper, +- scopedICAAuthKeeper, +) +``` + +Please note that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above.
Therefore the authentication module's `ScopedKeeper` cannot be completely removed from the chain code until the migration has run. + +In the future, the use of the legacy APIs for accessing packet callbacks will be replaced by IBC Actor Callbacks (see [ADR 008](https://github.com/cosmos/ibc-go/pull/1976) for more details) and it will also be possible to access them with the `MsgServiceRouter`. + +**My authentication module does not need access to IBC packet callbacks** + +The authentication module can migrate from using the legacy APIs and it can be composed instead with the `MsgServiceRouter`, so that the authentication module is able to pass messages to the controller submodule's `MsgServer` to register interchain accounts and send packets to the interchain account. For example, given an Interchain Accounts authentication module keeper `ICAAuthKeeper`, the ICS27 controller submodule keeper (`ICAControllerKeeper`) and authentication module scoped keeper (`scopedICAAuthKeeper`) are not needed anymore and can be replaced with the `MsgServiceRouter`, as shown here: + +```diff +app.ICAAuthKeeper = icaauthkeeper.NewKeeper( + appCodec, + keys[icaauthtypes.StoreKey], +- app.ICAControllerKeeper, +- scopedICAAuthKeeper, ++ app.MsgServiceRouter(), +) +``` + +In your authentication module you can route messages to the controller submodule's `MsgServer` instead of using the legacy APIs. 
For example, for registering an interchain account: + +```diff expandable +- if err := keeper.icaControllerKeeper.RegisterInterchainAccount( +- ctx, +- connectionID, +- owner.String(), +- version, +- ); err != nil { +- return err +- } ++ msg := controllertypes.NewMsgRegisterInterchainAccount( ++ connectionID, ++ owner.String(), ++ version, ++ ) ++ handler := keeper.msgRouter.Handler(msg) ++ res, err := handler(ctx, msg) ++ if err != nil { ++ return err ++ } +``` + +where `controllertypes` is an import alias for `"github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/controller/types"`. + +In addition, in this use case the authentication module does not need to implement the `IBCModule` interface anymore. + +**I do not need a custom authentication module anymore** + +If your authentication module does not have any extra functionality compared to the default authentication module added in ibc-go v6 (the `MsgServer`), or if you can use a generic authentication module, such as the `x/auth`, `x/gov` or `x/group` modules from the Cosmos SDK (v0.46 and later), then you can remove your authentication module completely and use instead the gRPC endpoints of the `MsgServer` or the CLI added in ibc-go v6. + +Please remember that the authentication module's `ScopedKeeper` name is still needed as part of the channel capability migration described in section [Upgrade proposal](#upgrade-proposal) above. + +#### Host params + +The ICS27 host submodule default params have been updated to include the `AllowAllHostMsgs` wildcard `*`. +This enables execution of any `sdk.Msg` type for ICS27 registered on the host chain `InterfaceRegistry`. + +```diff +/ AllowAllHostMsgs holds the string key that allows all message types on interchain accounts host module +const AllowAllHostMsgs = "*" + +... 
+ +/ DefaultParams is the default parameter configuration for the host submodule +func DefaultParams() Params { +- return NewParams(DefaultHostEnabled, nil) ++ return NewParams(DefaultHostEnabled, []string{AllowAllHostMsgs}) +} +``` + +#### API breaking changes + +`SerializeCosmosTx` takes in a `[]proto.Message` instead of `[]sdk.Msg`. This allows for the serialization of proto messages without requiring the fulfillment of the `sdk.Msg` interface. + +The `27-interchain-accounts` genesis types have been moved to their own package: `modules/apps/27-interchain-accounts/genesis/types`. +This change facilitates the addition of the ICS27 controller submodule `MsgServer` and avoids cyclic imports. This should have minimal disruption to chain developers integrating `27-interchain-accounts`. + +The ICS27 host submodule `NewKeeper` function in `modules/apps/27-interchain-accounts/host/keeper` now includes an additional parameter of type `ICS4Wrapper`. +This provides the host submodule with the ability to correctly unwrap channel versions in the event of a channel reopening handshake. + +```diff +func NewKeeper( + cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace, +- channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper, ++ ics4Wrapper icatypes.ICS4Wrapper, channelKeeper icatypes.ChannelKeeper, portKeeper icatypes.PortKeeper, + accountKeeper icatypes.AccountKeeper, scopedKeeper icatypes.ScopedKeeper, msgRouter icatypes.MessageRouter, +) Keeper +``` + +### ICS29 - `NewKeeper` API change + +The `NewKeeper` function of ICS29 has been updated to remove the `paramSpace` parameter as it was unused.
+ +```diff +func NewKeeper( +- cdc codec.BinaryCodec, key storetypes.StoreKey, paramSpace paramtypes.Subspace, +- ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper, portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper, ++ cdc codec.BinaryCodec, key storetypes.StoreKey, ++ ics4Wrapper types.ICS4Wrapper, channelKeeper types.ChannelKeeper, ++ portKeeper types.PortKeeper, authKeeper types.AccountKeeper, bankKeeper types.BankKeeper, +) Keeper { +``` + +### ICS20 - `SendTransfer` is no longer exported + +The `SendTransfer` function of ICS20 has been removed. IBC transfers should now be initiated with `MsgTransfer` and routed to the ICS20 `MsgServer`. + +See below for example: + +```go +if handler := msgRouter.Handler(msgTransfer); handler != nil { + if err := msgTransfer.ValidateBasic(); err != nil { + return nil, err +} + +res, err := handler(ctx, msgTransfer) + if err != nil { + return nil, err +} +} +``` + +### ICS04 - `SendPacket` API change + +The `SendPacket` API has been simplified: + +```diff expandable +/ SendPacket is called by a module in order to send an IBC packet on a channel +func (k Keeper) SendPacket( + ctx sdk.Context, + channelCap *capabilitytypes.Capability, +- packet exported.PacketI, +-) error { ++ sourcePort string, ++ sourceChannel string, ++ timeoutHeight clienttypes.Height, ++ timeoutTimestamp uint64, ++ data []byte, ++) (uint64, error) { +``` + +Callers no longer need to pass in a pre-constructed packet. +The destination port/channel identifiers and the packet sequence will be determined by core IBC. +`SendPacket` will return the packet sequence. 
+ +### IBC testing package + +The `SendPacket` API has been simplified: + +```diff expandable +/ SendPacket is called by a module in order to send an IBC packet on a channel +func (k Keeper) SendPacket( + ctx sdk.Context, + channelCap *capabilitytypes.Capability, +- packet exported.PacketI, +-) error { ++ sourcePort string, ++ sourceChannel string, ++ timeoutHeight clienttypes.Height, ++ timeoutTimestamp uint64, ++ data []byte, ++) (uint64, error) { +``` + +Callers no longer need to pass in a pre-constructed packet. `SendPacket` will return the packet sequence. + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v8.5.x/migrations/v6-to-v7.mdx b/docs/ibc/v8.5.x/migrations/v6-to-v7.mdx new file mode 100644 index 00000000..6f652e70 --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v6-to-v7.mdx @@ -0,0 +1,358 @@ +--- +title: IBC-Go v6 to v7 +description: >- + This document is intended to highlight significant changes which may require + more information than presented in the CHANGELOG. Any changes that must be + done by a user of ibc-go should be documented here. +--- + +This document is intended to highlight significant changes which may require more information than presented in the CHANGELOG. +Any changes that must be done by a user of ibc-go should be documented here. + +There are four sections based on the four potential user groups of this document: + +* Chains +* IBC Apps +* Relayers +* IBC Light Clients + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated to bump the version number on major releases. + +## Chains + +Chains will perform automatic migrations to remove existing localhost clients and to migrate the solomachine to v3 of the protobuf definition. + +An optional upgrade handler has been added to prune expired tendermint consensus states. It may be used during any upgrade (from v7 onwards). 
+Add the following to the function call to the upgrade handler in `app/app.go`, to perform the optional state pruning. + +```go expandable +import ( + + / ... + ibctmmigrations "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint/migrations" +) + +/ ... + +app.UpgradeKeeper.SetUpgradeHandler( + upgradeName, + func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / prune expired tendermint consensus states to save storage space + _, err := ibctmmigrations.PruneExpiredConsensusStates(ctx, app.Codec, app.IBCKeeper.ClientKeeper) + if err != nil { + return nil, err +} + +return app.mm.RunMigrations(ctx, app.configurator, fromVM) +}, +) +``` + +Check the logs to see how many consensus states are pruned. + +### Light client registration + +Chains must explicitly register the types of any light client modules they wish to integrate. + +#### Tendermint registration + +To register the tendermint client, modify the `app.go` file to include the tendermint `AppModuleBasic`: + +```diff expandable +import ( + / ... ++ ibctm "github.com/cosmos/ibc-go/v7/modules/light-clients/07-tendermint" +) + +/ ... + +ModuleBasics = module.NewBasicManager( + ... + ibc.AppModuleBasic{}, ++ ibctm.AppModuleBasic{}, + ... +) +``` + +It may be useful to reference the [PR](https://github.com/cosmos/ibc-go/pull/2825) which added the `AppModuleBasic` for the tendermint client. + +#### Solo machine registration + +To register the solo machine client, modify the `app.go` file to include the solo machine `AppModuleBasic`: + +```diff expandable +import ( + / ... ++ solomachine "github.com/cosmos/ibc-go/v7/modules/light-clients/06-solomachine" +) + +/ ... + +ModuleBasics = module.NewBasicManager( + ... + ibc.AppModuleBasic{}, ++ solomachine.AppModuleBasic{}, + ... +) +``` + +It may be useful to reference the [PR](https://github.com/cosmos/ibc-go/pull/2826) which added the `AppModuleBasic` for the solo machine client.
+ +### Testing package API + +The `SetChannelClosed` utility method in `testing/endpoint.go` has been updated to `SetChannelState`, which will take a `channeltypes.State` argument so that the `ChannelState` can be set to any of the possible channel states. + +## IBC Apps + +* No relevant changes were made in this release. + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +### `ClientState` interface changes + +The `VerifyUpgradeAndUpdateState` function has been modified. The client state and consensus state return values have been removed. + +Light clients **must** handle all management of client and consensus states including the setting of updated client state and consensus state in the client store. + +The `Initialize` method is now expected to set the initial client state, consensus state and any client-specific metadata in the provided store upon client creation. + +The `CheckHeaderAndUpdateState` method has been split into 4 new methods: + +* `VerifyClientMessage` verifies a `ClientMessage`. A `ClientMessage` could be a `Header`, `Misbehaviour`, or batch update. Calls to `CheckForMisbehaviour`, `UpdateState`, and `UpdateStateOnMisbehaviour` will assume that the content of the `ClientMessage` has been verified and can be trusted. An error should be returned if the `ClientMessage` fails to verify. + +* `CheckForMisbehaviour` checks for evidence of a misbehaviour in `Header` or `Misbehaviour` types. + +* `UpdateStateOnMisbehaviour` performs appropriate state changes on a `ClientState` given that misbehaviour has been detected and verified. + +* `UpdateState` updates and stores as necessary any associated information for an IBC client, such as the `ClientState` and corresponding `ConsensusState`. An error is returned if `ClientMessage` is of type `Misbehaviour`. Upon successful update, a list containing the updated consensus state height is returned. 
+ +The `CheckMisbehaviourAndUpdateState` function has been removed from `ClientState` interface. This functionality is now encapsulated by the usage of `VerifyClientMessage`, `CheckForMisbehaviour`, `UpdateStateOnMisbehaviour`. + +The function `GetTimestampAtHeight` has been added to the `ClientState` interface. It should return the timestamp for a consensus state associated with the provided height. + +Prior to ibc-go/v7 the `ClientState` interface defined a method for each data type which was being verified in the counterparty state store. +The state verification functions for all IBC data types have been consolidated into two generic methods, `VerifyMembership` and `VerifyNonMembership`. +Both are expected to be provided with a standardised key path, `exported.Path`, as defined in [ICS 24 host requirements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements). Membership verification requires callers to provide the marshalled value `[]byte`. Delay period values should be zero for non-packet processing verification. A zero proof height is now allowed by core IBC and may be passed into `VerifyMembership` and `VerifyNonMembership`. Light clients are responsible for returning an error if a zero proof height is invalid behaviour. + +See below for an example of how ibc-go now performs channel state verification. 
+ +```go expandable +merklePath := commitmenttypes.NewMerklePath(host.ChannelPath(portID, channelID)) + +merklePath, err := commitmenttypes.ApplyPrefix(connection.GetCounterparty().GetPrefix(), merklePath) + if err != nil { + return err +} + +channelEnd, ok := channel.(channeltypes.Channel) + if !ok { + return sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "invalid channel type %T", channel) +} + +bz, err := k.cdc.Marshal(&channelEnd) + if err != nil { + return err +} + if err := clientState.VerifyMembership( + ctx, clientStore, k.cdc, height, + 0, 0, / skip delay period checks for non-packet processing verification + proof, merklePath, bz, +); err != nil { + return sdkerrors.Wrapf(err, "failed channel state verification for client (%s)", clientID) +} +``` + +### `Header` and `Misbehaviour` + +`exported.Header` and `exported.Misbehaviour` interface types have been merged and renamed to `ClientMessage` interface. + +`GetHeight` function has been removed from `exported.Header` and thus is not included in the `ClientMessage` interface + +### `ConsensusState` + +The `GetRoot` function has been removed from consensus state interface since it was not used by core IBC. + +### Client keeper + +Keeper function `CheckMisbehaviourAndUpdateState` has been removed since function `UpdateClient` can now handle updating `ClientState` on `ClientMessage` type which can be any `Misbehaviour` implementations. + +### SDK message + +`MsgSubmitMisbehaviour` is deprecated since `MsgUpdateClient` can now submit a `ClientMessage` type which can be any `Misbehaviour` implementations. + +The field `header` in `MsgUpdateClient` has been renamed to `client_message`. + +## Solomachine + +The `06-solomachine` client implementation has been simplified in ibc-go/v7. In-place store migrations have been added to migrate solomachine clients from `v2` to `v3`. 
+ +### `ClientState` + +The `ClientState` protobuf message definition has been updated to remove the deprecated `bool` field `allow_update_after_proposal`. + +```diff +message ClientState { + option (gogoproto.goproto_getters) = false; + + uint64 sequence = 1; + bool is_frozen = 2 [(gogoproto.moretags) = "yaml:\"is_frozen\""]; + ConsensusState consensus_state = 3 [(gogoproto.moretags) = "yaml:\"consensus_state\""]; +- bool allow_update_after_proposal = 4 [(gogoproto.moretags) = "yaml:\"allow_update_after_proposal\""]; +} +``` + +### `Header` and `Misbehaviour` + +The `06-solomachine` protobuf message `Header` has been updated to remove the `sequence` field. This field was seen as redundant as the implementation can safely rely on the `sequence` value maintained within the `ClientState`. + +```diff expandable +message Header { + option (gogoproto.goproto_getters) = false; + +- uint64 sequence = 1; +- uint64 timestamp = 2; +- bytes signature = 3; +- google.protobuf.Any new_public_key = 4 [(gogoproto.moretags) = "yaml:\"new_public_key\""]; +- string new_diversifier = 5 [(gogoproto.moretags) = "yaml:\"new_diversifier\""]; ++ uint64 timestamp = 1; ++ bytes signature = 2; ++ google.protobuf.Any new_public_key = 3 [(gogoproto.moretags) = "yaml:\"new_public_key\""]; ++ string new_diversifier = 4 [(gogoproto.moretags) = "yaml:\"new_diversifier\""]; +} +``` + +Similarly, the `Misbehaviour` protobuf message has been updated to remove the `client_id` field. 
+ +```diff expandable +message Misbehaviour { + option (gogoproto.goproto_getters) = false; + +- string client_id = 1 [(gogoproto.moretags) = "yaml:\"client_id\""]; +- uint64 sequence = 2; +- SignatureAndData signature_one = 3 [(gogoproto.moretags) = "yaml:\"signature_one\""]; +- SignatureAndData signature_two = 4 [(gogoproto.moretags) = "yaml:\"signature_two\""]; ++ uint64 sequence = 1; ++ SignatureAndData signature_one = 2 [(gogoproto.moretags) = "yaml:\"signature_one\""]; ++ SignatureAndData signature_two = 3 [(gogoproto.moretags) = "yaml:\"signature_two\""]; +} +``` + +### `SignBytes` + +Most notably, the `SignBytes` protobuf definition has been modified to replace the `data_type` field with a new field, `path`. The `path` field is defined as `bytes` and represents a serialized [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements) standardized key path under which the `data` is stored. + +```diff +message SignBytes { + option (gogoproto.goproto_getters) = false; + + uint64 sequence = 1; + uint64 timestamp = 2; + string diversifier = 3; +- DataType data_type = 4 [(gogoproto.moretags) = "yaml:\"data_type\""]; ++ bytes path = 4; + bytes data = 5; +} +``` + +The `DataType` enum and all associated data types have been removed, greatly reducing the number of message definitions and complexity in constructing the `SignBytes` message type. Likewise, solomachine implementations must now use the serialized `path` value when constructing `SignatureAndData` for signature verification of `SignBytes` data. + +```diff +message SignatureAndData { + option (gogoproto.goproto_getters) = false; + + bytes signature = 1; +- DataType data_type = 2 [(gogoproto.moretags) = "yaml:\"data_type\""]; ++ bytes path = 2; + bytes data = 3; + uint64 timestamp = 4; +} +``` + +For more information, please refer to [ADR-007](https://github.com/cosmos/ibc-go/blob/02-client-refactor-beta1/docs/architecture/adr-007-solomachine-signbytes.md). 
+ +### IBC module constants + +IBC module constants have been moved from the `host` package to the `exported` package. Any usages will need to be updated. + +```diff expandable +import ( + / ... +- host "github.com/cosmos/ibc-go/v7/modules/core/24-host" ++ ibcexported "github.com/cosmos/ibc-go/v7/modules/core/exported" + / ... +) + +- host.ModuleName ++ ibcexported.ModuleName + +- host.StoreKey ++ ibcexported.StoreKey + +- host.QuerierRoute ++ ibcexported.QuerierRoute + +- host.RouterKey ++ ibcexported.RouterKey +``` + +## Upgrading to Cosmos SDK 0.47 + +The following should be considered as complementary to [Cosmos SDK v0.47 UPGRADING.md](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc2/UPGRADING.md). + +### Protobuf + +Protobuf code generation, linting and formatting have been updated to leverage the `ghcr.io/cosmos/proto-builder:0.11.5` docker container. IBC protobuf definitions are now packaged and published to [buf.build/cosmos/ibc](https://buf.build/cosmos/ibc) via CI workflows. The `third_party/proto` directory has been removed in favour of dependency management using [buf.build](https://docs.buf.build/introduction). + +### App modules + +Legacy APIs of the `AppModule` interface have been removed from ibc-go modules. For example, for + +```diff expandable +- / Route implements the AppModule interface +- func (am AppModule) Route() sdk.Route { +- return sdk.Route{} +- } +- +- / QuerierRoute implements the AppModule interface +- func (AppModule) QuerierRoute() string { +- return types.QuerierRoute +- } +- +- / LegacyQuerierHandler implements the AppModule interface +- func (am AppModule) LegacyQuerierHandler(*codec.LegacyAmino) sdk.Querier { +- return nil +- } +- +- / ProposalContents doesn't return any content functions for governance proposals. 
+- func (AppModule) ProposalContents(_ module.SimulationState) []simtypes.WeightedProposalContent { +- return nil +- } +``` + +### Imports + +Imports for ics23 have been updated as the repository has been migrated from confio to cosmos. + +```diff +import ( + / ... +- ics23 "github.com/confio/ics23/go" ++ ics23 "github.com/cosmos/ics23/go" + / ... +) +``` + +Imports for gogoproto have been updated. + +```diff +import ( + / ... +- "github.com/gogo/protobuf/proto" ++ "github.com/cosmos/gogoproto/proto" + / ... +) +``` diff --git a/docs/ibc/v8.5.x/migrations/v7-to-v7_1.mdx b/docs/ibc/v8.5.x/migrations/v7-to-v7_1.mdx new file mode 100644 index 00000000..735ce786 --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v7-to-v7_1.mdx @@ -0,0 +1,66 @@ +--- +title: IBC-Go v7 to v7.1 +description: This guide provides instructions for migrating to version v7.1.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v7.1.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v7 to v7.1](#migrating-from-v7-to-v71) + * [Chains](#chains) + * [IBC Apps](#ibc-apps) + * [Relayers](#relayers) + * [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +In the previous release of ibc-go, the localhost `v1` light client module was deprecated and removed. The ibc-go `v7.1.0` release introduces `v2` of the 09-localhost light client module. + +An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v7.2.0/modules/core/module.go#L127-L145) is configured in the core IBC module to set the localhost `ClientState` and sentinel `ConnectionEnd` in state. + +In order to use the 09-localhost client, chains must update the `AllowedClients` parameter in the 02-client submodule of core IBC.
This can be configured directly in the application upgrade handler or alternatively updated via the legacy governance parameter change proposal. +We **strongly** recommend that chains perform this action so that intra-ledger communication can be carried out using the familiar IBC interfaces. + +See the upgrade handler code sample provided below or [follow this link](https://github.com/cosmos/ibc-go/blob/v7.2.0/testing/simapp/upgrades/upgrades.go#L85) for the upgrade handler used by the ibc-go simapp. + +```go expandable +func CreateV7LocalhostUpgradeHandler( + mm *module.Manager, + configurator module.Configurator, + clientKeeper clientkeeper.Keeper, +) upgradetypes.UpgradeHandler { + return func(ctx sdk.Context, _ upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) { + / explicitly update the IBC 02-client params, adding the localhost client type + params := clientKeeper.GetParams(ctx) + params.AllowedClients = append(params.AllowedClients, exported.Localhost) + clientKeeper.SetParams(ctx, params) + return mm.RunMigrations(ctx, configurator, vm) + } +} +``` + +### Transfer migration + +An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v7.2.0/modules/apps/transfer/module.go#L111-L113) is configured in the transfer module to set the total amount in escrow for all denominations of coins that have been sent out. For each denomination, a state entry is added with the total amount of coins in escrow regardless of the channel from which they were transferred. + +## IBC Apps + +* No relevant changes were made in this release. + +## Relayers + +The event attribute `packet_connection` (`connectiontypes.AttributeKeyConnection`) has been deprecated. +Please use the `connection_id` attribute (`connectiontypes.AttributeKeyConnectionID`), which is emitted by all channel events. +Only send packet, receive packet, write acknowledgement, and acknowledge packet events used `packet_connection` previously.
+ +## IBC Light Clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v8.5.x/migrations/v7-to-v8.mdx b/docs/ibc/v8.5.x/migrations/v7-to-v8.mdx new file mode 100644 index 00000000..c0e19915 --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v7-to-v8.mdx @@ -0,0 +1,217 @@ +--- +title: IBC-Go v7 to v8 +description: This guide provides instructions for migrating to version v8.0.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v8.0.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v7 to v8](#migrating-from-v7-to-v8) + * [Chains](#chains) + * [Cosmos SDK v0.50 upgrade](#cosmos-sdk-v050-upgrade) + * [Authority](#authority) + * [Testing package](#testing-package) + * [Params migration](#params-migration) + * [Governance V1 migration](#governance-v1-migration) + * [Transfer migration](#transfer-migration) + * [IBC Apps](#ibc-apps) + * [ICS20 - Transfer](#ics20---transfer) + * [ICS27 - Interchain Accounts](#ics27---interchain-accounts) + * [Relayers](#relayers) + * [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +The type of the `PortKeeper` field of the IBC keeper has been changed to `*portkeeper.Keeper`: + +```diff expandable +/ Keeper defines each ICS keeper for IBC +type Keeper struct { + / implements gRPC QueryServer interface + types.QueryServer + + cdc codec.BinaryCodec + + ClientKeeper clientkeeper.Keeper + ConnectionKeeper connectionkeeper.Keeper + ChannelKeeper channelkeeper.Keeper +- PortKeeper portkeeper.Keeper ++ PortKeeper *portkeeper.Keeper + Router *porttypes.Router + + authority string +} +``` + +See [this PR](https://github.com/cosmos/ibc-go/pull/4703/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a) for the changes required in `app.go`.
+ +An extra parameter `totalEscrowed` of type `sdk.Coins` has been added to the transfer module's [`NewGenesisState` function](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/types/genesis.go#L10). This parameter specifies the total amount of tokens that are in the module's escrow accounts. + +### Cosmos SDK v0.50 upgrade + +Version `v8.0.0` of ibc-go upgrades to Cosmos SDK v0.50. Please follow the [Cosmos SDK v0.50 upgrading guide](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/UPGRADING.md) to account for its API breaking changes. + +### Authority + +An authority identifier (e.g. an address) needs to be passed to the `NewKeeper` functions of the following keepers: + +* You must pass the `authority` to the ica/host keeper (implemented in [#3520](https://github.com/cosmos/ibc-go/pull/3520)). See [diff](https://github.com/cosmos/ibc-go/pull/3520/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff +/ app.go + +/ ICA Host keeper +app.ICAHostKeeper = icahostkeeper.NewKeeper( + appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName), + app.IBCFeeKeeper, / use ics29 fee as ics4Wrapper in middleware stack + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(), ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You must pass the `authority` to the ica/controller keeper (implemented in [#3590](https://github.com/cosmos/ibc-go/pull/3590)).
See [diff](https://github.com/cosmos/ibc-go/pull/3590/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff +/ app.go + +/ ICA Controller keeper +app.ICAControllerKeeper = icacontrollerkeeper.NewKeeper( + appCodec, keys[icacontrollertypes.StoreKey], app.GetSubspace(icacontrollertypes.SubModuleName), + app.IBCFeeKeeper, / use ics29 fee as ics4Wrapper in middleware stack + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + scopedICAControllerKeeper, app.MsgServiceRouter(), ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You must pass the `authority` to the ibctransfer keeper (implemented in [#3553](https://github.com/cosmos/ibc-go/pull/3553)). See [diff](https://github.com/cosmos/ibc-go/pull/3553/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff expandable +/ app.go + +/ Create Transfer Keeper and pass IBCFeeKeeper as expected Channel and PortKeeper +/ since fee middleware will wrap the IBCKeeper for underlying application. +app.TransferKeeper = ibctransferkeeper.NewKeeper( + appCodec, keys[ibctransfertypes.StoreKey], app.GetSubspace(ibctransfertypes.ModuleName), + app.IBCFeeKeeper, / ISC4 Wrapper: fee IBC middleware + app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper, + app.AccountKeeper, app.BankKeeper, scopedTransferKeeper, ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +* You should pass the `authority` to the IBC keeper (implemented in [#3640](https://github.com/cosmos/ibc-go/pull/3640) and [#3650](https://github.com/cosmos/ibc-go/pull/3650)). 
See [diff](https://github.com/cosmos/ibc-go/pull/3640/files#diff-d18972debee5e64f16e40807b2ae112ddbe609504a93ea5e1c80a5d489c3a08a): + +```diff expandable +/ app.go + +/ IBC Keepers +app.IBCKeeper = ibckeeper.NewKeeper( + appCodec, + keys[ibcexported.StoreKey], + app.GetSubspace(ibcexported.ModuleName), + app.StakingKeeper, + app.UpgradeKeeper, + scopedIBCKeeper, ++ authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +The authority determines the transaction signer allowed to execute certain messages (e.g. `MsgUpdateParams`). + +### Testing package + +* The function `SetupWithGenesisAccounts` has been removed. +* The function [`RelayPacketWithResults`](https://github.com/cosmos/ibc-go/blob/v8.0.0/testing/path.go#L66) has been added. This function returns the result of the packet receive transaction, the acknowledgement written on the receiving chain, and an error if a relay step fails or the packet commitment does not exist on either chain. + +### Params migration + +Params are now self-managed in the following submodules: + +* ica/controller [#3590](https://github.com/cosmos/ibc-go/pull/3590) +* ica/host [#3520](https://github.com/cosmos/ibc-go/pull/3520) +* ibc/connection [#3650](https://github.com/cosmos/ibc-go/pull/3650) +* ibc/client [#3640](https://github.com/cosmos/ibc-go/pull/3640) +* ibc/transfer [#3553](https://github.com/cosmos/ibc-go/pull/3553) + +Each module has a corresponding `MsgUpdateParams` message with a `Params` field which can be specified in full to update the module's `Params`. + +Legacy params subspaces must still be initialised in `app.go` in order to successfully migrate from `x/params` to the new self-contained approach. See [this reference](https://github.com/cosmos/ibc-go/blob/v8.0.0/testing/simapp/app.go#L1007-L1012). + +For new chains which do not rely on migration of parameters from `x/params`, an expected interface has been added for each module.
This allows chain developers to provide `nil` as the `legacySubspace` argument to `NewKeeper` functions. + +### Governance V1 migration + +Proposals have been migrated to [gov v1 messages](https://docs.cosmos.network/v0.50/modules/gov#messages) (see [#4620](https://github.com/cosmos/ibc-go/pull/4620)). The proposal `ClientUpdateProposal` has been deprecated and [`MsgRecoverClient`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L121-L134) should be used instead. Likewise, the proposal `UpgradeProposal` has been deprecated and [`MsgIBCSoftwareUpgrade`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/client/v1/tx.proto#L139-L154) should be used instead. Both proposals will be removed in the next major release. + +`MsgRecoverClient` and `MsgIBCSoftwareUpgrade` can only be executed if the signer is the authority designated at the time of instantiating the IBC keeper, so please make sure that the correct authority is provided to the IBC keeper. + +Remove the `UpgradeProposalHandler` and `UpdateClientProposalHandler` from the `BasicModuleManager`: + +```diff expandable +app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +- ibcclientclient.UpdateClientProposalHandler, +- ibcclientclient.UpgradeProposalHandler, + }, + ), +}) +``` + +In-flight legacy recover client proposals (i.e. `ClientUpdateProposal`) will still be supported in v8, but chains should use `MsgRecoverClient` afterwards to avoid in-flight client recovery failing when upgrading to v9. See [this issue](https://github.com/cosmos/ibc-go/issues/4721) for more information.
+ +Please note that ibc-go offers facilities to test an ibc-go upgrade: + +* All e2e tests of the repository can be [run with custom Docker chain images](https://github.com/cosmos/ibc-go/blob/c5bac5e03a0eae449b9efe0d312258115c1a1e85/e2e/README.md#running-tests-with-custom-images). +* An [importable workflow](https://github.com/cosmos/ibc-go/blob/c5bac5e03a0eae449b9efe0d312258115c1a1e85/e2e/README.md#importable-workflow) can be used from any other repository to test chain upgrades. + +### Transfer migration + +An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/module.go#L136) is configured in the transfer module to set the [denomination metadata](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/proto/cosmos/bank/v1beta1/bank.proto#L96-L125) for the IBC denominations of all vouchers minted by the transfer module. + +## IBC Apps + +### ICS20 - Transfer + +* The function `IsBound` has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/transfer/keeper/keeper.go#L98) and made unexported. + +### ICS27 - Interchain Accounts + +* Functions [`SerializeCosmosTx`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/codec.go#L32) and [`DeserializeCosmosTx`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/codec.go#L76) now accept an extra parameter `encoding` of type `string` that specifies the format in which the transaction messages are marshaled. Both [protobuf and proto3 JSON formats](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/types/metadata.go#L14-L17) are supported. +* The function `IsBound` of the controller submodule has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/controller/keeper/keeper.go#L111) and made unexported.
+* The function `IsBound` of the host submodule has been renamed to [`hasCapability`](https://github.com/cosmos/ibc-go/blob/v8.0.0/modules/apps/27-interchain-accounts/host/keeper/keeper.go#L94) and made unexported. + +## Relayers + +* Getter functions in `MsgChannelOpenInitResponse`, `MsgChannelOpenTryResponse`, `MsgTransferResponse`, `MsgRegisterInterchainAccountResponse` and `MsgSendTxResponse` have been removed. The fields can be accessed directly. +* `channeltypes.EventTypeTimeoutPacketOnClose` (where `channeltypes` is an import alias for `"github.com/cosmos/ibc-go/v8/modules/core/04-channel/types"`) has been removed, since core IBC does not emit any event with this key. +* The attribute with key `counterparty_connection_id` has been removed from the event with key `connectiontypes.EventTypeConnectionOpenInit` (where `connectiontypes` is an import alias for `"github.com/cosmos/ibc-go/v8/modules/core/03-connection/types"`), and the attribute with key `counterparty_channel_id` has been removed from the event with key `channeltypes.EventTypeChannelOpenInit`, since both the counterparty connection ID and the counterparty channel ID are empty on `ConnectionOpenInit` and `ChannelOpenInit` respectively.
+* As part of the migration to [governance V1 messages](#governance-v1-migration), the following changes in events have been made: + +```diff expandable +/ IBC client events vars +var ( + EventTypeCreateClient = "create_client" + EventTypeUpdateClient = "update_client" + EventTypeUpgradeClient = "upgrade_client" + EventTypeSubmitMisbehaviour = "client_misbehaviour" +- EventTypeUpdateClientProposal = "update_client_proposal" +- EventTypeUpgradeClientProposal = "upgrade_client_proposal" ++ EventTypeRecoverClient = "recover_client" ++ EventTypeScheduleIBCSoftwareUpgrade = "schedule_ibc_software_upgrade" + EventTypeUpgradeChain = "upgrade_chain" +) +``` + +## IBC Light Clients + +* Functions `Pretty` and `String` of type `MerklePath` have been [removed](https://github.com/cosmos/ibc-go/pull/4459/files#diff-dd94ec1dde9b047c0cdfba204e30dad74a81de202e3b09ac5b42f493153811af). diff --git a/docs/ibc/v8.5.x/migrations/v7_2-to-v7_3.mdx b/docs/ibc/v8.5.x/migrations/v7_2-to-v7_3.mdx new file mode 100644 index 00000000..3d2e729c --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v7_2-to-v7_3.mdx @@ -0,0 +1,46 @@ +--- +title: IBC-Go v7.2 to v7.3 +description: This guide provides instructions for migrating to version v7.3.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v7.3.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v7.2 to v7.3](#migrating-from-v72-to-v73) + * [Chains](#chains) + * [IBC Apps](#ibc-apps) + * [Relayers](#relayers) + * [IBC Light Clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +* No relevant changes were made in this release. + +## IBC Apps + +A set of interfaces has been added that IBC applications may optionally implement.
Developers interested in integrating their applications with the [callbacks middleware](/docs/ibc/v8.5.x/middleware/callbacks/overview) should implement these interfaces so that the callbacks middleware can retrieve the desired callback addresses on the source and destination chains and execute actions on packet lifecycle events. The interfaces are [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/05-port/types/module.go#L142-L147), [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/exported/packet.go#L43-L52) and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/core/exported/packet.go#L36-L41). + +Sample implementations are available for reference. For `transfer`: + +* [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/ibc_module.go#L303-L313), +* [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/types/packet.go#L85-L105) +* and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/transfer/types/packet.go#L74-L83). + +For `27-interchain-accounts`: + +* [`PacketDataUnmarshaler`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/controller/ibc_middleware.go#L258-L268), +* [`PacketDataProvider`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/types/packet.go#L94-L114) +* and [`PacketData`](https://github.com/cosmos/ibc-go/blob/v7.3.0-rc1/modules/apps/27-interchain-accounts/types/packet.go#L78-L92). + +## Relayers + +* No relevant changes were made in this release. + +## IBC Light Clients + +### 06-solomachine + +Solo machines are now expected to sign data on a path that 1) does not include a connection prefix (e.g. `ibc`) and 2) does not escape any characters. See PR [#4429](https://github.com/cosmos/ibc-go/pull/4429) for more details.
We recommend **NOT** using the solo machine light client from versions lower than v7.3.0. diff --git a/docs/ibc/v8.5.x/migrations/v8-to-v8_1.mdx b/docs/ibc/v8.5.x/migrations/v8-to-v8_1.mdx new file mode 100644 index 00000000..326d21af --- /dev/null +++ b/docs/ibc/v8.5.x/migrations/v8-to-v8_1.mdx @@ -0,0 +1,38 @@ +--- +title: IBC-Go v8 to v8.1 +description: This guide provides instructions for migrating to version v8.1.0 of ibc-go. +--- + +This guide provides instructions for migrating to version `v8.1.0` of ibc-go. + +There are four sections based on the four potential user groups of this document: + +* [Migrating from v8 to v8.1](#migrating-from-v8-to-v81) + * [Chains](#chains) + * [IBC apps](#ibc-apps) + * [Relayers](#relayers) + * [IBC light clients](#ibc-light-clients) + +**Note:** ibc-go supports golang semantic versioning and therefore all imports must be updated on major version releases. + +## Chains + +### `04-channel` params migration + +Self-managed [params](https://github.com/cosmos/ibc-go/blob/v8.1.0/proto/ibc/core/channel/v1/channel.proto#L183-L187) have been added for the `04-channel` module. The params include the `upgrade_timeout` that is used in channel upgradability to specify the interval of time during which the counterparty chain must flush all in-flight packets on its end and move to `FLUSH_COMPLETE` state (see [Channel Upgrades](/docs/ibc/v8.5.x/ibc/channel-upgrades) for more information). An [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.1.0/modules/core/module.go#L162-L166) is configured in the `04-channel` module that sets the default params (with a default upgrade timeout of 10 minutes). The module has a corresponding [`MsgUpdateParams` message](https://github.com/cosmos/ibc-go/blob/v8.1.0/proto/ibc/core/channel/v1/tx.proto#L435-L447) with a `Params` field which can be specified in full to update the module's `Params`.
+ +### Fee migration + +In ibc-go v8.1.0 an improved, more efficient escrow calculation of fees for packet incentivisation has been introduced (see [this issue](https://github.com/cosmos/ibc-go/issues/5509) for more information). Before v8.1.0, the amount escrowed was `(RecvFee + AckFee + TimeoutFee)`; from ibc-go v8.1.0, the calculation is changed to `Max(RecvFee + AckFee, TimeoutFee)`. In order to guarantee that the correct amount of fees are refunded for packets that are in-flight during the upgrade to ibc-go v8.1.0, an [automatic migration handler](https://github.com/cosmos/ibc-go/blob/v8.1.0/modules/apps/29-fee/module.go#L113-L115) is configured in the `29-fee` module to refund the leftover fees (i.e. `(RecvFee + AckFee + TimeoutFee) - Max(RecvFee + AckFee, TimeoutFee)`) that otherwise would not be refunded when the packet lifecycle completes and the new calculation is used. + +## IBC apps + +* No relevant changes were made in this release. + +## Relayers + +* No relevant changes were made in this release. + +## IBC light clients + +* No relevant changes were made in this release. diff --git a/docs/ibc/v8.5.x/security-audits.mdx b/docs/ibc/v8.5.x/security-audits.mdx new file mode 100644 index 00000000..f23062e9 --- /dev/null +++ b/docs/ibc/v8.5.x/security-audits.mdx @@ -0,0 +1,153 @@ +--- +title: "Security Audits" +description: "Security audit reports for IBC-Go v8 features and components" +--- + +## Overview + +IBC-Go v8 includes several major features that have undergone comprehensive security audits by leading blockchain security firms. These audits help ensure the security and reliability of critical IBC functionality including channel upgrades, enhanced token transfers, WASM light clients, and interchain accounts.
+ +## Feature Audit Reports + +### Channel Upgrades (ICS-04) + +**Auditor**: Atredis Partners +**Completion Date**: March 2024 +**Version**: Report v1.1 +**Pages**: 38 + +Channel upgrades allow existing IBC channels to modify their parameters without closing and reopening connections. This audit covers the complete upgrade handshake mechanism, state transitions, and security considerations. + + + Security assessment of channel upgrade functionality (38 pages) + + +### ICS-20 Token Transfer v2 + +**Auditor**: Atredis Partners +**Completion Date**: February 2024 +**Pages**: 41 + +The ICS-20 v2 token transfer module introduces multi-denomination support, enhanced memo fields, forwarding middleware, and path unwinding capabilities. This audit evaluates all new features and their security implications. + + + Assessment of enhanced token transfer features (41 pages) + + +### 08-WASM Light Client + +The WASM light client module enables custom light client implementations via WebAssembly, providing flexibility for supporting diverse blockchain consensus mechanisms. + +#### Halborn Security Audit + +**Auditor**: Halborn +**Completion Date**: February 2023 +**Pages**: 55 + + + Comprehensive security assessment of WASM light client (55 pages) + + +#### Technical Review + +**Reviewer**: Ethan Frey +**Type**: Architecture and Implementation Review + + + Expert review of WASM client design and implementation + + +### Interchain Accounts (ICS-27) + +**Auditor**: Trail of Bits +**Pages**: 42 + +Interchain Accounts enable cross-chain account control, allowing chains to securely control accounts on other IBC-enabled chains. This audit covers both controller and host implementations. 
+ + + Trail of Bits security assessment (42 pages) + + +## Audit Coverage + +These audits comprehensively evaluate: + +### Security Architecture + +- Threat modeling and attack vectors +- Trust boundaries and assumptions +- Cryptographic implementations +- State machine correctness + +### Code Quality + +- Memory safety and resource management +- Error handling and edge cases +- Input validation and sanitization +- Concurrency and race conditions + +### Protocol Compliance + +- IBC specification adherence +- Backwards compatibility +- Upgrade path safety +- Interoperability guarantees + +## Key Findings and Mitigations + +All critical and high-severity findings identified in these audits have been addressed in v8.5.x. The audit reports include: + +- Detailed vulnerability descriptions +- Risk assessments and impact analysis +- Recommended mitigations +- Implementation responses + +## Best Practices for Developers + +When working with IBC-Go v8.5.x: + +1. **Review Feature Audits**: Consult relevant audit reports before implementing features +2. **Follow Security Guidelines**: Implement the security patterns recommended in audits +3. **Validate Inputs**: Always validate cross-chain messages and parameters +4. **Handle Errors Gracefully**: Implement robust error handling for IBC operations +5. **Test Edge Cases**: Include security test cases based on audit findings + +## Ongoing Security Efforts + +The IBC-Go team continuously improves security through: + +- Regular security audits for new features +- Bug bounty programs +- Security advisory system +- Collaboration with security researchers +- Rapid patching of vulnerabilities + +## Reporting Security Issues + +To report security vulnerabilities, please follow the [IBC-Go Security Policy](https://github.com/cosmos/ibc-go/security/policy) for responsible disclosure. 
+ +## Additional Resources + +- [IBC Protocol Specification](https://github.com/cosmos/ibc) +- [IBC-Go v8.5.x Documentation](/docs/ibc/v8.5.x) +- [Migration Guides](/docs/ibc/v8.5.x/migrations/v7-to-v8) diff --git a/docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-begin_block-dfbb5abb42a34744e7fe08df2f8d01e2.png b/docs/sdk/images/learn/advanced/baseapp_state-begin_block.png similarity index 100% rename from docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-begin_block-dfbb5abb42a34744e7fe08df2f8d01e2.png rename to docs/sdk/images/learn/advanced/baseapp_state-begin_block.png diff --git a/docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png b/docs/sdk/images/learn/advanced/baseapp_state-checktx.png similarity index 100% rename from docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png rename to docs/sdk/images/learn/advanced/baseapp_state-checktx.png diff --git a/docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png b/docs/sdk/images/learn/advanced/baseapp_state-commit.png similarity index 100% rename from docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png rename to docs/sdk/images/learn/advanced/baseapp_state-commit.png diff --git a/docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-deliver_tx-5999f54501aa641d0c0a93279561f956.png b/docs/sdk/images/learn/advanced/baseapp_state-deliver_tx.png similarity index 100% rename from docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-deliver_tx-5999f54501aa641d0c0a93279561f956.png rename to docs/sdk/images/learn/advanced/baseapp_state-deliver_tx.png diff --git a/docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png b/docs/sdk/images/learn/advanced/baseapp_state-initchain.png similarity 
index 100% rename from docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png rename to docs/sdk/images/learn/advanced/baseapp_state-initchain.png diff --git a/docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png b/docs/sdk/images/learn/advanced/baseapp_state-prepareproposal.png similarity index 100% rename from docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png rename to docs/sdk/images/learn/advanced/baseapp_state-prepareproposal.png diff --git a/docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png b/docs/sdk/images/learn/advanced/baseapp_state-processproposal.png similarity index 100% rename from docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png rename to docs/sdk/images/learn/advanced/baseapp_state-processproposal.png diff --git a/docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png b/docs/sdk/images/learn/advanced/baseapp_state.png similarity index 100% rename from docs/sdk/images/v0.47/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png rename to docs/sdk/images/learn/advanced/baseapp_state.png diff --git a/docs/sdk/images/learn/advanced/blockprocessing-1.png b/docs/sdk/images/learn/advanced/blockprocessing-1.png new file mode 100644 index 00000000..d4167f33 Binary files /dev/null and b/docs/sdk/images/learn/advanced/blockprocessing-1.png differ diff --git a/docs/sdk/images/learn/intro/main-components.png b/docs/sdk/images/learn/intro/main-components.png new file mode 100644 index 00000000..fa82eb9b Binary files /dev/null and b/docs/sdk/images/learn/intro/main-components.png differ diff --git a/docs/sdk/images/static/img/acceptance-tests.png 
b/docs/sdk/images/static/img/acceptance-tests.png new file mode 100644 index 00000000..b85d7da7 Binary files /dev/null and b/docs/sdk/images/static/img/acceptance-tests.png differ diff --git a/docs/sdk/images/static/img/android-chrome-192x192.png b/docs/sdk/images/static/img/android-chrome-192x192.png new file mode 100644 index 00000000..6d04cf4c Binary files /dev/null and b/docs/sdk/images/static/img/android-chrome-192x192.png differ diff --git a/docs/sdk/images/static/img/android-chrome-256x256.png b/docs/sdk/images/static/img/android-chrome-256x256.png new file mode 100644 index 00000000..1c30cc02 Binary files /dev/null and b/docs/sdk/images/static/img/android-chrome-256x256.png differ diff --git a/docs/sdk/images/static/img/apple-touch-icon.png b/docs/sdk/images/static/img/apple-touch-icon.png new file mode 100644 index 00000000..397e21af Binary files /dev/null and b/docs/sdk/images/static/img/apple-touch-icon.png differ diff --git a/docs/sdk/images/static/img/banner.jpg b/docs/sdk/images/static/img/banner.jpg new file mode 100755 index 00000000..9d821931 Binary files /dev/null and b/docs/sdk/images/static/img/banner.jpg differ diff --git a/docs/sdk/images/build-module.svg b/docs/sdk/images/static/img/cube.svg similarity index 100% rename from docs/sdk/images/build-module.svg rename to docs/sdk/images/static/img/cube.svg diff --git a/docs/sdk/images/static/img/ecosystem.svg b/docs/sdk/images/static/img/ecosystem.svg new file mode 100644 index 00000000..b80bb668 --- /dev/null +++ b/docs/sdk/images/static/img/ecosystem.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/sdk/images/static/img/favicon-16x16.png b/docs/sdk/images/static/img/favicon-16x16.png new file mode 100644 index 00000000..5f15c3b0 Binary files /dev/null and b/docs/sdk/images/static/img/favicon-16x16.png differ diff --git a/docs/sdk/images/static/img/favicon-32x32.png b/docs/sdk/images/static/img/favicon-32x32.png new file mode 100644 index 00000000..9433c807 Binary files 
/dev/null and b/docs/sdk/images/static/img/favicon-32x32.png differ diff --git a/docs/sdk/images/static/img/favicon-dark.svg b/docs/sdk/images/static/img/favicon-dark.svg new file mode 100644 index 00000000..a4f0fac9 --- /dev/null +++ b/docs/sdk/images/static/img/favicon-dark.svg @@ -0,0 +1,15 @@ + + + + + + + + + + + \ No newline at end of file diff --git a/docs/sdk/images/static/img/favicon.svg b/docs/sdk/images/static/img/favicon.svg new file mode 100644 index 00000000..dbefbad9 --- /dev/null +++ b/docs/sdk/images/static/img/favicon.svg @@ -0,0 +1,21 @@ + + + + + + + + + + + diff --git a/docs/sdk/images/static/img/group.svg b/docs/sdk/images/static/img/group.svg new file mode 100644 index 00000000..df77603d --- /dev/null +++ b/docs/sdk/images/static/img/group.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/sdk/images/static/img/ico-chevron.svg b/docs/sdk/images/static/img/ico-chevron.svg new file mode 100644 index 00000000..3f8e8fac --- /dev/null +++ b/docs/sdk/images/static/img/ico-chevron.svg @@ -0,0 +1,3 @@ + + + diff --git a/docs/sdk/images/static/img/ico-github.svg b/docs/sdk/images/static/img/ico-github.svg new file mode 100644 index 00000000..a74bee5a --- /dev/null +++ b/docs/sdk/images/static/img/ico-github.svg @@ -0,0 +1,3 @@ + + + diff --git a/docs/sdk/images/static/img/innovation.svg b/docs/sdk/images/static/img/innovation.svg new file mode 100644 index 00000000..788a21ce --- /dev/null +++ b/docs/sdk/images/static/img/innovation.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/sdk/images/static/img/investigation.svg b/docs/sdk/images/static/img/investigation.svg new file mode 100644 index 00000000..2f8568ac --- /dev/null +++ b/docs/sdk/images/static/img/investigation.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/sdk/images/build-chain.svg b/docs/sdk/images/static/img/link.svg similarity index 100% rename from docs/sdk/images/build-chain.svg rename to docs/sdk/images/static/img/link.svg diff --git 
a/docs/sdk/images/static/img/logo-bw.svg b/docs/sdk/images/static/img/logo-bw.svg new file mode 100644 index 00000000..f2575260 --- /dev/null +++ b/docs/sdk/images/static/img/logo-bw.svg @@ -0,0 +1,8 @@ + + + + + + + + diff --git a/docs/sdk/images/static/img/logo-sdk-white.svg b/docs/sdk/images/static/img/logo-sdk-white.svg new file mode 100644 index 00000000..86f2cbe0 --- /dev/null +++ b/docs/sdk/images/static/img/logo-sdk-white.svg @@ -0,0 +1,10 @@ + + + + + + + + + + diff --git a/docs/sdk/images/static/img/logo-sdk.svg b/docs/sdk/images/static/img/logo-sdk.svg new file mode 100644 index 00000000..444eff2a --- /dev/null +++ b/docs/sdk/images/static/img/logo-sdk.svg @@ -0,0 +1,10 @@ + + + + + + + + + + \ No newline at end of file diff --git a/docs/sdk/images/static/img/logo.svg b/docs/sdk/images/static/img/logo.svg new file mode 100644 index 00000000..95ca6d30 --- /dev/null +++ b/docs/sdk/images/static/img/logo.svg @@ -0,0 +1,18 @@ + + + + + + + + + + + + + + + + + + diff --git a/docs/sdk/images/static/img/node.svg b/docs/sdk/images/static/img/node.svg new file mode 100644 index 00000000..5c55577d --- /dev/null +++ b/docs/sdk/images/static/img/node.svg @@ -0,0 +1,2 @@ + + diff --git a/docs/sdk/images/static/img/public-service.svg b/docs/sdk/images/static/img/public-service.svg new file mode 100644 index 00000000..092cf03e --- /dev/null +++ b/docs/sdk/images/static/img/public-service.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/sdk/images/static/img/setting.svg b/docs/sdk/images/static/img/setting.svg new file mode 100644 index 00000000..d4ba412d --- /dev/null +++ b/docs/sdk/images/static/img/setting.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/sdk/images/static/img/spirograph-white.svg b/docs/sdk/images/static/img/spirograph-white.svg new file mode 100644 index 00000000..f4957303 --- /dev/null +++ b/docs/sdk/images/static/img/spirograph-white.svg @@ -0,0 +1,5 @@ + + + diff --git 
a/docs/sdk/images/static/uml/svg/begin_redelegation_sequence.svg b/docs/sdk/images/static/uml/svg/begin_redelegation_sequence.svg new file mode 100644 index 00000000..ac20246d --- /dev/null +++ b/docs/sdk/images/static/uml/svg/begin_redelegation_sequence.svg @@ -0,0 +1,106 @@ +RedelegationmsgServermsgServerkeeperkeeperstorestoreBeginRedelegation(delAddr, valSrcAddr, valDstAddr, sharesAmount)get number of sharewIf the delegator has more shares than the total shares in the validator(due to rounding errors), then just withdraw the max number of shares.check the redelegation uses correct denomalt[valSrcAddr == valDstAddr]erroralt[transitive redelegation]erroralt[already has max redelegations]errorthis is the number of redelegations for a specific (del, valSrc, valDst) tripledefault : 7Unbond(del, valSrc) returns returnAmountSee unbonding diagramalt[returnAmount is zero]errorDelegate(del, returnAmount, status := valSrc.status, valDst, subtractAccount := false)See delegation diagramalt[validator is unbonded]current timealt[unbonding not complete, or just started]create redelegation objectinsert redelegation in queue, to be processed at the appropriate timecompletion time of the redelegationemit event: delegator, valSrc, valSrc,sharesAmount, completionTime \ No newline at end of file diff --git a/docs/sdk/images/static/uml/svg/delegation_sequence.svg b/docs/sdk/images/static/uml/svg/delegation_sequence.svg new file mode 100644 index 00000000..9320a9d5 --- /dev/null +++ b/docs/sdk/images/static/uml/svg/delegation_sequence.svg @@ -0,0 +1,192 @@ +Delegating (currently undelegated funds delegator)msgServer (staking)msgServer (staking)keeper (staking)keeper (staking)validatorvalidatorkeeper.bankKeeperkeeper.bankKeepervestingAccountvestingAccountctx.EventManagerctx.EventManagerstorestoreDelegate(Context, DelegatorAddress, Amount, Validator, tokenSrc := Unbonded)alt[exchange rate is invalid (tokens in validator is 0)]erroralt[perform a new delegation]delegation := create 
delegation objectBeforeDelegationCreated hookCalls IncrementValidatorPeriod (Used to calculate distribution) in keeper/validator.go[delegation exists, more tokens being added]BeforeDelegationModified hookwithdraw current delegation rewards (and increment period)alt[delegating from an account (subtractTokens == true)]DelegateCoinsFromAccountToModuleDelegateCoinsFromAccountToModule functionDelegateCoinsFromAccountToModuleDelegateCoinsDelegateCoins functionCheck the delegator has enough balances of all tokens delegatedTrack delegation (register that it exists to keep track of it)alt[validator is currently bonded]Transfer tokens from delegator to BondedTokensPool.[validator is currently unbonded or unbonding]Transfer tokens from delegator to NotBondedTokensPool.trackDelegation functiontrackDelegationalt[delegator is a vesting account]keep track of this delegationnil (success)[moving tokens between pools (subtractTokens == false)]alt[delegator tokens are not bonded but validator is bonded]SendCoinsFromModuleToModule(notBondedPool, bondedPool, coins)[delegator tokens are bonded but validator is not bonded]SendCoinsFromModuleToModule(bondedPool, notBondedPool, coins)SendCoins functionSendCoinsEmit TransferEvent(to, from, amount)alt[amount of spendable (balance - locked) coins too low]errorsubtract balance from senderadd balance to recipientAddTokensFromDelcalculate number of shares to issueIf there are no shares (validator being created) then 1 token = 1 share.If there are already shares, thenadded shares = (added tokens amount) * (current validator shares) / (current validator tokens)add delegated tokens to validatorvalidator, addedSharesupdate validator statecalculate new validator's powerNumber of tokens divided by PowerReduction (default: 1,000,000,000,000,000,000 = 10^18)alt[validator is not jailed]update validator's power in power indexthe power index has entries shaped as 35 || power || address.This makes the validators sorted by power, high to 
low.AfterDelegationModified hookCalls initializeDelegationStore the previous periodCalculate the number of tokens from shares(shares the delegator has) * (tokens in delegation object)/(total tokens delegated to the validator)Store delegation starting info.newShares (ignored by Delegate function)Emit event: Delegation(ValidatorAddress)Emit event: Message(DelegatorAddress)telemetry(Amount, Denom) \ No newline at end of file diff --git a/docs/sdk/images/v0.47/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg b/docs/sdk/images/static/uml/svg/keeper_dependencies.svg similarity index 100% rename from docs/sdk/images/v0.47/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg rename to docs/sdk/images/static/uml/svg/keeper_dependencies.svg diff --git a/docs/sdk/images/static/uml/svg/transaction_flow.svg b/docs/sdk/images/static/uml/svg/transaction_flow.svg new file mode 100644 index 00000000..93bb940a --- /dev/null +++ b/docs/sdk/images/static/uml/svg/transaction_flow.svg @@ -0,0 +1,48 @@ +UserUserbaseAppbaseApprouterrouterhandlerhandlermsgServermsgServerkeeperkeeperContext.EventManagerContext.EventManagerTransaction Type<Tx>Route(ctx, msgRoute)handlerMsg<Tx>(Context, Msg(...))<Tx>(Context, Msg)alt[addresses invalid, denominations wrong, etc.]errorperform action, update contextresults, error codeEmit relevant eventsmaybe wrap results in more structureresult, error coderesults, error code \ No newline at end of file diff --git a/docs/sdk/images/static/uml/svg/unbond_sequence.svg b/docs/sdk/images/static/uml/svg/unbond_sequence.svg new file mode 100644 index 00000000..d9a12404 --- /dev/null +++ b/docs/sdk/images/static/uml/svg/unbond_sequence.svg @@ -0,0 +1,110 @@ +UndelegatemsgServermsgServerkeeperkeeperstorestorebankKeeperbankKeeperUndelegate(delAddr, valAddr, tokenAmount)calculate number of shares the tokenAmount representsalt[wrong denom]errorUnbond(delAddr, valAddr, 
shares)BeforeDelegationSharesModified hookalt[no such delegation]erroralt[not enough shares]erroralt[delegator is the operator of the validatorand validator is not already jailedand unbonding would put self-delegation under min threshold]jail the validator, but proceed with unbondingDefault min delegation threshold : 1 sharealt[complete unbonding, all shares removed]remove delegation object[there are still shares delegated (not a complete undbonding)]update delegation objectAfterDelegationModified hookupdate validator power indexupdate validator information (including token amount)alt[validator status is "unbonded" and it has no more tokens]delete the validatorotherwise, do this in EndBlock once validator is unbondedalt[validator is bonded]send tokens from bonded pool to not bonded poolemit event : EventTypeUnbond(delAddr, valAddr, tokenAmount, completion time) \ No newline at end of file diff --git a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png b/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png deleted file mode 100644 index 5cf54fdb..00000000 Binary files a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png and /dev/null differ diff --git a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png b/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png deleted file mode 100644 index 38b217ac..00000000 Binary files a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png and /dev/null differ diff --git a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png b/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png deleted file mode 100644 
index b23c7312..00000000 Binary files a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png and /dev/null differ diff --git a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png b/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png deleted file mode 100644 index 167b4fad..00000000 Binary files a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png and /dev/null differ diff --git a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png b/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png deleted file mode 100644 index 146e804b..00000000 Binary files a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png and /dev/null differ diff --git a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png b/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png deleted file mode 100644 index fb601237..00000000 Binary files a/docs/sdk/images/v0.50/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png and /dev/null differ diff --git a/docs/sdk/images/v0.50/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg b/docs/sdk/images/v0.50/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg deleted file mode 100644 index 1fd5c71f..00000000 --- a/docs/sdk/images/v0.50/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg +++ /dev/null @@ -1,102 +0,0 @@ -The dependencies between Keepers (Feb 
2021)StakingDistributionSlashingEvidenceBankAuth/AccountGovMint \ No newline at end of file diff --git a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png b/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png deleted file mode 100644 index 5cf54fdb..00000000 Binary files a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png and /dev/null differ diff --git a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png b/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png deleted file mode 100644 index 38b217ac..00000000 Binary files a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png and /dev/null differ diff --git a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png b/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png deleted file mode 100644 index b23c7312..00000000 Binary files a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png and /dev/null differ diff --git a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png b/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png deleted file mode 100644 index 167b4fad..00000000 Binary files a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png and /dev/null differ diff --git a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png 
b/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png deleted file mode 100644 index 146e804b..00000000 Binary files a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png and /dev/null differ diff --git a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png b/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png deleted file mode 100644 index fb601237..00000000 Binary files a/docs/sdk/images/v0.53/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png and /dev/null differ diff --git a/docs/sdk/images/v0.53/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg b/docs/sdk/images/v0.53/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg deleted file mode 100644 index 1fd5c71f..00000000 --- a/docs/sdk/images/v0.53/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg +++ /dev/null @@ -1,102 +0,0 @@ -The dependencies between Keepers (Feb 2021)StakingDistributionSlashingEvidenceBankAuth/AccountGovMint \ No newline at end of file diff --git a/docs/sdk/index.mdx b/docs/sdk/index.mdx deleted file mode 100644 index 13e1d0bc..00000000 --- a/docs/sdk/index.mdx +++ /dev/null @@ -1,55 +0,0 @@ -
-

- Explore the SDK -

-

- Cosmos SDK is the world’s most popular framework for building application-specific blockchains. -

-
- - - - Get an introduction to the Cosmos SDK, its modular architecture, and developer-friendly tools. - - - - Learn how to build a custom blockchain application using the Cosmos SDK. - - - - Dive deeper into the Cosmos SDK and create custom modules to extend functionality. - - - - Learn how to set up, operate, and maintain a full node in the Cosmos network. - - - - Hands-on guides like implementing oracles, vote extensions, and mitigating front-running. - - - - Explore practical guides for key management, transactions, CLI, and running in production. - - diff --git a/docs/sdk/next/api-reference/client-tools/README.mdx b/docs/sdk/next/api-reference/client-tools/README.mdx new file mode 100644 index 00000000..4cb74093 --- /dev/null +++ b/docs/sdk/next/api-reference/client-tools/README.mdx @@ -0,0 +1,19 @@ +--- +title: Tools +description: >- + This section provides documentation on various tooling maintained by the SDK + team. This includes tools for development, operating a node, and ease of use + of a Cosmos SDK chain. +--- + +This section provides documentation on various tooling maintained by the SDK team. +This includes tools for development, operating a node, and ease of use of a Cosmos SDK chain. + +## CLI Tools + +* [Cosmovisor](/docs/sdk/next/documentation/operations/cosmovisor) +* [Confix](/docs/sdk/next/documentation/operations/confix) + +## Other Tools + +* [Protocol Buffers](/docs/sdk/next/documentation/protocol-development/protobuf) diff --git a/docs/sdk/next/api-reference/client-tools/autocli.mdx b/docs/sdk/next/api-reference/client-tools/autocli.mdx new file mode 100644 index 00000000..d465f249 --- /dev/null +++ b/docs/sdk/next/api-reference/client-tools/autocli.mdx @@ -0,0 +1,727 @@ +--- +title: AutoCLI +--- + +## Synopsis + +This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. 
+ + +**Pre-requisite Readings** + +- [CLI](https://docs.cosmos.network/main/core/cli) + + + +The `autocli` (also known as `client/v2`) package is a [Go library](https://pkg.go.dev/cosmossdk.io/client/v2/autocli) for generating CLI (command line interface) interfaces for Cosmos SDK-based applications. It provides a simple way to add CLI commands to your application by generating them automatically based on your gRPC service definitions. Autocli generates CLI commands and flags directly from your protobuf messages, including options, input parameters, and output parameters. This means that you can easily add a CLI interface to your application without having to manually create and manage commands. + +## Overview + +`autocli` generates CLI commands and flags for each method defined in your gRPC service. By default, it generates commands for each gRPC service. The commands are named based on the name of the service method. + +For example, given the following protobuf definition for a service: + +```protobuf +service MyService { + rpc MyMethod(MyRequest) returns (MyResponse) {} +} +``` + +Here, `autocli` would generate a command named `my-method` for the `MyMethod` method. The command will have flags for each field in the `MyRequest` message. + +It is possible to customize the generation of transactions and queries by defining options for each service. + +## Application Wiring + +Here are the steps to use AutoCLI: + +1. Ensure your app's modules implement the `appmodule.AppModule` interface. +2. (optional) Configure how `autocli` generates commands by implementing the `func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions` method on the module. +3. Use the `autocli.AppOptions` struct to specify the modules you defined. If you are using `depinject`, it can automatically create an instance of `autocli.AppOptions` based on your app's configuration. +4.
Use the `EnhanceRootCommand()` method provided by `autocli` to add the CLI commands for the specified modules to your root command. + + + AutoCLI is additive only, meaning *enhancing* the root command will only add + subcommands that are not already registered. This means that you can use + AutoCLI alongside other custom commands within your app. + + +Here's an example of how to use `autocli` in your app: + +```go expandable +/ Define your app's modules + testModules := map[string]appmodule.AppModule{ + "testModule": &TestModule{ +}, +} + +/ Define the autocli AppOptions + autoCliOpts := autocli.AppOptions{ + Modules: testModules, +} + +/ Create the root command + rootCmd := &cobra.Command{ + Use: "app", +} + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + return err +} + +/ Run the root command + if err := rootCmd.Execute(); err != nil { + return err +} +``` + +### Keyring + +`autocli` uses a keyring for resolving key names and signing transactions. + + +AutoCLI provides a better UX than the normal CLI, as it allows resolving key names directly from the keyring in all transactions and commands. + +```sh + q bank balances alice + tx bank send alice bob 1000denom +``` + + + +The keyring used for resolving names and signing transactions is provided via the `client.Context`. +The keyring is then converted to the `client/v2/autocli/keyring` interface. +If no keyring is provided, the `autocli` generated command will not be able to sign transactions, but will still be able to query the chain. + +The Cosmos SDK keyring implements the `client/v2/autocli/keyring` interface, thanks to the following wrapper: + +```go +keyring.NewAutoCLIKeyring(kb) +``` + + + +## Signing + +`autocli` supports signing transactions with the keyring. +The [`cosmos.msg.v1.signer` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) defines the signer field of the message.
+This field is automatically filled when using the `--from` flag or defining the signer as a positional argument. + +AutoCLI currently supports only one signer per transaction. + +## Module wiring & Customization + +The `AutoCLIOptions()` method on your module lets you specify custom commands, sub-commands, or flags for each service, as if it were a `cobra.Command` instance, within the `RpcCommandOptions` struct. Defining such options customizes the behavior of the `autocli` command generation, which by default generates a command for each method in your gRPC service. + +```go +*autocliv1.RpcCommandOptions{ + RpcMethod: "Params", / The name of the gRPC method + Use: "params", / Command usage that is displayed in the help + Short: "Query the parameters of the governance process", / Short description of the command + Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) + +to filter results.", / Long description of the command + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "params_type", + Optional: true +}, / Transform a flag into a positional argument +}, +} +``` + + + AutoCLI can create a gov proposal of any tx by simply setting the + `GovProposal` field to `true` in the `autocli.RpcCommandOptions` struct. Users + can, however, use the `--no-proposal` flag to disable the proposal creation + (which is useful if the authority isn't the gov module on a chain). + + +### Specifying Subcommands + +By default, `autocli` generates a command for each method in your gRPC service. However, you can specify subcommands to group related commands together. To specify subcommands, use the `autocliv1.ServiceCommandDescriptor` struct. + +This example shows how to use the `autocliv1.ServiceCommandDescriptor` struct to group related commands together and specify subcommands in your gRPC service by defining an instance of `autocliv1.ModuleOptions` in your `autocli.go`.
+ +```go expandable +package gov + +import ( + + "fmt" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + govv1 "cosmossdk.io/api/cosmos/gov/v1" + govv1beta1 "cosmossdk.io/api/cosmos/gov/v1beta1" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "Params", + Use: "params", + Short: "Query the parameters of the governance process", + Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) + +to filter results.", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "params_type", + Optional: true +}, +}, +}, + { + RpcMethod: "Proposals", + Use: "proposals", + Short: "Query proposals with optional filters", + Example: fmt.Sprintf("%[1]s query gov proposals --depositor cosmos1...\n%[1]s query gov proposals --voter cosmos1...\n%[1]s query gov proposals --proposal-status (PROPOSAL_STATUS_DEPOSIT_PERIOD|PROPOSAL_STATUS_VOTING_PERIOD|PROPOSAL_STATUS_PASSED|PROPOSAL_STATUS_REJECTED|PROPOSAL_STATUS_FAILED)", version.AppName), +}, + { + RpcMethod: "Proposal", + Use: "proposal [proposal-id]", + Short: "Query details of a single proposal", + Example: fmt.Sprintf("%s query gov proposal 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Vote", + Use: "vote [proposal-id] [voter-addr]", + Short: "Query details of a single vote", + Example: fmt.Sprintf("%s query gov vote 1 cosmos1...", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "voter" +}, +}, +}, + { + RpcMethod: "Votes", + Use: "votes [proposal-id]", + Short: 
"Query votes of a single proposal", + Example: fmt.Sprintf("%s query gov votes 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Deposit", + Use: "deposit [proposal-id] [depositer-addr]", + Short: "Query details of a deposit", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "depositor" +}, +}, +}, + { + RpcMethod: "Deposits", + Use: "deposits [proposal-id]", + Short: "Query deposits on a proposal", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "TallyResult", + Use: "tally [proposal-id]", + Short: "Query the tally of a proposal vote", + Example: fmt.Sprintf("%s query gov tally 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Constitution", + Use: "constitution", + Short: "Query the current chain constitution", +}, +}, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Query_ServiceDesc.ServiceName +}, +}, +}, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Msg_ServiceDesc.ServiceName, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Msg_ServiceDesc.ServiceName +}, +}, +}, +} +} +``` + +### Positional Arguments + +By default `autocli` generates a flag for each field in your protobuf message. However, you can choose to use positional arguments instead of flags for certain fields. + +To add positional arguments to a command, use the `autocliv1.PositionalArgDescriptor` struct, as seen in the example below. Specify the `ProtoField` parameter, which is the name of the protobuf field that should be used as the positional argument. 
In addition, if the parameter is a variable-length argument, you can specify the `Varargs` parameter as `true`. This can only be applied to the last positional parameter, and the `ProtoField` must be a repeated field. + +Here's an example of how to define a positional argument for the `Account` method of the `auth` service: + +```go expandable +package auth + +import ( + + "fmt" + + authv1beta1 "cosmossdk.io/api/cosmos/auth/v1beta1" + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + _ "cosmossdk.io/api/cosmos/crypto/secp256k1" / register to that it shows up in protoregistry.GlobalTypes + _ "cosmossdk.io/api/cosmos/crypto/secp256r1" / register to that it shows up in protoregistry.GlobalTypes + + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: authv1beta1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "Accounts", + Use: "accounts", + Short: "Query all the accounts", +}, + { + RpcMethod: "Account", + Use: "account [address]", + Short: "Query account by address", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address" +}}, +}, + { + RpcMethod: "AccountInfo", + Use: "account-info [address]", + Short: "Query account info which is common to all account types.", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address" +}}, +}, + { + RpcMethod: "AccountAddressByID", + Use: "address-by-acc-num [acc-num]", + Short: "Query account address by account number", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "id" +}}, +}, + { + RpcMethod: "ModuleAccounts", + Use: "module-accounts", + Short: "Query all module accounts", +}, + { + RpcMethod: "ModuleAccountByName", + Use: "module-account [module-name]", + Short: "Query module account info 
by module name", + Example: fmt.Sprintf("%s q auth module-account gov", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "name" +}}, +}, + { + RpcMethod: "AddressBytesToString", + Use: "address-bytes-to-string [address-bytes]", + Short: "Transform an address bytes to string", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address_bytes" +}}, +}, + { + RpcMethod: "AddressStringToBytes", + Use: "address-string-to-bytes [address-string]", + Short: "Transform an address string to bytes", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address_string" +}}, +}, + { + RpcMethod: "Bech32Prefix", + Use: "bech32-prefix", + Short: "Query the chain bech32 prefix (if applicable)", +}, + { + RpcMethod: "Params", + Use: "params", + Short: "Query the current auth parameters", +}, +}, +}, + / Tx is purposely left empty, as the only tx is MsgUpdateParams which is gov gated. +} +} +``` + +Then the command can be used as follows, instead of having to specify the `--address` flag: + +```bash + query auth account cosmos1abcd...xyz +``` + +#### Flattened Fields in Positional Arguments + +AutoCLI also supports flattening nested message fields as positional arguments. This means you can access nested fields +using dot notation in the `ProtoField` parameter. This is particularly useful when you want to directly set nested +message fields as positional arguments. 
+ +For example, if you have a nested message structure like this: + +```protobuf +message Permissions { + string level = 1; + repeated string limit_type_urls = 2; +} + +message MsgAuthorizeCircuitBreaker { + string grantee = 1; + Permissions permissions = 2; +} +``` + +You can flatten the fields in your AutoCLI configuration: + +```go +{ + RpcMethod: "AuthorizeCircuitBreaker", + Use: "authorize ", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "grantee" +}, + { + ProtoField: "permissions.level" +}, + { + ProtoField: "permissions.limit_type_urls" +}, +}, +} +``` + +This allows users to provide values for nested fields directly as positional arguments: + +```bash + tx circuit authorize cosmos1... super-admin "/cosmos.bank.v1beta1.MsgSend,/cosmos.bank.v1beta1.MsgMultiSend" +``` + +Instead of having to provide a complex JSON structure for nested fields, flattening makes the CLI more user-friendly by allowing direct access to nested fields. + +#### Customising Flag Names + +By default, `autocli` generates flag names based on the names of the fields in your protobuf message. However, you can customise the flag names by providing a `FlagOptions`. This parameter allows you to specify custom names for flags based on the names of the message fields. + +For example, if you have a message with the fields `test` and `test1`, you can use the following naming options to customise the flags: + +```go +autocliv1.RpcCommandOptions{ + FlagOptions: map[string]*autocliv1.FlagOptions{ + "test": { + Name: "custom_name", +}, + "test1": { + Name: "other_name", +}, +}, +} +``` + +`FlagsOptions` is defined like sub commands in the `AutoCLIOptions()` method on your module. + +### Combining AutoCLI with Other Commands Within A Module + +AutoCLI can be used alongside other commands within a module. For example, the `gov` module uses AutoCLI to generate commands for the `query` subcommand, but also defines custom commands for the `proposer` subcommands. 
+ +In order to enable this behavior, set in `AutoCLIOptions()` the `EnhanceCustomCommand` field to `true`, for the command type (queries and/or transactions) you want to enhance. + +```go expandable +package gov + +import ( + + "fmt" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + govv1 "cosmossdk.io/api/cosmos/gov/v1" + govv1beta1 "cosmossdk.io/api/cosmos/gov/v1beta1" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "Params", + Use: "params", + Short: "Query the parameters of the governance process", + Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) + +to filter results.", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "params_type", + Optional: true +}, +}, +}, + { + RpcMethod: "Proposals", + Use: "proposals", + Short: "Query proposals with optional filters", + Example: fmt.Sprintf("%[1]s query gov proposals --depositor cosmos1...\n%[1]s query gov proposals --voter cosmos1...\n%[1]s query gov proposals --proposal-status (PROPOSAL_STATUS_DEPOSIT_PERIOD|PROPOSAL_STATUS_VOTING_PERIOD|PROPOSAL_STATUS_PASSED|PROPOSAL_STATUS_REJECTED|PROPOSAL_STATUS_FAILED)", version.AppName), +}, + { + RpcMethod: "Proposal", + Use: "proposal [proposal-id]", + Short: "Query details of a single proposal", + Example: fmt.Sprintf("%s query gov proposal 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Vote", + Use: "vote [proposal-id] [voter-addr]", + Short: "Query details of a single vote", + Example: fmt.Sprintf("%s query gov vote 1 cosmos1...", version.AppName), + 
PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "voter" +}, +}, +}, + { + RpcMethod: "Votes", + Use: "votes [proposal-id]", + Short: "Query votes of a single proposal", + Example: fmt.Sprintf("%s query gov votes 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Deposit", + Use: "deposit [proposal-id] [depositer-addr]", + Short: "Query details of a deposit", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "depositor" +}, +}, +}, + { + RpcMethod: "Deposits", + Use: "deposits [proposal-id]", + Short: "Query deposits on a proposal", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "TallyResult", + Use: "tally [proposal-id]", + Short: "Query the tally of a proposal vote", + Example: fmt.Sprintf("%s query gov tally 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Constitution", + Use: "constitution", + Short: "Query the current chain constitution", +}, +}, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Query_ServiceDesc.ServiceName +}, +}, + EnhanceCustomCommand: true, / We still have manual commands in gov that we want to keep +}, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Msg_ServiceDesc.ServiceName, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Msg_ServiceDesc.ServiceName +}, +}, +}, +} +} +``` + +If not set to `true`, `AutoCLI` will not generate commands for the module if there are already commands registered for the module (when `GetTxCmd()` or `GetQueryCmd()` are defined).
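
The precedence rule can be illustrated with a small sketch. This is plain stdlib Go and is *not* the actual `autocli` implementation; the command names are illustrative. The idea is simply that auto-generated commands are added only for RPC methods that do not already have a manually registered command, so hand-written commands are kept:

```go
package main

import (
	"fmt"
	"sort"
)

// enhance sketches the merge rule described above: for each RPC method,
// an auto-generated command is added only when no manually registered
// command with that name already exists.
func enhance(manual map[string]bool, rpcMethods []string) []string {
	var added []string
	for _, m := range rpcMethods {
		if !manual[m] { // keep the hand-written command if one exists
			added = append(added, m)
		}
	}
	sort.Strings(added)
	return added
}

func main() {
	// e.g. gov keeps a custom "proposer" query command
	manual := map[string]bool{"proposer": true}
	fmt.Println(enhance(manual, []string{"proposals", "proposer", "tally"}))
	// only "proposals" and "tally" are auto-generated
}
```

The actual enhancement logic lives in `client/v2`, but the rule is the same: custom commands win over generated ones.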
+ +### Skip a command + +AutoCLI automatically skips unsupported commands when [`cosmos_proto.method_added_in` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) is present. + +Additionally, a command can be manually skipped using the `autocliv1.RpcCommandOptions`: + +```go +*autocliv1.RpcCommandOptions{ + RpcMethod: "Params", / The name of the gRPC service + Skip: true, +} +``` + +### Use AutoCLI for non module commands + +It is possible to use `AutoCLI` for non module commands. The trick is still to implement the `appmodule.Module` interface and append it to the `appOptions.ModuleOptions` map. + +For example, here is how the SDK does it for `cometbft` gRPC commands: + +```go expandable +package cmtservice + +import ( + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + cmtv1beta1 "cosmossdk.io/api/cosmos/base/tendermint/v1beta1" +) + +var CometBFTAutoCLIDescriptor = &autocliv1.ServiceCommandDescriptor{ + Service: cmtv1beta1.Service_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "GetNodeInfo", + Use: "node-info", + Short: "Query the current node info", +}, + { + RpcMethod: "GetSyncing", + Use: "syncing", + Short: "Query node syncing status", +}, + { + RpcMethod: "GetLatestBlock", + Use: "block-latest", + Short: "Query for the latest committed block", +}, + { + RpcMethod: "GetBlockByHeight", + Use: "block-by-height [height]", + Short: "Query for a committed block by height", + Long: "Query for a specific committed block using the CometBFT RPC `block_by_height` method", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "height" +}}, +}, + { + RpcMethod: "GetLatestValidatorSet", + Use: "validator-set", + Alias: []string{"validator-set-latest", "comet-validator-set", "cometbft-validator-set", "tendermint-validator-set" +}, + Short: "Query for the latest validator set", +}, + { + RpcMethod: "GetValidatorSetByHeight", + Use: "validator-set-by-height 
[height]", + Short: "Query for a validator set by height", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "height" +}}, +}, + { + RpcMethod: "ABCIQuery", + Skip: true, +}, +}, +} + +/ NewCometBFTCommands is a fake `appmodule.Module` to be considered as a module +/ and be added in AutoCLI. +func NewCometBFTCommands() *cometModule { /nolint:revive / fake module and limiting import of core + return &cometModule{ +} +} + +type cometModule struct{ +} + +func (m cometModule) + +IsOnePerModuleType() { +} + +func (m cometModule) + +IsAppModule() { +} + +func (m cometModule) + +Name() + +string { + return "comet" +} + +func (m cometModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: CometBFTAutoCLIDescriptor, +} +} +``` + +## Summary + +`autocli` lets you generate CLI for your Cosmos SDK-based applications without any cobra boilerplate. It allows you to easily generate CLI commands and flags from your protobuf messages, and provides many options for customising the behavior of your CLI application. diff --git a/docs/sdk/next/api-reference/client-tools/cli.mdx b/docs/sdk/next/api-reference/client-tools/cli.mdx new file mode 100644 index 00000000..cf011ccb --- /dev/null +++ b/docs/sdk/next/api-reference/client-tools/cli.mdx @@ -0,0 +1,237 @@ +--- +title: Command-Line Interface +--- + +## Synopsis + +This document describes how command-line interface (CLI) works on a high-level, for an [**application**](/docs/sdk/next/documentation/application-framework/app-anatomy). A separate document for implementing a CLI for a Cosmos SDK [**module**](/docs/sdk/next/documentation/module-system/intro) can be found [here](/docs/sdk/next/documentation/module-system/module-interfaces#cli). + +## Command-Line Interface + +### Example Command + +There is no set way to create a CLI, but Cosmos SDK modules typically use the [Cobra Library](https://github.com/spf13/cobra). 
Building a CLI with Cobra entails defining commands, arguments, and flags. [**Commands**](#root-command) understand the actions users wish to take, such as `tx` for creating a transaction and `query` for querying the application. Each command can also have nested subcommands, necessary for naming the specific transaction type. Users also supply **Arguments**, such as account numbers to send coins to, and [**Flags**](#flags) to modify various aspects of the commands, such as gas prices or which node to broadcast to. + +Here is an example of a command a user might enter to interact with the simapp CLI `simd` in order to send some tokens: + +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --gas auto --gas-prices +``` + +The first four strings specify the command: + +- The root command for the entire application `simd`. +- The subcommand `tx`, which contains all commands that let users create transactions. +- The subcommand `bank` to indicate which module to route the command to ([`x/bank`](/docs/sdk/next/documentation/module-system/bank) module in this case). +- The type of transaction `send`. + +The next three strings are arguments: the `from_address` the user wishes to send from, the `to_address` of the recipient, and the `amount` they want to send. Finally, the last few strings of the command are optional flags to indicate how much the user is willing to pay in fees (calculated using the amount of gas used to execute the transaction and the gas prices provided by the user). + +The CLI interacts with a [node](/docs/sdk/next/documentation/operations/node) to handle this command. The interface itself is defined in a `main.go` file. + +### Building the CLI + +The `main.go` file needs to have a `main()` function that creates a root command, to which all the application commands will be added as subcommands. The root command additionally handles: + +- **setting configurations** by reading in configuration files (e.g. the Cosmos SDK config file).
+- **adding any flags** to it, such as `--chain-id`. +- **instantiating the `codec`** by injecting the application codecs. The [`codec`](/docs/sdk/next/documentation/protocol-development/encoding) is used to encode and decode data structures for the application - stores can only persist `[]byte`s so the developer must define a serialization format for their data structures or use the default, Protobuf. +- **adding subcommand** for all the possible user interactions, including [transaction commands](#transaction-commands) and [query commands](#query-commands). + +The `main()` function finally creates an executor and [execute](https://pkg.go.dev/github.com/spf13/cobra#Command.Execute) the root command. See an example of `main()` function from the `simapp` application: + +```go expandable +package main + +import ( + + "fmt" + "os" + + clientv2helpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/simd/cmd" + + svrcmd "github.com/cosmos/cosmos-sdk/server/cmd" +) + +func main() { + rootCmd := cmd.NewRootCmd() + if err := svrcmd.Execute(rootCmd, clientv2helpers.EnvPrefix, simapp.DefaultNodeHome); err != nil { + fmt.Fprintln(rootCmd.OutOrStderr(), err) + +os.Exit(1) +} +} +``` + +The rest of the document will detail what needs to be implemented for each step and include smaller portions of code from the `simapp` CLI files. + +## Adding Commands to the CLI + +Every application CLI first constructs a root command, then adds functionality by aggregating subcommands (often with further nested subcommands) using `rootCmd.AddCommand()`. The bulk of an application's unique capabilities lies in its transaction and query commands, called `TxCmd` and `QueryCmd` respectively. + +### Root Command + +The root command (called `rootCmd`) is what the user first types into the command line to indicate which application they wish to interact with. 
The string used to invoke the command (the "Use" field) is typically the name of the application suffixed with `-d`, e.g. `simd` or `gaiad`. The root command typically includes the following commands to support basic functionality in the application. + +- **Status** command from the Cosmos SDK rpc client tools, which prints information about the status of the connected [`Node`](/docs/sdk/next/documentation/operations/node). The Status of a node includes `NodeInfo`,`SyncInfo` and `ValidatorInfo`. +- **Keys** [commands](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys) from the Cosmos SDK client tools, which includes a collection of subcommands for using the key functions in the Cosmos SDK crypto tools, including adding a new key and saving it to the keyring, listing all public keys stored in the keyring, and deleting a key. For example, users can type `simd keys add ` to add a new key and save an encrypted copy to the keyring, using the flag `--recover` to recover a private key from a seed phrase or the flag `--multisig` to group multiple keys together to create a multisig key. For full details on the `add` key command, see the code [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys/add.go). For more details about usage of `--keyring-backend` for storage of key credentials look at the [keyring docs](/docs/sdk/next/documentation/operations/keyring). +- **Server** commands from the Cosmos SDK server package. These commands are responsible for providing the mechanisms necessary to start an ABCI CometBFT application and provides the CLI framework (based on [cobra](https://github.com/spf13/cobra)) necessary to fully bootstrap an application. The package exposes two core functions: `StartCmd` and `ExportCmd` which creates commands to start the application and export state respectively. + Learn more [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server). +- [**Transaction**](#transaction-commands) commands. 
+- [**Query**](#query-commands) commands. + +Next is an example `rootCmd` function from the `simapp` application. It instantiates the root command, adds a [_persistent_ flag](#flags) and `PreRun` function to be run before every execution, and adds all of the necessary subcommands. + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L47-L130 +``` + + + Use the `EnhanceRootCommand()` from the AutoCLI options to automatically add + auto-generated commands from the modules to the root command. Additionally, it + adds all manually defined module commands (`tx` and `query`) as well. Read + more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its + dedicated section. + + +`rootCmd` has a function called `initAppConfig()` which is useful for setting the application's custom configs. +By default, the app uses the CometBFT app config template from the Cosmos SDK, which can be overwritten via `initAppConfig()`. +Here's example code to override the default `app.toml` template. + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L144-L199 +``` + +The `initAppConfig()` also allows overriding the default Cosmos SDK's [server config](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/config/config.go#L231). One example is the `min-gas-prices` config, which defines the minimum gas prices a validator is willing to accept for processing a transaction. By default, the Cosmos SDK sets this parameter to `""` (empty string), which forces all validators to tweak their own `app.toml` and set a non-empty value, or else the node will halt on startup. This might not be the best UX for validators, so the chain developer can set a default `app.toml` value for validators inside this `initAppConfig()` function.
+ +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L164-L180 +``` + +The root-level `status` and `keys` subcommands are common across most applications and do not interact with application state. The bulk of an application's functionality - what users can actually _do_ with it - is enabled by its `tx` and `query` commands. + +### Transaction Commands + +[Transactions](/docs/sdk/next/documentation/protocol-development/transactions) are objects wrapping [`Msg`s](/docs/sdk/next/documentation/module-system/messages-and-queries#messages) that trigger state changes. To enable the creation of transactions using the CLI interface, a function `txCommand` is generally added to the `rootCmd`: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L222-L229 +``` + +This `txCommand` function adds all the transactions available to end-users for the application. This typically includes: + +- **Sign command** from the [`auth`](/docs/sdk/next/documentation/module-system/auth) module that signs messages in a transaction. To enable multisig, add the `auth` module's `MultiSign` command. Since every transaction requires some sort of signature in order to be valid, the signing command is necessary for every application. +- **Broadcast command** from the Cosmos SDK client tools, to broadcast transactions. +- **All [module transaction commands](/docs/sdk/next/documentation/module-system/module-interfaces#transaction-commands)** the application is dependent on, retrieved by using the [basic module manager's](/docs/sdk/next/documentation/module-system/module-manager#basic-manager) `AddTxCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli).
+ +Here is an example of a `txCommand` aggregating these subcommands from the `simapp` application: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L270-L292 +``` + + + When using AutoCLI to generate module transaction commands, + `EnhanceRootCommand()` automatically adds the module `tx` command to the root + command. Read more about + [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated + section. + + +### Query Commands + +[**Queries**](/docs/sdk/next/documentation/module-system/messages-and-queries#queries) are objects that allow users to retrieve information about the application's state. To enable the creation of queries using the CLI interface, a function `queryCommand` is generally added to the `rootCmd`: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L222-L229 +``` + +This `queryCommand` function adds all the queries available to end-users for the application. This typically includes: + +- **QueryTx** and/or other transaction query commands from the `auth` module which allow the user to search for a transaction by inputting its hash, a list of tags, or a block height. These queries allow users to see if transactions have been included in a block. +- **Account command** from the `auth` module, which displays the state (e.g. account balance) of an account given an address. +- **Validator command** from the Cosmos SDK rpc client tools, which displays the validator set of a given height. +- **Block command** from the Cosmos SDK RPC client tools, which displays the block data for a given height. 
+- **All [module query commands](/docs/sdk/next/documentation/module-system/module-interfaces#query-commands)** the application is dependent on, retrieved by using the [basic module manager's](/docs/sdk/next/documentation/module-system/module-manager#basic-manager) `AddQueryCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli). + +Here is an example of a `queryCommand` aggregating subcommands from the `simapp` application: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L249-L268 +``` + + + When using AutoCLI to generate module query commands, `EnhanceRootCommand()` + automatically adds the module `query` command to the root command. Read more + about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its + dedicated section. + + +## Flags + +Flags are used to modify commands; developers can include them in a `flags.go` file with their CLI. Users can explicitly include them in commands or pre-configure them inside their [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml). Commonly pre-configured flags include `--node`, the node to connect to, and `--chain-id`, the ID of the blockchain the user wishes to interact with. + +A _persistent_ flag (as opposed to a _local_ flag) added to a command transcends all of its children: subcommands will inherit the configured values for these flags. Additionally, all flags have default values when they are added to commands; some toggle an option off but others are empty values that the user needs to override to create valid commands. A flag can be explicitly marked as _required_ so that an error is automatically thrown if the user does not provide a value, but it is also acceptable to handle unexpected missing flags differently.
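
The inheritance rule above can be sketched with a toy command tree. This is a stdlib-only stand-in for Cobra's actual implementation, with illustrative names: a local flag applies to its own command, while a persistent flag is also visible to every descendant.

```go
package main

import "fmt"

// cmd is a minimal stand-in for a Cobra command: local flags apply to the
// command itself; persistent flags are also visible to all descendants.
type cmd struct {
	name       string
	parent     *cmd
	local      map[string]string
	persistent map[string]string
}

// lookup resolves a flag: local flags first, then persistent flags found by
// walking up the parent chain - mirroring how a persistent flag "transcends"
// all of a command's children.
func (c *cmd) lookup(flag string) (string, bool) {
	if v, ok := c.local[flag]; ok {
		return v, true
	}
	for n := c; n != nil; n = n.parent {
		if v, ok := n.persistent[flag]; ok {
			return v, true
		}
	}
	return "", false
}

func main() {
	root := &cmd{name: "simd", persistent: map[string]string{"chain-id": "test-chain"}}
	tx := &cmd{name: "tx", parent: root, local: map[string]string{"gas": "auto"}}

	v, _ := tx.lookup("chain-id") // inherited from the root command's persistent flags
	fmt.Println(v)
}
```

In real applications, Cobra's `PersistentFlags()` and `Flags()` provide exactly this distinction, so only the `rootCmd` persistent flags need to be registered at the application level.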
+ +Flags are added to commands directly (generally in the [module's CLI file](/docs/sdk/next/documentation/module-system/module-interfaces#flags) where module commands are defined) and no flag except for the `rootCmd` persistent flags has to be added at application level. It is common to add a _persistent_ flag for `--chain-id`, the unique identifier of the blockchain the application pertains to, to the root command. Adding this flag can be done in the `main()` function. Adding this flag makes sense as the chain ID should not be changing across commands in this application CLI. + +## Environment variables + +Each flag is bound to its respective named environment variable. The name of the environment variable consists of two parts: the upper-case `basename` followed by the flag name, with `-` substituted by `_`. For example, the flag `--node` for an application with basename `GAIA` is bound to `GAIA_NODE`. This reduces the number of flags typed for routine operations. For example, instead of: + +```shell +gaia --home=./ --node= --chain-id="testchain-1" --keyring-backend=test tx ... --from= +``` + +this will be more convenient: + +```shell +# define env variables in .env, .envrc etc +GAIA_HOME= +GAIA_NODE= +GAIA_CHAIN_ID="testchain-1" +GAIA_KEYRING_BACKEND="test" + +# and later just use +gaia tx ... --from= +``` + +## Configurations + +It is vital that the root command of an application uses the `PersistentPreRun()` cobra command property for executing the command, so all child commands have access to the server and client contexts. These contexts are set as their default values initially and may be modified, scoped to the command, in their respective `PersistentPreRun()` functions. Note that the `client.Context` is typically pre-populated with "default" values that may be useful for all commands to inherit and override if necessary.
+ +Here is an example of a `PersistentPreRun()` function from `simapp`: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L81-L120 +``` + +The `SetCmdClientContextHandler` call reads persistent flags via `ReadPersistentCommandFlags` which creates a `client.Context` and sets that on the root command's `Context`. + +The `InterceptConfigsPreRunHandler` call creates a viper literal, default `server.Context`, and a logger and sets that on the root command's `Context`. The `server.Context` will be modified and saved to disk. The internal `interceptConfigs` call reads or creates a CometBFT configuration based on the home path provided. In addition, `interceptConfigs` also reads and loads the application configuration, `app.toml`, and binds that to the `server.Context` viper literal. This is vital so the application can get access to not only the CLI flags, but also to the application configuration values provided by this file. + + +If you want to configure which logger is used, do not use `InterceptConfigsPreRunHandler`, which sets the default SDK logger, but instead use `InterceptConfigsAndCreateContext` and set the server context and the logger manually: + +```diff expandable +-return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) + ++serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig) ++if err != nil { ++ return err ++} + ++/ overwrite default server logger ++logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout()) ++if err != nil { ++ return err ++} ++serverCtx.Logger = logger.With(log.ModuleKey, "server") + ++/ set server context ++return server.SetCmdServerContext(cmd, serverCtx) +``` + + diff --git a/docs/sdk/next/api-reference/client-tools/hubl.mdx b/docs/sdk/next/api-reference/client-tools/hubl.mdx new file mode 100644 index 00000000..f7b193ca --- /dev/null +++
b/docs/sdk/next/api-reference/client-tools/hubl.mdx @@ -0,0 +1,71 @@ +--- +title: Hubl +--- + +`Hubl` is a tool that allows you to query any Cosmos SDK based blockchain. +It takes advantage of the new [AutoCLI](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/client/v2@v2.0.0-20220916140313-c5245716b516/cli) feature {/* TODO replace with AutoCLI docs */} of the Cosmos SDK. + +## Installation + +Hubl can be installed using `go install`: + +```shell +go install cosmossdk.io/tools/hubl/cmd/hubl@latest +``` + +Or build from source: + +```shell +git clone --depth=1 https://github.com/cosmos/cosmos-sdk +make hubl +``` + +The binary will be located in `tools/hubl`. + +## Usage + +```shell +hubl --help +``` + +### Add chain + +To configure a new chain, run the `init` command with the name of the chain as it's listed in the chain registry ([Link](https://github.com/cosmos/chain-registry)). + +If the chain is not listed in the chain registry, you can use any unique name. + +```shell +hubl init [chain-name] +hubl init regen +``` + +The chain configuration is stored in `~/.hubl/config.toml`. + + + +When using an insecure gRPC endpoint, change the `insecure` field to `true` in the config file. + +```toml +[chains] +[chains.regen] +[[chains.regen.trusted-grpc-endpoints]] +endpoint = 'localhost:9090' +insecure = true +``` + +Or use the `--insecure` flag: + +```shell +hubl init regen --insecure +``` + + + +### Query + +To query a chain, you can use the `query` command. +Then specify which module you want to query and the query itself.
+ +```shell +hubl regen query auth module-accounts +``` diff --git a/docs/sdk/next/api-reference/events-streaming/events.mdx b/docs/sdk/next/api-reference/events-streaming/events.mdx new file mode 100644 index 00000000..cacc22dc --- /dev/null +++ b/docs/sdk/next/api-reference/events-streaming/events.mdx @@ -0,0 +1,2347 @@ +--- +title: Events +--- + +## Synopsis + +`Event`s are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallet to track the execution of various messages and index transactions. + + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK application](/docs/sdk/next/documentation/application-framework/app-anatomy) +- [CometBFT Documentation on Events](https://docs.cometbft.com/v0.37/spec/abci/abci++_basic_concepts#events) + + + +## Events + +Events are implemented in the Cosmos SDK as an alias of the ABCI `Event` type and +take the form of: `{eventType}.{attributeKey}={attributeValue}`. + +```protobuf +// Event allows application developers to attach additional information to +// ResponseBeginBlock, ResponseEndBlock, ResponseCheckTx and ResponseDeliverTx. +// Later, transactions may be queried using these events. +message Event { + string type = 1; + repeated EventAttribute attributes = 2 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "attributes,omitempty" + ]; +} +``` + +An Event contains: + +- A `type` to categorize the Event at a high-level; for example, the Cosmos SDK uses the `"message"` type to filter Events by `Msg`s. +- A list of `attributes` are key-value pairs that give more information about the categorized Event. For example, for the `"message"` type, we can filter Events by key-value pairs using `message.action={some_action}`, `message.module={some_module}` or `message.sender={some_sender}`. 
+- A `msg_index` to identify which messages relate to the same transaction + + + To parse the attribute values as strings, make sure to add `'` (single quotes) + around each attribute value. + + +_Typed Events_ are Protobuf-defined [messages](/docs/sdk/next/documentation/legacy/adr-comprehensive) used by the Cosmos SDK +for emitting and querying Events. They are defined in an `event.proto` file, on a **per-module basis** and are read as `proto.Message`. +_Legacy Events_ are defined on a **per-module basis** in the module's `/types/events.go` file. +They are triggered from the module's Protobuf [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services) +by using the [`EventManager`](#eventmanager). + +In addition, each module documents its events in the `Events` section of its spec (x/`{moduleName}`/`README.md`). + +Lastly, Events are returned to the underlying consensus engine in the response of the following ABCI messages: + +- [`BeginBlock`](/docs/sdk/next/documentation/application-framework/baseapp#beginblock) +- [`EndBlock`](/docs/sdk/next/documentation/application-framework/baseapp#endblock) +- [`CheckTx`](/docs/sdk/next/documentation/application-framework/baseapp#checktx) +- [`Transaction Execution`](/docs/sdk/next/documentation/application-framework/baseapp#transactionexecution) + +### Examples + +The following examples show how to query Events using the Cosmos SDK. + +| Event | Description | +| ------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `tx.height=23` | Query all transactions at height 23 | +| `message.action='/cosmos.bank.v1beta1.Msg/Send'` | Query all transactions containing an x/bank `Send` [Service `Msg`](/docs/sdk/next/documentation/module-system/msg-services). Note the `'`s around the value.
| +| `message.module='bank'` | Query all transactions containing messages from the x/bank module. Note the `'`s around the value. | +| `create_validator.validator='cosmosval1...'` | x/staking-specific Event, see [x/staking SPEC](/docs/sdk/next/documentation/module-system/staking). | + +## EventManager + +In Cosmos SDK applications, Events are managed by an abstraction called the `EventManager`. +Internally, the `EventManager` tracks a list of Events for the entire execution flow of `FinalizeBlock` +(i.e. transaction execution, `BeginBlock`, `EndBlock`). + +```go expandable +package types + +import ( + + "encoding/json" + "fmt" + "maps" + "reflect" + "slices" + "strings" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/gogoproto/jsonpb" + proto "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/codec" +) + +type EventManagerI interface { + Events() + +Events + ABCIEvents() []abci.Event + EmitTypedEvent(tev proto.Message) + +error + EmitTypedEvents(tevs ...proto.Message) + +error + EmitEvent(event Event) + +EmitEvents(events Events) +} + +/ ---------------------------------------------------------------------------- +/ Event Manager +/ ---------------------------------------------------------------------------- + +var _ EventManagerI = (*EventManager)(nil) + +/ EventManager implements a simple wrapper around a slice of Event objects that +/ can be emitted from. +type EventManager struct { + events Events +} + +func NewEventManager() *EventManager { + return &EventManager{ + EmptyEvents() +} +} + +func (em *EventManager) + +Events() + +Events { + return em.events +} + +/ EmitEvent stores a single Event object. +/ Deprecated: Use EmitTypedEvent +func (em *EventManager) + +EmitEvent(event Event) { + em.events = em.events.AppendEvent(event) +} + +/ EmitEvents stores a series of Event objects. 
+/ Deprecated: Use EmitTypedEvents +func (em *EventManager) + +EmitEvents(events Events) { + em.events = em.events.AppendEvents(events) +} + +/ ABCIEvents returns all stored Event objects as abci.Event objects. +func (em EventManager) + +ABCIEvents() []abci.Event { + return em.events.ToABCIEvents() +} + +/ EmitTypedEvent takes typed event and emits converting it into Event +func (em *EventManager) + +EmitTypedEvent(tev proto.Message) + +error { + event, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +em.EmitEvent(event) + +return nil +} + +/ EmitTypedEvents takes series of typed events and emit +func (em *EventManager) + +EmitTypedEvents(tevs ...proto.Message) + +error { + events := make(Events, len(tevs)) + for i, tev := range tevs { + res, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +events[i] = res +} + +em.EmitEvents(events) + +return nil +} + +/ TypedEventToEvent takes typed event and converts to Event object +func TypedEventToEvent(tev proto.Message) (Event, error) { + evtType := proto.MessageName(tev) + +evtJSON, err := codec.ProtoMarshalJSON(tev, nil) + if err != nil { + return Event{ +}, err +} + +var attrMap map[string]json.RawMessage + err = json.Unmarshal(evtJSON, &attrMap) + if err != nil { + return Event{ +}, err +} + + / sort the keys to ensure the order is always the same + keys := slices.Sorted(maps.Keys(attrMap)) + attrs := make([]abci.EventAttribute, 0, len(attrMap)) + for _, k := range keys { + v := attrMap[k] + attrs = append(attrs, abci.EventAttribute{ + Key: k, + Value: string(v), +}) +} + +return Event{ + Type: evtType, + Attributes: attrs, +}, nil +} + +/ ParseTypedEvent converts abci.Event back to a typed event. 
+func ParseTypedEvent(event abci.Event) (proto.Message, error) { + concreteGoType := proto.MessageType(event.Type) + if concreteGoType == nil { + return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type) +} + +var value reflect.Value + if concreteGoType.Kind() == reflect.Ptr { + value = reflect.New(concreteGoType.Elem()) +} + +else { + value = reflect.Zero(concreteGoType) +} + +protoMsg, ok := value.Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("%q does not implement proto.Message", event.Type) +} + attrMap := make(map[string]json.RawMessage) + for _, attr := range event.Attributes { + attrMap[attr.Key] = json.RawMessage(attr.Value) +} + +attrBytes, err := json.Marshal(attrMap) + if err != nil { + return nil, err +} + unmarshaler := jsonpb.Unmarshaler{ + AllowUnknownFields: true +} + if err := unmarshaler.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg); err != nil { + return nil, err +} + +return protoMsg, nil +} + +/ ---------------------------------------------------------------------------- +/ Events +/ ---------------------------------------------------------------------------- + +type ( + / Event is a type alias for an ABCI Event + Event abci.Event + + / Events defines a slice of Event objects + Events []Event +) + +/ NewEvent creates a new Event object with a given type and slice of one or more +/ attributes. +func NewEvent(ty string, attrs ...Attribute) + +Event { + e := Event{ + Type: ty +} + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ NewAttribute returns a new key/value Attribute object. +func NewAttribute(k, v string) + +Attribute { + return Attribute{ + k, v +} +} + +/ EmptyEvents returns an empty slice of events. +func EmptyEvents() + +Events { + return make(Events, 0) +} + +func (a Attribute) + +String() + +string { + return fmt.Sprintf("%s: %s", a.Key, a.Value) +} + +/ ToKVPair converts an Attribute object into a CometBFT key/value pair. 
+func (a Attribute) + +ToKVPair() + +abci.EventAttribute { + return abci.EventAttribute{ + Key: a.Key, + Value: a.Value +} +} + +/ AppendAttributes adds one or more attributes to an Event. +func (e Event) + +AppendAttributes(attrs ...Attribute) + +Event { + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ GetAttribute returns an attribute for a given key present in an event. +/ If the key is not found, the boolean value will be false. +func (e Event) + +GetAttribute(key string) (Attribute, bool) { + for _, attr := range e.Attributes { + if attr.Key == key { + return Attribute{ + Key: attr.Key, + Value: attr.Value +}, true +} + +} + +return Attribute{ +}, false +} + +/ AppendEvent adds an Event to a slice of events. +func (e Events) + +AppendEvent(event Event) + +Events { + return append(e, event) +} + +/ AppendEvents adds a slice of Event objects to an exist slice of Event objects. +func (e Events) + +AppendEvents(events Events) + +Events { + return append(e, events...) +} + +/ ToABCIEvents converts a slice of Event objects to a slice of abci.Event +/ objects. +func (e Events) + +ToABCIEvents() []abci.Event { + res := make([]abci.Event, len(e)) + for i, ev := range e { + res[i] = abci.Event{ + Type: ev.Type, + Attributes: ev.Attributes +} + +} + +return res +} + +/ GetAttributes returns all attributes matching a given key present in events. +/ If the key is not found, the boolean value will be false. 
+func (e Events) + +GetAttributes(key string) ([]Attribute, bool) { + attrs := make([]Attribute, 0) + for _, event := range e { + if attr, found := event.GetAttribute(key); found { + attrs = append(attrs, attr) +} + +} + +return attrs, len(attrs) > 0 +} + +/ Common event types and attribute keys +const ( + EventTypeTx = "tx" + + AttributeKeyAccountSequence = "acc_seq" + AttributeKeySignature = "signature" + AttributeKeyFee = "fee" + AttributeKeyFeePayer = "fee_payer" + + EventTypeMessage = "message" + + AttributeKeyAction = "action" + AttributeKeyModule = "module" + AttributeKeySender = "sender" + AttributeKeyAmount = "amount" +) + +type ( + / StringAttributes defines a slice of StringEvents objects. + StringEvents []StringEvent +) + +func (se StringEvents) + +String() + +string { + var sb strings.Builder + for _, e := range se { + fmt.Fprintf(&sb, "\t\t- %s\n", e.Type) + for _, attr := range e.Attributes { + fmt.Fprintf(&sb, "\t\t\t- %s\n", attr) +} + +} + +return strings.TrimRight(sb.String(), "\n") +} + +/ StringifyEvent converts an Event object to a StringEvent object. +func StringifyEvent(e abci.Event) + +StringEvent { + res := StringEvent{ + Type: e.Type +} + for _, attr := range e.Attributes { + res.Attributes = append( + res.Attributes, + Attribute{ + Key: attr.Key, + Value: attr.Value +}, + ) +} + +return res +} + +/ StringifyEvents converts a slice of Event objects into a slice of StringEvent +/ objects. +func StringifyEvents(events []abci.Event) + +StringEvents { + res := make(StringEvents, 0, len(events)) + for _, e := range events { + res = append(res, StringifyEvent(e)) +} + +return res +} + +/ MarkEventsToIndex returns the set of ABCI events, where each event's attribute +/ has it's index value marked based on the provided set of events to index. 
+func MarkEventsToIndex(events []abci.Event, indexSet map[string]struct{
+}) []abci.Event {
+	indexAll := len(indexSet) == 0
+	updatedEvents := make([]abci.Event, len(events))
+	for i, e := range events {
+		updatedEvent := abci.Event{
+			Type: e.Type,
+			Attributes: make([]abci.EventAttribute, len(e.Attributes)),
+}
+	for j, attr := range e.Attributes {
+		_, index := indexSet[fmt.Sprintf("%s.%s", e.Type, attr.Key)]
+		updatedAttr := abci.EventAttribute{
+			Key: attr.Key,
+			Value: attr.Value,
+			Index: index || indexAll,
+}
+
+updatedEvent.Attributes[j] = updatedAttr
+}
+
+updatedEvents[i] = updatedEvent
+}
+
+return updatedEvents
+}
+```
+
+The `EventManager` comes with a set of useful methods to manage Events. The methods
+used most by module and application developers are `EmitTypedEvent` and `EmitEvent`, which
+record an Event in the `EventManager`.
+
+Module developers should handle Event emission via `EventManager#EmitTypedEvent` or `EventManager#EmitEvent` in each message
+`Handler` and in each `BeginBlock`/`EndBlock` handler. The `EventManager` is accessed via
+the [`Context`](/docs/sdk/next/documentation/application-framework/context), where the Event should already be registered and can be emitted like this:
+
+**Typed events:**
+
+```go expandable
+package keeper
+
+import (
+
+	"bytes"
+	"context"
+	"encoding/binary"
+	"encoding/json"
+	"fmt"
+	"slices"
+	"strings"
+
+	errorsmod "cosmossdk.io/errors"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+	govtypes "github.com/cosmos/cosmos-sdk/x/gov/types"
+	"github.com/cosmos/cosmos-sdk/x/group"
+	"github.com/cosmos/cosmos-sdk/x/group/errors"
+	"github.com/cosmos/cosmos-sdk/x/group/internal/math"
+	"github.com/cosmos/cosmos-sdk/x/group/internal/orm"
+)
+
+var _ group.MsgServer = Keeper{
+}
+
+/ TODO: Revisit this once we have proper gas fee framework. 
+/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(20) + +func (k Keeper) + +CreateGroup(goCtx context.Context, msg *group.MsgCreateGroup) (*group.MsgCreateGroupResponse, error) { + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid admin address: %s", msg.Admin) +} + if err := k.validateMembers(msg.Members); err != nil { + return nil, errorsmod.Wrap(err, "members") +} + if err := k.assertMetadataLength(msg.Metadata, "group metadata"); err != nil { + return nil, err +} + totalWeight := math.NewDecFromInt64(0) + for _, m := range msg.Members { + if err := k.assertMetadataLength(m.Metadata, "member metadata"); err != nil { + return nil, err +} + + / Members of a group must have a positive weight. + / NOTE: group member with zero weight are only allowed when updating group members. + / If the member has a zero weight, it will be removed from the group. + weight, err := math.NewPositiveDecFromString(m.Weight) + if err != nil { + return nil, err +} + + / Adding up members weights to compute group total weight. + totalWeight, err = totalWeight.Add(weight) + if err != nil { + return nil, err +} + +} + + / Create a new group in the groupTable. + ctx := sdk.UnwrapSDKContext(goCtx) + groupInfo := &group.GroupInfo{ + Id: k.groupTable.Sequence().PeekNextVal(ctx.KVStore(k.key)), + Admin: msg.Admin, + Metadata: msg.Metadata, + Version: 1, + TotalWeight: totalWeight.String(), + CreatedAt: ctx.BlockTime(), +} + +groupID, err := k.groupTable.Create(ctx.KVStore(k.key), groupInfo) + if err != nil { + return nil, errorsmod.Wrap(err, "could not create group") +} + + / Create new group members in the groupMemberTable. 
+ for i, m := range msg.Members { + err := k.groupMemberTable.Create(ctx.KVStore(k.key), &group.GroupMember{ + GroupId: groupID, + Member: &group.Member{ + Address: m.Address, + Weight: m.Weight, + Metadata: m.Metadata, + AddedAt: ctx.BlockTime(), +}, +}) + if err != nil { + return nil, errorsmod.Wrapf(err, "could not store member %d", i) +} + +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventCreateGroup{ + GroupId: groupID +}); err != nil { + return nil, err +} + +return &group.MsgCreateGroupResponse{ + GroupId: groupID +}, nil +} + +func (k Keeper) + +UpdateGroupMembers(goCtx context.Context, msg *group.MsgUpdateGroupMembers) (*group.MsgUpdateGroupMembersResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if len(msg.MemberUpdates) == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "member updates") +} + if err := k.validateMembers(msg.MemberUpdates); err != nil { + return nil, errorsmod.Wrap(err, "members") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + totalWeight, err := math.NewNonNegativeDecFromString(g.TotalWeight) + if err != nil { + return errorsmod.Wrap(err, "group total weight") +} + for _, member := range msg.MemberUpdates { + if err := k.assertMetadataLength(member.Metadata, "group member metadata"); err != nil { + return err +} + groupMember := group.GroupMember{ + GroupId: msg.GroupId, + Member: &group.Member{ + Address: member.Address, + Weight: member.Weight, + Metadata: member.Metadata, +}, +} + + / Checking if the group member is already part of the group + var found bool + var prevGroupMember group.GroupMember + switch err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), orm.PrimaryKey(&groupMember), &prevGroupMember); { + case err == nil: + found = true + case sdkerrors.ErrNotFound.Is(err): + found = false + default: + return errorsmod.Wrap(err, "get group member") +} + +newMemberWeight, err := 
math.NewNonNegativeDecFromString(groupMember.Member.Weight) + if err != nil { + return err +} + + / Handle delete for members with zero weight. + if newMemberWeight.IsZero() { + / We can't delete a group member that doesn't already exist. + if !found { + return errorsmod.Wrap(sdkerrors.ErrNotFound, "unknown member") +} + +previousMemberWeight, err := math.NewPositiveDecFromString(prevGroupMember.Member.Weight) + if err != nil { + return err +} + + / Subtract the weight of the group member to delete from the group total weight. + totalWeight, err = math.SubNonNegative(totalWeight, previousMemberWeight) + if err != nil { + return err +} + + / Delete group member in the groupMemberTable. + if err := k.groupMemberTable.Delete(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "delete member") +} + +continue +} + / If group member already exists, handle update + if found { + previousMemberWeight, err := math.NewPositiveDecFromString(prevGroupMember.Member.Weight) + if err != nil { + return err +} + / Subtract previous weight from the group total weight. + totalWeight, err = math.SubNonNegative(totalWeight, previousMemberWeight) + if err != nil { + return err +} + / Save updated group member in the groupMemberTable. + groupMember.Member.AddedAt = prevGroupMember.Member.AddedAt + if err := k.groupMemberTable.Update(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "add member") +} + +} + +else { / else handle create. + groupMember.Member.AddedAt = ctx.BlockTime() + if err := k.groupMemberTable.Create(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "add member") +} + +} + / In both cases (handle + update), we need to add the new member's weight to the group total weight. 
+ totalWeight, err = totalWeight.Add(newMemberWeight) + if err != nil { + return err +} + +} + / ensure that group has one or more members + if totalWeight.IsZero() { + return errorsmod.Wrap(errors.ErrInvalid, "group must not be empty") +} + / Update group in the groupTable. + g.TotalWeight = totalWeight.String() + +g.Version++ + if err := k.validateDecisionPolicies(ctx, *g); err != nil { + return err +} + +return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "members updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupMembersResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupAdmin(goCtx context.Context, msg *group.MsgUpdateGroupAdmin) (*group.MsgUpdateGroupAdminResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if strings.EqualFold(msg.Admin, msg.NewAdmin) { + return nil, errorsmod.Wrap(errors.ErrInvalid, "new and old admin are the same") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "admin address") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.NewAdmin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "new admin address") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + g.Admin = msg.NewAdmin + g.Version++ + + return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "admin updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupAdminResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupMetadata(goCtx context.Context, msg *group.MsgUpdateGroupMetadata) (*group.MsgUpdateGroupMetadataResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if err := k.assertMetadataLength(msg.Metadata, 
"group metadata"); err != nil { + return nil, err +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "admin address") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + g.Metadata = msg.Metadata + g.Version++ + return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "metadata updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupMetadataResponse{ +}, nil +} + +func (k Keeper) + +CreateGroupWithPolicy(ctx context.Context, msg *group.MsgCreateGroupWithPolicy) (*group.MsgCreateGroupWithPolicyResponse, error) { + / NOTE: admin, and group message validation is performed in the CreateGroup method + groupRes, err := k.CreateGroup(ctx, &group.MsgCreateGroup{ + Admin: msg.Admin, + Members: msg.Members, + Metadata: msg.GroupMetadata, +}) + if err != nil { + return nil, errorsmod.Wrap(err, "group response") +} + groupID := groupRes.GroupId + + / NOTE: group policy message validation is performed in the CreateGroupPolicy method + groupPolicyRes, err := k.CreateGroupPolicy(ctx, &group.MsgCreateGroupPolicy{ + Admin: msg.Admin, + GroupId: groupID, + Metadata: msg.GroupPolicyMetadata, + DecisionPolicy: msg.DecisionPolicy, +}) + if err != nil { + return nil, errorsmod.Wrap(err, "group policy response") +} + if msg.GroupPolicyAsAdmin { + updateAdminReq := &group.MsgUpdateGroupAdmin{ + GroupId: groupID, + Admin: msg.Admin, + NewAdmin: groupPolicyRes.Address, +} + _, err = k.UpdateGroupAdmin(ctx, updateAdminReq) + if err != nil { + return nil, err +} + updatePolicyAddressReq := &group.MsgUpdateGroupPolicyAdmin{ + Admin: msg.Admin, + GroupPolicyAddress: groupPolicyRes.Address, + NewAdmin: groupPolicyRes.Address, +} + _, err = k.UpdateGroupPolicyAdmin(ctx, updatePolicyAddressReq) + if err != nil { + return nil, err +} + +} + +return 
&group.MsgCreateGroupWithPolicyResponse{ + GroupId: groupID, + GroupPolicyAddress: groupPolicyRes.Address +}, nil +} + +func (k Keeper) + +CreateGroupPolicy(goCtx context.Context, msg *group.MsgCreateGroupPolicy) (*group.MsgCreateGroupPolicyResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if err := k.assertMetadataLength(msg.GetMetadata(), "group policy metadata"); err != nil { + return nil, err +} + +policy, err := msg.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "request decision policy") +} + if err := policy.ValidateBasic(); err != nil { + return nil, errorsmod.Wrap(err, "decision policy") +} + +reqGroupAdmin, err := k.accKeeper.AddressCodec().StringToBytes(msg.GetAdmin()) + if err != nil { + return nil, errorsmod.Wrap(err, "request admin") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +groupInfo, err := k.getGroupInfo(ctx, msg.GetGroupID()) + if err != nil { + return nil, err +} + +groupAdmin, err := k.accKeeper.AddressCodec().StringToBytes(groupInfo.Admin) + if err != nil { + return nil, errorsmod.Wrap(err, "group admin") +} + + / Only current group admin is authorized to create a group policy for this + if !bytes.Equal(groupAdmin, reqGroupAdmin) { + return nil, errorsmod.Wrap(sdkerrors.ErrUnauthorized, "not group admin") +} + if err := policy.Validate(groupInfo, k.config); err != nil { + return nil, err +} + + / Generate account address of group policy. + var accountAddr sdk.AccAddress + / loop here in the rare case where a ADR-028-derived address creates a + / collision with an existing address. 
+ for { + nextAccVal := k.groupPolicySeq.NextVal(ctx.KVStore(k.key)) + derivationKey := make([]byte, 8) + +binary.BigEndian.PutUint64(derivationKey, nextAccVal) + +ac, err := authtypes.NewModuleCredential(group.ModuleName, []byte{ + GroupPolicyTablePrefix +}, derivationKey) + if err != nil { + return nil, err +} + +accountAddr = sdk.AccAddress(ac.Address()) + if k.accKeeper.GetAccount(ctx, accountAddr) != nil { + / handle a rare collision, in which case we just go on to the + / next sequence value and derive a new address. + continue +} + + / group policy accounts are unclaimable base accounts + account, err := authtypes.NewBaseAccountWithPubKey(ac) + if err != nil { + return nil, errorsmod.Wrap(err, "could not create group policy account") +} + acc := k.accKeeper.NewAccount(ctx, account) + +k.accKeeper.SetAccount(ctx, acc) + +break +} + +groupPolicy, err := group.NewGroupPolicyInfo( + accountAddr, + msg.GetGroupID(), + reqGroupAdmin, + msg.GetMetadata(), + 1, + policy, + ctx.BlockTime(), + ) + if err != nil { + return nil, err +} + if err := k.groupPolicyTable.Create(ctx.KVStore(k.key), &groupPolicy); err != nil { + return nil, errorsmod.Wrap(err, "could not create group policy") +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventCreateGroupPolicy{ + Address: accountAddr.String() +}); err != nil { + return nil, err +} + +return &group.MsgCreateGroupPolicyResponse{ + Address: accountAddr.String() +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyAdmin(goCtx context.Context, msg *group.MsgUpdateGroupPolicyAdmin) (*group.MsgUpdateGroupPolicyAdminResponse, error) { + if strings.EqualFold(msg.Admin, msg.NewAdmin) { + return nil, errorsmod.Wrap(errors.ErrInvalid, "new and old admin are same") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.NewAdmin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "new admin address") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + 
groupPolicy.Admin = msg.NewAdmin + groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err := k.doUpdateGroupPolicy(ctx, msg.GroupPolicyAddress, msg.Admin, action, "group policy admin updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyAdminResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyDecisionPolicy(goCtx context.Context, msg *group.MsgUpdateGroupPolicyDecisionPolicy) (*group.MsgUpdateGroupPolicyDecisionPolicyResponse, error) { + policy, err := msg.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "decision policy") +} + if err := policy.ValidateBasic(); err != nil { + return nil, errorsmod.Wrap(err, "decision policy") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupInfo, err := k.getGroupInfo(ctx, groupPolicy.GroupId) + if err != nil { + return err +} + +err = policy.Validate(groupInfo, k.config) + if err != nil { + return err +} + +err = groupPolicy.SetDecisionPolicy(policy) + if err != nil { + return err +} + +groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err = k.doUpdateGroupPolicy(ctx, msg.GroupPolicyAddress, msg.Admin, action, "group policy's decision policy updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyDecisionPolicyResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyMetadata(goCtx context.Context, msg *group.MsgUpdateGroupPolicyMetadata) (*group.MsgUpdateGroupPolicyMetadataResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + metadata := msg.GetMetadata() + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupPolicy.Metadata = metadata + groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err := k.assertMetadataLength(metadata, "group policy metadata"); err != nil { + return nil, err +} + err := k.doUpdateGroupPolicy(ctx, 
msg.GroupPolicyAddress, msg.Admin, action, "group policy metadata updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyMetadataResponse{ +}, nil +} + +func (k Keeper) + +SubmitProposal(goCtx context.Context, msg *group.MsgSubmitProposal) (*group.MsgSubmitProposalResponse, error) { + if len(msg.Proposers) == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposers") +} + if err := k.validateProposers(msg.Proposers); err != nil { + return nil, err +} + +groupPolicyAddr, err := k.accKeeper.AddressCodec().StringToBytes(msg.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrap(err, "request account address of group policy") +} + if err := k.assertMetadataLength(msg.Title, "proposal Title"); err != nil { + return nil, err +} + if err := k.assertSummaryLength(msg.Summary); err != nil { + return nil, err +} + if err := k.assertMetadataLength(msg.Metadata, "metadata"); err != nil { + return nil, err +} + + / verify that if present, the metadata title and summary equals the proposal title and summary + if len(msg.Metadata) != 0 { + proposalMetadata := govtypes.ProposalMetadata{ +} + if err := json.Unmarshal([]byte(msg.Metadata), &proposalMetadata); err == nil { + if proposalMetadata.Title != msg.Title { + return nil, fmt.Errorf("metadata title '%s' must equal proposal title '%s'", proposalMetadata.Title, msg.Title) +} + if proposalMetadata.Summary != msg.Summary { + return nil, fmt.Errorf("metadata summary '%s' must equal proposal summary '%s'", proposalMetadata.Summary, msg.Summary) +} + +} + + / if we can't unmarshal the metadata, this means the client didn't use the recommended metadata format + / nothing can be done here, and this is still a valid case, so we ignore the error +} + +msgs, err := msg.GetMsgs() + if err != nil { + return nil, errorsmod.Wrap(err, "request msgs") +} + if err := validateMsgs(msgs); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + +policyAcc, err := 
k.getGroupPolicyInfo(ctx, msg.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrapf(err, "load group policy: %s", msg.GroupPolicyAddress) +} + +groupInfo, err := k.getGroupInfo(ctx, policyAcc.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "get group by groupId of group policy") +} + + / Only members of the group can submit a new proposal. + for _, proposer := range msg.Proposers { + if !k.groupMemberTable.Has(ctx.KVStore(k.key), orm.PrimaryKey(&group.GroupMember{ + GroupId: groupInfo.Id, + Member: &group.Member{ + Address: proposer +}})) { + return nil, errorsmod.Wrapf(errors.ErrUnauthorized, "not in group: %s", proposer) +} + +} + + / Check that if the messages require signers, they are all equal to the given account address of group policy. + if err := ensureMsgAuthZ(msgs, groupPolicyAddr, k.cdc); err != nil { + return nil, err +} + +policy, err := policyAcc.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "proposal group policy decision policy") +} + + / Prevent proposal that cannot succeed. + if err = policy.Validate(groupInfo, k.config); err != nil { + return nil, err +} + m := &group.Proposal{ + Id: k.proposalTable.Sequence().PeekNextVal(ctx.KVStore(k.key)), + GroupPolicyAddress: msg.GroupPolicyAddress, + Metadata: msg.Metadata, + Proposers: msg.Proposers, + SubmitTime: ctx.BlockTime(), + GroupVersion: groupInfo.Version, + GroupPolicyVersion: policyAcc.Version, + Status: group.PROPOSAL_STATUS_SUBMITTED, + ExecutorResult: group.PROPOSAL_EXECUTOR_RESULT_NOT_RUN, + VotingPeriodEnd: ctx.BlockTime().Add(policy.GetVotingPeriod()), / The voting window begins as soon as the proposal is submitted. 
+ FinalTallyResult: group.DefaultTallyResult(), + Title: msg.Title, + Summary: msg.Summary, +} + if err := m.SetMsgs(msgs); err != nil { + return nil, errorsmod.Wrap(err, "create proposal") +} + +id, err := k.proposalTable.Create(ctx.KVStore(k.key), m) + if err != nil { + return nil, errorsmod.Wrap(err, "create proposal") +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventSubmitProposal{ + ProposalId: id +}); err != nil { + return nil, err +} + + / Try to execute proposal immediately + if msg.Exec == group.Exec_EXEC_TRY { + / Consider proposers as Yes votes + for _, proposer := range msg.Proposers { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "vote on proposal") + _, err = k.Vote(ctx, &group.MsgVote{ + ProposalId: id, + Voter: proposer, + Option: group.VOTE_OPTION_YES, +}) + if err != nil { + return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, errorsmod.Wrapf(err, "the proposal was created but failed on vote for voter %s", proposer) +} + +} + + / Then try to execute the proposal + _, err = k.Exec(ctx, &group.MsgExec{ + ProposalId: id, + / We consider the first proposer as the MsgExecRequest signer + / but that could be revisited (eg using the group policy) + +Executor: msg.Proposers[0], +}) + if err != nil { + return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, errorsmod.Wrap(err, "the proposal was created but failed on exec") +} + +} + +return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, nil +} + +func (k Keeper) + +WithdrawProposal(goCtx context.Context, msg *group.MsgWithdrawProposal) (*group.MsgWithdrawProposalResponse, error) { + if msg.ProposalId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Address); err != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid group policy admin / proposer address: %s", msg.Address) +} + ctx := sdk.UnwrapSDKContext(goCtx) + +proposal, err := k.getProposal(ctx, msg.ProposalId) + 
if err != nil { + return nil, err +} + + / Ensure the proposal can be withdrawn. + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED { + return nil, errorsmod.Wrapf(errors.ErrInvalid, "cannot withdraw a proposal with the status of %s", proposal.Status.String()) +} + +var policyInfo group.GroupPolicyInfo + if policyInfo, err = k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress); err != nil { + return nil, errorsmod.Wrap(err, "load group policy") +} + + / Check that the address is either the group policy admin or in the proposers list. + if msg.Address != policyInfo.Admin && !isProposer(proposal, msg.Address) { + return nil, errorsmod.Wrapf(errors.ErrUnauthorized, "given address is neither group policy admin nor in proposers: %s", msg.Address) +} + +proposal.Status = group.PROPOSAL_STATUS_WITHDRAWN + if err := k.proposalTable.Update(ctx.KVStore(k.key), msg.ProposalId, &proposal); err != nil { + return nil, err +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventWithdrawProposal{ + ProposalId: msg.ProposalId +}); err != nil { + return nil, err +} + +return &group.MsgWithdrawProposalResponse{ +}, nil +} + +func (k Keeper) + +Vote(goCtx context.Context, msg *group.MsgVote) (*group.MsgVoteResponse, error) { + if msg.ProposalId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id") +} + + / verify vote options + if msg.Option == group.VOTE_OPTION_UNSPECIFIED { + return nil, errorsmod.Wrap(errors.ErrEmpty, "vote option") +} + if _, ok := group.VoteOption_name[int32(msg.Option)]; !ok { + return nil, errorsmod.Wrap(errors.ErrInvalid, "vote option") +} + if err := k.assertMetadataLength(msg.Metadata, "metadata"); err != nil { + return nil, err +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Voter); err != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid voter address: %s", msg.Voter) +} + ctx := sdk.UnwrapSDKContext(goCtx) + +proposal, err := k.getProposal(ctx, msg.ProposalId) + if err != nil { + return nil, err +} + + /
Ensure that we can still accept votes for this proposal. + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED { + return nil, errorsmod.Wrap(errors.ErrInvalid, "proposal not open for voting") +} + if ctx.BlockTime().After(proposal.VotingPeriodEnd) { + return nil, errorsmod.Wrap(errors.ErrExpired, "voting period has ended already") +} + +policyInfo, err := k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrap(err, "load group policy") +} + +groupInfo, err := k.getGroupInfo(ctx, policyInfo.GroupId) + if err != nil { + return nil, err +} + + / Count and store votes. + voter := group.GroupMember{ + GroupId: groupInfo.Id, + Member: &group.Member{ + Address: msg.Voter +}} + if err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), orm.PrimaryKey(&voter), &voter); err != nil { + return nil, errorsmod.Wrapf(err, "voter address: %s", msg.Voter) +} + newVote := group.Vote{ + ProposalId: msg.ProposalId, + Voter: msg.Voter, + Option: msg.Option, + Metadata: msg.Metadata, + SubmitTime: ctx.BlockTime(), +} + + / The ORM will return an error if the vote already exists, + / making sure that a voter hasn't already voted. + if err := k.voteTable.Create(ctx.KVStore(k.key), &newVote); err != nil { + return nil, errorsmod.Wrap(err, "store vote") +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventVote{ + ProposalId: msg.ProposalId +}); err != nil { + return nil, err +} + + / Try to execute proposal immediately + if msg.Exec == group.Exec_EXEC_TRY { + _, err = k.Exec(ctx, &group.MsgExec{ + ProposalId: msg.ProposalId, + Executor: msg.Voter +}) + if err != nil { + return nil, err +} + +} + +return &group.MsgVoteResponse{ +}, nil +} + +/ doTallyAndUpdate performs a tally, and, if the tally result is final, then: +/ - updates the proposal's `Status` and `FinalTallyResult` fields, +/ - prunes all the votes.
+func (k Keeper) + +doTallyAndUpdate(ctx sdk.Context, proposal *group.Proposal, groupInfo group.GroupInfo, policyInfo group.GroupPolicyInfo) + +error { + policy, err := policyInfo.GetDecisionPolicy() + if err != nil { + return err +} + +var result group.DecisionPolicyResult + tallyResult, err := k.Tally(ctx, *proposal, policyInfo.GroupId) + if err == nil { + result, err = policy.Allow(tallyResult, groupInfo.TotalWeight) +} + if err != nil { + if err := k.pruneVotes(ctx, proposal.Id); err != nil { + return err +} + +proposal.Status = group.PROPOSAL_STATUS_REJECTED + return ctx.EventManager().EmitTypedEvents( + &group.EventTallyError{ + ProposalId: proposal.Id, + ErrorMessage: err.Error(), +}) +} + + / If the result was final (i.e. enough votes to pass) + +or if the voting + / period ended, then we consider the proposal as final. + if isFinal := result.Final || ctx.BlockTime().After(proposal.VotingPeriodEnd); isFinal { + if err := k.pruneVotes(ctx, proposal.Id); err != nil { + return err +} + +proposal.FinalTallyResult = tallyResult + if result.Allow { + proposal.Status = group.PROPOSAL_STATUS_ACCEPTED +} + +else { + proposal.Status = group.PROPOSAL_STATUS_REJECTED +} + + +} + +return nil +} + +/ Exec executes the messages from a proposal. 
+func (k Keeper) + +Exec(goCtx context.Context, msg *group.MsgExec) (*group.MsgExecResponse, error) { + if msg.ProposalId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +proposal, err := k.getProposal(ctx, msg.ProposalId) + if err != nil { + return nil, err +} + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED && proposal.Status != group.PROPOSAL_STATUS_ACCEPTED { + return nil, errorsmod.Wrapf(errors.ErrInvalid, "not possible to exec with proposal status %s", proposal.Status.String()) +} + +policyInfo, err := k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrap(err, "load group policy") +} + + / If proposal is still in SUBMITTED phase, it means that the voting period + / didn't end yet, and tallying hasn't been done. In this case, we need to + / tally first. + if proposal.Status == group.PROPOSAL_STATUS_SUBMITTED { + groupInfo, err := k.getGroupInfo(ctx, policyInfo.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "load group") +} + if err = k.doTallyAndUpdate(ctx, &proposal, groupInfo, policyInfo); err != nil { + return nil, err +} + +} + + / Execute proposal payload. + var logs string + if proposal.Status == group.PROPOSAL_STATUS_ACCEPTED && proposal.ExecutorResult != group.PROPOSAL_EXECUTOR_RESULT_SUCCESS { + / Caching context so that we don't update the store in case of failure. 
+ cacheCtx, flush := ctx.CacheContext() + +addr, err := k.accKeeper.AddressCodec().StringToBytes(policyInfo.Address) + if err != nil { + return nil, err +} + decisionPolicy := policyInfo.DecisionPolicy.GetCachedValue().(group.DecisionPolicy) + if results, err := k.doExecuteMsgs(cacheCtx, k.router, proposal, addr, decisionPolicy); err != nil { + proposal.ExecutorResult = group.PROPOSAL_EXECUTOR_RESULT_FAILURE + logs = fmt.Sprintf("proposal execution failed on proposal %d, because of error %s", proposal.Id, err.Error()) + +k.Logger(ctx).Info("proposal execution failed", "cause", err, "proposalID", proposal.Id) +} + +else { + proposal.ExecutorResult = group.PROPOSAL_EXECUTOR_RESULT_SUCCESS + flush() + for _, res := range results { + / NOTE: The sdk msg handler creates a new EventManager, so events must be correctly propagated back to the current context + ctx.EventManager().EmitEvents(res.GetEvents()) +} + +} + +} + + / Update proposal in proposalTable + / If proposal has successfully run, delete it from state. + if proposal.ExecutorResult == group.PROPOSAL_EXECUTOR_RESULT_SUCCESS { + if err := k.pruneProposal(ctx, proposal.Id); err != nil { + return nil, err +} + + / Emit event for proposal finalized with its result + if err := ctx.EventManager().EmitTypedEvent( + &group.EventProposalPruned{ + ProposalId: proposal.Id, + Status: proposal.Status, + TallyResult: &proposal.FinalTallyResult, +}); err != nil { + return nil, err +} + +} + +else { + store := ctx.KVStore(k.key) + if err := k.proposalTable.Update(store, proposal.Id, &proposal); err != nil { + return nil, err +} + +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventExec{ + ProposalId: proposal.Id, + Logs: logs, + Result: proposal.ExecutorResult, +}); err != nil { + return nil, err +} + +return &group.MsgExecResponse{ + Result: proposal.ExecutorResult, +}, nil +} + +/ LeaveGroup implements the MsgServer/LeaveGroup method. 
+func (k Keeper) + +LeaveGroup(goCtx context.Context, msg *group.MsgLeaveGroup) (*group.MsgLeaveGroupResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group-id") +} + + _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Address) + if err != nil { + return nil, errorsmod.Wrap(err, "group member") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +groupInfo, err := k.getGroupInfo(ctx, msg.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "group") +} + +groupWeight, err := math.NewNonNegativeDecFromString(groupInfo.TotalWeight) + if err != nil { + return nil, err +} + +gm, err := k.getGroupMember(ctx, &group.GroupMember{ + GroupId: msg.GroupId, + Member: &group.Member{ + Address: msg.Address +}, +}) + if err != nil { + return nil, err +} + +memberWeight, err := math.NewPositiveDecFromString(gm.Member.Weight) + if err != nil { + return nil, err +} + +updatedWeight, err := math.SubNonNegative(groupWeight, memberWeight) + if err != nil { + return nil, err +} + + / delete group member in the groupMemberTable. 
+ if err := k.groupMemberTable.Delete(ctx.KVStore(k.key), gm); err != nil { + return nil, errorsmod.Wrap(err, "group member") +} + + / update group weight + groupInfo.TotalWeight = updatedWeight.String() + +groupInfo.Version++ + if err := k.validateDecisionPolicies(ctx, groupInfo); err != nil { + return nil, err +} + if err := k.groupTable.Update(ctx.KVStore(k.key), groupInfo.Id, &groupInfo); err != nil { + return nil, err +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventLeaveGroup{ + GroupId: msg.GroupId, + Address: msg.Address, +}); err != nil { + return nil, err +} + +return &group.MsgLeaveGroupResponse{ +}, nil +} + +func (k Keeper) + +getGroupMember(ctx sdk.Context, member *group.GroupMember) (*group.GroupMember, error) { + var groupMember group.GroupMember + switch err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), + orm.PrimaryKey(member), &groupMember); { + case err == nil: + break + case sdkerrors.ErrNotFound.Is(err): + return nil, sdkerrors.ErrNotFound.Wrapf("%s is not part of group %d", member.Member.Address, member.GroupId) + +default: + return nil, err +} + +return &groupMember, nil +} + +type ( + actionFn func(m *group.GroupInfo) + +error + groupPolicyActionFn func(m *group.GroupPolicyInfo) + +error +) + +/ doUpdateGroupPolicy first makes sure that the group policy admin initiated the group policy update, +/ before performing the group policy update and emitting an event. 
+func (k Keeper) + +doUpdateGroupPolicy(ctx sdk.Context, reqGroupPolicy, reqAdmin string, action groupPolicyActionFn, note string) + +error { + groupPolicyAddr, err := k.accKeeper.AddressCodec().StringToBytes(reqGroupPolicy) + if err != nil { + return errorsmod.Wrap(err, "group policy address") +} + + _, err = k.accKeeper.AddressCodec().StringToBytes(reqAdmin) + if err != nil { + return errorsmod.Wrap(err, "group policy admin") +} + +groupPolicyInfo, err := k.getGroupPolicyInfo(ctx, reqGroupPolicy) + if err != nil { + return errorsmod.Wrap(err, "load group policy") +} + + / Only current group policy admin is authorized to update a group policy. + if reqAdmin != groupPolicyInfo.Admin { + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, "not group policy admin") +} + if err := action(&groupPolicyInfo); err != nil { + return errorsmod.Wrap(err, note) +} + if err = k.abortProposals(ctx, groupPolicyAddr); err != nil { + return err +} + if err = ctx.EventManager().EmitTypedEvent(&group.EventUpdateGroupPolicy{ + Address: groupPolicyInfo.Address +}); err != nil { + return err +} + +return nil +} + +/ doUpdateGroup first makes sure that the group admin initiated the group update, +/ before performing the group update and emitting an event. +func (k Keeper) + +doUpdateGroup(ctx sdk.Context, groupID uint64, reqGroupAdmin string, action actionFn, errNote string) + +error { + groupInfo, err := k.getGroupInfo(ctx, groupID) + if err != nil { + return err +} + if !strings.EqualFold(groupInfo.Admin, reqGroupAdmin) { + return errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "not group admin; got %s, expected %s", reqGroupAdmin, groupInfo.Admin) +} + if err := action(&groupInfo); err != nil { + return errorsmod.Wrap(err, errNote) +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventUpdateGroup{ + GroupId: groupID +}); err != nil { + return err +} + +return nil +} + +/ assertMetadataLength returns an error if given metadata length +/ is greater than a pre-defined maxMetadataLen. 
+func (k Keeper) + +assertMetadataLength(metadata, description string) + +error { + if metadata != "" && uint64(len(metadata)) > k.config.MaxMetadataLen { + return errorsmod.Wrapf(errors.ErrMaxLimit, description) +} + +return nil +} + +/ assertSummaryLength returns an error if given summary length +/ is greater than a pre-defined 40*MaxMetadataLen. +func (k Keeper) + +assertSummaryLength(summary string) + +error { + if summary != "" && uint64(len(summary)) > 40*k.config.MaxMetadataLen { + return errorsmod.Wrapf(errors.ErrMaxLimit, "proposal summary is too long") +} + +return nil +} + +/ validateDecisionPolicies loops through all decision policies from the group, +/ and calls each of their Validate() + +method. +func (k Keeper) + +validateDecisionPolicies(ctx sdk.Context, g group.GroupInfo) + +error { + it, err := k.groupPolicyByGroupIndex.Get(ctx.KVStore(k.key), g.Id) + if err != nil { + return err +} + +defer it.Close() + for { + var groupPolicy group.GroupPolicyInfo + _, err = it.LoadNext(&groupPolicy) + if errors.ErrORMIteratorDone.Is(err) { + break +} + if err != nil { + return err +} + +err = groupPolicy.DecisionPolicy.GetCachedValue().(group.DecisionPolicy).Validate(g, k.config) + if err != nil { + return err +} + +} + +return nil +} + +/ validateProposers checks that all proposer addresses are valid. +/ It also verifies that there are no duplicate addresses. +func (k Keeper) + +validateProposers(proposers []string) + +error { + index := make(map[string]struct{ +}, len(proposers)) + for _, proposer := range proposers { + if _, exists := index[proposer]; exists { + return errorsmod.Wrapf(errors.ErrDuplicate, "address: %s", proposer) +} + + _, err := k.accKeeper.AddressCodec().StringToBytes(proposer) + if err != nil { + return errorsmod.Wrapf(err, "proposer address %s", proposer) +} + +index[proposer] = struct{ +}{ +} + +} + +return nil +} + +/ validateMembers checks that all member addresses are valid.
+/ additionally it verifies that there is no duplicate address +/ and the member weight is non-negative. +/ Note: in state, a member's weight MUST be positive. However, in some Msgs, +/ it's possible to set a zero member weight, for example in +/ MsgUpdateGroupMembers to denote that we're removing a member. +/ It returns an error if any of the above conditions is not met. +func (k Keeper) + +validateMembers(members []group.MemberRequest) + +error { + index := make(map[string]struct{ +}, len(members)) + for _, member := range members { + if _, exists := index[member.Address]; exists { + return errorsmod.Wrapf(errors.ErrDuplicate, "address: %s", member.Address) +} + + _, err := k.accKeeper.AddressCodec().StringToBytes(member.Address) + if err != nil { + return errorsmod.Wrapf(err, "member address %s", member.Address) +} + if _, err := math.NewNonNegativeDecFromString(member.Weight); err != nil { + return errorsmod.Wrap(err, "weight must be non negative") +} + +index[member.Address] = struct{ +}{ +} + +} + +return nil +} + +/ isProposer checks that an address is a proposer of a given proposal. +func isProposer(proposal group.Proposal, address string) + +bool { + return slices.Contains(proposal.Proposers, address) +} + +func validateMsgs(msgs []sdk.Msg) + +error { + for i, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return errorsmod.Wrapf(err, "msg %d", i) +} + +} + +return nil +} +``` + +**Legacy events:** + +```go +ctx.EventManager().EmitEvent( + sdk.NewEvent(eventType, sdk.NewAttribute(attributeKey, attributeValue)), +) +``` + +Where the `EventManager` is accessed via the [`Context`](/docs/sdk/next/documentation/application-framework/context). + +See the [`Msg` services](/docs/sdk/next/documentation/module-system/msg-services) concept doc for a more detailed +view on how to typically implement Events and use the `EventManager` in modules. 
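
The typed-event pattern above pairs an event type with named attributes. As a minimal, dependency-free sketch (these are illustrative types, not the SDK's actual ones, which live in `github.com/cosmos/cosmos-sdk/types`), the following shows how a type/attribute pair composes into the CometBFT subscription queries covered in the next section:

```go
package main

import "fmt"

// Event loosely mirrors an emitted ABCI event: a type string plus
// key/value attributes. Illustrative only, not the SDK's real types.
type Event struct {
	Type       string
	Attributes map[string]string
}

// TxSubscriptionQuery builds the CometBFT query string that would match
// this event when it is emitted during transaction processing.
func TxSubscriptionQuery(e Event, attrKey string) string {
	return fmt.Sprintf("tm.event='Tx' AND %s.%s='%s'", e.Type, attrKey, e.Attributes[attrKey])
}

func main() {
	ev := Event{Type: "mint", Attributes: map[string]string{"owner": "cosmos1abcd"}}
	fmt.Println(TxSubscriptionQuery(ev, "owner"))
}
```

Running it prints `tm.event='Tx' AND mint.owner='cosmos1abcd'`, the same shape as the subscription queries shown in the next section.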
+ +## Subscribing to Events + +You can use CometBFT's [Websocket](https://docs.cometbft.com/v0.37/core/subscription) to subscribe to Events by calling the `subscribe` RPC method: + +```json +{ + "jsonrpc": "2.0", + "method": "subscribe", + "id": "0", + "params": { + "query": "tm.event='eventCategory' AND eventType.eventAttribute='attributeValue'" + } +} +``` + +The main `eventCategory` values you can subscribe to are: + +- `NewBlock`: Contains Events triggered during `BeginBlock` and `EndBlock`. +- `Tx`: Contains Events triggered during `DeliverTx` (i.e. transaction processing). +- `ValidatorSetUpdates`: Contains validator set updates for the block. + +These Events are triggered from the `state` package after a block is committed. You can get the +full list of Event categories in the [CometBFT Go documentation](https://pkg.go.dev/github.com/cometbft/cometbft/types#pkg-constants). + +The `type` and `attribute` values of the `query` allow you to filter for the specific Event you are looking for. For example, a `Mint` transaction triggers an Event of type `EventMint` and has an `Id` and an `Owner` as `attributes` (as defined in the [`events.proto` file of the `NFT` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/nft/v1beta1/event.proto#L21-L31)). + +Subscribing to this Event would be done like so: + +```json +{ + "jsonrpc": "2.0", + "method": "subscribe", + "id": "0", + "params": { + "query": "tm.event='Tx' AND mint.owner='ownerAddress'" + } +} +``` + +where `ownerAddress` is an address following the [`AccAddress`](/docs/sdk/next/documentation/protocol-development/accounts#addresses) format. + +The same method can be used to subscribe to [legacy events](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/bank/types/events.go). + +## Default Events + +There are a few events that are automatically emitted for all messages, directly from `baseapp`. + +- `message.action`: The name of the message type.
+- `message.sender`: The address of the message signer. +- `message.module`: The name of the module that emitted the message. + + + The module name is assumed by `baseapp` to be the second element of the + message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`. In case a module + does not follow the standard message path (e.g. IBC), it is advised to keep + emitting the module name event. `Baseapp` only emits that event if the module + has not already done so. + diff --git a/docs/sdk/next/api-reference/service-apis/grpc_rest.mdx b/docs/sdk/next/api-reference/service-apis/grpc_rest.mdx new file mode 100644 index 00000000..90cf3902 --- /dev/null +++ b/docs/sdk/next/api-reference/service-apis/grpc_rest.mdx @@ -0,0 +1,223 @@ +--- +title: "gRPC, REST, and CometBFT Endpoints" +--- + +## Synopsis + +This document presents an overview of all the endpoints a node exposes: gRPC and REST, as well as some other endpoints. + +## An Overview of All Endpoints + +Each node exposes the following endpoints for users to interact with it; each endpoint is served on a different port. Details on how to configure each endpoint are provided in the endpoint's own section. + +- the gRPC server (default port: `9090`), +- the REST server (default port: `1317`), +- the CometBFT RPC endpoint (default port: `26657`). + + + The node also exposes some other endpoints, such as the CometBFT P2P endpoint, + or the [Prometheus endpoint](https://docs.cometbft.com/v0.37/core/metrics), + which are not directly related to the Cosmos SDK. Please refer to the + [CometBFT documentation](https://docs.cometbft.com/v0.37/core/configuration) + for more information about these endpoints. + + + + All endpoints default to localhost and must be modified to be exposed to + the public internet. + + +## gRPC Server + +In the Cosmos SDK, Protobuf is the main [encoding](/docs/sdk/next/documentation/protocol-development/encoding) library.
This brings a wide range of Protobuf-based tools that can be plugged into the Cosmos SDK. One such tool is [gRPC](https://grpc.io), a modern open-source high-performance RPC framework that has decent client support in several languages. + +Each module exposes a [Protobuf `Query` service](/docs/sdk/next/documentation/module-system/messages-and-queries#queries) that defines state queries. The `Query` services and a transaction service used to broadcast transactions are hooked up to the gRPC server via the following function inside the application: + +```go expandable +package types + +import ( + + "encoding/json" + "io" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommended + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +interface{ +} + +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application.
+ Application interface { + ABCI + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServerWithSkipCheckHeader registers gRPC services directly with the gRPC + / server and bypass check header flag. + RegisterGRPCServerWithSkipCheckHeader(grpc.Server, bool) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). + RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for CometBFT queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context, config.Config) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +storetypes.CommitMultiStore + + / Return the snapshot manager + SnapshotManager() *snapshots.Manager + + / Close is called in start cmd to gracefully cleanup resources. + / Must be safe to be called multiple times. + Close() + +error +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. 
+ AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) +``` + +Note: It is not possible to expose any [Protobuf `Msg` service](/docs/sdk/next/documentation/module-system/messages-and-queries#messages) endpoints via gRPC. Transactions must be generated and signed using the CLI or programmatically before they can be broadcast using gRPC. See [Generating, Signing, and Broadcasting Transactions](/docs/sdk/next/documentation/operations/txs) for more information. + +The `grpc.Server` is a concrete gRPC server, which spawns and serves all gRPC query requests and a broadcast transaction request. This server can be configured inside `~/.simapp/config/app.toml`: + +- `grpc.enable = true|false` field defines if the gRPC server should be enabled. Defaults to `true`. +- `grpc.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `localhost:9090`. + + + `~/.simapp` is the directory where the node's configuration and databases are + stored. By default, it's set to `~/.{app_name}`. + + +Once the gRPC server is started, you can send requests to it using a gRPC client. Some examples are given in our [Interact with the Node](/docs/sdk/next/documentation/operations/interact-node#using-grpc) tutorial. + +An overview of all available gRPC endpoints shipped with the Cosmos SDK is available in the [Protobuf documentation](https://buf.build/cosmos/cosmos-sdk). + +## REST Server + +Cosmos SDK supports REST routes via gRPC-gateway. + +All routes are configured under the following fields in `~/.simapp/config/app.toml`: + +- `api.enable = true|false` field defines if the REST server should be enabled. Defaults to `false`. +- `api.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `tcp://localhost:1317`.
+- some additional API configuration options are defined in `~/.simapp/config/app.toml`, along with comments; please refer to that file directly. + +### gRPC-gateway REST Routes + +If, for various reasons, you cannot use gRPC (for example, you are building a web application, and browsers don't support HTTP/2, on which gRPC is built), then the Cosmos SDK offers REST routes via gRPC-gateway. + +[gRPC-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) is a tool to expose gRPC endpoints as REST endpoints. For each gRPC endpoint defined in a Protobuf `Query` service, the Cosmos SDK offers a REST equivalent. For instance, querying a balance could be done via the `/cosmos.bank.v1beta1.QueryAllBalances` gRPC endpoint, or alternatively via the gRPC-gateway `"/cosmos/bank/v1beta1/balances/{address}"` REST endpoint: both will return the same result. For each RPC method defined in a Protobuf `Query` service, the corresponding REST endpoint is defined as an option: + +```protobuf + // AllBalances queries the balance of all coins for a single account. + // + // When called from another module, this query might consume a high amount of + // gas if the pagination field is incorrectly set. + rpc AllBalances(QueryAllBalancesRequest) returns (QueryAllBalancesResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/bank/v1beta1/balances/{address}"; + } +``` + +For application developers, gRPC-gateway REST routes need to be wired up to the REST server. This is done by calling the `RegisterGRPCGatewayRoutes` function on the ModuleManager. + +### Swagger + +A [Swagger](https://swagger.io/) (or OpenAPIv2) specification file is exposed under the `/swagger` route on the API server. Swagger is an open specification describing the API endpoints a server serves, including description, input arguments, return types and much more about each endpoint.
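
Each gRPC-gateway route is generated from an HTTP path template such as `/cosmos/bank/v1beta1/balances/{address}`, and the Swagger specification documents these expanded routes. As an illustrative sketch (this helper is hypothetical, not SDK code; the real substitution is done by generated gRPC-gateway stubs), template expansion amounts to:

```go
package main

import (
	"fmt"
	"strings"
)

// expandPath substitutes concrete values into a gRPC-gateway style
// path template, e.g. "/cosmos/bank/v1beta1/balances/{address}".
func expandPath(template string, vars map[string]string) string {
	out := template
	for k, v := range vars {
		out = strings.ReplaceAll(out, "{"+k+"}", v)
	}
	return out
}

func main() {
	url := expandPath("/cosmos/bank/v1beta1/balances/{address}",
		map[string]string{"address": "cosmos1abcd"})
	fmt.Println(url)
}
```

This prints `/cosmos/bank/v1beta1/balances/cosmos1abcd`, the concrete REST route a client would query.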
+
+Enabling the `/swagger` endpoint is configurable inside `~/.simapp/config/app.toml` via the `api.swagger` field, which is set to `false` by default.
+
+For application developers, you may want to generate your own Swagger definitions based on your custom modules.
+The Cosmos SDK's [Swagger generation script](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/scripts/protoc-swagger-gen.sh) is a good place to start.
+
+## CometBFT RPC
+
+Independently from the Cosmos SDK, CometBFT also exposes an RPC server. This RPC server can be configured by tuning parameters under the `rpc` table in `~/.simapp/config/config.toml`; the default listening address is `tcp://localhost:26657`. An OpenAPI specification of all CometBFT RPC endpoints is available [here](https://docs.cometbft.com/main/rpc/).
+
+Some CometBFT RPC endpoints are directly related to the Cosmos SDK:
+
+- `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings:
+  - any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.Query/AllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf.
+  - `/app/simulate`: this will simulate a transaction, and return some information such as gas used.
+  - `/app/version`: this will return the application's version.
+  - `/store/{storeName}/key`: this will directly query the named store for data associated with the key represented in the `data` parameter.
+  - `/store/{storeName}/subspace`: this will directly query the named store for key/value pairs in which the key has the value of the `data` parameter as a prefix.
+  - `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port.
+  - `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID.
+- `/broadcast_tx_{sync,async,commit}`: these 3 endpoints will broadcast a transaction to other peers.
CLI, gRPC and REST expose [a way to broadcast transactions](/docs/sdk/next/documentation/protocol-development/transactions#broadcasting-the-transaction), but they all use these 3 CometBFT RPCs under the hood.
+
+## Comparison Table
+
+| Name | Advantages | Disadvantages |
+| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
+| gRPC | - can use code-generated stubs in various languages<br/>- supports streaming and bidirectional communication (HTTP2)<br/>- small wire binary sizes, faster transmission | - based on HTTP2, not available in browsers<br/>- learning curve (mostly due to Protobuf) |
+| REST | - ubiquitous<br/>- client libraries in all languages, faster implementation | - only supports unary request-response communication (HTTP1.1)<br/>- bigger over-the-wire message sizes (JSON) |
+| CometBFT RPC | - easy to use | - bigger over-the-wire message sizes (JSON) |
diff --git a/docs/sdk/v0.53/learn/advanced/proto-docs.mdx b/docs/sdk/next/api-reference/service-apis/proto-docs.mdx
similarity index 54%
rename from docs/sdk/v0.53/learn/advanced/proto-docs.mdx
rename to docs/sdk/next/api-reference/service-apis/proto-docs.mdx
index 1dcbc9f1..0f3593eb 100644
--- a/docs/sdk/v0.53/learn/advanced/proto-docs.mdx
+++ b/docs/sdk/next/api-reference/service-apis/proto-docs.mdx
@@ -1,6 +1,6 @@
 ---
-title: "Protobuf Documentation"
-description: "Version: v0.53"
+title: Protobuf Documentation
+description: See Cosmos SDK Buf Proto-docs
 ---
 
 See [Cosmos SDK Buf Proto-docs](https://buf.build/cosmos/cosmos-sdk/docs/main)
diff --git a/docs/sdk/next/api-reference/service-apis/query-lifecycle.mdx b/docs/sdk/next/api-reference/service-apis/query-lifecycle.mdx
new file mode 100644
index 00000000..d632d568
--- /dev/null
+++ b/docs/sdk/next/api-reference/service-apis/query-lifecycle.mdx
@@ -0,0 +1,1594 @@
+---
+title: Query Lifecycle
+---
+
+## Synopsis
+
+This document describes the lifecycle of a query in a Cosmos SDK
+application, from the user interface to application stores and back. The query
+is referred to as `MyQuery`.
+
+
+**Pre-requisite Readings**
+
+- [Transaction Lifecycle](/docs/sdk/next/documentation/protocol-development/tx-lifecycle)
+
+
+
+## Query Creation
+
+A [**query**](/docs/sdk/next/documentation/module-system/messages-and-queries#queries) is a request for information made by end-users of applications through an interface and processed by a full-node. Users can query information about the network, the application itself, and application state directly from the application's stores or modules.
Note that queries are different from [transactions](/docs/sdk/next/documentation/protocol-development/transactions) (view the lifecycle [here](/docs/sdk/next/documentation/protocol-development/tx-lifecycle)), particularly in that they do not require consensus to be processed (as they do not trigger state-transitions); they can be fully handled by one full-node.
+
+For the purpose of explaining the query lifecycle, let's say the query, `MyQuery`, is requesting a list of delegations made by a certain delegator address in the application called `simapp`. As is to be expected, the [`staking`](/docs/sdk/next/documentation/module-system/staking) module handles this query. But first, there are a few ways `MyQuery` can be created by users.
+
+### CLI
+
+The main interface for an application is the command-line interface. Users connect to a full-node and run the CLI directly from their machines - the CLI interacts directly with the full-node. To create `MyQuery` from their terminal, users type the following command:
+
+```bash
+simd query staking delegations <delegatorAddress>
+```
+
+This query command was defined by the [`staking`](/docs/sdk/next/documentation/module-system/staking) module developer and added to the list of subcommands by the application developer when creating the CLI.
+
+Note that the general format is as follows:
+
+```bash
+simd query [moduleName] [command] <arguments> --flag <flagArg>
+```
+
+To provide values such as `--node` (the full-node the CLI connects to), the user can use the [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml) config file to set them or provide them as flags.
+
+The CLI understands a specific set of commands, defined in a hierarchical structure by the application developer: from the [root command](/docs/sdk/next/api-reference/client-tools/cli#root-command) (`simd`), the type of command (`query`), the module that contains the command (`staking`), and the command itself (`delegations`).
Thus, the CLI knows exactly which module handles this command and directly passes the call there.
+
+### gRPC
+
+Another interface through which users can make queries is [gRPC](https://grpc.io) requests to a [gRPC server](/docs/sdk/next/api-reference/service-apis/grpc_rest#grpc-server). The endpoints are defined as [Protocol Buffers](https://developers.google.com/protocol-buffers) service methods inside `.proto` files, written in Protobuf's own language-agnostic interface definition language (IDL). The Protobuf ecosystem developed tools for code-generation from `*.proto` files into various languages. These tools make it easy to build gRPC clients.
+
+One such tool is [grpcurl](https://github.com/fullstorydev/grpcurl), and a gRPC request for `MyQuery` using this client looks like:
+
+```bash
+# -plaintext: return results in plain text (no TLS)
+# -import-path / -proto: where to find the .proto file defining the Query service
+# -d: query arguments; the final two arguments are the gRPC server endpoint
+#     and the fully-qualified service method name
+grpcurl \
+  -plaintext \
+  -import-path ./proto \
+  -proto ./proto/cosmos/staking/v1beta1/query.proto \
+  -d '{"address":"$MY_DELEGATOR"}' \
+  localhost:9090 \
+  cosmos.staking.v1beta1.Query/Delegations
+```
+
+### REST
+
+Another interface through which users can make queries is through HTTP Requests to a [REST server](/docs/sdk/next/api-reference/service-apis/grpc_rest#rest-server). The REST server is fully auto-generated from Protobuf services, using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway).
+
+An example HTTP request for `MyQuery` looks like:
+
+```bash
+GET http://localhost:1317/cosmos/staking/v1beta1/delegators/{delegatorAddr}/delegations
+```
+
+## How Queries are Handled by the CLI
+
+The preceding examples show how an external user can interact with a node by querying its state. To understand in more detail the exact lifecycle of a query, let's dig into how the CLI prepares the query, and how the node handles it.
The interactions from the users' perspective are a bit different, but the underlying functions are almost identical because they are implementations of the same command defined by the module developer. This step of processing happens within the CLI, gRPC, or REST server, and heavily involves a `client.Context`.
+
+### Context
+
+The first thing that is created in the execution of a CLI command is a `client.Context`. A `client.Context` is an object that stores all the data needed to process a request on the user side. In particular, a `client.Context` stores the following:
+
+- **Codec**: The [encoder/decoder](/docs/sdk/next/documentation/protocol-development/encoding) used by the application, used to marshal the parameters and query before making the CometBFT RPC request and unmarshal the returned response into a JSON object. The default codec used by the CLI is Protobuf.
+- **Account Decoder**: The account decoder from the [`auth`](/docs/sdk/next/documentation/module-system/auth) module, which translates `[]byte`s into accounts.
+- **RPC Client**: The CometBFT RPC Client, or node, to which requests are relayed.
+- **Keyring**: A [Key Manager](/docs/sdk/next/documentation/protocol-development/accounts#keyring) used to sign transactions and handle other operations with keys.
+- **Output Writer**: A [Writer](https://pkg.go.dev/io/#Writer) used to output the response.
+- **Configurations**: The flags configured by the user for this command, including `--height`, which specifies the height of the blockchain to query, and `--indent`, which adds indentation to the JSON response.
+
+The `client.Context` also contains various functions such as `Query()`, which retrieves the RPC Client and makes an ABCI call to relay a query to a full-node.
+ +```go expandable +package client + +import ( + + "bufio" + "context" + "encoding/json" + "fmt" + "io" + "os" + "path" + "strings" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/viper" + "google.golang.org/grpc" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ PreprocessTxFn defines a hook by which chains can preprocess transactions before broadcasting +type PreprocessTxFn func(chainID string, key keyring.KeyType, tx TxBuilder) + +error + +/ Context implements a typical context created in SDK modules for transaction +/ handling and queries. +type Context struct { + FromAddress sdk.AccAddress + Client CometRPC + GRPCClient *grpc.ClientConn + ChainID string + Codec codec.Codec + InterfaceRegistry codectypes.InterfaceRegistry + Input io.Reader + Keyring keyring.Keyring + KeyringOptions []keyring.Option + KeyringDir string + KeyringDefaultKeyName string + Output io.Writer + OutputFormat string + Height int64 + HomeDir string + From string + BroadcastMode string + FromName string + SignModeStr string + UseLedger bool + Simulate bool + GenerateOnly bool + Offline bool + SkipConfirm bool + TxConfig TxConfig + AccountRetriever AccountRetriever + NodeURI string + FeePayer sdk.AccAddress + FeeGranter sdk.AccAddress + Viper *viper.Viper + LedgerHasProtobuf bool + PreprocessTxHook PreprocessTxFn + + / IsAux is true when the signer is an auxiliary signer (e.g. the tipper). + IsAux bool + + / TODO: Deprecated (remove). + LegacyAmino *codec.LegacyAmino + + / CmdContext is the context.Context from the Cobra command. + CmdContext context.Context +} + +/ WithCmdContext returns a copy of the context with an updated context.Context, +/ usually set to the cobra cmd context. 
+func (ctx Context) + +WithCmdContext(c context.Context) + +Context { + ctx.CmdContext = c + return ctx +} + +/ WithKeyring returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyring(k keyring.Keyring) + +Context { + ctx.Keyring = k + return ctx +} + +/ WithKeyringOptions returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyringOptions(opts ...keyring.Option) + +Context { + ctx.KeyringOptions = opts + return ctx +} + +/ WithInput returns a copy of the context with an updated input. +func (ctx Context) + +WithInput(r io.Reader) + +Context { + / convert to a bufio.Reader to have a shared buffer between the keyring and the + / the Commands, ensuring a read from one advance the read pointer for the other. + / see https://github.com/cosmos/cosmos-sdk/issues/9566. + ctx.Input = bufio.NewReader(r) + +return ctx +} + +/ WithCodec returns a copy of the Context with an updated Codec. +func (ctx Context) + +WithCodec(m codec.Codec) + +Context { + ctx.Codec = m + return ctx +} + +/ WithLegacyAmino returns a copy of the context with an updated LegacyAmino codec. +/ TODO: Deprecated (remove). +func (ctx Context) + +WithLegacyAmino(cdc *codec.LegacyAmino) + +Context { + ctx.LegacyAmino = cdc + return ctx +} + +/ WithOutput returns a copy of the context with an updated output writer (e.g. stdout). +func (ctx Context) + +WithOutput(w io.Writer) + +Context { + ctx.Output = w + return ctx +} + +/ WithFrom returns a copy of the context with an updated from address or name. +func (ctx Context) + +WithFrom(from string) + +Context { + ctx.From = from + return ctx +} + +/ WithOutputFormat returns a copy of the context with an updated OutputFormat field. +func (ctx Context) + +WithOutputFormat(format string) + +Context { + ctx.OutputFormat = format + return ctx +} + +/ WithNodeURI returns a copy of the context with an updated node URI. 
+func (ctx Context) + +WithNodeURI(nodeURI string) + +Context { + ctx.NodeURI = nodeURI + return ctx +} + +/ WithHeight returns a copy of the context with an updated height. +func (ctx Context) + +WithHeight(height int64) + +Context { + ctx.Height = height + return ctx +} + +/ WithClient returns a copy of the context with an updated RPC client +/ instance. +func (ctx Context) + +WithClient(client CometRPC) + +Context { + ctx.Client = client + return ctx +} + +/ WithGRPCClient returns a copy of the context with an updated GRPC client +/ instance. +func (ctx Context) + +WithGRPCClient(grpcClient *grpc.ClientConn) + +Context { + ctx.GRPCClient = grpcClient + return ctx +} + +/ WithUseLedger returns a copy of the context with an updated UseLedger flag. +func (ctx Context) + +WithUseLedger(useLedger bool) + +Context { + ctx.UseLedger = useLedger + return ctx +} + +/ WithChainID returns a copy of the context with an updated chain ID. +func (ctx Context) + +WithChainID(chainID string) + +Context { + ctx.ChainID = chainID + return ctx +} + +/ WithHomeDir returns a copy of the Context with HomeDir set. +func (ctx Context) + +WithHomeDir(dir string) + +Context { + if dir != "" { + ctx.HomeDir = dir +} + +return ctx +} + +/ WithKeyringDir returns a copy of the Context with KeyringDir set. +func (ctx Context) + +WithKeyringDir(dir string) + +Context { + ctx.KeyringDir = dir + return ctx +} + +/ WithKeyringDefaultKeyName returns a copy of the Context with KeyringDefaultKeyName set. 
+func (ctx Context) + +WithKeyringDefaultKeyName(keyName string) + +Context { + ctx.KeyringDefaultKeyName = keyName + return ctx +} + +/ WithGenerateOnly returns a copy of the context with updated GenerateOnly value +func (ctx Context) + +WithGenerateOnly(generateOnly bool) + +Context { + ctx.GenerateOnly = generateOnly + return ctx +} + +/ WithSimulation returns a copy of the context with updated Simulate value +func (ctx Context) + +WithSimulation(simulate bool) + +Context { + ctx.Simulate = simulate + return ctx +} + +/ WithOffline returns a copy of the context with updated Offline value. +func (ctx Context) + +WithOffline(offline bool) + +Context { + ctx.Offline = offline + return ctx +} + +/ WithFromName returns a copy of the context with an updated from account name. +func (ctx Context) + +WithFromName(name string) + +Context { + ctx.FromName = name + return ctx +} + +/ WithFromAddress returns a copy of the context with an updated from account +/ address. +func (ctx Context) + +WithFromAddress(addr sdk.AccAddress) + +Context { + ctx.FromAddress = addr + return ctx +} + +/ WithFeePayerAddress returns a copy of the context with an updated fee payer account +/ address. +func (ctx Context) + +WithFeePayerAddress(addr sdk.AccAddress) + +Context { + ctx.FeePayer = addr + return ctx +} + +/ WithFeeGranterAddress returns a copy of the context with an updated fee granter account +/ address. +func (ctx Context) + +WithFeeGranterAddress(addr sdk.AccAddress) + +Context { + ctx.FeeGranter = addr + return ctx +} + +/ WithBroadcastMode returns a copy of the context with an updated broadcast +/ mode. +func (ctx Context) + +WithBroadcastMode(mode string) + +Context { + ctx.BroadcastMode = mode + return ctx +} + +/ WithSignModeStr returns a copy of the context with an updated SignMode +/ value. 
+func (ctx Context) + +WithSignModeStr(signModeStr string) + +Context { + ctx.SignModeStr = signModeStr + return ctx +} + +/ WithSkipConfirmation returns a copy of the context with an updated SkipConfirm +/ value. +func (ctx Context) + +WithSkipConfirmation(skip bool) + +Context { + ctx.SkipConfirm = skip + return ctx +} + +/ WithTxConfig returns the context with an updated TxConfig +func (ctx Context) + +WithTxConfig(generator TxConfig) + +Context { + ctx.TxConfig = generator + return ctx +} + +/ WithAccountRetriever returns the context with an updated AccountRetriever +func (ctx Context) + +WithAccountRetriever(retriever AccountRetriever) + +Context { + ctx.AccountRetriever = retriever + return ctx +} + +/ WithInterfaceRegistry returns the context with an updated InterfaceRegistry +func (ctx Context) + +WithInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) + +Context { + ctx.InterfaceRegistry = interfaceRegistry + return ctx +} + +/ WithViper returns the context with Viper field. This Viper instance is used to read +/ client-side config from the config file. +func (ctx Context) + +WithViper(prefix string) + +Context { + v := viper.New() + if prefix == "" { + executableName, _ := os.Executable() + +prefix = path.Base(executableName) +} + +v.SetEnvPrefix(prefix) + +v.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_")) + +v.AutomaticEnv() + +ctx.Viper = v + return ctx +} + +/ WithAux returns a copy of the context with an updated IsAux value. +func (ctx Context) + +WithAux(isAux bool) + +Context { + ctx.IsAux = isAux + return ctx +} + +/ WithLedgerHasProto returns the context with the provided boolean value, indicating +/ whether the target Ledger application can support Protobuf payloads. 
+func (ctx Context) + +WithLedgerHasProtobuf(val bool) + +Context { + ctx.LedgerHasProtobuf = val + return ctx +} + +/ WithPreprocessTxHook returns the context with the provided preprocessing hook, which +/ enables chains to preprocess the transaction using the builder. +func (ctx Context) + +WithPreprocessTxHook(preprocessFn PreprocessTxFn) + +Context { + ctx.PreprocessTxHook = preprocessFn + return ctx +} + +/ PrintString prints the raw string to ctx.Output if it's defined, otherwise to os.Stdout +func (ctx Context) + +PrintString(str string) + +error { + return ctx.PrintBytes([]byte(str)) +} + +/ PrintBytes prints the raw bytes to ctx.Output if it's defined, otherwise to os.Stdout. +/ NOTE: for printing a complex state object, you should use ctx.PrintOutput +func (ctx Context) + +PrintBytes(o []byte) + +error { + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err := writer.Write(o) + +return err +} + +/ PrintProto outputs toPrint to the ctx.Output based on ctx.OutputFormat which is +/ either text or json. If text, toPrint will be YAML encoded. Otherwise, toPrint +/ will be JSON encoded using ctx.Codec. An error is returned upon failure. +func (ctx Context) + +PrintProto(toPrint proto.Message) + +error { + / always serialize JSON initially because proto json can't be directly YAML encoded + out, err := ctx.Codec.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintObjectLegacy is a variant of PrintProto that doesn't require a proto.Message type +/ and uses amino JSON encoding. +/ Deprecated: It will be removed in the near future! +func (ctx Context) + +PrintObjectLegacy(toPrint any) + +error { + out, err := ctx.LegacyAmino.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintRaw is a variant of PrintProto that doesn't require a proto.Message type +/ and uses a raw JSON message. No marshaling is performed. 
+func (ctx Context) + +PrintRaw(toPrint json.RawMessage) + +error { + return ctx.printOutput(toPrint) +} + +func (ctx Context) + +printOutput(out []byte) + +error { + var err error + if ctx.OutputFormat == "text" { + out, err = yaml.JSONToYAML(out) + if err != nil { + return err +} + +} + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err = writer.Write(out) + if err != nil { + return err +} + if ctx.OutputFormat != "text" { + / append new-line for formats besides YAML + _, err = writer.Write([]byte("\n")) + if err != nil { + return err +} + +} + +return nil +} + +/ GetFromFields returns a from account address, account name and keyring type, given either an address or key name. +/ If clientCtx.Simulate is true the keystore is not accessed and a valid address must be provided +/ If clientCtx.GenerateOnly is true the keystore is only accessed if a key name is provided +/ If from is empty, the default key if specified in the context will be used +func GetFromFields(clientCtx Context, kr keyring.Keyring, from string) (sdk.AccAddress, string, keyring.KeyType, error) { + if from == "" && clientCtx.KeyringDefaultKeyName != "" { + from = clientCtx.KeyringDefaultKeyName + _ = clientCtx.PrintString(fmt.Sprintf("No key name or address provided; using the default key: %s\n", clientCtx.KeyringDefaultKeyName)) +} + if from == "" { + return nil, "", 0, nil +} + +addr, err := sdk.AccAddressFromBech32(from) + switch { + case clientCtx.Simulate: + if err != nil { + return nil, "", 0, fmt.Errorf("a valid bech32 address must be provided in simulation mode: %w", err) +} + +return addr, "", 0, nil + case clientCtx.GenerateOnly: + if err == nil { + return addr, "", 0, nil +} + +} + +var k *keyring.Record + if err == nil { + k, err = kr.KeyByAddress(addr) + if err != nil { + return nil, "", 0, err +} + +} + +else { + k, err = kr.Key(from) + if err != nil { + return nil, "", 0, err +} + +} + +addr, err = k.GetAddress() + if err != nil { + return nil, "", 0, err +} 
+ +return addr, k.Name, k.GetType(), nil +} + +/ NewKeyringFromBackend gets a Keyring object from a backend +func NewKeyringFromBackend(ctx Context, backend string) (keyring.Keyring, error) { + if ctx.Simulate { + backend = keyring.BackendMemory +} + +return keyring.New(sdk.KeyringServiceName(), backend, ctx.KeyringDir, ctx.Input, ctx.Codec, ctx.KeyringOptions...) +} +``` + +The `client.Context`'s primary role is to store data used during interactions with the end-user and provide methods to interact with this data - it is used before and after the query is processed by the full-node. Specifically, in handling `MyQuery`, the `client.Context` is utilized to encode the query parameters, retrieve the full-node, and write the output. Prior to being relayed to a full-node, the query needs to be encoded into a `[]byte` form, as full-nodes are application-agnostic and do not understand specific types. The full-node (RPC Client) itself is retrieved using the `client.Context`, which knows which node the user CLI is connected to. The query is relayed to this full-node to be processed. Finally, the `client.Context` contains a `Writer` to write output when the response is returned. These steps are further described in later sections. + +### Arguments and Route Creation + +At this point in the lifecycle, the user has created a CLI command with all of the data they wish to include in their query. A `client.Context` exists to assist in the rest of the `MyQuery`'s journey. Now, the next step is to parse the command or request, extract the arguments, and encode everything. These steps all happen on the user side within the interface they are interacting with. + +#### Encoding + +In our case (querying an address's delegations), `MyQuery` contains an [address](/docs/sdk/next/documentation/protocol-development/accounts#addresses) `delegatorAddress` as its only argument. However, the request can only contain `[]byte`s, as it is ultimately relayed to a consensus engine (e.g. 
CometBFT) of a full-node that has no inherent knowledge of the application types. Thus, the `codec` of `client.Context` is used to marshal the address. + +Here is what the code looks like for the CLI command: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/staking/client/cli/query.go#L315-L318 +``` + +#### gRPC Query Client Creation + +The Cosmos SDK leverages code generated from Protobuf services to make queries. The `staking` module's `MyQuery` service generates a `queryClient`, which the CLI uses to make queries. Here is the relevant code: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/staking/client/cli/query.go#L308-L343 +``` + +Under the hood, the `client.Context` has a `Query()` function used to retrieve the pre-configured node and relay a query to it; the function takes the query fully-qualified service method name as path (in our case: `/cosmos.staking.v1beta1.Query/Delegations`), and arguments as parameters. It first retrieves the RPC Client (called the [**node**](/docs/sdk/next/documentation/operations/node)) configured by the user to relay this query to, and creates the `ABCIQueryOptions` (parameters formatted for the ABCI call). The node is then used to make the ABCI call, `ABCIQueryWithOptions()`. + +Here is what the code looks like: + +```go expandable +package client + +import ( + + "context" + "fmt" + "strings" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + rpcclient "github.com/cometbft/cometbft/rpc/client" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + "cosmossdk.io/store/rootmulti" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ GetNode returns an RPC client. If the context's client is not defined, an +/ error is returned. 
+func (ctx Context) + +GetNode() (CometRPC, error) { + if ctx.Client == nil { + return nil, errors.New("no RPC client is defined in offline mode") +} + +return ctx.Client, nil +} + +/ Query performs a query to a CometBFT node with the provided path. +/ It returns the result and height of the query upon success or an error if +/ the query fails. +func (ctx Context) + +Query(path string) ([]byte, int64, error) { + return ctx.query(path, nil) +} + +/ QueryWithData performs a query to a CometBFT node with the provided path +/ and a data payload. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +QueryWithData(path string, data []byte) ([]byte, int64, error) { + return ctx.query(path, data) +} + +/ QueryStore performs a query to a CometBFT node with the provided key and +/ store name. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +QueryStore(key []byte, storeName string) ([]byte, int64, error) { + return ctx.queryStore(key, storeName, "key") +} + +/ QueryABCI performs a query to a CometBFT node with the provide RequestQuery. +/ It returns the ResultQuery obtained from the query. The height used to perform +/ the query is the RequestQuery Height if it is non-zero, otherwise the context +/ height is used. +func (ctx Context) + +QueryABCI(req abci.RequestQuery) (abci.ResponseQuery, error) { + return ctx.queryABCI(req) +} + +/ GetFromAddress returns the from address from the context's name. 
+func (ctx Context) + +GetFromAddress() + +sdk.AccAddress { + return ctx.FromAddress +} + +/ GetFeePayerAddress returns the fee granter address from the context +func (ctx Context) + +GetFeePayerAddress() + +sdk.AccAddress { + return ctx.FeePayer +} + +/ GetFeeGranterAddress returns the fee granter address from the context +func (ctx Context) + +GetFeeGranterAddress() + +sdk.AccAddress { + return ctx.FeeGranter +} + +/ GetFromName returns the key name for the current context. +func (ctx Context) + +GetFromName() + +string { + return ctx.FromName +} + +func (ctx Context) + +queryABCI(req abci.RequestQuery) (abci.ResponseQuery, error) { + node, err := ctx.GetNode() + if err != nil { + return abci.ResponseQuery{ +}, err +} + +var queryHeight int64 + if req.Height != 0 { + queryHeight = req.Height +} + +else { + / fallback on the context height + queryHeight = ctx.Height +} + opts := rpcclient.ABCIQueryOptions{ + Height: queryHeight, + Prove: req.Prove, +} + +result, err := node.ABCIQueryWithOptions(context.Background(), req.Path, req.Data, opts) + if err != nil { + return abci.ResponseQuery{ +}, err +} + if !result.Response.IsOK() { + return abci.ResponseQuery{ +}, sdkErrorToGRPCError(result.Response) +} + + / data from trusted node or subspace query doesn't need verification + if !opts.Prove || !isQueryStoreWithProof(req.Path) { + return result.Response, nil +} + +return result.Response, nil +} + +func sdkErrorToGRPCError(resp abci.ResponseQuery) + +error { + switch resp.Code { + case sdkerrors.ErrInvalidRequest.ABCICode(): + return status.Error(codes.InvalidArgument, resp.Log) + case sdkerrors.ErrUnauthorized.ABCICode(): + return status.Error(codes.Unauthenticated, resp.Log) + case sdkerrors.ErrKeyNotFound.ABCICode(): + return status.Error(codes.NotFound, resp.Log) + +default: + return status.Error(codes.Unknown, resp.Log) +} +} + +/ query performs a query to a CometBFT node with the provided store name +/ and path. 
It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +query(path string, key []byte) ([]byte, int64, error) { + resp, err := ctx.queryABCI(abci.RequestQuery{ + Path: path, + Data: key, + Height: ctx.Height, +}) + if err != nil { + return nil, 0, err +} + +return resp.Value, resp.Height, nil +} + +/ queryStore performs a query to a CometBFT node with the provided a store +/ name and path. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +queryStore(key []byte, storeName, endPath string) ([]byte, int64, error) { + path := fmt.Sprintf("/store/%s/%s", storeName, endPath) + +return ctx.query(path, key) +} + +/ isQueryStoreWithProof expects a format like /// +/ queryType must be "store" and subpath must be "key" to require a proof. +func isQueryStoreWithProof(path string) + +bool { + if !strings.HasPrefix(path, "/") { + return false +} + paths := strings.SplitN(path[1:], "/", 3) + switch { + case len(paths) != 3: + return false + case paths[0] != "store": + return false + case rootmulti.RequireProof("/" + paths[2]): + return true +} + +return false +} +``` + +## RPC + +With a call to `ABCIQueryWithOptions()`, `MyQuery` is received by a [full-node](/docs/sdk/next/documentation/protocol-development/encoding) which then processes the request. Note that, while the RPC is made to the consensus engine (e.g. CometBFT) of a full-node, queries are not part of consensus and so are not broadcasted to the rest of the network, as they do not require anything the network needs to agree upon. + +Read more about ABCI Clients and CometBFT RPC in the [CometBFT documentation](https://docs.cometbft.com/v0.37/spec/rpc/). 
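The `/store/{storeName}/key` and `/store/{storeName}/subspace` paths accepted by `/abci_query` follow the plain string convention that the unexported `queryStore` helper above assembles with `fmt.Sprintf`. A minimal self-contained sketch of that path construction (the `buildStorePath` name is illustrative, not an SDK API):

```go
package main

import "fmt"

// buildStorePath assembles an ABCI query path for a direct store query,
// following the /store/{storeName}/{endPath} convention described above.
// The SDK's client.Context does the same thing internally in queryStore.
func buildStorePath(storeName, endPath string) string {
	return fmt.Sprintf("/store/%s/%s", storeName, endPath)
}

func main() {
	fmt.Println(buildStorePath("bank", "key"))         // prints /store/bank/key
	fmt.Println(buildStorePath("staking", "subspace")) // prints /store/staking/subspace
}
```

The resulting string is what you would pass as the `path` parameter of an `/abci_query` request, with the store key (or key prefix) supplied in the `data` parameter.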
+
+## Application Query Handling
+
+When a query is received by the full-node after being relayed from the underlying consensus engine, it is handled within an environment that understands application-specific types and has a copy of the state. [`baseapp`](/docs/sdk/next/documentation/application-framework/baseapp) implements the ABCI [`Query()`](/docs/sdk/next/documentation/application-framework/baseapp#query) function and handles gRPC queries. The query route is parsed, and if it matches the fully-qualified service method name of an existing service method (most likely in one of the modules), `baseapp` relays the request to the relevant module.
+
+Since `MyQuery` has a Protobuf fully-qualified service method name from the `staking` module (recall `/cosmos.staking.v1beta1.Query/Delegations`), `baseapp` first parses the path, then uses its own internal `GRPCQueryRouter` to retrieve the corresponding gRPC handler, and routes the query to the module. The gRPC handler is responsible for recognizing this query, retrieving the appropriate values from the application's stores, and returning a response. Read more about query services [here](/docs/sdk/next/documentation/module-system/query-services).
+
+Once a result is received from the querier, `baseapp` begins the process of returning a response to the user.
+
+## Response
+
+Since `Query()` is an ABCI function, `baseapp` returns the response as an [`abci.QueryResponse`](https://docs.cometbft.com/main/spec/abci/abci++_methods#query) type. The `client.Context` `Query()` routine receives the response and processes it.
+
+### CLI Response
+
+The application [`codec`](/docs/sdk/next/documentation/protocol-development/encoding) is used to unmarshal the response to JSON, and the `client.Context` prints the output to the command line, applying any configuration such as the output type (text, JSON, or YAML). 
+ +```go expandable +package client + +import ( + + "bufio" + "context" + "encoding/json" + "fmt" + "io" + "os" + "path" + "strings" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/viper" + "google.golang.org/grpc" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ PreprocessTxFn defines a hook by which chains can preprocess transactions before broadcasting +type PreprocessTxFn func(chainID string, key keyring.KeyType, tx TxBuilder) + +error + +/ Context implements a typical context created in SDK modules for transaction +/ handling and queries. +type Context struct { + FromAddress sdk.AccAddress + Client CometRPC + GRPCClient *grpc.ClientConn + ChainID string + Codec codec.Codec + InterfaceRegistry codectypes.InterfaceRegistry + Input io.Reader + Keyring keyring.Keyring + KeyringOptions []keyring.Option + KeyringDir string + KeyringDefaultKeyName string + Output io.Writer + OutputFormat string + Height int64 + HomeDir string + From string + BroadcastMode string + FromName string + SignModeStr string + UseLedger bool + Simulate bool + GenerateOnly bool + Offline bool + SkipConfirm bool + TxConfig TxConfig + AccountRetriever AccountRetriever + NodeURI string + FeePayer sdk.AccAddress + FeeGranter sdk.AccAddress + Viper *viper.Viper + LedgerHasProtobuf bool + PreprocessTxHook PreprocessTxFn + + / IsAux is true when the signer is an auxiliary signer (e.g. the tipper). + IsAux bool + + / TODO: Deprecated (remove). + LegacyAmino *codec.LegacyAmino + + / CmdContext is the context.Context from the Cobra command. + CmdContext context.Context +} + +/ WithCmdContext returns a copy of the context with an updated context.Context, +/ usually set to the cobra cmd context. 
+func (ctx Context) + +WithCmdContext(c context.Context) + +Context { + ctx.CmdContext = c + return ctx +} + +/ WithKeyring returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyring(k keyring.Keyring) + +Context { + ctx.Keyring = k + return ctx +} + +/ WithKeyringOptions returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyringOptions(opts ...keyring.Option) + +Context { + ctx.KeyringOptions = opts + return ctx +} + +/ WithInput returns a copy of the context with an updated input. +func (ctx Context) + +WithInput(r io.Reader) + +Context { + / convert to a bufio.Reader to have a shared buffer between the keyring and the + / the Commands, ensuring a read from one advance the read pointer for the other. + / see https://github.com/cosmos/cosmos-sdk/issues/9566. + ctx.Input = bufio.NewReader(r) + +return ctx +} + +/ WithCodec returns a copy of the Context with an updated Codec. +func (ctx Context) + +WithCodec(m codec.Codec) + +Context { + ctx.Codec = m + return ctx +} + +/ WithLegacyAmino returns a copy of the context with an updated LegacyAmino codec. +/ TODO: Deprecated (remove). +func (ctx Context) + +WithLegacyAmino(cdc *codec.LegacyAmino) + +Context { + ctx.LegacyAmino = cdc + return ctx +} + +/ WithOutput returns a copy of the context with an updated output writer (e.g. stdout). +func (ctx Context) + +WithOutput(w io.Writer) + +Context { + ctx.Output = w + return ctx +} + +/ WithFrom returns a copy of the context with an updated from address or name. +func (ctx Context) + +WithFrom(from string) + +Context { + ctx.From = from + return ctx +} + +/ WithOutputFormat returns a copy of the context with an updated OutputFormat field. +func (ctx Context) + +WithOutputFormat(format string) + +Context { + ctx.OutputFormat = format + return ctx +} + +/ WithNodeURI returns a copy of the context with an updated node URI. 
+func (ctx Context) + +WithNodeURI(nodeURI string) + +Context { + ctx.NodeURI = nodeURI + return ctx +} + +/ WithHeight returns a copy of the context with an updated height. +func (ctx Context) + +WithHeight(height int64) + +Context { + ctx.Height = height + return ctx +} + +/ WithClient returns a copy of the context with an updated RPC client +/ instance. +func (ctx Context) + +WithClient(client CometRPC) + +Context { + ctx.Client = client + return ctx +} + +/ WithGRPCClient returns a copy of the context with an updated GRPC client +/ instance. +func (ctx Context) + +WithGRPCClient(grpcClient *grpc.ClientConn) + +Context { + ctx.GRPCClient = grpcClient + return ctx +} + +/ WithUseLedger returns a copy of the context with an updated UseLedger flag. +func (ctx Context) + +WithUseLedger(useLedger bool) + +Context { + ctx.UseLedger = useLedger + return ctx +} + +/ WithChainID returns a copy of the context with an updated chain ID. +func (ctx Context) + +WithChainID(chainID string) + +Context { + ctx.ChainID = chainID + return ctx +} + +/ WithHomeDir returns a copy of the Context with HomeDir set. +func (ctx Context) + +WithHomeDir(dir string) + +Context { + if dir != "" { + ctx.HomeDir = dir +} + +return ctx +} + +/ WithKeyringDir returns a copy of the Context with KeyringDir set. +func (ctx Context) + +WithKeyringDir(dir string) + +Context { + ctx.KeyringDir = dir + return ctx +} + +/ WithKeyringDefaultKeyName returns a copy of the Context with KeyringDefaultKeyName set. 
+func (ctx Context) + +WithKeyringDefaultKeyName(keyName string) + +Context { + ctx.KeyringDefaultKeyName = keyName + return ctx +} + +/ WithGenerateOnly returns a copy of the context with updated GenerateOnly value +func (ctx Context) + +WithGenerateOnly(generateOnly bool) + +Context { + ctx.GenerateOnly = generateOnly + return ctx +} + +/ WithSimulation returns a copy of the context with updated Simulate value +func (ctx Context) + +WithSimulation(simulate bool) + +Context { + ctx.Simulate = simulate + return ctx +} + +/ WithOffline returns a copy of the context with updated Offline value. +func (ctx Context) + +WithOffline(offline bool) + +Context { + ctx.Offline = offline + return ctx +} + +/ WithFromName returns a copy of the context with an updated from account name. +func (ctx Context) + +WithFromName(name string) + +Context { + ctx.FromName = name + return ctx +} + +/ WithFromAddress returns a copy of the context with an updated from account +/ address. +func (ctx Context) + +WithFromAddress(addr sdk.AccAddress) + +Context { + ctx.FromAddress = addr + return ctx +} + +/ WithFeePayerAddress returns a copy of the context with an updated fee payer account +/ address. +func (ctx Context) + +WithFeePayerAddress(addr sdk.AccAddress) + +Context { + ctx.FeePayer = addr + return ctx +} + +/ WithFeeGranterAddress returns a copy of the context with an updated fee granter account +/ address. +func (ctx Context) + +WithFeeGranterAddress(addr sdk.AccAddress) + +Context { + ctx.FeeGranter = addr + return ctx +} + +/ WithBroadcastMode returns a copy of the context with an updated broadcast +/ mode. +func (ctx Context) + +WithBroadcastMode(mode string) + +Context { + ctx.BroadcastMode = mode + return ctx +} + +/ WithSignModeStr returns a copy of the context with an updated SignMode +/ value. 
+func (ctx Context) + +WithSignModeStr(signModeStr string) + +Context { + ctx.SignModeStr = signModeStr + return ctx +} + +/ WithSkipConfirmation returns a copy of the context with an updated SkipConfirm +/ value. +func (ctx Context) + +WithSkipConfirmation(skip bool) + +Context { + ctx.SkipConfirm = skip + return ctx +} + +/ WithTxConfig returns the context with an updated TxConfig +func (ctx Context) + +WithTxConfig(generator TxConfig) + +Context { + ctx.TxConfig = generator + return ctx +} + +/ WithAccountRetriever returns the context with an updated AccountRetriever +func (ctx Context) + +WithAccountRetriever(retriever AccountRetriever) + +Context { + ctx.AccountRetriever = retriever + return ctx +} + +/ WithInterfaceRegistry returns the context with an updated InterfaceRegistry +func (ctx Context) + +WithInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) + +Context { + ctx.InterfaceRegistry = interfaceRegistry + return ctx +} + +/ WithViper returns the context with Viper field. This Viper instance is used to read +/ client-side config from the config file. +func (ctx Context) + +WithViper(prefix string) + +Context { + v := viper.New() + if prefix == "" { + executableName, _ := os.Executable() + +prefix = path.Base(executableName) +} + +v.SetEnvPrefix(prefix) + +v.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_")) + +v.AutomaticEnv() + +ctx.Viper = v + return ctx +} + +/ WithAux returns a copy of the context with an updated IsAux value. +func (ctx Context) + +WithAux(isAux bool) + +Context { + ctx.IsAux = isAux + return ctx +} + +/ WithLedgerHasProto returns the context with the provided boolean value, indicating +/ whether the target Ledger application can support Protobuf payloads. 
+func (ctx Context) + +WithLedgerHasProtobuf(val bool) + +Context { + ctx.LedgerHasProtobuf = val + return ctx +} + +/ WithPreprocessTxHook returns the context with the provided preprocessing hook, which +/ enables chains to preprocess the transaction using the builder. +func (ctx Context) + +WithPreprocessTxHook(preprocessFn PreprocessTxFn) + +Context { + ctx.PreprocessTxHook = preprocessFn + return ctx +} + +/ PrintString prints the raw string to ctx.Output if it's defined, otherwise to os.Stdout +func (ctx Context) + +PrintString(str string) + +error { + return ctx.PrintBytes([]byte(str)) +} + +/ PrintBytes prints the raw bytes to ctx.Output if it's defined, otherwise to os.Stdout. +/ NOTE: for printing a complex state object, you should use ctx.PrintOutput +func (ctx Context) + +PrintBytes(o []byte) + +error { + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err := writer.Write(o) + +return err +} + +/ PrintProto outputs toPrint to the ctx.Output based on ctx.OutputFormat which is +/ either text or json. If text, toPrint will be YAML encoded. Otherwise, toPrint +/ will be JSON encoded using ctx.Codec. An error is returned upon failure. +func (ctx Context) + +PrintProto(toPrint proto.Message) + +error { + / always serialize JSON initially because proto json can't be directly YAML encoded + out, err := ctx.Codec.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintObjectLegacy is a variant of PrintProto that doesn't require a proto.Message type +/ and uses amino JSON encoding. +/ Deprecated: It will be removed in the near future! +func (ctx Context) + +PrintObjectLegacy(toPrint any) + +error { + out, err := ctx.LegacyAmino.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintRaw is a variant of PrintProto that doesn't require a proto.Message type +/ and uses a raw JSON message. No marshaling is performed. 
+func (ctx Context) + +PrintRaw(toPrint json.RawMessage) + +error { + return ctx.printOutput(toPrint) +} + +func (ctx Context) + +printOutput(out []byte) + +error { + var err error + if ctx.OutputFormat == "text" { + out, err = yaml.JSONToYAML(out) + if err != nil { + return err +} + +} + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err = writer.Write(out) + if err != nil { + return err +} + if ctx.OutputFormat != "text" { + / append new-line for formats besides YAML + _, err = writer.Write([]byte("\n")) + if err != nil { + return err +} + +} + +return nil +} + +/ GetFromFields returns a from account address, account name and keyring type, given either an address or key name. +/ If clientCtx.Simulate is true the keystore is not accessed and a valid address must be provided +/ If clientCtx.GenerateOnly is true the keystore is only accessed if a key name is provided +/ If from is empty, the default key if specified in the context will be used +func GetFromFields(clientCtx Context, kr keyring.Keyring, from string) (sdk.AccAddress, string, keyring.KeyType, error) { + if from == "" && clientCtx.KeyringDefaultKeyName != "" { + from = clientCtx.KeyringDefaultKeyName + _ = clientCtx.PrintString(fmt.Sprintf("No key name or address provided; using the default key: %s\n", clientCtx.KeyringDefaultKeyName)) +} + if from == "" { + return nil, "", 0, nil +} + +addr, err := sdk.AccAddressFromBech32(from) + switch { + case clientCtx.Simulate: + if err != nil { + return nil, "", 0, fmt.Errorf("a valid bech32 address must be provided in simulation mode: %w", err) +} + +return addr, "", 0, nil + case clientCtx.GenerateOnly: + if err == nil { + return addr, "", 0, nil +} + +} + +var k *keyring.Record + if err == nil { + k, err = kr.KeyByAddress(addr) + if err != nil { + return nil, "", 0, err +} + +} + +else { + k, err = kr.Key(from) + if err != nil { + return nil, "", 0, err +} + +} + +addr, err = k.GetAddress() + if err != nil { + return nil, "", 0, err +} 
+
+return addr, k.Name, k.GetType(), nil
+}
+
+/ NewKeyringFromBackend gets a Keyring object from a backend
+func NewKeyringFromBackend(ctx Context, backend string) (keyring.Keyring, error) {
+	if ctx.Simulate {
+		backend = keyring.BackendMemory
+}
+
+return keyring.New(sdk.KeyringServiceName(), backend, ctx.KeyringDir, ctx.Input, ctx.Codec, ctx.KeyringOptions...)
+}
+```
+
+And that's a wrap! The result of the query is output to the console by the CLI.
diff --git a/docs/sdk/next/api-reference/telemetry-metrics/telemetry.mdx b/docs/sdk/next/api-reference/telemetry-metrics/telemetry.mdx
new file mode 100644
index 00000000..7583dd30
--- /dev/null
+++ b/docs/sdk/next/api-reference/telemetry-metrics/telemetry.mdx
@@ -0,0 +1,126 @@
+---
+title: Telemetry
+---
+
+## Synopsis
+
+Gather relevant insights about your application and modules with custom metrics and telemetry.
+
+The Cosmos SDK enables operators and developers to gain insight into the performance and behavior of
+their application through the use of the `telemetry` package. To enable telemetry, set `telemetry.enabled = true` in the app.toml config file.
+
+The Cosmos SDK currently supports the in-memory and Prometheus telemetry sinks. The in-memory sink is always attached (when telemetry is enabled) with a 10-second aggregation interval and 1-minute retention. This means that metrics are aggregated over 10 seconds and kept alive for 1 minute.
+
+To query active metrics (see the retention note above), you must enable the API server (`api.enabled = true` in app.toml). A single API endpoint is exposed: `http://localhost:1317/metrics?format={text|prometheus}`, the default being `text`.
+
+## Emitting metrics
+
+If telemetry is enabled via configuration, a single global metrics collector is registered via the
+[go-metrics](https://github.com/hashicorp/go-metrics) library. 
This allows emitting and collecting
+metrics through a simple [API](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/telemetry/wrapper.go). Example:

```go
func EndBlocker(ctx sdk.Context, k keeper.Keeper) {
  defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker)

  / ...
}
```

Developers may use the `telemetry` package directly, which provides wrappers around metric APIs
that include adding useful labels, or they may use the `go-metrics` library directly. It is preferable
to add as much context and adequate dimensionality to metrics as possible, so the `telemetry` package
is advised. Regardless of the package or method used, the Cosmos SDK supports the following metric
types:

- gauges
- summaries
- counters

## Labels

Certain components of modules will have their name automatically added as a label (e.g. `BeginBlock`).
Operators may also supply the application with a global set of labels that will be applied to all
metrics emitted using the `telemetry` package (e.g. chain-id). Global labels are supplied as a list
of \[name, value] tuples.

Example:

```toml
global-labels = [
  ["chain_id", "chain-OfXo4V"],
]
```

## Cardinality

Cardinality is key, specifically label and key cardinality. Cardinality is how many unique values of
something there are. So there is naturally a tradeoff between granularity and how much stress is put
on the telemetry sink in terms of indexing, scrape, and query performance.

Developers should take care to support metrics with enough dimensionality and granularity to be
useful, but not increase the cardinality beyond the sink's limits. A general rule of thumb is to not
exceed a cardinality of 10.
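One way to see that rule of thumb concretely: the number of distinct time series a metric produces is (roughly) the product of the unique-value counts of its labels. A quick illustrative helper, not part of the SDK telemetry package:

```go
package main

import "fmt"

// seriesCount returns how many distinct time series a metric produces:
// the product of the number of unique values each label can take.
// Illustrative helper only; not part of the SDK.
func seriesCount(labelCardinalities ...int) int {
	n := 1
	for _, c := range labelCardinalities {
		n *= c
	}
	return n
}

func main() {
	// A "denom" label over a handful of denoms stays cheap.
	fmt.Println(seriesCount(5)) // 5
	// Labelling by sender address over 100k accounts does not.
	fmt.Println(seriesCount(100000, 5)) // 500000
}
```

A per-denom label multiplies the series count by a handful; a per-address label multiplies it by the number of accounts, which is exactly why the per-account examples in the next list are problematic.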
+ +Consider the following examples with enough granularity and adequate cardinality: + +- begin/end blocker time +- tx gas used +- block gas used +- amount of tokens minted +- amount of accounts created + +The following examples expose too much cardinality and may not even prove to be useful: + +- transfers between accounts with amount +- voting/deposit amount from unique addresses + +## Supported Metrics + +| Metric | Description | Unit | Type | +| :------------------------------ | :---------------------------------------------------------------------------------------- | :-------------- | :------ | +| `tx_count` | Total number of txs processed via `DeliverTx` | tx | counter | +| `tx_successful` | Total number of successful txs processed via `DeliverTx` | tx | counter | +| `tx_failed` | Total number of failed txs processed via `DeliverTx` | tx | counter | +| `tx_gas_used` | The total amount of gas used by a tx | gas | gauge | +| `tx_gas_wanted` | The total amount of gas requested by a tx | gas | gauge | +| `tx_msg_send` | The total amount of tokens sent in a `MsgSend` (per denom) | token | gauge | +| `tx_msg_withdraw_reward` | The total amount of tokens withdrawn in a `MsgWithdrawDelegatorReward` (per denom) | token | gauge | +| `tx_msg_withdraw_commission` | The total amount of tokens withdrawn in a `MsgWithdrawValidatorCommission` (per denom) | token | gauge | +| `tx_msg_delegate` | The total amount of tokens delegated in a `MsgDelegate` | token | gauge | +| `tx_msg_begin_unbonding` | The total amount of tokens undelegated in a `MsgUndelegate` | token | gauge | +| `tx_msg_begin_redelegate` | The total amount of tokens redelegated in a `MsgBeginRedelegate` | token | gauge | +| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | +| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | +| `new_account` 
| Total number of new accounts created | account | counter | +| `gov_proposal` | Total number of governance proposals | proposal | counter | +| `gov_vote` | Total number of governance votes for a proposal | vote | counter | +| `gov_deposit` | Total number of governance deposits for a proposal | deposit | counter | +| `staking_delegate` | Total number of delegations | delegation | counter | +| `staking_undelegate` | Total number of undelegations | undelegation | counter | +| `staking_redelegate` | Total number of redelegations | redelegation | counter | +| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | +| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | +| `ibc_client_create` | Total number of clients created | create | counter | +| `ibc_client_update` | Total number of client updates | update | counter | +| `ibc_client_upgrade` | Total number of client upgrades | upgrade | counter | +| `ibc_client_misbehaviour` | Total number of client misbehaviours | misbehaviour | counter | +| `ibc_connection_open-init` | Total number of connection `OpenInit` handshakes | handshake | counter | +| `ibc_connection_open-try` | Total number of connection `OpenTry` handshakes | handshake | counter | +| `ibc_connection_open-ack` | Total number of connection `OpenAck` handshakes | handshake | counter | +| `ibc_connection_open-confirm` | Total number of connection `OpenConfirm` handshakes | handshake | counter | +| `ibc_channel_open-init` | Total number of channel `OpenInit` handshakes | handshake | counter | +| `ibc_channel_open-try` | Total number of channel `OpenTry` handshakes | handshake | counter | +| `ibc_channel_open-ack` | Total number of channel `OpenAck` handshakes | handshake | counter | +| `ibc_channel_open-confirm` | Total number of channel `OpenConfirm` handshakes | handshake | counter | +| `ibc_channel_close-init` | Total number of channel 
`CloseInit` handshakes | handshake | counter | +| `ibc_channel_close-confirm` | Total number of channel `CloseConfirm` handshakes | handshake | counter | +| `tx_msg_ibc_recv_packet` | Total number of IBC packets received | packet | counter | +| `tx_msg_ibc_acknowledge_packet` | Total number of IBC packets acknowledged | acknowledgement | counter | +| `ibc_timeout_packet` | Total number of IBC timeout packets | timeout | counter | +| `store_iavl_get` | Duration of an IAVL `Store#Get` call | ms | summary | +| `store_iavl_set` | Duration of an IAVL `Store#Set` call | ms | summary | +| `store_iavl_has` | Duration of an IAVL `Store#Has` call | ms | summary | +| `store_iavl_delete` | Duration of an IAVL `Store#Delete` call | ms | summary | +| `store_iavl_commit` | Duration of an IAVL `Store#Commit` call | ms | summary | +| `store_iavl_query` | Duration of an IAVL `Store#Query` call | ms | summary | diff --git a/docs/sdk/next/changelog/UPGRADING.md b/docs/sdk/next/changelog/UPGRADING.md new file mode 100644 index 00000000..a1136153 --- /dev/null +++ b/docs/sdk/next/changelog/UPGRADING.md @@ -0,0 +1,75 @@ +# Upgrading Cosmos SDK + +This document provides information on upgrading between versions of the Cosmos SDK. + +## Migration to CometBFT + +The Cosmos SDK has migrated from Tendermint to CometBFT. This is the most significant change affecting all applications. + +### Breaking Changes + +- **Package imports**: All Tendermint imports must be updated to CometBFT +- **Binary names**: `tendermint` binary is now `cometbft` +- **Configuration**: Some configuration parameters have changed + +### Migration Steps + +1. **Update imports** in your application code: + ```diff + - import "github.com/tendermint/tendermint/..." + + import "github.com/cometbft/cometbft/..." + ``` + +2. **Update dependencies** in go.mod: + ```bash + go mod edit -replace github.com/tendermint/tendermint=github.com/cometbft/cometbft@v0.37.x + ``` + +3. 
**Update binary references** in scripts and documentation + +4. **Review configuration changes** in your app configuration + +### Additional Resources + +- [CometBFT Migration Guide](https://docs.cometbft.com/v0.37/guides/migration) +- [Cosmos SDK Release Notes](/docs/sdk/next/changelog/release-notes) + +## Module Changes + +### New Modules + +- **Circuit Breaker**: New module for emergency circuit breaking functionality +- **Protocol Pool**: Module for managing community pool funds + +### Updated Modules + +- **Gov**: Enhanced governance with multiple choice proposals +- **Staking**: Improved validator set management +- **Bank**: Enhanced multi-asset support + +### Deprecated Features + +- Legacy amino signing: Migrate to protobuf-based signing +- Legacy REST endpoints: Use gRPC or gRPC-gateway instead + +## API Changes + +### Breaking API Changes + +- Several keeper interfaces have been updated +- Message types have been restructured for better extensibility +- Query responses now include additional metadata + +### New APIs + +- Enhanced ABCI 2.0 support +- Improved mempool interfaces +- Extended telemetry capabilities + +## Testing Updates + +- Test utilities have been enhanced +- New simulation framework features +- Improved integration test helpers + +For detailed migration steps and examples, see the version-specific migration guides in the SDK repository. \ No newline at end of file diff --git a/docs/sdk/next/changelog/release-notes.mdx b/docs/sdk/next/changelog/release-notes.mdx new file mode 100644 index 00000000..743a0564 --- /dev/null +++ b/docs/sdk/next/changelog/release-notes.mdx @@ -0,0 +1,656 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + + This page tracks all releases and changes from the + [cosmos/cosmos-sdk](https://github.com/cosmos/cosmos-sdk) repository. 
For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/cosmos-sdk/blob/main/CHANGELOG.md#unreleased) + section. + + + + + ### Features + * (simsx) [#24062](https://github.com/cosmos/cosmos-sdk/pull/24062) [#24145](https://github.com/cosmos/cosmos-sdk/pull/24145) Add new simsx framework on top of simulations for better module dev experience. + * (baseapp) [#24069](https://github.com/cosmos/cosmos-sdk/pull/24069) Create CheckTxHandler to allow extending the logic of CheckTx. + * (types) [#24093](https://github.com/cosmos/cosmos-sdk/pull/24093) Added a new method, `IsGT`, for `types.Coin`. This method is used to check if a `types.Coin` is greater than another `types.Coin`. + * (client/keys) [#24071](https://github.com/cosmos/cosmos-sdk/pull/24071) Add support for importing hex key using standard input. + * (types) [#23780](https://github.com/cosmos/cosmos-sdk/pull/23780) Add a ValueCodec for the math.Uint type that can be used in collections maps. + * (perf)[#24045](https://github.com/cosmos/cosmos-sdk/pull/24045) Sims: Replace runsim command with Go stdlib testing. CLI: `Commit` default true, `Lean`, `SimulateEveryOperation`, `PrintAllInvariants`, `DBBackend` params removed + * (crypto/keyring) [#24040](https://github.com/cosmos/cosmos-sdk/pull/24040) Expose the db keyring used in the keystore. + * (types) [#23919](https://github.com/cosmos/cosmos-sdk/pull/23919) Add MustValAddressFromBech32 function. + * (all) [#23708](https://github.com/cosmos/cosmos-sdk/pull/23708) Add unordered transaction support. + * Adds a `--timeout-timestamp` flag that allows users to specify a block time at which the unordered transactions should expire from the mempool. + * (x/epochs) [#23815](https://github.com/cosmos/cosmos-sdk/pull/23815) Upstream `x/epochs` from Osmosis + * (client) [#23811](https://github.com/cosmos/cosmos-sdk/pull/23811) Add auto cli for node service. 
+ * (genutil) [#24018](https://github.com/cosmos/cosmos-sdk/pull/24018) Allow manually setting the consensus key type in genesis + * (client) [#18557](https://github.com/cosmos/cosmos-sdk/pull/18557) Add `--qrcode` flag to `keys show` command to support displaying keys address QR code. + * (x/auth) [#24030](https://github.com/cosmos/cosmos-sdk/pull/24030) Allow usage of ed25519 keys for transaction signing. + * (baseapp) [#24163](https://github.com/cosmos/cosmos-sdk/pull/24163) Add `StreamingManager` to baseapp to extend the abci listeners. + * (x/protocolpool) [#23933](https://github.com/cosmos/cosmos-sdk/pull/23933) Add x/protocolpool module. + * x/distribution can now utilize an externally managed community pool. NOTE: this will make the message handlers for FundCommunityPool and CommunityPoolSpend error, as well as the query handler for CommunityPool. + * (client) [#18101](https://github.com/cosmos/cosmos-sdk/pull/18101) Add a `keyring-default-keyname` in `client.toml` for specifying a default key name, and skip the need to use the `--from` flag when signing transactions. + * (x/gov) [#24355](https://github.com/cosmos/cosmos-sdk/pull/24355) Allow users to set a custom CalculateVoteResultsAndVotingPower function to be used in govkeeper.Tally. + * (x/mint) [#24436](https://github.com/cosmos/cosmos-sdk/pull/24436) Allow users to set a custom minting function used in the `x/mint` begin blocker. + * The `InflationCalculationFn` argument to `mint.NewAppModule()` is now ignored and must be nil. To set a custom `InflationCalculationFn` on the default minter, use `mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(customInflationFn))`. + * (api) [#24428](https://github.com/cosmos/cosmos-sdk/pull/24428) Add block height to response headers + ### Improvements + * (x/feegrant) [24461](https://github.com/cosmos/cosmos-sdk/pull/24461) Use collections for `FeeAllowance`, `FeeAllowanceQueue`. 
+ * (client) [#24561](https://github.com/cosmos/cosmos-sdk/pull/24561) TimeoutTimestamp flag has been changed to TimeoutDuration, which now sets the timeout timestamp of unordered transactions to the current time + duration passed. + * (telemetry) [#24541](https://github.com/cosmos/cosmos-sdk/pull/24541) Telemetry now includes a pre_blocker metric key. x/upgrade should migrate to this key in v0.54.0. + * (x/auth) [#24541](https://github.com/cosmos/cosmos-sdk/pull/24541) x/auth's PreBlocker now emits telemetry under the pre_blocker metric key. + * (x/bank) [#24431](https://github.com/cosmos/cosmos-sdk/pull/24431) Reduce the number of `ValidateDenom` calls in `bank.SendCoins` and `Coin`. + * The `AmountOf()` method on`sdk.Coins` no longer will `panic` if given an invalid denom and will instead return a zero value. + * (x/staking) [#24391](https://github.com/cosmos/cosmos-sdk/pull/24391) Replace panics with error results; more verbose error messages + * (x/staking) [#24354](https://github.com/cosmos/cosmos-sdk/pull/24354) Optimize validator endblock by reducing bech32 conversions, resulting in significant performance improvement + * (client/keys) [#18950](https://github.com/cosmos/cosmos-sdk/pull/18950) Improve ` keys add`, ` keys import` and ` keys rename` by checking name validation. + * (client/keys) [#18703](https://github.com/cosmos/cosmos-sdk/pull/18703) Improve ` keys add` and ` keys show` by checking whether there are duplicate keys in the multisig case. + * (client/keys) [#18745](https://github.com/cosmos/cosmos-sdk/pull/18745) Improve ` keys export` and ` keys mnemonic` by adding --yes option to skip interactive confirmation. + * (x/bank) [#24106](https://github.com/cosmos/cosmos-sdk/pull/24106) `SendCoins` now checks for `SendRestrictions` before instead of after deducting coins using `subUnlockedCoins`. 
+ * (crypto/ledger) [#24036](https://github.com/cosmos/cosmos-sdk/pull/24036) Improve error message when deriving paths using index > 100 + * (gRPC) [#23844](https://github.com/cosmos/cosmos-sdk/pull/23844) Add debug log prints for each gRPC request. + * (gRPC) [#24073](https://github.com/cosmos/cosmos-sdk/pull/24073) Adds error handling for out-of-gas panics in grpc query handlers. + * (server) [#24072](https://github.com/cosmos/cosmos-sdk/pull/24072) Return BlockHeader by shallow copy in server Context. + * (x/bank) [#24053](https://github.com/cosmos/cosmos-sdk/pull/24053) Resolve a foot-gun by swapping send restrictions check in `InputOutputCoins` before coin deduction. + * (codec/types) [#24336](https://github.com/cosmos/cosmos-sdk/pull/24336) Most types definitions were moved to `github.com/cosmos/gogoproto/types/any` with aliases to these left in `codec/types` so that there should be no breakage to existing code. This allows protobuf generated code to optionally reference the SDK's custom `Any` type without a direct dependency on the SDK. This can be done by changing the `protoc` `M` parameter for `any.proto` to `Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any`. + ### Bug Fixes + * (server)[#24583](https://github.com/cosmos/cosmos-sdk/pull/24583) Fix height calculation in pruning manager and better restart handling. + * (x/gov)[#24460](https://github.com/cosmos/cosmos-sdk/pull/24460) Do not call Remove during Walk in defaultCalculateVoteResultsAndVotingPower. + * (baseapp) [24261](https://github.com/cosmos/cosmos-sdk/pull/24261) Fix post handler error always results in code 1 + * (server) [#24068](https://github.com/cosmos/cosmos-sdk/pull/24068) Allow align block header with skip check header in grpc server. + * (x/gov) [#24044](https://github.com/cosmos/cosmos-sdk/pull/24044) Fix some places in which we call Remove inside a Walk (x/gov). 
+ * (baseapp) [#24042](https://github.com/cosmos/cosmos-sdk/pull/24042) Fixed a data race inside BaseApp.getContext, found by end-to-end (e2e) tests.
+ * (client/server) [#24059](https://github.com/cosmos/cosmos-sdk/pull/24059) Consistently set the viper prefix in client and server. It defaults to the binary name for both client and server.
+ * (client/keys) [#24041](https://github.com/cosmos/cosmos-sdk/pull/24041) `keys delete` won't terminate when a key is not found, but will log the error.
+ * (baseapp) [#24027](https://github.com/cosmos/cosmos-sdk/pull/24027) Ensure that `BaseApp.Init` checks that the commit multistore is set to protect against nil dereferences.
+ * (x/group) [GHSA-47ww-ff84-4jrg](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-47ww-ff84-4jrg) Fix x/group can halt when erroring in EndBlocker.
+ * (x/distribution) [#23934](https://github.com/cosmos/cosmos-sdk/pull/23934) Fix vulnerability in `incrementReferenceCount` in distribution.
+ * (baseapp) [#23879](https://github.com/cosmos/cosmos-sdk/pull/23879) Ensure the finalize block response is not empty in the defer check of FinalizeBlock to avoid a nil-pointer panic.
+ * (query) [#23883](https://github.com/cosmos/cosmos-sdk/pull/23883) Fix NPE in query pagination.
+ * (client) [#23860](https://github.com/cosmos/cosmos-sdk/pull/23860) Add missing `unordered` field for legacy amino signing of tx body.
+ * (x/bank) [#23836](https://github.com/cosmos/cosmos-sdk/pull/23836) Fix `DenomMetadata` rpc to allow values with slashes.
+ * (query) [87d3a43](https://github.com/cosmos/cosmos-sdk/commit/87d3a432af95f4cf96aa02351ed5fcc51cca6e7b) Fix collection filtered pagination.
+ * (sims) [#23952](https://github.com/cosmos/cosmos-sdk/pull/23952) Use liveness matrix for validator sign status in sims.
+ * (baseapp) [#24055](https://github.com/cosmos/cosmos-sdk/pull/24055) Align block header when querying with latest height.
+ * (baseapp) [#24074](https://github.com/cosmos/cosmos-sdk/pull/24074) Use CometBFT's ComputeProtoSizeForTxs in defaultTxSelector.SelectTxForProposal for consistency.
+ * (cli) [#24090](https://github.com/cosmos/cosmos-sdk/pull/24090) Prune cmd should disable async pruning.
+ * (x/auth) [#19239](https://github.com/cosmos/cosmos-sdk/pull/19239) Sets from flag in multi-sign command to avoid no key name provided error.
+ * (x/auth) [#23741](https://github.com/cosmos/cosmos-sdk/pull/23741) Support legacy global AccountNumber for legacy compatibility.
+ * (baseapp) [#24526](https://github.com/cosmos/cosmos-sdk/pull/24526) Fix incorrect retention height when `commitHeight` equals `minRetainBlocks`.
+ * (x/protocolpool) [#24594](https://github.com/cosmos/cosmos-sdk/pull/24594) Fix NPE when initializing module via depinject.
+ * (x/epochs) [#24610](https://github.com/cosmos/cosmos-sdk/pull/24610) Fix semantics of `CurrentEpochStartHeight` being set before epoch has started.
+
+ ### Bug Fixes
+
+ * [GHSA-47ww-ff84-4jrg](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-47ww-ff84-4jrg) Fix x/group can halt when erroring in EndBlocker
+
+ ### Bug Fixes
+
+ * [GHSA-x5vx-95h7-rv4p](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-x5vx-95h7-rv4p) Fix Group module can halt chain when handling a malicious proposal.
+
+ ### Features
+
+ * (crypto/keyring) [#21653](https://github.com/cosmos/cosmos-sdk/pull/21653) New Linux-only backend that adds Linux kernel's `keyctl` support.
+
+ ### Improvements
+
+ * (server) [#21941](https://github.com/cosmos/cosmos-sdk/pull/21941) Regenerate addrbook.json for in place testnet.
+ ### Bug Fixes
+
+ * Fix [ABS-0043/ABS-0044](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-8wcc-m6j2-qxvm) Limit recursion depth for unknown field detection and unpack any
+ * (server) [#22564](https://github.com/cosmos/cosmos-sdk/pull/22564) Fix fallback genesis path in server
+ * (x/group) [#22425](https://github.com/cosmos/cosmos-sdk/pull/22425) Proper address rendering in error
+ * (sims) [#21906](https://github.com/cosmos/cosmos-sdk/pull/21906) Skip sims test when running dry on validators
+ * (cli) [#21919](https://github.com/cosmos/cosmos-sdk/pull/21919) Query address-by-acc-num by account_id instead of id.
+ * (x/group) [#22229](https://github.com/cosmos/cosmos-sdk/pull/22229) Accept `1` and `try` in CLI for group proposal exec.
+
+ ### Features
+
+ * (cli) [#20779](https://github.com/cosmos/cosmos-sdk/pull/20779) Added `module-hash-by-height` command to query and retrieve module hashes at a specified blockchain height, enhancing debugging capabilities.
+ * (cli) [#21372](https://github.com/cosmos/cosmos-sdk/pull/21372) Added a `bulk-add-genesis-account` genesis command to add many genesis accounts at once.
+ * (types/collections) [#21724](https://github.com/cosmos/cosmos-sdk/pull/21724) Added `LegacyDec` collection value.
+
+ ### Improvements
+
+ * (x/bank) [#21460](https://github.com/cosmos/cosmos-sdk/pull/21460) Added `Sender` attribute in `MsgMultiSend` event.
+ * (genutil) [#21701](https://github.com/cosmos/cosmos-sdk/pull/21701) Improved error messages for genesis validation.
+ * (testutil/integration) [#21816](https://github.com/cosmos/cosmos-sdk/pull/21816) Allow to pass baseapp options in `NewIntegrationApp`.
+
+ ### Bug Fixes
+
+ * (runtime) [#21769](https://github.com/cosmos/cosmos-sdk/pull/21769) Fix baseapp options ordering to avoid overwriting options set by modules.
+ * (x/consensus) [#21493](https://github.com/cosmos/cosmos-sdk/pull/21493) Fix regression that prevented upgrading to > v0.50.7 without consensus version params.
+ * (baseapp) [#21256](https://github.com/cosmos/cosmos-sdk/pull/21256) Halt height will not commit the block indicated, meaning that if halt-height is set to 10, only blocks up to 9 (included) will be committed. This is to go back to the original behavior before a change was introduced in v0.50.0.
+ * (baseapp) [#21444](https://github.com/cosmos/cosmos-sdk/pull/21444) Follow-up, Return PreBlocker events in FinalizeBlockResponse.
+ * (baseapp) [#21413](https://github.com/cosmos/cosmos-sdk/pull/21413) Fix data race in sdk mempool.
+
+ * (baseapp) [#21159](https://github.com/cosmos/cosmos-sdk/pull/21159) Return PreBlocker events in FinalizeBlockResponse.
+ * [#20939](https://github.com/cosmos/cosmos-sdk/pull/20939) Fix collection reverse iterator to include `pagination.key` in the result.
+ * (client/grpc) [#20969](https://github.com/cosmos/cosmos-sdk/pull/20969) Fix `node.NewQueryServer` method not setting `cfg`.
+ * (testutil/integration) [#21006](https://github.com/cosmos/cosmos-sdk/pull/21006) Fix `NewIntegrationApp` method not writing default genesis to state.
+ * (runtime) [#21080](https://github.com/cosmos/cosmos-sdk/pull/21080) Fix `app.yaml` / `app.json` incompatibility with `depinject v1.0.0`.
+
+ * (client) [#20690](https://github.com/cosmos/cosmos-sdk/pull/20690) Import mnemonic from file
+ * (x/authz,x/feegrant) [#20590](https://github.com/cosmos/cosmos-sdk/pull/20590) Provide updated keeper in depinject for authz and feegrant modules.
+ * [#20631](https://github.com/cosmos/cosmos-sdk/pull/20631) Fix json parsing in the wait-tx command.
+ * (x/auth) [#20438](https://github.com/cosmos/cosmos-sdk/pull/20438) Add `--skip-signature-verification` flag to multisign command to allow nested multisigs.
+ ### Improvements
+
+ * (debug) [#20328](https://github.com/cosmos/cosmos-sdk/pull/20328) Add consensus address for debug cmd.
+ * (runtime) [#20264](https://github.com/cosmos/cosmos-sdk/pull/20264) Expose grpc query router via depinject.
+ * (x/consensus) [#20381](https://github.com/cosmos/cosmos-sdk/pull/20381) Use Comet utility for consensus module consensus param updates.
+ * (client) [#20356](https://github.com/cosmos/cosmos-sdk/pull/20356) Overwrite client context when available in `SetCmdClientContext`.
+
+ ### Bug Fixes
+
+ * (simulation) [#17911](https://github.com/cosmos/cosmos-sdk/pull/17911) Fix all problems with executing command `make test-sim-custom-genesis-fast` for simulation test.
+ * (simulation) [#18196](https://github.com/cosmos/cosmos-sdk/pull/18196) Fix the problem of `validator set is empty after InitGenesis` in simulation test.
+ * (baseapp) [#20346](https://github.com/cosmos/cosmos-sdk/pull/20346) Correctly assign `execModeSimulate` to context for `simulateTx`.
+ * (baseapp) [#20144](https://github.com/cosmos/cosmos-sdk/pull/20144) Remove txs from mempool when AnteHandler fails in recheck.
+ * (baseapp) [#20107](https://github.com/cosmos/cosmos-sdk/pull/20107) Avoid header height overwriting block height.
+ * (cli) [#20020](https://github.com/cosmos/cosmos-sdk/pull/20020) Make bootstrap-state command support both new and legacy genesis format.
+ * (testutil/sims) [#20151](https://github.com/cosmos/cosmos-sdk/pull/20151) Set all signatures and don't overwrite the previous one in `GenSignedMockTx`.
+
+ ### Features
+
+ * (types) [#19759](https://github.com/cosmos/cosmos-sdk/pull/19759) Align SignerExtractionAdapter in PriorityNonceMempool Remove.
+ * (client) [#19870](https://github.com/cosmos/cosmos-sdk/pull/19870) Add new query command `wait-tx`. Alias `event-query-tx-for` to `wait-tx` for backward compatibility.
+ ### Improvements
+
+ * (telemetry) [#19903](https://github.com/cosmos/cosmos-sdk/pull/19903) Conditionally emit metrics based on enablement.
+   * **Introduction of `Now` Function**: Added a new function called `Now` to the telemetry package. It returns the current system time if telemetry is enabled, or a zero time if telemetry is not enabled.
+   * **Atomic Global Variable**: Implemented an atomic global variable to manage the state of telemetry's enablement. This ensures thread safety for the telemetry state.
+   * **Conditional Telemetry Emission**: All telemetry functions have been updated to emit metrics only when telemetry is enabled. They perform a check with `isTelemetryEnabled()` and return early if telemetry is disabled, minimizing unnecessary operations and overhead.
+ * (deps) [#19810](https://github.com/cosmos/cosmos-sdk/pull/19810) Upgrade prometheus version and fix API breaking change due to prometheus bump.
+ * (deps) [#19810](https://github.com/cosmos/cosmos-sdk/pull/19810) Bump `cosmossdk.io/store` to v1.1.0.
+ * (server) [#19884](https://github.com/cosmos/cosmos-sdk/pull/19884) Add start customizability to start command options.
+ * (x/gov) [#19853](https://github.com/cosmos/cosmos-sdk/pull/19853) Emit `depositor` in `EventTypeProposalDeposit`.
+ * (x/gov) [#19844](https://github.com/cosmos/cosmos-sdk/pull/19844) Emit the proposer of governance proposals.
+ * (baseapp) [#19616](https://github.com/cosmos/cosmos-sdk/pull/19616) Don't share gas meter in tx execution.
+ * (x/authz) [#20114](https://github.com/cosmos/cosmos-sdk/pull/20114) Follow up of [GHSA-4j93-fm92-rp4m](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-4j93-fm92-rp4m) for `x/authz`.
+ * (crypto) [#19691](https://github.com/cosmos/cosmos-sdk/pull/19745) Fix tx sign doesn't throw an error when incorrect Ledger is used.
+ * (baseapp) [#19970](https://github.com/cosmos/cosmos-sdk/pull/19970) Fix default config values to use no-op mempool as default.
+ * (crypto) [#20027](https://github.com/cosmos/cosmos-sdk/pull/20027) secp256r1 keys now implement gogoproto's customtype interface.
+ * (x/bank) [#20028](https://github.com/cosmos/cosmos-sdk/pull/20028) Align query with multi denoms for send-enabled.
+
+ ### Features
+
+ * (baseapp) [#19626](https://github.com/cosmos/cosmos-sdk/pull/19626) Add `DisableBlockGasMeter` option to `BaseApp`, which removes the block gas meter during transaction execution.
+
+ ### Improvements
+
+ * (x/distribution) [#19707](https://github.com/cosmos/cosmos-sdk/pull/19707) Add autocli config for `DelegationTotalRewards` for CLI consistency with `q rewards` commands in previous versions.
+ * (x/auth) [#19651](https://github.com/cosmos/cosmos-sdk/pull/19651) Allow empty public keys in `GetSignBytesAdapter`.
+
+ ### Bug Fixes
+
+ * (x/gov) [#19725](https://github.com/cosmos/cosmos-sdk/pull/19725) Fetch a failed proposal tally from proposal.FinalTallyResult in the gRPC query.
+ * (types) [#19709](https://github.com/cosmos/cosmos-sdk/pull/19709) Fix skip staking genesis export when using `CoreAppModuleAdaptor` / `CoreAppModuleBasicAdaptor` for it.
+ * (x/auth) [#19549](https://github.com/cosmos/cosmos-sdk/pull/19549) Accept custom get signers when injecting `x/auth/tx`.
+ * (x/staking) Fix a possible bypass of delegator slashing: [GHSA-86h5-xcpx-cfqc](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-86h5-xcpx-cfqc)
+ * (baseapp) Fix a bug in `baseapp.ValidateVoteExtensions` helper ([GHSA-95rx-m9m5-m94v](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-95rx-m9m5-m94v)). The helper has been fixed and, to avoid API breaking changes, the `currentHeight` and `chainID` arguments are ignored. Those arguments are removed from the helper in v0.51+.
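+ The telemetry gating described in [#19903](https://github.com/cosmos/cosmos-sdk/pull/19903) above (atomic enablement flag, `Now` returning a zero time when disabled) can be sketched with stdlib primitives. This is an illustrative sketch, not the SDK's actual implementation; the names `globalTelemetryEnabled` and `Now` merely follow the changelog's description.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// Illustrative only: an atomic flag guards telemetry enablement,
// ensuring thread-safe reads on hot paths.
var globalTelemetryEnabled atomic.Bool

// Now returns the current system time when telemetry is enabled,
// and the zero time otherwise, so disabled paths return early and cheaply.
func Now() time.Time {
	if !globalTelemetryEnabled.Load() {
		return time.Time{}
	}
	return time.Now()
}

func main() {
	// atomic.Bool zero value is false, so telemetry starts disabled.
	fmt.Println(Now().IsZero()) // prints true

	globalTelemetryEnabled.Store(true)
	fmt.Println(Now().IsZero()) // prints false
}
```

+ Metric-emitting helpers would follow the same shape: check the flag first and return early when disabled, avoiding any work on the hot path.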
+ ### Features
+
+ * (server) [#19280](https://github.com/cosmos/cosmos-sdk/pull/19280) Adds in-place testnet CLI command.
+
+ ### Improvements
+
+ * (client) [#19393](https://github.com/cosmos/cosmos-sdk/pull/19393/) Add `ReadDefaultValuesFromDefaultClientConfig` to populate the default values from the default client config in client.Context without creating an app folder.
+
+ ### Bug Fixes
+
+ * (x/auth/vesting) [GHSA-4j93-fm92-rp4m](#bug-fixes) Add `BlockedAddr` check in `CreatePeriodicVestingAccount`.
+ * (baseapp) [#19338](https://github.com/cosmos/cosmos-sdk/pull/19338) Set HeaderInfo in context when calling `setState`.
+ * (baseapp) [#19200](https://github.com/cosmos/cosmos-sdk/pull/19200) Ensure that sdk side ve math matches cometbft.
+ * [#19106](https://github.com/cosmos/cosmos-sdk/pull/19106) Allow empty public keys when setting signatures. Public keys aren't needed for every transaction.
+ * (baseapp) [#19198](https://github.com/cosmos/cosmos-sdk/pull/19198) Remove usage of pointers in logs in all optimistic execution goroutines.
+ * (baseapp) [#19177](https://github.com/cosmos/cosmos-sdk/pull/19177) Fix baseapp `DefaultProposalHandler` same-sender non-sequential sequence.
+ * (crypto) [#19371](https://github.com/cosmos/cosmos-sdk/pull/19371) Avoid CLI redundant log in stdout, log to stderr instead.
+
+ ### Features
+
+ * (types) [#18991](https://github.com/cosmos/cosmos-sdk/pull/18991) Add SignerExtractionAdapter to PriorityNonceMempool/Config and provide Default implementation matching existing behavior.
+ * (gRPC) [#19043](https://github.com/cosmos/cosmos-sdk/pull/19043) Add `halt_height` to the gRPC `/cosmos/base/node/v1beta1/config` request.
+
+ ### Improvements
+
+ * (x/bank) [#18956](https://github.com/cosmos/cosmos-sdk/pull/18956) Introduced a new `DenomOwnersByQuery` query method for `DenomOwners`, which accepts the denom value as a query string parameter, resolving issues with denoms containing slashes.
+ * (x/gov) [#18707](https://github.com/cosmos/cosmos-sdk/pull/18707) Improve genesis validation.
+ * (x/auth/tx) [#18772](https://github.com/cosmos/cosmos-sdk/pull/18772) Remove misleading gas wanted from tx simulation failure log.
+ * (client/tx) [#18852](https://github.com/cosmos/cosmos-sdk/pull/18852) Add `WithFromName` to tx factory.
+ * (types) [#18888](https://github.com/cosmos/cosmos-sdk/pull/18888) Speedup DecCoin.Sort() if len(coins) \<= 1
+ * (types) [#18875](https://github.com/cosmos/cosmos-sdk/pull/18875) Speedup coins.Sort() if len(coins) \<= 1
+ * (baseapp) [#18915](https://github.com/cosmos/cosmos-sdk/pull/18915) Add a new `ExecModeVerifyVoteExtension` exec mode and ensure it's populated in the `Context` during `VerifyVoteExtension` execution.
+ * (testutil) [#18930](https://github.com/cosmos/cosmos-sdk/pull/18930) Add NodeURI for clientCtx.
+
+ ### Bug Fixes
+
+ * (baseapp) [#19058](https://github.com/cosmos/cosmos-sdk/pull/19058) Fix baseapp posthandler branch would fail if the `runMsgs` had returned an error.
+ * (baseapp) [#18609](https://github.com/cosmos/cosmos-sdk/issues/18609) Fixed accounting in the block gas meter after module's beginBlock and before DeliverTx, ensuring transaction processing always starts with the expected zeroed out block gas meter.
+ * (baseapp) [#18895](https://github.com/cosmos/cosmos-sdk/pull/18895) Fix de-duplicating vote extensions during validation in ValidateVoteExtensions.
+
+ ### Features
+
+ * (debug) [#18219](https://github.com/cosmos/cosmos-sdk/pull/18219) Add debug commands for application codec types.
+ * (client/keys) [#17639](https://github.com/cosmos/cosmos-sdk/pull/17639) Allows using and saving public keys encoded as base64.
+ * (server) [#17094](https://github.com/cosmos/cosmos-sdk/pull/17094) Add a `shutdown-grace` flag for waiting a given time before exit.
+
+ ### Improvements
+
+ * (telemetry) [#18646](https://github.com/cosmos/cosmos-sdk/pull/18646) Enable statsd and dogstatsd telemetry sinks.
+ * (server) [#18478](https://github.com/cosmos/cosmos-sdk/pull/18478) Add command flag to disable colored logs.
+ * (x/gov) [#18025](https://github.com/cosmos/cosmos-sdk/pull/18025) Improve ` q gov proposer` by querying a proposal directly instead of tx events. It is an alias of `q gov proposal` as the proposer is a field of the proposal.
+ * (version) [#18063](https://github.com/cosmos/cosmos-sdk/pull/18063) Allow defining extra info to be displayed in the ` version --long` command.
+ * (codec/unknownproto) [#18541](https://github.com/cosmos/cosmos-sdk/pull/18541) Remove the use of "protoc-gen-gogo/descriptor" in favour of using the official protobuf descriptorpb types inside unknownproto.
+
+ ### Bug Fixes
+
+ * (x/auth) [#18564](https://github.com/cosmos/cosmos-sdk/pull/18564) Fix total fees calculation when batch signing.
+ * (server) [#18537](https://github.com/cosmos/cosmos-sdk/pull/18537) Fix panic when defining minimum gas config as `100stake;100uatom`. Use a `,` delimiter instead of `;`. Fixes the server config getter to use the correct delimiter.
+ * [#18531](https://github.com/cosmos/cosmos-sdk/pull/18531) Baseapp's `GetConsensusParams` returns an empty struct instead of panicking if no params are found.
+ * (client/tx) [#18472](https://github.com/cosmos/cosmos-sdk/pull/18472) Utilizes the correct Pubkey when simulating a transaction.
+ * (baseapp) [#18486](https://github.com/cosmos/cosmos-sdk/pull/18486) Fixed FinalizeBlock calls not being passed to ABCIListeners.
+ * (baseapp) [#18627](https://github.com/cosmos/cosmos-sdk/pull/18627) Post handlers are run on non-successful transaction executions too.
+ * (baseapp) [#18654](https://github.com/cosmos/cosmos-sdk/pull/18654) Fixes an issue in which `gogoproto.Merge` does not work with gogoproto messages with custom types.
+
+ ### Features
+
+ * (baseapp) [#18071](https://github.com/cosmos/cosmos-sdk/pull/18071) Add hybrid handlers to `MsgServiceRouter`.
+ * (server) [#18162](https://github.com/cosmos/cosmos-sdk/pull/18162) Start gRPC & API server in standalone mode.
+ * (baseapp & types) [#17712](https://github.com/cosmos/cosmos-sdk/pull/17712) Introduce `PreBlock`, which runs before the begin blockers of other modules and allows modifying consensus parameters; the changes are visible to the following state machine logic. Additionally it can be used for vote extensions.
+ * (genutil) [#17571](https://github.com/cosmos/cosmos-sdk/pull/17571) Allow creation of `AppGenesis` without a file lookup.
+ * (codec) [#17042](https://github.com/cosmos/cosmos-sdk/pull/17042) Add `CollValueV2` which supports encoding of protov2 messages in collections.
+ * (x/gov) [#16976](https://github.com/cosmos/cosmos-sdk/pull/16976) Add `failed_reason` field to `Proposal` under `x/gov` to indicate the reason for a failed proposal. Referenced from [#238](https://github.com/bnb-chain/greenfield-cosmos-sdk/pull/238) under `bnb-chain/greenfield-cosmos-sdk`.
+ * (baseapp) [#16898](https://github.com/cosmos/cosmos-sdk/pull/16898) Add `preFinalizeBlockHook` to allow vote extensions persistence.
+ * (cli) [#16887](https://github.com/cosmos/cosmos-sdk/pull/16887) Add two new CLI commands: ` tx simulate` for simulating a transaction; ` query block-results` for querying CometBFT RPC for block results.
+ * (x/bank) [#16852](https://github.com/cosmos/cosmos-sdk/pull/16852) Add `DenomMetadataByQueryString` query in bank module to support metadata query by query string.
+ * (baseapp) [#16581](https://github.com/cosmos/cosmos-sdk/pull/16581) Implement Optimistic Execution as an experimental feature (not enabled by default).
+ * (types) [#16257](https://github.com/cosmos/cosmos-sdk/pull/16257) Allow setting the base denom in the denom registry.
+ * (baseapp) [#16239](https://github.com/cosmos/cosmos-sdk/pull/16239) Add Gas Limits to allow node operators to resource bound queries.
+ * (cli) [#16209](https://github.com/cosmos/cosmos-sdk/pull/16209) Make `StartCmd` more customizable.
+ * (types/simulation) [#16074](https://github.com/cosmos/cosmos-sdk/pull/16074) Add generic SimulationStoreDecoder for modules using collections.
+ * (genutil) [#16046](https://github.com/cosmos/cosmos-sdk/pull/16046) Add "module-name" flag to genutil `add-genesis-account` to enable initializing module accounts at genesis.
+ * [#15970](https://github.com/cosmos/cosmos-sdk/pull/15970) Enable SIGN_MODE_TEXTUAL.
+ * (types) [#15958](https://github.com/cosmos/cosmos-sdk/pull/15958) Add `module.NewBasicManagerFromManager` for creating a basic module manager from a module manager.
+ * (types/module) [#15829](https://github.com/cosmos/cosmos-sdk/pull/15829) Add new endblocker interface to handle valset updates.
+ * (runtime) [#15818](https://github.com/cosmos/cosmos-sdk/pull/15818) Provide logger through `depinject` instead of appBuilder.
+ * (types) [#15735](https://github.com/cosmos/cosmos-sdk/pull/15735) Make `ValidateBasic() error` method of `Msg` interface optional. Modules should validate messages directly in their message handlers ([RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation)).
+ * (x/genutil) [#15679](https://github.com/cosmos/cosmos-sdk/pull/15679) Allow applications to specify a custom genesis migration function for the `genesis migrate` command.
+ * (telemetry) [#15657](https://github.com/cosmos/cosmos-sdk/pull/15657) Emit more data (go version, sdk version, upgrade height) in prom metrics.
+ * (client) [#15597](https://github.com/cosmos/cosmos-sdk/pull/15597) Add status endpoint for clients.
+ * (testutil/integration) [#15556](https://github.com/cosmos/cosmos-sdk/pull/15556) Introduce `testutil/integration` package for module integration testing.
+ * (runtime) [#15547](https://github.com/cosmos/cosmos-sdk/pull/15547) Allow runtime to pass event core api service to modules.
+ * (client) [#15458](https://github.com/cosmos/cosmos-sdk/pull/15458) Add a `CmdContext` field to client.Context initialized to cobra command's context.
+ * (x/genutil) [#15301](https://github.com/cosmos/cosmos-sdk/pull/15031) Add application genesis. The genesis is now entirely managed by the application and passed to CometBFT at node instantiation. Functions that were taking a `cmttypes.GenesisDoc{}` now take a `genutiltypes.AppGenesis{}`.
+ * (core) [#15133](https://github.com/cosmos/cosmos-sdk/pull/15133) Implement RegisterServices in the module manager.
+ * (x/bank) [#14894](https://github.com/cosmos/cosmos-sdk/pull/14894) Return a human readable denomination for IBC vouchers when querying bank balances. Added a `ResolveDenom` parameter to `types.QueryAllBalancesRequest` and `--resolve-denom` flag to `GetBalancesCmd()`.
+ * (core) [#14860](https://github.com/cosmos/cosmos-sdk/pull/14860) Add `Precommit` and `PrepareCheckState` AppModule callbacks.
+ * (x/gov) [#14720](https://github.com/cosmos/cosmos-sdk/pull/14720) Upstream expedited proposals from Osmosis.
+ * (cli) [#14659](https://github.com/cosmos/cosmos-sdk/pull/14659) Added ability to query blocks by events with queries directly passed to Tendermint, which will allow for full query operator support, e.g. `>`.
+ * (x/auth) [#14650](https://github.com/cosmos/cosmos-sdk/pull/14650) Add Textual SignModeHandler. Enable `SIGN_MODE_TEXTUAL` by following the [UPGRADING.md](./UPGRADING.md) instructions.
+ * (x/crisis) [#14588](https://github.com/cosmos/cosmos-sdk/pull/14588) Use CacheContext() in AssertInvariants().
+ * (mempool) [#14484](https://github.com/cosmos/cosmos-sdk/pull/14484) Add priority nonce mempool option for transaction replacement.
+ * (query) [#14468](https://github.com/cosmos/cosmos-sdk/pull/14468) Implement pagination for collections.
+ * (x/gov) [#14373](https://github.com/cosmos/cosmos-sdk/pull/14373) Add new proto field `constitution` of type `string` to gov module genesis state, which allows chain builders to lay a strong foundation by specifying purpose.
+ * (client) [#14342](https://github.com/cosmos/cosmos-sdk/pull/14342) ` config` is now a sub-command for setting, getting and migrating Cosmos SDK configuration files.
+ * (x/distribution) [#14322](https://github.com/cosmos/cosmos-sdk/pull/14322) Introduce a new gRPC message handler, `DepositValidatorRewardsPool`, that allows explicit funding of a validator's reward pool.
+ * (x/bank) [#14224](https://github.com/cosmos/cosmos-sdk/pull/14224) Allow injection of restrictions on transfers using `AppendSendRestriction` or `PrependSendRestriction`.
+
+ ### Improvements
+
+ * (x/gov) [#18189](https://github.com/cosmos/cosmos-sdk/pull/18189) Limit the accepted deposit coins for a proposal to the minimum proposal deposit denoms.
+ * (x/staking) [#18049](https://github.com/cosmos/cosmos-sdk/pull/18049) Return early if Slash encounters zero tokens to burn.
+ * (x/staking) [#18035](https://github.com/cosmos/cosmos-sdk/pull/18035) Hoist the parsing of the non-changing validator and delegator addresses out of the redelegation loop.
+ * (keyring) [#17913](https://github.com/cosmos/cosmos-sdk/pull/17913) Add `NewAutoCLIKeyring` for creating an AutoCLI keyring from a SDK keyring.
+ * (x/consensus) [#18041](https://github.com/cosmos/cosmos-sdk/pull/18041) Let `ToProtoConsensusParams()` return an error.
+ * (x/gov) [#17780](https://github.com/cosmos/cosmos-sdk/pull/17780) Recover panics and turn them into errors when executing x/gov proposals.
+ * (baseapp) [#17667](https://github.com/cosmos/cosmos-sdk/pull/17667) Close databases opened by SDK in `baseApp.Close()`.
+ * (types/module) [#17554](https://github.com/cosmos/cosmos-sdk/pull/17554) Introduce `HasABCIGenesis` which is implemented by a module only when a validatorset update needs to be returned.
+ * (cli) [#17389](https://github.com/cosmos/cosmos-sdk/pull/17389) gRPC CometBFT commands have been added under ` q consensus comet`. CometBFT commands placement in the SDK has been simplified. See the exhaustive list below.
+   * `client/rpc.StatusCommand()` is now at `server.StatusCommand()`
+ * (testutil) [#17216](https://github.com/cosmos/cosmos-sdk/issues/17216) Add `DefaultContextWithKeys` to `testutil` package.
+ * (cli) [#17187](https://github.com/cosmos/cosmos-sdk/pull/17187) Do not use `ctx.PrintObjectLegacy` in commands anymore.
+   * ` q gov proposer [proposal-id]` now returns a proposal id as int instead of string.
+ * (x/staking) [#17164](https://github.com/cosmos/cosmos-sdk/pull/17164) Add `BondedTokensAndPubKeyByConsAddr` to the keeper to enable vote extension verification.
+ * (x/group, x/gov) [#17109](https://github.com/cosmos/cosmos-sdk/pull/17109) Let proposal summary be 40x longer than metadata limit.
+ * (version) [#17096](https://github.com/cosmos/cosmos-sdk/pull/17096) Improve `getSDKVersion()` to handle module replacements.
+ * (types) [#16890](https://github.com/cosmos/cosmos-sdk/pull/16890) Remove `GetTxCmd() *cobra.Command` and `GetQueryCmd() *cobra.Command` from `module.AppModuleBasic` interface.
+ * (x/authz) [#16869](https://github.com/cosmos/cosmos-sdk/pull/16869) Improve error message when grant not found.
+ * (all) [#16497](https://github.com/cosmos/cosmos-sdk/pull/16497) Removed all exported vestiges of `sdk.MustSortJSON` and `sdk.SortJSON`.
+ * (server) [#16238](https://github.com/cosmos/cosmos-sdk/pull/16238) Don't setup p2p node keys if starting a node in GRPC only mode.
+ * (cli) [#16206](https://github.com/cosmos/cosmos-sdk/pull/16206) Make ABCI handshake profileable.
+ * (types) [#16076](https://github.com/cosmos/cosmos-sdk/pull/16076) Optimize `ChainAnteDecorators`/`ChainPostDecorators` to instantiate the functions once instead of on every invocation of the returned `AnteHandler`/`PostHandler`.
+ * (server) [#16071](https://github.com/cosmos/cosmos-sdk/pull/16071) When `mempool.max-txs` is set to a negative value, use a no-op mempool (effectively disable the app mempool).
+ * (types/query) [#16041](https://github.com/cosmos/cosmos-sdk/pull/16041) Change pagination max limit to a variable in order to be modified by application devs.
+ * (simapp) [#15958](https://github.com/cosmos/cosmos-sdk/pull/15958) Refactor SimApp to remove the global basic manager.
+ * (all modules) [#15901](https://github.com/cosmos/cosmos-sdk/issues/15901) All core Cosmos SDK modules query commands have migrated to [AutoCLI](https://docs.cosmos.network/main/core/autocli), ensuring parity between gRPC and CLI queries.
+ * (x/auth) [#15867](https://github.com/cosmos/cosmos-sdk/pull/15867) Support better logging for signature verification failure.
+ * (store/cachekv) [#15767](https://github.com/cosmos/cosmos-sdk/pull/15767) Reduce peak RAM usage during and after `InitGenesis`.
+ * (x/bank) [#15764](https://github.com/cosmos/cosmos-sdk/pull/15764) Speedup x/bank `InitGenesis`.
+ * (x/slashing) [#15580](https://github.com/cosmos/cosmos-sdk/pull/15580) Refactor the validator's missed block signing window to be a chunked bitmap instead of a "logical" bitmap, significantly reducing the storage footprint.
+ * (x/gov) [#15554](https://github.com/cosmos/cosmos-sdk/pull/15554) Add proposal result log in `active_proposal` event. When a proposal passes but fails to execute, the proposal result is logged in the `active_proposal` event.
+ * (x/consensus) [#15553](https://github.com/cosmos/cosmos-sdk/pull/15553) Migrate consensus module to use collections.
+ * (server) [#15358](https://github.com/cosmos/cosmos-sdk/pull/15358) Add `server.InterceptConfigsAndCreateContext` as an alternative to `server.InterceptConfigsPreRunHandler` which does not set the server context and the default SDK logger.
+ * (mempool) [#15328](https://github.com/cosmos/cosmos-sdk/pull/15328) Improve the `PriorityNonceMempool`:
+   * Support generic transaction prioritization, instead of `ctx.Priority()`
+   * Improve construction through the use of a single `PriorityNonceMempoolConfig` instead of option functions
+ * (x/authz) [#15164](https://github.com/cosmos/cosmos-sdk/pull/15164) Add `MsgCancelUnbondingDelegation` to staking authorization.
+ * (server) [#15041](https://github.com/cosmos/cosmos-sdk/pull/15041) Remove unnecessary sleeps from gRPC and API server initiation. The servers will start and accept requests as soon as they're ready.
+ * (baseapp) [#15023](https://github.com/cosmos/cosmos-sdk/pull/15023) & [#15213](https://github.com/cosmos/cosmos-sdk/pull/15213) Add `MessageRouter` interface to baseapp and pass it to authz, gov and groups instead of concrete type.
+ * [#15011](https://github.com/cosmos/cosmos-sdk/pull/15011) Introduce `cosmossdk.io/log` package to provide a consistent logging interface through the SDK. CometBFT logger is now replaced by `cosmossdk.io/log.Logger`.
+ * (x/staking) [#14864](https://github.com/cosmos/cosmos-sdk/pull/14864) ` tx staking create-validator` CLI command now takes a json file as an arg instead of using required flags.
+ * (x/auth) [#14758](https://github.com/cosmos/cosmos-sdk/pull/14758) Allow transaction event queries to be directly passed to Tendermint, which will allow for full query operator support, e.g. `>`.
+ * (x/evidence) [#14757](https://github.com/cosmos/cosmos-sdk/pull/14757) Evidence messages do not need to implement a `.Type()` anymore.
+ * (x/auth/tx) [#14751](https://github.com/cosmos/cosmos-sdk/pull/14751) Remove `.Type()` and `Route()` methods from all msgs and `legacytx.LegacyMsg` interface.
+ * (cli) [#14659](https://github.com/cosmos/cosmos-sdk/pull/14659) Added ability to query blocks by either height/hash ` q block --type=height|hash `.
+ * (x/staking) [#14590](https://github.com/cosmos/cosmos-sdk/pull/14590) Return undelegate amount in MsgUndelegateResponse.
+ * [#14529](https://github.com/cosmos/cosmos-sdk/pull/14529) Add new property `BondDenom` to `SimulationState` struct.
+ * (store) [#14439](https://github.com/cosmos/cosmos-sdk/pull/14439) Remove global metric gatherer from store.
+    * By default the store has a no-op metric gatherer; the application developer must set another metric gatherer or use the provided one in `store/metrics`.
+ * (store) [#14438](https://github.com/cosmos/cosmos-sdk/pull/14438) Pass logger from baseapp to store.
+ * (baseapp) [#14417](https://github.com/cosmos/cosmos-sdk/pull/14417) The store package no longer has a dependency on baseapp.
+ * (module) [#14415](https://github.com/cosmos/cosmos-sdk/pull/14415) Loosen assertions in `SetOrderBeginBlockers()` and `SetOrderEndBlockers()`.
+ * (store) [#14410](https://github.com/cosmos/cosmos-sdk/pull/14410) `rootmulti.Store.loadVersion` now validates that all module stores' heights are correct; it errors if any module store has an incorrect height.
+ * [#14406](https://github.com/cosmos/cosmos-sdk/issues/14406) Migrate usage of `types/store.go` to `store/types/..`.
+ * (context) [#14384](https://github.com/cosmos/cosmos-sdk/pull/14384) Pass the EventManager to the context as an interface.
+ * (types) [#14354](https://github.com/cosmos/cosmos-sdk/pull/14354) Improve performance on Context.KVStore and Context.TransientStore by 40%.
+ * (crypto/keyring) [#14151](https://github.com/cosmos/cosmos-sdk/pull/14151) Move keys presentation from `crypto/keyring` to `client/keys`.
+ * (signing) [#14087](https://github.com/cosmos/cosmos-sdk/pull/14087) Add SignModeHandlerWithContext interface with a new `GetSignBytesWithContext` to get the sign bytes using `context.Context` as an argument to access state.
+ * (server) [#14062](https://github.com/cosmos/cosmos-sdk/pull/14062) Remove rosetta from server start.
+ * (crypto) [#3129](https://github.com/cosmos/cosmos-sdk/pull/3129) New armor and keyring key derivation uses aead and encryption uses chacha20poly.
+ ### State Machine Breaking
+ * (x/gov) [#18146](https://github.com/cosmos/cosmos-sdk/pull/18146) Add denom check to reject denoms outside of those listed in `MinDeposit`. A new `MinDepositRatio` param is added (with a default value of `0.001`) and now deposits are required to be at least `MinDepositRatio*MinDeposit` to be accepted.
+ * (x/group,x/gov) [#16235](https://github.com/cosmos/cosmos-sdk/pull/16235) A group and gov proposal is rejected if the proposal metadata title and summary do not match the proposal title and summary.
+ * (baseapp) [#15930](https://github.com/cosmos/cosmos-sdk/pull/15930) Change the vote info provided by prepare and process proposal to the one in the block.
+ * (x/staking) [#15731](https://github.com/cosmos/cosmos-sdk/pull/15731) Introduce a new index to retrieve delegations by validator efficiently.
+ * (x/staking) [#15701](https://github.com/cosmos/cosmos-sdk/pull/15701) The `HistoricalInfoKey` has been updated to use a binary format.
+ * (x/slashing) [#15580](https://github.com/cosmos/cosmos-sdk/pull/15580) The validator slashing window now stores "chunked" bitmap entries for each validator's signing window instead of a single boolean entry per signing window index.
+ * (x/staking) [#14590](https://github.com/cosmos/cosmos-sdk/pull/14590) `MsgUndelegateResponse` now includes undelegated amount. `x/staking` module's `keeper.Undelegate` now returns 3 values (completionTime, undelegateAmount, error) instead of 2.
+ * (x/feegrant) [#14294](https://github.com/cosmos/cosmos-sdk/pull/14294) Moved the logic of rejecting duplicate grants from `msg_server` to the `keeper` method.
+ ### API Breaking Changes
+ * (x/auth) [#17787](https://github.com/cosmos/cosmos-sdk/pull/17787) Remove Tip functionality.
+ * (types) `module.EndBlockAppModule` has been replaced by Core API `appmodule.HasEndBlocker` or `module.HasABCIEndBlock` when needing validator updates.
+ * (types) `module.BeginBlockAppModule` has been replaced by Core API `appmodule.HasBeginBlocker`.
+ * (types) [#17358](https://github.com/cosmos/cosmos-sdk/pull/17358) Remove deprecated `sdk.Handler`, use `baseapp.MsgServiceHandler` instead.
+ * (client) [#17197](https://github.com/cosmos/cosmos-sdk/pull/17197) `keys.Commands` does not take a home directory anymore. It is inferred from the root command.
+ * (x/staking) [#17157](https://github.com/cosmos/cosmos-sdk/pull/17157) `GetValidatorsByPowerIndexKey` and `ValidateBasic` for historical info take a validator address codec in order to be able to decode/encode addresses.
+    * `GetOperator()` now returns the address as it is represented in state; by default this is an encoded address
+    * `GetConsAddr() ([]byte, error)` returns `[]byte` instead of `sdk.ConsAddress`.
+    * `FromABCIEvidence` & `GetConsensusAddress(consAc address.Codec)` now take a consensus address codec to be able to decode the incoming address.
+    * (x/distribution) `Delegate` & `SlashValidator` helper functions now take the mock staking keeper as a parameter.
+ * (x/staking) [#17098](https://github.com/cosmos/cosmos-sdk/pull/17098) `NewMsgCreateValidator`, `NewValidator`, `NewMsgCancelUnbondingDelegation`, `NewMsgUndelegate`, `NewMsgBeginRedelegate`, `NewMsgDelegate` and `NewMsgEditValidator` take a string instead of `sdk.ValAddress` or `sdk.AccAddress`:
+    * `NewRedelegation` and `NewUnbondingDelegation` take a validatorAddressCodec and a delegatorAddressCodec in order to decode the addresses.
+    * `NewRedelegationResponse` takes a string instead of `sdk.ValAddress` or `sdk.AccAddress`.
+    * `NewMsgCreateValidator.Validate()` takes an address codec in order to decode the address.
+    * `BuildCreateValidatorMsg` takes a ValidatorAddressCodec in order to decode addresses.
+ * (x/slashing) [#17098](https://github.com/cosmos/cosmos-sdk/pull/17098) `NewMsgUnjail` takes a string instead of `sdk.ValAddress`.
+ * (x/genutil) [#17098](https://github.com/cosmos/cosmos-sdk/pull/17098) `GenAppStateFromConfig`, `AddGenesisAccountCmd` and `GenTxCmd` take an address codec to decode addresses.
+ * (x/distribution) [#17098](https://github.com/cosmos/cosmos-sdk/pull/17098) `NewMsgDepositValidatorRewardsPool`, `NewMsgFundCommunityPool`, `NewMsgWithdrawValidatorCommission` and `NewMsgWithdrawDelegatorReward` take a string instead of `sdk.ValAddress` or `sdk.AccAddress`.
+ * (x/staking) [#16959](https://github.com/cosmos/cosmos-sdk/pull/16959) Add validator and consensus address codec as staking keeper arguments.
+ * (x/staking) [#16958](https://github.com/cosmos/cosmos-sdk/pull/16958) The DelegationI interface's `GetDelegatorAddr` & `GetValidatorAddr` have been migrated to return a string instead of `sdk.AccAddress` and `sdk.ValAddress` respectively. `stakingtypes.NewDelegation` takes a string instead of `sdk.AccAddress` and `sdk.ValAddress`.
+ * (testutil) [#16899](https://github.com/cosmos/cosmos-sdk/pull/16899) The *cli testutil* `QueryBalancesExec` has been removed. Use the gRPC or REST query instead.
+ * (x/staking) [#16795](https://github.com/cosmos/cosmos-sdk/pull/16795) `DelegationToDelegationResponse`, `DelegationsToDelegationResponses`, `RedelegationsToRedelegationResponses` are no longer exported.
+ * (x/auth/vesting) [#16741](https://github.com/cosmos/cosmos-sdk/pull/16741) Vesting account constructors now return an error with the result of their validate function.
+ * (x/auth) [#16650](https://github.com/cosmos/cosmos-sdk/pull/16650) The *cli testutil* `QueryAccountExec` has been removed. Use the gRPC or REST query instead.
+ * (x/auth) [#16621](https://github.com/cosmos/cosmos-sdk/pull/16621) Pass the address codec to the auth keeper constructor.
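Many of the entries above replace `sdk.ValAddress`/`sdk.AccAddress` arguments with plain strings plus an injected address codec. A minimal sketch of the pattern follows; the `Codec` interface shape, the hex encoding, and the `MsgUnjail` struct here are illustrative stand-ins, not the SDK's actual definitions (the real codec is bech32-based):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// Stand-in for the address codec threaded through the constructors above.
type Codec interface {
	StringToBytes(text string) ([]byte, error)
	BytesToString(bz []byte) (string, error)
}

// hexCodec is a dependency-free example implementation.
type hexCodec struct{}

func (hexCodec) StringToBytes(text string) ([]byte, error) { return hex.DecodeString(text) }
func (hexCodec) BytesToString(bz []byte) (string, error)   { return hex.EncodeToString(bz), nil }

// Messages now carry addresses as plain strings; keepers decode them
// on demand with the injected codec (hypothetical message for illustration).
type MsgUnjail struct{ ValidatorAddr string }

func decodeValidator(cdc Codec, msg MsgUnjail) ([]byte, error) {
	return cdc.StringToBytes(msg.ValidatorAddr)
}

func main() {
	bz, err := decodeValidator(hexCodec{}, MsgUnjail{ValidatorAddr: "deadbeef"})
	fmt.Println(len(bz), err)
}
```

The design keeps address encoding out of message types entirely, so a chain can swap prefixes (or encodings) by injecting a different codec.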
+ * (x/auth) [#16423](https://github.com/cosmos/cosmos-sdk/pull/16423) `helpers.AddGenesisAccount` has been moved to `x/genutil` to remove the cyclic dependency between `x/auth` and `x/genutil`.
+ * (baseapp) [#16342](https://github.com/cosmos/cosmos-sdk/pull/16342) NewContext was renamed to NewContextLegacy. The replacement (NewContext) now does not take a header; instead you should set the header via `WithHeaderInfo` or `WithBlockHeight`. Note that `WithBlockHeight` will soon be deprecated and it's recommended to use `WithHeaderInfo`.
+ * (x/mint) [#16329](https://github.com/cosmos/cosmos-sdk/pull/16329) Use collections for state management:
+    * Removed: keeper `GetParams`, `SetParams`, `GetMinter`, `SetMinter`.
+ * (x/crisis) [#16328](https://github.com/cosmos/cosmos-sdk/pull/16328) Use collections for state management:
+    * Removed: keeper `GetConstantFee`, `SetConstantFee`.
+ * (x/staking) [#16324](https://github.com/cosmos/cosmos-sdk/pull/16324) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`, and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error`. Notable changes:
+    * `Validator` method now returns `types.ErrNoValidatorFound` instead of `nil` when not found.
+ * (x/distribution) [#16302](https://github.com/cosmos/cosmos-sdk/pull/16302) Use collections for FeePool state management.
+    * Removed: keeper `GetFeePool`, `SetFeePool`, `GetFeePoolCommunityCoins`.
+ * (types) [#16272](https://github.com/cosmos/cosmos-sdk/pull/16272) `FeeGranter` in the `FeeTx` interface returns `[]byte` instead of `string`.
+ * (x/gov) [#16268](https://github.com/cosmos/cosmos-sdk/pull/16268) Use collections for proposal state management (part 2):
+    * This finalizes the gov collections migration.
+    * Removed: all key-related functions from types.
+    * Removed: keeper `InsertActiveProposalsQueue`, `RemoveActiveProposalsQueue`, `InsertInactiveProposalsQueue`, `RemoveInactiveProposalsQueue`, `IterateInactiveProposalsQueue`, `IterateActiveProposalsQueue`, `ActiveProposalsQueueIterator`, `InactiveProposalsQueueIterator`.
+ * (x/slashing) [#16246](https://github.com/cosmos/cosmos-sdk/issues/16246) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`, and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error`. `GetValidatorSigningInfo` now returns an error instead of a `found bool`; the error can be `nil` (found), `ErrNoSigningInfoFound` (not found) or any other error.
+ * (module) [#16227](https://github.com/cosmos/cosmos-sdk/issues/16227) `manager.RunMigrations()` now takes a `context.Context` instead of a `sdk.Context`.
+ * (x/crisis) [#16216](https://github.com/cosmos/cosmos-sdk/issues/16216) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`; methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error` instead of panicking.
+ * (x/distribution) [#16211](https://github.com/cosmos/cosmos-sdk/pull/16211) Use collections for params state management.
+ * (cli) [#16209](https://github.com/cosmos/cosmos-sdk/pull/16209) Add API `StartCmdWithOptions` to create customized start command.
+ * (x/mint) [#16179](https://github.com/cosmos/cosmos-sdk/issues/16179) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`, and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error`.
+ * (x/gov) [#16171](https://github.com/cosmos/cosmos-sdk/pull/16171) Use collections for proposal state management (part 1):
+    * Removed: keeper `GetProposal`, `UnmarshalProposal`, `MarshalProposal`, `IterateProposal`, `GetProposalFiltered`, `GetProposals`, `GetProposalID`, `SetProposalID`
+    * Removed: unused errors
+ * (x/gov) [#16164](https://github.com/cosmos/cosmos-sdk/pull/16164) Use collections for vote state management:
+    * Removed: types `VoteKey`, `VoteKeys`
+    * Removed: keeper `IterateVotes`, `IterateAllVotes`, `GetVotes`, `GetVote`, `SetVote`
+ * (sims) [#16155](https://github.com/cosmos/cosmos-sdk/pull/16155)
+    * `simulation.NewOperationMsg` now marshals the operation msg as proto bytes instead of legacy amino JSON bytes.
+    * `simulation.NewOperationMsg` is now 2-arity instead of 3-arity with the obsolete argument `codec.ProtoCodec` removed.
+    * The field `OperationMsg.Msg` is now of type `[]byte` instead of `json.RawMessage`.
+ * (x/gov) [#16127](https://github.com/cosmos/cosmos-sdk/pull/16127) Use collections for deposit state management:
+    * The following methods are removed from the gov keeper: `GetDeposit`, `GetAllDeposits`, `IterateAllDeposits`.
+    * The following functions are removed from the gov types: `DepositKey`, `DepositsKey`.
+ * (x/gov) [#16118](https://github.com/cosmos/cosmos-sdk/pull/16118/) Use collections for constitution and params state management.
+ * (x/gov) [#16106](https://github.com/cosmos/cosmos-sdk/pull/16106) Remove gRPC query methods from gov keeper.
+ * (x/*all*) [#16052](https://github.com/cosmos/cosmos-sdk/pull/16062) `GetSignBytes` implementations on messages and global legacy amino codec definitions have been removed from all modules.
+ * (sims) [#16052](https://github.com/cosmos/cosmos-sdk/pull/16062) `GetOrGenerate` no longer requires a codec argument and is now 4-arity instead of 5-arity.
+ * (types/math) [#16798](https://github.com/cosmos/cosmos-sdk/pull/16798) Remove aliases in `types/math.go` (part 2).
+ * (types/math) [#16040](https://github.com/cosmos/cosmos-sdk/pull/16040) Remove aliases in `types/math.go` (part 1).
+ * (x/auth) [#16016](https://github.com/cosmos/cosmos-sdk/pull/16016) Use collections for accounts state management:
+    * Removed: keeper `HasAccountByID`, `AccountAddressByID`, `SetParams`
+ * (x/genutil) [#15999](https://github.com/cosmos/cosmos-sdk/pull/15999) Genutil now takes the `GenesisTxHandler` interface instead of deliverTx. The interface is implemented on baseapp.
+ * (x/gov) [#15988](https://github.com/cosmos/cosmos-sdk/issues/15988) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`; methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error` (instead of panicking or returning a `found bool`). Iterator callback functions now return an error instead of a `bool`.
+ * (x/auth) [#15985](https://github.com/cosmos/cosmos-sdk/pull/15985) The `AccountKeeper` does not expose the `QueryServer` and `MsgServer` APIs anymore.
+ * (x/authz) [#15962](https://github.com/cosmos/cosmos-sdk/issues/15962) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`; methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`. The `Authorization` interface's `Accept` method now takes a `context.Context` instead of a `sdk.Context`.
+ * (x/distribution) [#15948](https://github.com/cosmos/cosmos-sdk/issues/15948) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey` and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`. Keeper methods also now return an `error`.
+ * (x/bank) [#15891](https://github.com/cosmos/cosmos-sdk/issues/15891) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey` and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`.
Also `FundAccount` and `FundModuleAccount` from the `testutil` package accept a `context.Context` instead of a `sdk.Context`, and its position was moved to the first place.
+ * (x/slashing) [#15875](https://github.com/cosmos/cosmos-sdk/pull/15875) `x/slashing.NewAppModule` now requires an `InterfaceRegistry` parameter.
+ * (x/crisis) [#15852](https://github.com/cosmos/cosmos-sdk/pull/15852) Crisis keeper now takes an instance of the address codec to be able to decode user addresses.
+ * (x/auth) [#15822](https://github.com/cosmos/cosmos-sdk/pull/15822) The type of struct field `ante.HandlerOptions.SignModeHandler` has been changed to `x/tx/signing.HandlerMap`.
+ * (client) [#15822](https://github.com/cosmos/cosmos-sdk/pull/15822) The return type of the interface method `TxConfig.SignModeHandler` has been changed to `x/tx/signing.HandlerMap`.
+    * The signature of `VerifySignature` has been changed to accept a `x/tx/signing.HandlerMap` and other structs from `x/tx` as arguments.
+    * `NewTxConfigWithTextual` has been deprecated and its signature changed to accept a `SignModeOptions`.
+    * The signature of `NewSigVerificationDecorator` has been changed to accept a `x/tx/signing.HandlerMap`.
+ * (x/bank) [#15818](https://github.com/cosmos/cosmos-sdk/issues/15818) `BaseViewKeeper`'s `Logger` method now doesn't require a context. `NewBaseKeeper`, `NewBaseSendKeeper` and `NewBaseViewKeeper` now also require a `log.Logger` to be passed in.
+ * (x/genutil) [#15679](https://github.com/cosmos/cosmos-sdk/pull/15679) `MigrateGenesisCmd` now takes a `MigrationMap` instead of having the SDK genesis migration hardcoded.
+ * (client) [#15673](https://github.com/cosmos/cosmos-sdk/pull/15673) Move `client/keys.OutputFormatJSON` and `client/keys.OutputFormatText` to the `client/flags` package.
+ * (x/*all*) [#15648](https://github.com/cosmos/cosmos-sdk/issues/15648) Make `SetParams` consistent across all modules and validate the params at message handling instead of in the `SetParams` method.
+ * (codec) [#15600](https://github.com/cosmos/cosmos-sdk/pull/15600) [#15873](https://github.com/cosmos/cosmos-sdk/pull/15873) Add support for getting signers to `codec.Codec` and `InterfaceRegistry`:
+    * `InterfaceRegistry` now has unexported methods and implements `protodesc.Resolver` plus the `RangeFiles` and `SigningContext` methods. All implementations of `InterfaceRegistry` by other users must now embed the official implementation.
+    * `Codec` has new methods `InterfaceRegistry`, `GetMsgAnySigners`, `GetMsgV1Signers`, and `GetMsgV2Signers` as well as unexported methods. All implementations of `Codec` by other users must now embed an official implementation from the `codec` package.
+    * `AminoCodec` is marked as deprecated and no longer implements `Codec`.
+ * (client) [#15597](https://github.com/cosmos/cosmos-sdk/pull/15597) `RegisterNodeService` now requires a config parameter.
+ * (x/nft) [#15588](https://github.com/cosmos/cosmos-sdk/pull/15588) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey` and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`.
+ * (baseapp) [#15568](https://github.com/cosmos/cosmos-sdk/pull/15568) `SetIAVLLazyLoading` is removed from baseapp.
+ * (x/genutil) [#15567](https://github.com/cosmos/cosmos-sdk/pull/15567) `CollectGenTxsCmd` & `GenTxCmd` take an `address.Codec` to be able to decode addresses.
+ * (x/bank) [#15567](https://github.com/cosmos/cosmos-sdk/pull/15567) `GenesisBalance.GetAddress` now returns a string instead of `sdk.AccAddress`.
+    * `MsgSendExec` test helper function now takes an `address.Codec`.
+ * (x/auth) [#15520](https://github.com/cosmos/cosmos-sdk/pull/15520) `NewAccountKeeper` now takes a `KVStoreService` instead of a `StoreKey` and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`.
+ * (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) `runTxMode`s were renamed to `execMode`. `ModeDeliver` was changed to `ModeFinalize` and a new `ModeVoteExtension` was added for vote extensions.
+ * (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) Writing of state to the multistore was moved to `FinalizeBlock`. `Commit` still handles committing values to disk.
+ * (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) Calls to BeginBlock and EndBlock have been replaced with the core API BeginBlock & EndBlock.
+ * (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) BeginBlock and EndBlock are now internal to baseapp. For testing, users must call `FinalizeBlock`.
+ * (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) All calls to ABCI methods now accept a pointer to the ABCI request and response types.
+ * (x/consensus) [#15517](https://github.com/cosmos/cosmos-sdk/pull/15517) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`.
+ * (x/bank) [#15477](https://github.com/cosmos/cosmos-sdk/pull/15477) `banktypes.NewMsgMultiSend` and `keeper.InputOutputCoins` only accept one input.
+ * (server) [#15358](https://github.com/cosmos/cosmos-sdk/pull/15358) Remove `server.ErrorCode` that was not used anywhere.
+ * (x/capability) [#15344](https://github.com/cosmos/cosmos-sdk/pull/15344) Capability module was removed and is now housed in [IBC-GO](https://github.com/cosmos/ibc-go).
+ * (mempool) [#15328](https://github.com/cosmos/cosmos-sdk/pull/15328) The `PriorityNonceMempool` is now generic over type `C comparable` and takes a single `PriorityNonceMempoolConfig[C]` argument. See `DefaultPriorityNonceMempoolConfig` for how to construct the configuration and a `TxPriority` type.
+ * [#15299](https://github.com/cosmos/cosmos-sdk/pull/15299) Remove `StdTx` transaction and signing APIs. No SDK version has actually supported `StdTx` since before Stargate.
+ * [#15284](https://github.com/cosmos/cosmos-sdk/pull/15284)
+    * (x/gov) [#15284](https://github.com/cosmos/cosmos-sdk/pull/15284) `NewKeeper` now requires `codec.Codec`.
+    * (x/authz) [#15284](https://github.com/cosmos/cosmos-sdk/pull/15284) `NewKeeper` now requires `codec.Codec`.
+    * `types/tx.Tx` no longer implements `sdk.Tx`.
+    * `sdk.Tx` now requires a new method `GetMsgsV2()`.
+    * `sdk.Msg.GetSigners` was deprecated and is no longer supported. Use the `cosmos.msg.v1.signer` protobuf annotation instead.
+    * `TxConfig` has a new method `SigningContext() *signing.Context`.
+    * `SigVerifiableTx.GetSigners()` now returns `([][]byte, error)` instead of `[]sdk.AccAddress`.
+    * `AccountKeeper` now has an `AddressCodec() address.Codec` method and the expected `AccountKeeper` for `x/auth/ante` expects this method.
+ * [#15211](https://github.com/cosmos/cosmos-sdk/pull/15211) Remove usage of `github.com/cometbft/cometbft/libs/bytes.HexBytes` in favor of `[]byte` throughout the SDK.
+ * (crypto) [#15070](https://github.com/cosmos/cosmos-sdk/pull/15070) `GenerateFromPassword` and `Cost` from `bcrypt.go` now take a `uint32` instead of an `int` type.
+ * (types) [#15067](https://github.com/cosmos/cosmos-sdk/pull/15067) Remove deprecated alias from `types/errors`. Use `cosmossdk.io/errors` instead.
+ * (server) [#15041](https://github.com/cosmos/cosmos-sdk/pull/15041) Refactor how gRPC and API servers are started to remove unnecessary sleeps:
+    * `api.Server#Start` now accepts a `context.Context`. The caller is responsible for ensuring that the context is canceled such that the API server can gracefully exit. The caller does not need to stop the server.
+    * To start the gRPC server you must first create the server via `NewGRPCServer`, after which you can start the gRPC server via `StartGRPCServer` which accepts a `context.Context`. The caller is responsible for ensuring that the context is canceled such that the gRPC server can gracefully exit. The caller does not need to stop the server.
+    * Rename `WaitForQuitSignals` to `ListenForQuitSignals`. Note, this function is no longer blocking. Thus the caller is expected to provide a `context.CancelFunc` which indicates that when a signal is caught, any spawned processes can gracefully exit.
+    * Remove `ServerStartTime` constant.
+ * [#15011](https://github.com/cosmos/cosmos-sdk/pull/15011) All functions that were taking a CometBFT logger now take `cosmossdk.io/log.Logger` instead.
+ * (simapp) [#14977](https://github.com/cosmos/cosmos-sdk/pull/14977) Move simulation helper functions (`AppStateFn` and `AppStateRandomizedFn`) to `testutil/sims`. These take an extra genesisState argument which is the default state of the app.
+ * (x/bank) [#14894](https://github.com/cosmos/cosmos-sdk/pull/14894) Allow a human readable denomination for coins when querying bank balances. Added a `ResolveDenom` parameter to `types.QueryAllBalancesRequest`.
+ * [#14847](https://github.com/cosmos/cosmos-sdk/pull/14847) App and ModuleManager methods `InitGenesis`, `ExportGenesis`, `BeginBlock` and `EndBlock` now also return an error.
+ * (x/upgrade) [#14764](https://github.com/cosmos/cosmos-sdk/pull/14764) The `x/upgrade` module is extracted to have a separate go.mod file which allows it to be a standalone module.
+ * (x/auth) [#14758](https://github.com/cosmos/cosmos-sdk/pull/14758) Refactor transaction searching:
+    * Refactor `QueryTxsByEvents` to accept a `query` of type `string` instead of `events` of type `[]string`
+    * Refactor CLI methods to accept `--query` flag instead of `--events`
+    * Pass `prove=false` to Tendermint's `TxSearch` RPC method
+ * (simulation) [#14751](https://github.com/cosmos/cosmos-sdk/pull/14751) Remove the `MsgType` field from `simulation.OperationInput` struct.
+ * (store) [#14746](https://github.com/cosmos/cosmos-sdk/pull/14746) Extract Store in its own go.mod and rename the package to `cosmossdk.io/store`.
+ * (x/nft) [#14725](https://github.com/cosmos/cosmos-sdk/pull/14725) Extract NFT in its own go.mod and rename the package to `cosmossdk.io/x/nft`.
+ * (x/gov) [#14720](https://github.com/cosmos/cosmos-sdk/pull/14720) Add an expedited field in the gov v1 proposal and `MsgNewMsgProposal`.
+ * (x/feegrant) [#14649](https://github.com/cosmos/cosmos-sdk/pull/14649) Extract Feegrant in its own go.mod and rename the package to `cosmossdk.io/x/feegrant`.
+ * (tx) [#14634](https://github.com/cosmos/cosmos-sdk/pull/14634) Move the `tx` go module to `x/tx`.
+ * (store/streaming) [#14603](https://github.com/cosmos/cosmos-sdk/pull/14603) `StoreDecoderRegistry` moved from store to `types/simulations`; this breaks the `AppModuleSimulation` interface.
+ * (snapshots) [#14597](https://github.com/cosmos/cosmos-sdk/pull/14597) Move `snapshots` to `store/snapshots`, rename and bump proto package to v1.
+ * (x/staking) [#14590](https://github.com/cosmos/cosmos-sdk/pull/14590) `MsgUndelegateResponse` now includes undelegated amount. `x/staking` module's `keeper.Undelegate` now returns 3 values (completionTime, undelegateAmount, error) instead of 2.
+ * (crypto/keyring) [#14151](https://github.com/cosmos/cosmos-sdk/pull/14151) Move keys presentation from `crypto/keyring` to `client/keys`.
+ * (baseapp) [#14050](https://github.com/cosmos/cosmos-sdk/pull/14050) Refactor `ABCIListener` interface to accept Go contexts.
+ * (x/auth) [#13850](https://github.com/cosmos/cosmos-sdk/pull/13850/) Remove `MarshalYAML` methods from module (`x/...`) types.
+ * (modules) [#13850](https://github.com/cosmos/cosmos-sdk/pull/13850) and [#14046](https://github.com/cosmos/cosmos-sdk/pull/14046) Remove gogoproto stringer annotations. This removes the custom `String()` methods on all types that were using the annotations.
+ * (x/evidence) [#14724](https://github.com/cosmos/cosmos-sdk/pull/14724) Extract Evidence in its own go.mod and rename the package to `cosmossdk.io/x/evidence`.
+ * (crypto/keyring) [#13734](https://github.com/cosmos/cosmos-sdk/pull/13834) The keyring's `Sign` method now takes a new `signMode` argument. It is only used if the signing key is a Ledger hardware device. You can set it to 0 in all other cases.
+ * (snapshots) [#14048](https://github.com/cosmos/cosmos-sdk/pull/14048) Move the Snapshot package to the store package. This is done in an effort to group all storage-related logic under one package.
+ * (signing) [#13701](https://github.com/cosmos/cosmos-sdk/pull/) Add `context.Context` as an argument to `x/auth/signing.VerifySignature`.
+ * (store) [#11825](https://github.com/cosmos/cosmos-sdk/pull/11825) Make extension snapshotter interface safer to use, renamed the util function `WriteExtensionItem` to `WriteExtensionPayload`.
+ ### Client Breaking Changes
+ * (x/gov) [#17910](https://github.com/cosmos/cosmos-sdk/pull/17910) Remove telemetry for counting votes and proposals. It was incorrectly counting votes. Use alternatives, such as state streaming.
+ * (abci) [#15845](https://github.com/cosmos/cosmos-sdk/pull/15845) Remove duplicating events in `logs`.
+ * (abci) [#15845](https://github.com/cosmos/cosmos-sdk/pull/15845) Add `msg_index` to all event attributes to associate events and messages.
+ * (x/staking) [#15701](https://github.com/cosmos/cosmos-sdk/pull/15701) `HistoricalInfoKey` now has a binary format.
+ * (store/streaming) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) State Streaming removed emitting of beginblock, endblock and delivertx in favour of emitting FinalizeBlock.
+ * (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) BeginBlock & EndBlock events include a begin- or endblock marker in order to identify which stage they were emitted from, since they are returned to CometBFT as FinalizeBlock events.
+ * (grpc-web) [#14652](https://github.com/cosmos/cosmos-sdk/pull/14652) Use same port for gRPC-Web and the API server.
+ ### CLI Breaking Changes
+ * (all) The migration of modules to [AutoCLI](https://docs.cosmos.network/main/core/autocli) led to no changes in UX but a [small change in CLI outputs](https://github.com/cosmos/cosmos-sdk/issues/16651) where results can be nested.
+ * (all) Query pagination flags have been renamed with the migration to AutoCLI:
+    * `--reverse` -> `--page-reverse`
+    * `--offset` -> `--page-offset`
+    * `--limit` -> `--page-limit`
+    * `--count-total` -> `--page-count-total`
+ * (cli) [#17184](https://github.com/cosmos/cosmos-sdk/pull/17184) All json keys returned by the `status` command are now snake case instead of pascal case.
+ * (server) [#17177](https://github.com/cosmos/cosmos-sdk/pull/17177) Remove `iavl-lazy-loading` configuration.
+ * (x/gov) [#16987](https://github.com/cosmos/cosmos-sdk/pull/16987) In ` query gov proposals` the proposal status flag has been renamed from `--status` to `--proposal-status`. Additionally, that flag now uses the ENUM values: `PROPOSAL_STATUS_DEPOSIT_PERIOD`, `PROPOSAL_STATUS_VOTING_PERIOD`, `PROPOSAL_STATUS_PASSED`, `PROPOSAL_STATUS_REJECTED`, `PROPOSAL_STATUS_FAILED`.
+ * (x/bank) [#16899](https://github.com/cosmos/cosmos-sdk/pull/16899) With the migration to AutoCLI some bank commands have been split in two:
+    * Use `total-supply` (or `total`) for querying the total supply and `total-supply-of` for querying the supply of a specific denom.
+    * Use `denoms-metadata` for querying all denom metadata and `denom-metadata` for querying a specific denom metadata.
+ * (rosetta) [#16276](https://github.com/cosmos/cosmos-sdk/issues/16276) Rosetta migration to standalone repo.
+ * (cli) [#15826](https://github.com/cosmos/cosmos-sdk/pull/15826) Remove ` q account` command. Use ` q auth account` instead.
+ * (cli) [#15299](https://github.com/cosmos/cosmos-sdk/pull/15299) Remove `--amino` flag from `sign` and `multi-sign` commands. Amino `StdTx` has been deprecated for a while. Amino JSON signing still works as expected.
+ * (x/gov) [#14880](https://github.com/cosmos/cosmos-sdk/pull/14880) Remove ` tx gov submit-legacy-proposal cancel-software-upgrade` and `software-upgrade` commands. These commands are now in the `x/upgrade` module and using gov v1. Use `tx upgrade software-upgrade` instead.
+ * (x/staking) [#14864](https://github.com/cosmos/cosmos-sdk/pull/14864) ` tx staking create-validator` CLI command now takes a json file as an arg instead of using required flags.
+ * (cli) [#14659](https://github.com/cosmos/cosmos-sdk/pull/14659) ` q block ` is removed as it just output JSON. The new command accepts either a height or hash and is ` q block --type=height|hash `.
+ * (grpc-web) [#14652](https://github.com/cosmos/cosmos-sdk/pull/14652) Remove `grpc-web.address` flag.
+ * (client) [#14342](https://github.com/cosmos/cosmos-sdk/pull/14342) ` config` command is now a sub-command using Confix. Use ` config --help` to learn more.
+ ### Bug Fixes
+ * (server) [#18254](https://github.com/cosmos/cosmos-sdk/pull/18254) Don't hardcode gRPC address to localhost.
+ * (x/gov) [#18173](https://github.com/cosmos/cosmos-sdk/pull/18173) Gov hooks now return an error and are *blocking* when they fail. Except for `AfterProposalFailedMinDeposit` and `AfterProposalVotingPeriodEnded`, which log the error and continue.
+ * (x/gov) [#17873](https://github.com/cosmos/cosmos-sdk/pull/17873) Fail any inactive and active proposals that cannot be decoded.
+ * (x/slashing) [#18016](https://github.com/cosmos/cosmos-sdk/pull/18016) Fixed builder function for missed blocks key (`validatorMissedBlockBitArrayPrefixKey`) in slashing/migration/v4.
+ * (x/bank) [#18107](https://github.com/cosmos/cosmos-sdk/pull/18107) Add missing keypair of SendEnabled to restore legacy param set before migration.
+ * (baseapp) [#17769](https://github.com/cosmos/cosmos-sdk/pull/17769) Ensure we respect block size constraints in the `DefaultProposalHandler`'s `PrepareProposal` handler when a nil or no-op mempool is used. We provide a `TxSelector` type to assist in making transaction selection generalized. We also fix a comparison bug in tx selection when `req.maxTxBytes` is reached.
+ * (mempool) [#17668](https://github.com/cosmos/cosmos-sdk/pull/17668) Fix `PriorityNonceIterator.Next()` nil pointer ref for min priority at the end of iteration.
+ * (config) [#17649](https://github.com/cosmos/cosmos-sdk/pull/17649) Fix invalid `mempool.max-txs` configuration in `app.config`.
+ * (baseapp) [#17518](https://github.com/cosmos/cosmos-sdk/pull/17518) Use voting power from vote extensions (CometBFT) instead of the current bonded tokens (x/staking) to determine whether a set of vote extensions is valid.
+ * (baseapp) [#17251](https://github.com/cosmos/cosmos-sdk/pull/17251) VerifyVoteExtensions and ExtendVote initialize their own contexts/states, allowing VerifyVoteExtensions to be called without ExtendVote.
+ * (x/distribution) [#17236](https://github.com/cosmos/cosmos-sdk/pull/17236) Using "validateCommunityTax" in "Params.ValidateBasic", preventing panic when field "CommunityTax" is nil. + * (x/bank) [#17170](https://github.com/cosmos/cosmos-sdk/pull/17170) Avoid empty spendable error message on send coins. + * (x/group) [#17146](https://github.com/cosmos/cosmos-sdk/pull/17146) Rename x/group legacy ORM package's error codespace from "orm" to "legacy_orm", preventing collisions with ORM's error codespace "orm". + * (types/query) [#16905](https://github.com/cosmos/cosmos-sdk/pull/16905) Collections Pagination now applies proper count when filtering results. + * (x/bank) [#16841](https://github.com/cosmos/cosmos-sdk/pull/16841) Correctly process legacy `DenomAddressIndex` values. + * (x/auth/vesting) [#16733](https://github.com/cosmos/cosmos-sdk/pull/16733) Panic on overflowing and negative EndTimes when creating a PeriodicVestingAccount. + * (x/consensus) [#16713](https://github.com/cosmos/cosmos-sdk/pull/16713) Add missing ABCI param in `MsgUpdateParams`. + * (baseapp) [#16700](https://github.com/cosmos/cosmos-sdk/pull/16700) Fix consensus failure in returning no response to malformed transactions. + * [#16639](https://github.com/cosmos/cosmos-sdk/pull/16639) Make sure we don't execute blocks beyond the halt height. + * (baseapp) [#16613](https://github.com/cosmos/cosmos-sdk/pull/16613) Ensure each message in a transaction has a registered handler, otherwise `CheckTx` will fail. + * (baseapp) [#16596](https://github.com/cosmos/cosmos-sdk/pull/16596) Return error during `ExtendVote` and `VerifyVoteExtension` if the request height is earlier than `VoteExtensionsEnableHeight`. + * (baseapp) [#16259](https://github.com/cosmos/cosmos-sdk/pull/16259) Ensure the `Context` block height is correct after `InitChain` and prior to the second block. 
+ * (x/gov) [#16231](https://github.com/cosmos/cosmos-sdk/pull/16231) Fix Rawlog JSON formatting of proposal_vote option field. + * (cli) [#16138](https://github.com/cosmos/cosmos-sdk/pull/16138) Fix snapshot commands panic if the snapshot doesn't exist. + * (x/staking) [#16043](https://github.com/cosmos/cosmos-sdk/pull/16043) Call `AfterUnbondingInitiated` hook for new unbonding entries only and fix `UnbondingDelegation` entries handling. This is a behavior change compared to Cosmos SDK v0.47.x; now the hook is called only for new unbonding entries. + * (types) [#16010](https://github.com/cosmos/cosmos-sdk/pull/16010) Let `module.CoreAppModuleBasicAdaptor` fall back to legacy genesis handling. + * (types) [#15691](https://github.com/cosmos/cosmos-sdk/pull/15691) Make `Coin.Validate()` check that `.Amount` is not nil. + * (x/crypto) [#15258](https://github.com/cosmos/cosmos-sdk/pull/15258) Write keyhash file with permissions 0600 instead of 0555. + * (x/auth) [#15059](https://github.com/cosmos/cosmos-sdk/pull/15059) `ante.CountSubKeys` returns 0 when passing a nil `Pubkey`. + * (x/capability) [#15030](https://github.com/cosmos/cosmos-sdk/pull/15030) Prevent `x/capability` from consuming `GasMeter` gas during `InitMemStore`. + * (types/coin) [#14739](https://github.com/cosmos/cosmos-sdk/pull/14739) Deprecate the method `Coin.IsEqual` in favour of `Coin.Equal`. The difference between the two methods is that the first one results in a panic when denoms are not equal. This panic led to unexpected behavior. + ### Deprecated + * (types) [#16980](https://github.com/cosmos/cosmos-sdk/pull/16980) Deprecate `IntProto` and `DecProto`. Instead, `math.Int` and `math.LegacyDec` should be used respectively. Both types support `Marshal` and `Unmarshal` for binary serialization. + * (x/staking) [#14567](https://github.com/cosmos/cosmos-sdk/pull/14567) The `delegator_address` field of `MsgCreateValidator` has been deprecated. 
+ diff --git a/docs/sdk/next/documentation/application-framework/README.mdx b/docs/sdk/next/documentation/application-framework/README.mdx new file mode 100644 index 00000000..5c8fc0f8 --- /dev/null +++ b/docs/sdk/next/documentation/application-framework/README.mdx @@ -0,0 +1,40 @@ +--- +title: Packages +description: >- + The Cosmos SDK is a collection of Go modules. This section provides + documentation on various packages that can be used when developing a Cosmos + SDK chain. It lists all standalone Go modules that are part of the Cosmos SDK. +--- + +The Cosmos SDK is a collection of Go modules. This section provides documentation on various packages that can be used when developing a Cosmos SDK chain. +It lists all standalone Go modules that are part of the Cosmos SDK. + + +For more information on SDK modules, see the [SDK Modules](https://docs.cosmos.network/main/modules) section. +For more information on SDK tooling, see the [Tooling](https://docs.cosmos.network/main/tooling) section. 
+ + +## Core + +* [Core](https://pkg.go.dev/cosmossdk.io/core) - Core library defining SDK interfaces ([ADR-063](https://docs.cosmos.network/main/architecture/adr-063-core-module-api)) +* [API](https://pkg.go.dev/cosmossdk.io/api) - API library containing generated SDK Pulsar API +* [Store](https://pkg.go.dev/cosmossdk.io/store) - Implementation of the Cosmos SDK store + +## State Management + +* [Collections](/docs/sdk/next/documentation/state-storage/collections) - State management library + +## Automation + +* [Depinject](/docs/sdk/next/documentation/module-system/depinject) - Dependency injection framework +* [Client/v2](https://pkg.go.dev/cosmossdk.io/client/v2) - Library powering [AutoCLI](https://docs.cosmos.network/main/core/autocli) + +## Utilities + +* [Log](https://pkg.go.dev/cosmossdk.io/log) - Logging library +* [Errors](https://pkg.go.dev/cosmossdk.io/errors) - Error handling library +* [Math](https://pkg.go.dev/cosmossdk.io/math) - Math library for SDK arithmetic operations + +## Example + +* [SimApp](https://pkg.go.dev/cosmossdk.io/simapp) - SimApp is **the** sample Cosmos SDK chain. This package should not be imported in your application. diff --git a/docs/sdk/next/documentation/application-framework/app-anatomy.mdx b/docs/sdk/next/documentation/application-framework/app-anatomy.mdx new file mode 100644 index 00000000..89e7f2c4 --- /dev/null +++ b/docs/sdk/next/documentation/application-framework/app-anatomy.mdx @@ -0,0 +1,4509 @@ +--- +title: Anatomy of a Cosmos SDK Application +--- + +## Synopsis + +This document describes the core parts of a Cosmos SDK application, represented throughout the document as a placeholder application named `app`. + +## Node Client + +The Daemon, or [Full-Node Client](/docs/sdk/next/documentation/operations/node), is the core process of a Cosmos SDK-based blockchain. 
Participants in the network run this process to initialize their state-machine, connect with other full-nodes, and update their state-machine as new blocks come in. + +```text expandable + ^ +-------------------------------+ ^ + | | | | + | | State-machine = Application | | + | | | | Built with Cosmos SDK + | | ^ + | | + | +----------- | ABCI | ----------+ v + | | + v | ^ + | | | | +Blockchain Node | | Consensus | | + | | | | + | +-------------------------------+ | CometBFT + | | | | + | | Networking | | + | | | | + v +-------------------------------+ v +``` + +The blockchain full-node presents itself as a binary, generally suffixed by `-d` for "daemon" (e.g. `appd` for `app` or `gaiad` for `gaia`). This binary is built by running a simple [`main.go`](/docs/sdk/next/documentation/operations/node#main-function) function placed in `./cmd/appd/`. This operation usually happens through the [Makefile](#dependencies-and-makefile). + +Once the main binary is built, the node can be started by running the [`start` command](/docs/sdk/next/documentation/operations/node#start-command). This command function primarily does three things: + +1. Create an instance of the state-machine defined in [`app.go`](#core-application-file). +2. Initialize the state-machine with the latest known state, extracted from the `db` stored in the `~/.app/data` folder. At this point, the state-machine is at height `appBlockHeight`. +3. Create and start a new CometBFT instance. Among other things, the node performs a handshake with its peers. It gets the latest `blockHeight` from them and replays blocks to sync to this height if it is greater than the local `appBlockHeight`. The node starts from genesis and CometBFT sends an `InitChain` message via the ABCI to the `app`, which triggers the [`InitChainer`](#initchainer). + + + When starting a CometBFT instance, the genesis file is the `0` height and the + state within the genesis file is committed at block height `1`. 
When querying + the state of the node, querying block height 0 will return an error. + + +## Core Application File + +In general, the core of the state-machine is defined in a file called `app.go`. This file mainly contains the **type definition of the application** and functions to **create and initialize it**. + +### Type Definition of the Application + +The first thing defined in `app.go` is the `type` of the application. It is generally comprised of the following parts: + +- **Embedding [runtime.App](/docs/sdk/next/documentation/application-framework/runtime)** The runtime package manages the application's core components and modules through dependency injection. It provides declarative configuration for module management, state storage, and ABCI handling. + - `Runtime` wraps `BaseApp`, meaning when a transaction is relayed by CometBFT to the application, `app` uses `runtime`'s methods to route them to the appropriate module. `BaseApp` implements all the [ABCI methods](https://docs.cometbft.com/v0.38/spec/abci/) and the [routing logic](/docs/sdk/next/documentation/application-framework/baseapp#service-routers). + - It automatically configures the **[module manager](/docs/sdk/next/documentation/module-system/module-manager#manager)** based on the app wiring configuration. The module manager facilitates operations related to these modules, like registering their [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services) and [gRPC `Query` service](#grpc-query-services), or setting the order of execution between modules for various functions like [`InitChainer`](#initchainer), [`PreBlocker`](#preblocker) and [`BeginBlocker` and `EndBlocker`](#beginblocker-and-endblocker). +- [**An App Wiring configuration file**](/docs/sdk/next/documentation/application-framework/runtime) The app wiring configuration file contains the list of application's modules that `runtime` must instantiate. The instantiation of the modules is done using `depinject`. 
It also contains the order in which all modules' `InitGenesis` and `Pre/Begin/EndBlocker` methods should be executed. +- **A reference to an [`appCodec`](/docs/sdk/next/documentation/protocol-development/encoding).** The application's `appCodec` is used to serialize and deserialize data structures in order to store them, as stores can only persist `[]bytes`. The default codec is [Protocol Buffers](/docs/sdk/next/documentation/protocol-development/encoding). +- **A reference to a [`legacyAmino`](/docs/sdk/next/documentation/protocol-development/encoding) codec.** Some parts of the Cosmos SDK have not been migrated to use the `appCodec` above, and are still hardcoded to use Amino. Other parts explicitly use Amino for backwards compatibility. For these reasons, the application still holds a reference to the legacy Amino codec. Please note that the Amino codec will be removed from the SDK in the upcoming releases. + +See an example of application type definition from `simapp`, the Cosmos SDK's own app used for demo and testing purposes: + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). 
+ / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom minting function that implements the mintkeeper.MintFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) 
+ / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) + + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+ app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +### Constructor Function + +Also defined in `app.go` is the constructor function, which constructs a new application of the type defined in the preceding section. 
The function must fulfill the `AppCreator` signature in order to be used in the [`start` command](/docs/sdk/next/documentation/operations/node#start-command) of the application's daemon command. + +```go expandable +package types + +import ( + + "encoding/json" + "io" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +any +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. + Application interface { + ABCI + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServerWithSkipCheckHeader registers gRPC services directly with the gRPC + / server and bypass check header flag. + RegisterGRPCServerWithSkipCheckHeader(grpc.Server, bool) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). 
+ RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for CometBFT queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context, config.Config) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +storetypes.CommitMultiStore + + / Return the snapshot manager + SnapshotManager() *snapshots.Manager + + / Close is called in start cmd to gracefully cleanup resources. + / Must be safe to be called multiple times. + Close() + +error +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. 
+ AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) +``` + +Here are the main actions performed by this function: + +- Instantiate a new [`codec`](/docs/sdk/next/documentation/protocol-development/encoding) and initialize the `codec` of each of the application's modules using the [basic manager](/docs/sdk/next/documentation/module-system/module-manager#basicmanager). +- Instantiate a new application with a reference to a `baseapp` instance, a codec, and all the appropriate store keys. +- Instantiate all the [`keeper`](#keeper) objects defined in the application's `type` using the `NewKeeper` function of each of the application's modules. Note that keepers must be instantiated in the correct order, as the `NewKeeper` of one module might require a reference to another module's `keeper`. +- Instantiate the application's [module manager](/docs/sdk/next/documentation/module-system/module-manager#manager) with the [`AppModule`](#application-module-interface) object of each of the application's modules. +- With the module manager, initialize the application's [`Msg` services](/docs/sdk/next/documentation/application-framework/baseapp#msg-services), [gRPC `Query` services](/docs/sdk/next/documentation/application-framework/baseapp#grpc-query-services), [legacy `Msg` routes](/docs/sdk/next/documentation/application-framework/baseapp#routing), and [legacy query routes](/docs/sdk/next/documentation/application-framework/baseapp#query-routing). When a transaction is relayed to the application by CometBFT via the ABCI, it is routed to the appropriate module's [`Msg` service](#msg-services) using the routes defined here. Likewise, when a gRPC query request is received by the application, it is routed to the appropriate module's [`gRPC query service`](#grpc-query-services) using the gRPC routes defined here. 
The Cosmos SDK still supports legacy `Msg`s and legacy CometBFT queries, which are routed using the legacy `Msg` routes and the legacy query routes, respectively. +- With the module manager, register the [application's modules' invariants](/docs/sdk/next/documentation/module-system/invariants). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](/docs/sdk/next/documentation/module-system/invariants#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different than the predicted one, special logic defined in the invariant registry is triggered (usually the chain is halted). This is useful to make sure that no critical bug goes unnoticed, producing long-lasting effects that are hard to fix. +- With the module manager, set the order of execution between the `InitGenesis`, `PreBlocker`, `BeginBlocker`, and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions. +- Set the remaining application parameters: + - [`InitChainer`](#initchainer): used to initialize the application when it is first started. + - [`PreBlocker`](#preblocker): called before BeginBlock. + - [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endblocker): called at the beginning and at the end of every block. + - [`anteHandler`](/docs/sdk/next/documentation/application-framework/baseapp#antehandler): used to handle fees and signature verification. +- Mount the stores. +- Return the application. + +Note that the constructor function only creates an instance of the app, while the actual state is either carried over from the `~/.app/data` folder if the node is restarted, or generated from the genesis file if the node is started for the first time. 
+ +See an example of application constructor from `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" 
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +### InitChainer + +The `InitChainer` is a function that initializes the state of the application from a genesis file (i.e. token balances of genesis accounts). It is called when the application receives the `InitChain` message from the CometBFT engine, which happens when the node is started at `appBlockHeight == 0` (i.e. on genesis). The application must set the `InitChainer` in its [constructor](#constructor-function) via the [`SetInitChainer`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetInitChainer) method. + +In general, the `InitChainer` is mostly composed of the [`InitGenesis`](/docs/sdk/next/documentation/module-system/genesis#initgenesis) function of each of the application's modules. This is done by calling the `InitGenesis` function of the module manager, which in turn calls the `InitGenesis` function of each of the modules it contains. Note that the order in which the modules' `InitGenesis` functions must be called has to be set in the module manager using the [module manager's](/docs/sdk/next/documentation/module-system/module-manager) `SetOrderInitGenesis` method. This is done in the [application's constructor](#constructor-function), and the `SetOrderInitGenesis` has to be called before the `SetInitChainer`. 
+ +See an example of an `InitChainer` from `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" 
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses()
+
+map[string]bool {
+	modAccAddrs := make(map[string]bool)
+	for acc := range GetMaccPerms() {
+		modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
+}
+
+	/ allow the following addresses to receive funds
+	delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())
+
+return modAccAddrs
+}
+```
+
+### PreBlocker
+
+The `PreBlocker` lifecycle method has two defining semantics:
+
+- It runs before the `BeginBlocker` of all modules.
+- It can modify consensus parameters in storage and signal this to the caller through its return value.
+
+When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameters in the finalize context:
+
+```go
+app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams())
+```
+
+The refreshed `ctx` must then be passed to all the other lifecycle methods.
+
+### BeginBlocker and EndBlocker
+
+The Cosmos SDK offers developers the possibility to execute code automatically as part of their application. This is implemented through two functions called `BeginBlocker` and `EndBlocker`. They are called when the application receives the `FinalizeBlock` message from the CometBFT consensus engine, at the beginning and at the end of each block respectively. The application must set the `BeginBlocker` and `EndBlocker` in its [constructor](#constructor-function) via the [`SetBeginBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetBeginBlocker) and [`SetEndBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetEndBlocker) methods.
+
+In general, the `BeginBlocker` and `EndBlocker` functions are mostly composed of the [`BeginBlock` and `EndBlock`](/docs/sdk/next/documentation/module-system/beginblock-endblock) functions of each of the application's modules. This is done by calling the `BeginBlock` and `EndBlock` functions of the module manager, which in turn calls the `BeginBlock` and `EndBlock` functions of each module it contains. Note that the order in which the modules' `BeginBlock` and `EndBlock` functions are called must be set in the [module manager](/docs/sdk/next/documentation/module-system/module-manager) using the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods, respectively. This is done in the [application's constructor](#constructor-function), and `SetOrderBeginBlockers` and `SetOrderEndBlockers` have to be called before `SetBeginBlocker` and `SetEndBlocker`.
+
+As a sidenote, it is important to remember that application-specific blockchains are deterministic. Developers must be careful not to introduce non-determinism in `BeginBlocker` or `EndBlocker`, and must also be careful not to make them too computationally expensive, as [gas](/docs/sdk/next/documentation/protocol-development/gas-fees) does not constrain the cost of `BeginBlocker` and `EndBlocker` execution.
+ +See an example of `BeginBlocker` and `EndBlocker` functions from `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" 
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +### Register Codec + +The `EncodingConfig` structure is the last important part of the `app.go` file. The goal of this structure is to define the codecs that will be used throughout the app. + +```go expandable +package params + +import ( + + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" +) + +/ EncodingConfig specifies the concrete encoding types to use for a given app. +/ This is provided for compatibility between protobuf and amino implementations. +type EncodingConfig struct { + InterfaceRegistry types.InterfaceRegistry + Codec codec.Codec + TxConfig client.TxConfig + Amino *codec.LegacyAmino +} +``` + +Here are descriptions of what each of the four fields means: + +- `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). `Any` could be thought as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. Each application module implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations. + - You can read more about `Any` in [ADR-019](docs/sdk/next/documentation/legacy/adr-comprehensive). 
+ - To go more into details, the Cosmos SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/cosmos/gogoproto). By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](docs/sdk/next/documentation/legacy/adr-comprehensive). +- `Codec`: The default codec used throughout the Cosmos SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to the users (for example, in the [CLI](#cli)). By default, the SDK uses Protobuf as `Codec`. +- `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](/docs/sdk/next/documentation/protocol-development/transactions). +- `Amino`: Some legacy parts of the Cosmos SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAmino` method to register the module's specific types within Amino. This `Amino` codec should not be used by app developers anymore, and will be removed in future releases. + +An application should create its own encoding config. 
+See an example of a `simappparams.EncodingConfig` from `simapp`: + +```go expandable +package params + +import ( + + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" +) + +/ EncodingConfig specifies the concrete encoding types to use for a given app. +/ This is provided for compatibility between protobuf and amino implementations. +type EncodingConfig struct { + InterfaceRegistry types.InterfaceRegistry + Codec codec.Codec + TxConfig client.TxConfig + Amino *codec.LegacyAmino +} +``` + +## Modules + +[Modules](/docs/sdk/next/documentation/module-system/intro) are the heart and soul of Cosmos SDK applications. They can be considered as state-machines nested within the state-machine. When a transaction is relayed from the underlying CometBFT engine via the ABCI to the application, it is routed by [`baseapp`](/docs/sdk/next/documentation/application-framework/baseapp) to the appropriate module in order to be processed. This paradigm enables developers to easily build complex state-machines, as most of the modules they need often already exist. **For developers, most of the work involved in building a Cosmos SDK application revolves around building custom modules required by their application that do not exist yet, and integrating them with modules that do already exist into one coherent application**. In the application directory, the standard practice is to store modules in the `x/` folder (not to be confused with the Cosmos SDK's `x/` folder, which contains already-built modules). + +### Application Module Interface + +Modules must implement [interfaces](/docs/sdk/next/documentation/module-system/module-manager#application-module-interfaces) defined in the Cosmos SDK, [`AppModuleBasic`](/docs/sdk/next/documentation/module-system/module-manager#appmodulebasic) and [`AppModule`](/docs/sdk/next/documentation/module-system/module-manager#appmodule). 
The former implements basic non-dependent elements of the module, such as the `codec`, while the latter handles the bulk of the module methods (including methods that require references to other modules' `keeper`s). Both the `AppModule` and `AppModuleBasic` types are, by convention, defined in a file called `module.go`.
+
+`AppModule` exposes a collection of useful methods on the module that facilitate the composition of modules into a coherent application. These methods are called from the [`module manager`](/docs/sdk/next/documentation/module-system/module-manager#manager), which manages the application's collection of modules.
+
+### `Msg` Services
+
+Each application module defines two [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods.
+Each Protobuf `Msg` service method maps 1:1 to a Protobuf request type, which must implement the `sdk.Msg` interface.
+Note that `sdk.Msg`s are bundled in [transactions](/docs/sdk/next/documentation/protocol-development/transactions), and each transaction contains one or multiple messages.
+
+When a valid block of transactions is received by the full-node, CometBFT relays each one to the application via [`DeliverTx`](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#specifics-of-responsedelivertx). Then, the application handles the transaction:
+
+1. Upon receiving the transaction, the application first unmarshals it from `[]byte`.
+2. Then, it verifies a few things about the transaction like [fee payment and signatures](/docs/sdk/next/documentation/protocol-development/gas-fees#antehandler) before extracting the `Msg`(s) contained in the transaction.
+3. `sdk.Msg`s are encoded using Protobuf [`Any`s](#register-codec).
By analyzing each `Any`'s `type_url`, baseapp's `msgServiceRouter` routes the `sdk.Msg` to the corresponding module's `Msg` service. +4. If the message is successfully processed, the state is updated. + +For more details, see [transaction lifecycle](/docs/sdk/next/documentation/protocol-development/tx-lifecycle). + +Module developers create custom `Msg` services when they build their own module. The general practice is to define the `Msg` Protobuf service in a `tx.proto` file. For example, the `x/bank` module defines a service with two methods to transfer tokens: + +```protobuf +// Msg defines the bank Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + // Send defines a method for sending coins from one account to another account. + rpc Send(MsgSend) returns (MsgSendResponse); + + // MultiSend defines a method for sending coins from some accounts to other accounts. + rpc MultiSend(MsgMultiSend) returns (MsgMultiSendResponse); + + // UpdateParams defines a governance operation for updating the x/bank module parameters. + // The authority is defined in the keeper. + rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse) { + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; + }; + + // SetSendEnabled is a governance operation for setting the SendEnabled flag + // on any number of Denoms. Only the entries to add or update should be + // included. Entries that already exist in the store, but that aren't + // included in this message, will be left unchanged. + rpc SetSendEnabled(MsgSetSendEnabled) returns (MsgSetSendEnabledResponse) { + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; + }; +} +``` + +Service methods use `keeper` in order to update the module state. + +Each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterMsgServer` function provided by the generated Protobuf code. 
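The `type_url`-based dispatch described above can be pictured with a toy router. This is an illustrative sketch only, with hypothetical names (`Msg`, `Router`, `/toy.bank.v1.MsgSend`); the SDK's actual `msgServiceRouter` dispatches to handlers generated from the Protobuf `Msg` service definitions.

```go
package main

import (
	"errors"
	"fmt"
)

// Msg is a toy stand-in for sdk.Msg: a message carried in a transaction,
// identified by its type_url.
type Msg interface {
	TypeURL() string
}

// MsgSend is a hypothetical message, in the spirit of the bank module's MsgSend.
type MsgSend struct {
	From, To string
	Amount   int64
}

func (MsgSend) TypeURL() string { return "/toy.bank.v1.MsgSend" }

// Router maps a message's type_url to the handler its module registered,
// mirroring how baseapp routes each sdk.Msg to a module's Msg service.
type Router struct {
	handlers map[string]func(Msg) error
}

func NewRouter() *Router {
	return &Router{handlers: make(map[string]func(Msg) error)}
}

// Register claims a type_url for a handler, as a module's RegisterServices
// implementation does in the real SDK.
func (r *Router) Register(typeURL string, h func(Msg) error) {
	r.handlers[typeURL] = h
}

// Route looks up the handler by the message's type_url and invokes it.
func (r *Router) Route(m Msg) error {
	h, ok := r.handlers[m.TypeURL()]
	if !ok {
		return errors.New("unrecognized message type: " + m.TypeURL())
	}
	return h(m)
}

func main() {
	r := NewRouter()
	r.Register("/toy.bank.v1.MsgSend", func(m Msg) error {
		send := m.(MsgSend)
		fmt.Printf("transfer %d from %s to %s\n", send.Amount, send.From, send.To)
		return nil
	})
	if err := r.Route(MsgSend{From: "alice", To: "bob", Amount: 10}); err != nil {
		panic(err)
	}
	// prints "transfer 10 from alice to bob"
}
```

A message whose `type_url` was never registered is rejected, which is why the global-registration behavior of gogoproto's `Any` (discussed under [Register Codec](#register-codec)) matters: routing must only reach explicitly registered types.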
+ +### gRPC `Query` Services + +gRPC `Query` services allow users to query the state using [gRPC](https://grpc.io). They are enabled by default, and can be configured under the `grpc.enable` and `grpc.address` fields inside [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml). + +gRPC `Query` services are defined in the module's Protobuf definition files, specifically inside `query.proto`. The `query.proto` definition file exposes a single `Query` [Protobuf service](https://developers.google.com/protocol-buffers/docs/proto#services). Each gRPC query endpoint corresponds to a service method, starting with the `rpc` keyword, inside the `Query` service. + +Protobuf generates a `QueryServer` interface for each module, containing all the service methods. A module's [`keeper`](#keeper) then needs to implement this `QueryServer` interface, by providing the concrete implementation of each service method. This concrete implementation is the handler of the corresponding gRPC query endpoint. + +Finally, each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterQueryServer` function provided by the generated Protobuf code. + +### Keeper + +[`Keepers`](/docs/sdk/next/documentation/module-system/keeper) are the gatekeepers of their module's store(s). To read or write in a module's store, it is mandatory to go through one of its `keeper`'s methods. This is ensured by the [object-capabilities](/docs/sdk/next/documentation/core-concepts/ocap) model of the Cosmos SDK. Only objects that hold the key to a store can access it, and only the module's `keeper` should hold the key(s) to the module's store(s). + +`Keepers` are generally defined in a file called `keeper.go`. It contains the `keeper`'s type definition and methods. 
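As a toy illustration of this gatekeeper idea (hypothetical names throughout; not the SDK's actual store or codec APIs), a keeper can be pictured as the sole holder of a store key, marshaling values before they reach the store:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StoreKey grants access to one module's store. Only that module's keeper
// should hold it: a minimal sketch of the object-capabilities model.
type StoreKey struct{ name string }

// multistore maps each StoreKey to exactly one key/value store.
var multistore = map[*StoreKey]map[string][]byte{}

func NewKVStoreKey(name string) *StoreKey {
	k := &StoreKey{name: name}
	multistore[k] = map[string][]byte{}
	return k
}

// Keeper gatekeeps reads and writes to its module's store. Stores only
// accept []byte values, so the keeper marshals structs with its codec
// (plain JSON here; the SDK uses Protobuf).
type Keeper struct {
	storeKey *StoreKey
}

func NewKeeper(storeKey *StoreKey) Keeper { return Keeper{storeKey: storeKey} }

// Set marshals value and writes the bytes under key.
func (k Keeper) Set(key string, value any) error {
	bz, err := json.Marshal(value)
	if err != nil {
		return err
	}
	multistore[k.storeKey][key] = bz
	return nil
}

// Get reads the bytes under key and unmarshals them into value.
func (k Keeper) Get(key string, value any) error {
	bz, ok := multistore[k.storeKey][key]
	if !ok {
		return fmt.Errorf("key %q not found", key)
	}
	return json.Unmarshal(bz, value)
}

func main() {
	k := NewKeeper(NewKVStoreKey("bank"))
	type Balance struct {
		Denom  string
		Amount int64
	}
	if err := k.Set("alice", Balance{Denom: "atom", Amount: 42}); err != nil {
		panic(err)
	}
	var b Balance
	if err := k.Get("alice", &b); err != nil {
		panic(err)
	}
	fmt.Println(b.Denom, b.Amount) // prints "atom 42"
}
```

Any code path that lacks the `StoreKey` simply cannot touch the store, which is the property the SDK enforces between modules.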
+
+The `keeper` type definition generally consists of the following:
+
+- **Key(s)** to the module's store(s) in the multistore.
+- References to **other modules' `keeper`s**. Only needed if the `keeper` needs to access other modules' store(s) (either to read from or write to them).
+- A reference to the application's **codec**. The `keeper` needs it to marshal structs before storing them, or to unmarshal them when it retrieves them, because stores only accept `[]byte` as values.
+
+Along with the type definition, the next important component of the `keeper.go` file is the `keeper`'s constructor function, `NewKeeper`. This function instantiates a new `keeper` of the type defined above, taking a `codec`, store `key`s, and potentially references to other modules' `keeper`s as parameters. The `NewKeeper` function is called from the [application's constructor](#constructor-function). The rest of the file defines the `keeper`'s methods, which are primarily getters and setters.
+
+### Command-Line, gRPC Services and REST Interfaces
+
+Each module defines command-line commands, gRPC services, and REST routes to be exposed to the end-user via the [application's interfaces](#application-interfaces). This enables end-users to create messages of the types defined in the module, or to query the subset of the state managed by the module.
+
+#### CLI
+
+Generally, the [commands related to a module](/docs/sdk/next/documentation/module-system/module-interfaces#cli) are defined in a folder called `client/cli` in the module's folder. The CLI divides commands into two categories, transactions and queries, defined in `client/cli/tx.go` and `client/cli/query.go`, respectively. Both sets of commands are built on top of the [Cobra Library](https://github.com/spf13/cobra):
+
+- Transaction commands let users generate new transactions so that they can be included in a block and eventually update the state. One command should be created for each [message type](#message-types) defined in the module.
The command calls the constructor of the message with the parameters provided by the end-user, and wraps it into a transaction. The Cosmos SDK handles signing and the addition of other transaction metadata.
+- Query commands let users query the subset of the state defined by the module. Query commands forward queries to the [application's query router](/docs/sdk/next/documentation/application-framework/baseapp#query-routing), which routes them to the appropriate [querier](#querier) using the `queryRoute` parameter supplied.
+
+#### gRPC
+
+[gRPC](https://grpc.io) is a modern, open-source, high-performance RPC framework with support in multiple languages. It is the recommended way for external clients (such as wallets, browsers and other backend services) to interact with a node.
+
+Each module can expose gRPC endpoints called [service methods](https://grpc.io/docs/what-is-grpc/core-concepts/#service-definition), which are defined in the [module's Protobuf `query.proto` file](#grpc-query-services). A service method is defined by its name, input arguments, and output response. The module then needs to perform the following actions:
+
+- Define a `RegisterGRPCGatewayRoutes` method on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module.
+- For each service method, define a corresponding handler. The handler implements the core logic necessary to serve the gRPC request, and is located in the `keeper/grpc_query.go` file.
+
+#### gRPC-gateway REST Endpoints
+
+Some external clients may not wish to use gRPC. In this case, the Cosmos SDK provides a gRPC gateway service, which exposes each gRPC service as a corresponding REST endpoint. Please refer to the [grpc-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) documentation to learn more.
+
+The REST endpoints are defined in the Protobuf files, along with the gRPC services, using Protobuf annotations.
Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods. By default, all REST endpoints defined in the SDK have a URL starting with the `/cosmos/` prefix.
+
+The Cosmos SDK also provides a development endpoint to generate [Swagger](https://swagger.io/) definition files for these REST endpoints. This endpoint can be enabled inside the [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml) config file, under the `api.swagger` key.
+
+## Application Interface
+
+[Interfaces](#command-line-grpc-services-and-rest-interfaces) let end-users interact with full-node clients. This means querying data from the full-node or creating and sending new transactions to be relayed by the full-node and eventually included in a block.
+
+The main interface is the [Command-Line Interface](/docs/sdk/next/api-reference/client-tools/cli). The CLI of a Cosmos SDK application is built by aggregating [CLI commands](#cli) defined in each of the modules used by the application. The CLI of an application is the same as the daemon (e.g. `appd`), and is defined in a file called `appd/main.go`. The file contains the following:
+
+- **A `main()` function**, which is executed to build the `appd` interface client. This function prepares each command and adds them to the `rootCmd` before building them. At the root of `appd`, the function adds generic commands like `status`, `keys`, and `config`, query commands, tx commands, and `rest-server`.
+- **Query commands**, which are added by calling the `queryCmd` function. This function returns a Cobra command that contains the query commands defined in each of the application's modules (passed as an array of `sdk.ModuleClients` from the `main()` function), as well as some other lower level query commands such as block or validator queries. Query commands are called by using the command `appd query [query]` of the CLI.
+- **Transaction commands**, which are added by calling the `txCmd` function. Similar to `queryCmd`, the function returns a Cobra command that contains the tx commands defined in each of the application's modules, as well as lower level tx commands like transaction signing or broadcasting. Tx commands are called by using the command `appd tx [tx]` of the CLI. + +See an example of an application's main command-line file from the [Cosmos Hub](https://github.com/cosmos/gaia). + +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + "path/filepath" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/snapshots" + snapshottypes "github.com/cosmos/cosmos-sdk/snapshots/types" + "github.com/cosmos/cosmos-sdk/store" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" + "github.com/spf13/cast" + "github.com/spf13/cobra" + tmcfg "github.com/tendermint/tendermint/config" + tmcli "github.com/tendermint/tendermint/libs/cli" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + gaia "github.com/cosmos/gaia/v8/app" + "github.com/cosmos/gaia/v8/app/params" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. 
+func NewRootCmd() (*cobra.Command, params.EncodingConfig) { + encodingConfig := gaia.MakeTestEncodingConfig() + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(gaia.DefaultNodeHome). + WithViper("") + rootCmd := &cobra.Command{ + Use: "gaiad", + Short: "Stargate Cosmos Hub App", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err = client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customTemplate, customGaiaConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customTemplate, customGaiaConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd, encodingConfig +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. 
+func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +func initAppConfig() (string, interface{ +}) { + srvCfg := serverconfig.DefaultConfig() + +srvCfg.StateSync.SnapshotInterval = 1000 + srvCfg.StateSync.SnapshotKeepRecent = 10 + + return params.CustomConfigTemplate(), params.CustomAppConfig{ + Config: *srvCfg, + BypassMinFeeMsgTypes: gaia.GetDefaultBypassFeeMessages(), +} +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(gaia.ModuleBasics, gaia.DefaultNodeHome), + genutilcli.CollectGenTxsCmd(banktypes.GenesisBalancesIterator{ +}, gaia.DefaultNodeHome), + genutilcli.GenTxCmd(gaia.ModuleBasics, encodingConfig.TxConfig, banktypes.GenesisBalancesIterator{ +}, gaia.DefaultNodeHome), + genutilcli.ValidateGenesisCmd(gaia.ModuleBasics), + AddGenesisAccountCmd(gaia.DefaultNodeHome), + tmcli.NewCompletionCmd(rootCmd, true), + testnetCmd(gaia.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + ) + ac := appCreator{ + encCfg: encodingConfig, +} + +server.AddCommands(rootCmd, gaia.DefaultNodeHome, ac.newApp, ac.appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + queryCommand(), + txCommand(), + keys.Commands(gaia.DefaultNodeHome), + ) + +rootCmd.AddCommand(server.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + 
+cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +gaia.ModuleBasics.AddQueryCommands(cmd) + +cmd.PersistentFlags().String(flags.FlagChainID, "", "The network chain ID") + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + flags.LineBreak, + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +gaia.ModuleBasics.AddTxCommands(cmd) + +cmd.PersistentFlags().String(flags.FlagChainID, "", "The network chain ID") + +return cmd +} + +type appCreator struct { + encCfg params.EncodingConfig +} + +func (ac appCreator) + +newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + var cache sdk.MultiStorePersistentCache + if cast.ToBool(appOpts.Get(server.FlagInterBlockCache)) { + cache = store.NewCommitKVStoreCacheManager() +} + skipUpgradeHeights := make(map[int64]bool) + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + +pruningOpts, err := server.GetPruningOptionsFromFlags(appOpts) + if err != nil { + panic(err) +} + snapshotDir := filepath.Join(cast.ToString(appOpts.Get(flags.FlagHome)), "data", "snapshots") + +snapshotDB, err := dbm.NewDB("metadata", server.GetAppDBBackend(appOpts), snapshotDir) + if err != nil { + panic(err) +} + +snapshotStore, err := snapshots.NewStore(snapshotDB, snapshotDir) + if err != nil { + panic(err) +} + snapshotOptions := snapshottypes.NewSnapshotOptions( + 
cast.ToUint64(appOpts.Get(server.FlagStateSyncSnapshotInterval)), + cast.ToUint32(appOpts.Get(server.FlagStateSyncSnapshotKeepRecent)), + ) + +return gaia.NewGaiaApp( + logger, db, traceStore, true, skipUpgradeHeights, + cast.ToString(appOpts.Get(flags.FlagHome)), + cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)), + ac.encCfg, + appOpts, + baseapp.SetPruning(pruningOpts), + baseapp.SetMinGasPrices(cast.ToString(appOpts.Get(server.FlagMinGasPrices))), + baseapp.SetHaltHeight(cast.ToUint64(appOpts.Get(server.FlagHaltHeight))), + baseapp.SetHaltTime(cast.ToUint64(appOpts.Get(server.FlagHaltTime))), + baseapp.SetMinRetainBlocks(cast.ToUint64(appOpts.Get(server.FlagMinRetainBlocks))), + baseapp.SetInterBlockCache(cache), + baseapp.SetTrace(cast.ToBool(appOpts.Get(server.FlagTrace))), + baseapp.SetIndexEvents(cast.ToStringSlice(appOpts.Get(server.FlagIndexEvents))), + baseapp.SetSnapshot(snapshotStore, snapshotOptions), + ) +} + +func (ac appCreator) + +appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, +) (servertypes.ExportedApp, error) { + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home is not set") +} + +var loadLatest bool + if height == -1 { + loadLatest = true +} + gaiaApp := gaia.NewGaiaApp( + logger, + db, + traceStore, + loadLatest, + map[int64]bool{ +}, + homePath, + cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)), + ac.encCfg, + appOpts, + ) + if height != -1 { + if err := gaiaApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +return gaiaApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs) +} +``` + +## Dependencies and Makefile + +This section is optional, as developers are free to choose their dependency manager and project building method. 
That said, the current most used framework for versioning control is [`go.mod`](https://github.com/golang/go/wiki/Modules). It ensures each of the libraries used throughout the application are imported with the correct version. + +The following is the `go.mod` of the [Cosmos Hub](https://github.com/cosmos/gaia), provided as an example. + +```go expandable +module github.com/cosmos/gaia/v8 + +go 1.18 + +require ( + cosmossdk.io/math v1.0.0-beta.3 + github.com/cosmos/cosmos-sdk v0.46.2 + github.com/cosmos/go-bip39 v1.0.0 / indirect + github.com/cosmos/ibc-go/v5 v5.0.0 + github.com/gogo/protobuf v1.3.3 + github.com/golang/protobuf v1.5.2 + github.com/golangci/golangci-lint v1.50.0 + github.com/gorilla/mux v1.8.0 + github.com/gravity-devs/liquidity/v2 v2.0.0 + github.com/grpc-ecosystem/grpc-gateway v1.16.0 + github.com/pkg/errors v0.9.1 + github.com/rakyll/statik v0.1.7 + github.com/spf13/cast v1.5.0 + github.com/spf13/cobra v1.6.0 + github.com/spf13/pflag v1.0.5 + github.com/spf13/viper v1.13.0 + github.com/strangelove-ventures/packet-forward-middleware/v2 v2.1.4-0.20220802012200-5a62a55a7f1d + github.com/stretchr/testify v1.8.0 + github.com/tendermint/tendermint v0.34.21 + github.com/tendermint/tm-db v0.6.7 + google.golang.org/genproto v0.0.0-20220815135757-37a418bb8959 + google.golang.org/grpc v1.50.0 +) + +require ( + 4d63.com/gochecknoglobals v0.1.0 / indirect + cloud.google.com/go v0.102.1 / indirect + cloud.google.com/go/compute v1.7.0 / indirect + cloud.google.com/go/iam v0.4.0 / indirect + cloud.google.com/go/storage v1.22.1 / indirect + cosmossdk.io/errors v1.0.0-beta.7 / indirect + filippo.io/edwards25519 v1.0.0-rc.1 / indirect + github.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4 / indirect + github.com/99designs/keyring v1.2.1 / indirect + github.com/Abirdcfly/dupword v0.0.7 / indirect + github.com/Antonboom/errname v0.1.7 / indirect + github.com/Antonboom/nilnil v0.1.1 / indirect + github.com/BurntSushi/toml v1.2.0 / indirect + 
github.com/ChainSafe/go-schnorrkel v0.0.0-20200405005733-88cbf1b4c40d / indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 / indirect + github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0 / indirect + github.com/Masterminds/semver v1.5.0 / indirect + github.com/OpenPeeDeeP/depguard v1.1.1 / indirect + github.com/Workiva/go-datastructures v1.0.53 / indirect + github.com/alexkohler/prealloc v1.0.0 / indirect + github.com/alingse/asasalint v0.0.11 / indirect + github.com/armon/go-metrics v0.4.0 / indirect + github.com/ashanbrown/forbidigo v1.3.0 / indirect + github.com/ashanbrown/makezero v1.1.1 / indirect + github.com/aws/aws-sdk-go v1.40.45 / indirect + github.com/beorn7/perks v1.0.1 / indirect + github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d / indirect + github.com/bgentry/speakeasy v0.1.0 / indirect + github.com/bkielbasa/cyclop v1.2.0 / indirect + github.com/blizzy78/varnamelen v0.8.0 / indirect + github.com/bombsimon/wsl/v3 v3.3.0 / indirect + github.com/breml/bidichk v0.2.3 / indirect + github.com/breml/errchkjson v0.3.0 / indirect + github.com/btcsuite/btcd v0.22.1 / indirect + github.com/butuzov/ireturn v0.1.1 / indirect + github.com/cenkalti/backoff/v4 v4.1.3 / indirect + github.com/cespare/xxhash v1.1.0 / indirect + github.com/cespare/xxhash/v2 v2.1.2 / indirect + github.com/charithe/durationcheck v0.0.9 / indirect + github.com/chavacava/garif v0.0.0-20220630083739-93517212f375 / indirect + github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e / indirect + github.com/cockroachdb/apd/v2 v2.0.2 / indirect + github.com/coinbase/rosetta-sdk-go v0.7.9 / indirect + github.com/confio/ics23/go v0.7.0 / indirect + github.com/cosmos/btcutil v1.0.4 / indirect + github.com/cosmos/cosmos-proto v1.0.0-alpha7 / indirect + github.com/cosmos/gorocksdb v1.2.0 / indirect + github.com/cosmos/iavl v0.19.2-0.20220916140702-9b6be3095313 / indirect + github.com/cosmos/ledger-cosmos-go v0.11.1 / indirect + 
github.com/cosmos/ledger-go v0.9.3 / indirect + github.com/creachadair/taskgroup v0.3.2 / indirect + github.com/curioswitch/go-reassign v0.2.0 / indirect + github.com/daixiang0/gci v0.8.0 / indirect + github.com/danieljoos/wincred v1.1.2 / indirect + github.com/davecgh/go-spew v1.1.1 / indirect + github.com/denis-tingaikin/go-header v0.4.3 / indirect + github.com/desertbit/timer v0.0.0-20180107155436-c41aec40b27f / indirect + github.com/dgraph-io/badger/v2 v2.2007.4 / indirect + github.com/dgraph-io/ristretto v0.1.0 / indirect + github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 / indirect + github.com/dustin/go-humanize v1.0.0 / indirect + github.com/dvsekhvalnov/jose2go v1.5.0 / indirect + github.com/esimonov/ifshort v1.0.4 / indirect + github.com/ettle/strcase v0.1.1 / indirect + github.com/fatih/color v1.13.0 / indirect + github.com/fatih/structtag v1.2.0 / indirect + github.com/felixge/httpsnoop v1.0.1 / indirect + github.com/firefart/nonamedreturns v1.0.4 / indirect + github.com/fsnotify/fsnotify v1.5.4 / indirect + github.com/fzipp/gocyclo v0.6.0 / indirect + github.com/go-critic/go-critic v0.6.5 / indirect + github.com/go-kit/kit v0.12.0 / indirect + github.com/go-kit/log v0.2.1 / indirect + github.com/go-logfmt/logfmt v0.5.1 / indirect + github.com/go-playground/validator/v10 v10.4.1 / indirect + github.com/go-toolsmith/astcast v1.0.0 / indirect + github.com/go-toolsmith/astcopy v1.0.2 / indirect + github.com/go-toolsmith/astequal v1.0.3 / indirect + github.com/go-toolsmith/astfmt v1.0.0 / indirect + github.com/go-toolsmith/astp v1.0.0 / indirect + github.com/go-toolsmith/strparse v1.0.0 / indirect + github.com/go-toolsmith/typep v1.0.2 / indirect + github.com/go-xmlfmt/xmlfmt v0.0.0-20191208150333-d5b6f63a941b / indirect + github.com/gobwas/glob v0.2.3 / indirect + github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 / indirect + github.com/gofrs/flock v0.8.1 / indirect + github.com/gogo/gateway v1.1.0 / indirect + github.com/golang/glog 
v1.0.0 / indirect + github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da / indirect + github.com/golang/snappy v0.0.4 / indirect + github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 / indirect + github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a / indirect + github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe / indirect + github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2 / indirect + github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 / indirect + github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca / indirect + github.com/golangci/misspell v0.3.5 / indirect + github.com/golangci/revgrep v0.0.0-20220804021717-745bb2f7c2e6 / indirect + github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 / indirect + github.com/google/btree v1.0.1 / indirect + github.com/google/go-cmp v0.5.9 / indirect + github.com/google/orderedcode v0.0.1 / indirect + github.com/google/uuid v1.3.0 / indirect + github.com/googleapis/enterprise-certificate-proxy v0.1.0 / indirect + github.com/googleapis/gax-go/v2 v2.4.0 / indirect + github.com/googleapis/go-type-adapters v1.0.0 / indirect + github.com/gordonklaus/ineffassign v0.0.0-20210914165742-4cc7213b9bc8 / indirect + github.com/gorilla/handlers v1.5.1 / indirect + github.com/gorilla/websocket v1.5.0 / indirect + github.com/gostaticanalysis/analysisutil v0.7.1 / indirect + github.com/gostaticanalysis/comment v1.4.2 / indirect + github.com/gostaticanalysis/forcetypeassert v0.1.0 / indirect + github.com/gostaticanalysis/nilerr v0.1.1 / indirect + github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 / indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.0.1 / indirect + github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c / indirect + github.com/gtank/merlin v0.1.1 / indirect + github.com/gtank/ristretto255 v0.1.2 / indirect + github.com/hashicorp/errwrap v1.1.0 / indirect + github.com/hashicorp/go-cleanhttp v0.5.2 / indirect + 
github.com/hashicorp/go-getter v1.6.1 / indirect + github.com/hashicorp/go-immutable-radix v1.3.1 / indirect + github.com/hashicorp/go-multierror v1.1.1 / indirect + github.com/hashicorp/go-safetemp v1.0.0 / indirect + github.com/hashicorp/go-version v1.6.0 / indirect + github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d / indirect + github.com/hashicorp/hcl v1.0.0 / indirect + github.com/hdevalence/ed25519consensus v0.0.0-20220222234857-c00d1f31bab3 / indirect + github.com/hexops/gotextdiff v1.0.3 / indirect + github.com/improbable-eng/grpc-web v0.15.0 / indirect + github.com/inconshreveable/mousetrap v1.0.1 / indirect + github.com/jgautheron/goconst v1.5.1 / indirect + github.com/jingyugao/rowserrcheck v1.1.1 / indirect + github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af / indirect + github.com/jmespath/go-jmespath v0.4.0 / indirect + github.com/jmhodges/levigo v1.0.0 / indirect + github.com/julz/importas v0.1.0 / indirect + github.com/kisielk/errcheck v1.6.2 / indirect + github.com/kisielk/gotool v1.0.0 / indirect + github.com/kkHAIKE/contextcheck v1.1.2 / indirect + github.com/klauspost/compress v1.15.9 / indirect + github.com/kulti/thelper v0.6.3 / indirect + github.com/kunwardeep/paralleltest v1.0.6 / indirect + github.com/kyoh86/exportloopref v0.1.8 / indirect + github.com/ldez/gomoddirectives v0.2.3 / indirect + github.com/ldez/tagliatelle v0.3.1 / indirect + github.com/leonklingele/grouper v1.1.0 / indirect + github.com/lib/pq v1.10.6 / indirect + github.com/libp2p/go-buffer-pool v0.1.0 / indirect + github.com/lufeee/execinquery v1.2.1 / indirect + github.com/magiconair/properties v1.8.6 / indirect + github.com/manifoldco/promptui v0.9.0 / indirect + github.com/maratori/testableexamples v1.0.0 / indirect + github.com/maratori/testpackage v1.1.0 / indirect + github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 / indirect + github.com/mattn/go-colorable v0.1.13 / indirect + github.com/mattn/go-isatty v0.0.16 / 
indirect + github.com/mattn/go-runewidth v0.0.9 / indirect + github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 / indirect + github.com/mbilski/exhaustivestruct v1.2.0 / indirect + github.com/mgechev/revive v1.2.4 / indirect + github.com/mimoo/StrobeGo v0.0.0-20181016162300-f8f6d4d2b643 / indirect + github.com/minio/highwayhash v1.0.2 / indirect + github.com/mitchellh/go-homedir v1.1.0 / indirect + github.com/mitchellh/go-testing-interface v1.0.0 / indirect + github.com/mitchellh/mapstructure v1.5.0 / indirect + github.com/moricho/tparallel v0.2.1 / indirect + github.com/mtibben/percent v0.2.1 / indirect + github.com/nakabonne/nestif v0.3.1 / indirect + github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 / indirect + github.com/nishanths/exhaustive v0.8.3 / indirect + github.com/nishanths/predeclared v0.2.2 / indirect + github.com/olekukonko/tablewriter v0.0.5 / indirect + github.com/pelletier/go-toml v1.9.5 / indirect + github.com/pelletier/go-toml/v2 v2.0.5 / indirect + github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5 / indirect + github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d / indirect + github.com/pmezard/go-difflib v1.0.0 / indirect + github.com/polyfloyd/go-errorlint v1.0.5 / indirect + github.com/prometheus/client_golang v1.12.2 / indirect + github.com/prometheus/client_model v0.2.0 / indirect + github.com/prometheus/common v0.34.0 / indirect + github.com/prometheus/procfs v0.7.3 / indirect + github.com/quasilyte/go-ruleguard v0.3.18 / indirect + github.com/quasilyte/gogrep v0.0.0-20220828223005-86e4605de09f / indirect + github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95 / indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 / indirect + github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 / indirect + github.com/regen-network/cosmos-proto v0.3.1 / indirect + github.com/rs/cors v1.8.2 / indirect + github.com/rs/zerolog v1.27.0 
/ indirect + github.com/ryancurrah/gomodguard v1.2.4 / indirect + github.com/ryanrolds/sqlclosecheck v0.3.0 / indirect + github.com/sanposhiho/wastedassign/v2 v2.0.6 / indirect + github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa / indirect + github.com/sashamelentyev/interfacebloat v1.1.0 / indirect + github.com/sashamelentyev/usestdlibvars v1.20.0 / indirect + github.com/securego/gosec/v2 v2.13.1 / indirect + github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c / indirect + github.com/sirupsen/logrus v1.9.0 / indirect + github.com/sivchari/containedctx v1.0.2 / indirect + github.com/sivchari/nosnakecase v1.7.0 / indirect + github.com/sivchari/tenv v1.7.0 / indirect + github.com/sonatard/noctx v0.0.1 / indirect + github.com/sourcegraph/go-diff v0.6.1 / indirect + github.com/spf13/afero v1.8.2 / indirect + github.com/spf13/jwalterweatherman v1.1.0 / indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 / indirect + github.com/stbenjam/no-sprintf-host-port v0.1.1 / indirect + github.com/stretchr/objx v0.4.0 / indirect + github.com/subosito/gotenv v1.4.1 / indirect + github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 / indirect + github.com/tdakkota/asciicheck v0.1.1 / indirect + github.com/tendermint/btcd v0.1.1 / indirect + github.com/tendermint/crypto v0.0.0-20191022145703-50d29ede1e15 / indirect + github.com/tendermint/go-amino v0.16.0 / indirect + github.com/tetafro/godot v1.4.11 / indirect + github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144 / indirect + github.com/timonwong/loggercheck v0.9.3 / indirect + github.com/tomarrell/wrapcheck/v2 v2.6.2 / indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.0 / indirect + github.com/ulikunitz/xz v0.5.8 / indirect + github.com/ultraware/funlen v0.0.3 / indirect + github.com/ultraware/whitespace v0.0.5 / indirect + github.com/uudashr/gocognit v1.0.6 / indirect + github.com/yagipy/maintidx v1.0.0 / indirect + github.com/yeya24/promlinter v0.2.0 / indirect + github.com/zondax/hid 
v0.9.1-0.20220302062450-5552068d2266 / indirect + gitlab.com/bosi/decorder v0.2.3 / indirect + go.etcd.io/bbolt v1.3.6 / indirect + go.opencensus.io v0.23.0 / indirect + go.uber.org/atomic v1.9.0 / indirect + go.uber.org/multierr v1.8.0 / indirect + go.uber.org/zap v1.21.0 / indirect + golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa / indirect + golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e / indirect + golang.org/x/exp/typeparams v0.0.0-20220827204233-334a2380cb91 / indirect + golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 / indirect + golang.org/x/net v0.0.0-20220726230323-06994584191e / indirect + golang.org/x/oauth2 v0.0.0-20220622183110-fd043fe589d2 / indirect + golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde / indirect + golang.org/x/sys v0.0.0-20220915200043-7b5979e65e41 / indirect + golang.org/x/term v0.0.0-20220722155259-a9ba230a4035 / indirect + golang.org/x/text v0.3.7 / indirect + golang.org/x/tools v0.1.12 / indirect + golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f / indirect + google.golang.org/api v0.93.0 / indirect + google.golang.org/appengine v1.6.7 / indirect + google.golang.org/protobuf v1.28.1 / indirect + gopkg.in/ini.v1 v1.67.0 / indirect + gopkg.in/yaml.v2 v2.4.0 / indirect + gopkg.in/yaml.v3 v3.0.1 / indirect + honnef.co/go/tools v0.3.3 / indirect + mvdan.cc/gofumpt v0.4.0 / indirect + mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed / indirect + mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b / indirect + mvdan.cc/unparam v0.0.0-20220706161116-678bad134442 / indirect + nhooyr.io/websocket v1.8.6 / indirect + sigs.k8s.io/yaml v1.3.0 / indirect +) + +replace ( + github.com/gogo/protobuf => github.com/regen-network/protobuf v1.3.3-alpha.regen.1 + github.com/zondax/hid => github.com/zondax/hid v0.9.0 +) +``` + +For building the application, a [Makefile](https://en.wikipedia.org/wiki/Makefile) is generally used. 
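To make the build step concrete, here is a minimal Makefile sketch. The target names, the `simd` binary name, and the `./cmd/simd` path are illustrative assumptions for a SimApp-like chain, not taken from the actual Cosmos Hub Makefile:

```make
#!/usr/bin/make -f

# Resolve and verify the dependencies declared in go.mod before any build.
go.sum: go.mod
	go mod verify
	go mod tidy

# Build the single binary containing both entrypoints:
# the node client (`simd start`) and the application interface (CLI).
build: go.sum
	go build -mod=readonly -o build/simd ./cmd/simd

install: go.sum
	go install -mod=readonly ./cmd/simd

.PHONY: build install
```

Real chain Makefiles typically add version ldflags, lint, test, and protobuf targets on top of this skeleton.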
The Makefile primarily ensures that the dependencies declared in `go.mod` are resolved before building the two entrypoints to the application, [`Node Client`](#node-client) and [`Application Interface`](#application-interface). + +Here is an example of the [Cosmos Hub Makefile](https://github.com/cosmos/gaia/blob/main/Makefile). diff --git a/docs/sdk/next/documentation/application-framework/app-go-di.mdx b/docs/sdk/next/documentation/application-framework/app-go-di.mdx new file mode 100644 index 00000000..993874a3 --- /dev/null +++ b/docs/sdk/next/documentation/application-framework/app-go-di.mdx @@ -0,0 +1,3320 @@ +--- +title: Overview of `app_di.go` +--- + +## Synopsis + +The Cosmos SDK makes wiring of an `app.go` much easier thanks to [runtime](/docs/sdk/next/documentation/application-framework/runtime) and app wiring. +Learn more about the rationale of App Wiring in [ADR-057](/docs/sdk/next/documentation/legacy/adr-comprehensive). + + +**Pre-requisite Readings** + +- [What is `runtime`?](/docs/sdk/next/documentation/application-framework/runtime) +- [Depinject documentation](/docs/sdk/next/documentation/module-system/depinject) +- [Modules depinject-ready](/docs/sdk/next/documentation/module-system/depinject) +- [ADR 057: App Wiring](/docs/sdk/next/documentation/legacy/adr-comprehensive) + + + +This section provides an overview of the `SimApp` `app_di.go` file with App Wiring. + +## `app_config.go` + +The `app_config.go` file is the single place to configure all module parameters. + +1.
Create the `AppConfig` variable: + + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes 
"cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account 
permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + { + Account: protocolpooltypes.ModuleName + }, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, + }, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + }, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + }, + / Uncomment if you want to set a 
custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default, a module's authority is the governance module. This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + / NOTE: specifying a prefix is only necessary when using bech32 addresses + / If not specified, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default + Bech32PrefixValidator: "cosmosvaloper", + Bech32PrefixConsensus: "cosmosvalcons", + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + SkipAnteHandler: true, / Enable this to skip the default antehandlers and set custom ante handlers.
+ }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ + }), + }, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ + }), + }, + } + + / AppConfig is application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + }, + ), + }, + ), + ) + ) + ``` + + Where the `appConfig` combines the 
[runtime](/docs/sdk/next/documentation/application-framework/runtime) configuration and the (extra) modules configuration. + + ```go expandable + /go:build !app_v1 + + package simapp + + import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper 
"github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + ) + + / DefaultNodeHome default home directories for the application daemon + var DefaultNodeHome string + + var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) + ) + + / SimApp extends an ABCI application, but with most of its parameters exported. + / They are exported for convenience in creating helper functions, as object + / capabilities aren't needed for testing. + type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager + } + + func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) + } + } + + / NewSimApp returns a reference to an initialized SimApp. 
+ func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) *SimApp { + var ( + app = &SimApp{ + } + + appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom base account type, add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + + sdk.AccountI { + return authtypes.ProtoBaseAccount() + }, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + + address.Codec { + return <- custom address codec type -> + } + + / + / STAKING + / + / For providing a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well.
+ / + / func() + + runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> + } + / func() + + runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> + } + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) + } + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / + } + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + + voteExtHandler.SetHandlers(bApp) + } + + baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + + app.App = appBuilder.Build(db, traceStore, baseAppOptions...) + + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) + } + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ + }) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required for apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + } + + app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + + app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using an app-wiring-enabled module, this is not required. + / For instance, the upgrade module will automatically set the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. one that does not support app wiring), the module version map + / must be set manually as follows.
The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / + }) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) + } + + return app + } + + / setAnteHandler sets custom ante handlers. + / "x/auth/tx" pre-defined ante handlers have been disabled in app_config. + func (app *SimApp) + + setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + }, + &app.CircuitKeeper, + }, + ) + if err != nil { + panic(err) + } + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) + } + + / LegacyAmino returns SimApp's amino codec. + / + / NOTE: This is solely to be used for testing purposes as it may be desirable + / for modules to register their own custom testing types. + func (app *SimApp) + + LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino + } + + / AppCodec returns SimApp's app codec. + / + / NOTE: This is solely to be used for testing purposes as it may be desirable + / for modules to register their own custom testing types. + func (app *SimApp) + + AppCodec() + + codec.Codec { + return app.appCodec + } + + / InterfaceRegistry returns SimApp's InterfaceRegistry.
+ func (app *SimApp) + + InterfaceRegistry() + + codectypes.InterfaceRegistry { + return app.interfaceRegistry + } + + / TxConfig returns SimApp's TxConfig + func (app *SimApp) + + TxConfig() + + client.TxConfig { + return app.txConfig + } + + / GetKey returns the KVStoreKey for the provided store key. + / + / NOTE: This is solely to be used for testing purposes. + func (app *SimApp) + + GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + + kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil + } + + return kvStoreKey + } + + func (app *SimApp) + + kvStoreKeys() + + map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv + } + + } + + return keys + } + + / SimulationManager implements the SimulationApp interface + func (app *SimApp) + + SimulationManager() *module.SimulationManager { + return app.sm + } + + / RegisterAPIRoutes registers all application module routes with the provided + / API server. + func (app *SimApp) + + RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) + } + } + + / GetMaccPerms returns a copy of the module account permissions + / + / NOTE: This is solely to be used for testing purposes. + func GetMaccPerms() + + map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions + } + + return dup + } + + / BlockedAddresses returns all the app's blocked account addresses. 
+ func BlockedAddresses() + + map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true + } + + } + + else { + for addr := range GetMaccPerms() { + result[addr] = true + } + + } + + return result + } + ``` + +2. Configure the `runtime` module: + + In this configuration, the order in which the modules are defined in PreBlockers, BeginBlocks, and EndBlockers is important. + They are named in the order they should be executed by the module manager. + + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + 
"cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + 
minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + { + Account: protocolpooltypes.ModuleName + }, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, + }, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / 
CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + }, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + }, + / 
Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + / NOTE: specifying a prefix is only necessary when using bech32 addresses + / If not specfied, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default + Bech32PrefixValidator: "cosmosvaloper", + Bech32PrefixConsensus: "cosmosvalcons", + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + SkipAnteHandler: true, / Enable this to skip the default antehandlers and set custom ante handlers. 
+ }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ + }), + }, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ + }), + }, + } + + / AppConfig is application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + }, + ), + }, + ), + ) + ) + ``` + +3. 
Wire the other modules: + + Next to runtime, the other (depinject-enabled) modules are wired in the `AppConfig`: + + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ 
"cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes 
"github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + { + Account: protocolpooltypes.ModuleName + }, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, + }, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + }, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + }, + / Uncomment if you want to set a 
custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + / NOTE: specifying a prefix is only necessary when using bech32 addresses + / If not specfied, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default + Bech32PrefixValidator: "cosmosvaloper", + Bech32PrefixConsensus: "cosmosvalcons", + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + SkipAnteHandler: true, / Enable this to skip the default antehandlers and set custom ante handlers. 
+ }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ + }), + }, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ + }), + }, + } + + / AppConfig is application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + }, + ), + }, + ), + ) + ) + ``` + + Note: the `tx` isn't a module, but a configuration. 
It should be wired in the `AppConfig` as well; see the `tx` entry in the configuration above.
+
+See the complete `app_config.go` file for `SimApp`
[here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_config.go). + +### Alternative formats + + +The example above shows how to create an `AppConfig` using Go. However, it is also possible to create an `AppConfig` using YAML or JSON.\ +The configuration can then be embedded with `go:embed` and read with [`appconfig.LoadYAML`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadYAML) or [`appconfig.LoadJSON`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadJSON) in `app_di.go`. + +```go +//go:embed app_config.yaml +var ( + appConfigYaml []byte + appConfig = appconfig.LoadYAML(appConfigYaml) +) +``` + + + ```yaml expandable +modules: + - name: runtime + config: + "@type": cosmos.app.runtime.v1alpha1.Module + app_name: SimApp + begin_blockers: [staking, auth, bank] + end_blockers: [bank, auth, staking] + init_genesis: [bank, auth, staking] + - name: auth + config: + "@type": cosmos.auth.module.v1.Module + bech32_prefix: cosmos + - name: bank + config: + "@type": cosmos.bank.module.v1.Module + - name: staking + config: + "@type": cosmos.staking.module.v1.Module + - name: tx + config: + "@type": cosmos.tx.module.v1.Module +``` + +A more complete example of `app.yaml` can be found [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/simapp/example_app.yaml). + +## `app_di.go` + +`app_di.go` is the place where `SimApp` is constructed. `depinject.Inject` automatically wires the app modules and keepers when provided with an application configuration (`AppConfig`). `SimApp` is constructed by calling the injected `*runtime.AppBuilder` with `appBuilder.Build(...)`.\ +In short, `depinject` and the [`runtime` package](/docs/sdk/next/documentation/application-framework/runtime) abstract the wiring of the app, and the `AppBuilder` is where the app is constructed. [`runtime`](/docs/sdk/next/documentation/application-framework/runtime) takes care of registering the codecs, KV stores and subspaces, and instantiating `baseapp`.
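Before reading the full file, the mechanics behind this wiring can be pictured with a small, self-contained toy (the `container`, `resolve`, and `inject` names here are hypothetical, not the actual `depinject` API): providers produce values on demand, resolved values are shared between consumers, and injection writes each resolved value through a pointer supplied by the caller — which is why the types passed to `depinject.Inject` must be pointers.

```go
package main

import "fmt"

// Toy dependency container (illustrative only, NOT the depinject API):
// providers are keyed by the name of the value they produce, and
// resolved values are memoized so every consumer gets the same instance.
type container struct {
	providers map[string]func(c *container) string
	cache     map[string]string
}

func newContainer(providers map[string]func(c *container) string) *container {
	return &container{providers: providers, cache: map[string]string{}}
}

// resolve builds (or returns the cached) value for a given name.
// A provider may itself call resolve to obtain its own dependencies.
func (c *container) resolve(name string) string {
	if v, ok := c.cache[name]; ok {
		return v
	}
	v := c.providers[name](c)
	c.cache[name] = v
	return v
}

// inject writes each resolved value through the pointer supplied by the
// caller, mirroring how depinject.Inject fills &app.appCodec,
// &app.BankKeeper, and so on.
func (c *container) inject(outs map[string]*string) {
	for name, out := range outs {
		*out = c.resolve(name)
	}
}

func main() {
	c := newContainer(map[string]func(c *container) string{
		"codec":  func(*container) string { return "protoCodec" },
		"keeper": func(c *container) string { return "bankKeeper(" + c.resolve("codec") + ")" },
	})

	var keeper string
	c.inject(map[string]*string{"keeper": &keeper})
	fmt.Println(keeper) // bankKeeper(protoCodec)
}
```

The real `depinject` resolves by Go type rather than by string name and does far more (scoping, module-supplied providers, error reporting), but the shape is the same: declare what you need as pointers, and the container fills them in dependency order.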
+ +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome 
default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. 
+ ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + + + When using `depinject.Inject`, the injected types must be pointers. 
+ + +### Advanced Configuration + +In advanced cases, it is possible to inject extra (module) configuration in a way that is not (yet) supported by `AppConfig`.\ +In this case, use `depinject.Configs` for combining the extra configuration, and `AppConfig` and `depinject.Supply` for providing the extra configuration. +More information on how `depinject.Configs` and `depinject.Supply` function can be found in the [`depinject` documentation](https://pkg.go.dev/cosmossdk.io/depinject). + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper 
"github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. 
+ ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +### Registering non app wiring modules + +It is possible to combine app wiring / depinject enabled modules with non-app wiring modules. 
+To do so, use the `app.RegisterModules` method to register the modules with your app, as well as `app.RegisterStores` to register the extra stores they need. + +```go expandable +// .... +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) + +// register the module manually +app.RegisterStores(storetypes.NewKVStoreKey(example.ModuleName)) + +app.ExampleKeeper = examplekeeper.NewKeeper(app.appCodec, app.AccountKeeper.AddressCodec(), runtime.NewKVStoreService(app.GetKey(example.ModuleName)), authtypes.NewModuleAddress(govtypes.ModuleName).String()) + exampleAppModule := examplemodule.NewAppModule(app.ExampleKeeper) + if err := app.RegisterModules(&exampleAppModule); err != nil { + panic(err) +} + +// .... +``` + + + When combining app wiring and non-app wiring modules with AutoCLI, the + AutoCLI options should be constructed manually instead of injected. Otherwise, + the non-depinject modules will be missed and their CLI commands will not be registered. + + +### Complete `app_di.go` + + + Note that the complete `SimApp` `app_di.go` file also defines testing + utilities, but they could just as well live in a separate file.
+ + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome 
default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp(
+	logger log.Logger,
+	db dbm.DB,
+	traceStore io.Writer,
+	loadLatest bool,
+	appOpts servertypes.AppOptions,
+	baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+	var (
+		app        = &SimApp{
+}
+
+appBuilder *runtime.AppBuilder
+
+		/ merge the AppConfig and other configuration in one config
+		appConfig = depinject.Configs(
+			AppConfig,
+			depinject.Supply(
+				/ supply the application options
+				appOpts,
+				/ supply the logger
+				logger,
+
+				/ ADVANCED CONFIGURATION
+
+				/
+				/ AUTH
+				/
+				/ For providing a custom function required in auth to generate custom account types
+				/ add it below. By default the auth module uses simulation.RandomGenesisAccounts.
+				/
+				/ authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts),
+				/
+				/ For providing a custom base account type add it below.
+				/ By default the auth module uses authtypes.ProtoBaseAccount().
+				/
+				/ func()
+
+sdk.AccountI {
+	return authtypes.ProtoBaseAccount()
+},
+				/
+				/ For providing a different address codec, add it below.
+				/ By default the auth module uses a Bech32 address codec,
+				/ with the prefix defined in the auth module configuration.
+				/
+				/ func()
+
+address.Codec {
+	return <- custom address codec type ->
+}
+
+				/
+				/ STAKING
+				/
+				/ For providing a different validator and consensus address codec, add it below.
+				/ By default the staking module uses the bech32 prefix provided in the auth config,
+				/ and appends "valoper" and "valcons" for validator and consensus addresses respectively.
+				/ When providing a custom address codec in auth, custom address codecs must be provided here as well.
+				/
+				/ func()
+
+runtime.ValidatorAddressCodec {
+	return <- custom validator address codec type ->
+}
+				/ func()
+
+runtime.ConsensusAddressCodec {
+	return <- custom consensus address codec type ->
+}
+
+				/
+				/ MINT
+				/
+
+				/ For providing a custom inflation function for x/mint add here your
+				/ custom function that implements the minttypes.InflationCalculationFn
+				/ interface.
+ ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+
+	/ register streaming services
+	if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil {
+		panic(err)
+}
+
+	/**** Module Options ****/
+
+	/ RegisterUpgradeHandlers is used for registering any on-chain upgrades.
+	app.RegisterUpgradeHandlers()
+
+	/ add test gRPC service for testing gRPC queries in isolation
+	testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{
+})
+
+	/ create the simulation manager and define the order of the modules for deterministic simulations
+	/
+	/ NOTE: this is not required for apps that don't use the simulator for fuzz testing
+	/ transactions
+	overrideModules := map[string]module.AppModuleSimulation{
+		authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil),
+}
+
+app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)
+
+app.sm.RegisterStoreDecoders()
+
+	/ A custom InitChainer can be set if extra pre-init-genesis logic is required.
+	/ By default, when using app wiring enabled modules, this is not required.
+	/ For instance, the upgrade module will automatically set the module version map in its init genesis thanks to app wiring.
+	/ However, when registering a module manually (i.e. that does not support app wiring), the module version map
+	/ must be set manually as follows. The upgrade module will de-duplicate the module version map.
+	/
+	/ app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) {
+	/ 	app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap())
+	/ 	return app.App.InitChainer(ctx, req)
+	/
+})
+
+	/ set custom ante handler
+	app.setAnteHandler(app.txConfig)
+	if err := app.Load(loadLatest); err != nil {
+		panic(err)
+}
+
+return app
+}
+
+/ setAnteHandler sets custom ante handlers.
+/ "x/auth/tx" pre-defined ante handlers have been disabled in app_config.
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses()
+
+map[string]bool {
+	result := make(map[string]bool)
+	if len(blockAccAddrs) > 0 {
+		for _, addr := range blockAccAddrs {
+			result[addr] = true
+}
+
+}
+
+else {
+		for addr := range GetMaccPerms() {
+			result[addr] = true
+}
+
+}
+
+return result
+}
+```
diff --git a/docs/sdk/next/documentation/application-framework/app-go.mdx b/docs/sdk/next/documentation/application-framework/app-go.mdx
new file mode 100644
index 00000000..f1691fa5
--- /dev/null
+++ b/docs/sdk/next/documentation/application-framework/app-go.mdx
@@ -0,0 +1,940 @@
+---
+title: Overview of `app.go`
+description: >-
+  This section is intended to provide an overview of the SimApp app.go file and
+  is still a work in progress. For now, please read the tutorials for a deep
+  dive into how to build a chain.
+---
+
+This section is intended to provide an overview of the `SimApp` `app.go` file and is still a work in progress.
+For now, please read the [tutorials](https://tutorials.cosmos.network) for a deep dive into how to build a chain.
+ +## Complete `app.go` + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + 
groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. 
+type stdAccAddressCodec struct{ +} + +func (g stdAccAddressCodec) + +StringToBytes(text string) ([]byte, error) { + if text == "" { + return nil, nil +} + +return sdk.AccAddressFromBech32(text) +} + +func (g stdAccAddressCodec) + +BytesToString(bz []byte) (string, error) { + if bz == nil { + return "", nil +} + +return sdk.AccAddress(bz).String(), nil +} + +/ stdValAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. +type stdValAddressCodec struct{ +} + +func (g stdValAddressCodec) + +StringToBytes(text string) ([]byte, error) { + return sdk.ValAddressFromBech32(text) +} + +func (g stdValAddressCodec) + +BytesToString(bz []byte) (string, error) { + return sdk.ValAddress(bz).String(), nil +} + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic,
+	/ non-dependent module elements, such as codec registration and genesis verification.
+	/ By default it is composed of all the modules from the module manager.
+	/ Additionally, app module basics can be overwritten by passing them as argument.
+	app.BasicModuleManager = module.NewBasicManagerFromManager(
+		app.ModuleManager,
+		map[string]module.AppModuleBasic{
+			genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+			govtypes.ModuleName: gov.NewAppModuleBasic(
+				[]govclient.ProposalHandler{
+					paramsclient.ProposalHandler,
+},
+			),
+})
+
+app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino)
+
+app.BasicModuleManager.RegisterInterfaces(interfaceRegistry)
+
+	/ During begin block slashing happens after distr.BeginBlocker so that
+	/ there is nothing left over in the validator fee pool, so as to keep the
+	/ CanWithdrawInvariant invariant.
+	/ NOTE: staking module is required if HistoricalEntries param > 0
+	app.ModuleManager.SetOrderBeginBlockers(
+		upgradetypes.ModuleName,
+		minttypes.ModuleName,
+		distrtypes.ModuleName,
+		slashingtypes.ModuleName,
+		evidencetypes.ModuleName,
+		stakingtypes.ModuleName,
+		genutiltypes.ModuleName,
+		authz.ModuleName,
+	)
+
+app.ModuleManager.SetOrderEndBlockers(
+		crisistypes.ModuleName,
+		govtypes.ModuleName,
+		stakingtypes.ModuleName,
+		genutiltypes.ModuleName,
+		feegrant.ModuleName,
+		group.ModuleName,
+	)
+
+	/ NOTE: The genutils module must occur after staking so that pools are
+	/ properly initialized with tokens from genesis accounts.
+	/ NOTE: The genutils module must also occur after auth so that it can access the params from auth.
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+	app.RegisterUpgradeHandlers()
+
+autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules))
+
+reflectionSvc, err := runtimeservices.NewReflectionService()
+	if err != nil {
+		panic(err)
+}
+
+reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc)
+
+	/ add test gRPC service for testing gRPC queries in isolation
+	testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{
+})
+
+	/ create the simulation manager and define the order of the modules for deterministic simulations
+	/
+	/ NOTE: this is not required for apps that don't use the simulator for fuzz testing
+	/ transactions
+	overrideModules := map[string]module.AppModuleSimulation{
+		authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)),
+}
+
+app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)
+
+app.sm.RegisterStoreDecoders()
+
+	/ initialize stores
+	app.MountKVStores(keys)
+
+app.MountTransientStores(tkeys)
+
+	/ initialize BaseApp
+	app.SetInitChainer(app.InitChainer)
+
+app.SetBeginBlocker(app.BeginBlocker)
+
+app.SetEndBlocker(app.EndBlocker)
+
+app.setAnteHandler(txConfig)
+
+	/ In v0.46, the SDK introduces _postHandlers_. PostHandlers are like
+	/ antehandlers, but are run _after_ the `runMsgs` execution. They are also
+	/ defined as a chain, and have the same signature as antehandlers.
+	/
+	/ In baseapp, postHandlers are run in the same store branch as `runMsgs`,
+	/ meaning that both `runMsgs` and `postHandler` state will be committed if
+	/ both are successful, and both will be reverted if any of the two fails.
+	/
+	/ The SDK exposes a default postHandlers chain, which comprises only
+	/ one decorator: the Transaction Tips decorator.
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + + / At startup, after all modules have been registered, check that all prot + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application 
updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} +``` diff --git a/docs/sdk/next/documentation/application-framework/app-mempool.mdx b/docs/sdk/next/documentation/application-framework/app-mempool.mdx new file mode 100644 index 00000000..657f6558 --- /dev/null +++ b/docs/sdk/next/documentation/application-framework/app-mempool.mdx @@ -0,0 +1,94 @@ +--- +title: Application Mempool +--- + +## Synopsis + +This section describes how the app-side mempool can be used and replaced. + +Since `v0.47` the application has its own mempool to allow much more granular +block building than previous versions. This change was enabled by +[ABCI 1.0](https://github.com/cometbft/cometbft/blob/v0.37.0/spec/abci). +Notably it introduces the `PrepareProposal` and `ProcessProposal` steps of ABCI++. 
+
+
+**Pre-requisite Readings**
+
+- [BaseApp](/docs/sdk/next/documentation/application-framework/baseapp)
+- [ABCI](/docs/sdk/next/documentation/consensus-block-production/introduction)
+
+
+
+## Mempool
+
+There are countless mempool designs that an application developer could write; the SDK opted to provide only simple mempool implementations.
+Namely, the SDK provides the following mempools:
+
+- [No-op Mempool](#no-op-mempool)
+- [Sender Nonce Mempool](#sender-nonce-mempool)
+- [Priority Nonce Mempool](#priority-nonce-mempool)
+
+By default, the SDK uses the [No-op Mempool](#no-op-mempool), but it can be replaced by the application developer in [`app.go`](/docs/sdk/next/documentation/application-framework/app-go-di):
+
+```go
+nonceMempool := mempool.NewSenderNonceMempool()
+mempoolOpt := baseapp.SetMempool(nonceMempool)
+
+baseAppOptions = append(baseAppOptions, mempoolOpt)
+```
+
+### No-op Mempool
+
+A no-op mempool is a mempool where transactions are completely discarded and ignored when BaseApp interacts with the mempool.
+When this mempool is used, it is assumed that the application will rely on CometBFT's transaction ordering defined in `RequestPrepareProposal`,
+which is FIFO-ordered by default.
+
+> Note: If a no-op mempool is used, both `PrepareProposal` and `ProcessProposal` should account for this, as
+> `PrepareProposal` could include transactions that fail verification in `ProcessProposal`.
+
+### Sender Nonce Mempool
+
+The sender nonce mempool keeps each sender's transactions sorted by nonce in order to avoid nonce-ordering issues.
+It works by storing transactions in a list sorted by transaction nonce. When the proposer asks for transactions to be included in a block, it randomly selects a sender and takes the first transaction in that sender's list. It repeats this until the mempool is empty or the block is full.
+
+It is configurable with the following parameters:
+
+#### MaxTxs
+
+An integer value that sets the mempool to one of three modes: _bounded_, _unbounded_, or _disabled_.
+
+- **negative**: Disabled; the mempool does not insert new transactions and returns early.
+- **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
+- **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches `maxTx`.
+
+#### Seed
+
+Sets the seed for the random number generator used to select transactions from the mempool.
+
+### Priority Nonce Mempool
+
+The [priority nonce mempool](https://github.com/cosmos/cosmos-sdk/blob/main/types/mempool/priority_nonce_spec.md) is a mempool implementation that stores transactions in a set partially ordered by two dimensions:
+
+- priority
+- sender nonce (sequence number)
+
+Internally, it uses one priority-ordered [skip list](https://pkg.go.dev/github.com/huandu/skiplist) and one skip list per sender, ordered by sender nonce (sequence number). When there are multiple transactions from the same sender, they are not always comparable by priority to other senders' transactions and must be partially ordered by both sender nonce and priority.
+
+It is configurable with the following parameters:
+
+#### MaxTxs
+
+An integer value that sets the mempool to one of three modes: _bounded_, _unbounded_, or _disabled_.
+
+- **negative**: Disabled; the mempool does not insert new transactions and returns early.
+- **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
+- **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches `maxTx`.
+
+#### Callback
+
+The priority nonce mempool provides mempool options allowing the application to set callbacks.
+
+- **OnRead**: Sets a callback to be called when a transaction is read from the mempool.
+- **TxReplacement**: Sets a callback to be called when a duplicate transaction nonce is detected during mempool insertion. The application can define a transaction replacement rule based on transaction priority or certain transaction fields.
+
+More information on the SDK mempool implementation can be found in the [godocs](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/mempool).
diff --git a/docs/sdk/next/documentation/application-framework/baseapp.mdx b/docs/sdk/next/documentation/application-framework/baseapp.mdx
new file mode 100644
index 00000000..c8121e99
--- /dev/null
+++ b/docs/sdk/next/documentation/application-framework/baseapp.mdx
@@ -0,0 +1,11308 @@
+---
+title: BaseApp
+---
+
+## Synopsis
+
+This document describes `BaseApp`, the abstraction that implements the core functionalities of a Cosmos SDK application.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK application](/docs/sdk/next/documentation/application-framework/app-anatomy)
+- [Lifecycle of a Cosmos SDK transaction](/docs/sdk/next/documentation/protocol-development/tx-lifecycle)
+
+
+
+## Introduction
+
+`BaseApp` is a base type that implements the core of a Cosmos SDK application, namely:
+
+- The [Application Blockchain Interface](#main-abci-messages), for the state-machine to communicate with the underlying consensus engine (e.g. CometBFT).
+- [Service Routers](#service-routers), to route messages and queries to the appropriate module.
+- Different [states](#state-updates), as the state-machine can have different volatile states updated based on the ABCI message received.
+
+The goal of `BaseApp` is to provide the fundamental layer of a Cosmos SDK application
+that developers can easily extend to build their own custom application.
Usually, +developers will create a custom type for their application, like so: + +```go +type App struct { + / reference to a BaseApp + *baseapp.BaseApp + + / list of application store keys + + / list of application keepers + + / module manager +} +``` + +Extending the application with `BaseApp` gives the former access to all of `BaseApp`'s methods. +This allows developers to compose their custom application with the modules they want, while not +having to concern themselves with the hard work of implementing the ABCI, the service routers and state +management logic. + +## Type Definition + +The `BaseApp` type holds many important parameters for any Cosmos SDK based application. + +```go expandable +package baseapp + +import ( + + "context" + "fmt" + "maps" + "math" + "slices" + "strconv" + "sync" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp/oe" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type ( + execMode uint8 + + / StoreLoader defines a customizable function to control how we load the + / CommitMultiStore from disk. 
This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. + StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. 
+ storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. 
+ / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. + interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. 
It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling.This is experimental and must be enabled + / by developers. + optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool +} + +/ NewBaseApp returns a reference to an initialized BaseApp. 
It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. 
+ app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) +} + +else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + logger.Warn("error validating merged proto registry annotations", "error", err) +} + +} + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. +func (app *BaseApp) + +GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. 
+ app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. +func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. 
func (app *BaseApp) LoadLatestVersion() error {
	err := app.storeLoader(app.cms)
	if err != nil {
		return fmt.Errorf("failed to load latest version: %w", err)
	}

	return app.Init()
}

// DefaultStoreLoader will be used by default and loads the latest version
func DefaultStoreLoader(ms storetypes.CommitMultiStore) error {
	return ms.LoadLatestVersion()
}

// CommitMultiStore returns the root multi-store.
// App constructor can use this to access the `cms`.
// UNSAFE: must not be used during the abci life cycle.
func (app *BaseApp) CommitMultiStore() storetypes.CommitMultiStore {
	return app.cms
}

// SnapshotManager returns the snapshot manager.
// application use this to register extra extension snapshotters.
func (app *BaseApp) SnapshotManager() *snapshots.Manager {
	return app.snapshotManager
}

// LoadVersion loads the BaseApp application version. It will panic if called
// more than once on a running baseapp.
func (app *BaseApp) LoadVersion(version int64) error {
	app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n")
	err := app.cms.LoadVersion(version)
	if err != nil {
		return fmt.Errorf("failed to load version %d: %w", version, err)
	}

	return app.Init()
}

// LastCommitID returns the last CommitID of the multistore.
func (app *BaseApp) LastCommitID() storetypes.CommitID {
	return app.cms.LastCommitID()
}

// LastBlockHeight returns the last committed block height.
func (app *BaseApp) LastBlockHeight() int64 {
	return app.cms.LastCommitID().Version
}

// ChainID returns the chainID of the app.
func (app *BaseApp) ChainID() string {
	return app.chainID
}

// AnteHandler returns the AnteHandler of the app.
func (app *BaseApp) AnteHandler() sdk.AnteHandler {
	return app.anteHandler
}

// Mempool returns the Mempool of the app.
func (app *BaseApp) Mempool() mempool.Mempool {
	return app.mempool
}

// Init initializes the app. It seals the app, preventing any
// further modifications. In addition, it validates the app against
// the earlier provided settings. Returns an error if validation fails.
// nil otherwise. Panics if the app is already sealed.
func (app *BaseApp) Init() error {
	if app.sealed {
		panic("cannot call initFromMainStore: baseapp already sealed")
	}
	if app.cms == nil {
		return errors.New("commit multi-store must not be nil")
	}
	emptyHeader := cmtproto.Header{ChainID: app.chainID}

	// needed for the export command which inits from store but never calls initchain
	app.setState(execModeCheck, emptyHeader)
	app.Seal()

	return app.cms.GetPruning().Validate()
}

func (app *BaseApp) setMinGasPrices(gasPrices sdk.DecCoins) {
	app.minGasPrices = gasPrices
}

func (app *BaseApp) setHaltHeight(haltHeight uint64) {
	app.haltHeight = haltHeight
}

func (app *BaseApp) setHaltTime(haltTime uint64) {
	app.haltTime = haltTime
}

func (app *BaseApp) setMinRetainBlocks(minRetainBlocks uint64) {
	app.minRetainBlocks = minRetainBlocks
}

func (app *BaseApp) setInterBlockCache(cache storetypes.MultiStorePersistentCache) {
	app.interBlockCache = cache
}

func (app *BaseApp) setTrace(trace bool) {
	app.trace = trace
}

func (app *BaseApp) setIndexEvents(ie []string) {
	app.indexEvents = make(map[string]struct{})
	for _, e := range ie {
		app.indexEvents[e] = struct{}{}
	}
}

// Seal seals a BaseApp. It prohibits any further modifications to a BaseApp.
func (app *BaseApp) Seal() {
	app.sealed = true
}

// IsSealed returns true if the BaseApp is sealed and false otherwise.
func (app *BaseApp) IsSealed() bool {
	return app.sealed
}

// setState sets the BaseApp's state for the corresponding mode with a branched
// multi-store (i.e. a CacheMultiStore) and a new Context with the same
// multi-store branch, and provided header.
func (app *BaseApp) setState(mode execMode, h cmtproto.Header) {
	ms := app.cms.CacheMultiStore()
	headerInfo := header.Info{
		Height:  h.Height,
		Time:    h.Time,
		ChainID: h.ChainID,
		AppHash: h.AppHash,
	}
	baseState := &state{
		ms: ms,
		ctx: sdk.NewContext(ms, h, false, app.logger).
			WithStreamingManager(app.streamingManager).
			WithHeaderInfo(headerInfo),
	}

	switch mode {
	case execModeCheck:
		baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices))
		app.checkState = baseState
	case execModePrepareProposal:
		app.prepareProposalState = baseState
	case execModeProcessProposal:
		app.processProposalState = baseState
	case execModeFinalize:
		app.finalizeBlockState = baseState
	default:
		panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode))
	}
}

// SetCircuitBreaker sets the circuit breaker for the BaseApp.
// The circuit breaker is checked on every message execution to verify if a transaction should be executed or not.
func (app *BaseApp) SetCircuitBreaker(cb CircuitBreaker) {
	if app.msgServiceRouter == nil {
		panic("cannot set circuit breaker with no msg service router set")
	}
	app.msgServiceRouter.SetCircuit(cb)
}

// GetConsensusParams returns the current consensus parameters from the BaseApp's
// ParamStore. If the BaseApp has no ParamStore defined, nil is returned.
func (app *BaseApp) GetConsensusParams(ctx sdk.Context) cmtproto.ConsensusParams {
	if app.paramStore == nil {
		return cmtproto.ConsensusParams{}
	}

	cp, err := app.paramStore.Get(ctx)
	if err != nil {
		// This could happen while migrating from v0.45/v0.46 to v0.50, we should
		// allow it to happen so during preblock the upgrade plan can be executed
		// and the consensus params set for the first time in the new format.
		app.logger.Error("failed to get consensus params", "err", err)
		return cmtproto.ConsensusParams{}
	}

	return cp
}

// StoreConsensusParams sets the consensus parameters to the BaseApp's param
// store.
//
// NOTE: We're explicitly not storing the CometBFT app_version in the param store.
// It's stored instead in the x/upgrade store, with its own bump logic.
func (app *BaseApp) StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) error {
	if app.paramStore == nil {
		return errors.New("cannot store consensus params with no params store set")
	}

	return app.paramStore.Set(ctx, cp)
}

// AddRunTxRecoveryHandler adds custom app.runTx method panic handlers.
func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {
	for _, h := range handlers {
		app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware)
	}
}

// GetMaximumBlockGas gets the maximum gas from the consensus params. It panics
// if maximum block gas is less than negative one and returns zero if negative
// one.
func (app *BaseApp) GetMaximumBlockGas(ctx sdk.Context) uint64 {
	cp := app.GetConsensusParams(ctx)
	if cp.Block == nil {
		return 0
	}

	maxGas := cp.Block.MaxGas

	switch {
	case maxGas < -1:
		panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas))
	case maxGas == -1:
		return 0
	default:
		return uint64(maxGas)
	}
}

func (app *BaseApp) validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) error {
	if req.Height < 1 {
		return fmt.Errorf("invalid height: %d", req.Height)
	}

	lastBlockHeight := app.LastBlockHeight()

	// expectedHeight holds the expected height to validate
	var expectedHeight int64
	if lastBlockHeight == 0 && app.initialHeight > 1 {
		// In this case, we're validating the first block of the chain, i.e no
		// previous commit. The height we're expecting is the initial height.
		expectedHeight = app.initialHeight
	} else {
		// This case can mean two things:
		//
		// - Either there was already a previous commit in the store, in which
		// case we increment the version from there.
		// - Or there was no previous commit, in which case we start at version 1.
		expectedHeight = lastBlockHeight + 1
	}

	if req.Height != expectedHeight {
		return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight)
	}

	return nil
}

// validateBasicTxMsgs executes basic validator calls for messages.
func validateBasicTxMsgs(msgs []sdk.Msg) error {
	if len(msgs) == 0 {
		return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message")
	}

	for _, msg := range msgs {
		m, ok := msg.(sdk.HasValidateBasic)
		if !ok {
			continue
		}

		if err := m.ValidateBasic(); err != nil {
			return err
		}
	}

	return nil
}

func (app *BaseApp) getState(mode execMode) *state {
	switch mode {
	case execModeFinalize:
		return app.finalizeBlockState
	case execModePrepareProposal:
		return app.prepareProposalState
	case execModeProcessProposal:
		return app.processProposalState
	default:
		return app.checkState
	}
}

func (app *BaseApp) getBlockGasMeter(ctx sdk.Context) storetypes.GasMeter {
	if app.disableBlockGasMeter {
		return noopGasMeter{}
	}

	if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 {
		return storetypes.NewGasMeter(maxGas)
	}

	return storetypes.NewInfiniteGasMeter()
}

// retrieve the context for the tx w/ txBytes and other memoized values.
func (app *BaseApp) getContextForTx(mode execMode, txBytes []byte) sdk.Context {
	app.mu.Lock()
	defer app.mu.Unlock()

	modeState := app.getState(mode)
	if modeState == nil {
		panic(fmt.Sprintf("state is nil for mode %v", mode))
	}
	ctx := modeState.Context().
		WithTxBytes(txBytes).
		WithGasMeter(storetypes.NewInfiniteGasMeter())
	// WithVoteInfos(app.voteInfos) // TODO: identify if this is needed

	ctx = ctx.WithIsSigverifyTx(app.sigverifyTx)
	ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))

	if mode == execModeReCheck {
		ctx = ctx.WithIsReCheckTx(true)
	}

	if mode == execModeSimulate {
		ctx, _ = ctx.CacheContext()
		ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate))
	}

	return ctx
}

// cacheTxContext returns a new context based off of the provided context with
// a branched multi-store.
func (app *BaseApp) cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) {
	ms := ctx.MultiStore()
	msCache := ms.CacheMultiStore()
	if msCache.TracingEnabled() {
		msCache = msCache.SetTracingContext(
			storetypes.TraceContext(
				map[string]any{
					"txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)),
				},
			),
		).(storetypes.CacheMultiStore)
	}

	return ctx.WithMultiStore(msCache), msCache
}

func (app *BaseApp) preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) {
	var events []abci.Event
	if app.preBlocker != nil {
		ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager())
		rsp, err := app.preBlocker(ctx, req)
		if err != nil {
			return nil, err
		}
		// rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed
		// write the consensus parameters in store to context
		if rsp.ConsensusParamsChanged {
			ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))
			// GasMeter must be set after we get a context with updated consensus params.
			gasMeter := app.getBlockGasMeter(ctx)
			ctx = ctx.WithBlockGasMeter(gasMeter)
			app.finalizeBlockState.SetContext(ctx)
		}
		events = ctx.EventManager().ABCIEvents()
	}

	return events, nil
}

func (app *BaseApp) beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) {
	var (
		resp sdk.BeginBlock
		err  error
	)

	if app.beginBlocker != nil {
		resp, err = app.beginBlocker(app.finalizeBlockState.Context())
		if err != nil {
			return resp, err
		}

		// append BeginBlock attributes to all events in the EndBlock response
		for i, event := range resp.Events {
			resp.Events[i].Attributes = append(
				event.Attributes,
				abci.EventAttribute{Key: "mode", Value: "BeginBlock"},
			)
		}

		resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents)
	}

	return resp, nil
}

func (app *BaseApp) deliverTx(tx []byte) *abci.ExecTxResult {
	gInfo := sdk.GasInfo{}
	resultStr := "successful"

	var resp *abci.ExecTxResult

	defer func() {
		telemetry.IncrCounter(1, "tx", "count")
		telemetry.IncrCounter(1, "tx", resultStr)
		telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used")
		telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted")
	}()

	gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil)
	if err != nil {
		resultStr = "failed"
		resp = sdkerrors.ResponseExecTxResultWithEvents(
			err,
			gInfo.GasWanted,
			gInfo.GasUsed,
			sdk.MarkEventsToIndex(anteEvents, app.indexEvents),
			app.trace,
		)
		return resp
	}

	resp = &abci.ExecTxResult{
		GasWanted: int64(gInfo.GasWanted),
		GasUsed:   int64(gInfo.GasUsed),
		Log:       result.Log,
		Data:      result.Data,
		Events:    sdk.MarkEventsToIndex(result.Events, app.indexEvents),
	}

	return resp
}

// endBlock is an application-defined function that is called after transactions
// have been processed in FinalizeBlock.
func (app *BaseApp) endBlock(_ context.Context) (sdk.EndBlock, error) {
	var endblock sdk.EndBlock

	if app.endBlocker != nil {
		eb, err := app.endBlocker(app.finalizeBlockState.Context())
		if err != nil {
			return endblock, err
		}

		// append EndBlock attributes to all events in the EndBlock response
		for i, event := range eb.Events {
			eb.Events[i].Attributes = append(
				event.Attributes,
				abci.EventAttribute{Key: "mode", Value: "EndBlock"},
			)
		}

		eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents)
		endblock = eb
	}

	return endblock, nil
}

// runTx processes a transaction within a given execution mode, encoded transaction
// bytes, and the decoded transaction itself. All state transitions occur through
// a cached Context depending on the mode provided. State only gets persisted
// if all messages get executed successfully and the execution mode is DeliverTx.
// Note, gas execution info is always returned. A reference to a Result is
// returned if the tx does not run out of gas and if all the messages are valid
// and execute successfully. An error is returned otherwise.
// both txbytes and the decoded tx are passed to runTx to avoid the state machine encoding the tx and decoding the transaction twice
// passing the decoded tx to runTx is optional, it will be decoded if the tx is nil
func (app *BaseApp) runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) {
	// NOTE: GasWanted should be returned by the AnteHandler. GasUsed is
	// determined by the GasMeter. We need access to the context to get the gas
	// meter, so we initialize upfront.
	var gasWanted uint64

	ctx := app.getContextForTx(mode, txBytes)
	ms := ctx.MultiStore()

	// only run the tx if there is block gas remaining
	if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() {
		return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx")
	}

	defer func() {
		if r := recover(); r != nil {
			recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)
			err, result = processRecovery(r, recoveryMW), nil
			ctx.Logger().Error("panic recovered in runTx", "err", err)
		}

		gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}
	}()

	blockGasConsumed := false

	// consumeBlockGas makes sure block gas is consumed at most once. It must
	// happen after tx processing, and must be executed even if tx processing
	// fails. Hence, it's execution is deferred.
	consumeBlockGas := func() {
		if !blockGasConsumed {
			blockGasConsumed = true
			ctx.BlockGasMeter().ConsumeGas(
				ctx.GasMeter().GasConsumedToLimit(), "block gas meter",
			)
		}
	}

	// If BlockGasMeter() panics it will be caught by the above recover and will
	// return an error - in any case BlockGasMeter will consume gas past the limit.
	//
	// NOTE: consumeBlockGas must exist in a separate defer function from the
	// general deferred recovery function to recover from consumeBlockGas as it'll
	// be executed first (deferred statements are executed as stack).
	if mode == execModeFinalize {
		defer consumeBlockGas()
	}

	// if the transaction is not decoded, decode it here
	if tx == nil {
		tx, err = app.txDecoder(txBytes)
		if err != nil {
			return sdk.GasInfo{GasUsed: 0, GasWanted: 0}, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error())
		}
	}

	msgs := tx.GetMsgs()
	if err := validateBasicTxMsgs(msgs); err != nil {
		return sdk.GasInfo{}, nil, nil, err
	}

	for _, msg := range msgs {
		handler := app.msgServiceRouter.Handler(msg)
		if handler == nil {
			return sdk.GasInfo{}, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg)
		}
	}

	if app.anteHandler != nil {
		var (
			anteCtx sdk.Context
			msCache storetypes.CacheMultiStore
		)

		// Branch context before AnteHandler call in case it aborts.
		// This is required for both CheckTx and DeliverTx.
		// Ref: https://github.com/cosmos/cosmos-sdk/issues/2772
		//
		// NOTE: Alternatively, we could require that AnteHandler ensures that
		// writes do not happen if aborted/failed. This may have some
		// performance benefits, but it'll be more difficult to get right.
		anteCtx, msCache = app.cacheTxContext(ctx, txBytes)
		anteCtx = anteCtx.WithEventManager(sdk.NewEventManager())

		newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate)
		if !newCtx.IsZero() {
			// At this point, newCtx.MultiStore() is a store branch, or something else
			// replaced by the AnteHandler. We want the original multistore.
			//
			// Also, in the case of the tx aborting, we need to track gas consumed via
			// the instantiated gas meter in the AnteHandler, so we update the context
			// prior to returning.
			ctx = newCtx.WithMultiStore(ms)
		}

		events := ctx.EventManager().Events()

		// GasMeter expected to be set in AnteHandler
		gasWanted = ctx.GasMeter().Limit()

		if err != nil {
			if mode == execModeReCheck {
				// if the ante handler fails on recheck, we want to remove the tx from the mempool
				if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil {
					return gInfo, nil, anteEvents, errors.Join(err, mempoolErr)
				}
			}

			return gInfo, nil, nil, err
		}

		msCache.Write()
		anteEvents = events.ToABCIEvents()
	}

	switch mode {
	case execModeCheck:
		err = app.mempool.Insert(ctx, tx)
		if err != nil {
			return gInfo, nil, anteEvents, err
		}
	case execModeFinalize:
		err = app.mempool.Remove(tx)
		if err != nil && !errors.Is(err, mempool.ErrTxNotFound) {
			return gInfo, nil, anteEvents,
				fmt.Errorf("failed to remove tx from mempool: %w", err)
		}
	}

	// Create a new Context based off of the existing Context with a MultiStore branch
	// in case message processing fails. At this point, the MultiStore
	// is a branch of a branch.
	runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)

	// Attempt to execute all messages and only update state if all messages pass
	// and we're in DeliverTx. Note, runMsgs will never return a reference to a
	// Result if any single message fails or does not have a registered Handler.
	msgsV2, err := tx.GetMsgsV2()
	if err == nil {
		result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode)
	}

	// Run optional postHandlers (should run regardless of the execution result).
	//
	// Note: If the postHandler fails, we also revert the runMsgs state.
	if app.postHandler != nil {
		// The runMsgCtx context currently contains events emitted by the ante handler.
		// We clear this to correctly order events without duplicates.
		// Note that the state is still preserved.
		postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager())

		newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil)
		if errPostHandler != nil {
			if err == nil {
				// when the msg was handled successfully, return the post handler error only
				return gInfo, nil, anteEvents, errPostHandler
			}
			// otherwise append to the msg error so that we keep the original error code for better user experience
			return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler)
		}

		// we don't want runTx to panic if runMsgs has failed earlier
		if result == nil {
			result = &sdk.Result{}
		}

		result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...)
	}

	if err == nil {
		if mode == execModeFinalize {
			// When block gas exceeds, it'll panic and won't commit the cached store.
			consumeBlockGas()
			msCache.Write()
		}

		if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) {
			// append the events in the order of occurrence
			result.Events = append(anteEvents, result.Events...)
		}
	}

	return gInfo, result, anteEvents, err
}

// runMsgs iterates through a list of messages and executes them with the provided
// Context and execution mode. Messages will only be executed during simulation
// and DeliverTx. An error is returned if any single message fails or if a
// Handler does not exist for a given message route. Otherwise, a reference to a
// Result is returned. The caller must not commit state if an error is returned.
func (app *BaseApp) runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) {
	events := sdk.EmptyEvents()
	var msgResponses []*codectypes.Any

	// NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter.
	for i, msg := range msgs {
		if mode != execModeFinalize && mode != execModeSimulate {
			break
		}

		handler := app.msgServiceRouter.Handler(msg)
		if handler == nil {
			return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg)
		}

		// ADR 031 request type routing
		msgResult, err := handler(ctx, msg)
		if err != nil {
			return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i)
		}

		// create message events
		msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i])
		if err != nil {
			return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i)
		}

		// append message events and data
		//
		// Note: Each message result's data must be length-prefixed in order to
		// separate each result.
		for j, event := range msgEvents {
			// append message index to all events
			msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i)))
		}

		events = events.AppendEvents(msgEvents)

		// Each individual sdk.Result that went through the MsgServiceRouter
		// (which should represent 99% of the Msgs now, since everyone should
		// be using protobuf Msgs) has exactly one Msg response, set inside
		// `WrapServiceResult`. We take that Msg response, and aggregate it
		// into an array.
		if len(msgResult.MsgResponses) > 0 {
			msgResponse := msgResult.MsgResponses[0]
			if msgResponse == nil {
				return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg))
			}
			msgResponses = append(msgResponses, msgResponse)
		}
	}

	data, err := makeABCIData(msgResponses)
	if err != nil {
		return nil, errorsmod.Wrap(err, "failed to marshal tx data")
	}

	return &sdk.Result{
		Data:         data,
		Events:       events.ToABCIEvents(),
		MsgResponses: msgResponses,
	}, nil
}

// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx.
func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
	return proto.Marshal(&sdk.TxMsgData{MsgResponses: msgResponses})
}

func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) {
	eventMsgName := sdk.MsgTypeURL(msg)
	msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))

	// we set the signer attribute as the sender
	signers, err := cdc.GetMsgV2Signers(msgV2)
	if err != nil {
		return nil, err
	}
	if len(signers) > 0 && signers[0] != nil {
		addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
		if err != nil {
			return nil, err
		}
		msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
	}

	// verify that events have no module attribute set
	if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
		if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
			msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
		}
	}

	return sdk.Events{msgEvent}.AppendEvents(events), nil
}

// PrepareProposalVerifyTx performs transaction verification when a proposer is
// creating a block proposal during PrepareProposal. Any state committed to the
// PrepareProposal state internally will be discarded. An error will be
// returned if the transaction cannot be encoded; the encoded bytes will be
// returned if the transaction is valid, otherwise an error will be returned.
func (app *BaseApp) PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
	bz, err := app.txEncoder(tx)
	if err != nil {
		return nil, err
	}

	_, _, _, err = app.runTx(execModePrepareProposal, bz, tx)
	if err != nil {
		return nil, err
	}

	return bz, nil
}

// ProcessProposalVerifyTx performs transaction verification when receiving a
// block proposal during ProcessProposal. Any state committed to the
// ProcessProposal state internally will be discarded. An error will be
// returned if the transaction cannot be decoded; the decoded transaction will
// be returned if the transaction is valid, otherwise an error will be returned.
func (app *BaseApp) ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
	tx, err := app.txDecoder(txBz)
	if err != nil {
		return nil, err
	}

	_, _, _, err = app.runTx(execModeProcessProposal, txBz, tx)
	if err != nil {
		return nil, err
	}

	return tx, nil
}

func (app *BaseApp) TxDecode(txBytes []byte) (sdk.Tx, error) {
	return app.txDecoder(txBytes)
}

func (app *BaseApp) TxEncode(tx sdk.Tx) ([]byte, error) {
	return app.txEncoder(tx)
}

func (app *BaseApp) StreamingManager() storetypes.StreamingManager {
	return app.streamingManager
}

// Close is called in start cmd to gracefully cleanup resources.
func (app *BaseApp) Close() error {
	var errs []error

	// Close app.db (opened by cosmos-sdk/server/start.go call to openDB)
	if app.db != nil {
		app.logger.Info("Closing application.db")
		if err := app.db.Close(); err != nil {
			errs = append(errs, err)
		}
	}

	// Close app.snapshotManager
	// - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate)
	// - which calls cosmos-sdk/server/util.go/GetSnapshotStore
	// - which is passed to baseapp/options.go/SetSnapshot
	// - to set app.snapshotManager = snapshots.NewManager
	if app.snapshotManager != nil {
		app.logger.Info("Closing snapshots/metadata.db")
		if err := app.snapshotManager.Close(); err != nil {
			errs = append(errs, err)
		}
	}

	return errors.Join(errs...)
}

// GetBaseApp returns the pointer to itself.
func (app *BaseApp) GetBaseApp() *BaseApp {
	return app
}
```

Let us go through the most important components.

> **Note**: Not all parameters are described, only the most important ones. Refer to the
> type definition for the full list.
First, the important parameters that are initialized during the bootstrapping of the application:

- [`CommitMultiStore`](/docs/sdk/next/documentation/state-storage/store#commitmultistore): This is the main store of the application,
  which holds the canonical state that is committed at the [end of each block](#commit). This store
  is **not** cached, meaning it is not used to update the application's volatile (un-committed) states.
  The `CommitMultiStore` is a multi-store, meaning a store of stores. Each module of the application
  uses one or multiple `KVStores` in the multi-store to persist its subset of the state.
- Database: The `db` is used by the `CommitMultiStore` to handle data persistence.
- [`Msg` Service Router](#msg-service-router): The `msgServiceRouter` facilitates the routing of `sdk.Msg` requests to the appropriate
  module `Msg` service for processing. Here a `sdk.Msg` refers to the transaction component that needs to be
  processed by a service in order to update the application state, not to an ABCI message, which implements
  the interface between the application and the underlying consensus engine.
- [gRPC Query Router](#grpc-query-router): The `grpcQueryRouter` facilitates the routing of gRPC queries to the
  appropriate module for processing. These queries are not ABCI messages themselves; they
  are relayed to the relevant module's gRPC `Query` service.
- [`TxDecoder`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types#TxDecoder): It is used to decode
  raw transaction bytes relayed by the underlying CometBFT engine.
- [`AnteHandler`](#antehandler): This handler is used to handle signature verification, fee payment,
  and other pre-message execution checks when a transaction is received. It's executed during
  [`CheckTx/RecheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock).
- [`InitChainer`](/docs/sdk/next/documentation/application-framework/app-anatomy#initchainer), [`PreBlocker`](/docs/sdk/next/documentation/application-framework/app-anatomy#preblocker), [`BeginBlocker` and `EndBlocker`](/docs/sdk/next/documentation/application-framework/app-anatomy#beginblocker-and-endblocker): These are
  the functions executed when the application receives the `InitChain` and `FinalizeBlock`
  ABCI messages from the underlying CometBFT engine.

Then, parameters used to define [volatile states](#state-updates) (i.e. cached states):

- `checkState`: This state is updated during [`CheckTx`](#checktx), and reset on [`Commit`](#commit).
- `finalizeBlockState`: This state is updated during [`FinalizeBlock`](#finalizeblock), set to `nil` on
  [`Commit`](#commit), and re-initialized on the next `FinalizeBlock`.
- `processProposalState`: This state is updated during [`ProcessProposal`](#process-proposal).
- `prepareProposalState`: This state is updated during [`PrepareProposal`](#prepare-proposal).

Finally, a few more important parameters:

- `voteInfos`: This parameter carries the list of validators whose precommit is missing, either
  because they did not vote or because the proposer did not include their vote. This information is
  carried by the [Context](/docs/sdk/next/documentation/application-framework/context) and can be used by the application for various things, like
  punishing absent validators.
- `minGasPrices`: This parameter defines the minimum gas prices accepted by the node. This is a
  **local** parameter, meaning each full-node can set a different `minGasPrices`. It is used in the
  `AnteHandler` during [`CheckTx`](#checktx), mainly as a spam protection mechanism. The transaction
  enters the [mempool](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#mempool-methods)
  only if the gas prices of the transaction are greater than at least one of the minimum gas prices in
  `minGasPrices` (e.g.
if `minGasPrices == 1uatom,1photon`, the `gas-price` of the transaction must be
  greater than `1uatom` OR `1photon`).
- `appVersion`: Version of the application. It is set in the
  [application's constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).

## Constructor

```go
func NewBaseApp(
  name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp),
) *BaseApp {
  // ...
}
```

The `BaseApp` constructor function is pretty straightforward. The only thing worth noting is the
possibility to provide additional [`options`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/options.go)
to the `BaseApp`, which will execute them in order. The `options` are generally `setter` functions
for important parameters, like `SetPruning()` to set pruning options or `SetMinGasPrices()` to set
the node's `min-gas-prices`.

Naturally, developers can add additional `options` based on their application's needs.

## State Updates

The `BaseApp` maintains four primary volatile states and a root or main state. The main state
is the canonical state of the application, and the volatile states (`checkState`, `prepareProposalState`,
`processProposalState` and `finalizeBlockState`) are used to handle state transitions in between
updates of the main state, which are made during [`Commit`](#commit).

Internally, there is only a single `CommitMultiStore`, which we refer to as the main or root state.
From this root state, we derive four volatile states using a mechanism called _store branching_ (performed by the `CacheWrap` function).
The types can be illustrated as follows:

![Types](/docs/sdk/images/learn/advanced/baseapp_state.png)

### InitChain State Updates

During `InitChain`, the four volatile states, `checkState`, `prepareProposalState`, `processProposalState`
and `finalizeBlockState`, are set by branching the root `CommitMultiStore`.
Any subsequent reads and writes happen
on branched versions of the `CommitMultiStore`.
To avoid unnecessary round trips to the main state, all reads to the branched store are cached.

![InitChain](/docs/sdk/images/learn/advanced/baseapp_state-initchain.png)

### CheckTx State Updates

During `CheckTx`, the `checkState`, which is based off of the last committed state from the root
store, is used for any reads and writes. Here we only execute the `AnteHandler` and verify that a
handler exists in the service router for every message in the transaction. Note, when we execute
the `AnteHandler`, we branch the already branched `checkState`.
This has the side effect that if the `AnteHandler` fails, the state transitions won't be reflected in the `checkState`
-- i.e. `checkState` is only updated on success.

![CheckTx](/docs/sdk/images/learn/advanced/baseapp_state-checktx.png)

### PrepareProposal State Updates

During `PrepareProposal`, the `prepareProposalState` is set by branching the root `CommitMultiStore`.
The `prepareProposalState` is used for any reads and writes that occur during the `PrepareProposal` phase.
The function uses the `Select()` method of the mempool to iterate over the transactions. `runTx` is then called,
which encodes and validates each transaction, and from there the `AnteHandler` is executed.
If successful, valid transactions are returned, inclusive of the events, tags, and data generated
during the execution of the proposal.
The described behavior is that of the default handler; applications have the flexibility to define their own
[custom mempool handlers](https://docs.cosmos.network/main/build/building-apps/app-mempool).

![PrepareProposal](/docs/sdk/images/learn/advanced/baseapp_state-prepareproposal.png)

### ProcessProposal State Updates

During `ProcessProposal`, the `processProposalState` is set based off of the last committed state
from the root store and is used to process a signed proposal received from a validator.
In this state, `runTx` is called and the `AnteHandler` is executed. The context used in this state is
built with information from the header and the main state, including the minimum gas prices, which are also set.
Again, we want to highlight that the described behavior is that of the default handler; applications
have the flexibility to define their own [custom mempool handlers](https://docs.cosmos.network/main/build/building-apps/app-mempool).

![ProcessProposal](/docs/sdk/images/learn/advanced/baseapp_state-processproposal.png)

### FinalizeBlock State Updates

During `FinalizeBlock`, the `finalizeBlockState` is set for use during transaction execution and `EndBlock`. The
`finalizeBlockState` is based off of the last committed state from the root store and is branched.
Note, the `finalizeBlockState` is set to `nil` on [`Commit`](#commit).

The state flow for transaction execution is nearly identical to `CheckTx`, except state transitions occur on
the `finalizeBlockState` and messages in a transaction are executed. Similarly to `CheckTx`, state transitions
occur on a doubly branched state -- `finalizeBlockState`. Successful message execution results in
writes being committed to `finalizeBlockState`. Note, if message execution fails, state transitions from
the AnteHandler are persisted.

### Commit State Updates

During `Commit`, all the state transitions that occurred in the `finalizeBlockState` are finally written to
the root `CommitMultiStore`, which in turn is committed to disk and results in a new application
root hash. These state transitions are now considered final. Finally, the `checkState` is set to the
newly committed state and `finalizeBlockState` is set to `nil`, to be reset on the next `FinalizeBlock`.
![Commit](/docs/sdk/images/learn/advanced/baseapp_state-commit.png)

## ParamStore

During `InitChain`, the `RequestInitChain` provides `ConsensusParams`, which contains parameters
related to block execution such as maximum gas and size, in addition to evidence parameters. If these
parameters are non-nil, they are set in the BaseApp's `ParamStore`. Behind the scenes, the `ParamStore`
is managed by an `x/consensus_params` module. This allows the parameters to be tweaked via
on-chain governance.

## Service Routers

When messages and queries are received by the application, they must be routed to the appropriate module in order to be processed. Routing is done via `BaseApp`, which holds a `msgServiceRouter` for messages and a `grpcQueryRouter` for queries.

### `Msg` Service Router

[`sdk.Msg`s](/docs/sdk/next/documentation/module-system/messages-and-queries#messages) need to be routed after they are extracted from transactions, which are sent from the underlying CometBFT engine via the [`CheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock) ABCI messages. To do so, `BaseApp` holds a `msgServiceRouter` which maps fully-qualified service methods (`string`, defined in each module's Protobuf `Msg` service) to the appropriate module's `MsgServer` implementation.

The [default `msgServiceRouter` included in `BaseApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/msg_service_router.go) is stateless. However, some applications may want to make use of more stateful routing mechanisms, such as allowing governance to disable certain routes or point them to new modules for upgrade purposes. For this reason, the `sdk.Context` is also passed into each [route handler inside `msgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/msg_service_router.go#L35-L36). A stateless router that doesn't make use of this can simply ignore the `ctx`.
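The routing idea is a map from fully-qualified method names to handlers. The sketch below is illustrative, not the SDK's generated `MsgServer` machinery; the `/cosmos.bank.v1beta1.Msg/Send` route is real SDK naming, while the handler and `msg` type are hypothetical stand-ins.

```go
package main

import (
	"errors"
	"fmt"
)

// msg carries a fully-qualified method name, as a Protobuf Msg service defines it.
type msg struct {
	method string // e.g. "/cosmos.bank.v1beta1.Msg/Send"
	body   string
}

type handler func(m msg) (string, error)

// msgServiceRouter maps fully-qualified service methods to handlers, in the
// spirit of BaseApp's router (which maps them to MsgServer implementations).
type msgServiceRouter struct {
	routes map[string]handler
}

func newRouter() *msgServiceRouter {
	return &msgServiceRouter{routes: map[string]handler{}}
}

func (r *msgServiceRouter) register(method string, h handler) {
	r.routes[method] = h
}

func (r *msgServiceRouter) route(m msg) (string, error) {
	h, ok := r.routes[m.method]
	if !ok {
		// Mirrors the "unrecognized message route" failure mode.
		return "", errors.New("unrecognized message route: " + m.method)
	}
	return h(m)
}

func main() {
	r := newRouter()
	r.register("/cosmos.bank.v1beta1.Msg/Send", func(m msg) (string, error) {
		return "handled send: " + m.body, nil
	})
	out, err := r.route(msg{method: "/cosmos.bank.v1beta1.Msg/Send", body: "10atom"})
	fmt.Println(out, err)
	_, err = r.route(msg{method: "/unknown.Msg/Nope"})
	fmt.Println(err)
}
```

A stateful variant would simply pass a context into each handler and consult it before dispatching, which is why the real router's handlers receive `sdk.Context`.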
The application's `msgServiceRouter` is initialized with all the routes using the application's [module manager](/docs/sdk/next/documentation/module-system/module-manager#manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).

### gRPC Query Router

Similar to `sdk.Msg`s, [`queries`](/docs/sdk/next/documentation/module-system/messages-and-queries#queries) need to be routed to the appropriate module's [`Query` service](/docs/sdk/next/documentation/module-system/query-services). To do so, `BaseApp` holds a `grpcQueryRouter`, which maps modules' fully-qualified service methods (`string`, defined in their Protobuf `Query` gRPC) to their `QueryServer` implementation. The `grpcQueryRouter` is called during the initial stages of query processing, either by sending a gRPC query directly to the gRPC endpoint, or via the [`Query` ABCI message](#query) on the CometBFT RPC endpoint.

Just like the `msgServiceRouter`, the `grpcQueryRouter` is initialized with all the query routes using the application's [module manager](/docs/sdk/next/documentation/module-system/module-manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/docs/sdk/next/documentation/application-framework/app-anatomy#app-constructor).

## Main ABCI 2.0 Messages

The [Application-Blockchain Interface](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md) (ABCI) is a generic interface that connects a state-machine with a consensus engine to form a functional full-node. It can be wrapped in any language, and needs to be implemented by each application-specific blockchain built on top of an ABCI-compatible consensus engine like CometBFT.
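The shape of that interface can be sketched in Go. This is a pared-down illustration of the ABCI surface discussed in this section, not CometBFT's actual `abci/types` definitions (which carry many more request and response fields); `demoApp` and its size limits are purely hypothetical.

```go
package main

import "fmt"

type tx = []byte

// application is a pared-down sketch of the ABCI methods discussed below.
type application interface {
	PrepareProposal(txs []tx, maxTxBytes int) []tx
	ProcessProposal(txs []tx) bool
	CheckTx(t tx) uint32 // response code: 0 on success
	FinalizeBlock(txs []tx)
	Commit()
}

// demoApp accepts transactions under a fixed size and fills proposals greedily.
type demoApp struct{ maxTxSize int }

func (a demoApp) CheckTx(t tx) uint32 {
	if len(t) > a.maxTxSize {
		return 1 // non-zero code: keep the tx out of the mempool
	}
	return 0
}

func (a demoApp) PrepareProposal(txs []tx, maxTxBytes int) []tx {
	var out []tx
	used := 0
	for _, t := range txs {
		if used+len(t) > maxTxBytes {
			continue // skip transactions that would overflow the proposal
		}
		out = append(out, t)
		used += len(t)
	}
	return out
}

func (a demoApp) ProcessProposal(txs []tx) bool {
	for _, t := range txs {
		if a.CheckTx(t) != 0 {
			return false // reject the whole proposal on an invalid tx
		}
	}
	return true
}

func (a demoApp) FinalizeBlock(txs []tx) {}
func (a demoApp) Commit()               {}

func main() {
	var app application = demoApp{maxTxSize: 4}
	proposal := app.PrepareProposal([]tx{[]byte("ab"), []byte("cdef"), []byte("gh")}, 5)
	fmt.Println(len(proposal), app.ProcessProposal(proposal)) // 2 true
}
```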
The consensus engine handles two main tasks:

- The networking logic, which mainly consists of gossiping block parts, transactions, and consensus votes.
- The consensus logic, which results in the deterministic ordering of transactions in the form of blocks.

It is **not** the role of the consensus engine to define the state or the validity of transactions. Generally, transactions are handled by the consensus engine in the form of `[]byte`, and relayed to the application via the ABCI to be decoded and processed. At key moments in the networking and consensus processes (e.g. beginning of a block, commit of a block, reception of an unconfirmed transaction, ...), the consensus engine emits ABCI messages for the state-machine to act on.

Developers building on top of the Cosmos SDK need not implement the ABCI themselves, as `BaseApp` comes with a built-in implementation of the interface. Let us go through the main ABCI messages that `BaseApp` implements:

- [`Prepare Proposal`](#prepare-proposal)
- [`Process Proposal`](#process-proposal)
- [`CheckTx`](#checktx)
- [`FinalizeBlock`](#finalizeblock)
- [`ExtendVote`](#extendvote)
- [`VerifyVoteExtension`](#verifyvoteextension)

### Prepare Proposal

The `PrepareProposal` function is part of the new methods introduced in Application Blockchain Interface (ABCI++) in CometBFT and is an important part of the application's overall governance system. In the Cosmos SDK, it allows the application to have more fine-grained control over the transactions that are processed, and ensures that only valid transactions are committed to the blockchain.

Here is how the `PrepareProposal` function can be implemented:

1. Extract the `sdk.Msg`s from the transaction.
2. Perform _stateful_ checks by calling `Validate()` on each of the `sdk.Msg`s. This is done after _stateless_ checks, as _stateful_ checks are more computationally expensive.
If `Validate()` fails, `PrepareProposal` returns before running further checks, which saves resources.
3. Perform any additional checks that are specific to the application, such as checking account balances, or ensuring that certain conditions are met before a transaction is proposed. Modify the transactions before they are processed by the consensus engine, if necessary.
4. Return the updated transactions to be processed by the consensus engine.

Note that, unlike `CheckTx()`, `PrepareProposal` processes `sdk.Msg`s, so it can directly update the state. However, unlike `FinalizeBlock()`, it does not commit the state updates. It's important to exercise caution when using `PrepareProposal`, as incorrect coding could affect the overall liveness of the network.

It's important to note that `PrepareProposal` complements the `ProcessProposal` method, which is executed after it. The combination of these two methods means that it is possible to guarantee that no invalid transactions are ever committed. Furthermore, such a setup can give rise to other interesting use cases such as oracles, threshold decryption, and more.

`PrepareProposal` returns a response to the underlying consensus engine of type [`abci.PrepareProposalResponse`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#prepareproposal). The response contains:

- `Txs ([][]byte)`: The list of transactions, possibly reordered or modified, to be included in the proposed block.

### Process Proposal

The `ProcessProposal` function is called by the BaseApp as part of the ABCI message flow, and is executed during the `FinalizeBlock` phase of the consensus process. The purpose of this function is to give more control to the application for block validation, allowing it to check all transactions in a proposed block before the validator sends the prevote for the block.
It allows a validator to perform application-dependent work in a proposed block, enabling features such as immediate block execution, and allows the application to reject invalid blocks.

The `ProcessProposal` function performs several key tasks, including:

1. Validating the proposed block by checking all transactions in it.
2. Checking the proposed block against the current state of the application, to ensure that it is valid and that it can be executed.
3. Updating the application's state based on the proposal, if it is valid and passes all checks.
4. Returning a response to CometBFT indicating the result of the proposal processing.

`ProcessProposal` is an important part of the application's overall governance system. It is used to manage the network's parameters and other key aspects of its operation. It also ensures that the coherence property is adhered to, i.e. all honest validators must accept a proposal by an honest proposer.

It's important to note that `ProcessProposal` complements the `PrepareProposal` method, which enables the application to have more fine-grained transaction control by allowing it to reorder, drop, delay, modify, and even add transactions as it sees necessary. The combination of these two methods means that it is possible to guarantee that no invalid transactions are ever committed. Furthermore, such a setup can give rise to other interesting use cases such as oracles, threshold decryption, and more.

CometBFT calls `ProcessProposal` when it receives a proposal and the CometBFT algorithm has not locked on a value. The application cannot modify the proposal at this point but can reject it if it is invalid. If that is the case, CometBFT will prevote `nil` on the proposal, which has strong liveness implications for CometBFT.
As a general rule, the application SHOULD accept a prepared proposal passed via `ProcessProposal`, even if a part of the proposal is invalid (e.g., an invalid transaction); the application can ignore the invalid part of the prepared proposal at block execution time.

However, developers must exercise greater caution when using these methods. Incorrectly coding these methods could affect liveness, as CometBFT would be unable to receive the 2/3 valid precommits needed to finalize a block.

`ProcessProposal` returns a response to the underlying consensus engine of type [`abci.ProcessProposalResponse`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#processproposal). The response contains:

- `Status (ProposalStatus)`: `ACCEPT` if the proposal is valid, `REJECT` otherwise.

### CheckTx

`CheckTx` is sent by the underlying consensus engine when a new unconfirmed (i.e. not yet included in a valid block) transaction is received by a full-node. The role of `CheckTx` is to guard the full-node's mempool (where unconfirmed transactions are stored until they are included in a block) from spam transactions. Unconfirmed transactions are relayed to peers only if they pass `CheckTx`.

`CheckTx()` can perform both _stateful_ and _stateless_ checks, but developers should strive to make the checks **lightweight**, because gas fees are not charged for the resources (CPU, data load, ...) used during `CheckTx`.

In the Cosmos SDK, after [decoding transactions](/docs/sdk/next/documentation/protocol-development/encoding), `CheckTx()` is implemented to do the following checks:

1. Extract the `sdk.Msg`s from the transaction.
2. **Optionally** perform _stateless_ checks by calling `ValidateBasic()` on each of the `sdk.Msg`s.
This is done first, as _stateless_ checks are less computationally expensive than _stateful_ checks. If `ValidateBasic()` fails, `CheckTx` returns before running _stateful_ checks, which saves resources. This check is still performed for messages that have not yet migrated to the new message validation mechanism defined in [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) and still have a `ValidateBasic()` method.
3. Perform non-module related _stateful_ checks on the [account](/docs/sdk/next/documentation/protocol-development/accounts). This step is mainly about checking that the `sdk.Msg` signatures are valid, that enough fees are provided, and that the sending account has enough funds to pay for said fees. Note that no precise [`gas`](/docs/sdk/next/documentation/protocol-development/gas-fees) counting occurs here, as `sdk.Msg`s are not processed. Usually, the [`AnteHandler`](/docs/sdk/next/documentation/protocol-development/gas-fees#antehandler) will check that the `gas` provided with the transaction is greater than a minimum reference gas amount based on the raw transaction size, in order to avoid spam with transactions that provide 0 gas.

`CheckTx` does **not** process `sdk.Msg`s - they only need to be processed when the canonical state needs to be updated, which happens during `FinalizeBlock`.

Steps 2. and 3. are performed by the [`AnteHandler`](/docs/sdk/next/documentation/protocol-development/gas-fees#antehandler) in the [`RunTx()`](#runtx-antehandler-and-runmsgs) function, which `CheckTx()` calls with the `runTxModeCheck` mode. During each step of `CheckTx()`, a special [volatile state](#state-updates) called `checkState` is updated. This state is used to keep track of the temporary changes triggered by the `CheckTx()` calls of each transaction without modifying the [main canonical state](#main-state).
For example, when a transaction goes through `CheckTx()`, the transaction's fees are deducted from the sender's account in `checkState`. If a second transaction is received from the same account before the first is processed, and the account has consumed all its funds in `checkState` during the first transaction, the second transaction will fail `CheckTx()` and be rejected. In any case, the sender's account will not actually pay the fees until the transaction is included in a block, because `checkState` never gets committed to the main state. The `checkState` is reset to the latest state of the main state each time a block gets [committed](#commit).

`CheckTx` returns a response to the underlying consensus engine of type [`abci.CheckTxResponse`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#checktx). The response contains:

- `Code (uint32)`: Response Code. `0` if successful.
- `Data ([]byte)`: Result bytes, if any.
- `Log (string):` The output of the application's logger. May be non-deterministic.
- `Info (string):` Additional information. May be non-deterministic.
- `GasWanted (int64)`: Amount of gas requested for the transaction. It is provided by users when they generate the transaction.
- `GasUsed (int64)`: Amount of gas consumed by the transaction. During `CheckTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction.
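That `GasUsed` computation amounts to one multiplication. The sketch below uses the auth module's default `TxSizeCostPerByte` value of 10 as an illustrative constant; in a running chain the value is a governable parameter, not a compile-time constant.

```go
package main

import "fmt"

// During CheckTx, GasUsed is reported as a flat per-byte cost times the raw
// transaction size. 10 is the auth module's default TxSizeCostPerByte; it is
// hard-coded here only for illustration.
const txSizeCostPerByte uint64 = 10

func checkTxGasUsed(rawTx []byte) uint64 {
	return txSizeCostPerByte * uint64(len(rawTx))
}

func main() {
	rawTx := make([]byte, 250) // a 250-byte encoded transaction
	fmt.Println(checkTxGasUsed(rawTx)) // 2500
}
```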
Next is an example: + +```go expandable +package ante + +import ( + + "slices" + "time" + + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec/legacy" + "github.com/cosmos/cosmos-sdk/crypto/keys/multisig" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/migrations/legacytx" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ ValidateBasicDecorator will call tx.ValidateBasic and return any non-nil error. +/ If ValidateBasic passes, decorator calls next AnteHandler in chain. Note, +/ ValidateBasicDecorator decorator will not get executed on ReCheckTx since it +/ is not dependent on application state. +type ValidateBasicDecorator struct{ +} + +func NewValidateBasicDecorator() + +ValidateBasicDecorator { + return ValidateBasicDecorator{ +} +} + +func (vbd ValidateBasicDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + / no need to validate basic on recheck tx, call next antehandler + if ctx.IsReCheckTx() { + return next(ctx, tx, simulate) +} + if validateBasic, ok := tx.(sdk.HasValidateBasic); ok { + if err := validateBasic.ValidateBasic(); err != nil { + return ctx, err +} + +} + +return next(ctx, tx, simulate) +} + +/ ValidateMemoDecorator will validate memo given the parameters passed in +/ If memo is too large decorator returns with error, otherwise call next AnteHandler +/ CONTRACT: Tx must implement TxWithMemo interface +type ValidateMemoDecorator struct { + ak AccountKeeper +} + +func NewValidateMemoDecorator(ak AccountKeeper) + +ValidateMemoDecorator { + return ValidateMemoDecorator{ + ak: ak, +} +} + +func (vmd ValidateMemoDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, 
error) { + memoTx, ok := tx.(sdk.TxWithMemo) + if !ok { + return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid transaction type") +} + memoLength := len(memoTx.GetMemo()) + if memoLength > 0 { + params := vmd.ak.GetParams(ctx) + if uint64(memoLength) > params.MaxMemoCharacters { + return ctx, errorsmod.Wrapf(sdkerrors.ErrMemoTooLarge, + "maximum number of characters is %d but received %d characters", + params.MaxMemoCharacters, memoLength, + ) +} + +} + +return next(ctx, tx, simulate) +} + +/ ConsumeTxSizeGasDecorator will take in parameters and consume gas proportional +/ to the size of tx before calling next AnteHandler. Note, the gas costs will be +/ slightly over estimated due to the fact that any given signing account may need +/ to be retrieved from state. +/ +/ CONTRACT: If simulate=true, then signatures must either be completely filled +/ in or empty. +/ CONTRACT: To use this decorator, signatures of transaction must be represented +/ as legacytx.StdSignature otherwise simulate mode will incorrectly estimate gas cost. 
+type ConsumeTxSizeGasDecorator struct { + ak AccountKeeper +} + +func NewConsumeGasForTxSizeDecorator(ak AccountKeeper) + +ConsumeTxSizeGasDecorator { + return ConsumeTxSizeGasDecorator{ + ak: ak, +} +} + +func (cgts ConsumeTxSizeGasDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + sigTx, ok := tx.(authsigning.SigVerifiableTx) + if !ok { + return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid tx type") +} + params := cgts.ak.GetParams(ctx) + +ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*storetypes.Gas(len(ctx.TxBytes())), "txSize") + + / simulate gas cost for signatures in simulate mode + if simulate { + / in simulate mode, each element should be a nil signature + sigs, err := sigTx.GetSignaturesV2() + if err != nil { + return ctx, err +} + n := len(sigs) + +signers, err := sigTx.GetSigners() + if err != nil { + return sdk.Context{ +}, err +} + for i, signer := range signers { + / if signature is already filled in, no need to simulate gas cost + if i < n && !isIncompleteSignature(sigs[i].Data) { + continue +} + +var pubkey cryptotypes.PubKey + acc := cgts.ak.GetAccount(ctx, signer) + + / use placeholder simSecp256k1Pubkey if sig is nil + if acc == nil || acc.GetPubKey() == nil { + pubkey = simSecp256k1Pubkey +} + +else { + pubkey = acc.GetPubKey() +} + + / use stdsignature to mock the size of a full signature + simSig := legacytx.StdSignature{ /nolint:staticcheck / SA1019: legacytx.StdSignature is deprecated + Signature: simSecp256k1Sig[:], + PubKey: pubkey, +} + sigBz := legacy.Cdc.MustMarshal(simSig) + cost := storetypes.Gas(len(sigBz) + 6) + + / If the pubkey is a multi-signature pubkey, then we estimate for the maximum + / number of signers. 
+ if _, ok := pubkey.(*multisig.LegacyAminoPubKey); ok { + cost *= params.TxSigLimit +} + +ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*cost, "txSize") +} + +} + +return next(ctx, tx, simulate) +} + +/ isIncompleteSignature tests whether SignatureData is fully filled in for simulation purposes +func isIncompleteSignature(data signing.SignatureData) + +bool { + if data == nil { + return true +} + switch data := data.(type) { + case *signing.SingleSignatureData: + return len(data.Signature) == 0 + case *signing.MultiSignatureData: + if len(data.Signatures) == 0 { + return true +} + if slices.ContainsFunc(data.Signatures, isIncompleteSignature) { + return true +} + +} + +return false +} + +type ( + / TxTimeoutHeightDecorator defines an AnteHandler decorator that checks for a + / tx height timeout. + TxTimeoutHeightDecorator struct{ +} + + / TxWithTimeoutHeight defines the interface a tx must implement in order for + / TxHeightTimeoutDecorator to process the tx. + TxWithTimeoutHeight interface { + sdk.Tx + + GetTimeoutHeight() + +uint64 + GetTimeoutTimeStamp() + +time.Time +} +) + +/ TxTimeoutHeightDecorator defines an AnteHandler decorator that checks for a +/ tx height timeout. +func NewTxTimeoutHeightDecorator() + +TxTimeoutHeightDecorator { + return TxTimeoutHeightDecorator{ +} +} + +/ AnteHandle implements an AnteHandler decorator for the TxHeightTimeoutDecorator +/ type where the current block height is checked against the tx's height timeout. +/ If a height timeout is provided (non-zero) + +and is less than the current block +/ height, then an error is returned. 
func (txh TxTimeoutHeightDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) {
	timeoutTx, ok := tx.(TxWithTimeoutHeight)
	if !ok {
		return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "expected tx to implement TxWithTimeoutHeight")
	}

	timeoutHeight := timeoutTx.GetTimeoutHeight()
	if timeoutHeight > 0 && uint64(ctx.BlockHeight()) > timeoutHeight {
		return ctx, errorsmod.Wrapf(
			sdkerrors.ErrTxTimeoutHeight, "block height: %d, timeout height: %d", ctx.BlockHeight(), timeoutHeight,
		)
	}

	timeoutTimestamp := timeoutTx.GetTimeoutTimeStamp()
	blockTime := ctx.BlockHeader().Time
	if !timeoutTimestamp.IsZero() && timeoutTimestamp.Unix() != 0 && timeoutTimestamp.Before(blockTime) {
		return ctx, errorsmod.Wrapf(
			sdkerrors.ErrTxTimeout, "block time: %s, timeout timestamp: %s", blockTime, timeoutTimestamp.String(),
		)
	}

	return next(ctx, tx, simulate)
}
```

- `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (e.g. by account). See [`event`s](/docs/sdk/next/api-reference/events-streaming/events) for more.
- `Codespace (string)`: Namespace for the Code.

#### RecheckTx

After `Commit`, `CheckTx` is run again on all transactions that remain in the node's local mempool, excluding the transactions that were included in the block. To prevent the mempool from rechecking all transactions every time a block is committed, the configuration option `mempool.recheck=false` can be set. As of Tendermint v0.32.1, an additional `Type` parameter is made available to the `CheckTx` function that indicates whether an incoming transaction is new (`CheckTxType_New`) or a recheck (`CheckTxType_Recheck`). This allows certain checks, like signature verification, to be skipped during `CheckTxType_Recheck`.
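The recheck optimization can be sketched as follows. This is an illustration of the `CheckTxType_New`/`CheckTxType_Recheck` distinction rather than real SDK code (compare `ValidateBasicDecorator` above, which returns early on `ctx.IsReCheckTx()`); the counter-based "verifier" exists only to observe which path ran.

```go
package main

import "fmt"

// checkTxType mirrors the CheckTxType_New / CheckTxType_Recheck distinction.
type checkTxType int

const (
	checkTxTypeNew checkTxType = iota
	checkTxTypeRecheck
)

var sigVerifications int

// verifySignatures stands in for expensive, state-independent work.
func verifySignatures(tx []byte) error {
	sigVerifications++
	return nil
}

// checkTx runs signature verification only for new transactions: on recheck,
// the signatures were already verified when the tx first entered the mempool,
// and they do not depend on state, so re-verifying them is wasted work.
func checkTx(tx []byte, typ checkTxType) error {
	if typ == checkTxTypeNew {
		if err := verifySignatures(tx); err != nil {
			return err
		}
	}
	// Stateful checks (fees, sequence numbers) would run in both modes here.
	return nil
}

func main() {
	tx := []byte("example-tx")
	checkTx(tx, checkTxTypeNew)     // verifies signatures
	checkTx(tx, checkTxTypeRecheck) // skips signature verification
	fmt.Println(sigVerifications)   // 1
}
```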
## RunTx, AnteHandler, RunMsgs, PostHandler

### RunTx

`RunTx` is called from `CheckTx`/`FinalizeBlock` to handle the transaction, with `execModeCheck` or `execModeFinalize` as a parameter to differentiate between the two modes of execution. Note that when `RunTx` receives a transaction, it has already been decoded.

The first thing `RunTx` does upon being called is to retrieve the `context`'s `CacheMultiStore` by calling the `getContextForTx()` function with the appropriate mode (either `runTxModeCheck` or `execModeFinalize`). This `CacheMultiStore` is a branch of the main store, with cache functionality (for query requests), instantiated during `FinalizeBlock` for transaction execution and during the `Commit` of the previous block for `CheckTx`. After that, two `defer func()` calls are made for [`gas`](/docs/sdk/next/documentation/protocol-development/gas-fees) management. They are executed when `runTx` returns, and make sure `gas` is actually consumed and that errors, if any, are reported.

After that, `RunTx()` calls `ValidateBasic()`, when available and for backward compatibility, on each `sdk.Msg` in the `Tx`, which runs preliminary _stateless_ validity checks. If any `sdk.Msg` fails to pass `ValidateBasic()`, `RunTx()` returns with an error.

Then, the [`anteHandler`](#antehandler) of the application is run (if it exists). In preparation for this step, both the `checkState`/`finalizeBlockState`'s `context` and the `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function.
+ +```go expandable +package baseapp + +import ( + + "context" + "fmt" + "maps" + "math" + "slices" + "strconv" + "sync" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp/oe" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type ( + execMode uint8 + + / StoreLoader defines a customizable function to control how we load the + / CommitMultiStore from disk. This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. 
+ StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / 
ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. 
+ interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. 
If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling.This is experimental and must be enabled + / by developers. + optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. 
+func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) +} + +else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. 
+ logger.Warn("error validating merged proto registry annotations", "error", err) +} + +} + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. +func (app *BaseApp) + +GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. 
+func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. +func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. 
+func (app *BaseApp) + +CommitMultiStore() + +storetypes.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ ChainID returns the chainID of the app. +func (app *BaseApp) + +ChainID() + +string { + return app.chainID +} + +/ AnteHandler returns the AnteHandler of the app. +func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Mempool returns the Mempool of the app. +func (app *BaseApp) + +Mempool() + +mempool.Mempool { + return app.mempool +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. 
+func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + if app.cms == nil { + return errors.New("commit multi-store must not be nil") +} + emptyHeader := cmtproto.Header{ + ChainID: app.chainID +} + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + +app.Seal() + +return app.cms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. +func (app *BaseApp) + +setState(mode execMode, h cmtproto.Header) { + ms := app.cms.CacheMultiStore() + headerInfo := header.Info{ + Height: h.Height, + Time: h.Time, + ChainID: h.ChainID, + AppHash: h.AppHash, +} + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, h, false, app.logger). 
+ WithStreamingManager(app.streamingManager). + WithHeaderInfo(headerInfo), +} + switch mode { + case execModeCheck: + baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices)) + +app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ SetCircuitBreaker sets the circuit breaker for the BaseApp. +/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. +func (app *BaseApp) + +SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") +} + +app.msgServiceRouter.SetCircuit(cb) +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. +func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) + +cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ +} + +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + / This could happen while migrating from v0.45/v0.46 to v0.50, we should + / allow it to happen so during preblock the upgrade plan can be executed + / and the consensus params set for the first time in the new format. + app.logger.Error("failed to get consensus params", "err", err) + +return cmtproto.ConsensusParams{ +} + +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the BaseApp's param +/ store. +/ +/ NOTE: We're explicitly not storing the CometBFT app_version in the param store. +/ It's stored instead in the x/upgrade store, with its own bump logic. 
+func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + return errors.New("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. 
+ expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. +func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return err +} + +} + +return nil +} + +func (app *BaseApp) + +getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState +} +} + +func (app *BaseApp) + +getBlockGasMeter(ctx sdk.Context) + +storetypes.GasMeter { + if app.disableBlockGasMeter { + return noopGasMeter{ +} + +} + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) +} + +return storetypes.NewInfiniteGasMeter() +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode execMode, txBytes []byte) + +sdk.Context { + app.mu.Lock() + +defer app.mu.Unlock() + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.Context(). + WithTxBytes(txBytes). 
+ WithGasMeter(storetypes.NewInfiniteGasMeter()) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithIsSigverifyTx(app.sigverifyTx) + +ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() + +ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate)) +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. +func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]any{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(storetypes.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +func (app *BaseApp) + +preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) { + var events []abci.Event + if app.preBlocker != nil { + ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager()) + +rsp, err := app.preBlocker(ctx, req) + if err != nil { + return nil, err +} + / rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed + / write the consensus parameters in store to context + if rsp.ConsensusParamsChanged { + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + / GasMeter must be set after we get a context with updated consensus params. 
+ gasMeter := app.getBlockGasMeter(ctx) + +ctx = ctx.WithBlockGasMeter(gasMeter) + +app.finalizeBlockState.SetContext(ctx) +} + +events = ctx.EventManager().ABCIEvents() +} + +return events, nil +} + +func (app *BaseApp) + +beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.Context()) + if err != nil { + return resp, err +} + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" +}, + ) +} + +resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) +} + +return resp, nil +} + +func (app *BaseApp) + +deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + +return resp +} + +resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +} + +return resp +} + +/ endBlock is an application-defined function that is called after transactions +/ have been processed in FinalizeBlock. 
+func (app *BaseApp) + +endBlock(_ context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.Context()) + if err != nil { + return endblock, err +} + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" +}, + ) +} + +eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + +endblock = eb +} + +return endblock, nil +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +/ both txbytes and the decoded tx are passed to runTx to avoid the state machine encoding the tx and decoding the transaction twice +/ passing the decoded tx to runTX is optional, it will be decoded if the tx is nil +func (app *BaseApp) + +runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil + ctx.Logger().Error("panic recovered in runTx", "err", err) +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). 
+ if mode == execModeFinalize { + defer consumeBlockGas() +} + + / if the transaction is not decoded, decode it here + if tx == nil { + tx, err = app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + GasUsed: 0, + GasWanted: 0 +}, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error()) +} + +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + for _, msg := range msgs { + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return sdk.GasInfo{ +}, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. 
+ ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + if mode == execModeReCheck { + / if the ante handler fails on recheck, we want to remove the tx from the mempool + if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil { + return gInfo, nil, anteEvents, errors.Join(err, mempoolErr) +} + +} + +return gInfo, nil, nil, err +} + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + switch mode { + case execModeCheck: + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err +} + case execModeFinalize: + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) +} + + / Run optional postHandlers (should run regardless of the execution result). + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. 
+ postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if errPostHandler != nil { + if err == nil { + / when the msg was handled successfully, return the post handler error only + return gInfo, nil, anteEvents, errPostHandler +} + / otherwise append to the msg error so that we keep the original error code for better user experience + return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler) +} + + / we don't want runTx to panic if runMsgs has failed earlier + if result == nil { + result = &sdk.Result{ +} + +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if err == nil { + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i) +} + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) +} + +events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + + +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses +}) +} + +func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + signers, err := cdc.GetMsgV2Signers(msgV2) + if err != nil { + return nil, err +} + if len(signers) > 0 && signers[0] != nil { + addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0]) + if err != nil { + return nil, err +} + +msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr)) +} + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName)) +} + +} + +return sdk.Events{ + msgEvent +}.AppendEvents(events), nil +} + +/ PrepareProposalVerifyTx performs transaction verification when a proposer is +/ creating a block proposal during PrepareProposal. Any state committed to the +/ PrepareProposal state internally will be discarded. will be +/ returned if the transaction cannot be encoded. will be returned if +/ the transaction is valid, otherwise will be returned. +func (app *BaseApp) + +PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) { + bz, err := app.txEncoder(tx) + if err != nil { + return nil, err +} + + _, _, _, err = app.runTx(execModePrepareProposal, bz, tx) + if err != nil { + return nil, err +} + +return bz, nil +} + +/ ProcessProposalVerifyTx performs transaction verification when receiving a +/ block proposal during ProcessProposal. 
Any state committed to the +/ ProcessProposal state internally will be discarded. will be +/ returned if the transaction cannot be decoded. will be returned if +/ the transaction is valid, otherwise will be returned. +func (app *BaseApp) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) { + tx, err := app.txDecoder(txBz) + if err != nil { + return nil, err +} + + _, _, _, err = app.runTx(execModeProcessProposal, txBz, tx) + if err != nil { + return nil, err +} + +return tx, nil +} + +func (app *BaseApp) + +TxDecode(txBytes []byte) (sdk.Tx, error) { + return app.txDecoder(txBytes) +} + +func (app *BaseApp) + +TxEncode(tx sdk.Tx) ([]byte, error) { + return app.txEncoder(tx) +} + +func (app *BaseApp) + +StreamingManager() + +storetypes.StreamingManager { + return app.streamingManager +} + +/ Close is called in start cmd to gracefully cleanup resources. +func (app *BaseApp) + +Close() + +error { + var errs []error + + / Close app.db (opened by cosmos-sdk/server/start.go call to openDB) + if app.db != nil { + app.logger.Info("Closing application.db") + if err := app.db.Close(); err != nil { + errs = append(errs, err) +} + +} + + / Close app.snapshotManager + / - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate) + / - which calls cosmos-sdk/server/util.go/GetSnapshotStore + / - which is passed to baseapp/options.go/SetSnapshot + / - to set app.snapshotManager = snapshots.NewManager + if app.snapshotManager != nil { + app.logger.Info("Closing snapshots/metadata.db") + if err := app.snapshotManager.Close(); err != nil { + errs = append(errs, err) +} + +} + +return errors.Join(errs...) +} + +/ GetBaseApp returns the pointer to itself. +func (app *BaseApp) + +GetBaseApp() *BaseApp { + return app +} +``` + +This allows `RunTx` not to commit the changes made to the state during the execution of `anteHandler` if it ends up failing. 
It also prevents the module implementing the `anteHandler` from writing to state, which is an important part of the [object-capabilities](/docs/sdk/next/documentation/core-concepts/ocap) of the Cosmos SDK. + +Finally, the [`RunMsgs()`](#runmsgs) function is called to process the `sdk.Msg`s in the `Tx`. In preparation of this step, just like with the `anteHandler`, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function. + +### AnteHandler + +The `AnteHandler` is a special handler that implements the `AnteHandler` interface and is used to authenticate the transaction before the transaction's internal messages are processed. + +```go expandable +package types + +/ AnteHandler authenticates transactions, before their internal messages are handled. +/ If newCtx.IsZero(), ctx is used instead. +type AnteHandler func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) + +/ PostHandler like AnteHandler but it executes after RunMsgs. Runs on success +/ or failure and enables use cases like gas refunding. +type PostHandler func(ctx Context, tx Tx, simulate, success bool) (newCtx Context, err error) + +/ AnteDecorator wraps the next AnteHandler to perform custom pre-processing. +type AnteDecorator interface { + AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) +} + +/ PostDecorator wraps the next PostHandler to perform custom post-processing. +type PostDecorator interface { + PostHandle(ctx Context, tx Tx, simulate, success bool, next PostHandler) (newCtx Context, err error) +} + +/ ChainAnteDecorators ChainDecorator chains AnteDecorators together with each AnteDecorator +/ wrapping over the decorators further along chain and returns a single AnteHandler. +/ +/ NOTE: The first element is outermost decorator, while the last element is innermost +/ decorator. 
Decorator ordering is critical since some decorators will expect +/ certain checks and updates to be performed (e.g. the Context) + +before the decorator +/ is run. These expectations should be documented clearly in a CONTRACT docline +/ in the decorator's godoc. +/ +/ NOTE: Any application that uses GasMeter to limit transaction processing cost +/ MUST set GasMeter with the FIRST AnteDecorator. Failing to do so will cause +/ transactions to be processed with an infinite gasmeter and open a DOS attack vector. +/ Use `ante.SetUpContextDecorator` or a custom Decorator with similar functionality. +/ Returns nil when no AnteDecorator are supplied. +func ChainAnteDecorators(chain ...AnteDecorator) + +AnteHandler { + if len(chain) == 0 { + return nil +} + handlerChain := make([]AnteHandler, len(chain)+1) + / set the terminal AnteHandler decorator + handlerChain[len(chain)] = func(ctx Context, tx Tx, simulate bool) (Context, error) { + return ctx, nil +} + for i := range chain { + ii := i + handlerChain[ii] = func(ctx Context, tx Tx, simulate bool) (Context, error) { + return chain[ii].AnteHandle(ctx, tx, simulate, handlerChain[ii+1]) +} + +} + +return handlerChain[0] +} + +/ ChainPostDecorators chains PostDecorators together with each PostDecorator +/ wrapping over the decorators further along chain and returns a single PostHandler. +/ +/ NOTE: The first element is outermost decorator, while the last element is innermost +/ decorator. Decorator ordering is critical since some decorators will expect +/ certain checks and updates to be performed (e.g. the Context) + +before the decorator +/ is run. These expectations should be documented clearly in a CONTRACT docline +/ in the decorator's godoc. 
+func ChainPostDecorators(chain ...PostDecorator) + +PostHandler { + if len(chain) == 0 { + return nil +} + handlerChain := make([]PostHandler, len(chain)+1) + / set the terminal PostHandler decorator + handlerChain[len(chain)] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) { + return ctx, nil +} + for i := range chain { + ii := i + handlerChain[ii] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) { + return chain[ii].PostHandle(ctx, tx, simulate, success, handlerChain[ii+1]) +} + +} + +return handlerChain[0] +} + +/ Terminator AnteDecorator will get added to the chain to simplify decorator code +/ Don't need to check if next == nil further up the chain +/ +/ ______ +/ <((((((\\\ +/ / . +}\ +/ ;--..--._| +} +/ (\ '--/\--' ) +/ \\ | '-' :'| +/ \\ . -==- .-| +/ \\ \.__.' \--._ +/ [\\ __.--| / _/'--. +/ \ \\ .'-._ ('-----'/ __/ \ +/ \ \\ / __>| | '--. | +/ \ \\ | \ | / / / +/ \ '\ / \ | | _/ / +/ \ \ \ | | / / +/ snd \ \ \ / +/ +/ Deprecated: Terminator is retired (ref https://github.com/cosmos/cosmos-sdk/pull/16076). +type Terminator struct{ +} + +/ AnteHandle returns the provided Context and nil error +func (t Terminator) + +AnteHandle(ctx Context, _ Tx, _ bool, _ AnteHandler) (Context, error) { + return ctx, nil +} + +/ PostHandle returns the provided Context and nil error +func (t Terminator) + +PostHandle(ctx Context, _ Tx, _, _ bool, _ PostHandler) (Context, error) { + return ctx, nil +} +``` + +The `AnteHandler` is theoretically optional, but still a very important component of public blockchain networks. It serves 3 primary purposes: + +- Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](/docs/sdk/next/documentation/protocol-development/transactions#transaction-generation) checking. 
+- Perform preliminary _stateful_ validity checks, like ensuring signatures are valid or that the sender has enough funds to pay for fees.
+- Play a role in the incentivization of stakeholders via the collection of transaction fees.
+
+`BaseApp` holds an `anteHandler` as a parameter that is initialized in the [application's constructor](/docs/sdk/next/documentation/application-framework/app-anatomy#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/ante/ante.go).
+
+Click [here](/docs/sdk/next/documentation/protocol-development/gas-fees#antehandler) for more on the `anteHandler`.
+
+### RunMsgs
+
+`RunMsgs` is called from `RunTx` with `runTxModeCheck` as a parameter to check the existence of a route for each message in the transaction, and with `execModeFinalize` to actually process the `sdk.Msg`s.
+
+First, it retrieves the `sdk.Msg`'s fully-qualified type name by checking the `type_url` of the Protobuf `Any` representing the `sdk.Msg`. Then, using the application's [`msgServiceRouter`](#msg-service-router), it checks for the existence of a `Msg` service method related to that `type_url`. At this point, if `mode == runTxModeCheck`, `RunMsgs` returns. Otherwise, if `mode == execModeFinalize`, the [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services) RPC is executed before `RunMsgs` returns.
+
+### PostHandler
+
+`PostHandler` is similar to `AnteHandler`, but, as the name suggests, it executes custom post-transaction processing logic after [`RunMsgs`](#runmsgs) is called. `PostHandler` receives the `Result` of `RunMsgs` in order to enable this customizable behavior.
+
+Like `AnteHandler`s, `PostHandler`s are theoretically optional.
+
+Use cases such as unused gas refunds can also be enabled by `PostHandler`s.
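The decorator-chaining pattern shared by `AnteHandler` and `PostHandler` can be illustrated with a small dependency-free sketch. Note that `Context`, `Tx`, and the decorator names below (`refund-gas`, `emit-metrics`) are simplified stand-ins for illustration only, not the real SDK definitions; only the chaining logic mirrors the `ChainPostDecorators` shown above.

```go
package main

import "fmt"

// Context and Tx are minimal stand-ins for the SDK types (assumptions for
// this sketch); Context carries a log so we can observe decorator order.
type Context struct{ log []string }
type Tx struct{}

// PostHandler and PostDecorator mirror the shapes from the snippet above.
type PostHandler func(ctx Context, tx Tx, simulate, success bool) (Context, error)

type PostDecorator interface {
	PostHandle(ctx Context, tx Tx, simulate, success bool, next PostHandler) (Context, error)
}

// ChainPostDecorators folds the decorators into a single PostHandler.
// The first element is the outermost decorator; a no-op terminal handler
// ends the chain so decorators never need to check next == nil.
func ChainPostDecorators(chain ...PostDecorator) PostHandler {
	if len(chain) == 0 {
		return nil
	}
	handlerChain := make([]PostHandler, len(chain)+1)
	handlerChain[len(chain)] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) {
		return ctx, nil // terminal no-op handler
	}
	for i := range chain {
		ii := i // capture loop variable for the closure
		handlerChain[ii] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) {
			return chain[ii].PostHandle(ctx, tx, simulate, success, handlerChain[ii+1])
		}
	}
	return handlerChain[0]
}

// tagDecorator records its name, then delegates to the next handler.
type tagDecorator struct{ name string }

func (d tagDecorator) PostHandle(ctx Context, tx Tx, simulate, success bool, next PostHandler) (Context, error) {
	ctx.log = append(ctx.log, d.name)
	return next(ctx, tx, simulate, success)
}

func main() {
	handler := ChainPostDecorators(tagDecorator{name: "refund-gas"}, tagDecorator{name: "emit-metrics"})
	ctx, err := handler(Context{}, Tx{}, false, true)
	fmt.Println(ctx.log, err) // decorators run outermost-first
}
```

Because each handler wraps the next, the outermost decorator runs first on the way in, which is why the gas meter must be set by the first decorator in the real SDK chain.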
+ +```go expandable +package posthandler + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ HandlerOptions are the options required for constructing a default SDK PostHandler. +type HandlerOptions struct{ +} + +/ NewPostHandler returns an empty PostHandler chain. +func NewPostHandler(_ HandlerOptions) (sdk.PostHandler, error) { + postDecorators := []sdk.PostDecorator{ +} + +return sdk.ChainPostDecorators(postDecorators...), nil +} +``` + +Note, when `PostHandler`s fail, the state from `runMsgs` is also reverted, effectively making the transaction fail. + +## Other ABCI Messages + +### InitChain + +The [`InitChain` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when the chain is first started. It is mainly used to **initialize** parameters and state like: + +- [Consensus Parameters](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#consensus-parameters) via `setConsensusParams`. +- [`checkState` and `finalizeBlockState`](#state-updates) via `setState`. +- The [block gas meter](/docs/sdk/next/documentation/protocol-development/gas-fees#block-gas-meter), with infinite gas to process genesis transactions. + +Finally, the `InitChain(req abci.InitChainRequest)` method of `BaseApp` calls the [`initChainer()`](/docs/sdk/next/documentation/application-framework/app-anatomy#initchainer) of the application in order to initialize the main state of the application from the `genesis file` and, if defined, call the [`InitGenesis`](/docs/sdk/next/documentation/module-system/genesis#initgenesis) function of each of the application's modules. + +### FinalizeBlock + +The [`FinalizeBlock` ABCI message](https://github.com/cometbft/cometbft/blob/v0.38.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when a block proposal created by the correct proposer is received. 
The previous `BeginBlock, DeliverTx and Endblock` calls are private methods on the BaseApp struct. + +```go expandable +package baseapp + +import ( + + "context" + "fmt" + "sort" + "strings" + "time" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc/codes" + grpcstatus "google.golang.org/grpc/status" + + coreheader "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/store/rootmulti" + snapshottypes "cosmossdk.io/store/snapshots/types" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ Supported ABCI Query prefixes and paths +const ( + QueryPathApp = "app" + QueryPathCustom = "custom" + QueryPathP2P = "p2p" + QueryPathStore = "store" + + QueryPathBroadcastTx = "/cosmos.tx.v1beta1.Service/BroadcastTx" +) + +func (app *BaseApp) + +InitChain(req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + if req.ChainId != app.chainID { + return nil, fmt.Errorf("invalid chain-id on InitChain; expected: %s, got: %s", app.chainID, req.ChainId) +} + + / On a new chain, we consider the init chain block height as 0, even though + / req.InitialHeight is 1 by default. + initHeader := cmtproto.Header{ + ChainID: req.ChainId, + Time: req.Time +} + +app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId) + + / Set the initial height, which will be used to determine if we are proposing + / or processing the first block or not. 
+ app.initialHeight = req.InitialHeight + if app.initialHeight == 0 { / If initial height is 0, set it to 1 + app.initialHeight = 1 +} + + / if req.InitialHeight is > 1, then we set the initial version on all stores + if req.InitialHeight > 1 { + initHeader.Height = req.InitialHeight + if err := app.cms.SetInitialVersion(req.InitialHeight); err != nil { + return nil, err +} + +} + + / initialize states with a correct header + app.setState(execModeFinalize, initHeader) + +app.setState(execModeCheck, initHeader) + + / Store the consensus params in the BaseApp's param store. Note, this must be + / done after the finalizeBlockState and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + err := app.StoreConsensusParams(app.finalizeBlockState.Context(), *req.ConsensusParams) + if err != nil { + return nil, err +} + +} + +defer func() { + / InitChain represents the state of the application BEFORE the first block, + / i.e. the genesis block. This means that when processing the app's InitChain + / handler, the block height is zero by default. However, after Commit is called + / the height needs to reflect the true block height. + initHeader.Height = req.InitialHeight + app.checkState.SetContext(app.checkState.Context().WithBlockHeader(initHeader). + WithHeaderInfo(coreheader.Info{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time, +})) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockHeader(initHeader). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time, +})) +}() + if app.initChainer == nil { + return &abci.ResponseInitChain{ +}, nil +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter())) + +res, err := app.initChainer(app.finalizeBlockState.Context(), req) + if err != nil { + return nil, err +} + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + return nil, fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ) +} + +sort.Sort(abci.ValidatorUpdates(req.Validators)) + +sort.Sort(abci.ValidatorUpdates(res.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + return nil, fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i) +} + +} + +} + + / NOTE: We don't commit, but FinalizeBlock for block InitialHeight starts from + / this FinalizeBlockState. + return &abci.ResponseInitChain{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: app.LastCommitID().Hash, +}, nil +} + +func (app *BaseApp) + +Info(_ *abci.RequestInfo) (*abci.ResponseInfo, error) { + lastCommitID := app.cms.LastCommitID() + +return &abci.ResponseInfo{ + Data: app.name, + Version: app.version, + AppVersion: app.appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +}, nil +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. 
+func (app *BaseApp) + +Query(_ context.Context, req *abci.RequestQuery) (resp *abci.ResponseQuery, err error) { + / add panic recovery for all queries + / + / Ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + resp = sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + +defer telemetry.MeasureSince(telemetry.Now(), req.Path) + if req.Path == QueryPathBroadcastTx { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "can't route a broadcast tx message"), app.trace), nil +} + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req), nil +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), app.trace), nil +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + resp = handleQueryApp(app, path, req) + case QueryPathStore: + resp = handleQueryStore(app, path, *req) + case QueryPathP2P: + resp = handleQueryP2P(app, path) + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +return resp, nil +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +ListSnapshots(req *abci.RequestListSnapshots) (*abci.ResponseListSnapshots, error) { + resp := &abci.ResponseListSnapshots{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp, nil +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return nil, err +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to convert ABCI snapshots", "err", err) + +return nil, err +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp, nil +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req *abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) { + if app.snapshotManager == nil { + return &abci.ResponseLoadSnapshotChunk{ +}, nil +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return nil, err +} + +return &abci.ResponseLoadSnapshotChunk{ + Chunk: chunk +}, nil +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req *abci.RequestOfferSnapshot) (*abci.ResponseOfferSnapshot, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT_FORMAT +}, nil + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil + + default: + / CometBFT errors are defined here: https://github.com/cometbft/cometbft/blob/main/statesync/syncer.go + / It may happen that in case of a CometBFT error, such as a timeout (which occurs after two minutes), + / the process is aborted. This is done intentionally because deleting the database programmatically + / can lead to more complicated situations. + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a + / different snapshot, so we ask CometBFT to abort all snapshot restoration. 
+ return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ApplySnapshotChunk(req *abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +}, nil + + default: + app.logger.Error("failed to restore snapshot", "err", err) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req *abci.RequestCheckTx) (*abci.ResponseCheckTx, error) { + var mode execMode + switch req.Type { + case abci.CheckTxType_New: + mode = execModeCheck + case abci.CheckTxType_Recheck: + mode = execModeReCheck + + default: + return nil, fmt.Errorf("unknown RequestCheckTx type: %s", req.Type) +} + if app.checkTxHandler == nil { + gInfo, result, anteEvents, err := app.runTx(mode, req.Tx, nil) + if err != nil { + return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace), nil +} + +return &abci.ResponseCheckTx{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +}, nil +} + + / Create wrapper to avoid users overriding the execution mode + runTx := func(txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + return app.runTx(mode, txBytes, tx) +} + +return app.checkTxHandler(runTx, req) +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. 
+/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req *abci.RequestPrepareProposal) (resp *abci.ResponsePrepareProposal, err error) { + if app.prepareProposal == nil { + return nil, errors.New("PrepareProposal handler not set") +} + + / Always reset state given that PrepareProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + +app.setState(execModePrepareProposal, header) + + / CometBFT must never call PrepareProposal with a height of 0. + / + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("PrepareProposal called with invalid height") +} + +app.prepareProposalState.SetContext(app.getContextForProposal(app.prepareProposalState.Context(), req.Height). + WithVoteInfos(toVoteInfo(req.LocalLastCommit.Votes)). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithProposer(req.ProposerAddress). + WithExecMode(sdk.ExecModePrepareProposal). + WithCometInfo(prepareProposalInfo{ + req +}). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, +})) + +app.prepareProposalState.SetContext(app.prepareProposalState.Context(). + WithConsensusParams(app.GetConsensusParams(app.prepareProposalState.Context())). 
+ WithBlockGasMeter(app.getBlockGasMeter(app.prepareProposalState.Context()))) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = &abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} + +}() + +resp, err = app.prepareProposal(app.prepareProposalState.Context(), req) + if err != nil { + app.logger.Error("failed to prepare proposal", "height", req.Height, "time", req.Time, "err", err) + +return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} + +return resp, nil +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ CometBFT that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { + if app.processProposal == nil { + return nil, errors.New("ProcessProposal handler not set") +} + + / CometBFT must never call ProcessProposal with a height of 0. 
+ / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("ProcessProposal called with invalid height") +} + + / Always reset state given that ProcessProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + +app.setState(execModeProcessProposal, header) + + / Since the application can get access to FinalizeBlock state and write to it, + / we must be sure to reset it in case ProcessProposal timeouts and is called + / again in a subsequent round. However, we only want to do this after we've + / processed the first block, as we want to avoid overwriting the finalizeState + / after state changes during InitChain. + if req.Height > app.initialHeight { + / abort any running OE + app.optimisticExec.Abort() + +app.setState(execModeFinalize, header) +} + +app.processProposalState.SetContext(app.getContextForProposal(app.processProposalState.Context(), req.Height). + WithVoteInfos(req.ProposedLastCommit.Votes). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithCometInfo(cometInfo{ + ProposerAddress: req.ProposerAddress, + ValidatorsHash: req.NextValidatorsHash, + Misbehavior: req.Misbehavior, + LastCommit: req.ProposedLastCommit +}). + WithExecMode(sdk.ExecModeProcessProposal). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, +})) + +app.processProposalState.SetContext(app.processProposalState.Context(). + WithConsensusParams(app.GetConsensusParams(app.processProposalState.Context())). 
+ WithBlockGasMeter(app.getBlockGasMeter(app.processProposalState.Context()))) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +}() + +resp, err = app.processProposal(app.processProposalState.Context(), req) + if err != nil { + app.logger.Error("failed to process proposal", "height", req.Height, "time", req.Time, "hash", fmt.Sprintf("%X", req.Hash), "err", err) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + + / Only execute optimistic execution if the proposal is accepted, OE is + / enabled and the block height is greater than the initial height. During + / the first block we'll be carrying state from InitChain, so it would be + / impossible for us to easily revert. + / After the first block has been processed, the next blocks will get executed + / optimistically, so that when the ABCI client calls `FinalizeBlock` the app + / can have a response ready. + if resp.Status == abci.ResponseProcessProposal_ACCEPT && + app.optimisticExec.Enabled() && + req.Height > app.initialHeight { + app.optimisticExec.Execute(req) +} + +return resp, nil +} + +/ ExtendVote implements the ExtendVote ABCI method and returns a ResponseExtendVote. +/ It calls the application's ExtendVote handler which is responsible for performing +/ application-specific business logic when sending a pre-commit for the NEXT +/ block height. The extensions response may be non-deterministic but must always +/ be returned, even if empty. +/ +/ Agreed upon vote extensions are made available to the proposer of the next +/ height and are committed in the subsequent height, i.e. H+2. An error is +/ returned if vote extensions are not enabled or if extendVote fails or panics. 
+func (app *BaseApp) + +ExtendVote(_ context.Context, req *abci.RequestExtendVote) (resp *abci.ResponseExtendVote, err error) { + / Always reset state given that ExtendVote and VerifyVoteExtension can timeout + / and be called again in a subsequent round. + var ctx sdk.Context + + / If we're extending the vote for the initial height, we need to use the + / finalizeBlockState context, otherwise we don't get the uncommitted data + / from InitChain. + if req.Height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() +} + +else { + emptyHeader := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height +} + ms := app.cms.CacheMultiStore() + +ctx = sdk.NewContext(ms, emptyHeader, false, app.logger).WithStreamingManager(app.streamingManager) +} + if app.extendVote == nil { + return nil, errors.New("application ExtendVote handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(ctx) + + / Note: In this case, we do want to extend vote if the height is equal or + / greater than VoteExtensionsEnableHeight. This defers from the check done + / in ValidateVoteExtensions and PrepareProposal in which we'll check for + / vote extensions on VoteExtensionsEnableHeight+1. + extsEnabled := cp.Abci != nil && req.Height >= cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + if !extsEnabled { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to ExtendVote at height %d", req.Height) +} + +ctx = ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVoteExtension). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Hash: req.Hash, +}) + + / add a deferred recover handler in case extendVote panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in ExtendVote", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in ExtendVote: %v", r) +} + +}() + +resp, err = app.extendVote(ctx, req) + if err != nil { + app.logger.Error("failed to extend vote", "height", req.Height, "hash", fmt.Sprintf("%X", req.Hash), "err", err) + +return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} + +return resp, err +} + +/ VerifyVoteExtension implements the VerifyVoteExtension ABCI method and returns +/ a ResponseVerifyVoteExtension. It calls the application's VerifyVoteExtension +/ handler which is responsible for performing application-specific business +/ logic in verifying a vote extension from another validator during the pre-commit +/ phase. The response MUST be deterministic. An error is returned if vote +/ extensions are not enabled or if verifyVoteExt fails or panics. +func (app *BaseApp) + +VerifyVoteExtension(req *abci.RequestVerifyVoteExtension) (resp *abci.ResponseVerifyVoteExtension, err error) { + if app.verifyVoteExt == nil { + return nil, errors.New("application VerifyVoteExtension handler not set") +} + +var ctx sdk.Context + + / If we're verifying the vote for the initial height, we need to use the + / finalizeBlockState context, otherwise we don't get the uncommitted data + / from InitChain.
+ if req.Height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() +} + +else { + emptyHeader := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height +} + ms := app.cms.CacheMultiStore() + +ctx = sdk.NewContext(ms, emptyHeader, false, app.logger).WithStreamingManager(app.streamingManager) +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(ctx) + + / Note: we verify votes extensions on VoteExtensionsEnableHeight+1. Check + / comment in ExtendVote and ValidateVoteExtensions for more details. + extsEnabled := cp.Abci != nil && req.Height >= cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + if !extsEnabled { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to VerifyVoteExtension at height %d", req.Height) +} + + / add a deferred recover handler in case verifyVoteExt panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in VerifyVoteExtension", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "validator", fmt.Sprintf("%X", req.ValidatorAddress), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in VerifyVoteExtension: %v", r) +} + +}() + +ctx = ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVerifyVoteExtension). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Hash: req.Hash, +}) + +resp, err = app.verifyVoteExt(ctx, req) + if err != nil { + app.logger.Error("failed to verify vote extension", "height", req.Height, "err", err) + +return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_REJECT +}, nil +} + +return resp, err +} + +/ internalFinalizeBlock executes the block, called by the Optimistic +/ Execution flow or by the FinalizeBlock ABCI method. The context received is +/ only used to handle early cancellation, for anything related to state app.finalizeBlockState.Context() +/ must be used. +func (app *BaseApp) + +internalFinalizeBlock(ctx context.Context, req *abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) { + var events []abci.Event + if err := app.checkHalt(req.Height, req.Time); err != nil { + return nil, err +} + if err := app.validateFinalizeBlockHeight(req); err != nil { + return nil, err +} + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(storetypes.TraceContext( + map[string]any{"blockHeight": req.Height +}, + )) +} + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + + / finalizeBlockState should be set on InitChain or ProcessProposal. If it is + / nil, it means we are replaying this block and we need to set the state here + / given that during block replay ProcessProposal is not executed by CometBFT. + if app.finalizeBlockState == nil { + app.setState(execModeFinalize, header) +} + + / Context is now updated with Header information. + app.finalizeBlockState.SetContext(app.finalizeBlockState.Context(). + WithBlockHeader(header). + WithHeaderHash(req.Hash). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + Hash: req.Hash, + AppHash: app.LastCommitID().Hash, +}). + WithConsensusParams(app.GetConsensusParams(app.finalizeBlockState.Context())). + WithVoteInfos(req.DecidedLastCommit.Votes). + WithExecMode(sdk.ExecModeFinalize). + WithCometInfo(cometInfo{ + Misbehavior: req.Misbehavior, + ValidatorsHash: req.NextValidatorsHash, + ProposerAddress: req.ProposerAddress, + LastCommit: req.DecidedLastCommit, +})) + + / GasMeter must be set after we get a context with updated consensus params. + gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) + if app.checkState != nil { + app.checkState.SetContext(app.checkState.Context(). + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash)) +} + +preblockEvents, err := app.preBlock(req) + if err != nil { + return nil, err +} + +events = append(events, preblockEvents...) + +beginBlock, err := app.beginBlock(req) + if err != nil { + return nil, err +} + + / First check for an abort signal after beginBlock, as it's the first place + / we spend any significant amount of time. + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +events = append(events, beginBlock.Events...) + + / Reset the gas meter so that the AnteHandlers aren't required to + gasMeter = app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) + + / Iterate over all raw transactions in the proposal and attempt to execute + / them, gathering the execution results. + / + / NOTE: Not all raw transactions may adhere to the sdk.Tx interface, e.g. + / vote extensions, so skip those. 
+ txResults := make([]*abci.ExecTxResult, 0, len(req.Txs)) + for _, rawTx := range req.Txs { + var response *abci.ExecTxResult + if _, err := app.txDecoder(rawTx); err == nil { + response = app.deliverTx(rawTx) +} + +else { + / In the case where a transaction included in a block proposal is malformed, + / we still want to return a default response to comet. This is because comet + / expects a response for each transaction included in a block proposal. + response = sdkerrors.ResponseExecTxResultWithEvents( + sdkerrors.ErrTxDecode, + 0, + 0, + nil, + false, + ) +} + + / check after every tx if we should abort + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +txResults = append(txResults, response) +} + if app.finalizeBlockState.ms.TracingEnabled() { + app.finalizeBlockState.ms = app.finalizeBlockState.ms.SetTracingContext(nil).(storetypes.CacheMultiStore) +} + +endBlock, err := app.endBlock(app.finalizeBlockState.Context()) + if err != nil { + return nil, err +} + + / check after endBlock if we should abort, to avoid propagating the result + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +events = append(events, endBlock.Events...) + cp := app.GetConsensusParams(app.finalizeBlockState.Context()) + +return &abci.ResponseFinalizeBlock{ + Events: events, + TxResults: txResults, + ValidatorUpdates: endBlock.ValidatorUpdates, + ConsensusParamUpdates: &cp, +}, nil +} + +/ FinalizeBlock will execute the block proposal provided by RequestFinalizeBlock. +/ Specifically, it will execute an application's BeginBlock (if defined), followed +/ by the transactions in the proposal, finally followed by the application's +/ EndBlock (if defined). +/ +/ For each raw transaction, i.e. a byte slice, BaseApp will only execute it if +/ it adheres to the sdk.Tx interface. Otherwise, the raw transaction will be +/ skipped. 
This is to support compatibility with proposers injecting vote +/ extensions into the proposal, which should not themselves be executed in cases +/ where they adhere to the sdk.Tx interface. +func (app *BaseApp) + +FinalizeBlock(req *abci.RequestFinalizeBlock) (res *abci.ResponseFinalizeBlock, err error) { + defer func() { + if res == nil { + return +} + / call the streaming service hooks with the FinalizeBlock messages + for _, streamingListener := range app.streamingManager.ABCIListeners { + if err := streamingListener.ListenFinalizeBlock(app.finalizeBlockState.Context(), *req, *res); err != nil { + app.logger.Error("ListenFinalizeBlock listening hook failed", "height", req.Height, "err", err) +} + +} + +}() + if app.optimisticExec.Initialized() { + / check if the hash we got is the same as the one we are executing + aborted := app.optimisticExec.AbortIfNeeded(req.Hash) + / Wait for the OE to finish, regardless of whether it was aborted or not + res, err = app.optimisticExec.WaitResult() + + / only return if we are not aborting + if !aborted { + if res != nil { + res.AppHash = app.workingHash() +} + +return res, err +} + + / if it was aborted, we need to reset the state + app.finalizeBlockState = nil + app.optimisticExec.Reset() +} + + / if no OE is running, just run the block (this is either a block replay or an OE that got aborted) + +res, err = app.internalFinalizeBlock(context.Background(), req) + if res != nil { + res.AppHash = app.workingHash() +} + +return res, err +} + +/ checkHalt checks if height or time exceeds halt-height or halt-time respectively.
+func (app *BaseApp) + +checkHalt(height int64, time time.Time) + +error { + var halt bool + switch { + case app.haltHeight > 0 && uint64(height) >= app.haltHeight: + halt = true + case app.haltTime > 0 && time.Unix() >= int64(app.haltTime): + halt = true +} + if halt { + return fmt.Errorf("halt per configuration height %d time %d", app.haltHeight, app.haltTime) +} + +return nil +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. +func (app *BaseApp) + +Commit() (*abci.ResponseCommit, error) { + header := app.finalizeBlockState.Context().BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + if app.precommiter != nil { + app.precommiter(app.finalizeBlockState.Context()) +} + +rms, ok := app.cms.(*rootmulti.Store) + if ok { + rms.SetCommitHeader(header) +} + +app.cms.Commit() + resp := &abci.ResponseCommit{ + RetainHeight: retainHeight, +} + abciListeners := app.streamingManager.ABCIListeners + if len(abciListeners) > 0 { + ctx := app.finalizeBlockState.Context() + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + for _, abciListener := range abciListeners { + if err := abciListener.ListenCommit(ctx, *resp, changeSet); err != nil { + app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err) +} + +} + +} + + / Reset the CheckTx state to the latest committed. + / + / NOTE: This is safe because CometBFT holds a lock on the mempool for + / Commit. Use the header from this latest block. 
+ app.setState(execModeCheck, header) + +app.finalizeBlockState = nil + if app.prepareCheckStater != nil { + app.prepareCheckStater(app.checkState.Context()) +} + + / The SnapshotIfApplicable method will create the snapshot by starting the goroutine + app.snapshotManager.SnapshotIfApplicable(header.Height) + +return resp, nil +} + +/ workingHash gets the apphash that will be finalized in commit. +/ These writes will be persisted to the root multi-store (app.cms) + +and flushed to +/ disk in the Commit phase. This means when the ABCI client requests Commit(), the application +/ state transitions will be flushed to disk and, as a result, we already have +/ an application Merkle root. +func (app *BaseApp) + +workingHash() []byte { + / Write the FinalizeBlock state into branched storage and commit the MultiStore. + / The write to the FinalizeBlock state writes all state transitions to the root + / MultiStore (app.cms) + +so when Commit() + +is called it persists those values. + app.finalizeBlockState.ms.Write() + + / Get the hash of all writes in order to return the apphash to the comet in finalizeBlock.
+ commitHash := app.cms.WorkingHash() + +app.logger.Debug("hash of all writes", "workingHash", fmt.Sprintf("%X", commitHash)) + +return commitHash +} + +func handleQueryApp(app *BaseApp, path []string, req *abci.RequestQuery) *abci.ResponseQuery { + if len(path) >= 2 { + switch path[1] { + case "simulate": + txBytes := req.Data + + gInfo, res, err := app.Simulate(txBytes) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to simulate tx"), app.trace) +} + simRes := &sdk.SimulationResponse{ + GasInfo: gInfo, + Result: res, +} + +bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to JSON encode simulation response"), app.trace) +} + +return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: bz, +} + case "version": + return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: []byte(app.version), +} + +default: + return sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace) +} + +} + +return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrUnknownRequest, + "expected second parameter to be either 'simulate' or 'version', neither was present", + ), app.trace) +} + +func handleQueryStore(app *BaseApp, path []string, req abci.RequestQuery) *abci.ResponseQuery { + / "/store" prefix for store queries + queryable, ok := app.cms.(storetypes.Queryable) + if !ok { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "multi-store does not support queries"), app.trace) +} + +req.Path = "/" + strings.Join(path[1:], "/") + if req.Height <= 1 && req.Prove { + return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ), app.trace) +} + sdkReq := storetypes.RequestQuery(req) + +resp, err := queryable.Query(&sdkReq) + 
if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + +resp.Height = req.Height + abciResp := abci.ResponseQuery(*resp) + +return &abciResp +} + +func handleQueryP2P(app *BaseApp, path []string) *abci.ResponseQuery { + / "/p2p" prefix for p2p queries + if len(path) < 4 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "path should be p2p filter "), app.trace) +} + +var resp *abci.ResponseQuery + + cmd, typ, arg := path[1], path[2], path[3] + switch cmd { + case "filter": + switch typ { + case "addr": + resp = app.FilterPeerByAddrPort(arg) + case "id": + resp = app.FilterPeerByID(arg) +} + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace) +} + +return resp +} + +/ SplitABCIQueryPath splits a string path using the delimiter '/'. +/ +/ e.g. "this/is/funny" becomes []string{"this", "is", "funny" +} + +func SplitABCIQueryPath(requestPath string) (path []string) { + path = strings.Split(requestPath, "/") + + / first element is empty string + if len(path) > 0 && path[0] == "" { + path = path[1:] +} + +return path +} + +/ FilterPeerByAddrPort filters peers by address/port. +func (app *BaseApp) + +FilterPeerByAddrPort(info string) *abci.ResponseQuery { + if app.addrPeerFilter != nil { + return app.addrPeerFilter(info) +} + +return &abci.ResponseQuery{ +} +} + +/ FilterPeerByID filters peers by node ID. +func (app *BaseApp) + +FilterPeerByID(info string) *abci.ResponseQuery { + if app.idPeerFilter != nil { + return app.idPeerFilter(info) +} + +return &abci.ResponseQuery{ +} +} + +/ getContextForProposal returns the correct Context for PrepareProposal and +/ ProcessProposal. We use finalizeBlockState on the first block to be able to +/ access any state changes made in InitChain. 
+func (app *BaseApp) + +getContextForProposal(ctx sdk.Context, height int64) + +sdk.Context { + if height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() + + / clear all context data set during InitChain to avoid inconsistent behavior + ctx = ctx.WithBlockHeader(cmtproto.Header{ +}).WithHeaderInfo(coreheader.Info{ +}) + +return ctx +} + +return ctx +} + +func (app *BaseApp) + +handleQueryGRPC(handler GRPCQueryHandler, req *abci.RequestQuery) *abci.ResponseQuery { + ctx, err := app.CreateQueryContext(req.Height, req.Prove) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + +resp, err := handler(ctx, req) + if err != nil { + resp = sdkerrors.QueryResult(gRPCErrorToSDKError(err), app.trace) + +resp.Height = req.Height + return resp +} + +return resp +} + +func gRPCErrorToSDKError(err error) + +error { + status, ok := grpcstatus.FromError(err) + if !ok { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) +} + switch status.Code() { + case codes.NotFound: + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, err.Error()) + case codes.InvalidArgument: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.FailedPrecondition: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.Unauthenticated: + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, err.Error()) + +default: + return errorsmod.Wrap(sdkerrors.ErrUnknownRequest, err.Error()) +} +} + +func checkNegativeHeight(height int64) + +error { + if height < 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "cannot query with height < 0; please provide a valid height") +} + +return nil +} + +/ CreateQueryContext creates a new sdk.Context for a query, taking as args +/ the block height and whether the query needs a proof or not. 
+func (app *BaseApp) + +CreateQueryContext(height int64, prove bool) (sdk.Context, error) { + return app.CreateQueryContextWithCheckHeader(height, prove, true) +} + +/ CreateQueryContextWithCheckHeader creates a new sdk.Context for a query, taking as args +/ the block height, whether the query needs a proof or not, and whether to check the header or not. +func (app *BaseApp) + +CreateQueryContextWithCheckHeader(height int64, prove, checkHeader bool) (sdk.Context, error) { + if err := checkNegativeHeight(height); err != nil { + return sdk.Context{ +}, err +} + + / use custom query multi-store if provided + qms := app.qms + if qms == nil { + qms = app.cms.(storetypes.MultiStore) +} + lastBlockHeight := qms.LatestVersion() + if lastBlockHeight == 0 { + return sdk.Context{ +}, errorsmod.Wrapf(sdkerrors.ErrInvalidHeight, "%s is not ready; please wait for first block", app.Name()) +} + if height > lastBlockHeight { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidHeight, + "cannot query with height in the future; please provide a valid height", + ) +} + if height == 1 && prove { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ) +} + +var header *cmtproto.Header + isLatest := height == 0 + for _, state := range []*state{ + app.checkState, + app.finalizeBlockState, +} { + if state != nil { + / branch the commit multi-store for safety + h := state.Context().BlockHeader() + if isLatest { + lastBlockHeight = qms.LatestVersion() +} + if !checkHeader || !isLatest || isLatest && h.Height == lastBlockHeight { + header = &h + break +} + +} + +} + if header == nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrInvalidHeight, + "context did not contain latest block height in either check state or finalize block state (%d)", lastBlockHeight, + ) +} + + / when a client did not provide a query height, manually inject the latest + if isLatest { + 
height = lastBlockHeight +} + +cacheMS, err := qms.CacheMultiStoreWithVersion(height) + if err != nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrNotFound, + "failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight, + ) +} + + / branch the commit multi-store for safety + ctx := sdk.NewContext(cacheMS, *header, true, app.logger). + WithMinGasPrices(app.minGasPrices). + WithGasMeter(storetypes.NewGasMeter(app.queryGasLimit)). + WithBlockHeader(*header). + WithBlockHeight(height) + if !isLatest { + rms, ok := app.cms.(*rootmulti.Store) + if ok { + cInfo, err := rms.GetCommitInfo(height) + if cInfo != nil && err == nil { + ctx = ctx.WithBlockHeight(height).WithBlockTime(cInfo.Timestamp) +} + +} + +} + +return ctx, nil +} + +/ GetBlockRetentionHeight returns the height for which all blocks below this height +/ are pruned from CometBFT. Given a commitment height and a non-zero local +/ minRetainBlocks configuration, the retentionHeight is the smallest height that +/ satisfies: +/ +/ - Unbonding (safety threshold) + +time: The block interval in which validators +/ can be economically punished for misbehavior. Blocks in this interval must be +/ auditable e.g. by the light client. +/ +/ - Logical store snapshot interval: The block interval at which the underlying +/ logical store database is persisted to disk, e.g. every 10000 heights. Blocks +/ since the last IAVL snapshot must be available for replay on application restart. +/ +/ - State sync snapshots: Blocks since the oldest available snapshot must be +/ available for state sync nodes to catch up (oldest because a node may be +/ restoring an old snapshot while a new snapshot was taken). +/ +/ - Local (minRetainBlocks) + +config: Archive nodes may want to retain more or +/ all blocks, e.g. via a local config option min-retain-blocks. There may also +/ be a need to vary retention for other nodes, e.g. sentry nodes which do not +/ need historical blocks. 
+func (app *BaseApp) + +GetBlockRetentionHeight(commitHeight int64) + +int64 { + / If minRetainBlocks is zero, pruning is disabled and we return 0 + / If commitHeight is less than or equal to minRetainBlocks, return 0 since there are not enough + / blocks to trigger pruning yet. This ensures we keep all blocks until we have at least minRetainBlocks. + retentionBlockWindow := commitHeight - int64(app.minRetainBlocks) + if app.minRetainBlocks == 0 || retentionBlockWindow <= 0 { + return 0 +} + minNonZero := func(x, y int64) + +int64 { + switch { + case x == 0: + return y + case y == 0: + return x + case x < y: + return x + + default: + return y +} + +} + + / Define retentionHeight as the minimum value that satisfies all non-zero + / constraints. All blocks below (commitHeight-retentionHeight) + +are pruned + / from CometBFT. + var retentionHeight int64 + + / Define the number of blocks needed to protect against misbehaving validators + / which allows light clients to operate safely. Note, we piggy back of the + / evidence parameters instead of computing an estimated number of blocks based + / on the unbonding period and block commitment time as the two should be + / equivalent. + cp := app.GetConsensusParams(app.finalizeBlockState.Context()) + if cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 { + retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks +} + if app.snapshotManager != nil { + snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights() + if snapshotRetentionHeights > 0 { + retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights) +} + +} + +retentionHeight = minNonZero(retentionHeight, retentionBlockWindow) + if retentionHeight <= 0 { + / prune nothing in the case of a non-positive height + return 0 +} + +return retentionHeight +} + +/ toVoteInfo converts the new ExtendedVoteInfo to VoteInfo. 
+func toVoteInfo(votes []abci.ExtendedVoteInfo) []abci.VoteInfo { + legacyVotes := make([]abci.VoteInfo, len(votes)) + for i, vote := range votes { + legacyVotes[i] = abci.VoteInfo{ + Validator: abci.Validator{ + Address: vote.Validator.Address, + Power: vote.Validator.Power, +}, + BlockIdFlag: vote.BlockIdFlag, +} + +} + +return legacyVotes +} +``` + +#### PreBlock + +- Run the application's [`preBlocker()`](/docs/sdk/next/documentation/application-framework/app-anatomy#preblocker), which mainly runs the [`PreBlocker()`](/docs/sdk/next/documentation/module-system/preblock#preblock) method of each of the modules. + +#### BeginBlock + +- Initialize [`finalizeBlockState`](#state-updates) with the latest header using the `req abci.FinalizeBlockRequest` passed as parameter via the `setState` function. + + ```go expandable + package baseapp + + import ( + + "context" + "fmt" + "maps" + "math" + "slices" + "strconv" + "sync" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp/oe" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/types/msgservice" + ) + + type ( + execMode uint8 + + / StoreLoader defines a customizable function to control 
how we load the + / CommitMultiStore from disk. This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. + StoreLoader func(ms storetypes.CommitMultiStore) + + error + ) + + const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + + transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal + ) + + var _ servertypes.ABCI = (*BaseApp)(nil) + + / BaseApp reflects the ABCI application implementation. + type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + + state + qms storetypes.MultiStore / Optional alternative multistore for querying only. 
+ storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + + grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + + BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + + EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. 
dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. + interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. 
+ minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + + at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependent on this parameter in conjunction + / with the unbonding (safety threshold) + + period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType + }.{ + attributeKey + }, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ + } + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling. This is experimental and must be enabled + / by developers.
+ optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool + } + + / NewBaseApp returns a reference to an initialized BaseApp. It accepts a + / variadic number of option functions, which act on the BaseApp to set + / configuration choices. + func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), + ) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, + } + for _, option := range options { + option(app) + } + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ + }) + } + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) + } + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) + } + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) + } + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) + } + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) + } + + app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil 
pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + + protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) + } + + else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + logger.Warn("error validating merged proto registry annotations", "error", err) + } + + } + + return app + } + + / Name returns the name of the BaseApp. + func (app *BaseApp) + + Name() + + string { + return app.name + } + + / AppVersion returns the application's protocol version. + func (app *BaseApp) + + AppVersion() + + uint64 { + return app.appVersion + } + + / Version returns the application's version string. + func (app *BaseApp) + + Version() + + string { + return app.version + } + + / Logger returns the logger of the BaseApp. + func (app *BaseApp) + + Logger() + + log.Logger { + return app.logger + } + + / Trace returns the boolean value for logging error stack traces. + func (app *BaseApp) + + Trace() + + bool { + return app.trace + } + + / MsgServiceRouter returns the MsgServiceRouter of a BaseApp. + func (app *BaseApp) + + MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter + } + + / GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. + func (app *BaseApp) + + GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter + } + + / MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp + / multistore. 
+ func (app *BaseApp) + + MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) + } + + else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) + } + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + + default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) + } + + } + } + + / MountKVStores mounts all IAVL or DB stores to the provided keys in the + / BaseApp multistore. + func (app *BaseApp) + + MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) + } + + else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) + } + + } + } + + / MountTransientStores mounts all transient stores to the provided keys in + / the BaseApp multistore. + func (app *BaseApp) + + MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) + } + } + + / MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal + / commit multi-store. + func (app *BaseApp) + + MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) + } + } + + / MountStore mounts a store to the provided key in the BaseApp multistore, + / using the default DB. 
+ func (app *BaseApp) + + MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) + } + + / LoadLatestVersion loads the latest application version. It will panic if + / called more than once on a running BaseApp. + func (app *BaseApp) + + LoadLatestVersion() + + error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) + } + + return app.Init() + } + + / DefaultStoreLoader will be used by default and loads the latest version + func DefaultStoreLoader(ms storetypes.CommitMultiStore) + + error { + return ms.LoadLatestVersion() + } + + / CommitMultiStore returns the root multi-store. + / App constructor can use this to access the `cms`. + / UNSAFE: must not be used during the abci life cycle. + func (app *BaseApp) + + CommitMultiStore() + + storetypes.CommitMultiStore { + return app.cms + } + + / SnapshotManager returns the snapshot manager. + / application use this to register extra extension snapshotters. + func (app *BaseApp) + + SnapshotManager() *snapshots.Manager { + return app.snapshotManager + } + + / LoadVersion loads the BaseApp application version. It will panic if called + / more than once on a running baseapp. + func (app *BaseApp) + + LoadVersion(version int64) + + error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) + } + + return app.Init() + } + + / LastCommitID returns the last CommitID of the multistore. + func (app *BaseApp) + + LastCommitID() + + storetypes.CommitID { + return app.cms.LastCommitID() + } + + / LastBlockHeight returns the last committed block height. + func (app *BaseApp) + + LastBlockHeight() + + int64 { + return app.cms.LastCommitID().Version + } + + / ChainID returns the chainID of the app. 
+ func (app *BaseApp) + + ChainID() + + string { + return app.chainID + } + + / AnteHandler returns the AnteHandler of the app. + func (app *BaseApp) + + AnteHandler() + + sdk.AnteHandler { + return app.anteHandler + } + + / Mempool returns the Mempool of the app. + func (app *BaseApp) + + Mempool() + + mempool.Mempool { + return app.mempool + } + + / Init initializes the app. It seals the app, preventing any + / further modifications. In addition, it validates the app against + / the earlier provided settings. Returns an error if validation fails. + / nil otherwise. Panics if the app is already sealed. + func (app *BaseApp) + + Init() + + error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") + } + if app.cms == nil { + return errors.New("commit multi-store must not be nil") + } + emptyHeader := cmtproto.Header{ + ChainID: app.chainID + } + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + + app.Seal() + + return app.cms.GetPruning().Validate() + } + + func (app *BaseApp) + + setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices + } + + func (app *BaseApp) + + setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight + } + + func (app *BaseApp) + + setHaltTime(haltTime uint64) { + app.haltTime = haltTime + } + + func (app *BaseApp) + + setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks + } + + func (app *BaseApp) + + setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache + } + + func (app *BaseApp) + + setTrace(trace bool) { + app.trace = trace + } + + func (app *BaseApp) + + setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ + }) + for _, e := range ie { + app.indexEvents[e] = struct{ + }{ + } + + } + } + + / Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. 
+ func (app *BaseApp) + + Seal() { + app.sealed = true + } + + / IsSealed returns true if the BaseApp is sealed and false otherwise. + func (app *BaseApp) + + IsSealed() + + bool { + return app.sealed + } + + / setState sets the BaseApp's state for the corresponding mode with a branched + / multi-store (i.e. a CacheMultiStore) + + and a new Context with the same + / multi-store branch, and provided header. + func (app *BaseApp) + + setState(mode execMode, h cmtproto.Header) { + ms := app.cms.CacheMultiStore() + headerInfo := header.Info{ + Height: h.Height, + Time: h.Time, + ChainID: h.ChainID, + AppHash: h.AppHash, + } + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, h, false, app.logger). + WithStreamingManager(app.streamingManager). + WithHeaderInfo(headerInfo), + } + switch mode { + case execModeCheck: + baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices)) + + app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) + } + } + + / SetCircuitBreaker sets the circuit breaker for the BaseApp. + / The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. + func (app *BaseApp) + + SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") + } + + app.msgServiceRouter.SetCircuit(cb) + } + + / GetConsensusParams returns the current consensus parameters from the BaseApp's + / ParamStore. If the BaseApp has no ParamStore defined, nil is returned. 
+ func (app *BaseApp) + + GetConsensusParams(ctx sdk.Context) + + cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ + } + + } + + cp, err := app.paramStore.Get(ctx) + if err != nil { + / This could happen while migrating from v0.45/v0.46 to v0.50, we should + / allow it to happen so during preblock the upgrade plan can be executed + / and the consensus params set for the first time in the new format. + app.logger.Error("failed to get consensus params", "err", err) + + return cmtproto.ConsensusParams{ + } + + } + + return cp + } + + / StoreConsensusParams sets the consensus parameters to the BaseApp's param + / store. + / + / NOTE: We're explicitly not storing the CometBFT app_version in the param store. + / It's stored instead in the x/upgrade store, with its own bump logic. + func (app *BaseApp) + + StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + + error { + if app.paramStore == nil { + return errors.New("cannot store consensus params with no params store set") + } + + return app.paramStore.Set(ctx, cp) + } + + / AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. + func (app *BaseApp) + + AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) + } + } + + / GetMaximumBlockGas gets the maximum gas from the consensus params. It panics + / if maximum block gas is less than negative one and returns zero if negative + / one. 
+ func (app *BaseApp) + + GetMaximumBlockGas(ctx sdk.Context) + + uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 + } + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) + } + } + + func (app *BaseApp) + + validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + + error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) + } + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight + } + + else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. + expectedHeight = lastBlockHeight + 1 + } + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) + } + + return nil + } + + / validateBasicTxMsgs executes basic validator calls for messages. 
+ func validateBasicTxMsgs(msgs []sdk.Msg) + + error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") + } + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue + } + if err := m.ValidateBasic(); err != nil { + return err + } + + } + + return nil + } + + func (app *BaseApp) + + getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState + } + } + + func (app *BaseApp) + + getBlockGasMeter(ctx sdk.Context) + + storetypes.GasMeter { + if app.disableBlockGasMeter { + return noopGasMeter{ + } + + } + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) + } + + return storetypes.NewInfiniteGasMeter() + } + + / retrieve the context for the tx w/ txBytes and other memoized values. + func (app *BaseApp) + + getContextForTx(mode execMode, txBytes []byte) + + sdk.Context { + app.mu.Lock() + + defer app.mu.Unlock() + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) + } + ctx := modeState.Context(). + WithTxBytes(txBytes). + WithGasMeter(storetypes.NewInfiniteGasMeter()) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithIsSigverifyTx(app.sigverifyTx) + + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) + } + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() + + ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate)) + } + + return ctx + } + + / cacheTxContext returns a new context based off of the provided context with + / a branched multi-store. 
+ func (app *BaseApp) + + cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]any{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), + }, + ), + ).(storetypes.CacheMultiStore) + } + + return ctx.WithMultiStore(msCache), msCache + } + + func (app *BaseApp) + + preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) { + var events []abci.Event + if app.preBlocker != nil { + ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager()) + + rsp, err := app.preBlocker(ctx, req) + if err != nil { + return nil, err + } + / rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed + / write the consensus parameters in store to context + if rsp.ConsensusParamsChanged { + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + / GasMeter must be set after we get a context with updated consensus params. 
+ gasMeter := app.getBlockGasMeter(ctx) + + ctx = ctx.WithBlockGasMeter(gasMeter) + + app.finalizeBlockState.SetContext(ctx) + } + + events = ctx.EventManager().ABCIEvents() + } + + return events, nil + } + + func (app *BaseApp) + + beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.Context()) + if err != nil { + return resp, err + } + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" + }, + ) + } + + resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) + } + + return resp, nil + } + + func (app *BaseApp) + + deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ + } + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + + telemetry.IncrCounter(1, "tx", resultStr) + + telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + + telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") + }() + + gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + + return resp + } + + resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), + } + + return resp + } + + / endBlock is an application-defined function that is called after transactions + / have been processed in FinalizeBlock. 
+ func (app *BaseApp) + + endBlock(_ context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.Context()) + if err != nil { + return endblock, err + } + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" + }, + ) + } + + eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + + endblock = eb + } + + return endblock, nil + } + + / runTx processes a transaction within a given execution mode, encoded transaction + / bytes, and the decoded transaction itself. All state transitions occur through + / a cached Context depending on the mode provided. State only gets persisted + / if all messages get executed successfully and the execution mode is DeliverTx. + / Note, gas execution info is always returned. A reference to a Result is + / returned if the tx does not run out of gas and if all the messages are valid + / and execute successfully. An error is returned otherwise. + / both txbytes and the decoded tx are passed to runTx to avoid the state machine encoding the tx and decoding the transaction twice + / passing the decoded tx to runTX is optional, it will be decoded if the tx is nil + func (app *BaseApp) + + runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") + } + + defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + + err, result = processRecovery(r, recoveryMW), nil + ctx.Logger().Error("panic recovered in runTx", "err", err) + } + + gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() + } + + }() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) + } + + } + + / If BlockGasMeter() + + panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). 
+ if mode == execModeFinalize { + defer consumeBlockGas() + } + + / if the transaction is not decoded, decode it here + if tx == nil { + tx, err = app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + GasUsed: 0, + GasWanted: 0 + }, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error()) + } + + } + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ + }, nil, nil, err + } + for _, msg := range msgs { + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return sdk.GasInfo{ + }, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) + } + + } + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + + anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + + newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + + is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. 
+ ctx = newCtx.WithMultiStore(ms) + } + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + if mode == execModeReCheck { + / if the ante handler fails on recheck, we want to remove the tx from the mempool + if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil { + return gInfo, nil, anteEvents, errors.Join(err, mempoolErr) + } + + } + + return gInfo, nil, nil, err + } + + msCache.Write() + + anteEvents = events.ToABCIEvents() + } + switch mode { + case execModeCheck: + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err + } + case execModeFinalize: + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) + } + + } + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) + } + + / Run optional postHandlers (should run regardless of the execution result). + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. 
+ postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + + newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if errPostHandler != nil { + if err == nil { + / when the msg was handled successfully, return the post handler error only + return gInfo, nil, anteEvents, errPostHandler + } + / otherwise append to the msg error so that we keep the original error code for better user experience + return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler) + } + + / we don't want runTx to panic if runMsgs has failed earlier + if result == nil { + result = &sdk.Result{ + } + + } + + result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) + } + if err == nil { + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + + msCache.Write() + } + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) + } + + } + + return gInfo, result, anteEvents, err + } + + / runMsgs iterates through a list of messages and executes them with the provided + / Context and execution mode. Messages will only be executed during simulation + / and DeliverTx. An error is returned if any single message fails or if a + / Handler does not exist for a given message route. Otherwise, a reference to a + / Result is returned. The caller must not commit state if an error is returned. + func (app *BaseApp) + + runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + + var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break + } + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) + } + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) + } + + / create message events + msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i) + } + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) + } + + events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + + has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) + } + + msgResponses = append(msgResponses, msgResponse) + } + + + } + + data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") + } + + return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, + }, nil + } + + / makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+ func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses + }) + } + + func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + signers, err := cdc.GetMsgV2Signers(msgV2) + if err != nil { + return nil, err + } + if len(signers) > 0 && signers[0] != nil { + addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0]) + if err != nil { + return nil, err + } + + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr)) + } + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName)) + } + + } + + return sdk.Events{ + msgEvent + }.AppendEvents(events), nil + } + + / PrepareProposalVerifyTx performs transaction verification when a proposer is + / creating a block proposal during PrepareProposal. Any state committed to the + / PrepareProposal state internally will be discarded. will be + / returned if the transaction cannot be encoded. will be returned if + / the transaction is valid, otherwise will be returned. + func (app *BaseApp) + + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) { + bz, err := app.txEncoder(tx) + if err != nil { + return nil, err + } + + _, _, _, err = app.runTx(execModePrepareProposal, bz, tx) + if err != nil { + return nil, err + } + + return bz, nil + } + + / ProcessProposalVerifyTx performs transaction verification when receiving a + / block proposal during ProcessProposal. 
Any state committed to the + / ProcessProposal state internally will be discarded. will be + / returned if the transaction cannot be decoded. will be returned if + / the transaction is valid, otherwise will be returned. + func (app *BaseApp) + + ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) { + tx, err := app.txDecoder(txBz) + if err != nil { + return nil, err + } + + _, _, _, err = app.runTx(execModeProcessProposal, txBz, tx) + if err != nil { + return nil, err + } + + return tx, nil + } + + func (app *BaseApp) + + TxDecode(txBytes []byte) (sdk.Tx, error) { + return app.txDecoder(txBytes) + } + + func (app *BaseApp) + + TxEncode(tx sdk.Tx) ([]byte, error) { + return app.txEncoder(tx) + } + + func (app *BaseApp) + + StreamingManager() + + storetypes.StreamingManager { + return app.streamingManager + } + + / Close is called in start cmd to gracefully cleanup resources. + func (app *BaseApp) + + Close() + + error { + var errs []error + + / Close app.db (opened by cosmos-sdk/server/start.go call to openDB) + if app.db != nil { + app.logger.Info("Closing application.db") + if err := app.db.Close(); err != nil { + errs = append(errs, err) + } + + } + + / Close app.snapshotManager + / - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate) + / - which calls cosmos-sdk/server/util.go/GetSnapshotStore + / - which is passed to baseapp/options.go/SetSnapshot + / - to set app.snapshotManager = snapshots.NewManager + if app.snapshotManager != nil { + app.logger.Info("Closing snapshots/metadata.db") + if err := app.snapshotManager.Close(); err != nil { + errs = append(errs, err) + } + + } + + return errors.Join(errs...) + } + + / GetBaseApp returns the pointer to itself. + func (app *BaseApp) + + GetBaseApp() *BaseApp { + return app + } + ``` + + This function also resets the [main gas meter](/docs/sdk/next/documentation/protocol-development/gas-fees#main-gas-meter). 
- Initialize the [block gas meter](/docs/sdk/next/documentation/protocol-development/gas-fees#block-gas-meter) with the `maxGas` limit. The `gas` consumed within the block cannot exceed `maxGas`. This parameter is defined in the application's consensus parameters.

- Run the application's [`beginBlocker()`](/docs/sdk/next/documentation/application-framework/app-anatomy#beginblocker-and-endblocker), which mainly runs the [`BeginBlocker()`](/docs/sdk/next/documentation/module-system/beginblock-endblock#beginblock) method of each of the modules.

- Set the [`VoteInfos`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#voteinfo) of the application, i.e. the list of validators whose _precommit_ for the previous block was included by the proposer of the current block. This information is carried into the [`Context`](/docs/sdk/next/documentation/application-framework/context) so that it can be used during transaction execution and `EndBlock`.

#### Transaction Execution

When the underlying consensus engine receives a block proposal, each transaction in the block needs to be processed by the application. To that end, the underlying consensus engine sends the transactions to the application as part of the `FinalizeBlock` message, in sequential order.

Before the first transaction of a given block is processed, a [volatile state](#state-updates) called `finalizeBlockState` is initialized during `FinalizeBlock`. This state is updated each time a transaction is processed via `FinalizeBlock`, and committed to the [main state](#main-state) when the block is [committed](#commit), after which it is set to `nil`.
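The branch-and-commit behavior of `finalizeBlockState` can be sketched with a toy branched store. This is a minimal illustration only: a plain map stands in for the `CommitMultiStore`, and `branch`/`write` mimic what `CacheMultiStore` and its `Write()` method do; none of these names are SDK APIs.

```go
package main

import "fmt"

// store is a toy key-value state standing in for the main (committed) state.
type store map[string]string

// cacheStore is a copy-on-write branch over a parent store, mimicking a
// CacheMultiStore: reads fall through to the parent, writes stay in `dirty`.
type cacheStore struct {
	parent store
	dirty  store
}

func branch(s store) *cacheStore { return &cacheStore{parent: s, dirty: store{}} }

func (c *cacheStore) set(k, v string) { c.dirty[k] = v }

func (c *cacheStore) get(k string) string {
	if v, ok := c.dirty[k]; ok {
		return v
	}
	return c.parent[k]
}

// write flushes the branch into the parent, like CacheMultiStore.Write()
// during Commit.
func (c *cacheStore) write() {
	for k, v := range c.dirty {
		c.parent[k] = v
	}
}

func main() {
	mainState := store{"balance/alice": "10"}

	// FinalizeBlock: initialize the volatile state as a branch of main state.
	finalizeBlockState := branch(mainState)

	// Each tx writes to the branch only; main state is untouched until Commit.
	finalizeBlockState.set("balance/alice", "7")
	finalizeBlockState.set("balance/bob", "3")
	fmt.Println("main state before commit:", mainState["balance/alice"])

	// Commit: flush the branch into the main state, then drop the branch.
	finalizeBlockState.write()
	finalizeBlockState = nil
	_ = finalizeBlockState
	fmt.Println("main state after commit:", mainState["balance/alice"], mainState["balance/bob"])
}
```

If a transaction fails mid-block, only its own nested branch is discarded; the block-level branch keeps the writes of earlier successful transactions until Commit flushes them.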
+ +```go expandable +package baseapp + +import ( + + "context" + "fmt" + "maps" + "math" + "slices" + "strconv" + "sync" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp/oe" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type ( + execMode uint8 + + / StoreLoader defines a customizable function to control how we load the + / CommitMultiStore from disk. This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. 
+ StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / 
ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. 
+ interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. 
If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling.This is experimental and must be enabled + / by developers. + optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. 
+func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) +} + +else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. 
+ logger.Warn("error validating merged proto registry annotations", "error", err) +} + +} + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. +func (app *BaseApp) + +GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. 
+func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. +func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. 
+func (app *BaseApp) + +CommitMultiStore() + +storetypes.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ ChainID returns the chainID of the app. +func (app *BaseApp) + +ChainID() + +string { + return app.chainID +} + +/ AnteHandler returns the AnteHandler of the app. +func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Mempool returns the Mempool of the app. +func (app *BaseApp) + +Mempool() + +mempool.Mempool { + return app.mempool +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. 
+func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + if app.cms == nil { + return errors.New("commit multi-store must not be nil") +} + emptyHeader := cmtproto.Header{ + ChainID: app.chainID +} + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + +app.Seal() + +return app.cms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. +func (app *BaseApp) + +setState(mode execMode, h cmtproto.Header) { + ms := app.cms.CacheMultiStore() + headerInfo := header.Info{ + Height: h.Height, + Time: h.Time, + ChainID: h.ChainID, + AppHash: h.AppHash, +} + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, h, false, app.logger). 
+ WithStreamingManager(app.streamingManager). + WithHeaderInfo(headerInfo), +} + switch mode { + case execModeCheck: + baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices)) + +app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ SetCircuitBreaker sets the circuit breaker for the BaseApp. +/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. +func (app *BaseApp) + +SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") +} + +app.msgServiceRouter.SetCircuit(cb) +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. +func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) + +cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ +} + +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + / This could happen while migrating from v0.45/v0.46 to v0.50, we should + / allow it to happen so during preblock the upgrade plan can be executed + / and the consensus params set for the first time in the new format. + app.logger.Error("failed to get consensus params", "err", err) + +return cmtproto.ConsensusParams{ +} + +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the BaseApp's param +/ store. +/ +/ NOTE: We're explicitly not storing the CometBFT app_version in the param store. +/ It's stored instead in the x/upgrade store, with its own bump logic. 
+func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + return errors.New("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. 
+ expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. +func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return err +} + +} + +return nil +} + +func (app *BaseApp) + +getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState +} +} + +func (app *BaseApp) + +getBlockGasMeter(ctx sdk.Context) + +storetypes.GasMeter { + if app.disableBlockGasMeter { + return noopGasMeter{ +} + +} + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) +} + +return storetypes.NewInfiniteGasMeter() +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode execMode, txBytes []byte) + +sdk.Context { + app.mu.Lock() + +defer app.mu.Unlock() + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.Context(). + WithTxBytes(txBytes). 
+ WithGasMeter(storetypes.NewInfiniteGasMeter()) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithIsSigverifyTx(app.sigverifyTx) + +ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() + +ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate)) +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. +func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]any{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(storetypes.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +func (app *BaseApp) + +preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) { + var events []abci.Event + if app.preBlocker != nil { + ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager()) + +rsp, err := app.preBlocker(ctx, req) + if err != nil { + return nil, err +} + / rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed + / write the consensus parameters in store to context + if rsp.ConsensusParamsChanged { + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + / GasMeter must be set after we get a context with updated consensus params. 
+ gasMeter := app.getBlockGasMeter(ctx) + +ctx = ctx.WithBlockGasMeter(gasMeter) + +app.finalizeBlockState.SetContext(ctx) +} + +events = ctx.EventManager().ABCIEvents() +} + +return events, nil +} + +func (app *BaseApp) + +beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.Context()) + if err != nil { + return resp, err +} + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" +}, + ) +} + +resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) +} + +return resp, nil +} + +func (app *BaseApp) + +deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + +return resp +} + +resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +} + +return resp +} + +/ endBlock is an application-defined function that is called after transactions +/ have been processed in FinalizeBlock. 
+func (app *BaseApp) + +endBlock(_ context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.Context()) + if err != nil { + return endblock, err +} + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" +}, + ) +} + +eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + +endblock = eb +} + +return endblock, nil +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +/ both txbytes and the decoded tx are passed to runTx to avoid the state machine encoding the tx and decoding the transaction twice +/ passing the decoded tx to runTX is optional, it will be decoded if the tx is nil +func (app *BaseApp) + +runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil + ctx.Logger().Error("panic recovered in runTx", "err", err) +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). 
+ if mode == execModeFinalize { + defer consumeBlockGas() +} + + / if the transaction is not decoded, decode it here + if tx == nil { + tx, err = app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + GasUsed: 0, + GasWanted: 0 +}, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error()) +} + +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + for _, msg := range msgs { + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return sdk.GasInfo{ +}, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. 
+ ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + if mode == execModeReCheck { + / if the ante handler fails on recheck, we want to remove the tx from the mempool + if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil { + return gInfo, nil, anteEvents, errors.Join(err, mempoolErr) +} + +} + +return gInfo, nil, nil, err +} + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + switch mode { + case execModeCheck: + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err +} + case execModeFinalize: + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) +} + + / Run optional postHandlers (should run regardless of the execution result). + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. 
+ postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if errPostHandler != nil { + if err == nil { + / when the msg was handled successfully, return the post handler error only + return gInfo, nil, anteEvents, errPostHandler +} + / otherwise append to the msg error so that we keep the original error code for better user experience + return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler) +} + + / we don't want runTx to panic if runMsgs has failed earlier + if result == nil { + result = &sdk.Result{ +} + +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if err == nil { + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i) +} + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) +} + +events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + + +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
+	return proto.Marshal(&sdk.TxMsgData{
+	MsgResponses: msgResponses
+})
+}
+
+func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) {
+	eventMsgName := sdk.MsgTypeURL(msg)
+	msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))
+
+	/ we set the signer attribute as the sender
+	signers, err := cdc.GetMsgV2Signers(msgV2)
+	if err != nil {
+		return nil, err
+}
+	if len(signers) > 0 && signers[0] != nil {
+		addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
+		if err != nil {
+			return nil, err
+}
+
+msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
+}
+
+	/ verify that events have no module attribute set
+	if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
+		if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
+			msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
+}
+
+}
+
+return sdk.Events{
+	msgEvent
+}.AppendEvents(events), nil
+}
+
+/ PrepareProposalVerifyTx performs transaction verification when a proposer is
+/ creating a block proposal during PrepareProposal. Any state committed to the
+/ PrepareProposal state internally will be discarded. <nil, err> will be
+/ returned if the transaction cannot be encoded. <bz, nil> will be returned if
+/ the transaction is valid, otherwise <nil, err> will be returned.
+func (app *BaseApp)
+
+PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
+	bz, err := app.txEncoder(tx)
+	if err != nil {
+		return nil, err
+}
+
+	_, _, _, err = app.runTx(execModePrepareProposal, bz, tx)
+	if err != nil {
+		return nil, err
+}
+
+return bz, nil
+}
+
+/ ProcessProposalVerifyTx performs transaction verification when receiving a
+/ block proposal during ProcessProposal.
Any state committed to the
+/ ProcessProposal state internally will be discarded. <nil, err> will be
+/ returned if the transaction cannot be decoded. <Tx, nil> will be returned if
+/ the transaction is valid, otherwise <nil, err> will be returned.
+func (app *BaseApp)
+
+ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
+	tx, err := app.txDecoder(txBz)
+	if err != nil {
+		return nil, err
+}
+
+	_, _, _, err = app.runTx(execModeProcessProposal, txBz, tx)
+	if err != nil {
+		return nil, err
+}
+
+return tx, nil
+}
+
+func (app *BaseApp)
+
+TxDecode(txBytes []byte) (sdk.Tx, error) {
+	return app.txDecoder(txBytes)
+}
+
+func (app *BaseApp)
+
+TxEncode(tx sdk.Tx) ([]byte, error) {
+	return app.txEncoder(tx)
+}
+
+func (app *BaseApp)
+
+StreamingManager()
+
+storetypes.StreamingManager {
+	return app.streamingManager
+}
+
+/ Close is called in start cmd to gracefully cleanup resources.
+func (app *BaseApp)
+
+Close()
+
+error {
+	var errs []error
+
+	/ Close app.db (opened by cosmos-sdk/server/start.go call to openDB)
+	if app.db != nil {
+		app.logger.Info("Closing application.db")
+		if err := app.db.Close(); err != nil {
+			errs = append(errs, err)
+}
+
+}
+
+	/ Close app.snapshotManager
+	/ - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate)
+	/ - which calls cosmos-sdk/server/util.go/GetSnapshotStore
+	/ - which is passed to baseapp/options.go/SetSnapshot
+	/ - to set app.snapshotManager = snapshots.NewManager
+	if app.snapshotManager != nil {
+		app.logger.Info("Closing snapshots/metadata.db")
+		if err := app.snapshotManager.Close(); err != nil {
+			errs = append(errs, err)
+}
+
+}
+
+return errors.Join(errs...)
+}
+
+/ GetBaseApp returns the pointer to itself.
+func (app *BaseApp)
+
+GetBaseApp() *BaseApp {
+	return app
+}
+```
+
+Transaction execution within `FinalizeBlock` performs the **exact same steps as `CheckTx`**, with a little caveat at step 3 and the addition of a fifth step:
+
+1.
The `AnteHandler` does **not** check that the transaction's `gas-prices` is sufficient. That is because the `min-gas-prices` value `gas-prices` is checked against is local to the node, and therefore what is enough for one full-node might not be for another. This means that the proposer can potentially include transactions for free, although they are not incentivized to do so, as they earn a bonus on the total fee of the block they propose. +2. For each `sdk.Msg` in the transaction, route to the appropriate module's Protobuf [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services). Additional _stateful_ checks are performed, and the branched multistore held in `finalizeBlockState`'s `context` is updated by the module's `keeper`. If the `Msg` service returns successfully, the branched multistore held in `context` is written to `finalizeBlockState` `CacheMultiStore`. + +During the additional fifth step outlined in (2), each read/write to the store increases the value of `GasConsumed`. You can find the default cost of each operation: + +```go expandable +package types + +import ( + + "fmt" + "math" +) + +/ Gas consumption descriptors. +const ( + GasIterNextCostFlatDesc = "IterNextFlat" + GasValuePerByteDesc = "ValuePerByte" + GasWritePerByteDesc = "WritePerByte" + GasReadPerByteDesc = "ReadPerByte" + GasWriteCostFlatDesc = "WriteFlat" + GasReadCostFlatDesc = "ReadFlat" + GasHasDesc = "Has" + GasDeleteDesc = "Delete" +) + +/ Gas measured by the SDK +type Gas = uint64 + +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} + +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. +type ErrorOutOfGas struct { + Descriptor string +} + +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. 
+type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the trasaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. 
+func (g *infiniteGasMeter)
+
+String()
+
+string {
+	return fmt.Sprintf("InfiniteGasMeter:\n  consumed: %d", g.consumed)
+}
+
+/ GasConfig defines gas cost for each operation on KVStores
+type GasConfig struct {
+	HasCost          Gas
+	DeleteCost       Gas
+	ReadCostFlat     Gas
+	ReadCostPerByte  Gas
+	WriteCostFlat    Gas
+	WriteCostPerByte Gas
+	IterNextCostFlat Gas
+}
+
+/ KVGasConfig returns a default gas config for KVStores.
+func KVGasConfig()
+
+GasConfig {
+	return GasConfig{
+	HasCost:          1000,
+	DeleteCost:       1000,
+	ReadCostFlat:     1000,
+	ReadCostPerByte:  3,
+	WriteCostFlat:    2000,
+	WriteCostPerByte: 30,
+	IterNextCostFlat: 30,
+}
+}
+
+/ TransientGasConfig returns a default gas config for TransientStores.
+func TransientGasConfig()
+
+GasConfig {
+	return GasConfig{
+	HasCost:          100,
+	DeleteCost:       100,
+	ReadCostFlat:     100,
+	ReadCostPerByte:  0,
+	WriteCostFlat:    200,
+	WriteCostPerByte: 3,
+	IterNextCostFlat: 3,
+}
+}
+```
+
+At any point, if `GasConsumed > GasWanted`, the function returns with `Code != 0` and the execution fails.
+
+Each transaction returns a response to the underlying consensus engine of type [`abci.ExecTxResult`](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci%2B%2B_methods.md#exectxresult). The response contains:
+
+- `Code (uint32)`: Response Code. `0` if successful.
+- `Data ([]byte)`: Result bytes, if any.
+- `Log (string):` The output of the application's logger. May be non-deterministic.
+- `Info (string):` Additional information. May be non-deterministic.
+- `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction.
+- `GasUsed (int64)`: Amount of gas consumed by transaction. During transaction execution, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction, and by adding gas each time a read/write to the store occurs.
+- `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (e.g.
by account). See [`event`s](/docs/sdk/next/api-reference/events-streaming/events) for more.
+- `Codespace (string)`: Namespace for the Code.
+
+#### EndBlock
+
+EndBlock is run after transaction execution completes. It allows developers to have logic executed at the end of each block. In the Cosmos SDK, the bulk of the EndBlock() method is to run the application's EndBlocker(), which mainly runs the EndBlocker() method of each of the application's modules.
+
+```go expandable
+package baseapp
+
+import (
+
+	"context"
+	"fmt"
+	"maps"
+	"math"
+	"slices"
+	"strconv"
+	"sync"
+	"github.com/cockroachdb/errors"
+	abci "github.com/cometbft/cometbft/abci/types"
+	"github.com/cometbft/cometbft/crypto/tmhash"
+	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+	dbm "github.com/cosmos/cosmos-db"
+	"github.com/cosmos/gogoproto/proto"
+	protov2 "google.golang.org/protobuf/proto"
+	"cosmossdk.io/core/header"
+	errorsmod "cosmossdk.io/errors"
+	"cosmossdk.io/log"
+	"cosmossdk.io/store"
+	storemetrics "cosmossdk.io/store/metrics"
+	"cosmossdk.io/store/snapshots"
+	storetypes "cosmossdk.io/store/types"
+	"github.com/cosmos/cosmos-sdk/baseapp/oe"
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	servertypes "github.com/cosmos/cosmos-sdk/server/types"
+	"github.com/cosmos/cosmos-sdk/telemetry"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	"github.com/cosmos/cosmos-sdk/types/mempool"
+	"github.com/cosmos/cosmos-sdk/types/msgservice"
+)
+
+type (
+	execMode uint8
+
+	/ StoreLoader defines a customizable function to control how we load the
+	/ CommitMultiStore from disk. This is useful for state migration, when
+	/ loading a datastore written with an older version of the software. In
+	/ particular, if a module changed the substore key name (or removed a substore)
+	/ between two versions of the software.
+ StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / 
ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. 
+ interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. 
If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling.This is experimental and must be enabled + / by developers. + optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. 
+func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) +} + +else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. 
+ logger.Warn("error validating merged proto registry annotations", "error", err) +} + +} + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. +func (app *BaseApp) + +GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. 
+func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. +func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. 
+func (app *BaseApp) CommitMultiStore() storetypes.CommitMultiStore {
+	return app.cms
+}
+
+// SnapshotManager returns the snapshot manager.
+// Applications use this to register extra extension snapshotters.
+func (app *BaseApp) SnapshotManager() *snapshots.Manager {
+	return app.snapshotManager
+}
+
+// LoadVersion loads the BaseApp application version. It will panic if called
+// more than once on a running baseapp.
+func (app *BaseApp) LoadVersion(version int64) error {
+	app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n")
+	err := app.cms.LoadVersion(version)
+	if err != nil {
+		return fmt.Errorf("failed to load version %d: %w", version, err)
+	}
+
+	return app.Init()
+}
+
+// LastCommitID returns the last CommitID of the multistore.
+func (app *BaseApp) LastCommitID() storetypes.CommitID {
+	return app.cms.LastCommitID()
+}
+
+// LastBlockHeight returns the last committed block height.
+func (app *BaseApp) LastBlockHeight() int64 {
+	return app.cms.LastCommitID().Version
+}
+
+// ChainID returns the chainID of the app.
+func (app *BaseApp) ChainID() string {
+	return app.chainID
+}
+
+// AnteHandler returns the AnteHandler of the app.
+func (app *BaseApp) AnteHandler() sdk.AnteHandler {
+	return app.anteHandler
+}
+
+// Mempool returns the Mempool of the app.
+func (app *BaseApp) Mempool() mempool.Mempool {
+	return app.mempool
+}
+
+// Init initializes the app. It seals the app, preventing any
+// further modifications. In addition, it validates the app against
+// the earlier provided settings. Returns an error if validation fails,
+// nil otherwise. Panics if the app is already sealed.
+func (app *BaseApp) Init() error {
+	if app.sealed {
+		panic("cannot call initFromMainStore: baseapp already sealed")
+	}
+	if app.cms == nil {
+		return errors.New("commit multi-store must not be nil")
+	}
+	emptyHeader := cmtproto.Header{
+		ChainID: app.chainID,
+	}
+
+	// needed for the export command which inits from store but never calls initchain
+	app.setState(execModeCheck, emptyHeader)
+	app.Seal()
+
+	return app.cms.GetPruning().Validate()
+}
+
+func (app *BaseApp) setMinGasPrices(gasPrices sdk.DecCoins) {
+	app.minGasPrices = gasPrices
+}
+
+func (app *BaseApp) setHaltHeight(haltHeight uint64) {
+	app.haltHeight = haltHeight
+}
+
+func (app *BaseApp) setHaltTime(haltTime uint64) {
+	app.haltTime = haltTime
+}
+
+func (app *BaseApp) setMinRetainBlocks(minRetainBlocks uint64) {
+	app.minRetainBlocks = minRetainBlocks
+}
+
+func (app *BaseApp) setInterBlockCache(cache storetypes.MultiStorePersistentCache) {
+	app.interBlockCache = cache
+}
+
+func (app *BaseApp) setTrace(trace bool) {
+	app.trace = trace
+}
+
+func (app *BaseApp) setIndexEvents(ie []string) {
+	app.indexEvents = make(map[string]struct{})
+	for _, e := range ie {
+		app.indexEvents[e] = struct{}{}
+	}
+}
+
+// Seal seals a BaseApp. It prohibits any further modifications to a BaseApp.
+func (app *BaseApp) Seal() {
+	app.sealed = true
+}
+
+// IsSealed returns true if the BaseApp is sealed and false otherwise.
+func (app *BaseApp) IsSealed() bool {
+	return app.sealed
+}
+
+// setState sets the BaseApp's state for the corresponding mode with a branched
+// multi-store (i.e. a CacheMultiStore) and a new Context with the same
+// multi-store branch, and provided header.
+func (app *BaseApp) setState(mode execMode, h cmtproto.Header) {
+	ms := app.cms.CacheMultiStore()
+	headerInfo := header.Info{
+		Height:  h.Height,
+		Time:    h.Time,
+		ChainID: h.ChainID,
+		AppHash: h.AppHash,
+	}
+	baseState := &state{
+		ms: ms,
+		ctx: sdk.NewContext(ms, h, false, app.logger).
+			WithStreamingManager(app.streamingManager).
+			WithHeaderInfo(headerInfo),
+	}
+	switch mode {
+	case execModeCheck:
+		baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices))
+		app.checkState = baseState
+	case execModePrepareProposal:
+		app.prepareProposalState = baseState
+	case execModeProcessProposal:
+		app.processProposalState = baseState
+	case execModeFinalize:
+		app.finalizeBlockState = baseState
+	default:
+		panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode))
+	}
+}
+
+// SetCircuitBreaker sets the circuit breaker for the BaseApp.
+// The circuit breaker is checked on every message execution to verify if a transaction should be executed or not.
+func (app *BaseApp) SetCircuitBreaker(cb CircuitBreaker) {
+	if app.msgServiceRouter == nil {
+		panic("cannot set circuit breaker with no msg service router set")
+	}
+	app.msgServiceRouter.SetCircuit(cb)
+}
+
+// GetConsensusParams returns the current consensus parameters from the BaseApp's
+// ParamStore. If the BaseApp has no ParamStore defined, nil is returned.
+func (app *BaseApp) GetConsensusParams(ctx sdk.Context) cmtproto.ConsensusParams {
+	if app.paramStore == nil {
+		return cmtproto.ConsensusParams{}
+	}
+
+	cp, err := app.paramStore.Get(ctx)
+	if err != nil {
+		// This could happen while migrating from v0.45/v0.46 to v0.50, we should
+		// allow it to happen so during preblock the upgrade plan can be executed
+		// and the consensus params set for the first time in the new format.
+		app.logger.Error("failed to get consensus params", "err", err)
+
+		return cmtproto.ConsensusParams{}
+	}
+
+	return cp
+}
+
+// StoreConsensusParams sets the consensus parameters to the BaseApp's param
+// store.
+//
+// NOTE: We're explicitly not storing the CometBFT app_version in the param store.
+// It's stored instead in the x/upgrade store, with its own bump logic.
+func (app *BaseApp) StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) error {
+	if app.paramStore == nil {
+		return errors.New("cannot store consensus params with no params store set")
+	}
+
+	return app.paramStore.Set(ctx, cp)
+}
+
+// AddRunTxRecoveryHandler adds custom app.runTx method panic handlers.
+func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {
+	for _, h := range handlers {
+		app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware)
+	}
+}
+
+// GetMaximumBlockGas gets the maximum gas from the consensus params. It panics
+// if maximum block gas is less than negative one and returns zero if negative
+// one.
+func (app *BaseApp) GetMaximumBlockGas(ctx sdk.Context) uint64 {
+	cp := app.GetConsensusParams(ctx)
+	if cp.Block == nil {
+		return 0
+	}
+	maxGas := cp.Block.MaxGas
+	switch {
+	case maxGas < -1:
+		panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas))
+	case maxGas == -1:
+		return 0
+	default:
+		return uint64(maxGas)
+	}
+}
+
+func (app *BaseApp) validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) error {
+	if req.Height < 1 {
+		return fmt.Errorf("invalid height: %d", req.Height)
+	}
+	lastBlockHeight := app.LastBlockHeight()
+
+	// expectedHeight holds the expected height to validate
+	var expectedHeight int64
+	if lastBlockHeight == 0 && app.initialHeight > 1 {
+		// In this case, we're validating the first block of the chain, i.e no
+		// previous commit. The height we're expecting is the initial height.
+		expectedHeight = app.initialHeight
+	} else {
+		// This case can mean two things:
+		//
+		// - Either there was already a previous commit in the store, in which
+		//   case we increment the version from there.
+		// - Or there was no previous commit, in which case we start at version 1.
+		expectedHeight = lastBlockHeight + 1
+	}
+	if req.Height != expectedHeight {
+		return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight)
+	}
+
+	return nil
+}
+
+// validateBasicTxMsgs executes basic validator calls for messages.
+func validateBasicTxMsgs(msgs []sdk.Msg) error {
+	if len(msgs) == 0 {
+		return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message")
+	}
+	for _, msg := range msgs {
+		m, ok := msg.(sdk.HasValidateBasic)
+		if !ok {
+			continue
+		}
+		if err := m.ValidateBasic(); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func (app *BaseApp) getState(mode execMode) *state {
+	switch mode {
+	case execModeFinalize:
+		return app.finalizeBlockState
+	case execModePrepareProposal:
+		return app.prepareProposalState
+	case execModeProcessProposal:
+		return app.processProposalState
+	default:
+		return app.checkState
+	}
+}
+
+func (app *BaseApp) getBlockGasMeter(ctx sdk.Context) storetypes.GasMeter {
+	if app.disableBlockGasMeter {
+		return noopGasMeter{}
+	}
+	if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 {
+		return storetypes.NewGasMeter(maxGas)
+	}
+
+	return storetypes.NewInfiniteGasMeter()
+}
+
+// retrieve the context for the tx w/ txBytes and other memoized values.
+func (app *BaseApp) getContextForTx(mode execMode, txBytes []byte) sdk.Context {
+	app.mu.Lock()
+	defer app.mu.Unlock()
+	modeState := app.getState(mode)
+	if modeState == nil {
+		panic(fmt.Sprintf("state is nil for mode %v", mode))
+	}
+	ctx := modeState.Context().
+		WithTxBytes(txBytes).
+		WithGasMeter(storetypes.NewInfiniteGasMeter())
+	// WithVoteInfos(app.voteInfos) // TODO: identify if this is needed
+
+	ctx = ctx.WithIsSigverifyTx(app.sigverifyTx)
+	ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))
+	if mode == execModeReCheck {
+		ctx = ctx.WithIsReCheckTx(true)
+	}
+	if mode == execModeSimulate {
+		ctx, _ = ctx.CacheContext()
+		ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate))
+	}
+
+	return ctx
+}
+
+// cacheTxContext returns a new context based off of the provided context with
+// a branched multi-store.
+func (app *BaseApp) cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) {
+	ms := ctx.MultiStore()
+	msCache := ms.CacheMultiStore()
+	if msCache.TracingEnabled() {
+		msCache = msCache.SetTracingContext(
+			storetypes.TraceContext(
+				map[string]any{
+					"txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)),
+				},
+			),
+		).(storetypes.CacheMultiStore)
+	}
+
+	return ctx.WithMultiStore(msCache), msCache
+}
+
+func (app *BaseApp) preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) {
+	var events []abci.Event
+	if app.preBlocker != nil {
+		ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager())
+		rsp, err := app.preBlocker(ctx, req)
+		if err != nil {
+			return nil, err
+		}
+		// rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed
+		// write the consensus parameters in store to context
+		if rsp.ConsensusParamsChanged {
+			ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))
+			// GasMeter must be set after we get a context with updated consensus params.
+			gasMeter := app.getBlockGasMeter(ctx)
+			ctx = ctx.WithBlockGasMeter(gasMeter)
+			app.finalizeBlockState.SetContext(ctx)
+		}
+
+		events = ctx.EventManager().ABCIEvents()
+	}
+
+	return events, nil
+}
+
+func (app *BaseApp) beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) {
+	var (
+		resp sdk.BeginBlock
+		err  error
+	)
+	if app.beginBlocker != nil {
+		resp, err = app.beginBlocker(app.finalizeBlockState.Context())
+		if err != nil {
+			return resp, err
+		}
+
+		// append BeginBlock attributes to all events in the BeginBlock response
+		for i, event := range resp.Events {
+			resp.Events[i].Attributes = append(
+				event.Attributes,
+				abci.EventAttribute{Key: "mode", Value: "BeginBlock"},
+			)
+		}
+		resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents)
+	}
+
+	return resp, nil
+}
+
+func (app *BaseApp) deliverTx(tx []byte) *abci.ExecTxResult {
+	gInfo := sdk.GasInfo{}
+	resultStr := "successful"
+
+	var resp *abci.ExecTxResult
+
+	defer func() {
+		telemetry.IncrCounter(1, "tx", "count")
+		telemetry.IncrCounter(1, "tx", resultStr)
+		telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used")
+		telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted")
+	}()
+
+	gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil)
+	if err != nil {
+		resultStr = "failed"
+		resp = sdkerrors.ResponseExecTxResultWithEvents(
+			err,
+			gInfo.GasWanted,
+			gInfo.GasUsed,
+			sdk.MarkEventsToIndex(anteEvents, app.indexEvents),
+			app.trace,
+		)
+
+		return resp
+	}
+
+	resp = &abci.ExecTxResult{
+		GasWanted: int64(gInfo.GasWanted),
+		GasUsed:   int64(gInfo.GasUsed),
+		Log:       result.Log,
+		Data:      result.Data,
+		Events:    sdk.MarkEventsToIndex(result.Events, app.indexEvents),
+	}
+
+	return resp
+}
+
+// endBlock is an application-defined function that is called after transactions
+// have been processed in FinalizeBlock.
+func (app *BaseApp) endBlock(_ context.Context) (sdk.EndBlock, error) {
+	var endblock sdk.EndBlock
+	if app.endBlocker != nil {
+		eb, err := app.endBlocker(app.finalizeBlockState.Context())
+		if err != nil {
+			return endblock, err
+		}
+
+		// append EndBlock attributes to all events in the EndBlock response
+		for i, event := range eb.Events {
+			eb.Events[i].Attributes = append(
+				event.Attributes,
+				abci.EventAttribute{Key: "mode", Value: "EndBlock"},
+			)
+		}
+		eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents)
+		endblock = eb
+	}
+
+	return endblock, nil
+}
+
+// runTx processes a transaction within a given execution mode, encoded transaction
+// bytes, and the decoded transaction itself. All state transitions occur through
+// a cached Context depending on the mode provided. State only gets persisted
+// if all messages get executed successfully and the execution mode is DeliverTx.
+// Note, gas execution info is always returned. A reference to a Result is
+// returned if the tx does not run out of gas and if all the messages are valid
+// and execute successfully. An error is returned otherwise.
+// Both txBytes and the decoded tx are passed to runTx to avoid the state machine
+// encoding the tx and decoding the transaction twice.
+// Passing the decoded tx to runTx is optional; it will be decoded if the tx is nil.
+func (app *BaseApp) runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) {
+	// NOTE: GasWanted should be returned by the AnteHandler. GasUsed is
+	// determined by the GasMeter. We need access to the context to get the gas
+	// meter, so we initialize upfront.
+	var gasWanted uint64
+	ctx := app.getContextForTx(mode, txBytes)
+	ms := ctx.MultiStore()
+
+	// only run the tx if there is block gas remaining
+	if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() {
+		return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx")
+	}
+
+	defer func() {
+		if r := recover(); r != nil {
+			recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)
+			err, result = processRecovery(r, recoveryMW), nil
+			ctx.Logger().Error("panic recovered in runTx", "err", err)
+		}
+
+		gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}
+	}()
+	blockGasConsumed := false
+
+	// consumeBlockGas makes sure block gas is consumed at most once. It must
+	// happen after tx processing, and must be executed even if tx processing
+	// fails. Hence, its execution is deferred.
+	consumeBlockGas := func() {
+		if !blockGasConsumed {
+			blockGasConsumed = true
+			ctx.BlockGasMeter().ConsumeGas(
+				ctx.GasMeter().GasConsumedToLimit(), "block gas meter",
+			)
+		}
+	}
+
+	// If BlockGasMeter() panics it will be caught by the above recover and will
+	// return an error - in any case BlockGasMeter will consume gas past the limit.
+	//
+	// NOTE: consumeBlockGas must exist in a separate defer function from the
+	// general deferred recovery function to recover from consumeBlockGas as it'll
+	// be executed first (deferred statements are executed as stack).
+	if mode == execModeFinalize {
+		defer consumeBlockGas()
+	}
+
+	// if the transaction is not decoded, decode it here
+	if tx == nil {
+		tx, err = app.txDecoder(txBytes)
+		if err != nil {
+			return sdk.GasInfo{GasUsed: 0, GasWanted: 0}, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error())
+		}
+	}
+	msgs := tx.GetMsgs()
+	if err := validateBasicTxMsgs(msgs); err != nil {
+		return sdk.GasInfo{}, nil, nil, err
+	}
+	for _, msg := range msgs {
+		handler := app.msgServiceRouter.Handler(msg)
+		if handler == nil {
+			return sdk.GasInfo{}, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg)
+		}
+	}
+	if app.anteHandler != nil {
+		var (
+			anteCtx sdk.Context
+			msCache storetypes.CacheMultiStore
+		)
+
+		// Branch context before AnteHandler call in case it aborts.
+		// This is required for both CheckTx and DeliverTx.
+		// Ref: https://github.com/cosmos/cosmos-sdk/issues/2772
+		//
+		// NOTE: Alternatively, we could require that AnteHandler ensures that
+		// writes do not happen if aborted/failed. This may have some
+		// performance benefits, but it'll be more difficult to get right.
+		anteCtx, msCache = app.cacheTxContext(ctx, txBytes)
+		anteCtx = anteCtx.WithEventManager(sdk.NewEventManager())
+		newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate)
+		if !newCtx.IsZero() {
+			// At this point, newCtx.MultiStore() is a store branch, or something else
+			// replaced by the AnteHandler. We want the original multistore.
+			//
+			// Also, in the case of the tx aborting, we need to track gas consumed via
+			// the instantiated gas meter in the AnteHandler, so we update the context
+			// prior to returning.
+			ctx = newCtx.WithMultiStore(ms)
+		}
+		events := ctx.EventManager().Events()
+
+		// GasMeter expected to be set in AnteHandler
+		gasWanted = ctx.GasMeter().Limit()
+		if err != nil {
+			if mode == execModeReCheck {
+				// if the ante handler fails on recheck, we want to remove the tx from the mempool
+				if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil {
+					return gInfo, nil, anteEvents, errors.Join(err, mempoolErr)
+				}
+			}
+
+			return gInfo, nil, nil, err
+		}
+
+		msCache.Write()
+		anteEvents = events.ToABCIEvents()
+	}
+	switch mode {
+	case execModeCheck:
+		err = app.mempool.Insert(ctx, tx)
+		if err != nil {
+			return gInfo, nil, anteEvents, err
+		}
+	case execModeFinalize:
+		err = app.mempool.Remove(tx)
+		if err != nil && !errors.Is(err, mempool.ErrTxNotFound) {
+			return gInfo, nil, anteEvents,
+				fmt.Errorf("failed to remove tx from mempool: %w", err)
+		}
+	}
+
+	// Create a new Context based off of the existing Context with a MultiStore branch
+	// in case message processing fails. At this point, the MultiStore
+	// is a branch of a branch.
+	runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)
+
+	// Attempt to execute all messages and only update state if all messages pass
+	// and we're in DeliverTx. Note, runMsgs will never return a reference to a
+	// Result if any single message fails or does not have a registered Handler.
+	msgsV2, err := tx.GetMsgsV2()
+	if err == nil {
+		result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode)
+	}
+
+	// Run optional postHandlers (should run regardless of the execution result).
+	//
+	// Note: If the postHandler fails, we also revert the runMsgs state.
+	if app.postHandler != nil {
+		// The runMsgCtx context currently contains events emitted by the ante handler.
+		// We clear this to correctly order events without duplicates.
+		// Note that the state is still preserved.
+		postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager())
+		newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil)
+		if errPostHandler != nil {
+			if err == nil {
+				// when the msg was handled successfully, return the post handler error only
+				return gInfo, nil, anteEvents, errPostHandler
+			}
+			// otherwise append to the msg error so that we keep the original error code for better user experience
+			return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler)
+		}
+
+		// we don't want runTx to panic if runMsgs has failed earlier
+		if result == nil {
+			result = &sdk.Result{}
+		}
+		result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...)
+	}
+	if err == nil {
+		if mode == execModeFinalize {
+			// When block gas exceeds, it'll panic and won't commit the cached store.
+			consumeBlockGas()
+			msCache.Write()
+		}
+		if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) {
+			// append the events in the order of occurrence
+			result.Events = append(anteEvents, result.Events...)
+		}
+	}
+
+	return gInfo, result, anteEvents, err
+}
+
+// runMsgs iterates through a list of messages and executes them with the provided
+// Context and execution mode. Messages will only be executed during simulation
+// and DeliverTx. An error is returned if any single message fails or if a
+// Handler does not exist for a given message route. Otherwise, a reference to a
+// Result is returned. The caller must not commit state if an error is returned.
+func (app *BaseApp) runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) {
+	events := sdk.EmptyEvents()
+	var msgResponses []*codectypes.Any
+
+	// NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter.
+	for i, msg := range msgs {
+		if mode != execModeFinalize && mode != execModeSimulate {
+			break
+		}
+		handler := app.msgServiceRouter.Handler(msg)
+		if handler == nil {
+			return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg)
+		}
+
+		// ADR 031 request type routing
+		msgResult, err := handler(ctx, msg)
+		if err != nil {
+			return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i)
+		}
+
+		// create message events
+		msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i])
+		if err != nil {
+			return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i)
+		}
+
+		// append message events and data
+		//
+		// Note: Each message result's data must be length-prefixed in order to
+		// separate each result.
+		for j, event := range msgEvents {
+			// append message index to all events
+			msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i)))
+		}
+		events = events.AppendEvents(msgEvents)
+
+		// Each individual sdk.Result that went through the MsgServiceRouter
+		// (which should represent 99% of the Msgs now, since everyone should
+		// be using protobuf Msgs) has exactly one Msg response, set inside
+		// `WrapServiceResult`. We take that Msg response, and aggregate it
+		// into an array.
+		if len(msgResult.MsgResponses) > 0 {
+			msgResponse := msgResult.MsgResponses[0]
+			if msgResponse == nil {
+				return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg))
+			}
+			msgResponses = append(msgResponses, msgResponse)
+		}
+	}
+
+	data, err := makeABCIData(msgResponses)
+	if err != nil {
+		return nil, errorsmod.Wrap(err, "failed to marshal tx data")
+	}
+
+	return &sdk.Result{
+		Data:         data,
+		Events:       events.ToABCIEvents(),
+		MsgResponses: msgResponses,
+	}, nil
+}
+
+// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx.
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
+	return proto.Marshal(&sdk.TxMsgData{
+		MsgResponses: msgResponses,
+	})
+}
+
+func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) {
+	eventMsgName := sdk.MsgTypeURL(msg)
+	msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))
+
+	// we set the signer attribute as the sender
+	signers, err := cdc.GetMsgV2Signers(msgV2)
+	if err != nil {
+		return nil, err
+	}
+	if len(signers) > 0 && signers[0] != nil {
+		addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
+		if err != nil {
+			return nil, err
+		}
+		msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
+	}
+
+	// verify that events have no module attribute set
+	if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
+		if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
+			msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
+		}
+	}
+
+	return sdk.Events{msgEvent}.AppendEvents(events), nil
+}
+
+// PrepareProposalVerifyTx performs transaction verification when a proposer is
+// creating a block proposal during PrepareProposal. Any state committed to the
+// PrepareProposal state internally will be discarded. An error is returned if
+// the transaction cannot be encoded or is invalid; otherwise the encoded
+// transaction bytes are returned.
+func (app *BaseApp) PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
+	bz, err := app.txEncoder(tx)
+	if err != nil {
+		return nil, err
+	}
+
+	_, _, _, err = app.runTx(execModePrepareProposal, bz, tx)
+	if err != nil {
+		return nil, err
+	}
+
+	return bz, nil
+}
+
+// ProcessProposalVerifyTx performs transaction verification when receiving a
+// block proposal during ProcessProposal.
+// Any state committed to the ProcessProposal state internally will be
+// discarded. An error is returned if the transaction cannot be decoded or is
+// invalid; otherwise the decoded transaction is returned.
+func (app *BaseApp) ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
+	tx, err := app.txDecoder(txBz)
+	if err != nil {
+		return nil, err
+	}
+
+	_, _, _, err = app.runTx(execModeProcessProposal, txBz, tx)
+	if err != nil {
+		return nil, err
+	}
+
+	return tx, nil
+}
+
+func (app *BaseApp) TxDecode(txBytes []byte) (sdk.Tx, error) {
+	return app.txDecoder(txBytes)
+}
+
+func (app *BaseApp) TxEncode(tx sdk.Tx) ([]byte, error) {
+	return app.txEncoder(tx)
+}
+
+func (app *BaseApp) StreamingManager() storetypes.StreamingManager {
+	return app.streamingManager
+}
+
+// Close is called in start cmd to gracefully cleanup resources.
+func (app *BaseApp) Close() error {
+	var errs []error
+
+	// Close app.db (opened by cosmos-sdk/server/start.go call to openDB)
+	if app.db != nil {
+		app.logger.Info("Closing application.db")
+		if err := app.db.Close(); err != nil {
+			errs = append(errs, err)
+		}
+	}
+
+	// Close app.snapshotManager
+	// - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate)
+	// - which calls cosmos-sdk/server/util.go/GetSnapshotStore
+	// - which is passed to baseapp/options.go/SetSnapshot
+	// - to set app.snapshotManager = snapshots.NewManager
+	if app.snapshotManager != nil {
+		app.logger.Info("Closing snapshots/metadata.db")
+		if err := app.snapshotManager.Close(); err != nil {
+			errs = append(errs, err)
+		}
+	}
+
+	return errors.Join(errs...)
+}
+
+// GetBaseApp returns the pointer to itself.
+func (app *BaseApp) GetBaseApp() *BaseApp {
+	return app
+}
+```
+
+### Commit
+
+The [`Commit` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after the full node has received _precommits_ from 2/3+ of validators (weighted by voting power). On the `BaseApp` end, the `Commit(res abci.CommitResponse)` function is implemented to commit all the valid state transitions that occurred during `FinalizeBlock` and to reset state for the next block.
+
+To commit state transitions, the `Commit` function calls `Write()` on `finalizeBlockState.ms`, where `finalizeBlockState.ms` is a branched multistore of the main store `app.cms`. Then, the `Commit` function sets `checkState` to the latest header (obtained from `finalizeBlockState.ctx.BlockHeader`) and `finalizeBlockState` to `nil`.
+
+Finally, `Commit` returns the hash of the commitment of `app.cms` back to the underlying consensus engine. This hash is used as a reference in the header of the next block.
+
+### Info
+
+The [`Info` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is a simple query from the underlying consensus engine, notably used to sync the latter with the application during a handshake that happens on startup. When called, the `Info(res abci.InfoResponse)` function from `BaseApp` returns the application's name, version and the hash of the last commit of `app.cms`.
+
+### Query
+
+The [`Query` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is used to serve queries received from the underlying consensus engine, including queries received via RPC like CometBFT RPC.
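As a rough illustration of the path-based dispatch described in this section, the following standalone sketch splits a query path into its components so the first element can select a handler. This is illustrative only, not the SDK's own helper:

```go
package main

import (
	"fmt"
	"strings"
)

// splitQueryPath splits an ABCI query path on "/" so the first element
// can select a dispatcher ("app", "p2p", "store" or "custom").
// Illustrative helper only; not the SDK's actual implementation.
func splitQueryPath(path string) []string {
	path = strings.Trim(path, "/")
	if path == "" {
		return nil
	}
	return strings.Split(path, "/")
}

func main() {
	split := splitQueryPath("/store/bank/key")
	// split[0] is the query category; the remaining elements are
	// passed on to the selected handler.
	fmt.Println(split[0], len(split))
}
```

With the path `/store/bank/key`, the category is `store` and the rest of the path identifies the substore and operation.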
It used to be the main entrypoint for building interfaces with the application, but with the introduction of [gRPC queries](/docs/sdk/next/documentation/module-system/query-services) in Cosmos SDK v0.40, its usage is more limited. The application must respect a few rules when implementing the `Query` method, which are outlined [here](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#query).
+
+Each CometBFT `query` comes with a `path`, a `string` that denotes what to query. If the `path` matches a gRPC fully-qualified service method, then `BaseApp` defers the query to the `grpcQueryRouter` and lets it handle it as explained [above](#grpc-query-router). Otherwise, the `path` represents a query that is not (yet) handled by the gRPC router. `BaseApp` splits the `path` string with the `/` delimiter. By convention, the first element of the split string (`split[0]`) contains the category of the `query` (`app`, `p2p`, `store` or `custom`). The `BaseApp` implementation of the `Query(req abci.QueryRequest)` method is a simple dispatcher serving these main categories of queries:
+
+- Application-related queries like querying the application's version, which are served via the `handleQueryApp` method.
+- Direct queries to the multistore, which are served by the `handleQueryStore` method. These direct queries are different from custom queries that go through `app.queryRouter`, and are mainly used by third-party service providers like block explorers.
+- P2P queries, which are served via the `handleQueryP2P` method. These queries return either `app.addrPeerFilter` or `app.ipPeerFilter`, which contain the list of peers filtered by address or IP respectively. These lists are first initialized via `options` in `BaseApp`'s [constructor](#constructor).
+
+### ExtendVote
+
+`ExtendVote` allows an application to extend a pre-commit vote with arbitrary data.
This process does NOT have to be deterministic, and the data returned can be unique to the validator process.
+
+In the Cosmos SDK this is implemented as a NoOp:
+
+```go expandable
+package baseapp
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	"slices"
+
+	"github.com/cockroachdb/errors"
+	abci "github.com/cometbft/cometbft/abci/types"
+	cryptoenc "github.com/cometbft/cometbft/crypto/encoding"
+	cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto"
+	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+	cmttypes "github.com/cometbft/cometbft/types"
+	protoio "github.com/cosmos/gogoproto/io"
+	"github.com/cosmos/gogoproto/proto"
+
+	"cosmossdk.io/core/comet"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/types/mempool"
+)
+
+type (
+	// ValidatorStore defines the interface contract required for verifying vote
+	// extension signatures. Typically, this will be implemented by the x/staking
+	// module, which has knowledge of the CometBFT public key.
+	ValidatorStore interface {
+		GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error)
+	}
+
+	// GasTx defines the contract that a transaction with a gas limit must implement.
+	GasTx interface {
+		GetGas() uint64
+	}
+)
+
+// ValidateVoteExtensions defines a helper function for verifying vote extension
+// signatures that may be passed or manually injected into a block proposal from
+// a proposer in PrepareProposal. It returns an error if any signature is invalid,
+// if unexpected vote extensions and/or signatures are found, or if less than 2/3
+// power is received.
+// NOTE: From v0.50.5 `currentHeight` and `chainID` arguments are ignored for fixing an issue.
+// They will be removed from the function in v0.51+.
+func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + _ int64, + _ string, + extCommit abci.ExtendedCommitInfo, +) + +error { + / Get values from context + cp := ctx.ConsensusParams() + currentHeight := ctx.HeaderInfo().Height + chainID := ctx.HeaderInfo().ChainID + commitInfo := ctx.CometInfo().GetLastCommit() + + / Check that both extCommit + commit are ordered in accordance with vp/address. + if err := validateExtendedCommitAgainstLastCommit(extCommit, commitInfo); err != nil { + return err +} + + / Start checking vote extensions only **after** the vote extensions enable + / height, because when `currentHeight == VoteExtensionsEnableHeight` + / PrepareProposal doesn't get any vote extensions in its request. + extsEnabled := cp.Abci != nil && currentHeight > cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var ( + / Total voting power of all vote extensions. + totalVP int64 + / Total voting power of all validators that submitted valid vote extensions. + sumVP int64 + ) + for _, vote := range extCommit.Votes { + totalVP += vote.Validator.Power + + / Only check + include power if the vote is a commit vote. There must be super-majority, otherwise the + / previous block (the block the vote is for) + +could not have been committed. 
+ if vote.BlockIdFlag != cmtproto.BlockIDFlagCommit { + continue +} + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := sdk.ConsAddress(vote.Validator.Address) + +pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP += vote.Validator.Power +} + + / This check is probably unnecessary, but better safe than sorry. + if totalVP <= 0 { + return fmt.Errorf("total voting power must be positive, got: %d", totalVP) +} + + / If the sum of the voting power has not reached (2/3 + 1) + +we need to error. 
+ if requiredVP := ((totalVP * 2) / 3) + 1; sumVP < requiredVP { + return fmt.Errorf( + "insufficient cumulative voting power received to verify vote extensions; got: %d, expected: >=%d", + sumVP, requiredVP, + ) +} + +return nil +} + +/ validateExtendedCommitAgainstLastCommit validates an ExtendedCommitInfo against a LastCommit. Specifically, +/ it checks that the ExtendedCommit + LastCommit (for the same height), are consistent with each other + that +/ they are ordered correctly (by voting power) + +in accordance with +/ [comet](https://github.com/cometbft/cometbft/blob/4ce0277b35f31985bbf2c25d3806a184a4510010/types/validator_set.go#L784). +func validateExtendedCommitAgainstLastCommit(ec abci.ExtendedCommitInfo, lc comet.CommitInfo) + +error { + / check that the rounds are the same + if ec.Round != lc.Round() { + return fmt.Errorf("extended commit round %d does not match last commit round %d", ec.Round, lc.Round()) +} + + / check that the # of votes are the same + if len(ec.Votes) != lc.Votes().Len() { + return fmt.Errorf("extended commit votes length %d does not match last commit votes length %d", len(ec.Votes), lc.Votes().Len()) +} + + / check sort order of extended commit votes + if !slices.IsSortedFunc(ec.Votes, func(vote1, vote2 abci.ExtendedVoteInfo) + +int { + if vote1.Validator.Power == vote2.Validator.Power { + return bytes.Compare(vote1.Validator.Address, vote2.Validator.Address) / addresses sorted in ascending order (used to break vp conflicts) +} + +return -int(vote1.Validator.Power - vote2.Validator.Power) / vp sorted in descending order +}) { + return fmt.Errorf("extended commit votes are not sorted by voting power") +} + addressCache := make(map[string]struct{ +}, len(ec.Votes)) + / check that consistency between LastCommit and ExtendedCommit + for i, vote := range ec.Votes { + / cache addresses to check for duplicates + if _, ok := addressCache[string(vote.Validator.Address)]; ok { + return fmt.Errorf("extended commit vote address %X is 
duplicated", vote.Validator.Address) +} + +addressCache[string(vote.Validator.Address)] = struct{ +}{ +} + if !bytes.Equal(vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) { + return fmt.Errorf("extended commit vote address %X does not match last commit vote address %X", vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) +} + if vote.Validator.Power != lc.Votes().Get(i).Validator().Power() { + return fmt.Errorf("extended commit vote power %d does not match last commit vote power %d", vote.Validator.Power, lc.Votes().Get(i).Validator().Power()) +} + +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) + +TxDecode(txBz []byte) (sdk.Tx, error) + +TxEncode(tx sdk.Tx) ([]byte, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier + txSelector TxSelector + signerExtAdapter mempool.SignerExtractionAdapter +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) *DefaultProposalHandler { + return &DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, + txSelector: NewDefaultTxSelector(), + signerExtAdapter: mempool.NewDefaultSignerExtractionAdapter(), +} +} + +/ SetTxSelector sets the TxSelector function on the DefaultProposalHandler. +func (h *DefaultProposalHandler) + +SetTxSelector(ts TxSelector) { + h.txSelector = ts +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. 
Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h *DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + var maxBlockGas uint64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = uint64(b.MaxGas) +} + +defer h.txSelector.Clear() + + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + / + / Note, we still need to ensure the transactions returned respect req.MaxTxBytes. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + for _, txBz := range req.Txs { + tx, err := h.txVerifier.TxDecode(txBz) + if err != nil { + return nil, err +} + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, tx, txBz) + if stop { + break +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} + selectedTxsSignersSeqs := make(map[string]uint64) + +var ( + resError error + selectedTxsNums int + invalidTxs []sdk.Tx / invalid txs to be removed out of the loop to avoid dead lock + ) + +mempool.SelectBy(ctx, h.mempool, req.Txs, func(memTx sdk.Tx) + +bool { + unorderedTx, ok := memTx.(sdk.TxWithUnordered) + isUnordered := ok && unorderedTx.GetUnordered() + txSignersSeqs := make(map[string]uint64) + + / if the tx is unordered, we don't need to check the sequence, we just add it + if !isUnordered { + signerData, err := h.signerExtAdapter.GetSigners(memTx) + if err != nil { + / propagate the error to the caller + resError = err + return false +} + + / If the signers aren't in selectedTxsSignersSeqs then we haven't seen them before + / so we add them and continue given that we don't need to check the sequence. + shouldAdd := true + for _, signer := range signerData { + seq, ok := selectedTxsSignersSeqs[signer.Signer.String()] + if !ok { + txSignersSeqs[signer.Signer.String()] = signer.Sequence + continue +} + + / If we have seen this signer before in this block, we must make + / sure that the current sequence is seq+1; otherwise is invalid + / and we skip it. + if seq+1 != signer.Sequence { + shouldAdd = false + break +} + +txSignersSeqs[signer.Signer.String()] = signer.Sequence +} + if !shouldAdd { + return true +} + +} + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. 
+ txBz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + invalidTxs = append(invalidTxs, memTx) +} + +else { + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, memTx, txBz) + if stop { + return false +} + txsLen := len(h.txSelector.SelectedTxs(ctx)) + / If the tx is unordered, we don't need to update the sender sequence. + if !isUnordered { + for sender, seq := range txSignersSeqs { + / If txsLen != selectedTxsNums is true, it means that we've + / added a new tx to the selected txs, so we need to update + / the sequence of the sender. + if txsLen != selectedTxsNums { + selectedTxsSignersSeqs[sender] = seq +} + +else if _, ok := selectedTxsSignersSeqs[sender]; !ok { + / The transaction hasn't been added but it passed the + / verification, so we know that the sequence is correct. + / So we set this sender's sequence to seq-1, in order + / to avoid unnecessary calls to PrepareProposalVerifyTx. + selectedTxsSignersSeqs[sender] = seq - 1 +} + +} + +} + +selectedTxsNums = txsLen +} + +return true +}) + if resError != nil { + return nil, resError +} + for _, tx := range invalidTxs { + err := h.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return nil, err +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. 
+func (h *DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + var totalTxGas uint64 + + var maxBlockGas int64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = b.MaxGas +} + for _, txBytes := range req.Txs { + tx, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if maxBlockGas > 0 { + gasTx, ok := tx.(GasTx) + if ok { + totalTxGas += gasTx.GetGas() +} + if totalTxGas > uint64(maxBlockGas) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. 
+func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. +func NoOpVerifyVoteExtensionHandler() + +sdk.VerifyVoteExtensionHandler { + return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} + +/ TxSelector defines a helper type that assists in selecting transactions during +/ mempool transaction selection in PrepareProposal. It keeps track of the total +/ number of bytes and total gas of the selected transactions. It also keeps +/ track of the selected transactions themselves. +type TxSelector interface { + / SelectedTxs should return a copy of the selected transactions. + SelectedTxs(ctx context.Context) [][]byte + + / Clear should clear the TxSelector, nulling out all relevant fields. + Clear() + + / SelectTxForProposal should attempt to select a transaction for inclusion in + / a proposal based on inclusion criteria defined by the TxSelector. It must + / return if the caller should halt the transaction selection loop + / (typically over a mempool) + +or otherwise. 
+ SelectTxForProposal(ctx context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool +} + +type defaultTxSelector struct { + totalTxBytes uint64 + totalTxGas uint64 + selectedTxs [][]byte +} + +func NewDefaultTxSelector() + +TxSelector { + return &defaultTxSelector{ +} +} + +func (ts *defaultTxSelector) + +SelectedTxs(_ context.Context) [][]byte { + txs := make([][]byte, len(ts.selectedTxs)) + +copy(txs, ts.selectedTxs) + +return txs +} + +func (ts *defaultTxSelector) + +Clear() { + ts.totalTxBytes = 0 + ts.totalTxGas = 0 + ts.selectedTxs = nil +} + +func (ts *defaultTxSelector) + +SelectTxForProposal(_ context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool { + txSize := uint64(cmttypes.ComputeProtoSizeForTxs([]cmttypes.Tx{ + txBz +})) + +var txGasLimit uint64 + if memTx != nil { + if gasTx, ok := memTx.(GasTx); ok { + txGasLimit = gasTx.GetGas() +} + +} + + / only add the transaction to the proposal if we have enough capacity + if (txSize + ts.totalTxBytes) <= maxTxBytes { + / If there is a max block gas limit, add the tx only if the limit has + / not been met. + if maxBlockGas > 0 { + if (txGasLimit + ts.totalTxGas) <= maxBlockGas { + ts.totalTxGas += txGasLimit + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + +else { + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + + / check if we've reached capacity; if so, we cannot select any more transactions + return ts.totalTxBytes >= maxTxBytes || (maxBlockGas > 0 && (ts.totalTxGas >= maxBlockGas)) +} +``` + +### VerifyVoteExtension + +`VerifyVoteExtension` allows an application to verify that the data returned by `ExtendVote` is valid. This process MUST be deterministic. Moreover, the value of ResponseVerifyVoteExtension.status MUST exclusively depend on the parameters passed in the call to RequestVerifyVoteExtension, and the last committed Application state. 
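
The acceptance rule at the heart of the `ValidateVoteExtensions` helper shown above is the `requiredVP := ((totalVP * 2) / 3) + 1` check. As a minimal sketch, the standalone function below isolates that arithmetic (the name `requiredPower` is ours for illustration, not part of the SDK):

```go
package main

import "fmt"

// requiredPower mirrors the super-majority threshold used by
// ValidateVoteExtensions: vote extensions verify only if validators
// holding at least (2/3 of total power) + 1 units signed them.
// Integer division rounds down, so the "+ 1" pushes the bound
// strictly above two thirds.
func requiredPower(totalVP int64) int64 {
	return ((totalVP * 2) / 3) + 1
}

func main() {
	total := int64(100)
	need := requiredPower(total)

	fmt.Println(need)              // 67
	fmt.Println(int64(66) >= need) // false: one unit short of the threshold
	fmt.Println(int64(67) >= need) // true
}
```

Because the helper errors when `sumVP < requiredVP`, exactly two thirds of the power is not enough; the extra unit is what makes the check a strict super-majority.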
+ +In the Cosmos-SDK this is implemented as a NoOp: + +```go expandable +package baseapp + +import ( + + "bytes" + "context" + "fmt" + "slices" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + "cosmossdk.io/core/comet" + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +type ( + / ValidatorStore defines the interface contract required for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. + ValidatorStore interface { + GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error) +} + + / GasTx defines the contract that a transaction with a gas limit must implement. + GasTx interface { + GetGas() + +uint64 +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in PrepareProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +/ NOTE: From v0.50.5 `currentHeight` and `chainID` arguments are ignored for fixing an issue. +/ They will be removed from the function in v0.51+. 
+func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + _ int64, + _ string, + extCommit abci.ExtendedCommitInfo, +) + +error { + / Get values from context + cp := ctx.ConsensusParams() + currentHeight := ctx.HeaderInfo().Height + chainID := ctx.HeaderInfo().ChainID + commitInfo := ctx.CometInfo().GetLastCommit() + + / Check that both extCommit + commit are ordered in accordance with vp/address. + if err := validateExtendedCommitAgainstLastCommit(extCommit, commitInfo); err != nil { + return err +} + + / Start checking vote extensions only **after** the vote extensions enable + / height, because when `currentHeight == VoteExtensionsEnableHeight` + / PrepareProposal doesn't get any vote extensions in its request. + extsEnabled := cp.Abci != nil && currentHeight > cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var ( + / Total voting power of all vote extensions. + totalVP int64 + / Total voting power of all validators that submitted valid vote extensions. + sumVP int64 + ) + for _, vote := range extCommit.Votes { + totalVP += vote.Validator.Power + + / Only check + include power if the vote is a commit vote. There must be super-majority, otherwise the + / previous block (the block the vote is for) + +could not have been committed. 
+ if vote.BlockIdFlag != cmtproto.BlockIDFlagCommit { + continue +} + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := sdk.ConsAddress(vote.Validator.Address) + +pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP += vote.Validator.Power +} + + / This check is probably unnecessary, but better safe than sorry. + if totalVP <= 0 { + return fmt.Errorf("total voting power must be positive, got: %d", totalVP) +} + + / If the sum of the voting power has not reached (2/3 + 1) + +we need to error. 
+ if requiredVP := ((totalVP * 2) / 3) + 1; sumVP < requiredVP { + return fmt.Errorf( + "insufficient cumulative voting power received to verify vote extensions; got: %d, expected: >=%d", + sumVP, requiredVP, + ) +} + +return nil +} + +/ validateExtendedCommitAgainstLastCommit validates an ExtendedCommitInfo against a LastCommit. Specifically, +/ it checks that the ExtendedCommit + LastCommit (for the same height), are consistent with each other + that +/ they are ordered correctly (by voting power) + +in accordance with +/ [comet](https://github.com/cometbft/cometbft/blob/4ce0277b35f31985bbf2c25d3806a184a4510010/types/validator_set.go#L784). +func validateExtendedCommitAgainstLastCommit(ec abci.ExtendedCommitInfo, lc comet.CommitInfo) + +error { + / check that the rounds are the same + if ec.Round != lc.Round() { + return fmt.Errorf("extended commit round %d does not match last commit round %d", ec.Round, lc.Round()) +} + + / check that the # of votes are the same + if len(ec.Votes) != lc.Votes().Len() { + return fmt.Errorf("extended commit votes length %d does not match last commit votes length %d", len(ec.Votes), lc.Votes().Len()) +} + + / check sort order of extended commit votes + if !slices.IsSortedFunc(ec.Votes, func(vote1, vote2 abci.ExtendedVoteInfo) + +int { + if vote1.Validator.Power == vote2.Validator.Power { + return bytes.Compare(vote1.Validator.Address, vote2.Validator.Address) / addresses sorted in ascending order (used to break vp conflicts) +} + +return -int(vote1.Validator.Power - vote2.Validator.Power) / vp sorted in descending order +}) { + return fmt.Errorf("extended commit votes are not sorted by voting power") +} + addressCache := make(map[string]struct{ +}, len(ec.Votes)) + / check that consistency between LastCommit and ExtendedCommit + for i, vote := range ec.Votes { + / cache addresses to check for duplicates + if _, ok := addressCache[string(vote.Validator.Address)]; ok { + return fmt.Errorf("extended commit vote address %X is 
duplicated", vote.Validator.Address) +} + +addressCache[string(vote.Validator.Address)] = struct{ +}{ +} + if !bytes.Equal(vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) { + return fmt.Errorf("extended commit vote address %X does not match last commit vote address %X", vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) +} + if vote.Validator.Power != lc.Votes().Get(i).Validator().Power() { + return fmt.Errorf("extended commit vote power %d does not match last commit vote power %d", vote.Validator.Power, lc.Votes().Get(i).Validator().Power()) +} + +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) + +TxDecode(txBz []byte) (sdk.Tx, error) + +TxEncode(tx sdk.Tx) ([]byte, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier + txSelector TxSelector + signerExtAdapter mempool.SignerExtractionAdapter +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) *DefaultProposalHandler { + return &DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, + txSelector: NewDefaultTxSelector(), + signerExtAdapter: mempool.NewDefaultSignerExtractionAdapter(), +} +} + +/ SetTxSelector sets the TxSelector function on the DefaultProposalHandler. +func (h *DefaultProposalHandler) + +SetTxSelector(ts TxSelector) { + h.txSelector = ts +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. 
Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h *DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + var maxBlockGas uint64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = uint64(b.MaxGas) +} + +defer h.txSelector.Clear() + + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + / + / Note, we still need to ensure the transactions returned respect req.MaxTxBytes. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + for _, txBz := range req.Txs { + tx, err := h.txVerifier.TxDecode(txBz) + if err != nil { + return nil, err +} + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, tx, txBz) + if stop { + break +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} + selectedTxsSignersSeqs := make(map[string]uint64) + +var ( + resError error + selectedTxsNums int + invalidTxs []sdk.Tx / invalid txs to be removed out of the loop to avoid dead lock + ) + +mempool.SelectBy(ctx, h.mempool, req.Txs, func(memTx sdk.Tx) + +bool { + unorderedTx, ok := memTx.(sdk.TxWithUnordered) + isUnordered := ok && unorderedTx.GetUnordered() + txSignersSeqs := make(map[string]uint64) + + / if the tx is unordered, we don't need to check the sequence, we just add it + if !isUnordered { + signerData, err := h.signerExtAdapter.GetSigners(memTx) + if err != nil { + / propagate the error to the caller + resError = err + return false +} + + / If the signers aren't in selectedTxsSignersSeqs then we haven't seen them before + / so we add them and continue given that we don't need to check the sequence. + shouldAdd := true + for _, signer := range signerData { + seq, ok := selectedTxsSignersSeqs[signer.Signer.String()] + if !ok { + txSignersSeqs[signer.Signer.String()] = signer.Sequence + continue +} + + / If we have seen this signer before in this block, we must make + / sure that the current sequence is seq+1; otherwise is invalid + / and we skip it. + if seq+1 != signer.Sequence { + shouldAdd = false + break +} + +txSignersSeqs[signer.Signer.String()] = signer.Sequence +} + if !shouldAdd { + return true +} + +} + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. 
+ txBz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + invalidTxs = append(invalidTxs, memTx) +} + +else { + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, memTx, txBz) + if stop { + return false +} + txsLen := len(h.txSelector.SelectedTxs(ctx)) + / If the tx is unordered, we don't need to update the sender sequence. + if !isUnordered { + for sender, seq := range txSignersSeqs { + / If txsLen != selectedTxsNums is true, it means that we've + / added a new tx to the selected txs, so we need to update + / the sequence of the sender. + if txsLen != selectedTxsNums { + selectedTxsSignersSeqs[sender] = seq +} + +else if _, ok := selectedTxsSignersSeqs[sender]; !ok { + / The transaction hasn't been added but it passed the + / verification, so we know that the sequence is correct. + / So we set this sender's sequence to seq-1, in order + / to avoid unnecessary calls to PrepareProposalVerifyTx. + selectedTxsSignersSeqs[sender] = seq - 1 +} + +} + +} + +selectedTxsNums = txsLen +} + +return true +}) + if resError != nil { + return nil, resError +} + for _, tx := range invalidTxs { + err := h.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return nil, err +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. 
+func (h *DefaultProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+    / If the mempool is nil or NoOp we simply return ACCEPT,
+    / because PrepareProposal may have included txs that could fail verification.
+    _, isNoOp := h.mempool.(mempool.NoOpMempool)
+    if h.mempool == nil || isNoOp {
+        return NoOpProcessProposal()
+    }
+
+    return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) {
+        var totalTxGas uint64
+
+        var maxBlockGas int64
+        if b := ctx.ConsensusParams().Block; b != nil {
+            maxBlockGas = b.MaxGas
+        }
+
+        for _, txBytes := range req.Txs {
+            tx, err := h.txVerifier.ProcessProposalVerifyTx(txBytes)
+            if err != nil {
+                return &abci.ResponseProcessProposal{
+                    Status: abci.ResponseProcessProposal_REJECT,
+                }, nil
+            }
+
+            if maxBlockGas > 0 {
+                gasTx, ok := tx.(GasTx)
+                if ok {
+                    totalTxGas += gasTx.GetGas()
+                }
+
+                if totalTxGas > uint64(maxBlockGas) {
+                    return &abci.ResponseProcessProposal{
+                        Status: abci.ResponseProcessProposal_REJECT,
+                    }, nil
+                }
+            }
+        }
+
+        return &abci.ResponseProcessProposal{
+            Status: abci.ResponseProcessProposal_ACCEPT,
+        }, nil
+    }
+}
+
+/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always
+/ return the transactions sent by the client's request.
+func NoOpPrepareProposal() sdk.PrepareProposalHandler {
+    return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+        return &abci.ResponsePrepareProposal{
+            Txs: req.Txs,
+        }, nil
+    }
+}
+
+/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always
+/ return ACCEPT.
+func NoOpProcessProposal() sdk.ProcessProposalHandler {
+    return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) {
+        return &abci.ResponseProcessProposal{
+            Status: abci.ResponseProcessProposal_ACCEPT,
+        }, nil
+    }
+}
+
+/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an
+/ empty byte slice as the vote extension.
+func NoOpExtendVote() sdk.ExtendVoteHandler {
+    return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+        return &abci.ResponseExtendVote{
+            VoteExtension: []byte{},
+        }, nil
+    }
+}
+
+/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It
+/ will always return an ACCEPT status with no error.
+func NoOpVerifyVoteExtensionHandler() sdk.VerifyVoteExtensionHandler {
+    return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
+        return &abci.ResponseVerifyVoteExtension{
+            Status: abci.ResponseVerifyVoteExtension_ACCEPT,
+        }, nil
+    }
+}
+
+/ TxSelector defines a helper type that assists in selecting transactions during
+/ mempool transaction selection in PrepareProposal. It keeps track of the total
+/ number of bytes and total gas of the selected transactions. It also keeps
+/ track of the selected transactions themselves.
+type TxSelector interface {
+    / SelectedTxs should return a copy of the selected transactions.
+    SelectedTxs(ctx context.Context) [][]byte
+
+    / Clear should clear the TxSelector, nulling out all relevant fields.
+    Clear()
+
+    / SelectTxForProposal should attempt to select a transaction for inclusion in
+    / a proposal based on inclusion criteria defined by the TxSelector. It must
+    / return if the caller should halt the transaction selection loop
+    / (typically over a mempool) or otherwise.
+    SelectTxForProposal(ctx context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) bool
+}
+
+type defaultTxSelector struct {
+    totalTxBytes uint64
+    totalTxGas   uint64
+    selectedTxs  [][]byte
+}
+
+func NewDefaultTxSelector() TxSelector {
+    return &defaultTxSelector{}
+}
+
+func (ts *defaultTxSelector) SelectedTxs(_ context.Context) [][]byte {
+    txs := make([][]byte, len(ts.selectedTxs))
+    copy(txs, ts.selectedTxs)
+    return txs
+}
+
+func (ts *defaultTxSelector) Clear() {
+    ts.totalTxBytes = 0
+    ts.totalTxGas = 0
+    ts.selectedTxs = nil
+}
+
+func (ts *defaultTxSelector) SelectTxForProposal(_ context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) bool {
+    txSize := uint64(cmttypes.ComputeProtoSizeForTxs([]cmttypes.Tx{txBz}))
+
+    var txGasLimit uint64
+    if memTx != nil {
+        if gasTx, ok := memTx.(GasTx); ok {
+            txGasLimit = gasTx.GetGas()
+        }
+    }
+
+    / only add the transaction to the proposal if we have enough capacity
+    if (txSize + ts.totalTxBytes) <= maxTxBytes {
+        / If there is a max block gas limit, add the tx only if the limit has
+        / not been met.
+        if maxBlockGas > 0 {
+            if (txGasLimit + ts.totalTxGas) <= maxBlockGas {
+                ts.totalTxGas += txGasLimit
+                ts.totalTxBytes += txSize
+                ts.selectedTxs = append(ts.selectedTxs, txBz)
+            }
+        } else {
+            ts.totalTxBytes += txSize
+            ts.selectedTxs = append(ts.selectedTxs, txBz)
+        }
+    }
+
+    / check if we've reached capacity; if so, we cannot select any more transactions
+    return ts.totalTxBytes >= maxTxBytes || (maxBlockGas > 0 && (ts.totalTxGas >= maxBlockGas))
+}
+```
diff --git a/docs/sdk/next/documentation/application-framework/context.mdx b/docs/sdk/next/documentation/application-framework/context.mdx
new file mode 100644
index 00000000..13dd4a85
--- /dev/null
+++ b/docs/sdk/next/documentation/application-framework/context.mdx
@@ -0,0 +1,821 @@
+---
+title: Context
+---
+
+## Synopsis
+
+The `context` is a data structure intended to be passed from function to function that carries information about the current state of the application. It provides access to a branched storage (a safe branch of the entire state) as well as useful objects and information like `gasMeter`, `block height`, `consensus parameters` and more.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/next/documentation/application-framework/app-anatomy)
+- [Lifecycle of a Transaction](/docs/sdk/next/documentation/protocol-development/tx-lifecycle)
+
+
+
+## Context Definition
+
+The Cosmos SDK `Context` is a custom data structure that contains Go's stdlib [`context`](https://pkg.go.dev/context) as its base, and has many additional types within its definition that are specific to the Cosmos SDK. The `Context` is integral to transaction processing in that it allows modules to easily access their respective [store](/docs/sdk/next/documentation/state-storage/store#base-layer-kvstores) in the [`multistore`](/docs/sdk/next/documentation/state-storage/store#multistore) and retrieve transactional context such as the block header and gas meter.
+ +```go expandable +package types + +import ( + + "context" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/header" + "cosmossdk.io/log" + "cosmossdk.io/store/gaskv" + storetypes "cosmossdk.io/store/types" +) + +/ ExecMode defines the execution mode which can be set on a Context. +type ExecMode uint8 + +/ All possible execution modes. +const ( + ExecModeCheck ExecMode = iota + ExecModeReCheck + ExecModeSimulate + ExecModePrepareProposal + ExecModeProcessProposal + ExecModeVoteExtension + ExecModeVerifyVoteExtension + ExecModeFinalize +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + sigverifyTx bool / when run simulation, because the private key corresponding to the account in the genesis.json randomly generated, we must skip the sigverify. 
+ execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + +BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + +IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +IsSigverifyTx() + +bool { + return c.sigverifyTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + 
return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ BlockHeader returns the header by value. +func (c Context) + +BlockHeader() + +cmtproto.Header { + return c.header +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + sigverifyTx: true, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. 
+func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. +func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) + +c.headerHash = temp + return c +} + +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. +func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. +func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. 
+func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. +func (c Context) + +WithGasMeter(meter storetypes.GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter storetypes.GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. +func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} + +/ WithIsSigverifyTx called with true will sigverify in auth module +func (c Context) + +WithIsSigverifyTx(isSigverifyTx bool) + +Context { + c.sigverifyTx = isSigverifyTx + return c +} + +/ WithExecMode returns a Context with an updated ExecMode. 
+func (c Context) + +WithExecMode(m ExecMode) + +Context { + c.execMode = m + return c +} + +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) + +WithMinGasPrices(gasPrices DecCoins) + +Context { + c.minGasPrice = gasPrices + return c +} + +/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) + +WithConsensusParams(params cmtproto.ConsensusParams) + +Context { + c.consParams = params + return c +} + +/ WithEventManager returns a Context with an updated event manager +func (c Context) + +WithEventManager(em EventManagerI) + +Context { + c.eventManager = em + return c +} + +/ WithPriority returns a Context with an updated tx priority +func (c Context) + +WithPriority(p int64) + +Context { + c.priority = p + return c +} + +/ WithStreamingManager returns a Context with an updated streaming manager +func (c Context) + +WithStreamingManager(sm storetypes.StreamingManager) + +Context { + c.streamingManager = sm + return c +} + +/ WithCometInfo returns a Context with an updated comet info +func (c Context) + +WithCometInfo(cometInfo comet.BlockInfo) + +Context { + c.cometInfo = cometInfo + return c +} + +/ WithHeaderInfo returns a Context with an updated header info +func (c Context) + +WithHeaderInfo(headerInfo header.Info) + +Context { + / Settime to UTC + headerInfo.Time = headerInfo.Time.UTC() + +c.headerInfo = headerInfo + return c +} + +/ TODO: remove??? 
+func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value interface{ +}) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key interface{ +}) + +interface{ +} { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. 
It is useful for passing an sdk.Context through methods that take a
+/ stdlib context.Context parameter such as generated gRPC methods. To get the original
+/ sdk.Context back, call UnwrapSDKContext.
+/
+/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context.
+func WrapSDKContext(ctx Context) context.Context {
+    return ctx
+}
+
+/ UnwrapSDKContext retrieves a Context from a context.Context instance
+/ attached with WrapSDKContext. It panics if a Context was not properly
+/ attached
+func UnwrapSDKContext(ctx context.Context) Context {
+    if sdkCtx, ok := ctx.(Context); ok {
+        return sdkCtx
+    }
+    return ctx.Value(SdkContextKey).(Context)
+}
+```
+
+- **Base Context:** The base type is a Go [Context](https://pkg.go.dev/context), which is explained further in the [Go Context Package](#go-context-package) section below.
+- **Multistore:** Every application's `BaseApp` contains a [`CommitMultiStore`](/docs/sdk/next/documentation/state-storage/store#multistore) which is provided when a `Context` is created. Calling the `KVStore()` and `TransientStore()` methods allows modules to fetch their respective [`KVStore`](/docs/sdk/next/documentation/state-storage/store#base-layer-kvstores) using their unique `StoreKey`.
+- **Header:** The [header](https://docs.cometbft.com/v0.37/spec/core/data_structures#header) is a CometBFT type. It carries important information about the state of the blockchain, such as the height and proposer of the current block.
+- **Header Hash:** The current block header hash, obtained during `abci.FinalizeBlock`.
+- **Chain ID:** The unique identifier of the blockchain a block pertains to.
+- **Transaction Bytes:** The `[]byte` representation of a transaction being processed using the context. Every transaction is processed by various parts of the Cosmos SDK and consensus engine (e.g.
CometBFT) throughout its [lifecycle](/docs/sdk/next/documentation/protocol-development/tx-lifecycle), some of which have no understanding of transaction types. Thus, transactions are marshaled into the generic `[]byte` type using an [encoding format](/docs/sdk/next/documentation/protocol-development/encoding) such as [Amino](/docs/sdk/next/documentation/protocol-development/encoding).
+- **Logger:** A `logger` from the CometBFT libraries. Learn more about logs [here](https://docs.cometbft.com/v0.37/core/configuration). Modules use it to create their own module-specific loggers.
+- **VoteInfo:** A list of the ABCI type [`VoteInfo`](https://docs.cometbft.com/main/spec/abci/abci++_methods.html#voteinfo), which includes the name of a validator and a boolean indicating whether they have signed the block.
+- **Gas Meters:** Specifically, a [`gasMeter`](/docs/sdk/next/documentation/protocol-development/gas-fees#main-gas-meter) for the transaction currently being processed using the context and a [`blockGasMeter`](/docs/sdk/next/documentation/protocol-development/gas-fees#block-gas-meter) for the entire block it belongs to. Users specify how much they wish to pay in fees for the execution of their transaction; these gas meters keep track of how much [gas](/docs/sdk/next/documentation/protocol-development/gas-fees) has been used in the transaction or block so far. If the gas meter runs out, execution halts.
+- **CheckTx Mode:** A boolean value indicating whether a transaction should be processed in `CheckTx` or `DeliverTx` mode.
+- **Min Gas Price:** The minimum [gas](/docs/sdk/next/documentation/protocol-development/gas-fees) price a node is willing to accept in order to include a transaction in its block. This price is a local value configured by each node individually, and should therefore **not be used in any function whose execution is part of a state-transition sequence**.
+- **Consensus Params:** The ABCI type [Consensus Parameters](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#consensus-parameters), which specify certain limits for the blockchain, such as maximum gas for a block.
+- **Event Manager:** The event manager allows any caller with access to a `Context` to emit [`Events`](/docs/sdk/next/api-reference/events-streaming/events). Modules may define module-specific `Events` by defining various `Types` and `Attributes` or use the common definitions found in `types/`. Clients can subscribe or query for these `Events`. These `Events` are collected throughout `FinalizeBlock` and are returned to CometBFT for indexing.
+- **Priority:** The transaction priority, only relevant in `CheckTx`.
+- **KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the `KVStore`.
+- **Transient KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the transient `KVStore`.
+- **StreamingManager:** The `streamingManager` field provides access to the streaming manager, which allows modules to subscribe to state changes emitted by the blockchain. It is used by the state listening API, described in [ADR 038](https://docs.cosmos.network/main/architecture/adr-038-state-listening).
+- **CometInfo:** A lightweight field that contains information about the current block, such as the block height, time, and hash. This information can be used for validating evidence, providing historical data, and enhancing the user experience. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/comet/service.go#L14).
+- **HeaderInfo:** The `headerInfo` field contains information about the current block header, such as the chain ID, gas limit, and timestamp. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/header/service.go#L14).
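+
+The `With*` methods in the definition above follow a copy-on-write convention: because `Context` is a value type, each `With*` call mutates only its local copy and returns it, leaving the caller's context untouched. The following is a minimal stdlib-only sketch of that pattern; the `MiniCtx` type is hypothetical and not part of the SDK:
+
+```go
+package main
+
+import "fmt"
+
+// MiniCtx is a hypothetical, stripped-down analogue of sdk.Context:
+// a value type whose With* methods return a modified copy.
+type MiniCtx struct {
+	blockHeight int64
+	chainID     string
+}
+
+// WithBlockHeight returns a copy of the context with an updated height.
+func (c MiniCtx) WithBlockHeight(h int64) MiniCtx {
+	c.blockHeight = h // mutates the copy received by value, not the caller's value
+	return c
+}
+
+// WithChainID returns a copy of the context with an updated chain ID.
+func (c MiniCtx) WithChainID(id string) MiniCtx {
+	c.chainID = id
+	return c
+}
+
+func main() {
+	parent := MiniCtx{blockHeight: 10, chainID: "test-chain"}
+	child := parent.WithBlockHeight(11)
+
+	fmt.Println(parent.blockHeight) // 10: the parent is unchanged
+	fmt.Println(child.blockHeight)  // 11
+}
+```
+
+This is why SDK code always rebinds the result (`ctx = ctx.WithBlockHeight(h)`): discarding the return value silently discards the change.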
+
+## Go Context Package
+
+A basic `Context` is defined in the [Golang Context Package](https://pkg.go.dev/context). A `Context`
+is an immutable data structure that carries request-scoped data across APIs and processes. Contexts
+are also designed to enable concurrency and to be used in goroutines.
+
+Contexts are intended to be **immutable**; they should never be edited. Instead, the convention is
+to create a child context from its parent using a `With` function. For example:
+
+```go
+childCtx = parentCtx.WithBlockHeader(header)
+```
+
+The [Golang Context Package](https://pkg.go.dev/context) documentation instructs developers to
+explicitly pass a context `ctx` as the first argument of a process.
+
+## Store branching
+
+The `Context` contains a `MultiStore`, which allows for branching and caching functionality using `CacheMultiStore`
+(queries in `CacheMultiStore` are cached to avoid future round trips).
+Each `KVStore` is branched in a safe and isolated ephemeral storage. Processes are free to write changes to
+the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can
+be committed to the underlying store at the end of the sequence, or discarded if something
+goes wrong. The pattern of usage for a Context is as follows:
+
+1. A process receives a Context `ctx` from its parent process, which provides information needed to
+   perform the process.
+2. The `ctx.ms` is a **branched store**, i.e. a branch of the [multistore](/docs/sdk/next/documentation/state-storage/store#multistore) is made so that the process can make changes to the state as it executes, without changing the original `ctx.ms`. This is useful to protect the underlying multistore in case the changes need to be reverted at some point in the execution.
+3. The process may read and write from `ctx` as it is executing. It may call a subprocess and pass
+   `ctx` to it as needed.
+4.
When a subprocess returns, it checks if the result is a success or failure. If a failure, nothing
+   needs to be done - the branch `ctx` is simply discarded. If successful, the changes made to
+   the `CacheMultiStore` can be committed to the original `ctx.ms` via `Write()`.
+
+For example, here is a snippet from the [`runTx`](/docs/sdk/next/documentation/application-framework/baseapp#runtx-antehandler-runmsgs-posthandler) function in [`baseapp`](/docs/sdk/next/documentation/application-framework/baseapp):
+
+```go
+runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)
+result = app.runMsgs(runMsgCtx, msgs, mode)
+result.GasWanted = gasWanted
+
+if mode != runTxModeDeliver {
+    return result
+}
+
+if result.IsOK() {
+    msCache.Write()
+}
+```
+
+Here is the process:
+
+1. Prior to calling `runMsgs` on the message(s) in the transaction, it uses `app.cacheTxContext()`
+   to branch and cache the context and multistore.
+2. `runMsgCtx`, the context with the branched store, is used in `runMsgs` to return a result.
+3. If the process is running in [`checkTxMode`](/docs/sdk/next/documentation/application-framework/baseapp#checktx), there is no need to write the
+   changes - the result is returned immediately.
+4. If the process is running in [`deliverTxMode`](/docs/sdk/next/documentation/application-framework/baseapp#delivertx) and the result indicates
+   a successful run over all the messages, the branched multistore is written back to the original.
diff --git a/docs/sdk/next/documentation/application-framework/depinject.mdx b/docs/sdk/next/documentation/application-framework/depinject.mdx
new file mode 100644
index 00000000..d9b94cd9
--- /dev/null
+++ b/docs/sdk/next/documentation/application-framework/depinject.mdx
@@ -0,0 +1,678 @@
+---
+title: Depinject
+---
+
+> **DISCLAIMER**: This is a **beta** package. The SDK team is actively working on this feature and we are looking for feedback from the community. Please try it out and let us know what you think.
+
+## Overview
+
+`depinject` is a dependency injection (DI) framework for the Cosmos SDK, designed to streamline the process of building and configuring blockchain applications. It works in conjunction with the `core/appconfig` module to replace the majority of boilerplate code in `app.go` with a configuration file in Go, YAML, or JSON format.
+
+`depinject` is particularly useful for developing blockchain applications:
+
+* With multiple interdependent components, modules, or services, helping manage their dependencies effectively.
+* That require decoupling of these components, making it easier to test, modify, or replace individual parts without affecting the entire system.
+* That want to simplify the setup and initialisation of modules and their dependencies by reducing boilerplate code and automating dependency management.
+
+By using `depinject`, developers can achieve:
+
+* Cleaner and more organised code.
+* Improved modularity and maintainability.
+* Enhanced development velocity and code quality.
+
+See also the [Go Doc](https://pkg.go.dev/cosmossdk.io/depinject).
+
+## Usage
+
+The `depinject` framework, based on dependency injection concepts, streamlines the management of dependencies within your blockchain application using its Configuration API. This API offers a set of functions and methods to create easy to use configurations, making it simple to define, modify, and access dependencies and their relationships.
+
+A core component of the [Configuration API](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject#Config) is the `Provide` function, which allows you to register provider functions that supply dependencies. Inspired by constructor injection, these provider functions form the basis of the dependency tree, enabling the management and resolution of dependencies in a structured and maintainable manner.
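+
+To see what `Provide` automates, it can help to write the same wiring by hand: each provider declares its dependencies as parameters and returns what it constructs, and `depinject` derives the call order from those signatures. A hand-wired sketch under that analogy follows; all names here are illustrative, none of them are SDK APIs:
+
+```go
+package main
+
+import "fmt"
+
+// Codec and Keeper stand in for two components where one depends on the other.
+type Codec struct{ name string }
+
+type Keeper struct{ cdc Codec }
+
+// NewCodec is a "provider": no inputs, one output.
+func NewCodec() Codec {
+	return Codec{name: "proto"}
+}
+
+// NewKeeper is a "provider" whose parameter list declares its dependency.
+func NewKeeper(cdc Codec) Keeper {
+	return Keeper{cdc: cdc}
+}
+
+func main() {
+	// Manual wiring: the order of these calls encodes the dependency graph
+	// that depinject would otherwise infer from the provider signatures.
+	cdc := NewCodec()
+	keeper := NewKeeper(cdc)
+	fmt.Println(keeper.cdc.name) // proto
+}
+```
+
+With `depinject`, the two constructors would instead be passed to `depinject.Provide` and the container would perform this ordering itself.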
Additionally, `depinject` supports interface types as inputs to provider functions, offering flexibility and decoupling between components, similar to interface injection concepts.
+
+By leveraging `depinject` and its Configuration API, you can efficiently handle dependencies in your blockchain application, ensuring a clean, modular, and well-organised codebase.
+
+Example:
+
+```go expandable
+package main
+
+import (
+    "fmt"
+
+    "cosmossdk.io/depinject"
+)
+
+type AnotherInt int
+
+func GetInt() int {
+    return 1
+}
+
+func GetAnotherInt() AnotherInt {
+    return 2
+}
+
+func main() {
+    var (
+        x int
+        y AnotherInt
+    )
+
+    fmt.Printf("Before (%v, %v)\n", x, y)
+
+    depinject.Inject(
+        depinject.Provide(
+            GetInt,
+            GetAnotherInt,
+        ),
+        &x,
+        &y,
+    )
+
+    fmt.Printf("After (%v, %v)\n", x, y)
+}
+```
+
+In this example, `depinject.Provide` registers two provider functions that return `int` and `AnotherInt` values. The `depinject.Inject` function is then used to inject these values into the variables `x` and `y`.
+
+Provider functions serve as the basis for the dependency tree. They are analysed to identify their inputs as dependencies and their outputs as dependents. These dependents can either be used by another provider function or be stored outside the DI container (e.g., `&x` and `&y` in the example above). Provider functions must be exported.
+
+### Interface type resolution
+
+`depinject` supports the use of interface types as inputs to provider functions, which helps decouple dependencies between modules. This approach is particularly useful for managing complex systems with multiple modules, such as the Cosmos SDK, where dependencies need to be flexible and maintainable.
+
+For example, `x/bank` expects an [AccountKeeper](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/x/bank/types#AccountKeeper) interface as [input to ProvideModule](https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/bank/module.go#L208-L260).
`SimApp` uses the implementation in `x/auth`, but the modular design allows for easy changes to the implementation if needed. + +Consider the following example: + +```go expandable +package duck + +type Duck interface { + quack() +} + +type AlsoDuck interface { + quack() +} + +type Mallard struct{ +} + +type Canvasback struct{ +} + +func (duck Mallard) + +quack() { +} + +func (duck Canvasback) + +quack() { +} + +type Pond struct { + Duck AlsoDuck +} +``` + +And the following provider functions: + +```go expandable +func GetMallard() + +duck.Mallard { + return Mallard{ +} +} + +func GetPond(duck Duck) + +Pond { + return Pond{ + Duck: duck +} +} + +func GetCanvasback() + +Canvasback { + return Canvasback{ +} +} +``` + +In this example, there's a `Pond` struct that has a `Duck` field of type `AlsoDuck`. The `depinject` framework can automatically resolve the appropriate implementation when there's only one available, as shown below: + +```go +var pond Pond + +depinject.Inject( + depinject.Provide( + GetMallard, + GetPond, + ), + &pond) +``` + +This code snippet results in the `Duck` field of `Pond` being implicitly bound to the `Mallard` implementation because it's the only implementation of the `Duck` interface in the container. + +However, if there are multiple implementations of the `Duck` interface, as in the following example, you'll encounter an error: + +```go +var pond Pond + +depinject.Inject( + depinject.Provide( + GetMallard, + GetCanvasback, + GetPond, + ), + &pond) +``` + +A specific binding preference for `Duck` is required. + +#### `BindInterface` API + +In the above situation registering a binding for a given interface binding may look like: + +```go expandable +depinject.Inject( + depinject.Configs( + depinject.BindInterface( + "duck/duck.Duck", + "duck/duck.Mallard", + ), + depinject.Provide( + GetMallard, + GetCanvasback, + GetPond, + ), + ), + &pond) +``` + +Now `depinject` has enough information to provide `Mallard` as an input to `APond`. 
+ +### Full example in real app + + +When using `depinject.Inject`, the injected types must be pointers. + + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper 
"github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom minting function that implements the mintkeeper.MintFn + / interface. 
+ ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp)
+
+GetKey(storeKey string) *storetypes.KVStoreKey {
+ sk := app.UnsafeFindStoreKey(storeKey)
+
+kvStoreKey, ok := sk.(*storetypes.KVStoreKey)
+ if !ok {
+ return nil
+}
+
+return kvStoreKey
+}
+
+func (app *SimApp)
+
+kvStoreKeys()
+
+map[string]*storetypes.KVStoreKey {
+ keys := make(map[string]*storetypes.KVStoreKey)
+ for _, k := range app.GetStoreKeys() {
+ if kv, ok := k.(*storetypes.KVStoreKey); ok {
+ keys[kv.Name()] = kv
+}
+
+}
+
+return keys
+}
+
+/ SimulationManager implements the SimulationApp interface
+func (app *SimApp)
+
+SimulationManager() *module.SimulationManager {
+ return app.sm
+}
+
+/ RegisterAPIRoutes registers all application module routes with the provided
+/ API server.
+func (app *SimApp)
+
+RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+ app.App.RegisterAPIRoutes(apiSvr, apiConfig)
+ / register swagger API in app.go so that other applications can override easily
+ if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+ panic(err)
+}
+}
+
+/ GetMaccPerms returns a copy of the module account permissions
+/
+/ NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms()
+
+map[string][]string {
+ dup := make(map[string][]string)
+ for _, perms := range moduleAccPerms {
+ dup[perms.Account] = perms.Permissions
+}
+
+return dup
+}
+
+/ BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses()
+
+map[string]bool {
+ result := make(map[string]bool)
+ if len(blockAccAddrs) > 0 {
+ for _, addr := range blockAccAddrs {
+ result[addr] = true
+}
+
+}
+
+else {
+ for addr := range GetMaccPerms() {
+ result[addr] = true
+}
+
+}
+
+return result
+}
+```
+
+## Debugging
+
+Issues with resolving dependencies in the container can be debugged with logs and [Graphviz](https://graphviz.org) renderings of the container tree.
+By default, whenever there is an error, logs will be printed to stderr and a rendering of the dependency graph in Graphviz DOT format will be saved to `debug_container.dot`.
+
+Here is an example Graphviz rendering of a successful build of a dependency graph:
+![Graphviz Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example.svg)
+
+Rectangles represent functions, ovals represent types, rounded rectangles represent modules, and the single hexagon
+represents the function which called `Build`. Black-colored shapes mark functions and types that were called/resolved
+without an error. Gray-colored nodes mark functions and types that could have been called/resolved in the container but
+were left unused.
+
+Here is an example Graphviz rendering of a dependency graph build which failed:
+![Graphviz Error Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example_error.svg)
+
+Graphviz DOT files can be converted into SVGs for viewing in a web browser using the `dot` command-line tool, for example:
+
+```txt
+dot -Tsvg debug_container.dot > debug_container.svg
+```
+
+Many other tools, including some IDEs, support working with DOT files.
diff --git a/docs/sdk/next/documentation/application-framework/runtime.mdx b/docs/sdk/next/documentation/application-framework/runtime.mdx
new file mode 100644
index 00000000..9a89077f
--- /dev/null
+++ b/docs/sdk/next/documentation/application-framework/runtime.mdx
@@ -0,0 +1,1877 @@
+---
+title: What is `runtime`?
+description: >-
+  The runtime package in the Cosmos SDK provides a flexible framework for
+  configuring and managing blockchain applications. It serves as the foundation
+  for creating modular blockchain applications using a declarative configuration
+  approach.
+---
+
+The `runtime` package in the Cosmos SDK provides a flexible framework for configuring and managing blockchain applications.
It serves as the foundation for creating modular blockchain applications using a declarative configuration approach. + +## Overview + +The runtime package acts as a wrapper around the `BaseApp` and `ModuleManager`, offering a hybrid approach where applications can be configured both declaratively through configuration files and programmatically through traditional methods. +It is a layer of abstraction between `baseapp` and the application modules that simplifies the process of building a Cosmos SDK application. + +## Core Components + +### App Structure + +The runtime App struct contains several key components: + +```go +type App struct { + *baseapp.BaseApp + ModuleManager *module.Manager + configurator module.Configurator + config *runtimev1alpha1.Module + storeKeys []storetypes.StoreKey + / ... other fields +} +``` + +Cosmos SDK applications should embed the `*runtime.App` struct to leverage the runtime module. + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + 
"github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). 
+ / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom minting function that implements the mintkeeper.MintFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) 
+ / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) + + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+ app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +### Configuration + +The runtime module is configured using App Wiring. 
The main configuration object is the [`Module` message](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/app/runtime/v1alpha1/module.proto), which supports the following key settings: + +* `app_name`: The name of the application +* `begin_blockers`: List of module names to call during BeginBlock +* `end_blockers`: List of module names to call during EndBlock +* `init_genesis`: Order of module initialization during genesis +* `export_genesis`: Order for exporting module genesis data +* `pre_blockers`: Modules to execute before block processing + +Learn more about wiring `runtime` in the [next section](/docs/sdk/next/documentation/application-framework/app-go-di). + +#### Store Configuration + +By default, the runtime module uses the module name as the store key. +However it provides a flexible store key configuration through: + +* `override_store_keys`: Allows customizing module store keys +* `skip_store_keys`: Specifies store keys to skip during keeper construction + +Example configuration: + +```go expandable +package simapp + +import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 
"cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes 
"github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName +}, + { + Account: distrtypes.ModuleName +}, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter +}}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner +}}, + { + Account: nft.ModuleName +}, + { + Account: protocolpooltypes.ModuleName +}, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount +}, +} + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / 
govtypes.ModuleName +} + +ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, +}, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, +}, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, +}, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", +}, +}, + SkipStoreKeys: []string{ + "tx", +}, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +}, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +}, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ +}, +}), +}, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + EnableUnorderedTransactions: true, +}), +}, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ +}), +}, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, +}), +}, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + / NOTE: specifying a prefix is only necessary when using bech32 addresses + / If not specified, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default + Bech32PrefixValidator: "cosmosvaloper", + Bech32PrefixConsensus: "cosmosvalcons", +}), +}, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ +}), +}, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + SkipAnteHandler: true, / Enable this to skip the default antehandlers and set custom ante handlers. 
+}), +}, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ +}), +}, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ +}), +}, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ +}), +}, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ +}), +}, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ +}), +}, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ +}), +}, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, +}), +}, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ +}), +}, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ +}), +}, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ +}), +}, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ +}), +}, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ +}), +}, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ +}), +}, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ +}), +}, +} + + / AppConfig is application configuration (used by depinject) + +AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, +}), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}, + ), + ) +) +``` + +## Key Features + +### 1. 
BaseApp and other Core SDK components integration + +The runtime module integrates with the `BaseApp` and other core SDK components to provide a seamless experience for developers. + +The developer only needs to embed the `runtime.App` struct in their application to leverage the runtime module. +The configuration of the module manager and other core components is handled internally via the [`AppBuilder`](#4-application-building). + +### 2. Module Registration + +Runtime has built-in support for [`depinject`-enabled modules](/docs/sdk/next/documentation/module-system/depinject). +Such modules can be registered through the configuration file (often named `app_config.go`), with no additional code required. + +```go expandable +package simapp + +import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 
"cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + 
"github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName +}, + { + Account: distrtypes.ModuleName +}, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter +}}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner +}}, + { + Account: nft.ModuleName +}, + { + Account: protocolpooltypes.ModuleName +}, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount +}, +} + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName +} + +ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, 
+}, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, +}, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, +}, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", +}, +}, + SkipStoreKeys: []string{ + "tx", +}, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +}, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +}, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ +}, +}), +}, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + EnableUnorderedTransactions: true, +}), +}, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ +}), +}, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, +}), +}, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + / NOTE: specifying a prefix is only necessary when using bech32 addresses + / If not specified, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default + Bech32PrefixValidator: "cosmosvaloper", + Bech32PrefixConsensus: "cosmosvalcons", +}), +}, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ +}), +}, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + SkipAnteHandler: true, / Enable this to skip the default antehandlers and set custom ante handlers. 
+}), +}, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ +}), +}, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ +}), +}, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ +}), +}, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ +}), +}, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ +}), +}, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ +}), +}, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, +}), +}, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ +}), +}, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ +}), +}, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ +}), +}, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ +}), +}, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ +}), +}, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ +}), +}, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ +}), +}, +} + + / AppConfig is application configuration (used by depinject) + +AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, +}), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}, + ), + ) +) +``` + +Additionally, the runtime package facilitates manual module registration through the 
`RegisterModules` method. This is the primary integration point for modules not registered via configuration.

Even when using manual registration, the module should still be configured in the `Module` message in AppConfig.

```go
func (a *App) RegisterModules(modules ...module.AppModule) error
```

The SDK recommends using the declarative approach with `depinject` for module registration whenever possible.

### 3. Service Registration

Runtime registers all [core services](https://pkg.go.dev/cosmossdk.io/core) required by modules.
These services include `store`, `event manager`, `context`, and `logger`.
Runtime ensures that services are scoped to their respective modules during the wiring process.

```go expandable +package runtime + +import ( + + "fmt" + "os" + "slices" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoregistry" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/event" + "cosmossdk.io/core/genesis" + "cosmossdk.io/core/header" + "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/tx/signing" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type appModule struct { + app *App +} + +func (m appModule) + +RegisterServices(configurator module.Configurator) { + err := 
m.app.registerRuntimeServices(configurator) + if err != nil { + panic(err) +} +} + +func (m appModule) + +IsOnePerModuleType() { +} + +func (m appModule) + +IsAppModule() { +} + +var ( + _ appmodule.AppModule = appModule{ +} + _ module.HasServices = appModule{ +} +) + +/ BaseAppOption is a depinject.AutoGroupType which can be used to pass +/ BaseApp options into the depinject. It should be used carefully. +type BaseAppOption func(*baseapp.BaseApp) + +/ IsManyPerContainerType indicates that this is a depinject.ManyPerContainerType. +func (b BaseAppOption) + +IsManyPerContainerType() { +} + +func init() { + appmodule.Register(&runtimev1alpha1.Module{ +}, + appmodule.Provide( + ProvideApp, + ProvideInterfaceRegistry, + ProvideKVStoreKey, + ProvideTransientStoreKey, + ProvideMemoryStoreKey, + ProvideGenesisTxHandler, + ProvideKVStoreService, + ProvideMemoryStoreService, + ProvideTransientStoreService, + ProvideEventService, + ProvideHeaderInfoService, + ProvideCometInfoService, + ProvideBasicManager, + ProvideAddressCodec, + ), + appmodule.Invoke(SetupAppBuilder), + ) +} + +func ProvideApp(interfaceRegistry codectypes.InterfaceRegistry) ( + codec.Codec, + *codec.LegacyAmino, + *AppBuilder, + *baseapp.MsgServiceRouter, + *baseapp.GRPCQueryRouter, + appmodule.AppModule, + protodesc.Resolver, + protoregistry.MessageTypeResolver, + error, +) { + protoFiles := proto.HybridResolver + protoTypes := protoregistry.GlobalTypes + + / At startup, check that all proto annotations are correct. + if err := msgservice.ValidateProtoAnnotations(protoFiles); err != nil { + / Once we switch to using protoreflect-based ante handlers, we might + / want to panic here instead of logging a warning. 
+ _, _ = fmt.Fprintln(os.Stderr, err.Error()) +} + amino := codec.NewLegacyAmino() + +std.RegisterInterfaces(interfaceRegistry) + +std.RegisterLegacyAminoCodec(amino) + cdc := codec.NewProtoCodec(interfaceRegistry) + msgServiceRouter := baseapp.NewMsgServiceRouter() + grpcQueryRouter := baseapp.NewGRPCQueryRouter() + app := &App{ + storeKeys: nil, + interfaceRegistry: interfaceRegistry, + cdc: cdc, + amino: amino, + basicManager: module.BasicManager{ +}, + msgServiceRouter: msgServiceRouter, + grpcQueryRouter: grpcQueryRouter, +} + appBuilder := &AppBuilder{ + app +} + +return cdc, amino, appBuilder, msgServiceRouter, grpcQueryRouter, appModule{ + app +}, protoFiles, protoTypes, nil +} + +type AppInputs struct { + depinject.In + + AppConfig *appv1alpha1.Config `optional:"true"` + Config *runtimev1alpha1.Module + AppBuilder *AppBuilder + Modules map[string]appmodule.AppModule + CustomModuleBasics map[string]module.AppModuleBasic `optional:"true"` + BaseAppOptions []BaseAppOption + InterfaceRegistry codectypes.InterfaceRegistry + LegacyAmino *codec.LegacyAmino + Logger log.Logger +} + +func SetupAppBuilder(inputs AppInputs) { + app := inputs.AppBuilder.app + app.baseAppOptions = inputs.BaseAppOptions + app.config = inputs.Config + app.appConfig = inputs.AppConfig + app.logger = inputs.Logger + app.ModuleManager = module.NewManagerFromMap(inputs.Modules) + for name, mod := range inputs.Modules { + if customBasicMod, ok := inputs.CustomModuleBasics[name]; ok { + app.basicManager[name] = customBasicMod + customBasicMod.RegisterInterfaces(inputs.InterfaceRegistry) + +customBasicMod.RegisterLegacyAminoCodec(inputs.LegacyAmino) + +continue +} + coreAppModuleBasic := module.CoreAppModuleBasicAdaptor(name, mod) + +app.basicManager[name] = coreAppModuleBasic + coreAppModuleBasic.RegisterInterfaces(inputs.InterfaceRegistry) + +coreAppModuleBasic.RegisterLegacyAminoCodec(inputs.LegacyAmino) +} +} + +func ProvideInterfaceRegistry(addressCodec address.Codec, validatorAddressCodec 
ValidatorAddressCodec, customGetSigners []signing.CustomGetSigner) (codectypes.InterfaceRegistry, error) { + signingOptions := signing.Options{ + AddressCodec: addressCodec, + ValidatorAddressCodec: validatorAddressCodec, +} + for _, signer := range customGetSigners { + signingOptions.DefineCustomGetSigners(signer.MsgType, signer.Fn) +} + +interfaceRegistry, err := codectypes.NewInterfaceRegistryWithOptions(codectypes.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signingOptions, +}) + if err != nil { + return nil, err +} + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + return nil, err +} + +return interfaceRegistry, nil +} + +func registerStoreKey(wrapper *AppBuilder, key storetypes.StoreKey) { + wrapper.app.storeKeys = append(wrapper.app.storeKeys, key) +} + +func storeKeyOverride(config *runtimev1alpha1.Module, moduleName string) *runtimev1alpha1.StoreKeyConfig { + for _, cfg := range config.OverrideStoreKeys { + if cfg.ModuleName == moduleName { + return cfg +} + +} + +return nil +} + +func ProvideKVStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.KVStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + override := storeKeyOverride(config, key.Name()) + +var storeKeyName string + if override != nil { + storeKeyName = override.KvStoreKey +} + +else { + storeKeyName = key.Name() +} + storeKey := storetypes.NewKVStoreKey(storeKeyName) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideTransientStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.TransientStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewTransientStoreKey(fmt.Sprintf("transient:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideMemoryStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app 
*AppBuilder) *storetypes.MemoryStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewMemoryStoreKey(fmt.Sprintf("memory:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideGenesisTxHandler(appBuilder *AppBuilder) + +genesis.TxHandler { + return appBuilder.app +} + +func ProvideKVStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.KVStoreService { + storeKey := ProvideKVStoreKey(config, key, app) + +return kvStoreService{ + key: storeKey +} +} + +func ProvideMemoryStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.MemoryStoreService { + storeKey := ProvideMemoryStoreKey(config, key, app) + +return memStoreService{ + key: storeKey +} +} + +func ProvideTransientStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.TransientStoreService { + storeKey := ProvideTransientStoreKey(config, key, app) + +return transientStoreService{ + key: storeKey +} +} + +func ProvideEventService() + +event.Service { + return EventService{ +} +} + +func ProvideCometInfoService() + +comet.BlockInfoService { + return cometInfoService{ +} +} + +func ProvideHeaderInfoService(app *AppBuilder) + +header.Service { + return headerInfoService{ +} +} + +func ProvideBasicManager(app *AppBuilder) + +module.BasicManager { + return app.app.basicManager +} + +type ( + / ValidatorAddressCodec is an alias for address.Codec for validator addresses. + ValidatorAddressCodec address.Codec + + / ConsensusAddressCodec is an alias for address.Codec for validator consensus addresses. 
+ ConsensusAddressCodec address.Codec +) + +type AddressCodecInputs struct { + depinject.In + + AuthConfig *authmodulev1.Module `optional:"true"` + StakingConfig *stakingmodulev1.Module `optional:"true"` + + AddressCodecFactory func() + +address.Codec `optional:"true"` + ValidatorAddressCodecFactory func() + +ValidatorAddressCodec `optional:"true"` + ConsensusAddressCodecFactory func() + +ConsensusAddressCodec `optional:"true"` +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. +func ProvideAddressCodec(in AddressCodecInputs) (address.Codec, ValidatorAddressCodec, ConsensusAddressCodec) { + if in.AddressCodecFactory != nil && in.ValidatorAddressCodecFactory != nil && in.ConsensusAddressCodecFactory != nil { + return in.AddressCodecFactory(), in.ValidatorAddressCodecFactory(), in.ConsensusAddressCodecFactory() +} + if in.AuthConfig == nil || in.AuthConfig.Bech32Prefix == "" { + panic("auth config bech32 prefix cannot be empty if no custom address codec is provided") +} + if in.StakingConfig == nil { + in.StakingConfig = &stakingmodulev1.Module{ +} + +} + if in.StakingConfig.Bech32PrefixValidator == "" { + in.StakingConfig.Bech32PrefixValidator = fmt.Sprintf("%svaloper", in.AuthConfig.Bech32Prefix) +} + if in.StakingConfig.Bech32PrefixConsensus == "" { + in.StakingConfig.Bech32PrefixConsensus = fmt.Sprintf("%svalcons", in.AuthConfig.Bech32Prefix) +} + +return addresscodec.NewBech32Codec(in.AuthConfig.Bech32Prefix), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixValidator), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixConsensus) +} +```

Additionally, runtime automatically registers other essential services (such as gRPC routes) and makes them available to the app:

* AutoCLI Query Service
* Reflection Service
* Custom module services

```go expandable +package runtime + +import ( + + "encoding/json" + "io" + + dbm "github.com/cosmos/cosmos-db" + 
"github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AppBuilder is a type that is injected into a container by the runtime module +/ (as *AppBuilder) + +which can be used to create an app which is compatible with +/ the existing app.go initialization conventions. +type AppBuilder struct { + app *App +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *AppBuilder) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.app.DefaultGenesis() +} + +/ Build builds an *App instance. +func (a *AppBuilder) + +Build(db dbm.DB, traceStore io.Writer, baseAppOptions ...func(*baseapp.BaseApp)) *App { + for _, option := range a.app.baseAppOptions { + baseAppOptions = append(baseAppOptions, option) +} + + / set routers first in case they get modified by other options + baseAppOptions = append( + []func(*baseapp.BaseApp) { + func(bApp *baseapp.BaseApp) { + bApp.SetMsgServiceRouter(a.app.msgServiceRouter) + +bApp.SetGRPCQueryRouter(a.app.grpcQueryRouter) +}, +}, + baseAppOptions..., + ) + bApp := baseapp.NewBaseApp(a.app.config.AppName, a.app.logger, db, nil, baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(a.app.interfaceRegistry) + +bApp.MountStores(a.app.storeKeys...) + +a.app.BaseApp = bApp + a.app.configurator = module.NewConfigurator(a.app.cdc, a.app.MsgServiceRouter(), a.app.GRPCQueryRouter()) + if err := a.app.ModuleManager.RegisterServices(a.app.configurator); err != nil { + panic(err) +} + +return a.app +} +``` + +### 4. 
Application Building + +The `AppBuilder` type provides a structured way to build applications: + +```go expandable +package runtime + +import ( + + "encoding/json" + "io" + + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AppBuilder is a type that is injected into a container by the runtime module +/ (as *AppBuilder) + +which can be used to create an app which is compatible with +/ the existing app.go initialization conventions. +type AppBuilder struct { + app *App +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *AppBuilder) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.app.DefaultGenesis() +} + +/ Build builds an *App instance. +func (a *AppBuilder) + +Build(db dbm.DB, traceStore io.Writer, baseAppOptions ...func(*baseapp.BaseApp)) *App { + for _, option := range a.app.baseAppOptions { + baseAppOptions = append(baseAppOptions, option) +} + + / set routers first in case they get modified by other options + baseAppOptions = append( + []func(*baseapp.BaseApp) { + func(bApp *baseapp.BaseApp) { + bApp.SetMsgServiceRouter(a.app.msgServiceRouter) + +bApp.SetGRPCQueryRouter(a.app.grpcQueryRouter) +}, +}, + baseAppOptions..., + ) + bApp := baseapp.NewBaseApp(a.app.config.AppName, a.app.logger, db, nil, baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(a.app.interfaceRegistry) + +bApp.MountStores(a.app.storeKeys...) + +a.app.BaseApp = bApp + a.app.configurator = module.NewConfigurator(a.app.cdc, a.app.MsgServiceRouter(), a.app.GRPCQueryRouter()) + if err := a.app.ModuleManager.RegisterServices(a.app.configurator); err != nil { + panic(err) +} + +return a.app +} +``` + +Key building steps: + +1. Configuration loading +2. Module registration +3. Service setup +4. Store mounting +5. 
Router configuration
+
+An application only needs to call `AppBuilder.Build` to create a fully configured application (`runtime.App`).
+
+More information on building applications can be found in the [next section](/docs/sdk/next/documentation/application-framework/app-go-di).
+
+## Best Practices
+
+1. **Module Order**: Carefully consider the order of modules in begin\_blockers, end\_blockers, and pre\_blockers.
+2. **Store Keys**: Use override\_store\_keys only when necessary, to maintain clarity.
+3. **Genesis Order**: Maintain the correct initialization order in init\_genesis.
+4. **Migration Management**: Use order\_migrations to control upgrade paths.
+
+### Migration Considerations
+
+When upgrading between versions:
+
+1. Review the migration order specified in `order_migrations`
+2. Ensure all required modules are included in the configuration
+3. Validate store key configurations
+4. Test the upgrade path thoroughly
diff --git a/docs/sdk/next/documentation/application-framework/runtx_middleware.mdx b/docs/sdk/next/documentation/application-framework/runtx_middleware.mdx
new file mode 100644
index 00000000..d10836b5
--- /dev/null
+++ b/docs/sdk/next/documentation/application-framework/runtx_middleware.mdx
@@ -0,0 +1,179 @@
+---
+title: RunTx recovery middleware
+---
+
+The `BaseApp.runTx()` function handles Go panics that might occur during transaction execution, for example, when a keeper encounters an invalid state and panics.
+Depending on the panic type, a different handler is used; for instance, the default one prints an error log message.
+Recovery middleware lets Cosmos SDK application developers add custom panic recovery.
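The middleware chain described here is a chain of responsibility over the value returned by `recover()`: each handler either claims the panic value (non-nil error) or passes it along. A minimal standalone sketch of that pattern — `handler`, `process`, `outOfGas`, and `fallback` are simplified hypothetical names, not the SDK's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// handler returns nil when it does not recognize the panic value,
// or a non-nil error once it has handled it.
type handler func(recoveryObj interface{}) error

// process tries each handler in order and stops at the first one
// that claims the panic value.
func process(recoveryObj interface{}, handlers ...handler) error {
	for _, h := range handlers {
		if err := h(recoveryObj); err != nil {
			return err
		}
	}
	return nil
}

// outOfGas claims only panic values that look like gas errors.
func outOfGas(obj interface{}) error {
	if s, ok := obj.(string); ok && s == "out of gas" {
		return errors.New("tx rejected: out of gas")
	}
	return nil
}

// fallback claims anything, like a default (last in chain) middleware.
func fallback(obj interface{}) error {
	return fmt.Errorf("recovered: %v", obj)
}

func main() {
	fmt.Println(process("out of gas", outOfGas, fallback)) // handled by the first handler
	fmt.Println(process(42, outOfGas, fallback))           // falls through to the default
}
```

The real chain differs in that each middleware returns the next one to try, but the ordering and first-match semantics are the same.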
+
+More context can be found in the corresponding [ADR-022](/docs/common/pages/adr-comprehensive#adr-022-custom-baseapp-panic-handling) and the implementation in [recovery.go](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/baseapp/recovery.go).
+
+## Interface
+
+```go expandable
+package baseapp
+
+import (
+
+ "fmt"
+ "runtime/debug"
+
+ errorsmod "cosmossdk.io/errors"
+ storetypes "cosmossdk.io/store/types"
+
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+/ RecoveryHandler handles recovery()
+
+object.
+/ Return a non-nil error if recoveryObj was processed.
+/ Return nil if recoveryObj was not processed.
+type RecoveryHandler func(recoveryObj interface{
+})
+
+error
+
+/ recoveryMiddleware is wrapper for RecoveryHandler to create chained recovery handling.
+/ returns (recoveryMiddleware, nil)
+ if recoveryObj was not processed and should be passed to the next middleware in chain.
+/ returns (nil, error)
+ if recoveryObj was processed and middleware chain processing should be stopped.
+type recoveryMiddleware func(recoveryObj interface{
+}) (recoveryMiddleware, error)
+
+/ processRecovery processes recoveryMiddleware chain for recovery()
+
+object.
+/ Chain processing stops on non-nil error or when chain is processed.
+func processRecovery(recoveryObj interface{
+}, middleware recoveryMiddleware)
+
+error {
+ if middleware == nil {
+ return nil
+}
+
+next, err := middleware(recoveryObj)
+ if err != nil {
+ return err
+}
+
+return processRecovery(recoveryObj, next)
+}
+
+/ newRecoveryMiddleware creates a RecoveryHandler middleware.
+func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware)
+
+recoveryMiddleware {
+ return func(recoveryObj interface{
+}) (recoveryMiddleware, error) {
+ if err := handler(recoveryObj); err != nil {
+ return nil, err
+}
+
+return next, nil
+}
+}
+
+/ newOutOfGasRecoveryMiddleware creates a standard OutOfGas recovery middleware for app.runTx method.
+func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware)
+
+recoveryMiddleware {
+ handler := func(recoveryObj interface{
+})
+
+error {
+ err, ok := recoveryObj.(storetypes.ErrorOutOfGas)
+ if !ok {
+ return nil
+}
+
+return errorsmod.Wrap(
+ sdkerrors.ErrOutOfGas, fmt.Sprintf(
+ "out of gas in location: %v; gasWanted: %d, gasUsed: %d",
+ err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(),
+ ),
+ )
+}
+
+return newRecoveryMiddleware(handler, next)
+}
+
+/ newDefaultRecoveryMiddleware creates a default (last in chain)
+
+recovery middleware for app.runTx method.
+func newDefaultRecoveryMiddleware()
+
+recoveryMiddleware {
+ handler := func(recoveryObj interface{
+})
+
+error {
+ return errorsmod.Wrap(
+ sdkerrors.ErrPanic, fmt.Sprintf(
+ "recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack()),
+ ),
+ )
+}
+
+return newRecoveryMiddleware(handler, nil)
+}
+```
+
+`recoveryObj` is the return value of the `recover()` function from the `builtin` Go package.
+
+**Contract:**
+
+* RecoveryHandler returns `nil` if `recoveryObj` wasn't handled and should be passed to the next recovery middleware;
+* RecoveryHandler returns a non-nil `error` if `recoveryObj` was handled.
+
+## Registering a custom RecoveryHandler
+
+The `BaseApp.AddRunTxRecoveryHandler(handlers ...RecoveryHandler)` method adds recovery middleware to the default recovery chain.
+
+## Example
+
+Let's assume we want to emit the "Consensus failure" chain state if some particular error occurs.
+
+We have a module keeper that panics:
+
+```go
+func (k FooKeeper)
+
+Do(obj interface{
+}) {
+ if obj == nil {
+ / that shouldn't happen, we need to crash the app
+ err := errorsmod.Wrap(fooTypes.InternalError, "obj is nil")
+
+panic(err)
+}
+}
+```
+
+By default, that panic would be recovered and an error message printed to the log.
To override that behavior we should register a custom RecoveryHandler: + +```go expandable +/ Cosmos SDK application constructor + customHandler := func(recoveryObj interface{ +}) + +error { + err, ok := recoveryObj.(error) + if !ok { + return nil +} + if fooTypes.InternalError.Is(err) { + panic(fmt.Errorf("FooKeeper did panic with error: %w", err)) +} + +return nil +} + baseApp := baseapp.NewBaseApp(...) + +baseApp.AddRunTxRecoveryHandler(customHandler) +``` diff --git a/docs/sdk/next/documentation/application-framework/vote-extensions.mdx b/docs/sdk/next/documentation/application-framework/vote-extensions.mdx new file mode 100644 index 00000000..3a5fee71 --- /dev/null +++ b/docs/sdk/next/documentation/application-framework/vote-extensions.mdx @@ -0,0 +1,185 @@ +--- +title: Vote Extensions +--- + +## Synopsis + +This section describes how the application can define and use vote extensions +defined in ABCI++. + +## Extend Vote + +ABCI++ allows an application to extend a pre-commit vote with arbitrary data. This +process does NOT have to be deterministic, and the data returned can be unique to the +validator process. The Cosmos SDK defines `baseapp.ExtendVoteHandler`: + +```go +type ExtendVoteHandler func(Context, *abci.ExtendVoteRequest) (*abci.ExtendVoteResponse, error) +``` + +An application can set this handler in `app.go` via the `baseapp.SetExtendVoteHandler` +`BaseApp` option function. The `sdk.ExtendVoteHandler`, if defined, is called during +the `ExtendVote` ABCI method. Note, if an application decides to implement +`baseapp.ExtendVoteHandler`, it MUST return a non-nil `VoteExtension`. However, the vote +extension can be empty. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#extendvote) +for more details. + +There are many decentralized censorship-resistant use cases for vote extensions. 
+For example, a validator may want to submit prices for a price oracle or encryption +shares for an encrypted transaction mempool. Note, an application should be careful +to consider the size of the vote extensions as they could increase latency in block +production. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/docs/qa/CometBFT-QA-38.md#vote-extensions-testbed) +for more details. + +## Verify Vote Extension + +Similar to extending a vote, an application can also verify vote extensions from +other validators when validating their pre-commits. For a given vote extension, +this process MUST be deterministic. The Cosmos SDK defines `sdk.VerifyVoteExtensionHandler`: + +```go expandable +package types + +import ( + + abci "github.com/cometbft/cometbft/abci/types" +) + +/ InitChainer initializes application state at genesis +type InitChainer func(ctx Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) + +/ PrepareCheckStater runs code during commit after the block has been committed, and the `checkState` +/ has been branched for the new block. +type PrepareCheckStater func(ctx Context) + +/ Precommiter runs code during commit immediately before the `deliverState` is written to the `rootMultiStore`. +type Precommiter func(ctx Context) + +/ PeerFilter responds to p2p filtering queries from Tendermint +type PeerFilter func(info string) *abci.ResponseQuery + +/ ProcessProposalHandler defines a function type alias for processing a proposer +type ProcessProposalHandler func(Context, *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) + +/ PrepareProposalHandler defines a function type alias for preparing a proposal +type PrepareProposalHandler func(Context, *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) + +/ ExtendVoteHandler defines a function type alias for extending a pre-commit vote. 
+type ExtendVoteHandler func(Context, *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) + +/ VerifyVoteExtensionHandler defines a function type alias for verifying a +/ pre-commit vote extension. +type VerifyVoteExtensionHandler func(Context, *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) + +/ BeginBlocker defines a function type alias for executing application +/ business logic before transactions are executed. +/ +/ Note: The BeginBlock ABCI method no longer exists in the ABCI specification +/ as of CometBFT v0.38.0. This function type alias is provided for backwards +/ compatibility with applications that still use the BeginBlock ABCI method +/ and allows for existing BeginBlock functionality within applications. +type BeginBlocker func(Context) (BeginBlock, error) + +/ EndBlocker defines a function type alias for executing application +/ business logic after transactions are executed but before committing. +/ +/ Note: The EndBlock ABCI method no longer exists in the ABCI specification +/ as of CometBFT v0.38.0. This function type alias is provided for backwards +/ compatibility with applications that still use the EndBlock ABCI method +/ and allows for existing EndBlock functionality within applications. +type EndBlocker func(Context) (EndBlock, error) + +/ EndBlock defines a type which contains endblock events and validator set updates +type EndBlock struct { + ValidatorUpdates []abci.ValidatorUpdate + Events []abci.Event +} + +/ BeginBlock defines a type which contains beginBlock events +type BeginBlock struct { + Events []abci.Event +} +``` + +An application can set this handler in `app.go` via the `baseapp.SetVerifyVoteExtensionHandler` +`BaseApp` option function. The `sdk.VerifyVoteExtensionHandler`, if defined, is called +during the `VerifyVoteExtension` ABCI method. If an application defines a vote +extension handler, it should also define a verification handler. 
Note, not all
+validators will share the same view of what vote extensions they verify depending
+on how votes are propagated. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#verifyvoteextension)
+for more details.
+
+## Vote Extension Propagation
+
+The agreed upon vote extensions at height `H` are provided to the proposing validator
+at height `H+1` during `PrepareProposal`. As a result, the vote extensions are
+not natively provided or exposed to the remaining validators during `ProcessProposal`.
+Therefore, if an application requires that the agreed upon vote extensions from
+height `H` are available to all validators at `H+1`, the application must propagate
+these vote extensions manually in the block proposal itself. This can be done by
+"injecting" them into the block proposal, since the `Txs` field in `PrepareProposal`
+is just a slice of byte slices.
+
+`FinalizeBlock` will ignore any byte slice that doesn't implement an `sdk.Tx`, so
+any injected vote extensions will safely be ignored in `FinalizeBlock`. For more
+details on propagation, see the [ABCI++ 2.0 ADR](/docs/sdk/next/documentation/legacy/adr-comprehensive#vote-extension-propagation--verification).
+
+### Recovery of injected Vote Extensions
+
+As stated before, vote extensions can be injected into a block proposal (along with
+other transactions in the `Txs` field). The Cosmos SDK provides a pre-FinalizeBlock
+hook to allow applications to recover vote extensions, perform any necessary
+computation on them, and then store the results in the cached store. These results
+will be available to the application during the subsequent `FinalizeBlock` call.
+ +An example of how a pre-FinalizeBlock hook could look is shown below: + +```go expandable +app.SetPreBlocker(func(ctx sdk.Context, req *abci.FinalizeBlockRequest) + +error { + allVEs := []VE{ +} / store all parsed vote extensions here + for _, tx := range req.Txs { + / define a custom function that tries to parse the tx as a vote extension + ve, ok := parseVoteExtension(tx) + if !ok { + continue +} + +allVEs = append(allVEs, ve) +} + + / perform any necessary computation on the vote extensions and store the result + / in the cached store + result := compute(allVEs) + err := storeVEResult(ctx, result) + if err != nil { + return err +} + +return nil +}) +``` + +Then, in an app's module, the application can retrieve the result of the computation +of vote extensions from the cached store: + +```go expandable +func (k Keeper) + +BeginBlocker(ctx context.Context) + +error { + / retrieve the result of the computation of vote extensions from the cached store + result, err := k.GetVEResult(ctx) + if err != nil { + return err +} + + / use the result of the computation of vote extensions + k.setSomething(result) + +return nil +} +``` diff --git a/docs/sdk/next/documentation/consensus-block-production/checktx.mdx b/docs/sdk/next/documentation/consensus-block-production/checktx.mdx new file mode 100644 index 00000000..37908656 --- /dev/null +++ b/docs/sdk/next/documentation/consensus-block-production/checktx.mdx @@ -0,0 +1,1577 @@ +--- +title: CheckTx +description: >- + CheckTx is called by the BaseApp when comet receives a transaction from a + client, over the p2p network or RPC. The CheckTx method is responsible for + validating the transaction and returning an error if the transaction is + invalid. +--- + +CheckTx is called by the `BaseApp` when comet receives a transaction from a client, over the p2p network or RPC. The CheckTx method is responsible for validating the transaction and returning an error if the transaction is invalid. 
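In other words, CheckTx runs only the validation half of the pipeline — the AnteHandler checks — and never executes the transaction's messages. A toy sketch of that split; `tx`, `anteCheck`, and `checkTx` here are hypothetical stand-ins, not the SDK's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// tx is a stand-in for a decoded transaction.
type tx struct {
	Fee    int
	Signed bool
}

// anteCheck mirrors what an AnteHandler chain does: cheap checks
// (signatures, fees) that gate entry into the mempool.
func anteCheck(t tx) error {
	if !t.Signed {
		return errors.New("unauthorized: missing signature")
	}
	if t.Fee <= 0 {
		return errors.New("insufficient fee")
	}
	return nil
}

// checkTx only validates; the transaction's messages are not executed
// here — that happens later, when the tx is included in a block.
func checkTx(t tx) error {
	return anteCheck(t)
}

func main() {
	fmt.Println(checkTx(tx{Fee: 1, Signed: true})) // accepted into the mempool
	fmt.Println(checkTx(tx{Fee: 0, Signed: true})) // rejected before ever reaching a block
}
```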
+ +```mermaid +graph TD + subgraph SDK[Cosmos SDK] + B[Baseapp] + A[AnteHandlers] + B <-->|Validate TX| A + end + C[CometBFT] <-->|CheckTx|SDK + U((User)) -->|Submit TX| C + N[P2P] -->|Receive TX| C +``` + +```go expandable +package baseapp + +import ( + + "context" + "errors" + "fmt" + "sort" + "strings" + "time" + + abcitypes "github.com/cometbft/cometbft/abci/types" + abci "github.com/cometbft/cometbft/api/cometbft/abci/v1" + cmtproto "github.com/cometbft/cometbft/api/cometbft/types/v1" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc/codes" + grpcstatus "google.golang.org/grpc/status" + + corecomet "cosmossdk.io/core/comet" + coreheader "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/store/rootmulti" + snapshottypes "cosmossdk.io/store/snapshots/types" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ Supported ABCI Query prefixes and paths +const ( + QueryPathApp = "app" + QueryPathCustom = "custom" + QueryPathP2P = "p2p" + QueryPathStore = "store" + + QueryPathBroadcastTx = "/cosmos.tx.v1beta1.Service/BroadcastTx" +) + +/ InitChain implements the ABCI interface. It initializes the application's state +/ and sets up the initial validator set. +func (app *BaseApp) + +InitChain(req *abci.InitChainRequest) (*abci.InitChainResponse, error) { + if req.ChainId != app.chainID { + return nil, fmt.Errorf("invalid chain-id on InitChain; expected: %s, got: %s", app.chainID, req.ChainId) +} + + / On a new chain, we consider the init chain block height as 0, even though + / req.InitialHeight is 1 by default. 
+ initHeader := cmtproto.Header{ + ChainID: req.ChainId, + Time: req.Time +} + +app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId) + + / Set the initial height, which will be used to determine if we are proposing + / or processing the first block or not. + app.initialHeight = req.InitialHeight + if app.initialHeight == 0 { / If initial height is 0, set it to 1 + app.initialHeight = 1 +} + + / if req.InitialHeight is > 1, then we set the initial version on all stores + if req.InitialHeight > 1 { + initHeader.Height = req.InitialHeight + if err := app.cms.SetInitialVersion(req.InitialHeight); err != nil { + return nil, err +} + +} + + / initialize states with a correct header + app.setState(execModeFinalize, initHeader) + +app.setState(execModeCheck, initHeader) + + / Store the consensus params in the BaseApp's param store. Note, this must be + / done after the finalizeBlockState and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + err := app.StoreConsensusParams(app.finalizeBlockState.Context(), *req.ConsensusParams) + if err != nil { + return nil, err +} + +} + +defer func() { + / InitChain represents the state of the application BEFORE the first block, + / i.e. the genesis block. This means that when processing the app's InitChain + / handler, the block height is zero by default. However, after Commit is called + / the height needs to reflect the true block height. + initHeader.Height = req.InitialHeight + app.checkState.SetContext(app.checkState.Context().WithBlockHeader(initHeader). + WithHeaderInfo(coreheader.Info{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time, +})) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockHeader(initHeader). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time, +})) +}() + if app.initChainer == nil { + return &abci.InitChainResponse{ +}, nil +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter())) + +res, err := app.initChainer(app.finalizeBlockState.Context(), req) + if err != nil { + return nil, err +} + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + return nil, fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ) +} + +sort.Sort(abcitypes.ValidatorUpdates(req.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + return nil, fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i) +} + +} + +} + + / NOTE: We don't commit, but FinalizeBlock for block InitialHeight starts from + / this FinalizeBlockState. + return &abci.InitChainResponse{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: app.LastCommitID().Hash, +}, nil +} + +/ Info implements the ABCI interface. It returns information about the application. 
+func (app *BaseApp) + +Info(_ *abci.InfoRequest) (*abci.InfoResponse, error) { + lastCommitID := app.cms.LastCommitID() + appVersion := InitialAppVersion + if lastCommitID.Version > 0 { + ctx, err := app.CreateQueryContext(lastCommitID.Version, false) + if err != nil { + return nil, fmt.Errorf("failed creating query context: %w", err) +} + +appVersion, err = app.AppVersion(ctx) + if err != nil { + return nil, fmt.Errorf("failed getting app version: %w", err) +} + +} + +return &abci.InfoResponse{ + Data: app.name, + Version: app.version, + AppVersion: appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +}, nil +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. +func (app *BaseApp) + +Query(_ context.Context, req *abci.QueryRequest) (resp *abci.QueryResponse, err error) { + / add panic recovery for all queries + / + / Ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + resp = queryResult(errorsmod.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + start := telemetry.Now() + +defer telemetry.MeasureSince(start, req.Path) + if req.Path == QueryPathBroadcastTx { + return queryResult(errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "can't route a broadcast tx message"), app.trace), nil +} + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req), nil +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), 
app.trace), nil +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + resp = handleQueryApp(app, path, req) + case QueryPathStore: + resp = handleQueryStore(app, path, *req) + case QueryPathP2P: + resp = handleQueryP2P(app, path) + +default: + resp = queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +return resp, nil +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ListSnapshots(req *abci.ListSnapshotsRequest) (*abci.ListSnapshotsResponse, error) { + resp := &abci.ListSnapshotsResponse{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp, nil +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return nil, err +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to convert ABCI snapshots", "err", err) + +return nil, err +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp, nil +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req *abci.LoadSnapshotChunkRequest) (*abci.LoadSnapshotChunkResponse, error) { + if app.snapshotManager == nil { + return &abci.LoadSnapshotChunkResponse{ +}, nil +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return nil, err +} + +return &abci.LoadSnapshotChunkResponse{ + Chunk: chunk +}, nil +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req *abci.OfferSnapshotRequest) (*abci.OfferSnapshotResponse, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_ABORT +}, nil +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_REJECT +}, nil +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_REJECT +}, nil +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_REJECT_FORMAT +}, nil + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_REJECT +}, nil + + default: + / CometBFT errors are defined here: https://github.com/cometbft/cometbft/blob/main/statesync/syncer.go + / It may happen that in case of a CometBFT error, such as a timeout (which occurs after two minutes), + / the process is aborted. This is done intentionally because deleting the database programmatically + / can lead to more complicated situations. + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a + / different snapshot, so we ask CometBFT to abort all snapshot restoration. 
+ return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_ABORT +}, nil +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ApplySnapshotChunk(req *abci.ApplySnapshotChunkRequest) (*abci.ApplySnapshotChunkResponse, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ApplySnapshotChunkResponse{ + Result: abci.APPLY_SNAPSHOT_CHUNK_RESULT_ABORT +}, nil +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return &abci.ApplySnapshotChunkResponse{ + Result: abci.APPLY_SNAPSHOT_CHUNK_RESULT_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return &abci.ApplySnapshotChunkResponse{ + Result: abci.APPLY_SNAPSHOT_CHUNK_RESULT_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +}, nil + + default: + app.logger.Error("failed to restore snapshot", "err", err) + +return &abci.ApplySnapshotChunkResponse{ + Result: abci.APPLY_SNAPSHOT_CHUNK_RESULT_ABORT +}, nil +} +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req *abci.CheckTxRequest) (*abci.CheckTxResponse, error) { + var mode execMode + switch { + case req.Type == abci.CHECK_TX_TYPE_CHECK: + mode = execModeCheck + case req.Type == abci.CHECK_TX_TYPE_RECHECK: + mode = execModeReCheck + + default: + return nil, fmt.Errorf("unknown RequestCheckTx type: %s", req.Type) +} + if app.checkTxHandler == nil { + gInfo, result, anteEvents, err := app.runTx(mode, req.Tx, nil) + if err != nil { + return responseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace), nil +} + +return &abci.CheckTxResponse{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +}, nil +} + +return app.checkTxHandler(app.runTx, req) +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. 
+/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req *abci.PrepareProposalRequest) (resp *abci.PrepareProposalResponse, err error) { + if app.prepareProposal == nil { + return nil, errors.New("PrepareProposal handler not set") +} + + / Always reset state given that PrepareProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + +app.setState(execModePrepareProposal, header) + + / CometBFT must never call PrepareProposal with a height of 0. + / + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("PrepareProposal called with invalid height") +} + +app.prepareProposalState.SetContext(app.getContextForProposal(app.prepareProposalState.Context(), req.Height). + WithVoteInfos(toVoteInfo(req.LocalLastCommit.Votes)). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithProposer(req.ProposerAddress). + WithExecMode(sdk.ExecModePrepareProposal). + WithCometInfo(corecomet.Info{ + Evidence: sdk.ToSDKEvidence(req.Misbehavior), + ValidatorsHash: req.NextValidatorsHash, + ProposerAddress: req.ProposerAddress, + LastCommit: sdk.ToSDKExtendedCommitInfo(req.LocalLastCommit), +}). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, +})) + +app.prepareProposalState.SetContext(app.prepareProposalState.Context(). + WithConsensusParams(app.GetConsensusParams(app.prepareProposalState.Context())). 
+ WithBlockGasMeter(app.getBlockGasMeter(app.prepareProposalState.Context()))) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = &abci.PrepareProposalResponse{ + Txs: req.Txs +} + +} + +}() + +resp, err = app.prepareProposal(app.prepareProposalState.Context(), req) + if err != nil { + app.logger.Error("failed to prepare proposal", "height", req.Height, "time", req.Time, "err", err) + +return &abci.PrepareProposalResponse{ + Txs: req.Txs +}, nil +} + +return resp, nil +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ CometBFT that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req *abci.ProcessProposalRequest) (resp *abci.ProcessProposalResponse, err error) { + if app.processProposal == nil { + return nil, errors.New("ProcessProposal handler not set") +} + + / CometBFT must never call ProcessProposal with a height of 0. 
+ / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("ProcessProposal called with invalid height") +} + + / Always reset state given that ProcessProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + +app.setState(execModeProcessProposal, header) + + / Since the application can get access to FinalizeBlock state and write to it, + / we must be sure to reset it in case ProcessProposal timeouts and is called + / again in a subsequent round. However, we only want to do this after we've + / processed the first block, as we want to avoid overwriting the finalizeState + / after state changes during InitChain. + if req.Height > app.initialHeight { + / abort any running OE + app.optimisticExec.Abort() + +app.setState(execModeFinalize, header) +} + +app.processProposalState.SetContext(app.getContextForProposal(app.processProposalState.Context(), req.Height). + WithVoteInfos(req.ProposedLastCommit.Votes). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithCometInfo(corecomet.Info{ + ProposerAddress: req.ProposerAddress, + ValidatorsHash: req.NextValidatorsHash, + Evidence: sdk.ToSDKEvidence(req.Misbehavior), + LastCommit: sdk.ToSDKCommitInfo(req.ProposedLastCommit), +}, + ). + WithExecMode(sdk.ExecModeProcessProposal). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, +})) + +app.processProposalState.SetContext(app.processProposalState.Context(). + WithConsensusParams(app.GetConsensusParams(app.processProposalState.Context())). 
+ WithBlockGasMeter(app.getBlockGasMeter(app.processProposalState.Context()))) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = &abci.ProcessProposalResponse{ + Status: abci.PROCESS_PROPOSAL_STATUS_REJECT +} + +} + +}() + +resp, err = app.processProposal(app.processProposalState.Context(), req) + if err != nil { + app.logger.Error("failed to process proposal", "height", req.Height, "time", req.Time, "hash", fmt.Sprintf("%X", req.Hash), "err", err) + +return &abci.ProcessProposalResponse{ + Status: abci.PROCESS_PROPOSAL_STATUS_REJECT +}, nil +} + + / Only execute optimistic execution if the proposal is accepted, OE is + / enabled and the block height is greater than the initial height. During + / the first block we'll be carrying state from InitChain, so it would be + / impossible for us to easily revert. + / After the first block has been processed, the next blocks will get executed + / optimistically, so that when the ABCI client calls `FinalizeBlock` the app + / can have a response ready. + if resp.Status == abci.PROCESS_PROPOSAL_STATUS_ACCEPT && + app.optimisticExec.Enabled() && + req.Height > app.initialHeight { + app.optimisticExec.Execute(req) +} + +return resp, nil +} + +/ ExtendVote implements the ExtendVote ABCI method and returns a ResponseExtendVote. +/ It calls the application's ExtendVote handler which is responsible for performing +/ application-specific business logic when sending a pre-commit for the NEXT +/ block height. The extensions response may be non-deterministic but must always +/ be returned, even if empty. +/ +/ Agreed upon vote extensions are made available to the proposer of the next +/ height and are committed in the subsequent height, i.e. H+2. An error is +/ returned if vote extensions are not enabled or if extendVote fails or panics. 
+func (app *BaseApp) + +ExtendVote(_ context.Context, req *abci.ExtendVoteRequest) (resp *abci.ExtendVoteResponse, err error) { + / Always reset state given that ExtendVote and VerifyVoteExtension can timeout + / and be called again in a subsequent round. + var ctx sdk.Context + + / If we're extending the vote for the initial height, we need to use the + / finalizeBlockState context, otherwise we don't get the uncommitted data + / from InitChain. + if req.Height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() +} + +else { + ms := app.cms.CacheMultiStore() + +ctx = sdk.NewContext(ms, false, app.logger).WithStreamingManager(app.streamingManager).WithChainID(app.chainID).WithBlockHeight(req.Height) +} + if app.extendVote == nil { + return nil, errors.New("application ExtendVote handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(ctx) + + / Note: In this case, we do want to extend vote if the height is equal or + / greater than VoteExtensionsEnableHeight. This defers from the check done + / in ValidateVoteExtensions and PrepareProposal in which we'll check for + / vote extensions on VoteExtensionsEnableHeight+1. + extsEnabled := cp.Feature != nil && req.Height >= cp.Feature.VoteExtensionsEnableHeight.Value && cp.Feature.VoteExtensionsEnableHeight.Value != 0 + if !extsEnabled { + / check abci params + extsEnabled = cp.Abci != nil && req.Height >= cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + if !extsEnabled { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to ExtendVote at height %d", req.Height) +} + +} + +ctx = ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVoteExtension). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Hash: req.Hash, +}) + + / add a deferred recover handler in case extendVote panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in ExtendVote", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +err = fmt.Errorf("recovered application panic in ExtendVote: %v", r) +} + +}() + +resp, err = app.extendVote(ctx, req) + if err != nil { + app.logger.Error("failed to extend vote", "height", req.Height, "hash", fmt.Sprintf("%X", req.Hash), "err", err) + +return &abci.ExtendVoteResponse{ + VoteExtension: []byte{ +}}, nil +} + +return resp, err +} + +/ VerifyVoteExtension implements the VerifyVoteExtension ABCI method and returns +/ a ResponseVerifyVoteExtension. It calls the applications' VerifyVoteExtension +/ handler which is responsible for performing application-specific business +/ logic in verifying a vote extension from another validator during the pre-commit +/ phase. The response MUST be deterministic. An error is returned if vote +/ extensions are not enabled or if verifyVoteExt fails or panics. +/ We highly recommend a size validation due to performance degradation, +/ see more here https://docs.cometbft.com/v1.0/references/qa/cometbft-qa-38#vote-extensions-testbed +func (app *BaseApp) + +VerifyVoteExtension(req *abci.VerifyVoteExtensionRequest) (resp *abci.VerifyVoteExtensionResponse, err error) { + if app.verifyVoteExt == nil { + return nil, errors.New("application VerifyVoteExtension handler not set") +} + +var ctx sdk.Context + + / If we're verifying the vote for the initial height, we need to use the + / finalizeBlockState context, otherwise we don't get the uncommitted data + / from InitChain. 
+ if req.Height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() +} + +else { + ms := app.cms.CacheMultiStore() + +ctx = sdk.NewContext(ms, false, app.logger).WithStreamingManager(app.streamingManager).WithChainID(app.chainID).WithBlockHeight(req.Height) +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(ctx) + + / Note: we verify votes extensions on VoteExtensionsEnableHeight+1. Check + / comment in ExtendVote and ValidateVoteExtensions for more details. + extsEnabled := cp.Feature.VoteExtensionsEnableHeight != nil && req.Height >= cp.Feature.VoteExtensionsEnableHeight.Value && cp.Feature.VoteExtensionsEnableHeight.Value != 0 + if !extsEnabled { + / check abci params + extsEnabled = cp.Abci != nil && req.Height >= cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + if !extsEnabled { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to VerifyVoteExtension at height %d", req.Height) +} + +} + + / add a deferred recover handler in case verifyVoteExt panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in VerifyVoteExtension", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "validator", fmt.Sprintf("%X", req.ValidatorAddress), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in VerifyVoteExtension: %v", r) +} + +}() + +ctx = ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVerifyVoteExtension). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Hash: req.Hash, +}) + +resp, err = app.verifyVoteExt(ctx, req) + if err != nil { + app.logger.Error("failed to verify vote extension", "height", req.Height, "err", err) + +return &abci.VerifyVoteExtensionResponse{ + Status: abci.VERIFY_VOTE_EXTENSION_STATUS_REJECT +}, nil +} + +return resp, err +} + +/ internalFinalizeBlock executes the block, called by the Optimistic +/ Execution flow or by the FinalizeBlock ABCI method. The context received is +/ only used to handle early cancellation, for anything related to state app.finalizeBlockState.Context() +/ must be used. +func (app *BaseApp) + +internalFinalizeBlock(ctx context.Context, req *abci.FinalizeBlockRequest) (*abci.FinalizeBlockResponse, error) { + var events []abci.Event + if err := app.checkHalt(req.Height, req.Time); err != nil { + return nil, err +} + if err := app.validateFinalizeBlockHeight(req); err != nil { + return nil, err +} + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(storetypes.TraceContext( + map[string]any{"blockHeight": req.Height +}, + )) +} + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + + / finalizeBlockState should be set on InitChain or ProcessProposal. If it is + / nil, it means we are replaying this block and we need to set the state here + / given that during block replay ProcessProposal is not executed by CometBFT. + if app.finalizeBlockState == nil { + app.setState(execModeFinalize, header) +} + + / Context is now updated with Header information. + app.finalizeBlockState.SetContext(app.finalizeBlockState.Context(). + WithBlockHeader(header). + WithHeaderHash(req.Hash). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + Hash: req.Hash, + AppHash: app.LastCommitID().Hash, +}). + WithConsensusParams(app.GetConsensusParams(app.finalizeBlockState.Context())). + WithVoteInfos(req.DecidedLastCommit.Votes). + WithExecMode(sdk.ExecModeFinalize). + WithCometInfo(corecomet.Info{ + Evidence: sdk.ToSDKEvidence(req.Misbehavior), + ValidatorsHash: req.NextValidatorsHash, + ProposerAddress: req.ProposerAddress, + LastCommit: sdk.ToSDKCommitInfo(req.DecidedLastCommit), +})) + + / GasMeter must be set after we get a context with updated consensus params. + gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) + if app.checkState != nil { + app.checkState.SetContext(app.checkState.Context(). + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash)) +} + +preblockEvents, err := app.preBlock(req) + if err != nil { + return nil, err +} + +events = append(events, preblockEvents...) + +beginBlock, err := app.beginBlock(req) + if err != nil { + return nil, err +} + + / First check for an abort signal after beginBlock, as it's the first place + / we spend any significant amount of time. + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +events = append(events, beginBlock.Events...) + + / Reset the gas meter so that the AnteHandlers aren't required to + gasMeter = app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) + + / Iterate over all raw transactions in the proposal and attempt to execute + / them, gathering the execution results. + / + / NOTE: Not all raw transactions may adhere to the sdk.Tx interface, e.g. + / vote extensions, so skip those. 
+ txResults := make([]*abci.ExecTxResult, 0, len(req.Txs)) + for _, rawTx := range req.Txs { + response := app.deliverTx(rawTx) + + / check after every tx if we should abort + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +txResults = append(txResults, response) +} + if app.finalizeBlockState.ms.TracingEnabled() { + app.finalizeBlockState.ms = app.finalizeBlockState.ms.SetTracingContext(nil).(storetypes.CacheMultiStore) +} + +endBlock, err := app.endBlock(app.finalizeBlockState.Context()) + if err != nil { + return nil, err +} + + / check after endBlock if we should abort, to avoid propagating the result + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +events = append(events, endBlock.Events...) + cp := app.GetConsensusParams(app.finalizeBlockState.Context()) + +return &abci.FinalizeBlockResponse{ + Events: events, + TxResults: txResults, + ValidatorUpdates: endBlock.ValidatorUpdates, + ConsensusParamUpdates: &cp, +}, nil +} + +/ FinalizeBlock will execute the block proposal provided by RequestFinalizeBlock. +/ Specifically, it will execute an application's BeginBlock (if defined), followed +/ by the transactions in the proposal, finally followed by the application's +/ EndBlock (if defined). +/ +/ For each raw transaction, i.e. a byte slice, BaseApp will only execute it if +/ it adheres to the sdk.Tx interface. Otherwise, the raw transaction will be +/ skipped. This is to support compatibility with proposers injecting vote +/ extensions into the proposal, which should not themselves be executed in cases +/ where they adhere to the sdk.Tx interface. 
+func (app *BaseApp) + +FinalizeBlock(req *abci.FinalizeBlockRequest) (res *abci.FinalizeBlockResponse, err error) { + defer func() { + / call the streaming service hooks with the FinalizeBlock messages + for _, streamingListener := range app.streamingManager.ABCIListeners { + if err := streamingListener.ListenFinalizeBlock(app.finalizeBlockState.Context(), *req, *res); err != nil { + app.logger.Error("ListenFinalizeBlock listening hook failed", "height", req.Height, "err", err) +} + +} + +}() + if app.optimisticExec.Initialized() { + / check if the hash we got is the same as the one we are executing + aborted := app.optimisticExec.AbortIfNeeded(req.Hash) + / Wait for the OE to finish, regardless of whether it was aborted or not + res, err = app.optimisticExec.WaitResult() + + / only return if we are not aborting + if !aborted { + if res != nil { + res.AppHash = app.workingHash() +} + +return res, err +} + + / if it was aborted, we need to reset the state + app.finalizeBlockState = nil + app.optimisticExec.Reset() +} + + / if no OE is running, just run the block (this is either a block replay or a OE that got aborted) + +res, err = app.internalFinalizeBlock(context.Background(), req) + if res != nil { + res.AppHash = app.workingHash() +} + +return res, err +} + +/ checkHalt checks if height or time exceeds halt-height or halt-time respectively. +func (app *BaseApp) + +checkHalt(height int64, time time.Time) + +error { + var halt bool + switch { + case app.haltHeight > 0 && uint64(height) >= app.haltHeight: + halt = true + case app.haltTime > 0 && time.Unix() >= int64(app.haltTime): + halt = true +} + if halt { + return fmt.Errorf("halt per configuration height %d time %d", app.haltHeight, app.haltTime) +} + +return nil +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. 
Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. +func (app *BaseApp) + +Commit() (*abci.CommitResponse, error) { + header := app.finalizeBlockState.Context().BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + if app.precommiter != nil { + app.precommiter(app.finalizeBlockState.Context()) +} + +rms, ok := app.cms.(*rootmulti.Store) + if ok { + rms.SetCommitHeader(header) +} + +app.cms.Commit() + resp := &abci.CommitResponse{ + RetainHeight: retainHeight, +} + abciListeners := app.streamingManager.ABCIListeners + if len(abciListeners) > 0 { + ctx := app.finalizeBlockState.Context() + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + for _, abciListener := range abciListeners { + if err := abciListener.ListenCommit(ctx, *resp, changeSet); err != nil { + app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err) +} + +} + +} + + / Reset the CheckTx state to the latest committed. + / + / NOTE: This is safe because CometBFT holds a lock on the mempool for + / Commit. Use the header from this latest block. + app.setState(execModeCheck, header) + +app.finalizeBlockState = nil + if app.prepareCheckStater != nil { + app.prepareCheckStater(app.checkState.Context()) +} + + / The SnapshotIfApplicable method will create the snapshot by starting the goroutine + app.snapshotManager.SnapshotIfApplicable(header.Height) + +return resp, nil +} + +/ workingHash gets the apphash that will be finalized in commit. +/ These writes will be persisted to the root multi-store (app.cms) + +and flushed to +/ disk in the Commit phase. 
This means when the ABCI client requests Commit(), the application +/ state transitions will be flushed to disk and as a result, but we already have +/ an application Merkle root. +func (app *BaseApp) + +workingHash() []byte { + / Write the FinalizeBlock state into branched storage and commit the MultiStore. + / The write to the FinalizeBlock state writes all state transitions to the root + / MultiStore (app.cms) + +so when Commit() + +is called it persists those values. + app.finalizeBlockState.ms.Write() + + / Get the hash of all writes in order to return the apphash to the comet in finalizeBlock. + commitHash := app.cms.WorkingHash() + +app.logger.Debug("hash of all writes", "workingHash", fmt.Sprintf("%X", commitHash)) + +return commitHash +} + +func handleQueryApp(app *BaseApp, path []string, req *abci.QueryRequest) *abci.QueryResponse { + if len(path) >= 2 { + switch path[1] { + case "simulate": + txBytes := req.Data + + gInfo, res, err := app.Simulate(txBytes) + if err != nil { + return queryResult(errorsmod.Wrap(err, "failed to simulate tx"), app.trace) +} + simRes := &sdk.SimulationResponse{ + GasInfo: gInfo, + Result: res, +} + +bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry) + if err != nil { + return queryResult(errorsmod.Wrap(err, "failed to JSON encode simulation response"), app.trace) +} + +return &abci.QueryResponse{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: bz, +} + case "version": + return &abci.QueryResponse{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: []byte(app.version), +} + +default: + return queryResult(errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace) +} + +} + +return queryResult( + errorsmod.Wrap( + sdkerrors.ErrUnknownRequest, + "expected second parameter to be either 'simulate' or 'version', neither was present", + ), app.trace) +} + +func handleQueryStore(app *BaseApp, path []string, req abci.QueryRequest) *abci.QueryResponse { + / 
"/store" prefix for store queries + queryable, ok := app.cms.(storetypes.Queryable) + if !ok { + return queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "multi-store does not support queries"), app.trace) +} + +req.Path = "/" + strings.Join(path[1:], "/") + if req.Height <= 1 && req.Prove { + return queryResult( + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ), app.trace) +} + sdkReq := storetypes.RequestQuery(req) + +resp, err := queryable.Query(&sdkReq) + if err != nil { + return queryResult(err, app.trace) +} + +resp.Height = req.Height + abciResp := abci.QueryResponse(*resp) + +return &abciResp +} + +func handleQueryP2P(app *BaseApp, path []string) *abci.QueryResponse { + / "/p2p" prefix for p2p queries + if len(path) < 4 { + return queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "path should be p2p filter "), app.trace) +} + +var resp *abci.QueryResponse + + cmd, typ, arg := path[1], path[2], path[3] + switch cmd { + case "filter": + switch typ { + case "addr": + resp = app.FilterPeerByAddrPort(arg) + case "id": + resp = app.FilterPeerByID(arg) +} + +default: + resp = queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace) +} + +return resp +} + +/ SplitABCIQueryPath splits a string path using the delimiter '/'. +/ +/ e.g. "this/is/funny" becomes []string{"this", "is", "funny" +} + +func SplitABCIQueryPath(requestPath string) (path []string) { + path = strings.Split(requestPath, "/") + + / first element is empty string + if len(path) > 0 && path[0] == "" { + path = path[1:] +} + +return path +} + +/ FilterPeerByAddrPort filters peers by address/port. +func (app *BaseApp) + +FilterPeerByAddrPort(info string) *abci.QueryResponse { + if app.addrPeerFilter != nil { + return app.addrPeerFilter(info) +} + +return &abci.QueryResponse{ +} +} + +/ FilterPeerByID filters peers by node ID. 
+func (app *BaseApp) + +FilterPeerByID(info string) *abci.QueryResponse { + if app.idPeerFilter != nil { + return app.idPeerFilter(info) +} + +return &abci.QueryResponse{ +} +} + +/ getContextForProposal returns the correct Context for PrepareProposal and +/ ProcessProposal. We use finalizeBlockState on the first block to be able to +/ access any state changes made in InitChain. +func (app *BaseApp) + +getContextForProposal(ctx sdk.Context, height int64) + +sdk.Context { + if height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() + + / clear all context data set during InitChain to avoid inconsistent behavior + ctx = ctx.WithHeaderInfo(coreheader.Info{ +}).WithBlockHeader(cmtproto.Header{ +}) + +return ctx +} + +return ctx +} + +func (app *BaseApp) + +handleQueryGRPC(handler GRPCQueryHandler, req *abci.QueryRequest) *abci.QueryResponse { + ctx, err := app.CreateQueryContext(req.Height, req.Prove) + if err != nil { + return queryResult(err, app.trace) +} + +resp, err := handler(ctx, req) + if err != nil { + resp = queryResult(gRPCErrorToSDKError(err), app.trace) + +resp.Height = req.Height + return resp +} + +return resp +} + +func gRPCErrorToSDKError(err error) + +error { + status, ok := grpcstatus.FromError(err) + if !ok { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) +} + switch status.Code() { + case codes.NotFound: + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, err.Error()) + case codes.InvalidArgument: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.FailedPrecondition: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.Unauthenticated: + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, err.Error()) + +default: + return errorsmod.Wrap(sdkerrors.ErrUnknownRequest, err.Error()) +} +} + +func checkNegativeHeight(height int64) + +error { + if height < 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "cannot query with height < 0; please 
provide a valid height") +} + +return nil +} + +/ CreateQueryContext creates a new sdk.Context for a query, taking as args +/ the block height and whether the query needs a proof or not. +func (app *BaseApp) + +CreateQueryContext(height int64, prove bool) (sdk.Context, error) { + if err := checkNegativeHeight(height); err != nil { + return sdk.Context{ +}, err +} + + / use custom query multi-store if provided + qms := app.qms + if qms == nil { + qms = app.cms.(storetypes.MultiStore) +} + lastBlockHeight := qms.LatestVersion() + if lastBlockHeight == 0 { + return sdk.Context{ +}, errorsmod.Wrapf(sdkerrors.ErrInvalidHeight, "%s is not ready; please wait for first block", app.Name()) +} + if height > lastBlockHeight { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidHeight, + "cannot query with height in the future; please provide a valid height", + ) +} + + / when a client did not provide a query height, manually inject the latest + if height == 0 { + height = lastBlockHeight +} + if height <= 1 && prove { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ) +} + +cacheMS, err := qms.CacheMultiStoreWithVersion(height) + if err != nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrNotFound, + "failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight, + ) +} + + / branch the commit multi-store for safety + ctx := sdk.NewContext(cacheMS, true, app.logger). + WithMinGasPrices(app.minGasPrices). + WithGasMeter(storetypes.NewGasMeter(app.queryGasLimit)). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: height, +}). + WithBlockHeader(app.checkState.Context().BlockHeader()). 
+ WithBlockHeight(height) + if height != lastBlockHeight { + rms, ok := app.cms.(*rootmulti.Store) + if ok { + cInfo, err := rms.GetCommitInfo(height) + if cInfo != nil && err == nil { + ctx = ctx.WithHeaderInfo(coreheader.Info{ + Height: height, + Time: cInfo.Timestamp +}) +} + +} + +} + +return ctx, nil +} + +/ GetBlockRetentionHeight returns the height for which all blocks below this height +/ are pruned from CometBFT. Given a commitment height and a non-zero local +/ minRetainBlocks configuration, the retentionHeight is the smallest height that +/ satisfies: +/ +/ - Unbonding (safety threshold) + +time: The block interval in which validators +/ can be economically punished for misbehavior. Blocks in this interval must be +/ auditable e.g. by the light client. +/ +/ - Logical store snapshot interval: The block interval at which the underlying +/ logical store database is persisted to disk, e.g. every 10000 heights. Blocks +/ since the last IAVL snapshot must be available for replay on application restart. +/ +/ - State sync snapshots: Blocks since the oldest available snapshot must be +/ available for state sync nodes to catch up (oldest because a node may be +/ restoring an old snapshot while a new snapshot was taken). +/ +/ - Local (minRetainBlocks) + +config: Archive nodes may want to retain more or +/ all blocks, e.g. via a local config option min-retain-blocks. There may also +/ be a need to vary retention for other nodes, e.g. sentry nodes which do not +/ need historical blocks. +func (app *BaseApp) + +GetBlockRetentionHeight(commitHeight int64) + +int64 { + / pruning is disabled if minRetainBlocks is zero + if app.minRetainBlocks == 0 { + return 0 +} + minNonZero := func(x, y int64) + +int64 { + switch { + case x == 0: + return y + case y == 0: + return x + case x < y: + return x + + default: + return y +} + +} + + / Define retentionHeight as the minimum value that satisfies all non-zero + / constraints. 
All blocks below (commitHeight-retentionHeight) + +are pruned + / from CometBFT. + var retentionHeight int64 + + / Define the number of blocks needed to protect against misbehaving validators + / which allows light clients to operate safely. Note, we piggy back of the + / evidence parameters instead of computing an estimated number of blocks based + / on the unbonding period and block commitment time as the two should be + / equivalent. + cp := app.GetConsensusParams(app.finalizeBlockState.Context()) + if cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 { + retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks +} + if app.snapshotManager != nil { + snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights() + if snapshotRetentionHeights > 0 { + retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights) +} + +} + v := commitHeight - int64(app.minRetainBlocks) + +retentionHeight = minNonZero(retentionHeight, v) + if retentionHeight <= 0 { + / prune nothing in the case of a non-positive height + return 0 +} + +return retentionHeight +} + +/ toVoteInfo converts the new ExtendedVoteInfo to VoteInfo. +func toVoteInfo(votes []abci.ExtendedVoteInfo) []abci.VoteInfo { + legacyVotes := make([]abci.VoteInfo, len(votes)) + for i, vote := range votes { + legacyVotes[i] = abci.VoteInfo{ + Validator: abci.Validator{ + Address: vote.Validator.Address, + Power: vote.Validator.Power, +}, + BlockIdFlag: vote.BlockIdFlag, +} + +} + +return legacyVotes +} +``` + +## CheckTx Handler + +`CheckTxHandler` allows users to extend the logic of `CheckTx`. `CheckTxHandler` is called by passing context and the transaction bytes received through ABCI. It is required that the handler returns deterministic results given the same transaction bytes. + + +we return the raw decoded transaction here to avoid decoding it twice. 
+ + +```go +type CheckTxHandler func(ctx sdk.Context, tx []byte) (Tx, error) +``` + +Setting a custom `CheckTxHandler` is optional. It can be done from your app.go file: + +```go expandable +func NewSimApp( + logger log.Logger, + db corestore.KVStoreWithBatch, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + ... + / Create ChecktxHandler + checktxHandler := abci.NewCustomCheckTxHandler(...) + +app.SetCheckTxHandler(checktxHandler) + ... +} +``` diff --git a/docs/sdk/next/documentation/consensus-block-production/introduction.mdx b/docs/sdk/next/documentation/consensus-block-production/introduction.mdx new file mode 100644 index 00000000..6223e0d4 --- /dev/null +++ b/docs/sdk/next/documentation/consensus-block-production/introduction.mdx @@ -0,0 +1,56 @@ +--- +title: Introduction +description: >- + ABCI, Application Blockchain Interface is the interface between CometBFT and + the application. More information about ABCI can be found here. CometBFT + version 0.38 included a new version of ABCI (called ABCI 2.0) which added + several new methods. +--- + +## What is ABCI? + +ABCI, Application Blockchain Interface is the interface between CometBFT and the application. More information about ABCI can be found [here](https://docs.cometbft.com/v0.38/spec/abci/). CometBFT version 0.38 included a new version of ABCI (called ABCI 2.0) which added several new methods. + +The 5 methods introduced in ABCI 2.0 are: + +* `PrepareProposal` +* `ProcessProposal` +* `ExtendVote` +* `VerifyVoteExtension` +* `FinalizeBlock` + +## The Flow + +## PrepareProposal + +Based on validator voting power, CometBFT chooses a block proposer and calls `PrepareProposal` on the block proposer's application (Cosmos SDK). The selected block proposer is responsible for collecting outstanding transactions from the mempool, adhering to the application's specifications. 
The application can enforce custom transaction ordering and incorporate additional transactions, potentially generated from vote extensions in the previous block. + +To perform this manipulation on the application side, a custom handler must be implemented. By default, the Cosmos SDK provides a `PrepareProposalHandler`, used in conjunction with an application-specific mempool. A custom handler can be written by an application developer; if a no-op handler is provided, all transactions are considered valid. + +Please note that vote extensions only become available in the block height after the one in which they were produced, and only once vote extensions have been enabled. More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions). + +After creating the proposal, the proposer returns it to CometBFT. + +`PrepareProposal` CAN be non-deterministic. + +## ProcessProposal + +This method allows validators to perform application-specific checks on the block proposal and is called on all validators. This is an important step in the consensus process, as it ensures that the block is valid and meets the requirements of the application. For example, validators could check that the block contains all the required transactions or that the block does not create any invalid state transitions. + +The implementation of `ProcessProposal` MUST be deterministic. + +## ExtendVote and VerifyVoteExtensions + +These methods allow applications to extend the voting process by requiring validators to perform additional actions beyond simply validating blocks. + +If vote extensions are enabled, `ExtendVote` will be called on every validator, and each one returns its vote extension, which is in practice an arbitrary byte slice. As mentioned above, this data can only be retrieved during `PrepareProposal` at the next block height. Additionally, this data can be arbitrary, but in the provided tutorials, it serves as an oracle or proof of transactions in the mempool.
Essentially, vote extensions are processed and injected as transactions. Examples of use-cases for vote extensions include prices for a price oracle or encryption shares for an encrypted transaction mempool. `ExtendVote` CAN be non-deterministic. + +`VerifyVoteExtensions` is performed on every validator multiple times in order to verify other validators' vote extensions. This check is performed to validate the integrity and validity of the vote extensions preventing malicious or invalid vote extensions. + +Additionally, applications must keep the vote extension data concise as it can degrade the performance of their chain, see testing results [here](https://docs.cometbft.com/v0.38/qa/cometbft-qa-38#vote-extensions-testbed). + +`VerifyVoteExtensions` MUST be deterministic. + +## FinalizeBlock + +`FinalizeBlock` is then called and is responsible for updating the state of the blockchain and making the block available to users. diff --git a/docs/sdk/next/documentation/consensus-block-production/prepare-proposal.mdx b/docs/sdk/next/documentation/consensus-block-production/prepare-proposal.mdx new file mode 100644 index 00000000..d577eaab --- /dev/null +++ b/docs/sdk/next/documentation/consensus-block-production/prepare-proposal.mdx @@ -0,0 +1,667 @@ +--- +title: Prepare Proposal +--- + +`PrepareProposal` handles construction of the block, meaning that when a proposer +is preparing to propose a block, it requests the application to evaluate a +`RequestPrepareProposal`, which contains a series of transactions from CometBFT's +mempool. At this point, the application has complete control over the proposal. +It can modify, delete, and inject transactions from its own app-side mempool into +the proposal or even ignore all the transactions altogether. What the application +does with the transactions provided to it by `RequestPrepareProposal` has no +effect on CometBFT's mempool. 
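Since the handler has full control over the proposal contents, the one hard constraint is staying within the block's byte and gas budgets. The following is a minimal, dependency-free sketch of that selection invariant; the `tx` struct and `selectTxs` helper are illustrative names, not Cosmos SDK APIs:

```go
package main

import "fmt"

// tx is a stand-in for a decoded transaction: its encoded size and gas limit.
// These are illustrative fields, not Cosmos SDK types.
type tx struct {
	bytes int64
	gas   uint64
}

// selectTxs walks candidate transactions in FIFO order and keeps a
// transaction only while the running byte and gas totals stay within the
// proposal limits, mirroring the invariant a PrepareProposal handler must
// uphold (selected txs must not exceed req.MaxTxBytes or the block gas cap).
func selectTxs(candidates []tx, maxBytes int64, maxGas uint64) []tx {
	var (
		selected   []tx
		totalBytes int64
		totalGas   uint64
	)
	for _, t := range candidates {
		if totalBytes+t.bytes > maxBytes {
			break // byte budget exhausted; stop scanning
		}
		if maxGas > 0 && totalGas+t.gas > maxGas {
			break // gas budget exhausted (a cap of 0 means "no gas limit")
		}
		totalBytes += t.bytes
		totalGas += t.gas
		selected = append(selected, t)
	}
	return selected
}

func main() {
	candidates := []tx{
		{bytes: 200, gas: 50_000},
		{bytes: 300, gas: 60_000},
		{bytes: 400, gas: 100_000},
	}
	// The third tx would push the total to 900 bytes, over the 600-byte cap.
	picked := selectTxs(candidates, 600, 200_000)
	fmt.Println(len(picked)) // prints 2
}
```

The SDK's `DefaultTxSelector` applies the same kind of byte/gas accounting while iterating the app-side mempool; a custom handler is free to reorder or inject transactions, but it must preserve these limits.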
 + +Note that the application defines the semantics of `PrepareProposal`: it +MAY be non-deterministic and is only executed by the current block proposer. + +The mention of two mempools above can be confusing, so let's break it down. +CometBFT has a mempool that handles gossiping transactions to other nodes +in the network. The order of these transactions is determined by CometBFT's mempool, +with FIFO as the sole ordering mechanism (the priority mempool in CometBFT +has been deprecated). +However, since the application is able to fully inspect +all transactions, it can provide greater control over transaction ordering. +Allowing the application to handle ordering enables the application to define how +it would like the block constructed. + +The Cosmos SDK defines the `DefaultProposalHandler` type, which provides applications with +`PrepareProposal` and `ProcessProposal` handlers. If you decide to implement your +own `PrepareProposal` handler, you must ensure that the transactions +selected DO NOT exceed the maximum block gas (if set) and the maximum bytes provided +by `req.MaxBytes`. + +```go expandable +package baseapp + +import ( + + "bytes" + "context" + "fmt" + "slices" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + "cosmossdk.io/core/comet" + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +type ( + / ValidatorStore defines the interface contract required for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key.
+ ValidatorStore interface { + GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error) +} + + / GasTx defines the contract that a transaction with a gas limit must implement. + GasTx interface { + GetGas() + +uint64 +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in PrepareProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +/ NOTE: From v0.50.5 `currentHeight` and `chainID` arguments are ignored for fixing an issue. +/ They will be removed from the function in v0.51+. +func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + _ int64, + _ string, + extCommit abci.ExtendedCommitInfo, +) + +error { + / Get values from context + cp := ctx.ConsensusParams() + currentHeight := ctx.HeaderInfo().Height + chainID := ctx.HeaderInfo().ChainID + commitInfo := ctx.CometInfo().GetLastCommit() + + / Check that both extCommit + commit are ordered in accordance with vp/address. + if err := validateExtendedCommitAgainstLastCommit(extCommit, commitInfo); err != nil { + return err +} + + / Start checking vote extensions only **after** the vote extensions enable + / height, because when `currentHeight == VoteExtensionsEnableHeight` + / PrepareProposal doesn't get any vote extensions in its request. + extsEnabled := cp.Abci != nil && currentHeight > cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var ( + / Total voting power of all vote extensions. + totalVP int64 + / Total voting power of all validators that submitted valid vote extensions. 
+ sumVP int64 + ) + for _, vote := range extCommit.Votes { + totalVP += vote.Validator.Power + + / Only check + include power if the vote is a commit vote. There must be super-majority, otherwise the + / previous block (the block the vote is for) + +could not have been committed. + if vote.BlockIdFlag != cmtproto.BlockIDFlagCommit { + continue +} + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := sdk.ConsAddress(vote.Validator.Address) + +pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP += vote.Validator.Power +} + + / This check is probably unnecessary, but better safe than sorry. 
+ if totalVP <= 0 { + return fmt.Errorf("total voting power must be positive, got: %d", totalVP) +} + + / If the sum of the voting power has not reached (2/3 + 1) + +we need to error. + if requiredVP := ((totalVP * 2) / 3) + 1; sumVP < requiredVP { + return fmt.Errorf( + "insufficient cumulative voting power received to verify vote extensions; got: %d, expected: >=%d", + sumVP, requiredVP, + ) +} + +return nil +} + +/ validateExtendedCommitAgainstLastCommit validates an ExtendedCommitInfo against a LastCommit. Specifically, +/ it checks that the ExtendedCommit + LastCommit (for the same height), are consistent with each other + that +/ they are ordered correctly (by voting power) + +in accordance with +/ [comet](https://github.com/cometbft/cometbft/blob/4ce0277b35f31985bbf2c25d3806a184a4510010/types/validator_set.go#L784). +func validateExtendedCommitAgainstLastCommit(ec abci.ExtendedCommitInfo, lc comet.CommitInfo) + +error { + / check that the rounds are the same + if ec.Round != lc.Round() { + return fmt.Errorf("extended commit round %d does not match last commit round %d", ec.Round, lc.Round()) +} + + / check that the # of votes are the same + if len(ec.Votes) != lc.Votes().Len() { + return fmt.Errorf("extended commit votes length %d does not match last commit votes length %d", len(ec.Votes), lc.Votes().Len()) +} + + / check sort order of extended commit votes + if !slices.IsSortedFunc(ec.Votes, func(vote1, vote2 abci.ExtendedVoteInfo) + +int { + if vote1.Validator.Power == vote2.Validator.Power { + return bytes.Compare(vote1.Validator.Address, vote2.Validator.Address) / addresses sorted in ascending order (used to break vp conflicts) +} + +return -int(vote1.Validator.Power - vote2.Validator.Power) / vp sorted in descending order +}) { + return fmt.Errorf("extended commit votes are not sorted by voting power") +} + addressCache := make(map[string]struct{ +}, len(ec.Votes)) + / check that consistency between LastCommit and ExtendedCommit + for i, vote := range 
ec.Votes { + / cache addresses to check for duplicates + if _, ok := addressCache[string(vote.Validator.Address)]; ok { + return fmt.Errorf("extended commit vote address %X is duplicated", vote.Validator.Address) +} + +addressCache[string(vote.Validator.Address)] = struct{ +}{ +} + if !bytes.Equal(vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) { + return fmt.Errorf("extended commit vote address %X does not match last commit vote address %X", vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) +} + if vote.Validator.Power != lc.Votes().Get(i).Validator().Power() { + return fmt.Errorf("extended commit vote power %d does not match last commit vote power %d", vote.Validator.Power, lc.Votes().Get(i).Validator().Power()) +} + +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) + +TxDecode(txBz []byte) (sdk.Tx, error) + +TxEncode(tx sdk.Tx) ([]byte, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier + txSelector TxSelector + signerExtAdapter mempool.SignerExtractionAdapter +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) *DefaultProposalHandler { + return &DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, + txSelector: NewDefaultTxSelector(), + signerExtAdapter: mempool.NewDefaultSignerExtractionAdapter(), +} +} + +/ SetTxSelector sets the TxSelector function on the DefaultProposalHandler. 
+func (h *DefaultProposalHandler) + +SetTxSelector(ts TxSelector) { + h.txSelector = ts +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h *DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + var maxBlockGas uint64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = uint64(b.MaxGas) +} + +defer h.txSelector.Clear() + + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + / + / Note, we still need to ensure the transactions returned respect req.MaxTxBytes. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + for _, txBz := range req.Txs { + tx, err := h.txVerifier.TxDecode(txBz) + if err != nil { + return nil, err +} + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, tx, txBz) + if stop { + break +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} + selectedTxsSignersSeqs := make(map[string]uint64) + +var ( + resError error + selectedTxsNums int + invalidTxs []sdk.Tx / invalid txs to be removed out of the loop to avoid dead lock + ) + +mempool.SelectBy(ctx, h.mempool, req.Txs, func(memTx sdk.Tx) + +bool { + unorderedTx, ok := memTx.(sdk.TxWithUnordered) + isUnordered := ok && unorderedTx.GetUnordered() + txSignersSeqs := make(map[string]uint64) + + / if the tx is unordered, we don't need to check the sequence, we just add it + if !isUnordered { + signerData, err := h.signerExtAdapter.GetSigners(memTx) + if err != nil { + / propagate the error to the caller + resError = err + return false +} + + / If the signers aren't in selectedTxsSignersSeqs then we haven't seen them before + / so we add them and continue given that we don't need to check the sequence. + shouldAdd := true + for _, signer := range signerData { + seq, ok := selectedTxsSignersSeqs[signer.Signer.String()] + if !ok { + txSignersSeqs[signer.Signer.String()] = signer.Sequence + continue +} + + / If we have seen this signer before in this block, we must make + / sure that the current sequence is seq+1; otherwise is invalid + / and we skip it. + if seq+1 != signer.Sequence { + shouldAdd = false + break +} + +txSignersSeqs[signer.Signer.String()] = signer.Sequence +} + if !shouldAdd { + return true +} + +} + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. 
+ txBz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + invalidTxs = append(invalidTxs, memTx) +} + +else { + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, memTx, txBz) + if stop { + return false +} + txsLen := len(h.txSelector.SelectedTxs(ctx)) + / If the tx is unordered, we don't need to update the sender sequence. + if !isUnordered { + for sender, seq := range txSignersSeqs { + / If txsLen != selectedTxsNums is true, it means that we've + / added a new tx to the selected txs, so we need to update + / the sequence of the sender. + if txsLen != selectedTxsNums { + selectedTxsSignersSeqs[sender] = seq +} + +else if _, ok := selectedTxsSignersSeqs[sender]; !ok { + / The transaction hasn't been added but it passed the + / verification, so we know that the sequence is correct. + / So we set this sender's sequence to seq-1, in order + / to avoid unnecessary calls to PrepareProposalVerifyTx. + selectedTxsSignersSeqs[sender] = seq - 1 +} + +} + +} + +selectedTxsNums = txsLen +} + +return true +}) + if resError != nil { + return nil, resError +} + for _, tx := range invalidTxs { + err := h.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return nil, err +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. 
+func (h *DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + var totalTxGas uint64 + + var maxBlockGas int64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = b.MaxGas +} + for _, txBytes := range req.Txs { + tx, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if maxBlockGas > 0 { + gasTx, ok := tx.(GasTx) + if ok { + totalTxGas += gasTx.GetGas() +} + if totalTxGas > uint64(maxBlockGas) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. 
+func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. +func NoOpVerifyVoteExtensionHandler() + +sdk.VerifyVoteExtensionHandler { + return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} + +/ TxSelector defines a helper type that assists in selecting transactions during +/ mempool transaction selection in PrepareProposal. It keeps track of the total +/ number of bytes and total gas of the selected transactions. It also keeps +/ track of the selected transactions themselves. +type TxSelector interface { + / SelectedTxs should return a copy of the selected transactions. + SelectedTxs(ctx context.Context) [][]byte + + / Clear should clear the TxSelector, nulling out all relevant fields. + Clear() + + / SelectTxForProposal should attempt to select a transaction for inclusion in + / a proposal based on inclusion criteria defined by the TxSelector. It must + / return if the caller should halt the transaction selection loop + / (typically over a mempool) + +or otherwise. 
+ SelectTxForProposal(ctx context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool +} + +type defaultTxSelector struct { + totalTxBytes uint64 + totalTxGas uint64 + selectedTxs [][]byte +} + +func NewDefaultTxSelector() + +TxSelector { + return &defaultTxSelector{ +} +} + +func (ts *defaultTxSelector) + +SelectedTxs(_ context.Context) [][]byte { + txs := make([][]byte, len(ts.selectedTxs)) + +copy(txs, ts.selectedTxs) + +return txs +} + +func (ts *defaultTxSelector) + +Clear() { + ts.totalTxBytes = 0 + ts.totalTxGas = 0 + ts.selectedTxs = nil +} + +func (ts *defaultTxSelector) + +SelectTxForProposal(_ context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool { + txSize := uint64(cmttypes.ComputeProtoSizeForTxs([]cmttypes.Tx{ + txBz +})) + +var txGasLimit uint64 + if memTx != nil { + if gasTx, ok := memTx.(GasTx); ok { + txGasLimit = gasTx.GetGas() +} + +} + + / only add the transaction to the proposal if we have enough capacity + if (txSize + ts.totalTxBytes) <= maxTxBytes { + / If there is a max block gas limit, add the tx only if the limit has + / not been met. 
+ if maxBlockGas > 0 { + if (txGasLimit + ts.totalTxGas) <= maxBlockGas { + ts.totalTxGas += txGasLimit + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + +else { + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + + / check if we've reached capacity; if so, we cannot select any more transactions + return ts.totalTxBytes >= maxTxBytes || (maxBlockGas > 0 && (ts.totalTxGas >= maxBlockGas)) +} +``` + +This default implementation can be overridden by the application developer in +favor of a custom implementation in [`app_di.go`](/docs/sdk/next/documentation/application-framework/app-go-di): + +```go +prepareOpt := func(app *baseapp.BaseApp) { + abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app) + +app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) +} + +baseAppOptions = append(baseAppOptions, prepareOpt) +``` diff --git a/docs/sdk/next/documentation/consensus-block-production/process-proposal.mdx b/docs/sdk/next/documentation/consensus-block-production/process-proposal.mdx new file mode 100644 index 00000000..7e132d77 --- /dev/null +++ b/docs/sdk/next/documentation/consensus-block-production/process-proposal.mdx @@ -0,0 +1,654 @@ +--- +title: Process Proposal +--- + +`ProcessProposal` handles the validation of a proposal from `PrepareProposal`, +which also includes a block header. After a block has been proposed, +the other validators have the right to accept or reject that block. In the +default implementation of `ProcessProposal`, each validator runs basic validity +checks on every transaction in the proposal. + +Note, `ProcessProposal` MUST be deterministic. Non-deterministic behaviors will cause apphash mismatches. +This means that if `ProcessProposal` panics or fails and we reject, all honest validator +processes should reject (i.e., prevote nil).
If so, CometBFT will start a new round with a new block proposal and the same cycle will happen with `PrepareProposal` +and `ProcessProposal` for the new proposal. + +Here is the implementation of the default implementation: + +```go expandable +package baseapp + +import ( + + "bytes" + "context" + "fmt" + "slices" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + "cosmossdk.io/core/comet" + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +type ( + / ValidatorStore defines the interface contract required for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. + ValidatorStore interface { + GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error) +} + + / GasTx defines the contract that a transaction with a gas limit must implement. + GasTx interface { + GetGas() + +uint64 +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in PrepareProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +/ NOTE: From v0.50.5 `currentHeight` and `chainID` arguments are ignored for fixing an issue. +/ They will be removed from the function in v0.51+. 
+func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + _ int64, + _ string, + extCommit abci.ExtendedCommitInfo, +) + +error { + / Get values from context + cp := ctx.ConsensusParams() + currentHeight := ctx.HeaderInfo().Height + chainID := ctx.HeaderInfo().ChainID + commitInfo := ctx.CometInfo().GetLastCommit() + + / Check that both extCommit + commit are ordered in accordance with vp/address. + if err := validateExtendedCommitAgainstLastCommit(extCommit, commitInfo); err != nil { + return err +} + + / Start checking vote extensions only **after** the vote extensions enable + / height, because when `currentHeight == VoteExtensionsEnableHeight` + / PrepareProposal doesn't get any vote extensions in its request. + extsEnabled := cp.Abci != nil && currentHeight > cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var ( + / Total voting power of all vote extensions. + totalVP int64 + / Total voting power of all validators that submitted valid vote extensions. + sumVP int64 + ) + for _, vote := range extCommit.Votes { + totalVP += vote.Validator.Power + + / Only check + include power if the vote is a commit vote. There must be super-majority, otherwise the + / previous block (the block the vote is for) + +could not have been committed. 
+ if vote.BlockIdFlag != cmtproto.BlockIDFlagCommit { + continue +} + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := sdk.ConsAddress(vote.Validator.Address) + +pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP += vote.Validator.Power +} + + / This check is probably unnecessary, but better safe than sorry. + if totalVP <= 0 { + return fmt.Errorf("total voting power must be positive, got: %d", totalVP) +} + + / If the sum of the voting power has not reached (2/3 + 1) + +we need to error. 
+ if requiredVP := ((totalVP * 2) / 3) + 1; sumVP < requiredVP { + return fmt.Errorf( + "insufficient cumulative voting power received to verify vote extensions; got: %d, expected: >=%d", + sumVP, requiredVP, + ) +} + +return nil +} + +/ validateExtendedCommitAgainstLastCommit validates an ExtendedCommitInfo against a LastCommit. Specifically, +/ it checks that the ExtendedCommit + LastCommit (for the same height), are consistent with each other + that +/ they are ordered correctly (by voting power) + +in accordance with +/ [comet](https://github.com/cometbft/cometbft/blob/4ce0277b35f31985bbf2c25d3806a184a4510010/types/validator_set.go#L784). +func validateExtendedCommitAgainstLastCommit(ec abci.ExtendedCommitInfo, lc comet.CommitInfo) + +error { + / check that the rounds are the same + if ec.Round != lc.Round() { + return fmt.Errorf("extended commit round %d does not match last commit round %d", ec.Round, lc.Round()) +} + + / check that the # of votes are the same + if len(ec.Votes) != lc.Votes().Len() { + return fmt.Errorf("extended commit votes length %d does not match last commit votes length %d", len(ec.Votes), lc.Votes().Len()) +} + + / check sort order of extended commit votes + if !slices.IsSortedFunc(ec.Votes, func(vote1, vote2 abci.ExtendedVoteInfo) + +int { + if vote1.Validator.Power == vote2.Validator.Power { + return bytes.Compare(vote1.Validator.Address, vote2.Validator.Address) / addresses sorted in ascending order (used to break vp conflicts) +} + +return -int(vote1.Validator.Power - vote2.Validator.Power) / vp sorted in descending order +}) { + return fmt.Errorf("extended commit votes are not sorted by voting power") +} + addressCache := make(map[string]struct{ +}, len(ec.Votes)) + / check that consistency between LastCommit and ExtendedCommit + for i, vote := range ec.Votes { + / cache addresses to check for duplicates + if _, ok := addressCache[string(vote.Validator.Address)]; ok { + return fmt.Errorf("extended commit vote address %X is 
duplicated", vote.Validator.Address) +} + +addressCache[string(vote.Validator.Address)] = struct{ +}{ +} + if !bytes.Equal(vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) { + return fmt.Errorf("extended commit vote address %X does not match last commit vote address %X", vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) +} + if vote.Validator.Power != lc.Votes().Get(i).Validator().Power() { + return fmt.Errorf("extended commit vote power %d does not match last commit vote power %d", vote.Validator.Power, lc.Votes().Get(i).Validator().Power()) +} + +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) + +TxDecode(txBz []byte) (sdk.Tx, error) + +TxEncode(tx sdk.Tx) ([]byte, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier + txSelector TxSelector + signerExtAdapter mempool.SignerExtractionAdapter +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) *DefaultProposalHandler { + return &DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, + txSelector: NewDefaultTxSelector(), + signerExtAdapter: mempool.NewDefaultSignerExtractionAdapter(), +} +} + +/ SetTxSelector sets the TxSelector function on the DefaultProposalHandler. +func (h *DefaultProposalHandler) + +SetTxSelector(ts TxSelector) { + h.txSelector = ts +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. 
Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h *DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + var maxBlockGas uint64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = uint64(b.MaxGas) +} + +defer h.txSelector.Clear() + + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + / + / Note, we still need to ensure the transactions returned respect req.MaxTxBytes. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + for _, txBz := range req.Txs { + tx, err := h.txVerifier.TxDecode(txBz) + if err != nil { + return nil, err +} + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, tx, txBz) + if stop { + break +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} + selectedTxsSignersSeqs := make(map[string]uint64) + +var ( + resError error + selectedTxsNums int + invalidTxs []sdk.Tx / invalid txs to be removed out of the loop to avoid dead lock + ) + +mempool.SelectBy(ctx, h.mempool, req.Txs, func(memTx sdk.Tx) + +bool { + unorderedTx, ok := memTx.(sdk.TxWithUnordered) + isUnordered := ok && unorderedTx.GetUnordered() + txSignersSeqs := make(map[string]uint64) + + / if the tx is unordered, we don't need to check the sequence, we just add it + if !isUnordered { + signerData, err := h.signerExtAdapter.GetSigners(memTx) + if err != nil { + / propagate the error to the caller + resError = err + return false +} + + / If the signers aren't in selectedTxsSignersSeqs then we haven't seen them before + / so we add them and continue given that we don't need to check the sequence. + shouldAdd := true + for _, signer := range signerData { + seq, ok := selectedTxsSignersSeqs[signer.Signer.String()] + if !ok { + txSignersSeqs[signer.Signer.String()] = signer.Sequence + continue +} + + / If we have seen this signer before in this block, we must make + / sure that the current sequence is seq+1; otherwise is invalid + / and we skip it. + if seq+1 != signer.Sequence { + shouldAdd = false + break +} + +txSignersSeqs[signer.Signer.String()] = signer.Sequence +} + if !shouldAdd { + return true +} + +} + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. 
+ txBz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + invalidTxs = append(invalidTxs, memTx) +} + +else { + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, memTx, txBz) + if stop { + return false +} + txsLen := len(h.txSelector.SelectedTxs(ctx)) + / If the tx is unordered, we don't need to update the sender sequence. + if !isUnordered { + for sender, seq := range txSignersSeqs { + / If txsLen != selectedTxsNums is true, it means that we've + / added a new tx to the selected txs, so we need to update + / the sequence of the sender. + if txsLen != selectedTxsNums { + selectedTxsSignersSeqs[sender] = seq +} + +else if _, ok := selectedTxsSignersSeqs[sender]; !ok { + / The transaction hasn't been added but it passed the + / verification, so we know that the sequence is correct. + / So we set this sender's sequence to seq-1, in order + / to avoid unnecessary calls to PrepareProposalVerifyTx. + selectedTxsSignersSeqs[sender] = seq - 1 +} + +} + +} + +selectedTxsNums = txsLen +} + +return true +}) + if resError != nil { + return nil, resError +} + for _, tx := range invalidTxs { + err := h.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return nil, err +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. 
+func (h *DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + var totalTxGas uint64 + + var maxBlockGas int64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = b.MaxGas +} + for _, txBytes := range req.Txs { + tx, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if maxBlockGas > 0 { + gasTx, ok := tx.(GasTx) + if ok { + totalTxGas += gasTx.GetGas() +} + if totalTxGas > uint64(maxBlockGas) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. 
+func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. +func NoOpVerifyVoteExtensionHandler() + +sdk.VerifyVoteExtensionHandler { + return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} + +/ TxSelector defines a helper type that assists in selecting transactions during +/ mempool transaction selection in PrepareProposal. It keeps track of the total +/ number of bytes and total gas of the selected transactions. It also keeps +/ track of the selected transactions themselves. +type TxSelector interface { + / SelectedTxs should return a copy of the selected transactions. + SelectedTxs(ctx context.Context) [][]byte + + / Clear should clear the TxSelector, nulling out all relevant fields. + Clear() + + / SelectTxForProposal should attempt to select a transaction for inclusion in + / a proposal based on inclusion criteria defined by the TxSelector. It must + / return if the caller should halt the transaction selection loop + / (typically over a mempool) + +or otherwise. 
+	SelectTxForProposal(ctx context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) bool
+}
+
+type defaultTxSelector struct {
+	totalTxBytes uint64
+	totalTxGas   uint64
+	selectedTxs  [][]byte
+}
+
+func NewDefaultTxSelector() TxSelector {
+	return &defaultTxSelector{}
+}
+
+func (ts *defaultTxSelector) SelectedTxs(_ context.Context) [][]byte {
+	txs := make([][]byte, len(ts.selectedTxs))
+	copy(txs, ts.selectedTxs)
+	return txs
+}
+
+func (ts *defaultTxSelector) Clear() {
+	ts.totalTxBytes = 0
+	ts.totalTxGas = 0
+	ts.selectedTxs = nil
+}
+
+func (ts *defaultTxSelector) SelectTxForProposal(_ context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) bool {
+	txSize := uint64(cmttypes.ComputeProtoSizeForTxs([]cmttypes.Tx{txBz}))
+
+	var txGasLimit uint64
+	if memTx != nil {
+		if gasTx, ok := memTx.(GasTx); ok {
+			txGasLimit = gasTx.GetGas()
+		}
+	}
+
+	// only add the transaction to the proposal if we have enough capacity
+	if (txSize + ts.totalTxBytes) <= maxTxBytes {
+		// If there is a max block gas limit, add the tx only if the limit has
+		// not been met.
+		if maxBlockGas > 0 {
+			if (txGasLimit + ts.totalTxGas) <= maxBlockGas {
+				ts.totalTxGas += txGasLimit
+				ts.totalTxBytes += txSize
+				ts.selectedTxs = append(ts.selectedTxs, txBz)
+			}
+		} else {
+			ts.totalTxBytes += txSize
+			ts.selectedTxs = append(ts.selectedTxs, txBz)
+		}
+	}
+
+	// check if we've reached capacity; if so, we cannot select any more transactions
+	return ts.totalTxBytes >= maxTxBytes || (maxBlockGas > 0 && (ts.totalTxGas >= maxBlockGas))
+}
+```
+
+Like `PrepareProposal`, this implementation is the default and can be modified by
+the application developer in [`app_di.go`](/docs/sdk/next/documentation/application-framework/app-go-di). If you decide to implement
+your own `ProcessProposal` handler, you must ensure that the transactions
+provided in the proposal DO NOT exceed the maximum block gas and `MaxTxBytes` (if set).
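
The gas bound a custom handler must enforce is simple cumulative bookkeeping over the proposal's transactions. Below is a self-contained sketch of that check, using plain types in place of SDK ones; the `withinGasLimit` helper and the `txGas` slice are illustrative, not an SDK API:

```go
package main

import "fmt"

// withinGasLimit reports whether the cumulative gas of a proposal's
// transactions stays within maxBlockGas. A maxBlockGas of 0 disables
// the check, mirroring the default ProcessProposal behavior.
func withinGasLimit(txGas []uint64, maxBlockGas uint64) bool {
	var total uint64
	for _, g := range txGas {
		total += g
		if maxBlockGas > 0 && total > maxBlockGas {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(withinGasLimit([]uint64{400_000, 500_000}, 1_000_000)) // true
	fmt.Println(withinGasLimit([]uint64{700_000, 500_000}, 1_000_000)) // false
}
```

A proposal failing this check should be rejected, just as the default handler returns `REJECT` once the running total exceeds the consensus parameter.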
+
+```go
+processOpt := func(app *baseapp.BaseApp) {
+	abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+	app.SetProcessProposal(abciPropHandler.ProcessProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, processOpt)
+```
diff --git a/docs/sdk/next/documentation/consensus-block-production/vote-extensions.mdx b/docs/sdk/next/documentation/consensus-block-production/vote-extensions.mdx
new file mode 100644
index 00000000..01a62616
--- /dev/null
+++ b/docs/sdk/next/documentation/consensus-block-production/vote-extensions.mdx
@@ -0,0 +1,128 @@
+---
+title: Vote Extensions
+---
+
+## Synopsis
+
+This section describes how the application can define and use vote extensions
+defined in ABCI++.
+
+## Extend Vote
+
+ABCI 2.0 (colloquially called ABCI++) allows an application to extend a pre-commit vote with arbitrary data. This process does NOT have to be deterministic, and the data returned can be unique to the
+validator process. The Cosmos SDK defines [`baseapp.ExtendVoteHandler`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/abci.go#L32):
+
+```go
+type ExtendVoteHandler func(Context, *abci.ExtendVoteRequest) (*abci.ExtendVoteResponse, error)
+```
+
+An application can set this handler in `app.go` via the `baseapp.SetExtendVoteHandler`
+`BaseApp` option function. The `sdk.ExtendVoteHandler`, if defined, is called during
+the `ExtendVote` ABCI method. Note that if an application decides to implement
+`baseapp.ExtendVoteHandler`, it MUST return a non-nil `VoteExtension`. However, the vote
+extension can be empty. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#extendvote)
+for more details.
+
+There are many decentralized censorship-resistant use cases for vote extensions.
+For example, a validator may want to submit prices for a price oracle or encryption
+shares for an encrypted transaction mempool.
Note that an application should carefully
+consider the size of its vote extensions, as they can increase latency in block
+production. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/docs/qa/CometBFT-QA-38.md#vote-extensions-testbed)
+for more details.
+
+Click [here](https://docs.cosmos.network/main/build/abci/vote-extensions) if you would like a walkthrough of how to implement vote extensions.
+
+## Verify Vote Extension
+
+Similar to extending a vote, an application can also verify vote extensions from
+other validators when validating their pre-commits. For a given vote extension,
+this process MUST be deterministic. The Cosmos SDK defines [`sdk.VerifyVoteExtensionHandler`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/types/abci.go#L29-L31):
+
+```go
+type VerifyVoteExtensionHandler func(Context, *abci.VerifyVoteExtensionRequest) (*abci.VerifyVoteExtensionResponse, error)
+```
+
+An application can set this handler in `app.go` via the `baseapp.SetVerifyVoteExtensionHandler`
+`BaseApp` option function. The `sdk.VerifyVoteExtensionHandler`, if defined, is called
+during the `VerifyVoteExtension` ABCI method. If an application defines a vote
+extension handler, it should also define a verification handler. Note that not all
+validators will share the same view of which vote extensions they verify, depending
+on how votes are propagated. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#verifyvoteextension)
+for more details.
+
+Additionally, keep in mind that performance can degrade if vote extensions are too big ([link](https://docs.cometbft.com/v0.38/qa/cometbft-qa-38#vote-extensions-testbed)), so we highly recommend validating vote extension size in `VerifyVoteExtension`.
+
+## Vote Extension Propagation
+
+The agreed-upon vote extensions at height `H` are provided to the proposing validator
+at height `H+1` during `PrepareProposal`.
As a result, the vote extensions are
+not natively provided or exposed to the remaining validators during `ProcessProposal`.
+Therefore, if an application requires that the agreed-upon vote extensions from
+height `H` are available to all validators at `H+1`, the application must propagate
+these vote extensions manually in the block proposal itself. This can be done by
+"injecting" them into the block proposal, since the `Txs` field in `PrepareProposal`
+is just a slice of byte slices.
+
+`FinalizeBlock` will ignore any byte slice that doesn't decode to an `sdk.Tx`, so
+any injected vote extensions will safely be ignored in `FinalizeBlock`. For more
+details on propagation, see the [ABCI++ 2.0 ADR](/docs/sdk/next/documentation/legacy/adr-comprehensive#vote-extension-propagation--verification).
+
+### Recovery of injected Vote Extensions
+
+As stated before, vote extensions can be injected into a block proposal (along with
+other transactions in the `Txs` field). The Cosmos SDK provides a pre-`FinalizeBlock`
+hook that allows applications to recover vote extensions, perform any necessary
+computation on them, and then store the results in the cached store. These results
+will be available to the application during the subsequent `FinalizeBlock` call.
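
Because injected extensions share the `Txs` slice with real transactions, recovery starts by telling the two apart. Below is a self-contained sketch of such a parse step; the magic-prefix encoding and the `parseVoteExtension` helper are purely illustrative (real applications typically use a protobuf type for the injected payload):

```go
package main

import (
	"bytes"
	"fmt"
)

// vePrefix marks injected vote-extension blobs; ordinary transactions
// will not carry it and thus fail the parse below.
var vePrefix = []byte("VE|")

// parseVoteExtension reports whether tx is an injected vote extension
// and, if so, returns its payload.
func parseVoteExtension(tx []byte) ([]byte, bool) {
	if !bytes.HasPrefix(tx, vePrefix) {
		return nil, false
	}
	return tx[len(vePrefix):], true
}

func main() {
	txs := [][]byte{[]byte("VE|price=42"), []byte("ordinary-tx")}
	for _, tx := range txs {
		if payload, ok := parseVoteExtension(tx); ok {
			fmt.Printf("vote extension payload: %s\n", payload)
		}
	}
}
```

Whatever encoding is chosen, the parse must be strict enough that an ordinary transaction can never be mistaken for an injected extension.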
+
+An example of what a pre-`FinalizeBlock` hook could look like is shown below:
+
+```go expandable
+app.SetPreBlocker(func(ctx sdk.Context, req *abci.FinalizeBlockRequest) error {
+	allVEs := []VE{} // store all parsed vote extensions here
+	for _, tx := range req.Txs {
+		// parseVoteExtension is a custom function that tries to parse the tx
+		// as a vote extension
+		ve, ok := parseVoteExtension(tx)
+		if !ok {
+			continue
+		}
+
+		allVEs = append(allVEs, ve)
+	}
+
+	// perform any necessary computation on the vote extensions and store the
+	// result in the cached store
+	result := compute(allVEs)
+	err := storeVEResult(ctx, result)
+	if err != nil {
+		return err
+	}
+
+	return nil
+})
+```
+
+Then, in an app's module, the application can retrieve the result of the computation
+of vote extensions from the cached store:
+
+```go expandable
+func (k Keeper) BeginBlocker(ctx context.Context) error {
+	// retrieve the result of the computation of vote extensions from the cached store
+	result, err := k.GetVEResult(ctx)
+	if err != nil {
+		return err
+	}
+
+	// use the result of the computation of vote extensions
+	k.setSomething(result)
+
+	return nil
+}
+```
diff --git a/docs/sdk/next/documentation/core-concepts/ocap.mdx b/docs/sdk/next/documentation/core-concepts/ocap.mdx
new file mode 100644
index 00000000..72dfaa2c
--- /dev/null
+++ b/docs/sdk/next/documentation/core-concepts/ocap.mdx
@@ -0,0 +1,1098 @@
+---
+title: Object-Capability Model
+description: >-
+  When thinking about security, it is good to start with a specific threat
+  model. Our threat model is the following:
+---
+
+## Intro
+
+When thinking about security, it is good to start with a specific threat model. Our threat model is the following:
+
+> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules.
+
+The Cosmos SDK is designed to address this threat by being the
+ +> The structural properties of object capability systems favor +> modularity in code design and ensure reliable encapsulation in +> code implementation. +> +> These structural properties facilitate the analysis of some +> security properties of an object-capability program or operating +> system. Some of these — in particular, information flow properties +> — can be analyzed at the level of object references and +> connectivity, independent of any knowledge or analysis of the code +> that determines the behavior of the objects. +> +> As a consequence, these security properties can be established +> and maintained in the presence of new objects that contain unknown +> and possibly malicious code. +> +> These structural properties stem from the two rules governing +> access to existing objects: +> +> 1. An object A can send a message to B only if object A holds a +> reference to B. +> 2. An object A can obtain a reference to C only +> if object A receives a message containing a reference to C. As a +> consequence of these two rules, an object can obtain a reference +> to another object only through a preexisting chain of references. +> In short, "Only connectivity begets connectivity." + +For an introduction to object-capabilities, see this [Wikipedia article](https://en.wikipedia.org/wiki/Object-capability_model). + +## Ocaps in practice + +The idea is to only reveal what is necessary to get the work done. + +For example, the following code snippet violates the object capabilities +principle: + +```go +type AppAccount struct {... +} + account := &AppAccount{ + Address: pub.Address(), + Coins: sdk.Coins{ + sdk.NewInt64Coin("ATM", 100) +}, +} + sumValue := externalModule.ComputeSumValue(account) +``` + +The method `ComputeSumValue` implies a pure function, yet the implied +capability of accepting a pointer value is the capability to modify that +value. The preferred method signature should take a copy instead. 
+ +```go +sumValue := externalModule.ComputeSumValue(*account) +``` + +In the Cosmos SDK, you can see the application of this principle in simapp. + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + "os" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes 
"github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + 
"github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. 
+/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + 
authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + 
authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(appCodec, app.EpochsKeeper), + protocolpool.NewAppModule(appCodec, app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + + / At startup, after all modules have been registered, check that all prot + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return 
app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses()
+
+map[string]bool {
+ modAccAddrs := make(map[string]bool)
+ for acc := range GetMaccPerms() {
+ modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
+}
+
+ / allow the following addresses to receive funds
+ delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())
+
+return modAccAddrs
+}
+```
+
+The following diagram shows the current dependencies between keepers.
+
+![Keeper dependencies](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg)
diff --git a/docs/sdk/next/documentation/core-concepts/overview.mdx b/docs/sdk/next/documentation/core-concepts/overview.mdx
new file mode 100644
index 00000000..80fe95af
--- /dev/null
+++ b/docs/sdk/next/documentation/core-concepts/overview.mdx
@@ -0,0 +1,41 @@
+---
+title: What is the Cosmos SDK
+---
+
+The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is an open-source toolkit for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**.
+
+The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains.
+This modular approach extends to the consensus engine itself: developers can plug and play with different engines, ranging from [CometBFT](https://github.com/cometbft/cometbft) to [Rollkit](https://rollkit.dev/).
+
+SDK-based blockchains can use the predefined modules or build their own. This means developers can build a blockchain tailored to their specific use case, without having to worry about the low-level details of building a blockchain from scratch. Predefined modules include staking, governance, and token issuance, among others.
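To make the module-composition idea concrete, here is a deliberately simplified, self-contained Go sketch. `Module`, `mintModule`, `customModule`, and `Manager` are toy stand-ins invented for illustration only — the SDK's real interfaces (such as `module.AppModule` and `module.Manager`) are considerably richer:

```go
package main

import "fmt"

// Module is a toy stand-in for an SDK module interface: each module owns
// a piece of application logic and exposes lifecycle hooks.
type Module interface {
	Name() string
	BeginBlock(height int64)
}

// mintModule plays the role of a reusable, predefined module.
type mintModule struct{ minted int64 }

func (m *mintModule) Name() string            { return "mint" }
func (m *mintModule) BeginBlock(height int64) { m.minted++ }

// customModule plays the role of an application-specific module.
type customModule struct{ seen []int64 }

func (c *customModule) Name() string            { return "custom" }
func (c *customModule) BeginBlock(height int64) { c.seen = append(c.seen, height) }

// Manager mirrors the role of a module manager: it holds the modules and
// invokes their hooks in a fixed, deterministic order.
type Manager struct{ modules []Module }

func (mgr *Manager) BeginBlock(height int64) {
	for _, m := range mgr.modules {
		m.BeginBlock(height)
	}
}

func main() {
	mint := &mintModule{}
	custom := &customModule{}
	mgr := &Manager{modules: []Module{mint, custom}}
	for h := int64(1); h <= 3; h++ {
		mgr.BeginBlock(h)
	}
	fmt.Printf("mint ran %d times, custom saw heights %v\n", mint.minted, custom.seen)
}
```

The fixed iteration order in `Manager.BeginBlock` is the essential property: every node must run the same modules in the same order, which is why the real module manager exposes explicit ordering hooks such as `SetOrderBeginBlockers`.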
+
+What's more, the Cosmos SDK is a capabilities-based system that allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [Object-Capability Model](/docs/sdk/next/documentation/core-concepts/ocap).
+
+Think of the SDK as a Lego kit: you can build the basic house from the instructions, or you can modify your house and add more floors, more doors, more windows. The choice is yours.
+
+## What are Application-Specific Blockchains
+
+One development paradigm in the blockchain world today is that of virtual-machine blockchains like Ethereum, where development generally revolves around building decentralized applications on top of an existing blockchain as a set of smart contracts. While smart contracts can be very good for some use cases like single-use applications (e.g. ICOs), they often fall short for building complex decentralized platforms. More generally, smart contracts can be limiting in terms of flexibility, sovereignty and performance.
+
+Application-specific blockchains offer a radically different development paradigm than virtual-machine blockchains. An application-specific blockchain is a blockchain customized to operate a single application: developers have all the freedom to make the design decisions required for the application to run optimally. They can also provide better sovereignty, security and performance.
+
+Learn more about [application-specific blockchains](/docs/sdk/next/documentation/core-concepts/why-app-specific).
+
+## What is Modularity
+
+Today there is a lot of discussion around modularity, and around the trade-offs between monolithic and modular designs. The Cosmos SDK was originally built with a vision of modularity in mind. Modularity comes from splitting a blockchain into customizable layers of execution, consensus, settlement and data availability, which is exactly what the Cosmos SDK enables.
This means that developers can plug and play, making their blockchain customizable by using different software for different layers. For example, you can choose to build a vanilla chain and use the Cosmos SDK with CometBFT: CometBFT serves as the consensus layer, while the chain itself acts as the settlement and execution layer. Another route is to use the SDK with Rollkit and Celestia as your consensus and data availability layers. The benefit of modularity is that you can customize your chain to your specific use case.
+
+## Why the Cosmos SDK
+
+The Cosmos SDK is the most advanced framework for building custom modular application-specific blockchains today. Here are a few reasons why you might want to consider building your decentralized application with the Cosmos SDK:
+
+* It allows you to plug and play with, and customize, your consensus layer. As noted above, you can use Rollkit and Celestia as your consensus and data availability layers. This offers a great deal of flexibility and customization.
+* The default consensus engine available within the Cosmos SDK is [CometBFT](https://github.com/cometbft/cometbft). CometBFT is the most mature BFT consensus engine in existence. It is widely used across the industry and is considered the gold standard consensus engine for building Proof-of-Stake systems.
+* The Cosmos SDK is open-source and designed to make it easy to build blockchains out of composable [modules](/docs/sdk/next/documentation/module-system). As the ecosystem of open-source Cosmos SDK modules grows, it will become increasingly easier to build complex decentralized platforms with it.
+* The Cosmos SDK is inspired by capabilities-based security, and informed by years of wrestling with blockchain state-machines. This makes the Cosmos SDK a very secure environment to build blockchains.
+* Most importantly, the Cosmos SDK has already been used to build many application-specific blockchains that are already in production.
Among others, we can cite [Cosmos Hub](https://hub.cosmos.network), [IRIS Hub](https://irisnet.org), [Binance Chain](https://docs.binance.org/), [Terra](https://terra.money/) and [Kava](https://www.kava.io/). [Many more](https://cosmos.network/ecosystem) are building on the Cosmos SDK.
+
+## Getting started with the Cosmos SDK
+
+* Learn more about the [architecture of a Cosmos SDK application](/docs/sdk/next/documentation/core-concepts/sdk-app-architecture)
+* Learn how to build an application-specific blockchain from scratch with the [Cosmos SDK Tutorial](https://cosmos.network/docs/tutorial)
diff --git a/docs/sdk/next/documentation/core-concepts/sdk-app-architecture.mdx b/docs/sdk/next/documentation/core-concepts/sdk-app-architecture.mdx
new file mode 100644
index 00000000..5937cdb1
--- /dev/null
+++ b/docs/sdk/next/documentation/core-concepts/sdk-app-architecture.mdx
@@ -0,0 +1,90 @@
+---
+title: Blockchain Architecture
+description: 'At its core, a blockchain is a replicated deterministic state machine.'
+---
+
+## State machine
+
+At its core, a blockchain is a [replicated deterministic state machine](https://en.wikipedia.org/wiki/State_machine_replication).
+
+A state machine is a computer science concept whereby a machine can have multiple states, but only one at any given time. There is a `state`, which describes the current state of the system, and `transactions` that trigger state transitions.
+
+Given a state S and a transaction T, the state machine will return a new state S'.
+
+```mermaid expandable
+flowchart LR
+    S["State S"] -->|"apply(T)"| S'["State S'"]
+    style S fill:#f9f,stroke:#333,stroke-width:2px
+    style S' fill:#9ff,stroke:#333,stroke-width:2px
+```
+
+In practice, the transactions are bundled in blocks to make the process more efficient. Given a state S and a block of transactions B, the state machine will return a new state S'.
+ +```mermaid expandable +flowchart LR + S["State S"] -->|"For each T in Block B: apply(T)"| S'["State S'"] + style S fill:#f9f,stroke:#333,stroke-width:2px + style S' fill:#9ff,stroke:#333,stroke-width:2px +``` + +In a blockchain context, the state machine is deterministic. This means that if a node is started at a given state and replays the same sequence of transactions, it will always end up with the same final state. + +The Cosmos SDK gives developers maximum flexibility to define the state of their application, transaction types and state transition functions. The process of building state-machines with the Cosmos SDK will be described more in depth in the following sections. But first, let us see how the state-machine is replicated using **CometBFT**. + +## CometBFT + +Thanks to the Cosmos SDK, developers just have to define the state machine, and [*CometBFT*](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) will handle replication over the network for them. + +```mermaid expandable +flowchart TB + subgraph Node["Blockchain Node"] + subgraph SDK["Built with Cosmos SDK"] + App["State-machine = Application"] + end + subgraph CBT["CometBFT"] + Consensus["Consensus"] + Network["Networking"] + end + end + + App -.->|"ABCI"| Consensus + Consensus --> Network + + style App fill:#e1f5e1,stroke:#4caf50,stroke-width:2px + style Consensus fill:#e3f2fd,stroke:#2196f3,stroke-width:2px + style Network fill:#e3f2fd,stroke:#2196f3,stroke-width:2px + style SDK fill:#f5f5f5,stroke:#333,stroke-width:1px + style CBT fill:#f5f5f5,stroke:#333,stroke-width:1px + style Node fill:#fafafa,stroke:#333,stroke-width:2px +``` + +[CometBFT](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) is an application-agnostic engine that is responsible for handling the *networking* and *consensus* layers of a blockchain. In practice, this means that CometBFT is responsible for propagating and ordering transaction bytes. 
CometBFT relies on an eponymous Byzantine-Fault-Tolerant (BFT) algorithm to reach consensus on the order of transactions. + +The CometBFT [consensus algorithm](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft#consensus-overview) works with a set of special nodes called *Validators*. Validators are responsible for adding blocks of transactions to the blockchain. At any given block, there is a validator set V. A validator in V is chosen by the algorithm to be the proposer of the next block. This block is considered valid if more than two thirds of V signed a `prevote` and a `precommit` on it, and if all the transactions that it contains are valid. The validator set can be changed by rules written in the state-machine. + +## ABCI + +CometBFT passes transactions to the application through an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/), which the application must implement. + +```mermaid expandable +flowchart TB + App["Application<br/>(State Machine)"] + CBT["CometBFT<br/>(Consensus & Networking)"] + + App <-->|"ABCI<br/>Interface"| CBT + + style App fill:#e1f5e1,stroke:#4caf50,stroke-width:2px + style CBT fill:#e3f2fd,stroke:#2196f3,stroke-width:2px +``` + +Note that **CometBFT only handles transaction bytes**. It has no knowledge of what these bytes mean. All CometBFT does is order these transaction bytes deterministically. CometBFT passes the bytes to the application via the ABCI, and expects a return code that tells it whether the messages contained in the transactions were processed successfully. + +Here are the most important messages of the ABCI: + +* `CheckTx`: When a transaction is received by CometBFT, it is passed to the application to check if a few basic requirements are met. `CheckTx` is used to protect the mempool of full-nodes against spam transactions. A special handler called the [`AnteHandler`](/docs/sdk/next/documentation/protocol-development/gas-fees#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.cometbft.com/v0.37/spec/p2p/legacy-docs/messages/mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet. +* `DeliverTx`: When a [valid block](https://docs.cometbft.com/v0.37/spec/core/data_structures#block) is received by CometBFT, each transaction in the block is passed to the application via `DeliverTx` in order to be processed. It is during this stage that the state transitions occur. The `AnteHandler` executes again, along with the actual [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services) RPC for each message in the transaction. +* `BeginBlock`/`EndBlock`: These messages are executed at the beginning and the end of each block, whether the block contains transactions or not. They are useful for triggering the automatic execution of logic.
Proceed with caution though, as computationally expensive loops could slow down your blockchain, or even freeze it if the loop is infinite. + +Find a more detailed view of the ABCI methods in the [CometBFT docs](https://docs.cometbft.com/v0.37/spec/abci/). + +Any application built on CometBFT needs to implement the ABCI interface in order to communicate with the underlying local CometBFT engine. Fortunately, you do not have to implement the ABCI interface yourself. The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](/docs/sdk/next/documentation/core-concepts/sdk-design#baseapp). diff --git a/docs/sdk/next/documentation/core-concepts/sdk-design.mdx b/docs/sdk/next/documentation/core-concepts/sdk-design.mdx new file mode 100644 index 00000000..3b549cbe --- /dev/null +++ b/docs/sdk/next/documentation/core-concepts/sdk-design.mdx @@ -0,0 +1,1071 @@ +--- +title: Main Components of the Cosmos SDK +--- + +The Cosmos SDK is a framework that facilitates the development of secure state-machines on top of CometBFT. At its core, the Cosmos SDK is a boilerplate implementation of the [ABCI](/docs/sdk/next/documentation/core-concepts/sdk-app-architecture#abci) in Golang. It comes with a [`multistore`](/docs/sdk/next/documentation/state-storage/store#multistore) to persist data and a [`router`](/docs/sdk/next/documentation/application-framework/baseapp#routing) to handle transactions. + +Here is a simplified view of how transactions are handled by an application built on top of the Cosmos SDK when transferred from CometBFT via `DeliverTx`: + +1. Decode `transactions` received from the CometBFT consensus engine (remember that CometBFT only deals with `[]bytes`). +2. Extract `messages` from `transactions` and do basic sanity checks. +3. Route each message to the appropriate module so that it can be processed. +4. Commit state changes. + +## `baseapp` + +`baseapp` is the boilerplate implementation of a Cosmos SDK application.
It comes with an implementation of the ABCI to handle the connection with the underlying consensus engine. Typically, a Cosmos SDK application extends `baseapp` by embedding it in [`app.go`](/docs/sdk/next/documentation/application-framework/app-anatomy#core-application-file). + +Here is an example of this from `simapp`, the Cosmos SDK demonstration app: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices 
"github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes 
"github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) 
+ +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+	app.setPostHandler()
+
+	if loadLatest {
+		if err := app.LoadLatestVersion(); err != nil {
+			panic(fmt.Errorf("error loading last version: %w", err))
+		}
+	}
+
+	return app
+}
+
+func (app *SimApp) setAnteHandler(txConfig client.TxConfig) {
+	anteHandler, err := NewAnteHandler(
+		HandlerOptions{
+			ante.HandlerOptions{
+				AccountKeeper:   app.AccountKeeper,
+				BankKeeper:      app.BankKeeper,
+				SignModeHandler: txConfig.SignModeHandler(),
+				FeegrantKeeper:  app.FeeGrantKeeper,
+				SigGasConsumer:  ante.DefaultSigVerificationGasConsumer,
+				SigVerifyOptions: []ante.SigVerificationDecoratorOption{
+					// change below as needed.
+					ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost),
+					ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration),
+				},
+			},
+			&app.CircuitKeeper,
+		},
+	)
+	if err != nil {
+		panic(err)
+	}
+
+	// Set the AnteHandler for the app
+	app.SetAnteHandler(anteHandler)
+}
+
+func (app *SimApp) setPostHandler() {
+	postHandler, err := posthandler.NewPostHandler(
+		posthandler.HandlerOptions{},
+	)
+	if err != nil {
+		panic(err)
+	}
+
+	app.SetPostHandler(postHandler)
+}
+
+// Name returns the name of the App
+func (app *SimApp) Name() string {
+	return app.BaseApp.Name()
+}
+
+// PreBlocker application updates every pre block
+func (app *SimApp) PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
+	return app.ModuleManager.PreBlock(ctx)
+}
+
+// BeginBlocker application updates every begin block
+func (app *SimApp) BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) {
+	return app.ModuleManager.BeginBlock(ctx)
+}
+
+// EndBlocker application updates every end block
+func (app *SimApp) EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) {
+	return app.ModuleManager.EndBlock(ctx)
+}
+
+func (a *SimApp) Configurator() module.Configurator {
+	return a.configurator
+}
+
+// InitChainer application update at chain initialization
+func (app *SimApp) InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) {
+	var genesisState GenesisState
+	if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil {
+		panic(err)
+	}
+
+	app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap())
+
+	return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState)
+}
+
+// LoadHeight loads a particular height
+func (app *SimApp) LoadHeight(height int64) error {
+	return app.LoadVersion(height)
+}
+
+// LegacyAmino returns SimApp's amino codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) LegacyAmino() *codec.LegacyAmino {
+	return app.legacyAmino
+}
+
+// AppCodec returns SimApp's app codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) AppCodec() codec.Codec {
+	return app.appCodec
+}
+
+// InterfaceRegistry returns SimApp's InterfaceRegistry
+func (app *SimApp) InterfaceRegistry() types.InterfaceRegistry {
+	return app.interfaceRegistry
+}
+
+// TxConfig returns SimApp's TxConfig
+func (app *SimApp) TxConfig() client.TxConfig {
+	return app.txConfig
+}
+
+// AutoCliOpts returns the autocli options for the app.
+func (app *SimApp) AutoCliOpts() autocli.AppOptions {
+	modules := make(map[string]appmodule.AppModule, 0)
+	for _, m := range app.ModuleManager.Modules {
+		if moduleWithName, ok := m.(module.HasName); ok {
+			moduleName := moduleWithName.Name()
+			if appModule, ok := moduleWithName.(appmodule.AppModule); ok {
+				modules[moduleName] = appModule
+			}
+		}
+	}
+
+	return autocli.AppOptions{
+		Modules:               modules,
+		ModuleOptions:         runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules),
+		AddressCodec:          authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()),
+		ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()),
+		ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()),
+	}
+}
+
+// DefaultGenesis returns a default genesis from the registered AppModuleBasic's.
+func (a *SimApp) DefaultGenesis() map[string]json.RawMessage {
+	return a.BasicModuleManager.DefaultGenesis(a.appCodec)
+}
+
+// GetKey returns the KVStoreKey for the provided store key.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetKey(storeKey string) *storetypes.KVStoreKey {
+	return app.keys[storeKey]
+}
+
+// GetStoreKeys returns all the stored store keys.
+func (app *SimApp) GetStoreKeys() []storetypes.StoreKey {
+	keys := make([]storetypes.StoreKey, 0, len(app.keys))
+	for _, key := range app.keys {
+		keys = append(keys, key)
+	}
+
+	return keys
+}
+
+// SimulationManager implements the SimulationApp interface
+func (app *SimApp) SimulationManager() *module.SimulationManager {
+	return app.sm
+}
+
+// RegisterAPIRoutes registers all application module routes with the provided
+// API server.
+func (app *SimApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+	clientCtx := apiSvr.ClientCtx
+
+	// Register new tx routes from grpc-gateway.
+	authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
+
+	// Register new CometBFT queries routes from grpc-gateway.
+	cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
+
+	// Register node gRPC service for grpc-gateway.
+	nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
+
+	// Register grpc-gateway routes for all modules.
+	app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
+
+	// register swagger API from root so that other applications can override easily
+	if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+		panic(err)
+	}
+}
+
+// RegisterTxService implements the Application.RegisterTxService method.
+func (app *SimApp) RegisterTxService(clientCtx client.Context) {
+	authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry)
+}
+
+// RegisterTendermintService implements the Application.RegisterTendermintService method.
+func (app *SimApp) RegisterTendermintService(clientCtx client.Context) {
+	cmtApp := server.NewCometABCIWrapper(app)
+
+	cmtservice.RegisterTendermintService(
+		clientCtx,
+		app.BaseApp.GRPCQueryRouter(),
+		app.interfaceRegistry,
+		cmtApp.Query,
+	)
+}
+
+func (app *SimApp) RegisterNodeService(clientCtx client.Context, cfg config.Config) {
+	nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg)
+}
+
+// GetMaccPerms returns a copy of the module account permissions
+//
+// NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms() map[string][]string {
+	return maps.Clone(maccPerms)
+}
+
+// BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses() map[string]bool {
+	modAccAddrs := make(map[string]bool)
+	for acc := range GetMaccPerms() {
+		modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
+	}
+
+	// allow the following addresses to receive funds
+	delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())
+
+	return modAccAddrs
+}
+```
+
+The goal of `baseapp` is to provide a secure interface between the store and the extensible state machine while defining as little about the state machine as possible (staying true to the ABCI).
+
+For more on `baseapp`, see the [`baseapp` documentation](/docs/sdk/next/documentation/application-framework/baseapp).
+
+## Multistore
+
+The Cosmos SDK provides a [`multistore`](/docs/sdk/next/documentation/state-storage/store#multistore) for persisting state. The multistore allows developers to declare any number of [`KVStores`](/docs/sdk/next/documentation/state-storage/store#base-layer-kvstores). These `KVStores` only accept the `[]byte` type as value, so any custom structure needs to be marshalled using [a codec](/docs/sdk/next/documentation/protocol-development/encoding) before being stored.
+
+The multistore abstraction is used to divide the state into distinct compartments, each managed by its own module. For more on the multistore, see the [store documentation](/docs/sdk/next/documentation/state-storage/store#multistore).
+
+## Modules
+
+The power of the Cosmos SDK lies in its modularity. Cosmos SDK applications are built by aggregating a collection of interoperable modules. Each module defines a subset of the state and contains its own message/transaction processor, while the Cosmos SDK is responsible for routing each message to its respective module.
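As a rough illustration of this routing, here is a minimal, hypothetical Go sketch — not the SDK's actual implementation (real dispatch goes through `baseapp`'s message service router) — in which each module registers a handler under its name and decoded messages are dispatched to it:

```go
package main

import (
	"errors"
	"fmt"
)

// Msg is a toy stand-in for an SDK message (hypothetical).
type Msg interface {
	Route() string // name of the module that handles this message
}

// Handler processes a message and returns an error on failure.
type Handler func(msg Msg) error

// Router maps module names to their message handlers, mimicking how
// baseapp dispatches decoded messages to the owning module.
type Router struct {
	routes map[string]Handler
}

func NewRouter() *Router {
	return &Router{routes: make(map[string]Handler)}
}

// AddRoute registers a module's handler under its name.
func (r *Router) AddRoute(module string, h Handler) {
	r.routes[module] = h
}

// Dispatch routes a message to the module that owns it.
func (r *Router) Dispatch(msg Msg) error {
	h, ok := r.routes[msg.Route()]
	if !ok {
		return errors.New("unrecognized module: " + msg.Route())
	}
	return h(msg)
}

// MsgSend is a toy bank-style message.
type MsgSend struct {
	From, To string
	Amount   int64
}

func (MsgSend) Route() string { return "bank" }

func main() {
	r := NewRouter()
	r.AddRoute("bank", func(msg Msg) error {
		send := msg.(MsgSend)
		fmt.Printf("bank: transfer %d from %s to %s\n", send.Amount, send.From, send.To)
		return nil
	})
	if err := r.Dispatch(MsgSend{From: "alice", To: "bob", Amount: 10}); err != nil {
		panic(err)
	}
}
```

In the real SDK the mapping from message type to module is derived from protobuf `Msg` service definitions rather than a string route, but the shape of the dispatch is the same.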
+
+Here is a simplified view of how a transaction is processed by the application of each full-node when it is received in a valid block:
+
+```mermaid expandable
+    flowchart TD
+    A[Transaction relayed from the full-node's CometBFT engine to the node's application via DeliverTx] --> B[APPLICATION]
+    B -->|"Using baseapp's methods: Decode the Tx, extract and route the message(s)"| C[Message routed to the correct module to be processed]
+    C --> D1[AUTH MODULE]
+    C --> D2[BANK MODULE]
+    C --> D3[STAKING MODULE]
+    C --> D4[GOV MODULE]
+    D1 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"]
+    D2 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"]
+    D3 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"]
+    D4 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"]
+```
+
+Each module can be seen as a little state-machine. Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. Most developers will need to access other third-party modules when building their own modules. Given that the Cosmos SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](/docs/sdk/next/documentation/core-concepts/ocap). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities.
+
+Cosmos SDK modules are defined in the `x/` folder of the Cosmos SDK.
Some core modules include: + +* `x/auth`: Used to manage accounts and signatures. +* `x/bank`: Used to enable tokens and token transfers. +* `x/staking` + `x/slashing`: Used to build Proof-of-Stake blockchains. + +In addition to the already existing modules in `x/`, which anyone can use in their app, the Cosmos SDK lets you build your own custom modules. You can check an [example of that in the tutorial](https://tutorials.cosmos.network/). diff --git a/docs/sdk/next/documentation/core-concepts/why-app-specific.mdx b/docs/sdk/next/documentation/core-concepts/why-app-specific.mdx new file mode 100644 index 00000000..88bec7c8 --- /dev/null +++ b/docs/sdk/next/documentation/core-concepts/why-app-specific.mdx @@ -0,0 +1,88 @@ +--- +title: Application-Specific Blockchains +--- + +## Synopsis + +This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts. + +## What are application-specific blockchains + +Application-specific blockchains are blockchains customized to operate a single application. Instead of building a decentralized application on top of an underlying blockchain like Ethereum, developers build their own blockchain from the ground up. This means building a full-node client, a light-client, and all the necessary interfaces (CLI, REST, ...) to interact with the nodes. 
+ +```mermaid expandable +flowchart TB + subgraph Node["Blockchain Node"] + subgraph SDK["Built with Cosmos SDK"] + App["State-machine = Application"] + end + subgraph CBT["CometBFT"] + Consensus["Consensus"] + Network["Networking"] + end + end + + App -.->|"ABCI"| Consensus + Consensus --> Network + + style App fill:#e1f5e1,stroke:#4caf50,stroke-width:2px + style Consensus fill:#e3f2fd,stroke:#2196f3,stroke-width:2px + style Network fill:#e3f2fd,stroke:#2196f3,stroke-width:2px + style SDK fill:#f5f5f5,stroke:#333,stroke-width:1px + style CBT fill:#f5f5f5,stroke:#333,stroke-width:1px + style Node fill:#fafafa,stroke:#333,stroke-width:2px +``` + +## What are the shortcomings of Smart Contracts + +Virtual-machine blockchains like Ethereum addressed the demand for more programmability back in 2014. At the time, the options available for building decentralized applications were quite limited. Most developers would build on top of the complex and limited Bitcoin scripting language, or fork the Bitcoin codebase which was hard to work with and customize. + +Virtual-machine blockchains came in with a new value proposition. Their state-machine incorporates a virtual-machine that is able to interpret turing-complete programs called Smart Contracts. These Smart Contracts are very good for use cases like one-time events (e.g. ICOs), but they can fall short for building complex decentralized platforms. Here is why: + +- Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. 
These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails.
+- Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely constrain **performance**. And even if the state-machine were to be split into multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of 10x in performance when the virtual-machine is removed).
+- Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralized application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it.
+
+Application-Specific Blockchains are designed to address these shortcomings.
+
+## Application-Specific Blockchains Benefits
+
+### Flexibility
+
+Application-specific blockchains give maximum flexibility to developers:
+
+- In Cosmos blockchains, the state-machine is typically connected to the underlying consensus engine via an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/). This interface can be wrapped in any programming language, meaning developers can build their state-machine in the programming language of their choice.
+
+- Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...).
Typically the choice will be made based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in JavaScript, ...).
+
+- The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. Today, only CometBFT is production-ready, but in the future other consensus engines are expected to emerge.
+
+- Even when they settle on a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements out of the box.
+
+- Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...).
+
+- Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains.
+
+The list above contains a few examples that show how much flexibility application-specific blockchains give to developers. The goal of Cosmos and the Cosmos SDK is to make developer tooling as generic and composable as possible, so that each part of the stack can be forked, tweaked and improved without losing compatibility. As the community grows, more alternatives for each of the core building blocks will emerge, giving more options to developers.
+
+### Performance
+
+Decentralized applications built with Smart Contracts are inherently capped in performance by the underlying environment. For a decentralized application to optimize performance, it needs to be built as an application-specific blockchain.
Here are some of the benefits an application-specific blockchain brings in terms of performance:
+
+- Developers of application-specific blockchains can choose to operate with a novel consensus engine such as CometBFT. Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput.
+- An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage.
+- Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them.
+
+### Security
+
+Security is hard to quantify, and greatly varies from platform to platform. That said, here are some important benefits an application-specific blockchain can bring in terms of security:
+
+- Developers can choose proven programming languages like Go when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature.
+- Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. They can use their own custom cryptography, and rely on well-audited crypto libraries.
+- Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application.
+
+### Sovereignty
+
+One of the major benefits of application-specific blockchains is sovereignty.
A decentralized application is an ecosystem that involves many actors: users, developers, third-party services, and more. When developers build on a virtual-machine blockchain where many decentralized applications coexist, the community of the application is different from the community of the underlying blockchain, and the latter supersedes the former in the governance process. If there is a bug or if a new feature is needed, stakeholders of the application have very little leeway to upgrade the code. If the community of the underlying blockchain refuses to act, nothing can happen.
+
+The fundamental issue here is that the governance of the application and the governance of the network are not aligned. This issue is solved by application-specific blockchains. Because application-specific blockchains specialize to operate a single application, stakeholders of the application have full control over the entire chain. This ensures that the community will not be stuck if a bug is discovered, and that it has the freedom to choose how it is going to evolve. diff --git a/docs/sdk/next/documentation/legacy/adr-overview.mdx b/docs/sdk/next/documentation/legacy/adr-overview.mdx new file mode 100644 index 00000000..71bfc517 --- /dev/null +++ b/docs/sdk/next/documentation/legacy/adr-overview.mdx @@ -0,0 +1,99 @@ +--- +title: Architecture Decision Records (ADR) +description: Historical overview of the Cosmos SDK's architecture decision process +--- + +# Architecture Decision Records (ADR) + +## Overview + +Architecture Decision Records (ADRs) were the primary mechanism used by the Cosmos SDK team to document significant architectural decisions from 2019 to 2023. While the ADR process is no longer actively used for new decisions, these historical documents remain valuable references for understanding the design rationale behind many core SDK features.
+ +An ADR captured a single architectural decision, addressing functional or non-functional requirements that were architecturally significant. These records formed the project's decision log and served as a form of Architectural Knowledge Management. + +## What Were ADRs? + +ADRs documented implementation and design decisions that had already been discussed and agreed upon by the team. Unlike RFCs (Request for Comments) which facilitated discussion, ADRs recorded decisions that had already reached consensus through either: +- Prior RFC discussions +- Synchronous team meetings +- Working group sessions + +Each ADR provided: +- **Context** on the relevant goals and current state +- **Proposed changes** to achieve those goals +- **Analysis** of pros and cons +- **References** to related work +- **Implementation status** and changelog + +## The ADR Lifecycle + +The ADR process followed a defined lifecycle: + +1. **Consensus Building**: Every ADR started with either an RFC or discussion where consensus was reached +2. **Documentation**: A pull request was created using the ADR template +3. **Review**: Project stakeholders reviewed the proposed architecture +4. **Acceptance**: ADRs were merged regardless of outcome (accepted, rejected, or abandoned) to maintain historical record +5. **Supersession**: ADRs could be superseded by newer decisions as the project evolved + +### Status Classifications + +ADRs used a two-component status system: +- **Consensus Status**: Draft → Proposed → Last Call → Accepted/Rejected → Superseded +- **Implementation Status**: Implemented or Not Implemented + +## Historical ADR Index + +The complete collection of ADRs can be found in the [Cosmos SDK repository](https://github.com/cosmos/cosmos-sdk/tree/main/docs/architecture). 
Below are some of the most significant ADRs that shaped the current SDK architecture: + +### Core Architecture +- [ADR 002: SDK Documentation Structure](/docs/common/pages/adr-comprehensive#adr-002-sdk-documentation-structure) - Established documentation organization +- [ADR 057: App Wiring](/docs/common/pages/adr-comprehensive#adr-057-app-wiring) - Introduced dependency injection system +- [ADR 063: Core Module API](/docs/common/pages/adr-comprehensive#adr-063-core-module-api) - Defined module interface standards + +### State Management +- [ADR 004: Split Denomination Keys](/docs/common/pages/adr-comprehensive#adr-004-split-denomination-keys) - Optimized denomination storage +- [ADR 062: Collections State Layer](/docs/common/pages/adr-comprehensive#adr-062-collections-a-simplified-storage-layer-for-cosmos-sdk-modules) - Simplified state management +- [ADR 065: Store V2](/docs/common/pages/adr-comprehensive#adr-065-adr-065-store-v2) - Next generation storage layer + +### Protocol & Encoding +- [ADR 019: Protocol Buffer State Encoding](/docs/common/pages/adr-comprehensive#adr-019-protocol-buffer-state-encoding) - Protobuf adoption for state +- [ADR 020: Protocol Buffer Transaction Encoding](/docs/common/pages/adr-comprehensive#adr-020-protocol-buffer-transaction-encoding) - Protobuf for transactions +- [ADR 027: Deterministic Protobuf Serialization](/docs/common/pages/adr-comprehensive#adr-027-deterministic-protobuf-serialization) - Ensuring determinism + +### Module System +- [ADR 030: Authorization Module](/docs/common/pages/adr-comprehensive#adr-030-authorization-module) - Authorization framework +- [ADR 029: Fee Grant Module](/docs/common/pages/adr-comprehensive#adr-029-fee-grant-module) - Fee abstraction +- [ADR 031: Protobuf Msg Services](/docs/common/pages/adr-comprehensive#adr-031-protobuf-msg-services) - Message service pattern + +### Consensus & ABCI +- [ADR 060: ABCI 1.0](/docs/common/pages/adr-comprehensive#adr-060-abci-10-integration-phase-i) - ABCI 1.0 
integration +- [ADR 064: ABCI 2.0](/docs/common/pages/adr-comprehensive#adr-064-abci-20-integration-phase-ii) - ABCI 2.0 planning + +### Developer Experience +- [ADR 058: Auto-Generated CLI](/docs/common/pages/adr-comprehensive#adr-058-auto-generated-cli) - AutoCLI system +- [ADR 055: ORM](/docs/common/pages/adr-comprehensive#adr-055-orm) - Object-relational mapping + +## Why ADRs Matter + +Although the formal ADR process is no longer active, these documents remain valuable because they: + +1. **Preserve Design Rationale**: They explain not just what was built, but why specific design choices were made +2. **Document Trade-offs**: They capture the pros and cons considered for each decision +3. **Show Evolution**: They demonstrate how the SDK architecture evolved over time +4. **Provide Context**: They help developers understand the historical context of current features + +## Current Decision Process + +The Cosmos SDK team has transitioned to more agile decision-making processes, utilizing: +- GitHub Discussions for community input +- Pull requests for implementation proposals +- Working groups for synchronous collaboration +- Direct implementation with iterative refinement + +For developers interested in understanding specific SDK components, the historical ADRs remain an excellent resource for diving deep into the architectural reasoning behind the current implementation. 
+ +## Additional Resources + +- [Complete ADR Archive](https://github.com/cosmos/cosmos-sdk/tree/main/docs/architecture) - Full collection of all historical ADRs +- [ADR Template](/docs/common/pages/adr-comprehensive#adr-template-adr-template) - The template that was used for creating ADRs +- [ADR Process Documentation](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/PROCESS.md) - Detailed process that was followed \ No newline at end of file diff --git a/docs/sdk/next/documentation/module-system/README.mdx b/docs/sdk/next/documentation/module-system/README.mdx new file mode 100644 index 00000000..8d96a163 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/README.mdx @@ -0,0 +1,64 @@ +--- +title: List of Modules +description: >- + Here are some production-grade modules that can be used in Cosmos SDK + applications, along with their respective documentation: +--- + +Here are some production-grade modules that can be used in Cosmos SDK applications, along with their respective documentation: + +## Essential Modules + +Essential modules include functionality that *must* be included in your Cosmos SDK blockchain. +These modules provide the core behaviors that are needed for users and operators such as balance tracking, +proof-of-stake capabilities and governance. + +* [Auth](/docs/sdk/next/documentation/module-system/auth) - Authentication of accounts and transactions for Cosmos SDK applications. +* [Bank](/docs/sdk/next/documentation/module-system/bank) - Token transfer functionalities. +* [Circuit](/docs/sdk/next/documentation/module-system/circuit) - Circuit breaker module for pausing messages. +* [Consensus](/docs/sdk/next/documentation/module-system/consensus) - Consensus module for modifying CometBFT's ABCI consensus params. +* [Distribution](/docs/sdk/next/documentation/module-system/distribution) - Fee distribution, and staking token provision distribution. 
+* [Evidence](/docs/sdk/next/documentation/module-system/evidence) - Evidence handling for double signing, misbehaviour, etc.
+* [Governance](/docs/sdk/next/documentation/module-system/gov) - On-chain proposals and voting.
+* [Genutil](/docs/sdk/next/documentation/module-system/genutil) - Genesis utilities for the Cosmos SDK.
+* [Mint](/docs/sdk/next/documentation/module-system/mint) - Creation of new units of staking token.
+* [Slashing](/docs/sdk/next/documentation/module-system/slashing) - Validator punishment mechanisms.
+* [Staking](/docs/sdk/next/documentation/module-system/staking) - Proof-of-Stake layer for public blockchains.
+* [Upgrade](/docs/sdk/next/documentation/module-system/upgrade) - Software upgrade handling and coordination.
+
+## Supplementary Modules
+
+Supplementary modules are modules that are maintained in the Cosmos SDK but are not necessary for
+the core functionality of your blockchain. They can be thought of as ways to extend the
+capabilities of your blockchain or further specialize it.
+
+* [Authz](/docs/sdk/next/documentation/module-system/authz) - Authorization for accounts to perform actions on behalf of other accounts.
+* [Epochs](/docs/sdk/next/documentation/module-system/epochs) - Allows SDK modules to register logic that is executed on timed tickers (epochs).
+* [Feegrant](/docs/sdk/next/documentation/module-system/feegrant) - Grant fee allowances for executing transactions.
+* [ProtocolPool](/docs/sdk/next/documentation/module-system/protocolpool) - Extended management of community pool functionality.
+
+## Deprecated Modules
+
+The following modules are deprecated. They will no longer be maintained and eventually will be removed
+in an upcoming release of the Cosmos SDK per our [release process](https://github.com/cosmos/cosmos-sdk/blob/main/RELEASE_PROCESS.md).
+
+* [Crisis](/docs/sdk/next/documentation/module-system/crisis) - *Deprecated* Halts the blockchain under certain circumstances (e.g. if an invariant is broken).
+* [Params](/docs/sdk/next/documentation/module-system/params) - *Deprecated* Globally available parameter store.
+* [NFT](/docs/sdk/next/documentation/module-system/nft) - *Deprecated* NFT module implemented based on [ADR43](https://docs.cosmos.network/main/build/architecture/adr-043-nft-module). This module will be moved to the `cosmos-sdk-legacy` repo for legacy use.
+* [Group](/docs/sdk/next/documentation/module-system/group) - *Deprecated* Allows for the creation and management of on-chain multisig accounts. This module will be moved to the `cosmos-sdk-legacy` repo for legacy use.
+
+To learn more about the process of building modules, visit the [building modules reference documentation](https://docs.cosmos.network/main/building-modules/intro).
+
+## IBC
+
+The IBC module for the SDK is maintained by the IBC Go team in its [own repository](https://github.com/cosmos/ibc-go).
+
+Additionally, the [capability module](https://github.com/cosmos/ibc-go/tree/fdd664698d79864f1e00e147f9879e58497b5ef1/modules/capability) is, from v0.50 onwards, also maintained by the IBC Go team in that repository.
+
+## CosmWasm
+
+The CosmWasm module enables smart contracts; learn more at the [CosmWasm documentation site](https://book.cosmwasm.com/), or visit [the repository](https://github.com/CosmWasm/cosmwasm).
+
+## EVM
+
+Read more about writing smart contracts with Solidity at the official [`evm` documentation page](https://evm.cosmos.network/). diff --git a/docs/sdk/next/documentation/module-system/auth.mdx b/docs/sdk/next/documentation/module-system/auth.mdx new file mode 100644 index 00000000..1389faa4 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/auth.mdx @@ -0,0 +1,738 @@ +--- +title: '`x/auth`' +description: This document specifies the auth module of the Cosmos SDK. +--- + +## Abstract + +This document specifies the auth module of the Cosmos SDK.
+ +The auth module is responsible for specifying the base transaction and account types +for an application, since the SDK itself is agnostic to these particulars. It contains +the middlewares, where all basic transaction validity checks (signatures, nonces, auxiliary fields) +are performed, and exposes the account keeper, which allows other modules to read, write, and modify accounts. + +This module is used in the Cosmos Hub. + +## Contents + +* [Concepts](#concepts) + * [Gas & Fees](#gas--fees) +* [State](#state) + * [Accounts](#accounts) +* [AnteHandlers](#antehandlers) +* [Keepers](#keepers) + * [Account Keeper](#account-keeper) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +**Note:** The auth module is different from the [authz module](/docs/sdk/next/documentation/module-system/authz). + +The differences are: + +* `auth` - authentication of accounts and transactions for Cosmos SDK applications and is responsible for specifying the base transaction and account types. +* `authz` - authorization for accounts to perform actions on behalf of other accounts and enables a granter to grant authorizations to a grantee that allows the grantee to execute messages on behalf of the granter. + +### Gas & Fees + +Fees serve two purposes for an operator of the network. + +Fees limit the growth of the state stored by every full node and allow for +general purpose censorship of transactions of little economic value. Fees +are best suited as an anti-spam mechanism where validators are disinterested in +the use of the network and identities of users. + +Fees are determined by the gas limits and gas prices transactions provide, where +`fees = ceil(gasLimit * gasPrices)`. Txs incur gas costs for all state reads/writes, +signature verification, as well as costs proportional to the tx size. Operators +should set minimum gas prices when starting their nodes. 
They must set the unit
+costs of gas in each token denomination they wish to support:
+
+`simd start ... --minimum-gas-prices=0.00001stake,0.05photinos`
+
+When adding transactions to the mempool or gossiping transactions, validators check
+if the transaction's gas prices, which are determined by the provided fees, meet
+any of the validator's minimum gas prices. In other words, a transaction must
+provide a fee of at least one denomination that matches a validator's minimum
+gas price.
+
+CometBFT does not currently provide fee-based mempool prioritization, and fee-based
+mempool filtering is local to a node and not part of consensus. But with
+minimum gas prices set, such a mechanism could be implemented by node operators.
+
+Because the market value for tokens will fluctuate, validators are expected to
+dynamically adjust their minimum gas prices to a level that would encourage the
+use of the network.
+
+## State
+
+### Accounts
+
+Accounts contain authentication information for a uniquely identified external user of an SDK blockchain,
+including public key, address, and account number / sequence number for replay protection. For efficiency,
+since account balances must also be fetched to pay fees, account structs also store the balance of a user
+as `sdk.Coins`.
+
+Accounts are exposed externally as an interface, and stored internally as
+either a base account or vesting account. Module clients wishing to add more
+account types may do so.
+
+* `0x01 | Address -> ProtocolBuffer(account)`
+
+#### Account Interface
+
+The account interface exposes methods to read and write standard account information.
+Note that all of these methods operate on an account struct conforming to the
+interface - in order to write the account to the store, the account keeper will
+need to be used.
+
+```go expandable
+// AccountI is an interface used to store coins at a given address within state.
+// It presumes a notion of sequence numbers for replay protection,
+// a notion of account numbers for replay protection for previously pruned accounts,
+// and a pubkey for authentication purposes.
+//
+// Many complex conditions can be used in the concrete struct which implements AccountI.
+type AccountI interface {
+	proto.Message
+
+	GetAddress() sdk.AccAddress
+	SetAddress(sdk.AccAddress) error // errors if already set.
+
+	GetPubKey() crypto.PubKey // can return nil.
+	SetPubKey(crypto.PubKey) error
+
+	GetAccountNumber() uint64
+	SetAccountNumber(uint64) error
+
+	GetSequence() uint64
+	SetSequence(uint64) error
+
+	// Ensure that account implements stringer
+	String() string
+}
+```
+
+##### Base Account
+
+A base account is the simplest and most common account type, which just stores all requisite
+fields directly in a struct.
+
+```protobuf
+// BaseAccount defines a base account type. It contains all the necessary fields
+// for basic account functionality. Any custom account type should extend this
+// type for additional functionality (e.g. vesting).
+message BaseAccount {
+  string address = 1;
+  google.protobuf.Any pub_key = 2;
+  uint64 account_number = 3;
+  uint64 sequence = 4;
+}
+```
+
+### Vesting Account
+
+See [Vesting](https://docs.cosmos.network/main/modules/auth/vesting/).
+
+## AnteHandlers
+
+The `x/auth` module presently has no transaction handlers of its own, but does expose the special `AnteHandler`, used for performing basic validity checks on a transaction, such that it could be thrown out of the mempool.
+The `AnteHandler` can be seen as a set of decorators that check transactions within the current context, per [ADR 010](/docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+Note that the `AnteHandler` is called on both `CheckTx` and `DeliverTx`, as CometBFT proposers presently have the ability to include in their proposed block transactions which fail `CheckTx`.
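The `fees = ceil(gasLimit * gasPrices)` rule from the Gas & Fees section, which the fee-deducting ante decorators enforce, can be illustrated with a small self-contained sketch. The function name is illustrative; the SDK itself performs this arithmetic with `sdk.Dec` and `sdk.Coins` rather than raw integers.

```go
package main

import "fmt"

// requiredFee computes fees = ceil(gasLimit * gasPrice) using exact integer
// arithmetic, with the gas price expressed as a fraction num/den
// (e.g. a price of 0.00001stake per gas unit -> num=1, den=100000).
func requiredFee(gasLimit, num, den uint64) uint64 {
	return (gasLimit*num + den - 1) / den
}

func main() {
	fmt.Println(requiredFee(200000, 1, 100000)) // 2 (exactly 2stake)
	fmt.Println(requiredFee(200001, 1, 100000)) // 3 (partial fee units round up)
}
```

Note that a gas limit of 200001 at 0.00001stake already requires 3stake, since any fractional fee amount rounds up.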
+
+### Decorators
+
+The auth module provides `AnteDecorator`s that are recursively chained together into a single `AnteHandler` in the following order:
+
+* `SetUpContextDecorator`: Sets the `GasMeter` in the `Context` and wraps the next `AnteHandler` with a defer clause to recover from any downstream `OutOfGas` panics in the `AnteHandler` chain to return an error with information on gas provided and gas used.
+
+* `RejectExtensionOptionsDecorator`: Rejects all extension options which can optionally be included in protobuf transactions.
+
+* `MempoolFeeDecorator`: Checks if the `tx` fee is above the local mempool `minFee` parameter during `CheckTx`.
+
+* `ValidateBasicDecorator`: Calls `tx.ValidateBasic` and returns any non-nil error.
+
+* `TxTimeoutHeightDecorator`: Checks for a `tx` height timeout.
+
+* `ValidateMemoDecorator`: Validates the `tx` memo with application parameters and returns any non-nil error.
+
+* `ConsumeGasTxSizeDecorator`: Consumes gas proportional to the `tx` size based on application parameters.
+
+* `DeductFeeDecorator`: Deducts the `FeeAmount` from the first signer of the `tx`. If the `x/feegrant` module is enabled and a fee granter is set, it deducts fees from the fee granter account.
+
+* `SetPubKeyDecorator`: Sets the pubkey for any `tx` signer that does not already have its corresponding pubkey saved in the state machine and in the current context.
+
+* `ValidateSigCountDecorator`: Validates the number of signatures in the `tx` based on app-parameters.
+
+* `SigGasConsumeDecorator`: Consumes a parameter-defined amount of gas for each signature. This requires pubkeys to be set in the context for all signers as part of `SetPubKeyDecorator`.
+
+* `SigVerificationDecorator`: Verifies all signatures are valid. This requires pubkeys to be set in the context for all signers as part of `SetPubKeyDecorator`.
+
+* `IncrementSequenceDecorator`: Increments the account sequence for each signer to prevent replay attacks.
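The recursive chaining described above can be sketched with simplified stand-in types. In the SDK the chain is built from `sdk.AnteDecorator`s via `sdk.ChainAnteDecorators`; here `Tx`, `AnteHandler`, and `validateMemo` are toy substitutes for `sdk.Tx`, `sdk.AnteHandler`, and `ValidateMemoDecorator`, shown only to illustrate how each decorator runs its check and then invokes the next handler.

```go
package main

import "fmt"

// Tx is a toy stand-in for sdk.Tx.
type Tx struct{ Memo string }

// AnteHandler is a toy stand-in for sdk.AnteHandler.
type AnteHandler func(Tx) error

// AnteDecorator mirrors the sdk.AnteDecorator shape: run a check, then
// delegate to the next handler in the chain.
type AnteDecorator interface {
	AnteHandle(tx Tx, next AnteHandler) error
}

// chainAnteDecorators mirrors sdk.ChainAnteDecorators: it folds the
// decorators right-to-left so that the first decorator runs first.
func chainAnteDecorators(decs ...AnteDecorator) AnteHandler {
	handler := AnteHandler(func(Tx) error { return nil }) // terminal no-op
	for i := len(decs) - 1; i >= 0; i-- {
		dec, next := decs[i], handler
		handler = func(tx Tx) error { return dec.AnteHandle(tx, next) }
	}
	return handler
}

// validateMemo is a toy analogue of ValidateMemoDecorator.
type validateMemo struct{ max int }

func (d validateMemo) AnteHandle(tx Tx, next AnteHandler) error {
	if len(tx.Memo) > d.max {
		return fmt.Errorf("memo too long: %d > %d", len(tx.Memo), d.max)
	}
	return next(tx)
}

func main() {
	ante := chainAnteDecorators(validateMemo{max: 5})
	fmt.Println(ante(Tx{Memo: "hi"}))      // <nil>
	fmt.Println(ante(Tx{Memo: "toolong"})) // memo too long: 7 > 5
}
```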
+
+## Keepers
+
+The auth module only exposes one keeper, the account keeper, which can be used to read and write accounts.
+
+### Account Keeper
+
+Presently only one fully-permissioned account keeper is exposed, which has the ability to both read and write
+all fields of all accounts, and to iterate over all stored accounts.
+
+```go expandable
+// AccountKeeperI is the interface contract that x/auth's keeper implements.
+type AccountKeeperI interface {
+	// Return a new account with the next account number and the specified address. Does not save the new account to the store.
+	NewAccountWithAddress(sdk.Context, sdk.AccAddress) types.AccountI
+
+	// Return a new account with the next account number. Does not save the new account to the store.
+	NewAccount(sdk.Context, types.AccountI) types.AccountI
+
+	// Check if an account exists in the store.
+	HasAccount(sdk.Context, sdk.AccAddress) bool
+
+	// Retrieve an account from the store.
+	GetAccount(sdk.Context, sdk.AccAddress) types.AccountI
+
+	// Set an account in the store.
+	SetAccount(sdk.Context, types.AccountI)
+
+	// Remove an account from the store.
+	RemoveAccount(sdk.Context, types.AccountI)
+
+	// Iterate over all accounts, calling the provided function. Stop iteration when it returns true.
+	IterateAccounts(sdk.Context, func(types.AccountI) bool)
+
+	// Fetch the public key of an account at a specified address.
+	GetPubKey(sdk.Context, sdk.AccAddress) (crypto.PubKey, error)
+
+	// Fetch the sequence of an account at a specified address.
+	GetSequence(sdk.Context, sdk.AccAddress) (uint64, error)
+
+	// Fetch the next account number, and increment the internal counter.
+	NextAccountNumber(sdk.Context) uint64
+}
+```
+
+## Parameters
+
+The auth module contains the following parameters:
+
+| Key                    | Type   | Example |
+| ---------------------- | ------ | ------- |
+| MaxMemoCharacters      | uint64 | 256     |
+| TxSigLimit             | uint64 | 7       |
+| TxSizeCostPerByte      | uint64 | 10      |
+| SigVerifyCostED25519   | uint64 | 590     |
+| SigVerifyCostSecp256k1 | uint64 | 1000    |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `auth` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `auth` state.
+
+```bash
+simd query auth --help
+```
+
+#### account
+
+The `account` command allows users to query for an account by its address.
+
+```bash
+simd query auth account [address] [flags]
+```
+
+Example:
+
+```bash
+simd query auth account cosmos1...
+```
+
+Example Output:
+
+```bash
+'@type': /cosmos.auth.v1beta1.BaseAccount
+account_number: "0"
+address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2
+pub_key:
+  '@type': /cosmos.crypto.secp256k1.PubKey
+  key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD
+sequence: "1"
+```
+
+#### accounts
+
+The `accounts` command allows users to query all the available accounts.
+ +```bash +simd query auth accounts [flags] +``` + +Example: + +```bash +simd query auth accounts +``` + +Example Output: + +```bash expandable +accounts: +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "0" + address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 + pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD + sequence: "1" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "8" + address: cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr + pub_key: null + sequence: "0" + name: transfer + permissions: + - minter + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "4" + address: cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh + pub_key: null + sequence: "0" + name: bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "5" + address: cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r + pub_key: null + sequence: "0" + name: not_bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "6" + address: cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn + pub_key: null + sequence: "0" + name: gov + permissions: + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "3" + address: cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl + pub_key: null + sequence: "0" + name: distribution + permissions: [] +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "1" + address: cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j + pub_key: null + sequence: "0" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "7" + address: cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q + pub_key: null + sequence: "0" + name: mint + permissions: + - minter +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "2" + 
address: cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta
+  pub_key: null
+  sequence: "0"
+  name: fee_collector
+  permissions: []
+pagination:
+  next_key: null
+  total: "0"
+```
+
+#### params
+
+The `params` command allows users to query the current auth parameters.
+
+```bash
+simd query auth params [flags]
+```
+
+Example:
+
+```bash
+simd query auth params
+```
+
+Example Output:
+
+```bash
+max_memo_characters: "256"
+sig_verify_cost_ed25519: "590"
+sig_verify_cost_secp256k1: "1000"
+tx_sig_limit: "7"
+tx_size_cost_per_byte: "10"
+```
+
+### Transactions
+
+The `auth` module supports transaction commands to help you with signing and more. Unlike other modules, the `auth` module transaction commands are accessed directly through the top-level `tx` command.
+
+Use the `--help` flag directly to get more information about the `tx` command.
+
+```bash
+simd tx --help
+```
+
+#### `sign`
+
+The `sign` command allows users to sign transactions that were generated offline.
+
+```bash
+simd tx sign tx.json --from $ALICE > tx.signed.json
+```
+
+The result is a signed transaction that can be broadcast to the network using the broadcast command.
+
+More information about the `sign` command can be found running `simd tx sign --help`.
+
+#### `sign-batch`
+
+The `sign-batch` command allows users to sign multiple offline-generated transactions.
+The transactions can be in one file, with one tx per line, or in multiple files.
+
+```bash
+simd tx sign-batch txs.json --from $ALICE > tx.signed.json
+```
+
+or
+
+```bash
+simd tx sign-batch tx1.json tx2.json tx3.json --from $ALICE > tx.signed.json
+```
+
+The result is multiple signed transactions. To combine the signed transactions into one transaction, use the `--append` flag.
+
+More information about the `sign-batch` command can be found running `simd tx sign-batch --help`.
+
+#### `multi-sign`
+
+The `multi-sign` command allows users to sign transactions that were generated offline by a multisig account.
+
+```bash
+simd tx multisign transaction.json k1k2k3 k1sig.json k2sig.json k3sig.json
+```
+
+Where `k1k2k3` is the multisig account address, `k1sig.json` is the signature of the first signer, `k2sig.json` is the signature of the second signer, and `k3sig.json` is the signature of the third signer.
+
+##### Nested multisig transactions
+
+To allow transactions to be signed by nested multisigs, meaning that a participant of a multisig account can be another multisig account, the `--skip-signature-verification` flag must be used.
+
+```bash
+# First aggregate signatures of the multisig participant
+simd tx multi-sign transaction.json ms1 ms1p1sig.json ms1p2sig.json --signature-only --skip-signature-verification > ms1sig.json
+
+# Then use the aggregated signatures and the other signatures to sign the final transaction
+simd tx multi-sign transaction.json k1ms1 k1sig.json ms1sig.json --skip-signature-verification
+```
+
+Where `ms1` is the nested multisig account address, `ms1p1sig.json` is the signature of the first participant of the nested multisig account, `ms1p2sig.json` is the signature of the second participant of the nested multisig account, and `ms1sig.json` is the aggregated signature of the nested multisig account.
+
+`k1ms1` is a multisig account comprised of an individual signer and another nested multisig account (`ms1`). `k1sig.json` is the signature of its individual member.
+
+More information about the `multi-sign` command can be found running `simd tx multi-sign --help`.
+
+#### `multisign-batch`
+
+The `multisign-batch` command works the same way as `sign-batch`, but for multisig accounts, with the difference that it requires all transactions to be in one file and that the `--append` flag does not exist.
+
+More information about the `multisign-batch` command can be found running `simd tx multisign-batch --help`.
+
+#### `validate-signatures`
+
+The `validate-signatures` command allows users to validate the signatures of a signed transaction.
+
+```bash
+$ simd tx validate-signatures tx.signed.json
+Signers:
+  0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275
+
+Signatures:
+  0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275 [OK]
+```
+
+More information about the `validate-signatures` command can be found running `simd tx validate-signatures --help`.
+
+#### `broadcast`
+
+The `broadcast` command allows users to broadcast a signed transaction to the network.
+
+```bash
+simd tx broadcast tx.signed.json
+```
+
+More information about the `broadcast` command can be found running `simd tx broadcast --help`.
+
+### gRPC
+
+A user can query the `auth` module using gRPC endpoints.
+
+#### Account
+
+The `account` endpoint allows users to query for an account by its address.
+
+```bash
+cosmos.auth.v1beta1.Query/Account
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"address":"cosmos1.."}' \
+  localhost:9090 \
+  cosmos.auth.v1beta1.Query/Account
+```
+
+Example Output:
+
+```bash expandable
+{
+  "account":{
+    "@type":"/cosmos.auth.v1beta1.BaseAccount",
+    "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2",
+    "pubKey":{
+      "@type":"/cosmos.crypto.secp256k1.PubKey",
+      "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD"
+    },
+    "sequence":"1"
+  }
+}
+```
+
+#### Accounts
+
+The `accounts` endpoint allows users to query all the available accounts.
+ +```bash +cosmos.auth.v1beta1.Query/Accounts +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.auth.v1beta1.Query/Accounts +``` + +Example Output: + +```bash expandable +{ + "accounts":[ + { + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2", + "pubKey":{ + "@type":"/cosmos.crypto.secp256k1.PubKey", + "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD" + }, + "sequence":"1" + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr", + "accountNumber":"8" + }, + "name":"transfer", + "permissions":[ + "minter", + "burner" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh", + "accountNumber":"4" + }, + "name":"bonded_tokens_pool", + "permissions":[ + "burner", + "staking" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r", + "accountNumber":"5" + }, + "name":"not_bonded_tokens_pool", + "permissions":[ + "burner", + "staking" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn", + "accountNumber":"6" + }, + "name":"gov", + "permissions":[ + "burner" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl", + "accountNumber":"3" + }, + "name":"distribution" + }, + { + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "accountNumber":"1", + "address":"cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j" + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q", + "accountNumber":"7" + }, + "name":"mint", + "permissions":[ + "minter" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + 
"baseAccount":{
+            "address":"cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta",
+            "accountNumber":"2"
+         },
+         "name":"fee_collector"
+      }
+   ],
+   "pagination":{
+      "total":"9"
+   }
+}
+```
+
+#### Params
+
+The `params` endpoint allows users to query the current auth parameters.
+
+```bash
+cosmos.auth.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  localhost:9090 \
+  cosmos.auth.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+  "params": {
+    "maxMemoCharacters": "256",
+    "txSigLimit": "7",
+    "txSizeCostPerByte": "10",
+    "sigVerifyCostEd25519": "590",
+    "sigVerifyCostSecp256k1": "1000"
+  }
+}
+```
+
+### REST
+
+A user can query the `auth` module using REST endpoints.
+
+#### Account
+
+The `account` endpoint allows users to query for an account by its address.
+
+```bash
+/cosmos/auth/v1beta1/account?address={address}
+```
+
+#### Accounts
+
+The `accounts` endpoint allows users to query all the available accounts.
+
+```bash
+/cosmos/auth/v1beta1/accounts
+```
+
+#### Params
+
+The `params` endpoint allows users to query the current auth parameters.
+
+```bash
+/cosmos/auth/v1beta1/params
+```
diff --git a/docs/sdk/next/documentation/module-system/authz.mdx b/docs/sdk/next/documentation/module-system/authz.mdx
new file mode 100644
index 00000000..c30e29b4
--- /dev/null
+++ b/docs/sdk/next/documentation/module-system/authz.mdx
@@ -0,0 +1,1430 @@
+---
+title: '`x/authz`'
+---
+
+## Abstract
+
+`x/authz` is an implementation of a Cosmos SDK module, per [ADR 30](/docs/sdk/next/documentation/legacy/adr-comprehensive), that allows
+granting arbitrary privileges from one account (the granter) to another account (the grantee). Authorizations must be granted for a particular Msg service method one by one using an implementation of the `Authorization` interface.
+
+## Contents
+
+* [Concepts](#concepts)
+  * [Authorization and Grant](#authorization-and-grant)
+  * [Built-in Authorizations](#built-in-authorizations)
+  * [Gas](#gas)
+* [State](#state)
+  * [Grant](#grant)
+  * [GrantQueue](#grantqueue)
+* [Messages](#messages)
+  * [MsgGrant](#msggrant)
+  * [MsgRevoke](#msgrevoke)
+  * [MsgExec](#msgexec)
+* [Events](#events)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+  * [REST](#rest)
+
+## Concepts
+
+### Authorization and Grant
+
+The `x/authz` module defines interfaces and messages to grant authorizations to perform actions
+on behalf of one account to other accounts. The design is defined in [ADR 030](/docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+A *grant* is an allowance to execute a Msg by the grantee on behalf of the granter.
+Authorization is an interface that must be implemented by a concrete authorization logic to validate and execute grants. Authorizations are extensible and can be defined for any Msg service method, even outside of the module where the Msg method is defined. See the `SendAuthorization` example in the next section for more details.
+
+**Note:** The authz module is different from the [auth (authentication)](/docs/sdk/next/documentation/module-system/auth) module that is responsible for specifying the base transaction and account types.
+
+```go expandable
+package authz
+
+import (
+
+	"github.com/cosmos/gogoproto/proto"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ Authorization represents the interface of various Authorization types implemented
+/ by other modules.
+type Authorization interface {
+	proto.Message
+
+	/ MsgTypeURL returns the fully-qualified Msg service method URL (as described in ADR 031),
+	/ which will process and accept or reject a request.
+	MsgTypeURL()
+
+string
+
+	/ Accept determines whether this grant permits the provided sdk.Msg to be performed,
+	/ and if so provides an upgraded authorization instance.
+ Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) + + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} + +/ AcceptResponse instruments the controller of an authz message if the request is accepted +/ and if it should be updated or deleted. +type AcceptResponse struct { + / If Accept=true, the controller can accept and authorization and handle the update. + Accept bool + / If Delete=true, the controller must delete the authorization object and release + / storage resources. + Delete bool + / Controller, who is calling Authorization.Accept must check if `Updated != nil`. If yes, + / it must use the updated version and handle the update on the storage level. + Updated Authorization +} +``` + +### Built-in Authorizations + +The Cosmos SDK `x/authz` module comes with following authorization types: + +#### GenericAuthorization + +`GenericAuthorization` implements the `Authorization` interface that gives unrestricted permission to execute the provided Msg on behalf of granter's account. + +```protobuf +// GenericAuthorization gives the grantee unrestricted permissions to execute +// the provided method on behalf of the granter's account. +message GenericAuthorization { + option (amino.name) = "cosmos-sdk/GenericAuthorization"; + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + + // Msg, identified by it's type URL, to grant unrestricted permissions to execute + string msg = 1; +} +``` + +```go expandable +package authz + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var _ Authorization = &GenericAuthorization{ +} + +/ NewGenericAuthorization creates a new GenericAuthorization object. +func NewGenericAuthorization(msgTypeURL string) *GenericAuthorization { + return &GenericAuthorization{ + Msg: msgTypeURL, +} +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. 
+func (a GenericAuthorization) + +MsgTypeURL() + +string { + return a.Msg +} + +/ Accept implements Authorization.Accept. +func (a GenericAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) { + return AcceptResponse{ + Accept: true +}, nil +} + +/ ValidateBasic implements Authorization.ValidateBasic. +func (a GenericAuthorization) + +ValidateBasic() + +error { + return nil +} +``` + +* `msg` stores Msg type URL. + +#### SendAuthorization + +`SendAuthorization` implements the `Authorization` interface for the `cosmos.bank.v1beta1.MsgSend` Msg. + +* It takes a (positive) `SpendLimit` that specifies the maximum amount of tokens the grantee can spend. The `SpendLimit` is updated as the tokens are spent. +* It takes an (optional) `AllowList` that specifies to which addresses a grantee can send token. + +```protobuf +// SendAuthorization allows the grantee to spend up to spend_limit coins from +// the granter's account. +// +// Since: cosmos-sdk 0.43 +message SendAuthorization { + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + option (amino.name) = "cosmos-sdk/SendAuthorization"; + + repeated cosmos.base.v1beta1.Coin spend_limit = 1 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + + // allow_list specifies an optional list of addresses to whom the grantee can send tokens on behalf of the + // granter. If omitted, any recipient is allowed. + // + // Since: cosmos-sdk 0.47 + repeated string allow_list = 2; +} +``` + +```go expandable +package types + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have proper gas fee framework. 
+/ Ref: https://github.com/cosmos/cosmos-sdk/issues/9054 +/ Ref: https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(10) + +var _ authz.Authorization = &SendAuthorization{ +} + +/ NewSendAuthorization creates a new SendAuthorization object. +func NewSendAuthorization(spendLimit sdk.Coins, allowed []sdk.AccAddress) *SendAuthorization { + return &SendAuthorization{ + AllowList: toBech32Addresses(allowed), + SpendLimit: spendLimit, +} +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a SendAuthorization) + +MsgTypeURL() + +string { + return sdk.MsgTypeURL(&MsgSend{ +}) +} + +/ Accept implements Authorization.Accept. +func (a SendAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { + mSend, ok := msg.(*MsgSend) + if !ok { + return authz.AcceptResponse{ +}, sdkerrors.ErrInvalidType.Wrap("type mismatch") +} + toAddr := mSend.ToAddress + + limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount...) + if isNegative { + return authz.AcceptResponse{ +}, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit") +} + if limitLeft.IsZero() { + return authz.AcceptResponse{ + Accept: true, + Delete: true +}, nil +} + isAddrExists := false + allowedList := a.GetAllowList() + for _, addr := range allowedList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "send authorization") + if addr == toAddr { + isAddrExists = true + break +} + +} + if len(allowedList) > 0 && !isAddrExists { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot send to %s address", toAddr) +} + +return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &SendAuthorization{ + SpendLimit: limitLeft, + AllowList: allowedList +}}, nil +} + +/ ValidateBasic implements Authorization.ValidateBasic. 
+func (a SendAuthorization) + +ValidateBasic() + +error { + if a.SpendLimit == nil { + return sdkerrors.ErrInvalidCoins.Wrap("spend limit cannot be nil") +} + if !a.SpendLimit.IsAllPositive() { + return sdkerrors.ErrInvalidCoins.Wrapf("spend limit must be positive") +} + found := make(map[string]bool, 0) + for i := 0; i < len(a.AllowList); i++ { + if found[a.AllowList[i]] { + return ErrDuplicateEntry +} + +found[a.AllowList[i]] = true +} + +return nil +} + +func toBech32Addresses(allowed []sdk.AccAddress) []string { + if len(allowed) == 0 { + return nil +} + allowedAddrs := make([]string, len(allowed)) + for i, addr := range allowed { + allowedAddrs[i] = addr.String() +} + +return allowedAddrs +} +``` + +* `spend_limit` keeps track of how many coins are left in the authorization. +* `allow_list` specifies an optional list of addresses to whom the grantee can send tokens on behalf of the granter. + +#### StakeAuthorization + +`StakeAuthorization` implements the `Authorization` interface for messages in the [staking module](https://docs.cosmos.network/v0.53/build/modules/staking). It takes an `AuthorizationType` to specify whether you want to authorise delegating, undelegating or redelegating (i.e. these have to be authorised separately). It also takes an optional `MaxTokens` that keeps track of a limit to the amount of tokens that can be delegated/undelegated/redelegated. If left empty, the amount is unlimited. Additionally, this Msg takes an `AllowList` or a `DenyList`, which allows you to select which validators you allow or deny grantees to stake with. + +```protobuf +// StakeAuthorization defines authorization for delegate/undelegate/redelegate. +// +// Since: cosmos-sdk 0.43 +message StakeAuthorization { + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + option (amino.name) = "cosmos-sdk/StakeAuthorization"; + + // max_tokens specifies the maximum amount of tokens can be delegate to a validator. 
If it is + // empty, there is no spend limit and any amount of coins can be delegated. + cosmos.base.v1beta1.Coin max_tokens = 1 [(gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coin"]; + // validators is the oneof that represents either allow_list or deny_list + oneof validators { + // allow_list specifies list of validator addresses to whom grantee can delegate tokens on behalf of granter's + // account. + Validators allow_list = 2; + // deny_list specifies list of validator addresses to whom grantee can not delegate tokens. + Validators deny_list = 3; + } + // Validators defines list of validator addresses. + message Validators { + repeated string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + } + // authorization_type defines one of AuthorizationType. + AuthorizationType authorization_type = 4; +} +``` + +```go expandable +package types + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have propoer gas fee framework. +/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(10) + +var _ authz.Authorization = &StakeAuthorization{ +} + +/ NewStakeAuthorization creates a new StakeAuthorization object. 
+func NewStakeAuthorization(allowed []sdk.ValAddress, denied []sdk.ValAddress, authzType AuthorizationType, amount *sdk.Coin) (*StakeAuthorization, error) { + allowedValidators, deniedValidators, err := validateAllowAndDenyValidators(allowed, denied) + if err != nil { + return nil, err +} + a := StakeAuthorization{ +} + if allowedValidators != nil { + a.Validators = &StakeAuthorization_AllowList{ + AllowList: &StakeAuthorization_Validators{ + Address: allowedValidators +}} + +} + +else { + a.Validators = &StakeAuthorization_DenyList{ + DenyList: &StakeAuthorization_Validators{ + Address: deniedValidators +}} + +} + if amount != nil { + a.MaxTokens = amount +} + +a.AuthorizationType = authzType + + return &a, nil +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a StakeAuthorization) + +MsgTypeURL() + +string { + authzType, err := normalizeAuthzType(a.AuthorizationType) + if err != nil { + panic(err) +} + +return authzType +} + +func (a StakeAuthorization) + +ValidateBasic() + +error { + if a.MaxTokens != nil && a.MaxTokens.IsNegative() { + return sdkerrors.Wrapf(authz.ErrNegativeMaxTokens, "negative coin amount: %v", a.MaxTokens) +} + if a.AuthorizationType == AuthorizationType_AUTHORIZATION_TYPE_UNSPECIFIED { + return authz.ErrUnknownAuthorizationType +} + +return nil +} + +/ Accept implements Authorization.Accept. 
+func (a StakeAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) {
+	var validatorAddress string
+	var amount sdk.Coin
+
+	switch msg := msg.(type) {
+	case *MsgDelegate:
+		validatorAddress = msg.ValidatorAddress
+		amount = msg.Amount
+	case *MsgUndelegate:
+		validatorAddress = msg.ValidatorAddress
+		amount = msg.Amount
+	case *MsgBeginRedelegate:
+		validatorAddress = msg.ValidatorDstAddress
+		amount = msg.Amount
+	default:
+		return authz.AcceptResponse{}, sdkerrors.ErrInvalidRequest.Wrap("unknown msg type")
+	}
+
+	isValidatorExists := false
+	allowedList := a.GetAllowList().GetAddress()
+	for _, validator := range allowedList {
+		ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization")
+		if validator == validatorAddress {
+			isValidatorExists = true
+			break
+		}
+	}
+
+	denyList := a.GetDenyList().GetAddress()
+	for _, validator := range denyList {
+		ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization")
+		if validator == validatorAddress {
+			return authz.AcceptResponse{}, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validator)
+		}
+	}
+
+	if len(allowedList) > 0 && !isValidatorExists {
+		return authz.AcceptResponse{}, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validatorAddress)
+	}
+
+	if a.MaxTokens == nil {
+		return authz.AcceptResponse{
+			Accept:  true,
+			Delete:  false,
+			Updated: &StakeAuthorization{Validators: a.GetValidators(), AuthorizationType: a.GetAuthorizationType()},
+		}, nil
+	}
+
+	limitLeft, err := a.MaxTokens.SafeSub(amount)
+	if err != nil {
+		return authz.AcceptResponse{}, err
+	}
+	if limitLeft.IsZero() {
+		return authz.AcceptResponse{Accept: true, Delete: true}, nil
+	}
+
+	return authz.AcceptResponse{
+		Accept: true,
+		Delete: false,
+		Updated: &StakeAuthorization{
+			Validators:        a.GetValidators(),
+			AuthorizationType: a.GetAuthorizationType(),
+			MaxTokens:         &limitLeft,
+		},
+	}, nil
+}
+
+func validateAllowAndDenyValidators(allowed []sdk.ValAddress, denied []sdk.ValAddress) ([]string, []string, error) {
+	if len(allowed) == 0 && len(denied) == 0 {
+		return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("both allowed & deny list cannot be empty")
+	}
+	if len(allowed) > 0 && len(denied) > 0 {
+		return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("cannot set both allowed & deny list")
+	}
+
+	allowedValidators := make([]string, len(allowed))
+	if len(allowed) > 0 {
+		for i, validator := range allowed {
+			allowedValidators[i] = validator.String()
+		}
+		return allowedValidators, nil, nil
+	}
+
+	deniedValidators := make([]string, len(denied))
+	for i, validator := range denied {
+		deniedValidators[i] = validator.String()
+	}
+	return nil, deniedValidators, nil
+}
+
+// Normalized Msg type URLs
+func normalizeAuthzType(authzType AuthorizationType) (string, error) {
+	switch authzType {
+	case AuthorizationType_AUTHORIZATION_TYPE_DELEGATE:
+		return sdk.MsgTypeURL(&MsgDelegate{}), nil
+	case AuthorizationType_AUTHORIZATION_TYPE_UNDELEGATE:
+		return sdk.MsgTypeURL(&MsgUndelegate{}), nil
+	case AuthorizationType_AUTHORIZATION_TYPE_REDELEGATE:
+		return sdk.MsgTypeURL(&MsgBeginRedelegate{}), nil
+	default:
+		return "", sdkerrors.Wrapf(authz.ErrUnknownAuthorizationType, "cannot normalize authz type with %T", authzType)
+	}
+}
+```
+
+### Gas
+
+In order to prevent DoS attacks, granting `StakeAuthorization`s with `x/authz` incurs gas. `StakeAuthorization` allows you to authorize another account to delegate, undelegate, or redelegate to validators. The authorizer can define a list of validators for which delegations are allowed or denied. The Cosmos SDK iterates over these lists and charges 10 gas for each validator in both of the lists.
+
+Since the state maintains a list per (granter, grantee) pair with the same expiration, we iterate over that list to remove a grant (when a particular `msgType` is revoked), charging 20 gas per iteration.
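The per-validator charge above amounts to a worst-case upper bound on the gas consumed when checking a validator list. A minimal sketch (the constant and helper are illustrative, not SDK code; the real `Accept` loop also stops early once a match is found):

```go
package main

import "fmt"

// Illustrative constant mirroring the 10-gas-per-validator figure from the
// spec; the actual constant lives inside the staking authorization code.
const gasCostPerValidatorCheck = uint64(10)

// maxListCheckGas returns the worst-case gas consumed when scanning a
// validator allow or deny list of the given length.
func maxListCheckGas(numValidators int) uint64 {
	return gasCostPerValidatorCheck * uint64(numValidators)
}

func main() {
	// A grant with a 5-validator list costs at most 50 gas to check.
	fmt.Println(maxListCheckGas(5))
}
```

Note that allow and deny lists are mutually exclusive (see `validateAllowAndDenyValidators` above), so only one list is ever scanned per check.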
+
+## State
+
+### Grant
+
+Grants are identified by combining the granter address (the address bytes of the granter), the grantee address (the address bytes of the grantee), and the Authorization type (its type URL). Hence we only allow one grant per (granter, grantee, Authorization) triple.
+
+* Grant: `0x01 | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes | msgType_bytes -> ProtocolBuffer(AuthorizationGrant)`
+
+The grant object encapsulates an `Authorization` type and an expiration timestamp:
+
+```protobuf
+// Grant gives permissions to execute
+// the provided method with expiration time.
+message Grant {
+  google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
+  // time when the grant will expire and will be pruned. If null, then the grant
+  // doesn't have a time expiration (other conditions in `authorization`
+  // may apply to invalidate the grant)
+  google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = true];
+}
+```
+
+### GrantQueue
+
+We maintain a queue for authz pruning. Whenever a grant is created, an item is added to the `GrantQueue` with a key composed of the expiration, granter, and grantee.
+
+In `EndBlock` (which runs for every block) we prune expired grants: we form a prefix key from the current block time, iterate over all `GrantQueue` records whose stored expiration has passed, and delete them from both the `GrantQueue` and the `Grant`s store.
+
+```go expandable
+package keeper
+
+import (
+	"fmt"
+	"strconv"
+	"time"
+
+	"github.com/cosmos/gogoproto/proto"
+	abci "github.com/tendermint/tendermint/abci/types"
+	"github.com/tendermint/tendermint/libs/log"
+
+	"github.com/cosmos/cosmos-sdk/baseapp"
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	storetypes "github.com/cosmos/cosmos-sdk/store/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	"github.com/cosmos/cosmos-sdk/x/authz"
+)
+
+// TODO: Revisit this once we have a proper gas fee framework.
+// Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054,
+// https://github.com/cosmos/cosmos-sdk/discussions/9072
+const gasCostPerIteration = uint64(20)
+
+type Keeper struct {
+	storeKey   storetypes.StoreKey
+	cdc        codec.BinaryCodec
+	router     *baseapp.MsgServiceRouter
+	authKeeper authz.AccountKeeper
+}
+
+// NewKeeper constructs a message authorization Keeper
+func NewKeeper(storeKey storetypes.StoreKey, cdc codec.BinaryCodec, router *baseapp.MsgServiceRouter, ak authz.AccountKeeper) Keeper {
+	return Keeper{
+		storeKey:   storeKey,
+		cdc:        cdc,
+		router:     router,
+		authKeeper: ak,
+	}
+}
+
+// Logger returns a module-specific logger.
+func (k Keeper) Logger(ctx sdk.Context) log.Logger {
+	return ctx.Logger().With("module", fmt.Sprintf("x/%s", authz.ModuleName))
+}
+
+// getGrant returns grant stored at skey.
+func (k Keeper) getGrant(ctx sdk.Context, skey []byte) (grant authz.Grant, found bool) {
+	store := ctx.KVStore(k.storeKey)
+	bz := store.Get(skey)
+	if bz == nil {
+		return grant, false
+	}
+	k.cdc.MustUnmarshal(bz, &grant)
+	return grant, true
+}
+
+func (k Keeper) update(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, updated authz.Authorization) error {
+	skey := grantStoreKey(grantee, granter, updated.MsgTypeURL())
+	grant, found := k.getGrant(ctx, skey)
+	if !found {
+		return authz.ErrNoAuthorizationFound
+	}
+
+	msg, ok := updated.(proto.Message)
+	if !ok {
+		return sdkerrors.ErrPackAny.Wrapf("cannot proto marshal %T", updated)
+	}
+
+	any, err := codectypes.NewAnyWithValue(msg)
+	if err != nil {
+		return err
+	}
+
+	grant.Authorization = any
+	store := ctx.KVStore(k.storeKey)
+	store.Set(skey, k.cdc.MustMarshal(&grant))
+	return nil
+}
+
+// DispatchActions attempts to execute the provided messages via authorization
+// grants from the message signer to the grantee.
+func (k Keeper) DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) ([][]byte, error) {
+	results := make([][]byte, len(msgs))
+	now := ctx.BlockTime()
+	for i, msg := range msgs {
+		signers := msg.GetSigners()
+		if len(signers) != 1 {
+			return nil, authz.ErrAuthorizationNumOfSigners
+		}
+		granter := signers[0]
+
+		// If granter != grantee then check authorization.Accept, otherwise we
+		// implicitly accept.
+		if !granter.Equals(grantee) {
+			skey := grantStoreKey(grantee, granter, sdk.MsgTypeURL(msg))
+			grant, found := k.getGrant(ctx, skey)
+			if !found {
+				return nil, sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to update grant with key %s", string(skey))
+			}
+			if grant.Expiration != nil && grant.Expiration.Before(now) {
+				return nil, authz.ErrAuthorizationExpired
+			}
+
+			authorization, err := grant.GetAuthorization()
+			if err != nil {
+				return nil, err
+			}
+
+			resp, err := authorization.Accept(ctx, msg)
+			if err != nil {
+				return nil, err
+			}
+			if resp.Delete {
+				err = k.DeleteGrant(ctx, grantee, granter, sdk.MsgTypeURL(msg))
+			} else if resp.Updated != nil {
+				err = k.update(ctx, grantee, granter, resp.Updated)
+			}
+			if err != nil {
+				return nil, err
+			}
+			if !resp.Accept {
+				return nil, sdkerrors.ErrUnauthorized
+			}
+		}
+
+		handler := k.router.Handler(msg)
+		if handler == nil {
+			return nil, sdkerrors.ErrUnknownRequest.Wrapf("unrecognized message route: %s", sdk.MsgTypeURL(msg))
+		}
+
+		msgResp, err := handler(ctx, msg)
+		if err != nil {
+			return nil, sdkerrors.Wrapf(err, "failed to execute message; message %v", msg)
+		}
+		results[i] = msgResp.Data
+
+		// emit the events from the dispatched actions
+		events := msgResp.Events
+		sdkEvents := make([]sdk.Event, 0, len(events))
+		for _, event := range events {
+			e := event
+			e.Attributes = append(e.Attributes, abci.EventAttribute{Key: "authz_msg_index", Value: strconv.Itoa(i)})
+			sdkEvents = append(sdkEvents, sdk.Event(e))
+		}
+		ctx.EventManager().EmitEvents(sdkEvents)
+	}
+
+	return results, nil
+}
+
+// SaveGrant grants the provided authorization to the grantee on the granter's account
+// with the provided expiration time and inserts the authorization key into the grants queue.
+// If there is an existing authorization grant for the same `sdk.Msg` type, this grant overwrites that.
+func (k Keeper) SaveGrant(ctx sdk.Context, grantee, granter sdk.AccAddress, authorization authz.Authorization, expiration *time.Time) error {
+	store := ctx.KVStore(k.storeKey)
+	msgType := authorization.MsgTypeURL()
+	skey := grantStoreKey(grantee, granter, msgType)
+
+	grant, err := authz.NewGrant(ctx.BlockTime(), authorization, expiration)
+	if err != nil {
+		return err
+	}
+
+	var oldExp *time.Time
+	if oldGrant, found := k.getGrant(ctx, skey); found {
+		oldExp = oldGrant.Expiration
+	}
+	if oldExp != nil && (expiration == nil || !oldExp.Equal(*expiration)) {
+		if err = k.removeFromGrantQueue(ctx, skey, granter, grantee, *oldExp); err != nil {
+			return err
+		}
+	}
+
+	// If the expiration didn't change, then we don't remove it and we should not insert again
+	if expiration != nil && (oldExp == nil || !oldExp.Equal(*expiration)) {
+		if err = k.insertIntoGrantQueue(ctx, granter, grantee, msgType, *expiration); err != nil {
+			return err
+		}
+	}
+
+	bz := k.cdc.MustMarshal(&grant)
+	store.Set(skey, bz)
+
+	return ctx.EventManager().EmitTypedEvent(&authz.EventGrant{
+		MsgTypeUrl: authorization.MsgTypeURL(),
+		Granter:    granter.String(),
+		Grantee:    grantee.String(),
+	})
+}
+
+// DeleteGrant revokes any authorization for the provided message type granted to the grantee
+// by the granter.
+func (k Keeper) DeleteGrant(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) error {
+	store := ctx.KVStore(k.storeKey)
+	skey := grantStoreKey(grantee, granter, msgType)
+	grant, found := k.getGrant(ctx, skey)
+	if !found {
+		return sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to delete grant with key %s", string(skey))
+	}
+	if grant.Expiration != nil {
+		err := k.removeFromGrantQueue(ctx, skey, granter, grantee, *grant.Expiration)
+		if err != nil {
+			return err
+		}
+	}
+
+	store.Delete(skey)
+
+	return ctx.EventManager().EmitTypedEvent(&authz.EventRevoke{
+		MsgTypeUrl: msgType,
+		Granter:    granter.String(),
+		Grantee:    grantee.String(),
+	})
+}
+
+// GetAuthorizations returns the list of `Authorizations` granted to the grantee by the granter.
+func (k Keeper) GetAuthorizations(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress) ([]authz.Authorization, error) {
+	store := ctx.KVStore(k.storeKey)
+	key := grantStoreKey(grantee, granter, "")
+	iter := sdk.KVStorePrefixIterator(store, key)
+	defer iter.Close()
+
+	var authorization authz.Grant
+	var authorizations []authz.Authorization
+	for ; iter.Valid(); iter.Next() {
+		if err := k.cdc.Unmarshal(iter.Value(), &authorization); err != nil {
+			return nil, err
+		}
+
+		a, err := authorization.GetAuthorization()
+		if err != nil {
+			return nil, err
+		}
+
+		authorizations = append(authorizations, a)
+	}
+
+	return authorizations, nil
+}
+
+// GetAuthorization returns an Authorization and its expiration time.
+// A nil Authorization is returned under the following circumstances:
+//   - No grant is found.
+//   - A grant is found, but it is expired.
+//   - There was an error getting the authorization from the grant.
+func (k Keeper) GetAuthorization(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) (authz.Authorization, *time.Time) {
+	grant, found := k.getGrant(ctx, grantStoreKey(grantee, granter, msgType))
+	if !found || (grant.Expiration != nil && grant.Expiration.Before(ctx.BlockHeader().Time)) {
+		return nil, nil
+	}
+
+	auth, err := grant.GetAuthorization()
+	if err != nil {
+		return nil, nil
+	}
+
+	return auth, grant.Expiration
+}
+
+// IterateGrants iterates over all authorization grants.
+// This function should be used with caution because it can involve significant IO operations.
+// It should not be used in query or msg services without charging additional gas.
+// The iteration stops when the handler function returns true or the iterator is exhausted.
+func (k Keeper) IterateGrants(ctx sdk.Context,
+	handler func(granterAddr sdk.AccAddress, granteeAddr sdk.AccAddress, grant authz.Grant) bool,
+) {
+	store := ctx.KVStore(k.storeKey)
+	iter := sdk.KVStorePrefixIterator(store, GrantKey)
+	defer iter.Close()
+	for ; iter.Valid(); iter.Next() {
+		var grant authz.Grant
+		granterAddr, granteeAddr, _ := parseGrantStoreKey(iter.Key())
+		k.cdc.MustUnmarshal(iter.Value(), &grant)
+		if handler(granterAddr, granteeAddr, grant) {
+			break
+		}
+	}
+}
+
+func (k Keeper) getGrantQueueItem(ctx sdk.Context, expiration time.Time, granter, grantee sdk.AccAddress) (*authz.GrantQueueItem, error) {
+	store := ctx.KVStore(k.storeKey)
+	bz := store.Get(GrantQueueKey(expiration, granter, grantee))
+	if bz == nil {
+		return &authz.GrantQueueItem{}, nil
+	}
+
+	var queueItems authz.GrantQueueItem
+	if err := k.cdc.Unmarshal(bz, &queueItems); err != nil {
+		return nil, err
+	}
+	return &queueItems, nil
+}
+
+func (k Keeper) setGrantQueueItem(ctx sdk.Context, expiration time.Time,
+	granter sdk.AccAddress, grantee sdk.AccAddress, queueItems *authz.GrantQueueItem,
+) error {
+	store := ctx.KVStore(k.storeKey)
+	bz, err := k.cdc.Marshal(queueItems)
+	if err != nil {
+		return err
+	}
+	store.Set(GrantQueueKey(expiration, granter, grantee), bz)
+	return nil
+}
+
+// insertIntoGrantQueue inserts a grant key into the grant queue
+func (k Keeper) insertIntoGrantQueue(ctx sdk.Context, granter, grantee sdk.AccAddress, msgType string, expiration time.Time) error {
+	queueItems, err := k.getGrantQueueItem(ctx, expiration, granter, grantee)
+	if err != nil {
+		return err
+	}
+	if len(queueItems.MsgTypeUrls) == 0 {
+		k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{
+			MsgTypeUrls: []string{msgType},
+		})
+	} else {
+		queueItems.MsgTypeUrls = append(queueItems.MsgTypeUrls, msgType)
+		k.setGrantQueueItem(ctx, expiration, granter, grantee, queueItems)
+	}
+	return nil
+}
+
+// removeFromGrantQueue removes a grant key from the grant queue
+func (k Keeper) removeFromGrantQueue(ctx sdk.Context, grantKey []byte, granter, grantee sdk.AccAddress, expiration time.Time) error {
+	store := ctx.KVStore(k.storeKey)
+	key := GrantQueueKey(expiration, granter, grantee)
+	bz := store.Get(key)
+	if bz == nil {
+		return sdkerrors.Wrap(authz.ErrNoGrantKeyFound, "can't remove grant from the expire queue, grant key not found")
+	}
+
+	var queueItem authz.GrantQueueItem
+	if err := k.cdc.Unmarshal(bz, &queueItem); err != nil {
+		return err
+	}
+
+	_, _, msgType := parseGrantStoreKey(grantKey)
+	queueItems := queueItem.MsgTypeUrls
+	for index, typeURL := range queueItems {
+		ctx.GasMeter().ConsumeGas(gasCostPerIteration, "grant queue")
+		if typeURL == msgType {
+			end := len(queueItem.MsgTypeUrls) - 1
+			queueItems[index] = queueItems[end]
+			queueItems = queueItems[:end]
+			if err := k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{
+				MsgTypeUrls: queueItems,
+			}); err != nil {
+				return err
+			}
+			break
+		}
+	}
+
+	return nil
+}
+
+// DequeueAndDeleteExpiredGrants deletes expired grants from the state and grant queue.
+func (k Keeper) DequeueAndDeleteExpiredGrants(ctx sdk.Context) error {
+	store := ctx.KVStore(k.storeKey)
+	iterator := store.Iterator(GrantQueuePrefix, sdk.InclusiveEndBytes(GrantQueueTimePrefix(ctx.BlockTime())))
+	defer iterator.Close()
+	for ; iterator.Valid(); iterator.Next() {
+		var queueItem authz.GrantQueueItem
+		if err := k.cdc.Unmarshal(iterator.Value(), &queueItem); err != nil {
+			return err
+		}
+
+		_, granter, grantee, err := parseGrantQueueKey(iterator.Key())
+		if err != nil {
+			return err
+		}
+
+		store.Delete(iterator.Key())
+		for _, typeURL := range queueItem.MsgTypeUrls {
+			store.Delete(grantStoreKey(grantee, granter, typeURL))
+		}
+	}
+
+	return nil
+}
+```
+
+* GrantQueue: `0x02 | expiration_bytes | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes -> ProtocolBuffer(GrantQueueItem)`
+
+The `expiration_bytes` are the expiration date in UTC with the format `"2006-01-02T15:04:05.000000000"`.
+
+```go expandable
+package keeper
+
+import (
+	"time"
+
+	"github.com/cosmos/cosmos-sdk/internal/conv"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/types/address"
+	"github.com/cosmos/cosmos-sdk/types/kv"
+	"github.com/cosmos/cosmos-sdk/x/authz"
+)
+
+// Keys for store prefixes
+// Items are stored with the following key: values
+//
+// - 0x01: Grant
+// - 0x02: GrantQueueItem
+var (
+	GrantKey         = []byte{0x01} // prefix for each key
+	GrantQueuePrefix = []byte{0x02}
+)
+
+var lenTime = len(sdk.FormatTimeBytes(time.Now()))
+
+// StoreKey is the store key string for authz
+const StoreKey = authz.ModuleName
+
+// grantStoreKey - return authorization store key
+// Items are stored with the following key: values
+//
+// - 0x01: Grant
+func grantStoreKey(grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) []byte {
+	m := conv.UnsafeStrToBytes(msgType)
+	granter = address.MustLengthPrefix(granter)
+	grantee = address.MustLengthPrefix(grantee)
+	key := sdk.AppendLengthPrefixedBytes(GrantKey, granter, grantee, m)
+	return key
+}
+
+// parseGrantStoreKey - split granter, grantee address and msg type from the authorization key
+func parseGrantStoreKey(key []byte) (granterAddr, granteeAddr sdk.AccAddress, msgType string) {
+	// key is of format:
+	// 0x01 | granter_address_len | granter_address_bytes | grantee_address_len | grantee_address_bytes | msgType_bytes
+	granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, 1) // ignore key[0] since it is a prefix key
+	granterAddr, granterAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0]))
+
+	granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrEndIndex+1, 1)
+	granteeAddr, granteeAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0]))
+
+	kv.AssertKeyAtLeastLength(key, granteeAddrEndIndex+1)
+	return granterAddr, granteeAddr, conv.UnsafeBytesToStr(key[(granteeAddrEndIndex + 1):])
+}
+
+// parseGrantQueueKey splits expiration time, granter and grantee from the grant queue key
+func parseGrantQueueKey(key []byte) (time.Time, sdk.AccAddress, sdk.AccAddress, error) {
+	// key is of format:
+	// 0x02 | expiration_bytes | granter_address_len | granter_address_bytes | grantee_address_len | grantee_address_bytes
+	expBytes, expEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, lenTime)
+	exp, err := sdk.ParseTimeBytes(expBytes)
+	if err != nil {
+		return exp, nil, nil, err
+	}
+
+	granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, expEndIndex+1, 1)
+	granter, granterEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0]))
+
+	granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterEndIndex+1, 1)
+	grantee, _ := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0]))
+
+	return exp, granter, grantee, nil
+}
+
+// GrantQueueKey - return grant queue store key. If a given grant doesn't have a defined
+// expiration, then it should not be used in the pruning queue.
+// Key format is:
+//
+//	0x02 | expiration_bytes | granter_address_len | granter_address_bytes | grantee_address_len | grantee_address_bytes: GrantQueueItem
+func GrantQueueKey(expiration time.Time, granter sdk.AccAddress, grantee sdk.AccAddress) []byte {
+	exp := sdk.FormatTimeBytes(expiration)
+	granter = address.MustLengthPrefix(granter)
+	grantee = address.MustLengthPrefix(grantee)
+	return sdk.AppendLengthPrefixedBytes(GrantQueuePrefix, exp, granter, grantee)
+}
+
+// GrantQueueTimePrefix - return grant queue time prefix
+func GrantQueueTimePrefix(expiration time.Time) []byte {
+	return append(GrantQueuePrefix, sdk.FormatTimeBytes(expiration)...)
+}
+
+// firstAddressFromGrantStoreKey parses the first address only
+func firstAddressFromGrantStoreKey(key []byte) sdk.AccAddress {
+	addrLen := key[0]
+	return sdk.AccAddress(key[1 : 1+addrLen])
+}
+```
+
+The `GrantQueueItem` object contains the list of type URLs between granter and grantee that expire at the time indicated in the key.
+
+## Messages
+
+In this section we describe the processing of messages for the authz module.
+
+### MsgGrant
+
+An authorization grant is created using the `MsgGrant` message.
+If there is already a grant for the `(granter, grantee, Authorization)` triple, then the new grant overwrites the previous one. To update or extend an existing grant, a new grant with the same `(granter, grantee, Authorization)` triple should be created.
+
+```protobuf
+// MsgGrant is a request type for Grant method. It declares authorization to the grantee
+// on behalf of the granter with the provided expiration time.
+message MsgGrant {
+  option (cosmos.msg.v1.signer) = "granter";
+  option (amino.name) = "cosmos-sdk/MsgGrant";
+
+  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  cosmos.authz.v1beta1.Grant grant = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+The message handling should fail if:
+
+* both granter and grantee have the same address.
+* provided `Expiration` time is less than the current unix timestamp (but a grant will be created if no `expiration` time is provided, since `expiration` is optional).
+* provided `Grant.Authorization` is not implemented.
+* `Authorization.MsgTypeURL()` is not defined in the router (there is no defined handler in the app router to handle that Msg type).
+
+### MsgRevoke
+
+A grant can be removed with the `MsgRevoke` message.
+
+```protobuf
+// MsgRevoke revokes any authorization with the provided sdk.Msg type on the
+// granter's account that has been granted to the grantee.
+message MsgRevoke {
+  option (cosmos.msg.v1.signer) = "granter";
+  option (amino.name) = "cosmos-sdk/MsgRevoke";
+
+  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string msg_type_url = 3;
+}
+```
+
+The message handling should fail if:
+
+* both granter and grantee have the same address.
+* provided `MsgTypeUrl` is empty.
+
+NOTE: The `MsgExec` message removes a grant if the grant has expired.
+
+### MsgExec
+
+When a grantee wants to execute a transaction on behalf of a granter, they must send `MsgExec`.
+
+```protobuf
+// MsgExec attempts to execute the provided messages using
+// authorizations granted to the grantee. Each message should have only
+// one signer corresponding to the granter of the authorization.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "grantee";
+  option (amino.name) = "cosmos-sdk/MsgExec";
+
+  string grantee = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // Execute Msg.
+  // The x/authz will try to find a grant matching (msg.signers[0], grantee, MsgTypeURL(msg))
+  // triple and validate it.
+  repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = "cosmos.base.v1beta1.Msg"];
+}
+```
+
+The message handling should fail if:
+
+* provided `Authorization` is not implemented.
+* grantee doesn't have permission to run the transaction.
+* granted authorization is expired.
+
+## Events
+
+The authz module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main/cosmos.authz.v1beta1#cosmos.authz.v1beta1.EventGrant).
+
+## Client
+
+### CLI
+
+A user can query and interact with the `authz` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `authz` state.
+
+```bash
+simd query authz --help
+```
+
+##### grants
+
+The `grants` command allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type.
+
+```bash
+simd query authz grants [granter-addr] [grantee-addr] [msg-type-url]? [flags]
+```
+
+Example:
+
+```bash
+simd query authz grants cosmos1.. cosmos1.. /cosmos.bank.v1beta1.MsgSend
+```
+
+Example Output:
+
+```bash
+grants:
+- authorization:
+    '@type': /cosmos.bank.v1beta1.SendAuthorization
+    spend_limit:
+    - amount: "100"
+      denom: stake
+  expiration: "2022-01-01T00:00:00Z"
+pagination: null
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `authz` module.
+
+```bash
+simd tx authz --help
+```
+
+##### exec
+
+The `exec` command allows a grantee to execute a transaction on behalf of the granter.
+
+```bash
+simd tx authz exec [tx-json-file] --from [grantee] [flags]
+```
+
+Example:
+
+```bash
+simd tx authz exec tx.json --from=cosmos1..
+```
+
+##### grant
+
+The `grant` command allows a granter to grant an authorization to a grantee.
+
+```bash
+simd tx authz grant [grantee] [authorization-type] --from [granter] [flags]
+```
+
+* The `send` authorization\_type refers to the built-in `SendAuthorization` type. The custom flags available are `spend-limit` (required) and `allow-list` (optional), documented [here](#sendauthorization).
+
+Example:
+
+```bash
+simd tx authz grant cosmos1.. send --spend-limit=100stake --allow-list=cosmos1...,cosmos2... --from=cosmos1..
+```
+
+* The `generic` authorization\_type refers to the built-in `GenericAuthorization` type. The custom flag available is `msg-type` (required), documented [here](#genericauthorization).
+
+> Note: `msg-type` is any valid Cosmos SDK `Msg` type URL.
+
+Example:
+
+```bash
+simd tx authz grant cosmos1.. generic --msg-type=/cosmos.bank.v1beta1.MsgSend --from=cosmos1..
+```
+
+* The `delegate`, `unbond`, and `redelegate` authorization\_types refer to the built-in `StakeAuthorization` type. The custom flags available are `spend-limit` (optional), `allowed-validators` (optional) and `deny-validators` (optional), documented [here](#stakeauthorization).
+
+> Note: `allowed-validators` and `deny-validators` cannot both be empty. `spend-limit` represents the `MaxTokens`.
+
+Example:
+
+```bash
+simd tx authz grant cosmos1.. delegate --spend-limit=100stake --allowed-validators=cosmos...,cosmos... --deny-validators=cosmos... --from=cosmos1..
+```
+
+##### revoke
+
+The `revoke` command allows a granter to revoke an authorization from a grantee.
+
+```bash
+simd tx authz revoke [grantee] [msg-type-url] --from=[granter] [flags]
+```
+
+Example:
+
+```bash
+simd tx authz revoke cosmos1.. /cosmos.bank.v1beta1.MsgSend --from=cosmos1..
+```
+
+### gRPC
+
+A user can query the `authz` module using gRPC endpoints.
+
+#### Grants
+
+The `Grants` endpoint allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type.
+
+```bash
+cosmos.authz.v1beta1.Query/Grants
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"granter":"cosmos1..","grantee":"cosmos1..","msg_type_url":"/cosmos.bank.v1beta1.MsgSend"}' \
+    localhost:9090 \
+    cosmos.authz.v1beta1.Query/Grants
+```
+
+Example Output:
+
+```bash expandable
+{
+  "grants": [
+    {
+      "authorization": {
+        "@type": "/cosmos.bank.v1beta1.SendAuthorization",
+        "spendLimit": [
+          {
+            "denom": "stake",
+            "amount": "100"
+          }
+        ]
+      },
+      "expiration": "2022-01-01T00:00:00Z"
+    }
+  ]
+}
+```
+
+### REST
+
+A user can query the `authz` module using REST endpoints.
+ +```bash +/cosmos/authz/v1beta1/grants +``` + +Example: + +```bash +curl "localhost:1317/cosmos/authz/v1beta1/grants?granter=cosmos1..&grantee=cosmos1..&msg_type_url=/cosmos.bank.v1beta1.MsgSend" +``` + +Example Output: + +```bash expandable +{ + "grants": [ + { + "authorization": { + "@type": "/cosmos.bank.v1beta1.SendAuthorization", + "spend_limit": [ + { + "denom": "stake", + "amount": "100" + } + ] + }, + "expiration": "2022-01-01T00:00:00Z" + } + ], + "pagination": null +} +``` diff --git a/docs/sdk/next/documentation/module-system/bank.mdx b/docs/sdk/next/documentation/module-system/bank.mdx new file mode 100644 index 00000000..9c0d5bbc --- /dev/null +++ b/docs/sdk/next/documentation/module-system/bank.mdx @@ -0,0 +1,1209 @@ +--- +title: '`x/bank`' +description: This document specifies the bank module of the Cosmos SDK. +--- + +## Abstract + +This document specifies the bank module of the Cosmos SDK. + +The bank module is responsible for handling multi-asset coin transfers between +accounts and tracking special-case pseudo-transfers which must work differently +with particular kinds of accounts (notably delegating/undelegating for vesting +accounts). It exposes several interfaces with varying capabilities for secure +interaction with other modules which must alter user balances. + +In addition, the bank module tracks and provides query support for the total +supply of all assets used in the application. + +This module is used in the Cosmos Hub. 
+
+## Contents
+
+* [Supply](#supply)
+  * [Total Supply](#total-supply)
+* [Module Accounts](#module-accounts)
+  * [Permissions](#permissions)
+* [State](#state)
+* [Params](#params)
+* [Keepers](#keepers)
+* [Messages](#messages)
+* [Events](#events)
+  * [Message Events](#message-events)
+  * [Keeper Events](#keeper-events)
+* [Parameters](#parameters)
+  * [SendEnabled](#sendenabled)
+  * [DefaultSendEnabled](#defaultsendenabled)
+* [Client](#client)
+  * [CLI](#cli)
+  * [Query](#query)
+  * [Transactions](#transactions)
+* [gRPC](#grpc)
+
+## Supply
+
+The `supply` functionality:
+
+* passively tracks the total supply of coins within a chain,
+* provides a pattern for modules to hold/interact with `Coins`, and
+* introduces the invariant check to verify a chain's total supply.
+
+### Total Supply
+
+The total `Supply` of the network is equal to the sum of all coins across all accounts. The total supply is updated every time a `Coin` is minted (e.g. as part of the inflation mechanism) or burned (e.g. due to slashing or if a governance proposal is vetoed).
+
+## Module Accounts
+
+The supply functionality introduces a new type of `auth.Account` which can be used by
+modules to allocate tokens and in special cases mint or burn tokens. At a base
+level these module accounts are capable of sending/receiving tokens to and from
+`auth.Account`s and other module accounts. This design replaces previous
+alternative designs where, to hold tokens, modules would burn the incoming
+tokens from the sender account, and then track those tokens internally. Later,
+in order to send tokens, the module would need to effectively mint tokens
+within a destination account. The new design removes duplicate logic between
+modules to perform this accounting.
+
+The `ModuleAccount` interface is defined as follows:
+
+```go
+type ModuleAccount interface {
+	auth.Account              // same methods as the Account interface
+
+	GetName() string          // name of the module; used to obtain the address
+	GetPermissions() []string // permissions of module account
+	HasPermission(string) bool
+}
+```
+
+> **WARNING!**
+> Any module or message handler that allows either direct or indirect sending of funds must explicitly guarantee those funds cannot be sent to module accounts (unless allowed).
+
+The supply `Keeper` also introduces new wrapper functions for the auth `Keeper`
+and the bank `Keeper` that are related to `ModuleAccount`s in order to be able
+to:
+
+* Get and set `ModuleAccount`s by providing the `Name`.
+* Send coins from and to other `ModuleAccount`s or standard `Account`s
+  (`BaseAccount` or `VestingAccount`) by passing only the `Name`.
+* `Mint` or `Burn` coins for a `ModuleAccount` (restricted to its permissions).
+
+### Permissions
+
+Each `ModuleAccount` has a different set of permissions that provide different
+object capabilities to perform certain actions. Permissions need to be
+registered upon the creation of the supply `Keeper` so that every time a
+`ModuleAccount` calls the allowed functions, the `Keeper` can look up the
+permissions for that specific account and perform or not perform the action.
+
+The available permissions are:
+
+* `Minter`: allows for a module to mint a specific amount of coins.
+* `Burner`: allows for a module to burn a specific amount of coins.
+* `Staking`: allows for a module to delegate and undelegate a specific amount of coins.
+
+## State
+
+The `x/bank` module keeps state of the following primary objects:
+
+1. Account balances
+2. Denomination metadata
+3. The total supply of all balances
+4. Information on which denominations are allowed to be sent.
+ +In addition, the `x/bank` module keeps the following indexes to manage the +aforementioned state: + +* Supply Index: `0x0 | byte(denom) -> byte(amount)` +* Denom Metadata Index: `0x1 | byte(denom) -> ProtocolBuffer(Metadata)` +* Balances Index: `0x2 | byte(address length) | []byte(address) | []byte(balance.Denom) -> ProtocolBuffer(balance)` +* Reverse Denomination to Address Index: `0x03 | byte(denom) | 0x00 | []byte(address) -> 0` + +## Params + +The bank module stores it's params in state with the prefix of `0x05`, +it can be updated with governance or the address with authority. + +* Params: `0x05 | ProtocolBuffer(Params)` + +```protobuf +// Params defines the parameters for the bank module. +message Params { + option (amino.name) = "cosmos-sdk/x/bank/Params"; + option (gogoproto.goproto_stringer) = false; + // Deprecated: Use of SendEnabled in params is deprecated. + // For genesis, use the newly added send_enabled field in the genesis object. + // Storage, lookup, and manipulation of this information is now in the keeper. + // + // As of cosmos-sdk 0.47, this only exists for backwards compatibility of genesis files. + repeated SendEnabled send_enabled = 1 [deprecated = true]; + bool default_send_enabled = 2; +} +``` + +## Keepers + +The bank module provides these exported keeper interfaces that can be +passed to other modules that read or update account balances. Modules +should use the least-permissive interface that provides the functionality they +require. + +Best practices dictate careful review of `bank` module code to ensure that +permissions are limited in the way that you expect. + +### Denied Addresses + +The `x/bank` module accepts a map of addresses that are considered blocklisted +from directly and explicitly receiving funds through means such as `MsgSend` and +`MsgMultiSend` and direct API calls like `SendCoinsFromModuleToAccount`. + +Typically, these addresses are module accounts. 
If these addresses receive funds +outside the expected rules of the state machine, invariants are likely to be +broken and could result in a halted network. + +By providing the `x/bank` module with a blocklisted set of addresses, an error occurs for the operation if a user or client attempts to directly or indirectly send funds to a blocklisted account, for example, by using [IBC](https://ibc.cosmos.network). + +### Common Types + +#### Input + +An input of a multiparty transfer + +```protobuf +/ Input models transaction input. +message Input { + string address = 1; + repeated cosmos.base.v1beta1.Coin coins = 2; +} +``` + +#### Output + +An output of a multiparty transfer. + +```protobuf +/ Output models transaction outputs. +message Output { + string address = 1; + repeated cosmos.base.v1beta1.Coin coins = 2; +} +``` + +### BaseKeeper + +The base keeper provides full-permission access: the ability to arbitrary modify any account's balance and mint or burn coins. + +Restricted permission to mint per module could be achieved by using baseKeeper with `WithMintCoinsRestriction` to give specific restrictions to mint (e.g. only minting certain denom). + +```go expandable +/ Keeper defines a module interface that facilitates the transfer of coins +/ between accounts. 
+type Keeper interface { + SendKeeper + WithMintCoinsRestriction(MintingRestrictionFn) + +BaseKeeper + + InitGenesis(context.Context, *types.GenesisState) + +ExportGenesis(context.Context) *types.GenesisState + + GetSupply(ctx context.Context, denom string) + +sdk.Coin + HasSupply(ctx context.Context, denom string) + +bool + GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) + +IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) + +bool) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) + +HasDenomMetaData(ctx context.Context, denom string) + +bool + SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) + +IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) + +SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) + +error + SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + + DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error + UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error + + / GetAuthority gets the address capable of executing governance proposal messages. Usually the gov module account. 
+ GetAuthority() + +string + + types.QueryServer +} +``` + +### SendKeeper + +The send keeper provides access to account balances and the ability to transfer coins between +accounts. The send keeper does not alter the total supply (mint or burn coins). + +```go expandable +/ SendKeeper defines a module interface that facilitates the transfer of coins +/ between accounts without the possibility of creating coins. +type SendKeeper interface { + ViewKeeper + + AppendSendRestriction(restriction SendRestrictionFn) + +PrependSendRestriction(restriction SendRestrictionFn) + +ClearSendRestriction() + +InputOutputCoins(ctx context.Context, input types.Input, outputs []types.Output) + +error + SendCoins(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) + +error + + GetParams(ctx context.Context) + +types.Params + SetParams(ctx context.Context, params types.Params) + +error + + IsSendEnabledDenom(ctx context.Context, denom string) + +bool + SetSendEnabled(ctx context.Context, denom string, value bool) + +SetAllSendEnabled(ctx context.Context, sendEnableds []*types.SendEnabled) + +DeleteSendEnabled(ctx context.Context, denom string) + +IterateSendEnabledEntries(ctx context.Context, cb func(denom string, sendEnabled bool) (stop bool)) + +GetAllSendEnabledEntries(ctx context.Context) []types.SendEnabled + + IsSendEnabledCoin(ctx context.Context, coin sdk.Coin) + +bool + IsSendEnabledCoins(ctx context.Context, coins ...sdk.Coin) + +error + + BlockedAddr(addr sdk.AccAddress) + +bool +} +``` + +#### Send Restrictions + +The `SendKeeper` applies a `SendRestrictionFn` before each transfer of funds. + +```golang +/ A SendRestrictionFn can restrict sends and/or provide a new receiver address. 
+type SendRestrictionFn func(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (newToAddr sdk.AccAddress, err error) +``` + +After the `SendKeeper` (or `BaseKeeper`) has been created, send restrictions can be added to it using the `AppendSendRestriction` or `PrependSendRestriction` functions. +Both functions compose the provided restriction with any previously provided restrictions. +`AppendSendRestriction` adds the provided restriction to be run after any previously provided send restrictions. +`PrependSendRestriction` adds the restriction to be run before any previously provided send restrictions. +The composition will short-circuit when an error is encountered. I.e. if the first one returns an error, the second is not run. + +During `SendCoins`, the send restriction is applied before coins are removed from the from address and adding them to the to address. +During `InputOutputCoins`, the send restriction is applied after the input coins are removed and once for each output before the funds are added. + +A send restriction function should make use of a custom value in the context to allow bypassing that specific restriction. + +Send Restrictions are not placed on `ModuleToAccount` or `ModuleToModule` transfers. This is done due to modules needing to move funds to user accounts and other module accounts. This is a design decision to allow for more flexibility in the state machine. The state machine should be able to move funds between module accounts and user accounts without restrictions. + +Secondly this limitation would limit the usage of the state machine even for itself. users would not be able to receive rewards, not be able to move funds between module accounts. In the case that a user sends funds from a user account to the community pool and then a governance proposal is used to get those tokens into the users account this would fall under the discretion of the app chain developer to what they would like to do here. 
We can not make strong assumptions here. +Thirdly, this issue could lead into a chain halt if a token is disabled and the token is moved in the begin/endblock. This is the last reason we see the current change and more damaging then beneficial for users. + +For example, in your module's keeper package, you'd define the send restriction function: + +```golang expandable +var _ banktypes.SendRestrictionFn = Keeper{ +}.SendRestrictionFn + +func (k Keeper) + +SendRestrictionFn(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (sdk.AccAddress, error) { + / Bypass if the context says to. + if mymodule.HasBypass(ctx) { + return toAddr, nil +} + + / Your custom send restriction logic goes here. + return nil, errors.New("not implemented") +} +``` + +The bank keeper should be provided to your keeper's constructor so the send restriction can be added to it: + +```golang +func NewKeeper(cdc codec.BinaryCodec, storeKey storetypes.StoreKey, bankKeeper mymodule.BankKeeper) + +Keeper { + rv := Keeper{/*...*/ +} + +bankKeeper.AppendSendRestriction(rv.SendRestrictionFn) + +return rv +} +``` + +Then, in the `mymodule` package, define the context helpers: + +```golang expandable +const bypassKey = "bypass-mymodule-restriction" + +/ WithBypass returns a new context that will cause the mymodule bank send restriction to be skipped. +func WithBypass(ctx context.Context) + +context.Context { + return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, true) +} + +/ WithoutBypass returns a new context that will cause the mymodule bank send restriction to not be skipped. +func WithoutBypass(ctx context.Context) + +context.Context { + return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, false) +} + +/ HasBypass checks the context to see if the mymodule bank send restriction should be skipped. 
+func HasBypass(ctx context.Context) + +bool { + bypassValue := ctx.Value(bypassKey) + if bypassValue == nil { + return false +} + +bypass, isBool := bypassValue.(bool) + +return isBool && bypass +} +``` + +Now, anywhere where you want to use `SendCoins` or `InputOutputCoins`, but you don't want your send restriction applied: + +```golang +func (k Keeper) + +DoThing(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) + +error { + return k.bankKeeper.SendCoins(mymodule.WithBypass(ctx), fromAddr, toAddr, amt) +} +``` + +### ViewKeeper + +The view keeper provides read-only access to account balances. The view keeper does not have balance alteration functionality. All balance lookups are `O(1)`. + +```go expandable +/ ViewKeeper defines a module interface that facilitates read only access to +/ account balances. +type ViewKeeper interface { + ValidateBalance(ctx context.Context, addr sdk.AccAddress) + +error + HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) + +bool + + GetAllBalances(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + GetAccountsBalances(ctx context.Context) []types.Balance + GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + LockedCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + + IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool)) + +IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool)) +} +``` + +## Messages + +### MsgSend + +Send coins from one address to another. + +```protobuf +// MsgSend represents a message to send coins from one account to another. 
+message MsgSend { + option (cosmos.msg.v1.signer) = "from_address"; + option (amino.name) = "cosmos-sdk/MsgSend"; + + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + string from_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string to_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + repeated cosmos.base.v1beta1.Coin amount = 3 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; +} +``` + +The message will fail under the following conditions: + +* The coins do not have sending enabled +* The `to` address is restricted + +### MsgMultiSend + +Send coins from one sender and to a series of different address. If any of the receiving addresses do not correspond to an existing account, a new account is created. + +```protobuf +// MsgMultiSend represents an arbitrary multi-in, multi-out send message. +message MsgMultiSend { + option (cosmos.msg.v1.signer) = "inputs"; + option (amino.name) = "cosmos-sdk/MsgMultiSend"; + + option (gogoproto.equal) = false; + + // Inputs, despite being `repeated`, only allows one sender input. This is + // checked in MsgMultiSend's ValidateBasic. + repeated Input inputs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated Output outputs = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message will fail under the following conditions: + +* Any of the coins do not have sending enabled +* Any of the `to` addresses are restricted +* Any of the coins are locked +* The inputs and outputs do not correctly correspond to one another + +### MsgUpdateParams + +The `bank` module params can be updated through `MsgUpdateParams`, which can be done using governance proposal. The signer will always be the `gov` module account address. + +```protobuf +// MsgUpdateParams is the Msg/UpdateParams request type. 
+// +// Since: cosmos-sdk 0.47 +message MsgUpdateParams { + option (cosmos.msg.v1.signer) = "authority"; + + // authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + option (amino.name) = "cosmos-sdk/x/bank/MsgUpdateParams"; + + // params defines the x/bank parameters to update. + // + // NOTE: All parameters must be supplied. + Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message handling can fail if: + +* signer is not the gov module account address. + +### MsgSetSendEnabled + +Used with the x/gov module to set create/edit SendEnabled entries. + +```protobuf +// MsgSetSendEnabled is the Msg/SetSendEnabled request type. +// +// Only entries to add/update/delete need to be included. +// Existing SendEnabled entries that are not included in this +// message are left unchanged. +// +// Since: cosmos-sdk 0.47 +message MsgSetSendEnabled { + option (cosmos.msg.v1.signer) = "authority"; + option (amino.name) = "cosmos-sdk/MsgSetSendEnabled"; + + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // send_enabled is the list of entries to add or update. + repeated SendEnabled send_enabled = 2; + + // use_default_for is a list of denoms that should use the params.default_send_enabled value. + // Denoms listed here will have their SendEnabled entries deleted. + // If a denom is included that doesn't have a SendEnabled entry, + // it will be ignored. + repeated string use_default_for = 3; +} +``` + +The message will fail under the following conditions: + +* The authority is not a bech32 address. +* The authority is not x/gov module's address. +* There are multiple SendEnabled entries with the same Denom. +* One or more SendEnabled entries has an invalid Denom. 
+ +## Events + +The bank module emits the following events: + +### Message Events + +#### MsgSend + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| transfer | recipient | `{recipientAddress}` | +| transfer | amount | `{amount}` | +| message | module | bank | +| message | action | send | +| message | sender | `{senderAddress}` | + +#### MsgMultiSend + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| transfer | recipient | `{recipientAddress}` | +| transfer | amount | `{amount}` | +| message | module | bank | +| message | action | multisend | +| message | sender | `{senderAddress}` | + +### Keeper Events + +In addition to message events, the bank keeper will produce events when the following methods are called (or any method which ends up calling them) + +#### MintCoins + +```json expandable +{ + "type": "coinbase", + "attributes": [ + { + "key": "minter", + "value": "{{sdk.AccAddress of the module minting coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being minted}}", + "index": true + } + ] +} +``` + +```json expandable +{ + "type": "coin_received", + "attributes": [ + { + "key": "receiver", + "value": "{{sdk.AccAddress of the module minting coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being received}}", + "index": true + } + ] +} +``` + +#### BurnCoins + +```json expandable +{ + "type": "burn", + "attributes": [ + { + "key": "burner", + "value": "{{sdk.AccAddress of the module burning coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being burned}}", + "index": true + } + ] +} +``` + +```json expandable +{ + "type": "coin_spent", + "attributes": [ + { + "key": "spender", + "value": "{{sdk.AccAddress of the module burning coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being burned}}", + "index": true + } + ] +} +``` + +#### addCoins + +```json 
expandable +{ + "type": "coin_received", + "attributes": [ + { + "key": "receiver", + "value": "{{sdk.AccAddress of the address beneficiary of the coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being received}}", + "index": true + } + ] +} +``` + +#### subUnlockedCoins/DelegateCoins + +```json expandable +{ + "type": "coin_spent", + "attributes": [ + { + "key": "spender", + "value": "{{sdk.AccAddress of the address which is spending coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being spent}}", + "index": true + } + ] +} +``` + +## Parameters + +The bank module contains the following parameters + +### SendEnabled + +The SendEnabled parameter is now deprecated and not to be use. It is replaced +with state store records. + +### DefaultSendEnabled + +The default send enabled value controls send transfer capability for all +coin denominations unless specifically included in the array of `SendEnabled` +parameters. + +## Client + +### CLI + +A user can query and interact with the `bank` module using the CLI. + +#### Query + +The `query` commands allow users to query `bank` state. + +```shell +simd query bank --help +``` + +##### balances + +The `balances` command allows users to query account balances by address. + +```shell +simd query bank balances [address] [flags] +``` + +Example: + +```shell +simd query bank balances cosmos1.. +``` + +Example Output: + +```yml +balances: +- amount: "1000000000" + denom: stake +pagination: + next_key: null + total: "0" +``` + +##### denom-metadata + +The `denom-metadata` command allows users to query metadata for coin denominations. A user can query metadata for a single denomination using the `--denom` flag or all denominations without it. 
+ +```shell +simd query bank denom-metadata [flags] +``` + +Example: + +```shell +simd query bank denom-metadata --denom stake +``` + +Example Output: + +```yml +metadata: + base: stake + denom_units: + - aliases: + - STAKE + denom: stake + description: native staking token of simulation app + display: stake + name: SimApp Token + symbol: STK +``` + +##### total + +The `total` command allows users to query the total supply of coins. A user can query the total supply for a single coin using the `--denom` flag or all coins without it. + +```shell +simd query bank total [flags] +``` + +Example: + +```shell +simd query bank total --denom stake +``` + +Example Output: + +```yml +amount: "10000000000" +denom: stake +``` + +##### send-enabled + +The `send-enabled` command allows users to query for all or some SendEnabled entries. + +```shell +simd query bank send-enabled [denom1 ...] [flags] +``` + +Example: + +```shell +simd query bank send-enabled +``` + +Example output: + +```yml +send_enabled: +- denom: foocoin + enabled: true +- denom: barcoin +pagination: + next-key: null + total: 2 +``` + +#### Transactions + +The `tx` commands allow users to interact with the `bank` module. + +```shell +simd tx bank --help +``` + +##### send + +The `send` command allows users to send funds from one account to another. + +```shell +simd tx bank send [from_key_or_address] [to_address] [amount] [flags] +``` + +Example: + +```shell +simd tx bank send cosmos1.. cosmos1.. 100stake +``` + +## gRPC + +A user can query the `bank` module using gRPC endpoints. + +### Balance + +The `Balance` endpoint allows users to query account balance by address for a given denomination. 
+ +```shell +cosmos.bank.v1beta1.Query/Balance +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1..","denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/Balance +``` + +Example Output: + +```json +{ + "balance": { + "denom": "stake", + "amount": "1000000000" + } +} +``` + +### AllBalances + +The `AllBalances` endpoint allows users to query account balance by address for all denominations. + +```shell +cosmos.bank.v1beta1.Query/AllBalances +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/AllBalances +``` + +Example Output: + +```json expandable +{ + "balances": [ + { + "denom": "stake", + "amount": "1000000000" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### DenomMetadata + +The `DenomMetadata` endpoint allows users to query metadata for a single coin denomination. + +```shell +cosmos.bank.v1beta1.Query/DenomMetadata +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomMetadata +``` + +Example Output: + +```json expandable +{ + "metadata": { + "description": "native staking token of simulation app", + "denomUnits": [ + { + "denom": "stake", + "aliases": [ + "STAKE" + ] + } + ], + "base": "stake", + "display": "stake", + "name": "SimApp Token", + "symbol": "STK" + } +} +``` + +### DenomsMetadata + +The `DenomsMetadata` endpoint allows users to query metadata for all coin denominations. 
+ +```shell +cosmos.bank.v1beta1.Query/DenomsMetadata +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomsMetadata +``` + +Example Output: + +```json expandable +{ + "metadatas": [ + { + "description": "native staking token of simulation app", + "denomUnits": [ + { + "denom": "stake", + "aliases": [ + "STAKE" + ] + } + ], + "base": "stake", + "display": "stake", + "name": "SimApp Token", + "symbol": "STK" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### DenomOwners + +The `DenomOwners` endpoint allows users to query metadata for a single coin denomination. + +```shell +cosmos.bank.v1beta1.Query/DenomOwners +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomOwners +``` + +Example Output: + +```json expandable +{ + "denomOwners": [ + { + "address": "cosmos1..", + "balance": { + "denom": "stake", + "amount": "5000000000" + } + +}, + { + "address": "cosmos1..", + "balance": { + "denom": "stake", + "amount": "5000000000" + } + +}, + ], + "pagination": { + "total": "2" + } +} +``` + +### TotalSupply + +The `TotalSupply` endpoint allows users to query the total supply of all coins. + +```shell +cosmos.bank.v1beta1.Query/TotalSupply +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/TotalSupply +``` + +Example Output: + +```json expandable +{ + "supply": [ + { + "denom": "stake", + "amount": "10000000000" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### SupplyOf + +The `SupplyOf` endpoint allows users to query the total supply of a single coin. 
+ +```shell +cosmos.bank.v1beta1.Query/SupplyOf +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/SupplyOf +``` + +Example Output: + +```json +{ + "amount": { + "denom": "stake", + "amount": "10000000000" + } +} +``` + +### Params + +The `Params` endpoint allows users to query the parameters of the `bank` module. + +```shell +cosmos.bank.v1beta1.Query/Params +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/Params +``` + +Example Output: + +```json +{ + "params": { + "defaultSendEnabled": true + } +} +``` + +### SendEnabled + +The `SendEnabled` endpoints allows users to query the SendEnabled entries of the `bank` module. + +Any denominations NOT returned, use the `Params.DefaultSendEnabled` value. + +```shell +cosmos.bank.v1beta1.Query/SendEnabled +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/SendEnabled +``` + +Example Output: + +```json expandable +{ + "send_enabled": [ + { + "denom": "foocoin", + "enabled": true + }, + { + "denom": "barcoin" + } + ], + "pagination": { + "next-key": null, + "total": 2 + } +} +``` diff --git a/docs/sdk/next/documentation/module-system/beginblock-endblock.mdx b/docs/sdk/next/documentation/module-system/beginblock-endblock.mdx new file mode 100644 index 00000000..f2786a93 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/beginblock-endblock.mdx @@ -0,0 +1,112 @@ +--- +title: BeginBlocker and EndBlocker +--- + +## Synopsis + +`BeginBlocker` and `EndBlocker` are optional methods module developers can implement in their module. They will be triggered at the beginning and at the end of each block respectively, when the [`BeginBlock`](/docs/sdk/next/documentation/application-framework/baseapp#beginblock) and [`EndBlock`](/docs/sdk/next/documentation/application-framework/baseapp#endblock) ABCI messages are received from the underlying consensus engine. 
+ + +**Pre-requisite Readings** + +- [Module Manager](/docs/sdk/next/documentation/module-system/module-manager) + + + +## BeginBlocker and EndBlocker + +`BeginBlocker` and `EndBlocker` are a way for module developers to add automatic execution of logic to their module. This is a powerful tool that should be used carefully, as complex automatic functions can slow down or even halt the chain. + +In 0.47.0, Prepare and Process Proposal were added that allow app developers to do arbitrary work at those phases, but they do not influence the work that will be done in BeginBlock. If an application requires `BeginBlock` to execute prior to any sort of work is done then this is not possible today (0.50.0). + +When needed, `BeginBlocker` and `EndBlocker` are implemented as part of the [`HasBeginBlocker`, `HasABCIEndBlocker` and `EndBlocker` interfaces](/docs/sdk/next/documentation/module-system/module-manager#appmodule). This means either can be left-out if not required. The `BeginBlock` and `EndBlock` methods of the interface implemented in `module.go` generally defer to `BeginBlocker` and `EndBlocker` methods respectively, which are usually implemented in `abci.go`. + +The actual implementation of `BeginBlocker` and `EndBlocker` in `abci.go` is very similar to that of a [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services): + +- They generally use the [`keeper`](/docs/sdk/next/documentation/module-system/keeper) and [`ctx`](/docs/sdk/next/documentation/application-framework/context) to retrieve information about the latest state. +- If needed, they use the `keeper` and `ctx` to trigger state-transitions. +- If needed, they can emit [`events`](/docs/sdk/next/api-reference/events-streaming/events) via the `ctx`'s `EventManager`. + +A specific type of `EndBlocker` is available to return validator updates to the underlying consensus engine in the form of an [`[]abci.ValidatorUpdates`](https://docs.cometbft.com/v0.37/spec/abci/abci++_methods#endblock). 
This is the preferred way to implement custom validator changes. + +It is possible for developers to define the order of execution between the `BeginBlocker`/`EndBlocker` functions of each of their application's modules via the module's manager `SetOrderBeginBlocker`/`SetOrderEndBlocker` methods. For more on the module manager, click [here](/docs/sdk/next/documentation/module-system/module-manager#manager). + +See an example implementation of `BeginBlocker` from the `distribution` module: + +```go expandable +package distribution + +import ( + + "time" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + "github.com/cosmos/cosmos-sdk/x/distribution/types" +) + +/ BeginBlocker sets the proposer for determining distribution during endblock +/ and distribute rewards for the previous block. +func BeginBlocker(ctx sdk.Context, k keeper.Keeper) + +error { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker) + + / determine the total power signing the block + var previousTotalPower int64 + for _, voteInfo := range ctx.VoteInfos() { + previousTotalPower += voteInfo.Validator.Power +} + + / TODO this is Tendermint-dependent + / ref https://github.com/cosmos/cosmos-sdk/issues/3095 + if ctx.BlockHeight() > 1 { + k.AllocateTokens(ctx, previousTotalPower, ctx.VoteInfos()) +} + + / record the proposer for when we payout on the next block + consAddr := sdk.ConsAddress(ctx.BlockHeader().ProposerAddress) + +k.SetPreviousProposerConsAddr(ctx, consAddr) + +return nil +} +``` + +and an example implementation of `EndBlocker` from the `staking` module: + +```go expandable +package keeper + +import ( + + "context" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ BeginBlocker will persist the current 
header and validator set as a historical entry +/ and prune the oldest entry based on the HistoricalEntries parameter +func (k *Keeper) + +BeginBlocker(ctx sdk.Context) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker) + +k.TrackHistoricalInfo(ctx) +} + +/ Called every block, update validator set +func (k *Keeper) + +EndBlocker(ctx context.Context) ([]abci.ValidatorUpdate, error) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + +return k.BlockValidatorUpdates(sdk.UnwrapSDKContext(ctx)), nil +} +``` + +{/* TODO: leaving this here to update docs with core api changes */} diff --git a/docs/sdk/next/documentation/module-system/circuit.mdx b/docs/sdk/next/documentation/module-system/circuit.mdx new file mode 100644 index 00000000..1e6d58e6 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/circuit.mdx @@ -0,0 +1,600 @@ +--- +title: "`x/circuit`" +--- + +## Concepts + +Circuit Breaker is a module that is meant to avoid a chain needing to halt/shut down in the presence of a vulnerability, instead the module will allow specific messages or all messages to be disabled. When operating a chain, if it is app specific then a halt of the chain is less detrimental, but if there are applications built on top of the chain then halting is expensive due to the disturbance to applications. + +Circuit Breaker works with the idea that an address or set of addresses have the right to block messages from being executed and/or included in the mempool. Any address with a permission is able to reset the circuit breaker for the message. 
+ +The transactions are checked and can be rejected at two points: + +- In `CircuitBreakerDecorator` [ante handler](https://docs.cosmos.network/main/learn/advanced/baseapp#antehandler): + +```go expandable +package ante + +import ( + + "context" + "github.com/cockroachdb/errors" + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ CircuitBreaker is an interface that defines the methods for a circuit breaker. +type CircuitBreaker interface { + IsAllowed(ctx context.Context, typeURL string) (bool, error) +} + +/ CircuitBreakerDecorator is an AnteDecorator that checks if the transaction type is allowed to enter the mempool or be executed +type CircuitBreakerDecorator struct { + circuitKeeper CircuitBreaker +} + +func NewCircuitBreakerDecorator(ck CircuitBreaker) + +CircuitBreakerDecorator { + return CircuitBreakerDecorator{ + circuitKeeper: ck, +} +} + +func (cbd CircuitBreakerDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + / loop through all the messages and check if the message type is allowed + for _, msg := range tx.GetMsgs() { + isAllowed, err := cbd.circuitKeeper.IsAllowed(ctx, sdk.MsgTypeURL(msg)) + if err != nil { + return ctx, err +} + if !isAllowed { + return ctx, errors.New("tx type not allowed") +} + +} + +return next(ctx, tx, simulate) +} +``` + +- With a [message router check](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router): + +```go expandable +package baseapp + +import ( + + "context" + "fmt" + + gogogrpc "github.com/cosmos/gogoproto/grpc" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc" + "google.golang.org/protobuf/runtime/protoiface" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/baseapp/internal/protocompat" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ 
MessageRouter ADR 031 request type routing +/ docs/sdk/next/documentation/legacy/adr-comprehensive +type MessageRouter interface { + Handler(msg sdk.Msg) + +MsgServiceHandler + HandlerByTypeURL(typeURL string) + +MsgServiceHandler +} + +/ MsgServiceRouter routes fully-qualified Msg service methods to their handler. +type MsgServiceRouter struct { + interfaceRegistry codectypes.InterfaceRegistry + routes map[string]MsgServiceHandler + hybridHandlers map[string]func(ctx context.Context, req, resp protoiface.MessageV1) + +error + circuitBreaker CircuitBreaker +} + +var _ gogogrpc.Server = &MsgServiceRouter{ +} + +/ NewMsgServiceRouter creates a new MsgServiceRouter. +func NewMsgServiceRouter() *MsgServiceRouter { + return &MsgServiceRouter{ + routes: map[string]MsgServiceHandler{ +}, + hybridHandlers: map[string]func(ctx context.Context, req, resp protoiface.MessageV1) + +error{ +}, +} +} + +func (msr *MsgServiceRouter) + +SetCircuit(cb CircuitBreaker) { + msr.circuitBreaker = cb +} + +/ MsgServiceHandler defines a function type which handles Msg service message. +type MsgServiceHandler = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) + +/ Handler returns the MsgServiceHandler for a given msg or nil if not found. +func (msr *MsgServiceRouter) + +Handler(msg sdk.Msg) + +MsgServiceHandler { + return msr.routes[sdk.MsgTypeURL(msg)] +} + +/ HandlerByTypeURL returns the MsgServiceHandler for a given query route path or nil +/ if not found. +func (msr *MsgServiceRouter) + +HandlerByTypeURL(typeURL string) + +MsgServiceHandler { + return msr.routes[typeURL] +} + +/ RegisterService implements the gRPC Server.RegisterService method. sd is a gRPC +/ service description, handler is an object which implements that gRPC service. +/ +/ This function PANICs: +/ - if it is called before the service `Msg`s have been registered using +/ RegisterInterfaces, +/ - or if a service is being registered twice. 
+func (msr *MsgServiceRouter) + +RegisterService(sd *grpc.ServiceDesc, handler interface{ +}) { + / Adds a top-level query handler based on the gRPC service name. + for _, method := range sd.Methods { + err := msr.registerMsgServiceHandler(sd, method, handler) + if err != nil { + panic(err) +} + +err = msr.registerHybridHandler(sd, method, handler) + if err != nil { + panic(err) +} + +} +} + +func (msr *MsgServiceRouter) + +HybridHandlerByMsgName(msgName string) + +func(ctx context.Context, req, resp protoiface.MessageV1) + +error { + return msr.hybridHandlers[msgName] +} + +func (msr *MsgServiceRouter) + +registerHybridHandler(sd *grpc.ServiceDesc, method grpc.MethodDesc, handler interface{ +}) + +error { + inputName, err := protocompat.RequestFullNameFromMethodDesc(sd, method) + if err != nil { + return err +} + cdc := codec.NewProtoCodec(msr.interfaceRegistry) + +hybridHandler, err := protocompat.MakeHybridHandler(cdc, sd, method, handler) + if err != nil { + return err +} + / if circuit breaker is not nil, then we decorate the hybrid handler with the circuit breaker + if msr.circuitBreaker == nil { + msr.hybridHandlers[string(inputName)] = hybridHandler + return nil +} + / decorate the hybrid handler with the circuit breaker + circuitBreakerHybridHandler := func(ctx context.Context, req, resp protoiface.MessageV1) + +error { + messageName := codectypes.MsgTypeURL(req) + +allowed, err := msr.circuitBreaker.IsAllowed(ctx, messageName) + if err != nil { + return err +} + if !allowed { + return fmt.Errorf("circuit breaker disallows execution of message %s", messageName) +} + +return hybridHandler(ctx, req, resp) +} + +msr.hybridHandlers[string(inputName)] = circuitBreakerHybridHandler + return nil +} + +func (msr *MsgServiceRouter) + +registerMsgServiceHandler(sd *grpc.ServiceDesc, method grpc.MethodDesc, handler interface{ +}) + +error { + fqMethod := fmt.Sprintf("/%s/%s", sd.ServiceName, method.MethodName) + methodHandler := method.Handler + + var requestTypeName 
string + + / NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. + / This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. + / We use a no-op interceptor to avoid actually calling into the handler itself. + _, _ = methodHandler(nil, context.Background(), func(i interface{ +}) + +error { + msg, ok := i.(sdk.Msg) + if !ok { + / We panic here because there is no other alternative and the app cannot be initialized correctly + / this should only happen if there is a problem with code generation in which case the app won't + / work correctly anyway. + panic(fmt.Errorf("unable to register service method %s: %T does not implement sdk.Msg", fqMethod, i)) +} + +requestTypeName = sdk.MsgTypeURL(msg) + +return nil +}, noopInterceptor) + + / Check that the service Msg fully-qualified method name has already + / been registered (via RegisterInterfaces). If the user registers a + / service without registering according service Msg type, there might be + / some unexpected behavior down the road. Since we can't return an error + / (`Server.RegisterService` interface restriction) + +we panic (at startup). + reqType, err := msr.interfaceRegistry.Resolve(requestTypeName) + if err != nil || reqType == nil { + return fmt.Errorf( + "type_url %s has not been registered yet. "+ + "Before calling RegisterService, you must register all interfaces by calling the `RegisterInterfaces` "+ + "method on module.BasicManager. Each module should call `msgservice.RegisterMsgServiceDesc` inside its "+ + "`RegisterInterfaces` method with the `_Msg_serviceDesc` generated by proto-gen", + requestTypeName, + ) +} + + / Check that each service is only registered once. If a service is + / registered more than once, then we should error. Since we can't + / return an error (`Server.RegisterService` interface restriction) + +we + / panic (at startup). 
+ _, found := msr.routes[requestTypeName] + if found { + return fmt.Errorf( + "msg service %s has already been registered. Please make sure to only register each service once. "+ + "This usually means that there are conflicting modules registering the same msg service", + fqMethod, + ) +} + +msr.routes[requestTypeName] = func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + interceptor := func(goCtx context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{ +}, error) { + goCtx = context.WithValue(goCtx, sdk.SdkContextKey, ctx) + +return handler(goCtx, msg) +} + if m, ok := msg.(sdk.HasValidateBasic); ok { + if err := m.ValidateBasic(); err != nil { + return nil, err +} + +} + if msr.circuitBreaker != nil { + msgURL := sdk.MsgTypeURL(msg) + +isAllowed, err := msr.circuitBreaker.IsAllowed(ctx, msgURL) + if err != nil { + return nil, err +} + if !isAllowed { + return nil, fmt.Errorf("circuit breaker disables execution of this message: %s", msgURL) +} + +} + + / Call the method handler from the service description with the handler object. + / We don't do any decoding here because the decoding was already done. + res, err := methodHandler(handler, ctx, noopDecoder, interceptor) + if err != nil { + return nil, err +} + +resMsg, ok := res.(proto.Message) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "Expecting proto.Message, got %T", resMsg) +} + +return sdk.WrapServiceResult(ctx, resMsg, err) +} + +return nil +} + +/ SetInterfaceRegistry sets the interface registry for the router. 
+func (msr *MsgServiceRouter) + +SetInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) { + msr.interfaceRegistry = interfaceRegistry +} + +func noopDecoder(_ interface{ +}) + +error { + return nil +} + +func noopInterceptor(_ context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{ +}, error) { + return nil, nil +} +``` + + + The `CircuitBreakerDecorator` works for most use cases, but [does not check + the inner messages of a + transaction](https://docs.cosmos.network/main/learn/beginner/tx-lifecycle#antehandler). + This means some transactions (such as `x/authz` transactions or some `x/gov` + transactions) may pass the ante handler. **This does not affect the circuit + breaker** as the message router check will still fail the transaction. This + tradeoff is to avoid introducing more dependencies in the `x/circuit` module. + Chains can re-define the `CircuitBreakerDecorator` to check for inner messages + if they wish to do so. + + +## State + +### Accounts + +- AccountPermissions `0x1 | account_address -> ProtocolBuffer(CircuitBreakerPermissions)` + +```go expandable +type level int32 + +const ( + / LEVEL_NONE_UNSPECIFIED indicates that the account will have no circuit + / breaker permissions. + LEVEL_NONE_UNSPECIFIED = iota + / LEVEL_SOME_MSGS indicates that the account will have permission to + / trip or reset the circuit breaker for some Msg type URLs. If this level + / is chosen, a non-empty list of Msg type URLs must be provided in + / limit_type_urls. + LEVEL_SOME_MSGS + / LEVEL_ALL_MSGS indicates that the account can trip or reset the circuit + / breaker for Msg's of all type URLs. + LEVEL_ALL_MSGS + / LEVEL_SUPER_ADMIN indicates that the account can take all circuit breaker + / actions and can grant permissions to other accounts. 
+ LEVEL_SUPER_ADMIN +) + +type Access struct { + level int32 + msgs []string / if full permission, msgs can be empty +} +``` + +### Disable List + +List of type URLs that are disabled. + +- DisableList `0x2 | msg_type_url -> []byte{}` {/* - should this be stored in json to skip encoding and decoding each block, does it matter? */} + +## State Transitions + +### Authorize + +Authorize is called by the module authority (by default, the governance module account) or by any account with `LEVEL_SUPER_ADMIN` to grant another account permission to disable/enable messages. There are three levels of permissions that can be granted: `LEVEL_SOME_MSGS` limits the set of messages that can be disabled, `LEVEL_ALL_MSGS` permits all messages to be disabled, and `LEVEL_SUPER_ADMIN` allows an account to take all circuit breaker actions, including authorizing and deauthorizing other accounts. + +```protobuf + / AuthorizeCircuitBreaker allows a super-admin to grant (or revoke) another + / account's circuit breaker permissions. + rpc AuthorizeCircuitBreaker(MsgAuthorizeCircuitBreaker) returns (MsgAuthorizeCircuitBreakerResponse); +``` + +### Trip + +Trip is called by an authorized account to disable message execution for a specific msgURL. If no msgURL is provided, all messages will be disabled. + +```protobuf + / TripCircuitBreaker pauses processing of Msg's in the state machine. + rpc TripCircuitBreaker(MsgTripCircuitBreaker) returns (MsgTripCircuitBreakerResponse); +``` + +### Reset + +Reset is called by an authorized account to re-enable execution for a specific, previously disabled msgURL. If no msgURL is provided, all disabled messages will be enabled. + +```protobuf + / ResetCircuitBreaker resumes processing of Msg's in the state machine that + have been paused using TripCircuitBreaker.
+ rpc ResetCircuitBreaker(MsgResetCircuitBreaker) returns (MsgResetCircuitBreakerResponse); +``` + +## Messages + +### MsgAuthorizeCircuitBreaker + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L25-L75 +``` + +This message is expected to fail if: + +- the granter is not an account with permission level `LEVEL_SUPER_ADMIN` or the module authority + +### MsgTripCircuitBreaker + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L77-L93 +``` + +This message is expected to fail if: + +- the signer does not have a permission level with the ability to disable messages of the specified type URL + +### MsgResetCircuitBreaker + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L95-L109 +``` + +This message is expected to fail if: + +- the type URL is not disabled + +## Events + +The circuit module emits the following events: + +### Message Events + +#### MsgAuthorizeCircuitBreaker + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ------------------------- | +| string | granter | `{granterAddress}` | +| string | grantee | `{granteeAddress}` | +| string | permission | `{granteePermissions}` | +| message | module | circuit | +| message | action | authorize_circuit_breaker | + +#### MsgTripCircuitBreaker + +| Type | Attribute Key | Attribute Value | +| --------- | ------------- | ---------------------- | +| string | authority | `{authorityAddress}` | +| \[]string | msg_urls | \[]string`{msg\_urls}` | +| message | module | circuit | +| message | action | trip_circuit_breaker | + +#### MsgResetCircuitBreaker + +| Type | Attribute Key | Attribute Value | +| --------- | ------------- | ---------------------- | +| string | authority | `{authorityAddress}` | +| \[]string | msg_urls | \[]string`{msg\_urls}` | +| message | module |
circuit | +| message | action | reset_circuit_breaker | + +## Keys + +- `AccountPermissionPrefix` - `0x01` +- `DisableListPrefix` - `0x02` + +## Client + +## Examples: Using Circuit Breaker CLI Commands + +This section provides practical examples for using the Circuit Breaker module through the command-line interface (CLI). These examples demonstrate how to authorize accounts, disable (trip) specific message types, and re-enable (reset) them when needed. + +### Querying Circuit Breaker Permissions + +Check an account's current circuit breaker permissions: + +```bash +# Query permissions for a specific account +simd query circuit account-permissions [address] + +# Example: +simd query circuit account-permissions cosmos1... +``` + +Check which message types are currently disabled: + +```bash +# Query all disabled message types +simd query circuit disabled-list + +# Example: +simd query circuit disabled-list +``` + +### Authorizing an Account as Circuit Breaker + +Only a super-admin or the module authority (typically the governance module account) can grant circuit breaker permissions to other accounts: + +```bash +# Grant LEVEL_ALL_MSGS permission (can disable any message type) +simd tx circuit authorize [grantee-address] --level=ALL_MSGS --from=[granter-key] --gas=auto --gas-adjustment=1.5 + +# Grant LEVEL_SOME_MSGS permission (can only disable specific message types) +simd tx circuit authorize [grantee-address] --level=SOME_MSGS --limit-type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=[granter-key] --gas=auto --gas-adjustment=1.5 + +# Grant LEVEL_SUPER_ADMIN permission (can disable messages and authorize other accounts) +simd tx circuit authorize [grantee-address] --level=SUPER_ADMIN --from=[granter-key] --gas=auto --gas-adjustment=1.5 +``` + +### Disabling Message Processing (Trip) + +Disable specific message types to prevent their execution (requires authorization): + +```bash +# Disable a single message type +simd tx circuit trip
--type-urls="/cosmos.bank.v1beta1.MsgSend" --from=[authorized-key] --gas=auto --gas-adjustment=1.5 + +# Disable multiple message types +simd tx circuit trip --type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=[authorized-key] --gas=auto --gas-adjustment=1.5 + +# Disable all message types (emergency measure) +simd tx circuit trip --from=[authorized-key] --gas=auto --gas-adjustment=1.5 +``` + +### Re-enabling Message Processing (Reset) + +Re-enable previously disabled message types (requires authorization): + +```bash +# Re-enable a single message type +simd tx circuit reset --type-urls="/cosmos.bank.v1beta1.MsgSend" --from=[authorized-key] --gas=auto --gas-adjustment=1.5 + +# Re-enable multiple message types +simd tx circuit reset --type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=[authorized-key] --gas=auto --gas-adjustment=1.5 + +# Re-enable all disabled message types +simd tx circuit reset --from=[authorized-key] --gas=auto --gas-adjustment=1.5 +``` + +### Usage in Emergency Scenarios + +In case of a critical vulnerability in a specific message type: + +1. Quickly disable the vulnerable message type: + + ```bash + simd tx circuit trip --type-urls="/cosmos.vulnerable.v1beta1.MsgVulnerable" --from=[authorized-key] --gas=auto --gas-adjustment=1.5 + ``` + +2. After a fix is deployed, re-enable the message type: + + ```bash + simd tx circuit reset --type-urls="/cosmos.vulnerable.v1beta1.MsgVulnerable" --from=[authorized-key] --gas=auto --gas-adjustment=1.5 + ``` + +This allows chains to surgically disable problematic functionality without halting the entire chain, providing time for developers to implement and deploy fixes. diff --git a/docs/sdk/next/documentation/module-system/consensus.mdx b/docs/sdk/next/documentation/module-system/consensus.mdx new file mode 100644 index 00000000..0a377fe4 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/consensus.mdx @@ -0,0 +1,6 @@ +--- +title: '`x/consensus`' +description: Functionality to modify CometBFT's ABCI consensus params. +--- + +Functionality to modify CometBFT's ABCI consensus params.
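
The core idea of the module — consensus params that only a configured authority (normally the governance module account) may change — can be sketched with a toy, self-contained model. The `keeper` and `BlockParams` types here are illustrative stand-ins, not the real `x/consensus` API:

```go
package main

import (
	"errors"
	"fmt"
)

// BlockParams loosely mirrors the shape of CometBFT's block consensus params.
type BlockParams struct {
	MaxBytes int64
	MaxGas   int64
}

// keeper is a toy stand-in for the x/consensus keeper: it stores the params
// and only lets the configured authority account change them.
type keeper struct {
	authority string
	params    BlockParams
}

// UpdateParams applies new params if and only if the signer is the authority.
func (k *keeper) UpdateParams(signer string, p BlockParams) error {
	if signer != k.authority {
		return errors.New("signer is not the module authority")
	}
	k.params = p
	return nil
}

func main() {
	k := &keeper{authority: "gov", params: BlockParams{MaxBytes: 22020096, MaxGas: -1}}

	// A non-authority update is rejected.
	fmt.Println(k.UpdateParams("attacker", BlockParams{MaxBytes: 1}) != nil) // true

	// The authority (e.g. after a passed governance proposal) may update.
	_ = k.UpdateParams("gov", BlockParams{MaxBytes: 1048576, MaxGas: 10000000})
	fmt.Println(k.params.MaxBytes) // 1048576
}
```

In practice, updates are submitted as a governance proposal carrying a `MsgUpdateParams` whose `authority` field must match the module's configured authority.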
diff --git a/docs/sdk/next/documentation/module-system/crisis.mdx b/docs/sdk/next/documentation/module-system/crisis.mdx new file mode 100644 index 00000000..06abc189 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/crisis.mdx @@ -0,0 +1,129 @@ +--- +title: '`x/crisis`' +description: >- + NOTE: x/crisis is deprecated as of Cosmos SDK v0.53 and will be removed in the + next release. +--- + +NOTE: `x/crisis` is deprecated as of Cosmos SDK v0.53 and will be removed in the next release. + +## Overview + +The crisis module halts the blockchain in the event that a blockchain +invariant is broken. Invariants can be registered with the application during the +application initialization process. + +## Contents + +* [State](#state) +* [Messages](#messages) +* [Events](#events) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + +## State + +### ConstantFee + +Due to the anticipated large gas cost of verifying an invariant (and the +potential to exceed the maximum allowable block gas limit), a constant fee is +used instead of the standard gas consumption method. The constant fee is +intended to be larger than the anticipated gas cost of running the invariant +with the standard gas consumption method. + +The ConstantFee param is stored in the module params state with the prefix of `0x01`; +it can be updated through governance or by the address with authority. + +* Params: `mint/params -> legacy_amino(sdk.Coin)` + +## Messages + +In this section we describe the processing of the crisis messages and the +corresponding updates to the state. + +### MsgVerifyInvariant + +Blockchain invariants can be checked using the `MsgVerifyInvariant` message. + +```protobuf +// MsgVerifyInvariant represents a message to verify a particular invariance.
+message MsgVerifyInvariant { + option (cosmos.msg.v1.signer) = "sender"; + option (amino.name) = "cosmos-sdk/MsgVerifyInvariant"; + + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + // sender is the account address of private key to send coins to fee collector account. + string sender = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // name of the invariant module. + string invariant_module_name = 2; + + // invariant_route is the msg's invariant route. + string invariant_route = 3; +} +``` + +This message is expected to fail if: + +* the sender does not have enough coins for the constant fee +* the invariant route is not registered + +This message checks the invariant provided, and if the invariant is broken it +panics, halting the blockchain. If the invariant is broken, the constant fee is +never deducted as the transaction is never committed to a block (equivalent to +being refunded). However, if the invariant is not broken, the constant fee will +not be refunded. + +## Events + +The crisis module emits the following events: + +### Handlers + +#### MsgVerifyInvariant + +| Type | Attribute Key | Attribute Value | +| --------- | ------------- | ----------------- | +| invariant | route | `{invariantRoute}` | +| message | module | crisis | +| message | action | verify\_invariant | +| message | sender | `{senderAddress}` | + +## Parameters + +The crisis module contains the following parameters: + +| Key | Type | Example | +| ----------- | ------------- | --------------------------------- | +| ConstantFee | object (coin) | `{"denom":"uatom","amount":"1000"}` | + +## Client + +### CLI + +A user can query and interact with the `crisis` module using the CLI. + +#### Transactions + +The `tx` commands allow users to interact with the `crisis` module. 
+ +```bash +simd tx crisis --help +``` + +##### invariant-broken + +The `invariant-broken` command submits proof that an invariant has been broken, halting the chain. + +```bash +simd tx crisis invariant-broken [module-name] [invariant-route] [flags] +``` + +Example: + +```bash +simd tx crisis invariant-broken bank total-supply --from=[keyname or address] +``` diff --git a/docs/sdk/next/documentation/module-system/depinject.mdx b/docs/sdk/next/documentation/module-system/depinject.mdx new file mode 100644 index 00000000..fb726e2d --- /dev/null +++ b/docs/sdk/next/documentation/module-system/depinject.mdx @@ -0,0 +1,3519 @@ +--- +title: Modules depinject-ready +--- + + +**Pre-requisite Readings** + +* [Depinject Documentation](/docs/sdk/next/documentation/module-system/depinject) + + + +[`depinject`](/docs/sdk/next/documentation/module-system/depinject) is used to wire any module in `app.go`. +All core modules are already configured to support dependency injection. + +To work with `depinject`, a module must define its configuration and requirements so that `depinject` can provide the right dependencies. + +In brief, as a module developer, the following steps are required: + +1. Define the module configuration using Protobuf +2. Define the module dependencies in `x/{moduleName}/module.go` + +A chain developer can then use the module by following these two steps: + +1. Configure the module in `app_config.go` or `app.yaml` +2. Inject the module in `app.go` + +## Module Configuration + +The module's available configuration is defined in a Protobuf file, located at `{moduleName}/module/v1/module.proto`. + +```protobuf +syntax = "proto3"; + +package cosmos.group.module.v1; + +import "cosmos/app/v1alpha1/module.proto"; +import "gogoproto/gogo.proto"; +import "google/protobuf/duration.proto"; +import "amino/amino.proto"; + +// Module is the config object of the group module.
+message Module { + option (cosmos.app.v1alpha1.module) = { + go_import: "github.com/cosmos/cosmos-sdk/x/group" + }; + + // max_execution_period defines the max duration after a proposal's voting period ends that members can send a MsgExec + // to execute the proposal. + google.protobuf.Duration max_execution_period = 1 + [(gogoproto.stdduration) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // max_metadata_len defines the max length of the metadata bytes field for various entities within the group module. + // Defaults to 255 if not explicitly set. + uint64 max_metadata_len = 2; +} + +``` + +* `go_import` must point to the Go package of the custom module. +* Message fields define the module configuration. + That configuration can be set in the `app_config.go` / `app.yaml` file for a chain developer to configure the module.\ + Taking `group` as an example, a chain developer is able to decide, thanks to `uint64 max_metadata_len`, what the maximum metadata length allowed for a group proposal is. 
+ + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/depinject" + + _ "cosmossdk.io/x/circuit" / import for side-effects + _ "cosmossdk.io/x/evidence" / import for side-effects + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ 
"github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/crisis" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + "github.com/cosmos/cosmos-sdk/x/genutil" + "github.com/cosmos/cosmos-sdk/x/gov" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/params" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + + "cosmossdk.io/core/appconfig" + circuittypes "cosmossdk.io/x/circuit/types" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + "cosmossdk.io/x/nft" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account 
permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + circuittypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: []string{ + }, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: 
consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + }, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + }, + ), + }, + )) + ) + ``` + +That configuration message is generated using [`pulsar`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/scripts/protocgen-pulsar.sh) (by running `make proto-gen`). +For the `group` module, the generated file is [`module.pulsar.go`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/api/cosmos/group/module/v1/module.pulsar.go). + +The part that is relevant for the module configuration is: + +```go expandable +/ Code generated by protoc-gen-go-pulsar. DO NOT EDIT.
+package modulev1 + +import ( + + _ "cosmossdk.io/api/amino" + _ "cosmossdk.io/api/cosmos/app/v1alpha1" + fmt "fmt" + runtime "github.com/cosmos/cosmos-proto/runtime" + _ "github.com/cosmos/gogoproto/gogoproto" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoiface "google.golang.org/protobuf/runtime/protoiface" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + durationpb "google.golang.org/protobuf/types/known/durationpb" + io "io" + reflect "reflect" + sync "sync" +) + +var ( + md_Module protoreflect.MessageDescriptor + fd_Module_max_execution_period protoreflect.FieldDescriptor + fd_Module_max_metadata_len protoreflect.FieldDescriptor +) + +func init() { + file_cosmos_group_module_v1_module_proto_init() + +md_Module = File_cosmos_group_module_v1_module_proto.Messages().ByName("Module") + +fd_Module_max_execution_period = md_Module.Fields().ByName("max_execution_period") + +fd_Module_max_metadata_len = md_Module.Fields().ByName("max_metadata_len") +} + +var _ protoreflect.Message = (*fastReflection_Module)(nil) + +type fastReflection_Module Module + +func (x *Module) + +ProtoReflect() + +protoreflect.Message { + return (*fastReflection_Module)(x) +} + +func (x *Module) + +slowProtoReflect() + +protoreflect.Message { + mi := &file_cosmos_group_module_v1_module_proto_msgTypes[0] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) +} + +return ms +} + +return mi.MessageOf(x) +} + +var _fastReflection_Module_messageType fastReflection_Module_messageType +var _ protoreflect.MessageType = fastReflection_Module_messageType{ +} + +type fastReflection_Module_messageType struct{ +} + +func (x fastReflection_Module_messageType) + +Zero() + +protoreflect.Message { + return (*fastReflection_Module)(nil) +} + +func (x fastReflection_Module_messageType) + +New() + +protoreflect.Message { + return new(fastReflection_Module) +} + +func (x 
fastReflection_Module_messageType) + +Descriptor() + +protoreflect.MessageDescriptor { + return md_Module +} + +/ Descriptor returns message descriptor, which contains only the protobuf +/ type information for the message. +func (x *fastReflection_Module) + +Descriptor() + +protoreflect.MessageDescriptor { + return md_Module +} + +/ Type returns the message type, which encapsulates both Go and protobuf +/ type information. If the Go type information is not needed, +/ it is recommended that the message descriptor be used instead. +func (x *fastReflection_Module) + +Type() + +protoreflect.MessageType { + return _fastReflection_Module_messageType +} + +/ New returns a newly allocated and mutable empty message. +func (x *fastReflection_Module) + +New() + +protoreflect.Message { + return new(fastReflection_Module) +} + +/ Interface unwraps the message reflection interface and +/ returns the underlying ProtoMessage interface. +func (x *fastReflection_Module) + +Interface() + +protoreflect.ProtoMessage { + return (*Module)(x) +} + +/ Range iterates over every populated field in an undefined order, +/ calling f for each field descriptor and value encountered. +/ Range returns immediately if f returns false. +/ While iterating, mutating operations may only be performed +/ on the current field descriptor. +func (x *fastReflection_Module) + +Range(f func(protoreflect.FieldDescriptor, protoreflect.Value) + +bool) { + if x.MaxExecutionPeriod != nil { + value := protoreflect.ValueOfMessage(x.MaxExecutionPeriod.ProtoReflect()) + if !f(fd_Module_max_execution_period, value) { + return +} + +} + if x.MaxMetadataLen != uint64(0) { + value := protoreflect.ValueOfUint64(x.MaxMetadataLen) + if !f(fd_Module_max_metadata_len, value) { + return +} + +} +} + +/ Has reports whether a field is populated. 
+/ +/ Some fields have the property of nullability where it is possible to +/ distinguish between the default value of a field and whether the field +/ was explicitly populated with the default value. Singular message fields, +/ member fields of a oneof, and proto2 scalar fields are nullable. Such +/ fields are populated only if explicitly set. +/ +/ In other cases (aside from the nullable cases above), +/ a proto3 scalar field is populated if it contains a non-zero value, and +/ a repeated field is populated if it is non-empty. +func (x *fastReflection_Module) + +Has(fd protoreflect.FieldDescriptor) + +bool { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + return x.MaxExecutionPeriod != nil + case "cosmos.group.module.v1.Module.max_metadata_len": + return x.MaxMetadataLen != uint64(0) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Clear clears the field such that a subsequent Has call reports false. +/ +/ Clearing an extension field clears both the extension type and value +/ associated with the given field number. +/ +/ Clear is a mutating operation and unsafe for concurrent use. +func (x *fastReflection_Module) + +Clear(fd protoreflect.FieldDescriptor) { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + x.MaxExecutionPeriod = nil + case "cosmos.group.module.v1.Module.max_metadata_len": + x.MaxMetadataLen = uint64(0) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Get retrieves the value for a field. 
+/ +/ For unpopulated scalars, it returns the default value, where +/ the default value of a bytes scalar is guaranteed to be a copy. +/ For unpopulated composite types, it returns an empty, read-only view +/ of the value; to obtain a mutable reference, use Mutable. +func (x *fastReflection_Module) + +Get(descriptor protoreflect.FieldDescriptor) + +protoreflect.Value { + switch descriptor.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + value := x.MaxExecutionPeriod + return protoreflect.ValueOfMessage(value.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + value := x.MaxMetadataLen + return protoreflect.ValueOfUint64(value) + +default: + if descriptor.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", descriptor.FullName())) +} +} + +/ Set stores the value for a field. +/ +/ For a field belonging to a oneof, it implicitly clears any other field +/ that may be currently set within the same oneof. +/ For extension fields, it implicitly stores the provided ExtensionType. +/ When setting a composite type, it is unspecified whether the stored value +/ aliases the source's memory in any way. If the composite value is an +/ empty, read-only value, then it panics. +/ +/ Set is a mutating operation and unsafe for concurrent use. 
+func (x *fastReflection_Module) + +Set(fd protoreflect.FieldDescriptor, value protoreflect.Value) { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + x.MaxExecutionPeriod = value.Message().Interface().(*durationpb.Duration) + case "cosmos.group.module.v1.Module.max_metadata_len": + x.MaxMetadataLen = value.Uint() + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Mutable returns a mutable reference to a composite type. +/ +/ If the field is unpopulated, it may allocate a composite value. +/ For a field belonging to a oneof, it implicitly clears any other field +/ that may be currently set within the same oneof. +/ For extension fields, it implicitly stores the provided ExtensionType +/ if not already stored. +/ It panics if the field does not contain a composite type. +/ +/ Mutable is a mutating operation and unsafe for concurrent use. +func (x *fastReflection_Module) + +Mutable(fd protoreflect.FieldDescriptor) + +protoreflect.Value { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + if x.MaxExecutionPeriod == nil { + x.MaxExecutionPeriod = new(durationpb.Duration) +} + +return protoreflect.ValueOfMessage(x.MaxExecutionPeriod.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + panic(fmt.Errorf("field max_metadata_len of message cosmos.group.module.v1.Module is not mutable")) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ NewField returns a new value that is assignable to the field +/ for the given descriptor. For scalars, this returns the default value. 
+/ For lists, maps, and messages, this returns a new, empty, mutable value. +func (x *fastReflection_Module) + +NewField(fd protoreflect.FieldDescriptor) + +protoreflect.Value { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + m := new(durationpb.Duration) + +return protoreflect.ValueOfMessage(m.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + return protoreflect.ValueOfUint64(uint64(0)) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ WhichOneof reports which field within the oneof is populated, +/ returning nil if none are populated. +/ It panics if the oneof descriptor does not belong to this message. +func (x *fastReflection_Module) + +WhichOneof(d protoreflect.OneofDescriptor) + +protoreflect.FieldDescriptor { + switch d.FullName() { + default: + panic(fmt.Errorf("%s is not a oneof field in cosmos.group.module.v1.Module", d.FullName())) +} + +panic("unreachable") +} + +/ GetUnknown retrieves the entire list of unknown fields. +/ The caller may only mutate the contents of the RawFields +/ if the mutated bytes are stored back into the message with SetUnknown. +func (x *fastReflection_Module) + +GetUnknown() + +protoreflect.RawFields { + return x.unknownFields +} + +/ SetUnknown stores an entire list of unknown fields. +/ The raw fields must be syntactically valid according to the wire format. +/ An implementation may panic if this is not the case. +/ Once stored, the caller must not mutate the content of the RawFields. +/ An empty RawFields may be passed to clear the fields. +/ +/ SetUnknown is a mutating operation and unsafe for concurrent use. 
+func (x *fastReflection_Module) + +SetUnknown(fields protoreflect.RawFields) { + x.unknownFields = fields +} + +/ IsValid reports whether the message is valid. +/ +/ An invalid message is an empty, read-only value. +/ +/ An invalid message often corresponds to a nil pointer of the concrete +/ message type, but the details are implementation dependent. +/ Validity is not part of the protobuf data model, and may not +/ be preserved in marshaling or other operations. +func (x *fastReflection_Module) + +IsValid() + +bool { + return x != nil +} + +/ ProtoMethods returns optional fastReflectionFeature-path implementations of various operations. +/ This method may return nil. +/ +/ The returned methods type is identical to +/ "google.golang.org/protobuf/runtime/protoiface".Methods. +/ Consult the protoiface package documentation for details. +func (x *fastReflection_Module) + +ProtoMethods() *protoiface.Methods { + size := func(input protoiface.SizeInput) + +protoiface.SizeOutput { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.SizeOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Size: 0, +} + +} + options := runtime.SizeInputToOptions(input) + _ = options + var n int + var l int + _ = l + if x.MaxExecutionPeriod != nil { + l = options.Size(x.MaxExecutionPeriod) + +n += 1 + l + runtime.Sov(uint64(l)) +} + if x.MaxMetadataLen != 0 { + n += 1 + runtime.Sov(uint64(x.MaxMetadataLen)) +} + if x.unknownFields != nil { + n += len(x.unknownFields) +} + +return protoiface.SizeOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Size: n, +} + +} + marshal := func(input protoiface.MarshalInput) (protoiface.MarshalOutput, error) { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, nil +} + options := runtime.MarshalInputToOptions(input) + _ = options + size := options.Size(x) + dAtA := make([]byte, size) + i := len(dAtA) + _ = i + 
var l int + _ = l + if x.unknownFields != nil { + i -= len(x.unknownFields) + +copy(dAtA[i:], x.unknownFields) +} + if x.MaxMetadataLen != 0 { + i = runtime.EncodeVarint(dAtA, i, uint64(x.MaxMetadataLen)) + +i-- + dAtA[i] = 0x10 +} + if x.MaxExecutionPeriod != nil { + encoded, err := options.Marshal(x.MaxExecutionPeriod) + if err != nil { + return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, err +} + +i -= len(encoded) + +copy(dAtA[i:], encoded) + +i = runtime.EncodeVarint(dAtA, i, uint64(len(encoded))) + +i-- + dAtA[i] = 0xa +} + if input.Buf != nil { + input.Buf = append(input.Buf, dAtA...) +} + +else { + input.Buf = dAtA +} + +return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, nil +} + unmarshal := func(input protoiface.UnmarshalInput) (protoiface.UnmarshalOutput, error) { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags, +}, nil +} + options := runtime.UnmarshalInputToOptions(input) + _ = options + dAtA := input.Buf + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: Module: wiretype end group for non-group") +} + if fieldNum <= 0 { + return protoiface.UnmarshalOutput{ + 
NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: Module: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: wrong wireType = %d for field MaxExecutionPeriod", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + if postIndex > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + if x.MaxExecutionPeriod == nil { + x.MaxExecutionPeriod = &durationpb.Duration{ +} + +} + if err := options.Unmarshal(dAtA[iNdEx:postIndex], x.MaxExecutionPeriod); err != nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, err +} + +iNdEx = postIndex + case 2: + if wireType != 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: wrong wireType = %d for field MaxMetadataLen", wireType) +} + +x.MaxMetadataLen = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + 
NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + x.MaxMetadataLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + +default: + iNdEx = preIndex + skippy, err := runtime.Skip(dAtA[iNdEx:]) + if err != nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + if (iNdEx + skippy) > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + if !options.DiscardUnknown { + x.unknownFields = append(x.unknownFields, dAtA[iNdEx:iNdEx+skippy]...) +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + +return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, nil +} + +return &protoiface.Methods{ + NoUnkeyedLiterals: struct{ +}{ +}, + Flags: protoiface.SupportMarshalDeterministic | protoiface.SupportUnmarshalDiscardUnknown, + Size: size, + Marshal: marshal, + Unmarshal: unmarshal, + Merge: nil, + CheckInitialized: nil, +} +} + +/ Code generated by protoc-gen-go. DO NOT EDIT. +/ versions: +/ protoc-gen-go v1.27.0 +/ protoc (unknown) +/ source: cosmos/group/module/v1/module.proto + +const ( + / Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + / Verify that runtime/protoimpl is sufficiently up-to-date. 
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +/ Module is the config object of the group module. +type Module struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + / max_execution_period defines the max duration after a proposal's voting period ends that members can send a MsgExec + / to execute the proposal. + MaxExecutionPeriod *durationpb.Duration `protobuf:"bytes,1,opt,name=max_execution_period,json=maxExecutionPeriod,proto3" json:"max_execution_period,omitempty"` + / max_metadata_len defines the max length of the metadata bytes field for various entities within the group module. + / Defaults to 255 if not explicitly set. + MaxMetadataLen uint64 `protobuf:"varint,2,opt,name=max_metadata_len,json=maxMetadataLen,proto3" json:"max_metadata_len,omitempty"` +} + +func (x *Module) + +Reset() { + *x = Module{ +} + if protoimpl.UnsafeEnabled { + mi := &file_cosmos_group_module_v1_module_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + +ms.StoreMessageInfo(mi) +} +} + +func (x *Module) + +String() + +string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Module) + +ProtoMessage() { +} + +/ Deprecated: Use Module.ProtoReflect.Descriptor instead. 
+func (*Module) + +Descriptor() ([]byte, []int) { + return file_cosmos_group_module_v1_module_proto_rawDescGZIP(), []int{0 +} +} + +func (x *Module) + +GetMaxExecutionPeriod() *durationpb.Duration { + if x != nil { + return x.MaxExecutionPeriod +} + +return nil +} + +func (x *Module) + +GetMaxMetadataLen() + +uint64 { + if x != nil { + return x.MaxMetadataLen +} + +return 0 +} + +var File_cosmos_group_module_v1_module_proto protoreflect.FileDescriptor + +var file_cosmos_group_module_v1_module_proto_rawDesc = []byte{ + 0x0a, 0x23, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x2f, 0x6d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x16, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x67, 0x72, + 0x6f, 0x75, 0x70, 0x2e, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x1a, 0x20, 0x63, + 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x61, 0x70, 0x70, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, + 0x61, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, + 0x14, 0x67, 0x6f, 0x67, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x67, 0x6f, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x11, 0x61, 0x6d, 0x69, 0x6e, 0x6f, 0x2f, 0x61, 0x6d, 0x69, + 0x6e, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xbc, 0x01, 0x0a, 0x06, 0x4d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x12, 0x5a, 0x0a, 0x14, 0x6d, 0x61, 0x78, 0x5f, 0x65, 0x78, 0x65, 0x63, 0x75, + 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x0d, 0xc8, 0xde, + 0x1f, 
0x00, 0x98, 0xdf, 0x1f, 0x01, 0xa8, 0xe7, 0xb0, 0x2a, 0x01, 0x52, 0x12, 0x6d, 0x61, 0x78, + 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x12, + 0x28, 0x0a, 0x10, 0x6d, 0x61, 0x78, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x5f, + 0x6c, 0x65, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0e, 0x6d, 0x61, 0x78, 0x4d, 0x65, + 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x4c, 0x65, 0x6e, 0x3a, 0x2c, 0xba, 0xc0, 0x96, 0xda, 0x01, + 0x26, 0x0a, 0x24, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, + 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2d, 0x73, 0x64, 0x6b, 0x2f, + 0x78, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x42, 0xd6, 0x01, 0x0a, 0x1a, 0x63, 0x6f, 0x6d, 0x2e, + 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x2e, 0x6d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x42, 0x0b, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x50, 0x72, + 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x30, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x73, 0x64, 0x6b, + 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x67, + 0x72, 0x6f, 0x75, 0x70, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x3b, 0x6d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x76, 0x31, 0xa2, 0x02, 0x03, 0x43, 0x47, 0x4d, 0xaa, 0x02, 0x16, + 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x2e, 0x4d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x2e, 0x56, 0x31, 0xca, 0x02, 0x16, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x5c, + 0x47, 0x72, 0x6f, 0x75, 0x70, 0x5c, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5c, 0x56, 0x31, 0xe2, + 0x02, 0x22, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x5c, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x5c, 0x4d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5c, 0x56, 0x31, 0x5c, 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, + 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x19, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x3a, 0x3a, 0x47, + 0x72, 0x6f, 0x75, 0x70, 0x3a, 0x3a, 0x4d, 
0x6f, 0x64, 0x75, 0x6c, 0x65, 0x3a, 0x3a, 0x56, 0x31, + 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, +} + +var ( + file_cosmos_group_module_v1_module_proto_rawDescOnce sync.Once + file_cosmos_group_module_v1_module_proto_rawDescData = file_cosmos_group_module_v1_module_proto_rawDesc +) + +func file_cosmos_group_module_v1_module_proto_rawDescGZIP() []byte { + file_cosmos_group_module_v1_module_proto_rawDescOnce.Do(func() { + file_cosmos_group_module_v1_module_proto_rawDescData = protoimpl.X.CompressGZIP(file_cosmos_group_module_v1_module_proto_rawDescData) +}) + +return file_cosmos_group_module_v1_module_proto_rawDescData +} + +var file_cosmos_group_module_v1_module_proto_msgTypes = make([]protoimpl.MessageInfo, 1) + +var file_cosmos_group_module_v1_module_proto_goTypes = []interface{ +}{ + (*Module)(nil), / 0: cosmos.group.module.v1.Module + (*durationpb.Duration)(nil), / 1: google.protobuf.Duration +} + +var file_cosmos_group_module_v1_module_proto_depIdxs = []int32{ + 1, / 0: cosmos.group.module.v1.Module.max_execution_period:type_name -> google.protobuf.Duration + 1, / [1:1] is the sub-list for method output_type + 1, / [1:1] is the sub-list for method input_type + 1, / [1:1] is the sub-list for extension type_name + 1, / [1:1] is the sub-list for extension extendee + 0, / [0:1] is the sub-list for field type_name +} + +func init() { + file_cosmos_group_module_v1_module_proto_init() +} + +func file_cosmos_group_module_v1_module_proto_init() { + if File_cosmos_group_module_v1_module_proto != nil { + return +} + if !protoimpl.UnsafeEnabled { + file_cosmos_group_module_v1_module_proto_msgTypes[0].Exporter = func(v interface{ +}, i int) + +interface{ +} { + switch v := v.(*Module); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil +} + +} + +} + +type x struct{ +} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{ +}).PkgPath(), + 
RawDescriptor: file_cosmos_group_module_v1_module_proto_rawDesc, + NumEnums: 0, + NumMessages: 1, + NumExtensions: 0, + NumServices: 0, +}, + GoTypes: file_cosmos_group_module_v1_module_proto_goTypes, + DependencyIndexes: file_cosmos_group_module_v1_module_proto_depIdxs, + MessageInfos: file_cosmos_group_module_v1_module_proto_msgTypes, +}.Build() + +File_cosmos_group_module_v1_module_proto = out.File + file_cosmos_group_module_v1_module_proto_rawDesc = nil + file_cosmos_group_module_v1_module_proto_goTypes = nil + file_cosmos_group_module_v1_module_proto_depIdxs = nil +} +``` + + +Pulsar is optional. The official [`protoc-gen-go`](https://developers.google.com/protocol-buffers/docs/reference/go-generated) can be used as well. + + +## Dependency Definition + +Once the configuration proto is defined, the module's `module.go` must declare which dependencies the module requires. +The boilerplate is similar for all modules. + + +All methods, structs, and their fields must be exported (public) for `depinject` to access them. + + +1. 
Import the module configuration generated package: + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, 
in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + + Define an `init()` function for defining the `providers` of the module configuration:\ + This registers the module configuration message and the wiring of the module. + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. 
+ const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. + func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. 
+ func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. + func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. 
+ func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. + func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. 
+ func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +2. 
Ensure that the module implements the `appmodule.AppModule` interface: + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.EndBlockAppModule = AppModule{ + } + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var _ appmodule.AppModule = AppModule{ + } + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + func (am AppModule) + + NewHandler() + + sdk.Handler { + return nil + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + + return []abci.ValidatorUpdate{ + } + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, 
k, in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +3. Define a struct that inherits `depinject.In` and define the module inputs (i.e. module dependencies): + + * `depinject` provides the right dependencies to the module. + * `depinject` also checks that all dependencies are provided. + + :::tip + For making a dependency optional, add the `optional:"true"` struct tag.\ + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. 
+ const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. + func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. 
+ func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. + func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. 
+ func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. + func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. 
+ func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +4. Define the module outputs with a public struct that inherits `depinject.Out`: + The module outputs are the dependencies that the module provides to other modules. It is usually the module itself and its keeper. 
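   The full `module.go` below repeats the whole pattern; to make the input/output flow easier to see on its own, here is a minimal, dependency-free sketch. All types here are hypothetical stand-ins (plain strings instead of the SDK's `*modulev1.Module`, `codec.Codec`, `keeper.Keeper`, and so on) so the wiring shape compiles without `cosmossdk.io/depinject`:

   ```go
   package main

   import "fmt"

   // Hypothetical stand-ins for depinject.In and depinject.Out; only the
   // wiring shape is illustrated, not depinject's actual resolution logic.
   type In struct{}
   type Out struct{}

   // GroupInputs: the dependencies the module asks the container for.
   type GroupInputs struct {
   	In

   	Config string // stand-in for *modulev1.Module
   	Cdc    string // stand-in for codec.Codec
   	// The optional:"true" tag from the tip in step 3 is shown for shape
   	// only; real depinject honors it, this sketch does not process tags.
   	Registry string `optional:"true"`
   }

   // GroupOutputs: what the provider hands back to the container,
   // typically the keeper and the module itself.
   type GroupOutputs struct {
   	Out

   	GroupKeeper string // stand-in for keeper.Keeper
   	Module      string // stand-in for appmodule.AppModule
   }

   // ProvideModule mirrors the real provider's flow: build the keeper
   // from the inputs, wrap it in the module, return both as outputs.
   func ProvideModule(in GroupInputs) GroupOutputs {
   	k := fmt.Sprintf("keeper(%s,%s)", in.Config, in.Cdc)
   	m := fmt.Sprintf("module(%s)", k)
   	return GroupOutputs{GroupKeeper: k, Module: m}
   }

   func main() {
   	out := ProvideModule(GroupInputs{Config: "groupConfig", Cdc: "protoCdc"})
   	fmt.Println(out.GroupKeeper)
   	fmt.Println(out.Module)
   }
   ```

   The container calls the provider exactly once, supplying every non-optional field of the `In` struct and registering every field of the `Out` struct for other modules to consume.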
+ + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, 
in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +5. Create a function named `ProvideModule` (as called in 1.) and use the inputs for instantiating the module outputs. + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. 
+const ConsensusVersion = 2 + +var ( + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var ( + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasEndBlocker = AppModule{ +} +) + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. 
+func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. +func (am AppModule) + +EndBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return EndBlocker(c, am.keeper) +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +The `ProvideModule` function should return an instance of `cosmossdk.io/core/appmodule.AppModule` which implements +one or more app module extension interfaces for initializing the module. 
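Stripped of the SDK types, the provider pattern that `depinject` relies on can be reduced to a few lines of plain Go (all names below are hypothetical stand-ins, not SDK APIs): a constructor takes an input struct whose fields the container resolves, and returns an output struct whose fields it offers back to other providers.

```go
package main

import "fmt"

// Minimal stand-in for the depinject provider pattern. The container resolves
// each field of Inputs, calls the provider, and collects the Outputs fields.
// All types here are hypothetical, for illustration only.
type Inputs struct {
	Codec  string // stands in for codec.Codec
	Keeper string // stands in for the module keeper's dependencies
}

type Outputs struct {
	Module string
}

// ProvideModule constructs the module from its resolved inputs.
func ProvideModule(in Inputs) Outputs {
	return Outputs{Module: fmt.Sprintf("module(%s,%s)", in.Codec, in.Keeper)}
}

func main() {
	out := ProvideModule(Inputs{Codec: "proto", Keeper: "group"})
	fmt.Println(out.Module)
}
```

The real `GroupInputs`/`GroupOutputs` structs above follow exactly this shape, with `depinject.In`/`depinject.Out` embedded to mark them for the container.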
+ +Following is the complete app wiring configuration for `group`: + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var ( + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasEndBlocker = AppModule{ +} +) + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. 
+func (am AppModule) + +EndBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return EndBlocker(c, am.keeper) +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, 
in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +The module is now ready to be used with `depinject` by a chain developer. + +## Integrate in an application + +The App Wiring is done in `app_config.go` / `app.yaml` and `app_di.go` and is explained in detail in the [overview of `app_di.go`](/docs/sdk/next/documentation/application-framework/app-go-di). diff --git a/docs/sdk/next/documentation/module-system/distribution.mdx b/docs/sdk/next/documentation/module-system/distribution.mdx new file mode 100644 index 00000000..f93afb56 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/distribution.mdx @@ -0,0 +1,1222 @@ +--- +title: '`x/distribution`' +--- + +## Overview + +This *simple* distribution mechanism describes a functional way to passively +distribute rewards between validators and delegators. Note that this mechanism does +not distribute funds in as precisely as active reward distribution mechanisms and +will therefore be upgraded in the future. + +The mechanism operates as follows. Collected rewards are pooled globally and +divided out passively to validators and delegators. Each validator has the +opportunity to charge commission to the delegators on the rewards collected on +behalf of the delegators. Fees are collected directly into a global reward pool +and validator proposer-reward pool. Due to the nature of passive accounting, +whenever changes to parameters which affect the rate of reward distribution +occurs, withdrawal of rewards must also occur. + +* Whenever withdrawing, one must withdraw the maximum amount they are entitled + to, leaving nothing in the pool. +* Whenever bonding, unbonding, or re-delegating tokens to an existing account, a + full withdrawal of the rewards must occur (as the rules for lazy accounting + change). +* Whenever a validator chooses to change the commission on rewards, all accumulated + commission rewards must be simultaneously withdrawn. 
+
+The above scenarios are covered in `hooks.md`.
+
+The distribution mechanism outlined herein is used to lazily distribute the
+following rewards between validators and associated delegators:
+
+* multi-token fees to be socially distributed
+* inflated staked asset provisions
+* validator commission on all rewards earned by their delegators' stake
+
+Fees are pooled within a global pool. The mechanisms used allow validators
+and delegators to independently and lazily withdraw their rewards.
+
+## Shortcomings
+
+As part of the lazy computations, each delegator holds an accumulation term,
+specific to each validator, which is used to estimate the approximate
+fair portion of the tokens in the global fee pool owed to them.
+
+```text
+entitlement = delegator-accumulation / all-delegators-accumulation
+```
+
+If there were a constant and equal flow of incoming
+reward tokens every block, this distribution mechanism would be equal to the
+active distribution (distributing individually to all delegators each block).
+However, this is unrealistic, so deviations from the active distribution will
+occur based on fluctuations of incoming reward tokens as well as the timing of
+reward withdrawals by other delegators.
+
+If you happen to know that incoming rewards are about to significantly increase,
+you are incentivized not to withdraw until after this event, increasing the
+worth of your existing *accum*. See [#2764](https://github.com/cosmos/cosmos-sdk/issues/2764)
+for further details.
+
+## Effect on Staking
+
+Charging commission on Atom provisions while also allowing Atom provisions
+to be auto-bonded (distributed directly to the validator's bonded stake) is
+problematic within BPoS. Fundamentally, these two mechanisms are mutually
+exclusive.
If both commission and auto-bonding mechanisms are simultaneously
+applied to the staking token, then the distribution of staking tokens between
+any validator and its delegators will change with each block. This would
+necessitate a calculation for each delegation record every block,
+which is computationally expensive.
+
+In conclusion, we can only have Atom commission with unbonded Atom
+provisions, or bonded Atom provisions with no Atom commission, and we elect to
+implement the former. Stakeholders wishing to rebond their provisions may
+set up a script to periodically withdraw and rebond rewards.
+
+## Contents
+
+* [Concepts](#concepts)
+* [State](#state)
+  * [FeePool](#feepool)
+  * [Validator Distribution](#validator-distribution)
+  * [Delegation Distribution](#delegation-distribution)
+  * [Params](#params)
+* [Begin Block](#begin-block)
+* [Messages](#messages)
+* [Hooks](#hooks)
+* [Events](#events)
+* [Parameters](#parameters)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+
+## Concepts
+
+In Proof of Stake (PoS) blockchains, rewards gained from transaction fees are paid to validators. The fee distribution module fairly distributes the rewards to the validators' constituent delegators.
+
+Rewards are calculated per period. The period is updated each time a validator's delegation changes, for example, when the validator receives a new delegation.
+The rewards for a single validator can then be calculated by taking the total rewards for the period before the delegation started, minus the current total rewards.
+To learn more, see the [F1 Fee Distribution paper](https://github.com/cosmos/cosmos-sdk/tree/main/docs/spec/fee_distribution/f1_fee_distr.pdf).
+
+The commission to the validator is paid when the validator is removed or when the validator requests a withdrawal.
+The commission is calculated and incremented at every `BeginBlock` operation to update accumulated fee amounts.
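The per-period bookkeeping described above can be sketched in a few lines (hypothetical types and illustrative numbers, not the SDK's implementation): rewards accumulate as a cumulative reward-per-share ratio, and a delegator's reward is their stake times the difference of that ratio between the period their delegation started and the period created for their withdrawal.

```go
package main

import "fmt"

// Hypothetical sketch of F1-style period accounting (not the SDK's actual
// types): rewards are tracked as a cumulative reward-per-share ratio that is
// snapshotted each time a period ends.
type periodHistory struct {
	cumRewardPerShare []float64 // index = period number
}

// endPeriod closes the current period after `rewards` tokens were collected
// against `totalStake`, and returns the new latest period number.
func (h *periodHistory) endPeriod(rewards, totalStake float64) int {
	last := h.cumRewardPerShare[len(h.cumRewardPerShare)-1]
	h.cumRewardPerShare = append(h.cumRewardPerShare, last+rewards/totalStake)
	return len(h.cumRewardPerShare) - 1
}

// delegatorReward is the delegator's stake times the difference of the
// cumulative ratios between the period the delegation started and the
// period created for the withdrawal.
func (h *periodHistory) delegatorReward(stake float64, startPeriod, endPeriod int) float64 {
	return stake * (h.cumRewardPerShare[endPeriod] - h.cumRewardPerShare[startPeriod])
}

func main() {
	h := &periodHistory{cumRewardPerShare: []float64{0}}
	start := 0                  // delegation of 50 begins at period 0
	h.endPeriod(100, 200)       // 100 tokens distributed over 200 total stake
	end := h.endPeriod(50, 200) // 50 more tokens, then the delegator withdraws
	fmt.Println(h.delegatorReward(50, start, end)) // 50 * (0.75 - 0) = 37.5
}
```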
+
+The rewards to a delegator are distributed when the delegation is changed or removed, or a withdrawal is requested.
+Before rewards are distributed, all slashes to the validator that occurred during the current delegation are applied.
+
+### Reference Counting in F1 Fee Distribution
+
+In F1 fee distribution, the rewards a delegator receives are calculated when their delegation is withdrawn. The calculation must read the accumulated reward-per-share terms from two periods: the period that ended when they delegated, and the final period created for the withdrawal.
+
+Additionally, as slashes change the amount of tokens a delegation will have (but we calculate this lazily,
+only when a delegator un-delegates), we must calculate rewards in separate periods before and after any slashes
+which occurred between when a delegator delegated and when they withdrew their rewards. Thus slashes, like
+delegations, reference the period which was ended by the slash event.
+
+All stored historical reward records for periods which are no longer referenced by any delegations
+or any slashes can thus be safely removed, as they will never be read (future delegations and future
+slashes will always reference future periods). This is implemented by tracking a `ReferenceCount`
+along with each historical reward storage entry. Each time a new object (delegation or slash)
+is created which might need to reference the historical record, the reference count is incremented.
+Each time an object which previously needed to reference the historical record is deleted, the reference
+count is decremented. If the reference count hits zero, the historical record is deleted.
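The reference-counting rule reads roughly as follows in a minimal sketch (hypothetical types, not the SDK's storage code): a historical record survives while any delegation or slash still references the period it closed, and is pruned the moment its count reaches zero.

```go
package main

import "fmt"

// Hypothetical sketch of the reference-counting rule for historical reward
// records: each record is kept only while some delegation or slash still
// references the period it closed.
type historicalRecords struct {
	refCount map[int]int // period -> number of referencing objects
}

// incrementReference is called when a new delegation or slash references a period.
func (h *historicalRecords) incrementReference(period int) {
	h.refCount[period]++
}

// decrementReference is called when a referencing object is deleted; the
// historical record itself is pruned once nothing references it.
func (h *historicalRecords) decrementReference(period int) {
	h.refCount[period]--
	if h.refCount[period] == 0 {
		delete(h.refCount, period) // safe to remove: it will never be read again
	}
}

func main() {
	h := &historicalRecords{refCount: map[int]int{}}
	h.incrementReference(3) // a delegation starts at period 3
	h.incrementReference(3) // a slash also ends period 3
	h.decrementReference(3) // the delegation is withdrawn
	_, kept := h.refCount[3]
	fmt.Println(kept) // still referenced by the slash
}
```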
+
+### External Community Pool Keepers
+
+An external community pool keeper is defined as:
+
+```go expandable
+// ExternalCommunityPoolKeeper is the interface that an external community pool module keeper must fulfill
+// for x/distribution to properly accept it as a community pool fund destination.
+type ExternalCommunityPoolKeeper interface {
+	// GetCommunityPoolModule gets the module name that funds should be sent to for the community pool.
+	// This is the address that x/distribution will send funds to for external management.
+	GetCommunityPoolModule() string
+	// FundCommunityPool allows an account to directly fund the community fund pool.
+	FundCommunityPool(ctx sdk.Context, amount sdk.Coins, senderAddr sdk.AccAddress) error
+	// DistributeFromCommunityPool distributes funds from the community pool module account to
+	// a receiver address.
+	DistributeFromCommunityPool(ctx sdk.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error
+}
+```
+
+By default, the distribution module uses an internal community pool implementation. An external community pool
+can be provided to the module, to which funds will then be diverted instead of the internal implementation. The reference
+external community pool maintained by the Cosmos SDK is [`x/protocolpool`](/docs/sdk/next/documentation/module-system/protocolpool).
+
+## State
+
+### FeePool
+
+All globally tracked parameters for distribution are stored within
+`FeePool`. Rewards are collected and added to the reward pool and
+distributed to validators/delegators from here.
+
+Note that the reward pool holds decimal coins (`DecCoins`) to allow
+for fractions of coins to be received from operations like inflation.
+When coins are distributed from the pool they are truncated back to
+`sdk.Coins`, which are non-decimal.
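The truncation step can be illustrated with plain integer arithmetic (an illustrative stand-in, not the SDK's `DecCoin` type — this is roughly what `DecCoins.TruncateDecimal` does): only the whole-coin part leaves the pool, and the fractional remainder stays behind as dust.

```go
package main

import "fmt"

// Illustrative sketch only (not the SDK's DecCoin type): a decimal pool
// balance is kept as numer/denom; distribution truncates to whole coins and
// leaves the fractional remainder behind in the pool as dust.
type decAmount struct {
	numer, denom int64 // amount = numer / denom
}

// truncate splits the decimal amount into whole coins plus remaining dust.
func (d decAmount) truncate() (whole int64, dust decAmount) {
	whole = d.numer / d.denom
	dust = decAmount{numer: d.numer % d.denom, denom: d.denom}
	return whole, dust
}

func main() {
	pool := decAmount{numer: 12345, denom: 1000} // 12.345 coins
	whole, dust := pool.truncate()
	fmt.Println(whole, dust.numer, dust.denom) // 12 whole coins, 345/1000 dust
}
```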
+
+* FeePool: `0x00 -> ProtocolBuffer(FeePool)`
+
+```go
+// coins with decimal
+type DecCoins []DecCoin
+
+type DecCoin struct {
+	Amount math.LegacyDec
+	Denom  string
+}
+```
+
+```protobuf
+// FeePool is the global fee pool for distribution.
+message FeePool {
+  repeated cosmos.base.v1beta1.DecCoin community_pool = 1 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.DecCoins"
+  ];
+}
+```
+
+### Validator Distribution
+
+Validator distribution information for the relevant validator is updated each time:
+
+1. delegation amount to a validator is updated,
+2. any delegator withdraws from a validator, or
+3. the validator withdraws its commission.
+
+* ValidatorDistInfo: `0x02 | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(validatorDistribution)`
+
+```go
+type ValidatorDistInfo struct {
+	OperatorAddress     sdk.AccAddress
+	SelfBondRewards     sdkmath.DecCoins
+	ValidatorCommission types.ValidatorAccumulatedCommission
+}
+```
+
+### Delegation Distribution
+
+Each delegation distribution only needs to record the height at which it last
+withdrew fees. Because a delegation must withdraw fees each time its
+properties change (e.g. bonded tokens), its properties will remain constant
+between withdrawals, and the delegator's *accumulation* factor can be calculated
+passively knowing only the height of the last withdrawal and its current properties.
+
+* DelegationDistInfo: `0x02 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(delegatorDist)`
+
+```go
+type DelegationDistInfo struct {
+	WithdrawalHeight int64 // last time this delegation withdrew rewards
+}
+```
+
+### Params
+
+The distribution module stores its params in state under the prefix `0x09`.
+They can be updated with governance or by the address with authority.
+ +* Params: `0x09 | ProtocolBuffer(Params)` + +```protobuf +// Params defines the set of params for the distribution module. +message Params { + option (amino.name) = "cosmos-sdk/x/distribution/Params"; + option (gogoproto.goproto_stringer) = false; + + string community_tax = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + + // Deprecated: The base_proposer_reward field is deprecated and is no longer used + // in the x/distribution module's reward mechanism. + string base_proposer_reward = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + deprecated = true + ]; + + // Deprecated: The bonus_proposer_reward field is deprecated and is no longer used + // in the x/distribution module's reward mechanism. + string bonus_proposer_reward = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + deprecated = true + ]; + + bool withdraw_addr_enabled = 4; +} +``` + +## Begin Block + +At each `BeginBlock`, all fees received in the previous block are transferred to +the distribution `ModuleAccount` account. When a delegator or validator +withdraws their rewards, they are taken out of the `ModuleAccount`. During begin +block, the different claims on the fees collected are updated as follows: + +* The reserve community tax is charged. +* The remainder is distributed proportionally by voting power to all bonded validators + +### The Distribution Scheme + +See [params](#params) for description of parameters. + +Let `fees` be the total fees collected in the previous block, including +inflationary rewards to the stake. All fees are collected in a specific module +account during the block. During `BeginBlock`, they are sent to the +`"distribution"` `ModuleAccount`. 
No other sending of tokens occurs. Instead, the
+rewards each account is entitled to are stored, and withdrawals can be triggered
+through the messages `FundCommunityPool`, `WithdrawValidatorCommission` and
+`WithdrawDelegatorReward`.
+
+#### Reward to the Community Pool
+
+The community pool gets `community_tax * fees`, plus any remaining dust after
+validators get their rewards, which are always rounded down to the nearest
+integer value.
+
+#### Using an External Community Pool
+
+Starting with Cosmos SDK v0.53.0, an external community pool, such as `x/protocolpool`, can be used in place of the `x/distribution` managed community pool.
+
+Please view the warning in the next section before deciding to use an external community pool.
+
+```go expandable
+// ExternalCommunityPoolKeeper is the interface that an external community pool module keeper must fulfill
+// for x/distribution to properly accept it as a community pool fund destination.
+type ExternalCommunityPoolKeeper interface {
+    // GetCommunityPoolModule gets the module name that funds should be sent to for the community pool.
+    // This is the address that x/distribution will send funds to for external management.
+    GetCommunityPoolModule() string
+    // FundCommunityPool allows an account to directly fund the community fund pool.
+    FundCommunityPool(ctx sdk.Context, amount sdk.Coins, senderAddr sdk.AccAddress) error
+    // DistributeFromCommunityPool distributes funds from the community pool module account to
+    // a receiver address.
+    DistributeFromCommunityPool(ctx sdk.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error
+}
+```
+
+```go
+app.DistrKeeper = distrkeeper.NewKeeper(
+    appCodec,
+    runtime.NewKVStoreService(keys[distrtypes.StoreKey]),
+    app.AccountKeeper,
+    app.BankKeeper,
+    app.StakingKeeper,
+    authtypes.FeeCollectorName,
+    authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+    distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), // New option.
+) +``` + +#### External Community Pool Usage Warning + +When using an external community pool with `x/distribution`, the following handlers will return an error: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents. + +#### Reward To the Validators + +The proposer receives no extra rewards. All fees are distributed among all the +bonded validators, including the proposer, in proportion to their consensus power. + +```text +powFrac = validator power / total bonded validator power +voteMul = 1 - community_tax +``` + +All validators receive `fees * voteMul * powFrac`. + +#### Rewards to Delegators + +Each validator's rewards are distributed to its delegators. The validator also +has a self-delegation that is treated like a regular delegation in +distribution calculations. + +The validator sets a commission rate. The commission rate is flexible, but each +validator sets a maximum rate and a maximum daily increase. These maximums cannot be exceeded and protect delegators from sudden increases of validator commission rates to prevent validators from taking all of the rewards. + +The outstanding rewards that the operator is entitled to are stored in +`ValidatorAccumulatedCommission`, while the rewards the delegators are entitled +to are stored in `ValidatorCurrentRewards`. The [F1 fee distribution scheme](#concepts) is used to calculate the rewards per delegator as they +withdraw or update their delegation, and is thus not handled in `BeginBlock`. + +#### Example Distribution + +For this example distribution, the underlying consensus engine selects block proposers in +proportion to their power relative to the entire bonded power. + +All validators are equally performant at including pre-commits in their proposed +blocks. 
Then hold `(pre_commits included) / (total bonded validator power)` +constant so that the amortized block reward for the validator is `( validator power / total bonded power) * (1 - community tax rate)` of +the total rewards. Consequently, the reward for a single delegator is: + +```text +(delegator proportion of the validator power / validator power) * (validator power / total bonded power) + * (1 - community tax rate) * (1 - validator commission rate) += (delegator proportion of the validator power / total bonded power) * (1 - +community tax rate) * (1 - validator commission rate) +``` + +## Messages + +### MsgSetWithdrawAddress + +By default, the withdraw address is the delegator address. To change its withdraw address, a delegator must send a `MsgSetWithdrawAddress` message. +Changing the withdraw address is possible only if the parameter `WithdrawAddrEnabled` is set to `true`. + +The withdraw address cannot be any of the module accounts. These accounts are blocked from being withdraw addresses by being added to the distribution keeper's `blockedAddrs` array at initialization. + +Response: + +```protobuf +// MsgSetWithdrawAddress sets the withdraw address for +// a delegator (or validator self-delegation). 
+message MsgSetWithdrawAddress {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgModifyWithdrawAddress";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string withdraw_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+```go
+func (k Keeper) SetWithdrawAddr(ctx context.Context, delegatorAddr sdk.AccAddress, withdrawAddr sdk.AccAddress) error
+    if k.blockedAddrs[withdrawAddr.String()] {
+        fail with "`{withdrawAddr}` is not allowed to receive external funds"
+    }
+
+    if !k.GetWithdrawAddrEnabled(ctx) {
+        fail with `ErrSetWithdrawAddrDisabled`
+    }
+
+    k.SetDelegatorWithdrawAddr(ctx, delegatorAddr, withdrawAddr)
+```
+
+### MsgWithdrawDelegatorReward
+
+A delegator can withdraw its rewards.
+Internally in the distribution module, this transaction simultaneously removes the previous delegation with associated rewards, the same as if the delegator simply started a new delegation of the same value.
+The rewards are sent immediately from the distribution `ModuleAccount` to the withdraw address.
+Any remainder (truncated decimals) is sent to the community pool.
+The starting height of the delegation is set to the current validator period, and the reference count for the previous period is decremented.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+
+In the F1 distribution, the total rewards are calculated per validator period, and a delegator receives a piece of those rewards in proportion to their stake in the validator.
+In basic F1, the total rewards that all the delegators are entitled to between two periods is calculated the following way.
+Let `R(X)` be the total accumulated rewards up to period `X` divided by the tokens staked at that time. The delegator allocation is `R(X) * delegator_stake`.
+Then the rewards for all the delegators for staking between periods `A` and `B` are `(R(B) - R(A)) * total stake`.
+However, these calculated rewards don't account for slashing.
+
+Taking the slashes into account requires iteration.
+Let `F(X)` be the fraction of the delegator's stake that remains after a slashing event that happened at period `X`.
+If the validator was slashed at periods `P1, ..., PN`, where `A < P1`, `PN < B`, the distribution module calculates the individual delegator's rewards, `T(A, B)`, as follows:
+
+```go
+stake := initial stake
+rewards := 0
+previous := A
+for P in P1, ..., PN:
+    rewards = rewards + (R(P) - R(previous)) * stake
+    stake = stake * F(P)
+    previous = P
+rewards = rewards + (R(B) - R(PN)) * stake
+```
+
+The historical rewards are calculated retroactively by playing back all the slashes and then attenuating the delegator's stake at each step.
+The final calculated stake is equivalent to the actual staked coins in the delegation, within a margin of error due to rounding.
+
+Response:
+
+```protobuf
+// MsgWithdrawDelegatorReward represents delegation withdrawal to a delegator
+// from a single validator.
+message MsgWithdrawDelegatorReward {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgWithdrawDelegationReward";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+### WithdrawValidatorCommission
+
+The validator can send the WithdrawValidatorCommission message to withdraw their accumulated commission.
+The commission is calculated in every block during `BeginBlock`, so no iteration is required to withdraw.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+Only integer amounts can be sent. 
If the accumulated rewards have decimals, the amount is truncated before the withdrawal is sent, and the remainder is left to be withdrawn later.
+
+### FundCommunityPool
+
+
+
+This handler will return an error if an `ExternalCommunityPool` is used.
+
+
+
+This message sends coins directly from the sender to the community pool.
+
+The transaction fails if the amount cannot be transferred from the sender to the distribution module account.
+
+```go expandable
+func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error {
+    if err := k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount); err != nil {
+        return err
+    }
+
+    feePool, err := k.FeePool.Get(ctx)
+    if err != nil {
+        return err
+    }
+
+    feePool.CommunityPool = feePool.CommunityPool.Add(sdk.NewDecCoinsFromCoins(amount...)...)
+    if err := k.FeePool.Set(ctx, feePool); err != nil {
+        return err
+    }
+
+    return nil
+}
+```
+
+### Common distribution operations
+
+These operations take place during many different messages.
+
+#### Initialize delegation
+
+Each time a delegation is changed, the rewards are withdrawn and the delegation is reinitialized.
+Initializing a delegation increments the validator period and keeps track of the starting period of the delegation.
+
+```go expandable
+// initializeDelegation initializes starting info for a new delegation
+func (k Keeper) initializeDelegation(ctx context.Context, val sdk.ValAddress, del sdk.AccAddress) {
+    // period has already been incremented - we want to store the period ended by this delegation action
+    previousPeriod := k.GetValidatorCurrentRewards(ctx, val).Period - 1
+
+    // increment reference count for the period we're going to track
+    k.incrementReferenceCount(ctx, val, previousPeriod)
+
+    validator := k.stakingKeeper.Validator(ctx, val)
+    delegation := k.stakingKeeper.Delegation(ctx, del, val)
+
+    // calculate delegation stake in tokens
+    // we don't store directly, so multiply delegation shares * (tokens per share)
+    // note: necessary to truncate so we don't allow withdrawing more rewards than owed
+    stake := validator.TokensFromSharesTruncated(delegation.GetShares())
+    k.SetDelegatorStartingInfo(ctx, val, del, types.NewDelegatorStartingInfo(previousPeriod, stake, uint64(ctx.BlockHeight())))
+}
+```
+
+### MsgUpdateParams
+
+Distribution module params can be updated through `MsgUpdateParams`, which is done via a governance proposal; the signer is always the gov module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/distribution/MsgUpdateParams";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // params defines the x/distribution parameters to update.
+  //
+  // NOTE: All parameters must be supplied.
+  Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+The message handling can fail if:
+
+* the signer is not the gov module account address.
+
+## Hooks
+
+Available hooks that can be called by and from this module.
+ +### Create or modify delegation distribution + +* triggered-by: `staking.MsgDelegate`, `staking.MsgBeginRedelegate`, `staking.MsgUndelegate` + +#### Before + +* The delegation rewards are withdrawn to the withdraw address of the delegator. + The rewards include the current period and exclude the starting period. +* The validator period is incremented. + The validator period is incremented because the validator's power and share distribution might have changed. +* The reference count for the delegator's starting period is decremented. + +#### After + +The starting height of the delegation is set to the previous period. +Because of the `Before`-hook, this period is the last period for which the delegator was rewarded. + +### Validator created + +* triggered-by: `staking.MsgCreateValidator` + +When a validator is created, the following validator variables are initialized: + +* Historical rewards +* Current accumulated rewards +* Accumulated commission +* Total outstanding rewards +* Period + +By default, all values are set to a `0`, except period, which is set to `1`. + +### Validator removed + +* triggered-by: `staking.RemoveValidator` + +Outstanding commission is sent to the validator's self-delegation withdrawal address. +Remaining delegator rewards get sent to the community fee pool. + +Note: The validator gets removed only when it has no remaining delegations. +At that time, all outstanding delegator rewards will have been withdrawn. +Any remaining rewards are dust amounts. + +### Validator is slashed + +* triggered-by: `staking.Slash` +* The current validator period reference count is incremented. + The reference count is incremented because the slash event has created a reference to it. +* The validator period is incremented. +* The slash event is stored for later use. + The slash event will be referenced when calculating delegator rewards. 
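The stored slash events feed the delegator reward calculation described under `MsgWithdrawDelegatorReward`: rewards accrue on a stake that is attenuated at each slash. Below is a self-contained sketch of that iteration in plain Go; the function and variable names are illustrative, and float arithmetic stands in for the SDK's fixed-point decimal types.

```go
package main

import "fmt"

// delegatorRewards replays the F1 iteration: R maps a period to the
// cumulative rewards-per-token ratio, slashes maps a period to the slash
// fraction f (a slash leaves a factor of 1-f of the stake), and periods
// lists the slash periods strictly between a and b in ascending order.
func delegatorRewards(R map[int64]float64, slashes map[int64]float64, periods []int64, a, b int64, stake float64) float64 {
	rewards := 0.0
	previous := a
	for _, p := range periods {
		rewards += (R[p] - R[previous]) * stake
		stake *= 1 - slashes[p] // attenuate the stake at the slash event
		previous = p
	}
	rewards += (R[b] - R[previous]) * stake
	return rewards
}

func main() {
	R := map[int64]float64{0: 0, 5: 2, 10: 3} // cumulative rewards per token
	slashes := map[int64]float64{5: 0.5}      // 50% slash at period 5
	// 100 tokens earn (2-0)*100 = 200 before the slash,
	// then (3-2)*50 = 50 on the attenuated stake.
	fmt.Println(delegatorRewards(R, slashes, []int64{5}, 0, 10, 100)) // 250
}
```

With no slash events the loop body never runs and the result reduces to the basic `(R(B) - R(A)) * stake` formula.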
+ +## Events + +The distribution module emits the following events: + +### BeginBlocker + +| Type | Attribute Key | Attribute Value | +| ---------------- | ------------- | ------------------ | +| proposer\_reward | validator | `{validatorAddress}` | +| proposer\_reward | reward | `{proposerReward}` | +| commission | amount | `{commissionAmount}` | +| commission | validator | `{validatorAddress}` | +| rewards | amount | `{rewardAmount}` | +| rewards | validator | `{validatorAddress}` | + +### Handlers + +#### MsgSetWithdrawAddress + +| Type | Attribute Key | Attribute Value | +| ---------------------- | ----------------- | ---------------------- | +| set\_withdraw\_address | withdraw\_address | `{withdrawAddress}` | +| message | module | distribution | +| message | action | set\_withdraw\_address | +| message | sender | `{senderAddress}` | + +#### MsgWithdrawDelegatorReward + +| Type | Attribute Key | Attribute Value | +| ----------------- | ------------- | --------------------------- | +| withdraw\_rewards | amount | `{rewardAmount}` | +| withdraw\_rewards | validator | `{validatorAddress}` | +| message | module | distribution | +| message | action | withdraw\_delegator\_reward | +| message | sender | `{senderAddress}` | + +#### MsgWithdrawValidatorCommission + +| Type | Attribute Key | Attribute Value | +| -------------------- | ------------- | ------------------------------- | +| withdraw\_commission | amount | `{commissionAmount}` | +| message | module | distribution | +| message | action | withdraw\_validator\_commission | +| message | sender | `{senderAddress}` | + +## Parameters + +The distribution module contains the following parameters: + +| Key | Type | Example | +| ------------------- | ------------ | --------------------------- | +| communitytax | string (dec) | "0.020000000000000000" \[0] | +| withdrawaddrenabled | bool | true | + +* \[0] `communitytax` must be positive and cannot exceed 1.00. 
+* `baseproposerreward` and `bonusproposerreward` were deprecated in v0.47 and are no longer used.
+
+
+The reserve pool is the pool of collected funds for use by governance taken via the `CommunityTax`.
+Currently with the Cosmos SDK, tokens collected by the CommunityTax are accounted for but unspendable.
+
+
+## Client
+
+### CLI
+
+A user can query and interact with the `distribution` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `distribution` state.
+
+```shell
+simd query distribution --help
+```
+
+##### commission
+
+The `commission` command allows users to query validator commission rewards by address.
+
+```shell
+simd query distribution commission [address] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution commission cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### community-pool
+
+The `community-pool` command allows users to query all coin balances within the community pool.
+
+```shell
+simd query distribution community-pool [flags]
+```
+
+Example:
+
+```shell
+simd query distribution community-pool
+```
+
+Example Output:
+
+```yml
+pool:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### params
+
+The `params` command allows users to query the parameters of the `distribution` module.
+
+```shell
+simd query distribution params [flags]
+```
+
+Example:
+
+```shell
+simd query distribution params
+```
+
+Example Output:
+
+```yml
+base_proposer_reward: "0.000000000000000000"
+bonus_proposer_reward: "0.000000000000000000"
+community_tax: "0.020000000000000000"
+withdraw_addr_enabled: true
+```
+
+##### rewards
+
+The `rewards` command allows users to query delegator rewards. Users can optionally include the validator address to query rewards earned from a specific validator.
+
+```shell
+simd query distribution rewards [delegator-addr] [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution rewards cosmos1...
+```
+
+Example Output:
+
+```yml
+rewards:
+- reward:
+  - amount: "1000000.000000000000000000"
+    denom: stake
+  validator_address: cosmosvaloper1..
+total:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### slashes
+
+The `slashes` command allows users to query all slashes for a given block range.
+
+```shell
+simd query distribution slashes [validator] [start-height] [end-height] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution slashes cosmosvaloper1... 1 1000
+```
+
+Example Output:
+
+```yml
+pagination:
+  next_key: null
+  total: "0"
+slashes:
+- validator_period: 20
+  fraction: "0.009999999999999999"
+```
+
+##### validator-outstanding-rewards
+
+The `validator-outstanding-rewards` command allows users to query all outstanding (un-withdrawn) rewards for a validator and all their delegations.
+
+```shell
+simd query distribution validator-outstanding-rewards [validator] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution validator-outstanding-rewards cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+rewards:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### validator-distribution-info
+
+The `validator-distribution-info` command allows users to query the commission and self-delegation rewards for a validator.
+
+```shell
+simd query distribution validator-distribution-info cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "100000.000000000000000000"
+  denom: stake
+operator_address: cosmosvaloper1...
+self_bond_rewards:
+- amount: "100000.000000000000000000"
+  denom: stake
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `distribution` module.
+
+```shell
+simd tx distribution --help
+```
+
+##### fund-community-pool
+
+The `fund-community-pool` command allows users to send funds to the community pool.
+
+```shell
+simd tx distribution fund-community-pool [amount] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution fund-community-pool 100stake --from cosmos1...
+```
+
+##### set-withdraw-addr
+
+The `set-withdraw-addr` command allows users to set the withdraw address for rewards associated with a delegator address.
+
+```shell
+simd tx distribution set-withdraw-addr [withdraw-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution set-withdraw-addr cosmos1... --from cosmos1...
+```
+
+##### withdraw-all-rewards
+
+The `withdraw-all-rewards` command allows users to withdraw all rewards for a delegator.
+
+```shell
+simd tx distribution withdraw-all-rewards [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-all-rewards --from cosmos1...
+```
+
+##### withdraw-rewards
+
+The `withdraw-rewards` command allows users to withdraw all rewards from a given delegation address,
+and optionally withdraw validator commission if the delegation address given is a validator operator and the user provides the `--commission` flag.
+
+```shell
+simd tx distribution withdraw-rewards [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-rewards cosmosvaloper1... --from cosmos1... --commission
+```
+
+### gRPC
+
+A user can query the `distribution` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query parameters of the `distribution` module.
+ +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/Params +``` + +Example Output: + +```json +{ + "params": { + "communityTax": "20000000000000000", + "baseProposerReward": "00000000000000000", + "bonusProposerReward": "00000000000000000", + "withdrawAddrEnabled": true + } +} +``` + +#### ValidatorDistributionInfo + +The `ValidatorDistributionInfo` queries validator commission and self-delegation rewards for validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorDistributionInfo +``` + +Example Output: + +```json +{ + "commission": { + "commission": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + }, + "self_bond_rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ], + "validator_address": "cosmosvalop1..." +} +``` + +#### ValidatorOutstandingRewards + +The `ValidatorOutstandingRewards` endpoint allows users to query rewards of a validator address. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorOutstandingRewards +``` + +Example Output: + +```json +{ + "rewards": { + "rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } +} +``` + +#### ValidatorCommission + +The `ValidatorCommission` endpoint allows users to query accumulated commission for a validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorCommission +``` + +Example Output: + +```json +{ + "commission": { + "commission": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } +} +``` + +#### ValidatorSlashes + +The `ValidatorSlashes` endpoint allows users to query slash events of a validator. 
+ +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorSlashes +``` + +Example Output: + +```json +{ + "slashes": [ + { + "validator_period": "20", + "fraction": "0.009999999999999999" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### DelegationRewards + +The `DelegationRewards` endpoint allows users to query the total rewards accrued by a delegation. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1...","validator_address":"cosmosvalop1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegationRewards +``` + +Example Output: + +```json +{ + "rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] +} +``` + +#### DelegationTotalRewards + +The `DelegationTotalRewards` endpoint allows users to query the total rewards accrued by each validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegationTotalRewards +``` + +Example Output: + +```json +{ + "rewards": [ + { + "validatorAddress": "cosmosvaloper1...", + "reward": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } + ], + "total": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` endpoint allows users to query all validators for given delegator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegatorValidators +``` + +Example Output: + +```json +{ + "validators": ["cosmosvaloper1..."] +} +``` + +#### DelegatorWithdrawAddress + +The `DelegatorWithdrawAddress` endpoint allows users to query the withdraw address of a delegator. 
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"delegator_address":"cosmos1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/DelegatorWithdrawAddress
+```
+
+Example Output:
+
+```json
+{
+  "withdrawAddress": "cosmos1..."
+}
+```
+
+#### CommunityPool
+
+The `CommunityPool` endpoint allows users to query the community pool coins.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/CommunityPool
+```
+
+Example Output:
+
+```json
+{
+  "pool": [
+    {
+      "denom": "stake",
+      "amount": "1000000000000000000"
+    }
+  ]
+}
+```
diff --git a/docs/sdk/next/documentation/module-system/epochs.mdx b/docs/sdk/next/documentation/module-system/epochs.mdx
new file mode 100644
index 00000000..9c883b56
--- /dev/null
+++ b/docs/sdk/next/documentation/module-system/epochs.mdx
@@ -0,0 +1,179 @@
+---
+title: '`x/epochs`'
+---
+
+## Abstract
+
+Often in the SDK, we would like to run certain code every so often. The
+purpose of the `epochs` module is to allow other modules to register to be
+signaled once every period. For example, another module can specify that it
+wants to execute code once a week, starting at UTC-time = x.
+`epochs` exposes a generalized epoch interface to other modules so that
+they can easily be signaled upon such events.
+
+## Contents
+
+1. **[Concept](#concepts)**
+2. **[State](#state)**
+3. **[Events](#events)**
+4. **[Keeper](#keepers)**
+5. **[Hooks](#hooks)**
+6. **[Queries](#queries)**
+
+## Concepts
+
+The epochs module defines on-chain timers that execute at fixed time intervals.
+Other SDK modules can then register logic to be executed at the timer ticks.
+We refer to the period in between two timer ticks as an "epoch".
+
+Every timer has a unique identifier.
+Every epoch will have a start time and an end time, where `end time = start time + timer interval`.
+On mainnet, we only utilize one identifier, with a time interval of `one day`.
+
+The timer will tick at the first block whose block time is greater than the timer end time,
+and set the start as the prior timer end time. (Notably, it's not set to the block time!)
+This means that if the chain has been down for a while, you will get one timer tick per block
+until the timer has caught up.
+
+## State
+
+The Epochs module keeps a single `EpochInfo` per identifier.
+This contains the current state of the timer with the corresponding identifier.
+Its fields are modified at every timer tick.
+EpochInfos are initialized as part of genesis initialization or upgrade logic,
+and are only modified in begin blockers.
+
+## Events
+
+The `epochs` module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key | Attribute Value |
+| ------------ | ------------- | --------------- |
+| epoch\_start | epoch\_number | `{epoch\_number}` |
+| epoch\_start | start\_time | `{start\_time}` |
+
+### EndBlocker
+
+| Type | Attribute Key | Attribute Value |
+| ---------- | ------------- | --------------- |
+| epoch\_end | epoch\_number | `{epoch\_number}` |
+
+## Keepers
+
+### Keeper functions
+
+The epochs keeper provides utility functions to manage epochs.
+
+## Hooks
+
+```go
+// the first block whose timestamp is after the duration is counted as the end of the epoch
+AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64)
+// the new epoch starts at the next block after the epoch end block
+BeforeEpochStart(ctx sdk.Context, epochIdentifier string, epochNumber int64)
+```
+
+### How modules receive hooks
+
+In its hook receiver functions, a module needs to filter on `epochIdentifier`
+and execute its logic only for specific identifiers. The filtered
+`epochIdentifier` can be kept in that module's `Params`, so that it can be
+modified by governance.
+
+This is the standard dev UX of this:
+
+```golang
+func (k MyModuleKeeper) AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64) {
+    params := k.GetParams(ctx)
+    if epochIdentifier == params.DistrEpochIdentifier {
+        // my logic
+    }
+}
+```
+
+### Panic isolation
+
+If a given epoch hook panics, its state update is reverted, but we keep
+proceeding through the remaining hooks. This allows more advanced epoch
+logic to be used, without concern over state machine halting, or halting
+subsequent modules.
+
+This does mean that if there is behavior you expect from a prior epoch
+hook, and that epoch hook reverted, your hook may also have an issue. So
+do keep in mind "what if a prior hook didn't get executed" in the safety
+checks you consider for a new epoch hook.
+
+## Queries
+
+The Epochs module provides the following queries to check the module's state.
+
+```protobuf
+service Query {
+  // EpochInfos provides the running epochInfos
+  rpc EpochInfos(QueryEpochsInfoRequest) returns (QueryEpochsInfoResponse) {}
+  // CurrentEpoch provides the current epoch of the specified identifier
+  rpc CurrentEpoch(QueryCurrentEpochRequest) returns (QueryCurrentEpochResponse) {}
+}
+```
+
+### Epoch Infos
+
+Query the currently running epochInfos:
+
+```sh
+ query epochs epoch-infos
+```
+
+**Example**
+
+An example output:
+
+```sh expandable
+epochs:
+- current_epoch: "183"
+  current_epoch_start_height: "2438409"
+  current_epoch_start_time: "2021-12-18T17:16:09.898160996Z"
+  duration: 86400s
+  epoch_counting_started: true
+  identifier: day
+  start_time: "2021-06-18T17:00:00Z"
+- current_epoch: "26"
+  current_epoch_start_height: "2424854"
+  current_epoch_start_time: "2021-12-17T17:02:07.229632445Z"
+  duration: 604800s
+  epoch_counting_started: true
+  identifier: week
+  start_time: "2021-06-18T17:00:00Z"
+```
+
+### Current Epoch
+
+Query the current epoch by the specified identifier:
+
+```sh
+ query epochs current-epoch [identifier]
+```
+
+**Example**
+
+Query the 
current `day` epoch: + +```sh + query epochs current-epoch day +``` + +Which in this example outputs: + +```sh +current_epoch: "183" +``` + + diff --git a/docs/sdk/next/documentation/module-system/evidence.mdx b/docs/sdk/next/documentation/module-system/evidence.mdx new file mode 100644 index 00000000..17487010 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/evidence.mdx @@ -0,0 +1,594 @@ +--- +title: '`x/evidence`' +description: Concepts State Messages Events Parameters BeginBlock Client CLI REST gRPC +--- + +* [Concepts](#concepts) +* [State](#state) +* [Messages](#messages) +* [Events](#events) +* [Parameters](#parameters) +* [BeginBlock](#beginblock) +* [Client](#client) + * [CLI](#cli) + * [REST](#rest) + * [gRPC](#grpc) + +## Abstract + +`x/evidence` is an implementation of a Cosmos SDK module, per [ADR 009](docs/sdk/next/documentation/legacy/adr-comprehensive), +that allows for the submission and handling of arbitrary evidence of misbehavior such +as equivocation and counterfactual signing. + +The evidence module differs from standard evidence handling which typically expects the +underlying consensus engine, e.g. CometBFT, to automatically submit evidence when +it is discovered by allowing clients and foreign chains to submit more complex evidence +directly. + +All concrete evidence types must implement the `Evidence` interface contract. Submitted +`Evidence` is first routed through the evidence module's `Router` in which it attempts +to find a corresponding registered `Handler` for that specific `Evidence` type. +Each `Evidence` type must have a `Handler` registered with the evidence module's +keeper in order for it to be successfully routed and executed. + +Each corresponding handler must also fulfill the `Handler` interface contract. The +`Handler` for a given `Evidence` type can perform any arbitrary state transitions +such as slashing, jailing, and tombstoning. 
+
+## Concepts
+
+### Evidence
+
+Any concrete type of evidence submitted to the `x/evidence` module must fulfill the
+`Evidence` contract outlined below. Not all concrete types of evidence will fulfill
+this contract in the same way and some data may be entirely irrelevant to certain
+types of evidence. An additional `ValidatorEvidence`, which extends `Evidence`,
+has also been created to define a contract for evidence against malicious validators.
+
+```go expandable
+// Evidence defines the contract which concrete evidence types of misbehavior
+// must implement.
+type Evidence interface {
+	proto.Message
+
+	Route() string
+	String() string
+	Hash() []byte
+	ValidateBasic() error
+
+	// Height at which the infraction occurred
+	GetHeight() int64
+}
+
+// ValidatorEvidence extends the Evidence interface to define a contract
+// for evidence against malicious validators.
+type ValidatorEvidence interface {
+	Evidence
+
+	// The consensus address of the malicious validator at time of infraction
+	GetConsensusAddress() sdk.ConsAddress
+
+	// The total power of the malicious validator at time of infraction
+	GetValidatorPower() int64
+
+	// The total validator set power at time of infraction
+	GetTotalPower() int64
+}
+```
+
+### Registration & Handling
+
+The `x/evidence` module must first know about all types of evidence it is expected
+to handle. This is accomplished by registering the `Route` method in the `Evidence`
+contract with what is known as a `Router` (defined below). The `Router` accepts
+`Evidence` and attempts to find the corresponding `Handler` for the `Evidence`
+via the `Route` method.
+
+```go
+type Router interface {
+	AddRoute(r string, h Handler) Router
+	HasRoute(r string) bool
+	GetRoute(path string) Handler
+	Seal()
+	Sealed() bool
+}
+```
+
+The `Handler` (defined below) is responsible for executing the entirety of the
+business logic for handling `Evidence`.
This typically includes validating the
+evidence, both stateless checks via `ValidateBasic` and stateful checks via any
+keepers provided to the `Handler`. In addition, the `Handler` may perform
+actions such as slashing and jailing a validator. All `Evidence` handled
+by the `Handler` should be persisted.
+
+```go
+// Handler defines an agnostic Evidence handler. The handler is responsible
+// for executing all corresponding business logic necessary for verifying the
+// evidence as valid. In addition, the Handler may execute any necessary
+// slashing and potential jailing.
+type Handler func(context.Context, Evidence) error
+```
+
+## State
+
+Currently the `x/evidence` module only stores valid submitted `Evidence` in state.
+The evidence state is also stored and exported in the `x/evidence` module's `GenesisState`.
+
+```protobuf
+// GenesisState defines the evidence module's genesis state.
+message GenesisState {
+  // evidence defines all the evidence at genesis.
+  repeated google.protobuf.Any evidence = 1;
+}
+```
+
+All `Evidence` is retrieved and stored via a prefix `KVStore` using prefix `0x00` (`KeyPrefixEvidence`).
+
+## Messages
+
+### MsgSubmitEvidence
+
+Evidence is submitted through a `MsgSubmitEvidence` message:
+
+```protobuf
+// MsgSubmitEvidence represents a message that supports submitting arbitrary
+// Evidence of misbehavior such as equivocation or counterfactual signing.
+message MsgSubmitEvidence {
+  string submitter = 1;
+  google.protobuf.Any evidence = 2;
+}
+```
+
+Note, the `Evidence` of a `MsgSubmitEvidence` message must have a corresponding
+`Handler` registered with the `x/evidence` module's `Router` in order to be processed
+and routed correctly.
+
+Given the `Evidence` is registered with a corresponding `Handler`, it is processed
+as follows:
+
+```go expandable
+func SubmitEvidence(ctx Context, evidence Evidence) error {
+	if _, err := GetEvidence(ctx, evidence.Hash()); err == nil {
+		return errorsmod.Wrap(types.ErrEvidenceExists, strings.ToUpper(hex.EncodeToString(evidence.Hash())))
+	}
+	if !router.HasRoute(evidence.Route()) {
+		return errorsmod.Wrap(types.ErrNoEvidenceHandlerExists, evidence.Route())
+	}
+
+	handler := router.GetRoute(evidence.Route())
+	if err := handler(ctx, evidence); err != nil {
+		return errorsmod.Wrap(types.ErrInvalidEvidence, err.Error())
+	}
+
+	ctx.EventManager().EmitEvent(
+		sdk.NewEvent(
+			types.EventTypeSubmitEvidence,
+			sdk.NewAttribute(types.AttributeKeyEvidenceHash, strings.ToUpper(hex.EncodeToString(evidence.Hash()))),
+		),
+	)
+
+	SetEvidence(ctx, evidence)
+	return nil
+}
+```
+
+First, there must not already exist valid submitted `Evidence` of the exact same
+type. Secondly, the `Evidence` is routed to the `Handler` and executed. Finally,
+if there is no error in handling the `Evidence`, an event is emitted and it is persisted to state.
+
+## Events
+
+The `x/evidence` module emits the following events:
+
+### Handlers
+
+#### MsgSubmitEvidence
+
+| Type             | Attribute Key  | Attribute Value   |
+| ---------------- | -------------- | ----------------- |
+| submit\_evidence | evidence\_hash | `{evidenceHash}`  |
+| message          | module         | evidence          |
+| message          | sender         | `{senderAddress}` |
+| message          | action         | submit\_evidence  |
+
+## Parameters
+
+The evidence module does not contain any parameters.
+
+## BeginBlock
+
+### Evidence Handling
+
+CometBFT blocks can include
+[Evidence](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md#evidence) that indicates if a validator committed malicious behavior.
The relevant information is forwarded to the application as ABCI Evidence in `abci.RequestBeginBlock` so that the validator can be punished accordingly. + +#### Equivocation + +The Cosmos SDK handles two types of evidence inside the ABCI `BeginBlock`: + +* `DuplicateVoteEvidence`, +* `LightClientAttackEvidence`. + +The evidence module handles these two evidence types the same way. First, the Cosmos SDK converts the CometBFT concrete evidence type to an SDK `Evidence` interface using `Equivocation` as the concrete type. + +```protobuf +// Equivocation implements the Evidence interface and defines evidence of double +// signing misbehavior. +message Equivocation { + option (amino.name) = "cosmos-sdk/Equivocation"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.equal) = false; + + // height is the equivocation height. + int64 height = 1; + + // time is the equivocation time. + google.protobuf.Timestamp time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + + // power is the equivocation validator power. + int64 power = 3; + + // consensus_address is the equivocation validator consensus address. + string consensus_address = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +For some `Equivocation` submitted in `block` to be valid, it must satisfy: + +`Evidence.Timestamp >= block.Timestamp - MaxEvidenceAge` + +Where: + +* `Evidence.Timestamp` is the timestamp in the block at height `Evidence.Height` +* `block.Timestamp` is the current block timestamp. + +If valid `Equivocation` evidence is included in a block, the validator's stake is +reduced (slashed) by `SlashFractionDoubleSign` as defined by the `x/slashing` module +of what their stake was when the infraction occurred, rather than when the evidence was discovered. +We want to "follow the stake", i.e., the stake that contributed to the infraction +should be slashed, even if it has since been redelegated or started unbonding. 
+
+In addition, the validator is permanently jailed and tombstoned to make it impossible for that
+validator to ever re-enter the validator set.
+
+The `Equivocation` evidence is handled as follows:
+
+```go
+func (k Keeper) handleEquivocationEvidence(ctx context.Context, evidence *types.Equivocation) error {
+	sdkCtx := sdk.UnwrapSDKContext(ctx)
+	logger := k.Logger(ctx)
+	consAddr := evidence.GetConsensusAddress(k.stakingKeeper.ConsensusAddressCodec())
+
+	validator, err := k.stakingKeeper.ValidatorByConsAddr(ctx, consAddr)
+	if err != nil {
+		return err
+	}
+	if validator == nil || validator.IsUnbonded() {
+		// Defensive: Simulation doesn't take unbonding periods into account, and
+		// CometBFT might break this assumption at some point.
+		return nil
+	}
+
+	if len(validator.GetOperator()) != 0 {
+		if _, err := k.slashingKeeper.GetPubkey(ctx, consAddr.Bytes()); err != nil {
+			// Ignore evidence that cannot be handled.
+			//
+			// NOTE: We used to panic with:
+			// `panic(fmt.Sprintf("Validator consensus-address %v not found", consAddr))`,
+			// but this couples the expectations of the app to both CometBFT and
+			// the simulator. Both are expected to provide the full range of
+			// allowable but none of the disallowed evidence types. Instead of
+			// getting this coordination right, it is easier to relax the
+			// constraints and ignore evidence that cannot be handled.
+			logger.Error(fmt.Sprintf("ignore evidence; expected public key for validator %s not found", consAddr))
+			return nil
+		}
+	}
+
+	// calculate the age of the evidence
+	infractionHeight := evidence.GetHeight()
+	infractionTime := evidence.GetTime()
+	ageDuration := sdkCtx.BlockHeader().Time.Sub(infractionTime)
+	ageBlocks := sdkCtx.BlockHeader().Height - infractionHeight
+
+	// Reject evidence if the double-sign is too old. Evidence is considered stale
+	// if the difference in time and number of blocks is greater than the allowed
+	// parameters defined.
+ cp := sdkCtx.ConsensusParams() + if cp.Evidence != nil { + if ageDuration > cp.Evidence.MaxAgeDuration && ageBlocks > cp.Evidence.MaxAgeNumBlocks { + logger.Info( + "ignored equivocation; evidence too old", + "validator", consAddr, + "infraction_height", infractionHeight, + "max_age_num_blocks", cp.Evidence.MaxAgeNumBlocks, + "infraction_time", infractionTime, + "max_age_duration", cp.Evidence.MaxAgeDuration, + ) + return nil + } + } + + if ok := k.slashingKeeper.HasValidatorSigningInfo(ctx, consAddr); !ok { + panic(fmt.Sprintf("expected signing info for validator %s but not found", consAddr)) + } + + // ignore if the validator is already tombstoned + if k.slashingKeeper.IsTombstoned(ctx, consAddr) { + logger.Info( + "ignored equivocation; validator already tombstoned", + "validator", consAddr, + "infraction_height", infractionHeight, + "infraction_time", infractionTime, + ) + return nil + } + + logger.Info( + "confirmed equivocation", + "validator", consAddr, + "infraction_height", infractionHeight, + "infraction_time", infractionTime, + ) + + // We need to retrieve the stake distribution which signed the block, so we + // subtract ValidatorUpdateDelay from the evidence height. + // Note, that this *can* result in a negative "distributionHeight", up to + // -ValidatorUpdateDelay, i.e. at the end of the + // pre-genesis block (none) = at the beginning of the genesis block. + // That's fine since this is just used to filter unbonding delegations & redelegations. + distributionHeight := infractionHeight - sdk.ValidatorUpdateDelay + + // Slash validator. The `power` is the int64 power of the validator as provided + // to/by CometBFT. This value is validator.Tokens as sent to CometBFT via + // ABCI, and now received as evidence. The fraction is passed in to separately + // to slash unbonding and rebonding delegations. 
+	slashFractionDoubleSign, err := k.slashingKeeper.SlashFractionDoubleSign(ctx)
+	if err != nil {
+		return err
+	}
+
+	err = k.slashingKeeper.SlashWithInfractionReason(
+		ctx,
+		consAddr,
+		slashFractionDoubleSign,
+		evidence.GetValidatorPower(), distributionHeight,
+		stakingtypes.Infraction_INFRACTION_DOUBLE_SIGN,
+	)
+	if err != nil {
+		return err
+	}
+
+	// Jail the validator if not already jailed. This will begin unbonding the
+	// validator if not already unbonding (tombstoned).
+	if !validator.IsJailed() {
+		err = k.slashingKeeper.Jail(ctx, consAddr)
+		if err != nil {
+			return err
+		}
+	}
+
+```
+
+**Note:** The slashing, jailing, and tombstoning calls are delegated through the `x/slashing` module
+that emits informative events and finally delegates calls to the `x/staking` module. See documentation
+on slashing and jailing in [State Transitions](/docs/sdk/next/documentation/module-system/staking#state-transitions).
+
+## Client
+
+### CLI
+
+A user can query and interact with the `evidence` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `evidence` state.
+
+```bash
+simd query evidence --help
+```
+
+#### evidence
+
+The `evidence` command allows users to list all evidence or evidence by hash.
+ +Usage: + +```bash +simd query evidence [flags] +``` + +To query evidence by hash + +Example: + +```bash +simd query evidence evidence "DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660" +``` + +Example Output: + +```bash +evidence: + consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h + height: 11 + power: 100 + time: "2021-10-20T16:08:38.194017624Z" +``` + +To get all evidence + +Example: + +```bash +simd query evidence list +``` + +Example Output: + +```bash +evidence: + consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h + height: 11 + power: 100 + time: "2021-10-20T16:08:38.194017624Z" +pagination: + next_key: null + total: "1" +``` + +### REST + +A user can query the `evidence` module using REST endpoints. + +#### Evidence + +Get evidence by hash + +```bash +/cosmos/evidence/v1beta1/evidence/{hash} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence/DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660" +``` + +Example Output: + +```bash +{ + "evidence": { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } +} +``` + +#### All evidence + +Get all evidence + +```bash +/cosmos/evidence/v1beta1/evidence +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence" +``` + +Example Output: + +```bash expandable +{ + "evidence": [ + { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### gRPC + +A user can query the `evidence` module using gRPC endpoints. 
+ +#### Evidence + +Get evidence by hash + +```bash +cosmos.evidence.v1beta1.Query/Evidence +``` + +Example: + +```bash +grpcurl -plaintext -d '{"evidence_hash":"DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"}' localhost:9090 cosmos.evidence.v1beta1.Query/Evidence +``` + +Example Output: + +```bash +{ + "evidence": { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } +} +``` + +#### All evidence + +Get all evidence + +```bash +cosmos.evidence.v1beta1.Query/AllEvidence +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.evidence.v1beta1.Query/AllEvidence +``` + +Example Output: + +```bash expandable +{ + "evidence": [ + { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } + ], + "pagination": { + "total": "1" + } +} +``` diff --git a/docs/sdk/next/documentation/module-system/feegrant.mdx b/docs/sdk/next/documentation/module-system/feegrant.mdx new file mode 100644 index 00000000..82084988 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/feegrant.mdx @@ -0,0 +1,3771 @@ +--- +title: '`x/feegrant`' +description: >- + This document specifies the fee grant module. For the full ADR, please see Fee + Grant ADR-029. +--- + +## Abstract + +This document specifies the fee grant module. For the full ADR, please see [Fee Grant ADR-029](docs/sdk/next/documentation/legacy/adr-comprehensive). + +This module allows accounts to grant fee allowances and to use fees from their accounts. Grantees can execute any transaction without the need to maintain sufficient fees. 
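At its core, the module tracks a per-grant allowance that is drawn down as fees are paid on the grantee's behalf. The following is a toy, single-denomination sketch of that bookkeeping; all names (`basicAllowance`, `accept`) are invented for illustration, and the real module works with `sdk.Coins` through the `FeeAllowanceI` interface described below.

```go
package main

import "fmt"

// basicAllowance models a fee grant as a remaining spend limit that is
// drawn down as fees are paid. An "unlimited" grant has no spend limit.
type basicAllowance struct {
	spendLimit int64 // remaining allowance
	unlimited  bool  // an empty spend_limit means "no limit"
}

// accept deducts fee from the remaining limit. remove reports whether the
// grant is used up and should be deleted; err is non-nil when the fee
// exceeds what is left.
func (a *basicAllowance) accept(fee int64) (remove bool, err error) {
	if a.unlimited {
		return false, nil
	}
	if fee > a.spendLimit {
		return true, fmt.Errorf("fee %d exceeds remaining allowance %d", fee, a.spendLimit)
	}
	a.spendLimit -= fee
	return a.spendLimit == 0, nil
}

func main() {
	a := &basicAllowance{spendLimit: 100}

	remove, err := a.accept(60)
	fmt.Println(remove, err, a.spendLimit) // false <nil> 40

	remove, err = a.accept(40)
	fmt.Println(remove, err, a.spendLimit) // true <nil> 0
}
```

The same accept-or-remove shape appears in the `FeeAllowanceI.Accept` contract below, where exhausted grants are deleted from storage.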
+
+## Contents
+
+* [Concepts](#concepts)
+* [State](#state)
+  * [FeeAllowance](#feeallowance)
+  * [FeeAllowanceQueue](#feeallowancequeue)
+* [Messages](#messages)
+  * [Msg/GrantAllowance](#msggrantallowance)
+  * [Msg/RevokeAllowance](#msgrevokeallowance)
+* [Events](#events)
+* [Msg Server](#msg-server)
+  * [MsgGrantAllowance](#msggrantallowance-1)
+  * [MsgRevokeAllowance](#msgrevokeallowance-1)
+  * [Exec fee allowance](#exec-fee-allowance)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+
+## Concepts
+
+### Grant
+
+`Grant` is stored in the KVStore to record a grant with full context. Every grant contains `granter`, `grantee` and the kind of `allowance` granted. `granter` is the account address giving permission to `grantee` (the beneficiary account address) to pay for some or all of `grantee`'s transaction fees. `allowance` defines what kind of fee allowance (`BasicAllowance` or `PeriodicAllowance`, see below) is granted to `grantee`. `allowance` accepts an interface which implements `FeeAllowanceI`, encoded as an `Any` type. Only one fee grant can exist for a given `grantee` and `granter` pair; self-grants are not allowed.
+
+```protobuf
+// Grant is stored in the KVStore to record a grant with full context
+message Grant {
+  // granter is the address of the user granting an allowance of their funds.
+  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // grantee is the address of the user being granted an allowance of another user's funds.
+  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // allowance can be any of basic, periodic, allowed fee allowance.
+  google.protobuf.Any allowance = 3 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"];
+}
+```
+
+`FeeAllowanceI` looks like:
+
+```go expandable
+package feegrant
+
+import (
+	"time"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+// FeeAllowance implementations are tied to a given fee delegator and delegatee,
+// and are used to enforce fee grant limits.
+type FeeAllowanceI interface {
+	// Accept can use the fee payment requested as well as the timestamp of the current block
+	// to determine whether or not to process this. This is checked in
+	// Keeper.UseGrantedFees and the return values should match how it is handled there.
+	//
+	// If it returns an error, the fee payment is rejected, otherwise it is accepted.
+	// The FeeAllowance implementation is expected to update its internal state
+	// and will be saved again after an acceptance.
+	//
+	// If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
+	// (eg. when it is used up). (See call to RevokeAllowance in Keeper.UseGrantedFees)
+	Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)
+
+	// ValidateBasic should evaluate this FeeAllowance for internal consistency.
+	// Don't allow negative amounts, or negative periods for example.
+	ValidateBasic() error
+
+	// ExpiresAt returns the expiry time of the allowance.
+	ExpiresAt() (*time.Time, error)
+}
+```
+
+### Fee Allowance types
+
+There are three types of fee allowances present at the moment:
+
+* `BasicAllowance`
+* `PeriodicAllowance`
+* `AllowedMsgAllowance`
+
+### BasicAllowance
+
+`BasicAllowance` is permission for the `grantee` to pay fees from the `granter`'s account. If either the `spend_limit` or the `expiration` reaches its limit, the grant is removed from state.
+
+```protobuf
+// BasicAllowance implements Allowance with a one-time grant of coins
+// that optionally expires. The grantee can use up to SpendLimit to cover fees.
+message BasicAllowance {
+  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
+  option (amino.name) = "cosmos-sdk/BasicAllowance";
+
+  // spend_limit specifies the maximum amount of coins that can be spent
+  // by this allowance and will be updated as coins are spent. If it is
+  // empty, there is no spend limit and any amount of coins can be spent.
+  repeated cosmos.base.v1beta1.Coin spend_limit = 1 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+
+  // expiration specifies an optional time when this allowance expires
+  google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true];
+}
+```
+
+* `spend_limit` is the limit of coins that are allowed to be used from the `granter` account. If it is empty, there is no spend limit and the `grantee` can use any number of available coins from the `granter` account address before the expiration.
+
+* `expiration` specifies an optional time when this allowance expires. If the value is left empty, there is no expiry for the grant.
+
+* When a grant is created with empty values for `spend_limit` and `expiration`, it is still a valid grant. It doesn't restrict how many coins the `grantee` can use from the `granter` and it doesn't expire. The only way to restrict the `grantee` is by revoking the grant.
+
+### PeriodicAllowance
+
+`PeriodicAllowance` is a repeating fee allowance for a specified period. It can define when the grant expires, when the period resets, and the maximum number of coins that can be spent within a given period.
+
+```protobuf
+// PeriodicAllowance extends Allowance to allow for both a maximum cap,
+// as well as a limit per time period.
+message PeriodicAllowance {
+  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
+  option (amino.name) = "cosmos-sdk/PeriodicAllowance";
+
+  // basic specifies a struct of `BasicAllowance`
+  BasicAllowance basic = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // period specifies the time duration in which period_spend_limit coins can
+  // be spent before that allowance is reset
+  google.protobuf.Duration period = 2
+      [(gogoproto.stdduration) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // period_spend_limit specifies the maximum number of coins that can be spent
+  // in the period
+  repeated cosmos.base.v1beta1.Coin period_spend_limit = 3 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+
+  // period_can_spend is the number of coins left to be spent before the period_reset time
+  repeated cosmos.base.v1beta1.Coin period_can_spend = 4 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+
+  // period_reset is the time at which this period resets and a new one begins,
+  // it is calculated from the start time of the first transaction after the
+  // last period ended
+  google.protobuf.Timestamp period_reset = 5
+      [(gogoproto.stdtime) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+* `basic` is the instance of `BasicAllowance` which is optional for a periodic fee allowance. If empty, the grant will have no `expiration` and no `spend_limit`.
+
+* `period` is the duration of one period; after each period passes, `period_can_spend` is reset.
+
+* `period_spend_limit` specifies the maximum number of coins that can be spent in the period.
+
+* `period_can_spend` is the number of coins left to be spent before the period\_reset time.
+
+* `period_reset` keeps track of when the next period reset should happen.
+
+### AllowedMsgAllowance
+
+`AllowedMsgAllowance` is a fee allowance that wraps either a `BasicAllowance` or a `PeriodicAllowance`, restricted to the allowed messages specified by the granter.
+
+```protobuf
+// AllowedMsgAllowance creates allowance only for specified message types.
+message AllowedMsgAllowance {
+  option (gogoproto.goproto_getters) = false;
+  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
+  option (amino.name) = "cosmos-sdk/AllowedMsgAllowance";
+
+  // allowance can be any of basic and periodic fee allowance.
+  google.protobuf.Any allowance = 1 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"];
+
+  // allowed_messages are the messages for which the grantee has the access.
+  repeated string allowed_messages = 2;
+}
+```
+
+* `allowance` is either `BasicAllowance` or `PeriodicAllowance`.
+
+* `allowed_messages` is an array of messages for which the given allowance may be used.
+
+### FeeGranter flag
+
+The `feegrant` module introduces a `FeeGranter` flag for the CLI so that transactions can be executed with a fee granter. When this flag is set, `clientCtx` appends the granter account address to transactions generated through the CLI.
+
+```go expandable
+package client
+
+import (
+	"crypto/tls"
+	"fmt"
+	"strings"
+
+	"github.com/pkg/errors"
+	"github.com/spf13/cobra"
+	"github.com/spf13/pflag"
+	"github.com/tendermint/tendermint/libs/cli"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/credentials"
+	"google.golang.org/grpc/credentials/insecure"
+
+	"github.com/cosmos/cosmos-sdk/client/flags"
+	"github.com/cosmos/cosmos-sdk/crypto/keyring"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ ClientContextKey defines the context key used to retrieve a client.Context from
+/ a command's Context.
+const ClientContextKey = sdk.ContextKey("client.context") + +/ SetCmdClientContextHandler is to be used in a command pre-hook execution to +/ read flags that populate a Context and sets that to the command's Context. +func SetCmdClientContextHandler(clientCtx Context, cmd *cobra.Command) (err error) { + clientCtx, err = ReadPersistentCommandFlags(clientCtx, cmd.Flags()) + if err != nil { + return err +} + +return SetCmdClientContext(cmd, clientCtx) +} + +/ ValidateCmd returns unknown command error or Help display if help flag set +func ValidateCmd(cmd *cobra.Command, args []string) + +error { + var unknownCmd string + var skipNext bool + for _, arg := range args { + / search for help flag + if arg == "--help" || arg == "-h" { + return cmd.Help() +} + + / check if the current arg is a flag + switch { + case len(arg) > 0 && (arg[0] == '-'): + / the next arg should be skipped if the current arg is a + / flag and does not use "=" to assign the flag's value + if !strings.Contains(arg, "=") { + skipNext = true +} + +else { + skipNext = false +} + case skipNext: + / skip current arg + skipNext = false + case unknownCmd == "": + / unknown command found + / continue searching for help flag + unknownCmd = arg +} + +} + + / return the help screen if no unknown command is found + if unknownCmd != "" { + err := fmt.Sprintf("unknown command \"%s\" for \"%s\"", unknownCmd, cmd.CalledAs()) + + / build suggestions for unknown argument + if suggestions := cmd.SuggestionsFor(unknownCmd); len(suggestions) > 0 { + err += "\n\nDid you mean this?\n" + for _, s := range suggestions { + err += fmt.Sprintf("\t%v\n", s) +} + +} + +return errors.New(err) +} + +return cmd.Help() +} + +/ ReadPersistentCommandFlags returns a Context with fields set for "persistent" +/ or common flags that do not necessarily change with context. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func ReadPersistentCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + if clientCtx.OutputFormat == "" || flagSet.Changed(cli.OutputFlag) { + output, _ := flagSet.GetString(cli.OutputFlag) + +clientCtx = clientCtx.WithOutputFormat(output) +} + if clientCtx.HomeDir == "" || flagSet.Changed(flags.FlagHome) { + homeDir, _ := flagSet.GetString(flags.FlagHome) + +clientCtx = clientCtx.WithHomeDir(homeDir) +} + if !clientCtx.Simulate || flagSet.Changed(flags.FlagDryRun) { + dryRun, _ := flagSet.GetBool(flags.FlagDryRun) + +clientCtx = clientCtx.WithSimulation(dryRun) +} + if clientCtx.KeyringDir == "" || flagSet.Changed(flags.FlagKeyringDir) { + keyringDir, _ := flagSet.GetString(flags.FlagKeyringDir) + + / The keyring directory is optional and falls back to the home directory + / if omitted. 
+ if keyringDir == "" { + keyringDir = clientCtx.HomeDir +} + +clientCtx = clientCtx.WithKeyringDir(keyringDir) +} + if clientCtx.ChainID == "" || flagSet.Changed(flags.FlagChainID) { + chainID, _ := flagSet.GetString(flags.FlagChainID) + +clientCtx = clientCtx.WithChainID(chainID) +} + if clientCtx.Keyring == nil || flagSet.Changed(flags.FlagKeyringBackend) { + keyringBackend, _ := flagSet.GetString(flags.FlagKeyringBackend) + if keyringBackend != "" { + kr, err := NewKeyringFromBackend(clientCtx, keyringBackend) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithKeyring(kr) +} + +} + if clientCtx.Client == nil || flagSet.Changed(flags.FlagNode) { + rpcURI, _ := flagSet.GetString(flags.FlagNode) + if rpcURI != "" { + clientCtx = clientCtx.WithNodeURI(rpcURI) + +client, err := NewClientFromNode(rpcURI) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithClient(client) +} + +} + if clientCtx.GRPCClient == nil || flagSet.Changed(flags.FlagGRPC) { + grpcURI, _ := flagSet.GetString(flags.FlagGRPC) + if grpcURI != "" { + var dialOpts []grpc.DialOption + + useInsecure, _ := flagSet.GetBool(flags.FlagGRPCInsecure) + if useInsecure { + dialOpts = append(dialOpts, grpc.WithTransportCredentials(insecure.NewCredentials())) +} + +else { + dialOpts = append(dialOpts, grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{ + MinVersion: tls.VersionTLS12, +}))) +} + +grpcClient, err := grpc.Dial(grpcURI, dialOpts...) + if err != nil { + return Context{ +}, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) +} + +} + +return clientCtx, nil +} + +/ readQueryCommandFlags returns an updated Context with fields set based on flags +/ defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func readQueryCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + if clientCtx.Height == 0 || flagSet.Changed(flags.FlagHeight) { + height, _ := flagSet.GetInt64(flags.FlagHeight) + +clientCtx = clientCtx.WithHeight(height) +} + if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { + useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) + +clientCtx = clientCtx.WithUseLedger(useLedger) +} + +return ReadPersistentCommandFlags(clientCtx, flagSet) +} + +/ readTxCommandFlags returns an updated Context with fields set based on flags +/ defined in AddTxFlagsToCmd. An error is returned if any flag query fails. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func readTxCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + clientCtx, err := ReadPersistentCommandFlags(clientCtx, flagSet) + if err != nil { + return clientCtx, err +} + if !clientCtx.GenerateOnly || flagSet.Changed(flags.FlagGenerateOnly) { + genOnly, _ := flagSet.GetBool(flags.FlagGenerateOnly) + +clientCtx = clientCtx.WithGenerateOnly(genOnly) +} + if !clientCtx.Offline || flagSet.Changed(flags.FlagOffline) { + offline, _ := flagSet.GetBool(flags.FlagOffline) + +clientCtx = clientCtx.WithOffline(offline) +} + if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { + useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) + +clientCtx = clientCtx.WithUseLedger(useLedger) +} + if clientCtx.BroadcastMode == "" || flagSet.Changed(flags.FlagBroadcastMode) { + bMode, _ := flagSet.GetString(flags.FlagBroadcastMode) + +clientCtx = clientCtx.WithBroadcastMode(bMode) +} + if !clientCtx.SkipConfirm || flagSet.Changed(flags.FlagSkipConfirmation) { + skipConfirm, _ := flagSet.GetBool(flags.FlagSkipConfirmation) + +clientCtx = clientCtx.WithSkipConfirmation(skipConfirm) +} + if clientCtx.SignModeStr == "" || flagSet.Changed(flags.FlagSignMode) { + signModeStr, _ := flagSet.GetString(flags.FlagSignMode) + +clientCtx = clientCtx.WithSignModeStr(signModeStr) +} + if clientCtx.FeePayer == nil || flagSet.Changed(flags.FlagFeePayer) { + payer, _ := flagSet.GetString(flags.FlagFeePayer) + if payer != "" { + payerAcc, err := sdk.AccAddressFromBech32(payer) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFeePayerAddress(payerAcc) +} + +} + if clientCtx.FeeGranter == 
nil || flagSet.Changed(flags.FlagFeeGranter) { + granter, _ := flagSet.GetString(flags.FlagFeeGranter) + if granter != "" { + granterAcc, err := sdk.AccAddressFromBech32(granter) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFeeGranterAddress(granterAcc) +} + +} + if clientCtx.From == "" || flagSet.Changed(flags.FlagFrom) { + from, _ := flagSet.GetString(flags.FlagFrom) + +fromAddr, fromName, keyType, err := GetFromFields(clientCtx, clientCtx.Keyring, from) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFrom(from).WithFromAddress(fromAddr).WithFromName(fromName) + + / If the `from` signer account is a ledger key, we need to use + / SIGN_MODE_AMINO_JSON, because ledger doesn't support proto yet. + / ref: https://github.com/cosmos/cosmos-sdk/issues/8109 + if keyType == keyring.TypeLedger && clientCtx.SignModeStr != flags.SignModeLegacyAminoJSON && !clientCtx.LedgerHasProtobuf { + fmt.Println("Default sign-mode 'direct' not supported by Ledger, using sign-mode 'amino-json'.") + +clientCtx = clientCtx.WithSignModeStr(flags.SignModeLegacyAminoJSON) +} + +} + if !clientCtx.IsAux || flagSet.Changed(flags.FlagAux) { + isAux, _ := flagSet.GetBool(flags.FlagAux) + +clientCtx = clientCtx.WithAux(isAux) + if isAux { + / If the user didn't explicitly set an --output flag, use JSON by + / default. + if clientCtx.OutputFormat == "" || !flagSet.Changed(cli.OutputFlag) { + clientCtx = clientCtx.WithOutputFormat("json") +} + + / If the user didn't explicitly set a --sign-mode flag, use + / DIRECT_AUX by default. + if clientCtx.SignModeStr == "" || !flagSet.Changed(flags.FlagSignMode) { + clientCtx = clientCtx.WithSignModeStr(flags.SignModeDirectAux) +} + +} + +} + +return clientCtx, nil +} + +/ GetClientQueryContext returns a Context from a command with fields set based on flags +/ defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. 
+/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func GetClientQueryContext(cmd *cobra.Command) (Context, error) { + ctx := GetClientContextFromCmd(cmd) + +return readQueryCommandFlags(ctx, cmd.Flags()) +} + +/ GetClientTxContext returns a Context from a command with fields set based on flags +/ defined in AddTxFlagsToCmd. An error is returned if any flag query fails. +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func GetClientTxContext(cmd *cobra.Command) (Context, error) { + ctx := GetClientContextFromCmd(cmd) + +return readTxCommandFlags(ctx, cmd.Flags()) +} + +/ GetClientContextFromCmd returns a Context from a command or an empty Context +/ if it has not been set. +func GetClientContextFromCmd(cmd *cobra.Command) + +Context { + if v := cmd.Context().Value(ClientContextKey); v != nil { + clientCtxPtr := v.(*Context) + +return *clientCtxPtr +} + +return Context{ +} +} + +/ SetCmdClientContext sets a command's Context value to the provided argument. 
+func SetCmdClientContext(cmd *cobra.Command, clientCtx Context) + +error { + v := cmd.Context().Value(ClientContextKey) + if v == nil { + return errors.New("client context not set") +} + clientCtxPtr := v.(*Context) + *clientCtxPtr = clientCtx + + return nil +} +``` + +```go expandable +package tx + +import ( + + "bufio" + "context" + "encoding/json" + "errors" + "fmt" + "os" + + gogogrpc "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/pflag" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/input" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ GenerateOrBroadcastTxCLI will either generate and print and unsigned transaction +/ or sign it and broadcast it returning an error upon failure. +func GenerateOrBroadcastTxCLI(clientCtx client.Context, flagSet *pflag.FlagSet, msgs ...sdk.Msg) + +error { + txf := NewFactoryCLI(clientCtx, flagSet) + +return GenerateOrBroadcastTxWithFactory(clientCtx, txf, msgs...) +} + +/ GenerateOrBroadcastTxWithFactory will either generate and print and unsigned transaction +/ or sign it and broadcast it returning an error upon failure. +func GenerateOrBroadcastTxWithFactory(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) + +error { + / Validate all msgs before generating or broadcasting the tx. + / We were calling ValidateBasic separately in each CLI handler before. + / Right now, we're factorizing that call inside this function. + / ref: https://github.com/cosmos/cosmos-sdk/pull/9236#discussion_r623803504 + for _, msg := range msgs { + if err := msg.ValidateBasic(); err != nil { + return err +} + +} + + / If the --aux flag is set, we simply generate and print the AuxSignerData. 
+ if clientCtx.IsAux { + auxSignerData, err := makeAuxSignerData(clientCtx, txf, msgs...) + if err != nil { + return err +} + +return clientCtx.PrintProto(&auxSignerData) +} + if clientCtx.GenerateOnly { + return txf.PrintUnsignedTx(clientCtx, msgs...) +} + +return BroadcastTx(clientCtx, txf, msgs...) +} + +/ BroadcastTx attempts to generate, sign and broadcast a transaction with the +/ given set of messages. It will also simulate gas requirements if necessary. +/ It will return an error upon failure. +func BroadcastTx(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) + +error { + txf, err := txf.Prepare(clientCtx) + if err != nil { + return err +} + if txf.SimulateAndExecute() || clientCtx.Simulate { + _, adjusted, err := CalculateGas(clientCtx, txf, msgs...) + if err != nil { + return err +} + +txf = txf.WithGas(adjusted) + _, _ = fmt.Fprintf(os.Stderr, "%s\n", GasEstimateResponse{ + GasEstimate: txf.Gas() +}) +} + if clientCtx.Simulate { + return nil +} + +tx, err := txf.BuildUnsignedTx(msgs...) 
+ if err != nil { + return err +} + if !clientCtx.SkipConfirm { + txBytes, err := clientCtx.TxConfig.TxJSONEncoder()(tx.GetTx()) + if err != nil { + return err +} + if err := clientCtx.PrintRaw(json.RawMessage(txBytes)); err != nil { + _, _ = fmt.Fprintf(os.Stderr, "%s\n", txBytes) +} + buf := bufio.NewReader(os.Stdin) + +ok, err := input.GetConfirmation("confirm transaction before signing and broadcasting", buf, os.Stderr) + if err != nil || !ok { + _, _ = fmt.Fprintf(os.Stderr, "%s\n", "cancelled transaction") + +return err +} + +} + +err = Sign(txf, clientCtx.GetFromName(), tx, true) + if err != nil { + return err +} + +txBytes, err := clientCtx.TxConfig.TxEncoder()(tx.GetTx()) + if err != nil { + return err +} + + / broadcast to a Tendermint node + res, err := clientCtx.BroadcastTx(txBytes) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +} + +/ CalculateGas simulates the execution of a transaction and returns the +/ simulation response obtained by the query and the adjusted gas amount. +func CalculateGas( + clientCtx gogogrpc.ClientConn, txf Factory, msgs ...sdk.Msg, +) (*tx.SimulateResponse, uint64, error) { + txBytes, err := txf.BuildSimTx(msgs...) + if err != nil { + return nil, 0, err +} + txSvcClient := tx.NewServiceClient(clientCtx) + +simRes, err := txSvcClient.Simulate(context.Background(), &tx.SimulateRequest{ + TxBytes: txBytes, +}) + if err != nil { + return nil, 0, err +} + +return simRes, uint64(txf.GasAdjustment() * float64(simRes.GasInfo.GasUsed)), nil +} + +/ SignWithPrivKey signs a given tx with the given private key, and returns the +/ corresponding SignatureV2 if the signing is successful. +func SignWithPrivKey( + signMode signing.SignMode, signerData authsigning.SignerData, + txBuilder client.TxBuilder, priv cryptotypes.PrivKey, txConfig client.TxConfig, + accSeq uint64, +) (signing.SignatureV2, error) { + var sigV2 signing.SignatureV2 + + / Generate the bytes to be signed. 
+ signBytes, err := txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) + if err != nil { + return sigV2, err +} + + / Sign those bytes + signature, err := priv.Sign(signBytes) + if err != nil { + return sigV2, err +} + + / Construct the SignatureV2 struct + sigData := signing.SingleSignatureData{ + SignMode: signMode, + Signature: signature, +} + +sigV2 = signing.SignatureV2{ + PubKey: priv.PubKey(), + Data: &sigData, + Sequence: accSeq, +} + +return sigV2, nil +} + +/ countDirectSigners counts the number of DIRECT signers in a signature data. +func countDirectSigners(data signing.SignatureData) + +int { + switch data := data.(type) { + case *signing.SingleSignatureData: + if data.SignMode == signing.SignMode_SIGN_MODE_DIRECT { + return 1 +} + +return 0 + case *signing.MultiSignatureData: + directSigners := 0 + for _, d := range data.Signatures { + directSigners += countDirectSigners(d) +} + +return directSigners + default: + panic("unreachable case") +} +} + +/ checkMultipleSigners checks that there can be maximum one DIRECT signer in +/ a tx. +func checkMultipleSigners(tx authsigning.Tx) + +error { + directSigners := 0 + sigsV2, err := tx.GetSignaturesV2() + if err != nil { + return err +} + for _, sig := range sigsV2 { + directSigners += countDirectSigners(sig.Data) + if directSigners > 1 { + return sdkerrors.ErrNotSupported.Wrap("txs signed with CLI can have maximum 1 DIRECT signer") +} + +} + +return nil +} + +/ Sign signs a given tx with a named key. The bytes signed over are canconical. +/ The resulting signature will be added to the transaction builder overwriting the previous +/ ones if overwrite=true (otherwise, the signature will be appended). +/ Signing a transaction with mutltiple signers in the DIRECT mode is not supprted and will +/ return an error. +/ An error is returned upon failure. 
+func Sign(txf Factory, name string, txBuilder client.TxBuilder, overwriteSig bool) + +error { + if txf.keybase == nil { + return errors.New("keybase must be set prior to signing a transaction") +} + signMode := txf.signMode + if signMode == signing.SignMode_SIGN_MODE_UNSPECIFIED { + / use the SignModeHandler's default mode if unspecified + signMode = txf.txConfig.SignModeHandler().DefaultMode() +} + +k, err := txf.keybase.Key(name) + if err != nil { + return err +} + +pubKey, err := k.GetPubKey() + if err != nil { + return err +} + signerData := authsigning.SignerData{ + ChainID: txf.chainID, + AccountNumber: txf.accountNumber, + Sequence: txf.sequence, + PubKey: pubKey, + Address: sdk.AccAddress(pubKey.Address()).String(), +} + + / For SIGN_MODE_DIRECT, calling SetSignatures calls setSignerInfos on + / TxBuilder under the hood, and SignerInfos is needed to generated the + / sign bytes. This is the reason for setting SetSignatures here, with a + / nil signature. + / + / Note: this line is not needed for SIGN_MODE_LEGACY_AMINO, but putting it + / also doesn't affect its generated sign bytes, so for code's simplicity + / sake, we put it here. + sigData := signing.SingleSignatureData{ + SignMode: signMode, + Signature: nil, +} + sig := signing.SignatureV2{ + PubKey: pubKey, + Data: &sigData, + Sequence: txf.Sequence(), +} + +var prevSignatures []signing.SignatureV2 + if !overwriteSig { + prevSignatures, err = txBuilder.GetTx().GetSignaturesV2() + if err != nil { + return err +} + +} + / Overwrite or append signer infos. + var sigs []signing.SignatureV2 + if overwriteSig { + sigs = []signing.SignatureV2{ + sig +} + +} + +else { + sigs = append(sigs, prevSignatures...) + +sigs = append(sigs, sig) +} + if err := txBuilder.SetSignatures(sigs...); err != nil { + return err +} + if err := checkMultipleSigners(txBuilder.GetTx()); err != nil { + return err +} + + / Generate the bytes to be signed. 
+ bytesToSign, err := txf.txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) + if err != nil { + return err +} + + / Sign those bytes + sigBytes, _, err := txf.keybase.Sign(name, bytesToSign) + if err != nil { + return err +} + + / Construct the SignatureV2 struct + sigData = signing.SingleSignatureData{ + SignMode: signMode, + Signature: sigBytes, +} + +sig = signing.SignatureV2{ + PubKey: pubKey, + Data: &sigData, + Sequence: txf.Sequence(), +} + if overwriteSig { + err = txBuilder.SetSignatures(sig) +} + +else { + prevSignatures = append(prevSignatures, sig) + +err = txBuilder.SetSignatures(prevSignatures...) +} + if err != nil { + return fmt.Errorf("unable to set signatures on payload: %w", err) +} + + / Run optional preprocessing if specified. By default, this is unset + / and will return nil. + return txf.PreprocessTx(name, txBuilder) +} + +/ GasEstimateResponse defines a response definition for tx gas estimation. +type GasEstimateResponse struct { + GasEstimate uint64 `json:"gas_estimate" yaml:"gas_estimate"` +} + +func (gr GasEstimateResponse) + +String() + +string { + return fmt.Sprintf("gas estimate: %d", gr.GasEstimate) +} + +/ makeAuxSignerData generates an AuxSignerData from the client inputs. +func makeAuxSignerData(clientCtx client.Context, f Factory, msgs ...sdk.Msg) (tx.AuxSignerData, error) { + b := NewAuxTxBuilder() + +fromAddress, name, _, err := client.GetFromFields(clientCtx, clientCtx.Keyring, clientCtx.From) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetAddress(fromAddress.String()) + if clientCtx.Offline { + b.SetAccountNumber(f.accountNumber) + +b.SetSequence(f.sequence) +} + +else { + accNum, seq, err := clientCtx.AccountRetriever.GetAccountNumberSequence(clientCtx, fromAddress) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetAccountNumber(accNum) + +b.SetSequence(seq) +} + +err = b.SetMsgs(msgs...) 
+ if err != nil { + return tx.AuxSignerData{ +}, err +} + if f.tip != nil { + if _, err := sdk.AccAddressFromBech32(f.tip.Tipper); err != nil { + return tx.AuxSignerData{ +}, sdkerrors.ErrInvalidAddress.Wrap("tipper must be a bech32 address") +} + +b.SetTip(f.tip) +} + +err = b.SetSignMode(f.SignMode()) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +key, err := clientCtx.Keyring.Key(name) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +pub, err := key.GetPubKey() + if err != nil { + return tx.AuxSignerData{ +}, err +} + +err = b.SetPubKey(pub) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetChainID(clientCtx.ChainID) + +signBz, err := b.GetSignBytes() + if err != nil { + return tx.AuxSignerData{ +}, err +} + +sig, _, err := clientCtx.Keyring.Sign(name, signBz) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetSignature(sig) + +return b.GetAuxSignerData() +} +``` + +```go expandable +package tx + +import ( + + "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ wrapper is a wrapper around the tx.Tx proto.Message which retain the raw +/ body and auth_info bytes. +type wrapper struct { + cdc codec.Codec + + tx *tx.Tx + + / bodyBz represents the protobuf encoding of TxBody. This should be encoding + / from the client using TxRaw if the tx was decoded from the wire + bodyBz []byte + + / authInfoBz represents the protobuf encoding of TxBody. 
This should be encoding + / from the client using TxRaw if the tx was decoded from the wire + authInfoBz []byte + + txBodyHasUnknownNonCriticals bool +} + +var ( + _ authsigning.Tx = &wrapper{ +} + _ client.TxBuilder = &wrapper{ +} + _ tx.TipTx = &wrapper{ +} + _ ante.HasExtensionOptionsTx = &wrapper{ +} + _ ExtensionOptionsTxBuilder = &wrapper{ +} + _ tx.TipTx = &wrapper{ +} +) + +/ ExtensionOptionsTxBuilder defines a TxBuilder that can also set extensions. +type ExtensionOptionsTxBuilder interface { + client.TxBuilder + + SetExtensionOptions(...*codectypes.Any) + +SetNonCriticalExtensionOptions(...*codectypes.Any) +} + +func newBuilder(cdc codec.Codec) *wrapper { + return &wrapper{ + cdc: cdc, + tx: &tx.Tx{ + Body: &tx.TxBody{ +}, + AuthInfo: &tx.AuthInfo{ + Fee: &tx.Fee{ +}, +}, +}, +} +} + +func (w *wrapper) + +GetMsgs() []sdk.Msg { + return w.tx.GetMsgs() +} + +func (w *wrapper) + +ValidateBasic() + +error { + return w.tx.ValidateBasic() +} + +func (w *wrapper) + +getBodyBytes() []byte { + if len(w.bodyBz) == 0 { + / if bodyBz is empty, then marshal the body. bodyBz will generally + / be set to nil whenever SetBody is called so the result of calling + / this method should always return the correct bytes. Note that after + / decoding bodyBz is derived from TxRaw so that it matches what was + / transmitted over the wire + var err error + w.bodyBz, err = proto.Marshal(w.tx.Body) + if err != nil { + panic(err) +} + +} + +return w.bodyBz +} + +func (w *wrapper) + +getAuthInfoBytes() []byte { + if len(w.authInfoBz) == 0 { + / if authInfoBz is empty, then marshal the body. authInfoBz will generally + / be set to nil whenever SetAuthInfo is called so the result of calling + / this method should always return the correct bytes. 
Note that after + / decoding authInfoBz is derived from TxRaw so that it matches what was + / transmitted over the wire + var err error + w.authInfoBz, err = proto.Marshal(w.tx.AuthInfo) + if err != nil { + panic(err) +} + +} + +return w.authInfoBz +} + +func (w *wrapper) + +GetSigners() []sdk.AccAddress { + return w.tx.GetSigners() +} + +func (w *wrapper) + +GetPubKeys() ([]cryptotypes.PubKey, error) { + signerInfos := w.tx.AuthInfo.SignerInfos + pks := make([]cryptotypes.PubKey, len(signerInfos)) + for i, si := range signerInfos { + / NOTE: it is okay to leave this nil if there is no PubKey in the SignerInfo. + / PubKey's can be left unset in SignerInfo. + if si.PublicKey == nil { + continue +} + pkAny := si.PublicKey.GetCachedValue() + +pk, ok := pkAny.(cryptotypes.PubKey) + if ok { + pks[i] = pk +} + +else { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "Expecting PubKey, got: %T", pkAny) +} + +} + +return pks, nil +} + +func (w *wrapper) + +GetGas() + +uint64 { + return w.tx.AuthInfo.Fee.GasLimit +} + +func (w *wrapper) + +GetFee() + +sdk.Coins { + return w.tx.AuthInfo.Fee.Amount +} + +func (w *wrapper) + +FeePayer() + +sdk.AccAddress { + feePayer := w.tx.AuthInfo.Fee.Payer + if feePayer != "" { + return sdk.MustAccAddressFromBech32(feePayer) +} + / use first signer as default if no payer specified + return w.GetSigners()[0] +} + +func (w *wrapper) + +FeeGranter() + +sdk.AccAddress { + feePayer := w.tx.AuthInfo.Fee.Granter + if feePayer != "" { + return sdk.MustAccAddressFromBech32(feePayer) +} + +return nil +} + +func (w *wrapper) + +GetTip() *tx.Tip { + return w.tx.AuthInfo.Tip +} + +func (w *wrapper) + +GetMemo() + +string { + return w.tx.Body.Memo +} + +/ GetTimeoutHeight returns the transaction's timeout height (if set). 
+func (w *wrapper) + +GetTimeoutHeight() + +uint64 { + return w.tx.Body.TimeoutHeight +} + +func (w *wrapper) + +GetSignaturesV2() ([]signing.SignatureV2, error) { + signerInfos := w.tx.AuthInfo.SignerInfos + sigs := w.tx.Signatures + pubKeys, err := w.GetPubKeys() + if err != nil { + return nil, err +} + n := len(signerInfos) + res := make([]signing.SignatureV2, n) + for i, si := range signerInfos { + / handle nil signatures (in case of simulation) + if si.ModeInfo == nil { + res[i] = signing.SignatureV2{ + PubKey: pubKeys[i], +} + +} + +else { + var err error + sigData, err := ModeInfoAndSigToSignatureData(si.ModeInfo, sigs[i]) + if err != nil { + return nil, err +} + / sequence number is functionally a transaction nonce and referred to as such in the SDK + nonce := si.GetSequence() + +res[i] = signing.SignatureV2{ + PubKey: pubKeys[i], + Data: sigData, + Sequence: nonce, +} + + +} + +} + +return res, nil +} + +func (w *wrapper) + +SetMsgs(msgs ...sdk.Msg) + +error { + anys, err := tx.SetMsgs(msgs) + if err != nil { + return err +} + +w.tx.Body.Messages = anys + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil + + return nil +} + +/ SetTimeoutHeight sets the transaction's height timeout. 
+func (w *wrapper) + +SetTimeoutHeight(height uint64) { + w.tx.Body.TimeoutHeight = height + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil +} + +func (w *wrapper) + +SetMemo(memo string) { + w.tx.Body.Memo = memo + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil +} + +func (w *wrapper) + +SetGasLimit(limit uint64) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.GasLimit = limit + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeeAmount(coins sdk.Coins) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Amount = coins + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetTip(tip *tx.Tip) { + w.tx.AuthInfo.Tip = tip + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeePayer(feePayer sdk.AccAddress) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Payer = feePayer.String() + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeeGranter(feeGranter sdk.AccAddress) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Granter = feeGranter.String() + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetSignatures(signatures ...signing.SignatureV2) + +error { + n := len(signatures) + signerInfos := make([]*tx.SignerInfo, n) + rawSigs := make([][]byte, n) + for i, sig := range signatures { + var modeInfo *tx.ModeInfo + modeInfo, rawSigs[i] = SignatureDataToModeInfoAndSig(sig.Data) + +any, 
err := codectypes.NewAnyWithValue(sig.PubKey) + if err != nil { + return err +} + +signerInfos[i] = &tx.SignerInfo{ + PublicKey: any, + ModeInfo: modeInfo, + Sequence: sig.Sequence, +} + +} + +w.setSignerInfos(signerInfos) + +w.setSignatures(rawSigs) + +return nil +} + +func (w *wrapper) + +setSignerInfos(infos []*tx.SignerInfo) { + w.tx.AuthInfo.SignerInfos = infos + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +setSignerInfoAtIndex(index int, info *tx.SignerInfo) { + if w.tx.AuthInfo.SignerInfos == nil { + w.tx.AuthInfo.SignerInfos = make([]*tx.SignerInfo, len(w.GetSigners())) +} + +w.tx.AuthInfo.SignerInfos[index] = info + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +setSignatures(sigs [][]byte) { + w.tx.Signatures = sigs +} + +func (w *wrapper) + +setSignatureAtIndex(index int, sig []byte) { + if w.tx.Signatures == nil { + w.tx.Signatures = make([][]byte, len(w.GetSigners())) +} + +w.tx.Signatures[index] = sig +} + +func (w *wrapper) + +GetTx() + +authsigning.Tx { + return w +} + +func (w *wrapper) + +GetProtoTx() *tx.Tx { + return w.tx +} + +/ Deprecated: AsAny extracts proto Tx and wraps it into Any. +/ NOTE: You should probably use `GetProtoTx` if you want to serialize the transaction. +func (w *wrapper) + +AsAny() *codectypes.Any { + return codectypes.UnsafePackAny(w.tx) +} + +/ WrapTx creates a TxBuilder wrapper around a tx.Tx proto message. 
+func WrapTx(protoTx *tx.Tx) + +client.TxBuilder { + return &wrapper{ + tx: protoTx, +} +} + +func (w *wrapper) + +GetExtensionOptions() []*codectypes.Any { + return w.tx.Body.ExtensionOptions +} + +func (w *wrapper) + +GetNonCriticalExtensionOptions() []*codectypes.Any { + return w.tx.Body.NonCriticalExtensionOptions +} + +func (w *wrapper) + +SetExtensionOptions(extOpts ...*codectypes.Any) { + w.tx.Body.ExtensionOptions = extOpts + w.bodyBz = nil +} + +func (w *wrapper) + +SetNonCriticalExtensionOptions(extOpts ...*codectypes.Any) { + w.tx.Body.NonCriticalExtensionOptions = extOpts + w.bodyBz = nil +} + +func (w *wrapper) + +AddAuxSignerData(data tx.AuxSignerData) + +error { + err := data.ValidateBasic() + if err != nil { + return err +} + +w.bodyBz = data.SignDoc.BodyBytes + + var body tx.TxBody + err = w.cdc.Unmarshal(w.bodyBz, &body) + if err != nil { + return err +} + if w.tx.Body.Memo != "" && w.tx.Body.Memo != body.Memo { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has memo %s, got %s in AuxSignerData", w.tx.Body.Memo, body.Memo) +} + if w.tx.Body.TimeoutHeight != 0 && w.tx.Body.TimeoutHeight != body.TimeoutHeight { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has timeout height %d, got %d in AuxSignerData", w.tx.Body.TimeoutHeight, body.TimeoutHeight) +} + if len(w.tx.Body.ExtensionOptions) != 0 { + if len(w.tx.Body.ExtensionOptions) != len(body.ExtensionOptions) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d extension options, got %d in AuxSignerData", len(w.tx.Body.ExtensionOptions), len(body.ExtensionOptions)) +} + for i, o := range w.tx.Body.ExtensionOptions { + if !o.Equal(body.ExtensionOptions[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.ExtensionOptions[i]) +} + +} + +} + if len(w.tx.Body.NonCriticalExtensionOptions) != 0 { + if len(w.tx.Body.NonCriticalExtensionOptions) != len(body.NonCriticalExtensionOptions) { + return 
sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d non-critical extension options, got %d in AuxSignerData", len(w.tx.Body.NonCriticalExtensionOptions), len(body.NonCriticalExtensionOptions)) +} + for i, o := range w.tx.Body.NonCriticalExtensionOptions { + if !o.Equal(body.NonCriticalExtensionOptions[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has non-critical extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.NonCriticalExtensionOptions[i]) +} + +} + +} + if len(w.tx.Body.Messages) != 0 { + if len(w.tx.Body.Messages) != len(body.Messages) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d Msgs, got %d in AuxSignerData", len(w.tx.Body.Messages), len(body.Messages)) +} + for i, o := range w.tx.Body.Messages { + if !o.Equal(body.Messages[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has Msg %+v at index %d, got %+v in AuxSignerData", o, i, body.Messages[i]) +} + +} + +} + if w.tx.AuthInfo.Tip != nil && data.SignDoc.Tip != nil { + if !w.tx.AuthInfo.Tip.Amount.IsEqual(data.SignDoc.Tip.Amount) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tip %+v, got %+v in AuxSignerData", w.tx.AuthInfo.Tip.Amount, data.SignDoc.Tip.Amount) +} + if w.tx.AuthInfo.Tip.Tipper != data.SignDoc.Tip.Tipper { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tipper %s, got %s in AuxSignerData", w.tx.AuthInfo.Tip.Tipper, data.SignDoc.Tip.Tipper) +} + +} + +w.SetMemo(body.Memo) + +w.SetTimeoutHeight(body.TimeoutHeight) + +w.SetExtensionOptions(body.ExtensionOptions...) + +w.SetNonCriticalExtensionOptions(body.NonCriticalExtensionOptions...) + msgs := make([]sdk.Msg, len(body.Messages)) + for i, msgAny := range body.Messages { + msgs[i] = msgAny.GetCachedValue().(sdk.Msg) +} + +w.SetMsgs(msgs...) + +w.SetTip(data.GetSignDoc().GetTip()) + + / Get the aux signer's index in GetSigners. 
+ signerIndex := -1 + for i, signer := range w.GetSigners() { + if signer.String() == data.Address { + signerIndex = i +} + +} + if signerIndex < 0 { + return sdkerrors.ErrLogic.Wrapf("address %s is not a signer", data.Address) +} + +w.setSignerInfoAtIndex(signerIndex, &tx.SignerInfo{ + PublicKey: data.SignDoc.PublicKey, + ModeInfo: &tx.ModeInfo{ + Sum: &tx.ModeInfo_Single_{ + Single: &tx.ModeInfo_Single{ + Mode: data.Mode +}}}, + Sequence: data.SignDoc.Sequence, +}) + +w.setSignatureAtIndex(signerIndex, data.Sig) + +return nil +} +``` + +```protobuf +// Fee includes the amount of coins paid in fees and the maximum +// gas to be used by the transaction. The ratio yields an effective "gasprice", +// which must be above some miminum to be accepted into the mempool. +message Fee { + // amount is the amount of coins to be paid as a fee + repeated cosmos.base.v1beta1.Coin amount = 1 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; + + // gas_limit is the maximum gas that can be used in transaction processing + // before an out of gas error occurs + uint64 gas_limit = 2; + + // if unset, the first signer is responsible for paying the fees. If set, the specified account must pay the fees. + // the payer must be a tx signer (and thus have signed this field in AuthInfo). + // setting this field does *not* change the ordering of required signers for the transaction. + string payer = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // if set, the fee payer (either the first signer or the value of the payer field) requests that a fee grant be used + // to pay fees instead of the fee payer's own balance. 
If an appropriate fee grant does not exist or the chain does + // not support fee grants, this will fail + string granter = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +Example cmd: + +```go +./simd tx gov submit-proposal --title="Test Proposal" --description="My awesome proposal" --type="Text" --from validator-key --fee-granter=cosmos1xh44hxt7spr67hqaa7nyx5gnutrz5fraw6grxn --chain-id=testnet --fees="10stake" +``` + +### Granted Fee Deductions + +Fees are deducted from grants in the `x/auth` ante handler. To learn more about how ante handlers work, read the [Auth Module AnteHandlers Guide](/docs/sdk/next/documentation/module-system/auth#antehandlers). + +### Gas + +In order to prevent DoS attacks, using a filtered `x/feegrant` incurs gas. The SDK must assure that the `grantee`'s transactions all conform to the filter set by the `granter`. The SDK does this by iterating over the allowed messages in the filter and charging 10 gas per filtered message. The SDK will then iterate over the messages being sent by the `grantee` to ensure the messages adhere to the filter, also charging 10 gas per message. The SDK will stop iterating and fail the transaction if it finds a message that does not conform to the filter. + +**WARNING**: The gas is charged against the granted allowance. Ensure your messages conform to the filter, if any, before sending transactions using your allowance. + +### Pruning + +A queue in the state maintained with the prefix of expiration of the grants and checks them on EndBlock with the current block time for every block to prune. + +## State + +### FeeAllowance + +Fee Allowances are identified by combining `Grantee` (the account address of fee allowance grantee) with the `Granter` (the account address of fee allowance granter). 
+ +Fee allowance grants are stored in the state as follows: + +* Grant: `0x00 | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> ProtocolBuffer(Grant)` + +```go expandable +/ Code generated by protoc-gen-gogo. DO NOT EDIT. +/ source: cosmos/feegrant/v1beta1/feegrant.proto + +package feegrant + +import ( + + fmt "fmt" + _ "github.com/cosmos/cosmos-proto" + types1 "github.com/cosmos/cosmos-sdk/codec/types" + github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types" + types "github.com/cosmos/cosmos-sdk/types" + _ "github.com/cosmos/cosmos-sdk/types/tx/amino" + _ "github.com/cosmos/gogoproto/gogoproto" + proto "github.com/cosmos/gogoproto/proto" + github_com_cosmos_gogoproto_types "github.com/cosmos/gogoproto/types" + _ "google.golang.org/protobuf/types/known/durationpb" + _ "google.golang.org/protobuf/types/known/timestamppb" + io "io" + math "math" + math_bits "math/bits" + time "time" +) + +/ Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf +var _ = time.Kitchen + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the proto package it is being compiled against. +/ A compilation error at this line likely means your copy of the +/ proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 / please upgrade the proto package + +/ BasicAllowance implements Allowance with a one-time grant of coins +/ that optionally expires. The grantee can use up to SpendLimit to cover fees. +type BasicAllowance struct { + / spend_limit specifies the maximum amount of coins that can be spent + / by this allowance and will be updated as coins are spent. If it is + / empty, there is no spend limit and any amount of coins can be spent. 
+ SpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,1,rep,name=spend_limit,json=spendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"spend_limit"` + / expiration specifies an optional time when this allowance expires + Expiration *time.Time `protobuf:"bytes,2,opt,name=expiration,proto3,stdtime" json:"expiration,omitempty"` +} + +func (m *BasicAllowance) + +Reset() { *m = BasicAllowance{ +} +} + +func (m *BasicAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*BasicAllowance) + +ProtoMessage() { +} + +func (*BasicAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{0 +} +} + +func (m *BasicAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *BasicAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_BasicAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *BasicAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_BasicAllowance.Merge(m, src) +} + +func (m *BasicAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *BasicAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_BasicAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_BasicAllowance proto.InternalMessageInfo + +func (m *BasicAllowance) + +GetSpendLimit() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.SpendLimit +} + +return nil +} + +func (m *BasicAllowance) + +GetExpiration() *time.Time { + if m != nil { + return m.Expiration +} + +return nil +} + +/ PeriodicAllowance extends Allowance to allow for both a maximum cap, +/ as well as a limit per time period. 
+type PeriodicAllowance struct { + / basic specifies a struct of `BasicAllowance` + Basic BasicAllowance `protobuf:"bytes,1,opt,name=basic,proto3" json:"basic"` + / period specifies the time duration in which period_spend_limit coins can + / be spent before that allowance is reset + Period time.Duration `protobuf:"bytes,2,opt,name=period,proto3,stdduration" json:"period"` + / period_spend_limit specifies the maximum number of coins that can be spent + / in the period + PeriodSpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=period_spend_limit,json=periodSpendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_spend_limit"` + / period_can_spend is the number of coins left to be spent before the period_reset time + PeriodCanSpend github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,4,rep,name=period_can_spend,json=periodCanSpend,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_can_spend"` + / period_reset is the time at which this period resets and a new one begins, + / it is calculated from the start time of the first transaction after the + / last period ended + PeriodReset time.Time `protobuf:"bytes,5,opt,name=period_reset,json=periodReset,proto3,stdtime" json:"period_reset"` +} + +func (m *PeriodicAllowance) + +Reset() { *m = PeriodicAllowance{ +} +} + +func (m *PeriodicAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*PeriodicAllowance) + +ProtoMessage() { +} + +func (*PeriodicAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{1 +} +} + +func (m *PeriodicAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *PeriodicAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_PeriodicAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + 
return nil, err +} + +return b[:n], nil +} +} + +func (m *PeriodicAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_PeriodicAllowance.Merge(m, src) +} + +func (m *PeriodicAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *PeriodicAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_PeriodicAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_PeriodicAllowance proto.InternalMessageInfo + +func (m *PeriodicAllowance) + +GetBasic() + +BasicAllowance { + if m != nil { + return m.Basic +} + +return BasicAllowance{ +} +} + +func (m *PeriodicAllowance) + +GetPeriod() + +time.Duration { + if m != nil { + return m.Period +} + +return 0 +} + +func (m *PeriodicAllowance) + +GetPeriodSpendLimit() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.PeriodSpendLimit +} + +return nil +} + +func (m *PeriodicAllowance) + +GetPeriodCanSpend() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.PeriodCanSpend +} + +return nil +} + +func (m *PeriodicAllowance) + +GetPeriodReset() + +time.Time { + if m != nil { + return m.PeriodReset +} + +return time.Time{ +} +} + +/ AllowedMsgAllowance creates allowance only for specified message types. +type AllowedMsgAllowance struct { + / allowance can be any of basic and periodic fee allowance. + Allowance *types1.Any `protobuf:"bytes,1,opt,name=allowance,proto3" json:"allowance,omitempty"` + / allowed_messages are the messages for which the grantee has the access. 
+ AllowedMessages []string `protobuf:"bytes,2,rep,name=allowed_messages,json=allowedMessages,proto3" json:"allowed_messages,omitempty"` +} + +func (m *AllowedMsgAllowance) + +Reset() { *m = AllowedMsgAllowance{ +} +} + +func (m *AllowedMsgAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*AllowedMsgAllowance) + +ProtoMessage() { +} + +func (*AllowedMsgAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{2 +} +} + +func (m *AllowedMsgAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *AllowedMsgAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AllowedMsgAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *AllowedMsgAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_AllowedMsgAllowance.Merge(m, src) +} + +func (m *AllowedMsgAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *AllowedMsgAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_AllowedMsgAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_AllowedMsgAllowance proto.InternalMessageInfo + +/ Grant is stored in the KVStore to record a grant with full context +type Grant struct { + / granter is the address of the user granting an allowance of their funds. + Granter string `protobuf:"bytes,1,opt,name=granter,proto3" json:"granter,omitempty"` + / grantee is the address of the user being granted an allowance of another user's funds. + Grantee string `protobuf:"bytes,2,opt,name=grantee,proto3" json:"grantee,omitempty"` + / allowance can be any of basic, periodic, allowed fee allowance. 
+ Allowance *types1.Any `protobuf:"bytes,3,opt,name=allowance,proto3" json:"allowance,omitempty"` +} + +func (m *Grant) + +Reset() { *m = Grant{ +} +} + +func (m *Grant) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*Grant) + +ProtoMessage() { +} + +func (*Grant) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{3 +} +} + +func (m *Grant) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *Grant) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_Grant.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *Grant) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_Grant.Merge(m, src) +} + +func (m *Grant) + +XXX_Size() + +int { + return m.Size() +} + +func (m *Grant) + +XXX_DiscardUnknown() { + xxx_messageInfo_Grant.DiscardUnknown(m) +} + +var xxx_messageInfo_Grant proto.InternalMessageInfo + +func (m *Grant) + +GetGranter() + +string { + if m != nil { + return m.Granter +} + +return "" +} + +func (m *Grant) + +GetGrantee() + +string { + if m != nil { + return m.Grantee +} + +return "" +} + +func (m *Grant) + +GetAllowance() *types1.Any { + if m != nil { + return m.Allowance +} + +return nil +} + +func init() { + proto.RegisterType((*BasicAllowance)(nil), "cosmos.feegrant.v1beta1.BasicAllowance") + +proto.RegisterType((*PeriodicAllowance)(nil), "cosmos.feegrant.v1beta1.PeriodicAllowance") + +proto.RegisterType((*AllowedMsgAllowance)(nil), "cosmos.feegrant.v1beta1.AllowedMsgAllowance") + +proto.RegisterType((*Grant)(nil), "cosmos.feegrant.v1beta1.Grant") +} + +func init() { + proto.RegisterFile("cosmos/feegrant/v1beta1/feegrant.proto", fileDescriptor_7279582900c30aea) +} + +var fileDescriptor_7279582900c30aea = []byte{ + / 639 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x02, 0xff, 0xb4, 0x55, 0x3f, 0x6f, 0xd3, 0x40, + 0x14, 0x8f, 0x9b, 0xb6, 0x28, 0x17, 0x28, 0xad, 0xa9, 0x84, 0x53, 0x21, 0xbb, 0x8a, 0x04, 0x4d, + 0x2b, 0xd5, 0x56, 0x8b, 0x58, 0x3a, 0x35, 0x2e, 0xa2, 0x80, 0x5a, 0xa9, 0x72, 0x99, 0x90, 0x50, + 0x74, 0xb6, 0xaf, 0xe6, 0x44, 0xec, 0x33, 0x3e, 0x17, 0x1a, 0x06, 0x66, 0xc4, 0x80, 0x32, 0x32, + 0x32, 0x22, 0xa6, 0x0e, 0xe5, 0x3b, 0x54, 0x0c, 0xa8, 0x62, 0x62, 0x22, 0x28, 0x19, 0x3a, 0xf3, + 0x0d, 0x90, 0xef, 0xce, 0x8e, 0x9b, 0x50, 0x68, 0x25, 0xba, 0x24, 0x77, 0xef, 0xde, 0xfb, 0xfd, + 0x79, 0xef, 0x45, 0x01, 0xb7, 0x1c, 0x42, 0x7d, 0x42, 0x8d, 0x1d, 0x84, 0xbc, 0x08, 0x06, 0xb1, + 0xf1, 0x62, 0xc9, 0x46, 0x31, 0x5c, 0xca, 0x02, 0x7a, 0x18, 0x91, 0x98, 0xc8, 0xd7, 0x79, 0x9e, + 0x9e, 0x85, 0x45, 0xde, 0xcc, 0xb4, 0x47, 0x3c, 0xc2, 0x72, 0x8c, 0xe4, 0xc4, 0xd3, 0x67, 0x2a, + 0x1e, 0x21, 0x5e, 0x13, 0x19, 0xec, 0x66, 0xef, 0xee, 0x18, 0x30, 0x68, 0xa5, 0x4f, 0x1c, 0xa9, + 0xc1, 0x6b, 0x04, 0x2c, 0x7f, 0x52, 0x85, 0x18, 0x1b, 0x52, 0x94, 0x09, 0x71, 0x08, 0x0e, 0xc4, + 0xfb, 0x14, 0xf4, 0x71, 0x40, 0x0c, 0xf6, 0x29, 0x42, 0xda, 0x20, 0x51, 0x8c, 0x7d, 0x44, 0x63, + 0xe8, 0x87, 0x29, 0xe6, 0x60, 0x82, 0xbb, 0x1b, 0xc1, 0x18, 0x13, 0x81, 0x59, 0x7d, 0x37, 0x02, + 0x26, 0x4c, 0x48, 0xb1, 0x53, 0x6f, 0x36, 0xc9, 0x4b, 0x18, 0x38, 0x48, 0x7e, 0x0e, 0xca, 0x34, + 0x44, 0x81, 0xdb, 0x68, 0x62, 0x1f, 0xc7, 0x8a, 0x34, 0x5b, 0xac, 0x95, 0x97, 0x2b, 0xba, 0x90, + 0x9a, 0x88, 0x4b, 0xdd, 0xeb, 0x6b, 0x04, 0x07, 0xe6, 0x9d, 0xc3, 0x1f, 0x5a, 0xe1, 0x53, 0x47, + 0xab, 0x79, 0x38, 0x7e, 0xba, 0x6b, 0xeb, 0x0e, 0xf1, 0x85, 0x2f, 0xf1, 0xb5, 0x48, 0xdd, 0x67, + 0x46, 0xdc, 0x0a, 0x11, 0x65, 0x05, 0xf4, 0xe3, 0xf1, 0xfe, 0x82, 0x64, 0x01, 0x46, 0xb2, 0x91, + 0x70, 0xc8, 0xab, 0x00, 0xa0, 0xbd, 0x10, 0x73, 0x65, 0xca, 0xc8, 0xac, 0x54, 0x2b, 0x2f, 0xcf, + 0xe8, 0x5c, 0xba, 0x9e, 0x4a, 0xd7, 0x1f, 0xa5, 0xde, 0xcc, 0xd1, 0x76, 0x47, 0x93, 0xac, 0x5c, + 0xcd, 0xca, 0xfa, 0x97, 0x83, 0xc5, 0x9b, 0xa7, 0x0c, 0x49, 0xbf, 0x87, 0x50, 
0x66, 0xef, 0xc1, + 0xdb, 0xe3, 0xfd, 0x85, 0x4a, 0x4e, 0xd8, 0x49, 0xf7, 0xd5, 0xcf, 0xa3, 0x60, 0x6a, 0x0b, 0x45, + 0x98, 0xb8, 0xf9, 0x9e, 0xdc, 0x07, 0x63, 0x76, 0x92, 0xa7, 0x48, 0x4c, 0xdb, 0x9c, 0x7e, 0x1a, + 0xd5, 0x49, 0x34, 0xb3, 0x94, 0xf4, 0x86, 0xfb, 0xe5, 0x00, 0xf2, 0x2a, 0x18, 0x0f, 0x19, 0xbc, + 0xb0, 0x59, 0x19, 0xb2, 0x79, 0x57, 0x4c, 0xc8, 0xbc, 0x92, 0x14, 0xbf, 0xef, 0x68, 0x12, 0x07, + 0x10, 0x75, 0xf2, 0x6b, 0x20, 0xf3, 0x53, 0x23, 0x3f, 0xa6, 0xe2, 0x05, 0x8d, 0x69, 0x92, 0x73, + 0x6d, 0xf7, 0x87, 0xf5, 0x0a, 0x88, 0x58, 0xc3, 0x81, 0x01, 0xd7, 0xa0, 0x8c, 0x5e, 0x10, 0xfb, + 0x04, 0x67, 0x5a, 0x83, 0x01, 0x13, 0x20, 0x6f, 0x80, 0xcb, 0x82, 0x3b, 0x42, 0x14, 0xc5, 0xca, + 0xd8, 0x3f, 0x57, 0x85, 0x35, 0xb1, 0x9d, 0x35, 0xb1, 0xcc, 0xcb, 0xad, 0xa4, 0x7a, 0xe5, 0xe1, + 0xb9, 0x96, 0xe6, 0x46, 0x4e, 0xe8, 0xd0, 0x86, 0x54, 0x7f, 0x49, 0xe0, 0x1a, 0xbb, 0x21, 0x77, + 0x93, 0x7a, 0xfd, 0xcd, 0x79, 0x02, 0x4a, 0x30, 0xbd, 0x88, 0xed, 0x99, 0x1e, 0x92, 0x5b, 0x0f, + 0x5a, 0xe6, 0xfc, 0x99, 0xc5, 0x58, 0x7d, 0x44, 0x79, 0x1e, 0x4c, 0x42, 0xce, 0xda, 0xf0, 0x11, + 0xa5, 0xd0, 0x43, 0x54, 0x19, 0x99, 0x2d, 0xd6, 0x4a, 0xd6, 0x55, 0x11, 0xdf, 0x14, 0xe1, 0x95, + 0xad, 0x37, 0x1f, 0xb4, 0xc2, 0xb9, 0x1c, 0xab, 0x39, 0xc7, 0x7f, 0xf0, 0x56, 0xfd, 0x2a, 0x81, + 0xb1, 0xf5, 0x04, 0x42, 0x5e, 0x06, 0x97, 0x18, 0x16, 0x8a, 0x98, 0xc7, 0x92, 0xa9, 0x7c, 0x3b, + 0x58, 0x9c, 0x16, 0x44, 0x75, 0xd7, 0x8d, 0x10, 0xa5, 0xdb, 0x71, 0x84, 0x03, 0xcf, 0x4a, 0x13, + 0xfb, 0x35, 0x88, 0xfd, 0x14, 0xce, 0x50, 0x33, 0xd0, 0xcd, 0xe2, 0xff, 0xee, 0xa6, 0x59, 0x3f, + 0xec, 0xaa, 0xd2, 0x51, 0x57, 0x95, 0x7e, 0x76, 0x55, 0xa9, 0xdd, 0x53, 0x0b, 0x47, 0x3d, 0xb5, + 0xf0, 0xbd, 0xa7, 0x16, 0x1e, 0xcf, 0xfd, 0x75, 0x6f, 0xf7, 0xb2, 0xff, 0x0b, 0x7b, 0x9c, 0xc9, + 0xb8, 0xfd, 0x3b, 0x00, 0x00, 0xff, 0xff, 0xe4, 0x3d, 0x09, 0x1d, 0x5a, 0x06, 0x00, 0x00, +} + +func (m *BasicAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, 
err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *BasicAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *BasicAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Expiration != nil { + n1, err1 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(*m.Expiration, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration):]) + if err1 != nil { + return 0, err1 +} + +i -= n1 + i = encodeVarintFeegrant(dAtA, i, uint64(n1)) + +i-- + dAtA[i] = 0x12 +} + if len(m.SpendLimit) > 0 { + for iNdEx := len(m.SpendLimit) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.SpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +} + +return len(dAtA) - i, nil +} + +func (m *PeriodicAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *PeriodicAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PeriodicAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + n2, err2 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(m.PeriodReset, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset):]) + if err2 != nil { + return 0, err2 +} + +i -= n2 + i = encodeVarintFeegrant(dAtA, i, uint64(n2)) + +i-- + dAtA[i] = 0x2a + if len(m.PeriodCanSpend) > 0 { + for iNdEx := len(m.PeriodCanSpend) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PeriodCanSpend[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size 
+ i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x22 +} + +} + if len(m.PeriodSpendLimit) > 0 { + for iNdEx := len(m.PeriodSpendLimit) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PeriodSpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + +} + +n3, err3 := github_com_cosmos_gogoproto_types.StdDurationMarshalTo(m.Period, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period):]) + if err3 != nil { + return 0, err3 +} + +i -= n3 + i = encodeVarintFeegrant(dAtA, i, uint64(n3)) + +i-- + dAtA[i] = 0x12 + { + size, err := m.Basic.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *AllowedMsgAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *AllowedMsgAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AllowedMsgAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.AllowedMessages) > 0 { + for iNdEx := len(m.AllowedMessages) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.AllowedMessages[iNdEx]) + +copy(dAtA[i:], m.AllowedMessages[iNdEx]) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.AllowedMessages[iNdEx]))) + +i-- + dAtA[i] = 0x12 +} + +} + if m.Allowance != nil { + { + size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *Grant) + +Marshal() (dAtA []byte, err error) { + size := 
m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *Grant) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Grant) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Allowance != nil { + { + size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + if len(m.Grantee) > 0 { + i -= len(m.Grantee) + +copy(dAtA[i:], m.Grantee) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Grantee))) + +i-- + dAtA[i] = 0x12 +} + if len(m.Granter) > 0 { + i -= len(m.Granter) + +copy(dAtA[i:], m.Granter) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Granter))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func encodeVarintFeegrant(dAtA []byte, offset int, v uint64) + +int { + offset -= sovFeegrant(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + +v >>= 7 + offset++ +} + +dAtA[offset] = uint8(v) + +return base +} + +func (m *BasicAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if len(m.SpendLimit) > 0 { + for _, e := range m.SpendLimit { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + if m.Expiration != nil { + l = github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration) + +n += 1 + l + sovFeegrant(uint64(l)) +} + +return n +} + +func (m *PeriodicAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = m.Basic.Size() + +n += 1 + l + sovFeegrant(uint64(l)) + +l = github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period) + +n += 1 + l + sovFeegrant(uint64(l)) + if len(m.PeriodSpendLimit) > 0 { + for _, e := range m.PeriodSpendLimit { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} 
+ +} + if len(m.PeriodCanSpend) > 0 { + for _, e := range m.PeriodCanSpend { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + +l = github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset) + +n += 1 + l + sovFeegrant(uint64(l)) + +return n +} + +func (m *AllowedMsgAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if m.Allowance != nil { + l = m.Allowance.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + if len(m.AllowedMessages) > 0 { + for _, s := range m.AllowedMessages { + l = len(s) + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + +return n +} + +func (m *Grant) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Granter) + if l > 0 { + n += 1 + l + sovFeegrant(uint64(l)) +} + +l = len(m.Grantee) + if l > 0 { + n += 1 + l + sovFeegrant(uint64(l)) +} + if m.Allowance != nil { + l = m.Allowance.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +return n +} + +func sovFeegrant(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} + +func sozFeegrant(x uint64) (n int) { + return sovFeegrant(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +func (m *BasicAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BasicAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: BasicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SpendLimit", wireType) +} + +var msglen int + for shift := uint(0); ; 
shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.SpendLimit = append(m.SpendLimit, types.Coin{ +}) + if err := m.SpendLimit[len(m.SpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Expiration", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Expiration == nil { + m.Expiration = new(time.Time) +} + if err := github_com_cosmos_gogoproto_types.StdTimeUnmarshal(m.Expiration, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *PeriodicAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 
io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PeriodicAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: PeriodicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Basic", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := m.Basic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Period", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := github_com_cosmos_gogoproto_types.StdDurationUnmarshal(&m.Period, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodSpendLimit", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { 
+ return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.PeriodSpendLimit = append(m.PeriodSpendLimit, types.Coin{ +}) + if err := m.PeriodSpendLimit[len(m.PeriodSpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodCanSpend", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.PeriodCanSpend = append(m.PeriodCanSpend, types.Coin{ +}) + if err := m.PeriodCanSpend[len(m.PeriodCanSpend)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodReset", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := 
github_com_cosmos_gogoproto_types.StdTimeUnmarshal(&m.PeriodReset, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *AllowedMsgAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AllowedMsgAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: AllowedMsgAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Allowance == nil { + m.Allowance = &types1.Any{ +} + +} + if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field AllowedMessages", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.AllowedMessages = append(m.AllowedMessages, string(dAtA[iNdEx:postIndex])) + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *Grant) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Grant: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: Grant: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Granter", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift 
+ if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Granter = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Grantee", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Grantee = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Allowance == nil { + m.Allowance = &types1.Any{ +} + +} + if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return 
io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func skipFeegrant(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + +iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break +} + +} + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + if length < 0 { + return 0, ErrInvalidLengthFeegrant +} + +iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupFeegrant +} + +depth-- + case 5: + iNdEx += 4 + default: + return 0, fmt.Errorf("proto: illegal wireType %d", wireType) +} + if iNdEx < 0 { + return 0, ErrInvalidLengthFeegrant +} + if depth == 0 { + return iNdEx, nil +} + +} + +return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthFeegrant = fmt.Errorf("proto: negative length found during unmarshaling") + +ErrIntOverflowFeegrant = fmt.Errorf("proto: integer overflow") + +ErrUnexpectedEndOfGroupFeegrant = fmt.Errorf("proto: unexpected end of group") +) +``` + +### FeeAllowanceQueue + +Fee Allowances queue items are identified by combining the `FeeAllowancePrefixQueue` (i.e., 0x01), `expiration`, `grantee` (the account address of fee allowance grantee), `granter` (the account address of fee allowance granter). 
The `EndBlocker` checks the `FeeAllowanceQueue` state for expired grants and, if any are found, prunes them from the `FeeAllowance` store. + +Fee allowance queue keys are stored in the state as follows: + +* Grant: `0x01 | expiration_bytes | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> EmptyBytes` + +## Messages + +### Msg/GrantAllowance + +A fee allowance grant is created with the `MsgGrantAllowance` message. + +```protobuf +// MsgGrantAllowance adds permission for Grantee to spend up to Allowance +// of fees from the account of Granter. +message MsgGrantAllowance { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgGrantAllowance"; + + // granter is the address of the user granting an allowance of their funds. + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // grantee is the address of the user being granted an allowance of another user's funds. + string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // allowance can be any of basic, periodic, allowed fee allowance. + google.protobuf.Any allowance = 3 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"]; +} +``` + +### Msg/RevokeAllowance + +An existing fee allowance grant can be removed with the `MsgRevokeAllowance` message. + +```protobuf +// MsgGrantAllowanceResponse defines the Msg/GrantAllowanceResponse response type. +message MsgGrantAllowanceResponse {} + +// MsgRevokeAllowance removes any existing Allowance from Granter to Grantee. +message MsgRevokeAllowance { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgRevokeAllowance"; + + // granter is the address of the user granting an allowance of their funds. + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // grantee is the address of the user being granted an allowance of another user's funds. 
+ string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +## Events + +The feegrant module emits the following events: + +## Msg Server + +### MsgGrantAllowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | set\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### MsgRevokeAllowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | revoke\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### Exec fee allowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | use\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### Prune fee allowances + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | --------------- | +| message | action | prune\_feegrant | +| message | pruner | `{prunerAddress}` | + +## Client + +### CLI + +A user can query and interact with the `feegrant` module using the CLI. + +#### Query + +The `query` commands allow users to query `feegrant` state. + +```shell +simd query feegrant --help +``` + +##### grant + +The `grant` command allows users to query a grant for a given granter-grantee pair. + +```shell +simd query feegrant grant [granter] [grantee] [flags] +``` + +Example: + +```shell +simd query feegrant grant cosmos1.. cosmos1.. +``` + +Example Output: + +```yml +allowance: + '@type': /cosmos.feegrant.v1beta1.BasicAllowance + expiration: null + spend_limit: + - amount: "100" + denom: stake +grantee: cosmos1.. +granter: cosmos1.. +``` + +##### grants + +The `grants` command allows users to query all grants for a given grantee. + +```shell +simd query feegrant grants [grantee] [flags] +``` + +Example: + +```shell +simd query feegrant grants cosmos1.. 
+``` + +Example Output: + +```yml expandable +allowances: +- allowance: + '@type': /cosmos.feegrant.v1beta1.BasicAllowance + expiration: null + spend_limit: + - amount: "100" + denom: stake + grantee: cosmos1.. + granter: cosmos1.. +pagination: + next_key: null + total: "0" +``` + +#### Transactions + +The `tx` commands allow users to interact with the `feegrant` module. + +```shell +simd tx feegrant --help +``` + +##### grant + +The `grant` command allows users to grant fee allowances to another account. The fee allowance can have an expiration date, a total spend limit, and/or a periodic spend limit. + +```shell +simd tx feegrant grant [granter] [grantee] [flags] +``` + +Example (one-time spend limit): + +```shell +simd tx feegrant grant cosmos1.. cosmos1.. --spend-limit 100stake +``` + +Example (periodic spend limit): + +```shell +simd tx feegrant grant cosmos1.. cosmos1.. --period 3600 --period-limit 10stake +``` + +##### revoke + +The `revoke` command allows users to revoke a granted fee allowance. + +```shell +simd tx feegrant revoke [granter] [grantee] [flags] +``` + +Example: + +```shell +simd tx feegrant revoke cosmos1.. cosmos1.. +``` + +### gRPC + +A user can query the `feegrant` module using gRPC endpoints. + +#### Allowance + +The `Allowance` endpoint allows users to query a granted fee allowance. + +```shell +cosmos.feegrant.v1beta1.Query/Allowance +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"grantee":"cosmos1..","granter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.feegrant.v1beta1.Query/Allowance +``` + +Example Output: + +```json +{ + "allowance": { + "granter": "cosmos1..", + "grantee": "cosmos1..", + "allowance": { + "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", + "spendLimit": [ + { + "denom": "stake", + "amount": "100" + } + ] + } + } +} +``` + +#### Allowances + +The `Allowances` endpoint allows users to query all granted fee allowances for a given grantee. 
+ +```shell +cosmos.feegrant.v1beta1.Query/Allowances +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.feegrant.v1beta1.Query/Allowances +``` + +Example Output: + +```json expandable +{ + "allowances": [ + { + "granter": "cosmos1..", + "grantee": "cosmos1..", + "allowance": { + "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", + "spendLimit": [ + { + "denom": "stake", + "amount": "100" + } + ] + } + } + ], + "pagination": { + "total": "1" + } +} +``` diff --git a/docs/sdk/next/documentation/module-system/genesis.mdx b/docs/sdk/next/documentation/module-system/genesis.mdx new file mode 100644 index 00000000..c7c15f2e --- /dev/null +++ b/docs/sdk/next/documentation/module-system/genesis.mdx @@ -0,0 +1,784 @@ +--- +title: Module Genesis +--- + +## Synopsis + +Modules generally handle a subset of the state and, as such, they need to define the related subset of the genesis file as well as methods to initialize, verify and export it. + + +**Pre-requisite Readings** + +- [Module Manager](/docs/sdk/next/documentation/module-system/module-manager) +- [Keepers](/docs/sdk/next/documentation/module-system/keeper) + + + +## Type Definition + +The subset of the genesis state defined by a given module is generally defined in a `genesis.proto` file ([more info](/docs/sdk/next/documentation/protocol-development/encoding#gogoproto) on how to define protobuf messages). The struct defining the module's subset of the genesis state is usually called `GenesisState` and contains all the module-related values that need to be initialized during the genesis process. 
+ +See an example of `GenesisState` protobuf message definition from the `auth` module: + +```protobuf +syntax = "proto3"; +package cosmos.auth.v1beta1; + +import "google/protobuf/any.proto"; +import "gogoproto/gogo.proto"; +import "cosmos/auth/v1beta1/auth.proto"; +import "amino/amino.proto"; + +option go_package = "github.com/cosmos/cosmos-sdk/x/auth/types"; + +// GenesisState defines the auth module's genesis state. +message GenesisState { + // params defines all the parameters of the module. + Params params = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // accounts are the accounts present at genesis. + repeated google.protobuf.Any accounts = 2; +} + +``` + +Next we present the main genesis-related methods that need to be implemented by module developers in order for their module to be used in Cosmos SDK applications. + +### `DefaultGenesis` + +The `DefaultGenesis()` method is a simple function that calls the constructor function for `GenesisState` with the default value for each parameter. 
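+The constructor-delegation pattern described above can be sketched for a hypothetical module. All type and field names below (`Params`, `MaxGrants`, `Grants`) are illustrative assumptions, not taken from any real SDK module:

```go
package main

import "fmt"

// Params and GenesisState stand in for a hypothetical module's
// genesis types; the field names are assumptions for illustration.
type Params struct {
	MaxGrants uint64
}

type GenesisState struct {
	Params Params
	Grants []string
}

// DefaultParams returns the module's default parameter values.
func DefaultParams() Params {
	return Params{MaxGrants: 10}
}

// NewGenesisState is the constructor for GenesisState.
func NewGenesisState(params Params, grants []string) *GenesisState {
	return &GenesisState{Params: params, Grants: grants}
}

// DefaultGenesis simply delegates to the constructor with default
// values, mirroring the shape of DefaultGenesisState in real modules.
func DefaultGenesis() *GenesisState {
	return NewGenesisState(DefaultParams(), nil)
}

func main() {
	fmt.Printf("%+v\n", DefaultGenesis())
}
```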
See an example from the `auth` module: + +```go expandable +package auth + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/depinject" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + + modulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/exported" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ ConsensusVersion defines the current x/auth module consensus version. +const ( + ConsensusVersion = 5 + GovModuleName = "gov" +) + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the auth module. +type AppModuleBasic struct { + ac address.Codec +} + +/ Name returns the auth module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the auth module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the auth +/ module. 
+func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the auth module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the auth module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the auth module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd returns the root query command for the auth module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the auth module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the auth module. +type AppModule struct { + AppModuleBasic + + accountKeeper keeper.AccountKeeper + randGenAccountsFn types.RandomGenesisAccountsFn + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (am AppModule) + +IsAppModule() { +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, accountKeeper keeper.AccountKeeper, randGenAccountsFn types.RandomGenesisAccountsFn, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + ac: accountKeeper.AddressCodec() +}, + accountKeeper: accountKeeper, + randGenAccountsFn: randGenAccountsFn, + legacySubspace: ss, +} +} + +/ Name returns the auth module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterServices registers a GRPC query service to respond to the +/ module-specific GRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.accountKeeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQueryServer(am.accountKeeper)) + m := keeper.NewMigrator(am.accountKeeper, cfg.QueryServer(), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 3 to 4: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 4, m.Migrate4To5); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 4 to 5", types.ModuleName)) +} +} + +/ InitGenesis performs genesis initialization for the auth module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.accountKeeper.InitGenesis(ctx, genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the auth +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.accountKeeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the auth module +func (am AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState, am.randGenAccountsFn) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for auth module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.accountKeeper.Schema) +} + +/ WeightedOperations doesn't return any auth module operation. +func (AppModule) + +WeightedOperations(_ module.SimulationState) []simtypes.WeightedOperation { + return nil +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideAddressCodec), + appmodule.Provide(ProvideModule), + ) +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. 
+func ProvideAddressCodec(config *modulev1.Module) + +address.Codec { + return authcodec.NewBech32Codec(config.Bech32Prefix) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + StoreService store.KVStoreService + Cdc codec.Codec + + RandomGenesisAccountsFn types.RandomGenesisAccountsFn `optional:"true"` + AccountI func() + +sdk.AccountI `optional:"true"` + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + AccountKeeper keeper.AccountKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + maccPerms := map[string][]string{ +} + for _, permission := range in.Config.ModuleAccountPermissions { + maccPerms[permission.Account] = permission.Permissions +} + + / default to governance authority if not provided + authority := types.NewModuleAddress(GovModuleName) + if in.Config.Authority != "" { + authority = types.NewModuleAddressOrBech32Address(in.Config.Authority) +} + if in.RandomGenesisAccountsFn == nil { + in.RandomGenesisAccountsFn = simulation.RandomGenesisAccounts +} + if in.AccountI == nil { + in.AccountI = types.ProtoBaseAccount +} + k := keeper.NewAccountKeeper(in.Cdc, in.StoreService, in.AccountI, maccPerms, in.Config.Bech32Prefix, authority.String()) + m := NewAppModule(in.Cdc, k, in.RandomGenesisAccountsFn, in.LegacySubspace) + +return ModuleOutputs{ + AccountKeeper: k, + Module: m +} +} +``` + +### `ValidateGenesis` + +The `ValidateGenesis(data GenesisState)` method is called to verify that the provided `genesisState` is correct. It should perform validity checks on each of the parameters listed in `GenesisState`. 
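+Before the full `auth` example below, here is a minimal sketch of the same validation pattern for a hypothetical module: validate each parameter, then reject structurally invalid entries such as duplicates (the `Params`, `MaxGrants`, and `Grants` names are illustrative assumptions, not a real SDK module):

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical genesis types for illustration only.
type Params struct {
	MaxGrants uint64
}

type GenesisState struct {
	Params Params
	Grants []string
}

// Validate performs a validity check on the parameter set.
func (p Params) Validate() error {
	if p.MaxGrants == 0 {
		return errors.New("max_grants must be positive")
	}
	return nil
}

// ValidateGenesis checks every field of GenesisState: params are
// validated first, then entries are scanned for duplicates, in the
// same spirit as the duplicate-account check in auth's ValidateGenAccounts.
func ValidateGenesis(data GenesisState) error {
	if err := data.Params.Validate(); err != nil {
		return err
	}
	seen := make(map[string]bool, len(data.Grants))
	for _, g := range data.Grants {
		if seen[g] {
			return fmt.Errorf("duplicate grant found in genesis state: %s", g)
		}
		seen[g] = true
	}
	return nil
}

func main() {
	ok := GenesisState{Params: Params{MaxGrants: 1}, Grants: []string{"a", "b"}}
	bad := GenesisState{Params: Params{MaxGrants: 1}, Grants: []string{"a", "a"}}
	fmt.Println(ValidateGenesis(ok), ValidateGenesis(bad))
}
```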
See an example from the `auth` module: + +```go expandable +package types + +import ( + + "encoding/json" + "fmt" + "sort" + + proto "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" +) + +var _ types.UnpackInterfacesMessage = GenesisState{ +} + +/ RandomGenesisAccountsFn defines the function required to generate custom account types +type RandomGenesisAccountsFn func(simState *module.SimulationState) + +GenesisAccounts + +/ NewGenesisState - Create a new genesis state +func NewGenesisState(params Params, accounts GenesisAccounts) *GenesisState { + genAccounts, err := PackAccounts(accounts) + if err != nil { + panic(err) +} + +return &GenesisState{ + Params: params, + Accounts: genAccounts, +} +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (g GenesisState) + +UnpackInterfaces(unpacker types.AnyUnpacker) + +error { + for _, any := range g.Accounts { + var account GenesisAccount + err := unpacker.UnpackAny(any, &account) + if err != nil { + return err +} + +} + +return nil +} + +/ DefaultGenesisState - Return a default genesis state +func DefaultGenesisState() *GenesisState { + return NewGenesisState(DefaultParams(), GenesisAccounts{ +}) +} + +/ GetGenesisStateFromAppState returns x/auth GenesisState given raw application +/ genesis state. +func GetGenesisStateFromAppState(cdc codec.Codec, appState map[string]json.RawMessage) + +GenesisState { + var genesisState GenesisState + if appState[ModuleName] != nil { + cdc.MustUnmarshalJSON(appState[ModuleName], &genesisState) +} + +return genesisState +} + +/ ValidateGenesis performs basic validation of auth genesis data returning an +/ error for any failed validation criteria. 
+func ValidateGenesis(data GenesisState) + +error { + if err := data.Params.Validate(); err != nil { + return err +} + +genAccs, err := UnpackAccounts(data.Accounts) + if err != nil { + return err +} + +return ValidateGenAccounts(genAccs) +} + +/ SanitizeGenesisAccounts sorts accounts and coin sets. +func SanitizeGenesisAccounts(genAccs GenesisAccounts) + +GenesisAccounts { + / Make sure there aren't any duplicated account numbers by fixing the duplicates with the lowest unused values. + / seenAccNum = easy lookup for used account numbers. + seenAccNum := map[uint64]bool{ +} + / dupAccNum = a map of account number to accounts with duplicate account numbers (excluding the 1st one seen). + dupAccNum := map[uint64]GenesisAccounts{ +} + for _, acc := range genAccs { + num := acc.GetAccountNumber() + if !seenAccNum[num] { + seenAccNum[num] = true +} + +else { + dupAccNum[num] = append(dupAccNum[num], acc) +} + +} + + / dupAccNums a sorted list of the account numbers with duplicates. + var dupAccNums []uint64 + for num := range dupAccNum { + dupAccNums = append(dupAccNums, num) +} + +sort.Slice(dupAccNums, func(i, j int) + +bool { + return dupAccNums[i] < dupAccNums[j] +}) + + / Change the account number of the duplicated ones to the first unused value. + globalNum := uint64(0) + for _, dupNum := range dupAccNums { + accs := dupAccNum[dupNum] + for _, acc := range accs { + for seenAccNum[globalNum] { + globalNum++ +} + if err := acc.SetAccountNumber(globalNum); err != nil { + panic(err) +} + +seenAccNum[globalNum] = true +} + +} + + / Then sort them all by account number. 
+ sort.Slice(genAccs, func(i, j int) + +bool { + return genAccs[i].GetAccountNumber() < genAccs[j].GetAccountNumber() +}) + +return genAccs +} + +/ ValidateGenAccounts validates an array of GenesisAccounts and checks for duplicates +func ValidateGenAccounts(accounts GenesisAccounts) + +error { + addrMap := make(map[string]bool, len(accounts)) + for _, acc := range accounts { + / check for duplicated accounts + addrStr := acc.GetAddress().String() + if _, ok := addrMap[addrStr]; ok { + return fmt.Errorf("duplicate account found in genesis state; address: %s", addrStr) +} + +addrMap[addrStr] = true + + / check account specific validation + if err := acc.Validate(); err != nil { + return fmt.Errorf("invalid account found in genesis state; address: %s, error: %s", addrStr, err.Error()) +} + +} + +return nil +} + +/ GenesisAccountIterator implements genesis account iteration. +type GenesisAccountIterator struct{ +} + +/ IterateGenesisAccounts iterates over all the genesis accounts found in +/ appGenesis and invokes a callback on each genesis account. If any call +/ returns true, iteration stops. 
+func (GenesisAccountIterator) + +IterateGenesisAccounts( + cdc codec.Codec, appGenesis map[string]json.RawMessage, cb func(sdk.AccountI) (stop bool), +) { + for _, genAcc := range GetGenesisStateFromAppState(cdc, appGenesis).Accounts { + acc, ok := genAcc.GetCachedValue().(sdk.AccountI) + if !ok { + panic("expected account") +} + if cb(acc) { + break +} + +} +} + +/ PackAccounts converts GenesisAccounts to Any slice +func PackAccounts(accounts GenesisAccounts) ([]*types.Any, error) { + accountsAny := make([]*types.Any, len(accounts)) + for i, acc := range accounts { + msg, ok := acc.(proto.Message) + if !ok { + return nil, fmt.Errorf("cannot proto marshal %T", acc) +} + +any, err := types.NewAnyWithValue(msg) + if err != nil { + return nil, err +} + +accountsAny[i] = any +} + +return accountsAny, nil +} + +/ UnpackAccounts converts Any slice to GenesisAccounts +func UnpackAccounts(accountsAny []*types.Any) (GenesisAccounts, error) { + accounts := make(GenesisAccounts, len(accountsAny)) + for i, any := range accountsAny { + acc, ok := any.GetCachedValue().(GenesisAccount) + if !ok { + return nil, fmt.Errorf("expected genesis account") +} + +accounts[i] = acc +} + +return accounts, nil +} +``` + +## Other Genesis Methods + +Other than the methods related directly to `GenesisState`, module developers are expected to implement two other methods as part of the [`AppModuleGenesis` interface](/docs/sdk/next/documentation/module-system/module-manager#appmodulegenesis) (only if the module needs to initialize a subset of state in genesis). These methods are [`InitGenesis`](#initgenesis) and [`ExportGenesis`](#exportgenesis). + +### `InitGenesis` + +The `InitGenesis` method is executed during [`InitChain`](/docs/sdk/next/documentation/application-framework/baseapp#initchain) when the application is first started. 
Given a `GenesisState`, it initializes the subset of the state managed by the module by using the module's [`keeper`](/docs/sdk/next/documentation/module-system/keeper) setter function on each parameter within the `GenesisState`. + +The [module manager](/docs/sdk/next/documentation/module-system/module-manager#manager) of the application is responsible for calling the `InitGenesis` method of each of the application's modules in order. This order is set by the application developer via the manager's `SetOrderInitGenesis` method, which is called in the [application's constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function). + +See an example of `InitGenesis` from the `auth` module: + +```go expandable +package keeper + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ InitGenesis - Init store state from genesis data +/ +/ CONTRACT: old coins from the FeeCollectionKeeper need to be transferred through +/ a genesis port script to the new fee collector account +func (ak AccountKeeper) + +InitGenesis(ctx sdk.Context, data types.GenesisState) { + if err := ak.Params.Set(ctx, data.Params); err != nil { + panic(err) +} + +accounts, err := types.UnpackAccounts(data.Accounts) + if err != nil { + panic(err) +} + +accounts = types.SanitizeGenesisAccounts(accounts) + + / Set the accounts and make sure the global account number matches the largest account number (even if zero). 
+ var lastAccNum *uint64 + for _, acc := range accounts { + accNum := acc.GetAccountNumber() + for lastAccNum == nil || *lastAccNum < accNum { + n := ak.NextAccountNumber(ctx) + +lastAccNum = &n +} + +ak.SetAccount(ctx, acc) +} + +ak.GetModuleAccount(ctx, types.FeeCollectorName) +} + +/ ExportGenesis returns a GenesisState for a given context and keeper +func (ak AccountKeeper) + +ExportGenesis(ctx sdk.Context) *types.GenesisState { + params := ak.GetParams(ctx) + +var genAccounts types.GenesisAccounts + ak.IterateAccounts(ctx, func(account sdk.AccountI) + +bool { + genAccount := account.(types.GenesisAccount) + +genAccounts = append(genAccounts, genAccount) + +return false +}) + +return types.NewGenesisState(params, genAccounts) +} +``` + +### `ExportGenesis` + +The `ExportGenesis` method is executed whenever an export of the state is made. It takes the latest known version of the subset of the state managed by the module and creates a new `GenesisState` out of it. This is mainly used when the chain needs to be upgraded via a hard fork. + +See an example of `ExportGenesis` from the `auth` module. + +```go expandable +package keeper + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ InitGenesis - Init store state from genesis data +/ +/ CONTRACT: old coins from the FeeCollectionKeeper need to be transferred through +/ a genesis port script to the new fee collector account +func (ak AccountKeeper) + +InitGenesis(ctx sdk.Context, data types.GenesisState) { + if err := ak.Params.Set(ctx, data.Params); err != nil { + panic(err) +} + +accounts, err := types.UnpackAccounts(data.Accounts) + if err != nil { + panic(err) +} + +accounts = types.SanitizeGenesisAccounts(accounts) + + / Set the accounts and make sure the global account number matches the largest account number (even if zero). 
+ var lastAccNum *uint64 + for _, acc := range accounts { + accNum := acc.GetAccountNumber() + for lastAccNum == nil || *lastAccNum < accNum { + n := ak.NextAccountNumber(ctx) + +lastAccNum = &n +} + +ak.SetAccount(ctx, acc) +} + +ak.GetModuleAccount(ctx, types.FeeCollectorName) +} + +/ ExportGenesis returns a GenesisState for a given context and keeper +func (ak AccountKeeper) + +ExportGenesis(ctx sdk.Context) *types.GenesisState { + params := ak.GetParams(ctx) + +var genAccounts types.GenesisAccounts + ak.IterateAccounts(ctx, func(account sdk.AccountI) + +bool { + genAccount := account.(types.GenesisAccount) + +genAccounts = append(genAccounts, genAccount) + +return false +}) + +return types.NewGenesisState(params, genAccounts) +} +``` + +### GenesisTxHandler + +`GenesisTxHandler` is a way for modules to submit state transitions prior to the first block. This is used by `x/genutil` to submit the genesis transactions for the validators to be added to staking. + +```go +package genesis + +/ TxHandler is an interface that modules can implement to provide genesis state transitions +type TxHandler interface { + ExecuteGenesisTx([]byte) + +error +} +``` diff --git a/docs/sdk/next/documentation/module-system/genutil.mdx b/docs/sdk/next/documentation/module-system/genutil.mdx new file mode 100644 index 00000000..dfbcbb11 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/genutil.mdx @@ -0,0 +1,1251 @@ +--- +title: '`x/genutil`' +description: >- + The genutil package contains a variety of genesis utility functionalities for + usage within a blockchain application. Namely: +--- + +## Concepts + +The `genutil` package contains a variety of genesis utility functionalities for usage within a blockchain application. 
Namely: + +* Genesis transactions related (gentx) +* Commands for collection and creation of gentxs +* `InitChain` processing of gentxs +* Genesis file creation +* Genesis file validation +* Genesis file migration +* CometBFT related initialization + * Translation of an app genesis to a CometBFT genesis + +## Genesis + +Genutil contains the data structure that defines an application genesis. +An application genesis consists of a consensus genesis (g.e. CometBFT genesis) and application related genesis data. + +```go expandable +package types + +import ( + + "bytes" + "encoding/json" + "errors" + "fmt" + "os" + "time" + + cmtjson "github.com/cometbft/cometbft/libs/json" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + cmttime "github.com/cometbft/cometbft/types/time" + "github.com/cosmos/cosmos-sdk/version" +) + +const ( + / MaxChainIDLen is the maximum length of a chain ID. + MaxChainIDLen = cmttypes.MaxChainIDLen +) + +/ AppGenesis defines the app's genesis. +type AppGenesis struct { + AppName string `json:"app_name"` + AppVersion string `json:"app_version"` + GenesisTime time.Time `json:"genesis_time"` + ChainID string `json:"chain_id"` + InitialHeight int64 `json:"initial_height"` + AppHash []byte `json:"app_hash"` + AppState json.RawMessage `json:"app_state,omitempty"` + Consensus *ConsensusGenesis `json:"consensus,omitempty"` +} + +/ NewAppGenesisWithVersion returns a new AppGenesis with the app name and app version already. +func NewAppGenesisWithVersion(chainID string, appState json.RawMessage) *AppGenesis { + return &AppGenesis{ + AppName: version.AppName, + AppVersion: version.Version, + ChainID: chainID, + AppState: appState, + Consensus: &ConsensusGenesis{ + Validators: nil, +}, +} +} + +/ ValidateAndComplete performs validation and completes the AppGenesis. 
+func (ag *AppGenesis) + +ValidateAndComplete() + +error { + if ag.ChainID == "" { + return errors.New("genesis doc must include non-empty chain_id") +} + if len(ag.ChainID) > MaxChainIDLen { + return fmt.Errorf("chain_id in genesis doc is too long (max: %d)", MaxChainIDLen) +} + if ag.InitialHeight < 0 { + return fmt.Errorf("initial_height cannot be negative (got %v)", ag.InitialHeight) +} + if ag.InitialHeight == 0 { + ag.InitialHeight = 1 +} + if ag.GenesisTime.IsZero() { + ag.GenesisTime = cmttime.Now() +} + if err := ag.Consensus.ValidateAndComplete(); err != nil { + return err +} + +return nil +} + +/ SaveAs is a utility method for saving AppGenesis as a JSON file. +func (ag *AppGenesis) + +SaveAs(file string) + +error { + appGenesisBytes, err := json.MarshalIndent(ag, "", " + ") + if err != nil { + return err +} + +return os.WriteFile(file, appGenesisBytes, 0o600) +} + +/ AppGenesisFromFile reads the AppGenesis from the provided file. +func AppGenesisFromFile(genFile string) (*AppGenesis, error) { + jsonBlob, err := os.ReadFile(genFile) + if err != nil { + return nil, fmt.Errorf("couldn't read AppGenesis file (%s): %w", genFile, err) +} + +var appGenesis AppGenesis + if err := json.Unmarshal(jsonBlob, &appGenesis); err != nil { + / fallback to CometBFT genesis + var ctmGenesis cmttypes.GenesisDoc + if err2 := cmtjson.Unmarshal(jsonBlob, &ctmGenesis); err2 != nil { + return nil, fmt.Errorf("error unmarshalling AppGenesis at %s: %w\n failed fallback to CometBFT GenDoc: %w", genFile, err, err2) +} + +appGenesis = AppGenesis{ + AppName: version.AppName, + / AppVersion is not filled as we do not know it from a CometBFT genesis + GenesisTime: ctmGenesis.GenesisTime, + ChainID: ctmGenesis.ChainID, + InitialHeight: ctmGenesis.InitialHeight, + AppHash: ctmGenesis.AppHash, + AppState: ctmGenesis.AppState, + Consensus: &ConsensusGenesis{ + Validators: ctmGenesis.Validators, + Params: ctmGenesis.ConsensusParams, +}, +} + +} + +return &appGenesis, nil +} + +/ 
-------------------------- +/ CometBFT Genesis Handling +/ -------------------------- + +/ ToGenesisDoc converts the AppGenesis to a CometBFT GenesisDoc. +func (ag *AppGenesis) + +ToGenesisDoc() (*cmttypes.GenesisDoc, error) { + return &cmttypes.GenesisDoc{ + GenesisTime: ag.GenesisTime, + ChainID: ag.ChainID, + InitialHeight: ag.InitialHeight, + AppHash: ag.AppHash, + AppState: ag.AppState, + Validators: ag.Consensus.Validators, + ConsensusParams: ag.Consensus.Params, +}, nil +} + +/ ConsensusGenesis defines the consensus layer's genesis. +/ TODO(@julienrbrt) + +eventually abstract from CometBFT types +type ConsensusGenesis struct { + Validators []cmttypes.GenesisValidator `json:"validators,omitempty"` + Params *cmttypes.ConsensusParams `json:"params,omitempty"` +} + +/ NewConsensusGenesis returns a ConsensusGenesis with given values. +/ It takes a proto consensus params so it can called from server export command. +func NewConsensusGenesis(params cmtproto.ConsensusParams, validators []cmttypes.GenesisValidator) *ConsensusGenesis { + return &ConsensusGenesis{ + Params: &cmttypes.ConsensusParams{ + Block: cmttypes.BlockParams{ + MaxBytes: params.Block.MaxBytes, + MaxGas: params.Block.MaxGas, +}, + Evidence: cmttypes.EvidenceParams{ + MaxAgeNumBlocks: params.Evidence.MaxAgeNumBlocks, + MaxAgeDuration: params.Evidence.MaxAgeDuration, + MaxBytes: params.Evidence.MaxBytes, +}, + Validator: cmttypes.ValidatorParams{ + PubKeyTypes: params.Validator.PubKeyTypes, +}, +}, + Validators: validators, +} +} + +func (cs *ConsensusGenesis) + +MarshalJSON() ([]byte, error) { + type Alias ConsensusGenesis + return cmtjson.Marshal(&Alias{ + Validators: cs.Validators, + Params: cs.Params, +}) +} + +func (cs *ConsensusGenesis) + +UnmarshalJSON(b []byte) + +error { + type Alias ConsensusGenesis + result := Alias{ +} + if err := cmtjson.Unmarshal(b, &result); err != nil { + return err +} + +cs.Params = result.Params + cs.Validators = result.Validators + + return nil +} + +func (cs 
*ConsensusGenesis) + +ValidateAndComplete() + +error { + if cs == nil { + return fmt.Errorf("consensus genesis cannot be nil") +} + if cs.Params == nil { + cs.Params = cmttypes.DefaultConsensusParams() +} + +else if err := cs.Params.ValidateBasic(); err != nil { + return err +} + for i, v := range cs.Validators { + if v.Power == 0 { + return fmt.Errorf("the genesis file cannot contain validators with no voting power: %v", v) +} + if len(v.Address) > 0 && !bytes.Equal(v.PubKey.Address(), v.Address) { + return fmt.Errorf("incorrect address for validator %v in the genesis file, should be %v", v, v.PubKey.Address()) +} + if len(v.Address) == 0 { + cs.Validators[i].Address = v.PubKey.Address() +} + +} + +return nil +} +``` + +The application genesis can then be translated to the consensus engine to the right format: + +```go expandable +package types + +import ( + + "bytes" + "encoding/json" + "errors" + "fmt" + "os" + "time" + + cmtjson "github.com/cometbft/cometbft/libs/json" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + cmttime "github.com/cometbft/cometbft/types/time" + "github.com/cosmos/cosmos-sdk/version" +) + +const ( + / MaxChainIDLen is the maximum length of a chain ID. + MaxChainIDLen = cmttypes.MaxChainIDLen +) + +/ AppGenesis defines the app's genesis. +type AppGenesis struct { + AppName string `json:"app_name"` + AppVersion string `json:"app_version"` + GenesisTime time.Time `json:"genesis_time"` + ChainID string `json:"chain_id"` + InitialHeight int64 `json:"initial_height"` + AppHash []byte `json:"app_hash"` + AppState json.RawMessage `json:"app_state,omitempty"` + Consensus *ConsensusGenesis `json:"consensus,omitempty"` +} + +/ NewAppGenesisWithVersion returns a new AppGenesis with the app name and app version already. 
+func NewAppGenesisWithVersion(chainID string, appState json.RawMessage) *AppGenesis { + return &AppGenesis{ + AppName: version.AppName, + AppVersion: version.Version, + ChainID: chainID, + AppState: appState, + Consensus: &ConsensusGenesis{ + Validators: nil, +}, +} +} + +/ ValidateAndComplete performs validation and completes the AppGenesis. +func (ag *AppGenesis) + +ValidateAndComplete() + +error { + if ag.ChainID == "" { + return errors.New("genesis doc must include non-empty chain_id") +} + if len(ag.ChainID) > MaxChainIDLen { + return fmt.Errorf("chain_id in genesis doc is too long (max: %d)", MaxChainIDLen) +} + if ag.InitialHeight < 0 { + return fmt.Errorf("initial_height cannot be negative (got %v)", ag.InitialHeight) +} + if ag.InitialHeight == 0 { + ag.InitialHeight = 1 +} + if ag.GenesisTime.IsZero() { + ag.GenesisTime = cmttime.Now() +} + if err := ag.Consensus.ValidateAndComplete(); err != nil { + return err +} + +return nil +} + +/ SaveAs is a utility method for saving AppGenesis as a JSON file. +func (ag *AppGenesis) + +SaveAs(file string) + +error { + appGenesisBytes, err := json.MarshalIndent(ag, "", " + ") + if err != nil { + return err +} + +return os.WriteFile(file, appGenesisBytes, 0o600) +} + +/ AppGenesisFromFile reads the AppGenesis from the provided file. 
+func AppGenesisFromFile(genFile string) (*AppGenesis, error) { + jsonBlob, err := os.ReadFile(genFile) + if err != nil { + return nil, fmt.Errorf("couldn't read AppGenesis file (%s): %w", genFile, err) +} + +var appGenesis AppGenesis + if err := json.Unmarshal(jsonBlob, &appGenesis); err != nil { + / fallback to CometBFT genesis + var ctmGenesis cmttypes.GenesisDoc + if err2 := cmtjson.Unmarshal(jsonBlob, &ctmGenesis); err2 != nil { + return nil, fmt.Errorf("error unmarshalling AppGenesis at %s: %w\n failed fallback to CometBFT GenDoc: %w", genFile, err, err2) +} + +appGenesis = AppGenesis{ + AppName: version.AppName, + / AppVersion is not filled as we do not know it from a CometBFT genesis + GenesisTime: ctmGenesis.GenesisTime, + ChainID: ctmGenesis.ChainID, + InitialHeight: ctmGenesis.InitialHeight, + AppHash: ctmGenesis.AppHash, + AppState: ctmGenesis.AppState, + Consensus: &ConsensusGenesis{ + Validators: ctmGenesis.Validators, + Params: ctmGenesis.ConsensusParams, +}, +} + +} + +return &appGenesis, nil +} + +/ -------------------------- +/ CometBFT Genesis Handling +/ -------------------------- + +/ ToGenesisDoc converts the AppGenesis to a CometBFT GenesisDoc. +func (ag *AppGenesis) + +ToGenesisDoc() (*cmttypes.GenesisDoc, error) { + return &cmttypes.GenesisDoc{ + GenesisTime: ag.GenesisTime, + ChainID: ag.ChainID, + InitialHeight: ag.InitialHeight, + AppHash: ag.AppHash, + AppState: ag.AppState, + Validators: ag.Consensus.Validators, + ConsensusParams: ag.Consensus.Params, +}, nil +} + +/ ConsensusGenesis defines the consensus layer's genesis. +/ TODO(@julienrbrt) + +eventually abstract from CometBFT types +type ConsensusGenesis struct { + Validators []cmttypes.GenesisValidator `json:"validators,omitempty"` + Params *cmttypes.ConsensusParams `json:"params,omitempty"` +} + +/ NewConsensusGenesis returns a ConsensusGenesis with given values. +/ It takes a proto consensus params so it can called from server export command. 
+func NewConsensusGenesis(params cmtproto.ConsensusParams, validators []cmttypes.GenesisValidator) *ConsensusGenesis { + return &ConsensusGenesis{ + Params: &cmttypes.ConsensusParams{ + Block: cmttypes.BlockParams{ + MaxBytes: params.Block.MaxBytes, + MaxGas: params.Block.MaxGas, +}, + Evidence: cmttypes.EvidenceParams{ + MaxAgeNumBlocks: params.Evidence.MaxAgeNumBlocks, + MaxAgeDuration: params.Evidence.MaxAgeDuration, + MaxBytes: params.Evidence.MaxBytes, +}, + Validator: cmttypes.ValidatorParams{ + PubKeyTypes: params.Validator.PubKeyTypes, +}, +}, + Validators: validators, +} +} + +func (cs *ConsensusGenesis) + +MarshalJSON() ([]byte, error) { + type Alias ConsensusGenesis + return cmtjson.Marshal(&Alias{ + Validators: cs.Validators, + Params: cs.Params, +}) +} + +func (cs *ConsensusGenesis) + +UnmarshalJSON(b []byte) + +error { + type Alias ConsensusGenesis + result := Alias{ +} + if err := cmtjson.Unmarshal(b, &result); err != nil { + return err +} + +cs.Params = result.Params + cs.Validators = result.Validators + + return nil +} + +func (cs *ConsensusGenesis) + +ValidateAndComplete() + +error { + if cs == nil { + return fmt.Errorf("consensus genesis cannot be nil") +} + if cs.Params == nil { + cs.Params = cmttypes.DefaultConsensusParams() +} + +else if err := cs.Params.ValidateBasic(); err != nil { + return err +} + for i, v := range cs.Validators { + if v.Power == 0 { + return fmt.Errorf("the genesis file cannot contain validators with no voting power: %v", v) +} + if len(v.Address) > 0 && !bytes.Equal(v.PubKey.Address(), v.Address) { + return fmt.Errorf("incorrect address for validator %v in the genesis file, should be %v", v, v.PubKey.Address()) +} + if len(v.Address) == 0 { + cs.Validators[i].Address = v.PubKey.Address() +} + +} + +return nil +} +``` + +```go expandable +package server + +import ( + + "context" + "errors" + "fmt" + "io" + "net" + "os" + "runtime/pprof" + "github.com/cometbft/cometbft/abci/server" + cmtcmd 
"github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + "github.com/cometbft/cometbft/proxy" + "github.com/cometbft/cometbft/rpc/client/local" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = 
"iavl-disable-fastnode" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. 
+func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. + +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + PreRunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. + if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +return wrapCPUProfile(serverCtx, func() + +error { + return start(serverCtx, clientCtx, appCreator, withCMT, opts) +}) +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. 
Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the 
CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, app, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func 
startStandAlone(svrCtx *Context, app types.Application, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %v", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. + <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return errors.Join(svr.Stop(), app.Close()) +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + home := cmtCfg.RootDir + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. 
+ if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, cmtCfg, svrCfg, clientCtx, svrCtx, app, home, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. +func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() + _ = app.Close() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := 
serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. +func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, port, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( 
+ grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", grpcAddress) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + cmtCfg *cmtcfg.Config, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + +app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) + +cleanupFn = 
func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} +``` + +## Client + +### CLI + +The genutil commands are available under the `genesis` subcommand. + +#### add-genesis-account + +Add a genesis account to `genesis.json`. Learn more [here](https://docs.cosmos.network/main/run-node/run-node#adding-genesis-accounts). + +#### collect-gentxs + +Collect genesis txs and output a `genesis.json` file. + +```shell +simd genesis collect-gentxs +``` + +This will create a new `genesis.json` file that includes data from all the validators (we sometimes call it the "super genesis file" to distinguish it from single-validator genesis files). + +#### gentx + +Generate a genesis tx carrying a self delegation. + +```shell +simd genesis gentx [key_name] [amount] --chain-id [chain-id] +``` + +This will create the genesis transaction for your new chain. Here `amount` should be at least `1000000000stake`. +If you provide too much or too little, you will encounter an error when starting a node. + +#### migrate + +Migrate genesis to a specified target (SDK) version. + +```shell +simd genesis migrate [target-version] +``` + + +The `migrate` command is extensible and takes a `MigrationMap`. This map is a mapping of target versions to genesis migrations functions. +When not using the default `MigrationMap`, it is recommended to still call the default `MigrationMap` corresponding the SDK version of the chain and prepend/append your own genesis migrations. + + +#### validate-genesis + +Validates the genesis file at the default location or at the location passed as an argument. + +```shell +simd genesis validate-genesis +``` + + +Validate genesis only validates if the genesis is valid at the **current application binary**. For validating a genesis from a previous version of the application, use the `migrate` command to migrate the genesis to the current version. 
diff --git a/docs/sdk/next/documentation/module-system/gov.mdx b/docs/sdk/next/documentation/module-system/gov.mdx
new file mode 100644
index 00000000..f73ed207
--- /dev/null
+++ b/docs/sdk/next/documentation/module-system/gov.mdx
@@ -0,0 +1,2991 @@
---
title: '`x/gov`'
description: >-
  This paper specifies the Governance module of the Cosmos SDK, which was first
  described in the Cosmos Whitepaper in June 2016.
---

## Abstract

This paper specifies the Governance module of the Cosmos SDK, which was first
described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in
June 2016.

The module enables Cosmos SDK based blockchains to support an on-chain governance
system. In this system, holders of the native staking token of the chain can vote
on proposals on a 1 token 1 vote basis. Below is a list of features the module
currently supports:

* **Proposal submission:** Users can submit proposals with a deposit. Once the
  minimum deposit is reached, the proposal enters the voting period. The minimum deposit can be reached by collecting deposits from different users (including the proposer) within the deposit period.
* **Vote:** Participants can vote on proposals that reached MinDeposit and entered the voting period.
* **Inheritance and penalties:** Delegators inherit their validator's vote if
  they don't vote themselves.
* **Claiming deposit:** Users that deposited on proposals can recover their
  deposits if the proposal was accepted or rejected. If the proposal was vetoed, or never entered the voting period (minimum deposit not reached within the deposit period), the deposit is burned.

This module is in use on the Cosmos Hub (a.k.a. [gaia](https://github.com/cosmos/gaia)).
Features that may be added in the future are described in [Future Improvements](#future-improvements).

## Contents

The following specification uses *ATOM* as the native staking token.
The module
can be adapted to any Proof-Of-Stake blockchain by replacing *ATOM* with the native
staking token of the chain.

* [Concepts](#concepts)
  * [Proposal submission](#proposal-submission)
  * [Deposit](#deposit)
  * [Vote](#vote)
  * [Software Upgrade](#software-upgrade)
* [State](#state)
  * [Proposals](#proposals)
  * [Parameters and base types](#parameters-and-base-types)
  * [Deposit](#deposit-1)
  * [ValidatorGovInfo](#validatorgovinfo)
  * [Stores](#stores)
  * [Proposal Processing Queue](#proposal-processing-queue)
  * [Legacy Proposal](#legacy-proposal)
* [Messages](#messages)
  * [Proposal Submission](#proposal-submission-1)
  * [Deposit](#deposit-2)
  * [Vote](#vote-1)
* [Events](#events)
  * [EndBlocker](#endblocker)
  * [Handlers](#handlers)
* [Parameters](#parameters)
* [Client](#client)
  * [CLI](#cli)
  * [gRPC](#grpc)
  * [REST](#rest)
* [Metadata](#metadata)
  * [Proposal](#proposal-3)
  * [Vote](#vote-5)
* [Future Improvements](#future-improvements)

## Concepts

*Disclaimer: This is work in progress. Mechanisms are susceptible to change.*

The governance process is divided into a few steps, outlined below:

* **Proposal submission:** A proposal is submitted to the blockchain with a
  deposit.
* **Vote:** Once the deposit reaches a certain value (`MinDeposit`), the proposal is
  confirmed and the vote opens. Bonded Atom holders can then send `TxGovVote`
  transactions to vote on the proposal.
* **Execution:** After a period of time, the votes are tallied and, depending
  on the result, the messages in the proposal are executed.

### Proposal submission

#### Right to submit a proposal

Every account can submit proposals by sending a `MsgSubmitProposal` transaction.
Once a proposal is submitted, it is identified by its unique `proposalID`.

#### Proposal Messages

A proposal includes an array of `sdk.Msg`s which are executed automatically if the
proposal passes.
The messages are executed by the governance `ModuleAccount` itself. Modules
such as `x/upgrade` that want certain messages to be executable by governance
only should add a whitelist within the respective msg server, granting the governance
module the right to execute the message once a quorum has been reached. The governance
module uses the `MsgServiceRouter` to check that these messages are correctly constructed
and have a respective path to execute on, but it does not perform a full validity check.

### Deposit

To prevent spam, proposals must be submitted with a deposit in the coins defined by
the `MinDeposit` param.

When a proposal is submitted, it has to be accompanied by a deposit that must be
strictly positive, but can be inferior to `MinDeposit`. The submitter doesn't need
to pay for the entire deposit on their own. The newly created proposal is stored in
an *inactive proposal queue* and stays there until its deposit passes the `MinDeposit`.
Other token holders can increase the proposal's deposit by sending a `Deposit`
transaction. If a proposal doesn't pass the `MinDeposit` before the deposit end time
(the time when deposits are no longer accepted), the proposal will be destroyed: the
proposal will be removed from state and the deposit will be burned (see x/gov `EndBlocker`).
When a proposal deposit passes the `MinDeposit` threshold (even during the proposal
submission) before the deposit end time, the proposal will be moved into the
*active proposal queue* and the voting period will begin.

The deposit is kept in escrow and held by the governance `ModuleAccount` until the
proposal is finalized (passed or rejected).
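The deposit flow described above can be sketched as a small state machine. The following is an illustrative, SDK-free sketch: `Proposal`, `AddDeposit`, and the integer coin amounts are simplified stand-ins, not the actual `x/gov` keeper API.

```go
package main

import "fmt"

// Status mirrors the two queues described above: a proposal sits in the
// inactive queue while collecting deposits, and moves to the active queue
// (voting period) once the total reaches MinDeposit.
type Status int

const (
	StatusDepositPeriod Status = iota // inactive proposal queue
	StatusVotingPeriod                // active proposal queue
)

type Proposal struct {
	MinDeposit   int64
	TotalDeposit int64
	Deposits     map[string]int64 // depositor -> amount, held in escrow
	Status       Status
}

func NewProposal(minDeposit int64) *Proposal {
	return &Proposal{MinDeposit: minDeposit, Deposits: map[string]int64{}}
}

// AddDeposit escrows a strictly positive deposit from any token holder
// (not just the proposer) and opens the voting period once the accumulated
// total reaches MinDeposit.
func (p *Proposal) AddDeposit(depositor string, amount int64) error {
	if amount <= 0 {
		return fmt.Errorf("deposit must be strictly positive")
	}
	if p.Status != StatusDepositPeriod {
		return fmt.Errorf("deposits are only accepted during the deposit period")
	}
	p.Deposits[depositor] += amount
	p.TotalDeposit += amount
	if p.TotalDeposit >= p.MinDeposit {
		p.Status = StatusVotingPeriod
	}
	return nil
}

func main() {
	p := NewProposal(100)
	p.AddDeposit("proposer", 40) // below MinDeposit: stays inactive
	p.AddDeposit("other", 60)    // total reaches MinDeposit: voting opens
	fmt.Println(p.Status == StatusVotingPeriod) // prints "true"
}
```

Note that the sketch omits the deposit end time: in the real module, a proposal that does not reach `MinDeposit` in time is removed from state and its deposit burned.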
#### Deposit refund and burn

When a proposal is finalized, the coins from the deposit are either refunded or burned
according to the final tally of the proposal:

* If the proposal is approved or rejected but *not* vetoed, each deposit will be
  automatically refunded to its respective depositor (transferred from the governance
  `ModuleAccount`).
* If the proposal is vetoed with more than 1/3 of the votes, deposits will be burned from the
  governance `ModuleAccount`, and the proposal information along with its deposit
  information will be removed from state.
* All refunded or burned deposits are removed from the state. Events are issued when
  burning or refunding a deposit.

### Vote

#### Participants

*Participants* are users that have the right to vote on proposals. On the
Cosmos Hub, participants are bonded Atom holders. Unbonded Atom holders and
other users do not get the right to participate in governance. However, they
can submit and deposit on proposals.

Note that when *participants* have both bonded and unbonded Atoms, their voting power is calculated from their bonded Atom holdings only.

#### Voting period

Once a proposal reaches `MinDeposit`, it immediately enters the `Voting period`. We
define `Voting period` as the interval between the moment the vote opens and
the moment the vote closes. The initial value of `Voting period` is 2 weeks.

#### Option set

The option set of a proposal refers to the set of choices a participant can
choose from when casting their vote.

The initial option set includes the following options:

* `Yes`
* `No`
* `NoWithVeto`
* `Abstain`

`NoWithVeto` counts as `No` but also adds a `Veto` vote. The `Abstain` option
allows voters to signal that they do not intend to vote in favor of or against the
proposal but accept the result of the vote.
*Note: from the UI, for urgent proposals we should maybe add a ‘Not Urgent’ option that casts a `NoWithVeto` vote.*

#### Weighted Votes

[ADR-037](/docs/common/pages/adr-comprehensive#adr-037-governance-split-votes) introduces the weighted vote feature, which allows a staker to split their votes into several voting options. For example, they could use 70% of their voting power to vote Yes and 30% of their voting power to vote No.

Oftentimes the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, so it makes sense to allow them to split their voting power. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. With this system, however, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.

To represent a weighted vote on chain, we use the following Protobuf message:

```protobuf
// WeightedVoteOption defines a unit of vote for vote split.
//
// Since: cosmos-sdk 0.43
message WeightedVoteOption {
  // option defines the valid vote options, it must not contain duplicate vote options.
  VoteOption option = 1;

  // weight is the vote weight associated with the vote option.
  string weight = 2 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
}
```

```protobuf
// Vote defines a vote on a governance proposal.
// A Vote consists of a proposal ID, the voter, and the vote option.
message Vote {
  option (gogoproto.goproto_stringer) = false;
  option (gogoproto.equal) = false;

  // proposal_id defines the unique id of the proposal.
  uint64 proposal_id = 1 [(gogoproto.jsontag) = "id", (amino.field_name) = "id", (amino.dont_omitempty) = true];

  // voter is the voter address of the proposal.
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // Deprecated: Prefer to use `options` instead. This field is set in queries + // if and only if `len(options) == 1` and that option has weight 1. In all + // other cases, this field will default to VOTE_OPTION_UNSPECIFIED. + VoteOption option = 3 [deprecated = true]; + + // options is the weighted vote options. + // + // Since: cosmos-sdk 0.43 + repeated WeightedVoteOption options = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +For a weighted vote to be valid, the `options` field must not contain duplicate vote options, and the sum of weights of all options must be equal to 1. + +#### Custom Vote Calculation + +Cosmos SDK v0.53.0 introduced an option for developers to define a custom vote result and voting power calculation function. + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "cosmossdk.io/collections" + "cosmossdk.io/math" + + sdk "github.com/cosmos/cosmos-sdk/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ CalculateVoteResultsAndVotingPowerFn is a function signature for calculating vote results and voting power +/ It can be overridden to customize the voting power calculation for proposals +/ It gets the proposal tallied and the validators governance infos (bonded tokens, voting power, etc.) 
+/ It must return the total voting power and the results of the vote +type CalculateVoteResultsAndVotingPowerFn func( + ctx context.Context, + k Keeper, + proposal v1.Proposal, + validators map[string]v1.ValidatorGovInfo, +) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) + +func defaultCalculateVoteResultsAndVotingPower( + ctx context.Context, + k Keeper, + proposal v1.Proposal, + validators map[string]v1.ValidatorGovInfo, +) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) { + totalVotingPower := math.LegacyZeroDec() + +results = make(map[v1.VoteOption]math.LegacyDec) + +results[v1.OptionYes] = math.LegacyZeroDec() + +results[v1.OptionAbstain] = math.LegacyZeroDec() + +results[v1.OptionNo] = math.LegacyZeroDec() + +results[v1.OptionNoWithVeto] = math.LegacyZeroDec() + rng := collections.NewPrefixedPairRange[uint64, sdk.AccAddress](/docs/sdk/next/documentation/module-system/proposal.Id) + votesToRemove := []collections.Pair[uint64, sdk.AccAddress]{ +} + +err = k.Votes.Walk(ctx, rng, func(key collections.Pair[uint64, sdk.AccAddress], vote v1.Vote) (bool, error) { + / if validator, just record it in the map + voter, err := k.authKeeper.AddressCodec().StringToBytes(vote.Voter) + if err != nil { + return false, err +} + +valAddrStr, err := k.sk.ValidatorAddressCodec().BytesToString(voter) + if err != nil { + return false, err +} + if val, ok := validators[valAddrStr]; ok { + val.Vote = vote.Options + validators[valAddrStr] = val +} + + / iterate over all delegations from voter, deduct from any delegated-to validators + err = k.sk.IterateDelegations(ctx, voter, func(index int64, delegation stakingtypes.DelegationI) (stop bool) { + valAddrStr := delegation.GetValidatorAddr() + if val, ok := validators[valAddrStr]; ok { + / There is no need to handle the special case that validator address equal to voter address. 
+ / Because voter's voting power will tally again even if there will be deduction of voter's voting power from validator. + val.DelegatorDeductions = val.DelegatorDeductions.Add(delegation.GetShares()) + +validators[valAddrStr] = val + + / delegation shares * bonded / total shares + votingPower := delegation.GetShares().MulInt(val.BondedTokens).Quo(val.DelegatorShares) + for _, option := range vote.Options { + weight, _ := math.LegacyNewDecFromStr(option.Weight) + subPower := votingPower.Mul(weight) + +results[option.Option] = results[option.Option].Add(subPower) +} + +totalVotingPower = totalVotingPower.Add(votingPower) +} + +return false +}) + if err != nil { + return false, err +} + +votesToRemove = append(votesToRemove, key) + +return false, nil +}) + if err != nil { + return math.LegacyZeroDec(), nil, fmt.Errorf("error while iterating delegations: %w", err) +} + + / remove all votes from store + for _, key := range votesToRemove { + if err := k.Votes.Remove(ctx, key); err != nil { + return math.LegacyDec{ +}, nil, fmt.Errorf("error while removing vote (%d/%s): %w", key.K1(), key.K2(), err) +} + +} + + / iterate over the validators again to tally their voting power + for _, val := range validators { + if len(val.Vote) == 0 { + continue +} + sharesAfterDeductions := val.DelegatorShares.Sub(val.DelegatorDeductions) + votingPower := sharesAfterDeductions.MulInt(val.BondedTokens).Quo(val.DelegatorShares) + for _, option := range val.Vote { + weight, _ := math.LegacyNewDecFromStr(option.Weight) + subPower := votingPower.Mul(weight) + +results[option.Option] = results[option.Option].Add(subPower) +} + +totalVotingPower = totalVotingPower.Add(votingPower) +} + +return totalVotingPower, results, nil +} + +/ getCurrentValidators fetches all the bonded validators, insert them into currValidators +func (k Keeper) + +getCurrentValidators(ctx context.Context) (map[string]v1.ValidatorGovInfo, error) { + currValidators := make(map[string]v1.ValidatorGovInfo) + if err := 
k.sk.IterateBondedValidatorsByPower(ctx, func(index int64, validator stakingtypes.ValidatorI) (stop bool) { + valBz, err := k.sk.ValidatorAddressCodec().StringToBytes(validator.GetOperator()) + if err != nil { + return false +} + +currValidators[validator.GetOperator()] = v1.NewValidatorGovInfo( + valBz, + validator.GetBondedTokens(), + validator.GetDelegatorShares(), + math.LegacyZeroDec(), + v1.WeightedVoteOptions{ +}, + ) + +return false +}); err != nil { + return nil, err +} + +return currValidators, nil +} + +/ Tally iterates over the votes and updates the tally of a proposal based on the voting power of the +/ voters +func (k Keeper) + +Tally(ctx context.Context, proposal v1.Proposal) (passes, burnDeposits bool, tallyResults v1.TallyResult, err error) { + currValidators, err := k.getCurrentValidators(ctx) + if err != nil { + return false, false, tallyResults, fmt.Errorf("error while getting current validators: %w", err) +} + tallyFn := k.calculateVoteResultsAndVotingPowerFn + totalVotingPower, results, err := tallyFn(ctx, k, proposal, currValidators) + if err != nil { + return false, false, tallyResults, fmt.Errorf("error while calculating tally results: %w", err) +} + +tallyResults = v1.NewTallyResultFromMap(results) + + / TODO: Upgrade the spec to cover all of these cases & remove pseudocode. 
+ / If there is no staked coins, the proposal fails + totalBonded, err := k.sk.TotalBondedTokens(ctx) + if err != nil { + return false, false, tallyResults, err +} + if totalBonded.IsZero() { + return false, false, tallyResults, nil +} + +params, err := k.Params.Get(ctx) + if err != nil { + return false, false, tallyResults, fmt.Errorf("error while getting params: %w", err) +} + + / If there is not enough quorum of votes, the proposal fails + percentVoting := totalVotingPower.Quo(math.LegacyNewDecFromInt(totalBonded)) + +quorum, _ := math.LegacyNewDecFromStr(params.Quorum) + if percentVoting.LT(quorum) { + return false, params.BurnVoteQuorum, tallyResults, nil +} + + / If no one votes (everyone abstains), proposal fails + if totalVotingPower.Sub(results[v1.OptionAbstain]).Equal(math.LegacyZeroDec()) { + return false, false, tallyResults, nil +} + + / If more than 1/3 of voters veto, proposal fails + vetoThreshold, _ := math.LegacyNewDecFromStr(params.VetoThreshold) + if results[v1.OptionNoWithVeto].Quo(totalVotingPower).GT(vetoThreshold) { + return false, params.BurnVoteVeto, tallyResults, nil +} + + / If more than 1/2 of non-abstaining voters vote Yes, proposal passes + / For expedited 2/3 + var thresholdStr string + if proposal.Expedited { + thresholdStr = params.GetExpeditedThreshold() +} + +else { + thresholdStr = params.GetThreshold() +} + +threshold, _ := math.LegacyNewDecFromStr(thresholdStr) + if results[v1.OptionYes].Quo(totalVotingPower.Sub(results[v1.OptionAbstain])).GT(threshold) { + return true, false, tallyResults, nil +} + + / If more than 1/2 of non-abstaining voters vote No, proposal fails + return false, false, tallyResults, nil +} +``` + +This gives developers a more expressive way to handle governance on their appchains. 
+Developers can now build systems with: + +* Quadratic Voting +* Time-weighted Voting +* Reputation-Based voting + +##### Example + +```go expandable +func myCustomVotingFunction( + ctx context.Context, + k Keeper, + proposal v1.Proposal, + validators map[string]v1.ValidatorGovInfo, +) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) { + / ... tally logic +} + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(myCustomVotingFunction), +) +``` + +### Quorum + +Quorum is defined as the minimum percentage of voting power that needs to be +cast on a proposal for the result to be valid. + +### Expedited Proposals + +A proposal can be expedited, making the proposal use shorter voting duration and a higher tally threshold by its default. If an expedited proposal fails to meet the threshold within the scope of shorter voting duration, the expedited proposal is then converted to a regular proposal and restarts voting under regular voting conditions. + +#### Threshold + +Threshold is defined as the minimum proportion of `Yes` votes (excluding +`Abstain` votes) for the proposal to be accepted. + +Initially, the threshold is set at 50% of `Yes` votes, excluding `Abstain` +votes. A possibility to veto exists if more than 1/3rd of all votes are +`NoWithVeto` votes. Note, both of these values are derived from the `TallyParams` +on-chain parameter, which is modifiable by governance. +This means that proposals are accepted iff: + +* There exist bonded tokens. +* Quorum has been achieved. +* The proportion of `Abstain` votes is inferior to 1/1. +* The proportion of `NoWithVeto` votes is inferior to 1/3, including + `Abstain` votes. 
* The proportion of `Yes` votes, excluding `Abstain` votes, at the end of
  the voting period is superior to 1/2.

For expedited proposals, by default, the threshold is higher than for a *normal proposal*, namely, 66.7%.

#### Inheritance

If a delegator does not vote, it will inherit its validator's vote.

* If the delegator votes before its validator, it will not inherit from the
  validator's vote.
* If the delegator votes after its validator, it will override its validator's
  vote with its own. If the proposal is urgent, it is possible
  that the vote will close before delegators have a chance to react and
  override their validator's vote. This is not a problem, as proposals require more than 2/3rd of the total voting power to pass when tallied at the end of the voting period. Because as little as 1/3 + 1 of the validation power could collude to censor transactions, non-collusion is already assumed for ranges exceeding this threshold.

#### Validator’s punishment for non-voting

At present, validators are not punished for failing to vote.

#### Governance address

Later, we may add permissioned keys that can only sign txs from certain modules. For the MVP, the `Governance address` will be the main validator address generated at account creation. This address corresponds to a different PrivKey than the CometBFT PrivKey, which is responsible for signing consensus messages. Validators thus do not have to sign governance transactions with the sensitive CometBFT PrivKey.

#### Burnable Params

There are three parameters that define whether the deposit of a proposal should be burned or returned to the depositors:

* `BurnVoteVeto` burns the proposal deposit if the proposal gets vetoed.
* `BurnVoteQuorum` burns the proposal deposit if the vote does not reach quorum.
* `BurnProposalDepositPrevote` burns the proposal deposit if it does not enter the voting phase.

> Note: These parameters are modifiable via governance.
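The acceptance rules above (bonded tokens exist, quorum reached, not everyone abstained, veto bound not exceeded, Yes threshold met) can be condensed into a single predicate. The following is an illustrative sketch using plain `float64` instead of the SDK's `math.LegacyDec`; all type and field names are stand-ins, and the parameter values shown are examples rather than chain defaults.

```go
package main

import "fmt"

// TallyInput is a hypothetical flattening of the tally state and params.
type TallyInput struct {
	TotalBonded                      float64 // total bonded stake on the chain
	Yes, No, NoWithVeto, Abstain     float64 // voting power cast per option
	Quorum, VetoThreshold, Threshold float64 // e.g. 0.334, 0.334, 0.5 (0.667 expedited)
}

// Passes applies the checks in the order the spec lists them.
func Passes(in TallyInput) bool {
	if in.TotalBonded == 0 {
		return false // no staked coins: proposal fails
	}
	voted := in.Yes + in.No + in.NoWithVeto + in.Abstain
	if voted/in.TotalBonded < in.Quorum {
		return false // quorum not reached
	}
	if voted == in.Abstain {
		return false // everyone abstained
	}
	if in.NoWithVeto/voted > in.VetoThreshold {
		return false // vetoed by more than the veto threshold
	}
	// the Yes threshold is measured against non-abstaining voting power
	return in.Yes/(voted-in.Abstain) > in.Threshold
}

func main() {
	in := TallyInput{
		TotalBonded: 1000, Yes: 300, No: 100, NoWithVeto: 20, Abstain: 30,
		Quorum: 0.334, VetoThreshold: 0.334, Threshold: 0.5,
	}
	fmt.Println(Passes(in)) // prints "true"
}
```

For an expedited proposal the same predicate applies with the higher `Threshold` (66.7% by default).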
+ +## State + +### Constitution + +`Constitution` is found in the genesis state. It is a string field intended to be used to describe the purpose of a particular blockchain, and its expected norms. A few examples of how the constitution field can be used: + +* define the purpose of the chain, laying a foundation for its future development +* set expectations for delegators +* set expectations for validators +* define the chain's relationship to "meatspace" entities, like a foundation or corporation + +Since this is more of a social feature than a technical feature, we'll now get into some items that may have been useful to have in a genesis constitution: + +* What limitations on governance exist, if any? + * is it okay for the community to slash the wallet of a whale that they no longer feel that they want around? (viz: Juno Proposal 4 and 16) + * can governance "socially slash" a validator who is using unapproved MEV? (viz: commonwealth.im/osmosis) + * In the event of an economic emergency, what should validators do? + * Terra crash of May, 2022, saw validators choose to run a new binary with code that had not been approved by governance, because the governance token had been inflated to nothing. +* What is the purpose of the chain, specifically? + * best example of this is the Cosmos hub, where different founding groups, have different interpretations of the purpose of the network. + +This genesis entry, "constitution" hasn't been designed for existing chains, who should likely just ratify a constitution using their governance system. Instead, this is for new chains. It will allow for validators to have a much clearer idea of purpose and the expectations placed on them while operating their nodes. Likewise, for community members, the constitution will give them some idea of what to expect from both the "chain team" and the validators, respectively. 
This constitution is designed to be immutable and placed only in genesis, though that could change over time via a pull request to the cosmos-sdk that allows the constitution to be changed by governance. Communities wishing to make amendments to their original constitution should use the governance mechanism and a "signaling proposal" to do exactly that.

**Ideal use scenario for a cosmos chain constitution**

As a chain developer, you decide that you'd like to provide clarity to your key user groups:

* validators
* token holders
* developers (yourself)

You use the constitution to immutably store some Markdown in genesis, so that when difficult questions come up, the constitution can provide guidance to the community.

### Proposals

`Proposal` objects are used to tally votes and generally track the proposal's state.
They contain an array of arbitrary `sdk.Msg`s which the governance module will attempt
to resolve and then execute if the proposal passes. `Proposal`s are identified by a
unique id and contain a series of timestamps (`submit_time`, `deposit_end_time`,
`voting_start_time`, `voting_end_time`) which track the lifecycle of a proposal.

```protobuf
// Proposal defines the core field members of a governance proposal.
message Proposal {
  // id defines the unique id of the proposal.
  uint64 id = 1;

  // messages are the arbitrary messages to be executed if the proposal passes.
  repeated google.protobuf.Any messages = 2;

  // status defines the proposal status.
  ProposalStatus status = 3;

  // final_tally_result is the final tally result of the proposal. When
  // querying a proposal via gRPC, this field is not populated until the
  // proposal's voting period has ended.
  TallyResult final_tally_result = 4;

  // submit_time is the time of proposal submission.
  google.protobuf.Timestamp submit_time = 5 [(gogoproto.stdtime) = true];

  // deposit_end_time is the end time for deposition.
+ google.protobuf.Timestamp deposit_end_time = 6 [(gogoproto.stdtime) = true]; + + // total_deposit is the total deposit on the proposal. + repeated cosmos.base.v1beta1.Coin total_deposit = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // voting_start_time is the starting time to vote on a proposal. + google.protobuf.Timestamp voting_start_time = 8 [(gogoproto.stdtime) = true]; + + // voting_end_time is the end time of voting on a proposal. + google.protobuf.Timestamp voting_end_time = 9 [(gogoproto.stdtime) = true]; + + // metadata is any arbitrary metadata attached to the proposal. + string metadata = 10; + + // title is the title of the proposal + // + // Since: cosmos-sdk 0.47 + string title = 11; + + // summary is a short summary of the proposal + // + // Since: cosmos-sdk 0.47 + string summary = 12; + + // Proposer is the address of the proposal sumbitter + // + // Since: cosmos-sdk 0.47 + string proposer = 13 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +A proposal will generally require more than just a set of messages to explain its +purpose but need some greater justification and allow a means for interested participants +to discuss and debate the proposal. +In most cases, **it is encouraged to have an off-chain system that supports the on-chain governance process**. +To accommodate for this, a proposal contains a special **`metadata`** field, a string, +which can be used to add context to the proposal. The `metadata` field allows custom use for networks, +however, it is expected that the field contains a URL or some form of CID using a system such as +[IPFS](https://docs.ipfs.io/concepts/content-addressing/). To support the case of +interoperability across networks, the SDK recommends that the `metadata` represents +the following `JSON` template: + +```json +{ + "title": "...", + "description": "...", + "forum": "...", / a link to the discussion platform (i.e. Discord) + "other": "..." 
/ any extra data that doesn't correspond to the other fields +} +``` + +This makes it far easier for clients to support multiple networks. + +The metadata has a maximum length that is chosen by the app developer, and +passed into the gov keeper as a config. The default maximum length in the SDK is 255 characters. + +#### Writing a module that uses governance + +There are many aspects of a chain, or of the individual modules that you may want to +use governance to perform such as changing various parameters. This is very simple +to do. First, write out your message types and `MsgServer` implementation. Add an +`authority` field to the keeper which will be populated in the constructor with the +governance module account: `govKeeper.GetGovernanceAccount().GetAddress()`. Then for +the methods in the `msg_server.go`, perform a check on the message that the signer +matches `authority`. This will prevent any user from executing that message. + +### Parameters and base types + +`Parameters` define the rules according to which votes are run. There can only +be one active parameter set at any given time. If governance wants to change a +parameter set, either to modify a value or add/remove a parameter field, a new +parameter set has to be created and the previous one rendered inactive. + +#### DepositParams + +```protobuf +// DepositParams defines the params for deposits on governance proposals. +message DepositParams { + // Minimum deposit for a proposal to enter voting period. + repeated cosmos.base.v1beta1.Coin min_deposit = 1 + [(gogoproto.nullable) = false, (gogoproto.jsontag) = "min_deposit,omitempty"]; + + // Maximum period for Atom holders to deposit on a proposal. Initial value: 2 + // months. + google.protobuf.Duration max_deposit_period = 2 + [(gogoproto.stdduration) = true, (gogoproto.jsontag) = "max_deposit_period,omitempty"]; +} +``` + +#### VotingParams + +```protobuf +// VotingParams defines the params for voting on governance proposals. 
+message VotingParams { + // Duration of the voting period. + google.protobuf.Duration voting_period = 1 [(gogoproto.stdduration) = true]; +} +``` + +#### TallyParams + +```protobuf +// TallyParams defines the params for tallying votes on governance proposals. +message TallyParams { + // Minimum percentage of total stake needed to vote for a result to be + // considered valid. + string quorum = 1 [(cosmos_proto.scalar) = "cosmos.Dec"]; + + // Minimum proportion of Yes votes for proposal to pass. Default value: 0.5. + string threshold = 2 [(cosmos_proto.scalar) = "cosmos.Dec"]; + + // Minimum value of Veto votes to Total votes ratio for proposal to be + // vetoed. Default value: 1/3. + string veto_threshold = 3 [(cosmos_proto.scalar) = "cosmos.Dec"]; +} +``` + +Parameters are stored in a global `GlobalParams` KVStore. + +Additionally, we introduce some basic types: + +```go expandable +type Vote byte + +const ( + VoteYes = 0x1 + VoteNo = 0x2 + VoteNoWithVeto = 0x3 + VoteAbstain = 0x4 +) + +type ProposalType string + +const ( + ProposalTypePlainText = "Text" + ProposalTypeSoftwareUpgrade = "SoftwareUpgrade" +) + +type ProposalStatus byte + +const ( + StatusNil ProposalStatus = 0x00 + StatusDepositPeriod ProposalStatus = 0x01 / Proposal is submitted. Participants can deposit on it but not vote + StatusVotingPeriod ProposalStatus = 0x02 / MinDeposit is reached, participants can vote + StatusPassed ProposalStatus = 0x03 / Proposal passed and successfully executed + StatusRejected ProposalStatus = 0x04 / Proposal has been rejected + StatusFailed ProposalStatus = 0x05 / Proposal passed but failed execution +) +``` + +### Deposit + +```protobuf +// Deposit defines an amount deposited by an account address to an active +// proposal. +message Deposit { + // proposal_id defines the unique id of the proposal. + uint64 proposal_id = 1; + + // depositor defines the deposit addresses from the proposals. 
  string depositor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // amount to be deposited by depositor.
  repeated cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}
```

### ValidatorGovInfo

This type is used in a temporary map when tallying votes:

```go
type ValidatorGovInfo struct {
	Minus sdk.Dec
	Vote  Vote
}
```

## Stores

Stores are KVStores in the multi-store. The key to find the store is the first parameter in the list.

We will use one KVStore `Governance` to store four mappings:

* A mapping from `proposalID|'proposal'` to `Proposal`.
* A mapping from `proposalID|'addresses'|address` to `Vote`. This mapping allows
  us to query all addresses that voted on the proposal, along with their vote, by
  doing a range query on `proposalID:addresses`.
* A mapping from `ParamsKey|'Params'` to `Params`. This mapping allows querying all
  x/gov params.
* A mapping from `VotingPeriodProposalKeyPrefix|proposalID` to a single byte. This allows
  us to know whether a proposal is in the voting period or not with very low gas cost.

For pseudocode purposes, here are the two functions we will use to read from or write to stores:

* `load(StoreKey, Key)`: Retrieve the item stored at key `Key` in the store found at key `StoreKey` in the multistore
* `store(StoreKey, Key, Value)`: Write value `Value` at key `Key` in the store found at key `StoreKey` in the multistore

### Proposal Processing Queue

**Store:**

* `ProposalProcessingQueue`: A queue `queue[proposalID]` containing all the
  `ProposalIDs` of proposals that reached `MinDeposit`. During each `EndBlock`,
  all the proposals that have reached the end of their voting period are processed.
  To process a finished proposal, the application tallies the votes, computes the
  votes of each validator and checks if every validator in the validator set has
  voted. If the proposal is accepted, deposits are refunded.
Finally, the proposal content `Handler` is executed.
+
+And the pseudocode for the `ProposalProcessingQueue`:
+
+```go expandable
+in EndBlock do
+  for finishedProposalID in GetAllFinishedProposalIDs(block.Time)
+    proposal = load(Governance, <proposalID|'proposal'>) // proposal is a const key
+
+    validators = Keeper.getAllValidators()
+    tmpValMap := map(sdk.AccAddress)ValidatorGovInfo
+
+    // Initiate mapping at 0. This is the amount of shares of the validator's vote that will be overridden by their delegator's votes
+    for each validator in validators
+      tmpValMap(validator.OperatorAddr).Minus = 0
+
+    // Tally
+    voterIterator = rangeQuery(Governance, <proposalID|'addresses'>) // return all the addresses that voted on the proposal
+    for each (voterAddress, vote) in voterIterator
+      delegations = stakingKeeper.getDelegations(voterAddress) // get all delegations for current voter
+
+      for each delegation in delegations
+        // make sure delegation.Shares does NOT include shares being unbonded
+        tmpValMap(delegation.ValidatorAddr).Minus += delegation.Shares
+        proposal.updateTally(vote, delegation.Shares)
+
+      _, isVal = stakingKeeper.getValidator(voterAddress)
+      if (isVal)
+        tmpValMap(voterAddress).Vote = vote
+
+    tallyingParam = load(GlobalParams, 'TallyingParam')
+
+    // Update tally if validator voted
+    for each validator in validators
+      if tmpValMap(validator).HasVoted
+        proposal.updateTally(tmpValMap(validator).Vote, (validator.TotalShares - tmpValMap(validator).Minus))
+
+    // Check if proposal is accepted or rejected
+    totalNonAbstain := proposal.YesVotes + proposal.NoVotes + proposal.NoWithVetoVotes
+    if (proposal.Votes.YesVotes/totalNonAbstain > tallyingParam.Threshold AND proposal.Votes.NoWithVetoVotes/totalNonAbstain < tallyingParam.Veto)
+      // proposal was accepted at the end of the voting period
+      // refund deposits (non-voters already punished)
+      for each (amount, depositor) in proposal.Deposits
+        depositor.AtomBalance += amount
+
+      stateWriter, err := proposal.Handler()
+      if err != nil
+        // proposal passed but failed during state execution
+        proposal.CurrentStatus = ProposalStatusFailed
+      else
+        // proposal passed and state is persisted
+        proposal.CurrentStatus = ProposalStatusAccepted
+        stateWriter.save()
+    else
+      // proposal was rejected
+      proposal.CurrentStatus = ProposalStatusRejected
+
+    store(Governance, <proposalID|'proposal'>, proposal)
+```
+
+### Legacy Proposal
+
+
+Legacy proposals are deprecated. Use the new proposal flow by granting the governance module the right to execute the message.
+
+
+A legacy proposal is the old implementation of governance proposals.
+Unlike proposals, which can contain any messages, a legacy proposal only allows submitting a set of pre-defined proposals.
+These proposals are defined by their types and handled by handlers that are registered in the gov v1beta1 router.
+
+More information on how to submit proposals can be found in the [client section](#client).
+
+## Messages
+
+### Proposal Submission
+
+Proposals can be submitted by any account via a `MsgSubmitProposal` transaction.
+
+```protobuf
+// MsgSubmitProposal defines an sdk.Msg type that supports submitting arbitrary
+// proposal Content.
+message MsgSubmitProposal {
+  option (cosmos.msg.v1.signer) = "proposer";
+  option (amino.name) = "cosmos-sdk/v1/MsgSubmitProposal";
+
+  // messages are the arbitrary messages to be executed if proposal passes.
+  repeated google.protobuf.Any messages = 1;
+
+  // initial_deposit is the deposit value that must be paid at proposal submission.
+  repeated cosmos.base.v1beta1.Coin initial_deposit = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // proposer is the account address of the proposer.
+  string proposer = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // metadata is any arbitrary metadata attached to the proposal.
+  string metadata = 4;
+
+  // title is the title of the proposal.
+  //
+  // Since: cosmos-sdk 0.47
+  string title = 5;
+
+  // summary is the summary of the proposal
+  //
+  // Since: cosmos-sdk 0.47
+  string summary = 6;
+}
+```
+
+All `sdk.Msgs` passed into the `messages` field of a `MsgSubmitProposal` message
+must be registered in the app's `MsgServiceRouter`. Each of these messages must
+have one signer, namely the gov module account. Finally, the metadata length
+must not be larger than the `maxMetadataLen` config passed into the gov keeper.
+The `initialDeposit` must be strictly positive and conform to the accepted denom of the `MinDeposit` param.
+
+**State modifications:**
+
+* Generate new `proposalID`
+* Create new `Proposal`
+* Initialise `Proposal`'s attributes
+* Decrease balance of sender by `InitialDeposit`
+* If `MinDeposit` is reached:
+  * Push `proposalID` in `ProposalProcessingQueue`
+* Transfer `InitialDeposit` from the `Proposer` to the governance `ModuleAccount`
+
+### Deposit
+
+Once a proposal is submitted, if `Proposal.TotalDeposit < ActiveParam.MinDeposit`, Atom holders can send
+`MsgDeposit` transactions to increase the proposal's deposit.
+
+A deposit is accepted iff:
+
+* The proposal exists
+* The proposal is not in the voting period
+* The deposited coins conform to the accepted denom from the `MinDeposit` param
+
+```protobuf
+// MsgDeposit defines a message to submit a deposit to an existing proposal.
+message MsgDeposit {
+  option (cosmos.msg.v1.signer) = "depositor";
+  option (amino.name) = "cosmos-sdk/v1/MsgDeposit";
+
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1 [(gogoproto.jsontag) = "proposal_id", (amino.dont_omitempty) = true];
+
+  // depositor defines the deposit addresses from the proposals.
+  string depositor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // amount to be deposited by depositor.
+  repeated cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+**State modifications:**
+
+* Decrease balance of sender by `deposit`
+* Add `deposit` of sender in `proposal.Deposits`
+* Increase `proposal.TotalDeposit` by sender's `deposit`
+* If `MinDeposit` is reached:
+  * Push `proposalID` in `ProposalProcessingQueue`
+* Transfer `Deposit` from the `depositor` to the governance `ModuleAccount`
+
+### Vote
+
+Once `ActiveParam.MinDeposit` is reached, the voting period starts. From there,
+bonded Atom holders are able to send `MsgVote` transactions to cast their
+vote on the proposal.
+
+```protobuf
+// MsgVote defines a message to cast a vote.
+message MsgVote {
+  option (cosmos.msg.v1.signer) = "voter";
+  option (amino.name) = "cosmos-sdk/v1/MsgVote";
+
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1 [(gogoproto.jsontag) = "proposal_id", (amino.dont_omitempty) = true];
+
+  // voter is the voter address for the proposal.
+  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // option defines the vote option.
+  VoteOption option = 3;
+
+  // metadata is any arbitrary metadata attached to the Vote.
+  string metadata = 4;
+}
+```
+
+**State modifications:**
+
+* Record `Vote` of sender
+
+
+Gas cost for this message has to take into account the future tallying of the vote in EndBlocker.
+ + +## Events + +The governance module emits the following events: + +### EndBlocker + +| Type | Attribute Key | Attribute Value | +| ------------------ | ---------------- | ---------------- | +| inactive\_proposal | proposal\_id | `{proposalID}` | +| inactive\_proposal | proposal\_result | `{proposalResult}` | +| active\_proposal | proposal\_id | `{proposalID}` | +| active\_proposal | proposal\_result | `{proposalResult}` | + +### Handlers + +#### MsgSubmitProposal + +| Type | Attribute Key | Attribute Value | +| --------------------- | --------------------- | ---------------- | +| submit\_proposal | proposal\_id | `{proposalID}` | +| submit\_proposal \[0] | voting\_period\_start | `{proposalID}` | +| proposal\_deposit | amount | `{depositAmount}` | +| proposal\_deposit | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | submit\_proposal | +| message | sender | `{senderAddress}` | + +* \[0] Event only emitted if the voting period starts during the submission. 
+ +#### MsgVote + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| proposal\_vote | option | `{voteOption}` | +| proposal\_vote | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | vote | +| message | sender | `{senderAddress}` | + +#### MsgVoteWeighted + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------------- | +| proposal\_vote | option | `{weightedVoteOptions}` | +| proposal\_vote | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | vote | +| message | sender | `{senderAddress}` | + +#### MsgDeposit + +| Type | Attribute Key | Attribute Value | +| ---------------------- | --------------------- | --------------- | +| proposal\_deposit | amount | `{depositAmount}` | +| proposal\_deposit | proposal\_id | `{proposalID}` | +| proposal\_deposit \[0] | voting\_period\_start | `{proposalID}` | +| message | module | governance | +| message | action | deposit | +| message | sender | `{senderAddress}` | + +* \[0] Event only emitted if the voting period starts during the submission. 
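The tables above all share the same type-plus-attributes shape. As a rough, self-contained illustration of that layout — the `Event` struct and `proposalDepositEvent` constructor below are hypothetical helpers for this sketch, not SDK types:

```go
package main

import (
	"fmt"
	"strconv"
)

// Event mirrors the type / attribute-key / attribute-value layout
// used by the event tables in this section.
type Event struct {
	Type  string
	Attrs map[string]string
}

// proposalDepositEvent builds the proposal_deposit rows shown in the
// MsgDeposit table (illustrative helper, not SDK API).
func proposalDepositEvent(proposalID uint64, depositAmount string) Event {
	return Event{
		Type: "proposal_deposit",
		Attrs: map[string]string{
			"proposal_id": strconv.FormatUint(proposalID, 10),
			"amount":      depositAmount,
		},
	}
}

func main() {
	ev := proposalDepositEvent(1, "10000000stake")
	fmt.Printf("%s proposal_id=%s amount=%s\n", ev.Type, ev.Attrs["proposal_id"], ev.Attrs["amount"])
}
```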
+
+## Parameters
+
+The governance module contains the following parameters:
+
+| Key                              | Type             | Example                                    |
+| -------------------------------- | ---------------- | ------------------------------------------ |
+| min\_deposit                     | array (coins)    | \[`{"denom":"uatom","amount":"10000000"}`] |
+| max\_deposit\_period             | string (time ns) | "172800000000000" (172800s)                |
+| voting\_period                   | string (time ns) | "172800000000000" (172800s)                |
+| quorum                           | string (dec)     | "0.334000000000000000"                     |
+| threshold                        | string (dec)     | "0.500000000000000000"                     |
+| veto                             | string (dec)     | "0.334000000000000000"                     |
+| expedited\_threshold             | string (dec)     | "0.667000000000000000"                     |
+| expedited\_voting\_period        | string (time ns) | "86400000000000" (86400s)                  |
+| expedited\_min\_deposit          | array (coins)    | \[`{"denom":"uatom","amount":"50000000"}`] |
+| burn\_proposal\_deposit\_prevote | bool             | false                                      |
+| burn\_vote\_quorum               | bool             | false                                      |
+| burn\_vote\_veto                 | bool             | true                                       |
+| min\_initial\_deposit\_ratio     | string           | "0.1"                                      |
+
+**NOTE**: Unlike other modules, the governance module's parameters are objects. If only a subset
+of parameters needs to be changed, only those parameters need to be included, not the entire
+parameter object structure.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `gov` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `gov` state.
+
+```bash
+simd query gov --help
+```
+
+##### deposit
+
+The `deposit` command allows users to query a deposit for a given proposal from a given depositor.
+
+```bash
+simd query gov deposit [proposal-id] [depositor-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query gov deposit 1 cosmos1..
+```
+
+Example Output:
+
+```bash
+amount:
+- amount: "100"
+  denom: stake
+depositor: cosmos1..
+proposal_id: "1"
+```
+
+##### deposits
+
+The `deposits` command allows users to query all deposits for a given proposal.
+ +```bash +simd query gov deposits [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov deposits 1 +``` + +Example Output: + +```bash +deposits: +- amount: + - amount: "100" + denom: stake + depositor: cosmos1.. + proposal_id: "1" +pagination: + next_key: null + total: "0" +``` + +##### param + +The `param` command allows users to query a given parameter for the `gov` module. + +```bash +simd query gov param [param-type] [flags] +``` + +Example: + +```bash +simd query gov param voting +``` + +Example Output: + +```bash +voting_period: "172800000000000" +``` + +##### params + +The `params` command allows users to query all parameters for the `gov` module. + +```bash +simd query gov params [flags] +``` + +Example: + +```bash +simd query gov params +``` + +Example Output: + +```bash expandable +deposit_params: + max_deposit_period: 172800s + min_deposit: + - amount: "10000000" + denom: stake +params: + expedited_min_deposit: + - amount: "50000000" + denom: stake + expedited_threshold: "0.670000000000000000" + expedited_voting_period: 86400s + max_deposit_period: 172800s + min_deposit: + - amount: "10000000" + denom: stake + min_initial_deposit_ratio: "0.000000000000000000" + proposal_cancel_burn_rate: "0.500000000000000000" + quorum: "0.334000000000000000" + threshold: "0.500000000000000000" + veto_threshold: "0.334000000000000000" + voting_period: 172800s +tally_params: + quorum: "0.334000000000000000" + threshold: "0.500000000000000000" + veto_threshold: "0.334000000000000000" +voting_params: + voting_period: 172800s +``` + +##### proposal + +The `proposal` command allows users to query a given proposal. 
+ +```bash +simd query gov proposal [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposal 1 +``` + +Example Output: + +```bash expandable +deposit_end_time: "2022-03-30T11:50:20.819676256Z" +final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" +id: "1" +messages: +- '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. +metadata: AQ== +status: PROPOSAL_STATUS_DEPOSIT_PERIOD +submit_time: "2022-03-28T11:50:20.819676256Z" +total_deposit: +- amount: "10" + denom: stake +voting_end_time: null +voting_start_time: null +``` + +##### proposals + +The `proposals` command allows users to query all proposals with optional filters. + +```bash +simd query gov proposals [flags] +``` + +Example: + +```bash +simd query gov proposals +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +proposals: +- deposit_end_time: "2022-03-30T11:50:20.819676256Z" + final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" + id: "1" + messages: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + metadata: AQ== + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2022-03-28T11:50:20.819676256Z" + total_deposit: + - amount: "10" + denom: stake + voting_end_time: null + voting_start_time: null +- deposit_end_time: "2022-03-30T14:02:41.165025015Z" + final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" + id: "2" + messages: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. 
+ metadata: AQ== + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2022-03-28T14:02:41.165025015Z" + total_deposit: + - amount: "10" + denom: stake + voting_end_time: null + voting_start_time: null +``` + +##### proposer + +The `proposer` command allows users to query the proposer for a given proposal. + +```bash +simd query gov proposer [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposer 1 +``` + +Example Output: + +```bash +proposal_id: "1" +proposer: cosmos1.. +``` + +##### tally + +The `tally` command allows users to query the tally of a given proposal vote. + +```bash +simd query gov tally [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov tally 1 +``` + +Example Output: + +```bash +abstain: "0" +"no": "0" +no_with_veto: "0" +"yes": "1" +``` + +##### vote + +The `vote` command allows users to query a vote for a given proposal. + +```bash +simd query gov vote [proposal-id] [voter-addr] [flags] +``` + +Example: + +```bash +simd query gov vote 1 cosmos1.. +``` + +Example Output: + +```bash +option: VOTE_OPTION_YES +options: +- option: VOTE_OPTION_YES + weight: "1.000000000000000000" +proposal_id: "1" +voter: cosmos1.. +``` + +##### votes + +The `votes` command allows users to query all votes for a given proposal. + +```bash +simd query gov votes [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov votes 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "0" +votes: +- option: VOTE_OPTION_YES + options: + - option: VOTE_OPTION_YES + weight: "1.000000000000000000" + proposal_id: "1" + voter: cosmos1.. +``` + +#### Transactions + +The `tx` commands allow users to interact with the `gov` module. + +```bash +simd tx gov --help +``` + +##### deposit + +The `deposit` command allows users to deposit tokens for a given proposal. + +```bash +simd tx gov deposit [proposal-id] [deposit] [flags] +``` + +Example: + +```bash +simd tx gov deposit 1 10000000stake --from cosmos1.. 
+```
+
+##### draft-proposal
+
+The `draft-proposal` command allows users to draft any type of proposal.
+The command returns a `draft_proposal.json`, to be used by `submit-proposal` after being completed.
+The `draft_metadata.json` is meant to be uploaded to [IPFS](#metadata).
+
+```bash
+simd tx gov draft-proposal
+```
+
+##### submit-proposal
+
+The `submit-proposal` command allows users to submit a governance proposal along with some messages and metadata.
+Messages, metadata and deposit are defined in a JSON file.
+
+```bash
+simd tx gov submit-proposal [path-to-proposal-json] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-proposal /path/to/proposal.json --from cosmos1..
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "messages": [
+    {
+      "@type": "/cosmos.bank.v1beta1.MsgSend",
+      "from_address": "cosmos1...", // The gov module address
+      "to_address": "cosmos1...",
+      "amount":[{
+        "denom": "stake",
+        "amount": "10"}]
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "10stake",
+  "title": "Proposal Title",
+  "summary": "Proposal Summary"
+}
+```
+
+
+By default the metadata, summary and title are each limited to 255 characters; this can be overridden by the application developer.
+
+
+
+When metadata is not specified, the title is limited to 255 characters and the summary to 40x the title length.
+
+
+##### submit-legacy-proposal
+
+The `submit-legacy-proposal` command allows users to submit a governance legacy proposal along with an initial deposit.
+
+```bash
+simd tx gov submit-legacy-proposal [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-legacy-proposal --title="Test Proposal" --description="testing" --type="Text" --deposit="100000000stake" --from cosmos1..
+```
+
+Example (`param-change`):
+
+```bash
+simd tx gov submit-legacy-proposal param-change proposal.json --from cosmos1..
+```
+
+```json expandable
+{
+  "title": "Test Proposal",
+  "description": "testing, testing, 1, 2, 3",
+  "changes": [
+    {
+      "subspace": "staking",
+      "key": "MaxValidators",
+      "value": 100
+    }
+  ],
+  "deposit": "10000000stake"
+}
+```
+
+##### cancel-proposal
+
+Once a proposal is canceled, `deposits * proposal_cancel_ratio` of the proposal's deposits will be burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, that portion is burned. The remaining deposits will be sent back to the depositors.
+
+```bash
+simd tx gov cancel-proposal [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov cancel-proposal 1 --from cosmos1...
+```
+
+##### vote
+
+The `vote` command allows users to submit a vote for a given governance proposal.
+
+```bash
+simd tx gov vote [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov vote 1 yes --from cosmos1..
+```
+
+##### weighted-vote
+
+The `weighted-vote` command allows users to submit a weighted vote for a given governance proposal.
+
+```bash
+simd tx gov weighted-vote [proposal-id] [weighted-options] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov weighted-vote 1 yes=0.5,no=0.5 --from cosmos1..
+```
+
+### gRPC
+
+A user can query the `gov` module using gRPC endpoints.
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query a given proposal.
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposalId": "1", + "content": {"@type":"/cosmos.gov.v1beta1.TextProposal","description":"testing, testing, 1, 2, 3","title":"Test Proposal"}, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2021-09-16T19:40:08.712440474Z", + "depositEndTime": "2021-09-18T19:40:08.712440474Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "votingStartTime": "2021-09-16T19:40:08.712440474Z", + "votingEndTime": "2021-09-18T19:40:08.712440474Z", + "title": "Test Proposal", + "summary": "testing, testing, 1, 2, 3" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "id": "1", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Test Proposal", + "summary": "testing, testing, 1, 2, 3" + } +} +``` + +#### Proposals + +The `Proposals` endpoint allows users to query all proposals with optional filters. 
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Proposals +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposalId": "1", + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": "2022-03-30T14:25:26.644857113Z" + }, + { + "proposalId": "2", + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2022-03-28T14:02:41.165025015Z", + "depositEndTime": "2022-03-30T14:02:41.165025015Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "votingStartTime": "0001-01-01T00:00:00Z", + "votingEndTime": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "total": "2" + } +} + +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Proposals +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.gov.v1.Query/Proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": 
"2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + }, + { + "id": "2", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T14:02:41.165025015Z", + "depositEndTime": "2022-03-30T14:02:41.165025015Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### Vote + +The `Vote` endpoint allows users to query a vote for a given proposal. + +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Vote +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Vote +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1000000000000000000" + } + ] + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Vote +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Vote +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +#### Votes + +The `Votes` endpoint allows users to query all votes for a given proposal. 
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Votes +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1000000000000000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Votes +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### Params + +The `Params` endpoint allows users to query all parameters for the `gov` module. + +{/* TODO: #10197 Querying governance params outputs nil values */} + +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"params_type":"voting"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Params +``` + +Example Output: + +```bash expandable +{ + "votingParams": { + "votingPeriod": "172800s" + }, + "depositParams": { + "maxDepositPeriod": "0s" + }, + "tallyParams": { + "quorum": "MA==", + "threshold": "MA==", + "vetoThreshold": "MA==" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"params_type":"voting"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Params +``` + +Example Output: + +```bash +{ + "votingParams": { + "votingPeriod": "172800s" + } +} +``` + +#### Deposit + +The `Deposit` endpoint allows users to query a deposit for a given proposal from a given depositor. 
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+    localhost:9090 \
+    cosmos.gov.v1.Query/Deposit
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+#### Deposits
+
+The `Deposits` endpoint allows users to query all deposits for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1"}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposits": [
+    {
+      "proposalId": "1",
+      "depositor": "cosmos1..",
+      "amount": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1"}' \
+    localhost:9090 \
+    cosmos.gov.v1.Query/Deposits
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposits": [
+    {
+      "proposalId": "1",
+      "depositor": "cosmos1..",
+      "amount": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+#### TallyResult
+
+The `TallyResult` endpoint allows users to query the tally of a given proposal.
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +### REST + +A user can query the `gov` module using REST endpoints. + +#### proposal + +The `proposals` endpoint allows users to query a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposal_id": "1", + "content": null, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "id": "1", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": 
"PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } +} +``` + +#### proposals + +The `proposals` endpoint also allows users to query all proposals with optional filters. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposal_id": "1", + "content": null, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z" + }, + { + "proposal_id": "2", + "content": null, + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T14:02:41.165025015Z", + "deposit_end_time": "2022-03-30T14:02:41.165025015Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "voting_start_time": "0001-01-01T00:00:00Z", + "voting_end_time": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals +``` + +Example: + +```bash +curl 
localhost:1317/cosmos/gov/v1/proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + }, + { + "id": "2", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T14:02:41.165025015Z", + "deposit_end_time": "2022-03-30T14:02:41.165025015Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "voting_start_time": null, + "voting_end_time": null, + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### voter vote + +The `votes` endpoint allows users to query a vote for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/votes/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ], + "metadata": "" + } +} +``` + +#### votes + +The `votes` endpoint allows users to query all votes for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ], + "metadata": "" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### params + +The `params` endpoint allows users to query all parameters for the `gov` module. 
+ +{/* TODO: #10197 Querying governance params outputs nil values */} + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/params/voting +``` + +Example Output: + +```bash expandable +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/params/voting +``` + +Example Output: + +```bash expandable +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +#### deposits + +The `deposits` endpoint allows users to query a deposit for a given proposal from a given depositor. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/deposits/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +#### proposal deposits + +The `deposits` endpoint allows users to query all deposits for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits +``` + +Example Output: + +```bash expandable +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/deposits +``` + +Example Output: + +```bash expandable +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### tally + +The `tally` endpoint allows users to query the tally of a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` + +## Metadata + +The gov module has two locations for metadata where users can provide further context about the on-chain actions they are taking. 
By default all metadata fields have a 255 character length field where metadata can be stored in json format, either on-chain or off-chain depending on the amount of data required. Here we provide a recommendation for the json structure and where the data should be stored. There are two important factors in making these recommendations. First, that the gov and group modules are consistent with one another, note the number of proposals made by all groups may be quite large. Second, that client applications such as block explorers and governance interfaces have confidence in the consistency of metadata structure across chains. + +### Proposal + +Location: off-chain as json object stored on IPFS (mirrors [group proposal](/docs/sdk/next/documentation/module-system/group#metadata)) + +```json +{ + "title": "", + "authors": [""], + "summary": "", + "details": "", + "proposal_forum_url": "", + "vote_option_context": "", +} +``` + + +The `authors` field is an array of strings, this is to allow for multiple authors to be listed in the metadata. +In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility. + + +### Vote + +Location: on-chain as json within 255 character limit (mirrors [group vote](/docs/sdk/next/documentation/module-system/group#metadata)) + +```json +{ + "justification": "", +} +``` + +## Future Improvements + +The current documentation only describes the minimum viable product for the +governance module. Future improvements may include: + +* **`BountyProposals`:** If accepted, a `BountyProposal` creates an open + bounty. The `BountyProposal` specifies how many Atoms will be given upon + completion. These Atoms will be taken from the `reserve pool`. After a + `BountyProposal` is accepted by governance, anybody can submit a + `SoftwareUpgradeProposal` with the code to claim the bounty. 
Note that once a + `BountyProposal` is accepted, the corresponding funds in the `reserve pool` + are locked so that payment can always be honored. In order to link a + `SoftwareUpgradeProposal` to an open bounty, the submitter of the + `SoftwareUpgradeProposal` will use the `Proposal.LinkedProposal` attribute. + If a `SoftwareUpgradeProposal` linked to an open bounty is accepted by + governance, the funds that were reserved are automatically transferred to the + submitter. +* **Complex delegation:** Delegators could choose other representatives than + their validators. Ultimately, the chain of representatives would always end + up to a validator, but delegators could inherit the vote of their chosen + representative before they inherit the vote of their validator. In other + words, they would only inherit the vote of their validator if their other + appointed representative did not vote. +* **Better process for proposal review:** There would be two parts to + `proposal.Deposit`, one for anti-spam (same as in MVP) and an other one to + reward third party auditors. diff --git a/docs/sdk/next/documentation/module-system/group.mdx b/docs/sdk/next/documentation/module-system/group.mdx new file mode 100644 index 00000000..0a13e657 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/group.mdx @@ -0,0 +1,2405 @@ +--- +title: "`x/group`" +--- + +**DEPRECATED**: This package is deprecated and will be removed in the next major release. The `x/group` module will be moved to a separate repo `github.com/cosmos/cosmos-sdk-legacy`. + +## Abstract + +The following documents specify the group module. + +This module allows the creation and management of on-chain multisig accounts and enables voting for message execution based on configurable decision policies. 
+ +## Contents + +- [Concepts](#concepts) + - [Group](#group) + - [Group Policy](#group-policy) + - [Decision Policy](#decision-policy) + - [Proposal](#proposal) + - [Pruning](#pruning) +- [State](#state) + - [Group Table](#group-table) + - [Group Member Table](#group-member-table) + - [Group Policy Table](#group-policy-table) + - [Proposal Table](#proposal-table) + - [Vote Table](#vote-table) +- [Msg Service](#msg-service) + - [Msg/CreateGroup](#msgcreategroup) + - [Msg/UpdateGroupMembers](#msgupdategroupmembers) + - [Msg/UpdateGroupAdmin](#msgupdategroupadmin) + - [Msg/UpdateGroupMetadata](#msgupdategroupmetadata) + - [Msg/CreateGroupPolicy](#msgcreategrouppolicy) + - [Msg/CreateGroupWithPolicy](#msgcreategroupwithpolicy) + - [Msg/UpdateGroupPolicyAdmin](#msgupdategrouppolicyadmin) + - [Msg/UpdateGroupPolicyDecisionPolicy](#msgupdategrouppolicydecisionpolicy) + - [Msg/UpdateGroupPolicyMetadata](#msgupdategrouppolicymetadata) + - [Msg/SubmitProposal](#msgsubmitproposal) + - [Msg/WithdrawProposal](#msgwithdrawproposal) + - [Msg/Vote](#msgvote) + - [Msg/Exec](#msgexec) + - [Msg/LeaveGroup](#msgleavegroup) +- [Events](#events) + - [EventCreateGroup](#eventcreategroup) + - [EventUpdateGroup](#eventupdategroup) + - [EventCreateGroupPolicy](#eventcreategrouppolicy) + - [EventUpdateGroupPolicy](#eventupdategrouppolicy) + - [EventCreateProposal](#eventcreateproposal) + - [EventWithdrawProposal](#eventwithdrawproposal) + - [EventVote](#eventvote) + - [EventExec](#eventexec) + - [EventLeaveGroup](#eventleavegroup) + - [EventProposalPruned](#eventproposalpruned) +- [Client](#client) + - [CLI](#cli) + - [gRPC](#grpc) + - [REST](#rest) +- [Metadata](#metadata) + +## Concepts + +### Group + +A group is simply an aggregation of accounts with associated weights. It is not +an account and doesn't have a balance. It doesn't in and of itself have any +sort of voting or decision weight. 
It does have an "administrator" which has +the ability to add, remove and update members in the group. Note that a +group policy account could be an administrator of a group, and that the +administrator doesn't necessarily have to be a member of the group. + +### Group Policy + +A group policy is an account associated with a group and a decision policy. +Group policies are abstracted from groups because a single group may have +multiple decision policies for different types of actions. Managing group +membership separately from decision policies results in the least overhead +and keeps membership consistent across different policies. The pattern that +is recommended is to have a single master group policy for a given group, +and then to create separate group policies with different decision policies +and delegate the desired permissions from the master account to +those "sub-accounts" using the `x/authz` module. + +### Decision Policy + +A decision policy is the mechanism by which members of a group can vote on +proposals, as well as the rules that dictate whether a proposal should pass +or not based on its tally outcome. + +All decision policies generally would have a minimum execution period and a +maximum voting window. The minimum execution period is the minimum amount of time +that must pass after submission in order for a proposal to potentially be executed, and it may +be set to 0. The maximum voting window is the maximum time after submission that a proposal may +be voted on before it is tallied. + +The chain developer also defines an app-wide maximum execution period, which is +the maximum amount of time after a proposal's voting period end where users are +allowed to execute a proposal. + +The current group module comes shipped with two decision policies: threshold +and percentage. 
Any chain developer can extend upon these two, by creating +custom decision policies, as long as they adhere to the `DecisionPolicy` +interface: + +```go + Final bool +} + +// DecisionPolicy is the persistent set of rules to determine the result of election on a proposal. +type DecisionPolicy interface { + proto.Message + + // GetVotingPeriod returns the duration after proposal submission where + // votes are accepted. + GetVotingPeriod() time.Duration + // GetMinExecutionPeriod returns the minimum duration after submission + // where we can execution a proposal. It can be set to 0 or to a value + // lesser than VotingPeriod to allow TRY_EXEC. + GetMinExecutionPeriod() time.Duration + // Allow defines policy-specific logic to allow a proposal to pass or not, + // based on its tally result, the group's total power and the time since + // the proposal was submitted. + Allow(tallyResult TallyResult, totalPower string) (DecisionPolicyResult, error) + +``` + +#### Threshold decision policy + +A threshold decision policy defines a threshold of yes votes (based on a tally +of voter weights) that must be achieved in order for a proposal to pass. For +this decision policy, abstain and veto are simply treated as no's. + +This decision policy also has a VotingPeriod window and a MinExecutionPeriod +window. The former defines the duration after proposal submission where members +are allowed to vote, after which tallying is performed. The latter specifies +the minimum duration after proposal submission where the proposal can be +executed. If set to 0, then the proposal is allowed to be executed immediately +on submission (using the `TRY_EXEC` option). Obviously, MinExecutionPeriod +cannot be greater than VotingPeriod+MaxExecutionPeriod (where MaxExecution is +the app-defined duration that specifies the window after voting ended where a +proposal can be executed). 
+ +#### Percentage decision policy + +A percentage decision policy is similar to a threshold decision policy, except +that the threshold is not defined as a constant weight, but as a percentage. +It's more suited for groups where the group members' weights can be updated, as +the percentage threshold stays the same, and doesn't depend on how those member +weights get updated. + +Same as the Threshold decision policy, the percentage decision policy has the +two VotingPeriod and MinExecutionPeriod parameters. + +### Proposal + +Any member(s) of a group can submit a proposal for a group policy account to decide upon. +A proposal consists of a set of messages that will be executed if the proposal +passes as well as any metadata associated with the proposal. + +#### Voting + +There are four choices to choose while voting - yes, no, abstain and veto. Not +all decision policies will take the four choices into account. Votes can contain some optional metadata. +In the current implementation, the voting window begins as soon as a proposal +is submitted, and the end is defined by the group policy's decision policy. + +#### Withdrawing Proposals + +Proposals can be withdrawn any time before the voting period end, either by the +admin of the group policy or by one of the proposers. Once withdrawn, it is +marked as `PROPOSAL_STATUS_WITHDRAWN`, and no more voting or execution is +allowed on it. + +#### Aborted Proposals + +If the group policy is updated during the voting period of the proposal, then +the proposal is marked as `PROPOSAL_STATUS_ABORTED`, and no more voting or +execution is allowed on it. This is because the group policy defines the rules +of proposal voting and execution, so if those rules change during the lifecycle +of a proposal, then the proposal should be marked as stale. + +#### Tallying + +Tallying is the counting of all votes on a proposal. 
It happens only once in +the lifecycle of a proposal, but can be triggered by two factors, whichever +happens first: + +- either someone tries to execute the proposal (see next section), which can + happen on a `Msg/Exec` transaction, or a `Msg/{SubmitProposal,Vote}` + transaction with the `Exec` field set. When a proposal execution is attempted, + a tally is done first to make sure the proposal passes. +- or on `EndBlock` when the proposal's voting period end just passed. + +If the tally result passes the decision policy's rules, then the proposal is +marked as `PROPOSAL_STATUS_ACCEPTED`, or else it is marked as +`PROPOSAL_STATUS_REJECTED`. In any case, no more voting is allowed anymore, and the tally +result is persisted to state in the proposal's `FinalTallyResult`. + +#### Executing Proposals + +Proposals are executed only when the tallying is done, and the group account's +decision policy allows the proposal to pass based on the tally outcome. They +are marked by the status `PROPOSAL_STATUS_ACCEPTED`. Execution must happen +before a duration of `MaxExecutionPeriod` (set by the chain developer) after +each proposal's voting period end. + +Proposals will not be automatically executed by the chain in this current design, +but rather a user must submit a `Msg/Exec` transaction to attempt to execute the +proposal based on the current votes and decision policy. Any user (not only the +group members) can execute proposals that have been accepted, and execution fees are +paid by the proposal executor. +It's also possible to try to execute a proposal immediately on creation or on +new votes using the `Exec` field of `Msg/SubmitProposal` and `Msg/Vote` requests. +In the former case, proposers signatures are considered as yes votes. +In these cases, if the proposal can't be executed (i.e. it didn't pass the +decision policy's rules), it will still be opened for new votes and +could be tallied and executed later on. 
+ +A successful proposal execution will have its `ExecutorResult` marked as +`PROPOSAL_EXECUTOR_RESULT_SUCCESS`. The proposal will be automatically pruned +after execution. On the other hand, a failed proposal execution will be marked +as `PROPOSAL_EXECUTOR_RESULT_FAILURE`. Such a proposal can be re-executed +multiple times, until it expires after `MaxExecutionPeriod` after voting period +end. + +### Pruning + +Proposals and votes are automatically pruned to avoid state bloat. + +Votes are pruned: + +- either after a successful tally, i.e. a tally whose result passes the decision + policy's rules, which can be triggered by a `Msg/Exec` or a + `Msg/{SubmitProposal,Vote}` with the `Exec` field set, +- or on `EndBlock` right after the proposal's voting period end. This applies to proposals with status `aborted` or `withdrawn` too. + +whichever happens first. + +Proposals are pruned: + +- on `EndBlock` whose proposal status is `withdrawn` or `aborted` on proposal's voting period end before tallying, +- and either after a successful proposal execution, +- or on `EndBlock` right after the proposal's `voting_period_end` + + `max_execution_period` (defined as an app-wide configuration) is passed, + +whichever happens first. + +## State + +The `group` module uses the `orm` package which provides table storage with support for +primary keys and secondary indexes. `orm` also defines `Sequence` which is a persistent unique key generator based on a counter that can be used along with `Table`s. + +Here's the list of tables and associated sequences and indexes stored as part of the `group` module. + +### Group Table + +The `groupTable` stores `GroupInfo`: `0x0 | BigEndian(GroupId) -> ProtocolBuffer(GroupInfo)`. + +#### groupSeq + +The value of `groupSeq` is incremented when creating a new group and corresponds to the new `GroupId`: `0x1 | 0x1 -> BigEndian`. + +The second `0x1` corresponds to the ORM `sequenceStorageKey`. 
+ +#### groupByAdminIndex + +`groupByAdminIndex` allows to retrieve groups by admin address: +`0x2 | len([]byte(group.Admin)) | []byte(group.Admin) | BigEndian(GroupId) -> []byte()`. + +### Group Member Table + +The `groupMemberTable` stores `GroupMember`s: `0x10 | BigEndian(GroupId) | []byte(member.Address) -> ProtocolBuffer(GroupMember)`. + +The `groupMemberTable` is a primary key table and its `PrimaryKey` is given by +`BigEndian(GroupId) | []byte(member.Address)` which is used by the following indexes. + +#### groupMemberByGroupIndex + +`groupMemberByGroupIndex` allows to retrieve group members by group id: +`0x11 | BigEndian(GroupId) | PrimaryKey -> []byte()`. + +#### groupMemberByMemberIndex + +`groupMemberByMemberIndex` allows to retrieve group members by member address: +`0x12 | len([]byte(member.Address)) | []byte(member.Address) | PrimaryKey -> []byte()`. + +### Group Policy Table + +The `groupPolicyTable` stores `GroupPolicyInfo`: `0x20 | len([]byte(Address)) | []byte(Address) -> ProtocolBuffer(GroupPolicyInfo)`. + +The `groupPolicyTable` is a primary key table and its `PrimaryKey` is given by +`len([]byte(Address)) | []byte(Address)` which is used by the following indexes. + +#### groupPolicySeq + +The value of `groupPolicySeq` is incremented when creating a new group policy and is used to generate the new group policy account `Address`: +`0x21 | 0x1 -> BigEndian`. + +The second `0x1` corresponds to the ORM `sequenceStorageKey`. + +#### groupPolicyByGroupIndex + +`groupPolicyByGroupIndex` allows to retrieve group policies by group id: +`0x22 | BigEndian(GroupId) | PrimaryKey -> []byte()`. + +#### groupPolicyByAdminIndex + +`groupPolicyByAdminIndex` allows to retrieve group policies by admin address: +`0x23 | len([]byte(Address)) | []byte(Address) | PrimaryKey -> []byte()`. + +### Proposal Table + +The `proposalTable` stores `Proposal`s: `0x30 | BigEndian(ProposalId) -> ProtocolBuffer(Proposal)`. 
+ +#### proposalSeq + +The value of `proposalSeq` is incremented when creating a new proposal and corresponds to the new `ProposalId`: `0x31 | 0x1 -> BigEndian`. + +The second `0x1` corresponds to the ORM `sequenceStorageKey`. + +#### proposalByGroupPolicyIndex + +`proposalByGroupPolicyIndex` allows to retrieve proposals by group policy account address: +`0x32 | len([]byte(account.Address)) | []byte(account.Address) | BigEndian(ProposalId) -> []byte()`. + +#### ProposalsByVotingPeriodEndIndex + +`proposalsByVotingPeriodEndIndex` allows to retrieve proposals sorted by chronological `voting_period_end`: +`0x33 | sdk.FormatTimeBytes(proposal.VotingPeriodEnd) | BigEndian(ProposalId) -> []byte()`. + +This index is used when tallying the proposal votes at the end of the voting period, and for pruning proposals at `VotingPeriodEnd + MaxExecutionPeriod`. + +### Vote Table + +The `voteTable` stores `Vote`s: `0x40 | BigEndian(ProposalId) | []byte(voter.Address) -> ProtocolBuffer(Vote)`. + +The `voteTable` is a primary key table and its `PrimaryKey` is given by +`BigEndian(ProposalId) | []byte(voter.Address)` which is used by the following indexes. + +#### voteByProposalIndex + +`voteByProposalIndex` allows to retrieve votes by proposal id: +`0x41 | BigEndian(ProposalId) | PrimaryKey -> []byte()`. + +#### voteByVoterIndex + +`voteByVoterIndex` allows to retrieve votes by voter address: +`0x42 | len([]byte(voter.Address)) | []byte(voter.Address) | PrimaryKey -> []byte()`. + +## Msg Service + +### Msg/CreateGroup + +A new group can be created with the `MsgCreateGroup`, which has an admin address, a list of members and some optional metadata. + +The metadata has a maximum length that is chosen by the app developer, and +passed into the group keeper as a config. + +```go +// MsgCreateGroup is the Msg/CreateGroup request type. 
+message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} +``` + +It's expected to fail if + +- metadata length is greater than `MaxMetadataLen` config +- members are not correctly set (e.g. wrong address format, duplicates, or with 0 weight). + +### Msg/UpdateGroupMembers + +Group members can be updated with the `UpdateGroupMembers`. + +```go +// MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_id is the unique ID of the group. + uint64 group_id = 2; + + // member_updates is the list of members to update, + // set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +In the list of `MemberUpdates`, an existing member can be removed by setting its weight to 0. + +It's expected to fail if: + +- the signer is not the admin of the group. +- for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group. + +### Msg/UpdateGroupAdmin + +The `UpdateGroupAdmin` can be used to update a group admin. + +```go +// MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. 
+message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + // admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_id is the unique ID of the group. + uint64 group_id = 2; + + // new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +It's expected to fail if the signer is not the admin of the group. + +### Msg/UpdateGroupMetadata + +The `UpdateGroupMetadata` can be used to update a group metadata. + +```go +// MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_id is the unique ID of the group. + uint64 group_id = 2; + + // metadata is the updated group's metadata. + string metadata = 3; +} +``` + +It's expected to fail if: + +- new metadata length is greater than `MaxMetadataLen` config. +- the signer is not the admin of the group. + +### Msg/CreateGroupPolicy + +A new group policy can be created with the `MsgCreateGroupPolicy`, which has an admin address, a group id, a decision policy and some optional metadata. + +```go +// MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_id is the unique ID of the group. + uint64 group_id = 2; + + // metadata is any arbitrary metadata attached to the group policy. 
+ string metadata = 3; + + // decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} +``` + +It's expected to fail if: + +- the signer is not the admin of the group. +- metadata length is greater than `MaxMetadataLen` config. +- the decision policy's `Validate()` method doesn't pass against the group. + +### Msg/CreateGroupWithPolicy + +A new group with policy can be created with the `MsgCreateGroupWithPolicy`, which has an admin address, a list of members, a decision policy, a `group_policy_as_admin` field to optionally set group and group policy admin with group policy address and some optional metadata for group and group policy. + +```go +// MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + // admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + // group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + // group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + // and group policy admin. + bool group_policy_as_admin = 5; + + // decision_policy specifies the group policy's decision policy. 
+ google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} +``` + +It's expected to fail for the same reasons as `Msg/CreateGroup` and `Msg/CreateGroupPolicy`. + +### Msg/UpdateGroupPolicyAdmin + +The `UpdateGroupPolicyAdmin` can be used to update a group policy admin. + +```go +// MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +It's expected to fail if the signer is not the admin of the group policy. + +### Msg/UpdateGroupPolicyDecisionPolicy + +The `UpdateGroupPolicyDecisionPolicy` can be used to update a decision policy. + +```go +// MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // decision_policy is the updated group policy's decision policy. 
+ google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} +``` + +It's expected to fail if: + +- the signer is not the admin of the group policy. +- the new decision policy's `Validate()` method doesn't pass against the group. + +### Msg/UpdateGroupPolicyMetadata + +The `UpdateGroupPolicyMetadata` can be used to update a group policy metadata. + +```go +// MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // metadata is the group policy metadata to be updated. + string metadata = 3; +} +``` + +It's expected to fail if: + +- new metadata length is greater than `MaxMetadataLen` config. +- the signer is not the admin of the group. + +### Msg/SubmitProposal + +A new proposal can be created with the `MsgSubmitProposal`, which has a group policy account address, a list of proposers addresses, a list of messages to execute if the proposal is accepted and some optional metadata. +An optional `Exec` value can be provided to try to execute the proposal immediately after proposal creation. Proposers signatures are considered as yes votes in this case. + +```go +// MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + // group_policy_address is the account address of group policy. 
+  string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // proposers are the account addresses of the proposers.
+  // Proposers signatures will be counted as yes votes.
+  repeated string proposers = 2;
+
+  // metadata is any arbitrary metadata attached to the proposal.
+  string metadata = 3;
+
+  // messages is a list of `sdk.Msg`s that will be executed if the proposal passes.
+  repeated google.protobuf.Any messages = 4;
+
+  // exec defines the mode of execution of the proposal,
+  // whether it should be executed immediately on creation or not.
+  // If so, proposers signatures are considered as Yes votes.
+  Exec exec = 5;
+
+  // title is the title of the proposal.
+  //
+  // Since: cosmos-sdk 0.47
+  string title = 6;
+
+  // summary is the summary of the proposal.
+  //
+  // Since: cosmos-sdk 0.47
+  string summary = 7;
+}
+```
+
+It's expected to fail if:
+
+- metadata, title, or summary length is greater than `MaxMetadataLen` config.
+- any of the proposers is not a group member.
+
+### Msg/WithdrawProposal
+
+A proposal can be withdrawn using `MsgWithdrawProposal`, which takes an `address` (either a proposer or the group policy admin) and the `proposal_id` of the proposal to withdraw.
+
+```go
+// MsgWithdrawProposal is the Msg/WithdrawProposal request type.
+message MsgWithdrawProposal {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";
+
+  // proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  // address is the admin of the group policy or one of the proposers of the proposal.
+  string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+It's expected to fail if:
+
+- the signer is neither the group policy admin nor a proposer of the proposal.
+- the proposal is already closed or aborted.
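The proposal lifecycle described in the Msg/SubmitProposal, Msg/WithdrawProposal, Msg/Vote and Msg/Exec sections can be sketched as a minimal state model. This is an illustration only, not the SDK's implementation: it ignores vote weights, voting windows and `min_execution_period`, and models only a bare count of yes votes against a threshold.

```python
# Illustrative sketch of the group proposal lifecycle (submit -> vote -> exec).
# NOT the SDK implementation: no vote weights, no voting windows, no
# min_execution_period -- just a bare yes-vote threshold.

class Proposal:
    def __init__(self, proposers, threshold):
        self.proposers = list(proposers)
        self.threshold = threshold
        self.votes = {}            # voter -> option: "yes", "no", "abstain", "veto"
        self.status = "SUBMITTED"
        self.executed = False

    def withdraw(self, address, policy_admin):
        # Only a proposer or the group policy admin may withdraw,
        # and only while the proposal is still open.
        if self.status != "SUBMITTED":
            raise ValueError("proposal is already closed or aborted")
        if address not in self.proposers and address != policy_admin:
            raise ValueError("signer is neither policy admin nor a proposer")
        self.status = "WITHDRAWN"

    def vote(self, voter, option):
        if self.status != "SUBMITTED":
            raise ValueError("proposal is no longer in its voting period")
        self.votes[voter] = option
        if sum(1 for o in self.votes.values() if o == "yes") >= self.threshold:
            self.status = "ACCEPTED"

    def exec(self):
        # Messages only run if the proposal was accepted and not yet executed.
        if self.status != "ACCEPTED" or self.executed:
            return False
        self.executed = True
        return True
```

With a threshold of 2, `exec()` returns `False` after a single yes vote; a second yes vote moves the proposal to accepted, the first `exec()` succeeds, and a repeated `exec()` is a no-op, mirroring the fail conditions listed for `Msg/Exec`.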
+
+### Msg/Vote
+
+A new vote can be created with the `MsgVote`, given a proposal id, a voter address, a choice (yes, no, veto or abstain) and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after voting.
+
+```go
+// MsgVote is the Msg/Vote request type.
+message MsgVote {
+  option (cosmos.msg.v1.signer) = "voter";
+  option (amino.name) = "cosmos-sdk/group/MsgVote";
+
+  // proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  // voter is the voter account address.
+  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // option is the voter's choice on the proposal.
+  VoteOption option = 3;
+
+  // metadata is any arbitrary metadata attached to the vote.
+  string metadata = 4;
+
+  // exec defines whether the proposal should be executed
+  // immediately after voting or not.
+  Exec exec = 5;
+}
+```
+
+It's expected to fail if:
+
+- metadata length is greater than `MaxMetadataLen` config.
+- the proposal is no longer in its voting period.
+
+### Msg/Exec
+
+A proposal can be executed with the `MsgExec`.
+
+```go
+// MsgExec is the Msg/Exec request type.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "executor";
+  option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+  // proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  // executor is the account address used to execute the proposal.
+  string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+The messages that are part of this proposal won't be executed if:
+
+- the proposal has not been accepted by the group policy.
+- the proposal has already been successfully executed.
+
+### Msg/LeaveGroup
+
+The `MsgLeaveGroup` allows a group member to leave a group.
+
+```go
+// MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+  // address is the account address of the group member.
+  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // group_id is the unique ID of the group.
+  uint64 group_id = 2;
+}
+```
+
+It's expected to fail if:
+
+- the group member is not part of the group.
+- the decision policy's `Validate()` method fails against the updated group for any of the group's associated group policies.
+
+## Events
+
+The group module emits the following events:
+
+### EventCreateGroup
+
+| Type                             | Attribute Key | Attribute Value                  |
+| -------------------------------- | ------------- | -------------------------------- |
+| message                          | action        | /cosmos.group.v1.Msg/CreateGroup |
+| cosmos.group.v1.EventCreateGroup | group_id      | `{groupId}`                      |
+
+### EventUpdateGroup
+
+| Type                             | Attribute Key | Attribute Value                                              |
+| -------------------------------- | ------------- | ------------------------------------------------------------ |
+| message                          | action        | `/cosmos.group.v1.Msg/UpdateGroup{Admin\|Metadata\|Members}` |
+| cosmos.group.v1.EventUpdateGroup | group_id      | `{groupId}`                                                  |
+
+### EventCreateGroupPolicy
+
+| Type                                   | Attribute Key | Attribute Value                        |
+| -------------------------------------- | ------------- | -------------------------------------- |
+| message                                | action        | /cosmos.group.v1.Msg/CreateGroupPolicy |
+| cosmos.group.v1.EventCreateGroupPolicy | address       | `{groupPolicyAddress}`                 |
+
+### EventUpdateGroupPolicy
+
+| Type                                   | Attribute Key | Attribute Value                                                           |
+| -------------------------------------- | ------------- | ------------------------------------------------------------------------- |
+| message                                | action        | `/cosmos.group.v1.Msg/UpdateGroupPolicy{Admin\|Metadata\|DecisionPolicy}` |
+| cosmos.group.v1.EventUpdateGroupPolicy | address       | `{groupPolicyAddress}`                                                    |
+
+### EventCreateProposal
+
+| Type                                | Attribute Key | Attribute Value                     |
+| ----------------------------------- | ------------- | ----------------------------------- |
+| message                             | action        | /cosmos.group.v1.Msg/CreateProposal |
+| cosmos.group.v1.EventCreateProposal | proposal_id   | `{proposalId}`                      |
+
+### EventWithdrawProposal
+
+| Type                                  | Attribute Key | Attribute Value                       |
+| ------------------------------------- | ------------- | ------------------------------------- |
+| message                               | action        | /cosmos.group.v1.Msg/WithdrawProposal |
+| cosmos.group.v1.EventWithdrawProposal | proposal_id   | `{proposalId}`                        |
+
+### EventVote
+
+| Type                      | Attribute Key | Attribute Value           |
+| ------------------------- | ------------- | ------------------------- |
+| message                   | action        | /cosmos.group.v1.Msg/Vote |
+| cosmos.group.v1.EventVote | proposal_id   | `{proposalId}`            |
+
+### EventExec
+
+| Type                      | Attribute Key | Attribute Value           |
+| ------------------------- | ------------- | ------------------------- |
+| message                   | action        | /cosmos.group.v1.Msg/Exec |
+| cosmos.group.v1.EventExec | proposal_id   | `{proposalId}`            |
+| cosmos.group.v1.EventExec | logs          | `{logs_string}`           |
+
+### EventLeaveGroup
+
+| Type                            | Attribute Key | Attribute Value                 |
+| ------------------------------- | ------------- | ------------------------------- |
+| message                         | action        | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventLeaveGroup | group_id      | `{groupId}`                     |
+| cosmos.group.v1.EventLeaveGroup | address       | `{address}`                     |
+
+### EventProposalPruned
+
+| Type                                | Attribute Key | Attribute Value                 |
+| ----------------------------------- | ------------- | ------------------------------- |
+| message                             | action        | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventProposalPruned | proposal_id   | `{proposalId}`                  |
+| cosmos.group.v1.EventProposalPruned | status        | `{ProposalStatus}`              |
+| cosmos.group.v1.EventProposalPruned | tally_result  | `{TallyResult}`                 |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `group` module using the CLI.
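Several of the `tx` commands below consume JSON files and inline JSON arguments, such as the members file passed to `create-group` and the decision-policy document passed to `create-group-policy`. A short sketch of generating these inputs is shown here; the addresses are placeholders, the `{"members": [...]}` file layout is an assumption about the CLI's expected shape, and the decision-policy document follows the examples on this page.

```python
import json

# Sketch: generate the JSON inputs used by `simd tx group create-group`
# and `simd tx group create-group-policy`. Addresses are placeholders;
# the members-file layout is assumed, the decision-policy document
# matches the examples in this reference.

members = {
    "members": [
        {"address": "cosmos1..", "weight": "2", "metadata": "AQ=="},
        {"address": "cosmos1..", "weight": "1", "metadata": "AQ=="},
    ]
}

decision_policy = {
    "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
    "threshold": "1",
    "windows": {"voting_period": "120h", "min_execution_period": "0s"},
}

# The members file is passed by path (members.json); the decision policy
# is passed inline as a single quoted argument.
with open("members.json", "w") as f:
    json.dump(members, f, indent=2)

print(json.dumps(decision_policy))
```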
+ +#### Query + +The `query` commands allow users to query `group` state. + +```bash +simd query group --help +``` + +##### group-info + +The `group-info` command allows users to query for group info by given group id. + +```bash +simd query group group-info [id] [flags] +``` + +Example: + +```bash +simd query group group-info 1 +``` + +Example Output: + +```bash +admin: cosmos1.. +group_id: "1" +metadata: AQ== +total_weight: "3" +version: "1" +``` + +##### group-policy-info + +The `group-policy-info` command allows users to query for group policy info by account address of group policy . + +```bash +simd query group group-policy-info [group-policy-account] [flags] +``` + +Example: + +```bash +simd query group group-policy-info cosmos1.. +``` + +Example Output: + +```bash expandable +address: cosmos1.. +admin: cosmos1.. +decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s +group_id: "1" +metadata: AQ== +version: "1" +``` + +##### group-members + +The `group-members` command allows users to query for group members by group id with pagination flags. + +```bash +simd query group group-members [id] [flags] +``` + +Example: + +```bash +simd query group group-members 1 +``` + +Example Output: + +```bash expandable +members: +- group_id: "1" + member: + address: cosmos1.. + metadata: AQ== + weight: "2" +- group_id: "1" + member: + address: cosmos1.. + metadata: AQ== + weight: "1" +pagination: + next_key: null + total: "2" +``` + +##### groups-by-admin + +The `groups-by-admin` command allows users to query for groups by admin account address with pagination flags. + +```bash +simd query group groups-by-admin [admin] [flags] +``` + +Example: + +```bash +simd query group groups-by-admin cosmos1.. +``` + +Example Output: + +```bash expandable +groups: +- admin: cosmos1.. + group_id: "1" + metadata: AQ== + total_weight: "3" + version: "1" +- admin: cosmos1.. 
+ group_id: "2" + metadata: AQ== + total_weight: "3" + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### group-policies-by-group + +The `group-policies-by-group` command allows users to query for group policies by group id with pagination flags. + +```bash +simd query group group-policies-by-group [group-id] [flags] +``` + +Example: + +```bash +simd query group group-policies-by-group 1 +``` + +Example Output: + +```bash expandable +group_policies: +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### group-policies-by-admin + +The `group-policies-by-admin` command allows users to query for group policies by admin account address with pagination flags. + +```bash +simd query group group-policies-by-admin [admin] [flags] +``` + +Example: + +```bash +simd query group group-policies-by-admin cosmos1.. +``` + +Example Output: + +```bash expandable +group_policies: +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### proposal + +The `proposal` command allows users to query for proposal by id. 
+ +```bash +simd query group proposal [id] [flags] +``` + +Example: + +```bash +simd query group proposal 1 +``` + +Example Output: + +```bash expandable +proposal: + address: cosmos1.. + executor_result: EXECUTOR_RESULT_NOT_RUN + group_policy_version: "1" + group_version: "1" + metadata: AQ== + msgs: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "100000000" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + proposal_id: "1" + proposers: + - cosmos1.. + result: RESULT_UNFINALIZED + status: STATUS_SUBMITTED + submitted_at: "2021-12-17T07:06:26.310638964Z" + windows: + min_execution_period: 0s + voting_period: 432000s + vote_state: + abstain_count: "0" + no_count: "0" + veto_count: "0" + yes_count: "0" + summary: "Summary" + title: "Title" +``` + +##### proposals-by-group-policy + +The `proposals-by-group-policy` command allows users to query for proposals by account address of group policy with pagination flags. + +```bash +simd query group proposals-by-group-policy [group-policy-account] [flags] +``` + +Example: + +```bash +simd query group proposals-by-group-policy cosmos1.. +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "1" +proposals: +- address: cosmos1.. + executor_result: EXECUTOR_RESULT_NOT_RUN + group_policy_version: "1" + group_version: "1" + metadata: AQ== + msgs: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "100000000" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + proposal_id: "1" + proposers: + - cosmos1.. + result: RESULT_UNFINALIZED + status: STATUS_SUBMITTED + submitted_at: "2021-12-17T07:06:26.310638964Z" + windows: + min_execution_period: 0s + voting_period: 432000s + vote_state: + abstain_count: "0" + no_count: "0" + veto_count: "0" + yes_count: "0" + summary: "Summary" + title: "Title" +``` + +##### vote + +The `vote` command allows users to query for vote by proposal id and voter account address. 
+ +```bash +simd query group vote [proposal-id] [voter] [flags] +``` + +Example: + +```bash +simd query group vote 1 cosmos1.. +``` + +Example Output: + +```bash +vote: + choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +##### votes-by-proposal + +The `votes-by-proposal` command allows users to query for votes by proposal id with pagination flags. + +```bash +simd query group votes-by-proposal [proposal-id] [flags] +``` + +Example: + +```bash +simd query group votes-by-proposal 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +votes: +- choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +##### votes-by-voter + +The `votes-by-voter` command allows users to query for votes by voter account address with pagination flags. + +```bash +simd query group votes-by-voter [voter] [flags] +``` + +Example: + +```bash +simd query group votes-by-voter cosmos1.. +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +votes: +- choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +### Transactions + +The `tx` commands allow users to interact with the `group` module. + +```bash +simd tx group --help +``` + +#### create-group + +The `create-group` command allows users to create a group which is an aggregation of member accounts with associated weights and +an administrator account. + +```bash +simd tx group create-group [admin] [metadata] [members-json-file] +``` + +Example: + +```bash +simd tx group create-group cosmos1.. "AQ==" members.json +``` + +#### update-group-admin + +The `update-group-admin` command allows users to update a group's admin. 
+ +```bash +simd tx group update-group-admin [admin] [group-id] [new-admin] [flags] +``` + +Example: + +```bash +simd tx group update-group-admin cosmos1.. 1 cosmos1.. +``` + +#### update-group-members + +The `update-group-members` command allows users to update a group's members. + +```bash +simd tx group update-group-members [admin] [group-id] [members-json-file] [flags] +``` + +Example: + +```bash +simd tx group update-group-members cosmos1.. 1 members.json +``` + +#### update-group-metadata + +The `update-group-metadata` command allows users to update a group's metadata. + +```bash +simd tx group update-group-metadata [admin] [group-id] [metadata] [flags] +``` + +Example: + +```bash +simd tx group update-group-metadata cosmos1.. 1 "AQ==" +``` + +#### create-group-policy + +The `create-group-policy` command allows users to create a group policy which is an account associated with a group and a decision policy. + +```bash +simd tx group create-group-policy [admin] [group-id] [metadata] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group create-group-policy cosmos1.. 1 "AQ==" '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### create-group-with-policy + +The `create-group-with-policy` command allows users to create a group which is an aggregation of member accounts with associated weights and an administrator account with decision policy. If the `--group-policy-as-admin` flag is set to `true`, the group policy address becomes the group and group policy admin. + +```bash +simd tx group create-group-with-policy [admin] [group-metadata] [group-policy-metadata] [members-json-file] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group create-group-with-policy cosmos1.. 
"AQ==" "AQ==" members.json '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### update-group-policy-admin + +The `update-group-policy-admin` command allows users to update a group policy admin. + +```bash +simd tx group update-group-policy-admin [admin] [group-policy-account] [new-admin] [flags] +``` + +Example: + +```bash +simd tx group update-group-policy-admin cosmos1.. cosmos1.. cosmos1.. +``` + +#### update-group-policy-metadata + +The `update-group-policy-metadata` command allows users to update a group policy metadata. + +```bash +simd tx group update-group-policy-metadata [admin] [group-policy-account] [new-metadata] [flags] +``` + +Example: + +```bash +simd tx group update-group-policy-metadata cosmos1.. cosmos1.. "AQ==" +``` + +#### update-group-policy-decision-policy + +The `update-group-policy-decision-policy` command allows users to update a group policy's decision policy. + +```bash +simd tx group update-group-policy-decision-policy [admin] [group-policy-account] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group update-group-policy-decision-policy cosmos1.. cosmos1.. '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"2", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### submit-proposal + +The `submit-proposal` command allows users to submit a new proposal. + +```bash +simd tx group submit-proposal [group-policy-account] [proposer[,proposer]*] [msg_tx_json_file] [metadata] [flags] +``` + +Example: + +```bash +simd tx group submit-proposal cosmos1.. cosmos1.. msg_tx.json "AQ==" +``` + +#### withdraw-proposal + +The `withdraw-proposal` command allows users to withdraw a proposal. + +```bash +simd tx group withdraw-proposal [proposal-id] [group-policy-admin-or-proposer] +``` + +Example: + +```bash +simd tx group withdraw-proposal 1 cosmos1.. 
+```
+
+#### vote
+
+The `vote` command allows users to vote on a proposal.
+
+```bash
+simd tx group vote [proposal-id] [voter] [choice] [metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group vote 1 cosmos1.. CHOICE_YES "AQ=="
+```
+
+#### exec
+
+The `exec` command allows users to execute a proposal.
+
+```bash
+simd tx group exec [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx group exec 1
+```
+
+#### leave-group
+
+The `leave-group` command allows a group member to leave the group.
+
+```bash
+simd tx group leave-group [member-address] [group-id]
+```
+
+Example:
+
+```bash
+simd tx group leave-group cosmos1.. 1
+```
+
+### gRPC
+
+A user can query the `group` module using gRPC endpoints.
+
+#### GroupInfo
+
+The `GroupInfo` endpoint allows users to query for group info by given group id.
+
+```bash
+cosmos.group.v1.Query/GroupInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"group_id":1}' localhost:9090 cosmos.group.v1.Query/GroupInfo
+```
+
+Example Output:
+
+```bash
+{
+  "info": {
+    "groupId": "1",
+    "admin": "cosmos1..",
+    "metadata": "AQ==",
+    "version": "1",
+    "totalWeight": "3"
+  }
+}
+```
+
+#### GroupPolicyInfo
+
+The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy.
+
+```bash
+cosmos.group.v1.Query/GroupPolicyInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPolicyInfo
+```
+
+Example Output:
+
+```bash
+{
+  "info": {
+    "address": "cosmos1..",
+    "groupId": "1",
+    "admin": "cosmos1..",
+    "version": "1",
+    "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows": {"voting_period": "120h", "min_execution_period": "0s"}}
+  }
+}
+```
+
+#### GroupMembers
+
+The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags.
+ +```bash +cosmos.group.v1.Query/GroupMembers +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupMembers +``` + +Example Output: + +```bash expandable +{ + "members": [ + { + "groupId": "1", + "member": { + "address": "cosmos1..", + "weight": "1" + } + }, + { + "groupId": "1", + "member": { + "address": "cosmos1..", + "weight": "2" + } + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupsByAdmin + +The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags. + +```bash +cosmos.group.v1.Query/GroupsByAdmin +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupsByAdmin +``` + +Example Output: + +```bash expandable +{ + "groups": [ + { + "groupId": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + }, + { + "groupId": "2", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupPoliciesByGroup + +The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags. 
+ +```bash +cosmos.group.v1.Query/GroupPoliciesByGroup +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByGroup +``` + +Example Output: + +```bash expandable +{ + "GroupPolicies": [ + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + }, + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupPoliciesByAdmin + +The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags. + +```bash +cosmos.group.v1.Query/GroupPoliciesByAdmin +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByAdmin +``` + +Example Output: + +```bash expandable +{ + "GroupPolicies": [ + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + }, + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### Proposal + +The `Proposal` endpoint allows users to query for proposal by id. 
+ +```bash +cosmos.group.v1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposalId": "1", + "address": "cosmos1..", + "proposers": [ + "cosmos1.." + ], + "submittedAt": "2021-12-17T07:06:26.310638964Z", + "groupVersion": "1", + "GroupPolicyVersion": "1", + "status": "STATUS_SUBMITTED", + "result": "RESULT_UNFINALIZED", + "voteState": { + "yesCount": "0", + "noCount": "0", + "abstainCount": "0", + "vetoCount": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executorResult": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "title": "Title", + "summary": "Summary", + } +} +``` + +#### ProposalsByGroupPolicy + +The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags. + +```bash +cosmos.group.v1.Query/ProposalsByGroupPolicy +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/ProposalsByGroupPolicy +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposalId": "1", + "address": "cosmos1..", + "proposers": [ + "cosmos1.." 
+ ], + "submittedAt": "2021-12-17T08:03:27.099649352Z", + "groupVersion": "1", + "GroupPolicyVersion": "1", + "status": "STATUS_CLOSED", + "result": "RESULT_ACCEPTED", + "voteState": { + "yesCount": "1", + "noCount": "0", + "abstainCount": "0", + "vetoCount": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executorResult": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "title": "Title", + "summary": "Summary", + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### VoteByProposalVoter + +The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address. + +```bash +cosmos.group.v1.Query/VoteByProposalVoter +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VoteByProposalVoter +``` + +Example Output: + +```bash +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } +} +``` + +#### VotesByProposal + +The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags. + +```bash +cosmos.group.v1.Query/VotesByProposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/VotesByProposal +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### VotesByVoter + +The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags. 
+ +```bash +cosmos.group.v1.Query/VotesByVoter +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VotesByVoter +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### REST + +A user can query the `group` module using REST endpoints. + +#### GroupInfo + +The `GroupInfo` endpoint allows users to query for group info by given group id. + +```bash +/cosmos/group/v1/group_info/{group_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_info/1 +``` + +Example Output: + +```bash +{ + "info": { + "id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "total_weight": "3" + } +} +``` + +#### GroupPolicyInfo + +The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy. + +```bash +/cosmos/group/v1/group_policy_info/{address} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policy_info/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "info": { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + } +} +``` + +#### GroupMembers + +The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags. 
+
+```bash
+/cosmos/group/v1/group_members/{group_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_members/1
+```
+
+Example Output:
+
+```bash expandable
+{
+  "members": [
+    {
+      "group_id": "1",
+      "member": {
+        "address": "cosmos1..",
+        "weight": "1",
+        "metadata": "AQ=="
+      }
+    },
+    {
+      "group_id": "1",
+      "member": {
+        "address": "cosmos1..",
+        "weight": "2",
+        "metadata": "AQ=="
+      }
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### GroupsByAdmin
+
+The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags.
+
+```bash
+/cosmos/group/v1/groups_by_admin/{admin}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/groups_by_admin/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "groups": [
+    {
+      "id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "total_weight": "3"
+    },
+    {
+      "id": "2",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "total_weight": "3"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### GroupPoliciesByGroup
+
+The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags.
+
+```bash
+/cosmos/group/v1/group_policies_by_group/{group_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_policies_by_group/1
+```
+
+Example Output:
+
+```bash expandable
+{
+  "group_policies": [
+    {
+      "address": "cosmos1..",
+      "group_id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "decision_policy": {
+        "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+        "threshold": "1",
+        "windows": {
+          "voting_period": "120h",
+          "min_execution_period": "0s"
+        }
+      }
+    },
+    {
+      "address": "cosmos1..",
+      "group_id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "decision_policy": {
+        "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+        "threshold": "1",
+        "windows": {
+          "voting_period": "120h",
+          "min_execution_period": "0s"
+        }
+      }
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### GroupPoliciesByAdmin
+
+The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags.
+
+```bash
+/cosmos/group/v1/group_policies_by_admin/{admin}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_policies_by_admin/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "group_policies": [
+    {
+      "address": "cosmos1..",
+      "group_id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "decision_policy": {
+        "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+        "threshold": "1",
+        "windows": {
+          "voting_period": "120h",
+          "min_execution_period": "0s"
+        }
+      }
+    },
+    {
+      "address": "cosmos1..",
+      "group_id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "decision_policy": {
+        "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+        "threshold": "1",
+        "windows": {
+          "voting_period": "120h",
+          "min_execution_period": "0s"
+        }
+      }
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query for a proposal by id.
+
+```bash
+/cosmos/group/v1/proposal/{proposal_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/proposal/1
+```
+
+Example Output:
+
+```bash expandable
+{
+  "proposal": {
+    "proposal_id": "1",
+    "address": "cosmos1..",
+    "metadata": "AQ==",
+    "proposers": [
+      "cosmos1.."
+    ],
+    "submitted_at": "2021-12-17T07:06:26.310638964Z",
+    "group_version": "1",
+    "group_policy_version": "1",
+    "status": "STATUS_SUBMITTED",
+    "result": "RESULT_UNFINALIZED",
+    "vote_state": {
+      "yes_count": "0",
+      "no_count": "0",
+      "abstain_count": "0",
+      "veto_count": "0"
+    },
+    "windows": {
+      "min_execution_period": "0s",
+      "voting_period": "432000s"
+    },
+    "executor_result": "EXECUTOR_RESULT_NOT_RUN",
+    "messages": [
+      {
+        "@type": "/cosmos.bank.v1beta1.MsgSend",
+        "from_address": "cosmos1..",
+        "to_address": "cosmos1..",
+        "amount": [
+          {
+            "denom": "stake",
+            "amount": "100000000"
+          }
+        ]
+      }
+    ],
+    "title": "Title",
+    "summary": "Summary"
+  }
+}
+```
+
+#### ProposalsByGroupPolicy
+
+The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by the account address of a group policy with pagination flags.
+
+```bash
+/cosmos/group/v1/proposals_by_group_policy/{address}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/proposals_by_group_policy/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "proposals": [
+    {
+      "id": "1",
+      "group_policy_address": "cosmos1..",
+      "metadata": "AQ==",
+      "proposers": [
+        "cosmos1.."
+      ],
+      "submit_time": "2021-12-17T08:03:27.099649352Z",
+      "group_version": "1",
+      "group_policy_version": "1",
+      "status": "STATUS_CLOSED",
+      "result": "RESULT_ACCEPTED",
+      "vote_state": {
+        "yes_count": "1",
+        "no_count": "0",
+        "abstain_count": "0",
+        "veto_count": "0"
+      },
+      "windows": {
+        "min_execution_period": "0s",
+        "voting_period": "432000s"
+      },
+      "executor_result": "EXECUTOR_RESULT_NOT_RUN",
+      "messages": [
+        {
+          "@type": "/cosmos.bank.v1beta1.MsgSend",
+          "from_address": "cosmos1..",
+          "to_address": "cosmos1..",
+          "amount": [
+            {
+              "denom": "stake",
+              "amount": "100000000"
+            }
+          ]
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+#### VoteByProposalVoter
+
+The `VoteByProposalVoter` endpoint allows users to query for a vote by proposal id and voter account address.
+
+```bash
+/cosmos/group/v1/vote_by_proposal_voter/{proposal_id}/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/vote_by_proposal_voter/1/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+  "vote": {
+    "proposal_id": "1",
+    "voter": "cosmos1..",
+    "choice": "CHOICE_YES",
+    "metadata": "AQ==",
+    "submitted_at": "2021-12-17T08:05:02.490164009Z"
+  }
+}
+```
+
+#### VotesByProposal
+
+The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags.
+
+```bash
+/cosmos/group/v1/votes_by_proposal/{proposal_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/votes_by_proposal/1
+```
+
+Example Output:
+
+```bash expandable
+{
+  "votes": [
+    {
+      "proposal_id": "1",
+      "voter": "cosmos1..",
+      "option": "CHOICE_YES",
+      "metadata": "AQ==",
+      "submit_time": "2021-12-17T08:05:02.490164009Z"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+#### VotesByVoter
+
+The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags.
+
+```bash
+/cosmos/group/v1/votes_by_voter/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/votes_by_voter/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "votes": [
+    {
+      "proposal_id": "1",
+      "voter": "cosmos1..",
+      "choice": "CHOICE_YES",
+      "metadata": "AQ==",
+      "submitted_at": "2021-12-17T08:05:02.490164009Z"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+## Metadata
+
+The group module has four locations for metadata where users can provide further context about the on-chain actions they are taking. By default, each metadata field is limited to 255 characters, and metadata can be stored as JSON, either on-chain or off-chain depending on the amount of data required. Here we provide a recommendation for the JSON structure and where the data should be stored. Two factors drive these recommendations. First, the group and gov modules should be consistent with one another; note that the number of proposals made by all groups may be quite large. Second, client applications such as block explorers and governance interfaces should be able to rely on a consistent metadata structure across chains.
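Before broadcasting, a client can sanity-check an on-chain metadata payload against the default limit described above. The following is a minimal sketch, not SDK code: the `validateMetadata` helper and the hard-coded 255-character default are illustrative assumptions (the actual limit is a module configuration).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// maxMetadataLen mirrors the default 255-character limit documented above.
// Hypothetical constant for illustration; real chains may configure a
// different limit.
const maxMetadataLen = 255

// validateMetadata is a hypothetical client-side helper: it checks that a
// metadata string is valid JSON and fits within the default length limit.
func validateMetadata(metadata string) error {
	if len(metadata) > maxMetadataLen {
		return fmt.Errorf("metadata length %d exceeds limit %d", len(metadata), maxMetadataLen)
	}
	if !json.Valid([]byte(metadata)) {
		return fmt.Errorf("metadata is not valid JSON")
	}
	return nil
}

func main() {
	// A vote metadata payload following the recommended structure below.
	vote := `{"justification":"supports community pool funding"}`
	if err := validateMetadata(vote); err != nil {
		fmt.Println("invalid:", err)
		return
	}
	fmt.Println("metadata ok, length:", len(vote))
}
```

Running such a check client-side avoids submitting a transaction that the chain would reject for oversized or malformed metadata.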
+ +### Proposal + +Location: off-chain as json object stored on IPFS (mirrors [gov proposal](/docs/sdk/next/documentation/module-system/gov#metadata)) + +```json +{ + "title": "", + "authors": [""], + "summary": "", + "details": "", + "proposal_forum_url": "", + "vote_option_context": "" +} +``` + + + The `authors` field is an array of strings, this is to allow for multiple + authors to be listed in the metadata. In v0.46, the `authors` field is a + comma-separated string. Frontends are encouraged to support both formats for + backwards compatibility. + + +### Vote + +Location: on-chain as json within 255 character limit (mirrors [gov vote](/docs/sdk/next/documentation/module-system/gov#metadata)) + +```json +{ + "justification": "" +} +``` + +### Group + +Location: off-chain as json object stored on IPFS + +```json +{ + "name": "", + "description": "", + "group_website_url": "", + "group_forum_url": "" +} +``` + +### Decision policy + +Location: on-chain as json within 255 character limit + +```json +{ + "name": "", + "description": "" +} +``` diff --git a/docs/sdk/next/documentation/module-system/intro.mdx b/docs/sdk/next/documentation/module-system/intro.mdx new file mode 100644 index 00000000..d12a6333 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/intro.mdx @@ -0,0 +1,303 @@ +--- +title: Introduction to Cosmos SDK Modules +--- + +## Synopsis + +Modules define most of the logic of Cosmos SDK applications. Developers compose modules together using the Cosmos SDK to build their custom application-specific blockchains. This document outlines the basic concepts behind SDK modules and how to approach module management. 
+ + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK application](/docs/sdk/next/documentation/application-framework/app-anatomy) +- [Lifecycle of a Cosmos SDK transaction](/docs/sdk/next/documentation/protocol-development/tx-lifecycle) + + + +## Role of Modules in a Cosmos SDK Application + +The Cosmos SDK can be thought of as the Ruby-on-Rails of blockchain development. It comes with a core that provides the basic functionalities every blockchain application needs, like a [boilerplate implementation of the ABCI](/docs/sdk/next/documentation/application-framework/baseapp) to communicate with the underlying consensus engine, a [`multistore`](/docs/sdk/next/documentation/state-storage/store#multistore) to persist state, a [server](/docs/sdk/next/documentation/operations/node) to form a full-node and [interfaces](/docs/sdk/next/documentation/module-system/module-interfaces) to handle queries. + +On top of this core, the Cosmos SDK enables developers to build modules that implement the business logic of their application. In other words, SDK modules implement the bulk of the logic of applications, while the core does the wiring and enables modules to be composed together. The end goal is to build a robust ecosystem of open-source Cosmos SDK modules, making it increasingly easier to build complex blockchain applications. + +Cosmos SDK modules can be seen as little state-machines within the state-machine. They generally define a subset of the state using one or more `KVStore`s in the [main multistore](/docs/sdk/next/documentation/state-storage/store), as well as a subset of [message types](/docs/sdk/next/documentation/module-system/messages-and-queries#messages). These messages are routed by one of the main components of Cosmos SDK core, [`BaseApp`](/docs/sdk/next/documentation/application-framework/baseapp), to a module Protobuf [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services) that defines them. 
+
+```mermaid expandable
+flowchart TD
+    A[Transaction relayed from the full-node's consensus engine to the node's application via DeliverTx]
+    A --> B[APPLICATION]
+    B --> C["Using baseapp's methods: Decode the Tx, extract and route the message(s)"]
+    C --> D[Message routed to the correct module to be processed]
+    D --> E[AUTH MODULE]
+    D --> F[BANK MODULE]
+    D --> G[STAKING MODULE]
+    D --> H[GOV MODULE]
+    H --> I[Handles message, Updates state]
+    E --> I
+    F --> I
+    G --> I
+    I --> J["Return result to the underlying consensus engine (e.g. CometBFT) (0=Ok, 1=Err)"]
+```
+
+As a result of this architecture, building a Cosmos SDK application usually revolves around writing modules to implement the specialized logic of the application and composing them with existing modules to complete the application. Developers will generally work on modules that implement logic needed for their specific use case that do not exist yet, and will use existing modules for more generic functionalities like staking, accounts, or token management.
+
+### Modules as super-users
+
+Modules have the ability to perform actions that are not available to regular users. This is because modules are given sudo permissions by the state machine. Modules can reject another module's attempt to execute a function, but this logic must be explicit. Examples of this can be seen when modules create functions to modify parameters:
+
+```go expandable
+package keeper
+
+import (
+
+	"context"
+	"github.com/hashicorp/go-metrics"
+
+	errorsmod "cosmossdk.io/errors"
+	"cosmossdk.io/x/bank/types"
+	"github.com/cosmos/cosmos-sdk/telemetry"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+type msgServer struct {
+	Keeper
+}
+
+var _ types.MsgServer = msgServer{
+}
+
+/ NewMsgServerImpl returns an implementation of the bank MsgServer interface
+/ for the provided Keeper.
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + var ( + from, to []byte + err error + ) + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !msg.Amount.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if !msg.Amount.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(ctx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + if len(msg.Inputs) == 0 { + return nil, types.ErrNoInputs +} + if len(msg.Inputs) != 1 { + return nil, types.ErrMultipleSenders +} + if len(msg.Outputs) == 0 { + return nil, types.ErrNoOutputs +} + if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err != nil { + return nil, err +} + + / 
NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + if base, ok := k.Keeper.(BaseKeeper); ok { + accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) + if err != nil { + return nil, err +} + if k.BlockedAddr(accAddr) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(ctx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, errorsmod.Wrapf(types.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) +} + if err := req.Params.Validate(); err != nil { + return nil, err +} + if err := k.SetParams(ctx, req.Params); err != nil { + return nil, err +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} + +func (k msgServer) + +SetSendEnabled(ctx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { + if k.GetAuthority() != msg.Authority { + return nil, errorsmod.Wrapf(types.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) +} + seen := map[string]bool{ +} + for _, se := range msg.SendEnabled { + if _, alreadySeen := seen[se.Denom]; alreadySeen { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) +} + +seen[se.Denom] = true + if err := se.Validate(); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err) +} + +} + for _, denom := range 
msg.UseDefaultFor { + if err := sdk.ValidateDenom(denom); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err) +} + +} + if len(msg.SendEnabled) > 0 { + k.SetAllSendEnabled(ctx, msg.SendEnabled) +} + if len(msg.UseDefaultFor) > 0 { + k.DeleteSendEnabled(ctx, msg.UseDefaultFor...) +} + +return &types.MsgSetSendEnabledResponse{ +}, nil +} + +func (k msgServer) + +Burn(goCtx context.Context, msg *types.MsgBurn) (*types.MsgBurnResponse, error) { + var ( + from []byte + err error + ) + +var coins sdk.Coins + for _, coin := range msg.Amount { + coins = coins.Add(sdk.NewCoin(coin.Denom, coin.Amount)) +} + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !coins.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, coins.String()) +} + if !coins.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, coins.String()) +} + +err = k.BurnCoins(goCtx, from, coins) + if err != nil { + return nil, err +} + +return &types.MsgBurnResponse{ +}, nil +} +``` + +## How to Approach Building Modules as a Developer + +While there are no definitive guidelines for writing modules, here are some important design principles developers should keep in mind when building them: + +- **Composability**: Cosmos SDK applications are almost always composed of multiple modules. This means developers need to carefully consider the integration of their module not only with the core of the Cosmos SDK, but also with other modules. 
The former is achieved by following standard design patterns outlined [here](#main-components-of-cosmos-sdk-modules), while the latter is achieved by properly exposing the store(s) of the module via the [`keeper`](/docs/sdk/next/documentation/module-system/keeper). +- **Specialization**: A direct consequence of the **composability** feature is that modules should be **specialized**. Developers should carefully establish the scope of their module and not batch multiple functionalities into the same module. This separation of concerns enables modules to be reused in other projects and improves the upgradability of the application. **Specialization** also plays an important role in the [object-capabilities model](/docs/sdk/next/documentation/core-concepts/ocap) of the Cosmos SDK. +- **Capabilities**: Most modules need to read and/or write to the store(s) of other modules. However, in an open-source environment, it is possible for some modules to be malicious. That is why module developers need to carefully think not only about how their module interacts with other modules, but also about how to give access to the module's store(s). The Cosmos SDK takes a capabilities-oriented approach to inter-module security. This means that each store defined by a module is accessed by a `key`, which is held by the module's [`keeper`](/docs/sdk/next/documentation/module-system/keeper). This `keeper` defines how to access the store(s) and under what conditions. Access to the module's store(s) is done by passing a reference to the module's `keeper`. + +## Main Components of Cosmos SDK Modules + +Modules are by convention defined in the `./x/` subfolder (e.g. the `bank` module will be defined in the `./x/bank` folder). They generally share the same core components: + +- A [`keeper`](/docs/sdk/next/documentation/module-system/keeper), used to access the module's store(s) and update the state. 
+- A [`Msg` service](/docs/sdk/next/documentation/module-system/messages-and-queries#messages), used to process messages when they are routed to the module by [`BaseApp`](/docs/sdk/next/documentation/application-framework/baseapp#message-routing) and trigger state-transitions. +- A [query service](/docs/sdk/next/documentation/module-system/query-services), used to process user queries when they are routed to the module by [`BaseApp`](/docs/sdk/next/documentation/application-framework/baseapp#query-routing). +- Interfaces, for end users to query the subset of the state defined by the module and create `message`s of the custom types defined in the module. + +In addition to these components, modules implement the `AppModule` interface in order to be managed by the [`module manager`](/docs/sdk/next/documentation/module-system/module-manager). + +Please refer to the [structure document](/docs/sdk/next/documentation/module-system/structure) to learn about the recommended structure of a module's directory. diff --git a/docs/sdk/next/documentation/module-system/invariants.mdx b/docs/sdk/next/documentation/module-system/invariants.mdx new file mode 100644 index 00000000..eebea4fe --- /dev/null +++ b/docs/sdk/next/documentation/module-system/invariants.mdx @@ -0,0 +1,528 @@ +--- +title: Invariants +--- + +## Synopsis + +An invariant is a property of the application that should always be true. In the context of the Cosmos SDK, an `Invariant` is a function that checks for a particular invariant. These functions are useful to detect bugs early on and act upon them to limit their potential consequences (e.g. by halting the chain). They are also useful in the development process of the application to detect bugs via simulations. + + +**Pre-requisite Readings** + +- [Keepers](/docs/sdk/next/documentation/module-system/keeper) + + + +## Implementing `Invariant`s + +An `Invariant` is a function that checks for a particular invariant within a module. 
Module `Invariant`s must follow the `Invariant` type: + +```go expandable +package types + +import "fmt" + +/ An Invariant is a function which tests a particular invariant. +/ The invariant returns a descriptive message about what happened +/ and a boolean indicating whether the invariant has been broken. +/ The simulator will then halt and print the logs. +type Invariant func(ctx Context) (string, bool) + +/ Invariants defines a group of invariants +type Invariants []Invariant + +/ expected interface for registering invariants +type InvariantRegistry interface { + RegisterRoute(moduleName, route string, invar Invariant) +} + +/ FormatInvariant returns a standardized invariant message. +func FormatInvariant(module, name, msg string) + +string { + return fmt.Sprintf("%s: %s invariant\n%s\n", module, name, msg) +} +``` + +The `string` return value is the invariant message, which can be used when printing logs, and the `bool` return value is the actual result of the invariant check. + +In practice, each module implements `Invariant`s in a `keeper/invariants.go` file within the module's folder. The standard is to implement one `Invariant` function per logical grouping of invariants with the following model: + +```go +/ Example for an Invariant that checks balance-related invariants + +func BalanceInvariants(k Keeper) + +sdk.Invariant { + return func(ctx context.Context) (string, bool) { + / Implement checks for balance-related invariants +} +} +``` + +Additionally, module developers should generally implement an `AllInvariants` function that runs all the `Invariant`s functions of the module: + +```go expandable +/ AllInvariants runs all invariants of the module. 
+/ In this example, the module implements two Invariants: BalanceInvariants and DepositsInvariants + +func AllInvariants(k Keeper) + +sdk.Invariant { + return func(ctx context.Context) (string, bool) { + res, stop := BalanceInvariants(k)(ctx) + if stop { + return res, stop +} + +return DepositsInvariant(k)(ctx) +} +} +``` + +Finally, module developers need to implement the `RegisterInvariants` method as part of the [`AppModule` interface](/docs/sdk/next/documentation/module-system/module-manager#appmodule). Indeed, the `RegisterInvariants` method of the module, implemented in the `module/module.go` file, typically only defers the call to a `RegisterInvariants` method implemented in the `keeper/invariants.go` file. The `RegisterInvariants` method registers a route for each `Invariant` function in the [`InvariantRegistry`](#invariant-registry): + +```go expandable +package keeper + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ RegisterInvariants registers all staking invariants +func RegisterInvariants(ir sdk.InvariantRegistry, k *Keeper) { + ir.RegisterRoute(types.ModuleName, "module-accounts", + ModuleAccountInvariants(k)) + +ir.RegisterRoute(types.ModuleName, "nonnegative-power", + NonNegativePowerInvariant(k)) + +ir.RegisterRoute(types.ModuleName, "positive-delegation", + PositiveDelegationInvariant(k)) + +ir.RegisterRoute(types.ModuleName, "delegator-shares", + DelegatorSharesInvariant(k)) +} + +/ AllInvariants runs all invariants of the staking module. 
+func AllInvariants(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + res, stop := ModuleAccountInvariants(k)(ctx) + if stop { + return res, stop +} + +res, stop = NonNegativePowerInvariant(k)(ctx) + if stop { + return res, stop +} + +res, stop = PositiveDelegationInvariant(k)(ctx) + if stop { + return res, stop +} + +return DelegatorSharesInvariant(k)(ctx) +} +} + +/ ModuleAccountInvariants checks that the bonded and notBonded ModuleAccounts pools +/ reflects the tokens actively bonded and not bonded +func ModuleAccountInvariants(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + bonded := math.ZeroInt() + notBonded := math.ZeroInt() + bondedPool := k.GetBondedPool(ctx) + notBondedPool := k.GetNotBondedPool(ctx) + bondDenom := k.BondDenom(ctx) + +k.IterateValidators(ctx, func(_ int64, validator types.ValidatorI) + +bool { + switch validator.GetStatus() { + case types.Bonded: + bonded = bonded.Add(validator.GetTokens()) + case types.Unbonding, types.Unbonded: + notBonded = notBonded.Add(validator.GetTokens()) + +default: + panic("invalid validator status") +} + +return false +}) + +k.IterateUnbondingDelegations(ctx, func(_ int64, ubd types.UnbondingDelegation) + +bool { + for _, entry := range ubd.Entries { + notBonded = notBonded.Add(entry.Balance) +} + +return false +}) + poolBonded := k.bankKeeper.GetBalance(ctx, bondedPool.GetAddress(), bondDenom) + poolNotBonded := k.bankKeeper.GetBalance(ctx, notBondedPool.GetAddress(), bondDenom) + broken := !poolBonded.Amount.Equal(bonded) || !poolNotBonded.Amount.Equal(notBonded) + + / Bonded tokens should equal sum of tokens with bonded validators + / Not-bonded tokens should equal unbonding delegations plus tokens on unbonded validators + return sdk.FormatInvariant(types.ModuleName, "bonded and not bonded module account coins", fmt.Sprintf( + "\tPool's bonded tokens: %v\n"+ + "\tsum of bonded tokens: %v\n"+ + "not bonded token invariance:\n"+ + "\tPool's not 
bonded tokens: %v\n"+ + "\tsum of not bonded tokens: %v\n"+ + "module accounts total (bonded + not bonded):\n"+ + "\tModule Accounts' tokens: %v\n"+ + "\tsum tokens: %v\n", + poolBonded, bonded, poolNotBonded, notBonded, poolBonded.Add(poolNotBonded), bonded.Add(notBonded))), broken +} +} + +/ NonNegativePowerInvariant checks that all stored validators have >= 0 power. +func NonNegativePowerInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + broken bool + ) + iterator := k.ValidatorsPowerStoreIterator(ctx) + for ; iterator.Valid(); iterator.Next() { + validator, found := k.GetValidator(ctx, iterator.Value()) + if !found { + panic(fmt.Sprintf("validator record not found for address: %X\n", iterator.Value())) +} + powerKey := types.GetValidatorsByPowerIndexKey(validator, k.PowerReduction(ctx)) + if !bytes.Equal(iterator.Key(), powerKey) { + broken = true + msg += fmt.Sprintf("power store invariance:\n\tvalidator.Power: %v"+ + "\n\tkey should be: %v\n\tkey in store: %v\n", + validator.GetConsensusPower(k.PowerReduction(ctx)), powerKey, iterator.Key()) +} + if validator.Tokens.IsNegative() { + broken = true + msg += fmt.Sprintf("\tnegative tokens for validator: %v\n", validator) +} + +} + +iterator.Close() + +return sdk.FormatInvariant(types.ModuleName, "nonnegative power", fmt.Sprintf("found invalid validator powers\n%s", msg)), broken +} +} + +/ PositiveDelegationInvariant checks that all stored delegations have > 0 shares. 
+func PositiveDelegationInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + count int + ) + delegations := k.GetAllDelegations(ctx) + for _, delegation := range delegations { + if delegation.Shares.IsNegative() { + count++ + msg += fmt.Sprintf("\tdelegation with negative shares: %+v\n", delegation) +} + if delegation.Shares.IsZero() { + count++ + msg += fmt.Sprintf("\tdelegation with zero shares: %+v\n", delegation) +} + +} + broken := count != 0 + + return sdk.FormatInvariant(types.ModuleName, "positive delegations", fmt.Sprintf( + "%d invalid delegations found\n%s", count, msg)), broken +} +} + +/ DelegatorSharesInvariant checks whether all the delegator shares which persist +/ in the delegator object add up to the correct total delegator shares +/ amount stored in each validator. +func DelegatorSharesInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + broken bool + ) + validators := k.GetAllValidators(ctx) + validatorsDelegationShares := map[string]math.LegacyDec{ +} + + / initialize a map: validator -> its delegation shares + for _, validator := range validators { + validatorsDelegationShares[validator.GetOperator().String()] = math.LegacyZeroDec() +} + + / iterate through all the delegations to calculate the total delegation shares for each validator + delegations := k.GetAllDelegations(ctx) + for _, delegation := range delegations { + delegationValidatorAddr := delegation.GetValidatorAddr().String() + validatorDelegationShares := validatorsDelegationShares[delegationValidatorAddr] + validatorsDelegationShares[delegationValidatorAddr] = validatorDelegationShares.Add(delegation.Shares) +} + + / for each validator, check if its total delegation shares calculated from the step above equals to its expected delegation shares + for _, validator := range validators { + expValTotalDelShares := validator.GetDelegatorShares() + 
calculatedValTotalDelShares := validatorsDelegationShares[validator.GetOperator().String()] + if !calculatedValTotalDelShares.Equal(expValTotalDelShares) { + broken = true + msg += fmt.Sprintf("broken delegator shares invariance:\n"+ + "\tvalidator.DelegatorShares: %v\n"+ + "\tsum of Delegator.Shares: %v\n", expValTotalDelShares, calculatedValTotalDelShares) +} + +} + +return sdk.FormatInvariant(types.ModuleName, "delegator shares", msg), broken +} +} +``` + +For more, see an example of [`Invariant`s implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/invariants.go). + +## Invariant Registry + +The `InvariantRegistry` is a registry where the `Invariant`s of all the modules of an application are registered. There is only one `InvariantRegistry` per **application**, meaning module developers need not implement their own `InvariantRegistry` when building a module. **All module developers need to do is to register their modules' invariants in the `InvariantRegistry`, as explained in the section above**. The rest of this section gives more information on the `InvariantRegistry` itself, and does not contain anything directly relevant to module developers. + +At its core, the `InvariantRegistry` is defined in the Cosmos SDK as an interface: + +```go expandable +package types + +import "fmt" + +/ An Invariant is a function which tests a particular invariant. +/ The invariant returns a descriptive message about what happened +/ and a boolean indicating whether the invariant has been broken. +/ The simulator will then halt and print the logs. +type Invariant func(ctx Context) (string, bool) + +/ Invariants defines a group of invariants +type Invariants []Invariant + +/ expected interface for registering invariants +type InvariantRegistry interface { + RegisterRoute(moduleName, route string, invar Invariant) +} + +/ FormatInvariant returns a standardized invariant message. 
+func FormatInvariant(module, name, msg string) + +string { + return fmt.Sprintf("%s: %s invariant\n%s\n", module, name, msg) +} +``` + +Typically, this interface is implemented in the `keeper` of a specific module. The most used implementation of an `InvariantRegistry` can be found in the `crisis` module: + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "time" + "cosmossdk.io/collections" + "cosmossdk.io/core/address" + "cosmossdk.io/log" + + storetypes "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/crisis/types" +) + +/ Keeper - crisis keeper +type Keeper struct { + routes []types.InvarRoute + invCheckPeriod uint + storeService storetypes.KVStoreService + cdc codec.BinaryCodec + + / the address capable of executing a MsgUpdateParams message. Typically, this + / should be the x/gov module account. + authority string + + supplyKeeper types.SupplyKeeper + + feeCollectorName string / name of the FeeCollector ModuleAccount + + addressCodec address.Codec + + Schema collections.Schema + ConstantFee collections.Item[sdk.Coin] +} + +/ NewKeeper creates a new Keeper object +func NewKeeper( + cdc codec.BinaryCodec, storeService storetypes.KVStoreService, invCheckPeriod uint, + supplyKeeper types.SupplyKeeper, feeCollectorName, authority string, ac address.Codec, +) *Keeper { + sb := collections.NewSchemaBuilder(storeService) + k := &Keeper{ + storeService: storeService, + cdc: cdc, + routes: make([]types.InvarRoute, 0), + invCheckPeriod: invCheckPeriod, + supplyKeeper: supplyKeeper, + feeCollectorName: feeCollectorName, + authority: authority, + addressCodec: ac, + ConstantFee: collections.NewItem(sb, types.ConstantFeeKey, "constant_fee", codec.CollValue[sdk.Coin](/docs/sdk/next/documentation/module-system/cdc)), +} + +schema, err := sb.Build() + if err != nil { + panic(err) +} + +k.Schema = schema + return k +} + +/ GetAuthority returns the x/crisis 
module's authority. +func (k *Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ Logger returns a module-specific logger. +func (k *Keeper) + +Logger(ctx context.Context) + +log.Logger { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +return sdkCtx.Logger().With("module", "x/"+types.ModuleName) +} + +/ RegisterRoute register the routes for each of the invariants +func (k *Keeper) + +RegisterRoute(moduleName, route string, invar sdk.Invariant) { + invarRoute := types.NewInvarRoute(moduleName, route, invar) + +k.routes = append(k.routes, invarRoute) +} + +/ Routes - return the keeper's invariant routes +func (k *Keeper) + +Routes() []types.InvarRoute { + return k.routes +} + +/ Invariants returns a copy of all registered Crisis keeper invariants. +func (k *Keeper) + +Invariants() []sdk.Invariant { + invars := make([]sdk.Invariant, len(k.routes)) + for i, route := range k.routes { + invars[i] = route.Invar +} + +return invars +} + +/ AssertInvariants asserts all registered invariants. If any invariant fails, +/ the method panics. +func (k *Keeper) + +AssertInvariants(ctx sdk.Context) { + logger := k.Logger(ctx) + start := time.Now() + invarRoutes := k.Routes() + n := len(invarRoutes) + for i, ir := range invarRoutes { + logger.Info("asserting crisis invariants", "inv", fmt.Sprint(i+1, "/", n), "name", ir.FullRoute()) + +invCtx, _ := ctx.CacheContext() + if res, stop := ir.Invar(invCtx); stop { + / TODO: Include app name as part of context to allow for this to be + / variable. + panic(fmt.Errorf("invariant broken: %s\n"+ + "\tCRITICAL please submit the following transaction:\n"+ + "\t\t tx crisis invariant-broken %s %s", res, ir.ModuleName, ir.Route)) +} + +} + diff := time.Since(start) + +logger.Info("asserted all invariants", "duration", diff, "height", ctx.BlockHeight()) +} + +/ InvCheckPeriod returns the invariant checks period. 
+func (k *Keeper) + +InvCheckPeriod() + +uint { + return k.invCheckPeriod +} + +/ SendCoinsFromAccountToFeeCollector transfers amt to the fee collector account. +func (k *Keeper) + +SendCoinsFromAccountToFeeCollector(ctx context.Context, senderAddr sdk.AccAddress, amt sdk.Coins) + +error { + return k.supplyKeeper.SendCoinsFromAccountToModule(ctx, senderAddr, k.feeCollectorName, amt) +} +``` + +The `InvariantRegistry` is therefore typically instantiated by instantiating the `keeper` of the `crisis` module in the [application's constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function). + +`Invariant`s can be checked manually via [`message`s](/docs/sdk/next/documentation/module-system/messages-and-queries), but most often they are checked automatically at the end of each block. Here is an example from the `crisis` module: + +```go expandable +package crisis + +import ( + + "context" + "time" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis/types" +) + +/ check all registered invariants +func EndBlocker(ctx context.Context, k keeper.Keeper) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + sdkCtx := sdk.UnwrapSDKContext(ctx) + if k.InvCheckPeriod() == 0 || sdkCtx.BlockHeight()%int64(k.InvCheckPeriod()) != 0 { + / skip running the invariant check + return +} + +k.AssertInvariants(sdkCtx) +} +``` + +In both cases, if one of the `Invariant`s returns false, the `InvariantRegistry` can trigger special logic (e.g. have the application panic and print the `Invariant`s message in the log). 
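The register-then-assert flow described above can be sketched as a small, self-contained Go program, independent of the SDK. All names below (`registry`, `invarRoute`, `nonNegativeBalances`, the `map[string]int64` standing in for state) are illustrative, not the actual `crisis` module types:

```go
package main

import "fmt"

// Invariant mirrors the SDK signature: it returns a descriptive message
// and whether the invariant is broken.
type Invariant func(state map[string]int64) (string, bool)

// invarRoute pairs an invariant with its module name and route,
// as the crisis keeper's InvarRoute does.
type invarRoute struct {
	module, route string
	invar         Invariant
}

type registry struct{ routes []invarRoute }

// RegisterRoute appends an invariant route, mirroring Keeper.RegisterRoute.
func (r *registry) RegisterRoute(module, route string, inv Invariant) {
	r.routes = append(r.routes, invarRoute{module, route, inv})
}

// AssertInvariants runs every registered invariant and reports the first
// broken one, mirroring Keeper.AssertInvariants (minus the panic).
func (r *registry) AssertInvariants(state map[string]int64) (string, bool) {
	for _, ir := range r.routes {
		if msg, broken := ir.invar(state); broken {
			return fmt.Sprintf("%s/%s: %s", ir.module, ir.route, msg), true
		}
	}
	return "", false
}

// nonNegativeBalances is a toy invariant: no account may go negative.
func nonNegativeBalances(state map[string]int64) (string, bool) {
	for acct, bal := range state {
		if bal < 0 {
			return fmt.Sprintf("account %s has negative balance %d", acct, bal), true
		}
	}
	return "", false
}

func main() {
	r := &registry{}
	r.RegisterRoute("bank", "nonnegative-balances", nonNegativeBalances)

	if msg, broken := r.AssertInvariants(map[string]int64{"alice": 10, "bob": -3}); broken {
		fmt.Println("invariant broken:", msg)
	}
}
```

In the real SDK the state argument is a cached `sdk.Context` rather than a map, so a broken invariant can halt the chain without partial writes leaking into the store.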
diff --git a/docs/sdk/next/documentation/module-system/keeper.mdx b/docs/sdk/next/documentation/module-system/keeper.mdx new file mode 100644 index 00000000..74b9186b --- /dev/null +++ b/docs/sdk/next/documentation/module-system/keeper.mdx @@ -0,0 +1,370 @@ +--- +title: Keepers +--- + +## Synopsis + +`Keeper`s refer to a Cosmos SDK abstraction whose role is to manage access to the subset of the state defined by various modules. `Keeper`s are module-specific, i.e. the subset of state defined by a module can only be accessed by a `keeper` defined in said module. If a module needs to access the subset of state defined by another module, a reference to the second module's internal `keeper` needs to be passed to the first one. This is done in `app.go` during the instantiation of module keepers. + + +**Pre-requisite Readings** + +- [Introduction to Cosmos SDK Modules](/docs/sdk/next/documentation/module-system/intro) + + + +## Motivation + +The Cosmos SDK is a framework that makes it easy for developers to build complex decentralized applications from scratch, mainly by composing modules together. As the ecosystem of open-source modules for the Cosmos SDK expands, it will become increasingly likely that some of these modules contain vulnerabilities, as a result of the negligence or malice of their developers. + +The Cosmos SDK adopts an [object-capabilities-based approach](/docs/sdk/next/documentation/core-concepts/ocap) to help developers better protect their application from unwanted inter-module interactions, and `keeper`s are at the core of this approach. A `keeper` can be considered quite literally to be the gatekeeper of a module's store(s). Each store (typically an [`IAVL` Store](/docs/sdk/next/documentation/state-storage/store#iavl-store)) defined within a module comes with a `storeKey`, which grants unlimited access to it. 
The module's `keeper` holds this `storeKey` (which should otherwise remain unexposed), and defines [methods](#implementing-methods) for reading and writing to the store(s). + +The core idea behind the object-capabilities approach is to only reveal what is necessary to get the work done. In practice, this means that instead of handling permissions of modules through access-control lists, module `keeper`s are passed a reference to the specific instance of the other modules' `keeper`s that they need to access (this is done in the [application's constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function)). As a consequence, a module can only interact with the subset of state defined in another module via the methods exposed by the instance of the other module's `keeper`. This is a great way for developers to control the interactions that their own module can have with modules developed by external developers. + +## Type Definition + +`keeper`s are generally implemented in a `/keeper/keeper.go` file located in the module's folder. 
By convention, the type `keeper` of a module is simply named `Keeper` and usually follows the following structure: + +```go +type Keeper struct { + / External keepers, if any + + / Store key(s) + + / codec + + / authority +} +``` + +For example, here is the type definition of the `keeper` from the `staking` module: + +```go expandable +package keeper + +import ( + + "fmt" + "cosmossdk.io/log" + "cosmossdk.io/math" + abci "github.com/cometbft/cometbft/abci/types" + + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ Implements ValidatorSet interface +var _ types.ValidatorSet = Keeper{ +} + +/ Implements DelegationSet interface +var _ types.DelegationSet = Keeper{ +} + +/ Keeper of the x/staking store +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + authKeeper types.AccountKeeper + bankKeeper types.BankKeeper + hooks types.StakingHooks + authority string +} + +/ NewKeeper creates a new staking Keeper instance +func NewKeeper( + cdc codec.BinaryCodec, + key storetypes.StoreKey, + ak types.AccountKeeper, + bk types.BankKeeper, + authority string, +) *Keeper { + / ensure bonded and not bonded module accounts are set + if addr := ak.GetModuleAddress(types.BondedPoolName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.BondedPoolName)) +} + if addr := ak.GetModuleAddress(types.NotBondedPoolName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.NotBondedPoolName)) +} + + / ensure that authority is a valid AccAddress + if _, err := ak.AddressCodec().StringToBytes(authority); err != nil { + panic("authority is not a valid acc address") +} + +return &Keeper{ + storeKey: key, + cdc: cdc, + authKeeper: ak, + bankKeeper: bk, + hooks: nil, + authority: authority, +} +} + +/ Logger returns a module-specific logger. 
+func (k Keeper) + +Logger(ctx sdk.Context) + +log.Logger { + return ctx.Logger().With("module", "x/"+types.ModuleName) +} + +/ Hooks gets the hooks for staking *Keeper { + func (k *Keeper) + +Hooks() + +types.StakingHooks { + if k.hooks == nil { + / return a no-op implementation if no hooks are set + return types.MultiStakingHooks{ +} + +} + +return k.hooks +} + +/ SetHooks Set the validator hooks. In contrast to other receivers, this method must take a pointer due to nature +/ of the hooks interface and SDK start up sequence. +func (k *Keeper) + +SetHooks(sh types.StakingHooks) { + if k.hooks != nil { + panic("cannot set validator hooks twice") +} + +k.hooks = sh +} + +/ GetLastTotalPower Load the last total validator power. +func (k Keeper) + +GetLastTotalPower(ctx sdk.Context) + +math.Int { + store := ctx.KVStore(k.storeKey) + bz := store.Get(types.LastTotalPowerKey) + if bz == nil { + return math.ZeroInt() +} + ip := sdk.IntProto{ +} + +k.cdc.MustUnmarshal(bz, &ip) + +return ip.Int +} + +/ SetLastTotalPower Set the last total validator power. +func (k Keeper) + +SetLastTotalPower(ctx sdk.Context, power math.Int) { + store := ctx.KVStore(k.storeKey) + bz := k.cdc.MustMarshal(&sdk.IntProto{ + Int: power +}) + +store.Set(types.LastTotalPowerKey, bz) +} + +/ GetAuthority returns the x/staking module's authority. +func (k Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ SetValidatorUpdates sets the ABCI validator power updates for the current block. +func (k Keeper) + +SetValidatorUpdates(ctx sdk.Context, valUpdates []abci.ValidatorUpdate) { + store := ctx.KVStore(k.storeKey) + bz := k.cdc.MustMarshal(&types.ValidatorUpdates{ + Updates: valUpdates +}) + +store.Set(types.ValidatorUpdatesKey, bz) +} + +/ GetValidatorUpdates returns the ABCI validator power updates within the current block. 
+func (k Keeper) + +GetValidatorUpdates(ctx sdk.Context) []abci.ValidatorUpdate { + store := ctx.KVStore(k.storeKey) + bz := store.Get(types.ValidatorUpdatesKey) + +var valUpdates types.ValidatorUpdates + k.cdc.MustUnmarshal(bz, &valUpdates) + +return valUpdates.Updates +} +``` + +Let us go through the different parameters: + +- An expected `keeper` is a `keeper` external to a module that is required by the internal `keeper` of said module. External `keeper`s are listed in the internal `keeper`'s type definition as interfaces. These interfaces are themselves defined in an `expected_keepers.go` file in the root of the module's folder. In this context, interfaces are used to reduce the number of dependencies, as well as to facilitate the maintenance of the module itself. +- `storeKey`s grant access to the store(s) of the [multistore](/docs/sdk/next/documentation/state-storage/store) managed by the module. They should always remain unexposed to external modules. +- `cdc` is the [codec](/docs/sdk/next/documentation/protocol-development/encoding) used to marshall and unmarshall structs to/from `[]byte`. The `cdc` can be any of `codec.BinaryCodec`, `codec.JSONCodec` or `codec.Codec` based on your requirements. It can be either a proto or amino codec as long as they implement these interfaces. +- The authority listed is a module account or user account that has the right to change module level parameters. Previously this was handled by the param module, which has been deprecated. + +Of course, it is possible to define different types of internal `keeper`s for the same module (e.g. a read-only `keeper`). Each type of `keeper` comes with its own constructor function, which is called from the [application's constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy). This is where `keeper`s are instantiated, and where developers make sure to pass correct instances of modules' `keeper`s to other modules that require them. 
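The expected-keeper wiring described above can be sketched outside the SDK. Here a hypothetical module declares only the narrow `BankKeeper` interface it actually calls, and the concrete type injected in `main` stands in for the real bank keeper passed in `app.go` (all names below are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// BankKeeper is the "expected keeper" interface the module would declare
// in its own expected_keepers.go: only the two methods it actually uses,
// not the bank module's full keeper surface.
type BankKeeper interface {
	GetBalance(addr string) int64
	SendCoins(from, to string, amt int64) error
}

// Keeper is the module's own keeper; it holds a reference to the expected
// interface, wired up in the application's constructor function.
type Keeper struct {
	bank BankKeeper
}

func NewKeeper(bk BankKeeper) Keeper { return Keeper{bank: bk} }

// ChargeFee interacts with the bank state only through the narrow
// surface exposed by BankKeeper.
func (k Keeper) ChargeFee(payer string, fee int64) error {
	if k.bank.GetBalance(payer) < fee {
		return errors.New("insufficient funds")
	}
	return k.bank.SendCoins(payer, "fee_collector", fee)
}

// memBank is a stand-in implementation used for the sketch.
type memBank struct{ balances map[string]int64 }

func (b *memBank) GetBalance(addr string) int64 { return b.balances[addr] }
func (b *memBank) SendCoins(from, to string, amt int64) error {
	b.balances[from] -= amt
	b.balances[to] += amt
	return nil
}

func main() {
	bank := &memBank{balances: map[string]int64{"alice": 100}}
	k := NewKeeper(bank) // in a real app this happens in app.go
	if err := k.ChargeFee("alice", 30); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println(bank.balances["alice"], bank.balances["fee_collector"])
}
```

Because `Keeper` depends on the interface rather than the concrete bank keeper, the module cannot reach state the interface does not expose — which is the object-capabilities property the section describes.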
+ +## Implementing Methods + +`Keeper`s primarily expose getter and setter methods for the store(s) managed by their module. These methods should remain as simple as possible and strictly be limited to getting or setting the requested value, as validity checks should have already been performed by the [`Msg` server](/docs/sdk/next/documentation/module-system/msg-services) when `keeper`s' methods are called. + +Typically, a _getter_ method will have the following signature: + +```go +func (k Keeper) Get(ctx context.Context, key string) returnType +``` + +and the method will go through the following steps: + +1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. Then it's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety. +2. If it exists, get the `[]byte` value stored at location `[]byte(key)` using the `Get(key []byte)` method of the store. +3. Unmarshal the retrieved value from `[]byte` to `returnType` using the codec `cdc`. Return the value. + +Similarly, a _setter_ method will have the following signature: + +```go +func (k Keeper) Set(ctx context.Context, key string, value valueType) +``` + +and the method will go through the following steps: + +1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. It's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety. +2. Marshal `value` to `[]byte` using the codec `cdc`. +3. Set the encoded value in the store at location `key` using the `Set(key []byte, value []byte)` method of the store. + +For more, see an example of a `keeper`'s [methods implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/keeper.go).
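The getter and setter steps above can be sketched against an in-memory store, with `encoding/json` standing in for the codec and a plain map standing in for the SDK's `KVStore` and `prefix.Store` (the `Coin` type and all names below are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// kvStore is a toy stand-in for the SDK's KVStore interface.
type kvStore map[string][]byte

func (s kvStore) Get(key []byte) []byte { return s[string(key)] }
func (s kvStore) Set(key, value []byte) { s[string(key)] = value }

// Coin is a sample value type the keeper stores.
type Coin struct {
	Denom  string `json:"denom"`
	Amount int64  `json:"amount"`
}

// Keeper holds the store; in the SDK it would hold a storeKey and a codec.
type Keeper struct {
	store  kvStore
	prefix string // stands in for prefix.Store scoping
}

// SetBalance follows the setter steps: retrieve store, marshal, Set.
func (k Keeper) SetBalance(addr string, c Coin) error {
	bz, err := json.Marshal(c) // step 2: marshal with the codec
	if err != nil {
		return err
	}
	k.store.Set([]byte(k.prefix+addr), bz) // step 3: write at prefixed key
	return nil
}

// GetBalance follows the getter steps: retrieve store, Get, unmarshal.
func (k Keeper) GetBalance(addr string) (Coin, bool) {
	bz := k.store.Get([]byte(k.prefix + addr)) // step 2: read raw bytes
	if bz == nil {
		return Coin{}, false
	}
	var c Coin
	if err := json.Unmarshal(bz, &c); err != nil { // step 3: unmarshal
		return Coin{}, false
	}
	return c, true
}

func main() {
	k := Keeper{store: kvStore{}, prefix: "balances/"}
	_ = k.SetBalance("alice", Coin{Denom: "stake", Amount: 42})
	if c, ok := k.GetBalance("alice"); ok {
		fmt.Printf("alice: %d%s\n", c.Amount, c.Denom)
	}
}
```

The real keeper performs exactly this shape of round trip, with the proto codec replacing JSON and the prefixed multistore replacing the map.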
+ +The [module `KVStore`](/docs/sdk/next/documentation/state-storage/store#kvstore-and-commitkvstore-interfaces) also provides an `Iterator()` method which returns an `Iterator` object to iterate over a domain of keys. + +This is an example from the `auth` module to iterate accounts: + +```go expandable +package keeper + +import ( + + "context" + "errors" + "cosmossdk.io/collections" + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ NewAccountWithAddress implements AccountKeeperI. +func (ak AccountKeeper) + +NewAccountWithAddress(ctx context.Context, addr sdk.AccAddress) + +sdk.AccountI { + acc := ak.proto() + err := acc.SetAddress(addr) + if err != nil { + panic(err) +} + +return ak.NewAccount(ctx, acc) +} + +/ NewAccount sets the next account number to a given account interface +func (ak AccountKeeper) + +NewAccount(ctx context.Context, acc sdk.AccountI) + +sdk.AccountI { + if err := acc.SetAccountNumber(ak.NextAccountNumber(ctx)); err != nil { + panic(err) +} + +return acc +} + +/ HasAccount implements AccountKeeperI. +func (ak AccountKeeper) + +HasAccount(ctx context.Context, addr sdk.AccAddress) + +bool { + has, _ := ak.Accounts.Has(ctx, addr) + +return has +} + +/ GetAccount implements AccountKeeperI. +func (ak AccountKeeper) + +GetAccount(ctx context.Context, addr sdk.AccAddress) + +sdk.AccountI { + acc, err := ak.Accounts.Get(ctx, addr) + if err != nil && !errors.Is(err, collections.ErrNotFound) { + panic(err) +} + +return acc +} + +/ GetAllAccounts returns all accounts in the accountKeeper. +func (ak AccountKeeper) + +GetAllAccounts(ctx context.Context) (accounts []sdk.AccountI) { + ak.IterateAccounts(ctx, func(acc sdk.AccountI) (stop bool) { + accounts = append(accounts, acc) + +return false +}) + +return accounts +} + +/ SetAccount implements AccountKeeperI. 
+func (ak AccountKeeper) + +SetAccount(ctx context.Context, acc sdk.AccountI) { + err := ak.Accounts.Set(ctx, acc.GetAddress(), acc) + if err != nil { + panic(err) +} +} + +/ RemoveAccount removes an account for the account mapper store. +/ NOTE: this will cause supply invariant violation if called +func (ak AccountKeeper) + +RemoveAccount(ctx context.Context, acc sdk.AccountI) { + err := ak.Accounts.Remove(ctx, acc.GetAddress()) + if err != nil { + panic(err) +} +} + +/ IterateAccounts iterates over all the stored accounts and performs a callback function. +/ Stops iteration when callback returns true. +func (ak AccountKeeper) + +IterateAccounts(ctx context.Context, cb func(account sdk.AccountI) (stop bool)) { + err := ak.Accounts.Walk(ctx, nil, func(_ sdk.AccAddress, value sdk.AccountI) (bool, error) { + return cb(value), nil +}) + if err != nil { + panic(err) +} +} +``` diff --git a/docs/sdk/next/documentation/module-system/messages-and-queries.mdx b/docs/sdk/next/documentation/module-system/messages-and-queries.mdx new file mode 100644 index 00000000..d3f582c5 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/messages-and-queries.mdx @@ -0,0 +1,1703 @@ +--- +title: Messages and Queries +--- + +## Synopsis + +`Msg`s and `Queries` are the two primary objects handled by modules. Most of the core components defined in a module, like `Msg` services, `keeper`s and `Query` services, exist to process `message`s and `queries`. + + +**Pre-requisite Readings** + +- [Introduction to Cosmos SDK Modules](/docs/sdk/next/documentation/module-system/intro) + + + +## Messages + +`Msg`s are objects whose end-goal is to trigger state-transitions. They are wrapped in [transactions](/docs/sdk/next/documentation/protocol-development/transactions), which may contain one or more of them. 
+ +When a transaction is relayed from the underlying consensus engine to the Cosmos SDK application, it is first decoded by [`BaseApp`](/docs/sdk/next/documentation/application-framework/baseapp). Then, each message contained in the transaction is extracted and routed to the appropriate module via `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services). For a more detailed explanation of the lifecycle of a transaction, click [here](/docs/sdk/next/documentation/protocol-development/tx-lifecycle). + +### `Msg` Services + +Defining Protobuf `Msg` services is the recommended way to handle messages. A Protobuf `Msg` service should be created for each module, typically in `tx.proto` (see more info about [conventions and naming](/docs/sdk/next/documentation/protocol-development/encoding#faq)). It must have an RPC service method defined for each message in the module. + +Each `Msg` service method must have exactly one argument, which must implement the `sdk.Msg` interface, and a Protobuf response. The naming convention is to call the RPC argument `Msg` and the RPC response `MsgResponse`. For example: + +```protobuf + rpc Send(MsgSend) returns (MsgSendResponse); +``` + +See an example of a `Msg` service definition from `x/bank` module: + +```protobuf +// Msg defines the bank Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + // Send defines a method for sending coins from one account to another account. + rpc Send(MsgSend) returns (MsgSendResponse); + + // MultiSend defines a method for sending coins from some accounts to other accounts. + rpc MultiSend(MsgMultiSend) returns (MsgMultiSendResponse); + + // UpdateParams defines a governance operation for updating the x/bank module parameters. + // The authority is defined in the keeper. 
+ // + // Since: cosmos-sdk 0.47 + rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse); + + // SetSendEnabled is a governance operation for setting the SendEnabled flag + // on any number of Denoms. Only the entries to add or update should be + // included. Entries that already exist in the store, but that aren't + // included in this message, will be left unchanged. + // + // Since: cosmos-sdk 0.47 + rpc SetSendEnabled(MsgSetSendEnabled) returns (MsgSetSendEnabledResponse); +} +``` + +### `sdk.Msg` Interface + +`sdk.Msg` is an alias of `proto.Message`. + +To attach a `ValidateBasic()` method to a message, add the method to the message type so that it satisfies the `HasValidateBasic` interface. + +```go expandable +package types + +import ( + + "encoding/json" + fmt "fmt" + strings "strings" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "github.com/cosmos/cosmos-sdk/codec" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) + +type ( + / Msg defines the interface a transaction message needed to fulfill. + Msg = proto.Message + + / LegacyMsg defines the interface a transaction message needed to fulfill up through + / v0.47. + LegacyMsg interface { + Msg + + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / HasMsgs defines an interface a transaction must fulfill.
+ HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. + GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() []byte +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / HasValidateBasic defines a type that has a ValidateBasic method. + / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. 
+func MsgTypeURL(msg proto.Message) + +string { + if m, ok := msg.(protov2.Message); ok { + return "/" + string(m.ProtoReflect().Descriptor().FullName()) +} + +return "/" + proto.MessageName(msg) +} + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} +``` + +In 0.50+ signers from the `GetSigners()` call are automated via a protobuf annotation. + +Read more about the signer field [here](/docs/sdk/next/documentation/protocol-development/protobuf-annotations). + +```protobuf + option (cosmos.msg.v1.signer) = "from_address"; +``` + +If there is a need for custom signers then there is an alternative path which can be taken. A function which returns `signing.CustomGetSigner` for a specific message can be defined. + +```go expandable +func ProvideBankSendTransactionGetSigners() + +signing.CustomGetSigner { + + / Extract the signer from the signature. + signer, err := coretypes.LatestSigner(Tx).Sender(ethTx) + if err != nil { + return nil, err +} + + / Return the signer in the required format. + return [][]byte{ + signer.Bytes() +}, nil +} +``` + +When using dependency injection (depinject) this can be provided to the application via the provide method. 
+ +```go +depinject.Provide(banktypes.ProvideBankSendTransactionGetSigners) +``` + +The Cosmos SDK uses Protobuf definitions to generate client and server code: + +- The `MsgServer` interface defines the server API for the `Msg` service, and its implementation is described as part of the [`Msg` services](/docs/sdk/next/documentation/module-system/msg-services) documentation. +- Structures are generated for all RPC request and response types. + +A `RegisterMsgServer` method is also generated and should be used to register the module's `MsgServer` implementation in the `RegisterServices` method from the [`AppModule` interface](/docs/sdk/next/documentation/module-system/module-manager#appmodule). + +In order for clients (CLI and grpc-gateway) to have these URLs registered, the Cosmos SDK provides the function `RegisterMsgServiceDesc(registry codectypes.InterfaceRegistry, sd *grpc.ServiceDesc)` that should be called inside the module's [`RegisterInterfaces`](/docs/sdk/next/documentation/module-system/module-manager#appmodulebasic) method, using the proto-generated `&_Msg_serviceDesc` as the `*grpc.ServiceDesc` argument. + +## Queries + +A `query` is a request for information made by end-users of applications through an interface and processed by a full-node. A `query` is received by a full-node through its consensus engine and relayed to the application via the ABCI. It is then routed to the appropriate module via `BaseApp`'s `QueryRouter` so that it can be processed by the module's [query service](/docs/sdk/next/documentation/module-system/query-services). For a deeper look at the lifecycle of a `query`, click [here](/docs/sdk/next/api-reference/service-apis/query-lifecycle). + +### gRPC Queries + +Queries should be defined using [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services). A `Query` service should be created per module in `query.proto`. This service lists endpoints starting with `rpc`.
+ +Here's an example of such a `Query` service definition: + +```protobuf +// Query defines the gRPC querier service. +service Query { + // Accounts returns all the existing accounts. + // + // When called from another module, this query might consume a high amount of + // gas if the pagination field is incorrectly set. + // + // Since: cosmos-sdk 0.43 + rpc Accounts(QueryAccountsRequest) returns (QueryAccountsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/accounts"; + } + + // Account returns account details based on address. + rpc Account(QueryAccountRequest) returns (QueryAccountResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/accounts/{address}"; + } + + // AccountAddressByID returns account address based on account number. + // + // Since: cosmos-sdk 0.46.2 + rpc AccountAddressByID(QueryAccountAddressByIDRequest) returns (QueryAccountAddressByIDResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/address_by_id/{id}"; + } + + // Params queries all parameters. + rpc Params(QueryParamsRequest) returns (QueryParamsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/params"; + } + + // ModuleAccounts returns all the existing module accounts. 
+ // + // Since: cosmos-sdk 0.46 + rpc ModuleAccounts(QueryModuleAccountsRequest) returns (QueryModuleAccountsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/module_accounts"; + } + + // ModuleAccountByName returns the module account info by module name + rpc ModuleAccountByName(QueryModuleAccountByNameRequest) returns (QueryModuleAccountByNameResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/module_accounts/{name}"; + } + + // Bech32Prefix queries bech32Prefix + // + // Since: cosmos-sdk 0.46 + rpc Bech32Prefix(Bech32PrefixRequest) returns (Bech32PrefixResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32"; + } + + // AddressBytesToString converts Account Address bytes to string + // + // Since: cosmos-sdk 0.46 + rpc AddressBytesToString(AddressBytesToStringRequest) returns (AddressBytesToStringResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32/{address_bytes}"; + } + + // AddressStringToBytes converts Address string to bytes + // + // Since: cosmos-sdk 0.46 + rpc AddressStringToBytes(AddressStringToBytesRequest) returns (AddressStringToBytesResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32/{address_string}"; + } + + // AccountInfo queries account info which is common to all account types. + // + // Since: cosmos-sdk 0.47 + rpc AccountInfo(QueryAccountInfoRequest) returns (QueryAccountInfoResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/account_info/{address}"; + } +} +``` + +As `proto.Message`s, generated `Response` types implement by default `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer). 
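As a minimal illustration of that contract, here is a hand-written type satisfying `fmt.Stringer` (the response name below is made up; generated types get an equivalent method from the proto code generator):

```go
package main

import "fmt"

// queryBalanceResponse stands in for a proto-generated Response type.
type queryBalanceResponse struct {
	Denom  string
	Amount int64
}

// String satisfies fmt.Stringer, so %v / %s formatting and logging
// render the response readably instead of as a raw struct dump.
func (r queryBalanceResponse) String() string {
	return fmt.Sprintf("balance:<denom:%q amount:%d>", r.Denom, r.Amount)
}

func main() {
	var s fmt.Stringer = queryBalanceResponse{Denom: "stake", Amount: 7}
	fmt.Println(s) // balance:<denom:"stake" amount:7>
}
```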
+
+A `RegisterQueryServer` method is also generated and should be used to register the module's query server in the `RegisterServices` method from the [`AppModule` interface](/docs/sdk/next/documentation/module-system/module-manager#appmodule).
+
+### Legacy Queries
+
+Before the introduction of Protobuf and gRPC in the Cosmos SDK, module developers usually did not define a specific `query` object, unlike `message`s. Instead, the Cosmos SDK took the simpler approach of defining each `query` with a `path`. The `path` contains the `query` type and all the arguments needed to process it. For most module queries, the `path` should look like the following:
+
+```text
+queryCategory/queryRoute/queryType/arg1/arg2/...
+```
+
+where:
+
+- `queryCategory` is the category of the `query`, typically `custom` for module queries. It is used to differentiate between different kinds of queries within `BaseApp`'s [`Query` method](/docs/sdk/next/documentation/application-framework/baseapp#query).
+- `queryRoute` is used by `BaseApp`'s [`queryRouter`](/docs/sdk/next/documentation/application-framework/baseapp#query-routing) to map the `query` to its module. Usually, `queryRoute` should be the name of the module.
+- `queryType` is used by the module's [`querier`](/docs/sdk/next/documentation/module-system/query-services#legacy-queriers) to map the `query` to the appropriate `querier function` within the module.
+- `args` are the actual arguments needed to process the `query`. They are filled out by the end-user. Note that for larger queries, you might prefer passing arguments in the `Data` field of the request `req` instead of the `path`.
+
+The `path` for each `query` must be defined by the module developer in the module's [command-line interface file](/docs/sdk/next/documentation/module-system/module-interfaces#query-commands).
Overall, there are three main components module developers need to implement to make the subset of the state defined by their module queryable:
+
+- A [`querier`](/docs/sdk/next/documentation/module-system/query-services#legacy-queriers), to process the `query` once it has been [routed to the module](/docs/sdk/next/documentation/application-framework/baseapp#query-routing).
+- [Query commands](/docs/sdk/next/documentation/module-system/module-interfaces#query-commands) in the module's CLI file, where the `path` for each `query` is specified.
+- `query` return types. Typically defined in a file `types/querier.go`, they specify the result type of each of the module's `queries`. These custom types must implement the `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer).
+
+### Store Queries
+
+Store queries access store keys directly. They use `clientCtx.QueryABCI(req abci.QueryRequest)` to return the full `abci.QueryResponse` with inclusion Merkle proofs.
+
+See the following example:
+
+```go expandable
+package baseapp
+
+import (
+
+	"context"
+	"crypto/sha256"
+	"fmt"
+	"os"
+	"sort"
+	"strings"
+	"syscall"
+	"time"
+
+	errorsmod "cosmossdk.io/errors"
+	"cosmossdk.io/store/rootmulti"
+	snapshottypes "cosmossdk.io/store/snapshots/types"
+	storetypes "cosmossdk.io/store/types"
+	"github.com/cockroachdb/errors"
+	abci "github.com/cometbft/cometbft/abci/types"
+	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+	"github.com/cosmos/gogoproto/proto"
+	"google.golang.org/grpc/codes"
+	grpcstatus "google.golang.org/grpc/status"
+	"github.com/cosmos/cosmos-sdk/codec"
+	"github.com/cosmos/cosmos-sdk/telemetry"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+/ Supported ABCI Query prefixes and paths
+const (
+	QueryPathApp    = "app"
+	QueryPathCustom = "custom"
+	QueryPathP2P    = "p2p"
+	QueryPathStore  = "store"
+
+	QueryPathBroadcastTx = "/cosmos.tx.v1beta1.Service/BroadcastTx"
+)
+
+func (app *BaseApp) + +InitChain(req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + if req.ChainId != app.chainID { + return nil, fmt.Errorf("invalid chain-id on InitChain; expected: %s, got: %s", app.chainID, req.ChainId) +} + + / On a new chain, we consider the init chain block height as 0, even though + / req.InitialHeight is 1 by default. + initHeader := cmtproto.Header{ + ChainID: req.ChainId, + Time: req.Time +} + +app.initialHeight = req.InitialHeight + + app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId) + + / Set the initial height, which will be used to determine if we are proposing + / or processing the first block or not. + app.initialHeight = req.InitialHeight + + / if req.InitialHeight is > 1, then we set the initial version on all stores + if req.InitialHeight > 1 { + initHeader.Height = req.InitialHeight + if err := app.cms.SetInitialVersion(req.InitialHeight); err != nil { + return nil, err +} + +} + + / initialize states with a correct header + app.setState(execModeFinalize, initHeader) + +app.setState(execModeCheck, initHeader) + + / Store the consensus params in the BaseApp's param store. Note, this must be + / done after the finalizeBlockState and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + err := app.StoreConsensusParams(app.finalizeBlockState.ctx, *req.ConsensusParams) + if err != nil { + return nil, err +} + +} + +defer func() { + / InitChain represents the state of the application BEFORE the first block, + / i.e. the genesis block. This means that when processing the app's InitChain + / handler, the block height is zero by default. However, after Commit is called + / the height needs to reflect the true block height. 
+ initHeader.Height = req.InitialHeight + app.checkState.ctx = app.checkState.ctx.WithBlockHeader(initHeader) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithBlockHeader(initHeader) +}() + if app.initChainer == nil { + return &abci.ResponseInitChain{ +}, nil +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithBlockGasMeter(storetypes.NewInfiniteGasMeter()) + +res, err := app.initChainer(app.finalizeBlockState.ctx, req) + if err != nil { + return nil, err +} + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + return nil, fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ) +} + +sort.Sort(abci.ValidatorUpdates(req.Validators)) + +sort.Sort(abci.ValidatorUpdates(res.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + return nil, fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i) +} + +} + +} + + / In the case of a new chain, AppHash will be the hash of an empty string. + / During an upgrade, it'll be the hash of the last committed block. + var appHash []byte + if !app.LastCommitID().IsZero() { + appHash = app.LastCommitID().Hash +} + +else { + / $ echo -n '' | sha256sum + / e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 + emptyHash := sha256.Sum256([]byte{ +}) + +appHash = emptyHash[:] +} + + / NOTE: We don't commit, but FinalizeBlock for block InitialHeight starts from + / this FinalizeBlockState. 
+ return &abci.ResponseInitChain{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: appHash, +}, nil +} + +func (app *BaseApp) + +Info(req *abci.RequestInfo) (*abci.ResponseInfo, error) { + lastCommitID := app.cms.LastCommitID() + +return &abci.ResponseInfo{ + Data: app.name, + Version: app.version, + AppVersion: app.appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +}, nil +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. +func (app *BaseApp) + +Query(_ context.Context, req *abci.RequestQuery) (resp *abci.ResponseQuery, err error) { + / add panic recovery for all queries + / + / Ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + resp = sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + +defer telemetry.MeasureSince(time.Now(), req.Path) + if req.Path == QueryPathBroadcastTx { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "can't route a broadcast tx message"), app.trace), nil +} + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req), nil +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), app.trace), nil +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + resp = handleQueryApp(app, path, req) + case QueryPathStore: + resp = handleQueryStore(app, 
path, *req) + case QueryPathP2P: + resp = handleQueryP2P(app, path) + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +return resp, nil +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ListSnapshots(req *abci.RequestListSnapshots) (*abci.ResponseListSnapshots, error) { + resp := &abci.ResponseListSnapshots{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp, nil +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return nil, err +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to convert ABCI snapshots", "err", err) + +return nil, err +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp, nil +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req *abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) { + if app.snapshotManager == nil { + return &abci.ResponseLoadSnapshotChunk{ +}, nil +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return nil, err +} + +return &abci.ResponseLoadSnapshotChunk{ + Chunk: chunk +}, nil +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req *abci.RequestOfferSnapshot) (*abci.ResponseOfferSnapshot, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT_FORMAT +}, nil + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil + + default: + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a + / different snapshot, so we ask CometBFT to abort all snapshot restoration. + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +ApplySnapshotChunk(req *abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +}, nil + + default: + app.logger.Error("failed to restore snapshot", "err", err) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req *abci.RequestCheckTx) (*abci.ResponseCheckTx, error) { + var mode execMode + switch { + case req.Type == abci.CheckTxType_New: + mode = execModeCheck + case req.Type == abci.CheckTxType_Recheck: + mode = execModeReCheck + + default: + return nil, fmt.Errorf("unknown RequestCheckTx type: %s", req.Type) +} + +gInfo, result, anteEvents, err := app.runTx(mode, req.Tx) + if err != nil { + return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace), nil +} + +return &abci.ResponseCheckTx{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +}, nil +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req *abci.RequestPrepareProposal) (resp *abci.ResponsePrepareProposal, err error) { + if app.prepareProposal == nil { + return nil, errors.New("PrepareProposal handler not set") +} + + / Always reset state given that PrepareProposal can timeout and be called + / again in a subsequent round. 
+ header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + +app.setState(execModePrepareProposal, header) + + / CometBFT must never call PrepareProposal with a height of 0. + / + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("PrepareProposal called with invalid height") +} + +app.prepareProposalState.ctx = app.getContextForProposal(app.prepareProposalState.ctx, req.Height). + WithVoteInfos(toVoteInfo(req.LocalLastCommit.Votes)). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithProposer(req.ProposerAddress). + WithExecMode(sdk.ExecModePrepareProposal). + WithCometInfo(prepareProposalInfo{ + req +}) + +app.prepareProposalState.ctx = app.prepareProposalState.ctx. + WithConsensusParams(app.GetConsensusParams(app.prepareProposalState.ctx)). + WithBlockGasMeter(app.getBlockGasMeter(app.prepareProposalState.ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = &abci.ResponsePrepareProposal{ +} + +} + +}() + +resp, err = app.prepareProposal(app.prepareProposalState.ctx, req) + if err != nil { + app.logger.Error("failed to prepare proposal", "height", req.Height, "error", err) + +return &abci.ResponsePrepareProposal{ +}, nil +} + +return resp, nil +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. 
In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ CometBFT that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { + if app.processProposal == nil { + return nil, errors.New("ProcessProposal handler not set") +} + + / CometBFT must never call ProcessProposal with a height of 0. + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("ProcessProposal called with invalid height") +} + + / Always reset state given that ProcessProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + +app.setState(execModeProcessProposal, header) + + / Since the application can get access to FinalizeBlock state and write to it, + / we must be sure to reset it in case ProcessProposal timeouts and is called + / again in a subsequent round. However, we only want to do this after we've + / processed the first block, as we want to avoid overwriting the finalizeState + / after state changes during InitChain. 
+ if req.Height > app.initialHeight { + app.setState(execModeFinalize, header) +} + +app.processProposalState.ctx = app.getContextForProposal(app.processProposalState.ctx, req.Height). + WithVoteInfos(req.ProposedLastCommit.Votes). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithCometInfo(cometInfo{ + ProposerAddress: req.ProposerAddress, + ValidatorsHash: req.NextValidatorsHash, + Misbehavior: req.Misbehavior, + LastCommit: req.ProposedLastCommit +}). + WithExecMode(sdk.ExecModeProcessProposal) + +app.processProposalState.ctx = app.processProposalState.ctx. + WithConsensusParams(app.GetConsensusParams(app.processProposalState.ctx)). + WithBlockGasMeter(app.getBlockGasMeter(app.processProposalState.ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +}() + +resp, err = app.processProposal(app.processProposalState.ctx, req) + if err != nil { + app.logger.Error("failed to process proposal", "height", req.Height, "error", err) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +return resp, nil +} + +/ ExtendVote implements the ExtendVote ABCI method and returns a ResponseExtendVote. +/ It calls the application's ExtendVote handler which is responsible for performing +/ application-specific business logic when sending a pre-commit for the NEXT +/ block height. The extensions response may be non-deterministic but must always +/ be returned, even if empty. +/ +/ Agreed upon vote extensions are made available to the proposer of the next +/ height and are committed in the subsequent height, i.e. H+2. 
An error is +/ returned if vote extensions are not enabled or if extendVote fails or panics. +func (app *BaseApp) + +ExtendVote(_ context.Context, req *abci.RequestExtendVote) (resp *abci.ResponseExtendVote, err error) { + / Always reset state given that ExtendVote and VerifyVoteExtension can timeout + / and be called again in a subsequent round. + emptyHeader := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height +} + +app.setState(execModeVoteExtension, emptyHeader) + if app.extendVote == nil { + return nil, errors.New("application ExtendVote handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(app.voteExtensionState.ctx) + if cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight <= 0 { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to ExtendVote at height %d", req.Height) +} + +app.voteExtensionState.ctx = app.voteExtensionState.ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVoteExtension) + + / add a deferred recover handler in case extendVote panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in ExtendVote", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +err = fmt.Errorf("recovered application panic in ExtendVote: %v", r) +} + +}() + +resp, err = app.extendVote(app.voteExtensionState.ctx, req) + if err != nil { + app.logger.Error("failed to extend vote", "height", req.Height, "error", err) + +return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} + +return resp, err +} + +/ VerifyVoteExtension implements the VerifyVoteExtension ABCI method and returns +/ a ResponseVerifyVoteExtension. 
It calls the applications' VerifyVoteExtension +/ handler which is responsible for performing application-specific business +/ logic in verifying a vote extension from another validator during the pre-commit +/ phase. The response MUST be deterministic. An error is returned if vote +/ extensions are not enabled or if verifyVoteExt fails or panics. +func (app *BaseApp) + +VerifyVoteExtension(req *abci.RequestVerifyVoteExtension) (resp *abci.ResponseVerifyVoteExtension, err error) { + if app.verifyVoteExt == nil { + return nil, errors.New("application VerifyVoteExtension handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(app.voteExtensionState.ctx) + if cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight <= 0 { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to VerifyVoteExtension at height %d", req.Height) +} + + / add a deferred recover handler in case verifyVoteExt panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in VerifyVoteExtension", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "validator", fmt.Sprintf("%X", req.ValidatorAddress), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in VerifyVoteExtension: %v", r) +} + +}() + +resp, err = app.verifyVoteExt(app.voteExtensionState.ctx, req) + if err != nil { + app.logger.Error("failed to verify vote extension", "height", req.Height, "error", err) + +return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_REJECT +}, nil +} + +return resp, err +} + +/ FinalizeBlock will execute the block proposal provided by RequestFinalizeBlock. +/ Specifically, it will execute an application's BeginBlock (if defined), followed +/ by the transactions in the proposal, finally followed by the application's +/ EndBlock (if defined). +/ +/ For each raw transaction, i.e. 
a byte slice, BaseApp will only execute it if +/ it adheres to the sdk.Tx interface. Otherwise, the raw transaction will be +/ skipped. This is to support compatibility with proposers injecting vote +/ extensions into the proposal, which should not themselves be executed in cases +/ where they adhere to the sdk.Tx interface. +func (app *BaseApp) + +FinalizeBlock(req *abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) { + var events []abci.Event + if err := app.validateFinalizeBlockHeight(req); err != nil { + return nil, err +} + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(storetypes.TraceContext( + map[string]any{"blockHeight": req.Height +}, + )) +} + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + + / Initialize the FinalizeBlock state. If this is the first block, it should + / already be initialized in InitChain. Otherwise app.finalizeBlockState will be + / nil, since it is reset on Commit. + if app.finalizeBlockState == nil { + app.setState(execModeFinalize, header) +} + +else { + / In the first block, app.finalizeBlockState.ctx will already be initialized + / by InitChain. Context is now updated with Header information. + app.finalizeBlockState.ctx = app.finalizeBlockState.ctx. + WithBlockHeader(header). + WithBlockHeight(req.Height) +} + gasMeter := app.getBlockGasMeter(app.finalizeBlockState.ctx) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash). + WithConsensusParams(app.GetConsensusParams(app.finalizeBlockState.ctx)). + WithVoteInfos(req.DecidedLastCommit.Votes). + WithExecMode(sdk.ExecModeFinalize) + if app.checkState != nil { + app.checkState.ctx = app.checkState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash) +} + beginBlock := app.beginBlock(req) + +events = append(events, beginBlock.Events...) 
+ + / Iterate over all raw transactions in the proposal and attempt to execute + / them, gathering the execution results. + / + / NOTE: Not all raw transactions may adhere to the sdk.Tx interface, e.g. + / vote extensions, so skip those. + txResults := make([]*abci.ExecTxResult, 0, len(req.Txs)) + for _, rawTx := range req.Txs { + if _, err := app.txDecoder(rawTx); err == nil { + txResults = append(txResults, app.deliverTx(rawTx)) +} + +} + if app.finalizeBlockState.ms.TracingEnabled() { + app.finalizeBlockState.ms = app.finalizeBlockState.ms.SetTracingContext(nil).(storetypes.CacheMultiStore) +} + +endBlock, err := app.endBlock(app.finalizeBlockState.ctx) + if err != nil { + return nil, err +} + +events = append(events, endBlock.Events...) + cp := app.GetConsensusParams(app.finalizeBlockState.ctx) + +return &abci.ResponseFinalizeBlock{ + Events: events, + TxResults: txResults, + ValidatorUpdates: endBlock.ValidatorUpdates, + ConsensusParamUpdates: &cp, + AppHash: app.workingHash(), +}, nil +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. 
+func (app *BaseApp) + +Commit() (*abci.ResponseCommit, error) { + header := app.finalizeBlockState.ctx.BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + if app.precommiter != nil { + app.precommiter(app.finalizeBlockState.ctx) +} + +rms, ok := app.cms.(*rootmulti.Store) + if ok { + rms.SetCommitHeader(header) +} + +app.cms.Commit() + resp := &abci.ResponseCommit{ + RetainHeight: retainHeight, +} + abciListeners := app.streamingManager.ABCIListeners + if len(abciListeners) > 0 { + ctx := app.finalizeBlockState.ctx + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + for _, abciListener := range abciListeners { + if err := abciListener.ListenCommit(ctx, *resp, changeSet); err != nil { + app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err) +} + +} + +} + + / Reset the CheckTx state to the latest committed. + / + / NOTE: This is safe because CometBFT holds a lock on the mempool for + / Commit. Use the header from this latest block. + app.setState(execModeCheck, header) + +app.finalizeBlockState = nil + if app.prepareCheckStater != nil { + app.prepareCheckStater(app.checkState.ctx) +} + +var halt bool + switch { + case app.haltHeight > 0 && uint64(header.Height) >= app.haltHeight: + halt = true + case app.haltTime > 0 && header.Time.Unix() >= int64(app.haltTime): + halt = true +} + if halt { + / Halt the binary and allow CometBFT to receive the ResponseCommit + / response with the commit ID hash. This will allow the node to successfully + / restart and process blocks assuming the halt configuration has been + / reset or moved to a more distant value. + app.halt() +} + +go app.snapshotManager.SnapshotIfApplicable(header.Height) + +return resp, nil +} + +/ workingHash gets the apphash that will be finalized in commit. +/ These writes will be persisted to the root multi-store (app.cms) + +and flushed to +/ disk in the Commit phase. 
This means that when the ABCI client requests Commit(), the application
+/ state transitions will have been flushed to disk, and as a result we already
+/ have an application Merkle root.
+func (app *BaseApp) workingHash() []byte {
+	/ Write the FinalizeBlock state into branched storage and commit the MultiStore.
+	/ The write to the FinalizeBlock state writes all state transitions to the root
+	/ MultiStore (app.cms) so when Commit() is called it persists those values.
+	app.finalizeBlockState.ms.Write()
+
+	/ Get the hash of all writes in order to return the apphash to the comet in finalizeBlock.
+	commitHash := app.cms.WorkingHash()
+	app.logger.Debug("hash of all writes", "workingHash", fmt.Sprintf("%X", commitHash))
+
+	return commitHash
+}
+
+/ halt attempts to gracefully shut down the node via SIGINT and SIGTERM, falling
+/ back on os.Exit if both fail.
+func (app *BaseApp) halt() {
+	app.logger.Info("halting node per configuration", "height", app.haltHeight, "time", app.haltTime)
+
+	p, err := os.FindProcess(os.Getpid())
+	if err == nil {
+		/ attempt cascading signals in case SIGINT fails (os dependent)
+		sigIntErr := p.Signal(syscall.SIGINT)
+		sigTermErr := p.Signal(syscall.SIGTERM)
+		if sigIntErr == nil || sigTermErr == nil {
+			return
+		}
+	}
+
+	/ Resort to exiting immediately if the process could not be found or killed
+	/ via SIGINT/SIGTERM signals.
+	app.logger.Info("failed to send SIGINT/SIGTERM; exiting...")
+	os.Exit(0)
+}
+
+func handleQueryApp(app *BaseApp, path []string, req *abci.RequestQuery) *abci.ResponseQuery {
+	if len(path) >= 2 {
+		switch path[1] {
+		case "simulate":
+			txBytes := req.Data
+
+			gInfo, res, err := app.Simulate(txBytes)
+			if err != nil {
+				return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to simulate tx"), app.trace)
+			}
+
+			simRes := &sdk.SimulationResponse{
+				GasInfo: gInfo,
+				Result:  res,
+			}
+
+			bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry)
+			if err != nil {
+				return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to JSON encode simulation response"), app.trace)
+			}
+
+			return &abci.ResponseQuery{
+				Codespace: sdkerrors.RootCodespace,
+				Height:    req.Height,
+				Value:     bz,
+			}
+
+		case "version":
+			return &abci.ResponseQuery{
+				Codespace: sdkerrors.RootCodespace,
+				Height:    req.Height,
+				Value:     []byte(app.version),
+			}
+
+		default:
+			return sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace)
+		}
+	}
+
+	return sdkerrors.QueryResult(
+		errorsmod.Wrap(
+			sdkerrors.ErrUnknownRequest,
+			"expected second parameter to be either 'simulate' or 'version', neither was present",
+		), app.trace)
+}
+
+func handleQueryStore(app *BaseApp, path []string, req abci.RequestQuery) *abci.ResponseQuery {
+	// "/store" prefix for store queries
+	queryable, ok := app.cms.(storetypes.Queryable)
+	if !ok {
+		return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "multi-store does not support queries"), app.trace)
+	}
+
+	req.Path = "/" + strings.Join(path[1:], "/")
+
+	if req.Height <= 1 && req.Prove {
+		return sdkerrors.QueryResult(
+			errorsmod.Wrap(
+				sdkerrors.ErrInvalidRequest,
+				"cannot query with proof when height <= 1; please provide a valid height",
+			), app.trace)
+	}
+
+	sdkReq := storetypes.RequestQuery(req)
+	resp, err := queryable.Query(&sdkReq)
+	if err != nil {
+		return sdkerrors.QueryResult(err, app.trace)
+	}
+
+	resp.Height = req.Height
+
+	abciResp := abci.ResponseQuery(*resp)
+	return &abciResp
+}
+
+func handleQueryP2P(app *BaseApp, path []string) *abci.ResponseQuery {
+	// "/p2p" prefix for p2p queries
+	if len(path) < 4 {
+		return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "path should be p2p filter <addr|id> <parameter>"), app.trace)
+	}
+
+	var resp *abci.ResponseQuery
+
+	cmd, typ, arg := path[1], path[2], path[3]
+	switch cmd {
+	case "filter":
+		switch typ {
+		case "addr":
+			resp = app.FilterPeerByAddrPort(arg)
+
+		case "id":
+			resp = app.FilterPeerByID(arg)
+		}
+
+	default:
+		resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace)
+	}
+
+	return resp
+}
+
+// SplitABCIQueryPath splits a string path using the delimiter '/'.
+//
+// e.g. "this/is/funny" becomes []string{"this", "is", "funny"}
+func SplitABCIQueryPath(requestPath string) (path []string) {
+	path = strings.Split(requestPath, "/")
+
+	// first element is empty string
+	if len(path) > 0 && path[0] == "" {
+		path = path[1:]
+	}
+
+	return path
+}
+
+// FilterPeerByAddrPort filters peers by address/port.
+func (app *BaseApp) FilterPeerByAddrPort(info string) *abci.ResponseQuery {
+	if app.addrPeerFilter != nil {
+		return app.addrPeerFilter(info)
+	}
+
+	return &abci.ResponseQuery{}
+}
+
+// FilterPeerByID filters peers by node ID.
+func (app *BaseApp) FilterPeerByID(info string) *abci.ResponseQuery {
+	if app.idPeerFilter != nil {
+		return app.idPeerFilter(info)
+	}
+
+	return &abci.ResponseQuery{}
+}
+
+// getContextForProposal returns the correct Context for PrepareProposal and
+// ProcessProposal. We use finalizeBlockState on the first block to be able to
+// access any state changes made in InitChain.
+func (app *BaseApp) getContextForProposal(ctx sdk.Context, height int64) sdk.Context {
+	if height == app.initialHeight {
+		ctx, _ = app.finalizeBlockState.ctx.CacheContext()
+
+		// clear all context data set during InitChain to avoid inconsistent behavior
+		ctx = ctx.WithBlockHeader(cmtproto.Header{})
+		return ctx
+	}
+
+	return ctx
+}
+
+func (app *BaseApp) handleQueryGRPC(handler GRPCQueryHandler, req *abci.RequestQuery) *abci.ResponseQuery {
+	ctx, err := app.CreateQueryContext(req.Height, req.Prove)
+	if err != nil {
+		return sdkerrors.QueryResult(err, app.trace)
+	}
+
+	resp, err := handler(ctx, req)
+	if err != nil {
+		resp = sdkerrors.QueryResult(gRPCErrorToSDKError(err), app.trace)
+		resp.Height = req.Height
+		return resp
+	}
+
+	return resp
+}
+
+func gRPCErrorToSDKError(err error) error {
+	status, ok := grpcstatus.FromError(err)
+	if !ok {
+		return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error())
+	}
+
+	switch status.Code() {
+	case codes.NotFound:
+		return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, err.Error())
+	case codes.InvalidArgument:
+		return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error())
+	case codes.FailedPrecondition:
+		return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error())
+	case codes.Unauthenticated:
+		return errorsmod.Wrap(sdkerrors.ErrUnauthorized, err.Error())
+	default:
+		return errorsmod.Wrap(sdkerrors.ErrUnknownRequest, err.Error())
+	}
+}
+
+func checkNegativeHeight(height int64) error {
+	if height < 0 {
+		return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "cannot query with height < 0; please provide a valid height")
+	}
+
+	return nil
+}
+
+// createQueryContext creates a new sdk.Context for a query, taking as args
+// the block height and whether the query needs a proof or not.
+func (app *BaseApp) CreateQueryContext(height int64, prove bool) (sdk.Context, error) {
+	if err := checkNegativeHeight(height); err != nil {
+		return sdk.Context{}, err
+	}
+
+	// use custom query multi-store if provided
+	qms := app.qms
+	if qms == nil {
+		qms = app.cms.(storetypes.MultiStore)
+	}
+
+	lastBlockHeight := qms.LatestVersion()
+	if lastBlockHeight == 0 {
+		return sdk.Context{}, errorsmod.Wrapf(sdkerrors.ErrInvalidHeight, "%s is not ready; please wait for first block", app.Name())
+	}
+
+	if height > lastBlockHeight {
+		return sdk.Context{},
+			errorsmod.Wrap(
+				sdkerrors.ErrInvalidHeight,
+				"cannot query with height in the future; please provide a valid height",
+			)
+	}
+
+	// when a client did not provide a query height, manually inject the latest
+	if height == 0 {
+		height = lastBlockHeight
+	}
+
+	if height <= 1 && prove {
+		return sdk.Context{},
+			errorsmod.Wrap(
+				sdkerrors.ErrInvalidRequest,
+				"cannot query with proof when height <= 1; please provide a valid height",
+			)
+	}
+
+	cacheMS, err := qms.CacheMultiStoreWithVersion(height)
+	if err != nil {
+		return sdk.Context{},
+			errorsmod.Wrapf(
+				sdkerrors.ErrInvalidRequest,
+				"failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight,
+			)
+	}
+
+	// branch the commit multi-store for safety
+	ctx := sdk.NewContext(cacheMS, app.checkState.ctx.BlockHeader(), true, app.logger).
+		WithMinGasPrices(app.minGasPrices).
+		WithBlockHeight(height)
+
+	if height != lastBlockHeight {
+		rms, ok := app.cms.(*rootmulti.Store)
+		if ok {
+			cInfo, err := rms.GetCommitInfo(height)
+			if cInfo != nil && err == nil {
+				ctx = ctx.WithBlockTime(cInfo.Timestamp)
+			}
+		}
+	}
+
+	return ctx, nil
+}
+
+// GetBlockRetentionHeight returns the height for which all blocks below this height
+// are pruned from CometBFT.
Given a commitment height and a non-zero local
+// minRetainBlocks configuration, the retentionHeight is the smallest height that
+// satisfies:
+//
+// - Unbonding (safety threshold) time: The block interval in which validators
+// can be economically punished for misbehavior. Blocks in this interval must be
+// auditable e.g. by the light client.
+//
+// - Logical store snapshot interval: The block interval at which the underlying
+// logical store database is persisted to disk, e.g. every 10000 heights. Blocks
+// since the last IAVL snapshot must be available for replay on application restart.
+//
+// - State sync snapshots: Blocks since the oldest available snapshot must be
+// available for state sync nodes to catch up (oldest because a node may be
+// restoring an old snapshot while a new snapshot was taken).
+//
+// - Local (minRetainBlocks) config: Archive nodes may want to retain more or
+// all blocks, e.g. via a local config option min-retain-blocks. There may also
+// be a need to vary retention for other nodes, e.g. sentry nodes which do not
+// need historical blocks.
+func (app *BaseApp) GetBlockRetentionHeight(commitHeight int64) int64 {
+	// pruning is disabled if minRetainBlocks is zero
+	if app.minRetainBlocks == 0 {
+		return 0
+	}
+
+	minNonZero := func(x, y int64) int64 {
+		switch {
+		case x == 0:
+			return y
+		case y == 0:
+			return x
+		case x < y:
+			return x
+		default:
+			return y
+		}
+	}
+
+	// Define retentionHeight as the minimum value that satisfies all non-zero
+	// constraints. All blocks below (commitHeight-retentionHeight) are pruned
+	// from CometBFT.
+	var retentionHeight int64
+
+	// Define the number of blocks needed to protect against misbehaving validators
+	// which allows light clients to operate safely. Note, we piggy back of the
+	// evidence parameters instead of computing an estimated number of blocks based
+	// on the unbonding period and block commitment time as the two should be
+	// equivalent.
+	cp := app.GetConsensusParams(app.finalizeBlockState.ctx)
+	if cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 {
+		retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks
+	}
+
+	if app.snapshotManager != nil {
+		snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights()
+		if snapshotRetentionHeights > 0 {
+			retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights)
+		}
+	}
+
+	v := commitHeight - int64(app.minRetainBlocks)
+	retentionHeight = minNonZero(retentionHeight, v)
+
+	if retentionHeight <= 0 {
+		// prune nothing in the case of a non-positive height
+		return 0
+	}
+
+	return retentionHeight
+}
+
+// toVoteInfo converts the new ExtendedVoteInfo to VoteInfo.
+func toVoteInfo(votes []abci.ExtendedVoteInfo) []abci.VoteInfo {
+	legacyVotes := make([]abci.VoteInfo, len(votes))
+	for i, vote := range votes {
+		legacyVotes[i] = abci.VoteInfo{
+			Validator: abci.Validator{
+				Address: vote.Validator.Address,
+				Power:   vote.Validator.Power,
+			},
+			BlockIdFlag: vote.BlockIdFlag,
+		}
+	}
+
+	return legacyVotes
+}
+```
diff --git a/docs/sdk/next/documentation/module-system/mint.mdx b/docs/sdk/next/documentation/module-system/mint.mdx
new file mode 100644
index 00000000..c1f5f922
--- /dev/null
+++ b/docs/sdk/next/documentation/module-system/mint.mdx
@@ -0,0 +1,518 @@
+---
+title: '`x/mint`'
+description: >-
+  The x/mint module handles the regular minting of new tokens in a configurable
+  manner.
+---
+
+The `x/mint` module handles the regular minting of new tokens in a configurable manner.
+
+## Contents
+
+* [State](#state)
+  * [Minter](#minter)
+  * [Params](#params)
+* [Begin-Block](#begin-block)
+  * [NextInflationRate](#nextinflationrate)
+  * [NextAnnualProvisions](#nextannualprovisions)
+  * [BlockProvision](#blockprovision)
+* [Parameters](#parameters)
+* [Events](#events)
+  * [BeginBlocker](#beginblocker)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+  * [REST](#rest)
+
+## Concepts
+
+### The Minting Mechanism
+
+The default minting mechanism was designed to:
+
+* allow for a flexible inflation rate determined by market demand targeting a particular bonded-stake ratio
+* effect a balance between market liquidity and staked supply
+
+In order to best determine the appropriate market rate for inflation rewards, a
+moving change rate is used. The moving change rate mechanism ensures that if
+the % bonded is either over or under the goal %-bonded, the inflation rate will
+adjust to further incentivize or disincentivize being bonded, respectively. Setting the goal
+%-bonded at less than 100% encourages the network to maintain some non-staked tokens
+which should help provide some liquidity.
+
+It can be broken down in the following way:
+
+* If the actual percentage of bonded tokens is below the goal %-bonded the inflation rate will
+  increase until a maximum value is reached
+* If the goal % bonded (67% in Cosmos-Hub) is maintained, then the inflation
+  rate will stay constant
+* If the actual percentage of bonded tokens is above the goal %-bonded the inflation rate will
+  decrease until a minimum value is reached
+
+### Custom Minters
+
+As of Cosmos SDK v0.53.0, developers can set a custom `MintFn` for the module for specialized token minting logic.
+
+The function signature that a `MintFn` must implement is as follows:
+
+```go
+// MintFn defines the function that needs to be implemented in order to customize the minting process.
+type MintFn func(ctx sdk.Context, k *Keeper) error
+```
+
+This can be passed to the `Keeper` upon creation with an additional `Option`:
+
+```go
+app.MintKeeper = mintkeeper.NewKeeper(
+	appCodec,
+	runtime.NewKVStoreService(keys[minttypes.StoreKey]),
+	app.StakingKeeper,
+	app.AccountKeeper,
+	app.BankKeeper,
+	authtypes.FeeCollectorName,
+	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+	// mintkeeper.WithMintFn(CUSTOM_MINT_FN), // custom mintFn can be added here
+)
+```
+
+#### Custom Minter DI Example
+
+Below is a simple approach to creating a custom mint function with extra dependencies in DI configurations.
+For this basic example, we will make the minter simply double the supply of `foo` coin.
+
+First, we will define a function that takes our required dependencies, and returns a `MintFn`.
+
+```go expandable
+// MyCustomMintFunction is a custom mint function that doubles the supply of `foo` coin.
+func MyCustomMintFunction(bank bankkeeper.BaseKeeper) mintkeeper.MintFn {
+	return func(ctx sdk.Context, k *mintkeeper.Keeper) error {
+		supply := bank.GetSupply(ctx, "foo")
+
+		err := k.MintCoins(ctx, sdk.NewCoins(supply.Add(supply)))
+		if err != nil {
+			return err
+		}
+
+		return nil
+	}
+}
+```
+
+Then, pass the function defined above into the `depinject.Supply` function with the required dependencies.
+
+```go expandable
+// NewSimApp returns a reference to an initialized SimApp.
+func NewSimApp(
+	logger log.Logger,
+	db dbm.DB,
+	traceStore io.Writer,
+	loadLatest bool,
+	appOpts servertypes.AppOptions,
+	baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+	var (
+		app        = &SimApp{}
+		appBuilder *runtime.AppBuilder
+		appConfig  = depinject.Configs(
+			AppConfig,
+			depinject.Supply(
+				appOpts,
+				logger,
+				// our custom mint function with the necessary dependency passed in.
+				MyCustomMintFunction(app.BankKeeper),
+			),
+		)
+	)
+	// ...
+}
+```
+
+## State
+
+### Minter
+
+The minter is a space for holding current inflation information.
+
+* Minter: `0x00 -> ProtocolBuffer(minter)`
+
+```protobuf
+// Minter represents the minting state.
+message Minter {
+  // current annual inflation rate
+  string inflation = 1 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // current annual expected provisions
+  string annual_provisions = 2 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+```
+
+### Params
+
+The mint module stores its params in state with the prefix of `0x01`.
+They can be updated by governance or by the address with authority.
+
+* Params: `mint/params -> legacy_amino(params)`
+
+```protobuf
+// Params defines the parameters for the x/mint module.
+message Params {
+  option (gogoproto.goproto_stringer) = false;
+  option (amino.name) = "cosmos-sdk/x/mint/Params";
+
+  // type of coin to mint
+  string mint_denom = 1;
+  // maximum annual change in inflation rate
+  string inflation_rate_change = 2 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // maximum inflation rate
+  string inflation_max = 3 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // minimum inflation rate
+  string inflation_min = 4 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // goal of percent bonded atoms
+  string goal_bonded = 5 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // expected blocks per year
+  uint64 blocks_per_year = 6;
+}
+```
+
+## Begin-Block
+
+Minting parameters are recalculated and inflation paid at the beginning of
each block.
+
+### Inflation rate calculation
+
+The inflation rate is calculated using an "inflation calculation function" that's
+passed to the `NewAppModule` function. If no function is passed, then the SDK's
+default inflation function will be used (`NextInflationRate`). In case custom
+inflation calculation logic is needed, this can be achieved by defining and
+passing a function that matches `InflationCalculationFn`'s signature.
+
+```go
+type InflationCalculationFn func(ctx sdk.Context, minter Minter, params Params, bondedRatio math.LegacyDec) math.LegacyDec
+```
+
+#### NextInflationRate
+
+The target annual inflation rate is recalculated each block.
+The inflation is also subject to a rate change (positive or negative)
+depending on the distance from the desired ratio (67%). The maximum rate change
+possible is defined to be 13% per year; however, the annual inflation is capped
+between 7% and 20%.
+
+```go expandable
+NextInflationRate(params Params, bondedRatio math.LegacyDec) (inflation math.LegacyDec) {
+	inflationRateChangePerYear = (1 - bondedRatio/params.GoalBonded) * params.InflationRateChange
+	inflationRateChange = inflationRateChangePerYear / blocksPerYr
+
+	// increase the new annual inflation for this next block
+	inflation += inflationRateChange
+	if inflation > params.InflationMax {
+		inflation = params.InflationMax
+	}
+	if inflation < params.InflationMin {
+		inflation = params.InflationMin
+	}
+
+	return inflation
+}
+```
+
+### NextAnnualProvisions
+
+Calculate the annual provisions based on current total supply and inflation
+rate. This parameter is calculated once per block.
+
+```go
+NextAnnualProvisions(params Params, totalSupply math.LegacyDec) (provisions math.LegacyDec) {
+	return Inflation * totalSupply
+}
+```
+
+### BlockProvision
+
+Calculate the provisions generated for each block based on current annual provisions.
The provisions are then minted by the `mint` module's `ModuleMinterAccount` and then transferred to the `auth` module's `FeeCollector` `ModuleAccount`.
+
+```go
+BlockProvision(params Params) sdk.Coin {
+	provisionAmt = AnnualProvisions / params.BlocksPerYear
+	return sdk.NewCoin(params.MintDenom, provisionAmt.Truncate())
+}
+```
+
+## Parameters
+
+The minting module contains the following parameters:
+
+| Key                 | Type            | Example                |
+| ------------------- | --------------- | ---------------------- |
+| MintDenom           | string          | "uatom"                |
+| InflationRateChange | string (dec)    | "0.130000000000000000" |
+| InflationMax        | string (dec)    | "0.200000000000000000" |
+| InflationMin        | string (dec)    | "0.070000000000000000" |
+| GoalBonded          | string (dec)    | "0.670000000000000000" |
+| BlocksPerYear       | string (uint64) | "6311520"              |
+
+## Events
+
+The minting module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key      | Attribute Value      |
+| ---- | ------------------ | -------------------- |
+| mint | bonded\_ratio      | `{bondedRatio}`      |
+| mint | inflation          | `{inflation}`        |
+| mint | annual\_provisions | `{annualProvisions}` |
+| mint | amount             | `{amount}`           |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `mint` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `mint` state.
+
+```shell
+simd query mint --help
+```
+
+##### annual-provisions
+
+The `annual-provisions` command allows users to query the current minting annual provisions value
+
+```shell
+simd query mint annual-provisions [flags]
+```
+
+Example:
+
+```shell
+simd query mint annual-provisions
+```
+
+Example Output:
+
+```shell
+22268504368893.612100895088410693
+```
+
+##### inflation
+
+The `inflation` command allows users to query the current minting inflation value
+
+```shell
+simd query mint inflation [flags]
+```
+
+Example:
+
+```shell
+simd query mint inflation
+```
+
+Example Output:
+
+```shell
+0.199200302563256955
+```
+
+##### params
+
+The `params` command allows users to query the current minting parameters
+
+```shell
+simd query mint params [flags]
+```
+
+Example Output:
+
+```yml
+blocks_per_year: "4360000"
+goal_bonded: "0.670000000000000000"
+inflation_max: "0.200000000000000000"
+inflation_min: "0.070000000000000000"
+inflation_rate_change: "0.130000000000000000"
+mint_denom: stake
+```
+
+### gRPC
+
+A user can query the `mint` module using gRPC endpoints.
+ +#### AnnualProvisions + +The `AnnualProvisions` endpoint allows users to query the current minting annual provisions value + +```shell +/cosmos.mint.v1beta1.Query/AnnualProvisions +``` + +Example: + +```shell +grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/AnnualProvisions +``` + +Example Output: + +```json +{ + "annualProvisions": "1432452520532626265712995618" +} +``` + +#### Inflation + +The `Inflation` endpoint allows users to query the current minting inflation value + +```shell +/cosmos.mint.v1beta1.Query/Inflation +``` + +Example: + +```shell +grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Inflation +``` + +Example Output: + +```json +{ + "inflation": "130197115720711261" +} +``` + +#### Params + +The `Params` endpoint allows users to query the current minting parameters + +```shell +/cosmos.mint.v1beta1.Query/Params +``` + +Example: + +```shell +grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Params +``` + +Example Output: + +```json +{ + "params": { + "mintDenom": "stake", + "inflationRateChange": "130000000000000000", + "inflationMax": "200000000000000000", + "inflationMin": "70000000000000000", + "goalBonded": "670000000000000000", + "blocksPerYear": "6311520" + } +} +``` + +### REST + +A user can query the `mint` module using REST endpoints. 
+ +#### annual-provisions + +```shell +/cosmos/mint/v1beta1/annual_provisions +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/annual_provisions" +``` + +Example Output: + +```json +{ + "annualProvisions": "1432452520532626265712995618" +} +``` + +#### inflation + +```shell +/cosmos/mint/v1beta1/inflation +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/inflation" +``` + +Example Output: + +```json +{ + "inflation": "130197115720711261" +} +``` + +#### params + +```shell +/cosmos/mint/v1beta1/params +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/params" +``` + +Example Output: + +```json +{ + "params": { + "mintDenom": "stake", + "inflationRateChange": "130000000000000000", + "inflationMax": "200000000000000000", + "inflationMin": "70000000000000000", + "goalBonded": "670000000000000000", + "blocksPerYear": "6311520" + } +} +``` diff --git a/docs/sdk/next/documentation/module-system/module-development-guide.mdx b/docs/sdk/next/documentation/module-system/module-development-guide.mdx new file mode 100644 index 00000000..29271ecd --- /dev/null +++ b/docs/sdk/next/documentation/module-system/module-development-guide.mdx @@ -0,0 +1,527 @@ +--- +title: Module Development Guide +description: Technical reference for Cosmos SDK module development +--- + +# Module Development Guide + +## Overview + +Cosmos SDK modules are self-contained units of functionality that extend the capabilities of a blockchain application. To understand their flexibility, consider the range of modules in production: from the `x/bank` module that handles token transfers, to `x/ibc` that enables communication between independent blockchains, to the EVM module that runs Ethereum smart contracts on Cosmos chains. + +## Module Types and Capabilities + +The Cosmos SDK doesn't prescribe a single pattern for modules. 
Let's examine how different production modules leverage this flexibility: + +### The Bank Module: State Management Foundation + +The `x/bank` module manages all token balances and transfers. It's the foundation that other modules build upon: + +```go +// From x/bank/keeper/keeper.go +type BaseKeeper struct { + ak types.AccountKeeper + cdc codec.BinaryCodec + storeService store.KVStoreService + mintCoinsRestrictionFn MintingRestrictionFn + + // State management using Collections + Schema collections.Schema + Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] + Supply collections.Map[string, math.Int] + DenomMetadata collections.Map[string, types.Metadata] + SendEnabled collections.Map[string, bool] +} +``` + +The bank module demonstrates how modules manage state using the Collections framework. Every account balance is stored as a key-value pair where the key is the combination of address and denomination. + +### IBC: Protocol Implementation + +The IBC (Inter-Blockchain Communication) module shows how modules can implement complex protocols. IBC isn't just a simple state manager—it's a complete protocol for trustless communication between chains: + +```go +// From ibc-go/modules/core/keeper/keeper.go +type Keeper struct { + cdc codec.BinaryCodec + + ClientKeeper clientkeeper.Keeper + ConnectionKeeper connectionkeeper.Keeper + ChannelKeeper channelkeeper.Keeper + PortKeeper portkeeper.Keeper + + Router *porttypes.Router + + authority string +} +``` + +IBC is composed of multiple sub-keepers, each managing different aspects of the protocol: clients track other chains, connections establish communication paths, channels handle packet flow, and ports manage module bindings. 
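The same composition idea can be sketched in isolation. The toy types below are illustrative stand-ins, not the real ibc-go API: an umbrella `Keeper` owns narrowly scoped sub-keepers, and callers reach the slice of state they need through the relevant sub-keeper.

```go
package main

import "fmt"

// clientKeeper and channelKeeper are hypothetical stand-ins for IBC
// sub-keepers, each owning one narrow slice of protocol state.
type clientKeeper struct{ clients map[string]string }

func (k *clientKeeper) CreateClient(id, chain string) { k.clients[id] = chain }

type channelKeeper struct{ channels map[string]string }

func (k *channelKeeper) OpenChannel(id, port string) { k.channels[id] = port }

// Keeper composes the sub-keepers, mirroring how ibc-go's core Keeper
// aggregates client, connection, channel, and port keepers.
type Keeper struct {
	Clients  *clientKeeper
	Channels *channelKeeper
}

func NewKeeper() *Keeper {
	return &Keeper{
		Clients:  &clientKeeper{clients: map[string]string{}},
		Channels: &channelKeeper{channels: map[string]string{}},
	}
}

func main() {
	k := NewKeeper()
	k.Clients.CreateClient("07-tendermint-0", "chain-b")
	k.Channels.OpenChannel("channel-0", "transfer")
	fmt.Println(len(k.Clients.clients), len(k.Channels.channels)) // 1 1
}
```

The design choice this mirrors: each sub-keeper can be tested and reasoned about independently, while the umbrella keeper defines the surface other modules depend on.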
+ +### The Upgrade Module: Coordinating Chain Evolution + +The `x/upgrade` module demonstrates a different pattern—it doesn't manage user funds or complex protocols, but coordinates the entire chain's upgrade process: + +```go +// From x/upgrade/keeper/keeper.go +func (k Keeper) ScheduleUpgrade(ctx context.Context, plan types.Plan) error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + + // The upgrade module has the special ability to halt the chain + if err := plan.ValidateBasic(); err != nil { + return err + } + + if !plan.Time.IsZero() { + return errors.New("upgrade by time is disabled") + } + + if plan.Height <= sdkCtx.HeaderInfo().Height { + return errors.New("upgrade cannot be scheduled in the past") + } + + // Store the plan - when the chain reaches this height, it will halt + if err := k.SetUpgradePlan(ctx, plan); err != nil { + return err + } + + return nil +} +``` + +This demonstrates how modules can have special privileges—the upgrade module can halt the entire chain, something no regular transaction could do. + +## Core Requirements + +Every module must implement `AppModuleBasic`, but what they do beyond that varies widely. Let's look at how different modules approach this: + +### The Minimal Interface: x/auth + +The auth module manages accounts and transaction authentication. 
Its basic interface is straightforward: + +```go +// From x/auth/module.go +func (AppModuleBasic) Name() string { + return types.ModuleName +} + +func (AppModuleBasic) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +func (AppModuleBasic) RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} +``` + +### Extended Capabilities: x/staking + +The staking module needs to participate in block processing to handle validator updates: + +```go +// From x/staking/module.go +type AppModule struct { + AppModuleBasic + keeper *keeper.Keeper + accountKeeper types.AccountKeeper + bankKeeper types.BankKeeper +} + +// BeginBlock updates validator set and handles slashing +func (am AppModule) BeginBlock(ctx context.Context) error { + return am.keeper.BeginBlocker(ctx) +} + +// EndBlock processes validator updates for CometBFT +func (am AppModule) EndBlock(ctx context.Context) error { + return am.keeper.EndBlocker(ctx) +} +``` + +The staking module's `BeginBlock` and `EndBlock` methods aren't just arbitrary hooks—they're essential for maintaining consensus by updating the validator set that CometBFT uses. 
+ +## State Management Patterns + +### Collections Framework: x/bank's Approach + +The bank module uses Collections for type-safe state management: + +```go +// From x/bank/keeper/keeper.go - Setting up collections +sb := collections.NewSchemaBuilder(storeService) + +k := BaseKeeper{ + Balances: collections.NewMap( + sb, + types.BalancesPrefix, + "balances", + collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), + sdk.IntValue, + ), + Supply: collections.NewMap( + sb, + types.SupplyKey, + "supply", + collections.StringKey, + sdk.IntValue, + ), +} + +// From x/bank/keeper/send.go - Using collections +func (k BaseSendKeeper) GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin { + amt, err := k.Balances.Get(ctx, collections.Join(addr, denom)) + if err != nil && !errors.Is(err, collections.ErrNotFound) { + panic(err) + } + return sdk.NewCoin(denom, amt) +} +``` + +### Collections in Governance: x/gov Module + +The governance module also uses Collections for managing proposals and votes: + +```go +// From x/gov/keeper/keeper.go +type Keeper struct { + // ... other fields ... 
+ + // Collections for state management + Proposals collections.Map[uint64, v1.Proposal] + Votes collections.Map[collections.Pair[uint64, sdk.AccAddress], v1.Vote] + ActiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] + InactiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] + VotingPeriodProposals collections.Map[uint64, []byte] +} + +// From x/gov/keeper/keeper.go - Setup +k := Keeper{ + Proposals: collections.NewMap( + sb, types.ProposalsKeyPrefix, "proposals", + collections.Uint64Key, codec.CollValue[v1.Proposal](cdc), + ), + ActiveProposalsQueue: collections.NewMap( + sb, types.ActiveProposalQueuePrefix, "active_proposals_queue", + collections.PairKeyCodec(sdk.TimeKey, collections.Uint64Key), + collections.Uint64Value, + ), +} +``` + +## Service Registration + +### Transaction Processing: x/bank's Message Server + +The bank module's message server shows how modules handle transactions: + +```go +// From x/bank/keeper/msg_server.go +type msgServer struct { + BaseKeeper +} + +func (k msgServer) Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + from, err := k.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, err + } + + to, err := k.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, err + } + + if k.BlockedAddr(to) { + return nil, errors.Wrapf(types.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) + } + + err = k.SendCoins(goCtx, from, to, msg.Amount) + if err != nil { + return nil, err + } + + return &types.MsgSendResponse{}, nil +} +``` + +This isn't just moving numbers in a database—the bank module enforces critical invariants like preventing sends to module accounts that shouldn't receive funds. 
+ +### Query Services: x/staking's Information Provider + +The staking module provides extensive query services for validator information: + +```go +// From x/staking/keeper/grpc_query.go +func (k Querier) Validators(ctx context.Context, req *types.QueryValidatorsRequest) (*types.QueryValidatorsResponse, error) { + if req == nil { + return nil, errors.New("empty request") + } + + // This query is critical for UIs and other chains to understand the validator set + validators, pageRes, err := query.CollectionPaginate( + ctx, + k.Keeper.Validators, + req.Pagination, + func(key []byte, val types.Validator) (types.Validator, error) { + if req.Status != "" && !strings.EqualFold(val.Status.String(), req.Status) { + return types.Validator{}, errors.Wrapf(collection.ErrSkipIteration, "validator %s does not match status %s", val.OperatorAddress, req.Status) + } + return val, nil + }, + ) + + if err != nil { + return nil, errors.New("failed to query validators") + } + + return &types.QueryValidatorsResponse{Validators: validators, Pagination: pageRes}, nil +} +``` + +## Module Lifecycle Integration + +### The Distribution Module: Rewards at Block Boundaries + +The distribution module calculates and distributes staking rewards during block processing: + +```go +// From x/distribution/abci.go +func (am AppModule) BeginBlock(ctx context.Context) error { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker) + + // Distribute rewards for the previous block + // This must happen in BeginBlock to ensure rewards are distributed before any transactions + return am.keeper.AllocateTokens(ctx) +} + +// From x/distribution/keeper/allocation.go +func (k Keeper) AllocateTokens(ctx context.Context) error { + // Get the total power of all validators + totalPower := k.stakingKeeper.TotalBondedTokens(ctx) + + // Get the fees collected in the last block + feeCollector := k.authKeeper.GetModuleAccount(ctx, k.feeCollectorName) + feesCollected := 
k.bankKeeper.GetAllBalances(ctx, feeCollector.GetAddress()) + + // Distribute to validators based on their voting power + // This complex calculation ensures fair reward distribution + // ... +} +``` + +### The Evidence Module: Handling Misbehavior + +The evidence module processes evidence of validator misbehavior: + +```go +// From x/evidence/keeper/keeper.go +type Keeper struct { + cdc codec.BinaryCodec + storeService corestore.KVStoreService + router types.Router + stakingKeeper types.StakingKeeper + slashingKeeper types.SlashingKeeper + + Schema collections.Schema + Evidences collections.Map[[]byte, exported.Evidence] +} + +// From x/evidence/keeper/infraction.go +// handleEquivocationEvidence processes evidence of double signing +func (k Keeper) handleEquivocationEvidence(ctx context.Context, evidence *types.Equivocation) error { + consAddr := evidence.GetConsensusAddress(k.stakingKeeper.ConsensusAddressCodec()) + + validator, err := k.stakingKeeper.ValidatorByConsAddr(ctx, consAddr) + if err != nil { + return err + } + if validator == nil || validator.IsUnbonded() { + return nil // Cannot slash unbonded validators + } + + // Check if validator is already tombstoned + if k.slashingKeeper.IsTombstoned(ctx, consAddr) { + return nil // Already slashed for equivocation + } + + // Slash the validator for double signing + // This protects the network from malicious validators + k.slashingKeeper.SlashWithInfractionReason( + ctx, consAddr, infractionHeight, power, + k.slashingKeeper.SlashFractionDoubleSign(ctx), + types.Infraction_INFRACTION_DOUBLE_SIGN, + ) +} +``` + +## Module Communication Patterns + +### Direct Keeper Interaction: x/gov and x/staking + +The governance module needs to execute proposals that can affect other modules. Here's how it interacts with the staking module: + +```go +// From x/gov/keeper/keeper.go +type Keeper struct { + // Governance needs access to staking to handle validator-related proposals + sk types.StakingKeeper + // ... 
other keepers +} + +// From x/staking/types/expected_keepers.go +type StakingKeeper interface { + // Gov module can jail validators through proposals + Jail(context.Context, sdk.ConsAddress) error + Unjail(context.Context, sdk.ConsAddress) error + + // Gov can update staking parameters + SetParams(context.Context, types.Params) error +} +``` + +### Event-Based Communication: x/slashing Notifications + +The slashing module emits events that other modules and external systems can respond to: + +```go +// From x/slashing/keeper/infractions.go +func (k Keeper) HandleValidatorSignature(ctx context.Context, addr cryptotypes.Address, power int64, signed comet.BlockIDFlag) error { + // ... slashing logic ... + + // Emit an event that other modules or external monitors can observe + sdkCtx.EventManager().EmitEvent( + sdk.NewEvent( + types.EventTypeSlash, + sdk.NewAttribute(types.AttributeKeyAddress, consAddr.String()), + sdk.NewAttribute(types.AttributeKeyPower, fmt.Sprintf("%d", power)), + sdk.NewAttribute(types.AttributeKeyReason, types.AttributeValueMissingSignature), + sdk.NewAttribute(types.AttributeKeyJailed, fmt.Sprintf("%v", !validator.IsJailed())), + ), + ) +} +``` + +### The Feegrant Module: Extending Capabilities + +The feegrant module shows how modules can extend the capabilities of others without modifying them: + +```go +// From x/feegrant/keeper/keeper.go +func (k Keeper) UseGrantedFees(ctx context.Context, granter, grantee sdk.AccAddress, fee sdk.Coins, msgs []sdk.Msg) error { + grant, err := k.GetGrant(ctx, granter, grantee) + if err != nil { + return err + } + + // The feegrant module allows one account to pay fees for another + // This extends the auth module's fee payment system without modifying it + allow, err := grant.GetGrant().Accept(ctx, fee, msgs) + if err != nil { + return err + } + + // Deduct fees from the granter, not the grantee + err = k.bankKeeper.SendCoinsFromAccountToModule(ctx, granter, types.FeeCollectorName, fee) + if err != nil { + 
return err + } + + // Update or remove the grant based on its remaining allowance + if allow != nil { + grant.Grant = allow + k.SetGrant(ctx, grant) + } else { + k.RemoveGrant(ctx, granter, grantee) + } + + return nil +} +``` + +## Advanced Module Capabilities + +### The AuthZ Module: Delegated Permissions + +The authz module enables sophisticated permission delegation: + +```go +// From x/authz/keeper/keeper.go +func (k Keeper) DispatchActions(ctx context.Context, grantee sdk.AccAddress, msgs []sdk.Msg) ([][]byte, error) { + results := make([][]byte, len(msgs)) + + for i, msg := range msgs { + signers := msg.GetSigners() + if len(signers) != 1 { + return nil, errors.New("authz can only dispatch messages with exactly one signer") + } + + granter := signers[0] + + // Check if grantee has authorization from granter + authorization, expiration, err := k.GetAuthorization(ctx, grantee, granter, sdk.MsgTypeURL(msg)) + if err != nil { + return nil, err + } + + // Execute the message on behalf of the granter + resp, err := authorization.Accept(ctx, msg) + if err != nil { + return nil, err + } + + if resp.Delete { + k.DeleteGrant(ctx, grantee, granter, sdk.MsgTypeURL(msg)) + } else if resp.Updated != nil { + k.SaveGrant(ctx, grantee, granter, resp.Updated, expiration) + } + + results[i] = sdk.MsgTypeURL(msg) + } + + return results, nil +} +``` + +## Testing Strategies + +### The Bank Module's Comprehensive Testing + +The bank module's tests demonstrate thorough testing practices: + +```go +// From x/bank/keeper/keeper_test.go +func (suite *KeeperTestSuite) TestSendCoinsFromModuleToAccount() { + ctx := suite.ctx + + // Setup: Create a module account with funds + moduleName := "test" + moduleAcc := authtypes.NewModuleAccount( + authtypes.NewBaseAccountWithAddress(sdk.AccAddress([]byte(moduleName))), + moduleName, + authtypes.Minter, + ) + + suite.authKeeper.SetModuleAccount(ctx, moduleAcc) + suite.Require().NoError(suite.bankKeeper.MintCoins(ctx, moduleName, initCoins)) + + 
// Test: Send from module to account + addr := sdk.AccAddress([]byte("addr")) + suite.Require().NoError(suite.bankKeeper.SendCoinsFromModuleToAccount( + ctx, moduleName, addr, initCoins, + )) + + // Verify: Check balances + balance := suite.bankKeeper.GetAllBalances(ctx, addr) + suite.Require().Equal(initCoins, balance) + + moduleBalance := suite.bankKeeper.GetAllBalances(ctx, moduleAcc.GetAddress()) + suite.Require().True(moduleBalance.Empty()) +} +``` + +## References + +- [Cosmos SDK Modules Source Code](https://github.com/cosmos/cosmos-sdk/tree/main/x) - Production module implementations +- [IBC-Go Modules](https://github.com/cosmos/ibc-go) - Inter-blockchain communication protocol +- [CosmWasm](https://github.com/CosmWasm/cosmwasm) - Smart contract platform module +- [Ethermint EVM](https://github.com/evmos/ethermint) - Ethereum Virtual Machine implementation \ No newline at end of file diff --git a/docs/sdk/next/documentation/module-system/module-interfaces.mdx b/docs/sdk/next/documentation/module-system/module-interfaces.mdx new file mode 100644 index 00000000..f04e1638 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/module-interfaces.mdx @@ -0,0 +1,1157 @@ +--- +title: Module Interfaces +--- + +## Synopsis + +This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. + + +**Pre-requisite Readings** + +- [Building Modules Intro](/docs/sdk/next/documentation/module-system/intro) + + + +## CLI + +One of the main interfaces for an application is the [command-line interface](/docs/sdk/next/api-reference/client-tools/cli). This entrypoint adds commands from the application's modules enabling end-users to create [**messages**](/docs/sdk/next/documentation/module-system/messages-and-queries#messages) wrapped in transactions and [**queries**](/docs/sdk/next/documentation/module-system/messages-and-queries#queries). 
The CLI files are typically found in the module's `./client/cli` folder. + +### Transaction Commands + +In order to create messages that trigger state changes, end-users must create [transactions](/docs/sdk/next/documentation/protocol-development/transactions) that wrap and deliver the messages. A transaction command creates a transaction that includes one or more messages. + +Transaction commands typically have their own `tx.go` file that lives within the module's `./client/cli` folder. The commands are specified in getter functions and the name of the function should include the name of the command. + +Here is an example from the `x/bank` module: + +```go expandable +package cli + +import ( + + "fmt" + "cosmossdk.io/core/address" + sdkmath "cosmossdk.io/math" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/tx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var FlagSplit = "split" + +/ NewTxCmd returns a root CLI command handler for all x/bank transaction commands. +func NewTxCmd(ac address.Codec) *cobra.Command { + txCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Bank transaction subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +txCmd.AddCommand( + NewSendTxCmd(ac), + NewMultiSendTxCmd(ac), + ) + +return txCmd +} + +/ NewSendTxCmd returns a CLI command handler for creating a MsgSend transaction. +func NewSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "send [from_key_or_address] [to_address] [amount]", + Short: "Send funds from one account to another.", + Long: `Send funds from one account to another. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +toAddr, err := ac.StringToBytes(args[1]) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[2]) + if err != nil { + return err +} + if len(coins) == 0 { + return fmt.Errorf("invalid coins") +} + msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, coins) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} + +/ NewMultiSendTxCmd returns a CLI command handler for creating a MsgMultiSend transaction. +/ For a better UX this command is limited to send funds from one account to two or more accounts. +func NewMultiSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "multi-send [from_key_or_address] [to_address_1, to_address_2, ...] [amount]", + Short: "Send funds from one account to two or more accounts.", + Long: `Send funds from one account to two or more accounts. +By default, sends the [amount] to each address of the list. +Using the '--split' flag, the [amount] is split equally between the addresses. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.MinimumNArgs(4), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[len(args)-1]) + if err != nil { + return err +} + if coins.IsZero() { + return fmt.Errorf("must send positive amount") +} + +split, err := cmd.Flags().GetBool(FlagSplit) + if err != nil { + return err +} + totalAddrs := sdkmath.NewInt(int64(len(args) - 2)) + / coins to be received by the addresses + sendCoins := coins + if split { + sendCoins = coins.QuoInt(totalAddrs) +} + +var output []types.Output + for _, arg := range args[1 : len(args)-1] { + toAddr, err := ac.StringToBytes(arg) + if err != nil { + return err +} + +output = append(output, types.NewOutput(toAddr, sendCoins)) +} + + / amount to be send from the from address + var amount sdk.Coins + if split { + / user input: 1000stake to send to 3 addresses + / actual: 333stake to each address (=> 999stake actually sent) + +amount = sendCoins.MulInt(totalAddrs) +} + +else { + amount = coins.MulInt(totalAddrs) +} + msg := types.NewMsgMultiSend(types.NewInput(clientCtx.FromAddress, amount), output) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +cmd.Flags().Bool(FlagSplit, false, "Send the equally split token amount to each address") + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} +``` + +In the example, `NewSendTxCmd()` creates and returns the transaction command for a transaction that wraps and delivers `MsgSend`. `MsgSend` is the message used to send tokens from one account to another. + +In general, the getter function does the following: + +- **Constructs the command:** Read the [Cobra Documentation](https://pkg.go.dev/github.com/spf13/cobra) for more detailed information on how to create commands. + - **Use:** Specifies the format of the user input required to invoke the command. 
In the example above, `send` is the name of the transaction command and `[from_key_or_address]`, `[to_address]`, and `[amount]` are the arguments.
+ - **Args:** The number of arguments the user provides. In this case, there are exactly three: `[from_key_or_address]`, `[to_address]`, and `[amount]`.
+ - **Short and Long:** Descriptions for the command. A `Short` description is expected. A `Long` description can be used to provide additional information that is displayed when a user adds the `--help` flag.
+ - **RunE:** Defines a function that can return an error. This is the function that is called when the command is executed. This function encapsulates all of the logic to create a new transaction.
+ - The function typically starts by getting the `clientCtx`, which can be done with `client.GetClientTxContext(cmd)`. The `clientCtx` contains information relevant to transaction handling, including information about the user. In this example, the `clientCtx` is used to retrieve the address of the sender by calling `clientCtx.GetFromAddress()`.
+ - If applicable, the command's arguments are parsed. In this example, the arguments `[to_address]` and `[amount]` are both parsed.
+ - A [message](/docs/sdk/next/documentation/module-system/messages-and-queries) is created using the parsed arguments and information from the `clientCtx`. The constructor function of the message type is called directly. In this case, `types.NewMsgSend(fromAddr, toAddr, amount)`. It's good practice to call, where possible, the necessary [message validation methods](/docs/sdk/next/documentation/module-system/msg-services#Validation) before broadcasting the message.
+ - Depending on what the user wants, the transaction is either generated offline or signed and broadcast to the preconfigured node using `tx.GenerateOrBroadcastTxCLI(clientCtx, flags, msg)`.
+- **Adds transaction flags:** All transaction commands must add a set of transaction [flags](#flags).
The transaction flags are used to collect additional information from the user (e.g. the amount of fees the user is willing to pay). The transaction flags are added to the constructed command using `AddTxFlagsToCmd(cmd)`. +- **Returns the command:** Finally, the transaction command is returned. + +Each module can implement `NewTxCmd()`, which aggregates all of the transaction commands of the module. Here is an example from the `x/bank` module: + +```go expandable +package cli + +import ( + + "fmt" + "cosmossdk.io/core/address" + sdkmath "cosmossdk.io/math" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/tx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var FlagSplit = "split" + +/ NewTxCmd returns a root CLI command handler for all x/bank transaction commands. +func NewTxCmd(ac address.Codec) *cobra.Command { + txCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Bank transaction subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +txCmd.AddCommand( + NewSendTxCmd(ac), + NewMultiSendTxCmd(ac), + ) + +return txCmd +} + +/ NewSendTxCmd returns a CLI command handler for creating a MsgSend transaction. +func NewSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "send [from_key_or_address] [to_address] [amount]", + Short: "Send funds from one account to another.", + Long: `Send funds from one account to another. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +toAddr, err := ac.StringToBytes(args[1]) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[2]) + if err != nil { + return err +} + if len(coins) == 0 { + return fmt.Errorf("invalid coins") +} + msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, coins) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} + +/ NewMultiSendTxCmd returns a CLI command handler for creating a MsgMultiSend transaction. +/ For a better UX this command is limited to send funds from one account to two or more accounts. +func NewMultiSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "multi-send [from_key_or_address] [to_address_1, to_address_2, ...] [amount]", + Short: "Send funds from one account to two or more accounts.", + Long: `Send funds from one account to two or more accounts. +By default, sends the [amount] to each address of the list. +Using the '--split' flag, the [amount] is split equally between the addresses. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.MinimumNArgs(4), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[len(args)-1]) + if err != nil { + return err +} + if coins.IsZero() { + return fmt.Errorf("must send positive amount") +} + +split, err := cmd.Flags().GetBool(FlagSplit) + if err != nil { + return err +} + totalAddrs := sdkmath.NewInt(int64(len(args) - 2)) + / coins to be received by the addresses + sendCoins := coins + if split { + sendCoins = coins.QuoInt(totalAddrs) +} + +var output []types.Output + for _, arg := range args[1 : len(args)-1] { + toAddr, err := ac.StringToBytes(arg) + if err != nil { + return err +} + +output = append(output, types.NewOutput(toAddr, sendCoins)) +} + + / amount to be send from the from address + var amount sdk.Coins + if split { + / user input: 1000stake to send to 3 addresses + / actual: 333stake to each address (=> 999stake actually sent) + +amount = sendCoins.MulInt(totalAddrs) +} + +else { + amount = coins.MulInt(totalAddrs) +} + msg := types.NewMsgMultiSend(types.NewInput(clientCtx.FromAddress, amount), output) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +cmd.Flags().Bool(FlagSplit, false, "Send the equally split token amount to each address") + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} +``` + +Each module then can also implement a `GetTxCmd()` method that simply returns `NewTxCmd()`. This allows the root command to easily aggregate all of the transaction commands for each module. 
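As a toy illustration of this aggregation pattern, the stdlib-only sketch below uses a hypothetical `command` type as a stand-in for `*cobra.Command` (none of these names are real cobra or SDK APIs): each module exposes a `GetTxCmd()`-style constructor, and the root `tx` command simply collects the subtrees.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// command is a minimal stand-in for *cobra.Command: a name, an optional
// run function, and named subcommands.
type command struct {
	use  string
	run  func(args []string) error
	subs map[string]*command
}

func newRoot() *command { return &command{use: "tx", subs: map[string]*command{}} }

func (c *command) addCommand(sub *command) { c.subs[sub.use] = sub }

// execute dispatches the first token to a subcommand, recursing until a
// runnable leaf is found, like cobra's command tree traversal.
func (c *command) execute(line string) error {
	args := strings.Fields(line)
	if len(args) == 0 {
		return errors.New("no command")
	}
	sub, ok := c.subs[args[0]]
	if !ok {
		return fmt.Errorf("unknown command %q", args[0])
	}
	if sub.run != nil {
		return sub.run(args[1:])
	}
	return sub.execute(strings.Join(args[1:], " "))
}

// bankTxCmd plays the role of a module's GetTxCmd(): it returns the module's
// subtree of transaction commands.
func bankTxCmd() *command {
	bank := &command{use: "bank", subs: map[string]*command{}}
	bank.addCommand(&command{use: "send", run: func(args []string) error {
		if len(args) != 3 {
			return errors.New("send expects [from] [to] [amount]")
		}
		fmt.Printf("sending %s from %s to %s\n", args[2], args[0], args[1])
		return nil
	}})
	return bank
}

func main() {
	root := newRoot()
	root.addCommand(bankTxCmd()) // the root aggregates each module's tx commands
	if err := root.execute("bank send alice bob 10stake"); err != nil {
		fmt.Println("error:", err)
	}
}
```

The point of the pattern is that the application's root command never needs to know a module's individual commands; it only calls each module's `GetTxCmd()` and attaches the result.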
Here is an example: + +```go expandable +package bank + +import ( + + "context" + "encoding/json" + "fmt" + "time" + + modulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + corestore "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/client/cli" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + v1bank "github.com/cosmos/cosmos-sdk/x/bank/migrations/v1" + "github.com/cosmos/cosmos-sdk/x/bank/simulation" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +/ ConsensusVersion defines the current x/bank module consensus version. +const ConsensusVersion = 4 + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the bank module. +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the bank module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the bank module's types on the LegacyAmino codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the bank +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the bank module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return data.Validate() +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the bank module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the bank module. +func (ab AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd(ab.ac) +} + +/ GetQueryCmd returns no root query command for the bank module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the bank module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) + + / Register legacy interfaces for migration scripts. + v1bank.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the bank module. 
+type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper.(keeper.BaseKeeper), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 1 to 2: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 2 to 3: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 3 to 4: %v", err)) +} +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: accountKeeper.AddressCodec() +}, + keeper: keeper, + accountKeeper: accountKeeper, + legacySubspace: ss, +} +} + +/ Name returns the bank module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterInvariants registers the bank module invariants. 
+func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ QuerierRoute returns the bank module's querier route name. +func (AppModule) + +QuerierRoute() + +string { + return types.RouterKey +} + +/ InitGenesis performs genesis initialization for the bank module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + start := time.Now() + +var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +telemetry.MeasureSince(start, "InitGenesis", "crisis", "unmarshal") + +am.keeper.InitGenesis(ctx, &genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the bank +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the bank module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for supply module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.keeper.(keeper.BaseKeeper).Schema) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, simState.TxConfig, am.accountKeeper, am.keeper, + ) +} + +/ App Wiring Setup + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + Cdc codec.Codec + StoreService corestore.KVStoreService + Logger log.Logger + + AccountKeeper types.AccountKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + BankKeeper keeper.BaseKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + / Configure blocked module accounts. + / + / Default behavior for blockedAddresses is to regard any module mentioned in + / AccountKeeper's module account permissions as blocked. 
+ blockedAddresses := make(map[string]bool) + if len(in.Config.BlockedModuleAccountsOverride) > 0 { + for _, moduleName := range in.Config.BlockedModuleAccountsOverride { + blockedAddresses[authtypes.NewModuleAddress(moduleName).String()] = true +} + +} + +else { + for _, permission := range in.AccountKeeper.GetModulePermissions() { + blockedAddresses[permission.GetAddress().String()] = true +} + +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + bankKeeper := keeper.NewBaseKeeper( + in.Cdc, + in.StoreService, + in.AccountKeeper, + blockedAddresses, + authority.String(), + in.Logger, + ) + m := NewAppModule(in.Cdc, bankKeeper, in.AccountKeeper, in.LegacySubspace) + +return ModuleOutputs{ + BankKeeper: bankKeeper, + Module: m +} +} +``` + +### Query Commands + + + This section is being rewritten. Refer to + [AutoCLI](https://docs.cosmos.network/main/core/autocli) while this section is + being updated. + + +## gRPC + +[gRPC](https://grpc.io/) is a Remote Procedure Call (RPC) framework. RPC is the preferred way for external clients like wallets and exchanges to interact with a blockchain. + +In addition to providing an ABCI query pathway, the Cosmos SDK provides a gRPC proxy server that routes gRPC query requests to ABCI query requests. + +In order to do that, modules must implement `RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux)` on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module. 
+ +Here's an example from the `x/auth` module: + +```go expandable +package auth + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/depinject" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + + modulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/exported" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ ConsensusVersion defines the current x/auth module consensus version. +const ( + ConsensusVersion = 5 + GovModuleName = "gov" +) + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the auth module. +type AppModuleBasic struct { + ac address.Codec +} + +/ Name returns the auth module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the auth module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the auth +/ module. 
+func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the auth module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the auth module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the auth module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd returns the root query command for the auth module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the auth module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the auth module. +type AppModule struct { + AppModuleBasic + + accountKeeper keeper.AccountKeeper + randGenAccountsFn types.RandomGenesisAccountsFn + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (am AppModule) IsAppModule() {}
+
+// NewAppModule creates a new AppModule object
+func NewAppModule(cdc codec.Codec, accountKeeper keeper.AccountKeeper, randGenAccountsFn types.RandomGenesisAccountsFn, ss exported.Subspace) AppModule {
+	return AppModule{
+		AppModuleBasic:    AppModuleBasic{ac: accountKeeper.AddressCodec()},
+		accountKeeper:     accountKeeper,
+		randGenAccountsFn: randGenAccountsFn,
+		legacySubspace:    ss,
+	}
+}
+
+// Name returns the auth module's name.
+func (AppModule) Name() string {
+	return types.ModuleName
+}
+
+// RegisterServices registers a GRPC query service to respond to the
+// module-specific GRPC queries.
+func (am AppModule) RegisterServices(cfg module.Configurator) {
+	types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.accountKeeper))
+	types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQueryServer(am.accountKeeper))
+
+	m := keeper.NewMigrator(am.accountKeeper, cfg.QueryServer(), am.legacySubspace)
+	if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil {
+		panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err))
+	}
+	if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil {
+		panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err))
+	}
+	if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil {
+		panic(fmt.Sprintf("failed to migrate x/%s from version 3 to 4: %v", types.ModuleName, err))
+	}
+	if err := cfg.RegisterMigration(types.ModuleName, 4, m.Migrate4To5); err != nil {
+		panic(fmt.Sprintf("failed to migrate x/%s from version 4 to 5: %v", types.ModuleName, err))
+	}
+}
+
+// InitGenesis performs genesis initialization for the auth module. It returns
+// no validator updates.
+func (am AppModule) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate {
+	var genesisState types.GenesisState
+	cdc.MustUnmarshalJSON(data, &genesisState)
+	am.accountKeeper.InitGenesis(ctx, genesisState)
+
+	return []abci.ValidatorUpdate{}
+}
+
+// ExportGenesis returns the exported genesis state as raw bytes for the auth
+// module.
+func (am AppModule) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) json.RawMessage {
+	gs := am.accountKeeper.ExportGenesis(ctx)
+	return cdc.MustMarshalJSON(gs)
+}
+
+// ConsensusVersion implements AppModule/ConsensusVersion.
+func (AppModule) ConsensusVersion() uint64 {
+	return ConsensusVersion
+}
+
+// AppModuleSimulation functions
+
+// GenerateGenesisState creates a randomized GenState of the auth module
+func (am AppModule) GenerateGenesisState(simState *module.SimulationState) {
+	simulation.RandomizedGenState(simState, am.randGenAccountsFn)
+}
+
+// ProposalMsgs returns msgs used for governance proposals for simulations.
+func (AppModule) ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg {
+	return simulation.ProposalMsgs()
+}
+
+// RegisterStoreDecoder registers a decoder for auth module's types
+func (am AppModule) RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) {
+	sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.accountKeeper.Schema)
+}
+
+// WeightedOperations doesn't return any auth module operation.
+func (AppModule) WeightedOperations(_ module.SimulationState) []simtypes.WeightedOperation {
+	return nil
+}
+
+//
+// App Wiring Setup
+//
+
+func init() {
+	appmodule.Register(&modulev1.Module{},
+		appmodule.Provide(ProvideAddressCodec),
+		appmodule.Provide(ProvideModule),
+	)
+}
+
+// ProvideAddressCodec provides an address.Codec to the container for any
+// modules that want to do address string <> bytes conversion.
+func ProvideAddressCodec(config *modulev1.Module) address.Codec {
+	return authcodec.NewBech32Codec(config.Bech32Prefix)
+}
+
+type ModuleInputs struct {
+	depinject.In
+
+	Config       *modulev1.Module
+	StoreService store.KVStoreService
+	Cdc          codec.Codec
+
+	RandomGenesisAccountsFn types.RandomGenesisAccountsFn `optional:"true"`
+	AccountI                func() sdk.AccountI           `optional:"true"`
+
+	// LegacySubspace is used solely for migration of x/params managed parameters
+	LegacySubspace exported.Subspace `optional:"true"`
+}
+
+type ModuleOutputs struct {
+	depinject.Out
+
+	AccountKeeper keeper.AccountKeeper
+	Module        appmodule.AppModule
+}
+
+func ProvideModule(in ModuleInputs) ModuleOutputs {
+	maccPerms := map[string][]string{}
+	for _, permission := range in.Config.ModuleAccountPermissions {
+		maccPerms[permission.Account] = permission.Permissions
+	}
+
+	// default to governance authority if not provided
+	authority := types.NewModuleAddress(GovModuleName)
+	if in.Config.Authority != "" {
+		authority = types.NewModuleAddressOrBech32Address(in.Config.Authority)
+	}
+
+	if in.RandomGenesisAccountsFn == nil {
+		in.RandomGenesisAccountsFn = simulation.RandomGenesisAccounts
+	}
+	if in.AccountI == nil {
+		in.AccountI = types.ProtoBaseAccount
+	}
+
+	k := keeper.NewAccountKeeper(in.Cdc, in.StoreService, in.AccountI, maccPerms, in.Config.Bech32Prefix, authority.String())
+	m := NewAppModule(in.Cdc, k, in.RandomGenesisAccountsFn, in.LegacySubspace)
+
+	return ModuleOutputs{
+		AccountKeeper: k,
+		Module:        m,
+	}
+}
+```
+
+## gRPC-gateway REST
+
+Applications need to support web services that use HTTP requests (e.g. a web wallet like [Keplr](https://keplr.app)). [grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) translates REST calls into gRPC calls, which is useful for clients that do not use gRPC.
+ +Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods, such as in the example below from the `x/auth` module: + +```protobuf +// Query defines the gRPC querier service. +service Query { + // Accounts returns all the existing accounts. + // + // When called from another module, this query might consume a high amount of + // gas if the pagination field is incorrectly set. + // + // Since: cosmos-sdk 0.43 + rpc Accounts(QueryAccountsRequest) returns (QueryAccountsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/accounts"; + } + + // Account returns account details based on address. + rpc Account(QueryAccountRequest) returns (QueryAccountResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/accounts/{address}"; + } + + // AccountAddressByID returns account address based on account number. + // + // Since: cosmos-sdk 0.46.2 + rpc AccountAddressByID(QueryAccountAddressByIDRequest) returns (QueryAccountAddressByIDResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/address_by_id/{id}"; + } + + // Params queries all parameters. + rpc Params(QueryParamsRequest) returns (QueryParamsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/params"; + } + + // ModuleAccounts returns all the existing module accounts. 
+ // + // Since: cosmos-sdk 0.46 + rpc ModuleAccounts(QueryModuleAccountsRequest) returns (QueryModuleAccountsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/module_accounts"; + } + + // ModuleAccountByName returns the module account info by module name + rpc ModuleAccountByName(QueryModuleAccountByNameRequest) returns (QueryModuleAccountByNameResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/module_accounts/{name}"; + } + + // Bech32Prefix queries bech32Prefix + // + // Since: cosmos-sdk 0.46 + rpc Bech32Prefix(Bech32PrefixRequest) returns (Bech32PrefixResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32"; + } + + // AddressBytesToString converts Account Address bytes to string + // + // Since: cosmos-sdk 0.46 + rpc AddressBytesToString(AddressBytesToStringRequest) returns (AddressBytesToStringResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32/{address_bytes}"; + } + + // AddressStringToBytes converts Address string to bytes + // + // Since: cosmos-sdk 0.46 + rpc AddressStringToBytes(AddressStringToBytesRequest) returns (AddressStringToBytesResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32/{address_string}"; + } + + // AccountInfo queries account info which is common to all account types. + // + // Since: cosmos-sdk 0.47 + rpc AccountInfo(QueryAccountInfoRequest) returns (QueryAccountInfoResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/account_info/{address}"; + } +} +``` + +gRPC gateway is started in-process along with the application and CometBFT. It can be enabled or disabled by setting gRPC Configuration `enable` in [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml). 
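+
+As a sketch, the relevant `app.toml` sections look roughly like the following. Treat the exact addresses as illustrative defaults; the section and field names follow the standard server configuration, but may differ between SDK versions:
+
+```toml
+# gRPC server configuration; the gRPC-gateway proxies REST calls to this endpoint.
+[grpc]
+enable = true
+address = "localhost:9090"
+
+# REST API server configuration, including Swagger documentation registration.
+[api]
+enable = true
+swagger = true
+address = "tcp://localhost:1317"
+```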
+
+The Cosmos SDK provides a command for generating [Swagger](https://swagger.io/) documentation (`protoc-gen-swagger`). Setting `swagger` in [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml) defines whether the Swagger documentation should be automatically registered.
diff --git a/docs/sdk/next/documentation/module-system/module-manager.mdx b/docs/sdk/next/documentation/module-system/module-manager.mdx
new file mode 100644
index 00000000..26bddeb2
--- /dev/null
+++ b/docs/sdk/next/documentation/module-system/module-manager.mdx
@@ -0,0 +1,16227 @@
+---
+title: Module Manager
+---
+
+## Synopsis
+
+Cosmos SDK modules need to implement the [`AppModule` interfaces](#application-module-interfaces) in order to be managed by the application's [module manager](#module-manager). The module manager plays an important role in [`message` and `query` routing](/docs/sdk/next/documentation/application-framework/baseapp#routing), and allows application developers to set the order of execution of a variety of functions like [`PreBlocker`](/docs/sdk/next/documentation/application-framework/app-anatomy#preblocker) and [`BeginBlocker` and `EndBlocker`](/docs/sdk/next/documentation/application-framework/app-anatomy#beginblocker-and-endblocker).
+
+
+**Pre-requisite Readings**
+
+- [Introduction to Cosmos SDK Modules](/docs/sdk/next/documentation/module-system/intro)
+
+
+
+## Application Module Interfaces
+
+Application module interfaces exist to facilitate the composition of modules together to form a functional Cosmos SDK application.
+
+
+
+It is recommended to implement interfaces from the [Core API](https://docs.cosmos.network/main/architecture/adr-063-core-module-api) `appmodule` package. This makes modules less dependent on the SDK.
+For legacy reasons, modules can still implement interfaces from the SDK `module` package.
+
+
+
+There are 2 main application module interfaces:
+
+- [`appmodule.AppModule` / `module.AppModule`](#appmodule) for inter-dependent module functionalities (except genesis-related functionalities).
+- (legacy) [`module.AppModuleBasic`](#appmodulebasic) for independent module functionalities. New modules can use `module.CoreAppModuleBasicAdaptor` instead.
+
+The above interfaces mostly embed smaller interfaces (extension interfaces) that define specific functionalities:
+
+- (legacy) `module.HasName`: Allows the module to provide its own name for legacy purposes.
+- (legacy) [`module.HasGenesisBasics`](#modulehasgenesisbasics): The legacy interface for stateless genesis methods.
+- [`module.HasGenesis`](#modulehasgenesis) for inter-dependent genesis-related module functionalities.
+- [`module.HasABCIGenesis`](#modulehasabcigenesis) for inter-dependent genesis-related module functionalities.
+- [`appmodule.HasGenesis` / `module.HasGenesis`](#appmodulehasgenesis): The extension interface for stateful genesis methods.
+- [`appmodule.HasPreBlocker`](#haspreblocker): The extension interface that contains information about the `AppModule` and `PreBlock`.
+- [`appmodule.HasBeginBlocker`](#hasbeginblocker): The extension interface that contains information about the `AppModule` and `BeginBlock`.
+- [`appmodule.HasEndBlocker`](#hasendblocker): The extension interface that contains information about the `AppModule` and `EndBlock`.
+- [`appmodule.HasPrecommit`](#hasprecommit): The extension interface that contains information about the `AppModule` and `Precommit`.
+- [`appmodule.HasPrepareCheckState`](#haspreparecheckstate): The extension interface that contains information about the `AppModule` and `PrepareCheckState`.
+- [`appmodule.HasService` / `module.HasServices`](#hasservices): The extension interface for modules to register services.
+- [`module.HasABCIEndBlock`](#hasabciendblock): The extension interface that contains information about the `AppModule` and `EndBlock`, and returns an updated validator set.
+- (legacy) [`module.HasInvariants`](#hasinvariants): The extension interface for registering invariants.
+- (legacy) [`module.HasConsensusVersion`](#hasconsensusversion): The extension interface for declaring a module consensus version.
+
+The `AppModuleBasic` interface exists to define independent methods of the module, i.e. those that do not depend on other modules in the application. This allows for the construction of the basic application structure early in the application definition, generally in the `init()` function of the [main application file](/docs/sdk/next/documentation/application-framework/app-anatomy#core-application-file).
+
+The `AppModule` interface exists to define inter-dependent module methods. Many modules need to interact with other modules, typically through [`keeper`s](/docs/sdk/next/documentation/module-system/keeper), which means there is a need for an interface where modules list their `keeper`s and other methods that require a reference to another module's object. `AppModule` interface extensions, such as `HasBeginBlocker` and `HasEndBlocker`, also enable the module manager to set the order of execution between modules' methods like `BeginBlock` and `EndBlock`, which is important in cases where the order of execution between modules matters in the context of the application.
+
+The usage of extension interfaces allows modules to define only the functionalities they need. For example, a module that does not need an `EndBlock` does not need to define the `HasEndBlocker` interface and thus the `EndBlock` method. `AppModule` and `AppModuleGenesis` are deliberately small interfaces that can take advantage of the `Module` patterns without having to define many placeholder functions.
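+
+To make the extension-interface mechanism concrete, here is a small, self-contained sketch of how a manager can use type assertions to call only the hooks a module actually implements. The interface and module names here are invented for illustration and are much simpler than the real SDK types:
+
+```go
+package main
+
+import "fmt"
+
+// Extension interfaces: a module opts into a lifecycle hook simply by
+// implementing the matching interface. (Hypothetical names, not SDK types.)
+type HasBeginBlocker interface{ BeginBlock() }
+type HasEndBlocker interface{ EndBlock() }
+
+type mintModule struct{}
+
+func (mintModule) BeginBlock() { fmt.Println("mint: BeginBlock") }
+
+type stakingModule struct{}
+
+func (stakingModule) EndBlock() { fmt.Println("staking: EndBlock") }
+
+func main() {
+	// The manager stores modules behind `any` and discovers capabilities
+	// with type assertions, mirroring the SDK's Modules map[string]any.
+	modules := map[string]any{"mint": mintModule{}, "staking": stakingModule{}}
+	order := []string{"mint", "staking"}
+
+	for _, name := range order {
+		if m, ok := modules[name].(HasBeginBlocker); ok {
+			m.BeginBlock() // only mint implements this hook
+		}
+	}
+	for _, name := range order {
+		if m, ok := modules[name].(HasEndBlocker); ok {
+			m.EndBlock() // only staking implements this hook
+		}
+	}
+}
+```
+
+A module that implements neither interface is still a valid entry in the map; the manager simply skips it for those hooks, which is exactly why no placeholder methods are needed.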
+
+### `AppModuleBasic`
+
+
+  Use `module.CoreAppModuleBasicAdaptor` instead for creating an
+  `AppModuleBasic` from an `appmodule.AppModule`.
+
+
+The `AppModuleBasic` interface defines the independent methods modules need to implement.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+  - independent module functionality (AppModuleBasic)
+  - inter-dependent module simulation functionality (AppModuleSimulation)
+  - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+
+Independent module functions are separated to allow for the construction of the
+basic application structures required early on in the application definition
+and used to enable the definition of full module functionality later in the
+process. This separation is necessary, however we still want to allow for a
+high level pattern for modules to follow - for instance, such that we don't
+have to manually register all of the codecs for all the modules. This basic
+procedure as well as other basic patterns are handled through the use of
+BasicManager.
+
+Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis)
+has been separated out from full module functionality (AppModule) so that
+modules which are only used for genesis can take advantage of the Module
+patterns without needlessly defining many placeholder functions.
+*/
+package module
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"maps"
+	"slices"
+	"sort"
+
+	abci "github.com/cometbft/cometbft/abci/types"
+	"github.com/grpc-ecosystem/grpc-gateway/runtime"
+	"github.com/spf13/cobra"
+
+	"cosmossdk.io/core/appmodule"
+	"cosmossdk.io/core/genesis"
+	errorsmod "cosmossdk.io/errors"
+	storetypes "cosmossdk.io/store/types"
+
+	"github.com/cosmos/cosmos-sdk/client"
+	"github.com/cosmos/cosmos-sdk/codec"
+	"github.com/cosmos/cosmos-sdk/codec/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+// AppModuleBasic is the standard form for basic non-dependant elements of an application module.
+type AppModuleBasic interface {
+	HasName
+
+	RegisterLegacyAminoCodec(*codec.LegacyAmino)
+	RegisterInterfaces(types.InterfaceRegistry)
+	RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)
+}
+
+// HasName allows the module to provide its own name for legacy purposes.
+// Newer apps should specify the name for their modules using a map
+// using NewManagerFromMap.
+type HasName interface {
+	Name() string
+}
+
+// HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface {
+	DefaultGenesis(codec.JSONCodec) json.RawMessage
+	ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) error
+}
+
+// BasicManager is a collection of AppModuleBasic
+type BasicManager map[string]AppModuleBasic
+
+// NewBasicManager creates a new BasicManager object
+func NewBasicManager(modules ...AppModuleBasic) BasicManager {
+	moduleMap := make(map[string]AppModuleBasic)
+	for _, module := range modules {
+		moduleMap[module.Name()] = module
+	}
+	return moduleMap
+}
+
+// NewBasicManagerFromManager creates a new BasicManager from a Manager
+// The BasicManager will contain all AppModuleBasic from the AppModule Manager
+// Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map
+func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) BasicManager {
+	moduleMap := make(map[string]AppModuleBasic)
+	for name, module := range manager.Modules {
+		if customBasicMod, ok := customModuleBasics[name]; ok {
+			moduleMap[name] = customBasicMod
+			continue
+		}
+		if appModule, ok := module.(appmodule.AppModule); ok {
+			moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule)
+			continue
+		}
+		if basicMod, ok := module.(AppModuleBasic); ok {
+			moduleMap[name] = basicMod
+		}
+	}
+	return moduleMap
+}
+
+// RegisterLegacyAminoCodec registers all module codecs
+func (bm BasicManager) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) {
+	for _, b := range bm {
+		b.RegisterLegacyAminoCodec(cdc)
+	}
+}
+
+// RegisterInterfaces registers all module interface types
+func (bm BasicManager) RegisterInterfaces(registry types.InterfaceRegistry) {
+	for _, m := range bm {
+		m.RegisterInterfaces(registry)
+	}
+}
+
+// DefaultGenesis provides default genesis information for all modules
+func (bm BasicManager) DefaultGenesis(cdc codec.JSONCodec) map[string]json.RawMessage {
+	genesisData := make(map[string]json.RawMessage)
+	for _, b := range bm {
+		if mod, ok := b.(HasGenesisBasics); ok {
+			genesisData[b.Name()] = mod.DefaultGenesis(cdc)
+		}
+	}
+	return genesisData
+}
+
+// ValidateGenesis performs genesis state validation for all modules
+func (bm BasicManager) ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) error {
+	for _, b := range bm {
+		// first check if the module is an adapted Core API Module
+		if mod, ok := b.(HasGenesisBasics); ok {
+			if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+// RegisterGRPCGatewayRoutes registers all module rest routes
+func (bm BasicManager) RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) {
+	for _, b := range bm {
+		b.RegisterGRPCGatewayRoutes(clientCtx, rtr)
+	}
+}
+
+// AddTxCommands adds all tx commands to the rootTxCmd.
+func (bm BasicManager) AddTxCommands(rootTxCmd *cobra.Command) {
+	for _, b := range bm {
+		if mod, ok := b.(interface {
+			GetTxCmd() *cobra.Command
+		}); ok {
+			if cmd := mod.GetTxCmd(); cmd != nil {
+				rootTxCmd.AddCommand(cmd)
+			}
+		}
+	}
+}
+
+// AddQueryCommands adds all query commands to the rootQueryCmd.
+func (bm BasicManager) AddQueryCommands(rootQueryCmd *cobra.Command) {
+	for _, b := range bm {
+		if mod, ok := b.(interface {
+			GetQueryCmd() *cobra.Command
+		}); ok {
+			if cmd := mod.GetQueryCmd(); cmd != nil {
+				rootQueryCmd.AddCommand(cmd)
+			}
+		}
+	}
+}
+
+// HasGenesis is the extension interface for stateful genesis methods.
+type HasGenesis interface {
+	HasGenesisBasics
+	InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage)
+	ExportGenesis(sdk.Context, codec.JSONCodec) json.RawMessage
+}
+
+// HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates.
+type HasABCIGenesis interface {
+	HasGenesisBasics
+	InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
+	ExportGenesis(sdk.Context, codec.JSONCodec) json.RawMessage
+}
+
+// AppModule is the form for an application module. Most of
+// its functionality has been moved to extension interfaces.
+// Deprecated: use appmodule.AppModule with a combination of extension interfaces instead.
+type AppModule interface {
+	appmodule.AppModule
+
+	AppModuleBasic
+}
+
+// HasInvariants is the interface for registering invariants.
+//
+// Deprecated: this will be removed in the next Cosmos SDK release.
+type HasInvariants interface {
+	// RegisterInvariants registers module invariants.
+	RegisterInvariants(sdk.InvariantRegistry)
+}
+
+// HasServices is the interface for modules to register services.
+type HasServices interface {
+	// RegisterServices allows a module to register services.
+	RegisterServices(Configurator)
+}
+
+// HasConsensusVersion is the interface for declaring a module consensus version.
+type HasConsensusVersion interface {
+	// ConsensusVersion is a sequence number for state-breaking change of the
+	// module. It should be incremented on each consensus-breaking change
+	// introduced by the module. To avoid wrong/empty versions, the initial version
+	// should be set to 1.
+	ConsensusVersion() uint64
+}
+
+// HasABCIEndblock is a released typo of HasABCIEndBlock.
+// Deprecated: use HasABCIEndBlock instead.
+type HasABCIEndblock HasABCIEndBlock
+
+// HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface {
+	AppModule
+	EndBlock(context.Context) ([]abci.ValidatorUpdate, error)
+}
+
+var (
+	_ appmodule.AppModule = (*GenesisOnlyAppModule)(nil)
+	_ AppModuleBasic      = (*GenesisOnlyAppModule)(nil)
+)
+
+// AppModuleGenesis is the standard form for an application module genesis functions
+type AppModuleGenesis interface {
+	AppModuleBasic
+	HasABCIGenesis
+}
+
+// GenesisOnlyAppModule is an AppModule that only has import/export functionality
+type GenesisOnlyAppModule struct {
+	AppModuleGenesis
+}
+
+// NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object
+func NewGenesisOnlyAppModule(amg AppModuleGenesis) GenesisOnlyAppModule {
+	return GenesisOnlyAppModule{
+		AppModuleGenesis: amg,
+	}
+}
+
+// IsOnePerModuleType implements the depinject.OnePerModuleType interface.
+func (GenesisOnlyAppModule) IsOnePerModuleType() {}
+
+// IsAppModule implements the appmodule.AppModule interface.
+func (GenesisOnlyAppModule) IsAppModule() {}
+
+// RegisterInvariants is a placeholder function register no invariants
+func (GenesisOnlyAppModule) RegisterInvariants(_ sdk.InvariantRegistry) {}
+
+// ConsensusVersion implements AppModule/ConsensusVersion.
+func (gam GenesisOnlyAppModule) ConsensusVersion() uint64 {
+	return 1
+}
+
+// Manager defines a module manager that provides the high level utility for managing and executing
+// operations for a group of modules
+type Manager struct {
+	Modules                  map[string]any // interface{} is used now to support the legacy AppModule as well as new core appmodule.AppModule.
+	OrderInitGenesis         []string
+	OrderExportGenesis       []string
+	OrderPreBlockers         []string
+	OrderBeginBlockers       []string
+	OrderEndBlockers         []string
+	OrderPrepareCheckStaters []string
+	OrderPrecommiters        []string
+	OrderMigrations          []string
+}
+
+// NewManager creates a new Manager object.
+func NewManager(modules ...AppModule) *Manager {
+	moduleMap := make(map[string]any)
+	modulesStr := make([]string, 0, len(modules))
+	preBlockModulesStr := make([]string, 0)
+	for _, module := range modules {
+		if _, ok := module.(appmodule.AppModule); !ok {
+			panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name()))
+		}
+		moduleMap[module.Name()] = module
+		modulesStr = append(modulesStr, module.Name())
+		if _, ok := module.(appmodule.HasPreBlocker); ok {
+			preBlockModulesStr = append(preBlockModulesStr, module.Name())
+		}
+	}
+
+	return &Manager{
+		Modules:                  moduleMap,
+		OrderInitGenesis:         modulesStr,
+		OrderExportGenesis:       modulesStr,
+		OrderPreBlockers:         preBlockModulesStr,
+		OrderBeginBlockers:       modulesStr,
+		OrderPrepareCheckStaters: modulesStr,
+		OrderPrecommiters:        modulesStr,
+		OrderEndBlockers:         modulesStr,
+	}
+}
+
+// NewManagerFromMap creates a new Manager object from a map of module names to module implementations.
+// This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API.
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager {
+	simpleModuleMap := make(map[string]any)
+	modulesStr := make([]string, 0, len(simpleModuleMap))
+	preBlockModulesStr := make([]string, 0)
+	for name, module := range moduleMap {
+		simpleModuleMap[name] = module
+		modulesStr = append(modulesStr, name)
+		if _, ok := module.(appmodule.HasPreBlocker); ok {
+			preBlockModulesStr = append(preBlockModulesStr, name)
+		}
+	}
+
+	// Sort the modules by name. Given that we are using a map above we can't guarantee the order.
+	sort.Strings(modulesStr)
+
+	return &Manager{
+		Modules:                  simpleModuleMap,
+		OrderInitGenesis:         modulesStr,
+		OrderExportGenesis:       modulesStr,
+		OrderPreBlockers:         preBlockModulesStr,
+		OrderBeginBlockers:       modulesStr,
+		OrderEndBlockers:         modulesStr,
+		OrderPrecommiters:        modulesStr,
+		OrderPrepareCheckStaters: modulesStr,
+	}
+}
+
+// SetOrderInitGenesis sets the order of init genesis calls
+func (m *Manager) SetOrderInitGenesis(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) bool {
+		module := m.Modules[moduleName]
+		if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis {
+			return !hasGenesis
+		}
+		if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis {
+			return !hasABCIGenesis
+		}
+		_, hasGenesis := module.(HasGenesis)
+		return !hasGenesis
+	})
+	m.OrderInitGenesis = moduleNames
+}
+
+// SetOrderExportGenesis sets the order of export genesis calls
+func (m *Manager) SetOrderExportGenesis(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) bool {
+		module := m.Modules[moduleName]
+		if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis {
+			return !hasGenesis
+		}
+		if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis {
+			return !hasABCIGenesis
+		}
+		_, hasGenesis := module.(HasGenesis)
+		return !hasGenesis
+	})
+	m.OrderExportGenesis = moduleNames
+}
+
+// SetOrderPreBlockers sets the order of set pre-blocker calls
+func (m *Manager) SetOrderPreBlockers(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames,
+		func(moduleName string) bool {
+			module := m.Modules[moduleName]
+			_, hasBlock := module.(appmodule.HasPreBlocker)
+			return !hasBlock
+		})
+	m.OrderPreBlockers = moduleNames
+}
+
+// SetOrderBeginBlockers sets the order of set begin-blocker calls
+func (m *Manager) SetOrderBeginBlockers(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames,
+		func(moduleName string) bool {
+			module := m.Modules[moduleName]
+			_, hasBeginBlock := module.(appmodule.HasBeginBlocker)
+			return !hasBeginBlock
+		})
+	m.OrderBeginBlockers = moduleNames
+}
+
+// SetOrderEndBlockers sets the order of set end-blocker calls
+func (m *Manager) SetOrderEndBlockers(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames,
+		func(moduleName string) bool {
+			module := m.Modules[moduleName]
+			if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock {
+				return !hasEndBlock
+			}
+			_, hasABCIEndBlock := module.(HasABCIEndBlock)
+			return !hasABCIEndBlock
+		})
+	m.OrderEndBlockers = moduleNames
+}
+
+// SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls
+func (m *Manager) SetOrderPrepareCheckStaters(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames,
+		func(moduleName string) bool {
+			module := m.Modules[moduleName]
+			_, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState)
+			return !hasPrepareCheckState
+		})
+	m.OrderPrepareCheckStaters = moduleNames
+}
+
+// SetOrderPrecommiters sets the order of set precommiter calls
+func (m *Manager) SetOrderPrecommiters(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames,
+		func(moduleName string) bool {
+			module := m.Modules[moduleName]
+			_, hasPrecommit := module.(appmodule.HasPrecommit)
+			return !hasPrecommit
+		})
+	m.OrderPrecommiters = moduleNames
+}
+
+// SetOrderMigrations sets the order of migrations to be run. If not set
+// then migrations will be run with an order defined in `DefaultMigrationsOrder`.
+func (m *Manager) SetOrderMigrations(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil)
+	m.OrderMigrations = moduleNames
+}
+
+// RegisterInvariants registers all module invariants
+//
+// Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK.
+func (m *Manager) RegisterInvariants(_ sdk.InvariantRegistry) {}
+
+// RegisterServices registers all module services
+func (m *Manager) RegisterServices(cfg Configurator) error {
+	for _, module := range m.Modules {
+		if module, ok := module.(HasServices); ok {
+			module.RegisterServices(cfg)
+		}
+
+		if module, ok := module.(appmodule.HasServices); ok {
+			err := module.RegisterServices(cfg)
+			if err != nil {
+				return err
+			}
+		}
+
+		if cfg.Error() != nil {
+			return cfg.Error()
+		}
+	}
+	return nil
+}
+
+// InitGenesis performs init genesis functionality for modules. Exactly one
+// module must return a non-empty validator set update to correctly initialize
+// the chain.
+func (m *Manager) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) {
+	var validatorUpdates []abci.ValidatorUpdate
+	ctx.Logger().Info("initializing blockchain state from genesis.json")
+	for _, moduleName := range m.OrderInitGenesis {
+		if genesisData[moduleName] == nil {
+			continue
+		}
+
+		mod := m.Modules[moduleName]
+		// we might get an adapted module, a native core API module or a legacy module
+		if module, ok := mod.(appmodule.HasGenesis); ok {
+			ctx.Logger().Debug("running initialization for module", "module", moduleName)
+			// core API genesis
+			source, err := genesis.SourceFromRawJSON(genesisData[moduleName])
+			if err != nil {
+				return &abci.ResponseInitChain{}, err
+			}
+
+			err = module.InitGenesis(ctx, source)
+			if err != nil {
+				return &abci.ResponseInitChain{}, err
+			}
+		} else if module, ok := mod.(HasGenesis); ok {
+			ctx.Logger().Debug("running initialization for module", "module", moduleName)
+			module.InitGenesis(ctx, cdc, genesisData[moduleName])
+		} else if module, ok := mod.(HasABCIGenesis); ok {
+			ctx.Logger().Debug("running initialization for module", "module", moduleName)
+			moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName])
+
+			// use these validator updates if provided, the module manager assumes
+			// only one module will update the validator set
+			if len(moduleValUpdates) > 0 {
+				if len(validatorUpdates) > 0 {
+					return &abci.ResponseInitChain{}, errors.New("validator InitGenesis updates already set by a previous module")
+				}
+				validatorUpdates = moduleValUpdates
+			}
+		}
+	}
+
+	// a chain must initialize with a non-empty validator set
+	if len(validatorUpdates) == 0 {
+		return &abci.ResponseInitChain{}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)
+	}
+
+	return &abci.ResponseInitChain{
+		Validators: validatorUpdates,
+	}, nil
+}
+
+// ExportGenesis performs export genesis functionality for modules
+func (m *Manager) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) {
+	return m.ExportGenesisForModules(ctx, cdc, []string{})
+}
+
+// ExportGenesisForModules performs export genesis functionality for modules
+func (m *Manager) ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) {
+	if len(modulesToExport) == 0 {
+		modulesToExport = m.OrderExportGenesis
+	}
+
+	// verify modules exists in app, so that we don't panic in the middle of an export
+	if err := m.checkModulesExists(modulesToExport); err != nil {
+		return nil, err
+	}
+
+	type genesisResult struct {
+		bz  json.RawMessage
+		err error
+	}
+
+	channels := make(map[string]chan genesisResult)
+	for _, moduleName := range modulesToExport {
+		mod := m.Modules[moduleName]
+		if module, ok := mod.(appmodule.HasGenesis); ok {
+			// core API genesis
+			channels[moduleName] = make(chan genesisResult)
+			go func(module appmodule.HasGenesis, ch chan genesisResult) {
+				ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) // avoid race conditions
+				target := genesis.RawJSONTarget{}
+				err := module.ExportGenesis(ctx, target.Target())
+				if err != nil {
+					ch <- genesisResult{nil, err}
+					return
+				}
+
+				rawJSON, err := target.JSON()
+				if err != nil {
+					ch <- genesisResult{nil, err}
+					return
+				}
+
+				ch <- genesisResult{rawJSON, nil}
+			}(module, channels[moduleName])
+		} else if module, ok := mod.(HasGenesis); ok {
+			channels[moduleName] = make(chan genesisResult)
+			go func(module HasGenesis, ch chan genesisResult) {
+				ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) // avoid race conditions
+				ch <- genesisResult{module.ExportGenesis(ctx, cdc), nil}
+			}(module, channels[moduleName])
+		} else if module, ok := mod.(HasABCIGenesis); ok {
+			channels[moduleName] = make(chan genesisResult)
+			go func(module HasABCIGenesis, ch chan genesisResult) {
+				ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) // avoid race conditions
+				ch <- genesisResult{module.ExportGenesis(ctx, cdc), nil}
+			}(module, channels[moduleName])
+		}
+	}
+
+	genesisData := make(map[string]json.RawMessage)
+	for moduleName := range channels {
+		res := <-channels[moduleName]
+		if res.err != nil {
+			return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err)
+		}
+		genesisData[moduleName] = res.bz
+	}
+	return genesisData, nil
+}
+
+// checkModulesExists verifies that all modules in the list exist in the app
+func (m *Manager) checkModulesExists(moduleName []string) error {
+	for _, name := range moduleName {
+		if _, ok := m.Modules[name]; !ok {
+			return fmt.Errorf("module %s does not exist", name)
+		}
+	}
+	return nil
+}
+
+// assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions.
+// `pass` is a closure which allows one to omit modules from `moduleNames`.
+// If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion.
+func (m *Manager) assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) bool) {
+	ms := make(map[string]bool)
+	for _, m := range moduleNames {
+		ms[m] = true
+	}
+
+	var missing []string
+	for m := range m.Modules {
+		if pass != nil && pass(m) {
+			continue
+		}
+		if !ms[m] {
+			missing = append(missing, m)
+		}
+	}
+
+	if len(missing) != 0 {
+		sort.Strings(missing)
+		panic(fmt.Sprintf(
+			"all modules must be defined when setting %s, missing: %v", setOrderFnName, missing))
+	}
+}
+
+// MigrationHandler is the migration function that each module registers.
+type MigrationHandler func(sdk.Context) error
+
+// VersionMap is a map of moduleName -> version
+type VersionMap map[string]uint64
+
+// RunMigrations performs in-place store migrations for all modules.
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `RegisterLegacyAminoCodec(*codec.LegacyAmino)`: Registers the `amino` codec for the module, which is used to marshal and unmarshal structs to/from `[]byte` in order to persist them in the module's `KVStore`. 
+- `RegisterInterfaces(codectypes.InterfaceRegistry)`: Registers a module's interface types and their concrete implementations as `proto.Message`. +- `RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)`: Registers gRPC routes for the module. + +All the `AppModuleBasic` of an application are managed by the [`BasicManager`](#basicmanager). + +### `HasName` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module's genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
/ function MUST be called inside an x/upgrade UpgradeHandler.
/
/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
/ x/upgrade's store, and the function needs to return the target VersionMap
/ that will in turn be persisted to the x/upgrade's store. In general,
/ returning RunMigrations should be enough:
/
/ Example:
/
/	cfg := module.NewConfigurator(...)
/	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
/	    return app.mm.RunMigrations(ctx, cfg, fromVM)
/	})
/
/ Internally, RunMigrations will perform the following steps:
/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
/ - make a diff of `fromVM` and `updatedVM`, and for each module:
/   - if the module's `fromVM` version is less than its `updatedVM` version,
/     then run in-place store migrations for that module between those versions.
/   - if the module does not exist in the `fromVM` (which means that it's a new module,
/     because it was not in the previous x/upgrade's store), then run
/     `InitGenesis` on that module.
/ - return the `updatedVM` to be persisted in the x/upgrade's store.
/
/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
/ by the `DefaultMigrationsOrder` function.
/
/ As an app developer, if you wish to skip running InitGenesis for your new
/ module "foo", you need to manually pass a `fromVM` argument to this function with
/ foo's module version set to its latest ConsensusVersion. That way, the diff
/ between the function's `fromVM` and `updatedVM` will be empty, hence not
/ running anything for foo.
/
/ Example:
/
/	cfg := module.NewConfigurator(...)
/	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
/	    / Assume "foo" is a new module.
/	    / `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist
/	    / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
/	    / run InitGenesis on foo.
/	    / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
/	    / consensus version:
/	    fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
/
/	    return app.mm.RunMigrations(ctx, cfg, fromVM)
/	})
/
/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
func (m Manager) RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
	c, ok := cfg.(*configurator)
	if !ok {
		return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{}, cfg)
	}

	modules := m.OrderMigrations
	if modules == nil {
		modules = DefaultMigrationsOrder(m.ModuleNames())
	}

	sdkCtx := sdk.UnwrapSDKContext(ctx)
	updatedVM := VersionMap{}
	for _, moduleName := range modules {
		module := m.Modules[moduleName]
		fromVersion, exists := fromVM[moduleName]
		toVersion := uint64(0)
		if module, ok := module.(HasConsensusVersion); ok {
			toVersion = module.ConsensusVersion()
		}

		/ We run migration if the module is specified in `fromVM`.
		/ Otherwise we run InitGenesis.
		/
		/ The module won't exist in the fromVM in two cases:
		/ 1. A new module is added. In this case we run InitGenesis with an
		/    empty genesis state.
		/ 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
		/    In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `HasName` is an interface that has a method `Name()`. This method returns the name of the module as a `string`. + +### Genesis + + + For easily creating an `AppModule` that only has genesis functionalities, use + `module.GenesisOnlyAppModule`. 
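
The `InitGenesis` dispatch shown above boils down to ordinary Go type assertions over the genesis extension interfaces: the manager probes each registered module for the richest genesis interface it implements and calls the matching method. A minimal, self-contained sketch of that pattern follows; the interface and type names here are simplified stand-ins for illustration, not the SDK's actual types.

```go
package main

import "fmt"

// hasGenesis is a simplified stand-in for a module with plain
// genesis initialization (hypothetical name, illustration only).
type hasGenesis interface {
	InitGenesis()
}

// hasABCIGenesis is a stand-in for a module whose genesis
// initialization also returns validator updates.
type hasABCIGenesis interface {
	InitGenesisWithUpdates() []string
}

type plainModule struct{}

func (plainModule) InitGenesis() {}

type stakingLikeModule struct{}

func (stakingLikeModule) InitGenesisWithUpdates() []string {
	return []string{"validator-1"}
}

// initGenesis mirrors the manager's dispatch: probe for the
// richer interface first, then fall back to the plain one.
func initGenesis(mod any) []string {
	if m, ok := mod.(hasABCIGenesis); ok {
		return m.InitGenesisWithUpdates()
	}
	if m, ok := mod.(hasGenesis); ok {
		m.InitGenesis()
	}
	return nil
}

func main() {
	fmt.Println(initGenesis(plainModule{}))
	fmt.Println(initGenesis(stakingLikeModule{}))
}
```

In the real manager the dispatch additionally enforces that at most one module returns a non-empty validator set, erroring out if a second module also does.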
+ + +#### `module.HasGenesisBasics` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
Lastly, the interface for genesis functionality (HasGenesis & HasABCIGenesis)
has been separated out from full module functionality (AppModule) so that
modules which are only used for genesis can take advantage of the Module
patterns without needlessly defining many placeholder functions.
*/
package module

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"maps"
	"slices"
	"sort"

	abci "github.com/cometbft/cometbft/abci/types"
	"github.com/grpc-ecosystem/grpc-gateway/runtime"
	"github.com/spf13/cobra"

	"cosmossdk.io/core/appmodule"
	"cosmossdk.io/core/genesis"
	errorsmod "cosmossdk.io/errors"
	storetypes "cosmossdk.io/store/types"

	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

/ AppModuleBasic is the standard form for basic non-dependent elements of an application module.
type AppModuleBasic interface {
	HasName
	RegisterLegacyAminoCodec(*codec.LegacyAmino)
	RegisterInterfaces(types.InterfaceRegistry)
	RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)
}

/ HasName allows the module to provide its own name for legacy purposes.
/ Newer apps should specify the names for their modules using a map
/ via NewManagerFromMap.
type HasName interface {
	Name() string
}

/ HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
type HasABCIGenesis interface {
	HasGenesisBasics
	InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
	ExportGenesis(sdk.Context, codec.JSONCodec) json.RawMessage
}

/ AppModule is the form for an application module. Most of
/ its functionality has been moved to extension interfaces.
/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead.
type AppModule interface {
	appmodule.AppModule

	AppModuleBasic
}

/ HasInvariants is the interface for registering invariants.
/
/ Deprecated: this will be removed in the next Cosmos SDK release.
type HasInvariants interface {
	/ RegisterInvariants registers module invariants.
	RegisterInvariants(sdk.InvariantRegistry)
}

/ HasServices is the interface for modules to register services.
type HasServices interface {
	/ RegisterServices allows a module to register services.
	RegisterServices(Configurator)
}

/ HasConsensusVersion is the interface for declaring a module consensus version.
type HasConsensusVersion interface {
	/ ConsensusVersion is a sequence number for state-breaking change of the
	/ module. It should be incremented on each consensus-breaking change
	/ introduced by the module. To avoid wrong/empty versions, the initial version
	/ should be set to 1.
	ConsensusVersion() uint64
}

/ HasABCIEndblock is a released typo of HasABCIEndBlock.
/ Deprecated: use HasABCIEndBlock instead.
type HasABCIEndblock HasABCIEndBlock

/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migrations if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +Let us go through the methods: + +- `DefaultGenesis(codec.JSONCodec)`: Returns a default [`GenesisState`](/docs/sdk/next/documentation/module-system/genesis#genesisstate) for the module, marshalled to `json.RawMessage`. 
The default `GenesisState` needs to be defined by the module developer and is primarily used for testing. +- `ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`: Used to validate the `GenesisState` defined by a module, given in its `json.RawMessage` form. It will usually unmarshal the `json` before running a custom [`ValidateGenesis`](/docs/sdk/next/documentation/module-system/genesis#validategenesis) function defined by the module developer. + +#### `module.HasGenesis` + +`HasGenesis` is an extension interface that allows modules to implement genesis functionality. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager.
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. 
+ ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ genesisOnlyModule is an interface needed to return a GenesisOnlyAppModule struct in order to wrap two interfaces +type genesisOnlyModule interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + genesisOnlyModule +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg genesisOnlyModule) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + genesisOnlyModule: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object.
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", 
moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndblock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide a non-nil `pass` and it returns true, the module is not subject to the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules.
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists {
+ err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion)
+ if err != nil {
+ return nil, err
+}
+
+}
+
+else {
+ sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
+
+module1, ok := m.Modules[moduleName].(HasGenesis)
+ if ok {
+ module1.InitGenesis(sdkCtx, c.cdc, module1.DefaultGenesis(c.cdc))
+}
+ if module2, ok := m.Modules[moduleName].(HasABCIGenesis); ok {
+ moduleValUpdates := module2.InitGenesis(sdkCtx, c.cdc, module2.DefaultGenesis(c.cdc))
+ / The module manager assumes only one module will update the
+ / validator set, and it can't be a new module.
+ if len(moduleValUpdates) > 0 {
+ return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
+}
+
+}
+
+}
+
+updatedVM[moduleName] = toVersion
+}
+
+return updatedVM, nil
+}
+
+/ RunMigrationBeginBlock performs begin block functionality for the upgrade module.
+/ It takes the current context as a parameter and returns a boolean value
+/ indicating whether the migration was executed or not, and an error if it fails.
+func (m *Manager)
+
+RunMigrationBeginBlock(ctx sdk.Context) (bool, error) {
+ for _, moduleName := range m.OrderBeginBlockers {
+ if mod, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok {
+ if _, ok := mod.(appmodule.UpgradeModule); ok {
+ err := mod.BeginBlock(ctx)
+
+return err == nil, err
+}
+
+}
+
+}
+
+return false, nil
+}
+
+/ BeginBlock performs begin block functionality for non-upgrade modules. It creates a
+/ child context with an event manager to aggregate events emitted from non-upgrade
+/ modules.
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := module.(appmodule.UpgradeModule); !ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager)
+
+Precommit(ctx sdk.Context)
+
+error {
+ for _, moduleName := range m.OrderPrecommiters {
+ module, ok := m.Modules[moduleName].(appmodule.HasPrecommit)
+ if !ok {
+ continue
+}
+ if err := module.Precommit(ctx); err != nil {
+ return err
+}
+
+}
+
+return nil
+}
+
+/ PrepareCheckState performs functionality for preparing the check state for all modules.
+func (m *Manager)
+
+PrepareCheckState(ctx sdk.Context)
+
+error {
+ for _, moduleName := range m.OrderPrepareCheckStaters {
+ module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState)
+ if !ok {
+ continue
+}
+ if err := module.PrepareCheckState(ctx); err != nil {
+ return err
+}
+
+}
+
+return nil
+}
+
+/ GetVersionMap returns the consensus versions of all modules
+func (m *Manager)
+
+GetVersionMap()
+
+VersionMap {
+ vermap := make(VersionMap)
+ for name, v := range m.Modules {
+ version := uint64(0)
+ if v, ok := v.(HasConsensusVersion); ok {
+ version = v.ConsensusVersion()
+}
+ name := name
+ vermap[name] = version
+}
+
+return vermap
+}
+
+/ ModuleNames returns a list of all module names, without any particular order.
+func (m *Manager)
+
+ModuleNames() []string {
+ return maps.Keys(m.Modules)
+}
+
+/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+/ except x/auth which will run last, see:
+/ https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+ const authName = "auth"
+ out := make([]string, 0, len(modules))
+ hasAuth := false
+ for _, m := range modules {
+ if m == authName {
+ hasAuth = true
+}
+
+else {
+ out = append(out, m)
+}
+
+}
+
+sort.Strings(out)
+ if hasAuth {
+ out = append(out, authName)
+}
+
+return out
+}
+```
+
+#### `module.HasABCIGenesis`
+
+`HasABCIGenesis` is an extension interface that allows modules to implement genesis functionality and return validator set updates.
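
Concretely, the manager treats `HasABCIGenesis` modules as the only ones allowed to contribute validator updates during `InitGenesis`, and at most one module may do so. The following is a minimal, self-contained sketch of that dispatch logic; `Context`, `JSONCodec`, `ValidatorUpdate`, and the `stakingLike`/`bankLike` modules are hypothetical stand-ins, not the real SDK types:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// Placeholder stand-ins for sdk.Context, codec.JSONCodec and abci.ValidatorUpdate.
type Context struct{}
type JSONCodec struct{}
type ValidatorUpdate struct{ Power int64 }

// hasGenesis mirrors module.HasGenesis: InitGenesis returns nothing.
type hasGenesis interface {
	InitGenesis(Context, JSONCodec, json.RawMessage)
}

// hasABCIGenesis mirrors module.HasABCIGenesis: InitGenesis returns validator updates.
type hasABCIGenesis interface {
	InitGenesis(Context, JSONCodec, json.RawMessage) []ValidatorUpdate
}

// stakingLike plays the role of a module (such as x/staking) that seeds
// the initial validator set from its genesis state.
type stakingLike struct{}

func (stakingLike) InitGenesis(Context, JSONCodec, json.RawMessage) []ValidatorUpdate {
	return []ValidatorUpdate{{Power: 10}}
}

// bankLike plays the role of a plain HasGenesis module with no updates.
type bankLike struct{}

func (bankLike) InitGenesis(Context, JSONCodec, json.RawMessage) {}

// initGenesis mirrors the manager's type switch: only hasABCIGenesis modules
// may contribute validator updates, and at most one of them may do so.
func initGenesis(order []string, mods map[string]interface{}, data map[string]json.RawMessage) ([]ValidatorUpdate, error) {
	var updates []ValidatorUpdate
	for _, name := range order {
		switch m := mods[name].(type) {
		case hasABCIGenesis:
			if vu := m.InitGenesis(Context{}, JSONCodec{}, data[name]); len(vu) > 0 {
				if len(updates) > 0 {
					return nil, errors.New("validator InitGenesis updates already set by a previous module")
				}
				updates = vu
			}
		case hasGenesis:
			m.InitGenesis(Context{}, JSONCodec{}, data[name])
		}
	}
	// As in the real manager, a chain must initialize with a non-empty validator set.
	if len(updates) == 0 {
		return nil, errors.New("validator set is empty after InitGenesis")
	}
	return updates, nil
}

func main() {
	mods := map[string]interface{}{"bank": bankLike{}, "staking": stakingLike{}}
	updates, err := initGenesis([]string{"bank", "staking"}, mods, nil)
	fmt.Println(len(updates), err)
}
```

The real implementation in the listing below additionally handles the core API's `appmodule.HasGenesis` and returns the updates in `abci.ResponseInitChain`.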
+ +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+
+Lastly, the interface for genesis functionality (HasGenesis & HasABCIGenesis)
+has been separated out from full module functionality (AppModule) so that
+modules which are only used for genesis can take advantage of the Module
+patterns without needlessly defining many placeholder functions.
+*/
+package module
+
+import (
+
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "sort"
+
+ abci "github.com/cometbft/cometbft/abci/types"
+ "github.com/grpc-ecosystem/grpc-gateway/runtime"
+ "github.com/spf13/cobra"
+ "golang.org/x/exp/maps"
+ "cosmossdk.io/core/appmodule"
+ "cosmossdk.io/core/genesis"
+ errorsmod "cosmossdk.io/errors"
+ storetypes "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/codec"
+ "github.com/cosmos/cosmos-sdk/codec/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+/ AppModuleBasic is the standard form for basic non-dependent elements of an application module.
+type AppModuleBasic interface {
+ HasName
+ RegisterLegacyAminoCodec(*codec.LegacyAmino)
+
+RegisterInterfaces(types.InterfaceRegistry)
+
+RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)
+}
+
+/ HasName allows the module to provide its own name for legacy purposes.
+/ Newer apps should specify the name for their modules using a map
+/ via NewManagerFromMap.
+type HasName interface {
+ Name()
+
+string
+}
+
+/ HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. 
+ ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ genesisOnlyModule is an interface need to return GenesisOnlyAppModule struct in order to wrap two interfaces +type genesisOnlyModule interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + genesisOnlyModule +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg genesisOnlyModule) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + genesisOnlyModule: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", 
moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndblock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or, if not set,
+/ by the `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function
+/ with foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists {
+ err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion)
+ if err != nil {
+ return nil, err
+}
+
+}
+
+else {
+ sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
+
+module1, ok := m.Modules[moduleName].(HasGenesis)
+ if ok {
+ module1.InitGenesis(sdkCtx, c.cdc, module1.DefaultGenesis(c.cdc))
+}
+ if module2, ok := m.Modules[moduleName].(HasABCIGenesis); ok {
+ moduleValUpdates := module2.InitGenesis(sdkCtx, c.cdc, module2.DefaultGenesis(c.cdc))
+ / The module manager assumes only one module will update the
+ / validator set, and it can't be a new module.
+ if len(moduleValUpdates) > 0 {
+ return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
+}
+
+}
+
+}
+
+updatedVM[moduleName] = toVersion
+}
+
+return updatedVM, nil
+}
+
+/ RunMigrationBeginBlock performs begin block functionality for the upgrade module.
+/ It takes the current context as a parameter and returns a boolean value
+/ indicating whether the migration was executed or not, and an error if it fails.
+func (m *Manager)
+
+RunMigrationBeginBlock(ctx sdk.Context) (bool, error) {
+ for _, moduleName := range m.OrderBeginBlockers {
+ if mod, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok {
+ if _, ok := mod.(appmodule.UpgradeModule); ok {
+ err := mod.BeginBlock(ctx)
+
+return err == nil, err
+}
+
+}
+
+}
+
+return false, nil
+}
+
+/ BeginBlock performs begin block functionality for non-upgrade modules. It creates a
+/ child context with an event manager to aggregate events emitted from non-upgrade
+/ modules.
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := module.(appmodule.UpgradeModule); !ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +#### `appmodule.HasGenesis` + + + `appmodule.HasGenesis` is experimental and should be considered unstable, it + is recommended to not use this interface at this time. 
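
The warning above notwithstanding, the per-field streaming shape of this interface can be illustrated with a minimal, self-contained sketch. The `fooModule` name is hypothetical, and `GenesisSource`/`GenesisTarget` below are local stand-ins mirroring the signatures in the listing that follows (the real types live in `cosmossdk.io/core/appmodule`):

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
)

// Local stand-ins for appmodule.GenesisSource / appmodule.GenesisTarget:
// a source opens a reader per genesis field, a target opens a writer per field.
type GenesisSource = func(field string) (io.ReadCloser, error)
type GenesisTarget = func(field string) (io.WriteCloser, error)

// fooModule is a hypothetical module sketching the streaming genesis flow:
// DefaultGenesis writes each field to the target, InitGenesis reads it back.
type fooModule struct{ params string }

func (fooModule) DefaultGenesis(target GenesisTarget) error {
	w, err := target("params")
	if err != nil {
		return err
	}
	defer w.Close()
	_, err = w.Write([]byte(`{"enabled":true}`))
	return err
}

func (m *fooModule) InitGenesis(_ context.Context, source GenesisSource) error {
	r, err := source("params")
	if err != nil {
		return err
	}
	defer r.Close()
	raw, err := io.ReadAll(r)
	if err != nil {
		return err
	}
	m.params = string(raw)
	return nil
}

type nopWriteCloser struct{ *bytes.Buffer }

func (nopWriteCloser) Close() error { return nil }

// roundTrip exercises DefaultGenesis -> InitGenesis entirely in memory.
func roundTrip() (string, error) {
	buf := &bytes.Buffer{}
	m := &fooModule{}
	if err := m.DefaultGenesis(func(string) (io.WriteCloser, error) { return nopWriteCloser{buf}, nil }); err != nil {
		return "", err
	}
	err := m.InitGenesis(context.Background(), func(string) (io.ReadCloser, error) {
		return io.NopCloser(bytes.NewReader(buf.Bytes())), nil
	})
	return m.params, err
}

func main() {
	params, err := roundTrip()
	if err != nil {
		panic(err)
	}
	fmt.Println("init with params:", params)
}
```

Note how the caller, not the module, owns the readers and writers; per the interface contract, the caller must close each one and check the error. The full interface definition follows.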
+ 

+```go expandable
+package appmodule

+import (
+
+	"context"
+	"io"
+)

+/ HasGenesis is the extension interface that modules should implement to handle
+/ genesis data and state initialization.
+/ WARNING: This interface is experimental and may change at any time.
+type HasGenesis interface {
+	AppModule
+
+	/ DefaultGenesis writes the default genesis for this module to the target.
+	DefaultGenesis(GenesisTarget)

+error

+ / ValidateGenesis validates the genesis data read from the source.
+ ValidateGenesis(GenesisSource)

+error

+ / InitGenesis initializes module state from the genesis source.
+ InitGenesis(context.Context, GenesisSource)

+error

+ / ExportGenesis exports module state to the genesis target.
+ ExportGenesis(context.Context, GenesisTarget)

+error
+}

+/ GenesisSource is a source for genesis data in JSON format. It may abstract over a
+/ single JSON object or separate files for each field in a JSON object that can
+/ be streamed over. Modules should open a separate io.ReadCloser for each field that
+/ is required. When fields represent arrays they can efficiently be streamed
+/ over. If there is no data for a field, this function should return nil, nil. It is
+/ important that the caller closes the reader when done with it.
+type GenesisSource = func(field string) (io.ReadCloser, error)

+/ GenesisTarget is a target for writing genesis data in JSON format. It may
+/ abstract over a single JSON object or JSON in separate files that can be
+/ streamed over. Modules should open a separate io.WriteCloser for each field
+/ and should prefer writing fields as arrays when possible to support efficient
+/ iteration. It is important that the caller closes the writer AND checks the error
+/ when done with it. It is expected that a stream of JSON data is written
+/ to the writer.
+type GenesisTarget = func(field string) (io.WriteCloser, error)
+```

+### `AppModule`

+The `AppModule` interface defines a module. 
Modules can declare their functionalities by implementing extension interfaces. +`AppModule`s are managed by the [module manager](#manager), which checks which extension interfaces are implemented by the module. + +#### `appmodule.AppModule` + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. 
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} +``` + +#### `module.AppModule` + + + Previously the `module.AppModule` interface was containing all the methods + that are defined in the extensions interfaces. This was leading to much + boilerplate for modules that did not need all the functionalities. + + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. 
This separation is necessary; however, we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +### `HasInvariants` + +This interface defines one method, `RegisterInvariants`, which allows a module to register its invariants. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ via NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking changes of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module's genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object.
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `RegisterInvariants(sdk.InvariantRegistry)`: Registers the [`invariants`](/docs/sdk/next/documentation/module-system/invariants) of the module. 
If an invariant deviates from its predicted value, the [`InvariantRegistry`](/docs/sdk/next/documentation/module-system/invariants#registry) triggers appropriate logic (most often the chain will be halted). + +### `HasServices` + +This interface defines one method. It allows checking if a module can register services. + +#### `appmodule.HasService` + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. 
+type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. +type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} +``` + +#### `module.HasServices` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. 
This separation is necessary; however, we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module's genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist
+/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default
+/ / run InitGenesis on foo.
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+/ / consensus version:
+/ fromVM["foo"] = foo.AppModule{
+}.ConsensusVersion()
+/
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
+func (m Manager)
+
+RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
+ c, ok := cfg.(*configurator)
+ if !ok {
+ return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{
+}, cfg)
+}
+ modules := m.OrderMigrations
+ if modules == nil {
+ modules = DefaultMigrationsOrder(m.ModuleNames())
+}
+ sdkCtx := sdk.UnwrapSDKContext(ctx)
+ updatedVM := VersionMap{
+}
+ for _, moduleName := range modules {
+ module := m.Modules[moduleName]
+ fromVersion, exists := fromVM[moduleName]
+ toVersion := uint64(0)
+ if module, ok := module.(HasConsensusVersion); ok {
+ toVersion = module.ConsensusVersion()
+}
+
+ / We run migration if the module is specified in `fromVM`.
+ / Otherwise we run InitGenesis.
+ /
+ / The module won't exist in the fromVM in two cases:
+ / 1. A new module is added. In this case we run InitGenesis with an
+ / empty genesis state.
+ / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
+ / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `RegisterServices(Configurator)`: Allows a module to register services. + +### `HasConsensusVersion` + +This interface defines one method for checking a module consensus version. 
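Implementing the interface amounts to returning a single number. The sketch below is illustrative only: the standalone `HasConsensusVersion` definition is trimmed down from the SDK's, and `fooModule` is a hypothetical module, not SDK code.

```go
package main

import "fmt"

// HasConsensusVersion, reduced to its single method for illustration.
type HasConsensusVersion interface {
	ConsensusVersion() uint64
}

// fooModule is a hypothetical module used only for this example.
type fooModule struct{}

// ConsensusVersion returns the module's consensus version. Start at 1 and
// increment it on every state-breaking (consensus-breaking) change.
func (fooModule) ConsensusVersion() uint64 { return 2 }

func main() {
	var m HasConsensusVersion = fooModule{}
	fmt.Println(m.ConsensusVersion()) // prints 2
}
```

The value returned here is what `RunMigrations` compares against the module's entry in `fromVM` to decide which in-place store migrations to run.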
+ +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+
+Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) has been
+separated out from full module functionality (AppModule) so that modules which
+are only used for genesis can take advantage of the Module patterns without
+needlessly defining many placeholder functions.
+*/
+package module
+
+import (
+
+    "context"
+    "encoding/json"
+    "errors"
+    "fmt"
+    "maps"
+    "slices"
+    "sort"
+
+    abci "github.com/cometbft/cometbft/abci/types"
+    "github.com/grpc-ecosystem/grpc-gateway/runtime"
+    "github.com/spf13/cobra"
+    "cosmossdk.io/core/appmodule"
+    "cosmossdk.io/core/genesis"
+    errorsmod "cosmossdk.io/errors"
+    storetypes "cosmossdk.io/store/types"
+    "github.com/cosmos/cosmos-sdk/client"
+    "github.com/cosmos/cosmos-sdk/codec"
+    "github.com/cosmos/cosmos-sdk/codec/types"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+    sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+/ AppModuleBasic is the standard form for basic non-dependent elements of an application module.
+type AppModuleBasic interface {
+ HasName
+ RegisterLegacyAminoCodec(*codec.LegacyAmino)
+
+RegisterInterfaces(types.InterfaceRegistry)
+
+RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)
+}
+
+/ HasName allows the module to provide its own name for legacy purposes.
+/ Newer apps should specify the name for their modules using a map
+/ using NewManagerFromMap.
+type HasName interface {
+ Name()
+
+string
+}
+
+/ HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface {
+ HasGenesisBasics
+ InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
+ ExportGenesis(sdk.Context, codec.JSONCodec)
+
+json.RawMessage
+}
+
+/ AppModule is the form for an application module. Most of
+/ its functionality has been moved to extension interfaces.
+/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead.
+type AppModule interface {
+ appmodule.AppModule
+
+ AppModuleBasic
+}
+
+/ HasInvariants is the interface for registering invariants.
+/
+/ Deprecated: this will be removed in the next Cosmos SDK release.
+type HasInvariants interface {
+ / RegisterInvariants registers module invariants.
+ RegisterInvariants(sdk.InvariantRegistry)
+}
+
+/ HasServices is the interface for modules to register services.
+type HasServices interface {
+ / RegisterServices allows a module to register services.
+ RegisterServices(Configurator)
+}
+
+/ HasConsensusVersion is the interface for declaring a module consensus version.
+type HasConsensusVersion interface {
+ / ConsensusVersion is a sequence number for state-breaking change of the
+ / module. It should be incremented on each consensus-breaking change
+ / introduced by the module. To avoid wrong/empty versions, the initial version
+ / should be set to 1.
+ ConsensusVersion()
+
+uint64
+}
+
+/ HasABCIEndblock is a released typo of HasABCIEndBlock.
+/ Deprecated: use HasABCIEndBlock instead.
+type HasABCIEndblock HasABCIEndBlock
+
+/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) defined by
+/ the `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function
+/ with foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist
+/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default
+/ / run InitGenesis on foo.
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+/ / consensus version:
+/ fromVM["foo"] = foo.AppModule{
+}.ConsensusVersion()
+/
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
+func (m Manager)
+
+RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
+ c, ok := cfg.(*configurator)
+ if !ok {
+ return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{
+}, cfg)
+}
+ modules := m.OrderMigrations
+ if modules == nil {
+ modules = DefaultMigrationsOrder(m.ModuleNames())
+}
+ sdkCtx := sdk.UnwrapSDKContext(ctx)
+ updatedVM := VersionMap{
+}
+ for _, moduleName := range modules {
+ module := m.Modules[moduleName]
+ fromVersion, exists := fromVM[moduleName]
+ toVersion := uint64(0)
+ if module, ok := module.(HasConsensusVersion); ok {
+ toVersion = module.ConsensusVersion()
+}
+
+ / We run migration if the module is specified in `fromVM`.
+ / Otherwise we run InitGenesis.
+ /
+ / The module won't exist in the fromVM in two cases:
+ / 1. A new module is added. In this case we run InitGenesis with an
+ / empty genesis state.
+ / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
+ / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `ConsensusVersion() uint64`: Returns the consensus version of the module. + +### `HasPreBlocker` + +The `HasPreBlocker` is an extension interface from `appmodule.AppModule`. All modules that have a `PreBlock` method implement this interface.
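To make the dispatch concrete, here is a self-contained sketch of how a manager detects and runs `PreBlock` hooks via interface assertions. The types below (`appModule`, `upgradeModule`, `bankModule`, `runPreBlock`) are simplified stand-ins invented for illustration; the real SDK hook takes a context and returns a `ResponsePreBlock` rather than a bare bool:

```go
package main

import "fmt"

// Simplified stand-ins for the SDK interfaces; self-contained sketch only.
type appModule interface{ isAppModule() }

type hasPreBlocker interface {
	appModule
	// PreBlock reports whether consensus params changed.
	PreBlock() (bool, error)
}

type baseModule struct{}

func (baseModule) isAppModule() {}

// upgradeModule implements the hook and flips consensus params.
type upgradeModule struct{ baseModule }

func (upgradeModule) PreBlock() (bool, error) { return true, nil }

// bankModule implements no PreBlock hook; it is skipped by the assertion.
type bankModule struct{ baseModule }

// runPreBlock mirrors Manager.PreBlock: modules without the hook are
// skipped via a type assertion, and ConsensusParamsChanged flags are OR-ed.
func runPreBlock(order []string, mods map[string]appModule) (bool, error) {
	changed := false
	for _, name := range order {
		if m, ok := mods[name].(hasPreBlocker); ok {
			c, err := m.PreBlock()
			if err != nil {
				return false, err
			}
			if c {
				changed = true
			}
		}
	}
	return changed, nil
}

func main() {
	mods := map[string]appModule{"upgrade": upgradeModule{}, "bank": bankModule{}}
	changed, err := runPreBlock([]string{"upgrade", "bank"}, mods)
	fmt.Println(changed, err) // true <nil>
}
```

As in the real manager, the aggregated flag tells the caller whether it must refresh consensus parameters before proceeding.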
+ +### `HasBeginBlocker` + +The `HasBeginBlocker` is an extension interface from `appmodule.AppModule`. All modules that have a `BeginBlock` method implement this interface. + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit.
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ ResponsePreBlock represents the response from the PreBlock method. +/ It can modify consensus parameters in storage and signal the caller through the return value. +/ When it returns ConsensusParamsChanged=true, the caller must refresh the consensus parameter in the finalize context. +/ The new context (ctx) + +must be passed to all the other lifecycle methods. +type ResponsePreBlock interface { + IsConsensusParamsChanged() + +bool +} + +/ HasPreBlocker is the extension interface that modules should implement to run +/ custom logic before BeginBlock. +type HasPreBlocker interface { + AppModule + / PreBlock is method that will be run before BeginBlock. + PreBlock(context.Context) (ResponsePreBlock, error) +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} + +/ UpgradeModule is the extension interface that upgrade module should implement to differentiate +/ it from other modules, migration handler need ensure the upgrade module's migration is executed +/ before the rest of the modules. +type UpgradeModule interface { + IsUpgradeModule() +} +``` + +- `BeginBlock(context.Context) error`: This method gives module developers the option to implement logic that is automatically triggered at the beginning of each block. 
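The begin-block hooks run in the explicit `OrderBeginBlockers` sequence, never in Go's randomized map iteration order. A minimal sketch of that ordering guarantee, with hypothetical `mintModule` and `distrModule` stand-ins rather than the real SDK modules:

```go
package main

import "fmt"

// Stand-in for the SDK's HasBeginBlocker; self-contained sketch only.
type hasBeginBlocker interface{ BeginBlock() error }

// mintModule and distrModule record the order in which they were invoked.
type mintModule struct{ log *[]string }

func (m mintModule) BeginBlock() error { *m.log = append(*m.log, "mint"); return nil }

type distrModule struct{ log *[]string }

func (m distrModule) BeginBlock() error { *m.log = append(*m.log, "distr"); return nil }

// runBeginBlock mirrors Manager.BeginBlock: it walks the explicit order
// slice (not the module map, whose iteration order is random) and invokes
// the hook wherever it is implemented.
func runBeginBlock(order []string, mods map[string]any) error {
	for _, name := range order {
		if m, ok := mods[name].(hasBeginBlocker); ok {
			if err := m.BeginBlock(); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	var log []string
	mods := map[string]any{
		"distr": distrModule{&log},
		"mint":  mintModule{&log},
	}
	// mint is ordered before distr, regardless of map layout.
	_ = runBeginBlock([]string{"mint", "distr"}, mods)
	fmt.Println(log) // [mint distr]
}
```

This is why apps call `SetOrderBeginBlockers` explicitly: inter-module dependencies (e.g. minting before distribution) must hold deterministically on every node.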
+ +### `HasEndBlocker` + +The `HasEndBlocker` is an extension interface from `appmodule.AppModule`. All modules that have an `EndBlock` method implement this interface. If a module needs to return validator set updates (e.g. staking), it can use `HasABCIEndBlock`. + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit.
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ ResponsePreBlock represents the response from the PreBlock method. +/ It can modify consensus parameters in storage and signal the caller through the return value. +/ When it returns ConsensusParamsChanged=true, the caller must refresh the consensus parameter in the finalize context. +/ The new context (ctx) + +must be passed to all the other lifecycle methods. +type ResponsePreBlock interface { + IsConsensusParamsChanged() + +bool +} + +/ HasPreBlocker is the extension interface that modules should implement to run +/ custom logic before BeginBlock. +type HasPreBlocker interface { + AppModule + / PreBlock is method that will be run before BeginBlock. + PreBlock(context.Context) (ResponsePreBlock, error) +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} + +/ UpgradeModule is the extension interface that upgrade module should implement to differentiate +/ it from other modules, migration handler need ensure the upgrade module's migration is executed +/ before the rest of the modules. +type UpgradeModule interface { + IsUpgradeModule() +} +``` + +- `EndBlock(context.Context) error`: This method gives module developers the option to implement logic that is automatically triggered at the end of each block. 
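The `Manager.EndBlock` code shown earlier enforces that at most one module per block may return a non-empty validator update set. A self-contained sketch of that exclusivity rule, using hypothetical `stakingModule` and `rogueModule` stand-ins and a simplified `valUpdate` type in place of `abci.ValidatorUpdate`:

```go
package main

import (
	"errors"
	"fmt"
)

type valUpdate struct{ power int64 }

// Stand-in for the SDK's HasABCIEndBlock; self-contained sketch only.
type hasABCIEndBlock interface{ EndBlock() ([]valUpdate, error) }

// stakingModule legitimately returns validator updates.
type stakingModule struct{}

func (stakingModule) EndBlock() ([]valUpdate, error) { return []valUpdate{{power: 10}}, nil }

// rogueModule also tries to update the validator set, which must be rejected.
type rogueModule struct{}

func (rogueModule) EndBlock() ([]valUpdate, error) { return []valUpdate{{power: 1}}, nil }

// runEndBlock mirrors Manager.EndBlock's rule: at most one module per
// block may return a non-empty validator update set.
func runEndBlock(order []string, mods map[string]any) ([]valUpdate, error) {
	var updates []valUpdate
	for _, name := range order {
		m, ok := mods[name].(hasABCIEndBlock)
		if !ok {
			continue
		}
		vu, err := m.EndBlock()
		if err != nil {
			return nil, err
		}
		if len(vu) > 0 {
			if len(updates) > 0 {
				return nil, errors.New("validator EndBlock updates already set by a previous module")
			}
			updates = vu
		}
	}
	return updates, nil
}

func main() {
	mods := map[string]any{"staking": stakingModule{}, "rogue": rogueModule{}}
	_, err := runEndBlock([]string{"staking", "rogue"}, mods)
	fmt.Println(err)
}
```

The same one-writer rule applies at `InitGenesis`, which is why adding a second validator-updating module fails loudly rather than silently merging update sets.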
+ +### `HasABCIEndBlock` + +The `HasABCIEndBlock` is an extension interface from `module.AppModule`. All modules whose `EndBlock` method returns validator set updates implement this interface. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager.
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaes interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `EndBlock(context.Context) ([]abci.ValidatorUpdate, error)`: This method gives module developers the option to inform the underlying consensus engine of validator set changes (e.g. the `staking` module). 
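
The rule enforced in both `InitGenesis` and `EndBlock` above — that at most one module per block may inject validator updates — can be illustrated in isolation. The following is a minimal, self-contained sketch; `ValidatorUpdate` and `aggregateEndBlock` are stand-ins invented here for illustration, not the real CometBFT or SDK types:

```go
package main

import (
	"errors"
	"fmt"
)

// ValidatorUpdate is a stand-in for CometBFT's abci.ValidatorUpdate.
type ValidatorUpdate struct {
	PubKey string
	Power  int64
}

// aggregateEndBlock mirrors the Manager.EndBlock aggregation rule:
// modules are visited in order, and validator updates are accepted
// from at most one of them; a second module returning updates is an error.
func aggregateEndBlock(moduleUpdates [][]ValidatorUpdate) ([]ValidatorUpdate, error) {
	var validatorUpdates []ValidatorUpdate
	for _, updates := range moduleUpdates {
		if len(updates) == 0 {
			continue // this module returned no updates; keep iterating
		}
		if len(validatorUpdates) > 0 {
			return nil, errors.New("validator EndBlock updates already set by a previous module")
		}
		validatorUpdates = append(validatorUpdates, updates...)
	}
	return validatorUpdates, nil
}

func main() {
	// Only one module (e.g. a hypothetical staking module) returns updates: accepted.
	updates, err := aggregateEndBlock([][]ValidatorUpdate{nil, {{PubKey: "val1", Power: 10}}, nil})
	fmt.Println(len(updates), err)

	// Two modules both return updates: rejected.
	_, err = aggregateEndBlock([][]ValidatorUpdate{{{PubKey: "a", Power: 1}}, {{PubKey: "b", Power: 2}}})
	fmt.Println(err)
}
```

This is why a chain should route validator-set changes through a single module (typically `staking`), while any number of modules may implement `EndBlock` for other end-of-block logic.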
+ +### `HasPrecommit` + +`HasPrecommit` is an extension interface from `appmodule.AppModule`. All modules that have a `Precommit` method implement this interface. + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. 
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ ResponsePreBlock represents the response from the PreBlock method. +/ It can modify consensus parameters in storage and signal the caller through the return value. +/ When it returns ConsensusParamsChanged=true, the caller must refresh the consensus parameter in the finalize context. +/ The new context (ctx) + +must be passed to all the other lifecycle methods. +type ResponsePreBlock interface { + IsConsensusParamsChanged() + +bool +} + +/ HasPreBlocker is the extension interface that modules should implement to run +/ custom logic before BeginBlock. +type HasPreBlocker interface { + AppModule + / PreBlock is method that will be run before BeginBlock. + PreBlock(context.Context) (ResponsePreBlock, error) +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} + +/ UpgradeModule is the extension interface that upgrade module should implement to differentiate +/ it from other modules, migration handler need ensure the upgrade module's migration is executed +/ before the rest of the modules. 
+type UpgradeModule interface { + IsUpgradeModule() +} +``` + +- `Precommit(context.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](/docs/sdk/next/learn/advanced/00-baseapp#commit) of each block using the [`finalizeblockstate`](/docs/sdk/next/learn/advanced/00-baseapp#state-updates) of the block to be committed. Implement it as a no-op if no logic needs to be triggered during `Commit` of each block for this module. + +### `HasPrepareCheckState` + +`HasPrepareCheckState` is an extension interface from `appmodule.AppModule`. All modules that have a `PrepareCheckState` method implement this interface. + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options.
You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. +type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ ResponsePreBlock represents the response from the PreBlock method. +/ It can modify consensus parameters in storage and signal the caller through the return value. +/ When it returns ConsensusParamsChanged=true, the caller must refresh the consensus parameter in the finalize context. +/ The new context (ctx) + +must be passed to all the other lifecycle methods. +type ResponsePreBlock interface { + IsConsensusParamsChanged() + +bool +} + +/ HasPreBlocker is the extension interface that modules should implement to run +/ custom logic before BeginBlock. +type HasPreBlocker interface { + AppModule + / PreBlock is method that will be run before BeginBlock. + PreBlock(context.Context) (ResponsePreBlock, error) +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. 
+ EndBlock(context.Context) + +error +} + +/ UpgradeModule is the extension interface that upgrade module should implement to differentiate +/ it from other modules, migration handler need ensure the upgrade module's migration is executed +/ before the rest of the modules. +type UpgradeModule interface { + IsUpgradeModule() +} +``` + +- `PrepareCheckState(context.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](/docs/sdk/next/learn/advanced/00-baseapp#commit) of each block using the [`checkState`](/docs/sdk/next/learn/advanced/00-baseapp#state-updates) of the next block. Implement it as a no-op if no logic needs to be triggered during `Commit` of each block for this module. + +### Implementing the Application Module Interfaces + +Typically, the various application module interfaces are implemented in a file called `module.go`, located in the module's folder (e.g. `./x/module/module.go`). + +Almost every module needs to implement the `AppModuleBasic` and `AppModule` interfaces. If the module is only used for genesis, it will implement `AppModuleGenesis` instead of `AppModule`. The concrete type that implements the interface can add parameters that are required for the implementation of the various methods of the interface. For example, the `Route()` function often calls a `NewMsgServerImpl(k keeper)` function defined in `keeper/msg_server.go` and therefore needs to pass the module's [`keeper`](/docs/sdk/next/documentation/module-system/keeper) as a parameter. + +```go +/ example +type AppModule struct { + AppModuleBasic + keeper Keeper +} +``` + +In the example above, you can see that the `AppModule` concrete type references an `AppModuleBasic`, and not an `AppModuleGenesis`. That is because `AppModuleGenesis` only needs to be implemented in modules that focus on genesis-related functionalities.
In most modules, the concrete `AppModule` type will have a reference to an `AppModuleBasic` and implement the two added methods of `AppModuleGenesis` directly in the `AppModule` type. + +If no parameter is required (which is often the case for `AppModuleBasic`), just declare an empty concrete type like so: + +```go +type AppModuleBasic struct{ +} +``` + +## Module Managers + +Module managers are used to manage collections of `AppModuleBasic` and `AppModule`. + +### `BasicManager` + +The `BasicManager` is a structure that lists all the `AppModuleBasic` of an application: + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly, the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of module with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
+
+defined by
+/ `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function
+/ with foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +It implements the following methods: + +- `NewBasicManager(modules ...AppModuleBasic)`: Constructor function. It takes a list of the application's `AppModuleBasic` and builds a new `BasicManager`. 
This function is generally called in the `init()` function of [`app.go`](/docs/sdk/next/documentation/application-framework/app-anatomy#core-application-file) to quickly initialize the independent elements of the application's modules (click [here](https://github.com/cosmos/gaia/blob/main/app/app.go#L59-L74) to see an example).
+- `NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic)`: Constructor function. It creates a new `BasicManager` from a `Manager`. The `BasicManager` will contain all `AppModuleBasic` from the `AppModule` manager using `CoreAppModuleBasicAdaptor` whenever possible. A module's `AppModuleBasic` can be overridden by passing a custom `AppModuleBasic` map.
+- `RegisterLegacyAminoCodec(cdc *codec.LegacyAmino)`: Registers the [`codec.LegacyAmino`s](/docs/sdk/next/documentation/protocol-development/encoding#amino) of each of the application's `AppModuleBasic`. This function is usually called early on in the [application's construction](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor).
+- `RegisterInterfaces(registry codectypes.InterfaceRegistry)`: Registers interface types and implementations of each of the application's `AppModuleBasic`.
+- `DefaultGenesis(cdc codec.JSONCodec)`: Provides default genesis information for modules in the application by calling the [`DefaultGenesis(cdc codec.JSONCodec)`](/docs/sdk/next/documentation/module-system/genesis#defaultgenesis) function of each module. It only calls the modules that implement the `HasGenesisBasics` interface.
+- `ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage)`: Validates the genesis information of modules by calling the [`ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`](/docs/sdk/next/documentation/module-system/genesis#validategenesis) function of modules implementing the `HasGenesisBasics` interface. 
+- `RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux)`: Registers gRPC routes for modules.
+- `AddTxCommands(rootTxCmd *cobra.Command)`: Adds modules' transaction commands (defined as `GetTxCmd() *cobra.Command`) to the application's [`rootTxCommand`](/docs/sdk/next/api-reference/client-tools/cli#transaction-commands). This function is usually called from the `main()` function of the [application's command-line interface](/docs/sdk/next/api-reference/client-tools/cli).
+- `AddQueryCommands(rootQueryCmd *cobra.Command)`: Adds modules' query commands (defined as `GetQueryCmd() *cobra.Command`) to the application's [`rootQueryCommand`](/docs/sdk/next/api-reference/client-tools/cli#query-commands). This function is usually called from the `main()` function of the [application's command-line interface](/docs/sdk/next/api-reference/client-tools/cli).
+
+### `Manager`
+
+The `Manager` is a structure that holds all the `AppModule`s of an application and defines the order of execution between several key components of these modules:
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+ - independent module functionality (AppModuleBasic)
+ - inter-dependent module simulation functionality (AppModuleSimulation)
+ - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface. 
+ +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. 
+type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc 
codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface {
+ HasGenesisBasics
+ InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
+ ExportGenesis(sdk.Context, codec.JSONCodec)
+
+json.RawMessage
+}
+
+/ AppModule is the form for an application module. Most of
+/ its functionality has been moved to extension interfaces.
+/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead.
+type AppModule interface {
+ appmodule.AppModule
+
+ AppModuleBasic
+}
+
+/ HasInvariants is the interface for registering invariants.
+/
+/ Deprecated: this will be removed in the next Cosmos SDK release.
+type HasInvariants interface {
+ / RegisterInvariants registers module invariants.
+ RegisterInvariants(sdk.InvariantRegistry)
+}
+
+/ HasServices is the interface for modules to register services.
+type HasServices interface {
+ / RegisterServices allows a module to register services.
+ RegisterServices(Configurator)
+}
+
+/ HasConsensusVersion is the interface for declaring a module consensus version.
+type HasConsensusVersion interface {
+ / ConsensusVersion is a sequence number for state-breaking change of the
+ / module. It should be incremented on each consensus-breaking change
+ / introduced by the module. To avoid wrong/empty versions, the initial version
+ / should be set to 1.
+ ConsensusVersion()
+
+uint64
+}
+
+/ HasABCIEndblock is a released typo of HasABCIEndBlock.
+/ Deprecated: use HasABCIEndBlock instead.
+type HasABCIEndblock HasABCIEndBlock
+
+/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
+// function MUST be called inside an x/upgrade UpgradeHandler.
+//
+// Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+// x/upgrade's store, and the function needs to return the target VersionMap
+// that will in turn be persisted to the x/upgrade's store. In general,
+// returning RunMigrations should be enough:
+//
+// Example:
+//
+//	cfg := module.NewConfigurator(...)
+//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
+//	})
+//
+// Internally, RunMigrations will perform the following steps:
+//   - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+//   - make a diff of `fromVM` and `updatedVM`, and for each module:
+//     - if the module's `fromVM` version is less than its `updatedVM` version,
+//       then run in-place store migrations for that module between those versions.
+//     - if the module does not exist in the `fromVM` (which means that it's a new module,
+//       because it was not in the previous x/upgrade's store), then run
+//       `InitGenesis` on that module.
+//   - return the `updatedVM` to be persisted in the x/upgrade's store.
+//
+// Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) defined by
+// the `DefaultMigrationsOrder` function.
+//
+// As an app developer, if you wish to skip running InitGenesis for your new
+// module "foo", you need to manually pass a `fromVM` argument to this function
+// with foo's module version set to its latest ConsensusVersion. That way, the diff
+// between the function's `fromVM` and `updatedVM` will be empty, hence not
+// running anything for foo.
+//
+// Example:
+//
+//	cfg := module.NewConfigurator(...)
+//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+//	    // Assume "foo" is a new module.
+//	    // `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist
+//	    // before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
+//	    // run InitGenesis on foo.
+//	    // To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+//	    // consensus version:
+//	    fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
+//
+//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
+//	})
+//
+// Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
+func (m Manager) RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
+	c, ok := cfg.(*configurator)
+	if !ok {
+		return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{}, cfg)
+	}
+
+	modules := m.OrderMigrations
+	if modules == nil {
+		modules = DefaultMigrationsOrder(m.ModuleNames())
+	}
+
+	sdkCtx := sdk.UnwrapSDKContext(ctx)
+	updatedVM := VersionMap{}
+	for _, moduleName := range modules {
+		module := m.Modules[moduleName]
+		fromVersion, exists := fromVM[moduleName]
+		toVersion := uint64(0)
+		if module, ok := module.(HasConsensusVersion); ok {
+			toVersion = module.ConsensusVersion()
+		}
+
+		// We run migration if the module is specified in `fromVM`.
+		// Otherwise we run InitGenesis.
+		//
+		// The module won't exist in the fromVM in two cases:
+		// 1. A new module is added. In this case we run InitGenesis with an
+		// empty genesis state.
+		// 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
+		// In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) Precommit(ctx sdk.Context) error {
+	for _, moduleName := range m.OrderPrecommiters {
+		module, ok := m.Modules[moduleName].(appmodule.HasPrecommit)
+		if !ok {
+			continue
+		}
+		if err := module.Precommit(ctx); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// PrepareCheckState performs functionality for preparing the check state for all modules.
+func (m *Manager) PrepareCheckState(ctx sdk.Context) error {
+	for _, moduleName := range m.OrderPrepareCheckStaters {
+		module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState)
+		if !ok {
+			continue
+		}
+		if err := module.PrepareCheckState(ctx); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// GetVersionMap gets consensus version from all modules
+func (m *Manager) GetVersionMap() VersionMap {
+	vermap := make(VersionMap)
+	for name, v := range m.Modules {
+		version := uint64(0)
+		if v, ok := v.(HasConsensusVersion); ok {
+			version = v.ConsensusVersion()
+		}
+		vermap[name] = version
+	}
+
+	return vermap
+}
+
+// ModuleNames returns list of all module names, without any particular order.
+func (m *Manager) ModuleNames() []string {
+	return slices.Collect(maps.Keys(m.Modules))
+}
+
+// DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+// except x/auth which will run last, see:
+// https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+	const authName = "auth"
+	out := make([]string, 0, len(modules))
+	hasAuth := false
+	for _, m := range modules {
+		if m == authName {
+			hasAuth = true
+		} else {
+			out = append(out, m)
+		}
+	}
+	sort.Strings(out)
+	if hasAuth {
+		out = append(out, authName)
+	}
+
+	return out
+}
+```
+
+The module manager is used throughout the application whenever an action on a collection of modules is required. It implements the following methods:
+
+- `NewManager(modules ...AppModule)`: Constructor function.
It takes a list of the application's `AppModule`s and builds a new `Manager`. It is generally called from the application's main [constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderInitGenesis(moduleNames ...string)`: Sets the order in which the [`InitGenesis`](/docs/sdk/next/documentation/module-system/genesis#initgenesis) function of each module will be called when the application is first started. This function is generally called from the application's main [constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).
+  To initialize modules successfully, module dependencies must be considered. For example, the `genutil` module must run after the `staking` module so that the pools are properly initialized with tokens from genesis accounts; `genutil` must also run after `auth` so that it can access the parameters from `auth`; and IBC's `capability` module should be initialized before all other modules so that it can initialize any capabilities.
+- `SetOrderExportGenesis(moduleNames ...string)`: Sets the order in which the [`ExportGenesis`](/docs/sdk/next/documentation/module-system/genesis#exportgenesis) function of each module will be called in case of an export. This function is generally called from the application's main [constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderPreBlockers(moduleNames ...string)`: Sets the order in which the `PreBlock()` function of each module will be called before `BeginBlock()` of all modules. This function is generally called from the application's main [constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderBeginBlockers(moduleNames ...string)`: Sets the order in which the `BeginBlock()` function of each module will be called at the beginning of each block.
This function is generally called from the application's main [constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderEndBlockers(moduleNames ...string)`: Sets the order in which the `EndBlock()` function of each module will be called at the end of each block. This function is generally called from the application's main [constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderPrecommiters(moduleNames ...string)`: Sets the order in which the `Precommit()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderPrepareCheckStaters(moduleNames ...string)`: Sets the order in which the `PrepareCheckState()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderMigrations(moduleNames ...string)`: Sets the order in which migrations are run. If not set, migrations are run in the order defined in `DefaultMigrationsOrder`.
+- `RegisterInvariants(ir sdk.InvariantRegistry)`: Registers the [invariants](/docs/sdk/next/documentation/module-system/invariants) of modules implementing the `HasInvariants` interface.
+- `RegisterServices(cfg Configurator)`: Registers the services of modules implementing the `HasServices` interface.
+- `InitGenesis(ctx context.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage)`: Calls the [`InitGenesis`](/docs/sdk/next/documentation/module-system/genesis#initgenesis) function of each module when the application is first started, in the order defined in `OrderInitGenesis`.
Returns an `abci.InitChainResponse` to the underlying consensus engine, which can contain validator updates.
+- `ExportGenesis(ctx context.Context, cdc codec.JSONCodec)`: Calls the [`ExportGenesis`](/docs/sdk/next/documentation/module-system/genesis#exportgenesis) function of each module, in the order defined in `OrderExportGenesis`. The export constructs a genesis file from a previously existing state, and is mainly used when a hard-fork upgrade of the chain is required.
+- `ExportGenesisForModules(ctx context.Context, cdc codec.JSONCodec, modulesToExport []string)`: Behaves the same as `ExportGenesis`, except it takes a list of modules to export.
+- `BeginBlock(ctx context.Context) error`: At the beginning of each block, this function is called from [`BaseApp`](/docs/sdk/next/documentation/application-framework/baseapp#beginblock) and, in turn, calls the [`BeginBlock`](/docs/sdk/next/documentation/module-system/beginblock-endblock) function of each module implementing the `appmodule.HasBeginBlocker` interface, in the order defined in `OrderBeginBlockers`. It creates a child [context](/docs/sdk/next/documentation/application-framework/context) with an event manager to aggregate [events](/docs/sdk/next/api-reference/events-streaming/events) emitted from each module.
+- `EndBlock(ctx context.Context) error`: At the end of each block, this function is called from [`BaseApp`](/docs/sdk/next/documentation/application-framework/baseapp#endblock) and, in turn, calls the [`EndBlock`](/docs/sdk/next/documentation/module-system/beginblock-endblock) function of each module implementing the `appmodule.HasEndBlocker` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](/docs/sdk/next/documentation/application-framework/context) with an event manager to aggregate [events](/docs/sdk/next/api-reference/events-streaming/events) emitted from all modules.
The function returns an ABCI response which contains the aforementioned events, as well as validator set updates (if any).
+- `EndBlock(context.Context) ([]abci.ValidatorUpdate, error)`: At the end of each block, this function is called from [`BaseApp`](/docs/sdk/next/documentation/application-framework/baseapp#endblock) and, in turn, calls the [`EndBlock`](/docs/sdk/next/documentation/module-system/beginblock-endblock) function of each module implementing the `module.HasABCIEndBlock` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](/docs/sdk/next/documentation/application-framework/context) with an event manager to aggregate [events](/docs/sdk/next/api-reference/events-streaming/events) emitted from all modules. The function returns an ABCI response which contains the aforementioned events, as well as validator set updates (if any).
+- `Precommit(ctx context.Context)`: During [`Commit`](/docs/sdk/next/documentation/application-framework/baseapp#commit), this function is called from `BaseApp` immediately before the [`deliverState`](/docs/sdk/next/documentation/application-framework/baseapp#state-updates) is written to the underlying [`rootMultiStore`](/docs/sdk/next/documentation/state-storage/store#commitmultistore) and, in turn, calls the `Precommit` function of each module implementing the `HasPrecommit` interface, in the order defined in `OrderPrecommiters`. It creates a child [context](/docs/sdk/next/documentation/application-framework/context) where the underlying `CacheMultiStore` is that of the newly committed block's [`finalizeblockstate`](/docs/sdk/next/documentation/application-framework/baseapp#state-updates).
+- `PrepareCheckState(ctx context.Context)`: During [`Commit`](/docs/sdk/next/documentation/application-framework/baseapp#commit), this function is called from `BaseApp` immediately after the [`deliverState`](/docs/sdk/next/documentation/application-framework/baseapp#state-updates) is written to the underlying [`rootMultiStore`](/docs/sdk/next/documentation/state-storage/store#commitmultistore) and, in turn calls the `PrepareCheckState` function of each module implementing the `HasPrepareCheckState` interface, in the order defined in `OrderPrepareCheckStaters`. It creates a child [context](/docs/sdk/next/documentation/application-framework/context) where the underlying `CacheMultiStore` is that of the next block's [`checkState`](/docs/sdk/next/documentation/application-framework/baseapp#state-updates). Writes to this state will be present in the [`checkState`](/docs/sdk/next/documentation/application-framework/baseapp#state-updates) of the next block, and therefore this method can be used to prepare the `checkState` for the next block. 
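
To make the migration ordering concrete, here is a small standalone sketch of the behavior documented for `SetOrderMigrations` and `DefaultMigrationsOrder` (a simplified, self-contained re-implementation for illustration, not the SDK source itself): module names are sorted alphabetically, except `auth`, which always runs last.

```go
package main

import (
	"fmt"
	"sort"
)

// defaultMigrationsOrder sorts module names ascending alphabetically,
// except "auth", which (when present) is moved to the end of the list.
func defaultMigrationsOrder(modules []string) []string {
	const authName = "auth"
	out := make([]string, 0, len(modules))
	hasAuth := false
	for _, m := range modules {
		if m == authName {
			hasAuth = true
		} else {
			out = append(out, m)
		}
	}
	sort.Strings(out)
	if hasAuth {
		out = append(out, authName)
	}
	return out
}

func main() {
	// "auth" runs last even though it sorts first alphabetically.
	fmt.Println(defaultMigrationsOrder([]string{"staking", "auth", "bank", "gov"}))
}
```

Running the sketch prints `[bank gov staking auth]`: alphabetical order with `auth` forced last, which is the order migrations would run in when `Manager.OrderMigrations` is left unset.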
+
+Here's an example of a concrete integration within `SimApp`:
+
+```go expandable
+//go:build app_v1
+
+package simapp
+
+import (
+	"encoding/json"
+	"fmt"
+	"io"
+	"maps"
+
+	abci "github.com/cometbft/cometbft/abci/types"
+	dbm "github.com/cosmos/cosmos-db"
+	"github.com/cosmos/gogoproto/proto"
+	"github.com/spf13/cast"
+
+	autocliv1 "cosmossdk.io/api/cosmos/autocli/v1"
+	reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1"
+	"cosmossdk.io/client/v2/autocli"
+	clienthelpers "cosmossdk.io/client/v2/helpers"
+	"cosmossdk.io/core/appmodule"
+	"cosmossdk.io/log"
+	storetypes "cosmossdk.io/store/types"
+	"cosmossdk.io/x/circuit"
+	circuitkeeper "cosmossdk.io/x/circuit/keeper"
+	circuittypes "cosmossdk.io/x/circuit/types"
+	"cosmossdk.io/x/evidence"
+	evidencekeeper "cosmossdk.io/x/evidence/keeper"
+	evidencetypes "cosmossdk.io/x/evidence/types"
+	"cosmossdk.io/x/feegrant"
+	feegrantkeeper "cosmossdk.io/x/feegrant/keeper"
+	feegrantmodule "cosmossdk.io/x/feegrant/module"
+	"cosmossdk.io/x/nft"
+	nftkeeper "cosmossdk.io/x/nft/keeper"
+	nftmodule "cosmossdk.io/x/nft/module"
+	"cosmossdk.io/x/tx/signing"
+	"cosmossdk.io/x/upgrade"
+	upgradekeeper "cosmossdk.io/x/upgrade/keeper"
+	upgradetypes "cosmossdk.io/x/upgrade/types"
+
+	"github.com/cosmos/cosmos-sdk/baseapp"
+	"github.com/cosmos/cosmos-sdk/client"
+	"github.com/cosmos/cosmos-sdk/client/flags"
+	"github.com/cosmos/cosmos-sdk/client/grpc/cmtservice"
+	nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node"
+	"github.com/cosmos/cosmos-sdk/codec"
+	"github.com/cosmos/cosmos-sdk/codec/address"
+	"github.com/cosmos/cosmos-sdk/codec/types"
+	"github.com/cosmos/cosmos-sdk/runtime"
+	runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services"
+	"github.com/cosmos/cosmos-sdk/server"
+	"github.com/cosmos/cosmos-sdk/server/api"
+	"github.com/cosmos/cosmos-sdk/server/config"
+	servertypes "github.com/cosmos/cosmos-sdk/server/types"
+	"github.com/cosmos/cosmos-sdk/std"
+	testdata_pulsar
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" 
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager, which is in charge of setting up basic, + / non-dependent module elements, such as codec registration and genesis verification. + / By default it is composed of all the modules from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required for apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if either of the two fails. + / + / The SDK exposes a default postHandlers chain. + / + / Please note that changing any of the anteHandler or postHandler chains is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +This is the same example from `runtime` (the package that powers depinject-based app wiring): + +```go expandable +package runtime + +import ( + + "fmt" + "os" + "slices" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoregistry" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/event" + "cosmossdk.io/core/genesis" + "cosmossdk.io/core/header" + "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/tx/signing" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type appModule struct { + app *App +} + +func (m appModule) + +RegisterServices(configurator module.Configurator) { + err := m.app.registerRuntimeServices(configurator) + if err != nil { + panic(err) +} +} + +func (m appModule) + +IsOnePerModuleType() { +} + +func (m appModule) + +IsAppModule() { +} + +var ( + _ appmodule.AppModule = appModule{ +} + _ module.HasServices = appModule{ +} +) + +/ BaseAppOption is a depinject.AutoGroupType which can be used 
to pass +/ BaseApp options into the depinject. It should be used carefully. +type BaseAppOption func(*baseapp.BaseApp) + +/ IsManyPerContainerType indicates that this is a depinject.ManyPerContainerType. +func (b BaseAppOption) + +IsManyPerContainerType() { +} + +func init() { + appmodule.Register(&runtimev1alpha1.Module{ +}, + appmodule.Provide( + ProvideApp, + ProvideInterfaceRegistry, + ProvideKVStoreKey, + ProvideTransientStoreKey, + ProvideMemoryStoreKey, + ProvideGenesisTxHandler, + ProvideKVStoreService, + ProvideMemoryStoreService, + ProvideTransientStoreService, + ProvideEventService, + ProvideHeaderInfoService, + ProvideCometInfoService, + ProvideBasicManager, + ProvideAddressCodec, + ), + appmodule.Invoke(SetupAppBuilder), + ) +} + +func ProvideApp(interfaceRegistry codectypes.InterfaceRegistry) ( + codec.Codec, + *codec.LegacyAmino, + *AppBuilder, + *baseapp.MsgServiceRouter, + *baseapp.GRPCQueryRouter, + appmodule.AppModule, + protodesc.Resolver, + protoregistry.MessageTypeResolver, + error, +) { + protoFiles := proto.HybridResolver + protoTypes := protoregistry.GlobalTypes + + / At startup, check that all proto annotations are correct. + if err := msgservice.ValidateProtoAnnotations(protoFiles); err != nil { + / Once we switch to using protoreflect-based ante handlers, we might + / want to panic here instead of logging a warning. 
+ _, _ = fmt.Fprintln(os.Stderr, err.Error()) +} + amino := codec.NewLegacyAmino() + +std.RegisterInterfaces(interfaceRegistry) + +std.RegisterLegacyAminoCodec(amino) + cdc := codec.NewProtoCodec(interfaceRegistry) + msgServiceRouter := baseapp.NewMsgServiceRouter() + grpcQueryRouter := baseapp.NewGRPCQueryRouter() + app := &App{ + storeKeys: nil, + interfaceRegistry: interfaceRegistry, + cdc: cdc, + amino: amino, + basicManager: module.BasicManager{ +}, + msgServiceRouter: msgServiceRouter, + grpcQueryRouter: grpcQueryRouter, +} + appBuilder := &AppBuilder{ + app +} + +return cdc, amino, appBuilder, msgServiceRouter, grpcQueryRouter, appModule{ + app +}, protoFiles, protoTypes, nil +} + +type AppInputs struct { + depinject.In + + AppConfig *appv1alpha1.Config `optional:"true"` + Config *runtimev1alpha1.Module + AppBuilder *AppBuilder + Modules map[string]appmodule.AppModule + CustomModuleBasics map[string]module.AppModuleBasic `optional:"true"` + BaseAppOptions []BaseAppOption + InterfaceRegistry codectypes.InterfaceRegistry + LegacyAmino *codec.LegacyAmino + Logger log.Logger +} + +func SetupAppBuilder(inputs AppInputs) { + app := inputs.AppBuilder.app + app.baseAppOptions = inputs.BaseAppOptions + app.config = inputs.Config + app.appConfig = inputs.AppConfig + app.logger = inputs.Logger + app.ModuleManager = module.NewManagerFromMap(inputs.Modules) + for name, mod := range inputs.Modules { + if customBasicMod, ok := inputs.CustomModuleBasics[name]; ok { + app.basicManager[name] = customBasicMod + customBasicMod.RegisterInterfaces(inputs.InterfaceRegistry) + +customBasicMod.RegisterLegacyAminoCodec(inputs.LegacyAmino) + +continue +} + coreAppModuleBasic := module.CoreAppModuleBasicAdaptor(name, mod) + +app.basicManager[name] = coreAppModuleBasic + coreAppModuleBasic.RegisterInterfaces(inputs.InterfaceRegistry) + +coreAppModuleBasic.RegisterLegacyAminoCodec(inputs.LegacyAmino) +} +} + +func ProvideInterfaceRegistry(addressCodec address.Codec, validatorAddressCodec 
ValidatorAddressCodec, customGetSigners []signing.CustomGetSigner) (codectypes.InterfaceRegistry, error) { + signingOptions := signing.Options{ + AddressCodec: addressCodec, + ValidatorAddressCodec: validatorAddressCodec, +} + for _, signer := range customGetSigners { + signingOptions.DefineCustomGetSigners(signer.MsgType, signer.Fn) +} + +interfaceRegistry, err := codectypes.NewInterfaceRegistryWithOptions(codectypes.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signingOptions, +}) + if err != nil { + return nil, err +} + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + return nil, err +} + +return interfaceRegistry, nil +} + +func registerStoreKey(wrapper *AppBuilder, key storetypes.StoreKey) { + wrapper.app.storeKeys = append(wrapper.app.storeKeys, key) +} + +func storeKeyOverride(config *runtimev1alpha1.Module, moduleName string) *runtimev1alpha1.StoreKeyConfig { + for _, cfg := range config.OverrideStoreKeys { + if cfg.ModuleName == moduleName { + return cfg +} + +} + +return nil +} + +func ProvideKVStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.KVStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + override := storeKeyOverride(config, key.Name()) + +var storeKeyName string + if override != nil { + storeKeyName = override.KvStoreKey +} + +else { + storeKeyName = key.Name() +} + storeKey := storetypes.NewKVStoreKey(storeKeyName) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideTransientStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.TransientStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewTransientStoreKey(fmt.Sprintf("transient:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideMemoryStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app 
*AppBuilder) *storetypes.MemoryStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewMemoryStoreKey(fmt.Sprintf("memory:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideGenesisTxHandler(appBuilder *AppBuilder) + +genesis.TxHandler { + return appBuilder.app +} + +func ProvideKVStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.KVStoreService { + storeKey := ProvideKVStoreKey(config, key, app) + +return kvStoreService{ + key: storeKey +} +} + +func ProvideMemoryStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.MemoryStoreService { + storeKey := ProvideMemoryStoreKey(config, key, app) + +return memStoreService{ + key: storeKey +} +} + +func ProvideTransientStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.TransientStoreService { + storeKey := ProvideTransientStoreKey(config, key, app) + +return transientStoreService{ + key: storeKey +} +} + +func ProvideEventService() + +event.Service { + return EventService{ +} +} + +func ProvideCometInfoService() + +comet.BlockInfoService { + return cometInfoService{ +} +} + +func ProvideHeaderInfoService(app *AppBuilder) + +header.Service { + return headerInfoService{ +} +} + +func ProvideBasicManager(app *AppBuilder) + +module.BasicManager { + return app.app.basicManager +} + +type ( + / ValidatorAddressCodec is an alias for address.Codec for validator addresses. + ValidatorAddressCodec address.Codec + + / ConsensusAddressCodec is an alias for address.Codec for validator consensus addresses. 
+ ConsensusAddressCodec address.Codec +) + +type AddressCodecInputs struct { + depinject.In + + AuthConfig *authmodulev1.Module `optional:"true"` + StakingConfig *stakingmodulev1.Module `optional:"true"` + + AddressCodecFactory func() + +address.Codec `optional:"true"` + ValidatorAddressCodecFactory func() + +ValidatorAddressCodec `optional:"true"` + ConsensusAddressCodecFactory func() + +ConsensusAddressCodec `optional:"true"` +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. +func ProvideAddressCodec(in AddressCodecInputs) (address.Codec, ValidatorAddressCodec, ConsensusAddressCodec) { + if in.AddressCodecFactory != nil && in.ValidatorAddressCodecFactory != nil && in.ConsensusAddressCodecFactory != nil { + return in.AddressCodecFactory(), in.ValidatorAddressCodecFactory(), in.ConsensusAddressCodecFactory() +} + if in.AuthConfig == nil || in.AuthConfig.Bech32Prefix == "" { + panic("auth config bech32 prefix cannot be empty if no custom address codec is provided") +} + if in.StakingConfig == nil { + in.StakingConfig = &stakingmodulev1.Module{ +} + +} + if in.StakingConfig.Bech32PrefixValidator == "" { + in.StakingConfig.Bech32PrefixValidator = fmt.Sprintf("%svaloper", in.AuthConfig.Bech32Prefix) +} + if in.StakingConfig.Bech32PrefixConsensus == "" { + in.StakingConfig.Bech32PrefixConsensus = fmt.Sprintf("%svalcons", in.AuthConfig.Bech32Prefix) +} + +return addresscodec.NewBech32Codec(in.AuthConfig.Bech32Prefix), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixValidator), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixConsensus) +} +``` + +```go expandable +package runtime + +import ( + + "fmt" + "os" + "slices" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoregistry" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 
"cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/event" + "cosmossdk.io/core/genesis" + "cosmossdk.io/core/header" + "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/tx/signing" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type appModule struct { + app *App +} + +func (m appModule) + +RegisterServices(configurator module.Configurator) { + err := m.app.registerRuntimeServices(configurator) + if err != nil { + panic(err) +} +} + +func (m appModule) + +IsOnePerModuleType() { +} + +func (m appModule) + +IsAppModule() { +} + +var ( + _ appmodule.AppModule = appModule{ +} + _ module.HasServices = appModule{ +} +) + +/ BaseAppOption is a depinject.AutoGroupType which can be used to pass +/ BaseApp options into the depinject. It should be used carefully. +type BaseAppOption func(*baseapp.BaseApp) + +/ IsManyPerContainerType indicates that this is a depinject.ManyPerContainerType. 
+func (b BaseAppOption) + +IsManyPerContainerType() { +} + +func init() { + appmodule.Register(&runtimev1alpha1.Module{ +}, + appmodule.Provide( + ProvideApp, + ProvideInterfaceRegistry, + ProvideKVStoreKey, + ProvideTransientStoreKey, + ProvideMemoryStoreKey, + ProvideGenesisTxHandler, + ProvideKVStoreService, + ProvideMemoryStoreService, + ProvideTransientStoreService, + ProvideEventService, + ProvideHeaderInfoService, + ProvideCometInfoService, + ProvideBasicManager, + ProvideAddressCodec, + ), + appmodule.Invoke(SetupAppBuilder), + ) +} + +func ProvideApp(interfaceRegistry codectypes.InterfaceRegistry) ( + codec.Codec, + *codec.LegacyAmino, + *AppBuilder, + *baseapp.MsgServiceRouter, + *baseapp.GRPCQueryRouter, + appmodule.AppModule, + protodesc.Resolver, + protoregistry.MessageTypeResolver, + error, +) { + protoFiles := proto.HybridResolver + protoTypes := protoregistry.GlobalTypes + + / At startup, check that all proto annotations are correct. + if err := msgservice.ValidateProtoAnnotations(protoFiles); err != nil { + / Once we switch to using protoreflect-based ante handlers, we might + / want to panic here instead of logging a warning. 
+ _, _ = fmt.Fprintln(os.Stderr, err.Error()) +} + amino := codec.NewLegacyAmino() + +std.RegisterInterfaces(interfaceRegistry) + +std.RegisterLegacyAminoCodec(amino) + cdc := codec.NewProtoCodec(interfaceRegistry) + msgServiceRouter := baseapp.NewMsgServiceRouter() + grpcQueryRouter := baseapp.NewGRPCQueryRouter() + app := &App{ + storeKeys: nil, + interfaceRegistry: interfaceRegistry, + cdc: cdc, + amino: amino, + basicManager: module.BasicManager{ +}, + msgServiceRouter: msgServiceRouter, + grpcQueryRouter: grpcQueryRouter, +} + appBuilder := &AppBuilder{ + app +} + +return cdc, amino, appBuilder, msgServiceRouter, grpcQueryRouter, appModule{ + app +}, protoFiles, protoTypes, nil +} + +type AppInputs struct { + depinject.In + + AppConfig *appv1alpha1.Config `optional:"true"` + Config *runtimev1alpha1.Module + AppBuilder *AppBuilder + Modules map[string]appmodule.AppModule + CustomModuleBasics map[string]module.AppModuleBasic `optional:"true"` + BaseAppOptions []BaseAppOption + InterfaceRegistry codectypes.InterfaceRegistry + LegacyAmino *codec.LegacyAmino + Logger log.Logger +} + +func SetupAppBuilder(inputs AppInputs) { + app := inputs.AppBuilder.app + app.baseAppOptions = inputs.BaseAppOptions + app.config = inputs.Config + app.appConfig = inputs.AppConfig + app.logger = inputs.Logger + app.ModuleManager = module.NewManagerFromMap(inputs.Modules) + for name, mod := range inputs.Modules { + if customBasicMod, ok := inputs.CustomModuleBasics[name]; ok { + app.basicManager[name] = customBasicMod + customBasicMod.RegisterInterfaces(inputs.InterfaceRegistry) + +customBasicMod.RegisterLegacyAminoCodec(inputs.LegacyAmino) + +continue +} + coreAppModuleBasic := module.CoreAppModuleBasicAdaptor(name, mod) + +app.basicManager[name] = coreAppModuleBasic + coreAppModuleBasic.RegisterInterfaces(inputs.InterfaceRegistry) + +coreAppModuleBasic.RegisterLegacyAminoCodec(inputs.LegacyAmino) +} +} + +func ProvideInterfaceRegistry(addressCodec address.Codec, validatorAddressCodec 
ValidatorAddressCodec, customGetSigners []signing.CustomGetSigner) (codectypes.InterfaceRegistry, error) { + signingOptions := signing.Options{ + AddressCodec: addressCodec, + ValidatorAddressCodec: validatorAddressCodec, +} + for _, signer := range customGetSigners { + signingOptions.DefineCustomGetSigners(signer.MsgType, signer.Fn) +} + +interfaceRegistry, err := codectypes.NewInterfaceRegistryWithOptions(codectypes.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signingOptions, +}) + if err != nil { + return nil, err +} + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + return nil, err +} + +return interfaceRegistry, nil +} + +func registerStoreKey(wrapper *AppBuilder, key storetypes.StoreKey) { + wrapper.app.storeKeys = append(wrapper.app.storeKeys, key) +} + +func storeKeyOverride(config *runtimev1alpha1.Module, moduleName string) *runtimev1alpha1.StoreKeyConfig { + for _, cfg := range config.OverrideStoreKeys { + if cfg.ModuleName == moduleName { + return cfg +} + +} + +return nil +} + +func ProvideKVStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.KVStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + override := storeKeyOverride(config, key.Name()) + +var storeKeyName string + if override != nil { + storeKeyName = override.KvStoreKey +} + +else { + storeKeyName = key.Name() +} + storeKey := storetypes.NewKVStoreKey(storeKeyName) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideTransientStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.TransientStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewTransientStoreKey(fmt.Sprintf("transient:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideMemoryStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app 
*AppBuilder) *storetypes.MemoryStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewMemoryStoreKey(fmt.Sprintf("memory:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideGenesisTxHandler(appBuilder *AppBuilder) + +genesis.TxHandler { + return appBuilder.app +} + +func ProvideKVStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.KVStoreService { + storeKey := ProvideKVStoreKey(config, key, app) + +return kvStoreService{ + key: storeKey +} +} + +func ProvideMemoryStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.MemoryStoreService { + storeKey := ProvideMemoryStoreKey(config, key, app) + +return memStoreService{ + key: storeKey +} +} + +func ProvideTransientStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.TransientStoreService { + storeKey := ProvideTransientStoreKey(config, key, app) + +return transientStoreService{ + key: storeKey +} +} + +func ProvideEventService() + +event.Service { + return EventService{ +} +} + +func ProvideCometInfoService() + +comet.BlockInfoService { + return cometInfoService{ +} +} + +func ProvideHeaderInfoService(app *AppBuilder) + +header.Service { + return headerInfoService{ +} +} + +func ProvideBasicManager(app *AppBuilder) + +module.BasicManager { + return app.app.basicManager +} + +type ( + / ValidatorAddressCodec is an alias for address.Codec for validator addresses. + ValidatorAddressCodec address.Codec + + / ConsensusAddressCodec is an alias for address.Codec for validator consensus addresses. 
+ ConsensusAddressCodec address.Codec +) + +type AddressCodecInputs struct { + depinject.In + + AuthConfig *authmodulev1.Module `optional:"true"` + StakingConfig *stakingmodulev1.Module `optional:"true"` + + AddressCodecFactory func() + +address.Codec `optional:"true"` + ValidatorAddressCodecFactory func() + +ValidatorAddressCodec `optional:"true"` + ConsensusAddressCodecFactory func() + +ConsensusAddressCodec `optional:"true"` +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. +func ProvideAddressCodec(in AddressCodecInputs) (address.Codec, ValidatorAddressCodec, ConsensusAddressCodec) { + if in.AddressCodecFactory != nil && in.ValidatorAddressCodecFactory != nil && in.ConsensusAddressCodecFactory != nil { + return in.AddressCodecFactory(), in.ValidatorAddressCodecFactory(), in.ConsensusAddressCodecFactory() +} + if in.AuthConfig == nil || in.AuthConfig.Bech32Prefix == "" { + panic("auth config bech32 prefix cannot be empty if no custom address codec is provided") +} + if in.StakingConfig == nil { + in.StakingConfig = &stakingmodulev1.Module{ +} + +} + if in.StakingConfig.Bech32PrefixValidator == "" { + in.StakingConfig.Bech32PrefixValidator = fmt.Sprintf("%svaloper", in.AuthConfig.Bech32Prefix) +} + if in.StakingConfig.Bech32PrefixConsensus == "" { + in.StakingConfig.Bech32PrefixConsensus = fmt.Sprintf("%svalcons", in.AuthConfig.Bech32Prefix) +} + +return addresscodec.NewBech32Codec(in.AuthConfig.Bech32Prefix), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixValidator), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixConsensus) +} +``` diff --git a/docs/sdk/next/documentation/module-system/msg-services.mdx b/docs/sdk/next/documentation/module-system/msg-services.mdx new file mode 100644 index 00000000..d33d553a --- /dev/null +++ b/docs/sdk/next/documentation/module-system/msg-services.mdx @@ -0,0 +1,3619 @@ +--- +title: "`Msg` Services" +--- + +## 
Synopsis
+
+A Protobuf `Msg` service processes [messages](/docs/sdk/next/documentation/module-system/messages-and-queries#messages). Protobuf `Msg` services are specific to the module in which they are defined and only process messages defined within that module. They are called from `BaseApp` during [`DeliverTx`](/docs/sdk/next/documentation/application-framework/baseapp#delivertx).
+
+
+**Pre-requisite Readings**
+
+- [Module Manager](/docs/sdk/next/documentation/module-system/module-manager)
+- [Messages and Queries](/docs/sdk/next/documentation/module-system/messages-and-queries)
+
+
+
+## Implementation of a module `Msg` service
+
+Each module should define a Protobuf `Msg` service, which is responsible for processing requests (implementing `sdk.Msg`) and returning responses.
+
+As further described in [ADR 031](/docs/sdk/next/documentation/legacy/adr-comprehensive), this approach has the advantage of clearly specifying return types and generating server and client code.
+
+Protobuf generates a `MsgServer` interface based on the definition of the `Msg` service. It is the role of the module developer to implement this interface by implementing the state transition logic that should run upon receipt of each `sdk.Msg`. As an example, here is the generated `MsgServer` interface for `x/bank`, which exposes four `sdk.Msg`s:
+
+```go expandable
+/ Code generated by protoc-gen-gogo. DO NOT EDIT.
+/ source: cosmos/bank/v1beta1/tx.proto + +package types + +import ( + + context "context" + fmt "fmt" + _ "github.com/cosmos/cosmos-proto" + github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types" + types "github.com/cosmos/cosmos-sdk/types" + _ "github.com/cosmos/cosmos-sdk/types/msgservice" + _ "github.com/cosmos/cosmos-sdk/types/tx/amino" + _ "github.com/cosmos/gogoproto/gogoproto" + grpc1 "github.com/cosmos/gogoproto/grpc" + proto "github.com/cosmos/gogoproto/proto" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" + io "io" + math "math" + math_bits "math/bits" +) + +/ Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the proto package it is being compiled against. +/ A compilation error at this line likely means your copy of the +/ proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 / please upgrade the proto package + +/ MsgSend represents a message to send coins from one account to another. 
+type MsgSend struct { + FromAddress string `protobuf:"bytes,1,opt,name=from_address,json=fromAddress,proto3" json:"from_address,omitempty"` + ToAddress string `protobuf:"bytes,2,opt,name=to_address,json=toAddress,proto3" json:"to_address,omitempty"` + Amount github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=amount,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"amount"` +} + +func (m *MsgSend) + +Reset() { *m = MsgSend{ +} +} + +func (m *MsgSend) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSend) + +ProtoMessage() { +} + +func (*MsgSend) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{0 +} +} + +func (m *MsgSend) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSend) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSend.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSend) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSend.Merge(m, src) +} + +func (m *MsgSend) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSend) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSend.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSend proto.InternalMessageInfo + +/ MsgSendResponse defines the Msg/Send response type. 
+type MsgSendResponse struct { +} + +func (m *MsgSendResponse) + +Reset() { *m = MsgSendResponse{ +} +} + +func (m *MsgSendResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSendResponse) + +ProtoMessage() { +} + +func (*MsgSendResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{1 +} +} + +func (m *MsgSendResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSendResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSendResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSendResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSendResponse.Merge(m, src) +} + +func (m *MsgSendResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSendResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSendResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSendResponse proto.InternalMessageInfo + +/ MsgMultiSend represents an arbitrary multi-in, multi-out send message. +type MsgMultiSend struct { + / Inputs, despite being `repeated`, only allows one sender input. This is + / checked in MsgMultiSend's ValidateBasic. 
+ Inputs []Input `protobuf:"bytes,1,rep,name=inputs,proto3" json:"inputs"` + Outputs []Output `protobuf:"bytes,2,rep,name=outputs,proto3" json:"outputs"` +} + +func (m *MsgMultiSend) + +Reset() { *m = MsgMultiSend{ +} +} + +func (m *MsgMultiSend) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgMultiSend) + +ProtoMessage() { +} + +func (*MsgMultiSend) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{2 +} +} + +func (m *MsgMultiSend) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgMultiSend) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgMultiSend.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgMultiSend) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgMultiSend.Merge(m, src) +} + +func (m *MsgMultiSend) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgMultiSend) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgMultiSend.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgMultiSend proto.InternalMessageInfo + +func (m *MsgMultiSend) + +GetInputs() []Input { + if m != nil { + return m.Inputs +} + +return nil +} + +func (m *MsgMultiSend) + +GetOutputs() []Output { + if m != nil { + return m.Outputs +} + +return nil +} + +/ MsgMultiSendResponse defines the Msg/MultiSend response type. 
+type MsgMultiSendResponse struct { +} + +func (m *MsgMultiSendResponse) + +Reset() { *m = MsgMultiSendResponse{ +} +} + +func (m *MsgMultiSendResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgMultiSendResponse) + +ProtoMessage() { +} + +func (*MsgMultiSendResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{3 +} +} + +func (m *MsgMultiSendResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgMultiSendResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgMultiSendResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgMultiSendResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgMultiSendResponse.Merge(m, src) +} + +func (m *MsgMultiSendResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgMultiSendResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgMultiSendResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgMultiSendResponse proto.InternalMessageInfo + +/ MsgUpdateParams is the Msg/UpdateParams request type. +/ +/ Since: cosmos-sdk 0.47 +type MsgUpdateParams struct { + / authority is the address that controls the module (defaults to x/gov unless overwritten). + Authority string `protobuf:"bytes,1,opt,name=authority,proto3" json:"authority,omitempty"` + / params defines the x/bank parameters to update. + / + / NOTE: All parameters must be supplied. 
+ Params Params `protobuf:"bytes,2,opt,name=params,proto3" json:"params"` +} + +func (m *MsgUpdateParams) + +Reset() { *m = MsgUpdateParams{ +} +} + +func (m *MsgUpdateParams) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgUpdateParams) + +ProtoMessage() { +} + +func (*MsgUpdateParams) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{4 +} +} + +func (m *MsgUpdateParams) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgUpdateParams) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgUpdateParams.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgUpdateParams) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgUpdateParams.Merge(m, src) +} + +func (m *MsgUpdateParams) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgUpdateParams) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgUpdateParams.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgUpdateParams proto.InternalMessageInfo + +func (m *MsgUpdateParams) + +GetAuthority() + +string { + if m != nil { + return m.Authority +} + +return "" +} + +func (m *MsgUpdateParams) + +GetParams() + +Params { + if m != nil { + return m.Params +} + +return Params{ +} +} + +/ MsgUpdateParamsResponse defines the response structure for executing a +/ MsgUpdateParams message. 
+/ +/ Since: cosmos-sdk 0.47 +type MsgUpdateParamsResponse struct { +} + +func (m *MsgUpdateParamsResponse) + +Reset() { *m = MsgUpdateParamsResponse{ +} +} + +func (m *MsgUpdateParamsResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgUpdateParamsResponse) + +ProtoMessage() { +} + +func (*MsgUpdateParamsResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{5 +} +} + +func (m *MsgUpdateParamsResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgUpdateParamsResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgUpdateParamsResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgUpdateParamsResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgUpdateParamsResponse.Merge(m, src) +} + +func (m *MsgUpdateParamsResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgUpdateParamsResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgUpdateParamsResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgUpdateParamsResponse proto.InternalMessageInfo + +/ MsgSetSendEnabled is the Msg/SetSendEnabled request type. +/ +/ Only entries to add/update/delete need to be included. +/ Existing SendEnabled entries that are not included in this +/ message are left unchanged. +/ +/ Since: cosmos-sdk 0.47 +type MsgSetSendEnabled struct { + Authority string `protobuf:"bytes,1,opt,name=authority,proto3" json:"authority,omitempty"` + / send_enabled is the list of entries to add or update. + SendEnabled []*SendEnabled `protobuf:"bytes,2,rep,name=send_enabled,json=sendEnabled,proto3" json:"send_enabled,omitempty"` + / use_default_for is a list of denoms that should use the params.default_send_enabled value. 
+ / Denoms listed here will have their SendEnabled entries deleted. + / If a denom is included that doesn't have a SendEnabled entry, + / it will be ignored. + UseDefaultFor []string `protobuf:"bytes,3,rep,name=use_default_for,json=useDefaultFor,proto3" json:"use_default_for,omitempty"` +} + +func (m *MsgSetSendEnabled) + +Reset() { *m = MsgSetSendEnabled{ +} +} + +func (m *MsgSetSendEnabled) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSetSendEnabled) + +ProtoMessage() { +} + +func (*MsgSetSendEnabled) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{6 +} +} + +func (m *MsgSetSendEnabled) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSetSendEnabled) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSetSendEnabled.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSetSendEnabled) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSetSendEnabled.Merge(m, src) +} + +func (m *MsgSetSendEnabled) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSetSendEnabled) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSetSendEnabled.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSetSendEnabled proto.InternalMessageInfo + +func (m *MsgSetSendEnabled) + +GetAuthority() + +string { + if m != nil { + return m.Authority +} + +return "" +} + +func (m *MsgSetSendEnabled) + +GetSendEnabled() []*SendEnabled { + if m != nil { + return m.SendEnabled +} + +return nil +} + +func (m *MsgSetSendEnabled) + +GetUseDefaultFor() []string { + if m != nil { + return m.UseDefaultFor +} + +return nil +} + +/ MsgSetSendEnabledResponse defines the Msg/SetSendEnabled response type. 
+/ +/ Since: cosmos-sdk 0.47 +type MsgSetSendEnabledResponse struct { +} + +func (m *MsgSetSendEnabledResponse) + +Reset() { *m = MsgSetSendEnabledResponse{ +} +} + +func (m *MsgSetSendEnabledResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSetSendEnabledResponse) + +ProtoMessage() { +} + +func (*MsgSetSendEnabledResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{7 +} +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSetSendEnabledResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSetSendEnabledResponse.Merge(m, src) +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSetSendEnabledResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSetSendEnabledResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSetSendEnabledResponse proto.InternalMessageInfo + +func init() { + proto.RegisterType((*MsgSend)(nil), "cosmos.bank.v1beta1.MsgSend") + +proto.RegisterType((*MsgSendResponse)(nil), "cosmos.bank.v1beta1.MsgSendResponse") + +proto.RegisterType((*MsgMultiSend)(nil), "cosmos.bank.v1beta1.MsgMultiSend") + +proto.RegisterType((*MsgMultiSendResponse)(nil), "cosmos.bank.v1beta1.MsgMultiSendResponse") + +proto.RegisterType((*MsgUpdateParams)(nil), "cosmos.bank.v1beta1.MsgUpdateParams") + +proto.RegisterType((*MsgUpdateParamsResponse)(nil), "cosmos.bank.v1beta1.MsgUpdateParamsResponse") + +proto.RegisterType((*MsgSetSendEnabled)(nil), "cosmos.bank.v1beta1.MsgSetSendEnabled") + +proto.RegisterType((*MsgSetSendEnabledResponse)(nil), 
"cosmos.bank.v1beta1.MsgSetSendEnabledResponse") +} + +func init() { + proto.RegisterFile("cosmos/bank/v1beta1/tx.proto", fileDescriptor_1d8cb1613481f5b7) +} + +var fileDescriptor_1d8cb1613481f5b7 = []byte{ + / 700 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x54, 0xcf, 0x4f, 0xd3, 0x50, + 0x1c, 0x5f, 0x99, 0x8e, 0xec, 0x31, 0x25, 0x54, 0x22, 0xac, 0x90, 0x0e, 0x16, 0x43, 0x00, 0xa5, + 0x15, 0x34, 0x9a, 0xcc, 0x68, 0x74, 0x28, 0x89, 0x26, 0x8b, 0x66, 0xc4, 0x83, 0x5e, 0x96, 0xd7, + 0xf5, 0x51, 0x1a, 0xd6, 0xbe, 0xa6, 0xef, 0x95, 0xb0, 0x9b, 0x7a, 0x32, 0x9e, 0x3c, 0x7b, 0xe2, + 0x68, 0x8c, 0x07, 0x0e, 0x1e, 0x4d, 0xbc, 0x72, 0x24, 0x9e, 0x3c, 0xa9, 0x81, 0x03, 0xfa, 0x5f, + 0x98, 0xf7, 0xa3, 0xa5, 0x8c, 0x8d, 0x11, 0x2f, 0x6b, 0xf7, 0x3e, 0x3f, 0xbe, 0xef, 0xf3, 0xed, + 0xf7, 0x3d, 0x30, 0xd9, 0xc4, 0xc4, 0xc3, 0xc4, 0xb4, 0xa0, 0xbf, 0x61, 0x6e, 0x2e, 0x5a, 0x88, + 0xc2, 0x45, 0x93, 0x6e, 0x19, 0x41, 0x88, 0x29, 0x56, 0x2f, 0x09, 0xd4, 0x60, 0xa8, 0x21, 0x51, + 0x6d, 0xd4, 0xc1, 0x0e, 0xe6, 0xb8, 0xc9, 0xde, 0x04, 0x55, 0xd3, 0x13, 0x23, 0x82, 0x12, 0xa3, + 0x26, 0x76, 0xfd, 0x13, 0x78, 0xaa, 0x10, 0xf7, 0x15, 0x78, 0x51, 0xe0, 0x0d, 0x61, 0x2c, 0xeb, + 0x0a, 0x68, 0x4c, 0x4a, 0x3d, 0xe2, 0x98, 0x9b, 0x8b, 0xec, 0x21, 0x81, 0x11, 0xe8, 0xb9, 0x3e, + 0x36, 0xf9, 0xaf, 0x58, 0x2a, 0x7f, 0x1e, 0x00, 0x83, 0x35, 0xe2, 0xac, 0x22, 0xdf, 0x56, 0xef, + 0x80, 0xc2, 0x5a, 0x88, 0xbd, 0x06, 0xb4, 0xed, 0x10, 0x11, 0x32, 0xae, 0x4c, 0x29, 0xb3, 0xf9, + 0xea, 0xf8, 0xf7, 0x2f, 0x0b, 0xa3, 0xd2, 0xff, 0x81, 0x40, 0x56, 0x69, 0xe8, 0xfa, 0x4e, 0x7d, + 0x88, 0xb1, 0xe5, 0x92, 0x7a, 0x1b, 0x00, 0x8a, 0x13, 0xe9, 0x40, 0x1f, 0x69, 0x9e, 0xe2, 0x58, + 0xd8, 0x06, 0x39, 0xe8, 0xe1, 0xc8, 0xa7, 0xe3, 0xd9, 0xa9, 0xec, 0xec, 0xd0, 0x52, 0xd1, 0x48, + 0x9a, 0x48, 0x50, 0xdc, 0x44, 0x63, 0x19, 0xbb, 0x7e, 0x75, 0x65, 0xf7, 0x67, 0x29, 0xf3, 0xe9, + 0x57, 0x69, 0xd6, 0x71, 0xe9, 0x7a, 0x64, 0x19, 0x4d, 0xec, 0xc9, 0xe4, 0xf2, 
0xb1, 0x40, 0xec, + 0x0d, 0x93, 0xb6, 0x03, 0x44, 0xb8, 0x80, 0x7c, 0x38, 0xdc, 0x99, 0x2f, 0xb4, 0x90, 0x03, 0x9b, + 0xed, 0x06, 0xeb, 0x2d, 0xf9, 0x78, 0xb8, 0x33, 0xaf, 0xd4, 0x65, 0xc1, 0xca, 0xf5, 0xb7, 0xdb, + 0xa5, 0xcc, 0x9f, 0xed, 0x52, 0xe6, 0x0d, 0xe3, 0xa5, 0xb3, 0xbf, 0x3b, 0xdc, 0x99, 0x57, 0x53, + 0x9e, 0xb2, 0x45, 0xe5, 0x11, 0x30, 0x2c, 0x5f, 0xeb, 0x88, 0x04, 0xd8, 0x27, 0xa8, 0xfc, 0x55, + 0x01, 0x85, 0x1a, 0x71, 0x6a, 0x51, 0x8b, 0xba, 0xbc, 0x8d, 0x77, 0x41, 0xce, 0xf5, 0x83, 0x88, + 0xb2, 0x06, 0xb2, 0x40, 0x9a, 0xd1, 0x65, 0x2a, 0x8c, 0xc7, 0x8c, 0x52, 0xcd, 0xb3, 0x44, 0x72, + 0x53, 0x42, 0xa4, 0xde, 0x07, 0x83, 0x38, 0xa2, 0x5c, 0x3f, 0xc0, 0xf5, 0x13, 0x5d, 0xf5, 0x4f, + 0x39, 0x27, 0x6d, 0x10, 0xcb, 0x2a, 0x57, 0xe3, 0x48, 0xd2, 0x92, 0x85, 0x19, 0x3b, 0x1e, 0x26, + 0xd9, 0x6d, 0xf9, 0x32, 0x18, 0x4d, 0xff, 0x4f, 0x62, 0x7d, 0x53, 0x78, 0xd4, 0xe7, 0x81, 0x0d, + 0x29, 0x7a, 0x06, 0x43, 0xe8, 0x11, 0xf5, 0x16, 0xc8, 0xc3, 0x88, 0xae, 0xe3, 0xd0, 0xa5, 0xed, + 0xbe, 0xd3, 0x71, 0x44, 0x55, 0xef, 0x81, 0x5c, 0xc0, 0x1d, 0xf8, 0x5c, 0xf4, 0x4a, 0x24, 0x8a, + 0x1c, 0x6b, 0x89, 0x50, 0x55, 0x6e, 0xb2, 0x30, 0x47, 0x7e, 0x2c, 0xcf, 0x74, 0x2a, 0xcf, 0x96, + 0x38, 0x24, 0x1d, 0xbb, 0x2d, 0x17, 0xc1, 0x58, 0xc7, 0x52, 0x12, 0xee, 0xaf, 0x02, 0x46, 0xf8, + 0x77, 0xa4, 0x2c, 0xf3, 0x23, 0x1f, 0x5a, 0x2d, 0x64, 0xff, 0x77, 0xbc, 0x65, 0x50, 0x20, 0xc8, + 0xb7, 0x1b, 0x48, 0xf8, 0xc8, 0xcf, 0x36, 0xd5, 0x35, 0x64, 0xaa, 0x5e, 0x7d, 0x88, 0xa4, 0x8a, + 0xcf, 0x80, 0xe1, 0x88, 0xa0, 0x86, 0x8d, 0xd6, 0x60, 0xd4, 0xa2, 0x8d, 0x35, 0x1c, 0xf2, 0xf3, + 0x90, 0xaf, 0x5f, 0x88, 0x08, 0x7a, 0x28, 0x56, 0x57, 0x70, 0x58, 0x31, 0x4f, 0xf6, 0x62, 0xb2, + 0x73, 0x50, 0xd3, 0xa9, 0xca, 0x13, 0xa0, 0x78, 0x62, 0x31, 0x6e, 0xc4, 0xd2, 0xeb, 0x2c, 0xc8, + 0xd6, 0x88, 0xa3, 0x3e, 0x01, 0xe7, 0xf8, 0xec, 0x4e, 0x76, 0xdd, 0xb4, 0x1c, 0x79, 0xed, 0xca, + 0x69, 0x68, 0xec, 0xa9, 0xbe, 0x00, 0xf9, 0xa3, 0xc3, 0x30, 0xdd, 0x4b, 0x92, 0x50, 0xb4, 0xb9, + 0xbe, 0x94, 0xc4, 
0xda, 0x02, 0x85, 0x63, 0x03, 0xd9, 0x73, 0x43, 0x69, 0x96, 0x76, 0xed, 0x2c,
+	0xac, 0xa4, 0xc6, 0x3a, 0xb8, 0xd8, 0x31, 0x17, 0x33, 0xbd, 0x63, 0xa7, 0x79, 0x9a, 0x71, 0x36,
+	0x5e, 0x5c, 0x49, 0x3b, 0xff, 0x8a, 0x4d, 0x79, 0x75, 0x79, 0x77, 0x5f, 0x57, 0xf6, 0xf6, 0x75,
+	0xe5, 0xf7, 0xbe, 0xae, 0xbc, 0x3f, 0xd0, 0x33, 0x7b, 0x07, 0x7a, 0xe6, 0xc7, 0x81, 0x9e, 0x79,
+	0x39, 0x77, 0xea, 0x3d, 0x27, 0xc7, 0x9e, 0x5f, 0x77, 0x56, 0x8e, 0x5f, 0xe7, 0x37, 0xfe, 0x05,
+	0x00, 0x00, 0xff, 0xff, 0x5b, 0x5b, 0x43, 0xa9, 0xa0, 0x06, 0x00, 0x00,
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion4
+
+// MsgClient is the client API for Msg service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
+type MsgClient interface {
+	// Send defines a method for sending coins from one account to another account.
+	Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error)
+	// MultiSend defines a method for sending coins from some accounts to other accounts.
+	MultiSend(ctx context.Context, in *MsgMultiSend, opts ...grpc.CallOption) (*MsgMultiSendResponse, error)
+	// UpdateParams defines a governance operation for updating the x/bank module parameters.
+	// The authority is defined in the keeper.
+	//
+	// Since: cosmos-sdk 0.47
+	UpdateParams(ctx context.Context, in *MsgUpdateParams, opts ...grpc.CallOption) (*MsgUpdateParamsResponse, error)
+	// SetSendEnabled is a governance operation for setting the SendEnabled flag
+	// on any number of Denoms. Only the entries to add or update should be
+	// included. Entries that already exist in the store, but that aren't
+	// included in this message, will be left unchanged.
+	//
+	// Since: cosmos-sdk 0.47
+	SetSendEnabled(ctx context.Context, in *MsgSetSendEnabled, opts ...grpc.CallOption) (*MsgSetSendEnabledResponse, error)
+}
+
+type msgClient struct {
+	cc grpc1.ClientConn
+}
+
+func NewMsgClient(cc grpc1.ClientConn) MsgClient {
+	return &msgClient{cc}
+}
+
+func (c *msgClient) Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error) {
+	out := new(MsgSendResponse)
+	err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/Send", in, out, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *msgClient) MultiSend(ctx context.Context, in *MsgMultiSend, opts ...grpc.CallOption) (*MsgMultiSendResponse, error) {
+	out := new(MsgMultiSendResponse)
+	err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/MultiSend", in, out, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *msgClient) UpdateParams(ctx context.Context, in *MsgUpdateParams, opts ...grpc.CallOption) (*MsgUpdateParamsResponse, error) {
+	out := new(MsgUpdateParamsResponse)
+	err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/UpdateParams", in, out, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *msgClient) SetSendEnabled(ctx context.Context, in *MsgSetSendEnabled, opts ...grpc.CallOption) (*MsgSetSendEnabledResponse, error) {
+	out := new(MsgSetSendEnabledResponse)
+	err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/SetSendEnabled", in, out, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+// MsgServer is the server API for Msg service.
+type MsgServer interface {
+	// Send defines a method for sending coins from one account to another account.
+	Send(context.Context, *MsgSend) (*MsgSendResponse, error)
+	// MultiSend defines a method for sending coins from some accounts to other accounts.
+	MultiSend(context.Context, *MsgMultiSend) (*MsgMultiSendResponse, error)
+	// UpdateParams defines a governance operation for updating the x/bank module parameters.
+	// The authority is defined in the keeper.
+	//
+	// Since: cosmos-sdk 0.47
+	UpdateParams(context.Context, *MsgUpdateParams) (*MsgUpdateParamsResponse, error)
+	// SetSendEnabled is a governance operation for setting the SendEnabled flag
+	// on any number of Denoms. Only the entries to add or update should be
+	// included. Entries that already exist in the store, but that aren't
+	// included in this message, will be left unchanged.
+	//
+	// Since: cosmos-sdk 0.47
+	SetSendEnabled(context.Context, *MsgSetSendEnabled) (*MsgSetSendEnabledResponse, error)
+}
+
+// UnimplementedMsgServer can be embedded to have forward compatible implementations.
+type UnimplementedMsgServer struct {
+}
+
+func (*UnimplementedMsgServer) Send(ctx context.Context, req *MsgSend) (*MsgSendResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method Send not implemented")
+}
+
+func (*UnimplementedMsgServer) MultiSend(ctx context.Context, req *MsgMultiSend) (*MsgMultiSendResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method MultiSend not implemented")
+}
+
+func (*UnimplementedMsgServer) UpdateParams(ctx context.Context, req *MsgUpdateParams) (*MsgUpdateParamsResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method UpdateParams not implemented")
+}
+
+func (*UnimplementedMsgServer) SetSendEnabled(ctx context.Context, req *MsgSetSendEnabled) (*MsgSetSendEnabledResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method SetSendEnabled not implemented")
+}
+
+func RegisterMsgServer(s grpc1.Server, srv MsgServer) {
+	s.RegisterService(&_Msg_serviceDesc, srv)
+}
+
+func _Msg_Send_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(MsgSend)
+	if err
:= dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).Send(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/Send", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).Send(ctx, req.(*MsgSend)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_MultiSend_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgMultiSend) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).MultiSend(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/MultiSend", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).MultiSend(ctx, req.(*MsgMultiSend)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_UpdateParams_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgUpdateParams) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).UpdateParams(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/UpdateParams", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).UpdateParams(ctx, req.(*MsgUpdateParams)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_SetSendEnabled_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgSetSendEnabled) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).SetSendEnabled(ctx, 
in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/SetSendEnabled", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).SetSendEnabled(ctx, req.(*MsgSetSendEnabled)) +} + +return interceptor(ctx, in, info, handler) +} + +var _Msg_serviceDesc = grpc.ServiceDesc{ + ServiceName: "cosmos.bank.v1beta1.Msg", + HandlerType: (*MsgServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "Send", + Handler: _Msg_Send_Handler, +}, + { + MethodName: "MultiSend", + Handler: _Msg_MultiSend_Handler, +}, + { + MethodName: "UpdateParams", + Handler: _Msg_UpdateParams_Handler, +}, + { + MethodName: "SetSendEnabled", + Handler: _Msg_SetSendEnabled_Handler, +}, +}, + Streams: []grpc.StreamDesc{ +}, + Metadata: "cosmos/bank/v1beta1/tx.proto", +} + +func (m *MsgSend) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSend) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSend) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Amount) > 0 { + for iNdEx := len(m.Amount) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Amount[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + +} + if len(m.ToAddress) > 0 { + i -= len(m.ToAddress) + +copy(dAtA[i:], m.ToAddress) + +i = encodeVarintTx(dAtA, i, uint64(len(m.ToAddress))) + +i-- + dAtA[i] = 0x12 +} + if len(m.FromAddress) > 0 { + i -= len(m.FromAddress) + +copy(dAtA[i:], m.FromAddress) + +i = encodeVarintTx(dAtA, i, uint64(len(m.FromAddress))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgSendResponse) + 
+Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSendResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSendResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgMultiSend) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgMultiSend) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgMultiSend) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Outputs) > 0 { + for iNdEx := len(m.Outputs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Outputs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 +} + +} + if len(m.Inputs) > 0 { + for iNdEx := len(m.Inputs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Inputs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +} + +return len(dAtA) - i, nil +} + +func (m *MsgMultiSendResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgMultiSendResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgMultiSendResponse) + 
+MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgUpdateParams) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgUpdateParams) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgUpdateParams) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + { + size, err := m.Params.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 + if len(m.Authority) > 0 { + i -= len(m.Authority) + +copy(dAtA[i:], m.Authority) + +i = encodeVarintTx(dAtA, i, uint64(len(m.Authority))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgUpdateParamsResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgUpdateParamsResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgUpdateParamsResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgSetSendEnabled) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSetSendEnabled) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSetSendEnabled) + +MarshalToSizedBuffer(dAtA []byte) (int, 
error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.UseDefaultFor) > 0 { + for iNdEx := len(m.UseDefaultFor) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.UseDefaultFor[iNdEx]) + +copy(dAtA[i:], m.UseDefaultFor[iNdEx]) + +i = encodeVarintTx(dAtA, i, uint64(len(m.UseDefaultFor[iNdEx]))) + +i-- + dAtA[i] = 0x1a +} + +} + if len(m.SendEnabled) > 0 { + for iNdEx := len(m.SendEnabled) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.SendEnabled[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 +} + +} + if len(m.Authority) > 0 { + i -= len(m.Authority) + +copy(dAtA[i:], m.Authority) + +i = encodeVarintTx(dAtA, i, uint64(len(m.Authority))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgSetSendEnabledResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSetSendEnabledResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSetSendEnabledResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func encodeVarintTx(dAtA []byte, offset int, v uint64) + +int { + offset -= sovTx(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + +v >>= 7 + offset++ +} + +dAtA[offset] = uint8(v) + +return base +} + +func (m *MsgSend) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.FromAddress) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + +l = len(m.ToAddress) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + if len(m.Amount) > 0 { + for _, e := range m.Amount { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgSendResponse) + 
+Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgMultiSend) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if len(m.Inputs) > 0 { + for _, e := range m.Inputs { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + if len(m.Outputs) > 0 { + for _, e := range m.Outputs { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgMultiSendResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgUpdateParams) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Authority) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + +l = m.Params.Size() + +n += 1 + l + sovTx(uint64(l)) + +return n +} + +func (m *MsgUpdateParamsResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgSetSendEnabled) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Authority) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + if len(m.SendEnabled) > 0 { + for _, e := range m.SendEnabled { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + if len(m.UseDefaultFor) > 0 { + for _, s := range m.UseDefaultFor { + l = len(s) + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgSetSendEnabledResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func sovTx(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} + +func sozTx(x uint64) (n int) { + return sovTx(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +func (m *MsgSend) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + 
break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSend: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSend: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FromAddress", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.FromAddress = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ToAddress", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.ToAddress = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Amount", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break 
+} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Amount = append(m.Amount, types.Coin{ +}) + if err := m.Amount[len(m.Amount)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSendResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSendResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSendResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgMultiSend) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := 
dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgMultiSend: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgMultiSend: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Inputs", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Inputs = append(m.Inputs, Input{ +}) + if err := m.Inputs[len(m.Inputs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Outputs", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Outputs = append(m.Outputs, Output{ +}) + if err := m.Outputs[len(m.Outputs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + 
skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgMultiSendResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgMultiSendResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgMultiSendResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgUpdateParams) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgUpdateParams: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgUpdateParams: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Authority", wireType) +} + +var stringLen uint64 + for 
shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Authority = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Params", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := m.Params.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgUpdateParamsResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return 
fmt.Errorf("proto: MsgUpdateParamsResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgUpdateParamsResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSetSendEnabled) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSetSendEnabled: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSetSendEnabled: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Authority", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Authority = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field SendEnabled", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.SendEnabled = append(m.SendEnabled, &SendEnabled{ +}) + if err := m.SendEnabled[len(m.SendEnabled)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UseDefaultFor", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.UseDefaultFor = append(m.UseDefaultFor, string(dAtA[iNdEx:postIndex])) + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSetSendEnabledResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := 
dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSetSendEnabledResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSetSendEnabledResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func skipTx(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + +iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break +} + +} + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + if length < 0 { + return 0, ErrInvalidLengthTx +} + +iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupTx +} + +depth-- + case 5: + iNdEx += 4 + default: + return 0, fmt.Errorf("proto: illegal wireType %d", wireType) +} + if iNdEx < 0 { + return 0, ErrInvalidLengthTx +} + if depth == 0 { + 
return iNdEx, nil +} + +} + +return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthTx = fmt.Errorf("proto: negative length found during unmarshaling") + +ErrIntOverflowTx = fmt.Errorf("proto: integer overflow") + +ErrUnexpectedEndOfGroupTx = fmt.Errorf("proto: unexpected end of group") +) +``` + +When possible, the existing module's [`Keeper`](/docs/sdk/next/documentation/module-system/keeper) should implement `MsgServer`, otherwise a `msgServer` struct that embeds the `Keeper` can be created, typically in `./keeper/msg_server.go`: + +```go expandable +package keeper + +import ( + + "context" + "github.com/armon/go-metrics" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +type msgServer struct { + Keeper +} + +var _ types.MsgServer = msgServer{ +} + +/ NewMsgServerImpl returns an implementation of the bank MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + var ( + from, to []byte + err error + ) + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !msg.Amount.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if !msg.Amount.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(goCtx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + if len(msg.Inputs) == 0 { + return nil, types.ErrNoInputs +} + if len(msg.Inputs) != 1 { + return nil, types.ErrMultipleSenders +} + if len(msg.Outputs) == 0 { + return nil, types.ErrNoOutputs +} + if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err 
!= nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + + / NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + if base, ok := k.Keeper.(BaseKeeper); ok { + accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) + if err != nil { + return nil, err +} + if k.BlockedAddr(accAddr) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(goCtx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) +} + if err := req.Params.Validate(); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.SetParams(ctx, req.Params); err != nil { + return nil, err +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} + +func (k msgServer) + +SetSendEnabled(goCtx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { + if k.GetAuthority() != msg.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) +} + seen := map[string]bool{ +} + for _, se := range msg.SendEnabled { + if _, alreadySeen := seen[se.Denom]; alreadySeen { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) +} + +seen[se.Denom] = true + if err := se.Validate(); err != nil { + return 
nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err) +} + +} + for _, denom := range msg.UseDefaultFor { + if err := sdk.ValidateDenom(denom); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err) +} + +} + ctx := sdk.UnwrapSDKContext(goCtx) + if len(msg.SendEnabled) > 0 { + k.SetAllSendEnabled(ctx, msg.SendEnabled) +} + if len(msg.UseDefaultFor) > 0 { + k.DeleteSendEnabled(ctx, msg.UseDefaultFor...) +} + +return &types.MsgSetSendEnabledResponse{ +}, nil +} +``` + +`msgServer` methods can retrieve the `sdk.Context` from the `context.Context` parameter using the `sdk.UnwrapSDKContext` method: + +```go expandable +package keeper + +import ( + + "context" + "github.com/armon/go-metrics" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +type msgServer struct { + Keeper +} + +var _ types.MsgServer = msgServer{ +} + +/ NewMsgServerImpl returns an implementation of the bank MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + var ( + from, to []byte + err error + ) + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !msg.Amount.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if !msg.Amount.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(goCtx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + if len(msg.Inputs) == 0 { + return nil, types.ErrNoInputs +} + if len(msg.Inputs) != 1 { + return nil, types.ErrMultipleSenders +} + if len(msg.Outputs) == 0 { + return nil, types.ErrNoOutputs +} + if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err 
!= nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + + / NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + if base, ok := k.Keeper.(BaseKeeper); ok { + accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) + if err != nil { + return nil, err +} + if k.BlockedAddr(accAddr) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(goCtx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) +} + if err := req.Params.Validate(); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.SetParams(ctx, req.Params); err != nil { + return nil, err +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} + +func (k msgServer) + +SetSendEnabled(goCtx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { + if k.GetAuthority() != msg.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) +} + seen := map[string]bool{ +} + for _, se := range msg.SendEnabled { + if _, alreadySeen := seen[se.Denom]; alreadySeen { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) +} + +seen[se.Denom] = true + if err := se.Validate(); err != nil { + return 
nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err)
+}
+
+}
+ for _, denom := range msg.UseDefaultFor {
+ if err := sdk.ValidateDenom(denom); err != nil {
+ return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err)
+}
+
+}
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ if len(msg.SendEnabled) > 0 {
+ k.SetAllSendEnabled(ctx, msg.SendEnabled)
+}
+ if len(msg.UseDefaultFor) > 0 {
+ k.DeleteSendEnabled(ctx, msg.UseDefaultFor...)
+}
+
+return &types.MsgSetSendEnabledResponse{
+}, nil
+}
+```
+
+`sdk.Msg` processing usually follows these three steps:
+
+### Validation
+
+The message server must perform all required validation (both _stateful_ and _stateless_) to make sure the `message` is valid.
+The `signer` is charged for the gas cost of this validation.
+
+For example, a `msgServer` method for a `transfer` message should check that the sending account has enough funds to actually perform the transfer.
+
+It is recommended to implement all validation checks in a separate function that takes the relevant state values as arguments; this makes the checks easy to unit test. As expected, expensive validation functions charge additional gas. Example:
+
+```go
+func ValidateMsgA(msg MsgA, now time.Time, gm GasMeter) error {
+	if now.After(msg.Expire) {
+		return sdkerrors.ErrInvalidRequest.Wrap("msg expired")
+	}
+	gm.ConsumeGas(1000, "signature verification")
+	return signatureVerification(msg.Prover, msg.Data)
+}
+```
+
+
+  Previously, the `ValidateBasic` method was used to perform simple and
+  stateless validation checks. This way of validating is deprecated, which means
+  the `msgServer` must now perform all validation checks.
+
+
+### State Transition
+
+After the validation is successful, the `msgServer` method uses the [`keeper`](/docs/sdk/next/documentation/module-system/keeper) functions to access the state and perform a state transition.
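The validate-then-transition pattern described above can be sketched in a self-contained way. The message and keeper types below (`MsgDeposit`, `miniKeeper`) are illustrative stand-ins, not Cosmos SDK APIs; a real handler would operate on the `sdk.Context` and module store instead of a plain map:

```go
package main

import (
	"errors"
	"fmt"
)

// MsgDeposit is a hypothetical message; in a real module this would be a
// protobuf-generated type implementing sdk.Msg.
type MsgDeposit struct {
	Depositor string
	Amount    int64
}

// miniKeeper stands in for a module keeper; the map stands in for the KVStore.
type miniKeeper struct {
	balances map[string]int64
}

// HandleDeposit follows the three steps: validate, transition state, respond.
func (k *miniKeeper) HandleDeposit(msg MsgDeposit) error {
	// 1. Validation (stateless checks here; stateful checks would read the store).
	if msg.Depositor == "" {
		return errors.New("empty depositor address")
	}
	if msg.Amount <= 0 {
		return errors.New("amount must be positive")
	}
	// 2. State transition via the keeper.
	k.balances[msg.Depositor] += msg.Amount
	// 3. Response (a real handler would also emit events and return a proto response).
	return nil
}

func main() {
	mk := &miniKeeper{balances: map[string]int64{}}
	if err := mk.HandleDeposit(MsgDeposit{Depositor: "alice", Amount: 10}); err != nil {
		panic(err)
	}
	fmt.Println(mk.balances["alice"])
	badErr := mk.HandleDeposit(MsgDeposit{Amount: 5})
	fmt.Println(badErr != nil) // validation rejects the empty depositor
}
```

The key property mirrored here is that no state is touched until every validation check has passed, so a rejected message leaves the store unchanged.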
+ +### Events + +Before returning, `msgServer` methods generally emit one or more [events](/docs/sdk/next/api-reference/events-streaming/events) by using the `EventManager` held in the `ctx`. Use the new `EmitTypedEvent` function that uses protobuf-based event types: + +```go +ctx.EventManager().EmitTypedEvent( + &group.EventABC{ + Key1: Value1, Key2: Value2 +}) +``` + +or the older `EmitEvent` function: + +```go +ctx.EventManager().EmitEvent( + sdk.NewEvent( + eventType, / e.g. sdk.EventTypeMessage for a message, types.CustomEventType for a custom event defined in the module + sdk.NewAttribute(key1, value1), + sdk.NewAttribute(key2, value2), + ), +) +``` + +These events are relayed back to the underlying consensus engine and can be used by service providers to implement services around the application. Click [here](/docs/sdk/next/api-reference/events-streaming/events) to learn more about events. + +The invoked `msgServer` method returns a `proto.Message` response and an `error`. These return values are then wrapped into an `*sdk.Result` or an `error` using `sdk.WrapServiceResult(ctx context.Context, res proto.Message, err error)`: + +```go expandable +package baseapp + +import ( + + "context" + "fmt" + + gogogrpc "github.com/cosmos/gogoproto/grpc" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc" + + errorsmod "cosmossdk.io/errors" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ MessageRouter ADR 031 request type routing +/ docs/sdk/next/documentation/legacy/adr-comprehensive +type MessageRouter interface { + Handler(msg sdk.Msg) + +MsgServiceHandler + HandlerByTypeURL(typeURL string) + +MsgServiceHandler +} + +/ MsgServiceRouter routes fully-qualified Msg service methods to their handler. 
+type MsgServiceRouter struct { + interfaceRegistry codectypes.InterfaceRegistry + routes map[string]MsgServiceHandler + circuitBreaker CircuitBreaker +} + +var _ gogogrpc.Server = &MsgServiceRouter{ +} + +/ NewMsgServiceRouter creates a new MsgServiceRouter. +func NewMsgServiceRouter() *MsgServiceRouter { + return &MsgServiceRouter{ + routes: map[string]MsgServiceHandler{ +}, +} +} + +func (msr *MsgServiceRouter) + +SetCircuit(cb CircuitBreaker) { + msr.circuitBreaker = cb +} + +/ MsgServiceHandler defines a function type which handles Msg service message. +type MsgServiceHandler = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) + +/ Handler returns the MsgServiceHandler for a given msg or nil if not found. +func (msr *MsgServiceRouter) + +Handler(msg sdk.Msg) + +MsgServiceHandler { + return msr.routes[sdk.MsgTypeURL(msg)] +} + +/ HandlerByTypeURL returns the MsgServiceHandler for a given query route path or nil +/ if not found. +func (msr *MsgServiceRouter) + +HandlerByTypeURL(typeURL string) + +MsgServiceHandler { + return msr.routes[typeURL] +} + +/ RegisterService implements the gRPC Server.RegisterService method. sd is a gRPC +/ service description, handler is an object which implements that gRPC service. +/ +/ This function PANICs: +/ - if it is called before the service `Msg`s have been registered using +/ RegisterInterfaces, +/ - or if a service is being registered twice. +func (msr *MsgServiceRouter) + +RegisterService(sd *grpc.ServiceDesc, handler interface{ +}) { + / Adds a top-level query handler based on the gRPC service name. + for _, method := range sd.Methods { + fqMethod := fmt.Sprintf("/%s/%s", sd.ServiceName, method.MethodName) + methodHandler := method.Handler + + var requestTypeName string + + / NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. + / This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. 
+ / We use a no-op interceptor to avoid actually calling into the handler itself. + _, _ = methodHandler(nil, context.Background(), func(i interface{ +}) + +error { + msg, ok := i.(sdk.Msg) + if !ok { + / We panic here because there is no other alternative and the app cannot be initialized correctly + / this should only happen if there is a problem with code generation in which case the app won't + / work correctly anyway. + panic(fmt.Errorf("unable to register service method %s: %T does not implement sdk.Msg", fqMethod, i)) +} + +requestTypeName = sdk.MsgTypeURL(msg) + +return nil +}, noopInterceptor) + + / Check that the service Msg fully-qualified method name has already + / been registered (via RegisterInterfaces). If the user registers a + / service without registering according service Msg type, there might be + / some unexpected behavior down the road. Since we can't return an error + / (`Server.RegisterService` interface restriction) + +we panic (at startup). + reqType, err := msr.interfaceRegistry.Resolve(requestTypeName) + if err != nil || reqType == nil { + panic( + fmt.Errorf( + "type_url %s has not been registered yet. "+ + "Before calling RegisterService, you must register all interfaces by calling the `RegisterInterfaces` "+ + "method on module.BasicManager. Each module should call `msgservice.RegisterMsgServiceDesc` inside its "+ + "`RegisterInterfaces` method with the `_Msg_serviceDesc` generated by proto-gen", + requestTypeName, + ), + ) +} + + / Check that each service is only registered once. If a service is + / registered more than once, then we should error. Since we can't + / return an error (`Server.RegisterService` interface restriction) + +we + / panic (at startup). + _, found := msr.routes[requestTypeName] + if found { + panic( + fmt.Errorf( + "msg service %s has already been registered. Please make sure to only register each service once. 
"+ + "This usually means that there are conflicting modules registering the same msg service", + fqMethod, + ), + ) +} + +msr.routes[requestTypeName] = func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + interceptor := func(goCtx context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{ +}, error) { + goCtx = context.WithValue(goCtx, sdk.SdkContextKey, ctx) + +return handler(goCtx, msg) +} + if m, ok := msg.(sdk.HasValidateBasic); ok { + if err := m.ValidateBasic(); err != nil { + return nil, err +} + +} + if msr.circuitBreaker != nil { + msgURL := sdk.MsgTypeURL(msg) + +isAllowed, err := msr.circuitBreaker.IsAllowed(ctx, msgURL) + if err != nil { + return nil, err +} + if !isAllowed { + return nil, fmt.Errorf("circuit breaker disables execution of this message: %s", msgURL) +} + +} + + / Call the method handler from the service description with the handler object. + / We don't do any decoding here because the decoding was already done. + res, err := methodHandler(handler, ctx, noopDecoder, interceptor) + if err != nil { + return nil, err +} + +resMsg, ok := res.(proto.Message) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "Expecting proto.Message, got %T", resMsg) +} + +return sdk.WrapServiceResult(ctx, resMsg, err) +} + +} +} + +/ SetInterfaceRegistry sets the interface registry for the router. +func (msr *MsgServiceRouter) + +SetInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) { + msr.interfaceRegistry = interfaceRegistry +} + +func noopDecoder(_ interface{ +}) + +error { + return nil +} + +func noopInterceptor(_ context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{ +}, error) { + return nil, nil +} +``` + +This method takes care of marshaling the `res` parameter to protobuf and attaching any events on the `ctx.EventManager()` to the `sdk.Result`. 
+
+```protobuf
+// Result is the union of ResponseFormat and ResponseCheckTx.
+message Result {
+  option (gogoproto.goproto_getters) = false;
+
+  // Data is any data returned from message or handler execution. It MUST be
+  // length prefixed in order to separate data from multiple message executions.
+  // Deprecated. This field is still populated, but prefer msg_response instead
+  // because it also contains the Msg response typeURL.
+  bytes data = 1 [deprecated = true];
+
+  // Log contains the log information from message or handler execution.
+  string log = 2;
+
+  // Events contains a slice of Event objects that were emitted during message
+  // or handler execution.
+  repeated tendermint.abci.Event events = 3 [(gogoproto.nullable) = false];
+
+  // msg_responses contains the Msg handler responses type packed in Anys.
+  //
+  // Since: cosmos-sdk 0.46
+  repeated google.protobuf.Any msg_responses = 4;
+}
+```
+
+This diagram shows the typical structure of a Protobuf `Msg` service and how a message propagates through the module.
+
+![Transaction flow](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/transaction_flow.svg)
+
+## Telemetry
+
+New [telemetry metrics](/docs/sdk/next/api-reference/telemetry-metrics/telemetry) can be created from `msgServer` methods when handling messages.
+ +This is an example from the `x/auth/vesting` module: + +```go expandable +package vesting + +import ( + + "context" + "github.com/armon/go-metrics" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" +) + +type msgServer struct { + keeper.AccountKeeper + types.BankKeeper +} + +/ NewMsgServerImpl returns an implementation of the vesting MsgServer interface, +/ wrapping the corresponding AccountKeeper and BankKeeper. +func NewMsgServerImpl(k keeper.AccountKeeper, bk types.BankKeeper) + +types.MsgServer { + return &msgServer{ + AccountKeeper: k, + BankKeeper: bk +} +} + +var _ types.MsgServer = msgServer{ +} + +func (s msgServer) + +CreateVestingAccount(goCtx context.Context, msg *types.MsgCreateVestingAccount) (*types.MsgCreateVestingAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + if msg.EndTime <= 0 { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "invalid end time") +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := s.BankKeeper.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if s.BankKeeper.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + if acc := s.AccountKeeper.GetAccount(ctx, to); acc != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, 
"account %s already exists", msg.ToAddress) +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + baseVestingAccount := types.NewBaseVestingAccount(baseAccount, msg.Amount.Sort(), msg.EndTime) + +var vestingAccount sdk.AccountI + if msg.Delayed { + vestingAccount = types.NewDelayedVestingAccountRaw(baseVestingAccount) +} + +else { + vestingAccount = types.NewContinuousVestingAccountRaw(baseVestingAccount, ctx.BlockTime().Unix()) +} + +s.AccountKeeper.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_vesting_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + if err = s.BankKeeper.SendCoins(ctx, from, to, msg.Amount); err != nil { + return nil, err +} + +return &types.MsgCreateVestingAccountResponse{ +}, nil +} + +func (s msgServer) + +CreatePermanentLockedAccount(goCtx context.Context, msg *types.MsgCreatePermanentLockedAccount) (*types.MsgCreatePermanentLockedAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := s.BankKeeper.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if s.BankKeeper.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + if acc := s.AccountKeeper.GetAccount(ctx, to); 
acc != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + vestingAccount := types.NewPermanentLockedAccount(baseAccount, msg.Amount) + +s.AccountKeeper.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_permanent_locked_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + if err = s.BankKeeper.SendCoins(ctx, from, to, msg.Amount); err != nil { + return nil, err +} + +return &types.MsgCreatePermanentLockedAccountResponse{ +}, nil +} + +func (s msgServer) + +CreatePeriodicVestingAccount(goCtx context.Context, msg *types.MsgCreatePeriodicVestingAccount) (*types.MsgCreatePeriodicVestingAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if msg.StartTime < 1 { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "invalid start time of %d, length must be greater than 0", msg.StartTime) +} + +var totalCoins sdk.Coins + for i, period := range msg.VestingPeriods { + if period.Length < 1 { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "invalid period length of %d in period %d, length must be greater than 0", period.Length, i) +} + +totalCoins = totalCoins.Add(period.Amount...) 
+} + ctx := sdk.UnwrapSDKContext(goCtx) + if acc := s.AccountKeeper.GetAccount(ctx, to); acc != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + if err := s.BankKeeper.IsSendEnabledCoins(ctx, totalCoins...); err != nil { + return nil, err +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + vestingAccount := types.NewPeriodicVestingAccount(baseAccount, totalCoins.Sort(), msg.StartTime, msg.VestingPeriods) + +s.AccountKeeper.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range totalCoins { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_periodic_vesting_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + if err = s.BankKeeper.SendCoins(ctx, from, to, totalCoins); err != nil { + return nil, err +} + +return &types.MsgCreatePeriodicVestingAccountResponse{ +}, nil +} + +func validateAmount(amount sdk.Coins) + +error { + if !amount.IsValid() { + return sdkerrors.ErrInvalidCoins.Wrap(amount.String()) +} + if !amount.IsAllPositive() { + return sdkerrors.ErrInvalidCoins.Wrap(amount.String()) +} + +return nil +} +``` diff --git a/docs/sdk/next/documentation/module-system/nft.mdx b/docs/sdk/next/documentation/module-system/nft.mdx new file mode 100644 index 00000000..4455ad67 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/nft.mdx @@ -0,0 +1,89 @@ +--- +title: "`x/nft`" +--- + +**DEPRECATED**: This package is deprecated and will be removed in the next major release. The `x/nft` module will be moved to a separate repo `github.com/cosmos/cosmos-sdk-legacy`. 
+
+## Contents
+
+- [Concepts](#concepts)
+  - [Class](#class)
+  - [NFT](#nft)
+- [State](#state)
+  - [Class](#class-1)
+  - [NFT](#nft-1)
+  - [NFTOfClassByOwner](#nftofclassbyowner)
+  - [Owner](#owner)
+  - [TotalSupply](#totalsupply)
+- [Messages](#messages)
+  - [MsgSend](#msgsend)
+- [Events](#events)
+
+## Abstract
+
+`x/nft` is an implementation of a Cosmos SDK module, per [ADR 43](/docs/common/pages/adr-comprehensive#adr-043-nft-module), that allows you to create NFT classes and to create, transfer, update, and query NFTs by integrating the module. It is fully compatible with the ERC721 specification.
+
+## Concepts
+
+### Class
+
+The `x/nft` module defines a `Class` struct that describes the common characteristics of a class of NFTs. Under a class, you can create a variety of NFTs, which makes a class the equivalent of an ERC721 contract on Ethereum. The design is defined in [ADR 043](docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+### NFT
+
+NFT stands for Non-Fungible Token. Because NFTs are not interchangeable, they can be used to represent unique things. The NFTs implemented by this module are fully compatible with the Ethereum ERC721 standard.
+
+## State
+
+### Class
+
+A `Class` is mainly composed of `id`, `name`, `symbol`, `description`, `uri`, `uri_hash`, and `data`, where `id` is the unique identifier of the class, similar to an Ethereum ERC721 contract address; the other fields are optional.
+
+- Class: `0x01 | classID | -> ProtocolBuffer(Class)`
+
+### NFT
+
+An NFT is mainly composed of `class_id`, `id`, `uri`, `uri_hash`, and `data`. The tuple (`class_id`, `id`) uniquely identifies an NFT. `uri` and `uri_hash` are optional and identify the off-chain storage location of the NFT, and `data` is an `Any` type.
Chains that use `x/nft` can customize the module by extending this `Any`-typed field. + +- NFT: `0x02 | classID | 0x00 | nftID |-> ProtocolBuffer(NFT)` + +### NFTOfClassByOwner + +NFTOfClassByOwner exists solely to enable querying all nfts held by a given owner within a class, keyed by owner and classID. + +- NFTOfClassByOwner: `0x03 | owner | 0x00 | classID | 0x00 | nftID |-> 0x01` + +### Owner + +Since NFT has no field indicating its owner, an additional key-value pair is used to store the ownership of each nft. When an nft is transferred, this key-value pair is updated accordingly. + +- OwnerKey: `0x04 | classID | 0x00 | nftID |-> owner` + +### TotalSupply + +TotalSupply tracks the number of nfts under a class: a mint under the class increases the supply by one, and a burn decreases it by one. + +- TotalSupply: `0x05 | classID |-> totalSupply` + +## Messages + +In this section we describe the processing of messages for the NFT module. + + + The validation of `ClassID` and `NftID` is left to the app developer.\ The SDK + does not provide any validation for these fields. + + +### MsgSend + +You can use the `MsgSend` message to transfer the ownership of an nft. This is a function provided by the `x/nft` module. Alternatively, you can use the `Transfer` method to implement your own transfer logic, but you need to pay extra attention to transfer permissions. + +The message handling should fail if: + +- the provided `ClassID` does not exist. +- the provided `Id` does not exist. +- the provided `Sender` is not the owner of the nft. + +## Events + +The nft module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.nft.v1beta1).
diff --git a/docs/sdk/next/documentation/module-system/params.mdx b/docs/sdk/next/documentation/module-system/params.mdx new file mode 100644 index 00000000..812979b3 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/params.mdx @@ -0,0 +1,82 @@ +--- +title: '`x/params`' +description: >- + NOTE: x/params is deprecated as of Cosmos SDK v0.53 and will be removed in the + next release. +--- + +NOTE: `x/params` is deprecated as of Cosmos SDK v0.53 and will be removed in the next release. + +## Abstract + +Package params provides a globally available parameter store. + +There are two main types, Keeper and Subspace. Subspace is an isolated namespace for a +paramstore, where keys are prefixed by a preconfigured spacename. Keeper has +permission to access all existing spaces. + +Subspace can be used by individual keepers, which need a private parameter store +that the other keepers cannot modify. The params Keeper can be used to add a route to the `x/gov` router in order to modify any parameter in case a proposal passes. + +The following sections explain how to use the params module for master and user modules. + +## Contents + +* [Keeper](#keeper) +* [Subspace](#subspace) + * [Key](#key) + * [KeyTable](#keytable) + * [ParamSet](#paramset) + +## Keeper + +In the app initialization stage, [subspaces](#subspace) can be allocated for other modules' keepers using `Keeper.Subspace` and are stored in `Keeper.spaces`. Then, those modules can hold a reference to their specific parameter store through `Keeper.GetSubspace`. + +Example: + +```go +type ExampleKeeper struct { + paramSpace paramtypes.Subspace +} + +func (k ExampleKeeper) SetParams(ctx sdk.Context, params types.Params) { + k.paramSpace.SetParamSet(ctx, &params) +} +``` + +## Subspace + +`Subspace` is a prefixed subspace of the parameter store. Each module that uses the +parameter store takes a `Subspace` to isolate access permissions. + +### Key + +Parameter keys are human-readable alphanumeric strings.
A parameter for the key +`"ExampleParameter"` is stored under `[]byte("SubspaceName" + "/" + "ExampleParameter")`, +where `"SubspaceName"` is the name of the subspace. + +Subkeys are secondary parameter keys that are used along with a primary parameter key. +Subkeys can be used for grouping or dynamic parameter key generation during runtime. + +### KeyTable + +All of the parameter keys that will be used should be registered at compile +time. `KeyTable` is essentially a `map[string]attribute`, where the `string` is a parameter key. + +Currently, `attribute` consists of a `reflect.Type`, which indicates the parameter +type used to check that a provided key and value are compatible and registered, as well as a function `ValueValidatorFn` to validate values. + +Only primary keys have to be registered on the `KeyTable`. Subkeys inherit the +attribute of the primary key. + +### ParamSet + +Modules often define parameters as a proto message. The generated struct can implement the +`ParamSet` interface to be used with the following methods: + +* `KeyTable.RegisterParamSet()`: registers all parameters in the struct +* `Subspace.{Get, Set}ParamSet()`: Get into & Set from the struct + +The implementer should be a pointer in order to use `GetParamSet()`. diff --git a/docs/sdk/next/documentation/module-system/preblock.mdx b/docs/sdk/next/documentation/module-system/preblock.mdx new file mode 100644 index 00000000..d102b9af --- /dev/null +++ b/docs/sdk/next/documentation/module-system/preblock.mdx @@ -0,0 +1,31 @@ +--- +title: PreBlocker +--- + +## Synopsis + +`PreBlocker` is an optional method that module developers can implement in their modules. It is triggered before [`BeginBlock`](/docs/sdk/next/documentation/application-framework/baseapp#beginblock).
+ + +**Pre-requisite Readings** + +- [Module Manager](/docs/sdk/next/documentation/module-system/module-manager) + + + +## PreBlocker + +There are two semantics around the new lifecycle method: + +- It runs before the `BeginBlocker` of all modules +- It can modify consensus parameters in storage, and signal the caller through the return value. + +When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameters in the deliver context: + +``` +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams()) +``` + +The new ctx must be passed to all the other lifecycle methods. + +{/* TODO: leaving this here to update docs with core api changes */} diff --git a/docs/sdk/next/documentation/module-system/protocolpool.mdx b/docs/sdk/next/documentation/module-system/protocolpool.mdx new file mode 100644 index 00000000..33d2e0d2 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/protocolpool.mdx @@ -0,0 +1,694 @@ +--- +title: '`x/protocolpool`' +--- + +## Concepts + +`x/protocolpool` is a supplemental Cosmos SDK module that handles functionality for community pool funds. The module provides a separate module account for the community pool making it easier to track the pool assets. Starting with v0.53 of the Cosmos SDK, community funds can be tracked using this module instead of the `x/distribution` module. Funds are migrated from the `x/distribution` module's community pool to `x/protocolpool`'s module account automatically. + +This module is `supplemental`; it is not required to run a Cosmos SDK chain. `x/protocolpool` enhances the community pool functionality provided by `x/distribution` and enables custom modules to further extend the community pool. 
+ +Note: *as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool.* + +## Usage Limitations + +The following `x/distribution` handlers will now return an error when the `protocolpool` module is used with `x/distribution`: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents. + +## State Transitions + +### FundCommunityPool + +FundCommunityPool can be called by any valid account to send funds to the `x/protocolpool` module account. + +```protobuf + / FundCommunityPool defines a method to allow an account to directly + / fund the community pool. + rpc FundCommunityPool(MsgFundCommunityPool) returns (MsgFundCommunityPoolResponse); +``` + +### CommunityPoolSpend + +CommunityPoolSpend can be called by the module authority (default governance module account) or any account with authorization to spend funds from the `x/protocolpool` module account to a receiver address. + +```protobuf + / CommunityPoolSpend defines a governance operation for sending tokens from + / the community pool in the x/protocolpool module to another account, which + / could be the governance module itself. The authority is defined in the + / keeper. + rpc CommunityPoolSpend(MsgCommunityPoolSpend) returns (MsgCommunityPoolSpendResponse); +``` + +### CreateContinuousFund + +CreateContinuousFund is a message used to initiate a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only on withdraw request for the recipient. The fund distribution continues until expiry time is reached or continuous fund request is canceled. +NOTE: This feature is designed to work with the SDK's default bond denom. 
+ +```protobuf + / CreateContinuousFund defines a method to distribute a percentage of funds to an address continuously. + / This ContinuousFund can be indefinite or run until a given expiry time. + / Funds come from validator block rewards from x/distribution, but may also come from + / any user who funds the ProtocolPoolEscrow module account directly through x/bank. + rpc CreateContinuousFund(MsgCreateContinuousFund) returns (MsgCreateContinuousFundResponse); +``` + +### CancelContinuousFund + +CancelContinuousFund is a message used to cancel an existing continuous fund proposal for a specific recipient. Cancelling a continuous fund stops further distribution of funds, and the state object is removed from storage. + +```protobuf + / CancelContinuousFund defines a method for cancelling continuous fund. + rpc CancelContinuousFund(MsgCancelContinuousFund) returns (MsgCancelContinuousFundResponse); +``` + +## Messages + +### MsgFundCommunityPool + +This message sends coins directly from the sender to the community pool. + + +If you know the `x/protocolpool` module account address, you can directly use bank `send` transaction instead. + + +```protobuf +message MsgFundCommunityPool { + option (cosmos.msg.v1.signer) = "depositor"; + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + string depositor = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + repeated cosmos.base.v1beta1.Coin amount = 2 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; +} + +``` + +* The msg will fail if the amount cannot be transferred from the sender to the `x/protocolpool` module account. 
+ +```go +func (k Keeper) + +FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) + +error { + return k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount) +} +``` + +### MsgCommunityPoolSpend + +This message distributes funds from the `x/protocolpool` module account to the recipient using `DistributeFromCommunityPool` keeper method. + +```protobuf +// pool to another account. This message is typically executed via a governance +// proposal with the governance module being the executing authority. +message MsgCommunityPoolSpend { + option (cosmos.msg.v1.signer) = "authority"; + + // Authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string recipient = 2; + repeated cosmos.base.v1beta1.Coin amount = 3 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; +} + +``` + +The message will fail under the following conditions: + +* The amount cannot be transferred to the recipient from the `x/protocolpool` module account. +* The `recipient` address is restricted + +```go +func (k Keeper) + +DistributeFromCommunityPool(ctx context.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) + +error { + return k.bankKeeper.SendCoinsFromModuleToAccount(ctx, types.ModuleName, receiveAddr, amount) +} +``` + +### MsgCreateContinuousFund + +This message is used to create a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only on withdraw request for the recipient. This fund distribution continues until expiry time is reached or continuous fund request is canceled. + +```protobuf + string recipient = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +// MsgUpdateParams is the Msg/UpdateParams request type. 
+message MsgUpdateParams { + option (cosmos.msg.v1.signer) = "authority"; + + // authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // params defines the x/protocolpool parameters to update. + // + // NOTE: All parameters must be supplied. + Params params = 2 [(gogoproto.nullable) = false]; +} + +// MsgUpdateParamsResponse defines the response structure for executing a +``` + +The message will fail under the following conditions: + +* The recipient address is empty or restricted. +* The percentage is zero/negative/greater than one. +* The Expiry time is less than the current block time. + + +If two continuous fund proposals to the same address are created, the previous ContinuousFund will be updated with the new ContinuousFund. + + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "cosmossdk.io/math" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) + +type MsgServer struct { + Keeper +} + +var _ types.MsgServer = MsgServer{ +} + +/ NewMsgServerImpl returns an implementation of the protocolpool MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &MsgServer{ + Keeper: keeper +} +} + +func (k MsgServer) + +FundCommunityPool(ctx context.Context, msg *types.MsgFundCommunityPool) (*types.MsgFundCommunityPoolResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +depositor, err := k.authKeeper.AddressCodec().StringToBytes(msg.Depositor) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + + / send funds to community pool module account + if err := k.Keeper.FundCommunityPool(sdkCtx, msg.Amount, depositor); err != nil { + return nil, err +} + +return &types.MsgFundCommunityPoolResponse{ +}, nil +} + +func (k MsgServer) + +CommunityPoolSpend(ctx context.Context, msg *types.MsgCommunityPoolSpend) (*types.MsgCommunityPoolSpendResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + +recipient, err := k.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / distribute funds from community pool module account + if err := k.DistributeFromCommunityPool(sdkCtx, msg.Amount, recipient); err != nil { + return nil, err +} + +sdkCtx.Logger().Debug("transferred from the community pool", "amount", msg.Amount.String(), "recipient", msg.Recipient) + +return &types.MsgCommunityPoolSpendResponse{ +}, nil +} + +func (k MsgServer) + +CreateContinuousFund(ctx context.Context, msg *types.MsgCreateContinuousFund) (*types.MsgCreateContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / deny creation if we know this 
address is blocked from receiving funds + if k.bankKeeper.BlockedAddr(recipient) { + return nil, fmt.Errorf("recipient is blocked in the bank keeper: %s", msg.Recipient) +} + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, err +} + if has { + return nil, fmt.Errorf("continuous fund already exists for recipient %s", msg.Recipient) +} + + / Validate the message fields + err = validateContinuousFund(sdkCtx, *msg) + if err != nil { + return nil, err +} + + / Check if total funds percentage exceeds 100% + / If exceeds, we should not setup continuous fund proposal. + totalStreamFundsPercentage := math.LegacyZeroDec() + +err = k.ContinuousFunds.Walk(sdkCtx, nil, func(key sdk.AccAddress, value types.ContinuousFund) (stop bool, err error) { + totalStreamFundsPercentage = totalStreamFundsPercentage.Add(value.Percentage) + +return false, nil +}) + if err != nil { + return nil, err +} + +totalStreamFundsPercentage = totalStreamFundsPercentage.Add(msg.Percentage) + if totalStreamFundsPercentage.GT(math.LegacyOneDec()) { + return nil, fmt.Errorf("cannot set continuous fund proposal\ntotal funds percentage exceeds 100\ncurrent total percentage: %s", totalStreamFundsPercentage.Sub(msg.Percentage).MulInt64(100).TruncateInt().String()) +} + + / Create continuous fund proposal + cf := types.ContinuousFund{ + Recipient: msg.Recipient, + Percentage: msg.Percentage, + Expiry: msg.Expiry, +} + + / Set continuous fund to the state + err = k.ContinuousFunds.Set(sdkCtx, recipient, cf) + if err != nil { + return nil, err +} + +return &types.MsgCreateContinuousFundResponse{ +}, nil +} + +func (k MsgServer) + +CancelContinuousFund(ctx context.Context, msg *types.MsgCancelContinuousFund) (*types.MsgCancelContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + 
return nil, err +} + canceledHeight := sdkCtx.BlockHeight() + canceledTime := sdkCtx.BlockTime() + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, fmt.Errorf("cannot get continuous fund for recipient %w", err) +} + if !has { + return nil, fmt.Errorf("cannot cancel continuous fund for recipient %s - does not exist", msg.Recipient) +} + if err := k.ContinuousFunds.Remove(sdkCtx, recipient); err != nil { + return nil, fmt.Errorf("failed to remove continuous fund for recipient %s: %w", msg.Recipient, err) +} + +return &types.MsgCancelContinuousFundResponse{ + CanceledTime: canceledTime, + CanceledHeight: uint64(canceledHeight), + Recipient: msg.Recipient, +}, nil +} + +func (k MsgServer) + +UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.GetAuthority()); err != nil { + return nil, err +} + if err := msg.Params.Validate(); err != nil { + return nil, fmt.Errorf("invalid params: %w", err) +} + if err := k.Params.Set(sdkCtx, msg.Params); err != nil { + return nil, fmt.Errorf("failed to set params: %w", err) +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} +``` + +### MsgCancelContinuousFund + +This message is used to cancel an existing continuous fund proposal for a specific recipient. Once canceled, the continuous fund will no longer distribute funds at each begin block, and the state object will be removed. + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/x/protocolpool/proto/cosmos/protocolpool/v1/tx.proto#L136-L161 +``` + +The message will fail under the following conditions: + +* The recipient address is empty or restricted. +* The ContinuousFund for the recipient does not exist. 
+ +```go expandable +package keeper + +import ( + + "context" + "fmt" + "cosmossdk.io/math" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) + +type MsgServer struct { + Keeper +} + +var _ types.MsgServer = MsgServer{ +} + +/ NewMsgServerImpl returns an implementation of the protocolpool MsgServer interface +/ for the provided Keeper. +func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &MsgServer{ + Keeper: keeper +} +} + +func (k MsgServer) + +FundCommunityPool(ctx context.Context, msg *types.MsgFundCommunityPool) (*types.MsgFundCommunityPoolResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +depositor, err := k.authKeeper.AddressCodec().StringToBytes(msg.Depositor) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + + / send funds to community pool module account + if err := k.Keeper.FundCommunityPool(sdkCtx, msg.Amount, depositor); err != nil { + return nil, err +} + +return &types.MsgFundCommunityPoolResponse{ +}, nil +} + +func (k MsgServer) + +CommunityPoolSpend(ctx context.Context, msg *types.MsgCommunityPoolSpend) (*types.MsgCommunityPoolSpendResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + +recipient, err := k.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / distribute funds from community pool module account + if err := k.DistributeFromCommunityPool(sdkCtx, msg.Amount, recipient); err != nil { + return nil, err +} + +sdkCtx.Logger().Debug("transferred from the community pool", "amount", msg.Amount.String(), "recipient", msg.Recipient) + +return &types.MsgCommunityPoolSpendResponse{ 
+}, nil +} + +func (k MsgServer) + +CreateContinuousFund(ctx context.Context, msg *types.MsgCreateContinuousFund) (*types.MsgCreateContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / deny creation if we know this address is blocked from receiving funds + if k.bankKeeper.BlockedAddr(recipient) { + return nil, fmt.Errorf("recipient is blocked in the bank keeper: %s", msg.Recipient) +} + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, err +} + if has { + return nil, fmt.Errorf("continuous fund already exists for recipient %s", msg.Recipient) +} + + / Validate the message fields + err = validateContinuousFund(sdkCtx, *msg) + if err != nil { + return nil, err +} + + / Check if total funds percentage exceeds 100% + / If exceeds, we should not setup continuous fund proposal. 
+ totalStreamFundsPercentage := math.LegacyZeroDec() + +err = k.ContinuousFunds.Walk(sdkCtx, nil, func(key sdk.AccAddress, value types.ContinuousFund) (stop bool, err error) { + totalStreamFundsPercentage = totalStreamFundsPercentage.Add(value.Percentage) + +return false, nil +}) + if err != nil { + return nil, err +} + +totalStreamFundsPercentage = totalStreamFundsPercentage.Add(msg.Percentage) + if totalStreamFundsPercentage.GT(math.LegacyOneDec()) { + return nil, fmt.Errorf("cannot set continuous fund proposal\ntotal funds percentage exceeds 100\ncurrent total percentage: %s", totalStreamFundsPercentage.Sub(msg.Percentage).MulInt64(100).TruncateInt().String()) +} + + / Create continuous fund proposal + cf := types.ContinuousFund{ + Recipient: msg.Recipient, + Percentage: msg.Percentage, + Expiry: msg.Expiry, +} + + / Set continuous fund to the state + err = k.ContinuousFunds.Set(sdkCtx, recipient, cf) + if err != nil { + return nil, err +} + +return &types.MsgCreateContinuousFundResponse{ +}, nil +} + +func (k MsgServer) + +CancelContinuousFund(ctx context.Context, msg *types.MsgCancelContinuousFund) (*types.MsgCancelContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + canceledHeight := sdkCtx.BlockHeight() + canceledTime := sdkCtx.BlockTime() + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, fmt.Errorf("cannot get continuous fund for recipient %w", err) +} + if !has { + return nil, fmt.Errorf("cannot cancel continuous fund for recipient %s - does not exist", msg.Recipient) +} + if err := k.ContinuousFunds.Remove(sdkCtx, recipient); err != nil { + return nil, fmt.Errorf("failed to remove continuous fund for recipient %s: %w", msg.Recipient, err) +} + +return &types.MsgCancelContinuousFundResponse{ + 
CanceledTime: canceledTime, + CanceledHeight: uint64(canceledHeight), + Recipient: msg.Recipient, +}, nil +} + +func (k MsgServer) + +UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.GetAuthority()); err != nil { + return nil, err +} + if err := msg.Params.Validate(); err != nil { + return nil, fmt.Errorf("invalid params: %w", err) +} + if err := k.Params.Set(sdkCtx, msg.Params); err != nil { + return nil, fmt.Errorf("failed to set params: %w", err) +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} +``` + +## Client + +It takes the advantage of `AutoCLI` + +```go expandable +package protocolpool + +import ( + + "fmt" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + poolv1 "cosmossdk.io/api/cosmos/protocolpool/v1" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: poolv1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "CommunityPool", + Use: "community-pool", + Short: "Query the amount of coins in the community pool", + Example: fmt.Sprintf(`%s query protocolpool community-pool`, version.AppName), +}, + { + RpcMethod: "ContinuousFunds", + Use: "continuous-funds", + Short: "Query all continuous funds", + Example: fmt.Sprintf(`%s query protocolpool continuous-funds`, version.AppName), +}, + { + RpcMethod: "ContinuousFund", + Use: "continuous-fund ", + Short: "Query a continuous fund by its recipient address", + Example: fmt.Sprintf(`%s query protocolpool continuous-fund cosmos1...`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "recipient" +}}, +}, +}, +}, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: 
poolv1.Msg_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "FundCommunityPool", + Use: "fund-community-pool ", + Short: "Funds the community pool with the specified amount", + Example: fmt.Sprintf(`%s tx protocolpool fund-community-pool 100uatom --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "amount" +}}, +}, + { + RpcMethod: "CreateContinuousFund", + Use: "create-continuous-fund ", + Short: "Create continuous fund for a recipient with optional expiry", + Example: fmt.Sprintf(`%s tx protocolpool create-continuous-fund cosmos1... 0.2 2023-11-31T12:34:56.789Z --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "recipient" +}, + { + ProtoField: "percentage" +}, + { + ProtoField: "expiry", + Optional: true +}, +}, + GovProposal: true, +}, + { + RpcMethod: "CancelContinuousFund", + Use: "cancel-continuous-fund ", + Short: "Cancel continuous fund for a specific recipient", + Example: fmt.Sprintf(`%s tx protocolpool cancel-continuous-fund cosmos1... --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "recipient" +}, +}, + GovProposal: true, +}, + { + RpcMethod: "UpdateParams", + Use: "update-params-proposal ", + Short: "Submit a proposal to update protocolpool module params. 
Note: the entire params must be provided.", + Example: fmt.Sprintf(`%s tx protocolpool update-params-proposal '{ "enabled_distribution_denoms": ["stake", "foo"] +}'`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "params" +}}, + GovProposal: true, +}, +}, +}, +} +} +``` diff --git a/docs/sdk/next/documentation/module-system/query-services.mdx b/docs/sdk/next/documentation/module-system/query-services.mdx new file mode 100644 index 00000000..1560fd0c --- /dev/null +++ b/docs/sdk/next/documentation/module-system/query-services.mdx @@ -0,0 +1,390 @@ +--- +title: Query Services +--- + +## Synopsis + +A Protobuf Query service processes [`queries`](/docs/sdk/next/documentation/module-system/messages-and-queries#queries). Query services are specific to the module in which they are defined, and only process `queries` defined within said module. They are called from `BaseApp`'s [`Query` method](/docs/sdk/next/documentation/application-framework/baseapp#query). + + +**Pre-requisite Readings** + +- [Module Manager](/docs/sdk/next/documentation/module-system/module-manager) +- [Messages and Queries](/docs/sdk/next/documentation/module-system/messages-and-queries) + + + +## Implementation of a module query service + +### gRPC Service + +When defining a Protobuf `Query` service, a `QueryServer` interface is generated for each module with all the service methods: + +```go +type QueryServer interface { + QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error) + QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error) +} +``` + +These custom query methods should be implemented by a module's keeper, typically in `./keeper/grpc_query.go`. The first parameter of these methods is a generic `context.Context`. Therefore, the Cosmos SDK provides a function `sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided +`context.Context`.
Here's an example implementation for the bank module:

```go expandable
package keeper

import (
	"context"

	"cosmossdk.io/collections"
	"cosmossdk.io/math"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	"cosmossdk.io/store/prefix"
	"github.com/cosmos/cosmos-sdk/runtime"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/query"
	"github.com/cosmos/cosmos-sdk/x/bank/types"
)

type Querier struct {
	BaseKeeper
}

var _ types.QueryServer = BaseKeeper{}

func NewQuerier(keeper *BaseKeeper) Querier {
	return Querier{BaseKeeper: *keeper}
}

// Balance implements the Query/Balance gRPC method
func (k BaseKeeper) Balance(ctx context.Context, req *types.QueryBalanceRequest) (*types.QueryBalanceResponse, error) {
	if req == nil {
		return nil, status.Error(codes.InvalidArgument, "empty request")
	}
	if err := sdk.ValidateDenom(req.Denom); err != nil {
		return nil, status.Error(codes.InvalidArgument, err.Error())
	}
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	address, err := k.ak.AddressCodec().StringToBytes(req.Address)
	if err != nil {
		return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error())
	}
	balance := k.GetBalance(sdkCtx, address, req.Denom)

	return &types.QueryBalanceResponse{Balance: &balance}, nil
}

// AllBalances implements the Query/AllBalances gRPC method
func (k BaseKeeper) AllBalances(ctx context.Context, req *types.QueryAllBalancesRequest) (*types.QueryAllBalancesResponse, error) {
	if req == nil {
		return nil, status.Error(codes.InvalidArgument, "empty request")
	}
	addr, err := k.ak.AddressCodec().StringToBytes(req.Address)
	if err != nil {
		return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error())
	}
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	balances := sdk.NewCoins()

	_, pageRes, err := query.CollectionFilteredPaginate(ctx, k.Balances, req.Pagination, func(key collections.Pair[sdk.AccAddress, string], value math.Int) (include bool, err error) {
		denom := key.K2()
		if req.ResolveDenom {
			if metadata, ok := k.GetDenomMetaData(sdkCtx, denom); ok {
				denom = metadata.Display
			}
		}
		balances = append(balances, sdk.NewCoin(denom, value))
		return false, nil // we don't include results because we're appending them here.
	}, query.WithCollectionPaginationPairPrefix[sdk.AccAddress, string](addr))
	if err != nil {
		return nil, status.Errorf(codes.InvalidArgument, "paginate: %v", err)
	}

	return &types.QueryAllBalancesResponse{Balances: balances, Pagination: pageRes}, nil
}

// SpendableBalances implements a gRPC query handler for retrieving an account's
// spendable balances.
func (k BaseKeeper) SpendableBalances(ctx context.Context, req *types.QuerySpendableBalancesRequest) (*types.QuerySpendableBalancesResponse, error) {
	if req == nil {
		return nil, status.Error(codes.InvalidArgument, "empty request")
	}
	addr, err := k.ak.AddressCodec().StringToBytes(req.Address)
	if err != nil {
		return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error())
	}
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	balances := sdk.NewCoins()
	zeroAmt := math.ZeroInt()

	_, pageRes, err := query.CollectionFilteredPaginate(ctx, k.Balances, req.Pagination, func(key collections.Pair[sdk.AccAddress, string], _ math.Int) (include bool, err error) {
		balances = append(balances, sdk.NewCoin(key.K2(), zeroAmt))
		return false, nil // not including results as they're appended here
	}, query.WithCollectionPaginationPairPrefix[sdk.AccAddress, string](addr))
	if err != nil {
		return nil, status.Errorf(codes.InvalidArgument, "paginate: %v", err)
	}

	result := sdk.NewCoins()
	spendable := k.SpendableCoins(sdkCtx, addr)
	for _, c := range balances {
		result = append(result, sdk.NewCoin(c.Denom, spendable.AmountOf(c.Denom)))
	}

	return &types.QuerySpendableBalancesResponse{Balances: result, Pagination: pageRes}, nil
}

// SpendableBalanceByDenom implements a gRPC query handler for retrieving an account's
// spendable balance for a specific denom.
func (k BaseKeeper) SpendableBalanceByDenom(ctx context.Context, req *types.QuerySpendableBalanceByDenomRequest) (*types.QuerySpendableBalanceByDenomResponse, error) {
	if req == nil {
		return nil, status.Error(codes.InvalidArgument, "empty request")
	}
	addr, err := k.ak.AddressCodec().StringToBytes(req.Address)
	if err != nil {
		return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error())
	}
	if err := sdk.ValidateDenom(req.Denom); err != nil {
		return nil, status.Error(codes.InvalidArgument, err.Error())
	}
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	spendable := k.SpendableCoin(sdkCtx, addr, req.Denom)

	return &types.QuerySpendableBalanceByDenomResponse{Balance: &spendable}, nil
}

// TotalSupply implements the Query/TotalSupply gRPC method
func (k BaseKeeper) TotalSupply(ctx context.Context, req *types.QueryTotalSupplyRequest) (*types.QueryTotalSupplyResponse, error) {
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	totalSupply, pageRes, err := k.GetPaginatedTotalSupply(sdkCtx, req.Pagination)
	if err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}

	return &types.QueryTotalSupplyResponse{Supply: totalSupply, Pagination: pageRes}, nil
}

// SupplyOf implements the Query/SupplyOf gRPC method
func (k BaseKeeper) SupplyOf(c context.Context, req *types.QuerySupplyOfRequest) (*types.QuerySupplyOfResponse, error) {
	if req == nil {
		return nil, status.Error(codes.InvalidArgument, "empty request")
	}
	if err := sdk.ValidateDenom(req.Denom); err != nil {
		return nil, status.Error(codes.InvalidArgument, err.Error())
	}
	ctx := sdk.UnwrapSDKContext(c)
	supply := k.GetSupply(ctx, req.Denom)

	return &types.QuerySupplyOfResponse{Amount: sdk.NewCoin(req.Denom, supply.Amount)}, nil
}

// Params implements the gRPC service handler for querying x/bank parameters.
func (k BaseKeeper) Params(ctx context.Context, req *types.QueryParamsRequest) (*types.QueryParamsResponse, error) {
	if req == nil {
		return nil, status.Errorf(codes.InvalidArgument, "empty request")
	}
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	params := k.GetParams(sdkCtx)

	return &types.QueryParamsResponse{Params: params}, nil
}

// DenomsMetadata implements Query/DenomsMetadata gRPC method.
func (k BaseKeeper) DenomsMetadata(c context.Context, req *types.QueryDenomsMetadataRequest) (*types.QueryDenomsMetadataResponse, error) {
	if req == nil {
		return nil, status.Errorf(codes.InvalidArgument, "empty request")
	}
	kvStore := runtime.KVStoreAdapter(k.storeService.OpenKVStore(c))
	store := prefix.NewStore(kvStore, types.DenomMetadataPrefix)
	metadatas := []types.Metadata{}

	pageRes, err := query.Paginate(store, req.Pagination, func(_, value []byte) error {
		var metadata types.Metadata
		k.cdc.MustUnmarshal(value, &metadata)
		metadatas = append(metadatas, metadata)
		return nil
	})
	if err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}

	return &types.QueryDenomsMetadataResponse{
		Metadatas:  metadatas,
		Pagination: pageRes,
	}, nil
}

// DenomMetadata implements Query/DenomMetadata gRPC method.
func (k BaseKeeper) DenomMetadata(c context.Context, req *types.QueryDenomMetadataRequest) (*types.QueryDenomMetadataResponse, error) {
	if req == nil {
		return nil, status.Errorf(codes.InvalidArgument, "empty request")
	}
	if err := sdk.ValidateDenom(req.Denom); err != nil {
		return nil, status.Error(codes.InvalidArgument, err.Error())
	}
	ctx := sdk.UnwrapSDKContext(c)
	metadata, found := k.GetDenomMetaData(ctx, req.Denom)
	if !found {
		return nil, status.Errorf(codes.NotFound, "client metadata for denom %s", req.Denom)
	}

	return &types.QueryDenomMetadataResponse{
		Metadata: metadata,
	}, nil
}

func (k BaseKeeper) DenomOwners(
	goCtx context.Context,
	req *types.QueryDenomOwnersRequest,
) (*types.QueryDenomOwnersResponse, error) {
	if req == nil {
		return nil, status.Errorf(codes.InvalidArgument, "empty request")
	}
	if err := sdk.ValidateDenom(req.Denom); err != nil {
		return nil, status.Error(codes.InvalidArgument, err.Error())
	}

	var denomOwners []*types.DenomOwner

	_, pageRes, err := query.CollectionFilteredPaginate(goCtx, k.Balances.Indexes.Denom, req.Pagination,
		func(key collections.Pair[string, sdk.AccAddress], value collections.NoValue) (include bool, err error) {
			amt, err := k.Balances.Get(goCtx, collections.Join(key.K2(), req.Denom))
			if err != nil {
				return false, err
			}
			denomOwners = append(denomOwners, &types.DenomOwner{
				Address: key.K2().String(),
				Balance: sdk.NewCoin(req.Denom, amt),
			})
			return false, nil
		},
		query.WithCollectionPaginationPairPrefix[string, sdk.AccAddress](req.Denom),
	)
	if err != nil {
		return nil, err
	}

	return &types.QueryDenomOwnersResponse{DenomOwners: denomOwners, Pagination: pageRes}, nil
}

func (k BaseKeeper) SendEnabled(goCtx context.Context, req *types.QuerySendEnabledRequest) (*types.QuerySendEnabledResponse, error) {
	if req == nil {
		return nil, status.Errorf(codes.InvalidArgument, "empty request")
	}
	ctx := sdk.UnwrapSDKContext(goCtx)
	resp := &types.QuerySendEnabledResponse{}
	if len(req.Denoms) > 0 {
		for _, denom := range req.Denoms {
			if se, ok := k.getSendEnabled(ctx, denom); ok {
				resp.SendEnabled = append(resp.SendEnabled, types.NewSendEnabled(denom, se))
			}
		}
	} else {
		results, pageResp, err := query.CollectionPaginate[string, bool](ctx, k.BaseViewKeeper.SendEnabled, req.Pagination)
		if err != nil {
			return nil, status.Error(codes.Internal, err.Error())
		}
		for _, r := range results {
			resp.SendEnabled = append(resp.SendEnabled, &types.SendEnabled{
				Denom:   r.Key,
				Enabled: r.Value,
			})
		}
		resp.Pagination = pageResp
	}

	return resp, nil
}
```

### Calling queries from the State Machine

The Cosmos SDK v0.47 introduces a new `cosmos.query.v1.module_query_safe` Protobuf annotation, which is used to state that a query is safe to be called from within the state machine, for example:

- a Keeper's query function can be called from another module's Keeper,
- ADR-033 intermodule query calls,
- CosmWasm contracts can also directly interact with these queries.

If the `module_query_safe` annotation is set to `true`, it means:

- The query is deterministic: given a block height, it will return the same response upon multiple calls, and doesn't introduce any state-machine-breaking changes across SDK patch versions.
- Gas consumption never fluctuates across calls and across patch versions.

If you are a module developer and want to use the `module_query_safe` annotation for your own query, you have to ensure the following:

- the query is deterministic and won't introduce state-machine-breaking changes without coordinated upgrades
- it has its gas tracked, to avoid the attack vector where no gas is accounted for on potentially high-computation queries.
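As a sketch of where the annotation lives, it is set per-RPC in the module's `query.proto`; the service and message names below are illustrative placeholders, not a real module definition:

```protobuf
import "cosmos/query/v1/query.proto";

service Query {
  // This query is declared safe to call from within the state machine.
  rpc Params(QueryParamsRequest) returns (QueryParamsResponse) {
    option (cosmos.query.v1.module_query_safe) = true;
  }
}
```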
diff --git a/docs/sdk/next/documentation/module-system/slashing.mdx b/docs/sdk/next/documentation/module-system/slashing.mdx new file mode 100644 index 00000000..d1999032 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/slashing.mdx @@ -0,0 +1,858 @@ +--- +title: '`x/slashing`' +description: >- + This section specifies the slashing module of the Cosmos SDK, which implements + functionality first outlined in the Cosmos Whitepaper in June 2016. +--- + +## Abstract + +This section specifies the slashing module of the Cosmos SDK, which implements functionality +first outlined in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in June 2016. + +The slashing module enables Cosmos SDK-based blockchains to disincentivize any attributable action +by a protocol-recognized actor with value at stake by penalizing them ("slashing"). + +Penalties may include, but are not limited to: + +* Burning some amount of their stake +* Removing their ability to vote on future blocks for a period of time. + +This module will be used by the Cosmos Hub, the first hub in the Cosmos ecosystem. + +## Contents + +* [Concepts](#concepts) + * [States](#states) + * [Tombstone Caps](#tombstone-caps) + * [Infraction Timelines](#infraction-timelines) +* [State](#state) + * [Signing Info (Liveness)](#signing-info-liveness) + * [Params](#params) +* [Messages](#messages) + * [Unjail](#unjail) +* [BeginBlock](#beginblock) + * [Liveness Tracking](#liveness-tracking) +* [Hooks](#hooks) +* [Events](#events) +* [Staking Tombstone](#staking-tombstone) +* [Parameters](#parameters) +* [CLI](#cli) + * [Query](#query) + * [Transactions](#transactions) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +### States + +At any given time, there are any number of validators registered in the state +machine. Each block, the top `MaxValidators` (defined by `x/staking`) validators +who are not jailed become *bonded*, meaning that they may propose and vote on +blocks. 
Validators who are *bonded* are *at stake*, meaning that part or all of +their stake and their delegators' stake is at risk if they commit a protocol fault. + +For each of these validators we keep a `ValidatorSigningInfo` record that contains +information pertaining to validator's liveness and other infraction related +attributes. + +### Tombstone Caps + +In order to mitigate the impact of initially likely categories of non-malicious +protocol faults, the Cosmos Hub implements for each validator +a *tombstone* cap, which only allows a validator to be slashed once for a double +sign fault. For example, if you misconfigure your HSM and double-sign a bunch of +old blocks, you'll only be punished for the first double-sign (and then immediately tombstoned). This will still be quite expensive and desirable to avoid, but tombstone caps +somewhat blunt the economic impact of unintentional misconfiguration. + +Liveness faults do not have caps, as they can't stack upon each other. Liveness bugs are "detected" as soon as the infraction occurs, and the validators are immediately put in jail, so it is not possible for them to commit multiple liveness faults without unjailing in between. + +### Infraction Timelines + +To illustrate how the `x/slashing` module handles submitted evidence through +CometBFT consensus, consider the following examples: + +**Definitions**: + +*\[* : timeline start\ +*]* : timeline end\ +*Cn* : infraction `n` committed\ +*Dn* : infraction `n` discovered\ +*Vb* : validator bonded\ +*Vu* : validator unbonded + +#### Single Double Sign Infraction + +\[----------C1----D1,Vu-----] + +A single infraction is committed then later discovered, at which point the +validator is unbonded and slashed at the full amount for the infraction. + +#### Multiple Double Sign Infractions + +\[----------C1--C2---C3---D1,D2,D3Vu-----] + +Multiple infractions are committed and then later discovered, at which point the +validator is jailed and slashed for only one infraction. 
Because the validator is also tombstoned, they cannot rejoin the validator set.

## State

### Signing Info (Liveness)

Every block includes a set of precommits by the validators for the previous block, known as the `LastCommitInfo` provided by CometBFT. A `LastCommitInfo` is valid so long as it contains precommits from +2/3 of total voting power.

Proposers are incentivized to include precommits from all validators in the CometBFT `LastCommitInfo` by receiving additional fees proportional to the difference between the voting power included in the `LastCommitInfo` and +2/3 (see [fee distribution](/docs/sdk/next/documentation/module-system/distribution#begin-block)).

```go
type LastCommitInfo struct {
	Round int32
	Votes []VoteInfo
}
```

Validators are penalized for failing to be included in the `LastCommitInfo` for some number of blocks by being automatically jailed, potentially slashed, and unbonded.

Information about a validator's liveness activity is tracked through `ValidatorSigningInfo`. It is indexed in the store as follows:

* ValidatorSigningInfo: `0x01 | ConsAddrLen (1 byte) | ConsAddress -> ProtocolBuffer(ValSigningInfo)`
* MissedBlocksBitArray: `0x02 | ConsAddrLen (1 byte) | ConsAddress | LittleEndianUint64(signArrayIndex) -> VarInt(didMiss)` (varint is a number encoding format)

The first mapping allows us to easily look up the recent signing info for a validator based on the validator's consensus address.

The second mapping (`MissedBlocksBitArray`) acts as a bit-array of size `SignedBlocksWindow` that tells us if the validator missed the block for a given index in the bit-array. The index in the bit-array is given as a little-endian uint64. The result is a `varint` that takes on `0` or `1`, where `0` indicates the validator did not miss (did sign) the corresponding block, and `1` indicates they missed the block (did not sign).

Note that the `MissedBlocksBitArray` is not explicitly initialized up-front.
Keys are added as we progress through the first `SignedBlocksWindow` blocks for a newly bonded validator. The `SignedBlocksWindow` parameter defines the size (number of blocks) of the sliding window used to track validator liveness.

The information stored for tracking validator liveness is as follows:

```protobuf
// ValidatorSigningInfo defines a validator's signing info for monitoring their
// liveness activity.
message ValidatorSigningInfo {
  option (gogoproto.equal)            = true;
  option (gogoproto.goproto_stringer) = false;

  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // Height at which validator was first a candidate OR was unjailed
  int64 start_height = 2;
  // Index which is incremented each time the validator is bonded
  // in a block and may have signed a precommit or not. This in conjunction with the
  // `SignedBlocksWindow` param determines the index in the `MissedBlocksBitArray`.
  int64 index_offset = 3;
  // Timestamp until which the validator is jailed due to liveness downtime.
  google.protobuf.Timestamp jailed_until = 4
      [(gogoproto.stdtime) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  // Whether or not a validator has been tombstoned (killed out of validator set). It is set
  // once the validator commits an equivocation or for any other configured misbehavior.
  bool tombstoned = 5;
  // A counter kept to avoid unnecessary array reads.
  // Note that `Sum(MissedBlocksBitArray)` always equals `MissedBlocksCounter`.
  int64 missed_blocks_counter = 6;
}
```

### Params

The slashing module stores its params in state with the prefix of `0x00`; they can be updated via governance or by the address with authority.

* Params: `0x00 | ProtocolBuffer(Params)`

```protobuf
// Params represents the parameters used by the slashing module.
message Params {
  option (amino.name) = "cosmos-sdk/x/slashing/Params";

  int64 signed_blocks_window = 1;
  bytes min_signed_per_window = 2 [
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable)   = false,
    (amino.dont_omitempty) = true
  ];
  google.protobuf.Duration downtime_jail_duration = 3
      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true];
  bytes slash_fraction_double_sign = 4 [
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable)   = false,
    (amino.dont_omitempty) = true
  ];
  bytes slash_fraction_downtime = 5 [
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable)   = false,
    (amino.dont_omitempty) = true
  ];
}
```

## Messages

In this section, we describe the processing of messages for the `slashing` module.

### Unjail

If a validator was automatically unbonded due to downtime and wishes to come back online and possibly rejoin the bonded set, it must send `MsgUnjail`:

```protobuf
// MsgUnjail is an sdk.Msg used for unjailing a jailed validator, thus returning
// them into the bonded validator set, so they can begin receiving provisions
// and rewards again.
message MsgUnjail {
  string validator_addr = 1;
}
```

Below is pseudocode for the `MsgSrv/Unjail` RPC:

```go expandable
unjail(tx MsgUnjail)

  validator = getValidator(tx.ValidatorAddr)
  if validator == nil
    fail with "No validator found"
  if getSelfDelegation(validator) == 0
    fail with "validator must self delegate before unjailing"
  if !validator.Jailed
    fail with "Validator not jailed, cannot unjail"

  info = GetValidatorSigningInfo(operator)
  if info.Tombstoned
    fail with "Tombstoned validator cannot be unjailed"
  if block time < info.JailedUntil
    fail with "Validator still jailed, cannot unjail until period has expired"

  validator.Jailed = false
  setValidator(validator)

return
```

If the validator has enough stake to be in the top `n = MaximumBondedValidators`, it will be automatically rebonded, and all delegators still delegated to the validator will be rebonded and begin to again collect provisions and rewards.

## BeginBlock

### Liveness Tracking

At the beginning of each block, we update the `ValidatorSigningInfo` for each validator and check if they've crossed below the liveness threshold over a sliding window. This sliding window is defined by `SignedBlocksWindow`, and the index in this window is determined by `IndexOffset` found in the validator's `ValidatorSigningInfo`. For each block processed, the `IndexOffset` is incremented regardless of whether the validator signed or not. Once the index is determined, the `MissedBlocksBitArray` and `MissedBlocksCounter` are updated accordingly.

Finally, in order to determine if a validator crosses below the liveness threshold, we fetch the maximum number of blocks missed, `maxMissed`, which is `SignedBlocksWindow - (MinSignedPerWindow * SignedBlocksWindow)`, and the minimum height at which we can determine liveness, `minHeight`.
If the current block is greater than `minHeight` and the validator's `MissedBlocksCounter` is greater than `maxMissed`, they will be slashed by `SlashFractionDowntime`, will be jailed for `DowntimeJailDuration`, and have the following values reset: `MissedBlocksBitArray`, `MissedBlocksCounter`, and `IndexOffset`.

**Note**: Liveness slashes do **NOT** lead to a tombstoning.

```go expandable
height := block.Height
for vote in block.LastCommitInfo.Votes {
	signInfo := GetValidatorSigningInfo(vote.Validator.Address)
	signed := vote.SignedLastBlock // whether the validator signed the previous block

	// This is a relative index, so we count blocks the validator SHOULD have
	// signed. We use the 0-value default signing info if not present, except for
	// start height.
	index := signInfo.IndexOffset % SignedBlocksWindow()
	signInfo.IndexOffset++

	// Update MissedBlocksBitArray and MissedBlocksCounter. The MissedBlocksCounter
	// just tracks the sum of MissedBlocksBitArray. That way we avoid needing to
	// read/write the whole array each time.
	missedPrevious := GetValidatorMissedBlockBitArray(vote.Validator.Address, index)
	missed := !signed
	switch {
	case !missedPrevious && missed:
		// array index has changed from not missed to missed, increment counter
		SetValidatorMissedBlockBitArray(vote.Validator.Address, index, true)
		signInfo.MissedBlocksCounter++

	case missedPrevious && !missed:
		// array index has changed from missed to not missed, decrement counter
		SetValidatorMissedBlockBitArray(vote.Validator.Address, index, false)
		signInfo.MissedBlocksCounter--

	default:
		// array index at this index has not changed; no need to update counter
	}

	if missed {
		// emit events...
	}

	minHeight := signInfo.StartHeight + SignedBlocksWindow()
	maxMissed := SignedBlocksWindow() - MinSignedPerWindow()

	// If we are past the minimum height and the validator has missed too many
	// blocks, jail and slash them.
	if height > minHeight && signInfo.MissedBlocksCounter > maxMissed {
		validator := ValidatorByConsAddr(vote.Validator.Address)

		// emit events...

		// We need to retrieve the stake distribution which signed the block, so we
		// subtract ValidatorUpdateDelay from the block height, and subtract an
		// additional 1 since this is the LastCommit.
		//
		// Note, that this CAN result in a negative "distributionHeight" up to
		// -ValidatorUpdateDelay-1, i.e. at the end of the pre-genesis block (none) = at the beginning of the genesis block.
		// That's fine since this is just used to filter unbonding delegations & redelegations.
		distributionHeight := height - sdk.ValidatorUpdateDelay - 1

		SlashWithInfractionReason(vote.Validator.Address, distributionHeight, vote.Validator.Power, SlashFractionDowntime(), stakingtypes.Downtime)
		Jail(vote.Validator.Address)
		signInfo.JailedUntil = block.Time.Add(DowntimeJailDuration())

		// We need to reset the counter & array so that the validator won't be
		// immediately slashed for downtime upon rebonding.
		signInfo.MissedBlocksCounter = 0
		signInfo.IndexOffset = 0
		ClearValidatorMissedBlockBitArray(vote.Validator.Address)
	}

	SetValidatorSigningInfo(vote.Validator.Address, signInfo)
}
```

## Hooks

This section contains a description of the module's `hooks`. Hooks are operations that are executed automatically when events are raised.

### Staking hooks

The slashing module implements the `StakingHooks` interface defined in `x/staking`; these hooks are used for record-keeping of validator information. During app initialization, they should be registered in the staking module struct.

The following hooks impact the slashing state:

* `AfterValidatorBonded` creates a `ValidatorSigningInfo` instance as described in the following section.
* `AfterValidatorCreated` stores a validator's consensus key.
* `AfterValidatorRemoved` removes a validator's consensus key.
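To make the record-keeping role of these hooks concrete, here is a toy, self-contained sketch. The method names mirror the hooks listed above, but the signatures and types are deliberately simplified; this is not the SDK's actual `StakingHooks` interface:

```go
package main

import "fmt"

// SigningInfo is a trimmed-down stand-in for ValidatorSigningInfo.
type SigningInfo struct {
	StartHeight int64
}

// Hooks is a toy record-keeper: it reacts to staking events by maintaining
// per-validator bookkeeping, much like the slashing module's hooks do.
type Hooks struct {
	consKeys     map[string]string      // validator -> consensus key
	signingInfos map[string]SigningInfo // validator -> liveness record
}

func NewHooks() *Hooks {
	return &Hooks{
		consKeys:     map[string]string{},
		signingInfos: map[string]SigningInfo{},
	}
}

// AfterValidatorCreated stores the validator's consensus key.
func (h *Hooks) AfterValidatorCreated(val, consKey string) {
	h.consKeys[val] = consKey
}

// AfterValidatorBonded creates a signing-info record starting at the bond height.
func (h *Hooks) AfterValidatorBonded(val string, height int64) {
	h.signingInfos[val] = SigningInfo{StartHeight: height}
}

// AfterValidatorRemoved removes the validator's consensus key.
func (h *Hooks) AfterValidatorRemoved(val string) {
	delete(h.consKeys, val)
}

func main() {
	h := NewHooks()
	h.AfterValidatorCreated("valA", "consKeyA")
	h.AfterValidatorBonded("valA", 100)
	h.AfterValidatorRemoved("valA")
	fmt.Println(h.signingInfos["valA"].StartHeight, len(h.consKeys)) // prints: 100 0
}
```

The point is only the event-driven shape: the staking module raises the event, and the slashing module's registered hooks update their own state in response.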
### Validator Bonded

Upon successful first-time bonding of a new validator, we create a new `ValidatorSigningInfo` structure for the now-bonded validator, with a `StartHeight` of the current block.

If the validator was out of the validator set and gets bonded again, its new bonded height is set.

```go expandable
onValidatorBonded(address sdk.ValAddress)

  signingInfo, found = GetValidatorSigningInfo(address)
  if !found {
    signingInfo = ValidatorSigningInfo {
      StartHeight        : CurrentHeight,
      IndexOffset        : 0,
      JailedUntil        : time.Unix(0, 0),
      Tombstone          : false,
      MissedBlockCounter : 0
    }
  } else {
    signingInfo.StartHeight = CurrentHeight
  }
  setValidatorSigningInfo(signingInfo)

return
```

## Events

The slashing module emits the following events:

### MsgServer

#### MsgUnjail

| Type    | Attribute Key | Attribute Value      |
| ------- | ------------- | -------------------- |
| message | module        | slashing             |
| message | sender        | `{validatorAddress}` |

### Keeper

#### BeginBlocker: HandleValidatorSignature

| Type  | Attribute Key | Attribute Value               |
| ----- | ------------- | ----------------------------- |
| slash | address       | `{validatorConsensusAddress}` |
| slash | power         | `{validatorPower}`            |
| slash | reason        | `{slashReason}`               |
| slash | jailed \[0]   | `{validatorConsensusAddress}` |
| slash | burned coins  | `{math.Int}`                  |

* \[0] Only included if the validator is jailed.

| Type     | Attribute Key  | Attribute Value               |
| -------- | -------------- | ----------------------------- |
| liveness | address        | `{validatorConsensusAddress}` |
| liveness | missed\_blocks | `{missedBlocksCounter}`       |
| liveness | height         | `{blockHeight}`               |

#### Slash

* same as `"slash"` event from `HandleValidatorSignature`, but without the `jailed` attribute.
#### Jail

| Type  | Attribute Key | Attribute Value      |
| ----- | ------------- | -------------------- |
| slash | jailed        | `{validatorAddress}` |

## Staking Tombstone

### Abstract

In the current implementation of the `slashing` module, when the consensus engine informs the state machine of a validator's consensus fault, the validator is partially slashed and put into a "jail period", a period of time in which they are not allowed to rejoin the validator set. However, because of the nature of consensus faults and ABCI, there can be a delay between an infraction occurring and evidence of the infraction reaching the state machine (this is one of the primary reasons for the existence of the unbonding period).

> Note: The tombstone concept only applies to faults that have a delay between
> the infraction occurring and evidence reaching the state machine. For example,
> evidence of a validator double signing may take a while to reach the state machine
> due to unpredictable evidence gossip layer delays and the ability of validators to
> selectively reveal double-signatures (e.g. to infrequently-online light clients).
> Liveness slashing, on the other hand, is detected immediately as soon as the
> infraction occurs, and therefore no slashing period is needed. A validator is
> immediately put into the jail period, and they cannot commit another liveness fault
> until they unjail. In the future, there may be other types of byzantine faults
> that have delays (for example, submitting evidence of an invalid proposal as a transaction).
> When implemented, it will have to be decided whether these future types of
> byzantine faults will result in a tombstoning (and if not, the slash amounts
> will not be capped by a slashing period).

In the current system design, once a validator is put in jail for a consensus fault, after the `JailPeriod` they are allowed to send a transaction to `unjail` themselves, and thus rejoin the validator set.
One of the "design desires" of the `slashing` module is that if multiple infractions occur before evidence is executed (and a validator is put in jail), they should only be punished for the single worst infraction, not cumulatively. For example, if the sequence of events is:

1. Validator A commits Infraction 1 (worth 30% slash)
2. Validator A commits Infraction 2 (worth 40% slash)
3. Validator A commits Infraction 3 (worth 35% slash)
4. Evidence for Infraction 1 reaches state machine (and validator is put in jail)
5. Evidence for Infraction 2 reaches state machine
6. Evidence for Infraction 3 reaches state machine

Only Infraction 2 should have its slash take effect, as it is the highest. This is done so that in the case of the compromise of a validator's consensus key, they will only be punished once, even if the hacker double-signs many blocks. Because the unjailing has to be done with the validator's operator key, they have a chance to re-secure their consensus key, and then signal that they are ready using their operator key. We call this period during which we track only the max infraction the "slashing period".

Once a validator rejoins by unjailing themselves, we begin a new slashing period; if they commit a new infraction after unjailing, it gets slashed cumulatively on top of the worst infraction from the previous slashing period.

However, while infractions are grouped based off of the slashing periods, because evidence can be submitted up to an `unbondingPeriod` after the infraction, we still have to allow for evidence to be submitted for previous slashing periods. For example, if the sequence of events is:

1. Validator A commits Infraction 1 (worth 30% slash)
2. Validator A commits Infraction 2 (worth 40% slash)
3. Evidence for Infraction 1 reaches state machine (and Validator A is put in jail)
4. Validator A unjails

We are now in a new slashing period; however, we still have to keep the door open for the previous infraction, as the evidence for Infraction 2 may still come in. As the number of slashing periods increases, it creates more complexity, as we have to keep track of the highest infraction amount for every single slashing period.

> Note: Currently, according to the `slashing` module spec, a new slashing period
> is created every time a validator is unbonded then rebonded. This should probably
> be changed to jailed/unjailed. See issue [#3205](https://github.com/cosmos/cosmos-sdk/issues/3205)
> for further details. For the remainder of this, I will assume that we only start
> a new slashing period when a validator gets unjailed.

The maximum number of slashing periods is `UnbondingPeriod / JailPeriod`. The current defaults in Gaia for the `UnbondingPeriod` and `JailPeriod` are 3 weeks and 2 days, respectively. This means there could potentially be up to 11 slashing periods concurrently being tracked per validator. If we set the `JailPeriod >= UnbondingPeriod`, we only have to track 1 slashing period (i.e. not have to track slashing periods).

Currently, in the jail period implementation, once a validator unjails, all of their delegators who are still delegated to them (i.e. haven't unbonded or redelegated away) stay with them. Given that consensus safety faults are so egregious (way more so than liveness faults), it is probably prudent to have delegators not "auto-rebond" to the validator.

#### Proposal: infinite jail

We propose setting the "jail time" for a validator who commits a consensus safety fault to `infinite` (i.e. a tombstone state). This essentially kicks the validator out of the validator set and does not allow them to re-enter the validator set. All of their delegators (including the operator themselves) have to either unbond or redelegate away.
The validator operator can create a new +validator if they would like, with a new operator key and consensus key, but they +have to "re-earn" their delegations back. + +Implementing the tombstone system and getting rid of the slashing period tracking +will make the `slashing` module way simpler, especially because we can remove all +of the hooks defined in the `slashing` module consumed by the `staking` module +(the `slashing` module still consumes hooks defined in `staking`). + +#### Single slashing amount + +Another optimization that can be made is that if we assume that all ABCI faults +for CometBFT consensus are slashed at the same level, we don't have to keep +track of "max slash". Once an ABCI fault happens, we don't have to worry about +comparing potential future ones to find the max. + +Currently the only CometBFT ABCI fault is: + +* Unjustified precommits (double signs) + +It is currently planned to include the following fault in the near future: + +* Signing a precommit when you're in unbonding phase (needed to make light client bisection safe) + +Given that these faults are both attributable byzantine faults, we will likely +want to slash them equally, and thus we can enact the above change. + +> Note: This change may make sense for current CometBFT consensus, but maybe +> not for a different consensus algorithm or future versions of CometBFT that +> may want to punish at different levels (for example, partial slashing). 
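The downtime threshold described in the liveness-tracking section combines `SignedBlocksWindow` and `MinSignedPerWindow` into `maxMissed`. The small, self-contained Go sketch below works through that arithmetic using the example values from the parameter table in this document (a window of 100 blocks and a minimum of 0.5 signed per window); the function names are illustrative, not SDK APIs:

```go
package main

import "fmt"

// maxMissed computes the downtime threshold described in the liveness-tracking
// section: SignedBlocksWindow - (MinSignedPerWindow * SignedBlocksWindow).
func maxMissed(signedBlocksWindow int64, minSignedPerWindow float64) int64 {
	return signedBlocksWindow - int64(minSignedPerWindow*float64(signedBlocksWindow))
}

// shouldJail mirrors the BeginBlock check: past the minimum height, a validator
// whose missed-blocks counter exceeds maxMissed is slashed and jailed.
func shouldJail(height, minHeight, missedBlocksCounter, threshold int64) bool {
	return height > minHeight && missedBlocksCounter > threshold
}

func main() {
	mm := maxMissed(100, 0.5) // 100 - (0.5 * 100) = 50
	fmt.Println(mm)           // prints 50

	// A validator past minHeight that has missed 51 of the last 100 blocks
	// crosses the threshold; one that has missed exactly 50 does not.
	fmt.Println(shouldJail(200, 100, 51, mm)) // prints true
	fmt.Println(shouldJail(200, 100, 50, mm)) // prints false
}
```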
+
+## Parameters
+
+The slashing module contains the following parameters:
+
+| Key                     | Type           | Example                |
+| ----------------------- | -------------- | ---------------------- |
+| SignedBlocksWindow      | string (int64) | "100"                  |
+| MinSignedPerWindow      | string (dec)   | "0.500000000000000000" |
+| DowntimeJailDuration    | string (ns)    | "600000000000"         |
+| SlashFractionDoubleSign | string (dec)   | "0.050000000000000000" |
+| SlashFractionDowntime   | string (dec)   | "0.010000000000000000" |
+
+## CLI
+
+A user can query and interact with the `slashing` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `slashing` state.
+
+```shell
+simd query slashing --help
+```
+
+#### params
+
+The `params` command allows users to query genesis parameters for the slashing module.
+
+```shell
+simd query slashing params [flags]
+```
+
+Example:
+
+```shell
+simd query slashing params
+```
+
+Example Output:
+
+```yml
+downtime_jail_duration: 600s
+min_signed_per_window: "0.500000000000000000"
+signed_blocks_window: "100"
+slash_fraction_double_sign: "0.050000000000000000"
+slash_fraction_downtime: "0.010000000000000000"
+```
+
+#### signing-info
+
+The `signing-info` command allows users to query the signing info of a validator using its consensus public key.
+
+```shell
+simd query slashing signing-info [flags]
+```
+
+Example:
+
+```shell
+simd query slashing signing-info '{"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys6jD5B6tPgC8="}'
+```
+
+Example Output:
+
+```yml
+address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+index_offset: "2068"
+jailed_until: "1970-01-01T00:00:00Z"
+missed_blocks_counter: "0"
+start_height: "0"
+tombstoned: false
+```
+
+#### signing-infos
+
+The `signing-infos` command allows users to query the signing info of all validators.
+
+```shell
+simd query slashing signing-infos [flags]
+```
+
+Example:
+
+```shell
+simd query slashing signing-infos
+```
+
+Example Output:
+
+```yml
+info:
+- address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+  index_offset: "2075"
+  jailed_until: "1970-01-01T00:00:00Z"
+  missed_blocks_counter: "0"
+  start_height: "0"
+  tombstoned: false
+pagination:
+  next_key: null
+  total: "0"
+```
+
+### Transactions
+
+The `tx` commands allow users to interact with the `slashing` module.
+
+```bash
+simd tx slashing --help
+```
+
+#### unjail
+
+The `unjail` command allows users to unjail a validator previously jailed for downtime.
+
+```bash
+simd tx slashing unjail --from mykey [flags]
+```
+
+Example:
+
+```bash
+simd tx slashing unjail --from mykey
+```
+
+### gRPC
+
+A user can query the `slashing` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query the parameters of the slashing module.
+
+```shell
+cosmos.slashing.v1beta1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "signedBlocksWindow": "100",
+    "minSignedPerWindow": "NTAwMDAwMDAwMDAwMDAwMDAw",
+    "downtimeJailDuration": "600s",
+    "slashFractionDoubleSign": "NTAwMDAwMDAwMDAwMDAwMDA=",
+    "slashFractionDowntime": "MTAwMDAwMDAwMDAwMDAwMDA="
+  }
+}
+```
+
+#### SigningInfo
+
+The `SigningInfo` endpoint queries the signing info of a given consensus address.
+
+```shell
+cosmos.slashing.v1beta1.Query/SigningInfo
+```
+
+Example:
+
+```shell
+grpcurl -plaintext -d '{"cons_address":"cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c"}' localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfo
+```
+
+Example Output:
+
+```json
+{
+  "valSigningInfo": {
+    "address": "cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c",
+    "indexOffset": "3493",
+    "jailedUntil": "1970-01-01T00:00:00Z"
+  }
+}
+```
+
+#### SigningInfos
+
+The `SigningInfos` endpoint queries the signing info of all validators.
+
+```shell
+cosmos.slashing.v1beta1.Query/SigningInfos
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfos
+```
+
+Example Output:
+
+```json expandable
+{
+  "info": [
+    {
+      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+      "indexOffset": "2467",
+      "jailedUntil": "1970-01-01T00:00:00Z"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+### REST
+
+A user can query the `slashing` module using REST endpoints.
+
+#### Params
+
+```shell
+/cosmos/slashing/v1beta1/params
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/params"
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "signed_blocks_window": "100",
+    "min_signed_per_window": "0.500000000000000000",
+    "downtime_jail_duration": "600s",
+    "slash_fraction_double_sign": "0.050000000000000000",
+    "slash_fraction_downtime": "0.010000000000000000"
+  }
+}
+```
+
+#### signing\_info
+
+```shell
+/cosmos/slashing/v1beta1/signing_infos/%s
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos/cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c"
+```
+
+Example Output:
+
+```json
+{
+  "val_signing_info": {
+    "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+    "start_height": "0",
+    "index_offset": "4184",
+    "jailed_until": "1970-01-01T00:00:00Z",
+    "tombstoned": false,
+    "missed_blocks_counter": "0"
+  }
+}
+```
+
+#### signing\_infos
+
+```shell
+/cosmos/slashing/v1beta1/signing_infos
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos"
+```
+
+Example Output:
+
+```json expandable
+{
+  "info": [
+    {
+      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+      "start_height": "0",
+      "index_offset": "4169",
+      "jailed_until": "1970-01-01T00:00:00Z",
+      "tombstoned": false,
+      "missed_blocks_counter": "0"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
diff --git
a/docs/sdk/next/documentation/module-system/staking.mdx b/docs/sdk/next/documentation/module-system/staking.mdx
new file mode 100644
index 00000000..2930e83d
--- /dev/null
+++ b/docs/sdk/next/documentation/module-system/staking.mdx
@@ -0,0 +1,3851 @@
+---
+title: '`x/staking`'
+description: >-
+  This paper specifies the Staking module of the Cosmos SDK that was first
+  described in the Cosmos Whitepaper in June 2016.
+---
+
+## Abstract
+
+This paper specifies the Staking module of the Cosmos SDK that was first
+described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper)
+in June 2016.
+
+The module enables Cosmos SDK-based blockchains to support an advanced
+Proof-of-Stake (PoS) system. In this system, holders of the native staking token of
+the chain can become validators and can delegate tokens to validators,
+ultimately determining the effective validator set for the system.
+
+This module is used in the Cosmos Hub, the first Hub in the Cosmos
+network.
+
+## Contents
+
+* [State](#state)
+  * [Pool](#pool)
+  * [LastTotalPower](#lasttotalpower)
+  * [ValidatorUpdates](#validatorupdates)
+  * [UnbondingID](#unbondingid)
+  * [Params](#params)
+  * [Validator](#validator)
+  * [Delegation](#delegation)
+  * [UnbondingDelegation](#unbondingdelegation)
+  * [Redelegation](#redelegation)
+  * [Queues](#queues)
+  * [HistoricalInfo](#historicalinfo)
+* [State Transitions](#state-transitions)
+  * [Validators](#validators)
+  * [Delegations](#delegations)
+  * [Slashing](#slashing)
+  * [How Shares are calculated](#how-shares-are-calculated)
+* [Messages](#messages)
+  * [MsgCreateValidator](#msgcreatevalidator)
+  * [MsgEditValidator](#msgeditvalidator)
+  * [MsgDelegate](#msgdelegate)
+  * [MsgUndelegate](#msgundelegate)
+  * [MsgCancelUnbondingDelegation](#msgcancelunbondingdelegation)
+  * [MsgBeginRedelegate](#msgbeginredelegate)
+  * [MsgUpdateParams](#msgupdateparams)
+* [Begin-Block](#begin-block)
+  * [Historical Info Tracking](#historical-info-tracking)
+* 
[End-Block](#end-block) + * [Validator Set Changes](#validator-set-changes) + * [Queues](#queues-1) +* [Hooks](#hooks) +* [Events](#events) + * [EndBlocker](#endblocker) + * [Msg's](#msgs) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) + +## State + +### Pool + +Pool is used for tracking bonded and not-bonded token supply of the bond denomination. + +### LastTotalPower + +LastTotalPower tracks the total amounts of bonded tokens recorded during the previous end block. +Store entries prefixed with "Last" must remain unchanged until EndBlock. + +* LastTotalPower: `0x12 -> ProtocolBuffer(math.Int)` + +### ValidatorUpdates + +ValidatorUpdates contains the validator updates returned to ABCI at the end of every block. +The values are overwritten in every block. + +* ValidatorUpdates `0x61 -> []abci.ValidatorUpdate` + +### UnbondingID + +UnbondingID stores the ID of the latest unbonding operation. It enables creating unique IDs for unbonding operations, i.e., UnbondingID is incremented every time a new unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated. + +* UnbondingID: `0x37 -> uint64` + +### Params + +The staking module stores its params in state with the prefix of `0x51`, +it can be updated with governance or the address with authority. + +* Params: `0x51 | ProtocolBuffer(Params)` + +```protobuf +// Params defines the parameters for the x/staking module. +message Params { + option (amino.name) = "cosmos-sdk/x/staking/Params"; + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // unbonding_time is the time duration of unbonding. + google.protobuf.Duration unbonding_time = 1 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true]; + // max_validators is the maximum number of validators. 
+ uint32 max_validators = 2; + // max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio). + uint32 max_entries = 3; + // historical_entries is the number of historical entries to persist. + uint32 historical_entries = 4; + // bond_denom defines the bondable coin denomination. + string bond_denom = 5; + // min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators + string min_commission_rate = 6 [ + (gogoproto.moretags) = "yaml:\"min_commission_rate\"", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} +``` + +### Validator + +Validators can have one of three statuses + +* `Unbonded`: The validator is not in the active set. They cannot sign blocks and do not earn + rewards. They can receive delegations. +* `Bonded`: Once the validator receives sufficient bonded tokens they automatically join the + active set during [`EndBlock`](#validator-set-changes) and their status is updated to `Bonded`. + They are signing blocks and receiving rewards. They can receive further delegations. + They can be slashed for misbehavior. Delegators to this validator who unbond their delegation + must wait the duration of the UnbondingTime, a chain-specific param, during which time + they are still slashable for offences of the source validator if those offences were committed + during the period of time that the tokens were bonded. +* `Unbonding`: When a validator leaves the active set, either by choice or due to slashing, jailing or + tombstoning, an unbonding of all their delegations begins. All delegations must then wait the UnbondingTime + before their tokens are moved to their accounts from the `BondedPool`. + + +Tombstoning is permanent, once tombstoned a validator's consensus key can not be reused within the chain where the tombstoning happened. 
+
+
+Validator objects should be primarily stored and accessed by the
+`OperatorAddr`, an SDK validator address for the operator of the validator. Two
+additional indices are maintained per validator object in order to fulfill
+required lookups for slashing and validator-set updates. A third special index
+(`LastValidatorsPower`) is also maintained; unlike the first two indices, which
+mirror the validator records within a block, it remains constant throughout the block.
+
+* Validators: `0x21 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(validator)`
+* ValidatorsByConsAddr: `0x22 | ConsAddrLen (1 byte) | ConsAddr -> OperatorAddr`
+* ValidatorsByPower: `0x23 | BigEndian(ConsensusPower) | OperatorAddrLen (1 byte) | OperatorAddr -> OperatorAddr`
+* LastValidatorsPower: `0x11 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(ConsensusPower)`
+* ValidatorsByUnbondingID: `0x38 | UnbondingID -> 0x21 | OperatorAddrLen (1 byte) | OperatorAddr`
+
+`Validators` is the primary index - it ensures that each operator can have only one
+associated validator, where the public key of that validator can change in the
+future. Delegators can refer to the immutable operator of the validator, without
+concern for the changing public key.
+
+`ValidatorsByUnbondingID` is an additional index that enables lookups for
+validators by the unbonding IDs corresponding to their current unbonding.
+
+`ValidatorByConsAddr` is an additional index that enables lookups for slashing.
+When CometBFT reports evidence, it provides the validator address, so this
+map is needed to find the operator. Note that the `ConsAddr` corresponds to the
+address which can be derived from the validator's `ConsPubKey`.
+
+`ValidatorsByPower` is an additional index that provides a sorted list of
+potential validators to quickly determine the current active set. Here
+ConsensusPower is validator.Tokens/10^6 by default.
Note that all validators +where `Jailed` is true are not stored within this index. + +`LastValidatorsPower` is a special index that provides a historical list of the +last-block's bonded validators. This index remains constant during a block but +is updated during the validator set update process which takes place in [`EndBlock`](#end-block). + +Each validator's state is stored in a `Validator` struct: + +```protobuf +// Validator defines a validator, together with the total amount of the +// Validator's bond shares and their exchange rate to coins. Slashing results in +// a decrease in the exchange rate, allowing correct calculation of future +// undelegations without iterating over delegators. When coins are delegated to +// this validator, the validator is credited with a delegation whose number of +// bond shares is based on the amount of coins delegated divided by the current +// exchange rate. Voting power can be calculated as total bonded shares +// multiplied by exchange rate. +message Validator { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + + // operator_address defines the address of the validator's operator; bech encoded in JSON. + string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // consensus_pubkey is the consensus public key of the validator, as a Protobuf Any. + google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"]; + // jailed defined whether the validator has been jailed from bonded status or not. + bool jailed = 3; + // status is the validator status (bonded/unbonding/unbonded). + BondStatus status = 4; + // tokens define the delegated tokens (incl. self-delegation). 
+  string tokens = 5 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+  // delegator_shares defines total shares issued to a validator's delegators.
+  string delegator_shares = 6 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // description defines the description terms for the validator.
+  Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // unbonding_height defines, if unbonding, the height at which this validator has begun unbonding.
+  int64 unbonding_height = 8;
+  // unbonding_time defines, if unbonding, the min time for the validator to complete unbonding.
+  google.protobuf.Timestamp unbonding_time = 9
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+  // commission defines the commission parameters.
+  Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // min_self_delegation is the validator's self declared minimum self delegation.
+  //
+  // Since: cosmos-sdk 0.46
+  string min_self_delegation = 11 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+
+  // strictly positive if this validator's unbonding has been stopped by external modules
+  int64 unbonding_on_hold_ref_count = 12;
+
+  // list of unbonding ids, each uniquely identifying an unbonding of this validator
+  repeated uint64 unbonding_ids = 13;
+}
+```
+
+```protobuf
+// CommissionRates defines the initial commission rates to be used for creating
+// a validator.
+message CommissionRates {
+  option (gogoproto.equal) = true;
+  option (gogoproto.goproto_stringer) = false;
+
+  // rate is the commission rate charged to delegators, as a fraction.
+ string rate = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // max_rate defines the maximum commission rate which validator can ever charge, as a fraction. + string max_rate = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // max_change_rate defines the maximum daily increase of the validator commission, as a fraction. + string max_change_rate = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +// Commission defines commission parameters for a given validator. +message Commission { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // commission_rates defines the initial commission rates to be used for creating a validator. + CommissionRates commission_rates = 1 + [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + // update_time is the last time the commission rate was changed. + google.protobuf.Timestamp update_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; +} + +// Description defines a validator description. +message Description { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // moniker defines a human-readable name for the validator. + string moniker = 1; + // identity defines an optional identity signature (ex. UPort or Keybase). + string identity = 2; + // website defines an optional website link. + string website = 3; + // security_contact defines an optional email for security contact. + string security_contact = 4; + // details define other optional details. 
+  string details = 5;
+}
+```
+
+### Delegation
+
+Delegations are identified by combining `DelegatorAddr` (the address of the delegator)
+with the `ValidatorAddr`. Delegations are indexed in the store as follows:
+
+* Delegation: `0x31 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(delegation)`
+
+Stakeholders may delegate coins to validators; under this circumstance their
+funds are held in a `Delegation` data structure. It is owned by one
+delegator, and is associated with the shares for one validator. The sender of
+the transaction is the owner of the bond.
+
+```protobuf
+// Delegation represents the bond with tokens held by an account. It is
+// owned by one delegator, and is associated with the voting power of one
+// validator.
+message Delegation {
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.goproto_stringer) = false;
+
+  // delegator_address is the bech32-encoded address of the delegator.
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // validator_address is the bech32-encoded address of the validator.
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // shares define the delegation shares received.
+  string shares = 3 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+```
+
+#### Delegator Shares
+
+When one delegates tokens to a Validator, they are issued a number of delegator shares based on a
+dynamic exchange rate, calculated as follows from the total number of tokens delegated to the
+validator and the number of shares issued so far:
+
+`Shares per Token = validator.TotalShares() / validator.Tokens()`
+
+Only the number of shares received is stored on the DelegationEntry.
When a delegator then
+undelegates, the token amount they receive is calculated from the number of shares they currently
+hold and the inverse exchange rate:
+
+`Tokens per Share = validator.Tokens() / validator.TotalShares()`
+
+These `Shares` are simply an accounting mechanism. They are not a fungible asset. The reason for
+this mechanism is to simplify the accounting around slashing. Rather than iteratively slashing the
+tokens of every delegation entry, instead the Validator's total bonded tokens can be slashed,
+effectively reducing the value of each issued delegator share.
+
+### UnbondingDelegation
+
+Shares in a `Delegation` can be unbonded, but they must for some time exist as
+an `UnbondingDelegation`, where shares can be reduced if Byzantine behavior is
+detected.
+
+`UnbondingDelegation` objects are indexed in the store as:
+
+* UnbondingDelegation: `0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(unbondingDelegation)`
+* UnbondingDelegationsFromValidator: `0x33 | ValidatorAddrLen (1 byte) | ValidatorAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* UnbondingDelegationByUnbondingId: `0x38 | UnbondingId -> 0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr`
+
+`UnbondingDelegation` is used in queries, to lookup all unbonding delegations for
+a given delegator.
+
+`UnbondingDelegationsFromValidator` is used in slashing, to lookup all
+unbonding delegations associated with a given validator that need to be
+slashed.
+
+`UnbondingDelegationByUnbondingId` is an additional index that enables
+lookups for unbonding delegations by the unbonding IDs of the containing
+unbonding delegation entries.
+
+An `UnbondingDelegation` object is created every time an unbonding is initiated.
+
+```protobuf
+// UnbondingDelegation stores all of a single delegator's unbonding bonds
+// for a single validator in a time-ordered list.
+message UnbondingDelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + // delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_address is the bech32-encoded address of the validator. + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // entries are the unbonding delegation entries. + repeated UnbondingDelegationEntry entries = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // unbonding delegation entries +} + +// UnbondingDelegationEntry defines an unbonding object with relevant metadata. +message UnbondingDelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // creation_height is the height which the unbonding took place. + int64 creation_height = 1; + // completion_time is the unix time for unbonding completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // initial_balance defines the tokens initially scheduled to receive at completion. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // balance defines the tokens to receive at completion. 
+  string balance = 4 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+  // Incrementing id that uniquely identifies this entry
+  uint64 unbonding_id = 5;
+
+  // Strictly positive if this entry's unbonding has been stopped by external modules
+  int64 unbonding_on_hold_ref_count = 6;
+}
+```
+
+### Redelegation
+
+The bonded tokens worth of a `Delegation` may be instantly redelegated from a
+source validator to a different validator (destination validator). However, when
+this occurs they must be tracked in a `Redelegation` object, whereby their
+shares can be slashed if their tokens have contributed to a Byzantine fault
+committed by the source validator.
+
+`Redelegation` objects are indexed in the store as:
+
+* Redelegations: `0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr -> ProtocolBuffer(redelegation)`
+* RedelegationsBySrc: `0x35 | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationsByDst: `0x36 | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationByUnbondingId: `0x38 | UnbondingId -> 0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr`
+
+`Redelegations` is used for queries, to lookup all redelegations for a given
+delegator.
+
+`RedelegationsBySrc` is used for slashing based on the `ValidatorSrcAddr`.
+
+`RedelegationsByDst` is used for slashing based on the `ValidatorDstAddr`.
+ +`RedelegationByUnbondingId` is an additional index that enables +lookups for redelegations by the unbonding IDs of the containing +redelegation entries. + +A redelegation object is created every time a redelegation occurs. To prevent +"redelegation hopping" redelegations may not occur under the situation that: + +* the (re)delegator already has another immature redelegation in progress + with a destination to a validator (let's call it `Validator X`) +* and, the (re)delegator is attempting to create a *new* redelegation + where the source validator for this new redelegation is `Validator X`. + +```protobuf +// RedelegationEntry defines a redelegation object with relevant metadata. +message RedelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // creation_height defines the height which the redelegation took place. + int64 creation_height = 1; + // completion_time defines the unix time for redelegation completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // initial_balance defines the initial balance when redelegation started. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // shares_dst is the amount of destination-validator shares created by redelegation. 
+ string shares_dst = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + // Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +// Redelegation contains the list of a particular delegator's redelegating bonds +// from a particular source validator to a particular destination validator. +message Redelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + // delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_src_address is the validator redelegation source operator address. + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_dst_address is the validator redelegation destination operator address. + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // entries are the redelegation entries. + repeated RedelegationEntry entries = 4 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // redelegation entries +} +``` + +### Queues + +All queue objects are sorted by timestamp. The time used within any queue is +firstly converted to UTC, rounded to the nearest nanosecond then sorted. The sortable time format +used is a slight modification of the RFC3339Nano and uses the format string +`"2006-01-02T15:04:05.000000000"`. Notably this format: + +* right pads all zeros +* drops the time zone info (we already use UTC) + +In all cases, the stored timestamp represents the maturation time of the queue +element. 
+ +#### UnbondingDelegationQueue + +For the purpose of tracking progress of unbonding delegations the unbonding +delegations queue is kept. + +* UnbondingDelegation: `0x41 | format(time) -> []DVPair` + +```protobuf +// DVPair is struct that just has a delegator-validator pair with no other data. +// It is intended to be used as a marshalable pointer. For example, a DVPair can +// be used to construct the key to getting an UnbondingDelegation from state. +message DVPair { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +#### RedelegationQueue + +For the purpose of tracking progress of redelegations the redelegation queue is +kept. + +* RedelegationQueue: `0x42 | format(time) -> []DVVTriplet` + +```protobuf +// DVVTriplet is struct that just has a delegator-validator-validator triplet +// with no other data. It is intended to be used as a marshalable pointer. For +// example, a DVVTriplet can be used to construct the key to getting a +// Redelegation from state. +message DVVTriplet { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +#### ValidatorQueue + +For the purpose of tracking progress of unbonding validators the validator +queue is kept. + +* ValidatorQueueTime: `0x43 | format(time) -> []sdk.ValAddress` + +The stored object by each key is an array of validator operator addresses from +which the validator object can be accessed. 
Typically it is expected that only
+a single validator record will be associated with a given timestamp; however, it is possible
+that multiple validators exist in the queue at the same location.
+
+### HistoricalInfo
+
+HistoricalInfo objects are stored and pruned at each block such that the staking keeper persists
+the `n` most recent historical info defined by the staking module parameter `HistoricalEntries`.
+
+```protobuf expandable
+syntax = "proto3";
+package cosmos.staking.v1beta1;
+
+import "gogoproto/gogo.proto";
+import "google/protobuf/any.proto";
+import "google/protobuf/duration.proto";
+import "google/protobuf/timestamp.proto";
+
+import "cosmos_proto/cosmos.proto";
+import "cosmos/base/v1beta1/coin.proto";
+import "amino/amino.proto";
+import "tendermint/types/types.proto";
+import "tendermint/abci/types.proto";
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/staking/types";
+
+// HistoricalInfo contains header and validator information for a given block.
+// It is stored as part of staking module's state, which persists the `n` most
+// recent HistoricalInfo
+// (`n` is set by the staking module's `historical_entries` parameter).
+message HistoricalInfo {
+  tendermint.types.Header header = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  repeated Validator valset = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+
+// CommissionRates defines the initial commission rates to be used for creating
+// a validator.
+message CommissionRates {
+  option (gogoproto.equal) = true;
+  option (gogoproto.goproto_stringer) = false;
+
+  // rate is the commission rate charged to delegators, as a fraction.
+  string rate = 1 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // max_rate defines the maximum commission rate which validator can ever charge, as a fraction.
+ string max_rate = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / max_change_rate defines the maximum daily increase of the validator commission, as a fraction. + string max_change_rate = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ Commission defines commission parameters for a given validator. +message Commission { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / commission_rates defines the initial commission rates to be used for creating a validator. + CommissionRates commission_rates = 1 + [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / update_time is the last time the commission rate was changed. + google.protobuf.Timestamp update_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; +} + +/ Description defines a validator description. +message Description { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / moniker defines a human-readable name for the validator. + string moniker = 1; + / identity defines an optional identity signature (ex. UPort or Keybase). + string identity = 2; + / website defines an optional website link. + string website = 3; + / security_contact defines an optional email for security contact. + string security_contact = 4; + / details define other optional details. + string details = 5; +} + +/ Validator defines a validator, together with the total amount of the +/ Validator's bond shares and their exchange rate to coins. Slashing results in +/ a decrease in the exchange rate, allowing correct calculation of future +/ undelegations without iterating over delegators. 
When coins are delegated to +/ this validator, the validator is credited with a delegation whose number of +/ bond shares is based on the amount of coins delegated divided by the current +/ exchange rate. Voting power can be calculated as total bonded shares +/ multiplied by exchange rate. +message Validator { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + + / operator_address defines the address of the validator's operator; bech encoded in JSON. + string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / consensus_pubkey is the consensus public key of the validator, as a Protobuf Any. + google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"]; + / jailed defined whether the validator has been jailed from bonded status or not. + bool jailed = 3; + / status is the validator status (bonded/unbonding/unbonded). + BondStatus status = 4; + / tokens define the delegated tokens (incl. self-delegation). + string tokens = 5 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / delegator_shares defines total shares issued to a validator's delegators. + string delegator_shares = 6 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / description defines the description terms for the validator. + Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / unbonding_height defines, if unbonding, the height at which this validator has begun unbonding. + int64 unbonding_height = 8; + / unbonding_time defines, if unbonding, the min time for the validator to complete unbonding. 
+ google.protobuf.Timestamp unbonding_time = 9 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / commission defines the commission parameters. + Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / min_self_delegation is the validator's self declared minimum self delegation. + / + / Since: cosmos-sdk 0.46 + string min_self_delegation = 11 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + + / strictly positive if this validator's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 12; + + / list of unbonding ids, each uniquely identifing an unbonding of this validator + repeated uint64 unbonding_ids = 13; +} + +/ BondStatus is the status of a validator. +enum BondStatus { + option (gogoproto.goproto_enum_prefix) = false; + + / UNSPECIFIED defines an invalid validator status. + BOND_STATUS_UNSPECIFIED = 0 [(gogoproto.enumvalue_customname) = "Unspecified"]; + / UNBONDED defines a validator that is not bonded. + BOND_STATUS_UNBONDED = 1 [(gogoproto.enumvalue_customname) = "Unbonded"]; + / UNBONDING defines a validator that is unbonding. + BOND_STATUS_UNBONDING = 2 [(gogoproto.enumvalue_customname) = "Unbonding"]; + / BONDED defines a validator that is bonded. + BOND_STATUS_BONDED = 3 [(gogoproto.enumvalue_customname) = "Bonded"]; +} + +/ ValAddresses defines a repeated set of validator addresses. +message ValAddresses { + option (gogoproto.goproto_stringer) = false; + option (gogoproto.stringer) = true; + + repeated string addresses = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVPair is struct that just has a delegator-validator pair with no other data. +/ It is intended to be used as a marshalable pointer. For example, a DVPair can +/ be used to construct the key to getting an UnbondingDelegation from state. 
+message DVPair { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVPairs defines an array of DVPair objects. +message DVPairs { + repeated DVPair pairs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ DVVTriplet is struct that just has a delegator-validator-validator triplet +/ with no other data. It is intended to be used as a marshalable pointer. For +/ example, a DVVTriplet can be used to construct the key to getting a +/ Redelegation from state. +message DVVTriplet { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVVTriplets defines an array of DVVTriplet objects. +message DVVTriplets { + repeated DVVTriplet triplets = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ Delegation represents the bond with tokens held by an account. It is +/ owned by one delegator, and is associated with the voting power of one +/ validator. +message Delegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_address is the bech32-encoded address of the validator. + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / shares define the delegation shares received. 
+ string shares = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ UnbondingDelegation stores all of a single delegator's unbonding bonds +/ for a single validator in an time-ordered list. +message UnbondingDelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_address is the bech32-encoded address of the validator. + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / entries are the unbonding delegation entries. + repeated UnbondingDelegationEntry entries = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; / unbonding delegation entries +} + +/ UnbondingDelegationEntry defines an unbonding object with relevant metadata. +message UnbondingDelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / creation_height is the height which the unbonding took place. + int64 creation_height = 1; + / completion_time is the unix time for unbonding completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / initial_balance defines the tokens initially scheduled to receive at completion. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / balance defines the tokens to receive at completion. 
+ string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + / Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +/ RedelegationEntry defines a redelegation object with relevant metadata. +message RedelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / creation_height defines the height which the redelegation took place. + int64 creation_height = 1; + / completion_time defines the unix time for redelegation completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / initial_balance defines the initial balance when redelegation started. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / shares_dst is the amount of destination-validator shares created by redelegation. + string shares_dst = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + / Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +/ Redelegation contains the list of a particular delegator's redelegating bonds +/ from a particular source validator to a particular destination validator. +message Redelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. 
+ string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_src_address is the validator redelegation source operator address. + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_dst_address is the validator redelegation destination operator address. + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / entries are the redelegation entries. + repeated RedelegationEntry entries = 4 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; / redelegation entries +} + +/ Params defines the parameters for the x/staking module. +message Params { + option (amino.name) = "cosmos-sdk/x/staking/Params"; + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / unbonding_time is the time duration of unbonding. + google.protobuf.Duration unbonding_time = 1 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true]; + / max_validators is the maximum number of validators. + uint32 max_validators = 2; + / max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio). + uint32 max_entries = 3; + / historical_entries is the number of historical entries to persist. + uint32 historical_entries = 4; + / bond_denom defines the bondable coin denomination. + string bond_denom = 5; + / min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators + string min_commission_rate = 6 [ + (gogoproto.moretags) = "yaml:\"min_commission_rate\"", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ DelegationResponse is equivalent to Delegation except that it contains a +/ balance in addition to shares which is more suitable for client responses. 
+message DelegationResponse { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + + Delegation delegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + cosmos.base.v1beta1.Coin balance = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ RedelegationEntryResponse is equivalent to a RedelegationEntry except that it +/ contains a balance in addition to shares which is more suitable for client +/ responses. +message RedelegationEntryResponse { + option (gogoproto.equal) = true; + + RedelegationEntry redelegation_entry = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; +} + +/ RedelegationResponse is equivalent to a Redelegation except that its entries +/ contain a balance in addition to shares which is more suitable for client +/ responses. +message RedelegationResponse { + option (gogoproto.equal) = false; + + Redelegation redelegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated RedelegationEntryResponse entries = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ Pool is used for tracking bonded and not-bonded token supply of the bond +/ denomination. 
+message Pool { + option (gogoproto.description) = true; + option (gogoproto.equal) = true; + string not_bonded_tokens = 1 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "not_bonded_tokens", + (amino.dont_omitempty) = true + ]; + string bonded_tokens = 2 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "bonded_tokens", + (amino.dont_omitempty) = true + ]; +} + +/ Infraction indicates the infraction a validator commited. +enum Infraction { + / UNSPECIFIED defines an empty infraction. + INFRACTION_UNSPECIFIED = 0; + / DOUBLE_SIGN defines a validator that double-signs a block. + INFRACTION_DOUBLE_SIGN = 1; + / DOWNTIME defines a validator that missed signing too many blocks. + INFRACTION_DOWNTIME = 2; +} + +/ ValidatorUpdates defines an array of abci.ValidatorUpdate objects. +/ TODO: explore moving this to proto/cosmos/base to separate modules from tendermint dependence +message ValidatorUpdates { + repeated tendermint.abci.ValidatorUpdate updates = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +At each BeginBlock, the staking keeper will persist the current Header and the Validators that committed +the current block in a `HistoricalInfo` object. The Validators are sorted on their address to ensure that +they are in a deterministic order. +The oldest HistoricalEntries will be pruned to ensure that there only exist the parameter-defined number of +historical entries. + +## State Transitions + +### Validators + +State transitions in validators are performed on every [`EndBlock`](#validator-set-changes) +in order to check for changes in the active `ValidatorSet`. + +A validator can be `Unbonded`, `Unbonding` or `Bonded`. `Unbonded` +and `Unbonding` are collectively called `Not Bonded`. 
A validator can move
+directly between all the states, except from `Bonded` to `Unbonded`.
+
+#### Not bonded to Bonded
+
+The following transition occurs when a validator's ranking in the `ValidatorPowerIndex` surpasses
+that of the `LastValidator`.
+
+* set `validator.Status` to `Bonded`
+* send the `validator.Tokens` from the `NotBondedTokens` to the `BondedPool` `ModuleAccount`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* if it exists, delete any `ValidatorQueue` record for this validator
+
+#### Bonded to Unbonding
+
+When a validator begins the unbonding process the following operations occur:
+
+* send the `validator.Tokens` from the `BondedPool` to the `NotBondedTokens` `ModuleAccount`
+* set `validator.Status` to `Unbonding`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* insert a new record into the `ValidatorQueue` for this validator
+
+#### Unbonding to Unbonded
+
+A validator moves from unbonding to unbonded when the `ValidatorQueue` object
+moves from bonded to unbonded
+
+* update the `Validator` object for this validator
+* set `validator.Status` to `Unbonded`
+
+#### Jail/Unjail
+
+When a validator is jailed, it is effectively removed from the CometBFT set.
+This process may also be reversed.
The following operations occur:
+
+* set `Validator.Jailed` and update object
+* if jailed, delete the record from `ValidatorByPowerIndex`
+* if unjailed, add the record to `ValidatorByPowerIndex`
+
+Jailed validators are not present in any of the following stores:
+
+* the power store (from consensus power to address)
+
+### Delegations
+
+#### Delegate
+
+When a delegation occurs, both the validator and the delegation objects are affected:
+
+* determine the delegator's shares based on tokens delegated and the validator's exchange rate
+* remove tokens from the sending account
+* add shares to the delegation object (creating it if it does not already exist)
+* add new delegator shares and update the `Validator` object
+* transfer the `delegation.Amount` from the delegator's account to the `BondedPool` or the `NotBondedPool` `ModuleAccount`, depending on whether the `validator.Status` is `Bonded` or not
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+
+#### Begin Unbonding
+
+As a part of the Undelegate and Complete Unbonding state transitions, Unbond
+Delegation may be called.
+
+* subtract the unbonded shares from the delegator
+* add the unbonded tokens to an `UnbondingDelegationEntry`
+* update the delegation, or remove the delegation if there are no more shares
+* if the delegation is the operator of the validator and no more shares exist, then trigger a jail validator
+* update the validator, removing the delegator shares and associated coins
+* if the validator state is `Bonded`, transfer the `Coins` worth of the unbonded
+  shares from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* remove the validator if it is unbonded and there are no more delegation shares
+* get a unique `unbondingId` and map it to the `UnbondingDelegationEntry` in `UnbondingDelegationByUnbondingId`
+* call the `AfterUnbondingInitiated(unbondingId)` hook
+* add the unbonding delegation to the `UnbondingDelegationQueue` with the completion time set to `UnbondingTime`
+
+#### Cancel an `UnbondingDelegation` Entry
+
+When a `cancel unbond delegation` occurs, the `validator`, the `delegation`, and the `UnbondingDelegationQueue` state are all updated.
+
+* if the cancel amount equals the `UnbondingDelegation` entry `balance`, the entry is deleted from the `UnbondingDelegationQueue`.
+* if the cancel amount is less than the `UnbondingDelegation` entry `balance`, the entry is updated with the new balance in the `UnbondingDelegationQueue`.
+* the cancel `amount` is [Delegated](#delegations) back to the original `validator`.
+
+#### Complete Unbonding
+
+For undelegations which do not complete immediately, the following operations
+occur when the unbonding delegation queue element matures:
+
+* remove the entry from the `UnbondingDelegation` object
+* transfer the tokens from the `NotBondedPool` `ModuleAccount` to the delegator `Account`
+
+#### Begin Redelegation
+
+Redelegations affect the delegation, source and destination validators.
+
+* perform an `unbond` delegation from the source validator to retrieve the tokens worth of the unbonded shares
+* using the unbonded tokens, `Delegate` them to the destination validator
+* if the `sourceValidator.Status` is `Bonded`, and the `destinationValidator` is not,
+  transfer the newly delegated tokens from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* otherwise, if the `sourceValidator.Status` is not `Bonded`, and the `destinationValidator`
+  is `Bonded`, transfer the newly delegated tokens from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
+* record the token amount in a new entry in the relevant `Redelegation`
+
+From when a redelegation begins until it completes, the delegator is in a state of "pseudo-unbonding", and can still be
+slashed for infractions that occurred before the redelegation began.
+
+#### Complete Redelegation
+
+When a redelegation completes, the following occurs:
+
+* remove the entry from the `Redelegation` object
+
+### Slashing
+
+#### Slash Validator
+
+When a Validator is slashed, the following occurs:
+
+* The total `slashAmount` is calculated as the `slashFactor` (a chain parameter) \* `TokensFromConsensusPower`,
+  the total number of tokens bonded to the validator at the time of the infraction.
+* Every unbonding delegation and pseudo-unbonding redelegation from the validator, where the infraction occurred
+  before the unbonding or redelegation began, is slashed by the `slashFactor` percentage of the `initialBalance`.
+* Each amount slashed from redelegations and unbonding delegations is subtracted from the
+  total slash amount.
+* The `remainingSlashAmount` is then slashed from the validator's tokens in the `BondedPool` or
+  `NonBondedPool` depending on the validator's status. This reduces the total supply of tokens.
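The split of the slash between unbonding entries and the validator's remaining tokens can be sketched numerically (an illustrative sketch with made-up numbers; the actual keeper uses fixed-point `Dec` arithmetic and additional capping and rounding logic):

```python
def slash_amounts(slash_factor, power_tokens, unbonding_initial_balances):
    """Illustrative split of a slash, following the bullets above:
    the total is slashFactor * tokens at infraction time, each unbonding
    entry is slashed by slashFactor of its initial balance, and the
    remainder comes out of the validator's pooled tokens."""
    total = int(slash_factor * power_tokens)  # total slashAmount
    slashed_from_unbonding = sum(
        int(slash_factor * initial) for initial in unbonding_initial_balances
    )
    # whatever the unbonding entries did not cover is slashed from the
    # validator's tokens in the bonded/not-bonded pool (never negative)
    remaining = max(total - slashed_from_unbonding, 0)
    return total, remaining

# 5% slash on 1,000,000 bonded tokens, with two unbonding entries
total, remaining = slash_amounts(0.05, 1_000_000, [100_000, 50_000])
assert total == 50_000 and remaining == 42_500
```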
+
+In the case of a slash due to any infraction that requires evidence to be submitted (for example double-sign), the slash
+occurs at the block where the evidence is included, not at the block where the infraction occurred.
+Put otherwise, validators are not slashed retroactively, only when they are caught.
+
+#### Slash Unbonding Delegation
+
+When a validator is slashed, so are those unbonding delegations from the validator that began unbonding
+after the time of the infraction. Every entry in every unbonding delegation from the validator
+is slashed by `slashFactor`. The amount slashed is calculated from the `InitialBalance` of the
+delegation and is capped to prevent a resulting negative balance. Completed (or mature) unbondings are not slashed.
+
+#### Slash Redelegation
+
+When a validator is slashed, so are all redelegations from the validator that began after the
+infraction. Redelegations are slashed by `slashFactor`.
+Redelegations that began before the infraction are not slashed.
+The amount slashed is calculated from the `InitialBalance` of the delegation and is capped to
+prevent a resulting negative balance.
+Mature redelegations (that have completed pseudo-unbonding) are not slashed.
+
+### How Shares are calculated
+
+At any given point in time, each validator has a number of tokens, `T`, and has a number of shares issued, `S`.
+Each delegator, `i`, holds a number of shares, `S_i`.
+The number of tokens is the sum of all tokens delegated to the validator, plus the rewards, minus the slashes.
+
+The delegator is entitled to a portion of the underlying tokens proportional to their proportion of shares.
+So delegator `i` is entitled to `T * S_i / S` of the validator's tokens.
+
+When a delegator delegates new tokens to the validator, they receive a number of shares proportional to their contribution.
+So when delegator `j` delegates `T_j` tokens, they receive `S_j = S * T_j / T` shares.
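These formulas can be checked with a small numeric sketch (illustrative only; the SDK uses fixed-point `Dec` arithmetic rather than floats):

```python
def shares_for_delegation(T, S, T_j):
    """Shares minted for delegating T_j tokens to a validator with
    T tokens and S issued shares; the initial delegation mints 1:1."""
    if S == 0:  # initial delegation: T = 0 and S = 0
        return T_j
    return S * T_j / T

def entitlement(T, S, S_i):
    """Tokens delegator i can redeem for S_i of the S issued shares."""
    return T * S_i / S

# a validator with 100 tokens backing 80 shares (e.g. after rewards)
S_j = shares_for_delegation(100, 80, 25)  # new 25-token delegation
assert S_j == 20.0
# after the delegation: T = 125 tokens, S = 100 shares, and the new
# delegator's 20 shares redeem exactly the 25 tokens they put in
assert entitlement(125, 100, 20) == 25.0
```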
+The total number of tokens is now `T + T_j`, and the total number of shares is `S + S_j`.
+`j`'s proportion of the shares is the same as their proportion of the total tokens contributed: `(S + S_j) / S = (T + T_j) / T`.
+
+A special case is the initial delegation, when `T = 0` and `S = 0`, so `T_j / T` is undefined.
+For the initial delegation, delegator `j` who delegates `T_j` tokens receives `S_j = T_j` shares.
+So a validator that hasn't received any rewards and has not been slashed will have `T = S`.
+
+## Messages
+
+In this section we describe the processing of the staking messages and the corresponding updates to the state. All created/modified state objects specified by each message are defined within the [state](#state) section.
+
+### MsgCreateValidator
+
+A validator is created using the `MsgCreateValidator` message.
+The validator must be created with an initial delegation from the operator.
+
+```protobuf
+  // CreateValidator defines a method for creating a new validator.
+  rpc CreateValidator(MsgCreateValidator) returns (MsgCreateValidatorResponse);
+```
+
+```protobuf
+// MsgCreateValidator defines a SDK message for creating a new validator.
+message MsgCreateValidator {
+  // NOTE(fdymylja): this is a particular case in which
+  // if validator_address == delegator_address then only one
+  // is expected to sign, otherwise both are.
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (cosmos.msg.v1.signer) = "validator_address";
+  option (amino.name) = "cosmos-sdk/MsgCreateValidator";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  Description description = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  CommissionRates commission = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  string min_self_delegation = 3 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+  string delegator_address = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 5 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  google.protobuf.Any pubkey = 6 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"];
+  cosmos.base.v1beta1.Coin value = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message is expected to fail if:
+
+* another validator with this operator address is already registered
+* another validator with this pubkey is already registered
+* the initial self-delegation tokens are of a denom not specified as the bonding denom
+* the commission parameters are faulty, namely:
+  * `MaxRate` is either > 1 or < 0
+  * the initial `Rate` is either negative or > `MaxRate`
+  * the initial `MaxChangeRate` is either negative or > `MaxRate`
+* the description fields are too large
+
+This message creates and stores the `Validator` object at appropriate indexes.
+Additionally, a self-delegation `Delegation` is made with the initial delegation tokens.
+The validator always starts as unbonded but may be bonded
+in the first end-block.
+
+### MsgEditValidator
+
+The `Description` and `CommissionRate` of a validator can be updated using the
+`MsgEditValidator` message.
+
+```protobuf
+  // EditValidator defines a method for editing an existing validator.
+ rpc EditValidator(MsgEditValidator) returns (MsgEditValidatorResponse); +``` + +```protobuf +// MsgEditValidator defines a SDK message for editing an existing validator. +message MsgEditValidator { + option (cosmos.msg.v1.signer) = "validator_address"; + option (amino.name) = "cosmos-sdk/MsgEditValidator"; + + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + Description description = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // We pass a reference to the new commission rate and min self delegation as + // it's not mandatory to update. If not updated, the deserialized rate will be + // zero with no way to distinguish if an update was intended. + // REF: #2373 + string commission_rate = 3 + [(cosmos_proto.scalar) = "cosmos.Dec", (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec"]; + string min_self_delegation = 4 + [(cosmos_proto.scalar) = "cosmos.Int", (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int"]; +} +``` + +This message is expected to fail if: + +* the initial `CommissionRate` is either negative or > `MaxRate` +* the `CommissionRate` has already been updated within the previous 24 hours +* the `CommissionRate` is > `MaxChangeRate` +* the description fields are too large + +This message stores the updated `Validator` object. + +### MsgDelegate + +Within this message the delegator provides coins, and in return receives +some amount of their validator's (newly created) delegator-shares that are +assigned to `Delegation.Shares`. + +```protobuf + // Delegate defines a method for performing a delegation of coins + // from a delegator to a validator. + rpc Delegate(MsgDelegate) returns (MsgDelegateResponse); +``` + +```protobuf +// MsgDelegate defines a SDK message for performing a delegation of coins +// from a delegator to a validator. 
+message MsgDelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgDelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the validator does not exist
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+* the exchange rate is invalid, meaning the validator has no tokens (due to slashing) but there are outstanding shares
+* the amount delegated is less than the minimum allowed delegation
+
+If an existing `Delegation` object for the provided addresses does not already
+exist, it is created as part of this message; otherwise the existing
+`Delegation` is updated to include the newly received shares.
+
+The delegator receives newly minted shares at the current exchange rate.
+The exchange rate is the number of existing shares in the validator divided by
+the number of currently delegated tokens.
+
+The validator is updated in the `ValidatorByPower` index, and the delegation is
+tracked in the validator object in the `Validators` index.
+
+It is possible to delegate to a jailed validator, the only difference being that it
+will not be added to the power index until it is unjailed.
+
+![Delegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/delegation_sequence.svg)
+
+### MsgUndelegate
+
+The `MsgUndelegate` message allows delegators to undelegate their tokens from
+a validator.
+
+```protobuf
+  // Undelegate defines a method for performing an undelegation from a
+  // delegate and a validator.
+ rpc Undelegate(MsgUndelegate) returns (MsgUndelegateResponse);
+```
+
+```protobuf
+// MsgUndelegate defines a SDK message for performing an undelegation from a
+// delegate and a validator.
+message MsgUndelegate {
+ option (cosmos.msg.v1.signer) = "delegator_address";
+ option (amino.name) = "cosmos-sdk/MsgUndelegate";
+
+ option (gogoproto.equal) = false;
+ option (gogoproto.goproto_getters) = false;
+
+ string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message returns a response containing the completion time of the undelegation:
+
+```protobuf
+// MsgUndelegateResponse defines the Msg/Undelegate response type.
+message MsgUndelegateResponse {
+ google.protobuf.Timestamp completion_time = 1
+ [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the validator doesn't exist
+* the delegation has fewer shares than are worth `Amount`
+* the existing `UnbondingDelegation` already has the maximum number of entries as defined by `params.MaxEntries`
+* the `Amount` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares; remove that amount of tokens held within the validator
+* with those removed tokens, if the validator is:
+ * `Bonded` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. Update pool shares to reduce BondedTokens and increase NotBondedTokens by token worth of the shares.
+ * `Unbonding` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+ * `Unbonded` - then send the coins to the message `DelegatorAddr`
+* if there are no more `Shares` in the delegation, then the delegation object is removed from the store
+ * under this situation if the delegation is the validator's self-delegation then also jail the validator.
+
+![Unbond sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/unbond_sequence.svg)
+
+### MsgCancelUnbondingDelegation
+
+The `MsgCancelUnbondingDelegation` message allows delegators to cancel the `unbondingDelegation` entry and delegate back to a previous validator.
+
+```protobuf
+ // CancelUnbondingDelegation defines a method for performing canceling the unbonding delegation
+ // and delegate back to previous validator.
+ //
+ // Since: cosmos-sdk 0.46
+ rpc CancelUnbondingDelegation(MsgCancelUnbondingDelegation) returns (MsgCancelUnbondingDelegationResponse);
+```
+
+```protobuf
+// MsgCancelUnbondingDelegation defines the SDK message for performing a cancel unbonding delegation for delegator
+//
+// Since: cosmos-sdk 0.46
+message MsgCancelUnbondingDelegation {
+ option (cosmos.msg.v1.signer) = "delegator_address";
+ option (amino.name) = "cosmos-sdk/MsgCancelUnbondingDelegation";
+ option (gogoproto.equal) = false;
+ option (gogoproto.goproto_getters) = false;
+
+ string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ // amount is always less than or equal to unbonding delegation entry balance
+ cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+ // creation_height is the height which the unbonding took place.
+ int64 creation_height = 4;
+}
+```
+
+This message is expected to fail if:
+
+* the `unbondingDelegation` entry has already been processed.
+* the `cancel unbonding delegation` amount is greater than the `unbondingDelegation` entry balance.
+* the `cancel unbonding delegation` height doesn't exist in the `unbondingDelegationQueue` of the delegator.
+
+When this message is processed the following actions occur:
+
+* if the remaining `unbondingDelegation` entry balance is zero, the entry is removed from the `unbondingDelegationQueue`.
+* otherwise, the `unbondingDelegationQueue` is updated with the new `unbondingDelegation` entry balance and initial balance.
+* the validator's `DelegatorShares` and the delegation's `Shares` are both increased by the message `Amount`.
+
+### MsgBeginRedelegate
+
+The redelegation command allows delegators to instantly switch validators. Once
+the unbonding period has passed, the redelegation is automatically completed in
+the EndBlocker.
+
+```protobuf
+ // BeginRedelegate defines a method for performing a redelegation
+ // of coins from a delegator and source validator to a destination validator.
+ rpc BeginRedelegate(MsgBeginRedelegate) returns (MsgBeginRedelegateResponse);
+```
+
+```protobuf
+// MsgBeginRedelegate defines a SDK message for performing a redelegation
+// of coins from a delegator and source validator to a destination validator.
+message MsgBeginRedelegate {
+ option (cosmos.msg.v1.signer) = "delegator_address";
+ option (amino.name) = "cosmos-sdk/MsgBeginRedelegate";
+
+ option (gogoproto.equal) = false;
+ option (gogoproto.goproto_getters) = false;
+
+ string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ cosmos.base.v1beta1.Coin amount = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message returns a response containing the completion time of the redelegation:
+
+```protobuf
+// MsgBeginRedelegateResponse defines the Msg/BeginRedelegate response type.
+message MsgBeginRedelegateResponse {
+ google.protobuf.Timestamp completion_time = 1
+ [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the source or destination validators don't exist
+* the delegation has fewer shares than are worth `Amount`
+* the source validator has a receiving redelegation which is not matured (aka. the redelegation may be transitive)
+* the existing `Redelegation` already has the maximum number of entries as defined by `params.MaxEntries`
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the source validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares; remove that amount of tokens held within the source validator
+* if the source validator is:
+ * `Bonded` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with a completion time a full unbonding period from the current time.
Update pool shares to reduce BondedTokens and increase NotBondedTokens by token worth of the shares (this may be effectively reversed in the next step, however).
+ * `Unbonding` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+ * `Unbonded` - no action required in this step
+* delegate the token worth to the destination validator, possibly moving tokens back to the bonded state
+* if there are no more `Shares` in the source delegation, then the source delegation object is removed from the store
+ * under this situation if the delegation is the validator's self-delegation then also jail the validator.
+
+![Begin redelegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/begin_redelegation_sequence.svg)
+
+### MsgUpdateParams
+
+The `MsgUpdateParams` message updates the staking module parameters.
+The params are updated through a governance proposal where the signer is the gov module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+ option (cosmos.msg.v1.signer) = "authority";
+ option (amino.name) = "cosmos-sdk/x/staking/MsgUpdateParams";
+
+ // authority is the address that controls the module (defaults to x/gov unless overwritten).
+ string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ // params defines the x/staking parameters to update.
+ //
+ // NOTE: All parameters must be supplied.
+ Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+};
+```
+
+The message handling can fail if:
+
+* the signer is not the authority defined in the staking keeper (usually the gov module account).
+
+## Begin-Block
+
+On each ABCI begin block call, the historical info is stored and pruned
+according to the `HistoricalEntries` parameter.
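This store-then-prune step can be sketched as follows. This is a simplified model rather than the actual keeper code: the `HistoricalInfo` struct and the in-memory `store` map are stand-ins for the real snapshot type and KV store.

```go
package main

import "fmt"

// HistoricalInfo stands in for the per-height header/valset snapshot
// kept by the real keeper.
type HistoricalInfo struct {
	Height int64
}

// trackHistoricalInfo models the BeginBlock step: persist the entry for
// the current height, then prune entries so that at most
// historicalEntries recent heights remain. A parameter value of 0
// disables tracking entirely (no-op).
func trackHistoricalInfo(store map[int64]HistoricalInfo, height, historicalEntries int64) {
	if historicalEntries == 0 {
		return
	}
	store[height] = HistoricalInfo{Height: height}
	for h := range store {
		if h <= height-historicalEntries {
			delete(store, h)
		}
	}
}

func main() {
	store := map[int64]HistoricalInfo{}
	for h := int64(1); h <= 10; h++ {
		trackHistoricalInfo(store, h, 3)
	}
	fmt.Println(len(store)) // only the 3 most recent heights remain
}
```

In steady state exactly one entry is pruned per block; if the parameter is lowered, the pruning loop removes the now-excess older entries in a single block.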
+
+### Historical Info Tracking
+
+If the `HistoricalEntries` parameter is 0, then `BeginBlock` performs a no-op.
+
+Otherwise, the latest historical info is stored under the key `historicalInfoKey|height`, while any entries older than `height - HistoricalEntries` are deleted.
+In most cases, this results in a single entry being pruned per block.
+However, if the parameter `HistoricalEntries` has changed to a lower value, there will be multiple entries in the store that must be pruned.
+
+## End-Block
+
+On each ABCI end block call, the operations that update queues and process validator set
+changes are executed.
+
+### Validator Set Changes
+
+The staking validator set is updated during this process by state transitions
+that run at the end of every block. As a part of this process any updated
+validators are also returned back to CometBFT for inclusion in the CometBFT
+validator set which is responsible for validating CometBFT messages at the
+consensus layer. Operations are as follows:
+
+* the new validator set is taken as the top `params.MaxValidators` number of
+ validators retrieved from the `ValidatorsByPower` index
+* the previous validator set is compared with the new validator set:
+ * missing validators begin unbonding and their `Tokens` are transferred from the
+ `BondedPool` to the `NotBondedPool` `ModuleAccount`
+ * new validators are instantly bonded and their `Tokens` are transferred from the
+ `NotBondedPool` to the `BondedPool` `ModuleAccount`
+
+In all cases, any validators leaving or entering the bonded validator set or
+changing balances and staying within the bonded validator set incur an update
+message reporting their new consensus power, which is passed back to CometBFT.
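The comparison step can be sketched as a set diff over consensus powers. This is an illustrative model, not SDK API: `PowerUpdate` and `diffValidatorSets` are hypothetical names, and reporting a dropped validator with power 0 follows the CometBFT update convention for removals.

```go
package main

import (
	"fmt"
	"sort"
)

// PowerUpdate mirrors the shape of an update handed back to the
// consensus engine: a validator identifier and its new power
// (0 signals removal from the active set).
type PowerUpdate struct {
	Addr  string
	Power int64
}

// diffValidatorSets compares the previous and new power maps and returns
// one update per validator whose power changed, including zero-power
// entries for validators that dropped out of the bonded set.
func diffValidatorSets(prev, next map[string]int64) []PowerUpdate {
	var updates []PowerUpdate
	for addr, power := range next {
		if prev[addr] != power {
			updates = append(updates, PowerUpdate{addr, power})
		}
	}
	for addr := range prev {
		if _, ok := next[addr]; !ok {
			updates = append(updates, PowerUpdate{addr, 0})
		}
	}
	// sort for deterministic output; the real module has its own ordering
	sort.Slice(updates, func(i, j int) bool { return updates[i].Addr < updates[j].Addr })
	return updates
}

func main() {
	prev := map[string]int64{"valA": 100, "valB": 50}
	next := map[string]int64{"valA": 120, "valC": 30}
	// one update each for valA (changed), valB (removed), valC (new)
	fmt.Println(diffValidatorSets(prev, next))
}
```

Validators whose power is unchanged between blocks produce no update, which keeps the per-block message to CometBFT small.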
+
+The `LastTotalPower` and `LastValidatorsPower` hold the state of the total power
+and validator power from the end of the last block, and are used to check for
+changes that have occurred in `ValidatorsByPower` and the total new power, which
+is calculated during `EndBlock`.
+
+### Queues
+
+Within staking, certain state-transitions are not instantaneous but take place
+over a duration of time (typically the unbonding period). When these
+transitions mature, certain operations must take place in order to complete
+the state operation. This is achieved through the use of queues which are
+checked/processed at the end of each block.
+
+#### Unbonding Validators
+
+When a validator is kicked out of the bonded validator set (either through
+being jailed or not having sufficient bonded tokens), it begins the unbonding
+process, and all of its delegations begin unbonding as well (while still being
+delegated to this validator). At this point the validator is said to be an
+"unbonding validator", whereby it will mature to become an "unbonded validator"
+after the unbonding period has passed.
+
+Each block, the validator queue is checked for mature unbonding validators
+(namely those with a completion time `<=` current time and completion height `<=` current
+block height). At this point any mature validators which do not have any
+delegations remaining are deleted from state. For all other mature unbonding
+validators that still have remaining delegations, the `validator.Status` is
+switched from `types.Unbonding` to `types.Unbonded`.
+
+Unbonding operations can be put on hold by external modules via the `PutUnbondingOnHold(unbondingId)` method.
+As a result, an unbonding operation (e.g., an unbonding delegation) that is on hold cannot complete
+even if it reaches maturity.
For an unbonding operation with `unbondingId` to eventually complete +(after it reaches maturity), every call to `PutUnbondingOnHold(unbondingId)` must be matched +by a call to `UnbondingCanComplete(unbondingId)`. + +#### Unbonding Delegations + +Complete the unbonding of all mature `UnbondingDelegations.Entries` within the +`UnbondingDelegations` queue with the following procedure: + +* transfer the balance coins to the delegator's wallet address +* remove the mature entry from `UnbondingDelegation.Entries` +* remove the `UnbondingDelegation` object from the store if there are no + remaining entries. + +#### Redelegations + +Complete the unbonding of all mature `Redelegation.Entries` within the +`Redelegations` queue with the following procedure: + +* remove the mature entry from `Redelegation.Entries` +* remove the `Redelegation` object from the store if there are no + remaining entries. + +## Hooks + +Other modules may register operations to execute when a certain event has +occurred within staking. These events can be registered to execute either +right `Before` or `After` the staking event (as per the hook name). 
The
+following hooks can be registered with staking:
+
+* `AfterValidatorCreated(Context, ValAddress) error`
+ * called when a validator is created
+* `BeforeValidatorModified(Context, ValAddress) error`
+ * called when a validator's state is changed
+* `AfterValidatorRemoved(Context, ConsAddress, ValAddress) error`
+ * called when a validator is deleted
+* `AfterValidatorBonded(Context, ConsAddress, ValAddress) error`
+ * called when a validator is bonded
+* `AfterValidatorBeginUnbonding(Context, ConsAddress, ValAddress) error`
+ * called when a validator begins unbonding
+* `BeforeDelegationCreated(Context, AccAddress, ValAddress) error`
+ * called when a delegation is created
+* `BeforeDelegationSharesModified(Context, AccAddress, ValAddress) error`
+ * called when a delegation's shares are modified
+* `AfterDelegationModified(Context, AccAddress, ValAddress) error`
+ * called when a delegation is created or modified
+* `BeforeDelegationRemoved(Context, AccAddress, ValAddress) error`
+ * called when a delegation is removed
+* `AfterUnbondingInitiated(Context, UnbondingID)`
+ * called when an unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated
+
+## Events
+
+The staking module emits the following events:
+
+### EndBlocker
+
+| Type | Attribute Key | Attribute Value |
+| ---------------------- | ---------------------- | ------------------------- |
+| complete\_unbonding | amount | `{totalUnbondingAmount}` |
+| complete\_unbonding | validator | `{validatorAddress}` |
+| complete\_unbonding | delegator | `{delegatorAddress}` |
+| complete\_redelegation | amount | `{totalRedelegationAmount}` |
+| complete\_redelegation | source\_validator | `{srcValidatorAddress}` |
+| complete\_redelegation | destination\_validator | `{dstValidatorAddress}` |
+| complete\_redelegation | delegator | `{delegatorAddress}` |
+
+## Msg's
+
+### MsgCreateValidator
+
+| Type | Attribute Key | Attribute Value |
+| ----------------- | ------------- |
------------------ | +| create\_validator | validator | `{validatorAddress}` | +| create\_validator | amount | `{delegationAmount}` | +| message | module | staking | +| message | action | create\_validator | +| message | sender | `{senderAddress}` | + +### MsgEditValidator + +| Type | Attribute Key | Attribute Value | +| --------------- | --------------------- | ------------------- | +| edit\_validator | commission\_rate | `{commissionRate}` | +| edit\_validator | min\_self\_delegation | `{minSelfDelegation}` | +| message | module | staking | +| message | action | edit\_validator | +| message | sender | `{senderAddress}` | + +### MsgDelegate + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| delegate | validator | `{validatorAddress}` | +| delegate | amount | `{delegationAmount}` | +| message | module | staking | +| message | action | delegate | +| message | sender | `{senderAddress}` | + +### MsgUndelegate + +| Type | Attribute Key | Attribute Value | +| ------- | --------------------- | ------------------ | +| unbond | validator | `{validatorAddress}` | +| unbond | amount | `{unbondAmount}` | +| unbond | completion\_time \[0] | `{completionTime}` | +| message | module | staking | +| message | action | begin\_unbonding | +| message | sender | `{senderAddress}` | + +* \[0] Time is formatted in the RFC3339 standard + +### MsgCancelUnbondingDelegation + +| Type | Attribute Key | Attribute Value | +| ----------------------------- | ---------------- | --------------------------------- | +| cancel\_unbonding\_delegation | validator | `{validatorAddress}` | +| cancel\_unbonding\_delegation | delegator | `{delegatorAddress}` | +| cancel\_unbonding\_delegation | amount | `{cancelUnbondingDelegationAmount}` | +| cancel\_unbonding\_delegation | creation\_height | `{unbondingCreationHeight}` | +| message | module | staking | +| message | action | cancel\_unbond | +| message | sender | `{senderAddress}` | + +### 
MsgBeginRedelegate + +| Type | Attribute Key | Attribute Value | +| ---------- | ---------------------- | --------------------- | +| redelegate | source\_validator | `{srcValidatorAddress}` | +| redelegate | destination\_validator | `{dstValidatorAddress}` | +| redelegate | amount | `{unbondAmount}` | +| redelegate | completion\_time \[0] | `{completionTime}` | +| message | module | staking | +| message | action | begin\_redelegate | +| message | sender | `{senderAddress}` | + +* \[0] Time is formatted in the RFC3339 standard + +## Parameters + +The staking module contains the following parameters: + +| Key | Type | Example | +| ----------------- | ---------------- | ---------------------- | +| UnbondingTime | string (time ns) | "259200000000000" | +| MaxValidators | uint16 | 100 | +| KeyMaxEntries | uint16 | 7 | +| HistoricalEntries | uint16 | 3 | +| BondDenom | string | "stake" | +| MinCommissionRate | string | "0.000000000000000000" | + +## Client + +### CLI + +A user can query and interact with the `staking` module using the CLI. + +#### Query + +The `query` commands allows users to query `staking` state. + +```bash +simd query staking --help +``` + +##### delegation + +The `delegation` command allows users to query delegations for an individual delegator on an individual validator. + +Usage: + +```bash +simd query staking delegation [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +balance: + amount: "10000000000" + denom: stake +delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### delegations + +The `delegations` command allows users to query delegations for an individual delegator on all validators. 
+ +Usage: + +```bash +simd query staking delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +delegation_responses: +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1x20lytyf6zkcrv5edpkfkn8sz578qg5sqfyqnp +pagination: + next_key: null + total: "0" +``` + +##### delegations-to + +The `delegations-to` command allows users to query delegations on an individual validator. + +Usage: + +```bash +simd query staking delegations-to [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations-to cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +- balance: + amount: "504000000" + denom: stake + delegation: + delegator_address: cosmos1q2qwwynhv8kh3lu5fkeex4awau9x8fwt45f5cp + shares: "504000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "78125000000" + denom: uixo + delegation: + delegator_address: cosmos1qvppl3479hw4clahe0kwdlfvf8uvjtcd99m2ca + shares: "78125000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +pagination: + next_key: null + total: "0" +``` + +##### historical-info + +The `historical-info` command allows users to query historical information at given height. 
+ +Usage: + +```bash +simd query staking historical-info [height] [flags] +``` + +Example: + +```bash +simd query staking historical-info 10 +``` + +Example Output: + +```bash expandable +header: + app_hash: Lbx8cXpI868wz8sgp4qPYVrlaKjevR5WP/IjUxwp3oo= + chain_id: testnet + consensus_hash: BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8= + data_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + evidence_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + height: "10" + last_block_id: + hash: RFbkpu6pWfSThXxKKl6EZVDnBSm16+U0l0xVjTX08Fk= + part_set_header: + hash: vpIvXD4rxD5GM4MXGz0Sad9I7/iVYLzZsEU4BVgWIU= + total: 1 + last_commit_hash: Ne4uXyx4QtNp4Zx89kf9UK7oG9QVbdB6e7ZwZkhy8K0= + last_results_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + next_validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + proposer_address: mMEP2c2IRPLr99LedSRtBg9eONM= + time: "2021-10-01T06:00:49.785790894Z" + validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + version: + app: "0" + block: "11" +valset: +- commission: + commission_rates: + max_change_rate: "0.010000000000000000" + max_rate: "0.200000000000000000" + rate: "0.100000000000000000" + update_time: "2021-10-01T05:52:50.380144238Z" + consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8= + delegator_shares: "10000000.000000000000000000" + description: + details: "" + identity: "" + moniker: myvalidator + security_contact: "" + website: "" + jailed: false + min_self_delegation: "1" + operator_address: cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc + status: BOND_STATUS_BONDED + tokens: "10000000" + unbonding_height: "0" + unbonding_time: "1970-01-01T00:00:00Z" +``` + +##### params + +The `params` command allows users to query values set as staking parameters. 
+ +Usage: + +```bash +simd query staking params [flags] +``` + +Example: + +```bash +simd query staking params +``` + +Example Output: + +```bash +bond_denom: stake +historical_entries: 10000 +max_entries: 7 +max_validators: 50 +unbonding_time: 1814400s +``` + +##### pool + +The `pool` command allows users to query values for amounts stored in the staking pool. + +Usage: + +```bash +simd q staking pool [flags] +``` + +Example: + +```bash +simd q staking pool +``` + +Example Output: + +```bash +bonded_tokens: "10000000" +not_bonded_tokens: "0" +``` + +##### redelegation + +The `redelegation` command allows users to query a redelegation record based on delegator and a source and destination validator address. + +Usage: + +```bash +simd query staking redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +pagination: null +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm + validator_src_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm +``` + +##### redelegations + +The `redelegations` command allows users to query all redelegation records for an individual delegator. 
+ +Usage: + +```bash +simd query staking redelegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp +- entries: + - balance: "562770000000" + redelegation_entry: + completion_time: "2021-10-25T21:42:07.336911677Z" + creation_height: 2.39735e+06 + initial_balance: "562770000000" + shares_dst: "562770000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp +``` + +##### redelegations-from + +The `redelegations-from` command allows users to query delegations that are redelegating *from* a validator. 
+ +Usage: + +```bash +simd query staking redelegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegations-from cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1pm6e78p4pgn0da365plzl4t56pxy8hwtqp2mph + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +- entries: + - balance: "221000000" + redelegation_entry: + completion_time: "2021-10-05T21:05:45.669420544Z" + creation_height: 2.120693e+06 + initial_balance: "221000000" + shares_dst: "221000000.000000000000000000" + redelegation: + delegator_address: cosmos1zqv8qxy2zgn4c58fz8jt8jmhs3d0attcussrf6 + entries: null + validator_dst_address: cosmosvaloper10mseqwnwtjaqfrwwp2nyrruwmjp6u5jhah4c3y + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +##### unbonding-delegation + +The `unbonding-delegation` command allows users to query unbonding delegations for an individual delegator on an individual validator. 
+ +Usage: + +```bash +simd query staking unbonding-delegation [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +entries: +- balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" +validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### unbonding-delegations + +The `unbonding-delegations` command allows users to query all unbonding-delegations records for one delegator. + +Usage: + +```bash +simd query staking unbonding-delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: + - balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" + validator_address: cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa + +``` + +##### unbonding-delegations-from + +The `unbonding-delegations-from` command allows users to query delegations that are unbonding *from* a validator. 
+ +Usage: + +```bash +simd query staking unbonding-delegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations-from cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1qqq9txnw4c77sdvzx0tkedsafl5s3vk7hn53fn + entries: + - balance: "150000000" + completion_time: "2021-11-01T21:41:13.098141574Z" + creation_height: "46823" + initial_balance: "150000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- delegator_address: cosmos1peteje73eklqau66mr7h7rmewmt2vt99y24f5z + entries: + - balance: "24000000" + completion_time: "2021-10-31T02:57:18.192280361Z" + creation_height: "21516" + initial_balance: "24000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### validator + +The `validator` command allows users to query details about an individual validator. + +Usage: + +```bash +simd query staking validator [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking validator cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. 
+ identity: 51468B615127273A + moniker: Witval + security_contact: "" + website: "" +jailed: false +min_self_delegation: "1" +operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +status: BOND_STATUS_BONDED +tokens: "32948270000" +unbonding_height: "0" +unbonding_time: "1970-01-01T00:00:00Z" +``` + +##### validators + +The `validators` command allows users to query details about all validators on a network. + +Usage: + +```bash +simd query staking validators [flags] +``` + +Example: + +```bash +simd query staking validators +``` + +Example Output: + +```bash expandable +pagination: + next_key: FPTi7TKAjN63QqZh+BaXn6gBmD5/ + total: "0" +validators: +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. 
+  identity: 51468B615127273A
+  moniker: Witval
+  security_contact: ""
+  website: ""
+jailed: false
+min_self_delegation: "1"
+operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+status: BOND_STATUS_BONDED
+tokens: "32948270000"
+unbonding_height: "0"
+unbonding_time: "1970-01-01T00:00:00Z"
+- commission:
+    commission_rates:
+      max_change_rate: "0.100000000000000000"
+      max_rate: "0.200000000000000000"
+      rate: "0.050000000000000000"
+    update_time: "2021-10-04T18:02:21.446645619Z"
+  consensus_pubkey:
+    '@type': /cosmos.crypto.ed25519.PubKey
+    key: GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=
+  delegator_shares: "559343421.000000000000000000"
+  description:
+    details: Noderunners is a professional validator in POS networks. We have a huge
+      node running experience, reliable soft and hardware. Our commissions are always
+      low, our support to delegators is always full. Stake with us and start receiving
+      your Cosmos rewards now!
+    identity: 812E82D12FEA3493
+    moniker: Noderunners
+    security_contact: info@noderunners.biz
+    website: http://noderunners.biz
+  jailed: false
+  min_self_delegation: "1"
+  operator_address: cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7
+  status: BOND_STATUS_BONDED
+  tokens: "559343421"
+  unbonding_height: "0"
+  unbonding_time: "1970-01-01T00:00:00Z"
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `staking` module.
+
+```bash
+simd tx staking --help
+```
+
+##### create-validator
+
+The command `create-validator` allows users to create a new validator, initialized with a self-delegation.
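The commission and self-delegation values supplied at creation must satisfy the module's invariants: the initial rate must lie between zero and the maximum rate, the maximum change rate cannot exceed the maximum rate, and the minimum self-delegation must be a positive integer. A minimal client-side pre-flight check, sketched in Python (the chain enforces these rules itself and rejects invalid values; the helper and its field names are ours, following the `validator.json` layout used here):

```python
import json

# Sanity-check the commission fields of a create-validator JSON file
# before broadcasting. This is only a convenience sketch; x/staking
# performs the authoritative validation on-chain.
def check_commission(raw: str) -> list:
    cfg = json.loads(raw)
    rate = float(cfg["commission-rate"])
    max_rate = float(cfg["commission-max-rate"])
    max_change = float(cfg["commission-max-change-rate"])
    problems = []
    if not (0.0 <= rate <= max_rate):
        problems.append("commission-rate must lie in [0, commission-max-rate]")
    if max_change > max_rate:
        problems.append("commission-max-change-rate cannot exceed commission-max-rate")
    if int(cfg["min-self-delegation"]) < 1:
        problems.append("min-self-delegation must be at least 1")
    return problems

sample = ('{"commission-rate": "0.10", "commission-max-rate": "0.20", '
          '"commission-max-change-rate": "0.01", "min-self-delegation": "1"}')
print(check_commission(sample))  # []
```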
+ +Usage: + +```bash +simd tx staking create-validator [path/to/validator.json] [flags] +``` + +Example: + +```bash +simd tx staking create-validator /path/to/validator.json \ + --chain-id="name_of_chain_id" \ + --gas="auto" \ + --gas-adjustment="1.2" \ + --gas-prices="0.025stake" \ + --from=mykey +``` + +where `validator.json` contains: + +```json expandable +{ + "pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "BnbwFpeONLqvWqJb3qaUbL5aoIcW3fSuAp9nT3z5f20=" + }, + "amount": "1000000stake", + "moniker": "my-moniker", + "website": "https://myweb.site", + "security": "security-contact@gmail.com", + "details": "description of your validator", + "commission-rate": "0.10", + "commission-max-rate": "0.20", + "commission-max-change-rate": "0.01", + "min-self-delegation": "1" +} +``` + +and pubkey can be obtained by using `simd tendermint show-validator` command. + +##### delegate + +The command `delegate` allows users to delegate liquid tokens to a validator. + +Usage: + +```bash +simd tx staking delegate [validator-addr] [amount] [flags] +``` + +Example: + +```bash +simd tx staking delegate cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 1000stake --from mykey +``` + +##### edit-validator + +The command `edit-validator` allows users to edit an existing validator account. + +Usage: + +```bash +simd tx staking edit-validator [flags] +``` + +Example: + +```bash +simd tx staking edit-validator --moniker "new_moniker_name" --website "new_website_url" --from mykey +``` + +##### redelegate + +The command `redelegate` allows users to redelegate illiquid tokens from one validator to another. 
+
+Usage:
+
+```bash
+simd tx staking redelegate [src-validator-addr] [dst-validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking redelegate cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 100stake --from mykey
+```
+
+##### unbond
+
+The command `unbond` allows users to unbond shares from a validator.
+
+Usage:
+
+```bash
+simd tx staking unbond [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake --from mykey
+```
+
+##### cancel-unbond
+
+The command `cancel-unbond` allows users to cancel an unbonding delegation entry and delegate back to the original validator.
+
+Usage:
+
+```bash
+simd tx staking cancel-unbond [validator-addr] [amount] [creation-height]
+```
+
+Example:
+
+```bash
+simd tx staking cancel-unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake 123123 --from mykey
+```
+
+### gRPC
+
+A user can query the `staking` module using gRPC endpoints.
+
+#### Validators
+
+The `Validators` endpoint queries all validators that match the given status.
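Like most staking list queries, the response is paginated: a non-empty `pagination.next_key` is passed back as the page key of the next request until it comes back empty. A client-side sketch of that loop (`fetch_page` is a hypothetical stand-in for the actual RPC call):

```python
# Walk a paginated staking query by threading pagination.next_key
# through successive requests. fetch_page(key) is a hypothetical
# stand-in for issuing the RPC with the given page key.
def collect_all(fetch_page):
    items, key = [], None
    while True:
        page = fetch_page(key)
        items.extend(page["validators"])
        key = page["pagination"].get("next_key")
        if not key:  # null/empty next_key marks the last page
            return items

# Fake two-page backend, for illustration only:
pages = {
    None:  {"validators": ["val1", "val2"], "pagination": {"next_key": "abc"}},
    "abc": {"validators": ["val3"], "pagination": {"next_key": None}},
}
print(collect_all(lambda key: pages[key]))  # ['val1', 'val2', 'val3']
```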
+ +```bash +cosmos.staking.v1beta1.Query/Validators +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Validators +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="}, + "status": "BOND_STATUS_BONDED", + "tokens": "10000000", + "delegatorShares": "10000000000000000000000000", + "description": { + "moniker": "myvalidator" + }, + "unbondingTime": "1970-01-01T00:00:00Z", + "commission": { + "commissionRates": { + "rate": "100000000000000000", + "maxRate": "200000000000000000", + "maxChangeRate": "10000000000000000" + }, + "updateTime": "2021-10-01T05:52:50.380144238Z" + }, + "minSelfDelegation": "1" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### Validator + +The `Validator` endpoint queries validator information for given validator address. 
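In both the `Validators` and `Validator` responses, `tokens` and `delegatorShares` together define the validator's exchange rate, so a delegation's token value is its shares multiplied by tokens-per-share. A sketch of that conversion (the helper is ours, not an SDK function):

```python
from decimal import Decimal

# Token value of a delegation:
# shares * (validator.tokens / validator.delegator_shares).
def tokens_from_shares(shares, validator_tokens, delegator_shares):
    return Decimal(shares) * Decimal(validator_tokens) / Decimal(delegator_shares)

# With the example validator above (10000000 tokens backing the full
# share pool), the whole pool is worth exactly the validator's tokens.
value = tokens_from_shares("10000000000000000000000000", "10000000",
                           "10000000000000000000000000")
print(value == Decimal("10000000"))  # True
```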
+ +```bash +cosmos.staking.v1beta1.Query/Validator +``` + +Example: + +```bash +grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/Validator +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="}, + "status": "BOND_STATUS_BONDED", + "tokens": "10000000", + "delegatorShares": "10000000000000000000000000", + "description": { + "moniker": "myvalidator" + }, + "unbondingTime": "1970-01-01T00:00:00Z", + "commission": { + "commissionRates": { + "rate": "100000000000000000", + "maxRate": "200000000000000000", + "maxChangeRate": "10000000000000000" + }, + "updateTime": "2021-10-01T05:52:50.380144238Z" + }, + "minSelfDelegation": "1" + } +} +``` + +#### ValidatorDelegations + +The `ValidatorDelegations` endpoint queries delegate information for given validator. + +```bash +cosmos.staking.v1beta1.Query/ValidatorDelegations +``` + +Example: + +```bash +grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/ValidatorDelegations +``` + +Example Output: + +```bash expandable +{ + "delegationResponses": [ + { + "delegation": { + "delegatorAddress": "cosmos1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgy3ua5t", + "validatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "shares": "10000000000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "10000000" + } + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### ValidatorUnbondingDelegations + +The `ValidatorUnbondingDelegations` endpoint queries delegate information for given validator. 
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example Output:
+
+```bash expandable
+{
+  "unbonding_responses": [
+    {
+      "delegator_address": "cosmos1z3pzzw84d6xn00pw9dy3yapqypfde7vg6965fy",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "entries": [
+        {
+          "creation_height": "25325",
+          "completion_time": "2021-10-31T09:24:36.797320636Z",
+          "initial_balance": "20000000",
+          "balance": "20000000"
+        }
+      ]
+    },
+    {
+      "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "entries": [
+        {
+          "creation_height": "13100",
+          "completion_time": "2021-10-30T12:53:02.272266791Z",
+          "initial_balance": "1000000",
+          "balance": "1000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "8"
+  }
+}
+```
+
+#### Delegation
+
+The `Delegation` endpoint queries delegate information for given validator delegator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example Output:
+
+```bash expandable
+{
+  "delegation_response":
+  {
+    "delegation":
+      {
+        "delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+        "validator_address":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+        "shares":"25083119936.000000000000000000"
+      },
+    "balance":
+      {
+        "denom":"stake",
+        "amount":"25083119936"
+      }
+  }
+}
+```
+
+#### UnbondingDelegation
+
+The `UnbondingDelegation` endpoint queries unbonding information for given validator delegator pair.
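Each unbonding entry carries an RFC 3339 `completion_time`. A small Python sketch of computing how long an entry still has to wait, relative to a reference time (the parsing helper is ours; it trims the chain's nanosecond precision down to the microseconds Python's `datetime` supports):

```python
from datetime import datetime

# Parse an RFC 3339 timestamp with nanosecond precision, as emitted by
# the chain, by truncating the fraction to microseconds.
def parse_rfc3339(ts: str) -> datetime:
    head, frac = ts.rstrip("Z").split(".")
    return datetime.fromisoformat(f"{head}.{frac[:6]}+00:00")

# Seconds remaining until an unbonding entry completes, given `now`.
def seconds_remaining(completion_time: str, now: datetime) -> float:
    return (parse_rfc3339(completion_time) - now).total_seconds()

now = parse_rfc3339("2021-11-08T05:38:00.000000000Z")
print(seconds_remaining("2021-11-08T05:38:47.505593891Z", now))  # ~47.5
```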
+
+```bash
+cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example Output:
+
+```bash expandable
+{
+  "unbond": {
+    "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+    "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+    "entries": [
+      {
+        "creation_height": "136984",
+        "completion_time": "2021-11-08T05:38:47.505593891Z",
+        "initial_balance": "400000000",
+        "balance": "400000000"
+      },
+      {
+        "creation_height": "137005",
+        "completion_time": "2021-11-08T05:40:53.526196312Z",
+        "initial_balance": "385000000",
+        "balance": "385000000"
+      }
+    ]
+  }
+}
+```
+
+#### DelegatorDelegations
+
+The `DelegatorDelegations` endpoint queries all delegations of a given delegator address.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example Output:
+
+```bash
+{
+  "delegation_responses": [
+    {"delegation":{"delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77","validator_address":"cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8","shares":"25083339023.000000000000000000"},"balance":{"denom":"stake","amount":"25083339023"}}
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+#### DelegatorUnbondingDelegations
+
+The `DelegatorUnbondingDelegations` endpoint queries all unbonding delegations of a given delegator address.
+ +```bash +cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", + "validator_address": "cosmosvaloper1sjllsnramtg3ewxqwwrwjxfgc4n4ef9uxyejze", + "entries": [ + { + "creation_height": "136984", + "completion_time": "2021-11-08T05:38:47.505593891Z", + "initial_balance": "400000000", + "balance": "400000000" + }, + { + "creation_height": "137005", + "completion_time": "2021-11-08T05:40:53.526196312Z", + "initial_balance": "385000000", + "balance": "385000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### Redelegations + +The `Redelegations` endpoint queries redelegations of given address. + +```bash +cosmos.staking.v1beta1.Query/Redelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", "src_validator_addr" : "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", "dst_validator_addr" : "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/Redelegations +``` + +Example Output: + +```bash expandable +{ + "redelegation_responses": [ + { + "redelegation": { + "delegator_address": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", + "validator_src_address": "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", + "validator_dst_address": "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse", + "entries": null + }, + "entries": [ + { + "redelegation_entry": { + "creation_height": 135932, + "completion_time": "2021-11-08T03:52:55.299147901Z", + "initial_balance": "2900000", + "shares_dst": "2900000.000000000000000000" + }, + "balance": "2900000" + } + 
] + } + ], + "pagination": null +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` endpoint queries all validators information for given delegator. + +```bash +cosmos.staking.v1beta1.Query/DelegatorValidators +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidators +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "347260647559", + "delegator_shares": "347260647559.000000000000000000", + "description": { + "moniker": "BouBouNode", + "identity": "", + "website": "https://boubounode.com", + "security_contact": "", + "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.061000000000000000", + "max_rate": "0.300000000000000000", + "max_change_rate": "0.150000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidator + +The `DelegatorValidator` endpoint queries validator information for given delegator validator + +```bash +cosmos.staking.v1beta1.Query/DelegatorValidator +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1eh5mwu044gd5ntkkc2xgfg8247mgc56f3n8rr7", "validator_addr": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidator +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "347262754841", + "delegator_shares": "347262754841.000000000000000000", + "description": { + "moniker": "BouBouNode", + "identity": "", + "website": "https://boubounode.com", + "security_contact": "", + "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.061000000000000000", + "max_rate": "0.300000000000000000", + "max_change_rate": "0.150000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### HistoricalInfo + +```bash +cosmos.staking.v1beta1.Query/HistoricalInfo +``` + +Example: + +```bash +grpcurl -plaintext -d '{"height" : 1}' localhost:9090 cosmos.staking.v1beta1.Query/HistoricalInfo +``` + +Example Output: + +```bash expandable +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "simd-1", + "height": "140142", + "time": "2021-10-11T10:56:29.720079569Z", + "last_block_id": { + "hash": "9gri/4LLJUBFqioQ3NzZIP9/7YHR9QqaM6B2aJNQA7o=", + "part_set_header": { + "total": 1, + "hash": "Hk1+C864uQkl9+I6Zn7IurBZBKUevqlVtU7VqaZl1tc=" + } + }, + "last_commit_hash": "VxrcS27GtvGruS3I9+AlpT7udxIT1F0OrRklrVFSSKc=", + "data_hash": "80BjOrqNYUOkTnmgWyz9AQ8n7SoEmPVi4QmAe8RbQBY=", + "validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", + "next_validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "ZZaxnSY3E6Ex5Bvkm+RigYCK82g8SSUL53NymPITeOE=", + "last_results_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "aH6dO428B+ItuoqPq70efFHrSMY=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1426045203613", + "delegator_shares": "1426045203613.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website": 
"https://sg-1.online", + "security_contact": "", + "details": "SG-1 - your favorite validator on Witval. We offer 100% Soft Slash protection." + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.037500000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.030000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } + ] + } +} + +``` + +#### Pool + +The `Pool` endpoint queries the pool information. + +```bash +cosmos.staking.v1beta1.Query/Pool +``` + +Example: + +```bash +grpcurl -plaintext -d localhost:9090 cosmos.staking.v1beta1.Query/Pool +``` + +Example Output: + +```bash +{ + "pool": { + "not_bonded_tokens": "369054400189", + "bonded_tokens": "15657192425623" + } +} +``` + +#### Params + +The `Params` endpoint queries the pool information. + +```bash +cosmos.staking.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Params +``` + +Example Output: + +```bash +{ + "params": { + "unbondingTime": "1814400s", + "maxValidators": 100, + "maxEntries": 7, + "historicalEntries": 10000, + "bondDenom": "stake" + } +} +``` + +### REST + +A user can query the `staking` module using REST endpoints. + +#### DelegatorDelegations + +The `DelegatorDelegations` REST endpoint queries all delegations of a given delegator address. 
+ +```bash +/cosmos/staking/v1beta1/delegations/{delegatorAddr} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/delegations/cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_responses": [ + { + "delegation": { + "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", + "validator_address": "cosmosvaloper1quqxfrxkycr0uzt4yk0d57tcq3zk7srm7sm6r8", + "shares": "256250000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "256250000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", + "validator_address": "cosmosvaloper194v8uwee2fvs2s8fa5k7j03ktwc87h5ym39jfv", + "shares": "255150000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "255150000" + } + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### Redelegations + +The `Redelegations` REST endpoint queries redelegations of given address. 
+ +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/redelegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e/redelegations?srcValidatorAddr=cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf&dstValidatorAddr=cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "redelegation_responses": [ + { + "redelegation": { + "delegator_address": "cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e", + "validator_src_address": "cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf", + "validator_dst_address": "cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4", + "entries": null + }, + "entries": [ + { + "redelegation_entry": { + "creation_height": 151523, + "completion_time": "2021-11-09T06:03:25.640682116Z", + "initial_balance": "200000000", + "shares_dst": "200000000.000000000000000000" + }, + "balance": "200000000" + } + ] + } + ], + "pagination": null +} +``` + +#### DelegatorUnbondingDelegations + +The `DelegatorUnbondingDelegations` REST endpoint queries all unbonding delegations of a given delegator address. 
+ +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/unbonding_delegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll/unbonding_delegations" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll", + "validator_address": "cosmosvaloper1e7mvqlz50ch6gw4yjfemsc069wfre4qwmw53kq", + "entries": [ + { + "creation_height": "2442278", + "completion_time": "2021-10-12T10:59:03.797335857Z", + "initial_balance": "50000000000", + "balance": "50000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` REST endpoint queries all validators information for given delegator address. + +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "21592843799", + "delegator_shares": "21592843799.000000000000000000", + "description": { + "moniker": "jabbey", + "identity": "", + "website": "https://twitter.com/JoeAbbey", + "security_contact": "", + "details": "just another dad in the cosmos" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.100000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + 
}, + "update_time": "2021-10-09T19:03:54.984821705Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidator + +The `DelegatorValidator` REST endpoint queries validator information for given delegator validator pair. + +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators/{validatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators/cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "21592843799", + "delegator_shares": "21592843799.000000000000000000", + "description": { + "moniker": "jabbey", + "identity": "", + "website": "https://twitter.com/JoeAbbey", + "security_contact": "", + "details": "just another dad in the cosmos" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.100000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-09T19:03:54.984821705Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### HistoricalInfo + +The `HistoricalInfo` REST endpoint queries the historical information for given height. 
+ +```bash +/cosmos/staking/v1beta1/historical_info/{height} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/historical_info/153332" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "cosmos-1", + "height": "153332", + "time": "2021-10-12T09:05:35.062230221Z", + "last_block_id": { + "hash": "NX8HevR5khb7H6NGKva+jVz7cyf0skF1CrcY9A0s+d8=", + "part_set_header": { + "total": 1, + "hash": "zLQ2FiKM5tooL3BInt+VVfgzjlBXfq0Hc8Iux/xrhdg=" + } + }, + "last_commit_hash": "P6IJrK8vSqU3dGEyRHnAFocoDGja0bn9euLuy09s350=", + "data_hash": "eUd+6acHWrNXYju8Js449RJ99lOYOs16KpqQl4SMrEM=", + "validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "next_validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "fuELArKRK+CptnZ8tu54h6xEleSWenHNmqC84W866fU=", + "last_results_hash": "p/BPexV4LxAzlVcPRvW+lomgXb6Yze8YLIQUo/4Kdgc=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "G0MeY8xQx7ooOsni8KE/3R/Ib3Q=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1416521659632", + "delegator_shares": "1416521659632.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website": "https://sg-1.online", + "security_contact": "", + "details": "SG-1 - your favorite validator on cosmos. We offer 100% Soft Slash protection." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.037500000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.030000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "uExZyjNLtr2+FFIhNDAMcQ8+yTrqE7ygYTsI7khkA5Y=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1348298958808", + "delegator_shares": "1348298958808.000000000000000000", + "description": { + "moniker": "Cosmostation", + "identity": "AE4C403A6E7AA1AC", + "website": "https://www.cosmostation.io", + "security_contact": "admin@stamper.network", + "details": "Cosmostation validator node. Delegate your tokens and Start Earning Staking Rewards" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "1.000000000000000000", + "max_change_rate": "0.200000000000000000" + }, + "update_time": "2021-10-01T15:06:38.821314287Z" + }, + "min_self_delegation": "1" + } + ] + } +} +``` + +#### Parameters + +The `Parameters` REST endpoint queries the staking parameters. + +```bash +/cosmos/staking/v1beta1/params +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/params" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "params": { + "unbonding_time": "2419200s", + "max_validators": 100, + "max_entries": 7, + "historical_entries": 10000, + "bond_denom": "stake" + } +} +``` + +#### Pool + +The `Pool` REST endpoint queries the pool information. 
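A common use of the pool figures is computing the bonded ratio, i.e. the share of staked supply that is currently bonded. A sketch, using the example pool values from the gRPC section above:

```python
# Bonded ratio = bonded_tokens / (bonded_tokens + not_bonded_tokens).
# Token amounts are string-encoded integers in the response.
def bonded_ratio(pool):
    bonded = int(pool["bonded_tokens"])
    not_bonded = int(pool["not_bonded_tokens"])
    return bonded / (bonded + not_bonded)

ratio = bonded_ratio({"not_bonded_tokens": "369054400189",
                      "bonded_tokens": "15657192425623"})
print(round(ratio, 4))  # ~0.977
```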
+ +```bash +/cosmos/staking/v1beta1/pool +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/pool" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "pool": { + "not_bonded_tokens": "432805737458", + "bonded_tokens": "15783637712645" + } +} +``` + +#### Validators + +The `Validators` REST endpoint queries all validators that match the given status. + +```bash +/cosmos/staking/v1beta1/validators +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1q3jsx9dpfhtyqqgetwpe5tmk8f0ms5qywje8tw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "N7BPyek2aKuNZ0N/8YsrqSDhGZmgVaYUBuddY8pwKaE=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "383301887799", + "delegator_shares": "383301887799.000000000000000000", + "description": { + "moniker": "SmartNodes", + "identity": "D372724899D1EDC8", + "website": "https://smartnodes.co", + "security_contact": "", + "details": "Earn Rewards with Crypto Staking & Node Deployment" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-01T15:51:31.596618510Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=" + }, + "jailed": false, + "status": "BOND_STATUS_UNBONDING", + "tokens": "1017819654", + "delegator_shares": "1017819654.000000000000000000", + "description": { + "moniker": "Noderunners", + "identity": "812E82D12FEA3493", + "website": 
"http://noderunners.biz", + "security_contact": "info@noderunners.biz", + "details": "Noderunners is a professional validator in POS networks. We have a huge node running experience, reliable soft and hardware. Our commissions are always low, our support to delegators is always full. Stake with us and start receiving your cosmos rewards now!" + }, + "unbonding_height": "147302", + "unbonding_time": "2021-11-08T22:58:53.718662452Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-04T18:02:21.446645619Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": "FONDBFkE4tEEf7yxWWKOD49jC2NK", + "total": "2" + } +} +``` + +#### Validator + +The `Validator` REST endpoint queries validator information for given validator address. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "33027900000", + "delegator_shares": "33027900000.000000000000000000", + "description": { + "moniker": "Witval", + "identity": "51468B615127273A", + "website": "", + "security_contact": "", + "details": "Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.020000000000000000" + }, + "update_time": "2021-10-01T19:24:52.663191049Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### ValidatorDelegations + +The `ValidatorDelegations` REST endpoint queries delegate information for given validator. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_responses": [ + { + "delegation": { + "delegator_address": "cosmos190g5j8aszqhvtg7cprmev8xcxs6csra7xnk3n3", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "31000000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "31000000000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1ddle9tczl87gsvmeva3c48nenyng4n56qwq4ee", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "628470000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "628470000" + } + }, + { + "delegation": { + "delegator_address": "cosmos10fdvkczl76m040smd33lh9xn9j0cf26kk4s2nw", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "838120000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "838120000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "500000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "500000000" + } + }, + { + "delegation": { + 
"delegator_address": "cosmos16msryt3fqlxtvsy8u5ay7wv2p8mglfg9hrek2e", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "61310000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "61310000" + } + } + ], + "pagination": { + "next_key": null, + "total": "5" + } +} +``` + +#### Delegation + +The `Delegation` REST endpoint queries delegate information for given validator delegator pair. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations/cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_response": { + "delegation": { + "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "500000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "500000000" + } + } +} +``` + +#### UnbondingDelegation + +The `UnbondingDelegation` REST endpoint queries unbonding information for given validator delegator pair. 
+ +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}/unbonding_delegation +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/delegations/cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm/unbonding_delegation" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbond": { + "delegator_address": "cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "153687", + "completion_time": "2021-11-09T09:41:18.352401903Z", + "initial_balance": "525111", + "balance": "525111" + } + ] + } +} +``` + +#### ValidatorUnbondingDelegations + +The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/unbonding_delegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/unbonding_delegations" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1q9snn84jfrd9ge8t46kdcggpe58dua82vnj7uy", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "90998", + "completion_time": "2021-11-05T00:14:37.005841058Z", + "initial_balance": "24000000", + "balance": "24000000" + } + ] + }, + { + "delegator_address": "cosmos1qf36e6wmq9h4twhdvs6pyq9qcaeu7ye0s3dqq2", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "47478", + "completion_time": "2021-11-01T22:47:26.714116854Z", + "initial_balance": "8000000", + "balance": "8000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} 
+``` diff --git a/docs/sdk/next/documentation/module-system/structure.mdx b/docs/sdk/next/documentation/module-system/structure.mdx new file mode 100644 index 00000000..bdec4b74 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/structure.mdx @@ -0,0 +1,93 @@ +--- +title: Recommended Folder Structure +--- + +## Synopsis + +This document outlines the recommended structure of Cosmos SDK modules. These ideas are meant to be applied as suggestions. Application developers are encouraged to improve upon and contribute to module structure and development design. + +## Structure + +A typical Cosmos SDK module can be structured as follows: + +```shell +proto +└── {project_name} +    └── {module_name} +    └── {proto_version} +       ├── {module_name}.proto +       ├── event.proto +       ├── genesis.proto +       ├── query.proto +       └── tx.proto +``` + +- `{module_name}.proto`: The module's common message type definitions. +- `event.proto`: The module's message type definitions related to events. +- `genesis.proto`: The module's message type definitions related to genesis state. +- `query.proto`: The module's Query service and related message type definitions. +- `tx.proto`: The module's Msg service and related message type definitions. 
+ +```shell expandable +x/{module_name} +├── client +│   ├── cli +│   │ ├── query.go +│   │   └── tx.go +│   └── testutil +│   ├── cli_test.go +│   └── suite.go +├── exported +│   └── exported.go +├── keeper +│   ├── genesis.go +│   ├── grpc_query.go +│   ├── hooks.go +│   ├── invariants.go +│   ├── keeper.go +│   ├── keys.go +│   ├── msg_server.go +│   └── querier.go +├── module +│   └── module.go +│   └── abci.go +│   └── autocli.go +├── simulation +│   ├── decoder.go +│   ├── genesis.go +│   ├── operations.go +│   └── params.go +├── {module_name}.pb.go +├── codec.go +├── errors.go +├── events.go +├── events.pb.go +├── expected_keepers.go +├── genesis.go +├── genesis.pb.go +├── keys.go +├── msgs.go +├── params.go +├── query.pb.go +├── tx.pb.go +└── README.md +``` + +- `client/`: The module's CLI client functionality implementation and the module's CLI testing suite. +- `exported/`: The module's exported types - typically interface types. If a module relies on keepers from another module, it is expected to receive the keepers as interface contracts through the `expected_keepers.go` file (see below) in order to avoid a direct dependency on the module implementing the keepers. However, these interface contracts can define methods that operate on and/or return types that are specific to the module that is implementing the keepers and this is where `exported/` comes into play. The interface types that are defined in `exported/` use canonical types, allowing for the module to receive the keepers as interface contracts through the `expected_keepers.go` file. This pattern allows for code to remain DRY and also alleviates import cycle chaos. +- `keeper/`: The module's `Keeper` and `MsgServer` implementation. +- `module/`: The module's `AppModule` and `AppModuleBasic` implementation. + - `abci.go`: The module's `BeginBlocker` and `EndBlocker` implementations (this file is only required if `BeginBlocker` and/or `EndBlocker` need to be defined). 
+ - `autocli.go`: The module [autocli](https://docs.cosmos.network/main/core/autocli) options. +- `simulation/`: The module's [simulation](/docs/sdk/next/documentation/operations/simulator) package defines functions used by the blockchain simulator application (`simapp`). +- `README.md`: The module's specification documents outlining important concepts, state storage structure, and message and event type definitions. Learn more about how to write module specs in the [spec guidelines](/docs/sdk/next/documentation/protocol-development/SPEC_MODULE). +- The root directory includes type definitions for messages, events, and genesis state, including the type definitions generated by Protocol Buffers. + - `codec.go`: The module's registry methods for interface types. + - `errors.go`: The module's sentinel errors. + - `events.go`: The module's event types and constructors. + - `expected_keepers.go`: The module's [expected keeper](/docs/sdk/next/documentation/module-system/keeper#type-definition) interfaces. + - `genesis.go`: The module's genesis state methods and helper functions. + - `keys.go`: The module's store keys and associated helper functions. + - `msgs.go`: The module's message type definitions and associated methods. + - `params.go`: The module's parameter type definitions and associated methods. + - `*.pb.go`: The module's type definitions generated by Protocol Buffers (as defined in the respective `*.proto` files above). 
diff --git a/docs/sdk/v0.53/tutorials.mdx b/docs/sdk/next/documentation/module-system/tutorials.mdx similarity index 88% rename from docs/sdk/v0.53/tutorials.mdx rename to docs/sdk/next/documentation/module-system/tutorials.mdx index 2474f604..c896cc54 100644 --- a/docs/sdk/v0.53/tutorials.mdx +++ b/docs/sdk/next/documentation/module-system/tutorials.mdx @@ -1,9 +1,8 @@ --- -title: "Tutorials" -description: "Version: v0.53" +title: Tutorials --- -## Advanced Tutorials[​](#advanced-tutorials "Direct link to Advanced Tutorials") +## Advanced Tutorials This section provides a concise overview of tutorials focused on implementing vote extensions in the Cosmos SDK. Vote extensions are a powerful feature for enhancing the security and fairness of blockchain applications, particularly in scenarios like implementing oracles and mitigating auction front-running. diff --git a/docs/sdk/next/documentation/module-system/tx.mdx b/docs/sdk/next/documentation/module-system/tx.mdx new file mode 100644 index 00000000..4393f1ca --- /dev/null +++ b/docs/sdk/next/documentation/module-system/tx.mdx @@ -0,0 +1,301 @@ +--- +title: '`x/auth/tx`' +--- + + +**Pre-requisite Readings** + +* [Transactions](https://docs.cosmos.network/main/core/transactions#transaction-generation) +* [Encoding](https://docs.cosmos.network/main/core/encoding#transaction-encoding) + + + +## Abstract + +This document specifies the `x/auth/tx` package of the Cosmos SDK. + +This package represents the Cosmos SDK implementation of the `client.TxConfig`, `client.TxBuilder`, `client.TxEncoder` and `client.TxDecoder` interfaces. + +## Contents + +* [Transactions](#transactions) + * [`TxConfig`](#txconfig) + * [`TxBuilder`](#txbuilder) + * [`TxEncoder`/ `TxDecoder`](#txencoder-txdecoder) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + +## Transactions + +### `TxConfig` + +`client.TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. 
+The interface defines a set of methods for creating a `client.TxBuilder`.
+
+```go
+	// TxConfig defines an interface a client can utilize to generate an
+	// application-defined concrete transaction type. The type returned must
+	// implement TxBuilder.
+	TxConfig interface {
+		TxEncodingConfig
+
+		NewTxBuilder() TxBuilder
+```
+
+The default implementation of `client.TxConfig` is instantiated by `NewTxConfig` in the `x/auth/tx` module.
+
+```go
+	encoder        sdk.TxEncoder
+	jsonDecoder    sdk.TxDecoder
+	jsonEncoder    sdk.TxEncoder
+	protoCodec     codec.Codec
+	signingContext *txsigning.Context
+}
+```
+
+### `TxBuilder`
+
+```go
+	SignModeHandler() *txsigning.HandlerMap
+	SigningContext() *txsigning.Context
+	}
+
+	// TxBuilder defines an interface which an application-defined concrete transaction
+	// type must implement. Namely, it must be able to set messages, generate
+	// signatures, and provide canonical bytes to sign over. The transaction must
+	// also know how to encode itself.
+	TxBuilder interface {
+		GetTx() signing.Tx
+
+		SetMsgs(msgs ...sdk.Msg) error
+		SetSignatures(signatures ...signingtypes.SignatureV2) error
+		SetMemo(memo string)
+		SetFeeAmount(amount sdk.Coins)
+		SetFeePayer(feePayer sdk.AccAddress)
+		SetGasLimit(limit uint64)
+		SetTimeoutHeight(height uint64)
+```
+
+The [`client.TxBuilder`](https://docs.cosmos.network/main/core/transactions#transaction-generation) interface is also implemented by `x/auth/tx`.
+A `client.TxBuilder` can be accessed with `TxConfig.NewTxBuilder()`.
+
+### `TxEncoder`/ `TxDecoder`
+
+More information about `TxEncoder` and `TxDecoder` can be found [here](https://docs.cosmos.network/main/core/encoding#transaction-encoding).
+
+## Client
+
+### CLI
+
+#### Query
+
+The `x/auth/tx` module provides a CLI command to query any transaction, given its hash, transaction sequence, or signature.
+
+Without any argument, the command will query the transaction using the transaction hash.
+
+```shell
+simd query tx DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
+```
+
+When querying a transaction from an account given its sequence, use the `--type=acc_seq` flag:
+
+```shell
+simd query tx --type=acc_seq cosmos1u69uyr6v9qwe6zaaeaqly2h6wnedac0xpxq325/1
+```
+
+When querying a transaction given its signature, use the `--type=signature` flag:
+
+```shell
+simd query tx --type=signature Ofjvgrqi8twZfqVDmYIhqwRLQjZZ40XbxEamk/veH3gQpRF0hL2PH4ejRaDzAX+2WChnaWNQJQ41ekToIi5Wqw==
+```
+
+When querying a transaction given its events, use the `--type=events` flag:
+
+```shell
+simd query txs --events 'message.sender=cosmos...' --page 1 --limit 30
+```
+
+The `x/auth/tx` module also provides a CLI command to query any block, given its hash, height, or events.
+
+When querying a block by its hash, use the `--type=hash` flag:
+
+```shell
+simd query block --type=hash DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
+```
+
+When querying a block by its height, use the `--type=height` flag:
+
+```shell
+simd query block --type=height 1357
+```
+
+When querying a block by its events, use the `--query` flag:
+
+```shell
+simd query blocks --query 'message.sender=cosmos...' --page 1 --limit 30
+```
+
+#### Transactions
+
+The `x/auth/tx` module provides a convenient CLI command for decoding and encoding transactions.
+
+#### `encode`
+
+The `encode` command encodes a transaction created with the `--generate-only` flag or signed with the sign command.
+The transaction is serialized to Protobuf and returned as base64.
+
+```bash
+$ simd tx encode tx.json
+Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
+$ simd tx encode tx.signed.json
+```
+
+More information about the `encode` command can be found running `simd tx encode --help`.
+
+#### `decode`
+
+The `decode` command decodes a transaction encoded with the `encode` command.
+
+```bash
+simd tx decode Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
+```
+
+More information about the `decode` command can be found running `simd tx decode --help`.
+
+### gRPC
+
+A user can query the `x/auth/tx` module using gRPC endpoints.
+
+#### `TxDecode`
+
+The `TxDecode` endpoint allows decoding a transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxDecode
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"tx_bytes":"Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="}' \
+  localhost:9090 \
+  cosmos.tx.v1beta1.Service/TxDecode
+```
+
+Example Output:
+
+```json expandable
+{
+  "tx": {
+    "body": {
+      "messages": [
+        {
+          "@type": "/cosmos.bank.v1beta1.MsgSend",
+          "amount": [
+            {
+              "denom": "stake",
+              "amount": "100"
+            }
+          ],
+          "fromAddress": "cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275",
+          "toAddress": "cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"
+        }
+      ]
+    },
+    "authInfo": {
+      "fee": {
+        "gasLimit": "200000"
+      }
+    }
+  }
+}
+```
+
+#### `TxEncode`
+
+The `TxEncode` endpoint allows encoding a transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxEncode
+```
+
+Example:
+
+```shell expandable
+grpcurl -plaintext \
+  -d '{"tx": {
+    "body": {
+      "messages": [
+        {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100"}],"fromAddress":"cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275","toAddress":"cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"}
+      ]
+    },
+    "authInfo": {
+      "fee": {
+        "gasLimit": "200000"
+      }
+    }
+  }}' \
+  localhost:9090 \
+  cosmos.tx.v1beta1.Service/TxEncode
+```
+
+Example Output:
+
+```json
+{
+  "txBytes": "Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="
+}
+```
+
+#### `TxDecodeAmino`
+
+The `TxDecodeAmino` endpoint allows decoding an amino transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxDecodeAmino
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy"}' \
+  localhost:9090 \
+  cosmos.tx.v1beta1.Service/TxDecodeAmino
+```
+
+Example Output:
+
+```json
+{
+  "aminoJson": "{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"
+}
+```
+
+#### `TxEncodeAmino`
+
+The `TxEncodeAmino` endpoint allows encoding an amino transaction.
+ +```shell +cosmos.tx.v1beta1.Service/TxEncodeAmino +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"amino_json":"{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/TxEncodeAmino +``` + +Example Output: + +```json +{ + "amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy" +} +``` diff --git a/docs/sdk/next/documentation/module-system/upgrade.mdx b/docs/sdk/next/documentation/module-system/upgrade.mdx new file mode 100644 index 00000000..cd7e1436 --- /dev/null +++ b/docs/sdk/next/documentation/module-system/upgrade.mdx @@ -0,0 +1,630 @@ +--- +title: '`x/upgrade`' +--- + +## Abstract + +`x/upgrade` is an implementation of a Cosmos SDK module that facilitates smoothly +upgrading a live Cosmos chain to a new (breaking) software version. It accomplishes this by +providing a `PreBlocker` hook that prevents the blockchain state machine from +proceeding once a pre-defined upgrade block height has been reached. + +The module does not prescribe anything regarding how governance decides to do an +upgrade, but just the mechanism for coordinating the upgrade safely. Without software +support for upgrades, upgrading a live chain is risky because all of the validators +need to pause their state machines at exactly the same point in the process. If +this is not done correctly, there can be state inconsistencies which are hard to +recover from. 
+ +* [Concepts](#concepts) +* [State](#state) +* [Events](#events) +* [Client](#client) + * [CLI](#cli) + * [REST](#rest) + * [gRPC](#grpc) +* [Resources](#resources) + +## Concepts + +### Plan + +The `x/upgrade` module defines a `Plan` type in which a live upgrade is scheduled +to occur. A `Plan` can be scheduled at a specific block height. +A `Plan` is created once a (frozen) release candidate along with an appropriate upgrade +`Handler` (see below) is agreed upon, where the `Name` of a `Plan` corresponds to a +specific `Handler`. Typically, a `Plan` is created through a governance proposal +process, where if voted upon and passed, will be scheduled. The `Info` of a `Plan` +may contain various metadata about the upgrade, typically application specific +upgrade info to be included on-chain such as a git commit that validators could +automatically upgrade to. + +```go +type Plan struct { + Name string + Height int64 + Info string +} +``` + +#### Sidecar Process + +If an operator running the application binary also runs a sidecar process to assist +in the automatic download and upgrade of a binary, the `Info` allows this process to +be seamless. This tool is [Cosmovisor](https://github.com/cosmos/cosmos-sdk/tree/main/tools/cosmovisor#readme). + +### Handler + +The `x/upgrade` module facilitates upgrading from major version X to major version Y. To +accomplish this, node operators must first upgrade their current binary to a new +binary that has a corresponding `Handler` for the new version Y. It is assumed that +this version has fully been tested and approved by the community at large. This +`Handler` defines what state migrations need to occur before the new binary Y +can successfully run the chain. Naturally, this `Handler` is application specific +and not defined on a per-module basis. Registering a `Handler` is done via +`Keeper#SetUpgradeHandler` in the application. 
+
+```go
+type UpgradeHandler func(Context, Plan, VersionMap) (VersionMap, error)
+```
+
+During each `PreBlocker` execution, the `x/upgrade` module checks if there exists a
+`Plan` that should execute (i.e., one scheduled at that height). If so, the corresponding
+`Handler` is executed. If the `Plan` is expected to execute but no `Handler` is registered,
+or if the binary was upgraded too early, the node will gracefully panic and exit.
+
+### StoreLoader
+
+The `x/upgrade` module also facilitates store migrations as part of the upgrade. The
+`StoreLoader` sets the migrations that need to occur before the new binary can
+successfully run the chain. This `StoreLoader` is also application specific and
+not defined on a per-module basis. Registering this `StoreLoader` is done via
+`app#SetStoreLoader` in the application.
+
+```go
+func UpgradeStoreLoader(upgradeHeight int64, storeUpgrades *store.StoreUpgrades) baseapp.StoreLoader
+```
+
+If there's a planned upgrade and the upgrade height is reached, the old binary writes the `Plan` to disk before panicking.
+
+This information is critical to ensure that the `StoreUpgrades` happen smoothly at the correct height for the
+expected upgrade. It prevents the new binary from executing `StoreUpgrades` multiple
+times on every restart. Also, if there are multiple upgrades planned at the same height, the `Name`
+ensures that these `StoreUpgrades` take place only in the planned upgrade handler.
+
+### Proposal
+
+Typically, a `Plan` is proposed and submitted through governance via a proposal
+containing a `MsgSoftwareUpgrade` message.
+This proposal follows the standard governance process. If the proposal passes,
+the `Plan`, which targets a specific `Handler`, is persisted and scheduled. The
+upgrade can be delayed or hastened by updating the `Plan.Height` in a new proposal.
+
+```protobuf
+// MsgSoftwareUpgrade is the Msg/SoftwareUpgrade request type.
+// +// Since: cosmos-sdk 0.46 +message MsgSoftwareUpgrade { + option (cosmos.msg.v1.signer) = "authority"; + option (amino.name) = "cosmos-sdk/MsgSoftwareUpgrade"; + + // authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // plan is the upgrade plan. + Plan plan = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +#### Cancelling Upgrade Proposals + +Upgrade proposals can be cancelled. There exists a gov-enabled `MsgCancelUpgrade` +message type, which can be embedded in a proposal, voted on and, if passed, will +remove the scheduled upgrade `Plan`. +Of course this requires that the upgrade was known to be a bad idea well before the +upgrade itself, to allow time for a vote. + +```protobuf +// MsgCancelUpgrade is the Msg/CancelUpgrade request type. +// +// Since: cosmos-sdk 0.46 +message MsgCancelUpgrade { + option (cosmos.msg.v1.signer) = "authority"; + option (amino.name) = "cosmos-sdk/MsgCancelUpgrade"; + + // authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +If such a possibility is desired, the upgrade height is to be +`2 * (VotingPeriod + DepositPeriod) + (SafetyDelta)` from the beginning of the +upgrade proposal. The `SafetyDelta` is the time available from the success of an +upgrade proposal and the realization it was a bad idea (due to external social consensus). + +A `MsgCancelUpgrade` proposal can also be made while the original +`MsgSoftwareUpgrade` proposal is still being voted upon, as long as the `VotingPeriod` +ends after the `MsgSoftwareUpgrade` proposal. + +## State + +The internal state of the `x/upgrade` module is relatively minimal and simple. The +state contains the currently active upgrade `Plan` (if one exists) by key +`0x0` and if a `Plan` is marked as "done" by key `0x1`. 
The state
+contains the consensus versions of all app modules in the application. The versions
+are stored as big endian `uint64`, and can be accessed with the prefix `0x2` followed
+by the corresponding module name of type `string`. The state also maintains a
+`Protocol Version` which can be accessed by key `0x3`.
+
+* Plan: `0x0 -> Plan`
+* Done: `0x1 | byte(plan name) -> BigEndian(Block Height)`
+* ConsensusVersion: `0x2 | byte(module name) -> BigEndian(Module Consensus Version)`
+* ProtocolVersion: `0x3 -> BigEndian(Protocol Version)`
+
+The `x/upgrade` module contains no genesis state.
+
+## Events
+
+The `x/upgrade` module does not emit any events by itself. Any and all proposal-related
+events are emitted through the `x/gov` module.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `upgrade` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `upgrade` state.
+
+```bash
+simd query upgrade --help
+```
+
+##### applied
+
+The `applied` command allows users to query the block header for the height at which a completed upgrade was applied.
+
+```bash
+simd query upgrade applied [upgrade-name] [flags]
+```
+
+If upgrade-name was previously executed on the chain, this returns the header for the block at which it was applied.
+This helps a client determine which binary was valid over a given range of blocks, and provides more context to understand past migrations.
+ +Example: + +```bash +simd query upgrade applied "test-upgrade" +``` + +Example Output: + +```bash expandable +"block_id": { + "hash": "A769136351786B9034A5F196DC53F7E50FCEB53B48FA0786E1BFC45A0BB646B5", + "parts": { + "total": 1, + "hash": "B13CBD23011C7480E6F11BE4594EE316548648E6A666B3575409F8F16EC6939E" + } + }, + "block_size": "7213", + "header": { + "version": { + "block": "11" + }, + "chain_id": "testnet-2", + "height": "455200", + "time": "2021-04-10T04:37:57.085493838Z", + "last_block_id": { + "hash": "0E8AD9309C2DC411DF98217AF59E044A0E1CCEAE7C0338417A70338DF50F4783", + "parts": { + "total": 1, + "hash": "8FE572A48CD10BC2CBB02653CA04CA247A0F6830FF19DC972F64D339A355E77D" + } + }, + "last_commit_hash": "DE890239416A19E6164C2076B837CC1D7F7822FC214F305616725F11D2533140", + "data_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", + "next_validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", + "consensus_hash": "048091BC7DDC283F77BFBF91D73C44DA58C3DF8A9CBC867405D8B7F3DAADA22F", + "app_hash": "28ECC486AFC332BA6CC976706DBDE87E7D32441375E3F10FD084CD4BAF0DA021", + "last_results_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "evidence_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "proposer_address": "2ABC4854B1A1C5AA8403C4EA853A81ACA901CC76" + }, + "num_txs": "0" +} +``` + +##### module versions + +The `module_versions` command gets a list of module names and their respective consensus versions. + +Following the command with a specific module name will return only +that module's information. 
+
+```bash
+simd query upgrade module_versions [optional module_name] [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions
+```
+
+Example Output:
+
+```bash expandable
+module_versions:
+- name: auth
+  version: "2"
+- name: authz
+  version: "1"
+- name: bank
+  version: "2"
+- name: distribution
+  version: "2"
+- name: evidence
+  version: "1"
+- name: feegrant
+  version: "1"
+- name: genutil
+  version: "1"
+- name: gov
+  version: "2"
+- name: ibc
+  version: "2"
+- name: mint
+  version: "1"
+- name: params
+  version: "1"
+- name: slashing
+  version: "2"
+- name: staking
+  version: "2"
+- name: transfer
+  version: "1"
+- name: upgrade
+  version: "1"
+- name: vesting
+  version: "1"
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions ibc
+```
+
+Example Output:
+
+```bash
+module_versions:
+- name: ibc
+  version: "2"
+```
+
+##### plan
+
+The `plan` command gets the currently scheduled upgrade plan, if one exists.
+
+```bash
+simd query upgrade plan [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade plan
+```
+
+Example Output:
+
+```bash
+height: "130"
+info: ""
+name: test-upgrade
+time: "0001-01-01T00:00:00Z"
+upgraded_client_state: null
+```
+
+#### Transactions
+
+The upgrade module supports the following transactions:
+
+* `software-upgrade` - submits an upgrade proposal:
+
+```bash
+simd tx upgrade software-upgrade v2 --title="Test Proposal" --summary="testing" --deposit="100000000stake" --upgrade-height 1000000 \
+--upgrade-info '{ "binaries": { "linux/amd64":"https://example.com/simd.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f" } }' --from cosmos1..
+```
+
+* `cancel-software-upgrade` - cancels a previously submitted upgrade proposal:
+
+```bash
+simd tx upgrade cancel-software-upgrade --title="Test Proposal" --summary="testing" --deposit="100000000stake" --from cosmos1..
+```
+
+### REST
+
+A user can query the `upgrade` module using REST endpoints.
+ +#### Applied Plan + +`AppliedPlan` queries a previously applied upgrade plan by its name. + +```bash +/cosmos/upgrade/v1beta1/applied_plan/{name} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/applied_plan/v2.0-upgrade" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "height": "30" +} +``` + +#### Current Plan + +`CurrentPlan` queries the current upgrade plan. + +```bash +/cosmos/upgrade/v1beta1/current_plan +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/current_plan" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "plan": "v2.1-upgrade" +} +``` + +#### Module versions + +`ModuleVersions` queries the list of module versions from state. + +```bash +/cosmos/upgrade/v1beta1/module_versions +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/module_versions" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "module_versions": [ + { + "name": "auth", + "version": "2" + }, + { + "name": "authz", + "version": "1" + }, + { + "name": "bank", + "version": "2" + }, + { + "name": "distribution", + "version": "2" + }, + { + "name": "evidence", + "version": "1" + }, + { + "name": "feegrant", + "version": "1" + }, + { + "name": "genutil", + "version": "1" + }, + { + "name": "gov", + "version": "2" + }, + { + "name": "ibc", + "version": "2" + }, + { + "name": "mint", + "version": "1" + }, + { + "name": "params", + "version": "1" + }, + { + "name": "slashing", + "version": "2" + }, + { + "name": "staking", + "version": "2" + }, + { + "name": "transfer", + "version": "1" + }, + { + "name": "upgrade", + "version": "1" + }, + { + "name": "vesting", + "version": "1" + } + ] +} +``` + +### gRPC + +A user can query the `upgrade` module using gRPC endpoints. + +#### Applied Plan + +`AppliedPlan` queries a previously applied upgrade plan by its name. 
+
+```bash
+cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"name":"v2.0-upgrade"}' \
+  localhost:9090 \
+  cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example Output:
+
+```bash
+{
+  "height": "30"
+}
+```
+
+#### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example Output:
+
+```bash
+{
+  "plan": "v2.1-upgrade"
+}
+```
+
+#### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example Output:
+
+```bash expandable
+{
+  "module_versions": [
+    {
+      "name": "auth",
+      "version": "2"
+    },
+    {
+      "name": "authz",
+      "version": "1"
+    },
+    {
+      "name": "bank",
+      "version": "2"
+    },
+    {
+      "name": "distribution",
+      "version": "2"
+    },
+    {
+      "name": "evidence",
+      "version": "1"
+    },
+    {
+      "name": "feegrant",
+      "version": "1"
+    },
+    {
+      "name": "genutil",
+      "version": "1"
+    },
+    {
+      "name": "gov",
+      "version": "2"
+    },
+    {
+      "name": "ibc",
+      "version": "2"
+    },
+    {
+      "name": "mint",
+      "version": "1"
+    },
+    {
+      "name": "params",
+      "version": "1"
+    },
+    {
+      "name": "slashing",
+      "version": "2"
+    },
+    {
+      "name": "staking",
+      "version": "2"
+    },
+    {
+      "name": "transfer",
+      "version": "1"
+    },
+    {
+      "name": "upgrade",
+      "version": "1"
+    },
+    {
+      "name": "vesting",
+      "version": "1"
+    }
+  ]
+}
+```
+
+## Resources
+
+A list of (external) resources to learn more about the `x/upgrade` module.
+
+* [Cosmos Dev Series: Cosmos Blockchain Upgrade](https://medium.com/web3-surfers/cosmos-dev-series-cosmos-sdk-based-blockchain-upgrade-b5e99181554c) - The blog post that explains how software upgrades work in detail.
diff --git a/docs/sdk/next/documentation/module-system/vesting.mdx b/docs/sdk/next/documentation/module-system/vesting.mdx
new file mode 100644
index 00000000..d1aa2f56
--- /dev/null
+++ b/docs/sdk/next/documentation/module-system/vesting.mdx
@@ -0,0 +1,752 @@
+---
+title: '`x/auth/vesting`'
+---
+
+* [Intro and Requirements](#intro-and-requirements)
+* [Note](#note)
+* [Vesting Account Types](#vesting-account-types)
+  * [BaseVestingAccount](#basevestingaccount)
+  * [ContinuousVestingAccount](#continuousvestingaccount)
+  * [DelayedVestingAccount](#delayedvestingaccount)
+  * [Period](#period)
+  * [PeriodicVestingAccount](#periodicvestingaccount)
+  * [PermanentLockedAccount](#permanentlockedaccount)
+* [Vesting Account Specification](#vesting-account-specification)
+  * [Determining Vesting & Vested Amounts](#determining-vesting--vested-amounts)
+  * [Periodic Vesting Accounts](#periodic-vesting-accounts)
+  * [Transferring/Sending](#transferringsending)
+  * [Delegating](#delegating)
+  * [Undelegating](#undelegating)
+* [Keepers & Handlers](#keepers--handlers)
+* [Genesis Initialization](#genesis-initialization)
+* [Examples](#examples)
+  * [Simple](#simple)
+  * [Slashing](#slashing)
+  * [Periodic Vesting](#periodic-vesting)
+* [Glossary](#glossary)
+
+## Intro and Requirements
+
+This specification defines the vesting account implementation that is used by the Cosmos Hub. The requirement for this vesting account is that it be initialized during genesis with a starting balance `X` and a vesting end time `ET`. A vesting account may be initialized with a vesting start time `ST` and a number of vesting periods `P`. If a vesting start time is included, the vesting period does not begin until the start time is reached. If vesting periods are included, the vesting occurs over the specified number of periods.
+
+For all vesting accounts, the owner of the vesting account is able to delegate and undelegate from validators; however, they cannot transfer coins to another account until those coins are vested. This specification allows for four different kinds of vesting:
+
+* Delayed vesting, where all coins are vested once `ET` is reached.
+* Continuous vesting, where coins begin to vest at `ST` and vest linearly with respect to time until `ET` is reached.
+* Periodic vesting, where coins begin to vest at `ST` and vest periodically according to the number of periods and the vesting amount per period. The number of periods, length per period, and amount per period are configurable. A periodic vesting account is distinguished from a continuous vesting account in that coins can be released in staggered tranches. For example, a periodic vesting account could be used for vesting arrangements where coins are released quarterly, yearly, or over any other function of tokens over time.
+* Permanent locked vesting, where coins are locked forever. Coins in this account can still be used for delegating and for governance votes even while locked.
+
+## Note
+
+Vesting accounts can be initialized with some vesting and non-vesting coins. The non-vesting coins would be immediately transferable. DelayedVesting, ContinuousVesting, PeriodicVesting, and PermanentLocked accounts can be created with normal messages after genesis. Other types of vesting accounts must be created at genesis, or as part of a manual network upgrade. The current specification only allows for *unconditional* vesting (i.e. there is no possibility of reaching `ET` and
+having coins fail to vest).
+
+## Vesting Account Types
+
+```go expandable
+// VestingAccount defines an interface that any vesting account type must
+// implement.
+type VestingAccount interface {
+  Account
+
+  GetVestedCoins(Time) Coins
+  GetVestingCoins(Time) Coins
+
+  // TrackDelegation performs internal vesting accounting necessary when
+  // delegating from a vesting account. It accepts the current block time, the
+  // delegation amount and balance of all coins whose denomination exists in
+  // the account's original vesting balance.
+  TrackDelegation(Time, Coins, Coins)
+
+  // TrackUndelegation performs internal vesting accounting necessary when a
+  // vesting account performs an undelegation.
+  TrackUndelegation(Coins)
+
+  GetStartTime() int64
+  GetEndTime() int64
+}
+```
+
+### BaseVestingAccount
+
+```protobuf
+// BaseVestingAccount implements the VestingAccount interface. It contains all
+// the necessary fields needed for any vesting account implementation.
+message BaseVestingAccount {
+  option (amino.name) = "cosmos-sdk/BaseVestingAccount";
+  option (gogoproto.goproto_getters) = false;
+
+  cosmos.auth.v1beta1.BaseAccount base_account = 1 [(gogoproto.embed) = true];
+  repeated cosmos.base.v1beta1.Coin original_vesting = 2 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (amino.encoding) = "legacy_coins",
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+  repeated cosmos.base.v1beta1.Coin delegated_free = 3 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (amino.encoding) = "legacy_coins",
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+  repeated cosmos.base.v1beta1.Coin delegated_vesting = 4 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (amino.encoding) = "legacy_coins",
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+  int64 end_time = 5;
+}
+```
+
+### ContinuousVestingAccount
+
+```protobuf
+// ContinuousVestingAccount implements the VestingAccount interface. It
+// continuously vests by unlocking coins linearly with respect to time.
+message ContinuousVestingAccount {
+  option (amino.name) = "cosmos-sdk/ContinuousVestingAccount";
+  option (gogoproto.goproto_getters) = false;
+
+  BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true];
+  int64 start_time = 2;
+}
+```
+
+### DelayedVestingAccount
+
+```protobuf
+// DelayedVestingAccount implements the VestingAccount interface. It vests all
+// coins after a specific time, but non prior. In other words, it keeps them
+// locked until a specified time.
+message DelayedVestingAccount {
+  option (amino.name) = "cosmos-sdk/DelayedVestingAccount";
+  option (gogoproto.goproto_getters) = false;
+
+  BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true];
+}
+```
+
+### Period
+
+```protobuf
+// Period defines a length of time and amount of coins that will vest.
+message Period {
+  // Period duration in seconds.
+  int64 length = 1;
+  repeated cosmos.base.v1beta1.Coin amount = 2 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (amino.encoding) = "legacy_coins",
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+}
+```
+
+```go
+// Stores all vesting periods passed as part of a PeriodicVestingAccount
+type Periods []Period
+```
+
+### PeriodicVestingAccount
+
+```protobuf
+// PeriodicVestingAccount implements the VestingAccount interface. It
+// periodically vests by unlocking coins during each specified period.
+message PeriodicVestingAccount {
+  option (amino.name) = "cosmos-sdk/PeriodicVestingAccount";
+  option (gogoproto.goproto_getters) = false;
+
+  BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true];
+  int64 start_time = 2;
+  repeated Period vesting_periods = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+In order to facilitate less ad-hoc type checking and assertions and to support flexibility in account balance usage, the existing `x/bank` `ViewKeeper` interface is updated to contain the following:
+
+```go
+type ViewKeeper interface {
+  // ...
+
+  // Calculates the total locked account balance.
+  LockedCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins
+
+  // Calculates the total spendable balance that can be sent to other accounts.
+  SpendableCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins
+}
+```
+
+### PermanentLockedAccount
+
+```protobuf
+// PermanentLockedAccount implements the VestingAccount interface. It does
+// not ever release coins, locking them indefinitely. Coins in this account can
+// still be used for delegating and for governance votes even while locked.
+//
+// Since: cosmos-sdk 0.43
+message PermanentLockedAccount {
+  option (amino.name) = "cosmos-sdk/PermanentLockedAccount";
+  option (gogoproto.goproto_getters) = false;
+
+  BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true];
+}
+```
+
+## Vesting Account Specification
+
+Given a vesting account, we define the following in the operations that follow:
+
+* `OV`: The original vesting coin amount. It is a constant value.
+* `V`: The number of `OV` coins that are still *vesting*. It is derived from
+  `OV`, `StartTime` and `EndTime`. This value is computed on demand and not on a per-block basis.
+* `V'`: The number of `OV` coins that are *vested* (unlocked). This value is computed on demand and not on a per-block basis.
+* `DV`: The number of delegated *vesting* coins. It is a variable value. It is stored and modified directly in the vesting account.
+* `DF`: The number of delegated *vested* (unlocked) coins. It is a variable value. It is stored and modified directly in the vesting account.
+* `BC`: The number of `OV` coins less any coins that are transferred
+  (which can be negative or delegated). It is considered to be the balance of the embedded base account. It is stored and modified directly in the vesting account.
+
+### Determining Vesting & Vested Amounts
+
+It is important to note that these values are computed on demand and not on a mandatory per-block basis (e.g. `BeginBlocker` or `EndBlocker`).
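As a toy illustration of these on-demand computations, the following sketch evaluates `V` and `V'` for a linear (continuous) schedule. Plain `int64` stands in for `Coins`, and the helper names are hypothetical, not SDK code:

```go
package main

import "fmt"

// vestedCoins computes V' for a continuous schedule: OV * (t - start) / (end - start),
// clamped to [0, OV]. A real implementation operates per coin denomination.
func vestedCoins(ov, start, end, t int64) int64 {
	if t <= start {
		return 0
	}
	if t >= end {
		return ov
	}
	return ov * (t - start) / (end - start)
}

// vestingCoins computes V = OV - V'.
func vestingCoins(ov, start, end, t int64) int64 {
	return ov - vestedCoins(ov, start, end, t)
}

func main() {
	// OV = 100, vesting between t = 1000 and t = 2000; query a quarter of the way in.
	fmt.Println(vestedCoins(100, 1000, 2000, 1250))  // prints: 25
	fmt.Println(vestingCoins(100, 1000, 2000, 1250)) // prints: 75
}
```

Nothing is recomputed per block; the same pure function is simply evaluated at whatever block time a query or transfer supplies.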
+
+#### Continuously Vesting Accounts
+
+To determine the amount of coins that are vested for a given block time `T`, the
+following is performed:
+
+1. Compute `X := T - StartTime`
+2. Compute `Y := EndTime - StartTime`
+3. Compute `V' := OV * (X / Y)`
+4. Compute `V := OV - V'`
+
+Thus, the total amount of *vested* coins is `V'` and the remaining amount, `V`,
+is *vesting*.
+
+```go expandable
+func (cva ContinuousVestingAccount) GetVestedCoins(t Time) Coins {
+  if t <= cva.StartTime {
+    // We must handle the case where the start time for a vesting account has
+    // been set into the future or when the start of the chain is not exactly
+    // known.
+    return ZeroCoins
+  } else if t >= cva.EndTime {
+    return cva.OriginalVesting
+  }
+
+  x := t - cva.StartTime
+  y := cva.EndTime - cva.StartTime
+
+  return cva.OriginalVesting * (x / y)
+}
+
+func (cva ContinuousVestingAccount) GetVestingCoins(t Time) Coins {
+  return cva.OriginalVesting - cva.GetVestedCoins(t)
+}
+```
+
+#### Periodic Vesting Accounts
+
+Periodic vesting accounts require calculating the coins released during each period for a given block time `T`. Note that multiple periods could have passed when calling `GetVestedCoins`, so we must iterate over each period until the end of that period is after `T`.
+
+1. Set `CT := StartTime`
+2. Set `V' := 0`
+
+For each period `P`:
+
+1. Compute `X := T - CT`
+2. IF `X >= P.Length`
+   1. Compute `V' += P.Amount`
+   2. Compute `CT += P.Length`
+3. ELSE break
+
+Compute `V := OV - V'`
+
+```go expandable
+func (pva PeriodicVestingAccount) GetVestedCoins(t Time) Coins {
+  if t < pva.StartTime {
+    return ZeroCoins
+  }
+
+  ct := pva.StartTime // the start of the vesting schedule
+  vested := 0
+  periods := pva.GetPeriods()
+  for _, period := range periods {
+    if t-ct < period.Length {
+      break
+    }
+    vested += period.Amount
+    ct += period.Length // increment ct to the start of the next vesting period
+  }
+
+  return vested
+}
+
+func (pva PeriodicVestingAccount) GetVestingCoins(t Time) Coins {
+  return pva.OriginalVesting - pva.GetVestedCoins(t)
+}
+```
+
+#### Delayed/Discrete Vesting Accounts
+
+Delayed vesting accounts are easier to reason about as they only have the full amount vesting up until a certain time, then all the coins become vested (unlocked). This does not include any unlocked coins the account may have initially.
+
+```go expandable
+func (dva DelayedVestingAccount) GetVestedCoins(t Time) Coins {
+  if t >= dva.EndTime {
+    return dva.OriginalVesting
+  }
+
+  return ZeroCoins
+}
+
+func (dva DelayedVestingAccount) GetVestingCoins(t Time) Coins {
+  return dva.OriginalVesting - dva.GetVestedCoins(t)
+}
+```
+
+### Transferring/Sending
+
+At any given time, a vesting account may transfer: `min((BC + DV) - V, BC)`.
+
+In other words, a vesting account may transfer the minimum of the base account balance and the base account balance plus the number of currently delegated vesting coins less the number of coins vested so far.
+
+However, given that account balances are tracked via the `x/bank` module and that we want to avoid loading the entire account balance, we can instead determine the locked balance, which can be defined as `max(V - DV, 0)`, and infer the spendable balance from that.
+
+```go
+func (va VestingAccount) LockedCoins(t Time) Coins {
+  return max(va.GetVestingCoins(t) - va.DelegatedVesting, 0)
+}
+```
+
+The `x/bank` `ViewKeeper` can then provide APIs to determine locked and spendable coins for any account:
+
+```go expandable
+func (k Keeper) LockedCoins(ctx Context, addr AccAddress) Coins {
+  acc := k.GetAccount(ctx, addr)
+  if acc != nil {
+    if acc.IsVesting() {
+      return acc.LockedCoins(ctx.BlockTime())
+    }
+  }
+
+  // non-vesting accounts do not have any locked coins
+  return NewCoins()
+}
+```
+
+#### Keepers/Handlers
+
+The corresponding `x/bank` keeper should appropriately handle sending coins based on whether the account is a vesting account or not.
+
+```go expandable
+func (k Keeper) SendCoins(ctx Context, from Account, to Account, amount Coins) {
+  bc := k.GetBalances(ctx, from)
+  v := k.LockedCoins(ctx, from)
+  spendable := bc - v
+  newCoins := spendable - amount
+  assert(newCoins >= 0)
+
+  from.SetBalance(newCoins)
+  to.AddBalance(amount)
+
+  // save balances...
+}
+```
+
+### Delegating
+
+For a vesting account attempting to delegate `D` coins, the following is performed:
+
+1. Verify `BC >= D > 0`
+2. Compute `X := min(max(V - DV, 0), D)` (portion of `D` that is vesting)
+3. Compute `Y := D - X` (portion of `D` that is free)
+4. Set `DV += X`
+5. Set `DF += Y`
+
+```go
+func (va VestingAccount) TrackDelegation(t Time, balance Coins, amount Coins) {
+  assert(balance <= amount)
+
+  x := min(max(va.GetVestingCoins(t) - va.DelegatedVesting, 0), amount)
+  y := amount - x
+
+  va.DelegatedVesting += x
+  va.DelegatedFree += y
+}
+```
+
+**Note** `TrackDelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by subtracting `amount`.
+
+#### Keepers/Handlers
+
+```go
+func DelegateCoins(t Time, from Account, amount Coins) {
+  if isVesting(from) {
+    from.TrackDelegation(t, amount)
+  } else {
+    from.SetBalance(sc - amount)
+  }
+
+  // save account...
+}
+```
+
+### Undelegating
+
+For a vesting account attempting to undelegate `D` coins, the following is performed:
+
+> NOTE: `DV < D` and `(DV + DF) < D` may be possible due to quirks in the rounding of delegation/undelegation logic.
+
+1. Verify `D > 0`
+2. Compute `X := min(DF, D)` (portion of `D` that should become free, prioritizing free coins)
+3. Compute `Y := min(DV, D - X)` (portion of `D` that should remain vesting)
+4. Set `DF -= X`
+5. Set `DV -= Y`
+
+```go
+func (cva ContinuousVestingAccount) TrackUndelegation(amount Coins) {
+  x := min(cva.DelegatedFree, amount)
+  y := amount - x
+
+  cva.DelegatedFree -= x
+  cva.DelegatedVesting -= y
+}
+```
+
+**Note** `TrackUndelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by adding `amount`.
+
+**Note**: If a delegation is slashed, the continuous vesting account ends up with an excess `DV` amount, even after all its coins have vested. This is because undelegation prioritizes free coins.
+
+**Note**: The undelegation (bond refund) amount may exceed the delegated vesting (bond) amount due to the way undelegation truncates the bond refund, which can increase the validator's exchange rate (tokens/shares) slightly if the undelegated tokens are non-integral.
+
+#### Keepers/Handlers
+
+```go expandable
+func UndelegateCoins(to Account, amount Coins) {
+  if isVesting(to) {
+    if to.DelegatedFree + to.DelegatedVesting >= amount {
+      to.TrackUndelegation(amount)
+      // save account...
+    }
+  } else {
+    AddBalance(to, amount)
+    // save account...
+  }
+}
+```
+
+## Keepers & Handlers
+
+The `VestingAccount` implementations reside in `x/auth`. However, any keeper in a module (e.g. staking in `x/staking`) wishing to potentially utilize any vesting coins, must call explicit methods on the `x/bank` keeper (e.g. `DelegateCoins`) as opposed to `SendCoins` and `SubtractCoins`.
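The `DV`/`DF` bookkeeping from the Delegating and Undelegating sections can be exercised with a toy account. Plain integers stand in for `Coins`, and all names here are hypothetical, not the SDK implementation:

```go
package main

import "fmt"

// toyAccount models only the DV/DF bookkeeping described above.
type toyAccount struct {
	delegatedVesting int64 // DV
	delegatedFree    int64 // DF
}

func minInt(a, b int64) int64 {
	if a < b {
		return a
	}
	return b
}

func maxInt(a, b int64) int64 {
	if a > b {
		return a
	}
	return b
}

// trackDelegation implements X := min(max(V - DV, 0), D); Y := D - X,
// where vesting is the account's currently vesting amount V.
func (a *toyAccount) trackDelegation(vesting, amount int64) {
	x := minInt(maxInt(vesting-a.delegatedVesting, 0), amount)
	a.delegatedVesting += x
	a.delegatedFree += amount - x
}

// trackUndelegation implements X := min(DF, D); Y := D - X, i.e. free
// coins are released first.
func (a *toyAccount) trackUndelegation(amount int64) {
	x := minInt(a.delegatedFree, amount)
	a.delegatedFree -= x
	a.delegatedVesting -= amount - x
}

func main() {
	acc := &toyAccount{}
	acc.trackDelegation(5, 5) // 5 coins still vesting: all tracked as DV
	acc.trackDelegation(5, 5) // V - DV = 0: the next 5 are tracked as DF
	acc.trackUndelegation(3)  // free coins are undelegated first
	fmt.Println(acc.delegatedVesting, acc.delegatedFree) // prints: 5 2
}
```

This is the same free-coins-first ordering that produces the "excess `DV`" behavior noted above when a delegation is slashed.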
+
+In addition, the vesting account should also be able to spend any coins it receives from other users. Thus, the bank module's `MsgSend` handler should error if a vesting account is trying to send an amount that exceeds its unlocked coin amount.
+
+See the above specification for full implementation details.
+
+## Genesis Initialization
+
+To initialize both vesting and non-vesting accounts, the `GenesisAccount` struct includes new fields: `Vesting`, `StartTime`, and `EndTime`. Accounts meant to be of type `BaseAccount` or any non-vesting type have `Vesting = false`. The genesis initialization logic (e.g. `initFromGenesisState`) must parse and return the correct accounts accordingly based on these fields.
+
+```go expandable
+type GenesisAccount struct {
+  // ...
+
+  // vesting account fields
+  OriginalVesting  sdk.Coins `json:"original_vesting"`
+  DelegatedFree    sdk.Coins `json:"delegated_free"`
+  DelegatedVesting sdk.Coins `json:"delegated_vesting"`
+  StartTime        int64     `json:"start_time"`
+  EndTime          int64     `json:"end_time"`
+}
+
+func ToAccount(gacc GenesisAccount) Account {
+  bacc := NewBaseAccount(gacc)
+
+  if gacc.OriginalVesting > 0 {
+    if gacc.StartTime != 0 && gacc.EndTime != 0 {
+      // return a continuous vesting account
+    } else if gacc.EndTime != 0 {
+      // return a delayed vesting account
+    } else {
+      // invalid genesis vesting account provided
+      panic()
+    }
+  }
+
+  return bacc
+}
+```
+
+## Examples
+
+### Simple
+
+Given a continuous vesting account with 10 vesting coins.
+
+```text
+OV = 10
+DF = 0
+DV = 0
+BC = 10
+V = 10
+V' = 0
+```
+
+1. Immediately receives 1 coin
+
+   ```text
+   BC = 11
+   ```
+
+2. Time passes, 2 coins vest
+
+   ```text
+   V = 8
+   V' = 2
+   ```
+
+3. Delegates 4 coins to validator A
+
+   ```text
+   DV = 4
+   BC = 7
+   ```
+
+4. Sends 3 coins
+
+   ```text
+   BC = 4
+   ```
+
+5. More time passes, 2 more coins vest
+
+   ```text
+   V = 6
+   V' = 4
+   ```
+
+6. Sends 2 coins.
At this point the account cannot send anymore until further + coins vest or it receives additional coins. It can still however, delegate. + + ```text + BC = 2 + ``` + +### Slashing + +Same initial starting conditions as the simple example. + +1. Time passes, 5 coins vest + + ```text + V = 5 + V' = 5 + ``` + +2. Delegate 5 coins to validator A + + ```text + DV = 5 + BC = 5 + ``` + +3. Delegate 5 coins to validator B + + ```text + DF = 5 + BC = 0 + ``` + +4. Validator A gets slashed by 50%, making the delegation to A now worth 2.5 coins + +5. Undelegate from validator A (2.5 coins) + + ```text + DF = 5 - 2.5 = 2.5 + BC = 0 + 2.5 = 2.5 + ``` + +6. Undelegate from validator B (5 coins). The account at this point can only + send 2.5 coins unless it receives more coins or until more coins vest. + It can still however, delegate. + + ```text + DV = 5 - 2.5 = 2.5 + DF = 2.5 - 2.5 = 0 + BC = 2.5 + 5 = 7.5 + ``` + + Notice how we have an excess amount of `DV`. + +### Periodic Vesting + +A vesting account is created where 100 tokens will be released over 1 year, with +1/4 of tokens vesting each quarter. The vesting schedule would be as follows: + +```yaml +Periods: +- amount: 25stake, length: 7884000 +- amount: 25stake, length: 7884000 +- amount: 25stake, length: 7884000 +- amount: 25stake, length: 7884000 +``` + +```text +OV = 100 +DF = 0 +DV = 0 +BC = 100 +V = 100 +V' = 0 +``` + +1. Immediately receives 1 coin + + ```text + BC = 101 + ``` + +2. Vesting period 1 passes, 25 coins vest + + ```text + V = 75 + V' = 25 + ``` + +3. During vesting period 2, 5 coins are transferred and 5 coins are delegated + + ```text + DV = 5 + BC = 91 + ``` + +4. Vesting period 2 passes, 25 coins vest + + ```text + V = 50 + V' = 50 + ``` + +## Glossary + +* OriginalVesting: The amount of coins (per denomination) that are initially + part of a vesting account. These coins are set at genesis. +* StartTime: The BFT time at which a vesting account starts to vest. 
+* EndTime: The BFT time at which a vesting account is fully vested.
+* DelegatedFree: The tracked amount of coins (per denomination) that are
+  delegated from a vesting account that have been fully vested at time of delegation.
+* DelegatedVesting: The tracked amount of coins (per denomination) that are
+  delegated from a vesting account that were vesting at time of delegation.
+* ContinuousVestingAccount: A vesting account implementation that vests coins
+  linearly over time.
+* DelayedVestingAccount: A vesting account implementation that only fully vests
+  all coins at a given time.
+* PeriodicVestingAccount: A vesting account implementation that vests coins
+  according to a custom vesting schedule.
+* PermanentLockedAccount: A vesting account implementation that never releases coins,
+  locking them indefinitely. Coins in this account can still be used for delegating and for governance votes even while locked.
+
+## CLI
+
+A user can query and interact with the `vesting` module using the CLI.
+
+### Transactions
+
+The `tx` commands allow users to interact with the `vesting` module.
+
+```bash
+simd tx vesting --help
+```
+
+#### create-periodic-vesting-account
+
+The `create-periodic-vesting-account` command creates a new vesting account funded with an allocation of tokens that unlock over a sequence of periods, each defined by a coin amount and a period length in seconds. Periods are sequential, in that the duration of a period only starts at the end of the previous period. The duration of the first period starts upon account creation.
+
+```bash
+simd tx vesting create-periodic-vesting-account [to_address] [periods_json_file] [flags]
+```
+
+Example:
+
+```bash
+simd tx vesting create-periodic-vesting-account cosmos1.. periods.json
+```
+
+#### create-vesting-account
+
+The `create-vesting-account` command creates a new vesting account funded with an allocation of tokens. The account can either be a delayed or continuous vesting account, which is determined by the '--delayed' flag.
All vesting accounts created will have their start time set by the committed block's time. The end\_time must be provided as a UNIX epoch timestamp. + +```bash +simd tx vesting create-vesting-account [to_address] [amount] [end_time] [flags] +``` + +Example: + +```bash +simd tx vesting create-vesting-account cosmos1.. 100stake 2592000 +``` diff --git a/docs/sdk/next/documentation/operations/app-testnet.mdx b/docs/sdk/next/documentation/operations/app-testnet.mdx new file mode 100644 index 00000000..fff57c0e --- /dev/null +++ b/docs/sdk/next/documentation/operations/app-testnet.mdx @@ -0,0 +1,257 @@ +--- +title: Application Testnets +description: >- + Building an application is complicated and requires a lot of testing. The + Cosmos SDK provides a way to test your application in a real-world + environment: a testnet. +--- + +Building an application is complicated and requires a lot of testing. The Cosmos SDK provides a way to test your application in a real-world environment: a testnet. + +We allow developers to take the state from their mainnet and run tests against the state. This is useful for testing upgrade migrations, or for testing the application in a real-world environment. + +## Testnet Setup + +We will be breaking down the steps to create a testnet from mainnet state. + +```go +/ InitSimAppForTestnet is broken down into two sections: + / Required Changes: Changes that, if not made, will cause the testnet to halt or panic + / Optional Changes: Changes to customize the testnet to one's liking (lower vote times, fund accounts, etc) + +func InitSimAppForTestnet(app *SimApp, newValAddr bytes.HexBytes, newValPubKey crypto.PubKey, newOperatorAddress, upgradeToTrigger string) *SimApp { + ... +} +``` + +### Required Changes + +#### Staking + +When creating a testnet the important part is to migrate the validator set from many validators to one or a few. This allows developers to spin up the chain without needing to replace validator keys. 
+
+```go expandable
+ctx := app.BaseApp.NewUncachedContext(true, tmproto.Header{})
+
+pubkey := &ed25519.PubKey{Key: newValPubKey.Bytes()}
+
+pubkeyAny, err := types.NewAnyWithValue(pubkey)
+if err != nil {
+  tmos.Exit(err.Error())
+}
+
+// STAKING
+//
+
+// Create Validator struct for our new validator.
+_, bz, err := bech32.DecodeAndConvert(newOperatorAddress)
+if err != nil {
+  tmos.Exit(err.Error())
+}
+
+bech32Addr, err := bech32.ConvertAndEncode("simvaloper", bz)
+if err != nil {
+  tmos.Exit(err.Error())
+}
+
+newVal := stakingtypes.Validator{
+  OperatorAddress: bech32Addr,
+  ConsensusPubkey: pubkeyAny,
+  Jailed:          false,
+  Status:          stakingtypes.Bonded,
+  Tokens:          sdk.NewInt(900000000000000),
+  DelegatorShares: sdk.MustNewDecFromStr("10000000"),
+  Description: stakingtypes.Description{
+    Moniker: "Testnet Validator",
+  },
+  Commission: stakingtypes.Commission{
+    CommissionRates: stakingtypes.CommissionRates{
+      Rate:          sdk.MustNewDecFromStr("0.05"),
+      MaxRate:       sdk.MustNewDecFromStr("0.1"),
+      MaxChangeRate: sdk.MustNewDecFromStr("0.05"),
+    },
+  },
+  MinSelfDelegation: sdk.OneInt(),
+}
+
+// Remove all validators from power store
+stakingKey := app.GetKey(stakingtypes.ModuleName)
+stakingStore := ctx.KVStore(stakingKey)
+iterator := app.StakingKeeper.ValidatorsPowerStoreIterator(ctx)
+for ; iterator.Valid(); iterator.Next() {
+  stakingStore.Delete(iterator.Key())
+}
+iterator.Close()
+
+// Remove all validators from last validators store
+iterator = app.StakingKeeper.LastValidatorsIterator(ctx)
+for ; iterator.Valid(); iterator.Next() {
+  app.StakingKeeper.LastValidatorPower.Delete(iterator.Key())
+}
+iterator.Close()
+
+// Add our validator to power and last validators store
+app.StakingKeeper.SetValidator(ctx, newVal)
+err = app.StakingKeeper.SetValidatorByConsAddr(ctx, newVal)
+if err != nil {
+  panic(err)
+}
+app.StakingKeeper.SetValidatorByPowerIndex(ctx, newVal)
+app.StakingKeeper.SetLastValidatorPower(ctx, newVal.GetOperator(), 0)
+if err := app.StakingKeeper.Hooks().AfterValidatorCreated(ctx, newVal.GetOperator()); err != nil {
+  panic(err)
+}
+```
+
+#### Distribution
+
+Since the validator set has changed, we need to update the distribution records for the new validator.
+
+```go
+// Initialize records for this validator across all distribution stores
+app.DistrKeeper.ValidatorHistoricalRewards.Set(ctx, newVal.GetOperator(), 0, distrtypes.NewValidatorHistoricalRewards(sdk.DecCoins{}, 1))
+app.DistrKeeper.ValidatorCurrentRewards.Set(ctx, newVal.GetOperator(), distrtypes.NewValidatorCurrentRewards(sdk.DecCoins{}, 1))
+app.DistrKeeper.ValidatorAccumulatedCommission.Set(ctx, newVal.GetOperator(), distrtypes.InitialValidatorAccumulatedCommission())
+app.DistrKeeper.ValidatorOutstandingRewards.Set(ctx, newVal.GetOperator(), distrtypes.ValidatorOutstandingRewards{Rewards: sdk.DecCoins{}})
+```
+
+#### Slashing
+
+We also need to set the validator signing info for the new validator.
+
+```go expandable
+// SLASHING
+//
+
+// Set validator signing info for our new validator.
+newConsAddr := sdk.ConsAddress(newValAddr.Bytes())
+newValidatorSigningInfo := slashingtypes.ValidatorSigningInfo{
+  Address:     newConsAddr.String(),
+  StartHeight: app.LastBlockHeight() - 1,
+  Tombstoned:  false,
+}
+app.SlashingKeeper.ValidatorSigningInfo.Set(ctx, newConsAddr, newValidatorSigningInfo)
+```
+
+#### Bank
+
+It is useful to create new accounts for your testing purposes. This avoids the need to have the same key as you may have on mainnet.
+ +```go expandable +/ BANK + / + defaultCoins := sdk.NewCoins(sdk.NewInt64Coin("ustake", 1000000000000)) + localSimAppAccounts := []sdk.AccAddress{ + sdk.MustAccAddressFromBech32("cosmos12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj"), + sdk.MustAccAddressFromBech32("cosmos1cyyzpxplxdzkeea7kwsydadg87357qnahakaks"), + sdk.MustAccAddressFromBech32("cosmos18s5lynnmx37hq4wlrw9gdn68sg2uxp5rgk26vv"), + sdk.MustAccAddressFromBech32("cosmos1qwexv7c6sm95lwhzn9027vyu2ccneaqad4w8ka"), + sdk.MustAccAddressFromBech32("cosmos14hcxlnwlqtq75ttaxf674vk6mafspg8xwgnn53"), + sdk.MustAccAddressFromBech32("cosmos12rr534cer5c0vj53eq4y32lcwguyy7nndt0u2t"), + sdk.MustAccAddressFromBech32("cosmos1nt33cjd5auzh36syym6azgc8tve0jlvklnq7jq"), + sdk.MustAccAddressFromBech32("cosmos10qfrpash5g2vk3hppvu45x0g860czur8ff5yx0"), + sdk.MustAccAddressFromBech32("cosmos1f4tvsdukfwh6s9swrc24gkuz23tp8pd3e9r5fa"), + sdk.MustAccAddressFromBech32("cosmos1myv43sqgnj5sm4zl98ftl45af9cfzk7nhjxjqh"), + sdk.MustAccAddressFromBech32("cosmos14gs9zqh8m49yy9kscjqu9h72exyf295afg6kgk"), + sdk.MustAccAddressFromBech32("cosmos1jllfytsz4dryxhz5tl7u73v29exsf80vz52ucc") +} + + / Fund localSimApp accounts + for _, account := range localSimAppAccounts { + err := app.BankKeeper.MintCoins(ctx, minttypes.ModuleName, defaultCoins) + if err != nil { + tmos.Exit(err.Error()) +} + +err = app.BankKeeper.SendCoinsFromModuleToAccount(ctx, minttypes.ModuleName, account, defaultCoins) + if err != nil { + tmos.Exit(err.Error()) +} + +} +``` + +#### Upgrade + +If you would like to schedule an upgrade the below can be used. 

```go expandable
// UPGRADE
//
if upgradeToTrigger != "" {
	upgradePlan := upgradetypes.Plan{
		Name:   upgradeToTrigger,
		Height: app.LastBlockHeight(),
	}

	err = app.UpgradeKeeper.ScheduleUpgrade(ctx, upgradePlan)
	if err != nil {
		panic(err)
	}
}
```

### Optional Changes

If you have custom modules that rely on specific state from the above modules, and/or you would like to test your custom module, you will need to update the state of your custom module to reflect your needs.

## Running the Testnet

Before we can run the testnet we must plug everything together.

In `root.go`, in the `initRootCmd` function, we add:

```diff
  server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, createSimAppAndExport, addModuleInitFlags)

+ server.AddTestnetCreatorCommand(rootCmd, simapp.DefaultNodeHome, newTestnetApp, addModuleInitFlags)
```

Next we will add a `newTestnetApp` helper function:

```diff expandable
+// newTestnetApp starts by running the normal newApp method. From there, the app interface returned is modified in order
+// for a testnet to be created from the provided app.
+func newTestnetApp(logger log.Logger, db cometbftdb.DB, traceStore io.Writer, appOpts servertypes.AppOptions) servertypes.Application {
+	// Create an app and type cast to a SimApp
+	app := newApp(logger, db, traceStore, appOpts)
+	simApp, ok := app.(*simapp.SimApp)
+	if !ok {
+		panic("app created from newApp is not of type simApp")
+	}
+
+	newValAddr, ok := appOpts.Get(server.KeyNewValAddr).(bytes.HexBytes)
+	if !ok {
+		panic("newValAddr is not of type bytes.HexBytes")
+	}
+	newValPubKey, ok := appOpts.Get(server.KeyUserPubKey).(crypto.PubKey)
+	if !ok {
+		panic("newValPubKey is not of type crypto.PubKey")
+	}
+	newOperatorAddress, ok := appOpts.Get(server.KeyNewOpAddr).(string)
+	if !ok {
+		panic("newOperatorAddress is not of type string")
+	}
+	upgradeToTrigger, ok := appOpts.Get(server.KeyTriggerTestnetUpgrade).(string)
+	if !ok {
+		panic("upgradeToTrigger is not of type string")
+	}
+
+	// Make modifications to the normal SimApp required to run the network locally
+	return simapp.InitSimAppForTestnet(simApp, newValAddr, newValPubKey, newOperatorAddress, upgradeToTrigger)
+}
```

diff --git a/docs/sdk/next/documentation/operations/app-upgrade.mdx b/docs/sdk/next/documentation/operations/app-upgrade.mdx
new file mode 100644
index 00000000..f402bb61
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/app-upgrade.mdx
@@ -0,0 +1,219 @@

---
title: Application Upgrade
---

This document describes how to upgrade your application. If you are looking specifically for the changes to perform between SDK versions, see the [SDK migrations documentation](https://docs.cosmos.network/main/migrations/intro).

This section is currently incomplete. Track the progress of this document [here](https://github.com/cosmos/cosmos-sdk/issues/11504).

**Pre-requisite Readings**

* [`x/upgrade` Documentation](https://docs.cosmos.network/main/modules/upgrade)

## General Workflow

Let's assume we are running v0.38.0 of our software in our testnet and want to upgrade to v0.40.0.
How would this look in practice? First, we want to finalize the v0.40.0 release candidate
and then install a specially named upgrade handler (e.g. "testnet-v2" or even "v0.40.0"). An upgrade
handler must be defined in the new version of the software to specify what migrations
to run when moving from the older version of the software. Naturally, this is app-specific rather
than module-specific, and must be defined in `app.go`, even if it imports logic from various
modules to perform the actions. You can register handlers with `upgradeKeeper.SetUpgradeHandler`
during app initialization (before starting the ABCI server). They serve not only to
perform a migration, but also to identify whether this is the old or the new version (e.g. by the presence of
a handler registered for the named upgrade).

Once the release candidate along with an appropriate upgrade handler is frozen,
we can have a governance vote to approve this upgrade at some future block height (e.g. 200000).
This is known as an `upgrade.Plan`. The v0.38.0 code will not know of this handler, but will
continue to run until block 200000, when the plan kicks in at `BeginBlock`. It will check
for the existence of the handler and, finding it missing, know that it is running obsolete software,
and gracefully exit.

Generally the application binary will restart on exit, but will then execute this `BeginBlocker`
again and exit, causing a restart loop. The operator can manually install the new software, or
use an external watcher daemon to download and switch binaries, potentially taking a backup
as well. The SDK tool for doing this is called [Cosmovisor](https://docs.cosmos.network/main/tooling/cosmovisor).
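The role that handler presence plays in distinguishing binaries can be modeled with a small self-contained toy. The names below (`Keeper`, `SetUpgradeHandler`, `HasHandler`) mirror the prose but are illustrative, not the real `x/upgrade` keeper API:

```go
package main

import "fmt"

// Toy model of how handler presence distinguishes binaries: the new binary
// registers a handler for the named upgrade during app init, the old one
// does not. (Illustrative only, not the x/upgrade implementation.)
type UpgradeHandler func() error

type Keeper struct{ handlers map[string]UpgradeHandler }

func NewKeeper() *Keeper { return &Keeper{handlers: map[string]UpgradeHandler{}} }

func (k *Keeper) SetUpgradeHandler(name string, h UpgradeHandler) { k.handlers[name] = h }

func (k *Keeper) HasHandler(name string) bool { _, ok := k.handlers[name]; return ok }

func main() {
	oldBinary := NewKeeper() // v0.38.0: never heard of "testnet-v2"
	newBinary := NewKeeper() // v0.40.0: registers the handler at init
	newBinary.SetUpgradeHandler("testnet-v2", func() error { return nil })

	fmt.Println(oldBinary.HasHandler("testnet-v2")) // false → obsolete binary, must exit
	fmt.Println(newBinary.HasHandler("testnet-v2")) // true → run the migration
}
```

When the plan height is reached, the missing-handler case is what makes the old binary exit gracefully.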

When the binary restarts with the upgraded version (here v0.40.0), it will detect that we have registered the
"testnet-v2" upgrade handler in the code, and realize it is the new version. It will then run the upgrade handler
and *migrate the database in-place*. Once finished, it marks the upgrade as done and continues processing
the rest of the block as normal. Once 2/3 of the voting power has upgraded, the blockchain will immediately
resume the consensus mechanism. If the majority of operators add a custom `do-upgrade` script, this should
be a matter of minutes and not even require them to be awake at that time.

## Integrating With An App

The following is not required for users using `depinject`, as this is abstracted for them.

In addition to basic module wiring, set up the upgrade keeper for the app and then define a `PreBlocker` that calls the upgrade
keeper's `PreBlocker` method:

```go
func (app *myApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
	// For demonstration's sake, the app PreBlocker only returns the upgrade module pre-blocker.
	// In a real app, the module manager should call all pre-blockers:
	// return app.ModuleManager.PreBlock(ctx, req)
	return app.upgradeKeeper.PreBlocker(ctx, req)
}
```

The app must then integrate the upgrade keeper with its governance module as appropriate. The governance module
should call `ScheduleUpgrade` to schedule an upgrade and `ClearUpgradePlan` to cancel a pending upgrade.

## Performing Upgrades

Upgrades can be scheduled at a predefined block height. Once this block height is reached, the
existing software will cease to process ABCI messages and a new version with code that handles the upgrade must be deployed.
All upgrades are coordinated by a unique upgrade name that cannot be reused on the same blockchain.
In order for the upgrade
module to know that the upgrade has been safely applied, a handler with the name of the upgrade must be installed.
Here is an example handler for an upgrade named "my-fancy-upgrade":

```go
app.upgradeKeeper.SetUpgradeHandler("my-fancy-upgrade", func(ctx context.Context, plan upgrade.Plan) {
	// Perform any migrations of the state store needed for this upgrade
})
```

This upgrade handler performs the dual function of alerting the upgrade module that the named upgrade has been applied,
as well as providing the opportunity for the upgraded software to perform any necessary state migrations. Both the halt
(with the old binary) and applying the migration (with the new binary) are enforced in the state machine. Actually
switching the binaries is an ops task and not handled inside the SDK / ABCI app.

Here is sample code to set store migrations with an upgrade:

```go expandable
// this configures a no-op upgrade handler for the "my-fancy-upgrade" upgrade
app.UpgradeKeeper.SetUpgradeHandler("my-fancy-upgrade", func(ctx context.Context, plan upgrade.Plan) {
	// upgrade changes here
})

upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
if err != nil {
	// handle error
}

if upgradeInfo.Name == "my-fancy-upgrade" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
	storeUpgrades := store.StoreUpgrades{
		Renamed: []store.StoreRename{{
			OldKey: "foo",
			NewKey: "bar",
		}},
		Deleted: []string{},
	}

	// configure store loader that checks if version == upgradeHeight and applies store upgrades
	app.SetStoreLoader(upgrade.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
}
```

## Halt Behavior

Before halting the ABCI state machine in the `BeginBlocker` method, the upgrade module will log an error
that looks like:

```text
UPGRADE "<Name>" NEEDED at height: <Height>: <Info>
```

where `Name` and `Info` are the values of the respective fields on the upgrade Plan.
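To make the halt mechanics concrete, here is a self-contained sketch of the per-block check. The `Plan` struct, `checkUpgrade`, and the exact message wording are illustrative assumptions, not the actual `x/upgrade` source:

```go
package main

import "fmt"

// Plan mirrors the fields referenced by the error message; names are
// illustrative stand-ins for the real x/upgrade types.
type Plan struct {
	Name   string
	Height int64
	Info   string
}

// haltMessage reproduces the shape of the logged error.
func haltMessage(p Plan) string {
	return fmt.Sprintf("UPGRADE %q NEEDED at height: %d: %s", p.Name, p.Height, p.Info)
}

// checkUpgrade sketches the per-block check: once the plan height is reached,
// a missing handler means this binary is obsolete, so we panic to halt the
// state machine without exiting the process.
func checkUpgrade(height int64, plan *Plan, hasHandler func(string) bool) {
	if plan == nil || height < plan.Height {
		return // no upgrade due yet
	}
	if !hasHandler(plan.Name) {
		panic(haltMessage(*plan)) // halts consensus; the process stays alive
	}
	// the new binary would apply the registered handler here instead
}

func main() {
	plan := &Plan{Name: "my-fancy-upgrade", Height: 200000, Info: "..."}
	defer func() {
		if r := recover(); r != nil {
			fmt.Println(r) // the halt message the old binary would log
		}
	}()
	checkUpgrade(200000, plan, func(string) bool { return false })
}
```

The panic-instead-of-exit choice is deliberate, as the next paragraph explains: the process stays alive so peers don't lose connectivity.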
+ +To perform the actual halt of the blockchain, the upgrade keeper simply panics which prevents the ABCI state machine +from proceeding but doesn't actually exit the process. Exiting the process can cause issues for other nodes that start +to lose connectivity with the exiting nodes, thus this module prefers to just halt but not exit. + +## Automation + +Read more about [Cosmovisor](https://docs.cosmos.network/main/tooling/cosmovisor), the tool for automating upgrades. + +## Canceling Upgrades + +There are two ways to cancel a planned upgrade - with on-chain governance or off-chain social consensus. +For the first one, there is a `CancelSoftwareUpgrade` governance proposal, which can be voted on and will +remove the scheduled upgrade plan. Of course this requires that the upgrade was known to be a bad idea +well before the upgrade itself, to allow time for a vote. If you want to allow such a possibility, you +should set the upgrade height to be `2 * (votingperiod + depositperiod) + (safety delta)` from the beginning of +the first upgrade proposal. Safety delta is the time available from the success of an upgrade proposal +and the realization it was a bad idea (due to external testing). You can also start a `CancelSoftwareUpgrade` +proposal while the original `SoftwareUpgrade` proposal is still being voted upon, as long as the voting +period ends after the `SoftwareUpgrade` proposal. + +However, let's assume that we don't realize the upgrade has a bug until shortly before it will occur +(or while we try it out - hitting some panic in the migration). It would seem the blockchain is stuck, +but we need to allow an escape for social consensus to overrule the planned upgrade. To do so, there's +a `--unsafe-skip-upgrades` flag to the start command, which will cause the node to mark the upgrade +as done upon hitting the planned upgrade height(s), without halting and without actually performing a migration. 
If over two-thirds run their nodes with this flag on the old binary, it will allow the chain to continue through
the upgrade with a manual override. (This must be well-documented for anyone syncing from genesis later on.)

Example:

```shell
 start --unsafe-skip-upgrades ...
```

## Pre-Upgrade Handling

Cosmovisor supports custom pre-upgrade handling. Use pre-upgrade handling when you need to implement application config changes that are required in the newer version before you perform the upgrade.

Using Cosmovisor pre-upgrade handling is optional. If pre-upgrade handling is not implemented, the upgrade continues.

For example, make the required new-version changes to `app.toml` settings during the pre-upgrade handling. The pre-upgrade handling process means that the file does not have to be manually updated after the upgrade.

Before the application binary is upgraded, Cosmovisor calls a `pre-upgrade` command that can be implemented by the application.

The `pre-upgrade` command does not take in any command-line arguments and is expected to terminate with the following exit codes:

| Exit status code | How it is handled in Cosmovisor |
| ---------------- | ------------------------------- |
| `0` | Assumes the `pre-upgrade` command executed successfully and continues the upgrade. |
| `1` | Default exit code when the `pre-upgrade` command has not been implemented. |
| `30` | `pre-upgrade` command was executed but failed. This fails the entire upgrade. |
| `31` | `pre-upgrade` command was executed but failed. The command is retried until exit code `1` or `30` is returned. |

## Sample

Here is a sample structure of the `pre-upgrade` command:

```go expandable
func preUpgradeCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "pre-upgrade",
		Short: "Pre-upgrade command",
		Long:  "Pre-upgrade command to implement custom pre-upgrade handling",
		Run: func(cmd *cobra.Command, args []string) {
			err := HandlePreUpgrade()
			if err != nil {
				os.Exit(30)
			}

			os.Exit(0)
		},
	}

	return cmd
}
```

Ensure that the pre-upgrade command has been registered in the application:

```go
rootCmd.AddCommand(
	// ..
	preUpgradeCommand(),
	// ..
)
```

When not using Cosmovisor, be sure to run ` pre-upgrade` before starting the application binary.

diff --git a/docs/sdk/next/documentation/operations/config.mdx b/docs/sdk/next/documentation/operations/config.mdx
new file mode 100644
index 00000000..a9526711
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/config.mdx
@@ -0,0 +1,288 @@

---
title: Configuration
description: >-
  This documentation refers to the app.toml, if you'd like to read about the
  config.toml please visit CometBFT docs.
---

This documentation refers to the `app.toml`; if you'd like to read about the `config.toml`, please visit the [CometBFT docs](https://docs.cometbft.com/v0.37/).

{/* the following is not a python reference, however syntax coloring makes the file more readable in the docs */}

```python
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

###############################################################################
###                           Base Configuration                            ###
###############################################################################

# The minimum gas prices a validator is willing to accept for processing a
# transaction. A transaction's fees must meet the minimum of any denomination
# specified in this config (e.g. 0.25token1,0.0001token2).
+minimum-gas-prices = "0stake" + +# default: the last 362880 states are kept, pruning at 10 block intervals +# nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) +# everything: 2 latest states will be kept; pruning at 10 block intervals. +# custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' +pruning = "default" + +# These are applied if and only if the pruning strategy is custom. +pruning-keep-recent = "0" +pruning-interval = "0" + +# HaltHeight contains a non-zero block height at which a node will gracefully +# halt and shutdown that can be used to assist upgrades and testing. +# +# Note: Commitment of state will be attempted on the corresponding block. +halt-height = 0 + +# HaltTime contains a non-zero minimum block time (in Unix seconds) at which +# a node will gracefully halt and shutdown that can be used to assist upgrades +# and testing. +# +# Note: Commitment of state will be attempted on the corresponding block. +halt-time = 0 + +# MinRetainBlocks defines the minimum block height offset from the current +# block being committed, such that all blocks past this offset are pruned +# from Tendermint. It is used as part of the process of determining the +# ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates +# that no blocks should be pruned. +# +# This configuration value is only responsible for pruning Tendermint blocks. +# It has no bearing on application state pruning which is determined by the +# "pruning-*" configurations. +# +# Note: Tendermint block pruning is dependent on this parameter in conjunction +# with the unbonding (safety threshold) period, state pruning and state sync +# snapshot parameters to determine the correct minimum value of +# ResponseCommit.RetainHeight. +min-retain-blocks = 0 + +# InterBlockCache enables inter-block caching. 
+inter-block-cache = true + +# IndexEvents defines the set of events in the form {eventType}.{attributeKey}, +# which informs Tendermint what to index. If empty, all events will be indexed. +# +# Example: +# ["message.sender", "message.recipient"] +index-events = [] + +# IavlCacheSize set the size of the iavl tree cache (in number of nodes). +iavl-cache-size = 781250 + +# IAVLDisableFastNode enables or disables the fast node feature of IAVL. +# Default is false. +iavl-disable-fastnode = false + +# IAVLLazyLoading enable/disable the lazy loading of iavl store. +# Default is false. +iavl-lazy-loading = false + +# AppDBBackend defines the database backend type to use for the application and snapshots DBs. +# An empty string indicates that a fallback will be used. +# The fallback is the db_backend value set in Tendermint's config.toml. +app-db-backend = "" + +############################################################################### +### Telemetry Configuration ### +############################################################################### + +[telemetry] + +# Prefixed with keys to separate services. +service-name = "" + +# Enabled enables the application telemetry functionality. When enabled, +# an in-memory sink is also enabled by default. Operators may also enabled +# other sinks such as Prometheus. +enabled = false + +# Enable prefixing gauge values with hostname. +enable-hostname = false + +# Enable adding hostname to labels. +enable-hostname-label = false + +# Enable adding service to labels. +enable-service-label = false + +# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink. +prometheus-retention-time = 0 + +# GlobalLabels defines a global set of name/value label tuples applied to all +# metrics emitted using the wrapper functions defined in telemetry package. 
+# +# Example: +# [["chain_id", "cosmoshub-1"]] +global-labels = [] + +############################################################################### +### API Configuration ### +############################################################################### + +[api] + +# Enable defines if the API server should be enabled. +enable = false + +# Swagger defines if swagger documentation should automatically be registered. +swagger = false + +# Address defines the API server to listen on. +address = "tcp://localhost:1317" + +# MaxOpenConnections defines the number of maximum open connections. +max-open-connections = 1000 + +# RPCReadTimeout defines the Tendermint RPC read timeout (in seconds). +rpc-read-timeout = 10 + +# RPCWriteTimeout defines the Tendermint RPC write timeout (in seconds). +rpc-write-timeout = 0 + +# RPCMaxBodyBytes defines the Tendermint maximum request body (in bytes). +rpc-max-body-bytes = 1000000 + +# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk). +enabled-unsafe-cors = false + +############################################################################### +### Rosetta Configuration ### +############################################################################### + +[rosetta] + +# Enable defines if the Rosetta API server should be enabled. +enable = false + +# Address defines the Rosetta API server to listen on. +address = ":8080" + +# Network defines the name of the blockchain that will be returned by Rosetta. +blockchain = "app" + +# Network defines the name of the network that will be returned by Rosetta. +network = "network" + +# Retries defines the number of retries when connecting to the node before failing. +retries = 3 + +# Offline defines if Rosetta server should run in offline mode. +offline = false + +# EnableDefaultSuggestedFee defines if the server should suggest fee by default. 
+# If 'construction/medata' is called without gas limit and gas price, +# suggested fee based on gas-to-suggest and denom-to-suggest will be given. +enable-fee-suggestion = false + +# GasToSuggest defines gas limit when calculating the fee +gas-to-suggest = 200000 + +# DenomToSuggest defines the default denom for fee suggestion. +# Price must be in minimum-gas-prices. +denom-to-suggest = "uatom" + +############################################################################### +### gRPC Configuration ### +############################################################################### + +[grpc] + +# Enable defines if the gRPC server should be enabled. +enable = true + +# Address defines the gRPC server address to bind to. +address = "localhost:9090" + +# MaxRecvMsgSize defines the max message size in bytes the server can receive. +# The default value is 10MB. +max-recv-msg-size = "10485760" + +# MaxSendMsgSize defines the max message size in bytes the server can send. +# The default value is math.MaxInt32. +max-send-msg-size = "2147483647" + +############################################################################### +### gRPC Web Configuration ### +############################################################################### + +[grpc-web] + +# GRPCWebEnable defines if the gRPC-web should be enabled. +# NOTE: gRPC must also be enabled, otherwise, this configuration is a no-op. +enable = true + +# Address defines the gRPC-web server address to bind to. +address = "localhost:9091" + +# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk). 
+enable-unsafe-cors = false + +############################################################################### +### State Sync Configuration ### +############################################################################### + +# State sync snapshots allow other nodes to rapidly join the network without replaying historical +# blocks, instead downloading and applying a snapshot of the application state at a given height. +[state-sync] + +# snapshot-interval specifies the block interval at which local state sync snapshots are +# taken (0 to disable). +snapshot-interval = 0 + +# snapshot-keep-recent specifies the number of recent snapshots to keep and serve (0 to keep all). +snapshot-keep-recent = 2 + +############################################################################### +### Store / State Streaming ### +############################################################################### + +[store] +streamers = [] + +[streamers] +[streamers.file] +keys = ["*"] +write_dir = "" +prefix = "" + +# output-metadata specifies if output the metadata file which includes the abci request/responses +# during processing the block. +output-metadata = "true" + +# stop-node-on-error specifies if propagate the file streamer errors to consensus state machine. +stop-node-on-error = "true" + +# fsync specifies if call fsync after writing the files. +fsync = "false" + +############################################################################### +### Mempool ### +############################################################################### + +[mempool] +# Setting max-txs to 0 will allow for a unbounded amount of transactions in the mempool. +# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool. +# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount. +# +# Note, this configuration only applies to SDK built-in app-side mempool +# implementations. 
+max-txs = 5000 + +``` + +## inter-block-cache + +This feature will consume more ram than a normal node, if enabled. + +## iavl-cache-size + +Using this feature will increase ram consumption + +## iavl-lazy-loading + +This feature is to be used for archive nodes, allowing them to have a faster start up time. diff --git a/docs/sdk/next/documentation/operations/confix.mdx b/docs/sdk/next/documentation/operations/confix.mdx new file mode 100644 index 00000000..77031a64 --- /dev/null +++ b/docs/sdk/next/documentation/operations/confix.mdx @@ -0,0 +1,157 @@ +--- +title: Confix +description: >- + Confix is a configuration management tool that allows you to manage your + configuration via CLI. +--- + +`Confix` is a configuration management tool that allows you to manage your configuration via CLI. + +It is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md). + +## Installation + +### Add Config Command + +To add the confix tool, it's required to add the `ConfigCommand` to your application's root command file (e.g. `/cmd/root.go`). + +Import the `confixCmd` package: + +```go +import "cosmossdk.io/tools/confix/cmd" +``` + +Find the following line: + +```go +initRootCmd(rootCmd, moduleManager) +``` + +After that line, add the following: + +```go +rootCmd.AddCommand( + confixcmd.ConfigCommand(), +) +``` + +The `ConfixCommand` function builds the `config` root command and is defined in the `confixCmd` package (`cosmossdk.io/tools/confix/cmd`). +An implementation example can be found in `simapp`. + +The command will be available as `simd config`. + + +Using confix directly in the application can have less features than using it standalone. +This is because confix is versioned with the SDK, while `latest` is the standalone version. 
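Conceptually, `confix get` and `confix set` address values by dotted key inside the parsed TOML document. The following toy illustrates that lookup; it is purely illustrative and not confix internals:

```go
package main

import (
	"fmt"
	"strings"
)

// get resolves a dotted key such as "telemetry.enabled" inside a nested map,
// the way `confix get <file> <key>` addresses values in app.toml.
// (Illustrative sketch only, not the actual confix implementation.)
func get(doc map[string]any, key string) (any, bool) {
	var cur any = doc
	for _, part := range strings.Split(key, ".") {
		m, ok := cur.(map[string]any)
		if !ok {
			return nil, false // intermediate node is not a table
		}
		cur, ok = m[part]
		if !ok {
			return nil, false // key missing at this level
		}
	}
	return cur, true
}

func main() {
	// A tiny slice of an app.toml, already parsed into a map.
	doc := map[string]any{
		"pruning": "default",
		"telemetry": map[string]any{
			"enabled": false,
		},
	}

	v, _ := get(doc, "pruning")
	fmt.Println(v) // default

	v, _ = get(doc, "telemetry.enabled")
	fmt.Println(v) // false
}
```

A real implementation would also preserve comments and formatting when writing values back, which is the harder part confix solves.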
+ + +### Using Confix Standalone + +To use Confix standalone, without having to add it in your application, install it with the following command: + +```bash +go install cosmossdk.io/tools/confix/cmd/confix@latest +``` + +Alternatively, for building from source, simply run `make confix`. The binary will be located in `tools/confix`. + +## Usage + +Use standalone: + +```shell +confix --help +``` + +Use in simd: + +```shell +simd config fix --help +``` + +### Get + +Get a configuration value, e.g.: + +```shell +simd config get app pruning # gets the value pruning from app.toml +simd config get client chain-id # gets the value chain-id from client.toml +``` + +```shell +confix get ~/.simapp/config/app.toml pruning # gets the value pruning from app.toml +confix get ~/.simapp/config/client.toml chain-id # gets the value chain-id from client.toml +``` + +### Set + +Set a configuration value, e.g.: + +```shell +simd config set app pruning "enabled" # sets the value pruning from app.toml +simd config set client chain-id "foo-1" # sets the value chain-id from client.toml +``` + +```shell +confix set ~/.simapp/config/app.toml pruning "enabled" # sets the value pruning from app.toml +confix set ~/.simapp/config/client.toml chain-id "foo-1" # sets the value chain-id from client.toml +``` + +### Migrate + +Migrate a configuration file to a new version, config type defaults to `app.toml`, if you want to change it to `client.toml`, please indicate it by adding the optional parameter, e.g.: + +```shell +simd config migrate v0.50 # migrates defaultHome/config/app.toml to the latest v0.50 config +simd config migrate v0.50 --client # migrates defaultHome/config/client.toml to the latest v0.50 config +``` + +```shell +confix migrate v0.50 ~/.simapp/config/app.toml # migrate ~/.simapp/config/app.toml to the latest v0.50 config +confix migrate v0.50 ~/.simapp/config/client.toml --client # migrate ~/.simapp/config/client.toml to the latest v0.50 config +``` + +### Diff + +Get the diff 
between a given configuration file and the default configuration file, e.g.:

```shell
simd config diff v0.47 # gets the diff between defaultHome/config/app.toml and the latest v0.47 config
simd config diff v0.47 --client # gets the diff between defaultHome/config/client.toml and the latest v0.47 config
```

```shell
confix diff v0.47 ~/.simapp/config/app.toml # gets the diff between ~/.simapp/config/app.toml and the latest v0.47 config
confix diff v0.47 ~/.simapp/config/client.toml --client # gets the diff between ~/.simapp/config/client.toml and the latest v0.47 config
```

### View

View a configuration file, e.g.:

```shell
simd config view client # views the current app client config
```

```shell
confix view ~/.simapp/config/client.toml # views the current app client config
```

### Maintainer

At each SDK modification of the default configuration, add the default SDK config under `data/vXX-app.toml`.
This allows users to use the tool standalone.

### Compatibility

The recommended standalone version is `latest`, which uses the latest development version of Confix.

| SDK Version | Confix Version |
| ----------- | -------------- |
| v0.50 | v0.1.x |
| v0.52 | v0.2.x |
| v2 | v0.2.x |

## Credits

This project is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md) and their own never-released implementation of [confix](https://github.com/cometbft/cometbft/blob/v0.36.x/scripts/confix/confix.go).

diff --git a/docs/sdk/next/documentation/operations/cosmovisor.mdx b/docs/sdk/next/documentation/operations/cosmovisor.mdx
new file mode 100644
index 00000000..92da4fa2
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/cosmovisor.mdx
@@ -0,0 +1,409 @@

---
title: Cosmovisor
---

`cosmovisor` is a process manager for Cosmos SDK application binaries that automates the binary switch at chain upgrades.
It polls the `upgrade-info.json` file that is created by the x/upgrade module at upgrade height, and then can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary.

* [Design](#design)
* [Contributing](#contributing)
* [Setup](#setup)
  * [Installation](#installation)
  * [Command Line Arguments And Environment Variables](#command-line-arguments-and-environment-variables)
  * [Folder Layout](#folder-layout)
* [Usage](#usage)
  * [Initialization](#initialization)
  * [Detecting Upgrades](#detecting-upgrades)
  * [Adding Upgrade Binary](#adding-upgrade-binary)
  * [Auto-Download](#auto-download)
  * [Preparing for an Upgrade](#preparing-for-an-upgrade)
* [Example: SimApp Upgrade](#example-simapp-upgrade)
  * [Chain Setup](#chain-setup)
  * [Prepare Cosmovisor and Start the Chain](#prepare-cosmovisor-and-start-the-chain)
  * [Update App](#update-app)

## Design

Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app:

* it will pass arguments to the associated app (configured by the `DAEMON_NAME` env variable).
  Running `cosmovisor run arg1 arg2 ....` will run `app arg1 arg2 ...`;
* it will manage an app by restarting and upgrading it if needed;
* it is configured using environment variables, not positional arguments.

*Note: If new versions of the application are not set up to run in-place store migrations, migrations will need to be run manually before restarting `cosmovisor` with the new binary. For this reason, we recommend applications adopt in-place store migrations.*

Only the latest version of cosmovisor is actively developed/maintained.

Versions prior to v1.0.0 have a vulnerability that could lead to a DOS. Please upgrade to the latest version.

## Contributing

Cosmovisor is part of the Cosmos SDK monorepo, but it's a separate module with its own release schedule.

Release branches have the following format: `release/cosmovisor/vA.B.x`, where A and B are numbers (e.g. `release/cosmovisor/v1.3.x`). Releases are tagged using the following format: `cosmovisor/vA.B.C`.

## Setup

### Installation

You can download Cosmovisor from the [GitHub releases](https://github.com/cosmos/cosmos-sdk/releases/tag/cosmovisor%2Fv1.5.0).

To install the latest version of `cosmovisor`, run the following command:

```shell
go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@latest
```

To install a specific version, you can specify the version:

```shell
go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@v1.5.0
```

Run `cosmovisor version` to check the cosmovisor version.

Alternatively, for building from source, simply run `make cosmovisor`. The binary will be located in `tools/cosmovisor`.

Installing cosmovisor using `go install` will display the correct `cosmovisor` version.
Building from source (`make cosmovisor`) or installing `cosmovisor` by other means won't display the correct version.

### Command Line Arguments And Environment Variables

The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are:

* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration.
* `run` - Run the configured binary using the rest of the provided arguments.
* `version` - Output the `cosmovisor` version and also run the binary with the `version` argument.
* `config` - Display the current `cosmovisor` configuration, i.e. the values of the environment variables that `cosmovisor` is using.
* `add-upgrade` - Add an upgrade manually to `cosmovisor`. This command allows you to easily add the binary corresponding to an upgrade in cosmovisor.

All arguments passed to `cosmovisor run` will be passed to the application binary (as a subprocess).
`cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor run` cannot accept any command-line arguments other than those available to the application binary.

`cosmovisor` reads its configuration from environment variables, or from its configuration file (use `--cosmovisor-config <path to file>`):

* `DAEMON_HOME` is the location where the `cosmovisor/` directory is kept that contains the genesis binary, the upgrade binaries, and any additional auxiliary files associated with each binary (e.g. `$HOME/.gaiad`, `$HOME/.regend`, `$HOME/.simd`, etc.).
* `DAEMON_NAME` is the name of the binary itself (e.g. `gaiad`, `regend`, `simd`, etc.).
* `DAEMON_ALLOW_DOWNLOAD_BINARIES` (*optional*), if set to `true`, will enable auto-downloading of new binaries (for security reasons, this is intended for full nodes rather than validators). By default, `cosmovisor` will not auto-download new binaries.
* `DAEMON_DOWNLOAD_MUST_HAVE_CHECKSUM` (*optional*, default = `false`), if `true`, cosmovisor will require that a checksum is provided in the upgrade plan for the binary to be downloaded. If `false`, cosmovisor will not require a checksum to be provided, but will still check the checksum if one is provided.
* `DAEMON_RESTART_AFTER_UPGRADE` (*optional*, default = `true`), if `true`, restarts the subprocess with the same command-line arguments and flags (but with the new binary) after a successful upgrade. Otherwise (`false`), `cosmovisor` stops running after an upgrade and requires the system administrator to manually restart it. Note that restart happens only after an upgrade; `cosmovisor` does not auto-restart the subprocess after an error occurs.
* `DAEMON_RESTART_DELAY` (*optional*, default none), allows a node operator to define a delay between the node halt (for upgrade) and the restart by the specified time. The value must be a duration (e.g. `1s`).
* `DAEMON_SHUTDOWN_GRACE` (*optional*, default none), if set, send an interrupt to the binary and wait the specified time to allow for cleanup/cache flush to disk before sending the kill signal. The value must be a duration (e.g. `1s`).
* `DAEMON_POLL_INTERVAL` (*optional*, default 300 milliseconds), is the interval length for polling the upgrade plan file. The value must be a duration (e.g. `1s`).
* `DAEMON_DATA_BACKUP_DIR` (*optional*), sets a custom backup directory. If not set, `DAEMON_HOME` is used.
* `UNSAFE_SKIP_BACKUP` (defaults to `false`), if set to `true`, upgrades directly without performing a backup. Otherwise (`false`, default), backs up the data before trying the upgrade. The default value of `false` is recommended, since a backup is needed to roll back in case of an upgrade failure. We recommend using the default backup option `UNSAFE_SKIP_BACKUP=false`.
* `DAEMON_PREUPGRADE_MAX_RETRIES` (defaults to `0`). The maximum number of times to call [`pre-upgrade`](https://docs.cosmos.network/main/build/building-apps/app-upgrade#pre-upgrade-handling) in the application after an exit status of `31`. After the maximum number of retries, Cosmovisor fails the upgrade.
* `COSMOVISOR_DISABLE_LOGS` (defaults to `false`). If set to `true`, this will completely disable Cosmovisor logs (but not those of the underlying process). This may be useful, for example, when a Cosmovisor subcommand you are executing returns valid JSON that you are then parsing, as logs added by Cosmovisor would make this output invalid JSON.
* `COSMOVISOR_COLOR_LOGS` (defaults to `true`). If set to `true`, this will colorise Cosmovisor logs (but not those of the underlying process).
* `COSMOVISOR_TIMEFORMAT_LOGS` (defaults to `kitchen`). If set to a value (`layout|ansic|unixdate|rubydate|rfc822|rfc822z|rfc850|rfc1123|rfc1123z|rfc3339|rfc3339nano|kitchen`), this will add a timestamp prefix to Cosmovisor logs (but not to those of the underlying process).
* `COSMOVISOR_CUSTOM_PREUPGRADE` (defaults to \`\`).
If set, this will run `$DAEMON_HOME/cosmovisor/$COSMOVISOR_CUSTOM_PREUPGRADE` prior to the upgrade with the arguments `[ upgrade.Name, upgrade.Height ]`. Executes a custom script (separate from, and prior to, the chain daemon's pre-upgrade command).
* `COSMOVISOR_DISABLE_RECASE` (defaults to `false`). If set to `true`, the upgrade directory will be expected to match the upgrade plan name without any case changes.

### Folder Layout

`$DAEMON_HOME/cosmovisor` is expected to belong completely to `cosmovisor` and the subprocesses that are controlled by it. The folder content is organized as follows:

```text expandable
.
├── current -> genesis or upgrades/<name>
├── genesis
│   └── bin
│       └── $DAEMON_NAME
├── upgrades
│   └── <name>
│       ├── bin
│       │   └── $DAEMON_NAME
│       └── upgrade-info.json
└── preupgrade.sh (optional)
```

The `cosmovisor/` directory includes a subdirectory for each version of the application (i.e. `genesis` or `upgrades/<name>`). Within each subdirectory is the application binary (i.e. `bin/$DAEMON_NAME`) and any additional auxiliary files associated with each binary. `current` is a symbolic link to the currently active directory (i.e. `genesis` or `upgrades/<name>`). The `<name>` variable in `upgrades/<name>` is the lowercased URI-encoded name of the upgrade as specified in the upgrade module plan. Note that upgrade name paths are normalized to be lowercased: for instance, `MyUpgrade` is normalized to `myupgrade`, and its path is `upgrades/myupgrade`.

Please note that `$DAEMON_HOME/cosmovisor` only stores the *application binaries*. The `cosmovisor` binary itself can be stored in any typical location (e.g. `/usr/local/bin`). The application will continue to store its data in the default data directory (e.g. `$HOME/.simapp`) or the data directory specified with the `--home` flag.
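The name-normalization rule just described can be sketched in Go. This is purely illustrative (the helper name `upgradeBinPath` is ours, not cosmovisor's), but it shows how lowercasing plus URI encoding maps a plan name onto the folder layout above:

```go
package main

import (
	"fmt"
	"net/url"
	"path/filepath"
	"strings"
)

// upgradeBinPath returns where the layout above places the binary for a
// given upgrade plan name: the name is lowercased and URI-encoded, then
// joined under $DAEMON_HOME/cosmovisor/upgrades/<name>/bin/$DAEMON_NAME.
func upgradeBinPath(daemonHome, daemonName, planName string) string {
	name := url.PathEscape(strings.ToLower(planName))
	return filepath.Join(daemonHome, "cosmovisor", "upgrades", name, "bin", daemonName)
}

func main() {
	// "MyUpgrade" normalizes to "myupgrade", as described above.
	fmt.Println(upgradeBinPath("/home/user/.simapp", "simd", "MyUpgrade"))
}
```

Running this prints `/home/user/.simapp/cosmovisor/upgrades/myupgrade/bin/simd`, matching the `upgrades/myupgrade` example above.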
If `$DAEMON_HOME` is set to the same directory as the application's data directory, you will end up with a configuration like the following:

```text
.simapp
├── config
├── data
└── cosmovisor
```

## Usage

The system administrator is responsible for:

* installing the `cosmovisor` binary
* configuring the host's init system (e.g. `systemd`, `launchd`, etc.)
* appropriately setting the environment variables
* creating the `<DAEMON_HOME>/cosmovisor` directory
* creating the `<DAEMON_HOME>/cosmovisor/genesis/bin` folder
* creating the `<DAEMON_HOME>/cosmovisor/upgrades/<name>/bin` folders
* placing the different versions of the `<DAEMON_NAME>` executable in the appropriate `bin` folders.

`cosmovisor` will set the `current` link to point to `genesis` at first start (i.e. when no `current` link exists) and then handle switching binaries at the correct points in time so that the system administrator can prepare days in advance and relax at upgrade time.

In order to support downloadable binaries, a tarball for each upgrade binary will need to be packaged up and made available through a canonical URL. Additionally, a tarball that includes the genesis binary and all available upgrade binaries can be packaged up and made available so that all the necessary binaries required to sync a fullnode from start can be easily downloaded.

The `DAEMON`-specific code and operations (e.g. cometBFT config, the application db, syncing blocks, etc.) all work as expected. The application binaries' directives such as command-line flags and environment variables also work as expected.

### Initialization

The `cosmovisor init <path to executable>` command creates the folder structure required for using cosmovisor.
It does the following:

* creates the `<DAEMON_HOME>/cosmovisor` folder if it doesn't yet exist
* creates the `<DAEMON_HOME>/cosmovisor/genesis/bin` folder if it doesn't yet exist
* copies the provided executable file to `<DAEMON_HOME>/cosmovisor/genesis/bin/<DAEMON_NAME>`
* creates the `current` link, pointing to the `genesis` folder

It uses the `DAEMON_HOME` and `DAEMON_NAME` environment variables for folder location and executable name.

The `cosmovisor init` command is specifically for initializing cosmovisor, and should not be confused with a chain's `init` command (e.g. `cosmovisor run init`).

### Detecting Upgrades

`cosmovisor` polls the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height.
The following heuristic is applied to detect the upgrade:

* When starting, `cosmovisor` doesn't know much about the currently running upgrade, except the binary, which is `current/bin/`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name.
* If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exist, then `cosmovisor` will wait for the `data/upgrade-info.json` file to trigger an upgrade.
* If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` immediately tries to make an upgrade according to the `name` attribute in `data/upgrade-info.json`.
* Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger the upgrade mechanism.

When the upgrade mechanism is triggered, `cosmovisor` will:

1. 
if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor/<name>/bin` (where `<name>` is the `upgrade-info.json:name` attribute);
2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`.

### Adding Upgrade Binary

`cosmovisor` has an `add-upgrade` command that allows you to easily link a binary to an upgrade. It creates a new folder in `cosmovisor/upgrades/<name>` and copies the provided executable file to `cosmovisor/upgrades/<name>/bin/<DAEMON_NAME>`.

Using the `--upgrade-height` flag allows you to specify at which height the binary should be switched, without going via a governance proposal.
This enables support for emergency coordinated upgrades, where the binary must be switched at a specific height but there is no time to go through a governance proposal.

`--upgrade-height` creates an `upgrade-info.json` file. This means that if a chain upgrade via governance proposal is executed before the height specified with `--upgrade-height`, the governance proposal will overwrite the `upgrade-info.json` plan created by `add-upgrade --upgrade-height <height>`.
Take this into consideration when using `--upgrade-height`.

### Auto-Download

Generally, `cosmovisor` requires that the system administrator place all relevant binaries on disk before the upgrade happens. However, for people who don't need such control and want an automated setup (maybe they are syncing a non-validating fullnode and want to do little maintenance), there is another option.

**NOTE: we don't recommend using auto-download** because it doesn't verify in advance whether a binary is available. If there is any issue downloading a binary, cosmovisor will stop and won't restart the app (which could lead to a chain halt).
If `DAEMON_ALLOW_DOWNLOAD_BINARIES` is set to `true`, and no local binary can be found when an upgrade is triggered, `cosmovisor` will attempt to download and install the binary itself based on the instructions in the `info` attribute in the `data/upgrade-info.json` file. The file is constructed by the x/upgrade module and contains data from the upgrade `Plan` object. The `Plan` has an `info` field that is expected to have one of the following two valid formats to specify a download:

1. Store an os/architecture -> binary URI map in the upgrade plan info field as JSON under the `"binaries"` key. For example:

   ```json
   {
     "binaries": {
       "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
     }
   }
   ```

   You can include multiple binaries at once to ensure more than one environment will receive the correct binaries:

   ```json
   {
     "binaries": {
       "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
       "linux/arm64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
       "darwin/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
     }
   }
   ```

   When submitting this as a proposal, ensure there are no spaces.
An example command using `gaiad` could look like: + + ```shell expandable + > gaiad tx upgrade software-upgrade Vega \ + --title Vega \ + --deposit 100uatom \ + --upgrade-height 7368420 \ + --upgrade-info '{"binaries":{"linux/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-amd64","linux/arm64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-arm64","darwin/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-darwin-amd64"}}' \ + --summary "upgrade to Vega" \ + --gas 400000 \ + --from user \ + --chain-id test \ + --home test/val2 \ + --node tcp://localhost:36657 \ + --yes + ``` + +2. Store a link to a file that contains all information in the above format (e.g. if you want to specify lots of binaries, changelog info, etc. without filling up the blockchain). For example: + + ```text + https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e + ``` + +When `cosmovisor` is triggered to download the new binary, `cosmovisor` will parse the `"binaries"` field, download the new binary with [go-getter](https://github.com/hashicorp/go-getter), and unpack the new binary in the `upgrades/` folder so that it can be run as if it was installed manually. + +Note that for this mechanism to provide strong security guarantees, all URLs should include a SHA 256/512 checksum. This ensures that no false binary is run, even if someone hacks the server or hijacks the DNS. `go-getter` will always ensure the downloaded file matches the checksum if it is provided. `go-getter` will also handle unpacking archives into directories (in this case the download link should point to a `zip` file of all data in the `bin` directory). + +To properly create a sha256 checksum on linux, you can use the `sha256sum` utility. 
For example: + +```shell +sha256sum ./testdata/repo/zip_directory/autod.zip +``` + +The result will look something like the following: `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`. + +You can also use `sha512sum` if you would prefer to use longer hashes, or `md5sum` if you would prefer to use broken hashes. Whichever you choose, make sure to set the hash algorithm properly in the checksum argument to the URL. + +### Preparing for an Upgrade + +To prepare for an upgrade, use the `prepare-upgrade` command: + +```shell +cosmovisor prepare-upgrade +``` + +This command performs the following actions: + +1. Retrieves upgrade information directly from the blockchain about the next scheduled upgrade. +2. Downloads the new binary specified in the upgrade plan. +3. Verifies the binary's checksum (if required by configuration). +4. Places the new binary in the appropriate directory for Cosmovisor to use during the upgrade. + +The `prepare-upgrade` command provides detailed logging throughout the process, including: + +* The name and height of the upcoming upgrade +* The URL from which the new binary is being downloaded +* Confirmation of successful download and verification +* The path where the new binary has been placed + +Example output: + +```bash +INFO Preparing for upgrade name=v1.0.0 height=1000000 +INFO Downloading upgrade binary url=https://example.com/binary/v1.0.0?checksum=sha256:339911508de5e20b573ce902c500ee670589073485216bee8b045e853f24bce8 +INFO Upgrade preparation complete name=v1.0.0 height=1000000 +``` + +*Note: The current way of downloading manually and placing the binary at the right place would still work.* + +## Example: SimApp Upgrade + +The following instructions provide a demonstration of `cosmovisor` using the simulation application (`simapp`) shipped with the Cosmos SDK's source code. The following commands are to be run from within the `cosmos-sdk` repository. 
+ +### Chain Setup + +Let's create a new chain using the `v0.47.4` version of simapp (the Cosmos SDK demo app): + +```shell +git checkout v0.47.4 +make build +``` + +Clean `~/.simapp` (never do this in a production environment): + +```shell +./build/simd tendermint unsafe-reset-all +``` + +Set up app config: + +```shell +./build/simd config chain-id test +./build/simd config keyring-backend test +./build/simd config broadcast-mode sync +``` + +Initialize the node and overwrite any previous genesis file (never do this in a production environment): + +```shell +./build/simd init test --chain-id test --overwrite +``` + +For the sake of this demonstration, amend `voting_period` in `genesis.json` to a reduced time of 20 seconds (`20s`): + +```shell +cat <<< $(jq '.app_state.gov.params.voting_period = "20s"' $HOME/.simapp/config/genesis.json) > $HOME/.simapp/config/genesis.json +``` + +Create a validator, and setup genesis transaction: + +```shell +./build/simd keys add validator +./build/simd genesis add-genesis-account validator 1000000000stake --keyring-backend test +./build/simd genesis gentx validator 1000000stake --chain-id test +./build/simd genesis collect-gentxs +``` + +#### Prepare Cosmovisor and Start the Chain + +Set the required environment variables: + +```shell +export DAEMON_NAME=simd +export DAEMON_HOME=$HOME/.simapp +``` + +Set the optional environment variable to trigger an automatic app restart: + +```shell +export DAEMON_RESTART_AFTER_UPGRADE=true +``` + +Initialize cosmovisor with the current binary: + +```shell +cosmovisor init ./build/simd +``` + +Now you can run cosmovisor with simapp v0.47.4: + +```shell +cosmovisor run start +``` + +### Update App + +Update app to the latest version (e.g. v0.50.0). + + + +Migration plans are defined using the `x/upgrade` module and described in [In-Place Store Migrations](https://github.com/cosmos/cosmos-sdk/blob/main/docs/learn/advanced/15-upgrade.md). Migrations can perform any deterministic state change. 
The migration plan to upgrade the simapp from v0.47 to v0.50 is defined in `simapp/upgrade.go`.

Build the new version `simd` binary:

```shell
make build
```

Add the new `simd` binary and the upgrade name:

The migration name must match the one defined in the migration plan.

```shell
cosmovisor add-upgrade v047-to-v050 ./build/simd
```

Open a new terminal window and submit an upgrade proposal along with a deposit and a vote (these commands must be run within 20 seconds of each other):

```shell
./build/simd tx upgrade software-upgrade v047-to-v050 --title upgrade --summary upgrade --upgrade-height 200 --upgrade-info "{}" --no-validate --from validator --yes
./build/simd tx gov deposit 1 10000000stake --from validator --yes
./build/simd tx gov vote 1 yes --from validator --yes
```

The upgrade will occur automatically at height 200. Note: you may need to change the upgrade height in the snippet above if your test run takes more time.

diff --git a/docs/sdk/next/documentation/operations/interact-node.mdx b/docs/sdk/next/documentation/operations/interact-node.mdx
new file mode 100644
index 00000000..efd8cf70
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/interact-node.mdx
@@ -0,0 +1,298 @@
---
title: Interacting with the Node
---

## Synopsis

There are multiple ways to interact with a node: using the CLI, using gRPC, or using the REST endpoints.

**Pre-requisite Readings**

- [gRPC, REST and CometBFT Endpoints](/docs/sdk/next/api-reference/service-apis/grpc_rest)
- [Running a Node](/docs/sdk/next/documentation/operations/run-node)

## Using the CLI

Now that your chain is running, it is time to try sending tokens from the first account you created to a second account.
In a new terminal window, start by running the following query command: + +```bash +simd query bank balances $MY_VALIDATOR_ADDRESS +``` + +You should see the current balance of the account you created, equal to the original balance of `stake` you granted it minus the amount you delegated via the `gentx`. Now, create a second account: + +```bash +simd keys add recipient --keyring-backend test + +# Put the generated address in a variable for later use. +RECIPIENT=$(simd keys show recipient -a --keyring-backend test) +``` + +The command above creates a local key-pair that is not yet registered on the chain. An account is created the first time it receives tokens from another account. Now, run the following command to send tokens to the `recipient` account: + +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test + +# Check that the recipient account did receive the tokens. +simd query bank balances $RECIPIENT +``` + +Finally, delegate some of the stake tokens sent to the `recipient` account to the validator: + +```bash +simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test + +# Query the total delegations to `validator`. +simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test) +``` + +You should see two delegations, the first one made from the `gentx`, and the second one you just performed from the `recipient` account. + +## Using gRPC + +The Protobuf ecosystem developed tools for different use cases, including code-generation from `*.proto` files into various languages. These tools allow the building of clients easily. Often, the client connection (i.e. the transport) can be plugged and replaced very easily. Let's explore one of the most popular transports: [gRPC](/docs/sdk/next/api-reference/service-apis/grpc_rest). 
Since the code generation library largely depends on your own tech stack, we will only present three alternatives:

- `grpcurl` for generic debugging and testing,
- programmatically via Go,
- CosmJS for JavaScript/TypeScript developers.

### grpcurl

[grpcurl](https://github.com/fullstorydev/grpcurl) is like `curl` but for gRPC. It is also available as a Go library, but we will use it only as a CLI command for debugging and testing purposes. Follow the instructions in the previous link to install it.

Assuming you have a local node running (either a localnet, or connected to a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9090` with the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml)):

```bash
grpcurl -plaintext localhost:9090 list
```

You should see a list of gRPC services, like `cosmos.bank.v1beta1.Query`. This is called reflection, which is a Protobuf endpoint returning a description of all available endpoints. Each of these represents a different Protobuf service, and each service exposes multiple RPC methods you can query against.

In order to get a description of the service you can run the following command:

```bash
grpcurl -plaintext \
    localhost:9090 \
    describe cosmos.bank.v1beta1.Query # Service we want to inspect
```

It's also possible to execute an RPC call to query the node for information:

```bash
grpcurl \
    -plaintext \
    -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/AllBalances
```

The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786).
#### Query for historical state using grpcurl

You may also query for historical data by passing some [gRPC metadata](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) to the query: the `x-cosmos-block-height` metadata should contain the block to query. Using grpcurl as above, the command looks like:

```bash
grpcurl \
    -plaintext \
    -H "x-cosmos-block-height: 123" \
    -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/AllBalances
```

Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response.

### Programmatically via Go

The following snippet shows how to query the state using gRPC inside a Go program. The idea is to create a gRPC connection, and use the Protobuf-generated client code to query the gRPC server.

#### Install Cosmos SDK

```bash
go get github.com/cosmos/cosmos-sdk@main
```

```go expandable
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"

	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
)

func queryState() error {
	myAddress, err := sdk.AccAddressFromBech32("cosmos1...") // the my_validator or recipient address.
	if err != nil {
		return err
	}

	// Create a connection to the gRPC server.
	grpcConn, err := grpc.Dial(
		"127.0.0.1:9090",    // your gRPC server address.
		grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism.
		// This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry;
		// if the request/response types contain interfaces, instead of 'nil' you should pass the application-specific codec.
		grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())),
	)
	if err != nil {
		return err
	}
	defer grpcConn.Close()

	// This creates a gRPC client to query the x/bank service.
	bankClient := banktypes.NewQueryClient(grpcConn)

	bankRes, err := bankClient.Balance(
		context.Background(),
		&banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"},
	)
	if err != nil {
		return err
	}

	fmt.Println(bankRes.GetBalance()) // Prints the account balance

	return nil
}

func main() {
	if err := queryState(); err != nil {
		panic(err)
	}
}
```

You can replace the query client (here we are using `x/bank`'s) with one generated from any other Protobuf service. The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786).

#### Query for historical state using Go

Querying for historical blocks is done by adding the block height metadata in the gRPC request.

```go expandable
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"

	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	grpctypes "github.com/cosmos/cosmos-sdk/types/grpc"
	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
)

func queryState() error {
	myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") // the my_validator or recipient address.
	if err != nil {
		return err
	}

	// Create a connection to the gRPC server.
	grpcConn, err := grpc.Dial(
		"127.0.0.1:9090",    // your gRPC server address.
		grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism.
		// This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry;
		// if the request/response types contain interfaces, instead of 'nil' you should pass the application-specific codec.
		grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())),
	)
	if err != nil {
		return err
	}
	defer grpcConn.Close()

	// This creates a gRPC client to query the x/bank service.
	bankClient := banktypes.NewQueryClient(grpcConn)

	var header metadata.MD
	_, err = bankClient.Balance(
		metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), // Add metadata to request
		&banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"},
		grpc.Header(&header), // Retrieve header from response
	)
	if err != nil {
		return err
	}
	blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader)

	fmt.Println(blockHeight) // Prints the block height (12)

	return nil
}

func main() {
	if err := queryState(); err != nil {
		panic(err)
	}
}
```

### CosmJS

CosmJS documentation can be found at [Link](https://cosmos.github.io/cosmjs). As of January 2021, CosmJS documentation is still a work in progress.

## Using the REST Endpoints

As described in the [gRPC guide](/docs/sdk/next/api-reference/service-apis/grpc_rest), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway. The format of the URL path is based on the Protobuf service method's fully-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters.

Note that the REST endpoints are not enabled by default. To enable them, edit the `api` section of your `~/.simapp/config/app.toml` file:

```toml
# Enable defines if the API server should be enabled.
enable = true
```

As a concrete example, the `curl` command to make a balances request is:

```bash
curl \
    -X GET \
    -H "Content-Type: application/json" \
    http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS
```

Make sure to replace `localhost:1317` with the REST endpoint of your node, configured under the `api.address` field.

The list of all available REST endpoints is available as a Swagger specification file, and can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to `true` in your [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml) file.

### Query for historical state using REST

Querying for historical state is done using the HTTP header `x-cosmos-block-height`. For example, a curl command would look like:

```bash
curl \
    -X GET \
    -H "Content-Type: application/json" \
    -H "x-cosmos-block-height: 123" \
    http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS
```

Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response.

### Cross-Origin Resource Sharing (CORS)

[CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default, to help with security. If you would like to use the rest-server in a public environment, we recommend you put it behind a reverse proxy; this can be done with [nginx](https://www.nginx.com/). For testing and development purposes there is an `enabled-unsafe-cors` field inside [`app.toml`](/docs/sdk/next/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml).
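To tie the REST examples together, here is a small Go sketch that builds the balances request with the historical-height header described above. The endpoint path and `x-cosmos-block-height` header come from this page; the base URL and address are placeholders you would replace with your node's REST endpoint and a real account:

```go
package main

import (
	"fmt"
	"net/http"
)

// buildBalancesRequest constructs the gRPC-gateway balances query,
// optionally pinning it to a historical block via the
// x-cosmos-block-height header described above.
func buildBalancesRequest(base, address string, height int) (*http.Request, error) {
	endpoint := fmt.Sprintf("%s/cosmos/bank/v1beta1/balances/%s", base, address)
	req, err := http.NewRequest(http.MethodGet, endpoint, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	if height > 0 {
		req.Header.Set("x-cosmos-block-height", fmt.Sprintf("%d", height))
	}
	return req, nil
}

func main() {
	// Placeholder address; use $MY_VALIDATOR_ADDRESS in practice.
	req, err := buildBalancesRequest("http://localhost:1317", "cosmos1...", 123)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String())
	fmt.Println(req.Header.Get("x-cosmos-block-height"))
}
```

Passing the resulting request to `http.DefaultClient.Do(req)` against a live node performs the same query as the `curl` commands above.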
diff --git a/docs/sdk/next/documentation/operations/intro.mdx b/docs/sdk/next/documentation/operations/intro.mdx
new file mode 100644
index 00000000..727ee456
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/intro.mdx
@@ -0,0 +1,13 @@
---
title: SDK Migrations
---

To smooth the upgrade to the latest stable release, the SDK includes a CLI command for hard-fork migrations (under the `<appd> genesis migrate` subcommand).
Additionally, the SDK includes in-place migrations for its core modules. These in-place migrations are useful for migrating between major releases.

* Hard-fork migrations are supported from the last major release to the current one.
* [In-place module migrations](https://docs.cosmos.network/main/core/upgrade#overwriting-genesis-functions) are supported from the last two major releases to the current one.

Migration from a version older than the last two major releases is not supported.

When migrating from a previous version, refer to the [`UPGRADING.md`](/docs/sdk/next/documentation/operations/upgrading) and the `CHANGELOG.md` of the version you are migrating to.

diff --git a/docs/sdk/next/documentation/operations/keyring.mdx b/docs/sdk/next/documentation/operations/keyring.mdx
new file mode 100644
index 00000000..39bf63df
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/keyring.mdx
@@ -0,0 +1,146 @@
---
title: Setting up the keyring
---

## Synopsis

This document describes how to configure and use the keyring and its various backends for an [**application**](/docs/sdk/next/documentation/application-framework/app-anatomy).

The keyring holds the private/public key pairs used to interact with a node. For instance, a validator key needs to be set up before running the blockchain node, so that blocks can be correctly signed. The private key can be stored in different locations, called "backends," such as a file or the operating system's own key storage.
+ +## Available backends for the keyring + +Starting with the v0.38.0 release, Cosmos SDK comes with a new keyring implementation +that provides a set of commands to manage cryptographic keys in a secure fashion. The +new keyring supports multiple storage backends, some of which may not be available on +all operating systems. + +### The `os` backend + +The `os` backend relies on operating system-specific defaults to handle key storage +securely. Typically, an operating system's credential subsystem handles password prompts, +private key storage, and user sessions according to the user's password policies. Here +is a list of the most popular operating systems and their respective password managers: + +- macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac) +- Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management) +- GNU/Linux: + - [libsecret](https://gitlab.gnome.org/GNOME/libsecret) + - [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html) + - [keyctl](https://www.kernel.org/doc/html/latest/security/keys/core.html) + +GNU/Linux distributions that use GNOME as the default desktop environment typically come with +[Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE-based distributions are +commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager). +While the former is in fact a convenient `libsecret` frontend, the latter is a `kwallet` +client. `keyctl` is a secure backend that leverages the Linux kernel security key management system +to store cryptographic keys securely in memory. + +`os` is the default option since operating systems' default credential managers are +designed to meet users' most common needs and provide them with a comfortable +experience without compromising on security. + +The recommended backends for headless environments are `file` and `pass`.
+ +### The `file` backend + +The `file` backend more closely resembles the keybase implementation used prior to +v0.38.1. It stores the keyring encrypted within the app's configuration directory. This +keyring will request a password each time it is accessed, which may occur multiple +times in a single command, resulting in repeated password prompts. If you use bash scripts +to execute commands with the `file` backend, you may want to use the following format +for multiple prompts: + +```shell +# assuming that KEYPASSWD is set in the environment +$ gaiacli config keyring-backend file # use file backend +$ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts +$ echo $KEYPASSWD | gaiacli keys show me # single prompt +``` + + + The first time you add a key to an empty keyring, you will be prompted to type + the password twice. + + +### The `pass` backend + +The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk +encryption of keys' sensitive data and metadata. Keys are stored inside `gpg` encrypted files +within app-specific directories. `pass` is available for the most popular UNIX +operating systems as well as GNU/Linux distributions. Please refer to its manual page for +information on how to download and install it. + + + **pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically + invokes the `gpg-agent` daemon upon execution, which handles the caching of + GnuPG credentials. Please refer to the `gpg-agent` man page for more information + on how to configure cache parameters such as credentials TTL and passphrase + expiration. + + +The password store must be set up prior to first use: + +```shell +pass init +``` + +Replace `` with your GPG key ID. You can use your personal GPG key or an alternative +one you may want to use specifically to encrypt the password store.
+ +### The `kwallet` backend + +The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on the +GNU/Linux distributions that ship KDE as the default desktop environment. Please refer to +[KWallet API documentation](https://api.kde.org/frameworks/kwallet/html/index.html) for more +information. + +### The `keyctl` backend + +The _Kernel Key Retention Service_ is a security facility that +has been added to the Linux kernel relatively recently. It allows sensitive +cryptographic data such as passwords, private keys, and authentication tokens +to be stored securely in memory. + +The `keyctl` backend is available on Linux platforms only. + +### The `test` backend + +The `test` backend is a password-less variation of the `file` backend. Keys are stored +unencrypted on disk. + +**Provided for testing purposes only. The `test` backend is not recommended for use in production environments**. + +### The `memory` backend + +The `memory` backend stores keys in memory. The keys are immediately deleted after the program has exited. + +**Provided for testing purposes only. The `memory` backend is not recommended for use in production environments**. + +### Setting backend using an env variable + +You can set the keyring backend using the environment variable `BINNAME_KEYRING_BACKEND`. For example, if your binary name is `gaia-v5`, then set: `export GAIA_V5_KEYRING_BACKEND=pass` + +## Adding keys to the keyring + + + Make sure you can build your own binary, and replace `simd` with the name of + your binary in the snippets. + + +Applications developed using the Cosmos SDK come with the `keys` subcommand. For the purpose of this tutorial, we're running the `simd` CLI, which is an application built using the Cosmos SDK for testing and educational purposes. For more information, see [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp).
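
The environment-variable naming rule described above (uppercase the binary name, with `-` replaced by `_`, plus the `_KEYRING_BACKEND` suffix) can be sketched as a small helper. This helper is purely illustrative and is not part of the SDK API:

```go
package main

import (
	"fmt"
	"strings"
)

// keyringBackendEnvVar derives the environment variable name checked for
// the keyring backend from a binary name, following the rule illustrated
// above: "gaia-v5" -> "GAIA_V5_KEYRING_BACKEND". Illustrative only.
func keyringBackendEnvVar(binName string) string {
	name := strings.ToUpper(strings.ReplaceAll(binName, "-", "_"))
	return name + "_KEYRING_BACKEND"
}

func main() {
	fmt.Println(keyringBackendEnvVar("gaia-v5")) // GAIA_V5_KEYRING_BACKEND
	fmt.Println(keyringBackendEnvVar("simd"))    // SIMD_KEYRING_BACKEND
}
```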
+ +You can use `simd keys` for help about the keys command and `simd keys [command] --help` for more information about a particular subcommand. + +To create a new key in the keyring, run the `add` subcommand with a `` argument. For the purpose of this tutorial, we will use only the `test` backend and call our new key `my_validator`. This key will be used in the next section. + +```bash +$ simd keys add my_validator --keyring-backend test + +# Put the generated address in a variable for later use. +MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test) +``` + +This command generates a new 24-word mnemonic phrase, persists it to the relevant backend, and outputs information about the keypair. If this keypair will be used to hold value-bearing tokens, be sure to write down the mnemonic phrase somewhere safe! + +By default, the keyring generates a `secp256k1` keypair. The keyring also supports `ed25519` keys, which may be created by passing the `--algo ed25519` flag. A keyring can of course hold both types of keys simultaneously, and the Cosmos SDK's `x/auth` module natively supports both of these public key algorithms. diff --git a/docs/sdk/next/documentation/operations/node.mdx b/docs/sdk/next/documentation/operations/node.mdx new file mode 100644 index 00000000..724a154f --- /dev/null +++ b/docs/sdk/next/documentation/operations/node.mdx @@ -0,0 +1,4192 @@ +--- +title: Node Client (Daemon) +--- + +## Synopsis + +The main endpoint of a Cosmos SDK application is the daemon client, otherwise known as the full-node client. The full-node runs the state-machine, starting from a genesis file. It connects to peers running the same client in order to receive and relay transactions, block proposals and signatures. The full-node consists of the application, defined with the Cosmos SDK, and a consensus engine connected to the application via the ABCI.
+ + +**Pre-requisite Readings** + +- [Anatomy of an SDK application](/docs/sdk/next/documentation/application-framework/app-anatomy) + + + +## `main` function + +The full-node client of any Cosmos SDK application is built by running a `main` function. The client is generally named by appending the `-d` suffix to the application name (e.g. `appd` for an application named `app`), and the `main` function is defined in a `./appd/cmd/main.go` file. Running this function creates an executable `appd` that comes with a set of commands. For an app named `app`, the main command is [`appd start`](#start-command), which starts the full-node. + +In general, developers will implement the `main.go` function with the following structure: + +- First, an [`encodingCodec`](/docs/sdk/next/documentation/protocol-development/encoding) is instantiated for the application. +- Then, the `config` is retrieved and config parameters are set. This mainly involves setting the Bech32 prefixes for [addresses](/docs/sdk/next/documentation/protocol-development/accounts#addresses). + +```go expandable +package types + +import ( + + "context" + "fmt" + "sync" + "github.com/cosmos/cosmos-sdk/version" +) + +/ DefaultKeyringServiceName defines a default service name for the keyring. +const DefaultKeyringServiceName = "cosmos" + +/ Config is the structure that holds the SDK configuration parameters. +/ This could be used to initialize certain configuration parameters for the SDK. +type Config struct { + fullFundraiserPath string + bech32AddressPrefix map[string]string + txEncoder TxEncoder + addressVerifier func([]byte) + +error + mtx sync.RWMutex + + / SLIP-44 related + purpose uint32 + coinType uint32 + + sealed bool + sealedch chan struct{ +} +} + +/ cosmos-sdk wide global singleton +var ( + sdkConfig *Config + initConfig sync.Once +) + +/ New returns a new Config with default values. 
+func NewConfig() *Config { + return &Config{ + sealedch: make(chan struct{ +}), + bech32AddressPrefix: map[string]string{ + "account_addr": Bech32PrefixAccAddr, + "validator_addr": Bech32PrefixValAddr, + "consensus_addr": Bech32PrefixConsAddr, + "account_pub": Bech32PrefixAccPub, + "validator_pub": Bech32PrefixValPub, + "consensus_pub": Bech32PrefixConsPub, +}, + fullFundraiserPath: FullFundraiserPath, + + purpose: Purpose, + coinType: CoinType, + txEncoder: nil, +} +} + +/ GetConfig returns the config instance for the SDK. +func GetConfig() *Config { + initConfig.Do(func() { + sdkConfig = NewConfig() +}) + +return sdkConfig +} + +/ GetSealedConfig returns the config instance for the SDK if/once it is sealed. +func GetSealedConfig(ctx context.Context) (*Config, error) { + config := GetConfig() + +select { + case <-config.sealedch: + return config, nil + case <-ctx.Done(): + return nil, ctx.Err() +} +} + +func (config *Config) + +assertNotSealed() { + config.mtx.RLock() + +defer config.mtx.RUnlock() + if config.sealed { + panic("Config is sealed") +} +} + +/ SetBech32PrefixForAccount builds the Config with Bech32 addressPrefix and publKeyPrefix for accounts +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForAccount(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["account_addr"] = addressPrefix + config.bech32AddressPrefix["account_pub"] = pubKeyPrefix +} + +/ SetBech32PrefixForValidator builds the Config with Bech32 addressPrefix and publKeyPrefix for validators +/ +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForValidator(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["validator_addr"] = addressPrefix + config.bech32AddressPrefix["validator_pub"] = pubKeyPrefix +} + +/ SetBech32PrefixForConsensusNode builds the Config with Bech32 addressPrefix and publKeyPrefix for consensus nodes +/ and returns the config 
instance +func (config *Config) + +SetBech32PrefixForConsensusNode(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["consensus_addr"] = addressPrefix + config.bech32AddressPrefix["consensus_pub"] = pubKeyPrefix +} + +/ SetTxEncoder builds the Config with TxEncoder used to marshal StdTx to bytes +func (config *Config) + +SetTxEncoder(encoder TxEncoder) { + config.assertNotSealed() + +config.txEncoder = encoder +} + +/ SetAddressVerifier builds the Config with the provided function for verifying that addresses +/ have the correct format +func (config *Config) + +SetAddressVerifier(addressVerifier func([]byte) + +error) { + config.assertNotSealed() + +config.addressVerifier = addressVerifier +} + +/ Set the FullFundraiserPath (BIP44Prefix) + +on the config. +/ +/ Deprecated: This method is supported for backward compatibility only and will be removed in a future release. Use SetPurpose and SetCoinType instead. +func (config *Config) + +SetFullFundraiserPath(fullFundraiserPath string) { + config.assertNotSealed() + +config.fullFundraiserPath = fullFundraiserPath +} + +/ Set the BIP-0044 Purpose code on the config +func (config *Config) + +SetPurpose(purpose uint32) { + config.assertNotSealed() + +config.purpose = purpose +} + +/ Set the BIP-0044 CoinType code on the config +func (config *Config) + +SetCoinType(coinType uint32) { + config.assertNotSealed() + +config.coinType = coinType +} + +/ Seal seals the config such that the config state could not be modified further +func (config *Config) + +Seal() *Config { + config.mtx.Lock() + if config.sealed { + config.mtx.Unlock() + +return config +} + + / signal sealed after state exposed/unlocked + config.sealed = true + config.mtx.Unlock() + +close(config.sealedch) + +return config +} + +/ GetBech32AccountAddrPrefix returns the Bech32 prefix for account address +func (config *Config) + +GetBech32AccountAddrPrefix() + +string { + return config.bech32AddressPrefix["account_addr"] 
+} + +/ GetBech32ValidatorAddrPrefix returns the Bech32 prefix for validator address +func (config *Config) + +GetBech32ValidatorAddrPrefix() + +string { + return config.bech32AddressPrefix["validator_addr"] +} + +/ GetBech32ConsensusAddrPrefix returns the Bech32 prefix for consensus node address +func (config *Config) + +GetBech32ConsensusAddrPrefix() + +string { + return config.bech32AddressPrefix["consensus_addr"] +} + +/ GetBech32AccountPubPrefix returns the Bech32 prefix for account public key +func (config *Config) + +GetBech32AccountPubPrefix() + +string { + return config.bech32AddressPrefix["account_pub"] +} + +/ GetBech32ValidatorPubPrefix returns the Bech32 prefix for validator public key +func (config *Config) + +GetBech32ValidatorPubPrefix() + +string { + return config.bech32AddressPrefix["validator_pub"] +} + +/ GetBech32ConsensusPubPrefix returns the Bech32 prefix for consensus node public key +func (config *Config) + +GetBech32ConsensusPubPrefix() + +string { + return config.bech32AddressPrefix["consensus_pub"] +} + +/ GetTxEncoder return function to encode transactions +func (config *Config) + +GetTxEncoder() + +TxEncoder { + return config.txEncoder +} + +/ GetAddressVerifier returns the function to verify that addresses have the correct format +func (config *Config) + +GetAddressVerifier() + +func([]byte) + +error { + return config.addressVerifier +} + +/ GetPurpose returns the BIP-0044 Purpose code on the config. +func (config *Config) + +GetPurpose() + +uint32 { + return config.purpose +} + +/ GetCoinType returns the BIP-0044 CoinType code on the config. +func (config *Config) + +GetCoinType() + +uint32 { + return config.coinType +} + +/ GetFullFundraiserPath returns the BIP44Prefix. +/ +/ Deprecated: This method is supported for backward compatibility only and will be removed in a future release. Use GetFullBIP44Path instead. 
+func (config *Config) + +GetFullFundraiserPath() + +string { + return config.fullFundraiserPath +} + +/ GetFullBIP44Path returns the BIP44Prefix. +func (config *Config) + +GetFullBIP44Path() + +string { + return fmt.Sprintf("m/%d'/%d'/0'/0/0", config.purpose, config.coinType) +} + +func KeyringServiceName() + +string { + if len(version.Name) == 0 { + return DefaultKeyringServiceName +} + +return version.Name +} +``` + +- Using [cobra](https://github.com/spf13/cobra), the root command of the full-node client is created. After that, all the custom commands of the application are added using the `AddCommand()` method of `rootCmd`. +- Add default server commands to `rootCmd` using the `server.AddCommands()` method. These commands are separated from the ones added above since they are standard and defined at Cosmos SDK level. They should be shared by all Cosmos SDK-based applications. They include the most important command: the [`start` command](#start-command). +- Prepare and execute the `executor`. 
+ +```go expandable +package cli + +import ( + + "fmt" + "os" + "path/filepath" + "runtime" + "strings" + "github.com/spf13/cobra" + "github.com/spf13/viper" +) + +const ( + HomeFlag = "home" + TraceFlag = "trace" + OutputFlag = "output" + EncodingFlag = "encoding" +) + +/ Executable is the minimal interface to *corba.Command, so we can +/ wrap if desired before the test +type Executable interface { + Execute() + +error +} + +/ PrepareBaseCmd is meant for CometBFT and other servers +func PrepareBaseCmd(cmd *cobra.Command, envPrefix, defaultHome string) + +Executor { + cobra.OnInitialize(func() { + initEnv(envPrefix) +}) + +cmd.PersistentFlags().StringP(HomeFlag, "", defaultHome, "directory for config and data") + +cmd.PersistentFlags().Bool(TraceFlag, false, "print out full stack trace on errors") + +cmd.PersistentPreRunE = concatCobraCmdFuncs(bindFlagsLoadViper, cmd.PersistentPreRunE) + +return Executor{ + cmd, os.Exit +} +} + +/ PrepareMainCmd is meant for client side libs that want some more flags +/ +/ This adds --encoding (hex, btc, base64) + +and --output (text, json) + +to +/ the command. These only really make sense in interactive commands. +func PrepareMainCmd(cmd *cobra.Command, envPrefix, defaultHome string) + +Executor { + cmd.PersistentFlags().StringP(EncodingFlag, "e", "hex", "Binary encoding (hex|b64|btc)") + +cmd.PersistentFlags().StringP(OutputFlag, "o", "text", "Output format (text|json)") + +cmd.PersistentPreRunE = concatCobraCmdFuncs(validateOutput, cmd.PersistentPreRunE) + +return PrepareBaseCmd(cmd, envPrefix, defaultHome) +} + +/ initEnv sets to use ENV variables if set. +func initEnv(prefix string) { + copyEnvVars(prefix) + + / env variables with TM prefix (eg. 
TM_ROOT) + +viper.SetEnvPrefix(prefix) + +viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_")) + +viper.AutomaticEnv() +} + +/ This copies all variables like TMROOT to TM_ROOT, +/ so we can support both formats for the user +func copyEnvVars(prefix string) { + prefix = strings.ToUpper(prefix) + ps := prefix + "_" + for _, e := range os.Environ() { + kv := strings.SplitN(e, "=", 2) + if len(kv) == 2 { + k, v := kv[0], kv[1] + if strings.HasPrefix(k, prefix) && !strings.HasPrefix(k, ps) { + k2 := strings.Replace(k, prefix, ps, 1) + +os.Setenv(k2, v) +} + +} + +} +} + +/ Executor wraps the cobra Command with a nicer Execute method +type Executor struct { + *cobra.Command + Exit func(int) / this is os.Exit by default, override in tests +} + +type ExitCoder interface { + ExitCode() + +int +} + +/ execute adds all child commands to the root command sets flags appropriately. +/ This is called by main.main(). It only needs to happen once to the rootCmd. +func (e Executor) + +Execute() + +error { + e.SilenceUsage = true + e.SilenceErrors = true + err := e.Command.Execute() + if err != nil { + if viper.GetBool(TraceFlag) { + const size = 64 << 10 + buf := make([]byte, size) + +buf = buf[:runtime.Stack(buf, false)] + fmt.Fprintf(os.Stderr, "ERROR: %v\n%s\n", err, buf) +} + +else { + fmt.Fprintf(os.Stderr, "ERROR: %v\n", err) +} + + / return error code 1 by default, can override it with a special error type + exitCode := 1 + if ec, ok := err.(ExitCoder); ok { + exitCode = ec.ExitCode() +} + +e.Exit(exitCode) +} + +return err +} + +type cobraCmdFunc func(cmd *cobra.Command, args []string) + +error + +/ Returns a single function that calls each argument function in sequence +/ RunE, PreRunE, PersistentPreRunE, etc. 
all have this same signature +func concatCobraCmdFuncs(fs ...cobraCmdFunc) + +cobraCmdFunc { + return func(cmd *cobra.Command, args []string) + +error { + for _, f := range fs { + if f != nil { + if err := f(cmd, args); err != nil { + return err +} + +} + +} + +return nil +} +} + +/ Bind all flags and read the config into viper +func bindFlagsLoadViper(cmd *cobra.Command, args []string) + +error { + / cmd.Flags() + +includes flags from this command and all persistent flags from the parent + if err := viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + homeDir := viper.GetString(HomeFlag) + +viper.Set(HomeFlag, homeDir) + +viper.SetConfigName("config") / name of config file (without extension) + +viper.AddConfigPath(homeDir) / search root directory + viper.AddConfigPath(filepath.Join(homeDir, "config")) / search root directory /config + + / If a config file is found, read it in. + if err := viper.ReadInConfig(); err == nil { + / stderr, so if we redirect output to json file, this doesn't appear + / fmt.Fprintln(os.Stderr, "Using config file:", viper.ConfigFileUsed()) +} + +else if _, ok := err.(viper.ConfigFileNotFoundError); !ok { + / ignore not found error, return other errors + return err +} + +return nil +} + +func validateOutput(cmd *cobra.Command, args []string) + +error { + / validate output format + output := viper.GetString(OutputFlag) + switch output { + case "text", "json": + default: + return fmt.Errorf("unsupported output format: %s", output) +} + +return nil +} +``` + +See an example of `main` function from the `simapp` application, the Cosmos SDK's application for demo purposes: + +```go expandable +package main + +import ( + + "fmt" + "os" + + clientv2helpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/simd/cmd" + + svrcmd "github.com/cosmos/cosmos-sdk/server/cmd" +) + +func main() { + rootCmd := cmd.NewRootCmd() + if err := svrcmd.Execute(rootCmd, clientv2helpers.EnvPrefix, simapp.DefaultNodeHome); err 
!= nil { + fmt.Fprintln(rootCmd.OutOrStderr(), err) + +os.Exit(1) +} +} +``` + +## `start` command + +The `start` command is defined in the `/server` folder of the Cosmos SDK. It is added to the root command of the full-node client in the [`main` function](#main-function) and called by the end-user to start their node: + +```bash +# For an example app named "app", the following command starts the full-node. +appd start + +# Using the Cosmos SDK's own simapp, the following command starts the simapp node. +simd start +``` + +As a reminder, the full-node is composed of three conceptual layers: the networking layer, the consensus layer and the application layer. The first two are generally bundled together in an entity called the consensus engine (CometBFT by default), while the third is the state-machine defined with the help of the Cosmos SDK. Currently, the Cosmos SDK uses CometBFT as the default consensus engine, meaning the start command is implemented to boot up a CometBFT node. + +The flow of the `start` command is pretty straightforward. First, it retrieves the `config` from the `context` in order to open the `db` (a [`leveldb`](https://github.com/syndtr/goleveldb) instance by default). This `db` contains the latest known state of the application (empty if the application is started for the first time).
+ +With the `db`, the `start` command creates a new instance of the application using an `appCreator` function: + +```go expandable +package server + +import ( + + "bufio" + "context" + "fmt" + "io" + "net" + "os" + "path/filepath" + "runtime/pprof" + "strings" + "time" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + cmtjson "github.com/cometbft/cometbft/libs/json" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + cmtstate "github.com/cometbft/cometbft/proto/tendermint/state" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cometbft/cometbft/proxy" + rpchttp "github.com/cometbft/cometbft/rpc/client/http" + "github.com/cometbft/cometbft/rpc/client/local" + sm "github.com/cometbft/cometbft/state" + "github.com/cometbft/cometbft/store" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = 
"transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + FlagIAVLSyncPruning = "iavl-sync-pruning" + FlagShutdownGrace = "shutdown-grace" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + flagGRPCSkipCheckHeader = "grpc.skip-check-header" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" + + / testnet keys + KeyIsTestnet = "is-testnet" + KeyNewChainID = "new-chain-ID" + KeyNewOpAddr = "new-operator-addr" + KeyNewValAddr = "new-validator-addr" + KeyUserPubKey = "user-pub-key" + KeyTriggerTestnetUpgrade = "trigger-testnet-upgrade" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example 
customize db options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / PostSetupStandalone can be used to setup extra services under the same cancellable context, + PostSetupStandalone func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) + / StartCommandHanlder can be used to customize the start command handler + StartCommandHandler func(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, inProcessConsensus bool, opts StartCmdOptions) + +error +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. 
+ +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, appCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +addStartNodeFlags(cmd, opts) + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, metrics *telemetry.Metrics, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, 
cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %w", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / create tendermint client + / assumes the rpc listen address is where tendermint has its rpc server + rpcclient, err := rpchttp.New(svrCtx.Config.RPC.ListenAddress, "/websocket") + if err != nil { + return err +} + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(rpcclient) + + / use the provided clientCtx to register the services + app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, svrCtx.Config.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetupStandalone != nil { + if err := opts.PostSetupStandalone(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. 
+ <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return svr.Stop() +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, cmtCfg.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. 
+func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. 
+func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, _, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( /nolint: staticcheck / ignore this line for this linter + config.Address, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + 
grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", config.Address) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + if isTestnet, ok := svrCtx.Viper.Get(KeyIsTestnet).(bool); ok && isTestnet { + app, 
err = testnetify(svrCtx, appCreator, db, traceWriter) + if err != nil { + return app, traceCleanupFn, err +} + +} + +else { + app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) +} + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} + +/ InPlaceTestnetCreator utilizes the provided chainID and operatorAddress as well as the local private validator key to +/ control the network represented in the data folder. This is useful to create testnets nearly identical to your +/ mainnet environment. +func InPlaceTestnetCreator(testnetAppCreator types.AppCreator) *cobra.Command { + opts := StartCmdOptions{ +} + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "in-place-testnet [newChainID] [newOperatorAddress]", + Short: "Create and start a testnet from current local state", + Long: `Create and start a testnet from current local state. +After utilizing this command the network will start. If the network is stopped, +the normal "start" command should be used. Re-using this command on state that +has already been modified by this command could result in unexpected behavior. + +Additionally, the first block may take up to one minute to be committed, depending +on how old the block is. For instance, if a snapshot was taken weeks ago and we want +to turn this into a testnet, it is possible lots of pending state needs to be committed +(expiring locks, etc.). It is recommended that you should wait for this block to be committed +before stopping the daemon. + +If the --trigger-testnet-upgrade flag is set, the upgrade handler specified by the flag will be run +on the first block of the testnet. 
+ +Regardless of whether the flag is set or not, if any new stores are introduced in the daemon being run, +those stores will be registered in order to prevent panics. Therefore, you only need to set the flag if +you want to test the upgrade handler itself. +`, + Example: "in-place-testnet localosmosis osmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj", + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + newChainID := args[0] + newOperatorAddress := args[1] + + skipConfirmation, _ := cmd.Flags().GetBool("skip-confirmation") + if !skipConfirmation { + / Confirmation prompt to prevent accidental modification of state. + reader := bufio.NewReader(os.Stdin) + +fmt.Println("This operation will modify state in your data folder and cannot be undone. Do you want to continue? (y/n)") + +text, _ := reader.ReadString('\n') + response := strings.TrimSpace(strings.ToLower(text)) + if response != "y" && response != "yes" { + fmt.Println("Operation canceled.") + +return nil +} + +} + + / Set testnet keys to be used by the application. + / This is done to prevent changes to existing start API. 
+ serverCtx.Viper.Set(KeyIsTestnet, true) + +serverCtx.Viper.Set(KeyNewChainID, newChainID) + +serverCtx.Viper.Set(KeyNewOpAddr, newOperatorAddress) + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, testnetAppCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +addStartNodeFlags(cmd, opts) + +cmd.Flags().String(KeyTriggerTestnetUpgrade, "", "If set (example: \"v21\"), triggers the v21 upgrade handler to run on the first block of the testnet") + +cmd.Flags().Bool("skip-confirmation", false, "Skip the confirmation prompt") + +return cmd +} + +/ testnetify modifies both state and blockStore, allowing the provided operator address and local validator key to control the network +/ that the state in the data folder represents. The chainID of the local genesis file is modified to match the provided chainID. +func testnetify(ctx *Context, testnetAppCreator types.AppCreator, db dbm.DB, traceWriter io.WriteCloser) (types.Application, error) { + config := ctx.Config + + newChainID, ok := ctx.Viper.Get(KeyNewChainID).(string) + if !ok { + return nil, fmt.Errorf("expected string for key %s", KeyNewChainID) +} + + / Modify app genesis chain ID and save to genesis file. + genFilePath := config.GenesisFile() + +appGen, err := genutiltypes.AppGenesisFromFile(genFilePath) + if err != nil { + return nil, err +} + +appGen.ChainID = newChainID + if err := appGen.ValidateAndComplete(); err != nil { + return nil, err +} + if err := appGen.SaveAs(genFilePath); err != nil { + return nil, err +} + + / Regenerate addrbook.json to prevent peers on old network from causing error logs. 
+ addrBookPath := filepath.Join(config.RootDir, "config", "addrbook.json") + if err := os.Remove(addrBookPath); err != nil && !os.IsNotExist(err) { + return nil, fmt.Errorf("failed to remove existing addrbook.json: %w", err) +} + emptyAddrBook := []byte("{ +}") + if err := os.WriteFile(addrBookPath, emptyAddrBook, 0o600); err != nil { + return nil, fmt.Errorf("failed to create empty addrbook.json: %w", err) +} + + / Load the comet genesis doc provider. + genDocProvider := node.DefaultGenesisDocProviderFunc(config) + + / Initialize blockStore and stateDB. + blockStoreDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "blockstore", + Config: config +}) + if err != nil { + return nil, err +} + blockStore := store.NewBlockStore(blockStoreDB) + +stateDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "state", + Config: config +}) + if err != nil { + return nil, err +} + +defer blockStore.Close() + +defer stateDB.Close() + privValidator := pvm.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile()) + +userPubKey, err := privValidator.GetPubKey() + if err != nil { + return nil, err +} + validatorAddress := userPubKey.Address() + stateStore := sm.NewStore(stateDB, sm.StoreOptions{ + DiscardABCIResponses: config.Storage.DiscardABCIResponses, +}) + +state, genDoc, err := node.LoadStateFromDBOrGenesisDocProvider(stateDB, genDocProvider) + if err != nil { + return nil, err +} + +ctx.Viper.Set(KeyNewValAddr, validatorAddress) + +ctx.Viper.Set(KeyUserPubKey, userPubKey) + testnetApp := testnetAppCreator(ctx.Logger, db, traceWriter, ctx.Viper) + + / We need to create a temporary proxyApp to get the initial state of the application. + / Depending on how the node was stopped, the application height can differ from the blockStore height. + / This height difference changes how we go about modifying the state. 
+ cmtApp := NewCometABCIWrapper(testnetApp) + _, context := getCtx(ctx, true) + clientCreator := proxy.NewLocalClientCreator(cmtApp) + metrics := node.DefaultMetricsProvider(cmtcfg.DefaultConfig().Instrumentation) + _, _, _, _, proxyMetrics, _, _ := metrics(genDoc.ChainID) + proxyApp := proxy.NewAppConns(clientCreator, proxyMetrics) + if err := proxyApp.Start(); err != nil { + return nil, fmt.Errorf("error starting proxy app connections: %w", err) +} + +res, err := proxyApp.Query().Info(context, proxy.RequestInfo) + if err != nil { + return nil, fmt.Errorf("error calling Info: %w", err) +} + +err = proxyApp.Stop() + if err != nil { + return nil, err +} + appHash := res.LastBlockAppHash + appHeight := res.LastBlockHeight + + var block *cmttypes.Block + switch { + case appHeight == blockStore.Height(): + block = blockStore.LoadBlock(blockStore.Height()) + / If the state's last blockstore height does not match the app and blockstore height, we likely stopped with the halt height flag. + if state.LastBlockHeight != appHeight { + state.LastBlockHeight = appHeight + block.AppHash = appHash + state.AppHash = appHash +} + +else { + / Node was likely stopped via SIGTERM, delete the next block's seen commit + err := blockStoreDB.Delete(fmt.Appendf(nil, "SC:%v", blockStore.Height()+1)) + if err != nil { + return nil, err +} + +} + case blockStore.Height() > state.LastBlockHeight: + / This state usually occurs when we gracefully stop the node. 
+ err = blockStore.DeleteLatestBlock()
 if err != nil {
 return nil, err
}

+block = blockStore.LoadBlock(blockStore.Height())

+default:
 / If there is any other state, we just load the block
 block = blockStore.LoadBlock(blockStore.Height())
}

+block.ChainID = newChainID
 state.ChainID = newChainID

 block.LastBlockID = state.LastBlockID
 block.LastCommit.BlockID = state.LastBlockID

 / Create a vote from our validator
 vote := cmttypes.Vote{
 Type: cmtproto.PrecommitType,
 Height: state.LastBlockHeight,
 Round: 0,
 BlockID: state.LastBlockID,
 Timestamp: time.Now(),
 ValidatorAddress: validatorAddress,
 ValidatorIndex: 0,
 Signature: []byte{
},
}

 / Sign the vote, and copy the proto changes from the act of signing to the vote itself
 voteProto := vote.ToProto()

+err = privValidator.SignVote(newChainID, voteProto)
 if err != nil {
 return nil, err
}

+vote.Signature = voteProto.Signature
 vote.Timestamp = voteProto.Timestamp

 / Modify the block's lastCommit to be signed only by our validator
 block.LastCommit.Signatures[0].ValidatorAddress = validatorAddress
 block.LastCommit.Signatures[0].Signature = vote.Signature
 block.LastCommit.Signatures = []cmttypes.CommitSig{
 block.LastCommit.Signatures[0]
}

 / Load the seenCommit of the lastBlockHeight and modify it to be signed from our validator
 seenCommit := blockStore.LoadSeenCommit(state.LastBlockHeight)

+seenCommit.BlockID = state.LastBlockID
 seenCommit.Round = vote.Round
 seenCommit.Signatures[0].Signature = vote.Signature
 seenCommit.Signatures[0].ValidatorAddress = validatorAddress
 seenCommit.Signatures[0].Timestamp = vote.Timestamp
 seenCommit.Signatures = []cmttypes.CommitSig{
 seenCommit.Signatures[0]
}

+err = blockStore.SaveSeenCommit(state.LastBlockHeight, seenCommit)
 if err != nil {
 return nil, err
}

 / Create ValidatorSet struct containing just our validator.
+ newVal := &cmttypes.Validator{
 Address: validatorAddress,
 PubKey: userPubKey,
 VotingPower: 900000000000000,
}
 newValSet := &cmttypes.ValidatorSet{
 Validators: []*cmttypes.Validator{
 newVal
},
 Proposer: newVal,
}

 / Replace all valSets in state to be the valSet with just our validator.
 state.Validators = newValSet
 state.LastValidators = newValSet
 state.NextValidators = newValSet
 state.LastHeightValidatorsChanged = blockStore.Height()

+err = stateStore.Save(state)
 if err != nil {
 return nil, err
}

 / Create a ValidatorsInfo struct to store in stateDB.
 valSet, err := state.Validators.ToProto()
 if err != nil {
 return nil, err
}
 valInfo := &cmtstate.ValidatorsInfo{
 ValidatorSet: valSet,
 LastHeightChanged: state.LastBlockHeight,
}

+buf, err := valInfo.Marshal()
 if err != nil {
 return nil, err
}

 / Modify Validators stateDB entry.
 err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()), buf)
 if err != nil {
 return nil, err
}

 / Modify LastValidators stateDB entry.
 err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()-1), buf)
 if err != nil {
 return nil, err
}

 / Modify NextValidators stateDB entry.
 err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()+1), buf)
 if err != nil {
 return nil, err
}

 / Since we modified the chainID, we set the new genesisDoc in the stateDB.
 b, err := cmtjson.Marshal(genDoc)
 if err != nil {
 return nil, err
}
 if err := stateDB.SetSync([]byte("genesisDoc"), b); err != nil {
 return nil, err
}

+return testnetApp, err
}

+/ addStartNodeFlags should be added to any CLI commands that start the network.
+func addStartNodeFlags(cmd *cobra.Command, opts StartCmdOptions) { + cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://127.0.0.1:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if 
the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. 
(Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + +cmd.Flags().Duration(FlagShutdownGrace, 0*time.Second, "On Shutdown, duration to wait for resource clean up") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} +} +``` + +Note that an `appCreator` is a function that fulfills the `AppCreator` signature: + +```go expandable +package types + +import ( + + "encoding/json" + "io" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. 
It is recommended
 / to either use the cast package or perform manual conversion for safety.
 AppOptions interface {
 Get(string)

+any
}

 / Application defines an application interface that wraps abci.Application.
 / The interface defines the necessary contracts to be implemented in order
 / to fully bootstrap and start an application.
 Application interface {
 ABCI

 RegisterAPIRoutes(*api.Server, config.APIConfig)

 / RegisterGRPCServerWithSkipCheckHeader registers gRPC services directly with the gRPC
 / server and bypasses the check header flag.
 RegisterGRPCServerWithSkipCheckHeader(grpc.Server, bool)

 / RegisterTxService registers the gRPC Query service for tx (such as tx
 / simulation, fetching txs by hash...).
 RegisterTxService(client.Context)

 / RegisterTendermintService registers the gRPC Query service for CometBFT queries.
 RegisterTendermintService(client.Context)

 / RegisterNodeService registers the node gRPC Query service.
 RegisterNodeService(client.Context, config.Config)

 / CommitMultiStore returns the multistore instance
 CommitMultiStore()

+storetypes.CommitMultiStore

 / Return the snapshot manager
 SnapshotManager() *snapshots.Manager

 / Close is called in start cmd to gracefully clean up resources.
 / Must be safe to be called multiple times.
 Close()

+error
}

 / AppCreator is a function that allows us to lazily initialize an
 / application using various configurations.
 AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions)

+Application

 / ModuleInitFlags takes a start command and adds module-specific init flags.
 ModuleInitFlags func(startCmd *cobra.Command)

 / ExportedApp represents an exported app state, along with
 / validators, consensus params and latest app height.
 ExportedApp struct {
 / AppState is the application state as JSON.
 AppState json.RawMessage
 / Validators is the exported validator set.
+ Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. + AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) +``` + +In practice, the [constructor of the application](/docs/sdk/next/documentation/application-framework/app-anatomy#constructor-function) is passed as the `appCreator`. + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L294-L308 +``` + +Then, the instance of `app` is used to instantiate a new CometBFT node: + +```go expandable +package server + +import ( + + "bufio" + "context" + "fmt" + "io" + "net" + "os" + "path/filepath" + "runtime/pprof" + "strings" + "time" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + cmtjson "github.com/cometbft/cometbft/libs/json" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + cmtstate "github.com/cometbft/cometbft/proto/tendermint/state" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cometbft/cometbft/proxy" + rpchttp "github.com/cometbft/cometbft/rpc/client/http" + "github.com/cometbft/cometbft/rpc/client/local" + sm "github.com/cometbft/cometbft/state" + "github.com/cometbft/cometbft/store" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" 
+ "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + FlagIAVLSyncPruning = "iavl-sync-pruning" + FlagShutdownGrace = "shutdown-grace" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + 
FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + flagGRPCSkipCheckHeader = "grpc.skip-check-header" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" + + / testnet keys + KeyIsTestnet = "is-testnet" + KeyNewChainID = "new-chain-ID" + KeyNewOpAddr = "new-operator-addr" + KeyNewValAddr = "new-validator-addr" + KeyUserPubKey = "user-pub-key" + KeyTriggerTestnetUpgrade = "trigger-testnet-upgrade" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / PostSetupStandalone can be used to setup extra services under the same cancellable context, + PostSetupStandalone func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) + / StartCommandHanlder can be used to customize the start command handler + StartCommandHandler func(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, inProcessConsensus bool, opts StartCmdOptions) + +error +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. 
+func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. + +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. 
+ +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. +`, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, appCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +addStartNodeFlags(cmd, opts) + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx *Context, svrCfg 
serverconfig.Config, clientCtx client.Context, app types.Application, metrics *telemetry.Metrics, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %w", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / create tendermint client + / assumes the rpc listen address is where tendermint has its rpc server + rpcclient, err := rpchttp.New(svrCtx.Config.RPC.ListenAddress, "/websocket") + if err != nil { + return err +} + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(rpcclient) + + / use the provided clientCtx to register the services + app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, svrCtx.Config.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetupStandalone != nil { + if err := opts.PostSetupStandalone(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so 
we can gracefully stop the ABCI server. + <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return svr.Stop() +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, cmtCfg.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. 
+func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. 
+func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, _, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( /nolint: staticcheck / ignore this line for this linter + config.Address, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + 
grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", config.Address) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + if isTestnet, ok := svrCtx.Viper.Get(KeyIsTestnet).(bool); ok && isTestnet { + app, 
err = testnetify(svrCtx, appCreator, db, traceWriter) + if err != nil { + return app, traceCleanupFn, err +} + +} + +else { + app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) +} + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} + +/ InPlaceTestnetCreator utilizes the provided chainID and operatorAddress as well as the local private validator key to +/ control the network represented in the data folder. This is useful to create testnets nearly identical to your +/ mainnet environment. +func InPlaceTestnetCreator(testnetAppCreator types.AppCreator) *cobra.Command { + opts := StartCmdOptions{ +} + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "in-place-testnet [newChainID] [newOperatorAddress]", + Short: "Create and start a testnet from current local state", + Long: `Create and start a testnet from current local state. +After utilizing this command the network will start. If the network is stopped, +the normal "start" command should be used. Re-using this command on state that +has already been modified by this command could result in unexpected behavior. + +Additionally, the first block may take up to one minute to be committed, depending +on how old the block is. For instance, if a snapshot was taken weeks ago and we want +to turn this into a testnet, it is possible lots of pending state needs to be committed +(expiring locks, etc.). It is recommended that you should wait for this block to be committed +before stopping the daemon. + +If the --trigger-testnet-upgrade flag is set, the upgrade handler specified by the flag will be run +on the first block of the testnet. 
+ +Regardless of whether the flag is set or not, if any new stores are introduced in the daemon being run, +those stores will be registered in order to prevent panics. Therefore, you only need to set the flag if +you want to test the upgrade handler itself. +`, + Example: "in-place-testnet localosmosis osmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj", + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + newChainID := args[0] + newOperatorAddress := args[1] + + skipConfirmation, _ := cmd.Flags().GetBool("skip-confirmation") + if !skipConfirmation { + / Confirmation prompt to prevent accidental modification of state. + reader := bufio.NewReader(os.Stdin) + +fmt.Println("This operation will modify state in your data folder and cannot be undone. Do you want to continue? (y/n)") + +text, _ := reader.ReadString('\n') + response := strings.TrimSpace(strings.ToLower(text)) + if response != "y" && response != "yes" { + fmt.Println("Operation canceled.") + +return nil +} + +} + + / Set testnet keys to be used by the application. + / This is done to prevent changes to existing start API. 
+ serverCtx.Viper.Set(KeyIsTestnet, true) + +serverCtx.Viper.Set(KeyNewChainID, newChainID) + +serverCtx.Viper.Set(KeyNewOpAddr, newOperatorAddress) + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, testnetAppCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +addStartNodeFlags(cmd, opts) + +cmd.Flags().String(KeyTriggerTestnetUpgrade, "", "If set (example: \"v21\"), triggers the v21 upgrade handler to run on the first block of the testnet") + +cmd.Flags().Bool("skip-confirmation", false, "Skip the confirmation prompt") + +return cmd +} + +/ testnetify modifies both state and blockStore, allowing the provided operator address and local validator key to control the network +/ that the state in the data folder represents. The chainID of the local genesis file is modified to match the provided chainID. +func testnetify(ctx *Context, testnetAppCreator types.AppCreator, db dbm.DB, traceWriter io.WriteCloser) (types.Application, error) { + config := ctx.Config + + newChainID, ok := ctx.Viper.Get(KeyNewChainID).(string) + if !ok { + return nil, fmt.Errorf("expected string for key %s", KeyNewChainID) +} + + / Modify app genesis chain ID and save to genesis file. + genFilePath := config.GenesisFile() + +appGen, err := genutiltypes.AppGenesisFromFile(genFilePath) + if err != nil { + return nil, err +} + +appGen.ChainID = newChainID + if err := appGen.ValidateAndComplete(); err != nil { + return nil, err +} + if err := appGen.SaveAs(genFilePath); err != nil { + return nil, err +} + + / Regenerate addrbook.json to prevent peers on old network from causing error logs. 
+ addrBookPath := filepath.Join(config.RootDir, "config", "addrbook.json") + if err := os.Remove(addrBookPath); err != nil && !os.IsNotExist(err) { + return nil, fmt.Errorf("failed to remove existing addrbook.json: %w", err) +} + emptyAddrBook := []byte("{ +}") + if err := os.WriteFile(addrBookPath, emptyAddrBook, 0o600); err != nil { + return nil, fmt.Errorf("failed to create empty addrbook.json: %w", err) +} + + / Load the comet genesis doc provider. + genDocProvider := node.DefaultGenesisDocProviderFunc(config) + + / Initialize blockStore and stateDB. + blockStoreDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "blockstore", + Config: config +}) + if err != nil { + return nil, err +} + blockStore := store.NewBlockStore(blockStoreDB) + +stateDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "state", + Config: config +}) + if err != nil { + return nil, err +} + +defer blockStore.Close() + +defer stateDB.Close() + privValidator := pvm.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile()) + +userPubKey, err := privValidator.GetPubKey() + if err != nil { + return nil, err +} + validatorAddress := userPubKey.Address() + stateStore := sm.NewStore(stateDB, sm.StoreOptions{ + DiscardABCIResponses: config.Storage.DiscardABCIResponses, +}) + +state, genDoc, err := node.LoadStateFromDBOrGenesisDocProvider(stateDB, genDocProvider) + if err != nil { + return nil, err +} + +ctx.Viper.Set(KeyNewValAddr, validatorAddress) + +ctx.Viper.Set(KeyUserPubKey, userPubKey) + testnetApp := testnetAppCreator(ctx.Logger, db, traceWriter, ctx.Viper) + + / We need to create a temporary proxyApp to get the initial state of the application. + / Depending on how the node was stopped, the application height can differ from the blockStore height. + / This height difference changes how we go about modifying the state. 
+ cmtApp := NewCometABCIWrapper(testnetApp) + _, context := getCtx(ctx, true) + clientCreator := proxy.NewLocalClientCreator(cmtApp) + metrics := node.DefaultMetricsProvider(cmtcfg.DefaultConfig().Instrumentation) + _, _, _, _, proxyMetrics, _, _ := metrics(genDoc.ChainID) + proxyApp := proxy.NewAppConns(clientCreator, proxyMetrics) + if err := proxyApp.Start(); err != nil { + return nil, fmt.Errorf("error starting proxy app connections: %w", err) +} + +res, err := proxyApp.Query().Info(context, proxy.RequestInfo) + if err != nil { + return nil, fmt.Errorf("error calling Info: %w", err) +} + +err = proxyApp.Stop() + if err != nil { + return nil, err +} + appHash := res.LastBlockAppHash + appHeight := res.LastBlockHeight + + var block *cmttypes.Block + switch { + case appHeight == blockStore.Height(): + block = blockStore.LoadBlock(blockStore.Height()) + / If the state's last blockstore height does not match the app and blockstore height, we likely stopped with the halt height flag. + if state.LastBlockHeight != appHeight { + state.LastBlockHeight = appHeight + block.AppHash = appHash + state.AppHash = appHash +} + +else { + / Node was likely stopped via SIGTERM, delete the next block's seen commit + err := blockStoreDB.Delete(fmt.Appendf(nil, "SC:%v", blockStore.Height()+1)) + if err != nil { + return nil, err +} + +} + case blockStore.Height() > state.LastBlockHeight: + / This state usually occurs when we gracefully stop the node. 
+ err = blockStore.DeleteLatestBlock() + if err != nil { + return nil, err +} + +block = blockStore.LoadBlock(blockStore.Height()) + +default: + / If there is any other state, we just load the block + block = blockStore.LoadBlock(blockStore.Height()) +} + +block.ChainID = newChainID + state.ChainID = newChainID + + block.LastBlockID = state.LastBlockID + block.LastCommit.BlockID = state.LastBlockID + + / Create a vote from our validator + vote := cmttypes.Vote{ + Type: cmtproto.PrecommitType, + Height: state.LastBlockHeight, + Round: 0, + BlockID: state.LastBlockID, + Timestamp: time.Now(), + ValidatorAddress: validatorAddress, + ValidatorIndex: 0, + Signature: []byte{ +}, +} + + / Sign the vote, and copy the proto changes from the act of signing to the vote itself + voteProto := vote.ToProto() + +err = privValidator.SignVote(newChainID, voteProto) + if err != nil { + return nil, err +} + +vote.Signature = voteProto.Signature + vote.Timestamp = voteProto.Timestamp + + / Modify the block's lastCommit to be signed only by our validator + block.LastCommit.Signatures[0].ValidatorAddress = validatorAddress + block.LastCommit.Signatures[0].Signature = vote.Signature + block.LastCommit.Signatures = []cmttypes.CommitSig{ + block.LastCommit.Signatures[0] +} + + / Load the seenCommit of the lastBlockHeight and modify it to be signed from our validator + seenCommit := blockStore.LoadSeenCommit(state.LastBlockHeight) + +seenCommit.BlockID = state.LastBlockID + seenCommit.Round = vote.Round + seenCommit.Signatures[0].Signature = vote.Signature + seenCommit.Signatures[0].ValidatorAddress = validatorAddress + seenCommit.Signatures[0].Timestamp = vote.Timestamp + seenCommit.Signatures = []cmttypes.CommitSig{ + seenCommit.Signatures[0] +} + +err = blockStore.SaveSeenCommit(state.LastBlockHeight, seenCommit) + if err != nil { + return nil, err +} + + / Create ValidatorSet struct containing just our valdiator. 
+ newVal := &cmttypes.Validator{ + Address: validatorAddress, + PubKey: userPubKey, + VotingPower: 900000000000000, +} + newValSet := &cmttypes.ValidatorSet{ + Validators: []*cmttypes.Validator{ + newVal +}, + Proposer: newVal, +} + + / Replace all valSets in state to be the valSet with just our validator. + state.Validators = newValSet + state.LastValidators = newValSet + state.NextValidators = newValSet + state.LastHeightValidatorsChanged = blockStore.Height() + +err = stateStore.Save(state) + if err != nil { + return nil, err +} + + / Create a ValidatorsInfo struct to store in stateDB. + valSet, err := state.Validators.ToProto() + if err != nil { + return nil, err +} + valInfo := &cmtstate.ValidatorsInfo{ + ValidatorSet: valSet, + LastHeightChanged: state.LastBlockHeight, +} + +buf, err := valInfo.Marshal() + if err != nil { + return nil, err +} + + / Modfiy Validators stateDB entry. + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()), buf) + if err != nil { + return nil, err +} + + / Modify LastValidators stateDB entry. + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()-1), buf) + if err != nil { + return nil, err +} + + / Modify NextValidators stateDB entry. + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()+1), buf) + if err != nil { + return nil, err +} + + / Since we modified the chainID, we set the new genesisDoc in the stateDB. + b, err := cmtjson.Marshal(genDoc) + if err != nil { + return nil, err +} + if err := stateDB.SetSync([]byte("genesisDoc"), b); err != nil { + return nil, err +} + +return testnetApp, err +} + +/ addStartNodeFlags should be added to any CLI commands that start the network. 
+func addStartNodeFlags(cmd *cobra.Command, opts StartCmdOptions) { + cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://127.0.0.1:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if 
the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. 
(Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + +cmd.Flags().Duration(FlagShutdownGrace, 0*time.Second, "On Shutdown, duration to wait for resource clean up") + + / support old flag names for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} +} +``` + +The CometBFT node can be created with `app` because the latter satisfies the [`abci.Application` interface](https://github.com/cometbft/cometbft/blob/v0.37.0/abci/types/application.go#L9-L35) (given that `app` extends [`baseapp`](/docs/sdk/next/documentation/application-framework/baseapp)). As part of the `node.New` method, CometBFT makes sure that the height of the application (i.e. the number of blocks since genesis) is equal to the height of the CometBFT node. The difference between these two heights should always be negative or zero. If it is strictly negative, `node.New` will replay blocks until the height of the application reaches the height of the CometBFT node. Finally, if the height of the application is `0`, the CometBFT node will call [`InitChain`](/docs/sdk/next/documentation/application-framework/baseapp#initchain) on the application to initialize the state from the genesis file. 
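The startup handshake described above can be sketched as a small decision function. This is only an illustration of the rule, not the actual `node.New` implementation; the function name `syncAction` and its string labels are invented for this sketch:

```go
package main

import "fmt"

// syncAction illustrates the decision taken when the application height is
// compared with the CometBFT node height at startup. The real logic lives
// inside CometBFT's node.New; the labels here are purely illustrative.
func syncAction(appHeight, nodeHeight int64) string {
	switch {
	case appHeight == 0:
		return "InitChain" // fresh state: initialize from the genesis file
	case appHeight < nodeHeight:
		return "replay" // app is behind: replay blocks until heights match
	case appHeight == nodeHeight:
		return "in-sync" // nothing to do
	default:
		return "invalid" // the app must never be ahead of the node
	}
}

func main() {
	fmt.Println(syncAction(0, 0))   // InitChain
	fmt.Println(syncAction(5, 10))  // replay
	fmt.Println(syncAction(10, 10)) // in-sync
}
```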
+ +Once the CometBFT node is instantiated and in sync with the application, the node can be started: + +```go expandable +package server + +import ( + + "bufio" + "context" + "fmt" + "io" + "net" + "os" + "path/filepath" + "runtime/pprof" + "strings" + "time" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + cmtjson "github.com/cometbft/cometbft/libs/json" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + cmtstate "github.com/cometbft/cometbft/proto/tendermint/state" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cometbft/cometbft/proxy" + rpchttp "github.com/cometbft/cometbft/rpc/client/http" + "github.com/cometbft/cometbft/rpc/client/local" + sm "github.com/cometbft/cometbft/state" + "github.com/cometbft/cometbft/store" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + 
flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + FlagIAVLSyncPruning = "iavl-sync-pruning" + FlagShutdownGrace = "shutdown-grace" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + flagGRPCSkipCheckHeader = "grpc.skip-check-header" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" + + / testnet keys + KeyIsTestnet = "is-testnet" + KeyNewChainID = "new-chain-ID" + KeyNewOpAddr = "new-operator-addr" + KeyNewValAddr = "new-validator-addr" + KeyUserPubKey = "user-pub-key" + KeyTriggerTestnetUpgrade = "trigger-testnet-upgrade" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db 
options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / PostSetupStandalone can be used to setup extra services under the same cancellable context, + PostSetupStandalone func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags adds custom flags to start cmd + AddFlags func(cmd *cobra.Command) + / StartCommandHandler can be used to customize the start command handler + StartCommandHandler func(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, inProcessConsensus bool, opts StartCmdOptions) + +error +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. 
+ +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, appCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +addStartNodeFlags(cmd, opts) + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, metrics *telemetry.Metrics, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, 
cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %w", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / create tendermint client + / assumes the rpc listen address is where tendermint has its rpc server + rpcclient, err := rpchttp.New(svrCtx.Config.RPC.ListenAddress, "/websocket") + if err != nil { + return err +} + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(rpcclient) + + / use the provided clientCtx to register the services + app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, svrCtx.Config.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetupStandalone != nil { + if err := opts.PostSetupStandalone(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. 
+ <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return svr.Stop() +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, cmtCfg.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. 
+func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. 
+func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, _, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( /nolint: staticcheck / ignore this line for this linter + config.Address, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + 
grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", config.Address) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + if isTestnet, ok := svrCtx.Viper.Get(KeyIsTestnet).(bool); ok && isTestnet { + app, 
err = testnetify(svrCtx, appCreator, db, traceWriter) + if err != nil { + return app, traceCleanupFn, err +} + +} + +else { + app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) +} + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} + +/ InPlaceTestnetCreator utilizes the provided chainID and operatorAddress as well as the local private validator key to +/ control the network represented in the data folder. This is useful to create testnets nearly identical to your +/ mainnet environment. +func InPlaceTestnetCreator(testnetAppCreator types.AppCreator) *cobra.Command { + opts := StartCmdOptions{ +} + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "in-place-testnet [newChainID] [newOperatorAddress]", + Short: "Create and start a testnet from current local state", + Long: `Create and start a testnet from current local state. +After utilizing this command the network will start. If the network is stopped, +the normal "start" command should be used. Re-using this command on state that +has already been modified by this command could result in unexpected behavior. + +Additionally, the first block may take up to one minute to be committed, depending +on how old the block is. For instance, if a snapshot was taken weeks ago and we want +to turn this into a testnet, it is possible lots of pending state needs to be committed +(expiring locks, etc.). It is recommended that you should wait for this block to be committed +before stopping the daemon. + +If the --trigger-testnet-upgrade flag is set, the upgrade handler specified by the flag will be run +on the first block of the testnet. 
+ +Regardless of whether the flag is set or not, if any new stores are introduced in the daemon being run, +those stores will be registered in order to prevent panics. Therefore, you only need to set the flag if +you want to test the upgrade handler itself. +`, + Example: "in-place-testnet localosmosis osmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj", + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + newChainID := args[0] + newOperatorAddress := args[1] + + skipConfirmation, _ := cmd.Flags().GetBool("skip-confirmation") + if !skipConfirmation { + / Confirmation prompt to prevent accidental modification of state. + reader := bufio.NewReader(os.Stdin) + +fmt.Println("This operation will modify state in your data folder and cannot be undone. Do you want to continue? (y/n)") + +text, _ := reader.ReadString('\n') + response := strings.TrimSpace(strings.ToLower(text)) + if response != "y" && response != "yes" { + fmt.Println("Operation canceled.") + +return nil +} + +} + + / Set testnet keys to be used by the application. + / This is done to prevent changes to existing start API. 
+ serverCtx.Viper.Set(KeyIsTestnet, true) + +serverCtx.Viper.Set(KeyNewChainID, newChainID) + +serverCtx.Viper.Set(KeyNewOpAddr, newOperatorAddress) + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, testnetAppCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +addStartNodeFlags(cmd, opts) + +cmd.Flags().String(KeyTriggerTestnetUpgrade, "", "If set (example: \"v21\"), triggers the v21 upgrade handler to run on the first block of the testnet") + +cmd.Flags().Bool("skip-confirmation", false, "Skip the confirmation prompt") + +return cmd +} + +/ testnetify modifies both state and blockStore, allowing the provided operator address and local validator key to control the network +/ that the state in the data folder represents. The chainID of the local genesis file is modified to match the provided chainID. +func testnetify(ctx *Context, testnetAppCreator types.AppCreator, db dbm.DB, traceWriter io.WriteCloser) (types.Application, error) { + config := ctx.Config + + newChainID, ok := ctx.Viper.Get(KeyNewChainID).(string) + if !ok { + return nil, fmt.Errorf("expected string for key %s", KeyNewChainID) +} + + / Modify app genesis chain ID and save to genesis file. + genFilePath := config.GenesisFile() + +appGen, err := genutiltypes.AppGenesisFromFile(genFilePath) + if err != nil { + return nil, err +} + +appGen.ChainID = newChainID + if err := appGen.ValidateAndComplete(); err != nil { + return nil, err +} + if err := appGen.SaveAs(genFilePath); err != nil { + return nil, err +} + + / Regenerate addrbook.json to prevent peers on old network from causing error logs. 
+ addrBookPath := filepath.Join(config.RootDir, "config", "addrbook.json") + if err := os.Remove(addrBookPath); err != nil && !os.IsNotExist(err) { + return nil, fmt.Errorf("failed to remove existing addrbook.json: %w", err) +} + emptyAddrBook := []byte("{ +}") + if err := os.WriteFile(addrBookPath, emptyAddrBook, 0o600); err != nil { + return nil, fmt.Errorf("failed to create empty addrbook.json: %w", err) +} + + / Load the comet genesis doc provider. + genDocProvider := node.DefaultGenesisDocProviderFunc(config) + + / Initialize blockStore and stateDB. + blockStoreDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "blockstore", + Config: config +}) + if err != nil { + return nil, err +} + blockStore := store.NewBlockStore(blockStoreDB) + +stateDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "state", + Config: config +}) + if err != nil { + return nil, err +} + +defer blockStore.Close() + +defer stateDB.Close() + privValidator := pvm.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile()) + +userPubKey, err := privValidator.GetPubKey() + if err != nil { + return nil, err +} + validatorAddress := userPubKey.Address() + stateStore := sm.NewStore(stateDB, sm.StoreOptions{ + DiscardABCIResponses: config.Storage.DiscardABCIResponses, +}) + +state, genDoc, err := node.LoadStateFromDBOrGenesisDocProvider(stateDB, genDocProvider) + if err != nil { + return nil, err +} + +ctx.Viper.Set(KeyNewValAddr, validatorAddress) + +ctx.Viper.Set(KeyUserPubKey, userPubKey) + testnetApp := testnetAppCreator(ctx.Logger, db, traceWriter, ctx.Viper) + + / We need to create a temporary proxyApp to get the initial state of the application. + / Depending on how the node was stopped, the application height can differ from the blockStore height. + / This height difference changes how we go about modifying the state. 
+ cmtApp := NewCometABCIWrapper(testnetApp) + _, context := getCtx(ctx, true) + clientCreator := proxy.NewLocalClientCreator(cmtApp) + metrics := node.DefaultMetricsProvider(cmtcfg.DefaultConfig().Instrumentation) + _, _, _, _, proxyMetrics, _, _ := metrics(genDoc.ChainID) + proxyApp := proxy.NewAppConns(clientCreator, proxyMetrics) + if err := proxyApp.Start(); err != nil { + return nil, fmt.Errorf("error starting proxy app connections: %w", err) +} + +res, err := proxyApp.Query().Info(context, proxy.RequestInfo) + if err != nil { + return nil, fmt.Errorf("error calling Info: %w", err) +} + +err = proxyApp.Stop() + if err != nil { + return nil, err +} + appHash := res.LastBlockAppHash + appHeight := res.LastBlockHeight + + var block *cmttypes.Block + switch { + case appHeight == blockStore.Height(): + block = blockStore.LoadBlock(blockStore.Height()) + / If the state's last blockstore height does not match the app and blockstore height, we likely stopped with the halt height flag. + if state.LastBlockHeight != appHeight { + state.LastBlockHeight = appHeight + block.AppHash = appHash + state.AppHash = appHash +} + +else { + / Node was likely stopped via SIGTERM, delete the next block's seen commit + err := blockStoreDB.Delete(fmt.Appendf(nil, "SC:%v", blockStore.Height()+1)) + if err != nil { + return nil, err +} + +} + case blockStore.Height() > state.LastBlockHeight: + / This state usually occurs when we gracefully stop the node. 
+ err = blockStore.DeleteLatestBlock() + if err != nil { + return nil, err +} + +block = blockStore.LoadBlock(blockStore.Height()) + +default: + / If there is any other state, we just load the block + block = blockStore.LoadBlock(blockStore.Height()) +} + +block.ChainID = newChainID + state.ChainID = newChainID + + block.LastBlockID = state.LastBlockID + block.LastCommit.BlockID = state.LastBlockID + + / Create a vote from our validator + vote := cmttypes.Vote{ + Type: cmtproto.PrecommitType, + Height: state.LastBlockHeight, + Round: 0, + BlockID: state.LastBlockID, + Timestamp: time.Now(), + ValidatorAddress: validatorAddress, + ValidatorIndex: 0, + Signature: []byte{ +}, +} + + / Sign the vote, and copy the proto changes from the act of signing to the vote itself + voteProto := vote.ToProto() + +err = privValidator.SignVote(newChainID, voteProto) + if err != nil { + return nil, err +} + +vote.Signature = voteProto.Signature + vote.Timestamp = voteProto.Timestamp + + / Modify the block's lastCommit to be signed only by our validator + block.LastCommit.Signatures[0].ValidatorAddress = validatorAddress + block.LastCommit.Signatures[0].Signature = vote.Signature + block.LastCommit.Signatures = []cmttypes.CommitSig{ + block.LastCommit.Signatures[0] +} + + / Load the seenCommit of the lastBlockHeight and modify it to be signed from our validator + seenCommit := blockStore.LoadSeenCommit(state.LastBlockHeight) + +seenCommit.BlockID = state.LastBlockID + seenCommit.Round = vote.Round + seenCommit.Signatures[0].Signature = vote.Signature + seenCommit.Signatures[0].ValidatorAddress = validatorAddress + seenCommit.Signatures[0].Timestamp = vote.Timestamp + seenCommit.Signatures = []cmttypes.CommitSig{ + seenCommit.Signatures[0] +} + +err = blockStore.SaveSeenCommit(state.LastBlockHeight, seenCommit) + if err != nil { + return nil, err +} + + / Create a ValidatorSet struct containing just our validator. 
+ newVal := &cmttypes.Validator{
+ Address: validatorAddress,
+ PubKey: userPubKey,
+ VotingPower: 900000000000000,
+}
+ newValSet := &cmttypes.ValidatorSet{
+ Validators: []*cmttypes.Validator{
+ newVal
+},
+ Proposer: newVal,
+}
+
+ / Replace all valSets in state to be the valSet with just our validator.
+ state.Validators = newValSet
+ state.LastValidators = newValSet
+ state.NextValidators = newValSet
+ state.LastHeightValidatorsChanged = blockStore.Height()
+
+err = stateStore.Save(state)
+ if err != nil {
+ return nil, err
+}
+
+ / Create a ValidatorsInfo struct to store in stateDB.
+ valSet, err := state.Validators.ToProto()
+ if err != nil {
+ return nil, err
+}
+ valInfo := &cmtstate.ValidatorsInfo{
+ ValidatorSet: valSet,
+ LastHeightChanged: state.LastBlockHeight,
+}
+
+buf, err := valInfo.Marshal()
+ if err != nil {
+ return nil, err
+}
+
+ / Modify Validators stateDB entry.
+ err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()), buf)
+ if err != nil {
+ return nil, err
+}
+
+ / Modify LastValidators stateDB entry.
+ err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()-1), buf)
+ if err != nil {
+ return nil, err
+}
+
+ / Modify NextValidators stateDB entry.
+ err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()+1), buf)
+ if err != nil {
+ return nil, err
+}
+
+ / Since we modified the chainID, we set the new genesisDoc in the stateDB.
+ b, err := cmtjson.Marshal(genDoc)
+ if err != nil {
+ return nil, err
+}
+ if err := stateDB.SetSync([]byte("genesisDoc"), b); err != nil {
+ return nil, err
+}
+
+return testnetApp, err
+}
+
+/ addStartNodeFlags should be added to any CLI commands that start the network. 
+func addStartNodeFlags(cmd *cobra.Command, opts StartCmdOptions) { + cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://127.0.0.1:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if 
the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. 
(Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + +cmd.Flags().Duration(FlagShutdownGrace, 0*time.Second, "On Shutdown, duration to wait for resource clean up") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} +} +``` + +Upon starting, the node will bootstrap its RPC and P2P server and start dialing peers. During handshake with its peers, if the node realizes they are ahead, it will query all the blocks sequentially in order to catch up. Then, it will wait for new block proposals and block signatures from validators in order to make progress. + +## Other commands + +To discover how to concretely run a node and interact with it, please refer to our [Running a Node, API and CLI](/docs/sdk/v0.53/documentation/operations/run-node) guide. diff --git a/docs/sdk/next/documentation/operations/rosetta.mdx b/docs/sdk/next/documentation/operations/rosetta.mdx new file mode 100644 index 00000000..74d69eb6 --- /dev/null +++ b/docs/sdk/next/documentation/operations/rosetta.mdx @@ -0,0 +1,155 @@ +--- +title: Rosetta +description: >- + The rosetta project implements Coinbase's Rosetta API. This document provides + instructions on how to use the Rosetta API integration. For information about + the motivation and design choices, refer to ADR 035. 
+---
+
+The `rosetta` project implements Coinbase's [Rosetta API](https://www.rosetta-api.org). This document provides instructions on how to use the Rosetta API integration. For information about the motivation and design choices, refer to [ADR 035](https://docs.cosmos.network/main/architecture/adr-035-rosetta-api-support).
+
+## Installing Rosetta
+
+The Rosetta API server is a stand-alone server that connects to a node of a chain developed with the Cosmos SDK.
+
+Rosetta can be added to any Cosmos chain node, either standalone or natively.
+
+### Standalone
+
+Rosetta can be executed as a standalone service; it connects to the node endpoints and exposes the required Rosetta endpoints.
+
+Install the Rosetta standalone server with the following command:
+
+```bash
+go install github.com/cosmos/rosetta
+```
+
+Alternatively, to build from source, simply run `make build`. The binary will be located in the root folder.
+
+### Native - As a node command
+
+To enable native Rosetta API support, it's required to add the `RosettaCommand` to your application's root command file (e.g. `simd/cmd/root.go`).
+
+Import the `rosettaCmd` package:
+
+```go
+import "github.com/cosmos/rosetta/cmd"
+```
+
+Find the following line:
+
+```go
+initRootCmd(rootCmd, encodingConfig)
+```
+
+After that line, add the following:
+
+```go
+rootCmd.AddCommand(
+ rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)
+)
+```
+
+The `RosettaCommand` function builds the `rosetta` root command and is defined in the `rosettaCmd` package (`github.com/cosmos/rosetta/cmd`).
+
+Since we've updated the Cosmos SDK to work with the Rosetta API, updating the application's root command file is all you need to do.
+
+An implementation example can be found in the `simapp` package.
+
+## Use Rosetta Command
+
+To run Rosetta in your application CLI, use the following command:
+
+> **Note:** if using the native approach, prefix any rosetta command with your node binary's name. 
+
+```shell
+rosetta --help
+```
+
+To test and run Rosetta API endpoints for applications that are running and exposed, use the following command:
+
+```shell
+rosetta \
+  --blockchain "your application name (ex: gaia)" \
+  --network "your chain identifier (ex: testnet-1)" \
+  --tendermint "tendermint endpoint (ex: localhost:26657)" \
+  --grpc "gRPC endpoint (ex: localhost:9090)" \
+  --addr "rosetta binding address (ex: :8080)" \
+  --grpc-types-server "gRPC endpoint for message descriptor types" # optional
+```
+
+## Plugins - Multi chain connections
+
+Rosetta will try to discover the node's types through reflection over the node's gRPC endpoints; there may be cases where this approach is not enough. It is possible to extend or implement the required types easily through plugins.
+
+To use Rosetta over any chain, it is required to set up prefixes and register zone-specific interfaces through plugins.
+
+Each plugin is a minimalist implementation of `InitZone` and `RegisterInterfaces`, which allows Rosetta to parse chain-specific data. There is an example for the Cosmos Hub chain under the `plugins/cosmos-hub/` folder.
+
+* **InitZone**: An empty method that is executed first and defines prefixes, parameters and other settings.
+* **RegisterInterfaces**: This method receives an interface registry, which is where the zone-specific types and interfaces will be loaded.
+
+In order to add a new plugin:
+
+1. Create a folder under the `plugins` directory with the name of the desired zone.
+2. Add a `main.go` file with the methods mentioned above.
+3. Build the code as a plugin binary with `go build -buildmode=plugin -o main.so main.go`.
+
+The plugin folder is selected through the CLI `--plugin` flag and loaded into the Rosetta server.
+
+## Extensions
+
+There are two ways in which you can customize and extend the implementation with your custom settings. 
+
+### Message extension
+
+To make an `sdk.Msg` understandable by Rosetta, all that is required is to add methods to your messages that satisfy the `rosetta.Msg` interface. Examples of how to do so can be found in the staking types, such as `MsgDelegate`, or in the bank types, such as `MsgSend`.
+
+### Client interface override
+
+In case more customization is required, it's possible to embed the Client type and override the methods which require customization.
+
+Example:
+
+```go expandable
+package custom_client
+import (
+
+
+"context"
+ "github.com/coinbase/rosetta-sdk-go/types"
+ "github.com/cosmos/rosetta/lib"
+)
+
+/ CustomClient embeds the standard cosmos client
+/ which means that it implements the cosmos-rosetta-gateway Client
+/ interface while at the same time allowing certain methods to be customized
+type CustomClient struct {
+ *rosetta.Client
+}
+
+func (c *CustomClient)
+
+ConstructionPayload(_ context.Context, request *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error) {
+ / provide custom signature bytes
+ panic("implement me")
+}
+```
+
+NOTE: when using a customized client, the command cannot be used as-is, since the required constructors **may** differ, so a new command must be created. We intend to provide a way to initialize a customized client without writing extra code in the future.
+
+### Error extension
+
+Rosetta requires 'returned' errors to be provided to the network options. To declare a new Rosetta error, use the `errors` package in cosmos-rosetta-gateway.
+
+Example:
+
+```go
+package custom_errors
+import crgerrs "github.com/cosmos/rosetta/lib/errors"
+
+var customErrRetriable = true
+var CustomError = crgerrs.RegisterError(100, "custom message", customErrRetriable, "description")
+```
+
+Note: errors must be registered before cosmos-rosetta-gateway's `Server`.`Start` method is called. Otherwise, the registration will be ignored. Errors with the same code will be ignored as well. 
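As a closing illustration for the standalone setup, the flags from the "Use Rosetta Command" section can be collected into a small launch script. This is only a sketch: the chain name, endpoints, and bind address below are placeholder assumptions to be replaced with your own deployment's values.

```shell
#!/bin/sh
# Sketch of a launch wrapper for a standalone Rosetta server.
# Every value below is a placeholder for your own deployment.
BLOCKCHAIN="gaia"             # your application name
NETWORK="testnet-1"           # your chain identifier
TENDERMINT="localhost:26657"  # CometBFT RPC endpoint
GRPC="localhost:9090"         # Cosmos SDK gRPC endpoint
ADDR=":8080"                  # address Rosetta binds to

# Compose the command so it can be inspected before launching.
CMD="rosetta --blockchain $BLOCKCHAIN --network $NETWORK --tendermint $TENDERMINT --grpc $GRPC --addr $ADDR"
echo "$CMD"
# exec $CMD  # uncomment to actually start the server
```

Keeping the flags in variables makes it easy to reuse the same script across environments by overriding the values.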
diff --git a/docs/sdk/next/documentation/operations/run-node.mdx b/docs/sdk/next/documentation/operations/run-node.mdx new file mode 100644 index 00000000..0220070d --- /dev/null +++ b/docs/sdk/next/documentation/operations/run-node.mdx @@ -0,0 +1,218 @@ +--- +title: Running a Node +--- + +## Synopsis + +Now that the application is ready and the keyring populated, it's time to see how to run the blockchain node. In this section, the application we are running is called [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`. + + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/next/documentation/application-framework/app-anatomy) +- [Setting up the keyring](/docs/sdk/next/documentation/operations/keyring) + + + +## Initialize the Chain + + + Make sure you can build your own binary, and replace `simd` with the name of + your binary in the snippets. + + +Before actually running the node, we need to initialize the chain, and most importantly, its genesis file. This is done with the `init` subcommand: + +```bash +# The argument is the custom username of your node, it should be human-readable. +simd init --chain-id my-test-chain +``` + +The command above creates all the configuration files needed for your node to run, as well as a default genesis file, which defines the initial state of the network. + + + All these configuration files are in `~/.simapp` by default, but you can + overwrite the location of this folder by passing the `--home` flag to each + command, or set an `$APPD_HOME` environment variable (where `APPD` is the name + of the binary). + + +The `~/.simapp` folder has the following structure: + +```bash +. # ~/.simapp + |- data # Contains the databases used by the node. + |- config/ + |- app.toml # Application-related configuration file. + |- config.toml # CometBFT-related configuration file. + |- genesis.json # The genesis file. 
+ |- node_key.json # Private key to use for node authentication in the p2p protocol.
+ |- priv_validator_key.json # Private key to use as a validator in the consensus protocol.
+```
+
+## Updating Some Default Settings
+
+If you want to change any field values in configuration files (e.g., `genesis.json`), you can use `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) & `sed` commands to do that. A few examples are listed here.
+
+```bash expandable
+# to change the chain-id
+jq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json
+
+# to enable the api server
+sed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml
+
+# to change the voting_period
+jq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json
+
+# to change the inflation
+jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json
+```
+
+### Client Interaction
+
+When instantiating a node, gRPC and REST default to localhost to avoid unknowingly exposing your node to the public. It is recommended not to expose these endpoints without a proxy that can handle load balancing or authentication set up between your node and the public.
+
+A commonly used tool for this is [nginx](https://nginx.org).
+
+## Adding Genesis Accounts
+
+Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](/docs/sdk/next/documentation/operations/keyring#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend).
+
+Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file. 
Doing so will also make sure your chain is aware of this account's existence: + +```bash +simd genesis add-genesis-account $MY_VALIDATOR_ADDRESS 100000000000stake +``` + +Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](/docs/sdk/next/documentation/operations/keyring#adding-keys-to-the-keyring). Also note that the tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is an 18-digit-precision decimal number, and `denom` is the unique token identifier with its denomination key (e.g. `atom` or `uatom`). Here, we are granting `stake` tokens, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead. + +Now that your account has some tokens, you need to add a validator to your chain. Validators are special full-nodes that participate in the consensus process (implemented in the [underlying consensus engine](/docs/sdk/next/documentation/core-concepts/sdk-app-architecture#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, you will add your local node (created via the `init` command above) as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`: + +```bash +# Create a gentx. +simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test + +# Add the gentx to the genesis file. +simd genesis collect-gentxs +``` + +A `gentx` does three things: + +1. 
Registers the `validator` account you created as a validator operator account (i.e., the account that controls the validator).
+2. Self-delegates the provided `amount` of staking tokens.
+3. Links the operator account with a CometBFT node pubkey that will be used for signing blocks. If no `--pubkey` flag is provided, it defaults to the local node pubkey created via the `simd init` command above.
+
+For more information on `gentx`, use the following command:
+
+```bash
+simd genesis gentx --help
+```
+
+## Configuring the Node Using `app.toml` and `config.toml`
+
+The Cosmos SDK automatically generates two configuration files inside `~/.simapp/config`:
+
+- `config.toml`: used to configure CometBFT; learn more in [CometBFT's documentation](https://docs.cometbft.com/v0.37/core/configuration),
+- `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST servers configuration, state sync...
+
+Both files are heavily commented; please refer to them directly to tweak your node.
+
+One example config to tweak is the `minimum-gas-prices` field inside `app.toml`, which defines the minimum gas prices the validator node is willing to accept for processing a transaction. Depending on the chain, it might be an empty string or not. If it's empty, make sure to edit the field with some value, for example `10token`, or else the node will halt on startup. For the purpose of this tutorial, let's set the minimum gas price to 0:
+
+```toml
+ # The minimum gas prices a validator is willing to accept for processing a
+ # transaction. A transaction's fees must meet the minimum of any denomination
+ # specified in this config (e.g. 0.25token1;0.0001token2).
+ minimum-gas-prices = "0stake"
+```
+
+
+When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`. 
+
+```toml
+[mempool]
+# Setting max-txs to 0 will allow for an unbounded amount of transactions in the mempool.
+# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool.
+# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount.
+#
+# Note, this configuration only applies to SDK built-in app-side mempool
+# implementations.
+max-txs = "-1"
+```
+
+
+
+## Run a Localnet
+
+Now that everything is set up, you can finally start your node:
+
+```bash
+simd start
+```
+
+You should see blocks come in.
+
+The previous command allows you to run a single node. This is enough for the next section on interacting with this node, but you may wish to run multiple nodes at the same time, and see how consensus happens between them.
+
+The naive way would be to run the same commands again in separate terminal windows. This is possible, however, in the Cosmos SDK, we leverage the power of [Docker Compose](https://docs.docker.com/compose/) to run a localnet. If you need inspiration on how to set up your own localnet with Docker Compose, you can have a look at the Cosmos SDK's [`docker-compose.yml`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/docker-compose.yml).
+
+### Standalone App/CometBFT
+
+By default, the Cosmos SDK runs CometBFT in-process with the application.
+If you want to run the application and CometBFT in separate processes,
+start the application with the `--with-comet=false` flag
+and set `rpc.laddr` in `config.toml` to the CometBFT node's RPC address.
+
+## Logging
+
+Logging provides a way to see what is going on with a node. The default logging level is info. This is a global level, and all info logs will be output to the terminal. If you would like to filter for specific modules' logs instead, you can set `module:log_level` pairs in the configuration. 
+
+Example:
+
+In config.toml:
+
+```toml
+log_level: "state:info,p2p:info,consensus:info,x/staking:info,x/ibc:info,*error"
+```
+
+## State Sync
+
+State sync is the process by which a node syncs to the latest, or close to the latest, state of a blockchain. This is useful for users who don't want to sync all the blocks in history. Read more in the [CometBFT documentation](https://docs.cometbft.com/v0.37/core/state-sync).
+
+State sync works thanks to snapshots. Read how the SDK handles snapshots [here](https://github.com/cosmos/cosmos-sdk/blob/825245d/store/snapshots/README.md).
+
+### Local State Sync
+
+Local state sync works similarly to normal state sync, except that it works off a local snapshot of state instead of one provided via the p2p network. The steps to start local state sync are similar to normal state sync, with a few differences.
+
+1. As mentioned in the [CometBFT state sync documentation](https://docs.cometbft.com/v0.37/core/state-sync), one must set a height and hash in the config.toml along with a few RPC servers (the aforementioned link has instructions on how to do this).
+2. Run ` ` to restore a local snapshot (note: first load it from a file with the _load_ command).
+3. Bootstrap the CometBFT state to start the node after the snapshot has been ingested. This can be done with the bootstrap command ` comet bootstrap-state`.
+
+### Snapshots Commands
+
+The Cosmos SDK provides commands for managing snapshots.
+These commands can be added in an app with the following snippet in `cmd//root.go`:
+
+```go
+import (
+
+ "github.com/cosmos/cosmos-sdk/client/snapshot"
+)
+
+func initRootCmd(/* ... */) {
+ / ... 
+ rootCmd.AddCommand(
+ snapshot.Cmd(appCreator),
+ )
+}
+```
+
+Then the following commands are available at ` snapshots [command]`:
+
+- **list**: list local snapshots
+- **load**: Load a snapshot archive file into snapshot store
+- **restore**: Restore app state from local snapshot
+- **export**: Export app state to snapshot store
+- **dump**: Dump the snapshot as portable archive format
+- **delete**: Delete a local snapshot diff --git a/docs/sdk/next/documentation/operations/run-production.mdx b/docs/sdk/next/documentation/operations/run-production.mdx new file mode 100644 index 00000000..58b5d303 --- /dev/null +++ b/docs/sdk/next/documentation/operations/run-production.mdx @@ -0,0 +1,272 @@ +---
+title: Running in Production
+---
+
+## Synopsis
+
+This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains.
+
+When operating a node, full node or validator, in production it is important to set your server up securely.
+
+
+ There are many different ways to secure a server and your node; the steps
+ described here are one way. To see another way of setting up a server, see the [run
+ in production
+ tutorial](https://tutorials.cosmos.network/hands-on-exercise/4-run-in-prod).
+
+
+This walkthrough assumes the underlying operating system is Ubuntu.
+
+## Server Setup
+
+### User
+
+When a server is created, most of the time it is created as the `root` user. This user has heightened privileges on the server. When operating a node, it is recommended not to run your node as the root user.
+
+1. Create a new user
+
+```bash
+sudo adduser change_me
+```
+
+2. We want to allow this user to perform sudo tasks
+
+```bash
+sudo usermod -aG sudo change_me
+```
+
+Now when logging into the server, the non-`root` user can be used.
+
+### Go
+
+1. Install the [Go](https://go.dev/doc/install) version recommended by the application. 
+
+
+ In the past, validators [have had
+ issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using
+ different versions of Go. It is recommended that the whole validator set uses
+ the version of Go that is recommended by the application.
+
+
+### Firewall
+
+Nodes should not have all ports open to the public; this is an easy way to get DDoS'd. Secondly, [CometBFT](https://github.com/cometbft/cometbft) recommends never exposing ports that are not required to operate a node.
+
+When setting up a firewall, there are a few ports that may be open when operating a Cosmos SDK node: the CometBFT JSON-RPC, Prometheus, p2p, and remote signer ports, and the Cosmos SDK gRPC and REST ports. If the node does not offer endpoints to be used for submission or querying, then at most three ports are needed.
+
+Most, if not all, servers come equipped with [ufw](https://help.ubuntu.com/community/UFW), which will be used in this tutorial.
+
+1. Reset UFW to disallow all incoming connections and allow outgoing
+
+```bash
+sudo ufw default deny incoming
+sudo ufw default allow outgoing
+```
+
+2. Let's make sure that port 22 (ssh) stays open.
+
+```bash
+sudo ufw allow ssh
+```
+
+or
+
+```bash
+sudo ufw allow 22
+```
+
+Both of the above commands are the same.
+
+3. Allow port 26656 (the CometBFT p2p port). If the node has a modified p2p port, then that port must be used here.
+
+```bash
+sudo ufw allow 26656/tcp
+```
+
+4. Allow port 26660 (the CometBFT [Prometheus](https://prometheus.io) port). This acts as the application's monitoring port as well.
+
+```bash
+sudo ufw allow 26660/tcp
+```
+
+5. (Optional) If the node being set up should expose CometBFT's JSON-RPC and the Cosmos SDK gRPC and REST endpoints, follow this step.
+
+##### CometBFT JsonRPC
+
+```bash
+sudo ufw allow 26657/tcp
+```
+
+##### Cosmos SDK GRPC
+
+```bash
+sudo ufw allow 9090/tcp
+```
+
+##### Cosmos SDK REST
+
+```bash
+sudo ufw allow 1317/tcp
+```
+
+6. 
Lastly, enable ufw
+
+```bash
+sudo ufw enable
+```
+
+### Signing
+
+If the node being started is a validator, there are multiple ways it can sign blocks.
+
+#### File
+
+File-based signing is the simplest and default approach. It works by using the consensus key, generated on initialization, to sign blocks. This approach is only as safe as your server setup: if the server is compromised, so is your key. The key is located at `config/priv_val_key.json`, generated on initialization.
+
+A second file the user must be aware of is located in the data directory: `data/priv_val_state.json`. This file protects your node from double signing. It keeps track of the consensus key's last signed height, round, and latest signature. If the node crashes and needs to be recovered, this file must be kept in order to ensure that the consensus key will not be used for signing a block that was previously signed.
+
+#### Remote Signer
+
+A remote signer is a secondary server, separate from the running node, that signs blocks with the consensus key. This means that the consensus key does not live on the node itself. This increases security because your full node, which is connected to the remote signer, can be swapped without missing blocks.
+
+The two most used remote signers are [tmkms](https://github.com/iqlusioninc/tmkms) from [Iqlusion](https://www.iqlusion.io) and [horcrux](https://github.com/strangelove-ventures/horcrux) from [Strangelove](https://strange.love).
+
+##### TMKMS
+
+###### Dependencies
+
+1. Update server dependencies and install extras needed.
+
+```sh
+sudo apt update -y && sudo apt install build-essential curl jq -y
+```
+
+2. Install Rust:
+
+```sh
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+
+3. Install Libusb:
+
+```sh
+sudo apt install libusb-1.0-0-dev
+```
+
+###### Setup
+
+There are two ways to install tmkms: from source or via `cargo install`. 
In the examples we will cover downloading or building from source and using softsign. Softsign stands for software signing, but you could use a [yubihsm](https://www.yubico.com/products/hardware-security-module/) as your signing key if you wish.
+
+1. Build:
+
+From source:
+
+```bash
+cd $HOME
+git clone https://github.com/iqlusioninc/tmkms.git
+cd $HOME/tmkms
+cargo install tmkms --features=softsign
+tmkms init config
+tmkms softsign keygen ./config/secrets/secret_connection_key
+```
+
+or
+
+Cargo install:
+
+```bash
+cargo install tmkms --features=softsign
+tmkms init config
+tmkms softsign keygen ./config/secrets/secret_connection_key
+```
+
+
+ To use tmkms with a yubikey, install the binary with `--features=yubihsm`.
+
+
+2. Migrate the validator key from the full node to the new tmkms instance.
+
+```bash
+scp user@123.456.32.123:~/.simd/config/priv_validator_key.json ~/tmkms/config/secrets
+```
+
+3. Import the validator key into tmkms.
+
+```bash
+tmkms softsign import $HOME/tmkms/config/secrets/priv_validator_key.json $HOME/tmkms/config/secrets/priv_validator_key
+```
+
+At this point, it is necessary to delete the `priv_validator_key.json` from the validator node and the tmkms node. Since the key has been imported into tmkms (above), it is no longer necessary on the nodes. The key can be safely stored offline.
+
+4. Modify the `tmkms.toml`.
+
+```bash
+vim $HOME/tmkms/config/tmkms.toml
+```
+
+This example shows a configuration that could be used for soft signing. The example has an IP of `123.456.12.345`, a port of `26659`, and a chain_id of `test-chain-waSDSe`. These are items that must be modified for your tmkms use case and network. 
+
+```toml expandable
+# CometBFT KMS configuration file
+
+## Chain Configuration
+
+[[chain]]
+id = "osmosis-1"
+key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" }
+state_file = "/root/tmkms/config/state/priv_validator_state.json"
+
+## Signing Provider Configuration
+
+### Software-based Signer Configuration
+
+[[providers.softsign]]
+chain_ids = ["test-chain-waSDSe"]
+key_type = "consensus"
+path = "/root/tmkms/config/secrets/priv_validator_key"
+
+## Validator Configuration
+
+[[validator]]
+chain_id = "test-chain-waSDSe"
+addr = "tcp://123.456.12.345:26659"
+secret_key = "/root/tmkms/config/secrets/secret_connection_key"
+protocol_version = "v0.34"
+reconnect = true
+```
+
+5. Set the address of the tmkms instance.
+
+```bash
+vim $HOME/.simd/config/config.toml
+
+priv_validator_laddr = "tcp://0.0.0.0:26659"
+```
+
+
+ The above address is set to `0.0.0.0`, but it is recommended to restrict it to
+ the address of the tmkms server to secure the setup.
+
+
+
+It is recommended to comment out or delete the lines that specify the paths of the validator key and validator state:
+
+```toml
+# Path to the JSON file containing the private key to use as a validator in the consensus protocol
+# priv_validator_key_file = "config/priv_validator_key.json"
+
+# Path to the JSON file containing the last sign state of a validator
+# priv_validator_state_file = "data/priv_validator_state.json"
+```
+
+
+
+6. Start the two processes. 
+ +```bash +tmkms start -c $HOME/tmkms/config/tmkms.toml +``` + +```bash +simd start +``` diff --git a/docs/sdk/v0.53/user/run-node/run-testnet.mdx b/docs/sdk/next/documentation/operations/run-testnet.mdx similarity index 65% rename from docs/sdk/v0.53/user/run-node/run-testnet.mdx rename to docs/sdk/next/documentation/operations/run-testnet.mdx index e56d190c..228d8276 100644 --- a/docs/sdk/v0.53/user/run-node/run-testnet.mdx +++ b/docs/sdk/next/documentation/operations/run-testnet.mdx @@ -1,15 +1,14 @@ --- -title: "Running a Testnet" -description: "Version: v0.53" +title: Running a Testnet --- - - The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes. - +## Synopsis -In addition to the commands for [running a node](/v0.53/user/run-node/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. +The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes. -## Initialize Files[​](#initialize-files "Direct link to Initialize Files") +In addition to the commands for [running a node](/docs/sdk/next/documentation/operations/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. + +## Initialize Files First, let's take a look at the `init-files` subcommand. @@ -19,27 +18,27 @@ The `init-files` subcommand initializes the necessary files to run a test networ In order to initialize the files for a test network, run the following command: -``` +```bash simd testnet init-files ``` You should see the following output in your terminal: -``` +```bash Successfully initialized 4 node directories ``` The default output directory is a relative `.testnets` directory. 
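As a rough sketch, the generated tree looks something like the following (the exact file names are illustrative and may vary by SDK version):

```text
.testnets/
├── gentxs/          # one genesis transaction file per validator
├── node0/
│   └── simd/        # home directory for node0 (config and data files)
├── node1/
│   └── simd/
├── node2/
│   └── simd/
└── node3/
    └── simd/
```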
Let's take a look at the files created within the `.testnets` directory. -### gentxs[​](#gentxs "Direct link to gentxs") +### gentxs The `gentxs` directory includes a genesis transaction for each validator node. Each file includes a JSON encoded genesis transaction used to register a validator node at the time of genesis. The genesis transactions are added to the `genesis.json` file within each node directory during the initialization process. -### nodes[​](#nodes "Direct link to nodes") +### nodes A node directory is created for each validator node. Within each node directory is a `simd` directory. The `simd` directory is the home directory for each node, which includes the configuration and data files for that node (i.e. the same files included in the default `~/.simapp` directory when running a single node). -## Start Testnet[​](#start-testnet "Direct link to Start Testnet") +## Start Testnet Now, let's take a look at the `start` subcommand. @@ -47,38 +46,52 @@ The `start` subcommand both initializes and starts an in-process test network. 
T You can start the local test network by running the following command: -``` +```bash simd testnet start ``` You should see something similar to the following: -``` -acquiring test network lockpreparing test network with chain-id "chain-mtoD9v"+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++++ DO NOT USE IN PRODUCTION ++++ ++++ sustain know debris minute gate hybrid stereo custom ++++ divorce cross spoon machine latin vibrant term oblige ++++ moment beauty laundry repeat grab game bronze truly +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++starting test network...started test networkpress the Enter Key to terminate +```bash expandable +acquiring test network lock +preparing test network with chain-id "chain-mtoD9v" + ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++ +++ DO NOT USE IN PRODUCTION ++ +++ ++ +++ sustain know debris minute gate hybrid stereo custom ++ +++ divorce cross spoon machine latin vibrant term oblige ++ +++ moment beauty laundry repeat grab game bronze truly ++ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +starting test network... +started test network +press the Enter Key to terminate ``` The first validator node is now running in-process, which means the test network will terminate once you either close the terminal window or you press the Enter key. In the output, the mnemonic phrase for the first validator node is provided for testing purposes. The validator node is using the same default addresses being used when initializing and starting a single node (no need to provide a `--node` flag). 
Check the status of the first validator node: -``` +```shell simd status ``` Import the key from the provided mnemonic: -``` +```shell simd keys add test --recover --keyring-backend test ``` Check the balance of the account address: -``` +```shell simd q bank balances [address] ``` Use this test account to manually test against the test network. -## Testnet Options[​](#testnet-options "Direct link to Testnet Options") +## Testnet Options You can customize the configuration of the test network with flags. In order to see all flag options, append the `--help` flag to each command. diff --git a/docs/sdk/next/documentation/operations/simulation.mdx b/docs/sdk/next/documentation/operations/simulation.mdx new file mode 100644 index 00000000..ed869568 --- /dev/null +++ b/docs/sdk/next/documentation/operations/simulation.mdx @@ -0,0 +1,95 @@ +--- +title: Cosmos Blockchain Simulator +description: >- + The Cosmos SDK offers a full fledged simulation framework to fuzz test every + message defined by a module. +--- + +The Cosmos SDK offers a full fledged simulation framework to fuzz test every +message defined by a module. + +On the Cosmos SDK, this functionality is provided by [`SimApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_di.go), which is a +`Baseapp` application that is used for running the [`simulation`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation) module. +This module defines all the simulation logic as well as the operations for +randomized parameters like accounts, balances etc. + +## Goals + +The blockchain simulator tests how the blockchain application would behave under +real life circumstances by generating and sending randomized messages. +The goal of this is to detect and debug failures that could halt a live chain, +by providing logs and statistics about the operations run by the simulator as +well as exporting the latest application state when a failure was found. 
+ +Its main difference from integration testing is that the simulator app allows +you to pass parameters to customize the chain that's being simulated. +This comes in handy when trying to reproduce bugs that were generated in the +provided operations (randomized or not). + +## Simulation commands + +The simulation app has different commands, each of which tests a different +failure type: + +* `AppImportExport`: The simulator exports the initial app state and then it + creates a new app with the exported `genesis.json` as an input, checking for + inconsistencies between the stores. +* `AppSimulationAfterImport`: Queues two simulations together. The first one provides the app state (*i.e.* genesis) to the second. Useful for testing software upgrades or hard-forks from a live chain. +* `AppStateDeterminism`: Checks that all the nodes return the same values, in the same order. +* `FullAppSimulation`: General simulation mode. Runs the chain and the specified operations for a given number of blocks. Tests that there are no `panics` during the simulation. + +Each simulation must receive a set of inputs (*i.e.* flags) such as the number of +blocks to run the simulation for, the seed, the block size, etc. +Check the full list of flags [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation/client/cli/flags.go#L43-L70). + +## Simulator Modes + +In addition to the various inputs and commands, the simulator runs in three modes: + +1. Completely random, where the initial state, module parameters and simulation + parameters are **pseudo-randomly generated**. +2. From a `genesis.json` file, where the initial state and the module parameters are defined. + This mode is helpful for running simulations on a known state such as a live network export, where a new (most likely breaking) version of the application needs to be tested. +3. From a `params.json` file, where the initial state is pseudo-randomly generated but the module and simulation parameters can be provided manually.
+ This allows for a more controlled and deterministic simulation setup while still allowing the state space to be pseudo-randomly simulated. + The available parameters are listed [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation/client/cli/flags.go#L72-L90). + + +These modes are not mutually exclusive. For example, you can run a randomly +generated genesis state (`1`) with manually provided simulation params (`3`). + + +## Usage + +This is a general example of how simulations are run. For more specific examples, +check the Cosmos SDK [Makefile](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/Makefile#L285-L320). + +```bash + $ go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \ + -run=TestApp \ + ... + -v -timeout 24h +``` + +## Debugging Tips + +Here are some suggestions when encountering a simulation failure: + +* Export the app state at the height where the failure was found. You can do this + by passing the `-ExportStatePath` flag to the simulator. +* Use `-Verbose` logs. They can give you a better picture of all the operations + involved. +* Try using another `-Seed`. If one reproduces the same error sooner, you will + spend less time running the simulations. +* Reduce `-NumBlocks`. What does the app state look like at the height just before + the failure? +* Try adding logs to operations that are not logged. You will have to define a + [Logger](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/staking/keeper/keeper.go#L77-L81) on your `Keeper`.
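Putting the tips above together, a hypothetical invocation might look like the following. The test name and exact flag spellings are assumptions here; check your SDK version's simulation `flags.go` for the authoritative set:

```bash
# Hypothetical run: reproduce a failure with a fixed seed, fewer blocks,
# verbose logs, and the app state exported at the failure height.
go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \
  -run=TestFullAppSimulation \
  -Enabled=true \
  -NumBlocks=100 \
  -Seed=42 \
  -Verbose=true \
  -ExportStatePath=failed_state.json \
  -v -timeout 24h
```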
+ +## Use simulation in your Cosmos SDK-based application + +Learn how you can build the simulation into your Cosmos SDK-based application: + +* Application Simulation Manager +* [Building modules: Simulator](/docs/sdk/next/documentation/operations/simulator) +* Simulator tests diff --git a/docs/sdk/next/documentation/operations/simulator.mdx b/docs/sdk/next/documentation/operations/simulator.mdx new file mode 100644 index 00000000..7ab79f5f --- /dev/null +++ b/docs/sdk/next/documentation/operations/simulator.mdx @@ -0,0 +1,4063 @@ +--- +title: Module Simulation +--- + + +**Pre-requisite Readings** + +* [Cosmos Blockchain Simulator](/docs/sdk/next/documentation/operations/simulation) + + + +## Synopsis + +This document guides developers on integrating their custom modules with the Cosmos SDK `Simulations`. +Simulations are useful for testing edge cases in module implementations. + +* [Simulation Package](#simulation-package) +* [Simulation App Module](#simulation-app-module) +* [SimsX](#simsx) + * [Example Implementations](#example-implementations) +* [Store decoders](#store-decoders) +* [Randomized genesis](#randomized-genesis) +* [Random weighted operations](#random-weighted-operations) + * [Using Simsx](#using-simsx) +* [App Simulator manager](#app-simulator-manager) +* [Running Simulations](#running-simulations) + +## Simulation Package + +The Cosmos SDK suggests organizing your simulation related code in a `x//simulation` package. + +## Simulation App Module + +To integrate with the Cosmos SDK `SimulationManager`, app modules must implement the `AppModuleSimulation` interface. 
+ +```go expandable +package module + +import ( + + "encoding/json" + "math/rand" + "sort" + "time" + + sdkmath "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/types/simulation" +) + +/ AppModuleSimulation defines the standard functions that every module should expose +/ for the SDK blockchain simulator +type AppModuleSimulation interface { + / randomized genesis states + GenerateGenesisState(input *SimulationState) + + / register a func to decode the each module's defined types from their corresponding store key + RegisterStoreDecoder(simulation.StoreDecoderRegistry) + + / simulation operations (i.e msgs) + +with their respective weight + WeightedOperations(simState SimulationState) []simulation.WeightedOperation +} + +/ HasProposalMsgs defines the messages that can be used to simulate governance (v1) + +proposals +type HasProposalMsgs interface { + / msg functions used to simulate governance proposals + ProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg +} + +/ HasProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) + +proposals +type HasProposalContents interface { + / content functions used to simulate governance proposals + ProposalContents(simState SimulationState) []simulation.WeightedProposalContent /nolint:staticcheck / legacy v1beta1 governance +} + +/ SimulationManager defines a simulation manager that provides the high level utility +/ for managing and executing simulation functionalities for a group of modules +type SimulationManager struct { + Modules []AppModuleSimulation / array of app modules; we use an array for deterministic simulation tests + StoreDecoders simulation.StoreDecoderRegistry / functions to decode the key-value pairs from each module's store +} + +/ NewSimulationManager creates a new SimulationManager object +/ +/ CONTRACT: All the modules provided must be also registered on the module 
Manager +func NewSimulationManager(modules ...AppModuleSimulation) *SimulationManager { + return &SimulationManager{ + Modules: modules, + StoreDecoders: make(simulation.StoreDecoderRegistry), +} +} + +/ NewSimulationManagerFromAppModules creates a new SimulationManager object. +/ +/ First it sets any SimulationModule provided by overrideModules, and ignores any AppModule +/ with the same moduleName. +/ Then it attempts to cast every provided AppModule into an AppModuleSimulation. +/ If the cast succeeds, its included, otherwise it is excluded. +func NewSimulationManagerFromAppModules(modules map[string]any, overrideModules map[string]AppModuleSimulation) *SimulationManager { + simModules := []AppModuleSimulation{ +} + appModuleNamesSorted := make([]string, 0, len(modules)) + for moduleName := range modules { + appModuleNamesSorted = append(appModuleNamesSorted, moduleName) +} + +sort.Strings(appModuleNamesSorted) + for _, moduleName := range appModuleNamesSorted { + / for every module, see if we override it. If so, use override. + / Else, if we can cast the app module into a simulation module add it. + / otherwise no simulation module. + if simModule, ok := overrideModules[moduleName]; ok { + simModules = append(simModules, simModule) +} + +else { + appModule := modules[moduleName] + if simModule, ok := appModule.(AppModuleSimulation); ok { + simModules = append(simModules, simModule) +} + / cannot cast, so we continue +} + +} + +return NewSimulationManager(simModules...) +} + +/ Deprecated: Use GetProposalMsgs instead. +/ GetProposalContents returns each module's proposal content generator function +/ with their default operation weight and key. 
+func (sm *SimulationManager) + +GetProposalContents(simState SimulationState) []simulation.WeightedProposalContent { + wContents := make([]simulation.WeightedProposalContent, 0, len(sm.Modules)) + for _, module := range sm.Modules { + if module, ok := module.(HasProposalContents); ok { + wContents = append(wContents, module.ProposalContents(simState)...) +} + +} + +return wContents +} + +/ GetProposalMsgs returns each module's proposal msg generator function +/ with their default operation weight and key. +func (sm *SimulationManager) + +GetProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg { + wContents := make([]simulation.WeightedProposalMsg, 0, len(sm.Modules)) + for _, module := range sm.Modules { + if module, ok := module.(HasProposalMsgs); ok { + wContents = append(wContents, module.ProposalMsgs(simState)...) +} + +} + +return wContents +} + +/ RegisterStoreDecoders registers each of the modules' store decoders into a map +func (sm *SimulationManager) + +RegisterStoreDecoders() { + for _, module := range sm.Modules { + module.RegisterStoreDecoder(sm.StoreDecoders) +} +} + +/ GenerateGenesisStates generates a randomized GenesisState for each of the +/ registered modules +func (sm *SimulationManager) + +GenerateGenesisStates(simState *SimulationState) { + for _, module := range sm.Modules { + module.GenerateGenesisState(simState) +} +} + +/ WeightedOperations returns all the modules' weighted operations of an application +func (sm *SimulationManager) + +WeightedOperations(simState SimulationState) []simulation.WeightedOperation { + wOps := make([]simulation.WeightedOperation, 0, len(sm.Modules)) + for _, module := range sm.Modules { + wOps = append(wOps, module.WeightedOperations(simState)...) 
+} + +return wOps +} + +/ SimulationState is the input parameters used on each of the module's randomized +/ GenesisState generator function +type SimulationState struct { + AppParams simulation.AppParams + Cdc codec.JSONCodec / application codec + TxConfig client.TxConfig / Shared TxConfig; this is expensive to create and stateless, so create it once up front. + Rand *rand.Rand / random number + GenState map[string]json.RawMessage / genesis state + Accounts []simulation.Account / simulation accounts + InitialStake sdkmath.Int / initial coins per account + NumBonded int64 / number of initially bonded accounts + BondDenom string / denom to be used as default + GenTimestamp time.Time / genesis timestamp + UnbondTime time.Duration / staking unbond time stored to use it as the slashing maximum evidence duration + LegacyParamChange []simulation.LegacyParamChange / simulated parameter changes from modules + /nolint:staticcheck / legacy used for testing + LegacyProposalContents []simulation.WeightedProposalContent / proposal content generator functions with their default weight and app sim key + ProposalMsgs []simulation.WeightedProposalMsg / proposal msg generator functions with their default weight and app sim key +} +``` + +See an example implementation of these methods from `x/distribution` [here](https://github.com/cosmos/cosmos-sdk/blob/b55b9e14fb792cc8075effb373be9d26327fddea/x/distribution/module.go#L170-L194). + +## SimsX + +Cosmos SDK v0.53.0 introduced a new package, `simsx`, providing improved DevX for writing simulation code. + +It exposes the following extension interfaces that modules may implement to integrate with the new `simsx` runner. 
+ +```go expandable +package simsx + +import ( + + "encoding/json" + "fmt" + "io" + "math" + "os" + "path/filepath" + "strings" + "testing" + + dbm "github.com/cosmos/cosmos-db" + "github.com/stretchr/testify/require" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/runtime" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation/client/cli" +) + +const SimAppChainID = "simulation-app" + +/ this list of seeds was imported from the original simulation runner: https://github.com/cosmos/tools/blob/v1.0.0/cmd/runsim/main.go#L32 +var defaultSeeds = []int64{ + 1, 2, 4, 7, + 32, 123, 124, 582, 1893, 2989, + 3012, 4728, 37827, 981928, 87821, 891823782, + 989182, 89182391, 11, 22, 44, 77, 99, 2020, + 3232, 123123, 124124, 582582, 18931893, + 29892989, 30123012, 47284728, 7601778, 8090485, + 977367484, 491163361, 424254581, 673398983, +} + +/ SimStateFactory is a factory type that provides a convenient way to create a simulation state for testing. 
+/ It contains the following fields: +/ - Codec: a codec used for serializing other objects +/ - AppStateFn: a function that returns the app state JSON bytes and the genesis accounts +/ - BlockedAddr: a map of blocked addresses +/ - AccountSource: an interface for retrieving accounts +/ - BalanceSource: an interface for retrieving balance-related information +type SimStateFactory struct { + Codec codec.Codec + AppStateFn simtypes.AppStateFn + BlockedAddr map[string]bool + AccountSource AccountSourceX + BalanceSource BalanceSource +} + +/ SimulationApp abstract app that is used by sims +type SimulationApp interface { + runtime.AppI + SetNotSigverifyTx() + +GetBaseApp() *baseapp.BaseApp + TxConfig() + +client.TxConfig + Close() + +error +} + +/ Run is a helper function that runs a simulation test with the given parameters. +/ It calls the RunWithSeeds function with the default seeds and parameters. +/ +/ This is the entrypoint to run simulation tests that used to run with the runsim binary. +func Run[T SimulationApp](/docs/sdk/next/documentation/operations/ + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeeds(t, appFactory, setupStateFactory, defaultSeeds, nil, postRunActions...) +} + +/ RunWithSeeds is a helper function that runs a simulation test with the given parameters. +/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for each seed. +/ The execution is deterministic and can be used for fuzz tests as well. 
+/ +/ The system under test is isolated for each run but unlike the old runsim command, there is no Process separation. +/ This means, global caches may be reused for example. This implementation build upon the vanilla Go stdlib test framework. +func RunWithSeeds[T SimulationApp](/docs/sdk/next/documentation/operations/ + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeedsAndRandAcc(t, appFactory, setupStateFactory, seeds, fuzzSeed, simtypes.RandomAccounts, postRunActions...) +} + +/ RunWithSeedsAndRandAcc calls RunWithSeeds with randAccFn +func RunWithSeedsAndRandAcc[T SimulationApp](/docs/sdk/next/documentation/operations/ + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + if deprecatedParams := cli.GetDeprecatedFlagUsed(); len(deprecatedParams) != 0 { + fmt.Printf("Warning: Deprecated flag are used: %s", strings.Join(deprecatedParams, ",")) +} + cfg := cli.NewConfigFromFlags() + +cfg.ChainID = SimAppChainID + for i := range seeds { + seed := seeds[i] + t.Run(fmt.Sprintf("seed: %d", seed), func(t *testing.T) { + t.Parallel() + +RunWithSeed(t, cfg, appFactory, setupStateFactory, seed, fuzzSeed, postRunActions...) +}) +} +} + +/ RunWithSeed is a helper function that runs a simulation test with the given parameters. 
+/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for the seed. +/ The execution is deterministic and can be used for fuzz tests as well. +func RunWithSeed[T SimulationApp](/docs/sdk/next/documentation/operations/ + tb testing.TB, + cfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seed int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + tb.Helper() + +RunWithSeedAndRandAcc(tb, cfg, appFactory, setupStateFactory, seed, fuzzSeed, simtypes.RandomAccounts, postRunActions...) +} + +/ RunWithSeedAndRandAcc calls RunWithSeed with randAccFn +func RunWithSeedAndRandAcc[T SimulationApp](/docs/sdk/next/documentation/operations/ + tb testing.TB, + cfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seed int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + tb.Helper() + / setup environment + tCfg := cfg.With(tb, seed, fuzzSeed) + testInstance := NewSimulationAppInstance(tb, tCfg, appFactory) + +var runLogger log.Logger + if cli.FlagVerboseValue { + runLogger = log.NewTestLogger(tb) +} + +else { + runLogger = log.NewTestLoggerInfo(tb) +} + +runLogger = runLogger.With("seed", tCfg.Seed) + app := testInstance.App + stateFactory := setupStateFactory(app) + +ops, reporter := prepareWeightedOps(app.SimulationManager(), 
stateFactory, tCfg, testInstance.App.TxConfig(), runLogger) + +simParams, accs, err := simulation.SimulateFromSeedX( + tb, + runLogger, + WriteToDebugLog(runLogger), + app.GetBaseApp(), + stateFactory.AppStateFn, + randAccFn, + ops, + stateFactory.BlockedAddr, + tCfg, + stateFactory.Codec, + testInstance.ExecLogWriter, + ) + +require.NoError(tb, err) + +err = simtestutil.CheckExportSimulation(app, tCfg, simParams) + +require.NoError(tb, err) + if tCfg.Commit { + simtestutil.PrintStats(testInstance.DB) +} + / not using tb.Log to always print the summary + fmt.Printf("+++ DONE (seed: %d): \n%s\n", seed, reporter.Summary().String()) + for _, step := range postRunActions { + step(tb, testInstance, accs) +} + +require.NoError(tb, app.Close()) +} + +type ( + HasWeightedOperationsX interface { + WeightedOperationsX(weight WeightSource, reg Registry) +} + +HasWeightedOperationsXWithProposals interface { + WeightedOperationsX(weights WeightSource, reg Registry, proposals WeightedProposalMsgIter, + legacyProposals []simtypes.WeightedProposalContent) /nolint: staticcheck / used for legacy proposal types +} + +HasProposalMsgsX interface { + ProposalMsgsX(weights WeightSource, reg Registry) +} +) + +type ( + HasLegacyWeightedOperations interface { + / WeightedOperations simulation operations (i.e msgs) + +with their respective weight + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation +} + / HasLegacyProposalMsgs defines the messages that can be used to simulate governance (v1) + +proposals + / Deprecated replaced by HasProposalMsgsX + HasLegacyProposalMsgs interface { + / ProposalMsgs msg fu nctions used to simulate governance proposals + ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg +} + + / HasLegacyProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) + +proposals + / Deprecated replaced by HasProposalMsgsX + HasLegacyProposalContents interface { + / ProposalContents 
content functions used to simulate governance proposals + ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent /nolint:staticcheck / legacy v1beta1 governance +} +) + +/ TestInstance is a generic type that represents an instance of a SimulationApp used for testing simulations. +/ It contains the following fields: +/ - App: The instance of the SimulationApp under test. +/ - DB: The LevelDB database for the simulation app. +/ - WorkDir: The temporary working directory for the simulation app. +/ - Cfg: The configuration flags for the simulator. +/ - AppLogger: The logger used for logging in the app during the simulation, with seed value attached. +/ - ExecLogWriter: Captures block and operation data coming from the simulation +type TestInstance[T SimulationApp] struct { + App T + DB dbm.DB + WorkDir string + Cfg simtypes.Config + AppLogger log.Logger + ExecLogWriter simulation.LogWriter +} + +/ included to avoid cyclic dependency in testutils/sims +func prepareWeightedOps( + sm *module.SimulationManager, + stateFact SimStateFactory, + config simtypes.Config, + txConfig client.TxConfig, + logger log.Logger, +) (simulation.WeightedOperations, *BasicSimulationReporter) { + cdc := stateFact.Codec + simState := module.SimulationState{ + AppParams: make(simtypes.AppParams), + Cdc: cdc, + TxConfig: txConfig, + BondDenom: sdk.DefaultBondDenom, +} + if config.ParamsFile != "" { + bz, err := os.ReadFile(config.ParamsFile) + if err != nil { + panic(err) +} + +err = json.Unmarshal(bz, &simState.AppParams) + if err != nil { + panic(err) +} + +} + weights := ParamWeightSource(simState.AppParams) + reporter := NewBasicSimulationReporter() + pReg := make(UniqueTypeRegistry) + wContent := make([]simtypes.WeightedProposalContent, 0) /nolint:staticcheck / required for legacy type + legacyPReg := NewWeightedFactoryMethods() + / add gov proposals types + for _, m := range sm.Modules { + switch xm := m.(type) { + case HasProposalMsgsX: + 
xm.ProposalMsgsX(weights, pReg) + case HasLegacyProposalMsgs: + for _, p := range xm.ProposalMsgs(simState) { + weight := weights.Get(p.AppParamsKey(), safeUint(p.DefaultWeight())) + +legacyPReg.Add(weight, legacyToMsgFactoryAdapter(p.MsgSimulatorFn())) +} + case HasLegacyProposalContents: + wContent = append(wContent, xm.ProposalContents(simState)...) +} + +} + oReg := NewSimsMsgRegistryAdapter( + reporter, + stateFact.AccountSource, + stateFact.BalanceSource, + txConfig, + logger, + ) + wOps := make([]simtypes.WeightedOperation, 0, len(sm.Modules)) + for _, m := range sm.Modules { + / add operations + switch xm := m.(type) { + case HasWeightedOperationsX: + xm.WeightedOperationsX(weights, oReg) + case HasWeightedOperationsXWithProposals: + xm.WeightedOperationsX(weights, oReg, AppendIterators(legacyPReg.Iterator(), pReg.Iterator()), wContent) + case HasLegacyWeightedOperations: + wOps = append(wOps, xm.WeightedOperations(simState)...) +} + +} + +return append(wOps, Collect(oReg.items, func(a weightedOperation) + +simtypes.WeightedOperation { + return a +})...), reporter +} + +func safeUint(p int) + +uint32 { + if p < 0 || p > math.MaxUint32 { + panic(fmt.Sprintf("can not cast to uint32: %d", p)) +} + +return uint32(p) +} + +/ NewSimulationAppInstance initializes and returns a TestInstance of a SimulationApp. +/ The function takes a testing.T instance, a simtypes.Config instance, and an appFactory function as parameters. +/ It creates a temporary working directory and a LevelDB database for the simulation app. +/ The function then initializes a logger based on the verbosity flag and sets the logger's seed to the test configuration's seed. +/ The database is closed and cleaned up on test completion. 
+func NewSimulationAppInstance[T SimulationApp]( + tb testing.TB, + tCfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, +) + +TestInstance[T] { + tb.Helper() + workDir := tb.TempDir() + +require.NoError(tb, os.Mkdir(filepath.Join(workDir, "data"), 0o750)) + dbDir := filepath.Join(workDir, "leveldb-app-sim") + +var logger log.Logger + if cli.FlagVerboseValue { + logger = log.NewTestLogger(tb) +} + +else { + logger = log.NewTestLoggerError(tb) +} + +logger = logger.With("seed", tCfg.Seed) + +db, err := dbm.NewDB("Simulation", dbm.BackendType(tCfg.DBBackend), dbDir) + +require.NoError(tb, err) + +tb.Cleanup(func() { + _ = db.Close() / ensure db is closed +}) + appOptions := make(simtestutil.AppOptionsMap) + +appOptions[flags.FlagHome] = workDir + opts := []func(*baseapp.BaseApp) { + baseapp.SetChainID(tCfg.ChainID) +} + if tCfg.FauxMerkle { + opts = append(opts, FauxMerkleModeOpt) +} + app := appFactory(logger, db, nil, true, appOptions, opts...) + if !cli.FlagSigverifyTxValue { + app.SetNotSigverifyTx() +} + +return TestInstance[T]{ + App: app, + DB: db, + WorkDir: workDir, + Cfg: tCfg, + AppLogger: logger, + ExecLogWriter: &simulation.StandardLogWriter{ + Seed: tCfg.Seed +}, +} +} + +var _ io.Writer = writerFn(nil) + +type writerFn func(p []byte) (n int, err error) + +func (w writerFn) + +Write(p []byte) (n int, err error) { + return w(p) +} + +/ WriteToDebugLog is an adapter to io.Writer interface +func WriteToDebugLog(logger log.Logger) + +io.Writer { + return writerFn(func(p []byte) (n int, err error) { + logger.Debug(string(p)) + +return len(p), nil +}) +} + +/ FauxMerkleModeOpt returns a BaseApp option to use a dbStoreAdapter instead of +/ an IAVLStore for faster simulation speed.
+func FauxMerkleModeOpt(bapp *baseapp.BaseApp) { + bapp.SetFauxMerkleMode() +} +``` + +These methods allow constructing randomized messages and/or proposal messages. + + +Note that modules should **not** implement both `HasWeightedOperationsX` and `HasWeightedOperationsXWithProposals`. +See the runner code [here](https://github.com/cosmos/cosmos-sdk/blob/main/testutil/simsx/runner.go#L330-L339) for details. + +If the module does **not** have message handlers or governance proposal handlers, these interface methods do **not** need to be implemented. + + +### Example Implementations + +* `HasWeightedOperationsXWithProposals`: [x/gov](https://github.com/cosmos/cosmos-sdk/blob/main/x/gov/module.go#L242-L261) +* `HasWeightedOperationsX`: [x/bank](https://github.com/cosmos/cosmos-sdk/blob/main/x/bank/module.go#L199-L203) +* `HasProposalMsgsX`: [x/bank](https://github.com/cosmos/cosmos-sdk/blob/main/x/bank/module.go#L194-L197) + +## Store decoders + +Registering the store decoders is required for the `AppImportExport` simulation. This allows +the key-value pairs from the stores to be decoded to their corresponding types. +In particular, it matches the key to a concrete type and then unmarshals the value from the `KVPair` to the type provided.
+ +Modules using [collections](https://github.com/cosmos/cosmos-sdk/blob/main/collections/README.md) can use the `NewStoreDecoderFuncFromCollectionsSchema` function that builds the decoder for you: + +```go expandable +package bank + +import ( + + "context" + "encoding/json" + "fmt" + "maps" + "slices" + "sort" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + corestore "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/testutil/simsx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/client/cli" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + v1bank "github.com/cosmos/cosmos-sdk/x/bank/migrations/v1" + "github.com/cosmos/cosmos-sdk/x/bank/simulation" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +/ ConsensusVersion defines the current x/bank module consensus version. +const ConsensusVersion = 4 + +var ( + _ module.AppModuleBasic = AppModule{ +} + _ module.AppModuleSimulation = AppModule{ +} + _ module.HasGenesis = AppModule{ +} + _ module.HasServices = AppModule{ +} + + _ appmodule.AppModule = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the bank module. +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the bank module's name. 
+func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the bank module's types on the LegacyAmino codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the bank +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the bank module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return data.Validate() +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the bank module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the bank module. +func (ab AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the bank module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) + + / Register legacy interfaces for migration scripts. + v1bank.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the bank module. 
+type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper.(keeper.BaseKeeper), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 1 to 2: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 2 to 3: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 3 to 4: %v", err)) +} +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: accountKeeper.AddressCodec() +}, + keeper: keeper, + accountKeeper: accountKeeper, + legacySubspace: ss, +} +} + +/ QuerierRoute returns the bank module's querier route name. +func (AppModule) + +QuerierRoute() + +string { + return types.RouterKey +} + +/ InitGenesis performs genesis initialization for the bank module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.keeper.InitGenesis(ctx, &genesisState) +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the bank +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the bank module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +/ migrate to ProposalMsgsX. This method is ignored when ProposalMsgsX exists and will be removed in the future. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for supply module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.keeper.(keeper.BaseKeeper).Schema) +} + +/ WeightedOperations returns the all the bank module operations with their respective weights. +/ migrate to WeightedOperationsX. 
This method is ignored when WeightedOperationsX exists and will be removed in the future +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, simState.TxConfig, am.accountKeeper, am.keeper, + ) +} + +/ ProposalMsgsX registers governance proposal messages in the simulation registry. +func (AppModule) + +ProposalMsgsX(weights simsx.WeightSource, reg simsx.Registry) { + reg.Add(weights.Get("msg_update_params", 100), simulation.MsgUpdateParamsFactory()) +} + +/ WeightedOperationsX registers weighted bank module operations for simulation. +func (am AppModule) + +WeightedOperationsX(weights simsx.WeightSource, reg simsx.Registry) { + reg.Add(weights.Get("msg_send", 100), simulation.MsgSendFactory()) + +reg.Add(weights.Get("msg_multisend", 10), simulation.MsgMultiSendFactory()) +} + +/ App Wiring Setup + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + appmodule.Invoke(InvokeSetSendRestrictions), + ) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + Cdc codec.Codec + StoreService corestore.KVStoreService + Logger log.Logger + + AccountKeeper types.AccountKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + BankKeeper keeper.BaseKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + / Configure blocked module accounts. + / + / Default behavior for blockedAddresses is to regard any module mentioned in + / AccountKeeper's module account permissions as blocked. 
+ blockedAddresses := make(map[string]bool) + if len(in.Config.BlockedModuleAccountsOverride) > 0 { + for _, moduleName := range in.Config.BlockedModuleAccountsOverride { + blockedAddresses[authtypes.NewModuleAddress(moduleName).String()] = true +} + +} + +else { + for _, permission := range in.AccountKeeper.GetModulePermissions() { + blockedAddresses[permission.GetAddress().String()] = true +} + +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + bankKeeper := keeper.NewBaseKeeper( + in.Cdc, + in.StoreService, + in.AccountKeeper, + blockedAddresses, + authority.String(), + in.Logger, + ) + m := NewAppModule(in.Cdc, bankKeeper, in.AccountKeeper, in.LegacySubspace) + +return ModuleOutputs{ + BankKeeper: bankKeeper, + Module: m +} +} + +func InvokeSetSendRestrictions( + config *modulev1.Module, + keeper keeper.BaseKeeper, + restrictions map[string]types.SendRestrictionFn, +) + +error { + if config == nil { + return nil +} + modules := slices.Collect(maps.Keys(restrictions)) + order := config.RestrictionsOrder + if len(order) == 0 { + order = modules + sort.Strings(order) +} + if len(order) != len(modules) { + return fmt.Errorf("len(restrictions order: %v) != len(restriction modules: %v)", order, modules) +} + if len(modules) == 0 { + return nil +} + for _, module := range order { + restriction, ok := restrictions[module] + if !ok { + return fmt.Errorf("can't find send restriction for module %s", module) +} + +keeper.AppendSendRestriction(restriction) +} + +return nil +} +``` + +Modules not using collections must manually build the store decoder. +See the implementation [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/distribution/simulation/decoder.go) from the distribution module for an example. 
+ +## Randomized genesis + +The simulator tests different scenarios and values for genesis parameters. +App modules must implement a `GenerateGenesisState` method to generate the initial random `GenesisState` from a given seed. + +```go expandable +package module + +import ( + + "encoding/json" + "math/rand" + "sort" + "time" + + sdkmath "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/types/simulation" +) + +/ AppModuleSimulation defines the standard functions that every module should expose +/ for the SDK blockchain simulator +type AppModuleSimulation interface { + / randomized genesis states + GenerateGenesisState(input *SimulationState) + + / register a func to decode the each module's defined types from their corresponding store key + RegisterStoreDecoder(simulation.StoreDecoderRegistry) + + / simulation operations (i.e msgs) + +with their respective weight + WeightedOperations(simState SimulationState) []simulation.WeightedOperation +} + +/ HasProposalMsgs defines the messages that can be used to simulate governance (v1) + +proposals +type HasProposalMsgs interface { + / msg functions used to simulate governance proposals + ProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg +} + +/ HasProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) + +proposals +type HasProposalContents interface { + / content functions used to simulate governance proposals + ProposalContents(simState SimulationState) []simulation.WeightedProposalContent /nolint:staticcheck / legacy v1beta1 governance +} + +/ SimulationManager defines a simulation manager that provides the high level utility +/ for managing and executing simulation functionalities for a group of modules +type SimulationManager struct { + Modules []AppModuleSimulation / array of app modules; we use an array for deterministic simulation tests + StoreDecoders 
simulation.StoreDecoderRegistry / functions to decode the key-value pairs from each module's store +} + +/ NewSimulationManager creates a new SimulationManager object +/ +/ CONTRACT: All the modules provided must be also registered on the module Manager +func NewSimulationManager(modules ...AppModuleSimulation) *SimulationManager { + return &SimulationManager{ + Modules: modules, + StoreDecoders: make(simulation.StoreDecoderRegistry), +} +} + +/ NewSimulationManagerFromAppModules creates a new SimulationManager object. +/ +/ First it sets any SimulationModule provided by overrideModules, and ignores any AppModule +/ with the same moduleName. +/ Then it attempts to cast every provided AppModule into an AppModuleSimulation. +/ If the cast succeeds, its included, otherwise it is excluded. +func NewSimulationManagerFromAppModules(modules map[string]any, overrideModules map[string]AppModuleSimulation) *SimulationManager { + simModules := []AppModuleSimulation{ +} + appModuleNamesSorted := make([]string, 0, len(modules)) + for moduleName := range modules { + appModuleNamesSorted = append(appModuleNamesSorted, moduleName) +} + +sort.Strings(appModuleNamesSorted) + for _, moduleName := range appModuleNamesSorted { + / for every module, see if we override it. If so, use override. + / Else, if we can cast the app module into a simulation module add it. + / otherwise no simulation module. + if simModule, ok := overrideModules[moduleName]; ok { + simModules = append(simModules, simModule) +} + +else { + appModule := modules[moduleName] + if simModule, ok := appModule.(AppModuleSimulation); ok { + simModules = append(simModules, simModule) +} + / cannot cast, so we continue +} + +} + +return NewSimulationManager(simModules...) +} + +/ Deprecated: Use GetProposalMsgs instead. +/ GetProposalContents returns each module's proposal content generator function +/ with their default operation weight and key. 
+func (sm *SimulationManager) + +GetProposalContents(simState SimulationState) []simulation.WeightedProposalContent { + wContents := make([]simulation.WeightedProposalContent, 0, len(sm.Modules)) + for _, module := range sm.Modules { + if module, ok := module.(HasProposalContents); ok { + wContents = append(wContents, module.ProposalContents(simState)...) +} + +} + +return wContents +} + +/ GetProposalMsgs returns each module's proposal msg generator function +/ with their default operation weight and key. +func (sm *SimulationManager) + +GetProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg { + wContents := make([]simulation.WeightedProposalMsg, 0, len(sm.Modules)) + for _, module := range sm.Modules { + if module, ok := module.(HasProposalMsgs); ok { + wContents = append(wContents, module.ProposalMsgs(simState)...) +} + +} + +return wContents +} + +/ RegisterStoreDecoders registers each of the modules' store decoders into a map +func (sm *SimulationManager) + +RegisterStoreDecoders() { + for _, module := range sm.Modules { + module.RegisterStoreDecoder(sm.StoreDecoders) +} +} + +/ GenerateGenesisStates generates a randomized GenesisState for each of the +/ registered modules +func (sm *SimulationManager) + +GenerateGenesisStates(simState *SimulationState) { + for _, module := range sm.Modules { + module.GenerateGenesisState(simState) +} +} + +/ WeightedOperations returns all the modules' weighted operations of an application +func (sm *SimulationManager) + +WeightedOperations(simState SimulationState) []simulation.WeightedOperation { + wOps := make([]simulation.WeightedOperation, 0, len(sm.Modules)) + for _, module := range sm.Modules { + wOps = append(wOps, module.WeightedOperations(simState)...) 
+} + +return wOps +} + +/ SimulationState is the input parameters used on each of the module's randomized +/ GenesisState generator function +type SimulationState struct { + AppParams simulation.AppParams + Cdc codec.JSONCodec / application codec + TxConfig client.TxConfig / Shared TxConfig; this is expensive to create and stateless, so create it once up front. + Rand *rand.Rand / random number + GenState map[string]json.RawMessage / genesis state + Accounts []simulation.Account / simulation accounts + InitialStake sdkmath.Int / initial coins per account + NumBonded int64 / number of initially bonded accounts + BondDenom string / denom to be used as default + GenTimestamp time.Time / genesis timestamp + UnbondTime time.Duration / staking unbond time stored to use it as the slashing maximum evidence duration + LegacyParamChange []simulation.LegacyParamChange / simulated parameter changes from modules + /nolint:staticcheck / legacy used for testing + LegacyProposalContents []simulation.WeightedProposalContent / proposal content generator functions with their default weight and app sim key + ProposalMsgs []simulation.WeightedProposalMsg / proposal msg generator functions with their default weight and app sim key +} +``` + +See an example from `x/auth` [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/auth/module.go#L169-L172). + +Once the module's genesis parameters are generated randomly (or with the key and +values defined in a `params` file), they are marshaled to JSON format and added +to the app genesis JSON for the simulation. + +## Random weighted operations + +Operations are one of the crucial parts of the Cosmos SDK simulation. They are the transactions +(`Msg`) that are simulated with random field values. The sender of the operation +is also assigned randomly. 
+ +Operations on the simulation are simulated using the full [transaction cycle](/docs/sdk/next/documentation/protocol-development/transactions) of an `ABCI` application that exposes the `BaseApp`. + +### Using Simsx + +Simsx introduces the ability to define a `MsgFactory` for each of a module's messages. + +These factories are registered in `WeightedOperationsX` and/or `ProposalMsgsX`. + +```go expandable +package distribution + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/testutil/simsx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/distribution/client/cli" + "github.com/cosmos/cosmos-sdk/x/distribution/exported" + "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + "github.com/cosmos/cosmos-sdk/x/distribution/simulation" + "github.com/cosmos/cosmos-sdk/x/distribution/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + staking "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ ConsensusVersion defines the current x/distribution module consensus version.
+const ConsensusVersion = 3 + +var ( + _ module.AppModuleBasic = AppModule{ +} + _ module.AppModuleSimulation = AppModule{ +} + _ module.HasGenesis = AppModule{ +} + _ module.HasServices = AppModule{ +} + + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasBeginBlocker = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the distribution module. +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the distribution module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the distribution module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the distribution +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the distribution module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(&data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the distribution module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the distribution module. 
+func (ab AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd(ab.cdc.InterfaceRegistry().SigningContext().ValidatorAddressCodec(), ab.cdc.InterfaceRegistry().SigningContext().AddressCodec()) +} + +/ RegisterInterfaces implements InterfaceModule +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the distribution module. +type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + bankKeeper types.BankKeeper + stakingKeeper types.StakingKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +/ NewAppModule creates a new AppModule object +func NewAppModule( + cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, stakingKeeper types.StakingKeeper, ss exported.Subspace, +) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: accountKeeper.AddressCodec() +}, + keeper: keeper, + accountKeeper: accountKeeper, + bankKeeper: bankKeeper, + stakingKeeper: stakingKeeper, + legacySubspace: ss, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. 
+func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQuerier(am.keeper)) + m := keeper.NewMigrator(am.keeper, am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} +} + +/ InitGenesis performs genesis initialization for the distribution module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.keeper.InitGenesis(ctx, genesisState) +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the distribution +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ BeginBlock returns the begin blocker for the distribution module. +func (am AppModule) + +BeginBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return BeginBlocker(c, am.keeper) +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the distribution module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +/ migrate to ProposalMsgsX. 
This method is ignored when ProposalMsgsX exists and will be removed in the future. +func (AppModule) + +ProposalMsgs(_ module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for distribution module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +/ migrate to WeightedOperationsX. This method is ignored when WeightedOperationsX exists and will be removed in the future +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accountKeeper, am.bankKeeper, am.keeper, am.stakingKeeper, + ) +} + +/ ProposalMsgsX registers governance proposal messages in the simulation registry. +func (AppModule) + +ProposalMsgsX(weights simsx.WeightSource, reg simsx.Registry) { + reg.Add(weights.Get("msg_update_params", 100), simulation.MsgUpdateParamsFactory()) +} + +/ WeightedOperationsX registers weighted distribution module operations for simulation. 
+func (am AppModule) + +WeightedOperationsX(weights simsx.WeightSource, reg simsx.Registry) { + reg.Add(weights.Get("msg_set_withdraw_address", 50), simulation.MsgSetWithdrawAddressFactory(am.keeper)) + +reg.Add(weights.Get("msg_withdraw_delegation_reward", 50), simulation.MsgWithdrawDelegatorRewardFactory(am.keeper, am.stakingKeeper)) + +reg.Add(weights.Get("msg_withdraw_validator_commission", 50), simulation.MsgWithdrawValidatorCommissionFactory(am.keeper, am.stakingKeeper)) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + StoreService store.KVStoreService + Cdc codec.Codec + + AccountKeeper types.AccountKeeper + BankKeeper types.BankKeeper + StakingKeeper types.StakingKeeper + ExternalPoolKeeper types.ExternalCommunityPoolKeeper `optional:"true"` + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + DistrKeeper keeper.Keeper + Module appmodule.AppModule + Hooks staking.StakingHooksWrapper +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + feeCollectorName := in.Config.FeeCollectorName + if feeCollectorName == "" { + feeCollectorName = authtypes.FeeCollectorName +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + +var opts []keeper.InitOption + if in.ExternalPoolKeeper != nil { + opts = append(opts, keeper.WithExternalCommunityPool(in.ExternalPoolKeeper)) +} + k := keeper.NewKeeper( + in.Cdc, + in.StoreService, + in.AccountKeeper, + in.BankKeeper, + in.StakingKeeper, + feeCollectorName, + authority.String(), + opts..., + ) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, 
in.BankKeeper, in.StakingKeeper, in.LegacySubspace) + +return ModuleOutputs{ + DistrKeeper: k, + Module: m, + Hooks: staking.StakingHooksWrapper{ + StakingHooks: k.Hooks() +}, +} +} +``` + +Note that the name passed to `weights.Get` must match the name of the operation set in `WeightedOperations`. + +For example, if the module contains an operation `op_weight_msg_set_withdraw_address`, the name passed to `weights.Get` should be `msg_set_withdraw_address`. + +See the `x/distribution` module for an example of implementing message factories [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/distribution/simulation/msg_factory.go). + +## App Simulator manager + +The next step is setting up the `SimulationManager` at the app level. This +is required by the simulation test files in the next step. + +```go +type CoolApp struct { +... +sm *module.SimulationManager +} +``` + +Within the constructor of the application, construct the simulation manager using the modules from `ModuleManager` and call the `RegisterStoreDecoders` method.
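Stripped of app wiring, the constructor logic amounts to: build the manager from the simulation-capable modules, then register every module's store decoder. A runnable sketch with stub types standing in for the SDK's (`bankModule` and the reduced registry here are hypothetical, kept only to show the control flow):

```go
package main

import "fmt"

// StoreDecoderRegistry is a reduced stand-in for the SDK's registry type.
type StoreDecoderRegistry map[string]func(kvA, kvB []byte) string

// AppModuleSimulation is reduced to the single method this sketch exercises.
type AppModuleSimulation interface {
	RegisterStoreDecoder(StoreDecoderRegistry)
}

type SimulationManager struct {
	Modules       []AppModuleSimulation
	StoreDecoders StoreDecoderRegistry
}

func NewSimulationManager(modules ...AppModuleSimulation) *SimulationManager {
	return &SimulationManager{Modules: modules, StoreDecoders: make(StoreDecoderRegistry)}
}

// RegisterStoreDecoders asks each module to register its decoder under its store key.
func (sm *SimulationManager) RegisterStoreDecoders() {
	for _, m := range sm.Modules {
		m.RegisterStoreDecoder(sm.StoreDecoders)
	}
}

// bankModule is a hypothetical module registering a trivial decoder.
type bankModule struct{}

func (bankModule) RegisterStoreDecoder(sdr StoreDecoderRegistry) {
	sdr["bank"] = func(kvA, kvB []byte) string { return fmt.Sprintf("%s|%s", kvA, kvB) }
}

func main() {
	sm := NewSimulationManager(bankModule{})
	sm.RegisterStoreDecoders()
	fmt.Println(len(sm.StoreDecoders))
}
```

In a real app the modules come from the `ModuleManager` and the decoders are the ones shown in the store decoders section above.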
```go expandable
//go:build app_v1

package simapp

import (
	"encoding/json"
	"fmt"
	"io"
	"maps"

	abci "github.com/cometbft/cometbft/abci/types"
	dbm "github.com/cosmos/cosmos-db"
	"github.com/cosmos/gogoproto/proto"
	"github.com/spf13/cast"

	autocliv1 "cosmossdk.io/api/cosmos/autocli/v1"
	reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1"
	"cosmossdk.io/client/v2/autocli"
	clienthelpers "cosmossdk.io/client/v2/helpers"
	"cosmossdk.io/core/appmodule"
	"cosmossdk.io/log"
	storetypes "cosmossdk.io/store/types"
	"cosmossdk.io/x/tx/signing"

	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/client/flags"
	"github.com/cosmos/cosmos-sdk/client/grpc/cmtservice"
	nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/address"
	"github.com/cosmos/cosmos-sdk/codec/types"
	"github.com/cosmos/cosmos-sdk/runtime"
	runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services"
	"github.com/cosmos/cosmos-sdk/server"
	"github.com/cosmos/cosmos-sdk/server/api"
	"github.com/cosmos/cosmos-sdk/server/config"
	servertypes "github.com/cosmos/cosmos-sdk/server/types"
	"github.com/cosmos/cosmos-sdk/std"
	testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/module"
	sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing"
	"github.com/cosmos/cosmos-sdk/version"
	"github.com/cosmos/cosmos-sdk/x/auth"
	"github.com/cosmos/cosmos-sdk/x/auth/ante"
	authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec"
	authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
	"github.com/cosmos/cosmos-sdk/x/auth/posthandler"
	authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation"
	"github.com/cosmos/cosmos-sdk/x/auth/tx"
	authtx "github.com/cosmos/cosmos-sdk/x/auth/tx"
	txmodule
"github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + 
"github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + authzkeeper.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + 
authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + 
app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. 
+ / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependent module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. 
+ app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, 
error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
func BlockedAddresses() map[string]bool {
	modAccAddrs := make(map[string]bool)
	for acc := range GetMaccPerms() {
		modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
	}

	// allow the following addresses to receive funds
	delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())

	return modAccAddrs
}
```

Note that you may override some modules. This is useful when a module's configuration in the `ModuleManager` should differ from its configuration in the `SimulationManager`.

Finally, the application should expose the `SimulationManager` via the following method defined in the `Runtime` interface:

```go
// SimulationManager implements the SimulationApp interface
func (app *SimApp) SimulationManager() *module.SimulationManager {
	return app.sm
}
```

## Running Simulations

To run simulations, use the `simsx` runner. Call the following function from the `simsx` package to begin simulating with the default set of seeds:

```go expandable
package simsx

import (
	"encoding/json"
	"fmt"
	"io"
	"math"
	"os"
	"path/filepath"
	"strings"
	"testing"

	dbm "github.com/cosmos/cosmos-db"
	"github.com/stretchr/testify/require"

	"cosmossdk.io/log"

	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/client/flags"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/runtime"
	servertypes "github.com/cosmos/cosmos-sdk/server/types"
	simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/module"
	simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
	"github.com/cosmos/cosmos-sdk/x/simulation"
	"github.com/cosmos/cosmos-sdk/x/simulation/client/cli"
)

const SimAppChainID = "simulation-app"

// this list of seeds was imported from the original simulation runner: https://github.com/cosmos/tools/blob/v1.0.0/cmd/runsim/main.go#L32
var defaultSeeds
= []int64{
	1, 2, 4, 7,
	32, 123, 124, 582, 1893, 2989,
	3012, 4728, 37827, 981928, 87821, 891823782,
	989182, 89182391, 11, 22, 44, 77, 99, 2020,
	3232, 123123, 124124, 582582, 18931893,
	29892989, 30123012, 47284728, 7601778, 8090485,
	977367484, 491163361, 424254581, 673398983,
}

// SimStateFactory is a factory type that provides a convenient way to create a simulation state for testing.
// It contains the following fields:
// - Codec: a codec used for serializing other objects
// - AppStateFn: a function that returns the app state JSON bytes and the genesis accounts
// - BlockedAddr: a map of blocked addresses
// - AccountSource: an interface for retrieving accounts
// - BalanceSource: an interface for retrieving balance-related information
type SimStateFactory struct {
	Codec         codec.Codec
	AppStateFn    simtypes.AppStateFn
	BlockedAddr   map[string]bool
	AccountSource AccountSourceX
	BalanceSource BalanceSource
}

// SimulationApp abstract app that is used by sims
type SimulationApp interface {
	runtime.AppI
	SetNotSigverifyTx()
	GetBaseApp() *baseapp.BaseApp
	TxConfig() client.TxConfig
	Close() error
}

// Run is a helper function that runs a simulation test with the given parameters.
// It calls the RunWithSeeds function with the default seeds and parameters.
//
// This is the entrypoint to run simulation tests that used to run with the runsim binary.
func Run[T SimulationApp](
	t *testing.T,
	appFactory func(
		logger log.Logger,
		db dbm.DB,
		traceStore io.Writer,
		loadLatest bool,
		appOpts servertypes.AppOptions,
		baseAppOptions ...func(*baseapp.BaseApp),
	) T,
	setupStateFactory func(app T) SimStateFactory,
	postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account),
) {
	t.Helper()
	RunWithSeeds(t, appFactory, setupStateFactory, defaultSeeds, nil, postRunActions...)
+} + +/ RunWithSeeds is a helper function that runs a simulation test with the given parameters. +/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for each seed. +/ The execution is deterministic and can be used for fuzz tests as well. +/ +/ The system under test is isolated for each run but unlike the old runsim command, there is no Process separation. +/ This means, global caches may be reused for example. This implementation build upon the vanilla Go stdlib test framework. +func RunWithSeeds[T SimulationApp](/docs/sdk/next/documentation/operations/ + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeedsAndRandAcc(t, appFactory, setupStateFactory, seeds, fuzzSeed, simtypes.RandomAccounts, postRunActions...) 
+} + +/ RunWithSeedsAndRandAcc calls RunWithSeeds with randAccFn +func RunWithSeedsAndRandAcc[T SimulationApp](/docs/sdk/next/documentation/operations/ + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + if deprecatedParams := cli.GetDeprecatedFlagUsed(); len(deprecatedParams) != 0 { + fmt.Printf("Warning: Deprecated flag are used: %s", strings.Join(deprecatedParams, ",")) +} + cfg := cli.NewConfigFromFlags() + +cfg.ChainID = SimAppChainID + for i := range seeds { + seed := seeds[i] + t.Run(fmt.Sprintf("seed: %d", seed), func(t *testing.T) { + t.Parallel() + +RunWithSeed(t, cfg, appFactory, setupStateFactory, seed, fuzzSeed, postRunActions...) +}) +} +} + +/ RunWithSeed is a helper function that runs a simulation test with the given parameters. +/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for the seed. +/ The execution is deterministic and can be used for fuzz tests as well. 
func RunWithSeed[T SimulationApp](
	tb testing.TB,
	cfg simtypes.Config,
	appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) T,
	setupStateFactory func(app T) SimStateFactory,
	seed int64,
	fuzzSeed []byte,
	postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account),
) {
	tb.Helper()
	RunWithSeedAndRandAcc(tb, cfg, appFactory, setupStateFactory, seed, fuzzSeed, simtypes.RandomAccounts, postRunActions...)
}

// RunWithSeedAndRandAcc calls RunWithSeed with randAccFn
func RunWithSeedAndRandAcc[T SimulationApp](
	tb testing.TB,
	cfg simtypes.Config,
	appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) T,
	setupStateFactory func(app T) SimStateFactory,
	seed int64,
	fuzzSeed []byte,
	randAccFn simtypes.RandomAccountFn,
	postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account),
) {
	tb.Helper()
	// setup environment
	tCfg := cfg.With(tb, seed, fuzzSeed)
	testInstance := NewSimulationAppInstance(tb, tCfg, appFactory)
	var runLogger log.Logger
	if cli.FlagVerboseValue {
		runLogger = log.NewTestLogger(tb)
	} else {
		runLogger = log.NewTestLoggerInfo(tb)
	}
	runLogger = runLogger.With("seed", tCfg.Seed)
	app := testInstance.App
	stateFactory := setupStateFactory(app)
	ops, reporter := prepareWeightedOps(app.SimulationManager(), stateFactory, tCfg, testInstance.App.TxConfig(), runLogger)
	simParams, accs, err := simulation.SimulateFromSeedX(
		tb,
		runLogger,
		WriteToDebugLog(runLogger),
		app.GetBaseApp(),
		stateFactory.AppStateFn,
		randAccFn,
		ops,
		stateFactory.BlockedAddr,
		tCfg,
		stateFactory.Codec,
		testInstance.ExecLogWriter,
	)
	require.NoError(tb, err)
	err =
simtestutil.CheckExportSimulation(app, tCfg, simParams)
	require.NoError(tb, err)
	if tCfg.Commit {
		simtestutil.PrintStats(testInstance.DB)
	}
	// not using tb.Log to always print the summary
	fmt.Printf("+++ DONE (seed: %d): \n%s\n", seed, reporter.Summary().String())
	for _, step := range postRunActions {
		step(tb, testInstance, accs)
	}
	require.NoError(tb, app.Close())
}

type (
	HasWeightedOperationsX interface {
		WeightedOperationsX(weight WeightSource, reg Registry)
	}

	HasWeightedOperationsXWithProposals interface {
		WeightedOperationsX(weights WeightSource, reg Registry, proposals WeightedProposalMsgIter,
			legacyProposals []simtypes.WeightedProposalContent) //nolint:staticcheck // used for legacy proposal types
	}

	HasProposalMsgsX interface {
		ProposalMsgsX(weights WeightSource, reg Registry)
	}
)

type (
	HasLegacyWeightedOperations interface {
		// WeightedOperations simulation operations (i.e. msgs) with their respective weight
		WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation
	}

	// HasLegacyProposalMsgs defines the messages that can be used to simulate governance (v1) proposals
	// Deprecated: replaced by HasProposalMsgsX
	HasLegacyProposalMsgs interface {
		// ProposalMsgs msg functions used to simulate governance proposals
		ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg
	}

	// HasLegacyProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) proposals
	// Deprecated: replaced by HasProposalMsgsX
	HasLegacyProposalContents interface {
		// ProposalContents content functions used to simulate governance proposals
		ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent //nolint:staticcheck // legacy v1beta1 governance
	}
)

// TestInstance is a generic type that represents an instance of a SimulationApp used for testing simulations.
// It contains the following fields:
// - App: The instance of the SimulationApp under test.
// - DB: The LevelDB database for the simulation app.
// - WorkDir: The temporary working directory for the simulation app.
// - Cfg: The configuration flags for the simulator.
// - AppLogger: The logger used for logging in the app during the simulation, with seed value attached.
// - ExecLogWriter: Captures block and operation data coming from the simulation
type TestInstance[T SimulationApp] struct {
	App           T
	DB            dbm.DB
	WorkDir       string
	Cfg           simtypes.Config
	AppLogger     log.Logger
	ExecLogWriter simulation.LogWriter
}

// included to avoid cyclic dependency in testutils/sims
func prepareWeightedOps(
	sm *module.SimulationManager,
	stateFact SimStateFactory,
	config simtypes.Config,
	txConfig client.TxConfig,
	logger log.Logger,
) (simulation.WeightedOperations, *BasicSimulationReporter) {
	cdc := stateFact.Codec
	simState := module.SimulationState{
		AppParams: make(simtypes.AppParams),
		Cdc:       cdc,
		TxConfig:  txConfig,
		BondDenom: sdk.DefaultBondDenom,
	}
	if config.ParamsFile != "" {
		bz, err := os.ReadFile(config.ParamsFile)
		if err != nil {
			panic(err)
		}
		err = json.Unmarshal(bz, &simState.AppParams)
		if err != nil {
			panic(err)
		}
	}
	weights := ParamWeightSource(simState.AppParams)
	reporter := NewBasicSimulationReporter()
	pReg := make(UniqueTypeRegistry)
	wContent := make([]simtypes.WeightedProposalContent, 0) //nolint:staticcheck // required for legacy type
	legacyPReg := NewWeightedFactoryMethods()
	// add gov proposals types
	for _, m := range sm.Modules {
		switch xm := m.(type) {
		case HasProposalMsgsX:
			xm.ProposalMsgsX(weights, pReg)
		case HasLegacyProposalMsgs:
			for _, p := range xm.ProposalMsgs(simState) {
				weight := weights.Get(p.AppParamsKey(), safeUint(p.DefaultWeight()))
				legacyPReg.Add(weight, legacyToMsgFactoryAdapter(p.MsgSimulatorFn()))
			}
		case HasLegacyProposalContents:
			wContent = append(wContent, xm.ProposalContents(simState)...)
		}
	}
	oReg := NewSimsMsgRegistryAdapter(
		reporter,
		stateFact.AccountSource,
		stateFact.BalanceSource,
		txConfig,
		logger,
	)
	wOps := make([]simtypes.WeightedOperation, 0, len(sm.Modules))
	for _, m := range sm.Modules {
		// add operations
		switch xm := m.(type) {
		case HasWeightedOperationsX:
			xm.WeightedOperationsX(weights, oReg)
		case HasWeightedOperationsXWithProposals:
			xm.WeightedOperationsX(weights, oReg, AppendIterators(legacyPReg.Iterator(), pReg.Iterator()), wContent)
		case HasLegacyWeightedOperations:
			wOps = append(wOps, xm.WeightedOperations(simState)...)
		}
	}

	return append(wOps, Collect(oReg.items, func(a weightedOperation) simtypes.WeightedOperation {
		return a
	})...), reporter
}

func safeUint(p int) uint32 {
	if p < 0 || p > math.MaxUint32 {
		panic(fmt.Sprintf("can not cast to uint32: %d", p))
	}

	return uint32(p)
}

// NewSimulationAppInstance initializes and returns a TestInstance of a SimulationApp.
// The function takes a testing.TB instance, a simtypes.Config instance, and an appFactory function as parameters.
// It creates a temporary working directory and a LevelDB database for the simulation app.
// The function then initializes a logger based on the verbosity flag and attaches the configuration's seed to the logger.
// The database is closed and cleaned up on test completion.
func NewSimulationAppInstance[T SimulationApp](
	tb testing.TB,
	tCfg simtypes.Config,
	appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) T,
) TestInstance[T] {
	tb.Helper()
	workDir := tb.TempDir()
	require.NoError(tb, os.Mkdir(filepath.Join(workDir, "data"), 0o750))
	dbDir := filepath.Join(workDir, "leveldb-app-sim")
	var logger log.Logger
	if cli.FlagVerboseValue {
		logger = log.NewTestLogger(tb)
	} else {
		logger = log.NewTestLoggerError(tb)
	}
	logger = logger.With("seed", tCfg.Seed)

	db, err := dbm.NewDB("Simulation", dbm.BackendType(tCfg.DBBackend), dbDir)
	require.NoError(tb, err)
	tb.Cleanup(func() {
		_ = db.Close() // ensure db is closed
	})
	appOptions := make(simtestutil.AppOptionsMap)
	appOptions[flags.FlagHome] = workDir
	opts := []func(*baseapp.BaseApp){
		baseapp.SetChainID(tCfg.ChainID),
	}
	if tCfg.FauxMerkle {
		opts = append(opts, FauxMerkleModeOpt)
	}
	app := appFactory(logger, db, nil, true, appOptions, opts...)
	if !cli.FlagSigverifyTxValue {
		app.SetNotSigverifyTx()
	}

	return TestInstance[T]{
		App:       app,
		DB:        db,
		WorkDir:   workDir,
		Cfg:       tCfg,
		AppLogger: logger,
		ExecLogWriter: &simulation.StandardLogWriter{
			Seed: tCfg.Seed,
		},
	}
}

var _ io.Writer = writerFn(nil)

type writerFn func(p []byte) (n int, err error)

func (w writerFn) Write(p []byte) (n int, err error) {
	return w(p)
}

// WriteToDebugLog is an adapter to io.Writer interface
func WriteToDebugLog(logger log.Logger) io.Writer {
	return writerFn(func(p []byte) (n int, err error) {
		logger.Debug(string(p))

		return len(p), nil
	})
}

// FauxMerkleModeOpt returns a BaseApp option to use a dbStoreAdapter instead of
// an IAVLStore for faster simulation speed.
func FauxMerkleModeOpt(bapp *baseapp.BaseApp) {
	bapp.SetFauxMerkleMode()
}
```

If a custom seed is desired, tests should use `RunWithSeed` (or `RunWithSeeds` with a custom seed list), both of which are part of the same `simsx` runner shown above.

These functions should be called from test files (e.g., `app_test.go` or `app_sim_test.go`).

Example:

```go expandable
//go:build sims

package simapp

import (
	"encoding/binary"
	"encoding/json"
	"flag"
	"io"
	"math/rand"
	"strings"
	"sync"
	"testing"

	abci "github.com/cometbft/cometbft/abci/types"
	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
	dbm "github.com/cosmos/cosmos-db"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"cosmossdk.io/log"
	"cosmossdk.io/store"
	storetypes "cosmossdk.io/store/types"

	"github.com/cosmos/cosmos-sdk/baseapp"
	servertypes "github.com/cosmos/cosmos-sdk/server/types"
	simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims"
	sims "github.com/cosmos/cosmos-sdk/testutil/simsx"
	sdk "github.com/cosmos/cosmos-sdk/types"
	simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
	authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper"
	"github.com/cosmos/cosmos-sdk/x/feegrant"
	"github.com/cosmos/cosmos-sdk/x/simulation"
	simcli "github.com/cosmos/cosmos-sdk/x/simulation/client/cli"
	slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

var FlagEnableStreamingValue bool

// Get flags every time the simulator is run
func init() {
	simcli.GetSimulatorFlags()
	flag.BoolVar(&FlagEnableStreamingValue, "EnableStreaming", false, "Enable streaming service")
}

// interBlockCacheOpt returns a BaseApp option function that sets the persistent
// inter-block write-through cache.
func interBlockCacheOpt() func(*baseapp.BaseApp) {
	return baseapp.SetInterBlockCache(store.NewCommitKVStoreCacheManager())
}

func TestFullAppSimulation(t *testing.T) {
	sims.Run(t, NewSimApp, setupStateFactory)
}

func setupStateFactory(app *SimApp) sims.SimStateFactory {
	return sims.SimStateFactory{
		Codec:         app.AppCodec(),
		AppStateFn:    simtestutil.AppStateFn(app.AppCodec(), app.SimulationManager(), app.DefaultGenesis()),
		BlockedAddr:   BlockedAddresses(),
		AccountSource: app.AccountKeeper,
		BalanceSource: app.BankKeeper,
	}
}

var (
	exportAllModules       = []string{}
	exportWithValidatorSet = []string{}
)

func TestAppImportExport(t *testing.T) {
	sims.Run(t, NewSimApp, setupStateFactory, func(tb testing.TB, ti sims.TestInstance[*SimApp], accs []simtypes.Account) {
		tb.Helper()
		app := ti.App
		tb.Log("exporting genesis...\n")
		exported, err := app.ExportAppStateAndValidators(false, exportWithValidatorSet, exportAllModules)
		require.NoError(tb, err)

		tb.Log("importing genesis...\n")
		newTestInstance := sims.NewSimulationAppInstance(tb, ti.Cfg, NewSimApp)
		newApp := newTestInstance.App
		var genesisState GenesisState
		require.NoError(tb, json.Unmarshal(exported.AppState, &genesisState))
		ctxB := newApp.NewContextLegacy(true, cmtproto.Header{Height: app.LastBlockHeight()})
		_, err = newApp.ModuleManager.InitGenesis(ctxB, newApp.appCodec, genesisState)
		if IsEmptyValidatorSetErr(err) {
			tb.Skip("Skipping simulation as all validators have been unbonded")
			return
		}
		require.NoError(tb, err)
		err = newApp.StoreConsensusParams(ctxB, exported.ConsensusParams)
		require.NoError(tb, err)

		tb.Log("comparing stores...")
		// skip certain prefixes
		skipPrefixes := map[string][][]byte{
			stakingtypes.StoreKey: {
				stakingtypes.UnbondingQueueKey, stakingtypes.RedelegationQueueKey, stakingtypes.ValidatorQueueKey,
				stakingtypes.HistoricalInfoKey, stakingtypes.UnbondingIDKey, stakingtypes.UnbondingIndexKey,
				stakingtypes.UnbondingTypeKey,
				stakingtypes.ValidatorUpdatesKey, // todo (Alex): double check why there is a diff with test-sim-import-export
			},
			authzkeeper.StoreKey:   {authzkeeper.GrantQueuePrefix},
			feegrant.StoreKey:      {feegrant.FeeAllowanceQueueKeyPrefix},
			slashingtypes.StoreKey: {slashingtypes.ValidatorMissedBlockBitmapKeyPrefix},
		}
		AssertEqualStores(tb, app, newApp, app.SimulationManager().StoreDecoders, skipPrefixes)
	})
}

// Scenario:
//
// Start a fresh node and run n blocks, export state
// set up a new node instance, Init chain from exported genesis
// run new instance for n blocks
func TestAppSimulationAfterImport(t *testing.T) {
	sims.Run(t, NewSimApp, setupStateFactory, func(tb testing.TB, ti sims.TestInstance[*SimApp], accs []simtypes.Account) {
		tb.Helper()
		app := ti.App
		tb.Log("exporting genesis...\n")
		exported, err := app.ExportAppStateAndValidators(false, exportWithValidatorSet, exportAllModules)
		require.NoError(tb, err)

		tb.Log("importing genesis...\n")
		newTestInstance := sims.NewSimulationAppInstance(tb, ti.Cfg, NewSimApp)
		newApp := newTestInstance.App
		_, err = newApp.InitChain(&abci.RequestInitChain{
			AppStateBytes: exported.AppState,
			ChainId:       sims.SimAppChainID,
		})
		if IsEmptyValidatorSetErr(err) {
			tb.Skip("Skipping simulation as all validators have been unbonded")
			return
		}
		require.NoError(tb, err)
		newStateFactory := setupStateFactory(newApp)
		_, _, err = simulation.SimulateFromSeedX(
			tb,
			newTestInstance.AppLogger,
			sims.WriteToDebugLog(newTestInstance.AppLogger),
			newApp.BaseApp,
			newStateFactory.AppStateFn,
			simtypes.RandomAccounts,
			simtestutil.BuildSimulationOperations(newApp, newApp.AppCodec(), newTestInstance.Cfg, newApp.TxConfig()),
			newStateFactory.BlockedAddr,
			newTestInstance.Cfg,
			newStateFactory.Codec,
			ti.ExecLogWriter,
		)
		require.NoError(tb, err)
	})
}

func IsEmptyValidatorSetErr(err error) bool {
	return err != nil && strings.Contains(err.Error(), "validator set is empty after InitGenesis")
}

func TestAppStateDeterminism(t *testing.T) {
	const numTimesToRunPerSeed = 3
	var seeds []int64
	if s := simcli.NewConfigFromFlags().Seed; s != simcli.DefaultSeedValue {
		// We will be overriding the random seed and just run a single simulation on the provided seed value
		for j := 0; j < numTimesToRunPerSeed; j++ { // multiple rounds
			seeds = append(seeds, s)
		}
	} else {
		// setup with 3 random seeds
		for i := 0; i < 3; i++ {
			seed := rand.Int63()
			for j := 0; j < numTimesToRunPerSeed; j++ { // multiple rounds
				seeds = append(seeds, seed)
			}
		}
	}
	// overwrite default app config
	interBlockCachingAppFactory := func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) *SimApp {
		if FlagEnableStreamingValue {
			m := map[string]any{
				"streaming.abci.keys":             []string{"*"},
				"streaming.abci.plugin":           "abci_v1",
				"streaming.abci.stop-node-on-err": true,
			}
			others := appOpts
			appOpts = appOptionsFn(func(k string) any {
				if v, ok := m[k]; ok {
					return v
				}
				return others.Get(k)
			})
		}
		return NewSimApp(logger, db, nil, true, appOpts, append(baseAppOptions, interBlockCacheOpt())...)
	}

	var mx sync.Mutex
	appHashResults := make(map[int64][][]byte)
	appSimLogger := make(map[int64][]simulation.LogWriter)
	captureAndCheckHash := func(tb testing.TB, ti sims.TestInstance[*SimApp], _ []simtypes.Account) {
		tb.Helper()
		seed, appHash := ti.Cfg.Seed, ti.App.LastCommitID().Hash
		mx.Lock()
		otherHashes, execWriters := appHashResults[seed], appSimLogger[seed]
		if len(otherHashes) < numTimesToRunPerSeed-1 {
			appHashResults[seed], appSimLogger[seed] = append(otherHashes, appHash), append(execWriters, ti.ExecLogWriter)
		} else { // cleanup
			delete(appHashResults, seed)
			delete(appSimLogger, seed)
		}
		mx.Unlock()

		var failNow bool
		// and check that all app hashes per seed are equal for each iteration
		for i := 0; i < len(otherHashes); i++ {
			if !assert.Equal(tb, otherHashes[i], appHash) {
				execWriters[i].PrintLogs()
				failNow = true
			}
		}
		if failNow {
			ti.ExecLogWriter.PrintLogs()
			tb.Fatalf("non-determinism in seed %d", seed)
		}
	}
	// run simulations
	sims.RunWithSeeds(t, interBlockCachingAppFactory, setupStateFactory, seeds, []byte{}, captureAndCheckHash)
}

type ComparableStoreApp interface {
	LastBlockHeight() int64
	NewContextLegacy(isCheckTx bool, header cmtproto.Header) sdk.Context
	GetKey(storeKey string) *storetypes.KVStoreKey
	GetStoreKeys() []storetypes.StoreKey
}

func AssertEqualStores(
	tb testing.TB,
	app, newApp ComparableStoreApp,
	storeDecoders simtypes.StoreDecoderRegistry,
	skipPrefixes map[string][][]byte,
) {
	tb.Helper()
	ctxA := app.NewContextLegacy(true, cmtproto.Header{Height: app.LastBlockHeight()})
	ctxB := newApp.NewContextLegacy(true, cmtproto.Header{Height: app.LastBlockHeight()})
	storeKeys := app.GetStoreKeys()
	require.NotEmpty(tb, storeKeys)
	for _, appKeyA := range storeKeys {
		// only compare kvstores
		if _, ok := appKeyA.(*storetypes.KVStoreKey); !ok {
			continue
		}
		keyName := appKeyA.Name()
		appKeyB := newApp.GetKey(keyName)
		storeA =
ctxA.KVStore(appKeyA) + storeB := ctxB.KVStore(appKeyB) + +failedKVAs, failedKVBs := simtestutil.DiffKVStores(storeA, storeB, skipPrefixes[keyName]) + +require.Equal(tb, len(failedKVAs), len(failedKVBs), "unequal sets of key-values to compare %s, key stores %s and %s", keyName, appKeyA, appKeyB) + +tb.Logf("compared %d different key/value pairs between %s and %s\n", len(failedKVAs), appKeyA, appKeyB) + if !assert.Equal(tb, 0, len(failedKVAs), simtestutil.GetSimulationLog(keyName, storeDecoders, failedKVAs, failedKVBs)) { + for _, v := range failedKVAs { + tb.Logf("store mismatch: %q\n", v) +} + +tb.FailNow() +} + +} +} + +/ appOptionsFn is an adapter to the single method AppOptions interface +type appOptionsFn func(string) + +any + +func (f appOptionsFn) + +Get(k string) + +any { + return f(k) +} + +/ FauxMerkleModeOpt returns a BaseApp option to use a dbStoreAdapter instead of +/ an IAVLStore for faster simulation speed. +func FauxMerkleModeOpt(bapp *baseapp.BaseApp) { + bapp.SetFauxMerkleMode() +} + +func FuzzFullAppSimulation(f *testing.F) { + f.Fuzz(func(t *testing.T, rawSeed []byte) { + if len(rawSeed) < 8 { + t.Skip() + +return +} + +sims.RunWithSeeds( + t, + NewSimApp, + setupStateFactory, + []int64{ + int64(binary.BigEndian.Uint64(rawSeed)) +}, + rawSeed[8:], + ) +}) +} +``` diff --git a/docs/sdk/next/documentation/operations/testing.mdx b/docs/sdk/next/documentation/operations/testing.mdx new file mode 100644 index 00000000..b22ee454 --- /dev/null +++ b/docs/sdk/next/documentation/operations/testing.mdx @@ -0,0 +1,2922 @@ +--- +title: Testing +--- + +The Cosmos SDK contains different types of [tests](https://martinfowler.com/articles/practical-test-pyramid.html). +These tests have different goals and are used at different stages of the development cycle. +We advise, as a general rule, to use tests at all stages of the development cycle. +It is advised, as a chain developer, to test your application and modules in a similar way to the SDK. 
The rationale behind testing can be found in [ADR-59](https://docs.cosmos.network/main/build/architecture/adr-059-test-scopes).

## Unit Tests

Unit tests are the lowest level of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
All packages and modules should have unit test coverage. Modules should have their dependencies mocked: this means mocking keepers.

The SDK uses `mockgen` to generate mocks for keepers:

```shell expandable
#!/usr/bin/env bash

mockgen_cmd="mockgen"
$mockgen_cmd -source=baseapp/abci_utils.go -package mock -destination baseapp/testutil/mock/mocks.go
$mockgen_cmd -source=client/account_retriever.go -package mock -destination testutil/mock/account_retriever.go
$mockgen_cmd -package mock -destination store/mock/cosmos_cosmos_db_DB.go github.com/cosmos/cosmos-db DB
$mockgen_cmd -source=types/module/module.go -package mock -destination testutil/mock/types_module_module.go
$mockgen_cmd -source=types/module/mock_appmodule_test.go -package mock -destination testutil/mock/types_mock_appmodule.go
$mockgen_cmd -source=types/invariant.go -package mock -destination testutil/mock/types_invariant.go
$mockgen_cmd -package mock -destination testutil/mock/grpc_server.go github.com/cosmos/gogoproto/grpc Server
$mockgen_cmd -package mock -destination testutil/mock/logger.go cosmossdk.io/log Logger
$mockgen_cmd -source=x/nft/expected_keepers.go -package testutil -destination x/nft/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/feegrant/expected_keepers.go -package testutil -destination x/feegrant/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/mint/types/expected_keepers.go -package testutil -destination x/mint/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/params/proposal_handler_test.go -package testutil -destination x/params/testutil/staking_keeper_mock.go
$mockgen_cmd -source=x/auth/tx/config/expected_keepers.go -package testutil -destination x/auth/tx/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/auth/types/expected_keepers.go -package testutil -destination x/auth/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/auth/ante/expected_keepers.go -package testutil -destination x/auth/ante/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/authz/expected_keepers.go -package testutil -destination x/authz/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/bank/types/expected_keepers.go -package testutil -destination x/bank/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/group/testutil/expected_keepers.go -package testutil -destination x/group/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/evidence/types/expected_keepers.go -package testutil -destination x/evidence/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/distribution/types/expected_keepers.go -package testutil -destination x/distribution/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/slashing/types/expected_keepers.go -package testutil -destination x/slashing/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/genutil/types/expected_keepers.go -package testutil -destination x/genutil/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/gov/testutil/expected_keepers.go -package testutil -destination x/gov/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/staking/types/expected_keepers.go -package testutil -destination x/staking/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/auth/vesting/types/expected_keepers.go -package testutil -destination x/auth/vesting/testutil/expected_keepers_mocks.go
$mockgen_cmd -source=x/protocolpool/types/expected_keepers.go -package testutil -destination x/protocolpool/testutil/expected_keepers_mocks.go
```

You can read more about `mockgen` [here](https://go.uber.org/mock).
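To make the idea behind these generated mocks concrete, here is a minimal, self-contained sketch (not SDK code; the `BankKeeper` interface, `mockBankKeeper`, and `hasEnough` are hypothetical names invented for illustration). The code under test depends only on an interface, and the test substitutes a canned, hand-rolled implementation, which is exactly the role a `mockgen`-generated mock plays at larger scale.

```go
package main

import "fmt"

// BankKeeper is a hypothetical expected-keeper interface, as a module would
// declare it in expected_keepers.go.
type BankKeeper interface {
	GetBalance(addr, denom string) int64
}

// mockBankKeeper returns canned balances and records calls, mimicking what a
// mockgen-generated mock does via EXPECT().…().Return(…).
type mockBankKeeper struct {
	balances map[string]int64
	calls    int
}

func (m *mockBankKeeper) GetBalance(addr, denom string) int64 {
	m.calls++
	return m.balances[addr+"/"+denom]
}

// hasEnough stands in for module logic under test: it only sees the
// interface, never a real bank module.
func hasEnough(bk BankKeeper, addr, denom string, amount int64) bool {
	return bk.GetBalance(addr, denom) >= amount
}

func main() {
	mock := &mockBankKeeper{balances: map[string]int64{"alice/stake": 50}}
	fmt.Println(hasEnough(mock, "alice", "stake", 30))  // true
	fmt.Println(hasEnough(mock, "alice", "stake", 100)) // false
	fmt.Println(mock.calls)                             // 2
}
```

The `x/gov` example below follows the same pattern, with `gomock` controllers wiring up the canned return values.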
### Example

As an example, we will walk through the [keeper tests](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/keeper/keeper_test.go) of the `x/gov` module.

The `x/gov` module has a `Keeper` type, which requires a few external dependencies (i.e. imports outside `x/gov`) to work properly.

```go expandable
package keeper

import (
	"context"
	"fmt"
	"time"

	"cosmossdk.io/collections"
	corestoretypes "cosmossdk.io/core/store"
	"cosmossdk.io/log"

	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/x/gov/types"
	v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1"
	"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1"
)

// Keeper defines the governance module Keeper
type Keeper struct {
	authKeeper  types.AccountKeeper
	bankKeeper  types.BankKeeper
	distrKeeper types.DistributionKeeper

	// The reference to the DelegationSet and ValidatorSet to get information about validators and delegators
	sk types.StakingKeeper

	// GovHooks
	hooks types.GovHooks

	// The (unexposed) keys used to access the stores from the Context.
	storeService corestoretypes.KVStoreService

	// The codec for binary encoding/decoding.
	cdc codec.Codec

	// Legacy Proposal router
	legacyRouter v1beta1.Router

	// Msg server router
	router baseapp.MessageRouter

	config types.Config

	calculateVoteResultsAndVotingPowerFn CalculateVoteResultsAndVotingPowerFn

	// the address capable of executing a MsgUpdateParams message. Typically, this
	// should be the x/gov module account.
	authority string

	Schema                 collections.Schema
	Constitution           collections.Item[string]
	Params                 collections.Item[v1.Params]
	Deposits               collections.Map[collections.Pair[uint64, sdk.AccAddress], v1.Deposit]
	Votes                  collections.Map[collections.Pair[uint64, sdk.AccAddress], v1.Vote]
	ProposalID             collections.Sequence
	Proposals              collections.Map[uint64, v1.Proposal]
	ActiveProposalsQueue   collections.Map[collections.Pair[time.Time, uint64], uint64] // TODO(tip): this should be simplified and go into an index.
	InactiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] // TODO(tip): this should be simplified and go into an index.
	VotingPeriodProposals  collections.Map[uint64, []byte] // TODO(tip): this could be a keyset or index.
}

type InitOption func(*Keeper)

// WithCustomCalculateVoteResultsAndVotingPowerFn is an optional input to set a custom CalculateVoteResultsAndVotingPowerFn.
// If this function is not provided, the default function is used.
func WithCustomCalculateVoteResultsAndVotingPowerFn(calculateVoteResultsAndVotingPowerFn CalculateVoteResultsAndVotingPowerFn) InitOption {
	return func(k *Keeper) {
		if calculateVoteResultsAndVotingPowerFn == nil {
			panic("calculateVoteResultsAndVotingPowerFn cannot be nil")
		}
		k.calculateVoteResultsAndVotingPowerFn = calculateVoteResultsAndVotingPowerFn
	}
}

// GetAuthority returns the x/gov module's authority.
func (k Keeper) GetAuthority() string {
	return k.authority
}

// NewKeeper returns a governance keeper. It handles:
// - submitting governance proposals
// - depositing funds into proposals, and activating upon sufficient funds being deposited
// - users voting on proposals, with weight proportional to stake in the system
// - and tallying the result of the vote.
//
// CONTRACT: the parameter Subspace must have the param key table already initialized
func NewKeeper(
	cdc codec.Codec, storeService corestoretypes.KVStoreService, authKeeper types.AccountKeeper,
	bankKeeper types.BankKeeper, sk types.StakingKeeper, distrKeeper types.DistributionKeeper,
	router baseapp.MessageRouter, config types.Config, authority string, initOptions ...InitOption,
) *Keeper {
	// ensure governance module account is set
	if addr := authKeeper.GetModuleAddress(types.ModuleName); addr == nil {
		panic(fmt.Sprintf("%s module account has not been set", types.ModuleName))
	}
	if _, err := authKeeper.AddressCodec().StringToBytes(authority); err != nil {
		panic(fmt.Sprintf("invalid authority address: %s", authority))
	}

	// If MaxMetadataLen not set by app developer, set to default value.
	if config.MaxMetadataLen == 0 {
		config.MaxMetadataLen = types.DefaultConfig().MaxMetadataLen
	}
	sb := collections.NewSchemaBuilder(storeService)
	k := &Keeper{
		storeService:                         storeService,
		authKeeper:                           authKeeper,
		bankKeeper:                           bankKeeper,
		distrKeeper:                          distrKeeper,
		sk:                                   sk,
		cdc:                                  cdc,
		router:                               router,
		config:                               config,
		calculateVoteResultsAndVotingPowerFn: defaultCalculateVoteResultsAndVotingPower,
		authority:                            authority,
		Constitution:                         collections.NewItem(sb, types.ConstitutionKey, "constitution", collections.StringValue),
		Params:                               collections.NewItem(sb, types.ParamsKey, "params", codec.CollValue[v1.Params](cdc)),
		Deposits:                             collections.NewMap(sb, types.DepositsKeyPrefix, "deposits", collections.PairKeyCodec(collections.Uint64Key, sdk.LengthPrefixedAddressKey(sdk.AccAddressKey)), codec.CollValue[v1.Deposit](cdc)), //nolint:staticcheck // sdk.LengthPrefixedAddressKey is needed to retain state compatibility
		Votes:                                collections.NewMap(sb, types.VotesKeyPrefix, "votes", collections.PairKeyCodec(collections.Uint64Key, sdk.LengthPrefixedAddressKey(sdk.AccAddressKey)), codec.CollValue[v1.Vote](cdc)), //nolint:staticcheck // sdk.LengthPrefixedAddressKey is needed to retain state compatibility
		ProposalID:                           collections.NewSequence(sb, types.ProposalIDKey, "proposal_id"),
		Proposals:                            collections.NewMap(sb, types.ProposalsKeyPrefix, "proposals", collections.Uint64Key, codec.CollValue[v1.Proposal](cdc)),
		ActiveProposalsQueue:                 collections.NewMap(sb, types.ActiveProposalQueuePrefix, "active_proposals_queue", collections.PairKeyCodec(sdk.TimeKey, collections.Uint64Key), collections.Uint64Value), // sdk.TimeKey is needed to retain state compatibility
		InactiveProposalsQueue:               collections.NewMap(sb, types.InactiveProposalQueuePrefix, "inactive_proposals_queue", collections.PairKeyCodec(sdk.TimeKey, collections.Uint64Key), collections.Uint64Value), // sdk.TimeKey is needed to retain state compatibility
		VotingPeriodProposals:                collections.NewMap(sb, types.VotingPeriodProposalKeyPrefix, "voting_period_proposals", collections.Uint64Key, collections.BytesValue),
	}
	for _, opt := range initOptions {
		opt(k)
	}
	schema, err := sb.Build()
	if err != nil {
		panic(err)
	}
	k.Schema = schema
	return k
}

// Hooks gets the hooks for governance
func (k *Keeper) Hooks() types.GovHooks {
	if k.hooks == nil {
		// return a no-op implementation if no hooks are set
		return types.MultiGovHooks{}
	}
	return k.hooks
}

// SetHooks sets the hooks for governance
func (k *Keeper) SetHooks(gh types.GovHooks) *Keeper {
	if k.hooks != nil {
		panic("cannot set governance hooks twice")
	}
	k.hooks = gh
	return k
}

// SetLegacyRouter sets the legacy router for governance
func (k *Keeper) SetLegacyRouter(router v1beta1.Router) {
	// It is vital to seal the governance proposal router here as to not allow
	// further handlers to be registered after the keeper is created since this
	// could create invalid or non-deterministic behavior.
	router.Seal()
	k.legacyRouter = router
}

// Logger returns a module-specific logger.
func (k Keeper) Logger(ctx context.Context) log.Logger {
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	return sdkCtx.Logger().With("module", "x/"+types.ModuleName)
}

// Router returns the gov keeper's router
func (k Keeper) Router() baseapp.MessageRouter {
	return k.router
}

// LegacyRouter returns the gov keeper's legacy router
func (k Keeper) LegacyRouter() v1beta1.Router {
	return k.legacyRouter
}

// GetGovernanceAccount returns the governance ModuleAccount
func (k Keeper) GetGovernanceAccount(ctx context.Context) sdk.ModuleAccountI {
	return k.authKeeper.GetModuleAccount(ctx, types.ModuleName)
}

// ModuleAccountAddress returns gov module account address
func (k Keeper) ModuleAccountAddress() sdk.AccAddress {
	return k.authKeeper.GetModuleAddress(types.ModuleName)
}

// assertMetadataLength returns an error if given metadata length
// is greater than a pre-defined MaxMetadataLen.
func (k Keeper) assertMetadataLength(metadata string) error {
	if metadata != "" && uint64(len(metadata)) > k.config.MaxMetadataLen {
		return types.ErrMetadataTooLong.Wrapf("got metadata with length %d", len(metadata))
	}
	return nil
}

// assertSummaryLength returns an error if given summary length
// is greater than a pre-defined 40*MaxMetadataLen.
func (k Keeper) assertSummaryLength(summary string) error {
	if summary != "" && uint64(len(summary)) > 40*k.config.MaxMetadataLen {
		return types.ErrSummaryTooLong.Wrapf("got summary with length %d", len(summary))
	}
	return nil
}
```

In order to only test `x/gov`, we mock the [expected keepers](https://docs.cosmos.network/v0.46/building-modules/keeper.html#type-definition) and instantiate the `Keeper` with the mocked dependencies.
Note that we may need to configure the mocked dependencies to return the expected values:

```go expandable
package keeper_test

import (
	"fmt"
	"testing"

	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
	cmttime "github.com/cometbft/cometbft/types/time"
	"github.com/stretchr/testify/require"
	"go.uber.org/mock/gomock"

	"cosmossdk.io/math"
	storetypes "cosmossdk.io/store/types"

	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/codec/address"
	"github.com/cosmos/cosmos-sdk/runtime"
	"github.com/cosmos/cosmos-sdk/testutil"
	"github.com/cosmos/cosmos-sdk/testutil/testdata"
	sdk "github.com/cosmos/cosmos-sdk/types"
	moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil"
	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
	disttypes "github.com/cosmos/cosmos-sdk/x/distribution/types"
	"github.com/cosmos/cosmos-sdk/x/gov/keeper"
	govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil"
	"github.com/cosmos/cosmos-sdk/x/gov/types"
	v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1"
	"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1"
	minttypes "github.com/cosmos/cosmos-sdk/x/mint/types"
)

var (
	_, _, addr   = testdata.KeyTestPubAddr()
	govAcct      = authtypes.NewModuleAddress(types.ModuleName)
	distAcct     = authtypes.NewModuleAddress(disttypes.ModuleName)
	TestProposal = getTestProposal()
)

// getTestProposal creates and returns a test proposal message.
func getTestProposal() []sdk.Msg {
	legacyProposalMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Title", "description"), authtypes.NewModuleAddress(types.ModuleName).String())
	if err != nil {
		panic(err)
	}
	return []sdk.Msg{
		banktypes.NewMsgSend(govAcct, addr, sdk.NewCoins(sdk.NewCoin("stake", math.NewInt(1000)))),
		legacyProposalMsg,
	}
}

// setupGovKeeper creates a govKeeper as well as all its dependencies.
func setupGovKeeper(t *testing.T) (
	*keeper.Keeper,
	*govtestutil.MockAccountKeeper,
	*govtestutil.MockBankKeeper,
	*govtestutil.MockStakingKeeper,
	*govtestutil.MockDistributionKeeper,
	moduletestutil.TestEncodingConfig,
	sdk.Context,
) {
	t.Helper()
	key := storetypes.NewKVStoreKey(types.StoreKey)
	storeService := runtime.NewKVStoreService(key)
	testCtx := testutil.DefaultContextWithDB(t, key, storetypes.NewTransientStoreKey("transient_test"))
	ctx := testCtx.Ctx.WithBlockHeader(cmtproto.Header{Time: cmttime.Now()})
	encCfg := moduletestutil.MakeTestEncodingConfig()
	v1.RegisterInterfaces(encCfg.InterfaceRegistry)
	v1beta1.RegisterInterfaces(encCfg.InterfaceRegistry)
	banktypes.RegisterInterfaces(encCfg.InterfaceRegistry)

	// Create MsgServiceRouter, but don't populate it before creating the gov
	// keeper.
	msr := baseapp.NewMsgServiceRouter()

	// gomock initializations
	ctrl := gomock.NewController(t)
	acctKeeper := govtestutil.NewMockAccountKeeper(ctrl)
	bankKeeper := govtestutil.NewMockBankKeeper(ctrl)
	stakingKeeper := govtestutil.NewMockStakingKeeper(ctrl)
	distributionKeeper := govtestutil.NewMockDistributionKeeper(ctrl)
	acctKeeper.EXPECT().GetModuleAddress(types.ModuleName).Return(govAcct).AnyTimes()
	acctKeeper.EXPECT().GetModuleAddress(disttypes.ModuleName).Return(distAcct).AnyTimes()
	acctKeeper.EXPECT().GetModuleAccount(gomock.Any(), types.ModuleName).Return(authtypes.NewEmptyModuleAccount(types.ModuleName)).AnyTimes()
	acctKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes()
	trackMockBalances(bankKeeper, distributionKeeper)
	stakingKeeper.EXPECT().TokensFromConsensusPower(ctx, gomock.Any()).DoAndReturn(func(ctx sdk.Context, power int64) math.Int {
		return sdk.TokensFromConsensusPower(power, math.NewIntFromUint64(1000000))
	}).AnyTimes()
	stakingKeeper.EXPECT().BondDenom(ctx).Return("stake", nil).AnyTimes()
	stakingKeeper.EXPECT().IterateBondedValidatorsByPower(gomock.Any(), gomock.Any()).AnyTimes()
	stakingKeeper.EXPECT().IterateDelegations(gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes()
	stakingKeeper.EXPECT().TotalBondedTokens(gomock.Any()).Return(math.NewInt(10000000), nil).AnyTimes()
	distributionKeeper.EXPECT().FundCommunityPool(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).AnyTimes()

	// Gov keeper initializations
	govKeeper := keeper.NewKeeper(encCfg.Codec, storeService, acctKeeper, bankKeeper, stakingKeeper, distributionKeeper, msr, types.DefaultConfig(), govAcct.String())
	require.NoError(t, govKeeper.ProposalID.Set(ctx, 1))
	govRouter := v1beta1.NewRouter() // Also register legacy gov handlers to test them too.
	govRouter.AddRoute(types.RouterKey, v1beta1.ProposalHandler)
	govKeeper.SetLegacyRouter(govRouter)
	err := govKeeper.Params.Set(ctx, v1.DefaultParams())
	require.NoError(t, err)
	err = govKeeper.Constitution.Set(ctx, "constitution")
	require.NoError(t, err)

	// Register all handlers for the MsgServiceRouter.
	msr.SetInterfaceRegistry(encCfg.InterfaceRegistry)
	v1.RegisterMsgServer(msr, keeper.NewMsgServerImpl(govKeeper))
	banktypes.RegisterMsgServer(msr, nil) // Nil is fine here as long as we never execute the proposal's Msgs.

	return govKeeper, acctKeeper, bankKeeper, stakingKeeper, distributionKeeper, encCfg, ctx
}

// trackMockBalances sets up expected calls on the Mock BankKeeper, and also
// locally tracks accounts balances (not modules balances).
func trackMockBalances(bankKeeper *govtestutil.MockBankKeeper, distributionKeeper *govtestutil.MockDistributionKeeper) {
	balances := make(map[string]sdk.Coins)
	balances[distAcct.String()] = sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, math.NewInt(0)))

	// We don't track module account balances.
	bankKeeper.EXPECT().MintCoins(gomock.Any(), minttypes.ModuleName, gomock.Any()).AnyTimes()
	bankKeeper.EXPECT().BurnCoins(gomock.Any(), types.ModuleName, gomock.Any()).AnyTimes()
	bankKeeper.EXPECT().SendCoinsFromModuleToModule(gomock.Any(), minttypes.ModuleName, types.ModuleName, gomock.Any()).AnyTimes()

	// But we do track normal account balances.
	bankKeeper.EXPECT().SendCoinsFromAccountToModule(gomock.Any(), gomock.Any(), types.ModuleName, gomock.Any()).DoAndReturn(func(_ sdk.Context, sender sdk.AccAddress, _ string, coins sdk.Coins) error {
		newBalance, negative := balances[sender.String()].SafeSub(coins...)
		if negative {
			return fmt.Errorf("not enough balance")
		}
		balances[sender.String()] = newBalance
		return nil
	}).AnyTimes()
	bankKeeper.EXPECT().SendCoinsFromModuleToAccount(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, module string, rcpt sdk.AccAddress, coins sdk.Coins) error {
		balances[rcpt.String()] = balances[rcpt.String()].Add(coins...)
		return nil
	}).AnyTimes()
	bankKeeper.EXPECT().GetAllBalances(gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, addr sdk.AccAddress) sdk.Coins {
		return balances[addr.String()]
	}).AnyTimes()
	bankKeeper.EXPECT().GetBalance(gomock.Any(), gomock.Any(), sdk.DefaultBondDenom).DoAndReturn(func(_ sdk.Context, addr sdk.AccAddress, _ string) sdk.Coin {
		balances := balances[addr.String()]
		for _, balance := range balances {
			if balance.Denom == sdk.DefaultBondDenom {
				return balance
			}
		}
		return sdk.NewCoin(sdk.DefaultBondDenom, math.NewInt(0))
	}).AnyTimes()
	distributionKeeper.EXPECT().FundCommunityPool(gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, coins sdk.Coins, sender sdk.AccAddress) error {
		// sender balance
		newBalance, negative := balances[sender.String()].SafeSub(coins...)
		if negative {
			return fmt.Errorf("not enough balance")
		}
		balances[sender.String()] = newBalance
		// receiver balance
		balances[distAcct.String()] = balances[distAcct.String()].Add(coins...)
		return nil
	}).AnyTimes()
}
```

This allows us to test the `x/gov` module without having to import other modules.

```go expandable
package keeper_test

import (
	"testing"

	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"

	"cosmossdk.io/collections"
	sdkmath "cosmossdk.io/math"

	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/address"
	simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/x/gov/keeper"
	govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil"
	"github.com/cosmos/cosmos-sdk/x/gov/types"
	v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1"
	"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1"
	minttypes "github.com/cosmos/cosmos-sdk/x/mint/types"
)

var address1 = "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"

type KeeperTestSuite struct {
	suite.Suite

	cdc               codec.Codec
	ctx               sdk.Context
	govKeeper         *keeper.Keeper
	acctKeeper        *govtestutil.MockAccountKeeper
	bankKeeper        *govtestutil.MockBankKeeper
	stakingKeeper     *govtestutil.MockStakingKeeper
	distKeeper        *govtestutil.MockDistributionKeeper
	queryClient       v1.QueryClient
	legacyQueryClient v1beta1.QueryClient
	addrs             []sdk.AccAddress
	msgSrvr           v1.MsgServer
	legacyMsgSrvr     v1beta1.MsgServer
}

func (suite *KeeperTestSuite) SetupSuite() {
	suite.reset()
}

func (suite *KeeperTestSuite) reset() {
	govKeeper, acctKeeper, bankKeeper, stakingKeeper, distKeeper, encCfg, ctx := setupGovKeeper(suite.T())

	// Populate the gov account with some coins, as the TestProposal we have
	// is a MsgSend from the gov account.
	coins := sdk.NewCoins(sdk.NewCoin("stake", sdkmath.NewInt(100000)))
	err := bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, coins)
	suite.NoError(err)
	err = bankKeeper.SendCoinsFromModuleToModule(ctx, minttypes.ModuleName, types.ModuleName, coins)
	suite.NoError(err)
	queryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry)
	v1.RegisterQueryServer(queryHelper, keeper.NewQueryServer(govKeeper))
	legacyQueryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry)
	v1beta1.RegisterQueryServer(legacyQueryHelper, keeper.NewLegacyQueryServer(govKeeper))
	queryClient := v1.NewQueryClient(queryHelper)
	legacyQueryClient := v1beta1.NewQueryClient(legacyQueryHelper)

	suite.ctx = ctx
	suite.govKeeper = govKeeper
	suite.acctKeeper = acctKeeper
	suite.bankKeeper = bankKeeper
	suite.stakingKeeper = stakingKeeper
	suite.distKeeper = distKeeper
	suite.cdc = encCfg.Codec
	suite.queryClient = queryClient
	suite.legacyQueryClient = legacyQueryClient
	suite.msgSrvr = keeper.NewMsgServerImpl(suite.govKeeper)
	suite.legacyMsgSrvr = keeper.NewLegacyMsgServerImpl(govAcct.String(), suite.msgSrvr)
	suite.addrs = simtestutil.AddTestAddrsIncremental(bankKeeper, stakingKeeper, ctx, 3, sdkmath.NewInt(30000000))
	suite.acctKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes()
}

func TestIncrementProposalNumber(t *testing.T) {
	govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t)
	authKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes()
	ac := address.NewBech32Codec("cosmos")
	addrBz, err := ac.StringToBytes(address1)
	require.NoError(t, err)
	tp := TestProposal
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, true)
	require.NoError(t, err)
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, true)
	require.NoError(t, err)
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	proposal6, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	require.Equal(t, uint64(6), proposal6.Id)
}

func TestProposalQueues(t *testing.T) {
	govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t)
	ac := address.NewBech32Codec("cosmos")
	addrBz, err := ac.StringToBytes(address1)
	require.NoError(t, err)
	authKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes()

	// create test proposals
	tp := TestProposal
	proposal, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	has, err := govKeeper.InactiveProposalsQueue.Has(ctx, collections.Join(*proposal.DepositEndTime, proposal.Id))
	require.NoError(t, err)
	require.True(t, has)
	require.NoError(t, govKeeper.ActivateVotingPeriod(ctx, proposal))
	proposal, err = govKeeper.Proposals.Get(ctx, proposal.Id)
	require.Nil(t, err)
	has, err = govKeeper.ActiveProposalsQueue.Has(ctx, collections.Join(*proposal.VotingEndTime, proposal.Id))
	require.NoError(t, err)
	require.True(t, has)
}

func TestSetHooks(t *testing.T) {
	govKeeper, _, _, _, _, _, _ := setupGovKeeper(t)
	require.Empty(t, govKeeper.Hooks())
	govHooksReceiver := MockGovHooksReceiver{}
	govKeeper.SetHooks(types.NewMultiGovHooks(&govHooksReceiver))
	require.NotNil(t, govKeeper.Hooks())
	require.Panics(t, func() {
		govKeeper.SetHooks(&govHooksReceiver)
	})
}

func TestGetGovGovernanceAndModuleAccountAddress(t *testing.T) {
	govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t)
	mAcc := authKeeper.GetModuleAccount(ctx, "gov")
	require.Equal(t, mAcc, govKeeper.GetGovernanceAccount(ctx))
	mAddr := authKeeper.GetModuleAddress("gov")
	require.Equal(t, mAddr, govKeeper.ModuleAccountAddress())
}

func TestKeeperTestSuite(t *testing.T) {
	suite.Run(t, new(KeeperTestSuite))
}
```

We can then create unit tests using the newly created `Keeper` instance.

```go expandable
package keeper_test

import (
	"testing"

	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"

	"cosmossdk.io/collections"
	sdkmath "cosmossdk.io/math"

	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/address"
	simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/x/gov/keeper"
	govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil"
	"github.com/cosmos/cosmos-sdk/x/gov/types"
	v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1"
	"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1"
	minttypes "github.com/cosmos/cosmos-sdk/x/mint/types"
)

var address1 = "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"

type KeeperTestSuite struct {
	suite.Suite

	cdc               codec.Codec
	ctx               sdk.Context
	govKeeper         *keeper.Keeper
	acctKeeper        *govtestutil.MockAccountKeeper
	bankKeeper        *govtestutil.MockBankKeeper
	stakingKeeper     *govtestutil.MockStakingKeeper
	distKeeper        *govtestutil.MockDistributionKeeper
	queryClient       v1.QueryClient
	legacyQueryClient v1beta1.QueryClient
	addrs             []sdk.AccAddress
	msgSrvr           v1.MsgServer
	legacyMsgSrvr     v1beta1.MsgServer
}

func (suite *KeeperTestSuite) SetupSuite() {
	suite.reset()
}

func (suite *KeeperTestSuite) reset() {
	govKeeper, acctKeeper, bankKeeper, stakingKeeper, distKeeper, encCfg, ctx := setupGovKeeper(suite.T())

	// Populate the gov account with some coins, as the TestProposal we have
	// is a MsgSend from the gov account.
	coins := sdk.NewCoins(sdk.NewCoin("stake", sdkmath.NewInt(100000)))
	err := bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, coins)
	suite.NoError(err)
	err = bankKeeper.SendCoinsFromModuleToModule(ctx, minttypes.ModuleName, types.ModuleName, coins)
	suite.NoError(err)
	queryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry)
	v1.RegisterQueryServer(queryHelper, keeper.NewQueryServer(govKeeper))
	legacyQueryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry)
	v1beta1.RegisterQueryServer(legacyQueryHelper, keeper.NewLegacyQueryServer(govKeeper))
	queryClient := v1.NewQueryClient(queryHelper)
	legacyQueryClient := v1beta1.NewQueryClient(legacyQueryHelper)

	suite.ctx = ctx
	suite.govKeeper = govKeeper
	suite.acctKeeper = acctKeeper
	suite.bankKeeper = bankKeeper
	suite.stakingKeeper = stakingKeeper
	suite.distKeeper = distKeeper
	suite.cdc = encCfg.Codec
	suite.queryClient = queryClient
	suite.legacyQueryClient = legacyQueryClient
	suite.msgSrvr = keeper.NewMsgServerImpl(suite.govKeeper)
	suite.legacyMsgSrvr = keeper.NewLegacyMsgServerImpl(govAcct.String(), suite.msgSrvr)
	suite.addrs = simtestutil.AddTestAddrsIncremental(bankKeeper, stakingKeeper, ctx, 3, sdkmath.NewInt(30000000))
	suite.acctKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes()
}

func TestIncrementProposalNumber(t *testing.T) {
	govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t)
	authKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes()
	ac := address.NewBech32Codec("cosmos")
	addrBz, err := ac.StringToBytes(address1)
	require.NoError(t, err)
	tp := TestProposal
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, true)
	require.NoError(t, err)
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, true)
	require.NoError(t, err)
	_, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	proposal6, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	require.Equal(t, uint64(6), proposal6.Id)
}

func TestProposalQueues(t *testing.T) {
	govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t)
	ac := address.NewBech32Codec("cosmos")
	addrBz, err := ac.StringToBytes(address1)
	require.NoError(t, err)
	authKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes()

	// create test proposals
	tp := TestProposal
	proposal, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false)
	require.NoError(t, err)
	has, err := govKeeper.InactiveProposalsQueue.Has(ctx, collections.Join(*proposal.DepositEndTime, proposal.Id))
	require.NoError(t, err)
	require.True(t, has)
	require.NoError(t, govKeeper.ActivateVotingPeriod(ctx, proposal))
	proposal, err = govKeeper.Proposals.Get(ctx, proposal.Id)
	require.Nil(t, err)
	has, err = govKeeper.ActiveProposalsQueue.Has(ctx, collections.Join(*proposal.VotingEndTime, proposal.Id))
	require.NoError(t, err)
	require.True(t, has)
}

func TestSetHooks(t *testing.T) {
	govKeeper, _, _, _, _, _, _ := setupGovKeeper(t)
	require.Empty(t, govKeeper.Hooks())
	govHooksReceiver := MockGovHooksReceiver{}
	govKeeper.SetHooks(types.NewMultiGovHooks(&govHooksReceiver))
	require.NotNil(t, govKeeper.Hooks())
	require.Panics(t, func() {
		govKeeper.SetHooks(&govHooksReceiver)
	})
}

func TestGetGovGovernanceAndModuleAccountAddress(t *testing.T) {
	govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t)
	mAcc := authKeeper.GetModuleAccount(ctx, "gov")
	require.Equal(t, mAcc, govKeeper.GetGovernanceAccount(ctx))
mAddr := authKeeper.GetModuleAddress("gov") + +require.Equal(t, mAddr, govKeeper.ModuleAccountAddress()) +} + +func TestKeeperTestSuite(t *testing.T) { + suite.Run(t, new(KeeperTestSuite)) +} +``` + +## Integration Tests + +Integration tests are at the second level of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html). +In the SDK, integration tests live under [`/tests/integration`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/integration). + +The goal of integration tests is to verify how a component interacts with its dependencies. Unlike unit tests, integration tests do not mock dependencies; they use the component's direct dependencies. This also differs from end-to-end tests, which exercise the component within a full application. + +Integration tests interact with the tested module via the defined `Msg` and `Query` services. The result of a test can be verified by checking the application state, the emitted events, or the response; it is advised to combine at least two of these methods. + +The SDK provides small helpers for quickly setting up an integration test. These helpers can be found in [`testutil/integration`](https://github.com/cosmos/cosmos-sdk/blob/main/testutil/integration). 
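As a toy illustration of combining these verification methods (response, state, and events), here is a minimal sketch that does not use the SDK at all; the `Keeper`, `MsgMint`, and event format below are invented for the example:

```go
package main

import "fmt"

// Toy "module": a keeper holding balances, a Msg handler, and an event log.
// All names here are hypothetical stand-ins for a real module's Msg service.
type Keeper struct {
	balances map[string]int
	events   []string
}

type MsgMint struct {
	Addr   string
	Amount int
}

type MsgMintResponse struct {
	NewBalance int
}

func (k *Keeper) HandleMint(msg MsgMint) (MsgMintResponse, error) {
	if msg.Amount <= 0 {
		return MsgMintResponse{}, fmt.Errorf("invalid amount: %d", msg.Amount)
	}
	k.balances[msg.Addr] += msg.Amount
	k.events = append(k.events, fmt.Sprintf("mint/%s/%d", msg.Addr, msg.Amount))
	return MsgMintResponse{NewBalance: k.balances[msg.Addr]}, nil
}

func main() {
	k := &Keeper{balances: map[string]int{}}
	resp, err := k.HandleMint(MsgMint{Addr: "alice", Amount: 10})
	if err != nil {
		panic(err)
	}
	// 1. check the response
	fmt.Println(resp.NewBalance) // 10
	// 2. check the state
	fmt.Println(k.balances["alice"]) // 10
	// 3. check the emitted events
	fmt.Println(k.events[0]) // mint/alice/10
}
```

In a real integration test, the same three checks go through the `Msg` service response, the keeper's state accessors, and the emitted events.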
+ +### Example + +```go expandable +package integration_test + +import ( + + "fmt" + "io" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/google/go-cmp/cmp" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/integration" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +/ Example shows how to use the integration test framework to test the integration of SDK modules. +/ Panics are used in this example, but in a real test case, you should use the testing.T object and assertions. +func Example() { + / in this example we are testing the integration of the following modules: + / - mint, which directly depends on auth, bank and staking + encodingCfg := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}, mint.AppModuleBasic{ +}) + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey, minttypes.StoreKey) + authority := authtypes.NewModuleAddress("gov").String() + + / replace the logger by testing values in a real test case (e.g. 
log.NewTestLogger(t)) + logger := log.NewNopLogger() + cms := integration.CreateMultiStore(keys, logger) + newCtx := sdk.NewContext(cms, cmtproto.Header{ +}, true, logger) + accountKeeper := authkeeper.NewAccountKeeper( + encodingCfg.Codec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}}, + addresscodec.NewBech32Codec("cosmos"), + "cosmos", + authority, + ) + + / subspace is nil because we don't test params (which is legacy anyway) + authModule := auth.NewAppModule(encodingCfg.Codec, accountKeeper, authsims.RandomGenesisAccounts, nil) + + / here bankkeeper and staking keeper is nil because we are not testing them + / subspace is nil because we don't test params (which is legacy anyway) + mintKeeper := mintkeeper.NewKeeper(encodingCfg.Codec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), nil, accountKeeper, nil, authtypes.FeeCollectorName, authority) + mintModule := mint.NewAppModule(encodingCfg.Codec, mintKeeper, accountKeeper, nil, nil) + + / create the application and register all the modules from the previous step + integrationApp := integration.NewIntegrationApp( + newCtx, + logger, + keys, + encodingCfg.Codec, + map[string]appmodule.AppModule{ + authtypes.ModuleName: authModule, + minttypes.ModuleName: mintModule, +}, + ) + + / register the message and query servers + authtypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), authkeeper.NewMsgServerImpl(accountKeeper)) + +minttypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), mintkeeper.NewMsgServerImpl(mintKeeper)) + +minttypes.RegisterQueryServer(integrationApp.QueryHelper(), mintkeeper.NewQueryServerImpl(mintKeeper)) + params := minttypes.DefaultParams() + +params.BlocksPerYear = 10000 + + / now we can use the application to test a mint message + result, err := integrationApp.RunMsg(&minttypes.MsgUpdateParams{ + Authority: authority, + Params: params, +}) + if err != nil { + 
panic(err) +} + + / in this example the result is an empty response, a nil check is enough + / in other cases, it is recommended to check the result value. + if result == nil { + panic(fmt.Errorf("unexpected nil result")) +} + + / we now check the result + resp := minttypes.MsgUpdateParamsResponse{ +} + +err = encodingCfg.Codec.Unmarshal(result.Value, &resp) + if err != nil { + panic(err) +} + sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context()) + + / we should also check the state of the application + got, err := mintKeeper.Params.Get(sdkCtx) + if err != nil { + panic(err) +} + if diff := cmp.Diff(got, params); diff != "" { + panic(diff) +} + +fmt.Println(got.BlocksPerYear) + / Output: 10000 +} + +/ ExampleOneModule shows how to use the integration test framework to test the integration of a single module. +/ That module has no dependency on other modules. +func Example_oneModule() { + / in this example we are testing the integration of the auth module: + encodingCfg := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}) + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey) + authority := authtypes.NewModuleAddress("gov").String() + + / replace the logger by testing values in a real test case (e.g. 
log.NewTestLogger(t)) + logger := log.NewLogger(io.Discard) + cms := integration.CreateMultiStore(keys, logger) + newCtx := sdk.NewContext(cms, cmtproto.Header{ +}, true, logger) + accountKeeper := authkeeper.NewAccountKeeper( + encodingCfg.Codec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}}, + addresscodec.NewBech32Codec("cosmos"), + "cosmos", + authority, + ) + + / subspace is nil because we don't test params (which is legacy anyway) + authModule := auth.NewAppModule(encodingCfg.Codec, accountKeeper, authsims.RandomGenesisAccounts, nil) + + / create the application and register all the modules from the previous step + integrationApp := integration.NewIntegrationApp( + newCtx, + logger, + keys, + encodingCfg.Codec, + map[string]appmodule.AppModule{ + authtypes.ModuleName: authModule, +}, + ) + + / register the message and query servers + authtypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), authkeeper.NewMsgServerImpl(accountKeeper)) + params := authtypes.DefaultParams() + +params.MaxMemoCharacters = 1000 + + / now we can use the application to test a mint message + result, err := integrationApp.RunMsg(&authtypes.MsgUpdateParams{ + Authority: authority, + Params: params, +}, + / this allows to the begin and end blocker of the module before and after the message + integration.WithAutomaticFinalizeBlock(), + / this allows to commit the state after the message + integration.WithAutomaticCommit(), + ) + if err != nil { + panic(err) +} + + / verify that the begin and end blocker were called + / NOTE: in this example, we are testing auth, which doesn't have any begin or end blocker + / so verifying the block height is enough + if integrationApp.LastBlockHeight() != 2 { + panic(fmt.Errorf("expected block height to be 2, got %d", integrationApp.LastBlockHeight())) +} + + / in this example the result is an empty response, a nil check is enough + / in 
other cases, it is recommended to check the result value. + if result == nil { + panic(fmt.Errorf("unexpected nil result")) +} + + / we now check the result + resp := authtypes.MsgUpdateParamsResponse{ +} + +err = encodingCfg.Codec.Unmarshal(result.Value, &resp) + if err != nil { + panic(err) +} + sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context()) + + / we should also check the state of the application + got := accountKeeper.GetParams(sdkCtx) + if diff := cmp.Diff(got, params); diff != "" { + panic(diff) +} + +fmt.Println(got.MaxMemoCharacters) + / Output: 1000 +} +``` + +## Deterministic and Regression Tests + +Tests are written for Cosmos SDK queries that carry the `module_query_safe` Protobuf annotation. + +Each query is tested using two methods: + +* Property-based testing with the [`rapid`](https://pkg.go.dev/pgregory.net/rapid@v0.5.3) library: the property tested is that the query response and gas consumption are identical across 1000 query calls. +* Regression tests with hardcoded responses and gas values, which verify that neither changes across 1000 calls or between SDK patch versions. 
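The property-based method can be sketched independently of the SDK: call the query repeatedly and require that both the response and the gas consumption never vary. The `queryBalance` function and its gas value below are hypothetical:

```go
package main

import "fmt"

// Hypothetical deterministic query: returns a response and a gas cost.
func queryBalance(addr string) (resp string, gas int) {
	return fmt.Sprintf("balance(%s)=42", addr), 1087
}

// checkDeterministic calls the query n times and verifies that neither the
// response nor the gas consumption changes between calls.
func checkDeterministic(n int, addr string) error {
	firstResp, firstGas := queryBalance(addr)
	for i := 1; i < n; i++ {
		resp, gas := queryBalance(addr)
		if resp != firstResp || gas != firstGas {
			return fmt.Errorf("call %d diverged: %q/%d vs %q/%d", i, resp, gas, firstResp, firstGas)
		}
	}
	return nil
}

func main() {
	if err := checkDeterministic(1000, "cosmos1..."); err != nil {
		panic(err)
	}
	fmt.Println("deterministic over 1000 calls")
}
```

In the SDK this loop is driven by `rapid`-generated inputs and the `testdata.DeterministicIterations` helper, which also compares gas consumption.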
+ +Here's an example of regression tests: + +```go expandable +package keeper_test + +import ( + + "testing" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/stretchr/testify/require" + "gotest.tools/v3/assert" + "pgregory.net/rapid" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + "cosmossdk.io/math" + storetypes "cosmossdk.io/store/types" + + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/integration" + "github.com/cosmos/cosmos-sdk/testutil/testdata" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktestutil "github.com/cosmos/cosmos-sdk/x/bank/testutil" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" +) + +var ( + denomRegex = sdk.DefaultCoinDenomRegex() + +addr1 = sdk.MustAccAddressFromBech32("cosmos139f7kncmglres2nf3h4hc4tade85ekfr8sulz5") + +coin1 = sdk.NewCoin("denom", math.NewInt(10)) + +metadataAtom = banktypes.Metadata{ + Description: "The native staking token of the Cosmos Hub.", + DenomUnits: []*banktypes.DenomUnit{ + { + Denom: "uatom", + Exponent: 0, + Aliases: []string{"microatom" +}, +}, + { + Denom: "atom", + Exponent: 6, + Aliases: []string{"ATOM" +}, +}, +}, + Base: "uatom", + Display: "atom", +} +) + +type deterministicFixture struct { + ctx sdk.Context + bankKeeper keeper.BaseKeeper + queryClient 
banktypes.QueryClient +} + +func initDeterministicFixture(t *testing.T) *deterministicFixture { + t.Helper() + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey, banktypes.StoreKey) + cdc := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}, bank.AppModuleBasic{ +}).Codec + logger := log.NewTestLogger(t) + cms := integration.CreateMultiStore(keys, logger) + newCtx := sdk.NewContext(cms, cmtproto.Header{ +}, true, logger) + authority := authtypes.NewModuleAddress("gov") + maccPerms := map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}, +} + accountKeeper := authkeeper.NewAccountKeeper( + cdc, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + addresscodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authority.String(), + ) + blockedAddresses := map[string]bool{ + accountKeeper.GetAuthority(): false, +} + bankKeeper := keeper.NewBaseKeeper( + cdc, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + accountKeeper, + blockedAddresses, + authority.String(), + log.NewNopLogger(), + ) + authModule := auth.NewAppModule(cdc, accountKeeper, authsims.RandomGenesisAccounts, nil) + bankModule := bank.NewAppModule(cdc, bankKeeper, accountKeeper, nil) + integrationApp := integration.NewIntegrationApp(newCtx, logger, keys, cdc, map[string]appmodule.AppModule{ + authtypes.ModuleName: authModule, + banktypes.ModuleName: bankModule, +}) + sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context()) + + / Register MsgServer and QueryServer + banktypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), keeper.NewMsgServerImpl(bankKeeper)) + +banktypes.RegisterQueryServer(integrationApp.QueryHelper(), keeper.NewQuerier(&bankKeeper)) + qr := integrationApp.QueryHelper() + queryClient := banktypes.NewQueryClient(qr) + f := deterministicFixture{ + ctx: sdkCtx, + bankKeeper: bankKeeper, + queryClient: queryClient, +} + +return &f +} + +func fundAccount(f *deterministicFixture, addr 
sdk.AccAddress, coin ...sdk.Coin) { + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, sdk.NewCoins(coin...)) + +assert.NilError(&testing.T{ +}, err) +} + +func getCoin(rt *rapid.T) + +sdk.Coin { + return sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) +} + +func TestGRPCQueryBalance(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + coin := getCoin(rt) + +fundAccount(f, addr, coin) + req := banktypes.NewQueryBalanceRequest(addr, coin.GetDenom()) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Balance, 0, true) +}) + +fundAccount(f, addr1, coin1) + req := banktypes.NewQueryBalanceRequest(addr1, coin1.GetDenom()) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Balance, 1087, false) +} + +func TestGRPCQueryAllBalances(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + numCoins := rapid.IntRange(1, 10).Draw(rt, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := getCoin(rt) + + / NewCoins sorts the denoms + coins = sdk.NewCoins(append(coins, coin)...) +} + +fundAccount(f, addr, coins...) + req := banktypes.NewQueryAllBalancesRequest(addr, testdata.PaginationGenerator(rt, uint64(numCoins)).Draw(rt, "pagination"), false) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.AllBalances, 0, true) +}) + coins := sdk.NewCoins( + sdk.NewCoin("stake", math.NewInt(10)), + sdk.NewCoin("denom", math.NewInt(100)), + ) + +fundAccount(f, addr1, coins...) 
+ req := banktypes.NewQueryAllBalancesRequest(addr1, nil, false) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.AllBalances, 357, false) +} + +func TestGRPCQuerySpendableBalances(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + + / Denoms must be unique, otherwise sdk.NewCoins will panic. + denoms := rapid.SliceOfNDistinct(rapid.StringMatching(denomRegex), 1, 10, rapid.ID[string]).Draw(rt, "denoms") + coins := make(sdk.Coins, 0, len(denoms)) + for _, denom := range denoms { + coin := sdk.NewCoin( + denom, + math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + + / NewCoins sorts the denoms + coins = sdk.NewCoins(append(coins, coin)...) +} + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, coins) + +assert.NilError(t, err) + req := banktypes.NewQuerySpendableBalancesRequest(addr, testdata.PaginationGenerator(rt, uint64(len(denoms))).Draw(rt, "pagination")) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SpendableBalances, 0, true) +}) + coins := sdk.NewCoins( + sdk.NewCoin("stake", math.NewInt(10)), + sdk.NewCoin("denom", math.NewInt(100)), + ) + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr1, coins) + +assert.NilError(t, err) + req := banktypes.NewQuerySpendableBalancesRequest(addr1, nil) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SpendableBalances, 2032, false) +} + +func TestGRPCQueryTotalSupply(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +res, err := f.queryClient.TotalSupply(f.ctx, &banktypes.QueryTotalSupplyRequest{ +}) + +assert.NilError(t, err) + initialSupply := res.GetSupply() + +rapid.Check(t, func(rt *rapid.T) { + numCoins := rapid.IntRange(1, 3).Draw(rt, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + 
math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + +coins = coins.Add(coin) +} + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, coins)) + +initialSupply = initialSupply.Add(coins...) + req := &banktypes.QueryTotalSupplyRequest{ + Pagination: testdata.PaginationGenerator(rt, uint64(len(initialSupply))).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.TotalSupply, 0, true) +}) + +f = initDeterministicFixture(t) / reset + coins := sdk.NewCoins( + sdk.NewCoin("foo", math.NewInt(10)), + sdk.NewCoin("bar", math.NewInt(100)), + ) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, coins)) + req := &banktypes.QueryTotalSupplyRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.TotalSupply, 150, false) +} + +func TestGRPCQueryTotalSupplyOf(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, sdk.NewCoins(coin))) + req := &banktypes.QuerySupplyOfRequest{ + Denom: coin.GetDenom() +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SupplyOf, 0, true) +}) + coin := sdk.NewCoin("bar", math.NewInt(100)) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, sdk.NewCoins(coin))) + req := &banktypes.QuerySupplyOfRequest{ + Denom: coin.GetDenom() +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SupplyOf, 1021, false) +} + +func TestGRPCQueryParams(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + enabledStatus := banktypes.SendEnabled{ + Denom: rapid.StringMatching(denomRegex).Draw(rt, "denom"), + Enabled: rapid.Bool().Draw(rt, "status"), +} + params := banktypes.Params{ + SendEnabled: 
[]*banktypes.SendEnabled{&enabledStatus +}, + DefaultSendEnabled: rapid.Bool().Draw(rt, "send"), +} + +require.NoError(t, f.bankKeeper.SetParams(f.ctx, params)) + req := &banktypes.QueryParamsRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Params, 0, true) +}) + enabledStatus := banktypes.SendEnabled{ + Denom: "denom", + Enabled: true, +} + params := banktypes.Params{ + SendEnabled: []*banktypes.SendEnabled{&enabledStatus +}, + DefaultSendEnabled: false, +} + +require.NoError(t, f.bankKeeper.SetParams(f.ctx, params)) + req := &banktypes.QueryParamsRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Params, 1003, false) +} + +func createAndReturnMetadatas(t *rapid.T, count int) []banktypes.Metadata { + denomsMetadata := make([]banktypes.Metadata, 0, count) + for i := 0; i < count; i++ { + denom := rapid.StringMatching(denomRegex).Draw(t, "denom") + aliases := rapid.SliceOf(rapid.String()).Draw(t, "aliases") + / In the GRPC server code, empty arrays are returned as nil + if len(aliases) == 0 { + aliases = nil +} + metadata := banktypes.Metadata{ + Description: rapid.StringN(1, 100, 100).Draw(t, "desc"), + DenomUnits: []*banktypes.DenomUnit{ + { + Denom: denom, + Exponent: rapid.Uint32().Draw(t, "exponent"), + Aliases: aliases, +}, +}, + Base: denom, + Display: denom, + Name: rapid.String().Draw(t, "name"), + Symbol: rapid.String().Draw(t, "symbol"), + URI: rapid.String().Draw(t, "uri"), + URIHash: rapid.String().Draw(t, "uri-hash"), +} + +denomsMetadata = append(denomsMetadata, metadata) +} + +return denomsMetadata +} + +func TestGRPCDenomsMetadata(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + count := rapid.IntRange(1, 3).Draw(rt, "count") + denomsMetadata := createAndReturnMetadatas(rt, count) + +assert.Assert(t, len(denomsMetadata) == count) + for i := 0; i < count; i++ { + f.bankKeeper.SetDenomMetaData(f.ctx, denomsMetadata[i]) +} + req := 
&banktypes.QueryDenomsMetadataRequest{ + Pagination: testdata.PaginationGenerator(rt, uint64(count)).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomsMetadata, 0, true) +}) + +f = initDeterministicFixture(t) / reset + + f.bankKeeper.SetDenomMetaData(f.ctx, metadataAtom) + req := &banktypes.QueryDenomsMetadataRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomsMetadata, 660, false) +} + +func TestGRPCDenomMetadata(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + denomMetadata := createAndReturnMetadatas(rt, 1) + +assert.Assert(t, len(denomMetadata) == 1) + +f.bankKeeper.SetDenomMetaData(f.ctx, denomMetadata[0]) + req := &banktypes.QueryDenomMetadataRequest{ + Denom: denomMetadata[0].Base, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomMetadata, 0, true) +}) + +f.bankKeeper.SetDenomMetaData(f.ctx, metadataAtom) + req := &banktypes.QueryDenomMetadataRequest{ + Denom: metadataAtom.Base, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomMetadata, 1300, false) +} + +func TestGRPCSendEnabled(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + allDenoms := []string{ +} + +rapid.Check(t, func(rt *rapid.T) { + count := rapid.IntRange(0, 10).Draw(rt, "count") + denoms := make([]string, 0, count) + for i := 0; i < count; i++ { + coin := banktypes.SendEnabled{ + Denom: rapid.StringMatching(denomRegex).Draw(rt, "denom"), + Enabled: rapid.Bool().Draw(rt, "enabled-status"), +} + +f.bankKeeper.SetSendEnabled(f.ctx, coin.Denom, coin.Enabled) + +denoms = append(denoms, coin.Denom) +} + +allDenoms = append(allDenoms, denoms...) 
+ req := &banktypes.QuerySendEnabledRequest{ + Denoms: denoms, + / Pagination is only taken into account when `denoms` is an empty array + Pagination: testdata.PaginationGenerator(rt, uint64(len(allDenoms))).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SendEnabled, 0, true) +}) + +coin1 := banktypes.SendEnabled{ + Denom: "falsecoin", + Enabled: false, +} + +coin2 := banktypes.SendEnabled{ + Denom: "truecoin", + Enabled: true, +} + +f.bankKeeper.SetSendEnabled(f.ctx, coin1.Denom, false) + +f.bankKeeper.SetSendEnabled(f.ctx, coin2.Denom, true) + req := &banktypes.QuerySendEnabledRequest{ + Denoms: []string{ + coin1.GetDenom(), coin2.GetDenom() +}, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SendEnabled, 4063, false) +} + +func TestGRPCDenomOwners(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + denom := rapid.StringMatching(denomRegex).Draw(rt, "denom") + numAddr := rapid.IntRange(1, 10).Draw(rt, "number-address") + for i := 0; i < numAddr; i++ { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + coin := sdk.NewCoin( + denom, + math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, sdk.NewCoins(coin)) + +assert.NilError(t, err) +} + req := &banktypes.QueryDenomOwnersRequest{ + Denom: denom, + Pagination: testdata.PaginationGenerator(rt, uint64(numAddr)).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomOwners, 0, true) +}) + denomOwners := []*banktypes.DenomOwner{ + { + Address: "cosmos1qg65a9q6k2sqq7l3ycp428sqqpmqcucgzze299", + Balance: coin1, +}, + { + Address: "cosmos1qglnsqgpq48l7qqzgs8qdshr6fh3gqq9ej3qut", + Balance: coin1, +}, +} + for i := 0; i < len(denomOwners); i++ { + addr, err := sdk.AccAddressFromBech32(denomOwners[i].Address) + +assert.NilError(t, err) + +err = banktestutil.FundAccount(f.ctx, f.bankKeeper, 
addr, sdk.NewCoins(coin1)) + +assert.NilError(t, err) +} + req := &banktypes.QueryDenomOwnersRequest{ + Denom: coin1.GetDenom(), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomOwners, 2516, false) +} +``` + +## Simulations + +Simulations also use a minimal application, built with [`depinject`](/docs/sdk/next/documentation/module-system/depinject): + + +You can also use the `AppConfig` `configurator` to create an `AppConfig` [inline](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/slashing/app_test.go#L54-L62). There is no difference between the two approaches; use whichever you prefer. + + +The following is an example for `x/gov` simulations: + +```go expandable +package simulation_test + +import ( + + "fmt" + "math/rand" + "testing" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/gogoproto/proto" + "github.com/stretchr/testify/require" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/configurator" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + _ "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/bank/testutil" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + _ "github.com/cosmos/cosmos-sdk/x/distribution" + dk "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + _ "github.com/cosmos/cosmos-sdk/x/gov" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + "github.com/cosmos/cosmos-sdk/x/gov/simulation" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + 
"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +var ( + _ simtypes.WeightedProposalMsg = MockWeightedProposals{ +} + /nolint:staticcheck / keeping around for legacy testing + _ simtypes.WeightedProposalContent = MockWeightedProposals{ +} +) + +type MockWeightedProposals struct { + n int +} + +func (m MockWeightedProposals) + +AppParamsKey() + +string { + return fmt.Sprintf("AppParamsKey-%d", m.n) +} + +func (m MockWeightedProposals) + +DefaultWeight() + +int { + return m.n +} + +func (m MockWeightedProposals) + +MsgSimulatorFn() + +simtypes.MsgSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +sdk.Msg { + return nil +} +} + +/nolint:staticcheck / retaining legacy content to maintain gov functionality +func (m MockWeightedProposals) + +ContentSimulatorFn() + +simtypes.ContentSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +simtypes.Content { + return v1beta1.NewTextProposal( + fmt.Sprintf("title-%d: %s", m.n, simtypes.RandStringOfLength(r, 100)), + fmt.Sprintf("description-%d: %s", m.n, simtypes.RandStringOfLength(r, 4000)), + ) +} +} + +func mockWeightedProposalMsg(n int) []simtypes.WeightedProposalMsg { + wpc := make([]simtypes.WeightedProposalMsg, n) + for i := range n { + wpc[i] = MockWeightedProposals{ + i +} + +} + +return wpc +} + +/ nolint / keeping this legacy proposal for testing +func mockWeightedLegacyProposalContent(n int) []simtypes.WeightedProposalContent { + wpc := make([]simtypes.WeightedProposalContent, n) + for i := range n { + wpc[i] = MockWeightedProposals{ + i +} + +} + +return wpc +} + +/ TestWeightedOperations tests the weights of the operations. 
+func TestWeightedOperations(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + ctx.WithChainID("test-chain") + appParams := make(simtypes.AppParams) + weightesOps := simulation.WeightedOperations(appParams, suite.TxConfig, suite.AccountKeeper, + suite.BankKeeper, suite.GovKeeper, mockWeightedProposalMsg(3), mockWeightedLegacyProposalContent(1), + ) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accs := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + expected := []struct { + weight int + opMsgRoute string + opMsgName string +}{ + { + simulation.DefaultWeightMsgDeposit, types.ModuleName, simulation.TypeMsgDeposit +}, + { + simulation.DefaultWeightMsgVote, types.ModuleName, simulation.TypeMsgVote +}, + { + simulation.DefaultWeightMsgVoteWeighted, types.ModuleName, simulation.TypeMsgVoteWeighted +}, + { + simulation.DefaultWeightMsgCancelProposal, types.ModuleName, simulation.TypeMsgCancelProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {1, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {2, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, +} + +require.Equal(t, len(weightesOps), len(expected), "number of operations should be the same") + for i, w := range weightesOps { + operationMsg, _, err := w.Op()(r, app.BaseApp, ctx, accs, ctx.ChainID()) + +require.NoError(t, err) + + / the following checks are very much dependent from the ordering of the output given + / by WeightedOperations. 
if the ordering in WeightedOperations changes some tests + / will fail + require.Equal(t, expected[i].weight, w.Weight(), "weight should be the same") + +require.Equal(t, expected[i].opMsgRoute, operationMsg.Route, "route should be the same") + +require.Equal(t, expected[i].opMsgName, operationMsg.Name, "operation Msg name should be the same") +} +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + _, err := app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgSubmitProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.MsgSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "47841094stake", msg.InitialDeposit[0].String()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgSubmitLegacyProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal submitted as legacy content. +/ Abnormal scenarios, where errors occur, are not tested here. 
+func TestSimulateMsgSubmitLegacyProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + _, err := app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + / execute operation + op := simulation.SimulateMsgSubmitLegacyProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.ContentSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +var msgLegacyContent v1.MsgExecLegacyContent + err = proto.Unmarshal(msg.Messages[0].Value, &msgLegacyContent) + +require.NoError(t, err) + +var textProposal v1beta1.TextProposal + err = proto.Unmarshal(msgLegacyContent.Content.Value, &textProposal) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1p8wcgrjr4pjju90xg6u9cgq55dxwq8j7u4x9a0", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "25166256stake", msg.InitialDeposit[0].String()) + +require.Equal(t, "title-3: ZBSpYuLyYggwexjxusrBqDOTtGTOWeLrQKjLxzIivHSlcxgdXhhuTSkuxKGLwQvuyNhYFmBZHeAerqyNEUzXPFGkqEGqiQWIXnku", + textProposal.GetTitle()) + +require.Equal(t, "description-3: 
NJWzHdBNpAXKJPHWQdrGYcAHSctgVlqwqHoLfHsXUdStwfefwzqLuKEhmMyYLdbZrcPgYqjNHxPexsruwEGStAneKbWkQDDIlCWBLSiAASNhZqNFlPtfqPJoxKsgMdzjWqLWdqKQuJqWPMvwPQWZUtVMOTMYKJbfdlZsjdsomuScvDmbDkgRualsxDvRJuCAmPOXitIbcyWsKGSdrEunFAOdmXnsuyFVgJqEjbklvmwrUlsxjRSfKZxGcpayDdgoFcnVSutxjRgOSFzPwidAjubMncNweqpbxhXGchpZUxuFDOtpnhNUycJICRYqsPhPSCjPTWZFLkstHWJxvdPEAyEIxXgLwbNOjrgzmaujiBABBIXvcXpLrbcEWNNQsbjvgJFgJkflpRohHUutvnaUqoopuKjTDaemDeSdqbnOzcfJpcTuAQtZoiLZOoAIlboFDAeGmSNwkvObPRvRWQgWkGkxwtPauYgdkmypLjbqhlHJIQTntgWjXwZdOyYEdQRRLfMSdnxqppqUofqLbLQDUjwKVKfZJUJQPsWIPwIVaSTrmKskoAhvmZyJgeRpkaTfGgrJzAigcxtfshmiDCFkuiluqtMOkidknnTBtumyJYlIsWLnCQclqdVmikUoMOPdPWwYbJxXyqUVicNxFxyqJTenNblyyKSdlCbiXxUiYUiMwXZASYfvMDPFgxniSjWaZTjHkqlJvtBsXqwPpyVxnJVGFWhfSxgOcduoxkiopJvFjMmFabrGYeVtTXLhxVUEiGwYUvndjFGzDVntUvibiyZhfMQdMhgsiuysLMiePBNXifRLMsSmXPkwlPloUbJveCvUlaalhZHuvdkCnkSHbMbmOnrfEGPwQiACiPlnihiaOdbjPqPiTXaHDoJXjSlZmltGqNHHNrcKdlFSCdmVOuvDcBLdSklyGJmcLTbSFtALdGlPkqqecJrpLCXNPWefoTJNgEJlyMEPneVaxxduAAEqQpHWZodWyRkDAxzyMnFMcjSVqeRXLqsNyNtQBbuRvunZflWSbbvXXdkyLikYqutQhLPONXbvhcQZJPSWnOulqQaXmbfFxAkqfYeseSHOQidHwbcsOaMnSrrmGjjRmEMQNuknupMxJiIeVjmgZvbmjPIQTEhQFULQLBMPrxcFPvBinaOPYWGvYGRKxLZdwamfRQQFngcdSlvwjfaPbURasIsGJVHtcEAxnIIrhSriiXLOlbEBLXFElXJFGxHJczRBIxAuPKtBisjKBwfzZFagdNmjdwIRvwzLkFKWRTDPxJCmpzHUcrPiiXXHnOIlqNVoGSXZewdnCRhuxeYGPVTfrNTQNOxZmxInOazUYNTNDgzsxlgiVEHPKMfbesvPHUqpNkUqbzeuzfdrsuLDpKHMUbBMKczKKWOdYoIXoPYtEjfOnlQLoGnbQUCuERdEFaptwnsHzTJDsuZkKtzMpFaZobynZdzNydEeJJHDYaQcwUxcqvwfWwNUsCiLvkZQiSfzAHftYgAmVsXgtmcYgTqJIawstRYJrZdSxlfRiqTufgEQVambeZZmaAyRQbcmdjVUZZCgqDrSeltJGXPMgZnGDZqISrGDOClxXCxMjmKqEPwKHoOfOeyGmqWqihqjINXLqnyTesZePQRqaWDQNqpLgNrAUKulklmckTijUltQKuWQDwpLmDyxLppPVMwsmBIpOwQttYFMjgJQZLYFPmxWFLIeZihkRNnkzoypBICIxgEuYsVWGIGRbbxqVasYnstWomJnHwmtOhAFSpttRYYzBmyEtZXiCthvKvWszTXDbiJbGXMcrYpKAgvUVFtdKUfvdMfhAryctklUCEdjetjuGNfJjajZtvzdYaqInKtFPPLYmRaXPdQzxdSQfmZDEVHlHGEGNSPRFJuIfKLLfUmnHxHnRjmzQPNlqrXgifUdzAGKVabYqvcDeYoTYgPsBUqehrBhmQUgTvDnsdpuhUoxskDdppTsYMcnDIPSwKIqhXDCIxOuXrywahvV
avvHkPuaenjLmEbMgrkrQLHEAwrhHkPRNvonNQKqprqOFVZKAtpRSpvQUxMoXCMZLSSbnLEFsjVfANdQNQVwTmGxqVjVqRuxREAhuaDrFgEZpYKhwWPEKBevBfsOIcaZKyykQafzmGPLRAKDtTcJxJVgiiuUkmyMYuDUNEUhBEdoBLJnamtLmMJQgmLiUELIhLpiEvpOXOvXCPUeldLFqkKOwfacqIaRcnnZvERKRMCKUkMABbDHytQqQblrvoxOZkwzosQfDKGtIdfcXRJNqlBNwOCWoQBcEWyqrMlYZIAXYJmLfnjoJepgSFvrgajaBAIksoyeHqgqbGvpAstMIGmIhRYGGNPRIfOQKsGoKgxtsidhTaAePRCBFqZgPDWCIkqOJezGVkjfYUCZTlInbxBXwUAVRsxHTQtJFnnpmMvXDYCVlEmnZBKhmmxQOIQzxFWpJQkQoSAYzTEiDWEOsVLNrbfzeHFRyeYATakQQWmFDLPbVMCJcWjFGJjfqCoVzlbNNEsqxdSmNPjTjHYOkuEMFLkXYGaoJlraLqayMeCsTjWNRDPBywBJLAPVkGQqTwApVVwYAetlwSbzsdHWsTwSIcctkyKDuRWYDQikRqsKTMJchrliONJeaZIzwPQrNbTwxsGdwuduvibtYndRwpdsvyCktRHFalvUuEKMqXbItfGcNGWsGzubdPMYayOUOINjpcFBeESdwpdlTYmrPsLsVDhpTzoMegKrytNVZkfJRPuDCUXxSlSthOohmsuxmIZUedzxKmowKOdXTMcEtdpHaPWgIsIjrViKrQOCONlSuazmLuCUjLltOGXeNgJKedTVrrVCpWYWHyVrdXpKgNaMJVjbXxnVMSChdWKuZdqpisvrkBJPoURDYxWOtpjzZoOpWzyUuYNhCzRoHsMjmmWDcXzQiHIyjwdhPNwiPqFxeUfMVFQGImhykFgMIlQEoZCaRoqSBXTSWAeDumdbsOGtATwEdZlLfoBKiTvodQBGOEcuATWXfiinSjPmJKcWgQrTVYVrwlyMWhxqNbCMpIQNoSMGTiWfPTCezUjYcdWppnsYJihLQCqbNLRGgqrwHuIvsazapTpoPZIyZyeeSueJuTIhpHMEJfJpScshJubJGfkusuVBgfTWQoywSSliQQSfbvaHKiLnyjdSbpMkdBgXepoSsHnCQaYuHQqZsoEOmJCiuQUpJkmfyfbIShzlZpHFmLCsbknEAkKXKfRTRnuwdBeuOGgFbJLbDksHVapaRayWzwoYBEpmrlAxrUxYMUekKbpjPNfjUCjhbdMAnJmYQVZBQZkFVweHDAlaqJjRqoQPoOMLhyvYCzqEuQsAFoxWrzRnTVjStPadhsESlERnKhpEPsfDxNvxqcOyIulaCkmPdambLHvGhTZzysvqFauEgkFRItPfvisehFmoBhQqmkfbHVsgfHXDPJVyhwPllQpuYLRYvGodxKjkarnSNgsXoKEMlaSKxKdcVgvOkuLcfLFfdtXGTclqfPOfeoVLbqcjcXCUEBgAGplrkgsmIEhWRZLlGPGCwKWRaCKMkBHTAcypUrYjWwCLtOPVygMwMANGoQwFnCqFrUGMCRZUGJKTZIGPyldsifauoMnJPLTcDHmilcmahlqOELaAUYDBuzsVywnDQfwRLGIWozYaOAilMBcObErwgTDNGWnwQMUgFFSKtPDMEoEQCTKVREqrXZSGLqwTMcxHfWotDllNkIJPMbXzjDVjPOOjCFuIvTyhXKLyhUScOXvYthRXpPfKwMhptXaxIxgqBoUqzrWbaoLTVpQoottZyPFfNOoMioXHRuFwMRYUiKvcWPkrayyTLOCFJlAyslDameIuqVAuxErqFPEWIScKpBORIuZqoXlZuTvAjEdlEWDODFRregDTqGNoFBIHxvimmIZwLfFyKUfEWAnNBdtdzDmTPXtpHRGdIbuucfTjOygZsTxPjfweXhSUkMhPjMaxKlMIJMOXcnQfyzeOcbWwNbeH
", + textProposal.GetDescription()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgCancelProposal tests the normal scenario of a valid message of type TypeMsgCancelProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgCancelProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + / setup a proposal + proposer := accounts[0].Address + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "title", "summary", proposer, false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.SetProposal(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgCancelProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgCancelProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, proposer.String(), msg.Proposer) + +require.Equal(t, simulation.TypeMsgCancelProposal, 
sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgDeposit tests the normal scenario of a valid message of type TypeMsgDeposit. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgDeposit(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.SetProposal(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgDeposit(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgDeposit + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Depositor) + +require.NotEqual(t, len(msg.Amount), 0) + +require.Equal(t, "560969stake", msg.Amount[0].String()) + +require.Equal(t, 
simulation.TypeMsgDeposit, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgVote tests the normal scenario of a valid message of type TypeMsgVote. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVote(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.ActivateVotingPeriod(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgVote(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVote + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.Equal(t, v1.OptionYes, msg.Option) + +require.Equal(t, simulation.TypeMsgVote, sdk.MsgTypeURL(&msg)) +} + +/ 
TestSimulateMsgVoteWeighted tests the normal scenario of a valid message of type TypeMsgVoteWeighted. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVoteWeighted(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "test", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.ActivateVotingPeriod(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgVoteWeighted(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVoteWeighted + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.True(t, len(msg.Options) >= 1) + +require.Equal(t, simulation.TypeMsgVoteWeighted, sdk.MsgTypeURL(&msg)) +} + +type suite struct { + TxConfig 
client.TxConfig + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + GovKeeper *keeper.Keeper + StakingKeeper *stakingkeeper.Keeper + DistributionKeeper dk.Keeper + App *runtime.App +} + +/ returns context and an app with updated mint keeper +func createTestSuite(t *testing.T, isCheckTx bool) (suite, sdk.Context) { + t.Helper() + res := suite{ +} + +app, err := simtestutil.Setup( + depinject.Configs( + configurator.NewAppConfig( + configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.BankModule(), + configurator.StakingModule(), + configurator.ConsensusModule(), + configurator.DistributionModule(), + configurator.GovModule(), + ), + depinject.Supply(log.NewNopLogger()), + ), + &res.TxConfig, &res.AccountKeeper, &res.BankKeeper, &res.GovKeeper, &res.StakingKeeper, &res.DistributionKeeper) + +require.NoError(t, err) + ctx := app.NewContext(isCheckTx) + +res.App = app + return res, ctx +} + +func getTestingAccounts( + t *testing.T, + r *rand.Rand, + accountKeeper authkeeper.AccountKeeper, + bankKeeper bankkeeper.Keeper, + stakingKeeper *stakingkeeper.Keeper, + ctx sdk.Context, + n int, +) []simtypes.Account { + t.Helper() + accounts := simtypes.RandomAccounts(r, n) + initAmt := stakingKeeper.TokensFromConsensusPower(ctx, 200) + initCoins := sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, initAmt)) + + / add coins to the accounts + for _, account := range accounts { + acc := accountKeeper.NewAccountWithAddress(ctx, account.Address) + +accountKeeper.SetAccount(ctx, acc) + +require.NoError(t, testutil.FundAccount(ctx, bankKeeper, account.Address, initCoins)) +} + +return accounts +} +``` + +```go expandable +package simulation_test + +import ( + + "fmt" + "math/rand" + "testing" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/gogoproto/proto" + "github.com/stretchr/testify/require" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/client" + 
"github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/configurator" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + _ "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/bank/testutil" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + _ "github.com/cosmos/cosmos-sdk/x/distribution" + dk "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + _ "github.com/cosmos/cosmos-sdk/x/gov" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + "github.com/cosmos/cosmos-sdk/x/gov/simulation" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +var ( + _ simtypes.WeightedProposalMsg = MockWeightedProposals{ +} + /nolint:staticcheck / keeping around for legacy testing + _ simtypes.WeightedProposalContent = MockWeightedProposals{ +} +) + +type MockWeightedProposals struct { + n int +} + +func (m MockWeightedProposals) + +AppParamsKey() + +string { + return fmt.Sprintf("AppParamsKey-%d", m.n) +} + +func (m MockWeightedProposals) + +DefaultWeight() + +int { + return m.n +} + +func (m MockWeightedProposals) + +MsgSimulatorFn() + +simtypes.MsgSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +sdk.Msg { + return nil +} +} + +/nolint:staticcheck / retaining legacy content to maintain gov functionality +func (m MockWeightedProposals) + +ContentSimulatorFn() + +simtypes.ContentSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ 
[]simtypes.Account) + +simtypes.Content { + return v1beta1.NewTextProposal( + fmt.Sprintf("title-%d: %s", m.n, simtypes.RandStringOfLength(r, 100)), + fmt.Sprintf("description-%d: %s", m.n, simtypes.RandStringOfLength(r, 4000)), + ) +} +} + +func mockWeightedProposalMsg(n int) []simtypes.WeightedProposalMsg { + wpc := make([]simtypes.WeightedProposalMsg, n) + for i := range n { + wpc[i] = MockWeightedProposals{ + i +} + +} + +return wpc +} + +/ nolint / keeping this legacy proposal for testing +func mockWeightedLegacyProposalContent(n int) []simtypes.WeightedProposalContent { + wpc := make([]simtypes.WeightedProposalContent, n) + for i := range n { + wpc[i] = MockWeightedProposals{ + i +} + +} + +return wpc +} + +/ TestWeightedOperations tests the weights of the operations. +func TestWeightedOperations(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + ctx.WithChainID("test-chain") + appParams := make(simtypes.AppParams) + weightesOps := simulation.WeightedOperations(appParams, suite.TxConfig, suite.AccountKeeper, + suite.BankKeeper, suite.GovKeeper, mockWeightedProposalMsg(3), mockWeightedLegacyProposalContent(1), + ) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accs := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + expected := []struct { + weight int + opMsgRoute string + opMsgName string +}{ + { + simulation.DefaultWeightMsgDeposit, types.ModuleName, simulation.TypeMsgDeposit +}, + { + simulation.DefaultWeightMsgVote, types.ModuleName, simulation.TypeMsgVote +}, + { + simulation.DefaultWeightMsgVoteWeighted, types.ModuleName, simulation.TypeMsgVoteWeighted +}, + { + simulation.DefaultWeightMsgCancelProposal, types.ModuleName, simulation.TypeMsgCancelProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {1, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {2, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {0, 
types.ModuleName, simulation.TypeMsgSubmitProposal +}, +} + +require.Equal(t, len(weightesOps), len(expected), "number of operations should be the same") + for i, w := range weightesOps { + operationMsg, _, err := w.Op()(r, app.BaseApp, ctx, accs, ctx.ChainID()) + +require.NoError(t, err) + + / the following checks are very much dependent from the ordering of the output given + / by WeightedOperations. if the ordering in WeightedOperations changes some tests + / will fail + require.Equal(t, expected[i].weight, w.Weight(), "weight should be the same") + +require.Equal(t, expected[i].opMsgRoute, operationMsg.Route, "route should be the same") + +require.Equal(t, expected[i].opMsgName, operationMsg.Name, "operation Msg name should be the same") +} +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + _, err := app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgSubmitProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.MsgSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "47841094stake", 
msg.InitialDeposit[0].String()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgSubmitLegacyProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal submitted as legacy content. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitLegacyProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + _, err := app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + / execute operation + op := simulation.SimulateMsgSubmitLegacyProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.ContentSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +var msgLegacyContent v1.MsgExecLegacyContent + err = proto.Unmarshal(msg.Messages[0].Value, &msgLegacyContent) + +require.NoError(t, err) + +var textProposal v1beta1.TextProposal + err = proto.Unmarshal(msgLegacyContent.Content.Value, &textProposal) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1p8wcgrjr4pjju90xg6u9cgq55dxwq8j7u4x9a0", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "25166256stake", msg.InitialDeposit[0].String()) + +require.Equal(t, "title-3: ZBSpYuLyYggwexjxusrBqDOTtGTOWeLrQKjLxzIivHSlcxgdXhhuTSkuxKGLwQvuyNhYFmBZHeAerqyNEUzXPFGkqEGqiQWIXnku", + textProposal.GetTitle()) + +require.Equal(t, "description-3: 
NJWzHdBNpAXKJPHWQdrGYcAHSctgVlqwqHoLfHsXUdStwfefwzqLuKEhmMyYLdbZrcPgYqjNHxPexsruwEGStAneKbWkQDDIlCWBLSiAASNhZqNFlPtfqPJoxKsgMdzjWqLWdqKQuJqWPMvwPQWZUtVMOTMYKJbfdlZsjdsomuScvDmbDkgRualsxDvRJuCAmPOXitIbcyWsKGSdrEunFAOdmXnsuyFVgJqEjbklvmwrUlsxjRSfKZxGcpayDdgoFcnVSutxjRgOSFzPwidAjubMncNweqpbxhXGchpZUxuFDOtpnhNUycJICRYqsPhPSCjPTWZFLkstHWJxvdPEAyEIxXgLwbNOjrgzmaujiBABBIXvcXpLrbcEWNNQsbjvgJFgJkflpRohHUutvnaUqoopuKjTDaemDeSdqbnOzcfJpcTuAQtZoiLZOoAIlboFDAeGmSNwkvObPRvRWQgWkGkxwtPauYgdkmypLjbqhlHJIQTntgWjXwZdOyYEdQRRLfMSdnxqppqUofqLbLQDUjwKVKfZJUJQPsWIPwIVaSTrmKskoAhvmZyJgeRpkaTfGgrJzAigcxtfshmiDCFkuiluqtMOkidknnTBtumyJYlIsWLnCQclqdVmikUoMOPdPWwYbJxXyqUVicNxFxyqJTenNblyyKSdlCbiXxUiYUiMwXZASYfvMDPFgxniSjWaZTjHkqlJvtBsXqwPpyVxnJVGFWhfSxgOcduoxkiopJvFjMmFabrGYeVtTXLhxVUEiGwYUvndjFGzDVntUvibiyZhfMQdMhgsiuysLMiePBNXifRLMsSmXPkwlPloUbJveCvUlaalhZHuvdkCnkSHbMbmOnrfEGPwQiACiPlnihiaOdbjPqPiTXaHDoJXjSlZmltGqNHHNrcKdlFSCdmVOuvDcBLdSklyGJmcLTbSFtALdGlPkqqecJrpLCXNPWefoTJNgEJlyMEPneVaxxduAAEqQpHWZodWyRkDAxzyMnFMcjSVqeRXLqsNyNtQBbuRvunZflWSbbvXXdkyLikYqutQhLPONXbvhcQZJPSWnOulqQaXmbfFxAkqfYeseSHOQidHwbcsOaMnSrrmGjjRmEMQNuknupMxJiIeVjmgZvbmjPIQTEhQFULQLBMPrxcFPvBinaOPYWGvYGRKxLZdwamfRQQFngcdSlvwjfaPbURasIsGJVHtcEAxnIIrhSriiXLOlbEBLXFElXJFGxHJczRBIxAuPKtBisjKBwfzZFagdNmjdwIRvwzLkFKWRTDPxJCmpzHUcrPiiXXHnOIlqNVoGSXZewdnCRhuxeYGPVTfrNTQNOxZmxInOazUYNTNDgzsxlgiVEHPKMfbesvPHUqpNkUqbzeuzfdrsuLDpKHMUbBMKczKKWOdYoIXoPYtEjfOnlQLoGnbQUCuERdEFaptwnsHzTJDsuZkKtzMpFaZobynZdzNydEeJJHDYaQcwUxcqvwfWwNUsCiLvkZQiSfzAHftYgAmVsXgtmcYgTqJIawstRYJrZdSxlfRiqTufgEQVambeZZmaAyRQbcmdjVUZZCgqDrSeltJGXPMgZnGDZqISrGDOClxXCxMjmKqEPwKHoOfOeyGmqWqihqjINXLqnyTesZePQRqaWDQNqpLgNrAUKulklmckTijUltQKuWQDwpLmDyxLppPVMwsmBIpOwQttYFMjgJQZLYFPmxWFLIeZihkRNnkzoypBICIxgEuYsVWGIGRbbxqVasYnstWomJnHwmtOhAFSpttRYYzBmyEtZXiCthvKvWszTXDbiJbGXMcrYpKAgvUVFtdKUfvdMfhAryctklUCEdjetjuGNfJjajZtvzdYaqInKtFPPLYmRaXPdQzxdSQfmZDEVHlHGEGNSPRFJuIfKLLfUmnHxHnRjmzQPNlqrXgifUdzAGKVabYqvcDeYoTYgPsBUqehrBhmQUgTvDnsdpuhUoxskDdppTsYMcnDIPSwKIqhXDCIxOuXrywahvV
avvHkPuaenjLmEbMgrkrQLHEAwrhHkPRNvonNQKqprqOFVZKAtpRSpvQUxMoXCMZLSSbnLEFsjVfANdQNQVwTmGxqVjVqRuxREAhuaDrFgEZpYKhwWPEKBevBfsOIcaZKyykQafzmGPLRAKDtTcJxJVgiiuUkmyMYuDUNEUhBEdoBLJnamtLmMJQgmLiUELIhLpiEvpOXOvXCPUeldLFqkKOwfacqIaRcnnZvERKRMCKUkMABbDHytQqQblrvoxOZkwzosQfDKGtIdfcXRJNqlBNwOCWoQBcEWyqrMlYZIAXYJmLfnjoJepgSFvrgajaBAIksoyeHqgqbGvpAstMIGmIhRYGGNPRIfOQKsGoKgxtsidhTaAePRCBFqZgPDWCIkqOJezGVkjfYUCZTlInbxBXwUAVRsxHTQtJFnnpmMvXDYCVlEmnZBKhmmxQOIQzxFWpJQkQoSAYzTEiDWEOsVLNrbfzeHFRyeYATakQQWmFDLPbVMCJcWjFGJjfqCoVzlbNNEsqxdSmNPjTjHYOkuEMFLkXYGaoJlraLqayMeCsTjWNRDPBywBJLAPVkGQqTwApVVwYAetlwSbzsdHWsTwSIcctkyKDuRWYDQikRqsKTMJchrliONJeaZIzwPQrNbTwxsGdwuduvibtYndRwpdsvyCktRHFalvUuEKMqXbItfGcNGWsGzubdPMYayOUOINjpcFBeESdwpdlTYmrPsLsVDhpTzoMegKrytNVZkfJRPuDCUXxSlSthOohmsuxmIZUedzxKmowKOdXTMcEtdpHaPWgIsIjrViKrQOCONlSuazmLuCUjLltOGXeNgJKedTVrrVCpWYWHyVrdXpKgNaMJVjbXxnVMSChdWKuZdqpisvrkBJPoURDYxWOtpjzZoOpWzyUuYNhCzRoHsMjmmWDcXzQiHIyjwdhPNwiPqFxeUfMVFQGImhykFgMIlQEoZCaRoqSBXTSWAeDumdbsOGtATwEdZlLfoBKiTvodQBGOEcuATWXfiinSjPmJKcWgQrTVYVrwlyMWhxqNbCMpIQNoSMGTiWfPTCezUjYcdWppnsYJihLQCqbNLRGgqrwHuIvsazapTpoPZIyZyeeSueJuTIhpHMEJfJpScshJubJGfkusuVBgfTWQoywSSliQQSfbvaHKiLnyjdSbpMkdBgXepoSsHnCQaYuHQqZsoEOmJCiuQUpJkmfyfbIShzlZpHFmLCsbknEAkKXKfRTRnuwdBeuOGgFbJLbDksHVapaRayWzwoYBEpmrlAxrUxYMUekKbpjPNfjUCjhbdMAnJmYQVZBQZkFVweHDAlaqJjRqoQPoOMLhyvYCzqEuQsAFoxWrzRnTVjStPadhsESlERnKhpEPsfDxNvxqcOyIulaCkmPdambLHvGhTZzysvqFauEgkFRItPfvisehFmoBhQqmkfbHVsgfHXDPJVyhwPllQpuYLRYvGodxKjkarnSNgsXoKEMlaSKxKdcVgvOkuLcfLFfdtXGTclqfPOfeoVLbqcjcXCUEBgAGplrkgsmIEhWRZLlGPGCwKWRaCKMkBHTAcypUrYjWwCLtOPVygMwMANGoQwFnCqFrUGMCRZUGJKTZIGPyldsifauoMnJPLTcDHmilcmahlqOELaAUYDBuzsVywnDQfwRLGIWozYaOAilMBcObErwgTDNGWnwQMUgFFSKtPDMEoEQCTKVREqrXZSGLqwTMcxHfWotDllNkIJPMbXzjDVjPOOjCFuIvTyhXKLyhUScOXvYthRXpPfKwMhptXaxIxgqBoUqzrWbaoLTVpQoottZyPFfNOoMioXHRuFwMRYUiKvcWPkrayyTLOCFJlAyslDameIuqVAuxErqFPEWIScKpBORIuZqoXlZuTvAjEdlEWDODFRregDTqGNoFBIHxvimmIZwLfFyKUfEWAnNBdtdzDmTPXtpHRGdIbuucfTjOygZsTxPjfweXhSUkMhPjMaxKlMIJMOXcnQfyzeOcbWwNbeH
", + textProposal.GetDescription()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgCancelProposal tests the normal scenario of a valid message of type TypeMsgCancelProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgCancelProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + / setup a proposal + proposer := accounts[0].Address + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "title", "summary", proposer, false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.SetProposal(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgCancelProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgCancelProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, proposer.String(), msg.Proposer) + +require.Equal(t, simulation.TypeMsgCancelProposal, 
sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgDeposit tests the normal scenario of a valid message of type TypeMsgDeposit. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgDeposit(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.SetProposal(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgDeposit(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgDeposit + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Depositor) + +require.NotEqual(t, len(msg.Amount), 0) + +require.Equal(t, "560969stake", msg.Amount[0].String()) + +require.Equal(t, 
simulation.TypeMsgDeposit, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgVote tests the normal scenario of a valid message of type TypeMsgVote. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVote(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.ActivateVotingPeriod(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgVote(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVote + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.Equal(t, v1.OptionYes, msg.Option) + +require.Equal(t, simulation.TypeMsgVote, sdk.MsgTypeURL(&msg)) +} + +/ 
TestSimulateMsgVoteWeighted tests the normal scenario of a valid message of type TypeMsgVoteWeighted. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVoteWeighted(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "test", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.ActivateVotingPeriod(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgVoteWeighted(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVoteWeighted + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.True(t, len(msg.Options) >= 1) + +require.Equal(t, simulation.TypeMsgVoteWeighted, sdk.MsgTypeURL(&msg)) +} + +type suite struct { + TxConfig 
client.TxConfig + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + GovKeeper *keeper.Keeper + StakingKeeper *stakingkeeper.Keeper + DistributionKeeper dk.Keeper + App *runtime.App +} + +/ returns context and an app with updated mint keeper +func createTestSuite(t *testing.T, isCheckTx bool) (suite, sdk.Context) { + t.Helper() + res := suite{ +} + +app, err := simtestutil.Setup( + depinject.Configs( + configurator.NewAppConfig( + configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.BankModule(), + configurator.StakingModule(), + configurator.ConsensusModule(), + configurator.DistributionModule(), + configurator.GovModule(), + ), + depinject.Supply(log.NewNopLogger()), + ), + &res.TxConfig, &res.AccountKeeper, &res.BankKeeper, &res.GovKeeper, &res.StakingKeeper, &res.DistributionKeeper) + +require.NoError(t, err) + ctx := app.NewContext(isCheckTx) + +res.App = app + return res, ctx +} + +func getTestingAccounts( + t *testing.T, + r *rand.Rand, + accountKeeper authkeeper.AccountKeeper, + bankKeeper bankkeeper.Keeper, + stakingKeeper *stakingkeeper.Keeper, + ctx sdk.Context, + n int, +) []simtypes.Account { + t.Helper() + accounts := simtypes.RandomAccounts(r, n) + initAmt := stakingKeeper.TokensFromConsensusPower(ctx, 200) + initCoins := sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, initAmt)) + + / add coins to the accounts + for _, account := range accounts { + acc := accountKeeper.NewAccountWithAddress(ctx, account.Address) + +accountKeeper.SetAccount(ctx, acc) + +require.NoError(t, testutil.FundAccount(ctx, bankKeeper, account.Address, initCoins)) +} + +return accounts +} +``` + +## End-to-end Tests + +End-to-end tests are at the top of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html). +They must test the whole application flow, from the user perspective (for instance, CLI tests). 
They are located under [`/tests/e2e`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e). + +{/* @julienrbrt: makes more sense to use an app wired app to have 0 simapp dependencies */} +For this, the SDK uses `simapp`, but you should use your own application (`appd`). +Here are some examples: + +* SDK E2E tests: [Link](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e). +* Cosmos Hub E2E tests: [Link](https://github.com/cosmos/gaia/tree/main/tests/e2e). +* Osmosis E2E tests: [Link](https://github.com/osmosis-labs/osmosis/tree/main/tests/e2e). + + +**warning** +The SDK is in the process of creating its E2E tests, as defined in [ADR-59](https://docs.cosmos.network/main/build/architecture/adr-059-test-scopes). This page will eventually be updated with better examples. + + +## Learn More + +Learn more about testing scope in [ADR-59](https://docs.cosmos.network/main/build/architecture/adr-059-test-scopes). diff --git a/docs/sdk/next/documentation/operations/txs.mdx b/docs/sdk/next/documentation/operations/txs.mdx new file mode 100644 index 00000000..fd479106 --- /dev/null +++ b/docs/sdk/next/documentation/operations/txs.mdx @@ -0,0 +1,556 @@ +--- +title: "Generating, Signing and Broadcasting Transactions" +--- + +## Synopsis + +This document describes how to generate an (unsigned) transaction, sign it (with one or multiple keys), and broadcast it to the network. + +## Using the CLI + +The easiest way to send transactions is using the CLI, as we have seen in the previous page when [interacting with a node](/docs/sdk/next/documentation/operations/interact-node#using-the-cli). For example, running the following command + +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --keyring-backend test +``` + +will run the following steps: + +- generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console.
+- ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account. +- fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because we have [set up the CLI's keyring](/docs/sdk/next/documentation/operations/keyring) in a previous step. +- sign the generated transaction with the keyring's account. +- broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint. + +The CLI bundles all the necessary steps into a simple user experience. However, it's also possible to run each step individually. + +### Generating a Transaction + +Generating a transaction can be done simply by appending the `--generate-only` flag to any `tx` command, e.g.: + +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --generate-only +``` + +This will output the unsigned transaction as JSON in the console. We can also save the unsigned transaction to a file (to be passed around between signers more easily) by appending `> unsigned_tx.json` to the above command. + +### Signing a Transaction + +Signing a transaction using the CLI requires the unsigned transaction to be saved in a file. Let's assume the unsigned transaction is in a file called `unsigned_tx.json` in the current directory (see the previous paragraph). Then, simply run the following command: + +```bash +simd tx sign unsigned_tx.json --chain-id my-test-chain --keyring-backend test --from $MY_VALIDATOR_ADDRESS +``` + +This command will decode the unsigned transaction and sign it using `SIGN_MODE_DIRECT` with `$MY_VALIDATOR_ADDRESS`'s key, which we already set up in the keyring. The signed transaction will be output as JSON to the console, and, as above, we can save it to a file by appending `--output-document signed_tx.json`.
+ +Some useful flags to consider in the `tx sign` command: + +- `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`, +- `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for offline signing, i.e. signing in a secure environment that has no internet access. + +#### Signing with Multiple Signers + + + Please note that signing a transaction with multiple signers or with a + multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not + yet possible. You may follow [this Github + issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info. + + +Signing with multiple signers is done with the `tx multisign` command. This command assumes that all signers use `SIGN_MODE_LEGACY_AMINO_JSON`. The flow is similar to the `tx sign` command flow, but instead of signing an unsigned transaction file, each signer signs the file signed by the previous signer(s). The `tx multisign` command appends signatures to the existing transaction. It is important that signers sign the transaction **in the same order** as given by the transaction, which is retrievable using the `GetSigners()` method. + +For example, starting with `unsigned_tx.json`, and assuming the transaction has 4 signers, we would run: + +```bash +# Let signer1 sign the unsigned tx. +simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json +# Now signer1 sends the partial_tx_1.json file to signer2.
+# Signer2 appends their signature: +simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json +# Signer2 sends the partial_tx_2.json file to signer3, and signer3 appends their signature: +simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json +``` + +### Broadcasting a Transaction + +Broadcasting a transaction is done using the following command, where `signed_tx.json` is the signed transaction from the previous step: + +```bash +simd tx broadcast signed_tx.json +``` + +You may optionally pass the `--broadcast-mode` flag to specify which response to receive from the node: + +- `sync`: the CLI waits for a CheckTx execution response only. +- `async`: the CLI returns immediately (the transaction might fail). + +### Encoding a Transaction + +In order to broadcast a transaction using the gRPC or REST endpoints, the transaction needs to be encoded first. This can be done using the CLI. + +Encoding a transaction is done using the following command: + +```bash +simd tx encode signed_tx.json +``` + +This will read the transaction from the file, serialize it using Protobuf, and output the transaction bytes as base64 in the console. + +### Decoding a Transaction + +The CLI can also be used to decode transaction bytes. + +Decoding a transaction is done using the following command: + +```bash +simd tx decode [protobuf-byte-string] +``` + +This will decode the transaction bytes and output the transaction as JSON in the console. You can also save the transaction to a file by appending `> tx.json` to the above command. + +## Programmatically with Go + +It is possible to manipulate transactions programmatically via Go using the Cosmos SDK's `TxBuilder` interface. + +### Generating a Transaction + +Before generating a transaction, a new instance of a `TxBuilder` needs to be created. Since the Cosmos SDK supports both Amino and Protobuf transactions, the first step is to decide which encoding scheme to use.
All the subsequent steps remain unchanged, whether you're using Amino or Protobuf, as `TxBuilder` abstracts the encoding mechanisms. In the following snippet, we will use Protobuf. + +```go expandable +import ( + + "github.com/cosmos/cosmos-sdk/simapp" +) + +func sendTx() + +error { + / Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function. + app := simapp.NewSimApp(...) + + / Create a new TxBuilder. + txBuilder := app.TxConfig().NewTxBuilder() + + / --snip-- +} +``` + +We can also set up some keys and addresses that will send and receive the transactions. Here, for the purpose of the tutorial, we will be using some dummy data to create keys. + +```go +import ( + + "github.com/cosmos/cosmos-sdk/testutil/testdata" +) + +priv1, _, addr1 := testdata.KeyTestPubAddr() + +priv2, _, addr2 := testdata.KeyTestPubAddr() + +priv3, _, addr3 := testdata.KeyTestPubAddr() +``` + +Populating the `TxBuilder` can be done via its methods: + +```go expandable +package client + +import ( + + "time" + + txsigning "cosmossdk.io/x/tx/signing" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. 
+ TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() *txsigning.HandlerMap + SigningContext() *txsigning.Context +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTimeoutHeight(height uint64) + +SetTimeoutTimestamp(timestamp time.Time) + +SetUnordered(v bool) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} + + / ExtendedTxBuilder extends the TxBuilder interface, + / which is used to set extension options to be included in a transaction. + ExtendedTxBuilder interface { + SetExtensionOptions(extOpts ...*codectypes.Any) +} +) +``` + +```go expandable +import ( + + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +func sendTx() + +error { + / --snip-- + + / Define two x/bank MsgSend messages: + / - from addr1 to addr3, + / - from addr2 to addr3. + / This means that the transaction needs two signers: addr1 and addr2. + msg1 := banktypes.NewMsgSend(addr1, addr3, types.NewCoins(types.NewInt64Coin("atom", 12))) + +msg2 := banktypes.NewMsgSend(addr2, addr3, types.NewCoins(types.NewInt64Coin("atom", 34))) + err := txBuilder.SetMsgs(msg1, msg2) + if err != nil { + return err +} + +txBuilder.SetGasLimit(...) + +txBuilder.SetFeeAmount(...) + +txBuilder.SetMemo(...) + +txBuilder.SetTimeoutHeight(...) +} +``` + +At this point, `TxBuilder`'s underlying transaction is ready to be signed. 
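The `SetGasLimit`/`SetFeeAmount` placeholders above are typically derived from a gas limit and a gas price. The arithmetic is a simple rounded-up multiplication; here is a stdlib-only sketch with made-up numbers (this is our own illustration, not SDK code — the helper name and the `25/1000` price are assumptions):

```go
package main

import "fmt"

// feeFromGas returns ceil(gasLimit * num / den): the fee, in the smallest
// denom unit, for a gas price of num/den per gas unit (e.g. 0.025 -> 25/1000).
// Integer math avoids float rounding surprises.
func feeFromGas(gasLimit, num, den uint64) uint64 {
	return (gasLimit*num + den - 1) / den
}

func main() {
	// 200_000 gas at a price of 0.025 (25/1000) per gas unit.
	fmt.Println(feeFromGas(200_000, 25, 1_000)) // 5000
}
```

The result would then be passed to `txBuilder.SetGasLimit(200_000)` and `txBuilder.SetFeeAmount(...)` with the chain's fee denom.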
+ +#### Generating an Unordered Transaction + +Starting with Cosmos SDK v0.53.0, users may send unordered transactions to chains that have the feature enabled. + + + +Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value, +the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions. +Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. + + + +Using the example above, we can set the required fields to mark a transaction as unordered. +By default, unordered transactions charge an extra 2240 units of gas to offset the additional storage overhead that supports their functionality. +The extra units of gas are customizable and therefore vary by chain, so be sure to check the chain's ante handler for the gas value set, if any. + +```go +func sendTx() + +error { + / --snip-- + expiration := 5 * time.Minute + txBuilder.SetUnordered(true) + +txBuilder.SetTimeoutTimestamp(time.Now().Add(expiration + (1 * time.Nanosecond))) +} +``` + +Unordered transactions from the same account must each use a unique timeout timestamp value; the difference between timestamps may be as small as one nanosecond. + +```go expandable +import ( + + "github.com/cosmos/cosmos-sdk/client" +) + +func sendMessages(txBuilders []client.TxBuilder) + +error { + / --snip-- + expiration := 5 * time.Minute + for _, txb := range txBuilders { + txb.SetUnordered(true) + +txb.SetTimeoutTimestamp(time.Now().Add(expiration + (1 * time.Nanosecond))) +} +} +``` + +### Signing a Transaction + +We set the encoding config to use Protobuf, which will use `SIGN_MODE_DIRECT` by default. As per [ADR-020](/docs/sdk/next/documentation/legacy/adr-comprehensive), each signer needs to sign the `SignerInfo`s of all other signers.
This means that we need to perform two steps sequentially: + +- for each signer, populate the signer's `SignerInfo` inside `TxBuilder`, +- once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed). + +In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`. The current API requires us to first perform a round of `SetSignatures()` _with empty signatures_, only to populate `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload. + +```go expandable +import ( + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +func sendTx() + +error { + / --snip-- + privs := []cryptotypes.PrivKey{ + priv1, priv2 +} + accNums:= []uint64{..., ... +} / The accounts' account numbers + accSeqs:= []uint64{..., ... +} / The accounts' sequence numbers + + / First round: we gather all the signer infos. We use the "set empty + / signature" hack to do that. + var sigsV2 []signing.SignatureV2 + for i, priv := range privs { + sigV2 := signing.SignatureV2{ + PubKey: priv.PubKey(), + Data: &signing.SingleSignatureData{ + SignMode: encCfg.TxConfig.SignModeHandler().DefaultMode(), + Signature: nil, +}, + Sequence: accSeqs[i], +} + +sigsV2 = append(sigsV2, sigV2) +} + err := txBuilder.SetSignatures(sigsV2...) + if err != nil { + return err +} + + / Second round: all signer infos are set, so each signer can sign. + sigsV2 = []signing.SignatureV2{ +} + for i, priv := range privs { + signerData := xauthsigning.SignerData{ + ChainID: chainID, + AccountNumber: accNums[i], + Sequence: accSeqs[i], +} + +sigV2, err := tx.SignWithPrivKey( + encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData, + txBuilder, priv, encCfg.TxConfig, accSeqs[i]) + if err != nil { + return nil, err +} + +sigsV2 = append(sigsV2, sigV2) +} + +err = txBuilder.SetSignatures(sigsV2...) 
+ if err != nil { + return err +} +} +``` + +The `TxBuilder` is now correctly populated. To print it, you can use the `TxConfig` interface from the initial encoding config `encCfg`: + +```go expandable +func sendTx() + +error { + / --snip-- + + / Generated Protobuf-encoded bytes. + txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx()) + if err != nil { + return err +} + + / Generate a JSON string. + txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx()) + if err != nil { + return err +} + txJSON := string(txJSONBytes) +} +``` + +### Broadcasting a Transaction + +The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also possible. An overview of the differences between these methods is exposed [here](/docs/sdk/next/api-reference/service-apis/grpc_rest). For this tutorial, we will only describe the gRPC method. + +```go expandable +import ( + + "context" + "fmt" + "google.golang.org/grpc" + "github.com/cosmos/cosmos-sdk/types/tx" +) + +func sendTx(ctx context.Context) + +error { + / --snip-- + + / Create a connection to the gRPC server. + grpcConn := grpc.Dial( + "127.0.0.1:9090", / Or your gRPC server address. + grpc.WithInsecure(), / The Cosmos SDK doesn't support any transport security mechanism. + ) + +defer grpcConn.Close() + + / Broadcast the tx via gRPC. We create a new client for the Protobuf Tx + / service. + txClient := tx.NewServiceClient(grpcConn) + / We then call the BroadcastTx method on this client. + grpcRes, err := txClient.BroadcastTx( + ctx, + &tx.BroadcastTxRequest{ + Mode: tx.BroadcastMode_BROADCAST_MODE_SYNC, + TxBytes: txBytes, / Proto-binary of the signed transaction, see previous step. 
+}, + ) + if err != nil { + return err +} + +fmt.Println(grpcRes.TxResponse.Code) / Should be `0` if the tx is successful + + return nil +} +``` + +#### Simulating a Transaction + +Before broadcasting a transaction, we sometimes may want to dry-run the transaction, to estimate some information about the transaction without actually committing it. This is called simulating a transaction, and can be done as follows: + +```go expandable +import ( + + "context" + "fmt" + "testing" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/types/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" +) + +func simulateTx() + +error { + / --snip-- + + / Simulate the tx via gRPC. We create a new client for the Protobuf Tx + / service. + txClient := tx.NewServiceClient(grpcConn) + txBytes := /* Fill in with your signed transaction bytes. */ + + / We then call the Simulate method on this client. + grpcRes, err := txClient.Simulate( + context.Background(), + &tx.SimulateRequest{ + TxBytes: txBytes, +}, + ) + if err != nil { + return err +} + +fmt.Println(grpcRes.GasInfo) / Prints estimated gas used. + + return nil +} +``` + +## Using gRPC + +It is not possible to generate or sign a transaction using gRPC, only to broadcast one. In order to broadcast a transaction using gRPC, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. + +### Broadcasting a Transaction + +Broadcasting a transaction using the gRPC endpoint can be done by sending a `BroadcastTx` request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: + +```bash +grpcurl -plaintext \ + -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/BroadcastTx +``` + +## Using REST + +It is not possible to generate or sign a transaction using REST, only to broadcast one. 
In order to broadcast a transaction using REST, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. + +### Broadcasting a Transaction + +Broadcasting a transaction using the REST endpoint (served by `gRPC-gateway`) can be done by sending a POST request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: + +```bash +curl -X POST \ + -H "Content-Type: application/json" \ + -d' {"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ + localhost:1317/cosmos/tx/v1beta1/txs +``` + +## Using CosmJS (JavaScript & TypeScript) + +CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. Please see [Link](https://cosmos.github.io/cosmjs) for more information. As of January 2021, CosmJS documentation is still a work in progress. diff --git a/docs/sdk/next/documentation/operations/upgrade-advanced.mdx b/docs/sdk/next/documentation/operations/upgrade-advanced.mdx new file mode 100644 index 00000000..962446ee --- /dev/null +++ b/docs/sdk/next/documentation/operations/upgrade-advanced.mdx @@ -0,0 +1,167 @@ +--- +title: In-Place Store Migrations +--- + + + Read and understand all the in-place store migration documentation before you + run a migration on a live chain. + + +## Synopsis + +Upgrade your app modules smoothly with custom in-place store migration logic. + +The Cosmos SDK uses two methods to perform upgrades: + +- Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file. + +- Performing upgrades in place, which significantly decreases the upgrade time for chains with a large state. Use the [Module Upgrade Guide](/docs/sdk/next/documentation/module-system/upgrade) to set up your application modules to take advantage of in-place upgrades. + +This document provides steps to use the In-Place Store Migrations upgrade method.
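The sections below revolve around the `VersionMap`, a simple module-name to consensus-version mapping. As a stdlib-only sketch (our own illustration, not SDK code — the helper name is an assumption), the diff that drives in-place migrations can be pictured like this:

```go
package main

import (
	"fmt"
	"sort"
)

// VersionMap mirrors the SDK's module-name -> consensus-version mapping.
type VersionMap map[string]uint64

// modulesToMigrate returns the modules whose stored version is behind the
// version in the new binary. Brand-new modules (absent from `stored`, so
// their stored version reads as 0) are included too.
func modulesToMigrate(stored, current VersionMap) []string {
	var out []string
	for name, v := range current {
		if stored[name] < v {
			out = append(out, name)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	stored := VersionMap{"bank": 2, "staking": 3}
	current := VersionMap{"bank": 3, "staking": 3, "foo": 1} // "foo" is new
	fmt.Println(modulesToMigrate(stored, current))          // [bank foo]
}
```

In the real SDK, this diff is computed by `RunMigrations`, which then runs each module's registered migration scripts (or `InitGenesis` for new modules).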
+ +## Tracking Module Versions + +Each module gets assigned a consensus version by the module developer. The consensus version serves as the breaking change version of the module. The Cosmos SDK keeps track of all module consensus versions in the `x/upgrade` `VersionMap` store. During an upgrade, the difference between the old `VersionMap` stored in state and the new `VersionMap` is calculated by the Cosmos SDK. For each identified difference, the module-specific migrations are run and the respective consensus version of each upgraded module is incremented. + +### Consensus Version + +The consensus version is defined on each app module by the module developer and serves as the breaking change version of the module. The consensus version informs the Cosmos SDK on which modules need to be upgraded. For example, if the bank module was version 2 and an upgrade introduces bank module version 3, the Cosmos SDK upgrades the bank module and runs the "version 2 to 3" migration script. + +### Version Map + +The version map is a mapping of module names to consensus versions. The map is persisted to `x/upgrade`'s state for use during in-place migrations. When migrations finish, the updated version map is persisted in the state. + +## Upgrade Handlers + +Upgrades use an `UpgradeHandler` to facilitate migrations. The `UpgradeHandler` functions implemented by the app developer must conform to the following function signature. These functions retrieve the `VersionMap` from `x/upgrade`'s state and return the new `VersionMap` to be stored in `x/upgrade` after the upgrade. The diff between the two `VersionMap`s determines which modules need upgrading. + +```go +type UpgradeHandler func(ctx sdk.Context, plan Plan, fromVM VersionMap) (VersionMap, error) +``` + +Inside these functions, you must perform any upgrade logic to include in the provided `plan`.
All upgrade handler functions must end with the following line of code: + +```go +return app.mm.RunMigrations(ctx, cfg, fromVM) +``` + +## Running Migrations + +Migrations are run inside an `UpgradeHandler` using `app.mm.RunMigrations(ctx, cfg, vm)`. The `UpgradeHandler` functions describe the functionality to occur during an upgrade. The `RunMigrations` function loops through the `VersionMap` argument and runs the migration scripts for every module whose stored version is lower than the version in the new binary. After the migrations are finished, a new `VersionMap` is returned to persist the upgraded module versions to state. + +```go +cfg := module.NewConfigurator(...) + +app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + + / ... + / additional upgrade logic + / ... + + / returns a VersionMap with the updated module ConsensusVersions + return app.mm.RunMigrations(ctx, cfg, fromVM) +}) +``` + +To learn more about configuring migration scripts for your modules, see the [Module Upgrade Guide](/docs/sdk/next/documentation/module-system/upgrade). + +### Order Of Migrations + +By default, all migrations are run in ascending alphabetical order of module name, except `x/auth`, which is run last. The reason is state dependencies between `x/auth` and other modules (you can read more in [issue #10606](https://github.com/cosmos/cosmos-sdk/issues/10606)). + +If you want to change the order of migrations, call `app.mm.SetOrderMigrations(module1, module2, ...)` in your `app.go` file. The function will panic if you forget to include a module in the argument list. + +## Adding New Modules During Upgrades + +You can introduce entirely new modules to the application during an upgrade. New modules are recognized because they have not yet been registered in `x/upgrade`'s `VersionMap` store.
In this case, `RunMigrations` calls the `InitGenesis` function from the corresponding module to set up its initial state.

### Add StoreUpgrades for New Modules

All chains preparing to run in-place store migrations will need to manually add store upgrades for new modules and then configure the store loader to apply those upgrades. This ensures that the new module's stores are added to the multistore before the migrations begin.

```go expandable
upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
if err != nil {
  panic(err)
}

if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
  storeUpgrades := storetypes.StoreUpgrades{
    // add store upgrades for new modules
    // Example:
    // Added: []string{"foo", "bar"},
    // ...
  }

  // configure store loader that checks if version == upgradeHeight and applies store upgrades
  app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
}
```

## Genesis State

When starting a new chain, the consensus version of each module MUST be saved to state during the application's genesis. To save the consensus version, add the following line to the `InitChainer` method in `app.go`:

```diff
func (app *MyApp) InitChainer(ctx sdk.Context, req abci.InitChainRequest) abci.InitChainResponse {
  ...
+ app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap())
  ...
}
```

This information is used by the Cosmos SDK to detect when modules with newer versions are introduced to the app.

For a new module `foo`, `InitGenesis` is called by `RunMigrations` only when `foo` is registered in the module manager but is not set in the `fromVM`.
Therefore, if you want to skip `InitGenesis` when a new module is added to the app, you should set its module version in `fromVM` to the module consensus version:

```go
app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
  // ...

  // Set foo's version to the latest ConsensusVersion in the VersionMap.
  // This will skip running InitGenesis on Foo
  fromVM[foo.ModuleName] = foo.AppModule{}.ConsensusVersion()

  return app.mm.RunMigrations(ctx, cfg, fromVM)
})
```

### Overwriting Genesis Functions

The Cosmos SDK offers modules that the application developer can import in their app. These modules often have an `InitGenesis` function already defined.

You can write your own `InitGenesis` function for an imported module. To do this, manually trigger your custom genesis function in the upgrade handler.

You MUST manually set the consensus version in the version map passed to the `UpgradeHandler` function. Without this, the SDK will run the module's existing `InitGenesis` code even if you triggered your custom function in the `UpgradeHandler`.

```go expandable
import foo "github.com/my/module/foo"

app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {

  // Register the consensus version in the version map
  // to prevent the SDK from triggering the default
  // InitGenesis function.
  fromVM["foo"] = foo.AppModule{}.ConsensusVersion()

  // Run custom InitGenesis for foo
  app.mm["foo"].InitGenesis(ctx, app.appCodec, myCustomGenesisState)

  return app.mm.RunMigrations(ctx, cfg, fromVM)
})
```

## Syncing a Full Node to an Upgraded Blockchain

You can sync a full node to an existing blockchain which has been upgraded using Cosmovisor.

To successfully sync, you must start with the initial binary that the blockchain started with at genesis.
If all Software Upgrade Plans contain binary instructions, then you can run Cosmovisor with the auto-download option to automatically handle downloading and switching to the binaries associated with each sequential upgrade. Otherwise, you need to manually provide all binaries to Cosmovisor.

To learn more about Cosmovisor, see the [Cosmovisor Quick Start](/docs/sdk/next/documentation/operations/cosmovisor).

diff --git a/docs/sdk/next/documentation/operations/upgrade-guide.mdx b/docs/sdk/next/documentation/operations/upgrade-guide.mdx
new file mode 100644
index 00000000..5b81d327
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/upgrade-guide.mdx

---
title: Upgrade Guide
description: >-
  This document provides a full guide for upgrading a Cosmos SDK chain from
  v0.50.x to v0.53.x.
---

This document provides a full guide for upgrading a Cosmos SDK chain from `v0.50.x` to `v0.53.x`.

This guide includes one **required** change and three **optional** features.

After completing this guide, applications will have:

* The `x/protocolpool` module
* The `x/epochs` module
* Unordered Transaction support

## Table of Contents

* [App Wiring Changes (REQUIRED)](#app-wiring-changes-required)
* [Adding ProtocolPool Module (OPTIONAL)](#adding-protocolpool-module-optional)
  * [ProtocolPool Manual Wiring](#protocolpool-manual-wiring)
  * [ProtocolPool DI Wiring](#protocolpool-di-wiring)
* [Adding Epochs Module (OPTIONAL)](#adding-epochs-module-optional)
  * [Epochs Manual Wiring](#epochs-manual-wiring)
  * [Epochs DI Wiring](#epochs-di-wiring)
* [Enable Unordered Transactions (OPTIONAL)](#enable-unordered-transactions-optional)
* [Upgrade Handler](#upgrade-handler)

## App Wiring Changes **REQUIRED**

The `x/auth` module now contains a `PreBlocker` that *must* be set in the module manager's `SetOrderPreBlockers` method.
```go
app.ModuleManager.SetOrderPreBlockers(
  upgradetypes.ModuleName,
  authtypes.ModuleName, // NEW
)
```

## Adding ProtocolPool Module **OPTIONAL**

Using an external community pool such as `x/protocolpool` will cause the following `x/distribution` handlers to return an error:

**QueryService**

* `CommunityPool`

**MsgService**

* `CommunityPoolSpend`
* `FundCommunityPool`

If your services depend on this functionality from `x/distribution`, please update them to use either `x/protocolpool` or your custom external community pool alternatives.

### Manual Wiring

Import the following:

```go
import (
  // ...
  "github.com/cosmos/cosmos-sdk/x/protocolpool"
  protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper"
  protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types"
)
```

Set the module account permissions.

```go
maccPerms = map[string][]string{
  // ...
  protocolpooltypes.ModuleName:                nil,
  protocolpooltypes.ProtocolPoolEscrowAccount: nil,
}
```

Add the protocolpool keeper to your application struct.

```go
ProtocolPoolKeeper protocolpoolkeeper.Keeper
```

Add the store key:

```go
keys := storetypes.NewKVStoreKeys(
  // ...
  protocolpooltypes.StoreKey,
)
```

Instantiate the keeper.

Make sure to do this before the distribution module instantiation, as you will pass the keeper there next.
```go
app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper(
  appCodec,
  runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]),
  app.AccountKeeper,
  app.BankKeeper,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
)
```

Pass the protocolpool keeper to the distribution keeper:

```go
app.DistrKeeper = distrkeeper.NewKeeper(
  appCodec,
  runtime.NewKVStoreService(keys[distrtypes.StoreKey]),
  app.AccountKeeper,
  app.BankKeeper,
  app.StakingKeeper,
  authtypes.FeeCollectorName,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
  distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), // NEW
)
```

Add the protocolpool module to the module manager:

```go
app.ModuleManager = module.NewManager(
  // ...
  protocolpool.NewAppModule(appCodec, app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper),
)
```

Add an entry for SetOrderBeginBlockers, SetOrderEndBlockers, SetOrderInitGenesis, and SetOrderExportGenesis.

```go
app.ModuleManager.SetOrderBeginBlockers(
  // must come AFTER distribution.
  distrtypes.ModuleName,
  protocolpooltypes.ModuleName,
)
```

```go
app.ModuleManager.SetOrderEndBlockers(
  // order does not matter.
  protocolpooltypes.ModuleName,
)
```

```go
app.ModuleManager.SetOrderInitGenesis(
  // order does not matter.
  protocolpooltypes.ModuleName,
)
```

```go
app.ModuleManager.SetOrderExportGenesis(
  protocolpooltypes.ModuleName, // must be exported before bank.
  banktypes.ModuleName,
)
```

### DI Wiring

Note: *as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool.*

First, set up the keeper for the application.
Import the protocolpool keeper:

```go
protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper"
```

Add the keeper to your application struct:

```go
ProtocolPoolKeeper protocolpoolkeeper.Keeper
```

Add the keeper to the depinject system:

```go
depinject.Inject(
  appConfig,
  &appBuilder,
  &app.appCodec,
  &app.legacyAmino,
  &app.txConfig,
  &app.interfaceRegistry,
  // ... other modules
  &app.ProtocolPoolKeeper, // NEW MODULE!
)
```

Next, set up configuration for the module.

Import the following:

```go
import (
  protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1"

  _ "github.com/cosmos/cosmos-sdk/x/protocolpool" // import for side-effects
  protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types"
)
```

The protocolpool module has module accounts that handle funds. Add them to the module account permission configuration:

```go
moduleAccPerms = []*authmodulev1.ModuleAccountPermission{
  // ...
  {Account: protocolpooltypes.ModuleName},
  {Account: protocolpooltypes.ProtocolPoolEscrowAccount},
}
```

Next, add an entry for BeginBlockers, EndBlockers, InitGenesis, and ExportGenesis.

```go
BeginBlockers: []string{
  // ...
  // must be AFTER distribution.
  distrtypes.ModuleName,
  protocolpooltypes.ModuleName,
},
```

```go
EndBlockers: []string{
  // ...
  // order for protocolpool does not matter.
  protocolpooltypes.ModuleName,
},
```

```go
InitGenesis: []string{
  // ... must be AFTER distribution.
  distrtypes.ModuleName,
  protocolpooltypes.ModuleName,
},
```

```go
ExportGenesis: []string{
  // ...
  // Must be exported before x/bank.
  protocolpooltypes.ModuleName,
  banktypes.ModuleName,
},
```

Lastly, add an entry for protocolpool in the ModuleConfig.
```go
{
  Name:   protocolpooltypes.ModuleName,
  Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{}),
},
```

## Adding Epochs Module **OPTIONAL**

### Manual Wiring

Import the following:

```go
import (
  // ...
  "github.com/cosmos/cosmos-sdk/x/epochs"
  epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper"
  epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types"
)
```

Add the epochs keeper to your application struct:

```go
EpochsKeeper epochskeeper.Keeper
```

Add the store key:

```go
keys := storetypes.NewKVStoreKeys(
  // ...
  epochstypes.StoreKey,
)
```

Instantiate the keeper:

```go
app.EpochsKeeper = epochskeeper.NewKeeper(
  runtime.NewKVStoreService(keys[epochstypes.StoreKey]),
  appCodec,
)
```

Set up hooks for the epochs keeper.

To learn how to write hooks for the epoch keeper, see the [x/epoch README](https://github.com/cosmos/cosmos-sdk/blob/main/x/epochs/README.md).

```go
app.EpochsKeeper.SetHooks(
  epochstypes.NewMultiEpochHooks(
    // insert epoch hooks receivers here
    app.SomeOtherModule,
  ),
)
```

Add the epochs module to the module manager:

```go
app.ModuleManager = module.NewManager(
  // ...
  epochs.NewAppModule(appCodec, app.EpochsKeeper),
)
```

Add entries for SetOrderBeginBlockers and SetOrderInitGenesis:

```go
app.ModuleManager.SetOrderBeginBlockers(
  // ...
  epochstypes.ModuleName,
)
```

```go
app.ModuleManager.SetOrderInitGenesis(
  // ...
  epochstypes.ModuleName,
)
```

### DI Wiring

First, set up the keeper for the application.

Import the epochs keeper:

```go
epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper"
```

Add the keeper to your application struct:

```go
EpochsKeeper epochskeeper.Keeper
```

Add the keeper to the depinject system:

```go
depinject.Inject(
  appConfig,
  &appBuilder,
  &app.appCodec,
  &app.legacyAmino,
  &app.txConfig,
  &app.interfaceRegistry,
  // ...
  // other modules
  &app.EpochsKeeper, // NEW MODULE!
)
```

Next, set up configuration for the module.

Import the following:

```go
import (
  epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1"

  _ "github.com/cosmos/cosmos-sdk/x/epochs" // import for side-effects
  epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types"
)
```

Add an entry for BeginBlockers and InitGenesis:

```go
BeginBlockers: []string{
  // ...
  epochstypes.ModuleName,
},
```

```go
InitGenesis: []string{
  // ...
  epochstypes.ModuleName,
},
```

Lastly, add an entry for epochs in the ModuleConfig:

```go
{
  Name:   epochstypes.ModuleName,
  Config: appconfig.WrapAny(&epochsmodulev1.Module{}),
},
```

## Enable Unordered Transactions **OPTIONAL**

To enable unordered transaction support on an application, the `x/auth` keeper must be supplied with the `WithUnorderedTransactions` option.

Note that unordered transactions require sequence values to be zero and will **FAIL** if a non-zero sequence value is set, so ensure no sequence value is set when submitting an unordered transaction. Services that rely on prior assumptions about sequence values should be updated to handle unordered transactions, whose sequence is always zero.

```go
app.AccountKeeper = authkeeper.NewAccountKeeper(
  appCodec,
  runtime.NewKVStoreService(keys[authtypes.StoreKey]),
  authtypes.ProtoBaseAccount,
  maccPerms,
  authcodec.NewBech32Codec(sdk.Bech32MainPrefix),
  sdk.Bech32MainPrefix,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
  authkeeper.WithUnorderedTransactions(true), // new option!
)
```

If using dependency injection, update the auth module config.
```go
{
  Name: authtypes.ModuleName,
  Config: appconfig.WrapAny(&authmodulev1.Module{
    Bech32Prefix:                "cosmos",
    ModuleAccountPermissions:    moduleAccPerms,
    EnableUnorderedTransactions: true, // remove this line if you do not want unordered transactions.
  }),
},
```

By default, unordered transactions use a transaction timeout duration of 10 minutes and a default gas charge of 2240 gas units.
To modify these default values, pass the corresponding options to the new `SigVerifyOptions` field in `x/auth`'s `ante.HandlerOptions`.

```go
options := ante.HandlerOptions{
  SigVerifyOptions: []ante.SigVerificationDecoratorOption{
    // change below as needed.
    ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost),
    ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration),
  },
}
```

```go
anteDecorators := []sdk.AnteDecorator{
  // ... other decorators ...
  ante.NewSigVerificationDecorator(options.AccountKeeper, options.SignModeHandler, options.SigVerifyOptions...), // supply new options
}
```

## Upgrade Handler

The upgrade handler only requires adding the store upgrades for the modules added above.
If your application is not adding `x/protocolpool` or `x/epochs`, you do not need to add the corresponding store upgrade.

```go expandable
// UpgradeName defines the on-chain upgrade name for the sample SimApp upgrade
// from v050 to v053.
//
// NOTE: This upgrade defines a reference implementation of what an upgrade
// could look like when an application is migrating from Cosmos SDK version
// v0.50.x to v0.53.x.
const UpgradeName = "v050-to-v053"

func (app SimApp) RegisterUpgradeHandlers() {
  app.UpgradeKeeper.SetUpgradeHandler(
    UpgradeName,
    func(ctx context.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
      return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM)
    },
  )

  upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
  if err != nil {
    panic(err)
  }

  if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
    storeUpgrades := storetypes.StoreUpgrades{
      Added: []string{
        epochstypes.ModuleName,       // if not adding x/epochs to your chain, remove this line.
        protocolpooltypes.ModuleName, // if not adding x/protocolpool to your chain, remove this line.
      },
    }

    // configure store loader that checks if version == upgradeHeight and applies store upgrades
    app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
  }
}
```

diff --git a/docs/sdk/next/documentation/operations/upgrade-reference.mdx b/docs/sdk/next/documentation/operations/upgrade-reference.mdx
new file mode 100644
index 00000000..c338a62a
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/upgrade-reference.mdx

---
title: Upgrade Reference
description: >-
  This document provides a quick reference for the upgrades from v0.53.x to
  v0.54.x of Cosmos SDK.
---

This document provides a quick reference for the upgrades from `v0.53.x` to `v0.54.x` of Cosmos SDK.

Note: always read the **App Wiring Changes** section for more information on application wiring updates.

Upgrading to v0.54.x will require a **coordinated** chain upgrade.

### TLDR

**The only major feature in Cosmos SDK v0.54.x is the upgrade from CometBFT v0.x.x to CometBFT v2.**

For a full list of changes, see the [Changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/CHANGELOG.md).
#### Deprecation of `TimeoutCommit`

CometBFT v2 has deprecated the use of `TimeoutCommit` in favor of a new field, `NextBlockDelay`, which is part of the `FinalizeBlockResponse` ABCI message returned to CometBFT via the SDK baseapp. More information from the CometBFT repo can be found [here](https://github.com/cometbft/cometbft/blob/88ef3d267de491db98a654be0af6d791e8724ed0/spec/abci/abci%2B%2B_methods.md?plain=1#L689).

For SDK application developers and node runners, this means that the `timeout_commit` value in the `config.toml` file is still used if `NextBlockDelay` is 0 (its default value). When upgrading to Cosmos SDK v0.54.x, the existing `timeout_commit` values that validators have been using will therefore be maintained and behave the same.

To set the field in your application, there is a new `baseapp` option, `SetNextBlockDelay`, which can be passed to your application upon initialization in `app.go`. Setting this value to any non-zero value will override anything that is set in validators' `config.toml`.

diff --git a/docs/sdk/next/documentation/operations/upgrading.mdx b/docs/sdk/next/documentation/operations/upgrading.mdx
new file mode 100644
index 00000000..048831a2
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/upgrading.mdx

---
title: Upgrading Cosmos SDK
description: >-
  This guide provides instructions for upgrading to specific versions of Cosmos
  SDK. Note, always read the SimApp section for more information on application
  wiring updates.
---

This guide provides instructions for upgrading to specific versions of Cosmos SDK.
Note: always read the **SimApp** section for more information on application wiring updates.

## [v0.50.x](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.50.0)

### Migration to CometBFT (Part 2)

In previous versions, the Cosmos SDK migrated to CometBFT.
Some functions have been renamed to reflect the naming change.
The following is an exhaustive list:

* `client.TendermintRPC` -> `client.CometRPC`
* `clitestutil.MockTendermintRPC` -> `clitestutil.MockCometRPC`
* `clitestutilgenutil.CreateDefaultTendermintConfig` -> `clitestutilgenutil.CreateDefaultCometConfig`
* Package `client/grpc/tmservice` -> `client/grpc/cmtservice`

Additionally, the commands and flags mentioning `tendermint` have been renamed to `comet`.
These commands and flags are still supported for backward compatibility.

For backward compatibility, the `**/tendermint/**` gRPC services are still supported.

Additionally, the SDK is starting its abstraction from CometBFT Go types throughout the codebase:

* The usage of the CometBFT logger has been replaced by the Cosmos SDK logger interface (`cosmossdk.io/log.Logger`).
* The usage of `github.com/cometbft/cometbft/libs/bytes.HexByte` has been replaced by `[]byte`.
* Usage of an application genesis (see [genutil](#xgenutil)).

#### Enable Vote Extensions

This is an optional feature that is disabled by default.

Once all the code changes required to implement Vote Extensions are in place,
they can be enabled by setting the consensus param `Abci.VoteExtensionsEnableHeight`
to a value greater than zero.

In a new chain, this can be done in the `genesis.json` file.

For existing chains this can be done in two ways:

* During an upgrade, the value is set in an upgrade handler.
* A governance proposal that changes the consensus param **after a coordinated upgrade has taken place**.

### BaseApp

All ABCI methods now accept a pointer to the request and response types defined
by CometBFT. In addition, they also return errors. An ABCI method should only
return errors in cases where a catastrophic failure has occurred and the application
should halt. However, this is abstracted away from the application developer.
Any handler that an application can define or set that returns an error will gracefully be handled by `BaseApp` on behalf of the application.

BaseApp calls of `BeginBlock` & `EndBlock` are now private but are still exposed
to the application to define via the `Manager` type. `FinalizeBlock` is public
and should be used in order to test and run operations. This means that although
`BeginBlock` & `EndBlock` no longer exist in the ABCI interface, they are automatically
called by `BaseApp` during `FinalizeBlock`. Specifically, the order of operations
is `BeginBlock` -> `DeliverTx` (for all txs) -> `EndBlock`.

ABCI++ 2.0 also brings `ExtendVote` and `VerifyVoteExtension` ABCI methods. These
methods allow applications to extend and verify pre-commit votes. The Cosmos SDK
allows an application to define handlers for these methods via `ExtendVoteHandler`
and `VerifyVoteExtensionHandler` respectively. Please see [here](https://docs.cosmos.network/v0.50/build/building-apps/vote-extensions)
for more info.

#### Set PreBlocker

A `SetPreBlocker` method has been added to BaseApp. This is essential for BaseApp to run `PreBlock`, which runs before the begin blockers of other modules and allows modifying consensus parameters; the changes are visible to the subsequent state machine logic.
Read more about other use cases [here](/docs/sdk/next/documentation/legacy/adr-comprehensive).

`depinject` / app di users need to add `x/upgrade` in their `app_config.go` / `app.yml`:

```diff
+ PreBlockers: []string{
+   upgradetypes.ModuleName,
+ },
BeginBlockers: []string{
-  upgradetypes.ModuleName,
   minttypes.ModuleName,
}
```

When using (legacy) application wiring, the following must be added to `app.go`:

```diff expandable
+app.ModuleManager.SetOrderPreBlockers(
+  upgradetypes.ModuleName,
+)

app.ModuleManager.SetOrderBeginBlockers(
-  upgradetypes.ModuleName,
)

+ app.SetPreBlocker(app.PreBlocker)

// ...
+func (app *SimApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
+  return app.ModuleManager.PreBlock(ctx, req)
+}
```

#### Events

The log section of `abci.TxResult` is not populated in the case of successful
msg(s) execution. Instead, a new attribute is added to all messages indicating
the `msg_index`, which identifies which events and attributes relate to the same
transaction.

`BeginBlock` & `EndBlock` events are now emitted through `FinalizeBlock` but have
an added attribute, `mode=BeginBlock|EndBlock`, to identify whether the event belongs
to `BeginBlock` or `EndBlock`.

### Config files

Confix is a new SDK tool for modifying and migrating configuration of the SDK.
It is the replacement of the `config.Cmd` command from the `client/config` package.

Use the following command to migrate your configuration:

```bash
simd config migrate v0.50
```

If you were using ` config [key]` or ` config [key] [value]` to set and get values from the `client.toml`, replace it with ` config get client [key]` and ` config set client [key] [value]`. The extra verbosity is due to the extra functionalities added in config.

More information about [confix](https://docs.cosmos.network/main/tooling/confix) and how to add it to your application binary can be found in the [documentation](https://docs.cosmos.network/main/tooling/confix).

#### gRPC-Web

gRPC-Web now listens on the same address and port as the gRPC Gateway API server (default: `localhost:1317`).
The possibility to listen on a different address has been removed, along with its settings.
Use `confix` to clean up your `app.toml`. An nginx (or similar) reverse proxy can be set up to keep the previous behavior.

#### Database Support

ClevelDB, BoltDB and BadgerDB are no longer supported. To migrate from an unsupported database to a supported database, please use a database migration tool.
### Protobuf

With the deprecation of the Amino JSON codec defined in [cosmos/gogoproto](https://github.com/cosmos/gogoproto) in favor of the protoreflect-powered x/tx/aminojson codec, module developers are encouraged to verify that their messages have the correct protobuf annotations to deterministically produce identical output from both codecs.

For core SDK types, equivalence is asserted by generative testing of [SignableTypes](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/tests/integration/rapidgen/rapidgen.go#L102) in [TestAminoJSON\_Equivalence](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/tests/integration/tx/aminojson/aminojson_test.go#L94).

**TODO: summarize proto annotation requirements.**

#### Stringer

The `gogoproto.goproto_stringer = false` annotation has been removed from most proto files. This means that the `String()` method is being generated for types that previously had this annotation. The generated `String()` method uses `proto.CompactTextString` for *stringifying* structs.
[Verify](https://github.com/cosmos/cosmos-sdk/pull/13850#issuecomment-1328889651) the usage of the modified `String()` methods and double-check that they are not used in state-machine code.

### SimApp

In this section we describe the changes made in the Cosmos SDK's SimApp.
**These changes are directly applicable to your application wiring.**

#### Module Assertions

Previously, all modules were required to be set in `OrderBeginBlockers`, `OrderEndBlockers` and `OrderInitGenesis / OrderExportGenesis` in `app.go` / `app_config.go`. This is no longer the case; the assertion has been loosened to only require modules implementing, respectively, the `appmodule.HasBeginBlocker`, `appmodule.HasEndBlocker` and `appmodule.HasGenesis` / `module.HasGenesis` interfaces.
#### Module wiring

The following modules' `NewKeeper` functions now take a `KVStoreService` instead of a `StoreKey`:

* `x/auth`
* `x/authz`
* `x/bank`
* `x/consensus`
* `x/crisis`
* `x/distribution`
* `x/evidence`
* `x/feegrant`
* `x/gov`
* `x/mint`
* `x/nft`
* `x/slashing`
* `x/upgrade`

**Users using `depinject` / app di do not need any changes; this is abstracted for them.**

Users manually wiring their chain need to use the `runtime.NewKVStoreService` method to create a `KVStoreService` from a `StoreKey`:

```diff
app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(
  appCodec,
- keys[consensusparamtypes.StoreKey]
+ runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]),
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
)
```

#### Logger

Replace all your CometBFT logger imports with `cosmossdk.io/log`.

Additionally, `depinject` / app di users must now supply a logger through the main `depinject.Supply` function instead of passing it to `appBuilder.Build`.

```diff
appConfig = depinject.Configs(
  AppConfig,
  depinject.Supply(
    // supply the application options
    appOpts,
+   logger,
  ...
```

```diff
- app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...)
+ app.App = appBuilder.Build(db, traceStore, baseAppOptions...)
```

Users manually wiring their chain need to add the logger argument when creating the `x/bank` keeper.

#### Module Basics

Previously, `ModuleBasics` was a global variable that was used to register all modules' `AppModuleBasic` implementation.
The global variable has been removed and the basic module manager can now be created from the module manager.
This is automatically done for `depinject` / app di users; however, for supplying different app module implementations, pass them via `depinject.Supply` in the main `AppConfig` (`app_config.go`):

```go expandable
depinject.Supply(
  // supply custom module basics
  map[string]module.AppModuleBasic{
    genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
    govtypes.ModuleName: gov.NewAppModuleBasic(
      []govclient.ProposalHandler{
        paramsclient.ProposalHandler,
      },
    ),
  },
)
```

Users manually wiring their chain need to use the new `module.NewBasicManagerFromManager` function, after the module manager creation, and pass a `map[string]module.AppModuleBasic` as argument for optionally overriding some modules' `AppModuleBasic`.

#### AutoCLI

[`AutoCLI`](https://docs.cosmos.network/main/core/autocli) has been implemented by the SDK for all its module CLI queries. This means chains must add the following in their `root.go` to enable `AutoCLI` in their application:

```go
if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil {
  panic(err)
}
```

Where `autoCliOpts` is the autocli options of the app, containing all modules and codecs.
That value can be injected by depinject ([see root\_v2.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/simapp/simd/cmd/root_v2.go#L49-L67)) or manually provided by the app ([see legacy app.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/simapp/app.go#L636-L655)).

Not doing this will result in all core SDK modules' queries not being included in the binary.

Additionally, `AutoCLI` automatically adds the custom modules commands to the root command for all modules implementing the [`appmodule.AppModule`](https://pkg.go.dev/cosmossdk.io/core/appmodule#AppModule) interface.
This means, after ensuring all the used modules implement this interface, the following can be removed from your `root.go`:

```diff
func txCommand() *cobra.Command {
  ....
- appd.ModuleBasics.AddTxCommands(cmd)
}
```

```diff
func queryCommand() *cobra.Command {
  ....
- appd.ModuleBasics.AddQueryCommands(cmd)
}
```

### Packages

#### Math

References to `types/math.go`, which contained aliases for math types aliasing the `cosmossdk.io/math` package, have been removed.
Import the `cosmossdk.io/math` package directly instead.

#### Store

References to `types/store.go`, which contained aliases for store types, have been remapped to point to the appropriate `store/types`; hence the `types/store.go` file is no longer needed and has been removed.

##### Extract Store to a standalone module

The `store` module has been extracted to have a separate go.mod file, which allows it to be a standalone module.
All the store imports are now renamed to use `cosmossdk.io/store` instead of `github.com/cosmos/cosmos-sdk/store` across the SDK.

##### Streaming

[ADR-38](https://docs.cosmos.network/main/architecture/adr-038-state-listening) has been implemented in the SDK.

To continue using state streaming, replace `streaming.LoadStreamingServices` with the following in your `app.go`:

```go
if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil {
  panic(err)
}
```

#### Client

The return type of the interface method `TxConfig.SignModeHandler()` has been changed from `x/auth/signing.SignModeHandler` to `x/tx/signing.HandlerMap`. This change is transparent to most users, as the `TxConfig` interface is typically implemented by the private `x/auth/tx.config` struct (as returned by `auth.NewTxConfig`), which has been updated to return the new type. If users have implemented their own `TxConfig` interface, they will need to update their implementation to return the new type.

##### Textual sign mode

A new sign mode is available in the SDK that produces more human-readable output, currently only available on Ledger
devices but soon to be implemented in other UIs.
+
+
+This sign mode does not allow offline signing
+
+
+When using (legacy) application wiring, the following must be added to `app.go` after setting the app's bank keeper:
+
+```go expandable
+enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL)
+txConfigOpts := tx.ConfigOptions{
+	EnabledSignModes:           enabledSignModes,
+	TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper),
+}
+
+txConfig, err := tx.NewTxConfigWithOptions(
+	appCodec,
+	txConfigOpts,
+)
+if err != nil {
+	log.Fatalf("Failed to create new TxConfig with options: %v", err)
+}
+
+app.txConfig = txConfig
+```
+
+When using `depinject` / `app di`, **it's enabled by default** if there's a bank keeper present.
+
+And in the application client (usually `root.go`):
+
+```go expandable
+if !clientCtx.Offline {
+	txConfigOpts.EnabledSignModes = append(txConfigOpts.EnabledSignModes, signing.SignMode_SIGN_MODE_TEXTUAL)
+	txConfigOpts.TextualCoinMetadataQueryFn = txmodule.NewGRPCCoinMetadataQueryFn(clientCtx)
+
+	txConfigWithTextual, err := tx.NewTxConfigWithOptions(
+		codec.NewProtoCodec(clientCtx.InterfaceRegistry),
+		txConfigOpts,
+	)
+	if err != nil {
+		return err
+	}
+
+	clientCtx = clientCtx.WithTxConfig(txConfigWithTextual)
+}
+```
+
+When using `depinject` / `app di`, a tx config should be recreated from the `txConfigOpts` to use `NewGRPCCoinMetadataQueryFn` instead of depending on the bank keeper (which is used on the server side).
+
+To learn more, see the [docs](https://docs.cosmos.network/main/learn/advanced/transactions#sign_mode_textual) and [ADR-050](https://docs.cosmos.network/main/build/architecture/adr-050-sign-mode-textual).
+
+### Modules
+
+#### `**all**`
+
+* [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) has defined a simplification of the message validation process for modules.
+  The `sdk.Msg` interface has been updated to not require the implementation of the `ValidateBasic` method.
+ It is now recommended to validate messages directly in the message server. When the validation is performed in the message server, the `ValidateBasic` method on a message is no longer required and can be removed.
+
+* Messages no longer need to implement the `LegacyMsg` interface, and implementations of `GetSignBytes` can be deleted. Because of this change, global legacy Amino codec definitions and their registration in `init()` can safely be removed as well.
+
+* The `AppModuleBasic` interface has been simplified. Defining `GetTxCmd() *cobra.Command` and `GetQueryCmd() *cobra.Command` is no longer required. The module manager detects when module commands are defined. If AutoCLI is enabled, `EnhanceRootCommand()` will add the auto-generated commands to the root command, unless a custom module command is defined, in which case that one is registered instead.
+
+* The following modules' `Keeper` methods now take in a `context.Context` instead of `sdk.Context`. Any module that has an interface for them (like "expected keepers") will need to update and re-generate mocks if needed:
+
+  * `x/authz`
+  * `x/bank`
+  * `x/mint`
+  * `x/crisis`
+  * `x/distribution`
+  * `x/evidence`
+  * `x/gov`
+  * `x/slashing`
+  * `x/upgrade`
+
+* `BeginBlock` and `EndBlock` have changed their signatures, so it is important that any module implementing them is updated accordingly.
+
+```diff
+- BeginBlock(sdk.Context, abci.RequestBeginBlock)
++ BeginBlock(context.Context) error
+```
+
+```diff
+- EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate
++ EndBlock(context.Context) error
+```
+
+In case a module needs to return `abci.ValidatorUpdate` from `EndBlock`, it can use the `HasABCIEndBlock` interface instead.
+
+```diff
+- EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate
++ EndBlock(context.Context) ([]abci.ValidatorUpdate, error)
+```
+
+
+It is possible to ensure that a module implements the correct interfaces by using compiler assertions in your `x/{moduleName}/module.go`:
+
+```go
+var (
+	_ module.AppModuleBasic      = (*AppModule)(nil)
+	_ module.AppModuleSimulation = (*AppModule)(nil)
+	_ module.HasGenesis          = (*AppModule)(nil)
+
+	_ appmodule.AppModule        = (*AppModule)(nil)
+	_ appmodule.HasBeginBlocker  = (*AppModule)(nil)
+	_ appmodule.HasEndBlocker    = (*AppModule)(nil)
+	...
+)
+```
+
+Read more on those interfaces [here](https://docs.cosmos.network/v0.50/building-modules/module-manager#application-module-interfaces).
+
+
+
+* `GetSigners()` is no longer required to be implemented on `Msg` types. The SDK will automatically infer the signers from the `Signer` field on the message. The signer field is required on all messages unless using a custom signer function.
+
+To find out more, please read the [signer field](/docs/sdk/next/documentation/protocol-development/protobuf-annotations#signer) and [messages and queries](https://github.com/cosmos/cosmos-sdk/blob/7352d0bce8e72121e824297df453eb1059c28da8/docs/docs/build/building-modules/02-messages-and-queries.md#L40) documentation.
+{/* Link to docs once redeployed */}
+
+#### `x/auth`
+
+For ante handler construction via `ante.NewAnteHandler`, the field `ante.HandlerOptions.SignModeHandler` has been updated to `x/tx/signing/HandlerMap` from `x/auth/signing/SignModeHandler`. Callers typically fetch this value from `client.TxConfig.SignModeHandler()` (which has also changed), so this change should be transparent to most users.
+
+#### `x/capability`
+
+The capability module has been moved to [cosmos/ibc-go](https://github.com/cosmos/ibc-go). IBC v8 will contain the necessary changes to incorporate the new module location.
In your `app.go`, you must import the capability module from the new location:
+
+```diff
++ "github.com/cosmos/ibc-go/modules/capability"
++ capabilitykeeper "github.com/cosmos/ibc-go/modules/capability/keeper"
++ capabilitytypes "github.com/cosmos/ibc-go/modules/capability/types"
+- "github.com/cosmos/cosmos-sdk/x/capability"
+- capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper"
+- capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types"
+```
+
+Similar to previous versions, your module manager must include the capability module.
+
+```go
+app.ModuleManager = module.NewManager(
+	capability.NewAppModule(encodingConfig.Codec, *app.CapabilityKeeper, true),
+	// remaining modules
+)
+```
+
+#### `x/genutil`
+
+The Cosmos SDK has migrated from a CometBFT genesis to an application-managed genesis file.
+The genesis is now fully handled by `x/genutil`. This has no consequences for running chains:
+
+* Importing a CometBFT genesis is still supported.
+* Exporting a genesis now exports the genesis as an application genesis.
+
+When needing to read an application genesis, use the following helpers from the `x/genutil/types` package:
+
+```go
+// AppGenesisFromReader reads the AppGenesis from the reader.
+func AppGenesisFromReader(reader io.Reader) (*AppGenesis, error)
+
+// AppGenesisFromFile reads the AppGenesis from the provided file.
+func AppGenesisFromFile(genFile string) (*AppGenesis, error)
+```
+
+#### `x/gov`
+
+##### Expedited Proposals
+
+The `gov` v1 module now supports expedited governance proposals. When a proposal is expedited, the voting period is shortened to the duration set by the `ExpeditedVotingPeriod` parameter. An expedited proposal must have a higher voting threshold than a classic proposal; that threshold is defined by the `ExpeditedThreshold` parameter.
+
+##### Cancelling Proposals
+
+The `gov` module now supports cancelling governance proposals.
When a proposal is canceled, all the deposits of the proposal are either burnt or sent to the `ProposalCancelDest` address. The burn rate of the deposits is determined by a new `ProposalCancelRatio` parameter.
+
+```text
+1. deposits * proposal_cancel_ratio will be burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, the deposits will be burned.
+2. deposits * (1 - proposal_cancel_ratio) will be sent to depositors.
+```
+
+By default, the new `ProposalCancelRatio` parameter is set to `0.5` during migration and `ProposalCancelDest` is set to the empty string (i.e. deposits are burned).
+
+#### `x/evidence`
+
+##### Extract evidence to a standalone module
+
+The `x/evidence` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the evidence imports are now renamed to use `cosmossdk.io/x/evidence` instead of `github.com/cosmos/cosmos-sdk/x/evidence` across the SDK.
+
+#### `x/nft`
+
+##### Extract nft to a standalone module
+
+The `x/nft` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the nft imports are now renamed to use `cosmossdk.io/x/nft` instead of `github.com/cosmos/cosmos-sdk/x/nft` across the SDK.
+
+#### `x/feegrant`
+
+##### Extract feegrant to a standalone module
+
+The `x/feegrant` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the feegrant imports are now renamed to use `cosmossdk.io/x/feegrant` instead of `github.com/cosmos/cosmos-sdk/x/feegrant` across the SDK.
+
+#### `x/upgrade`
+
+##### Extract upgrade to a standalone module
+
+The `x/upgrade` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the upgrade imports are now renamed to use `cosmossdk.io/x/upgrade` instead of `github.com/cosmos/cosmos-sdk/x/upgrade` across the SDK.
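These import renames are purely mechanical, so they can be scripted. Below is a minimal sketch, not official SDK tooling: the `moved` list, the `rewriteImports` helper, and the plain string replacement are illustrative assumptions. In practice `gofmt -r` or a global search-and-replace over `*.go` files achieves the same result.

```go
package main

import (
	"fmt"
	"strings"
)

// moved lists the modules that were extracted to their own go.mod files.
var moved = []string{"evidence", "feegrant", "nft", "upgrade"}

// rewriteImports replaces the old cosmos-sdk import paths of the
// extracted modules with their new cosmossdk.io equivalents.
func rewriteImports(src string) string {
	for _, m := range moved {
		src = strings.ReplaceAll(src,
			"github.com/cosmos/cosmos-sdk/x/"+m,
			"cosmossdk.io/x/"+m)
	}
	return src
}

func main() {
	// prints: import feegrant "cosmossdk.io/x/feegrant"
	fmt.Println(rewriteImports(`import feegrant "github.com/cosmos/cosmos-sdk/x/feegrant"`))
}
```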
+
+### Tooling
+
+#### Rosetta
+
+Rosetta has moved to its own [repo](https://github.com/cosmos/rosetta) and is no longer imported by the Cosmos SDK SimApp by default.
+Any user who is interested in using the tool can connect it standalone to any node, without the need to add it as part of the node binary.
+
+The rosetta tool also allows multi-chain connections.
diff --git a/docs/sdk/next/documentation/operations/user.mdx b/docs/sdk/next/documentation/operations/user.mdx
new file mode 100644
index 00000000..0c5d9ead
--- /dev/null
+++ b/docs/sdk/next/documentation/operations/user.mdx
@@ -0,0 +1,13 @@
+---
+title: User Guides
+description: >-
+  This section is designed for developers who are using the Cosmos SDK to build
+  applications. It provides essential guides and references to effectively use
+  the SDK's features.
+---
+
+This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features.
+
+* [Setting up keys](/docs/sdk/next/documentation/operations/keyring) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application.
+* [Running a node](/docs/sdk/next/documentation/operations/run-node) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps.
+* [CLI](/docs/sdk/next/documentation/operations/interact-node) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively.
diff --git a/docs/sdk/next/documentation/protocol-development/README.mdx b/docs/sdk/next/documentation/protocol-development/README.mdx
new file mode 100644
index 00000000..ddede107
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/README.mdx
@@ -0,0 +1,5 @@
+---
+title: Addresses spec
+---
+
+* [Bech32](/docs/sdk/next/documentation/protocol-development/bech32)
diff --git a/docs/sdk/next/documentation/protocol-development/SPEC_MODULE.mdx b/docs/sdk/next/documentation/protocol-development/SPEC_MODULE.mdx
new file mode 100644
index 00000000..694df2ba
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/SPEC_MODULE.mdx
@@ -0,0 +1,62 @@
+---
+title: Specification of Modules
+description: >-
+  This file intends to outline the common structure for specifications within
+  this directory.
+---
+
+## Tense
+
+For consistency, specs should be written in passive present tense.
+
+## Pseudo-Code
+
+Generally, pseudo-code should be minimized throughout the spec. Often, simple
+bulleted lists that describe a function's operations are sufficient and should
+be considered preferable. In certain instances, due to the complex nature of
+the functionality being described, pseudo-code may be the most suitable form of
+specification. In these cases use of pseudo-code is permissible, but it should be
+presented in a concise manner, ideally restricted to only the complex
+element as a part of a larger description.
+
+## Common Layout
+
+The following generalized `README` structure should be used to break down
+specifications for modules. The following list is nonbinding and all sections are optional.
+
+- `# {Module Name}` - overview of the module
+- `## Concepts` - describe specialized concepts and definitions used throughout the spec
+- `## State` - specify and describe structures expected to be marshaled into the store, and their keys
+- `## State Transitions` - standard state transition operations triggered by hooks, messages, etc.
+- `## Messages` - specify message structure(s) and expected state machine behavior(s) +- `## Begin Block` - specify any begin-block operations +- `## End Block` - specify any end-block operations +- `## Hooks` - describe available hooks to be called by/from this module +- `## Events` - list and describe event tags used +- `## Client` - list and describe CLI commands and gRPC and REST endpoints +- `## Params` - list all module parameters, their types (in JSON) and examples +- `## Future Improvements` - describe future improvements of this module +- `## Tests` - acceptance tests +- `## Appendix` - supplementary details referenced elsewhere within the spec + +### Notation for key-value mapping + +Within `## State` the following notation `->` should be used to describe key to +value mapping: + +```text +key -> value +``` + +to represent byte concatenation the `|` may be used. In addition, encoding +type may be specified, for example: + +```text +0x00 | addressBytes | address2Bytes -> amino(value_object) +``` + +Additionally, index mappings may be specified by mapping to the `nil` value, for example: + +```text +0x01 | address2Bytes | addressBytes -> nil +``` diff --git a/docs/sdk/next/documentation/protocol-development/SPEC_STANDARD.mdx b/docs/sdk/next/documentation/protocol-development/SPEC_STANDARD.mdx new file mode 100644 index 00000000..6d302e03 --- /dev/null +++ b/docs/sdk/next/documentation/protocol-development/SPEC_STANDARD.mdx @@ -0,0 +1,128 @@ +--- +title: What is an SDK standard? +--- + +An SDK standard is a design document describing a particular protocol, standard, or feature expected to be used by the Cosmos SDK. An SDK standard should list the desired properties of the standard, explain the design rationale, and provide a concise but comprehensive technical specification. 
The primary author is responsible for pushing the proposal through the standardization process, soliciting input and support from the community, and communicating with relevant stakeholders to ensure (social) consensus.
+
+## Sections
+
+An SDK standard consists of:
+
+* a synopsis,
+* overview and basic concepts,
+* technical specification,
+* history log, and
+* copyright notice.
+
+All top-level sections are required. References should be included inline as links, or tabulated at the bottom of the section if necessary. Included subsections should be listed in the order specified below.
+
+### Table Of Contents
+
+Provide a table of contents at the top of the file to help readers.
+
+### Synopsis
+
+The document should include a brief (\~200 word) synopsis providing a high-level description of and rationale for the specification.
+
+### Overview and basic concepts
+
+This section should include a motivation subsection and a definitions subsection if required:
+
+* *Motivation* - A rationale for the existence of the proposed feature, or the proposed changes to an existing feature.
+* *Definitions* - A list of new terms or concepts used in the document or required to understand it.
+
+### System model and properties
+
+This section should include an assumptions subsection (if any), the mandatory properties subsection, and a dependencies subsection. Note that the first two subsections are tightly coupled: how to enforce a property will depend directly on the assumptions made. This subsection is important to capture the interactions of the specified feature with the "rest-of-the-world," i.e., with other features of the ecosystem.
+
+* *Assumptions* - A list of any assumptions made by the feature designer. It should capture which features are used by the feature under specification, and what we expect from them.
+* *Properties* - A list of the desired properties or characteristics of the specified feature, and the expected effects or failures when those properties are violated. Where relevant, it can also include a list of properties that the feature does not guarantee.
+* *Dependencies* - A list of the features that use the feature under specification, and how.
+
+### Technical specification
+
+This is the main section of the document, and should contain protocol documentation, design rationale, required references, and technical details where appropriate.
+The section may have any or all of the following subsections, as appropriate to the particular specification. The API subsection is especially encouraged when appropriate.
+
+* *API* - A detailed description of the feature's API.
+* *Technical Details* - All technical details including syntax, diagrams, semantics, protocols, data structures, algorithms, and pseudocode as appropriate. The technical specification should be detailed enough that separate correct implementations, written without knowledge of each other, are compatible.
+* *Backwards Compatibility* - A discussion of compatibility (or lack thereof) with previous feature or protocol versions.
+* *Known Issues* - A list of known issues. This subsection is especially important for specifications of features already in use.
+* *Example Implementation* - A concrete example implementation, or a description of an expected implementation, to serve as the primary reference for implementers.
+
+### History
+
+A specification should include a history section, listing any inspiring documents and a plaintext log of significant changes.
+
+See an example history section [below](#history-1).
+
+### Copyright
+
+A specification should include a copyright section waiving rights via [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+
+## Formatting
+
+### General
+
+Specifications must be written in GitHub-flavored Markdown.
+
+For a GitHub-flavored Markdown cheat sheet, see [here](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). For a local Markdown renderer, see [here](https://github.com/joeyespo/grip).
+
+### Language
+
+Specifications should be written in Simple English, avoiding obscure terminology and unnecessary jargon. For excellent examples of Simple English, please see the [Simple English Wikipedia](https://simple.wikipedia.org/wiki/Main_Page).
+
+The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in specifications are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
+
+### Pseudocode
+
+Pseudocode in specifications should be language-agnostic and formatted in a simple imperative standard, with line numbers, variables, simple conditional blocks, for loops, and
+English fragments where necessary to explain further functionality such as scheduling timeouts. LaTeX images should be avoided because they are challenging to review in diff form.
+
+Pseudocode for structs can be written in a simple language like TypeScript or golang, as interfaces.
+
+Example Golang pseudocode struct:
+
+```go
+type CacheKVStore interface {
+	cache: map[Key]Value
+	parent: KVStore
+	deleted: Key
+}
+```
+
+Pseudocode for algorithms should be written in simple Golang, as functions.
+
+Example pseudocode algorithm:
+
+```go expandable
+func get(store CacheKVStore, key Key) Value {
+	value = store.cache.get(key)
+	if value != null {
+		return value
+	} else {
+		value = store.parent.get(key)
+		store.cache.set(key, value)
+		return value
+	}
+}
+```
+
+## History
+
+This specification was significantly inspired by and derived from IBC's [ICS](https://github.com/cosmos/ibc/blob/main/spec/ics-001-ics-standard/README.md), which
+was in turn derived from Ethereum's [EIP 1](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md).
+
+Nov 24, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/docs/sdk/next/documentation/protocol-development/accounts.mdx b/docs/sdk/next/documentation/protocol-development/accounts.mdx
new file mode 100644
index 00000000..ddb96944
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/accounts.mdx
@@ -0,0 +1,3590 @@
+---
+title: Accounts
+---
+
+## Synopsis
+
+This document describes the in-built account and public key system of the Cosmos SDK.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/next/documentation/application-framework/app-anatomy)
+
+
+
+## Account Definition
+
+In the Cosmos SDK, an _account_ designates a pair of _public key_ `PubKey` and _private key_ `PrivKey`. The `PubKey` can be derived to generate various `Addresses`, which are used to identify users (among other parties) in the application. `Addresses` are also associated with [`message`s](/docs/sdk/next/documentation/module-system/messages-and-queries#messages) to identify the sender of the `message`. The `PrivKey` is used to generate [digital signatures](#signatures) to prove that an `Address` associated with the `PrivKey` approved of a given `message`.
+
+For HD key derivation, the Cosmos SDK uses a standard called [BIP32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki). BIP32 allows users to create an HD wallet (as specified in [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)) - a set of accounts derived from an initial secret seed. A seed is usually created from a 12- or 24-word mnemonic. A single seed can derive any number of `PrivKey`s using a one-way cryptographic function. Then, a `PubKey` can be derived from the `PrivKey`. Naturally, the mnemonic is the most sensitive information, as private keys can always be re-generated if the mnemonic is preserved.
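The one-way, seed-to-many-keys property described above can be sketched with a generic construction. This is an illustration only, not the SDK's actual BIP32/BIP44 hardened derivation: HMAC-SHA512 merely stands in for the one-way function, and `deriveKey` is a hypothetical helper.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha512"
	"encoding/binary"
	"fmt"
)

// deriveKey derives 32 bytes of private-key material for a given account
// index from a single seed, using HMAC-SHA512 as the one-way function.
func deriveKey(seed []byte, index uint32) []byte {
	mac := hmac.New(sha512.New, seed)
	var idx [4]byte
	binary.BigEndian.PutUint32(idx[:], index)
	mac.Write(idx[:])
	return mac.Sum(nil)[:32]
}

func main() {
	seed := []byte("seed derived from a mnemonic")
	for i := uint32(0); i < 3; i++ {
		fmt.Printf("account %d key material: %x\n", i, deriveKey(seed, i)[:8])
	}
}
```

Because the function is one-way, a derived key reveals nothing about the seed, while anyone holding the seed can re-derive every account key; this is why the mnemonic is the most sensitive piece of data.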
+ +```text expandable + Account 0 Account 1 Account 2 + ++------------------+ +------------------+ +------------------+ +| | | | | | +| Address 0 | | Address 1 | | Address 2 | +| ^ | | ^ | | ^ | +| | | | | | | | | +| | | | | | | | | +| | | | | | | | | +| + | | + | | + | +| Public key 0 | | Public key 1 | | Public key 2 | +| ^ | | ^ | | ^ | +| | | | | | | | | +| | | | | | | | | +| | | | | | | | | +| + | | + | | + | +| Private key 0 | | Private key 1 | | Private key 2 | +| ^ | | ^ | | ^ | ++------------------+ +------------------+ +------------------+ + | | | + | | | + | | | + +--------------------------------------------------------------------+ + | + | + +---------+---------+ + | | + | Master PrivKey | + | | + +-------------------+ + | + | + +---------+---------+ + | | + | Mnemonic (Seed) | + | | + +-------------------+ +``` + +In the Cosmos SDK, keys are stored and managed by using an object called a [`Keyring`](#keyring). + +## Keys, accounts, addresses, and signatures + +The principal way of authenticating a user is done using [digital signatures](https://en.wikipedia.org/wiki/Digital_signature). Users sign transactions using their own private key. Signature verification is done with the associated public key. For on-chain signature verification purposes, we store the public key in an `Account` object (alongside other data required for a proper transaction validation). + +In the node, all data is stored using Protocol Buffers serialization. + +The Cosmos SDK supports the following digital key schemes for creating digital signatures: + +- `secp256k1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256k1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/secp256k1/secp256k1.go). +- `secp256r1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/secp256r1/pubkey.go). 
+- `tm-ed25519`, as implemented in the [Cosmos SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/ed25519/ed25519.go). This scheme is supported only for the consensus validation. + +| | Address length in bytes | Public key length in bytes | Used for transaction authentication | Used for consensus (cometbft) | +| :----------: | :---------------------: | :------------------------: | :---------------------------------: | :---------------------------: | +| `secp256k1` | 20 | 33 | yes | no | +| `secp256r1` | 32 | 33 | yes | no | +| `tm-ed25519` | -- not used -- | 32 | no | yes | + +## Addresses + +`Addresses` and `PubKey`s are both public information that identifies actors in the application. `Account` is used to store authentication information. The basic account implementation is provided by a `BaseAccount` object. + +Each account is identified using `Address` which is a sequence of bytes derived from a public key. In the Cosmos SDK, we define 3 types of addresses that specify a context where an account is used: + +- `AccAddress` identifies users (the sender of a `message`). +- `ValAddress` identifies validator operators. +- `ConsAddress` identifies validator nodes that are participating in consensus. Validator nodes are derived using the **`ed25519`** curve. + +These types implement the `Address` interface: + +```go expandable +package types + +import ( + + "bytes" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "strings" + "sync" + "sync/atomic" + "github.com/hashicorp/golang-lru/simplelru" + "sigs.k8s.io/yaml" + + errorsmod "cosmossdk.io/errors" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/internal/conv" + "github.com/cosmos/cosmos-sdk/types/address" + "github.com/cosmos/cosmos-sdk/types/bech32" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +const ( + / Constants defined here are the defaults value for address. 
+ / You can use the specific values for your project. + / Add the follow lines to the `main()` of your server. + / + / config := sdk.GetConfig() + / config.SetBech32PrefixForAccount(yourBech32PrefixAccAddr, yourBech32PrefixAccPub) + / config.SetBech32PrefixForValidator(yourBech32PrefixValAddr, yourBech32PrefixValPub) + / config.SetBech32PrefixForConsensusNode(yourBech32PrefixConsAddr, yourBech32PrefixConsPub) + / config.SetPurpose(yourPurpose) + / config.SetCoinType(yourCoinType) + / config.Seal() + + / Bech32MainPrefix defines the main SDK Bech32 prefix of an account's address + Bech32MainPrefix = "cosmos" + + / Purpose is the ATOM purpose as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +Purpose = 44 + + / CoinType is the ATOM coin type as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +CoinType = 118 + + / FullFundraiserPath is the parts of the BIP44 HD path that are fixed by + / what we used during the ATOM fundraiser. 
+ FullFundraiserPath = "m/44'/118'/0'/0/0" + + / PrefixAccount is the prefix for account keys + PrefixAccount = "acc" + / PrefixValidator is the prefix for validator keys + PrefixValidator = "val" + / PrefixConsensus is the prefix for consensus keys + PrefixConsensus = "cons" + / PrefixPublic is the prefix for public keys + PrefixPublic = "pub" + / PrefixOperator is the prefix for operator keys + PrefixOperator = "oper" + + / PrefixAddress is the prefix for addresses + PrefixAddress = "addr" + + / Bech32PrefixAccAddr defines the Bech32 prefix of an account's address + Bech32PrefixAccAddr = Bech32MainPrefix + / Bech32PrefixAccPub defines the Bech32 prefix of an account's public key + Bech32PrefixAccPub = Bech32MainPrefix + PrefixPublic + / Bech32PrefixValAddr defines the Bech32 prefix of a validator's operator address + Bech32PrefixValAddr = Bech32MainPrefix + PrefixValidator + PrefixOperator + / Bech32PrefixValPub defines the Bech32 prefix of a validator's operator public key + Bech32PrefixValPub = Bech32MainPrefix + PrefixValidator + PrefixOperator + PrefixPublic + / Bech32PrefixConsAddr defines the Bech32 prefix of a consensus node address + Bech32PrefixConsAddr = Bech32MainPrefix + PrefixValidator + PrefixConsensus + / Bech32PrefixConsPub defines the Bech32 prefix of a consensus node public key + Bech32PrefixConsPub = Bech32MainPrefix + PrefixValidator + PrefixConsensus + PrefixPublic +) + +/ cache variables +var ( + / AccAddress.String() + +is expensive and if unoptimized dominantly showed up in profiles, + / yet has no mechanisms to trivially cache the result given that AccAddress is a []byte type. 
+ accAddrMu sync.Mutex + accAddrCache *simplelru.LRU + consAddrMu sync.Mutex + consAddrCache *simplelru.LRU + valAddrMu sync.Mutex + valAddrCache *simplelru.LRU + + isCachingEnabled atomic.Bool +) + +/ sentinel errors +var ( + ErrEmptyHexAddress = errors.New("decoding address from hex string failed: empty address") +) + +func init() { + var err error + SetAddrCacheEnabled(true) + + / in total the cache size is 61k entries. Key is 32 bytes and value is around 50-70 bytes. + / That will make around 92 * 61k * 2 (LRU) + +bytes ~ 11 MB + if accAddrCache, err = simplelru.NewLRU(60000, nil); err != nil { + panic(err) +} + if consAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} + if valAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} +} + +/ SetAddrCacheEnabled enables or disables accAddrCache, consAddrCache, and valAddrCache. By default, caches are enabled. +func SetAddrCacheEnabled(enabled bool) { + isCachingEnabled.Store(enabled) +} + +/ IsAddrCacheEnabled returns if the address caches are enabled. +func IsAddrCacheEnabled() + +bool { + return isCachingEnabled.Load() +} + +/ Address is a common interface for different types of addresses used by the SDK +type Address interface { + Equals(Address) + +bool + Empty() + +bool + Marshal() ([]byte, error) + +MarshalJSON() ([]byte, error) + +Bytes() []byte + String() + +string + Format(s fmt.State, verb rune) +} + +/ Ensure that different address types implement the interface +var ( + _ Address = AccAddress{ +} + _ Address = ValAddress{ +} + _ Address = ConsAddress{ +} +) + +/ ---------------------------------------------------------------------------- +/ account +/ ---------------------------------------------------------------------------- + +/ AccAddress a wrapper around bytes meant to represent an account address. +/ When marshaled to a string or JSON, it uses Bech32. +type AccAddress []byte + +/ AccAddressFromHexUnsafe creates an AccAddress from a HEX-encoded string. 
+/ +/ Note, this function is considered unsafe as it may produce an AccAddress from +/ otherwise invalid input, such as a transaction hash. Please use +/ AccAddressFromBech32. +func AccAddressFromHexUnsafe(address string) (addr AccAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return AccAddress(bz), err +} + +/ VerifyAddressFormat verifies that the provided bytes form a valid address +/ according to the default address rules or a custom address verifier set by +/ GetConfig().SetAddressVerifier(). +/ TODO make an issue to get rid of global Config +/ ref: https://github.com/cosmos/cosmos-sdk/issues/9690 +func VerifyAddressFormat(bz []byte) + +error { + verifier := GetConfig().GetAddressVerifier() + if verifier != nil { + return verifier(bz) +} + if len(bz) == 0 { + return errorsmod.Wrap(sdkerrors.ErrUnknownAddress, "addresses cannot be empty") +} + if len(bz) > address.MaxAddrLen { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "address max length is %d, got %d", address.MaxAddrLen, len(bz)) +} + +return nil +} + +/ MustAccAddressFromBech32 calls AccAddressFromBech32 and panics on error. +func MustAccAddressFromBech32(address string) + +AccAddress { + addr, err := AccAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ AccAddressFromBech32 creates an AccAddress from a Bech32 string. 
+func AccAddressFromBech32(address string) (addr AccAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return AccAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixAccAddr := GetConfig().GetBech32AccountAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixAccAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return AccAddress(bz), nil +} + +/ Returns boolean for whether two AccAddresses are Equal +func (aa AccAddress) + +Equals(aa2 Address) + +bool { + if aa.Empty() && aa2.Empty() { + return true +} + +return bytes.Equal(aa.Bytes(), aa2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (aa AccAddress) + +Empty() + +bool { + return len(aa) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (aa AccAddress) + +Marshal() ([]byte, error) { + return aa, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (aa *AccAddress) + +Unmarshal(data []byte) + +error { + *aa = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (aa AccAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(aa.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (aa AccAddress) + +MarshalYAML() (any, error) { + return aa.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ UnmarshalYAML unmarshals from JSON assuming Bech32 encoding. 
+func (aa *AccAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ Bytes returns the raw address bytes. +func (aa AccAddress) + +Bytes() []byte { + return aa +} + +/ String implements the Stringer interface. +func (aa AccAddress) + +String() + +string { + if aa.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(aa) + if IsAddrCacheEnabled() { + accAddrMu.Lock() + +defer accAddrMu.Unlock() + +addr, ok := accAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32AccountAddrPrefix(), aa, accAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. + +func (aa AccAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(aa.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", aa) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(aa)) +} +} + +/ ---------------------------------------------------------------------------- +/ validator operator +/ ---------------------------------------------------------------------------- + +/ ValAddress defines a wrapper around bytes meant to present a validator's +/ operator. When marshaled to a string or JSON, it uses Bech32. +type ValAddress []byte + +/ ValAddressFromHex creates a ValAddress from a hex string. +func ValAddressFromHex(address string) (addr ValAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ValAddress(bz), err +} + +/ ValAddressFromBech32 creates a ValAddress from a Bech32 string. 
+func ValAddressFromBech32(address string) (addr ValAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ValAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixValAddr := GetConfig().GetBech32ValidatorAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixValAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ValAddress(bz), nil +} + +/ MustValAddressFromBech32 calls ValAddressFromBech32 and panics on error. +func MustValAddressFromBech32(address string) + +ValAddress { + addr, err := ValAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ Returns boolean for whether two ValAddresses are Equal +func (va ValAddress) + +Equals(va2 Address) + +bool { + if va.Empty() && va2.Empty() { + return true +} + +return bytes.Equal(va.Bytes(), va2.Bytes()) +} + +/ Returns boolean for whether an ValAddress is empty +func (va ValAddress) + +Empty() + +bool { + return len(va) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (va ValAddress) + +Marshal() ([]byte, error) { + return va, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (va *ValAddress) + +Unmarshal(data []byte) + +error { + *va = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (va ValAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(va.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (va ValAddress) + +MarshalYAML() (any, error) { + return va.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. 
+func (va *ValAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ Bytes returns the raw address bytes. +func (va ValAddress) + +Bytes() []byte { + return va +} + +/ String implements the Stringer interface. +func (va ValAddress) + +String() + +string { + if va.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(va) + if IsAddrCacheEnabled() { + valAddrMu.Lock() + +defer valAddrMu.Unlock() + +addr, ok := valAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ValidatorAddrPrefix(), va, valAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. + +func (va ValAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(va.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", va) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(va)) +} +} + +/ ---------------------------------------------------------------------------- +/ consensus node +/ ---------------------------------------------------------------------------- + +/ ConsAddress defines a wrapper around bytes meant to present a consensus node. +/ When marshaled to a string or JSON, it uses Bech32. +type ConsAddress []byte + +/ ConsAddressFromHex creates a ConsAddress from a hex string. 
+/ Deprecated: use ConsensusAddressCodec from Staking keeper +func ConsAddressFromHex(address string) (addr ConsAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ConsAddress(bz), err +} + +/ ConsAddressFromBech32 creates a ConsAddress from a Bech32 string. +func ConsAddressFromBech32(address string) (addr ConsAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ConsAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixConsAddr := GetConfig().GetBech32ConsensusAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixConsAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ConsAddress(bz), nil +} + +/ get ConsAddress from pubkey +func GetConsAddress(pubkey cryptotypes.PubKey) + +ConsAddress { + return ConsAddress(pubkey.Address()) +} + +/ Returns boolean for whether two ConsAddress are Equal +func (ca ConsAddress) + +Equals(ca2 Address) + +bool { + if ca.Empty() && ca2.Empty() { + return true +} + +return bytes.Equal(ca.Bytes(), ca2.Bytes()) +} + +/ Returns boolean for whether an ConsAddress is empty +func (ca ConsAddress) + +Empty() + +bool { + return len(ca) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (ca ConsAddress) + +Marshal() ([]byte, error) { + return ca, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (ca *ConsAddress) + +Unmarshal(data []byte) + +error { + *ca = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (ca ConsAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(ca.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (ca ConsAddress) + +MarshalYAML() (any, error) { + return ca.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. 
+func (ca *ConsAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ Bytes returns the raw address bytes. +func (ca ConsAddress) + +Bytes() []byte { + return ca +} + +/ String implements the Stringer interface. +func (ca ConsAddress) + +String() + +string { + if ca.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(ca) + if IsAddrCacheEnabled() { + consAddrMu.Lock() + +defer consAddrMu.Unlock() + +addr, ok := consAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ConsensusAddrPrefix(), ca, consAddrCache, key) +} + +/ Bech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. Returns an error if the bech32 conversion +/ fails or the prefix is empty. +func Bech32ifyAddressBytes(prefix string, bs []byte) (string, error) { + if len(bs) == 0 { + return "", nil +} + if len(prefix) == 0 { + return "", errors.New("prefix cannot be empty") +} + +return bech32.ConvertAndEncode(prefix, bs) +} + +/ MustBech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. It panics if the bech32 conversion +/ fails or the prefix is empty. 
+func MustBech32ifyAddressBytes(prefix string, bs []byte) + +string { + s, err := Bech32ifyAddressBytes(prefix, bs) + if err != nil { + panic(err) +} + +return s +} + +/ Format implements the fmt.Formatter interface. + +func (ca ConsAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(ca.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", ca) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(ca)) +} +} + +/ ---------------------------------------------------------------------------- +/ auxiliary +/ ---------------------------------------------------------------------------- + +var errBech32EmptyAddress = errors.New("decoding Bech32 address failed: must provide a non empty address") + +/ GetFromBech32 decodes a bytestring from a Bech32 encoded string. +func GetFromBech32(bech32str, prefix string) ([]byte, error) { + if len(bech32str) == 0 { + return nil, errBech32EmptyAddress +} + +hrp, bz, err := bech32.DecodeAndConvert(bech32str) + if err != nil { + return nil, err +} + if hrp != prefix { + return nil, fmt.Errorf("invalid Bech32 prefix; expected %s, got %s", prefix, hrp) +} + +return bz, nil +} + +func addressBytesFromHexString(address string) ([]byte, error) { + if len(address) == 0 { + return nil, ErrEmptyHexAddress +} + +return hex.DecodeString(address) +} + +/ cacheBech32Addr is not concurrency safe. Concurrent access to cache causes race condition. +func cacheBech32Addr(prefix string, addr []byte, cache *simplelru.LRU, cacheKey string) + +string { + bech32Addr, err := bech32.ConvertAndEncode(prefix, addr) + if err != nil { + panic(err) +} + if IsAddrCacheEnabled() { + cache.Add(cacheKey, bech32Addr) +} + +return bech32Addr +} +``` + +Address construction algorithm is defined in [ADR-28](docs/sdk/next/documentation/legacy/adr-comprehensive#adr-028-public-key-addresses). 
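The `String()` implementations above pair a mutex-guarded LRU with the Bech32 encoder, so repeated conversions of the same address bytes are served from cache. The pattern can be sketched with only the standard library; this is an illustrative toy (a hex encoding stands in for Bech32, and the minimal LRU here is not the SDK's `simplelru`):

```go
package main

import (
	"container/list"
	"encoding/hex"
	"fmt"
	"sync"
)

// lruCache is a minimal fixed-capacity LRU keyed by string.
type lruCache struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // each element holds an *entry
}

type entry struct{ key, val string }

func newLRU(capacity int) *lruCache {
	return &lruCache{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) Get(key string) (string, bool) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry).val, true
	}
	return "", false
}

func (c *lruCache) Add(key, val string) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		el.Value.(*entry).val = val
		return
	}
	c.items[key] = c.order.PushFront(&entry{key, val})
	if c.order.Len() > c.cap {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

// addrMu/addrCache mirror the accAddrMu/accAddrCache pairing: the mutex
// guards the (non-thread-safe) LRU, and misses fall through to the encoder.
var (
	addrMu    sync.Mutex
	addrCache = newLRU(2)
)

// encodeAddr stands in for the expensive Bech32 conversion.
func encodeAddr(prefix string, raw []byte) string {
	key := prefix + string(raw)
	addrMu.Lock()
	defer addrMu.Unlock()
	if s, ok := addrCache.Get(key); ok {
		return s // cache hit: skip re-encoding
	}
	s := prefix + hex.EncodeToString(raw)
	addrCache.Add(key, s)
	return s
}

func main() {
	a := encodeAddr("cosmos", []byte{0x01, 0x02})
	b := encodeAddr("cosmos", []byte{0x01, 0x02}) // served from cache
	fmt.Println(a == b, a)                        // true cosmos0102
}
```

In the SDK itself, `cacheBech32Addr` plays the role of the encoder, `simplelru.LRU` provides the cache, and `SetAddrCacheEnabled(false)` bypasses caching entirely.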
+Here is the standard way to obtain an account address from a `pub` public key:
+
+```go
+sdk.AccAddress(pub.Address().Bytes())
+```
+
+Of note, the `Marshal()` and `Bytes()` methods both return the same raw `[]byte` form of the address. `Marshal()` is required for Protobuf compatibility.
+
+For user interaction, addresses are formatted using [Bech32](https://en.bitcoin.it/wiki/Bech32), implemented by the `String` method. Bech32 is the only supported format to use when interacting with a blockchain. The Bech32 human-readable part (Bech32 prefix) is used to denote an address type. Example:
+
+```go expandable
+package types
+
+import (
+
+ "bytes"
+ "encoding/hex"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "github.com/hashicorp/golang-lru/simplelru"
+ "sigs.k8s.io/yaml"
+
+ errorsmod "cosmossdk.io/errors"
+
+ cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+ "github.com/cosmos/cosmos-sdk/internal/conv"
+ "github.com/cosmos/cosmos-sdk/types/address"
+ "github.com/cosmos/cosmos-sdk/types/bech32"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+const (
+ / Constants defined here are the default values for addresses.
+ / You can use the specific values for your project.
+ / Add the following lines to the `main()` of your server.
+ / + / config := sdk.GetConfig() + / config.SetBech32PrefixForAccount(yourBech32PrefixAccAddr, yourBech32PrefixAccPub) + / config.SetBech32PrefixForValidator(yourBech32PrefixValAddr, yourBech32PrefixValPub) + / config.SetBech32PrefixForConsensusNode(yourBech32PrefixConsAddr, yourBech32PrefixConsPub) + / config.SetPurpose(yourPurpose) + / config.SetCoinType(yourCoinType) + / config.Seal() + + / Bech32MainPrefix defines the main SDK Bech32 prefix of an account's address + Bech32MainPrefix = "cosmos" + + / Purpose is the ATOM purpose as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +Purpose = 44 + + / CoinType is the ATOM coin type as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +CoinType = 118 + + / FullFundraiserPath is the parts of the BIP44 HD path that are fixed by + / what we used during the ATOM fundraiser. + FullFundraiserPath = "m/44'/118'/0'/0/0" + + / PrefixAccount is the prefix for account keys + PrefixAccount = "acc" + / PrefixValidator is the prefix for validator keys + PrefixValidator = "val" + / PrefixConsensus is the prefix for consensus keys + PrefixConsensus = "cons" + / PrefixPublic is the prefix for public keys + PrefixPublic = "pub" + / PrefixOperator is the prefix for operator keys + PrefixOperator = "oper" + + / PrefixAddress is the prefix for addresses + PrefixAddress = "addr" + + / Bech32PrefixAccAddr defines the Bech32 prefix of an account's address + Bech32PrefixAccAddr = Bech32MainPrefix + / Bech32PrefixAccPub defines the Bech32 prefix of an account's public key + Bech32PrefixAccPub = Bech32MainPrefix + PrefixPublic + / Bech32PrefixValAddr defines the Bech32 prefix of a validator's operator address + Bech32PrefixValAddr = Bech32MainPrefix + PrefixValidator + PrefixOperator + / Bech32PrefixValPub defines the Bech32 prefix of a validator's operator public key + Bech32PrefixValPub = Bech32MainPrefix + PrefixValidator + PrefixOperator + PrefixPublic + / 
Bech32PrefixConsAddr defines the Bech32 prefix of a consensus node address + Bech32PrefixConsAddr = Bech32MainPrefix + PrefixValidator + PrefixConsensus + / Bech32PrefixConsPub defines the Bech32 prefix of a consensus node public key + Bech32PrefixConsPub = Bech32MainPrefix + PrefixValidator + PrefixConsensus + PrefixPublic +) + +/ cache variables +var ( + / AccAddress.String() + +is expensive and if unoptimized dominantly showed up in profiles, + / yet has no mechanisms to trivially cache the result given that AccAddress is a []byte type. + accAddrMu sync.Mutex + accAddrCache *simplelru.LRU + consAddrMu sync.Mutex + consAddrCache *simplelru.LRU + valAddrMu sync.Mutex + valAddrCache *simplelru.LRU + + isCachingEnabled atomic.Bool +) + +/ sentinel errors +var ( + ErrEmptyHexAddress = errors.New("decoding address from hex string failed: empty address") +) + +func init() { + var err error + SetAddrCacheEnabled(true) + + / in total the cache size is 61k entries. Key is 32 bytes and value is around 50-70 bytes. + / That will make around 92 * 61k * 2 (LRU) + +bytes ~ 11 MB + if accAddrCache, err = simplelru.NewLRU(60000, nil); err != nil { + panic(err) +} + if consAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} + if valAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} +} + +/ SetAddrCacheEnabled enables or disables accAddrCache, consAddrCache, and valAddrCache. By default, caches are enabled. +func SetAddrCacheEnabled(enabled bool) { + isCachingEnabled.Store(enabled) +} + +/ IsAddrCacheEnabled returns if the address caches are enabled. 
+func IsAddrCacheEnabled() + +bool { + return isCachingEnabled.Load() +} + +/ Address is a common interface for different types of addresses used by the SDK +type Address interface { + Equals(Address) + +bool + Empty() + +bool + Marshal() ([]byte, error) + +MarshalJSON() ([]byte, error) + +Bytes() []byte + String() + +string + Format(s fmt.State, verb rune) +} + +/ Ensure that different address types implement the interface +var ( + _ Address = AccAddress{ +} + _ Address = ValAddress{ +} + _ Address = ConsAddress{ +} +) + +/ ---------------------------------------------------------------------------- +/ account +/ ---------------------------------------------------------------------------- + +/ AccAddress a wrapper around bytes meant to represent an account address. +/ When marshaled to a string or JSON, it uses Bech32. +type AccAddress []byte + +/ AccAddressFromHexUnsafe creates an AccAddress from a HEX-encoded string. +/ +/ Note, this function is considered unsafe as it may produce an AccAddress from +/ otherwise invalid input, such as a transaction hash. Please use +/ AccAddressFromBech32. +func AccAddressFromHexUnsafe(address string) (addr AccAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return AccAddress(bz), err +} + +/ VerifyAddressFormat verifies that the provided bytes form a valid address +/ according to the default address rules or a custom address verifier set by +/ GetConfig().SetAddressVerifier(). 
+/ TODO make an issue to get rid of global Config +/ ref: https://github.com/cosmos/cosmos-sdk/issues/9690 +func VerifyAddressFormat(bz []byte) + +error { + verifier := GetConfig().GetAddressVerifier() + if verifier != nil { + return verifier(bz) +} + if len(bz) == 0 { + return errorsmod.Wrap(sdkerrors.ErrUnknownAddress, "addresses cannot be empty") +} + if len(bz) > address.MaxAddrLen { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "address max length is %d, got %d", address.MaxAddrLen, len(bz)) +} + +return nil +} + +/ MustAccAddressFromBech32 calls AccAddressFromBech32 and panics on error. +func MustAccAddressFromBech32(address string) + +AccAddress { + addr, err := AccAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ AccAddressFromBech32 creates an AccAddress from a Bech32 string. +func AccAddressFromBech32(address string) (addr AccAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return AccAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixAccAddr := GetConfig().GetBech32AccountAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixAccAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return AccAddress(bz), nil +} + +/ Returns boolean for whether two AccAddresses are Equal +func (aa AccAddress) + +Equals(aa2 Address) + +bool { + if aa.Empty() && aa2.Empty() { + return true +} + +return bytes.Equal(aa.Bytes(), aa2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (aa AccAddress) + +Empty() + +bool { + return len(aa) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (aa AccAddress) + +Marshal() ([]byte, error) { + return aa, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. 
+func (aa *AccAddress) + +Unmarshal(data []byte) + +error { + *aa = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (aa AccAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(aa.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (aa AccAddress) + +MarshalYAML() (any, error) { + return aa.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ UnmarshalYAML unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ Bytes returns the raw address bytes. +func (aa AccAddress) + +Bytes() []byte { + return aa +} + +/ String implements the Stringer interface. +func (aa AccAddress) + +String() + +string { + if aa.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(aa) + if IsAddrCacheEnabled() { + accAddrMu.Lock() + +defer accAddrMu.Unlock() + +addr, ok := accAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32AccountAddrPrefix(), aa, accAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. 
+ +func (aa AccAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(aa.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", aa) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(aa)) +} +} + +/ ---------------------------------------------------------------------------- +/ validator operator +/ ---------------------------------------------------------------------------- + +/ ValAddress defines a wrapper around bytes meant to present a validator's +/ operator. When marshaled to a string or JSON, it uses Bech32. +type ValAddress []byte + +/ ValAddressFromHex creates a ValAddress from a hex string. +func ValAddressFromHex(address string) (addr ValAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ValAddress(bz), err +} + +/ ValAddressFromBech32 creates a ValAddress from a Bech32 string. +func ValAddressFromBech32(address string) (addr ValAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ValAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixValAddr := GetConfig().GetBech32ValidatorAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixValAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ValAddress(bz), nil +} + +/ MustValAddressFromBech32 calls ValAddressFromBech32 and panics on error. +func MustValAddressFromBech32(address string) + +ValAddress { + addr, err := ValAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ Returns boolean for whether two ValAddresses are Equal +func (va ValAddress) + +Equals(va2 Address) + +bool { + if va.Empty() && va2.Empty() { + return true +} + +return bytes.Equal(va.Bytes(), va2.Bytes()) +} + +/ Returns boolean for whether an ValAddress is empty +func (va ValAddress) + +Empty() + +bool { + return len(va) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. 
+func (va ValAddress) + +Marshal() ([]byte, error) { + return va, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (va *ValAddress) + +Unmarshal(data []byte) + +error { + *va = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (va ValAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(va.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (va ValAddress) + +MarshalYAML() (any, error) { + return va.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ Bytes returns the raw address bytes. +func (va ValAddress) + +Bytes() []byte { + return va +} + +/ String implements the Stringer interface. +func (va ValAddress) + +String() + +string { + if va.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(va) + if IsAddrCacheEnabled() { + valAddrMu.Lock() + +defer valAddrMu.Unlock() + +addr, ok := valAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ValidatorAddrPrefix(), va, valAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. 
+ +func (va ValAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(va.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", va) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(va)) +} +} + +/ ---------------------------------------------------------------------------- +/ consensus node +/ ---------------------------------------------------------------------------- + +/ ConsAddress defines a wrapper around bytes meant to present a consensus node. +/ When marshaled to a string or JSON, it uses Bech32. +type ConsAddress []byte + +/ ConsAddressFromHex creates a ConsAddress from a hex string. +/ Deprecated: use ConsensusAddressCodec from Staking keeper +func ConsAddressFromHex(address string) (addr ConsAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ConsAddress(bz), err +} + +/ ConsAddressFromBech32 creates a ConsAddress from a Bech32 string. +func ConsAddressFromBech32(address string) (addr ConsAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ConsAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixConsAddr := GetConfig().GetBech32ConsensusAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixConsAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ConsAddress(bz), nil +} + +/ get ConsAddress from pubkey +func GetConsAddress(pubkey cryptotypes.PubKey) + +ConsAddress { + return ConsAddress(pubkey.Address()) +} + +/ Returns boolean for whether two ConsAddress are Equal +func (ca ConsAddress) + +Equals(ca2 Address) + +bool { + if ca.Empty() && ca2.Empty() { + return true +} + +return bytes.Equal(ca.Bytes(), ca2.Bytes()) +} + +/ Returns boolean for whether an ConsAddress is empty +func (ca ConsAddress) + +Empty() + +bool { + return len(ca) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. 
+func (ca ConsAddress) + +Marshal() ([]byte, error) { + return ca, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (ca *ConsAddress) + +Unmarshal(data []byte) + +error { + *ca = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (ca ConsAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(ca.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (ca ConsAddress) + +MarshalYAML() (any, error) { + return ca.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ Bytes returns the raw address bytes. +func (ca ConsAddress) + +Bytes() []byte { + return ca +} + +/ String implements the Stringer interface. +func (ca ConsAddress) + +String() + +string { + if ca.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(ca) + if IsAddrCacheEnabled() { + consAddrMu.Lock() + +defer consAddrMu.Unlock() + +addr, ok := consAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ConsensusAddrPrefix(), ca, consAddrCache, key) +} + +/ Bech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. 
Returns an error if the bech32 conversion +/ fails or the prefix is empty. +func Bech32ifyAddressBytes(prefix string, bs []byte) (string, error) { + if len(bs) == 0 { + return "", nil +} + if len(prefix) == 0 { + return "", errors.New("prefix cannot be empty") +} + +return bech32.ConvertAndEncode(prefix, bs) +} + +/ MustBech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. It panics if the bech32 conversion +/ fails or the prefix is empty. +func MustBech32ifyAddressBytes(prefix string, bs []byte) + +string { + s, err := Bech32ifyAddressBytes(prefix, bs) + if err != nil { + panic(err) +} + +return s +} + +/ Format implements the fmt.Formatter interface. + +func (ca ConsAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(ca.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", ca) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(ca)) +} +} + +/ ---------------------------------------------------------------------------- +/ auxiliary +/ ---------------------------------------------------------------------------- + +var errBech32EmptyAddress = errors.New("decoding Bech32 address failed: must provide a non empty address") + +/ GetFromBech32 decodes a bytestring from a Bech32 encoded string. +func GetFromBech32(bech32str, prefix string) ([]byte, error) { + if len(bech32str) == 0 { + return nil, errBech32EmptyAddress +} + +hrp, bz, err := bech32.DecodeAndConvert(bech32str) + if err != nil { + return nil, err +} + if hrp != prefix { + return nil, fmt.Errorf("invalid Bech32 prefix; expected %s, got %s", prefix, hrp) +} + +return bz, nil +} + +func addressBytesFromHexString(address string) ([]byte, error) { + if len(address) == 0 { + return nil, ErrEmptyHexAddress +} + +return hex.DecodeString(address) +} + +/ cacheBech32Addr is not concurrency safe. Concurrent access to cache causes race condition. 
+func cacheBech32Addr(prefix string, addr []byte, cache *simplelru.LRU, cacheKey string) + +string { + bech32Addr, err := bech32.ConvertAndEncode(prefix, addr) + if err != nil { + panic(err) +} + if IsAddrCacheEnabled() { + cache.Add(cacheKey, bech32Addr) +} + +return bech32Addr +} +``` + +| | Address Bech32 Prefix | +| ------------------ | --------------------- | +| Accounts | cosmos | +| Validator Operator | cosmosvaloper | +| Consensus Nodes | cosmosvalcons | + +### Public Keys + +Public keys in the Cosmos SDK are defined by the `cryptotypes.PubKey` interface. Since public keys are saved in a store, `cryptotypes.PubKey` extends the `proto.Message` interface: + +```go expandable +package types + +import ( + + cmtcrypto "github.com/cometbft/cometbft/crypto" + proto "github.com/cosmos/gogoproto/proto" +) + +/ PubKey defines a public key and extends proto.Message. +type PubKey interface { + proto.Message + + Address() + +Address + Bytes() []byte + VerifySignature(msg, sig []byte) + +bool + Equals(PubKey) + +bool + Type() + +string +} + +/ LedgerPrivKey defines a private key that is not a proto message. For now, +/ LedgerSecp256k1 keys are not converted to proto.Message yet, this is why +/ they use LedgerPrivKey instead of PrivKey. All other keys must use PrivKey +/ instead of LedgerPrivKey. +/ TODO https://github.com/cosmos/cosmos-sdk/issues/7357. +type LedgerPrivKey interface { + Bytes() []byte + Sign(msg []byte) ([]byte, error) + +PubKey() + +PubKey + Equals(LedgerPrivKey) + +bool + Type() + +string +} + +/ LedgerPrivKeyAminoJSON is a Ledger PrivKey type that supports signing with +/ SIGN_MODE_LEGACY_AMINO_JSON. It is added as a non-breaking change, instead of directly +/ on the LedgerPrivKey interface (whose Sign method will sign with TEXTUAL), +/ and will be deprecated/removed once LEGACY_AMINO_JSON is removed. +type LedgerPrivKeyAminoJSON interface { + LedgerPrivKey + / SignLedgerAminoJSON signs messages on the Ledger device using + / SIGN_MODE_LEGACY_AMINO_JSON.
+ SignLedgerAminoJSON(msg []byte) ([]byte, error) +} + +/ PrivKey defines a private key and extends proto.Message. For now, it extends +/ LedgerPrivKey (see godoc for LedgerPrivKey). Ultimately, we should remove +/ LedgerPrivKey and add its methods here directly. +/ TODO https://github.com/cosmos/cosmos-sdk/issues/7357. +type PrivKey interface { + proto.Message + LedgerPrivKey +} + +type ( + Address = cmtcrypto.Address +) +``` + +A compressed format is used for `secp256k1` and `secp256r1` serialization. + +- The first byte is a `0x02` byte if the `y`-coordinate is the lexicographically largest of the two associated with the `x`-coordinate. +- Otherwise the first byte is a `0x03`. + +This prefix is followed by the `x`-coordinate. + +Public keys are not used to reference accounts (or users) and in general are not used when composing transaction messages (with a few exceptions: `MsgCreateValidator`, `Validator` and `Multisig` messages). +For user interactions, `PubKey` is formatted using Protobuf JSON ([ProtoMarshalJSON](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/codec/json.go#L14-L34) function). Example: + +```go expandable +package keys + +import ( + + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ Use the protobuf interface marshaler rather than generic JSON + +/ KeyOutput defines a structure wrapping around an Info object used for output +/ functionality.
+type KeyOutput struct { + Name string `json:"name" yaml:"name"` + Type string `json:"type" yaml:"type"` + Address string `json:"address" yaml:"address"` + PubKey string `json:"pubkey" yaml:"pubkey"` + Mnemonic string `json:"mnemonic,omitempty" yaml:"mnemonic"` +} + +/ NewKeyOutput creates a default KeyOutput instance without Mnemonic, Threshold and PubKeys +func NewKeyOutput(name string, keyType keyring.KeyType, a sdk.Address, pk cryptotypes.PubKey) (KeyOutput, error) { + apk, err := codectypes.NewAnyWithValue(pk) + if err != nil { + return KeyOutput{ +}, err +} + +bz, err := codec.ProtoMarshalJSON(apk, nil) + if err != nil { + return KeyOutput{ +}, err +} + +return KeyOutput{ + Name: name, + Type: keyType.String(), + Address: a.String(), + PubKey: string(bz), +}, nil +} + +/ MkConsKeyOutput creates a KeyOutput with the "cons" Bech32 prefix. +func MkConsKeyOutput(k *keyring.Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.ConsAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkValKeyOutput creates a KeyOutput with the "val" Bech32 prefix. +func MkValKeyOutput(k *keyring.Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.ValAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkAccKeyOutput creates a KeyOutput with the "acc" Bech32 prefix. If the +/ public key is a multisig public key, then the threshold and constituent +/ public keys will be added. +func MkAccKeyOutput(k *keyring.Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.AccAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkAccKeysOutput returns a slice of KeyOutput objects, each with the "acc" +/ Bech32 prefix, given a slice of Record objects. It returns an error if any +/ call to MkAccKeyOutput fails.
+func MkAccKeysOutput(records []*keyring.Record) ([]KeyOutput, error) { + kos := make([]KeyOutput, len(records)) + +var err error + for i, r := range records { + kos[i], err = MkAccKeyOutput(r) + if err != nil { + return nil, err +} + +} + +return kos, nil +} +``` + +## Keyring + +A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface: + +```go expandable +package keyring + +import ( + + "bufio" + "encoding/hex" + "fmt" + "io" + "os" + "path/filepath" + "sort" + "strings" + "github.com/99designs/keyring" + "github.com/cockroachdb/errors" + "github.com/cosmos/go-bip39" + "golang.org/x/crypto/bcrypt" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client/input" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/crypto" + "github.com/cosmos/cosmos-sdk/crypto/hd" + "github.com/cosmos/cosmos-sdk/crypto/ledger" + "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ Backend options for Keyring +const ( + BackendFile = "file" + BackendOS = "os" + BackendKWallet = "kwallet" + BackendPass = "pass" + BackendTest = "test" + BackendMemory = "memory" +) + +const ( + keyringFileDirName = "keyring-file" + keyringTestDirName = "keyring-test" + passKeyringPrefix = "keyring-%s" + + / temporary pass phrase for exporting a key during a key rename + passPhrase = "temp" + / prefix for exported hex private keys + hexPrefix = "0x" +) + +var ( + _ Keyring = &keystore{ +} + _ KeyringWithDB = &keystore{ +} + +maxPassphraseEntryAttempts = 3 +) + +/ Keyring exposes operations over a backend supported by github.com/99designs/keyring. +type Keyring interface { + / Get the backend type used in the keyring config: "file", "os", "kwallet", "pass", "test", "memory". + Backend() + +string + / List all keys. 
+ List() ([]*Record, error) + + / Supported signing algorithms for Keyring and Ledger respectively. + SupportedAlgorithms() (SigningAlgoList, SigningAlgoList) + + / Key and KeyByAddress return keys by uid and address respectively. + Key(uid string) (*Record, error) + +KeyByAddress(address sdk.Address) (*Record, error) + + / Delete and DeleteByAddress remove keys from the keyring. + Delete(uid string) + +error + DeleteByAddress(address sdk.Address) + +error + + / Rename an existing key from the Keyring + Rename(from, to string) + +error + + / NewMnemonic generates a new mnemonic, derives a hierarchical deterministic key from it, and + / persists the key to storage. Returns the generated mnemonic and the key Info. + / It returns an error if it fails to generate a key for the given algo type, or if + / another key is already stored under the same name or address. + / + / A passphrase set to the empty string will set the passphrase to the DefaultBIP39Passphrase value. + NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (*Record, string, error) + + / NewAccount converts a mnemonic to a private key and BIP-39 HD Path and persists it. + / It fails if there is an existing key Info with the same address. + NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error) + + / SaveLedgerKey retrieves a public key reference from a Ledger device and persists it. + SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (*Record, error) + + / SaveOfflineKey stores a public key and returns the persisted Info structure. + SaveOfflineKey(uid string, pubkey types.PubKey) (*Record, error) + + / SaveMultisig stores and returns a new multisig (offline) +key reference. + SaveMultisig(uid string, pubkey types.PubKey) (*Record, error) + +Signer + + Importer + Exporter + + Migrator +} + +type KeyringWithDB interface { + Keyring + + / Get the db keyring used in the keystore.
+ DB() + +keyring.Keyring +} + +/ Signer is implemented by key stores that want to provide signing capabilities. +type Signer interface { + / Sign signs byte messages with a user key. + Sign(uid string, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) + + / SignByAddress signs byte messages with a user key, providing the address. + SignByAddress(address sdk.Address, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) +} + +/ Importer is implemented by key stores that support import of public and private keys. +type Importer interface { + / ImportPrivKey imports ASCII armored passphrase-encrypted private keys. + ImportPrivKey(uid, armor, passphrase string) + +error + / ImportPrivKeyHex imports hex encoded keys. + ImportPrivKeyHex(uid, privKey, algoStr string) + +error + / ImportPubKey imports ASCII armored public keys. + ImportPubKey(uid, armor string) + +error +} + +/ Migrator is implemented by key stores and enables migration of keys from amino to proto. +type Migrator interface { + MigrateAll() ([]*Record, error) +} + +/ Exporter is implemented by key stores that support export of public and private keys. +type Exporter interface { + / Export public key + ExportPubKeyArmor(uid string) (string, error) + +ExportPubKeyArmorByAddress(address sdk.Address) (string, error) + + / ExportPrivKeyArmor returns a private key in ASCII armored format. + / It returns an error if the key does not exist or a wrong encryption passphrase is supplied. + ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error) + +ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error) +} + +/ Option overrides keyring configuration options. +type Option func(options *Options) + +/ NewInMemory creates a transient keyring useful for testing +/ purposes and on-the-fly key generation. +/ Keybase options can be applied when generating this new Keybase.
+func NewInMemory(cdc codec.Codec, opts ...Option) + +Keyring { + return NewInMemoryWithKeyring(keyring.NewArrayKeyring(nil), cdc, opts...) +} + +/ NewInMemoryWithKeyring returns an in memory keyring using the specified keyring.Keyring +/ as the backing keyring. +func NewInMemoryWithKeyring(kr keyring.Keyring, cdc codec.Codec, opts ...Option) + +Keyring { + return newKeystore(kr, cdc, BackendMemory, opts...) +} + +/ New creates a new instance of a keyring. +/ Keyring options can be applied when generating the new instance. +/ Available backends are "os", "file", "kwallet", "memory", "pass", "test". +func newKeyringGeneric( + appName, backend, rootDir string, userInput io.Reader, cdc codec.Codec, opts ...Option, +) (Keyring, error) { + var ( + db keyring.Keyring + err error + ) + switch backend { + case BackendMemory: + return NewInMemory(cdc, opts...), err + case BackendTest: + db, err = keyring.Open(newTestBackendKeyringConfig(appName, rootDir)) + case BackendFile: + db, err = keyring.Open(newFileBackendKeyringConfig(appName, rootDir, userInput)) + case BackendOS: + db, err = keyring.Open(newOSBackendKeyringConfig(appName, rootDir, userInput)) + case BackendKWallet: + db, err = keyring.Open(newKWalletBackendKeyringConfig(appName, rootDir, userInput)) + case BackendPass: + db, err = keyring.Open(newPassBackendKeyringConfig(appName, rootDir, userInput)) + +default: + return nil, errorsmod.Wrap(ErrUnknownBacked, backend) +} + if err != nil { + return nil, err +} + +return newKeystore(db, cdc, backend, opts...), nil +} + +type keystore struct { + db keyring.Keyring + cdc codec.Codec + backend string + options Options +} + +func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) + +keystore { + / Default options for keybase, these can be overwritten using the + / Option function + options := Options{ + SupportedAlgos: SigningAlgoList{ + hd.Secp256k1 +}, + SupportedAlgosLedger: SigningAlgoList{ + hd.Secp256k1 +}, +} + for _, optionFn := 
range opts { + optionFn(&options) +} + if options.LedgerDerivation != nil { + ledger.SetDiscoverLedger(options.LedgerDerivation) +} + if options.LedgerCreateKey != nil { + ledger.SetCreatePubkey(options.LedgerCreateKey) +} + if options.LedgerAppName != "" { + ledger.SetAppName(options.LedgerAppName) +} + if options.LedgerSigSkipDERConv { + ledger.SetSkipDERConversion() +} + +return keystore{ + db: kr, + cdc: cdc, + backend: backend, + options: options, +} +} + +/ Backend returns the keyring backend option used in the config +func (ks keystore) + +Backend() + +string { + return ks.backend +} + +func (ks keystore) + +ExportPubKeyArmor(uid string) (string, error) { + k, err := ks.Key(uid) + if err != nil { + return "", err +} + +key, err := k.GetPubKey() + if err != nil { + return "", err +} + +bz, err := ks.cdc.MarshalInterface(key) + if err != nil { + return "", err +} + +return crypto.ArmorPubKeyBytes(bz, key.Type()), nil +} + +/ DB returns the db keyring used in the keystore +func (ks keystore) + +DB() + +keyring.Keyring { + return ks.db +} + +func (ks keystore) + +ExportPubKeyArmorByAddress(address sdk.Address) (string, error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return "", err +} + +return ks.ExportPubKeyArmor(k.Name) +} + +/ ExportPrivKeyArmor exports encrypted privKey +func (ks keystore) + +ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error) { + priv, err := ks.ExportPrivateKeyObject(uid) + if err != nil { + return "", err +} + +return crypto.EncryptArmorPrivKey(priv, encryptPassphrase, priv.Type()), nil +} + +/ ExportPrivateKeyObject exports an armored private key object. 
+func (ks keystore) + +ExportPrivateKeyObject(uid string) (types.PrivKey, error) { + k, err := ks.Key(uid) + if err != nil { + return nil, err +} + +priv, err := extractPrivKeyFromRecord(k) + if err != nil { + return nil, err +} + +return priv, err +} + +func (ks keystore) + +ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return "", err +} + +return ks.ExportPrivKeyArmor(k.Name, encryptPassphrase) +} + +func (ks keystore) + +ImportPrivKey(uid, armor, passphrase string) + +error { + if k, err := ks.Key(uid); err == nil { + if uid == k.Name { + return errorsmod.Wrap(ErrOverwriteKey, uid) +} + +} + +privKey, _, err := crypto.UnarmorDecryptPrivKey(armor, passphrase) + if err != nil { + return errorsmod.Wrap(err, "failed to decrypt private key") +} + + _, err = ks.writeLocalKey(uid, privKey) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +ImportPrivKeyHex(uid, privKey, algoStr string) + +error { + if _, err := ks.Key(uid); err == nil { + return errorsmod.Wrap(ErrOverwriteKey, uid) +} + if privKey[:2] == hexPrefix { + privKey = privKey[2:] +} + +decodedPriv, err := hex.DecodeString(privKey) + if err != nil { + return err +} + +algo, err := NewSigningAlgoFromString(algoStr, ks.options.SupportedAlgos) + if err != nil { + return err +} + priv := algo.Generate()(decodedPriv) + _, err = ks.writeLocalKey(uid, priv) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +ImportPubKey(uid, armor string) + +error { + if _, err := ks.Key(uid); err == nil { + return errorsmod.Wrap(ErrOverwriteKey, uid) +} + +pubBytes, _, err := crypto.UnarmorPubKeyBytes(armor) + if err != nil { + return err +} + +var pubKey types.PubKey + if err := ks.cdc.UnmarshalInterface(pubBytes, &pubKey); err != nil { + return err +} + + _, err = ks.writeOfflineKey(uid, pubKey) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + 
+Sign(uid string, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) { + k, err := ks.Key(uid) + if err != nil { + return nil, nil, err +} + switch { + case k.GetLocal() != nil: + priv, err := extractPrivKeyFromLocal(k.GetLocal()) + if err != nil { + return nil, nil, err +} + +sig, err := priv.Sign(msg) + if err != nil { + return nil, nil, err +} + +return sig, priv.PubKey(), nil + case k.GetLedger() != nil: + return SignWithLedger(k, msg, signMode) + + / multi or offline record + default: + pub, err := k.GetPubKey() + if err != nil { + return nil, nil, err +} + +return nil, pub, ErrOfflineSign +} +} + +func (ks keystore) + +SignByAddress(address sdk.Address, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return nil, nil, err +} + +return ks.Sign(k.Name, msg, signMode) +} + +func (ks keystore) + +SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (*Record, error) { + if !ks.options.SupportedAlgosLedger.Contains(algo) { + return nil, errorsmod.Wrap(ErrUnsupportedSigningAlgo, fmt.Sprintf("signature algo %s is not defined in the keyring options", algo.Name())) +} + hdPath := hd.NewFundraiserParams(account, coinType, index) + +priv, _, err := ledger.NewPrivKeySecp256k1(*hdPath, hrp) + if err != nil { + return nil, errors.CombineErrors(ErrLedgerGenerateKey, err) +} + +return ks.writeLedgerKey(uid, priv.PubKey(), hdPath) +} + +func (ks keystore) + +writeLedgerKey(name string, pk types.PubKey, path *hd.BIP44Params) (*Record, error) { + k, err := NewLedgerRecord(name, pk, path) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +func (ks keystore) + +SaveMultisig(uid string, pubkey types.PubKey) (*Record, error) { + return ks.writeMultisigKey(uid, pubkey) +} + +func (ks keystore) + +SaveOfflineKey(uid string, pubkey types.PubKey) (*Record, error) { + return ks.writeOfflineKey(uid, pubkey) +} + +func (ks keystore) 
+ +DeleteByAddress(address sdk.Address) + +error { + k, err := ks.KeyByAddress(address) + if err != nil { + return err +} + +err = ks.Delete(k.Name) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +Rename(oldName, newName string) + +error { + _, err := ks.Key(newName) + if err == nil { + return errorsmod.Wrap(ErrKeyAlreadyExists, fmt.Sprintf("rename failed, %s", newName)) +} + +armor, err := ks.ExportPrivKeyArmor(oldName, passPhrase) + if err != nil { + return err +} + if err := ks.Delete(oldName); err != nil { + return err +} + if err := ks.ImportPrivKey(newName, armor, passPhrase); err != nil { + return err +} + +return nil +} + +/ Delete deletes a key in the keyring. `uid` represents the key name, without +/ the `.info` suffix. +func (ks keystore) + +Delete(uid string) + +error { + k, err := ks.Key(uid) + if err != nil { + return err +} + +addr, err := k.GetAddress() + if err != nil { + return err +} + +err = ks.db.Remove(addrHexKeyAsString(addr)) + if err != nil { + return err +} + +err = ks.db.Remove(infoKey(uid)) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +KeyByAddress(address sdk.Address) (*Record, error) { + ik, err := ks.db.Get(addrHexKeyAsString(address)) + if err != nil { + return nil, wrapKeyNotFound(err, fmt.Sprintf("key with address %s not found", address.String())) +} + if len(ik.Data) == 0 { + return nil, wrapKeyNotFound(err, fmt.Sprintf("key with address %s not found", address.String())) +} + +return ks.Key(string(ik.Data)) +} + +func wrapKeyNotFound(err error, msg string) + +error { + if errors.Is(err, keyring.ErrKeyNotFound) { + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, msg) +} + +return err +} + +func (ks keystore) + +List() ([]*Record, error) { + return ks.MigrateAll() +} + +func (ks keystore) + +NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (*Record, string, error) { + if language != English { + return nil, "", 
ErrUnsupportedLanguage +} + if !ks.isSupportedSigningAlgo(algo) { + return nil, "", ErrUnsupportedSigningAlgo +} + + / Default number of words (24): This generates a mnemonic directly from the + / number of words by reading system entropy. + entropy, err := bip39.NewEntropy(defaultEntropySize) + if err != nil { + return nil, "", err +} + +mnemonic, err := bip39.NewMnemonic(entropy) + if err != nil { + return nil, "", err +} + if bip39Passphrase == "" { + bip39Passphrase = DefaultBIP39Passphrase +} + +k, err := ks.NewAccount(uid, mnemonic, bip39Passphrase, hdPath, algo) + if err != nil { + return nil, "", err +} + +return k, mnemonic, nil +} + +func (ks keystore) + +NewAccount(name, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error) { + if !ks.isSupportedSigningAlgo(algo) { + return nil, ErrUnsupportedSigningAlgo +} + + / create master key and derive first key for keyring + derivedPriv, err := algo.Derive()(mnemonic, bip39Passphrase, hdPath) + if err != nil { + return nil, err +} + privKey := algo.Generate()(derivedPriv) + + / check if the key already exists with the same address and return an error + / if found + address := sdk.AccAddress(privKey.PubKey().Address()) + if _, err := ks.KeyByAddress(address); err == nil { + return nil, ErrDuplicatedAddress +} + +return ks.writeLocalKey(name, privKey) +} + +func (ks keystore) + +isSupportedSigningAlgo(algo SignatureAlgo) + +bool { + return ks.options.SupportedAlgos.Contains(algo) +} + +func (ks keystore) + +Key(uid string) (*Record, error) { + k, err := ks.migrate(uid) + if err != nil { + return nil, err +} + +return k, nil +} + +/ SupportedAlgorithms returns the keystore Options' supported signing algorithm. +/ for the keyring and Ledger. 
+func (ks keystore) + +SupportedAlgorithms() (SigningAlgoList, SigningAlgoList) { + return ks.options.SupportedAlgos, ks.options.SupportedAlgosLedger +} + +/ SignWithLedger signs a binary message with the ledger device referenced by an Info object +/ and returns the signed bytes and the public key. It returns an error if the device could +/ not be queried or it returned an error. +func SignWithLedger(k *Record, msg []byte, signMode signing.SignMode) (sig []byte, pub types.PubKey, err error) { + ledgerInfo := k.GetLedger() + if ledgerInfo == nil { + return nil, nil, ErrNotLedgerObj +} + path := ledgerInfo.GetPath() + +priv, err := ledger.NewPrivKeySecp256k1Unsafe(*path) + if err != nil { + return nil, nil, err +} + ledgerPubKey := priv.PubKey() + +pubKey, err := k.GetPubKey() + if err != nil { + return nil, nil, err +} + if !pubKey.Equals(ledgerPubKey) { + return nil, nil, fmt.Errorf("the public key that the user attempted to sign with does not match the public key on the ledger device. %v does not match %v", pubKey.String(), ledgerPubKey.String()) +} + switch signMode { + case signing.SignMode_SIGN_MODE_TEXTUAL: + sig, err = priv.Sign(msg) + if err != nil { + return nil, nil, err +} + case signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON: + sig, err = priv.SignLedgerAminoJSON(msg) + if err != nil { + return nil, nil, err +} + +default: + return nil, nil, errorsmod.Wrap(ErrInvalidSignMode, fmt.Sprintf("%v", signMode)) +} + if !priv.PubKey().VerifySignature(msg, sig) { + return nil, nil, ErrLedgerInvalidSignature +} + +return sig, priv.PubKey(), nil +} + +func newOSBackendKeyringConfig(appName, dir string, buf io.Reader) + +keyring.Config { + return keyring.Config{ + ServiceName: appName, + FileDir: dir, + KeychainTrustApplication: true, + FilePasswordFunc: newRealPrompt(dir, buf), +} +} + +func newTestBackendKeyringConfig(appName, dir string) + +keyring.Config { + return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.FileBackend +}, + ServiceName: 
appName, + FileDir: filepath.Join(dir, keyringTestDirName), + FilePasswordFunc: func(_ string) (string, error) { + return "test", nil +}, +} +} + +func newKWalletBackendKeyringConfig(appName, _ string, _ io.Reader) + +keyring.Config { + return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.KWalletBackend +}, + ServiceName: "kdewallet", + KWalletAppID: appName, + KWalletFolder: "", +} +} + +func newPassBackendKeyringConfig(appName, _ string, _ io.Reader) + +keyring.Config { + prefix := fmt.Sprintf(passKeyringPrefix, appName) + +return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.PassBackend +}, + ServiceName: appName, + PassPrefix: prefix, +} +} + +func newFileBackendKeyringConfig(name, dir string, buf io.Reader) + +keyring.Config { + fileDir := filepath.Join(dir, keyringFileDirName) + +return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.FileBackend +}, + ServiceName: name, + FileDir: fileDir, + FilePasswordFunc: newRealPrompt(fileDir, buf), +} +} + +func newRealPrompt(dir string, buf io.Reader) + +func(string) (string, error) { + return func(prompt string) (string, error) { + keyhashStored := false + keyhashFilePath := filepath.Join(dir, "keyhash") + +var keyhash []byte + + _, err := os.Stat(keyhashFilePath) + switch { + case err == nil: + keyhash, err = os.ReadFile(keyhashFilePath) + if err != nil { + return "", errorsmod.Wrap(err, fmt.Sprintf("failed to read %s", keyhashFilePath)) +} + +keyhashStored = true + case os.IsNotExist(err): + keyhashStored = false + + default: + return "", errorsmod.Wrap(err, fmt.Sprintf("failed to open %s", keyhashFilePath)) +} + failureCounter := 0 + for { + failureCounter++ + if failureCounter > maxPassphraseEntryAttempts { + return "", ErrMaxPassPhraseAttempts +} + buf := bufio.NewReader(buf) + +pass, err := input.GetPassword(fmt.Sprintf("Enter keyring passphrase (attempt %d/%d):", failureCounter, maxPassphraseEntryAttempts), buf) + if err != nil { + / NOTE: LGTM.io 
reports a false positive alert that states we are printing the password, + / but we only log the error. + / + / lgtm [go/clear-text-logging] + fmt.Fprintln(os.Stderr, err) + +continue +} + if keyhashStored { + if err := bcrypt.CompareHashAndPassword(keyhash, []byte(pass)); err != nil { + fmt.Fprintln(os.Stderr, "incorrect passphrase") + +continue +} + +return pass, nil +} + +reEnteredPass, err := input.GetPassword("Re-enter keyring passphrase:", buf) + if err != nil { + / NOTE: LGTM.io reports a false positive alert that states we are printing the password, + / but we only log the error. + / + / lgtm [go/clear-text-logging] + fmt.Fprintln(os.Stderr, err) + +continue +} + if pass != reEnteredPass { + fmt.Fprintln(os.Stderr, "passphrase do not match") + +continue +} + +passwordHash, err := bcrypt.GenerateFromPassword([]byte(pass), 2) + if err != nil { + fmt.Fprintln(os.Stderr, err) + +continue +} + if err := os.WriteFile(keyhashFilePath, passwordHash, 0o600); err != nil { + return "", err +} + +return pass, nil +} + +} +} + +func (ks keystore) + +writeLocalKey(name string, privKey types.PrivKey) (*Record, error) { + k, err := NewLocalRecord(name, privKey, privKey.PubKey()) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +/ writeRecord persists a keyring item in keystore if it does not exist there. +/ For each key record, we actually write 2 items: +/ - one with key `.info`, with Data = the serialized protobuf key +/ - another with key `.address`, with Data = the uid (i.e. the key name) +/ This is to be able to query keys both by name and by address. 
+func (ks keystore) + +writeRecord(k *Record) + +error { + addr, err := k.GetAddress() + if err != nil { + return err +} + key := infoKey(k.Name) + +exists, err := ks.existsInDb(addr, key) + if err != nil { + return err +} + if exists { + return errorsmod.Wrap(ErrKeyAlreadyExists, key) +} + +serializedRecord, err := ks.cdc.Marshal(k) + if err != nil { + return errors.CombineErrors(ErrUnableToSerialize, err) +} + item := keyring.Item{ + Key: key, + Data: serializedRecord, +} + if err := ks.SetItem(item); err != nil { + return err +} + +item = keyring.Item{ + Key: addrHexKeyAsString(addr), + Data: []byte(key), +} + if err := ks.SetItem(item); err != nil { + return err +} + +return nil +} + +/ existsInDb returns (true, nil) + if either addr or name exists in the keystore DB. +/ On the other hand, it returns (false, error) + if the Get method returns an error different from keyring.ErrKeyNotFound. +/ In case of an inconsistent keyring, it recovers it automatically. +func (ks keystore) + +existsInDb(addr sdk.Address, name string) (bool, error) { + _, errAddr := ks.db.Get(addrHexKeyAsString(addr)) + if errAddr != nil && !errors.Is(errAddr, keyring.ErrKeyNotFound) { + return false, errAddr +} + + _, errInfo := ks.db.Get(infoKey(name)) + if errInfo == nil { + return true, nil / uid lookup succeeds - info exists +} + +else if !errors.Is(errInfo, keyring.ErrKeyNotFound) { + return false, errInfo / received unexpected error - returns +} + + / looking for an issue: a record with meta (getByAddress) + +exists, but the record with the public key itself does not + if errAddr == nil && errors.Is(errInfo, keyring.ErrKeyNotFound) { + fmt.Fprintf(os.Stderr, "address \"%s\" exists but pubkey itself does not\n", hex.EncodeToString(addr.Bytes())) + +fmt.Fprintln(os.Stderr, "recreating pubkey record") + err := ks.db.Remove(addrHexKeyAsString(addr)) + if err != nil { + return true, err +} + +return false, nil +} + + / both lookups failed, info does not exist + return false, nil +} + +func (ks keystore) +
+writeOfflineKey(name string, pk types.PubKey) (*Record, error) { + k, err := NewOfflineRecord(name, pk) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +/ writeMultisigKey investigate where this function is called, maybe remove it +func (ks keystore) + +writeMultisigKey(name string, pk types.PubKey) (*Record, error) { + k, err := NewMultiRecord(name, pk) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +func (ks keystore) + +MigrateAll() ([]*Record, error) { + keys, err := ks.db.Keys() + if err != nil { + return nil, err +} + if len(keys) == 0 { + return nil, nil +} + +sort.Strings(keys) + +var recs []*Record + for _, key := range keys { + / Only the keyring items with the `.info` suffix contain the key info. + if !strings.HasSuffix(key, infoSuffix) { + continue +} + +rec, err := ks.migrate(key) + if err != nil { + fmt.Fprintf(os.Stderr, "migrate err for key %s: %q\n", key, err) + +continue +} + +recs = append(recs, rec) +} + +return recs, nil +} + +/ migrate converts keyring.Item from amino to proto serialization format. +/ the `key` argument can be a key uid (e.g. "alice") + +or one with the '.info' +/ suffix (e.g. "alice.info"). +/ +/ It operates as follows: +/ 1. retrieve any key +/ 2. try to decode it using protobuf +/ 3. if ok, then return the key, do nothing else +/ 4. if it fails, then try to decode it using amino +/ 5. convert from the amino struct to the protobuf struct +/ 6. write the proto-encoded key back to the keyring +func (ks keystore) + +migrate(key string) (*Record, error) { + if !strings.HasSuffix(key, infoSuffix) { + key = infoKey(key) +} + + / 1. get the key. + item, err := ks.db.Get(key) + if err != nil { + if key == fmt.Sprintf(".%s", infoSuffix) { + return nil, errors.New("no key name or address provided; have you forgotten the --from flag?") +} + +return nil, wrapKeyNotFound(err, key) +} + if len(item.Data) == 0 { + return nil, errorsmod.Wrap(sdkerrors.ErrKeyNotFound, key) +} + + / 2.
+	// 2. Try to deserialize using proto
+	k, err := ks.protoUnmarshalRecord(item.Data)
+	// 3. If ok then return the key
+	if err == nil {
+		return k, nil
+	}
+
+	// 4. Try to decode with amino
+	legacyInfo, err := unMarshalLegacyInfo(item.Data)
+	if err != nil {
+		return nil, errorsmod.Wrap(err, "unable to unmarshal item.Data")
+	}
+
+	// 5. Convert and serialize info using proto
+	k, err = ks.convertFromLegacyInfo(legacyInfo)
+	if err != nil {
+		return nil, errorsmod.Wrap(err, "convertFromLegacyInfo")
+	}
+
+	serializedRecord, err := ks.cdc.Marshal(k)
+	if err != nil {
+		return nil, errors.CombineErrors(ErrUnableToSerialize, err)
+	}
+
+	item = keyring.Item{
+		Key:  key,
+		Data: serializedRecord,
+	}
+
+	// 6. Overwrite the keyring entry with the new proto-encoded key.
+	if err := ks.SetItem(item); err != nil {
+		return nil, errorsmod.Wrap(err, "unable to set keyring.Item")
+	}
+
+	fmt.Fprintf(os.Stderr, "Successfully migrated key %s.\n", key)
+
+	return k, nil
+}
+
+func (ks keystore) protoUnmarshalRecord(bz []byte) (*Record, error) {
+	k := new(Record)
+	if err := ks.cdc.Unmarshal(bz, k); err != nil {
+		return nil, err
+	}
+
+	return k, nil
+}
+
+func (ks keystore) SetItem(item keyring.Item) error {
+	return ks.db.Set(item)
+}
+
+func (ks keystore) convertFromLegacyInfo(info LegacyInfo) (*Record, error) {
+	if info == nil {
+		return nil, errorsmod.Wrap(ErrLegacyToRecord, "info is nil")
+	}
+
+	name := info.GetName()
+	pk := info.GetPubKey()
+
+	switch info.GetType() {
+	case TypeLocal:
+		priv, err := privKeyFromLegacyInfo(info)
+		if err != nil {
+			return nil, err
+		}
+
+		return NewLocalRecord(name, priv, pk)
+	case TypeOffline:
+		return NewOfflineRecord(name, pk)
+	case TypeMulti:
+		return NewMultiRecord(name, pk)
+	case TypeLedger:
+		path, err := info.GetPath()
+		if err != nil {
+			return nil, err
+		}
+
+		return NewLedgerRecord(name, pk, path)
+	default:
+		return nil, ErrUnknownLegacyType
+	}
+}
+
+func addrHexKeyAsString(address sdk.Address) string {
+	return fmt.Sprintf("%s.%s",
+		hex.EncodeToString(address.Bytes()), addressSuffix)
+}
+```
+
+The default implementation of `Keyring` comes from the third-party [`99designs/keyring`](https://github.com/99designs/keyring) library.
+
+A few notes on the `Keyring` methods:
+
+- `Sign(uid string, msg []byte) ([]byte, types.PubKey, error)` strictly deals with the signature of the `msg` bytes. You must prepare and encode the transaction into a canonical `[]byte` form. Because protobuf is not deterministic, it has been decided in [ADR-020](/docs/common/pages/adr-comprehensive#adr-020-protocol-buffer-transaction-encoding) that the canonical `payload` to sign is the `SignDoc` struct, deterministically encoded using [ADR-027](/docs/common/pages/adr-comprehensive#adr-027-deterministic-protobuf-serialization). Note that signature verification is not implemented in the Cosmos SDK by default; it is deferred to the [`anteHandler`](/docs/sdk/next/documentation/application-framework/baseapp#antehandler).
+
+```protobuf
+// SignDoc is the type used for generating sign bytes for SIGN_MODE_DIRECT.
+message SignDoc {
+  // body_bytes is protobuf serialization of a TxBody that matches the
+  // representation in TxRaw.
+  bytes body_bytes = 1;
+
+  // auth_info_bytes is a protobuf serialization of an AuthInfo that matches the
+  // representation in TxRaw.
+  bytes auth_info_bytes = 2;
+
+  // chain_id is the unique identifier of the chain this transaction targets.
+  // It prevents signed transactions from being used on another chain by an
+  // attacker
+  string chain_id = 3;
+
+  // account_number is the account number of the account in state
+  uint64 account_number = 4;
+}
+```
+
+- `NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error)` creates a new account based on the [`bip44 path`](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) and persists it on disk.
The `PrivKey` is **never stored unencrypted**; instead, it is [encrypted with a passphrase](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/armor.go) before being persisted. In the context of this method, the key type and sequence number refer to the segment of the BIP44 derivation path (for example, `0`, `1`, `2`, ...) that is used to derive a private and a public key from the mnemonic. Using the same mnemonic and derivation path, the same `PrivKey`, `PubKey` and `Address` are generated. The following keys are supported by the keyring:
+
+- `secp256k1`
+- `ed25519`
+
+- `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function.
+
+### Create New Key Type
+
+To create a new key type for use in the keyring, the `keyring.SignatureAlgo` interface must be implemented.
+
+```go expandable
+package keyring
+
+import (
+	"strings"
+
+	"github.com/cockroachdb/errors"
+
+	"github.com/cosmos/cosmos-sdk/crypto/hd"
+)
+
+// SignatureAlgo defines the interface for a keyring supported algorithm.
+type SignatureAlgo interface {
+	Name() hd.PubKeyType
+	Derive() hd.DeriveFn
+	Generate() hd.GenerateFn
+}
+
+// NewSigningAlgoFromString creates a supported SignatureAlgo.
+func NewSigningAlgoFromString(str string, algoList SigningAlgoList) (SignatureAlgo, error) {
+	for _, algo := range algoList {
+		if str == string(algo.Name()) {
+			return algo, nil
+		}
+	}
+
+	return nil, errors.Wrap(ErrUnsupportedSigningAlgo, str)
+}
+
+// SigningAlgoList is a slice of signature algorithms
+type SigningAlgoList []SignatureAlgo
+
+// Contains returns true if the SigningAlgoList contains the given SignatureAlgo.
+func (sal SigningAlgoList) Contains(algo SignatureAlgo) bool {
+	for _, cAlgo := range sal {
+		if cAlgo.Name() == algo.Name() {
+			return true
+		}
+	}
+
+	return false
+}
+
+// String returns a comma separated string of the signature algorithm names in the list.
+func (sal SigningAlgoList) String() string {
+	names := make([]string, len(sal))
+	for i := range sal {
+		names[i] = string(sal[i].Name())
+	}
+
+	return strings.Join(names, ",")
+}
+```
+
+The interface consists of three methods, where `Name()` returns the name of the algorithm as an `hd.PubKeyType`, while `Derive()` and `Generate()` must return the following functions respectively:
+
+```go expandable
+package hd
+
+import (
+	"github.com/cosmos/go-bip39"
+
+	"github.com/cosmos/cosmos-sdk/crypto/keys/secp256k1"
+	"github.com/cosmos/cosmos-sdk/crypto/types"
+)
+
+// PubKeyType defines an algorithm to derive key-pairs which can be used for cryptographic signing.
+type PubKeyType string
+
+const (
+	// MultiType implies that a pubkey is a multisignature
+	MultiType = PubKeyType("multi")
+	// Secp256k1Type uses the Bitcoin secp256k1 ECDSA parameters.
+	Secp256k1Type = PubKeyType("secp256k1")
+	// Ed25519Type represents the Ed25519Type signature system.
+	// It is currently not supported for end-user keys (wallets/ledgers).
+	Ed25519Type = PubKeyType("ed25519")
+	// Sr25519Type represents the Sr25519Type signature system.
+	Sr25519Type = PubKeyType("sr25519")
+)
+
+// Secp256k1 uses the Bitcoin secp256k1 ECDSA parameters.
+var Secp256k1 = secp256k1Algo{}
+
+type (
+	DeriveFn   func(mnemonic, bip39Passphrase, hdPath string) ([]byte, error)
+	GenerateFn func(bz []byte) types.PrivKey
+)
+
+type WalletGenerator interface {
+	Derive(mnemonic, bip39Passphrase, hdPath string) ([]byte, error)
+	Generate(bz []byte) types.PrivKey
+}
+
+type secp256k1Algo struct{}
+
+func (s secp256k1Algo) Name() PubKeyType {
+	return Secp256k1Type
+}
+
+// Derive derives and returns the secp256k1 private key for the given seed and HD path.
+func (s secp256k1Algo) Derive() DeriveFn {
+	return func(mnemonic, bip39Passphrase, hdPath string) ([]byte, error) {
+		seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase)
+		if err != nil {
+			return nil, err
+		}
+
+		masterPriv, ch := ComputeMastersFromSeed(seed)
+		if len(hdPath) == 0 {
+			return masterPriv[:], nil
+		}
+
+		derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath)
+
+		return derivedKey, err
+	}
+}
+
+// Generate generates a secp256k1 private key from the given bytes.
+func (s secp256k1Algo) Generate() GenerateFn {
+	return func(bz []byte) types.PrivKey {
+		bzArr := make([]byte, secp256k1.PrivKeySize)
+		copy(bzArr, bz)
+
+		return &secp256k1.PrivKey{Key: bzArr}
+	}
+}
+```
+
+Once the `keyring.SignatureAlgo` has been implemented, it must be added to the [list of supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L209) of the keyring.
+
+For simplicity, the implementation of a new key type should be done inside the `crypto/hd` package.
+There is an example of a working `secp256k1` implementation in [algo.go](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/hd/algo.go#L38).
+
+#### Implementing secp256r1 algo
+
+Here is an example of how secp256r1 could be implemented.
+
+First, a new function to create a private key from a secret number is needed in the secp256r1 package.
This function could look like this:
+
+```go expandable
+// cosmos-sdk/crypto/keys/secp256r1/privkey.go
+
+// NewPrivKeyFromSecret creates a private key derived from the secret number
+// represented in big-endian. The `secret` must be a valid ECDSA field element.
+func NewPrivKeyFromSecret(secret []byte) (*PrivKey, error) {
+	var d = new(big.Int).SetBytes(secret)
+	if d.Cmp(secp256r1.Params().N) >= 1 {
+		return nil, errorsmod.Wrap(errors.ErrInvalidRequest, "secret not in the curve base field")
+	}
+
+	sk := new(ecdsa.PrivKey)
+
+	return &PrivKey{&ecdsaSK{*sk}}, nil
+}
+```
+
+After that, `secp256r1Algo` can be implemented.
+
+```go expandable
+// cosmos-sdk/crypto/hd/secp256r1Algo.go
+
+package hd
+
+import (
+	"github.com/cosmos/go-bip39"
+
+	"github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1"
+	"github.com/cosmos/cosmos-sdk/crypto/types"
+)
+
+// Secp256r1Type uses the secp256r1 ECDSA parameters.
+const Secp256r1Type = PubKeyType("secp256r1")
+
+var Secp256r1 = secp256r1Algo{}
+
+type secp256r1Algo struct{}
+
+func (s secp256r1Algo) Name() PubKeyType {
+	return Secp256r1Type
+}
+
+// Derive derives and returns the secp256r1 private key for the given seed and HD path.
+func (s secp256r1Algo) Derive() DeriveFn {
+	return func(mnemonic, bip39Passphrase, hdPath string) ([]byte, error) {
+		seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase)
+		if err != nil {
+			return nil, err
+		}
+
+		masterPriv, ch := ComputeMastersFromSeed(seed)
+		if len(hdPath) == 0 {
+			return masterPriv[:], nil
+		}
+
+		derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath)
+
+		return derivedKey, err
+	}
+}
+
+// Generate generates a secp256r1 private key from the given bytes.
+func (s secp256r1Algo) Generate() GenerateFn {
+	return func(bz []byte) types.PrivKey {
+		key, err := secp256r1.NewPrivKeyFromSecret(bz)
+		if err != nil {
+			panic(err)
+		}
+
+		return key
+	}
+}
+```
+
+Finally, the algo must be added to the keyring's list of [supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L209).
+
+```go
+// cosmos-sdk/crypto/keyring/keyring.go
+
+func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) keystore {
+	// Default options for keybase, these can be overwritten using the
+	// Option function
+	options := Options{
+		SupportedAlgos: SigningAlgoList{
+			hd.Secp256k1, hd.Secp256r1, // added here
+		},
+		SupportedAlgosLedger: SigningAlgoList{
+			hd.Secp256k1,
+		},
+	}
+...
+```
+
+Hereafter, to create new keys using your algo, you must specify it with the `--algo` flag:
+
+`simd keys add myKey --algo secp256r1`
diff --git a/docs/sdk/next/documentation/protocol-development/bech32.mdx b/docs/sdk/next/documentation/protocol-development/bech32.mdx
new file mode 100644
index 00000000..07bcc5a0
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/bech32.mdx
@@ -0,0 +1,23 @@
+---
+title: Bech32 on Cosmos
+---
+
+The Cosmos network prefers to use the Bech32 address format wherever users must handle binary data. Bech32 encoding provides robust integrity checks on data, and the human readable part (HRP) provides contextual hints that can assist UI developers with providing informative error messages.
+
+In the Cosmos network, keys and addresses may refer to a number of different roles in the network, such as accounts and validators.
+
+## HRP table
+
+| HRP           | Definition                         |
+| ------------- | ---------------------------------- |
+| cosmos        | Cosmos Account Address             |
+| cosmosvalcons | Cosmos Validator Consensus Address |
+| cosmosvaloper | Cosmos Validator Operator Address  |
+
+## Encoding
+
+While all user-facing interfaces to Cosmos software should expose Bech32 interfaces, many internal interfaces encode binary values in hex or base64 encoded form.
+
+To convert between other binary representations of addresses and keys, it is important to first apply the Amino encoding process before Bech32 encoding.
+
+A complete implementation of the Amino serialization format is unnecessary in most cases. Simply prepending bytes from this [table](https://github.com/cometbft/cometbft/blob/main/spec/blockchain/encoding.md) to the byte string payload before Bech32 encoding will be sufficient for compatible representation.
diff --git a/docs/sdk/next/documentation/protocol-development/encoding.mdx b/docs/sdk/next/documentation/protocol-development/encoding.mdx
new file mode 100644
index 00000000..25dbad0b
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/encoding.mdx
@@ -0,0 +1,1976 @@
+---
+title: Encoding
+---
+
+## Synopsis
+
+While encoding in the Cosmos SDK used to be mainly handled by the `go-amino` codec, the Cosmos SDK is moving towards using `gogoprotobuf` for both state and client-side encoding.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK application](/docs/sdk/next/documentation/application-framework/app-anatomy)
+
+
+
+## Encoding
+
+The Cosmos SDK utilizes two binary wire encoding protocols, [Amino](https://github.com/tendermint/go-amino/), which is an object encoding specification, and [Protocol Buffers](https://developers.google.com/protocol-buffers), a subset of Proto3 with an extension for interface support.
See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more information on Proto3, which Amino is largely compatible with (but not with Proto2).
+
+Due to Amino having significant performance drawbacks, being reflection-based, and not having any meaningful cross-language/client support, Protocol Buffers, specifically [gogoprotobuf](https://github.com/cosmos/gogoproto/), is being used in place of Amino. Note that this replacement of Amino with Protocol Buffers is still an ongoing process.
+
+Binary wire encoding of types in the Cosmos SDK can be broken down into two main categories: client encoding and store encoding. Client encoding mainly revolves around transaction processing and signing, whereas store encoding revolves around types used in state-machine transitions and what is ultimately stored in the Merkle tree.
+
+For store encoding, protobuf definitions can exist for any type and will typically have an Amino-based "intermediary" type. Specifically, the protobuf-based type definition is used for serialization and persistence, whereas the Amino-based type is used for business logic in the state-machine, where they may convert back and forth. Note that the Amino-based types may slowly be phased out in the future, so developers should take note to use the protobuf message definitions where possible.
+
+In the `codec` package, there exist two core interfaces, `BinaryCodec` and `JSONCodec`, where the former encapsulates the current Amino interface, except it operates on types implementing the latter instead of generic `interface{}` types.
+
+The `ProtoCodec` handles both binary and JSON serialization via Protobuf. This means that modules may use Protobuf encoding, but the types must implement `ProtoMarshaler`.
If modules wish to avoid implementing this interface for their types, the implementation can instead be autogenerated via [buf](https://buf.build/).
+
+If modules use [Collections](/docs/sdk/next/documentation/state-storage/collections), encoding and decoding are handled for them; marshaling and unmarshaling should not be done manually, except for specific cases identified by the developer.
+
+### Gogoproto
+
+Modules are encouraged to utilize Protobuf encoding for their respective types. In the Cosmos SDK, we use the [Gogoproto](https://github.com/cosmos/gogoproto) specific implementation of the Protobuf spec that offers speed and DX improvements compared to the official [Google protobuf implementation](https://github.com/protocolbuffers/protobuf).
+
+### Guidelines for protobuf message definitions
+
+In addition to [following official Protocol Buffer guidelines](https://developers.google.com/protocol-buffers/docs/proto3#simple), we recommend using these annotations in .proto files when dealing with interfaces:
+
+- use `cosmos_proto.accepts_interface` to annotate `Any` fields that accept interfaces
+  - pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
+  - example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (and not just `Content`)
+- annotate interface implementations with `cosmos_proto.implements_interface`
+  - pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
+  - example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (and not just `Authorization`)
+
+Code generators can then match the `accepts_interface` and `implements_interface` annotations to know whether some Protobuf messages are allowed to be packed in a given `Any` field or not.
+
+### Transaction Encoding
+
+Another important use of Protobuf is the encoding and decoding of [transactions](/docs/sdk/next/documentation/protocol-development/transactions).
Transactions are defined by the application or +the Cosmos SDK but are then passed to the underlying consensus engine to be relayed to +other peers. Since the underlying consensus engine is agnostic to the application, +the consensus engine accepts only transactions in the form of raw bytes. + +- The `TxEncoder` object performs the encoding. +- The `TxDecoder` object performs the decoding. + +```go expandable +package types + +import ( + + "encoding/json" + fmt "fmt" + strings "strings" + "time" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) + +type ( + / Msg defines the interface a transaction message needed to fulfill. + Msg = proto.Message + + / LegacyMsg defines the interface a transaction message needed to fulfill up through + / v0.47. + LegacyMsg interface { + Msg + + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / HasMsgs defines an interface a transaction must fulfill. + HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. 
+ GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() []byte +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutTimeStamp extends the Tx interface by allowing a transaction to + / set a timeout timestamp. + TxWithTimeoutTimeStamp interface { + Tx + + GetTimeoutTimeStamp() + +time.Time +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / TxWithUnordered extends the Tx interface by allowing a transaction to set + / the unordered field, which implicitly relies on TxWithTimeoutTimeStamp. + TxWithUnordered interface { + TxWithTimeoutTimeStamp + + GetUnordered() + +bool +} + + / HasValidateBasic defines a type that has a ValidateBasic method. + / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. 
+var MsgTypeURL = codectypes.MsgTypeURL + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} +``` + +A standard implementation of both these objects can be found in the [`auth/tx` module](/docs/sdk/next/documentation/module-system/tx): + +```go expandable +package tx + +import ( + + "fmt" + "google.golang.org/protobuf/encoding/protowire" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/unknownproto" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" +) + +/ DefaultTxDecoder returns a default protobuf TxDecoder using the provided Marshaler. +func DefaultTxDecoder(cdc codec.Codec) + +sdk.TxDecoder { + return func(txBytes []byte) (sdk.Tx, error) { + / Make sure txBytes follow ADR-027. 
+ err := rejectNonADR027TxRaw(txBytes) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +var raw tx.TxRaw + + / reject all unknown proto fields in the root TxRaw + err = unknownproto.RejectUnknownFieldsStrict(txBytes, &raw, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(txBytes, &raw) + if err != nil { + return nil, err +} + +var body tx.TxBody + + / allow non-critical unknown fields in TxBody + txBodyHasUnknownNonCriticals, err := unknownproto.RejectUnknownFields(raw.BodyBytes, &body, true, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(raw.BodyBytes, &body) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +var authInfo tx.AuthInfo + + / reject all unknown proto fields in AuthInfo + err = unknownproto.RejectUnknownFieldsStrict(raw.AuthInfoBytes, &authInfo, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(raw.AuthInfoBytes, &authInfo) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + theTx := &tx.Tx{ + Body: &body, + AuthInfo: &authInfo, + Signatures: raw.Signatures, +} + +return &wrapper{ + tx: theTx, + bodyBz: raw.BodyBytes, + authInfoBz: raw.AuthInfoBytes, + txBodyHasUnknownNonCriticals: txBodyHasUnknownNonCriticals, + cdc: cdc, +}, nil +} +} + +/ DefaultJSONTxDecoder returns a default protobuf JSON TxDecoder using the provided Marshaler. 
+func DefaultJSONTxDecoder(cdc codec.Codec) + +sdk.TxDecoder { + return func(txBytes []byte) (sdk.Tx, error) { + var theTx tx.Tx + err := cdc.UnmarshalJSON(txBytes, &theTx) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +return &wrapper{ + tx: &theTx, + cdc: cdc, +}, nil +} +} + +/ rejectNonADR027TxRaw rejects txBytes that do not follow ADR-027. This is NOT +/ a generic ADR-027 checker, it only applies decoding TxRaw. Specifically, it +/ only checks that: +/ - field numbers are in ascending order (1, 2, and potentially multiple 3s), +/ - and varints are as short as possible. +/ All other ADR-027 edge cases (e.g. default values) + +are not applicable with +/ TxRaw. +func rejectNonADR027TxRaw(txBytes []byte) + +error { + / Make sure all fields are ordered in ascending order with this variable. + prevTagNum := protowire.Number(0) + for len(txBytes) > 0 { + tagNum, wireType, m := protowire.ConsumeTag(txBytes) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + / TxRaw only has bytes fields. + if wireType != protowire.BytesType { + return fmt.Errorf("expected %d wire type, got %d", protowire.BytesType, wireType) +} + / Make sure fields are ordered in ascending order. + if tagNum < prevTagNum { + return fmt.Errorf("txRaw must follow ADR-027, got tagNum %d after tagNum %d", tagNum, prevTagNum) +} + +prevTagNum = tagNum + + / All 3 fields of TxRaw have wireType == 2, so their next component + / is a varint, so we can safely call ConsumeVarint here. + / Byte structure: + / Inner fields are verified in `DefaultTxDecoder` + lengthPrefix, m := protowire.ConsumeVarint(txBytes[m:]) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + / We make sure that this varint is as short as possible. 
+ n := varintMinLength(lengthPrefix) + if n != m { + return fmt.Errorf("length prefix varint for tagNum %d is not as short as possible, read %d, only need %d", tagNum, m, n) +} + + / Skip over the bytes that store fieldNumber and wireType bytes. + _, _, m = protowire.ConsumeField(txBytes) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + +txBytes = txBytes[m:] +} + +return nil +} + +/ varintMinLength returns the minimum number of bytes necessary to encode an +/ uint using varint encoding. +func varintMinLength(n uint64) + +int { + switch { + / Note: 1< valz[j].ConsensusPower(r) +} + +func (valz ValidatorsByVotingPower) + +Swap(i, j int) { + valz[i], valz[j] = valz[j], valz[i] +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (v Validators) + +UnpackInterfaces(c codectypes.AnyUnpacker) + +error { + for i := range v.Validators { + if err := v.Validators[i].UnpackInterfaces(c); err != nil { + return err +} + +} + +return nil +} + +/ return the redelegation +func MustMarshalValidator(cdc codec.BinaryCodec, validator *Validator) []byte { + return cdc.MustMarshal(validator) +} + +/ unmarshal a redelegation from a store value +func MustUnmarshalValidator(cdc codec.BinaryCodec, value []byte) + +Validator { + validator, err := UnmarshalValidator(cdc, value) + if err != nil { + panic(err) +} + +return validator +} + +/ unmarshal a redelegation from a store value +func UnmarshalValidator(cdc codec.BinaryCodec, value []byte) (v Validator, err error) { + err = cdc.Unmarshal(value, &v) + +return v, err +} + +/ IsBonded checks if the validator status equals Bonded +func (v Validator) + +IsBonded() + +bool { + return v.GetStatus() == Bonded +} + +/ IsUnbonded checks if the validator status equals Unbonded +func (v Validator) + +IsUnbonded() + +bool { + return v.GetStatus() == Unbonded +} + +/ IsUnbonding checks if the validator status equals Unbonding +func (v Validator) + +IsUnbonding() + +bool { + return 
v.GetStatus() == Unbonding +} + +/ constant used in flags to indicate that description field should not be updated +const DoNotModifyDesc = "[do-not-modify]" + +func NewDescription(moniker, identity, website, securityContact, details string) + +Description { + return Description{ + Moniker: moniker, + Identity: identity, + Website: website, + SecurityContact: securityContact, + Details: details, +} +} + +/ UpdateDescription updates the fields of a given description. An error is +/ returned if the resulting description contains an invalid length. +func (d Description) + +UpdateDescription(d2 Description) (Description, error) { + if d2.Moniker == DoNotModifyDesc { + d2.Moniker = d.Moniker +} + if d2.Identity == DoNotModifyDesc { + d2.Identity = d.Identity +} + if d2.Website == DoNotModifyDesc { + d2.Website = d.Website +} + if d2.SecurityContact == DoNotModifyDesc { + d2.SecurityContact = d.SecurityContact +} + if d2.Details == DoNotModifyDesc { + d2.Details = d.Details +} + +return NewDescription( + d2.Moniker, + d2.Identity, + d2.Website, + d2.SecurityContact, + d2.Details, + ).EnsureLength() +} + +/ EnsureLength ensures the length of a validator's description. 
+func (d Description) EnsureLength() (Description, error) {
+	if len(d.Moniker) > MaxMonikerLength {
+		return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid moniker length; got: %d, max: %d", len(d.Moniker), MaxMonikerLength)
+	}
+	if len(d.Identity) > MaxIdentityLength {
+		return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid identity length; got: %d, max: %d", len(d.Identity), MaxIdentityLength)
+	}
+	if len(d.Website) > MaxWebsiteLength {
+		return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid website length; got: %d, max: %d", len(d.Website), MaxWebsiteLength)
+	}
+	if len(d.SecurityContact) > MaxSecurityContactLength {
+		return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid security contact length; got: %d, max: %d", len(d.SecurityContact), MaxSecurityContactLength)
+	}
+	if len(d.Details) > MaxDetailsLength {
+		return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid details length; got: %d, max: %d", len(d.Details), MaxDetailsLength)
+	}
+
+	return d, nil
+}
+
+// ABCIValidatorUpdate returns an abci.ValidatorUpdate from a staking validator type
+// with the full validator power
+func (v Validator) ABCIValidatorUpdate(r math.Int) abci.ValidatorUpdate {
+	tmProtoPk, err := v.TmConsPublicKey()
+	if err != nil {
+		panic(err)
+	}
+
+	return abci.ValidatorUpdate{
+		PubKey: tmProtoPk,
+		Power:  v.ConsensusPower(r),
+	}
+}
+
+// ABCIValidatorUpdateZero returns an abci.ValidatorUpdate from a staking validator type
+// with zero power, used for validator updates.
+func (v Validator) ABCIValidatorUpdateZero() abci.ValidatorUpdate {
+	tmProtoPk, err := v.TmConsPublicKey()
+	if err != nil {
+		panic(err)
+	}
+
+	return abci.ValidatorUpdate{
+		PubKey: tmProtoPk,
+		Power:  0,
+	}
+}
+
+// SetInitialCommission attempts to set a validator's initial commission. An
+// error is returned if the commission is invalid.
+func (v Validator) SetInitialCommission(commission Commission) (Validator, error) {
+	if err := commission.Validate(); err != nil {
+		return v, err
+	}
+
+	v.Commission = commission
+	return v, nil
+}
+
+// In some situations, the exchange rate becomes invalid, e.g. if the
+// validator loses all tokens due to slashing. In this case,
+// make all future delegations invalid.
+func (v Validator) InvalidExRate() bool {
+	return v.Tokens.IsZero() && v.DelegatorShares.IsPositive()
+}
+
+// calculate the token worth of provided shares
+func (v Validator) TokensFromShares(shares math.LegacyDec) math.LegacyDec {
+	return (shares.MulInt(v.Tokens)).Quo(v.DelegatorShares)
+}
+
+// calculate the token worth of provided shares, truncated
+func (v Validator) TokensFromSharesTruncated(shares math.LegacyDec) math.LegacyDec {
+	return (shares.MulInt(v.Tokens)).QuoTruncate(v.DelegatorShares)
+}
+
+// TokensFromSharesRoundUp returns the token worth of provided shares, rounded
+// up.
+func (v Validator) TokensFromSharesRoundUp(shares math.LegacyDec) math.LegacyDec {
+	return (shares.MulInt(v.Tokens)).QuoRoundUp(v.DelegatorShares)
+}
+
+// SharesFromTokens returns the shares of a delegation given a bond amount. It
+// returns an error if the validator has no tokens.
+func (v Validator) SharesFromTokens(amt math.Int) (math.LegacyDec, error) {
+	if v.Tokens.IsZero() {
+		return math.LegacyZeroDec(), ErrInsufficientShares
+	}
+
+	return v.GetDelegatorShares().MulInt(amt).QuoInt(v.GetTokens()), nil
+}
+
+// SharesFromTokensTruncated returns the truncated shares of a delegation given
+// a bond amount. It returns an error if the validator has no tokens.
+func (v Validator) SharesFromTokensTruncated(amt math.Int) (math.LegacyDec, error) {
+	if v.Tokens.IsZero() {
+		return math.LegacyZeroDec(), ErrInsufficientShares
+	}
+
+	return v.GetDelegatorShares().MulInt(amt).QuoTruncate(math.LegacyNewDecFromInt(v.GetTokens())), nil
+}
+
+// get the bonded tokens which the validator holds
+func (v Validator) BondedTokens() math.Int {
+	if v.IsBonded() {
+		return v.Tokens
+	}
+
+	return math.ZeroInt()
+}
+
+// ConsensusPower gets the consensus-engine power. A reduction of 10^6 from
+// validator tokens is applied
+func (v Validator) ConsensusPower(r math.Int) int64 {
+	if v.IsBonded() {
+		return v.PotentialConsensusPower(r)
+	}
+
+	return 0
+}
+
+// PotentialConsensusPower returns the potential consensus-engine power.
+func (v Validator) PotentialConsensusPower(r math.Int) int64 {
+	return sdk.TokensToConsensusPower(v.Tokens, r)
+}
+
+// UpdateStatus updates the location of the shares within a validator
+// to reflect the new status
+func (v Validator) UpdateStatus(newStatus BondStatus) Validator {
+	v.Status = newStatus
+	return v
+}
+
+// AddTokensFromDel adds tokens to a validator
+func (v Validator) AddTokensFromDel(amount math.Int) (Validator, math.LegacyDec) {
+	// calculate the shares to issue
+	var issuedShares math.LegacyDec
+	if v.DelegatorShares.IsZero() {
+		// the first delegation to a validator sets the exchange rate to one
+		issuedShares = math.LegacyNewDecFromInt(amount)
+	} else {
+		shares, err := v.SharesFromTokens(amount)
+		if err != nil {
+			panic(err)
+		}
+
+		issuedShares = shares
+	}
+
+	v.Tokens = v.Tokens.Add(amount)
+	v.DelegatorShares = v.DelegatorShares.Add(issuedShares)
+
+	return v, issuedShares
+}
+
+// RemoveTokens removes tokens from a validator
+func (v Validator) RemoveTokens(tokens math.Int) Validator {
+	if tokens.IsNegative() {
+		panic(fmt.Sprintf("should not happen: trying to remove negative tokens %v", tokens))
+	}
+	if v.Tokens.LT(tokens) {
+		panic(fmt.Sprintf("should not happen: only have %v tokens, trying to remove %v", v.Tokens, tokens))
+	}
+
+	v.Tokens = v.Tokens.Sub(tokens)
+
+	return v
+}
+
+// RemoveDelShares removes delegator shares from a validator.
+// NOTE: because token fractions are left in the validator,
+//
+//	the exchange rate of future shares of this validator can increase.
+func (v Validator) RemoveDelShares(delShares math.LegacyDec) (Validator, math.Int) {
+	remainingShares := v.DelegatorShares.Sub(delShares)
+
+	var issuedTokens math.Int
+	if remainingShares.IsZero() {
+		// last delegation share gets any trimmings
+		issuedTokens = v.Tokens
+		v.Tokens = math.ZeroInt()
+	} else {
+		// leave excess tokens in the validator,
+		// however fully use all the delegator shares
+		issuedTokens = v.TokensFromShares(delShares).TruncateInt()
+		v.Tokens = v.Tokens.Sub(issuedTokens)
+		if v.Tokens.IsNegative() {
+			panic("attempting to remove more tokens than available in validator")
+		}
+	}
+
+	v.DelegatorShares = remainingShares
+
+	return v, issuedTokens
+}
+
+// MinEqual defines a more minimal set of equality conditions when comparing two
+// validators.
+func (v *Validator) MinEqual(other *Validator) bool {
+	return v.OperatorAddress == other.OperatorAddress &&
+		v.Status == other.Status &&
+		v.Tokens.Equal(other.Tokens) &&
+		v.DelegatorShares.Equal(other.DelegatorShares) &&
+		v.Description.Equal(other.Description) &&
+		v.Commission.Equal(other.Commission) &&
+		v.Jailed == other.Jailed &&
+		v.MinSelfDelegation.Equal(other.MinSelfDelegation) &&
+		v.ConsensusPubkey.Equal(other.ConsensusPubkey)
+}
+
+// Equal checks if the receiver equals the parameter
+func (v *Validator) Equal(v2 *Validator) bool {
+	return v.MinEqual(v2) &&
+		v.UnbondingHeight == v2.UnbondingHeight &&
+		v.UnbondingTime.Equal(v2.UnbondingTime)
+}
+
+func (v Validator) IsJailed() bool        { return v.Jailed }
+func (v Validator) GetMoniker() string    { return v.Description.Moniker }
+func (v Validator) GetStatus() BondStatus { return v.Status }
+func (v Validator) GetOperator() string   { return v.OperatorAddress }
+
+// ConsPubKey returns the validator PubKey as a cryptotypes.PubKey.
+func (v Validator) ConsPubKey() (cryptotypes.PubKey, error) {
+	pk, ok := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey)
+	if !ok {
+		return nil, errors.Wrapf(sdkerrors.ErrInvalidType, "expecting cryptotypes.PubKey, got %T", pk)
+	}
+
+	return pk, nil
+}
+
+// Deprecated: use CmtConsPublicKey instead
+func (v Validator) TmConsPublicKey() (cmtprotocrypto.PublicKey, error) {
+	return v.CmtConsPublicKey()
+}
+
+// CmtConsPublicKey casts Validator.ConsensusPubkey to cmtprotocrypto.PubKey.
+func (v Validator) CmtConsPublicKey() (cmtprotocrypto.PublicKey, error) {
+	pk, err := v.ConsPubKey()
+	if err != nil {
+		return cmtprotocrypto.PublicKey{}, err
+	}
+
+	tmPk, err := cryptocodec.ToCmtProtoPublicKey(pk)
+	if err != nil {
+		return cmtprotocrypto.PublicKey{}, err
+	}
+
+	return tmPk, nil
+}
+
+// GetConsAddr extracts the consensus key address
+func (v Validator) GetConsAddr() ([]byte, error) {
+	pk, ok := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey)
+	if !ok {
+		return nil, errors.Wrapf(sdkerrors.ErrInvalidType, "expecting cryptotypes.PubKey, got %T", pk)
+	}
+
+	return pk.Address().Bytes(), nil
+}
+
+func (v Validator) GetTokens() math.Int                { return v.Tokens }
+func (v Validator) GetBondedTokens() math.Int          { return v.BondedTokens() }
+func (v Validator) GetConsensusPower(r math.Int) int64 { return v.ConsensusPower(r) }
+func (v Validator) GetCommission() math.LegacyDec      { return v.Commission.Rate }
+func (v Validator) GetMinSelfDelegation() math.Int     { return v.MinSelfDelegation }
+func (v Validator) GetDelegatorShares() math.LegacyDec { return v.DelegatorShares }
+
+// UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces
+func (v Validator) UnpackInterfaces(unpacker codectypes.AnyUnpacker) error {
+	var pk cryptotypes.PubKey
+	return unpacker.UnpackAny(v.ConsensusPubkey, &pk)
+}
+```
+
+#### `Any`'s TypeURL
+
+When packing a protobuf message inside an `Any`, the message's type is uniquely defined by its type URL, which is the message's fully qualified name prefixed by a `/` (slash) character. In some implementations of `Any`, like the gogoproto one, there's generally [a resolvable prefix, e.g. `type.googleapis.com`](https://github.com/gogo/protobuf/blob/b03c65ea87cdc3521ede29f62fe3ce239267c1bc/protobuf/google/protobuf/any.proto#L87-L91). However, in the Cosmos SDK, we made the decision to not include such a prefix, in order to have shorter type URLs.
The Cosmos SDK's own `Any` implementation can be found in `github.com/cosmos/cosmos-sdk/codec/types`.
+
+The Cosmos SDK is also switching away from gogoproto to the official `google.golang.org/protobuf` (known as the Protobuf API v2). Its default `Any` implementation also contains the [`type.googleapis.com`](https://github.com/protocolbuffers/protobuf-go/blob/v1.28.1/types/known/anypb/any.pb.go#L266) prefix. To maintain compatibility with the SDK, the following methods from `"google.golang.org/protobuf/types/known/anypb"` should not be used:
+
+- `anypb.New`
+- `anypb.MarshalFrom`
+- `anypb.Any#MarshalFrom`
+
+Instead, the Cosmos SDK provides helper functions in `"github.com/cosmos/cosmos-proto/anyutil"`, which create an official `anypb.Any` without inserting the prefixes:
+
+- `anyutil.New`
+- `anyutil.MarshalFrom`
+
+For example, to pack a `sdk.Msg` called `internalMsg`, use:
+
+```diff
+import (
+- "google.golang.org/protobuf/types/known/anypb"
++ "github.com/cosmos/cosmos-proto/anyutil"
+)
+
+- anyMsg, err := anypb.New(internalMsg.Message().Interface())
++ anyMsg, err := anyutil.New(internalMsg.Message().Interface())
+
+- fmt.Println(anyMsg.TypeURL) // type.googleapis.com/cosmos.bank.v1beta1.MsgSend
++ fmt.Println(anyMsg.TypeURL) // /cosmos.bank.v1beta1.MsgSend
+```
+
+## FAQ
+
+### How to create modules using protobuf encoding
+
+#### Defining module types
+
+Protobuf types can be defined to encode:
+
+- state
+- [`Msg`s](/docs/sdk/next/documentation/module-system/messages-and-queries#messages)
+- [Query services](/docs/sdk/next/documentation/module-system/query-services)
+- [genesis](/docs/sdk/next/documentation/module-system/genesis)
+
+#### Naming and conventions
+
+We encourage developers to follow industry guidelines: the [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style)
+and [Buf](https://buf.build/docs/style-guide); see more details in [ADR 023](/docs/common/pages/adr-comprehensive#adr-023-protocol-buffer-naming-and-versioning-conventions).
+
+### How to update modules to protobuf encoding
+
+If modules do not contain any interfaces (e.g. `Account` or `Content`), then they
+may simply migrate any existing types that are encoded and persisted via their
+concrete Amino codec to Protobuf (see 1. for further guidelines) and accept a
+`Marshaler` as the codec, which is implemented via the `ProtoCodec`, without
+any further customization.
+
+However, if a module type composes an interface, it must wrap it in the `sdk.Any` type (from the `/types` package). To do that, a module-level .proto file must use [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto) for the respective interface-typed message fields.
+
+For example, the `x/evidence` module defines an `Evidence` interface, which is used by `MsgSubmitEvidence`. The structure definition must use `sdk.Any` to wrap the evidence. In the proto file we define it as follows:
+
+```protobuf
+// proto/cosmos/evidence/v1beta1/tx.proto
+
+message MsgSubmitEvidence {
+  string submitter = 1;
+  google.protobuf.Any evidence = 2 [(cosmos_proto.accepts_interface) = "cosmos.evidence.v1beta1.Evidence"];
+}
+```
+
+The Cosmos SDK `codec.Codec` interface provides the helper methods `MarshalInterface` and `UnmarshalInterface` for easy encoding of state to `Any`.
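Conceptually, `MarshalInterface` records the concrete type's URL alongside the encoded value so the interface can be recovered later, and `UnmarshalInterface` resolves that URL against a registry of known implementations. The following is a self-contained, illustrative sketch of that mechanism only — the names (`envelope`, `marshalInterface`, `unmarshalInterface`) are hypothetical, and it uses JSON rather than the SDK's actual protobuf-based `Any` wire format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope plays the role of google.protobuf.Any in this sketch: it pairs
// a type URL with the encoded concrete value. (Illustrative only — the
// real SDK encodes with protobuf, not JSON.)
type envelope struct {
	TypeURL string          `json:"type_url"`
	Value   json.RawMessage `json:"value"`
}

// Evidence stands in for a module-level interface such as x/evidence's.
type Evidence interface{ Route() string }

type Equivocation struct {
	Height int64 `json:"height"`
}

func (e Equivocation) Route() string { return "equivocation" }

// marshalInterface wraps a concrete Evidence value in an envelope,
// mirroring what codec.Codec's MarshalInterface does with Any.
func marshalInterface(typeURL string, ev Evidence) ([]byte, error) {
	bz, err := json.Marshal(ev)
	if err != nil {
		return nil, err
	}
	return json.Marshal(envelope{TypeURL: typeURL, Value: bz})
}

// unmarshalInterface resolves the concrete type from a registry of
// constructors keyed by type URL, then decodes into it — the same idea
// as UnmarshalInterface backed by an interface registry.
func unmarshalInterface(registry map[string]func() Evidence, bz []byte) (Evidence, error) {
	var env envelope
	if err := json.Unmarshal(bz, &env); err != nil {
		return nil, err
	}
	mk, ok := registry[env.TypeURL]
	if !ok {
		return nil, fmt.Errorf("no concrete type registered for %s", env.TypeURL)
	}
	ev := mk()
	if err := json.Unmarshal(env.Value, ev); err != nil {
		return nil, err
	}
	return ev, nil
}

func main() {
	registry := map[string]func() Evidence{
		"/cosmos.evidence.v1beta1.Equivocation": func() Evidence { return &Equivocation{} },
	}

	bz, _ := marshalInterface("/cosmos.evidence.v1beta1.Equivocation", Equivocation{Height: 7})
	ev, err := unmarshalInterface(registry, bz)
	fmt.Println(err == nil, ev.Route()) // prints: true equivocation
}
```

Note how decoding can only succeed for type URLs that were explicitly registered — the same whitelisting property the SDK's `InterfaceRegistry` provides.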
+ +Module should register interfaces using `InterfaceRegistry` which provides a mechanism for registering interfaces: `RegisterInterface(protoName string, iface interface{}, impls ...proto.Message)` and implementations: `RegisterImplementations(iface interface{}, impls ...proto.Message)` that can be safely unpacked from Any, similarly to type registration with Amino: + +```go expandable +package types + +import ( + + "errors" + "fmt" + "reflect" + "github.com/cosmos/gogoproto/jsonpb" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoreflect" + "cosmossdk.io/x/tx/signing" +) + +var ( + + / MaxUnpackAnySubCalls extension point that defines the maximum number of sub-calls allowed during the unpacking + / process of protobuf Any messages. + MaxUnpackAnySubCalls = 100 + + / MaxUnpackAnyRecursionDepth extension point that defines the maximum allowed recursion depth during protobuf Any + / message unpacking. + MaxUnpackAnyRecursionDepth = 10 +) + +/ AnyUnpacker is an interface which allows safely unpacking types packed +/ in Any's against a whitelist of registered types +type AnyUnpacker interface { + / UnpackAny unpacks the value in any to the interface pointer passed in as + / iface. Note that the type in any must have been registered in the + / underlying whitelist registry as a concrete type for that interface + / Ex: + / var msg sdk.Msg + / err := cdc.UnpackAny(any, &msg) + / ... + UnpackAny(any *Any, iface interface{ +}) + +error +} + +/ InterfaceRegistry provides a mechanism for registering interfaces and +/ implementations that can be safely unpacked from Any +type InterfaceRegistry interface { + AnyUnpacker + jsonpb.AnyResolver + + / RegisterInterface associates protoName as the public name for the + / interface passed in as iface. This is to be used primarily to create + / a public facing registry of interface implementations for clients. 
+ / protoName should be a well-chosen public facing name that remains stable. + / RegisterInterface takes an optional list of impls to be registered + / as implementations of iface. + / + / Ex: + / registry.RegisterInterface("cosmos.base.v1beta1.Msg", (*sdk.Msg)(nil)) + +RegisterInterface(protoName string, iface interface{ +}, impls ...proto.Message) + + / RegisterImplementations registers impls as concrete implementations of + / the interface iface. + / + / Ex: + / registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{ +}, &MsgMultiSend{ +}) + +RegisterImplementations(iface interface{ +}, impls ...proto.Message) + + / ListAllInterfaces list the type URLs of all registered interfaces. + ListAllInterfaces() []string + + / ListImplementations lists the valid type URLs for the given interface name that can be used + / for the provided interface type URL. + ListImplementations(ifaceTypeURL string) []string + + / EnsureRegistered ensures there is a registered interface for the given concrete type. + EnsureRegistered(iface interface{ +}) + +error + + protodesc.Resolver + + / RangeFiles iterates over all registered files and calls f on each one. This + / implements the part of protoregistry.Files that is needed for reflecting over + / the entire FileDescriptorSet. + RangeFiles(f func(protoreflect.FileDescriptor) + +bool) + +SigningContext() *signing.Context + + / mustEmbedInterfaceRegistry requires that all implementations of InterfaceRegistry embed an official implementation + / from this package. This allows new methods to be added to the InterfaceRegistry interface without breaking + / backwards compatibility. 
+ mustEmbedInterfaceRegistry() +} + +/ UnpackInterfacesMessage is meant to extend protobuf types (which implement +/ proto.Message) + +to support a post-deserialization phase which unpacks +/ types packed within Any's using the whitelist provided by AnyUnpacker +type UnpackInterfacesMessage interface { + / UnpackInterfaces is implemented in order to unpack values packed within + / Any's using the AnyUnpacker. It should generally be implemented as + / follows: + / func (s *MyStruct) + +UnpackInterfaces(unpacker AnyUnpacker) + +error { + / var x AnyInterface + / / where X is an Any field on MyStruct + / err := unpacker.UnpackAny(s.X, &x) + / if err != nil { + / return nil + / +} + / / where Y is a field on MyStruct that implements UnpackInterfacesMessage itself + / err = s.Y.UnpackInterfaces(unpacker) + / if err != nil { + / return nil + / +} + / return nil + / +} + +UnpackInterfaces(unpacker AnyUnpacker) + +error +} + +type interfaceRegistry struct { + signing.ProtoFileResolver + interfaceNames map[string]reflect.Type + interfaceImpls map[reflect.Type]interfaceMap + implInterfaces map[reflect.Type]reflect.Type + typeURLMap map[string]reflect.Type + signingCtx *signing.Context +} + +type interfaceMap = map[string]reflect.Type + +/ NewInterfaceRegistry returns a new InterfaceRegistry +func NewInterfaceRegistry() + +InterfaceRegistry { + registry, err := NewInterfaceRegistryWithOptions(InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: failingAddressCodec{ +}, + ValidatorAddressCodec: failingAddressCodec{ +}, +}, +}) + if err != nil { + panic(err) +} + +return registry +} + +/ InterfaceRegistryOptions are options for creating a new InterfaceRegistry. +type InterfaceRegistryOptions struct { + / ProtoFiles is the set of files to use for the registry. It is required. + ProtoFiles signing.ProtoFileResolver + + / SigningOptions are the signing options to use for the registry. 
+ SigningOptions signing.Options +} + +/ NewInterfaceRegistryWithOptions returns a new InterfaceRegistry with the given options. +func NewInterfaceRegistryWithOptions(options InterfaceRegistryOptions) (InterfaceRegistry, error) { + if options.ProtoFiles == nil { + return nil, fmt.Errorf("proto files must be provided") +} + +options.SigningOptions.FileResolver = options.ProtoFiles + signingCtx, err := signing.NewContext(options.SigningOptions) + if err != nil { + return nil, err +} + +return &interfaceRegistry{ + interfaceNames: map[string]reflect.Type{ +}, + interfaceImpls: map[reflect.Type]interfaceMap{ +}, + implInterfaces: map[reflect.Type]reflect.Type{ +}, + typeURLMap: map[string]reflect.Type{ +}, + ProtoFileResolver: options.ProtoFiles, + signingCtx: signingCtx, +}, nil +} + +func (registry *interfaceRegistry) + +RegisterInterface(protoName string, iface interface{ +}, impls ...proto.Message) { + typ := reflect.TypeOf(iface) + if typ.Elem().Kind() != reflect.Interface { + panic(fmt.Errorf("%T is not an interface type", iface)) +} + +registry.interfaceNames[protoName] = typ + registry.RegisterImplementations(iface, impls...) +} + +/ EnsureRegistered ensures there is a registered interface for the given concrete type. +/ +/ Returns an error if not, and nil if so. +func (registry *interfaceRegistry) + +EnsureRegistered(impl interface{ +}) + +error { + if reflect.ValueOf(impl).Kind() != reflect.Ptr { + return fmt.Errorf("%T is not a pointer", impl) +} + if _, found := registry.implInterfaces[reflect.TypeOf(impl)]; !found { + return fmt.Errorf("%T does not have a registered interface", impl) +} + +return nil +} + +/ RegisterImplementations registers a concrete proto Message which implements +/ the given interface. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. 
+func (registry *interfaceRegistry) + +RegisterImplementations(iface interface{ +}, impls ...proto.Message) { + for _, impl := range impls { + typeURL := MsgTypeURL(impl) + +registry.registerImpl(iface, typeURL, impl) +} +} + +/ RegisterCustomTypeURL registers a concrete type which implements the given +/ interface under `typeURL`. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. +func (registry *interfaceRegistry) + +RegisterCustomTypeURL(iface interface{ +}, typeURL string, impl proto.Message) { + registry.registerImpl(iface, typeURL, impl) +} + +/ registerImpl registers a concrete type which implements the given +/ interface under `typeURL`. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. +func (registry *interfaceRegistry) + +registerImpl(iface interface{ +}, typeURL string, impl proto.Message) { + ityp := reflect.TypeOf(iface).Elem() + +imap, found := registry.interfaceImpls[ityp] + if !found { + imap = map[string]reflect.Type{ +} + +} + implType := reflect.TypeOf(impl) + if !implType.AssignableTo(ityp) { + panic(fmt.Errorf("type %T doesn't actually implement interface %+v", impl, ityp)) +} + + / Check if we already registered something under the given typeURL. It's + / okay to register the same concrete type again, but if we are registering + / a new concrete type under the same typeURL, then we throw an error (here, + / we panic). + foundImplType, found := imap[typeURL] + if found && foundImplType != implType { + panic( + fmt.Errorf( + "concrete type %s has already been registered under typeURL %s, cannot register %s under same typeURL. 
"+ + "This usually means that there are conflicting modules registering different concrete types "+ + "for a same interface implementation", + foundImplType, + typeURL, + implType, + ), + ) +} + +imap[typeURL] = implType + registry.typeURLMap[typeURL] = implType + registry.implInterfaces[implType] = ityp + registry.interfaceImpls[ityp] = imap +} + +func (registry *interfaceRegistry) + +ListAllInterfaces() []string { + interfaceNames := registry.interfaceNames + keys := make([]string, 0, len(interfaceNames)) + for key := range interfaceNames { + keys = append(keys, key) +} + +return keys +} + +func (registry *interfaceRegistry) + +ListImplementations(ifaceName string) []string { + typ, ok := registry.interfaceNames[ifaceName] + if !ok { + return []string{ +} + +} + +impls, ok := registry.interfaceImpls[typ.Elem()] + if !ok { + return []string{ +} + +} + keys := make([]string, 0, len(impls)) + for key := range impls { + keys = append(keys, key) +} + +return keys +} + +func (registry *interfaceRegistry) + +UnpackAny(any *Any, iface interface{ +}) + +error { + unpacker := &statefulUnpacker{ + registry: registry, + maxDepth: MaxUnpackAnyRecursionDepth, + maxCalls: &sharedCounter{ + count: MaxUnpackAnySubCalls +}, +} + +return unpacker.UnpackAny(any, iface) +} + +/ sharedCounter is a type that encapsulates a counter value +type sharedCounter struct { + count int +} + +/ statefulUnpacker is a struct that helps in deserializing and unpacking +/ protobuf Any messages while maintaining certain stateful constraints. +type statefulUnpacker struct { + registry *interfaceRegistry + maxDepth int + maxCalls *sharedCounter +} + +/ cloneForRecursion returns a new statefulUnpacker instance with maxDepth reduced by one, preserving the registry and maxCalls. 
+func (r statefulUnpacker) + +cloneForRecursion() *statefulUnpacker { + return &statefulUnpacker{ + registry: r.registry, + maxDepth: r.maxDepth - 1, + maxCalls: r.maxCalls, +} +} + +/ UnpackAny deserializes a protobuf Any message into the provided interface, ensuring the interface is a pointer. +/ It applies stateful constraints such as max depth and call limits, and unpacks interfaces if required. +func (r *statefulUnpacker) + +UnpackAny(any *Any, iface interface{ +}) + +error { + if r.maxDepth == 0 { + return errors.New("max depth exceeded") +} + if r.maxCalls.count == 0 { + return errors.New("call limit exceeded") +} + / here we gracefully handle the case in which `any` itself is `nil`, which may occur in message decoding + if any == nil { + return nil +} + if any.TypeUrl == "" { + / if TypeUrl is empty return nil because without it we can't actually unpack anything + return nil +} + +r.maxCalls.count-- + rv := reflect.ValueOf(iface) + if rv.Kind() != reflect.Ptr { + return fmt.Errorf("UnpackAny expects a pointer") +} + rt := rv.Elem().Type() + cachedValue := any.cachedValue + if cachedValue != nil { + if reflect.TypeOf(cachedValue).AssignableTo(rt) { + rv.Elem().Set(reflect.ValueOf(cachedValue)) + +return nil +} + +} + +imap, found := r.registry.interfaceImpls[rt] + if !found { + return fmt.Errorf("no registered implementations of type %+v", rt) +} + +typ, found := imap[any.TypeUrl] + if !found { + return fmt.Errorf("no concrete type registered for type URL %s against interface %T", any.TypeUrl, iface) +} + +msg, ok := reflect.New(typ.Elem()).Interface().(proto.Message) + if !ok { + return fmt.Errorf("can't proto unmarshal %T", msg) +} + err := proto.Unmarshal(any.Value, msg) + if err != nil { + return err +} + +err = UnpackInterfaces(msg, r.cloneForRecursion()) + if err != nil { + return err +} + +rv.Elem().Set(reflect.ValueOf(msg)) + +any.cachedValue = msg + + return nil +} + +/ Resolve returns the proto message given its typeURL. 
It works with types +/ registered with RegisterInterface/RegisterImplementations, as well as those +/ registered with RegisterWithCustomTypeURL. +func (registry *interfaceRegistry) + +Resolve(typeURL string) (proto.Message, error) { + typ, found := registry.typeURLMap[typeURL] + if !found { + return nil, fmt.Errorf("unable to resolve type URL %s", typeURL) +} + +msg, ok := reflect.New(typ.Elem()).Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("can't resolve type URL %s", typeURL) +} + +return msg, nil +} + +func (registry *interfaceRegistry) + +SigningContext() *signing.Context { + return registry.signingCtx +} + +func (registry *interfaceRegistry) + +mustEmbedInterfaceRegistry() { +} + +/ UnpackInterfaces is a convenience function that calls UnpackInterfaces +/ on x if x implements UnpackInterfacesMessage +func UnpackInterfaces(x interface{ +}, unpacker AnyUnpacker) + +error { + if msg, ok := x.(UnpackInterfacesMessage); ok { + return msg.UnpackInterfaces(unpacker) +} + +return nil +} + +type failingAddressCodec struct{ +} + +func (f failingAddressCodec) + +StringToBytes(string) ([]byte, error) { + return nil, fmt.Errorf("InterfaceRegistry requires a proper address codec implementation to do address conversion") +} + +func (f failingAddressCodec) + +BytesToString([]byte) (string, error) { + return "", fmt.Errorf("InterfaceRegistry requires a proper address codec implementation to do address conversion") +} +``` + +In addition, an `UnpackInterfaces` phase should be introduced to deserialization to unpack interfaces before they're needed. 
Protobuf types that contain a protobuf `Any` either directly or via one of their members should implement the `UnpackInterfacesMessage` interface:
+
+```go
+type UnpackInterfacesMessage interface {
+	UnpackInterfaces(InterfaceUnpacker) error
+}
+```
diff --git a/docs/sdk/next/documentation/protocol-development/errors.mdx b/docs/sdk/next/documentation/protocol-development/errors.mdx
new file mode 100644
index 00000000..14f34d22
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/errors.mdx
@@ -0,0 +1,701 @@
+---
+title: Errors
+---
+
+## Synopsis
+
+This document outlines the recommended usage and APIs for error handling in Cosmos SDK modules.
+
+Modules are encouraged to define and register their own errors to provide better
+context on failed message or handler execution. Typically, these errors should be
+common or general errors which can be further wrapped to provide additional specific
+execution context.
+
+## Registration
+
+Modules should define and register their custom errors in `x/{module}/errors.go`.
+Registration of errors is handled via the [`errors` package](https://github.com/cosmos/cosmos-sdk/blob/main/errors/errors.go).
+
+Example:
+
+```go expandable
+package types
+
+import "cosmossdk.io/errors"
+
+// x/distribution module sentinel errors
+var (
+	ErrEmptyDelegatorAddr      = errors.Register(ModuleName, 2, "delegator address is empty")
+	ErrEmptyWithdrawAddr       = errors.Register(ModuleName, 3, "withdraw address is empty")
+	ErrEmptyValidatorAddr      = errors.Register(ModuleName, 4, "validator address is empty")
+	ErrEmptyDelegationDistInfo = errors.Register(ModuleName, 5, "no delegation distribution info")
+	ErrNoValidatorDistInfo     = errors.Register(ModuleName, 6, "no validator distribution info")
+	ErrNoValidatorCommission   = errors.Register(ModuleName, 7, "no validator commission to withdraw")
+	ErrSetWithdrawAddrDisabled = errors.Register(ModuleName, 8, "set withdraw address disabled")
+	ErrBadDistribution         = errors.Register(ModuleName, 9, "community pool does not have sufficient coins to distribute")
+	ErrInvalidProposalAmount   = errors.Register(ModuleName, 10, "invalid community pool spend proposal amount")
+	ErrEmptyProposalRecipient  = errors.Register(ModuleName, 11, "invalid community pool spend proposal recipient")
+	ErrNoValidatorExists       = errors.Register(ModuleName, 12, "validator does not exist")
+	ErrNoDelegationExists      = errors.Register(ModuleName, 13, "delegation does not exist")
+)
+```
+
+Each custom module error must provide the codespace, which is typically the module name
+(e.g. "distribution") and is unique per module, and a uint32 code. Together, the codespace and code
+provide a globally unique Cosmos SDK error. Typically, the code is monotonically increasing but does not
+necessarily have to be. The only restrictions on error codes are the following:
+
+- Must be greater than one, as a code value of one is reserved for internal errors.
+- Must be unique within the module.
+
+Note, the Cosmos SDK provides a core set of _common_ errors. These errors are defined in [`types/errors/errors.go`](https://github.com/cosmos/cosmos-sdk/blob/main/types/errors/errors.go).
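To illustrate those rules, here is a minimal, stdlib-only sketch — `register` and `Error` are hypothetical stand-ins, not the actual `cosmossdk.io/errors` implementation — of a registry that enforces codes greater than one and uniqueness per codespace:

```go
package main

import "fmt"

// registered tracks (codespace, code) pairs, mirroring the uniqueness
// rule described above. Illustrative only.
var registered = map[string]map[uint32]bool{}

// Error pairs a codespace (typically the module name) with a uint32 code,
// which together identify the error globally.
type Error struct {
	Codespace string
	Code      uint32
	Desc      string
}

func (e *Error) Error() string {
	return fmt.Sprintf("%s (codespace: %s, code: %d)", e.Desc, e.Codespace, e.Code)
}

// register panics on codes <= 1 (code 1 is reserved for internal errors)
// and on duplicate codes within a codespace.
func register(codespace string, code uint32, desc string) *Error {
	if code <= 1 {
		panic("error code must be greater than 1")
	}
	if registered[codespace][code] {
		panic(fmt.Sprintf("error code %d already registered in codespace %s", code, codespace))
	}
	if registered[codespace] == nil {
		registered[codespace] = map[uint32]bool{}
	}
	registered[codespace][code] = true
	return &Error{Codespace: codespace, Code: code, Desc: desc}
}

func main() {
	errEmpty := register("distribution", 2, "delegator address is empty")
	fmt.Println(errEmpty) // prints: delegator address is empty (codespace: distribution, code: 2)
}
```

The same code value may be reused across codespaces (e.g. "distribution" and "staking" can both use code 2), which is why the codespace is part of the error's identity.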
+ +## Wrapping + +The custom module errors can be returned as their concrete type as they already fulfill the `error` +interface. However, module errors can be wrapped to provide further context and meaning to failed +execution. + +Example: + +```go expandable +package keeper + +import ( + + "context" + "errors" + "fmt" + "cosmossdk.io/collections" + "cosmossdk.io/core/store" + "cosmossdk.io/log" + "cosmossdk.io/math" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/query" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var _ Keeper = (*BaseKeeper)(nil) + +/ Keeper defines a module interface that facilitates the transfer of coins +/ between accounts. +type Keeper interface { + SendKeeper + WithMintCoinsRestriction(MintingRestrictionFn) + +BaseKeeper + + InitGenesis(context.Context, *types.GenesisState) + +ExportGenesis(context.Context) *types.GenesisState + + GetSupply(ctx context.Context, denom string) + +sdk.Coin + HasSupply(ctx context.Context, denom string) + +bool + GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) + +IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) + +bool) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) + +HasDenomMetaData(ctx context.Context, denom string) + +bool + SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) + +GetAllDenomMetaData(ctx context.Context) []types.Metadata + IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) + +SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) + +error + 
SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + + DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error + UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error + + types.QueryServer +} + +/ BaseKeeper manages transfers between accounts. It implements the Keeper interface. +type BaseKeeper struct { + BaseSendKeeper + + ak types.AccountKeeper + cdc codec.BinaryCodec + storeService store.KVStoreService + mintCoinsRestrictionFn MintingRestrictionFn + logger log.Logger +} + +type MintingRestrictionFn func(ctx context.Context, coins sdk.Coins) + +error + +/ GetPaginatedTotalSupply queries for the supply, ignoring 0 coins, with a given pagination +func (k BaseKeeper) + +GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) { + results, pageResp, err := query.CollectionPaginate[string, math.Int](/docs/sdk/next/documentation/protocol-development/ctx, k.Supply, pagination) + if err != nil { + return nil, nil, err +} + coins := sdk.NewCoins() + for _, res := range results { + coins = coins.Add(sdk.NewCoin(res.Key, res.Value)) +} + +return coins, pageResp, nil +} + +/ NewBaseKeeper returns a new BaseKeeper object with a given codec, dedicated +/ store key, an AccountKeeper implementation, and a parameter Subspace used to +/ store and fetch module parameters. The BaseKeeper also accepts a +/ blocklist map. 
This blocklist describes the set of addresses that are not allowed +/ to receive funds through direct and explicit actions, for example, by using a MsgSend or +/ by using a SendCoinsFromModuleToAccount execution. +func NewBaseKeeper( + cdc codec.BinaryCodec, + storeService store.KVStoreService, + ak types.AccountKeeper, + blockedAddrs map[string]bool, + authority string, + logger log.Logger, +) + +BaseKeeper { + if _, err := ak.AddressCodec().StringToBytes(authority); err != nil { + panic(fmt.Errorf("invalid bank authority address: %w", err)) +} + + / add the module name to the logger + logger = logger.With(log.ModuleKey, "x/"+types.ModuleName) + +return BaseKeeper{ + BaseSendKeeper: NewBaseSendKeeper(cdc, storeService, ak, blockedAddrs, authority, logger), + ak: ak, + cdc: cdc, + storeService: storeService, + mintCoinsRestrictionFn: func(ctx context.Context, coins sdk.Coins) + +error { + return nil +}, + logger: logger, +} +} + +/ WithMintCoinsRestriction restricts the bank Keeper used within a specific module to +/ have restricted permissions on minting via function passed in parameter. +/ Previous restriction functions can be nested as such: +/ +/ bankKeeper.WithMintCoinsRestriction(restriction1).WithMintCoinsRestriction(restriction2) + +func (k BaseKeeper) + +WithMintCoinsRestriction(check MintingRestrictionFn) + +BaseKeeper { + oldRestrictionFn := k.mintCoinsRestrictionFn + k.mintCoinsRestrictionFn = func(ctx context.Context, coins sdk.Coins) + +error { + err := check(ctx, coins) + if err != nil { + return err +} + +err = oldRestrictionFn(ctx, coins) + if err != nil { + return err +} + +return nil +} + +return k +} + +/ DelegateCoins performs delegation by deducting amt coins from an account with +/ address addr. For vesting accounts, delegations amounts are tracked for both +/ vesting and vested coins. The coins are then transferred from the delegator +/ address to a ModuleAccount address. 
If any of the delegation amounts are negative, +/ an error is returned. +func (k BaseKeeper) + +DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error { + moduleAcc := k.ak.GetAccount(ctx, moduleAccAddr) + if moduleAcc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleAccAddr) +} + if !amt.IsValid() { + return errorsmod.Wrap(sdkerrors.ErrInvalidCoins, amt.String()) +} + balances := sdk.NewCoins() + for _, coin := range amt { + balance := k.GetBalance(ctx, delegatorAddr, coin.GetDenom()) + if balance.IsLT(coin) { + return errorsmod.Wrapf( + sdkerrors.ErrInsufficientFunds, "failed to delegate; %s is smaller than %s", balance, amt, + ) +} + +balances = balances.Add(balance) + err := k.setBalance(ctx, delegatorAddr, balance.Sub(coin)) + if err != nil { + return err +} + +} + if err := k.trackDelegation(ctx, delegatorAddr, balances, amt); err != nil { + return errorsmod.Wrap(err, "failed to track delegation") +} + / emit coin spent event + sdkCtx := sdk.UnwrapSDKContext(ctx) + +sdkCtx.EventManager().EmitEvent( + types.NewCoinSpentEvent(delegatorAddr, amt), + ) + err := k.addCoins(ctx, moduleAccAddr, amt) + if err != nil { + return err +} + +return nil +} + +/ UndelegateCoins performs undelegation by crediting amt coins to an account with +/ address addr. For vesting accounts, undelegation amounts are tracked for both +/ vesting and vested coins. The coins are then transferred from a ModuleAccount +/ address to the delegator address. If any of the undelegation amounts are +/ negative, an error is returned. 
+func (k BaseKeeper) + +UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error { + moduleAcc := k.ak.GetAccount(ctx, moduleAccAddr) + if moduleAcc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleAccAddr) +} + if !amt.IsValid() { + return errorsmod.Wrap(sdkerrors.ErrInvalidCoins, amt.String()) +} + err := k.subUnlockedCoins(ctx, moduleAccAddr, amt) + if err != nil { + return err +} + if err := k.trackUndelegation(ctx, delegatorAddr, amt); err != nil { + return errorsmod.Wrap(err, "failed to track undelegation") +} + +err = k.addCoins(ctx, delegatorAddr, amt) + if err != nil { + return err +} + +return nil +} + +/ GetSupply retrieves the Supply from store +func (k BaseKeeper) + +GetSupply(ctx context.Context, denom string) + +sdk.Coin { + amt, err := k.Supply.Get(ctx, denom) + if err != nil { + return sdk.NewCoin(denom, math.ZeroInt()) +} + +return sdk.NewCoin(denom, amt) +} + +/ HasSupply checks if the supply coin exists in store. +func (k BaseKeeper) + +HasSupply(ctx context.Context, denom string) + +bool { + has, err := k.Supply.Has(ctx, denom) + +return has && err == nil +} + +/ GetDenomMetaData retrieves the denomination metadata. returns the metadata and true if the denom exists, +/ false otherwise. +func (k BaseKeeper) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) { + m, err := k.BaseViewKeeper.DenomMetadata.Get(ctx, denom) + +return m, err == nil +} + +/ HasDenomMetaData checks if the denomination metadata exists in store. 
+func (k BaseKeeper) + +HasDenomMetaData(ctx context.Context, denom string) + +bool { + has, err := k.BaseViewKeeper.DenomMetadata.Has(ctx, denom) + +return has && err == nil +} + +/ GetAllDenomMetaData retrieves all denominations metadata +func (k BaseKeeper) + +GetAllDenomMetaData(ctx context.Context) []types.Metadata { + denomMetaData := make([]types.Metadata, 0) + +k.IterateAllDenomMetaData(ctx, func(metadata types.Metadata) + +bool { + denomMetaData = append(denomMetaData, metadata) + +return false +}) + +return denomMetaData +} + +/ IterateAllDenomMetaData iterates over all the denominations metadata and +/ provides the metadata to a callback. If true is returned from the +/ callback, iteration is halted. +func (k BaseKeeper) + +IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) { + err := k.BaseViewKeeper.DenomMetadata.Walk(ctx, nil, func(_ string, metadata types.Metadata) (stop bool, err error) { + return cb(metadata), nil +}) + if err != nil && !errors.Is(err, collections.ErrInvalidIterator) { + panic(err) +} +} + +/ SetDenomMetaData sets the denominations metadata +func (k BaseKeeper) + +SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) { + _ = k.BaseViewKeeper.DenomMetadata.Set(ctx, denomMetaData.Base, denomMetaData) +} + +/ SendCoinsFromModuleToAccount transfers coins from a ModuleAccount to an AccAddress. +/ It will panic if the module account does not exist. An error is returned if +/ the recipient address is black-listed or if sending the tokens fails. 
+func (k BaseKeeper) + +SendCoinsFromModuleToAccount( + ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins, +) + +error { + senderAddr := k.ak.GetModuleAddress(senderModule) + if senderAddr == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + if k.BlockedAddr(recipientAddr) { + return errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", recipientAddr) +} + +return k.SendCoins(ctx, senderAddr, recipientAddr, amt) +} + +/ SendCoinsFromModuleToModule transfers coins from a ModuleAccount to another. +/ It will panic if either module account does not exist. +func (k BaseKeeper) + +SendCoinsFromModuleToModule( + ctx context.Context, senderModule, recipientModule string, amt sdk.Coins, +) + +error { + senderAddr := k.ak.GetModuleAddress(senderModule) + if senderAddr == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + +return k.SendCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ SendCoinsFromAccountToModule transfers coins from an AccAddress to a ModuleAccount. +/ It will panic if the module account does not exist. +func (k BaseKeeper) + +SendCoinsFromAccountToModule( + ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins, +) + +error { + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + +return k.SendCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ DelegateCoinsFromAccountToModule delegates coins and transfers them from a +/ delegator account to a module account. 
It will panic if the module account +/ does not exist or is unauthorized. +func (k BaseKeeper) + +DelegateCoinsFromAccountToModule( + ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins, +) + +error { + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + if !recipientAcc.HasPermission(authtypes.Staking) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to receive delegated coins", recipientModule)) +} + +return k.DelegateCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ UndelegateCoinsFromModuleToAccount undelegates the unbonding coins and transfers +/ them from a module account to the delegator account. It will panic if the +/ module account does not exist or is unauthorized. +func (k BaseKeeper) + +UndelegateCoinsFromModuleToAccount( + ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins, +) + +error { + acc := k.ak.GetModuleAccount(ctx, senderModule) + if acc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + if !acc.HasPermission(authtypes.Staking) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to undelegate coins", senderModule)) +} + +return k.UndelegateCoins(ctx, acc.GetAddress(), recipientAddr, amt) +} + +/ MintCoins creates new coins from thin air and adds it to the module account. +/ It will panic if the module account does not exist or is unauthorized. 
+func (k BaseKeeper) + +MintCoins(ctx context.Context, moduleName string, amounts sdk.Coins) + +error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + err := k.mintCoinsRestrictionFn(ctx, amounts) + if err != nil { + k.logger.Error(fmt.Sprintf("Module %q attempted to mint coins %s it doesn't have permission for, error %v", moduleName, amounts, err)) + +return err +} + acc := k.ak.GetModuleAccount(ctx, moduleName) + if acc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleName)) +} + if !acc.HasPermission(authtypes.Minter) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to mint tokens", moduleName)) +} + +err = k.addCoins(ctx, acc.GetAddress(), amounts) + if err != nil { + return err +} + for _, amount := range amounts { + supply := k.GetSupply(ctx, amount.GetDenom()) + +supply = supply.Add(amount) + +k.setSupply(ctx, supply) +} + +k.logger.Debug("minted coins from module account", "amount", amounts.String(), "from", moduleName) + + / emit mint event + sdkCtx.EventManager().EmitEvent( + types.NewCoinMintEvent(acc.GetAddress(), amounts), + ) + +return nil +} + +/ BurnCoins burns coins deletes coins from the balance of the module account. +/ It will panic if the module account does not exist or is unauthorized. 
+func (k BaseKeeper) + +BurnCoins(ctx context.Context, moduleName string, amounts sdk.Coins) + +error { + acc := k.ak.GetModuleAccount(ctx, moduleName) + if acc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleName)) +} + if !acc.HasPermission(authtypes.Burner) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to burn tokens", moduleName)) +} + err := k.subUnlockedCoins(ctx, acc.GetAddress(), amounts) + if err != nil { + return err +} + for _, amount := range amounts { + supply := k.GetSupply(ctx, amount.GetDenom()) + +supply = supply.Sub(amount) + +k.setSupply(ctx, supply) +} + +k.logger.Debug("burned tokens from module account", "amount", amounts.String(), "from", moduleName) + + / emit burn event + sdkCtx := sdk.UnwrapSDKContext(ctx) + +sdkCtx.EventManager().EmitEvent( + types.NewCoinBurnEvent(acc.GetAddress(), amounts), + ) + +return nil +} + +/ setSupply sets the supply for the given coin +func (k BaseKeeper) + +setSupply(ctx context.Context, coin sdk.Coin) { + / Bank invariants and IBC requires to remove zero coins. 
+ if coin.IsZero() {
+ _ = k.Supply.Remove(ctx, coin.Denom)
+}
+
+else {
+ _ = k.Supply.Set(ctx, coin.Denom, coin.Amount)
+}
+}
+
+/ trackDelegation tracks the delegation of the given account if it is a vesting account
+func (k BaseKeeper)
+
+trackDelegation(ctx context.Context, addr sdk.AccAddress, balance, amt sdk.Coins)
+
+error {
+ acc := k.ak.GetAccount(ctx, addr)
+ if acc == nil {
+ return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr)
+}
+
+vacc, ok := acc.(types.VestingAccount)
+ if ok {
+ / TODO: return error on account.TrackDelegation
+ sdkCtx := sdk.UnwrapSDKContext(ctx)
+
+vacc.TrackDelegation(sdkCtx.BlockHeader().Time, balance, amt)
+
+k.ak.SetAccount(ctx, acc)
+}
+
+return nil
+}
+
+/ trackUndelegation tracks undelegation of the given account if it is a vesting account
+func (k BaseKeeper)
+
+trackUndelegation(ctx context.Context, addr sdk.AccAddress, amt sdk.Coins)
+
+error {
+ acc := k.ak.GetAccount(ctx, addr)
+ if acc == nil {
+ return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr)
+}
+
+vacc, ok := acc.(types.VestingAccount)
+ if ok {
+ / TODO: return error on account.TrackUndelegation
+ vacc.TrackUndelegation(amt)
+
+k.ak.SetAccount(ctx, acc)
+}
+
+return nil
+}
+
+/ IterateTotalSupply iterates over the total supply calling the given cb (callback)
+
+function
+/ with the balance of each coin.
+/ The iteration stops if the callback returns true.
+func (k BaseViewKeeper)
+
+IterateTotalSupply(ctx context.Context, cb func(sdk.Coin)
+
+bool) {
+ err := k.Supply.Walk(ctx, nil, func(s string, m math.Int) (bool, error) {
+ return cb(sdk.NewCoin(s, m)), nil
+})
+ if err != nil && !errors.Is(err, collections.ErrInvalidIterator) {
+ panic(err)
+}
+}
+```
+
+Regardless of whether an error is wrapped, the Cosmos SDK's `errors` package provides a function to determine if
+an error is of a particular kind via `Is`.
+
+## ABCI
+
+If a module error is registered, the Cosmos SDK `errors` package allows ABCI information to be extracted
+through the `ABCIInfo` function. The package also provides `ResponseCheckTx` and `ResponseDeliverTx` as
+auxiliary functions to automatically get `CheckTx` and `DeliverTx` responses from an error.
diff --git a/docs/sdk/next/documentation/protocol-development/gas-fees.mdx b/docs/sdk/next/documentation/protocol-development/gas-fees.mdx
new file mode 100644
index 00000000..49726972
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/gas-fees.mdx
@@ -0,0 +1,636 @@
+---
+title: Gas and Fees
+---
+
+## Synopsis
+
+This document describes the default strategies to handle gas and fees within a Cosmos SDK application.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/next/documentation/application-framework/app-anatomy)
+
+
+
+## Introduction to `Gas` and `Fees`
+
+In the Cosmos SDK, `gas` is a special unit that is used to track the consumption of resources during execution. `gas` is typically consumed whenever reads and writes are made to the store, but it can also be consumed if expensive computation needs to be done. It serves two main purposes:
+
+- Make sure blocks are not consuming too many resources and can be finalized. This is implemented by default in the Cosmos SDK via the [block gas meter](#block-gas-meter).
+- Prevent spam and abuse from end-users. To this end, `gas` consumed during [`message`](/docs/sdk/next/documentation/module-system/messages-and-queries#messages) execution is typically priced, resulting in a `fee` (`fees = gas * gas-prices`). `fees` generally have to be paid by the sender of the `message`. Note that the Cosmos SDK does not enforce `gas` pricing by default, as there may be other ways to prevent spam (e.g. bandwidth schemes). Still, most applications implement `fee` mechanisms to prevent spam by using the [`AnteHandler`](#antehandler).
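The pricing equation above can be sketched in a few lines. The integer denomination and the `fee` helper are illustrative only; the SDK itself works with `sdk.Coins` and decimal `sdk.DecCoins` gas prices:

```go
package main

import "fmt"

// fee computes fees = gas * gas-prices for a single denomination.
// Overflow handling is omitted in this sketch.
func fee(gasWanted, gasPrice uint64) uint64 {
	return gasWanted * gasPrice
}

func main() {
	// 200000 gas at a price of 25 base units per gas unit.
	fmt.Println(fee(200000, 25)) // 5000000
}
```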
+ +## Gas Meter + +In the Cosmos SDK, `gas` is a simple alias for `uint64`, and is managed by an object called a _gas meter_. Gas meters implement the `GasMeter` interface: + +```go expandable +package types + +import ( + + "fmt" + "math" +) + +/ Gas consumption descriptors. +const ( + GasIterNextCostFlatDesc = "IterNextFlat" + GasValuePerByteDesc = "ValuePerByte" + GasWritePerByteDesc = "WritePerByte" + GasReadPerByteDesc = "ReadPerByte" + GasWriteCostFlatDesc = "WriteFlat" + GasReadCostFlatDesc = "ReadFlat" + GasHasDesc = "Has" + GasDeleteDesc = "Delete" +) + +/ Gas measured by the SDK +type Gas = uint64 + +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} + +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. +type ErrorOutOfGas struct { + Descriptor string +} + +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. +type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. 
+func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. +func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. 
+func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. +/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. 
+/
+/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that
+/ EVM-compatible chains can fully support the go-ethereum StateDb interface.
+/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference.
+func (g *infiniteGasMeter)
+
+RefundGas(amount Gas, descriptor string) {
+ if g.consumed < amount {
+ panic(ErrorNegativeGasConsumed{
+ Descriptor: descriptor
+})
+}
+
+g.consumed -= amount
+}
+
+/ IsPastLimit returns false since the gas limit is not confined.
+func (g *infiniteGasMeter)
+
+IsPastLimit()
+
+bool {
+ return false
+}
+
+/ IsOutOfGas returns false since the gas limit is not confined.
+func (g *infiniteGasMeter)
+
+IsOutOfGas()
+
+bool {
+ return false
+}
+
+/ String returns the InfiniteGasMeter's gas consumed.
+func (g *infiniteGasMeter)
+
+String()
+
+string {
+ return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed)
+}
+
+/ GasConfig defines gas cost for each operation on KVStores
+type GasConfig struct {
+ HasCost Gas
+ DeleteCost Gas
+ ReadCostFlat Gas
+ ReadCostPerByte Gas
+ WriteCostFlat Gas
+ WriteCostPerByte Gas
+ IterNextCostFlat Gas
+}
+
+/ KVGasConfig returns a default gas config for KVStores.
+func KVGasConfig()
+
+GasConfig {
+ return GasConfig{
+ HasCost: 1000,
+ DeleteCost: 1000,
+ ReadCostFlat: 1000,
+ ReadCostPerByte: 3,
+ WriteCostFlat: 2000,
+ WriteCostPerByte: 30,
+ IterNextCostFlat: 30,
+}
+}
+
+/ TransientGasConfig returns a default gas config for TransientStores.
+func TransientGasConfig()
+
+GasConfig {
+ return GasConfig{
+ HasCost: 100,
+ DeleteCost: 100,
+ ReadCostFlat: 100,
+ ReadCostPerByte: 0,
+ WriteCostFlat: 200,
+ WriteCostPerByte: 3,
+ IterNextCostFlat: 3,
+}
+}
+```
+
+where:
+
+- `GasConsumed()` returns the amount of gas that was consumed by the gas meter instance.
+- `GasConsumedToLimit()` returns the amount of gas that was consumed by the gas meter instance, or the limit if it is reached.
+- `GasRemaining()` returns the gas left in the GasMeter.
+- `Limit()` returns the limit of the gas meter instance, or `math.MaxUint64` if the gas meter is infinite (as `infiniteGasMeter.Limit()` above shows).
+- `ConsumeGas(amount Gas, descriptor string)` consumes the amount of `gas` provided. If the `gas` overflows, it panics with the `descriptor` message. If the gas meter is not infinite, it panics if `gas` consumed goes above the limit.
+- `RefundGas()` deducts the given amount from the gas consumed. This functionality enables refunding gas to the transaction or block gas pools so that EVM-compatible chains can fully support the go-ethereum StateDB interface.
+- `IsPastLimit()` returns `true` if the amount of gas consumed by the gas meter instance is strictly above the limit, `false` otherwise.
+- `IsOutOfGas()` returns `true` if the amount of gas consumed by the gas meter instance is above or equal to the limit, `false` otherwise.
+
+The gas meter is generally held in [`ctx`](/docs/sdk/next/documentation/application-framework/context), and consuming gas is done with the following pattern:
+
+```go
+ctx.GasMeter().ConsumeGas(amount, "description")
+```
+
+By default, the Cosmos SDK makes use of two different gas meters, the [main gas meter](#main-gas-meter) and the [block gas meter](#block-gas-meter).
+
+### Main Gas Meter
+
+`ctx.GasMeter()` is the main gas meter of the application. The main gas meter is initialized in `FinalizeBlock` via `setFinalizeBlockState`, and then tracks gas consumption during execution sequences that lead to state-transitions, i.e. those originally triggered by [`FinalizeBlock`](/docs/sdk/next/documentation/application-framework/baseapp#finalizeblock). At the beginning of each transaction execution, the main gas meter **must be set to 0** in the [`AnteHandler`](#antehandler), so that it can track gas consumption per-transaction.
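The consume/limit semantics described above can be exercised with a stripped-down standalone meter. This is a simplified re-implementation for illustration, not the SDK's `basicGasMeter` itself; overflow and refund logic are omitted:

```go
package main

import "fmt"

// Gas mirrors the SDK alias for uint64.
type Gas = uint64

// basicMeter is a simplified stand-in for the SDK's basicGasMeter shown
// above, kept only to demonstrate consume/limit behavior.
type basicMeter struct {
	limit    Gas
	consumed Gas
}

func NewMeter(limit Gas) *basicMeter {
	return &basicMeter{limit: limit}
}

func (g *basicMeter) GasConsumed() Gas { return g.consumed }

func (g *basicMeter) GasRemaining() Gas {
	if g.consumed > g.limit {
		return 0
	}
	return g.limit - g.consumed
}

// ConsumeGas panics with the descriptor once consumption exceeds the
// limit, matching the real meter's out-of-gas behavior.
func (g *basicMeter) ConsumeGas(amount Gas, descriptor string) {
	g.consumed += amount
	if g.consumed > g.limit {
		panic(fmt.Sprintf("out of gas: %s", descriptor))
	}
}

func main() {
	m := NewMeter(3000)
	m.ConsumeGas(1000, "ReadFlat")  // flat cost of a store read in the default KVGasConfig
	m.ConsumeGas(2000, "WriteFlat") // flat cost of a store write
	fmt.Println(m.GasConsumed(), m.GasRemaining()) // 3000 0
}
```

Consuming one more unit of gas here would panic, which is exactly how the SDK aborts execution of a transaction that runs past its limit.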
+
+Gas consumption can be done manually, generally by the module developer in the [`BeginBlocker`, `EndBlocker`](/docs/sdk/next/documentation/module-system/beginblock-endblock) or [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services), but most of the time it is done automatically whenever there is a read or write to the store. This automatic gas consumption logic is implemented in a special store called [`GasKv`](/docs/sdk/next/documentation/state-storage/store#gaskv-store).
+
+### Block Gas Meter
+
+`ctx.BlockGasMeter()` is the gas meter used to track gas consumption per block and make sure it does not go above a certain limit.
+
+During the genesis phase, gas consumption is unlimited to accommodate initialization transactions.
+
+```go
+app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter()))
+```
+
+Following the genesis block, the block gas meter is set to a finite value by the SDK. This transition is facilitated by the consensus engine (e.g., CometBFT) calling the `RequestFinalizeBlock` function, which in turn triggers the SDK's `FinalizeBlock` method. Within `FinalizeBlock`, `internalFinalizeBlock` is executed, performing necessary state updates and function executions. The block gas meter, initialized with a finite limit for each block, is then incorporated into the context for transaction execution, ensuring gas consumption does not exceed the block's gas limit and is reset at the end of each block.
+
+Modules within the Cosmos SDK can consume block gas at any point during their execution by utilizing the `ctx`. This gas consumption primarily occurs during state read/write operations and transaction processing. The block gas meter, accessible via `ctx.BlockGasMeter()`, monitors the total gas usage within a block, enforcing the gas limit to prevent excessive computation. This ensures that gas limits are adhered to on a per-block basis, starting from the first block post-genesis.
+ +```go +gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) +``` + +The above shows the general mechanism for setting the block gas meter with a finite limit based on the block's consensus parameters. + +## AnteHandler + +The `AnteHandler` is run for every transaction during `CheckTx` and `FinalizeBlock`, before a Protobuf `Msg` service method for each `sdk.Msg` in the transaction. + +The anteHandler is not implemented in the core Cosmos SDK but in a module. That said, most applications today use the default implementation defined in the [`auth` module](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth). Here is what the `anteHandler` is intended to do in a normal Cosmos SDK application: + +- Verify that the transactions are of the correct type. Transaction types are defined in the module that implements the `anteHandler`, and they follow the transaction interface: + +```go expandable +package types + +import ( + + "encoding/json" + fmt "fmt" + strings "strings" + "time" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) + +type ( + / Msg defines the interface a transaction message needed to fulfill. + Msg = proto.Message + + / LegacyMsg defines the interface a transaction message needed to fulfill up through + / v0.47. + LegacyMsg interface { + Msg + + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. 
	Fee interface {
		GetGas() uint64
		GetAmount() Coins
	}

	// Signature defines an interface for an application-defined
	// concrete transaction type to be able to set and return transaction signatures.
	Signature interface {
		GetPubKey() cryptotypes.PubKey
		GetSignature() []byte
	}

	// HasMsgs defines an interface a transaction must fulfill.
	HasMsgs interface {
		// GetMsgs gets all the transaction's messages.
		GetMsgs() []Msg
	}

	// Tx defines an interface a transaction must fulfill.
	Tx interface {
		HasMsgs

		// GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's.
		GetMsgsV2() ([]protov2.Message, error)
	}

	// FeeTx defines the interface to be implemented by Tx to use the FeeDecorators
	FeeTx interface {
		Tx
		GetGas() uint64
		GetFee() Coins
		FeePayer() []byte
		FeeGranter() []byte
	}

	// TxWithMemo must have GetMemo() method to use ValidateMemoDecorator
	TxWithMemo interface {
		Tx
		GetMemo() string
	}

	// TxWithTimeoutTimeStamp extends the Tx interface by allowing a transaction to
	// set a timeout timestamp.
	TxWithTimeoutTimeStamp interface {
		Tx

		GetTimeoutTimeStamp() time.Time
	}

	// TxWithTimeoutHeight extends the Tx interface by allowing a transaction to
	// set a height timeout.
	TxWithTimeoutHeight interface {
		Tx

		GetTimeoutHeight() uint64
	}

	// TxWithUnordered extends the Tx interface by allowing a transaction to set
	// the unordered field, which implicitly relies on TxWithTimeoutTimeStamp.
	TxWithUnordered interface {
		TxWithTimeoutTimeStamp

		GetUnordered() bool
	}

	// HasValidateBasic defines a type that has a ValidateBasic method.
	// ValidateBasic is deprecated and now facultative.
	// Prefer validating messages directly in the msg server.
	HasValidateBasic interface {
		// ValidateBasic does a simple validation check that
		// doesn't require access to any other information.
		ValidateBasic() error
	}
)

// TxDecoder unmarshals transaction bytes
type TxDecoder func(txBytes []byte) (Tx, error)

// TxEncoder marshals transaction to bytes
type TxEncoder func(tx Tx) ([]byte, error)

// MsgTypeURL returns the TypeURL of a `sdk.Msg`.
var MsgTypeURL = codectypes.MsgTypeURL

// GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL
func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) {
	var msg Msg
	bz, err := json.Marshal(struct {
		Type string `json:"@type"`
	}{
		Type: input,
	})
	if err != nil {
		return nil, err
	}
	if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil {
		return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL: %w", input, err)
	}

	return msg, nil
}

// GetModuleNameFromTypeURL assumes that the module name is the second element of the msg type URL,
// e.g. "cosmos.bank.v1beta1.MsgSend" => "bank".
// It returns an empty string if the input is not a valid type URL.
func GetModuleNameFromTypeURL(input string) string {
	moduleName := strings.Split(input, ".")
	if len(moduleName) > 1 {
		return moduleName[1]
	}

	return ""
}
```

This enables developers to use whatever transaction type best fits their application. In the default `auth` module, the default transaction type is `Tx`:

```protobuf
// Tx is the standard type used for broadcasting transactions.
message Tx {
  // body is the processable content of the transaction
  TxBody body = 1;

  // auth_info is the authorization related content of the transaction,
  // specifically signers, signer modes and fee
  AuthInfo auth_info = 2;

  // signatures is a list of signatures that matches the length and order of
  // AuthInfo's signer_infos to allow connecting signature meta information like
  // public key and signing mode by position.
  repeated bytes signatures = 3;
}
```

- Verify signatures for each [`message`](/docs/sdk/next/documentation/module-system/messages-and-queries#messages) contained in the transaction. Each `message` should be signed by one or multiple sender(s), and these signatures must be verified in the `anteHandler`.
- During `CheckTx`, verify that the gas prices provided with the transaction are greater than the local `min-gas-prices` (as a reminder, gas-prices can be deduced from the following equation: `fees = gas * gas-prices`). `min-gas-prices` is a parameter local to each full-node and used during `CheckTx` to discard transactions that do not provide a minimum amount of fees. This ensures that the mempool cannot be spammed with garbage transactions.
- Verify that the sender of the transaction has enough funds to cover the `fees`. When the end-user generates a transaction, they must indicate 2 of the 3 following parameters (the third one being implicit): `fees`, `gas` and `gas-prices`. This signals how much they are willing to pay for nodes to execute their transaction. The provided `gas` value is stored in a parameter called `GasWanted` for later use.
- Set `newCtx.GasMeter` to 0, with a limit of `GasWanted`. **This step is crucial**, as it not only makes sure the transaction cannot consume infinite gas, but also that `ctx.GasMeter` is reset in-between each transaction (`ctx` is set to `newCtx` after `anteHandler` is run, and the `anteHandler` is run each time a transaction executes).

As explained above, the `anteHandler` returns the maximum limit of `gas` the transaction can consume during execution, called `GasWanted`. The actual amount consumed in the end is denominated `GasUsed`, and we must therefore have `GasUsed <= GasWanted`. Both `GasWanted` and `GasUsed` are relayed to the underlying consensus engine when [`FinalizeBlock`](/docs/sdk/next/documentation/application-framework/baseapp#finalizeblock) returns.
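The fee check described above can be sketched in a few lines. This is a minimal, self-contained illustration of the `fees = gas * gas-prices` relation and the `min-gas-prices` threshold applied during `CheckTx`; it is not the SDK's actual fee decorator, and the function names are hypothetical:

```go
package main

import "fmt"

// requiredFees computes the minimum fee a transaction must offer so that
// fees >= gasWanted * minGasPrice, mirroring fees = gas * gas-prices.
func requiredFees(gasWanted uint64, minGasPrice float64) float64 {
	return float64(gasWanted) * minGasPrice
}

// passesCheckTx reports whether the offered fee meets the node-local
// min-gas-prices threshold for the declared GasWanted.
func passesCheckTx(feeOffered float64, gasWanted uint64, minGasPrice float64) bool {
	return feeOffered >= requiredFees(gasWanted, minGasPrice)
}

func main() {
	const gasWanted = 200000  // gas limit declared by the user
	const minGasPrice = 0.025 // node-local min-gas-prices, e.g. 0.025uatom

	fmt.Println(requiredFees(gasWanted, minGasPrice))        // 5000
	fmt.Println(passesCheckTx(5000, gasWanted, minGasPrice)) // true
	fmt.Println(passesCheckTx(4000, gasWanted, minGasPrice)) // false
}
```

Because `min-gas-prices` is local to each node, a transaction rejected by one full-node's `CheckTx` may still be accepted by another node running with a lower threshold.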
diff --git a/docs/sdk/next/documentation/protocol-development/ics-030-signed-messages.mdx b/docs/sdk/next/documentation/protocol-development/ics-030-signed-messages.mdx
new file mode 100644
index 00000000..4787eb37
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/ics-030-signed-messages.mdx
@@ -0,0 +1,194 @@
---
title: 'ICS 030: Cosmos Signed Messages'
---

> TODO: Replace with valid ICS number and possibly move to new location.

* [Changelog](#changelog)
* [Abstract](#abstract)
* [Preliminary](#preliminary)
* [Specification](#specification)
* [Future Adaptations](#future-adaptations)
* [API](#api)
* [References](#references)

## Status

Proposed.

## Changelog

## Abstract

Having the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits, such as saving on computational costs and reducing transaction throughput and overhead. Within the context of the Cosmos, some of the major applications of signing such data include, but are not limited to, providing a cryptographically secure and verifiable means of proving validator identity and possibly associating it with some other framework or organization. In addition, it enables signing Cosmos messages with a Ledger or similar HSM device.

A standardized protocol for hashing, signing, and verifying messages that can be implemented by the Cosmos SDK and other third-party organizations is needed. Such a standardized protocol subscribes to the following:

* Contains a specification of human-readable and machine-verifiable typed structured data
* Contains a framework for deterministic and injective encoding of structured data
* Utilizes cryptographically secure hashing and signing algorithms
* A framework for supporting extensions and domain separation
* Is invulnerable to chosen ciphertext attacks
* Has protection against potentially signing transactions a user did not intend to

This specification is only concerned with the rationale and the standardized implementation of Cosmos signed messages. It does **not** concern itself with the concept of replay attacks, as that will be left up to the higher-level application implementation. If you view signed messages as a means of authorizing some action or data, then such an application would have to either treat this as idempotent or have mechanisms in place to reject known signed messages.

## Preliminary

The Cosmos message signing protocol will be parameterized with a cryptographically secure hashing algorithm `SHA-256` and a signing algorithm `S` that contains the operations `sign` and `verify`, which provide a digital signature over a set of bytes and verification of a signature, respectively.

Note, our goal here is not to provide context and reasoning about why these particular algorithms were chosen, apart from the fact that they are the de facto algorithms used in CometBFT and the Cosmos SDK, and that they satisfy our needs for such cryptographic algorithms, such as having resistance to collision and second pre-image attacks, as well as being [deterministic](https://en.wikipedia.org/wiki/Hash_function#Determinism) and [uniform](https://en.wikipedia.org/wiki/Hash_function#Uniformity).

## Specification

CometBFT has a well-established protocol for signing messages using a canonical JSON representation, as defined [here](https://github.com/cometbft/cometbft/blob/master/types/canonical.go).
An example of such a canonical JSON structure is CometBFT's vote structure:

```go
type CanonicalJSONVote struct {
	ChainID   string               `json:"@chain_id"`
	Type      string               `json:"@type"`
	BlockID   CanonicalJSONBlockID `json:"block_id"`
	Height    int64                `json:"height"`
	Round     int                  `json:"round"`
	Timestamp string               `json:"timestamp"`
	VoteType  byte                 `json:"type"`
}
```

With such canonical JSON structures, the specification requires that they include the meta fields `@chain_id` and `@type`. These meta fields are reserved and must be included. They are both of type `string`. In addition, fields must be ordered in lexicographically ascending order.

For the purposes of signing Cosmos messages, the `@chain_id` field must correspond to the Cosmos chain identifier. The user-agent should **refuse** signing if the `@chain_id` field does not match the currently active chain! The `@type` field must equal the constant `"message"`. The `@type` field corresponds to the type of structure the user will be signing in an application. For now, a user is only allowed to sign bytes of valid ASCII text ([see here](https://github.com/cometbft/cometbft/blob/v0.37.0/libs/strings/string.go#L35-L64)). However, this will change and evolve to support additional application-specific structures that are human-readable and machine-verifiable ([see Future Adaptations](#future-adaptations)).

Thus, we can have a canonical JSON structure for signing Cosmos messages using the [JSON schema](http://json-schema.org/) specification as such:

```json expandable
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "$id": "cosmos/signing/typeData/schema",
  "title": "The Cosmos signed message typed data schema.",
  "type": "object",
  "properties": {
    "@chain_id": {
      "type": "string",
      "description": "The corresponding Cosmos chain identifier.",
      "minLength": 1
    },
    "@type": {
      "type": "string",
      "description": "The message type. It must be 'message'.",
      "enum": [
        "message"
      ]
    },
    "text": {
      "type": "string",
      "description": "The valid ASCII text to sign.",
      "pattern": "^[\\x20-\\x7E]+$",
      "minLength": 1
    }
  },
  "required": [
    "@chain_id",
    "@type",
    "text"
  ]
}
```

e.g.

```json
{
  "@chain_id": "1",
  "@type": "message",
  "text": "Hello, you can identify me as XYZ on keybase."
}
```

## Future Adaptations

As applications can vary greatly in domain, it will be vital to support both domain separation and human-readable and machine-verifiable structures.

Domain separation will allow application developers to prevent collisions of otherwise identical structures. It should be designed to be unique per application use and should be used directly in the signature encoding itself.

Human-readable and machine-verifiable structures will allow end users to sign more complex structures, apart from just string messages, and still be able to know exactly what they are signing (as opposed to signing a bunch of arbitrary bytes).

Thus, in the future, the Cosmos signing message specification will be expected to expand upon its canonical JSON structure to include such functionality.

## API

Application developers and designers should formalize a standard set of APIs that adhere to the following specification:

***

### **cosmosSignBytes**

Params:

* `data`: the Cosmos signed message canonical JSON structure
* `address`: the Bech32 Cosmos account address to sign data with

Returns:

* `signature`: the Cosmos signature derived using signing algorithm `S`

***

### Examples

Using `secp256k1` as the DSA, `S`:

```javascript
data = {
  "@chain_id": "1",
  "@type": "message",
  "text": "I hereby claim I am ABC on Keybase!"
}

cosmosSignBytes(data, "cosmos1pvsch6cddahhrn5e8ekw0us50dpnugwnlfngt3")
> "0x7fc4a495473045022100dec81a9820df0102381cdbf7e8b0f1e2cb64c58e0ecda1324543742e0388e41a02200df37905a6505c1b56a404e23b7473d2c0bc5bcda96771d2dda59df6ed2b98f8"
```

## References

diff --git a/docs/sdk/next/documentation/protocol-development/ics-overview.mdx b/docs/sdk/next/documentation/protocol-development/ics-overview.mdx
new file mode 100644
index 00000000..e58df302
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/ics-overview.mdx
@@ -0,0 +1,6 @@
---
title: Cosmos ICS
description: ICS030 - Signed Messages
---

* [ICS030 - Signed Messages](/docs/sdk/next/documentation/protocol-development/ics-030-signed-messages)

diff --git a/docs/sdk/next/documentation/protocol-development/protobuf-annotations.mdx b/docs/sdk/next/documentation/protocol-development/protobuf-annotations.mdx
new file mode 100644
index 00000000..c5176cdc
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/protobuf-annotations.mdx
@@ -0,0 +1,132 @@
---
title: ProtocolBuffer Annotations
description: >-
  This document explains the various protobuf annotations that have been added
  to make working with protobuf easier for Cosmos SDK application developers.
---

This document explains the various protobuf annotations that have been added to make working with protobuf easier for Cosmos SDK application developers.

## Signer

`signer` specifies which field should be used to determine the signer of a message for the Cosmos SDK. Clients can also use this field to infer the expected signer of a message.

Read more about the signer field [here](/docs/sdk/next/documentation/module-system/messages-and-queries).
```protobuf
option (cosmos.msg.v1.signer) = "from_address";
```

## Scalar

The scalar type defines a way for clients to understand how to construct protobuf messages according to what is expected by the module and the SDK.

```proto
(cosmos_proto.scalar) = "cosmos.AddressString"
```

Example of an account address string scalar:

```proto
string from_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
```

Example of a validator address string scalar:

```proto
string validator_address = 1 [(cosmos_proto.scalar) = "cosmos.ValidatorAddressString"];
```

Example of a Dec scalar:

```proto
(cosmos_proto.scalar) = "cosmos.Dec",
```

Example of an Int scalar:

```proto
string yes_count = 1 [(cosmos_proto.scalar) = "cosmos.Int"];
```

There are a few options for what can be provided as a scalar: `cosmos.AddressString`, `cosmos.ValidatorAddressString`, `cosmos.ConsensusAddressString`, `cosmos.Int`, `cosmos.Dec`.

## Implements\_Interface

`implements_interface` is used to provide information to client tooling like [telescope](https://github.com/cosmology-tech/telescope) on how to encode and decode protobuf messages.

```proto
option (cosmos_proto.implements_interface) = "cosmos.auth.v1beta1.AccountI";
```

## Method, Field, Message Added In

`method_added_in`, `field_added_in` and `message_added_in` are annotations that denote to clients that a method, field, or message became supported in a later version. This is useful when new methods or fields are added in later versions, so that the client knows what it can call.
The annotation should be worded as follows:

```proto
option (cosmos_proto.method_added_in) = "cosmos-sdk v0.50.1";
option (cosmos_proto.method_added_in) = "x/epochs v1.0.0";
option (cosmos_proto.method_added_in) = "simapp v24.0.0";
```

## Amino

The Amino codec was removed in `v0.50+`; this means there is no longer a need to register a `legacyAminoCodec`. To replace the Amino codec, Amino protobuf annotations are used to provide information to the Amino codec on how to encode and decode protobuf messages.

Amino annotations are only used for backwards compatibility with Amino. New modules are not required to use Amino annotations.

The annotations below are used to provide information to the Amino codec on how to encode and decode protobuf messages in a backwards-compatible manner.

### Name

Name specifies the Amino name that is shown to the user so they can see which message they are signing.

```proto
option (amino.name) = "cosmos-sdk/BaseAccount";
```

```proto
option (amino.name) = "cosmos-sdk/MsgSend";
```

### Field\_Name

Field name specifies the Amino name that is shown to the user so they can see which field they are signing.

```proto
uint64 height = 1 [(amino.field_name) = "public_key"];
```

```proto
[(gogoproto.jsontag) = "creation_height", (amino.field_name) = "creation_height", (amino.dont_omitempty) = true];
```

### Dont\_OmitEmpty

`dont_omitempty` specifies that the field should not be omitted when encoding to Amino.

```proto
repeated cosmos.base.v1beta1.Coin amount = 3 [(amino.dont_omitempty) = true];
```

```proto
(amino.dont_omitempty) = true,
```

### Encoding

Encoding instructs the Amino JSON marshaler how to encode certain fields that may differ from the standard encoding behaviour. The most common example of this is how `repeated cosmos.base.v1beta1.Coin` is encoded when using the Amino JSON encoding format.
The `legacy_coins` option tells the json marshaler [how to encode a null slice](https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/x/tx/signing/aminojson/json_marshal.go#L65) of `cosmos.base.v1beta1.Coin`.

```proto
(amino.encoding) = "legacy_coins",
```

diff --git a/docs/sdk/next/documentation/protocol-development/protobuf-complete-guide.mdx b/docs/sdk/next/documentation/protocol-development/protobuf-complete-guide.mdx
new file mode 100644
index 00000000..a1e2edbd
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/protobuf-complete-guide.mdx
@@ -0,0 +1,420 @@
---
title: Protocol Buffers Implementation Guide
description: Technical reference for Protocol Buffers in Cosmos SDK
---

# Protocol Buffers in Cosmos SDK

## Overview

The Cosmos SDK uses Protocol Buffers for all serialization. This document provides a technical reference for implementing and working with protobuf in SDK modules.

## Architecture

Protocol Buffers serve four primary functions in the Cosmos SDK:

1. **State Encoding** (ADR-019): Serialization of blockchain state
2. **Transaction Encoding** (ADR-020): Transaction structure and signing
3. **Query System** (ADR-021): gRPC query services
4.
**Message Services** (ADR-031): Transaction message routing + +## Setup + +### Required Tools + +```bash +# Install buf +curl -sSL https://github.com/bufbuild/buf/releases/download/v1.28.1/buf-$(uname -s)-$(uname -m) \ + -o /usr/local/bin/buf && chmod +x /usr/local/bin/buf +``` + +### Directory Structure + +``` +proto/ +├── buf.yaml +├── buf.gen.gogo.yaml +└── [org]/ + └── [module]/ + └── v1/ + ├── types.proto + ├── tx.proto + ├── query.proto + ├── genesis.proto + └── events.proto +``` + +### Configuration Files + +`buf.yaml`: +```yaml +version: v1 +name: buf.build/[org]/[project] +deps: + - buf.build/cosmos/cosmos-sdk + - buf.build/cosmos/cosmos-proto + - buf.build/cosmos/gogo-proto + - buf.build/googleapis/googleapis +breaking: + use: + - FILE +lint: + use: + - STANDARD + - COMMENTS + - FILE_LOWER_SNAKE_CASE + except: + - UNARY_RPC + - COMMENT_FIELD +``` + +`buf.gen.gogo.yaml`: +```yaml +version: v1 +plugins: + - name: gocosmos + out: .. + opt: plugins=grpc,Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any + - name: grpc-gateway + out: .. 
+ opt: logtostderr=true,allow_colon_final_segments=true +``` + +## Message Definitions + +### Basic Message Structure + +```protobuf +syntax = "proto3"; +package example.module.v1; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; + +option go_package = "github.com/example/module/types"; + +message Params { + option (gogoproto.goproto_stringer) = false; + + bool enabled = 1; + uint32 max_items = 2; + string authority = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +### Transaction Messages + +```protobuf +syntax = "proto3"; +package example.module.v1; + +import "cosmos/msg/v1/msg.proto"; +import "cosmos_proto/cosmos.proto"; +import "gogoproto/gogo.proto"; + +option go_package = "github.com/example/module/types"; + +service Msg { + option (cosmos.msg.v1.service) = true; + + rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse); + rpc Execute(MsgExecute) returns (MsgExecuteResponse); +} + +message MsgUpdateParams { + option (cosmos.msg.v1.signer) = "authority"; + + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + Params params = 2 [(gogoproto.nullable) = false]; +} + +message MsgUpdateParamsResponse {} + +message MsgExecute { + option (cosmos.msg.v1.signer) = "sender"; + + string sender = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string data = 2; +} + +message MsgExecuteResponse { + string result = 1; +} +``` + +### Query Services + +```protobuf +syntax = "proto3"; +package example.module.v1; + +import "google/api/annotations.proto"; +import "cosmos/base/query/v1beta1/pagination.proto"; +import "gogoproto/gogo.proto"; + +option go_package = "github.com/example/module/types"; + +service Query { + rpc Params(QueryParamsRequest) returns (QueryParamsResponse) { + option (google.api.http).get = "/example/module/v1/params"; + } + + rpc State(QueryStateRequest) returns (QueryStateResponse) { + option (google.api.http).get = "/example/module/v1/state/{id}"; + } + + rpc 
States(QueryStatesRequest) returns (QueryStatesResponse) { + option (google.api.http).get = "/example/module/v1/states"; + } +} + +message QueryParamsRequest {} + +message QueryParamsResponse { + Params params = 1 [(gogoproto.nullable) = false]; +} + +message QueryStateRequest { + string id = 1; +} + +message QueryStateResponse { + State state = 1; +} + +message QueryStatesRequest { + cosmos.base.query.v1beta1.PageRequest pagination = 1; +} + +message QueryStatesResponse { + repeated State states = 1 [(gogoproto.nullable) = false]; + cosmos.base.query.v1beta1.PageResponse pagination = 2; +} +``` + +## Annotations + +### Cosmos Proto Annotations + +| Annotation | Usage | Description | +|------------|-------|-------------| +| `(cosmos_proto.scalar)` | Field-level | Type mapping for addresses and numbers | +| `(cosmos_proto.accepts_interface)` | Field-level | Any field interface constraint | +| `(cosmos_proto.implements_interface)` | Message-level | Interface implementation marker | +| `(cosmos.msg.v1.signer)` | Message-level | Transaction signer field | +| `(cosmos.msg.v1.service)` | Service-level | Msg service marker | + +### Scalar Types + +```protobuf +message Example { + string account = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator = 2 [(cosmos_proto.scalar) = "cosmos.ValidatorAddressString"]; + string consensus = 3 [(cosmos_proto.scalar) = "cosmos.ConsensusAddressString"]; + string amount = 4 [(cosmos_proto.scalar) = "cosmos.Int"]; + string rate = 5 [(cosmos_proto.scalar) = "cosmos.Dec"]; +} +``` + +### Gogoproto Annotations + +| Annotation | Effect | +|------------|--------| +| `(gogoproto.nullable) = false` | Non-pointer field in Go | +| `(gogoproto.goproto_getters) = false` | No getter methods | +| `(gogoproto.equal) = true` | Generate Equal() method | +| `(gogoproto.goproto_stringer) = false` | Custom String() method | +| `(gogoproto.castrepeated)` | Custom slice type | +| `(gogoproto.casttype)` | Custom Go type | + +## Interface 
Registry + +### Registration + +```go +package types + +import ( + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +func RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + registry.RegisterImplementations((*sdk.Msg)(nil), + &MsgUpdateParams{}, + &MsgExecute{}, + ) + + msgservice.RegisterMsgServiceDesc(registry, &_Msg_serviceDesc) +} + +var ( + Amino = codec.NewLegacyAmino() + ModuleCdc = codec.NewProtoCodec(cdctypes.NewInterfaceRegistry()) +) +``` + +### Working with Any Types + +```go +// Pack interface to Any +func PackInterface(i proto.Message) (*codectypes.Any, error) { + return codectypes.NewAnyWithValue(i) +} + +// Unpack Any to interface +func UnpackInterface(any *codectypes.Any, iface interface{}) error { + return ModuleCdc.UnpackAny(any, iface) +} + +// Implement UnpackInterfaces for types containing Any fields +func (m *Message) UnpackInterfaces(unpacker codectypes.AnyUnpacker) error { + var iface InterfaceType + return unpacker.UnpackAny(m.AnyField, &iface) +} +``` + +## Code Generation + +```bash +# Generate code +cd proto +buf generate + +# Move generated files +cp -r github.com/[org]/[project]/* ../ +rm -rf github.com +``` + +## Deterministic Encoding + +The SDK requires deterministic encoding for: +- Transaction signing +- State commitment +- Consensus + +Implementation: +```go +func GetSignBytes(msg proto.Message) ([]byte, error) { + return msg.Marshal() // protobuf marshaling is deterministic +} +``` + +## Field Management + +### Field Numbers + +```protobuf +message Schema { + // Frequently accessed fields: 1-15 (single byte encoding) + string id = 1; + string name = 2; + + // Reserved for deleted fields + reserved 3, 4; + reserved "old_field", "removed_field"; + + // Deprecated field + string legacy_field = 5 [deprecated = true]; + + // Less frequent fields: 16+ + string metadata = 16; +} +``` + 
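The single-byte claim for field numbers 1-15 can be verified directly: a protobuf field tag is the varint encoding of `(field_number << 3) | wire_type`, so any key below 128 fits in one byte. The helper below is a standalone sketch of that arithmetic, not part of any protobuf library:

```go
package main

import "fmt"

// tagSize returns how many bytes the protobuf tag (field number plus wire
// type) occupies on the wire. The tag is the varint encoding of
// (fieldNumber << 3) | wireType, so field numbers 1-15 fit in one byte.
func tagSize(fieldNumber int) int {
	key := uint64(fieldNumber)<<3 | 2 // wire type 2 (length-delimited), e.g. string
	n := 0
	for {
		n++
		key >>= 7 // varint stores 7 payload bits per byte
		if key == 0 {
			return n
		}
	}
}

func main() {
	fmt.Println(tagSize(1))  // 1
	fmt.Println(tagSize(15)) // 1: last single-byte field number
	fmt.Println(tagSize(16)) // 2: first two-byte field number
}
```

This is why the schema above keeps frequently accessed fields in the 1-15 range and starts less frequent fields at 16.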
+### Versioning + +```protobuf +// Initial version +package example.module.v1beta1; + +// Stable version +package example.module.v1; + +// Breaking changes require new version +package example.module.v2; +``` + +### Version Annotations + +```protobuf +service Msg { + rpc OriginalMethod(Request) returns (Response); + + rpc NewMethod(NewRequest) returns (NewResponse) { + option (cosmos_proto.method_added_in) = "v0.47"; + } +} + +message Enhanced { + string existing = 1; + string new_field = 2 [(cosmos_proto.field_added_in) = "v0.47"]; +} +``` + +## Testing + +### Message Testing + +```go +func TestMessage(t *testing.T) { + msg := &types.MsgExecute{ + Sender: "cosmos1...", + Data: "test", + } + + // Test marshaling + bz, err := msg.Marshal() + require.NoError(t, err) + + // Test unmarshaling + var decoded types.MsgExecute + err = decoded.Unmarshal(bz) + require.NoError(t, err) + require.Equal(t, msg, &decoded) + + // Test determinism + bz2, err := msg.Marshal() + require.NoError(t, err) + require.Equal(t, bz, bz2) +} +``` + +### Interface Registration Testing + +```go +func TestRegistration(t *testing.T) { + registry := cdctypes.NewInterfaceRegistry() + types.RegisterInterfaces(registry) + + msg := &MsgExecute{Sender: "cosmos1...", Data: "test"} + + any, err := codectypes.NewAnyWithValue(msg) + require.NoError(t, err) + + var unpacked sdk.Msg + err = registry.UnpackAny(any, &unpacked) + require.NoError(t, err) + require.Equal(t, msg, unpacked) +} +``` + +## Troubleshooting + +| Error | Cause | Solution | +|-------|-------|----------| +| `Type URL not found` | Missing registration | Add to `RegisterInterfaces` | +| `Cannot unmarshal Any` | Interface mismatch | Verify interface implementation | +| `Breaking changes detected` | Field modification | Use deprecation or new version | +| `Import not found` | Missing dependency | Run `buf mod update` | +| `Field number collision` | Reused number | Use reserved statement | + +## References + +- [Protocol Buffers Language 
Guide](https://protobuf.dev/programming-guides/proto3/)
- [Buf Documentation](https://buf.build/docs/)
- [Gogoproto Extensions](https://github.com/cosmos/gogoproto)
- [Cosmos Proto](https://github.com/cosmos/cosmos-proto)
- [SDK Proto Definitions](https://buf.build/cosmos/cosmos-sdk)
\ No newline at end of file
diff --git a/docs/sdk/next/documentation/protocol-development/protobuf.mdx b/docs/sdk/next/documentation/protocol-development/protobuf.mdx
new file mode 100644
index 00000000..1f2bbd7d
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/protobuf.mdx
@@ -0,0 +1,1052 @@
---
title: Protocol Buffers
description: >-
  The Cosmos SDK uses protocol buffers extensively; this document provides a
  guide on how they are used in the cosmos-sdk.
---

The Cosmos SDK uses protocol buffers extensively; this document provides a guide on how they are used in the cosmos-sdk.

To generate the proto files, the Cosmos SDK uses a Docker image, which is available for anyone to use as well. The latest version is `ghcr.io/cosmos/proto-builder:0.17.0`.

Below is an example of the Cosmos SDK's commands for generating, linting, and formatting protobuf files, which can be reused in any application's Makefile.

```make expandable
#!/usr/bin/make -f

PACKAGES_NOSIMULATION=$(shell go list ./... | grep -v '/simulation')

PACKAGES_SIMTEST=$(shell go list ./...
| grep '/simulation') + +export VERSION := $(shell echo $(shell git describe --tags --always --match "v*") | sed 's/^v/') + +export CMTVERSION := $(shell go list -m github.com/cometbft/cometbft | sed 's:.* ::') + +export COMMIT := $(shell git log -1 --format='%H') + +LEDGER_ENABLED ?= true +BINDIR ?= $(GOPATH)/bin +BUILDDIR ?= $(CURDIR)/build +SIMAPP = ./simapp +MOCKS_DIR = $(CURDIR)/tests/mocks +HTTPS_GIT := https://github.com/cosmos/cosmos-sdk.git +DOCKER := $(shell which docker) + +PROJECT_NAME = $(shell git remote get-url origin | xargs basename -s .git) + +# process build tags +build_tags = netgo + ifeq ($(LEDGER_ENABLED),true) + ifeq ($(OS),Windows_NT) + +GCCEXE = $(shell where gcc.exe 2> NUL) + ifeq ($(GCCEXE),) + $(error gcc.exe not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + else + UNAME_S = $(shell uname -s) + ifeq ($(UNAME_S),OpenBSD) + $(warning OpenBSD detected, disabling ledger support (https://github.com/cosmos/cosmos-sdk/issues/1988)) + +else + GCC = $(shell command -v gcc 2> /dev/null) + ifeq ($(GCC),) + $(error gcc not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + endif + endif +endif + ifeq (secp,$(findstring secp,$(COSMOS_BUILD_OPTIONS))) + +build_tags += libsecp256k1_sdk +endif + ifeq (legacy,$(findstring legacy,$(COSMOS_BUILD_OPTIONS))) + +build_tags += app_v1 +endif + whitespace := +whitespace += $(whitespace) + comma := , +build_tags_comma_sep := $(subst $(whitespace),$(comma),$(build_tags)) + +# process linker flags + +ldflags = -X github.com/cosmos/cosmos-sdk/version.Name=sim \ + -X github.com/cosmos/cosmos-sdk/version.AppName=simd \ + -X github.com/cosmos/cosmos-sdk/version.Version=$(VERSION) \ + -X github.com/cosmos/cosmos-sdk/version.Commit=$(COMMIT) \ + -X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)" \ + -X github.com/cometbft/cometbft/version.TMCoreSemVer=$(CMTVERSION) 
+ +# DB backend selection + ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += gcc +endif + ifeq (badgerdb,$(findstring badgerdb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += badgerdb +endif +# handle rocksdb + ifeq (rocksdb,$(findstring rocksdb,$(COSMOS_BUILD_OPTIONS))) + +CGO_ENABLED=1 + build_tags += rocksdb +endif +# handle boltdb + ifeq (boltdb,$(findstring boltdb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += boltdb +endif + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +ldflags += -w -s +endif +ldflags += $(LDFLAGS) + ldflags := $(strip $(ldflags)) + +build_tags += $(BUILD_TAGS) + +build_tags := $(strip $(build_tags)) + +BUILD_FLAGS := -tags "$(build_tags)" -ldflags '$(ldflags)' +# check for nostrip option + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -trimpath +endif + +# Check for debug option + ifeq (debug,$(findstring debug,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -gcflags "all=-N -l" +endif + +all: tools build lint test vulncheck + +# The below include contains the tools and runsim targets. +include contrib/devtools/Makefile + +############################################################################### +### Build ### +############################################################################### + +BUILD_TARGETS := build install + +build: BUILD_ARGS=-o $(BUILDDIR)/ + +build-linux-amd64: + GOOS=linux GOARCH=amd64 LEDGER_ENABLED=false $(MAKE) + +build + +build-linux-arm64: + GOOS=linux GOARCH=arm64 LEDGER_ENABLED=false $(MAKE) + +build + +$(BUILD_TARGETS): go.sum $(BUILDDIR)/ + cd ${ + CURRENT_DIR +}/simapp && go $@ -mod=readonly $(BUILD_FLAGS) $(BUILD_ARGS) ./... 
+ +$(BUILDDIR)/: + mkdir -p $(BUILDDIR)/ + +cosmovisor: + $(MAKE) -C tools/cosmovisor cosmovisor + +rosetta: + $(MAKE) -C tools/rosetta rosetta + +confix: + $(MAKE) -C tools/confix confix + +hubl: + $(MAKE) -C tools/hubl hubl + +.PHONY: build build-linux-amd64 build-linux-arm64 cosmovisor rosetta confix + +mocks: $(MOCKS_DIR) + @go install github.com/golang/mock/mockgen@v1.6.0 + sh ./scripts/mockgen.sh +.PHONY: mocks + +vulncheck: $(BUILDDIR)/ + GOBIN=$(BUILDDIR) + +go install golang.org/x/vuln/cmd/govulncheck@latest + $(BUILDDIR)/govulncheck ./... + +$(MOCKS_DIR): + mkdir -p $(MOCKS_DIR) + +distclean: clean tools-clean +clean: + rm -rf \ + $(BUILDDIR)/ \ + artifacts/ \ + tmp-swagger-gen/ \ + .testnets + +.PHONY: distclean clean + +############################################################################### +### Tools & Dependencies ### +############################################################################### + +go.sum: go.mod + echo "Ensure dependencies have not been modified ..." 
>&2 + go mod verify + go mod tidy + +############################################################################### +### Documentation ### +############################################################################### + +godocs: + @echo "--> Wait a few seconds and visit http://localhost:6060/pkg/github.com/cosmos/cosmos-sdk/types" + go install golang.org/x/tools/cmd/godoc@latest + godoc -http=:6060 + +build-docs: + @cd docs && DOCS_DOMAIN=docs.cosmos.network sh ./build-all.sh + +.PHONY: build-docs + +############################################################################### +### Tests & Simulation ### +############################################################################### + +# make init-simapp initializes a single local node network +# it is useful for testing and development +# Usage: make install && make init-simapp && simd start +# Warning: make init-simapp will remove all data in simapp home directory +init-simapp: + ./scripts/init-simapp.sh + +test: test-unit +test-e2e: + $(MAKE) -C tests test-e2e +test-e2e-cov: + $(MAKE) -C tests test-e2e-cov +test-integration: + $(MAKE) -C tests test-integration +test-integration-cov: + $(MAKE) -C tests test-integration-cov +test-all: test-unit test-e2e test-integration test-ledger-mock test-race + +TEST_PACKAGES=./... +TEST_TARGETS := test-unit test-unit-amino test-unit-proto test-ledger-mock test-race test-ledger test-race + +# Test runs-specific rules. To add a new test target, just add +# a new rule, customise ARGS or TEST_PACKAGES ad libitum, and +# append the new rule to the TEST_TARGETS list. 
+test-unit: test_tags += cgo ledger test_ledger_mock norace
+test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace
+test-ledger: test_tags += cgo ledger norace
+test-ledger-mock: test_tags += ledger test_ledger_mock norace
+test-race: test_tags += cgo ledger test_ledger_mock
+test-race: ARGS=-race
+test-race: TEST_PACKAGES=$(PACKAGES_NOSIMULATION)
+$(TEST_TARGETS): run-tests
+
+# check-* compiles and collects tests without running them
+# note: go test -c doesn't support multiple packages yet (https://github.com/golang/go/issues/15513)
+
+CHECK_TEST_TARGETS := check-test-unit check-test-unit-amino
+check-test-unit: test_tags += cgo ledger test_ledger_mock norace
+check-test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace
+$(CHECK_TEST_TARGETS): EXTRA_ARGS=-run=none
+$(CHECK_TEST_TARGETS): run-tests
+
+ARGS += -tags "$(test_tags)"
+SUB_MODULES = $(shell find . -type f -name 'go.mod' -print0 | xargs -0 -n1 dirname | sort)
+
+CURRENT_DIR = $(shell pwd)
+
+run-tests:
+ifneq (,$(shell which tparse 2>/dev/null))
+ @echo "Starting unit tests"; \
+ finalec=0; \
+ for module in $(SUB_MODULES); do \
+ cd ${CURRENT_DIR}/$module; \
+ echo "Running unit tests for $(grep '^module' go.mod)"; \
+ go test -mod=readonly -json $(ARGS) $(TEST_PACKAGES) ./... | tparse; \
+ ec=$?; \
+ if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \
+ done; \
+ exit $finalec
+else
+ @echo "Starting unit tests"; \
+ finalec=0; \
+ for module in $(SUB_MODULES); do \
+ cd ${CURRENT_DIR}/$module; \
+ echo "Running unit tests for $(grep '^module' go.mod)"; \
+ go test -mod=readonly $(ARGS) $(TEST_PACKAGES) ./... ; \
+ ec=$?; \
+ if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \
+ done; \
+ exit $finalec
+endif
+
+.PHONY: run-tests test test-all $(TEST_TARGETS)
+
+test-sim-nondeterminism:
+ @echo "Running non-determinism test..."
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \
+ -NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h
+
+# Requires an exported plugin. See store/streaming/README.md for documentation.
+#
+# example:
+# export COSMOS_SDK_ABCI_V1=
+# make test-sim-nondeterminism-streaming
+#
+# Using the built-in examples:
+# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file
+# make test-sim-nondeterminism-streaming
+test-sim-nondeterminism-streaming:
+ @echo "Running non-determinism-streaming test..."
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \
+ -NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h -EnableStreaming=true
+
+test-sim-custom-genesis-fast:
+ @echo "Running custom genesis simulation..."
+ @echo "By default, ${HOME}/.gaiad/config/genesis.json will be used."
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run TestFullAppSimulation -Genesis=${HOME}/.gaiad/config/genesis.json \
+ -Enabled=true -NumBlocks=100 -BlockSize=200 -Commit=true -Seed=99 -Period=5 -v -timeout 24h
+
+test-sim-import-export: runsim
+ @echo "Running application import/export simulation. This may take several minutes..."
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppImportExport
+
+test-sim-after-import: runsim
+ @echo "Running application simulation-after-import. This may take several minutes..."
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppSimulationAfterImport
+
+test-sim-custom-genesis-multi-seed: runsim
+ @echo "Running multi-seed custom genesis simulation..."
+ @echo "By default, ${HOME}/.gaiad/config/genesis.json will be used."
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Genesis=${HOME}/.gaiad/config/genesis.json -SimAppPkg=. -ExitOnFail 400 5 TestFullAppSimulation
+
+test-sim-multi-seed-long: runsim
+ @echo "Running long multi-seed application simulation. This may take awhile!"
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 500 50 TestFullAppSimulation
+
+test-sim-multi-seed-short: runsim
+ @echo "Running short multi-seed application simulation. This may take awhile!"
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 10 TestFullAppSimulation
+
+test-sim-benchmark-invariants:
+ @echo "Running simulation invariant benchmarks..."
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -benchmem -bench=BenchmarkInvariants -run=^$ \
+ -Enabled=true -NumBlocks=1000 -BlockSize=200 \
+ -Period=1 -Commit=true -Seed=57 -v -timeout 24h
+
+.PHONY: \
+test-sim-nondeterminism \
+test-sim-nondeterminism-streaming \
+test-sim-custom-genesis-fast \
+test-sim-import-export \
+test-sim-after-import \
+test-sim-custom-genesis-multi-seed \
+test-sim-multi-seed-short \
+test-sim-multi-seed-long \
+test-sim-benchmark-invariants
+
+SIM_NUM_BLOCKS ?= 500
+SIM_BLOCK_SIZE ?= 200
+SIM_COMMIT ?= true
+
+test-sim-benchmark:
+ @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \
+ -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h
+
+# Requires an exported plugin. See store/streaming/README.md for documentation.
+#
+# example:
+# export COSMOS_SDK_ABCI_V1=
+# make test-sim-benchmark-streaming
+#
+# Using the built-in examples:
+# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file
+# make test-sim-benchmark-streaming
+test-sim-benchmark-streaming:
+ @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \
+ -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -EnableStreaming=true
+
+test-sim-profile:
+ @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -benchmem -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \
+ -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out
+
+# Requires an exported plugin. See store/streaming/README.md for documentation.
+#
+# example:
+# export COSMOS_SDK_ABCI_V1=
+# make test-sim-profile-streaming
+#
+# Using the built-in examples:
+# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file
+# make test-sim-profile-streaming
+test-sim-profile-streaming:
+ @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -benchmem -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \
+ -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out -EnableStreaming=true
+
+.PHONY: test-sim-profile test-sim-benchmark
+
+test-rosetta:
+ docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile .
+ docker-compose -f contrib/rosetta/docker-compose.yaml up --abort-on-container-exit --exit-code-from test_rosetta --build
+.PHONY: test-rosetta
+
+benchmark:
+ @go test -mod=readonly -bench=. $(PACKAGES_NOSIMULATION)
+.PHONY: benchmark
+
+###############################################################################
+### Linting ###
+###############################################################################
+
+golangci_lint_cmd=golangci-lint
+golangci_version=v1.51.2
+
+lint:
+ @echo "--> Running linter"
+ @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version)
+ @./scripts/go-lint-all.bash --timeout=15m
+
+lint-fix:
+ @echo "--> Running linter"
+ @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version)
+ @./scripts/go-lint-all.bash --fix
+
+.PHONY: lint lint-fix
+
+###############################################################################
+### Protobuf ###
+###############################################################################
+
+protoVer=0.13.2
+protoImageName=ghcr.io/cosmos/proto-builder:$(protoVer)
+
+protoImage=$(DOCKER) run --rm -v $(CURDIR):/workspace --workdir /workspace $(protoImageName)
+
+proto-all: proto-format proto-lint proto-gen
+
+proto-gen:
+ @echo "Generating Protobuf files"
+ @$(protoImage) sh ./scripts/protocgen.sh
+
+proto-swagger-gen:
+ @echo "Generating Protobuf Swagger"
+ @$(protoImage) sh ./scripts/protoc-swagger-gen.sh
+
+proto-format:
+ @$(protoImage) find ./ -name "*.proto" -exec clang-format -i {} \;
+
+proto-lint:
+ @$(protoImage) buf lint --error-format=json
+
+proto-check-breaking:
+ @$(protoImage) buf breaking --against $(HTTPS_GIT)#branch=main
+
+CMT_URL = https://raw.githubusercontent.com/cometbft/cometbft/v0.38.0-alpha.2/proto/tendermint
+
+CMT_CRYPTO_TYPES = proto/tendermint/crypto
+CMT_ABCI_TYPES = proto/tendermint/abci
+CMT_TYPES = proto/tendermint/types
+CMT_VERSION = proto/tendermint/version
+CMT_LIBS = proto/tendermint/libs/bits
+CMT_P2P = proto/tendermint/p2p
+
+proto-update-deps:
+ @echo "Updating Protobuf dependencies"
+
+ @mkdir -p $(CMT_ABCI_TYPES)
+ @curl -sSL $(CMT_URL)/abci/types.proto > $(CMT_ABCI_TYPES)/types.proto
+
+ @mkdir -p $(CMT_VERSION)
+ @curl -sSL $(CMT_URL)/version/types.proto > $(CMT_VERSION)/types.proto
+
+ @mkdir -p $(CMT_TYPES)
+ @curl -sSL $(CMT_URL)/types/types.proto > $(CMT_TYPES)/types.proto
+ @curl -sSL $(CMT_URL)/types/evidence.proto > $(CMT_TYPES)/evidence.proto
+ @curl -sSL $(CMT_URL)/types/params.proto > $(CMT_TYPES)/params.proto
+ @curl -sSL $(CMT_URL)/types/validator.proto > $(CMT_TYPES)/validator.proto
+ @curl -sSL $(CMT_URL)/types/block.proto > $(CMT_TYPES)/block.proto
+
+ @mkdir -p $(CMT_CRYPTO_TYPES)
+ @curl -sSL $(CMT_URL)/crypto/proof.proto > $(CMT_CRYPTO_TYPES)/proof.proto
+ @curl -sSL $(CMT_URL)/crypto/keys.proto > $(CMT_CRYPTO_TYPES)/keys.proto
+
+ @mkdir -p $(CMT_LIBS)
+ @curl -sSL $(CMT_URL)/libs/bits/types.proto > $(CMT_LIBS)/types.proto
+
+ @mkdir -p $(CMT_P2P)
+ @curl -sSL $(CMT_URL)/p2p/types.proto > $(CMT_P2P)/types.proto
+
+ $(DOCKER) run --rm -v $(CURDIR)/proto:/workspace --workdir /workspace $(protoImageName) buf mod update
+
+.PHONY: proto-all proto-gen proto-swagger-gen proto-format proto-lint proto-check-breaking proto-update-deps
+
+###############################################################################
+### Localnet ###
+###############################################################################
+
+localnet-build-env:
+ $(MAKE) -C contrib/images simd-env
+localnet-build-dlv:
+ $(MAKE) -C contrib/images simd-dlv
+
+localnet-build-nodes:
+ $(DOCKER) run --rm -v $(CURDIR)/.testnets:/data cosmossdk/simd \
+ testnet init-files --v 4 -o /data --starting-ip-address 192.168.10.2 --keyring-backend=test
+ docker-compose up -d
+
+localnet-stop:
+ docker-compose down
+
+# localnet-start will run a 4-node testnet locally. The nodes are
+# based off the docker images in: ./contrib/images/simd-env
+localnet-start: localnet-stop localnet-build-env localnet-build-nodes
+
+# localnet-debug will run a 4-node testnet locally in debug mode
+# you can read more about the debug mode here: ./contrib/images/simd-dlv/README.md
+localnet-debug: localnet-stop localnet-build-dlv localnet-build-nodes
+
+.PHONY: localnet-start localnet-stop localnet-debug localnet-build-env localnet-build-dlv localnet-build-nodes
+
+###############################################################################
+### rosetta ###
+###############################################################################
+# builds rosetta test data dir
+rosetta-data:
+ -docker container rm data_dir_build
+ docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile .
+ docker run --name data_dir_build -t rosetta-ci:latest sh /rosetta/data.sh
+ docker cp data_dir_build:/tmp/data.tar.gz "$(CURDIR)/contrib/rosetta/rosetta-ci/data.tar.gz"
+ docker container rm data_dir_build
+.PHONY: rosetta-data
+```
+
+The script used to generate the protobuf files can be found in the `scripts/` directory.
+
+```shell
+#!/usr/bin/env bash
+
+# How to run manually:
+# docker build --pull --rm -f "contrib/devtools/Dockerfile" -t cosmossdk-proto:latest "contrib/devtools"
+# docker run --rm -v $(pwd):/workspace --workdir /workspace cosmossdk-proto sh ./scripts/protocgen.sh
+
+set -e
+
+echo "Generating gogo proto code"
+cd proto
+proto_dirs=$(find ./cosmos ./amino -path -prune -o -name '*.proto' -print0 | xargs -0 -n1 dirname | sort | uniq)
+for dir in $proto_dirs; do
+  for file in $(find "${dir}" -maxdepth 1 -name '*.proto'); do
+    # this regex checks if a proto file has its go_package set to cosmossdk.io/api/...
+    # gogo proto files SHOULD ONLY be generated if this is false
+    # we don't want gogo proto to run for proto files which are natively built for google.golang.org/protobuf
+    if grep -q "option go_package" "$file" && grep -H -o -c 'option go_package.*cosmossdk.io/api' "$file" | grep -q ':0$'; then
+      buf generate --template buf.gen.gogo.yaml $file
+    fi
+  done
+done
+
+cd ..
+
+# generate codec/testdata proto code
+(cd testutil/testdata; buf generate)
+
+# generate baseapp test messages
+(cd baseapp/testutil; buf generate)
+
+# move proto files to the right places
+cp -r github.com/cosmos/cosmos-sdk/* ./
+cp -r cosmossdk.io/** ./
+rm -rf github.com cosmossdk.io
+
+go mod tidy
+
+./scripts/protocgen-pulsar.sh
+```
+
+## Buf
+
+[Buf](https://buf.build) is a protobuf tool that abstracts away the need to use the complicated `protoc` toolchain and ensures that you are using protobuf in accordance with the majority of the ecosystem. Within the cosmos-sdk repository there are a few files that have a `buf` prefix. Let's start with the top level and then dive into the various directories.
+
+### Workspace
+
+At the root level directory a workspace is defined using [buf workspaces](https://docs.buf.build/configuration/v1/buf-work-yaml). This helps if there are one or more protobuf-containing directories in your project.
+
+Cosmos SDK example:
+
+```yaml
+version: v1
+directories:
+ - proto
+```
+
+### Proto Directory
+
+Next is the `proto/` directory, where all of our protobuf files live. In here there are many different buf files defined, each serving a different purpose.
+
+```bash
+├── README.md
+├── buf.gen.gogo.yaml
+├── buf.gen.pulsar.yaml
+├── buf.gen.swagger.yaml
+├── buf.lock
+├── buf.md
+├── buf.yaml
+├── cosmos
+└── tendermint
+```
+
+The above diagram shows all the files and directories within the Cosmos SDK `proto/` directory.
+
+#### `buf.gen.gogo.yaml`
+
+`buf.gen.gogo.yaml` defines how the protobuf files should be generated for use within the module. This file uses [gogoproto](https://github.com/gogo/protobuf), a separate generator from the google go-proto generator that makes working with various objects more ergonomic and has more performant encode and decode steps.
+
+```yaml
+version: v1
+plugins:
+ - name: gocosmos
+ out: ..
+ opt: plugins=grpc,Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any
+ - name: grpc-gateway
+ out: ..
+ opt: logtostderr=true,allow_colon_final_segments=true
+```
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/generate/overview).
+
+#### `buf.gen.pulsar.yaml`
+
+`buf.gen.pulsar.yaml` defines how protobuf files should be generated using the [new golang apiv2 of protobuf](https://go.dev/blog/protobuf-apiv2). This generator is used instead of the google go-proto generator because it has some extra helpers for Cosmos SDK applications and has more performant encode and decode than the google go-proto generator. You can follow the development of this generator [here](https://github.com/cosmos/cosmos-proto).
+
+```yaml expandable
+version: v1
+managed:
+ enabled: true
+ go_package_prefix:
+ default: cosmossdk.io/api
+ except:
+ - buf.build/googleapis/googleapis
+ - buf.build/cosmos/gogo-proto
+ - buf.build/cosmos/cosmos-proto
+ override:
+plugins:
+ - name: go-pulsar
+ out: ../api
+ opt: paths=source_relative
+ - name: go-grpc
+ out: ../api
+ opt: paths=source_relative
+```
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/generate/overview).
+
+#### `buf.gen.swagger.yaml`
+
+`buf.gen.swagger.yaml` generates the swagger documentation for the queries and messages of the chain. It only defines the REST API endpoints that were defined in the query and msg servers. You can find an example of this [here](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/bank/v1beta1/query.proto#L19).
+
+```yaml
+version: v1
+plugins:
+ - name: swagger
+ out: ../tmp-swagger-gen
+ opt: logtostderr=true,fqn_for_swagger_name=true,simple_operation_ids=true
+```
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/generate/overview).
+
+#### `buf.lock`
+
+This is an autogenerated file based on the dependencies required by the `.gen` files.
There is no need to copy the current one. If you depend on cosmos-sdk proto definitions, a new entry for the Cosmos SDK will need to be provided. The dependency you will need to use is `buf.build/cosmos/cosmos-sdk`.
+
+```yaml expandable
+# Generated by buf. DO NOT EDIT.
+version: v1
+deps:
+ - remote: buf.build
+ owner: cosmos
+ repository: cosmos-proto
+ commit: 04467658e59e44bbb22fe568206e1f70
+ digest: shake256:73a640bd60e0c523b0f8237ff34eab67c45a38b64bbbde1d80224819d272dbf316ac183526bd245f994af6608b025f5130483d0133c5edd385531326b5990466
+ - remote: buf.build
+ owner: cosmos
+ repository: gogo-proto
+ commit: 88ef6483f90f478fb938c37dde52ece3
+ digest: shake256:89c45df2aa11e0cff97b0d695436713db3d993d76792e9f8dc1ae90e6ab9a9bec55503d48ceedd6b86069ab07d3041b32001b2bfe0227fa725dd515ff381e5ba
+ - remote: buf.build
+ owner: googleapis
+ repository: googleapis
+ commit: 751cbe31638d43a9bfb6162cd2352e67
+ digest: shake256:87f55470d9d124e2d1dedfe0231221f4ed7efbc55bc5268917c678e2d9b9c41573a7f9a557f6d8539044524d9fc5ca8fbb7db05eb81379d168285d76b57eb8a4
+ - remote: buf.build
+ owner: protocolbuffers
+ repository: wellknowntypes
+ commit: 3ddd61d1f53d485abd3d3a2b47a62b8e
+ digest: shake256:9e6799d56700d0470c3723a2fd027e8b4a41a07085a0c90c58e05f6c0038fac9b7a0170acd7692707a849983b1b8189aa33e7b73f91d68157f7136823115546b
+```
+
+#### `buf.yaml`
+
+`buf.yaml` defines the [name of your package](https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.yaml#L3), which [breakage checker](https://docs.buf.build/breaking/overview) to use, and how to [lint your protobuf files](https://buf.build/docs/tutorials/getting-started-with-buf-cli#lint-your-api).
+
+```yaml expandable
+# This module represents buf.build/cosmos/cosmos-sdk
+version: v1
+name: buf.build/cosmos/cosmos-sdk
+deps:
+ - buf.build/cosmos/cosmos-proto
+ - buf.build/cosmos/gogo-proto
+ - buf.build/googleapis/googleapis
+ - buf.build/protocolbuffers/wellknowntypes
+breaking:
+ use:
+ - FILE
+ ignore:
+ - testpb
+lint:
+ use:
+ - STANDARD
+ - COMMENTS
+ - FILE_LOWER_SNAKE_CASE
+ except:
+ - UNARY_RPC
+ - COMMENT_FIELD
+ - SERVICE_SUFFIX
+ - PACKAGE_VERSION_SUFFIX
+ - RPC_REQUEST_STANDARD_NAME
+ ignore:
+ - tendermint
+```
+
+We use a variety of linters for the Cosmos SDK protobuf files, and the repo also checks these in CI.
+
+A reference to the GitHub Actions workflow can be found [here](https://github.com/cosmos/cosmos-sdk/blob/main/.github/workflows/proto.yml#L1-L32)
+
+```yaml expandable
+name: Protobuf
+# Protobuf runs buf (https://buf.build/) lint and check-breakage
+# This workflow is only run when a .proto file has been changed
+on:
+ pull_request:
+ paths:
+ - "proto/**"
+
+permissions:
+ contents: read
+
+jobs:
+ lint:
+ runs-on: depot-ubuntu-22.04-4
+ timeout-minutes: 5
+ steps:
+ - uses: actions/checkout@v5
+ - uses: bufbuild/buf-setup-action@v1.50.0
+ - uses: bufbuild/buf-lint-action@v1
+ with:
+ input: "proto"
+
+ break-check:
+ runs-on: depot-ubuntu-22.04-4
+ steps:
+ - uses: actions/checkout@v5
+ - uses: bufbuild/buf-setup-action@v1.50.0
+ - uses: bufbuild/buf-breaking-action@v1
+ with:
+ input: "proto"
+ against: "https://github.com/${{ github.repository }}.git#branch=${{ github.event.pull_request.base.ref }},ref=HEAD~1,subdir=proto"
+```
diff --git a/docs/sdk/next/documentation/protocol-development/spec-overview.mdx b/docs/sdk/next/documentation/protocol-development/spec-overview.mdx
new file mode 100644
index 00000000..602258fc
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/spec-overview.mdx
@@ -0,0 +1,24 @@
+---
+title: Specifications
+description: >-
+ This directory contains specifications for the modules of the Cosmos
SDK as
+ well as Interchain Standards (ICS) and other specifications.
+---
+
+Cosmos SDK applications hold their state in a Merkle store. Updates to
+the store may be made during transactions and at the beginning and end of every
+block.
+
+## Cosmos SDK specifications
+
+- [Store](/docs/sdk/next/documentation/state-storage/store) - The core Merkle store that holds the state.
+- [Bech32](/docs/sdk/next/documentation/protocol-development/bech32) - Address format for Cosmos SDK applications.
+
+## Modules specifications
+
+Go to the [module directory](https://docs.cosmos.network/main/modules)
+
+## CometBFT
+
+For details on the underlying blockchain and p2p protocols, see
+the [CometBFT specification](https://github.com/cometbft/cometbft/tree/main/spec).
diff --git a/docs/sdk/next/documentation/protocol-development/transactions.mdx b/docs/sdk/next/documentation/protocol-development/transactions.mdx
new file mode 100644
index 00000000..36df2018
--- /dev/null
+++ b/docs/sdk/next/documentation/protocol-development/transactions.mdx
@@ -0,0 +1,1392 @@
+---
+title: Transactions
+---
+
+## Synopsis
+
+`Transactions` are objects created by end-users to trigger state changes in the application.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/next/documentation/application-framework/app-anatomy)
+
+
+
+## Transactions
+
+Transactions consist of metadata held in [contexts](/docs/sdk/next/documentation/application-framework/context) and [`sdk.Msg`s](/docs/sdk/next/documentation/module-system/messages-and-queries) that trigger state changes within a module through the module's Protobuf [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services).
+
+When users want to interact with an application and make state changes (e.g. sending coins), they create transactions.
Each `sdk.Msg` in a transaction must be signed using the private key associated with the appropriate account(s) before the transaction is broadcast to the network. A transaction must then be included in a block, validated, and approved by the network through the consensus process. To read more about the lifecycle of a transaction, click [here](/docs/sdk/next/documentation/protocol-development/tx-lifecycle).
+
+## Type Definition
+
+Transaction objects are Cosmos SDK types that implement the `Tx` interface:
+
+```go expandable
+package types
+
+import (
+	"encoding/json"
+	fmt "fmt"
+	strings "strings"
+	"time"
+
+	"github.com/cosmos/gogoproto/proto"
+	protov2 "google.golang.org/protobuf/proto"
+
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+)
+
+type (
+	// Msg defines the interface a transaction message needed to fulfill.
+	Msg = proto.Message
+
+	// LegacyMsg defines the interface a transaction message needed to fulfill up through
+	// v0.47.
+	LegacyMsg interface {
+		Msg
+
+		// GetSigners returns the addrs of signers that must sign.
+		// CONTRACT: All signatures must be present to be valid.
+		// CONTRACT: Returns addrs in some deterministic order.
+		GetSigners() []AccAddress
+	}
+
+	// Fee defines an interface for an application-defined concrete
+	// transaction type to be able to set and return the transaction fee.
+	Fee interface {
+		GetGas() uint64
+		GetAmount() Coins
+	}
+
+	// Signature defines an interface for an application-defined
+	// concrete transaction type to be able to set and return transaction signatures.
+	Signature interface {
+		GetPubKey() cryptotypes.PubKey
+		GetSignature() []byte
+	}
+
+	// HasMsgs defines an interface a transaction must fulfill.
+	HasMsgs interface {
+		// GetMsgs gets all the transaction's messages.
+		GetMsgs() []Msg
+	}
+
+	// Tx defines an interface a transaction must fulfill.
+	Tx interface {
+		HasMsgs
+
+		// GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's.
+		GetMsgsV2() ([]protov2.Message, error)
+	}
+
+	// FeeTx defines the interface to be implemented by Tx to use the FeeDecorators
+	FeeTx interface {
+		Tx
+		GetGas() uint64
+		GetFee() Coins
+		FeePayer() []byte
+		FeeGranter() []byte
+	}
+
+	// TxWithMemo must have GetMemo() method to use ValidateMemoDecorator
+	TxWithMemo interface {
+		Tx
+		GetMemo() string
+	}
+
+	// TxWithTimeoutTimeStamp extends the Tx interface by allowing a transaction to
+	// set a timeout timestamp.
+	TxWithTimeoutTimeStamp interface {
+		Tx
+
+		// GetTimeoutTimeStamp gets the timeout timestamp for the tx.
+		// IMPORTANT: when the uint value is needed here, you MUST use UnixNano.
+		GetTimeoutTimeStamp() time.Time
+	}
+
+	// TxWithTimeoutHeight extends the Tx interface by allowing a transaction to
+	// set a height timeout.
+	TxWithTimeoutHeight interface {
+		Tx
+
+		GetTimeoutHeight() uint64
+	}
+
+	// TxWithUnordered extends the Tx interface by allowing a transaction to set
+	// the unordered field, which implicitly relies on TxWithTimeoutTimeStamp.
+	TxWithUnordered interface {
+		TxWithTimeoutTimeStamp
+
+		GetUnordered() bool
+	}
+
+	// HasValidateBasic defines a type that has a ValidateBasic method.
+	// ValidateBasic is deprecated and now facultative.
+	// Prefer validating messages directly in the msg server.
+	HasValidateBasic interface {
+		// ValidateBasic does a simple validation check that
+		// doesn't require access to any other information.
+		ValidateBasic() error
+	}
+)
+
+// TxDecoder unmarshals transaction bytes
+type TxDecoder func(txBytes []byte) (Tx, error)
+
+// TxEncoder marshals transaction to bytes
+type TxEncoder func(tx Tx) ([]byte, error)
+
+// MsgTypeURL returns the TypeURL of a `sdk.Msg`.
+var MsgTypeURL = codectypes.MsgTypeURL
+
+// GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL
+func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) {
+	var msg Msg
+	bz, err := json.Marshal(struct {
+		Type string `json:"@type"`
+	}{
+		Type: input,
+	})
+	if err != nil {
+		return nil, err
+	}
+	if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil {
+		return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err)
+	}
+
+	return msg, nil
+}
+
+// GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL
+// e.g. "cosmos.bank.v1beta1.MsgSend" => "bank"
+// It returns an empty string if the input is not a valid type URL
+func GetModuleNameFromTypeURL(input string) string {
+	moduleName := strings.Split(input, ".")
+	if len(moduleName) > 1 {
+		return moduleName[1]
+	}
+
+	return ""
+}
+```
+
+It contains the following methods:
+
+- **GetMsgs:** unwraps the transaction and returns a list of contained `sdk.Msg`s - one transaction may have one or multiple messages, which are defined by module developers.
+
+As a developer, you should rarely manipulate `Tx` directly, as `Tx` is an intermediate type used for transaction generation. Instead, developers should prefer the `TxBuilder` interface, which you can learn more about [below](#transaction-generation).
+
+### Signing Transactions
+
+Every message in a transaction must be signed by the addresses specified by its `GetSigners`. The Cosmos SDK currently allows signing transactions in two different ways.
+
+#### `SIGN_MODE_DIRECT` (preferred)
+
+The most used implementation of the `Tx` interface is the Protobuf `Tx` message, which is used in `SIGN_MODE_DIRECT`:
+
+```protobuf
+// Tx is the standard type used for broadcasting transactions.
+message Tx { + // body is the processable content of the transaction + TxBody body = 1; + + // auth_info is the authorization related content of the transaction, + // specifically signers, signer modes and fee + AuthInfo auth_info = 2; + + // signatures is a list of signatures that matches the length and order of + // AuthInfo's signer_infos to allow connecting signature meta information like + // public key and signing mode by position. + repeated bytes signatures = 3; +} +``` + +Because Protobuf serialization is not deterministic, the Cosmos SDK uses an additional `TxRaw` type to denote the pinned bytes over which a transaction is signed. Any user can generate a valid `body` and `auth_info` for a transaction, and serialize these two messages using Protobuf. `TxRaw` then pins the user's exact binary representation of `body` and `auth_info`, called respectively `body_bytes` and `auth_info_bytes`. The document that is signed by all signers of the transaction is `SignDoc` (deterministically serialized using [ADR-027](docs/sdk/next/documentation/legacy/adr-comprehensive)): + +```protobuf +// SignDoc is the type used for generating sign bytes for SIGN_MODE_DIRECT. +message SignDoc { + // body_bytes is protobuf serialization of a TxBody that matches the + // representation in TxRaw. + bytes body_bytes = 1; + + // auth_info_bytes is a protobuf serialization of an AuthInfo that matches the + // representation in TxRaw. + bytes auth_info_bytes = 2; + + // chain_id is the unique identifier of the chain this transaction targets. + // It prevents signed transactions from being used on another chain by an + // attacker + string chain_id = 3; + + // account_number is the account number of the account in state + uint64 account_number = 4; +} +``` + +Once signed by all signers, the `body_bytes`, `auth_info_bytes` and `signatures` are gathered into `TxRaw`, whose serialized bytes are broadcasted over the network. 
+ +#### `SIGN_MODE_LEGACY_AMINO_JSON` + +The legacy implementation of the `Tx` interface is the `StdTx` struct from `x/auth`: + +```go expandable +package legacytx + +import ( + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/codec/legacy" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ Interface implementation checks +var ( + _ codectypes.UnpackInterfacesMessage = (*StdTx)(nil) + + _ codectypes.UnpackInterfacesMessage = (*StdSignature)(nil) +) + +/ StdFee includes the amount of coins paid in fees and the maximum +/ gas to be used by the transaction. The ratio yields an effective "gasprice", +/ which must be above some miminum to be accepted into the mempool. +/ [Deprecated] +type StdFee struct { + Amount sdk.Coins `json:"amount" yaml:"amount"` + Gas uint64 `json:"gas" yaml:"gas"` + Payer string `json:"payer,omitempty" yaml:"payer"` + Granter string `json:"granter,omitempty" yaml:"granter"` +} + +/ Deprecated: NewStdFee returns a new instance of StdFee +func NewStdFee(gas uint64, amount sdk.Coins) + +StdFee { + return StdFee{ + Amount: amount, + Gas: gas, +} +} + +/ GetGas returns the fee's (wanted) + +gas. +func (fee StdFee) + +GetGas() + +uint64 { + return fee.Gas +} + +/ GetAmount returns the fee's amount. +func (fee StdFee) + +GetAmount() + +sdk.Coins { + return fee.Amount +} + +/ Bytes returns the encoded bytes of a StdFee. +func (fee StdFee) + +Bytes() []byte { + if len(fee.Amount) == 0 { + fee.Amount = sdk.NewCoins() +} + +bz, err := legacy.Cdc.MarshalJSON(fee) + if err != nil { + panic(err) +} + +return bz +} + +/ GasPrices returns the gas prices for a StdFee. +/ +/ NOTE: The gas prices returned are not the true gas prices that were +/ originally part of the submitted transaction because the fee is computed +/ as fee = ceil(gasWanted * gasPrices). 
+func (fee StdFee) + +GasPrices() + +sdk.DecCoins { + return sdk.NewDecCoinsFromCoins(fee.Amount...).QuoDec(math.LegacyNewDec(int64(fee.Gas))) +} + +/ StdTip is the tips used in a tipped transaction. +type StdTip struct { + Amount sdk.Coins `json:"amount" yaml:"amount"` + Tipper string `json:"tipper" yaml:"tipper"` +} + +/ StdTx is the legacy transaction format for wrapping a Msg with Fee and Signatures. +/ It only works with Amino, please prefer the new protobuf Tx in types/tx. +/ NOTE: the first signature is the fee payer (Signatures must not be nil). +/ Deprecated +type StdTx struct { + Msgs []sdk.Msg `json:"msg" yaml:"msg"` + Fee StdFee `json:"fee" yaml:"fee"` + Signatures []StdSignature `json:"signatures" yaml:"signatures"` + Memo string `json:"memo" yaml:"memo"` + TimeoutHeight uint64 `json:"timeout_height" yaml:"timeout_height"` +} + +/ Deprecated +func NewStdTx(msgs []sdk.Msg, fee StdFee, sigs []StdSignature, memo string) + +StdTx { + return StdTx{ + Msgs: msgs, + Fee: fee, + Signatures: sigs, + Memo: memo, +} +} + +/ GetMsgs returns the all the transaction's messages. +func (tx StdTx) + +GetMsgs() []sdk.Msg { + return tx.Msgs +} + +/ Deprecated: AsAny implements intoAny. It doesn't work for protobuf serialization, +/ so it can't be saved into protobuf configured storage. We are using it only for API +/ compatibility. +func (tx *StdTx) + +AsAny() *codectypes.Any { + return codectypes.UnsafePackAny(tx) +} + +/ GetMemo returns the memo +func (tx StdTx) + +GetMemo() + +string { + return tx.Memo +} + +/ GetTimeoutHeight returns the transaction's timeout height (if set). +func (tx StdTx) + +GetTimeoutHeight() + +uint64 { + return tx.TimeoutHeight +} + +/ GetSignatures returns the signature of signers who signed the Msg. +/ CONTRACT: Length returned is same as length of +/ pubkeys returned from MsgKeySigners, and the order +/ matches. +/ CONTRACT: If the signature is missing (ie the Msg is +/ invalid), then the corresponding signature is +/ .Empty(). 
+func (tx StdTx) + +GetSignatures() [][]byte { + sigs := make([][]byte, len(tx.Signatures)) + for i, stdSig := range tx.Signatures { + sigs[i] = stdSig.Signature +} + +return sigs +} + +/ GetSignaturesV2 implements SigVerifiableTx.GetSignaturesV2 +func (tx StdTx) + +GetSignaturesV2() ([]signing.SignatureV2, error) { + res := make([]signing.SignatureV2, len(tx.Signatures)) + for i, sig := range tx.Signatures { + var err error + res[i], err = StdSignatureToSignatureV2(legacy.Cdc, sig) + if err != nil { + return nil, errorsmod.Wrapf(err, "Unable to convert signature %v to V2", sig) +} + +} + +return res, nil +} + +/ GetPubkeys returns the pubkeys of signers if the pubkey is included in the signature +/ If pubkey is not included in the signature, then nil is in the slice instead +func (tx StdTx) + +GetPubKeys() ([]cryptotypes.PubKey, error) { + pks := make([]cryptotypes.PubKey, len(tx.Signatures)) + for i, stdSig := range tx.Signatures { + pks[i] = stdSig.GetPubKey() +} + +return pks, nil +} + +/ GetGas returns the Gas in StdFee +func (tx StdTx) + +GetGas() + +uint64 { + return tx.Fee.Gas +} + +/ GetFee returns the FeeAmount in StdFee +func (tx StdTx) + +GetFee() + +sdk.Coins { + return tx.Fee.Amount +} + +/ FeeGranter always returns nil for StdTx +func (tx StdTx) + +FeeGranter() + +sdk.AccAddress { + return nil +} + +func (tx StdTx) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + for _, m := range tx.Msgs { + err := codectypes.UnpackInterfaces(m, unpacker) + if err != nil { + return err +} + +} + + / Signatures contain PubKeys, which need to be unpacked. 
+ for _, s := range tx.Signatures { + err := s.UnpackInterfaces(unpacker) + if err != nil { + return err +} + +} + +return nil +} +``` + +The document signed by all signers is `StdSignDoc`: + +```go expandable +package legacytx + +import ( + + "encoding/json" + "fmt" + "sigs.k8s.io/yaml" + "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/legacy" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/crypto/types/multisig" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ LegacyMsg defines the old interface a message must fulfill, +/ containing Amino signing method. +/ Deprecated: Please use `Msg` instead. +type LegacyMsg interface { + sdk.Msg + + / Get the canonical byte representation of the Msg. + GetSignBytes() []byte +} + +/ StdSignDoc is replay-prevention structure. +/ It includes the result of msg.GetSignBytes(), +/ as well as the ChainID (prevent cross chain replay) +/ and the Sequence numbers for each signature (prevent +/ inchain replay and enforce tx ordering per account). +type StdSignDoc struct { + AccountNumber uint64 `json:"account_number" yaml:"account_number"` + Sequence uint64 `json:"sequence" yaml:"sequence"` + TimeoutHeight uint64 `json:"timeout_height,omitempty" yaml:"timeout_height"` + ChainID string `json:"chain_id" yaml:"chain_id"` + Memo string `json:"memo" yaml:"memo"` + Fee json.RawMessage `json:"fee" yaml:"fee"` + Msgs []json.RawMessage `json:"msgs" yaml:"msgs"` +} + +var RegressionTestingAminoCodec *codec.LegacyAmino + +/ Deprecated: please delete this code eventually. +func mustSortJSON(bz []byte) []byte { + var c any + err := json.Unmarshal(bz, &c) + if err != nil { + panic(err) +} + +js, err := json.Marshal(c) + if err != nil { + panic(err) +} + +return js +} + +/ StdSignBytes returns the bytes to sign for a transaction. 
+/ Deprecated: Please use x/tx/signing/aminojson instead. +func StdSignBytes(chainID string, accnum, sequence, timeout uint64, fee StdFee, msgs []sdk.Msg, memo string) []byte { + if RegressionTestingAminoCodec == nil { + panic(fmt.Errorf("must set RegressionTestingAminoCodec before calling StdSignBytes")) +} + msgsBytes := make([]json.RawMessage, 0, len(msgs)) + for _, msg := range msgs { + bz := RegressionTestingAminoCodec.MustMarshalJSON(msg) + +msgsBytes = append(msgsBytes, mustSortJSON(bz)) +} + +bz, err := legacy.Cdc.MarshalJSON(StdSignDoc{ + AccountNumber: accnum, + ChainID: chainID, + Fee: json.RawMessage(fee.Bytes()), + Memo: memo, + Msgs: msgsBytes, + Sequence: sequence, + TimeoutHeight: timeout, +}) + if err != nil { + panic(err) +} + +return mustSortJSON(bz) +} + +/ Deprecated: StdSignature represents a sig +type StdSignature struct { + cryptotypes.PubKey `json:"pub_key" yaml:"pub_key"` / optional + Signature []byte `json:"signature" yaml:"signature"` +} + +/ Deprecated +func NewStdSignature(pk cryptotypes.PubKey, sig []byte) + +StdSignature { + return StdSignature{ + PubKey: pk, + Signature: sig +} +} + +/ GetSignature returns the raw signature bytes. +func (ss StdSignature) + +GetSignature() []byte { + return ss.Signature +} + +/ GetPubKey returns the public key of a signature as a cryptotypes.PubKey using the +/ Amino codec. +func (ss StdSignature) + +GetPubKey() + +cryptotypes.PubKey { + return ss.PubKey +} + +/ MarshalYAML returns the YAML representation of the signature. 
+func (ss StdSignature) + +MarshalYAML() (any, error) { + pk := "" + if ss.PubKey != nil { + pk = ss.String() +} + +bz, err := yaml.Marshal(struct { + PubKey string `json:"pub_key"` + Signature string `json:"signature"` +}{ + pk, + fmt.Sprintf("%X", ss.Signature), +}) + if err != nil { + return nil, err +} + +return string(bz), nil +} + +func (ss StdSignature) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + return codectypes.UnpackInterfaces(ss.PubKey, unpacker) +} + +/ StdSignatureToSignatureV2 converts a StdSignature to a SignatureV2 +func StdSignatureToSignatureV2(cdc *codec.LegacyAmino, sig StdSignature) (signing.SignatureV2, error) { + pk := sig.GetPubKey() + +data, err := pubKeySigToSigData(cdc, pk, sig.Signature) + if err != nil { + return signing.SignatureV2{ +}, err +} + +return signing.SignatureV2{ + PubKey: pk, + Data: data, +}, nil +} + +func pubKeySigToSigData(cdc *codec.LegacyAmino, key cryptotypes.PubKey, sig []byte) (signing.SignatureData, error) { + multiPK, ok := key.(multisig.PubKey) + if !ok { + return &signing.SingleSignatureData{ + SignMode: signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON, + Signature: sig, +}, nil +} + +var multiSig multisig.AminoMultisignature + err := cdc.Unmarshal(sig, &multiSig) + if err != nil { + return nil, err +} + sigs := multiSig.Sigs + sigDatas := make([]signing.SignatureData, len(sigs)) + pubKeys := multiPK.GetPubKeys() + bitArray := multiSig.BitArray + n := multiSig.BitArray.Count() + signatures := multisig.NewMultisig(n) + sigIdx := 0 + for i := range n { + if bitArray.GetIndex(i) { + data, err := pubKeySigToSigData(cdc, pubKeys[i], multiSig.Sigs[sigIdx]) + if err != nil { + return nil, errors.Wrapf(err, "Unable to convert Signature to SigData %d", sigIdx) +} + +sigDatas[sigIdx] = data + multisig.AddSignature(signatures, data, sigIdx) + +sigIdx++ +} + +} + +return signatures, nil +} +``` + +which is encoded into bytes using Amino JSON. 
Once all signatures are gathered into `StdTx`, `StdTx` is serialized using Amino JSON, and these bytes are broadcasted over the network.
+
+#### Other Sign Modes
+
+The Cosmos SDK also provides a couple of other sign modes for particular use cases.
+
+#### `SIGN_MODE_DIRECT_AUX`
+
+`SIGN_MODE_DIRECT_AUX` is a sign mode introduced in Cosmos SDK v0.46 that targets transactions with multiple signers. Whereas `SIGN_MODE_DIRECT` expects each signer to sign over both `TxBody` and `AuthInfo` (which includes all other signers' signer infos, i.e. their account sequence, public key and mode info), `SIGN_MODE_DIRECT_AUX` allows N-1 signers to sign over only `TxBody` and _their own_ signer info. Moreover, each auxiliary signer (i.e. a signer using `SIGN_MODE_DIRECT_AUX`) doesn't need to sign over the fees:
+
+```protobuf
+// SignDocDirectAux is the type used for generating sign bytes for
+// SIGN_MODE_DIRECT_AUX.
+message SignDocDirectAux {
+  option (cosmos_proto.message_added_in) = "cosmos-sdk 0.46";
+  // body_bytes is protobuf serialization of a TxBody that matches the
+  // representation in TxRaw.
+  bytes body_bytes = 1;
+
+  // public_key is the public key of the signing account.
+  google.protobuf.Any public_key = 2;
+
+  // chain_id is the identifier of the chain this transaction targets.
+  // It prevents signed transactions from being used on another chain by an
+  // attacker.
+  string chain_id = 3;
+
+  // account_number is the account number of the account in state.
+  uint64 account_number = 4;
+
+  // sequence is the sequence number of the signing account.
+  uint64 sequence = 5;
+
+  // tips have been deprecated and should not be used
+  Tip tip = 6 [deprecated = true];
+}
+```
+
+The use case is a multi-signer transaction, where one of the signers is appointed to gather all signatures, broadcast the transaction and pay the fees, while the others only care about the transaction body. This generally allows for a better multi-signing UX.
If Alice, Bob and Charlie are part of a 3-signer transaction, then Alice and Bob can both use `SIGN_MODE_DIRECT_AUX` to sign over the `TxBody` and their own signer info (with no additional step needed to gather the other signers' infos, unlike in `SIGN_MODE_DIRECT`), without specifying a fee in their SignDoc. Charlie can then gather both signatures from Alice and Bob, and
+create the final transaction by appending a fee. Note that the fee payer of the transaction (in our case Charlie) must sign over the fees, and so must use `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`.
+
+#### `SIGN_MODE_TEXTUAL`
+
+`SIGN_MODE_TEXTUAL` is a newer sign mode, included in the v0.50 release, that delivers a better signing experience on hardware wallets. In this mode, the signer signs over a human-readable textual representation of the transaction (encoded as CBOR), which makes the displayed data much easier to read. The data is formatted as screens, and each screen is meant to be displayed in its entirety even on small devices like the Ledger Nano.
+
+There are also _expert_ screens, which are only displayed if the user has enabled that option on their hardware device. These screens contain things like the account number, account sequence and the sign data hash.
+
+Data is formatted using a set of `ValueRenderer`s, for which the SDK provides defaults for all known messages and value types. Chain developers can also implement their own `ValueRenderer` for a type or message if they'd like to display its information differently.
+
+If you wish to learn more, please refer to [ADR-050](docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+#### Custom Sign Modes
+
+There is an opportunity to add your own custom sign mode to the Cosmos SDK.
While we cannot accept the implementation of a custom sign mode into the repository, we can accept a pull request that adds it to the `SignMode` enum, located [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/signing/v1beta1/signing.proto#L17).
+
+## Transaction Process
+
+The process of an end-user sending a transaction is:
+
+- decide on the messages to put into the transaction,
+- generate the transaction using the Cosmos SDK's `TxBuilder`,
+- broadcast the transaction using one of the available interfaces.
+
+The following sections describe each of these steps, in this order.
+
+### Messages
+
+
+ Module `sdk.Msg`s are not to be confused with [ABCI
+ Messages](https://docs.cometbft.com/v0.37/spec/abci/), which define
+ interactions between the CometBFT and application layers.
+
+
+**Messages** (or `sdk.Msg`s) are module-specific objects that trigger state transitions within the scope of the module they belong to. Module developers define the messages for their module by adding methods to the Protobuf [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services), and also implement the corresponding `MsgServer`.
+
+Each `sdk.Msg` is related to exactly one Protobuf [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services) RPC, defined inside each module's `tx.proto` file. An SDK app router automatically maps every `sdk.Msg` to a corresponding RPC. Protobuf generates a `MsgServer` interface for each module `Msg` service, and the module developer needs to implement this interface.
+This design puts more responsibility on module developers, allowing application developers to reuse common functionalities without having to repeatedly implement state transition logic.
+
+To learn more about Protobuf `Msg` services and how to implement `MsgServer`, click [here](/docs/sdk/next/documentation/module-system/msg-services).
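As a concrete illustration of that mapping, the `GetModuleNameFromTypeURL` helper shown earlier can be reproduced as a standalone snippet — the module name is simply the second dot-separated segment of a Msg type URL:

```go
package main

import (
	"fmt"
	"strings"
)

// moduleNameFromTypeURL mirrors the SDK helper shown above: the module name
// is taken to be the second dot-separated element of a Msg type URL,
// e.g. "cosmos.bank.v1beta1.MsgSend" => "bank". This is how an sdk.Msg can be
// associated with the module whose Msg service handles it.
func moduleNameFromTypeURL(input string) string {
	parts := strings.Split(input, ".")
	if len(parts) > 1 {
		return parts[1]
	}
	// Not a valid type URL: return an empty string.
	return ""
}

func main() {
	fmt.Println(moduleNameFromTypeURL("cosmos.bank.v1beta1.MsgSend"))        // bank
	fmt.Println(moduleNameFromTypeURL("cosmos.staking.v1beta1.MsgDelegate")) // staking
}
```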
+
+While messages contain the information for state transition logic, a transaction's other metadata and relevant information are stored in the `TxBuilder` and `Context`.
+
+### Transaction Generation
+
+The `TxBuilder` interface contains data closely related to transaction generation, which an end-user can set to produce the desired transaction:
+
+```go expandable
+package client
+
+import (
+  "time"
+
+  txsigning "cosmossdk.io/x/tx/signing"
+
+  codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+  "github.com/cosmos/cosmos-sdk/types/tx"
+  signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing"
+  "github.com/cosmos/cosmos-sdk/x/auth/signing"
+)
+
+type (
+  // TxEncodingConfig defines an interface that contains transaction
+  // encoders and decoders
+  TxEncodingConfig interface {
+    TxEncoder() sdk.TxEncoder
+    TxDecoder() sdk.TxDecoder
+    TxJSONEncoder() sdk.TxEncoder
+    TxJSONDecoder() sdk.TxDecoder
+    MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error)
+    UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error)
+  }
+
+  // TxConfig defines an interface a client can utilize to generate an
+  // application-defined concrete transaction type. The type returned must
+  // implement TxBuilder.
+  TxConfig interface {
+    TxEncodingConfig
+
+    NewTxBuilder() TxBuilder
+    WrapTxBuilder(sdk.Tx) (TxBuilder, error)
+    SignModeHandler() *txsigning.HandlerMap
+    SigningContext() *txsigning.Context
+  }
+
+  // TxBuilder defines an interface which an application-defined concrete transaction
+  // type must implement. Namely, it must be able to set messages, generate
+  // signatures, and provide canonical bytes to sign over. The transaction must
+  // also know how to encode itself.
+  TxBuilder interface {
+    GetTx() signing.Tx
+
+    SetMsgs(msgs ...sdk.Msg) error
+    SetSignatures(signatures ...signingtypes.SignatureV2) error
+    SetMemo(memo string)
+    SetFeeAmount(amount sdk.Coins)
+    SetFeePayer(feePayer sdk.AccAddress)
+    SetGasLimit(limit uint64)
+    SetTimeoutHeight(height uint64)
+    SetTimeoutTimestamp(timestamp time.Time)
+    SetUnordered(v bool)
+    SetFeeGranter(feeGranter sdk.AccAddress)
+    AddAuxSignerData(tx.AuxSignerData) error
+  }
+
+  // ExtendedTxBuilder extends the TxBuilder interface,
+  // which is used to set extension options to be included in a transaction.
+  ExtendedTxBuilder interface {
+    SetExtensionOptions(extOpts ...*codectypes.Any)
+  }
+)
+```
+
+- `Msg`s, the array of [messages](#messages) included in the transaction.
+- `GasLimit`, the user-chosen maximum amount of gas the transaction is allowed to consume.
+- `Memo`, a note or comment to send with the transaction.
+- `FeeAmount`, the maximum amount the user is willing to pay in fees.
+- `TimeoutHeight`, the block height until which the transaction is valid.
+- `Unordered`, an option indicating this transaction may be executed in any order (requires the account sequence to be unset).
+- `TimeoutTimestamp`, the timeout timestamp (unordered nonce) of the transaction (required when `Unordered` is set).
+- `Signatures`, the array of signatures from all signers of the transaction.
+
+As there are currently two sign modes for signing transactions, there are also two implementations of `TxBuilder`:
+
+- [wrapper](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/tx/builder.go#L27-L44) for creating transactions for `SIGN_MODE_DIRECT`,
+- [StdTxBuilder](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/migrations/legacytx/stdtx_builder.go#L14-L17) for `SIGN_MODE_LEGACY_AMINO_JSON`.
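The role of these setters can be sketched with a toy, self-contained builder — all names below are illustrative stand-ins for the real SDK API, not the SDK itself:

```go
package main

import "fmt"

// txDraft is a toy stand-in for the SDK's TxBuilder: callers populate the
// fields below through setters, and only then ask for encodable bytes.
type txDraft struct {
	msgs          []string
	memo          string
	gasLimit      uint64
	feeAmount     string
	timeoutHeight uint64
}

func (b *txDraft) SetMsgs(msgs ...string)     { b.msgs = msgs }
func (b *txDraft) SetMemo(memo string)        { b.memo = memo }
func (b *txDraft) SetGasLimit(limit uint64)   { b.gasLimit = limit }
func (b *txDraft) SetFeeAmount(amount string) { b.feeAmount = amount }
func (b *txDraft) SetTimeoutHeight(h uint64)  { b.timeoutHeight = h }

func main() {
	b := &txDraft{}
	b.SetMsgs("/cosmos.bank.v1beta1.MsgSend")
	b.SetMemo("example transfer")
	b.SetGasLimit(200000)
	b.SetFeeAmount("5000stake")
	b.SetTimeoutHeight(1234567)
	fmt.Printf("%d msg(s), gas=%d, fee=%s\n", len(b.msgs), b.gasLimit, b.feeAmount)
}
```

The design point this mirrors is that a transaction is assembled incrementally through setters, then handed off as a whole for signing and encoding.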
+
+However, both implementations of `TxBuilder` should be hidden from end-users, who should instead use the overarching `TxConfig` interface:
+
+```go expandable
+package client
+
+import (
+  "time"
+
+  txsigning "cosmossdk.io/x/tx/signing"
+
+  codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+  "github.com/cosmos/cosmos-sdk/types/tx"
+  signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing"
+  "github.com/cosmos/cosmos-sdk/x/auth/signing"
+)
+
+type (
+  // TxEncodingConfig defines an interface that contains transaction
+  // encoders and decoders
+  TxEncodingConfig interface {
+    TxEncoder() sdk.TxEncoder
+    TxDecoder() sdk.TxDecoder
+    TxJSONEncoder() sdk.TxEncoder
+    TxJSONDecoder() sdk.TxDecoder
+    MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error)
+    UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error)
+  }
+
+  // TxConfig defines an interface a client can utilize to generate an
+  // application-defined concrete transaction type. The type returned must
+  // implement TxBuilder.
+  TxConfig interface {
+    TxEncodingConfig
+
+    NewTxBuilder() TxBuilder
+    WrapTxBuilder(sdk.Tx) (TxBuilder, error)
+    SignModeHandler() *txsigning.HandlerMap
+    SigningContext() *txsigning.Context
+  }
+
+  // TxBuilder defines an interface which an application-defined concrete transaction
+  // type must implement. Namely, it must be able to set messages, generate
+  // signatures, and provide canonical bytes to sign over. The transaction must
+  // also know how to encode itself.
+  TxBuilder interface {
+    GetTx() signing.Tx
+
+    SetMsgs(msgs ...sdk.Msg) error
+    SetSignatures(signatures ...signingtypes.SignatureV2) error
+    SetMemo(memo string)
+    SetFeeAmount(amount sdk.Coins)
+    SetFeePayer(feePayer sdk.AccAddress)
+    SetGasLimit(limit uint64)
+    SetTimeoutHeight(height uint64)
+    SetTimeoutTimestamp(timestamp time.Time)
+    SetUnordered(v bool)
+    SetFeeGranter(feeGranter sdk.AccAddress)
+    AddAuxSignerData(tx.AuxSignerData) error
+  }
+
+  // ExtendedTxBuilder extends the TxBuilder interface,
+  // which is used to set extension options to be included in a transaction.
+  ExtendedTxBuilder interface {
+    SetExtensionOptions(extOpts ...*codectypes.Any)
+  }
+)
+```
+
+`TxConfig` is an app-wide configuration for managing transactions. Most importantly, it holds the information about whether to sign each transaction with `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. By calling `txBuilder := txConfig.NewTxBuilder()`, a new `TxBuilder` is created with the appropriate sign mode.
+
+Once `TxBuilder` is correctly populated with the setters exposed above, `TxConfig` will also take care of correctly encoding the bytes (again, either using `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`). Here's a pseudo-code snippet of how to generate and encode a transaction, using the `TxEncoder()` method:
+
+```go
+txBuilder := txConfig.NewTxBuilder()
+
+txBuilder.SetMsgs(...) // and other setters on txBuilder
+
+bz, err := txConfig.TxEncoder()(txBuilder.GetTx())
+// bz contains the bytes to be broadcasted over the network
+```
+
+### Broadcasting the Transaction
+
+Once the transaction bytes are generated, there are currently three ways of broadcasting it.
+
+#### CLI
+
+Application developers create entry points to the application by creating a [command-line interface](/docs/sdk/next/api-reference/client-tools/cli) and a [gRPC and/or REST interface](/docs/sdk/next/api-reference/service-apis/grpc_rest), typically found in the application's `./cmd` folder.
These interfaces allow users to interact with the application through the command line.
+
+For the [command-line interface](/docs/sdk/next/documentation/module-system/module-interfaces#cli), module developers create subcommands to add as children to the application's top-level transaction command `TxCmd`. CLI commands actually bundle all the steps of transaction processing into one simple command: creating messages, generating the transaction and broadcasting it. For concrete examples, see the [Interacting with a Node](/docs/sdk/next/documentation/operations/interact-node) section. An example transaction made using the CLI looks like:
+
+```bash
+simd tx send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake
+```
+
+#### gRPC
+
+[gRPC](https://grpc.io) is the main component for the Cosmos SDK's RPC layer. Its principal usage is in the context of modules' [`Query` services](/docs/sdk/next/documentation/module-system/query-services). However, the Cosmos SDK also exposes a few other module-agnostic gRPC services, one of them being the `Tx` service:
+
+```protobuf expandable
+syntax = "proto3";
+package cosmos.tx.v1beta1;
+
+import "google/api/annotations.proto";
+import "cosmos/base/abci/v1beta1/abci.proto";
+import "cosmos/tx/v1beta1/tx.proto";
+import "cosmos/base/query/v1beta1/pagination.proto";
+import "tendermint/types/block.proto";
+import "tendermint/types/types.proto";
+import "cosmos_proto/cosmos.proto";
+
+option go_package = "github.com/cosmos/cosmos-sdk/types/tx";
+
+/ Service defines a gRPC service for interacting with transactions.
+service Service {
+ / Simulate simulates executing a transaction for estimating gas usage.
+ rpc Simulate(SimulateRequest)
+
+returns (SimulateResponse) {
+ option (google.api.http) = {
+ post: "/cosmos/tx/v1beta1/simulate"
+ body: "*"
+};
+}
+ / GetTx fetches a tx by hash.
+ rpc GetTx(GetTxRequest)
+
+returns (GetTxResponse) {
+ option (google.api.http).get = "/cosmos/tx/v1beta1/txs/{
+ hash
+}";
+}
+ / BroadcastTx broadcast transaction.
+ rpc BroadcastTx(BroadcastTxRequest) + +returns (BroadcastTxResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/txs" + body: "*" +}; +} + / GetTxsEvent fetches txs by event. + rpc GetTxsEvent(GetTxsEventRequest) + +returns (GetTxsEventResponse) { + option (google.api.http).get = "/cosmos/tx/v1beta1/txs"; +} + / GetBlockWithTxs fetches a block with decoded txs. + rpc GetBlockWithTxs(GetBlockWithTxsRequest) + +returns (GetBlockWithTxsResponse) { + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.45.2"; + option (google.api.http).get = "/cosmos/tx/v1beta1/txs/block/{ + height +}"; +} + / TxDecode decodes the transaction. + rpc TxDecode(TxDecodeRequest) + +returns (TxDecodeResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/decode" + body: "*" +}; + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; +} + / TxEncode encodes the transaction. + rpc TxEncode(TxEncodeRequest) + +returns (TxEncodeResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/encode" + body: "*" +}; + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; +} + / TxEncodeAmino encodes an Amino transaction from JSON to encoded bytes. + rpc TxEncodeAmino(TxEncodeAminoRequest) + +returns (TxEncodeAminoResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/encode/amino" + body: "*" +}; + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; +} + / TxDecodeAmino decodes an Amino transaction from encoded bytes to JSON. + rpc TxDecodeAmino(TxDecodeAminoRequest) + +returns (TxDecodeAminoResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/decode/amino" + body: "*" +}; + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; +} +} + +/ GetTxsEventRequest is the request type for the Service.TxsByEvents +/ RPC method. +message GetTxsEventRequest { + / events is the list of transaction event type. + / Deprecated post v0.47.x: use query instead, which should contain a valid + / events query. 
+ repeated string events = 1 [deprecated = true]; + + / pagination defines a pagination for the request. + / Deprecated post v0.46.x: use page and limit instead. + cosmos.base.query.v1beta1.PageRequest pagination = 2 [deprecated = true]; + + OrderBy order_by = 3; + + / page is the page number to query, starts at 1. If not provided, will + / default to first page. + uint64 page = 4; + + / limit is the total number of results to be returned in the result page. + / If left empty it will default to a value to be set by each app. + uint64 limit = 5; + + / query defines the transaction event query that is proxied to Tendermint's + / TxSearch RPC method. The query must be valid. + string query = 6 [(cosmos_proto.field_added_in) = "cosmos-sdk 0.50"]; +} + +/ OrderBy defines the sorting order +enum OrderBy { + / ORDER_BY_UNSPECIFIED specifies an unknown sorting order. OrderBy defaults + / to ASC in this case. + ORDER_BY_UNSPECIFIED = 0; + / ORDER_BY_ASC defines ascending order + ORDER_BY_ASC = 1; + / ORDER_BY_DESC defines descending order + ORDER_BY_DESC = 2; +} + +/ GetTxsEventResponse is the response type for the Service.TxsByEvents +/ RPC method. +message GetTxsEventResponse { + / txs is the list of queried transactions. + repeated cosmos.tx.v1beta1.Tx txs = 1; + / tx_responses is the list of queried TxResponses. + repeated cosmos.base.abci.v1beta1.TxResponse tx_responses = 2; + / pagination defines a pagination for the response. + / Deprecated post v0.46.x: use total instead. + cosmos.base.query.v1beta1.PageResponse pagination = 3 [deprecated = true]; + / total is total number of results available + uint64 total = 4; +} + +/ BroadcastTxRequest is the request type for the Service.BroadcastTxRequest +/ RPC method. +message BroadcastTxRequest { + / tx_bytes is the raw transaction. + bytes tx_bytes = 1; + BroadcastMode mode = 2; +} + +/ BroadcastMode specifies the broadcast mode for the TxService.Broadcast RPC +/ method. 
+enum BroadcastMode { + / zero-value for mode ordering + BROADCAST_MODE_UNSPECIFIED = 0; + / DEPRECATED: use BROADCAST_MODE_SYNC instead, + / BROADCAST_MODE_BLOCK is not supported by the SDK from v0.47.x onwards. + BROADCAST_MODE_BLOCK = 1 [deprecated = true]; + / BROADCAST_MODE_SYNC defines a tx broadcasting mode where the client waits + / for a CheckTx execution response only. + BROADCAST_MODE_SYNC = 2; + / BROADCAST_MODE_ASYNC defines a tx broadcasting mode where the client + / returns immediately. + BROADCAST_MODE_ASYNC = 3; +} + +/ BroadcastTxResponse is the response type for the +/ Service.BroadcastTx method. +message BroadcastTxResponse { + / tx_response is the queried TxResponses. + cosmos.base.abci.v1beta1.TxResponse tx_response = 1; +} + +/ SimulateRequest is the request type for the Service.Simulate +/ RPC method. +message SimulateRequest { + / tx is the transaction to simulate. + / Deprecated. Send raw tx bytes instead. + cosmos.tx.v1beta1.Tx tx = 1 [deprecated = true]; + / tx_bytes is the raw transaction. + bytes tx_bytes = 2 [(cosmos_proto.field_added_in) = "cosmos-sdk 0.43"]; +} + +/ SimulateResponse is the response type for the +/ Service.SimulateRPC method. +message SimulateResponse { + / gas_info is the information about gas used in the simulation. + cosmos.base.abci.v1beta1.GasInfo gas_info = 1; + / result is the result of the simulation. + cosmos.base.abci.v1beta1.Result result = 2; +} + +/ GetTxRequest is the request type for the Service.GetTx +/ RPC method. +message GetTxRequest { + / hash is the tx hash to query, encoded as a hex string. + string hash = 1; +} + +/ GetTxResponse is the response type for the Service.GetTx method. +message GetTxResponse { + / tx is the queried transaction. + cosmos.tx.v1beta1.Tx tx = 1; + / tx_response is the queried TxResponses. + cosmos.base.abci.v1beta1.TxResponse tx_response = 2; +} + +/ GetBlockWithTxsRequest is the request type for the Service.GetBlockWithTxs +/ RPC method. 
+message GetBlockWithTxsRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.45.2"; + / height is the height of the block to query. + int64 height = 1; + / pagination defines a pagination for the request. + cosmos.base.query.v1beta1.PageRequest pagination = 2; +} + +/ GetBlockWithTxsResponse is the response type for the Service.GetBlockWithTxs +/ method. +message GetBlockWithTxsResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.45.2"; + / txs are the transactions in the block. + repeated cosmos.tx.v1beta1.Tx txs = 1; + .tendermint.types.BlockID block_id = 2; + .tendermint.types.Block block = 3; + / pagination defines a pagination for the response. + cosmos.base.query.v1beta1.PageResponse pagination = 4; +} + +/ TxDecodeRequest is the request type for the Service.TxDecode +/ RPC method. +message TxDecodeRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + / tx_bytes is the raw transaction. + bytes tx_bytes = 1; +} + +/ TxDecodeResponse is the response type for the +/ Service.TxDecode method. +message TxDecodeResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + / tx is the decoded transaction. + cosmos.tx.v1beta1.Tx tx = 1; +} + +/ TxEncodeRequest is the request type for the Service.TxEncode +/ RPC method. +message TxEncodeRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + / tx is the transaction to encode. + cosmos.tx.v1beta1.Tx tx = 1; +} + +/ TxEncodeResponse is the response type for the +/ Service.TxEncode method. +message TxEncodeResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + / tx_bytes is the encoded transaction bytes. + bytes tx_bytes = 1; +} + +/ TxEncodeAminoRequest is the request type for the Service.TxEncodeAmino +/ RPC method. 
+message TxEncodeAminoRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + string amino_json = 1; +} + +/ TxEncodeAminoResponse is the response type for the Service.TxEncodeAmino +/ RPC method. +message TxEncodeAminoResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + bytes amino_binary = 1; +} + +/ TxDecodeAminoRequest is the request type for the Service.TxDecodeAmino +/ RPC method. +message TxDecodeAminoRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + bytes amino_binary = 1; +} + +/ TxDecodeAminoResponse is the response type for the Service.TxDecodeAmino +/ RPC method. +message TxDecodeAminoResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + string amino_json = 1; +} +``` + +The `Tx` service exposes a handful of utility functions, such as simulating a transaction or querying a transaction, and also one method to broadcast transactions. + +Examples of broadcasting and simulating a transaction are shown [here](/docs/sdk/next/documentation/operations/txs#programmatically-with-go). + +#### REST + +Each gRPC method has its corresponding REST endpoint, generated using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). Therefore, instead of using gRPC, you can also use HTTP to broadcast the same transaction, on the `POST /cosmos/tx/v1beta1/txs` endpoint. + +An example can be seen [here](/docs/sdk/next/documentation/operations/txs#using-rest) + +#### CometBFT RPC + +The three methods presented above are actually higher abstractions over the CometBFT RPC `/broadcast_tx_{async,sync,commit}` endpoints, documented [here](https://docs.cometbft.com/v0.37/core/rpc). This means that you can use the CometBFT RPC endpoints directly to broadcast the transaction, if you wish so. + +### Unordered Transactions + + + +Looking to enable unordered transactions on your chain? 
+Check out the [v0.53.0 Upgrade Guide](https://docs.cosmos.network/v0.53/build/migrations/upgrade-guide#enable-unordered-transactions-optional) + + + + + +Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value, +the transaction will be rejected. Services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions. +Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. + + + +Beginning with Cosmos SDK v0.53.0, chains may enable unordered transaction support. +Unordered transactions work by using a timestamp as the transaction's nonce value. The sequence value must NOT be set in the signature(s) of the transaction. +The timestamp must be greater than the current block time and not exceed the chain's configured max unordered timeout timestamp duration. +Senders must use a unique timestamp for each distinct transaction. The difference may be as small as a nanosecond, however. + +These unique timestamps serve as a one-shot nonce, and their lifespan in state is short-lived. +Upon transaction inclusion, an entry consisting of timeout timestamp and account address will be recorded to state. +Once the block time is passed the timeout timestamp value, the entry will be removed. This ensures that unordered nonces do not indefinitely fill up the chain's storage. diff --git a/docs/sdk/next/documentation/protocol-development/tx-lifecycle.mdx b/docs/sdk/next/documentation/protocol-development/tx-lifecycle.mdx new file mode 100644 index 00000000..42168645 --- /dev/null +++ b/docs/sdk/next/documentation/protocol-development/tx-lifecycle.mdx @@ -0,0 +1,292 @@ +--- +title: Transaction Lifecycle +--- + +## Synopsis + +This document describes the lifecycle of a transaction from creation to committed state changes. 
Transaction definition is described in a [different doc](/docs/sdk/next/documentation/protocol-development/transactions). The transaction is referred to as `Tx`. + + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/next/documentation/application-framework/app-anatomy) + + + +## Creation + +### Transaction Creation + +One of the main application interfaces is the command-line interface. The transaction `Tx` can be created by the user inputting a command in the following format from the [command-line](/docs/sdk/next/api-reference/client-tools/cli), providing the type of transaction in `[command]`, arguments in `[args]`, and configurations such as gas prices in `[flags]`: + +```bash +[appname] tx [command] [args] [flags] +``` + +This command automatically **creates** the transaction, **signs** it using the account's private key, and **broadcasts** it to the specified peer node. + +There are several required and optional flags for transaction creation. The `--from` flag specifies which [account](/docs/sdk/next/documentation/protocol-development/accounts) the transaction is originating from. For example, if the transaction is sending coins, the funds are drawn from the specified `from` address. + +#### Gas and Fees + +Additionally, there are several [flags](/docs/sdk/next/api-reference/client-tools/cli) users can use to indicate how much they are willing to pay in [fees](/docs/sdk/next/documentation/protocol-development/gas-fees): + +- `--gas` refers to how much [gas](/docs/sdk/next/documentation/protocol-development/gas-fees), which represents computational resources, `Tx` consumes. Gas is dependent on the transaction and is not precisely calculated until execution, but can be estimated by providing `auto` as the value for `--gas`. +- `--gas-adjustment` (optional) can be used to scale `gas` up in order to avoid underestimating. For example, users can specify their gas adjustment as 1.5 to use 1.5 times the estimated gas. 
+- `--gas-prices` specifies how much the user is willing to pay per unit of gas, which can be one or multiple denominations of tokens. For example, `--gas-prices=0.025uatom, 0.025upho` means the user is willing to pay 0.025uatom AND 0.025upho per unit of gas. +- `--fees` specifies how much in fees the user is willing to pay in total. +- `--timeout-height` specifies a block timeout height to prevent the tx from being committed past a certain height. + +The ultimate value of the fees paid is equal to the gas multiplied by the gas prices. In other words, `fees = ceil(gas * gasPrices)`. Thus, since fees can be calculated using gas prices and vice versa, the users specify only one of the two. + +Later, validators decide whether to include the transaction in their block by comparing the given or calculated `gas-prices` to their local `min-gas-prices`. `Tx` is rejected if its `gas-prices` is not high enough, so users are incentivized to pay more. + +#### Unordered Transactions + +With Cosmos SDK v0.53.0, users may send unordered transactions to chains that have this feature enabled. +The following flags allow a user to build an unordered transaction from the CLI. + +- `--unordered` specifies that this transaction should be unordered. (transaction sequence must be unset) +- `--timeout-duration` specifies the amount of time the unordered transaction should be valid in the mempool. The transaction's unordered nonce will be set to the time of transaction creation + timeout duration. + + + +Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value, +the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions. +Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. 
+ + + +#### CLI Example + +Users of the application `app` can enter the following command into their CLI to generate a transaction to send 1000uatom from a `senderAddress` to a `recipientAddress`. The command specifies how much gas they are willing to pay: an automatic estimate scaled up by 1.5 times, with a gas price of 0.025uatom per unit gas. + +```bash +appd tx send 1000uatom --from --gas auto --gas-adjustment 1.5 --gas-prices 0.025uatom +``` + +#### Other Transaction Creation Methods + +The command-line is an easy way to interact with an application, but `Tx` can also be created using a [gRPC or REST interface](/docs/sdk/next/api-reference/service-apis/grpc_rest) or some other entry point defined by the application developer. From the user's perspective, the interaction depends on the web interface or wallet they are using (e.g. creating `Tx` using [Lunie.io](https://lunie.io/#/) and signing it with a Ledger Nano S). + +## Addition to Mempool + +Each full-node (running CometBFT) that receives a `Tx` sends an [ABCI message](https://docs.cometbft.com/v0.37/spec/p2p/legacy-docs/messages/), +`CheckTx`, to the application layer to check for validity, and receives an `abci.CheckTxResponse`. If the `Tx` passes the checks, it is held in the node's +[**Mempool**](https://docs.cometbft.com/v0.37/spec/p2p/legacy-docs/messages/mempool), an in-memory pool of transactions unique to each node, pending inclusion in a block - honest nodes discard a `Tx` if it is found to be invalid. Prior to consensus, nodes continuously check incoming transactions and gossip them to their peers. + +### Types of Checks + +The full-nodes perform stateless, then stateful checks on `Tx` during `CheckTx`, with the goal to +identify and reject an invalid transaction as early on as possible to avoid wasted computation. + +**_Stateless_** checks do not require nodes to access state - light clients or offline nodes can do +them - and are thus less computationally expensive. 
Stateless checks include making sure addresses +are not empty, enforcing nonnegative numbers, and other logic specified in the definitions. + +**_Stateful_** checks validate transactions and messages based on a committed state. Examples +include checking that the relevant values exist and can be transacted with, the address +has sufficient funds, and the sender is authorized or has the correct ownership to transact. +At any given moment, full-nodes typically have [multiple versions](/docs/sdk/next/documentation/application-framework/baseapp#state-updates) +of the application's internal state for different purposes. For example, nodes execute state +changes while in the process of verifying transactions, but still need a copy of the last committed +state in order to answer queries - they should not respond using state with uncommitted changes. + +In order to verify a `Tx`, full-nodes call `CheckTx`, which includes both _stateless_ and _stateful_ +checks. Further validation happens later in the [`DeliverTx`](#delivertx) stage. `CheckTx` goes +through several steps, beginning with decoding `Tx`. + +### Decoding + +When `Tx` is received by the application from the underlying consensus engine (e.g. CometBFT), it is still in its [encoded](/docs/sdk/next/documentation/protocol-development/encoding) `[]byte` form and needs to be unmarshaled in order to be processed. Then, the [`runTx`](/docs/sdk/next/documentation/application-framework/baseapp#runtx-antehandler-runmsgs-posthandler) function is called to run in `runTxModeCheck` mode, meaning the function runs all checks but exits before executing messages and writing state changes. + +### ValidateBasic (deprecated) + +Messages ([`sdk.Msg`](/docs/sdk/next/documentation/protocol-development/transactions#messages)) are extracted from transactions (`Tx`). The `ValidateBasic` method of the `sdk.Msg` interface implemented by the module developer is run for each transaction. 
+To discard obviously invalid messages, the `BaseApp` type calls the `ValidateBasic` method very early in the processing of the message in the [`CheckTx`](/docs/sdk/next/documentation/application-framework/baseapp#checktx) and [`DeliverTx`](/docs/sdk/next/documentation/application-framework/baseapp#delivertx) transactions. +`ValidateBasic` can include only **stateless** checks (the checks that do not require access to the state). + + +The `ValidateBasic` method on messages has been deprecated in favor of validating messages directly in their respective [`Msg` services](/docs/sdk/next/documentation/module-system/msg-services#Validation). + +Read [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) for more details. + + + + + `BaseApp` still calls `ValidateBasic` on messages that implement that method + for backwards compatibility. + + +#### Guideline + +`ValidateBasic` should not be used anymore. Message validation should be performed in the `Msg` service when [handling a message](/docs/sdk/next/documentation/module-system/msg-services#Validation) in a module Msg Server. + +### AnteHandler + +`AnteHandler`s even though optional, are in practice very often used to perform signature verification, gas calculation, fee deduction, and other core operations related to blockchain transactions. + +A copy of the cached context is provided to the `AnteHandler`, which performs limited checks specified for the transaction type. Using a copy allows the `AnteHandler` to do stateful checks for `Tx` without modifying the last committed state, and revert back to the original if the execution fails. + +For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/blob/main/x/auth/README.md) module `AnteHandler` checks and increments sequence numbers, checks signatures and account numbers, and deducts fees from the first signer of the transaction - all state changes are made using the `checkState`. + + + Ante handlers only run on a transaction. 
If a transaction embeds multiple + messages (like some x/authz, x/gov transactions for instance), the ante + handlers only have awareness of the outer message. Inner messages are mostly + directly routed to the [message + router](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router) + and will skip the chain of ante handlers. Keep that in mind when designing + your own ante handler. + + +### Gas + +The [`Context`](/docs/sdk/next/documentation/application-framework/context), which keeps a `GasMeter` that tracks how much gas is used during the execution of `Tx`, is initialized. The user-provided amount of gas for `Tx` is known as `GasWanted`. If `GasConsumed`, the amount of gas consumed during execution, ever exceeds `GasWanted`, the execution stops and the changes made to the cached copy of the state are not committed. Otherwise, `CheckTx` sets `GasUsed` equal to `GasConsumed` and returns it in the result. After calculating the gas and fee values, validator-nodes check that the user-specified `gas-prices` is greater than their locally defined `min-gas-prices`. + +### Discard or Addition to Mempool + +If at any point during `CheckTx` the `Tx` fails, it is discarded and the transaction lifecycle ends +there. Otherwise, if it passes `CheckTx` successfully, the default protocol is to relay it to peer +nodes and add it to the Mempool so that the `Tx` becomes a candidate to be included in the next block. + +The **mempool** serves the purpose of keeping track of transactions seen by all full-nodes. +Full-nodes keep a **mempool cache** of the last `mempool.cache_size` transactions they have seen, as a first line of +defense to prevent replay attacks. Ideally, `mempool.cache_size` is large enough to encompass all +of the transactions in the full mempool. If the mempool cache is too small to keep track of all +the transactions, `CheckTx` is responsible for identifying and rejecting replayed transactions. 
+
+Currently existing preventative measures include fees and a `sequence` (nonce) counter to distinguish
+replayed transactions from identical but valid ones. If an attacker tries to spam nodes with many
+copies of a `Tx`, full-nodes keeping a mempool cache reject all identical copies instead of running
+`CheckTx` on them. Even if the copies have incremented `sequence` numbers, attackers are
+disincentivized by the need to pay fees.
+
+Validator nodes keep a mempool to prevent replay attacks, just as full-nodes do, but also use it as
+a pool of unconfirmed transactions in preparation for block inclusion. Note that even if a `Tx`
+passes all checks at this stage, it may still be found invalid later on, because
+`CheckTx` does not fully validate the transaction (that is, it does not actually execute the messages).
+
+## Inclusion in a Block
+
+Consensus, the process through which validator nodes come to agreement on which transactions to
+accept, happens in **rounds**. Each round begins with a proposer creating a block of the most
+recent transactions and ends with **validators**, special full-nodes with voting power responsible
+for consensus, agreeing to accept the block or go with a `nil` block instead. Validator nodes
+execute the consensus algorithm, such as [CometBFT](https://docs.cometbft.com/v0.37/spec/consensus/),
+confirming the transactions using ABCI requests to the application, in order to come to this agreement.
+
+The first step of consensus is the **block proposal**. One proposer amongst the validators is chosen
+by the consensus algorithm to create and propose a block - in order for a `Tx` to be included, it
+must be in this proposer's mempool.
+
+## State Changes
+
+The next step of consensus is to execute the transactions to fully validate them. All full-nodes
+that receive a block proposal from the correct proposer execute the transactions by calling the ABCI function `FinalizeBlock`.
+As mentioned throughout the documentation, `BeginBlock`, `ExecuteTx` and `EndBlock` are called within `FinalizeBlock`.
+Although every full-node operates individually and locally, the outcome is always consistent and unequivocal. This is because the state changes brought about by the messages are predictable, and the transactions are specifically sequenced in the proposed block.
+
+```text expandable
+     --------------------------
+     | Receive Block Proposal |
+     --------------------------
+                 |
+                 v
+     --------------------------
+     |     FinalizeBlock      |
+     --------------------------
+                 |
+                 v
+     --------------------------
+     |       BeginBlock       |
+     --------------------------
+                 |
+                 v
+     --------------------------
+     |     ExecuteTx(tx0)     |
+     |     ExecuteTx(tx1)     |
+     |     ExecuteTx(tx2)     |
+     |     ExecuteTx(tx3)     |
+     |           .            |
+     |           .            |
+     |           .            |
+     --------------------------
+                 |
+                 v
+     --------------------------
+     |        EndBlock        |
+     --------------------------
+                 |
+                 v
+     --------------------------
+     |       Consensus        |
+     --------------------------
+                 |
+                 v
+     --------------------------
+     |         Commit         |
+     --------------------------
+```
+
+### Transaction Execution
+
+The `FinalizeBlock` ABCI function defined in [`BaseApp`](/docs/sdk/next/documentation/application-framework/baseapp) does the bulk of the
+state transitions: it is run for each transaction in the block in sequential order as committed
+to during consensus. Under the hood, transaction execution is almost identical to `CheckTx` but calls the
+[`runTx`](/docs/sdk/next/documentation/application-framework/baseapp#runtx) function in deliver mode instead of check mode.
+Instead of using their `checkState`, full-nodes use `finalizeBlockState`:
+
+- **Decoding:** Since `FinalizeBlock` is an ABCI call, `Tx` is received in the encoded `[]byte` form.
+  Nodes first unmarshal the transaction, using the [`TxConfig`](/docs/sdk/next/documentation/application-framework/app-anatomy#register-codec) defined in the app, then call `runTx` in `execModeFinalize`, which is very similar to `CheckTx` but also executes and writes state changes.
+
+- **Checks and `AnteHandler`:** Full-nodes call `validateBasicMsgs` and `AnteHandler` again. This second check
+  happens because they may not have seen the same transactions during the addition to Mempool stage
+  and a malicious proposer may have included invalid ones. One difference here is that the
+  `AnteHandler` does not compare `gas-prices` to the node's `min-gas-prices` since that value is local
+  to each node - differing values across nodes yield nondeterministic results.
+
+- **`MsgServiceRouter`:** After `CheckTx` exits, `FinalizeBlock` continues to run
+  [`runMsgs`](/docs/sdk/next/documentation/application-framework/baseapp#runtx-antehandler-runmsgs-posthandler) to fully execute each `Msg` within the transaction.
+  Since the transaction may have messages from different modules, `BaseApp` needs to know in which module
+  to find the appropriate handler. This is achieved using `BaseApp`'s `MsgServiceRouter` so that each message can be processed by the module's Protobuf [`Msg` service](/docs/sdk/next/documentation/module-system/msg-services).
+  For `LegacyMsg` routing, the `Route` function is called via the [module manager](/docs/sdk/next/documentation/module-system/module-manager) to retrieve the route name and find the legacy [`Handler`](/docs/sdk/next/documentation/module-system/msg-services#handler-type) within the module.
+
+- **`Msg` service:** The Protobuf `Msg` service is responsible for executing each message in the `Tx` and causes state transitions to persist in `finalizeBlockState`.
+
+- **PostHandlers:** [`PostHandler`](/docs/sdk/next/documentation/application-framework/baseapp#posthandler)s run after the execution of the message. If they fail, the state changes of `runMsgs`, as well as those of the `PostHandler`s, are reverted.
+
+- **Gas:** While a `Tx` is being delivered, a `GasMeter` is used to keep track of how much
+  gas is being used; if execution completes, `GasUsed` is set and returned in the
+  `abci.ExecTxResult`.
If execution halts because `BlockGasMeter` or `GasMeter` has run out or something else goes + wrong, a deferred function at the end appropriately errors or panics. + +If there are any failed state changes resulting from a `Tx` being invalid or `GasMeter` running out, +the transaction processing terminates and any state changes are reverted. Invalid transactions in a +block proposal cause validator nodes to reject the block and vote for a `nil` block instead. + +### Commit + +The final step is for nodes to commit the block and state changes. Validator nodes +perform the previous step of executing state transitions in order to validate the transactions, +then sign the block to confirm it. Full nodes that are not validators do not +participate in consensus - i.e. they cannot vote - but listen for votes to understand whether or +not they should commit the state changes. + +When they receive enough validator votes (2/3+ _precommits_ weighted by voting power), full nodes commit to a new block to be added to the blockchain and +finalize the state transitions in the application layer. A new state root is generated to serve as +a merkle proof for the state transitions. Applications use the [`Commit`](/docs/sdk/next/documentation/application-framework/baseapp#commit) +ABCI method inherited from [Baseapp](/docs/sdk/next/documentation/application-framework/baseapp); it syncs all the state transitions by +writing the `deliverState` into the application's internal state. As soon as the state changes are +committed, `checkState` starts afresh from the most recently committed state and `deliverState` +resets to `nil` in order to be consistent and reflect the changes. + +Note that not all blocks have the same number of transactions and it is possible for consensus to +result in a `nil` block or one with none at all. In a public blockchain network, it is also possible +for validators to be **byzantine**, or malicious, which may prevent a `Tx` from being committed in +the blockchain. 
Possible malicious behaviors include the proposer deciding to censor a `Tx` by
+excluding it from the block or a validator voting against the block.
+
+At this point, the transaction lifecycle of a `Tx` is over: nodes have verified its validity,
+delivered it by executing its state changes, and committed those changes. The `Tx` itself,
+in `[]byte` form, is stored in a block and appended to the blockchain.
diff --git a/docs/sdk/next/documentation/state-storage/README.mdx b/docs/sdk/next/documentation/state-storage/README.mdx
new file mode 100644
index 00000000..c3644af6
--- /dev/null
+++ b/docs/sdk/next/documentation/state-storage/README.mdx
@@ -0,0 +1,242 @@
+---
+title: Store
+---
+
+The store package defines the interfaces, types and abstractions for Cosmos SDK
+modules to read and write to Merkleized state within a Cosmos SDK application.
+The store package provides many primitives for developers to use in order to
+work with both state storage and state commitment. Below we describe the various
+abstractions.
+
+## Types
+
+### `Store`
+
+The bulk of the store interfaces are defined [here](https://github.com/cosmos/cosmos-sdk/blob/main/store/types/store.go),
+where the base primitive interface, which other interfaces build off of, is
+the `Store` type. The `Store` interface defines the ability to tell the type of
+the implementing store and the ability to cache wrap via the `CacheWrapper` interface.
+
+### `CacheWrapper` & `CacheWrap`
+
+One of the most important capabilities a store can provide is the
+ability to cache wrap. Cache wrapping is essentially the underlying store wrapping
+itself within another store type that performs caching for both reads and writes
+with the ability to flush writes via `Write()`.
+
+### `KVStore` & `CacheKVStore`
+
+One of the most important interfaces that both developers and modules interface
+with, which also provides the basis of most state storage and commitment operations,
+is the `KVStore`.
The `KVStore` interface provides basic CRUD abilities and
+prefix-based iteration, including reverse iteration.
+
+Typically, each module has its own dedicated `KVStore` instance, which it can
+get access to via the `sdk.Context` and the use of a pointer-based named key --
+`KVStoreKey`. The `KVStoreKey` provides pseudo-OCAP. How exactly a `KVStoreKey`
+maps to a `KVStore` will be illustrated below through the `CommitMultiStore`.
+
+Note, a `KVStore` cannot directly commit state. Instead, a `KVStore` can be wrapped
+by a `CacheKVStore` which extends a `KVStore` and provides the ability for the
+caller to execute `Write()` which commits state to the underlying state storage.
+Note, this doesn't actually flush writes to disk as writes are held in memory
+until `Commit()` is called on the `CommitMultiStore`.
+
+### `CommitMultiStore`
+
+The `CommitMultiStore` interface exposes the top-level interface that is used
+to manage state commitment and storage by an SDK application and abstracts the
+concept of multiple `KVStore`s which are used by multiple modules. Specifically,
+it supports the following high-level primitives:
+
+* Allows for a caller to retrieve a `KVStore` by providing a `KVStoreKey`.
+* Exposes pruning mechanisms to remove state pinned against a specific height/version
+  in the past.
+* Allows for loading state storage at a particular height/version in the past to
+  provide current head and historical queries.
+* Provides the ability to roll back state to a previous height/version.
+* Provides the ability to load state storage at a particular height/version
+  while also performing store upgrades, which are used during live hard-fork
+  application state migrations.
+* Provides the ability to commit all current accumulated state to disk and performs
+  Merkle commitment.
+
+## Implementation Details
+
+While the `store` package provides many interfaces, the Cosmos SDK typically
+defines a core implementation for each main interface that modules and
+developers interact with.
+
+### `iavl.Store`
+
+The `iavl.Store` provides the core implementation for state storage and commitment
+by implementing the following interfaces:
+
+* `KVStore`
+* `CommitStore`
+* `CommitKVStore`
+* `Queryable`
+* `StoreWithInitialVersion`
+
+It allows for all CRUD operations to be performed along with allowing current
+and historical state queries, prefix iteration, and state commitment along with
+Merkle proof operations. The `iavl.Store` also provides the ability to remove
+historical state from the state commitment layer.
+
+An overview of the IAVL implementation can be found [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md).
+It is important to note that the IAVL store provides both state commitment and
+logical storage operations, which comes with drawbacks as there are various
+performance impacts, some of which are very drastic, when it comes to the
+operations mentioned above.
+
+When dealing with state management in modules and clients, the Cosmos SDK provides
+various layers of abstractions or "store wrapping", where the `iavl.Store` is the
+bottommost layer. When requesting a store to perform reads or writes in a module,
+the typical abstraction layer in order is defined as follows:
+
+```text
+iavl.Store <- cachekv.Store <- gaskv.Store <- cachemulti.Store <- rootmulti.Store
+```
+
+### Concurrent use of IAVL store
+
+The tree under `iavl.Store` is not safe for concurrent use. It is the
+responsibility of the caller to ensure that concurrent access to the store is
+not performed.
+
+The main issue with concurrent use is when data is written at the same time as
+it's being iterated over.
Doing so will cause an irrecoverable fatal error because
+of concurrent reads and writes to an internal map.
+
+Although it's not recommended, you can iterate through values while writing to
+the store by disabling "FastNode" **without guarantees that the values being written will
+be returned during the iteration** (if you need this, you might want to reconsider
+the design of your application). This is done by setting `iavl-disable-fastnode`
+to `true` in the config TOML file.
+
+### `cachekv.Store`
+
+The `cachekv.Store` wraps an underlying `KVStore`, typically an `iavl.Store`,
+and contains an in-memory cache for storing pending writes to the underlying `KVStore`.
+`Set` and `Delete` calls are executed on the in-memory cache, whereas `Has` calls
+are proxied to the underlying `KVStore`.
+
+One of the most important calls to a `cachekv.Store` is `Write()`, which ensures
+that key-value pairs are written to the underlying `KVStore` in a deterministic
+and ordered manner by sorting the keys first. The store keeps track of "dirty"
+keys and uses these to determine what keys to sort. In addition, it also keeps
+track of deleted keys and ensures these are also removed from the underlying
+`KVStore`.
+
+The `cachekv.Store` also provides the ability to perform iteration and reverse
+iteration. Iteration is performed through the `cacheMergeIterator` type and uses
+both the dirty cache and underlying `KVStore` to iterate over key-value pairs.
+
+Note, all calls to CRUD and iteration operations on a `cachekv.Store` are thread-safe.
+
+### `gaskv.Store`
+
+The `gaskv.Store` provides a simple implementation of a `KVStore`.
+Specifically, it just wraps an existing `KVStore`, such as a cache-wrapped
+`iavl.Store`, and incurs configurable gas costs for CRUD operations via
+`ConsumeGas()` calls defined on the `GasMeter` which exists in an `sdk.Context`
+and then proxies the underlying CRUD call to the underlying store. Note, the
+`GasMeter` is reset on each block.
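This wrap-and-proxy pattern can be sketched with plain Go (toy types and made-up costs — not the SDK's actual `GasMeter` or store interfaces, which also handle gas limits and out-of-gas errors):

```go
package main

import "fmt"

// GasMeter is a toy accumulator; the real one also enforces a limit.
type GasMeter struct{ consumed uint64 }

func (g *GasMeter) ConsumeGas(amount uint64, descriptor string) { g.consumed += amount }

// Store is a minimal KV abstraction for this sketch.
type Store interface {
	Get(key string) string
	Set(key, value string)
}

type mapStore map[string]string

func (m mapStore) Get(key string) string { return m[key] }
func (m mapStore) Set(key, value string) { m[key] = value }

// gasStore wraps an underlying Store and charges gas before proxying each
// call, mirroring how gaskv.Store charges via the context's GasMeter.
type gasStore struct {
	parent              Store
	meter               *GasMeter
	readCost, writeCost uint64
}

func (g gasStore) Get(key string) string {
	g.meter.ConsumeGas(g.readCost, "read")
	return g.parent.Get(key)
}

func (g gasStore) Set(key, value string) {
	g.meter.ConsumeGas(g.writeCost, "write")
	g.parent.Set(key, value)
}

func main() {
	meter := &GasMeter{}
	s := gasStore{parent: mapStore{}, meter: meter, readCost: 3, writeCost: 30}
	s.Set("k", "v")
	_ = s.Get("k")
	fmt.Println("gas consumed:", meter.consumed) // 33
}
```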
+
+### `cachemulti.Store` & `rootmulti.Store`
+
+The `rootmulti.Store` acts as an abstraction around a series of stores. Namely,
+it implements the `CommitMultiStore` and `Queryable` interfaces. Through the
+`rootmulti.Store`, an SDK module can request access to a `KVStore` to perform
+state CRUD operations and queries by holding access to a unique `KVStoreKey`.
+
+The `rootmulti.Store` ensures these queries and state operations are performed
+through cache-wrapped instances of `cachekv.Store`, which is described above. The
+`rootmulti.Store` implementation is also responsible for committing all accumulated
+state from each `KVStore` to disk and returning an application state Merkle root.
+
+Queries can be performed to return state data along with associated state
+commitment proofs for both previous heights/versions and the current state root.
+Queries are routed based on store name, i.e. a module, along with other parameters
+which are defined in `abci.QueryRequest`.
+
+The `rootmulti.Store` also provides primitives for pruning data at a given
+height/version from state storage. When a height is committed, the `rootmulti.Store`
+will determine if other previous heights should be considered for removal based
+on the operator's pruning settings defined by `PruningOptions`, which defines
+how many recent versions to keep on disk and the interval at which to remove
+"staged" pruned heights from disk. During each interval, the staged heights are
+removed from each `KVStore`. Note, it is up to the underlying `KVStore`
+implementation to determine how pruning is actually performed. The `PruningOptions`
+are defined as follows:
+
+```go
+type PruningOptions struct {
+  / KeepRecent defines how many recent heights to keep on disk.
+  KeepRecent uint64
+
+  / Interval defines when the pruned heights are removed from disk.
+  Interval uint64
+
+  / Strategy defines the kind of pruning strategy. See below for more information on each.
+  Strategy PruningStrategy
+}
+```
+
+The Cosmos SDK defines a preset number of pruning "strategies": `default`, `everything`,
+`nothing`, and `custom`.
+
+It is important to note that the `rootmulti.Store` considers each `KVStore` as a
+separate logical store. In other words, they do not share a Merkle tree or
+comparable data structure. This means that when state is committed via
+`rootmulti.Store`, each store is committed in sequence and thus is not atomic.
+
+In terms of store construction and wiring, each Cosmos SDK application contains
+a `BaseApp` instance which internally has a reference to a `CommitMultiStore`
+that is implemented by a `rootmulti.Store`. The application then registers one or
+more `KVStoreKey` that pertain to a unique module and thus a `KVStore`. Through
+the use of an `sdk.Context` and a `KVStoreKey`, each module can get direct access
+to its respective `KVStore` instance.
+
+Example:
+
+```go expandable
+func NewApp(...) Application {
+  / ...
+  bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+  bApp.SetCommitMultiStoreTracer(traceStore)
+  bApp.SetVersion(version.Version)
+  bApp.SetInterfaceRegistry(interfaceRegistry)
+
+  / ...
+  keys := sdk.NewKVStoreKeys(...)
+  transientKeys := sdk.NewTransientStoreKeys(...)
+  memKeys := sdk.NewMemoryStoreKeys(...)
+
+  / ...
+
+  / initialize stores
+  app.MountKVStores(keys)
+  app.MountTransientStores(transientKeys)
+  app.MountMemoryStores(memKeys)
+
+  / ...
+}
+```
+
+The `rootmulti.Store` itself can be cache-wrapped, which returns an instance of a
+`cachemulti.Store`. For each block, `BaseApp` ensures that the proper abstractions
+are created on the `CommitMultiStore`, i.e. ensuring that the `rootmulti.Store`
+is cache-wrapped and uses the resulting `cachemulti.Store` to be set on the
+`sdk.Context` which is then used for block and transaction execution.
As a result, +all state mutations due to block and transaction execution are actually held +ephemerally until `Commit()` is called by the ABCI client. This concept is further +expanded upon when the AnteHandler is executed per transaction to ensure state +is not committed for transactions that failed CheckTx. diff --git a/docs/sdk/next/documentation/state-storage/collections.mdx b/docs/sdk/next/documentation/state-storage/collections.mdx new file mode 100644 index 00000000..2044f71e --- /dev/null +++ b/docs/sdk/next/documentation/state-storage/collections.mdx @@ -0,0 +1,1374 @@ +--- +title: Collections +description: >- + Collections is a library meant to simplify the experience with respect to + module state handling. +--- + +Collections is a library meant to simplify the experience with respect to module state handling. + +Cosmos SDK modules handle their state using the `KVStore` interface. The problem with working with +`KVStore` is that it forces you to think of state as a bytes KV pairings when in reality the majority of +state comes from complex concrete golang objects (strings, ints, structs, etc.). + +Collections allows you to work with state as if they were normal golang objects and removes the need +for you to think of your state as raw bytes in your code. + +It also allows you to migrate your existing state without causing any state breakage that forces you into +tedious and complex chain state migrations. + +## Installation + +To install collections in your cosmos-sdk chain project, run the following command: + +```shell +go get cosmossdk.io/collections +``` + +## Core types + +Collections offers 5 different APIs to work with state, which will be explored in the next sections, these APIs are: + +* `Map`: to work with typed arbitrary KV pairings. +* `KeySet`: to work with just typed keys +* `Item`: to work with just one typed value +* `Sequence`: which is a monotonically increasing number. 
+* `IndexedMap`: which combines `Map` and `KeySet` to provide a `Map` with indexing capabilities. + +## Preliminary components + +Before exploring the different collections types and their capability it is necessary to introduce +the three components that every collection shares. In fact when instantiating a collection type by doing, for example, +`collections.NewMap/collections.NewItem/...` you will find yourself having to pass them some common arguments. + +For example, in code: + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var AllowListPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + AllowList collections.KeySet[string] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + AllowList: collections.NewKeySet(sb, AllowListPrefix, "allow_list", collections.StringKey), +} +} +``` + +Let's analyse the shared arguments, what they do, and why we need them. + +### SchemaBuilder + +The first argument passed is the `SchemaBuilder` + +`SchemaBuilder` is a structure that keeps track of all the state of a module, it is not required by the collections +to deal with state but it offers a dynamic and reflective way for clients to explore a module's state. + +We instantiate a `SchemaBuilder` by passing it a function that given the modules store key returns the module's specific store. + +We then need to pass the schema builder to every collection type we instantiate in our keeper, in our case the `AllowList`. + +### Prefix + +The second argument passed to our `KeySet` is a `collections.Prefix`, a prefix represents a partition of the module's `KVStore` +where all the state of a specific collection will be saved. 
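Conceptually, a prefix just namespaces every key a collection writes under its own leading byte. A toy sketch of the idea (not collections' real key encoding):

```go
package main

import "fmt"

// prefixed builds the raw store key for a collection entry: the collection's
// prefix byte followed by the entry's encoded key. Two collections with
// different prefixes can therefore never overwrite each other's state.
func prefixed(prefix byte, key string) string {
	return string([]byte{prefix}) + key
}

func main() {
	store := map[string]string{}
	allowListPrefix, paramsPrefix := byte(0), byte(1)

	store[prefixed(allowListPrefix, "cosmos1abc")] = "" // KeySet entry under prefix 0
	store[prefixed(paramsPrefix, "")] = "params-bytes"  // Item entry under prefix 1

	fmt.Println(len(store)) // 2
}
```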
+
+Since a module can have multiple collections, the following is expected:
+
+* module params will become a `collections.Item`
+* the `AllowList` is a `collections.KeySet`
+
+We don't want a collection to write over the state of another collection, so we pass it a prefix, which defines a storage
+partition owned by the collection.
+
+If you already built modules, the prefix translates to the items you were creating in your `types/keys.go` file, example: [Link](https://github.com/cosmos/cosmos-sdk/blob/v0.52.0-rc.1/x/feegrant/key.go#L16~L22)
+
+your old:
+
+```go
+var (
+  / FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+  / - 0x00: allowance
+  FeeAllowanceKeyPrefix = []byte{0x00}
+
+  / FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+  / - 0x01:
+  FeeAllowanceQueueKeyPrefix = []byte{0x01}
+)
+```
+
+becomes:
+
+```go
+var (
+  / FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+  / - 0x00: allowance
+  FeeAllowanceKeyPrefix = collections.NewPrefix(0)
+
+  / FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+  / - 0x01:
+  FeeAllowanceQueueKeyPrefix = collections.NewPrefix(1)
+)
+```
+
+#### Rules
+
+`collections.NewPrefix` accepts either `uint8`, `string` or `[]byte`. It's good practice to use an always-increasing `uint8` for disk space efficiency.
+
+A collection **MUST NOT** share the same prefix as another collection in the same module, and a collection prefix **MUST NEVER** start with the same prefix as another. Examples:
+
+```go
+prefix1 := collections.NewPrefix("prefix")
+
+prefix2 := collections.NewPrefix("prefix") / THIS IS BAD!
+```
+
+```go
+prefix1 := collections.NewPrefix("a")
+
+prefix2 := collections.NewPrefix("aa") / prefix2 starts with the same as prefix1: BAD!!!
+```
+
+### Human-Readable Name
+
+The third parameter we pass to a collection is a string, which is a human-readable name.
+It is needed to make the role of a collection understandable by clients who have no clue about +what a module is storing in state. + +#### Rules + +Each collection in a module **MUST** have a unique humanised name. + +## Key and Value Codecs + +A collection is generic over the type you can use as keys or values. +This makes collections dumb, but also means that hypothetically we can store everything +that can be a go type into a collection. We are not bounded to any type of encoding (be it proto, json or whatever) + +So a collection needs to be given a way to understand how to convert your keys and values to bytes. +This is achieved through `KeyCodec` and `ValueCodec`, which are arguments that you pass to your +collections when you're instantiating them using the `collections.NewMap/collections.NewItem/...` +instantiation functions. + +NOTE: Generally speaking you will never be required to implement your own `Key/ValueCodec` as +the SDK and collections libraries already come with default, safe and fast implementation of those. +You might need to implement them only if you're migrating to collections and there are state layout incompatibilities. + +Let's explore an example: + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var IDsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + IDs collections.Map[string, uint64] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + IDs: collections.NewMap(sb, IDsPrefix, "ids", collections.StringKey, collections.Uint64Value), +} +} +``` + +We're now instantiating a map where the key is string and the value is `uint64`. +We already know the first three arguments of the `NewMap` function. 
+ +The fourth parameter is our `KeyCodec`, we know that the `Map` has `string` as key so we pass it a `KeyCodec` that handles strings as keys. + +The fifth parameter is our `ValueCodec`, we know that the `Map` has a `uint64` as value so we pass it a `ValueCodec` that handles uint64. + +Collections already comes with all the required implementations for golang primitive types. + +Let's make another example, this falls closer to what we build using cosmos SDK, let's say we want +to create a `collections.Map` that maps account addresses to their base account. So we want to map an `sdk.AccAddress` to an `auth.BaseAccount` (which is a proto): + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/next/documentation/state-storage/cdc)), +} +} +``` + +As we can see here since our `collections.Map` maps `sdk.AccAddress` to `authtypes.BaseAccount`, +we use the `sdk.AccAddressKey` which is the `KeyCodec` implementation for `AccAddress` and we use `codec.CollValue` to +encode our proto type `BaseAccount`. + +Generally speaking you will always find the respective key and value codecs for types in the `go.mod` path you're using +to import that type. If you want to encode proto values refer to the codec `codec.CollValue` function, which allows you +to encode any type implement the `proto.Message` interface. 
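A rough sketch of what a codec pair boils down to — these are hypothetical simplified interfaces, not the actual `collections` ones, which also handle key sizing and order-preserving encodings:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// KeyCodec converts typed keys to and from the bytes stored on disk.
// Simplified for illustration; the real interface has more methods.
type KeyCodec[K any] interface {
	Encode(key K) []byte
	Decode(b []byte) K
}

type uint64Key struct{}

// Big-endian encoding keeps byte ordering consistent with numeric ordering,
// which is what makes ordered iteration over uint64 keys work.
func (uint64Key) Encode(key uint64) []byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, key)
	return b
}

func (uint64Key) Decode(b []byte) uint64 { return binary.BigEndian.Uint64(b) }

type stringKey struct{}

func (stringKey) Encode(key string) []byte { return []byte(key) }
func (stringKey) Decode(b []byte) string   { return string(b) }

func main() {
	var u KeyCodec[uint64] = uint64Key{}
	fmt.Println(u.Decode(u.Encode(42))) // 42
}
```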
+ +## Map + +We analyse the first and most important collection type, the `collections.Map`. +This is the type that everything else builds on top of. + +### Use case + +A `collections.Map` is used to map arbitrary keys with arbitrary values. + +### Example + +It's easier to explain a `collections.Map` capabilities through an example: + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "fmt" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/next/documentation/state-storage/cdc)), +} +} + +func (k Keeper) + +CreateAccount(ctx sdk.Context, addr sdk.AccAddress, account authtypes.BaseAccount) + +error { + has, err := k.Accounts.Has(ctx, addr) + if err != nil { + return err +} + if has { + return fmt.Errorf("account already exists: %s", addr) +} + +err = k.Accounts.Set(ctx, addr, account) + if err != nil { + return err +} + +return nil +} + +func (k Keeper) + +GetAccount(ctx sdk.Context, addr sdk.AccAddress) (authtypes.BaseAccount, error) { + acc, err := k.Accounts.Get(ctx, addr) + if err != nil { + return authtypes.BaseAccount{ +}, err +} + +return acc, nil +} + +func (k Keeper) + +RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) + +error { + err := k.Accounts.Remove(ctx, addr) + if err != nil { + return err +} + +return nil +} +``` + +#### Set method + +Set maps with the provided `AccAddress` (the key) to the `auth.BaseAccount` (the value). 
+
+Under the hood the `collections.Map` will convert the key and value to bytes using the [key and value codec](/docs/sdk/next/documentation/state-storage/README#key-and-value-codecs).
+It will prepend to our bytes key the [prefix](/docs/sdk/next/documentation/state-storage/README#prefix) and store it in the KVStore of the module.
+
+#### Has method
+
+The `Has` method reports whether the provided key exists in the store.
+
+#### Get method
+
+The `Get` method accepts the `AccAddress` and returns the associated `auth.BaseAccount` if it exists; otherwise it errors.
+
+#### Remove method
+
+The `Remove` method accepts the `AccAddress` and removes it from the store. It won't report an error
+if the key does not exist; to check for existence before removal, use the `Has` method.
+
+#### Iteration
+
+Iteration is covered in its own section below.
+
+## KeySet
+
+The second type of collection is `collections.KeySet`; as the name suggests, it maintains
+only a set of keys without values.
+
+#### Implementation curiosity
+
+A `collections.KeySet` is just a `collections.Map` with a `key` but no value.
+The value internally is always the same and is represented as an empty byte slice `[]byte{}`.
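That detail can be pictured with plain Go maps (a toy sketch of the idea, not the library's internals):

```go
package main

import "fmt"

// keySet over a raw KV store: every member maps to the same empty value,
// so only key presence carries information.
type keySet struct{ kv map[string][]byte }

func newKeySet() keySet { return keySet{kv: map[string][]byte{}} }

func (s keySet) Set(key string)      { s.kv[key] = []byte{} } // value is always empty
func (s keySet) Has(key string) bool { _, ok := s.kv[key]; return ok }
func (s keySet) Remove(key string)   { delete(s.kv, key) }

func main() {
	s := newKeySet()
	s.Set("cosmosvaloper1abc")
	fmt.Println(s.Has("cosmosvaloper1abc"), s.Has("cosmosvaloper1xyz")) // true false
}
```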
+
+### Example
+
+As always we explore the collection type through an example:
+
+```go expandable
+package collections
+
+import (
+  "fmt"
+
+  "cosmossdk.io/collections"
+  storetypes "cosmossdk.io/store/types"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var ValidatorsSetPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+  Schema        collections.Schema
+  ValidatorsSet collections.KeySet[sdk.ValAddress]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+  sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+  return Keeper{
+    ValidatorsSet: collections.NewKeySet(sb, ValidatorsSetPrefix, "validators_set", sdk.ValAddressKey),
+  }
+}
+
+func (k Keeper) AddValidator(ctx sdk.Context, validator sdk.ValAddress) error {
+  has, err := k.ValidatorsSet.Has(ctx, validator)
+  if err != nil {
+    return err
+  }
+  if has {
+    return fmt.Errorf("validator already in set: %s", validator)
+  }
+  err = k.ValidatorsSet.Set(ctx, validator)
+  if err != nil {
+    return err
+  }
+  return nil
+}
+
+func (k Keeper) RemoveValidator(ctx sdk.Context, validator sdk.ValAddress) error {
+  err := k.ValidatorsSet.Remove(ctx, validator)
+  if err != nil {
+    return err
+  }
+  return nil
+}
+```
+
+The first difference we notice is that `KeySet` requires us to specify only one type parameter: the key (`sdk.ValAddress` in this case).
+The second difference we notice is that `KeySet` in its `NewKeySet` function does not require
+us to specify a `ValueCodec` but only a `KeyCodec`. This is because a `KeySet` only saves keys and not values.
+
+Let's explore the methods.
+
+#### Has method
+
+Has allows us to understand if a key is present in the `collections.KeySet` or not; it functions in the same way as `collections.Map.Has`.
+
+#### Set method
+
+Set inserts the provided key in the `KeySet`.
+
+#### Remove method
+
+Remove removes the provided key from the `KeySet`; it does not error if the key does not exist.
+If an existence check before removal is required, couple it with the `Has` method.
+
+## Item
+
+The third type of collection is the `collections.Item`.
+It stores only one single item; it's useful, for example, for parameters, since there is
+always only one instance of parameters in state.
+
+### Implementation curiosity
+
+A `collections.Item` is just a `collections.Map` with no key but just a value.
+The key is the prefix of the collection!
+
+### Example
+
+```go expandable
+package collections
+
+import (
+  "cosmossdk.io/collections"
+  storetypes "cosmossdk.io/store/types"
+  "github.com/cosmos/cosmos-sdk/codec"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+  stakingtypes "cosmossdk.io/x/staking/types"
+)
+
+var ParamsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+  Schema collections.Schema
+  Params collections.Item[stakingtypes.Params]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+  sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+  return Keeper{
+    Params: collections.NewItem(sb, ParamsPrefix, "params", codec.CollValue[stakingtypes.Params](cdc)),
+  }
+}
+
+func (k Keeper) UpdateParams(ctx sdk.Context, params stakingtypes.Params) error {
+  err := k.Params.Set(ctx, params)
+  if err != nil {
+    return err
+  }
+  return nil
+}
+
+func (k Keeper) GetParams(ctx sdk.Context) (stakingtypes.Params, error) {
+  return k.Params.Get(ctx)
+}
+```
+
+The first key difference we notice is that we specify only one type parameter, which is the value we're storing.
+The second key difference is that we don't specify the `KeyCodec`: since we store only one item, we already know the key,
+and the fact that it is constant.
+
+## Iteration
+
+One of the key features of the `KVStore` is iterating over keys.
+ +Collections which deal with keys (so `Map`, `KeySet` and `IndexedMap`) allow you to iterate +over keys in a safe and typed way. They all share the same API, the only difference being +that `KeySet` returns a different type of `Iterator` because `KeySet` only deals with keys. + + + +Every collection shares the same `Iterator` semantics. + + + +Let's have a look at the `Map.Iterate` method: + +```go +func (m Map[K, V]) + +Iterate(ctx context.Context, ranger Ranger[K]) (Iterator[K, V], error) +``` + +It accepts a `collections.Ranger[K]`, which is an API that instructs map on how to iterate over keys. +As always we don't need to implement anything here as `collections` already provides some generic `Ranger` implementers +that expose all you need to work with ranges. + +### Example + +We have a `collections.Map` that maps accounts using `uint64` IDs. + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts collections.Map[uint64, authtypes.BaseAccount] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", collections.Uint64Key, codec.CollValue[authtypes.BaseAccount](/docs/sdk/next/documentation/state-storage/cdc)), +} +} + +func (k Keeper) + +GetAllAccounts(ctx sdk.Context) ([]authtypes.BaseAccount, error) { + / passing a nil Ranger equals to: iterate over every possible key + iter, err := k.Accounts.Iterate(ctx, nil) + if err != nil { + return nil, err +} + +accounts, err := iter.Values() + if err != nil { + return nil, err +} + +return accounts, err +} + +func (k Keeper) + 
+IterateAccountsBetween(ctx sdk.Context, start, end uint64) ([]authtypes.BaseAccount, error) { + / The collections.Range API offers a lot of capabilities + / like defining where the iteration starts or ends. + rng := new(collections.Range[uint64]). + StartInclusive(start). + EndExclusive(end). + Descending() + +iter, err := k.Accounts.Iterate(ctx, rng) + if err != nil { + return nil, err +} + +accounts, err := iter.Values() + if err != nil { + return nil, err +} + +return accounts, nil +} + +func (k Keeper) + +IterateAccounts(ctx sdk.Context, do func(id uint64, acc authtypes.BaseAccount) (stop bool)) + +error { + iter, err := k.Accounts.Iterate(ctx, nil) + if err != nil { + return err +} + +defer iter.Close() + for ; iter.Valid(); iter.Next() { + kv, err := iter.KeyValue() + if err != nil { + return err +} + if do(kv.Key, kv.Value) { + break +} + +} + +return nil +} +``` + +Let's analyse each method in the example and how it makes use of the `Iterate` and the returned `Iterator` API. + +#### GetAllAccounts + +In `GetAllAccounts` we pass to our `Iterate` a nil `Ranger`. This means that the returned `Iterator` will include +all the existing keys within the collection. + +Then we use the `Values` method from the returned `Iterator` API to collect all the values into a slice. + +`Iterator` offers other methods such as `Keys()` to collect only the keys and not the values and `KeyValues` to collect +all the keys and values. + +#### IterateAccountsBetween + +Here we make use of the `collections.Range` helper to specialise our range. +We make it start in a point through `StartInclusive` and end in the other with `EndExclusive`, then +we instruct it to report us results in reverse order through `Descending` + +Then we pass the range instruction to `Iterate` and get an `Iterator`, which will contain only the results +we specified in the range. + +Then we use again the `Values` method of the `Iterator` to collect all the results. 
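The range semantics used above can be sketched over a plain sorted slice (illustrative only — collections implements this over the store's ordered byte keys):

```go
package main

import "fmt"

// rangeKeys mimics StartInclusive / EndExclusive / Descending over an
// already-sorted ascending key slice.
func rangeKeys(sorted []uint64, startIncl, endExcl uint64, descending bool) []uint64 {
	var out []uint64
	for _, k := range sorted {
		if k >= startIncl && k < endExcl {
			out = append(out, k)
		}
	}
	if descending {
		// reverse in place to report results from highest to lowest
		for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
			out[i], out[j] = out[j], out[i]
		}
	}
	return out
}

func main() {
	keys := []uint64{1, 2, 3, 4, 5}
	fmt.Println(rangeKeys(keys, 2, 5, true)) // [4 3 2]
}
```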
+
+`collections.Range` also offers a `Prefix` API which is not applicable to all key types;
+for example, a `uint64` key cannot be prefixed because it is of constant size, but a `string` key
+can be prefixed.
+
+#### IterateAccounts
+
+Here we showcase how to lazily collect values from an Iterator.
+
+
+
+`Keys/Values/KeyValues` fully consume and close the `Iterator`; here we need to explicitly do a `defer iterator.Close()` call.
+
+
+
+`Iterator` also exposes a `Value` and `Key` method to collect only the current value or key, if collecting both is not needed.
+
+
+
+For this `callback` pattern, collections expose a `Walk` API.
+
+
+
+## Composite keys
+
+So far we've worked only with simple keys, like `uint64`, the account address, etc.
+There are some more complex cases in which we need to deal with composite keys.
+
+A key is composite when it is composed of multiple keys; for example, bank balances are stored as the composite key
+`(AccAddress, string)` where the first part is the address holding the coins and the second part is the denom.
+
+Example: let's say address `BOB` holds `10atom,15osmo`; this is how it is stored in state:
+
+```javascript
+(bob, atom) => 10
+(bob, osmo) => 15
+```
+
+This allows us to efficiently get a specific denom balance of an address, by simply `getting` `(address, denom)`, or to get all the balances
+of an address by prefixing over `(address)`.
+
+Let's see now how we can work with composite keys using collections.
+
+### Example
+
+In our example we will showcase how we can use collections when we are dealing with balances; similar to bank,
+a balance is a mapping between `(address, denom) => math.Int`. The composite key in our case is `(address, denom)`.
+ +## Instantiation of a composite key collection + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + "cosmossdk.io/math" + storetypes "cosmossdk.io/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var BalancesPrefix = collections.NewPrefix(1) + +type Keeper struct { + Schema collections.Schema + Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Balances: collections.NewMap( + sb, BalancesPrefix, "balances", + collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), + sdk.IntValue, + ), +} +} +``` + +### The Map Key definition + +First of all we can see that in order to define a composite key of two elements we use the `collections.Pair` type: + +```go +collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] +``` + +`collections.Pair` defines a key composed of two other keys, in our case the first part is `sdk.AccAddress`, the second +part is `string`. + +#### The Key Codec instantiation + +The arguments to instantiate are always the same, the only thing that changes is how we instantiate +the `KeyCodec`, since this key is composed of two keys we use `collections.PairKeyCodec`, which generates +a `KeyCodec` composed of two key codecs. The first one will encode the first part of the key, the second one will +encode the second part of the key. 
+ +### Working with composite key collections + +Let's expand on the example we used before: + +```go expandable +var BalancesPrefix = collections.NewPrefix(1) + +type Keeper struct { + Schema collections.Schema + Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Balances: collections.NewMap( + sb, BalancesPrefix, "balances", + collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), + sdk.IntValue, + ), +} +} + +func (k Keeper) + +SetBalance(ctx sdk.Context, address sdk.AccAddress, denom string, amount math.Int) + +error { + key := collections.Join(address, denom) + +return k.Balances.Set(ctx, key, amount) +} + +func (k Keeper) + +GetBalance(ctx sdk.Context, address sdk.AccAddress, denom string) (math.Int, error) { + return k.Balances.Get(ctx, collections.Join(address, denom)) +} + +func (k Keeper) + +GetAllAddressBalances(ctx sdk.Context, address sdk.AccAddress) (sdk.Coins, error) { + balances := sdk.NewCoins() + rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/next/documentation/state-storage/address) + +iter, err := k.Balances.Iterate(ctx, rng) + if err != nil { + return nil, err +} + +kvs, err := iter.KeyValues() + if err != nil { + return nil, err +} + for _, kv := range kvs { + balances = balances.Add(sdk.NewCoin(kv.Key.K2(), kv.Value)) +} + +return balances, nil +} + +func (k Keeper) + +GetAllAddressBalancesBetween(ctx sdk.Context, address sdk.AccAddress, startDenom, endDenom string) (sdk.Coins, error) { + rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/next/documentation/state-storage/address). + StartInclusive(startDenom). + EndInclusive(endDenom) + +iter, err := k.Balances.Iterate(ctx, rng) + if err != nil { + return nil, err +} + ... 
+}
+```
+
+#### SetBalance
+
+As we can see here we're setting the balance of an address for a specific denom.
+We use the `collections.Join` function to generate the composite key.
+`collections.Join` returns a `collections.Pair` (which is the key of our `collections.Map`).
+
+`collections.Pair` contains the two keys we have joined; it also exposes two methods: `K1` to fetch the 1st part of the
+key and `K2` to fetch the second part.
+
+As always, we use the `collections.Map.Set` method to map the composite key to our value (`math.Int` in this case).
+
+#### GetBalance
+
+To get a value in a composite key collection, we simply use `collections.Join` to compose the key.
+
+#### GetAllAddressBalances
+
+We use `collections.PrefixedPairRange` to iterate over all the keys starting with the provided address.
+Concretely, the iteration will report all the balances belonging to the provided address.
+
+The first part is that we instantiate a `PrefixedPairRange`, which is a `Ranger` implementer aimed at helping with
+`Pair` key iteration.
+
+```go
+rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address)
+```
+
+As we can see here, we're passing the type parameters of the `collections.Pair` because golang type inference
+with respect to generics is not as permissive as in other languages, so we need to explicitly state the types of the pair key.
+
+#### GetAllAddressesBalancesBetween
+
+This showcases how we can further specialise our range to limit the results, by specifying
+the range between the second part of the key (in our case the denoms, which are strings).
+
+## IndexedMap
+
+`collections.IndexedMap` is a collection that uses a `collections.Map` under the hood, together with a struct
+that contains the indexes that we need to define.
+
+### Example
+
+Let's say we have an `auth.BaseAccount` struct which looks like the following:
+
+```go
+type BaseAccount struct {
+  AccountNumber uint64 `protobuf:"varint,3,opt,name=account_number,json=accountNumber,proto3" json:"account_number,omitempty"`
+  Sequence      uint64 `protobuf:"varint,4,opt,name=sequence,proto3" json:"sequence,omitempty"`
+}
+```
+
+First of all, when we save our accounts in state we map them using a primary key `sdk.AccAddress`.
+If it were a `collections.Map`, it would be `collections.Map[sdk.AccAddress, authtypes.BaseAccount]`.
+
+Then we also want to be able to get an account not only by its `sdk.AccAddress`, but also by its `AccountNumber`.
+
+So we can say we want to create an `Index` that maps our `BaseAccount` to its `AccountNumber`.
+
+We also know that this `Index` is unique: there can only be one `BaseAccount` that maps to a specific
+`AccountNumber`.
+
+We start by defining the object that contains our index:
+
+```go expandable
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+  Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+  return AccountsIndexes{
+    Number: indexes.NewUnique(
+      sb, AccountsNumberIndexPrefix, "accounts_by_number",
+      collections.Uint64Key, sdk.AccAddressKey,
+      func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+        return v.AccountNumber, nil
+      },
+    ),
+  }
+}
+```
+
+We create an `AccountsIndexes` struct which contains one field: `Number`. This field represents our `AccountNumber` index.
+`AccountNumber` is a field of `authtypes.BaseAccount` and it's a `uint64`.
+
+Then we can see that in our `AccountsIndexes` struct the `Number` field is defined as:
+
+```go
+*indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+```
+
+Where the first type parameter is `uint64`, which is the field type of our index.
+The second type parameter is the primary key `sdk.AccAddress`.
+And the third type parameter is the actual object we're storing, `authtypes.BaseAccount`.
+
+Then we create a `NewAccountIndexes` function that instantiates and returns the `AccountsIndexes` struct.
+
+The function takes a `SchemaBuilder`. Then we instantiate our `indexes.Unique`; let's analyse the arguments we pass to
+`indexes.NewUnique`.
+
+#### NOTE: indexes list
+
+The `AccountsIndexes` struct contains the indexes. The `NewIndexedMap` function will infer the indexes from that struct
+using reflection; this happens only at init and is not computationally expensive. In case you want to explicitly declare
+indexes, implement the `Indexes` interface in the `AccountsIndexes` struct:
+
+```go
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+  return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{
+    a.Number,
+  }
+}
+```
+
+#### Instantiating an `indexes.Unique`
+
+The first three arguments we already know: the `SchemaBuilder`, the `Prefix` which is our index prefix (the partition
+where the key relationships for the `Number` index will be maintained), and the human name for the `Number` index.
+
+The fourth argument is `collections.Uint64Key`, a key codec to deal with `uint64` keys. We pass it because
+the key we're trying to index is a `uint64` (the account number). The fifth argument is the primary key codec,
+which in our case is `sdk.AccAddressKey` (remember: we're mapping `sdk.AccAddress` => `BaseAccount`).
+
+Then, as the last parameter, we pass a function that, given a `BaseAccount`, returns its `AccountNumber`.
+
+After this we can proceed instantiating our `IndexedMap`.
+
+```go expandable
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+  Schema   collections.Schema
+  Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+  sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+  return Keeper{
+    Accounts: collections.NewIndexedMap(
+      sb, AccountsPrefix, "accounts",
+      sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+      NewAccountIndexes(sb),
+    ),
+  }
+}
+```
+
+As we can see, for now this is the same thing as we did for `collections.Map`:
+we pass it the `SchemaBuilder`, the `Prefix` where we plan to store the mapping between `sdk.AccAddress` and `authtypes.BaseAccount`,
+the human name, and the respective `sdk.AccAddress` key codec and `authtypes.BaseAccount` value codec.
+
+Then we pass the instantiation of our `AccountsIndexes` through `NewAccountIndexes`.
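Conceptually, what `indexes.Unique` maintains alongside the primary map is a reverse mapping from the indexed field to the primary key, with a uniqueness check on writes. The following is a simplified, hypothetical model using plain Go maps — not the actual `indexes.Unique` implementation, which persists both mappings in the KV store:

```go
package main

import "fmt"

// account is a cut-down stand-in for authtypes.BaseAccount.
type account struct {
	Number   uint64
	Sequence uint64
}

// indexedAccounts models an IndexedMap with one unique index:
// a primary mapping (address -> account) plus a reverse mapping
// (account number -> address) that rejects duplicates.
type indexedAccounts struct {
	byAddress map[string]account
	byNumber  map[uint64]string
}

func newIndexedAccounts() *indexedAccounts {
	return &indexedAccounts{
		byAddress: map[string]account{},
		byNumber:  map[uint64]string{},
	}
}

func (i *indexedAccounts) Set(addr string, acc account) error {
	// uniqueness check: another address must not already own this number
	if existing, ok := i.byNumber[acc.Number]; ok && existing != addr {
		return fmt.Errorf("account number %d already taken by %s", acc.Number, existing)
	}
	i.byAddress[addr] = acc
	i.byNumber[acc.Number] = addr
	return nil
}

func (i *indexedAccounts) GetByNumber(n uint64) (string, account, bool) {
	addr, ok := i.byNumber[n]
	if !ok {
		return "", account{}, false
	}
	return addr, i.byAddress[addr], true
}

func main() {
	idx := newIndexedAccounts()
	_ = idx.Set("cosmos1aaa", account{Number: 1})
	addr, acc, found := idx.GetByNumber(1)
	fmt.Println(addr, acc.Number, found) // cosmos1aaa 1 true
	err := idx.Set("cosmos1bbb", account{Number: 1})
	fmt.Println(err != nil) // true: uniqueness violated
}
```

The real `IndexedMap` performs the same bookkeeping transparently on `Set` and `Remove`, which is why only the instantiation is verbose.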
+
+Full example:
+
+```go expandable
+package docs
+
+import (
+  "cosmossdk.io/collections"
+  "cosmossdk.io/collections/indexes"
+  storetypes "cosmossdk.io/store/types"
+
+  "github.com/cosmos/cosmos-sdk/codec"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+  authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+  Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+  return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{
+    a.Number,
+  }
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+  return AccountsIndexes{
+    Number: indexes.NewUnique(
+      sb, AccountsNumberIndexPrefix, "accounts_by_number",
+      collections.Uint64Key, sdk.AccAddressKey,
+      func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+        return v.AccountNumber, nil
+      },
+    ),
+  }
+}
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+  Schema   collections.Schema
+  Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+  sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+  return Keeper{
+    Accounts: collections.NewIndexedMap(
+      sb, AccountsPrefix, "accounts",
+      sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+      NewAccountIndexes(sb),
+    ),
+  }
+}
+```
+
+### Working with IndexedMaps
+
+Whilst instantiating `collections.IndexedMap` is tedious, working with it is extremely smooth.
+
+Let's take the full example, and expand it with some use-cases.
+
+```go expandable
+package docs
+
+import (
+  "cosmossdk.io/collections"
+  "cosmossdk.io/collections/indexes"
+  storetypes "cosmossdk.io/store/types"
+
+  "github.com/cosmos/cosmos-sdk/codec"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+  authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+  Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+  return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{
+    a.Number,
+  }
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+  return AccountsIndexes{
+    Number: indexes.NewUnique(
+      sb, AccountsNumberIndexPrefix, "accounts_by_number",
+      collections.Uint64Key, sdk.AccAddressKey,
+      func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+        return v.AccountNumber, nil
+      },
+    ),
+  }
+}
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+  Schema   collections.Schema
+  Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+  sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+  return Keeper{
+    Accounts: collections.NewIndexedMap(
+      sb, AccountsPrefix, "accounts",
+      sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+      NewAccountIndexes(sb),
+    ),
+  }
+}
+
+func (k Keeper) CreateAccount(ctx sdk.Context, addr sdk.AccAddress) error {
+  nextAccountNumber := k.getNextAccountNumber()
+  newAcc := authtypes.BaseAccount{
+    AccountNumber: nextAccountNumber,
+    Sequence:      0,
+  }
+  return k.Accounts.Set(ctx, addr, newAcc)
+}
+
+func (k Keeper) RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) error {
+  return k.Accounts.Remove(ctx, addr)
+}
+
+func (k Keeper)
+GetAccountByNumber(ctx sdk.Context, accNumber uint64) (sdk.AccAddress, authtypes.BaseAccount, error) {
+  accAddress, err := k.Accounts.Indexes.Number.MatchExact(ctx, accNumber)
+  if err != nil {
+    return nil, authtypes.BaseAccount{}, err
+  }
+
+  acc, err := k.Accounts.Get(ctx, accAddress)
+  if err != nil {
+    return nil, authtypes.BaseAccount{}, err
+  }
+  return accAddress, acc, nil
+}
+
+func (k Keeper) GetAccountsByNumber(ctx sdk.Context, startAccNum, endAccNum uint64) ([]authtypes.BaseAccount, error) {
+  rng := new(collections.Range[uint64]).
+    StartInclusive(startAccNum).
+    EndInclusive(endAccNum)
+
+  iter, err := k.Accounts.Indexes.Number.Iterate(ctx, rng)
+  if err != nil {
+    return nil, err
+  }
+  return indexes.CollectValues(ctx, k.Accounts, iter)
+}
+
+func (k Keeper) getNextAccountNumber() uint64 {
+  return 0
+}
+```
+
+## Collections with interfaces as values
+
+Although the cosmos-sdk is shifting away from the usage of the interface registry, there are still some places where it is used.
+In order to support old code, we have to support collections with interface values.
+
+The generic `codec.CollValue` is not able to handle interface values, so we need to use the special `codec.CollInterfaceValue`.
+`codec.CollInterfaceValue` takes a `codec.BinaryCodec` as an argument, and uses it to marshal and unmarshal values as interfaces.
+It lives in the `codec` package, whose import path is `github.com/cosmos/cosmos-sdk/codec`.
+
+### Instantiating Collections with interface values
+
+In order to instantiate a collection with interface values, we need to use `codec.CollInterfaceValue` instead of `codec.CollValue`.
+
+```go expandable
+package example
+
+import (
+  "cosmossdk.io/collections"
+  storetypes "cosmossdk.io/store/types"
+
+  "github.com/cosmos/cosmos-sdk/codec"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+  authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+  Schema   collections.Schema
+  Accounts *collections.Map[sdk.AccAddress, sdk.AccountI]
+}
+
+func NewKeeper(cdc codec.BinaryCodec, storeKey *storetypes.KVStoreKey) Keeper {
+  sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+  return Keeper{
+    Accounts: collections.NewMap(
+      sb, AccountsPrefix, "accounts",
+      sdk.AccAddressKey, codec.CollInterfaceValue[sdk.AccountI](cdc),
+    ),
+  }
+}
+
+func (k Keeper) SaveBaseAccount(ctx sdk.Context, account authtypes.BaseAccount) error {
+  return k.Accounts.Set(ctx, account.GetAddress(), account)
+}
+
+func (k Keeper) SaveModuleAccount(ctx sdk.Context, account authtypes.ModuleAccount) error {
+  return k.Accounts.Set(ctx, account.GetAddress(), account)
+}
+
+func (k Keeper) GetAccount(ctx sdk.Context, addr sdk.AccAddress) (sdk.AccountI, error) {
+  return k.Accounts.Get(ctx, addr)
+}
+```
+
+## Triple key
+
+The `collections.Triple` is a special type of key composed of three keys; it works just like `collections.Pair`, but with one more component.
+
+Let's see an example.
+
+```go expandable
+package example
+
+import (
+  "context"
+
+  "cosmossdk.io/collections"
+  storetypes "cosmossdk.io/store/types"
+
+  "github.com/cosmos/cosmos-sdk/codec"
+  sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+type AccAddress = string
+type ValAddress = string
+
+type Keeper struct {
+  // let's simulate we have redelegations which are stored as a triple key composed of
+  // the delegator, the source validator and the destination validator.
+  Redelegations collections.KeySet[collections.Triple[AccAddress, ValAddress, ValAddress]]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+  sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+  return Keeper{
+    Redelegations: collections.NewKeySet(sb, collections.NewPrefix(0), "redelegations", collections.TripleKeyCodec(collections.StringKey, collections.StringKey, collections.StringKey)),
+  }
+}
+
+// RedelegationsByDelegator iterates over all the redelegations of a given delegator and calls onResult providing
+// each redelegation from source validator towards the destination validator.
+func (k Keeper) RedelegationsByDelegator(ctx context.Context, delegator AccAddress, onResult func(src, dst ValAddress) (stop bool, err error)) error {
+  rng := collections.NewPrefixedTripleRange[AccAddress, ValAddress, ValAddress](delegator)
+  return k.Redelegations.Walk(ctx, rng, func(key collections.Triple[AccAddress, ValAddress, ValAddress]) (stop bool, err error) {
+    return onResult(key.K2(), key.K3())
+  })
+}
+
+// RedelegationsByDelegatorAndValidator iterates over all the redelegations of a given delegator and its source validator and calls onResult for each
+// destination validator.
+func (k Keeper) RedelegationsByDelegatorAndValidator(ctx context.Context, delegator AccAddress, validator ValAddress, onResult func(dst ValAddress) (stop bool, err error)) error {
+  rng := collections.NewSuperPrefixedTripleRange[AccAddress, ValAddress, ValAddress](delegator, validator)
+  return k.Redelegations.Walk(ctx, rng, func(key collections.Triple[AccAddress, ValAddress, ValAddress]) (stop bool, err error) {
+    return onResult(key.K3())
+  })
+}
+```
+
+## Advanced Usages
+
+### Alternative Value Codec
+
+The `codec.AltValueCodec` allows a collection to decode values using a different codec than the one used to encode them.
+Essentially, it makes it possible to decode two different byte representations of the same concrete value.
+It can be used to lazily migrate values from one byte representation to another, as long as the new codec is
+not able to decode the old representation (so that the fallback is triggered unambiguously).
+
+A concrete example can be found in `x/bank`, where the balance was initially stored as `Coin` and then migrated to `Int`.
+
+```go
+var BankBalanceValueCodec = codec.NewAltValueCodec(sdk.IntValue, func(b []byte) (sdk.Int, error) {
+  coin := sdk.Coin{}
+  err := coin.Unmarshal(b)
+  if err != nil {
+    return sdk.Int{}, err
+  }
+  return coin.Amount, nil
+})
+```
+
+The above example shows how to create an `AltValueCodec` that can decode both `sdk.Int` and `sdk.Coin` values. The provided
+decoder function is used as a fallback in case the default decoder fails. When the value is encoded back into state,
+the default encoder is used. This allows values to be lazily migrated to the new byte representation.
diff --git a/docs/sdk/next/documentation/state-storage/interblock-cache.mdx b/docs/sdk/next/documentation/state-storage/interblock-cache.mdx
new file mode 100644
index 00000000..fd22dd22
--- /dev/null
+++ b/docs/sdk/next/documentation/state-storage/interblock-cache.mdx
@@ -0,0 +1,294 @@
+---
+title: Inter-block Cache
+---
+
+## Synopsis
+
+The inter-block cache is an in-memory cache storing (in most cases) immutable state that modules need to read in between blocks. When enabled, all sub-stores of a multi store, e.g., `rootmulti`, are wrapped.
+
+## Overview and basic concepts
+
+### Motivation
+
+The goal of the inter-block cache is to allow SDK modules fast access to data that is typically queried during the execution of every block. This is data that does not change often, e.g. module parameters. The inter-block cache wraps each `CommitKVStore` of a multi store such as `rootmulti` with a fixed-size, write-through cache.
Caches are not cleared after a block is committed, as opposed to other caching layers such as `cachekv`.
+
+### Definitions
+
+- `Store key` uniquely identifies a store.
+- `KVCache` is a `CommitKVStore` wrapped with a cache.
+- `Cache manager` is a key component of the inter-block cache responsible for maintaining a map from `store keys` to `KVCaches`.
+
+## System model and properties
+
+### Assumptions
+
+This specification assumes that there exists a cache implementation accessible to the inter-block cache feature.
+
+> The implementation uses an adaptive replacement cache (ARC), an enhancement over the standard least-recently-used (LRU) cache in that it tracks both frequency and recency of use.
+
+The inter-block cache requires the cache implementation to provide methods to create a cache, add a key/value pair, remove a key/value pair and retrieve the value associated to a key. In this specification, we assume that a `Cache` feature offers this functionality through the following methods:
+
+- `NewCache(size int)` creates a new cache with `size` capacity and returns it.
+- `Get(key string)` attempts to retrieve a key/value pair from `Cache`. It returns `(value []byte, success bool)`. If `Cache` contains the key, then `value` contains the associated value and `success=true`. Otherwise, `success=false` and `value` should be ignored.
+- `Add(key string, value []byte)` inserts a key/value pair into the `Cache`.
+- `Remove(key string)` removes the key/value pair identified by `key` from `Cache`.
+
+The specification also assumes that `CommitKVStore` offers the following API:
+
+- `Get(key string)` attempts to retrieve a key/value pair from `CommitKVStore`.
+- `Set(key string, value []byte)` inserts a key/value pair into the `CommitKVStore`.
+- `Delete(key string)` removes the key/value pair identified by `key` from `CommitKVStore`.
+
+> Ideally, both `Cache` and `CommitKVStore` should be specified in a different document and referenced here.
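To make the assumed `Cache` API concrete, here is a minimal sketch in Go. It is illustrative only: the map-backed `toyCache` type and its naive size bound are assumptions made for this example, not the ARC implementation the real feature uses.

```go
package main

import "fmt"

// toyCache is a hypothetical, minimal stand-in for the assumed Cache feature.
// It offers NewCache / Get / Add / Remove with the signatures described above.
type toyCache struct {
	size int
	data map[string][]byte
}

func NewCache(size int) *toyCache {
	return &toyCache{size: size, data: make(map[string][]byte)}
}

func (c *toyCache) Get(key string) (value []byte, success bool) {
	value, success = c.data[key]
	return value, success
}

func (c *toyCache) Add(key string, value []byte) {
	// Naive bound: drop an arbitrary entry when full (ARC would evict
	// based on recency and frequency instead).
	if len(c.data) >= c.size {
		for k := range c.data {
			delete(c.data, k)
			break
		}
	}
	c.data[key] = value
}

func (c *toyCache) Remove(key string) {
	delete(c.data, key)
}

func main() {
	c := NewCache(2)
	c.Add("params/gas", []byte("100"))
	v, ok := c.Get("params/gas")
	fmt.Println(string(v), ok) // 100 true
	c.Remove("params/gas")
	_, ok = c.Get("params/gas")
	fmt.Println(ok) // false
}
```

The only behavioural property the spec relies on is that `Get` after `Add` returns the value with `success=true`, and `Get` after `Remove` returns `success=false`; the eviction policy is an implementation detail.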
+
+### Properties
+
+#### Thread safety
+
+Accessing the `cache manager` or a `KVCache` is not thread-safe: no method is guarded with a lock.
+Note that this is true even if the cache implementation is thread-safe.
+
+> For instance, assume that two `Set` operations are executed concurrently on the same key, each writing a different value. After both are executed, the cache and the underlying store may be inconsistent, each storing a different value under the same key.
+
+#### Crash recovery
+
+The inter-block cache transparently delegates `Commit()` to its aggregate `CommitKVStore`. If the
+aggregate `CommitKVStore` supports atomic writes and uses them to guarantee that the store is always in a consistent state on disk, the inter-block cache can be transparently moved to a consistent state when a failure occurs.
+
+> Note that this is the case for `IAVLStore`, the preferred `CommitKVStore`. On commit, it calls `SaveVersion()` on the underlying `MutableTree`. `SaveVersion` writes to disk are atomic via batching. This means that only consistent versions of the store (the tree) are written to the disk. Thus, in case of a failure during a `SaveVersion` call, on recovery from disk, the version of the store will be consistent.
+
+#### Iteration
+
+Iteration over each wrapped store is supported via the embedded `CommitKVStore` interface.
+
+## Technical specification
+
+### General design
+
+The inter-block cache feature is composed of two components: `CommitKVCacheManager` and `CommitKVCache`.
+
+`CommitKVCacheManager` implements the cache manager. It maintains a mapping from a store key to a `CommitKVStore`.
+
+```go
+type CommitKVStoreCacheManager struct {
+  cacheSize uint
+  caches    map[string]CommitKVStore
+}
+```
+
+`CommitKVStoreCache` implements a `KVStore`: a write-through cache that wraps a `CommitKVStore`. This means that deletes and writes always happen to both the cache and the underlying `CommitKVStore`. Reads, on the other hand, first hit the internal cache.
During a cache miss, the read is delegated to the underlying `CommitKVStore` and cached.
+
+```go
+type CommitKVStoreCache struct {
+  store CommitKVStore
+  cache Cache
+}
+```
+
+To enable the inter-block cache on `rootmulti`, one needs to instantiate a `CommitKVCacheManager` and set it by calling `SetInterBlockCache()` before calling one of `LoadLatestVersion()`, `LoadLatestVersionAndUpgrade(...)`, `LoadVersionAndUpgrade(...)` and `LoadVersion(version)`.
+
+### API
+
+#### CommitKVCacheManager
+
+The method `NewCommitKVStoreCacheManager` creates a new cache manager and returns it.
+
+| Name | Type    | Description                                                               |
+| ---- | ------- | ------------------------------------------------------------------------- |
+| size | integer | Determines the capacity of each of the KVCaches maintained by the manager |
+
+```go
+func NewCommitKVStoreCacheManager(size uint) CommitKVStoreCacheManager {
+  manager = CommitKVStoreCacheManager{
+    size, make(map[string]CommitKVStore)
+  }
+  return manager
+}
+```
+
+`GetStoreCache` returns a cache from the CommitStoreCacheManager for a given store key. If no cache exists for the store key, then one is created and set.
+
+| Name     | Type                        | Description                                                                  |
+| -------- | --------------------------- | ---------------------------------------------------------------------------- |
+| manager  | `CommitKVStoreCacheManager` | The cache manager                                                            |
+| storeKey | string                      | The store key of the store being retrieved                                   |
+| store    | `CommitKVStore`             | The store to cache in case the manager does not have it in its map of caches |
+
+```go expandable
+func GetStoreCache(
+  manager CommitKVStoreCacheManager,
+  storeKey string,
+  store CommitKVStore) CommitKVStore {
+  if manager.caches.has(storeKey) {
+    return manager.caches.get(storeKey)
+  } else {
+    cache = NewCommitKVStoreCache(store, manager.cacheSize)
+    manager.set(storeKey, cache)
+    return cache
+  }
+}
+```
+
+`Unwrap` returns the underlying CommitKVStore for a given store key.
+
+| Name     | Type                        | Description                                |
+| -------- | --------------------------- | ------------------------------------------ |
+| manager  | `CommitKVStoreCacheManager` | The cache manager                          |
+| storeKey | string                      | The store key of the store being unwrapped |
+
+```go expandable
+func Unwrap(
+  manager CommitKVStoreCacheManager,
+  storeKey string) CommitKVStore {
+  if manager.caches.has(storeKey) {
+    cache = manager.caches.get(storeKey)
+    return cache.store
+  } else {
+    return nil
+  }
+}
+```
+
+`Reset` resets the manager's map of caches.
+
+| Name    | Type                        | Description       |
+| ------- | --------------------------- | ----------------- |
+| manager | `CommitKVStoreCacheManager` | The cache manager |
+
+```go
+func Reset(manager CommitKVStoreCacheManager) {
+  for storeKey := range manager.caches {
+    delete(manager.caches, storeKey)
+  }
+}
+```
+
+#### CommitKVStoreCache
+
+`NewCommitKVStoreCache` creates a new `CommitKVStoreCache` and returns it.
+
+| Name  | Type          | Description                                        |
+| ----- | ------------- | -------------------------------------------------- |
+| store | CommitKVStore | The store to be cached                             |
+| size  | integer       | Determines the capacity of the cache being created |
+
+```go
+func NewCommitKVStoreCache(
+  store CommitKVStore,
+  size uint) CommitKVStoreCache {
+  KVCache = CommitKVStoreCache{
+    store, NewCache(size)
+  }
+  return KVCache
+}
+```
+
+`Get` retrieves a value by key. It first looks in the cache. If the key is not in the cache, the query is delegated to the underlying `CommitKVStore`. In the latter case, the key/value pair is cached. The method returns the value.
+
+| Name    | Type                 | Description                                                         |
+| ------- | -------------------- | ------------------------------------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is retrieved |
+| key     | string               | Key of the key/value pair being retrieved                           |
+
+```go expandable
+func Get(
+  KVCache CommitKVStoreCache,
+  key string) []byte {
+  valueCache, success := KVCache.cache.Get(key)
+  if success {
+    // cache hit
+    return valueCache
+  } else {
+    // cache miss
+    valueStore = KVCache.store.Get(key)
+    KVCache.cache.Add(key, valueStore)
+    return valueStore
+  }
+}
+```
+
+`Set` inserts a key/value pair into both the write-through cache and the underlying `CommitKVStore`.
+
+| Name    | Type                 | Description                                                      |
+| ------- | -------------------- | ---------------------------------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` to which the key/value pair is inserted |
+| key     | string               | Key of the key/value pair being inserted                         |
+| value   | \[]byte              | Value of the key/value pair being inserted                       |
+
+```go
+func Set(
+  KVCache CommitKVStoreCache,
+  key string,
+  value []byte) {
+  KVCache.cache.Add(key, value)
+  KVCache.store.Set(key, value)
+}
+```
+
+`Delete` removes a key/value pair from both the write-through cache and the underlying `CommitKVStore`.
+
+| Name    | Type                 | Description                                                       |
+| ------- | -------------------- | ----------------------------------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is deleted |
+| key     | string               | Key of the key/value pair being deleted                           |
+
+```go
+func Delete(
+  KVCache CommitKVStoreCache,
+  key string) {
+  KVCache.cache.Remove(key)
+  KVCache.store.Delete(key)
+}
+```
+
+`CacheWrap` wraps a `CommitKVStoreCache` with another caching layer (`CacheKV`).
+
+> It is unclear whether there is a use case for `CacheWrap`.
+
+| Name    | Type                 | Description                            |
+| ------- | -------------------- | -------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` being wrapped |
+
+```go
+func CacheWrap(
+  KVCache CommitKVStoreCache) {
+  return CacheKV.NewStore(KVCache)
+}
+```
+
+### Implementation details
+
+The inter-block cache implementation uses a fixed-size adaptive replacement cache (ARC) as its cache. [The ARC implementation](https://github.com/hashicorp/golang-lru/blob/main/arc/arc.go) is thread-safe. ARC is an enhancement over the standard LRU cache in that it tracks both frequency and recency of use. This avoids a burst in access to new entries from evicting the frequently used older entries. It adds some additional tracking overhead to a standard LRU cache; computationally it is roughly `2x` the cost, and the extra memory overhead is linear with the size of the cache. The default cache size is `1000`.
+
+## History
+
+Dec 20, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/docs/sdk/next/documentation/state-storage/store.mdx b/docs/sdk/next/documentation/state-storage/store.mdx
new file mode 100644
index 00000000..035e9555
--- /dev/null
+++ b/docs/sdk/next/documentation/state-storage/store.mdx
@@ -0,0 +1,11855 @@
+---
+title: Store
+---
+
+## Synopsis
+
+A store is a data structure that holds the state of the application.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK application](/docs/sdk/next/documentation/application-framework/app-anatomy)
+
+
+
+## Introduction to Cosmos SDK Stores
+
+The Cosmos SDK comes with a large set of stores to persist the state of applications. By default, the main store of Cosmos SDK applications is a `multistore`, i.e. a store of stores. Developers can add any number of key-value stores to the multistore, depending on their application needs.
The multistore exists to support the modularity of the Cosmos SDK, as it lets each module declare and manage its own subset of the state. Key-value stores in the multistore can only be accessed with a specific capability `key`, which is typically held in the [`keeper`](/docs/sdk/next/documentation/module-system/keeper) of the module that declared the store.
+
+```text expandable
++-----------------------------------------------------+
+|                                                     |
+|  +-----------------------------------------------+  |
+|  |                                               |  |
+|  |  KVStore 1 - Managed by keeper of Module 1    |  |
+|  |                                               |  |
+|  +-----------------------------------------------+  |
+|                                                     |
+|  +-----------------------------------------------+  |
+|  |                                               |  |
+|  |  KVStore 2 - Managed by keeper of Module 2    |  |
+|  |                                               |  |
+|  +-----------------------------------------------+  |
+|                                                     |
+|  +-----------------------------------------------+  |
+|  |                                               |  |
+|  |  KVStore 3 - Managed by keeper of Module 2    |  |
+|  |                                               |  |
+|  +-----------------------------------------------+  |
+|                                                     |
+|  +-----------------------------------------------+  |
+|  |                                               |  |
+|  |  KVStore 4 - Managed by keeper of Module 3    |  |
+|  |                                               |  |
+|  +-----------------------------------------------+  |
+|                                                     |
+|  +-----------------------------------------------+  |
+|  |                                               |  |
+|  |  KVStore 5 - Managed by keeper of Module 4    |  |
+|  |                                               |  |
+|  +-----------------------------------------------+  |
+|                                                     |
+|                  Main Multistore                    |
+|                                                     |
++-----------------------------------------------------+
+
+                  Application's State
+```
+
+### Store Interface
+
+At its very core, a Cosmos SDK `store` is an object that holds a `CacheWrapper` and has a `GetStoreType()` method:
+
+```go expandable
+package types
+
+import (
+  "fmt"
+  "io"
+  "maps"
+  "slices"
+
+  "github.com/cometbft/cometbft/proto/tendermint/crypto"
+  dbm "github.com/cosmos/cosmos-db"
+
+  "cosmossdk.io/store/metrics"
+  pruningtypes "cosmossdk.io/store/pruning/types"
+  snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+  GetStoreType() StoreType
+  CacheWrapper
+}
+
+/ something that can
persist to disk
type Committer interface {
	Commit() CommitID
	LastCommitID() CommitID

	// WorkingHash returns the hash of the KVStore's state before commit.
	WorkingHash() []byte

	SetPruning(pruningtypes.PruningOptions)
	GetPruning() pruningtypes.PruningOptions
}

// Stores of MultiStore must implement CommitStore.
type CommitStore interface {
	Committer
	Store
}

// Queryable allows a Store to expose internal state to the abci.Query
// interface. Multistore can route requests to the proper Store.
//
// This is an optional, but useful extension to any CommitStore
type Queryable interface {
	Query(*RequestQuery) (*ResponseQuery, error)
}

type RequestQuery struct {
	Data   []byte
	Path   string
	Height int64
	Prove  bool
}

type ResponseQuery struct {
	Code      uint32
	Log       string
	Info      string
	Index     int64
	Key       []byte
	Value     []byte
	ProofOps  *crypto.ProofOps
	Height    int64
	Codespace string
}

//----------------------------------------
// MultiStore

// StoreUpgrades defines a series of transformations to apply the multistore db upon load
type StoreUpgrades struct {
	Added   []string      `json:"added"`
	Renamed []StoreRename `json:"renamed"`
	Deleted []string      `json:"deleted"`
}

// StoreRename defines a name change of a sub-store.
// All data previously under a PrefixStore with OldKey will be copied
// to a PrefixStore with NewKey, then deleted from OldKey store.
type StoreRename struct {
	OldKey string `json:"old_key"`
	NewKey string `json:"new_key"`
}

// IsAdded returns true if the given key should be added
func (s *StoreUpgrades) IsAdded(key string) bool {
	if s == nil {
		return false
	}
	return slices.Contains(s.Added, key)
}

// IsDeleted returns true if the given key should be deleted
func (s *StoreUpgrades) IsDeleted(key string) bool {
	if s == nil {
		return false
	}
	return slices.Contains(s.Deleted, key)
}

// RenamedFrom returns the oldKey if it was renamed
// Returns "" if it was not renamed
func (s *StoreUpgrades) RenamedFrom(key string) string {
	if s == nil {
		return ""
	}
	for _, re := range s.Renamed {
		if re.NewKey == key {
			return re.OldKey
		}
	}
	return ""
}

type MultiStore interface {
	Store

	// Branches MultiStore into a cached storage object.
	// NOTE: Caller should probably not call .Write() on each, but
	// call CacheMultiStore.Write().
	CacheMultiStore() CacheMultiStore

	// CacheMultiStoreWithVersion branches the underlying MultiStore where
	// each stored is loaded at a specific version (height).
	CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)

	// Convenience for fetching substores.
	// If the store does not exist, panics.
	GetStore(StoreKey) Store
	GetKVStore(StoreKey) KVStore

	// TracingEnabled returns if tracing is enabled for the MultiStore.
	TracingEnabled() bool

	// SetTracer sets the tracer for the MultiStore that the underlying
	// stores will utilize to trace operations. The modified MultiStore is
	// returned.
	SetTracer(w io.Writer) MultiStore

	// SetTracingContext sets the tracing context for a MultiStore. It is
	// implied that the caller should update the context when necessary between
	// tracing operations. The modified MultiStore is returned.
	SetTracingContext(TraceContext) MultiStore

	// LatestVersion returns the latest version in the store
	LatestVersion() int64
}

// From MultiStore.CacheMultiStore()....
type CacheMultiStore interface {
	MultiStore
	Write() // Writes operations to underlying KVStore
}

// CommitMultiStore is an interface for a MultiStore without cache capabilities.
type CommitMultiStore interface {
	Committer
	MultiStore
	snapshottypes.Snapshotter

	// Mount a store of type using the given db.
	// If db == nil, the new store will use the CommitMultiStore db.
	MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)

	// Panics on a nil key.
	GetCommitStore(key StoreKey) CommitStore

	// Panics on a nil key.
	GetCommitKVStore(key StoreKey) CommitKVStore

	// Load the latest persisted version. Called once after all calls to
	// Mount*Store() are complete.
	LoadLatestVersion() error

	// LoadLatestVersionAndUpgrade will load the latest version, but also
	// rename/delete/create sub-store keys, before registering all the keys
	// in order to handle breaking formats in migrations
	LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error

	// LoadVersionAndUpgrade will load the named version, but also
	// rename/delete/create sub-store keys, before registering all the keys
	// in order to handle breaking formats in migrations
	LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error

	// Load a specific persisted version. When you load an old version, or when
	// the last commit attempt didn't complete, the next commit after loading
	// must be idempotent (return the same commit id). Otherwise the behavior is
	// undefined.
	LoadVersion(ver int64) error

	// Set an inter-block (persistent) cache that maintains a mapping from
	// StoreKeys to CommitKVStores.
	SetInterBlockCache(MultiStorePersistentCache)

	// SetInitialVersion sets the initial version of the IAVL tree. It is used when
	// starting a new chain at an arbitrary height.
	SetInitialVersion(version int64) error

	// SetIAVLCacheSize sets the cache size of the IAVL tree.
	SetIAVLCacheSize(size int)

	// SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
	SetIAVLDisableFastNode(disable bool)

	// SetIAVLSyncPruning set sync/async pruning on iavl.
	// It is not recommended to use this option.
	// It is here to enable the prune command to force this to true, allowing the command to wait
	// for the pruning to finish before returning.
	SetIAVLSyncPruning(sync bool)

	// RollbackToVersion rollback the db to specific version(height).
	RollbackToVersion(version int64) error

	// ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
	ListeningEnabled(key StoreKey) bool

	// AddListeners adds a listener for the KVStore belonging to the provided StoreKey
	AddListeners(keys []StoreKey)

	// PopStateCache returns the accumulated state change messages from the CommitMultiStore
	PopStateCache() []*StoreKVPair

	// SetMetrics sets the metrics for the KVStore
	SetMetrics(metrics metrics.StoreMetrics)
}

//----------------------------------------
// KVStore

// BasicKVStore is a simple interface to get/set data
type BasicKVStore interface {
	// Get returns nil if key doesn't exist. Panics on nil key.
	Get(key []byte) []byte

	// Has checks if a key exists. Panics on nil key.
	Has(key []byte) bool

	// Set sets the key. Panics on nil key or value.
	Set(key, value []byte)

	// Delete deletes the key. Panics on nil key.
	Delete(key []byte)
}

// KVStore additionally provides iteration and deletion
type KVStore interface {
	Store
	BasicKVStore

	// Iterator over a domain of keys in ascending order. End is exclusive.
	// Start must be less than end, or the Iterator is invalid.
	// Iterator must be closed by caller.
	// To iterate over entire domain, use store.Iterator(nil, nil)
	// CONTRACT: No writes may happen within a domain while an iterator exists over it.
	// Exceptionally allowed for cachekv.Store, safe to write in the modules.
	Iterator(start, end []byte) Iterator

	// Iterator over a domain of keys in descending order. End is exclusive.
	// Start must be less than end, or the Iterator is invalid.
	// Iterator must be closed by caller.
	// CONTRACT: No writes may happen within a domain while an iterator exists over it.
	// Exceptionally allowed for cachekv.Store, safe to write in the modules.
	ReverseIterator(start, end []byte) Iterator
}

// Iterator is an alias db's Iterator for convenience.
type Iterator = dbm.Iterator

// CacheKVStore branches a KVStore and provides read cache functionality.
// After calling .Write() on the CacheKVStore, all previously created
// CacheKVStores on the object expire.
type CacheKVStore interface {
	KVStore

	// Writes operations to underlying KVStore
	Write()
}

// CommitKVStore is an interface for MultiStore.
type CommitKVStore interface {
	Committer
	KVStore
}

//----------------------------------------
// CacheWrap

// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
// HeapStore, SpaceStore, etc.
type CacheWrap interface {
	// Write syncs with the underlying store.
	Write()

	// CacheWrap recursively wraps again.
	CacheWrap() CacheWrap

	// CacheWrapWithTrace recursively wraps again with tracing enabled.
	CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

type CacheWrapper interface {
	// CacheWrap branches a store.
	CacheWrap() CacheWrap

	// CacheWrapWithTrace branches a store with tracing enabled.
	CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

func (cid CommitID) IsZero() bool {
	return cid.Version == 0 && len(cid.Hash) == 0
}

func (cid CommitID) String() string {
	return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
}

//----------------------------------------
// Store types

// kind of store
type StoreType int

const (
	StoreTypeMulti StoreType = iota
	StoreTypeDB
	StoreTypeIAVL
	StoreTypeTransient
	StoreTypeMemory
	StoreTypeSMT
	StoreTypePersistent
)

func (st StoreType) String() string {
	switch st {
	case StoreTypeMulti:
		return "StoreTypeMulti"
	case StoreTypeDB:
		return "StoreTypeDB"
	case StoreTypeIAVL:
		return "StoreTypeIAVL"
	case StoreTypeTransient:
		return "StoreTypeTransient"
	case StoreTypeMemory:
		return "StoreTypeMemory"
	case StoreTypeSMT:
		return "StoreTypeSMT"
	case StoreTypePersistent:
		return "StoreTypePersistent"
	}

	return "unknown store type"
}

//----------------------------------------
// Keys for accessing substores

// StoreKey is a key used to index stores in a MultiStore.
type StoreKey interface {
	Name() string
	String() string
}

// CapabilityKey represent the Cosmos SDK keys for object-capability
// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
type CapabilityKey StoreKey

// KVStoreKey is used for accessing substores.
// Only the pointer value should ever be used - it functions as a capabilities key.
type KVStoreKey struct {
	name string
}

// NewKVStoreKey returns a new pointer to a KVStoreKey.
// Use a pointer so keys don't collide.
func NewKVStoreKey(name string) *KVStoreKey {
	if name == "" {
		panic("empty key name not allowed")
	}
	return &KVStoreKey{
		name: name,
	}
}

// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
	assertNoCommonPrefix(names)
	keys := make(map[string]*KVStoreKey, len(names))
	for _, n := range names {
		keys[n] = NewKVStoreKey(n)
	}
	return keys
}

func (key *KVStoreKey) Name() string {
	return key.name
}

func (key *KVStoreKey) String() string {
	return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
}

// TransientStoreKey is used for indexing transient stores in a MultiStore
type TransientStoreKey struct {
	name string
}

// Constructs new TransientStoreKey
// Must return a pointer according to the ocap principle
func NewTransientStoreKey(name string) *TransientStoreKey {
	return &TransientStoreKey{
		name: name,
	}
}

// Implements StoreKey
func (key *TransientStoreKey) Name() string {
	return key.name
}

// Implements StoreKey
func (key *TransientStoreKey) String() string {
	return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
}

// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
type MemoryStoreKey struct {
	name string
}

func NewMemoryStoreKey(name string) *MemoryStoreKey {
	return &MemoryStoreKey{name: name}
}

// Name returns the name of the MemoryStoreKey.
func (key *MemoryStoreKey) Name() string {
	return key.name
}

// String returns a stringified representation of the MemoryStoreKey.
func (key *MemoryStoreKey) String() string {
	return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
}

//----------------------------------------

// TraceContext contains TraceKVStore context data. It will be written with
// every trace operation.
type TraceContext map[string]interface{}

// Clone clones tc into another instance of TraceContext.
func (tc TraceContext) Clone() TraceContext {
	ret := TraceContext{}
	maps.Copy(ret, tc)
	return ret
}

// Merge merges value of newTc into tc.
func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
	if tc == nil {
		tc = TraceContext{}
	}
	maps.Copy(tc, newTc)
	return tc
}

// MultiStorePersistentCache defines an interface which provides inter-block
// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
type MultiStorePersistentCache interface {
	// Wrap and return the provided CommitKVStore with an inter-block (persistent)
	// cache.
	GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore

	// Return the underlying CommitKVStore for a StoreKey.
	Unwrap(key StoreKey) CommitKVStore

	// Reset the entire set of internal caches.
	Reset()
}

// StoreWithInitialVersion is a store that can have an arbitrary initial
// version.
type StoreWithInitialVersion interface {
	// SetInitialVersion sets the initial version of the IAVL tree. It is used when
	// starting a new chain at an arbitrary height.
	SetInitialVersion(version int64)
}

// NewTransientStoreKeys constructs a new map of TransientStoreKey's
// Must return pointers according to the ocap principle
// The function will panic if there is a potential conflict in names
// see `assertNoCommonPrefix` function for more details.
func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
	assertNoCommonPrefix(names)
	keys := make(map[string]*TransientStoreKey)
	for _, n := range names {
		keys[n] = NewTransientStoreKey(n)
	}
	return keys
}

// NewMemoryStoreKeys constructs a new map matching store key names to their
// respective MemoryStoreKey references.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
	assertNoCommonPrefix(names)
	keys := make(map[string]*MemoryStoreKey)
	for _, n := range names {
		keys[n] = NewMemoryStoreKey(n)
	}
	return keys
}
```

`GetStoreType()` is a simple method that returns the store's type, while `CacheWrapper` is a small interface that provides store read caching and write branching through the `Write` method:

```go
package types

import "io"

type Store interface {
	GetStoreType() StoreType
	CacheWrapper
}

// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
// a Committer, since Commit ephemeral store make no sense.
type CacheWrap interface {
	// Write syncs with the underlying store.
	Write()

	// CacheWrap recursively wraps again.
	CacheWrap() CacheWrap

	// CacheWrapWithTrace recursively wraps again with tracing enabled.
	CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

type CacheWrapper interface {
	// CacheWrap branches a store.
	CacheWrap() CacheWrap

	// CacheWrapWithTrace branches a store with tracing enabled.
	CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}
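// ------------------------------------------------------------------
// Illustrative sketch only (not part of the SDK source): the
// branch-then-write pattern these interfaces enable. A hypothetical
// helper branches a store via CacheWrap() (the returned value is
// typically a CacheKVStore), applies a state transition, and persists
// it with Write() only on success:
//
//	func applyOrDiscard(s CacheWrapper, apply func(CacheWrap) error) error {
//		branch := s.CacheWrap() // isolated, ephemeral branch
//		if err := apply(branch); err != nil {
//			return err // branch is discarded; the parent store is untouched
//		}
//		branch.Write() // sync accumulated writes to the parent store
//		return nil
//	}
// ------------------------------------------------------------------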
```

Branching and caching are used ubiquitously in the Cosmos SDK and must be implemented by every store type. A storage branch creates an isolated, ephemeral branch of a store that can be passed around and updated without affecting the main underlying store. Branches are used to trigger temporary state transitions that can be reverted later should an error occur. Read more about this in [context](/docs/sdk/next/documentation/application-framework/context#Store-branching).

### Commit Store

A commit store is a store that can commit changes made to the underlying tree or db. The Cosmos SDK differentiates simple stores from commit stores by extending the basic store interfaces with a `Committer`:

```go expandable
package types

import (
	"fmt"
	"io"
	"maps"
	"slices"

	"github.com/cometbft/cometbft/proto/tendermint/crypto"
	dbm "github.com/cosmos/cosmos-db"

	"cosmossdk.io/store/metrics"
	pruningtypes "cosmossdk.io/store/pruning/types"
	snapshottypes "cosmossdk.io/store/snapshots/types"
)

type Store interface {
	GetStoreType() StoreType
	CacheWrapper
}

// something that can persist to disk
type Committer interface {
	Commit() CommitID
	LastCommitID() CommitID

	// WorkingHash returns the hash of the KVStore's state before commit.
	WorkingHash() []byte

	SetPruning(pruningtypes.PruningOptions)
	GetPruning() pruningtypes.PruningOptions
}

// Stores of MultiStore must implement CommitStore.
type CommitStore interface {
	Committer
	Store
}

// Queryable allows a Store to expose internal state to the abci.Query
// interface. Multistore can route requests to the proper Store.
//
// This is an optional, but useful extension to any CommitStore
type Queryable interface {
	Query(*RequestQuery) (*ResponseQuery, error)
}

type RequestQuery struct {
	Data   []byte
	Path   string
	Height int64
	Prove  bool
}

type ResponseQuery struct {
	Code      uint32
	Log       string
	Info      string
	Index     int64
	Key       []byte
	Value     []byte
	ProofOps  *crypto.ProofOps
	Height    int64
	Codespace string
}

//----------------------------------------
// MultiStore

// StoreUpgrades defines a series of transformations to apply the multistore db upon load
type StoreUpgrades struct {
	Added   []string      `json:"added"`
	Renamed []StoreRename `json:"renamed"`
	Deleted []string      `json:"deleted"`
}

// StoreRename defines a name change of a sub-store.
// All data previously under a PrefixStore with OldKey will be copied
// to a PrefixStore with NewKey, then deleted from OldKey store.
type StoreRename struct {
	OldKey string `json:"old_key"`
	NewKey string `json:"new_key"`
}

// IsAdded returns true if the given key should be added
func (s *StoreUpgrades) IsAdded(key string) bool {
	if s == nil {
		return false
	}
	return slices.Contains(s.Added, key)
}

// IsDeleted returns true if the given key should be deleted
func (s *StoreUpgrades) IsDeleted(key string) bool {
	if s == nil {
		return false
	}
	return slices.Contains(s.Deleted, key)
}

// RenamedFrom returns the oldKey if it was renamed
// Returns "" if it was not renamed
func (s *StoreUpgrades) RenamedFrom(key string) string {
	if s == nil {
		return ""
	}
	for _, re := range s.Renamed {
		if re.NewKey == key {
			return re.OldKey
		}
	}
	return ""
}

type MultiStore interface {
	Store

	// Branches MultiStore into a cached storage object.
	// NOTE: Caller should probably not call .Write() on each, but
	// call CacheMultiStore.Write().
	CacheMultiStore() CacheMultiStore

	// CacheMultiStoreWithVersion branches the underlying MultiStore where
	// each stored is loaded at a specific version (height).
	CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)

	// Convenience for fetching substores.
	// If the store does not exist, panics.
	GetStore(StoreKey) Store
	GetKVStore(StoreKey) KVStore

	// TracingEnabled returns if tracing is enabled for the MultiStore.
	TracingEnabled() bool

	// SetTracer sets the tracer for the MultiStore that the underlying
	// stores will utilize to trace operations. The modified MultiStore is
	// returned.
	SetTracer(w io.Writer) MultiStore

	// SetTracingContext sets the tracing context for a MultiStore. It is
	// implied that the caller should update the context when necessary between
	// tracing operations. The modified MultiStore is returned.
	SetTracingContext(TraceContext) MultiStore

	// LatestVersion returns the latest version in the store
	LatestVersion() int64
}

// From MultiStore.CacheMultiStore()....
type CacheMultiStore interface {
	MultiStore
	Write() // Writes operations to underlying KVStore
}

// CommitMultiStore is an interface for a MultiStore without cache capabilities.
type CommitMultiStore interface {
	Committer
	MultiStore
	snapshottypes.Snapshotter

	// Mount a store of type using the given db.
	// If db == nil, the new store will use the CommitMultiStore db.
	MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)

	// Panics on a nil key.
	GetCommitStore(key StoreKey) CommitStore

	// Panics on a nil key.
	GetCommitKVStore(key StoreKey) CommitKVStore

	// Load the latest persisted version. Called once after all calls to
	// Mount*Store() are complete.
	LoadLatestVersion() error

	// LoadLatestVersionAndUpgrade will load the latest version, but also
	// rename/delete/create sub-store keys, before registering all the keys
	// in order to handle breaking formats in migrations
	LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error

	// LoadVersionAndUpgrade will load the named version, but also
	// rename/delete/create sub-store keys, before registering all the keys
	// in order to handle breaking formats in migrations
	LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error

	// Load a specific persisted version. When you load an old version, or when
	// the last commit attempt didn't complete, the next commit after loading
	// must be idempotent (return the same commit id). Otherwise the behavior is
	// undefined.
	LoadVersion(ver int64) error

	// Set an inter-block (persistent) cache that maintains a mapping from
	// StoreKeys to CommitKVStores.
	SetInterBlockCache(MultiStorePersistentCache)

	// SetInitialVersion sets the initial version of the IAVL tree. It is used when
	// starting a new chain at an arbitrary height.
	SetInitialVersion(version int64) error

	// SetIAVLCacheSize sets the cache size of the IAVL tree.
	SetIAVLCacheSize(size int)

	// SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
	SetIAVLDisableFastNode(disable bool)

	// SetIAVLSyncPruning set sync/async pruning on iavl.
	// It is not recommended to use this option.
	// It is here to enable the prune command to force this to true, allowing the command to wait
	// for the pruning to finish before returning.
	SetIAVLSyncPruning(sync bool)

	// RollbackToVersion rollback the db to specific version(height).
	RollbackToVersion(version int64) error

	// ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
	ListeningEnabled(key StoreKey) bool

	// AddListeners adds a listener for the KVStore belonging to the provided StoreKey
	AddListeners(keys []StoreKey)

	// PopStateCache returns the accumulated state change messages from the CommitMultiStore
	PopStateCache() []*StoreKVPair

	// SetMetrics sets the metrics for the KVStore
	SetMetrics(metrics metrics.StoreMetrics)
}

//----------------------------------------
// KVStore

// BasicKVStore is a simple interface to get/set data
type BasicKVStore interface {
	// Get returns nil if key doesn't exist. Panics on nil key.
	Get(key []byte) []byte

	// Has checks if a key exists. Panics on nil key.
	Has(key []byte) bool

	// Set sets the key. Panics on nil key or value.
	Set(key, value []byte)

	// Delete deletes the key. Panics on nil key.
	Delete(key []byte)
}

// KVStore additionally provides iteration and deletion
type KVStore interface {
	Store
	BasicKVStore

	// Iterator over a domain of keys in ascending order. End is exclusive.
	// Start must be less than end, or the Iterator is invalid.
	// Iterator must be closed by caller.
	// To iterate over entire domain, use store.Iterator(nil, nil)
	// CONTRACT: No writes may happen within a domain while an iterator exists over it.
	// Exceptionally allowed for cachekv.Store, safe to write in the modules.
	Iterator(start, end []byte) Iterator

	// Iterator over a domain of keys in descending order. End is exclusive.
	// Start must be less than end, or the Iterator is invalid.
	// Iterator must be closed by caller.
	// CONTRACT: No writes may happen within a domain while an iterator exists over it.
	// Exceptionally allowed for cachekv.Store, safe to write in the modules.
	ReverseIterator(start, end []byte) Iterator
}

// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator + +/ CacheKVStore branches a KVStore and provides read cache functionality. +/ After calling .Write() + +on the CacheKVStore, all previously created +/ CacheKVStores on the object expire. +type CacheKVStore interface { + KVStore + + / Writes operations to underlying KVStore + Write() +} + +/ CommitKVStore is an interface for MultiStore. +type CommitKVStore interface { + Committer + KVStore +} + +/---------------------------------------- +/ CacheWrap + +/ CacheWrap is the most appropriate interface for store ephemeral branching and cache. +/ For example, IAVLStore.CacheWrap() + +returns a CacheKVStore. CacheWrap should not return +/ a Committer, since Commit ephemeral store make no sense. It can return KVStore, +/ HeapStore, SpaceStore, etc. +type CacheWrap interface { + / Write syncs with the underlying store. + Write() + + / CacheWrap recursively wraps again. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace recursively wraps again with tracing enabled. + CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +type CacheWrapper interface { + / CacheWrap branches a store. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace branches a store with tracing enabled. 
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + +maps.Copy(ret, tc) + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + +maps.Copy(tc, newTc) + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
+func NewMemoryStoreKeys(names ...string) + +map[string]*MemoryStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*MemoryStoreKey) + for _, n := range names { + keys[n] = NewMemoryStoreKey(n) +} + +return keys +} +``` + +The `Committer` is an interface that defines methods to persist changes to disk: + +```go expandable +package types + +import ( + + "fmt" + "io" + "maps" + "slices" + "github.com/cometbft/cometbft/proto/tendermint/crypto" + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/store/metrics" + pruningtypes "cosmossdk.io/store/pruning/types" + snapshottypes "cosmossdk.io/store/snapshots/types" +) + +type Store interface { + GetStoreType() + +StoreType + CacheWrapper +} + +/ something that can persist to disk +type Committer interface { + Commit() + +CommitID + LastCommitID() + +CommitID + + / WorkingHash returns the hash of the KVStore's state before commit. + WorkingHash() []byte + + SetPruning(pruningtypes.PruningOptions) + +GetPruning() + +pruningtypes.PruningOptions +} + +/ Stores of MultiStore must implement CommitStore. +type CommitStore interface { + Committer + Store +} + +/ Queryable allows a Store to expose internal state to the abci.Query +/ interface. Multistore can route requests to the proper Store. 
+//
+// This is an optional, but useful extension to any CommitStore
+type Queryable interface {
+    Query(*RequestQuery) (*ResponseQuery, error)
+}
+
+type RequestQuery struct {
+    Data   []byte
+    Path   string
+    Height int64
+    Prove  bool
+}
+
+type ResponseQuery struct {
+    Code      uint32
+    Log       string
+    Info      string
+    Index     int64
+    Key       []byte
+    Value     []byte
+    ProofOps  *crypto.ProofOps
+    Height    int64
+    Codespace string
+}
+
+//----------------------------------------
+// MultiStore
+
+// StoreUpgrades defines a series of transformations to apply the multistore db upon load
+type StoreUpgrades struct {
+    Added   []string      `json:"added"`
+    Renamed []StoreRename `json:"renamed"`
+    Deleted []string      `json:"deleted"`
+}
+
+// StoreRename defines a name change of a sub-store.
+// All data previously under a PrefixStore with OldKey will be copied
+// to a PrefixStore with NewKey, then deleted from OldKey store.
+type StoreRename struct {
+    OldKey string `json:"old_key"`
+    NewKey string `json:"new_key"`
+}
+
+// IsAdded returns true if the given key should be added
+func (s *StoreUpgrades) IsAdded(key string) bool {
+    if s == nil {
+        return false
+    }
+    return slices.Contains(s.Added, key)
+}
+
+// IsDeleted returns true if the given key should be deleted
+func (s *StoreUpgrades) IsDeleted(key string) bool {
+    if s == nil {
+        return false
+    }
+    return slices.Contains(s.Deleted, key)
+}
+
+// RenamedFrom returns the oldKey if it was renamed
+// Returns "" if it was not renamed
+func (s *StoreUpgrades) RenamedFrom(key string) string {
+    if s == nil {
+        return ""
+    }
+    for _, re := range s.Renamed {
+        if re.NewKey == key {
+            return re.OldKey
+        }
+    }
+    return ""
+}
+
+type MultiStore interface {
+    Store
+
+    // Branches MultiStore into a cached storage object.
+    // NOTE: Caller should probably not call .Write() on each, but
+    // call CacheMultiStore.Write().
+    CacheMultiStore() CacheMultiStore
+
+    // CacheMultiStoreWithVersion branches the underlying MultiStore where
+    // each stored is loaded at a specific version (height).
+    CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)
+
+    // Convenience for fetching substores.
+    // If the store does not exist, panics.
+    GetStore(StoreKey) Store
+    GetKVStore(StoreKey) KVStore
+
+    // TracingEnabled returns if tracing is enabled for the MultiStore.
+    TracingEnabled() bool
+
+    // SetTracer sets the tracer for the MultiStore that the underlying
+    // stores will utilize to trace operations. The modified MultiStore is
+    // returned.
+    SetTracer(w io.Writer) MultiStore
+
+    // SetTracingContext sets the tracing context for a MultiStore. It is
+    // implied that the caller should update the context when necessary between
+    // tracing operations. The modified MultiStore is returned.
+    SetTracingContext(TraceContext) MultiStore
+
+    // LatestVersion returns the latest version in the store
+    LatestVersion() int64
+}
+
+// From MultiStore.CacheMultiStore()....
+type CacheMultiStore interface {
+    MultiStore
+    Write() // Writes operations to underlying KVStore
+}
+
+// CommitMultiStore is an interface for a MultiStore without cache capabilities.
+type CommitMultiStore interface {
+    Committer
+    MultiStore
+    snapshottypes.Snapshotter
+
+    // Mount a store of type using the given db.
+    // If db == nil, the new store will use the CommitMultiStore db.
+    MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)
+
+    // Panics on a nil key.
+    GetCommitStore(key StoreKey) CommitStore
+
+    // Panics on a nil key.
+    GetCommitKVStore(key StoreKey) CommitKVStore
+
+    // Load the latest persisted version. Called once after all calls to
+    // Mount*Store() are complete.
+    LoadLatestVersion() error
+
+    // LoadLatestVersionAndUpgrade will load the latest version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error
+
+    // LoadVersionAndUpgrade will load the named version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error
+
+    // Load a specific persisted version. When you load an old version, or when
+    // the last commit attempt didn't complete, the next commit after loading
+    // must be idempotent (return the same commit id). Otherwise the behavior is
+    // undefined.
+    LoadVersion(ver int64) error
+
+    // Set an inter-block (persistent) cache that maintains a mapping from
+    // StoreKeys to CommitKVStores.
+    SetInterBlockCache(MultiStorePersistentCache)
+
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64) error
+
+    // SetIAVLCacheSize sets the cache size of the IAVL tree.
+    SetIAVLCacheSize(size int)
+
+    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
+    SetIAVLDisableFastNode(disable bool)
+
+    // SetIAVLSyncPruning set sync/async pruning on iavl.
+    // It is not recommended to use this option.
+    // It is here to enable the prune command to force this to true, allowing the command to wait
+    // for the pruning to finish before returning.
+    SetIAVLSyncPruning(sync bool)
+
+    // RollbackToVersion rollback the db to specific version(height).
+    RollbackToVersion(version int64) error
+
+    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
+    ListeningEnabled(key StoreKey) bool
+
+    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
+    AddListeners(keys []StoreKey)
+
+    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
+    PopStateCache() []*StoreKVPair
+
+    // SetMetrics sets the metrics for the KVStore
+    SetMetrics(metrics metrics.StoreMetrics)
+}
+
+//---------subsp-------------------------------
+// KVStore
+
+// BasicKVStore is a simple interface to get/set data
+type BasicKVStore interface {
+    // Get returns nil if key doesn't exist. Panics on nil key.
+    Get(key []byte) []byte
+
+    // Has checks if a key exists. Panics on nil key.
+    Has(key []byte) bool
+
+    // Set sets the key. Panics on nil key or value.
+    Set(key, value []byte)
+
+    // Delete deletes the key. Panics on nil key.
+    Delete(key []byte)
+}
+
+// KVStore additionally provides iteration and deletion
+type KVStore interface {
+    Store
+    BasicKVStore
+
+    // Iterator over a domain of keys in ascending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // To iterate over entire domain, use store.Iterator(nil, nil)
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    Iterator(start, end []byte) Iterator
+
+    // Iterator over a domain of keys in descending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    ReverseIterator(start, end []byte) Iterator
+}
+
+// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator
+
+// CacheKVStore branches a KVStore and provides read cache functionality.
+// After calling .Write() on the CacheKVStore, all previously created
+// CacheKVStores on the object expire.
+type CacheKVStore interface {
+    KVStore
+
+    // Writes operations to underlying KVStore
+    Write()
+}
+
+// CommitKVStore is an interface for MultiStore.
+type CommitKVStore interface {
+    Committer
+    KVStore
+}
+
+//----------------------------------------
+// CacheWrap
+
+// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
+// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
+// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
+// HeapStore, SpaceStore, etc.
+type CacheWrap interface {
+    // Write syncs with the underlying store.
+    Write()
+
+    // CacheWrap recursively wraps again.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace recursively wraps again with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+type CacheWrapper interface {
+    // CacheWrap branches a store.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace branches a store with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+func (cid CommitID) IsZero() bool {
+    return cid.Version == 0 && len(cid.Hash) == 0
+}
+
+func (cid CommitID) String() string {
+    return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
+}
+
+//----------------------------------------
+// Store types
+
+// kind of store
+type StoreType int
+
+const (
+    StoreTypeMulti StoreType = iota
+    StoreTypeDB
+    StoreTypeIAVL
+    StoreTypeTransient
+    StoreTypeMemory
+    StoreTypeSMT
+    StoreTypePersistent
+)
+
+func (st StoreType) String() string {
+    switch st {
+    case StoreTypeMulti:
+        return "StoreTypeMulti"
+    case StoreTypeDB:
+        return "StoreTypeDB"
+    case StoreTypeIAVL:
+        return "StoreTypeIAVL"
+    case StoreTypeTransient:
+        return "StoreTypeTransient"
+    case StoreTypeMemory:
+        return "StoreTypeMemory"
+    case StoreTypeSMT:
+        return "StoreTypeSMT"
+    case StoreTypePersistent:
+        return "StoreTypePersistent"
+    }
+    return "unknown store type"
+}
+
+//----------------------------------------
+// Keys for accessing substores
+
+// StoreKey is a key used to index stores in a MultiStore.
+type StoreKey interface {
+    Name() string
+    String() string
+}
+
+// CapabilityKey represent the Cosmos SDK keys for object-capability
+// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
+type CapabilityKey StoreKey
+
+// KVStoreKey is used for accessing substores.
+// Only the pointer value should ever be used - it functions as a capabilities key.
+type KVStoreKey struct {
+    name string
+}
+
+// NewKVStoreKey returns a new pointer to a KVStoreKey.
+// Use a pointer so keys don't collide.
+func NewKVStoreKey(name string) *KVStoreKey {
+    if name == "" {
+        panic("empty key name not allowed")
+    }
+    return &KVStoreKey{
+        name: name,
+    }
+}
+
+// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*KVStoreKey, len(names))
+    for _, n := range names {
+        keys[n] = NewKVStoreKey(n)
+    }
+    return keys
+}
+
+func (key *KVStoreKey) Name() string {
+    return key.name
+}
+
+func (key *KVStoreKey) String() string {
+    return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
+}
+
+// TransientStoreKey is used for indexing transient stores in a MultiStore
+type TransientStoreKey struct {
+    name string
+}
+
+// Constructs new TransientStoreKey
+// Must return a pointer according to the ocap principle
+func NewTransientStoreKey(name string) *TransientStoreKey {
+    return &TransientStoreKey{
+        name: name,
+    }
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) Name() string {
+    return key.name
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) String() string {
+    return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
+}
+
+// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
+type MemoryStoreKey struct {
+    name string
+}
+
+func NewMemoryStoreKey(name string) *MemoryStoreKey {
+    return &MemoryStoreKey{
+        name: name,
+    }
+}
+
+// Name returns the name of the MemoryStoreKey.
+func (key *MemoryStoreKey) Name() string {
+    return key.name
+}
+
+// String returns a stringified representation of the MemoryStoreKey.
+func (key *MemoryStoreKey) String() string {
+    return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
+}
+
+//----------------------------------------
+
+// TraceContext contains TraceKVStore context data. It will be written with
+// every trace operation.
+type TraceContext map[string]interface{}
+
+// Clone clones tc into another instance of TraceContext.
+func (tc TraceContext) Clone() TraceContext {
+    ret := TraceContext{}
+    maps.Copy(ret, tc)
+    return ret
+}
+
+// Merge merges value of newTc into tc.
+func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
+    if tc == nil {
+        tc = TraceContext{}
+    }
+    maps.Copy(tc, newTc)
+    return tc
+}
+
+// MultiStorePersistentCache defines an interface which provides inter-block
+// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
+type MultiStorePersistentCache interface {
+    // Wrap and return the provided CommitKVStore with an inter-block (persistent)
+    // cache.
+    GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore
+
+    // Return the underlying CommitKVStore for a StoreKey.
+    Unwrap(key StoreKey) CommitKVStore
+
+    // Reset the entire set of internal caches.
+    Reset()
+}
+
+// StoreWithInitialVersion is a store that can have an arbitrary initial
+// version.
+type StoreWithInitialVersion interface {
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64)
+}
+
+// NewTransientStoreKeys constructs a new map of TransientStoreKey's
+// Must return pointers according to the ocap principle
+// The function will panic if there is a potential conflict in names
+// see `assertNoCommonPrefix` function for more details.
+func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*TransientStoreKey)
+    for _, n := range names {
+        keys[n] = NewTransientStoreKey(n)
+    }
+    return keys
+}
+
+// NewMemoryStoreKeys constructs a new map matching store key names to their
+// respective MemoryStoreKey references.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*MemoryStoreKey)
+    for _, n := range names {
+        keys[n] = NewMemoryStoreKey(n)
+    }
+    return keys
+}
+```
+
+The `CommitID` is a deterministic commit of the state tree. Its hash is returned to the underlying consensus engine and stored in the block header. Note that commit store interfaces exist for various purposes, one of which is to make sure that not every object can commit the store. As part of the [object-capabilities model](/docs/sdk/next/documentation/core-concepts/ocap) of the Cosmos SDK, only `baseapp` should have the ability to commit stores. For example, this is the reason why the `ctx.KVStore()` method by which modules typically access stores returns a `KVStore` and not a `CommitKVStore`.
+
+The Cosmos SDK comes with many types of stores, the most used being [`CommitMultiStore`](#multistore), [`KVStore`](#kvstore) and [`GasKv` store](#gaskv-store). [Other types of stores](#other-stores) include `Transient` and `TraceKV` stores.
+
+## Multistore
+
+### Multistore Interface
+
+Each Cosmos SDK application holds a multistore at its root to persist its state. The multistore is a store of `KVStores` that follows the `MultiStore` interface:
+
+```go expandable
+package types
+
+import (
+    "fmt"
+    "io"
+    "maps"
+    "slices"
+
+    "github.com/cometbft/cometbft/proto/tendermint/crypto"
+    dbm "github.com/cosmos/cosmos-db"
+
+    "cosmossdk.io/store/metrics"
+    pruningtypes "cosmossdk.io/store/pruning/types"
+    snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+    GetStoreType() StoreType
+    CacheWrapper
+}
+
+// something that can persist to disk
+type Committer interface {
+    Commit() CommitID
+    LastCommitID() CommitID
+
+    // WorkingHash returns the hash of the KVStore's state before commit.
+    WorkingHash() []byte
+
+    SetPruning(pruningtypes.PruningOptions)
+    GetPruning() pruningtypes.PruningOptions
+}
+
+// Stores of MultiStore must implement CommitStore.
+type CommitStore interface {
+    Committer
+    Store
+}
+
+// Queryable allows a Store to expose internal state to the abci.Query
+// interface. Multistore can route requests to the proper Store.
+//
+// This is an optional, but useful extension to any CommitStore
+type Queryable interface {
+    Query(*RequestQuery) (*ResponseQuery, error)
+}
+
+type RequestQuery struct {
+    Data   []byte
+    Path   string
+    Height int64
+    Prove  bool
+}
+
+type ResponseQuery struct {
+    Code      uint32
+    Log       string
+    Info      string
+    Index     int64
+    Key       []byte
+    Value     []byte
+    ProofOps  *crypto.ProofOps
+    Height    int64
+    Codespace string
+}
+
+//----------------------------------------
+// MultiStore
+
+// StoreUpgrades defines a series of transformations to apply the multistore db upon load
+type StoreUpgrades struct {
+    Added   []string      `json:"added"`
+    Renamed []StoreRename `json:"renamed"`
+    Deleted []string      `json:"deleted"`
+}
+
+// StoreRename defines a name change of a sub-store.
+// All data previously under a PrefixStore with OldKey will be copied
+// to a PrefixStore with NewKey, then deleted from OldKey store.
+type StoreRename struct {
+    OldKey string `json:"old_key"`
+    NewKey string `json:"new_key"`
+}
+
+// IsAdded returns true if the given key should be added
+func (s *StoreUpgrades) IsAdded(key string) bool {
+    if s == nil {
+        return false
+    }
+    return slices.Contains(s.Added, key)
+}
+
+// IsDeleted returns true if the given key should be deleted
+func (s *StoreUpgrades) IsDeleted(key string) bool {
+    if s == nil {
+        return false
+    }
+    return slices.Contains(s.Deleted, key)
+}
+
+// RenamedFrom returns the oldKey if it was renamed
+// Returns "" if it was not renamed
+func (s *StoreUpgrades) RenamedFrom(key string) string {
+    if s == nil {
+        return ""
+    }
+    for _, re := range s.Renamed {
+        if re.NewKey == key {
+            return re.OldKey
+        }
+    }
+    return ""
+}
+
+type MultiStore interface {
+    Store
+
+    // Branches MultiStore into a cached storage object.
+    // NOTE: Caller should probably not call .Write() on each, but
+    // call CacheMultiStore.Write().
+    CacheMultiStore() CacheMultiStore
+
+    // CacheMultiStoreWithVersion branches the underlying MultiStore where
+    // each stored is loaded at a specific version (height).
+    CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)
+
+    // Convenience for fetching substores.
+    // If the store does not exist, panics.
+    GetStore(StoreKey) Store
+    GetKVStore(StoreKey) KVStore
+
+    // TracingEnabled returns if tracing is enabled for the MultiStore.
+    TracingEnabled() bool
+
+    // SetTracer sets the tracer for the MultiStore that the underlying
+    // stores will utilize to trace operations. The modified MultiStore is
+    // returned.
+    SetTracer(w io.Writer) MultiStore
+
+    // SetTracingContext sets the tracing context for a MultiStore. It is
+    // implied that the caller should update the context when necessary between
+    // tracing operations. The modified MultiStore is returned.
+    SetTracingContext(TraceContext) MultiStore
+
+    // LatestVersion returns the latest version in the store
+    LatestVersion() int64
+}
+
+// From MultiStore.CacheMultiStore()....
+type CacheMultiStore interface {
+    MultiStore
+    Write() // Writes operations to underlying KVStore
+}
+
+// CommitMultiStore is an interface for a MultiStore without cache capabilities.
+type CommitMultiStore interface {
+    Committer
+    MultiStore
+    snapshottypes.Snapshotter
+
+    // Mount a store of type using the given db.
+    // If db == nil, the new store will use the CommitMultiStore db.
+    MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)
+
+    // Panics on a nil key.
+    GetCommitStore(key StoreKey) CommitStore
+
+    // Panics on a nil key.
+    GetCommitKVStore(key StoreKey) CommitKVStore
+
+    // Load the latest persisted version. Called once after all calls to
+    // Mount*Store() are complete.
+    LoadLatestVersion() error
+
+    // LoadLatestVersionAndUpgrade will load the latest version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error
+
+    // LoadVersionAndUpgrade will load the named version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error
+
+    // Load a specific persisted version. When you load an old version, or when
+    // the last commit attempt didn't complete, the next commit after loading
+    // must be idempotent (return the same commit id). Otherwise the behavior is
+    // undefined.
+    LoadVersion(ver int64) error
+
+    // Set an inter-block (persistent) cache that maintains a mapping from
+    // StoreKeys to CommitKVStores.
+    SetInterBlockCache(MultiStorePersistentCache)
+
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64) error
+
+    // SetIAVLCacheSize sets the cache size of the IAVL tree.
+    SetIAVLCacheSize(size int)
+
+    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
+    SetIAVLDisableFastNode(disable bool)
+
+    // SetIAVLSyncPruning set sync/async pruning on iavl.
+    // It is not recommended to use this option.
+    // It is here to enable the prune command to force this to true, allowing the command to wait
+    // for the pruning to finish before returning.
+    SetIAVLSyncPruning(sync bool)
+
+    // RollbackToVersion rollback the db to specific version(height).
+    RollbackToVersion(version int64) error
+
+    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
+    ListeningEnabled(key StoreKey) bool
+
+    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
+    AddListeners(keys []StoreKey)
+
+    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
+    PopStateCache() []*StoreKVPair
+
+    // SetMetrics sets the metrics for the KVStore
+    SetMetrics(metrics metrics.StoreMetrics)
+}
+
+//---------subsp-------------------------------
+// KVStore
+
+// BasicKVStore is a simple interface to get/set data
+type BasicKVStore interface {
+    // Get returns nil if key doesn't exist. Panics on nil key.
+    Get(key []byte) []byte
+
+    // Has checks if a key exists. Panics on nil key.
+    Has(key []byte) bool
+
+    // Set sets the key. Panics on nil key or value.
+    Set(key, value []byte)
+
+    // Delete deletes the key. Panics on nil key.
+    Delete(key []byte)
+}
+
+// KVStore additionally provides iteration and deletion
+type KVStore interface {
+    Store
+    BasicKVStore
+
+    // Iterator over a domain of keys in ascending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // To iterate over entire domain, use store.Iterator(nil, nil)
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    Iterator(start, end []byte) Iterator
+
+    // Iterator over a domain of keys in descending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    ReverseIterator(start, end []byte) Iterator
+}
+
+// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator
+
+// CacheKVStore branches a KVStore and provides read cache functionality.
+// After calling .Write() on the CacheKVStore, all previously created
+// CacheKVStores on the object expire.
+type CacheKVStore interface {
+    KVStore
+
+    // Writes operations to underlying KVStore
+    Write()
+}
+
+// CommitKVStore is an interface for MultiStore.
+type CommitKVStore interface {
+    Committer
+    KVStore
+}
+
+//----------------------------------------
+// CacheWrap
+
+// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
+// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
+// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
+// HeapStore, SpaceStore, etc.
+type CacheWrap interface {
+    // Write syncs with the underlying store.
+    Write()
+
+    // CacheWrap recursively wraps again.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace recursively wraps again with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+type CacheWrapper interface {
+    // CacheWrap branches a store.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace branches a store with tracing enabled.
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + +maps.Copy(ret, tc) + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + +maps.Copy(tc, newTc) + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+	assertNoCommonPrefix(names)
+	keys := make(map[string]*MemoryStoreKey)
+	for _, n := range names {
+		keys[n] = NewMemoryStoreKey(n)
+	}
+	return keys
+}
+```
+
+If tracing is enabled, branching the multistore will first wrap all the underlying `KVStore`s in [`TraceKv.Store`](#tracekv-store).
+
+### CommitMultiStore
+
+The main type of `MultiStore` used in the Cosmos SDK is `CommitMultiStore`, which extends the `MultiStore` interface:
+
+```go expandable
+package types
+
+import (
+	"fmt"
+	"io"
+	"maps"
+	"slices"
+
+	"github.com/cometbft/cometbft/proto/tendermint/crypto"
+	dbm "github.com/cosmos/cosmos-db"
+
+	"cosmossdk.io/store/metrics"
+	pruningtypes "cosmossdk.io/store/pruning/types"
+	snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+	GetStoreType() StoreType
+	CacheWrapper
+}
+
+// something that can persist to disk
+type Committer interface {
+	Commit() CommitID
+	LastCommitID() CommitID
+
+	// WorkingHash returns the hash of the KVStore's state before commit.
+	WorkingHash() []byte
+
+	SetPruning(pruningtypes.PruningOptions)
+	GetPruning() pruningtypes.PruningOptions
+}
+
+// Stores of MultiStore must implement CommitStore.
+type CommitStore interface {
+	Committer
+	Store
+}
+
+// Queryable allows a Store to expose internal state to the abci.Query
+// interface. Multistore can route requests to the proper Store.
+/ +/ This is an optional, but useful extension to any CommitStore +type Queryable interface { + Query(*RequestQuery) (*ResponseQuery, error) +} + +type RequestQuery struct { + Data []byte + Path string + Height int64 + Prove bool +} + +type ResponseQuery struct { + Code uint32 + Log string + Info string + Index int64 + Key []byte + Value []byte + ProofOps *crypto.ProofOps + Height int64 + Codespace string +} + +/---------------------------------------- +/ MultiStore + +/ StoreUpgrades defines a series of transformations to apply the multistore db upon load +type StoreUpgrades struct { + Added []string `json:"added"` + Renamed []StoreRename `json:"renamed"` + Deleted []string `json:"deleted"` +} + +/ StoreRename defines a name change of a sub-store. +/ All data previously under a PrefixStore with OldKey will be copied +/ to a PrefixStore with NewKey, then deleted from OldKey store. +type StoreRename struct { + OldKey string `json:"old_key"` + NewKey string `json:"new_key"` +} + +/ IsAdded returns true if the given key should be added +func (s *StoreUpgrades) + +IsAdded(key string) + +bool { + if s == nil { + return false +} + +return slices.Contains(s.Added, key) +} + +/ IsDeleted returns true if the given key should be deleted +func (s *StoreUpgrades) + +IsDeleted(key string) + +bool { + if s == nil { + return false +} + +return slices.Contains(s.Deleted, key) +} + +/ RenamedFrom returns the oldKey if it was renamed +/ Returns "" if it was not renamed +func (s *StoreUpgrades) + +RenamedFrom(key string) + +string { + if s == nil { + return "" +} + for _, re := range s.Renamed { + if re.NewKey == key { + return re.OldKey +} + +} + +return "" +} + +type MultiStore interface { + Store + + / Branches MultiStore into a cached storage object. + / NOTE: Caller should probably not call .Write() + +on each, but + / call CacheMultiStore.Write(). 
+ CacheMultiStore() + +CacheMultiStore + + / CacheMultiStoreWithVersion branches the underlying MultiStore where + / each stored is loaded at a specific version (height). + CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error) + + / Convenience for fetching substores. + / If the store does not exist, panics. + GetStore(StoreKey) + +Store + GetKVStore(StoreKey) + +KVStore + + / TracingEnabled returns if tracing is enabled for the MultiStore. + TracingEnabled() + +bool + + / SetTracer sets the tracer for the MultiStore that the underlying + / stores will utilize to trace operations. The modified MultiStore is + / returned. + SetTracer(w io.Writer) + +MultiStore + + / SetTracingContext sets the tracing context for a MultiStore. It is + / implied that the caller should update the context when necessary between + / tracing operations. The modified MultiStore is returned. + SetTracingContext(TraceContext) + +MultiStore + + / LatestVersion returns the latest version in the store + LatestVersion() + +int64 +} + +/ From MultiStore.CacheMultiStore().... +type CacheMultiStore interface { + MultiStore + Write() / Writes operations to underlying KVStore +} + +/ CommitMultiStore is an interface for a MultiStore without cache capabilities. +type CommitMultiStore interface { + Committer + MultiStore + snapshottypes.Snapshotter + + / Mount a store of type using the given db. + / If db == nil, the new store will use the CommitMultiStore db. + MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB) + + / Panics on a nil key. + GetCommitStore(key StoreKey) + +CommitStore + + / Panics on a nil key. + GetCommitKVStore(key StoreKey) + +CommitKVStore + + / Load the latest persisted version. Called once after all calls to + / Mount*Store() + +are complete. 
+ LoadLatestVersion() + +error + + / LoadLatestVersionAndUpgrade will load the latest version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) + +error + + / LoadVersionAndUpgrade will load the named version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) + +error + + / Load a specific persisted version. When you load an old version, or when + / the last commit attempt didn't complete, the next commit after loading + / must be idempotent (return the same commit id). Otherwise the behavior is + / undefined. + LoadVersion(ver int64) + +error + + / Set an inter-block (persistent) + +cache that maintains a mapping from + / StoreKeys to CommitKVStores. + SetInterBlockCache(MultiStorePersistentCache) + + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) + +error + + / SetIAVLCacheSize sets the cache size of the IAVL tree. + SetIAVLCacheSize(size int) + + / SetIAVLDisableFastNode enables/disables fastnode feature on iavl. + SetIAVLDisableFastNode(disable bool) + + / SetIAVLSyncPruning set sync/async pruning on iavl. + / It is not recommended to use this option. + / It is here to enable the prune command to force this to true, allowing the command to wait + / for the pruning to finish before returning. + SetIAVLSyncPruning(sync bool) + + / RollbackToVersion rollback the db to specific version(height). 
+ RollbackToVersion(version int64) + +error + + / ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey + ListeningEnabled(key StoreKey) + +bool + + / AddListeners adds a listener for the KVStore belonging to the provided StoreKey + AddListeners(keys []StoreKey) + + / PopStateCache returns the accumulated state change messages from the CommitMultiStore + PopStateCache() []*StoreKVPair + + / SetMetrics sets the metrics for the KVStore + SetMetrics(metrics metrics.StoreMetrics) +} + +/---------subsp------------------------------- +/ KVStore + +/ BasicKVStore is a simple interface to get/set data +type BasicKVStore interface { + / Get returns nil if key doesn't exist. Panics on nil key. + Get(key []byte) []byte + + / Has checks if a key exists. Panics on nil key. + Has(key []byte) + +bool + + / Set sets the key. Panics on nil key or value. + Set(key, value []byte) + + / Delete deletes the key. Panics on nil key. + Delete(key []byte) +} + +/ KVStore additionally provides iteration and deletion +type KVStore interface { + Store + BasicKVStore + + / Iterator over a domain of keys in ascending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / To iterate over entire domain, use store.Iterator(nil, nil) + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + Iterator(start, end []byte) + +Iterator + + / Iterator over a domain of keys in descending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + ReverseIterator(start, end []byte) + +Iterator +} + +/ Iterator is an alias db's Iterator for convenience. 
+type Iterator = dbm.Iterator + +/ CacheKVStore branches a KVStore and provides read cache functionality. +/ After calling .Write() + +on the CacheKVStore, all previously created +/ CacheKVStores on the object expire. +type CacheKVStore interface { + KVStore + + / Writes operations to underlying KVStore + Write() +} + +/ CommitKVStore is an interface for MultiStore. +type CommitKVStore interface { + Committer + KVStore +} + +/---------------------------------------- +/ CacheWrap + +/ CacheWrap is the most appropriate interface for store ephemeral branching and cache. +/ For example, IAVLStore.CacheWrap() + +returns a CacheKVStore. CacheWrap should not return +/ a Committer, since Commit ephemeral store make no sense. It can return KVStore, +/ HeapStore, SpaceStore, etc. +type CacheWrap interface { + / Write syncs with the underlying store. + Write() + + / CacheWrap recursively wraps again. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace recursively wraps again with tracing enabled. + CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +type CacheWrapper interface { + / CacheWrap branches a store. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace branches a store with tracing enabled. 
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + +maps.Copy(ret, tc) + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + +maps.Copy(tc, newTc) + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+	assertNoCommonPrefix(names)
+	keys := make(map[string]*MemoryStoreKey)
+	for _, n := range names {
+		keys[n] = NewMemoryStoreKey(n)
+	}
+	return keys
+}
+```
+
+As for a concrete implementation, `rootMulti.Store` is the go-to implementation of the `CommitMultiStore` interface.
+
+```go expandable
+package rootmulti
+
+import (
+	"crypto/sha256"
+	"errors"
+	"fmt"
+	"io"
+	"maps"
+	"math"
+	"sort"
+	"strings"
+	"sync"
+
+	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+	dbm "github.com/cosmos/cosmos-db"
+	protoio "github.com/cosmos/gogoproto/io"
+	gogotypes "github.com/cosmos/gogoproto/types"
+	iavltree "github.com/cosmos/iavl"
+
+	errorsmod "cosmossdk.io/errors"
+	"cosmossdk.io/log"
+	"cosmossdk.io/store/cachemulti"
+	"cosmossdk.io/store/dbadapter"
+	"cosmossdk.io/store/iavl"
+	"cosmossdk.io/store/listenkv"
+	"cosmossdk.io/store/mem"
+	"cosmossdk.io/store/metrics"
+	"cosmossdk.io/store/pruning"
+	pruningtypes "cosmossdk.io/store/pruning/types"
+	snapshottypes "cosmossdk.io/store/snapshots/types"
+	"cosmossdk.io/store/tracekv"
+	"cosmossdk.io/store/transient"
+	"cosmossdk.io/store/types"
+)
+
+const (
+	latestVersionKey = "s/latest"
+	commitInfoKeyFmt = "s/%d" // s/<version>
+)
+
+const iavlDisablefastNodeDefault = false
+
+// keysFromStoreKeyMap returns a slice of keys for the provided map lexically sorted by StoreKey.Name()
+func keysFromStoreKeyMap[V any](m map[types.StoreKey]V) []types.StoreKey {
+	keys := make([]types.StoreKey, 0, len(m))
+	for key := range m {
+		keys = append(keys, key)
+	}
+	sort.Slice(keys, func(i, j int) bool {
+		ki, kj := keys[i], keys[j]
+		return ki.Name() < kj.Name()
+	})
+	return keys
+}
+
+// Store is composed of many CommitStores. Name contrasts with
+// cacheMultiStore which is used for branching other MultiStores. It implements
+// the CommitMultiStore interface.
+type Store struct { + db dbm.DB + logger log.Logger + lastCommitInfo *types.CommitInfo + pruningManager *pruning.Manager + iavlCacheSize int + iavlDisableFastNode bool + / iavlSyncPruning should rarely be set to true. + / The Prune command will automatically set this to true. + / This allows the prune command to wait for the pruning to finish before returning. + iavlSyncPruning bool + storesParams map[types.StoreKey]storeParams + stores map[types.StoreKey]types.CommitKVStore + keysByName map[string]types.StoreKey + initialVersion int64 + removalMap map[types.StoreKey]bool + traceWriter io.Writer + traceContext types.TraceContext + traceContextMutex sync.Mutex + interBlockCache types.MultiStorePersistentCache + listeners map[types.StoreKey]*types.MemoryListener + metrics metrics.StoreMetrics + commitHeader cmtproto.Header +} + +var ( + _ types.CommitMultiStore = (*Store)(nil) + _ types.Queryable = (*Store)(nil) +) + +/ NewStore returns a reference to a new Store object with the provided DB. The +/ store will be created with a PruneNothing pruning strategy by default. After +/ a store is created, KVStores must be mounted and finally LoadLatestVersion or +/ LoadVersion must be called. +func NewStore(db dbm.DB, logger log.Logger, metricGatherer metrics.StoreMetrics) *Store { + return &Store{ + db: db, + logger: logger, + iavlCacheSize: iavl.DefaultIAVLCacheSize, + iavlDisableFastNode: iavlDisablefastNodeDefault, + storesParams: make(map[types.StoreKey]storeParams), + stores: make(map[types.StoreKey]types.CommitKVStore), + keysByName: make(map[string]types.StoreKey), + listeners: make(map[types.StoreKey]*types.MemoryListener), + removalMap: make(map[types.StoreKey]bool), + pruningManager: pruning.NewManager(db, logger), + metrics: metricGatherer, +} +} + +/ GetPruning fetches the pruning strategy from the root store. 
func (rs *Store) GetPruning() pruningtypes.PruningOptions {
    return rs.pruningManager.GetOptions()
}

// SetPruning sets the pruning strategy on the root store and all the sub-stores.
// Note, calling SetPruning on the root store prior to LoadVersion or
// LoadLatestVersion performs a no-op as the stores aren't mounted yet.
func (rs *Store) SetPruning(pruningOpts pruningtypes.PruningOptions) {
    rs.pruningManager.SetOptions(pruningOpts)
}

// SetMetrics sets the metrics gatherer for the store package
func (rs *Store) SetMetrics(metrics metrics.StoreMetrics) {
    rs.metrics = metrics
}

// SetSnapshotInterval sets the interval at which the snapshots are taken.
// It is used by the store to determine which heights to retain until after the snapshot is complete.
func (rs *Store) SetSnapshotInterval(snapshotInterval uint64) {
    rs.pruningManager.SetSnapshotInterval(snapshotInterval)
}

func (rs *Store) SetIAVLCacheSize(cacheSize int) {
    rs.iavlCacheSize = cacheSize
}

func (rs *Store) SetIAVLDisableFastNode(disableFastNode bool) {
    rs.iavlDisableFastNode = disableFastNode
}

func (rs *Store) SetIAVLSyncPruning(syncPruning bool) {
    rs.iavlSyncPruning = syncPruning
}

// GetStoreType implements Store.
func (rs *Store) GetStoreType() types.StoreType {
    return types.StoreTypeMulti
}

// MountStoreWithDB implements CommitMultiStore.
func (rs *Store) MountStoreWithDB(key types.StoreKey, typ types.StoreType, db dbm.DB) {
    if key == nil {
        panic("MountIAVLStore() key cannot be nil")
    }
    if _, ok := rs.storesParams[key]; ok {
        panic(fmt.Sprintf("store duplicate store key %v", key))
    }
    if _, ok := rs.keysByName[key.Name()]; ok {
        panic(fmt.Sprintf("store duplicate store key name %v", key))
    }
    rs.storesParams[key] = newStoreParams(key, db, typ, 0)
    rs.keysByName[key.Name()] = key
}

// GetCommitStore returns a mounted CommitStore for a given StoreKey. If the
// store is wrapped in an inter-block cache, it will be unwrapped before returning.
func (rs *Store) GetCommitStore(key types.StoreKey) types.CommitStore {
    return rs.GetCommitKVStore(key)
}

// GetCommitKVStore returns a mounted CommitKVStore for a given StoreKey. If the
// store is wrapped in an inter-block cache, it will be unwrapped before returning.
func (rs *Store) GetCommitKVStore(key types.StoreKey) types.CommitKVStore {
    // If the Store has an inter-block cache, first attempt to lookup and unwrap
    // the underlying CommitKVStore by StoreKey. If it does not exist, fallback to
    // the main mapping of CommitKVStores.
    if rs.interBlockCache != nil {
        if store := rs.interBlockCache.Unwrap(key); store != nil {
            return store
        }
    }

    return rs.stores[key]
}

// StoreKeysByName returns mapping storeNames -> StoreKeys
func (rs *Store) StoreKeysByName() map[string]types.StoreKey {
    return rs.keysByName
}

// LoadLatestVersionAndUpgrade implements CommitMultiStore
func (rs *Store) LoadLatestVersionAndUpgrade(upgrades *types.StoreUpgrades) error {
    ver := GetLatestVersion(rs.db)
    return rs.loadVersion(ver, upgrades)
}

// LoadVersionAndUpgrade allows us to rename substores while loading an older version
func (rs *Store) LoadVersionAndUpgrade(ver int64, upgrades *types.StoreUpgrades) error {
    return rs.loadVersion(ver, upgrades)
}

// LoadLatestVersion implements CommitMultiStore.
func (rs *Store) LoadLatestVersion() error {
    ver := GetLatestVersion(rs.db)
    return rs.loadVersion(ver, nil)
}

// LoadVersion implements CommitMultiStore.
func (rs *Store) LoadVersion(ver int64) error {
    return rs.loadVersion(ver, nil)
}

func (rs *Store) loadVersion(ver int64, upgrades *types.StoreUpgrades) error {
    infos := make(map[string]types.StoreInfo)
    rs.logger.Debug("loadVersion", "ver", ver)
    cInfo := &types.CommitInfo{}

    // load old data if we are not version 0
    if ver != 0 {
        var err error
        cInfo, err = rs.GetCommitInfo(ver)
        if err != nil {
            return err
        }

        // convert StoreInfos slice to map
        for _, storeInfo := range cInfo.StoreInfos {
            infos[storeInfo.Name] = storeInfo
        }
    }

    // load each Store (note this doesn't panic on unmounted keys now)
    newStores := make(map[types.StoreKey]types.CommitKVStore)
    storesKeys := make([]types.StoreKey, 0, len(rs.storesParams))
    for key := range rs.storesParams {
        storesKeys = append(storesKeys, key)
    }
    if upgrades != nil {
        // deterministic iteration order for upgrades
        // (as the underlying store may change and
        // upgrades make store changes where the execution order may matter)
        sort.Slice(storesKeys, func(i, j int) bool {
            return storesKeys[i].Name() < storesKeys[j].Name()
        })
    }
    for _, key := range storesKeys {
        storeParams := rs.storesParams[key]
        commitID := rs.getCommitID(infos, key.Name())
        rs.logger.Debug("loadVersion commitID", "key", key, "ver", ver, "hash", fmt.Sprintf("%x", commitID.Hash))

        // If it has been added, set the initial version
        if upgrades.IsAdded(key.Name()) || upgrades.RenamedFrom(key.Name()) != "" {
            storeParams.initialVersion = uint64(ver) + 1
        } else if commitID.Version != ver && storeParams.typ == types.StoreTypeIAVL {
            return fmt.Errorf("version of store %s mismatch root store's version; expected %d got %d; new stores should be added using StoreUpgrades", key.Name(), ver, commitID.Version)
        }

        store, err := rs.loadCommitStoreFromParams(key, commitID, storeParams)
        if err != nil {
            return errorsmod.Wrap(err, "failed to load store")
        }

        newStores[key] = store

        // If it was deleted, remove all data
        if upgrades.IsDeleted(key.Name()) {
            if err := deleteKVStore(store.(types.KVStore)); err != nil {
                return errorsmod.Wrapf(err, "failed to delete store %s", key.Name())
            }
            rs.removalMap[key] = true
        } else if oldName := upgrades.RenamedFrom(key.Name()); oldName != "" {
            // handle renames specially
            // make an unregistered key to satisfy loadCommitStore params
            oldKey := types.NewKVStoreKey(oldName)
            oldParams := newStoreParams(oldKey, storeParams.db, storeParams.typ, 0)

            // load from the old name
            oldStore, err := rs.loadCommitStoreFromParams(oldKey, rs.getCommitID(infos, oldName), oldParams)
            if err != nil {
                return errorsmod.Wrapf(err, "failed to load old store %s", oldName)
            }

            // move all data
            if err := moveKVStoreData(oldStore.(types.KVStore), store.(types.KVStore)); err != nil {
                return errorsmod.Wrapf(err, "failed to move store %s -> %s", oldName, key.Name())
            }

            // add the old key so its deletion is committed
            newStores[oldKey] = oldStore
            // this will ensure it's not perpetually stored in commitInfo
            rs.removalMap[oldKey] = true
        }
    }

    rs.lastCommitInfo = cInfo
    rs.stores = newStores

    // load any snapshot heights we missed from disk to be pruned on the next run
    if err := rs.pruningManager.LoadSnapshotHeights(rs.db); err != nil {
        return err
    }

    return nil
}

func (rs *Store) getCommitID(infos map[string]types.StoreInfo, name string) types.CommitID {
    info, ok := infos[name]
    if !ok {
        return types.CommitID{}
    }

    return info.CommitId
}

func deleteKVStore(kv types.KVStore) error {
    // Note that we cannot write while iterating, so load all keys here, delete below
    var keys [][]byte
    itr := kv.Iterator(nil, nil)
    for itr.Valid() {
        keys = append(keys, itr.Key())
        itr.Next()
    }
    if err := itr.Close(); err != nil {
        return err
    }
    for _, k := range keys {
        kv.Delete(k)
    }

    return nil
}

// we simulate move by a copy and delete
func moveKVStoreData(oldDB, newDB types.KVStore) error {
    // we read from one and write to another
    itr := oldDB.Iterator(nil, nil)
    for itr.Valid() {
        newDB.Set(itr.Key(), itr.Value())
        itr.Next()
    }
    if err := itr.Close(); err != nil {
        return err
    }

    // then delete the old store
    return deleteKVStore(oldDB)
}

// PruneSnapshotHeight prunes the given height according to the prune strategy.
// If the strategy is PruneNothing, this is a no-op.
// For other strategies, this height is persisted until the snapshot is operated.
func (rs *Store) PruneSnapshotHeight(height int64) {
    rs.pruningManager.HandleSnapshotHeight(height)
}

// SetInterBlockCache sets the Store's internal inter-block (persistent) cache.
// When this is defined, all CommitKVStores will be wrapped with their respective
// inter-block cache.
func (rs *Store) SetInterBlockCache(c types.MultiStorePersistentCache) {
    rs.interBlockCache = c
}

// SetTracer sets the tracer for the MultiStore that the underlying
// stores will utilize to trace operations. A MultiStore is returned.
func (rs *Store) SetTracer(w io.Writer) types.MultiStore {
    rs.traceWriter = w
    return rs
}

// SetTracingContext updates the tracing context for the MultiStore by merging
// the given context with the existing context by key. Any existing keys will
// be overwritten. It is implied that the caller should update the context when
// necessary between tracing operations. It returns a modified MultiStore.
func (rs *Store) SetTracingContext(tc types.TraceContext) types.MultiStore {
    rs.traceContextMutex.Lock()
    defer rs.traceContextMutex.Unlock()
    rs.traceContext = rs.traceContext.Merge(tc)

    return rs
}

func (rs *Store) getTracingContext() types.TraceContext {
    rs.traceContextMutex.Lock()
    defer rs.traceContextMutex.Unlock()
    if rs.traceContext == nil {
        return nil
    }
    ctx := types.TraceContext{}
    maps.Copy(ctx, rs.traceContext)

    return ctx
}

// TracingEnabled returns if tracing is enabled for the MultiStore.
func (rs *Store) TracingEnabled() bool {
    return rs.traceWriter != nil
}

// AddListeners adds a listener for the KVStore belonging to the provided StoreKey
func (rs *Store) AddListeners(keys []types.StoreKey) {
    for i := range keys {
        listener := rs.listeners[keys[i]]
        if listener == nil {
            rs.listeners[keys[i]] = types.NewMemoryListener()
        }
    }
}

// ListeningEnabled returns if listening is enabled for a specific KVStore
func (rs *Store) ListeningEnabled(key types.StoreKey) bool {
    if ls, ok := rs.listeners[key]; ok {
        return ls != nil
    }

    return false
}

// PopStateCache returns the accumulated state change messages from the CommitMultiStore.
// Calling PopStateCache destroys only the currently accumulated state in each listener,
// not the state in the store itself. This is a mutating and destructive operation.
// This method has been synchronized.
func (rs *Store) PopStateCache() []*types.StoreKVPair {
    var cache []*types.StoreKVPair
    for key := range rs.listeners {
        ls := rs.listeners[key]
        if ls != nil {
            cache = append(cache, ls.PopStateCache()...)
        }
    }
    sort.SliceStable(cache, func(i, j int) bool {
        return cache[i].StoreKey < cache[j].StoreKey
    })

    return cache
}

// LatestVersion returns the latest version in the store
func (rs *Store) LatestVersion() int64 {
    return rs.LastCommitID().Version
}

// LastCommitID implements Committer/CommitStore.
func (rs *Store) LastCommitID() types.CommitID {
    if rs.lastCommitInfo == nil {
        emptyHash := sha256.Sum256([]byte{})
        appHash := emptyHash[:]
        return types.CommitID{
            Version: GetLatestVersion(rs.db),
            Hash:    appHash, // set empty apphash to sha256([]byte{}) if info is nil
        }
    }
    if len(rs.lastCommitInfo.CommitID().Hash) == 0 {
        emptyHash := sha256.Sum256([]byte{})
        appHash := emptyHash[:]
        return types.CommitID{
            Version: rs.lastCommitInfo.Version,
            Hash:    appHash, // set empty apphash to sha256([]byte{}) if hash is nil
        }
    }

    return rs.lastCommitInfo.CommitID()
}

// Commit implements Committer/CommitStore.
func (rs *Store) Commit() types.CommitID {
    var previousHeight, version int64
    if rs.lastCommitInfo.GetVersion() == 0 && rs.initialVersion > 1 {
        // This case means that no commit has been made in the store, we
        // start from initialVersion.
        version = rs.initialVersion
    } else {
        // This case can mean two things:
        // - either there was already a previous commit in the store, in which
        //   case we increment the version from there,
        // - or there was no previous commit, and initial version was not set,
        //   in which case we start at version 1.
        previousHeight = rs.lastCommitInfo.GetVersion()
        version = previousHeight + 1
    }
    if rs.commitHeader.Height != version {
        rs.logger.Debug("commit header and version mismatch", "header_height", rs.commitHeader.Height, "version", version)
    }

    rs.lastCommitInfo = commitStores(version, rs.stores, rs.removalMap)
    rs.lastCommitInfo.Timestamp = rs.commitHeader.Time
    defer rs.flushMetadata(rs.db, version, rs.lastCommitInfo)

    // remove remnants of removed stores
    for sk := range rs.removalMap {
        if _, ok := rs.stores[sk]; ok {
            delete(rs.stores, sk)
            delete(rs.storesParams, sk)
            delete(rs.keysByName, sk.Name())
        }
    }

    // reset the removalMap
    rs.removalMap = make(map[types.StoreKey]bool)
    if err := rs.handlePruning(version); err != nil {
        rs.logger.Error(
            "failed to prune store, please check your pruning configuration",
            "err", err,
        )
    }

    return types.CommitID{
        Version: version,
        Hash:    rs.lastCommitInfo.Hash(),
    }
}

// WorkingHash returns the current hash of the store.
// It will be used to get the current app hash before commit.
func (rs *Store) WorkingHash() []byte {
    storeInfos := make([]types.StoreInfo, 0, len(rs.stores))
    storeKeys := keysFromStoreKeyMap(rs.stores)
    for _, key := range storeKeys {
        store := rs.stores[key]
        if store.GetStoreType() != types.StoreTypeIAVL {
            continue
        }
        if !rs.removalMap[key] {
            si := types.StoreInfo{
                Name: key.Name(),
                CommitId: types.CommitID{
                    Hash: store.WorkingHash(),
                },
            }
            storeInfos = append(storeInfos, si)
        }
    }
    sort.SliceStable(storeInfos, func(i, j int) bool {
        return storeInfos[i].Name < storeInfos[j].Name
    })

    return types.CommitInfo{
        StoreInfos: storeInfos,
    }.Hash()
}

// CacheWrap implements CacheWrapper/Store/CommitStore.
func (rs *Store) CacheWrap() types.CacheWrap {
    return rs.CacheMultiStore().(types.CacheWrap)
}

// CacheWrapWithTrace implements the CacheWrapper interface.
func (rs *Store) CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) types.CacheWrap {
    return rs.CacheWrap()
}

// CacheMultiStore creates an ephemeral branch of the multi-store and returns a CacheMultiStore.
// It implements the MultiStore interface.
func (rs *Store) CacheMultiStore() types.CacheMultiStore {
    stores := make(map[types.StoreKey]types.CacheWrapper)
    for k, v := range rs.stores {
        store := types.KVStore(v)
        // Wire the listenkv.Store to allow listeners to observe the writes from the cache store;
        // setting the same listeners on the cache store would observe duplicated writes.
        if rs.ListeningEnabled(k) {
            store = listenkv.NewStore(store, k, rs.listeners[k])
        }
        stores[k] = store
    }

    return cachemulti.NewStore(rs.db, stores, rs.keysByName, rs.traceWriter, rs.getTracingContext())
}

// CacheMultiStoreWithVersion is analogous to CacheMultiStore except that it
// attempts to load stores at a given version (height). An error is returned if
// any store cannot be loaded. This should only be used for querying and
// iterating at past heights.
func (rs *Store) CacheMultiStoreWithVersion(version int64) (types.CacheMultiStore, error) {
    cachedStores := make(map[types.StoreKey]types.CacheWrapper)
    var commitInfo *types.CommitInfo
    storeInfos := map[string]bool{}
    for key, store := range rs.stores {
        var cacheStore types.KVStore
        switch store.GetStoreType() {
        case types.StoreTypeIAVL:
            // If the store is wrapped with an inter-block cache, we must first unwrap
            // it to get the underlying IAVL store.
            store = rs.GetCommitKVStore(key)

            // Attempt to lazy-load an already saved IAVL store version. If the
            // version does not exist or is pruned, an error should be returned.
            var err error
            cacheStore, err = store.(*iavl.Store).GetImmutable(version)
            // if we got an error from loading a module store,
            // we fetch commit info of this version and
            // use it to check if the store existed at this version or not
            if err != nil {
                if commitInfo == nil {
                    var errCommitInfo error
                    commitInfo, errCommitInfo = rs.GetCommitInfo(version)
                    if errCommitInfo != nil {
                        return nil, errCommitInfo
                    }
                    for _, storeInfo := range commitInfo.StoreInfos {
                        storeInfos[storeInfo.Name] = true
                    }
                }

                // If the store existed at this version, it means there's actually an error
                // getting the root store at this version.
                if storeInfos[key.Name()] {
                    return nil, err
                }
            }
        default:
            cacheStore = store
        }

        // Wire the listenkv.Store to allow listeners to observe the writes from the cache store;
        // setting the same listeners on the cache store would observe duplicated writes.
        if rs.ListeningEnabled(key) {
            cacheStore = listenkv.NewStore(cacheStore, key, rs.listeners[key])
        }
        cachedStores[key] = cacheStore
    }

    return cachemulti.NewStore(rs.db, cachedStores, rs.keysByName, rs.traceWriter, rs.getTracingContext()), nil
}

// GetStore returns a mounted Store for a given StoreKey. If the StoreKey does
// not exist, it will panic. If the Store is wrapped in an inter-block cache, it
// will be unwrapped prior to being returned.
//
// TODO: This isn't used directly upstream. Consider returning the Store as-is
// instead of unwrapping.
func (rs *Store) GetStore(key types.StoreKey) types.Store {
    store := rs.GetCommitKVStore(key)
    if store == nil {
        panic(fmt.Sprintf("store does not exist for key: %s", key.Name()))
    }

    return store
}

// GetKVStore returns a mounted KVStore for a given StoreKey. If tracing is
// enabled on the KVStore, a wrapped TraceKVStore will be returned with the root
// store's tracer, otherwise, the original KVStore will be returned.
//
// NOTE: The returned KVStore may be wrapped in an inter-block cache if it is
// set on the root store.
func (rs *Store) GetKVStore(key types.StoreKey) types.KVStore {
    s := rs.stores[key]
    if s == nil {
        panic(fmt.Sprintf("store does not exist for key: %s", key.Name()))
    }
    store := types.KVStore(s)
    if rs.TracingEnabled() {
        store = tracekv.NewStore(store, rs.traceWriter, rs.getTracingContext())
    }
    if rs.ListeningEnabled(key) {
        store = listenkv.NewStore(store, key, rs.listeners[key])
    }

    return store
}

func (rs *Store) handlePruning(version int64) error {
    pruneHeight := rs.pruningManager.GetPruningHeight(version)
    rs.logger.Debug("prune start", "height", version)
    defer rs.logger.Debug("prune end", "height", version)

    return rs.PruneStores(pruneHeight)
}

// PruneStores prunes all history up to the specific height of the multi store.
func (rs *Store) PruneStores(pruningHeight int64) (err error) {
    if pruningHeight <= 0 {
        rs.logger.Debug("pruning skipped, height is less than or equal to 0")
        return nil
    }

    rs.logger.Debug("pruning store", "heights", pruningHeight)
    for key, store := range rs.stores {
        rs.logger.Debug("pruning store", "key", key) // Also log store.name (a private variable)?

        // If the store is wrapped with an inter-block cache, we must first unwrap
        // it to get the underlying IAVL store.
        if store.GetStoreType() != types.StoreTypeIAVL {
            continue
        }

        store = rs.GetCommitKVStore(key)
        err := store.(*iavl.Store).DeleteVersionsTo(pruningHeight)
        if err == nil {
            continue
        }
        if errors.Is(err, iavltree.ErrVersionDoesNotExist) {
            return err
        }

        rs.logger.Error("failed to prune store", "key", key, "err", err)
    }

    return nil
}

// GetStoreByName performs a lookup of a StoreKey given a store name typically
// provided in a path. The StoreKey is then used to perform a lookup and return
// a Store. If the Store is wrapped in an inter-block cache, it will be unwrapped
// prior to being returned. If the StoreKey does not exist, nil is returned.
func (rs *Store) GetStoreByName(name string) types.Store {
    key := rs.keysByName[name]
    if key == nil {
        return nil
    }

    return rs.GetCommitKVStore(key)
}

// Query calls substore.Query with the same `req` where `req.Path` is
// modified to remove the substore prefix.
// Ie. `req.Path` here is `/<storeName>/<subpath>`, and trimmed to `/<subpath>` for the substore.
// TODO: add proof for `multistore -> substore`.
func (rs *Store) Query(req *types.RequestQuery) (*types.ResponseQuery, error) {
    path := req.Path
    storeName, subpath, err := parsePath(path)
    if err != nil {
        return &types.ResponseQuery{}, err
    }
    store := rs.GetStoreByName(storeName)
    if store == nil {
        return &types.ResponseQuery{}, errorsmod.Wrapf(types.ErrUnknownRequest, "no such store: %s", storeName)
    }

    queryable, ok := store.(types.Queryable)
    if !ok {
        return &types.ResponseQuery{}, errorsmod.Wrapf(types.ErrUnknownRequest, "store %s (type %T) doesn't support queries", storeName, store)
    }

    // trim the path and make the query
    req.Path = subpath
    res, err := queryable.Query(req)
    if !req.Prove || !RequireProof(subpath) {
        return res, err
    }
    if res.ProofOps == nil || len(res.ProofOps.Ops) == 0 {
        return &types.ResponseQuery{}, errorsmod.Wrap(types.ErrInvalidRequest, "proof is unexpectedly empty; ensure height has not been pruned")
    }

    // If the request's height is the latest height we've committed, then utilize
    // the store's lastCommitInfo as this commit info may not be flushed to disk.
    // Otherwise, we query for the commit info from disk.
    var commitInfo *types.CommitInfo
    if res.Height == rs.lastCommitInfo.Version {
        commitInfo = rs.lastCommitInfo
    } else {
        commitInfo, err = rs.GetCommitInfo(res.Height)
        if err != nil {
            return &types.ResponseQuery{}, err
        }
    }

    // Restore origin path and append proof op.
    res.ProofOps.Ops = append(res.ProofOps.Ops, commitInfo.ProofOp(storeName))

    return res, nil
}

// SetInitialVersion sets the initial version of the IAVL tree. It is used when
// starting a new chain at an arbitrary height.
func (rs *Store) SetInitialVersion(version int64) error {
    rs.initialVersion = version

    // Loop through all the stores, if it's an IAVL store, then set initial
    // version on it.
    for key, store := range rs.stores {
        if store.GetStoreType() == types.StoreTypeIAVL {
            // If the store is wrapped with an inter-block cache, we must first unwrap
            // it to get the underlying IAVL store.
            store = rs.GetCommitKVStore(key)
            store.(types.StoreWithInitialVersion).SetInitialVersion(version)
        }
    }

    return nil
}

// parsePath expects a format like /<storeName>[/<subpath>]
// Must start with /, subpath may be empty
// Returns error if it doesn't start with /
func parsePath(path string) (storeName, subpath string, err error) {
    if !strings.HasPrefix(path, "/") {
        return storeName, subpath, errorsmod.Wrapf(types.ErrUnknownRequest, "invalid path: %s", path)
    }

    storeName, subpath, found := strings.Cut(path[1:], "/")
    if !found {
        return storeName, subpath, nil
    }

    return storeName, "/" + subpath, nil
}

//---------------------- Snapshotting ------------------

// Snapshot implements snapshottypes.Snapshotter. The snapshot output for a given format must be
// identical across nodes such that chunks from different sources fit together. If the output for a
// given format changes (at the byte level), the snapshot format must be bumped - see
// TestMultistoreSnapshot_Checksum test.
func (rs *Store) Snapshot(height uint64, protoWriter protoio.Writer) error {
    if height == 0 {
        return errorsmod.Wrap(types.ErrLogic, "cannot snapshot height 0")
    }
    if height > uint64(GetLatestVersion(rs.db)) {
        return errorsmod.Wrapf(types.ErrLogic, "cannot snapshot future height %v", height)
    }

    // Collect stores to snapshot (only IAVL stores are supported)
    type namedStore struct {
        *iavl.Store
        name string
    }
    stores := []namedStore{}
    keys := keysFromStoreKeyMap(rs.stores)
    for _, key := range keys {
        switch store := rs.GetCommitKVStore(key).(type) {
        case *iavl.Store:
            stores = append(stores, namedStore{name: key.Name(), Store: store})
        case *transient.Store, *mem.Store:
            // Non-persisted stores shouldn't be snapshotted
            continue
        default:
            return errorsmod.Wrapf(types.ErrLogic,
                "don't know how to snapshot store %q of type %T", key.Name(), store)
        }
    }
    sort.Slice(stores, func(i, j int) bool {
        return strings.Compare(stores[i].name, stores[j].name) == -1
    })

    // Export each IAVL store. Stores are serialized as a stream of SnapshotItem Protobuf
    // messages. The first item contains a SnapshotStore with store metadata (i.e. name),
    // and the following messages contain a SnapshotNode (i.e. an ExportNode). Store changes
    // are demarcated by new SnapshotStore items.
    for _, store := range stores {
        rs.logger.Debug("starting snapshot", "store", store.name, "height", height)
        exporter, err := store.Export(int64(height))
        if err != nil {
            rs.logger.Error("snapshot failed; exporter error", "store", store.name, "err", err)
            return err
        }

        err = func() error {
            defer exporter.Close()
            err := protoWriter.WriteMsg(&snapshottypes.SnapshotItem{
                Item: &snapshottypes.SnapshotItem_Store{
                    Store: &snapshottypes.SnapshotStoreItem{
                        Name: store.name,
                    },
                },
            })
            if err != nil {
                rs.logger.Error("snapshot failed; item store write failed", "store", store.name, "err", err)
                return err
            }
            nodeCount := 0
            for {
                node, err := exporter.Next()
                if errors.Is(err, iavltree.ErrorExportDone) {
                    rs.logger.Debug("snapshot Done", "store", store.name, "nodeCount", nodeCount)
                    break
                } else if err != nil {
                    return err
                }
                err = protoWriter.WriteMsg(&snapshottypes.SnapshotItem{
                    Item: &snapshottypes.SnapshotItem_IAVL{
                        IAVL: &snapshottypes.SnapshotIAVLItem{
                            Key:     node.Key,
                            Value:   node.Value,
                            Height:  int32(node.Height),
                            Version: node.Version,
                        },
                    },
                })
                if err != nil {
                    return err
                }
                nodeCount++
            }

            return nil
        }()
        if err != nil {
            return err
        }
    }

    return nil
}

// Restore implements snapshottypes.Snapshotter.
// It returns the next snapshot item and error.
func (rs *Store) Restore(
    height uint64, format uint32, protoReader protoio.Reader,
) (snapshottypes.SnapshotItem, error) {
    // Import nodes into stores. The first item is expected to be a SnapshotItem containing
    // a SnapshotStoreItem, telling us which store to import into. The following items will contain
    // SnapshotNodeItem (i.e. ExportNode) until we reach the next SnapshotStoreItem or EOF.
    var importer *iavltree.Importer
    var snapshotItem snapshottypes.SnapshotItem
loop:
    for {
        snapshotItem = snapshottypes.SnapshotItem{}
        err := protoReader.ReadMsg(&snapshotItem)
        if errors.Is(err, io.EOF) {
            break
        } else if err != nil {
            return snapshottypes.SnapshotItem{}, errorsmod.Wrap(err, "invalid protobuf message")
        }
        switch item := snapshotItem.Item.(type) {
        case *snapshottypes.SnapshotItem_Store:
            if importer != nil {
                err = importer.Commit()
                if err != nil {
                    return snapshottypes.SnapshotItem{}, errorsmod.Wrap(err, "IAVL commit failed")
                }
                importer.Close()
            }
            store, ok := rs.GetStoreByName(item.Store.Name).(*iavl.Store)
            if !ok || store == nil {
                return snapshottypes.SnapshotItem{}, errorsmod.Wrapf(types.ErrLogic, "cannot import into non-IAVL store %q", item.Store.Name)
            }
            importer, err = store.Import(int64(height))
            if err != nil {
                return snapshottypes.SnapshotItem{}, errorsmod.Wrap(err, "import failed")
            }
            defer importer.Close()
            // Importer height must reflect the node height (which usually matches the block height, but not always)
            rs.logger.Debug("restoring snapshot", "store", item.Store.Name)
        case *snapshottypes.SnapshotItem_IAVL:
            if importer == nil {
                rs.logger.Error("failed to restore; received IAVL node item before store item")
                return snapshottypes.SnapshotItem{}, errorsmod.Wrap(types.ErrLogic, "received IAVL node item before store item")
            }
            if item.IAVL.Height > math.MaxInt8 {
                return snapshottypes.SnapshotItem{}, errorsmod.Wrapf(types.ErrLogic, "node height %v cannot exceed %v",
                    item.IAVL.Height, math.MaxInt8)
            }
            node := &iavltree.ExportNode{
                Key:     item.IAVL.Key,
                Value:   item.IAVL.Value,
                Height:  int8(item.IAVL.Height),
                Version: item.IAVL.Version,
            }
            // Protobuf does not differentiate between []byte{} and nil, but fortunately IAVL does
            // not allow nil keys nor nil values for leaf nodes, so we can always set them to empty.
            if node.Key == nil {
                node.Key = []byte{}
            }
            if node.Height == 0 && node.Value == nil {
                node.Value = []byte{}
            }
            err := importer.Add(node)
            if err != nil {
                return snapshottypes.SnapshotItem{}, errorsmod.Wrap(err, "IAVL node import failed")
            }
        default:
            break loop
        }
    }
    if importer != nil {
        err := importer.Commit()
        if err != nil {
            return snapshottypes.SnapshotItem{}, errorsmod.Wrap(err, "IAVL commit failed")
        }
        importer.Close()
    }

    rs.flushMetadata(rs.db, int64(height), rs.buildCommitInfo(int64(height)))

    return snapshotItem, rs.LoadLatestVersion()
}

func (rs *Store) loadCommitStoreFromParams(key types.StoreKey, id types.CommitID, params storeParams) (types.CommitKVStore, error) {
    var db dbm.DB
    if params.db != nil {
        db = dbm.NewPrefixDB(params.db, []byte("s/_/"))
    } else {
        prefix := "s/k:" + params.key.Name() + "/"
        db = dbm.NewPrefixDB(rs.db, []byte(prefix))
    }
    switch params.typ {
    case types.StoreTypeMulti:
        panic("recursive MultiStores not yet supported")
    case types.StoreTypeIAVL:
        store, err := iavl.LoadStoreWithOpts(db, rs.logger, key, id, params.initialVersion, rs.iavlCacheSize, rs.iavlDisableFastNode, rs.metrics, iavltree.AsyncPruningOption(!rs.iavlSyncPruning))
        if err != nil {
            return nil, err
        }
        if rs.interBlockCache != nil {
            // Wrap and get a CommitKVStore with inter-block caching. Note, this should
            // only wrap the primary CommitKVStore, not any store that is already
            // branched as that will create unexpected behavior.
            store = rs.interBlockCache.GetStoreCache(key, store)
        }

        return store, err
    case types.StoreTypeDB:
        return commitDBStoreAdapter{Store: dbadapter.Store{DB: db}}, nil
    case types.StoreTypeTransient:
        _, ok := key.(*types.TransientStoreKey)
        if !ok {
            return nil, fmt.Errorf("invalid StoreKey for StoreTypeTransient: %s", key.String())
        }

        return transient.NewStore(), nil
    case types.StoreTypeMemory:
        if _, ok := key.(*types.MemoryStoreKey); !ok {
            return nil, fmt.Errorf("unexpected key type for a MemoryStoreKey; got: %s", key.String())
        }

        return mem.NewStore(), nil
    default:
        panic(fmt.Sprintf("unrecognized store type %v", params.typ))
    }
}

func (rs *Store) buildCommitInfo(version int64) *types.CommitInfo {
    keys := keysFromStoreKeyMap(rs.stores)
    storeInfos := []types.StoreInfo{}
    for _, key := range keys {
        store := rs.stores[key]
        storeType := store.GetStoreType()
        if storeType == types.StoreTypeTransient || storeType == types.StoreTypeMemory {
            continue
        }
        storeInfos = append(storeInfos, types.StoreInfo{
            Name:     key.Name(),
            CommitId: store.LastCommitID(),
        })
    }

    return &types.CommitInfo{
        Version:    version,
        StoreInfos: storeInfos,
    }
}

// RollbackToVersion deletes the versions after `target` and updates the latest version.
func (rs *Store) RollbackToVersion(target int64) error {
    if target <= 0 {
        return fmt.Errorf("invalid rollback height target: %d", target)
    }
    for key, store := range rs.stores {
        if store.GetStoreType() == types.StoreTypeIAVL {
            // If the store is wrapped with an inter-block cache, we must first unwrap
            // it to get the underlying IAVL store.
            store = rs.GetCommitKVStore(key)
            err := store.(*iavl.Store).LoadVersionForOverwriting(target)
            if err != nil {
                return err
            }
        }
    }

    rs.flushMetadata(rs.db, target, rs.buildCommitInfo(target))

    return rs.LoadLatestVersion()
}

// SetCommitHeader sets the commit block header of the store.
func (rs *Store) SetCommitHeader(h cmtproto.Header) {
    rs.commitHeader = h
}

// GetCommitInfo attempts to retrieve CommitInfo for a given version/height. It
// will return an error if no CommitInfo exists, we fail to unmarshal the record
// or if we cannot retrieve the object from the DB.
func (rs *Store) GetCommitInfo(ver int64) (*types.CommitInfo, error) {
    cInfoKey := fmt.Sprintf(commitInfoKeyFmt, ver)
    bz, err := rs.db.Get([]byte(cInfoKey))
    if err != nil {
        return nil, errorsmod.Wrap(err, "failed to get commit info")
    } else if bz == nil {
        return nil, errors.New("no commit info found")
    }
    cInfo := &types.CommitInfo{}
    if err = cInfo.Unmarshal(bz); err != nil {
        return nil, errorsmod.Wrap(err, "failed unmarshal commit info")
    }

    return cInfo, nil
}

func (rs *Store) flushMetadata(db dbm.DB, version int64, cInfo *types.CommitInfo) {
    rs.logger.Debug("flushing metadata", "height", version)
    batch := db.NewBatch()
    defer func() {
        _ = batch.Close()
    }()
    if cInfo != nil {
        flushCommitInfo(batch, version, cInfo)
    } else {
        rs.logger.Debug("commitInfo is nil, not flushed", "height", version)
    }

    flushLatestVersion(batch, version)
    if err := batch.WriteSync(); err != nil {
        panic(fmt.Errorf("error on batch write %w", err))
    }
    rs.logger.Debug("flushing metadata finished", "height", version)
}

type storeParams struct {
    key            types.StoreKey
    db             dbm.DB
    typ            types.StoreType
    initialVersion uint64
}

func newStoreParams(key types.StoreKey, db dbm.DB, typ types.StoreType, initialVersion uint64) storeParams {
    return storeParams{
        key:            key,
        db:             db,
        typ:            typ,
        initialVersion: initialVersion,
    }
}

func GetLatestVersion(db dbm.DB) int64 {
    bz, err := db.Get([]byte(latestVersionKey))
    if err != nil {
        panic(err)
    } else if bz == nil {
        return 0
    }

    var latestVersion int64
    if err := gogotypes.StdInt64Unmarshal(&latestVersion, bz); err != nil {
        panic(err)
    }

    return latestVersion
}

// commitStores commits each store and returns a new commitInfo.
func commitStores(version int64, storeMap map[types.StoreKey]types.CommitKVStore, removalMap map[types.StoreKey]bool) *types.CommitInfo {
    storeInfos := make([]types.StoreInfo, 0, len(storeMap))
    storeKeys := keysFromStoreKeyMap(storeMap)
    for _, key := range storeKeys {
        store := storeMap[key]
        last := store.LastCommitID()

        // If a commit event execution is interrupted, a new iavl store's version
        // will be larger than the RMS's metadata; when the block is replayed, we
        // should avoid committing that iavl store again.
        var commitID types.CommitID
        if last.Version >= version {
            last.Version = version
            commitID = last
        } else {
            commitID = store.Commit()
        }
        storeType := store.GetStoreType()
        if storeType == types.StoreTypeTransient || storeType == types.StoreTypeMemory {
            continue
        }
        if !removalMap[key] {
            si := types.StoreInfo{}
            si.Name = key.Name()
            si.CommitId = commitID
            storeInfos = append(storeInfos, si)
        }
    }
    sort.SliceStable(storeInfos, func(i, j int) bool {
        return strings.Compare(storeInfos[i].Name, storeInfos[j].Name) < 0
    })

    return &types.CommitInfo{
        Version:    version,
        StoreInfos: storeInfos,
    }
}

func flushCommitInfo(batch dbm.Batch, version int64, cInfo *types.CommitInfo) {
    bz, err := cInfo.Marshal()
    if err != nil {
        panic(err)
    }
    cInfoKey := fmt.Sprintf(commitInfoKeyFmt, version)
    err = batch.Set([]byte(cInfoKey), bz)
    if err != nil {
        panic(err)
    }
}

func flushLatestVersion(batch dbm.Batch, version int64) {
    bz, err := gogotypes.StdInt64Marshal(version)
    if err != nil {
        panic(err)
    }
    err = batch.Set([]byte(latestVersionKey), bz)
    if err != nil {
        panic(err)
    }
}
```

The `rootMulti.Store` is a base-layer multistore built around a `db` on top of which multiple `KVStores` can be mounted, and is the default multistore used in [`baseapp`](/docs/sdk/next/documentation/application-framework/baseapp).
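The branching behavior that `CacheMultiStore` provides on top of the root multistore can be illustrated with a minimal, self-contained sketch. This is not the SDK API: `kvStore`, `mapStore`, and `cacheStore` below are hypothetical stand-ins (stdlib only) for the SDK's `KVStore`, a committed parent store, and a `cachekv`-style branch whose writes stay buffered until `Write` flushes them to the parent.

```go
package main

import "fmt"

// kvStore is a minimal key/value interface, standing in for the SDK's KVStore.
type kvStore interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// mapStore is a plain map-backed store, standing in for a committed parent store.
type mapStore map[string]string

func (m mapStore) Get(key string) (string, bool) { v, ok := m[key]; return v, ok }
func (m mapStore) Set(key, value string)         { m[key] = value }

// cacheStore branches a parent store: reads fall through to the parent,
// writes are buffered locally until Write flushes them back. This mirrors
// the cache-wrapping pattern that cachekv/cachemulti stores implement.
type cacheStore struct {
	parent kvStore
	dirty  map[string]string
}

func newCacheStore(parent kvStore) *cacheStore {
	return &cacheStore{parent: parent, dirty: map[string]string{}}
}

func (c *cacheStore) Get(key string) (string, bool) {
	if v, ok := c.dirty[key]; ok {
		return v, true
	}
	return c.parent.Get(key)
}

func (c *cacheStore) Set(key, value string) { c.dirty[key] = value }

// Write flushes buffered writes to the parent and resets the buffer,
// analogous in spirit to CacheMultiStore.Write.
func (c *cacheStore) Write() {
	for k, v := range c.dirty {
		c.parent.Set(k, v)
	}
	c.dirty = map[string]string{}
}

func main() {
	parent := mapStore{"balance": "100"}
	branch := newCacheStore(parent)

	branch.Set("balance", "90")    // tentative state change on the branch
	fmt.Println(parent["balance"]) // parent is untouched until Write

	branch.Write()                 // flush the branch back to the parent
	fmt.Println(parent["balance"])
}
```

Discarding a branch is then just dropping the `cacheStore` without calling `Write`, which is how failed transactions leave committed state untouched.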
+ +### CacheMultiStore + +Whenever the `rootMulti.Store` needs to be branched, a [`cachemulti.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/cachemulti/store.go) is used. + +```go expandable +package cachemulti + +import ( + + "fmt" + "io" + "maps" + + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/store/cachekv" + "cosmossdk.io/store/dbadapter" + "cosmossdk.io/store/tracekv" + "cosmossdk.io/store/types" +) + +/ storeNameCtxKey is the TraceContext metadata key that identifies +/ the store which emitted a given trace. +const storeNameCtxKey = "store_name" + +/---------------------------------------- +/ Store + +/ Store holds many branched stores. +/ Implements MultiStore. +/ NOTE: a Store (and MultiStores in general) + +should never expose the +/ keys for the substores. +type Store struct { + db types.CacheKVStore + stores map[types.StoreKey]types.CacheWrap + keys map[string]types.StoreKey + + traceWriter io.Writer + traceContext types.TraceContext +} + +var _ types.CacheMultiStore = Store{ +} + +/ NewFromKVStore creates a new Store object from a mapping of store keys to +/ CacheWrapper objects and a KVStore as the database. Each CacheWrapper store +/ is a branched store. 
+func NewFromKVStore( + store types.KVStore, stores map[types.StoreKey]types.CacheWrapper, + keys map[string]types.StoreKey, traceWriter io.Writer, traceContext types.TraceContext, +) + +Store { + cms := Store{ + db: cachekv.NewStore(store), + stores: make(map[types.StoreKey]types.CacheWrap, len(stores)), + keys: keys, + traceWriter: traceWriter, + traceContext: traceContext, +} + for key, store := range stores { + if cms.TracingEnabled() { + tctx := cms.traceContext.Clone().Merge(types.TraceContext{ + storeNameCtxKey: key.Name(), +}) + +store = tracekv.NewStore(store.(types.KVStore), cms.traceWriter, tctx) +} + +cms.stores[key] = cachekv.NewStore(store.(types.KVStore)) +} + +return cms +} + +/ NewStore creates a new Store object from a mapping of store keys to +/ CacheWrapper objects. Each CacheWrapper store is a branched store. +func NewStore( + db dbm.DB, stores map[types.StoreKey]types.CacheWrapper, keys map[string]types.StoreKey, + traceWriter io.Writer, traceContext types.TraceContext, +) + +Store { + return NewFromKVStore(dbadapter.Store{ + DB: db +}, stores, keys, traceWriter, traceContext) +} + +func newCacheMultiStoreFromCMS(cms Store) + +Store { + stores := make(map[types.StoreKey]types.CacheWrapper) + for k, v := range cms.stores { + stores[k] = v +} + +return NewFromKVStore(cms.db, stores, nil, cms.traceWriter, cms.traceContext) +} + +/ SetTracer sets the tracer for the MultiStore that the underlying +/ stores will utilize to trace operations. A MultiStore is returned. +func (cms Store) + +SetTracer(w io.Writer) + +types.MultiStore { + cms.traceWriter = w + return cms +} + +/ SetTracingContext updates the tracing context for the MultiStore by merging +/ the given context with the existing context by key. Any existing keys will +/ be overwritten. It is implied that the caller should update the context when +/ necessary between tracing operations. It returns a modified MultiStore. 
+func (cms Store) + +SetTracingContext(tc types.TraceContext) + +types.MultiStore { + if cms.traceContext != nil { + maps.Copy(cms.traceContext, tc) +} + +else { + cms.traceContext = tc +} + +return cms +} + +/ TracingEnabled returns if tracing is enabled for the MultiStore. +func (cms Store) + +TracingEnabled() + +bool { + return cms.traceWriter != nil +} + +/ LatestVersion returns the branch version of the store +func (cms Store) + +LatestVersion() + +int64 { + panic("cannot get latest version from branch cached multi-store") +} + +/ GetStoreType returns the type of the store. +func (cms Store) + +GetStoreType() + +types.StoreType { + return types.StoreTypeMulti +} + +/ Write calls Write on each underlying store. +func (cms Store) + +Write() { + cms.db.Write() + for _, store := range cms.stores { + store.Write() +} +} + +/ Implements CacheWrapper. +func (cms Store) + +CacheWrap() + +types.CacheWrap { + return cms.CacheMultiStore().(types.CacheWrap) +} + +/ CacheWrapWithTrace implements the CacheWrapper interface. +func (cms Store) + +CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) + +types.CacheWrap { + return cms.CacheWrap() +} + +/ Implements MultiStore. +func (cms Store) + +CacheMultiStore() + +types.CacheMultiStore { + return newCacheMultiStoreFromCMS(cms) +} + +/ CacheMultiStoreWithVersion implements the MultiStore interface. It will panic +/ as an already cached multi-store cannot load previous versions. +/ +/ TODO: The store implementation can possibly be modified to support this as it +/ seems safe to load previous versions (heights). +func (cms Store) + +CacheMultiStoreWithVersion(_ int64) (types.CacheMultiStore, error) { + panic("cannot branch cached multi-store with a version") +} + +/ GetStore returns an underlying Store by key. 
+func (cms Store) GetStore(key types.StoreKey) types.Store {
+	s := cms.stores[key]
+	if key == nil || s == nil {
+		panic(fmt.Sprintf("kv store with key %v has not been registered in stores", key))
+	}
+
+	return s.(types.Store)
+}
+
+// GetKVStore returns an underlying KVStore by key.
+func (cms Store) GetKVStore(key types.StoreKey) types.KVStore {
+	store := cms.stores[key]
+	if key == nil || store == nil {
+		panic(fmt.Sprintf("kv store with key %v has not been registered in stores", key))
+	}
+
+	return store.(types.KVStore)
+}
+```
+
+`cachemulti.Store` branches all of its substores in its constructor (creating a virtual store for each substore) and holds them in `Store.stores`. It also caches all read queries. `Store.GetKVStore()` returns the store from `Store.stores`, and `Store.Write()` recursively calls `CacheWrap.Write()` on all the substores.
+
+## Base-layer KVStores
+
+### `KVStore` and `CommitKVStore` Interfaces
+
+A `KVStore` is a simple key-value store used to store and retrieve data. A `CommitKVStore` is a `KVStore` that also implements a `Committer`. By default, stores mounted in `baseapp`'s main `CommitMultiStore` are `CommitKVStore`s. The `KVStore` interface is primarily used to restrict modules from accessing the committer.
+
+Individual `KVStore`s are used by modules to manage a subset of the global state. `KVStores` can be accessed by objects that hold a specific key. This `key` should only be exposed to the [`keeper`](/docs/sdk/next/documentation/module-system/keeper) of the module that defines the store.
+
+`CommitKVStore`s are declared by proxy of their respective `key` and mounted on the application's [multistore](#multistore) in the [main application file](/docs/sdk/next/documentation/application-framework/app-anatomy#core-application-file). In the same file, the `key` is also passed to the module's `keeper`, which is responsible for managing the store.
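+The key-based access described above is an object-capability pattern: a substore is reachable only by whoever holds the pointer-valued key used to mount it, which is why store keys are compared by pointer identity rather than by name. A toy stdlib sketch of the idea (the `storeKey` and `multiStore` types here are simplified stand-ins for the SDK's `KVStoreKey` and multistore, not its actual API):
+
+```go
+package main
+
+import "fmt"
+
+// storeKey is comparable only by pointer identity, like the SDK's KVStoreKey:
+// two keys with the same name are still distinct capabilities.
+type storeKey struct{ name string }
+
+func newStoreKey(name string) *storeKey { return &storeKey{name: name} }
+
+// multiStore maps key pointers to their substores. Only callers holding the
+// exact *storeKey used at mount time can reach the substore.
+type multiStore struct {
+	stores map[*storeKey]map[string]string
+}
+
+func (ms *multiStore) mount(key *storeKey) {
+	if ms.stores == nil {
+		ms.stores = make(map[*storeKey]map[string]string)
+	}
+	ms.stores[key] = make(map[string]string)
+}
+
+func (ms *multiStore) getKVStore(key *storeKey) map[string]string {
+	s, ok := ms.stores[key]
+	if !ok {
+		panic(fmt.Sprintf("store with key %v has not been mounted", key))
+	}
+	return s
+}
+
+func main() {
+	bankKey := newStoreKey("bank")
+	ms := &multiStore{}
+	ms.mount(bankKey)
+
+	// The module keeper holds bankKey and can access its substore...
+	ms.getKVStore(bankKey)["balance/alice"] = "100"
+
+	// ...while a key with the same name but a different identity cannot.
+	forged := newStoreKey("bank")
+	_, reachable := ms.stores[forged]
+	fmt.Println(reachable) // false
+}
+```
+
+Because the real `KVStoreKey` is only handed to the owning module's keeper at wiring time, no other module can read or write that module's state, even if it knows the store's name.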
+ +```go expandable +package types + +import ( + + "fmt" + "io" + "maps" + "slices" + "github.com/cometbft/cometbft/proto/tendermint/crypto" + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/store/metrics" + pruningtypes "cosmossdk.io/store/pruning/types" + snapshottypes "cosmossdk.io/store/snapshots/types" +) + +type Store interface { + GetStoreType() + +StoreType + CacheWrapper +} + +/ something that can persist to disk +type Committer interface { + Commit() + +CommitID + LastCommitID() + +CommitID + + / WorkingHash returns the hash of the KVStore's state before commit. + WorkingHash() []byte + + SetPruning(pruningtypes.PruningOptions) + +GetPruning() + +pruningtypes.PruningOptions +} + +/ Stores of MultiStore must implement CommitStore. +type CommitStore interface { + Committer + Store +} + +/ Queryable allows a Store to expose internal state to the abci.Query +/ interface. Multistore can route requests to the proper Store. +/ +/ This is an optional, but useful extension to any CommitStore +type Queryable interface { + Query(*RequestQuery) (*ResponseQuery, error) +} + +type RequestQuery struct { + Data []byte + Path string + Height int64 + Prove bool +} + +type ResponseQuery struct { + Code uint32 + Log string + Info string + Index int64 + Key []byte + Value []byte + ProofOps *crypto.ProofOps + Height int64 + Codespace string +} + +/---------------------------------------- +/ MultiStore + +/ StoreUpgrades defines a series of transformations to apply the multistore db upon load +type StoreUpgrades struct { + Added []string `json:"added"` + Renamed []StoreRename `json:"renamed"` + Deleted []string `json:"deleted"` +} + +/ StoreRename defines a name change of a sub-store. +/ All data previously under a PrefixStore with OldKey will be copied +/ to a PrefixStore with NewKey, then deleted from OldKey store. 
+type StoreRename struct { + OldKey string `json:"old_key"` + NewKey string `json:"new_key"` +} + +/ IsAdded returns true if the given key should be added +func (s *StoreUpgrades) + +IsAdded(key string) + +bool { + if s == nil { + return false +} + +return slices.Contains(s.Added, key) +} + +/ IsDeleted returns true if the given key should be deleted +func (s *StoreUpgrades) + +IsDeleted(key string) + +bool { + if s == nil { + return false +} + +return slices.Contains(s.Deleted, key) +} + +/ RenamedFrom returns the oldKey if it was renamed +/ Returns "" if it was not renamed +func (s *StoreUpgrades) + +RenamedFrom(key string) + +string { + if s == nil { + return "" +} + for _, re := range s.Renamed { + if re.NewKey == key { + return re.OldKey +} + +} + +return "" +} + +type MultiStore interface { + Store + + / Branches MultiStore into a cached storage object. + / NOTE: Caller should probably not call .Write() + +on each, but + / call CacheMultiStore.Write(). + CacheMultiStore() + +CacheMultiStore + + / CacheMultiStoreWithVersion branches the underlying MultiStore where + / each stored is loaded at a specific version (height). + CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error) + + / Convenience for fetching substores. + / If the store does not exist, panics. + GetStore(StoreKey) + +Store + GetKVStore(StoreKey) + +KVStore + + / TracingEnabled returns if tracing is enabled for the MultiStore. + TracingEnabled() + +bool + + / SetTracer sets the tracer for the MultiStore that the underlying + / stores will utilize to trace operations. The modified MultiStore is + / returned. + SetTracer(w io.Writer) + +MultiStore + + / SetTracingContext sets the tracing context for a MultiStore. It is + / implied that the caller should update the context when necessary between + / tracing operations. The modified MultiStore is returned. 
+ SetTracingContext(TraceContext) + +MultiStore + + / LatestVersion returns the latest version in the store + LatestVersion() + +int64 +} + +/ From MultiStore.CacheMultiStore().... +type CacheMultiStore interface { + MultiStore + Write() / Writes operations to underlying KVStore +} + +/ CommitMultiStore is an interface for a MultiStore without cache capabilities. +type CommitMultiStore interface { + Committer + MultiStore + snapshottypes.Snapshotter + + / Mount a store of type using the given db. + / If db == nil, the new store will use the CommitMultiStore db. + MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB) + + / Panics on a nil key. + GetCommitStore(key StoreKey) + +CommitStore + + / Panics on a nil key. + GetCommitKVStore(key StoreKey) + +CommitKVStore + + / Load the latest persisted version. Called once after all calls to + / Mount*Store() + +are complete. + LoadLatestVersion() + +error + + / LoadLatestVersionAndUpgrade will load the latest version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) + +error + + / LoadVersionAndUpgrade will load the named version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) + +error + + / Load a specific persisted version. When you load an old version, or when + / the last commit attempt didn't complete, the next commit after loading + / must be idempotent (return the same commit id). Otherwise the behavior is + / undefined. + LoadVersion(ver int64) + +error + + / Set an inter-block (persistent) + +cache that maintains a mapping from + / StoreKeys to CommitKVStores. + SetInterBlockCache(MultiStorePersistentCache) + + / SetInitialVersion sets the initial version of the IAVL tree. 
It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) + +error + + / SetIAVLCacheSize sets the cache size of the IAVL tree. + SetIAVLCacheSize(size int) + + / SetIAVLDisableFastNode enables/disables fastnode feature on iavl. + SetIAVLDisableFastNode(disable bool) + + / SetIAVLSyncPruning set sync/async pruning on iavl. + / It is not recommended to use this option. + / It is here to enable the prune command to force this to true, allowing the command to wait + / for the pruning to finish before returning. + SetIAVLSyncPruning(sync bool) + + / RollbackToVersion rollback the db to specific version(height). + RollbackToVersion(version int64) + +error + + / ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey + ListeningEnabled(key StoreKey) + +bool + + / AddListeners adds a listener for the KVStore belonging to the provided StoreKey + AddListeners(keys []StoreKey) + + / PopStateCache returns the accumulated state change messages from the CommitMultiStore + PopStateCache() []*StoreKVPair + + / SetMetrics sets the metrics for the KVStore + SetMetrics(metrics metrics.StoreMetrics) +} + +/---------subsp------------------------------- +/ KVStore + +/ BasicKVStore is a simple interface to get/set data +type BasicKVStore interface { + / Get returns nil if key doesn't exist. Panics on nil key. + Get(key []byte) []byte + + / Has checks if a key exists. Panics on nil key. + Has(key []byte) + +bool + + / Set sets the key. Panics on nil key or value. + Set(key, value []byte) + + / Delete deletes the key. Panics on nil key. + Delete(key []byte) +} + +/ KVStore additionally provides iteration and deletion +type KVStore interface { + Store + BasicKVStore + + / Iterator over a domain of keys in ascending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. 
+ / To iterate over entire domain, use store.Iterator(nil, nil) + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + Iterator(start, end []byte) + +Iterator + + / Iterator over a domain of keys in descending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + ReverseIterator(start, end []byte) + +Iterator +} + +/ Iterator is an alias db's Iterator for convenience. +type Iterator = dbm.Iterator + +/ CacheKVStore branches a KVStore and provides read cache functionality. +/ After calling .Write() + +on the CacheKVStore, all previously created +/ CacheKVStores on the object expire. +type CacheKVStore interface { + KVStore + + / Writes operations to underlying KVStore + Write() +} + +/ CommitKVStore is an interface for MultiStore. +type CommitKVStore interface { + Committer + KVStore +} + +/---------------------------------------- +/ CacheWrap + +/ CacheWrap is the most appropriate interface for store ephemeral branching and cache. +/ For example, IAVLStore.CacheWrap() + +returns a CacheKVStore. CacheWrap should not return +/ a Committer, since Commit ephemeral store make no sense. It can return KVStore, +/ HeapStore, SpaceStore, etc. +type CacheWrap interface { + / Write syncs with the underlying store. + Write() + + / CacheWrap recursively wraps again. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace recursively wraps again with tracing enabled. + CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +type CacheWrapper interface { + / CacheWrap branches a store. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace branches a store with tracing enabled. 
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + +maps.Copy(ret, tc) + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + +maps.Copy(tc, newTc) + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+	assertNoCommonPrefix(names)
+	keys := make(map[string]*MemoryStoreKey)
+	for _, n := range names {
+		keys[n] = NewMemoryStoreKey(n)
+	}
+
+	return keys
+}
+```
+
+Apart from the traditional `Get` and `Set` methods that a `KVStore` must implement via the `BasicKVStore` interface, a `KVStore` must also provide an `Iterator(start, end)` method, which returns an `Iterator` object. Iterators are used to iterate over a range of keys, typically keys that share a common prefix. Below is an example from the bank module's keeper, used to iterate over all account balances:
+
+```go expandable
+package keeper
+
+import (
+	"context"
+	"fmt"
+
+	"cosmossdk.io/collections"
+	"cosmossdk.io/collections/indexes"
+	"cosmossdk.io/core/store"
+	errorsmod "cosmossdk.io/errors"
+	"cosmossdk.io/log"
+	"cosmossdk.io/math"
+
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	"github.com/cosmos/cosmos-sdk/x/bank/types"
+)
+
+var _ ViewKeeper = (*BaseViewKeeper)(nil)
+
+// ViewKeeper defines a module interface that facilitates read only access to
+// account balances.
+type ViewKeeper interface { + ValidateBalance(ctx context.Context, addr sdk.AccAddress) + +error + HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) + +bool + + GetAllBalances(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + GetAccountsBalances(ctx context.Context) []types.Balance + GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + LockedCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + + IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool)) + +IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool)) +} + +func newBalancesIndexes(sb *collections.SchemaBuilder) + +BalancesIndexes { + return BalancesIndexes{ + Denom: indexes.NewReversePair[math.Int](/docs/sdk/next/documentation/state-storage/ + sb, types.DenomAddressPrefix, "address_by_denom_index", + collections.PairKeyCodec(sdk.LengthPrefixedAddressKey(sdk.AccAddressKey), collections.StringKey), / nolint:staticcheck / Note: refer to the LengthPrefixedAddressKey docs to understand why we do this. + indexes.WithReversePairUncheckedValue(), / denom to address indexes were stored as Key: Join(denom, address) + +Value: []byte{0 +}, this will migrate the value to []byte{ +} + +in a lazy way. + ), +} +} + +type BalancesIndexes struct { + Denom *indexes.ReversePair[sdk.AccAddress, string, math.Int] +} + +func (b BalancesIndexes) + +IndexesList() []collections.Index[collections.Pair[sdk.AccAddress, string], math.Int] { + return []collections.Index[collections.Pair[sdk.AccAddress, string], math.Int]{ + b.Denom +} +} + +/ BaseViewKeeper implements a read only keeper implementation of ViewKeeper. 
+type BaseViewKeeper struct { + cdc codec.BinaryCodec + storeService store.KVStoreService + ak types.AccountKeeper + logger log.Logger + + Schema collections.Schema + Supply collections.Map[string, math.Int] + DenomMetadata collections.Map[string, types.Metadata] + SendEnabled collections.Map[string, bool] + Balances *collections.IndexedMap[collections.Pair[sdk.AccAddress, string], math.Int, BalancesIndexes] + Params collections.Item[types.Params] +} + +/ NewBaseViewKeeper returns a new BaseViewKeeper. +func NewBaseViewKeeper(cdc codec.BinaryCodec, storeService store.KVStoreService, ak types.AccountKeeper, logger log.Logger) + +BaseViewKeeper { + sb := collections.NewSchemaBuilder(storeService) + k := BaseViewKeeper{ + cdc: cdc, + storeService: storeService, + ak: ak, + logger: logger, + Supply: collections.NewMap(sb, types.SupplyKey, "supply", collections.StringKey, sdk.IntValue), + DenomMetadata: collections.NewMap(sb, types.DenomMetadataPrefix, "denom_metadata", collections.StringKey, codec.CollValue[types.Metadata](/docs/sdk/next/documentation/state-storage/cdc)), + SendEnabled: collections.NewMap(sb, types.SendEnabledPrefix, "send_enabled", collections.StringKey, codec.BoolValue), / NOTE: we use a bool value which uses protobuf to retain state backwards compat + Balances: collections.NewIndexedMap(sb, types.BalancesPrefix, "balances", collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), types.BalanceValueCodec, newBalancesIndexes(sb)), + Params: collections.NewItem(sb, types.ParamsKey, "params", codec.CollValue[types.Params](/docs/sdk/next/documentation/state-storage/cdc)), +} + +schema, err := sb.Build() + if err != nil { + panic(err) +} + +k.Schema = schema + return k +} + +/ HasBalance returns whether or not an account has at least amt balance. 
+func (k BaseViewKeeper) + +HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) + +bool { + return k.GetBalance(ctx, addr, amt.Denom).IsGTE(amt) +} + +/ Logger returns a module-specific logger. +func (k BaseViewKeeper) + +Logger() + +log.Logger { + return k.logger +} + +/ GetAllBalances returns all the account balances for the given account address. +func (k BaseViewKeeper) + +GetAllBalances(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + balances := sdk.NewCoins() + +k.IterateAccountBalances(ctx, addr, func(balance sdk.Coin) + +bool { + balances = balances.Add(balance) + +return false +}) + +return balances.Sort() +} + +/ GetAccountsBalances returns all the accounts balances from the store. +func (k BaseViewKeeper) + +GetAccountsBalances(ctx context.Context) []types.Balance { + balances := make([]types.Balance, 0) + mapAddressToBalancesIdx := make(map[string]int) + +k.IterateAllBalances(ctx, func(addr sdk.AccAddress, balance sdk.Coin) + +bool { + idx, ok := mapAddressToBalancesIdx[addr.String()] + if ok { + / address is already on the set of accounts balances + balances[idx].Coins = balances[idx].Coins.Add(balance) + +balances[idx].Coins.Sort() + +return false +} + accountBalance := types.Balance{ + Address: addr.String(), + Coins: sdk.NewCoins(balance), +} + +balances = append(balances, accountBalance) + +mapAddressToBalancesIdx[addr.String()] = len(balances) - 1 + return false +}) + +return balances +} + +/ GetBalance returns the balance of a specific denomination for a given account +/ by address. +func (k BaseViewKeeper) + +GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin { + amt, err := k.Balances.Get(ctx, collections.Join(addr, denom)) + if err != nil { + return sdk.NewCoin(denom, math.ZeroInt()) +} + +return sdk.NewCoin(denom, amt) +} + +/ IterateAccountBalances iterates over the balances of a single account and +/ provides the token balance to a callback. 
If true is returned from the +/ callback, iteration is halted. +func (k BaseViewKeeper) + +IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(sdk.Coin) + +bool) { + err := k.Balances.Walk(ctx, collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/next/documentation/state-storage/addr), func(key collections.Pair[sdk.AccAddress, string], value math.Int) (stop bool, err error) { + return cb(sdk.NewCoin(key.K2(), value)), nil +}) + if err != nil { + panic(err) +} +} + +/ IterateAllBalances iterates over all the balances of all accounts and +/ denominations that are provided to a callback. If true is returned from the +/ callback, iteration is halted. +func (k BaseViewKeeper) + +IterateAllBalances(ctx context.Context, cb func(sdk.AccAddress, sdk.Coin) + +bool) { + err := k.Balances.Walk(ctx, nil, func(key collections.Pair[sdk.AccAddress, string], value math.Int) (stop bool, err error) { + return cb(key.K1(), sdk.NewCoin(key.K2(), value)), nil +}) + if err != nil { + panic(err) +} +} + +/ LockedCoins returns all the coins that are not spendable (i.e. locked) + for an +/ account by address. For standard accounts, the result will always be no coins. +/ For vesting accounts, LockedCoins is delegated to the concrete vesting account +/ type. +func (k BaseViewKeeper) + +LockedCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + acc := k.ak.GetAccount(ctx, addr) + if acc != nil { + vacc, ok := acc.(types.VestingAccount) + if ok { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +return vacc.LockedCoins(sdkCtx.BlockTime()) +} + +} + +return sdk.NewCoins() +} + +/ SpendableCoins returns the total balances of spendable coins for an account +/ by address. If the account has no spendable coins, an empty Coins slice is +/ returned. 
+func (k BaseViewKeeper) + +SpendableCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + spendable, _ := k.spendableCoins(ctx, addr) + +return spendable +} + +/ SpendableCoin returns the balance of specific denomination of spendable coins +/ for an account by address. If the account has no spendable coin, a zero Coin +/ is returned. +func (k BaseViewKeeper) + +SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin { + balance := k.GetBalance(ctx, addr, denom) + locked := k.LockedCoins(ctx, addr) + +return balance.SubAmount(locked.AmountOf(denom)) +} + +/ spendableCoins returns the coins the given address can spend alongside the total amount of coins it holds. +/ It exists for gas efficiency, in order to avoid to have to get balance multiple times. +func (k BaseViewKeeper) + +spendableCoins(ctx context.Context, addr sdk.AccAddress) (spendable, total sdk.Coins) { + total = k.GetAllBalances(ctx, addr) + locked := k.LockedCoins(ctx, addr) + +spendable, hasNeg := total.SafeSub(locked...) + if hasNeg { + spendable = sdk.NewCoins() + +return +} + +return +} + +/ ValidateBalance validates all balances for a given account address returning +/ an error if any balance is invalid. It will check for vesting account types +/ and validate the balances against the original vesting balances. +/ +/ CONTRACT: ValidateBalance should only be called upon genesis state. In the +/ case of vesting accounts, balances may change in a valid manner that would +/ otherwise yield an error from this call. 
+func (k BaseViewKeeper) + +ValidateBalance(ctx context.Context, addr sdk.AccAddress) + +error { + acc := k.ak.GetAccount(ctx, addr) + if acc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr) +} + balances := k.GetAllBalances(ctx, addr) + if !balances.IsValid() { + return fmt.Errorf("account balance of %s is invalid", balances) +} + +vacc, ok := acc.(types.VestingAccount) + if ok { + ogv := vacc.GetOriginalVesting() + if ogv.IsAnyGT(balances) { + return fmt.Errorf("vesting amount %s cannot be greater than total amount %s", ogv, balances) +} + +} + +return nil +} +``` + +### `IAVL` Store + +The default implementation of `KVStore` and `CommitKVStore` used in `baseapp` is the `iavl.Store`. + +```go expandable +package iavl + +import ( + + "errors" + "fmt" + "io" + + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/iavl" + ics23 "github.com/cosmos/ics23/go" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store/cachekv" + "cosmossdk.io/store/internal/kv" + "cosmossdk.io/store/metrics" + pruningtypes "cosmossdk.io/store/pruning/types" + "cosmossdk.io/store/tracekv" + "cosmossdk.io/store/types" + "cosmossdk.io/store/wrapper" +) + +const ( + DefaultIAVLCacheSize = 500000 +) + +var ( + _ types.KVStore = (*Store)(nil) + _ types.CommitStore = (*Store)(nil) + _ types.CommitKVStore = (*Store)(nil) + _ types.Queryable = (*Store)(nil) + _ types.StoreWithInitialVersion = (*Store)(nil) +) + +/ Store Implements types.KVStore and CommitKVStore. +type Store struct { + tree Tree + logger log.Logger + metrics metrics.StoreMetrics +} + +/ LoadStore returns an IAVL Store as a CommitKVStore. Internally, it will load the +/ store's version (id) + +from the provided DB. An error is returned if the version +/ fails to load, or if called with a positive version on an empty tree. 
+func LoadStore(db dbm.DB, logger log.Logger, key types.StoreKey, id types.CommitID, cacheSize int, disableFastNode bool, metrics metrics.StoreMetrics) (types.CommitKVStore, error) { + return LoadStoreWithInitialVersion(db, logger, key, id, 0, cacheSize, disableFastNode, metrics) +} + +/ LoadStoreWithInitialVersion returns an IAVL Store as a CommitKVStore setting its initialVersion +/ to the one given. Internally, it will load the store's version (id) + +from the +/ provided DB. An error is returned if the version fails to load, or if called with a positive +/ version on an empty tree. +func LoadStoreWithInitialVersion(db dbm.DB, logger log.Logger, key types.StoreKey, id types.CommitID, initialVersion uint64, cacheSize int, disableFastNode bool, metrics metrics.StoreMetrics) (types.CommitKVStore, error) { + return LoadStoreWithOpts(db, logger, key, id, initialVersion, cacheSize, disableFastNode, metrics, iavl.AsyncPruningOption(true)) +} + +func LoadStoreWithOpts(db dbm.DB, logger log.Logger, key types.StoreKey, id types.CommitID, initialVersion uint64, cacheSize int, disableFastNode bool, metrics metrics.StoreMetrics, opts ...iavl.Option) (types.CommitKVStore, error) { + / store/v1 and app/v1 flows never require an initial version of 0 + if initialVersion == 0 { + initialVersion = 1 +} + +opts = append(opts, iavl.InitialVersionOption(initialVersion)) + tree := iavl.NewMutableTree(wrapper.NewDBWrapper(db), cacheSize, disableFastNode, logger, opts...) + +isUpgradeable, err := tree.IsUpgradeable() + if err != nil { + return nil, err +} + if isUpgradeable && logger != nil { + logger.Info( + "Upgrading IAVL storage for faster queries + execution on live state. 
This may take a while", + "store_key", key.String(), + "version", initialVersion, + "commit", fmt.Sprintf("%X", id), + ) +} + + _, err = tree.LoadVersion(id.Version) + if err != nil { + return nil, err +} + if logger != nil { + logger.Debug("Finished loading IAVL tree") +} + +return &Store{ + tree: tree, + logger: logger, + metrics: metrics, +}, nil +} + +/ UnsafeNewStore returns a reference to a new IAVL Store with a given mutable +/ IAVL tree reference. It should only be used for testing purposes. +/ +/ CONTRACT: The IAVL tree should be fully loaded. +/ CONTRACT: PruningOptions passed in as argument must be the same as pruning options +/ passed into iavl.MutableTree +func UnsafeNewStore(tree *iavl.MutableTree) *Store { + return &Store{ + tree: tree, + metrics: metrics.NewNoOpMetrics(), +} +} + +/ GetImmutable returns a reference to a new store backed by an immutable IAVL +/ tree at a specific version (height) + +without any pruning options. This should +/ be used for querying and iteration only. If the version does not exist or has +/ been pruned, an empty immutable IAVL tree will be used. +/ Any mutable operations executed will result in a panic. +func (st *Store) + +GetImmutable(version int64) (*Store, error) { + if !st.VersionExists(version) { + return nil, errors.New("version mismatch on immutable IAVL tree; version does not exist. Version has either been pruned, or is for a future block height") +} + +iTree, err := st.tree.GetImmutable(version) + if err != nil { + return nil, err +} + +return &Store{ + tree: &immutableTree{ + iTree +}, + metrics: st.metrics, +}, nil +} + +/ Commit commits the current store state and returns a CommitID with the new +/ version and hash. 
+func (st *Store) + +Commit() + +types.CommitID { + defer st.metrics.MeasureSince("store", "iavl", "commit") + +hash, version, err := st.tree.SaveVersion() + if err != nil { + panic(err) +} + +return types.CommitID{ + Version: version, + Hash: hash, +} +} + +/ WorkingHash returns the hash of the current working tree. +func (st *Store) + +WorkingHash() []byte { + return st.tree.WorkingHash() +} + +/ LastCommitID implements Committer. +func (st *Store) + +LastCommitID() + +types.CommitID { + return types.CommitID{ + Version: st.tree.Version(), + Hash: st.tree.Hash(), +} +} + +/ SetPruning panics as pruning options should be provided at initialization +/ since IAVl accepts pruning options directly. +func (st *Store) + +SetPruning(_ pruningtypes.PruningOptions) { + panic("cannot set pruning options on an initialized IAVL store") +} + +/ SetPruning panics as pruning options should be provided at initialization +/ since IAVl accepts pruning options directly. +func (st *Store) + +GetPruning() + +pruningtypes.PruningOptions { + panic("cannot get pruning options on an initialized IAVL store") +} + +/ VersionExists returns whether or not a given version is stored. +func (st *Store) + +VersionExists(version int64) + +bool { + return st.tree.VersionExists(version) +} + +/ GetAllVersions returns all versions in the iavl tree +func (st *Store) + +GetAllVersions() []int { + return st.tree.AvailableVersions() +} + +/ Implements Store. +func (st *Store) + +GetStoreType() + +types.StoreType { + return types.StoreTypeIAVL +} + +/ Implements Store. +func (st *Store) + +CacheWrap() + +types.CacheWrap { + return cachekv.NewStore(st) +} + +/ CacheWrapWithTrace implements the Store interface. +func (st *Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return cachekv.NewStore(tracekv.NewStore(st, w, tc)) +} + +/ Implements types.KVStore. 
+func (st *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + _, err := st.tree.Set(key, value) + if err != nil && st.logger != nil { + st.logger.Error("iavl set error", "error", err.Error()) +} +} + +/ Implements types.KVStore. +func (st *Store) + +Get(key []byte) []byte { + defer st.metrics.MeasureSince("store", "iavl", "get") + +value, err := st.tree.Get(key) + if err != nil { + panic(err) +} + +return value +} + +/ Implements types.KVStore. +func (st *Store) + +Has(key []byte) (exists bool) { + defer st.metrics.MeasureSince("store", "iavl", "has") + +has, err := st.tree.Has(key) + if err != nil { + panic(err) +} + +return has +} + +/ Implements types.KVStore. +func (st *Store) + +Delete(key []byte) { + defer st.metrics.MeasureSince("store", "iavl", "delete") + _, _, err := st.tree.Remove(key) + if err != nil { + panic(err) +} +} + +/ DeleteVersionsTo deletes versions upto the given version from the MutableTree. An error +/ is returned if any single version is invalid or the delete fails. All writes +/ happen in a single batch with a single commit. +func (st *Store) + +DeleteVersionsTo(version int64) + +error { + return st.tree.DeleteVersionsTo(version) +} + +/ LoadVersionForOverwriting attempts to load a tree at a previously committed +/ version. Any versions greater than targetVersion will be deleted. +func (st *Store) + +LoadVersionForOverwriting(targetVersion int64) + +error { + return st.tree.LoadVersionForOverwriting(targetVersion) +} + +/ Implements types.KVStore. +func (st *Store) + +Iterator(start, end []byte) + +types.Iterator { + iterator, err := st.tree.Iterator(start, end, true) + if err != nil { + panic(err) +} + +return iterator +} + +/ Implements types.KVStore. 
+func (st *Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + iterator, err := st.tree.Iterator(start, end, false) + if err != nil { + panic(err) +} + +return iterator +} + +/ SetInitialVersion sets the initial version of the IAVL tree. It is used when +/ starting a new chain at an arbitrary height. +func (st *Store) + +SetInitialVersion(version int64) { + st.tree.SetInitialVersion(uint64(version)) +} + +/ Exports the IAVL store at the given version, returning an iavl.Exporter for the tree. +func (st *Store) + +Export(version int64) (*iavl.Exporter, error) { + istore, err := st.GetImmutable(version) + if err != nil { + return nil, errorsmod.Wrapf(err, "iavl export failed for version %v", version) +} + +tree, ok := istore.tree.(*immutableTree) + if !ok || tree == nil { + return nil, fmt.Errorf("iavl export failed: unable to fetch tree for version %v", version) +} + +return tree.Export() +} + +/ Import imports an IAVL tree at the given version, returning an iavl.Importer for importing. 
+func (st *Store) + +Import(version int64) (*iavl.Importer, error) { + tree, ok := st.tree.(*iavl.MutableTree) + if !ok { + return nil, errors.New("iavl import failed: unable to find mutable tree") +} + +return tree.Import(version) +} + +/ Handle gatest the latest height, if height is 0 +func getHeight(tree Tree, req *types.RequestQuery) + +int64 { + height := req.Height + if height == 0 { + latest := tree.Version() + if tree.VersionExists(latest - 1) { + height = latest - 1 +} + +else { + height = latest +} + +} + +return height +} + +/ Query implements ABCI interface, allows queries +/ +/ by default we will return from (latest height -1), +/ as we will have merkle proofs immediately (header height = data height + 1) +/ If latest-1 is not present, use latest (which must be present) +/ if you care to have the latest data to see a tx results, you must +/ explicitly set the height you want to see +func (st *Store) + +Query(req *types.RequestQuery) (res *types.ResponseQuery, err error) { + defer st.metrics.MeasureSince("store", "iavl", "query") + if len(req.Data) == 0 { + return &types.ResponseQuery{ +}, errorsmod.Wrap(types.ErrTxDecode, "query cannot be zero length") +} + tree := st.tree + + / store the height we chose in the response, with 0 being changed to the + / latest height + res = &types.ResponseQuery{ + Height: getHeight(tree, req), +} + switch req.Path { + case "/key": / get by key + key := req.Data / data holds the key bytes + + res.Key = key + if !st.VersionExists(res.Height) { + res.Log = iavl.ErrVersionDoesNotExist.Error() + +break +} + +value, err := tree.GetVersioned(key, res.Height) + if err != nil { + panic(err) +} + +res.Value = value + if !req.Prove { + break +} + + / Continue to prove existence/absence of value + / Must convert store.Tree to iavl.MutableTree with given version to use in CreateProof + iTree, err := tree.GetImmutable(res.Height) + if err != nil { + / sanity check: If value for given version was retrieved, immutable tree must also 
be retrievable + panic(fmt.Sprintf("version exists in store but could not retrieve corresponding versioned tree in store, %s", err.Error())) +} + mtree := &iavl.MutableTree{ + ImmutableTree: iTree, +} + + / get proof from tree and convert to merkle.Proof before adding to result + res.ProofOps = getProofFromTree(mtree, req.Data, res.Value != nil) + case "/subspace": + pairs := kv.Pairs{ + Pairs: make([]kv.Pair, 0), +} + subspace := req.Data + res.Key = subspace + iterator := types.KVStorePrefixIterator(st, subspace) + for ; iterator.Valid(); iterator.Next() { + pairs.Pairs = append(pairs.Pairs, kv.Pair{ + Key: iterator.Key(), + Value: iterator.Value() +}) +} + if err := iterator.Close(); err != nil { + panic(fmt.Errorf("failed to close iterator: %w", err)) +} + +bz, err := pairs.Marshal() + if err != nil { + panic(fmt.Errorf("failed to marshal KV pairs: %w", err)) +} + +res.Value = bz + + default: + return &types.ResponseQuery{ +}, errorsmod.Wrapf(types.ErrUnknownRequest, "unexpected query path: %v", req.Path) +} + +return res, err +} + +/ TraverseStateChanges traverses the state changes between two versions and calls the given function. +func (st *Store) + +TraverseStateChanges(startVersion, endVersion int64, fn func(version int64, changeSet *iavl.ChangeSet) + +error) + +error { + return st.tree.TraverseStateChanges(startVersion, endVersion, fn) +} + +/ Takes a MutableTree, a key, and a flag for creating existence or absence proof and returns the +/ appropriate merkle.Proof. 
Since this must be called after querying for the value, this function should never error.
/ Thus, it will panic on error rather than returning it.
func getProofFromTree(tree *iavl.MutableTree, key []byte, exists bool) *cmtprotocrypto.ProofOps {
	var (
		commitmentProof *ics23.CommitmentProof
		err             error
	)
	if exists {
		/ value was found
		commitmentProof, err = tree.GetMembershipProof(key)
		if err != nil {
			/ sanity check: If value was found, membership proof must be creatable
			panic(fmt.Sprintf("unexpected value for empty proof: %s", err.Error()))
		}
	} else {
		/ value wasn't found
		commitmentProof, err = tree.GetNonMembershipProof(key)
		if err != nil {
			/ sanity check: If value wasn't found, nonmembership proof must be creatable
			panic(fmt.Sprintf("unexpected error for nonexistence proof: %s", err.Error()))
		}
	}

	op := types.NewIavlCommitmentOp(key, commitmentProof)

	return &cmtprotocrypto.ProofOps{
		Ops: []cmtprotocrypto.ProofOp{op.ProofOp()},
	}
}
```

`iavl` stores are based around an [IAVL Tree](https://github.com/cosmos/iavl), a self-balancing binary tree which guarantees that:

- `Get` and `Set` operations are O(log n), where n is the number of elements in the tree.
- Iteration efficiently returns the sorted elements within the range.
- Each tree version is immutable and can be retrieved even after a commit (depending on the pruning settings).

The documentation on the IAVL Tree is located [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md).

### `DbAdapter` Store

`dbadapter.Store` is an adapter for `dbm.DB` that makes it fulfill the `KVStore` interface.

```go expandable
package dbadapter

import (
	"io"

	dbm "github.com/cosmos/cosmos-db"

	"cosmossdk.io/store/cachekv"
	"cosmossdk.io/store/tracekv"
	"cosmossdk.io/store/types"
)

/ Store is a wrapper type for dbm.DB with an implementation of KVStore.
type Store struct {
	dbm.DB
}

/ Get wraps the underlying DB's Get method, panicking on error.
func (dsa Store) Get(key []byte) []byte {
	v, err := dsa.DB.Get(key)
	if err != nil {
		panic(err)
	}
	return v
}

/ Has wraps the underlying DB's Has method, panicking on error.
func (dsa Store) Has(key []byte) bool {
	ok, err := dsa.DB.Has(key)
	if err != nil {
		panic(err)
	}
	return ok
}

/ Set wraps the underlying DB's Set method, panicking on error.
func (dsa Store) Set(key, value []byte) {
	types.AssertValidKey(key)
	types.AssertValidValue(value)
	if err := dsa.DB.Set(key, value); err != nil {
		panic(err)
	}
}

/ Delete wraps the underlying DB's Delete method, panicking on error.
func (dsa Store) Delete(key []byte) {
	if err := dsa.DB.Delete(key); err != nil {
		panic(err)
	}
}

/ Iterator wraps the underlying DB's Iterator method, panicking on error.
func (dsa Store) Iterator(start, end []byte) types.Iterator {
	iter, err := dsa.DB.Iterator(start, end)
	if err != nil {
		panic(err)
	}
	return iter
}

/ ReverseIterator wraps the underlying DB's ReverseIterator method, panicking on error.
func (dsa Store) ReverseIterator(start, end []byte) types.Iterator {
	iter, err := dsa.DB.ReverseIterator(start, end)
	if err != nil {
		panic(err)
	}
	return iter
}

/ GetStoreType returns the type of the store.
func (Store) GetStoreType() types.StoreType {
	return types.StoreTypeDB
}

/ CacheWrap branches the underlying store.
func (dsa Store) CacheWrap() types.CacheWrap {
	return cachekv.NewStore(dsa)
}

/ CacheWrapWithTrace implements KVStore.
func (dsa Store) CacheWrapWithTrace(w io.Writer, tc types.TraceContext) types.CacheWrap {
	return cachekv.NewStore(tracekv.NewStore(dsa, w, tc))
}

/ dbm.DB implements KVStore, so we can CacheKVStore it.
var _ types.KVStore = Store{}
```

`dbadapter.Store` embeds `dbm.DB`, meaning most of the `KVStore` interface functions are implemented for free. The remaining (mostly miscellaneous) functions are implemented manually.
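The embedding trick used by `dbadapter.Store` can be sketched in isolation. In this standalone example (the `DB` interface and `memDB` type are hypothetical stand-ins, not part of `cosmos-db`), the embedded type's methods are promoted onto the adapter, and only the methods whose signatures must change are shadowed, converting errors into panics:

```go
package main

import "fmt"

// DB is a hypothetical backend whose methods return errors,
// mirroring the shape of dbm.DB.
type DB interface {
	Get(key string) (string, error)
}

// memDB is a trivial in-memory implementation of DB.
type memDB struct{ data map[string]string }

func (m memDB) Get(key string) (string, error) {
	v, ok := m.data[key]
	if !ok {
		return "", fmt.Errorf("key %q not found", key)
	}
	return v, nil
}

// Adapter embeds DB: every DB method is promoted onto Adapter.
// Get is shadowed with a panic-on-error variant, the same pattern
// dbadapter.Store uses to satisfy the KVStore interface.
type Adapter struct {
	DB
}

func (a Adapter) Get(key string) string {
	v, err := a.DB.Get(key) // call the embedded implementation
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	a := Adapter{DB: memDB{data: map[string]string{"denom": "atom"}}}
	fmt.Println(a.Get("denom")) // prints "atom"
}
```

Because `Adapter` embeds the interface value rather than a concrete type, any `DB` implementation can be wrapped without further code.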
This store is primarily used within [Transient Stores](#transient-store).

### `Transient` Store

`Transient.Store` is a base-layer `KVStore` which is automatically discarded at the end of the block.

```go expandable
package transient

import (
	dbm "github.com/cosmos/cosmos-db"

	"cosmossdk.io/store/dbadapter"
	pruningtypes "cosmossdk.io/store/pruning/types"
	"cosmossdk.io/store/types"
)

var (
	_ types.Committer = (*Store)(nil)
	_ types.KVStore   = (*Store)(nil)
)

/ Store is a wrapper for a MemDB with a Committer implementation.
type Store struct {
	dbadapter.Store
}

/ NewStore constructs a new MemDB adapter.
func NewStore() *Store {
	return &Store{Store: dbadapter.Store{DB: dbm.NewMemDB()}}
}

/ Commit implements CommitStore; it cleans up the Store.
func (ts *Store) Commit() (id types.CommitID) {
	ts.Store = dbadapter.Store{DB: dbm.NewMemDB()}
	return
}

/ SetPruning is a no-op as pruning options cannot be directly set on this store.
func (ts *Store) SetPruning(_ pruningtypes.PruningOptions) {}

/ GetPruning is a no-op as pruning options cannot be directly set on this store.
/ They must be set on the root commit multi-store.
func (ts *Store) GetPruning() pruningtypes.PruningOptions {
	return pruningtypes.NewPruningOptions(pruningtypes.PruningUndefined)
}

/ LastCommitID implements CommitStore.
func (ts *Store) LastCommitID() types.CommitID {
	return types.CommitID{}
}

/ WorkingHash implements CommitStore.
func (ts *Store) WorkingHash() []byte {
	return []byte{}
}

/ GetStoreType implements Store.
func (ts *Store) GetStoreType() types.StoreType {
	return types.StoreTypeTransient
}
```

`Transient.Store` is a `dbadapter.Store` backed by a `dbm.NewMemDB()`. All `KVStore` methods are reused. When `Store.Commit()` is called, a fresh `dbadapter.Store` is assigned, discarding the previous reference so it can be garbage collected.

This type of store is useful for persisting information that is only relevant per block. One example is storing parameter changes (i.e. a bool set to `true` if a parameter changed in a block).
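The discard-on-commit lifecycle described above can be modeled with a minimal sketch. This is not the real `transient.Store` (the `transientStore` type and its map backing are stand-ins for the MemDB), but it shows the same semantics: values written during a block are visible until `Commit`, which swaps in a fresh backing store:

```go
package main

import "fmt"

// transientStore is a toy stand-in for transient.Store: a fresh
// map plays the role of dbm.NewMemDB().
type transientStore struct {
	db map[string][]byte
}

func newTransientStore() *transientStore {
	return &transientStore{db: make(map[string][]byte)}
}

// Set records a key/value pair for the current block.
func (ts *transientStore) Set(key string, value []byte) {
	ts.db[key] = value
}

// Has reports whether a key was set during the current block.
func (ts *transientStore) Has(key string) bool {
	_, ok := ts.db[key]
	return ok
}

// Commit mirrors transient.Store.Commit: the old backing store is
// dropped and replaced with an empty one, so nothing survives the block.
func (ts *transientStore) Commit() {
	ts.db = make(map[string][]byte)
}

func main() {
	ts := newTransientStore()

	// During the block: record that a parameter changed.
	ts.Set("params/changed", []byte{1})
	fmt.Println(ts.Has("params/changed")) // true

	// End of block: Commit discards everything.
	ts.Commit()
	fmt.Println(ts.Has("params/changed")) // false
}
```

This is exactly the pattern `x/params` relies on below: the transient flag answers "did this parameter change in this block?" and resets itself automatically at every commit.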
+ +```go expandable +package types + +import ( + + "fmt" + "maps" + "reflect" + "cosmossdk.io/store/prefix" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +const ( + / StoreKey is the string store key for the param store + StoreKey = "params" + + / TStoreKey is the string store key for the param transient store + TStoreKey = "transient_params" +) + +/ Individual parameter store for each keeper +/ Transient store persists for a block, so we use it for +/ recording whether the parameter has been changed or not +type Subspace struct { + cdc codec.BinaryCodec + legacyAmino *codec.LegacyAmino + key storetypes.StoreKey / []byte -> []byte, stores parameter + tkey storetypes.StoreKey / []byte -> bool, stores parameter change + name []byte + table KeyTable +} + +/ NewSubspace constructs a store with namestore +func NewSubspace(cdc codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey, name string) + +Subspace { + return Subspace{ + cdc: cdc, + legacyAmino: legacyAmino, + key: key, + tkey: tkey, + name: []byte(name), + table: NewKeyTable(), +} +} + +/ HasKeyTable returns if the Subspace has a KeyTable registered. 
+func (s Subspace) + +HasKeyTable() + +bool { + return len(s.table.m) > 0 +} + +/ WithKeyTable initializes KeyTable and returns modified Subspace +func (s Subspace) + +WithKeyTable(table KeyTable) + +Subspace { + if table.m == nil { + panic("WithKeyTable() + +called with nil KeyTable") +} + if len(s.table.m) != 0 { + panic("WithKeyTable() + +called on already initialized Subspace") +} + +maps.Copy(s.table.m, table.m) + + / Allocate additional capacity for Subspace.name + / So we don't have to allocate extra space each time appending to the key + name := s.name + s.name = make([]byte, len(name), len(name)+table.maxKeyLength()) + +copy(s.name, name) + +return s +} + +/ Returns a KVStore identical with ctx.KVStore(s.key).Prefix() + +func (s Subspace) + +kvStore(ctx sdk.Context) + +storetypes.KVStore { + / append here is safe, appends within a function won't cause + / weird side effects when its singlethreaded + return prefix.NewStore(ctx.KVStore(s.key), append(s.name, '/')) +} + +/ Returns a transient store for modification +func (s Subspace) + +transientStore(ctx sdk.Context) + +storetypes.KVStore { + / append here is safe, appends within a function won't cause + / weird side effects when its singlethreaded + return prefix.NewStore(ctx.TransientStore(s.tkey), append(s.name, '/')) +} + +/ Validate attempts to validate a parameter value by its key. If the key is not +/ registered or if the validation of the value fails, an error is returned. +func (s Subspace) + +Validate(ctx sdk.Context, key []byte, value any) + +error { + attr, ok := s.table.m[string(key)] + if !ok { + return fmt.Errorf("parameter %s not registered", key) +} + if err := attr.vfn(value); err != nil { + return fmt.Errorf("invalid parameter value: %w", err) +} + +return nil +} + +/ Get queries for a parameter by key from the Subspace's KVStore and sets the +/ value to the provided pointer. If the value does not exist, it will panic. 
+func (s Subspace) + +Get(ctx sdk.Context, key []byte, ptr any) { + s.checkType(key, ptr) + store := s.kvStore(ctx) + bz := store.Get(key) + if err := s.legacyAmino.UnmarshalJSON(bz, ptr); err != nil { + panic(err) +} +} + +/ GetIfExists queries for a parameter by key from the Subspace's KVStore and +/ sets the value to the provided pointer. If the value does not exist, it will +/ perform a no-op. +func (s Subspace) + +GetIfExists(ctx sdk.Context, key []byte, ptr any) { + store := s.kvStore(ctx) + bz := store.Get(key) + if bz == nil { + return +} + +s.checkType(key, ptr) + if err := s.legacyAmino.UnmarshalJSON(bz, ptr); err != nil { + panic(err) +} +} + +/ IterateKeys iterates over all the keys in the subspace and executes the +/ provided callback. If the callback returns true for a given key, iteration +/ will halt. +func (s Subspace) + +IterateKeys(ctx sdk.Context, cb func(key []byte) + +bool) { + store := s.kvStore(ctx) + iter := storetypes.KVStorePrefixIterator(store, nil) + +defer iter.Close() + for ; iter.Valid(); iter.Next() { + if cb(iter.Key()) { + break +} + +} +} + +/ GetRaw queries for the raw values bytes for a parameter by key. +func (s Subspace) + +GetRaw(ctx sdk.Context, key []byte) []byte { + store := s.kvStore(ctx) + +return store.Get(key) +} + +/ Has returns if a parameter key exists or not in the Subspace's KVStore. +func (s Subspace) + +Has(ctx sdk.Context, key []byte) + +bool { + store := s.kvStore(ctx) + +return store.Has(key) +} + +/ Modified returns true if the parameter key is set in the Subspace's transient +/ KVStore. +func (s Subspace) + +Modified(ctx sdk.Context, key []byte) + +bool { + tstore := s.transientStore(ctx) + +return tstore.Has(key) +} + +/ checkType verifies that the provided key and value are comptable and registered. 
+func (s Subspace) + +checkType(key []byte, value any) { + attr, ok := s.table.m[string(key)] + if !ok { + panic(fmt.Sprintf("parameter %s not registered", key)) +} + ty := attr.ty + pty := reflect.TypeOf(value) + if pty.Kind() == reflect.Ptr { + pty = pty.Elem() +} + if pty != ty { + panic("type mismatch with registered table") +} +} + +/ Set stores a value for given a parameter key assuming the parameter type has +/ been registered. It will panic if the parameter type has not been registered +/ or if the value cannot be encoded. A change record is also set in the Subspace's +/ transient KVStore to mark the parameter as modified. +func (s Subspace) + +Set(ctx sdk.Context, key []byte, value any) { + s.checkType(key, value) + store := s.kvStore(ctx) + +bz, err := s.legacyAmino.MarshalJSON(value) + if err != nil { + panic(err) +} + +store.Set(key, bz) + tstore := s.transientStore(ctx) + +tstore.Set(key, []byte{ +}) +} + +/ Update stores an updated raw value for a given parameter key assuming the +/ parameter type has been registered. It will panic if the parameter type has +/ not been registered or if the value cannot be encoded. An error is returned +/ if the raw value is not compatible with the registered type for the parameter +/ key or if the new value is invalid as determined by the registered type's +/ validation function. +func (s Subspace) + +Update(ctx sdk.Context, key, value []byte) + +error { + attr, ok := s.table.m[string(key)] + if !ok { + panic(fmt.Sprintf("parameter %s not registered", key)) +} + ty := attr.ty + dest := reflect.New(ty).Interface() + +s.GetIfExists(ctx, key, dest) + if err := s.legacyAmino.UnmarshalJSON(value, dest); err != nil { + return err +} + + / destValue contains the dereferenced value of dest so validation function do + / not have to operate on pointers. 
+ destValue := reflect.Indirect(reflect.ValueOf(dest)).Interface() + if err := s.Validate(ctx, key, destValue); err != nil { + return err +} + +s.Set(ctx, key, dest) + +return nil +} + +/ GetParamSet iterates through each ParamSetPair where for each pair, it will +/ retrieve the value and set it to the corresponding value pointer provided +/ in the ParamSetPair by calling Subspace#Get. +func (s Subspace) + +GetParamSet(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + s.Get(ctx, pair.Key, pair.Value) +} +} + +/ GetParamSetIfExists iterates through each ParamSetPair where for each pair, it will +/ retrieve the value and set it to the corresponding value pointer provided +/ in the ParamSetPair by calling Subspace#GetIfExists. +func (s Subspace) + +GetParamSetIfExists(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + s.GetIfExists(ctx, pair.Key, pair.Value) +} +} + +/ SetParamSet iterates through each ParamSetPair and sets the value with the +/ corresponding parameter key in the Subspace's KVStore. +func (s Subspace) + +SetParamSet(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + / pair.Field is a pointer to the field, so indirecting the ptr. + / go-amino automatically handles it but just for sure, + / since SetStruct is meant to be used in InitGenesis + / so this method will not be called frequently + v := reflect.Indirect(reflect.ValueOf(pair.Value)).Interface() + if err := pair.ValidatorFn(v); err != nil { + panic(fmt.Sprintf("value from ParamSetPair is invalid: %s", err)) +} + +s.Set(ctx, pair.Key, v) +} +} + +/ Name returns the name of the Subspace. +func (s Subspace) + +Name() + +string { + return string(s.name) +} + +/ Wrapper of Subspace, provides immutable functions only +type ReadOnlySubspace struct { + s Subspace +} + +/ Get delegates a read-only Get call to the Subspace. 
+func (ros ReadOnlySubspace) + +Get(ctx sdk.Context, key []byte, ptr any) { + ros.s.Get(ctx, key, ptr) +} + +/ GetRaw delegates a read-only GetRaw call to the Subspace. +func (ros ReadOnlySubspace) + +GetRaw(ctx sdk.Context, key []byte) []byte { + return ros.s.GetRaw(ctx, key) +} + +/ Has delegates a read-only Has call to the Subspace. +func (ros ReadOnlySubspace) + +Has(ctx sdk.Context, key []byte) + +bool { + return ros.s.Has(ctx, key) +} + +/ Modified delegates a read-only Modified call to the Subspace. +func (ros ReadOnlySubspace) + +Modified(ctx sdk.Context, key []byte) + +bool { + return ros.s.Modified(ctx, key) +} + +/ Name delegates a read-only Name call to the Subspace. +func (ros ReadOnlySubspace) + +Name() + +string { + return ros.s.Name() +} +``` + +Transient stores are typically accessed via the [`context`](/docs/sdk/next/documentation/application-framework/context) via the `TransientStore()` method: + +```go expandable +package types + +import ( + + "context" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/header" + "cosmossdk.io/log" + "cosmossdk.io/store/gaskv" + storetypes "cosmossdk.io/store/types" +) + +/ ExecMode defines the execution mode which can be set on a Context. +type ExecMode uint8 + +/ All possible execution modes. +const ( + ExecModeCheck ExecMode = iota + ExecModeReCheck + ExecModeSimulate + ExecModePrepareProposal + ExecModeProcessProposal + ExecModeVoteExtension + ExecModeVerifyVoteExtension + ExecModeFinalize +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. 
We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + sigverifyTx bool / when run simulation, because the private key corresponding to the account in the genesis.json randomly generated, we must skip the sigverify. + execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + 
+BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + +IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +IsSigverifyTx() + +bool { + return c.sigverifyTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ BlockHeader returns the header by value. 
+func (c Context) + +BlockHeader() + +cmtproto.Header { + return c.header +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + sigverifyTx: true, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. +func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. 
+func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) + +c.headerHash = temp + return c +} + +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. +func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. +func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. +func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. 
+func (c Context) + +WithGasMeter(meter storetypes.GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter storetypes.GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. +func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} + +/ WithIsSigverifyTx called with true will sigverify in auth module +func (c Context) + +WithIsSigverifyTx(isSigverifyTx bool) + +Context { + c.sigverifyTx = isSigverifyTx + return c +} + +/ WithExecMode returns a Context with an updated ExecMode. 
+func (c Context) + +WithExecMode(m ExecMode) + +Context { + c.execMode = m + return c +} + +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) + +WithMinGasPrices(gasPrices DecCoins) + +Context { + c.minGasPrice = gasPrices + return c +} + +/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) + +WithConsensusParams(params cmtproto.ConsensusParams) + +Context { + c.consParams = params + return c +} + +/ WithEventManager returns a Context with an updated event manager +func (c Context) + +WithEventManager(em EventManagerI) + +Context { + c.eventManager = em + return c +} + +/ WithPriority returns a Context with an updated tx priority +func (c Context) + +WithPriority(p int64) + +Context { + c.priority = p + return c +} + +/ WithStreamingManager returns a Context with an updated streaming manager +func (c Context) + +WithStreamingManager(sm storetypes.StreamingManager) + +Context { + c.streamingManager = sm + return c +} + +/ WithCometInfo returns a Context with an updated comet info +func (c Context) + +WithCometInfo(cometInfo comet.BlockInfo) + +Context { + c.cometInfo = cometInfo + return c +} + +/ WithHeaderInfo returns a Context with an updated header info +func (c Context) + +WithHeaderInfo(headerInfo header.Info) + +Context { + / Settime to UTC + headerInfo.Time = headerInfo.Time.UTC() + +c.headerInfo = headerInfo + return c +} + +/ TODO: remove??? 
+func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value any) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key any) + +any { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. 
It is useful for passing an sdk.Context through methods that take a +/ stdlib context.Context parameter such as generated gRPC methods. To get the original +/ sdk.Context back, call UnwrapSDKContext. +/ +/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context. +func WrapSDKContext(ctx Context) + +context.Context { + return ctx +} + +/ UnwrapSDKContext retrieves a Context from a context.Context instance +/ attached with WrapSDKContext. It panics if a Context was not properly +/ attached +func UnwrapSDKContext(ctx context.Context) + +Context { + if sdkCtx, ok := ctx.(Context); ok { + return sdkCtx +} + +return ctx.Value(SdkContextKey).(Context) +} +``` + +## KVStore Wrappers + +### CacheKVStore + +`cachekv.Store` is a wrapper `KVStore` which provides buffered writing / cached reading functionalities over the underlying `KVStore`. + +```go expandable +package cachekv + +import ( + + "bytes" + "io" + "sort" + "sync" + + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/math" + "cosmossdk.io/store/cachekv/internal" + "cosmossdk.io/store/internal/conv" + "cosmossdk.io/store/internal/kv" + "cosmossdk.io/store/tracekv" + "cosmossdk.io/store/types" +) + +/ cValue represents a cached value. +/ If dirty is true, it indicates the cached value is different from the underlying value. +type cValue struct { + value []byte + dirty bool +} + +/ Store wraps an in-memory cache around an underlying types.KVStore. +type Store struct { + mtx sync.Mutex + cache map[string]*cValue + unsortedCache map[string]struct{ +} + +sortedCache internal.BTree / always ascending sorted + parent types.KVStore +} + +var _ types.CacheKVStore = (*Store)(nil) + +/ NewStore creates a new Store object +func NewStore(parent types.KVStore) *Store { + return &Store{ + cache: make(map[string]*cValue), + unsortedCache: make(map[string]struct{ +}), + sortedCache: internal.NewBTree(), + parent: parent, +} +} + +/ GetStoreType implements Store. 
+func (store *Store) + +GetStoreType() + +types.StoreType { + return store.parent.GetStoreType() +} + +/ Get implements types.KVStore. +func (store *Store) + +Get(key []byte) (value []byte) { + store.mtx.Lock() + +defer store.mtx.Unlock() + +types.AssertValidKey(key) + +cacheValue, ok := store.cache[conv.UnsafeBytesToStr(key)] + if !ok { + value = store.parent.Get(key) + +store.setCacheValue(key, value, false) +} + +else { + value = cacheValue.value +} + +return value +} + +/ Set implements types.KVStore. +func (store *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + +store.mtx.Lock() + +defer store.mtx.Unlock() + +store.setCacheValue(key, value, true) +} + +/ Has implements types.KVStore. +func (store *Store) + +Has(key []byte) + +bool { + value := store.Get(key) + +return value != nil +} + +/ Delete implements types.KVStore. +func (store *Store) + +Delete(key []byte) { + types.AssertValidKey(key) + +store.mtx.Lock() + +defer store.mtx.Unlock() + +store.setCacheValue(key, nil, true) +} + +func (store *Store) + +resetCaches() { + if len(store.cache) > 100_000 { + / Cache is too large. We likely did something linear time + / (e.g. Epoch block, Genesis block, etc). Free the old caches from memory, and let them get re-allocated. + / TODO: In a future CacheKV redesign, such linear workloads should get into a different cache instantiation. + / 100_000 is arbitrarily chosen as it solved Osmosis' InitGenesis RAM problem. + store.cache = make(map[string]*cValue) + +store.unsortedCache = make(map[string]struct{ +}) +} + +else { + / Clear the cache using the map clearing idiom + / and not allocating fresh objects. + / Please see https://bencher.orijtech.com/perfclinic/mapclearing/ + for key := range store.cache { + delete(store.cache, key) +} + for key := range store.unsortedCache { + delete(store.unsortedCache, key) +} + +} + +store.sortedCache = internal.NewBTree() +} + +/ Implements Cachetypes.KVStore. 
+func (store *Store) + +Write() { + store.mtx.Lock() + +defer store.mtx.Unlock() + if len(store.cache) == 0 && len(store.unsortedCache) == 0 { + store.sortedCache = internal.NewBTree() + +return +} + +type cEntry struct { + key string + val *cValue +} + + / We need a copy of all of the keys. + / Not the best. To reduce RAM pressure, we copy the values as well + / and clear out the old caches right after the copy. + sortedCache := make([]cEntry, 0, len(store.cache)) + for key, dbValue := range store.cache { + if dbValue.dirty { + sortedCache = append(sortedCache, cEntry{ + key, dbValue +}) +} + +} + +store.resetCaches() + +sort.Slice(sortedCache, func(i, j int) + +bool { + return sortedCache[i].key < sortedCache[j].key +}) + + / TODO: Consider allowing usage of Batch, which would allow the write to + / at least happen atomically. + for _, obj := range sortedCache { + / We use []byte(key) + +instead of conv.UnsafeStrToBytes because we cannot + / be sure if the underlying store might do a save with the byteslice or + / not. Once we get confirmation that .Delete is guaranteed not to + / save the byteslice, then we can assume only a read-only copy is sufficient. + if obj.val.value != nil { + / It already exists in the parent, hence update it. + store.parent.Set([]byte(obj.key), obj.val.value) +} + +else { + store.parent.Delete([]byte(obj.key)) +} + +} +} + +/ CacheWrap implements CacheWrapper. +func (store *Store) + +CacheWrap() + +types.CacheWrap { + return NewStore(store) +} + +/ CacheWrapWithTrace implements the CacheWrapper interface. +func (store *Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return NewStore(tracekv.NewStore(store, w, tc)) +} + +/---------------------------------------- +/ Iteration + +/ Iterator implements types.KVStore. +func (store *Store) + +Iterator(start, end []byte) + +types.Iterator { + return store.iterator(start, end, true) +} + +/ ReverseIterator implements types.KVStore. 
+func (store *Store)
+
+ReverseIterator(start, end []byte)
+
+types.Iterator {
+	return store.iterator(start, end, false)
+}
+
+func (store *Store)
+
+iterator(start, end []byte, ascending bool)
+
+types.Iterator {
+	store.mtx.Lock()
+
+defer store.mtx.Unlock()
+
+store.dirtyItems(start, end)
+	isoSortedCache := store.sortedCache.Copy()
+
+var (
+		err error
+		parent, cache types.Iterator
+	)
+	if ascending {
+		parent = store.parent.Iterator(start, end)
+
+cache, err = isoSortedCache.Iterator(start, end)
+}
+
+else {
+		parent = store.parent.ReverseIterator(start, end)
+
+cache, err = isoSortedCache.ReverseIterator(start, end)
+}
+	if err != nil {
+		panic(err)
+}
+
+return internal.NewCacheMergeIterator(parent, cache, ascending)
+}
+
+func findStartIndex(strL []string, startQ string)
+
+int {
+	/ Modified binary search to find the very first element in >=startQ.
+	if len(strL) == 0 {
+		return -1
+}
+
+var left, right, mid int
+	right = len(strL) - 1
+	for left <= right {
+		mid = (left + right) >> 1
+		midStr := strL[mid]
+		if midStr == startQ {
+			/ Handle condition where there might be multiple values equal to startQ.
+			/ We are looking for the very first value < midStL, that i+1 will be the first
+			/ element >= midStr.
+			for i := mid - 1; i >= 0; i-- {
+				if strL[i] != midStr {
+					return i + 1
+}
+
+}
+
+return 0
+}
+		if midStr < startQ {
+			left = mid + 1
+}
+
+else { / midStrL > startQ
+			right = mid - 1
+}
+
+}
+	if left >= 0 && left < len(strL) && strL[left] >= startQ {
+		return left
+}
+
+return -1
+}
+
+func findEndIndex(strL []string, endQ string)
+
+int {
+	if len(strL) == 0 {
+		return -1
+}
+
+	/ Modified binary search to find the very last element <= endQ.
+	var left, right, mid int
+	right = len(strL) - 1
+	for left <= right {
+		mid = (left + right) >> 1
+		midStr := strL[mid]
+		if midStr == endQ {
+			/ Handle condition where there might be multiple values equal to startQ.
+			/ We are looking for the very first value < midStL, that i+1 will be the first
+			/ element >= midStr.
+ for i := mid - 1; i >= 0; i-- { + if strL[i] < midStr { + return i + 1 +} + +} + +return 0 +} + if midStr < endQ { + left = mid + 1 +} + +else { / midStrL > startQ + right = mid - 1 +} + +} + + / Binary search failed, now let's find a value less than endQ. + for i := right; i >= 0; i-- { + if strL[i] < endQ { + return i +} + +} + +return -1 +} + +type sortState int + +const ( + stateUnsorted sortState = iota + stateAlreadySorted +) + +const minSortSize = 1024 + +/ Constructs a slice of dirty items, to use w/ memIterator. +func (store *Store) + +dirtyItems(start, end []byte) { + startStr, endStr := conv.UnsafeBytesToStr(start), conv.UnsafeBytesToStr(end) + if end != nil && startStr > endStr { + / Nothing to do here. + return +} + n := len(store.unsortedCache) + unsorted := make([]*kv.Pair, 0) + / If the unsortedCache is too big, its costs too much to determine + / whats in the subset we are concerned about. + / If you are interleaving iterator calls with writes, this can easily become an + / O(N^2) + +overhead. + / Even without that, too many range checks eventually becomes more expensive + / than just not having the cache. + if n < minSortSize { + for key := range store.unsortedCache { + / dbm.IsKeyInDomain is nil safe and returns true iff key is greater than start + if dbm.IsKeyInDomain(conv.UnsafeStrToBytes(key), start, end) { + cacheValue := store.cache[key] + unsorted = append(unsorted, &kv.Pair{ + Key: []byte(key), + Value: cacheValue.value +}) +} + +} + +store.clearUnsortedCacheSubset(unsorted, stateUnsorted) + +return +} + + / Otherwise it is large so perform a modified binary search to find + / the target ranges for the keys that we should be looking for. 
+ strL := make([]string, 0, n) + for key := range store.unsortedCache { + strL = append(strL, key) +} + +sort.Strings(strL) + + / Now find the values within the domain + / [start, end) + startIndex := findStartIndex(strL, startStr) + if startIndex < 0 { + startIndex = 0 +} + +var endIndex int + if end == nil { + endIndex = len(strL) - 1 +} + +else { + endIndex = findEndIndex(strL, endStr) +} + if endIndex < 0 { + endIndex = len(strL) - 1 +} + + / Since we spent cycles to sort the values, we should process and remove a reasonable amount + / ensure start to end is at least minSortSize in size + / if below minSortSize, expand it to cover additional values + / this amortizes the cost of processing elements across multiple calls + if endIndex-startIndex < minSortSize { + endIndex = math.Min(startIndex+minSortSize, len(strL)-1) + if endIndex-startIndex < minSortSize { + startIndex = math.Max(endIndex-minSortSize, 0) +} + +} + kvL := make([]*kv.Pair, 0, 1+endIndex-startIndex) + for i := startIndex; i <= endIndex; i++ { + key := strL[i] + cacheValue := store.cache[key] + kvL = append(kvL, &kv.Pair{ + Key: []byte(key), + Value: cacheValue.value +}) +} + + / kvL was already sorted so pass it in as is. + store.clearUnsortedCacheSubset(kvL, stateAlreadySorted) +} + +func (store *Store) + +clearUnsortedCacheSubset(unsorted []*kv.Pair, sortState sortState) { + n := len(store.unsortedCache) + if len(unsorted) == n { / This pattern allows the Go compiler to emit the map clearing idiom for the entire map. + for key := range store.unsortedCache { + delete(store.unsortedCache, key) +} + +} + +else { / Otherwise, normally delete the unsorted keys from the map. 
+	for _, kv := range unsorted {
+		delete(store.unsortedCache, conv.UnsafeBytesToStr(kv.Key))
+}
+
+}
+	if sortState == stateUnsorted {
+		sort.Slice(unsorted, func(i, j int)
+
+bool {
+			return bytes.Compare(unsorted[i].Key, unsorted[j].Key) < 0
+})
+}
+	for _, item := range unsorted {
+		/ sortedCache is able to store `nil` value to represent deleted items.
+		store.sortedCache.Set(item.Key, item.Value)
+}
+}
+
+/----------------------------------------
+/ etc
+
+/ Only entrypoint to mutate store.cache.
+/ A `nil` value means a deletion.
+func (store *Store)
+
+setCacheValue(key, value []byte, dirty bool) {
+	keyStr := conv.UnsafeBytesToStr(key)
+
+store.cache[keyStr] = &cValue{
+		value: value,
+		dirty: dirty,
+}
+	if dirty {
+		store.unsortedCache[keyStr] = struct{
+}{
+}
+
+}
+}
+```
+
+This is the type used whenever an IAVL Store needs to be branched to create an isolated store (typically when we need to mutate state that might be reverted later).
+
+#### `Get`
+
+`Store.Get()` first checks whether `Store.cache` holds a value for the key. If it does, that value is returned. If not, the function calls `Store.parent.Get()`, caches the result in `Store.cache`, and returns it.
+
+#### `Set`
+
+`Store.Set()` writes the key-value pair to `Store.cache`. `cValue` has a `dirty bool` field indicating whether the cached value differs from the underlying value. When `Store.Set()` caches a new pair, `cValue.dirty` is set to `true`, so that the pair is flushed to the underlying store when `Store.Write()` is called.
+
+#### `Iterator`
+
+`Store.Iterator()` has to traverse both the cached items and the original items. In `Store.iterator()`, an iterator is generated for each set, and the two are merged. `memIterator` is essentially a slice of `KVPair`s, used for the cached items. `mergeIterator` combines the two iterators, traversing both in key order.
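+
+The `Get`/`Set`/`Write` behavior described above can be sketched with plain Go maps. This is an illustrative toy, not the real `cachekv.Store` API — the names here are invented, and the actual implementation additionally maintains sorted/unsorted caches for iteration, locking, and nil-as-deletion semantics:
+
+```go
+package main
+
+import (
+	"fmt"
+	"sort"
+)
+
+// cachedValue mirrors cValue: dirty marks entries that differ from the parent.
+type cachedValue struct {
+	value []byte
+	dirty bool
+}
+
+// kvCache buffers writes over a parent store (a map stands in for the KVStore).
+type kvCache struct {
+	parent map[string][]byte
+	cache  map[string]*cachedValue
+}
+
+func newKVCache(parent map[string][]byte) *kvCache {
+	return &kvCache{parent: parent, cache: map[string]*cachedValue{}}
+}
+
+// Get falls through to the parent on a cache miss and memoizes the result.
+func (s *kvCache) Get(key string) []byte {
+	if cv, ok := s.cache[key]; ok {
+		return cv.value
+	}
+	v := s.parent[key]
+	s.cache[key] = &cachedValue{value: v, dirty: false}
+	return v
+}
+
+// Set only touches the cache; the pair is flushed on Write.
+func (s *kvCache) Set(key string, value []byte) {
+	s.cache[key] = &cachedValue{value: value, dirty: true}
+}
+
+// Delete caches a nil value, which Write interprets as a deletion.
+func (s *kvCache) Delete(key string) {
+	s.cache[key] = &cachedValue{value: nil, dirty: true}
+}
+
+// Write flushes dirty entries to the parent in sorted key order, mirroring
+// how cachekv sorts its dirty set before writing.
+func (s *kvCache) Write() {
+	keys := make([]string, 0, len(s.cache))
+	for k, cv := range s.cache {
+		if cv.dirty {
+			keys = append(keys, k)
+		}
+	}
+	sort.Strings(keys)
+	for _, k := range keys {
+		if cv := s.cache[k]; cv.value != nil {
+			s.parent[k] = cv.value
+		} else {
+			delete(s.parent, k)
+		}
+	}
+}
+
+func main() {
+	parent := map[string][]byte{"a": []byte("1")}
+	c := newKVCache(parent)
+
+	c.Set("b", []byte("2"))
+	c.Delete("a")
+
+	// The parent is untouched until Write() is called.
+	fmt.Println(string(parent["a"]), len(parent)) // 1 1
+	c.Write()
+	fmt.Println(len(parent), string(parent["b"])) // 1 2
+}
+```
+
+Discarding the `kvCache` without calling `Write()` is the branching use case above: all buffered mutations are simply dropped and the parent store is never touched.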
+
+### `GasKv` Store
+
+Cosmos SDK applications use [`gas`](/docs/sdk/next/documentation/protocol-development/gas-fees) to track resource usage and prevent spam. [`GasKv.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/gaskv/store.go) is a `KVStore` wrapper that automatically consumes gas each time the store is read from or written to. It is the solution of choice for tracking storage usage in Cosmos SDK applications.
+
+```go expandable
+package gaskv
+
+import (
+
+	"io"
+	"cosmossdk.io/store/types"
+)
+
+var _ types.KVStore = &Store{
+}
+
+/ Store applies gas tracking to an underlying KVStore. It implements the
+/ KVStore interface.
+type Store struct {
+	gasMeter types.GasMeter
+	gasConfig types.GasConfig
+	parent types.KVStore
+}
+
+/ NewStore returns a reference to a new GasKVStore.
+func NewStore(parent types.KVStore, gasMeter types.GasMeter, gasConfig types.GasConfig) *Store {
+	kvs := &Store{
+		gasMeter: gasMeter,
+		gasConfig: gasConfig,
+		parent: parent,
+}
+
+return kvs
+}
+
+/ Implements Store.
+func (gs *Store)
+
+GetStoreType()
+
+types.StoreType {
+	return gs.parent.GetStoreType()
+}
+
+/ Implements KVStore.
+func (gs *Store)
+
+Get(key []byte) (value []byte) {
+	gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostFlat, types.GasReadCostFlatDesc)
+
+value = gs.parent.Get(key)
+
+	/ TODO overflow-safe math?
+	gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostPerByte*types.Gas(len(key)), types.GasReadPerByteDesc)
+
+gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostPerByte*types.Gas(len(value)), types.GasReadPerByteDesc)
+
+return value
+}
+
+/ Implements KVStore.
+func (gs *Store)
+
+Set(key, value []byte) {
+	types.AssertValidKey(key)
+
+types.AssertValidValue(value)
+
+gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostFlat, types.GasWriteCostFlatDesc)
+	/ TODO overflow-safe math?
+ gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostPerByte*types.Gas(len(key)), types.GasWritePerByteDesc) + +gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostPerByte*types.Gas(len(value)), types.GasWritePerByteDesc) + +gs.parent.Set(key, value) +} + +/ Implements KVStore. +func (gs *Store) + +Has(key []byte) + +bool { + gs.gasMeter.ConsumeGas(gs.gasConfig.HasCost, types.GasHasDesc) + +return gs.parent.Has(key) +} + +/ Implements KVStore. +func (gs *Store) + +Delete(key []byte) { + / charge gas to prevent certain attack vectors even though space is being freed + gs.gasMeter.ConsumeGas(gs.gasConfig.DeleteCost, types.GasDeleteDesc) + +gs.parent.Delete(key) +} + +/ Iterator implements the KVStore interface. It returns an iterator which +/ incurs a flat gas cost for seeking to the first key/value pair and a variable +/ gas cost based on the current value's length if the iterator is valid. +func (gs *Store) + +Iterator(start, end []byte) + +types.Iterator { + return gs.iterator(start, end, true) +} + +/ ReverseIterator implements the KVStore interface. It returns a reverse +/ iterator which incurs a flat gas cost for seeking to the first key/value pair +/ and a variable gas cost based on the current value's length if the iterator +/ is valid. +func (gs *Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + return gs.iterator(start, end, false) +} + +/ Implements KVStore. +func (gs *Store) + +CacheWrap() + +types.CacheWrap { + panic("cannot CacheWrap a GasKVStore") +} + +/ CacheWrapWithTrace implements the KVStore interface. 
+func (gs *Store) + +CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) + +types.CacheWrap { + panic("cannot CacheWrapWithTrace a GasKVStore") +} + +func (gs *Store) + +iterator(start, end []byte, ascending bool) + +types.Iterator { + var parent types.Iterator + if ascending { + parent = gs.parent.Iterator(start, end) +} + +else { + parent = gs.parent.ReverseIterator(start, end) +} + gi := newGasIterator(gs.gasMeter, gs.gasConfig, parent) + +gi.(*gasIterator).consumeSeekGas() + +return gi +} + +type gasIterator struct { + gasMeter types.GasMeter + gasConfig types.GasConfig + parent types.Iterator +} + +func newGasIterator(gasMeter types.GasMeter, gasConfig types.GasConfig, parent types.Iterator) + +types.Iterator { + return &gasIterator{ + gasMeter: gasMeter, + gasConfig: gasConfig, + parent: parent, +} +} + +/ Implements Iterator. +func (gi *gasIterator) + +Domain() (start, end []byte) { + return gi.parent.Domain() +} + +/ Implements Iterator. +func (gi *gasIterator) + +Valid() + +bool { + return gi.parent.Valid() +} + +/ Next implements the Iterator interface. It seeks to the next key/value pair +/ in the iterator. It incurs a flat gas cost for seeking and a variable gas +/ cost based on the current value's length if the iterator is valid. +func (gi *gasIterator) + +Next() { + gi.consumeSeekGas() + +gi.parent.Next() +} + +/ Key implements the Iterator interface. It returns the current key and it does +/ not incur any gas cost. +func (gi *gasIterator) + +Key() (key []byte) { + key = gi.parent.Key() + +return key +} + +/ Value implements the Iterator interface. It returns the current value and it +/ does not incur any gas cost. +func (gi *gasIterator) + +Value() (value []byte) { + value = gi.parent.Value() + +return value +} + +/ Implements Iterator. +func (gi *gasIterator) + +Close() + +error { + return gi.parent.Close() +} + +/ Error delegates the Error call to the parent iterator. 
+func (gi *gasIterator) + +Error() + +error { + return gi.parent.Error() +} + +/ consumeSeekGas consumes on each iteration step a flat gas cost and a variable gas cost +/ based on the current value's length. +func (gi *gasIterator) + +consumeSeekGas() { + if gi.Valid() { + key := gi.Key() + value := gi.Value() + +gi.gasMeter.ConsumeGas(gi.gasConfig.ReadCostPerByte*types.Gas(len(key)), types.GasValuePerByteDesc) + +gi.gasMeter.ConsumeGas(gi.gasConfig.ReadCostPerByte*types.Gas(len(value)), types.GasValuePerByteDesc) +} + +gi.gasMeter.ConsumeGas(gi.gasConfig.IterNextCostFlat, types.GasIterNextCostFlatDesc) +} +``` + +When methods of the parent `KVStore` are called, `GasKv.Store` automatically consumes appropriate amount of gas depending on the `Store.gasConfig`: + +```go expandable +package types + +import ( + + "fmt" + "math" +) + +/ Gas consumption descriptors. +const ( + GasIterNextCostFlatDesc = "IterNextFlat" + GasValuePerByteDesc = "ValuePerByte" + GasWritePerByteDesc = "WritePerByte" + GasReadPerByteDesc = "ReadPerByte" + GasWriteCostFlatDesc = "WriteFlat" + GasReadCostFlatDesc = "ReadFlat" + GasHasDesc = "Has" + GasDeleteDesc = "Delete" +) + +/ Gas measured by the SDK +type Gas = uint64 + +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} + +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. +type ErrorOutOfGas struct { + Descriptor string +} + +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. 
+type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the trasaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. 
+func (g *infiniteGasMeter) + +String() + +string { + return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed) +} + +/ GasConfig defines gas cost for each operation on KVStores +type GasConfig struct { + HasCost Gas + DeleteCost Gas + ReadCostFlat Gas + ReadCostPerByte Gas + WriteCostFlat Gas + WriteCostPerByte Gas + IterNextCostFlat Gas +} + +/ KVGasConfig returns a default gas config for KVStores. +func KVGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 1000, + DeleteCost: 1000, + ReadCostFlat: 1000, + ReadCostPerByte: 3, + WriteCostFlat: 2000, + WriteCostPerByte: 30, + IterNextCostFlat: 30, +} +} + +/ TransientGasConfig returns a default gas config for TransientStores. +func TransientGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 100, + DeleteCost: 100, + ReadCostFlat: 100, + ReadCostPerByte: 0, + WriteCostFlat: 200, + WriteCostPerByte: 3, + IterNextCostFlat: 3, +} +} +``` + +By default, all `KVStores` are wrapped in `GasKv.Stores` when retrieved. This is done in the `KVStore()` method of the [`context`](/docs/sdk/next/documentation/application-framework/context): + +```go expandable +package types + +import ( + + "context" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/header" + "cosmossdk.io/log" + "cosmossdk.io/store/gaskv" + storetypes "cosmossdk.io/store/types" +) + +/ ExecMode defines the execution mode which can be set on a Context. +type ExecMode uint8 + +/ All possible execution modes. +const ( + ExecModeCheck ExecMode = iota + ExecModeReCheck + ExecModeSimulate + ExecModePrepareProposal + ExecModeProcessProposal + ExecModeVoteExtension + ExecModeVerifyVoteExtension + ExecModeFinalize +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. 
We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + sigverifyTx bool / when run simulation, because the private key corresponding to the account in the genesis.json randomly generated, we must skip the sigverify. + execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + 
+BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + +IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +IsSigverifyTx() + +bool { + return c.sigverifyTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ BlockHeader returns the header by value. 
+func (c Context) + +BlockHeader() + +cmtproto.Header { + return c.header +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + sigverifyTx: true, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. +func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. 
+func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) + +c.headerHash = temp + return c +} + +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. +func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. +func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. +func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. 
+func (c Context) + +WithGasMeter(meter storetypes.GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter storetypes.GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. +func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} + +/ WithIsSigverifyTx called with true will sigverify in auth module +func (c Context) + +WithIsSigverifyTx(isSigverifyTx bool) + +Context { + c.sigverifyTx = isSigverifyTx + return c +} + +/ WithExecMode returns a Context with an updated ExecMode. 
+func (c Context) + +WithExecMode(m ExecMode) + +Context { + c.execMode = m + return c +} + +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) + +WithMinGasPrices(gasPrices DecCoins) + +Context { + c.minGasPrice = gasPrices + return c +} + +/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) + +WithConsensusParams(params cmtproto.ConsensusParams) + +Context { + c.consParams = params + return c +} + +/ WithEventManager returns a Context with an updated event manager +func (c Context) + +WithEventManager(em EventManagerI) + +Context { + c.eventManager = em + return c +} + +/ WithPriority returns a Context with an updated tx priority +func (c Context) + +WithPriority(p int64) + +Context { + c.priority = p + return c +} + +/ WithStreamingManager returns a Context with an updated streaming manager +func (c Context) + +WithStreamingManager(sm storetypes.StreamingManager) + +Context { + c.streamingManager = sm + return c +} + +/ WithCometInfo returns a Context with an updated comet info +func (c Context) + +WithCometInfo(cometInfo comet.BlockInfo) + +Context { + c.cometInfo = cometInfo + return c +} + +/ WithHeaderInfo returns a Context with an updated header info +func (c Context) + +WithHeaderInfo(headerInfo header.Info) + +Context { + / Settime to UTC + headerInfo.Time = headerInfo.Time.UTC() + +c.headerInfo = headerInfo + return c +} + +/ TODO: remove??? 
+func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value any) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key any) + +any { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. 
It is useful for passing an sdk.Context through methods that take a +/ stdlib context.Context parameter such as generated gRPC methods. To get the original +/ sdk.Context back, call UnwrapSDKContext. +/ +/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context. +func WrapSDKContext(ctx Context) + +context.Context { + return ctx +} + +/ UnwrapSDKContext retrieves a Context from a context.Context instance +/ attached with WrapSDKContext. It panics if a Context was not properly +/ attached +func UnwrapSDKContext(ctx context.Context) + +Context { + if sdkCtx, ok := ctx.(Context); ok { + return sdkCtx +} + +return ctx.Value(SdkContextKey).(Context) +} +``` + +In this case, the gas configuration set in the `context` is used. The gas configuration can be set using the `WithKVGasConfig` method of the `context`. +Otherwise it uses the following default: + +```go expandable +package types + +import ( + + "fmt" + "math" +) + +/ Gas consumption descriptors. +const ( + GasIterNextCostFlatDesc = "IterNextFlat" + GasValuePerByteDesc = "ValuePerByte" + GasWritePerByteDesc = "WritePerByte" + GasReadPerByteDesc = "ReadPerByte" + GasWriteCostFlatDesc = "WriteFlat" + GasReadCostFlatDesc = "ReadFlat" + GasHasDesc = "Has" + GasDeleteDesc = "Delete" +) + +/ Gas measured by the SDK +type Gas = uint64 + +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} + +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. +type ErrorOutOfGas struct { + Descriptor string +} + +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. 
+type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the trasaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. 
+func (g *infiniteGasMeter) + +String() + +string { + return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed) +} + +/ GasConfig defines gas cost for each operation on KVStores +type GasConfig struct { + HasCost Gas + DeleteCost Gas + ReadCostFlat Gas + ReadCostPerByte Gas + WriteCostFlat Gas + WriteCostPerByte Gas + IterNextCostFlat Gas +} + +/ KVGasConfig returns a default gas config for KVStores. +func KVGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 1000, + DeleteCost: 1000, + ReadCostFlat: 1000, + ReadCostPerByte: 3, + WriteCostFlat: 2000, + WriteCostPerByte: 30, + IterNextCostFlat: 30, +} +} + +/ TransientGasConfig returns a default gas config for TransientStores. +func TransientGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 100, + DeleteCost: 100, + ReadCostFlat: 100, + ReadCostPerByte: 0, + WriteCostFlat: 200, + WriteCostPerByte: 3, + IterNextCostFlat: 3, +} +} +``` + +### `TraceKv` Store + +`tracekv.Store` is a wrapper `KVStore` which provides operation tracing functionalities over the underlying `KVStore`. It is applied automatically by the Cosmos SDK on all `KVStore` if tracing is enabled on the parent `MultiStore`. + +```go expandable +package tracekv + +import ( + + "encoding/base64" + "encoding/json" + "io" + "cosmossdk.io/errors" + "cosmossdk.io/store/types" +) + +const ( + writeOp operation = "write" + readOp operation = "read" + deleteOp operation = "delete" + iterKeyOp operation = "iterKey" + iterValueOp operation = "iterValue" +) + +type ( + / Store implements the KVStore interface with tracing enabled. + / Operations are traced on each core KVStore call and written to the + / underlying io.writer. + / + / TODO: Should we use a buffered writer and implement Commit on + / Store? 
+ Store struct { + parent types.KVStore + writer io.Writer + context types.TraceContext +} + + / operation represents an IO operation + operation string + + / traceOperation implements a traced KVStore operation + traceOperation struct { + Operation operation `json:"operation"` + Key string `json:"key"` + Value string `json:"value"` + Metadata map[string]interface{ +} `json:"metadata"` +} +) + +/ NewStore returns a reference to a new traceKVStore given a parent +/ KVStore implementation and a buffered writer. +func NewStore(parent types.KVStore, writer io.Writer, tc types.TraceContext) *Store { + return &Store{ + parent: parent, writer: writer, context: tc +} +} + +/ Get implements the KVStore interface. It traces a read operation and +/ delegates a Get call to the parent KVStore. +func (tkv *Store) + +Get(key []byte) []byte { + value := tkv.parent.Get(key) + +writeOperation(tkv.writer, readOp, tkv.context, key, value) + +return value +} + +/ Set implements the KVStore interface. It traces a write operation and +/ delegates the Set call to the parent KVStore. +func (tkv *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +writeOperation(tkv.writer, writeOp, tkv.context, key, value) + +tkv.parent.Set(key, value) +} + +/ Delete implements the KVStore interface. It traces a write operation and +/ delegates the Delete call to the parent KVStore. +func (tkv *Store) + +Delete(key []byte) { + writeOperation(tkv.writer, deleteOp, tkv.context, key, nil) + +tkv.parent.Delete(key) +} + +/ Has implements the KVStore interface. It delegates the Has call to the +/ parent KVStore. +func (tkv *Store) + +Has(key []byte) + +bool { + return tkv.parent.Has(key) +} + +/ Iterator implements the KVStore interface. It delegates the Iterator call +/ to the parent KVStore. +func (tkv *Store) + +Iterator(start, end []byte) + +types.Iterator { + return tkv.iterator(start, end, true) +} + +/ ReverseIterator implements the KVStore interface. 
It delegates the +/ ReverseIterator call to the parent KVStore. +func (tkv *Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + return tkv.iterator(start, end, false) +} + +/ iterator facilitates iteration over a KVStore. It delegates the necessary +/ calls to it's parent KVStore. +func (tkv *Store) + +iterator(start, end []byte, ascending bool) + +types.Iterator { + var parent types.Iterator + if ascending { + parent = tkv.parent.Iterator(start, end) +} + +else { + parent = tkv.parent.ReverseIterator(start, end) +} + +return newTraceIterator(tkv.writer, parent, tkv.context) +} + +type traceIterator struct { + parent types.Iterator + writer io.Writer + context types.TraceContext +} + +func newTraceIterator(w io.Writer, parent types.Iterator, tc types.TraceContext) + +types.Iterator { + return &traceIterator{ + writer: w, parent: parent, context: tc +} +} + +/ Domain implements the Iterator interface. +func (ti *traceIterator) + +Domain() (start, end []byte) { + return ti.parent.Domain() +} + +/ Valid implements the Iterator interface. +func (ti *traceIterator) + +Valid() + +bool { + return ti.parent.Valid() +} + +/ Next implements the Iterator interface. +func (ti *traceIterator) + +Next() { + ti.parent.Next() +} + +/ Key implements the Iterator interface. +func (ti *traceIterator) + +Key() []byte { + key := ti.parent.Key() + +writeOperation(ti.writer, iterKeyOp, ti.context, key, nil) + +return key +} + +/ Value implements the Iterator interface. +func (ti *traceIterator) + +Value() []byte { + value := ti.parent.Value() + +writeOperation(ti.writer, iterValueOp, ti.context, nil, value) + +return value +} + +/ Close implements the Iterator interface. +func (ti *traceIterator) + +Close() + +error { + return ti.parent.Close() +} + +/ Error delegates the Error call to the parent iterator. +func (ti *traceIterator) + +Error() + +error { + return ti.parent.Error() +} + +/ GetStoreType implements the KVStore interface. 
It returns the underlying
+/ KVStore type.
+func (tkv *Store)
+
+GetStoreType()
+
+types.StoreType {
+	return tkv.parent.GetStoreType()
+}
+
+/ CacheWrap implements the KVStore interface. It panics because a Store
+/ cannot be branched.
+func (tkv *Store)
+
+CacheWrap()
+
+types.CacheWrap {
+	panic("cannot CacheWrap a TraceKVStore")
+}
+
+/ CacheWrapWithTrace implements the KVStore interface. It panics as a
+/ Store cannot be branched.
+func (tkv *Store)
+
+CacheWrapWithTrace(_ io.Writer, _ types.TraceContext)
+
+types.CacheWrap {
+	panic("cannot CacheWrapWithTrace a TraceKVStore")
+}
+
+/ writeOperation writes a KVStore operation to the underlying io.Writer as
+/ JSON-encoded data where the key/value pair is base64 encoded.
+func writeOperation(w io.Writer, op operation, tc types.TraceContext, key, value []byte) {
+	traceOp := traceOperation{
+	Operation: op,
+	Key:       base64.StdEncoding.EncodeToString(key),
+	Value:     base64.StdEncoding.EncodeToString(value),
+}
+	if tc != nil {
+	traceOp.Metadata = tc
+}
+
+raw, err := json.Marshal(traceOp)
+	if err != nil {
+	panic(errors.Wrap(err, "failed to serialize trace operation"))
+}
+	if _, err := w.Write(raw); err != nil {
+	panic(errors.Wrap(err, "failed to write trace operation"))
+}
+
+	_, err = io.WriteString(w, "\n")
+	if err != nil {
+	panic(errors.Wrap(err, "failed to write newline"))
+}
+}
+```
+
+When any `KVStore` method is called, `tracekv.Store` automatically logs a `traceOperation` to `Store.writer`. `traceOperation.Metadata` is filled with `Store.context` when it is not nil. `TraceContext` is a `map[string]interface{}`.
+
+### `Prefix` Store
+
+`prefix.Store` is a wrapper `KVStore` which provides automatic key-prefixing functionality over the underlying `KVStore`.
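The effect of the wrapping can be shown with a minimal sketch before the full implementation. The types below are hypothetical stand-ins (a plain map replaces the parent `KVStore`); only the key-prefixing behavior mirrors `prefix.Store`:

```go
package main

import "fmt"

// fakeKV stands in for the parent KVStore; the real parent is any
// types.KVStore implementation.
type fakeKV map[string][]byte

// prefixStore forwards Get/Set to its parent with the key prefixed,
// mirroring prefix.Store's key() helper.
type prefixStore struct {
	parent fakeKV
	prefix []byte
}

func (s prefixStore) key(k []byte) []byte {
	return append(append([]byte{}, s.prefix...), k...)
}

func (s prefixStore) Set(k, v []byte) { s.parent[string(s.key(k))] = v }

func (s prefixStore) Get(k []byte) []byte { return s.parent[string(s.key(k))] }

func main() {
	parent := fakeKV{}
	accounts := prefixStore{parent: parent, prefix: []byte("acc/")}

	accounts.Set([]byte("alice"), []byte{1})

	// The parent sees the prefixed key; the wrapper hides it.
	fmt.Println(parent["acc/alice"])           // [1]
	fmt.Println(accounts.Get([]byte("alice"))) // [1]
}
```

Two prefixed stores with distinct prefixes can safely share one parent store, which is exactly how modules isolate their key spaces.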
+ +```go expandable +package prefix + +import ( + + "bytes" + "errors" + "io" + "cosmossdk.io/store/cachekv" + "cosmossdk.io/store/tracekv" + "cosmossdk.io/store/types" +) + +var _ types.KVStore = Store{ +} + +/ Store is similar with cometbft/cometbft/libs/db/prefix_db +/ both gives access only to the limited subset of the store +/ for convinience or safety +type Store struct { + parent types.KVStore + prefix []byte +} + +func NewStore(parent types.KVStore, prefix []byte) + +Store { + return Store{ + parent: parent, + prefix: prefix, +} +} + +func cloneAppend(bz, tail []byte) (res []byte) { + res = make([]byte, len(bz)+len(tail)) + +copy(res, bz) + +copy(res[len(bz):], tail) + +return +} + +func (s Store) + +key(key []byte) (res []byte) { + if key == nil { + panic("nil key on Store") +} + +res = cloneAppend(s.prefix, key) + +return +} + +/ Implements Store +func (s Store) + +GetStoreType() + +types.StoreType { + return s.parent.GetStoreType() +} + +/ Implements CacheWrap +func (s Store) + +CacheWrap() + +types.CacheWrap { + return cachekv.NewStore(s) +} + +/ CacheWrapWithTrace implements the KVStore interface. 
+func (s Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return cachekv.NewStore(tracekv.NewStore(s, w, tc)) +} + +/ Implements KVStore +func (s Store) + +Get(key []byte) []byte { + res := s.parent.Get(s.key(key)) + +return res +} + +/ Implements KVStore +func (s Store) + +Has(key []byte) + +bool { + return s.parent.Has(s.key(key)) +} + +/ Implements KVStore +func (s Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + +s.parent.Set(s.key(key), value) +} + +/ Implements KVStore +func (s Store) + +Delete(key []byte) { + s.parent.Delete(s.key(key)) +} + +/ Implements KVStore +/ Check https://github.com/cometbft/cometbft/blob/master/libs/db/prefix_db.go#L106 +func (s Store) + +Iterator(start, end []byte) + +types.Iterator { + newstart := cloneAppend(s.prefix, start) + +var newend []byte + if end == nil { + newend = cpIncr(s.prefix) +} + +else { + newend = cloneAppend(s.prefix, end) +} + iter := s.parent.Iterator(newstart, newend) + +return newPrefixIterator(s.prefix, start, end, iter) +} + +/ ReverseIterator implements KVStore +/ Check https://github.com/cometbft/cometbft/blob/master/libs/db/prefix_db.go#L129 +func (s Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + newstart := cloneAppend(s.prefix, start) + +var newend []byte + if end == nil { + newend = cpIncr(s.prefix) +} + +else { + newend = cloneAppend(s.prefix, end) +} + iter := s.parent.ReverseIterator(newstart, newend) + +return newPrefixIterator(s.prefix, start, end, iter) +} + +var _ types.Iterator = (*prefixIterator)(nil) + +type prefixIterator struct { + prefix []byte + start []byte + end []byte + iter types.Iterator + valid bool +} + +func newPrefixIterator(prefix, start, end []byte, parent types.Iterator) *prefixIterator { + return &prefixIterator{ + prefix: prefix, + start: start, + end: end, + iter: parent, + valid: parent.Valid() && bytes.HasPrefix(parent.Key(), prefix), +} +} + +/ Implements 
Iterator +func (pi *prefixIterator) + +Domain() ([]byte, []byte) { + return pi.start, pi.end +} + +/ Implements Iterator +func (pi *prefixIterator) + +Valid() + +bool { + return pi.valid && pi.iter.Valid() +} + +/ Implements Iterator +func (pi *prefixIterator) + +Next() { + if !pi.valid { + panic("prefixIterator invalid, cannot call Next()") +} + if pi.iter.Next(); !pi.iter.Valid() || !bytes.HasPrefix(pi.iter.Key(), pi.prefix) { + / TODO: shouldn't pi be set to nil instead? + pi.valid = false +} +} + +/ Implements Iterator +func (pi *prefixIterator) + +Key() (key []byte) { + if !pi.valid { + panic("prefixIterator invalid, cannot call Key()") +} + +key = pi.iter.Key() + +key = stripPrefix(key, pi.prefix) + +return +} + +/ Implements Iterator +func (pi *prefixIterator) + +Value() []byte { + if !pi.valid { + panic("prefixIterator invalid, cannot call Value()") +} + +return pi.iter.Value() +} + +/ Implements Iterator +func (pi *prefixIterator) + +Close() + +error { + return pi.iter.Close() +} + +/ Error returns an error if the prefixIterator is invalid defined by the Valid +/ method. +func (pi *prefixIterator) + +Error() + +error { + if !pi.Valid() { + return errors.New("invalid prefixIterator") +} + +return nil +} + +/ copied from github.com/cometbft/cometbft/libs/db/prefix_db.go +func stripPrefix(key, prefix []byte) []byte { + if len(key) < len(prefix) || !bytes.Equal(key[:len(prefix)], prefix) { + panic("should not happen") +} + +return key[len(prefix):] +} + +/ wrapping types.PrefixEndBytes +func cpIncr(bz []byte) []byte { + return types.PrefixEndBytes(bz) +} +``` + +When `Store.{Get, Set}()` is called, the store forwards the call to its parent, with the key prefixed with the `Store.prefix`. + +When `Store.Iterator()` is called, it does not simply prefix the `Store.prefix`, since it does not work as intended. In that case, some of the elements are traversed even if they are not starting with the prefix. 
+ +### `ListenKv` Store + +`listenkv.Store` is a wrapper `KVStore` which provides state listening capabilities over the underlying `KVStore`. +It is applied automatically by the Cosmos SDK on any `KVStore` whose `StoreKey` is specified during state streaming configuration. +Additional information about state streaming configuration can be found in the [store/streaming/README.md](https://github.com/cosmos/cosmos-sdk/tree/v0.53.0/store/streaming). + +```go expandable +package listenkv + +import ( + + "io" + "cosmossdk.io/store/types" +) + +var _ types.KVStore = &Store{ +} + +/ Store implements the KVStore interface with listening enabled. +/ Operations are traced on each core KVStore call and written to any of the +/ underlying listeners with the proper key and operation permissions +type Store struct { + parent types.KVStore + listener *types.MemoryListener + parentStoreKey types.StoreKey +} + +/ NewStore returns a reference to a new traceKVStore given a parent +/ KVStore implementation and a buffered writer. +func NewStore(parent types.KVStore, parentStoreKey types.StoreKey, listener *types.MemoryListener) *Store { + return &Store{ + parent: parent, listener: listener, parentStoreKey: parentStoreKey +} +} + +/ Get implements the KVStore interface. It traces a read operation and +/ delegates a Get call to the parent KVStore. +func (s *Store) + +Get(key []byte) []byte { + value := s.parent.Get(key) + +return value +} + +/ Set implements the KVStore interface. It traces a write operation and +/ delegates the Set call to the parent KVStore. +func (s *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +s.parent.Set(key, value) + +s.listener.OnWrite(s.parentStoreKey, key, value, false) +} + +/ Delete implements the KVStore interface. It traces a write operation and +/ delegates the Delete call to the parent KVStore. 
+func (s *Store) + +Delete(key []byte) { + s.parent.Delete(key) + +s.listener.OnWrite(s.parentStoreKey, key, nil, true) +} + +/ Has implements the KVStore interface. It delegates the Has call to the +/ parent KVStore. +func (s *Store) + +Has(key []byte) + +bool { + return s.parent.Has(key) +} + +/ Iterator implements the KVStore interface. It delegates the Iterator call +/ the to the parent KVStore. +func (s *Store) + +Iterator(start, end []byte) + +types.Iterator { + return s.iterator(start, end, true) +} + +/ ReverseIterator implements the KVStore interface. It delegates the +/ ReverseIterator call the to the parent KVStore. +func (s *Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + return s.iterator(start, end, false) +} + +/ iterator facilitates iteration over a KVStore. It delegates the necessary +/ calls to it's parent KVStore. +func (s *Store) + +iterator(start, end []byte, ascending bool) + +types.Iterator { + var parent types.Iterator + if ascending { + parent = s.parent.Iterator(start, end) +} + +else { + parent = s.parent.ReverseIterator(start, end) +} + +return newTraceIterator(parent, s.listener) +} + +type listenIterator struct { + parent types.Iterator + listener *types.MemoryListener +} + +func newTraceIterator(parent types.Iterator, listener *types.MemoryListener) + +types.Iterator { + return &listenIterator{ + parent: parent, listener: listener +} +} + +/ Domain implements the Iterator interface. +func (li *listenIterator) + +Domain() (start, end []byte) { + return li.parent.Domain() +} + +/ Valid implements the Iterator interface. +func (li *listenIterator) + +Valid() + +bool { + return li.parent.Valid() +} + +/ Next implements the Iterator interface. +func (li *listenIterator) + +Next() { + li.parent.Next() +} + +/ Key implements the Iterator interface. +func (li *listenIterator) + +Key() []byte { + key := li.parent.Key() + +return key +} + +/ Value implements the Iterator interface. 
+func (li *listenIterator) + +Value() []byte { + value := li.parent.Value() + +return value +} + +/ Close implements the Iterator interface. +func (li *listenIterator) + +Close() + +error { + return li.parent.Close() +} + +/ Error delegates the Error call to the parent iterator. +func (li *listenIterator) + +Error() + +error { + return li.parent.Error() +} + +/ GetStoreType implements the KVStore interface. It returns the underlying +/ KVStore type. +func (s *Store) + +GetStoreType() + +types.StoreType { + return s.parent.GetStoreType() +} + +/ CacheWrap implements the KVStore interface. It panics as a Store +/ cannot be cache wrapped. +func (s *Store) + +CacheWrap() + +types.CacheWrap { + panic("cannot CacheWrap a ListenKVStore") +} + +/ CacheWrapWithTrace implements the KVStore interface. It panics as a +/ Store cannot be cache wrapped. +func (s *Store) + +CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) + +types.CacheWrap { + panic("cannot CacheWrapWithTrace a ListenKVStore") +} +``` + +When `KVStore.Set` or `KVStore.Delete` methods are called, `listenkv.Store` automatically writes the operations to the set of `Store.listeners`. + +## `BasicKVStore` interface + +An interface providing only the basic CRUD functionality (`Get`, `Set`, `Has`, and `Delete` methods), without iteration or caching. This is used to partially expose components of a larger store. diff --git a/docs/sdk/next/index.mdx b/docs/sdk/next/index.mdx deleted file mode 100644 index 27bc46e1..00000000 --- a/docs/sdk/next/index.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "SDK Overview" -description: "Cosmos SDK integration docs (coming soon)" ---- - - -The SDK documentation is under active development and will appear here soon. - - -If you have specific topics you’d like covered, please open an issue. 
- diff --git a/docs/sdk/next/release-notes/placeholder.mdx b/docs/sdk/next/release-notes/placeholder.mdx new file mode 100644 index 00000000..166ef9e1 --- /dev/null +++ b/docs/sdk/next/release-notes/placeholder.mdx @@ -0,0 +1,8 @@ +--- +title: Release Notes +description: SDK release notes +--- + +# Release Notes + +For detailed release notes, please visit the [Cosmos SDK releases page](https://github.com/cosmos/cosmos-sdk/releases). diff --git a/docs/sdk/next/tutorials/transactions/building-a-transaction.mdx b/docs/sdk/next/tutorials/transactions/building-a-transaction.mdx new file mode 100644 index 00000000..8ecf4a2b --- /dev/null +++ b/docs/sdk/next/tutorials/transactions/building-a-transaction.mdx @@ -0,0 +1,193 @@ +--- +title: Building a Transaction +description: >- + These are the steps to build, sign and broadcast a transaction using v2 + semantics. +--- + +These are the steps to build, sign and broadcast a transaction using v2 semantics. + +1. Correctly set up imports + +```go expandable +import ( + + "context" + "fmt" + "log" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + apisigning "cosmossdk.io/api/cosmos/tx/signing/v1beta1" + "cosmossdk.io/client/v2/broadcast/comet" + "cosmossdk.io/client/v2/tx" + "cosmossdk.io/core/transaction" + "cosmossdk.io/math" + banktypes "cosmossdk.io/x/bank/types" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptocodec "github.com/cosmos/cosmos-sdk/crypto/codec" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/codec" + addrcodec "github.com/cosmos/cosmos-sdk/codec/address" + sdk "github.com/cosmos/cosmos-sdk/types" +) +``` + +2. Create a gRPC connection + +```go +clientConn, err := grpc.NewClient("127.0.0.1:9090", grpc.WithTransportCredentials(insecure.NewCredentials())) + if err != nil { + log.Fatal(err) +} +``` + +3. 
Setup codec and interface registry + +```go +/ Setup interface registry and register necessary interfaces + interfaceRegistry := codectypes.NewInterfaceRegistry() + +banktypes.RegisterInterfaces(interfaceRegistry) + +authtypes.RegisterInterfaces(interfaceRegistry) + +cryptocodec.RegisterInterfaces(interfaceRegistry) + + / Create a ProtoCodec for encoding/decoding + protoCodec := codec.NewProtoCodec(interfaceRegistry) +``` + +4. Initialize keyring + +```go expandable +ckr, err := keyring.New("autoclikeyring", "test", home, nil, protoCodec) + if err != nil { + log.Fatal("error creating keyring", err) +} + +kr, err := keyring.NewAutoCLIKeyring(ckr, addrcodec.NewBech32Codec("cosmos")) + if err != nil { + log.Fatal("error creating auto cli keyring", err) +} +``` + +5. Setup transaction parameters + +```go expandable +/ Setup transaction parameters + txParams := tx.TxParameters{ + ChainID: "simapp-v2-chain", + SignMode: apisigning.SignMode_SIGN_MODE_DIRECT, + AccountConfig: tx.AccountConfig{ + FromAddress: "cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", + FromName: "alice", +}, +} + + / Configure gas settings + gasConfig, err := tx.NewGasConfig(100, 100, "0stake") + if err != nil { + log.Fatal("error creating gas config: ", err) +} + +txParams.GasConfig = gasConfig + + / Create auth query client + authClient := authtypes.NewQueryClient(clientConn) + + / Retrieve account information for the sender + fromAccount, err := getAccount("cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", authClient, protoCodec) + if err != nil { + log.Fatal("error getting from account: ", err) +} + + / Update txParams with the correct account number and sequence + txParams.AccountConfig.AccountNumber = fromAccount.GetAccountNumber() + +txParams.AccountConfig.Sequence = fromAccount.GetSequence() + + / Retrieve account information for the recipient + toAccount, err := getAccount("cosmos1e2wanzh89mlwct7cs7eumxf7mrh5m3ykpsh66m", authClient, protoCodec) + if err != nil { + log.Fatal("error getting to 
account: ", err) +} + + / Configure transaction settings + txConf, _ := tx.NewTxConfig(tx.ConfigOptions{ + AddressCodec: addrcodec.NewBech32Codec("cosmos"), + Cdc: protoCodec, + ValidatorAddressCodec: addrcodec.NewBech32Codec("cosmosval"), + EnabledSignModes: []apisigning.SignMode{ + apisigning.SignMode_SIGN_MODE_DIRECT +}, +}) +``` + +6. Build the transaction + +```go expandable +/ Create a transaction factory + f, err := tx.NewFactory(kr, codec.NewProtoCodec(codectypes.NewInterfaceRegistry()), nil, txConf, addrcodec.NewBech32Codec("cosmos"), clientConn, txParams) + if err != nil { + log.Fatal("error creating factory", err) +} + + / Define the transaction message + msgs := []transaction.Msg{ + &banktypes.MsgSend{ + FromAddress: fromAccount.GetAddress().String(), + ToAddress: toAccount.GetAddress().String(), + Amount: sdk.Coins{ + sdk.NewCoin("stake", math.NewInt(1000000)), +}, +}, +} + + / Build and sign the transaction + tx, err := f.BuildsSignedTx(context.Background(), msgs...) + if err != nil { + log.Fatal("error building signed tx", err) +} +``` + +7. Broadcast the transaction + +```go expandable +/ Create a broadcaster for the transaction + c, err := comet.NewCometBFTBroadcaster("http://127.0.0.1:26657", comet.BroadcastSync, protoCodec) + if err != nil { + log.Fatal("error creating comet broadcaster", err) +} + + / Broadcast the transaction + res, err := c.Broadcast(context.Background(), tx.Bytes()) + if err != nil { + log.Fatal("error broadcasting tx", err) +} +``` + +8. 
Helpers + +```go expandable +/ getAccount retrieves account information using the provided address +func getAccount(address string, authClient authtypes.QueryClient, codec codec.Codec) (sdk.AccountI, error) { + / Query account info + accountQuery, err := authClient.Account(context.Background(), &authtypes.QueryAccountRequest{ + Address: string(address), +}) + if err != nil { + return nil, fmt.Errorf("error getting account: %w", err) +} + + / Unpack the account information + var account sdk.AccountI + err = codec.InterfaceRegistry().UnpackAny(accountQuery.Account, &account) + if err != nil { + return nil, fmt.Errorf("error unpacking account: %w", err) +} + +return account, nil +} +``` diff --git a/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx b/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx new file mode 100644 index 00000000..dad1cebf --- /dev/null +++ b/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx @@ -0,0 +1,109 @@ +--- +title: Demo of Mitigating Front-Running with Vote Extensions +--- + +The purpose of this demo is to test the implementation of the `VoteExtensionHandler` and `PrepareProposalHandler` that we have just added to the codebase. These handlers are designed to mitigate front-running by ensuring that all validators have a consistent view of the mempool when preparing proposals. + +In this demo, we are using a 3 validator network. The Beacon validator is special because it has a custom transaction provider enabled. This means that it can potentially manipulate the order of transactions in a proposal to its advantage (i.e., front-running). + +1. Bootstrap the validator network: This sets up a network with 3 validators. The script \`./scripts/configure.sh is used to configure the network and the validators. 
+
+```shell
+cd scripts
+./configure.sh
+```
+
+If this doesn't work, please ensure you have run `make build` in the `tutorials/nameservice/base` directory.
+
+{/* nolint:all */}
+2\. Have alice attempt to reserve `bob.cosmos`: This is a normal transaction that alice wants to execute. The script `./scripts/reserve.sh "bob.cosmos"` is used to send this transaction.
+
+```shell
+./reserve.sh "bob.cosmos"
+```
+
+{/* /nolint:all */}
+3\. Query to verify the name has been reserved: This is to check the result of the transaction. The script `./scripts/whois.sh "bob.cosmos"` is used to query the state of the blockchain.
+
+```shell
+./whois.sh "bob.cosmos"
+```
+
+It should return:
+
+```json expandable
+{
+  "name": {
+    "name": "bob.cosmos",
+    "owner": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w",
+    "resolve_address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht",
+    "amount": [
+      {
+        "denom": "uatom",
+        "amount": "1000"
+      }
+    ]
+  }
+}
+```
+
+To detect front-running attempts by the beacon, scrutinise the logs during the `ProcessProposal` stage. Open the logs for each validator, including the beacon, `val1`, and `val2`, to observe the following behavior. The location of the log file can vary depending on your setup, but it is typically in a directory like `$HOME/cosmos/nodes/#{validator}/logs`, where `#{validator}` is one of `beacon`, `val1`, or `val2`.
Run the following to tail the logs of the validator or beacon:
+
+```shell
+tail -f $HOME/cosmos/nodes/#{validator}/logs
+```
+
+```shell
+2:47PM ERR :: Detected invalid proposal bid :: name:"bob.cosmos" resolveAddress:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" owner:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" amount: module=server
+2:47PM ERR :: Unable to validate bids in Process Proposal :: module=server
+2:47PM ERR prevote step: state machine rejected a proposed block; this should not happen:the proposer may be misbehaving; prevoting nil err=null height=142 module=consensus round=0
+```
+
+{/* /nolint:all */}
+4\. List the Beacon's keys: This is to verify the addresses of the validators. The script `./scripts/list-beacon-keys.sh` is used to list the keys.
+
+```shell
+./list-beacon-keys.sh
+```
+
+We should receive something similar to the following:
+
+```shell expandable
+[
+  {
+    "name": "alice",
+    "type": "local",
+    "address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht",
+    "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A32cvBUkNJz+h2vld4A5BxvU5Rd+HyqpR3aGtvEhlm4C\"}"
+  },
+  {
+    "name": "barbara",
+    "type": "local",
+    "address": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w",
+    "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"Ag9PFsNyTQPoJdbyCWia5rZH9CrvSrjMsk7Oz4L3rXQ5\"}"
+  },
+  {
+    "name": "beacon-key",
+    "type": "local",
+    "address": "cosmos1ez9a6x7lz4gvn27zr368muw8jeyas7sv84lfup",
+    "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"AlzJZMWyN7lass710TnAhyuFKAFIaANJyw5ad5P2kpcH\"}"
+  },
+  {
+    "name": "cindy",
+    "type": "local",
+    "address": "cosmos1m5j6za9w4qc2c5ljzxmm2v7a63mhjeag34pa3g",
+    "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A6F1/3yot5OpyXoSkBbkyl+3rqBkxzRVSJfvSpm/AvW5\"}"
+  }
+]
+```
+
+This allows us to match up the addresses and see that the bid was not front-run by the beacon, as the resolve address is Alice's address and not the beacon's address.
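This address matching can also be scripted; below is a minimal sketch, where the `Key` struct and `frontRunBy` helper are illustrative (not part of the tutorial's codebase) and assume the JSON shape printed by `list-beacon-keys.sh`:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Key mirrors one entry of the list-beacon-keys.sh output (illustrative).
type Key struct {
	Name    string `json:"name"`
	Address string `json:"address"`
}

// frontRunBy returns the name of the key that owns resolveAddr, so we can
// tell whether the winning bid resolves to alice or to the beacon.
func frontRunBy(keys []Key, resolveAddr string) string {
	for _, k := range keys {
		if k.Address == resolveAddr {
			return k.Name
		}
	}
	return "unknown"
}

func main() {
	// Trimmed sample of the key list shown above.
	raw := `[
	  {"name": "alice", "address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht"},
	  {"name": "beacon-key", "address": "cosmos1ez9a6x7lz4gvn27zr368muw8jeyas7sv84lfup"}
	]`
	var keys []Key
	if err := json.Unmarshal([]byte(raw), &keys); err != nil {
		log.Fatal(err)
	}
	// The resolve address from the whois output belongs to alice,
	// so the bid was not front-run by the beacon.
	fmt.Println(frontRunBy(keys, "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht")) // alice
}
```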
+ +By running this demo, we can verify that the `VoteExtensionHandler` and `PrepareProposalHandler` are working as expected and that they are able to prevent front-running. + +## Conclusion + +In this tutorial, we've tackled front-running and MEV, focusing on nameservice auctions' vulnerability to these issues. We've explored vote extensions, a key feature of ABCI 2.0, and integrated them into a Cosmos SDK application. + +Through practical exercises, you've implemented vote extensions, and tested their effectiveness in creating a fair auction system. You've gained practical insights by configuring a validator network and analysing blockchain logs. + +Keep experimenting with these concepts, engage with the community, and stay updated on new advancements. The knowledge you've acquired here is crucial for developing secure and fair blockchain applications. diff --git a/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx b/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx new file mode 100644 index 00000000..0a17cf07 --- /dev/null +++ b/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx @@ -0,0 +1,45 @@ +--- +title: Getting Started +description: >- + - Getting Started - Understanding Front-Running - Mitigating Front-running + with Vote Extensions - Demo of Mitigating Front-Running +--- + +## Table of Contents + +* [Getting Started](#overview-of-the-project) +* [Understanding Front-Running](/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) +* [Mitigating Front-running with Vote Extensions](/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions) +* [Demo of Mitigating Front-Running](/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running) + +## Getting Started + +### Overview of the Project + +This tutorial outlines the development of a module designed 
to mitigate front-running in nameservice auctions. The following functions are central to this module:
+
+* `ExtendVote`: Gathers bids from the mempool and includes them in the vote extension to ensure a fair and transparent auction process.
+* `PrepareProposal`: Processes the vote extensions from the previous block, creating a special transaction that encapsulates bids to be included in the current proposal.
+* `ProcessProposal`: Validates that the first transaction in the proposal is the special transaction containing the vote extensions and ensures the integrity of the bids.
+
+In this advanced tutorial, we will be working with an example application that facilitates the auctioning of nameservices. To learn what front-running and nameservices are, see [Understanding Front-Running](/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning). This application provides a practical use case to explore the prevention of auction front-running, also known as "bid sniping", where a validator takes advantage of seeing a bid in the mempool to place their own higher bid before the original bid is processed.
+
+The tutorial will guide you through using the Cosmos SDK to mitigate front-running using vote extensions. The module will be built on top of the base blockchain provided in the `tutorials/base` directory and will use the `auction` module as a foundation. By the end of this tutorial, you will have a better understanding of how to prevent front-running in blockchain auctions, specifically in the context of nameservice auctioning.
+
+## What are Vote Extensions?
+
+Vote extensions are arbitrary information that can be inserted into a block. This feature is part of ABCI 2.0, which is available for use in the SDK 0.50 release and part of the 0.38 CometBFT release.
+
+More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions).
+ +## Requirements and Setup + +Before diving into the advanced tutorial on auction front-running simulation, ensure you meet the following requirements: + +* [Golang >1.21.5](https://golang.org/doc/install) installed +* Familiarity with the concepts of front-running and MEV, as detailed in [Understanding Front-Running](/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) +* Understanding of Vote Extensions as described [here](https://docs.cosmos.network/main/build/abci/vote-extensions) + +You will also need a foundational blockchain to build upon coupled with your own module. The `tutorials/base` directory has the necessary blockchain code to start your custom project with the Cosmos SDK. For the module, you can use the `auction` module provided in the `tutorials/auction/x/auction` directory as a reference but please be aware that all of the code needed to implement vote extensions is already implemented in this module. + +This will set up a strong base for your blockchain, enabling the integration of advanced features such as auction front-running simulation. 
diff --git a/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx b/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx new file mode 100644 index 00000000..406d510c --- /dev/null +++ b/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx @@ -0,0 +1,380 @@ +--- +title: Mitigating Front-running with Vote Extensions +description: >- + Prerequisites Implementing Structs for Vote Extensions Implementing Handlers + and Configuring Handlers +--- + +## Table of Contents + +* [Prerequisites](#prerequisites) +* [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions) +* [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers) + +## Prerequisites + +Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance. + +In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions. 
+
+### Implementing Structs for Vote Extensions
+
+First, copy the following structs into `abci/types.go`. Each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions:
+
+```go expandable
+package abci
+
+import (
+	/ import the necessary files
+)
+
+type PrepareProposalHandler struct {
+	logger      log.Logger
+	txConfig    client.TxConfig
+	cdc         codec.Codec
+	mempool     *mempool.ThresholdMempool
+	txProvider  provider.TxProvider
+	keyname     string
+	runProvider bool
+}
+```
+
+The `PrepareProposalHandler` struct handles the preparation of a proposal in the consensus process. It contains several fields:
+
+* `logger`: logs information and errors
+* `txConfig`: the transaction configuration
+* `cdc`: the codec used for encoding and decoding transactions
+* `mempool`: a reference to the set of unconfirmed transactions
+* `txProvider`: builds the proposal out of transactions
+* `keyname`: the name of the key used for signing transactions
+* `runProvider`: a boolean flag indicating whether the provider should be run to build the proposal
+
+```go
+type ProcessProposalHandler struct {
+	TxConfig client.TxConfig
+	Codec    codec.Codec
+	Logger   log.Logger
+}
+```
+
+After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions. The `ProcessProposalHandler` gives you access to the transaction configuration and codec, which are necessary for processing the vote extensions.
+
+```go
+type VoteExtHandler struct {
+	logger       log.Logger
+	currentBlock int64
+	mempool      *mempool.ThresholdMempool
+	cdc          codec.Codec
+}
+```
+
+This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding.
Vote extensions are a key part of the process to mitigate front-running, as they allow additional information to be included with each vote.
+
+```go
+type InjectedVoteExt struct {
+	VoteExtSigner []byte
+	Bids          [][]byte
+}
+
+type InjectedVotes struct {
+	Votes []InjectedVoteExt
+}
+```
+
+These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with it. Each byte array in `Bids` is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format.
+
+```go
+type AppVoteExtension struct {
+	Height int64
+	Bids   [][]byte
+}
+```
+
+This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data that is specific to the application.
+
+```go
+type SpecialTransaction struct {
+	Height int
+	Bids   [][]byte
+}
+```
+
+This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running.
+
+### Implementing Handlers and Configuring Handlers
+
+To establish the `VoteExtensionHandler`, follow these steps:
+
+1. Navigate to the `abci/proposal.go` file. This is where we will implement the `VoteExtensionHandler`.
+
+2. Implement the `NewVoteExtensionHandler` function. This function is a constructor for the `VoteExtHandler` struct.
It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`.
+
+```go
+func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler {
+	return &VoteExtHandler{
+		logger:  lg,
+		mempool: mp,
+		cdc:     cdc,
+	}
+}
+```
+
+3. Implement the `ExtendVoteHandler()` method. This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in the `abci.RequestPrepareProposal` during the ensuing block.
+
+```go expandable
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+	return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+		h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))
+
+		voteExtBids := [][]byte{}
+
+		/ Get mempool txs
+		itr := h.mempool.SelectPending(context.Background(), nil)
+		for itr != nil {
+			tmptx := itr.Tx()
+			sdkMsgs := tmptx.GetMsgs()
+
+			/ Iterate through msgs, check for any bids
+			for _, msg := range sdkMsgs {
+				switch msg := msg.(type) {
+				case *nstypes.MsgBid:
+					/ Marshal sdk bids to []byte
+					bz, err := h.cdc.Marshal(msg)
+					if err != nil {
+						h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
+						break
+					}
+					voteExtBids = append(voteExtBids, bz)
+				default:
+				}
+			}
+
+			/ Move tx to ready pool
+			err := h.mempool.Update(context.Background(), tmptx)
+
+			/ Remove tx from app side mempool
+			if err != nil {
+				h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
+			}
+
+			itr = itr.Next()
+		}
+
+		/ Create vote extension
+		voteExt := AppVoteExtension{
+			Height: req.Height,
+			Bids:   voteExtBids,
+		}
+
+		/ Encode vote extension
+		bz, err := json.Marshal(voteExt)
+		if err != nil {
+			return nil, fmt.Errorf("Error marshalling VE: %w", err)
+		}
+
+		return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+	}
+}
+```
+
+4.
Configure the handler in `app/app.go` as shown below:
+
+```go
+bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+
+voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
+bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
+```
+
+To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.
+
+To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`:
+
+```go
+if req.Height > 2 {
+	voteExt := req.GetLocalLastCommit()
+	h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt))
+}
+```
+
+This is how the whole function should look:
+
+```go expandable
+func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler {
+	return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+		h.logger.Info(fmt.Sprintf(" :: Prepare Proposal"))
+
+		var proposalTxs [][]byte
+		var txs []sdk.Tx
+
+		// Get vote extensions
+		if req.Height > 2 {
+			voteExt := req.GetLocalLastCommit()
+			h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt))
+		}
+
+		itr := h.mempool.Select(context.Background(), nil)
+		for itr != nil {
+			tmptx := itr.Tx()
+			txs = append(txs, tmptx)
+			itr = itr.Next()
+		}
+		h.logger.Info(fmt.Sprintf(" :: Number of Transactions available from mempool: %v", len(txs)))
+
+		if h.runProvider {
+			tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs)
+			if err != nil {
+				h.logger.Error(fmt.Sprintf(" :: Error Building Custom Proposal: %v", err))
+			}
+			txs = tmpMsgs
+		}
+
+		for _, sdkTxs := range txs {
+			txBytes, err := h.txConfig.TxEncoder()(sdkTxs)
+			if err != nil {
+				h.logger.Info(fmt.Sprintf("~Error encoding transaction: %v", err.Error()))
+			}
+			proposalTxs = append(proposalTxs, txBytes)
+		}
+		h.logger.Info(fmt.Sprintf(" :: Number of Transactions in proposal: %v", len(proposalTxs)))
+
+		return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil
+	}
+}
+```
+
+As mentioned above, to check whether vote extensions have been propagated, inspect the logs for relevant messages such as ` :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again.
+
+5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.
+
+```go expandable
+func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+	return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) {
+		h.Logger.Info(fmt.Sprintf(" :: Process Proposal"))
+
+		// The first transaction will always be the Special Transaction
+		numTxs := len(req.Txs)
+		h.Logger.Info(fmt.Sprintf(":: Number of transactions :: %v", numTxs))
+
+		if numTxs >= 1 {
+			var st SpecialTransaction
+			err = json.Unmarshal(req.Txs[0], &st)
+			if err != nil {
+				h.Logger.Error(fmt.Sprintf(":: Error unmarshalling special Tx in Process Proposal :: %v", err))
+			}
+			if len(st.Bids) > 0 {
+				h.Logger.Info(fmt.Sprintf(":: There are bids in the Special Transaction"))
+
+				var bids []nstypes.MsgBid
+				for i, b := range st.Bids {
+					var bid nstypes.MsgBid
+					h.Codec.Unmarshal(b, &bid)
+					h.Logger.Info(fmt.Sprintf(":: Special Transaction Bid No %v :: %v", i, bid))
+					bids = append(bids, bid)
+				}
+
+				// Validate bids in the tx
+				txs := req.Txs[1:]
+				ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger)
+				if err != nil {
+					h.Logger.Error(fmt.Sprintf(":: Error validating bids in Process Proposal :: %v", err))
+					return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+				}
+				if !ok {
+					h.Logger.Error(fmt.Sprintf(":: Unable to validate bids in Process Proposal :: %v", err))
+					return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+				}
+				h.Logger.Info(":: Successfully validated bids in Process Proposal")
+			}
+		}
+
+		return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+	}
+}
+```
+
+6. Implement the `processVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids.
+
+```go expandable
+func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) {
+	log.Info(fmt.Sprintf(" :: Process Vote Extensions"))
+
+	// Create empty response
+	st := SpecialTransaction{
+		0,
+		[][]byte{},
+	}
+
+	// Get vote extensions for H-1 from the request
+	voteExt := req.GetLocalLastCommit()
+	votes := voteExt.Votes
+
+	// Iterate through votes
+	var ve AppVoteExtension
+	for _, vote := range votes {
+		// Unmarshal to AppVoteExtension
+		err := json.Unmarshal(vote.VoteExtension, &ve)
+		if err != nil {
+			log.Error(fmt.Sprintf(" :: Error unmarshalling Vote Extension"))
+		}
+
+		st.Height = int(ve.Height)
+
+		// If there are bids in the VE, append them to the Special Transaction
+		if len(ve.Bids) > 0 {
+			log.Info(" :: Bids in VE")
+			for _, b := range ve.Bids {
+				st.Bids = append(st.Bids, b)
+			}
+		}
+	}
+
+	return st, nil
+}
+```
+
+7. Configure the `ProcessProposalHandler()` in `app/app.go`:
+
+```go
+processPropHandler := abci2.ProcessProposalHandler{app.txConfig, appCodec, logger}
+
+bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler())
+```
+
+This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called.
+ +To test if the proposal processing and vote extensions are working correctly, you can check the logs for any relevant messages. If the logs do not provide enough information, you can also reinitialize your local testing environment by running `./scripts/single_node/setup.sh` script. diff --git a/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx b/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx new file mode 100644 index 00000000..252d4184 --- /dev/null +++ b/docs/sdk/next/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx @@ -0,0 +1,48 @@ +--- +title: Understanding Front-Running and more +description: >- + Blockchain technology is vulnerable to practices that can affect the fairness + and security of the network. Two such practices are front-running and Maximal + Extractable Value (MEV), which are important for blockchain participants to + understand. +--- + +## Introduction + +Blockchain technology is vulnerable to practices that can affect the fairness and security of the network. Two such practices are front-running and Maximal Extractable Value (MEV), which are important for blockchain participants to understand. + +## What is Front-Running? + +Front-running is when someone, such as a validator, uses their ability to see pending transactions to execute their own transactions first, benefiting from the knowledge of upcoming transactions. In nameservice auctions, a front-runner might place a higher bid before the original bid is confirmed, unfairly winning the auction. + +## Nameservices and Nameservice Auctions + +Nameservices are human-readable identifiers on a blockchain, akin to internet domain names, that correspond to specific addresses or resources. They simplify interactions with typically long and complex blockchain addresses, allowing users to have a memorable and unique identifier for their blockchain address or smart contract. 
+
+Nameservice auctions are the process by which these identifiers are bid on and acquired. To combat front-running—where someone might use knowledge of pending bids to place a higher bid first—mechanisms such as commit-reveal schemes, auction extensions, and fair sequencing are implemented. These strategies ensure a transparent and fair bidding process, reducing the potential for Maximal Extractable Value (MEV) exploitation.
+
+## What is Maximal Extractable Value (MEV)?
+
+MEV is the highest value that can be extracted by manipulating the order of transactions within a block, beyond the standard block rewards and fees. This has become more prominent with the growth of decentralised finance (DeFi), where transaction order can greatly affect profits.
+
+## Implications of MEV
+
+MEV can lead to:
+
+* **Network Security**: Potential centralisation, as those with more computational power might dominate the process, increasing the risk of attacks.
+* **Market Fairness**: An uneven playing field where only a few can gain at the expense of the majority.
+* **User Experience**: Higher fees and network congestion due to the competition for MEV.
+
+## Mitigating MEV and Front-Running
+
+Some solutions are being developed to mitigate MEV and front-running, including:
+
+* **Time-delayed Transactions**: Random delays to make transaction timing unpredictable.
+* **Private Transaction Pools**: Concealing transactions until they are mined.
+* **Fair Sequencing Services**: Processing transactions in the order they are received.
+
+For this tutorial, we will be exploring the last solution, fair sequencing services, in the context of nameservice auctions.
+
+## Conclusion
+
+MEV and front-running are challenges to blockchain integrity and fairness. Ongoing innovation and implementation of mitigation strategies are crucial for the ecosystem's health and success.
diff --git a/docs/sdk/next/tutorials/vote-extensions/oracle/getting-started.mdx
new file mode 100644
index 00000000..b004780a
--- /dev/null
+++ b/docs/sdk/next/tutorials/vote-extensions/oracle/getting-started.mdx
@@ -0,0 +1,39 @@
+---
+title: Getting Started
+description: What is an Oracle? Implementing Vote Extensions Testing the Oracle Module
+---
+
+## Table of Contents
+
+* [What is an Oracle?](/docs/sdk/next/tutorials/vote-extensions/oracle/what-is-an-oracle)
+* [Implementing Vote Extensions](/docs/sdk/next/tutorials/vote-extensions/oracle/implementing-vote-extensions)
+* [Testing the Oracle Module](/docs/sdk/next/tutorials/vote-extensions/oracle/testing-oracle)
+
+## Prerequisites
+
+Before you start with this tutorial, make sure you have:
+
+* A working chain project. This tutorial won't cover the steps of creating a new chain/module.
+* Familiarity with the Cosmos SDK. If you're not familiar with it, we suggest starting with the [Cosmos SDK Tutorials](https://tutorials.cosmos.network), as ABCI++ is considered an advanced topic.
+* Read and understood [What is an Oracle?](/docs/sdk/next/tutorials/vote-extensions/oracle/what-is-an-oracle). This provides necessary background information for understanding the Oracle module.
+* A basic understanding of the Go programming language.
+
+## What are Vote Extensions?
+
+Vote extensions are arbitrary information that can be inserted into a block. This feature is part of ABCI 2.0, which is available for use in the SDK 0.50 release and part of the 0.38 CometBFT release.
+
+More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions).
+
+## Overview of the project
+
+We'll go through the creation of a simple price oracle module, focusing on the vote extensions implementation and ignoring the details inside the price oracle itself.
+
+We'll go through the implementation of:
+
+* `ExtendVote` to get information from external price APIs.
+* `VerifyVoteExtension` to check that the format of the provided votes is correct.
+* `PrepareProposal` to process the vote extensions from the previous block and include them into the proposal as a transaction.
+* `ProcessProposal` to check that the first transaction in the proposal is actually a “special tx” that contains the price information.
+* `PreBlocker` to make price information available during FinalizeBlock.
+
+If you would like to see the complete working oracle module, please see [here](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle).
diff --git a/docs/sdk/next/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx
new file mode 100644
index 00000000..1c2df539
--- /dev/null
+++ b/docs/sdk/next/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx
@@ -0,0 +1,254 @@
+---
+title: Implementing Vote Extensions
+description: >-
+  First we’ll create the OracleVoteExtension struct, this is the object that
+  will be marshaled as bytes and signed by the validator.
+---
+
+## Implement ExtendVote
+
+First we’ll create the `OracleVoteExtension` struct; this is the object that will be marshaled as bytes and signed by the validator.
+
+In our example we’ll use JSON to marshal the vote extension for simplicity, but we recommend finding an encoding that produces a smaller output, given that large vote extensions could impact CometBFT’s performance. Custom encodings and compressed bytes can be used out of the box.
+
+```go
+// OracleVoteExtension defines the canonical vote extension structure.
+type OracleVoteExtension struct {
+	Height int64
+	Prices map[string]math.LegacyDec
+}
+```
+
+Then we’ll create a `VoteExtHandler` struct that contains everything we need to query for prices.
+
+```go
+type VoteExtHandler struct {
+	logger          log.Logger
+	currentBlock    int64         // current block height
+	lastPriceSyncTS time.Time     // last time we synced prices
+	providerTimeout time.Duration // timeout for fetching prices from providers
+	providers       map[string]Provider // mapping of provider name to provider (e.g. Binance -> BinanceProvider)
+	providerPairs   map[string][]keeper.CurrencyPair // mapping of provider name to supported pairs (e.g. Binance -> [ATOM/USD])
+
+	Keeper keeper.Keeper // keeper of our oracle module
+}
+```
+
+Finally, a function that returns `sdk.ExtendVoteHandler` is needed too, and this is where our vote extension logic will live.
+
+```go expandable
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+	return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+		// here we'd have a helper function that gets all the prices and does a weighted average using the volume of each market
+		prices := h.getAllVolumeWeightedPrices()
+
+		voteExt := OracleVoteExtension{
+			Height: req.Height,
+			Prices: prices,
+		}
+
+		bz, err := json.Marshal(voteExt)
+		if err != nil {
+			return nil, fmt.Errorf("failed to marshal vote extension: %w", err)
+		}
+
+		return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+	}
+}
+```
+
+As you can see above, the creation of a vote extension is pretty simple and we just have to return bytes. CometBFT will handle the signing of these bytes for us. We ignored the process of getting the prices, but you can see a more complete example [here](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle/abci/vote_extensions.go).
+
+## Implement VerifyVoteExtension
+
+Here we’ll do some simple checks like:
+
+* Is the vote extension unmarshaled correctly?
+* Is the vote extension for the right height?
+* Some other validation; for example, are the prices from this extension too deviated from my own prices? Or maybe checks that can detect malicious behavior.
+
+```go expandable
+func (h *VoteExtHandler) VerifyVoteExtensionHandler() sdk.VerifyVoteExtensionHandler {
+	return func(ctx sdk.Context, req *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
+		var voteExt OracleVoteExtension
+		err := json.Unmarshal(req.VoteExtension, &voteExt)
+		if err != nil {
+			return nil, fmt.Errorf("failed to unmarshal vote extension: %w", err)
+		}
+
+		if voteExt.Height != req.Height {
+			return nil, fmt.Errorf("vote extension height does not match request height; expected: %d, got: %d", req.Height, voteExt.Height)
+		}
+
+		// Verify incoming prices from a validator are valid. Note, verification during
+		// VerifyVoteExtensionHandler MUST be deterministic. For brevity and demo
+		// purposes, we omit implementation.
+		if err := h.verifyOraclePrices(ctx, voteExt.Prices); err != nil {
+			return nil, fmt.Errorf("failed to verify oracle prices from validator %X: %w", req.ValidatorAddress, err)
+		}
+
+		return &abci.ResponseVerifyVoteExtension{Status: abci.ResponseVerifyVoteExtension_ACCEPT}, nil
+	}
+}
+```
+
+## Implement PrepareProposal
+
+```go
+type ProposalHandler struct {
+	logger   log.Logger
+	keeper   keeper.Keeper          // our oracle module keeper
+	valStore baseapp.ValidatorStore // to get the current validators' pubkeys
+}
+```
+
+And we create the struct for our “special tx”, which will contain the prices and the votes so validators can later re-check in `ProcessProposal` that they get the same result as the block’s proposer. With this we could also check if all the votes have been used by comparing the votes received in `ProcessProposal`.
+
+```go
+type StakeWeightedPrices struct {
+	StakeWeightedPrices map[string]math.LegacyDec
+	ExtendedCommitInfo  abci.ExtendedCommitInfo
+}
+```
+
+Now we create the `PrepareProposalHandler`. In this step we’ll first check if the vote extensions’ signatures are correct using a helper function called `ValidateVoteExtensions` from the baseapp package.
+
+```go
+func (h *ProposalHandler) PrepareProposal() sdk.PrepareProposalHandler {
+	return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+		err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), req.LocalLastCommit)
+		if err != nil {
+			return nil, err
+		}
+...
+```
+
+Then we proceed to make the calculations only if the current height is higher than the height at which vote extensions have been enabled. Remember that vote extensions are made available to the block proposer in the block after the one in which they are produced/enabled.
+
+```go expandable
+...
+		proposalTxs := req.Txs
+
+		if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight {
+			stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, req.LocalLastCommit)
+			if err != nil {
+				return nil, errors.New("failed to compute stake-weighted oracle prices")
+			}
+
+			injectedVoteExtTx := StakeWeightedPrices{
+				StakeWeightedPrices: stakeWeightedPrices,
+				ExtendedCommitInfo:  req.LocalLastCommit,
+			}
+...
+```
+
+Finally, we inject the result as a transaction at a specific location, usually at the beginning of the block.
+
+## Implement ProcessProposal
+
+Now we can implement the method that all validators will execute to ensure the proposer is doing its work correctly.
+
+Here, if vote extensions are enabled, we’ll check if the tx at index 0 is an injected vote extension:
+
+```go
+func (h *ProposalHandler) ProcessProposal() sdk.ProcessProposalHandler {
+	return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) {
+		if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight {
+			var injectedVoteExtTx StakeWeightedPrices
+			if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil {
+				h.logger.Error("failed to decode injected vote extension tx", "err", err)
+				return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+			}
+...
+```
+
+Then we re-validate the vote extensions’ signatures using `baseapp.ValidateVoteExtensions`, re-calculate the results (just like in `PrepareProposal`) and compare them with the results we got from the injected tx.
+
+```go expandable
+			err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), injectedVoteExtTx.ExtendedCommitInfo)
+			if err != nil {
+				return nil, err
+			}
+
+			// Verify the proposer's stake-weighted oracle prices by computing the same
+			// calculation and comparing the results. We omit verification for brevity
+			// and demo purposes.
+			stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, injectedVoteExtTx.ExtendedCommitInfo)
+			if err != nil {
+				return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+			}
+
+			if err := compareOraclePrices(injectedVoteExtTx.StakeWeightedPrices, stakeWeightedPrices); err != nil {
+				return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+			}
+		}
+
+		return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+	}
+}
+```
+
+Important: in this example we avoided using the mempool and other basics; please refer to the `DefaultProposalHandler` for a complete implementation: [Link](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci_utils.go)
+
+## Implement PreBlocker
+
+Now validators are extending their vote, verifying other votes and including the result in the block. But how do we actually make use of this result? This is done in the `PreBlocker`, code that runs before anything else during `FinalizeBlock`, which ensures this information is available to the chain and its modules during the entire block execution (starting from `BeginBlock`).
+
+At this step we know that the injected tx is well-formatted and has been verified by the validators participating in consensus, so making use of it is straightforward. Just check if vote extensions are enabled, pick up the first transaction and use a method in your module’s keeper to set the result.
+
+```go expandable
+func (h *ProposalHandler) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
+	res := &sdk.ResponsePreBlock{}
+	if len(req.Txs) == 0 {
+		return res, nil
+	}
+
+	if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight {
+		var injectedVoteExtTx StakeWeightedPrices
+		if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil {
+			h.logger.Error("failed to decode injected vote extension tx", "err", err)
+			return nil, err
+		}
+
+		// set oracle prices using the passed in context, which will make these prices available in the current block
+		if err := h.keeper.SetOraclePrices(ctx, injectedVoteExtTx.StakeWeightedPrices); err != nil {
+			return nil, err
+		}
+	}
+
+	return res, nil
+}
+```
+
+## Conclusion
+
+In this tutorial, we've created a simple price oracle module that incorporates vote extensions. We've seen how to implement `ExtendVote`, `VerifyVoteExtension`, `PrepareProposal`, `ProcessProposal`, and `PreBlocker` to handle the voting and verification process of vote extensions, as well as how to make use of the results during the block execution.
diff --git a/docs/sdk/next/tutorials/vote-extensions/oracle/testing-oracle.mdx
new file mode 100644
index 00000000..c60c9ea2
--- /dev/null
+++ b/docs/sdk/next/tutorials/vote-extensions/oracle/testing-oracle.mdx
@@ -0,0 +1,64 @@
+---
+title: Testing the Oracle Module
+description: >-
+  We will guide you through the process of testing the Oracle module in your
+  application. The Oracle module uses vote extensions to provide current price
+  data. If you would like to see the complete working oracle module please see
+  here.
+---
+
+We will guide you through the process of testing the Oracle module in your application. The Oracle module uses vote extensions to provide current price data.
If you would like to see the complete working oracle module, please see [here](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle).
+
+## Step 1: Compile and Install the Application
+
+First, we need to compile and install the application. Please ensure you are in the `tutorials/oracle/base` directory. Run the following command in your terminal:
+
+```shell
+make install
+```
+
+This command compiles the application and moves the resulting binary to a location in your system's PATH.
+
+## Step 2: Initialise the Application
+
+Next, we need to initialise the application. Run the following command in your terminal:
+
+```shell
+make init
+```
+
+This command runs the script `tutorials/oracle/base/scripts/init.sh`, which sets up the necessary configuration for your application to run. This includes creating the `app.toml` configuration file and initialising the blockchain with a genesis block.
+
+## Step 3: Start the Application
+
+Now, we can start the application. Run the following command in your terminal:
+
+```shell
+exampled start
+```
+
+This command starts your application node and begins processing transactions.
+
+## Step 4: Query the Oracle Prices
+
+Finally, we can query the current prices from the Oracle module. Run the following command in your terminal:
+
+```shell
+exampled q oracle prices
+```
+
+This command queries the current prices from the Oracle module. The expected output shows that the vote extensions were successfully included in the block and the Oracle module was able to retrieve the price data.
+
+## Understanding Vote Extensions in Oracle
+
+In the Oracle module, the `ExtendVoteHandler` function is responsible for creating the vote extensions. This function fetches the current prices from the provider, creates an `OracleVoteExtension` struct with these prices, and then marshals this struct into bytes. These bytes are then set as the vote extension.
+
+In the context of testing, the Oracle module uses a mock provider to simulate the behavior of a real price provider. This mock provider is defined in the mockprovider package and is used to return predefined prices for specific currency pairs.
+
+## Conclusion
+
+In this tutorial, we've delved into the concept of oracles in blockchain technology, focusing on their role in providing external data to a blockchain network. We've explored vote extensions, a powerful feature of ABCI++, and integrated them into a Cosmos SDK application to create a price oracle module.
+
+Through hands-on exercises, you've implemented vote extensions, and tested their effectiveness in providing timely and accurate asset price information. You've gained practical insights by setting up a mock provider for testing and analysing the process of extending votes, verifying vote extensions, and preparing and processing proposals.
+
+Keep experimenting with these concepts, engage with the community, and stay updated on new advancements. The knowledge you've acquired here is crucial for developing robust and reliable blockchain applications that can interact with real-world data.
diff --git a/docs/sdk/next/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx
new file mode 100644
index 00000000..9e273990
--- /dev/null
+++ b/docs/sdk/next/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx
@@ -0,0 +1,15 @@
+---
+title: What is an Oracle?
+---
+
+An oracle in blockchain technology is a system that provides external data to a blockchain network. It acts as a source of information that is not natively accessible within the blockchain's closed environment. This can range from financial market prices to real-world events, making it crucial for decentralised applications.
+
+## Oracle in the Cosmos SDK
+
+In the Cosmos SDK, an oracle module can be implemented to provide external data to the blockchain.
This module can use features like vote extensions to submit additional data during the consensus process, which can then be used by the blockchain to update its state with information from the outside world. + +For instance, a price oracle module in the Cosmos SDK could supply timely and accurate asset price information, which is vital for various financial operations within the blockchain ecosystem. + +## Conclusion + +Oracles are essential for blockchains to interact with external data, enabling them to respond to real-world information and events. Their implementation is key to the reliability and robustness of blockchain networks. diff --git a/docs/sdk/v0.47/build.mdx b/docs/sdk/v0.47/build.mdx deleted file mode 100644 index efda570a..00000000 --- a/docs/sdk/v0.47/build.mdx +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "Build" -description: "Version: v0.47" ---- - -* [Architecture](/v0.47/build/architecture) - Overview and detailed explanation of the system architecture. -* [Building Apps](/v0.47/build/building-apps/app-go) - The documentation in this section will guide you through the process of developing your dApp using the Cosmos SDK framework. -* [Modules](/v0.47/build/modules) - Information about the various modules available in the Cosmos SDK: Auth, Authz, Bank, Crisis, Distribution, Evidence, Feegrant, Governance, Mint, Params, Slashing, Staking, Upgrade, NFT, Consensus, Circuit, Genutil. -* [Migrations](/v0.47/build/migrations/intro) - See what has been updated in each release the process of the transition between versions. -* [Packages](/v0.47/build/packages) - Explore a curated collection of pre-built modules and functionalities, streamlining the development process. -* [Tooling](/v0.47/build/tooling) - A suite of utilities designed to enhance the development workflow, optimizing the efficiency of Cosmos SDK-based projects. 
-* [ADR's](/v0.47/build/architecture) - Provides a structured repository of key decisions made during the development process, which have been documented and offers rationale behind key decisions being made. -* [RFC](/v0.47/build/rfc) - A Request for Comments (RFC) is a record of discussion on an open-ended topic related to the design and implementation of the Cosmos SDK, for which no immediate decision is required. -* [Specifications](/v0.47/build/spec/SPEC_STANDARD) - A detailed reference for the specifications of various components and features. -* [REST API](https://docs.cosmos.network/api) - A comprehensive reference for the application programming interfaces (APIs) provided by the SDK. diff --git a/docs/sdk/v0.47/changelog/UPGRADING.md b/docs/sdk/v0.47/changelog/UPGRADING.md new file mode 100644 index 00000000..dd5f86d1 --- /dev/null +++ b/docs/sdk/v0.47/changelog/UPGRADING.md @@ -0,0 +1,138 @@ +# Upgrading to Cosmos SDK v0.47 + +This document outlines the changes required when upgrading to Cosmos SDK v0.47. + +## Migration to CometBFT (Part 1) + +Cosmos SDK v0.47 begins the migration from Tendermint to CometBFT. This is a multi-part transition. + +### Breaking Changes + +- **Import paths**: Begin updating Tendermint imports to CometBFT +- **Dependencies**: CometBFT replaces Tendermint Core +- **Configuration**: Some consensus parameters have changed + +### Migration Steps + +1. **Update Go dependencies**: + ```bash + go get github.com/cometbft/cometbft@v0.37.0 + go mod edit -replace github.com/tendermint/tendermint=github.com/cometbft/cometbft@v0.37.0 + go mod tidy + ``` + +2. 
**Update import statements gradually**: + ```diff + - import "github.com/tendermint/tendermint/libs/log" + + import "github.com/cometbft/cometbft/libs/log" + ``` + +## Module Changes + +### New Modules + +- **Group**: Decentralized group management and decision making +- **NFT**: Basic NFT functionality +- **Consensus**: Consensus parameter management + +### Updated Modules + +- **Gov**: Enhanced governance with new features +- **Staking**: Improved validator and delegation handling +- **Auth**: Enhanced account types and features +- **Bank**: Multi-send improvements + +### Module API Changes + +- Several keeper interfaces have been updated +- New module manager patterns +- Enhanced inter-module communication + +## Protobuf Changes + +### gogoproto Migration + +SDK v0.47 continues the migration away from gogoproto to standard protobuf. + +```bash +# Update protobuf generation +make proto-gen +``` + +### Breaking Proto Changes + +- Some message types have been restructured +- Field ordering may have changed +- New validation rules + +## Configuration Changes + +### App Configuration + +- New configuration structure for some modules +- Updated genesis format for certain modules +- Enhanced node configuration options + +### Client Configuration + +- Updated client configuration patterns +- New gRPC configuration options +- Enhanced REST API configuration + +## API Changes + +### REST API + +- Some REST endpoints have been updated or deprecated +- New gRPC-gateway endpoints +- Enhanced error responses + +### gRPC + +- New gRPC services for modules +- Updated query and message interfaces +- Enhanced streaming capabilities + +## CLI Changes + +- Updated command structure for some modules +- New CLI flags and options +- Enhanced output formatting + +## Testing Changes + +- Updated test utilities +- New simulation framework features +- Enhanced integration testing patterns + +## Known Issues + +### IBC Compatibility + +Ensure your IBC-go version is compatible with SDK 
v0.47: +- Use IBC-go v7.x for SDK v0.47 + +### Third-party Dependencies + +- Some third-party tools may need updates +- Custom modules may require interface updates +- Review all external dependencies + +## Performance Improvements + +- Optimized store operations +- Better memory management +- Enhanced caching + +## Migration Checklist + +- [ ] Update Go dependencies +- [ ] Update import statements +- [ ] Update keeper constructors +- [ ] Update module interfaces +- [ ] Update CLI commands +- [ ] Update tests +- [ ] Verify IBC compatibility +- [ ] Test application thoroughly + +For more detailed information, see the [Cosmos SDK v0.47 release notes](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.47.0). \ No newline at end of file diff --git a/docs/sdk/v0.47/changelog/release-notes.mdx b/docs/sdk/v0.47/changelog/release-notes.mdx new file mode 100644 index 00000000..b779947f --- /dev/null +++ b/docs/sdk/v0.47/changelog/release-notes.mdx @@ -0,0 +1,738 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + + This page tracks all releases and changes from the + [cosmos/cosmos-sdk](https://github.com/cosmos/cosmos-sdk) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/cosmos-sdk/blob/main/CHANGELOG.md#unreleased) + section. + + + + + ### Bug Fixes * + [GHSA-47ww-ff84-4jrg](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-47ww-ff84-4jrg) + Fix x/group can halt when erroring in EndBlocker + + + + ### Bug Fixes * + [GHSA-x5vx-95h7-rv4p](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-x5vx-95h7-rv4p) + Fix Group module can halt chain when handling a malicious proposal + + + + ### Bug Fixes * Bump `cosmossdk.io/math` to v1.4. 
* Fix [ABS-0043/ABS-0044](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-8wcc-m6j2-qxvm) Limit recursion depth for unknown field detection and unpack any

### Improvements

* [#21295](https://github.com/cosmos/cosmos-sdk/pull/21295) Bump to gogoproto v1.7.0.
* [#21295](https://github.com/cosmos/cosmos-sdk/pull/21295) Remove usage of `slices.SortFunc` due to API break in used versions.

### Bug Fixes

* (client) [#20912](https://github.com/cosmos/cosmos-sdk/pull/20912) Fix `math.LegacyDec` type deserialization in gRPC queries.
* (x/group) [#20750](https://github.com/cosmos/cosmos-sdk/pull/20750) x/group shouldn't claim the "orm" error codespace. This prevented any Cosmos SDK `v0.47` chain from using the ORM module.

* (x/authz,x/feegrant) [#20590](https://github.com/cosmos/cosmos-sdk/pull/20590) Provide updated keeper in depinject for authz and feegrant modules.

### Bug Fixes

* (baseapp) [#20144](https://github.com/cosmos/cosmos-sdk/pull/20144) Remove txs from mempool when AnteHandler fails in recheck.
* (testutil/sims) [#20151](https://github.com/cosmos/cosmos-sdk/pull/20151) Set all signatures and don't overwrite the previous one in `GenSignedMockTx`.

### Bug Fixes

* (x/feegrant,x/authz) [#20114](https://github.com/cosmos/cosmos-sdk/pull/20114) Follow up of [GHSA-4j93-fm92-rp4m](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-4j93-fm92-rp4m). The same issue was found in the `x/feegrant` and `x/authz` modules.
* (crypto) [#20027](https://github.com/cosmos/cosmos-sdk/pull/20027) secp256r1 keys now implement gogoproto's customtype interface.
* (x/gov) [#19725](https://github.com/cosmos/cosmos-sdk/pull/19725) Fetch a failed proposal tally from `proposal.FinalTallyResult` in the gRPC query.
* (crypto) [#19691](https://github.com/cosmos/cosmos-sdk/pull/19746) Throw an error when signing with an incorrect Ledger.

### Bug Fixes

* (x/staking) Fix a possible bypass of delegator slashing: [GHSA-86h5-xcpx-cfqc](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-86h5-xcpx-cfqc)
* (server) [#19573](https://github.com/cosmos/cosmos-sdk/pull/19573) Use proper `db_backend` type when reading chain-id

### Bug Fixes

* (x/auth/vesting) [GHSA-4j93-fm92-rp4m](#bug-fixes) Add `BlockedAddr` check in `CreatePeriodicVestingAccount`.
* (baseapp) [#19177](https://github.com/cosmos/cosmos-sdk/pull/19177) Fix baseapp `DefaultProposalHandler` same-sender non-sequential sequence.

### Improvements

* (client/tx) [#18852](https://github.com/cosmos/cosmos-sdk/pull/18852) Add `WithFromName` to tx factory.
* (types) [#18875](https://github.com/cosmos/cosmos-sdk/pull/18875) Speedup coins.Sort() if len(coins) \<= 1.
* (types) [#18888](https://github.com/cosmos/cosmos-sdk/pull/18888) Speedup DecCoin.Sort() if len(coins) \<= 1
* (testutil) [#18930](https://github.com/cosmos/cosmos-sdk/pull/18930) Add NodeURI for clientCtx.

### Bug Fixes

* [#19106](https://github.com/cosmos/cosmos-sdk/pull/19106) Allow empty public keys when setting signatures. Public keys aren't needed for every transaction.
* (server) [#18920](https://github.com/cosmos/cosmos-sdk/pull/18920) Fix consensus failure when restarting a node with the wrong `chainId` in genesis.

### Improvements

* (x/gov) [#18707](https://github.com/cosmos/cosmos-sdk/pull/18707) Improve genesis validation.
* (server) [#18478](https://github.com/cosmos/cosmos-sdk/pull/18478) Add command flag to disable colored logs.

### Bug Fixes

* (baseapp) [#18609](https://github.com/cosmos/cosmos-sdk/issues/18609) Fixed accounting in the block gas meter after BeginBlock and before DeliverTx, ensuring transaction processing always starts with the expected zeroed out block gas meter.
* (server) [#18537](https://github.com/cosmos/cosmos-sdk/pull/18537) Fix panic when defining minimum gas config as `100stake;100uatom`. Use a `,` delimiter instead of `;`. Fixes the server config getter to use the correct delimiter.
* (client/tx) [#18472](https://github.com/cosmos/cosmos-sdk/pull/18472) Utilize the correct Pubkey when simulating a transaction.

### Features

* (server) [#18110](https://github.com/cosmos/cosmos-sdk/pull/18110) Start gRPC & API server in standalone mode.

### Improvements

* (baseapp) [#17954](https://github.com/cosmos/cosmos-sdk/issues/17954) Add `Mempool()` method on `BaseApp` to allow access to the mempool.
* (x/gov) [#17780](https://github.com/cosmos/cosmos-sdk/pull/17780) Recover panics and turn them into errors when executing x/gov proposals.

### Bug Fixes

* (server) [#18254](https://github.com/cosmos/cosmos-sdk/pull/18254) Don't hardcode gRPC address to localhost.
* (server) [#18251](https://github.com/cosmos/cosmos-sdk/pull/18251) Call `baseapp.Close()` when the app is started as gRPC only.
* (baseapp) [#17769](https://github.com/cosmos/cosmos-sdk/pull/17769) Ensure we respect block size constraints in the `DefaultProposalHandler`'s `PrepareProposal` handler when a nil or no-op mempool is used. We provide a `TxSelector` type to assist in making transaction selection generalized. We also fix a comparison bug in tx selection when `req.maxTxBytes` is reached.
* (config) [#17649](https://github.com/cosmos/cosmos-sdk/pull/17649) Fix invalid `mempool.max-txs` configuration in `app.config`.
* (mempool) [#17668](https://github.com/cosmos/cosmos-sdk/pull/17668) Fix `PriorityNonceIterator.Next()` nil pointer ref for min priority at the end of iteration.
* (x/auth) [#17902](https://github.com/cosmos/cosmos-sdk/pull/17902) Remove tip posthandler.
* (x/bank) [#18107](https://github.com/cosmos/cosmos-sdk/pull/18107) Add missing keypair of SendEnabled to restore legacy param set before migration.

### Client Breaking Changes

* (x/gov) [#17910](https://github.com/cosmos/cosmos-sdk/pull/17910) Remove telemetry for counting votes and proposals. It was incorrectly counting votes. Use alternatives, such as state streaming.

### Features

* (client/rpc) [#17274](https://github.com/cosmos/cosmos-sdk/pull/17274) Add `QueryEventForTxCmd` cmd to subscribe and wait for a transaction event by hash.
* (keyring) [#17424](https://github.com/cosmos/cosmos-sdk/pull/17424) Allow importing private keys encoded in hex.

### Improvements

* (x/gov) [#17387](https://github.com/cosmos/cosmos-sdk/pull/17387) Add `MsgSubmitProposal` `SetMsgs` method.
* (x/gov) [#17354](https://github.com/cosmos/cosmos-sdk/issues/17354) Emit `VoterAddr` in `proposal_vote` event.
* (x/group, x/gov) [#17220](https://github.com/cosmos/cosmos-sdk/pull/17220) Add `--skip-metadata` flag in `draft-proposal` to skip metadata prompt.
* (x/genutil) [#17296](https://github.com/cosmos/cosmos-sdk/pull/17296) Add `MigrateHandler` to allow reuse of migrate-genesis-related functions.
  * In v0.46 and v0.47 this function is additive to the `genesis migrate` command. However, in v0.50+, adding custom migrations to the `genesis migrate` command is directly possible.

### Bug Fixes

* (server) [#17181](https://github.com/cosmos/cosmos-sdk/pull/17181) Fix `db_backend` lookup fallback from `config.toml`.
* (runtime) [#17284](https://github.com/cosmos/cosmos-sdk/pull/17284) Properly allow combining depinject-enabled modules and non-depinject-enabled modules in app v2.
* (baseapp) [#17159](https://github.com/cosmos/cosmos-sdk/pull/17159) Validators can propose blocks that exceed the gas limit.
* (baseapp) [#16547](https://github.com/cosmos/cosmos-sdk/pull/16547) Ensure a transaction's gas limit cannot exceed the block gas limit.
* (x/gov,x/group) [#17220](https://github.com/cosmos/cosmos-sdk/pull/17220) Do not try to validate `msgURL` as a web URL in the `draft-proposal` command.
* (cli) [#17188](https://github.com/cosmos/cosmos-sdk/pull/17188) Fix `--output-document` flag in `tx multi-sign`.
* (x/auth) [#17209](https://github.com/cosmos/cosmos-sdk/pull/17209) Internal error on AccountInfo when account's public key is not set.

### Features

* (sims) [#16656](https://github.com/cosmos/cosmos-sdk/pull/16656) Add custom max gas for block for sim config with unlimited as default.

### Improvements

* (cli) [#16856](https://github.com/cosmos/cosmos-sdk/pull/16856) Improve `simd prune` UX by using the app default home directory and setting the pruning method as the first variable argument (defaults to default). `pruning.PruningCmd` remains unchanged for API compatibility; use `pruning.Cmd` instead.
* (testutil) [#16704](https://github.com/cosmos/cosmos-sdk/pull/16704) Make app config configurator for testing configurable with external modules.
* (deps) [#16565](https://github.com/cosmos/cosmos-sdk/pull/16565) Bump CometBFT to [v0.37.2](https://github.com/cometbft/cometbft/blob/v0.37.2/CHANGELOG.md).

### Bug Fixes

* (x/auth) [#16994](https://github.com/cosmos/cosmos-sdk/pull/16994) Fix regression where querying transaction events with `<=` or `>=` would not work.
* (server) [#16827](https://github.com/cosmos/cosmos-sdk/pull/16827) Properly use `--trace` flag (before, it was setting the trace level instead of displaying the stacktraces).
* (x/auth) [#16554](https://github.com/cosmos/cosmos-sdk/pull/16554) `ModuleAccount.Validate` now reports a nil `.BaseAccount` instead of panicking.
* [#16588](https://github.com/cosmos/cosmos-sdk/pull/16588) Propagate snapshotter failures to the caller (it would silently create an empty snapshot before).
* (x/slashing) [#16784](https://github.com/cosmos/cosmos-sdk/pull/16784) Emit event with the correct reason in `SlashWithInfractionReason`.

### Features

* (baseapp) [#16290](https://github.com/cosmos/cosmos-sdk/pull/16290) Add circuit breaker setter in baseapp.
* (x/group) [#16191](https://github.com/cosmos/cosmos-sdk/pull/16191) Add EventProposalPruned event to group module whenever a proposal is pruned.
* (tx) [#15992](https://github.com/cosmos/cosmos-sdk/pull/15992) Add `WithExtensionOptions` in tx Factory to allow `SetExtensionOptions` with given extension options.

### Improvements

* (baseapp) [#16407](https://github.com/cosmos/cosmos-sdk/pull/16407) Make `DefaultProposalHandler.ProcessProposalHandler` return a ProcessProposal NoOp when using none or a NoOp mempool.
* (deps) [#16083](https://github.com/cosmos/cosmos-sdk/pull/16083) Bump `proto-builder` image to 0.13.0.
* (client) [#16075](https://github.com/cosmos/cosmos-sdk/pull/16075) Partly revert [#15953](https://github.com/cosmos/cosmos-sdk/issues/15953); `factory.Prepare` now does nothing in offline mode.
* (server) [#15984](https://github.com/cosmos/cosmos-sdk/pull/15984) Use `cosmossdk.io/log` package for logging instead of CometBFT logger. NOTE: v0.45 and v0.46 were not using CometBFT logger either. This keeps the same underlying logger (zerolog) as in v0.45.x+ and v0.46.x+ but now properly supports filtered logging.
* (gov) [#15979](https://github.com/cosmos/cosmos-sdk/pull/15979) Improve gov error message when failing to convert v1 proposal to v1beta1.
* (store) [#16067](https://github.com/cosmos/cosmos-sdk/pull/16067) Add local snapshots management commands.
* (server) [#16061](https://github.com/cosmos/cosmos-sdk/pull/16061) Add Comet bootstrap command.
* (snapshots) [#16060](https://github.com/cosmos/cosmos-sdk/pull/16060) Support saving and restoring snapshots locally.
* (x/staking) [#16068](https://github.com/cosmos/cosmos-sdk/pull/16068) Update simulation to allow non-EOA accounts to stake.
* (server) [#16142](https://github.com/cosmos/cosmos-sdk/pull/16142) Remove JSON indentation from the gRPC-to-REST gateway's responses (saving bandwidth).
* (types) [#16145](https://github.com/cosmos/cosmos-sdk/pull/16145) Rename interface `ExtensionOptionI` back to `TxExtensionOptionI` to avoid a breaking change.
* (baseapp) [#16193](https://github.com/cosmos/cosmos-sdk/pull/16193) Add `Close` method to `BaseApp` for custom apps to clean up resources in graceful shutdown.

### Bug Fixes

* Fix [barberry](https://forum.cosmos.network/t/cosmos-sdk-security-advisory-barberry/10825) security vulnerability.
* (server) [#16395](https://github.com/cosmos/cosmos-sdk/pull/16395) Do not override Comet config that is purposely set differently in `InterceptConfigsPreRunHandler`.
* (store) [#16449](https://github.com/cosmos/cosmos-sdk/pull/16449) Fix StateSync restore by excluding memory store.
* (cli) [#16312](https://github.com/cosmos/cosmos-sdk/pull/16312) Allow any addresses in `client.ValidatePromptAddress`.
* (x/group) [#16017](https://github.com/cosmos/cosmos-sdk/pull/16017) Correctly apply account number in group v2 migration.

### API Breaking Changes

* (testutil) [#14991](https://github.com/cosmos/cosmos-sdk/pull/14991) The `testutil/testdata_pulsar` package has moved to `testutil/testdata/testpb`. Chains will not notice this breaking change as this package contains testing utilities only used by the SDK internally.

### Improvements

* (x/evidence) [#15908](https://github.com/cosmos/cosmos-sdk/pull/15908) Update the equivocation handler to work with ICS by removing a pubkey check that was performing a no-op for consumer chains.
* (x/slashing) [#15908](https://github.com/cosmos/cosmos-sdk/pull/15908) Remove the validators' pubkey check in the signature handler in order to work with ICS.
* (deps) [#15957](https://github.com/cosmos/cosmos-sdk/pull/15957) Bump CometBFT to [v0.37.1](https://github.com/cometbft/cometbft/blob/v0.37.1/CHANGELOG.md#v0371).
* (store) [#15683](https://github.com/cosmos/cosmos-sdk/pull/15683) `rootmulti.Store.CacheMultiStoreWithVersion` can now handle loading archival states that don't persist any of the module stores the current state has.
* [#15448](https://github.com/cosmos/cosmos-sdk/pull/15448) Automatically populate the block timestamp for historical queries. In contexts where the block timestamp is needed for previous states, the timestamp will now be set. Note, when querying against a node it must be re-synced in order to be able to automatically populate the block timestamp. Otherwise, the block timestamp will be populated for heights going forward once upgraded.
* [#14019](https://github.com/cosmos/cosmos-sdk/issues/14019) Remove the interface casting to allow other implementations of a `CommitMultiStore`.
* (simtestutil) [#15903](https://github.com/cosmos/cosmos-sdk/pull/15903) Add `AppStateFnWithExtendedCbs` with moduleStateCb callback function to allow access to moduleState.

### Bug Fixes

* (baseapp) [#15789](https://github.com/cosmos/cosmos-sdk/pull/15789) Ensure `PrepareProposal` and `ProcessProposal` respect `InitialHeight` set by CometBFT when set to a value greater than 1.
* (types) [#15433](https://github.com/cosmos/cosmos-sdk/pull/15433) Allow disabling of account address caches (for printing bech32 account addresses).
* (client/keys) [#15876](https://github.com/cosmos/cosmos-sdk/pull/15876) Fix the JSON output of `keys list --output json` when there are no keys.

### Features

* (x/bank) [#15265](https://github.com/cosmos/cosmos-sdk/pull/15265) Update keeper interface to include `GetAllDenomMetaData`.
* (x/groups) [#14879](https://github.com/cosmos/cosmos-sdk/pull/14879) Add `Query/Groups` query to get all the groups.
* (x/gov,cli) [#14718](https://github.com/cosmos/cosmos-sdk/pull/14718) Added `AddGovPropFlagsToCmd` and `ReadGovPropFlags` functions.
* (cli) [#14655](https://github.com/cosmos/cosmos-sdk/pull/14655) Add a new command to list supported algos.
* (x/genutil,cli) [#15147](https://github.com/cosmos/cosmos-sdk/pull/15147) Add `--initial-height` flag to the cli init cmd to provide `genesis.json` with a user-defined initial block height.

### Improvements

* (x/distribution) [#15462](https://github.com/cosmos/cosmos-sdk/pull/15462) Add delegator address to the event for withdrawing delegation rewards.
* [#14609](https://github.com/cosmos/cosmos-sdk/pull/14609) Add `RetryForBlocks` method to use in tests that require waiting for a transaction to be included in a block.

### Bug Fixes

* (baseapp) [#15487](https://github.com/cosmos/cosmos-sdk/pull/15487) Reset state before calling PrepareProposal and ProcessProposal.
* (cli) [#15123](https://github.com/cosmos/cosmos-sdk/pull/15123) Fix the CLI `offline` mode behavior to be really offline. The API of `clienttx.NewFactoryCLI` is updated to return an error.

### Deprecated

* (x/genutil) [#15316](https://github.com/cosmos/cosmos-sdk/pull/15316) Remove requirement on node & IP being included in a gentx.

### Features

* (x/gov) [#15151](https://github.com/cosmos/cosmos-sdk/pull/15151) Add `burn_vote_quorum`, `burn_proposal_deposit_prevote` and `burn_vote_veto` params to allow applications to decide if they would like to burn deposits.
* (client) [#14509](https://github.com/cosmos/cosmos-sdk/pull/14509) Added `AddKeyringFlags` function.
* (x/bank) [#14045](https://github.com/cosmos/cosmos-sdk/pull/14045) Add CLI command `spendable-balances`, which also accepts the flag `--denom`.
* (x/slashing, x/staking) [#14363](https://github.com/cosmos/cosmos-sdk/pull/14363) Add the type of infraction a validator committed as an argument to the `SlashWithInfractionReason` keeper method.
* (client) [#14051](https://github.com/cosmos/cosmos-sdk/pull/14051) Add `--grpc` client option.
* (x/genutil) [#14149](https://github.com/cosmos/cosmos-sdk/pull/14149) Add `genutilcli.GenesisCoreCommand` command, which contains all genesis-related sub-commands.
* (x/evidence) [#13740](https://github.com/cosmos/cosmos-sdk/pull/13740) Add new proto field `hash` of type `string` to `QueryEvidenceRequest` which helps to decode the hash properly while using the query API.
* (core) [#13306](https://github.com/cosmos/cosmos-sdk/pull/13306) Add a `FormatCoins` function in `core/coins` to format sdk Coins following the Value Renderers spec.
* (math) [#13306](https://github.com/cosmos/cosmos-sdk/pull/13306) Add `FormatInt` and `FormatDec` functions in `math` to format integers and decimals following the Value Renderers spec.
* (x/staking) [#13122](https://github.com/cosmos/cosmos-sdk/pull/13122) Add `UnbondingCanComplete` and `PutUnbondingOnHold` to `x/staking` module.
* [#13437](https://github.com/cosmos/cosmos-sdk/pull/13437) Add new flag `--modules-to-export` in `simd export` command to export only selected modules.
* [#13298](https://github.com/cosmos/cosmos-sdk/pull/13298) Add `AddGenesisAccount` helper func in x/auth module which helps adding accounts to genesis state.
* (x/authz) [#12648](https://github.com/cosmos/cosmos-sdk/pull/12648) Add an allow list, an optional list of addresses allowed to receive bank assets via authz MsgSend grant.
* (sdk.Coins) [#12627](https://github.com/cosmos/cosmos-sdk/pull/12627) Make a Denoms method on sdk.Coins.
* (testutil) [#12973](https://github.com/cosmos/cosmos-sdk/pull/12973) Add generic `testutil.RandSliceElem` function which selects a random element from the list.
* (client) [#12936](https://github.com/cosmos/cosmos-sdk/pull/12936) Add capability to preprocess transactions before broadcasting from a higher level chain.
* (cli) [#13064](https://github.com/cosmos/cosmos-sdk/pull/13064) Add `debug prefixes` to list supported HRP prefixes via .
* (ledger) [#12935](https://github.com/cosmos/cosmos-sdk/pull/12935) Generalize Ledger integration to allow for different apps or keytypes that use SECP256k1.
* (x/bank) [#11981](https://github.com/cosmos/cosmos-sdk/pull/11981) Create the `SetSendEnabled` endpoint for managing the bank's SendEnabled settings.
* (x/auth) [#13210](https://github.com/cosmos/cosmos-sdk/pull/13210) Add `Query/AccountInfo` endpoint for simplified access to basic account info.
* (x/consensus) [#12905](https://github.com/cosmos/cosmos-sdk/pull/12905) Create a new `x/consensus` module that is now responsible for maintaining Tendermint consensus parameters instead of `x/param`. Legacy types remain in order to facilitate parameter migration from the deprecated `x/params`. App developers should ensure that they execute `baseapp.MigrateParams` during their chain upgrade. These legacy types will be removed in a future release.
* (client/tx) [#13670](https://github.com/cosmos/cosmos-sdk/pull/13670) Add validation in `BuildUnsignedTx` to prevent simple inclusion of valid mnemonics.

### Improvements

* [#14995](https://github.com/cosmos/cosmos-sdk/pull/14995) Allow unknown fields in `ParseTypedEvent`.
* (store) [#14931](https://github.com/cosmos/cosmos-sdk/pull/14931) Exclude in-memory KVStores, i.e. `StoreTypeMemory`, from CommitInfo commitments.
* (cli) [#14919](https://github.com/cosmos/cosmos-sdk/pull/14919) Fix never-assigned error when writing validators.
* (x/group) [#14923](https://github.com/cosmos/cosmos-sdk/pull/14923) Fix error while using pagination in `x/group` from CLI.
* (types/coin) [#14715](https://github.com/cosmos/cosmos-sdk/pull/14715) `sdk.Coins.Add` now returns an empty set of coins `sdk.Coins{}` if both coin sets are empty.
  * This is a behavior change, as previously `sdk.Coins.Add` would return `nil` in this case.
* (reflection) [#14838](https://github.com/cosmos/cosmos-sdk/pull/14838) We now require that all proto files' import path (i.e. the OS path) matches their fully-qualified package name. For example, proto files with package name `cosmos.my.pkg.v1` should live in the folder `cosmos/my/pkg/v1/*.proto` relative to the protoc import root folder (usually the root `proto/` folder).
* (baseapp) [#14505](https://github.com/cosmos/cosmos-sdk/pull/14505) PrepareProposal and ProcessProposal now use deliverState for the first block in order to access changes made in InitChain.
* (x/group) [#14527](https://github.com/cosmos/cosmos-sdk/pull/14527) Fix wrong address set in `EventUpdateGroupPolicy`.
* (cli) [#14509](https://github.com/cosmos/cosmos-sdk/pull/14509) Added missing options to keyring-backend flag usage.
* (server) [#14441](https://github.com/cosmos/cosmos-sdk/pull/14441) Fix `--log_format` flag not working.
* (ante) [#14448](https://github.com/cosmos/cosmos-sdk/pull/14448) Return anteEvents when postHandler fails.
* (baseapp) [#13983](https://github.com/cosmos/cosmos-sdk/pull/13983) Don't emit duplicate ante-handler events when a post-handler is defined.
* (x/staking) [#14064](https://github.com/cosmos/cosmos-sdk/pull/14064) Set all fields in `redelegation.String()`.
* (x/upgrade) [#13936](https://github.com/cosmos/cosmos-sdk/pull/13936) Make downgrade verification work again.
* (x/group) [#13742](https://github.com/cosmos/cosmos-sdk/pull/13742) Fix `validate-genesis` when group policy accounts exist.
* (store) [#13516](https://github.com/cosmos/cosmos-sdk/pull/13516) Fix state listener that was observing writes at the wrong time.
* (simstestutil) [#15305](https://github.com/cosmos/cosmos-sdk/pull/15305) Add `AppStateFnWithExtendedCb` with callback function to extend rawState.
* (simapp) [#14977](https://github.com/cosmos/cosmos-sdk/pull/14977) Move simulation helper functions (`AppStateFn` and `AppStateRandomizedFn`) to `testutil/sims`. These take an extra genesisState argument which is the default state of the app.
* (cli) [#14953](https://github.com/cosmos/cosmos-sdk/pull/14953) Enable profiling block replay during abci handshake with `--cpu-profile`.
* (store) [#14410](https://github.com/cosmos/cosmos-sdk/pull/14410) `rootmulti.Store.loadVersion` has validation to check if all the module stores' heights are correct; it will error if any module store has an incorrect height.
* (store) [#14189](https://github.com/cosmos/cosmos-sdk/pull/14189) Add config `iavl-lazy-loading` to enable lazy loading of iavl store, to improve start up time of archive nodes; add method `SetLazyLoading` to the `CommitMultiStore` interface.
* (deps) [#14830](https://github.com/cosmos/cosmos-sdk/pull/14830) Bump to IAVL `v0.19.5-rc.1`.
* (tools) [#14793](https://github.com/cosmos/cosmos-sdk/pull/14793) Dockerfile optimization.
* (x/gov) [#13010](https://github.com/cosmos/cosmos-sdk/pull/13010) Partial cherry-pick of this issue for adding proposer migration.
* [#14691](https://github.com/cosmos/cosmos-sdk/pull/14691) Change behavior of `sdk.StringifyEvents` to not flatten events attributes by events type.
  * This change only affects ABCI message logs, and not the events field.
* [#14692](https://github.com/cosmos/cosmos-sdk/pull/14692) Improve RPC queries error message when app is at height 0.
* [#14017](https://github.com/cosmos/cosmos-sdk/pull/14017) Simplify ADR-028 and `address.Module`.
  * This updates the [ADR-028](/docs/common/pages/adr-comprehensive#adr-028-public-key-addresses) and enhances the `address.Module` API to support module addresses and sub-module addresses in a backward compatible way.
* (snapshots) [#14608](https://github.com/cosmos/cosmos-sdk/pull/14608/) Deprecate unused structs `SnapshotKVItem` and `SnapshotSchema`.
* [#15243](https://github.com/cosmos/cosmos-sdk/pull/15243) The `LatestBlockResponse` & `BlockByHeightResponse` types' field `sdk_block` was incorrectly casting `proposer_address` bytes to a validator operator address; it is now cast to a consensus address.
* (x/group, x/gov) [#14483](https://github.com/cosmos/cosmos-sdk/pull/14483) Add support for `[]string` and `[]int` in `draft-proposal` prompt.
* (protobuf) [#14476](https://github.com/cosmos/cosmos-sdk/pull/14476) Clean up protobuf annotations `{(accepts, implements)}_interface`.
* (x/gov, x/group) [#14472](https://github.com/cosmos/cosmos-sdk/pull/14472) The recommended metadata format for x/gov and x/group proposals now uses an array of strings (instead of a single string) for the `authors` field.
* (crypto) [#14460](https://github.com/cosmos/cosmos-sdk/pull/14460) Check the signature returned by a ledger device against the public key in the keyring.
* [#14356](https://github.com/cosmos/cosmos-sdk/pull/14356) Add `events.GetAttributes` and `event.GetAttribute` methods to simplify the retrieval of an attribute from event(s).
* (types) [#14332](https://github.com/cosmos/cosmos-sdk/issues/14332) Reduce state export time by 50%.
* (types) [#14163](https://github.com/cosmos/cosmos-sdk/pull/14163) Refactor `(coins Coins) Validate()` to avoid an unnecessary map.
* [#13881](https://github.com/cosmos/cosmos-sdk/pull/13881) Optimize iteration on nested cached KV stores and other operations in general.
* (x/gov) [#14347](https://github.com/cosmos/cosmos-sdk/pull/14347) Support `v1.Proposal` message in `v1beta1.Proposal.Content`.
* [#13882](https://github.com/cosmos/cosmos-sdk/pull/13882) Add tx `encode` and `decode` endpoints to amino tx service.
* (config) [#13894](https://github.com/cosmos/cosmos-sdk/pull/13894) Support state streaming configuration in `app.toml` template and default configuration.
* (x/nft) [#13836](https://github.com/cosmos/cosmos-sdk/pull/13836) Remove the validation for `classID` and `nftID` from the NFT module.
* [#13789](https://github.com/cosmos/cosmos-sdk/pull/13789) Add tx `encode` and `decode` endpoints to tx service.
* [#13619](https://github.com/cosmos/cosmos-sdk/pull/13619) Add new function called LogDeferred to report errors in defers. Use the function in x/bank files.
* (deps) [#13397](https://github.com/cosmos/cosmos-sdk/pull/13397) Bump Go version minimum requirement to `1.19`.
* [#13070](https://github.com/cosmos/cosmos-sdk/pull/13070) Migrate from `gogo/protobuf` to `cosmos/gogoproto`.
* [#12995](https://github.com/cosmos/cosmos-sdk/pull/12995) Add `FormatTime` and `ParseTimeString` methods.
* [#12952](https://github.com/cosmos/cosmos-sdk/pull/12952) Replace keyring module with the Cosmos fork.
* [#12352](https://github.com/cosmos/cosmos-sdk/pull/12352) Move the `RegisterSwaggerAPI` logic into a separate helper function in the server package.
* [#12876](https://github.com/cosmos/cosmos-sdk/pull/12876) Remove proposer-based rewards.
* [#12846](https://github.com/cosmos/cosmos-sdk/pull/12846) Remove `RandomizedParams` from the `AppModuleSimulation` interface which is no longer needed.
* (ci) [#12854](https://github.com/cosmos/cosmos-sdk/pull/12854) Use ghcr.io to host the proto builder image. Update proto builder image to go 1.19.
* (x/bank) [#12706](https://github.com/cosmos/cosmos-sdk/pull/12706) Added the `chain-id` flag to the `AddTxFlagsToCmd` API. There is no longer a need to explicitly register this flag on commands when `AddTxFlagsToCmd` is already called.
* [#12717](https://github.com/cosmos/cosmos-sdk/pull/12717) Use injected encoding params in simapp.
* [#12634](https://github.com/cosmos/cosmos-sdk/pull/12634) Move `sdk.Dec` to math package.
* [#12187](https://github.com/cosmos/cosmos-sdk/pull/12187) Add batch operation for x/nft module.
* [#12455](https://github.com/cosmos/cosmos-sdk/pull/12455) Show attempts count in error for signing.
* [#13101](https://github.com/cosmos/cosmos-sdk/pull/13101) Remove weights from `simapp/params` and `testutil/sims`. They are now in their respective modules.
* [#12398](https://github.com/cosmos/cosmos-sdk/issues/12398) Refactor all `x` modules to unit-test via mocks and decouple `simapp`.
* [#13144](https://github.com/cosmos/cosmos-sdk/pull/13144) Add validator distribution info grpc gateway get endpoint.
* [#13168](https://github.com/cosmos/cosmos-sdk/pull/13168) Migrate tendermintdev/proto-builder to ghcr.io. New image `ghcr.io/cosmos/proto-builder:0.8`
* [#13178](https://github.com/cosmos/cosmos-sdk/pull/13178) Add `cosmos.msg.v1.service` protobuf annotation to allow tooling to distinguish between Msg and Query services via reflection.
* [#13236](https://github.com/cosmos/cosmos-sdk/pull/13236) Integrate Filter Logging
* [#13528](https://github.com/cosmos/cosmos-sdk/pull/13528) Update `ValidateMemoDecorator` to only check memo against `MaxMemoCharacters` param when a memo is present.
* [#13651](https://github.com/cosmos/cosmos-sdk/pull/13651) Update `server/config/config.GetConfig` function.
* [#13781](https://github.com/cosmos/cosmos-sdk/pull/13781) Remove `client/keys.KeysCdc`.
* [#13802](https://github.com/cosmos/cosmos-sdk/pull/13802) Add `--output-document` flag to the export CLI command to allow writing genesis state to a file.
* [#13794](https://github.com/cosmos/cosmos-sdk/pull/13794) `types/module.Manager` now supports the
* [#14175](https://github.com/cosmos/cosmos-sdk/pull/14175) Add `server.DefaultBaseappOptions(appopts)` function to reduce boilerplate in root.go.
### State Machine Breaking

* (baseapp, x/auth/posthandler) [#13940](https://github.com/cosmos/cosmos-sdk/pull/13940) Update `PostHandler` to receive the `runTx` success boolean.
* (store) [#14378](https://github.com/cosmos/cosmos-sdk/pull/14378) The `CacheKV` store is thread-safe again, which includes improved iteration and deletion logic. Iteration is on a strictly isolated view now, which is breaking from previous behavior.
* (x/bank) [#14538](https://github.com/cosmos/cosmos-sdk/pull/14538) Validate denom in bank balances gRPC queries.
* (x/group) [#14465](https://github.com/cosmos/cosmos-sdk/pull/14465) Add title and summary to proposal struct.
* (x/gov) [#14390](https://github.com/cosmos/cosmos-sdk/pull/14390) Add title, proposer and summary to proposal struct.
* (x/group) [#14071](https://github.com/cosmos/cosmos-sdk/pull/14071) Don't re-tally proposals after voting period end if they have been marked as ACCEPTED or REJECTED.
* (x/group) [#13742](https://github.com/cosmos/cosmos-sdk/pull/13742) Migrate group policy account from module accounts to base account.
* (x/auth) [#13780](https://github.com/cosmos/cosmos-sdk/pull/13780) `id` (type of int64) in the `AccountAddressByID` grpc query is now deprecated; update to `account-id` (type of uint64) to use `AccountAddressByID`.
* (codec) [#13307](https://github.com/cosmos/cosmos-sdk/pull/13307) Register all modules' `Msg`s with group's ModuleCdc so that Amino sign bytes are correctly generated.
* (codec) [#13196](https://github.com/cosmos/cosmos-sdk/pull/13196) Register all modules' `Msg`s with gov's ModuleCdc so that Amino sign bytes are correctly generated.
* (group) [#13592](https://github.com/cosmos/cosmos-sdk/pull/13592) Fix group types registration with Amino.
* (x/distribution) [#12852](https://github.com/cosmos/cosmos-sdk/pull/12852) Deprecate `CommunityPoolSpendProposal`.
Please execute a `MsgCommunityPoolSpend` message + via the new v1 `x/gov` module instead. This message can be used to directly + fund the `x/gov` module account. * (x/bank) + [#12610](https://github.com/cosmos/cosmos-sdk/pull/12610) `MsgMultiSend` now + allows only a single input. * (x/bank) + [#12630](https://github.com/cosmos/cosmos-sdk/pull/12630) Migrate `x/bank` to + self-managed parameters and deprecate its usage of `x/params`. * (x/auth) + [#12475](https://github.com/cosmos/cosmos-sdk/pull/12475) Migrate `x/auth` to + self-managed parameters and deprecate its usage of `x/params`. * (x/slashing) + [#12399](https://github.com/cosmos/cosmos-sdk/pull/12399) Migrate `x/slashing` + to self-managed parameters and deprecate its usage of `x/params`. * (x/mint) + [#12363](https://github.com/cosmos/cosmos-sdk/pull/12363) Migrate `x/mint` to + self-managed parameters and deprecate it's usage of `x/params`. * + (x/distribution) [#12434](https://github.com/cosmos/cosmos-sdk/pull/12434) + Migrate `x/distribution` to self-managed parameters and deprecate it's usage + of `x/params`. * (x/crisis) + [#12445](https://github.com/cosmos/cosmos-sdk/pull/12445) Migrate `x/crisis` + to self-managed parameters and deprecate it's usage of `x/params`. * (x/gov) + [#12631](https://github.com/cosmos/cosmos-sdk/pull/12631) Migrate `x/gov` to + self-managed parameters and deprecate it's usage of `x/params`. * (x/staking) + [#12409](https://github.com/cosmos/cosmos-sdk/pull/12409) Migrate `x/staking` + to self-managed parameters and deprecate it's usage of `x/params`. * (x/bank) + [#11859](https://github.com/cosmos/cosmos-sdk/pull/11859) Move the SendEnabled + information out of the Params and into the state store directly. * (x/gov) + [#12771](https://github.com/cosmos/cosmos-sdk/pull/12771) Initial deposit + requirement for proposals at submission time. 
* (x/staking) [#12967](https://github.com/cosmos/cosmos-sdk/pull/12967) `unbond` now creates only one unbonding delegation entry when multiple unbondings exist at a single height (e.g. through multiple messages in a transaction).
* (x/auth/vesting) [#13502](https://github.com/cosmos/cosmos-sdk/pull/13502) Add Amino Msg registration for `MsgCreatePeriodicVestingAccount`.

### API Breaking Changes

* Migrate to CometBFT. Follow the migration instructions in the [upgrade guide](./UPGRADING.md#migration-to-cometbft-part-1).
* (simulation) [#14728](https://github.com/cosmos/cosmos-sdk/pull/14728) Rename the `ParamChanges` field to `LegacyParamChange` and `Contents` to `LegacyProposalContents` in `simulation.SimulationState`. Additionally, add a `ProposalMsgs` field to `simulation.SimulationState`.
* (x/gov) [#14782](https://github.com/cosmos/cosmos-sdk/pull/14782) Move the `metadata` argument in `govv1.NewProposal` alongside `title` and `summary`.
* (x/upgrade) [#14216](https://github.com/cosmos/cosmos-sdk/pull/14216) Change upgrade keeper receivers to upgrade keeper pointers.
* (x/auth) [#13780](https://github.com/cosmos/cosmos-sdk/pull/13780) Querying with `id` (type of int64) in the `AccountAddressByID` grpc query now throws an error; use `account-id` (type of uint64) instead.
* (store) [#13516](https://github.com/cosmos/cosmos-sdk/pull/13516) Update State Streaming APIs:
  * Add method `ListenCommit` to `ABCIListener`
  * Move `ListeningEnabled` and `AddListener` methods to `CommitMultiStore`
  * Remove `CacheWrapWithListeners` from `CacheWrap` and `CacheWrapper` interfaces
  * Remove listening APIs from the caching layer (it should only listen to the `rootmulti.Store`)
  * Add three new options to the file streaming service constructor.
  * Modify `ABCIListener` such that any error from any method will always halt the app via `panic`
* (x/auth) [#13877](https://github.com/cosmos/cosmos-sdk/pull/13877) Rename `AccountKeeper`'s `GetNextAccountNumber` to `NextAccountNumber`.
* (x/evidence) [#13740](https://github.com/cosmos/cosmos-sdk/pull/13740) The `NewQueryEvidenceRequest` function now takes `hash` as a HEX encoded `string`.
* (server) [#13485](https://github.com/cosmos/cosmos-sdk/pull/13485) The `Application` service now requires the `RegisterNodeService` method to be implemented.
* [#13437](https://github.com/cosmos/cosmos-sdk/pull/13437) Add a list of modules to the export argument in `ExportAppStateAndValidators`.
* (simapp) [#13402](https://github.com/cosmos/cosmos-sdk/pull/13402) Move simulation flags to `x/simulation/client/cli`.
* (simapp) [#13402](https://github.com/cosmos/cosmos-sdk/pull/13402) Move simulation helper functions (`SetupSimulation`, `SimulationOperations`, `CheckExportSimulation`, `PrintStats`, `GetSimulationLog`) to `testutil/sims`.
* (simapp) [#13402](https://github.com/cosmos/cosmos-sdk/pull/13402) Move `testutil/rest` package to `testutil`.
* (types) [#13380](https://github.com/cosmos/cosmos-sdk/pull/13380) Remove deprecated `sdk.NewLevelDB`.
* (simapp) [#13378](https://github.com/cosmos/cosmos-sdk/pull/13378) Move `simapp.App` to `runtime.AppI`.
* (tx) [#12659](https://github.com/cosmos/cosmos-sdk/pull/12659) Remove broadcast mode `block`.
* (simapp) [#12747](https://github.com/cosmos/cosmos-sdk/pull/12747) Remove `simapp.MakeTestEncodingConfig`. Please use `moduletestutil.MakeTestEncodingConfig` (`types/module/testutil`) in tests instead.
* (x/bank) [#12648](https://github.com/cosmos/cosmos-sdk/pull/12648) `NewSendAuthorization` takes a new argument of an optional list of addresses allowed to receive bank assets via an authz MsgSend grant. You can pass `nil` for the same behavior as before, i.e. any recipient is allowed.
* (x/bank) [#12593](https://github.com/cosmos/cosmos-sdk/pull/12593) Add `SpendableCoin` method to `BaseViewKeeper`
* (x/slashing) [#12581](https://github.com/cosmos/cosmos-sdk/pull/12581) Remove `x/slashing` legacy querier.
* (types) [#12355](https://github.com/cosmos/cosmos-sdk/pull/12355) Remove the compile-time `types.DBbackend` variable. Removes usage of the same in server/util.go
* (x/gov) [#12368](https://github.com/cosmos/cosmos-sdk/pull/12369) Gov keeper is now passed by reference instead of copy to make post-construction mutation of Hooks and Proposal Handlers possible at a framework level.
* (simapp) [#12270](https://github.com/cosmos/cosmos-sdk/pull/12270) Remove the `invCheckPeriod uint` attribute from the `SimApp` struct as per the migration of `x/crisis` to app wiring
* (simapp) [#12334](https://github.com/cosmos/cosmos-sdk/pull/12334) Move `simapp.ConvertAddrsToValAddrs` and `simapp.CreateTestPubKeys` to respectively `simtestutil.ConvertAddrsToValAddrs` and `simtestutil.CreateTestPubKeys` (`testutil/sims`)
* (simapp) [#12312](https://github.com/cosmos/cosmos-sdk/pull/12312) Move `simapp.EmptyAppOptions` to `simtestutil.EmptyAppOptions` (`testutil/sims`)
* (simapp) [#12312](https://github.com/cosmos/cosmos-sdk/pull/12312) Remove `skipUpgradeHeights map[int64]bool` and `homePath string` from the `NewSimApp` constructor as per the migration of `x/upgrade` to app-wiring.
* (testutil) [#12278](https://github.com/cosmos/cosmos-sdk/pull/12278) Move all functions from `simapp/helpers` to `testutil/sims`
* (testutil) [#12233](https://github.com/cosmos/cosmos-sdk/pull/12233) Move `simapp.TestAddr` to `simtestutil.TestAddr` (`testutil/sims`)
* (x/staking) [#12102](https://github.com/cosmos/cosmos-sdk/pull/12102) The staking keeper is now passed by reference instead of copy. The keeper's `SetHooks` no longer returns the keeper; it updates the keeper in place instead.
* (linting) [#12141](https://github.com/cosmos/cosmos-sdk/pull/12141) Fix usability-related linting for database. This means removing the infix Prefix from `prefix.NewPrefixWriter` and such so that it is `prefix.NewWriter`, and making `db.DBConnection` and such into `db.Connection`
* (x/distribution) [#12434](https://github.com/cosmos/cosmos-sdk/pull/12434) `x/distribution` module `SetParams` keeper method definition is now updated to return `error`.
* (x/staking) [#12409](https://github.com/cosmos/cosmos-sdk/pull/12409) `x/staking` module `SetParams` keeper method definition is now updated to return `error`.
* (x/crisis) [#12445](https://github.com/cosmos/cosmos-sdk/pull/12445) `x/crisis` module `SetConstantFee` keeper method definition is now updated to return `error`.
* (x/gov) [#12631](https://github.com/cosmos/cosmos-sdk/pull/12631) `x/gov` module refactored to use `Params` as a single struct instead of `DepositParams`, `TallyParams` & `VotingParams`.
* (x/gov) [#12631](https://github.com/cosmos/cosmos-sdk/pull/12631) Migrate `x/gov` to self-managed parameters and deprecate its usage of `x/params`.
* (x/bank) [#12630](https://github.com/cosmos/cosmos-sdk/pull/12630) `x/bank` module `SetParams` keeper method definition is now updated to return `error`.
* (x/bank) [#11859](https://github.com/cosmos/cosmos-sdk/pull/11859) Move the SendEnabled information out of the Params and into the state store directly.
* (appModule) Remove `Route`, `QuerierRoute` and `LegacyQuerierHandler` from the AppModule interface.
* (x/modules) Remove all LegacyQueries and related code from modules
* (store) [#11825](https://github.com/cosmos/cosmos-sdk/pull/11825) Make the extension snapshotter interface safer to use; renamed the util function `WriteExtensionItem` to `WriteExtensionPayload`.
* (x/genutil) [#12956](https://github.com/cosmos/cosmos-sdk/pull/12956) `genutil.AppModuleBasic` has a new attribute: a genesis transaction validation function. The existing validation logic is implemented in `genutiltypes.DefaultMessageValidator`. Use `genutil.NewAppModuleBasic` to create a new genutil Module Basic.
* (codec) [#12964](https://github.com/cosmos/cosmos-sdk/pull/12964) `ProtoCodec.MarshalInterface` now returns an error when serializing unregistered types, for which a subsequent `ProtoCodec.UnmarshalInterface` would fail.
* (x/staking) [#12973](https://github.com/cosmos/cosmos-sdk/pull/12973) Removed `stakingkeeper.RandomValidator`. Use `testutil.RandSliceElem(r, sk.GetAllValidators(ctx))` instead.
* (x/gov) [#13160](https://github.com/cosmos/cosmos-sdk/pull/13160) Remove custom marshaling of proposal and vote option.
* (types) [#13430](https://github.com/cosmos/cosmos-sdk/pull/13430) Remove unused code `ResponseCheckTx` and `ResponseDeliverTx`
* (store) [#13529](https://github.com/cosmos/cosmos-sdk/pull/13529) Add method `LatestVersion` to the `MultiStore` interface, and add method `SetQueryMultiStore` to baseapp to support alternative `MultiStore` implementations for the query service.
* (pruning) [#13609](https://github.com/cosmos/cosmos-sdk/pull/13609) Move the pruning package to be under the store package
* [#13794](https://github.com/cosmos/cosmos-sdk/pull/13794) Most methods on `types/module.AppModule` have been moved to

### CLI Breaking Changes

* (genesis) [#14149](https://github.com/cosmos/cosmos-sdk/pull/14149) Add `simd genesis` command, which contains all genesis-related sub-commands.
* (x/genutil) [#13535](https://github.com/cosmos/cosmos-sdk/pull/13535) Replace, in `simd init`, the `--staking-bond-denom` flag with `--default-denom`, which is used for all default denominations in the genesis instead of only staking.
### Bug Fixes

* (x/auth/vesting) [#15373](https://github.com/cosmos/cosmos-sdk/pull/15373) Add extra checks when creating a periodic vesting account.
* (x/auth) [#13838](https://github.com/cosmos/cosmos-sdk/pull/13838) Fix calling `String()` and `MarshalYAML` panics when pubkey is set on a `BaseAccount`.
* (x/evidence) [#13740](https://github.com/cosmos/cosmos-sdk/pull/13740) Fix evidence query API to decode the hash properly.
* (bank) [#13691](https://github.com/cosmos/cosmos-sdk/issues/13691) Fix unhandled error for vesting account transfers, when total vesting amount exceeds total balance.
* [#13553](https://github.com/cosmos/cosmos-sdk/pull/13553) Ensure all parameter validation for decimal types handles nil decimal values.
* [#13145](https://github.com/cosmos/cosmos-sdk/pull/13145) Fix panic when calling `String()` on a Record struct type.
* [#13116](https://github.com/cosmos/cosmos-sdk/pull/13116) Fix a dead-lock in the `Group-TotalWeight` `x/group` invariant.
* (types) [#12154](https://github.com/cosmos/cosmos-sdk/pull/12154) Add `baseAccountGetter` to avoid invalid account error when creating a vesting account.
* (x/staking) [#12303](https://github.com/cosmos/cosmos-sdk/pull/12303) Use bytes instead of string comparison in delete validator queue
* (store/rootmulti) [#12487](https://github.com/cosmos/cosmos-sdk/pull/12487) Fix non-deterministic map iteration.
* (sdk/dec_coins) [#12903](https://github.com/cosmos/cosmos-sdk/pull/12903) Fix nil `DecCoin` creation when converting `Coins` to `DecCoins`
* (store) [#12945](https://github.com/cosmos/cosmos-sdk/pull/12945) Fix nil end semantics in store/cachekv/iterator when iterating a dirty cache.
* (x/gov) [#13051](https://github.com/cosmos/cosmos-sdk/pull/13051) In SubmitProposal, when a legacy msg fails its handler call, wrap the error as ErrInvalidProposalContent (instead of ErrNoProposalHandlerExists).
* (snapshot) [#13400](https://github.com/cosmos/cosmos-sdk/pull/13400) Fix snapshot checksum issue in golang 1.19.
* (server) [#13778](https://github.com/cosmos/cosmos-sdk/pull/13778) Set Cosmos SDK default endpoints to localhost to avoid unknown exposure of endpoints.
* (x/auth) [#13877](https://github.com/cosmos/cosmos-sdk/pull/13877) Handle missing account numbers during `InitGenesis`.
* (x/gov) [#13918](https://github.com/cosmos/cosmos-sdk/pull/13918) Propagate message errors when executing a proposal.

### Deprecated

* (x/evidence) [#13740](https://github.com/cosmos/cosmos-sdk/pull/13740) The `evidence_hash` field of `QueryEvidenceRequest` has been deprecated and now contains a new field `hash` with type `string`.
* (x/bank) [#11859](https://github.com/cosmos/cosmos-sdk/pull/11859) The Params.SendEnabled field is deprecated and unusable.

diff --git a/docs/sdk/v0.47/documentation/application-framework/app-go-v2.mdx b/docs/sdk/v0.47/documentation/application-framework/app-go-v2.mdx
new file mode 100644
index 00000000..d37a3a15
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/application-framework/app-go-v2.mdx
@@ -0,0 +1,2998 @@
---
title: Overview of `app_v2.go`
---

**Synopsis**

The Cosmos SDK allows much easier wiring of an `app.go` thanks to App Wiring and [`depinject`](/docs/sdk/v0.47/documentation/module-system/depinject).
Learn more about the rationale of App Wiring in [ADR-057](docs/sdk/next/documentation/legacy/adr-comprehensive).

### Pre-requisite Readings

* [ADR 057: App Wiring](docs/sdk/next/documentation/legacy/adr-comprehensive)
* [Depinject Documentation](/docs/sdk/v0.47/documentation/module-system/depinject)
* [Modules depinject-ready](/docs/sdk/v0.47/documentation/module-system/depinject)

This section is intended to provide an overview of the `SimApp` `app_v2.go` file with App Wiring.
+ +## `app_config.go` + +The `app_config.go` file is the single place to configure all modules parameters. + +1. Create the `AppConfig` variable: + + ```go expandable + package simapp + + import ( + + "time" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + capabilitymodulev1 "cosmossdk.io/api/cosmos/capability/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "google.golang.org/protobuf/types/known/durationpb" + + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes 
"github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" + ) + + var ( + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder = []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensustypes.ModuleName, + } + + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: "runtime", + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + + BeginBlockers: []string{ + upgradetypes.ModuleName, + capabilitytypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + govtypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + capabilitytypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + consensustypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + InitGenesis: genesisModuleOrder, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: genesisModuleOrder, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: nil, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: capabilitytypes.ModuleName, + Config: appconfig.WrapAny(&capabilitymodulev1.Module{ + SealKeeper: true, + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: 
appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + }, + }) + ) + ``` + +2. Configure the `runtime` module: + + ```go expandable + package simapp + + import ( + + "time" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + capabilitymodulev1 "cosmossdk.io/api/cosmos/capability/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "google.golang.org/protobuf/types/known/durationpb" + + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes 
"github.com/cosmos/cosmos-sdk/x/bank/types" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" + ) + + var ( + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder = []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensustypes.ModuleName, + } + + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: "runtime", + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + + BeginBlockers: []string{ + upgradetypes.ModuleName, + capabilitytypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + govtypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + capabilitytypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + consensustypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + InitGenesis: genesisModuleOrder, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: genesisModuleOrder, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: nil, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: capabilitytypes.ModuleName, + Config: appconfig.WrapAny(&capabilitymodulev1.Module{ + SealKeeper: true, + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: 
appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + }, + }) + ) + ``` + +3. Configure the modules defined in the `BeginBlocker` and `EndBlocker` and the `tx` module: + + ```go expandable + package simapp + + import ( + + "time" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + capabilitymodulev1 "cosmossdk.io/api/cosmos/capability/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "google.golang.org/protobuf/types/known/durationpb" + + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + 
"github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" + ) + + var ( + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder = []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensustypes.ModuleName, + } + + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: "runtime", + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + + BeginBlockers: []string{ + upgradetypes.ModuleName, + capabilitytypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + govtypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + capabilitytypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + consensustypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + InitGenesis: genesisModuleOrder, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: genesisModuleOrder, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: nil, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: capabilitytypes.ModuleName, + Config: appconfig.WrapAny(&capabilitymodulev1.Module{ + SealKeeper: true, + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: 
appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + }, + }) + ) + ```
+### Complete `app_config.go` + +```go expandable +package simapp + +import ( + + "time" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + capabilitymodulev1 "cosmossdk.io/api/cosmos/capability/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "google.golang.org/protobuf/types/known/durationpb" + + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + 
capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +var ( + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder = []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensustypes.ModuleName, +} + + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName +}, + { + Account: distrtypes.ModuleName +}, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter +}}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner +}}, + { + Account: nft.ModuleName +}, +} + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName +} + + / application configuration (used by depinject) + +AppConfig = appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: "runtime", + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +BeginBlockers: []string{ + upgradetypes.ModuleName, + capabilitytypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + govtypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, +}, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + capabilitytypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + consensustypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, +}, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", +}, +}, + InitGenesis: genesisModuleOrder, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: genesisModuleOrder, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: nil, +}), +}, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address +}), +}, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ +}), +}, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, +}), +}, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ +}), +}, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ +}), +}, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ +}), +}, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ +}), +}, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ +}), +}, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ +}), +}, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ +}), +}, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ +}), +}, + { + Name: capabilitytypes.ModuleName, + Config: appconfig.WrapAny(&capabilitymodulev1.Module{ + SealKeeper: true, +}), +}, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ +}), +}, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ +}), +}, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, +}), +}, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ +}), +}, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ +}), +}, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ +}), +}, + { + Name: 
crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ +}), +}, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ +}), +}, +}, +}) +) +``` + +### Alternative formats + + +The example above shows how to create an `AppConfig` using Go. However, it is also possible to create an `AppConfig` using YAML or JSON.\ +The configuration can then be embedded with `go:embed` and read with [`appconfig.LoadYAML`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadYAML) or [`appconfig.LoadJSON`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadJSON) in `app_v2.go`. + +```go +/go:embed app_config.yaml +var ( + appConfigYaml []byte + appConfig = appconfig.LoadYAML(appConfigYaml) +) +``` + + + +```yaml expandable +modules: + - name: runtime + config: + "@type": cosmos.app.runtime.v1alpha1.Module + app_name: SimApp + begin_blockers: [staking, auth, bank] + end_blockers: [bank, auth, staking] + init_genesis: [bank, auth, staking] + - name: auth + config: + "@type": cosmos.auth.module.v1.Module + bech32_prefix: cosmos + - name: bank + config: + "@type": cosmos.bank.module.v1.Module + - name: staking + config: + "@type": cosmos.staking.module.v1.Module + - name: tx + config: + "@type": cosmos.tx.config.v1.Config +``` + +A more complete example of `app.yaml` can be found [here](https://github.com/cosmos/cosmos-sdk/blob/91b1d83f1339e235a1dfa929ecc00084101a19e3/simapp/app.yaml). + +## `app_v2.go` + +`app_v2.go` is the place where `SimApp` is constructed. `depinject.Inject` facilitates that by automatically wiring the app modules and keepers, given an application configuration `AppConfig`. `SimApp` is then constructed by calling `appBuilder.Build(...)` on the injected `*runtime.AppBuilder`.\ +In short, `depinject` and the [`runtime` package](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/runtime) abstract the wiring of the app, and the `AppBuilder` is the place where the app is constructed.
[`runtime`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/runtime) takes care of registering the codecs, KV store, subspaces and instantiating `baseapp`. + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + _ "embed" + "io" + "os" + "path/filepath" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper 
"github.com/cosmos/cosmos-sdk/x/distribution/keeper" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" +) + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / nonceMempool = mempool.NewSenderNonceMempool() + / mempoolOpt = baseapp.SetMempool(nonceMempool) + / prepareOpt = func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt = func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. 
+ / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +authtypes.AccountI { + return authtypes.ProtoBaseAccount() +}, + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.CapabilityKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.CrisisKeeper, + &app.UpgradeKeeper, + &app.ParamsKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + ); err != nil { + panic(err) +} + +app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...) + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(app.App.BaseApp, appOpts, app.appCodec, logger, app.kvStoreKeys()); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + + /**** Module Options ****/ + + app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+ app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. 
+/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. 
+func (app *SimApp)
+
+RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+    app.App.RegisterAPIRoutes(apiSvr, apiConfig)
+    / register swagger API in app.go so that other applications can override easily
+    if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+    panic(err)
+}
+}
+
+/ GetMaccPerms returns a copy of the module account permissions
+/
+/ NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms()
+
+map[string][]string {
+    dup := make(map[string][]string)
+    for _, perms := range moduleAccPerms {
+    dup[perms.Account] = perms.Permissions
+}
+
+return dup
+}
+
+/ BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses()
+
+map[string]bool {
+    result := make(map[string]bool)
+    if len(blockAccAddrs) > 0 {
+    for _, addr := range blockAccAddrs {
+    result[addr] = true
+}
+
+}
+
+else {
+    for addr := range GetMaccPerms() {
+    result[addr] = true
+}
+
+}
+
+return result
+}
+```
+
+
+When using `depinject.Inject`, the injected types must be pointers.
+
+
+### Advanced Configuration
+
+In advanced cases, it is possible to inject extra (module) configuration in a way that is not (yet) supported by `AppConfig`.\
+In this case, use `depinject.Configs` to combine the extra configuration with `AppConfig`, and `depinject.Supply` to provide that extra configuration.
+More information on how `depinject.Configs` and `depinject.Supply` work can be found in the [`depinject` documentation](https://pkg.go.dev/cosmossdk.io/depinject).
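+The essential wiring can be condensed to a few lines. The fragment below is not a complete program; it reuses the `AppConfig`, `appOpts` and `appBuilder` names from the full listing and only sketches the pattern:
+
+```go
+// Merge the static AppConfig with extra runtime values via depinject.Configs,
+// supplying anything AppConfig cannot (yet) express through depinject.Supply.
+appConfig := depinject.Configs(
+	AppConfig,
+	depinject.Supply(
+		appOpts, // application options, custom account constructors, mint functions, etc.
+	),
+)
+
+// Resolve the requested outputs; every injected target must be a pointer.
+var appBuilder *runtime.AppBuilder
+if err := depinject.Inject(appConfig, &appBuilder); err != nil {
+	panic(err)
+}
+```
+
+The complete listing below shows this pattern in context.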
+ +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + _ "embed" + "io" + "os" + "path/filepath" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + feegrantkeeper 
"github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" +) + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / nonceMempool = mempool.NewSenderNonceMempool() + / mempoolOpt = baseapp.SetMempool(nonceMempool) + / prepareOpt = func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt = func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. 
+ / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +authtypes.AccountI { + return authtypes.ProtoBaseAccount() +}, + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.CapabilityKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.CrisisKeeper, + &app.UpgradeKeeper, + &app.ParamsKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + ); err != nil { + panic(err) +} + +app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...) + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(app.App.BaseApp, appOpts, app.appCodec, logger, app.kvStoreKeys()); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + + /**** Module Options ****/ + + app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+ app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. 
+/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. 
+func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +### Complete `app_v2.go` + + +Note that in the complete `SimApp` `app_v2.go` file, testing utilities are also defined, but they could as well be defined in a separate file. 
+ + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + _ "embed" + "io" + "os" + "path/filepath" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + feegrantkeeper 
"github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" +) + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / nonceMempool = mempool.NewSenderNonceMempool() + / mempoolOpt = baseapp.SetMempool(nonceMempool) + / prepareOpt = func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt = func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. 
+ / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +authtypes.AccountI { + return authtypes.ProtoBaseAccount() +}, + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.CapabilityKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.CrisisKeeper, + &app.UpgradeKeeper, + &app.ParamsKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + ); err != nil { + panic(err) +} + +app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...) + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(app.App.BaseApp, appOpts, app.appCodec, logger, app.kvStoreKeys()); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + + /**** Module Options ****/ + + app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+	app.RegisterUpgradeHandlers()
+
+	/ add test gRPC service for testing gRPC queries in isolation
+	testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{})
+
+	/ create the simulation manager and define the order of the modules for deterministic simulations
+	/
+	/ NOTE: this is not required for apps that don't use the simulator for fuzz testing
+	/ transactions
+	overrideModules := map[string]module.AppModuleSimulation{
+		authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)),
+	}
+	app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)
+	app.sm.RegisterStoreDecoders()
+
+	/ A custom InitChainer can be set if extra pre-init-genesis logic is required.
+	/ By default, when using an app-wiring-enabled module, this is not required.
+	/ For instance, the upgrade module will automatically set the module version map in its init genesis thanks to app wiring.
+	/ However, when registering a module manually (i.e. one that does not support app wiring), the module version map
+	/ must be set manually as follows. The upgrade module will de-duplicate the module version map.
+	/
+	/ app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain {
+	/ 	app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap())
+	/ 	return app.App.InitChainer(ctx, req)
+	/ })
+
+	if err := app.Load(loadLatest); err != nil {
+		panic(err)
+	}
+
+	return app
+}
+
+/ Name returns the name of the App
+func (app *SimApp) Name() string {
+	return app.BaseApp.Name()
+}
+
+/ LegacyAmino returns SimApp's amino codec.
+/
+/ NOTE: This is solely to be used for testing purposes as it may be desirable
+/ for modules to register their own custom testing types.
+func (app *SimApp) LegacyAmino() *codec.LegacyAmino {
+	return app.legacyAmino
+}
+
+/ AppCodec returns SimApp's app codec.
+/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. 
+func (app *SimApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+	app.App.RegisterAPIRoutes(apiSvr, apiConfig)
+	/ register swagger API in app.go so that other applications can override easily
+	if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+		panic(err)
+	}
+}
+
+/ GetMaccPerms returns a copy of the module account permissions
+/
+/ NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms() map[string][]string {
+	dup := make(map[string][]string)
+	for _, perms := range moduleAccPerms {
+		dup[perms.Account] = perms.Permissions
+	}
+	return dup
+}
+
+/ BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses() map[string]bool {
+	result := make(map[string]bool)
+	if len(blockAccAddrs) > 0 {
+		for _, addr := range blockAccAddrs {
+			result[addr] = true
+		}
+	} else {
+		for addr := range GetMaccPerms() {
+			result[addr] = true
+		}
+	}
+	return result
+}
+```
diff --git a/docs/sdk/v0.47/documentation/application-framework/app-go.mdx b/docs/sdk/v0.47/documentation/application-framework/app-go.mdx
new file mode 100644
index 00000000..9d8fb41f
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/application-framework/app-go.mdx
@@ -0,0 +1,891 @@
+---
+title: Overview of `app.go`
+description: >-
+  This section is intended to provide an overview of the SimApp app.go file and
+  is still a work in progress. For now, please read the tutorials for a deep
+  dive on how to build a chain.
+---
+
+This section is intended to provide an overview of the `SimApp` `app.go` file and is still a work in progress.
+For now, please read the [tutorials](https://tutorials.cosmos.network) for a deep dive on how to build a chain.
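The `NewSimApp` constructor in the listing below takes a variadic `baseAppOptions ...func(*baseapp.BaseApp)` parameter and forwards it to the underlying BaseApp. This is Go's functional-options pattern: each option is a function that mutates the app while it is being built. Here is a minimal, self-contained sketch of the pattern — every type and function name in it is a hypothetical stand-in for illustration, not a Cosmos SDK API:

```go
package main

import "fmt"

// App is a hypothetical stand-in for baseapp.BaseApp.
type App struct {
	name     string
	haltTime uint64
}

// AppOption mirrors the shape of NewSimApp's variadic
// `baseAppOptions ...func(*baseapp.BaseApp)` parameter.
type AppOption func(*App)

// SetHaltTime is a hypothetical option constructor, analogous in shape
// to the setter options baseapp exposes.
func SetHaltTime(t uint64) AppOption {
	return func(a *App) { a.haltTime = t }
}

// NewApp builds the app and then applies each option in order, just as
// the real constructor applies the options forwarded by NewSimApp.
func NewApp(name string, opts ...AppOption) *App {
	app := &App{name: name}
	for _, opt := range opts {
		opt(app)
	}
	return app
}

func main() {
	app := NewApp("SimApp", SetHaltTime(42))
	fmt.Println(app.name, app.haltTime) // prints: SimApp 42
}
```

This is why the commented-out `mempoolOpt`, `prepareOpt`, and `processOpt` values in the listing can simply be appended to `baseAppOptions` before the app is built.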
+ +## Complete `app.go` + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "io" + "os" + "path/filepath" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "github.com/spf13/cast" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + simappparams "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/client/grpc/tmservice" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + 
authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes 
"github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+	app.RegisterUpgradeHandlers()
+
+	autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules))
+
+	reflectionSvc, err := runtimeservices.NewReflectionService()
+	if err != nil {
+		panic(err)
+	}
+	reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc)
+
+	/ add test gRPC service for testing gRPC queries in isolation
+	testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{})
+
+	/ create the simulation manager and define the order of the modules for deterministic simulations
+	/
+	/ NOTE: this is not required for apps that don't use the simulator for fuzz testing
+	/ transactions
+	overrideModules := map[string]module.AppModuleSimulation{
+		authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)),
+	}
+	app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)
+	app.sm.RegisterStoreDecoders()
+
+	/ initialize stores
+	app.MountKVStores(keys)
+	app.MountTransientStores(tkeys)
+	app.MountMemoryStores(memKeys)
+
+	/ initialize BaseApp
+	app.SetInitChainer(app.InitChainer)
+	app.SetBeginBlocker(app.BeginBlocker)
+	app.SetEndBlocker(app.EndBlocker)
+	app.setAnteHandler(encodingConfig.TxConfig)
+
+	/ In v0.46, the SDK introduces _postHandlers_. PostHandlers are like
+	/ antehandlers, but are run _after_ the `runMsgs` execution. They are also
+	/ defined as a chain, and have the same signature as antehandlers.
+	/
+	/ In baseapp, postHandlers are run in the same store branch as `runMsgs`,
+	/ meaning that both `runMsgs` and `postHandler` state will be committed if
+	/ both are successful, and both will be reverted if any of the two fails.
+	/
+	/ The SDK exposes a default postHandlers chain, which comprises only
+	/ one decorator: the Transaction Tips decorator.
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + var genesisState 
GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetTKey(storeKey string) *storetypes.TransientStoreKey { + return app.tkeys[storeKey] +} + +/ GetMemKey returns the MemStoreKey for the provided mem key. 
+/ +/ NOTE: This is solely used for testing purposes. +func (app *SimApp) + +GetMemKey(storeKey string) *storetypes.MemoryStoreKey { + return app.memKeys[storeKey] +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new tendermint queries routes from grpc-gateway. + tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. 
+func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + tmservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + app.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter()) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable()) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} + +func makeEncodingConfig() + +simappparams.EncodingConfig { + encodingConfig := simappparams.MakeTestEncodingConfig() + +std.RegisterLegacyAminoCodec(encodingConfig.Amino) + 
+std.RegisterInterfaces(encodingConfig.InterfaceRegistry)
+
+ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino)
+
+ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry)
+
+return encodingConfig
+}
+```
diff --git a/docs/sdk/v0.47/documentation/application-framework/app-mempool.mdx b/docs/sdk/v0.47/documentation/application-framework/app-mempool.mdx
new file mode 100644
index 00000000..e3a55c7a
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/application-framework/app-mempool.mdx
@@ -0,0 +1,2514 @@
+---
+title: Application mempool
+---
+
+**Synopsis**
+This section describes how the app-side mempool can be used and replaced.
+
+Since `v0.47` the application has its own mempool, allowing much more granular
+block building than in previous versions. This change was enabled by
+[ABCI 1.0](https://github.com/cometbft/cometbft/blob/v0.37.0/spec/abci).
+Notably, it introduces the `PrepareProposal` and `ProcessProposal` steps of ABCI++.
+
+### Pre-requisite Readings
+
+* [BaseApp](/docs/sdk/v0.47/learn/advanced/baseapp)
+
+## Prepare Proposal
+
+`PrepareProposal` handles construction of the block, meaning that when a proposer
+is preparing to propose a block, it requests the application to evaluate a
+`RequestPrepareProposal`, which contains a series of transactions from CometBFT's
+mempool. At this point, the application has complete control over the proposal.
+It can modify or delete transactions, inject transactions from its own app-side
+mempool into the proposal, or even ignore all the transactions altogether. What
+the application does with the transactions provided to it by
+`RequestPrepareProposal` has no effect on CometBFT's mempool.
+
+Note that the application defines the semantics of `PrepareProposal`; it MAY be
+non-deterministic and is only executed by the current block proposer.
+
+Two different mempools were mentioned above, which can be confusing, so let's break it down.
+CometBFT has a mempool that handles gossiping transactions to other nodes +in the network. How these transactions are ordered is determined by CometBFT's +mempool, typically FIFO. However, since the application is able to fully inspect +all transactions, it can provide greater control over transaction ordering. +Allowing the application to handle ordering enables the application to define how +it would like the block constructed. + +Currently, there is a default `PrepareProposal` implementation provided by the application. + +```go expandable +package baseapp + +import ( + + "errors" + "fmt" + "sort" + "strings" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/crypto/tmhash" + "github.com/tendermint/tendermint/libs/log" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + dbm "github.com/tendermint/tm-db" + "golang.org/x/exp/maps" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/snapshots" + "github.com/cosmos/cosmos-sdk/store" + "github.com/cosmos/cosmos-sdk/store/rootmulti" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +const ( + runTxModeCheck runTxMode = iota / Check a transaction + runTxModeReCheck / Recheck a (pending) + +transaction after a commit + runTxModeSimulate / Simulate a transaction + runTxModeDeliver / Deliver a transaction + runTxPrepareProposal + runTxProcessProposal +) + +var _ abci.Application = (*BaseApp)(nil) + +type ( + / Enum mode for app.runTx + runTxMode uint8 + + / StoreLoader defines a customizable function to control how we load the CommitMultiStore + / from disk. This is useful for state migration, when loading a datastore written with + / an older version of the software. 
In particular, if a module changed the substore key name + / (or removed a substore) + +between two versions of the software. + StoreLoader func(ms sdk.CommitMultiStore) + +error +) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { /nolint: maligned + / initialized on creation + logger log.Logger + name string / application name from abci.Info + db dbm.DB / common DB backend + cms sdk.CommitMultiStore / Main (uncached) + +state + qms sdk.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.AnteHandler / post handler, optional, e.g. for tips + initChainer sdk.InitChainer / initialize state with validators and state blob + beginBlocker sdk.BeginBlocker / logic to run before any txs + processProposal sdk.ProcessProposalHandler / the handler which runs on ABCI ProcessProposal + prepareProposal sdk.PrepareProposalHandler / the handler which runs on ABCI PrepareProposal + endBlocker sdk.EndBlocker / logic to run after all txs, and to determine valset changes + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. 
dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / checkState is set on InitChain and reset on Commit + / deliverState is set on InitChain and BeginBlock and set to nil on Commit + checkState *state / for CheckTx + deliverState *state / for DeliverTx + processProposalState *state / for ProcessProposal + prepareProposalState *state / for PrepareProposal + + / an inter-block write-through cache provided to the context during deliverState + interBlockCache sdk.MultiStorePersistentCache + + / absent validators from begin block + voteInfos []abci.VoteInfo + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the baseapp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from Tendermint. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: Tendermint block pruning is dependant on this parameter in conunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. 
+ minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs Tendermint what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / abciListeners for hooking into the ABCI message processing of the BaseApp + / and exposing the requests and responses to external consumers + abciListeners []ABCIListener +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. +/ +/ NOTE: The db is used to store the version number for now. +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db), + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + if app.processProposal == nil { + app.SetProcessProposal(app.DefaultProcessProposal()) +} + if app.prepareProposal == nil { + app.SetPrepareProposal(app.DefaultPrepareProposal()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + +return app +} + +/ Name returns the name of the BaseApp. 
+func (app *BaseApp) Name() string {
+	return app.name
+}
+
+// AppVersion returns the application's protocol version.
+func (app *BaseApp) AppVersion() uint64 {
+	return app.appVersion
+}
+
+// Version returns the application's version string.
+func (app *BaseApp) Version() string {
+	return app.version
+}
+
+// Logger returns the logger of the BaseApp.
+func (app *BaseApp) Logger() log.Logger {
+	return app.logger
+}
+
+// Trace returns the boolean value for logging error stack traces.
+func (app *BaseApp) Trace() bool {
+	return app.trace
+}
+
+// MsgServiceRouter returns the MsgServiceRouter of a BaseApp.
+func (app *BaseApp) MsgServiceRouter() *MsgServiceRouter {
+	return app.msgServiceRouter
+}
+
+// SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp.
+func (app *BaseApp) SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) {
+	app.msgServiceRouter = msgServiceRouter
+}
+
+// MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp
+// multistore.
+func (app *BaseApp) MountStores(keys ...storetypes.StoreKey) {
+	for _, key := range keys {
+		switch key.(type) {
+		case *storetypes.KVStoreKey:
+			if !app.fauxMerkleMode {
+				app.MountStore(key, storetypes.StoreTypeIAVL)
+			} else {
+				// StoreTypeDB doesn't do anything upon commit, and it doesn't
+				// retain history, but it's useful for faster simulation.
+				app.MountStore(key, storetypes.StoreTypeDB)
+			}
+		case *storetypes.TransientStoreKey:
+			app.MountStore(key, storetypes.StoreTypeTransient)
+		case *storetypes.MemoryStoreKey:
+			app.MountStore(key, storetypes.StoreTypeMemory)
+		default:
+			panic(fmt.Sprintf("Unrecognized store key type :%T", key))
+		}
+	}
+}
+
+// MountKVStores mounts all IAVL or DB stores to the provided keys in the
+// BaseApp multistore.
+func (app *BaseApp) MountKVStores(keys map[string]*storetypes.KVStoreKey) {
+	for _, key := range keys {
+		if !app.fauxMerkleMode {
+			app.MountStore(key, storetypes.StoreTypeIAVL)
+		} else {
+			// StoreTypeDB doesn't do anything upon commit, and it doesn't
+			// retain history, but it's useful for faster simulation.
+			app.MountStore(key, storetypes.StoreTypeDB)
+		}
+	}
+}
+
+// MountTransientStores mounts all transient stores to the provided keys in
+// the BaseApp multistore.
+func (app *BaseApp) MountTransientStores(keys map[string]*storetypes.TransientStoreKey) {
+	for _, key := range keys {
+		app.MountStore(key, storetypes.StoreTypeTransient)
+	}
+}
+
+// MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal
+// commit multi-store.
+func (app *BaseApp) MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) {
+	skeys := maps.Keys(keys)
+	sort.Strings(skeys)
+	for _, key := range skeys {
+		memKey := keys[key]
+		app.MountStore(memKey, storetypes.StoreTypeMemory)
+	}
+}
+
+// MountStore mounts a store to the provided key in the BaseApp multistore,
+// using the default DB.
+func (app *BaseApp) MountStore(key storetypes.StoreKey, typ storetypes.StoreType) {
+	app.cms.MountStoreWithDB(key, typ, nil)
+}
+
+// LoadLatestVersion loads the latest application version. It will panic if
+// called more than once on a running BaseApp.
+func (app *BaseApp) LoadLatestVersion() error {
+	err := app.storeLoader(app.cms)
+	if err != nil {
+		return fmt.Errorf("failed to load latest version: %w", err)
+	}
+
+	return app.Init()
+}
+
+// DefaultStoreLoader will be used by default and loads the latest version
+func DefaultStoreLoader(ms sdk.CommitMultiStore) error {
+	return ms.LoadLatestVersion()
+}
+
+// CommitMultiStore returns the root multi-store.
+// App constructor can use this to access the `cms`.
+// UNSAFE: must not be used during the abci life cycle.
+func (app *BaseApp) CommitMultiStore() sdk.CommitMultiStore {
+	return app.cms
+}
+
+// SnapshotManager returns the snapshot manager.
+// Applications use this to register extra extension snapshotters.
+func (app *BaseApp) SnapshotManager() *snapshots.Manager {
+	return app.snapshotManager
+}
+
+// LoadVersion loads the BaseApp application version. It will panic if called
+// more than once on a running baseapp.
+func (app *BaseApp) LoadVersion(version int64) error {
+	app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n")
+	err := app.cms.LoadVersion(version)
+	if err != nil {
+		return fmt.Errorf("failed to load version %d: %w", version, err)
+	}
+
+	return app.Init()
+}
+
+// LastCommitID returns the last CommitID of the multistore.
+func (app *BaseApp) LastCommitID() storetypes.CommitID {
+	return app.cms.LastCommitID()
+}
+
+// LastBlockHeight returns the last committed block height.
+func (app *BaseApp) LastBlockHeight() int64 {
+	return app.cms.LastCommitID().Version
+}
+
+// Init initializes the app. It seals the app, preventing any
+// further modifications. In addition, it validates the app against
+// the earlier provided settings. Returns an error if validation fails,
+// nil otherwise. Panics if the app is already sealed.
+func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + emptyHeader := tmproto.Header{ +} + + / needed for the export command which inits from store but never calls initchain + app.setState(runTxModeCheck, emptyHeader) + + / needed for ABCI Replay Blocks mode which calls Prepare/Process proposal (InitChain is not called) + +app.setState(runTxPrepareProposal, emptyHeader) + +app.setState(runTxProcessProposal, emptyHeader) + +app.Seal() + +rms, ok := app.cms.(*rootmulti.Store) + if !ok { + return fmt.Errorf("invalid commit multi-store; expected %T, got: %T", &rootmulti.Store{ +}, app.cms) +} + +return rms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache sdk.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. 
+func (app *BaseApp) + +setState(mode runTxMode, header tmproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger), +} + switch mode { + case runTxModeCheck: + / Minimum gas prices are also set. It is set on InitChain and reset on Commit. + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + +app.checkState = baseState + case runTxModeDeliver: + / It is set on InitChain and BeginBlock and set to nil on Commit. + app.deliverState = baseState + case runTxPrepareProposal: + / It is set on InitChain and Commit. + app.prepareProposalState = baseState + case runTxProcessProposal: + / It is set on InitChain and Commit. + app.processProposalState = baseState + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. +func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) *tmproto.ConsensusParams { + if app.paramStore == nil { + return nil +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(err) +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the baseapp's param store. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp *tmproto.ConsensusParams) { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") +} + if cp == nil { + return +} + +app.paramStore.Set(ctx, cp) + / We're explicitly not storing the Tendermint app_version in the param store. It's + / stored instead in the x/upgrade store, with its own bump logic. +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. 
+func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp == nil || cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateHeight(req abci.RequestBeginBlock) + +error { + if req.Header.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Header.Height) +} + + / expectedHeight holds the expected height to validate. + var expectedHeight int64 + if app.LastBlockHeight() == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain (no + / previous commit). The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / - either there was already a previous commit in the store, in which + / case we increment the version from there, + / - or there was no previous commit, and initial version was not set, + / in which case we start at version 1. + expectedHeight = app.LastBlockHeight() + 1 +} + if req.Header.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Header.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. 
+func validateBasicTxMsgs(msgs []sdk.Msg) error {
+	if len(msgs) == 0 {
+		return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message")
+	}
+	for _, msg := range msgs {
+		err := msg.ValidateBasic()
+		if err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// Returns the application's deliverState if app is in runTxModeDeliver,
+// prepareProposalState if app is in runTxPrepareProposal, processProposalState
+// if app is in runTxProcessProposal, and checkState otherwise.
+func (app *BaseApp) getState(mode runTxMode) *state {
+	switch mode {
+	case runTxModeDeliver:
+		return app.deliverState
+	case runTxPrepareProposal:
+		return app.prepareProposalState
+	case runTxProcessProposal:
+		return app.processProposalState
+	default:
+		return app.checkState
+	}
+}
+
+// retrieve the context for the tx w/ txBytes and other memoized values.
+func (app *BaseApp) getContextForTx(mode runTxMode, txBytes []byte) sdk.Context {
+	modeState := app.getState(mode)
+	if modeState == nil {
+		panic(fmt.Sprintf("state is nil for mode %v", mode))
+	}
+	ctx := modeState.ctx.
+		WithTxBytes(txBytes).
+		WithVoteInfos(app.voteInfos)
+
+	ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))
+	if mode == runTxModeReCheck {
+		ctx = ctx.WithIsReCheckTx(true)
+	}
+	if mode == runTxModeSimulate {
+		ctx, _ = ctx.CacheContext()
+	}
+
+	return ctx
+}
+
+// cacheTxContext returns a new context based off of the provided context with
+// a branched multi-store.
+func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, sdk.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + sdk.TraceContext( + map[string]interface{ +}{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(sdk.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +func (app *BaseApp) + +runTx(mode runTxMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, priority int64, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == runTxModeDeliver && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, 0, sdkerrors.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + / consumeBlockGas makes sure block gas is consumed at most once. It must happen after + / tx processing, and must be executed even if tx processing fails. Hence, we use trick with `defer` + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: This must exist in a separate defer function for the above recovery + / to recover from this one. + if mode == runTxModeDeliver { + defer consumeBlockGas() +} + +tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ +}, nil, nil, 0, err +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, 0, err +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache sdk.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. 
This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == runTxModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, 0, err +} + +priority = ctx.Priority() + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + if mode == runTxModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, priority, err +} + +} + +else if mode == runTxModeDeliver { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, priority, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + result, err = app.runMsgs(runMsgCtx, msgs, mode) + if err == nil { + + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. 
+ if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.postHandler(postCtx, tx, mode == runTxModeSimulate) + if err != nil { + return gInfo, nil, anteEvents, priority, err +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if mode == runTxModeDeliver { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == runTxModeDeliver || mode == runTxModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, priority, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, mode runTxMode) (*sdk.Result, error) { + msgLogs := make(sdk.ABCIMessageLogs, 0, len(msgs)) + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != runTxModeDeliver && mode != runTxModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, sdkerrors.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents := createEvents(msgResult.GetEvents(), msg) + + / append message events, data and logs + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + +msgLogs = append(msgLogs, sdk.NewABCIMessageLog(uint32(i), msgResult.Log, msgEvents)) +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, sdkerrors.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Log: strings.TrimSpace(msgLogs.String()), + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses +}) +} + +func createEvents(events sdk.Events, msg sdk.Msg) + +sdk.Events { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + if len(msg.GetSigners()) > 0 && !msg.GetSigners()[0].Empty() { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, msg.GetSigners()[0].String())) +} + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + / here we assume that routes module name is the second element of the route + / e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" + moduleName := strings.Split(eventMsgName, ".") + if len(moduleName) > 1 { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName[1])) +} + +} + +return sdk.Events{ + msgEvent +}.AppendEvents(events) +} + +/ DefaultPrepareProposal returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from Tendermint will simply be returned, which, by default, are in +/ FIFO order. 
+func (app *BaseApp) DefaultPrepareProposal() sdk.PrepareProposalHandler {
+    return func(ctx sdk.Context, req abci.RequestPrepareProposal) abci.ResponsePrepareProposal {
+        / If the mempool is nil or a no-op mempool, we simply return the transactions
+        / requested from Tendermint, which, by default, should be in FIFO order.
+        _, isNoOp := app.mempool.(mempool.NoOpMempool)
+        if app.mempool == nil || isNoOp {
+            return abci.ResponsePrepareProposal{
+                Txs: req.Txs,
+            }
+        }
+
+        var (
+            txsBytes  [][]byte
+            byteCount int64
+        )
+
+        iterator := app.mempool.Select(ctx, req.Txs)
+        for iterator != nil {
+            memTx := iterator.Tx()
+
+            bz, err := app.txEncoder(memTx)
+            if err != nil {
+                panic(err)
+            }
+            txSize := int64(len(bz))
+
+            / NOTE: Since runTx was already executed in CheckTx, which calls
+            / mempool.Insert, ideally everything in the pool should be valid. But
+            / some mempool implementations may insert invalid txs, so we check again.
+            _, _, _, _, err = app.runTx(runTxPrepareProposal, bz)
+            if err != nil {
+                err := app.mempool.Remove(memTx)
+                if err != nil && !errors.Is(err, mempool.ErrTxNotFound) {
+                    panic(err)
+                }
+
+                iterator = iterator.Next()
+                continue
+            } else if byteCount += txSize; byteCount <= req.MaxTxBytes {
+                txsBytes = append(txsBytes, bz)
+            } else {
+                break
+            }
+
+            iterator = iterator.Next()
+        }
+
+        return abci.ResponsePrepareProposal{
+            Txs: txsBytes,
+        }
+    }
+}
+
+/ DefaultProcessProposal returns the default implementation for processing an ABCI proposal.
+/ Every transaction in the proposal must pass 2 conditions:
+/
+/ 1. The transaction bytes must decode to a valid transaction.
+/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only)
+/
+/ If any transaction fails to pass either condition, the proposal is rejected.
+/ Note that step (2) is identical to the validation step performed in
+/ DefaultPrepareProposal. It is very important that the same validation logic is used
+/ in both steps, and applications must ensure that this is the case in non-default handlers.
+func (app *BaseApp) DefaultProcessProposal() sdk.ProcessProposalHandler {
+    return func(ctx sdk.Context, req abci.RequestProcessProposal) abci.ResponseProcessProposal {
+        for _, txBytes := range req.Txs {
+            _, err := app.txDecoder(txBytes)
+            if err != nil {
+                return abci.ResponseProcessProposal{
+                    Status: abci.ResponseProcessProposal_REJECT,
+                }
+            }
+
+            _, _, _, _, err = app.runTx(runTxProcessProposal, txBytes)
+            if err != nil {
+                return abci.ResponseProcessProposal{
+                    Status: abci.ResponseProcessProposal_REJECT,
+                }
+            }
+        }
+
+        return abci.ResponseProcessProposal{
+            Status: abci.ResponseProcessProposal_ACCEPT,
+        }
+    }
+}
+
+/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always
+/ return the transactions sent by the client's request.
+func NoOpPrepareProposal() sdk.PrepareProposalHandler {
+    return func(_ sdk.Context, req abci.RequestPrepareProposal) abci.ResponsePrepareProposal {
+        return abci.ResponsePrepareProposal{
+            Txs: req.Txs,
+        }
+    }
+}
+
+/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always
+/ return ACCEPT.
+func NoOpProcessProposal() sdk.ProcessProposalHandler {
+    return func(_ sdk.Context, _ abci.RequestProcessProposal) abci.ResponseProcessProposal {
+        return abci.ResponseProcessProposal{
+            Status: abci.ResponseProcessProposal_ACCEPT,
+        }
+    }
+}
+```
+
+This default implementation can be overridden by the application developer in
+favor of a custom implementation in [`app.go`](/docs/sdk/v0.47/documentation/application-framework/app-go-v2):
+
+```go
+prepareOpt := func(app *baseapp.BaseApp) {
+    abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+    app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, prepareOpt)
+```
+
+## Process Proposal
+
+`ProcessProposal` handles the validation of a proposal from `PrepareProposal`,
+which also includes a block header. After a block has been proposed, the other
+validators have the right to vote on it. In the default implementation of
+`ProcessProposal`, a validator runs the same basic validity checks on each
+transaction of the proposal.
+
+Note that `ProcessProposal` MUST be deterministic. If `ProcessProposal` panics
+or fails and the proposal is rejected, all honest validator processes will
+prevote nil and the CometBFT round will proceed again until a valid proposal
+is proposed.
+ +Here is the implementation of the default implementation: + +```go expandable +package baseapp + +import ( + + "errors" + "fmt" + "sort" + "strings" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/crypto/tmhash" + "github.com/tendermint/tendermint/libs/log" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + dbm "github.com/tendermint/tm-db" + "golang.org/x/exp/maps" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/snapshots" + "github.com/cosmos/cosmos-sdk/store" + "github.com/cosmos/cosmos-sdk/store/rootmulti" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +const ( + runTxModeCheck runTxMode = iota / Check a transaction + runTxModeReCheck / Recheck a (pending) + +transaction after a commit + runTxModeSimulate / Simulate a transaction + runTxModeDeliver / Deliver a transaction + runTxPrepareProposal + runTxProcessProposal +) + +var _ abci.Application = (*BaseApp)(nil) + +type ( + / Enum mode for app.runTx + runTxMode uint8 + + / StoreLoader defines a customizable function to control how we load the CommitMultiStore + / from disk. This is useful for state migration, when loading a datastore written with + / an older version of the software. In particular, if a module changed the substore key name + / (or removed a substore) + +between two versions of the software. + StoreLoader func(ms sdk.CommitMultiStore) + +error +) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { /nolint: maligned + / initialized on creation + logger log.Logger + name string / application name from abci.Info + db dbm.DB / common DB backend + cms sdk.CommitMultiStore / Main (uncached) + +state + qms sdk.MultiStore / Optional alternative multistore for querying only. 
+ storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.AnteHandler / post handler, optional, e.g. for tips + initChainer sdk.InitChainer / initialize state with validators and state blob + beginBlocker sdk.BeginBlocker / logic to run before any txs + processProposal sdk.ProcessProposalHandler / the handler which runs on ABCI ProcessProposal + prepareProposal sdk.PrepareProposalHandler / the handler which runs on ABCI PrepareProposal + endBlocker sdk.EndBlocker / logic to run after all txs, and to determine valset changes + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / checkState is set on InitChain and reset on Commit + / deliverState is set on InitChain and BeginBlock and set to nil on Commit + checkState *state / for CheckTx + deliverState *state / for DeliverTx + processProposalState *state / for ProcessProposal + prepareProposalState *state / for PrepareProposal + + / an inter-block write-through cache provided to the context during deliverState + interBlockCache sdk.MultiStorePersistentCache + + / absent validators from begin block + voteInfos []abci.VoteInfo + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. 
+    paramStore ParamStore
+
+    / The minimum gas prices a validator is willing to accept for processing a
+    / transaction. This is mainly used for DoS and spam prevention.
+    minGasPrices sdk.DecCoins
+
+    / initialHeight is the initial height at which we start the baseapp
+    initialHeight int64
+
+    / flag for sealing options and parameters to a BaseApp
+    sealed bool
+
+    / block height at which to halt the chain and gracefully shutdown
+    haltHeight uint64
+
+    / minimum block time (in Unix seconds) at which to halt the chain and gracefully shutdown
+    haltTime uint64
+
+    / minRetainBlocks defines the minimum block height offset from the current
+    / block being committed, such that all blocks past this offset are pruned
+    / from Tendermint. It is used as part of the process of determining the
+    / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates
+    / that no blocks should be pruned.
+    /
+    / Note: Tendermint block pruning is dependent on this parameter in conjunction
+    / with the unbonding (safety threshold) period, state pruning and state sync
+    / snapshot parameters to determine the correct minimum value of
+    / ResponseCommit.RetainHeight.
+    minRetainBlocks uint64
+
+    / application's version string
+    version string
+
+    / application's protocol version that increments on every upgrade
+    / if BaseApp is passed to the upgrade keeper's NewKeeper method.
+    appVersion uint64
+
+    / recovery handler for app.runTx method
+    runTxRecoveryMiddleware recoveryMiddleware
+
+    / trace set will return full stack traces for errors in ABCI Log field
+    trace bool
+
+    / indexEvents defines the set of events in the form {eventType}.{attributeKey},
+    / which informs Tendermint what to index. If empty, all events will be indexed.
+ indexEvents map[string]struct{ +} + + / abciListeners for hooking into the ABCI message processing of the BaseApp + / and exposing the requests and responses to external consumers + abciListeners []ABCIListener +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. +/ +/ NOTE: The db is used to store the version number for now. +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db), + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + if app.processProposal == nil { + app.SetProcessProposal(app.DefaultProcessProposal()) +} + if app.prepareProposal == nil { + app.SetPrepareProposal(app.DefaultPrepareProposal()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. 
+func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. +func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. 
+func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := maps.Keys(keys) + +sort.Strings(skeys) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. +func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms sdk.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. +func (app *BaseApp) + +CommitMultiStore() + +sdk.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. 
+func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. +func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + emptyHeader := tmproto.Header{ +} + + / needed for the export command which inits from store but never calls initchain + app.setState(runTxModeCheck, emptyHeader) + + / needed for ABCI Replay Blocks mode which calls Prepare/Process proposal (InitChain is not called) + +app.setState(runTxPrepareProposal, emptyHeader) + +app.setState(runTxProcessProposal, emptyHeader) + +app.Seal() + +rms, ok := app.cms.(*rootmulti.Store) + if !ok { + return fmt.Errorf("invalid commit multi-store; expected %T, got: %T", &rootmulti.Store{ +}, app.cms) +} + +return rms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = 
minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache sdk.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. +func (app *BaseApp) + +setState(mode runTxMode, header tmproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger), +} + switch mode { + case runTxModeCheck: + / Minimum gas prices are also set. It is set on InitChain and reset on Commit. + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + +app.checkState = baseState + case runTxModeDeliver: + / It is set on InitChain and BeginBlock and set to nil on Commit. + app.deliverState = baseState + case runTxPrepareProposal: + / It is set on InitChain and Commit. + app.prepareProposalState = baseState + case runTxProcessProposal: + / It is set on InitChain and Commit. + app.processProposalState = baseState + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. 
+func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) *tmproto.ConsensusParams { + if app.paramStore == nil { + return nil +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(err) +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the baseapp's param store. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp *tmproto.ConsensusParams) { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") +} + if cp == nil { + return +} + +app.paramStore.Set(ctx, cp) + / We're explicitly not storing the Tendermint app_version in the param store. It's + / stored instead in the x/upgrade store, with its own bump logic. +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp == nil || cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateHeight(req abci.RequestBeginBlock) + +error { + if req.Header.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Header.Height) +} + + / expectedHeight holds the expected height to validate. + var expectedHeight int64 + if app.LastBlockHeight() == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain (no + / previous commit). The height we're expecting is the initial height. 
+ expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / - either there was already a previous commit in the store, in which + / case we increment the version from there, + / - or there was no previous commit, and initial version was not set, + / in which case we start at version 1. + expectedHeight = app.LastBlockHeight() + 1 +} + if req.Header.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Header.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. +func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + err := msg.ValidateBasic() + if err != nil { + return err +} + +} + +return nil +} + +/ Returns the application's deliverState if app is in runTxModeDeliver, +/ prepareProposalState if app is in runTxPrepareProposal, processProposalState +/ if app is in runTxProcessProposal, and checkState otherwise. +func (app *BaseApp) + +getState(mode runTxMode) *state { + switch mode { + case runTxModeDeliver: + return app.deliverState + case runTxPrepareProposal: + return app.prepareProposalState + case runTxProcessProposal: + return app.processProposalState + default: + return app.checkState +} +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode runTxMode, txBytes []byte) + +sdk.Context { + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.ctx. + WithTxBytes(txBytes). 
+ WithVoteInfos(app.voteInfos) + +ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == runTxModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == runTxModeSimulate { + ctx, _ = ctx.CacheContext() +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. +func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, sdk.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + sdk.TraceContext( + map[string]interface{ +}{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(sdk.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +func (app *BaseApp) + +runTx(mode runTxMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, priority int64, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == runTxModeDeliver && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, 0, sdkerrors.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + / consumeBlockGas makes sure block gas is consumed at most once. It must happen after + / tx processing, and must be executed even if tx processing fails. Hence, we use trick with `defer` + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: This must exist in a separate defer function for the above recovery + / to recover from this one. + if mode == runTxModeDeliver { + defer consumeBlockGas() +} + +tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ +}, nil, nil, 0, err +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, 0, err +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache sdk.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. 
This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == runTxModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, 0, err +} + +priority = ctx.Priority() + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + if mode == runTxModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, priority, err +} + +} + +else if mode == runTxModeDeliver { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, priority, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + result, err = app.runMsgs(runMsgCtx, msgs, mode) + if err == nil { + + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. 
+ if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.postHandler(postCtx, tx, mode == runTxModeSimulate) + if err != nil { + return gInfo, nil, anteEvents, priority, err +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if mode == runTxModeDeliver { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == runTxModeDeliver || mode == runTxModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, priority, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, mode runTxMode) (*sdk.Result, error) { + msgLogs := make(sdk.ABCIMessageLogs, 0, len(msgs)) + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != runTxModeDeliver && mode != runTxModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, sdkerrors.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents := createEvents(msgResult.GetEvents(), msg) + + / append message events, data and logs + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + +msgLogs = append(msgLogs, sdk.NewABCIMessageLog(uint32(i), msgResult.Log, msgEvents)) +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, sdkerrors.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Log: strings.TrimSpace(msgLogs.String()), + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses +}) +} + +func createEvents(events sdk.Events, msg sdk.Msg) + +sdk.Events { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + if len(msg.GetSigners()) > 0 && !msg.GetSigners()[0].Empty() { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, msg.GetSigners()[0].String())) +} + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + / here we assume that routes module name is the second element of the route + / e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" + moduleName := strings.Split(eventMsgName, ".") + if len(moduleName) > 1 { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName[1])) +} + +} + +return sdk.Events{ + msgEvent +}.AppendEvents(events) +} + +/ DefaultPrepareProposal returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from Tendermint will simply be returned, which, by default, are in +/ FIFO order. 
+func (app *BaseApp) + +DefaultPrepareProposal() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req abci.RequestPrepareProposal) + +abci.ResponsePrepareProposal { + / If the mempool is nil or a no-op mempool, we simply return the transactions + / requested from Tendermint, which, by default, should be in FIFO order. + _, isNoOp := app.mempool.(mempool.NoOpMempool) + if app.mempool == nil || isNoOp { + return abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} + +var ( + txsBytes [][]byte + byteCount int64 + ) + iterator := app.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + +bz, err := app.txEncoder(memTx) + if err != nil { + panic(err) +} + txSize := int64(len(bz)) + + / NOTE: Since runTx was already executed in CheckTx, which calls + / mempool.Insert, ideally everything in the pool should be valid. But + / some mempool implementations may insert invalid txs, so we check again. + _, _, _, _, err = app.runTx(runTxPrepareProposal, bz) + if err != nil { + err := app.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +iterator = iterator.Next() + +continue +} + +else if byteCount += txSize; byteCount <= req.MaxTxBytes { + txsBytes = append(txsBytes, bz) +} + +else { + break +} + +iterator = iterator.Next() +} + +return abci.ResponsePrepareProposal{ + Txs: txsBytes +} + +} +} + +/ DefaultProcessProposal returns the default implementation for processing an ABCI proposal. +/ Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. Note that step (2) + +is identical to the +/ validation step performed in DefaultPrepareProposal. 
It is very important that the same validation logic is used +/ in both steps, and applications must ensure that this is the case in non-default handlers. +func (app *BaseApp) + +DefaultProcessProposal() + +sdk.ProcessProposalHandler { + return func(ctx sdk.Context, req abci.RequestProcessProposal) + +abci.ResponseProcessProposal { + for _, txBytes := range req.Txs { + _, err := app.txDecoder(txBytes) + if err != nil { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + + _, _, _, _, err = app.runTx(runTxProcessProposal, txBytes) + if err != nil { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +} + +return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +} + +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req abci.RequestPrepareProposal) + +abci.ResponsePrepareProposal { + return abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. 
+func NoOpProcessProposal()
+
+sdk.ProcessProposalHandler {
+    return func(_ sdk.Context, _ abci.RequestProcessProposal)
+
+abci.ResponseProcessProposal {
+    return abci.ResponseProcessProposal{
+    Status: abci.ResponseProcessProposal_ACCEPT
+}
+
+}
+}
+```
+
+Like `PrepareProposal`, this implementation is the default and can be modified by the application developer in [`app.go`](/docs/sdk/v0.47/documentation/application-framework/app-go-v2):
+
+```go
+processOpt := func(app *baseapp.BaseApp) {
+    abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+
+app.SetProcessProposal(abciPropHandler.ProcessProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, processOpt)
+```
+
+## Mempool
+
+Now that we have walked through `PrepareProposal` and `ProcessProposal`, we can move on to the mempool.
+
+There are countless mempool designs that an application developer could write; the SDK opted to provide only simple mempool implementations.
+Namely, the SDK provides the following mempools:
+
+* [No-op Mempool](#no-op-mempool)
+* [Sender Nonce Mempool](#sender-nonce-mempool)
+* [Priority Nonce Mempool](#priority-nonce-mempool)
+
+The SDK default is a [No-op Mempool](#no-op-mempool), but it can be replaced by the application developer in [`app.go`](/docs/sdk/v0.47/documentation/application-framework/app-go-v2):
+
+```go
+nonceMempool := mempool.NewSenderNonceMempool()
+ mempoolOpt := baseapp.SetMempool(nonceMempool)
+
+baseAppOptions = append(baseAppOptions, mempoolOpt)
+```
+
+### No-op Mempool
+
+A no-op mempool is a mempool where transactions are completely discarded and ignored when BaseApp interacts with the mempool.
+When this mempool is used, it is assumed that an application will rely on CometBFT's transaction ordering defined in `RequestPrepareProposal`,
+which is FIFO-ordered by default.
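The pass-through behaviour of the no-op mempool can be sketched in a few lines. The `prepareRequest`/`prepareResponse` types below are hypothetical local stand-ins for the ABCI request/response types, used only to keep the snippet self-contained; they are not the real `abci` package types.

```go
package main

import "fmt"

// prepareRequest / prepareResponse are illustrative stand-ins for
// abci.RequestPrepareProposal and abci.ResponsePrepareProposal.
type prepareRequest struct{ Txs [][]byte }
type prepareResponse struct{ Txs [][]byte }

// noOpPrepareProposal echoes the consensus-provided transactions unchanged,
// which is what a no-op mempool implies: FIFO order, no reordering, no filtering.
func noOpPrepareProposal(req prepareRequest) prepareResponse {
	return prepareResponse{Txs: req.Txs}
}

func main() {
	req := prepareRequest{Txs: [][]byte{[]byte("tx1"), []byte("tx2"), []byte("tx3")}}
	for _, tx := range noOpPrepareProposal(req).Txs {
		fmt.Println(string(tx)) // printed in arrival (FIFO) order
	}
}
```

The real handler is the `NoOpPrepareProposal` implementation shown earlier; this sketch only isolates the ordering guarantee.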
+
+### Sender Nonce Mempool
+
+The sender nonce mempool keeps the transactions from each sender sorted by nonce, in order to avoid nonce-ordering issues.
+It works by storing each sender's transactions in a list sorted by transaction nonce. When the proposer asks for transactions to be included in a block, it randomly selects a sender and takes the first transaction in that sender's list. It repeats this until the mempool is empty or the block is full.
+
+It is configurable with the following parameters:
+
+#### MaxTxs
+
+An integer value that sets the mempool to one of three modes: *bounded*, *unbounded*, or *disabled*.
+
+* **negative**: Disabled, the mempool does not insert new transactions and returns early.
+* **zero**: Unbounded, the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
+* **positive**: Bounded, the mempool fails with `ErrMempoolTxMaxCapacity` when the transaction count (`CountTx()`) reaches `maxTx`.
+
+#### Seed
+
+Sets the seed for the random number generator used to select transactions from the mempool.
+
+### Priority Nonce Mempool
+
+The [priority nonce mempool](https://github.com/cosmos/cosmos-sdk/blob/main/types/mempool/priority_nonce_spec) is a mempool implementation that stores transactions in a partially ordered set, ordered along two dimensions:
+
+* priority
+* sender-nonce (sequence number)
+
+Internally, it uses one priority-ordered [skip list](https://pkg.go.dev/github.com/huandu/skiplist) and one skip list per sender, ordered by sender-nonce (sequence number). When there are multiple transactions from the same sender, they are not always comparable by priority to other senders' transactions, and must be partially ordered by both sender-nonce and priority.
+
+It is configurable with the following parameters:
+
+#### MaxTxs
+
+An integer value that sets the mempool to one of three modes: *bounded*, *unbounded*, or *disabled*.
+
+* **negative**: Disabled, the mempool does not insert new transactions and returns early.
+* **zero**: Unbounded, the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
+* **positive**: Bounded, the mempool fails with `ErrMempoolTxMaxCapacity` when the transaction count (`CountTx()`) reaches `maxTx`.
+
+#### Callback
+
+The priority nonce mempool provides options that allow the application to set callbacks:
+
+* **OnRead**: Sets a callback to be called when a transaction is read from the mempool.
+* **TxReplacement**: Sets a callback to be called when a duplicate transaction nonce is detected during mempool insertion. The application can define a transaction replacement rule based on transaction priority or certain transaction fields.
+
+More information on the SDK mempool implementation can be found in the [godocs](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/mempool). diff --git a/docs/sdk/v0.47/documentation/application-framework/app-upgrade.mdx b/docs/sdk/v0.47/documentation/application-framework/app-upgrade.mdx new file mode 100644 index 00000000..78a1f40f --- /dev/null +++ b/docs/sdk/v0.47/documentation/application-framework/app-upgrade.mdx @@ -0,0 +1,64 @@
+---
+title: Application upgrade
+---
+
+
+This document describes how to upgrade your application. If you are looking specifically for the changes to perform between SDK versions, see the [SDK migrations documentation](https://docs.cosmos.network/main/migrations/intro).
+
+
+
+This section is currently incomplete. Track the progress of this document [here](https://github.com/cosmos/cosmos-sdk/issues/11504).
+
+
+## Pre-Upgrade Handling
+
+Cosmovisor supports custom pre-upgrade handling. Use pre-upgrade handling when you need to implement application config changes that are required in the newer version before you perform the upgrade.
+
+Using Cosmovisor pre-upgrade handling is optional. If pre-upgrade handling is not implemented, the upgrade continues.
+
+For example, make the required new-version changes to `app.toml` settings during the pre-upgrade handling.
The pre-upgrade handling process means that the file does not have to be manually updated after the upgrade.
+
+Before the application binary is upgraded, Cosmovisor calls a `pre-upgrade` command that can be implemented by the application.
+
+The `pre-upgrade` command does not take in any command-line arguments and is expected to terminate with the following exit codes:
+
+| Exit status code | How it is handled in Cosmovisor                                                                                    |
+| ---------------- | ------------------------------------------------------------------------------------------------------------------ |
+| `0`              | Assumes the `pre-upgrade` command executed successfully and continues the upgrade.                                   |
+| `1`              | Default exit code when the `pre-upgrade` command has not been implemented.                                           |
+| `30`             | The `pre-upgrade` command was executed but failed. This fails the entire upgrade.                                    |
+| `31`             | The `pre-upgrade` command was executed but failed. The command is retried until exit code `1` or `30` is returned.   |
+
+## Sample
+
+Here is a sample structure of the `pre-upgrade` command:
+
+```go expandable
+func preUpgradeCommand() *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "pre-upgrade",
+ Short: "Pre-upgrade command",
+ Long: "Pre-upgrade command to implement custom pre-upgrade handling",
+ Run: func(cmd *cobra.Command, args []string) {
+ err := HandlePreUpgrade()
+ if err != nil {
+ os.Exit(30)
+}
+
+os.Exit(0)
+},
+}
+
+return cmd
+}
+```
+
+Ensure that the pre-upgrade command has been registered in the application:
+
+```go
+rootCmd.AddCommand(
+ / ..
+ preUpgradeCommand(),
+ / ..
+ )
+``` diff --git a/docs/sdk/v0.47/documentation/module-system/beginblock-endblock.mdx b/docs/sdk/v0.47/documentation/module-system/beginblock-endblock.mdx new file mode 100644 index 00000000..fdbf3530 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/beginblock-endblock.mdx @@ -0,0 +1,106 @@
+---
+title: BeginBlocker and EndBlocker
+---
+
+
+**Synopsis**
+`BeginBlocker` and `EndBlocker` are optional methods module developers can implement in their module. They will be triggered at the beginning and at the end of each block respectively, when the [`BeginBlock`](/docs/sdk/v0.47/learn/advanced/baseapp#beginblock) and [`EndBlock`](/docs/sdk/v0.47/learn/advanced/baseapp#endblock) ABCI messages are received from the underlying consensus engine.
+
+
+
+
+### Pre-requisite Readings
+
+* [Module Manager](/docs/sdk/v0.47/documentation/module-system/module-manager)
+
+
+
+## BeginBlocker and EndBlocker
+
+`BeginBlocker` and `EndBlocker` are a way for module developers to add automatic execution of logic to their module. This is a powerful tool that should be used carefully, as complex automatic functions can slow down or even halt the chain.
+
+When needed, `BeginBlocker` and `EndBlocker` are implemented as part of the [`BeginBlockAppModule` and `EndBlockAppModule` interfaces](/docs/sdk/v0.47/documentation/module-system/module-manager#appmodule). This means either can be left out if not required. The `BeginBlock` and `EndBlock` methods of the interface implemented in `module.go` generally defer to the `BeginBlocker` and `EndBlocker` methods respectively, which are usually implemented in `abci.go`.
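To make the `module.go` → `abci.go` split concrete, here is a minimal, self-contained sketch of the deferral pattern. All names here (`context`, `requestBeginBlock`, `keeper`, `appModule`) are illustrative stand-ins, not the real SDK types:

```go
package main

import "fmt"

// Illustrative stand-ins for sdk.Context and the ABCI request/response types.
type context struct{ blockHeight int64 }
type requestBeginBlock struct{}
type validatorUpdate struct{}

type keeper struct{ blocksSeen int }

// abci.go: the module's actual per-block logic lives here.
func beginBlocker(ctx context, k *keeper) {
	k.blocksSeen++ // e.g. track historical info, allocate rewards, ...
}

func endBlocker(ctx context, k *keeper) []validatorUpdate {
	return nil // e.g. return validator set changes to the consensus engine
}

// module.go: the AppModule interface methods simply defer to abci.go.
type appModule struct{ k *keeper }

func (am appModule) BeginBlock(ctx context, _ requestBeginBlock) {
	beginBlocker(ctx, am.k)
}

func (am appModule) EndBlock(ctx context) []validatorUpdate {
	return endBlocker(ctx, am.k)
}

func main() {
	am := appModule{k: &keeper{}}
	for h := int64(1); h <= 3; h++ {
		ctx := context{blockHeight: h}
		am.BeginBlock(ctx, requestBeginBlock{})
		am.EndBlock(ctx)
	}
	fmt.Println(am.k.blocksSeen) // one increment per block
}
```

The design point is separation of concerns: `module.go` stays a thin adapter to the module-manager interfaces, while the block-level state logic is kept testable on its own in `abci.go`.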
+
+The actual implementations of `BeginBlocker` and `EndBlocker` in `abci.go` are very similar to that of a [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services):
+
+* They generally use the [`keeper`](/docs/sdk/v0.47/documentation/module-system/keeper) and [`ctx`](/docs/sdk/v0.47/learn/advanced/context) to retrieve information about the latest state.
+* If needed, they use the `keeper` and `ctx` to trigger state-transitions.
+* If needed, they can emit [`events`](/docs/sdk/v0.47/learn/advanced/events) via the `ctx`'s `EventManager`.
+
+One specificity of `EndBlocker` is that it can return validator updates to the underlying consensus engine in the form of an [`[]abci.ValidatorUpdate`](https://docs.cometbft.com/v0.37/spec/abci/abci++_methods#endblock). This is the preferred way to implement custom validator changes.
+
+It is possible for developers to define the order of execution between the `BeginBlocker`/`EndBlocker` functions of each of their application's modules via the module manager's `SetOrderBeginBlocker`/`SetOrderEndBlocker` methods. For more on the module manager, click [here](/docs/sdk/v0.47/documentation/module-system/module-manager#manager).
+
+See an example implementation of `BeginBlocker` from the `distribution` module:
+
+```go expandable
+package distribution
+
+import (
+
+ "time"
+
+ abci "github.com/tendermint/tendermint/abci/types"
+ "github.com/cosmos/cosmos-sdk/telemetry"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/x/distribution/keeper"
+ "github.com/cosmos/cosmos-sdk/x/distribution/types"
+)
+
+/ BeginBlocker sets the proposer for determining distribution during endblock
+/ and distribute rewards for the previous block.
+func BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock, k keeper.Keeper) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker) + + / determine the total power signing the block + var previousTotalPower int64 + for _, voteInfo := range req.LastCommitInfo.GetVotes() { + previousTotalPower += voteInfo.Validator.Power +} + + / TODO this is Tendermint-dependent + / ref https://github.com/cosmos/cosmos-sdk/issues/3095 + if ctx.BlockHeight() > 1 { + k.AllocateTokens(ctx, previousTotalPower, req.LastCommitInfo.GetVotes()) +} + + / record the proposer for when we payout on the next block + consAddr := sdk.ConsAddress(req.Header.ProposerAddress) + +k.SetPreviousProposerConsAddr(ctx, consAddr) +} +``` + +and an example implementation of `EndBlocker` from the `staking` module: + +```go expandable +package staking + +import ( + + "time" + + abci "github.com/tendermint/tendermint/abci/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/keeper" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ BeginBlocker will persist the current header and validator set as a historical entry +/ and prune the oldest entry based on the HistoricalEntries parameter +func BeginBlocker(ctx sdk.Context, k *keeper.Keeper) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker) + +k.TrackHistoricalInfo(ctx) +} + +/ Called every block, update validator set +func EndBlocker(ctx sdk.Context, k *keeper.Keeper) []abci.ValidatorUpdate { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + +return k.BlockValidatorUpdates(ctx) +} +``` + +{/* TODO: leaving this here to update docs with advanced api changes */} diff --git a/docs/sdk/v0.47/documentation/module-system/depinject.mdx b/docs/sdk/v0.47/documentation/module-system/depinject.mdx new file mode 100644 index 
00000000..0588e1e7 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/depinject.mdx @@ -0,0 +1,3561 @@ +--- +title: Modules depinject-ready +--- + + + +### Pre-requisite Readings + +* [Depinject Documentation](/docs/sdk/v0.47/documentation/module-system/depinject) + + + +[`depinject`](/docs/sdk/v0.47/documentation/module-system/depinject) is used to wire any module in `app.go`. +All core modules are already configured to support dependency injection. + +To work with `depinject` a module must define its configuration and requirements so that `depinject` can provide the right dependencies. + +In brief, as a module developer, the following steps are required: + +1. Define the module configuration using Protobuf +2. Define the module dependencies in `x/{moduleName}/module.go` + +A chain developer can then use the module by following these two steps: + +1. Configure the module in `app_config.go` or `app.yaml` +2. Inject the module in `app.go` + +## Module Configuration + +The module available configuration is defined in a Protobuf file, located at `{moduleName}/module/v1/module.proto`. + +```protobuf +syntax = "proto3"; + +package cosmos.group.module.v1; + +import "cosmos/app/v1alpha1/module.proto"; +import "gogoproto/gogo.proto"; +import "google/protobuf/duration.proto"; +import "amino/amino.proto"; + +// Module is the config object of the group module. +message Module { + option (cosmos.app.v1alpha1.module) = { + go_import: "github.com/cosmos/cosmos-sdk/x/group" + }; + + // max_execution_period defines the max duration after a proposal's voting period ends that members can send a MsgExec + // to execute the proposal. + google.protobuf.Duration max_execution_period = 1 + [(gogoproto.stdduration) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // max_metadata_len defines the max length of the metadata bytes field for various entities within the group module. + // Defaults to 255 if not explicitly set. 
+  uint64 max_metadata_len = 2;
+}
+
+```
+
+* `go_import` must point to the Go package of the custom module.
+* Message fields define the module configuration.
+  That configuration can be set in the `app_config.go` / `app.yaml` file for a chain developer to configure the module.\
+  Taking `group` as an example, a chain developer is able to decide, thanks to `uint64 max_metadata_len`, what the maximum metadata length allowed for a group proposal is.
+
+  ```go expandable
+  package simapp
+
+  import (
+
+  "time"
+
+  runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1"
+  appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1"
+  authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1"
+  authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1"
+  bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1"
+  capabilitymodulev1 "cosmossdk.io/api/cosmos/capability/module/v1"
+  consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1"
+  crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1"
+  distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1"
+  evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1"
+  feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1"
+  genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1"
+  govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1"
+  groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1"
+  mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1"
+  nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1"
+  paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1"
+  slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1"
+  stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1"
+  txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1"
+  upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1"
+  vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1"
+  "cosmossdk.io/core/appconfig"
+  "google.golang.org/protobuf/types/known/durationpb"
+
+  authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+  vestingtypes
"github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" + ) + + var ( + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder = []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensustypes.ModuleName, + } + + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: "runtime", + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + + BeginBlockers: []string{ + upgradetypes.ModuleName, + capabilitytypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + govtypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + capabilitytypes.ModuleName, + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + consensustypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + InitGenesis: genesisModuleOrder, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: genesisModuleOrder, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: nil, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: capabilitytypes.ModuleName, + Config: appconfig.WrapAny(&capabilitymodulev1.Module{ + SealKeeper: true, + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: 
appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + }, + }) + ) + ``` + +That message is generated using [`pulsar`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/scripts/protocgen-pulsar.sh) (by running `make proto-gen`). +In the case of the `group` module, this file is generated here: [Link](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/api/cosmos/group/module/v1/module.pulsar.go). + +The part that is relevant for the module configuration is: + +```go expandable +/ Code generated by protoc-gen-go-pulsar. DO NOT EDIT. +package modulev1 + +import ( + + _ "cosmossdk.io/api/amino" + _ "cosmossdk.io/api/cosmos/app/v1alpha1" + fmt "fmt" + runtime "github.com/cosmos/cosmos-proto/runtime" + _ "github.com/cosmos/gogoproto/gogoproto" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoiface "google.golang.org/protobuf/runtime/protoiface" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + durationpb "google.golang.org/protobuf/types/known/durationpb" + io "io" + reflect "reflect" + sync "sync" +) + +var ( + md_Module protoreflect.MessageDescriptor + fd_Module_max_execution_period protoreflect.FieldDescriptor + fd_Module_max_metadata_len protoreflect.FieldDescriptor +) + +func init() { + file_cosmos_group_module_v1_module_proto_init() + +md_Module = File_cosmos_group_module_v1_module_proto.Messages().ByName("Module") + +fd_Module_max_execution_period = md_Module.Fields().ByName("max_execution_period") + +fd_Module_max_metadata_len = md_Module.Fields().ByName("max_metadata_len") +} + +var _ protoreflect.Message = (*fastReflection_Module)(nil) + +type fastReflection_Module Module + +func (x *Module) + +ProtoReflect() + +protoreflect.Message { + return (*fastReflection_Module)(x) +} + +func (x *Module) + +slowProtoReflect() + 
+protoreflect.Message { + mi := &file_cosmos_group_module_v1_module_proto_msgTypes[0] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) +} + +return ms +} + +return mi.MessageOf(x) +} + +var _fastReflection_Module_messageType fastReflection_Module_messageType +var _ protoreflect.MessageType = fastReflection_Module_messageType{ +} + +type fastReflection_Module_messageType struct{ +} + +func (x fastReflection_Module_messageType) + +Zero() + +protoreflect.Message { + return (*fastReflection_Module)(nil) +} + +func (x fastReflection_Module_messageType) + +New() + +protoreflect.Message { + return new(fastReflection_Module) +} + +func (x fastReflection_Module_messageType) + +Descriptor() + +protoreflect.MessageDescriptor { + return md_Module +} + +/ Descriptor returns message descriptor, which contains only the protobuf +/ type information for the message. +func (x *fastReflection_Module) + +Descriptor() + +protoreflect.MessageDescriptor { + return md_Module +} + +/ Type returns the message type, which encapsulates both Go and protobuf +/ type information. If the Go type information is not needed, +/ it is recommended that the message descriptor be used instead. +func (x *fastReflection_Module) + +Type() + +protoreflect.MessageType { + return _fastReflection_Module_messageType +} + +/ New returns a newly allocated and mutable empty message. +func (x *fastReflection_Module) + +New() + +protoreflect.Message { + return new(fastReflection_Module) +} + +/ Interface unwraps the message reflection interface and +/ returns the underlying ProtoMessage interface. +func (x *fastReflection_Module) + +Interface() + +protoreflect.ProtoMessage { + return (*Module)(x) +} + +/ Range iterates over every populated field in an undefined order, +/ calling f for each field descriptor and value encountered. +/ Range returns immediately if f returns false. 
+/ While iterating, mutating operations may only be performed +/ on the current field descriptor. +func (x *fastReflection_Module) + +Range(f func(protoreflect.FieldDescriptor, protoreflect.Value) + +bool) { + if x.MaxExecutionPeriod != nil { + value := protoreflect.ValueOfMessage(x.MaxExecutionPeriod.ProtoReflect()) + if !f(fd_Module_max_execution_period, value) { + return +} + +} + if x.MaxMetadataLen != uint64(0) { + value := protoreflect.ValueOfUint64(x.MaxMetadataLen) + if !f(fd_Module_max_metadata_len, value) { + return +} + +} +} + +/ Has reports whether a field is populated. +/ +/ Some fields have the property of nullability where it is possible to +/ distinguish between the default value of a field and whether the field +/ was explicitly populated with the default value. Singular message fields, +/ member fields of a oneof, and proto2 scalar fields are nullable. Such +/ fields are populated only if explicitly set. +/ +/ In other cases (aside from the nullable cases above), +/ a proto3 scalar field is populated if it contains a non-zero value, and +/ a repeated field is populated if it is non-empty. +func (x *fastReflection_Module) + +Has(fd protoreflect.FieldDescriptor) + +bool { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + return x.MaxExecutionPeriod != nil + case "cosmos.group.module.v1.Module.max_metadata_len": + return x.MaxMetadataLen != uint64(0) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Clear clears the field such that a subsequent Has call reports false. +/ +/ Clearing an extension field clears both the extension type and value +/ associated with the given field number. +/ +/ Clear is a mutating operation and unsafe for concurrent use. 
+func (x *fastReflection_Module) + +Clear(fd protoreflect.FieldDescriptor) { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + x.MaxExecutionPeriod = nil + case "cosmos.group.module.v1.Module.max_metadata_len": + x.MaxMetadataLen = uint64(0) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Get retrieves the value for a field. +/ +/ For unpopulated scalars, it returns the default value, where +/ the default value of a bytes scalar is guaranteed to be a copy. +/ For unpopulated composite types, it returns an empty, read-only view +/ of the value; to obtain a mutable reference, use Mutable. +func (x *fastReflection_Module) + +Get(descriptor protoreflect.FieldDescriptor) + +protoreflect.Value { + switch descriptor.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + value := x.MaxExecutionPeriod + return protoreflect.ValueOfMessage(value.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + value := x.MaxMetadataLen + return protoreflect.ValueOfUint64(value) + +default: + if descriptor.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", descriptor.FullName())) +} +} + +/ Set stores the value for a field. +/ +/ For a field belonging to a oneof, it implicitly clears any other field +/ that may be currently set within the same oneof. +/ For extension fields, it implicitly stores the provided ExtensionType. +/ When setting a composite type, it is unspecified whether the stored value +/ aliases the source's memory in any way. If the composite value is an +/ empty, read-only value, then it panics. 
+/ +/ Set is a mutating operation and unsafe for concurrent use. +func (x *fastReflection_Module) + +Set(fd protoreflect.FieldDescriptor, value protoreflect.Value) { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + x.MaxExecutionPeriod = value.Message().Interface().(*durationpb.Duration) + case "cosmos.group.module.v1.Module.max_metadata_len": + x.MaxMetadataLen = value.Uint() + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Mutable returns a mutable reference to a composite type. +/ +/ If the field is unpopulated, it may allocate a composite value. +/ For a field belonging to a oneof, it implicitly clears any other field +/ that may be currently set within the same oneof. +/ For extension fields, it implicitly stores the provided ExtensionType +/ if not already stored. +/ It panics if the field does not contain a composite type. +/ +/ Mutable is a mutating operation and unsafe for concurrent use. 
+func (x *fastReflection_Module) + +Mutable(fd protoreflect.FieldDescriptor) + +protoreflect.Value { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + if x.MaxExecutionPeriod == nil { + x.MaxExecutionPeriod = new(durationpb.Duration) +} + +return protoreflect.ValueOfMessage(x.MaxExecutionPeriod.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + panic(fmt.Errorf("field max_metadata_len of message cosmos.group.module.v1.Module is not mutable")) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ NewField returns a new value that is assignable to the field +/ for the given descriptor. For scalars, this returns the default value. +/ For lists, maps, and messages, this returns a new, empty, mutable value. +func (x *fastReflection_Module) + +NewField(fd protoreflect.FieldDescriptor) + +protoreflect.Value { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + m := new(durationpb.Duration) + +return protoreflect.ValueOfMessage(m.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + return protoreflect.ValueOfUint64(uint64(0)) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ WhichOneof reports which field within the oneof is populated, +/ returning nil if none are populated. +/ It panics if the oneof descriptor does not belong to this message. 
+func (x *fastReflection_Module) + +WhichOneof(d protoreflect.OneofDescriptor) + +protoreflect.FieldDescriptor { + switch d.FullName() { + default: + panic(fmt.Errorf("%s is not a oneof field in cosmos.group.module.v1.Module", d.FullName())) +} + +panic("unreachable") +} + +/ GetUnknown retrieves the entire list of unknown fields. +/ The caller may only mutate the contents of the RawFields +/ if the mutated bytes are stored back into the message with SetUnknown. +func (x *fastReflection_Module) + +GetUnknown() + +protoreflect.RawFields { + return x.unknownFields +} + +/ SetUnknown stores an entire list of unknown fields. +/ The raw fields must be syntactically valid according to the wire format. +/ An implementation may panic if this is not the case. +/ Once stored, the caller must not mutate the content of the RawFields. +/ An empty RawFields may be passed to clear the fields. +/ +/ SetUnknown is a mutating operation and unsafe for concurrent use. +func (x *fastReflection_Module) + +SetUnknown(fields protoreflect.RawFields) { + x.unknownFields = fields +} + +/ IsValid reports whether the message is valid. +/ +/ An invalid message is an empty, read-only value. +/ +/ An invalid message often corresponds to a nil pointer of the concrete +/ message type, but the details are implementation dependent. +/ Validity is not part of the protobuf data model, and may not +/ be preserved in marshaling or other operations. +func (x *fastReflection_Module) + +IsValid() + +bool { + return x != nil +} + +/ ProtoMethods returns optional fastReflectionFeature-path implementations of various operations. +/ This method may return nil. +/ +/ The returned methods type is identical to +/ "google.golang.org/protobuf/runtime/protoiface".Methods. +/ Consult the protoiface package documentation for details. 
+func (x *fastReflection_Module) + +ProtoMethods() *protoiface.Methods { + size := func(input protoiface.SizeInput) + +protoiface.SizeOutput { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.SizeOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Size: 0, +} + +} + options := runtime.SizeInputToOptions(input) + _ = options + var n int + var l int + _ = l + if x.MaxExecutionPeriod != nil { + l = options.Size(x.MaxExecutionPeriod) + +n += 1 + l + runtime.Sov(uint64(l)) +} + if x.MaxMetadataLen != 0 { + n += 1 + runtime.Sov(uint64(x.MaxMetadataLen)) +} + if x.unknownFields != nil { + n += len(x.unknownFields) +} + +return protoiface.SizeOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Size: n, +} + +} + marshal := func(input protoiface.MarshalInput) (protoiface.MarshalOutput, error) { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, nil +} + options := runtime.MarshalInputToOptions(input) + _ = options + size := options.Size(x) + dAtA := make([]byte, size) + i := len(dAtA) + _ = i + var l int + _ = l + if x.unknownFields != nil { + i -= len(x.unknownFields) + +copy(dAtA[i:], x.unknownFields) +} + if x.MaxMetadataLen != 0 { + i = runtime.EncodeVarint(dAtA, i, uint64(x.MaxMetadataLen)) + +i-- + dAtA[i] = 0x10 +} + if x.MaxExecutionPeriod != nil { + encoded, err := options.Marshal(x.MaxExecutionPeriod) + if err != nil { + return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, err +} + +i -= len(encoded) + +copy(dAtA[i:], encoded) + +i = runtime.EncodeVarint(dAtA, i, uint64(len(encoded))) + +i-- + dAtA[i] = 0xa +} + if input.Buf != nil { + input.Buf = append(input.Buf, dAtA...) 
+} + +else { + input.Buf = dAtA +} + +return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, nil +} + unmarshal := func(input protoiface.UnmarshalInput) (protoiface.UnmarshalOutput, error) { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags, +}, nil +} + options := runtime.UnmarshalInputToOptions(input) + _ = options + dAtA := input.Buf + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: Module: wiretype end group for non-group") +} + if fieldNum <= 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: Module: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: wrong wireType = %d for field MaxExecutionPeriod", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + 
NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + if postIndex > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + if x.MaxExecutionPeriod == nil { + x.MaxExecutionPeriod = &durationpb.Duration{ +} + +} + if err := options.Unmarshal(dAtA[iNdEx:postIndex], x.MaxExecutionPeriod); err != nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, err +} + +iNdEx = postIndex + case 2: + if wireType != 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: wrong wireType = %d for field MaxMetadataLen", wireType) +} + +x.MaxMetadataLen = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + x.MaxMetadataLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + +default: + iNdEx = preIndex + skippy, err := runtime.Skip(dAtA[iNdEx:]) + if err != nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: 
input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + if (iNdEx + skippy) > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + if !options.DiscardUnknown { + x.unknownFields = append(x.unknownFields, dAtA[iNdEx:iNdEx+skippy]...) +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + +return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, nil +} + +return &protoiface.Methods{ + NoUnkeyedLiterals: struct{ +}{ +}, + Flags: protoiface.SupportMarshalDeterministic | protoiface.SupportUnmarshalDiscardUnknown, + Size: size, + Marshal: marshal, + Unmarshal: unmarshal, + Merge: nil, + CheckInitialized: nil, +} +} + +/ Code generated by protoc-gen-go. DO NOT EDIT. +/ versions: +/ protoc-gen-go v1.27.0 +/ protoc (unknown) +/ source: cosmos/group/module/v1/module.proto + +const ( + / Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + / Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +/ Module is the config object of the group module. +type Module struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + / max_execution_period defines the max duration after a proposal's voting period ends that members can send a MsgExec + / to execute the proposal. + MaxExecutionPeriod *durationpb.Duration `protobuf:"bytes,1,opt,name=max_execution_period,json=maxExecutionPeriod,proto3" json:"max_execution_period,omitempty"` + / max_metadata_len defines the max length of the metadata bytes field for various entities within the group module. + / Defaults to 255 if not explicitly set. 
+ MaxMetadataLen uint64 `protobuf:"varint,2,opt,name=max_metadata_len,json=maxMetadataLen,proto3" json:"max_metadata_len,omitempty"` +} + +func (x *Module) + +Reset() { + *x = Module{ +} + if protoimpl.UnsafeEnabled { + mi := &file_cosmos_group_module_v1_module_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + +ms.StoreMessageInfo(mi) +} +} + +func (x *Module) + +String() + +string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Module) + +ProtoMessage() { +} + +/ Deprecated: Use Module.ProtoReflect.Descriptor instead. +func (*Module) + +Descriptor() ([]byte, []int) { + return file_cosmos_group_module_v1_module_proto_rawDescGZIP(), []int{0 +} +} + +func (x *Module) + +GetMaxExecutionPeriod() *durationpb.Duration { + if x != nil { + return x.MaxExecutionPeriod +} + +return nil +} + +func (x *Module) + +GetMaxMetadataLen() + +uint64 { + if x != nil { + return x.MaxMetadataLen +} + +return 0 +} + +var File_cosmos_group_module_v1_module_proto protoreflect.FileDescriptor + +var file_cosmos_group_module_v1_module_proto_rawDesc = []byte{ + 0x0a, 0x23, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x2f, 0x6d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x16, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x67, 0x72, + 0x6f, 0x75, 0x70, 0x2e, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x1a, 0x20, 0x63, + 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x61, 0x70, 0x70, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, + 0x61, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, + 0x14, 0x67, 0x6f, 0x67, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x67, 0x6f, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 
0x1a, 0x11, 0x61, 0x6d, 0x69, 0x6e, 0x6f, 0x2f, 0x61, 0x6d, 0x69, + 0x6e, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xbc, 0x01, 0x0a, 0x06, 0x4d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x12, 0x5a, 0x0a, 0x14, 0x6d, 0x61, 0x78, 0x5f, 0x65, 0x78, 0x65, 0x63, 0x75, + 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x0d, 0xc8, 0xde, + 0x1f, 0x00, 0x98, 0xdf, 0x1f, 0x01, 0xa8, 0xe7, 0xb0, 0x2a, 0x01, 0x52, 0x12, 0x6d, 0x61, 0x78, + 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x12, + 0x28, 0x0a, 0x10, 0x6d, 0x61, 0x78, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x5f, + 0x6c, 0x65, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0e, 0x6d, 0x61, 0x78, 0x4d, 0x65, + 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x4c, 0x65, 0x6e, 0x3a, 0x2c, 0xba, 0xc0, 0x96, 0xda, 0x01, + 0x26, 0x0a, 0x24, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, + 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2d, 0x73, 0x64, 0x6b, 0x2f, + 0x78, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x42, 0xd6, 0x01, 0x0a, 0x1a, 0x63, 0x6f, 0x6d, 0x2e, + 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x2e, 0x6d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x42, 0x0b, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x50, 0x72, + 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x30, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x73, 0x64, 0x6b, + 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x67, + 0x72, 0x6f, 0x75, 0x70, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x3b, 0x6d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x76, 0x31, 0xa2, 0x02, 0x03, 0x43, 0x47, 0x4d, 0xaa, 0x02, 0x16, + 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x47, 0x72, 0x6f, 0x75, 
0x70, 0x2e, 0x4d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x2e, 0x56, 0x31, 0xca, 0x02, 0x16, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x5c, + 0x47, 0x72, 0x6f, 0x75, 0x70, 0x5c, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5c, 0x56, 0x31, 0xe2, + 0x02, 0x22, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x5c, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x5c, 0x4d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5c, 0x56, 0x31, 0x5c, 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, + 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x19, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x3a, 0x3a, 0x47, + 0x72, 0x6f, 0x75, 0x70, 0x3a, 0x3a, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x3a, 0x3a, 0x56, 0x31, + 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, +} + +var ( + file_cosmos_group_module_v1_module_proto_rawDescOnce sync.Once + file_cosmos_group_module_v1_module_proto_rawDescData = file_cosmos_group_module_v1_module_proto_rawDesc +) + +func file_cosmos_group_module_v1_module_proto_rawDescGZIP() []byte { + file_cosmos_group_module_v1_module_proto_rawDescOnce.Do(func() { + file_cosmos_group_module_v1_module_proto_rawDescData = protoimpl.X.CompressGZIP(file_cosmos_group_module_v1_module_proto_rawDescData) +}) + +return file_cosmos_group_module_v1_module_proto_rawDescData +} + +var file_cosmos_group_module_v1_module_proto_msgTypes = make([]protoimpl.MessageInfo, 1) + +var file_cosmos_group_module_v1_module_proto_goTypes = []interface{ +}{ + (*Module)(nil), / 0: cosmos.group.module.v1.Module + (*durationpb.Duration)(nil), / 1: google.protobuf.Duration +} + +var file_cosmos_group_module_v1_module_proto_depIdxs = []int32{ + 1, / 0: cosmos.group.module.v1.Module.max_execution_period:type_name -> google.protobuf.Duration + 1, / [1:1] is the sub-list for method output_type + 1, / [1:1] is the sub-list for method input_type + 1, / [1:1] is the sub-list for extension type_name + 1, / [1:1] is the sub-list for extension extendee + 0, / [0:1] is the sub-list for field type_name +} + +func init() { + file_cosmos_group_module_v1_module_proto_init() +} + +func 
file_cosmos_group_module_v1_module_proto_init() { + if File_cosmos_group_module_v1_module_proto != nil { + return +} + if !protoimpl.UnsafeEnabled { + file_cosmos_group_module_v1_module_proto_msgTypes[0].Exporter = func(v interface{ +}, i int) + +interface{ +} { + switch v := v.(*Module); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil +} + +} + +} + +type x struct{ +} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{ +}).PkgPath(), + RawDescriptor: file_cosmos_group_module_v1_module_proto_rawDesc, + NumEnums: 0, + NumMessages: 1, + NumExtensions: 0, + NumServices: 0, +}, + GoTypes: file_cosmos_group_module_v1_module_proto_goTypes, + DependencyIndexes: file_cosmos_group_module_v1_module_proto_depIdxs, + MessageInfos: file_cosmos_group_module_v1_module_proto_msgTypes, +}.Build() + +File_cosmos_group_module_v1_module_proto = out.File + file_cosmos_group_module_v1_module_proto_rawDesc = nil + file_cosmos_group_module_v1_module_proto_goTypes = nil + file_cosmos_group_module_v1_module_proto_depIdxs = nil +} +``` + + +Pulsar is optional. The official [`protoc-gen-go`](https://developers.google.com/protocol-buffers/docs/reference/go-generated) can be used as well. + + +## Dependency Definition + +Once the configuration proto is defined, the module's `module.go` must define what dependencies are required by the module. +The boilerplate is similar for all modules. + + +All methods, structs and their fields must be public for `depinject`. + + +1. 
Import the module configuration generated package: + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.EndBlockAppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. 
+func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +func (am AppModule) + +NewHandler() + +sdk.Handler { + return nil +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. +func (am AppModule) + +EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + +return []abci.ValidatorUpdate{ +} +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. 
+func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents returns all the group content functions used to +/ simulate governance proposals. +func (am AppModule) + +ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +Define an `init()` function for defining the 
`providers` of the module configuration:\ +This registers the module configuration message and the wiring of the module. + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.EndBlockAppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +func (am AppModule) + +NewHandler() + +sdk.Handler { + return nil +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. 
+func (am AppModule) + +EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + +return []abci.ValidatorUpdate{ +} +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents returns all the group content functions used to +/ simulate governance proposals. +func (am AppModule) + +ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +2. 
Ensure that the module implements the `appmodule.AppModule` interface: + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.EndBlockAppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +func (am AppModule) + +NewHandler() + +sdk.Handler { + return nil +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. 
+func (am AppModule) + +EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + +return []abci.ValidatorUpdate{ +} +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, 
in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +3. Define a struct that inherits `depinject.In` and define the module inputs (i.e. module dependencies): + * `depinject` provides the right dependencies to the module. + * `depinject` also checks that all dependencies are provided. + + +For making a dependency optional, add the `optional:"true"` struct tag.\ + + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. 
+const ConsensusVersion = 2 + +var ( + _ module.EndBlockAppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. 
+func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +func (am AppModule) + +NewHandler() + +sdk.Handler { + return nil +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. +func (am AppModule) + +EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + +return []abci.ValidatorUpdate{ +} +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents returns all the group content functions used to +/ simulate governance proposals. 
+func (am AppModule) + +ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +4. Define the module outputs with a public struct that inherits `depinject.Out`: + The module outputs are the dependencies that the module provides to other modules. It is usually the module itself and its keeper. 
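The `optional:"true"` struct tag from step 3 and the input struct shape from `GroupInputs` can be illustrated with a stdlib-only sketch. This is a toy illustration of the tag convention, not the real `depinject` resolver: the field names (`FeeCollector`, etc.) and the `splitDeps` helper are hypothetical stand-ins for how a container can read struct tags by reflection to decide which dependencies must be provided and which may be absent.

```go
package main

import (
	"fmt"
	"reflect"
)

// Inputs mirrors the shape of a depinject input struct such as GroupInputs:
// each field is a dependency the container must supply. A field tagged
// `optional:"true"` may be left unresolved without failing wiring.
// (Toy types stand in for the real SDK keepers; this is not the depinject API.)
type Inputs struct {
	Config        string
	AccountKeeper string
	FeeCollector  string `optional:"true"`
}

// splitDeps walks the struct's fields by reflection and partitions them into
// required and optional dependencies, following the same tag convention.
func splitDeps(v interface{}) (required, optional []string) {
	t := reflect.TypeOf(v)
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if f.Tag.Get("optional") == "true" {
			optional = append(optional, f.Name)
		} else {
			required = append(required, f.Name)
		}
	}
	return required, optional
}

func main() {
	req, opt := splitDeps(Inputs{})
	fmt.Println("required:", req) // required: [Config AccountKeeper]
	fmt.Println("optional:", opt) // optional: [FeeCollector]
}
```

The same reflective walk is what lets a container validate up front that every non-optional input (here, `Config` and `AccountKeeper`) has a provider, which is the check described in step 3.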
+ +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.EndBlockAppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. 
+func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +func (am AppModule) + +NewHandler() + +sdk.Handler { + return nil +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. +func (am AppModule) + +EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + +return []abci.ValidatorUpdate{ +} +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. 
+func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents returns all the group content functions used to +/ simulate governance proposals. +func (am AppModule) + +ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +5. 
Create a function named `ProvideModule` (as called in 1.) and use the inputs for instantiating the module outputs. + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.EndBlockAppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +func (am AppModule) + +NewHandler() + +sdk.Handler { + return nil +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. 
+func (am AppModule) + +EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + +return []abci.ValidatorUpdate{ +} +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents returns all the group content functions used to +/ simulate governance proposals. +func (am AppModule) + +ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +The `ProvideModule` function should return an instance of `cosmossdk.io/core/appmodule.AppModule` which implements +one or more app module extension interfaces for initializing the module. 
+ +Following is the complete app wiring configuration for `group`: + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.EndBlockAppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +func (am AppModule) + +NewHandler() + +sdk.Handler { + return nil +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. 
+func (am AppModule) + +EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + +return []abci.ValidatorUpdate{ +} +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents returns all the group content functions used to +/ simulate governance proposals. +func (am AppModule) + +ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +The module is now ready to be used with `depinject` by a chain developer. + +## Integrate in an application + +The App Wiring is done in `app_config.go` / `app.yaml` and `app_v2.go` and is explained in detail in the [overview of `app_v2.go`](/docs/sdk/v0.47/documentation/application-framework/app-go-v2). diff --git a/docs/sdk/v0.47/documentation/module-system/errors.mdx b/docs/sdk/v0.47/documentation/module-system/errors.mdx new file mode 100644 index 00000000..2248533b --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/errors.mdx @@ -0,0 +1,770 @@ +--- +title: Errors +--- + + +**Synopsis** +This document outlines the recommended usage and APIs for error handling in Cosmos SDK modules. 


Modules are encouraged to define and register their own errors to provide better
context on failed message or handler execution. Typically, these errors should be
common or general errors which can be further wrapped to provide additional specific
execution context.

## Registration

Modules should define and register their custom errors in `x/{module}/errors.go`.
Registration of errors is handled via the [`errors` package](https://github.com/cosmos/cosmos-sdk/blob/main/errors/errors.go).

Example:

```go expandable
package types

import (
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

// x/distribution module sentinel errors
var (
	ErrEmptyDelegatorAddr      = sdkerrors.Register(ModuleName, 2, "delegator address is empty")
	ErrEmptyWithdrawAddr       = sdkerrors.Register(ModuleName, 3, "withdraw address is empty")
	ErrEmptyValidatorAddr      = sdkerrors.Register(ModuleName, 4, "validator address is empty")
	ErrEmptyDelegationDistInfo = sdkerrors.Register(ModuleName, 5, "no delegation distribution info")
	ErrNoValidatorDistInfo     = sdkerrors.Register(ModuleName, 6, "no validator distribution info")
	ErrNoValidatorCommission   = sdkerrors.Register(ModuleName, 7, "no validator commission to withdraw")
	ErrSetWithdrawAddrDisabled = sdkerrors.Register(ModuleName, 8, "set withdraw address disabled")
	ErrBadDistribution         = sdkerrors.Register(ModuleName, 9, "community pool does not have sufficient coins to distribute")
	ErrInvalidProposalAmount   = sdkerrors.Register(ModuleName, 10, "invalid community pool spend proposal amount")
	ErrEmptyProposalRecipient  = sdkerrors.Register(ModuleName, 11, "invalid community pool spend proposal recipient")
	ErrNoValidatorExists       = sdkerrors.Register(ModuleName, 12, "validator does not exist")
	ErrNoDelegationExists      = sdkerrors.Register(ModuleName, 13, "delegation does not exist")
)
```

Each custom module error must provide the codespace, which is typically the module name
(e.g.
"distribution") and is unique per module, and a uint32 code. Together, the codespace and code +provide a globally unique Cosmos SDK error. Typically, the code is monotonically increasing but does not +necessarily have to be. The only restrictions on error codes are the following: + +* Must be greater than one, as a code value of one is reserved for internal errors. +* Must be unique within the module. + +Note, the Cosmos SDK provides a core set of *common* errors. These errors are defined in [`types/errors/errors.go`](https://github.com/cosmos/cosmos-sdk/blob/main/types/errors/errors.go). + +## Wrapping + +The custom module errors can be returned as their concrete type as they already fulfill the `error` +interface. However, module errors can be wrapped to provide further context and meaning to failed +execution. + +Example: + +```go expandable +package keeper + +import ( + + "fmt" + "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/internal/conv" + "github.com/cosmos/cosmos-sdk/store/prefix" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/query" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var _ Keeper = (*BaseKeeper)(nil) + +/ Keeper defines a module interface that facilitates the transfer of coins +/ between accounts. 
+type Keeper interface { + SendKeeper + WithMintCoinsRestriction(MintingRestrictionFn) + +BaseKeeper + + InitGenesis(sdk.Context, *types.GenesisState) + +ExportGenesis(sdk.Context) *types.GenesisState + + GetSupply(ctx sdk.Context, denom string) + +sdk.Coin + HasSupply(ctx sdk.Context, denom string) + +bool + GetPaginatedTotalSupply(ctx sdk.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) + +IterateTotalSupply(ctx sdk.Context, cb func(sdk.Coin) + +bool) + +GetDenomMetaData(ctx sdk.Context, denom string) (types.Metadata, bool) + +HasDenomMetaData(ctx sdk.Context, denom string) + +bool + SetDenomMetaData(ctx sdk.Context, denomMetaData types.Metadata) + +IterateAllDenomMetaData(ctx sdk.Context, cb func(types.Metadata) + +bool) + +SendCoinsFromModuleToAccount(ctx sdk.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + SendCoinsFromModuleToModule(ctx sdk.Context, senderModule, recipientModule string, amt sdk.Coins) + +error + SendCoinsFromAccountToModule(ctx sdk.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + DelegateCoinsFromAccountToModule(ctx sdk.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + UndelegateCoinsFromModuleToAccount(ctx sdk.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + MintCoins(ctx sdk.Context, moduleName string, amt sdk.Coins) + +error + BurnCoins(ctx sdk.Context, moduleName string, amt sdk.Coins) + +error + + DelegateCoins(ctx sdk.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error + UndelegateCoins(ctx sdk.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error + + types.QueryServer +} + +/ BaseKeeper manages transfers between accounts. It implements the Keeper interface. 
+type BaseKeeper struct { + BaseSendKeeper + + ak types.AccountKeeper + cdc codec.BinaryCodec + storeKey storetypes.StoreKey + mintCoinsRestrictionFn MintingRestrictionFn +} + +type MintingRestrictionFn func(ctx sdk.Context, coins sdk.Coins) + +error + +/ GetPaginatedTotalSupply queries for the supply, ignoring 0 coins, with a given pagination +func (k BaseKeeper) + +GetPaginatedTotalSupply(ctx sdk.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) { + store := ctx.KVStore(k.storeKey) + supplyStore := prefix.NewStore(store, types.SupplyKey) + supply := sdk.NewCoins() + +pageRes, err := query.Paginate(supplyStore, pagination, func(key, value []byte) + +error { + var amount math.Int + err := amount.Unmarshal(value) + if err != nil { + return fmt.Errorf("unable to convert amount string to Int %v", err) +} + + / `Add` omits the 0 coins addition to the `supply`. + supply = supply.Add(sdk.NewCoin(string(key), amount)) + +return nil +}) + if err != nil { + return nil, nil, err +} + +return supply, pageRes, nil +} + +/ NewBaseKeeper returns a new BaseKeeper object with a given codec, dedicated +/ store key, an AccountKeeper implementation, and a parameter Subspace used to +/ store and fetch module parameters. The BaseKeeper also accepts a +/ blocklist map. This blocklist describes the set of addresses that are not allowed +/ to receive funds through direct and explicit actions, for example, by using a MsgSend or +/ by using a SendCoinsFromModuleToAccount execution. 
+func NewBaseKeeper( + cdc codec.BinaryCodec, + storeKey storetypes.StoreKey, + ak types.AccountKeeper, + blockedAddrs map[string]bool, + authority string, +) + +BaseKeeper { + if _, err := sdk.AccAddressFromBech32(authority); err != nil { + panic(fmt.Errorf("invalid bank authority address: %w", err)) +} + +return BaseKeeper{ + BaseSendKeeper: NewBaseSendKeeper(cdc, storeKey, ak, blockedAddrs, authority), + ak: ak, + cdc: cdc, + storeKey: storeKey, + mintCoinsRestrictionFn: func(ctx sdk.Context, coins sdk.Coins) + +error { + return nil +}, +} +} + +/ WithMintCoinsRestriction restricts the bank Keeper used within a specific module to +/ have restricted permissions on minting via function passed in parameter. +/ Previous restriction functions can be nested as such: +/ +/ bankKeeper.WithMintCoinsRestriction(restriction1).WithMintCoinsRestriction(restriction2) + +func (k BaseKeeper) + +WithMintCoinsRestriction(check MintingRestrictionFn) + +BaseKeeper { + oldRestrictionFn := k.mintCoinsRestrictionFn + k.mintCoinsRestrictionFn = func(ctx sdk.Context, coins sdk.Coins) + +error { + err := check(ctx, coins) + if err != nil { + return err +} + +err = oldRestrictionFn(ctx, coins) + if err != nil { + return err +} + +return nil +} + +return k +} + +/ DelegateCoins performs delegation by deducting amt coins from an account with +/ address addr. For vesting accounts, delegations amounts are tracked for both +/ vesting and vested coins. The coins are then transferred from the delegator +/ address to a ModuleAccount address. If any of the delegation amounts are negative, +/ an error is returned. 
+func (k BaseKeeper) + +DelegateCoins(ctx sdk.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error { + moduleAcc := k.ak.GetAccount(ctx, moduleAccAddr) + if moduleAcc == nil { + return sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleAccAddr) +} + if !amt.IsValid() { + return sdkerrors.Wrap(sdkerrors.ErrInvalidCoins, amt.String()) +} + balances := sdk.NewCoins() + for _, coin := range amt { + balance := k.GetBalance(ctx, delegatorAddr, coin.GetDenom()) + if balance.IsLT(coin) { + return sdkerrors.Wrapf( + sdkerrors.ErrInsufficientFunds, "failed to delegate; %s is smaller than %s", balance, amt, + ) +} + +balances = balances.Add(balance) + err := k.setBalance(ctx, delegatorAddr, balance.Sub(coin)) + if err != nil { + return err +} + +} + if err := k.trackDelegation(ctx, delegatorAddr, balances, amt); err != nil { + return sdkerrors.Wrap(err, "failed to track delegation") +} + / emit coin spent event + ctx.EventManager().EmitEvent( + types.NewCoinSpentEvent(delegatorAddr, amt), + ) + err := k.addCoins(ctx, moduleAccAddr, amt) + if err != nil { + return err +} + +return nil +} + +/ UndelegateCoins performs undelegation by crediting amt coins to an account with +/ address addr. For vesting accounts, undelegation amounts are tracked for both +/ vesting and vested coins. The coins are then transferred from a ModuleAccount +/ address to the delegator address. If any of the undelegation amounts are +/ negative, an error is returned. 
+func (k BaseKeeper) + +UndelegateCoins(ctx sdk.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error { + moduleAcc := k.ak.GetAccount(ctx, moduleAccAddr) + if moduleAcc == nil { + return sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleAccAddr) +} + if !amt.IsValid() { + return sdkerrors.Wrap(sdkerrors.ErrInvalidCoins, amt.String()) +} + err := k.subUnlockedCoins(ctx, moduleAccAddr, amt) + if err != nil { + return err +} + if err := k.trackUndelegation(ctx, delegatorAddr, amt); err != nil { + return sdkerrors.Wrap(err, "failed to track undelegation") +} + +err = k.addCoins(ctx, delegatorAddr, amt) + if err != nil { + return err +} + +return nil +} + +/ GetSupply retrieves the Supply from store +func (k BaseKeeper) + +GetSupply(ctx sdk.Context, denom string) + +sdk.Coin { + store := ctx.KVStore(k.storeKey) + supplyStore := prefix.NewStore(store, types.SupplyKey) + bz := supplyStore.Get(conv.UnsafeStrToBytes(denom)) + if bz == nil { + return sdk.Coin{ + Denom: denom, + Amount: sdk.NewInt(0), +} + +} + +var amount math.Int + err := amount.Unmarshal(bz) + if err != nil { + panic(fmt.Errorf("unable to unmarshal supply value %v", err)) +} + +return sdk.Coin{ + Denom: denom, + Amount: amount, +} +} + +/ HasSupply checks if the supply coin exists in store. +func (k BaseKeeper) + +HasSupply(ctx sdk.Context, denom string) + +bool { + store := ctx.KVStore(k.storeKey) + supplyStore := prefix.NewStore(store, types.SupplyKey) + +return supplyStore.Has(conv.UnsafeStrToBytes(denom)) +} + +/ GetDenomMetaData retrieves the denomination metadata. returns the metadata and true if the denom exists, +/ false otherwise. 
+func (k BaseKeeper) + +GetDenomMetaData(ctx sdk.Context, denom string) (types.Metadata, bool) { + store := ctx.KVStore(k.storeKey) + +store = prefix.NewStore(store, types.DenomMetadataPrefix) + bz := store.Get(conv.UnsafeStrToBytes(denom)) + if bz == nil { + return types.Metadata{ +}, false +} + +var metadata types.Metadata + k.cdc.MustUnmarshal(bz, &metadata) + +return metadata, true +} + +/ HasDenomMetaData checks if the denomination metadata exists in store. +func (k BaseKeeper) + +HasDenomMetaData(ctx sdk.Context, denom string) + +bool { + store := ctx.KVStore(k.storeKey) + +store = prefix.NewStore(store, types.DenomMetadataPrefix) + +return store.Has(conv.UnsafeStrToBytes(denom)) +} + +/ GetAllDenomMetaData retrieves all denominations metadata +func (k BaseKeeper) + +GetAllDenomMetaData(ctx sdk.Context) []types.Metadata { + denomMetaData := make([]types.Metadata, 0) + +k.IterateAllDenomMetaData(ctx, func(metadata types.Metadata) + +bool { + denomMetaData = append(denomMetaData, metadata) + +return false +}) + +return denomMetaData +} + +/ IterateAllDenomMetaData iterates over all the denominations metadata and +/ provides the metadata to a callback. If true is returned from the +/ callback, iteration is halted. 
+func (k BaseKeeper) + +IterateAllDenomMetaData(ctx sdk.Context, cb func(types.Metadata) + +bool) { + store := ctx.KVStore(k.storeKey) + denomMetaDataStore := prefix.NewStore(store, types.DenomMetadataPrefix) + iterator := denomMetaDataStore.Iterator(nil, nil) + +defer sdk.LogDeferred(ctx.Logger(), func() + +error { + return iterator.Close() +}) + for ; iterator.Valid(); iterator.Next() { + var metadata types.Metadata + k.cdc.MustUnmarshal(iterator.Value(), &metadata) + if cb(metadata) { + break +} + +} +} + +/ SetDenomMetaData sets the denominations metadata +func (k BaseKeeper) + +SetDenomMetaData(ctx sdk.Context, denomMetaData types.Metadata) { + store := ctx.KVStore(k.storeKey) + denomMetaDataStore := prefix.NewStore(store, types.DenomMetadataPrefix) + m := k.cdc.MustMarshal(&denomMetaData) + +denomMetaDataStore.Set([]byte(denomMetaData.Base), m) +} + +/ SendCoinsFromModuleToAccount transfers coins from a ModuleAccount to an AccAddress. +/ It will panic if the module account does not exist. An error is returned if +/ the recipient address is black-listed or if sending the tokens fails. +func (k BaseKeeper) + +SendCoinsFromModuleToAccount( + ctx sdk.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins, +) + +error { + senderAddr := k.ak.GetModuleAddress(senderModule) + if senderAddr == nil { + panic(sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + if k.BlockedAddr(recipientAddr) { + return sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", recipientAddr) +} + +return k.SendCoins(ctx, senderAddr, recipientAddr, amt) +} + +/ SendCoinsFromModuleToModule transfers coins from a ModuleAccount to another. +/ It will panic if either module account does not exist. 
+func (k BaseKeeper) + +SendCoinsFromModuleToModule( + ctx sdk.Context, senderModule, recipientModule string, amt sdk.Coins, +) + +error { + senderAddr := k.ak.GetModuleAddress(senderModule) + if senderAddr == nil { + panic(sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + +return k.SendCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ SendCoinsFromAccountToModule transfers coins from an AccAddress to a ModuleAccount. +/ It will panic if the module account does not exist. +func (k BaseKeeper) + +SendCoinsFromAccountToModule( + ctx sdk.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins, +) + +error { + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + +return k.SendCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ DelegateCoinsFromAccountToModule delegates coins and transfers them from a +/ delegator account to a module account. It will panic if the module account +/ does not exist or is unauthorized. 
func (k BaseKeeper) DelegateCoinsFromAccountToModule(
    ctx sdk.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins,
) error {
    recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule)
    if recipientAcc == nil {
        panic(sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule))
    }

    if !recipientAcc.HasPermission(authtypes.Staking) {
        panic(sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to receive delegated coins", recipientModule))
    }

    return k.DelegateCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt)
}

// UndelegateCoinsFromModuleToAccount undelegates the unbonding coins and transfers
// them from a module account to the delegator account. It will panic if the
// module account does not exist or is unauthorized.
func (k BaseKeeper) UndelegateCoinsFromModuleToAccount(
    ctx sdk.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins,
) error {
    acc := k.ak.GetModuleAccount(ctx, senderModule)
    if acc == nil {
        panic(sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule))
    }

    if !acc.HasPermission(authtypes.Staking) {
        panic(sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to undelegate coins", senderModule))
    }

    return k.UndelegateCoins(ctx, acc.GetAddress(), recipientAddr, amt)
}

// MintCoins creates new coins from thin air and adds it to the module account.
// It will panic if the module account does not exist or is unauthorized.
func (k BaseKeeper) MintCoins(ctx sdk.Context, moduleName string, amounts sdk.Coins) error {
    err := k.mintCoinsRestrictionFn(ctx, amounts)
    if err != nil {
        ctx.Logger().Error(fmt.Sprintf("Module %q attempted to mint coins %s it doesn't have permission for, error %v", moduleName, amounts, err))
        return err
    }

    acc := k.ak.GetModuleAccount(ctx, moduleName)
    if acc == nil {
        panic(sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleName))
    }

    if !acc.HasPermission(authtypes.Minter) {
        panic(sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to mint tokens", moduleName))
    }

    err = k.addCoins(ctx, acc.GetAddress(), amounts)
    if err != nil {
        return err
    }

    for _, amount := range amounts {
        supply := k.GetSupply(ctx, amount.GetDenom())
        supply = supply.Add(amount)
        k.setSupply(ctx, supply)
    }

    logger := k.Logger(ctx)
    logger.Info("minted coins from module account", "amount", amounts.String(), "from", moduleName)

    // emit mint event
    ctx.EventManager().EmitEvent(
        types.NewCoinMintEvent(acc.GetAddress(), amounts),
    )

    return nil
}

// BurnCoins burns coins by removing them from the balance of the module account.
// It will panic if the module account does not exist or is unauthorized.
func (k BaseKeeper) BurnCoins(ctx sdk.Context, moduleName string, amounts sdk.Coins) error {
    acc := k.ak.GetModuleAccount(ctx, moduleName)
    if acc == nil {
        panic(sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleName))
    }

    if !acc.HasPermission(authtypes.Burner) {
        panic(sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to burn tokens", moduleName))
    }

    err := k.subUnlockedCoins(ctx, acc.GetAddress(), amounts)
    if err != nil {
        return err
    }

    for _, amount := range amounts {
        supply := k.GetSupply(ctx, amount.GetDenom())
        supply = supply.Sub(amount)
        k.setSupply(ctx, supply)
    }

    logger := k.Logger(ctx)
    logger.Info("burned tokens from module account", "amount", amounts.String(), "from", moduleName)

    // emit burn event
    ctx.EventManager().EmitEvent(
        types.NewCoinBurnEvent(acc.GetAddress(), amounts),
    )

    return nil
}

// setSupply sets the supply for the given coin
func (k BaseKeeper) setSupply(ctx sdk.Context, coin sdk.Coin) {
    intBytes, err := coin.Amount.Marshal()
    if err != nil {
        panic(fmt.Errorf("unable to marshal amount value %v", err))
    }

    store := ctx.KVStore(k.storeKey)
    supplyStore := prefix.NewStore(store, types.SupplyKey)

    // Bank invariants and IBC require the removal of zero coins.
    if coin.IsZero() {
        supplyStore.Delete(conv.UnsafeStrToBytes(coin.GetDenom()))
    } else {
        supplyStore.Set([]byte(coin.GetDenom()), intBytes)
    }
}

// trackDelegation tracks the delegation of the given account if it is a vesting account
func (k BaseKeeper) trackDelegation(ctx sdk.Context, addr sdk.AccAddress, balance, amt sdk.Coins) error {
    acc := k.ak.GetAccount(ctx, addr)
    if acc == nil {
        return sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr)
    }

    vacc, ok := acc.(types.VestingAccount)
    if ok {
        // TODO: return error on account.TrackDelegation
        vacc.TrackDelegation(ctx.BlockHeader().Time, balance, amt)
        k.ak.SetAccount(ctx, acc)
    }

    return nil
}

// trackUndelegation tracks undelegation of the given account if it is a vesting account
func (k BaseKeeper) trackUndelegation(ctx sdk.Context, addr sdk.AccAddress, amt sdk.Coins) error {
    acc := k.ak.GetAccount(ctx, addr)
    if acc == nil {
        return sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr)
    }

    vacc, ok := acc.(types.VestingAccount)
    if ok {
        // TODO: return error on account.TrackUndelegation
        vacc.TrackUndelegation(amt)
        k.ak.SetAccount(ctx, acc)
    }

    return nil
}

// IterateTotalSupply iterates over the total supply calling the given cb (callback) function
// with the balance of each coin.
// The iteration stops if the callback returns true.
func (k BaseViewKeeper) IterateTotalSupply(ctx sdk.Context, cb func(sdk.Coin) bool) {
    store := ctx.KVStore(k.storeKey)
    supplyStore := prefix.NewStore(store, types.SupplyKey)
    iterator := supplyStore.Iterator(nil, nil)

    defer sdk.LogDeferred(ctx.Logger(), func() error { return iterator.Close() })

    for ; iterator.Valid(); iterator.Next() {
        var amount math.Int
        err := amount.Unmarshal(iterator.Value())
        if err != nil {
            panic(fmt.Errorf("unable to unmarshal supply value %v", err))
        }

        balance := sdk.Coin{
            Denom:  string(iterator.Key()),
            Amount: amount,
        }
        if cb(balance) {
            break
        }
    }
}
```

Regardless of whether an error is wrapped, the Cosmos SDK's `errors` package provides the `Is` function to determine if an error is of a particular kind.

## ABCI

If a module error is registered, the Cosmos SDK `errors` package allows ABCI information to be extracted through the `ABCIInfo` function. The package also provides `ResponseCheckTx` and `ResponseDeliverTx` as auxiliary functions to automatically derive `CheckTx` and `DeliverTx` responses from an error.
diff --git a/docs/sdk/v0.47/documentation/module-system/genesis.mdx b/docs/sdk/v0.47/documentation/module-system/genesis.mdx
new file mode 100644
index 00000000..7d8494dd
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/genesis.mdx
@@ -0,0 +1,756 @@
---
title: Module Genesis
---

**Synopsis**
Modules generally handle a subset of the state and, as such, they need to define the related subset of the genesis file as well as methods to initialize, verify and export it.
### Pre-requisite Readings

* [Module Manager](/docs/sdk/v0.47/documentation/module-system/module-manager)
* [Keepers](/docs/sdk/v0.47/documentation/module-system/keeper)

## Type Definition

The subset of the genesis state defined by a given module is generally specified in a `genesis.proto` file ([more info](/docs/sdk/v0.47/learn/advanced/encoding#gogoproto) on how to define protobuf messages). The struct defining the module's subset of the genesis state is usually called `GenesisState` and contains all the module-related values that need to be initialized during the genesis process.

See an example of `GenesisState` protobuf message definition from the `auth` module:

```protobuf
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/auth/v1beta1/genesis.proto
```

Next we present the main genesis-related methods that module developers need to implement in order for their module to be used in Cosmos SDK applications.

### `DefaultGenesis`

The `DefaultGenesis()` method is a simple method that calls the constructor function for `GenesisState` with the default value for each parameter.
See an example from the `auth` module:

```go expandable
package auth

import (
    "context"
    "encoding/json"
    "fmt"

    gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime"
    "github.com/spf13/cobra"
    abci "github.com/tendermint/tendermint/abci/types"

    modulev1 "cosmossdk.io/api/cosmos/auth/module/v1"
    "cosmossdk.io/core/appmodule"
    "cosmossdk.io/depinject"

    "github.com/cosmos/cosmos-sdk/client"
    "github.com/cosmos/cosmos-sdk/codec"
    codectypes "github.com/cosmos/cosmos-sdk/codec/types"
    store "github.com/cosmos/cosmos-sdk/store/types"
    sdk "github.com/cosmos/cosmos-sdk/types"
    "github.com/cosmos/cosmos-sdk/types/module"
    simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
    "github.com/cosmos/cosmos-sdk/x/auth/client/cli"
    "github.com/cosmos/cosmos-sdk/x/auth/exported"
    "github.com/cosmos/cosmos-sdk/x/auth/keeper"
    "github.com/cosmos/cosmos-sdk/x/auth/simulation"
    "github.com/cosmos/cosmos-sdk/x/auth/types"
    govtypes "github.com/cosmos/cosmos-sdk/x/gov/types"
)

// ConsensusVersion defines the current x/auth module consensus version.
const ConsensusVersion = 4

var (
    _ module.AppModule           = AppModule{}
    _ module.AppModuleBasic      = AppModuleBasic{}
    _ module.AppModuleSimulation = AppModule{}
)

// AppModuleBasic defines the basic application module used by the auth module.
type AppModuleBasic struct{}

// Name returns the auth module's name.
func (AppModuleBasic) Name() string {
    return types.ModuleName
}

// RegisterLegacyAminoCodec registers the auth module's types for the given codec.
func (AppModuleBasic) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) {
    types.RegisterLegacyAminoCodec(cdc)
}

// DefaultGenesis returns default genesis state as raw bytes for the auth
// module.
func (AppModuleBasic) DefaultGenesis(cdc codec.JSONCodec) json.RawMessage {
    return cdc.MustMarshalJSON(types.DefaultGenesisState())
}

// ValidateGenesis performs genesis state validation for the auth module.
func (AppModuleBasic) ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) error {
    var data types.GenesisState
    if err := cdc.UnmarshalJSON(bz, &data); err != nil {
        return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err)
    }

    return types.ValidateGenesis(data)
}

// RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the auth module.
func (AppModuleBasic) RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) {
    if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil {
        panic(err)
    }
}

// GetTxCmd returns the root tx command for the auth module.
func (AppModuleBasic) GetTxCmd() *cobra.Command {
    return nil
}

// GetQueryCmd returns the root query command for the auth module.
func (AppModuleBasic) GetQueryCmd() *cobra.Command {
    return cli.GetQueryCmd()
}

// RegisterInterfaces registers interfaces and implementations of the auth module.
func (AppModuleBasic) RegisterInterfaces(registry codectypes.InterfaceRegistry) {
    types.RegisterInterfaces(registry)
}

// AppModule implements an application module for the auth module.
type AppModule struct {
    AppModuleBasic

    accountKeeper     keeper.AccountKeeper
    randGenAccountsFn types.RandomGenesisAccountsFn

    // legacySubspace is used solely for migration of x/params managed parameters
    legacySubspace exported.Subspace
}

var _ appmodule.AppModule = AppModule{}

// IsOnePerModuleType implements the depinject.OnePerModuleType interface.
func (am AppModule) IsOnePerModuleType() {}

// IsAppModule implements the appmodule.AppModule interface.
func (am AppModule) IsAppModule() {}

// NewAppModule creates a new AppModule object
func NewAppModule(cdc codec.Codec, accountKeeper keeper.AccountKeeper, randGenAccountsFn types.RandomGenesisAccountsFn, ss exported.Subspace) AppModule {
    return AppModule{
        AppModuleBasic:    AppModuleBasic{},
        accountKeeper:     accountKeeper,
        randGenAccountsFn: randGenAccountsFn,
        legacySubspace:    ss,
    }
}

// Name returns the auth module's name.
func (AppModule) Name() string {
    return types.ModuleName
}

// RegisterInvariants performs a no-op.
func (AppModule) RegisterInvariants(_ sdk.InvariantRegistry) {}

// RegisterServices registers a GRPC query service to respond to the
// module-specific GRPC queries.
func (am AppModule) RegisterServices(cfg module.Configurator) {
    types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.accountKeeper))
    types.RegisterQueryServer(cfg.QueryServer(), am.accountKeeper)

    m := keeper.NewMigrator(am.accountKeeper, cfg.QueryServer(), am.legacySubspace)
    if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil {
        panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err))
    }
    if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil {
        panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err))
    }
    if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil {
        panic(fmt.Sprintf("failed to migrate x/%s from version 3 to 4: %v", types.ModuleName, err))
    }

    // see migrations/v5/doc.go
    if err := cfg.RegisterMigration(types.ModuleName, 4, func(ctx sdk.Context) error {
        return nil
    }); err != nil {
        panic(fmt.Sprintf("failed to migrate x/%s from version 4 to 5: %v", types.ModuleName, err))
    }
}

// InitGenesis performs genesis initialization for the auth module. It returns
// no validator updates.
func (am AppModule) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate {
    var genesisState types.GenesisState
    cdc.MustUnmarshalJSON(data, &genesisState)
    am.accountKeeper.InitGenesis(ctx, genesisState)

    return []abci.ValidatorUpdate{}
}

// ExportGenesis returns the exported genesis state as raw bytes for the auth
// module.
func (am AppModule) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) json.RawMessage {
    gs := am.accountKeeper.ExportGenesis(ctx)
    return cdc.MustMarshalJSON(gs)
}

// ConsensusVersion implements AppModule/ConsensusVersion.
func (AppModule) ConsensusVersion() uint64 {
    return ConsensusVersion
}

// AppModuleSimulation functions

// GenerateGenesisState creates a randomized GenState of the auth module
func (am AppModule) GenerateGenesisState(simState *module.SimulationState) {
    simulation.RandomizedGenState(simState, am.randGenAccountsFn)
}

// ProposalContents doesn't return any content functions for governance proposals.
func (AppModule) ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent {
    return nil
}

// RegisterStoreDecoder registers a decoder for auth module's types
func (am AppModule) RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) {
    sdr[types.StoreKey] = simulation.NewDecodeStore(am.accountKeeper)
}

// WeightedOperations doesn't return any auth module operation.
func (AppModule) WeightedOperations(_ module.SimulationState) []simtypes.WeightedOperation {
    return nil
}

//
// App Wiring Setup
//

func init() {
    appmodule.Register(&modulev1.Module{},
        appmodule.Provide(ProvideModule),
    )
}

type AuthInputs struct {
    depinject.In

    Config *modulev1.Module
    Key    *store.KVStoreKey
    Cdc    codec.Codec

    RandomGenesisAccountsFn types.RandomGenesisAccountsFn `optional:"true"`
    AccountI                func() types.AccountI         `optional:"true"`

    // LegacySubspace is used solely for migration of x/params managed parameters
    LegacySubspace exported.Subspace `optional:"true"`
}

type AuthOutputs struct {
    depinject.Out

    AccountKeeper keeper.AccountKeeper
    Module        appmodule.AppModule
}

func ProvideModule(in AuthInputs) AuthOutputs {
    maccPerms := map[string][]string{}
    for _, permission := range in.Config.ModuleAccountPermissions {
        maccPerms[permission.Account] = permission.Permissions
    }

    // default to governance authority if not provided
    authority := types.NewModuleAddress(govtypes.ModuleName)
    if in.Config.Authority != "" {
        authority = types.NewModuleAddressOrBech32Address(in.Config.Authority)
    }

    if in.RandomGenesisAccountsFn == nil {
        in.RandomGenesisAccountsFn = simulation.RandomGenesisAccounts
    }
    if in.AccountI == nil {
        in.AccountI = types.ProtoBaseAccount
    }

    k := keeper.NewAccountKeeper(in.Cdc, in.Key, in.AccountI, maccPerms, in.Config.Bech32Prefix, authority.String())
    m := NewAppModule(in.Cdc, k, in.RandomGenesisAccountsFn, in.LegacySubspace)

    return AuthOutputs{
        AccountKeeper: k,
        Module:        m,
    }
}
```

### `ValidateGenesis`

The `ValidateGenesis(data GenesisState)` method is called to verify that the provided `genesisState` is correct. It should perform validity checks on each of the parameters listed in `GenesisState`.
See an example from the `auth` module:

```go expandable
package types

import (
    "encoding/json"
    "fmt"
    "sort"

    proto "github.com/cosmos/gogoproto/proto"

    "github.com/cosmos/cosmos-sdk/codec"
    "github.com/cosmos/cosmos-sdk/codec/types"
    "github.com/cosmos/cosmos-sdk/types/module"
)

var _ types.UnpackInterfacesMessage = GenesisState{}

// RandomGenesisAccountsFn defines the function required to generate custom account types
type RandomGenesisAccountsFn func(simState *module.SimulationState) GenesisAccounts

// NewGenesisState - Create a new genesis state
func NewGenesisState(params Params, accounts GenesisAccounts) *GenesisState {
    genAccounts, err := PackAccounts(accounts)
    if err != nil {
        panic(err)
    }

    return &GenesisState{
        Params:   params,
        Accounts: genAccounts,
    }
}

// UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces
func (g GenesisState) UnpackInterfaces(unpacker types.AnyUnpacker) error {
    for _, any := range g.Accounts {
        var account GenesisAccount
        err := unpacker.UnpackAny(any, &account)
        if err != nil {
            return err
        }
    }

    return nil
}

// DefaultGenesisState - Return a default genesis state
func DefaultGenesisState() *GenesisState {
    return NewGenesisState(DefaultParams(), GenesisAccounts{})
}

// GetGenesisStateFromAppState returns x/auth GenesisState given raw application
// genesis state.
func GetGenesisStateFromAppState(cdc codec.Codec, appState map[string]json.RawMessage) GenesisState {
    var genesisState GenesisState
    if appState[ModuleName] != nil {
        cdc.MustUnmarshalJSON(appState[ModuleName], &genesisState)
    }

    return genesisState
}

// ValidateGenesis performs basic validation of auth genesis data returning an
// error for any failed validation criteria.
func ValidateGenesis(data GenesisState) error {
    if err := data.Params.Validate(); err != nil {
        return err
    }

    genAccs, err := UnpackAccounts(data.Accounts)
    if err != nil {
        return err
    }

    return ValidateGenAccounts(genAccs)
}

// SanitizeGenesisAccounts sorts accounts and coin sets.
func SanitizeGenesisAccounts(genAccs GenesisAccounts) GenesisAccounts {
    // Make sure there aren't any duplicated account numbers by fixing the duplicates with the lowest unused values.
    // seenAccNum = easy lookup for used account numbers.
    seenAccNum := map[uint64]bool{}
    // dupAccNum = a map of account number to accounts with duplicate account numbers (excluding the 1st one seen).
    dupAccNum := map[uint64]GenesisAccounts{}
    for _, acc := range genAccs {
        num := acc.GetAccountNumber()
        if !seenAccNum[num] {
            seenAccNum[num] = true
        } else {
            dupAccNum[num] = append(dupAccNum[num], acc)
        }
    }

    // dupAccNums a sorted list of the account numbers with duplicates.
    var dupAccNums []uint64
    for num := range dupAccNum {
        dupAccNums = append(dupAccNums, num)
    }
    sort.Slice(dupAccNums, func(i, j int) bool {
        return dupAccNums[i] < dupAccNums[j]
    })

    // Change the account number of the duplicated ones to the first unused value.
    globalNum := uint64(0)
    for _, dupNum := range dupAccNums {
        accs := dupAccNum[dupNum]
        for _, acc := range accs {
            for seenAccNum[globalNum] {
                globalNum++
            }
            if err := acc.SetAccountNumber(globalNum); err != nil {
                panic(err)
            }
            seenAccNum[globalNum] = true
        }
    }

    // Then sort them all by account number.
    sort.Slice(genAccs, func(i, j int) bool {
        return genAccs[i].GetAccountNumber() < genAccs[j].GetAccountNumber()
    })

    return genAccs
}

// ValidateGenAccounts validates an array of GenesisAccounts and checks for duplicates
func ValidateGenAccounts(accounts GenesisAccounts) error {
    addrMap := make(map[string]bool, len(accounts))
    for _, acc := range accounts {
        // check for duplicated accounts
        addrStr := acc.GetAddress().String()
        if _, ok := addrMap[addrStr]; ok {
            return fmt.Errorf("duplicate account found in genesis state; address: %s", addrStr)
        }
        addrMap[addrStr] = true

        // check account specific validation
        if err := acc.Validate(); err != nil {
            return fmt.Errorf("invalid account found in genesis state; address: %s, error: %s", addrStr, err.Error())
        }
    }

    return nil
}

// GenesisAccountIterator implements genesis account iteration.
type GenesisAccountIterator struct{}

// IterateGenesisAccounts iterates over all the genesis accounts found in
// appGenesis and invokes a callback on each genesis account. If any call
// returns true, iteration stops.
func (GenesisAccountIterator) IterateGenesisAccounts(
    cdc codec.Codec, appGenesis map[string]json.RawMessage, cb func(AccountI) (stop bool),
) {
    for _, genAcc := range GetGenesisStateFromAppState(cdc, appGenesis).Accounts {
        acc, ok := genAcc.GetCachedValue().(AccountI)
        if !ok {
            panic("expected account")
        }
        if cb(acc) {
            break
        }
    }
}

// PackAccounts converts GenesisAccounts to Any slice
func PackAccounts(accounts GenesisAccounts) ([]*types.Any, error) {
    accountsAny := make([]*types.Any, len(accounts))
    for i, acc := range accounts {
        msg, ok := acc.(proto.Message)
        if !ok {
            return nil, fmt.Errorf("cannot proto marshal %T", acc)
        }

        any, err := types.NewAnyWithValue(msg)
        if err != nil {
            return nil, err
        }

        accountsAny[i] = any
    }

    return accountsAny, nil
}

// UnpackAccounts converts Any slice to GenesisAccounts
func UnpackAccounts(accountsAny []*types.Any) (GenesisAccounts, error) {
    accounts := make(GenesisAccounts, len(accountsAny))
    for i, any := range accountsAny {
        acc, ok := any.GetCachedValue().(GenesisAccount)
        if !ok {
            return nil, fmt.Errorf("expected genesis account")
        }

        accounts[i] = acc
    }

    return accounts, nil
}
```

## Other Genesis Methods

Other than the methods related directly to `GenesisState`, module developers are expected to implement two other methods as part of the [`AppModuleGenesis` interface](/docs/sdk/v0.47/documentation/module-system/module-manager#appmodulegenesis) (only if the module needs to initialize a subset of state in genesis). These methods are [`InitGenesis`](#initgenesis) and [`ExportGenesis`](#exportgenesis).

### `InitGenesis`

The `InitGenesis` method is executed during [`InitChain`](/docs/sdk/v0.47/learn/advanced/baseapp#initchain) when the application is first started.
Given a `GenesisState`, it initializes the subset of the state managed by the module by using the module's [`keeper`](/docs/sdk/v0.47/documentation/module-system/keeper) setter function on each parameter within the `GenesisState`.

The [module manager](/docs/sdk/v0.47/documentation/module-system/module-manager#manager) of the application is responsible for calling the `InitGenesis` method of each of the application's modules in order. This order is set by the application developer via the manager's `SetOrderInitGenesis` method, which is called in the [application's constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).

See an example of `InitGenesis` from the `auth` module:

```go expandable
package keeper

import (
    sdk "github.com/cosmos/cosmos-sdk/types"

    "github.com/cosmos/cosmos-sdk/x/auth/types"
)

// InitGenesis - Init store state from genesis data
//
// CONTRACT: old coins from the FeeCollectionKeeper need to be transferred through
// a genesis port script to the new fee collector account
func (ak AccountKeeper) InitGenesis(ctx sdk.Context, data types.GenesisState) {
    if err := ak.SetParams(ctx, data.Params); err != nil {
        panic(err)
    }

    accounts, err := types.UnpackAccounts(data.Accounts)
    if err != nil {
        panic(err)
    }
    accounts = types.SanitizeGenesisAccounts(accounts)

    // Set the accounts and make sure the global account number matches the largest account number (even if zero).
    var lastAccNum *uint64
    for _, acc := range accounts {
        accNum := acc.GetAccountNumber()
        for lastAccNum == nil || *lastAccNum < accNum {
            n := ak.NextAccountNumber(ctx)
            lastAccNum = &n
        }
        ak.SetAccount(ctx, acc)
    }

    ak.GetModuleAccount(ctx, types.FeeCollectorName)
}

// ExportGenesis returns a GenesisState for a given context and keeper
func (ak AccountKeeper) ExportGenesis(ctx sdk.Context) *types.GenesisState {
    params := ak.GetParams(ctx)

    var genAccounts types.GenesisAccounts
    ak.IterateAccounts(ctx, func(account types.AccountI) bool {
        genAccount := account.(types.GenesisAccount)
        genAccounts = append(genAccounts, genAccount)
        return false
    })

    return types.NewGenesisState(params, genAccounts)
}
```

### `ExportGenesis`

The `ExportGenesis` method is executed whenever an export of the state is made. It takes the latest known version of the subset of the state managed by the module and creates a new `GenesisState` out of it. This is mainly used when the chain needs to be upgraded via a hard fork.

See an example of `ExportGenesis` from the `auth` module.

```go expandable
package keeper

import (
    sdk "github.com/cosmos/cosmos-sdk/types"

    "github.com/cosmos/cosmos-sdk/x/auth/types"
)

// InitGenesis - Init store state from genesis data
//
// CONTRACT: old coins from the FeeCollectionKeeper need to be transferred through
// a genesis port script to the new fee collector account
func (ak AccountKeeper) InitGenesis(ctx sdk.Context, data types.GenesisState) {
    if err := ak.SetParams(ctx, data.Params); err != nil {
        panic(err)
    }

    accounts, err := types.UnpackAccounts(data.Accounts)
    if err != nil {
        panic(err)
    }
    accounts = types.SanitizeGenesisAccounts(accounts)

    // Set the accounts and make sure the global account number matches the largest account number (even if zero).
    var lastAccNum *uint64
    for _, acc := range accounts {
        accNum := acc.GetAccountNumber()
        for lastAccNum == nil || *lastAccNum < accNum {
            n := ak.NextAccountNumber(ctx)
            lastAccNum = &n
        }
        ak.SetAccount(ctx, acc)
    }

    ak.GetModuleAccount(ctx, types.FeeCollectorName)
}

// ExportGenesis returns a GenesisState for a given context and keeper
func (ak AccountKeeper) ExportGenesis(ctx sdk.Context) *types.GenesisState {
    params := ak.GetParams(ctx)

    var genAccounts types.GenesisAccounts
    ak.IterateAccounts(ctx, func(account types.AccountI) bool {
        genAccount := account.(types.GenesisAccount)
        genAccounts = append(genAccounts, genAccount)
        return false
    })

    return types.NewGenesisState(params, genAccounts)
}
```

### GenesisTxHandler

`GenesisTxHandler` is a way for modules to submit state transitions prior to the first block. This is used by `x/genutil` to submit the genesis transactions for the validators to be added to staking.

```go
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/advanced/genesis/txhandler.go#L3-L6
```
diff --git a/docs/sdk/v0.47/documentation/module-system/intro.mdx b/docs/sdk/v0.47/documentation/module-system/intro.mdx
new file mode 100644
index 00000000..e63ec6d4
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/intro.mdx
@@ -0,0 +1,93 @@
---
title: Introduction to Cosmos SDK Modules
---

**Synopsis**
Modules define most of the logic of Cosmos SDK applications. Developers compose modules together using the Cosmos SDK to build their custom application-specific blockchains. This document outlines the basic concepts behind SDK modules and how to approach module management.
+ + + + +### Pre-requisite Readings + +* [Anatomy of a Cosmos SDK application](/docs/sdk/v0.47/learn/beginner/overview-app) +* [Lifecycle of a Cosmos SDK transaction](/docs/sdk/v0.47/learn/beginner/tx-lifecycle) + + + +## Role of Modules in a Cosmos SDK Application + +The Cosmos SDK can be thought of as the Ruby-on-Rails of blockchain development. It comes with a core that provides the basic functionalities every blockchain application needs, like a [boilerplate implementation of the ABCI](/docs/sdk/v0.47/learn/advanced/baseapp) to communicate with the underlying consensus engine, a [`multistore`](/docs/sdk/v0.47/learn/advanced/store#multistore) to persist state, a [server](/docs/sdk/v0.47/learn/advanced/node) to form a full-node and [interfaces](/docs/sdk/v0.47/documentation/module-system/module-interfaces) to handle queries. + +On top of this core, the Cosmos SDK enables developers to build modules that implement the business logic of their application. In other words, SDK modules implement the bulk of the logic of applications, while the core does the wiring and enables modules to be composed together. The end goal is to build a robust ecosystem of open-source Cosmos SDK modules, making it increasingly easier to build complex blockchain applications. + +Cosmos SDK modules can be seen as little state-machines within the state-machine. They generally define a subset of the state using one or more `KVStore`s in the [main multistore](/docs/sdk/v0.47/learn/advanced/store), as well as a subset of [message types](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages). These messages are routed by one of the main components of Cosmos SDK core, [`BaseApp`](/docs/sdk/v0.47/learn/advanced/baseapp), to a module Protobuf [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services) that defines them. 
+ +```text expandable + + + | + | Transaction relayed from the full-node's consensus engine + | to the node's application via DeliverTx + | + | + | + +---------------------v--------------------------+ + | APPLICATION | + | | + | Using baseapp's methods: Decode the Tx, | + | extract and route the message(s) | + | | + +---------------------+--------------------------+ + | + | + | + +---------------------------+ + | + | + | + | Message routed to the correct + | module to be processed + | + | ++----------------+ +---------------+ +----------------+ +------v----------+ +| | | | | | | | +| AUTH MODULE | | BANK MODULE | | STAKING MODULE | | GOV MODULE | +| | | | | | | | +| | | | | | | Handles message,| +| | | | | | | Updates state | +| | | | | | | | ++----------------+ +---------------+ +----------------+ +------+----------+ + | + | + | + | + +--------------------------+ + | + | Return result to the underlying consensus engine (e.g. CometBFT) + | (0=Ok, 1=Err) + v +``` + +As a result of this architecture, building a Cosmos SDK application usually revolves around writing modules to implement the specialized logic of the application and composing them with existing modules to complete the application. Developers will generally work on modules that implement logic needed for their specific use case that do not exist yet, and will use existing modules for more generic functionalities like staking, accounts, or token management. + +## How to Approach Building Modules as a Developer + +While there are no definitive guidelines for writing modules, here are some important design principles developers should keep in mind when building them: + +* **Composability**: Cosmos SDK applications are almost always composed of multiple modules. This means developers need to carefully consider the integration of their module not only with the core of the Cosmos SDK, but also with other modules. 
The former is achieved by following standard design patterns outlined [here](#main-components-of-cosmos-sdk-modules), while the latter is achieved by properly exposing the store(s) of the module via the [`keeper`](/docs/sdk/v0.47/documentation/module-system/keeper). +* **Specialization**: A direct consequence of the **composability** feature is that modules should be **specialized**. Developers should carefully establish the scope of their module and not batch multiple functionalities into the same module. This separation of concerns enables modules to be re-used in other projects and improves the upgradability of the application. **Specialization** also plays an important role in the [object-capabilities model](/docs/sdk/v0.47/learn/advanced/ocap) of the Cosmos SDK. +* **Capabilities**: Most modules need to read and/or write to the store(s) of other modules. However, in an open-source environment, it is possible for some modules to be malicious. That is why module developers need to carefully think not only about how their module interacts with other modules, but also about how to give access to the module's store(s). The Cosmos SDK takes a capabilities-oriented approach to inter-module security. This means that each store defined by a module is accessed by a `key`, which is held by the module's [`keeper`](/docs/sdk/v0.47/documentation/module-system/keeper). This `keeper` defines how to access the store(s) and under what conditions. Access to the module's store(s) is done by passing a reference to the module's `keeper`. + +## Main Components of Cosmos SDK Modules + +Modules are by convention defined in the `./x/` subfolder (e.g. the `bank` module will be defined in the `./x/bank` folder). They generally share the same core components: + +* A [`keeper`](/docs/sdk/v0.47/documentation/module-system/keeper), used to access the module's store(s) and update the state. 
+* A [`Msg` service](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages), used to process messages when they are routed to the module by [`BaseApp`](/docs/sdk/v0.47/learn/advanced/baseapp#message-routing) and trigger state-transitions. +* A [query service](/docs/sdk/v0.47/documentation/module-system/query-services), used to process user queries when they are routed to the module by [`BaseApp`](/docs/sdk/v0.47/learn/advanced/baseapp#query-routing). +* Interfaces, for end users to query the subset of the state defined by the module and create `message`s of the custom types defined in the module. + +In addition to these components, modules implement the `AppModule` interface in order to be managed by the [`module manager`](/docs/sdk/v0.47/documentation/module-system/module-manager). + +Please refer to the [structure document](/docs/sdk/v0.47/documentation/module-system/structure) to learn about the recommended structure of a module's directory. diff --git a/docs/sdk/v0.47/documentation/module-system/invariants.mdx b/docs/sdk/v0.47/documentation/module-system/invariants.mdx new file mode 100644 index 00000000..1cdae721 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/invariants.mdx @@ -0,0 +1,504 @@ +--- +title: Invariants +--- + + +**Synopsis** +An invariant is a property of the application that should always be true. In the context of the Cosmos SDK, an `Invariant` is a function that checks for a particular invariant. These functions are useful to detect bugs early on and act upon them to limit their potential consequences (e.g. by halting the chain). They are also useful in the development process of the application to detect bugs via simulations. + + + + +### Pre-requisite Readings + +* [Keepers](/docs/sdk/v0.47/documentation/module-system/keeper) + + + +## Implementing `Invariant`s + +An `Invariant` is a function that checks for a particular invariant within a module. 
Module `Invariant`s must follow the `Invariant` type: + +```go expandable +package types + +import "fmt" + +/ An Invariant is a function which tests a particular invariant. +/ The invariant returns a descriptive message about what happened +/ and a boolean indicating whether the invariant has been broken. +/ The simulator will then halt and print the logs. +type Invariant func(ctx Context) (string, bool) + +/ Invariants defines a group of invariants +type Invariants []Invariant + +/ expected interface for registering invariants +type InvariantRegistry interface { + RegisterRoute(moduleName, route string, invar Invariant) +} + +/ FormatInvariant returns a standardized invariant message. +func FormatInvariant(module, name, msg string) + +string { + return fmt.Sprintf("%s: %s invariant\n%s\n", module, name, msg) +} +``` + +The `string` return value is the invariant message, which can be used when printing logs, and the `bool` return value is the actual result of the invariant check. + +In practice, each module implements `Invariant`s in a `keeper/invariants.go` file within the module's folder. The standard is to implement one `Invariant` function per logical grouping of invariants with the following model: + +```go +/ Example for an Invariant that checks balance-related invariants + +func BalanceInvariants(k Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + / Implement checks for balance-related invariants +} +} +``` + +Additionally, module developers should generally implement an `AllInvariants` function that runs all the `Invariant`s functions of the module: + +```go expandable +/ AllInvariants runs all invariants of the module. 
+/ In this example, the module implements two Invariants: BalanceInvariants and DepositsInvariants + +func AllInvariants(k Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + res, stop := BalanceInvariants(k)(ctx) + if stop { + return res, stop +} + +return DepositsInvariants(k)(ctx) +} +} +``` + +Finally, module developers need to implement the `RegisterInvariants` method as part of the [`AppModule` interface](/docs/sdk/v0.47/documentation/module-system/module-manager#appmodule). Indeed, the `RegisterInvariants` method of the module, implemented in the `module/module.go` file, typically only defers the call to a `RegisterInvariants` method implemented in the `keeper/invariants.go` file. The `RegisterInvariants` method registers a route for each `Invariant` function in the [`InvariantRegistry`](#invariant-registry): + +```go expandable +package keeper + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ RegisterInvariants registers all staking invariants +func RegisterInvariants(ir sdk.InvariantRegistry, k *Keeper) { + ir.RegisterRoute(types.ModuleName, "module-accounts", + ModuleAccountInvariants(k)) + +ir.RegisterRoute(types.ModuleName, "nonnegative-power", + NonNegativePowerInvariant(k)) + +ir.RegisterRoute(types.ModuleName, "positive-delegation", + PositiveDelegationInvariant(k)) + +ir.RegisterRoute(types.ModuleName, "delegator-shares", + DelegatorSharesInvariant(k)) +} + +/ AllInvariants runs all invariants of the staking module.
+func AllInvariants(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + res, stop := ModuleAccountInvariants(k)(ctx) + if stop { + return res, stop +} + +res, stop = NonNegativePowerInvariant(k)(ctx) + if stop { + return res, stop +} + +res, stop = PositiveDelegationInvariant(k)(ctx) + if stop { + return res, stop +} + +return DelegatorSharesInvariant(k)(ctx) +} +} + +/ ModuleAccountInvariants checks that the bonded and notBonded ModuleAccounts pools +/ reflects the tokens actively bonded and not bonded +func ModuleAccountInvariants(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + bonded := math.ZeroInt() + notBonded := math.ZeroInt() + bondedPool := k.GetBondedPool(ctx) + notBondedPool := k.GetNotBondedPool(ctx) + bondDenom := k.BondDenom(ctx) + +k.IterateValidators(ctx, func(_ int64, validator types.ValidatorI) + +bool { + switch validator.GetStatus() { + case types.Bonded: + bonded = bonded.Add(validator.GetTokens()) + case types.Unbonding, types.Unbonded: + notBonded = notBonded.Add(validator.GetTokens()) + +default: + panic("invalid validator status") +} + +return false +}) + +k.IterateUnbondingDelegations(ctx, func(_ int64, ubd types.UnbondingDelegation) + +bool { + for _, entry := range ubd.Entries { + notBonded = notBonded.Add(entry.Balance) +} + +return false +}) + poolBonded := k.bankKeeper.GetBalance(ctx, bondedPool.GetAddress(), bondDenom) + poolNotBonded := k.bankKeeper.GetBalance(ctx, notBondedPool.GetAddress(), bondDenom) + broken := !poolBonded.Amount.Equal(bonded) || !poolNotBonded.Amount.Equal(notBonded) + + / Bonded tokens should equal sum of tokens with bonded validators + / Not-bonded tokens should equal unbonding delegations plus tokens on unbonded validators + return sdk.FormatInvariant(types.ModuleName, "bonded and not bonded module account coins", fmt.Sprintf( + "\tPool's bonded tokens: %v\n"+ + "\tsum of bonded tokens: %v\n"+ + "not bonded token invariance:\n"+ + "\tPool's not 
bonded tokens: %v\n"+ + "\tsum of not bonded tokens: %v\n"+ + "module accounts total (bonded + not bonded):\n"+ + "\tModule Accounts' tokens: %v\n"+ + "\tsum tokens: %v\n", + poolBonded, bonded, poolNotBonded, notBonded, poolBonded.Add(poolNotBonded), bonded.Add(notBonded))), broken +} +} + +/ NonNegativePowerInvariant checks that all stored validators have >= 0 power. +func NonNegativePowerInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + broken bool + ) + iterator := k.ValidatorsPowerStoreIterator(ctx) + for ; iterator.Valid(); iterator.Next() { + validator, found := k.GetValidator(ctx, iterator.Value()) + if !found { + panic(fmt.Sprintf("validator record not found for address: %X\n", iterator.Value())) +} + powerKey := types.GetValidatorsByPowerIndexKey(validator, k.PowerReduction(ctx)) + if !bytes.Equal(iterator.Key(), powerKey) { + broken = true + msg += fmt.Sprintf("power store invariance:\n\tvalidator.Power: %v"+ + "\n\tkey should be: %v\n\tkey in store: %v\n", + validator.GetConsensusPower(k.PowerReduction(ctx)), powerKey, iterator.Key()) +} + if validator.Tokens.IsNegative() { + broken = true + msg += fmt.Sprintf("\tnegative tokens for validator: %v\n", validator) +} + +} + +iterator.Close() + +return sdk.FormatInvariant(types.ModuleName, "nonnegative power", fmt.Sprintf("found invalid validator powers\n%s", msg)), broken +} +} + +/ PositiveDelegationInvariant checks that all stored delegations have > 0 shares. 
+func PositiveDelegationInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + count int + ) + delegations := k.GetAllDelegations(ctx) + for _, delegation := range delegations { + if delegation.Shares.IsNegative() { + count++ + msg += fmt.Sprintf("\tdelegation with negative shares: %+v\n", delegation) +} + if delegation.Shares.IsZero() { + count++ + msg += fmt.Sprintf("\tdelegation with zero shares: %+v\n", delegation) +} + +} + broken := count != 0 + + return sdk.FormatInvariant(types.ModuleName, "positive delegations", fmt.Sprintf( + "%d invalid delegations found\n%s", count, msg)), broken +} +} + +/ DelegatorSharesInvariant checks whether all the delegator shares which persist +/ in the delegator object add up to the correct total delegator shares +/ amount stored in each validator. +func DelegatorSharesInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + broken bool + ) + validators := k.GetAllValidators(ctx) + validatorsDelegationShares := map[string]sdk.Dec{ +} + + / initialize a map: validator -> its delegation shares + for _, validator := range validators { + validatorsDelegationShares[validator.GetOperator().String()] = math.LegacyZeroDec() +} + + / iterate through all the delegations to calculate the total delegation shares for each validator + delegations := k.GetAllDelegations(ctx) + for _, delegation := range delegations { + delegationValidatorAddr := delegation.GetValidatorAddr().String() + validatorDelegationShares := validatorsDelegationShares[delegationValidatorAddr] + validatorsDelegationShares[delegationValidatorAddr] = validatorDelegationShares.Add(delegation.Shares) +} + + / for each validator, check if its total delegation shares calculated from the step above equals to its expected delegation shares + for _, validator := range validators { + expValTotalDelShares := validator.GetDelegatorShares() + calculatedValTotalDelShares := 
validatorsDelegationShares[validator.GetOperator().String()] + if !calculatedValTotalDelShares.Equal(expValTotalDelShares) { + broken = true + msg += fmt.Sprintf("broken delegator shares invariance:\n"+ + "\tvalidator.DelegatorShares: %v\n"+ + "\tsum of Delegator.Shares: %v\n", expValTotalDelShares, calculatedValTotalDelShares) +} + +} + +return sdk.FormatInvariant(types.ModuleName, "delegator shares", msg), broken +} +} +``` + +For more, see an example of [`Invariant`s implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/keeper/invariants.go). + +## Invariant Registry + +The `InvariantRegistry` is a registry where the `Invariant`s of all the modules of an application are registered. There is only one `InvariantRegistry` per **application**, meaning module developers need not implement their own `InvariantRegistry` when building a module. **All module developers need to do is to register their modules' invariants in the `InvariantRegistry`, as explained in the section above**. The rest of this section gives more information on the `InvariantRegistry` itself, and does not contain anything directly relevant to module developers. + +At its core, the `InvariantRegistry` is defined in the Cosmos SDK as an interface: + +```go expandable +package types + +import "fmt" + +/ An Invariant is a function which tests a particular invariant. +/ The invariant returns a descriptive message about what happened +/ and a boolean indicating whether the invariant has been broken. +/ The simulator will then halt and print the logs. +type Invariant func(ctx Context) (string, bool) + +/ Invariants defines a group of invariants +type Invariants []Invariant + +/ expected interface for registering invariants +type InvariantRegistry interface { + RegisterRoute(moduleName, route string, invar Invariant) +} + +/ FormatInvariant returns a standardized invariant message. 
+func FormatInvariant(module, name, msg string) + +string { + return fmt.Sprintf("%s: %s invariant\n%s\n", module, name, msg) +} +``` + +Typically, this interface is implemented in the `keeper` of a specific module. The most used implementation of an `InvariantRegistry` can be found in the `crisis` module: + +```go expandable +package keeper + +import ( + + "fmt" + "time" + "github.com/tendermint/tendermint/libs/log" + "github.com/cosmos/cosmos-sdk/codec" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/crisis/types" +) + +/ Keeper - crisis keeper +type Keeper struct { + routes []types.InvarRoute + invCheckPeriod uint + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + + / the address capable of executing a MsgUpdateParams message. Typically, this + / should be the x/gov module account. + authority string + + supplyKeeper types.SupplyKeeper + + feeCollectorName string / name of the FeeCollector ModuleAccount +} + +/ NewKeeper creates a new Keeper object +func NewKeeper( + cdc codec.BinaryCodec, storeKey storetypes.StoreKey, invCheckPeriod uint, + supplyKeeper types.SupplyKeeper, feeCollectorName string, authority string, +) *Keeper { + return &Keeper{ + storeKey: storeKey, + cdc: cdc, + routes: make([]types.InvarRoute, 0), + invCheckPeriod: invCheckPeriod, + supplyKeeper: supplyKeeper, + feeCollectorName: feeCollectorName, + authority: authority, +} +} + +/ GetAuthority returns the x/crisis module's authority. +func (k *Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ Logger returns a module-specific logger. 
+func (k *Keeper) + +Logger(ctx sdk.Context) + +log.Logger { + return ctx.Logger().With("module", "x/"+types.ModuleName) +} + +/ RegisterRoute register the routes for each of the invariants +func (k *Keeper) + +RegisterRoute(moduleName, route string, invar sdk.Invariant) { + invarRoute := types.NewInvarRoute(moduleName, route, invar) + +k.routes = append(k.routes, invarRoute) +} + +/ Routes - return the keeper's invariant routes +func (k *Keeper) + +Routes() []types.InvarRoute { + return k.routes +} + +/ Invariants returns a copy of all registered Crisis keeper invariants. +func (k *Keeper) + +Invariants() []sdk.Invariant { + invars := make([]sdk.Invariant, len(k.routes)) + for i, route := range k.routes { + invars[i] = route.Invar +} + +return invars +} + +/ AssertInvariants asserts all registered invariants. If any invariant fails, +/ the method panics. +func (k *Keeper) + +AssertInvariants(ctx sdk.Context) { + logger := k.Logger(ctx) + start := time.Now() + invarRoutes := k.Routes() + n := len(invarRoutes) + for i, ir := range invarRoutes { + logger.Info("asserting crisis invariants", "inv", fmt.Sprint(i+1, "/", n), "name", ir.FullRoute()) + if res, stop := ir.Invar(ctx); stop { + / TODO: Include app name as part of context to allow for this to be + / variable. + panic(fmt.Errorf("invariant broken: %s\n"+ + "\tCRITICAL please submit the following transaction:\n"+ + "\t\t tx crisis invariant-broken %s %s", res, ir.ModuleName, ir.Route)) +} + +} + diff := time.Since(start) + +logger.Info("asserted all invariants", "duration", diff, "height", ctx.BlockHeight()) +} + +/ InvCheckPeriod returns the invariant checks period. +func (k *Keeper) + +InvCheckPeriod() + +uint { + return k.invCheckPeriod +} + +/ SendCoinsFromAccountToFeeCollector transfers amt to the fee collector account. 
+func (k *Keeper) + +SendCoinsFromAccountToFeeCollector(ctx sdk.Context, senderAddr sdk.AccAddress, amt sdk.Coins) + +error { + return k.supplyKeeper.SendCoinsFromAccountToModule(ctx, senderAddr, k.feeCollectorName, amt) +} +``` + +The `InvariantRegistry` is therefore typically instantiated by instantiating the `keeper` of the `crisis` module in the [application's constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function). + +`Invariant`s can be checked manually via [`message`s](/docs/sdk/v0.47/documentation/module-system/messages-and-queries), but most often they are checked automatically at the end of each block. Here is an example from the `crisis` module: + +```go expandable +package crisis + +import ( + + "time" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis/types" +) + +/ check all registered invariants +func EndBlocker(ctx sdk.Context, k keeper.Keeper) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + if k.InvCheckPeriod() == 0 || ctx.BlockHeight()%int64(k.InvCheckPeriod()) != 0 { + / skip running the invariant check + return +} + +k.AssertInvariants(ctx) +} +``` + +In both cases, if one of the `Invariant`s returns false, the `InvariantRegistry` can trigger special logic (e.g. have the application panic and print the `Invariant`s message in the log). diff --git a/docs/sdk/v0.47/documentation/module-system/keeper.mdx b/docs/sdk/v0.47/documentation/module-system/keeper.mdx new file mode 100644 index 00000000..fbd287b7 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/keeper.mdx @@ -0,0 +1,409 @@ +--- +title: Keepers +--- + + +**Synopsis** +`Keeper`s refer to a Cosmos SDK abstraction whose role is to manage access to the subset of the state defined by various modules. `Keeper`s are module-specific, i.e. 
the subset of state defined by a module can only be accessed by a `keeper` defined in said module. If a module needs to access the subset of state defined by another module, a reference to the second module's internal `keeper` needs to be passed to the first one. This is done in `app.go` during the instantiation of module keepers. + + + + +### Pre-requisite Readings + +* [Introduction to Cosmos SDK Modules](/docs/sdk/v0.47/documentation/module-system/intro) + + + +## Motivation + +The Cosmos SDK is a framework that makes it easy for developers to build complex decentralized applications from scratch, mainly by composing modules together. As the ecosystem of open-source modules for the Cosmos SDK expands, it will become increasingly likely that some of these modules contain vulnerabilities, as a result of the negligence or malice of their developer. + +The Cosmos SDK adopts an [object-capabilities-based approach](/docs/sdk/v0.47/learn/advanced/ocap) to help developers better protect their application from unwanted inter-module interactions, and `keeper`s are at the core of this approach. A `keeper` can be considered quite literally to be the gatekeeper of a module's store(s). Each store (typically an [`IAVL` Store](/docs/sdk/v0.47/learn/advanced/store#iavl-store)) defined within a module comes with a `storeKey`, which grants unlimited access to it. The module's `keeper` holds this `storeKey` (which should otherwise remain unexposed), and defines [methods](#implementing-methods) for reading and writing to the store(s). + +The core idea behind the object-capabilities approach is to only reveal what is necessary to get the work done. 
In practice, this means that instead of handling permissions of modules through access-control lists, module `keeper`s are passed a reference to the specific instance of the other modules' `keeper`s that they need to access (this is done in the [application's constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function)). As a consequence, a module can only interact with the subset of state defined in another module via the methods exposed by the instance of the other module's `keeper`. This is a great way for developers to control the interactions that their own module can have with modules developed by external developers. + +## Type Definition + +`keeper`s are generally implemented in a `/keeper/keeper.go` file located in the module's folder. By convention, the type `keeper` of a module is simply named `Keeper` and usually follows the following structure: + +```go +type Keeper struct { + / External keepers, if any + + / Store key(s) + + / codec + + / authority +} +``` + +For example, here is the type definition of the `keeper` from the `staking` module: + +```go expandable +package keeper + +import ( + + "fmt" + + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/codec" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ Implements ValidatorSet interface +var _ types.ValidatorSet = Keeper{ +} + +/ Implements DelegationSet interface +var _ types.DelegationSet = Keeper{ +} + +/ Keeper of the x/staking store +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + authKeeper types.AccountKeeper + bankKeeper types.BankKeeper + hooks types.StakingHooks + authority string +} + +/ NewKeeper creates a new staking Keeper instance +func NewKeeper( + cdc codec.BinaryCodec, + key storetypes.StoreKey, + ak types.AccountKeeper, 
+ bk types.BankKeeper, + authority string, +) *Keeper { + / ensure bonded and not bonded module accounts are set + if addr := ak.GetModuleAddress(types.BondedPoolName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.BondedPoolName)) +} + if addr := ak.GetModuleAddress(types.NotBondedPoolName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.NotBondedPoolName)) +} + + / ensure that authority is a valid AccAddress + if _, err := sdk.AccAddressFromBech32(authority); err != nil { + panic("authority is not a valid acc address") +} + +return &Keeper{ + storeKey: key, + cdc: cdc, + authKeeper: ak, + bankKeeper: bk, + hooks: nil, + authority: authority, +} +} + +/ Logger returns a module-specific logger. +func (k Keeper) + +Logger(ctx sdk.Context) + +log.Logger { + return ctx.Logger().With("module", "x/"+types.ModuleName) +} + +/ Hooks gets the hooks for staking *Keeper { + func (k *Keeper) + +Hooks() + +types.StakingHooks { + if k.hooks == nil { + / return a no-op implementation if no hooks are set + return types.MultiStakingHooks{ +} + +} + +return k.hooks +} + +/ SetHooks Set the validator hooks. In contrast to other receivers, this method must take a pointer due to nature +/ of the hooks interface and SDK start up sequence. +func (k *Keeper) + +SetHooks(sh types.StakingHooks) { + if k.hooks != nil { + panic("cannot set validator hooks twice") +} + +k.hooks = sh +} + +/ GetLastTotalPower Load the last total validator power. +func (k Keeper) + +GetLastTotalPower(ctx sdk.Context) + +math.Int { + store := ctx.KVStore(k.storeKey) + bz := store.Get(types.LastTotalPowerKey) + if bz == nil { + return math.ZeroInt() +} + ip := sdk.IntProto{ +} + +k.cdc.MustUnmarshal(bz, &ip) + +return ip.Int +} + +/ SetLastTotalPower Set the last total validator power. 
+func (k Keeper) + +SetLastTotalPower(ctx sdk.Context, power math.Int) { + store := ctx.KVStore(k.storeKey) + bz := k.cdc.MustMarshal(&sdk.IntProto{ + Int: power +}) + +store.Set(types.LastTotalPowerKey, bz) +} + +/ GetAuthority returns the x/staking module's authority. +func (k Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ SetValidatorUpdates sets the ABCI validator power updates for the current block. +func (k Keeper) + +SetValidatorUpdates(ctx sdk.Context, valUpdates []abci.ValidatorUpdate) { + store := ctx.KVStore(k.storeKey) + bz := k.cdc.MustMarshal(&types.ValidatorUpdates{ + Updates: valUpdates +}) + +store.Set(types.ValidatorUpdatesKey, bz) +} + +/ GetValidatorUpdates returns the ABCI validator power updates within the current block. +func (k Keeper) + +GetValidatorUpdates(ctx sdk.Context) []abci.ValidatorUpdate { + store := ctx.KVStore(k.storeKey) + bz := store.Get(types.ValidatorUpdatesKey) + +var valUpdates types.ValidatorUpdates + k.cdc.MustUnmarshal(bz, &valUpdates) + +return valUpdates.Updates +} +``` + +Let us go through the different parameters: + +* An expected `keeper` is a `keeper` external to a module that is required by the internal `keeper` of said module. External `keeper`s are listed in the internal `keeper`'s type definition as interfaces. These interfaces are themselves defined in an `expected_keepers.go` file in the root of the module's folder. In this context, interfaces are used to reduce the number of dependencies, as well as to facilitate the maintenance of the module itself. +* `storeKey`s grant access to the store(s) of the [multistore](/docs/sdk/v0.47/learn/advanced/store) managed by the module. They should always remain unexposed to external modules. +* `cdc` is the [codec](/docs/sdk/v0.47/learn/advanced/encoding) used to marshall and unmarshall structs to/from `[]byte`. The `cdc` can be any of `codec.BinaryCodec`, `codec.JSONCodec` or `codec.Codec` based on your requirements. 
It can be either a proto or amino codec as long as it implements these interfaces. The authority listed is a module account or user account that has the right to change module-level parameters. Previously this was handled by the param module, which has been deprecated. + +Of course, it is possible to define different types of internal `keeper`s for the same module (e.g. a read-only `keeper`). Each type of `keeper` comes with its own constructor function, which is called from the [application's constructor function](/docs/sdk/v0.47/learn/beginner/overview-app). This is where `keeper`s are instantiated, and where developers make sure to pass correct instances of modules' `keeper`s to other modules that require them. + +## Implementing Methods + +`Keeper`s primarily expose getter and setter methods for the store(s) managed by their module. These methods should remain as simple as possible and strictly be limited to getting or setting the requested value, as validity checks should have already been performed by the [`Msg` server](/docs/sdk/v0.47/documentation/module-system/msg-services) when `keeper`s' methods are called. + +Typically, a *getter* method will have the following signature + +```go +func (k Keeper) + +Get(ctx sdk.Context, key string) + +returnType +``` + +and the method will go through the following steps: + +1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. Then it's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety. +2. If it exists, get the `[]byte` value stored at location `[]byte(key)` using the `Get(key []byte)` method of the store. +3. Unmarshal the retrieved value from `[]byte` to `returnType` using the codec `cdc`. Return the value.
Similarly, a *setter* method will have the following signature:

```go
func (k Keeper) Set(ctx sdk.Context, key string, value valueType)
```

and the method will go through the following steps:

1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. It's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety.
2. Marshal `value` to `[]byte` using the codec `cdc`.
3. Set the encoded value in the store at location `key` using the `Set(key []byte, value []byte)` method of the store.

For more, see an example of a `keeper`'s [methods implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/keeper/keeper.go).

The [module `KVStore`](/docs/sdk/v0.47/learn/advanced/store#kvstore-and-commitkvstore-interfaces) also provides an `Iterator()` method which returns an `Iterator` object to iterate over a domain of keys.

Here is an example from the `auth` module that iterates over accounts:

```go expandable
package keeper

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/x/auth/types"
)

// NewAccountWithAddress implements AccountKeeperI.
func (ak AccountKeeper) NewAccountWithAddress(ctx sdk.Context, addr sdk.AccAddress) types.AccountI {
	acc := ak.proto()
	err := acc.SetAddress(addr)
	if err != nil {
		panic(err)
	}

	return ak.NewAccount(ctx, acc)
}

// NewAccount sets the next account number to a given account interface
func (ak AccountKeeper) NewAccount(ctx sdk.Context, acc types.AccountI) types.AccountI {
	if err := acc.SetAccountNumber(ak.NextAccountNumber(ctx)); err != nil {
		panic(err)
	}

	return acc
}

// HasAccount implements AccountKeeperI.
func (ak AccountKeeper) HasAccount(ctx sdk.Context, addr sdk.AccAddress) bool {
	store := ctx.KVStore(ak.storeKey)
	return store.Has(types.AddressStoreKey(addr))
}

// HasAccountAddressByID checks account address exists by id.
func (ak AccountKeeper) HasAccountAddressByID(ctx sdk.Context, id uint64) bool {
	store := ctx.KVStore(ak.storeKey)
	return store.Has(types.AccountNumberStoreKey(id))
}

// GetAccount implements AccountKeeperI.
func (ak AccountKeeper) GetAccount(ctx sdk.Context, addr sdk.AccAddress) types.AccountI {
	store := ctx.KVStore(ak.storeKey)
	bz := store.Get(types.AddressStoreKey(addr))
	if bz == nil {
		return nil
	}

	return ak.decodeAccount(bz)
}

// GetAccountAddressByID returns account address by id.
func (ak AccountKeeper) GetAccountAddressByID(ctx sdk.Context, id uint64) string {
	store := ctx.KVStore(ak.storeKey)
	bz := store.Get(types.AccountNumberStoreKey(id))
	if bz == nil {
		return ""
	}

	return sdk.AccAddress(bz).String()
}

// GetAllAccounts returns all accounts in the accountKeeper.
func (ak AccountKeeper) GetAllAccounts(ctx sdk.Context) (accounts []types.AccountI) {
	ak.IterateAccounts(ctx, func(acc types.AccountI) (stop bool) {
		accounts = append(accounts, acc)
		return false
	})

	return accounts
}

// SetAccount implements AccountKeeperI.
func (ak AccountKeeper) SetAccount(ctx sdk.Context, acc types.AccountI) {
	addr := acc.GetAddress()
	store := ctx.KVStore(ak.storeKey)

	bz, err := ak.MarshalAccount(acc)
	if err != nil {
		panic(err)
	}

	store.Set(types.AddressStoreKey(addr), bz)
	store.Set(types.AccountNumberStoreKey(acc.GetAccountNumber()), addr.Bytes())
}

// RemoveAccount removes an account for the account mapper store.
// NOTE: this will cause supply invariant violation if called
func (ak AccountKeeper) RemoveAccount(ctx sdk.Context, acc types.AccountI) {
	addr := acc.GetAddress()
	store := ctx.KVStore(ak.storeKey)

	store.Delete(types.AddressStoreKey(addr))
	store.Delete(types.AccountNumberStoreKey(acc.GetAccountNumber()))
}

// IterateAccounts iterates over all the stored accounts and performs a callback function.
// Stops iteration when callback returns true.
func (ak AccountKeeper) IterateAccounts(ctx sdk.Context, cb func(account types.AccountI) (stop bool)) {
	store := ctx.KVStore(ak.storeKey)
	iterator := sdk.KVStorePrefixIterator(store, types.AddressStoreKeyPrefix)

	defer iterator.Close()
	for ; iterator.Valid(); iterator.Next() {
		account := ak.decodeAccount(iterator.Value())
		if cb(account) {
			break
		}
	}
}
```

diff --git a/docs/sdk/v0.47/documentation/module-system/messages-and-queries.mdx b/docs/sdk/v0.47/documentation/module-system/messages-and-queries.mdx
new file mode 100644
index 00000000..34c7ac18
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/messages-and-queries.mdx
@@ -0,0 +1,2057 @@
---
title: Messages and Queries
---

**Synopsis**
`Msg`s and `Queries` are the two primary objects handled by modules. Most of the core components defined in a module, like `Msg` services, `keeper`s and `Query` services, exist to process `message`s and `queries`.

### Pre-requisite Readings

* [Introduction to Cosmos SDK Modules](/docs/sdk/v0.47/documentation/module-system/intro)

## Messages

`Msg`s are objects whose end goal is to trigger state transitions. They are wrapped in [transactions](/docs/sdk/v0.47/learn/advanced/transactions), which may contain one or more of them.

When a transaction is relayed from the underlying consensus engine to the Cosmos SDK application, it is first decoded by [`BaseApp`](/docs/sdk/v0.47/learn/advanced/baseapp).
Then, each message contained in the transaction is extracted and routed to the appropriate module via `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services). For a more detailed explanation of the lifecycle of a transaction, click [here](/docs/sdk/v0.47/learn/beginner/tx-lifecycle).

### `Msg` Services

Defining Protobuf `Msg` services is the recommended way to handle messages. A Protobuf `Msg` service should be created for each module, typically in `tx.proto` (see more info about [conventions and naming](/docs/sdk/v0.47/learn/advanced/encoding#faq)). It must have an RPC service method defined for each message in the module.

See an example of a `Msg` service definition from the `x/bank` module:

```protobuf
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L13-L36
```

Each `Msg` service method must have exactly one argument, which must implement the `sdk.Msg` interface, and a Protobuf response. The naming convention is to call the RPC argument `Msg<service-rpc-name>` and the RPC response `Msg<service-rpc-name>Response`. For example:

```protobuf
  rpc Send(MsgSend) returns (MsgSendResponse);
```

The `sdk.Msg` interface is a simplified version of the Amino `LegacyMsg` interface described [below](#legacy-amino-legacymsg-s), retaining the `GetSigners()` method. For backwards compatibility with [Amino `LegacyMsg`s](#legacy-amino-legacymsg-s), existing `LegacyMsg` types should be used as the request parameter for `service` RPC definitions. Newer `sdk.Msg` types, which only support `service` definitions, should use the canonical `Msg...` name.

The Cosmos SDK uses Protobuf definitions to generate client and server code:

* `MsgServer` interface defines the server API for the `Msg` service and its implementation is described as part of the [`Msg` services](/docs/sdk/v0.47/documentation/module-system/msg-services) documentation.
* Structures are generated for all RPC request and response types.

A `RegisterMsgServer` method is also generated and should be used to register the module's `MsgServer` implementation in the `RegisterServices` method from the [`AppModule` interface](/docs/sdk/v0.47/documentation/module-system/module-manager#appmodule).

In order for clients (CLI and grpc-gateway) to have these URLs registered, the Cosmos SDK provides the function `RegisterMsgServiceDesc(registry codectypes.InterfaceRegistry, sd *grpc.ServiceDesc)`, which should be called inside the module's [`RegisterInterfaces`](/docs/sdk/v0.47/documentation/module-system/module-manager#appmodulebasic) method, using the proto-generated `&_Msg_serviceDesc` as the `*grpc.ServiceDesc` argument.

### Legacy Amino `LegacyMsg`s

The following way of defining messages is deprecated; using [`Msg` services](#msg-services) is preferred.

Amino `LegacyMsg`s can be defined as protobuf messages. The message definition usually includes a list of parameters needed to process the message, which end-users provide when they want to create a new transaction containing said message.

A `LegacyMsg` is typically accompanied by a standard constructor function that is called from one of the [module's interfaces](/docs/sdk/v0.47/documentation/module-system/module-interfaces). `message`s also need to implement the `sdk.Msg` interface:

```go expandable
package types

import (
	"encoding/json"
	fmt "fmt"

	"github.com/cosmos/gogoproto/proto"

	"github.com/cosmos/cosmos-sdk/codec"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
)

type (
	// Msg defines the interface a transaction message must fulfill.
	Msg interface {
		proto.Message

		// ValidateBasic does a simple validation check that
		// doesn't require access to any other information.
		ValidateBasic() error

		// GetSigners returns the addrs of signers that must sign.
		// CONTRACT: All signatures must be present to be valid.
+ / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / Tx defines the interface a transaction must fulfill. + Tx interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg + + / ValidateBasic does a simple and lightweight validation check that doesn't + / require access to any other information. + ValidateBasic() + +error +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() + +AccAddress + FeeGranter() + +AccAddress +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. 
+func MsgTypeURL(msg Msg) + +string { + return "/" + proto.MessageName(msg) +} + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} +``` + +It extends `proto.Message` and contains the following methods: + +* `GetSignBytes() []byte`: Return the canonical byte representation of the message. Used to generate a signature. +* `GetSigners() []AccAddress`: Return the list of signers. The Cosmos SDK will make sure that each `message` contained in a transaction is signed by all the signers listed in the list returned by this method. + +```go expandable +package legacytx + +import ( + + "encoding/json" + "fmt" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/legacy" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/crypto/types/multisig" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ LegacyMsg defines the old interface a message must fulfill, containing +/ Amino signing method and legacy router info. +/ Deprecated: Please use `Msg` instead. +type LegacyMsg interface { + sdk.Msg + + / Get the canonical byte representation of the Msg. + GetSignBytes() []byte + + / Return the message type. + / Must be alphanumeric or empty. 
+ Route() + +string + + / Returns a human-readable string for the message, intended for utilization + / within tags + Type() + +string +} + +/ StdSignDoc is replay-prevention structure. +/ It includes the result of msg.GetSignBytes(), +/ as well as the ChainID (prevent cross chain replay) +/ and the Sequence numbers for each signature (prevent +/ inchain replay and enforce tx ordering per account). +type StdSignDoc struct { + AccountNumber uint64 `json:"account_number" yaml:"account_number"` + Sequence uint64 `json:"sequence" yaml:"sequence"` + TimeoutHeight uint64 `json:"timeout_height,omitempty" yaml:"timeout_height"` + ChainID string `json:"chain_id" yaml:"chain_id"` + Memo string `json:"memo" yaml:"memo"` + Fee json.RawMessage `json:"fee" yaml:"fee"` + Msgs []json.RawMessage `json:"msgs" yaml:"msgs"` + Tip *StdTip `json:"tip,omitempty" yaml:"tip"` +} + +/ StdSignBytes returns the bytes to sign for a transaction. +func StdSignBytes(chainID string, accnum, sequence, timeout uint64, fee StdFee, msgs []sdk.Msg, memo string, tip *tx.Tip) []byte { + msgsBytes := make([]json.RawMessage, 0, len(msgs)) + for _, msg := range msgs { + legacyMsg, ok := msg.(LegacyMsg) + if !ok { + panic(fmt.Errorf("expected %T when using amino JSON", (*LegacyMsg)(nil))) +} + +msgsBytes = append(msgsBytes, json.RawMessage(legacyMsg.GetSignBytes())) +} + +var stdTip *StdTip + if tip != nil { + if tip.Tipper == "" { + panic(fmt.Errorf("tipper cannot be empty")) +} + +stdTip = &StdTip{ + Amount: tip.Amount, + Tipper: tip.Tipper +} + +} + +bz, err := legacy.Cdc.MarshalJSON(StdSignDoc{ + AccountNumber: accnum, + ChainID: chainID, + Fee: json.RawMessage(fee.Bytes()), + Memo: memo, + Msgs: msgsBytes, + Sequence: sequence, + TimeoutHeight: timeout, + Tip: stdTip, +}) + if err != nil { + panic(err) +} + +return sdk.MustSortJSON(bz) +} + +/ Deprecated: StdSignature represents a sig +type StdSignature struct { + cryptotypes.PubKey `json:"pub_key" yaml:"pub_key"` / optional + Signature []byte 
`json:"signature" yaml:"signature"` +} + +/ Deprecated +func NewStdSignature(pk cryptotypes.PubKey, sig []byte) + +StdSignature { + return StdSignature{ + PubKey: pk, + Signature: sig +} +} + +/ GetSignature returns the raw signature bytes. +func (ss StdSignature) + +GetSignature() []byte { + return ss.Signature +} + +/ GetPubKey returns the public key of a signature as a cryptotypes.PubKey using the +/ Amino codec. +func (ss StdSignature) + +GetPubKey() + +cryptotypes.PubKey { + return ss.PubKey +} + +/ MarshalYAML returns the YAML representation of the signature. +func (ss StdSignature) + +MarshalYAML() (interface{ +}, error) { + pk := "" + if ss.PubKey != nil { + pk = ss.PubKey.String() +} + +bz, err := yaml.Marshal(struct { + PubKey string `json:"pub_key"` + Signature string `json:"signature"` +}{ + pk, + fmt.Sprintf("%X", ss.Signature), +}) + if err != nil { + return nil, err +} + +return string(bz), err +} + +func (ss StdSignature) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + return codectypes.UnpackInterfaces(ss.PubKey, unpacker) +} + +/ StdSignatureToSignatureV2 converts a StdSignature to a SignatureV2 +func StdSignatureToSignatureV2(cdc *codec.LegacyAmino, sig StdSignature) (signing.SignatureV2, error) { + pk := sig.GetPubKey() + +data, err := pubKeySigToSigData(cdc, pk, sig.Signature) + if err != nil { + return signing.SignatureV2{ +}, err +} + +return signing.SignatureV2{ + PubKey: pk, + Data: data, +}, nil +} + +func pubKeySigToSigData(cdc *codec.LegacyAmino, key cryptotypes.PubKey, sig []byte) (signing.SignatureData, error) { + multiPK, ok := key.(multisig.PubKey) + if !ok { + return &signing.SingleSignatureData{ + SignMode: signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON, + Signature: sig, +}, nil +} + +var multiSig multisig.AminoMultisignature + err := cdc.Unmarshal(sig, &multiSig) + if err != nil { + return nil, err +} + sigs := multiSig.Sigs + sigDatas := make([]signing.SignatureData, len(sigs)) + pubKeys := multiPK.GetPubKeys() + 
bitArray := multiSig.BitArray + n := multiSig.BitArray.Count() + signatures := multisig.NewMultisig(n) + sigIdx := 0 + for i := 0; i < n; i++ { + if bitArray.GetIndex(i) { + data, err := pubKeySigToSigData(cdc, pubKeys[i], multiSig.Sigs[sigIdx]) + if err != nil { + return nil, sdkerrors.Wrapf(err, "Unable to convert Signature to SigData %d", sigIdx) +} + +sigDatas[sigIdx] = data + multisig.AddSignature(signatures, data, sigIdx) + +sigIdx++ +} + +} + +return signatures, nil +} +``` + +See an example implementation of a `message` from the `gov` module: + +```go expandable +package v1 + +import ( + + "fmt" + "cosmossdk.io/math" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + sdktx "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/x/gov/codec" + "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" +) + +var ( + _, _, _, _, _, _ sdk.Msg = &MsgSubmitProposal{ +}, &MsgDeposit{ +}, &MsgVote{ +}, &MsgVoteWeighted{ +}, &MsgExecLegacyContent{ +}, &MsgUpdateParams{ +} + _, _ codectypes.UnpackInterfacesMessage = &MsgSubmitProposal{ +}, &MsgExecLegacyContent{ +} +) + +/ NewMsgSubmitProposal creates a new MsgSubmitProposal. +/ +/nolint:interfacer +func NewMsgSubmitProposal(messages []sdk.Msg, initialDeposit sdk.Coins, proposer, metadata, title, summary string) (*MsgSubmitProposal, error) { + m := &MsgSubmitProposal{ + InitialDeposit: initialDeposit, + Proposer: proposer, + Metadata: metadata, + Title: title, + Summary: summary, +} + +anys, err := sdktx.SetMsgs(messages) + if err != nil { + return nil, err +} + +m.Messages = anys + + return m, nil +} + +/ GetMsgs unpacks m.Messages Any's into sdk.Msg's +func (m *MsgSubmitProposal) + +GetMsgs() ([]sdk.Msg, error) { + return sdktx.GetMsgs(m.Messages, "sdk.MsgProposal") +} + +/ Route implements the sdk.Msg interface. 
+func (m MsgSubmitProposal) + +Route() + +string { + return types.RouterKey +} + +/ Type implements the sdk.Msg interface. +func (m MsgSubmitProposal) + +Type() + +string { + return sdk.MsgTypeURL(&m) +} + +/ ValidateBasic implements the sdk.Msg interface. +func (m MsgSubmitProposal) + +ValidateBasic() + +error { + if m.Title == "" { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "proposal title cannot be empty") +} + if m.Summary == "" { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "proposal summary cannot be empty") +} + if _, err := sdk.AccAddressFromBech32(m.Proposer); err != nil { + return sdkerrors.ErrInvalidAddress.Wrapf("invalid proposer address: %s", err) +} + deposit := sdk.NewCoins(m.InitialDeposit...) + if !deposit.IsValid() { + return sdkerrors.Wrap(sdkerrors.ErrInvalidCoins, deposit.String()) +} + if deposit.IsAnyNegative() { + return sdkerrors.Wrap(sdkerrors.ErrInvalidCoins, deposit.String()) +} + + / Check that either metadata or Msgs length is non nil. + if len(m.Messages) == 0 && len(m.Metadata) == 0 { + return sdkerrors.Wrap(types.ErrNoProposalMsgs, "either metadata or Msgs length must be non-nil") +} + +msgs, err := m.GetMsgs() + if err != nil { + return err +} + for idx, msg := range msgs { + if err := msg.ValidateBasic(); err != nil { + return sdkerrors.Wrap(types.ErrInvalidProposalMsg, + fmt.Sprintf("msg: %d, err: %s", idx, err.Error())) +} + +} + +return nil +} + +/ GetSignBytes returns the message bytes to sign over. +func (m MsgSubmitProposal) + +GetSignBytes() []byte { + bz := codec.ModuleCdc.MustMarshalJSON(&m) + +return sdk.MustSortJSON(bz) +} + +/ GetSigners returns the expected signers for a MsgSubmitProposal. 
+func (m MsgSubmitProposal) + +GetSigners() []sdk.AccAddress { + proposer, _ := sdk.AccAddressFromBech32(m.Proposer) + +return []sdk.AccAddress{ + proposer +} +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (m MsgSubmitProposal) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + return sdktx.UnpackInterfaces(unpacker, m.Messages) +} + +/ NewMsgDeposit creates a new MsgDeposit instance +/ +/nolint:interfacer +func NewMsgDeposit(depositor sdk.AccAddress, proposalID uint64, amount sdk.Coins) *MsgDeposit { + return &MsgDeposit{ + proposalID, depositor.String(), amount +} +} + +/ Route implements the sdk.Msg interface. +func (msg MsgDeposit) + +Route() + +string { + return types.RouterKey +} + +/ Type implements the sdk.Msg interface. +func (msg MsgDeposit) + +Type() + +string { + return sdk.MsgTypeURL(&msg) +} + +/ ValidateBasic implements the sdk.Msg interface. +func (msg MsgDeposit) + +ValidateBasic() + +error { + if _, err := sdk.AccAddressFromBech32(msg.Depositor); err != nil { + return sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) +} + amount := sdk.NewCoins(msg.Amount...) + if !amount.IsValid() { + return sdkerrors.Wrap(sdkerrors.ErrInvalidCoins, amount.String()) +} + if amount.IsAnyNegative() { + return sdkerrors.Wrap(sdkerrors.ErrInvalidCoins, amount.String()) +} + +return nil +} + +/ GetSignBytes returns the message bytes to sign over. +func (msg MsgDeposit) + +GetSignBytes() []byte { + bz := codec.ModuleCdc.MustMarshalJSON(&msg) + +return sdk.MustSortJSON(bz) +} + +/ GetSigners returns the expected signers for a MsgDeposit. 
+func (msg MsgDeposit) + +GetSigners() []sdk.AccAddress { + depositor, _ := sdk.AccAddressFromBech32(msg.Depositor) + +return []sdk.AccAddress{ + depositor +} +} + +/ NewMsgVote creates a message to cast a vote on an active proposal +/ +/nolint:interfacer +func NewMsgVote(voter sdk.AccAddress, proposalID uint64, option VoteOption, metadata string) *MsgVote { + return &MsgVote{ + proposalID, voter.String(), option, metadata +} +} + +/ Route implements the sdk.Msg interface. +func (msg MsgVote) + +Route() + +string { + return types.RouterKey +} + +/ Type implements the sdk.Msg interface. +func (msg MsgVote) + +Type() + +string { + return sdk.MsgTypeURL(&msg) +} + +/ ValidateBasic implements the sdk.Msg interface. +func (msg MsgVote) + +ValidateBasic() + +error { + if _, err := sdk.AccAddressFromBech32(msg.Voter); err != nil { + return sdkerrors.ErrInvalidAddress.Wrapf("invalid voter address: %s", err) +} + if !ValidVoteOption(msg.Option) { + return sdkerrors.Wrap(types.ErrInvalidVote, msg.Option.String()) +} + +return nil +} + +/ GetSignBytes returns the message bytes to sign over. +func (msg MsgVote) + +GetSignBytes() []byte { + bz := codec.ModuleCdc.MustMarshalJSON(&msg) + +return sdk.MustSortJSON(bz) +} + +/ GetSigners returns the expected signers for a MsgVote. +func (msg MsgVote) + +GetSigners() []sdk.AccAddress { + voter, _ := sdk.AccAddressFromBech32(msg.Voter) + +return []sdk.AccAddress{ + voter +} +} + +/ NewMsgVoteWeighted creates a message to cast a vote on an active proposal +/ +/nolint:interfacer +func NewMsgVoteWeighted(voter sdk.AccAddress, proposalID uint64, options WeightedVoteOptions, metadata string) *MsgVoteWeighted { + return &MsgVoteWeighted{ + proposalID, voter.String(), options, metadata +} +} + +/ Route implements the sdk.Msg interface. +func (msg MsgVoteWeighted) + +Route() + +string { + return types.RouterKey +} + +/ Type implements the sdk.Msg interface. 
+func (msg MsgVoteWeighted) + +Type() + +string { + return sdk.MsgTypeURL(&msg) +} + +/ ValidateBasic implements the sdk.Msg interface. +func (msg MsgVoteWeighted) + +ValidateBasic() + +error { + if _, err := sdk.AccAddressFromBech32(msg.Voter); err != nil { + return sdkerrors.ErrInvalidAddress.Wrapf("invalid voter address: %s", err) +} + if len(msg.Options) == 0 { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, WeightedVoteOptions(msg.Options).String()) +} + totalWeight := math.LegacyNewDec(0) + usedOptions := make(map[VoteOption]bool) + for _, option := range msg.Options { + if !option.IsValid() { + return sdkerrors.Wrap(types.ErrInvalidVote, option.String()) +} + +weight, err := sdk.NewDecFromStr(option.Weight) + if err != nil { + return sdkerrors.Wrapf(types.ErrInvalidVote, "Invalid weight: %s", err) +} + +totalWeight = totalWeight.Add(weight) + if usedOptions[option.Option] { + return sdkerrors.Wrap(types.ErrInvalidVote, "Duplicated vote option") +} + +usedOptions[option.Option] = true +} + if totalWeight.GT(math.LegacyNewDec(1)) { + return sdkerrors.Wrap(types.ErrInvalidVote, "Total weight overflow 1.00") +} + if totalWeight.LT(math.LegacyNewDec(1)) { + return sdkerrors.Wrap(types.ErrInvalidVote, "Total weight lower than 1.00") +} + +return nil +} + +/ GetSignBytes returns the message bytes to sign over. +func (msg MsgVoteWeighted) + +GetSignBytes() []byte { + bz := codec.ModuleCdc.MustMarshalJSON(&msg) + +return sdk.MustSortJSON(bz) +} + +/ GetSigners returns the expected signers for a MsgVoteWeighted. 
+func (msg MsgVoteWeighted) + +GetSigners() []sdk.AccAddress { + voter, _ := sdk.AccAddressFromBech32(msg.Voter) + +return []sdk.AccAddress{ + voter +} +} + +/ NewMsgExecLegacyContent creates a new MsgExecLegacyContent instance +/ +/nolint:interfacer +func NewMsgExecLegacyContent(content *codectypes.Any, authority string) *MsgExecLegacyContent { + return &MsgExecLegacyContent{ + Content: content, + Authority: authority, +} +} + +/ GetSigners returns the expected signers for a MsgExecLegacyContent. +func (c MsgExecLegacyContent) + +GetSigners() []sdk.AccAddress { + authority, _ := sdk.AccAddressFromBech32(c.Authority) + +return []sdk.AccAddress{ + authority +} +} + +/ ValidateBasic implements the sdk.Msg interface. +func (c MsgExecLegacyContent) + +ValidateBasic() + +error { + _, err := sdk.AccAddressFromBech32(c.Authority) + if err != nil { + return err +} + +return nil +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (c MsgExecLegacyContent) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + var content v1beta1.Content + return unpacker.UnpackAny(c.Content, &content) +} + +/ Route implements the sdk.Msg interface. +func (msg MsgUpdateParams) + +Route() + +string { + return types.RouterKey +} + +/ Type implements the sdk.Msg interface. +func (msg MsgUpdateParams) + +Type() + +string { + return sdk.MsgTypeURL(&msg) +} + +/ ValidateBasic implements the sdk.Msg interface. +func (msg MsgUpdateParams) + +ValidateBasic() + +error { + if _, err := sdk.AccAddressFromBech32(msg.Authority); err != nil { + return sdkerrors.ErrInvalidAddress.Wrapf("invalid authority address: %s", err) +} + +return msg.Params.ValidateBasic() +} + +/ GetSignBytes returns the message bytes to sign over. +func (msg MsgUpdateParams) + +GetSignBytes() []byte { + bz := codec.ModuleCdc.MustMarshalJSON(&msg) + +return sdk.MustSortJSON(bz) +} + +/ GetSigners returns the expected signers for a MsgUpdateParams. 
func (msg MsgUpdateParams) GetSigners() []sdk.AccAddress {
	authority, _ := sdk.AccAddressFromBech32(msg.Authority)
	return []sdk.AccAddress{authority}
}
```

## Queries

A `query` is a request for information made by end-users of applications through an interface and processed by a full-node. A `query` is received by a full-node through its consensus engine and relayed to the application via the ABCI. It is then routed to the appropriate module via `BaseApp`'s `QueryRouter` so that it can be processed by the module's [query service](/docs/sdk/v0.47/documentation/module-system/query-services). For a deeper look at the lifecycle of a `query`, click [here](/docs/sdk/v0.47/learn/beginner/query-lifecycle).

### gRPC Queries

Queries should be defined using [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services). A `Query` service should be created per module in `query.proto`. This service lists endpoints starting with `rpc`.

Here's an example of such a `Query` service definition:

```protobuf
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/auth/v1beta1/query.proto#L14-L89
```

As `proto.Message`s, generated `Response` types implement the `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer) by default.

A `RegisterQueryServer` method is also generated and should be used to register the module's query server in the `RegisterServices` method from the [`AppModule` interface](/docs/sdk/v0.47/documentation/module-system/module-manager#appmodule).

### Legacy Queries

Before the introduction of Protobuf and gRPC in the Cosmos SDK, there was usually no specific `query` object defined by module developers, unlike `message`s. Instead, the Cosmos SDK took the simpler approach of using a simple `path` to define each `query`. The `path` contains the `query` type and all the arguments needed to process it.
For most module queries, the `path` should look like the following:

```text
queryCategory/queryRoute/queryType/arg1/arg2/...
```

where:

* `queryCategory` is the category of the `query`, typically `custom` for module queries. It is used to differentiate between different kinds of queries within `BaseApp`'s [`Query` method](/docs/sdk/v0.47/learn/advanced/baseapp#query).
* `queryRoute` is used by `BaseApp`'s [`queryRouter`](/docs/sdk/v0.47/learn/advanced/baseapp#grpc-query-router) to map the `query` to its module. Usually, `queryRoute` should be the name of the module.
* `queryType` is used by the module's [`querier`](/docs/sdk/v0.47/documentation/module-system/query-services#query-services) to map the `query` to the appropriate `querier function` within the module.
* `args` are the actual arguments needed to process the `query`. They are filled out by the end-user. Note that for bigger queries, you might prefer passing arguments in the `Data` field of the request `req` instead of the `path`.

The `path` for each `query` must be defined by the module developer in the module's [command-line interface file](/docs/sdk/v0.47/documentation/module-system/module-interfaces#query-commands).

Overall, there are three main components module developers need to implement in order to make the subset of the state defined by their module queryable:

* A [`querier`](/docs/sdk/v0.47/documentation/module-system/query-services#query-services), to process the `query` once it has been [routed to the module](/docs/sdk/v0.47/learn/advanced/baseapp#grpc-query-router).
* [Query commands](/docs/sdk/v0.47/documentation/module-system/module-interfaces#query-commands) in the module's CLI file, where the `path` for each `query` is specified.
* `query` return types. Typically defined in a file `types/querier.go`, they specify the result type of each of the module's `queries`. These custom types must implement the `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer).
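
To make the `path` format concrete, here is a minimal, dependency-free sketch of how such a path can be tokenized into its components. `parseLegacyQueryPath` is a hypothetical helper (not an SDK API); it only mirrors the idea that `BaseApp` splits the request path on `/` before routing:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLegacyQueryPath is a hypothetical helper (not part of the Cosmos SDK)
// that splits a legacy query path of the form
// queryCategory/queryRoute/queryType/arg1/arg2/... into its components.
func parseLegacyQueryPath(path string) (category, route, queryType string, args []string, err error) {
	parts := strings.Split(strings.TrimPrefix(path, "/"), "/")
	if len(parts) < 3 {
		return "", "", "", nil, fmt.Errorf("invalid legacy query path: %q", path)
	}
	// everything after the first three segments is treated as query arguments
	return parts[0], parts[1], parts[2], parts[3:], nil
}

func main() {
	category, route, queryType, args, err := parseLegacyQueryPath("custom/gov/proposal/1")
	if err != nil {
		panic(err)
	}
	fmt.Println(category, route, queryType, args) // prints: custom gov proposal [1]
}
```

The fixed positions of `queryCategory`, `queryRoute`, and `queryType` are what let `BaseApp` dispatch on the first segment and the module's `querier` dispatch on the third, with the remaining segments left for the `querier function` to interpret.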

### Store Queries

Store queries query store keys directly. They use `clientCtx.QueryABCI(req abci.RequestQuery)` to return the full `abci.ResponseQuery` with inclusion Merkle proofs.

See the following example:

```go expandable
package baseapp

import (
	"crypto/sha256"
	"errors"
	"fmt"
	"os"
	"sort"
	"strings"
	"syscall"
	"time"

	"github.com/cosmos/gogoproto/proto"
	abci "github.com/tendermint/tendermint/abci/types"
	tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
	"google.golang.org/grpc/codes"
	grpcstatus "google.golang.org/grpc/status"

	"github.com/cosmos/cosmos-sdk/codec"
	snapshottypes "github.com/cosmos/cosmos-sdk/snapshots/types"
	"github.com/cosmos/cosmos-sdk/telemetry"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

// Supported ABCI Query prefixes
const (
	QueryPathApp    = "app"
	QueryPathCustom = "custom"
	QueryPathP2P    = "p2p"
	QueryPathStore  = "store"
)

// InitChain implements the ABCI interface. It runs the initialization logic
// directly on the CommitMultiStore.
func (app *BaseApp) InitChain(req abci.RequestInitChain) (res abci.ResponseInitChain) {
	// On a new chain, we consider the init chain block height as 0, even though
	// req.InitialHeight is 1 by default.
	initHeader := tmproto.Header{
		ChainID: req.ChainId,
		Time:    req.Time,
	}

	app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId)

	// If req.InitialHeight is > 1, then we set the initial version in the
	// stores.
+ if req.InitialHeight > 1 { + app.initialHeight = req.InitialHeight + initHeader = tmproto.Header{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time +} + err := app.cms.SetInitialVersion(req.InitialHeight) + if err != nil { + panic(err) +} + +} + + / initialize states with a correct header + app.setState(runTxModeDeliver, initHeader) + +app.setState(runTxModeCheck, initHeader) + +app.setState(runTxPrepareProposal, initHeader) + +app.setState(runTxProcessProposal, initHeader) + + / Store the consensus params in the BaseApp's paramstore. Note, this must be + / done after the deliver state and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + app.StoreConsensusParams(app.deliverState.ctx, req.ConsensusParams) +} + if app.initChainer == nil { + return +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.deliverState.ctx = app.deliverState.ctx.WithBlockGasMeter(sdk.NewInfiniteGasMeter()) + +res = app.initChainer(app.deliverState.ctx, req) + + / sanity check + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + panic( + fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ), + ) +} + +sort.Sort(abci.ValidatorUpdates(req.Validators)) + +sort.Sort(abci.ValidatorUpdates(res.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + panic(fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i)) +} + +} + +} + + / In the case of a new chain, AppHash will be the hash of an empty string. + / During an upgrade, it'll be the hash of the last committed block. 
+ var appHash []byte + if !app.LastCommitID().IsZero() { + appHash = app.LastCommitID().Hash +} + +else { + / $ echo -n '' | sha256sum + / e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 + emptyHash := sha256.Sum256([]byte{ +}) + +appHash = emptyHash[:] +} + + / NOTE: We don't commit, but BeginBlock for block `initial_height` starts from this + / deliverState. + return abci.ResponseInitChain{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: appHash, +} +} + +/ Info implements the ABCI interface. +func (app *BaseApp) + +Info(req abci.RequestInfo) + +abci.ResponseInfo { + lastCommitID := app.cms.LastCommitID() + +return abci.ResponseInfo{ + Data: app.name, + Version: app.version, + AppVersion: app.appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +} +} + +/ FilterPeerByAddrPort filters peers by address/port. +func (app *BaseApp) + +FilterPeerByAddrPort(info string) + +abci.ResponseQuery { + if app.addrPeerFilter != nil { + return app.addrPeerFilter(info) +} + +return abci.ResponseQuery{ +} +} + +/ FilterPeerByID filters peers by node ID. +func (app *BaseApp) + +FilterPeerByID(info string) + +abci.ResponseQuery { + if app.idPeerFilter != nil { + return app.idPeerFilter(info) +} + +return abci.ResponseQuery{ +} +} + +/ BeginBlock implements the ABCI application interface. +func (app *BaseApp) + +BeginBlock(req abci.RequestBeginBlock) (res abci.ResponseBeginBlock) { + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(sdk.TraceContext( + map[string]interface{ +}{"blockHeight": req.Header.Height +}, + )) +} + if err := app.validateHeight(req); err != nil { + panic(err) +} + + / Initialize the DeliverTx state. If this is the first block, it should + / already be initialized in InitChain. Otherwise app.deliverState will be + / nil, since it is reset on Commit. 
+ if app.deliverState == nil { + app.setState(runTxModeDeliver, req.Header) +} + +else { + / In the first block, app.deliverState.ctx will already be initialized + / by InitChain. Context is now updated with Header information. + app.deliverState.ctx = app.deliverState.ctx. + WithBlockHeader(req.Header). + WithBlockHeight(req.Header.Height) +} + + / add block gas meter + var gasMeter sdk.GasMeter + if maxGas := app.GetMaximumBlockGas(app.deliverState.ctx); maxGas > 0 { + gasMeter = sdk.NewGasMeter(maxGas) +} + +else { + gasMeter = sdk.NewInfiniteGasMeter() +} + + / NOTE: header hash is not set in NewContext, so we manually set it here + + app.deliverState.ctx = app.deliverState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash). + WithConsensusParams(app.GetConsensusParams(app.deliverState.ctx)) + if app.checkState != nil { + app.checkState.ctx = app.checkState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash) +} + if app.beginBlocker != nil { + res = app.beginBlocker(app.deliverState.ctx, req) + +res.Events = sdk.MarkEventsToIndex(res.Events, app.indexEvents) +} + / set the signed validators for addition to context in deliverTx + app.voteInfos = req.LastCommitInfo.GetVotes() + + / call the hooks with the BeginBlock messages + for _, streamingListener := range app.abciListeners { + if err := streamingListener.ListenBeginBlock(app.deliverState.ctx, req, res); err != nil { + panic(fmt.Errorf("BeginBlock listening hook failed, height: %d, err: %w", req.Header.Height, err)) +} + +} + +return res +} + +/ EndBlock implements the ABCI interface. 
+func (app *BaseApp) + +EndBlock(req abci.RequestEndBlock) (res abci.ResponseEndBlock) { + if app.deliverState.ms.TracingEnabled() { + app.deliverState.ms = app.deliverState.ms.SetTracingContext(nil).(sdk.CacheMultiStore) +} + if app.endBlocker != nil { + res = app.endBlocker(app.deliverState.ctx, req) + +res.Events = sdk.MarkEventsToIndex(res.Events, app.indexEvents) +} + if cp := app.GetConsensusParams(app.deliverState.ctx); cp != nil { + res.ConsensusParamUpdates = cp +} + + / call the streaming service hooks with the EndBlock messages + for _, streamingListener := range app.abciListeners { + if err := streamingListener.ListenEndBlock(app.deliverState.ctx, req, res); err != nil { + panic(fmt.Errorf("EndBlock listening hook failed, height: %d, err: %w", req.Height, err)) +} + +} + +return res +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/tendermint/tendermint/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req abci.RequestPrepareProposal) (resp abci.ResponsePrepareProposal) { + if app.prepareProposal == nil { + panic("PrepareProposal method not set") +} + + / Tendermint must never call PrepareProposal with a height of 0. 
+ / Ref: https://github.com/tendermint/tendermint/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + panic("PrepareProposal called with invalid height") +} + ctx := app.getContextForProposal(app.prepareProposalState.ctx, req.Height) + +ctx = ctx.WithVoteInfos(app.voteInfos). + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithProposer(req.ProposerAddress). + WithConsensusParams(app.GetConsensusParams(ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} + +}() + +resp = app.prepareProposal(ctx, req) + +return resp +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ Tendermint that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. 
+/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/tendermint/tendermint/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req abci.RequestProcessProposal) (resp abci.ResponseProcessProposal) { + if app.processProposal == nil { + panic("app.ProcessProposal is not set") +} + ctx := app.getContextForProposal(app.processProposalState.ctx, req.Height) + +ctx = ctx. + WithVoteInfos(app.voteInfos). + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithConsensusParams(app.GetConsensusParams(ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +}() + +resp = app.processProposal(ctx, req) + +return resp +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req abci.RequestCheckTx) + +abci.ResponseCheckTx { + var mode runTxMode + switch { + case req.Type == abci.CheckTxType_New: + mode = runTxModeCheck + case req.Type == abci.CheckTxType_Recheck: + mode = runTxModeReCheck + + default: + panic(fmt.Sprintf("unknown RequestCheckTx type: %s", req.Type)) +} + +gInfo, result, anteEvents, priority, err := app.runTx(mode, req.Tx) + if err != nil { + return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace) +} + +return abci.ResponseCheckTx{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), + Priority: priority, +} +} + +/ DeliverTx implements the ABCI interface and executes a tx in DeliverTx mode. +/ State only gets persisted if all messages are valid and get executed successfully. +/ Otherwise, the ResponseDeliverTx will contain relevant error information. +/ Regardless of tx execution outcome, the ResponseDeliverTx will contain relevant +/ gas execution context. 
+func (app *BaseApp) + +DeliverTx(req abci.RequestDeliverTx) (res abci.ResponseDeliverTx) { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + defer func() { + for _, streamingListener := range app.abciListeners { + if err := streamingListener.ListenDeliverTx(app.deliverState.ctx, req, res); err != nil { + panic(fmt.Errorf("DeliverTx listening hook failed: %w", err)) +} + +} + +}() + +defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, _, err := app.runTx(runTxModeDeliver, req.Tx) + if err != nil { + resultStr = "failed" + return sdkerrors.ResponseDeliverTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, sdk.MarkEventsToIndex(anteEvents, app.indexEvents), app.trace) +} + +return abci.ResponseDeliverTx{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +} +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. +func (app *BaseApp) + +Commit() + +abci.ResponseCommit { + header := app.deliverState.ctx.BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + + / Write the DeliverTx state into branched storage and commit the MultiStore. 
+ / The write to the DeliverTx state writes all state transitions to the root
+ / MultiStore (app.cms), so when Commit() is called it persists those values.
+ app.deliverState.ms.Write()
+ commitID := app.cms.Commit()
+ res := abci.ResponseCommit{
+ Data: commitID.Hash,
+ RetainHeight: retainHeight,
+}
+
+ / call the hooks with the Commit message
+ for _, streamingListener := range app.abciListeners {
+ if err := streamingListener.ListenCommit(app.deliverState.ctx, res); err != nil {
+ panic(fmt.Errorf("Commit listening hook failed, height: %d, err: %w", header.Height, err))
+}
+
+}
+
+app.logger.Info("commit synced", "commit", fmt.Sprintf("%X", commitID))
+
+ / Reset the Check state to the latest committed.
+ /
+ / NOTE: This is safe because Tendermint holds a lock on the mempool for
+ / Commit. Use the header from this latest block.
+ app.setState(runTxModeCheck, header)
+
+app.setState(runTxPrepareProposal, header)
+
+app.setState(runTxProcessProposal, header)
+
+ / empty/reset the deliver state
+ app.deliverState = nil
+
+ var halt bool
+ switch {
+ case app.haltHeight > 0 && uint64(header.Height) >= app.haltHeight:
+ halt = true
+ case app.haltTime > 0 && header.Time.Unix() >= int64(app.haltTime):
+ halt = true
+}
+ if halt {
+ / Halt the binary and allow Tendermint to receive the ResponseCommit
+ / response with the commit ID hash. This will allow the node to successfully
+ / restart and process blocks assuming the halt configuration has been
+ / reset or moved to a more distant value.
+ app.halt()
+}
+
+go app.snapshotManager.SnapshotIfApplicable(header.Height)
+
+return res
+}
+
+/ halt attempts to gracefully shutdown the node via SIGINT and SIGTERM falling
+/ back on os.Exit if both fail.
+func (app *BaseApp) + +halt() { + app.logger.Info("halting node per configuration", "height", app.haltHeight, "time", app.haltTime) + +p, err := os.FindProcess(os.Getpid()) + if err == nil { + / attempt cascading signals in case SIGINT fails (os dependent) + sigIntErr := p.Signal(syscall.SIGINT) + sigTermErr := p.Signal(syscall.SIGTERM) + if sigIntErr == nil || sigTermErr == nil { + return +} + +} + + / Resort to exiting immediately if the process could not be found or killed + / via SIGINT/SIGTERM signals. + app.logger.Info("failed to send SIGINT/SIGTERM; exiting...") + +os.Exit(0) +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. +func (app *BaseApp) + +Query(req abci.RequestQuery) (res abci.ResponseQuery) { + / Add panic recovery for all queries. + / ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + res = sdkerrors.QueryResult(sdkerrors.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + +defer telemetry.MeasureSince(time.Now(), req.Path) + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req) +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return sdkerrors.QueryResult(sdkerrors.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), app.trace) +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + return handleQueryApp(app, path, req) + case QueryPathStore: + return handleQueryStore(app, path, req) + case QueryPathP2P: + return handleQueryP2P(app, path) +} + +return 
sdkerrors.QueryResult(sdkerrors.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ListSnapshots(req abci.RequestListSnapshots) + +abci.ResponseListSnapshots { + resp := abci.ResponseListSnapshots{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return resp +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return resp +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req abci.RequestLoadSnapshotChunk) + +abci.ResponseLoadSnapshotChunk { + if app.snapshotManager == nil { + return abci.ResponseLoadSnapshotChunk{ +} + +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return abci.ResponseLoadSnapshotChunk{ +} + +} + +return abci.ResponseLoadSnapshotChunk{ + Chunk: chunk +} +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req abci.RequestOfferSnapshot) + +abci.ResponseOfferSnapshot { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +} + +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +} + +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +} + +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ACCEPT +} + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT_FORMAT +} + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +} + +default: + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a different snapshot, + / so we ask Tendermint to abort all snapshot restoration. + return abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +} + +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +ApplySnapshotChunk(req abci.RequestApplySnapshotChunk) + +abci.ResponseApplySnapshotChunk { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +} + +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ACCEPT +} + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +} + +default: + app.logger.Error("failed to restore snapshot", "err", err) + +return abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +} + +} +} + +func (app *BaseApp) + +handleQueryGRPC(handler GRPCQueryHandler, req abci.RequestQuery) + +abci.ResponseQuery { + ctx, err := app.CreateQueryContext(req.Height, req.Prove) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + +res, err := handler(ctx, req) + if err != nil { + res = sdkerrors.QueryResult(gRPCErrorToSDKError(err), app.trace) + +res.Height = req.Height + return res +} + +return res +} + +func gRPCErrorToSDKError(err error) + +error { + status, ok := grpcstatus.FromError(err) + if !ok { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) +} + switch status.Code() { + case codes.NotFound: + return sdkerrors.Wrap(sdkerrors.ErrKeyNotFound, err.Error()) + case codes.InvalidArgument: + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.FailedPrecondition: + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.Unauthenticated: + return 
sdkerrors.Wrap(sdkerrors.ErrUnauthorized, err.Error()) + +default: + return sdkerrors.Wrap(sdkerrors.ErrUnknownRequest, err.Error()) +} +} + +func checkNegativeHeight(height int64) + +error { + if height < 0 { + / Reject invalid heights. + return sdkerrors.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with height < 0; please provide a valid height", + ) +} + +return nil +} + +/ createQueryContext creates a new sdk.Context for a query, taking as args +/ the block height and whether the query needs a proof or not. +func (app *BaseApp) + +CreateQueryContext(height int64, prove bool) (sdk.Context, error) { + if err := checkNegativeHeight(height); err != nil { + return sdk.Context{ +}, err +} + + / use custom query multistore if provided + qms := app.qms + if qms == nil { + qms = app.cms.(sdk.MultiStore) +} + lastBlockHeight := qms.LatestVersion() + if height > lastBlockHeight { + return sdk.Context{ +}, + sdkerrors.Wrap( + sdkerrors.ErrInvalidHeight, + "cannot query with height in the future; please provide a valid height", + ) +} + + / when a client did not provide a query height, manually inject the latest + if height == 0 { + height = lastBlockHeight +} + if height <= 1 && prove { + return sdk.Context{ +}, + sdkerrors.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ) +} + +cacheMS, err := qms.CacheMultiStoreWithVersion(height) + if err != nil { + return sdk.Context{ +}, + sdkerrors.Wrapf( + sdkerrors.ErrInvalidRequest, + "failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight, + ) +} + + / branch the commit-multistore for safety + ctx := sdk.NewContext( + cacheMS, app.checkState.ctx.BlockHeader(), true, app.logger, + ).WithMinGasPrices(app.minGasPrices).WithBlockHeight(height) + +return ctx, nil +} + +/ GetBlockRetentionHeight returns the height for which all blocks below this height +/ are pruned from Tendermint. 
Given a commitment height and a non-zero local
+/ minRetainBlocks configuration, the retentionHeight is the smallest height that
+/ satisfies:
+/
+/ - Unbonding (safety threshold) time: The block interval in which validators
+/ can be economically punished for misbehavior. Blocks in this interval must be
+/ auditable e.g. by the light client.
+/
+/ - Logical store snapshot interval: The block interval at which the underlying
+/ logical store database is persisted to disk, e.g. every 10000 heights. Blocks
+/ since the last IAVL snapshot must be available for replay on application restart.
+/
+/ - State sync snapshots: Blocks since the oldest available snapshot must be
+/ available for state sync nodes to catch up (oldest because a node may be
+/ restoring an old snapshot while a new snapshot was taken).
+/
+/ - Local (minRetainBlocks) config: Archive nodes may want to retain more or
+/ all blocks, e.g. via a local config option min-retain-blocks. There may also
+/ be a need to vary retention for other nodes, e.g. sentry nodes which do not
+/ need historical blocks.
+func (app *BaseApp)
+
+GetBlockRetentionHeight(commitHeight int64)
+
+int64 {
+ / pruning is disabled if minRetainBlocks is zero
+ if app.minRetainBlocks == 0 {
+ return 0
+}
+ minNonZero := func(x, y int64)
+
+int64 {
+ switch {
+ case x == 0:
+ return y
+ case y == 0:
+ return x
+ case x < y:
+ return x
+ default:
+ return y
+}
+
+}
+
+ / Define retentionHeight as the minimum value that satisfies all non-zero
+ / constraints. All blocks below (commitHeight-retentionHeight) are pruned
+ / from Tendermint.
+ var retentionHeight int64
+
+ / Define the number of blocks needed to protect against misbehaving validators
+ / which allows light clients to operate safely. Note, we piggyback off the
+ / evidence parameters instead of computing an estimated number of blocks based
+ / on the unbonding period and block commitment time as the two should be
+ / equivalent.
+ cp := app.GetConsensusParams(app.deliverState.ctx) + if cp != nil && cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 { + retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks +} + if app.snapshotManager != nil { + snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights() + if snapshotRetentionHeights > 0 { + retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights) +} + +} + v := commitHeight - int64(app.minRetainBlocks) + +retentionHeight = minNonZero(retentionHeight, v) + if retentionHeight <= 0 { + / prune nothing in the case of a non-positive height + return 0 +} + +return retentionHeight +} + +func handleQueryApp(app *BaseApp, path []string, req abci.RequestQuery) + +abci.ResponseQuery { + if len(path) >= 2 { + switch path[1] { + case "simulate": + txBytes := req.Data + + gInfo, res, err := app.Simulate(txBytes) + if err != nil { + return sdkerrors.QueryResult(sdkerrors.Wrap(err, "failed to simulate tx"), app.trace) +} + simRes := &sdk.SimulationResponse{ + GasInfo: gInfo, + Result: res, +} + +bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry) + if err != nil { + return sdkerrors.QueryResult(sdkerrors.Wrap(err, "failed to JSON encode simulation response"), app.trace) +} + +return abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: bz, +} + case "version": + return abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: []byte(app.version), +} + +default: + return sdkerrors.QueryResult(sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace) +} + +} + +return sdkerrors.QueryResult( + sdkerrors.Wrap( + sdkerrors.ErrUnknownRequest, + "expected second parameter to be either 'simulate' or 'version', neither was present", + ), app.trace) +} + +func handleQueryStore(app *BaseApp, path []string, req abci.RequestQuery) + +abci.ResponseQuery { + / "/store" prefix for store queries + queryable, 
ok := app.cms.(sdk.Queryable) + if !ok { + return sdkerrors.QueryResult(sdkerrors.Wrap(sdkerrors.ErrUnknownRequest, "multistore doesn't support queries"), app.trace) +} + +req.Path = "/" + strings.Join(path[1:], "/") + if req.Height <= 1 && req.Prove { + return sdkerrors.QueryResult( + sdkerrors.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ), app.trace) +} + resp := queryable.Query(req) + +resp.Height = req.Height + + return resp +} + +func handleQueryP2P(app *BaseApp, path []string) + +abci.ResponseQuery { + / "/p2p" prefix for p2p queries + if len(path) < 4 { + return sdkerrors.QueryResult( + sdkerrors.Wrap( + sdkerrors.ErrUnknownRequest, "path should be p2p filter ", + ), app.trace) +} + +var resp abci.ResponseQuery + + cmd, typ, arg := path[1], path[2], path[3] + switch cmd { + case "filter": + switch typ { + case "addr": + resp = app.FilterPeerByAddrPort(arg) + case "id": + resp = app.FilterPeerByID(arg) +} + +default: + resp = sdkerrors.QueryResult(sdkerrors.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace) +} + +return resp +} + +/ SplitABCIQueryPath splits a string path using the delimiter '/'. +/ +/ e.g. "this/is/funny" becomes []string{"this", "is", "funny" +} + +func SplitABCIQueryPath(requestPath string) (path []string) { + path = strings.Split(requestPath, "/") + + / first element is empty string + if len(path) > 0 && path[0] == "" { + path = path[1:] +} + +return path +} + +/ getContextForProposal returns the right context for PrepareProposal and +/ ProcessProposal. We use deliverState on the first block to be able to access +/ any state changes made in InitChain. 
+func (app *BaseApp) + +getContextForProposal(ctx sdk.Context, height int64) + +sdk.Context { + if height == 1 { + ctx, _ = app.deliverState.ctx.CacheContext() + +return ctx +} + +return ctx +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/module-interfaces.mdx b/docs/sdk/v0.47/documentation/module-system/module-interfaces.mdx new file mode 100644 index 00000000..fa7f826a --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/module-interfaces.mdx @@ -0,0 +1,2901 @@ +--- +title: Module Interfaces +--- + + +**Synopsis** +This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. + + + + +### Pre-requisite Readings + +* [Building Modules Intro](/docs/sdk/v0.47/documentation/module-system/intro) + + + +## CLI + +One of the main interfaces for an application is the [command-line interface](/docs/sdk/v0.47/learn/advanced/cli). This entrypoint adds commands from the application's modules enabling end-users to create [**messages**](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages) wrapped in transactions and [**queries**](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#queries). The CLI files are typically found in the module's `./client/cli` folder. + +### Transaction Commands + +In order to create messages that trigger state changes, end-users must create [transactions](/docs/sdk/v0.47/learn/advanced/transactions) that wrap and deliver the messages. A transaction command creates a transaction that includes one or more messages. + +Transaction commands typically have their own `tx.go` file that lives within the module's `./client/cli` folder. The commands are specified in getter functions and the name of the function should include the name of the command. 
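+
+Before looking at a real module, it can help to see the pattern with the CLI scaffolding stripped away. The sketch below uses hypothetical stand-in names (`msgSend`, `runSend`) rather than actual SDK APIs; it only illustrates the shape of a command body: check the argument count, validate and parse the positional arguments, then construct the message the transaction will wrap:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// msgSend is a hypothetical stand-in for an SDK message type such as MsgSend.
type msgSend struct {
	From, To, Amount string
}

// runSend mirrors what a command's run function does: check the argument
// count, validate and parse each positional argument, then build the message.
func runSend(args []string) (*msgSend, error) {
	if len(args) != 3 {
		return nil, errors.New("expected [from_key_or_address] [to_address] [amount]")
	}
	// toy prefix check standing in for real bech32 address validation
	if !strings.HasPrefix(args[1], "cosmos1") {
		return nil, fmt.Errorf("invalid to_address: %s", args[1])
	}
	return &msgSend{From: args[0], To: args[1], Amount: args[2]}, nil
}

func main() {
	msg, err := runSend([]string{"alice", "cosmos1qy352eufqy352eu", "10stake"})
	if err != nil {
		panic(err)
	}
	fmt.Println(msg.To, msg.Amount)
}
```

+In the real commands, `cobra` supplies the argument and flag handling, and the parsed message is handed to the `tx` package for signing and broadcasting instead of being returned.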
+ +Here is an example from the `x/bank` module: + +```go expandable +package cli + +import ( + + "fmt" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/tx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var FlagSplit = "split" + +/ NewTxCmd returns a root CLI command handler for all x/bank transaction commands. +func NewTxCmd() *cobra.Command { + txCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Bank transaction subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +txCmd.AddCommand( + NewSendTxCmd(), + NewMultiSendTxCmd(), + ) + +return txCmd +} + +/ NewSendTxCmd returns a CLI command handler for creating a MsgSend transaction. +func NewSendTxCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "send [from_key_or_address] [to_address] [amount]", + Short: "Send funds from one account to another.", + Long: `Send funds from one account to another. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. +`, + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +toAddr, err := sdk.AccAddressFromBech32(args[1]) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[2]) + if err != nil { + return err +} + msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, coins) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} + +/ NewMultiSendTxCmd returns a CLI command handler for creating a MsgMultiSend transaction. 
+/ For a better UX this command is limited to send funds from one account to two or more accounts.
+func NewMultiSendTxCmd() *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "multi-send [from_key_or_address] [to_address_1, to_address_2, ...] [amount]",
+ Short: "Send funds from one account to two or more accounts.",
+ Long: `Send funds from one account to two or more accounts.
+By default, sends the [amount] to each address of the list.
+Using the '--split' flag, the [amount] is split equally between the addresses.
+Note, the '--from' flag is ignored as it is implied from [from_key_or_address].
+When using '--dry-run' a key name cannot be used, only a bech32 address.
+`,
+ Args: cobra.MinimumNArgs(4),
+ RunE: func(cmd *cobra.Command, args []string)
+
+error {
+ cmd.Flags().Set(flags.FlagFrom, args[0])
+
+clientCtx, err := client.GetClientTxContext(cmd)
+ if err != nil {
+ return err
+}
+
+coins, err := sdk.ParseCoinsNormalized(args[len(args)-1])
+ if err != nil {
+ return err
+}
+ if coins.IsZero() {
+ return fmt.Errorf("must send positive amount")
+}
+
+split, err := cmd.Flags().GetBool(FlagSplit)
+ if err != nil {
+ return err
+}
+ totalAddrs := sdk.NewInt(int64(len(args) - 2))
+ / coins to be received by the addresses
+ sendCoins := coins
+ if split {
+ sendCoins = coins.QuoInt(totalAddrs)
+}
+
+var output []types.Output
+ for _, arg := range args[1 : len(args)-1] {
+ toAddr, err := sdk.AccAddressFromBech32(arg)
+ if err != nil {
+ return err
+}
+
+output = append(output, types.NewOutput(toAddr, sendCoins))
+}
+
+ / amount to be sent from the from address
+ var amount sdk.Coins
+ if split {
+ / user input: 1000stake to send to 3 addresses
+ / actual: 333stake to each address (=> 999stake actually sent)
+
+amount = sendCoins.MulInt(totalAddrs)
+}
+
+else {
+ amount = coins.MulInt(totalAddrs)
+}
+ msg := types.NewMsgMultiSend([]types.Input{
+ types.NewInput(clientCtx.FromAddress, amount)
+}, output)
+
+return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), 
msg) +}, +} + +cmd.Flags().Bool(FlagSplit, false, "Send the equally split token amount to each address") + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} +``` + +In the example, `NewSendTxCmd()` creates and returns the transaction command for a transaction that wraps and delivers `MsgSend`. `MsgSend` is the message used to send tokens from one account to another. + +In general, the getter function does the following: + +* **Constructs the command:** Read the [Cobra Documentation](https://pkg.go.dev/github.com/spf13/cobra) for more detailed information on how to create commands. + * **Use:** Specifies the format of the user input required to invoke the command. In the example above, `send` is the name of the transaction command and `[from_key_or_address]`, `[to_address]`, and `[amount]` are the arguments. + * **Args:** The number of arguments the user provides. In this case, there are exactly three: `[from_key_or_address]`, `[to_address]`, and `[amount]`. + * **Short and Long:** Descriptions for the command. A `Short` description is expected. A `Long` description can be used to provide additional information that is displayed when a user adds the `--help` flag. + * **RunE:** Defines a function that can return an error. This is the function that is called when the command is executed. This function encapsulates all of the logic to create a new transaction. + * The function typically starts by getting the `clientCtx`, which can be done with `client.GetClientTxContext(cmd)`. The `clientCtx` contains information relevant to transaction handling, including information about the user. In this example, the `clientCtx` is used to retrieve the address of the sender by calling `clientCtx.GetFromAddress()`. + * If applicable, the command's arguments are parsed. In this example, the arguments `[to_address]` and `[amount]` are both parsed. 
    * A [message](/docs/sdk/v0.47/documentation/module-system/messages-and-queries) is created using the parsed arguments and information from the `clientCtx`. The constructor function of the message type is called directly. In this case, `types.NewMsgSend(fromAddr, toAddr, amount)`. It is good practice to call the necessary [message validation methods](/docs/sdk/v0.47/documentation/module-system/Validation), where possible, before broadcasting the message.
    * Depending on what the user wants, the transaction is either generated offline or signed and broadcast to the preconfigured node using `tx.GenerateOrBroadcastTxCLI(clientCtx, flags, msg)`.
* **Adds transaction flags:** All transaction commands must add a set of transaction [flags](#flags). The transaction flags are used to collect additional information from the user (e.g. the amount of fees the user is willing to pay). The transaction flags are added to the constructed command using `AddTxFlagsToCmd(cmd)`.
* **Returns the command:** Finally, the transaction command is returned.

Each module must implement `NewTxCmd()`, which aggregates all of the transaction commands of the module. In the `x/bank` example above, this is the root handler to which the individual transaction commands are attached:

```go
// NewTxCmd returns a root CLI command handler for all x/bank transaction commands.
func NewTxCmd() *cobra.Command {
	txCmd := &cobra.Command{
		Use:                        types.ModuleName,
		Short:                      "Bank transaction subcommands",
		DisableFlagParsing:         true,
		SuggestionsMinimumDistance: 2,
		RunE:                       client.ValidateCmd,
	}

	txCmd.AddCommand(
		NewSendTxCmd(),
		NewMultiSendTxCmd(),
	)

	return txCmd
}
```

Each module must also implement the `GetTxCmd()` method on `AppModuleBasic`, which simply returns `NewTxCmd()`. This allows the root command to easily aggregate all of the transaction commands for each module.
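The `--split` arithmetic in `NewMultiSendTxCmd` above deserves a closer look: `sdk.Int` is backed by `math/big`, and `QuoInt` truncates, so the total actually debited can be less than what the user typed. A minimal, dependency-free sketch of just that arithmetic (the function name `splitAmount` is illustrative, not part of the SDK):

```go
package main

import (
	"fmt"
	"math/big"
)

// splitAmount mirrors the --split branch of NewMultiSendTxCmd:
// the per-address amount is the truncated integer quotient (QuoInt),
// and the sender is debited perAddr * numAddrs (MulInt), which can be
// less than the amount the user entered.
func splitAmount(userAmount, numAddrs int64) (perAddr, totalSent *big.Int) {
	amt := big.NewInt(userAmount)
	n := big.NewInt(numAddrs)
	perAddr = new(big.Int).Quo(amt, n)       // truncated division, like sdk.Int.QuoInt
	totalSent = new(big.Int).Mul(perAddr, n) // what actually leaves the sender
	return perAddr, totalSent
}

func main() {
	// user input: 1000stake to send to 3 addresses
	per, total := splitAmount(1000, 3)
	fmt.Printf("%s per address, %s sent in total\n", per, total) // 333 per address, 999 sent in total
}
```

The full `AppModuleBasic` wiring for `x/bank` follows.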
Here is an example: + +```go expandable +package bank + +import ( + + "context" + "encoding/json" + "fmt" + "time" + + modulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/client/cli" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + v1bank "github.com/cosmos/cosmos-sdk/x/bank/migrations/v1" + "github.com/cosmos/cosmos-sdk/x/bank/simulation" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +/ ConsensusVersion defines the current x/bank module consensus version. +const ConsensusVersion = 4 + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the bank module. +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the bank module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the bank module's types on the LegacyAmino codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the bank +/ module. 
+func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the bank module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return data.Validate() +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the bank module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the bank module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd() +} + +/ GetQueryCmd returns no root query command for the bank module. +func (AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd() +} + +/ RegisterInterfaces registers interfaces and implementations of the bank module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) + + / Register legacy interfaces for migration scripts. + v1bank.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the bank module. +type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper.(keeper.BaseKeeper), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 1 to 2: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 2 to 3: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 3 to 4: %v", err)) +} +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + accountKeeper: accountKeeper, + legacySubspace: ss, +} +} + +/ Name returns the bank module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterInvariants registers the bank module invariants. +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ QuerierRoute returns the bank module's querier route name. +func (AppModule) + +QuerierRoute() + +string { + return types.RouterKey +} + +/ InitGenesis performs genesis initialization for the bank module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + start := time.Now() + +var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +telemetry.MeasureSince(start, "InitGenesis", "crisis", "unmarshal") + +am.keeper.InitGenesis(ctx, &genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the bank +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the bank module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents doesn't return any content functions for governance proposals. +func (AppModule) + +ProposalContents(_ module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for supply module's types +func (am AppModule) + +RegisterStoreDecoder(_ sdk.StoreDecoderRegistry) { +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, am.accountKeeper, am.keeper, + ) +} + +/ App Wiring Setup + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type BankInputs struct { + depinject.In + + Config *modulev1.Module + Cdc codec.Codec + Key *store.KVStoreKey + + AccountKeeper types.AccountKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type BankOutputs struct { + depinject.Out + + BankKeeper keeper.BaseKeeper + Module appmodule.AppModule +} + +func ProvideModule(in BankInputs) + +BankOutputs { + / Configure blocked module accounts. + / + / Default behavior for blockedAddresses is to regard any module mentioned in + / AccountKeeper's module account permissions as blocked. + blockedAddresses := make(map[string]bool) + if len(in.Config.BlockedModuleAccountsOverride) > 0 { + for _, moduleName := range in.Config.BlockedModuleAccountsOverride { + blockedAddresses[authtypes.NewModuleAddress(moduleName).String()] = true +} + +} + +else { + for _, permission := range in.AccountKeeper.GetModulePermissions() { + blockedAddresses[permission.GetAddress().String()] = true +} + +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + bankKeeper := keeper.NewBaseKeeper( + in.Cdc, + in.Key, + in.AccountKeeper, + blockedAddresses, + authority.String(), + ) + m := NewAppModule(in.Cdc, bankKeeper, in.AccountKeeper, in.LegacySubspace) + +return BankOutputs{ + BankKeeper: bankKeeper, + Module: m +} +} +``` + +### Query Commands + +[Queries](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#queries) allow 
users to gather information about the application or network state; they are routed by the application and processed by the module in which they are defined. Query commands typically have their own `query.go` file in the module's `./client/cli` folder. Like transaction commands, they are specified in getter functions. Here is an example of a query command from the `x/auth` module: + +```go expandable +package cli + +import ( + + "context" + "fmt" + "strconv" + "strings" + "github.com/spf13/cobra" + tmtypes "github.com/tendermint/tendermint/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/query" + "github.com/cosmos/cosmos-sdk/version" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +const ( + flagEvents = "events" + flagType = "type" + + typeHash = "hash" + typeAccSeq = "acc_seq" + typeSig = "signature" + + eventFormat = "{ + eventType +}.{ + eventAttribute +}={ + value +}" +) + +/ GetQueryCmd returns the transaction commands for this module +func GetQueryCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Querying commands for the auth module", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + GetAccountCmd(), + GetAccountAddressByIDCmd(), + GetAccountsCmd(), + QueryParamsCmd(), + QueryModuleAccountsCmd(), + QueryModuleAccountByNameCmd(), + ) + +return cmd +} + +/ QueryParamsCmd returns the command handler for evidence parameter querying. 
+func QueryParamsCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "params", + Short: "Query the current auth parameters", + Args: cobra.NoArgs, + Long: strings.TrimSpace(`Query the current auth parameters: + +$ query auth params +`), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Params(cmd.Context(), &types.QueryParamsRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Params) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetAccountCmd returns a query account that will display the state of the +/ account at a given address. +func GetAccountCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "account [address]", + Short: "Query for account by address", + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +key, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Account(cmd.Context(), &types.QueryAccountRequest{ + Address: key.String() +}) + if err != nil { + node, err2 := clientCtx.GetNode() + if err2 != nil { + return err2 +} + +status, err2 := node.Status(context.Background()) + if err2 != nil { + return err2 +} + catchingUp := status.SyncInfo.CatchingUp + if !catchingUp { + return errors.Wrapf(err, "your node may be syncing, please check node status using `/status`") +} + +return err +} + +return clientCtx.PrintProto(res.Account) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetAccountAddressByIDCmd returns a query account that will display the account address of a given account id. 
+func GetAccountAddressByIDCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "address-by-acc-num [acc-num]", + Aliases: []string{"address-by-id" +}, + Short: "Query for an address by account number", + Args: cobra.ExactArgs(1), + Example: fmt.Sprintf("%s q auth address-by-acc-num 1", version.AppName), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +accNum, err := strconv.ParseUint(args[0], 10, 64) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.AccountAddressByID(cmd.Context(), &types.QueryAccountAddressByIDRequest{ + AccountId: accNum, +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetAccountsCmd returns a query command that will display a list of accounts +func GetAccountsCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "accounts", + Short: "Query all the accounts", + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Accounts(cmd.Context(), &types.QueryAccountsRequest{ + Pagination: pageReq +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "all-accounts") + +return cmd +} + +/ QueryAllModuleAccountsCmd returns a list of all the existing module accounts with their account information and permissions +func QueryModuleAccountsCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "module-accounts", + Short: "Query all module accounts", + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if 
err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.ModuleAccounts(context.Background(), &types.QueryModuleAccountsRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ QueryModuleAccountByNameCmd returns a command to +func QueryModuleAccountByNameCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "module-account [module-name]", + Short: "Query module account info by module name", + Args: cobra.ExactArgs(1), + Example: fmt.Sprintf("%s q auth module-account auth", version.AppName), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + moduleName := args[0] + if len(moduleName) == 0 { + return fmt.Errorf("module name should not be empty") +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.ModuleAccountByName(context.Background(), &types.QueryModuleAccountByNameRequest{ + Name: moduleName +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ QueryTxsByEventsCmd returns a command to search through transactions by events. +func QueryTxsByEventsCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "txs", + Short: "Query for paginated transactions that match a set of events", + Long: strings.TrimSpace( + fmt.Sprintf(` +Search for transactions that match the exact given events where results are paginated. +Each event takes the form of '%s'. Please refer +to each module's documentation for the full set of events to query for. Each module +documents its respective events under 'xx_events.md'. 
+ +Example: +$ %s query txs --%s 'message.sender=cosmos1...&message.action=withdraw_delegator_reward' --page 1 --limit 30 +`, eventFormat, version.AppName, flagEvents), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +eventsRaw, _ := cmd.Flags().GetString(flagEvents) + eventsStr := strings.Trim(eventsRaw, "'") + +var events []string + if strings.Contains(eventsStr, "&") { + events = strings.Split(eventsStr, "&") +} + +else { + events = append(events, eventsStr) +} + +var tmEvents []string + for _, event := range events { + if !strings.Contains(event, "=") { + return fmt.Errorf("invalid event; event %s should be of the format: %s", event, eventFormat) +} + +else if strings.Count(event, "=") > 1 { + return fmt.Errorf("invalid event; event %s should be of the format: %s", event, eventFormat) +} + tokens := strings.Split(event, "=") + if tokens[0] == tmtypes.TxHeightKey { + event = fmt.Sprintf("%s=%s", tokens[0], tokens[1]) +} + +else { + event = fmt.Sprintf("%s='%s'", tokens[0], tokens[1]) +} + +tmEvents = append(tmEvents, event) +} + +page, _ := cmd.Flags().GetInt(flags.FlagPage) + +limit, _ := cmd.Flags().GetInt(flags.FlagLimit) + +txs, err := authtx.QueryTxsByEvents(clientCtx, tmEvents, page, limit, "") + if err != nil { + return err +} + +return clientCtx.PrintProto(txs) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +cmd.Flags().Int(flags.FlagPage, query.DefaultPage, "Query a specific page of paginated results") + +cmd.Flags().Int(flags.FlagLimit, query.DefaultLimit, "Query number of transactions results per page returned") + +cmd.Flags().String(flagEvents, "", fmt.Sprintf("list of transaction events in the form of %s", eventFormat)) + +cmd.MarkFlagRequired(flagEvents) + +return cmd +} + +/ QueryTxCmd implements the default command for a tx query. 
+func QueryTxCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx --type=[hash|acc_seq|signature] [hash|acc_seq|signature]", + Short: "Query for a transaction by hash, \"/\" combination or comma-separated signatures in a committed block", + Long: strings.TrimSpace(fmt.Sprintf(` +Example: +$ %s query tx +$ %s query tx --%s=%s / +$ %s query tx --%s=%s , +`, + version.AppName, + version.AppName, flagType, typeAccSeq, + version.AppName, flagType, typeSig)), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +typ, _ := cmd.Flags().GetString(flagType) + switch typ { + case typeHash: + { + if args[0] == "" { + return fmt.Errorf("argument should be a tx hash") +} + + / If hash is given, then query the tx by hash. + output, err := authtx.QueryTx(clientCtx, args[0]) + if err != nil { + return err +} + if output.Empty() { + return fmt.Errorf("no transaction found with hash %s", args[0]) +} + +return clientCtx.PrintProto(output) +} + case typeSig: + { + sigParts, err := ParseSigArgs(args) + if err != nil { + return err +} + tmEvents := make([]string, len(sigParts)) + for i, sig := range sigParts { + tmEvents[i] = fmt.Sprintf("%s.%s='%s'", sdk.EventTypeTx, sdk.AttributeKeySignature, sig) +} + +txs, err := authtx.QueryTxsByEvents(clientCtx, tmEvents, query.DefaultPage, query.DefaultLimit, "") + if err != nil { + return err +} + if len(txs.Txs) == 0 { + return fmt.Errorf("found no txs matching given signatures") +} + if len(txs.Txs) > 1 { + / This case means there's a bug somewhere else in the code. Should not happen. 
+ return errors.ErrLogic.Wrapf("found %d txs matching given signatures", len(txs.Txs)) +} + +return clientCtx.PrintProto(txs.Txs[0]) +} + case typeAccSeq: + { + if args[0] == "" { + return fmt.Errorf("`acc_seq` type takes an argument '/'") +} + tmEvents := []string{ + fmt.Sprintf("%s.%s='%s'", sdk.EventTypeTx, sdk.AttributeKeyAccountSequence, args[0]), +} + +txs, err := authtx.QueryTxsByEvents(clientCtx, tmEvents, query.DefaultPage, query.DefaultLimit, "") + if err != nil { + return err +} + if len(txs.Txs) == 0 { + return fmt.Errorf("found no txs matching given address and sequence combination") +} + if len(txs.Txs) > 1 { + / This case means there's a bug somewhere else in the code. Should not happen. + return fmt.Errorf("found %d txs matching given address and sequence combination", len(txs.Txs)) +} + +return clientCtx.PrintProto(txs.Txs[0]) +} + +default: + return fmt.Errorf("unknown --%s value %s", flagType, typ) +} + +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +cmd.Flags().String(flagType, typeHash, fmt.Sprintf("The type to be used when querying tx, can be one of \"%s\", \"%s\", \"%s\"", typeHash, typeAccSeq, typeSig)) + +return cmd +} + +/ ParseSigArgs parses comma-separated signatures from the CLI arguments. +func ParseSigArgs(args []string) ([]string, error) { + if len(args) != 1 || args[0] == "" { + return nil, fmt.Errorf("argument should be comma-separated signatures") +} + +return strings.Split(args[0], ","), nil +} +``` + +In the example, `GetAccountCmd()` creates and returns a query command that returns the state of an account based on the provided account address. + +In general, the getter function does the following: + +* **Constructs the command:** Read the [Cobra Documentation](https://pkg.go.dev/github.com/spf13/cobra) for more detailed information on how to create commands. + * **Use:** Specifies the format of the user input required to invoke the command. 
In the example above, `account` is the name of the query command and `[address]` is the argument. + * **Args:** The number of arguments the user provides. In this case, there is exactly one: `[address]`. + * **Short and Long:** Descriptions for the command. A `Short` description is expected. A `Long` description can be used to provide additional information that is displayed when a user adds the `--help` flag. + * **RunE:** Defines a function that can return an error. This is the function that is called when the command is executed. This function encapsulates all of the logic to create a new query. + * The function typically starts by getting the `clientCtx`, which can be done with `client.GetClientQueryContext(cmd)`. The `clientCtx` contains information relevant to query handling. + * If applicable, the command's arguments are parsed. In this example, the argument `[address]` is parsed. + * A new `queryClient` is initialized using `NewQueryClient(clientCtx)`. The `queryClient` is then used to call the appropriate [query](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#grpc-queries). + * The `clientCtx.PrintProto` method is used to format the `proto.Message` object so that the results can be printed back to the user. +* **Adds query flags:** All query commands must add a set of query [flags](#flags). The query flags are added to the constructed command using `AddQueryFlagsToCmd(cmd)`. +* **Returns the command:** Finally, the query command is returned. + +Each module must implement `GetQueryCmd()`, which aggregates all of the query commands of the module. 
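The event-filter handling in `QueryTxsByEventsCmd` above can be sketched with plain string handling. This is a hypothetical standalone function (`parseEvents` is not an SDK API) that mirrors the core of that command's `RunE`, omitting the special case for `tx.height`: filters arrive as `{eventType}.{eventAttribute}={value}` pairs joined by `&`, and each value is single-quoted before being sent to the node:

```go
package main

import (
	"fmt"
	"strings"
)

// parseEvents splits a raw '&'-joined filter string into individual
// event expressions, validating that each contains exactly one '=' and
// single-quoting the value, as QueryTxsByEventsCmd does before querying.
func parseEvents(raw string) ([]string, error) {
	var out []string
	for _, ev := range strings.Split(strings.Trim(raw, "'"), "&") {
		if strings.Count(ev, "=") != 1 {
			return nil, fmt.Errorf("invalid event %q; expected {eventType}.{eventAttribute}={value}", ev)
		}
		kv := strings.SplitN(ev, "=", 2)
		out = append(out, fmt.Sprintf("%s='%s'", kv[0], kv[1]))
	}
	return out, nil
}

func main() {
	evs, _ := parseEvents("message.sender=cosmos1abc&message.action=withdraw_delegator_reward")
	fmt.Println(strings.Join(evs, " AND "))
	// message.sender='cosmos1abc' AND message.action='withdraw_delegator_reward'
}
```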
Here is an example from the `x/auth` module: + +```go expandable +package cli + +import ( + + "context" + "fmt" + "strconv" + "strings" + "github.com/spf13/cobra" + tmtypes "github.com/tendermint/tendermint/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/query" + "github.com/cosmos/cosmos-sdk/version" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +const ( + flagEvents = "events" + flagType = "type" + + typeHash = "hash" + typeAccSeq = "acc_seq" + typeSig = "signature" + + eventFormat = "{ + eventType +}.{ + eventAttribute +}={ + value +}" +) + +/ GetQueryCmd returns the transaction commands for this module +func GetQueryCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Querying commands for the auth module", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + GetAccountCmd(), + GetAccountAddressByIDCmd(), + GetAccountsCmd(), + QueryParamsCmd(), + QueryModuleAccountsCmd(), + QueryModuleAccountByNameCmd(), + ) + +return cmd +} + +/ QueryParamsCmd returns the command handler for evidence parameter querying. 
+func QueryParamsCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "params", + Short: "Query the current auth parameters", + Args: cobra.NoArgs, + Long: strings.TrimSpace(`Query the current auth parameters: + +$ query auth params +`), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Params(cmd.Context(), &types.QueryParamsRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Params) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetAccountCmd returns a query account that will display the state of the +/ account at a given address. +func GetAccountCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "account [address]", + Short: "Query for account by address", + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +key, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Account(cmd.Context(), &types.QueryAccountRequest{ + Address: key.String() +}) + if err != nil { + node, err2 := clientCtx.GetNode() + if err2 != nil { + return err2 +} + +status, err2 := node.Status(context.Background()) + if err2 != nil { + return err2 +} + catchingUp := status.SyncInfo.CatchingUp + if !catchingUp { + return errors.Wrapf(err, "your node may be syncing, please check node status using `/status`") +} + +return err +} + +return clientCtx.PrintProto(res.Account) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetAccountAddressByIDCmd returns a query account that will display the account address of a given account id. 
+func GetAccountAddressByIDCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "address-by-acc-num [acc-num]", + Aliases: []string{"address-by-id" +}, + Short: "Query for an address by account number", + Args: cobra.ExactArgs(1), + Example: fmt.Sprintf("%s q auth address-by-acc-num 1", version.AppName), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +accNum, err := strconv.ParseUint(args[0], 10, 64) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.AccountAddressByID(cmd.Context(), &types.QueryAccountAddressByIDRequest{ + AccountId: accNum, +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetAccountsCmd returns a query command that will display a list of accounts +func GetAccountsCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "accounts", + Short: "Query all the accounts", + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Accounts(cmd.Context(), &types.QueryAccountsRequest{ + Pagination: pageReq +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "all-accounts") + +return cmd +} + +/ QueryAllModuleAccountsCmd returns a list of all the existing module accounts with their account information and permissions +func QueryModuleAccountsCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "module-accounts", + Short: "Query all module accounts", + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if 
err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.ModuleAccounts(context.Background(), &types.QueryModuleAccountsRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ QueryModuleAccountByNameCmd returns a command to +func QueryModuleAccountByNameCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "module-account [module-name]", + Short: "Query module account info by module name", + Args: cobra.ExactArgs(1), + Example: fmt.Sprintf("%s q auth module-account auth", version.AppName), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + moduleName := args[0] + if len(moduleName) == 0 { + return fmt.Errorf("module name should not be empty") +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.ModuleAccountByName(context.Background(), &types.QueryModuleAccountByNameRequest{ + Name: moduleName +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ QueryTxsByEventsCmd returns a command to search through transactions by events. +func QueryTxsByEventsCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "txs", + Short: "Query for paginated transactions that match a set of events", + Long: strings.TrimSpace( + fmt.Sprintf(` +Search for transactions that match the exact given events where results are paginated. +Each event takes the form of '%s'. Please refer +to each module's documentation for the full set of events to query for. Each module +documents its respective events under 'xx_events.md'. 
+ +Example: +$ %s query txs --%s 'message.sender=cosmos1...&message.action=withdraw_delegator_reward' --page 1 --limit 30 +`, eventFormat, version.AppName, flagEvents), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +eventsRaw, _ := cmd.Flags().GetString(flagEvents) + eventsStr := strings.Trim(eventsRaw, "'") + +var events []string + if strings.Contains(eventsStr, "&") { + events = strings.Split(eventsStr, "&") +} + +else { + events = append(events, eventsStr) +} + +var tmEvents []string + for _, event := range events { + if !strings.Contains(event, "=") { + return fmt.Errorf("invalid event; event %s should be of the format: %s", event, eventFormat) +} + +else if strings.Count(event, "=") > 1 { + return fmt.Errorf("invalid event; event %s should be of the format: %s", event, eventFormat) +} + tokens := strings.Split(event, "=") + if tokens[0] == tmtypes.TxHeightKey { + event = fmt.Sprintf("%s=%s", tokens[0], tokens[1]) +} + +else { + event = fmt.Sprintf("%s='%s'", tokens[0], tokens[1]) +} + +tmEvents = append(tmEvents, event) +} + +page, _ := cmd.Flags().GetInt(flags.FlagPage) + +limit, _ := cmd.Flags().GetInt(flags.FlagLimit) + +txs, err := authtx.QueryTxsByEvents(clientCtx, tmEvents, page, limit, "") + if err != nil { + return err +} + +return clientCtx.PrintProto(txs) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +cmd.Flags().Int(flags.FlagPage, query.DefaultPage, "Query a specific page of paginated results") + +cmd.Flags().Int(flags.FlagLimit, query.DefaultLimit, "Query number of transactions results per page returned") + +cmd.Flags().String(flagEvents, "", fmt.Sprintf("list of transaction events in the form of %s", eventFormat)) + +cmd.MarkFlagRequired(flagEvents) + +return cmd +} + +/ QueryTxCmd implements the default command for a tx query. 
+func QueryTxCmd() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx --type=[hash|acc_seq|signature] [hash|acc_seq|signature]", + Short: "Query for a transaction by hash, \"/\" combination or comma-separated signatures in a committed block", + Long: strings.TrimSpace(fmt.Sprintf(` +Example: +$ %s query tx +$ %s query tx --%s=%s / +$ %s query tx --%s=%s , +`, + version.AppName, + version.AppName, flagType, typeAccSeq, + version.AppName, flagType, typeSig)), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +typ, _ := cmd.Flags().GetString(flagType) + switch typ { + case typeHash: + { + if args[0] == "" { + return fmt.Errorf("argument should be a tx hash") +} + + / If hash is given, then query the tx by hash. + output, err := authtx.QueryTx(clientCtx, args[0]) + if err != nil { + return err +} + if output.Empty() { + return fmt.Errorf("no transaction found with hash %s", args[0]) +} + +return clientCtx.PrintProto(output) +} + case typeSig: + { + sigParts, err := ParseSigArgs(args) + if err != nil { + return err +} + tmEvents := make([]string, len(sigParts)) + for i, sig := range sigParts { + tmEvents[i] = fmt.Sprintf("%s.%s='%s'", sdk.EventTypeTx, sdk.AttributeKeySignature, sig) +} + +txs, err := authtx.QueryTxsByEvents(clientCtx, tmEvents, query.DefaultPage, query.DefaultLimit, "") + if err != nil { + return err +} + if len(txs.Txs) == 0 { + return fmt.Errorf("found no txs matching given signatures") +} + if len(txs.Txs) > 1 { + / This case means there's a bug somewhere else in the code. Should not happen. 
+ return errors.ErrLogic.Wrapf("found %d txs matching given signatures", len(txs.Txs)) +} + +return clientCtx.PrintProto(txs.Txs[0]) +} + case typeAccSeq: + { + if args[0] == "" { + return fmt.Errorf("`acc_seq` type takes an argument '/'") +} + tmEvents := []string{ + fmt.Sprintf("%s.%s='%s'", sdk.EventTypeTx, sdk.AttributeKeyAccountSequence, args[0]), +} + +txs, err := authtx.QueryTxsByEvents(clientCtx, tmEvents, query.DefaultPage, query.DefaultLimit, "") + if err != nil { + return err +} + if len(txs.Txs) == 0 { + return fmt.Errorf("found no txs matching given address and sequence combination") +} + if len(txs.Txs) > 1 { + / This case means there's a bug somewhere else in the code. Should not happen. + return fmt.Errorf("found %d txs matching given address and sequence combination", len(txs.Txs)) +} + +return clientCtx.PrintProto(txs.Txs[0]) +} + +default: + return fmt.Errorf("unknown --%s value %s", flagType, typ) +} + +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +cmd.Flags().String(flagType, typeHash, fmt.Sprintf("The type to be used when querying tx, can be one of \"%s\", \"%s\", \"%s\"", typeHash, typeAccSeq, typeSig)) + +return cmd +} + +/ ParseSigArgs parses comma-separated signatures from the CLI arguments. +func ParseSigArgs(args []string) ([]string, error) { + if len(args) != 1 || args[0] == "" { + return nil, fmt.Errorf("argument should be comma-separated signatures") +} + +return strings.Split(args[0], ","), nil +} +``` + +Each module must also implement the `GetQueryCmd()` method for `AppModuleBasic` that returns the `GetQueryCmd()` function. This allows for the root command to easily aggregate all of the query commands for each module. 
Here is an example: + +```go expandable +package bank + +import ( + + "context" + "encoding/json" + "fmt" + "time" + + modulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/client/cli" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + v1bank "github.com/cosmos/cosmos-sdk/x/bank/migrations/v1" + "github.com/cosmos/cosmos-sdk/x/bank/simulation" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +/ ConsensusVersion defines the current x/bank module consensus version. +const ConsensusVersion = 4 + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the bank module. +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the bank module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the bank module's types on the LegacyAmino codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the bank +/ module. 
+func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the bank module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return data.Validate() +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the bank module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the bank module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd() +} + +/ GetQueryCmd returns no root query command for the bank module. +func (AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd() +} + +/ RegisterInterfaces registers interfaces and implementations of the bank module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) + + / Register legacy interfaces for migration scripts. + v1bank.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the bank module. +type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper.(keeper.BaseKeeper), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 1 to 2: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 2 to 3: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 3 to 4: %v", err)) +} +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + accountKeeper: accountKeeper, + legacySubspace: ss, +} +} + +/ Name returns the bank module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterInvariants registers the bank module invariants. +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ QuerierRoute returns the bank module's querier route name. +func (AppModule) + +QuerierRoute() + +string { + return types.RouterKey +} + +/ InitGenesis performs genesis initialization for the bank module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + start := time.Now() + +var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +telemetry.MeasureSince(start, "InitGenesis", "crisis", "unmarshal") + +am.keeper.InitGenesis(ctx, &genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the bank +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the bank module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents doesn't return any content functions for governance proposals. +func (AppModule) + +ProposalContents(_ module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for supply module's types +func (am AppModule) + +RegisterStoreDecoder(_ sdk.StoreDecoderRegistry) { +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, am.accountKeeper, am.keeper, + ) +} + +/ App Wiring Setup + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type BankInputs struct { + depinject.In + + Config *modulev1.Module + Cdc codec.Codec + Key *store.KVStoreKey + + AccountKeeper types.AccountKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type BankOutputs struct { + depinject.Out + + BankKeeper keeper.BaseKeeper + Module appmodule.AppModule +} + +func ProvideModule(in BankInputs) + +BankOutputs { + / Configure blocked module accounts. + / + / Default behavior for blockedAddresses is to regard any module mentioned in + / AccountKeeper's module account permissions as blocked. + blockedAddresses := make(map[string]bool) + if len(in.Config.BlockedModuleAccountsOverride) > 0 { + for _, moduleName := range in.Config.BlockedModuleAccountsOverride { + blockedAddresses[authtypes.NewModuleAddress(moduleName).String()] = true +} + +} + +else { + for _, permission := range in.AccountKeeper.GetModulePermissions() { + blockedAddresses[permission.GetAddress().String()] = true +} + +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + bankKeeper := keeper.NewBaseKeeper( + in.Cdc, + in.Key, + in.AccountKeeper, + blockedAddresses, + authority.String(), + ) + m := NewAppModule(in.Cdc, bankKeeper, in.AccountKeeper, in.LegacySubspace) + +return BankOutputs{ + BankKeeper: bankKeeper, + Module: m +} +} +``` + +### Flags + +[Flags](/docs/sdk/v0.47/learn/advanced/cli#flags) allow users to customize commands. 
`--fees` and `--gas-prices` are examples of flags that allow users to set the [fees](/docs/sdk/v0.47/learn/beginner/gas-fees) and gas prices for their transactions. + +Flags that are specific to a module are typically created in a `flags.go` file in the module's `./client/cli` folder. When creating a flag, developers set the value type, the name of the flag, the default value, and a description about the flag. Developers also have the option to mark flags as *required* so that an error is thrown if the user does not include a value for the flag. + +Here is an example that adds the `--from` flag to a command: + +```go +cmd.Flags().String(FlagFrom, "", "Name or address of private key with which to sign") +``` + +In this example, the value of the flag is a `String`, the name of the flag is `from` (the value of the `FlagFrom` constant), the default value of the flag is `""`, and there is a description that will be displayed when a user adds `--help` to the command. + +Here is an example that marks the `--from` flag as *required*: + +```go +cmd.MarkFlagRequired(FlagFrom) +``` + +For more detailed information on creating flags, visit the [Cobra Documentation](https://github.com/spf13/cobra). + +As mentioned in [transaction commands](#transaction-commands), there is a set of flags that all transaction commands must add. This is done with the `AddTxFlagsToCmd` method defined in the Cosmos SDK's `./client/flags` package. + +```go expandable +package flags + +import ( + + "fmt" + "strconv" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + tmcli "github.com/tendermint/tendermint/libs/cli" + "github.com/cosmos/cosmos-sdk/crypto/keyring" +) + +const ( + / DefaultGasAdjustment is applied to gas estimates to avoid tx execution + / failures due to state changes that might occur between the tx simulation + / and the actual run. 
+ DefaultGasAdjustment = 1.0 + DefaultGasLimit = 200000 + GasFlagAuto = "auto" + + / DefaultKeyringBackend + DefaultKeyringBackend = keyring.BackendOS + + / BroadcastSync defines a tx broadcasting mode where the client waits for + / a CheckTx execution response only. + BroadcastSync = "sync" + / BroadcastAsync defines a tx broadcasting mode where the client returns + / immediately. + BroadcastAsync = "async" + + / SignModeDirect is the value of the --sign-mode flag for SIGN_MODE_DIRECT + SignModeDirect = "direct" + / SignModeLegacyAminoJSON is the value of the --sign-mode flag for SIGN_MODE_LEGACY_AMINO_JSON + SignModeLegacyAminoJSON = "amino-json" + / SignModeDirectAux is the value of the --sign-mode flag for SIGN_MODE_DIRECT_AUX + SignModeDirectAux = "direct-aux" + / SignModeEIP191 is the value of the --sign-mode flag for SIGN_MODE_EIP_191 + SignModeEIP191 = "eip-191" +) + +/ List of CLI flags +const ( + FlagHome = tmcli.HomeFlag + FlagKeyringDir = "keyring-dir" + FlagUseLedger = "ledger" + FlagChainID = "chain-id" + FlagNode = "node" + FlagGRPC = "grpc-addr" + FlagGRPCInsecure = "grpc-insecure" + FlagHeight = "height" + FlagGasAdjustment = "gas-adjustment" + FlagFrom = "from" + FlagName = "name" + FlagAccountNumber = "account-number" + FlagSequence = "sequence" + FlagNote = "note" + FlagFees = "fees" + FlagGas = "gas" + FlagGasPrices = "gas-prices" + FlagBroadcastMode = "broadcast-mode" + FlagDryRun = "dry-run" + FlagGenerateOnly = "generate-only" + FlagOffline = "offline" + FlagOutputDocument = "output-document" / inspired by wget -O + FlagSkipConfirmation = "yes" + FlagProve = "prove" + FlagKeyringBackend = "keyring-backend" + FlagPage = "page" + FlagLimit = "limit" + FlagSignMode = "sign-mode" + FlagPageKey = "page-key" + FlagOffset = "offset" + FlagCountTotal = "count-total" + FlagTimeoutHeight = "timeout-height" + FlagKeyAlgorithm = "algo" + FlagFeePayer = "fee-payer" + FlagFeeGranter = "fee-granter" + FlagReverse = "reverse" + FlagTip = "tip" + FlagAux = 
"aux" + / FlagOutput is the flag to set the output format. + / This differs from FlagOutputDocument that is used to set the output file. + FlagOutput = tmcli.OutputFlag + + / Tendermint logging flags + FlagLogLevel = "log_level" + FlagLogFormat = "log_format" +) + +/ LineBreak can be included in a command list to provide a blank line +/ to help with readability +var LineBreak = &cobra.Command{ + Run: func(*cobra.Command, []string) { +}} + +/ AddQueryFlagsToCmd adds common flags to a module query command. +func AddQueryFlagsToCmd(cmd *cobra.Command) { + cmd.Flags().String(FlagNode, "tcp:/localhost:26657", ": to Tendermint RPC interface for this chain") + +cmd.Flags().String(FlagGRPC, "", "the gRPC endpoint to use for this chain") + +cmd.Flags().Bool(FlagGRPCInsecure, false, "allow gRPC over insecure channels, if not TLS the server must use TLS") + +cmd.Flags().Int64(FlagHeight, 0, "Use a specific height to query state at (this can error if the node is pruning state)") + +cmd.Flags().StringP(FlagOutput, "o", "text", "Output format (text|json)") + + / some base commands does not require chainID e.g `simd testnet` while subcommands do + / hence the flag should not be required for those commands + _ = cmd.MarkFlagRequired(FlagChainID) +} + +/ AddTxFlagsToCmd adds common flags to a module tx command. 
+func AddTxFlagsToCmd(cmd *cobra.Command) { + f := cmd.Flags() + +f.StringP(FlagOutput, "o", "json", "Output format (text|json)") + +f.String(FlagFrom, "", "Name or address of private key with which to sign") + +f.Uint64P(FlagAccountNumber, "a", 0, "The account number of the signing account (offline mode only)") + +f.Uint64P(FlagSequence, "s", 0, "The sequence number of the signing account (offline mode only)") + +f.String(FlagNote, "", "Note to add a description to the transaction (previously --memo)") + +f.String(FlagFees, "", "Fees to pay along with transaction; eg: 10uatom") + +f.String(FlagGasPrices, "", "Gas prices in decimal format to determine the transaction fee (e.g. 0.1uatom)") + +f.String(FlagNode, "tcp:/localhost:26657", ": to tendermint rpc interface for this chain") + +f.Bool(FlagUseLedger, false, "Use a connected Ledger device") + +f.Float64(FlagGasAdjustment, DefaultGasAdjustment, "adjustment factor to be multiplied against the estimate returned by the tx simulation; if the gas limit is set manually this flag is ignored ") + +f.StringP(FlagBroadcastMode, "b", BroadcastSync, "Transaction broadcasting mode (sync|async)") + +f.Bool(FlagDryRun, false, "ignore the --gas flag and perform a simulation of a transaction, but don't broadcast it (when enabled, the local Keybase is not accessible)") + +f.Bool(FlagGenerateOnly, false, "Build an unsigned transaction and write it to STDOUT (when enabled, the local Keybase only accessed when providing a key name)") + +f.Bool(FlagOffline, false, "Offline mode (does not allow any online functionality)") + +f.BoolP(FlagSkipConfirmation, "y", false, "Skip tx broadcasting prompt confirmation") + +f.String(FlagSignMode, "", "Choose sign mode (direct|amino-json|direct-aux), this is an advanced feature") + +f.Uint64(FlagTimeoutHeight, 0, "Set a block timeout height to prevent the tx from being committed past a certain height") + +f.String(FlagFeePayer, "", "Fee payer pays fees for the transaction instead of deducting from 
the signer") + +f.String(FlagFeeGranter, "", "Fee granter grants fees for the transaction") + +f.String(FlagTip, "", "Tip is the amount that is going to be transferred to the fee payer on the target chain. This flag is only valid when used with --aux, and is ignored if the target chain didn't enable the TipDecorator") + +f.Bool(FlagAux, false, "Generate aux signer data instead of sending a tx") + +f.String(FlagChainID, "", "The network chain ID") + / --gas can accept integers and "auto" + f.String(FlagGas, "", fmt.Sprintf("gas limit to set per-transaction; set to %q to calculate sufficient gas automatically. Note: %q option doesn't always report accurate results. Set a valid coin value to adjust the result. Can be used instead of %q. (default %d)", + GasFlagAuto, GasFlagAuto, FlagFees, DefaultGasLimit)) + +AddKeyringFlags(f) +} + +/ AddKeyringFlags sets common keyring flags +func AddKeyringFlags(flags *pflag.FlagSet) { + flags.String(FlagKeyringDir, "", "The client Keyring directory; if omitted, the default 'home' directory will be used") + +flags.String(FlagKeyringBackend, DefaultKeyringBackend, "Select keyring's backend (os|file|kwallet|pass|test|memory)") +} + +/ AddPaginationFlagsToCmd adds common pagination flags to cmd +func AddPaginationFlagsToCmd(cmd *cobra.Command, query string) { + cmd.Flags().Uint64(FlagPage, 1, fmt.Sprintf("pagination page of %s to query for. 
This sets offset to a multiple of limit", query)) + +cmd.Flags().String(FlagPageKey, "", fmt.Sprintf("pagination page-key of %s to query for", query)) + +cmd.Flags().Uint64(FlagOffset, 0, fmt.Sprintf("pagination offset of %s to query for", query)) + +cmd.Flags().Uint64(FlagLimit, 100, fmt.Sprintf("pagination limit of %s to query for", query)) + +cmd.Flags().Bool(FlagCountTotal, false, fmt.Sprintf("count total number of records in %s to query for", query)) + +cmd.Flags().Bool(FlagReverse, false, "results are sorted in descending order") +} + +/ GasSetting encapsulates the possible values passed through the --gas flag. +type GasSetting struct { + Simulate bool + Gas uint64 +} + +func (v *GasSetting) + +String() + +string { + if v.Simulate { + return GasFlagAuto +} + +return strconv.FormatUint(v.Gas, 10) +} + +/ ParseGasSetting parses a string gas value. The value may either be 'auto', +/ which indicates a transaction should be executed in simulate mode to +/ automatically find a sufficient gas value, or a string integer. It returns an +/ error if a string integer is provided which cannot be parsed. +func ParseGasSetting(gasStr string) (GasSetting, error) { + switch gasStr { + case "": + return GasSetting{ + false, DefaultGasLimit +}, nil + case GasFlagAuto: + return GasSetting{ + true, 0 +}, nil + + default: + gas, err := strconv.ParseUint(gasStr, 10, 64) + if err != nil { + return GasSetting{ +}, fmt.Errorf("gas must be either integer or %s", GasFlagAuto) +} + +return GasSetting{ + false, gas +}, nil +} +} +``` + +Since `AddTxFlagsToCmd(cmd *cobra.Command)` includes all of the basic flags required for a transaction command, module developers may choose not to add any of their own (specifying arguments instead may often be more appropriate). + +Similarly, there is a `AddQueryFlagsToCmd(cmd *cobra.Command)` to add common flags to a module query command. 
+ +```go expandable +package flags + +import ( + + "fmt" + "strconv" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + tmcli "github.com/tendermint/tendermint/libs/cli" + "github.com/cosmos/cosmos-sdk/crypto/keyring" +) + +const ( + / DefaultGasAdjustment is applied to gas estimates to avoid tx execution + / failures due to state changes that might occur between the tx simulation + / and the actual run. + DefaultGasAdjustment = 1.0 + DefaultGasLimit = 200000 + GasFlagAuto = "auto" + + / DefaultKeyringBackend + DefaultKeyringBackend = keyring.BackendOS + + / BroadcastSync defines a tx broadcasting mode where the client waits for + / a CheckTx execution response only. + BroadcastSync = "sync" + / BroadcastAsync defines a tx broadcasting mode where the client returns + / immediately. + BroadcastAsync = "async" + + / SignModeDirect is the value of the --sign-mode flag for SIGN_MODE_DIRECT + SignModeDirect = "direct" + / SignModeLegacyAminoJSON is the value of the --sign-mode flag for SIGN_MODE_LEGACY_AMINO_JSON + SignModeLegacyAminoJSON = "amino-json" + / SignModeDirectAux is the value of the --sign-mode flag for SIGN_MODE_DIRECT_AUX + SignModeDirectAux = "direct-aux" + / SignModeEIP191 is the value of the --sign-mode flag for SIGN_MODE_EIP_191 + SignModeEIP191 = "eip-191" +) + +/ List of CLI flags +const ( + FlagHome = tmcli.HomeFlag + FlagKeyringDir = "keyring-dir" + FlagUseLedger = "ledger" + FlagChainID = "chain-id" + FlagNode = "node" + FlagGRPC = "grpc-addr" + FlagGRPCInsecure = "grpc-insecure" + FlagHeight = "height" + FlagGasAdjustment = "gas-adjustment" + FlagFrom = "from" + FlagName = "name" + FlagAccountNumber = "account-number" + FlagSequence = "sequence" + FlagNote = "note" + FlagFees = "fees" + FlagGas = "gas" + FlagGasPrices = "gas-prices" + FlagBroadcastMode = "broadcast-mode" + FlagDryRun = "dry-run" + FlagGenerateOnly = "generate-only" + FlagOffline = "offline" + FlagOutputDocument = "output-document" / inspired by wget -O + FlagSkipConfirmation 
= "yes" + FlagProve = "prove" + FlagKeyringBackend = "keyring-backend" + FlagPage = "page" + FlagLimit = "limit" + FlagSignMode = "sign-mode" + FlagPageKey = "page-key" + FlagOffset = "offset" + FlagCountTotal = "count-total" + FlagTimeoutHeight = "timeout-height" + FlagKeyAlgorithm = "algo" + FlagFeePayer = "fee-payer" + FlagFeeGranter = "fee-granter" + FlagReverse = "reverse" + FlagTip = "tip" + FlagAux = "aux" + / FlagOutput is the flag to set the output format. + / This differs from FlagOutputDocument that is used to set the output file. + FlagOutput = tmcli.OutputFlag + + / Tendermint logging flags + FlagLogLevel = "log_level" + FlagLogFormat = "log_format" +) + +/ LineBreak can be included in a command list to provide a blank line +/ to help with readability +var LineBreak = &cobra.Command{ + Run: func(*cobra.Command, []string) { +}} + +/ AddQueryFlagsToCmd adds common flags to a module query command. +func AddQueryFlagsToCmd(cmd *cobra.Command) { + cmd.Flags().String(FlagNode, "tcp:/localhost:26657", ": to Tendermint RPC interface for this chain") + +cmd.Flags().String(FlagGRPC, "", "the gRPC endpoint to use for this chain") + +cmd.Flags().Bool(FlagGRPCInsecure, false, "allow gRPC over insecure channels, if not TLS the server must use TLS") + +cmd.Flags().Int64(FlagHeight, 0, "Use a specific height to query state at (this can error if the node is pruning state)") + +cmd.Flags().StringP(FlagOutput, "o", "text", "Output format (text|json)") + + / some base commands does not require chainID e.g `simd testnet` while subcommands do + / hence the flag should not be required for those commands + _ = cmd.MarkFlagRequired(FlagChainID) +} + +/ AddTxFlagsToCmd adds common flags to a module tx command. 
+func AddTxFlagsToCmd(cmd *cobra.Command) { + f := cmd.Flags() + +f.StringP(FlagOutput, "o", "json", "Output format (text|json)") + +f.String(FlagFrom, "", "Name or address of private key with which to sign") + +f.Uint64P(FlagAccountNumber, "a", 0, "The account number of the signing account (offline mode only)") + +f.Uint64P(FlagSequence, "s", 0, "The sequence number of the signing account (offline mode only)") + +f.String(FlagNote, "", "Note to add a description to the transaction (previously --memo)") + +f.String(FlagFees, "", "Fees to pay along with transaction; eg: 10uatom") + +f.String(FlagGasPrices, "", "Gas prices in decimal format to determine the transaction fee (e.g. 0.1uatom)") + +f.String(FlagNode, "tcp:/localhost:26657", ": to tendermint rpc interface for this chain") + +f.Bool(FlagUseLedger, false, "Use a connected Ledger device") + +f.Float64(FlagGasAdjustment, DefaultGasAdjustment, "adjustment factor to be multiplied against the estimate returned by the tx simulation; if the gas limit is set manually this flag is ignored ") + +f.StringP(FlagBroadcastMode, "b", BroadcastSync, "Transaction broadcasting mode (sync|async)") + +f.Bool(FlagDryRun, false, "ignore the --gas flag and perform a simulation of a transaction, but don't broadcast it (when enabled, the local Keybase is not accessible)") + +f.Bool(FlagGenerateOnly, false, "Build an unsigned transaction and write it to STDOUT (when enabled, the local Keybase only accessed when providing a key name)") + +f.Bool(FlagOffline, false, "Offline mode (does not allow any online functionality)") + +f.BoolP(FlagSkipConfirmation, "y", false, "Skip tx broadcasting prompt confirmation") + +f.String(FlagSignMode, "", "Choose sign mode (direct|amino-json|direct-aux), this is an advanced feature") + +f.Uint64(FlagTimeoutHeight, 0, "Set a block timeout height to prevent the tx from being committed past a certain height") + +f.String(FlagFeePayer, "", "Fee payer pays fees for the transaction instead of deducting from 
the signer") + +f.String(FlagFeeGranter, "", "Fee granter grants fees for the transaction") + +f.String(FlagTip, "", "Tip is the amount that is going to be transferred to the fee payer on the target chain. This flag is only valid when used with --aux, and is ignored if the target chain didn't enable the TipDecorator") + +f.Bool(FlagAux, false, "Generate aux signer data instead of sending a tx") + +f.String(FlagChainID, "", "The network chain ID") + / --gas can accept integers and "auto" + f.String(FlagGas, "", fmt.Sprintf("gas limit to set per-transaction; set to %q to calculate sufficient gas automatically. Note: %q option doesn't always report accurate results. Set a valid coin value to adjust the result. Can be used instead of %q. (default %d)", + GasFlagAuto, GasFlagAuto, FlagFees, DefaultGasLimit)) + +AddKeyringFlags(f) +} + +/ AddKeyringFlags sets common keyring flags +func AddKeyringFlags(flags *pflag.FlagSet) { + flags.String(FlagKeyringDir, "", "The client Keyring directory; if omitted, the default 'home' directory will be used") + +flags.String(FlagKeyringBackend, DefaultKeyringBackend, "Select keyring's backend (os|file|kwallet|pass|test|memory)") +} + +/ AddPaginationFlagsToCmd adds common pagination flags to cmd +func AddPaginationFlagsToCmd(cmd *cobra.Command, query string) { + cmd.Flags().Uint64(FlagPage, 1, fmt.Sprintf("pagination page of %s to query for. 
This sets offset to a multiple of limit", query)) + +cmd.Flags().String(FlagPageKey, "", fmt.Sprintf("pagination page-key of %s to query for", query)) + +cmd.Flags().Uint64(FlagOffset, 0, fmt.Sprintf("pagination offset of %s to query for", query)) + +cmd.Flags().Uint64(FlagLimit, 100, fmt.Sprintf("pagination limit of %s to query for", query)) + +cmd.Flags().Bool(FlagCountTotal, false, fmt.Sprintf("count total number of records in %s to query for", query)) + +cmd.Flags().Bool(FlagReverse, false, "results are sorted in descending order") +} + +/ GasSetting encapsulates the possible values passed through the --gas flag. +type GasSetting struct { + Simulate bool + Gas uint64 +} + +func (v *GasSetting) + +String() + +string { + if v.Simulate { + return GasFlagAuto +} + +return strconv.FormatUint(v.Gas, 10) +} + +/ ParseGasSetting parses a string gas value. The value may either be 'auto', +/ which indicates a transaction should be executed in simulate mode to +/ automatically find a sufficient gas value, or a string integer. It returns an +/ error if a string integer is provided which cannot be parsed. +func ParseGasSetting(gasStr string) (GasSetting, error) { + switch gasStr { + case "": + return GasSetting{ + false, DefaultGasLimit +}, nil + case GasFlagAuto: + return GasSetting{ + true, 0 +}, nil + + default: + gas, err := strconv.ParseUint(gasStr, 10, 64) + if err != nil { + return GasSetting{ +}, fmt.Errorf("gas must be either integer or %s", GasFlagAuto) +} + +return GasSetting{ + false, gas +}, nil +} +} +``` + +## gRPC + +[gRPC](https://grpc.io/) is a Remote Procedure Call (RPC) framework. RPC is the preferred way for external clients like wallets and exchanges to interact with a blockchain. + +In addition to providing an ABCI query pathway, the Cosmos SDK provides a gRPC proxy server that routes gRPC query requests to ABCI query requests. 
+ +In order to do that, modules must implement `RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux)` on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module. + +Here's an example from the `x/auth` module: + +```go expandable +package auth + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "cosmossdk.io/depinject" + "cosmossdk.io/core/appmodule" + + modulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/exported" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +/ ConsensusVersion defines the current x/auth module consensus version. +const ConsensusVersion = 4 + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the auth module. +type AppModuleBasic struct{ +} + +/ Name returns the auth module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the auth module's types for the given codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the auth +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the auth module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the auth module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the auth module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd returns the root query command for the auth module. +func (AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd() +} + +/ RegisterInterfaces registers interfaces and implementations of the auth module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the auth module. 
+type AppModule struct { + AppModuleBasic + + accountKeeper keeper.AccountKeeper + randGenAccountsFn types.RandomGenesisAccountsFn + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, accountKeeper keeper.AccountKeeper, randGenAccountsFn types.RandomGenesisAccountsFn, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ +}, + accountKeeper: accountKeeper, + randGenAccountsFn: randGenAccountsFn, + legacySubspace: ss, +} +} + +/ Name returns the auth module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterInvariants performs a no-op. +func (AppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers a GRPC query service to respond to the +/ module-specific GRPC queries. 
+func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.accountKeeper)) + +types.RegisterQueryServer(cfg.QueryServer(), am.accountKeeper) + m := keeper.NewMigrator(am.accountKeeper, cfg.QueryServer(), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 3 to 4: %v", types.ModuleName, err)) +} + + / see migrations/v5/doc.go + if err := cfg.RegisterMigration(types.ModuleName, 4, func(ctx sdk.Context) + +error { + return nil +}); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 4 to 5: %v", types.ModuleName, err)) +} +} + +/ InitGenesis performs genesis initialization for the auth module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.accountKeeper.InitGenesis(ctx, genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the auth +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.accountKeeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the auth module +func (am AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState, am.randGenAccountsFn) +} + +/ ProposalContents doesn't return any content functions for governance proposals. +func (AppModule) + +ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for auth module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[types.StoreKey] = simulation.NewDecodeStore(am.accountKeeper) +} + +/ WeightedOperations doesn't return any auth module operation. +func (AppModule) + +WeightedOperations(_ module.SimulationState) []simtypes.WeightedOperation { + return nil +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type AuthInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + + RandomGenesisAccountsFn types.RandomGenesisAccountsFn `optional:"true"` + AccountI func() + +types.AccountI `optional:"true"` + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type AuthOutputs struct { + depinject.Out + + AccountKeeper keeper.AccountKeeper + Module appmodule.AppModule +} + +func ProvideModule(in AuthInputs) + +AuthOutputs { + maccPerms := map[string][]string{ +} + for _, permission := range in.Config.ModuleAccountPermissions { + maccPerms[permission.Account] = permission.Permissions +} + + / default to governance authority if not provided + authority := types.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = 
types.NewModuleAddressOrBech32Address(in.Config.Authority)
+}
+ if in.RandomGenesisAccountsFn == nil {
+ in.RandomGenesisAccountsFn = simulation.RandomGenesisAccounts
+}
+ if in.AccountI == nil {
+ in.AccountI = types.ProtoBaseAccount
+}
+ k := keeper.NewAccountKeeper(in.Cdc, in.Key, in.AccountI, maccPerms, in.Config.Bech32Prefix, authority.String())
+ m := NewAppModule(in.Cdc, k, in.RandomGenesisAccountsFn, in.LegacySubspace)
+
+return AuthOutputs{
+ AccountKeeper: k,
+ Module: m
+}
+}
+```
+
+## gRPC-gateway REST
+
+Applications need to support web services that use HTTP requests (e.g. a web wallet like [Keplr](https://keplr.app)). [grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) translates REST calls into gRPC calls, which is useful for clients that cannot use gRPC.
+
+Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods, such as in the example below from the `x/auth` module:
+
+```protobuf
+// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/auth/v1beta1/query.proto#L14-L89
+```
+
+The gRPC gateway is started in-process along with the application and CometBFT. It can be enabled or disabled by setting `enable` in the gRPC configuration of [`app.toml`](/docs/sdk/v0.47/user/run-node/interact-node#configuring-the-node-using-apptoml).
+
+The Cosmos SDK provides a command for generating [Swagger](https://swagger.io/) documentation (`protoc-gen-swagger`). Setting `swagger` in [`app.toml`](/docs/sdk/v0.47/user/run-node/interact-node#configuring-the-node-using-apptoml) defines whether Swagger documentation should be automatically registered.
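
Both services are toggled in the node's `app.toml`. A minimal sketch of the relevant sections (field names follow the v0.47 defaults; the addresses shown are illustrative and should be adjusted to your setup):

```toml
# Enable the gRPC server that backs native gRPC queries
# and the grpc-gateway REST translation layer.
[grpc]
enable = true
address = "0.0.0.0:9090"

# Enable the REST API server; setting `swagger` also registers
# the generated Swagger documentation.
[api]
enable = true
swagger = true
address = "tcp://0.0.0.0:1317"
```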
diff --git a/docs/sdk/v0.47/documentation/module-system/module-manager.mdx b/docs/sdk/v0.47/documentation/module-system/module-manager.mdx
new file mode 100644
index 00000000..8fc7b4a6
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/module-manager.mdx
@@ -0,0 +1,11863 @@
+---
+title: Module Manager
+---
+
+
+**Synopsis**
+Cosmos SDK modules need to implement the [`AppModule` interfaces](#application-module-interfaces) in order to be managed by the application's [module manager](#module-manager). The module manager plays an important role in [`message` and `query` routing](/docs/sdk/v0.47/learn/advanced/baseapp#msg-service-router), and allows application developers to set the order of execution of a variety of functions like [`BeginBlocker` and `EndBlocker`](/docs/sdk/v0.47/learn/beginner/overview-app#beginblocker-and-endblocker).
+
+
+
+
+### Pre-requisite Readings
+
+* [Introduction to Cosmos SDK Modules](/docs/sdk/v0.47/documentation/module-system/intro)
+
+
+
+## Application Module Interfaces
+
+Application module interfaces exist to facilitate the composition of modules together to form a functional Cosmos SDK application.
+There are 4 main application module interfaces:
+
+* [`AppModuleBasic`](#appmodulebasic) for independent module functionalities.
+* [`AppModule`](#appmodule) for inter-dependent module functionalities (except genesis-related functionalities).
+* [`AppModuleGenesis`](#appmodulegenesis) for inter-dependent genesis-related module functionalities.
+* `GenesisOnlyAppModule`: Defines an `AppModule` that only has import/export functionality.
+
+The above interfaces mostly embed smaller interfaces (extension interfaces) that define specific functionalities:
+
+* `HasName`: Allows the module to provide its own name for legacy purposes.
+* [`HasGenesisBasics`](#hasgenesisbasics): The legacy interface for stateless genesis methods.
+* [`HasGenesis`](#hasgenesis): The extension interface for stateful genesis methods.
+* [`HasInvariants`](#hasinvariants): The extension interface for registering invariants.
+* [`HasServices`](#hasservices): The extension interface for modules to register services.
+* [`HasConsensusVersion`](#hasconsensusversion): The extension interface for declaring a module consensus version.
+* [`BeginBlockAppModule`](#beginblockappmodule): The extension interface that contains information about the `AppModule` and `BeginBlock`.
+* [`EndBlockAppModule`](#endblockappmodule): The extension interface that contains information about the `AppModule` and `EndBlock`.
+* [`HasPrecommit`](#hasprecommit): The extension interface that contains information about the `AppModule` and `Precommit`.
+* [`HasPrepareCheckState`](#haspreparecheckstate): The extension interface that contains information about the `AppModule` and `PrepareCheckState`.
+
+The `AppModuleBasic` interface exists to define the independent methods of the module, i.e. those that do not depend on other modules in the application. This allows for the construction of the basic application structure early in the application definition, generally in the `init()` function of the [main application file](/docs/sdk/v0.47/learn/beginner/overview-app#core-application-file).
+
+The `AppModule` interface exists to define inter-dependent module methods. Many modules need to interact with other modules, typically through [`keeper`s](/docs/sdk/v0.47/documentation/module-system/keeper), so an interface is needed through which modules can list their `keeper`s and other methods that require a reference to another module's object. `AppModule` interface extensions, such as `BeginBlockAppModule` and `EndBlockAppModule`, also enable the module manager to set the order of execution of module methods like `BeginBlock` and `EndBlock`, which is important when the order of execution between modules matters to the application.
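
The type-assertion mechanics behind this design can be sketched with toy types (these are illustrative stand-ins, not the real SDK signatures):

```go
package main

import "fmt"

// AppModule is a toy stand-in for the SDK's base module interface;
// the real definitions live in github.com/cosmos/cosmos-sdk/types/module.
type AppModule interface {
	Name() string
}

// BeginBlockAppModule is an extension interface: only modules that
// actually need BeginBlock implement it.
type BeginBlockAppModule interface {
	AppModule
	BeginBlock()
}

type authModule struct{}

func (authModule) Name() string { return "auth" }

type mintModule struct{}

func (mintModule) Name() string { return "mint" }
func (mintModule) BeginBlock()  {}

// runBeginBlockers mimics how the module manager discovers optional
// capabilities with a type assertion, so modules without BeginBlock
// need no placeholder method.
func runBeginBlockers(modules []AppModule) []string {
	var logs []string
	for _, m := range modules {
		if _, ok := m.(BeginBlockAppModule); ok {
			logs = append(logs, m.Name()+": BeginBlock")
		} else {
			logs = append(logs, m.Name()+": no BeginBlock")
		}
	}
	return logs
}

func main() {
	for _, l := range runBeginBlockers([]AppModule{authModule{}, mintModule{}}) {
		fmt.Println(l)
	}
}
```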
+ +The usage of extension interfaces allows modules to define only the functionalities they need. For example, a module that does not need an `EndBlock` does not need to define the `EndBlockAppModule` interface and thus the `EndBlock` method. `AppModule` and `AppModuleGenesis` are voluntarily small interfaces, that can take advantage of the `Module` patterns without having to define many placeholder functions. + +### `AppModuleBasic` + +The `AppModuleBasic` interface defines the independent methods modules need to implement. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx 
commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. 
To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
func (m *Manager) SetOrderMigrations(moduleNames ...string) {
    m.assertNoForgottenModules("SetOrderMigrations", moduleNames)
    m.OrderMigrations = moduleNames
}

// RegisterInvariants registers all module invariants
func (m *Manager) RegisterInvariants(ir sdk.InvariantRegistry) {
    for _, module := range m.Modules {
        if module, ok := module.(HasInvariants); ok {
            module.RegisterInvariants(ir)
        }
    }
}

// RegisterServices registers all module services
func (m *Manager) RegisterServices(cfg Configurator) {
    for _, module := range m.Modules {
        if module, ok := module.(HasServices); ok {
            module.RegisterServices(cfg)
        }
    }
}

// InitGenesis performs init genesis functionality for modules. Exactly one
// module must return a non-empty validator set update to correctly initialize
// the chain.
func (m *Manager) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) abci.ResponseInitChain {
    var validatorUpdates []abci.ValidatorUpdate
    ctx.Logger().Info("initializing blockchain state from genesis.json")
    for _, moduleName := range m.OrderInitGenesis {
        if genesisData[moduleName] == nil {
            continue
        }
        if module, ok := m.Modules[moduleName].(HasGenesis); ok {
            ctx.Logger().Debug("running initialization for module", "module", moduleName)
            moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName])

            // use these validator updates if provided, the module manager assumes
            // only one module will update the validator set
            if len(moduleValUpdates) > 0 {
                if len(validatorUpdates) > 0 {
                    panic("validator InitGenesis updates already set by a previous module")
                }
                validatorUpdates = moduleValUpdates
            }
        }
    }

    // a chain must initialize with a non-empty validator set
    if len(validatorUpdates) == 0 {
        panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction))
    }

    return abci.ResponseInitChain{
        Validators: validatorUpdates,
    }
}

// ExportGenesis performs export genesis functionality for modules
func (m *Manager) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) map[string]json.RawMessage {
    return m.ExportGenesisForModules(ctx, cdc, []string{})
}

// ExportGenesisForModules performs export genesis functionality for modules
func (m *Manager) ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) map[string]json.RawMessage {
    if len(modulesToExport) == 0 {
        modulesToExport = m.OrderExportGenesis
    }

    // verify modules exist in the app, so that we don't panic in the middle of an export
    if err := m.checkModulesExists(modulesToExport); err != nil {
        panic(err)
    }
    channels := make(map[string]chan json.RawMessage)
    for _, moduleName := range modulesToExport {
        if module, ok := m.Modules[moduleName].(HasGenesis); ok {
            channels[moduleName] = make(chan json.RawMessage)
            go func(module HasGenesis, ch chan json.RawMessage) {
                ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) // avoid race conditions
                ch <- module.ExportGenesis(ctx, cdc)
            }(module, channels[moduleName])
        }
    }
    genesisData := make(map[string]json.RawMessage)
    for moduleName := range channels {
        genesisData[moduleName] = <-channels[moduleName]
    }

    return genesisData
}

// checkModulesExists verifies that all modules in the list exist in the app
func (m *Manager) checkModulesExists(moduleName []string) error {
    for _, name := range moduleName {
        if _, ok := m.Modules[name]; !ok {
            return fmt.Errorf("module %s does not exist", name)
        }
    }

    return nil
}

// assertNoForgottenModules checks that we didn't forget any modules in the
// SetOrder* functions.
func (m *Manager) assertNoForgottenModules(setOrderFnName string, moduleNames []string) {
    ms := make(map[string]bool)
    for _, m := range moduleNames {
        ms[m] = true
    }
    var missing []string
    for m := range m.Modules {
        if !ms[m] {
            missing = append(missing, m)
        }
    }
    if len(missing) != 0 {
        sort.Strings(missing)
        panic(fmt.Sprintf(
            "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing))
    }
}

// MigrationHandler is the migration function that each module registers.
type MigrationHandler func(sdk.Context) error

// VersionMap is a map of moduleName -> version
type VersionMap map[string]uint64

// RunMigrations performs in-place store migrations for all modules. This
// function MUST be called inside an x/upgrade UpgradeHandler.
//
// Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
// x/upgrade's store, and the function needs to return the target VersionMap
// that will in turn be persisted to the x/upgrade's store. In general,
// returning RunMigrations should be enough:
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Internally, RunMigrations will perform the following steps:
//   - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
//   - make a diff of `fromVM` and `updatedVM`, and for each module:
//     - if the module's `fromVM` version is less than its `updatedVM` version,
//       then run in-place store migrations for that module between those versions.
//     - if the module does not exist in the `fromVM` (which means that it's a new module,
//       because it was not in the previous x/upgrade's store), then run
//       `InitGenesis` on that module.
//   - return the `updatedVM` to be persisted in the x/upgrade's store.
//
// Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
// by the `DefaultMigrationsOrder` function.
//
// As an app developer, if you wish to skip running InitGenesis for your new
// module "foo", you need to manually pass a `fromVM` argument to this function with
// foo's module version set to its latest ConsensusVersion. That way, the diff
// between the function's `fromVM` and `updatedVM` will be empty, hence not
// running anything for foo.
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    // Assume "foo" is a new module.
//	    // `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist
//	    // before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
//	    // run InitGenesis on foo.
//	    // To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
//	    // consensus version:
//	    fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
//
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Please also refer to docs/core/upgrade.md for more information.
func (m Manager) RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
    c, ok := cfg.(configurator)
    if !ok {
        return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{}, cfg)
    }
    modules := m.OrderMigrations
    if modules == nil {
        modules = DefaultMigrationsOrder(m.ModuleNames())
    }
    updatedVM := VersionMap{}
    for _, moduleName := range modules {
        module := m.Modules[moduleName]
        fromVersion, exists := fromVM[moduleName]
        toVersion := uint64(0)
        if module, ok := module.(HasConsensusVersion); ok {
            toVersion = module.ConsensusVersion()
        }

        // We run migration if the module is specified in `fromVM`.
        // Otherwise we run InitGenesis.
        //
        // The module won't exist in the fromVM in two cases:
        // 1. A new module is added. In this case we run InitGenesis with an
        //    empty genesis state.
        // 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
        //    In this case, all modules have yet to be added to x/upgrade's VersionMap store.
        if exists {
            err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion)
            if err != nil {
                return nil, err
            }
        } else {
            ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
            if module, ok := m.Modules[moduleName].(HasGenesis); ok {
                moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc))
                // The module manager assumes only one module will update the
                // validator set, and it can't be a new module.
                if len(moduleValUpdates) > 0 {
                    return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
                }
            }
        }

        updatedVM[moduleName] = toVersion
    }

    return updatedVM, nil
}

// BeginBlock performs begin block functionality for all modules. It creates a
// child context with an event manager to aggregate events emitted from all
// modules.
func (m *Manager) BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) abci.ResponseBeginBlock {
    ctx = ctx.WithEventManager(sdk.NewEventManager())
    for _, moduleName := range m.OrderBeginBlockers {
        module, ok := m.Modules[moduleName].(BeginBlockAppModule)
        if ok {
            module.BeginBlock(ctx, req)
        }
    }

    return abci.ResponseBeginBlock{
        Events: ctx.EventManager().ABCIEvents(),
    }
}

// EndBlock performs end block functionality for all modules. It creates a
// child context with an event manager to aggregate events emitted from all
// modules.
func (m *Manager) EndBlock(ctx sdk.Context, req abci.RequestEndBlock) abci.ResponseEndBlock {
    ctx = ctx.WithEventManager(sdk.NewEventManager())
    validatorUpdates := []abci.ValidatorUpdate{}
    for _, moduleName := range m.OrderEndBlockers {
        module, ok := m.Modules[moduleName].(EndBlockAppModule)
        if !ok {
            continue
        }
        moduleValUpdates := module.EndBlock(ctx, req)

        // use these validator updates if provided, the module manager assumes
        // only one module will update the validator set
        if len(moduleValUpdates) > 0 {
            if len(validatorUpdates) > 0 {
                panic("validator EndBlock updates already set by a previous module")
            }
            validatorUpdates = moduleValUpdates
        }
    }

    return abci.ResponseEndBlock{
        ValidatorUpdates: validatorUpdates,
        Events:           ctx.EventManager().ABCIEvents(),
    }
}

// GetVersionMap gets consensus version from all modules
func (m *Manager) GetVersionMap() VersionMap {
    vermap := make(VersionMap)
    for name, v := range m.Modules {
        version := uint64(0)
        if v, ok := v.(HasConsensusVersion); ok {
            version = v.ConsensusVersion()
        }
        name := name
        vermap[name] = version
    }

    return vermap
}

// ModuleNames returns a list of all module names, without any particular order.
func (m *Manager) ModuleNames() []string {
    return maps.Keys(m.Modules)
}

// DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
// except x/auth which will run last, see:
// https://github.com/cosmos/cosmos-sdk/issues/10591
func DefaultMigrationsOrder(modules []string) []string {
    const authName = "auth"
    out := make([]string, 0, len(modules))
    hasAuth := false
    for _, m := range modules {
        if m == authName {
            hasAuth = true
        } else {
            out = append(out, m)
        }
    }
    sort.Strings(out)
    if hasAuth {
        out = append(out, authName)
    }

    return out
}
```

Let us go through the methods:

* `RegisterLegacyAminoCodec(*codec.LegacyAmino)`: Registers the `amino` codec for the module, which is used to marshal and unmarshal structs to/from `[]byte` in order to persist them in the module's `KVStore`.
* `RegisterInterfaces(codectypes.InterfaceRegistry)`: Registers a module's interface types and their concrete implementations as `proto.Message`.
* `RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)`: Registers gRPC routes for the module.
* `GetTxCmd()`: Returns the root [`Tx` command](/docs/sdk/v0.47/documentation/module-system/module-interfaces#transaction-commands) for the module. The subcommands of this root command are used by end-users to generate new transactions containing [`message`s](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#queries) defined in the module.
* `GetQueryCmd()`: Returns the root [`query` command](/docs/sdk/v0.47/documentation/module-system/module-interfaces#query-commands) for the module. The subcommands of this root command are used by end-users to generate new queries to the subset of the state defined by the module.

All the `AppModuleBasic` of an application are managed by the [`BasicManager`](#basicmanager).

### `HasName`

```go expandable
/*
Package module contains application module patterns and associated "manager" functionality.
The module pattern has been broken down by:
  - independent module functionality (AppModuleBasic)
  - inter-dependent module genesis functionality (AppModuleGenesis)
  - inter-dependent module simulation functionality (AppModuleSimulation)
  - inter-dependent module full functionality (AppModule)

inter-dependent module functionality is module functionality which somehow
depends on other modules, typically through the module keeper. Many of the
module keepers are dependent on each other, thus in order to access the full
set of module functionality we need to define all the keepers/params-store/keys
etc. This full set of advanced functionality is defined by the AppModule interface.

Independent module functions are separated to allow for the construction of the
basic application structures required early on in the application definition
and used to enable the definition of full module functionality later in the
process. This separation is necessary, however we still want to allow for a
high level pattern for modules to follow - for instance, such that we don't
have to manually register all of the codecs for all the modules. This basic
procedure as well as other basic patterns are handled through the use of
BasicManager.

Lastly the interface for genesis functionality (AppModuleGenesis) has been
separated out from full module functionality (AppModule) so that modules which
are only used for genesis can take advantage of the Module patterns without
needlessly defining many placeholder functions
*/
package module

import (
    "encoding/json"
    "fmt"
    "sort"

    "cosmossdk.io/core/appmodule"
    "github.com/grpc-ecosystem/grpc-gateway/runtime"
    "github.com/spf13/cobra"
    abci "github.com/tendermint/tendermint/abci/types"
    "golang.org/x/exp/maps"

    "github.com/cosmos/cosmos-sdk/client"
    "github.com/cosmos/cosmos-sdk/codec"
    codectypes "github.com/cosmos/cosmos-sdk/codec/types"
    sdk "github.com/cosmos/cosmos-sdk/types"
    sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

// AppModuleBasic is the standard form for basic non-dependent elements of an application module.
type AppModuleBasic interface {
    HasName
    RegisterLegacyAminoCodec(*codec.LegacyAmino)
    RegisterInterfaces(codectypes.InterfaceRegistry)

    // client functionality
    RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)
    GetTxCmd() *cobra.Command
    GetQueryCmd() *cobra.Command
}

// HasName allows the module to provide its own name for legacy purposes.
// Newer apps should specify the name for their modules using a map
// using NewManagerFromMap.
type HasName interface {
    Name() string
}

// HasGenesisBasics is the legacy interface for stateless genesis methods.
type HasGenesisBasics interface {
    DefaultGenesis(codec.JSONCodec) json.RawMessage
    ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) error
}

// BasicManager is a collection of AppModuleBasic
type BasicManager map[string]AppModuleBasic

// NewBasicManager creates a new BasicManager object
func NewBasicManager(modules ...AppModuleBasic) BasicManager {
    moduleMap := make(map[string]AppModuleBasic)
    for _, module := range modules {
        moduleMap[module.Name()] = module
    }

    return moduleMap
}

// RegisterLegacyAminoCodec registers all module codecs
func (bm BasicManager) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) {
    for _, b := range bm {
        b.RegisterLegacyAminoCodec(cdc)
    }
}

// RegisterInterfaces registers all module interface types
func (bm BasicManager) RegisterInterfaces(registry codectypes.InterfaceRegistry) {
    for _, m := range bm {
        m.RegisterInterfaces(registry)
    }
}

// DefaultGenesis provides default genesis information for all modules
func (bm BasicManager) DefaultGenesis(cdc codec.JSONCodec) map[string]json.RawMessage {
    genesis := make(map[string]json.RawMessage)
    for _, b := range bm {
        if mod, ok := b.(HasGenesisBasics); ok {
            genesis[b.Name()] = mod.DefaultGenesis(cdc)
        }
    }

    return genesis
}

// ValidateGenesis performs genesis state validation for all modules
func (bm BasicManager) ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) error {
    for _, b := range bm {
        if mod, ok := b.(HasGenesisBasics); ok {
            if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil {
                return err
            }
        }
    }

    return nil
}

// RegisterGRPCGatewayRoutes registers all module rest routes
func (bm BasicManager) RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) {
    for _, b := range bm {
        b.RegisterGRPCGatewayRoutes(clientCtx, rtr)
    }
}

// AddTxCommands adds all tx commands to the rootTxCmd.
//
// TODO: Remove clientCtx argument.
// REF: https://github.com/cosmos/cosmos-sdk/issues/6571
func (bm BasicManager) AddTxCommands(rootTxCmd *cobra.Command) {
    for _, b := range bm {
        if cmd := b.GetTxCmd(); cmd != nil {
            rootTxCmd.AddCommand(cmd)
        }
    }
}

// AddQueryCommands adds all query commands to the rootQueryCmd.
//
// TODO: Remove clientCtx argument.
// REF: https://github.com/cosmos/cosmos-sdk/issues/6571
func (bm BasicManager) AddQueryCommands(rootQueryCmd *cobra.Command) {
    for _, b := range bm {
        if cmd := b.GetQueryCmd(); cmd != nil {
            rootQueryCmd.AddCommand(cmd)
        }
    }
}

// AppModuleGenesis is the standard form for an application module genesis functions
type AppModuleGenesis interface {
    AppModuleBasic
    HasGenesis
}

// HasGenesis is the extension interface for stateful genesis methods.
type HasGenesis interface {
    HasGenesisBasics
    InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
    ExportGenesis(sdk.Context, codec.JSONCodec) json.RawMessage
}

// AppModule is the form for an application module. Most of
// its functionality has been moved to extension interfaces.
type AppModule interface {
    AppModuleBasic
}

// HasInvariants is the interface for registering invariants.
type HasInvariants interface {
    // RegisterInvariants registers module invariants.
    RegisterInvariants(sdk.InvariantRegistry)
}

// HasServices is the interface for modules to register services.
type HasServices interface {
    // RegisterServices allows a module to register services.
    RegisterServices(Configurator)
}

// HasConsensusVersion is the interface for declaring a module consensus version.
type HasConsensusVersion interface {
    // ConsensusVersion is a sequence number for state-breaking change of the
    // module. It should be incremented on each consensus-breaking change
    // introduced by the module. To avoid wrong/empty versions, the initial version
    // should be set to 1.
    ConsensusVersion() uint64
}

// BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock.
type BeginBlockAppModule interface {
    AppModule
    BeginBlock(sdk.Context, abci.RequestBeginBlock)
}

// EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock.
type EndBlockAppModule interface {
    AppModule
    EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate
}

// GenesisOnlyAppModule is an AppModule that only has import/export functionality
type GenesisOnlyAppModule struct {
    AppModuleGenesis
}

// NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object
func NewGenesisOnlyAppModule(amg AppModuleGenesis) GenesisOnlyAppModule {
    return GenesisOnlyAppModule{
        AppModuleGenesis: amg,
    }
}

// IsOnePerModuleType implements the depinject.OnePerModuleType interface.
func (GenesisOnlyAppModule) IsOnePerModuleType() {}

// IsAppModule implements the appmodule.AppModule interface.
func (GenesisOnlyAppModule) IsAppModule() {}

// RegisterInvariants is a placeholder function that registers no invariants
func (GenesisOnlyAppModule) RegisterInvariants(_ sdk.InvariantRegistry) {}

// QuerierRoute returns an empty module querier route
func (GenesisOnlyAppModule) QuerierRoute() string { return "" }

// RegisterServices registers all services.
func (gam GenesisOnlyAppModule) RegisterServices(Configurator) {}

// ConsensusVersion implements AppModule/ConsensusVersion.
func (gam GenesisOnlyAppModule) ConsensusVersion() uint64 { return 1 }

// BeginBlock returns an empty module begin-block
func (gam GenesisOnlyAppModule) BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) {}

// EndBlock returns an empty module end-block
func (GenesisOnlyAppModule) EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate {
    return []abci.ValidatorUpdate{}
}

// Manager defines a module manager that provides the high level utility for managing and executing
// operations for a group of modules
type Manager struct {
    Modules            map[string]interface{} // interface{} is used now to support the legacy AppModule as well as new core appmodule.AppModule.
    OrderInitGenesis   []string
    OrderExportGenesis []string
    OrderBeginBlockers []string
    OrderEndBlockers   []string
    OrderMigrations    []string
}

// NewManager creates a new Manager object.
func NewManager(modules ...AppModule) *Manager {
    moduleMap := make(map[string]interface{})
    modulesStr := make([]string, 0, len(modules))
    for _, module := range modules {
        moduleMap[module.Name()] = module
        modulesStr = append(modulesStr, module.Name())
    }

    return &Manager{
        Modules:            moduleMap,
        OrderInitGenesis:   modulesStr,
        OrderExportGenesis: modulesStr,
        OrderBeginBlockers: modulesStr,
        OrderEndBlockers:   modulesStr,
    }
}

// NewManagerFromMap creates a new Manager object from a map of module names to module implementations.
// This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API.
func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager {
    simpleModuleMap := make(map[string]interface{})
    modulesStr := make([]string, 0, len(simpleModuleMap))
    for name, module := range moduleMap {
        simpleModuleMap[name] = module
        modulesStr = append(modulesStr, name)
    }

    return &Manager{
        Modules:            simpleModuleMap,
        OrderInitGenesis:   modulesStr,
        OrderExportGenesis: modulesStr,
        OrderBeginBlockers: modulesStr,
        OrderEndBlockers:   modulesStr,
    }
}

// SetOrderInitGenesis sets the order of init genesis calls
func (m *Manager) SetOrderInitGenesis(moduleNames ...string) {
    m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames)
    m.OrderInitGenesis = moduleNames
}

// SetOrderExportGenesis sets the order of export genesis calls
func (m *Manager) SetOrderExportGenesis(moduleNames ...string) {
    m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames)
    m.OrderExportGenesis = moduleNames
}

// SetOrderBeginBlockers sets the order of set begin-blocker calls
func (m *Manager) SetOrderBeginBlockers(moduleNames ...string) {
    m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames)
    m.OrderBeginBlockers = moduleNames
}

// SetOrderEndBlockers sets the order of set end-blocker calls
func (m *Manager) SetOrderEndBlockers(moduleNames ...string) {
    m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames)
    m.OrderEndBlockers = moduleNames
}

// SetOrderMigrations sets the order of migrations to be run. If not set
// then migrations will be run with an order defined in `DefaultMigrationsOrder`.
func (m *Manager) SetOrderMigrations(moduleNames ...string) {
    m.assertNoForgottenModules("SetOrderMigrations", moduleNames)
    m.OrderMigrations = moduleNames
}

// RegisterInvariants registers all module invariants
func (m *Manager) RegisterInvariants(ir sdk.InvariantRegistry) {
    for _, module := range m.Modules {
        if module, ok := module.(HasInvariants); ok {
            module.RegisterInvariants(ir)
        }
    }
}

// RegisterServices registers all module services
func (m *Manager) RegisterServices(cfg Configurator) {
    for _, module := range m.Modules {
        if module, ok := module.(HasServices); ok {
            module.RegisterServices(cfg)
        }
    }
}

// InitGenesis performs init genesis functionality for modules. Exactly one
// module must return a non-empty validator set update to correctly initialize
// the chain.
func (m *Manager) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) abci.ResponseInitChain {
    var validatorUpdates []abci.ValidatorUpdate
    ctx.Logger().Info("initializing blockchain state from genesis.json")
    for _, moduleName := range m.OrderInitGenesis {
        if genesisData[moduleName] == nil {
            continue
        }
        if module, ok := m.Modules[moduleName].(HasGenesis); ok {
            ctx.Logger().Debug("running initialization for module", "module", moduleName)
            moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName])

            // use these validator updates if provided, the module manager assumes
            // only one module will update the validator set
            if len(moduleValUpdates) > 0 {
                if len(validatorUpdates) > 0 {
                    panic("validator InitGenesis updates already set by a previous module")
                }
                validatorUpdates = moduleValUpdates
            }
        }
    }

    // a chain must initialize with a non-empty validator set
    if len(validatorUpdates) == 0 {
        panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction))
    }

    return abci.ResponseInitChain{
        Validators: validatorUpdates,
    }
}

// ExportGenesis performs export genesis functionality for modules
func (m *Manager) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) map[string]json.RawMessage {
    return m.ExportGenesisForModules(ctx, cdc, []string{})
}

// ExportGenesisForModules performs export genesis functionality for modules
func (m *Manager) ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) map[string]json.RawMessage {
    if len(modulesToExport) == 0 {
        modulesToExport = m.OrderExportGenesis
    }

    // verify modules exist in the app, so that we don't panic in the middle of an export
    if err := m.checkModulesExists(modulesToExport); err != nil {
        panic(err)
    }
    channels := make(map[string]chan json.RawMessage)
    for _, moduleName := range modulesToExport {
        if module, ok := m.Modules[moduleName].(HasGenesis); ok {
            channels[moduleName] = make(chan json.RawMessage)
            go func(module HasGenesis, ch chan json.RawMessage) {
                ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) // avoid race conditions
                ch <- module.ExportGenesis(ctx, cdc)
            }(module, channels[moduleName])
        }
    }
    genesisData := make(map[string]json.RawMessage)
    for moduleName := range channels {
        genesisData[moduleName] = <-channels[moduleName]
    }

    return genesisData
}

// checkModulesExists verifies that all modules in the list exist in the app
func (m *Manager) checkModulesExists(moduleName []string) error {
    for _, name := range moduleName {
        if _, ok := m.Modules[name]; !ok {
            return fmt.Errorf("module %s does not exist", name)
        }
    }

    return nil
}

// assertNoForgottenModules checks that we didn't forget any modules in the
// SetOrder* functions.
func (m *Manager) assertNoForgottenModules(setOrderFnName string, moduleNames []string) {
    ms := make(map[string]bool)
    for _, m := range moduleNames {
        ms[m] = true
    }
    var missing []string
    for m := range m.Modules {
        if !ms[m] {
            missing = append(missing, m)
        }
    }
    if len(missing) != 0 {
        sort.Strings(missing)
        panic(fmt.Sprintf(
            "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing))
    }
}

// MigrationHandler is the migration function that each module registers.
type MigrationHandler func(sdk.Context) error

// VersionMap is a map of moduleName -> version
type VersionMap map[string]uint64

// RunMigrations performs in-place store migrations for all modules. This
// function MUST be called inside an x/upgrade UpgradeHandler.
//
// Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
// x/upgrade's store, and the function needs to return the target VersionMap
// that will in turn be persisted to the x/upgrade's store. In general,
// returning RunMigrations should be enough:
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Internally, RunMigrations will perform the following steps:
//   - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
//   - make a diff of `fromVM` and `updatedVM`, and for each module:
//     - if the module's `fromVM` version is less than its `updatedVM` version,
//       then run in-place store migrations for that module between those versions.
//     - if the module does not exist in the `fromVM` (which means that it's a new module,
//       because it was not in the previous x/upgrade's store), then run
//       `InitGenesis` on that module.
//   - return the `updatedVM` to be persisted in the x/upgrade's store.
//
// Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
// by the `DefaultMigrationsOrder` function.
//
// As an app developer, if you wish to skip running InitGenesis for your new
// module "foo", you need to manually pass a `fromVM` argument to this function with
// foo's module version set to its latest ConsensusVersion. That way, the diff
// between the function's `fromVM` and `updatedVM` will be empty, hence not
// running anything for foo.
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    // Assume "foo" is a new module.
//	    // `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist
//	    // before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
//	    // run InitGenesis on foo.
//	    // To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
//	    // consensus version:
//	    fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
//
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Please also refer to docs/core/upgrade.md for more information.
func (m Manager) RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
    c, ok := cfg.(configurator)
    if !ok {
        return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{}, cfg)
    }
    modules := m.OrderMigrations
    if modules == nil {
        modules = DefaultMigrationsOrder(m.ModuleNames())
    }
    updatedVM := VersionMap{}
    for _, moduleName := range modules {
        module := m.Modules[moduleName]
        fromVersion, exists := fromVM[moduleName]
        toVersion := uint64(0)
        if module, ok := module.(HasConsensusVersion); ok {
            toVersion = module.ConsensusVersion()
        }

        // We run migration if the module is specified in `fromVM`.
        // Otherwise we run InitGenesis.
        //
        // The module won't exist in the fromVM in two cases:
        // 1. A new module is added. In this case we run InitGenesis with an
        //    empty genesis state.
        // 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
        //    In this case, all modules have yet to be added to x/upgrade's VersionMap store.
        if exists {
            err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion)
            if err != nil {
                return nil, err
            }
        } else {
            ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
            if module, ok := m.Modules[moduleName].(HasGenesis); ok {
                moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc))
                // The module manager assumes only one module will update the
                // validator set, and it can't be a new module.
                if len(moduleValUpdates) > 0 {
                    return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
                }
            }
        }

        updatedVM[moduleName] = toVersion
    }

    return updatedVM, nil
}

// BeginBlock performs begin block functionality for all modules. It creates a
// child context with an event manager to aggregate events emitted from all
// modules.
func (m *Manager) BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) abci.ResponseBeginBlock {
    ctx = ctx.WithEventManager(sdk.NewEventManager())
    for _, moduleName := range m.OrderBeginBlockers {
        module, ok := m.Modules[moduleName].(BeginBlockAppModule)
        if ok {
            module.BeginBlock(ctx, req)
        }
    }

    return abci.ResponseBeginBlock{
        Events: ctx.EventManager().ABCIEvents(),
    }
}

// EndBlock performs end block functionality for all modules. It creates a
// child context with an event manager to aggregate events emitted from all
// modules.
+func (m *Manager) EndBlock(ctx sdk.Context, req abci.RequestEndBlock) abci.ResponseEndBlock {
+	ctx = ctx.WithEventManager(sdk.NewEventManager())
+	validatorUpdates := []abci.ValidatorUpdate{}
+	for _, moduleName := range m.OrderEndBlockers {
+		module, ok := m.Modules[moduleName].(EndBlockAppModule)
+		if !ok {
+			continue
+		}
+		moduleValUpdates := module.EndBlock(ctx, req)
+
+		// use these validator updates if provided, the module manager assumes
+		// only one module will update the validator set
+		if len(moduleValUpdates) > 0 {
+			if len(validatorUpdates) > 0 {
+				panic("validator EndBlock updates already set by a previous module")
+			}
+			validatorUpdates = moduleValUpdates
+		}
+	}
+
+	return abci.ResponseEndBlock{
+		ValidatorUpdates: validatorUpdates,
+		Events:           ctx.EventManager().ABCIEvents(),
+	}
+}
+
+// GetVersionMap gets consensus version from all modules
+func (m *Manager) GetVersionMap() VersionMap {
+	vermap := make(VersionMap)
+	for name, v := range m.Modules {
+		version := uint64(0)
+		if v, ok := v.(HasConsensusVersion); ok {
+			version = v.ConsensusVersion()
+		}
+		name := name
+		vermap[name] = version
+	}
+
+	return vermap
+}
+
+// ModuleNames returns list of all module names, without any particular order.
+func (m *Manager) ModuleNames() []string {
+	return maps.Keys(m.Modules)
+}
+
+// DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+// except x/auth which will run last, see:
+// https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+	const authName = "auth"
+	out := make([]string, 0, len(modules))
+	hasAuth := false
+	for _, m := range modules {
+		if m == authName {
+			hasAuth = true
+		} else {
+			out = append(out, m)
+		}
+	}
+	sort.Strings(out)
+	if hasAuth {
+		out = append(out, authName)
+	}
+	return out
+}
+```
+
+* `HasName` is an interface with a single method, `Name()`, which returns the name of the module as a `string`.
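+To make the `HasName` contract concrete, here is a minimal, standalone sketch. The interface is redeclared locally rather than imported from the SDK, and `BankModule` is a hypothetical type used purely for illustration:
+
+```go
+package main
+
+import "fmt"
+
+// HasName mirrors the single-method interface described above.
+type HasName interface {
+	Name() string
+}
+
+// BankModule is a hypothetical module type used only for this illustration.
+type BankModule struct{}
+
+// Name returns the module's name, which the Manager uses as its map key.
+func (BankModule) Name() string { return "bank" }
+
+func main() {
+	var m HasName = BankModule{}
+	fmt.Println(m.Name()) // bank
+}
+```
+
+In the real SDK, this name is what `NewManager` uses to key its module map; with `NewManagerFromMap` the key is supplied explicitly instead, which is why `HasName` is described as a legacy mechanism.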
+ +### `HasGenesisBasics` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+
+Lastly the interface for genesis functionality (AppModuleGenesis) has been
+separated out from full module functionality (AppModule) so that modules which
+are only used for genesis can take advantage of the Module patterns without
+needlessly defining many placeholder functions.
+*/
+package module
+
+import (
+	"encoding/json"
+	"fmt"
+	"sort"
+
+	"cosmossdk.io/core/appmodule"
+	"github.com/grpc-ecosystem/grpc-gateway/runtime"
+	"github.com/spf13/cobra"
+	abci "github.com/tendermint/tendermint/abci/types"
+	"golang.org/x/exp/maps"
+
+	"github.com/cosmos/cosmos-sdk/client"
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+// AppModuleBasic is the standard form for basic non-dependant elements of an application module.
+type AppModuleBasic interface {
+	HasName
+	RegisterLegacyAminoCodec(*codec.LegacyAmino)
+	RegisterInterfaces(codectypes.InterfaceRegistry)
+
+	// client functionality
+	RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)
+	GetTxCmd() *cobra.Command
+	GetQueryCmd() *cobra.Command
+}
+
+// HasName allows the module to provide its own name for legacy purposes.
+// Newer apps should specify the name for their modules using a map
+// using NewManagerFromMap.
+type HasName interface {
+	Name() string
+}
+
+// HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface {
+	DefaultGenesis(codec.JSONCodec) json.RawMessage
+	ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) error
+}
+
+// BasicManager is a collection of AppModuleBasic
+type BasicManager map[string]AppModuleBasic
+
+// NewBasicManager creates a new BasicManager object
+func NewBasicManager(modules ...AppModuleBasic) BasicManager {
+	moduleMap := make(map[string]AppModuleBasic)
+	for _, module := range modules {
+		moduleMap[module.Name()] = module
+	}
+	return moduleMap
+}
+
+// RegisterLegacyAminoCodec registers all module codecs
+func (bm BasicManager) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) {
+	for _, b := range bm {
+		b.RegisterLegacyAminoCodec(cdc)
+	}
+}
+
+// RegisterInterfaces registers all module interface types
+func (bm BasicManager) RegisterInterfaces(registry codectypes.InterfaceRegistry) {
+	for _, m := range bm {
+		m.RegisterInterfaces(registry)
+	}
+}
+
+// DefaultGenesis provides default genesis information for all modules
+func (bm BasicManager) DefaultGenesis(cdc codec.JSONCodec) map[string]json.RawMessage {
+	genesis := make(map[string]json.RawMessage)
+	for _, b := range bm {
+		if mod, ok := b.(HasGenesisBasics); ok {
+			genesis[b.Name()] = mod.DefaultGenesis(cdc)
+		}
+	}
+	return genesis
+}
+
+// ValidateGenesis performs genesis state validation for all modules
+func (bm BasicManager) ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) error {
+	for _, b := range bm {
+		if mod, ok := b.(HasGenesisBasics); ok {
+			if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+// RegisterGRPCGatewayRoutes registers all module rest routes
+func (bm BasicManager) RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) {
+	for _, b := range bm {
+		b.RegisterGRPCGatewayRoutes(clientCtx, rtr)
+	}
+}
+
+// AddTxCommands adds all tx commands to the rootTxCmd.
+//
+// TODO: Remove clientCtx argument.
+// REF: https://github.com/cosmos/cosmos-sdk/issues/6571
+func (bm BasicManager) AddTxCommands(rootTxCmd *cobra.Command) {
+	for _, b := range bm {
+		if cmd := b.GetTxCmd(); cmd != nil {
+			rootTxCmd.AddCommand(cmd)
+		}
+	}
+}
+
+// AddQueryCommands adds all query commands to the rootQueryCmd.
+//
+// TODO: Remove clientCtx argument.
+// REF: https://github.com/cosmos/cosmos-sdk/issues/6571
+func (bm BasicManager) AddQueryCommands(rootQueryCmd *cobra.Command) {
+	for _, b := range bm {
+		if cmd := b.GetQueryCmd(); cmd != nil {
+			rootQueryCmd.AddCommand(cmd)
+		}
+	}
+}
+
+// AppModuleGenesis is the standard form for an application module genesis functions
+type AppModuleGenesis interface {
+	AppModuleBasic
+	HasGenesis
+}
+
+// HasGenesis is the extension interface for stateful genesis methods.
+type HasGenesis interface {
+	HasGenesisBasics
+	InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
+	ExportGenesis(sdk.Context, codec.JSONCodec) json.RawMessage
+}
+
+// AppModule is the form for an application module. Most of
+// its functionality has been moved to extension interfaces.
+type AppModule interface {
+	AppModuleBasic
+}
+
+// HasInvariants is the interface for registering invariants.
+type HasInvariants interface {
+	// RegisterInvariants registers module invariants.
+	RegisterInvariants(sdk.InvariantRegistry)
+}
+
+// HasServices is the interface for modules to register services.
+type HasServices interface {
+	// RegisterServices allows a module to register services.
+	RegisterServices(Configurator)
+}
+
+// HasConsensusVersion is the interface for declaring a module consensus version.
+type HasConsensusVersion interface {
+	// ConsensusVersion is a sequence number for state-breaking change of the
+	// module. It should be incremented on each consensus-breaking change
+	// introduced by the module. To avoid wrong/empty versions, the initial version
+	// should be set to 1.
+	ConsensusVersion() uint64
+}
+
+// BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock.
+type BeginBlockAppModule interface {
+	AppModule
+	BeginBlock(sdk.Context, abci.RequestBeginBlock)
+}
+
+// EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock.
+type EndBlockAppModule interface {
+	AppModule
+	EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate
+}
+
+// GenesisOnlyAppModule is an AppModule that only has import/export functionality
+type GenesisOnlyAppModule struct {
+	AppModuleGenesis
+}
+
+// NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object
+func NewGenesisOnlyAppModule(amg AppModuleGenesis) GenesisOnlyAppModule {
+	return GenesisOnlyAppModule{
+		AppModuleGenesis: amg,
+	}
+}
+
+// IsOnePerModuleType implements the depinject.OnePerModuleType interface.
+func (GenesisOnlyAppModule) IsOnePerModuleType() {}
+
+// IsAppModule implements the appmodule.AppModule interface.
+func (GenesisOnlyAppModule) IsAppModule() {}
+
+// RegisterInvariants is a placeholder function register no invariants
+func (GenesisOnlyAppModule) RegisterInvariants(_ sdk.InvariantRegistry) {}
+
+// QuerierRoute returns an empty module querier route
+func (GenesisOnlyAppModule) QuerierRoute() string {
+	return ""
+}
+
+// RegisterServices registers all services.
+func (gam GenesisOnlyAppModule) RegisterServices(Configurator) {}
+
+// ConsensusVersion implements AppModule/ConsensusVersion.
+func (gam GenesisOnlyAppModule) ConsensusVersion() uint64 {
+	return 1
+}
+
+// BeginBlock returns an empty module begin-block
+func (gam GenesisOnlyAppModule) BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) {}
+
+// EndBlock returns an empty module end-block
+func (GenesisOnlyAppModule) EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate {
+	return []abci.ValidatorUpdate{}
+}
+
+// Manager defines a module manager that provides the high level utility for managing and executing
+// operations for a group of modules
+type Manager struct {
+	Modules            map[string]interface{} // interface{} is used now to support the legacy AppModule as well as new core appmodule.AppModule.
+	OrderInitGenesis   []string
+	OrderExportGenesis []string
+	OrderBeginBlockers []string
+	OrderEndBlockers   []string
+	OrderMigrations    []string
+}
+
+// NewManager creates a new Manager object.
+func NewManager(modules ...AppModule) *Manager {
+	moduleMap := make(map[string]interface{})
+	modulesStr := make([]string, 0, len(modules))
+	for _, module := range modules {
+		moduleMap[module.Name()] = module
+		modulesStr = append(modulesStr, module.Name())
+	}
+	return &Manager{
+		Modules:            moduleMap,
+		OrderInitGenesis:   modulesStr,
+		OrderExportGenesis: modulesStr,
+		OrderBeginBlockers: modulesStr,
+		OrderEndBlockers:   modulesStr,
+	}
+}
+
+// NewManagerFromMap creates a new Manager object from a map of module names to module implementations.
+// This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API.
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager {
+	simpleModuleMap := make(map[string]interface{})
+	modulesStr := make([]string, 0, len(simpleModuleMap))
+	for name, module := range moduleMap {
+		simpleModuleMap[name] = module
+		modulesStr = append(modulesStr, name)
+	}
+	return &Manager{
+		Modules:            simpleModuleMap,
+		OrderInitGenesis:   modulesStr,
+		OrderExportGenesis: modulesStr,
+		OrderBeginBlockers: modulesStr,
+		OrderEndBlockers:   modulesStr,
+	}
+}
+
+// SetOrderInitGenesis sets the order of init genesis calls
+func (m *Manager) SetOrderInitGenesis(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames)
+	m.OrderInitGenesis = moduleNames
+}
+
+// SetOrderExportGenesis sets the order of export genesis calls
+func (m *Manager) SetOrderExportGenesis(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames)
+	m.OrderExportGenesis = moduleNames
+}
+
+// SetOrderBeginBlockers sets the order of set begin-blocker calls
+func (m *Manager) SetOrderBeginBlockers(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames)
+	m.OrderBeginBlockers = moduleNames
+}
+
+// SetOrderEndBlockers sets the order of set end-blocker calls
+func (m *Manager) SetOrderEndBlockers(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames)
+	m.OrderEndBlockers = moduleNames
+}
+
+// SetOrderMigrations sets the order of migrations to be run. If not set
+// then migrations will be run with an order defined in `DefaultMigrationsOrder`.
+func (m *Manager) SetOrderMigrations(moduleNames ...string) {
+	m.assertNoForgottenModules("SetOrderMigrations", moduleNames)
+	m.OrderMigrations = moduleNames
+}
+
+// RegisterInvariants registers all module invariants
+func (m *Manager) RegisterInvariants(ir sdk.InvariantRegistry) {
+	for _, module := range m.Modules {
+		if module, ok := module.(HasInvariants); ok {
+			module.RegisterInvariants(ir)
+		}
+	}
+}
+
+// RegisterServices registers all module services
+func (m *Manager) RegisterServices(cfg Configurator) {
+	for _, module := range m.Modules {
+		if module, ok := module.(HasServices); ok {
+			module.RegisterServices(cfg)
+		}
+	}
+}
+
+// InitGenesis performs init genesis functionality for modules. Exactly one
+// module must return a non-empty validator set update to correctly initialize
+// the chain.
+func (m *Manager) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) abci.ResponseInitChain {
+	var validatorUpdates []abci.ValidatorUpdate
+	ctx.Logger().Info("initializing blockchain state from genesis.json")
+	for _, moduleName := range m.OrderInitGenesis {
+		if genesisData[moduleName] == nil {
+			continue
+		}
+		if module, ok := m.Modules[moduleName].(HasGenesis); ok {
+			ctx.Logger().Debug("running initialization for module", "module", moduleName)
+			moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName])
+
+			// use these validator updates if provided, the module manager assumes
+			// only one module will update the validator set
+			if len(moduleValUpdates) > 0 {
+				if len(validatorUpdates) > 0 {
+					panic("validator InitGenesis updates already set by a previous module")
+				}
+				validatorUpdates = moduleValUpdates
+			}
+		}
+	}
+
+	// a chain must initialize with a non-empty validator set
+	if len(validatorUpdates) == 0 {
+		panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction))
+	}
+
+	return abci.ResponseInitChain{
+		Validators: validatorUpdates,
+	}
+}
+
+// ExportGenesis performs export genesis functionality for modules
+func (m *Manager) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) map[string]json.RawMessage {
+	return m.ExportGenesisForModules(ctx, cdc, []string{})
+}
+
+// ExportGenesisForModules performs export genesis functionality for modules
+func (m *Manager) ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) map[string]json.RawMessage {
+	if len(modulesToExport) == 0 {
+		modulesToExport = m.OrderExportGenesis
+	}
+
+	// verify modules exists in app, so that we don't panic in the middle of an export
+	if err := m.checkModulesExists(modulesToExport); err != nil {
+		panic(err)
+	}
+	channels := make(map[string]chan json.RawMessage)
+	for _, moduleName := range modulesToExport {
+		if module, ok := m.Modules[moduleName].(HasGenesis); ok {
+			channels[moduleName] = make(chan json.RawMessage)
+			go func(module HasGenesis, ch chan json.RawMessage) {
+				ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) // avoid race conditions
+				ch <- module.ExportGenesis(ctx, cdc)
+			}(module, channels[moduleName])
+		}
+	}
+	genesisData := make(map[string]json.RawMessage)
+	for moduleName := range channels {
+		genesisData[moduleName] = <-channels[moduleName]
+	}
+	return genesisData
+}
+
+// checkModulesExists verifies that all modules in the list exist in the app
+func (m *Manager) checkModulesExists(moduleName []string) error {
+	for _, name := range moduleName {
+		if _, ok := m.Modules[name]; !ok {
+			return fmt.Errorf("module %s does not exist", name)
+		}
+	}
+	return nil
+}
+
+// assertNoForgottenModules checks that we didn't forget any modules in the
+// SetOrder* functions.
+func (m *Manager) assertNoForgottenModules(setOrderFnName string, moduleNames []string) {
+	ms := make(map[string]bool)
+	for _, m := range moduleNames {
+		ms[m] = true
+	}
+	var missing []string
+	for m := range m.Modules {
+		if !ms[m] {
+			missing = append(missing, m)
+		}
+	}
+	if len(missing) != 0 {
+		sort.Strings(missing)
+		panic(fmt.Sprintf(
+			"%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing))
+	}
+}
+
+// MigrationHandler is the migration function that each module registers.
+type MigrationHandler func(sdk.Context) error
+
+// VersionMap is a map of moduleName -> version
+type VersionMap map[string]uint64
+
+// RunMigrations performs in-place store migrations for all modules. This
+// function MUST be called inside an x/upgrade UpgradeHandler.
+//
+// Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+// x/upgrade's store, and the function needs to return the target VersionMap
+// that will in turn be persisted to the x/upgrade's store. In general,
+// returning RunMigrations should be enough:
+//
+// Example:
+//
+//	cfg := module.NewConfigurator(...)
+//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
+//	})
+//
+// Internally, RunMigrations will perform the following steps:
+//   - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+//   - make a diff of `fromVM` and `updatedVM`, and for each module:
+//   - if the module's `fromVM` version is less than its `updatedVM` version,
+//     then run in-place store migrations for that module between those versions.
+//   - if the module does not exist in the `fromVM` (which means that it's a new module,
+//     because it was not in the previous x/upgrade's store), then run
+//     `InitGenesis` on that module.
+//   - return the `updatedVM` to be persisted in the x/upgrade's store.
+//
+// Migrations are run in an order defined by `Manager.OrderMigrations` or, if not set,
+// by the `DefaultMigrationsOrder` function.
+//
+// As an app developer, if you wish to skip running InitGenesis for your new
+// module "foo", you need to manually pass a `fromVM` argument to this function
+// with foo's module version set to its latest ConsensusVersion. That way, the diff
+// between the function's `fromVM` and `updatedVM` will be empty, hence not
+// running anything for foo.
+//
+// Example:
+//
+//	cfg := module.NewConfigurator(...)
+//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+//	    // Assume "foo" is a new module.
+//	    // `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist
+//	    // before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
+//	    // run InitGenesis on foo.
+//	    // To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+//	    // consensus version:
+//	    fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
+//
+//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
+//	})
+//
+// Please also refer to docs/core/upgrade.md for more information.
+func (m Manager) RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
+	c, ok := cfg.(configurator)
+	if !ok {
+		return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{}, cfg)
+	}
+	modules := m.OrderMigrations
+	if modules == nil {
+		modules = DefaultMigrationsOrder(m.ModuleNames())
+	}
+	updatedVM := VersionMap{}
+	for _, moduleName := range modules {
+		module := m.Modules[moduleName]
+		fromVersion, exists := fromVM[moduleName]
+		toVersion := uint64(0)
+		if module, ok := module.(HasConsensusVersion); ok {
+			toVersion = module.ConsensusVersion()
+		}
+
+		// We run migration if the module is specified in `fromVM`.
+		// Otherwise we run InitGenesis.
+		//
+		// The module won't exist in the fromVM in two cases:
+		// 1. A new module is added. In this case we run InitGenesis with an
+		//    empty genesis state.
+		// 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
+		//    In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+		if exists {
+			err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion)
+			if err != nil {
+				return nil, err
+			}
+		} else {
+			ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
+			if module, ok := m.Modules[moduleName].(HasGenesis); ok {
+				moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc))
+				// The module manager assumes only one module will update the
+				// validator set, and it can't be a new module.
+				if len(moduleValUpdates) > 0 {
+					return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
+				}
+			}
+		}
+
+		updatedVM[moduleName] = toVersion
+	}
+
+	return updatedVM, nil
+}
+
+// BeginBlock performs begin block functionality for all modules. It creates a
+// child context with an event manager to aggregate events emitted from all
+// modules.
+func (m *Manager) BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) abci.ResponseBeginBlock {
+	ctx = ctx.WithEventManager(sdk.NewEventManager())
+	for _, moduleName := range m.OrderBeginBlockers {
+		module, ok := m.Modules[moduleName].(BeginBlockAppModule)
+		if ok {
+			module.BeginBlock(ctx, req)
+		}
+	}
+
+	return abci.ResponseBeginBlock{
+		Events: ctx.EventManager().ABCIEvents(),
+	}
+}
+
+// EndBlock performs end block functionality for all modules. It creates a
+// child context with an event manager to aggregate events emitted from all
+// modules.
+func (m *Manager) EndBlock(ctx sdk.Context, req abci.RequestEndBlock) abci.ResponseEndBlock {
+	ctx = ctx.WithEventManager(sdk.NewEventManager())
+	validatorUpdates := []abci.ValidatorUpdate{}
+	for _, moduleName := range m.OrderEndBlockers {
+		module, ok := m.Modules[moduleName].(EndBlockAppModule)
+		if !ok {
+			continue
+		}
+		moduleValUpdates := module.EndBlock(ctx, req)
+
+		// use these validator updates if provided, the module manager assumes
+		// only one module will update the validator set
+		if len(moduleValUpdates) > 0 {
+			if len(validatorUpdates) > 0 {
+				panic("validator EndBlock updates already set by a previous module")
+			}
+			validatorUpdates = moduleValUpdates
+		}
+	}
+
+	return abci.ResponseEndBlock{
+		ValidatorUpdates: validatorUpdates,
+		Events:           ctx.EventManager().ABCIEvents(),
+	}
+}
+
+// GetVersionMap gets consensus version from all modules
+func (m *Manager) GetVersionMap() VersionMap {
+	vermap := make(VersionMap)
+	for name, v := range m.Modules {
+		version := uint64(0)
+		if v, ok := v.(HasConsensusVersion); ok {
+			version = v.ConsensusVersion()
+		}
+		name := name
+		vermap[name] = version
+	}
+
+	return vermap
+}
+
+// ModuleNames returns list of all module names, without any particular order.
+func (m *Manager) ModuleNames() []string {
+	return maps.Keys(m.Modules)
+}
+
+// DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+// except x/auth which will run last, see:
+// https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+	const authName = "auth"
+	out := make([]string, 0, len(modules))
+	hasAuth := false
+	for _, m := range modules {
+		if m == authName {
+			hasAuth = true
+		} else {
+			out = append(out, m)
+		}
+	}
+	sort.Strings(out)
+	if hasAuth {
+		out = append(out, authName)
+	}
+	return out
+}
+```
+
+Let us go through the methods:
+
+* `DefaultGenesis(codec.JSONCodec)`: Returns a default [`GenesisState`](/docs/sdk/v0.47/documentation/module-system/genesis) for the module, marshalled to `json.RawMessage`. The default `GenesisState` needs to be defined by the module developer and is primarily used for testing.
+* `ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`: Used to validate the `GenesisState` defined by a module, given in its `json.RawMessage` form. It will usually unmarshal the `json` before running a custom [`ValidateGenesis`](/docs/sdk/v0.47/documentation/module-system/genesis#validategenesis) function defined by the module developer.
+
+### `AppModuleGenesis`
+
+The `AppModuleGenesis` interface is a simple embedding of the `AppModuleBasic` and `HasGenesis` interfaces.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx 
commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. 
To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. 
+/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to docs/core/upgrade.md for more information. +func (m Manager) + +RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(configurator) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. 
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +It does not have its own manager; it exists separately from [`AppModule`](#appmodule) solely so that modules implementing only genesis functionality can be managed without having to implement all of `AppModule`'s methods. + +### `HasGenesis` + +The `HasGenesis` interface is an extension interface of `HasGenesisBasics`. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. 
+ +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. 
+type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr 
*runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. 
It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. 
+/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `udpatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to docs/core/upgrade.md for more information. +func (m Manager) + +RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(configurator) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. 
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) ModuleNames() []string {
+	return maps.Keys(m.Modules)
+}
+
+/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+/ except x/auth which will run last, see:
+/ https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+	const authName = "auth"
+	out := make([]string, 0, len(modules))
+	hasAuth := false
+	for _, m := range modules {
+		if m == authName {
+			hasAuth = true
+		} else {
+			out = append(out, m)
+		}
+	}
+
+	sort.Strings(out)
+	if hasAuth {
+		out = append(out, authName)
+	}
+
+	return out
+}
+```
+
+Let us go through the two added methods:
+
+* `InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage)`: Initializes the subset of the state managed by the module. It is called at genesis (i.e. when the chain is first started).
+* `ExportGenesis(sdk.Context, codec.JSONCodec)`: Exports the latest subset of the state managed by the module to be used in a new genesis file. `ExportGenesis` is called for each module when a new chain is started from the state of an existing chain.
+
+### `AppModule`
+
+The `AppModule` interface defines a module. Modules can declare their functionality by implementing extension interfaces.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+  - independent module functionality (AppModuleBasic)
+  - inter-dependent module genesis functionality (AppModuleGenesis)
+  - inter-dependent module simulation functionality (AppModuleSimulation)
+  - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc.
This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. 
+type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr 
*runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. 
It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager) assertNoForgottenModules(setOrderFnName string, moduleNames []string) {
+	ms := make(map[string]bool)
+	for _, m := range moduleNames {
+		ms[m] = true
+	}
+
+	var missing []string
+	for m := range m.Modules {
+		if !ms[m] {
+			missing = append(missing, m)
+		}
+	}
+	if len(missing) != 0 {
+		sort.Strings(missing)
+		panic(fmt.Sprintf(
+			"%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing))
+	}
+}
+
+/ MigrationHandler is the migration function that each module registers.
+type MigrationHandler func(sdk.Context) error
+
+/ VersionMap is a map of moduleName -> version
+type VersionMap map[string]uint64
+
+/ RunMigrations performs in-place store migrations for all modules. This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/	cfg := module.NewConfigurator(...)
+/	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/		return app.mm.RunMigrations(ctx, cfg, fromVM)
+/	})
+/
+/ Internally, RunMigrations will perform the following steps:
+/   - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+/   - make a diff of `fromVM` and `updatedVM`, and for each module:
+/     - if the module's `fromVM` version is less than its `updatedVM` version,
+/       then run in-place store migrations for that module between those versions.
+/     - if the module does not exist in the `fromVM` (which means that it's a new module,
+/       because it was not in the previous x/upgrade's store), then run
+/       `InitGenesis` on that module.
+/   - return the `updatedVM` to be persisted in the x/upgrade's store.
+/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `udpatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to docs/core/upgrade.md for more information. +func (m Manager) + +RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(configurator) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. 
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +`AppModule`s are managed by the [module manager](#manager), which checks which extension interfaces are implemented by the module. 
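
This detection is done with Go type assertions: the manager iterates over its modules and type-asserts each one against the extension interface it needs, skipping modules that do not satisfy it. A minimal, self-contained sketch of the technique (the interface and module names below are illustrative, not the SDK's actual types):

```go
package main

import "fmt"

// Illustrative extension interfaces, mirroring the pattern the module
// manager uses (the real SDK interfaces embed AppModule and take SDK types).
type hasBeginBlock interface{ BeginBlock() }
type hasEndBlock interface{ EndBlock() }

// mintLike is a toy module that implements only the begin-block extension.
type mintLike struct{}

func (mintLike) BeginBlock() {}

// detectExtensions reports which extension interfaces a module satisfies,
// using the same comma-ok type assertions as Manager.BeginBlock/EndBlock.
func detectExtensions(module interface{}) map[string]bool {
	caps := map[string]bool{}
	_, caps["begin_block"] = module.(hasBeginBlock)
	_, caps["end_block"] = module.(hasEndBlock)
	return caps
}

func main() {
	fmt.Println(detectExtensions(mintLike{}))
}
```

In the real manager, `Manager.BeginBlock` and `Manager.EndBlock` apply exactly this pattern against `BeginBlockAppModule` and `EndBlockAppModule`, so a module that omits an extension interface is simply not called for that hook.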
+
+
+Previously, the `AppModule` interface contained all the methods that are now defined in the extension interfaces. This led to a lot of boilerplate for modules that did not need all of that functionality.
+
+
+### `HasInvariants`
+
+This interface defines one method. It allows the module manager to check whether a module registers invariants.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+  - independent module functionality (AppModuleBasic)
+  - inter-dependent module genesis functionality (AppModuleGenesis)
+  - inter-dependent module simulation functionality (AppModuleSimulation)
+  - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+
+Independent module functions are separated to allow for the construction of the
+basic application structures required early on in the application definition
+and used to enable the definition of full module functionality later in the
+process. This separation is necessary, however we still want to allow for a
+high level pattern for modules to follow - for instance, such that we don't
+have to manually register all of the codecs for all the modules. This basic
+procedure as well as other basic patterns are handled through the use of
+BasicManager.
+ +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx 
commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. 
To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager)
+
+assertNoForgottenModules(setOrderFnName string, moduleNames []string) {
+	ms := make(map[string]bool)
+	for _, m := range moduleNames {
+		ms[m] = true
+}
+
+var missing []string
+	for m := range m.Modules {
+		if !ms[m] {
+			missing = append(missing, m)
+}
+
+}
+	if len(missing) != 0 {
+		sort.Strings(missing)
+
+panic(fmt.Sprintf(
+			"%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing))
+}
+}
+
+/ MigrationHandler is the migration function that each module registers.
+type MigrationHandler func(sdk.Context)
+
+error
+
+/ VersionMap is a map of moduleName -> version
+type VersionMap map[string]uint64
+
+/ RunMigrations performs in-place store migrations for all modules. This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/	cfg := module.NewConfigurator(...)
+/	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/	    return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not
+/ set) by the `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function with
+/ foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/	cfg := module.NewConfigurator(...)
+/	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/	    / Assume "foo" is a new module.
+/	    / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist
+/	    / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
+/	    / run InitGenesis on foo.
+/	    / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+/	    / consensus version:
+/	    fromVM["foo"] = foo.AppModule{
+}.ConsensusVersion()
+/
+/	    return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Please also refer to docs/core/upgrade.md for more information.
+func (m Manager)
+
+RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
+	c, ok := cfg.(configurator)
+	if !ok {
+		return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{
+}, cfg)
+}
+	modules := m.OrderMigrations
+	if modules == nil {
+		modules = DefaultMigrationsOrder(m.ModuleNames())
+}
+	updatedVM := VersionMap{
+}
+	for _, moduleName := range modules {
+		module := m.Modules[moduleName]
+		fromVersion, exists := fromVM[moduleName]
+		toVersion := uint64(0)
+		if module, ok := module.(HasConsensusVersion); ok {
+			toVersion = module.ConsensusVersion()
+}
+
+	/ We run migration if the module is specified in `fromVM`.
+	/ Otherwise we run InitGenesis.
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager)
+
+ModuleNames() []string {
+	return maps.Keys(m.Modules)
+}
+
+/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+/ except x/auth which will run last, see:
+/ https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+	const authName = "auth"
+	out := make([]string, 0, len(modules))
+	hasAuth := false
+	for _, m := range modules {
+		if m == authName {
+			hasAuth = true
+}
+
+else {
+			out = append(out, m)
+}
+
+}
+
+sort.Strings(out)
+	if hasAuth {
+		out = append(out, authName)
+}
+
+return out
+}
+```
+
+* `RegisterInvariants(sdk.InvariantRegistry)`: Registers the [`invariants`](/docs/sdk/v0.47/documentation/module-system/invariants) of the module. If an invariant deviates from its predicted value, the [`InvariantRegistry`](/docs/sdk/v0.47/documentation/module-system/invariants#invariant-registry) triggers appropriate logic (most often the chain will be halted).
+
+### `HasServices`
+
+This interface defines one method, `RegisterServices(Configurator)`. It allows a module to register its services (its `Msg` and gRPC query services) with the configurator.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+ - independent module functionality (AppModuleBasic)
+ - inter-dependent module genesis functionality (AppModuleGenesis)
+ - inter-dependent module simulation functionality (AppModuleSimulation)
+ - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+ +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. 
+type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr 
*runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. 
It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. 
+/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to docs/core/upgrade.md for more information. +func (m Manager) + +RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(configurator) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. 
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +* `RegisterServices(Configurator)`: Allows a module to register services. 
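To illustrate the dispatch pattern behind `Manager.RegisterServices` above, here is a minimal, self-contained sketch of walking a heterogeneous module map and registering services only for modules that implement a `HasServices`-style interface. The `Configurator` stand-in, the `logConfigurator`, `fooModule`, `barModule`, and `registerAll` names below are simplified, hypothetical illustrations — not the actual SDK API:

```go
package main

import "fmt"

// Configurator is a minimal stand-in for the SDK's Configurator;
// the real one registers gRPC services and migrations.
type Configurator interface {
	RegisterService(name string)
}

// HasServices mirrors the extension-interface pattern: only modules
// implementing it take part in service registration.
type HasServices interface {
	RegisterServices(cfg Configurator)
}

// logConfigurator records which modules registered a service.
type logConfigurator struct{ registered []string }

func (c *logConfigurator) RegisterService(name string) {
	c.registered = append(c.registered, name)
}

// fooModule implements HasServices; barModule deliberately does not.
type fooModule struct{}

func (fooModule) RegisterServices(cfg Configurator) { cfg.RegisterService("foo") }

type barModule struct{}

// registerAll mirrors Manager.RegisterServices: iterate the module map
// and type-assert each entry, skipping modules without services.
func registerAll(modules map[string]interface{}, cfg Configurator) {
	for _, m := range modules {
		if svc, ok := m.(HasServices); ok {
			svc.RegisterServices(cfg)
		}
	}
}

func main() {
	cfg := &logConfigurator{}
	registerAll(map[string]interface{}{"foo": fooModule{}, "bar": barModule{}}, cfg)
	fmt.Println(cfg.registered) // prints: [foo]
}
```

The type assertion, rather than a mandatory interface method, is what lets legacy and genesis-only modules live in the same `map[string]interface{}` without stub implementations.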
+ +### `HasConsensusVersion` + +This interface defines one method for checking a module's consensus version. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx 
commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. 
To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. 
+/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to docs/core/upgrade.md for more information. +func (m Manager) + +RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(configurator) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. 
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +* `ConsensusVersion() uint64`: Returns the consensus version of the module. 
+
+### `BeginBlockAppModule`
+
+The `BeginBlockAppModule` is an extension interface from `AppModule`. All modules that have a `BeginBlock` method implement this interface.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+ - independent module functionality (AppModuleBasic)
+ - inter-dependent module genesis functionality (AppModuleGenesis)
+ - inter-dependent module simulation functionality (AppModuleSimulation)
+ - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+
+Independent module functions are separated to allow for the construction of the
+basic application structures required early on in the application definition
+and used to enable the definition of full module functionality later in the
+process. This separation is necessary, however we still want to allow for a
+high level pattern for modules to follow - for instance, such that we don't
+have to manually register all of the codecs for all the modules. This basic
+procedure as well as other basic patterns are handled through the use of
+BasicManager.
+ +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx 
commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. 
To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager)
+
+assertNoForgottenModules(setOrderFnName string, moduleNames []string) {
+ ms := make(map[string]bool)
+ for _, m := range moduleNames {
+ ms[m] = true
+}
+
+var missing []string
+ for m := range m.Modules {
+ if !ms[m] {
+ missing = append(missing, m)
+}
+
+}
+ if len(missing) != 0 {
+ sort.Strings(missing)
+
+panic(fmt.Sprintf(
+ "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing))
+}
+}
+
+/ MigrationHandler is the migration function that each module registers.
+type MigrationHandler func(sdk.Context)
+
+error
+
+/ VersionMap is a map of moduleName -> version
+type VersionMap map[string]uint64
+
+/ RunMigrations performs in-place store migrations for all modules. This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
+
+defined by
+/ `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function with
+/ foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist
+/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
+/ / run InitGenesis on foo.
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+/ / consensus version:
+/ fromVM["foo"] = foo.AppModule{
+}.ConsensusVersion()
+/
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Please also refer to docs/core/upgrade.md for more information.
+func (m Manager)
+
+RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
+ c, ok := cfg.(configurator)
+ if !ok {
+ return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{
+}, cfg)
+}
+ modules := m.OrderMigrations
+ if modules == nil {
+ modules = DefaultMigrationsOrder(m.ModuleNames())
+}
+ updatedVM := VersionMap{
+}
+ for _, moduleName := range modules {
+ module := m.Modules[moduleName]
+ fromVersion, exists := fromVM[moduleName]
+ toVersion := uint64(0)
+ if module, ok := module.(HasConsensusVersion); ok {
+ toVersion = module.ConsensusVersion()
+}
+
+ / We run migration if the module is specified in `fromVM`.
+ / Otherwise we run InitGenesis.
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager)
+
+ModuleNames() []string {
+ return maps.Keys(m.Modules)
+}
+
+/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+/ except x/auth which will run last, see:
+/ https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+ const authName = "auth"
+ out := make([]string, 0, len(modules))
+ hasAuth := false
+ for _, m := range modules {
+ if m == authName {
+ hasAuth = true
+}
+
+else {
+ out = append(out, m)
+}
+
+}
+
+sort.Strings(out)
+ if hasAuth {
+ out = append(out, authName)
+}
+
+return out
+}
+```
+
+* `BeginBlock(sdk.Context, abci.RequestBeginBlock)`: This method gives module developers the option to implement logic that is automatically triggered at the beginning of each block. Leave the implementation empty if no logic needs to run at the beginning of each block for this module.
+
+### `EndBlockAppModule`
+
+The `EndBlockAppModule` is an extension interface from `AppModule`. All modules that have an `EndBlock` method implement this interface.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+ - independent module functionality (AppModuleBasic)
+ - inter-dependent module genesis functionality (AppModuleGenesis)
+ - inter-dependent module simulation functionality (AppModuleSimulation)
+ - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+ +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. 
+type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr 
*runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. 
It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ with foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to docs/core/upgrade.md for more information. +func (m Manager) + +RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(configurator) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis.
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +* `EndBlock(sdk.Context, abci.RequestEndBlock)`: This method gives module developers the option to implement logic that is automatically triggered at the end of each block. This is also where the module can inform the underlying consensus engine of validator set changes (e.g. the `staking` module). Implement empty if no logic needs to be triggered at the end of each block for this module. + +### `HasPrecommit` + +`HasPrecommit` is an extension interface from `AppModule`. All modules that have a `Precommit` method implement this interface. + +* `Precommit(sdk.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](/docs/sdk/v0.47/learn/advanced/00-baseapp#commit) of each block using the [`deliverState`](/docs/sdk/v0.47/learn/advanced/00-baseapp#state-updates) of the block to be committed. Implement empty if no logic needs to be triggered during `Commit` of each block for this module. + +### `HasPrepareCheckState` + +`HasPrepareCheckState` is an extension interface from `AppModule`. All modules that have a `PrepareCheckState` method implement this interface.
+ +* `PrepareCheckState(sdk.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](/docs/sdk/v0.47/learn/advanced/00-baseapp) of each block using the [`checkState`](/docs/sdk/v0.47/learn/advanced/00-baseapp#state-updates) of the next block. Implement empty if no logic needs to be triggered during `Commit` of each block for this module. + +### Implementing the Application Module Interfaces + +Typically, the various application module interfaces are implemented in a file called `module.go`, located in the module's folder (e.g. `./x/module/module.go`). + +Almost every module needs to implement the `AppModuleBasic` and `AppModule` interfaces. If the module is only used for genesis, it will implement `AppModuleGenesis` instead of `AppModule`. The concrete type that implements the interface can add parameters that are required for the implementation of the various methods of the interface. For example, the `Route()` function often calls a `NewMsgServerImpl(k keeper)` function defined in `keeper/msg_server.go` and therefore needs to pass the module's [`keeper`](/docs/sdk/v0.47/documentation/module-system/keeper) as a parameter. + +```go +// example +type AppModule struct { + AppModuleBasic + keeper Keeper +} +``` + +In the example above, you can see that the `AppModule` concrete type references an `AppModuleBasic`, and not an `AppModuleGenesis`. That is because `AppModuleGenesis` only needs to be implemented in modules that focus on genesis-related functionalities. In most modules, the concrete `AppModule` type will have a reference to an `AppModuleBasic` and implement the two added methods of `AppModuleGenesis` directly in the `AppModule` type.
+ +If no parameter is required (which is often the case for `AppModuleBasic`), just declare an empty concrete type like so: + +```go +type AppModuleBasic struct{} +``` + +## Module Managers + +Module managers are used to manage collections of `AppModuleBasic` and `AppModule`. + +### `BasicManager` + +The `BasicManager` is a structure that lists all the `AppModuleBasic` of an application: + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager.
+ +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx 
commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. 
To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) +/ by the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to docs/core/upgrade.md for more information. +func (m Manager) + +RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(configurator) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis.
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +It implements the following methods: + +* `NewBasicManager(modules ...AppModuleBasic)`: Constructor function. 
It takes a list of the application's `AppModuleBasic` and builds a new `BasicManager`. This function is generally called in the `init()` function of [`app.go`](/docs/sdk/v0.47/learn/beginner/overview-app#core-application-file) to quickly initialize the independent elements of the application's modules (click [here](https://github.com/cosmos/gaia/blob/main/app/app.go#L59-L74) to see an example). +* `RegisterLegacyAminoCodec(cdc *codec.LegacyAmino)`: Registers the [`codec.LegacyAmino`s](/docs/sdk/v0.47/learn/advanced/encoding#amino) of each of the application's `AppModuleBasic`. This function is usually called early on in the [application's construction](/docs/sdk/v0.47/learn/beginner/overview-app#constructor). +* `RegisterInterfaces(registry codectypes.InterfaceRegistry)`: Registers interface types and implementations of each of the application's `AppModuleBasic`. +* `DefaultGenesis(cdc codec.JSONCodec)`: Provides default genesis information for modules in the application by calling the [`DefaultGenesis(cdc codec.JSONCodec)`](/docs/sdk/v0.47/documentation/module-system/genesis#defaultgenesis) function of each module. It only calls modules that implement the `HasGenesisBasics` interface. +* `ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage)`: Validates the genesis information of modules by calling the [`ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`](/docs/sdk/v0.47/documentation/module-system/genesis#validategenesis) function of modules implementing the `HasGenesisBasics` interface. +* `RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux)`: Registers gRPC routes for modules. +* `AddTxCommands(rootTxCmd *cobra.Command)`: Adds modules' transaction commands to the application's [`rootTxCommand`](/docs/sdk/v0.47/learn/advanced/cli#transaction-commands).
This function is usually called from the `main()` function of the [application's command-line interface](/docs/sdk/v0.47/learn/advanced/cli). +* `AddQueryCommands(rootQueryCmd *cobra.Command)`: Adds modules' query commands to the application's [`rootQueryCommand`](/docs/sdk/v0.47/learn/advanced/cli#query-commands). This function is usually called from the `main()` function of the [application's command-line interface](/docs/sdk/v0.47/learn/advanced/cli). + +### `Manager` + +The `Manager` is a structure that holds all the `AppModule`s of an application, and defines the order of execution between several key components of these modules: + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules.
This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "encoding/json" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "golang.org/x/exp/maps" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(codectypes.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesis := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesis[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesis +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage) + +error { + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesis[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx 
commands to the rootTxCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +/ +/ TODO: Remove clientCtx argument. +/ REF: https://github.com/cosmos/cosmos-sdk/issues/6571 +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. 
To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ BeginBlockAppModule is an extension interface that contains information about the AppModule and BeginBlock. +type BeginBlockAppModule interface { + AppModule + BeginBlock(sdk.Context, abci.RequestBeginBlock) +} + +/ EndBlockAppModule is an extension interface that contains information about the AppModule and EndBlock. +type EndBlockAppModule interface { + AppModule + EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(_ sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + return []abci.ValidatorUpdate{ +} +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + +} +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) + +abci.ResponseInitChain { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + panic(fmt.Sprintf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the 
DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)) +} + +return abci.ResponseInitChain{ + Validators: validatorUpdates, +} +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +map[string]json.RawMessage { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) + +map[string]json.RawMessage { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + panic(err) +} + channels := make(map[string]chan json.RawMessage) + for _, moduleName := range modulesToExport { + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + channels[moduleName] = make(chan json.RawMessage) + +go func(module HasGenesis, ch chan json.RawMessage) { + ctx := ctx.WithGasMeter(sdk.NewInfiniteGasMeter()) / avoid race conditions + ch <- module.ExportGenesis(ctx, cdc) +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + genesisData[moduleName] = <-channels[moduleName] +} + +return genesisData +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. 
+func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "%s: all modules must be defined when setting %s, missing: %v", setOrderFnName, setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) +/ by the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to docs/core/upgrade.md for more information. +func (m Manager) + +RunMigrations(ctx sdk.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(configurator) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis.
+ / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(ctx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + ctx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(ctx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + module, ok := m.Modules[moduleName].(BeginBlockAppModule) + if ok { + module.BeginBlock(ctx, req) +} + +} + +return abci.ResponseBeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +EndBlock(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + module, ok := m.Modules[moduleName].(EndBlockAppModule) + if !ok { + continue +} + moduleValUpdates := module.EndBlock(ctx, req) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + panic("validator EndBlock updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +return abci.ResponseEndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +} +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +The module manager is used throughout the application whenever an action on a collection of modules is required. 
It implements the following methods:
+
+* `NewManager(modules ...AppModule)`: Constructor function. It takes a list of the application's `AppModule`s and builds a new `Manager`. It is generally called from the application's main [constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).
+* `SetOrderInitGenesis(moduleNames ...string)`: Sets the order in which the [`InitGenesis`](/docs/sdk/v0.47/documentation/module-system/genesis#initgenesis) function of each module will be called when the application is first started. This function is generally called from the application's main [constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).
+  To initialize modules successfully, module dependencies should be considered. For example, the `genutil` module must occur after the `staking` module so that the pools are properly initialized with tokens from genesis accounts; `genutil` must also occur after `auth` so that it can access the params from `auth`; and IBC's `capability` module should be initialized before all other modules so that it can initialize any capabilities.
+* `SetOrderExportGenesis(moduleNames ...string)`: Sets the order in which the [`ExportGenesis`](/docs/sdk/v0.47/documentation/module-system/genesis#exportgenesis) function of each module will be called in case of an export. This function is generally called from the application's main [constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).
+* `SetOrderBeginBlockers(moduleNames ...string)`: Sets the order in which the `BeginBlock()` function of each module will be called at the beginning of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).
+* `SetOrderEndBlockers(moduleNames ...string)`: Sets the order in which the `EndBlock()` function of each module will be called at the end of each block.
This function is generally called from the application's main [constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).
+* `SetOrderPrecommiters(moduleNames ...string)`: Sets the order in which the `Precommit()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).
+* `SetOrderPrepareCheckStaters(moduleNames ...string)`: Sets the order in which the `PrepareCheckState()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).
+* `SetOrderMigrations(moduleNames ...string)`: Sets the order of migrations to be run. If not set, migrations are run in the order defined by `DefaultMigrationsOrder`.
+* `RegisterInvariants(ir sdk.InvariantRegistry)`: Registers the [invariants](/docs/sdk/v0.47/documentation/module-system/invariants) of modules implementing the `HasInvariants` interface.
+* `RegisterRoutes(router sdk.Router, queryRouter sdk.QueryRouter, legacyQuerierCdc *codec.LegacyAmino)`: Registers legacy [`Msg`](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages) and [`querier`](/docs/sdk/v0.47/documentation/module-system/query-services) routes.
+* `RegisterServices(cfg Configurator)`: Registers the services of modules implementing the `HasServices` interface.
+* `InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage)`: Calls the [`InitGenesis`](/docs/sdk/v0.47/documentation/module-system/genesis#initgenesis) function of each module when the application is first started, in the order defined in `OrderInitGenesis`. Returns an `abci.ResponseInitChain` to the underlying consensus engine, which can contain validator updates.
+* `ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec)`: Calls the [`ExportGenesis`](/docs/sdk/v0.47/documentation/module-system/genesis#exportgenesis) function of each module, in the order defined in `OrderExportGenesis`. The export constructs a genesis file from a previously existing state, and is mainly used when a hard-fork upgrade of the chain is required.
+* `ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string)`: Behaves the same as `ExportGenesis`, except it takes a list of modules to export.
+* `BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock)`: At the beginning of each block, this function is called from [`BaseApp`](/docs/sdk/v0.47/learn/advanced/baseapp#beginblock) and, in turn, calls the [`BeginBlock`](/docs/sdk/v0.47/documentation/module-system/beginblock-endblock) function of each module implementing the `BeginBlockAppModule` interface, in the order defined in `OrderBeginBlockers`. It creates a child [context](/docs/sdk/v0.47/learn/advanced/context) with an event manager to aggregate [events](/docs/sdk/v0.47/learn/advanced/events) emitted from all modules. The function returns an `abci.ResponseBeginBlock` which contains the aforementioned events.
+* `EndBlock(ctx sdk.Context, req abci.RequestEndBlock)`: At the end of each block, this function is called from [`BaseApp`](/docs/sdk/v0.47/learn/advanced/baseapp#endblock) and, in turn, calls the [`EndBlock`](/docs/sdk/v0.47/documentation/module-system/beginblock-endblock) function of each module implementing the `EndBlockAppModule` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](/docs/sdk/v0.47/learn/advanced/context) with an event manager to aggregate [events](/docs/sdk/v0.47/learn/advanced/events) emitted from all modules. The function returns an `abci.ResponseEndBlock` which contains the aforementioned events, as well as validator set updates (if any).
+* `Precommit(ctx sdk.Context)`: During [`Commit`](/docs/sdk/v0.47/learn/advanced/baseapp#commit), this function is called from `BaseApp` immediately before the [`deliverState`](/docs/sdk/v0.47/learn/advanced/baseapp#state-updates) is written to the underlying [`rootMultiStore`](/docs/sdk/v0.47/learn/advanced/store#commitkvstore) and, in turn, calls the `Precommit` function of each module implementing the `HasPrecommit` interface, in the order defined in `OrderPrecommiters`. It creates a child [context](/docs/sdk/v0.47/learn/advanced/context) where the underlying `CacheMultiStore` is that of the newly committed block's [`deliverState`](/docs/sdk/v0.47/learn/advanced/baseapp#state-updates).
+* `PrepareCheckState(ctx sdk.Context)`: During [`Commit`](/docs/sdk/v0.47/learn/advanced/baseapp#commit), this function is called from `BaseApp` immediately after the [`deliverState`](/docs/sdk/v0.47/learn/advanced/baseapp#state-updates) is written to the underlying [`rootMultiStore`](/docs/sdk/v0.47/learn/advanced/store#commitmultistore) and, in turn, calls the `PrepareCheckState` function of each module implementing the `HasPrepareCheckState` interface, in the order defined in `OrderPrepareCheckStaters`. It creates a child [context](/docs/sdk/v0.47/learn/advanced/context) where the underlying `CacheMultiStore` is that of the next block's [`checkState`](/docs/sdk/v0.47/learn/advanced/baseapp#state-updates). Writes to this state will be present in the [`checkState`](/docs/sdk/v0.47/learn/advanced/baseapp#state-updates) of the next block, and therefore this method can be used to prepare the `checkState` for the next block.
+
+Here's an example of a concrete integration within `simapp`:
+
+```go expandable
+/go:build app_v1
+
+package simapp
+
+import (
+
+ "encoding/json"
+ "io"
+ "os"
+ "path/filepath"
+
+ autocliv1 "cosmossdk.io/api/cosmos/autocli/v1"
+ reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1"
+ "github.com/spf13/cast"
+ abci "github.com/tendermint/tendermint/abci/types"
+ "github.com/tendermint/tendermint/libs/log"
+ dbm "github.com/tendermint/tm-db"
+
+ simappparams "cosmossdk.io/simapp/params"
+ "github.com/cosmos/cosmos-sdk/baseapp"
+ "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/client/flags"
+ nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node"
+ "github.com/cosmos/cosmos-sdk/client/grpc/tmservice"
+ "github.com/cosmos/cosmos-sdk/codec"
+ "github.com/cosmos/cosmos-sdk/codec/types"
+ "github.com/cosmos/cosmos-sdk/runtime"
+ runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services"
+ "github.com/cosmos/cosmos-sdk/server"
+ "github.com/cosmos/cosmos-sdk/server/api"
+ "github.com/cosmos/cosmos-sdk/server/config"
+ servertypes "github.com/cosmos/cosmos-sdk/server/types"
+ "github.com/cosmos/cosmos-sdk/std"
+ "github.com/cosmos/cosmos-sdk/store/streaming"
+ storetypes "github.com/cosmos/cosmos-sdk/store/types"
+ "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/module"
+ "github.com/cosmos/cosmos-sdk/version"
+ "github.com/cosmos/cosmos-sdk/x/auth"
+ "github.com/cosmos/cosmos-sdk/x/auth/ante"
+ authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+ "github.com/cosmos/cosmos-sdk/x/auth/posthandler"
+ authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation"
+ authtx "github.com/cosmos/cosmos-sdk/x/auth/tx"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+ "github.com/cosmos/cosmos-sdk/x/auth/vesting"
+ vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types"
+ "github.com/cosmos/cosmos-sdk/x/authz"
+ authzkeeper
"github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper 
"github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + +app.MountMemoryStores(memKeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(encodingConfig.TxConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + var genesisState 
GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetTKey(storeKey string) *storetypes.TransientStoreKey { + return app.tkeys[storeKey] +} + +/ GetMemKey returns the MemStoreKey for the provided mem key. 
+/ +/ NOTE: This is solely used for testing purposes. +func (app *SimApp) + +GetMemKey(storeKey string) *storetypes.MemoryStoreKey { + return app.memKeys[storeKey] +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new tendermint queries routes from grpc-gateway. + tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. 
+func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + tmservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + app.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter()) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable()) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} + +func makeEncodingConfig() + +simappparams.EncodingConfig { + encodingConfig := simappparams.MakeTestEncodingConfig() + +std.RegisterLegacyAminoCodec(encodingConfig.Amino) + 
+std.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino) + +ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +return encodingConfig +} +``` + +This is the same example from `runtime` (the package that powers app v2): + +```go expandable +package runtime + +import ( + + "fmt" + + abci "github.com/tendermint/tendermint/abci/types" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/std" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/types/module" +) + +/ BaseAppOption is a depinject.AutoGroupType which can be used to pass +/ BaseApp options into the depinject. It should be used carefully. +type BaseAppOption func(*baseapp.BaseApp) + +/ IsManyPerContainerType indicates that this is a depinject.ManyPerContainerType. 
+func (b BaseAppOption) + +IsManyPerContainerType() { +} + +func init() { + appmodule.Register(&runtimev1alpha1.Module{ +}, + appmodule.Provide( + ProvideApp, + ProvideKVStoreKey, + ProvideTransientStoreKey, + ProvideMemoryStoreKey, + ProvideDeliverTx, + ), + appmodule.Invoke(SetupAppBuilder), + ) +} + +func ProvideApp() ( + codectypes.InterfaceRegistry, + codec.Codec, + *codec.LegacyAmino, + *AppBuilder, + codec.ProtoCodecMarshaler, + *baseapp.MsgServiceRouter, +) { + interfaceRegistry := codectypes.NewInterfaceRegistry() + amino := codec.NewLegacyAmino() + +std.RegisterInterfaces(interfaceRegistry) + +std.RegisterLegacyAminoCodec(amino) + cdc := codec.NewProtoCodec(interfaceRegistry) + msgServiceRouter := baseapp.NewMsgServiceRouter() + app := &AppBuilder{ + &App{ + storeKeys: nil, + interfaceRegistry: interfaceRegistry, + cdc: cdc, + amino: amino, + basicManager: module.BasicManager{ +}, + msgServiceRouter: msgServiceRouter, +}, +} + +return interfaceRegistry, cdc, amino, app, cdc, msgServiceRouter +} + +type AppInputs struct { + depinject.In + + AppConfig *appv1alpha1.Config + Config *runtimev1alpha1.Module + AppBuilder *AppBuilder + Modules map[string]appmodule.AppModule + BaseAppOptions []BaseAppOption + InterfaceRegistry codectypes.InterfaceRegistry + LegacyAmino *codec.LegacyAmino +} + +func SetupAppBuilder(inputs AppInputs) { + app := inputs.AppBuilder.app + app.baseAppOptions = inputs.BaseAppOptions + app.config = inputs.Config + app.ModuleManager = module.NewManagerFromMap(inputs.Modules) + +app.appConfig = inputs.AppConfig + for name, mod := range inputs.Modules { + if basicMod, ok := mod.(module.AppModuleBasic); ok { + app.basicManager[name] = basicMod + basicMod.RegisterInterfaces(inputs.InterfaceRegistry) + +basicMod.RegisterLegacyAminoCodec(inputs.LegacyAmino) +} + +} +} + +func registerStoreKey(wrapper *AppBuilder, key storetypes.StoreKey) { + wrapper.app.storeKeys = append(wrapper.app.storeKeys, key) +} + +func storeKeyOverride(config 
*runtimev1alpha1.Module, moduleName string) *runtimev1alpha1.StoreKeyConfig { + for _, cfg := range config.OverrideStoreKeys { + if cfg.ModuleName == moduleName { + return cfg +} + +} + +return nil +} + +func ProvideKVStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.KVStoreKey { + override := storeKeyOverride(config, key.Name()) + +var storeKeyName string + if override != nil { + storeKeyName = override.KvStoreKey +} + +else { + storeKeyName = key.Name() +} + storeKey := storetypes.NewKVStoreKey(storeKeyName) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideTransientStoreKey(key depinject.ModuleKey, app *AppBuilder) *storetypes.TransientStoreKey { + storeKey := storetypes.NewTransientStoreKey(fmt.Sprintf("transient:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideMemoryStoreKey(key depinject.ModuleKey, app *AppBuilder) *storetypes.MemoryStoreKey { + storeKey := storetypes.NewMemoryStoreKey(fmt.Sprintf("memory:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideDeliverTx(appBuilder *AppBuilder) + +func(abci.RequestDeliverTx) + +abci.ResponseDeliverTx { + return func(tx abci.RequestDeliverTx) + +abci.ResponseDeliverTx { + return appBuilder.app.BaseApp.DeliverTx(tx) +} +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/README.mdx new file mode 100644 index 00000000..e74dd98d --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/README.mdx @@ -0,0 +1,42 @@ +--- +title: Module Summary +description: >- + Here are some production-grade modules that can be used in Cosmos SDK + applications, along with their respective documentation: +--- + +Here are some
production-grade modules that can be used in Cosmos SDK applications, along with their respective documentation: + +* [Auth](/docs/sdk/v0.47/documentation/module-system/modules/auth/README) - Authentication of accounts and transactions for Cosmos SDK applications. +* [Authz](/docs/sdk/v0.47/documentation/module-system/modules/authz/README) - Authorization for accounts to perform actions on behalf of other accounts. +* [Bank](/docs/sdk/v0.47/documentation/module-system/modules/bank/README) - Token transfer functionalities. +* [Crisis](/docs/sdk/v0.47/documentation/module-system/modules/crisis/README) - Halting the blockchain under certain circumstances (e.g. if an invariant is broken). +* [Distribution](/docs/sdk/v0.47/documentation/module-system/modules/distribution/README) - Fee distribution, and staking token provision distribution. +* [Evidence](/docs/sdk/v0.47/documentation/module-system/modules/evidence/README) - Evidence handling for double signing, misbehaviour, etc. +* [Feegrant](/docs/sdk/v0.47/documentation/module-system/modules/feegrant/README) - Grant fee allowances for executing transactions. +* [Governance](/docs/sdk/v0.47/documentation/module-system/modules/gov/README) - On-chain proposals and voting. +* [Mint](/docs/sdk/v0.47/documentation/module-system/modules/mint/README) - Creation of new units of staking token. +* [Params](/docs/sdk/v0.47/documentation/module-system/modules/params/README) - Globally available parameter store. +* [Slashing](/docs/sdk/v0.47/documentation/module-system/modules/slashing/README) - Validator punishment mechanisms. +* [Staking](/docs/sdk/v0.47/documentation/module-system/modules/staking/README) - Proof-of-Stake layer for public blockchains. +* [Upgrade](/docs/sdk/v0.47/documentation/module-system/modules/upgrade/README) - Software upgrades handling and coordination. 
+* [NFT](/docs/sdk/v0.47/documentation/module-system/modules/nft/README) - NFT module implemented based on [ADR43](/docs/common/pages/adr-comprehensive#adr-043-nft-module). +* [Consensus](/docs/sdk/v0.47/documentation/module-system/modules/consensus/README) - Consensus module for modifying CometBFT's ABCI consensus params. +* [Circuit](/docs/sdk/v0.47/documentation/module-system/modules/circuit/README) - Circuit breaker module for pausing messages. +* [Genutil](/docs/sdk/v0.47/documentation/module-system/modules/genutil/README) - Genesis utilities for the Cosmos SDK. + +To learn more about the process of building modules, visit the [building modules reference documentation](https://docs.cosmos.network/main/building-modules/intro). + +## IBC + +The IBC module for the SDK is maintained by the IBC Go team in its [own repository](https://github.com/cosmos/ibc-go). + +Additionally, from v0.48 onwards the [capability module](https://github.com/cosmos/ibc-go/tree/fdd664698d79864f1e00e147f9879e58497b5ef1/modules/capability) is also maintained by the IBC Go team in the same repository. + +## CosmWasm + +The CosmWasm module enables smart contracts; learn more at its [documentation site](https://book.cosmwasm.com/) or visit [the repository](https://github.com/CosmWasm/cosmwasm). + +## EVM + +Read more about writing smart contracts in Solidity at the official [`evm` documentation page](https://docs.evmos.org/protocol/modules/evm). diff --git a/docs/sdk/v0.47/documentation/module-system/modules/accounts/accounts.mdx b/docs/sdk/v0.47/documentation/module-system/modules/accounts/accounts.mdx new file mode 100644 index 00000000..42288b8b --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/accounts/accounts.mdx @@ -0,0 +1,8 @@ +--- +title: x/accounts +description: >- + The x/accounts module provides the framework and facilities for writing smart + Cosmos SDK accounts.
+--- + +The x/accounts module provides the framework and facilities for writing smart Cosmos SDK accounts. diff --git a/docs/sdk/v0.47/documentation/module-system/modules/auth/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/auth/README.mdx new file mode 100644 index 00000000..8448eb66 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/auth/README.mdx @@ -0,0 +1,734 @@ +--- +title: '`x/auth`' +description: This document specifies the auth module of the Cosmos SDK. +--- + +## Abstract + +This document specifies the auth module of the Cosmos SDK. + +The auth module is responsible for specifying the base transaction and account types +for an application, since the SDK itself is agnostic to these particulars. It contains +the middlewares, where all basic transaction validity checks (signatures, nonces, auxiliary fields) +are performed, and exposes the account keeper, which allows other modules to read, write, and modify accounts. + +This module is used in the Cosmos Hub. + +## Contents + +* [Concepts](#concepts) + * [Gas & Fees](#gas--fees) +* [State](#state) + * [Accounts](#accounts) +* [AnteHandlers](#antehandlers) +* [Keepers](#keepers) + * [Account Keeper](#account-keeper) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +**Note:** The auth module is different from the [authz module](/docs/sdk/v0.47/documentation/module-system/modules/authz/README). + +The differences are: + +* `auth` - authentication of accounts and transactions for Cosmos SDK applications and is responsible for specifying the base transaction and account types. +* `authz` - authorization for accounts to perform actions on behalf of other accounts and enables a granter to grant authorizations to a grantee that allows the grantee to execute messages on behalf of the granter. + +### Gas & Fees + +Fees serve two purposes for an operator of the network.
+ +Fees limit the growth of the state stored by every full node and allow for +general purpose censorship of transactions of little economic value. Fees +are best suited as an anti-spam mechanism where validators are disinterested in +the use of the network and identities of users. + +Fees are determined by the gas limits and gas prices transactions provide, where +`fees = ceil(gasLimit * gasPrices)`. Txs incur gas costs for all state reads/writes, +signature verification, as well as costs proportional to the tx size. Operators +should set minimum gas prices when starting their nodes. They must set the unit +costs of gas in each token denomination they wish to support: + +`simd start ... --minimum-gas-prices=0.00001stake;0.05photinos` + +When adding transactions to the mempool or gossiping them, validators check +if the transaction's gas prices, which are determined by the provided fees, meet +any of the validator's minimum gas prices. In other words, a transaction must +provide a fee of at least one denomination that matches a validator's minimum +gas price. + +CometBFT does not currently provide fee based mempool prioritization, and fee +based mempool filtering is local to each node and not part of consensus. But with +minimum gas prices set, such a mechanism could be implemented by node operators. + +Because the market value for tokens will fluctuate, validators are expected to +dynamically adjust their minimum gas prices to a level that would encourage the +use of the network. + +## State + +### Accounts + +Accounts contain authentication information for a uniquely identified external user of an SDK blockchain, +including public key, address, and account number / sequence number for replay protection. For efficiency, +since account balances must also be fetched to pay fees, account structs also store the balance of a user +as `sdk.Coins`. + +Accounts are exposed externally as an interface, and stored internally as +either a base account or vesting account.
Module clients wishing to add more +account types may do so. + +* `0x01 | Address -> ProtocolBuffer(account)` + +#### Account Interface + +The account interface exposes methods to read and write standard account information. +Note that all of these methods operate on an account struct conforming to the +interface - in order to write the account to the store, the account keeper will +need to be used. + +```go expandable +/ AccountI is an interface used to store coins at a given address within state. +/ It presumes a notion of sequence numbers for replay protection, +/ a notion of account numbers for replay protection for previously pruned accounts, +/ and a pubkey for authentication purposes. +/ +/ Many complex conditions can be used in the concrete struct which implements AccountI. +type AccountI interface { + proto.Message + + GetAddress() + +sdk.AccAddress + SetAddress(sdk.AccAddress) + +error / errors if already set. + + GetPubKey() + +crypto.PubKey / can return nil. + SetPubKey(crypto.PubKey) + +error + + GetAccountNumber() + +uint64 + SetAccountNumber(uint64) + +error + + GetSequence() + +uint64 + SetSequence(uint64) + +error + + / Ensure that account implements stringer + String() + +string +} +``` + +##### Base Account + +A base account is the simplest and most common account type, which just stores all requisite +fields directly in a struct. + +```protobuf +/ BaseAccount defines a base account type. It contains all the necessary fields +/ for basic account functionality. Any custom account type should extend this +/ type for additional functionality (e.g. vesting). +message BaseAccount { + string address = 1; + google.protobuf.Any pub_key = 2; + uint64 account_number = 3; + uint64 sequence = 4; +} +``` + +### Vesting Account + +See [Vesting](https://docs.cosmos.network/main/modules/auth/vesting/). 
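The role of the account number and sequence fields described above can be illustrated with a minimal, self-contained sketch. The `Account` and `BaseAccount` names below are simplified stand-ins for the SDK's `x/auth` types, not the real implementations; the actual interface also covers addresses and pubkeys:

```go
package main

import "fmt"

// Account is a simplified stand-in for the SDK's AccountI interface.
type Account interface {
	GetAccountNumber() uint64
	GetSequence() uint64
	SetSequence(uint64) error
}

// BaseAccount stores the replay-protection fields directly in a struct,
// mirroring the base account described above.
type BaseAccount struct {
	AccountNumber uint64
	Sequence      uint64
}

func (a *BaseAccount) GetAccountNumber() uint64 { return a.AccountNumber }
func (a *BaseAccount) GetSequence() uint64      { return a.Sequence }
func (a *BaseAccount) SetSequence(s uint64) error {
	a.Sequence = s
	return nil
}

func main() {
	var acc Account = &BaseAccount{AccountNumber: 7}

	// After each successfully processed transaction the sequence is
	// incremented, so a replayed transaction signed over the old
	// sequence no longer passes signature verification.
	if err := acc.SetSequence(acc.GetSequence() + 1); err != nil {
		panic(err)
	}

	fmt.Println(acc.GetAccountNumber(), acc.GetSequence()) // prints: 7 1
}
```

In the real module, the account keeper persists the mutated account back to the store, which is what makes the incremented sequence visible to subsequent signature checks.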
+ +## AnteHandlers + +The `x/auth` module presently has no transaction handlers of its own, but does expose the special `AnteHandler`, used for performing basic validity checks on a transaction, such that it could be thrown out of the mempool. +The `AnteHandler` can be seen as a set of decorators that check transactions within the current context, per [ADR 010](/docs/sdk/next/documentation/legacy/adr-comprehensive). + +Note that the `AnteHandler` is called on both `CheckTx` and `DeliverTx`, as CometBFT proposers presently have the ability to include in their proposed block transactions which fail `CheckTx`. + +### Decorators + +The auth module provides `AnteDecorator`s that are recursively chained together into a single `AnteHandler` in the following order: + +* `SetUpContextDecorator`: Sets the `GasMeter` in the `Context` and wraps the next `AnteHandler` with a defer clause to recover from any downstream `OutOfGas` panics in the `AnteHandler` chain to return an error with information on gas provided and gas used. + +* `RejectExtensionOptionsDecorator`: Rejects all extension options which can optionally be included in protobuf transactions. + +* `MempoolFeeDecorator`: Checks if the `tx` fee is above the local mempool `minFee` parameter during `CheckTx`. + +* `ValidateBasicDecorator`: Calls `tx.ValidateBasic` and returns any non-nil error. + +* `TxTimeoutHeightDecorator`: Checks for a `tx` height timeout. + +* `ValidateMemoDecorator`: Validates `tx` memo with application parameters and returns any non-nil error. + +* `ConsumeGasTxSizeDecorator`: Consumes gas proportional to the `tx` size based on application parameters. + +* `DeductFeeDecorator`: Deducts the `FeeAmount` from the first signer of the `tx`. If the `x/feegrant` module is enabled and a fee granter is set, it deducts fees from the fee granter account.
+ +* `SetPubKeyDecorator`: Sets the pubkey from a `tx`'s signers that does not already have its corresponding pubkey saved in the state machine and in the current context. + +* `ValidateSigCountDecorator`: Validates the number of signatures in `tx` based on app-parameters. + +* `SigGasConsumeDecorator`: Consumes parameter-defined amount of gas for each signature. This requires pubkeys to be set in context for all signers as part of `SetPubKeyDecorator`. + +* `SigVerificationDecorator`: Verifies all signatures are valid. This requires pubkeys to be set in context for all signers as part of `SetPubKeyDecorator`. + +* `IncrementSequenceDecorator`: Increments the account sequence for each signer to prevent replay attacks. + +## Keepers + +The auth module only exposes one keeper, the account keeper, which can be used to read and write accounts. + +### Account Keeper + +Presently only one fully-permissioned account keeper is exposed, which has the ability to both read and write +all fields of all accounts, and to iterate over all stored accounts. + +```go expandable +/ AccountKeeperI is the interface contract that x/auth's keeper implements. +type AccountKeeperI interface { + / Return a new account with the next account number and the specified address. Does not save the new account to the store. + NewAccountWithAddress(sdk.Context, sdk.AccAddress) + +types.AccountI + + / Return a new account with the next account number. Does not save the new account to the store. + NewAccount(sdk.Context, types.AccountI) + +types.AccountI + + / Check if an account exists in the store. + HasAccount(sdk.Context, sdk.AccAddress) + +bool + + / Retrieve an account from the store. + GetAccount(sdk.Context, sdk.AccAddress) + +types.AccountI + + / Set an account in the store. + SetAccount(sdk.Context, types.AccountI) + + / Remove an account from the store. + RemoveAccount(sdk.Context, types.AccountI) + + / Iterate over all accounts, calling the provided function. 
Stop iteration when it returns true. + IterateAccounts(sdk.Context, func(types.AccountI) + +bool) + + / Fetch the public key of an account at a specified address + GetPubKey(sdk.Context, sdk.AccAddress) (crypto.PubKey, error) + + / Fetch the sequence of an account at a specified address. + GetSequence(sdk.Context, sdk.AccAddress) (uint64, error) + + / Fetch the next account number, and increment the internal counter. + NextAccountNumber(sdk.Context) + +uint64 +} +``` + +## Parameters + +The auth module contains the following parameters: + +| Key | Type | Example | +| ---------------------- | ------ | ------- | +| MaxMemoCharacters | uint64 | 256 | +| TxSigLimit | uint64 | 7 | +| TxSizeCostPerByte | uint64 | 10 | +| SigVerifyCostED25519 | uint64 | 590 | +| SigVerifyCostSecp256k1 | uint64 | 1000 | + +## Client + +### CLI + +A user can query and interact with the `auth` module using the CLI. + +### Query + +The `query` commands allow users to query `auth` state. + +```bash +simd query auth --help +``` + +#### account + +The `account` command allows users to query an account by its address. + +```bash +simd query auth account [address] [flags] +``` + +Example: + +```bash +simd query auth account cosmos1... +``` + +Example Output: + +```bash +'@type': /cosmos.auth.v1beta1.BaseAccount +account_number: "0" +address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 +pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD +sequence: "1" +``` + +#### accounts + +The `accounts` command allows users to query all the available accounts.
+ +```bash +simd query auth accounts [flags] +``` + +Example: + +```bash +simd query auth accounts +``` + +Example Output: + +```bash expandable +accounts: +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "0" + address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 + pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD + sequence: "1" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "8" + address: cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr + pub_key: null + sequence: "0" + name: transfer + permissions: + - minter + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "4" + address: cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh + pub_key: null + sequence: "0" + name: bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "5" + address: cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r + pub_key: null + sequence: "0" + name: not_bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "6" + address: cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn + pub_key: null + sequence: "0" + name: gov + permissions: + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "3" + address: cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl + pub_key: null + sequence: "0" + name: distribution + permissions: [] +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "1" + address: cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j + pub_key: null + sequence: "0" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "7" + address: cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q + pub_key: null + sequence: "0" + name: mint + permissions: + - minter +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "2" + 
    address: cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta
    pub_key: null
    sequence: "0"
  name: fee_collector
  permissions: []
pagination:
  next_key: null
  total: "0"
```

#### params

The `params` command allows users to query the current auth parameters.

```bash
simd query auth params [flags]
```

Example:

```bash
simd query auth params
```

Example Output:

```bash
max_memo_characters: "256"
sig_verify_cost_ed25519: "590"
sig_verify_cost_secp256k1: "1000"
tx_sig_limit: "7"
tx_size_cost_per_byte: "10"
```

### Transactions

The `auth` module provides transaction commands to help with signing and more. Unlike other modules, the `auth` module transaction commands are accessed directly under the top-level `tx` command.

Use the `--help` flag to get more information about the `tx` command.

```bash
simd tx --help
```

#### `sign`

The `sign` command allows users to sign transactions that were generated offline.

```bash
simd tx sign tx.json --from $ALICE > tx.signed.json
```

The result is a signed transaction that can be broadcast to the network with the `broadcast` command.

More information about the `sign` command can be found running `simd tx sign --help`.

#### `sign-batch`

The `sign-batch` command allows users to sign multiple offline-generated transactions.
The transactions can be in one file, with one tx per line, or in multiple files.

```bash
simd tx sign txs.json --from $ALICE > tx.signed.json
```

or

```bash
simd tx sign tx1.json tx2.json tx3.json --from $ALICE > tx.signed.json
```

The result is multiple signed transactions. To combine the signed transactions into one transaction, use the `--append` flag.

More information about the `sign-batch` command can be found running `simd tx sign-batch --help`.

#### `multi-sign`

The `multi-sign` command allows users to sign transactions that were generated offline by a multisig account.
```bash
simd tx multisign transaction.json k1k2k3 k1sig.json k2sig.json k3sig.json
```

Where `k1k2k3` is the multisig account address, `k1sig.json` is the signature of the first signer, `k2sig.json` is the signature of the second signer, and `k3sig.json` is the signature of the third signer.

More information about the `multi-sign` command can be found running `simd tx multi-sign --help`.

#### `multisign-batch`

The `multisign-batch` command works the same way as `sign-batch`, but for multisig accounts, with two differences: it requires all transactions to be in one file, and the `--append` flag does not exist.

More information about the `multisign-batch` command can be found running `simd tx multisign-batch --help`.

#### `validate-signatures`

The `validate-signatures` command allows users to validate the signatures of a signed transaction.

```bash
$ simd tx validate-signatures tx.signed.json
Signers:
  0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275

Signatures:
  0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275 [OK]
```

More information about the `validate-signatures` command can be found running `simd tx validate-signatures --help`.

#### `broadcast`

The `broadcast` command allows users to broadcast a signed transaction to the network.

```bash
simd tx broadcast tx.signed.json
```

More information about the `broadcast` command can be found running `simd tx broadcast --help`.

#### `aux-to-fee`

The `aux-to-fee` command includes the aux signer data in the tx, broadcasts the tx, and sends the tip amount to the broadcaster.
[Learn more about tip transactions](https://docs.cosmos.network/main/core/tips).

```bash
# simd tx bank send --aux (optional: --tip --tipper )
simd tx aux-to-fee tx.aux.signed.json
```

More information about the `aux-to-fee` command can be found running `simd tx aux-to-fee --help`.

### gRPC

A user can query the `auth` module using gRPC endpoints.
#### Account

The `account` endpoint allows users to query an account by its address.

```bash
cosmos.auth.v1beta1.Query/Account
```

Example:

```bash
grpcurl -plaintext \
  -d '{"address":"cosmos1.."}' \
  localhost:9090 \
  cosmos.auth.v1beta1.Query/Account
```

Example Output:

```bash expandable
{
  "account":{
    "@type":"/cosmos.auth.v1beta1.BaseAccount",
    "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2",
    "pubKey":{
      "@type":"/cosmos.crypto.secp256k1.PubKey",
      "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD"
    },
    "sequence":"1"
  }
}
```

#### Accounts

The `accounts` endpoint allows users to query all the available accounts.

```bash
cosmos.auth.v1beta1.Query/Accounts
```

Example:

```bash
grpcurl -plaintext \
  localhost:9090 \
  cosmos.auth.v1beta1.Query/Accounts
```

Example Output:

```bash expandable
{
  "accounts":[
    {
      "@type":"/cosmos.auth.v1beta1.BaseAccount",
      "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2",
      "pubKey":{
        "@type":"/cosmos.crypto.secp256k1.PubKey",
        "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD"
      },
      "sequence":"1"
    },
    {
      "@type":"/cosmos.auth.v1beta1.ModuleAccount",
      "baseAccount":{
        "address":"cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr",
        "accountNumber":"8"
      },
      "name":"transfer",
      "permissions":[
        "minter",
        "burner"
      ]
    },
    {
      "@type":"/cosmos.auth.v1beta1.ModuleAccount",
      "baseAccount":{
        "address":"cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh",
        "accountNumber":"4"
      },
      "name":"bonded_tokens_pool",
      "permissions":[
        "burner",
        "staking"
      ]
    },
    {
      "@type":"/cosmos.auth.v1beta1.ModuleAccount",
      "baseAccount":{
        "address":"cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r",
        "accountNumber":"5"
      },
      "name":"not_bonded_tokens_pool",
      "permissions":[
        "burner",
        "staking"
      ]
    },
    {
      "@type":"/cosmos.auth.v1beta1.ModuleAccount",
      "baseAccount":{
        "address":"cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn",
        "accountNumber":"6"
      },
      "name":"gov",
      "permissions":[
        "burner"
      ]
    },
    {
      "@type":"/cosmos.auth.v1beta1.ModuleAccount",
      "baseAccount":{
        "address":"cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl",
        "accountNumber":"3"
      },
      "name":"distribution"
    },
    {
      "@type":"/cosmos.auth.v1beta1.BaseAccount",
      "accountNumber":"1",
      "address":"cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j"
    },
    {
      "@type":"/cosmos.auth.v1beta1.ModuleAccount",
      "baseAccount":{
        "address":"cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q",
        "accountNumber":"7"
      },
      "name":"mint",
      "permissions":[
        "minter"
      ]
    },
    {
      "@type":"/cosmos.auth.v1beta1.ModuleAccount",
      "baseAccount":{
        "address":"cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta",
        "accountNumber":"2"
      },
      "name":"fee_collector"
    }
  ],
  "pagination":{
    "total":"9"
  }
}
```

#### Params

The `params` endpoint allows users to query the current auth parameters.

```bash
cosmos.auth.v1beta1.Query/Params
```

Example:

```bash
grpcurl -plaintext \
  localhost:9090 \
  cosmos.auth.v1beta1.Query/Params
```

Example Output:

```bash
{
  "params": {
    "maxMemoCharacters": "256",
    "txSigLimit": "7",
    "txSizeCostPerByte": "10",
    "sigVerifyCostEd25519": "590",
    "sigVerifyCostSecp256k1": "1000"
  }
}
```

### REST

A user can query the `auth` module using REST endpoints.

#### Account

The `account` endpoint allows users to query an account by its address.

```bash
/cosmos/auth/v1beta1/account?address={address}
```

#### Accounts

The `accounts` endpoint allows users to query all the available accounts.

```bash
/cosmos/auth/v1beta1/accounts
```

#### Params

The `params` endpoint allows users to query the current auth parameters.
+ +```bash +/cosmos/auth/v1beta1/params +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/auth/tx.mdx b/docs/sdk/v0.47/documentation/module-system/modules/auth/tx.mdx new file mode 100644 index 00000000..b4f98020 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/auth/tx.mdx @@ -0,0 +1,537 @@ +--- +title: '`x/auth/tx`' +--- + + + +### Pre-requisite Readings + +* [Transactions](https://docs.cosmos.network/main/core/transactions#transaction-generation) +* [Encoding](https://docs.cosmos.network/main/core/encoding#transaction-encoding) + + + +## Abstract + +This document specifies the `x/auth/tx` package of the Cosmos SDK. + +This package represents the Cosmos SDK implementation of the `client.TxConfig`, `client.TxBuilder`, `client.TxEncoder` and `client.TxDecoder` interfaces. + +## Contents + +* [Transactions](#transactions) + * [`TxConfig`](#txconfig) + * [`TxBuilder`](#txbuilder) + * [`TxEncoder`/ `TxDecoder`](#txencoder-txdecoder) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + +## Transactions + +### `TxConfig` + +`client.TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. +The interface defines a set of methods for creating a `client.TxBuilder`. 
```go expandable
package client

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/tx"
	signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing"
	"github.com/cosmos/cosmos-sdk/x/auth/signing"
)

type (
	/ TxEncodingConfig defines an interface that contains transaction
	/ encoders and decoders
	TxEncodingConfig interface {
		TxEncoder() sdk.TxEncoder
		TxDecoder() sdk.TxDecoder
		TxJSONEncoder() sdk.TxEncoder
		TxJSONDecoder() sdk.TxDecoder
		MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error)
		UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error)
	}

	/ TxConfig defines an interface a client can utilize to generate an
	/ application-defined concrete transaction type. The type returned must
	/ implement TxBuilder.
	TxConfig interface {
		TxEncodingConfig

		NewTxBuilder() TxBuilder
		WrapTxBuilder(sdk.Tx) (TxBuilder, error)
		SignModeHandler() signing.SignModeHandler
	}

	/ TxBuilder defines an interface which an application-defined concrete transaction
	/ type must implement. Namely, it must be able to set messages, generate
	/ signatures, and provide canonical bytes to sign over. The transaction must
	/ also know how to encode itself.
	TxBuilder interface {
		GetTx() signing.Tx

		SetMsgs(msgs ...sdk.Msg) error
		SetSignatures(signatures ...signingtypes.SignatureV2) error
		SetMemo(memo string)
		SetFeeAmount(amount sdk.Coins)
		SetFeePayer(feePayer sdk.AccAddress)
		SetGasLimit(limit uint64)
		SetTip(tip *tx.Tip)
		SetTimeoutHeight(height uint64)
		SetFeeGranter(feeGranter sdk.AccAddress)
		AddAuxSignerData(tx.AuxSignerData) error
	}
)
```

The default implementation of `client.TxConfig` is instantiated by `NewTxConfig` in the `x/auth/tx` module.
```go expandable
package tx

import (
	"fmt"

	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing"
	"github.com/cosmos/cosmos-sdk/x/auth/signing"
)

type config struct {
	handler     signing.SignModeHandler
	decoder     sdk.TxDecoder
	encoder     sdk.TxEncoder
	jsonDecoder sdk.TxDecoder
	jsonEncoder sdk.TxEncoder
	protoCodec  codec.ProtoCodecMarshaler
}

/ NewTxConfig returns a new protobuf TxConfig using the provided ProtoCodec and sign modes. The
/ first enabled sign mode will become the default sign mode.
/ NOTE: Use NewTxConfigWithHandler to provide a custom signing handler in case the sign mode
/ is not supported by default (eg: SignMode_SIGN_MODE_EIP_191).
func NewTxConfig(protoCodec codec.ProtoCodecMarshaler, enabledSignModes []signingtypes.SignMode) client.TxConfig {
	return NewTxConfigWithHandler(protoCodec, makeSignModeHandler(enabledSignModes))
}

/ NewTxConfigWithHandler returns a new protobuf TxConfig using the provided ProtoCodec and signing handler.
func NewTxConfigWithHandler(protoCodec codec.ProtoCodecMarshaler, handler signing.SignModeHandler) client.TxConfig {
	return &config{
		handler:     handler,
		decoder:     DefaultTxDecoder(protoCodec),
		encoder:     DefaultTxEncoder(),
		jsonDecoder: DefaultJSONTxDecoder(protoCodec),
		jsonEncoder: DefaultJSONTxEncoder(protoCodec),
		protoCodec:  protoCodec,
	}
}

func (g config) NewTxBuilder() client.TxBuilder {
	return newBuilder(g.protoCodec)
}

/ WrapTxBuilder returns a builder from the provided transaction
func (g config) WrapTxBuilder(newTx sdk.Tx) (client.TxBuilder, error) {
	newBuilder, ok := newTx.(*wrapper)
	if !ok {
		return nil, fmt.Errorf("expected %T, got %T", &wrapper{}, newTx)
	}

	return newBuilder, nil
}

func (g config) SignModeHandler() signing.SignModeHandler {
	return g.handler
}

func (g config) TxEncoder() sdk.TxEncoder {
	return g.encoder
}

func (g config) TxDecoder() sdk.TxDecoder {
	return g.decoder
}

func (g config) TxJSONEncoder() sdk.TxEncoder {
	return g.jsonEncoder
}

func (g config) TxJSONDecoder() sdk.TxDecoder {
	return g.jsonDecoder
}
```

### `TxBuilder`

```go expandable
package client

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/tx"
	signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing"
	"github.com/cosmos/cosmos-sdk/x/auth/signing"
)

type (
	/ TxEncodingConfig defines an interface that contains transaction
	/ encoders and decoders
	TxEncodingConfig interface {
		TxEncoder() sdk.TxEncoder
		TxDecoder() sdk.TxDecoder
		TxJSONEncoder() sdk.TxEncoder
		TxJSONDecoder() sdk.TxDecoder
		MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error)
		UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error)
	}

	/ TxConfig defines an interface a client can utilize to generate an
	/ application-defined concrete transaction type. The type returned must
	/ implement TxBuilder.
	TxConfig interface {
		TxEncodingConfig

		NewTxBuilder() TxBuilder
		WrapTxBuilder(sdk.Tx) (TxBuilder, error)
		SignModeHandler() signing.SignModeHandler
	}

	/ TxBuilder defines an interface which an application-defined concrete transaction
	/ type must implement. Namely, it must be able to set messages, generate
	/ signatures, and provide canonical bytes to sign over. The transaction must
	/ also know how to encode itself.
	TxBuilder interface {
		GetTx() signing.Tx

		SetMsgs(msgs ...sdk.Msg) error
		SetSignatures(signatures ...signingtypes.SignatureV2) error
		SetMemo(memo string)
		SetFeeAmount(amount sdk.Coins)
		SetFeePayer(feePayer sdk.AccAddress)
		SetGasLimit(limit uint64)
		SetTip(tip *tx.Tip)
		SetTimeoutHeight(height uint64)
		SetFeeGranter(feeGranter sdk.AccAddress)
		AddAuxSignerData(tx.AuxSignerData) error
	}
)
```

The [`client.TxBuilder`](https://docs.cosmos.network/main/core/transactions#transaction-generation) interface is also implemented by `x/auth/tx`.
A `client.TxBuilder` can be accessed with `TxConfig.NewTxBuilder()`.

### `TxEncoder`/ `TxDecoder`

More information about `TxEncoder` and `TxDecoder` can be found [here](https://docs.cosmos.network/main/core/encoding#transaction-encoding).

## Client

### CLI

#### Query

The `x/auth/tx` module provides a CLI command to query any transaction, given its hash, transaction sequence, or signature.

Without any argument, the command will query the transaction using the transaction hash.
```shell
simd query tx DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
```

When querying a transaction from an account given its sequence, use the `--type=acc_seq` flag:

```shell
simd query tx --type=acc_seq cosmos1u69uyr6v9qwe6zaaeaqly2h6wnedac0xpxq325/1
```

When querying a transaction given its signature, use the `--type=signature` flag:

```shell
simd query tx --type=signature Ofjvgrqi8twZfqVDmYIhqwRLQjZZ40XbxEamk/veH3gQpRF0hL2PH4ejRaDzAX+2WChnaWNQJQ41ekToIi5Wqw==
```

When querying transactions by their events, use the `--events` flag:

```shell
simd query txs --events 'message.sender=cosmos...' --page 1 --limit 30
```

The `x/auth/block` module provides a CLI command to query any block, given its hash, height, or events.

When querying a block by its hash, use the `--type=hash` flag:

```shell
simd query block --type=hash DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
```

When querying a block by its height, use the `--type=height` flag:

```shell
simd query block --type=height 1357
```

When querying blocks by their events, use the `--query` flag:

```shell
simd query blocks --query 'message.sender=cosmos...' --page 1 --limit 30
```

#### Transactions

The `x/auth/tx` module provides a convenient CLI command for decoding and encoding transactions.

#### `encode`

The `encode` command encodes a transaction created with the `--generate-only` flag or signed with the `sign` command.
The transaction is serialized to Protobuf and returned as base64.

```bash
$ simd tx encode tx.json
Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
$ simd tx encode tx.signed.json
```

More information about the `encode` command can be found running `simd tx encode --help`.
#### `decode`

The `decode` command decodes a transaction encoded with the `encode` command.

```bash
simd tx decode Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
```

More information about the `decode` command can be found running `simd tx decode --help`.

### gRPC

A user can query the `x/auth/tx` module using gRPC endpoints.

#### `TxDecode`

The `TxDecode` endpoint allows users to decode a transaction.

```shell
cosmos.tx.v1beta1.Service/TxDecode
```

Example:

```shell
grpcurl -plaintext \
  -d '{"tx_bytes":"Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="}' \
  localhost:9090 \
  cosmos.tx.v1beta1.Service/TxDecode
```

Example Output:

```json expandable
{
  "tx": {
    "body": {
      "messages": [
        {
          "@type": "/cosmos.bank.v1beta1.MsgSend",
          "amount": [
            {
              "denom": "stake",
              "amount": "100"
            }
          ],
          "fromAddress": "cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275",
          "toAddress": "cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"
        }
      ]
    },
    "authInfo": {
      "fee": {
        "gasLimit": "200000"
      }
    }
  }
}
```

#### `TxEncode`

The `TxEncode` endpoint allows users to encode a transaction.
```shell
cosmos.tx.v1beta1.Service/TxEncode
```

Example:

```shell expandable
grpcurl -plaintext \
  -d '{"tx": {
    "body": {
      "messages": [
        {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100"}],"fromAddress":"cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275","toAddress":"cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"}
      ]
    },
    "authInfo": {
      "fee": {
        "gasLimit": "200000"
      }
    }
  }}' \
  localhost:9090 \
  cosmos.tx.v1beta1.Service/TxEncode
```

Example Output:

```json
{
  "txBytes": "Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="
}
```

#### `TxDecodeAmino`

The `TxDecodeAmino` endpoint allows users to decode an amino-encoded transaction.

```shell
cosmos.tx.v1beta1.Service/TxDecodeAmino
```

Example:

```shell
grpcurl -plaintext \
  -d '{"amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy"}' \
  localhost:9090 \
  cosmos.tx.v1beta1.Service/TxDecodeAmino
```

Example Output:

```json
{
  "aminoJson": "{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"
}
```

#### `TxEncodeAmino`

The `TxEncodeAmino` endpoint allows users to encode an amino-encoded transaction.
+ +```shell +cosmos.tx.v1beta1.Service/TxEncodeAmino +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"amino_json":"{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/TxEncodeAmino +``` + +Example Output: + +```json +{ + "amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy" +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/auth/vesting.mdx b/docs/sdk/v0.47/documentation/module-system/modules/auth/vesting.mdx new file mode 100644 index 00000000..0cef59d3 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/auth/vesting.mdx @@ -0,0 +1,752 @@ +--- +title: '`x/auth/vesting`' +--- + +* [Intro and Requirements](#intro-and-requirements) +* [Note](#note) +* [Vesting Account Types](#vesting-account-types) + * [BaseVestingAccount](#basevestingaccount) + * [ContinuousVestingAccount](#continuousvestingaccount) + * [DelayedVestingAccount](#delayedvestingaccount) + * [Period](#period) + * [PeriodicVestingAccount](#periodicvestingaccount) + * [PermanentLockedAccount](#permanentlockedaccount) +* [Vesting Account Specification](#vesting-account-specification) + * [Determining Vesting & Vested Amounts](#determining-vesting--vested-amounts) + * [Periodic Vesting Accounts](#periodic-vesting-accounts) + * [Transferring/Sending](#transferringsending) + * [Delegating](#delegating) + * [Undelegating](#undelegating) +* [Keepers & Handlers](#keepers--handlers) +* 
[Genesis Initialization](#genesis-initialization)
* [Examples](#examples)
  * [Simple](#simple)
  * [Slashing](#slashing)
  * [Periodic Vesting](#periodic-vesting)
* [Glossary](#glossary)

## Intro and Requirements

This specification defines the vesting account implementation that is used by the Cosmos Hub. The requirements for this vesting account are that it should be initialized during genesis with a starting balance `X` and a vesting end time `ET`. A vesting account may be initialized with a vesting start time `ST` and a number of vesting periods `P`. If a vesting start time is included, the vesting period does not begin until the start time is reached. If vesting periods are included, the vesting occurs over the specified number of periods.

For all vesting accounts, the owner of the vesting account is able to delegate and undelegate from validators; however, they cannot transfer coins to another account until those coins are vested. This specification allows for four different kinds of vesting:

* Delayed vesting, where all coins are vested once `ET` is reached.
* Continuous vesting, where coins begin to vest at `ST` and vest linearly with respect to time until `ET` is reached.
* Periodic vesting, where coins begin to vest at `ST` and vest periodically according to the number of periods and the vesting amount per period. The number of periods, length per period, and amount per period are configurable. A periodic vesting account is distinguished from a continuous vesting account in that coins can be released in staggered tranches. For example, a periodic vesting account could be used for vesting arrangements where coins are released quarterly, yearly, or over any other function of tokens over time.
* Permanent locked vesting, where coins are locked forever. Coins in this account can still be used for delegating and for governance votes even while locked.

## Note

Vesting accounts can be initialized with some vesting and non-vesting coins.
The non-vesting coins would be immediately transferable. DelayedVesting, ContinuousVesting, PeriodicVesting and PermanentLocked accounts can be created with normal messages after genesis. Other types of vesting accounts must be created at genesis, or as part of a manual network upgrade. The current specification only allows for *unconditional* vesting (i.e. there is no possibility of reaching `ET` and having coins fail to vest).

## Vesting Account Types

```go expandable
/ VestingAccount defines an interface that any vesting account type must
/ implement.
type VestingAccount interface {
	Account

	GetVestedCoins(Time) Coins
	GetVestingCoins(Time) Coins

	/ TrackDelegation performs internal vesting accounting necessary when
	/ delegating from a vesting account. It accepts the current block time, the
	/ delegation amount and balance of all coins whose denomination exists in
	/ the account's original vesting balance.
	TrackDelegation(Time, Coins, Coins)

	/ TrackUndelegation performs internal vesting accounting necessary when a
	/ vesting account performs an undelegation.
	TrackUndelegation(Coins)

	GetStartTime() int64
	GetEndTime() int64
}
```

### BaseVestingAccount

```protobuf
// BaseVestingAccount implements the VestingAccount interface. It contains all
// the necessary fields needed for any vesting account implementation.
+message BaseVestingAccount { + option (amino.name) = "cosmos-sdk/BaseVestingAccount"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + cosmos.auth.v1beta1.BaseAccount base_account = 1 [(gogoproto.embed) = true]; + repeated cosmos.base.v1beta1.Coin original_vesting = 2 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + repeated cosmos.base.v1beta1.Coin delegated_free = 3 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + repeated cosmos.base.v1beta1.Coin delegated_vesting = 4 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + int64 end_time = 5; +} +``` + +### ContinuousVestingAccount + +```protobuf +// ContinuousVestingAccount implements the VestingAccount interface. It +// continuously vests by unlocking coins linearly with respect to time. +message ContinuousVestingAccount { + option (amino.name) = "cosmos-sdk/ContinuousVestingAccount"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true]; + int64 start_time = 2; +} +``` + +### DelayedVestingAccount + +```protobuf +// DelayedVestingAccount implements the VestingAccount interface. It vests all +// coins after a specific time, but non prior. In other words, it keeps them +// locked until a specified time. +message DelayedVestingAccount { + option (amino.name) = "cosmos-sdk/DelayedVestingAccount"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true]; +} +``` + +### Period + +```protobuf +// Period defines a length of time and amount of coins that will vest. 
message Period {
  option (gogoproto.goproto_stringer) = false;

  int64 length = 1;
  repeated cosmos.base.v1beta1.Coin amount = 2 [
    (gogoproto.nullable) = false,
    (amino.dont_omitempty) = true,
    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
  ];
}
```

```go
/ Stores all vesting periods passed as part of a PeriodicVestingAccount
type Periods []Period
```

### PeriodicVestingAccount

```protobuf
// PeriodicVestingAccount implements the VestingAccount interface. It
// periodically vests by unlocking coins during each specified period.
message PeriodicVestingAccount {
  option (amino.name) = "cosmos-sdk/PeriodicVestingAccount";
  option (gogoproto.goproto_getters) = false;
  option (gogoproto.goproto_stringer) = false;

  BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true];
  int64 start_time = 2;
  repeated Period vesting_periods = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}
```

In order to facilitate less ad-hoc type checking and assertions and to support flexibility in account balance usage, the existing `x/bank` `ViewKeeper` interface is updated to contain the following:

```go
type ViewKeeper interface {
	/ ...

	/ Calculates the total locked account balance.
	LockedCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins

	/ Calculates the total spendable balance that can be sent to other accounts.
	SpendableCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins
}
```

### PermanentLockedAccount

```protobuf
// PermanentLockedAccount implements the VestingAccount interface. It does
// not ever release coins, locking them indefinitely. Coins in this account can
// still be used for delegating and for governance votes even while locked.
//
// Since: cosmos-sdk 0.43
message PermanentLockedAccount {
  option (amino.name) = "cosmos-sdk/PermanentLockedAccount";
  option (gogoproto.goproto_getters) = false;
  option (gogoproto.goproto_stringer) = false;

  BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true];
}
```

## Vesting Account Specification

Given a vesting account, we define the following for the proceeding operations:

* `OV`: The original vesting coin amount. It is a constant value.
* `V`: The number of `OV` coins that are still *vesting*. It is derived from `OV`, `StartTime` and `EndTime`. This value is computed on demand and not on a per-block basis.
* `V'`: The number of `OV` coins that are *vested* (unlocked). This value is computed on demand and not on a per-block basis.
* `DV`: The number of delegated *vesting* coins. It is a variable value. It is stored and modified directly in the vesting account.
* `DF`: The number of delegated *vested* (unlocked) coins. It is a variable value. It is stored and modified directly in the vesting account.
* `BC`: The number of `OV` coins less any coins that are transferred (which can be negative or delegated). It is considered to be the balance of the embedded base account. It is stored and modified directly in the vesting account.

### Determining Vesting & Vested Amounts

It is important to note that these values are computed on demand and not on a mandatory per-block basis (e.g. `BeginBlocker` or `EndBlocker`).

#### Continuously Vesting Accounts

To determine the amount of coins that are vested for a given block time `T`, the following is performed:

1. Compute `X := T - StartTime`
2. Compute `Y := EndTime - StartTime`
3. Compute `V' := OV * (X / Y)`
4. Compute `V := OV - V'`

Thus, the total amount of *vested* coins is `V'` and the remaining amount, `V`, is *vesting*.
+ +```go expandable +func (cva ContinuousVestingAccount) + +GetVestedCoins(t Time) + +Coins { + if t <= cva.StartTime { + / We must handle the case where the start time for a vesting account has + / been set into the future or when the start of the chain is not exactly + / known. + return ZeroCoins +} + +else if t >= cva.EndTime { + return cva.OriginalVesting +} + x := t - cva.StartTime + y := cva.EndTime - cva.StartTime + + return cva.OriginalVesting * (x / y) +} + +func (cva ContinuousVestingAccount) + +GetVestingCoins(t Time) + +Coins { + return cva.OriginalVesting - cva.GetVestedCoins(t) +} +``` + +### Periodic Vesting Accounts + +Periodic vesting accounts require calculating the coins released during each period for a given block time `T`. Note that multiple periods could have passed when calling `GetVestedCoins`, so we must iterate over each period until the end of that period is after `T`. + +1. Set `CT := StartTime` +2. Set `V' := 0` + +For each Period P: + +1. Compute `X := T - CT` +2. IF `X >= P.Length` + 1. Compute `V' += P.Amount` + 2. Compute `CT += P.Length` + 3. ELSE break +3. Compute `V := OV - V'` + +```go expandable +func (pva PeriodicVestingAccount) + +GetVestedCoins(t Time) + +Coins { + if t < pva.StartTime { + return ZeroCoins +} + ct := pva.StartTime / The start of the vesting schedule + vested := 0 + periods = pva.GetPeriods() + for _, period := range periods { + if t - ct < period.Length { + break +} + +vested += period.Amount + ct += period.Length / increment ct to the start of the next vesting period +} + +return vested +} + +func (pva PeriodicVestingAccount) + +GetVestingCoins(t Time) + +Coins { + return pva.OriginalVesting - cva.GetVestedCoins(t) +} +``` + +#### Delayed/Discrete Vesting Accounts + +Delayed vesting accounts are easier to reason about as they only have the full amount vesting up until a certain time, then all the coins become vested (unlocked). This does not include any unlocked coins the account may have initially. 
+ +```go expandable +func (dva DelayedVestingAccount) + +GetVestedCoins(t Time) + +Coins { + if t >= dva.EndTime { + return dva.OriginalVesting +} + +return ZeroCoins +} + +func (dva DelayedVestingAccount) + +GetVestingCoins(t Time) + +Coins { + return dva.OriginalVesting - dva.GetVestedCoins(t) +} +``` + +### Transferring/Sending + +At any given time, a vesting account may transfer: `min((BC + DV) - V, BC)`. + +In other words, a vesting account may transfer the minimum of the base account balance and the base account balance plus the number of currently delegated vesting coins less the number of coins vested so far. + +However, given that account balances are tracked via the `x/bank` module and that we want to avoid loading the entire account balance, we can instead determine the locked balance, which can be defined as `max(V - DV, 0)`, and infer the spendable balance from that. + +```go +func (va VestingAccount) + +LockedCoins(t Time) + +Coins { + return max(va.GetVestingCoins(t) - va.DelegatedVesting, 0) +} +``` + +The `x/bank` `ViewKeeper` can then provide APIs to determine locked and spendable coins for any account: + +```go expandable +func (k Keeper) + +LockedCoins(ctx Context, addr AccAddress) + +Coins { + acc := k.GetAccount(ctx, addr) + if acc != nil { + if acc.IsVesting() { + return acc.LockedCoins(ctx.BlockTime()) +} + +} + + / non-vesting accounts do not have any locked coins + return NewCoins() +} +``` + +#### Keepers/Handlers + +The corresponding `x/bank` keeper should appropriately handle sending coins based on if the account is a vesting account or not. + +```go expandable +func (k Keeper) + +SendCoins(ctx Context, from Account, to Account, amount Coins) { + bc := k.GetBalances(ctx, from) + v := k.LockedCoins(ctx, from) + spendable := bc - v + newCoins := spendable - amount + assert(newCoins >= 0) + +from.SetBalance(newCoins) + +to.AddBalance(amount) + + / save balances... 
+}
+```
+
+### Delegating
+
+For a vesting account attempting to delegate `D` coins, the following is performed:
+
+1. Verify `BC >= D > 0`
+2. Compute `X := min(max(V - DV, 0), D)` (portion of `D` that is vesting)
+3. Compute `Y := D - X` (portion of `D` that is free)
+4. Set `DV += X`
+5. Set `DF += Y`
+
+```go
+func (va VestingAccount) TrackDelegation(t Time, balance Coins, amount Coins) {
+	assert(amount <= balance)
+	x := min(max(va.GetVestingCoins(t)-va.DelegatedVesting, 0), amount)
+	y := amount - x
+
+	va.DelegatedVesting += x
+	va.DelegatedFree += y
+}
+```
+
+**Note**: `TrackDelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by subtracting `amount`.
+
+#### Keepers/Handlers
+
+```go
+func DelegateCoins(t Time, from Account, amount Coins) {
+	if isVesting(from) {
+		from.TrackDelegation(t, from.Balance, amount)
+	} else {
+		from.SetBalance(from.Balance - amount)
+	}
+
+	// save account...
+}
+```
+
+### Undelegating
+
+For a vesting account attempting to undelegate `D` coins, the following is performed:
+
+> NOTE: `DV < D` and `(DV + DF) < D` may be possible due to quirks in the rounding of delegation/undelegation logic.
+
+1. Verify `D > 0`
+2. Compute `X := min(DF, D)` (portion of `D` that should become free, prioritizing free coins)
+3. Compute `Y := min(DV, D - X)` (portion of `D` that should remain vesting)
+4. Set `DF -= X`
+5. Set `DV -= Y`
+
+```go
+func (cva ContinuousVestingAccount) TrackUndelegation(amount Coins) {
+	x := min(cva.DelegatedFree, amount)
+	y := amount - x
+
+	cva.DelegatedFree -= x
+	cva.DelegatedVesting -= y
+}
+```
+
+**Note**: `TrackUndelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by adding `amount`.
+
+**Note**: If a delegation is slashed, the continuous vesting account ends up with an excess `DV` amount, even after all its coins have vested. This is because undelegating free coins are prioritized.
+
+**Note**: The undelegation (bond refund) amount may exceed the delegated vesting (bond) amount due to the way undelegation truncates the bond refund, which can increase the validator's exchange rate (tokens/shares) slightly if the undelegated tokens are non-integral.
+
+#### Keepers/Handlers
+
+```go expandable
+func UndelegateCoins(to Account, amount Coins) {
+	if isVesting(to) {
+		if to.DelegatedFree+to.DelegatedVesting >= amount {
+			to.TrackUndelegation(amount)
+			// save account ...
+		}
+	} else {
+		AddBalance(to, amount)
+		// save account...
+	}
+}
+```
+
+## Keepers & Handlers
+
+The `VestingAccount` implementations reside in `x/auth`. However, any keeper in a module (e.g. staking in `x/staking`) wishing to potentially utilize any vesting coins must call explicit methods on the `x/bank` keeper (e.g. `DelegateCoins`) as opposed to `SendCoins` and `SubtractCoins`.
+
+In addition, the vesting account should also be able to spend any coins it receives from other users. Thus, the bank module's `MsgSend` handler should error if a vesting account is trying to send an amount that exceeds their unlocked coin amount.
+
+See the above specification for full implementation details.
+
+## Genesis Initialization
+
+To initialize both vesting and non-vesting accounts, the `GenesisAccount` struct includes new fields: `Vesting`, `StartTime`, and `EndTime`. Accounts meant to be of type `BaseAccount` or any non-vesting type have `Vesting = false`. The genesis initialization logic (e.g. `initFromGenesisState`) must parse and return the correct accounts accordingly based on these fields.
+
+```go expandable
+type GenesisAccount struct {
+    / ...
+ + / vesting account fields + OriginalVesting sdk.Coins `json:"original_vesting"` + DelegatedFree sdk.Coins `json:"delegated_free"` + DelegatedVesting sdk.Coins `json:"delegated_vesting"` + StartTime int64 `json:"start_time"` + EndTime int64 `json:"end_time"` +} + +func ToAccount(gacc GenesisAccount) + +Account { + bacc := NewBaseAccount(gacc) + if gacc.OriginalVesting > 0 { + if ga.StartTime != 0 && ga.EndTime != 0 { + / return a continuous vesting account +} + +else if ga.EndTime != 0 { + / return a delayed vesting account +} + +else { + / invalid genesis vesting account provided + panic() +} + +} + +return bacc +} +``` + +## Examples + +### Simple + +Given a continuous vesting account with 10 vesting coins. + +```text +OV = 10 +DF = 0 +DV = 0 +BC = 10 +V = 10 +V' = 0 +``` + +1. Immediately receives 1 coin + + ```text + BC = 11 + ``` + +2. Time passes, 2 coins vest + + ```text + V = 8 + V' = 2 + ``` + +3. Delegates 4 coins to validator A + + ```text + DV = 4 + BC = 7 + ``` + +4. Sends 3 coins + + ```text + BC = 4 + ``` + +5. More time passes, 2 more coins vest + + ```text + V = 6 + V' = 4 + ``` + +6. Sends 2 coins. At this point the account cannot send anymore until further + coins vest or it receives additional coins. It can still however, delegate. + + ```text + BC = 2 + ``` + +### Slashing + +Same initial starting conditions as the simple example. + +1. Time passes, 5 coins vest + + ```text + V = 5 + V' = 5 + ``` + +2. Delegate 5 coins to validator A + + ```text + DV = 5 + BC = 5 + ``` + +3. Delegate 5 coins to validator B + + ```text + DF = 5 + BC = 0 + ``` + +4. Validator A gets slashed by 50%, making the delegation to A now worth 2.5 coins + +5. Undelegate from validator A (2.5 coins) + + ```text + DF = 5 - 2.5 = 2.5 + BC = 0 + 2.5 = 2.5 + ``` + +6. Undelegate from validator B (5 coins). The account at this point can only + send 2.5 coins unless it receives more coins or until more coins vest. + It can still however, delegate. 
+ + ```text + DV = 5 - 2.5 = 2.5 + DF = 2.5 - 2.5 = 0 + BC = 2.5 + 5 = 7.5 + ``` + + Notice how we have an excess amount of `DV`. + +### Periodic Vesting + +A vesting account is created where 100 tokens will be released over 1 year, with +1/4 of tokens vesting each quarter. The vesting schedule would be as follows: + +```yaml +Periods: +- amount: 25stake, length: 7884000 +- amount: 25stake, length: 7884000 +- amount: 25stake, length: 7884000 +- amount: 25stake, length: 7884000 +``` + +```text +OV = 100 +DF = 0 +DV = 0 +BC = 100 +V = 100 +V' = 0 +``` + +1. Immediately receives 1 coin + + ```text + BC = 101 + ``` + +2. Vesting period 1 passes, 25 coins vest + + ```text + V = 75 + V' = 25 + ``` + +3. During vesting period 2, 5 coins are transfered and 5 coins are delegated + + ```text + DV = 5 + BC = 91 + ``` + +4. Vesting period 2 passes, 25 coins vest + + ```text + V = 50 + V' = 50 + ``` + +## Glossary + +* OriginalVesting: The amount of coins (per denomination) that are initially + part of a vesting account. These coins are set at genesis. +* StartTime: The BFT time at which a vesting account starts to vest. +* EndTime: The BFT time at which a vesting account is fully vested. +* DelegatedFree: The tracked amount of coins (per denomination) that are + delegated from a vesting account that have been fully vested at time of delegation. +* DelegatedVesting: The tracked amount of coins (per denomination) that are + delegated from a vesting account that were vesting at time of delegation. +* ContinuousVestingAccount: A vesting account implementation that vests coins + linearly over time. +* DelayedVestingAccount: A vesting account implementation that only fully vests + all coins at a given time. +* PeriodicVestingAccount: A vesting account implementation that vests coins + according to a custom vesting schedule. +* PermanentLockedAccount: It does not ever release coins, locking them indefinitely. 
+  Coins in this account can still be used for delegating and for governance votes even while locked.
+
+## CLI
+
+A user can query and interact with the `vesting` module using the CLI.
+
+### Transactions
+
+The `tx` commands allow users to interact with the `vesting` module.
+
+```bash
+simd tx vesting --help
+```
+
+#### create-periodic-vesting-account
+
+The `create-periodic-vesting-account` command creates a new vesting account funded with an allocation of tokens, specified as a sequence of coin amounts and period lengths in seconds. Periods are sequential, in that the duration of a period only starts at the end of the previous period. The duration of the first period starts upon account creation.
+
+```bash
+simd tx vesting create-periodic-vesting-account [to_address] [periods_json_file] [flags]
+```
+
+Example:
+
+```bash
+simd tx vesting create-periodic-vesting-account cosmos1.. periods.json
+```
+
+#### create-vesting-account
+
+The `create-vesting-account` command creates a new vesting account funded with an allocation of tokens. The account can either be a delayed or continuous vesting account, which is determined by the `--delayed` flag. All vesting accounts created will have their start time set to the committed block's time. The end\_time must be provided as a UNIX epoch timestamp.
+
+```bash
+simd tx vesting create-vesting-account [to_address] [amount] [end_time] [flags]
+```
+
+Example:
+
+```bash
+simd tx vesting create-vesting-account cosmos1.. 
100stake 2592000
+```
diff --git a/docs/sdk/v0.47/documentation/module-system/modules/authz/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/authz/README.mdx
new file mode 100644
index 00000000..2f9b66ad
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/modules/authz/README.mdx
@@ -0,0 +1,1408 @@
+---
+title: '`x/authz`'
+---
+
+## Abstract
+
+`x/authz` is an implementation of a Cosmos SDK module, per [ADR 30](docs/sdk/next/documentation/legacy/adr-comprehensive), that allows
+granting arbitrary privileges from one account (the granter) to another account (the grantee). Authorizations must be granted for a particular Msg service method one by one using an implementation of the `Authorization` interface.
+
+## Contents
+
+* [Concepts](#concepts)
+  * [Authorization and Grant](#authorization-and-grant)
+  * [Built-in Authorizations](#built-in-authorizations)
+  * [Gas](#gas)
+* [State](#state)
+  * [Grant](#grant)
+  * [GrantQueue](#grantqueue)
+* [Messages](#messages)
+  * [MsgGrant](#msggrant)
+  * [MsgRevoke](#msgrevoke)
+  * [MsgExec](#msgexec)
+* [Events](#events)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+  * [REST](#rest)
+
+## Concepts
+
+### Authorization and Grant
+
+The `x/authz` module defines interfaces and messages to grant authorizations to perform actions
+on behalf of one account to other accounts. The design is defined in [ADR 030](docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+A *grant* is an allowance to execute a Msg by the grantee on behalf of the granter.
+Authorization is an interface that must be implemented by a concrete authorization logic to validate and execute grants. Authorizations are extensible and can be defined for any Msg service method, even outside of the module where the Msg method is defined. See the `SendAuthorization` example in the next section for more details.
+ +**Note:** The authz module is different from the [auth (authentication)](docs/sdk/v0.47/documentation/module-system/modules/auth/README) module that is responsible for specifying the base transaction and account types. + +```go expandable +package authz + +import ( + + "github.com/cosmos/gogoproto/proto" + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ Authorization represents the interface of various Authorization types implemented +/ by other modules. +type Authorization interface { + proto.Message + + / MsgTypeURL returns the fully-qualified Msg service method URL (as described in ADR 031), + / which will process and accept or reject a request. + MsgTypeURL() + +string + + / Accept determines whether this grant permits the provided sdk.Msg to be performed, + / and if so provides an upgraded authorization instance. + Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) + + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} + +/ AcceptResponse instruments the controller of an authz message if the request is accepted +/ and if it should be updated or deleted. +type AcceptResponse struct { + / If Accept=true, the controller can accept and authorization and handle the update. + Accept bool + / If Delete=true, the controller must delete the authorization object and release + / storage resources. + Delete bool + / Controller, who is calling Authorization.Accept must check if `Updated != nil`. If yes, + / it must use the updated version and handle the update on the storage level. + Updated Authorization +} +``` + +### Built-in Authorizations + +The Cosmos SDK `x/authz` module comes with following authorization types: + +#### GenericAuthorization + +`GenericAuthorization` implements the `Authorization` interface that gives unrestricted permission to execute the provided Msg on behalf of granter's account. 
+ +```protobuf +// GenericAuthorization gives the grantee unrestricted permissions to execute +// the provided method on behalf of the granter's account. +message GenericAuthorization { + option (amino.name) = "cosmos-sdk/GenericAuthorization"; + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + + // Msg, identified by it's type URL, to grant unrestricted permissions to execute + string msg = 1; +} +``` + +```go expandable +package authz + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var _ Authorization = &GenericAuthorization{ +} + +/ NewGenericAuthorization creates a new GenericAuthorization object. +func NewGenericAuthorization(msgTypeURL string) *GenericAuthorization { + return &GenericAuthorization{ + Msg: msgTypeURL, +} +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a GenericAuthorization) + +MsgTypeURL() + +string { + return a.Msg +} + +/ Accept implements Authorization.Accept. +func (a GenericAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) { + return AcceptResponse{ + Accept: true +}, nil +} + +/ ValidateBasic implements Authorization.ValidateBasic. +func (a GenericAuthorization) + +ValidateBasic() + +error { + return nil +} +``` + +* `msg` stores Msg type URL. + +#### SendAuthorization + +`SendAuthorization` implements the `Authorization` interface for the `cosmos.bank.v1beta1.MsgSend` Msg. + +* It takes a (positive) `SpendLimit` that specifies the maximum amount of tokens the grantee can spend. The `SpendLimit` is updated as the tokens are spent. +* It takes an (optional) `AllowList` that specifies to which addresses a grantee can send token. + +```protobuf +// SendAuthorization allows the grantee to spend up to spend_limit coins from +// the granter's account. 
+// +// Since: cosmos-sdk 0.43 +message SendAuthorization { + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + option (amino.name) = "cosmos-sdk/SendAuthorization"; + + repeated cosmos.base.v1beta1.Coin spend_limit = 1 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + + // allow_list specifies an optional list of addresses to whom the grantee can send tokens on behalf of the + // granter. If omitted, any recipient is allowed. + // + // Since: cosmos-sdk 0.47 + repeated string allow_list = 2; +} +``` + +```go expandable +package types + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have proper gas fee framework. +/ Ref: https://github.com/cosmos/cosmos-sdk/issues/9054 +/ Ref: https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(10) + +var _ authz.Authorization = &SendAuthorization{ +} + +/ NewSendAuthorization creates a new SendAuthorization object. +func NewSendAuthorization(spendLimit sdk.Coins, allowed []sdk.AccAddress) *SendAuthorization { + return &SendAuthorization{ + AllowList: toBech32Addresses(allowed), + SpendLimit: spendLimit, +} +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a SendAuthorization) + +MsgTypeURL() + +string { + return sdk.MsgTypeURL(&MsgSend{ +}) +} + +/ Accept implements Authorization.Accept. +func (a SendAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { + mSend, ok := msg.(*MsgSend) + if !ok { + return authz.AcceptResponse{ +}, sdkerrors.ErrInvalidType.Wrap("type mismatch") +} + toAddr := mSend.ToAddress + + limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount...) 
+ if isNegative { + return authz.AcceptResponse{ +}, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit") +} + if limitLeft.IsZero() { + return authz.AcceptResponse{ + Accept: true, + Delete: true +}, nil +} + isAddrExists := false + allowedList := a.GetAllowList() + for _, addr := range allowedList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "send authorization") + if addr == toAddr { + isAddrExists = true + break +} + +} + if len(allowedList) > 0 && !isAddrExists { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot send to %s address", toAddr) +} + +return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &SendAuthorization{ + SpendLimit: limitLeft, + AllowList: allowedList +}}, nil +} + +/ ValidateBasic implements Authorization.ValidateBasic. +func (a SendAuthorization) + +ValidateBasic() + +error { + if a.SpendLimit == nil { + return sdkerrors.ErrInvalidCoins.Wrap("spend limit cannot be nil") +} + if !a.SpendLimit.IsAllPositive() { + return sdkerrors.ErrInvalidCoins.Wrapf("spend limit must be positive") +} + found := make(map[string]bool, 0) + for i := 0; i < len(a.AllowList); i++ { + if found[a.AllowList[i]] { + return ErrDuplicateEntry +} + +found[a.AllowList[i]] = true +} + +return nil +} + +func toBech32Addresses(allowed []sdk.AccAddress) []string { + if len(allowed) == 0 { + return nil +} + allowedAddrs := make([]string, len(allowed)) + for i, addr := range allowed { + allowedAddrs[i] = addr.String() +} + +return allowedAddrs +} +``` + +* `spend_limit` keeps track of how many coins are left in the authorization. +* `allow_list` specifies an optional list of addresses to whom the grantee can send tokens on behalf of the granter. + +#### StakeAuthorization + +`StakeAuthorization` implements the `Authorization` interface for messages in the [staking module](https://docs.cosmos.network/v0.44/modules/staking/). 
It takes an `AuthorizationType` to specify whether you want to authorize delegating, undelegating or redelegating (i.e. these have to be authorized separately). It also takes an optional `MaxTokens` that keeps track of a limit to the amount of tokens that can be delegated/undelegated/redelegated. If left empty, the amount is unlimited. Additionally, this Msg takes an `AllowList` or a `DenyList`, which allows you to select which validators you allow or deny grantees to stake with.
+
+```protobuf
+// StakeAuthorization defines authorization for delegate/undelegate/redelegate.
+//
+// Since: cosmos-sdk 0.43
+message StakeAuthorization {
+  option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization";
+  option (amino.name) = "cosmos-sdk/StakeAuthorization";
+
+  // max_tokens specifies the maximum amount of tokens can be delegate to a validator. If it is
+  // empty, there is no spend limit and any amount of coins can be delegated.
+  cosmos.base.v1beta1.Coin max_tokens = 1 [(gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coin"];
+  // validators is the oneof that represents either allow_list or deny_list
+  oneof validators {
+    // allow_list specifies list of validator addresses to whom grantee can delegate tokens on behalf of granter's
+    // account.
+    Validators allow_list = 2;
+    // deny_list specifies list of validator addresses to whom grantee can not delegate tokens.
+    Validators deny_list = 3;
+  }
+  // Validators defines list of validator addresses.
+  message Validators {
+    repeated string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  }
+  // authorization_type defines one of AuthorizationType.
+  AuthorizationType authorization_type = 4;
+}
+```
+
+```go expandable
+package types
+
+import (
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	"github.com/cosmos/cosmos-sdk/x/authz"
+)
+
+/ TODO: Revisit this once we have proper gas fee framework.
+/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(10) + +var _ authz.Authorization = &StakeAuthorization{ +} + +/ NewStakeAuthorization creates a new StakeAuthorization object. +func NewStakeAuthorization(allowed []sdk.ValAddress, denied []sdk.ValAddress, authzType AuthorizationType, amount *sdk.Coin) (*StakeAuthorization, error) { + allowedValidators, deniedValidators, err := validateAllowAndDenyValidators(allowed, denied) + if err != nil { + return nil, err +} + a := StakeAuthorization{ +} + if allowedValidators != nil { + a.Validators = &StakeAuthorization_AllowList{ + AllowList: &StakeAuthorization_Validators{ + Address: allowedValidators +}} + +} + +else { + a.Validators = &StakeAuthorization_DenyList{ + DenyList: &StakeAuthorization_Validators{ + Address: deniedValidators +}} + +} + if amount != nil { + a.MaxTokens = amount +} + +a.AuthorizationType = authzType + + return &a, nil +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a StakeAuthorization) + +MsgTypeURL() + +string { + authzType, err := normalizeAuthzType(a.AuthorizationType) + if err != nil { + panic(err) +} + +return authzType +} + +func (a StakeAuthorization) + +ValidateBasic() + +error { + if a.MaxTokens != nil && a.MaxTokens.IsNegative() { + return sdkerrors.Wrapf(authz.ErrNegativeMaxTokens, "negative coin amount: %v", a.MaxTokens) +} + if a.AuthorizationType == AuthorizationType_AUTHORIZATION_TYPE_UNSPECIFIED { + return authz.ErrUnknownAuthorizationType +} + +return nil +} + +/ Accept implements Authorization.Accept. 
+func (a StakeAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { + var validatorAddress string + var amount sdk.Coin + switch msg := msg.(type) { + case *MsgDelegate: + validatorAddress = msg.ValidatorAddress + amount = msg.Amount + case *MsgUndelegate: + validatorAddress = msg.ValidatorAddress + amount = msg.Amount + case *MsgBeginRedelegate: + validatorAddress = msg.ValidatorDstAddress + amount = msg.Amount + default: + return authz.AcceptResponse{ +}, sdkerrors.ErrInvalidRequest.Wrap("unknown msg type") +} + isValidatorExists := false + allowedList := a.GetAllowList().GetAddress() + for _, validator := range allowedList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization") + if validator == validatorAddress { + isValidatorExists = true + break +} + +} + denyList := a.GetDenyList().GetAddress() + for _, validator := range denyList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization") + if validator == validatorAddress { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validator) +} + +} + if len(allowedList) > 0 && !isValidatorExists { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validatorAddress) +} + if a.MaxTokens == nil { + return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &StakeAuthorization{ + Validators: a.GetValidators(), + AuthorizationType: a.GetAuthorizationType() +}, +}, nil +} + +limitLeft, err := a.MaxTokens.SafeSub(amount) + if err != nil { + return authz.AcceptResponse{ +}, err +} + if limitLeft.IsZero() { + return authz.AcceptResponse{ + Accept: true, + Delete: true +}, nil +} + +return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &StakeAuthorization{ + Validators: a.GetValidators(), + AuthorizationType: a.GetAuthorizationType(), + MaxTokens: &limitLeft +}, +}, nil +} + +func 
validateAllowAndDenyValidators(allowed []sdk.ValAddress, denied []sdk.ValAddress) ([]string, []string, error) {
+    if len(allowed) == 0 && len(denied) == 0 {
+        return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("both allowed & deny list cannot be empty")
+}
+    if len(allowed) > 0 && len(denied) > 0 {
+        return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("cannot set both allowed & deny list")
+}
+    allowedValidators := make([]string, len(allowed))
+    if len(allowed) > 0 {
+        for i, validator := range allowed {
+            allowedValidators[i] = validator.String()
+}
+
+return allowedValidators, nil, nil
+}
+    deniedValidators := make([]string, len(denied))
+    for i, validator := range denied {
+        deniedValidators[i] = validator.String()
+}
+
+return nil, deniedValidators, nil
+}
+
+/ Normalized Msg type URLs
+func normalizeAuthzType(authzType AuthorizationType) (string, error) {
+    switch authzType {
+    case AuthorizationType_AUTHORIZATION_TYPE_DELEGATE:
+        return sdk.MsgTypeURL(&MsgDelegate{
+}), nil
+    case AuthorizationType_AUTHORIZATION_TYPE_UNDELEGATE:
+        return sdk.MsgTypeURL(&MsgUndelegate{
+}), nil
+    case AuthorizationType_AUTHORIZATION_TYPE_REDELEGATE:
+        return sdk.MsgTypeURL(&MsgBeginRedelegate{
+}), nil
+    default:
+        return "", sdkerrors.Wrapf(authz.ErrUnknownAuthorizationType, "cannot normalize authz type with %T", authzType)
+}
+}
+```
+
+### Gas
+
+In order to prevent DoS attacks, granting `StakeAuthorization`s with `x/authz` incurs gas. `StakeAuthorization` allows you to authorize another account to delegate, undelegate, or redelegate to validators. The authorizer can define a list of validators they allow or deny delegations to. The Cosmos SDK iterates over these lists and charges 10 gas for each validator in both of the lists.
+
+Since the state maintains a list for each (granter, grantee) pair with the same expiration, we iterate over that list to remove the grant (in case a particular `msgType` is revoked), charging 20 gas per iteration.
+
+## State
+
+### Grant
+
+Grants are identified by combining the granter address (the address bytes of the granter), the grantee address (the address bytes of the grantee) and the Authorization type (its type URL). Hence we only allow one grant per (granter, grantee, Authorization) triple.
+
+* Grant: `0x01 | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes | msgType_bytes -> ProtocolBuffer(AuthorizationGrant)`
+
+The grant object encapsulates an `Authorization` type and an expiration timestamp:
+
+```protobuf
+// Grant gives permissions to execute
+// the provided method with expiration time.
+message Grant {
+  google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
+  // time when the grant will expire and will be pruned. If null, then the grant
+  // doesn't have a time expiration (other conditions in `authorization`
+  // may apply to invalidate the grant)
+  google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = true];
+}
+```
+
+### GrantQueue
+
+We maintain a queue for authz pruning. Whenever a grant is created, an item is added to the `GrantQueue` with a key of expiration, granter, grantee.
+
+In `EndBlock` (which runs for every block) we continuously check for and prune expired grants: we form a prefix key from the current block time against the expirations stored in the `GrantQueue`, iterate through all the matching records, and delete them from both the `GrantQueue` and the `Grant`s store.
+ +```go expandable +package keeper + +import ( + + "fmt" + "strconv" + "time" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have propoer gas fee framework. +/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, +/ https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(20) + +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + router *baseapp.MsgServiceRouter + authKeeper authz.AccountKeeper +} + +/ NewKeeper constructs a message authorization Keeper +func NewKeeper(storeKey storetypes.StoreKey, cdc codec.BinaryCodec, router *baseapp.MsgServiceRouter, ak authz.AccountKeeper) + +Keeper { + return Keeper{ + storeKey: storeKey, + cdc: cdc, + router: router, + authKeeper: ak, +} +} + +/ Logger returns a module-specific logger. +func (k Keeper) + +Logger(ctx sdk.Context) + +log.Logger { + return ctx.Logger().With("module", fmt.Sprintf("x/%s", authz.ModuleName)) +} + +/ getGrant returns grant stored at skey. 
+func (k Keeper) + +getGrant(ctx sdk.Context, skey []byte) (grant authz.Grant, found bool) { + store := ctx.KVStore(k.storeKey) + bz := store.Get(skey) + if bz == nil { + return grant, false +} + +k.cdc.MustUnmarshal(bz, &grant) + +return grant, true +} + +func (k Keeper) + +update(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, updated authz.Authorization) + +error { + skey := grantStoreKey(grantee, granter, updated.MsgTypeURL()) + +grant, found := k.getGrant(ctx, skey) + if !found { + return authz.ErrNoAuthorizationFound +} + +msg, ok := updated.(proto.Message) + if !ok { + return sdkerrors.ErrPackAny.Wrapf("cannot proto marshal %T", updated) +} + +any, err := codectypes.NewAnyWithValue(msg) + if err != nil { + return err +} + +grant.Authorization = any + store := ctx.KVStore(k.storeKey) + +store.Set(skey, k.cdc.MustMarshal(&grant)) + +return nil +} + +/ DispatchActions attempts to execute the provided messages via authorization +/ grants from the message signer to the grantee. +func (k Keeper) + +DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) ([][]byte, error) { + results := make([][]byte, len(msgs)) + now := ctx.BlockTime() + for i, msg := range msgs { + signers := msg.GetSigners() + if len(signers) != 1 { + return nil, authz.ErrAuthorizationNumOfSigners +} + granter := signers[0] + + / If granter != grantee then check authorization.Accept, otherwise we + / implicitly accept. 
+ if !granter.Equals(grantee) { + skey := grantStoreKey(grantee, granter, sdk.MsgTypeURL(msg)) + +grant, found := k.getGrant(ctx, skey) + if !found { + return nil, sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to update grant with key %s", string(skey)) +} + if grant.Expiration != nil && grant.Expiration.Before(now) { + return nil, authz.ErrAuthorizationExpired +} + +authorization, err := grant.GetAuthorization() + if err != nil { + return nil, err +} + +resp, err := authorization.Accept(ctx, msg) + if err != nil { + return nil, err +} + if resp.Delete { + err = k.DeleteGrant(ctx, grantee, granter, sdk.MsgTypeURL(msg)) +} + +else if resp.Updated != nil { + err = k.update(ctx, grantee, granter, resp.Updated) +} + if err != nil { + return nil, err +} + if !resp.Accept { + return nil, sdkerrors.ErrUnauthorized +} + +} + handler := k.router.Handler(msg) + if handler == nil { + return nil, sdkerrors.ErrUnknownRequest.Wrapf("unrecognized message route: %s", sdk.MsgTypeURL(msg)) +} + +msgResp, err := handler(ctx, msg) + if err != nil { + return nil, sdkerrors.Wrapf(err, "failed to execute message; message %v", msg) +} + +results[i] = msgResp.Data + + / emit the events from the dispatched actions + events := msgResp.Events + sdkEvents := make([]sdk.Event, 0, len(events)) + for _, event := range events { + e := event + e.Attributes = append(e.Attributes, abci.EventAttribute{ + Key: "authz_msg_index", + Value: strconv.Itoa(i) +}) + +sdkEvents = append(sdkEvents, sdk.Event(e)) +} + +ctx.EventManager().EmitEvents(sdkEvents) +} + +return results, nil +} + +/ SaveGrant method grants the provided authorization to the grantee on the granter's account +/ with the provided expiration time and insert authorization key into the grants queue. If there is an existing authorization grant for the +/ same `sdk.Msg` type, this grant overwrites that. 
+func (k Keeper) + +SaveGrant(ctx sdk.Context, grantee, granter sdk.AccAddress, authorization authz.Authorization, expiration *time.Time) + +error { + store := ctx.KVStore(k.storeKey) + msgType := authorization.MsgTypeURL() + skey := grantStoreKey(grantee, granter, msgType) + +grant, err := authz.NewGrant(ctx.BlockTime(), authorization, expiration) + if err != nil { + return err +} + +var oldExp *time.Time + if oldGrant, found := k.getGrant(ctx, skey); found { + oldExp = oldGrant.Expiration +} + if oldExp != nil && (expiration == nil || !oldExp.Equal(*expiration)) { + if err = k.removeFromGrantQueue(ctx, skey, granter, grantee, *oldExp); err != nil { + return err +} + +} + + / If the expiration didn't change, then we don't remove it and we should not insert again + if expiration != nil && (oldExp == nil || !oldExp.Equal(*expiration)) { + if err = k.insertIntoGrantQueue(ctx, granter, grantee, msgType, *expiration); err != nil { + return err +} + +} + bz := k.cdc.MustMarshal(&grant) + +store.Set(skey, bz) + +return ctx.EventManager().EmitTypedEvent(&authz.EventGrant{ + MsgTypeUrl: authorization.MsgTypeURL(), + Granter: granter.String(), + Grantee: grantee.String(), +}) +} + +/ DeleteGrant revokes any authorization for the provided message type granted to the grantee +/ by the granter. 
+func (k Keeper) + +DeleteGrant(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) + +error { + store := ctx.KVStore(k.storeKey) + skey := grantStoreKey(grantee, granter, msgType) + +grant, found := k.getGrant(ctx, skey) + if !found { + return sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to delete grant with key %s", string(skey)) +} + if grant.Expiration != nil { + err := k.removeFromGrantQueue(ctx, skey, granter, grantee, *grant.Expiration) + if err != nil { + return err +} + +} + +store.Delete(skey) + +return ctx.EventManager().EmitTypedEvent(&authz.EventRevoke{ + MsgTypeUrl: msgType, + Granter: granter.String(), + Grantee: grantee.String(), +}) +} + +/ GetAuthorizations Returns list of `Authorizations` granted to the grantee by the granter. +func (k Keeper) + +GetAuthorizations(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress) ([]authz.Authorization, error) { + store := ctx.KVStore(k.storeKey) + key := grantStoreKey(grantee, granter, "") + iter := sdk.KVStorePrefixIterator(store, key) + +defer iter.Close() + +var authorization authz.Grant + var authorizations []authz.Authorization + for ; iter.Valid(); iter.Next() { + if err := k.cdc.Unmarshal(iter.Value(), &authorization); err != nil { + return nil, err +} + +a, err := authorization.GetAuthorization() + if err != nil { + return nil, err +} + +authorizations = append(authorizations, a) +} + +return authorizations, nil +} + +/ GetAuthorization returns an Authorization and it's expiration time. +/ A nil Authorization is returned under the following circumstances: +/ - No grant is found. +/ - A grant is found, but it is expired. +/ - There was an error getting the authorization from the grant. 
+func (k Keeper) + +GetAuthorization(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) (authz.Authorization, *time.Time) { + grant, found := k.getGrant(ctx, grantStoreKey(grantee, granter, msgType)) + if !found || (grant.Expiration != nil && grant.Expiration.Before(ctx.BlockHeader().Time)) { + return nil, nil +} + +auth, err := grant.GetAuthorization() + if err != nil { + return nil, nil +} + +return auth, grant.Expiration +} + +/ IterateGrants iterates over all authorization grants +/ This function should be used with caution because it can involve significant IO operations. +/ It should not be used in query or msg services without charging additional gas. +/ The iteration stops when the handler function returns true or the iterator exhaust. +func (k Keeper) + +IterateGrants(ctx sdk.Context, + handler func(granterAddr sdk.AccAddress, granteeAddr sdk.AccAddress, grant authz.Grant) + +bool, +) { + store := ctx.KVStore(k.storeKey) + iter := sdk.KVStorePrefixIterator(store, GrantKey) + +defer iter.Close() + for ; iter.Valid(); iter.Next() { + var grant authz.Grant + granterAddr, granteeAddr, _ := parseGrantStoreKey(iter.Key()) + +k.cdc.MustUnmarshal(iter.Value(), &grant) + if handler(granterAddr, granteeAddr, grant) { + break +} + +} +} + +func (k Keeper) + +getGrantQueueItem(ctx sdk.Context, expiration time.Time, granter, grantee sdk.AccAddress) (*authz.GrantQueueItem, error) { + store := ctx.KVStore(k.storeKey) + bz := store.Get(GrantQueueKey(expiration, granter, grantee)) + if bz == nil { + return &authz.GrantQueueItem{ +}, nil +} + +var queueItems authz.GrantQueueItem + if err := k.cdc.Unmarshal(bz, &queueItems); err != nil { + return nil, err +} + +return &queueItems, nil +} + +func (k Keeper) + +setGrantQueueItem(ctx sdk.Context, expiration time.Time, + granter sdk.AccAddress, grantee sdk.AccAddress, queueItems *authz.GrantQueueItem, +) + +error { + store := ctx.KVStore(k.storeKey) + +bz, err := k.cdc.Marshal(queueItems) + if err 
!= nil { + return err +} + +store.Set(GrantQueueKey(expiration, granter, grantee), bz) + +return nil +} + +/ insertIntoGrantQueue inserts a grant key into the grant queue +func (k Keeper) + +insertIntoGrantQueue(ctx sdk.Context, granter, grantee sdk.AccAddress, msgType string, expiration time.Time) + +error { + queueItems, err := k.getGrantQueueItem(ctx, expiration, granter, grantee) + if err != nil { + return err +} + if len(queueItems.MsgTypeUrls) == 0 { + k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{ + MsgTypeUrls: []string{ + msgType +}, +}) +} + +else { + queueItems.MsgTypeUrls = append(queueItems.MsgTypeUrls, msgType) + +k.setGrantQueueItem(ctx, expiration, granter, grantee, queueItems) +} + +return nil +} + +/ removeFromGrantQueue removes a grant key from the grant queue +func (k Keeper) + +removeFromGrantQueue(ctx sdk.Context, grantKey []byte, granter, grantee sdk.AccAddress, expiration time.Time) + +error { + store := ctx.KVStore(k.storeKey) + key := GrantQueueKey(expiration, granter, grantee) + bz := store.Get(key) + if bz == nil { + return sdkerrors.Wrap(authz.ErrNoGrantKeyFound, "can't remove grant from the expire queue, grant key not found") +} + +var queueItem authz.GrantQueueItem + if err := k.cdc.Unmarshal(bz, &queueItem); err != nil { + return err +} + + _, _, msgType := parseGrantStoreKey(grantKey) + queueItems := queueItem.MsgTypeUrls + for index, typeURL := range queueItems { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "grant queue") + if typeURL == msgType { + end := len(queueItem.MsgTypeUrls) - 1 + queueItems[index] = queueItems[end] + queueItems = queueItems[:end] + if err := k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{ + MsgTypeUrls: queueItems, +}); err != nil { + return err +} + +break +} + +} + +return nil +} + +/ DequeueAndDeleteExpiredGrants deletes expired grants from the state and grant queue. 
+func (k Keeper) + +DequeueAndDeleteExpiredGrants(ctx sdk.Context) + +error { + store := ctx.KVStore(k.storeKey) + iterator := store.Iterator(GrantQueuePrefix, sdk.InclusiveEndBytes(GrantQueueTimePrefix(ctx.BlockTime()))) + +defer iterator.Close() + for ; iterator.Valid(); iterator.Next() { + var queueItem authz.GrantQueueItem + if err := k.cdc.Unmarshal(iterator.Value(), &queueItem); err != nil { + return err +} + + _, granter, grantee, err := parseGrantQueueKey(iterator.Key()) + if err != nil { + return err +} + +store.Delete(iterator.Key()) + for _, typeURL := range queueItem.MsgTypeUrls { + store.Delete(grantStoreKey(grantee, granter, typeURL)) +} + +} + +return nil +} +``` + +* GrantQueue: `0x02 | expiration_bytes | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes -> ProtocalBuffer(GrantQueueItem)` + +The `expiration_bytes` are the expiration date in UTC with the format `"2006-01-02T15:04:05.000000000"`. + +```go expandable +package keeper + +import ( + + "time" + "github.com/cosmos/cosmos-sdk/internal/conv" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/address" + "github.com/cosmos/cosmos-sdk/types/kv" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ Keys for store prefixes +/ Items are stored with the following key: values +/ +/ - 0x01: Grant +/ - 0x02: GrantQueueItem +var ( + GrantKey = []byte{0x01 +} / prefix for each key + GrantQueuePrefix = []byte{0x02 +} +) + +var lenTime = len(sdk.FormatTimeBytes(time.Now())) + +/ StoreKey is the store key string for authz +const StoreKey = authz.ModuleName + +/ grantStoreKey - return authorization store key +/ Items are stored with the following key: values +/ +/ - 0x01: Grant +func grantStoreKey(grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) []byte { + m := conv.UnsafeStrToBytes(msgType) + +granter = address.MustLengthPrefix(granter) + +grantee = address.MustLengthPrefix(grantee) + key := 
sdk.AppendLengthPrefixedBytes(GrantKey, granter, grantee, m) + +return key +} + +/ parseGrantStoreKey - split granter, grantee address and msg type from the authorization key +func parseGrantStoreKey(key []byte) (granterAddr, granteeAddr sdk.AccAddress, msgType string) { + / key is of format: + / 0x01 + + granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, 1) / ignore key[0] since it is a prefix key + granterAddr, granterAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0])) + +granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrEndIndex+1, 1) + +granteeAddr, granteeAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0])) + +kv.AssertKeyAtLeastLength(key, granteeAddrEndIndex+1) + +return granterAddr, granteeAddr, conv.UnsafeBytesToStr(key[(granteeAddrEndIndex + 1):]) +} + +/ parseGrantQueueKey split expiration time, granter and grantee from the grant queue key +func parseGrantQueueKey(key []byte) (time.Time, sdk.AccAddress, sdk.AccAddress, error) { + / key is of format: + / 0x02 + + expBytes, expEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, lenTime) + +exp, err := sdk.ParseTimeBytes(expBytes) + if err != nil { + return exp, nil, nil, err +} + +granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, expEndIndex+1, 1) + +granter, granterEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0])) + +granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterEndIndex+1, 1) + +grantee, _ := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0])) + +return exp, granter, grantee, nil +} + +/ GrantQueueKey - return grant queue store key. If a given grant doesn't have a defined +/ expiration, then it should not be used in the pruning queue. 
+/ Key format is: +/ +/ 0x02: GrantQueueItem +func GrantQueueKey(expiration time.Time, granter sdk.AccAddress, grantee sdk.AccAddress) []byte { + exp := sdk.FormatTimeBytes(expiration) + +granter = address.MustLengthPrefix(granter) + +grantee = address.MustLengthPrefix(grantee) + +return sdk.AppendLengthPrefixedBytes(GrantQueuePrefix, exp, granter, grantee) +} + +/ GrantQueueTimePrefix - return grant queue time prefix +func GrantQueueTimePrefix(expiration time.Time) []byte { + return append(GrantQueuePrefix, sdk.FormatTimeBytes(expiration)...) +} + +/ firstAddressFromGrantStoreKey parses the first address only +func firstAddressFromGrantStoreKey(key []byte) + +sdk.AccAddress { + addrLen := key[0] + return sdk.AccAddress(key[1 : 1+addrLen]) +} +``` + +The `GrantQueueItem` object contains the list of type urls between granter and grantee that expire at the time indicated in the key. + +## Messages + +In this section we describe the processing of messages for the authz module. + +### MsgGrant + +An authorization grant is created using the `MsgGrant` message. +If there is already a grant for the `(granter, grantee, Authorization)` triple, then the new grant overwrites the previous one. To update or extend an existing grant, a new grant with the same `(granter, grantee, Authorization)` triple should be created. + +```protobuf +// MsgGrant is a request type for Grant method. It declares authorization to the grantee +// on behalf of the granter with the provided expiration time. +message MsgGrant { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgGrant"; + + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + cosmos.authz.v1beta1.Grant grant = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message handling should fail if: + +* both granter and grantee have the same address. 
+* provided `Expiration` time is less than current unix timestamp (but a grant will be created if no `expiration` time is provided since `expiration` is optional).
+* provided `Grant.Authorization` is not implemented.
+* `Authorization.MsgTypeURL()` is not defined in the router (there is no defined handler in the app router to handle that Msg type).
+
+### MsgRevoke
+
+A grant can be removed with the `MsgRevoke` message.
+
+```protobuf
+// MsgRevoke revokes any authorization with the provided sdk.Msg type on the
+// granter's account that has been granted to the grantee.
+message MsgRevoke {
+  option (cosmos.msg.v1.signer) = "granter";
+  option (amino.name) = "cosmos-sdk/MsgRevoke";
+
+  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string msg_type_url = 3;
+}
+```
+
+The message handling should fail if:
+
+* both granter and grantee have the same address.
+* provided `MsgTypeUrl` is empty.
+
+NOTE: The `MsgExec` message removes a grant if the grant has expired.
+
+### MsgExec
+
+When a grantee wants to execute a transaction on behalf of a granter, they must send `MsgExec`.
+
+```protobuf
+// MsgExec attempts to execute the provided messages using
+// authorizations granted to the grantee. Each message should have only
+// one signer corresponding to the granter of the authorization.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "grantee";
+  option (amino.name) = "cosmos-sdk/MsgExec";
+
+  string grantee = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // Execute Msg.
+  // The x/authz will try to find a grant matching (msg.signers[0], grantee, MsgTypeURL(msg))
+  // triple and validate it.
+  repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = "cosmos.base.v1beta1.Msg"];
+}
+```
+
+The message handling should fail if:
+
+* provided `Authorization` is not implemented.
+* grantee doesn't have permission to run the transaction.
+* if granted authorization is expired. + +## Events + +The authz module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main/cosmos.authz.v1beta1#cosmos.authz.v1beta1.EventGrant). + +## Client + +### CLI + +A user can query and interact with the `authz` module using the CLI. + +#### Query + +The `query` commands allow users to query `authz` state. + +```bash +simd query authz --help +``` + +##### grants + +The `grants` command allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type. + +```bash +simd query authz grants [granter-addr] [grantee-addr] [msg-type-url]? [flags] +``` + +Example: + +```bash +simd query authz grants cosmos1.. cosmos1.. /cosmos.bank.v1beta1.MsgSend +``` + +Example Output: + +```bash +grants: +- authorization: + '@type': /cosmos.bank.v1beta1.SendAuthorization + spend_limit: + - amount: "100" + denom: stake + expiration: "2022-01-01T00:00:00Z" +pagination: null +``` + +#### Transactions + +The `tx` commands allow users to interact with the `authz` module. + +```bash +simd tx authz --help +``` + +##### exec + +The `exec` command allows a grantee to execute a transaction on behalf of granter. + +```bash + simd tx authz exec [tx-json-file] --from [grantee] [flags] +``` + +Example: + +```bash +simd tx authz exec tx.json --from=cosmos1.. +``` + +##### grant + +The `grant` command allows a granter to grant an authorization to a grantee. + +```bash +simd tx authz grant --from [flags] +``` + +Example: + +```bash +simd tx authz grant cosmos1.. send --spend-limit=100stake --from=cosmos1.. +``` + +##### revoke + +The `revoke` command allows a granter to revoke an authorization from a grantee. + +```bash +simd tx authz revoke [grantee] [msg-type-url] --from=[granter] [flags] +``` + +Example: + +```bash +simd tx authz revoke cosmos1.. /cosmos.bank.v1beta1.MsgSend --from=cosmos1.. 
+``` + +### gRPC + +A user can query the `authz` module using gRPC endpoints. + +#### Grants + +The `Grants` endpoint allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type. + +```bash +cosmos.authz.v1beta1.Query/Grants +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"granter":"cosmos1..","grantee":"cosmos1..","msg_type_url":"/cosmos.bank.v1beta1.MsgSend"}' \ + localhost:9090 \ + cosmos.authz.v1beta1.Query/Grants +``` + +Example Output: + +```bash expandable +{ + "grants": [ + { + "authorization": { + "@type": "/cosmos.bank.v1beta1.SendAuthorization", + "spendLimit": [ + { + "denom":"stake", + "amount":"100" + } + ] + }, + "expiration": "2022-01-01T00:00:00Z" + } + ] +} +``` + +### REST + +A user can query the `authz` module using REST endpoints. + +```bash +/cosmos/authz/v1beta1/grants +``` + +Example: + +```bash +curl "localhost:1317/cosmos/authz/v1beta1/grants?granter=cosmos1..&grantee=cosmos1..&msg_type_url=/cosmos.bank.v1beta1.MsgSend" +``` + +Example Output: + +```bash expandable +{ + "grants": [ + { + "authorization": { + "@type": "/cosmos.bank.v1beta1.SendAuthorization", + "spend_limit": [ + { + "denom": "stake", + "amount": "100" + } + ] + }, + "expiration": "2022-01-01T00:00:00Z" + } + ], + "pagination": null +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/bank/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/bank/README.mdx new file mode 100644 index 00000000..0f07c3cb --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/bank/README.mdx @@ -0,0 +1,1098 @@ +--- +title: '`x/bank`' +description: This document specifies the bank module of the Cosmos SDK. +--- + +## Abstract + +This document specifies the bank module of the Cosmos SDK. 
+
+The bank module is responsible for handling multi-asset coin transfers between
+accounts and tracking special-case pseudo-transfers which must work differently
+with particular kinds of accounts (notably delegating/undelegating for vesting
+accounts). It exposes several interfaces with varying capabilities for secure
+interaction with other modules which must alter user balances.
+
+In addition, the bank module tracks and provides query support for the total
+supply of all assets used in the application.
+
+This module is used in the Cosmos Hub.
+
+## Contents
+
+* [Supply](#supply)
+  * [Total Supply](#total-supply)
+* [Module Accounts](#module-accounts)
+  * [Permissions](#permissions)
+* [State](#state)
+* [Params](#params)
+* [Keepers](#keepers)
+* [Messages](#messages)
+* [Events](#events)
+  * [Message Events](#message-events)
+  * [Keeper Events](#keeper-events)
+* [Parameters](#parameters)
+  * [SendEnabled](#sendenabled)
+  * [DefaultSendEnabled](#defaultsendenabled)
+* [Client](#client)
+  * [CLI](#cli)
+    * [Query](#query)
+    * [Transactions](#transactions)
+* [gRPC](#grpc)
+
+## Supply
+
+The `supply` functionality:
+
+* passively tracks the total supply of coins within a chain,
+* provides a pattern for modules to hold/interact with `Coins`, and
+* introduces the invariant check to verify a chain's total supply.
+
+### Total Supply
+
+The total `Supply` of the network is equal to the sum of all coins across all
+accounts. The total supply is updated every time a `Coin` is minted (e.g. as part
+of the inflation mechanism) or burned (e.g. due to slashing or if a governance
+proposal is vetoed).
+
+## Module Accounts
+
+The supply functionality introduces a new type of `auth.Account` which can be used by
+modules to allocate tokens and in special cases mint or burn tokens. At a base
+level these module accounts are capable of sending/receiving tokens to and from
+`auth.Account`s and other module accounts. 
This design replaces previous
+alternative designs where, to hold tokens, modules would burn the incoming
+tokens from the sender account, and then track those tokens internally. Later,
+in order to send tokens, the module would need to effectively mint tokens
+within a destination account. The new design removes duplicate logic between
+modules to perform this accounting.
+
+The `ModuleAccount` interface is defined as follows:
+
+```go
+type ModuleAccount interface {
+  auth.Account // same methods as the Account interface
+
+  GetName() string          // name of the module; used to obtain the address
+  GetPermissions() []string // permissions of module account
+  HasPermission(string) bool
+}
+```
+
+> **WARNING!**
+> Any module or message handler that allows either direct or indirect sending of funds must explicitly guarantee those funds cannot be sent to module accounts (unless allowed).
+
+The supply `Keeper` also introduces new wrapper functions for the auth `Keeper`
+and the bank `Keeper` that are related to `ModuleAccount`s in order to be able
+to:
+
+* Get and set `ModuleAccount`s by providing the `Name`.
+* Send coins from and to other `ModuleAccount`s or standard `Account`s
+  (`BaseAccount` or `VestingAccount`) by passing only the `Name`.
+* `Mint` or `Burn` coins for a `ModuleAccount` (restricted to its permissions).
+
+### Permissions
+
+Each `ModuleAccount` has a different set of permissions that provide different
+object capabilities to perform certain actions. Permissions need to be
+registered upon the creation of the supply `Keeper` so that every time a
+`ModuleAccount` calls the allowed functions, the `Keeper` can look up the
+permissions for that specific account and allow or deny the action.
+
+The available permissions are:
+
+* `Minter`: allows for a module to mint a specific amount of coins.
+* `Burner`: allows for a module to burn a specific amount of coins. 
+* `Staking`: allows for a module to delegate and undelegate a specific amount of coins.
+
+## State
+
+The `x/bank` module keeps state of the following primary objects:
+
+1. Account balances
+2. Denomination metadata
+3. The total supply of all balances
+4. Information on which denominations are allowed to be sent.
+
+In addition, the `x/bank` module keeps the following indexes to manage the
+aforementioned state:
+
+* Supply Index: `0x0 | byte(denom) -> byte(amount)`
+* Denom Metadata Index: `0x1 | byte(denom) -> ProtocolBuffer(Metadata)`
+* Balances Index: `0x2 | byte(address length) | []byte(address) | []byte(balance.Denom) -> ProtocolBuffer(balance)`
+* Reverse Denomination to Address Index: `0x03 | byte(denom) | 0x00 | []byte(address) -> 0`
+
+## Params
+
+The bank module stores its params in state under the prefix `0x05`. Params can be
+updated via governance or by the address with authority.
+
+* Params: `0x05 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params defines the parameters for the bank module.
+message Params {
+  option (amino.name) = "cosmos-sdk/x/bank/Params";
+  option (gogoproto.goproto_stringer) = false;
+  // Deprecated: Use of SendEnabled in params is deprecated.
+  // For genesis, use the newly added send_enabled field in the genesis object.
+  // Storage, lookup, and manipulation of this information is now in the keeper.
+  //
+  // As of cosmos-sdk 0.47, this only exists for backwards compatibility of genesis files.
+  repeated SendEnabled send_enabled = 1 [deprecated = true];
+  bool default_send_enabled = 2;
+}
+```
+
+## Keepers
+
+The bank module provides these exported keeper interfaces that can be
+passed to other modules that read or update account balances. Modules
+should use the least-permissive interface that provides the functionality they
+require.
+
+Best practices dictate careful review of `bank` module code to ensure that
+permissions are limited in the way that you expect. 
+
+### Denied Addresses
+
+The `x/bank` module accepts a map of addresses that are considered blocklisted
+from directly and explicitly receiving funds through means such as `MsgSend` and
+`MsgMultiSend` and direct API calls like `SendCoinsFromModuleToAccount`.
+
+Typically, these addresses are module accounts. If these addresses receive funds
+outside the expected rules of the state machine, invariants are likely to be
+broken and could result in a halted network.
+
+By providing the `x/bank` module with a blocklisted set of addresses, an error occurs for the operation if a user or client attempts to directly or indirectly send funds to a blocklisted account, for example, by using [IBC](https://ibc.cosmos.network).
+
+### Common Types
+
+#### Input
+
+An input of a multiparty transfer.
+
+```protobuf
+// Input models transaction input.
+message Input {
+  string address = 1;
+  repeated cosmos.base.v1beta1.Coin coins = 2;
+}
+```
+
+#### Output
+
+An output of a multiparty transfer.
+
+```protobuf
+// Output models transaction outputs.
+message Output {
+  string address = 1;
+  repeated cosmos.base.v1beta1.Coin coins = 2;
+}
+```
+
+### BaseKeeper
+
+The base keeper provides full-permission access: the ability to arbitrarily modify any account's balance and to mint or burn coins.
+
+Per-module minting restrictions can be achieved by wrapping the base keeper with `WithMintCoinsRestriction` (e.g. to allow minting only certain denoms).
+
+```go expandable
+/ Keeper defines a module interface that facilitates the transfer of coins
+/ between accounts. 
+type Keeper interface { + SendKeeper + WithMintCoinsRestriction(MintingRestrictionFn) + +BaseKeeper + + InitGenesis(context.Context, *types.GenesisState) + +ExportGenesis(context.Context) *types.GenesisState + + GetSupply(ctx context.Context, denom string) + +sdk.Coin + HasSupply(ctx context.Context, denom string) + +bool + GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) + +IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) + +bool) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) + +HasDenomMetaData(ctx context.Context, denom string) + +bool + SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) + +IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) + +SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) + +error + SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + + DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error + UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error + + / GetAuthority gets the address capable of executing governance proposal messages. Usually the gov module account. 
+ GetAuthority() + +string + + types.QueryServer +} +``` + +### SendKeeper + +The send keeper provides access to account balances and the ability to transfer coins between +accounts. The send keeper does not alter the total supply (mint or burn coins). + +```go expandable +/ SendKeeper defines a module interface that facilitates the transfer of coins +/ between accounts without the possibility of creating coins. +type SendKeeper interface { + ViewKeeper + + InputOutputCoins(ctx context.Context, inputs types.Input, outputs []types.Output) + +error + SendCoins(ctx context.Context, fromAddr sdk.AccAddress, toAddr sdk.AccAddress, amt sdk.Coins) + +error + + GetParams(ctx context.Context) + +types.Params + SetParams(ctx context.Context, params types.Params) + +error + + IsSendEnabledDenom(ctx context.Context, denom string) + +bool + SetSendEnabled(ctx context.Context, denom string, value bool) + +SetAllSendEnabled(ctx context.Context, sendEnableds []*types.SendEnabled) + +DeleteSendEnabled(ctx context.Context, denom string) + +IterateSendEnabledEntries(ctx context.Context, cb func(denom string, sendEnabled bool) (stop bool)) + +GetAllSendEnabledEntries(ctx context.Context) []types.SendEnabled + + IsSendEnabledCoin(ctx context.Context, coin sdk.Coin) + +bool + IsSendEnabledCoins(ctx context.Context, coins ...sdk.Coin) + +error + + BlockedAddr(addr sdk.AccAddress) + +bool +} +``` + +### ViewKeeper + +The view keeper provides read-only access to account balances. The view keeper does not have balance alteration functionality. All balance lookups are `O(1)`. + +```go expandable +/ ViewKeeper defines a module interface that facilitates read only access to +/ account balances. 
+type ViewKeeper interface { + ValidateBalance(ctx context.Context, addr sdk.AccAddress) + +error + HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) + +bool + + GetAllBalances(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + GetAccountsBalances(ctx context.Context) []types.Balance + GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + LockedCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + + IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool)) + +IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool)) +} +``` + +## Messages + +### MsgSend + +Send coins from one address to another. + +```protobuf +// MsgSend represents a message to send coins from one account to another. +message MsgSend { + option (cosmos.msg.v1.signer) = "from_address"; + option (amino.name) = "cosmos-sdk/MsgSend"; + + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + string from_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string to_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + repeated cosmos.base.v1beta1.Coin amount = 3 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; +} +``` + +The message will fail under the following conditions: + +* The coins do not have sending enabled +* The `to` address is restricted + +### MsgMultiSend + +Send coins from one sender and to a series of different address. If any of the receiving addresses do not correspond to an existing account, a new account is created. + +```protobuf +// MsgMultiSend represents an arbitrary multi-in, multi-out send message. 
+message MsgMultiSend { + option (cosmos.msg.v1.signer) = "inputs"; + option (amino.name) = "cosmos-sdk/MsgMultiSend"; + + option (gogoproto.equal) = false; + + // Inputs, despite being `repeated`, only allows one sender input. This is + // checked in MsgMultiSend's ValidateBasic. + repeated Input inputs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated Output outputs = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message will fail under the following conditions: + +* Any of the coins do not have sending enabled +* Any of the `to` addresses are restricted +* Any of the coins are locked +* The inputs and outputs do not correctly correspond to one another + +### MsgUpdateParams + +The `bank` module params can be updated through `MsgUpdateParams`, which can be done via a governance proposal. The signer will always be the `gov` module account address. + +```protobuf +// MsgUpdateParams is the Msg/UpdateParams request type. +// +// Since: cosmos-sdk 0.47 +message MsgUpdateParams { + option (cosmos.msg.v1.signer) = "authority"; + + // authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + option (amino.name) = "cosmos-sdk/x/bank/MsgUpdateParams"; + + // params defines the x/bank parameters to update. + // + // NOTE: All parameters must be supplied. + Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message handling can fail if: + +* the signer is not the gov module account address. + +### MsgSetSendEnabled + +Used with the x/gov module to create or edit SendEnabled entries. + +```protobuf +// MsgSetSendEnabled is the Msg/SetSendEnabled request type. +// +// Only entries to add/update/delete need to be included. +// Existing SendEnabled entries that are not included in this +// message are left unchanged. 
+// +// Since: cosmos-sdk 0.47 +message MsgSetSendEnabled { + option (cosmos.msg.v1.signer) = "authority"; + option (amino.name) = "cosmos-sdk/MsgSetSendEnabled"; + + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // send_enabled is the list of entries to add or update. + repeated SendEnabled send_enabled = 2; + + // use_default_for is a list of denoms that should use the params.default_send_enabled value. + // Denoms listed here will have their SendEnabled entries deleted. + // If a denom is included that doesn't have a SendEnabled entry, + // it will be ignored. + repeated string use_default_for = 3; +} +``` + +The message will fail under the following conditions: + +* The authority is not a bech32 address. +* The authority is not x/gov module's address. +* There are multiple SendEnabled entries with the same Denom. +* One or more SendEnabled entries has an invalid Denom. + +## Events + +The bank module emits the following events: + +### Message Events + +#### MsgSend + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| transfer | recipient | `{recipientAddress}` | +| transfer | amount | `{amount}` | +| message | module | bank | +| message | action | send | +| message | sender | `{senderAddress}` | + +#### MsgMultiSend + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| transfer | recipient | `{recipientAddress}` | +| transfer | amount | `{amount}` | +| message | module | bank | +| message | action | multisend | +| message | sender | `{senderAddress}` | + +### Keeper Events + +In addition to message events, the bank keeper will produce events when the following methods are called (or any method which ends up calling them) + +#### MintCoins + +```json expandable +{ + "type": "coinbase", + "attributes": [ + { + "key": "minter", + "value": "{{sdk.AccAddress of the module minting coins}}", + "index": true + }, + { + "key": "amount", + "value": 
"{{sdk.Coins being minted}}", + "index": true + } + ] +} +``` + +```json expandable +{ + "type": "coin_received", + "attributes": [ + { + "key": "receiver", + "value": "{{sdk.AccAddress of the module minting coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being received}}", + "index": true + } + ] +} +``` + +#### BurnCoins + +```json expandable +{ + "type": "burn", + "attributes": [ + { + "key": "burner", + "value": "{{sdk.AccAddress of the module burning coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being burned}}", + "index": true + } + ] +} +``` + +```json expandable +{ + "type": "coin_spent", + "attributes": [ + { + "key": "spender", + "value": "{{sdk.AccAddress of the module burning coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being burned}}", + "index": true + } + ] +} +``` + +#### addCoins + +```json expandable +{ + "type": "coin_received", + "attributes": [ + { + "key": "receiver", + "value": "{{sdk.AccAddress of the address beneficiary of the coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being received}}", + "index": true + } + ] +} +``` + +#### subUnlockedCoins/DelegateCoins + +```json expandable +{ + "type": "coin_spent", + "attributes": [ + { + "key": "spender", + "value": "{{sdk.AccAddress of the address which is spending coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being spent}}", + "index": true + } + ] +} +``` + +## Parameters + +The bank module contains the following parameters: + +### SendEnabled + +The SendEnabled parameter is now deprecated and no longer used. It is replaced +with state store records. + +### DefaultSendEnabled + +The default send enabled value controls send transfer capability for all +coin denominations unless specifically included in the array of `SendEnabled` +parameters. + +## Client + +### CLI + +A user can query and interact with the `bank` module using the CLI. 
+ +#### Query + +The `query` commands allow users to query `bank` state. + +```shell +simd query bank --help +``` + +##### balances + +The `balances` command allows users to query account balances by address. + +```shell +simd query bank balances [address] [flags] +``` + +Example: + +```shell +simd query bank balances cosmos1.. +``` + +Example Output: + +```yml +balances: +- amount: "1000000000" + denom: stake +pagination: + next_key: null + total: "0" +``` + +##### denom-metadata + +The `denom-metadata` command allows users to query metadata for coin denominations. A user can query metadata for a single denomination using the `--denom` flag or all denominations without it. + +```shell +simd query bank denom-metadata [flags] +``` + +Example: + +```shell +simd query bank denom-metadata --denom stake +``` + +Example Output: + +```yml +metadata: + base: stake + denom_units: + - aliases: + - STAKE + denom: stake + description: native staking token of simulation app + display: stake + name: SimApp Token + symbol: STK +``` + +##### total + +The `total` command allows users to query the total supply of coins. A user can query the total supply for a single coin using the `--denom` flag or all coins without it. + +```shell +simd query bank total [flags] +``` + +Example: + +```shell +simd query bank total --denom stake +``` + +Example Output: + +```yml +amount: "10000000000" +denom: stake +``` + +##### send-enabled + +The `send-enabled` command allows users to query for all or some SendEnabled entries. + +```shell +simd query bank send-enabled [denom1 ...] [flags] +``` + +Example: + +```shell +simd query bank send-enabled +``` + +Example output: + +```yml +send_enabled: +- denom: foocoin + enabled: true +- denom: barcoin +pagination: + next-key: null + total: 2 +``` + +#### Transactions + +The `tx` commands allow users to interact with the `bank` module. 
+ +```shell +simd tx bank --help +``` + +##### send + +The `send` command allows users to send funds from one account to another. + +```shell +simd tx bank send [from_key_or_address] [to_address] [amount] [flags] +``` + +Example: + +```shell +simd tx bank send cosmos1.. cosmos1.. 100stake +``` + +## gRPC + +A user can query the `bank` module using gRPC endpoints. + +### Balance + +The `Balance` endpoint allows users to query account balance by address for a given denomination. + +```shell +cosmos.bank.v1beta1.Query/Balance +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1..","denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/Balance +``` + +Example Output: + +```json +{ + "balance": { + "denom": "stake", + "amount": "1000000000" + } +} +``` + +### AllBalances + +The `AllBalances` endpoint allows users to query account balance by address for all denominations. + +```shell +cosmos.bank.v1beta1.Query/AllBalances +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/AllBalances +``` + +Example Output: + +```json expandable +{ + "balances": [ + { + "denom": "stake", + "amount": "1000000000" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### DenomMetadata + +The `DenomMetadata` endpoint allows users to query metadata for a single coin denomination. + +```shell +cosmos.bank.v1beta1.Query/DenomMetadata +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomMetadata +``` + +Example Output: + +```json expandable +{ + "metadata": { + "description": "native staking token of simulation app", + "denomUnits": [ + { + "denom": "stake", + "aliases": [ + "STAKE" + ] + } + ], + "base": "stake", + "display": "stake", + "name": "SimApp Token", + "symbol": "STK" + } +} +``` + +### DenomsMetadata + +The `DenomsMetadata` endpoint allows users to query metadata for all coin denominations. 
+ +```shell +cosmos.bank.v1beta1.Query/DenomsMetadata +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomsMetadata +``` + +Example Output: + +```json expandable +{ + "metadatas": [ + { + "description": "native staking token of simulation app", + "denomUnits": [ + { + "denom": "stake", + "aliases": [ + "STAKE" + ] + } + ], + "base": "stake", + "display": "stake", + "name": "SimApp Token", + "symbol": "STK" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### DenomOwners + +The `DenomOwners` endpoint allows users to query the account addresses that hold a particular coin denomination, along with their balances. + +```shell +cosmos.bank.v1beta1.Query/DenomOwners +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomOwners +``` + +Example Output: + +```json expandable +{ + "denomOwners": [ + { + "address": "cosmos1..", + "balance": { + "denom": "stake", + "amount": "5000000000" + } + }, + { + "address": "cosmos1..", + "balance": { + "denom": "stake", + "amount": "5000000000" + } + } + ], + "pagination": { + "total": "2" + } +} +``` + +### TotalSupply + +The `TotalSupply` endpoint allows users to query the total supply of all coins. + +```shell +cosmos.bank.v1beta1.Query/TotalSupply +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/TotalSupply +``` + +Example Output: + +```json expandable +{ + "supply": [ + { + "denom": "stake", + "amount": "10000000000" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### SupplyOf + +The `SupplyOf` endpoint allows users to query the total supply of a single coin. 
+ +```shell +cosmos.bank.v1beta1.Query/SupplyOf +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/SupplyOf +``` + +Example Output: + +```json +{ + "amount": { + "denom": "stake", + "amount": "10000000000" + } +} +``` + +### Params + +The `Params` endpoint allows users to query the parameters of the `bank` module. + +```shell +cosmos.bank.v1beta1.Query/Params +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/Params +``` + +Example Output: + +```json +{ + "params": { + "defaultSendEnabled": true + } +} +``` + +### SendEnabled + +The `SendEnabled` endpoint allows users to query the SendEnabled entries of the `bank` module. + +Any denominations NOT returned use the `Params.DefaultSendEnabled` value. + +```shell +cosmos.bank.v1beta1.Query/SendEnabled +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/SendEnabled +``` + +Example Output: + +```json expandable +{ + "send_enabled": [ + { + "denom": "foocoin", + "enabled": true + }, + { + "denom": "barcoin" + } + ], + "pagination": { + "next-key": null, + "total": 2 + } +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/circuit/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/circuit/README.mdx new file mode 100644 index 00000000..ce05304a --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/circuit/README.mdx @@ -0,0 +1,151 @@ +--- +title: '`x/circuit`' +--- + +## Concepts + +Circuit Breaker is a module meant to avoid a chain having to halt or shut down in the presence of a vulnerability; instead, the module allows specific messages, or all messages, to be disabled. For an app-specific chain a halt is less detrimental, but when applications are built on top of the chain, halting is expensive due to the disruption it causes those applications. 
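The concept can be sketched in a few lines of Go. This is illustrative only — the `disabled` set and the `trip`/`reset`/`isAllowed` helpers are assumptions for this sketch, not the module's actual API:

```go
package main

import "fmt"

// Illustrative sketch: a set of disabled Msg type URLs is consulted before a
// message executes, so a vulnerable message type can be switched off without
// halting the whole chain.
var disabled = map[string]bool{}

// trip disables a Msg type URL; reset re-enables it.
func trip(typeURL string)  { disabled[typeURL] = true }
func reset(typeURL string) { delete(disabled, typeURL) }

// isAllowed reports whether a message of the given type URL may execute.
func isAllowed(typeURL string) bool { return !disabled[typeURL] }

func main() {
	trip("/cosmos.bank.v1beta1.MsgSend") // an authorized account trips the breaker
	fmt.Println(isAllowed("/cosmos.bank.v1beta1.MsgSend"))        // blocked
	fmt.Println(isAllowed("/cosmos.staking.v1beta1.MsgDelegate")) // unaffected
	reset("/cosmos.bank.v1beta1.MsgSend")
	fmt.Println(isAllowed("/cosmos.bank.v1beta1.MsgSend")) // allowed again
}
```

In the real module the check happens before message dispatch, and the disable list lives in state rather than in memory.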
+ +Circuit Breaker works with the idea that an address or set of addresses has the right to block messages from being executed and/or included in the mempool. Any address with a permission is able to reset the circuit breaker for the message. + +## State + +### Accounts + +* AccountPermissions `0x1 | account_address -> ProtocolBuffer(CircuitBreakerPermissions)` + +```go expandable +type level int32 + +const ( + / LEVEL_NONE_UNSPECIFIED indicates that the account will have no circuit + / breaker permissions. + LEVEL_NONE_UNSPECIFIED = iota + / LEVEL_SOME_MSGS indicates that the account will have permission to + / trip or reset the circuit breaker for some Msg type URLs. If this level + / is chosen, a non-empty list of Msg type URLs must be provided in + / limit_type_urls. + LEVEL_SOME_MSGS + / LEVEL_ALL_MSGS indicates that the account can trip or reset the circuit + / breaker for Msg's of all type URLs. + LEVEL_ALL_MSGS + / LEVEL_SUPER_ADMIN indicates that the account can take all circuit breaker + / actions and can grant permissions to other accounts. + LEVEL_SUPER_ADMIN +) + +type Access struct { + level int32 + msgs []string / if full permission, msgs can be empty +} +``` + +### Disable List + +List of type urls that are disabled. + +* DisableList `0x2 | msg_type_url -> []byte{}` {/* - should this be stored in json to skip encoding and decoding each block, does it matter? */} + +## State Transitions + +### Authorize + +Authorize is called by the module authority (default governance module account) or any account with `LEVEL_SUPER_ADMIN` to give permission to disable/enable messages to another account. There are three levels of permissions that can be granted. `LEVEL_SOME_MSGS` limits the number of messages that can be disabled. `LEVEL_ALL_MSGS` permits all messages to be disabled. `LEVEL_SUPER_ADMIN` allows an account to take all circuit breaker actions including authorizing and deauthorizing other accounts. 
+ +```protobuf + / AuthorizeCircuitBreaker allows a super-admin to grant (or revoke) another + / account's circuit breaker permissions. + rpc AuthorizeCircuitBreaker(MsgAuthorizeCircuitBreaker) returns (MsgAuthorizeCircuitBreakerResponse); +``` + +### Trip + +Trip is called by an authorized account to disable message execution for a specific msgURL. If empty, all messages will be disabled. + +```protobuf + / TripCircuitBreaker pauses processing of Msg's in the state machine. + rpc TripCircuitBreaker(MsgTripCircuitBreaker) returns (MsgTripCircuitBreakerResponse); +``` + +### Reset + +Reset is called by an authorized account to re-enable execution for a specific msgURL that was previously disabled. If empty, all disabled messages will be enabled. + +```protobuf + / ResetCircuitBreaker resumes processing of Msg's in the state machine that + / have been paused using TripCircuitBreaker. + rpc ResetCircuitBreaker(MsgResetCircuitBreaker) returns (MsgResetCircuitBreakerResponse); +``` + +## Messages + +### MsgAuthorizeCircuitBreaker + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L25-L75 +``` + +This message is expected to fail if: + +* the granter is not an account with permission level `LEVEL_SUPER_ADMIN` or the module authority + +### MsgTripCircuitBreaker + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L77-L93 +``` + +This message is expected to fail if: + +* the signer does not have a permission level with the ability to disable the specified type url message + +### MsgResetCircuitBreaker + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L95-109 +``` + +This message is expected to fail if: + +* the type url is not disabled + +## Events + +The circuit module emits the following events: + +### Message Events + +#### MsgAuthorizeCircuitBreaker + +| 
Type | Attribute Key | Attribute Value | +| ------- | ------------- | --------------------------- | +| string | granter | `{granterAddress}` | +| string | grantee | `{granteeAddress}` | +| string | permission | `{granteePermissions}` | +| message | module | circuit | +| message | action | authorize\_circuit\_breaker | + +#### MsgTripCircuitBreaker + +| Type | Attribute Key | Attribute Value | +| --------- | ------------- | ---------------------- | +| string | authority | `{authorityAddress}` | +| \[]string | msg\_urls | `{msgURLs}` | +| message | module | circuit | +| message | action | trip\_circuit\_breaker | + +#### ResetCircuitBreaker + +| Type | Attribute Key | Attribute Value | +| --------- | ------------- | ----------------------- | +| string | authority | `{authorityAddress}` | +| \[]string | msg\_urls | `{msgURLs}` | +| message | module | circuit | +| message | action | reset\_circuit\_breaker | + +## Keys + +The circuit module uses the following key prefixes: + +* `AccountPermissionPrefix` - `0x01` +* `DisableListPrefix` - `0x02` + +## Client diff --git a/docs/sdk/v0.47/documentation/module-system/modules/consensus/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/consensus/README.mdx new file mode 100644 index 00000000..0a377fe4 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/consensus/README.mdx @@ -0,0 +1,6 @@ +--- +title: '`x/consensus`' +description: Functionality to modify CometBFT's ABCI consensus params. +--- + +Functionality to modify CometBFT's ABCI consensus params. 
diff --git a/docs/sdk/v0.47/documentation/module-system/modules/crisis/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/crisis/README.mdx new file mode 100644 index 00000000..e96a0cf2 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/crisis/README.mdx @@ -0,0 +1,128 @@ +--- +title: '`x/crisis`' +description: >- + The crisis module halts the blockchain under the circumstance that a + blockchain invariant is broken. Invariants can be registered with the + application during the application initialization process. +--- + +## Overview + +The crisis module halts the blockchain under the circumstance that a blockchain +invariant is broken. Invariants can be registered with the application during the +application initialization process. + +## Contents + +* [State](#state) +* [Messages](#messages) +* [Events](#events) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + +## State + +### ConstantFee + +Due to the anticipated large gas cost requirement to verify an invariant (and +potential to exceed the maximum allowable block gas limit) a constant fee is +used instead of the standard gas consumption method. The constant fee is +intended to be larger than the anticipated gas cost of running the invariant +with the standard gas consumption method. + +The ConstantFee param is stored in the module params state with the prefix of `0x01`; +it can be updated with governance or the address with authority. + +* Params: `crisis/params -> legacy_amino(sdk.Coin)` + +## Messages + +In this section we describe the processing of the crisis messages and the +corresponding updates to the state. + +### MsgVerifyInvariant + +Blockchain invariants can be checked using the `MsgVerifyInvariant` message. + +```protobuf +// MsgVerifyInvariant represents a message to verify a particular invariance. 
+message MsgVerifyInvariant { + option (cosmos.msg.v1.signer) = "sender"; + option (amino.name) = "cosmos-sdk/MsgVerifyInvariant"; + + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + // sender is the account address of private key to send coins to fee collector account. + string sender = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // name of the invariant module. + string invariant_module_name = 2; + + // invariant_route is the msg's invariant route. + string invariant_route = 3; +} +``` + +This message is expected to fail if: + +* the sender does not have enough coins for the constant fee +* the invariant route is not registered + +This message checks the invariant provided, and if the invariant is broken it +panics, halting the blockchain. If the invariant is broken, the constant fee is +never deducted as the transaction is never committed to a block (equivalent to +being refunded). However, if the invariant is not broken, the constant fee will +not be refunded. + +## Events + +The crisis module emits the following events: + +### Handlers + +#### MsgVerifyInvariant + +| Type | Attribute Key | Attribute Value | +| --------- | ------------- | ----------------- | +| invariant | route | `{invariantRoute}` | +| message | module | crisis | +| message | action | verify\_invariant | +| message | sender | `{senderAddress}` | + +## Parameters + +The crisis module contains the following parameters: + +| Key | Type | Example | +| ----------- | ------------- | --------------------------------- | +| ConstantFee | object (coin) | `{"denom":"uatom","amount":"1000"}` | + +## Client + +### CLI + +A user can query and interact with the `crisis` module using the CLI. + +#### Transactions + +The `tx` commands allow users to interact with the `crisis` module. 
+ +```bash +simd tx crisis --help +``` + +##### invariant-broken + +The `invariant-broken` command submits proof that an invariant was broken in order to halt the chain. + +```bash +simd tx crisis invariant-broken [module-name] [invariant-route] [flags] +``` + +Example: + +```bash +simd tx crisis invariant-broken bank total-supply --from=[keyname or address] +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/distribution/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/distribution/README.mdx new file mode 100644 index 00000000..af798786 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/distribution/README.mdx @@ -0,0 +1,1128 @@ +--- +title: '`x/distribution`' +--- + +## Overview + +This *simple* distribution mechanism describes a functional way to passively +distribute rewards between validators and delegators. Note that this mechanism does +not distribute funds as precisely as active reward distribution mechanisms and +will therefore be upgraded in the future. + +The mechanism operates as follows. Collected rewards are pooled globally and +divided out passively to validators and delegators. Each validator has the +opportunity to charge commission to the delegators on the rewards collected on +behalf of the delegators. Fees are collected directly into a global reward pool +and validator proposer-reward pool. Due to the nature of passive accounting, +whenever changes to parameters which affect the rate of reward distribution +occur, withdrawal of rewards must also occur. + +* Whenever withdrawing, one must withdraw the maximum amount they are entitled + to, leaving nothing in the pool. +* Whenever bonding, unbonding, or re-delegating tokens to an existing account, a + full withdrawal of the rewards must occur (as the rules for lazy accounting + change). +* Whenever a validator chooses to change the commission on rewards, all accumulated + commission rewards must be simultaneously withdrawn. 
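The withdraw-before-change rule above can be illustrated with a minimal sketch. The `delegation` type and its methods are illustrative, not the SDK's actual types:

```go
package main

import "fmt"

// Minimal sketch of the lazy-accounting rule: any change to a delegation
// first triggers a full withdrawal, leaving nothing in the pool, so
// accumulation restarts cleanly under the new parameters.
type delegation struct {
	stake int64 // bonded tokens
	accum int64 // rewards accrued since the last withdrawal
}

// withdraw pays out the maximum entitlement and zeroes the accumulator.
func (d *delegation) withdraw() int64 {
	paid := d.accum
	d.accum = 0
	return paid
}

// bond adds stake; the rules for lazy accounting change, so it settles first.
func (d *delegation) bond(amount int64) int64 {
	paid := d.withdraw()
	d.stake += amount
	return paid
}

func main() {
	d := &delegation{stake: 100, accum: 7}
	paid := d.bond(50)
	fmt.Println(paid, d.stake, d.accum) // 7 150 0: full payout, new stake, empty pool
}
```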
+ +The above scenarios are covered in `hooks.md`. + +The distribution mechanism outlined herein is used to lazily distribute the +following rewards between validators and associated delegators: + +* multi-token fees to be socially distributed +* inflated staked asset provisions +* validator commission on all rewards earned by their delegators' stake + +Fees are pooled within a global pool. The mechanisms used allow for validators +and delegators to independently and lazily withdraw their rewards. + +## Shortcomings + +As a part of the lazy computations, each delegator holds an accumulation term +specific to each validator which is used to estimate the approximate +fair portion of the tokens held in the global fee pool that is owed to them. + +```text +entitlement = delegator-accumulation / all-delegators-accumulation +``` + +Under the circumstance that there was constant and equal flow of incoming +reward tokens every block, this distribution mechanism would be equal to the +active distribution (distribute individually to all delegators each block). +However, this is unrealistic so deviations from the active distribution will +occur based on fluctuations of incoming reward tokens as well as timing of +reward withdrawal by other delegators. + +If you happen to know that incoming rewards are about to significantly increase, +you are incentivized to not withdraw until after this event, increasing the +worth of your existing *accum*. See [#2764](https://github.com/cosmos/cosmos-sdk/issues/2764) +for further details. + +## Effect on Staking + +Charging commission on Atom provisions while also allowing for Atom-provisions +to be auto-bonded (distributed directly to the validator's bonded stake) is +problematic within BPoS. Fundamentally, these two mechanisms are mutually +exclusive. 
If both commission and auto-bonding mechanisms are simultaneously +applied to the staking-token then the distribution of staking-tokens between +any validator and its delegators will change with each block. This then +necessitates a calculation for each delegation record for each block, +which is considered computationally expensive. + +In conclusion, we can only have Atom commission and unbonded atom +provisions or bonded atom provisions with no Atom commission, and we elect to +implement the former. Stakeholders wishing to rebond their provisions may elect +to set up a script to periodically withdraw and rebond rewards. + +## Contents + +* [Concepts](#concepts) +* [State](#state) + * [FeePool](#feepool) + * [Validator Distribution](#validator-distribution) + * [Delegation Distribution](#delegation-distribution) + * [Params](#params) +* [Begin Block](#begin-block) +* [Messages](#messages) +* [Hooks](#hooks) +* [Events](#events) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + +## Concepts + +In Proof of Stake (PoS) blockchains, rewards gained from transaction fees are paid to validators. The fee distribution module fairly distributes the rewards to the validators' constituent delegators. + +Rewards are calculated per period. The period is updated each time a validator's delegation changes, for example, when the validator receives a new delegation. +The rewards for a single validator can then be calculated by taking the total rewards for the period before the delegation started, minus the current total rewards. +To learn more, see the [F1 Fee Distribution paper](https://github.com/cosmos/cosmos-sdk/tree/main/docs/spec/fee_distribution/f1_fee_distr.pdf). + +The commission to the validator is paid when the validator is removed or when the validator requests a withdrawal. +The commission is calculated and incremented at every `BeginBlock` operation to update accumulated fee amounts. 
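The per-period calculation described above can be sketched as follows. Names and numbers are illustrative (see the F1 paper for the real construction); the idea is that each period records a cumulative rewards-per-token ratio, and a delegation's reward is its stake times the ratio difference between its end and start periods:

```go
package main

import "fmt"

// cumulativeRatio[p] sketches the cumulative rewards accrued per unit of
// stake up to period p (the SDK stores these as decimals, not floats).
var cumulativeRatio = map[int]float64{
	10: 1.50, // ratio when the delegation was created (period 10)
	25: 2.25, // ratio when the withdrawal occurs (period 25)
}

// rewards returns what a delegation of `stake` tokens accrued between the
// period it started and the period it ended.
func rewards(stake float64, startPeriod, endPeriod int) float64 {
	return stake * (cumulativeRatio[endPeriod] - cumulativeRatio[startPeriod])
}

func main() {
	fmt.Println(rewards(100, 10, 25)) // 100 * (2.25 - 1.50) = 75
}
```

Slashes are handled the same way: they close a period, so rewards are computed separately before and after each slash.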
+ +The rewards to a delegator are distributed when the delegation is changed or removed, or a withdrawal is requested. +Before rewards are distributed, all slashes to the validator that occurred during the current delegation are applied. + +### Reference Counting in F1 Fee Distribution + +In F1 fee distribution, the rewards a delegator receives are calculated when their delegation is withdrawn. This calculation must read the terms of the summation of rewards divided by the share of tokens from the period which they ended when they delegated, and the final period that was created for the withdrawal. + +Additionally, as slashes change the amount of tokens a delegation will have (but we calculate this lazily, +only when a delegator un-delegates), we must calculate rewards in separate periods before / after any slashes +which occurred in between when a delegator delegated and when they withdrew their rewards. Thus slashes, like +delegations, reference the period which was ended by the slash event. + +All stored historical rewards records for periods which are no longer referenced by any delegations +or any slashes can thus be safely removed, as they will never be read (future delegations and future +slashes will always reference future periods). This is implemented by tracking a `ReferenceCount` +along with each historical reward storage entry. Each time a new object (delegation or slash) +is created which might need to reference the historical record, the reference count is incremented. +Each time one object which previously needed to reference the historical record is deleted, the reference +count is decremented. If the reference count hits zero, the historical record is deleted. + +## State + +### FeePool + +All globally tracked parameters for distribution are stored within +`FeePool`. Rewards are collected and added to the reward pool and +distributed to validators/delegators from here. 
+ +Note that the reward pool holds decimal coins (`DecCoins`) to allow +for fractions of coins to be received from operations like inflation. +When coins are distributed from the pool they are truncated back to +`sdk.Coins` which are non-decimal. + +* FeePool: `0x00 -> ProtocolBuffer(FeePool)` + +```go +/ coins with decimal +type DecCoins []DecCoin + +type DecCoin struct { + Amount math.LegacyDec + Denom string +} +``` + +```protobuf +// FeePool is the global fee pool for distribution. +message FeePool { + repeated cosmos.base.v1beta1.DecCoin community_pool = 1 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.DecCoins" + ]; +} +``` + +### Validator Distribution + +Validator distribution information for the relevant validator is updated each time: + +1. delegation amount to a validator is updated, +2. any delegator withdraws from a validator, or +3. the validator withdraws its commission. + +* ValidatorDistInfo: `0x02 | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(validatorDistribution)` + +```go +type ValidatorDistInfo struct { + OperatorAddress sdk.AccAddress + SelfBondRewards sdkmath.DecCoins + ValidatorCommission types.ValidatorAccumulatedCommission +} +``` + +### Delegation Distribution + +Each delegation distribution only needs to record the height at which it last +withdrew fees. Because a delegation must withdraw fees each time its +properties change (e.g. bonded tokens), its properties will remain constant +and the delegator's *accumulation* factor can be calculated passively knowing +only the height of the last withdrawal and its current properties. 
+
+* DelegationDistInfo: `0x02 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(delegatorDist)`
+
+```go
+type DelegationDistInfo struct {
+    WithdrawalHeight int64 / last time this delegation withdrew rewards
+}
+```
+
+### Params
+
+The distribution module stores its params in state with the prefix `0x09`.
+They can be updated via governance or by the address with authority.
+
+* Params: `0x09 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params defines the set of params for the distribution module.
+message Params {
+  option (amino.name)                 = "cosmos-sdk/x/distribution/Params";
+  option (gogoproto.goproto_stringer) = false;
+
+  string community_tax = 1 [
+    (cosmos_proto.scalar)  = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable)   = false
+  ];
+
+  // Deprecated: The base_proposer_reward field is deprecated and is no longer used
+  // in the x/distribution module's reward mechanism.
+  string base_proposer_reward = 2 [
+    (cosmos_proto.scalar)  = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable)   = false,
+    deprecated             = true
+  ];
+
+  // Deprecated: The bonus_proposer_reward field is deprecated and is no longer used
+  // in the x/distribution module's reward mechanism.
+  string bonus_proposer_reward = 3 [
+    (cosmos_proto.scalar)  = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable)   = false,
+    deprecated             = true
+  ];
+
+  bool withdraw_addr_enabled = 4;
+}
+```
+
+## Begin Block
+
+At each `BeginBlock`, all fees received in the previous block are transferred to
+the distribution `ModuleAccount`. When a delegator or validator
+withdraws their rewards, they are taken out of the `ModuleAccount`. During begin
+block, the different claims on the fees collected are updated as follows:
+
+* The reserve community tax is charged.
+* The remainder is distributed proportionally by voting power to all bonded validators.
+
+### The Distribution Scheme
+
+See [params](#params) for a description of the parameters.
+
+Let `fees` be the total fees collected in the previous block, including
+inflationary rewards to the stake. All fees are collected in a specific module
+account during the block. During `BeginBlock`, they are sent to the
+`"distribution"` `ModuleAccount`. No other sending of tokens occurs. Instead, the
+rewards each account is entitled to are stored, and withdrawals can be triggered
+through the messages `FundCommunityPool`, `WithdrawValidatorCommission` and
+`WithdrawDelegatorReward`.
+
+#### Reward to the Community Pool
+
+The community pool gets `community_tax * fees`, plus any remaining dust left over
+after validator rewards, which are always rounded down to the nearest
+integer.
+
+#### Reward to the Validators
+
+The proposer receives no extra rewards. All fees are distributed among all the
+bonded validators, including the proposer, in proportion to their consensus power.
+
+```text
+powFrac = validator power / total bonded validator power
+voteMul = 1 - community_tax
+```
+
+All validators receive `fees * voteMul * powFrac`.
+
+#### Rewards to Delegators
+
+Each validator's rewards are distributed to its delegators. The validator also
+has a self-delegation that is treated like a regular delegation in
+distribution calculations.
+
+The validator sets a commission rate. The commission rate is flexible, but each
+validator sets a maximum rate and a maximum daily increase. These maximums cannot be exceeded, protecting delegators from sudden commission-rate increases that would capture all of the rewards.
+
+The outstanding rewards that the operator is entitled to are stored in
+`ValidatorAccumulatedCommission`, while the rewards the delegators are entitled
+to are stored in `ValidatorCurrentRewards`.
The [F1 fee distribution scheme](#concepts) is used to calculate the rewards per delegator as they +withdraw or update their delegation, and is thus not handled in `BeginBlock`. + +#### Example Distribution + +For this example distribution, the underlying consensus engine selects block proposers in +proportion to their power relative to the entire bonded power. + +All validators are equally performant at including pre-commits in their proposed +blocks. Then hold `(pre_commits included) / (total bonded validator power)` +constant so that the amortized block reward for the validator is `( validator power / total bonded power) * (1 - community tax rate)` of +the total rewards. Consequently, the reward for a single delegator is: + +```text +(delegator proportion of the validator power / validator power) * (validator power / total bonded power) + * (1 - community tax rate) * (1 - validator commission rate) += (delegator proportion of the validator power / total bonded power) * (1 - +community tax rate) * (1 - validator commission rate) +``` + +## Messages + +### MsgSetWithdrawAddress + +By default, the withdraw address is the delegator address. To change its withdraw address, a delegator must send a `MsgSetWithdrawAddress` message. +Changing the withdraw address is possible only if the parameter `WithdrawAddrEnabled` is set to `true`. + +The withdraw address cannot be any of the module accounts. These accounts are blocked from being withdraw addresses by being added to the distribution keeper's `blockedAddrs` array at initialization. + +Response: + +```protobuf +// MsgSetWithdrawAddress sets the withdraw address for +// a delegator (or validator self-delegation). 
+message MsgSetWithdrawAddress {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name)           = "cosmos-sdk/MsgModifyWithdrawAddress";
+
+  option (gogoproto.equal)           = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string withdraw_address  = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+```go
+func (k Keeper) SetWithdrawAddr(ctx context.Context, delegatorAddr sdk.AccAddress, withdrawAddr sdk.AccAddress) error {
+    if k.blockedAddrs[withdrawAddr.String()] {
+        fail with "`{withdrawAddr}` is not allowed to receive external funds"
+    }
+    if !k.GetWithdrawAddrEnabled(ctx) {
+        fail with `ErrSetWithdrawAddrDisabled`
+    }
+    k.SetDelegatorWithdrawAddr(ctx, delegatorAddr, withdrawAddr)
+}
+```
+
+### MsgWithdrawDelegatorReward
+
+A delegator can withdraw its rewards.
+Internally in the distribution module, this transaction simultaneously removes the previous delegation with associated rewards, the same as if the delegator simply started a new delegation of the same value.
+The rewards are sent immediately from the distribution `ModuleAccount` to the withdraw address.
+Any remainder (truncated decimals) is sent to the community pool.
+The starting height of the delegation is set to the current validator period, and the reference count for the previous period is decremented.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+
+In the F1 distribution, the total rewards are calculated per validator period, and a delegator receives a piece of those rewards in proportion to their stake in the validator.
+In basic F1, the total rewards that all the delegators are entitled to between two periods is calculated the following way.
+Let `R(X)` be the total accumulated rewards up to period `X` divided by the tokens staked at that time. The delegator allocation is `R(X) * delegator_stake`.
+Then the rewards for all the delegators for staking between periods `A` and `B` are `(R(B) - R(A)) * total stake`.
+However, these calculated rewards don't account for slashing.
+
+Taking the slashes into account requires iteration.
+Let `F(X)` be the fraction a validator is to be slashed for a slashing event that happened at period `X`.
+If the validator was slashed at periods `P1, ..., PN`, where `A < P1` and `PN < B`, the distribution module calculates the individual delegator's rewards, `T(A, B)`, as follows:
+
+```go
+stake := initial stake
+rewards := 0
+previous := A
+for P in P1, ..., PN:
+    rewards = rewards + (R(P) - R(previous)) * stake
+    stake = stake * (1 - F(P))
+    previous = P
+rewards = rewards + (R(B) - R(PN)) * stake
+```
+
+The historical rewards are calculated retroactively by playing back all the slashes and attenuating the delegator's stake at each step.
+The final calculated stake is equivalent to the actual staked coins in the delegation, up to a margin of error due to rounding.
+
+Response:
+
+```protobuf
+// MsgWithdrawDelegatorReward represents delegation withdrawal to a delegator
+// from a single validator.
+message MsgWithdrawDelegatorReward {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name)           = "cosmos-sdk/MsgWithdrawDelegationReward";
+
+  option (gogoproto.equal)           = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+### WithdrawValidatorCommission
+
+The validator can send the WithdrawValidatorCommission message to withdraw their accumulated commission.
+The commission is calculated in every block during `BeginBlock`, so no iteration is required to withdraw.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+Only integer amounts can be sent.
If the accumulated rewards have decimals, the amount is truncated before the withdrawal is sent, and the remainder is left to be withdrawn later.
+
+### FundCommunityPool
+
+This message sends coins directly from the sender to the community pool.
+
+The transaction fails if the amount cannot be transferred from the sender to the distribution module account.
+
+```go expandable
+func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error {
+    if err := k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount); err != nil {
+        return err
+    }
+
+    feePool := k.GetFeePool(ctx)
+    feePool.CommunityPool = feePool.CommunityPool.Add(sdk.NewDecCoinsFromCoins(amount...)...)
+    k.SetFeePool(ctx, feePool)
+
+    return nil
+}
+```
+
+### Common distribution operations
+
+These operations take place during many different messages.
+
+#### Initialize delegation
+
+Each time a delegation is changed, the rewards are withdrawn and the delegation is reinitialized.
+Initializing a delegation increments the validator period and keeps track of the starting period of the delegation.
+
+```go expandable
+/ initialize starting info for a new delegation
+func (k Keeper) initializeDelegation(ctx context.Context, val sdk.ValAddress, del sdk.AccAddress) {
+    / period has already been incremented - we want to store the period ended by this delegation action
+    previousPeriod := k.GetValidatorCurrentRewards(ctx, val).Period - 1
+
+    / increment reference count for the period we're going to track
+    k.incrementReferenceCount(ctx, val, previousPeriod)
+
+    validator := k.stakingKeeper.Validator(ctx, val)
+    delegation := k.stakingKeeper.Delegation(ctx, del, val)
+
+    / calculate delegation stake in tokens
+    / we don't store directly, so multiply delegation shares * (tokens per share)
+    / note: necessary to truncate so we don't allow withdrawing more rewards than owed
+    stake := validator.TokensFromSharesTruncated(delegation.GetShares())
+
+    k.SetDelegatorStartingInfo(ctx, val, del, types.NewDelegatorStartingInfo(previousPeriod, stake, uint64(ctx.BlockHeight())))
+}
+```
+
+### MsgUpdateParams
+
+Distribution module params can be updated through `MsgUpdateParams`, which can be done via a governance proposal. The signer is always the gov module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name)           = "cosmos-sdk/distribution/MsgUpdateParams";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // params defines the x/distribution parameters to update.
+  //
+  // NOTE: All parameters must be supplied.
+  Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+The message handling can fail if:
+
+* the signer is not the gov module account address.
+
+## Hooks
+
+Available hooks that can be called by and from this module.
+ +### Create or modify delegation distribution + +* triggered-by: `staking.MsgDelegate`, `staking.MsgBeginRedelegate`, `staking.MsgUndelegate` + +#### Before + +* The delegation rewards are withdrawn to the withdraw address of the delegator. + The rewards include the current period and exclude the starting period. +* The validator period is incremented. + The validator period is incremented because the validator's power and share distribution might have changed. +* The reference count for the delegator's starting period is decremented. + +#### After + +The starting height of the delegation is set to the previous period. +Because of the `Before`-hook, this period is the last period for which the delegator was rewarded. + +### Validator created + +* triggered-by: `staking.MsgCreateValidator` + +When a validator is created, the following validator variables are initialized: + +* Historical rewards +* Current accumulated rewards +* Accumulated commission +* Total outstanding rewards +* Period + +By default, all values are set to a `0`, except period, which is set to `1`. + +### Validator removed + +* triggered-by: `staking.RemoveValidator` + +Outstanding commission is sent to the validator's self-delegation withdrawal address. +Remaining delegator rewards get sent to the community fee pool. + +Note: The validator gets removed only when it has no remaining delegations. +At that time, all outstanding delegator rewards will have been withdrawn. +Any remaining rewards are dust amounts. + +### Validator is slashed + +* triggered-by: `staking.Slash` +* The current validator period reference count is incremented. + The reference count is incremented because the slash event has created a reference to it. +* The validator period is incremented. +* The slash event is stored for later use. + The slash event will be referenced when calculating delegator rewards. 
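The slash events recorded by this hook are exactly what the F1 withdrawal calculation replays. Below is a minimal, self-contained sketch of that replay, using plain `float64` arithmetic instead of the SDK's `Dec` types; the function and type names are illustrative, not the SDK's.

```go
package main

import "fmt"

// slashEvent records the period a slash ended and F(X), the fraction
// of stake removed by the slash.
type slashEvent struct {
	period   int
	fraction float64
}

// rewardsBetween computes a delegator's rewards for staking between
// periods a and b, replaying any slashes that occurred in between.
// r maps a period to R(X): cumulative rewards-per-token at that period.
func rewardsBetween(r map[int]float64, a, b int, stake float64, slashes []slashEvent) float64 {
	rewards := 0.0
	previous := a
	for _, s := range slashes {
		// rewards accrued at the pre-slash stake
		rewards += (r[s.period] - r[previous]) * stake
		// attenuate the stake by the slashed fraction
		stake *= 1 - s.fraction
		previous = s.period
	}
	// final segment up to the withdrawal period
	rewards += (r[b] - r[previous]) * stake
	return rewards
}

func main() {
	// R(X) at period boundaries 0, 5 and 10.
	r := map[int]float64{0: 0, 5: 2.0, 10: 3.0}
	// No slashes: (R(10) - R(0)) * 100 = 300.
	fmt.Println(rewardsBetween(r, 0, 10, 100, nil))
	// 50% slash at period 5: (2-0)*100 + (3-2)*50 = 250.
	fmt.Println(rewardsBetween(r, 0, 10, 100, []slashEvent{{5, 0.5}}))
}
```

Each slash attenuates the stake used for subsequent periods, which is why the hook must store the slash fraction together with the period it ended.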
+ +## Events + +The distribution module emits the following events: + +### BeginBlocker + +| Type | Attribute Key | Attribute Value | +| ---------------- | ------------- | ------------------ | +| proposer\_reward | validator | `{validatorAddress}` | +| proposer\_reward | reward | `{proposerReward}` | +| commission | amount | `{commissionAmount}` | +| commission | validator | `{validatorAddress}` | +| rewards | amount | `{rewardAmount}` | +| rewards | validator | `{validatorAddress}` | + +### Handlers + +#### MsgSetWithdrawAddress + +| Type | Attribute Key | Attribute Value | +| ---------------------- | ----------------- | ---------------------- | +| set\_withdraw\_address | withdraw\_address | `{withdrawAddress}` | +| message | module | distribution | +| message | action | set\_withdraw\_address | +| message | sender | `{senderAddress}` | + +#### MsgWithdrawDelegatorReward + +| Type | Attribute Key | Attribute Value | +| ----------------- | ------------- | --------------------------- | +| withdraw\_rewards | amount | `{rewardAmount}` | +| withdraw\_rewards | validator | `{validatorAddress}` | +| message | module | distribution | +| message | action | withdraw\_delegator\_reward | +| message | sender | `{senderAddress}` | + +#### MsgWithdrawValidatorCommission + +| Type | Attribute Key | Attribute Value | +| -------------------- | ------------- | ------------------------------- | +| withdraw\_commission | amount | `{commissionAmount}` | +| message | module | distribution | +| message | action | withdraw\_validator\_commission | +| message | sender | `{senderAddress}` | + +## Parameters + +The distribution module contains the following parameters: + +| Key | Type | Example | +| ------------------- | ------------ | --------------------------- | +| communitytax | string (dec) | "0.020000000000000000" \[0] | +| withdrawaddrenabled | bool | true | + +* \[0] `communitytax` must be positive and cannot exceed 1.00. 
+* `baseproposerreward` and `bonusproposerreward` are deprecated parameters as of v0.47 and are no longer used.
+
+
+The reserve pool is the pool of collected funds for use by governance taken via the `CommunityTax`.
+Currently with the Cosmos SDK, tokens collected by the CommunityTax are accounted for but unspendable.
+
+
+## Client
+
+### CLI
+
+A user can query and interact with the `distribution` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `distribution` state.
+
+```shell
+simd query distribution --help
+```
+
+##### commission
+
+The `commission` command allows users to query validator commission rewards by address.
+
+```shell
+simd query distribution commission [address] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution commission cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### community-pool
+
+The `community-pool` command allows users to query all coin balances within the community pool.
+
+```shell
+simd query distribution community-pool [flags]
+```
+
+Example:
+
+```shell
+simd query distribution community-pool
+```
+
+Example Output:
+
+```yml
+pool:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### params
+
+The `params` command allows users to query the parameters of the `distribution` module.
+
+```shell
+simd query distribution params [flags]
+```
+
+Example:
+
+```shell
+simd query distribution params
+```
+
+Example Output:
+
+```yml
+base_proposer_reward: "0.000000000000000000"
+bonus_proposer_reward: "0.000000000000000000"
+community_tax: "0.020000000000000000"
+withdraw_addr_enabled: true
+```
+
+##### rewards
+
+The `rewards` command allows users to query delegator rewards. Users can optionally include the validator address to query rewards earned from a specific validator.
+ +```shell +simd query distribution rewards [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```shell +simd query distribution rewards cosmos1... +``` + +Example Output: + +```yml +rewards: +- reward: + - amount: "1000000.000000000000000000" + denom: stake + validator_address: cosmosvaloper1.. +total: +- amount: "1000000.000000000000000000" + denom: stake +``` + +##### slashes + +The `slashes` command allows users to query all slashes for a given block range. + +```shell +simd query distribution slashes [validator] [start-height] [end-height] [flags] +``` + +Example: + +```shell +simd query distribution slashes cosmosvaloper1... 1 1000 +``` + +Example Output: + +```yml +pagination: + next_key: null + total: "0" +slashes: +- validator_period: 20, + fraction: "0.009999999999999999" +``` + +##### validator-outstanding-rewards + +The `validator-outstanding-rewards` command allows users to query all outstanding (un-withdrawn) rewards for a validator and all their delegations. + +```shell +simd query distribution validator-outstanding-rewards [validator] [flags] +``` + +Example: + +```shell +simd query distribution validator-outstanding-rewards cosmosvaloper1... +``` + +Example Output: + +```yml +rewards: +- amount: "1000000.000000000000000000" + denom: stake +``` + +##### validator-distribution-info + +The `validator-distribution-info` command allows users to query validator commission and self-delegation rewards for validator. + +```shell +simd query distribution validator-distribution-info cosmosvaloper1... +``` + +Example Output: + +```yml +commission: +- amount: "100000.000000000000000000" + denom: stake +operator_address: cosmosvaloper1... +self_bond_rewards: +- amount: "100000.000000000000000000" + denom: stake +``` + +#### Transactions + +The `tx` commands allow users to interact with the `distribution` module. 
+
+```shell
+simd tx distribution --help
+```
+
+##### fund-community-pool
+
+The `fund-community-pool` command allows users to send funds to the community pool.
+
+```shell
+simd tx distribution fund-community-pool [amount] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution fund-community-pool 100stake --from cosmos1...
+```
+
+##### set-withdraw-addr
+
+The `set-withdraw-addr` command allows users to set the withdraw address for rewards associated with a delegator address.
+
+```shell
+simd tx distribution set-withdraw-addr [withdraw-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution set-withdraw-addr cosmos1... --from cosmos1...
+```
+
+##### withdraw-all-rewards
+
+The `withdraw-all-rewards` command allows users to withdraw all rewards for a delegator.
+
+```shell
+simd tx distribution withdraw-all-rewards [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-all-rewards --from cosmos1...
+```
+
+##### withdraw-rewards
+
+The `withdraw-rewards` command allows users to withdraw all rewards from a given delegation address,
+and optionally withdraw validator commission if the delegation address given is a validator operator and the user provides the `--commission` flag.
+
+```shell
+simd tx distribution withdraw-rewards [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-rewards cosmosvaloper1... --from cosmos1... --commission
+```
+
+### gRPC
+
+A user can query the `distribution` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query parameters of the `distribution` module.
+ +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/Params +``` + +Example Output: + +```json +{ + "params": { + "communityTax": "20000000000000000", + "baseProposerReward": "00000000000000000", + "bonusProposerReward": "00000000000000000", + "withdrawAddrEnabled": true + } +} +``` + +#### ValidatorDistributionInfo + +The `ValidatorDistributionInfo` queries validator commission and self-delegation rewards for validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorDistributionInfo +``` + +Example Output: + +```json expandable +{ + "commission": { + "commission": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + }, + "self_bond_rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ], + "validator_address": "cosmosvalop1..." +} +``` + +#### ValidatorOutstandingRewards + +The `ValidatorOutstandingRewards` endpoint allows users to query rewards of a validator address. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorOutstandingRewards +``` + +Example Output: + +```json +{ + "rewards": { + "rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } +} +``` + +#### ValidatorCommission + +The `ValidatorCommission` endpoint allows users to query accumulated commission for a validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorCommission +``` + +Example Output: + +```json +{ + "commission": { + "commission": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } +} +``` + +#### ValidatorSlashes + +The `ValidatorSlashes` endpoint allows users to query slash events of a validator. 
+ +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorSlashes +``` + +Example Output: + +```json expandable +{ + "slashes": [ + { + "validator_period": "20", + "fraction": "0.009999999999999999" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### DelegationRewards + +The `DelegationRewards` endpoint allows users to query the total rewards accrued by a delegation. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1...","validator_address":"cosmosvalop1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegationRewards +``` + +Example Output: + +```json +{ + "rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] +} +``` + +#### DelegationTotalRewards + +The `DelegationTotalRewards` endpoint allows users to query the total rewards accrued by each validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegationTotalRewards +``` + +Example Output: + +```json expandable +{ + "rewards": [ + { + "validatorAddress": "cosmosvaloper1...", + "reward": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } + ], + "total": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` endpoint allows users to query all validators for given delegator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegatorValidators +``` + +Example Output: + +```json +{ + "validators": [ + "cosmosvaloper1..." + ] +} +``` + +#### DelegatorWithdrawAddress + +The `DelegatorWithdrawAddress` endpoint allows users to query the withdraw address of a delegator. 
+ +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegatorWithdrawAddress +``` + +Example Output: + +```json +{ + "withdrawAddress": "cosmos1..." +} +``` + +#### CommunityPool + +The `CommunityPool` endpoint allows users to query the community pool coins. + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/CommunityPool +``` + +Example Output: + +```json +{ + "pool": [ + { + "denom": "stake", + "amount": "1000000000000000000" + } + ] +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/evidence/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/evidence/README.mdx new file mode 100644 index 00000000..af647a9c --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/evidence/README.mdx @@ -0,0 +1,624 @@ +--- +title: '`x/evidence`' +description: Concepts State Messages Events Parameters BeginBlock Client CLI REST gRPC +--- + +* [Concepts](#concepts) +* [State](#state) +* [Messages](#messages) +* [Events](#events) +* [Parameters](#parameters) +* [BeginBlock](#beginblock) +* [Client](#client) + * [CLI](#cli) + * [REST](#rest) + * [gRPC](#grpc) + +## Abstract + +`x/evidence` is an implementation of a Cosmos SDK module, per [ADR 009](docs/sdk/next/documentation/legacy/adr-comprehensive), +that allows for the submission and handling of arbitrary evidence of misbehavior such +as equivocation and counterfactual signing. + +The evidence module differs from standard evidence handling which typically expects the +underlying consensus engine, e.g. CometBFT, to automatically submit evidence when +it is discovered by allowing clients and foreign chains to submit more complex evidence +directly. + +All concrete evidence types must implement the `Evidence` interface contract. 
Submitted
+`Evidence` is first routed through the evidence module's `Router` in which it attempts
+to find a corresponding registered `Handler` for that specific `Evidence` type.
+Each `Evidence` type must have a `Handler` registered with the evidence module's
+keeper in order for it to be successfully routed and executed.
+
+Each corresponding handler must also fulfill the `Handler` interface contract. The
+`Handler` for a given `Evidence` type can perform any arbitrary state transitions
+such as slashing, jailing, and tombstoning.
+
+## Concepts
+
+### Evidence
+
+Any concrete type of evidence submitted to the `x/evidence` module must fulfill the
+`Evidence` contract outlined below. Not all concrete types of evidence will fulfill
+this contract in the same way and some data may be entirely irrelevant to certain
+types of evidence. An additional `ValidatorEvidence`, which extends `Evidence`,
+has also been created to define a contract for evidence against malicious validators.
+
+```go expandable
+/ Evidence defines the contract which concrete evidence types of misbehavior
+/ must implement.
+type Evidence interface {
+    proto.Message
+
+    Route() string
+    String() string
+    Hash() []byte
+    ValidateBasic() error
+
+    / Height at which the infraction occurred
+    GetHeight() int64
+}
+
+/ ValidatorEvidence extends Evidence interface to define contract
+/ for evidence against malicious validators
+type ValidatorEvidence interface {
+    Evidence
+
+    / The consensus address of the malicious validator at time of infraction
+    GetConsensusAddress() sdk.ConsAddress
+
+    / The total power of the malicious validator at time of infraction
+    GetValidatorPower() int64
+
+    / The total validator set power at time of infraction
+    GetTotalPower() int64
+}
+```
+
+### Registration & Handling
+
+The `x/evidence` module must first know about all types of evidence it is expected
+to handle.
This is accomplished by registering the `Route` method in the `Evidence`
+contract with what is known as a `Router` (defined below). The `Router` accepts
+`Evidence` and attempts to find the corresponding `Handler` for the `Evidence`
+via the `Route` method.
+
+```go
+type Router interface {
+    AddRoute(r string, h Handler) Router
+    HasRoute(r string) bool
+    GetRoute(path string) Handler
+    Seal()
+    Sealed() bool
+}
+```
+
+The `Handler` (defined below) is responsible for executing the entirety of the
+business logic for handling `Evidence`. This typically includes validating the
+evidence, both stateless checks via `ValidateBasic` and stateful checks via any
+keepers provided to the `Handler`. In addition, the `Handler` may also perform
+capabilities such as slashing and jailing a validator. All `Evidence` handled
+by the `Handler` should be persisted.
+
+```go
+/ Handler defines an agnostic Evidence handler. The handler is responsible
+/ for executing all corresponding business logic necessary for verifying the
+/ evidence as valid. In addition, the Handler may execute any necessary
+/ slashing and potential jailing.
+type Handler func(sdk.Context, Evidence) error
+```
+
+## State
+
+Currently the `x/evidence` module only stores valid submitted `Evidence` in state.
+The evidence state is also stored and exported in the `x/evidence` module's `GenesisState`.
+
+```protobuf
+/ GenesisState defines the evidence module's genesis state.
+message GenesisState {
+  / evidence defines all the evidence at genesis.
+  repeated google.protobuf.Any evidence = 1;
+}
+```
+
+All `Evidence` is retrieved and stored via a prefix `KVStore` using prefix `0x00` (`KeyPrefixEvidence`).
+
+## Messages
+
+### MsgSubmitEvidence
+
+Evidence is submitted through a `MsgSubmitEvidence` message:
+
+```protobuf
+/ MsgSubmitEvidence represents a message that supports submitting arbitrary
+/ Evidence of misbehavior such as equivocation or counterfactual signing.
+message MsgSubmitEvidence {
+  string submitter = 1;
+  google.protobuf.Any evidence = 2;
+}
+```
+
+Note, the `Evidence` of a `MsgSubmitEvidence` message must have a corresponding
+`Handler` registered with the `x/evidence` module's `Router` in order to be processed
+and routed correctly.
+
+Given the `Evidence` is registered with a corresponding `Handler`, it is processed
+as follows:
+
+```go expandable
+func SubmitEvidence(ctx Context, evidence Evidence) error {
+    if _, ok := GetEvidence(ctx, evidence.Hash()); ok {
+        return errorsmod.Wrap(types.ErrEvidenceExists, strings.ToUpper(hex.EncodeToString(evidence.Hash())))
+    }
+    if !router.HasRoute(evidence.Route()) {
+        return errorsmod.Wrap(types.ErrNoEvidenceHandlerExists, evidence.Route())
+    }
+
+    handler := router.GetRoute(evidence.Route())
+    if err := handler(ctx, evidence); err != nil {
+        return errorsmod.Wrap(types.ErrInvalidEvidence, err.Error())
+    }
+
+    ctx.EventManager().EmitEvent(
+        sdk.NewEvent(
+            types.EventTypeSubmitEvidence,
+            sdk.NewAttribute(types.AttributeKeyEvidenceHash, strings.ToUpper(hex.EncodeToString(evidence.Hash()))),
+        ),
+    )
+
+    SetEvidence(ctx, evidence)
+
+    return nil
+}
+```
+
+First, there must not already exist valid submitted `Evidence` of the exact same
+type. Secondly, the `Evidence` is routed to the `Handler` and executed. Finally,
+if there is no error in handling the `Evidence`, an event is emitted and it is persisted to state.
+
+## Events
+
+The `x/evidence` module emits the following events:
+
+### Handlers
+
+#### MsgSubmitEvidence
+
+| Type             | Attribute Key  | Attribute Value   |
+| ---------------- | -------------- | ----------------- |
+| submit\_evidence | evidence\_hash | `{evidenceHash}`  |
+| message          | module         | evidence          |
+| message          | sender         | `{senderAddress}` |
+| message          | action         | submit\_evidence  |
+
+## Parameters
+
+The evidence module does not contain any parameters.
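The `Router`/`Handler` contract described under Registration & Handling can be illustrated with a toy implementation. This is a sketch only — the `router` type, `newRouter` and `submit` names are hypothetical, and the real keeper additionally seals the router and persists evidence:

```go
package main

import (
	"errors"
	"fmt"
)

// Handler executes the business logic for one evidence type.
type Handler func(evidence string) error

// router maps an evidence route (what Evidence.Route() returns)
// to its registered Handler.
type router struct {
	routes map[string]Handler
}

func newRouter() *router { return &router{routes: map[string]Handler{}} }

func (r *router) AddRoute(route string, h Handler) { r.routes[route] = h }

func (r *router) HasRoute(route string) bool { _, ok := r.routes[route]; return ok }

// submit mirrors the SubmitEvidence flow: reject evidence with no
// registered handler, otherwise run the handler.
func (r *router) submit(route, evidence string) error {
	h, ok := r.routes[route]
	if !ok {
		return errors.New("no evidence handler exists for route " + route)
	}
	return h(evidence)
}

func main() {
	r := newRouter()
	r.AddRoute("equivocation", func(ev string) error { return nil })
	fmt.Println(r.HasRoute("equivocation")) // registered route is found
	fmt.Println(r.submit("bogus", "ev"))    // unknown route is rejected with an error
}
```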
+ +## BeginBlock + +### Evidence Handling + +CometBFT blocks can include +[Evidence](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md#evidence) that indicates if a validator committed malicious behavior. The relevant information is forwarded to the application as ABCI Evidence in `abci.RequestBeginBlock` so that the validator can be punished accordingly. + +#### Equivocation + +The Cosmos SDK handles two types of evidence inside the ABCI `BeginBlock`: + +* `DuplicateVoteEvidence`, +* `LightClientAttackEvidence`. + +The evidence module handles these two evidence types the same way. First, the Cosmos SDK converts the CometBFT concrete evidence type to an SDK `Evidence` interface using `Equivocation` as the concrete type. + +```protobuf +// Equivocation implements the Evidence interface and defines evidence of double +// signing misbehavior. +message Equivocation { + option (amino.name) = "cosmos-sdk/Equivocation"; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.equal) = false; + + // height is the equivocation height. + int64 height = 1; + + // time is the equivocation time. + google.protobuf.Timestamp time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + + // power is the equivocation validator power. + int64 power = 3; + + // consensus_address is the equivocation validator consensus address. + string consensus_address = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +For some `Equivocation` submitted in `block` to be valid, it must satisfy: + +`Evidence.Timestamp >= block.Timestamp - MaxEvidenceAge` + +Where: + +* `Evidence.Timestamp` is the timestamp in the block at height `Evidence.Height` +* `block.Timestamp` is the current block timestamp. 

If valid `Equivocation` evidence is included in a block, the validator's stake is
reduced (slashed) by `SlashFractionDoubleSign` (as defined by the `x/slashing` module)
of what their stake was when the infraction occurred, rather than when the evidence was
discovered. We want to "follow the stake", i.e., the stake that contributed to the
infraction should be slashed, even if it has since been redelegated or started unbonding.

In addition, the validator is permanently jailed and tombstoned, making it impossible for that
validator to ever re-enter the validator set.

The `Equivocation` evidence is handled as follows:

```go expandable
package keeper

import (
	"fmt"

	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/x/evidence/types"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

// HandleEquivocationEvidence implements an equivocation evidence handler. Assuming the
// evidence is valid, the validator committing the misbehavior will be slashed,
// jailed and tombstoned. Once tombstoned, the validator will not be able to
// recover. Note, the evidence contains the block time and height at the time of
// the equivocation.
//
// The evidence is considered invalid if:
// - the evidence is too old
// - the validator is unbonded or does not exist
// - the signing info does not exist (will panic)
// - is already tombstoned
//
// TODO: Some of the invalid constraints listed above may need to be reconsidered
// in the case of a lunatic attack.
func (k Keeper) HandleEquivocationEvidence(ctx sdk.Context, evidence *types.Equivocation) {
	logger := k.Logger(ctx)
	consAddr := evidence.GetConsensusAddress()

	if _, err := k.slashingKeeper.GetPubkey(ctx, consAddr.Bytes()); err != nil {
		// Ignore evidence that cannot be handled.
		//
		// NOTE: We used to panic with:
		// `panic(fmt.Sprintf("Validator consensus-address %v not found", consAddr))`,
		// but this couples the expectations of the app to both Tendermint and
		// the simulator. Both are expected to provide the full range of
		// allowable but none of the disallowed evidence types. Instead of
		// getting this coordination right, it is easier to relax the
		// constraints and ignore evidence that cannot be handled.
		return
	}

	// calculate the age of the evidence
	infractionHeight := evidence.GetHeight()
	infractionTime := evidence.GetTime()
	ageDuration := ctx.BlockHeader().Time.Sub(infractionTime)
	ageBlocks := ctx.BlockHeader().Height - infractionHeight

	// Reject evidence if the double-sign is too old. Evidence is considered stale
	// if the difference in time and number of blocks is greater than the allowed
	// parameters defined.
	cp := ctx.ConsensusParams()
	if cp != nil && cp.Evidence != nil {
		if ageDuration > cp.Evidence.MaxAgeDuration && ageBlocks > cp.Evidence.MaxAgeNumBlocks {
			logger.Info(
				"ignored equivocation; evidence too old",
				"validator", consAddr,
				"infraction_height", infractionHeight,
				"max_age_num_blocks", cp.Evidence.MaxAgeNumBlocks,
				"infraction_time", infractionTime,
				"max_age_duration", cp.Evidence.MaxAgeDuration,
			)
			return
		}
	}

	validator := k.stakingKeeper.ValidatorByConsAddr(ctx, consAddr)
	if validator == nil || validator.IsUnbonded() {
		// Defensive: Simulation doesn't take unbonding periods into account, and
		// Tendermint might break this assumption at some point.
		return
	}

	if !validator.GetOperator().Empty() {
		if _, err := k.slashingKeeper.GetPubkey(ctx, consAddr.Bytes()); err != nil {
			// Ignore evidence that cannot be handled.
			//
			// NOTE: We used to panic with:
			// `panic(fmt.Sprintf("Validator consensus-address %v not found", consAddr))`,
			// but this couples the expectations of the app to both Tendermint and
			// the simulator. Both are expected to provide the full range of
			// allowable but none of the disallowed evidence types. Instead of
			// getting this coordination right, it is easier to relax the
			// constraints and ignore evidence that cannot be handled.
			return
		}
	}

	if ok := k.slashingKeeper.HasValidatorSigningInfo(ctx, consAddr); !ok {
		panic(fmt.Sprintf("expected signing info for validator %s but not found", consAddr))
	}

	// ignore if the validator is already tombstoned
	if k.slashingKeeper.IsTombstoned(ctx, consAddr) {
		logger.Info(
			"ignored equivocation; validator already tombstoned",
			"validator", consAddr,
			"infraction_height", infractionHeight,
			"infraction_time", infractionTime,
		)
		return
	}

	logger.Info(
		"confirmed equivocation",
		"validator", consAddr,
		"infraction_height", infractionHeight,
		"infraction_time", infractionTime,
	)

	// We need to retrieve the stake distribution which signed the block, so we
	// subtract ValidatorUpdateDelay from the evidence height.
	// Note, that this *can* result in a negative "distributionHeight", up to
	// -ValidatorUpdateDelay, i.e. at the end of the
	// pre-genesis block (none) = at the beginning of the genesis block.
	// That's fine since this is just used to filter unbonding delegations & redelegations.
	distributionHeight := infractionHeight - sdk.ValidatorUpdateDelay

	// Slash validator. The `power` is the int64 power of the validator as provided
	// to/by Tendermint. This value is validator.Tokens as sent to Tendermint via
	// ABCI, and now received as evidence. The fraction is passed in separately
	// to slash unbonding and rebonding delegations.
	k.slashingKeeper.SlashWithInfractionReason(
		ctx,
		consAddr,
		k.slashingKeeper.SlashFractionDoubleSign(ctx),
		evidence.GetValidatorPower(), distributionHeight,
		stakingtypes.Infraction_INFRACTION_DOUBLE_SIGN,
	)

	// Jail the validator if not already jailed. This will begin unbonding the
	// validator if not already unbonding (tombstoned).
	if !validator.IsJailed() {
		k.slashingKeeper.Jail(ctx, consAddr)
	}

	k.slashingKeeper.JailUntil(ctx, consAddr, types.DoubleSignJailEndTime)
	k.slashingKeeper.Tombstone(ctx, consAddr)
	k.SetEvidence(ctx, evidence)
}
```

**Note:** The slashing, jailing, and tombstoning calls are delegated through the `x/slashing` module
that emits informative events and finally delegates calls to the `x/staking` module. See documentation
on slashing and jailing in [State Transitions](docs/sdk/v0.47/documentation/module-system/modules/staking/README#state-transitions).

## Client

### CLI

A user can query and interact with the `evidence` module using the CLI.

#### Query

The `query` commands allow users to query `evidence` state.

```bash
simd query evidence --help
```

#### evidence

The `evidence` command allows users to list all evidence or evidence by hash.

Usage:

```bash
simd query evidence [flags]
```

To query evidence by hash:

```bash
simd query evidence "DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
```

Example Output:

```bash
evidence:
  consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h
  height: 11
  power: 100
  time: "2021-10-20T16:08:38.194017624Z"
```

To get all evidence:

```bash
simd query evidence
```

Example Output:

```bash
evidence:
  consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h
  height: 11
  power: 100
  time: "2021-10-20T16:08:38.194017624Z"
pagination:
  next_key: null
  total: "1"
```

### REST

A user can query the `evidence` module using REST endpoints.
+ +#### Evidence + +Get evidence by hash + +```bash +/cosmos/evidence/v1beta1/evidence/{hash} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence/DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660" +``` + +Example Output: + +```bash +{ + "evidence": { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } +} +``` + +#### All evidence + +Get all evidence + +```bash +/cosmos/evidence/v1beta1/evidence +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence" +``` + +Example Output: + +```bash expandable +{ + "evidence": [ + { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### gRPC + +A user can query the `evidence` module using gRPC endpoints. 
+ +#### Evidence + +Get evidence by hash + +```bash +cosmos.evidence.v1beta1.Query/Evidence +``` + +Example: + +```bash +grpcurl -plaintext -d '{"evidence_hash":"DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"}' localhost:9090 cosmos.evidence.v1beta1.Query/Evidence +``` + +Example Output: + +```bash +{ + "evidence": { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } +} +``` + +#### All evidence + +Get all evidence + +```bash +cosmos.evidence.v1beta1.Query/AllEvidence +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.evidence.v1beta1.Query/AllEvidence +``` + +Example Output: + +```bash expandable +{ + "evidence": [ + { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } + ], + "pagination": { + "total": "1" + } +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/feegrant/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/feegrant/README.mdx new file mode 100644 index 00000000..27dd2c46 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/feegrant/README.mdx @@ -0,0 +1,3764 @@ +--- +title: '`x/feegrant`' +description: >- + This document specifies the fee grant module. For the full ADR, please see Fee + Grant ADR-029. +--- + +## Abstract + +This document specifies the fee grant module. For the full ADR, please see [Fee Grant ADR-029](docs/sdk/next/documentation/legacy/adr-comprehensive). + +This module allows accounts to grant fee allowances and to use fees from their accounts. Grantees can execute any transaction without the need to maintain sufficient fees. 
+ +## Contents + +* [Concepts](#concepts) +* [State](#state) + * [FeeAllowance](#feeallowance) + * [FeeAllowanceQueue](#feeallowancequeue) +* [Messages](#messages) + * [Msg/GrantAllowance](#msggrantallowance) + * [Msg/RevokeAllowance](#msgrevokeallowance) +* [Events](#events) +* [Msg Server](#msg-server) + * [MsgGrantAllowance](#msggrantallowance-1) + * [MsgRevokeAllowance](#msgrevokeallowance-1) + * [Exec fee allowance](#exec-fee-allowance) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + +## Concepts + +### Grant + +`Grant` is stored in the KVStore to record a grant with full context. Every grant will contain `granter`, `grantee` and what kind of `allowance` is granted. `granter` is an account address who is giving permission to `grantee` (the beneficiary account address) to pay for some or all of `grantee`'s transaction fees. `allowance` defines what kind of fee allowance (`BasicAllowance` or `PeriodicAllowance`, see below) is granted to `grantee`. `allowance` accepts an interface which implements `FeeAllowanceI`, encoded as `Any` type. There can be only one existing fee grant allowed for a `grantee` and `granter`, self grants are not allowed. + +```protobuf +// Grant is stored in the KVStore to record a grant with full context +message Grant { + // granter is the address of the user granting an allowance of their funds. + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // grantee is the address of the user being granted an allowance of another user's funds. + string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // allowance can be any of basic, periodic, allowed fee allowance. 
  google.protobuf.Any allowance = 3 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"];
}
```

`FeeAllowanceI` looks like:

```go expandable
package feegrant

import (
	"time"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

// FeeAllowance implementations are tied to a given fee delegator and delegatee,
// and are used to enforce fee grant limits.
type FeeAllowanceI interface {
	// Accept can use fee payment requested as well as timestamp of the current block
	// to determine whether or not to process this. This is checked in
	// Keeper.UseGrantedFees and the return values should match how it is handled there.
	//
	// If it returns an error, the fee payment is rejected, otherwise it is accepted.
	// The FeeAllowance implementation is expected to update its internal state
	// and will be saved again after an acceptance.
	//
	// If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
	// (e.g. when it is used up). (See call to RevokeAllowance in Keeper.UseGrantedFees)
	Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)

	// ValidateBasic should evaluate this FeeAllowance for internal consistency.
	// Don't allow negative amounts, or negative periods for example.
	ValidateBasic() error

	// ExpiresAt returns the expiry time of the allowance.
	ExpiresAt() (*time.Time, error)
}
```

### Fee Allowance types

There are three types of fee allowances present at the moment:

* `BasicAllowance`
* `PeriodicAllowance`
* `AllowedMsgAllowance`

### BasicAllowance

`BasicAllowance` is permission for `grantee` to use fees from a `granter`'s account. If either the `spend_limit` or the `expiration` reaches its limit, the grant is removed from state.

```protobuf
// BasicAllowance implements Allowance with a one-time grant of coins
// that optionally expires. The grantee can use up to SpendLimit to cover fees.
message BasicAllowance {
  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
  option (amino.name) = "cosmos-sdk/BasicAllowance";

  // spend_limit specifies the maximum amount of coins that can be spent
  // by this allowance and will be updated as coins are spent. If it is
  // empty, there is no spend limit and any amount of coins can be spent.
  repeated cosmos.base.v1beta1.Coin spend_limit = 1 [
    (gogoproto.nullable) = false,
    (amino.dont_omitempty) = true,
    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
  ];

  // expiration specifies an optional time when this allowance expires.
  google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true];
}
```

* `spend_limit` is the limit of coins that are allowed to be used from the `granter` account. If it is empty, there is no spend limit and the `grantee` can use any number of available coins from the `granter` account address before the expiration.

* `expiration` specifies an optional time when this allowance expires. If the value is left empty, there is no expiry for the grant.

* When a grant is created with empty values for `spend_limit` and `expiration`, it is still a valid grant. It won't restrict the `grantee` from using any number of coins from the `granter`, and it won't have any expiration. The only way to restrict the `grantee` is by revoking the grant.

### PeriodicAllowance

`PeriodicAllowance` is a repeating fee allowance for a given period: the grant can specify when it expires as well as when the period resets. It can also define the maximum number of coins that can be spent within that period.

```protobuf
// PeriodicAllowance extends Allowance to allow for both a maximum cap,
// as well as a limit per time period.
message PeriodicAllowance {
  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
  option (amino.name) = "cosmos-sdk/PeriodicAllowance";

  // basic specifies a struct of `BasicAllowance`
  BasicAllowance basic = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];

  // period specifies the time duration in which period_spend_limit coins can
  // be spent before that allowance is reset
  google.protobuf.Duration period = 2
      [(gogoproto.stdduration) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];

  // period_spend_limit specifies the maximum number of coins that can be spent
  // in the period
  repeated cosmos.base.v1beta1.Coin period_spend_limit = 3 [
    (gogoproto.nullable) = false,
    (amino.dont_omitempty) = true,
    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
  ];

  // period_can_spend is the number of coins left to be spent before the period_reset time
  repeated cosmos.base.v1beta1.Coin period_can_spend = 4 [
    (gogoproto.nullable) = false,
    (amino.dont_omitempty) = true,
    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
  ];

  // period_reset is the time at which this period resets and a new one begins,
  // it is calculated from the start time of the first transaction after the
  // last period ended
  google.protobuf.Timestamp period_reset = 5
      [(gogoproto.stdtime) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}
```

* `basic` is the instance of `BasicAllowance`, which is optional for a periodic fee allowance. If empty, the grant will have no `expiration` and no `spend_limit`.

* `period` is the specific period of time; after each period passes, `period_can_spend` is reset.

* `period_spend_limit` specifies the maximum number of coins that can be spent in the period.

* `period_can_spend` is the number of coins left to be spent before the `period_reset` time.

* `period_reset` keeps track of when the next period reset should happen.

### AllowedMsgAllowance

`AllowedMsgAllowance` is a fee allowance; it can wrap either a `BasicAllowance` or a `PeriodicAllowance`, but is restricted to the allowed message types specified by the granter.

```protobuf
// AllowedMsgAllowance creates allowance only for specified message types.
message AllowedMsgAllowance {
  option (gogoproto.goproto_getters) = false;
  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
  option (amino.name) = "cosmos-sdk/AllowedMsgAllowance";

  // allowance can be any of basic and periodic fee allowance.
  google.protobuf.Any allowance = 1 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"];

  // allowed_messages are the messages for which the grantee has the access.
  repeated string allowed_messages = 2;
}
```

* `allowance` is either `BasicAllowance` or `PeriodicAllowance`.

* `allowed_messages` is an array of messages allowed to execute the given allowance.

### FeeGranter flag

The `feegrant` module introduces a `FeeGranter` flag to the CLI for executing transactions with a fee granter. When this flag is set, `clientCtx` appends the granter account address to transactions generated through the CLI.

```go expandable
package client

import (
	"crypto/tls"
	"fmt"
	"strings"

	"github.com/pkg/errors"
	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
	"github.com/tendermint/tendermint/libs/cli"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/credentials/insecure"

	"github.com/cosmos/cosmos-sdk/client/flags"
	"github.com/cosmos/cosmos-sdk/crypto/keyring"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

/ ClientContextKey defines the context key used to retrieve a client.Context from
/ a command's Context.
+const ClientContextKey = sdk.ContextKey("client.context") + +/ SetCmdClientContextHandler is to be used in a command pre-hook execution to +/ read flags that populate a Context and sets that to the command's Context. +func SetCmdClientContextHandler(clientCtx Context, cmd *cobra.Command) (err error) { + clientCtx, err = ReadPersistentCommandFlags(clientCtx, cmd.Flags()) + if err != nil { + return err +} + +return SetCmdClientContext(cmd, clientCtx) +} + +/ ValidateCmd returns unknown command error or Help display if help flag set +func ValidateCmd(cmd *cobra.Command, args []string) + +error { + var unknownCmd string + var skipNext bool + for _, arg := range args { + / search for help flag + if arg == "--help" || arg == "-h" { + return cmd.Help() +} + + / check if the current arg is a flag + switch { + case len(arg) > 0 && (arg[0] == '-'): + / the next arg should be skipped if the current arg is a + / flag and does not use "=" to assign the flag's value + if !strings.Contains(arg, "=") { + skipNext = true +} + +else { + skipNext = false +} + case skipNext: + / skip current arg + skipNext = false + case unknownCmd == "": + / unknown command found + / continue searching for help flag + unknownCmd = arg +} + +} + + / return the help screen if no unknown command is found + if unknownCmd != "" { + err := fmt.Sprintf("unknown command \"%s\" for \"%s\"", unknownCmd, cmd.CalledAs()) + + / build suggestions for unknown argument + if suggestions := cmd.SuggestionsFor(unknownCmd); len(suggestions) > 0 { + err += "\n\nDid you mean this?\n" + for _, s := range suggestions { + err += fmt.Sprintf("\t%v\n", s) +} + +} + +return errors.New(err) +} + +return cmd.Help() +} + +/ ReadPersistentCommandFlags returns a Context with fields set for "persistent" +/ or common flags that do not necessarily change with context. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func ReadPersistentCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + if clientCtx.OutputFormat == "" || flagSet.Changed(cli.OutputFlag) { + output, _ := flagSet.GetString(cli.OutputFlag) + +clientCtx = clientCtx.WithOutputFormat(output) +} + if clientCtx.HomeDir == "" || flagSet.Changed(flags.FlagHome) { + homeDir, _ := flagSet.GetString(flags.FlagHome) + +clientCtx = clientCtx.WithHomeDir(homeDir) +} + if !clientCtx.Simulate || flagSet.Changed(flags.FlagDryRun) { + dryRun, _ := flagSet.GetBool(flags.FlagDryRun) + +clientCtx = clientCtx.WithSimulation(dryRun) +} + if clientCtx.KeyringDir == "" || flagSet.Changed(flags.FlagKeyringDir) { + keyringDir, _ := flagSet.GetString(flags.FlagKeyringDir) + + / The keyring directory is optional and falls back to the home directory + / if omitted. 
+ if keyringDir == "" { + keyringDir = clientCtx.HomeDir +} + +clientCtx = clientCtx.WithKeyringDir(keyringDir) +} + if clientCtx.ChainID == "" || flagSet.Changed(flags.FlagChainID) { + chainID, _ := flagSet.GetString(flags.FlagChainID) + +clientCtx = clientCtx.WithChainID(chainID) +} + if clientCtx.Keyring == nil || flagSet.Changed(flags.FlagKeyringBackend) { + keyringBackend, _ := flagSet.GetString(flags.FlagKeyringBackend) + if keyringBackend != "" { + kr, err := NewKeyringFromBackend(clientCtx, keyringBackend) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithKeyring(kr) +} + +} + if clientCtx.Client == nil || flagSet.Changed(flags.FlagNode) { + rpcURI, _ := flagSet.GetString(flags.FlagNode) + if rpcURI != "" { + clientCtx = clientCtx.WithNodeURI(rpcURI) + +client, err := NewClientFromNode(rpcURI) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithClient(client) +} + +} + if clientCtx.GRPCClient == nil || flagSet.Changed(flags.FlagGRPC) { + grpcURI, _ := flagSet.GetString(flags.FlagGRPC) + if grpcURI != "" { + var dialOpts []grpc.DialOption + + useInsecure, _ := flagSet.GetBool(flags.FlagGRPCInsecure) + if useInsecure { + dialOpts = append(dialOpts, grpc.WithTransportCredentials(insecure.NewCredentials())) +} + +else { + dialOpts = append(dialOpts, grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{ + MinVersion: tls.VersionTLS12, +}))) +} + +grpcClient, err := grpc.Dial(grpcURI, dialOpts...) + if err != nil { + return Context{ +}, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) +} + +} + +return clientCtx, nil +} + +/ readQueryCommandFlags returns an updated Context with fields set based on flags +/ defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func readQueryCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + if clientCtx.Height == 0 || flagSet.Changed(flags.FlagHeight) { + height, _ := flagSet.GetInt64(flags.FlagHeight) + +clientCtx = clientCtx.WithHeight(height) +} + if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { + useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) + +clientCtx = clientCtx.WithUseLedger(useLedger) +} + +return ReadPersistentCommandFlags(clientCtx, flagSet) +} + +/ readTxCommandFlags returns an updated Context with fields set based on flags +/ defined in AddTxFlagsToCmd. An error is returned if any flag query fails. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func readTxCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + clientCtx, err := ReadPersistentCommandFlags(clientCtx, flagSet) + if err != nil { + return clientCtx, err +} + if !clientCtx.GenerateOnly || flagSet.Changed(flags.FlagGenerateOnly) { + genOnly, _ := flagSet.GetBool(flags.FlagGenerateOnly) + +clientCtx = clientCtx.WithGenerateOnly(genOnly) +} + if !clientCtx.Offline || flagSet.Changed(flags.FlagOffline) { + offline, _ := flagSet.GetBool(flags.FlagOffline) + +clientCtx = clientCtx.WithOffline(offline) +} + if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { + useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) + +clientCtx = clientCtx.WithUseLedger(useLedger) +} + if clientCtx.BroadcastMode == "" || flagSet.Changed(flags.FlagBroadcastMode) { + bMode, _ := flagSet.GetString(flags.FlagBroadcastMode) + +clientCtx = clientCtx.WithBroadcastMode(bMode) +} + if !clientCtx.SkipConfirm || flagSet.Changed(flags.FlagSkipConfirmation) { + skipConfirm, _ := flagSet.GetBool(flags.FlagSkipConfirmation) + +clientCtx = clientCtx.WithSkipConfirmation(skipConfirm) +} + if clientCtx.SignModeStr == "" || flagSet.Changed(flags.FlagSignMode) { + signModeStr, _ := flagSet.GetString(flags.FlagSignMode) + +clientCtx = clientCtx.WithSignModeStr(signModeStr) +} + if clientCtx.FeePayer == nil || flagSet.Changed(flags.FlagFeePayer) { + payer, _ := flagSet.GetString(flags.FlagFeePayer) + if payer != "" { + payerAcc, err := sdk.AccAddressFromBech32(payer) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFeePayerAddress(payerAcc) +} + +} + if clientCtx.FeeGranter == 
nil || flagSet.Changed(flags.FlagFeeGranter) { + granter, _ := flagSet.GetString(flags.FlagFeeGranter) + if granter != "" { + granterAcc, err := sdk.AccAddressFromBech32(granter) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFeeGranterAddress(granterAcc) +} + +} + if clientCtx.From == "" || flagSet.Changed(flags.FlagFrom) { + from, _ := flagSet.GetString(flags.FlagFrom) + +fromAddr, fromName, keyType, err := GetFromFields(clientCtx, clientCtx.Keyring, from) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFrom(from).WithFromAddress(fromAddr).WithFromName(fromName) + + / If the `from` signer account is a ledger key, we need to use + / SIGN_MODE_AMINO_JSON, because ledger doesn't support proto yet. + / ref: https://github.com/cosmos/cosmos-sdk/issues/8109 + if keyType == keyring.TypeLedger && clientCtx.SignModeStr != flags.SignModeLegacyAminoJSON && !clientCtx.LedgerHasProtobuf { + fmt.Println("Default sign-mode 'direct' not supported by Ledger, using sign-mode 'amino-json'.") + +clientCtx = clientCtx.WithSignModeStr(flags.SignModeLegacyAminoJSON) +} + +} + if !clientCtx.IsAux || flagSet.Changed(flags.FlagAux) { + isAux, _ := flagSet.GetBool(flags.FlagAux) + +clientCtx = clientCtx.WithAux(isAux) + if isAux { + / If the user didn't explicitly set an --output flag, use JSON by + / default. + if clientCtx.OutputFormat == "" || !flagSet.Changed(cli.OutputFlag) { + clientCtx = clientCtx.WithOutputFormat("json") +} + + / If the user didn't explicitly set a --sign-mode flag, use + / DIRECT_AUX by default. + if clientCtx.SignModeStr == "" || !flagSet.Changed(flags.FlagSignMode) { + clientCtx = clientCtx.WithSignModeStr(flags.SignModeDirectAux) +} + +} + +} + +return clientCtx, nil +} + +/ GetClientQueryContext returns a Context from a command with fields set based on flags +/ defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. 
+/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func GetClientQueryContext(cmd *cobra.Command) (Context, error) { + ctx := GetClientContextFromCmd(cmd) + +return readQueryCommandFlags(ctx, cmd.Flags()) +} + +/ GetClientTxContext returns a Context from a command with fields set based on flags +/ defined in AddTxFlagsToCmd. An error is returned if any flag query fails. +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func GetClientTxContext(cmd *cobra.Command) (Context, error) { + ctx := GetClientContextFromCmd(cmd) + +return readTxCommandFlags(ctx, cmd.Flags()) +} + +/ GetClientContextFromCmd returns a Context from a command or an empty Context +/ if it has not been set. +func GetClientContextFromCmd(cmd *cobra.Command) + +Context { + if v := cmd.Context().Value(ClientContextKey); v != nil { + clientCtxPtr := v.(*Context) + +return *clientCtxPtr +} + +return Context{ +} +} + +/ SetCmdClientContext sets a command's Context value to the provided argument. 
+func SetCmdClientContext(cmd *cobra.Command, clientCtx Context) + +error { + v := cmd.Context().Value(ClientContextKey) + if v == nil { + return errors.New("client context not set") +} + clientCtxPtr := v.(*Context) + *clientCtxPtr = clientCtx + + return nil +} +``` + +```go expandable +package tx + +import ( + + "bufio" + "context" + "encoding/json" + "errors" + "fmt" + "os" + + gogogrpc "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/pflag" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/input" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ GenerateOrBroadcastTxCLI will either generate and print and unsigned transaction +/ or sign it and broadcast it returning an error upon failure. +func GenerateOrBroadcastTxCLI(clientCtx client.Context, flagSet *pflag.FlagSet, msgs ...sdk.Msg) + +error { + txf := NewFactoryCLI(clientCtx, flagSet) + +return GenerateOrBroadcastTxWithFactory(clientCtx, txf, msgs...) +} + +/ GenerateOrBroadcastTxWithFactory will either generate and print and unsigned transaction +/ or sign it and broadcast it returning an error upon failure. +func GenerateOrBroadcastTxWithFactory(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) + +error { + / Validate all msgs before generating or broadcasting the tx. + / We were calling ValidateBasic separately in each CLI handler before. + / Right now, we're factorizing that call inside this function. + / ref: https://github.com/cosmos/cosmos-sdk/pull/9236#discussion_r623803504 + for _, msg := range msgs { + if err := msg.ValidateBasic(); err != nil { + return err +} + +} + + / If the --aux flag is set, we simply generate and print the AuxSignerData. 
+ if clientCtx.IsAux { + auxSignerData, err := makeAuxSignerData(clientCtx, txf, msgs...) + if err != nil { + return err +} + +return clientCtx.PrintProto(&auxSignerData) +} + if clientCtx.GenerateOnly { + return txf.PrintUnsignedTx(clientCtx, msgs...) +} + +return BroadcastTx(clientCtx, txf, msgs...) +} + +/ BroadcastTx attempts to generate, sign and broadcast a transaction with the +/ given set of messages. It will also simulate gas requirements if necessary. +/ It will return an error upon failure. +func BroadcastTx(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) + +error { + txf, err := txf.Prepare(clientCtx) + if err != nil { + return err +} + if txf.SimulateAndExecute() || clientCtx.Simulate { + _, adjusted, err := CalculateGas(clientCtx, txf, msgs...) + if err != nil { + return err +} + +txf = txf.WithGas(adjusted) + _, _ = fmt.Fprintf(os.Stderr, "%s\n", GasEstimateResponse{ + GasEstimate: txf.Gas() +}) +} + if clientCtx.Simulate { + return nil +} + +tx, err := txf.BuildUnsignedTx(msgs...) 
+ if err != nil { + return err +} + if !clientCtx.SkipConfirm { + txBytes, err := clientCtx.TxConfig.TxJSONEncoder()(tx.GetTx()) + if err != nil { + return err +} + if err := clientCtx.PrintRaw(json.RawMessage(txBytes)); err != nil { + _, _ = fmt.Fprintf(os.Stderr, "%s\n", txBytes) +} + buf := bufio.NewReader(os.Stdin) + +ok, err := input.GetConfirmation("confirm transaction before signing and broadcasting", buf, os.Stderr) + if err != nil || !ok { + _, _ = fmt.Fprintf(os.Stderr, "%s\n", "cancelled transaction") + +return err +} + +} + +err = Sign(txf, clientCtx.GetFromName(), tx, true) + if err != nil { + return err +} + +txBytes, err := clientCtx.TxConfig.TxEncoder()(tx.GetTx()) + if err != nil { + return err +} + + / broadcast to a Tendermint node + res, err := clientCtx.BroadcastTx(txBytes) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +} + +/ CalculateGas simulates the execution of a transaction and returns the +/ simulation response obtained by the query and the adjusted gas amount. +func CalculateGas( + clientCtx gogogrpc.ClientConn, txf Factory, msgs ...sdk.Msg, +) (*tx.SimulateResponse, uint64, error) { + txBytes, err := txf.BuildSimTx(msgs...) + if err != nil { + return nil, 0, err +} + txSvcClient := tx.NewServiceClient(clientCtx) + +simRes, err := txSvcClient.Simulate(context.Background(), &tx.SimulateRequest{ + TxBytes: txBytes, +}) + if err != nil { + return nil, 0, err +} + +return simRes, uint64(txf.GasAdjustment() * float64(simRes.GasInfo.GasUsed)), nil +} + +/ SignWithPrivKey signs a given tx with the given private key, and returns the +/ corresponding SignatureV2 if the signing is successful. +func SignWithPrivKey( + signMode signing.SignMode, signerData authsigning.SignerData, + txBuilder client.TxBuilder, priv cryptotypes.PrivKey, txConfig client.TxConfig, + accSeq uint64, +) (signing.SignatureV2, error) { + var sigV2 signing.SignatureV2 + + / Generate the bytes to be signed. 
+ signBytes, err := txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) + if err != nil { + return sigV2, err +} + + / Sign those bytes + signature, err := priv.Sign(signBytes) + if err != nil { + return sigV2, err +} + + / Construct the SignatureV2 struct + sigData := signing.SingleSignatureData{ + SignMode: signMode, + Signature: signature, +} + +sigV2 = signing.SignatureV2{ + PubKey: priv.PubKey(), + Data: &sigData, + Sequence: accSeq, +} + +return sigV2, nil +} + +/ countDirectSigners counts the number of DIRECT signers in a signature data. +func countDirectSigners(data signing.SignatureData) + +int { + switch data := data.(type) { + case *signing.SingleSignatureData: + if data.SignMode == signing.SignMode_SIGN_MODE_DIRECT { + return 1 +} + +return 0 + case *signing.MultiSignatureData: + directSigners := 0 + for _, d := range data.Signatures { + directSigners += countDirectSigners(d) +} + +return directSigners + default: + panic("unreachable case") +} +} + +/ checkMultipleSigners checks that there can be maximum one DIRECT signer in +/ a tx. +func checkMultipleSigners(tx authsigning.Tx) + +error { + directSigners := 0 + sigsV2, err := tx.GetSignaturesV2() + if err != nil { + return err +} + for _, sig := range sigsV2 { + directSigners += countDirectSigners(sig.Data) + if directSigners > 1 { + return sdkerrors.ErrNotSupported.Wrap("txs signed with CLI can have maximum 1 DIRECT signer") +} + +} + +return nil +} + +/ Sign signs a given tx with a named key. The bytes signed over are canconical. +/ The resulting signature will be added to the transaction builder overwriting the previous +/ ones if overwrite=true (otherwise, the signature will be appended). +/ Signing a transaction with mutltiple signers in the DIRECT mode is not supprted and will +/ return an error. +/ An error is returned upon failure. 
+func Sign(txf Factory, name string, txBuilder client.TxBuilder, overwriteSig bool) + +error { + if txf.keybase == nil { + return errors.New("keybase must be set prior to signing a transaction") +} + signMode := txf.signMode + if signMode == signing.SignMode_SIGN_MODE_UNSPECIFIED { + / use the SignModeHandler's default mode if unspecified + signMode = txf.txConfig.SignModeHandler().DefaultMode() +} + +k, err := txf.keybase.Key(name) + if err != nil { + return err +} + +pubKey, err := k.GetPubKey() + if err != nil { + return err +} + signerData := authsigning.SignerData{ + ChainID: txf.chainID, + AccountNumber: txf.accountNumber, + Sequence: txf.sequence, + PubKey: pubKey, + Address: sdk.AccAddress(pubKey.Address()).String(), +} + + / For SIGN_MODE_DIRECT, calling SetSignatures calls setSignerInfos on + / TxBuilder under the hood, and SignerInfos is needed to generated the + / sign bytes. This is the reason for setting SetSignatures here, with a + / nil signature. + / + / Note: this line is not needed for SIGN_MODE_LEGACY_AMINO, but putting it + / also doesn't affect its generated sign bytes, so for code's simplicity + / sake, we put it here. + sigData := signing.SingleSignatureData{ + SignMode: signMode, + Signature: nil, +} + sig := signing.SignatureV2{ + PubKey: pubKey, + Data: &sigData, + Sequence: txf.Sequence(), +} + +var prevSignatures []signing.SignatureV2 + if !overwriteSig { + prevSignatures, err = txBuilder.GetTx().GetSignaturesV2() + if err != nil { + return err +} + +} + / Overwrite or append signer infos. + var sigs []signing.SignatureV2 + if overwriteSig { + sigs = []signing.SignatureV2{ + sig +} + +} + +else { + sigs = append(sigs, prevSignatures...) + +sigs = append(sigs, sig) +} + if err := txBuilder.SetSignatures(sigs...); err != nil { + return err +} + if err := checkMultipleSigners(txBuilder.GetTx()); err != nil { + return err +} + + / Generate the bytes to be signed. 
+ bytesToSign, err := txf.txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) + if err != nil { + return err +} + + / Sign those bytes + sigBytes, _, err := txf.keybase.Sign(name, bytesToSign) + if err != nil { + return err +} + + / Construct the SignatureV2 struct + sigData = signing.SingleSignatureData{ + SignMode: signMode, + Signature: sigBytes, +} + +sig = signing.SignatureV2{ + PubKey: pubKey, + Data: &sigData, + Sequence: txf.Sequence(), +} + if overwriteSig { + err = txBuilder.SetSignatures(sig) +} + +else { + prevSignatures = append(prevSignatures, sig) + +err = txBuilder.SetSignatures(prevSignatures...) +} + if err != nil { + return fmt.Errorf("unable to set signatures on payload: %w", err) +} + + / Run optional preprocessing if specified. By default, this is unset + / and will return nil. + return txf.PreprocessTx(name, txBuilder) +} + +/ GasEstimateResponse defines a response definition for tx gas estimation. +type GasEstimateResponse struct { + GasEstimate uint64 `json:"gas_estimate" yaml:"gas_estimate"` +} + +func (gr GasEstimateResponse) + +String() + +string { + return fmt.Sprintf("gas estimate: %d", gr.GasEstimate) +} + +/ makeAuxSignerData generates an AuxSignerData from the client inputs. +func makeAuxSignerData(clientCtx client.Context, f Factory, msgs ...sdk.Msg) (tx.AuxSignerData, error) { + b := NewAuxTxBuilder() + +fromAddress, name, _, err := client.GetFromFields(clientCtx, clientCtx.Keyring, clientCtx.From) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetAddress(fromAddress.String()) + if clientCtx.Offline { + b.SetAccountNumber(f.accountNumber) + +b.SetSequence(f.sequence) +} + +else { + accNum, seq, err := clientCtx.AccountRetriever.GetAccountNumberSequence(clientCtx, fromAddress) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetAccountNumber(accNum) + +b.SetSequence(seq) +} + +err = b.SetMsgs(msgs...) 
+ if err != nil { + return tx.AuxSignerData{ +}, err +} + if f.tip != nil { + if _, err := sdk.AccAddressFromBech32(f.tip.Tipper); err != nil { + return tx.AuxSignerData{ +}, sdkerrors.ErrInvalidAddress.Wrap("tipper must be a bech32 address") +} + +b.SetTip(f.tip) +} + +err = b.SetSignMode(f.SignMode()) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +key, err := clientCtx.Keyring.Key(name) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +pub, err := key.GetPubKey() + if err != nil { + return tx.AuxSignerData{ +}, err +} + +err = b.SetPubKey(pub) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetChainID(clientCtx.ChainID) + +signBz, err := b.GetSignBytes() + if err != nil { + return tx.AuxSignerData{ +}, err +} + +sig, _, err := clientCtx.Keyring.Sign(name, signBz) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetSignature(sig) + +return b.GetAuxSignerData() +} +``` + +```go expandable +package tx + +import ( + + "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ wrapper is a wrapper around the tx.Tx proto.Message which retain the raw +/ body and auth_info bytes. +type wrapper struct { + cdc codec.Codec + + tx *tx.Tx + + / bodyBz represents the protobuf encoding of TxBody. This should be encoding + / from the client using TxRaw if the tx was decoded from the wire + bodyBz []byte + + / authInfoBz represents the protobuf encoding of TxBody. 
This should be encoding + / from the client using TxRaw if the tx was decoded from the wire + authInfoBz []byte + + txBodyHasUnknownNonCriticals bool +} + +var ( + _ authsigning.Tx = &wrapper{ +} + _ client.TxBuilder = &wrapper{ +} + _ tx.TipTx = &wrapper{ +} + _ ante.HasExtensionOptionsTx = &wrapper{ +} + _ ExtensionOptionsTxBuilder = &wrapper{ +} + _ tx.TipTx = &wrapper{ +} +) + +/ ExtensionOptionsTxBuilder defines a TxBuilder that can also set extensions. +type ExtensionOptionsTxBuilder interface { + client.TxBuilder + + SetExtensionOptions(...*codectypes.Any) + +SetNonCriticalExtensionOptions(...*codectypes.Any) +} + +func newBuilder(cdc codec.Codec) *wrapper { + return &wrapper{ + cdc: cdc, + tx: &tx.Tx{ + Body: &tx.TxBody{ +}, + AuthInfo: &tx.AuthInfo{ + Fee: &tx.Fee{ +}, +}, +}, +} +} + +func (w *wrapper) + +GetMsgs() []sdk.Msg { + return w.tx.GetMsgs() +} + +func (w *wrapper) + +ValidateBasic() + +error { + return w.tx.ValidateBasic() +} + +func (w *wrapper) + +getBodyBytes() []byte { + if len(w.bodyBz) == 0 { + / if bodyBz is empty, then marshal the body. bodyBz will generally + / be set to nil whenever SetBody is called so the result of calling + / this method should always return the correct bytes. Note that after + / decoding bodyBz is derived from TxRaw so that it matches what was + / transmitted over the wire + var err error + w.bodyBz, err = proto.Marshal(w.tx.Body) + if err != nil { + panic(err) +} + +} + +return w.bodyBz +} + +func (w *wrapper) + +getAuthInfoBytes() []byte { + if len(w.authInfoBz) == 0 { + / if authInfoBz is empty, then marshal the body. authInfoBz will generally + / be set to nil whenever SetAuthInfo is called so the result of calling + / this method should always return the correct bytes. 
Note that after + / decoding authInfoBz is derived from TxRaw so that it matches what was + / transmitted over the wire + var err error + w.authInfoBz, err = proto.Marshal(w.tx.AuthInfo) + if err != nil { + panic(err) +} + +} + +return w.authInfoBz +} + +func (w *wrapper) + +GetSigners() []sdk.AccAddress { + return w.tx.GetSigners() +} + +func (w *wrapper) + +GetPubKeys() ([]cryptotypes.PubKey, error) { + signerInfos := w.tx.AuthInfo.SignerInfos + pks := make([]cryptotypes.PubKey, len(signerInfos)) + for i, si := range signerInfos { + / NOTE: it is okay to leave this nil if there is no PubKey in the SignerInfo. + / PubKey's can be left unset in SignerInfo. + if si.PublicKey == nil { + continue +} + pkAny := si.PublicKey.GetCachedValue() + +pk, ok := pkAny.(cryptotypes.PubKey) + if ok { + pks[i] = pk +} + +else { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "Expecting PubKey, got: %T", pkAny) +} + +} + +return pks, nil +} + +func (w *wrapper) + +GetGas() + +uint64 { + return w.tx.AuthInfo.Fee.GasLimit +} + +func (w *wrapper) + +GetFee() + +sdk.Coins { + return w.tx.AuthInfo.Fee.Amount +} + +func (w *wrapper) + +FeePayer() + +sdk.AccAddress { + feePayer := w.tx.AuthInfo.Fee.Payer + if feePayer != "" { + return sdk.MustAccAddressFromBech32(feePayer) +} + / use first signer as default if no payer specified + return w.GetSigners()[0] +} + +func (w *wrapper) + +FeeGranter() + +sdk.AccAddress { + feePayer := w.tx.AuthInfo.Fee.Granter + if feePayer != "" { + return sdk.MustAccAddressFromBech32(feePayer) +} + +return nil +} + +func (w *wrapper) + +GetTip() *tx.Tip { + return w.tx.AuthInfo.Tip +} + +func (w *wrapper) + +GetMemo() + +string { + return w.tx.Body.Memo +} + +/ GetTimeoutHeight returns the transaction's timeout height (if set). 
+func (w *wrapper) + +GetTimeoutHeight() + +uint64 { + return w.tx.Body.TimeoutHeight +} + +func (w *wrapper) + +GetSignaturesV2() ([]signing.SignatureV2, error) { + signerInfos := w.tx.AuthInfo.SignerInfos + sigs := w.tx.Signatures + pubKeys, err := w.GetPubKeys() + if err != nil { + return nil, err +} + n := len(signerInfos) + res := make([]signing.SignatureV2, n) + for i, si := range signerInfos { + / handle nil signatures (in case of simulation) + if si.ModeInfo == nil { + res[i] = signing.SignatureV2{ + PubKey: pubKeys[i], +} + +} + +else { + var err error + sigData, err := ModeInfoAndSigToSignatureData(si.ModeInfo, sigs[i]) + if err != nil { + return nil, err +} + / sequence number is functionally a transaction nonce and referred to as such in the SDK + nonce := si.GetSequence() + +res[i] = signing.SignatureV2{ + PubKey: pubKeys[i], + Data: sigData, + Sequence: nonce, +} + + +} + +} + +return res, nil +} + +func (w *wrapper) + +SetMsgs(msgs ...sdk.Msg) + +error { + anys, err := tx.SetMsgs(msgs) + if err != nil { + return err +} + +w.tx.Body.Messages = anys + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil + + return nil +} + +/ SetTimeoutHeight sets the transaction's height timeout. 
+func (w *wrapper) + +SetTimeoutHeight(height uint64) { + w.tx.Body.TimeoutHeight = height + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil +} + +func (w *wrapper) + +SetMemo(memo string) { + w.tx.Body.Memo = memo + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil +} + +func (w *wrapper) + +SetGasLimit(limit uint64) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.GasLimit = limit + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeeAmount(coins sdk.Coins) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Amount = coins + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetTip(tip *tx.Tip) { + w.tx.AuthInfo.Tip = tip + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeePayer(feePayer sdk.AccAddress) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Payer = feePayer.String() + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeeGranter(feeGranter sdk.AccAddress) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Granter = feeGranter.String() + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetSignatures(signatures ...signing.SignatureV2) + +error { + n := len(signatures) + signerInfos := make([]*tx.SignerInfo, n) + rawSigs := make([][]byte, n) + for i, sig := range signatures { + var modeInfo *tx.ModeInfo + modeInfo, rawSigs[i] = SignatureDataToModeInfoAndSig(sig.Data) + +any, 
err := codectypes.NewAnyWithValue(sig.PubKey) + if err != nil { + return err +} + +signerInfos[i] = &tx.SignerInfo{ + PublicKey: any, + ModeInfo: modeInfo, + Sequence: sig.Sequence, +} + +} + +w.setSignerInfos(signerInfos) + +w.setSignatures(rawSigs) + +return nil +} + +func (w *wrapper) + +setSignerInfos(infos []*tx.SignerInfo) { + w.tx.AuthInfo.SignerInfos = infos + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +setSignerInfoAtIndex(index int, info *tx.SignerInfo) { + if w.tx.AuthInfo.SignerInfos == nil { + w.tx.AuthInfo.SignerInfos = make([]*tx.SignerInfo, len(w.GetSigners())) +} + +w.tx.AuthInfo.SignerInfos[index] = info + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +setSignatures(sigs [][]byte) { + w.tx.Signatures = sigs +} + +func (w *wrapper) + +setSignatureAtIndex(index int, sig []byte) { + if w.tx.Signatures == nil { + w.tx.Signatures = make([][]byte, len(w.GetSigners())) +} + +w.tx.Signatures[index] = sig +} + +func (w *wrapper) + +GetTx() + +authsigning.Tx { + return w +} + +func (w *wrapper) + +GetProtoTx() *tx.Tx { + return w.tx +} + +/ Deprecated: AsAny extracts proto Tx and wraps it into Any. +/ NOTE: You should probably use `GetProtoTx` if you want to serialize the transaction. +func (w *wrapper) + +AsAny() *codectypes.Any { + return codectypes.UnsafePackAny(w.tx) +} + +/ WrapTx creates a TxBuilder wrapper around a tx.Tx proto message. 
+func WrapTx(protoTx *tx.Tx) + +client.TxBuilder { + return &wrapper{ + tx: protoTx, +} +} + +func (w *wrapper) + +GetExtensionOptions() []*codectypes.Any { + return w.tx.Body.ExtensionOptions +} + +func (w *wrapper) + +GetNonCriticalExtensionOptions() []*codectypes.Any { + return w.tx.Body.NonCriticalExtensionOptions +} + +func (w *wrapper) + +SetExtensionOptions(extOpts ...*codectypes.Any) { + w.tx.Body.ExtensionOptions = extOpts + w.bodyBz = nil +} + +func (w *wrapper) + +SetNonCriticalExtensionOptions(extOpts ...*codectypes.Any) { + w.tx.Body.NonCriticalExtensionOptions = extOpts + w.bodyBz = nil +} + +func (w *wrapper) + +AddAuxSignerData(data tx.AuxSignerData) + +error { + err := data.ValidateBasic() + if err != nil { + return err +} + +w.bodyBz = data.SignDoc.BodyBytes + + var body tx.TxBody + err = w.cdc.Unmarshal(w.bodyBz, &body) + if err != nil { + return err +} + if w.tx.Body.Memo != "" && w.tx.Body.Memo != body.Memo { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has memo %s, got %s in AuxSignerData", w.tx.Body.Memo, body.Memo) +} + if w.tx.Body.TimeoutHeight != 0 && w.tx.Body.TimeoutHeight != body.TimeoutHeight { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has timeout height %d, got %d in AuxSignerData", w.tx.Body.TimeoutHeight, body.TimeoutHeight) +} + if len(w.tx.Body.ExtensionOptions) != 0 { + if len(w.tx.Body.ExtensionOptions) != len(body.ExtensionOptions) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d extension options, got %d in AuxSignerData", len(w.tx.Body.ExtensionOptions), len(body.ExtensionOptions)) +} + for i, o := range w.tx.Body.ExtensionOptions { + if !o.Equal(body.ExtensionOptions[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.ExtensionOptions[i]) +} + +} + +} + if len(w.tx.Body.NonCriticalExtensionOptions) != 0 { + if len(w.tx.Body.NonCriticalExtensionOptions) != len(body.NonCriticalExtensionOptions) { + return 
sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d non-critical extension options, got %d in AuxSignerData", len(w.tx.Body.NonCriticalExtensionOptions), len(body.NonCriticalExtensionOptions)) +} + for i, o := range w.tx.Body.NonCriticalExtensionOptions { + if !o.Equal(body.NonCriticalExtensionOptions[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has non-critical extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.NonCriticalExtensionOptions[i]) +} + +} + +} + if len(w.tx.Body.Messages) != 0 { + if len(w.tx.Body.Messages) != len(body.Messages) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d Msgs, got %d in AuxSignerData", len(w.tx.Body.Messages), len(body.Messages)) +} + for i, o := range w.tx.Body.Messages { + if !o.Equal(body.Messages[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has Msg %+v at index %d, got %+v in AuxSignerData", o, i, body.Messages[i]) +} + +} + +} + if w.tx.AuthInfo.Tip != nil && data.SignDoc.Tip != nil { + if !w.tx.AuthInfo.Tip.Amount.IsEqual(data.SignDoc.Tip.Amount) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tip %+v, got %+v in AuxSignerData", w.tx.AuthInfo.Tip.Amount, data.SignDoc.Tip.Amount) +} + if w.tx.AuthInfo.Tip.Tipper != data.SignDoc.Tip.Tipper { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tipper %s, got %s in AuxSignerData", w.tx.AuthInfo.Tip.Tipper, data.SignDoc.Tip.Tipper) +} + +} + +w.SetMemo(body.Memo) + +w.SetTimeoutHeight(body.TimeoutHeight) + +w.SetExtensionOptions(body.ExtensionOptions...) + +w.SetNonCriticalExtensionOptions(body.NonCriticalExtensionOptions...) + msgs := make([]sdk.Msg, len(body.Messages)) + for i, msgAny := range body.Messages { + msgs[i] = msgAny.GetCachedValue().(sdk.Msg) +} + +w.SetMsgs(msgs...) + +w.SetTip(data.GetSignDoc().GetTip()) + + / Get the aux signer's index in GetSigners. 
+ signerIndex := -1 + for i, signer := range w.GetSigners() { + if signer.String() == data.Address { + signerIndex = i +} + +} + if signerIndex < 0 { + return sdkerrors.ErrLogic.Wrapf("address %s is not a signer", data.Address) +} + +w.setSignerInfoAtIndex(signerIndex, &tx.SignerInfo{ + PublicKey: data.SignDoc.PublicKey, + ModeInfo: &tx.ModeInfo{ + Sum: &tx.ModeInfo_Single_{ + Single: &tx.ModeInfo_Single{ + Mode: data.Mode +}}}, + Sequence: data.SignDoc.Sequence, +}) + +w.setSignatureAtIndex(signerIndex, data.Sig) + +return nil +} +``` + +```protobuf +// Fee includes the amount of coins paid in fees and the maximum +// gas to be used by the transaction. The ratio yields an effective "gasprice", +// which must be above some miminum to be accepted into the mempool. +message Fee { + // amount is the amount of coins to be paid as a fee + repeated cosmos.base.v1beta1.Coin amount = 1 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; + + // gas_limit is the maximum gas that can be used in transaction processing + // before an out of gas error occurs + uint64 gas_limit = 2; + + // if unset, the first signer is responsible for paying the fees. If set, the specified account must pay the fees. + // the payer must be a tx signer (and thus have signed this field in AuthInfo). + // setting this field does *not* change the ordering of required signers for the transaction. + string payer = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // if set, the fee payer (either the first signer or the value of the payer field) requests that a fee grant be used + // to pay fees instead of the fee payer's own balance. 
If an appropriate fee grant does not exist or the chain does
+  // not support fee grants, this will fail
+  string granter = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+Example cmd:
+
+```bash
+./simd tx gov submit-proposal --title="Test Proposal" --description="My awesome proposal" --type="Text" --from validator-key --fee-granter=cosmos1xh44hxt7spr67hqaa7nyx5gnutrz5fraw6grxn --chain-id=testnet --fees="10stake"
+```
+
+### Granted Fee Deductions
+
+Fees are deducted from grants in the `x/auth` ante handler. To learn more about how ante handlers work, read the [Auth Module AnteHandlers Guide](docs/sdk/v0.47/documentation/module-system/modules/auth/README#antehandlers).
+
+### Gas
+
+To prevent DoS attacks, using a filtered `x/feegrant` incurs gas. The SDK must ensure that all of the `grantee`'s transactions conform to the filter set by the `granter`. It does this by iterating over the allowed messages in the filter, charging 10 gas per filtered message, and then iterating over the messages being sent by the `grantee` to verify that they adhere to the filter, again charging 10 gas per message. The SDK stops iterating and fails the transaction as soon as it finds a message that does not conform to the filter.
+
+**WARNING**: The gas is charged against the granted allowance. Ensure your messages conform to the filter, if any, before sending transactions using your allowance.
+
+### Pruning
+
+A queue is maintained in state under a prefix that encodes each grant's expiration time. On every EndBlock, the queue is checked against the current block time and expired grants are pruned.
+
+## State
+
+### FeeAllowance
+
+Fee Allowances are identified by combining `Grantee` (the account address of the fee allowance grantee) with the `Granter` (the account address of the fee allowance granter).
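The grantee/granter pairing can be sketched as a length-prefixed store key. This is a minimal illustration only, not the SDK's own implementation; `feeAllowanceKey` is a hypothetical helper and the 20-byte addresses are placeholders:

```go
package main

import "fmt"

// feeAllowanceKey builds a store key for a (grantee, granter) pair in the
// layout 0x00 | len(grantee) | grantee_bytes | len(granter) | granter_bytes.
// Length-prefixing keeps the two variable-length addresses unambiguous.
func feeAllowanceKey(grantee, granter []byte) []byte {
	key := []byte{0x00} // grant store prefix
	key = append(key, byte(len(grantee)))
	key = append(key, grantee...)
	key = append(key, byte(len(granter)))
	key = append(key, granter...)
	return key
}

func main() {
	grantee := make([]byte, 20) // placeholder 20-byte account address
	granter := make([]byte, 20)
	k := feeAllowanceKey(grantee, granter)
	fmt.Println(len(k)) // 1 prefix + 1 len + 20 + 1 len + 20 = 43
}
```

Because the grantee address comes first, all grants held by a single grantee are adjacent in the store and can be found with a single prefix iteration.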
+ +Fee allowance grants are stored in the state as follows: + +* Grant: `0x00 | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> ProtocolBuffer(Grant)` + +```go expandable +/ Code generated by protoc-gen-gogo. DO NOT EDIT. +/ source: cosmos/feegrant/v1beta1/feegrant.proto + +package feegrant + +import ( + + fmt "fmt" + _ "github.com/cosmos/cosmos-proto" + types1 "github.com/cosmos/cosmos-sdk/codec/types" + github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types" + types "github.com/cosmos/cosmos-sdk/types" + _ "github.com/cosmos/cosmos-sdk/types/tx/amino" + _ "github.com/cosmos/gogoproto/gogoproto" + proto "github.com/cosmos/gogoproto/proto" + github_com_cosmos_gogoproto_types "github.com/cosmos/gogoproto/types" + _ "google.golang.org/protobuf/types/known/durationpb" + _ "google.golang.org/protobuf/types/known/timestamppb" + io "io" + math "math" + math_bits "math/bits" + time "time" +) + +/ Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf +var _ = time.Kitchen + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the proto package it is being compiled against. +/ A compilation error at this line likely means your copy of the +/ proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 / please upgrade the proto package + +/ BasicAllowance implements Allowance with a one-time grant of coins +/ that optionally expires. The grantee can use up to SpendLimit to cover fees. +type BasicAllowance struct { + / spend_limit specifies the maximum amount of coins that can be spent + / by this allowance and will be updated as coins are spent. If it is + / empty, there is no spend limit and any amount of coins can be spent. 
+ SpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,1,rep,name=spend_limit,json=spendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"spend_limit"` + / expiration specifies an optional time when this allowance expires + Expiration *time.Time `protobuf:"bytes,2,opt,name=expiration,proto3,stdtime" json:"expiration,omitempty"` +} + +func (m *BasicAllowance) + +Reset() { *m = BasicAllowance{ +} +} + +func (m *BasicAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*BasicAllowance) + +ProtoMessage() { +} + +func (*BasicAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{0 +} +} + +func (m *BasicAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *BasicAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_BasicAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *BasicAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_BasicAllowance.Merge(m, src) +} + +func (m *BasicAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *BasicAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_BasicAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_BasicAllowance proto.InternalMessageInfo + +func (m *BasicAllowance) + +GetSpendLimit() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.SpendLimit +} + +return nil +} + +func (m *BasicAllowance) + +GetExpiration() *time.Time { + if m != nil { + return m.Expiration +} + +return nil +} + +/ PeriodicAllowance extends Allowance to allow for both a maximum cap, +/ as well as a limit per time period. 
type PeriodicAllowance struct {
	// basic specifies a struct of `BasicAllowance`
	Basic BasicAllowance `protobuf:"bytes,1,opt,name=basic,proto3" json:"basic"`
	// period specifies the time duration in which period_spend_limit coins can
	// be spent before that allowance is reset
	Period time.Duration `protobuf:"bytes,2,opt,name=period,proto3,stdduration" json:"period"`
	// period_spend_limit specifies the maximum number of coins that can be spent
	// in the period
	PeriodSpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=period_spend_limit,json=periodSpendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_spend_limit"`
	// period_can_spend is the number of coins left to be spent before the period_reset time
	PeriodCanSpend github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,4,rep,name=period_can_spend,json=periodCanSpend,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_can_spend"`
	// period_reset is the time at which this period resets and a new one begins,
	// it is calculated from the start time of the first transaction after the
	// last period ended
	PeriodReset time.Time `protobuf:"bytes,5,opt,name=period_reset,json=periodReset,proto3,stdtime" json:"period_reset"`
}

func (m *PeriodicAllowance) Reset()         { *m = PeriodicAllowance{} }
func (m *PeriodicAllowance) String() string { return proto.CompactTextString(m) }
func (*PeriodicAllowance) ProtoMessage()    {}
func (*PeriodicAllowance) Descriptor() ([]byte, []int) {
	return fileDescriptor_7279582900c30aea, []int{1}
}
func (m *PeriodicAllowance) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *PeriodicAllowance) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_PeriodicAllowance.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalToSizedBuffer(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (m *PeriodicAllowance) XXX_Merge(src proto.Message) {
	xxx_messageInfo_PeriodicAllowance.Merge(m, src)
}
func (m *PeriodicAllowance) XXX_Size() int {
	return m.Size()
}
func (m *PeriodicAllowance) XXX_DiscardUnknown() {
	xxx_messageInfo_PeriodicAllowance.DiscardUnknown(m)
}

var xxx_messageInfo_PeriodicAllowance proto.InternalMessageInfo

func (m *PeriodicAllowance) GetBasic() BasicAllowance {
	if m != nil {
		return m.Basic
	}
	return BasicAllowance{}
}

func (m *PeriodicAllowance) GetPeriod() time.Duration {
	if m != nil {
		return m.Period
	}
	return 0
}

func (m *PeriodicAllowance) GetPeriodSpendLimit() github_com_cosmos_cosmos_sdk_types.Coins {
	if m != nil {
		return m.PeriodSpendLimit
	}
	return nil
}

func (m *PeriodicAllowance) GetPeriodCanSpend() github_com_cosmos_cosmos_sdk_types.Coins {
	if m != nil {
		return m.PeriodCanSpend
	}
	return nil
}

func (m *PeriodicAllowance) GetPeriodReset() time.Time {
	if m != nil {
		return m.PeriodReset
	}
	return time.Time{}
}

// AllowedMsgAllowance creates allowance only for specified message types.
type AllowedMsgAllowance struct {
	// allowance can be any of basic and periodic fee allowance.
	Allowance *types1.Any `protobuf:"bytes,1,opt,name=allowance,proto3" json:"allowance,omitempty"`
	// allowed_messages are the messages for which the grantee has the access.
	AllowedMessages []string `protobuf:"bytes,2,rep,name=allowed_messages,json=allowedMessages,proto3" json:"allowed_messages,omitempty"`
}

func (m *AllowedMsgAllowance) Reset()         { *m = AllowedMsgAllowance{} }
func (m *AllowedMsgAllowance) String() string { return proto.CompactTextString(m) }
func (*AllowedMsgAllowance) ProtoMessage()    {}
func (*AllowedMsgAllowance) Descriptor() ([]byte, []int) {
	return fileDescriptor_7279582900c30aea, []int{2}
}
func (m *AllowedMsgAllowance) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *AllowedMsgAllowance) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_AllowedMsgAllowance.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalToSizedBuffer(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (m *AllowedMsgAllowance) XXX_Merge(src proto.Message) {
	xxx_messageInfo_AllowedMsgAllowance.Merge(m, src)
}
func (m *AllowedMsgAllowance) XXX_Size() int {
	return m.Size()
}
func (m *AllowedMsgAllowance) XXX_DiscardUnknown() {
	xxx_messageInfo_AllowedMsgAllowance.DiscardUnknown(m)
}

var xxx_messageInfo_AllowedMsgAllowance proto.InternalMessageInfo

// Grant is stored in the KVStore to record a grant with full context
type Grant struct {
	// granter is the address of the user granting an allowance of their funds.
	Granter string `protobuf:"bytes,1,opt,name=granter,proto3" json:"granter,omitempty"`
	// grantee is the address of the user being granted an allowance of another user's funds.
	Grantee string `protobuf:"bytes,2,opt,name=grantee,proto3" json:"grantee,omitempty"`
	// allowance can be any of basic, periodic, allowed fee allowance.
	Allowance *types1.Any `protobuf:"bytes,3,opt,name=allowance,proto3" json:"allowance,omitempty"`
}

func (m *Grant) Reset()         { *m = Grant{} }
func (m *Grant) String() string { return proto.CompactTextString(m) }
func (*Grant) ProtoMessage()    {}
func (*Grant) Descriptor() ([]byte, []int) {
	return fileDescriptor_7279582900c30aea, []int{3}
}
func (m *Grant) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *Grant) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_Grant.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalToSizedBuffer(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (m *Grant) XXX_Merge(src proto.Message) {
	xxx_messageInfo_Grant.Merge(m, src)
}
func (m *Grant) XXX_Size() int {
	return m.Size()
}
func (m *Grant) XXX_DiscardUnknown() {
	xxx_messageInfo_Grant.DiscardUnknown(m)
}

var xxx_messageInfo_Grant proto.InternalMessageInfo

func (m *Grant) GetGranter() string {
	if m != nil {
		return m.Granter
	}
	return ""
}

func (m *Grant) GetGrantee() string {
	if m != nil {
		return m.Grantee
	}
	return ""
}

func (m *Grant) GetAllowance() *types1.Any {
	if m != nil {
		return m.Allowance
	}
	return nil
}

func init() {
	proto.RegisterType((*BasicAllowance)(nil), "cosmos.feegrant.v1beta1.BasicAllowance")
	proto.RegisterType((*PeriodicAllowance)(nil), "cosmos.feegrant.v1beta1.PeriodicAllowance")
	proto.RegisterType((*AllowedMsgAllowance)(nil), "cosmos.feegrant.v1beta1.AllowedMsgAllowance")
	proto.RegisterType((*Grant)(nil), "cosmos.feegrant.v1beta1.Grant")
}

func init() {
	proto.RegisterFile("cosmos/feegrant/v1beta1/feegrant.proto", fileDescriptor_7279582900c30aea)
}

var fileDescriptor_7279582900c30aea = []byte{
	// 639 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00,
0x00, 0x02, 0xff, 0xb4, 0x55, 0x3f, 0x6f, 0xd3, 0x40, + 0x14, 0x8f, 0x9b, 0xb6, 0x28, 0x17, 0x28, 0xad, 0xa9, 0x84, 0x53, 0x21, 0xbb, 0x8a, 0x04, 0x4d, + 0x2b, 0xd5, 0x56, 0x8b, 0x58, 0x3a, 0x35, 0x2e, 0xa2, 0x80, 0x5a, 0xa9, 0x72, 0x99, 0x90, 0x50, + 0x74, 0xb6, 0xaf, 0xe6, 0x44, 0xec, 0x33, 0x3e, 0x17, 0x1a, 0x06, 0x66, 0xc4, 0x80, 0x32, 0x32, + 0x32, 0x22, 0xa6, 0x0e, 0xe5, 0x3b, 0x54, 0x0c, 0xa8, 0x62, 0x62, 0x22, 0x28, 0x19, 0x3a, 0xf3, + 0x0d, 0x90, 0xef, 0xce, 0x8e, 0x9b, 0x50, 0x68, 0x25, 0xba, 0x24, 0x77, 0xef, 0xde, 0xfb, 0xfd, + 0x79, 0xef, 0x45, 0x01, 0xb7, 0x1c, 0x42, 0x7d, 0x42, 0x8d, 0x1d, 0x84, 0xbc, 0x08, 0x06, 0xb1, + 0xf1, 0x62, 0xc9, 0x46, 0x31, 0x5c, 0xca, 0x02, 0x7a, 0x18, 0x91, 0x98, 0xc8, 0xd7, 0x79, 0x9e, + 0x9e, 0x85, 0x45, 0xde, 0xcc, 0xb4, 0x47, 0x3c, 0xc2, 0x72, 0x8c, 0xe4, 0xc4, 0xd3, 0x67, 0x2a, + 0x1e, 0x21, 0x5e, 0x13, 0x19, 0xec, 0x66, 0xef, 0xee, 0x18, 0x30, 0x68, 0xa5, 0x4f, 0x1c, 0xa9, + 0xc1, 0x6b, 0x04, 0x2c, 0x7f, 0x52, 0x85, 0x18, 0x1b, 0x52, 0x94, 0x09, 0x71, 0x08, 0x0e, 0xc4, + 0xfb, 0x14, 0xf4, 0x71, 0x40, 0x0c, 0xf6, 0x29, 0x42, 0xda, 0x20, 0x51, 0x8c, 0x7d, 0x44, 0x63, + 0xe8, 0x87, 0x29, 0xe6, 0x60, 0x82, 0xbb, 0x1b, 0xc1, 0x18, 0x13, 0x81, 0x59, 0x7d, 0x37, 0x02, + 0x26, 0x4c, 0x48, 0xb1, 0x53, 0x6f, 0x36, 0xc9, 0x4b, 0x18, 0x38, 0x48, 0x7e, 0x0e, 0xca, 0x34, + 0x44, 0x81, 0xdb, 0x68, 0x62, 0x1f, 0xc7, 0x8a, 0x34, 0x5b, 0xac, 0x95, 0x97, 0x2b, 0xba, 0x90, + 0x9a, 0x88, 0x4b, 0xdd, 0xeb, 0x6b, 0x04, 0x07, 0xe6, 0x9d, 0xc3, 0x1f, 0x5a, 0xe1, 0x53, 0x47, + 0xab, 0x79, 0x38, 0x7e, 0xba, 0x6b, 0xeb, 0x0e, 0xf1, 0x85, 0x2f, 0xf1, 0xb5, 0x48, 0xdd, 0x67, + 0x46, 0xdc, 0x0a, 0x11, 0x65, 0x05, 0xf4, 0xe3, 0xf1, 0xfe, 0x82, 0x64, 0x01, 0x46, 0xb2, 0x91, + 0x70, 0xc8, 0xab, 0x00, 0xa0, 0xbd, 0x10, 0x73, 0x65, 0xca, 0xc8, 0xac, 0x54, 0x2b, 0x2f, 0xcf, + 0xe8, 0x5c, 0xba, 0x9e, 0x4a, 0xd7, 0x1f, 0xa5, 0xde, 0xcc, 0xd1, 0x76, 0x47, 0x93, 0xac, 0x5c, + 0xcd, 0xca, 0xfa, 0x97, 0x83, 0xc5, 0x9b, 0xa7, 0x0c, 0x49, 0xbf, 0x87, 0x50, 
0x66, 0xef, 0xc1, + 0xdb, 0xe3, 0xfd, 0x85, 0x4a, 0x4e, 0xd8, 0x49, 0xf7, 0xd5, 0xcf, 0xa3, 0x60, 0x6a, 0x0b, 0x45, + 0x98, 0xb8, 0xf9, 0x9e, 0xdc, 0x07, 0x63, 0x76, 0x92, 0xa7, 0x48, 0x4c, 0xdb, 0x9c, 0x7e, 0x1a, + 0xd5, 0x49, 0x34, 0xb3, 0x94, 0xf4, 0x86, 0xfb, 0xe5, 0x00, 0xf2, 0x2a, 0x18, 0x0f, 0x19, 0xbc, + 0xb0, 0x59, 0x19, 0xb2, 0x79, 0x57, 0x4c, 0xc8, 0xbc, 0x92, 0x14, 0xbf, 0xef, 0x68, 0x12, 0x07, + 0x10, 0x75, 0xf2, 0x6b, 0x20, 0xf3, 0x53, 0x23, 0x3f, 0xa6, 0xe2, 0x05, 0x8d, 0x69, 0x92, 0x73, + 0x6d, 0xf7, 0x87, 0xf5, 0x0a, 0x88, 0x58, 0xc3, 0x81, 0x01, 0xd7, 0xa0, 0x8c, 0x5e, 0x10, 0xfb, + 0x04, 0x67, 0x5a, 0x83, 0x01, 0x13, 0x20, 0x6f, 0x80, 0xcb, 0x82, 0x3b, 0x42, 0x14, 0xc5, 0xca, + 0xd8, 0x3f, 0x57, 0x85, 0x35, 0xb1, 0x9d, 0x35, 0xb1, 0xcc, 0xcb, 0xad, 0xa4, 0x7a, 0xe5, 0xe1, + 0xb9, 0x96, 0xe6, 0x46, 0x4e, 0xe8, 0xd0, 0x86, 0x54, 0x7f, 0x49, 0xe0, 0x1a, 0xbb, 0x21, 0x77, + 0x93, 0x7a, 0xfd, 0xcd, 0x79, 0x02, 0x4a, 0x30, 0xbd, 0x88, 0xed, 0x99, 0x1e, 0x92, 0x5b, 0x0f, + 0x5a, 0xe6, 0xfc, 0x99, 0xc5, 0x58, 0x7d, 0x44, 0x79, 0x1e, 0x4c, 0x42, 0xce, 0xda, 0xf0, 0x11, + 0xa5, 0xd0, 0x43, 0x54, 0x19, 0x99, 0x2d, 0xd6, 0x4a, 0xd6, 0x55, 0x11, 0xdf, 0x14, 0xe1, 0x95, + 0xad, 0x37, 0x1f, 0xb4, 0xc2, 0xb9, 0x1c, 0xab, 0x39, 0xc7, 0x7f, 0xf0, 0x56, 0xfd, 0x2a, 0x81, + 0xb1, 0xf5, 0x04, 0x42, 0x5e, 0x06, 0x97, 0x18, 0x16, 0x8a, 0x98, 0xc7, 0x92, 0xa9, 0x7c, 0x3b, + 0x58, 0x9c, 0x16, 0x44, 0x75, 0xd7, 0x8d, 0x10, 0xa5, 0xdb, 0x71, 0x84, 0x03, 0xcf, 0x4a, 0x13, + 0xfb, 0x35, 0x88, 0xfd, 0x14, 0xce, 0x50, 0x33, 0xd0, 0xcd, 0xe2, 0xff, 0xee, 0xa6, 0x59, 0x3f, + 0xec, 0xaa, 0xd2, 0x51, 0x57, 0x95, 0x7e, 0x76, 0x55, 0xa9, 0xdd, 0x53, 0x0b, 0x47, 0x3d, 0xb5, + 0xf0, 0xbd, 0xa7, 0x16, 0x1e, 0xcf, 0xfd, 0x75, 0x6f, 0xf7, 0xb2, 0xff, 0x0b, 0x7b, 0x9c, 0xc9, + 0xb8, 0xfd, 0x3b, 0x00, 0x00, 0xff, 0xff, 0xe4, 0x3d, 0x09, 0x1d, 0x5a, 0x06, 0x00, 0x00, +} + +func (m *BasicAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, 
err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *BasicAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *BasicAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Expiration != nil { + n1, err1 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(*m.Expiration, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration):]) + if err1 != nil { + return 0, err1 +} + +i -= n1 + i = encodeVarintFeegrant(dAtA, i, uint64(n1)) + +i-- + dAtA[i] = 0x12 +} + if len(m.SpendLimit) > 0 { + for iNdEx := len(m.SpendLimit) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.SpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +} + +return len(dAtA) - i, nil +} + +func (m *PeriodicAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *PeriodicAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PeriodicAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + n2, err2 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(m.PeriodReset, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset):]) + if err2 != nil { + return 0, err2 +} + +i -= n2 + i = encodeVarintFeegrant(dAtA, i, uint64(n2)) + +i-- + dAtA[i] = 0x2a + if len(m.PeriodCanSpend) > 0 { + for iNdEx := len(m.PeriodCanSpend) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PeriodCanSpend[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size 
+ i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x22 +} + +} + if len(m.PeriodSpendLimit) > 0 { + for iNdEx := len(m.PeriodSpendLimit) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PeriodSpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + +} + +n3, err3 := github_com_cosmos_gogoproto_types.StdDurationMarshalTo(m.Period, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period):]) + if err3 != nil { + return 0, err3 +} + +i -= n3 + i = encodeVarintFeegrant(dAtA, i, uint64(n3)) + +i-- + dAtA[i] = 0x12 + { + size, err := m.Basic.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *AllowedMsgAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *AllowedMsgAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AllowedMsgAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.AllowedMessages) > 0 { + for iNdEx := len(m.AllowedMessages) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.AllowedMessages[iNdEx]) + +copy(dAtA[i:], m.AllowedMessages[iNdEx]) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.AllowedMessages[iNdEx]))) + +i-- + dAtA[i] = 0x12 +} + +} + if m.Allowance != nil { + { + size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *Grant) + +Marshal() (dAtA []byte, err error) { + size := 
m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *Grant) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Grant) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Allowance != nil { + { + size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + if len(m.Grantee) > 0 { + i -= len(m.Grantee) + +copy(dAtA[i:], m.Grantee) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Grantee))) + +i-- + dAtA[i] = 0x12 +} + if len(m.Granter) > 0 { + i -= len(m.Granter) + +copy(dAtA[i:], m.Granter) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Granter))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func encodeVarintFeegrant(dAtA []byte, offset int, v uint64) + +int { + offset -= sovFeegrant(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + +v >>= 7 + offset++ +} + +dAtA[offset] = uint8(v) + +return base +} + +func (m *BasicAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if len(m.SpendLimit) > 0 { + for _, e := range m.SpendLimit { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + if m.Expiration != nil { + l = github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration) + +n += 1 + l + sovFeegrant(uint64(l)) +} + +return n +} + +func (m *PeriodicAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = m.Basic.Size() + +n += 1 + l + sovFeegrant(uint64(l)) + +l = github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period) + +n += 1 + l + sovFeegrant(uint64(l)) + if len(m.PeriodSpendLimit) > 0 { + for _, e := range m.PeriodSpendLimit { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} 
+ +} + if len(m.PeriodCanSpend) > 0 { + for _, e := range m.PeriodCanSpend { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + +l = github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset) + +n += 1 + l + sovFeegrant(uint64(l)) + +return n +} + +func (m *AllowedMsgAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if m.Allowance != nil { + l = m.Allowance.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + if len(m.AllowedMessages) > 0 { + for _, s := range m.AllowedMessages { + l = len(s) + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + +return n +} + +func (m *Grant) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Granter) + if l > 0 { + n += 1 + l + sovFeegrant(uint64(l)) +} + +l = len(m.Grantee) + if l > 0 { + n += 1 + l + sovFeegrant(uint64(l)) +} + if m.Allowance != nil { + l = m.Allowance.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +return n +} + +func sovFeegrant(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} + +func sozFeegrant(x uint64) (n int) { + return sovFeegrant(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +func (m *BasicAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BasicAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: BasicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SpendLimit", wireType) +} + +var msglen int + for shift := uint(0); ; 
shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.SpendLimit = append(m.SpendLimit, types.Coin{ +}) + if err := m.SpendLimit[len(m.SpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Expiration", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Expiration == nil { + m.Expiration = new(time.Time) +} + if err := github_com_cosmos_gogoproto_types.StdTimeUnmarshal(m.Expiration, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *PeriodicAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 
io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PeriodicAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: PeriodicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Basic", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := m.Basic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Period", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := github_com_cosmos_gogoproto_types.StdDurationUnmarshal(&m.Period, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodSpendLimit", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { 
+ return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.PeriodSpendLimit = append(m.PeriodSpendLimit, types.Coin{ +}) + if err := m.PeriodSpendLimit[len(m.PeriodSpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodCanSpend", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.PeriodCanSpend = append(m.PeriodCanSpend, types.Coin{ +}) + if err := m.PeriodCanSpend[len(m.PeriodCanSpend)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodReset", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := 
github_com_cosmos_gogoproto_types.StdTimeUnmarshal(&m.PeriodReset, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *AllowedMsgAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AllowedMsgAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: AllowedMsgAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Allowance == nil { + m.Allowance = &types1.Any{ +} + +} + if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field AllowedMessages", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.AllowedMessages = append(m.AllowedMessages, string(dAtA[iNdEx:postIndex])) + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *Grant) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Grant: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: Grant: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Granter", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift 
+ if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Granter = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Grantee", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Grantee = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Allowance == nil { + m.Allowance = &types1.Any{ +} + +} + if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return 
io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}

	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}

func skipFeegrant(dAtA []byte) (n int, err error) {
	l := len(dAtA)
	iNdEx := 0
	depth := 0
	for iNdEx < l {
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return 0, ErrIntOverflowFeegrant
			}
			if iNdEx >= l {
				return 0, io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= (uint64(b) & 0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		wireType := int(wire & 0x7)
		switch wireType {
		case 0:
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowFeegrant
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				iNdEx++
				if dAtA[iNdEx-1] < 0x80 {
					break
				}
			}
		case 1:
			iNdEx += 8
		case 2:
			var length int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowFeegrant
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				length |= (int(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if length < 0 {
				return 0, ErrInvalidLengthFeegrant
			}
			iNdEx += length
		case 3:
			depth++
		case 4:
			if depth == 0 {
				return 0, ErrUnexpectedEndOfGroupFeegrant
			}
			depth--
		case 5:
			iNdEx += 4
		default:
			return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
		}
		if iNdEx < 0 {
			return 0, ErrInvalidLengthFeegrant
		}
		if depth == 0 {
			return iNdEx, nil
		}
	}
	return 0, io.ErrUnexpectedEOF
}

var (
	ErrInvalidLengthFeegrant        = fmt.Errorf("proto: negative length found during unmarshaling")
	ErrIntOverflowFeegrant          = fmt.Errorf("proto: integer overflow")
	ErrUnexpectedEndOfGroupFeegrant = fmt.Errorf("proto: unexpected end of group")
)
```

### FeeAllowanceQueue

Fee allowance queue items are identified by combining the `FeeAllowancePrefixQueue` (i.e., `0x01`), the `expiration`, the `grantee` (the account address of the fee allowance grantee), and the `granter` (the account address of the fee allowance granter).
The `EndBlocker` checks the `FeeAllowanceQueue` state for expired grants and prunes any it finds from the `FeeAllowance` store.

Fee allowance queue keys are stored in the state as follows:

* Grant: `0x01 | expiration_bytes | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> EmptyBytes`

## Messages

### Msg/GrantAllowance

A fee allowance grant is created with the `MsgGrantAllowance` message.

```protobuf
// MsgGrantAllowance adds permission for Grantee to spend up to Allowance
// of fees from the account of Granter.
message MsgGrantAllowance {
  option (cosmos.msg.v1.signer) = "granter";
  option (amino.name) = "cosmos-sdk/MsgGrantAllowance";

  // granter is the address of the user granting an allowance of their funds.
  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // grantee is the address of the user being granted an allowance of another user's funds.
  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // allowance can be any of basic, periodic, allowed fee allowance.
  google.protobuf.Any allowance = 3 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"];
}
```

### Msg/RevokeAllowance

An existing fee allowance can be removed with the `MsgRevokeAllowance` message.

```protobuf
// MsgGrantAllowanceResponse defines the Msg/GrantAllowanceResponse response type.
message MsgGrantAllowanceResponse {}

// MsgRevokeAllowance removes any existing Allowance from Granter to Grantee.
message MsgRevokeAllowance {
  option (cosmos.msg.v1.signer) = "granter";
  option (amino.name) = "cosmos-sdk/MsgRevokeAllowance";

  // granter is the address of the user granting an allowance of their funds.
  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // grantee is the address of the user being granted an allowance of another user's funds.
+ string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +## Events + +The feegrant module emits the following events: + +## Msg Server + +### MsgGrantAllowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | set\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### MsgRevokeAllowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | revoke\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### Exec fee allowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | use\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +## Client + +### CLI + +A user can query and interact with the `feegrant` module using the CLI. + +#### Query + +The `query` commands allow users to query `feegrant` state. + +```shell +simd query feegrant --help +``` + +##### grant + +The `grant` command allows users to query a grant for a given granter-grantee pair. + +```shell +simd query feegrant grant [granter] [grantee] [flags] +``` + +Example: + +```shell +simd query feegrant grant cosmos1.. cosmos1.. +``` + +Example Output: + +```yml +allowance: + '@type': /cosmos.feegrant.v1beta1.BasicAllowance + expiration: null + spend_limit: + - amount: "100" + denom: stake +grantee: cosmos1.. +granter: cosmos1.. +``` + +##### grants + +The `grants` command allows users to query all grants for a given grantee. + +```shell +simd query feegrant grants [grantee] [flags] +``` + +Example: + +```shell +simd query feegrant grants cosmos1.. 
+``` + +Example Output: + +```yml expandable +allowances: +- allowance: + '@type': /cosmos.feegrant.v1beta1.BasicAllowance + expiration: null + spend_limit: + - amount: "100" + denom: stake + grantee: cosmos1.. + granter: cosmos1.. +pagination: + next_key: null + total: "0" +``` + +#### Transactions + +The `tx` commands allow users to interact with the `feegrant` module. + +```shell +simd tx feegrant --help +``` + +##### grant + +The `grant` command allows users to grant fee allowances to another account. The fee allowance can have an expiration date, a total spend limit, and/or a periodic spend limit. + +```shell +simd tx feegrant grant [granter] [grantee] [flags] +``` + +Example (one-time spend limit): + +```shell +simd tx feegrant grant cosmos1.. cosmos1.. --spend-limit 100stake +``` + +Example (periodic spend limit): + +```shell +simd tx feegrant grant cosmos1.. cosmos1.. --period 3600 --period-limit 10stake +``` + +##### revoke + +The `revoke` command allows users to revoke a granted fee allowance. + +```shell +simd tx feegrant revoke [granter] [grantee] [flags] +``` + +Example: + +```shell +simd tx feegrant revoke cosmos1.. cosmos1.. +``` + +### gRPC + +A user can query the `feegrant` module using gRPC endpoints. + +#### Allowance + +The `Allowance` endpoint allows users to query a granted fee allowance. + +```shell +cosmos.feegrant.v1beta1.Query/Allowance +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"grantee":"cosmos1..","granter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.feegrant.v1beta1.Query/Allowance +``` + +Example Output: + +```json +{ + "allowance": { + "granter": "cosmos1..", + "grantee": "cosmos1..", + "allowance": { + "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", + "spendLimit": [ + { + "denom": "stake", + "amount": "100" + } + ] + } + } +} +``` + +#### Allowances + +The `Allowances` endpoint allows users to query all granted fee allowances for a given grantee. 
+
+```shell
+cosmos.feegrant.v1beta1.Query/Allowances
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"address":"cosmos1.."}' \
+  localhost:9090 \
+  cosmos.feegrant.v1beta1.Query/Allowances
+```
+
+Example Output:
+
+```json expandable
+{
+  "allowances": [
+    {
+      "granter": "cosmos1..",
+      "grantee": "cosmos1..",
+      "allowance": {
+        "@type": "/cosmos.feegrant.v1beta1.BasicAllowance",
+        "spendLimit": [
+          {
+            "denom": "stake",
+            "amount": "100"
+          }
+        ]
+      }
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
diff --git a/docs/sdk/v0.47/documentation/module-system/modules/genutil/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/genutil/README.mdx
new file mode 100644
index 00000000..b27b1342
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/modules/genutil/README.mdx
@@ -0,0 +1,74 @@
+---
+title: '`x/genutil`'
+description: >-
+  The genutil package contains a variety of genesis utility functionalities for
+  usage within a blockchain application. Namely:
+---
+
+## Concepts
+
+The `genutil` package contains a variety of genesis utility functionalities for usage within a blockchain application. Namely:
+
+* Genesis transaction (gentx) related functionality
+* Commands for collection and creation of gentxs
+* `InitChain` processing of gentxs
+* Genesis file validation
+* Genesis file migration
+* CometBFT related initialization
+  * Translation of an app genesis to a CometBFT genesis
+
+## Client
+
+### CLI
+
+The genutil commands are available under the `genesis` subcommand.
+
+#### add-genesis-account
+
+Add a genesis account to `genesis.json`. Learn more [here](https://docs.cosmos.network/main/run-node/run-node#adding-genesis-accounts).
+
+#### collect-gentxs
+
+Collect genesis txs and output a `genesis.json` file.
+
+```shell
+simd genesis collect-gentxs
+```
+
+This will create a new `genesis.json` file that includes data from all the validators (we sometimes call it the "super genesis file" to distinguish it from single-validator genesis files).
+
+#### gentx
+
+Generate a genesis tx carrying a self delegation.
+
+```shell
+simd genesis gentx [key_name] [amount] --chain-id [chain-id]
+```
+
+This will create the genesis transaction for your new chain. Here `amount` should be at least `1000000000stake`.
+If you provide too much or too little, you will encounter an error when starting a node.
+
+#### migrate
+
+Migrate genesis to a specified target (SDK) version.
+
+```shell
+simd genesis migrate [target-version]
+```
+
+
+The `migrate` command is extensible and takes a `MigrationMap`. This map is a mapping of target versions to genesis migration functions.
+When not using the default `MigrationMap`, it is recommended to still call the default `MigrationMap` corresponding to the SDK version of the chain and prepend/append your own genesis migrations.
+
+
+#### validate-genesis
+
+Validates the genesis file at the default location or at the location passed as an argument.
+
+```shell
+simd genesis validate-genesis
+```
+
+
+The `validate-genesis` command only checks that the genesis is valid for the **current application binary**. For validating a genesis from a previous version of the application, use the `migrate` command to migrate the genesis to the current version.
+
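The `MigrationMap` idea above can be illustrated with a minimal, self-contained sketch. The type and function names here (`GenesisDoc`, `MigrationFn`, `Migrate`) are simplified stand-ins for illustration, not the actual SDK API:

```go
package main

import (
	"fmt"
	"sort"
)

// GenesisDoc stands in for a decoded genesis file; the real SDK type is far
// richer. This is a simplification for illustration only.
type GenesisDoc map[string]string

// MigrationFn migrates a genesis doc from the previous version to a target one.
type MigrationFn func(GenesisDoc) GenesisDoc

// MigrationMap maps target versions to migration functions, mirroring the idea
// behind `simd genesis migrate [target-version]`.
type MigrationMap map[string]MigrationFn

// Migrate applies every migration up to and including target, in version order.
func Migrate(doc GenesisDoc, m MigrationMap, target string) (GenesisDoc, error) {
	versions := make([]string, 0, len(m))
	for v := range m {
		versions = append(versions, v)
	}
	sort.Strings(versions)
	for _, v := range versions {
		doc = m[v](doc)
		if v == target {
			return doc, nil
		}
	}
	return nil, fmt.Errorf("unknown target version %q", target)
}

func main() {
	mm := MigrationMap{
		"v0.46": func(d GenesisDoc) GenesisDoc { d["app_version"] = "v0.46"; return d },
		"v0.47": func(d GenesisDoc) GenesisDoc { d["app_version"] = "v0.47"; return d },
	}
	out, err := Migrate(GenesisDoc{}, mm, "v0.47")
	fmt.Println(out["app_version"], err)
}
```

Prepending or appending your own migrations then amounts to adding entries to the map before calling `Migrate`.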
+---
+
+## Abstract
+
+This paper specifies the Governance module of the Cosmos SDK, which was first
+described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in
+June 2016.
+
+The module enables a Cosmos SDK based blockchain to support an on-chain governance
+system. In this system, holders of the native staking token of the chain can vote
+on proposals on a 1 token 1 vote basis. Next is a list of features the module
+currently supports:
+
+* **Proposal submission:** Users can submit proposals with a deposit. Once the
+  minimum deposit is reached, the proposal enters the voting period.
+* **Vote:** Participants can vote on proposals that reached `MinDeposit`.
+* **Inheritance and penalties:** Delegators inherit their validator's vote if
+  they don't vote themselves.
+* **Claiming deposit:** Users that deposited on proposals can recover their
+  deposits if the proposal was accepted or rejected. If the proposal was vetoed, or never entered the voting period, the deposit is burned.
+
+This module will be used in the Cosmos Hub, the first Hub in the Cosmos network.
+Features that may be added in the future are described in [Future Improvements](#future-improvements).
+
+## Contents
+
+The following specification uses *ATOM* as the native staking token. The module
+can be adapted to any Proof-Of-Stake blockchain by replacing *ATOM* with the native
+staking token of the chain.
+
+* [Concepts](#concepts)
+  * [Proposal submission](#proposal-submission)
+  * [Deposit](#deposit)
+  * [Vote](#vote)
+* [State](#state)
+  * [Proposals](#proposals)
+  * [Parameters and base types](#parameters-and-base-types)
+  * [Deposit](#deposit-1)
+  * [ValidatorGovInfo](#validatorgovinfo)
+  * [Stores](#stores)
+  * [Proposal Processing Queue](#proposal-processing-queue)
+  * [Legacy Proposal](#legacy-proposal)
+* [Messages](#messages)
+  * [Proposal Submission](#proposal-submission-1)
+  * [Deposit](#deposit-2)
+  * [Vote](#vote-1)
+* [Events](#events)
+  * [EndBlocker](#endblocker)
+  * [Handlers](#handlers)
+* [Parameters](#parameters)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+  * [REST](#rest)
+* [Metadata](#metadata)
+  * [Proposal](#proposal-3)
+  * [Vote](#vote-5)
+* [Future Improvements](#future-improvements)
+
+## Concepts
+
+*Disclaimer: This is work in progress. Mechanisms are susceptible to change.*
+
+The governance process is divided into a few steps that are outlined below:
+
+* **Proposal submission:** Proposal is submitted to the blockchain with a
+  deposit.
+* **Vote:** Once the deposit reaches a certain value (`MinDeposit`), the proposal is
+  confirmed and the vote opens. Bonded Atom holders can then send `TxGovVote`
+  transactions to vote on the proposal.
+* **Execution:** After a period of time, the votes are tallied and depending
+  on the result, the messages in the proposal will be executed.
+
+### Proposal submission
+
+#### Right to submit a proposal
+
+Every account can submit proposals by sending a `MsgSubmitProposal` transaction.
+Once a proposal is submitted, it is identified by its unique `proposalID`.
+
+#### Proposal Messages
+
+A proposal includes an array of `sdk.Msg`s which are executed automatically if the
+proposal passes. The messages are executed by the governance `ModuleAccount` itself.
Modules
+such as `x/upgrade` that want to allow certain messages to be executed only by
+governance should add a whitelist within the respective msg server, granting the governance
+module the right to execute the message once a quorum has been reached. The governance
+module uses the `MsgServiceRouter` to check that these messages are correctly constructed
+and have a respective path to execute on, but it does not perform a full validity check.
+
+### Deposit
+
+To prevent spam, proposals must be submitted with a deposit in the coins defined by
+the `MinDeposit` param.
+
+When a proposal is submitted, it has to be accompanied by a deposit that must be
+strictly positive, but can be inferior to `MinDeposit`. The submitter doesn't need
+to pay for the entire deposit on their own. The newly created proposal is stored in
+an *inactive proposal queue* and stays there until its deposit passes the `MinDeposit`.
+Other token holders can increase the proposal's deposit by sending a `Deposit`
+transaction. If a proposal doesn't pass the `MinDeposit` before the deposit end time
+(the time when deposits are no longer accepted), the proposal will be destroyed: the
+proposal will be removed from state and the deposit will be burned (see x/gov `EndBlocker`).
+When a proposal deposit passes the `MinDeposit` threshold (even during the proposal
+submission) before the deposit end time, the proposal will be moved into the
+*active proposal queue* and the voting period will begin.
+
+The deposit is kept in escrow and held by the governance `ModuleAccount` until the
+proposal is finalized (passed or rejected).
+
+#### Deposit refund and burn
+
+When a proposal is finalized, the coins from the deposit are either refunded or burned
+according to the final tally of the proposal:
+
+* If the proposal is approved or rejected but *not* vetoed, each deposit will be
+  automatically refunded to its respective depositor (transferred from the governance
+  `ModuleAccount`).
+* When the proposal is vetoed with greater than 1/3, deposits will be burned from the + governance `ModuleAccount` and the proposal information along with its deposit + information will be removed from state. +* All refunded or burned deposits are removed from the state. Events are issued when + burning or refunding a deposit. + +### Vote + +#### Participants + +*Participants* are users that have the right to vote on proposals. On the +Cosmos Hub, participants are bonded Atom holders. Unbonded Atom holders and +other users do not get the right to participate in governance. However, they +can submit and deposit on proposals. + +Note that when *participants* have bonded and unbonded Atoms, their voting power is calculated from their bonded Atom holdings only. + +#### Voting period + +Once a proposal reaches `MinDeposit`, it immediately enters `Voting period`. We +define `Voting period` as the interval between the moment the vote opens and +the moment the vote closes. `Voting period` should always be shorter than +`Unbonding period` to prevent double voting. The initial value of +`Voting period` is 2 weeks. + +#### Option set + +The option set of a proposal refers to the set of choices a participant can +choose from when casting its vote. + +The initial option set includes the following options: + +* `Yes` +* `No` +* `NoWithVeto` +* `Abstain` + +`NoWithVeto` counts as `No` but also adds a `Veto` vote. `Abstain` option +allows voters to signal that they do not intend to vote in favor or against the +proposal but accept the result of the vote. + +*Note: from the UI, for urgent proposals we should maybe add a ‘Not Urgent’ option that casts a `NoWithVeto` vote.* + +#### Weighted Votes + +[ADR-037](/docs/common/pages/adr-comprehensive#adr-037-governance-split-votes) introduces the weighted vote feature which allows a staker to split their votes into several voting options. 
For example, it could use 70% of its voting power to vote Yes and 30% of its voting power to vote No.
+
+Oftentimes the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, and so it makes sense to allow them to split their voting power. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.
+
+To represent a weighted vote on chain, we use the following Protobuf message.
+
+```protobuf
+// WeightedVoteOption defines a unit of vote for vote split.
+//
+// Since: cosmos-sdk 0.43
+message WeightedVoteOption {
+  // option defines the valid vote options, it must not contain duplicate vote options.
+  VoteOption option = 1;
+
+  // weight is the vote weight associated with the vote option.
+  string weight = 2 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+```
+
+```protobuf
+// Vote defines a vote on a governance proposal.
+// A Vote consists of a proposal ID, the voter, and the vote option.
+message Vote {
+  option (gogoproto.goproto_stringer) = false;
+  option (gogoproto.equal) = false;
+
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1 [(gogoproto.jsontag) = "id", (amino.field_name) = "id", (amino.dont_omitempty) = true];
+
+  // voter is the voter address of the proposal.
+  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // Deprecated: Prefer to use `options` instead. This field is set in queries
+  // if and only if `len(options) == 1` and that option has weight 1. In all
+  // other cases, this field will default to VOTE_OPTION_UNSPECIFIED.
+  VoteOption option = 3 [deprecated = true];
+
+  // options is the weighted vote options.
+  //
+  // Since: cosmos-sdk 0.43
+  repeated WeightedVoteOption options = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+For a weighted vote to be valid, the `options` field must not contain duplicate vote options, and the sum of weights of all options must be equal to 1.
+
+### Quorum
+
+Quorum is defined as the minimum percentage of voting power that needs to be
+cast on a proposal for the result to be valid.
+
+### Expedited Proposals
+
+A proposal can be expedited, giving it a shorter voting duration and a higher tally threshold by default. If an expedited proposal fails to meet the threshold within the shorter voting duration, it is converted to a regular proposal and restarts voting under regular voting conditions.
+
+#### Threshold
+
+Threshold is defined as the minimum proportion of `Yes` votes (excluding
+`Abstain` votes) for the proposal to be accepted.
+
+Initially, the threshold is set at 50% of `Yes` votes, excluding `Abstain`
+votes. A possibility to veto exists if more than 1/3rd of all votes are
+`NoWithVeto` votes. Note, both of these values are derived from the `TallyParams`
+on-chain parameter, which is modifiable by governance.
+This means that proposals are accepted iff:
+
+* There exist bonded tokens.
+* Quorum has been achieved.
+* The proportion of `Abstain` votes is inferior to 1/1.
+* The proportion of `NoWithVeto` votes is inferior to 1/3, including
+  `Abstain` votes.
+* The proportion of `Yes` votes, excluding `Abstain` votes, at the end of
+  the voting period is superior to 1/2.
+
+For expedited proposals, by default, the threshold is higher than with a *normal proposal*, namely, 66.7%.
+
+#### Inheritance
+
+If a delegator does not vote, it will inherit its validator vote.
+
+* If the delegator votes before its validator, it will not inherit from the
+  validator's vote.
+* If the delegator votes after its validator, it will override its validator
+  vote with its own. If the proposal is urgent, it is possible
+  that the vote will close before delegators have a chance to react and
+  override their validator's vote. This is not a problem, as proposals require more than 2/3rd of the total voting power to pass, when tallied at the end of the voting period. Because as little as 1/3 + 1 validation power could collude to censor transactions, non-collusion is already assumed for ranges exceeding this threshold.
+
+#### Validator’s punishment for non-voting
+
+At present, validators are not punished for failing to vote.
+
+#### Governance address
+
+Later, we may add permissioned keys that could only sign txs from certain modules. For the MVP, the `Governance address` will be the main validator address generated at account creation. This address corresponds to a different PrivKey than the CometBFT PrivKey which is responsible for signing consensus messages. Validators thus do not have to sign governance transactions with the sensitive CometBFT PrivKey.
+
+#### Burnable Params
+
+There are three parameters that define whether the deposit of a proposal should be burned or returned to the depositors.
+
+* `BurnVoteVeto` burns the proposal deposit if the proposal gets vetoed.
+* `BurnVoteQuorum` burns the proposal deposit if the vote does not reach quorum.
+* `BurnProposalDepositPrevote` burns the proposal deposit if it does not enter the voting phase.
+
+> Note: These parameters are modifiable via governance.
+
+## State
+
+### Constitution
+
+`Constitution` is found in the genesis state. It is a string field intended to be used to describe the purpose of a particular blockchain, and its expected norms.
A few examples of how the constitution field can be used:
+
+* define the purpose of the chain, laying a foundation for its future development
+* set expectations for delegators
+* set expectations for validators
+* define the chain's relationship to "meatspace" entities, like a foundation or corporation
+
+Since this is more of a social feature than a technical feature, we'll now get into some items that may have been useful to have in a genesis constitution:
+
+* What limitations on governance exist, if any?
+  * is it okay for the community to slash the wallet of a whale that they no longer feel that they want around? (viz: Juno Proposal 4 and 16)
+  * can governance "socially slash" a validator who is using unapproved MEV? (viz: commonwealth.im/osmosis)
+  * In the event of an economic emergency, what should validators do?
+    * Terra crash of May, 2022, saw validators choose to run a new binary with code that had not been approved by governance, because the governance token had been inflated to nothing.
+* What is the purpose of the chain, specifically?
+  * best example of this is the Cosmos hub, where different founding groups have different interpretations of the purpose of the network.
+
+This genesis entry, "constitution", hasn't been designed for existing chains, which should likely just ratify a constitution using their governance system. Instead, this is for new chains. It will allow for validators to have a much clearer idea of purpose and the expectations placed on them while operating their nodes. Likewise, for community members, the constitution will give them some idea of what to expect from both the "chain team" and the validators, respectively.
+
+This constitution is designed to be immutable, and placed only in genesis, though that could change over time by a pull request to the cosmos-sdk that allows for the constitution to be changed by governance.
Communities wishing to make amendments to their original constitution should use the governance mechanism and a "signaling proposal" to do exactly that.
+
+**Ideal use scenario for a cosmos chain constitution**
+
+As a chain developer, you decide that you'd like to provide clarity to your key user groups:
+
+* validators
+* token holders
+* developers (yourself)
+
+You use the constitution to immutably store some Markdown in genesis, so that when difficult questions come up, the constitution can provide guidance to the community.
+
+### Proposals
+
+`Proposal` objects are used to tally votes and generally track the proposal's state.
+They contain an array of arbitrary `sdk.Msg`'s which the governance module will attempt
+to resolve and then execute if the proposal passes. `Proposal`s are identified by a
+unique id and contain a series of timestamps: `submit_time`, `deposit_end_time`,
+`voting_start_time`, `voting_end_time` which track the lifecycle of a proposal.
+
+```protobuf
+// Proposal defines the core field members of a governance proposal.
+message Proposal {
+  // id defines the unique id of the proposal.
+  uint64 id = 1;
+
+  // messages are the arbitrary messages to be executed if the proposal passes.
+  repeated google.protobuf.Any messages = 2;
+
+  // status defines the proposal status.
+  ProposalStatus status = 3;
+
+  // final_tally_result is the final tally result of the proposal. When
+  // querying a proposal via gRPC, this field is not populated until the
+  // proposal's voting period has ended.
+  TallyResult final_tally_result = 4;
+
+  // submit_time is the time of proposal submission.
+  google.protobuf.Timestamp submit_time = 5 [(gogoproto.stdtime) = true];
+
+  // deposit_end_time is the end time for deposition.
+  google.protobuf.Timestamp deposit_end_time = 6 [(gogoproto.stdtime) = true];
+
+  // total_deposit is the total deposit on the proposal.
+  repeated cosmos.base.v1beta1.Coin total_deposit = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // voting_start_time is the starting time to vote on a proposal.
+  google.protobuf.Timestamp voting_start_time = 8 [(gogoproto.stdtime) = true];
+
+  // voting_end_time is the end time of voting on a proposal.
+  google.protobuf.Timestamp voting_end_time = 9 [(gogoproto.stdtime) = true];
+
+  // metadata is any arbitrary metadata attached to the proposal.
+  string metadata = 10;
+
+  // title is the title of the proposal
+  //
+  // Since: cosmos-sdk 0.47
+  string title = 11;
+
+  // summary is a short summary of the proposal
+  //
+  // Since: cosmos-sdk 0.47
+  string summary = 12;
+
+  // Proposer is the address of the proposal submitter
+  //
+  // Since: cosmos-sdk 0.47
+  string proposer = 13 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+A proposal will generally require more than just a set of messages to explain its
+purpose; it needs some greater justification and a means for interested participants
+to discuss and debate the proposal.
+In most cases, **it is encouraged to have an off-chain system that supports the on-chain governance process**.
+To accommodate for this, a proposal contains a special **`metadata`** field, a string,
+which can be used to add context to the proposal. The `metadata` field allows custom use for networks,
+however, it is expected that the field contains a URL or some form of CID using a system such as
+[IPFS](https://docs.ipfs.io/concepts/content-addressing/). To support the case of
+interoperability across networks, the SDK recommends that the `metadata` represents
+the following `JSON` template:
+
+```json
+{
+  "title": "...",
+  "description": "...",
+  "forum": "...", // a link to the discussion platform (i.e. Discord)
+  "other": "..." // any extra data that doesn't correspond to the other fields
+}
+```
+
+This makes it far easier for clients to support multiple networks.
+ +The metadata has a maximum length that is chosen by the app developer, and +passed into the gov keeper as a config. The default maximum length in the SDK is 255 characters. + +#### Writing a module that uses governance + +There are many aspects of a chain, or of the individual modules that you may want to +use governance to perform such as changing various parameters. This is very simple +to do. First, write out your message types and `MsgServer` implementation. Add an +`authority` field to the keeper which will be populated in the constructor with the +governance module account: `govKeeper.GetGovernanceAccount().GetAddress()`. Then for +the methods in the `msg_server.go`, perform a check on the message that the signer +matches `authority`. This will prevent any user from executing that message. + +### Parameters and base types + +`Parameters` define the rules according to which votes are run. There can only +be one active parameter set at any given time. If governance wants to change a +parameter set, either to modify a value or add/remove a parameter field, a new +parameter set has to be created and the previous one rendered inactive. + +#### DepositParams + +```protobuf +// DepositParams defines the params for deposits on governance proposals. +message DepositParams { + // Minimum deposit for a proposal to enter voting period. + repeated cosmos.base.v1beta1.Coin min_deposit = 1 + [(gogoproto.nullable) = false, (gogoproto.jsontag) = "min_deposit,omitempty"]; + + // Maximum period for Atom holders to deposit on a proposal. Initial value: 2 + // months. + google.protobuf.Duration max_deposit_period = 2 + [(gogoproto.stdduration) = true, (gogoproto.jsontag) = "max_deposit_period,omitempty"]; +} +``` + +#### VotingParams + +```protobuf +// VotingParams defines the params for voting on governance proposals. +message VotingParams { + // Duration of the voting period. 
+  google.protobuf.Duration voting_period = 1 [(gogoproto.stdduration) = true];
+}
+```
+
+#### TallyParams
+
+```protobuf
+// TallyParams defines the params for tallying votes on governance proposals.
+message TallyParams {
+  // Minimum percentage of total stake needed to vote for a result to be
+  // considered valid.
+  string quorum = 1 [(cosmos_proto.scalar) = "cosmos.Dec"];
+
+  // Minimum proportion of Yes votes for proposal to pass. Default value: 0.5.
+  string threshold = 2 [(cosmos_proto.scalar) = "cosmos.Dec"];
+
+  // Minimum value of Veto votes to Total votes ratio for proposal to be
+  // vetoed. Default value: 1/3.
+  string veto_threshold = 3 [(cosmos_proto.scalar) = "cosmos.Dec"];
+}
+```
+
+Parameters are stored in a global `GlobalParams` KVStore.
+
+Additionally, we introduce some basic types:
+
+```go expandable
+type Vote byte
+
+const (
+    VoteYes        = 0x1
+    VoteNo         = 0x2
+    VoteNoWithVeto = 0x3
+    VoteAbstain    = 0x4
+)
+
+type ProposalType string
+
+const (
+    ProposalTypePlainText       = "Text"
+    ProposalTypeSoftwareUpgrade = "SoftwareUpgrade"
+)
+
+type ProposalStatus byte
+
+const (
+    StatusNil           ProposalStatus = 0x00
+    StatusDepositPeriod ProposalStatus = 0x01 // Proposal is submitted. Participants can deposit on it but not vote
+    StatusVotingPeriod  ProposalStatus = 0x02 // MinDeposit is reached, participants can vote
+    StatusPassed        ProposalStatus = 0x03 // Proposal passed and successfully executed
+    StatusRejected      ProposalStatus = 0x04 // Proposal has been rejected
+    StatusFailed        ProposalStatus = 0x05 // Proposal passed but failed execution
+)
+```
+
+### Deposit
+
+```protobuf
+// Deposit defines an amount deposited by an account address to an active
+// proposal.
+message Deposit {
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1;
+
+  // depositor defines the deposit addresses from the proposals.
+  string depositor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // amount to be deposited by depositor.
+  repeated cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+### ValidatorGovInfo
+
+This type is used in a temp map when tallying.
+
+```go
+type ValidatorGovInfo struct {
+    Minus sdk.Dec
+    Vote  Vote
+}
+```
+
+## Stores
+
+
+Stores are KVStores in the multi-store. The key to find the store is the first parameter in the list.
+
+
+We will use one KVStore `Governance` to store four mappings:
+
+* A mapping from `proposalID|'proposal'` to `Proposal`. This mapping allows
+* A mapping from `proposalID|'addresses'|address` to `Vote`. This mapping allows
+  us to query all addresses that voted on the proposal along with their vote by
+  doing a range query on `proposalID:addresses`.
+* A mapping from `ParamsKey|'Params'` to `Params`. This mapping allows querying all
+  x/gov params.
+* A mapping from `VotingPeriodProposalKeyPrefix|proposalID` to a single byte. This allows
+  us to know if a proposal is in the voting period or not with very low gas cost.
+
+For pseudocode purposes, here are the two functions we will use to read or write in stores:
+
+* `load(StoreKey, Key)`: Retrieve item stored at key `Key` in store found at key `StoreKey` in the multistore
+* `store(StoreKey, Key, value)`: Write value `Value` at key `Key` in store found at key `StoreKey` in the multistore
+
+### Proposal Processing Queue
+
+**Store:**
+
+* `ProposalProcessingQueue`: A queue `queue[proposalID]` containing all the
+  `ProposalIDs` of proposals that reached `MinDeposit`. During each `EndBlock`,
+  all the proposals that have reached the end of their voting period are processed.
+  To process a finished proposal, the application tallies the votes, computes the
+  votes of each validator and checks if every validator in the validator set has
+  voted. If the proposal is accepted, deposits are refunded. Finally, the proposal
+  content `Handler` is executed.
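The accept/reject decision applied during this processing can be condensed into a small, self-contained sketch. The function name `tallyOutcome` and the concrete parameter values are illustrative, not the SDK's actual tally code:

```go
package main

import "fmt"

// tallyOutcome applies the tally rules described in this spec: quorum first,
// then the NoWithVeto check (Abstain included in the denominator), then the
// Yes threshold with Abstain excluded.
func tallyOutcome(yes, no, veto, abstain, bondedTokens, quorum, threshold, vetoThreshold float64) string {
	total := yes + no + veto + abstain
	if bondedTokens == 0 || total/bondedTokens < quorum {
		return "rejected (quorum not reached)"
	}
	if veto/total > vetoThreshold {
		return "rejected (vetoed)"
	}
	nonAbstain := yes + no + veto
	if nonAbstain > 0 && yes/nonAbstain > threshold {
		return "passed"
	}
	return "rejected"
}

func main() {
	// 60 Yes, 20 No, 5 NoWithVeto, 15 Abstain out of 200 bonded tokens,
	// with the default-style params quorum=0.334, threshold=0.5, veto=0.334.
	fmt.Println(tallyOutcome(60, 20, 5, 15, 200, 0.334, 0.5, 0.334))
}
```

Here 100 of 200 bonded tokens voted (quorum met), the veto share is 5%, and Yes is 60/85 of non-abstaining power, so the proposal passes.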
+
+And the pseudocode for the `ProposalProcessingQueue`:
+
+```go expandable
+in EndBlock do
+
+  for finishedProposalID in GetAllFinishedProposalIDs(block.Time)
+    proposal = load(Governance, <proposalID|'proposal'>) // proposal is a const key
+
+    validators = Keeper.getAllValidators()
+    tmpValMap := map(sdk.AccAddress)ValidatorGovInfo
+
+    // Initiate mapping at 0. This is the amount of shares of the validator's vote that will be overridden by their delegator's votes
+    for each validator in validators
+      tmpValMap(validator.OperatorAddr).Minus = 0
+
+    // Tally
+    voterIterator = rangeQuery(Governance, <proposalID|'addresses'>) // return all the addresses that voted on the proposal
+    for each (voterAddress, vote) in voterIterator
+      delegations = stakingKeeper.getDelegations(voterAddress) // get all delegations for current voter
+
+      for each delegation in delegations
+        // make sure delegation.Shares does NOT include shares being unbonded
+        tmpValMap(delegation.ValidatorAddr).Minus += delegation.Shares
+        proposal.updateTally(vote, delegation.Shares)
+
+      _, isVal = stakingKeeper.getValidator(voterAddress)
+      if (isVal)
+        tmpValMap(voterAddress).Vote = vote
+
+    tallyingParam = load(GlobalParams, 'TallyingParam')
+
+    // Update tally if validator voted
+    for each validator in validators
+      if tmpValMap(validator).HasVoted
+        proposal.updateTally(tmpValMap(validator).Vote, (validator.TotalShares - tmpValMap(validator).Minus))
+
+    // Check if proposal is accepted or rejected
+    totalNonAbstain := proposal.YesVotes + proposal.NoVotes + proposal.NoWithVetoVotes
+    if (proposal.Votes.YesVotes/totalNonAbstain > tallyingParam.Threshold AND proposal.Votes.NoWithVetoVotes/totalNonAbstain < tallyingParam.Veto)
+      // proposal was accepted at the end of the voting period
+      // refund deposits (non-voters already punished)
+      for each (amount, depositor) in proposal.Deposits
+        depositor.AtomBalance += amount
+
+      stateWriter, err := proposal.Handler()
+      if err != nil
+        // proposal passed but failed during state execution
+        proposal.CurrentStatus = ProposalStatusFailed
+      else
+        // proposal passed and state is persisted
+        proposal.CurrentStatus = ProposalStatusAccepted
+        stateWriter.save()
+    else
+      // proposal was rejected
+      proposal.CurrentStatus = ProposalStatusRejected
+
+    store(Governance, <proposalID|'proposal'>, proposal)
+```
+
+### Legacy Proposal
+
+A legacy proposal is the old implementation of governance proposal.
+Contrary to proposals, which can contain any messages, a legacy proposal allows submitting a set of pre-defined proposals.
+These proposals are defined by their types.
+
+While proposals should use the new implementation of the governance proposal, we still need legacy proposals in order to submit a `software-upgrade` and a `cancel-software-upgrade` proposal.
+
+More information on how to submit proposals can be found in the [client section](#client).
+
+## Messages
+
+### Proposal Submission
+
+Proposals can be submitted by any account via a `MsgSubmitProposal` transaction.
+
+```protobuf
+// MsgSubmitProposal defines an sdk.Msg type that supports submitting arbitrary
+// proposal Content.
+message MsgSubmitProposal {
+  option (cosmos.msg.v1.signer) = "proposer";
+  option (amino.name) = "cosmos-sdk/v1/MsgSubmitProposal";
+
+  // messages are the arbitrary messages to be executed if proposal passes.
+  repeated google.protobuf.Any messages = 1;
+
+  // initial_deposit is the deposit value that must be paid at proposal submission.
+  repeated cosmos.base.v1beta1.Coin initial_deposit = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // proposer is the account address of the proposer.
+  string proposer = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // metadata is any arbitrary metadata attached to the proposal.
+  string metadata = 4;
+
+  // title is the title of the proposal.
+
+  //
+  // Since: cosmos-sdk 0.47
+  string title = 5;
+
+  // summary is the summary of the proposal
+  //
+  // Since: cosmos-sdk 0.47
+  string summary = 6;
+}
+```
+
+All `sdk.Msgs` passed into the `messages` field of a `MsgSubmitProposal` message
+must be registered in the app's `MsgServiceRouter`. Each of these messages must
+have exactly one signer, namely the gov module account. Finally, the metadata length
+must not be larger than the `maxMetadataLen` config passed into the gov keeper.
+
+**State modifications:**
+
+* Generate new `proposalID`
+* Create new `Proposal`
+* Initialise `Proposal`'s attributes
+* Decrease balance of sender by `InitialDeposit`
+* If `MinDeposit` is reached:
+  * Push `proposalID` in `ProposalProcessingQueue`
+* Transfer `InitialDeposit` from the `Proposer` to the governance `ModuleAccount`
+
+A `MsgSubmitProposal` transaction can be handled according to the following
+pseudocode.
+
+```go expandable
+// PSEUDOCODE
+// Check if MsgSubmitProposal is valid. If it is, create proposal.
+
+upon receiving txGovSubmitProposal from sender do
+  if !correctlyFormatted(txGovSubmitProposal)
+    // check if proposal is correctly formatted and the messages have routes to other modules. Includes fee payment.
+    // check if all messages' unique Signer is the gov acct.
+    // check if the metadata is not too long.
+    throw
+
+  initialDeposit = txGovSubmitProposal.InitialDeposit
+  if (initialDeposit.Atoms <= 0) OR (sender.AtomBalance < initialDeposit.Atoms)
+    // InitialDeposit is negative or null OR sender has insufficient funds
+    throw
+
+  if (txGovSubmitProposal.Type != ProposalTypePlainText) OR (txGovSubmitProposal.Type != ProposalTypeSoftwareUpgrade)
+    throw
+
+  sender.AtomBalance -= initialDeposit.Atoms
+
+  depositParam = load(GlobalParams, 'DepositParam')
+
+  proposalID = generate new proposalID
+  proposal = NewProposal()
+
+  proposal.Messages = txGovSubmitProposal.Messages
+  proposal.Metadata = txGovSubmitProposal.Metadata
+  proposal.TotalDeposit = initialDeposit
+  proposal.SubmitTime = 
+  proposal.DepositEndTime = .Add(depositParam.MaxDepositPeriod)
+  proposal.Deposits.append({initialDeposit, sender})
+  proposal.Submitter = sender
+  proposal.YesVotes = 0
+  proposal.NoVotes = 0
+  proposal.NoWithVetoVotes = 0
+  proposal.AbstainVotes = 0
+  proposal.CurrentStatus = ProposalStatusOpen
+
+  store(Proposals, , proposal) // Store proposal in Proposals mapping
+  return proposalID
+```
+
+### Deposit
+
+Once a proposal is submitted, if
+`Proposal.TotalDeposit < ActiveParam.MinDeposit`, Atom holders can send
+`MsgDeposit` transactions to increase the proposal's deposit.
+
+```protobuf
+// MsgDeposit defines a message to submit a deposit to an existing proposal.
+message MsgDeposit {
+  option (cosmos.msg.v1.signer) = "depositor";
+  option (amino.name) = "cosmos-sdk/v1/MsgDeposit";
+
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1 [(gogoproto.jsontag) = "proposal_id", (amino.dont_omitempty) = true];
+
+  // depositor defines the deposit addresses from the proposals.
+  string depositor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // amount to be deposited by depositor.
+  repeated cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+**State modifications:**
+
+* Decrease balance of sender by `deposit`
+* Add `deposit` of sender in `proposal.Deposits`
+* Increase `proposal.TotalDeposit` by sender's `deposit`
+* If `MinDeposit` is reached:
+  * Push `proposalID` in `ProposalProcessingQueue`
+* Transfer `Deposit` from the `depositor` to the governance `ModuleAccount`
+
+A `MsgDeposit` transaction has to go through a number of checks to be valid.
+These checks are outlined in the following pseudocode.
+
+```go expandable
+// PSEUDOCODE
+// Check if MsgDeposit is valid. If it is, increase deposit and check if MinDeposit is reached.
+
+upon receiving txGovDeposit from sender do
+  // check if proposal is correctly formatted. Includes fee payment.
+  if !correctlyFormatted(txGovDeposit)
+    throw
+
+  proposal = load(Proposals, ) // proposal is a const key, proposalID is variable
+  if (proposal == nil)
+    // There is no proposal for this proposalID
+    throw
+
+  if (txGovDeposit.Deposit.Atoms <= 0) OR (sender.AtomBalance < txGovDeposit.Deposit.Atoms) OR (proposal.CurrentStatus != ProposalStatusOpen)
+    // deposit is negative or null
+    // OR sender has insufficient funds
+    // OR proposal is not open for deposit anymore
+    throw
+
+  depositParam = load(GlobalParams, 'DepositParam')
+
+  if (CurrentBlock >= proposal.SubmitBlock + depositParam.MaxDepositPeriod)
+    proposal.CurrentStatus = ProposalStatusClosed
+  else
+    // sender can deposit
+    sender.AtomBalance -= txGovDeposit.Deposit.Atoms
+
+    proposal.Deposits.append({txGovDeposit.Deposit, sender})
+    proposal.TotalDeposit.Plus(txGovDeposit.Deposit)
+
+    if (proposal.TotalDeposit >= depositParam.MinDeposit)
+      // MinDeposit is reached, vote opens
+      proposal.VotingStartBlock = CurrentBlock
+      proposal.CurrentStatus = ProposalStatusActive
+      ProposalProcessingQueue.push(txGovDeposit.ProposalID)
+
+  store(Proposals, , proposal)
+```
+
+### Vote
+
+Once `ActiveParam.MinDeposit` is reached, the voting period starts. From there,
+bonded Atom holders are able to send `MsgVote` transactions to cast their
+vote on the proposal.
+
+```protobuf
+// MsgVote defines a message to cast a vote.
+message MsgVote {
+  option (cosmos.msg.v1.signer) = "voter";
+  option (amino.name) = "cosmos-sdk/v1/MsgVote";
+
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1 [(gogoproto.jsontag) = "proposal_id", (amino.dont_omitempty) = true];
+
+  // voter is the voter address for the proposal.
+  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // option defines the vote option.
+  VoteOption option = 3;
+
+  // metadata is any arbitrary metadata attached to the Vote.
+  string metadata = 4;
+}
+```
+
+**State modifications:**
+
+* Record `Vote` of sender
+
+**Note**: Gas cost for this message has to take into account the future tallying of the vote in EndBlocker.
+
+Next is a pseudocode outline of the way `MsgVote` transactions are handled:
+
+```go expandable
+// PSEUDOCODE
+// Check if MsgVote is valid. If it is, count vote.
+
+upon receiving txGovVote from sender do
+  // check if vote is correctly formatted. Includes fee payment.
+  if !correctlyFormatted(txGovVote)
+    throw
+
+  proposal = load(Proposals, )
+  if (proposal == nil)
+    // There is no proposal for this proposalID
+    throw
+
+  if (proposal.CurrentStatus == ProposalStatusActive)
+    // Sender can vote if
+    // Proposal is active
+    // Sender has some bonds
+
+    store(Governance, , txGovVote.Vote) // Voters can vote multiple times. Re-voting overrides previous vote. This is ok because tallying is done once at the end.
+``` + +## Events + +The governance module emits the following events: + +### EndBlocker + +| Type | Attribute Key | Attribute Value | +| ------------------ | ---------------- | ---------------- | +| inactive\_proposal | proposal\_id | `{proposalID}` | +| inactive\_proposal | proposal\_result | `{proposalResult}` | +| active\_proposal | proposal\_id | `{proposalID}` | +| active\_proposal | proposal\_result | `{proposalResult}` | + +### Handlers + +#### MsgSubmitProposal + +| Type | Attribute Key | Attribute Value | +| --------------------- | --------------------- | ---------------- | +| submit\_proposal | proposal\_id | `{proposalID}` | +| submit\_proposal \[0] | voting\_period\_start | `{proposalID}` | +| proposal\_deposit | amount | `{depositAmount}` | +| proposal\_deposit | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | submit\_proposal | +| message | sender | `{senderAddress}` | + +* \[0] Event only emitted if the voting period starts during the submission. 
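As an illustration of how a client might consume the events tabulated above, here is a minimal, self-contained Go sketch. The `Event` and `Attribute` structs and the `proposalIDs` helper are hypothetical stand-ins that mirror the type/attribute-key shape of the tables; they are not SDK types.

```go
package main

import "fmt"

// Attribute and Event are illustrative stand-ins mirroring the
// type / attribute-key / attribute-value shape of the gov event tables.
type Attribute struct{ Key, Value string }

type Event struct {
	Type       string
	Attributes []Attribute
}

// proposalIDs collects the proposal_id attribute of every event of the given type.
func proposalIDs(events []Event, eventType string) []string {
	var ids []string
	for _, ev := range events {
		if ev.Type != eventType {
			continue
		}
		for _, attr := range ev.Attributes {
			if attr.Key == "proposal_id" {
				ids = append(ids, attr.Value)
			}
		}
	}
	return ids
}

func main() {
	// Events shaped like those emitted by the MsgSubmitProposal handler.
	events := []Event{
		{Type: "submit_proposal", Attributes: []Attribute{{Key: "proposal_id", Value: "1"}}},
		{Type: "proposal_deposit", Attributes: []Attribute{{Key: "amount", Value: "10000000stake"}, {Key: "proposal_id", Value: "1"}}},
	}
	fmt.Println(proposalIDs(events, "submit_proposal")) // [1]
}
```

In a real client the same filtering would be applied to the events returned in a transaction result.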
+ +#### MsgVote + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| proposal\_vote | option | `{voteOption}` | +| proposal\_vote | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | vote | +| message | sender | `{senderAddress}` | + +#### MsgVoteWeighted + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------------- | +| proposal\_vote | option | `{weightedVoteOptions}` | +| proposal\_vote | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | vote | +| message | sender | `{senderAddress}` | + +#### MsgDeposit + +| Type | Attribute Key | Attribute Value | +| ---------------------- | --------------------- | --------------- | +| proposal\_deposit | amount | `{depositAmount}` | +| proposal\_deposit | proposal\_id | `{proposalID}` | +| proposal\_deposit \[0] | voting\_period\_start | `{proposalID}` | +| message | module | governance | +| message | action | deposit | +| message | sender | `{senderAddress}` | + +* \[0] Event only emitted if the voting period starts during the submission. 
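The `{weightedVoteOptions}` attribute above corresponds to the `yes=0.5,no=0.5` syntax accepted by the `weighted-vote` CLI command shown later in this page. As a hedged sketch of client-side validation (the parser below is hypothetical helper code, not part of the SDK), the options string can be split and checked so that the weights sum to 1 before submitting:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseWeightedOptions parses the "yes=0.5,no=0.5" style string used by
// `simd tx gov weighted-vote` into a map and verifies the weights sum to 1.
func parseWeightedOptions(s string) (map[string]float64, error) {
	opts := make(map[string]float64)
	total := 0.0
	for _, part := range strings.Split(s, ",") {
		kv := strings.SplitN(part, "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("malformed option %q", part)
		}
		w, err := strconv.ParseFloat(kv[1], 64)
		if err != nil {
			return nil, fmt.Errorf("bad weight in %q: %w", part, err)
		}
		opts[kv[0]] = w
		total += w
	}
	if total != 1.0 {
		return nil, fmt.Errorf("weights sum to %v, expected 1", total)
	}
	return opts, nil
}

func main() {
	opts, err := parseWeightedOptions("yes=0.5,no=0.5")
	fmt.Println(opts, err)
}
```

Note that the on-chain tally applies each weight to the voter's bonded shares, so a client-side check like this only catches malformed input, not voting power.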
+
+## Parameters
+
+The governance module contains the following parameters:
+
+| Key | Type | Example |
+| -------------------------------- | ---------------- | ---------------------------------------- |
+| min\_deposit | array (coins) | \[`{"denom":"uatom","amount":"10000000"}`] |
+| max\_deposit\_period | string (time ns) | "172800000000000" (172800s) |
+| voting\_period | string (time ns) | "172800000000000" (172800s) |
+| quorum | string (dec) | "0.334000000000000000" |
+| threshold | string (dec) | "0.500000000000000000" |
+| veto | string (dec) | "0.334000000000000000" |
+| expedited\_threshold | string (dec) | "0.667000000000000000" |
+| expedited\_voting\_period | string (time ns) | "86400000000000" (86400s) |
+| expedited\_min\_deposit | array (coins) | \[`{"denom":"uatom","amount":"50000000"}`] |
+| burn\_proposal\_deposit\_prevote | bool | false |
+| burn\_vote\_quorum | bool | false |
+| burn\_vote\_veto | bool | true |
+
+**NOTE**: Unlike other modules, the governance module's parameters are objects. If only a subset of parameters needs to be changed, only those parameters need to be included, not the entire parameter object structure.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `gov` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `gov` state.
+
+```bash
+simd query gov --help
+```
+
+##### deposit
+
+The `deposit` command allows users to query a deposit for a given proposal from a given depositor.
+
+```bash
+simd query gov deposit [proposal-id] [depositer-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query gov deposit 1 cosmos1..
+```
+
+Example Output:
+
+```bash
+amount:
+- amount: "100"
+  denom: stake
+depositor: cosmos1..
+proposal_id: "1"
+```
+
+##### deposits
+
+The `deposits` command allows users to query all deposits for a given proposal.
+ +```bash +simd query gov deposits [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov deposits 1 +``` + +Example Output: + +```bash +deposits: +- amount: + - amount: "100" + denom: stake + depositor: cosmos1.. + proposal_id: "1" +pagination: + next_key: null + total: "0" +``` + +##### param + +The `param` command allows users to query a given parameter for the `gov` module. + +```bash +simd query gov param [param-type] [flags] +``` + +Example: + +```bash +simd query gov param voting +``` + +Example Output: + +```bash +voting_period: "172800000000000" +``` + +##### params + +The `params` command allows users to query all parameters for the `gov` module. + +```bash +simd query gov params [flags] +``` + +Example: + +```bash +simd query gov params +``` + +Example Output: + +```bash expandable +deposit_params: + max_deposit_period: 172800s + min_deposit: + - amount: "10000000" + denom: stake +params: + expedited_min_deposit: + - amount: "50000000" + denom: stake + expedited_threshold: "0.670000000000000000" + expedited_voting_period: 86400s + max_deposit_period: 172800s + min_deposit: + - amount: "10000000" + denom: stake + min_initial_deposit_ratio: "0.000000000000000000" + proposal_cancel_burn_rate: "0.500000000000000000" + quorum: "0.334000000000000000" + threshold: "0.500000000000000000" + veto_threshold: "0.334000000000000000" + voting_period: 172800s +tally_params: + quorum: "0.334000000000000000" + threshold: "0.500000000000000000" + veto_threshold: "0.334000000000000000" +voting_params: + voting_period: 172800s +``` + +##### proposal + +The `proposal` command allows users to query a given proposal. 
+ +```bash +simd query gov proposal [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposal 1 +``` + +Example Output: + +```bash expandable +deposit_end_time: "2022-03-30T11:50:20.819676256Z" +final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" +id: "1" +messages: +- '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. +metadata: AQ== +status: PROPOSAL_STATUS_DEPOSIT_PERIOD +submit_time: "2022-03-28T11:50:20.819676256Z" +total_deposit: +- amount: "10" + denom: stake +voting_end_time: null +voting_start_time: null +``` + +##### proposals + +The `proposals` command allows users to query all proposals with optional filters. + +```bash +simd query gov proposals [flags] +``` + +Example: + +```bash +simd query gov proposals +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +proposals: +- deposit_end_time: "2022-03-30T11:50:20.819676256Z" + final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" + id: "1" + messages: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + metadata: AQ== + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2022-03-28T11:50:20.819676256Z" + total_deposit: + - amount: "10" + denom: stake + voting_end_time: null + voting_start_time: null +- deposit_end_time: "2022-03-30T14:02:41.165025015Z" + final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" + id: "2" + messages: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. 
+ metadata: AQ== + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2022-03-28T14:02:41.165025015Z" + total_deposit: + - amount: "10" + denom: stake + voting_end_time: null + voting_start_time: null +``` + +##### proposer + +The `proposer` command allows users to query the proposer for a given proposal. + +```bash +simd query gov proposer [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposer 1 +``` + +Example Output: + +```bash +proposal_id: "1" +proposer: cosmos1.. +``` + +##### tally + +The `tally` command allows users to query the tally of a given proposal vote. + +```bash +simd query gov tally [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov tally 1 +``` + +Example Output: + +```bash +abstain: "0" +"no": "0" +no_with_veto: "0" +"yes": "1" +``` + +##### vote + +The `vote` command allows users to query a vote for a given proposal. + +```bash +simd query gov vote [proposal-id] [voter-addr] [flags] +``` + +Example: + +```bash +simd query gov vote 1 cosmos1.. +``` + +Example Output: + +```bash +option: VOTE_OPTION_YES +options: +- option: VOTE_OPTION_YES + weight: "1.000000000000000000" +proposal_id: "1" +voter: cosmos1.. +``` + +##### votes + +The `votes` command allows users to query all votes for a given proposal. + +```bash +simd query gov votes [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov votes 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "0" +votes: +- option: VOTE_OPTION_YES + options: + - option: VOTE_OPTION_YES + weight: "1.000000000000000000" + proposal_id: "1" + voter: cosmos1.. +``` + +#### Transactions + +The `tx` commands allow users to interact with the `gov` module. + +```bash +simd tx gov --help +``` + +##### deposit + +The `deposit` command allows users to deposit tokens for a given proposal. + +```bash +simd tx gov deposit [proposal-id] [deposit] [flags] +``` + +Example: + +```bash +simd tx gov deposit 1 10000000stake --from cosmos1.. 
+
+```
+
+##### draft-proposal
+
+The `draft-proposal` command allows users to draft any type of proposal.
+The command returns a `draft_proposal.json`, to be used by `submit-proposal` after being completed.
+The `draft_metadata.json` is meant to be uploaded to [IPFS](#metadata).
+
+```bash
+simd tx gov draft-proposal
+```
+
+##### submit-proposal
+
+The `submit-proposal` command allows users to submit a governance proposal along with some messages and metadata.
+Messages, metadata and deposit are defined in a JSON file.
+
+```bash
+simd tx gov submit-proposal [path-to-proposal-json] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-proposal /path/to/proposal.json --from cosmos1..
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "messages": [
+    {
+      "@type": "/cosmos.bank.v1beta1.MsgSend",
+      "from_address": "cosmos1...", // the gov module account address
+      "to_address": "cosmos1...",
+      "amount": [{ "denom": "stake", "amount": "10" }]
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "10stake",
+  "title": "Proposal Title",
+  "summary": "Proposal Summary"
+}
+```
+
+By default the metadata, summary and title are each limited to 255 characters; this can be overridden by the application developer.
+
+##### submit-legacy-proposal
+
+The `submit-legacy-proposal` command allows users to submit a governance legacy proposal along with an initial deposit.
+
+```bash
+simd tx gov submit-legacy-proposal [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-legacy-proposal --title="Test Proposal" --description="testing" --type="Text" --deposit="100000000stake" --from cosmos1..
+```
+
+Example (`param-change`):
+
+```bash
+simd tx gov submit-legacy-proposal param-change proposal.json --from cosmos1..
+```
+
+```json expandable
+{
+  "title": "Test Proposal",
+  "description": "testing, testing, 1, 2, 3",
+  "changes": [
+    {
+      "subspace": "staking",
+      "key": "MaxValidators",
+      "value": 100
+    }
+  ],
+  "deposit": "10000000stake"
+}
+```
+
+##### cancel-proposal
+
+Once a proposal is canceled, `deposits * proposal_cancel_ratio` of the proposal's deposits is burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, that portion is burned. The remaining deposits are sent back to the depositors.
+
+```bash
+simd tx gov cancel-proposal [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov cancel-proposal 1 --from cosmos1...
+```
+
+##### vote
+
+The `vote` command allows users to submit a vote for a given governance proposal.
+
+```bash
+simd tx gov vote [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov vote 1 yes --from cosmos1..
+```
+
+##### weighted-vote
+
+The `weighted-vote` command allows users to submit a weighted vote for a given governance proposal.
+
+```bash
+simd tx gov weighted-vote [proposal-id] [weighted-options] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov weighted-vote 1 yes=0.5,no=0.5 --from cosmos1..
+```
+
+### gRPC
+
+A user can query the `gov` module using gRPC endpoints.
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query a given proposal.
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposalId": "1", + "content": {"@type":"/cosmos.gov.v1beta1.TextProposal","description":"testing, testing, 1, 2, 3","title":"Test Proposal"}, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2021-09-16T19:40:08.712440474Z", + "depositEndTime": "2021-09-18T19:40:08.712440474Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "votingStartTime": "2021-09-16T19:40:08.712440474Z", + "votingEndTime": "2021-09-18T19:40:08.712440474Z", + "title": "Test Proposal", + "summary": "testing, testing, 1, 2, 3" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "id": "1", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Test Proposal", + "summary": "testing, testing, 1, 2, 3" + } +} +``` + +#### Proposals + +The `Proposals` endpoint allows users to query all proposals with optional filters. 
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Proposals +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposalId": "1", + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": "2022-03-30T14:25:26.644857113Z" + }, + { + "proposalId": "2", + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2022-03-28T14:02:41.165025015Z", + "depositEndTime": "2022-03-30T14:02:41.165025015Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "votingStartTime": "0001-01-01T00:00:00Z", + "votingEndTime": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "total": "2" + } +} + +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Proposals +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.gov.v1.Query/Proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": 
"2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + }, + { + "id": "2", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T14:02:41.165025015Z", + "depositEndTime": "2022-03-30T14:02:41.165025015Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### Vote + +The `Vote` endpoint allows users to query a vote for a given proposal. + +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Vote +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Vote +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1000000000000000000" + } + ] + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Vote +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Vote +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +#### Votes + +The `Votes` endpoint allows users to query all votes for a given proposal. 
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Votes +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1000000000000000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Votes +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### Params + +The `Params` endpoint allows users to query all parameters for the `gov` module. + +{/* TODO: #10197 Querying governance params outputs nil values */} + +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"params_type":"voting"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Params +``` + +Example Output: + +```bash expandable +{ + "votingParams": { + "votingPeriod": "172800s" + }, + "depositParams": { + "maxDepositPeriod": "0s" + }, + "tallyParams": { + "quorum": "MA==", + "threshold": "MA==", + "vetoThreshold": "MA==" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"params_type":"voting"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Params +``` + +Example Output: + +```bash +{ + "votingParams": { + "votingPeriod": "172800s" + } +} +``` + +#### Deposit + +The `Deposit` endpoint allows users to query a deposit for a given proposal from a given depositor. 
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+    localhost:9090 \
+    cosmos.gov.v1.Query/Deposit
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+#### Deposits
+
+The `Deposits` endpoint allows users to query all deposits for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1"}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposits": [
+    {
+      "proposalId": "1",
+      "depositor": "cosmos1..",
+      "amount": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1"}' \
+    localhost:9090 \
+    cosmos.gov.v1.Query/Deposits
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposits": [
+    {
+      "proposalId": "1",
+      "depositor": "cosmos1..",
+      "amount": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+#### TallyResult
+
+The `TallyResult` endpoint allows users to query the tally of a given proposal.
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +### REST + +A user can query the `gov` module using REST endpoints. + +#### proposal + +The `proposals` endpoint allows users to query a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposal_id": "1", + "content": null, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "id": "1", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": 
"PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } +} +``` + +#### proposals + +The `proposals` endpoint also allows users to query all proposals with optional filters. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposal_id": "1", + "content": null, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z" + }, + { + "proposal_id": "2", + "content": null, + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T14:02:41.165025015Z", + "deposit_end_time": "2022-03-30T14:02:41.165025015Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "voting_start_time": "0001-01-01T00:00:00Z", + "voting_end_time": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals +``` + +Example: + +```bash +curl 
localhost:1317/cosmos/gov/v1/proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + }, + { + "id": "2", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T14:02:41.165025015Z", + "deposit_end_time": "2022-03-30T14:02:41.165025015Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "voting_start_time": null, + "voting_end_time": null, + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### voter vote + +The `votes` endpoint allows users to query a vote for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/votes/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ], + "metadata": "" + } +} +``` + +#### votes + +The `votes` endpoint allows users to query all votes for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ], + "metadata": "" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### params + +The `params` endpoint allows users to query all parameters for the `gov` module. 
+ +{/* TODO: #10197 Querying governance params outputs nil values */} + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/params/voting +``` + +Example Output: + +```bash expandable +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/params/voting +``` + +Example Output: + +```bash expandable +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +#### deposits + +The `deposits` endpoint allows users to query a deposit for a given proposal from a given depositor. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/deposits/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +#### proposal deposits + +The `deposits` endpoint allows users to query all deposits for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits +``` + +Example Output: + +```bash expandable +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/deposits +``` + +Example Output: + +```bash expandable +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### tally + +The `tally` endpoint allows users to query the tally of a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` + +## Metadata + +The gov module has two locations for metadata where users can provide further context about the on-chain actions they are taking. 
By default, all metadata fields are limited to 255 characters, and metadata can be stored in JSON format either on-chain or off-chain, depending on the amount of data required. Here we provide a recommendation for the JSON structure and where the data should be stored. There are two important factors in making these recommendations. First, the gov and group modules should be consistent with one another; note that the number of proposals made by all groups may be quite large. Second, client applications such as block explorers and governance interfaces should have confidence in the consistency of metadata structure across chains.
+
+### Proposal
+
+Location: off-chain as JSON object stored on IPFS (mirrors [group proposal](docs/sdk/v0.47/documentation/module-system/modules/group/README#metadata))
+
+```json
+{
+  "title": "",
+  "authors": [""],
+  "summary": "",
+  "details": "",
+  "proposal_forum_url": "",
+  "vote_option_context": ""
+}
+```
+
+
+The `authors` field is an array of strings, which allows multiple authors to be listed in the metadata.
+In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility.
+
+
+### Vote
+
+Location: on-chain as JSON within the 255 character limit (mirrors [group vote](docs/sdk/v0.47/documentation/module-system/modules/group/README#metadata))
+
+```json
+{
+  "justification": ""
+}
+```
+
+## Future Improvements
+
+The current documentation only describes the minimum viable product for the
+governance module. Future improvements may include:
+
+* **`BountyProposals`:** If accepted, a `BountyProposal` creates an open
+  bounty. The `BountyProposal` specifies how many Atoms will be given upon
+  completion. These Atoms will be taken from the `reserve pool`. After a
+  `BountyProposal` is accepted by governance, anybody can submit a
+  `SoftwareUpgradeProposal` with the code to claim the bounty. 
Note that once a
+  `BountyProposal` is accepted, the corresponding funds in the `reserve pool`
+  are locked so that payment can always be honored. In order to link a
+  `SoftwareUpgradeProposal` to an open bounty, the submitter of the
+  `SoftwareUpgradeProposal` will use the `Proposal.LinkedProposal` attribute.
+  If a `SoftwareUpgradeProposal` linked to an open bounty is accepted by
+  governance, the funds that were reserved are automatically transferred to the
+  submitter.
+* **Complex delegation:** Delegators could choose representatives other than
+  their validators. Ultimately, the chain of representatives would always end
+  at a validator, but delegators could inherit the vote of their chosen
+  representative before they inherit the vote of their validator. In other
+  words, they would only inherit the vote of their validator if their other
+  appointed representative did not vote.
+* **Better process for proposal review:** There would be two parts to
+  `proposal.Deposit`, one for anti-spam (same as in MVP) and another one to
+  reward third-party auditors.
diff --git a/docs/sdk/v0.47/documentation/module-system/modules/group/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/group/README.mdx
new file mode 100644
index 00000000..ff4d25b4
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/modules/group/README.mdx
@@ -0,0 +1,9006 @@
+---
+title: '`x/group`'
+description: The following documents specify the group module.
+---
+
+## Abstract
+
+The following documents specify the group module.
+
+This module allows the creation and management of on-chain multisig accounts and enables voting for message execution based on configurable decision policies. 
+ +## Contents + +* [Concepts](#concepts) + * [Group](#group) + * [Group Policy](#group-policy) + * [Decision Policy](#decision-policy) + * [Proposal](#proposal) + * [Pruning](#pruning) +* [State](#state) + * [Group Table](#group-table) + * [Group Member Table](#group-member-table) + * [Group Policy Table](#group-policy-table) + * [Proposal Table](#proposal-table) + * [Vote Table](#vote-table) +* [Msg Service](#msg-service) + * [Msg/CreateGroup](#msgcreategroup) + * [Msg/UpdateGroupMembers](#msgupdategroupmembers) + * [Msg/UpdateGroupAdmin](#msgupdategroupadmin) + * [Msg/UpdateGroupMetadata](#msgupdategroupmetadata) + * [Msg/CreateGroupPolicy](#msgcreategrouppolicy) + * [Msg/CreateGroupWithPolicy](#msgcreategroupwithpolicy) + * [Msg/UpdateGroupPolicyAdmin](#msgupdategrouppolicyadmin) + * [Msg/UpdateGroupPolicyDecisionPolicy](#msgupdategrouppolicydecisionpolicy) + * [Msg/UpdateGroupPolicyMetadata](#msgupdategrouppolicymetadata) + * [Msg/SubmitProposal](#msgsubmitproposal) + * [Msg/WithdrawProposal](#msgwithdrawproposal) + * [Msg/Vote](#msgvote) + * [Msg/Exec](#msgexec) + * [Msg/LeaveGroup](#msgleavegroup) +* [Events](#events) + * [EventCreateGroup](#eventcreategroup) + * [EventUpdateGroup](#eventupdategroup) + * [EventCreateGroupPolicy](#eventcreategrouppolicy) + * [EventUpdateGroupPolicy](#eventupdategrouppolicy) + * [EventCreateProposal](#eventcreateproposal) + * [EventWithdrawProposal](#eventwithdrawproposal) + * [EventVote](#eventvote) + * [EventExec](#eventexec) + * [EventLeaveGroup](#eventleavegroup) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) +* [Metadata](#metadata) + +## Concepts + +### Group + +A group is simply an aggregation of accounts with associated weights. It is not +an account and doesn't have a balance. It doesn't in and of itself have any +sort of voting or decision weight. It does have an "administrator" which has +the ability to add, remove and update members in the group. 
Note that a
+group policy account could be an administrator of a group, and that the
+administrator doesn't necessarily have to be a member of the group.
+
+### Group Policy
+
+A group policy is an account associated with a group and a decision policy.
+Group policies are abstracted from groups because a single group may have
+multiple decision policies for different types of actions. Managing group
+membership separately from decision policies results in the least overhead
+and keeps membership consistent across different policies. The recommended
+pattern is to have a single master group policy for a given group, and then
+to create separate group policies with different decision policies and
+delegate the desired permissions from the master account to those
+"sub-accounts" using the `x/authz` module.
+
+### Decision Policy
+
+A decision policy is the mechanism by which members of a group can vote on
+proposals, as well as the rules that dictate whether a proposal should pass
+or not based on its tally outcome.
+
+All decision policies generally have a minimum execution period and a
+maximum voting window. The minimum execution period is the minimum amount of time
+that must pass after submission in order for a proposal to potentially be executed, and it may
+be set to 0. The maximum voting window is the maximum time after submission that a proposal may
+be voted on before it is tallied.
+
+The chain developer also defines an app-wide maximum execution period, which is
+the maximum amount of time after a proposal's voting period ends during which users are
+allowed to execute a proposal.
+
+The current group module ships with two decision policies: threshold
+and percentage. 
Any chain developer can extend upon these two by creating
+custom decision policies, as long as they adhere to the `DecisionPolicy`
+interface:
+
+```go expandable
+package group
+
+import (
+
+ "fmt"
+ "time"
+ "github.com/cosmos/cosmos-sdk/codec"
+ codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+ "github.com/cosmos/cosmos-sdk/x/group/errors"
+ "github.com/cosmos/cosmos-sdk/x/group/internal/math"
+ "github.com/cosmos/cosmos-sdk/x/group/internal/orm"
+)
+
+/ DecisionPolicyResult is the result of whether a proposal passes or not a
+/ decision policy.
+type DecisionPolicyResult struct {
+ / Allow determines if the proposal is allowed to pass.
+ Allow bool
+ / Final determines if the tally result is final or not. If final, then
+ / votes are pruned, and the tally result is saved in the proposal's
+ / `FinalTallyResult` field.
+ Final bool
+}
+
+/ DecisionPolicy is the persistent set of rules to determine the result of election on a proposal.
+type DecisionPolicy interface {
+ codec.ProtoMarshaler
+
+ / GetVotingPeriod returns the duration after proposal submission where
+ / votes are accepted.
+ GetVotingPeriod()
+
+time.Duration
+ / GetMinExecutionPeriod returns the minimum duration after submission
+ / where we can execute a proposal. It can be set to 0 or to a value
+ / less than VotingPeriod to allow TRY_EXEC.
+ GetMinExecutionPeriod()
+
+time.Duration
+ / Allow defines policy-specific logic to allow a proposal to pass or not,
+ / based on its tally result, the group's total power and the time since
+ / the proposal was submitted. 
+ Allow(tallyResult TallyResult, totalPower string) (DecisionPolicyResult, error) + +ValidateBasic() + +error + Validate(g GroupInfo, config Config) + +error +} + +/ Implements DecisionPolicy Interface +var _ DecisionPolicy = &ThresholdDecisionPolicy{ +} + +/ NewThresholdDecisionPolicy creates a threshold DecisionPolicy +func NewThresholdDecisionPolicy(threshold string, votingPeriod time.Duration, minExecutionPeriod time.Duration) + +DecisionPolicy { + return &ThresholdDecisionPolicy{ + threshold, &DecisionPolicyWindows{ + votingPeriod, minExecutionPeriod +}} +} + +/ GetVotingPeriod returns the voitng period of ThresholdDecisionPolicy +func (p ThresholdDecisionPolicy) + +GetVotingPeriod() + +time.Duration { + return p.Windows.VotingPeriod +} + +/ GetMinExecutionPeriod returns the minimum execution period of ThresholdDecisionPolicy +func (p ThresholdDecisionPolicy) + +GetMinExecutionPeriod() + +time.Duration { + return p.Windows.MinExecutionPeriod +} + +/ ValidateBasic does basic validation on ThresholdDecisionPolicy +func (p ThresholdDecisionPolicy) + +ValidateBasic() + +error { + if _, err := math.NewPositiveDecFromString(p.Threshold); err != nil { + return sdkerrors.Wrap(err, "threshold") +} + if p.Windows == nil || p.Windows.VotingPeriod == 0 { + return sdkerrors.Wrap(errors.ErrInvalid, "voting period cannot be zero") +} + +return nil +} + +/ Allow allows a proposal to pass when the tally of yes votes equals or exceeds the threshold before the timeout. 
+func (p ThresholdDecisionPolicy) + +Allow(tallyResult TallyResult, totalPower string) (DecisionPolicyResult, error) { + threshold, err := math.NewPositiveDecFromString(p.Threshold) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "threshold") +} + +yesCount, err := math.NewNonNegativeDecFromString(tallyResult.YesCount) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "yes count") +} + +totalPowerDec, err := math.NewNonNegativeDecFromString(totalPower) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "total power") +} + + / the real threshold of the policy is `min(threshold,total_weight)`. If + / the group member weights changes (member leaving, member weight update) + / and the threshold doesn't, we can end up with threshold > total_weight. + / In this case, as long as everyone votes yes (in which case + / `yesCount`==`realThreshold`), then the proposal still passes. + realThreshold := min(threshold, totalPowerDec) + if yesCount.Cmp(realThreshold) >= 0 { + return DecisionPolicyResult{ + Allow: true, + Final: true +}, nil +} + +totalCounts, err := tallyResult.TotalCounts() + if err != nil { + return DecisionPolicyResult{ +}, err +} + +undecided, err := math.SubNonNegative(totalPowerDec, totalCounts) + if err != nil { + return DecisionPolicyResult{ +}, err +} + / maxYesCount is the max potential number of yes count, i.e the current yes count + / plus all undecided count (supposing they all vote yes). + maxYesCount, err := yesCount.Add(undecided) + if err != nil { + return DecisionPolicyResult{ +}, err +} + if maxYesCount.Cmp(realThreshold) < 0 { + return DecisionPolicyResult{ + Allow: false, + Final: true +}, nil +} + +return DecisionPolicyResult{ + Allow: false, + Final: false +}, nil +} + +func min(a, b math.Dec) + +math.Dec { + if a.Cmp(b) < 0 { + return a +} + +return b +} + +/ Validate validates the policy against the group. 
Note that the threshold +/ can actually be greater than the group's total weight: in the Allow method +/ we check the tally weight against `min(threshold,total_weight)`. +func (p *ThresholdDecisionPolicy) + +Validate(g GroupInfo, config Config) + +error { + _, err := math.NewPositiveDecFromString(p.Threshold) + if err != nil { + return sdkerrors.Wrap(err, "threshold") +} + _, err = math.NewNonNegativeDecFromString(g.TotalWeight) + if err != nil { + return sdkerrors.Wrap(err, "group total weight") +} + if p.Windows.MinExecutionPeriod > p.Windows.VotingPeriod+config.MaxExecutionPeriod { + return sdkerrors.Wrap(errors.ErrInvalid, "min_execution_period should be smaller than voting_period + max_execution_period") +} + +return nil +} + +/ Implements DecisionPolicy Interface +var _ DecisionPolicy = &PercentageDecisionPolicy{ +} + +/ NewPercentageDecisionPolicy creates a new percentage DecisionPolicy +func NewPercentageDecisionPolicy(percentage string, votingPeriod time.Duration, executionPeriod time.Duration) + +DecisionPolicy { + return &PercentageDecisionPolicy{ + percentage, &DecisionPolicyWindows{ + votingPeriod, executionPeriod +}} +} + +/ GetVotingPeriod returns the voitng period of PercentageDecisionPolicy +func (p PercentageDecisionPolicy) + +GetVotingPeriod() + +time.Duration { + return p.Windows.VotingPeriod +} + +/ GetMinExecutionPeriod returns the minimum execution period of PercentageDecisionPolicy +func (p PercentageDecisionPolicy) + +GetMinExecutionPeriod() + +time.Duration { + return p.Windows.MinExecutionPeriod +} + +/ ValidateBasic does basic validation on PercentageDecisionPolicy +func (p PercentageDecisionPolicy) + +ValidateBasic() + +error { + percentage, err := math.NewPositiveDecFromString(p.Percentage) + if err != nil { + return sdkerrors.Wrap(err, "percentage threshold") +} + if percentage.Cmp(math.NewDecFromInt64(1)) == 1 { + return sdkerrors.Wrap(errors.ErrInvalid, "percentage must be > 0 and <= 1") +} + if p.Windows == nil || 
p.Windows.VotingPeriod == 0 { + return sdkerrors.Wrap(errors.ErrInvalid, "voting period cannot be 0") +} + +return nil +} + +/ Validate validates the policy against the group. +func (p *PercentageDecisionPolicy) + +Validate(g GroupInfo, config Config) + +error { + if p.Windows.MinExecutionPeriod > p.Windows.VotingPeriod+config.MaxExecutionPeriod { + return sdkerrors.Wrap(errors.ErrInvalid, "min_execution_period should be smaller than voting_period + max_execution_period") +} + +return nil +} + +/ Allow allows a proposal to pass when the tally of yes votes equals or exceeds the percentage threshold before the timeout. +func (p PercentageDecisionPolicy) + +Allow(tally TallyResult, totalPower string) (DecisionPolicyResult, error) { + percentage, err := math.NewPositiveDecFromString(p.Percentage) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "percentage") +} + +yesCount, err := math.NewNonNegativeDecFromString(tally.YesCount) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "yes count") +} + +totalPowerDec, err := math.NewNonNegativeDecFromString(totalPower) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "total power") +} + +yesPercentage, err := yesCount.Quo(totalPowerDec) + if err != nil { + return DecisionPolicyResult{ +}, err +} + if yesPercentage.Cmp(percentage) >= 0 { + return DecisionPolicyResult{ + Allow: true, + Final: true +}, nil +} + +totalCounts, err := tally.TotalCounts() + if err != nil { + return DecisionPolicyResult{ +}, err +} + +undecided, err := math.SubNonNegative(totalPowerDec, totalCounts) + if err != nil { + return DecisionPolicyResult{ +}, err +} + +sum, err := yesCount.Add(undecided) + if err != nil { + return DecisionPolicyResult{ +}, err +} + +sumPercentage, err := sum.Quo(totalPowerDec) + if err != nil { + return DecisionPolicyResult{ +}, err +} + if sumPercentage.Cmp(percentage) < 0 { + return DecisionPolicyResult{ + Allow: false, + Final: true +}, nil +} + 
+return DecisionPolicyResult{ + Allow: false, + Final: false +}, nil +} + +var _ orm.Validateable = GroupPolicyInfo{ +} + +/ NewGroupPolicyInfo creates a new GroupPolicyInfo instance +func NewGroupPolicyInfo(address sdk.AccAddress, group uint64, admin sdk.AccAddress, metadata string, + version uint64, decisionPolicy DecisionPolicy, createdAt time.Time, +) (GroupPolicyInfo, error) { + p := GroupPolicyInfo{ + Address: address.String(), + GroupId: group, + Admin: admin.String(), + Metadata: metadata, + Version: version, + CreatedAt: createdAt, +} + err := p.SetDecisionPolicy(decisionPolicy) + if err != nil { + return GroupPolicyInfo{ +}, err +} + +return p, nil +} + +/ SetDecisionPolicy sets the decision policy for GroupPolicyInfo. +func (g *GroupPolicyInfo) + +SetDecisionPolicy(decisionPolicy DecisionPolicy) + +error { + any, err := codectypes.NewAnyWithValue(decisionPolicy) + if err != nil { + return err +} + +g.DecisionPolicy = any + return nil +} + +/ GetDecisionPolicy gets the decision policy of GroupPolicyInfo +func (g GroupPolicyInfo) + +GetDecisionPolicy() (DecisionPolicy, error) { + decisionPolicy, ok := g.DecisionPolicy.GetCachedValue().(DecisionPolicy) + if !ok { + return nil, sdkerrors.ErrInvalidType.Wrapf("expected %T, got %T", (DecisionPolicy)(nil), g.DecisionPolicy.GetCachedValue()) +} + +return decisionPolicy, nil +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (g GroupPolicyInfo) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + var decisionPolicy DecisionPolicy + return unpacker.UnpackAny(g.DecisionPolicy, &decisionPolicy) +} + +func (g GroupInfo) + +PrimaryKeyFields() []interface{ +} { + return []interface{ +}{ + g.Id +} +} + +/ ValidateBasic does basic validation on group info. 
+func (g GroupInfo) + +ValidateBasic() + +error { + if g.Id == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "group's GroupId") +} + + _, err := sdk.AccAddressFromBech32(g.Admin) + if err != nil { + return sdkerrors.Wrap(err, "admin") +} + if _, err := math.NewNonNegativeDecFromString(g.TotalWeight); err != nil { + return sdkerrors.Wrap(err, "total weight") +} + if g.Version == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "version") +} + +return nil +} + +func (g GroupPolicyInfo) + +PrimaryKeyFields() []interface{ +} { + addr := sdk.MustAccAddressFromBech32(g.Address) + +return []interface{ +}{ + addr.Bytes() +} +} + +func (g Proposal) + +PrimaryKeyFields() []interface{ +} { + return []interface{ +}{ + g.Id +} +} + +/ ValidateBasic does basic validation on group policy info. +func (g GroupPolicyInfo) + +ValidateBasic() + +error { + _, err := sdk.AccAddressFromBech32(g.Admin) + if err != nil { + return sdkerrors.Wrap(err, "group policy admin") +} + _, err = sdk.AccAddressFromBech32(g.Address) + if err != nil { + return sdkerrors.Wrap(err, "group policy account address") +} + if g.GroupId == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "group policy's group id") +} + if g.Version == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "group policy version") +} + +policy, err := g.GetDecisionPolicy() + if err != nil { + return sdkerrors.Wrap(err, "group policy decision policy") +} + if err := policy.ValidateBasic(); err != nil { + return sdkerrors.Wrap(err, "group policy's decision policy") +} + +return nil +} + +func (g GroupMember) + +PrimaryKeyFields() []interface{ +} { + addr := sdk.MustAccAddressFromBech32(g.Member.Address) + +return []interface{ +}{ + g.GroupId, addr.Bytes() +} +} + +/ ValidateBasic does basic validation on group member. 
func (g GroupMember) ValidateBasic() error {
	if g.GroupId == 0 {
		return sdkerrors.Wrap(errors.ErrEmpty, "group member's group id")
	}
	err := MemberToMemberRequest(g.Member).ValidateBasic()
	if err != nil {
		return sdkerrors.Wrap(err, "group member")
	}
	return nil
}

// MemberToMemberRequest converts a `Member` (used for storage)
// to a `MemberRequest` (used in requests). The only difference
// between the two is that `MemberRequest` doesn't have any `AddedAt` field
// since it cannot be set as part of requests.
func MemberToMemberRequest(m *Member) MemberRequest {
	return MemberRequest{
		Address:  m.Address,
		Weight:   m.Weight,
		Metadata: m.Metadata,
	}
}

// ValidateBasic does basic validation on proposal.
func (g Proposal) ValidateBasic() error {
	if g.Id == 0 {
		return sdkerrors.Wrap(errors.ErrEmpty, "proposal id")
	}
	_, err := sdk.AccAddressFromBech32(g.GroupPolicyAddress)
	if err != nil {
		return sdkerrors.Wrap(err, "proposal group policy address")
	}
	if g.GroupVersion == 0 {
		return sdkerrors.Wrap(errors.ErrEmpty, "proposal group version")
	}
	if g.GroupPolicyVersion == 0 {
		return sdkerrors.Wrap(errors.ErrEmpty, "proposal group policy version")
	}
	_, err = g.FinalTallyResult.GetYesCount()
	if err != nil {
		return sdkerrors.Wrap(err, "proposal FinalTallyResult yes count")
	}
	_, err = g.FinalTallyResult.GetNoCount()
	if err != nil {
		return sdkerrors.Wrap(err, "proposal FinalTallyResult no count")
	}
	_, err = g.FinalTallyResult.GetAbstainCount()
	if err != nil {
		return sdkerrors.Wrap(err, "proposal FinalTallyResult abstain count")
	}
	_, err = g.FinalTallyResult.GetNoWithVetoCount()
	if err != nil {
		return sdkerrors.Wrap(err, "proposal FinalTallyResult veto count")
	}
	return nil
}

func (v Vote) PrimaryKeyFields() []interface{} {
	addr := sdk.MustAccAddressFromBech32(v.Voter)
	return []interface{}{v.ProposalId, addr.Bytes()}
}

var _ orm.Validateable = Vote{}

// ValidateBasic does basic validation on vote.
func (v Vote) ValidateBasic() error {
	_, err := sdk.AccAddressFromBech32(v.Voter)
	if err != nil {
		return sdkerrors.Wrap(err, "voter")
	}
	if v.ProposalId == 0 {
		return sdkerrors.Wrap(errors.ErrEmpty, "voter ProposalId")
	}
	if v.Option == VOTE_OPTION_UNSPECIFIED {
		return sdkerrors.Wrap(errors.ErrEmpty, "voter vote option")
	}
	if _, ok := VoteOption_name[int32(v.Option)]; !ok {
		return sdkerrors.Wrap(errors.ErrInvalid, "vote option")
	}
	return nil
}

// UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces
func (q QueryGroupPoliciesByGroupResponse) UnpackInterfaces(unpacker codectypes.AnyUnpacker) error {
	return unpackGroupPolicies(unpacker, q.GroupPolicies)
}

// UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces
func (q QueryGroupPoliciesByAdminResponse) UnpackInterfaces(unpacker codectypes.AnyUnpacker) error {
	return unpackGroupPolicies(unpacker, q.GroupPolicies)
}

func unpackGroupPolicies(unpacker codectypes.AnyUnpacker, accs []*GroupPolicyInfo) error {
	for _, g := range accs {
		err := g.UnpackInterfaces(unpacker)
		if err != nil {
			return err
		}
	}
	return nil
}

type operation func(x, y math.Dec) (math.Dec, error)

func (t *TallyResult) operation(vote Vote, weight string, op operation) error {
	weightDec, err := math.NewPositiveDecFromString(weight)
	if err != nil {
		return err
	}
	yesCount, err := t.GetYesCount()
	if err != nil {
		return sdkerrors.Wrap(err, "yes count")
	}
	noCount, err := t.GetNoCount()
	if err != nil {
		return sdkerrors.Wrap(err, "no count")
	}
	abstainCount, err := t.GetAbstainCount()
	if err != nil {
		return sdkerrors.Wrap(err, "abstain count")
	}
	vetoCount, err := t.GetNoWithVetoCount()
	if err != nil {
		return sdkerrors.Wrap(err, "veto count")
	}
	switch vote.Option {
	case VOTE_OPTION_YES:
		yesCount, err := op(yesCount, weightDec)
		if err != nil {
			return sdkerrors.Wrap(err, "yes count")
		}
		t.YesCount = yesCount.String()
	case VOTE_OPTION_NO:
		noCount, err := op(noCount, weightDec)
		if err != nil {
			return sdkerrors.Wrap(err, "no count")
		}
		t.NoCount = noCount.String()
	case VOTE_OPTION_ABSTAIN:
		abstainCount, err := op(abstainCount, weightDec)
		if err != nil {
			return sdkerrors.Wrap(err, "abstain count")
		}
		t.AbstainCount = abstainCount.String()
	case VOTE_OPTION_NO_WITH_VETO:
		vetoCount, err := op(vetoCount, weightDec)
		if err != nil {
			return sdkerrors.Wrap(err, "veto count")
		}
		t.NoWithVetoCount = vetoCount.String()
	default:
		return sdkerrors.Wrapf(errors.ErrInvalid, "unknown vote option %s", vote.Option.String())
	}
	return nil
}

// GetYesCount returns the number of yes counts from tally result.
func (t TallyResult) GetYesCount() (math.Dec, error) {
	yesCount, err := math.NewNonNegativeDecFromString(t.YesCount)
	if err != nil {
		return math.Dec{}, err
	}
	return yesCount, nil
}

// GetNoCount returns the number of no counts from tally result.
func (t TallyResult) GetNoCount() (math.Dec, error) {
	noCount, err := math.NewNonNegativeDecFromString(t.NoCount)
	if err != nil {
		return math.Dec{}, err
	}
	return noCount, nil
}

// GetAbstainCount returns the number of abstain counts from tally result.
func (t TallyResult) GetAbstainCount() (math.Dec, error) {
	abstainCount, err := math.NewNonNegativeDecFromString(t.AbstainCount)
	if err != nil {
		return math.Dec{}, err
	}
	return abstainCount, nil
}

// GetNoWithVetoCount returns the number of no with veto counts from tally result.
func (t TallyResult) GetNoWithVetoCount() (math.Dec, error) {
	vetoCount, err := math.NewNonNegativeDecFromString(t.NoWithVetoCount)
	if err != nil {
		return math.Dec{}, err
	}
	return vetoCount, nil
}

func (t *TallyResult) Add(vote Vote, weight string) error {
	if err := t.operation(vote, weight, math.Add); err != nil {
		return err
	}
	return nil
}

// TotalCounts is the sum of all weights.
func (t TallyResult) TotalCounts() (math.Dec, error) {
	yesCount, err := t.GetYesCount()
	if err != nil {
		return math.Dec{}, sdkerrors.Wrap(err, "yes count")
	}
	noCount, err := t.GetNoCount()
	if err != nil {
		return math.Dec{}, sdkerrors.Wrap(err, "no count")
	}
	abstainCount, err := t.GetAbstainCount()
	if err != nil {
		return math.Dec{}, sdkerrors.Wrap(err, "abstain count")
	}
	vetoCount, err := t.GetNoWithVetoCount()
	if err != nil {
		return math.Dec{}, sdkerrors.Wrap(err, "veto count")
	}
	totalCounts := math.NewDecFromInt64(0)
	totalCounts, err = totalCounts.Add(yesCount)
	if err != nil {
		return math.Dec{}, err
	}
	totalCounts, err = totalCounts.Add(noCount)
	if err != nil {
		return math.Dec{}, err
	}
	totalCounts, err = totalCounts.Add(abstainCount)
	if err != nil {
		return math.Dec{}, err
	}
	totalCounts, err = totalCounts.Add(vetoCount)
	if err != nil {
		return math.Dec{}, err
	}
	return totalCounts, nil
}

// DefaultTallyResult returns a TallyResult with all counts set to 0.
func DefaultTallyResult() TallyResult {
	return TallyResult{
		YesCount:        "0",
		NoCount:         "0",
		NoWithVetoCount: "0",
		AbstainCount:    "0",
	}
}

// VoteOptionFromString returns a VoteOption from a string. It returns an error
// if the string is invalid.
func VoteOptionFromString(str string) (VoteOption, error) {
	vo, ok := VoteOption_value[str]
	if !ok {
		return VOTE_OPTION_UNSPECIFIED, fmt.Errorf("'%s' is not a valid vote option", str)
	}
	return VoteOption(vo), nil
}
```

#### Threshold decision policy

A threshold decision policy defines a threshold of yes votes (based on a tally
of voter weights) that must be reached for a proposal to pass. Under this
decision policy, abstain and veto votes are simply treated as no votes.

This decision policy also has a `VotingPeriod` window and a `MinExecutionPeriod`
window. The former defines the duration after proposal submission during which
members are allowed to vote, after which tallying is performed. The latter
specifies the minimum duration after proposal submission before the proposal
can be executed. If set to 0, the proposal may be executed immediately on
submission (using the `TRY_EXEC` option). Note that `MinExecutionPeriod` cannot
be greater than `VotingPeriod + MaxExecutionPeriod`, where `MaxExecutionPeriod`
is the app-defined duration that specifies the window after the voting period
ends during which a proposal can be executed.

#### Percentage decision policy

A percentage decision policy is similar to a threshold decision policy, except
that the threshold is defined as a percentage rather than a constant weight.
It is better suited for groups whose members' weights can be updated, since the
percentage threshold stays the same and doesn't depend on how those member
weights change.

Like the threshold decision policy, the percentage decision policy has the
`VotingPeriod` and `MinExecutionPeriod` parameters.

### Proposal

Any member(s) of a group can submit a proposal for a group policy account to decide upon.
A proposal consists of a set of messages that will be executed if the proposal
passes, as well as any metadata associated with the proposal.
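The difference between the two decision policies above can be sketched in a few lines. This is a minimal illustration, not the actual `x/group` implementation: the helper names `thresholdAllows` and `percentageAllows` are hypothetical, and plain integers stand in for the module's `math.Dec` arithmetic.

```go
package main

import "fmt"

// tally holds vote weights; under both policies sketched here,
// abstain and veto simply do not count toward yes.
type tally struct {
	yes, no, abstain, veto int64
}

// thresholdAllows sketches a threshold decision policy: the proposal
// passes once the yes weight reaches a fixed threshold.
func thresholdAllows(t tally, threshold int64) bool {
	return t.yes >= threshold
}

// percentageAllows sketches a percentage decision policy: the proposal
// passes once the yes weight reaches a percentage of the total group
// weight. pctNum/pctDen encodes the percentage as a fraction (e.g. 50/100).
func percentageAllows(t tally, totalWeight, pctNum, pctDen int64) bool {
	// compare t.yes/totalWeight >= pctNum/pctDen without integer division
	return t.yes*pctDen >= pctNum*totalWeight
}

func main() {
	t := tally{yes: 3, no: 1, abstain: 1, veto: 0}
	fmt.Println(thresholdAllows(t, 3))            // true: yes weight 3 >= threshold 3
	fmt.Println(percentageAllows(t, 10, 50, 100)) // false: 3/10 < 50%
}
```

Note how updating member weights changes the meaning of a constant threshold but not of a percentage, which is why the percentage policy suits groups with evolving membership.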
#### Voting

There are four voting options: yes, no, abstain, and veto. Not all decision
policies take all four options into account. Votes can contain some optional metadata.
In the current implementation, the voting window begins as soon as a proposal
is submitted, and its end is defined by the group policy's decision policy.

#### Withdrawing Proposals

Proposals can be withdrawn at any time before the voting period ends, either by the
admin of the group policy or by one of the proposers. Once withdrawn, a proposal is
marked as `PROPOSAL_STATUS_WITHDRAWN`, and no further voting or execution is
allowed on it.

#### Aborted Proposals

If the group policy is updated during the voting period of a proposal, the
proposal is marked as `PROPOSAL_STATUS_ABORTED`, and no further voting or
execution is allowed on it. This is because the group policy defines the rules
of proposal voting and execution, so if those rules change during the lifecycle
of a proposal, the proposal should be marked as stale.

#### Tallying

Tallying is the counting of all votes on a proposal. It happens only once in
the lifecycle of a proposal, but can be triggered by two factors, whichever
happens first:

* either someone tries to execute the proposal (see next section), which can
  happen on a `Msg/Exec` transaction, or a `Msg/{SubmitProposal,Vote}`
  transaction with the `Exec` field set. When a proposal execution is attempted,
  a tally is done first to make sure the proposal passes;
* or on `EndBlock`, when the proposal's voting period has just ended.

If the tally result passes the decision policy's rules, the proposal is
marked as `PROPOSAL_STATUS_ACCEPTED`; otherwise it is marked as
`PROPOSAL_STATUS_REJECTED`. In either case, no further voting is allowed, and the
tally result is persisted to state in the proposal's `FinalTallyResult`.
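The proposal statuses described above form a small state machine. The sketch below is an illustrative simplification, not the actual keeper logic; the function names `withdraw` and `tallyResult` are hypothetical.

```go
package main

import "fmt"

// ProposalStatus mirrors the status names used in the spec.
type ProposalStatus string

const (
	StatusSubmitted ProposalStatus = "PROPOSAL_STATUS_SUBMITTED"
	StatusAccepted  ProposalStatus = "PROPOSAL_STATUS_ACCEPTED"
	StatusRejected  ProposalStatus = "PROPOSAL_STATUS_REJECTED"
	StatusWithdrawn ProposalStatus = "PROPOSAL_STATUS_WITHDRAWN"
	StatusAborted   ProposalStatus = "PROPOSAL_STATUS_ABORTED"
)

// withdraw is only possible while the proposal is still open for voting.
func withdraw(s ProposalStatus) (ProposalStatus, error) {
	if s != StatusSubmitted {
		return s, fmt.Errorf("cannot withdraw a proposal in status %s", s)
	}
	return StatusWithdrawn, nil
}

// tallyResult finalizes a submitted proposal based on whether the
// decision policy's rules were satisfied.
func tallyResult(s ProposalStatus, passes bool) (ProposalStatus, error) {
	if s != StatusSubmitted {
		return s, fmt.Errorf("proposal in status %s cannot be tallied", s)
	}
	if passes {
		return StatusAccepted, nil
	}
	return StatusRejected, nil
}

func main() {
	s, _ := tallyResult(StatusSubmitted, true)
	fmt.Println(s) // PROPOSAL_STATUS_ACCEPTED

	// Once finalized, no further voting, withdrawal, or tallying applies.
	if _, err := withdraw(s); err != nil {
		fmt.Println(err)
	}
}
```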
#### Executing Proposals

Proposals are executed only once tallying is done and the group account's
decision policy allows the proposal to pass based on the tally outcome. Such
proposals are marked with the status `PROPOSAL_STATUS_ACCEPTED`. Execution must
happen within `MaxExecutionPeriod` (set by the chain developer) after the
proposal's voting period ends.

In the current design, proposals are not automatically executed by the chain;
rather, a user must submit a `Msg/Exec` transaction to attempt to execute the
proposal based on the current votes and decision policy. Any user (not only the
group members) can execute proposals that have been accepted, and execution fees are
paid by the proposal executor.
It's also possible to try to execute a proposal immediately on creation or on
new votes, using the `Exec` field of the `Msg/SubmitProposal` and `Msg/Vote` requests.
In the former case, proposers' signatures are considered as yes votes.
In these cases, if the proposal can't be executed (i.e. it didn't pass the
decision policy's rules), it remains open for new votes and can be tallied and
executed later on.

A successful proposal execution will have its `ExecutorResult` marked as
`PROPOSAL_EXECUTOR_RESULT_SUCCESS`. The proposal will be automatically pruned
after execution. On the other hand, a failed proposal execution will be marked
as `PROPOSAL_EXECUTOR_RESULT_FAILURE`. Such a proposal can be re-executed
multiple times, until it expires `MaxExecutionPeriod` after the voting period
ends.

### Pruning

Proposals and votes are automatically pruned to avoid state bloat.

Votes are pruned:

* either after a successful tally, i.e. a tally whose result passes the decision
  policy's rules, which can be triggered by a `Msg/Exec` or a
  `Msg/{SubmitProposal,Vote}` with the `Exec` field set,
* or on `EndBlock` right after the proposal's voting period end.
  This also applies to proposals with status `aborted` or `withdrawn`.

whichever happens first.

Proposals are pruned:

* on `EndBlock`, if the proposal's status is `withdrawn` or `aborted` when its
  voting period ends, before tallying,
* and either after a successful proposal execution,
* or on `EndBlock` right after the proposal's `voting_period_end` +
  `max_execution_period` (defined as an app-wide configuration) has passed,

whichever happens first.

## State

The `group` module uses the `orm` package, which provides table storage with support for
primary keys and secondary indexes. `orm` also defines `Sequence`, a persistent unique key generator based on a counter, which can be used along with `Table`s.

Here is the list of tables and associated sequences and indexes stored as part of the `group` module.

### Group Table

The `groupTable` stores `GroupInfo`: `0x0 | BigEndian(GroupId) -> ProtocolBuffer(GroupInfo)`.

#### groupSeq

The value of `groupSeq` is incremented when creating a new group and corresponds to the new `GroupId`: `0x1 | 0x1 -> BigEndian`.

The second `0x1` corresponds to the ORM `sequenceStorageKey`.

#### groupByAdminIndex

`groupByAdminIndex` allows retrieving groups by admin address:
`0x2 | len([]byte(group.Admin)) | []byte(group.Admin) | BigEndian(GroupId) -> []byte()`.

### Group Member Table

The `groupMemberTable` stores `GroupMember`s: `0x10 | BigEndian(GroupId) | []byte(member.Address) -> ProtocolBuffer(GroupMember)`.

The `groupMemberTable` is a primary key table and its `PrimaryKey` is given by
`BigEndian(GroupId) | []byte(member.Address)`, which is used by the following indexes.

#### groupMemberByGroupIndex

`groupMemberByGroupIndex` allows retrieving group members by group id:
`0x11 | BigEndian(GroupId) | PrimaryKey -> []byte()`.
#### groupMemberByMemberIndex

`groupMemberByMemberIndex` allows retrieving group members by member address:
`0x12 | len([]byte(member.Address)) | []byte(member.Address) | PrimaryKey -> []byte()`.

### Group Policy Table

The `groupPolicyTable` stores `GroupPolicyInfo`: `0x20 | len([]byte(Address)) | []byte(Address) -> ProtocolBuffer(GroupPolicyInfo)`.

The `groupPolicyTable` is a primary key table and its `PrimaryKey` is given by
`len([]byte(Address)) | []byte(Address)`, which is used by the following indexes.

#### groupPolicySeq

The value of `groupPolicySeq` is incremented when creating a new group policy and is used to generate the new group policy account `Address`:
`0x21 | 0x1 -> BigEndian`.

The second `0x1` corresponds to the ORM `sequenceStorageKey`.

#### groupPolicyByGroupIndex

`groupPolicyByGroupIndex` allows retrieving group policies by group id:
`0x22 | BigEndian(GroupId) | PrimaryKey -> []byte()`.

#### groupPolicyByAdminIndex

`groupPolicyByAdminIndex` allows retrieving group policies by admin address:
`0x23 | len([]byte(Address)) | []byte(Address) | PrimaryKey -> []byte()`.

### Proposal Table

The `proposalTable` stores `Proposal`s: `0x30 | BigEndian(ProposalId) -> ProtocolBuffer(Proposal)`.

#### proposalSeq

The value of `proposalSeq` is incremented when creating a new proposal and corresponds to the new `ProposalId`: `0x31 | 0x1 -> BigEndian`.

The second `0x1` corresponds to the ORM `sequenceStorageKey`.

#### proposalByGroupPolicyIndex

`proposalByGroupPolicyIndex` allows retrieving proposals by group policy account address:
`0x32 | len([]byte(account.Address)) | []byte(account.Address) | BigEndian(ProposalId) -> []byte()`.

#### proposalsByVotingPeriodEndIndex

`proposalsByVotingPeriodEndIndex` allows retrieving proposals sorted by chronological `voting_period_end`:
`0x33 | sdk.FormatTimeBytes(proposal.VotingPeriodEnd) | BigEndian(ProposalId) -> []byte()`.
This index is used when tallying the proposal votes at the end of the voting period, and for pruning proposals at `VotingPeriodEnd + MaxExecutionPeriod`.

### Vote Table

The `voteTable` stores `Vote`s: `0x40 | BigEndian(ProposalId) | []byte(voter.Address) -> ProtocolBuffer(Vote)`.

The `voteTable` is a primary key table and its `PrimaryKey` is given by
`BigEndian(ProposalId) | []byte(voter.Address)`, which is used by the following indexes.

#### voteByProposalIndex

`voteByProposalIndex` allows retrieving votes by proposal id:
`0x41 | BigEndian(ProposalId) | PrimaryKey -> []byte()`.

#### voteByVoterIndex

`voteByVoterIndex` allows retrieving votes by voter address:
`0x42 | len([]byte(voter.Address)) | []byte(voter.Address) | PrimaryKey -> []byte()`.

## Msg Service

### Msg/CreateGroup

A new group can be created with `MsgCreateGroup`, which has an admin address, a list of members and some optional metadata.

The metadata has a maximum length that is chosen by the app developer and
passed into the group keeper as a config.

```protobuf expandable
// Since: cosmos-sdk 0.46
syntax = "proto3";

package cosmos.group.v1;

option go_package = "github.com/cosmos/cosmos-sdk/x/group";

import "gogoproto/gogo.proto";
import "cosmos_proto/cosmos.proto";
import "google/protobuf/any.proto";
import "cosmos/group/v1/types.proto";
import "cosmos/msg/v1/msg.proto";
import "amino/amino.proto";

// Msg is the cosmos.group.v1 Msg service.
service Msg {
  option (cosmos.msg.v1.service) = true;

  // CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
  rpc CreateGroup(MsgCreateGroup) returns (MsgCreateGroupResponse);

  // UpdateGroupMembers updates the group members with given group id and admin address.
  rpc UpdateGroupMembers(MsgUpdateGroupMembers) returns (MsgUpdateGroupMembersResponse);

  // UpdateGroupAdmin updates the group admin with given group id and previous admin address.
  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) returns (MsgUpdateGroupAdminResponse);

  // UpdateGroupMetadata updates the group metadata with given group id and admin address.
  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) returns (MsgUpdateGroupMetadataResponse);

  // CreateGroupPolicy creates a new group policy using given DecisionPolicy.
  rpc CreateGroupPolicy(MsgCreateGroupPolicy) returns (MsgCreateGroupPolicyResponse);

  // CreateGroupWithPolicy creates a new group with policy.
  rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) returns (MsgCreateGroupWithPolicyResponse);

  // UpdateGroupPolicyAdmin updates a group policy admin.
  rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) returns (MsgUpdateGroupPolicyAdminResponse);

  // UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated.
  rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) returns (MsgUpdateGroupPolicyDecisionPolicyResponse);

  // UpdateGroupPolicyMetadata updates a group policy metadata.
  rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) returns (MsgUpdateGroupPolicyMetadataResponse);

  // SubmitProposal submits a new proposal.
  rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);

  // WithdrawProposal withdraws a proposal.
  rpc WithdrawProposal(MsgWithdrawProposal) returns (MsgWithdrawProposalResponse);

  // Vote allows a voter to vote on a proposal.
  rpc Vote(MsgVote) returns (MsgVoteResponse);

  // Exec executes a proposal.
  rpc Exec(MsgExec) returns (MsgExecResponse);

  // LeaveGroup allows a group member to leave the group.
  rpc LeaveGroup(MsgLeaveGroup) returns (MsgLeaveGroupResponse);
}

//
// Groups
//

// MsgCreateGroup is the Msg/CreateGroup request type.
message MsgCreateGroup {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgCreateGroup";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // members defines the group members.
  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];

  // metadata is any arbitrary metadata attached to the group.
  string metadata = 3;
}

// MsgCreateGroupResponse is the Msg/CreateGroup response type.
message MsgCreateGroupResponse {
  // group_id is the unique ID of the newly created group.
  uint64 group_id = 1;
}

// MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type.
message MsgUpdateGroupMembers {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // member_updates is the list of members to update,
  // set weight to 0 to remove a member.
  repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}

// MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type.
message MsgUpdateGroupMembersResponse {
}

// MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type.
message MsgUpdateGroupAdmin {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin";

  // admin is the current account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // new_admin is the group new admin account address.
  string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type.
message MsgUpdateGroupAdminResponse {
}

// MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type.
message MsgUpdateGroupMetadata {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // metadata is the updated group's metadata.
  string metadata = 3;
}

// MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type.
message MsgUpdateGroupMetadataResponse {
}

//
// Group Policies
//

// MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type.
message MsgCreateGroupPolicy {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy";

  option (gogoproto.goproto_getters) = false;

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // metadata is any arbitrary metadata attached to the group policy.
  string metadata = 3;

  // decision_policy specifies the group policy's decision policy.
  google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
}

// MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type.
message MsgCreateGroupPolicyResponse {
  // address is the account address of the newly created group policy.
  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type.
message MsgUpdateGroupPolicyAdmin {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_policy_address is the account address of the group policy.
  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // new_admin is the new group policy admin.
  string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type.
message MsgUpdateGroupPolicyAdminResponse {
}

// MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type.
message MsgCreateGroupWithPolicy {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy";
  option (gogoproto.goproto_getters) = false;

  // admin is the account address of the group and group policy admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // members defines the group members.
  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];

  // group_metadata is any arbitrary metadata attached to the group.
  string group_metadata = 3;

  // group_policy_metadata is any arbitrary metadata attached to the group policy.
  string group_policy_metadata = 4;

  // group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group
  // and group policy admin.
  bool group_policy_as_admin = 5;

  // decision_policy specifies the group policy's decision policy.
  google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
}

// MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type.
message MsgCreateGroupWithPolicyResponse {
  // group_id is the unique ID of the newly created group with policy.
  uint64 group_id = 1;

  // group_policy_address is the account address of the newly created group policy.
  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type.
message MsgUpdateGroupPolicyDecisionPolicy {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy";

  option (gogoproto.goproto_getters) = false;

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_policy_address is the account address of group policy.
  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // decision_policy is the updated group policy's decision policy.
  google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
}

// MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type.
message MsgUpdateGroupPolicyDecisionPolicyResponse {
}

// MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type.
message MsgUpdateGroupPolicyMetadata {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_policy_address is the account address of group policy.
  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // metadata is the group policy metadata to be updated.
  string metadata = 3;
}

// MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type.
message MsgUpdateGroupPolicyMetadataResponse {
}

//
// Proposals and Voting
//

// Exec defines modes of execution of a proposal on creation or on new vote.
enum Exec {
  // An empty value means that there should be a separate
  // MsgExec request for the proposal to execute.
  EXEC_UNSPECIFIED = 0;

  // Try to execute the proposal immediately.
  // If the proposal is not allowed per the DecisionPolicy,
  // the proposal will still be open and could
  // be executed at a later point.
  EXEC_TRY = 1;
}

// MsgSubmitProposal is the Msg/SubmitProposal request type.
message MsgSubmitProposal {
  option (cosmos.msg.v1.signer) = "proposers";
  option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal";

  option (gogoproto.goproto_getters) = false;

  // group_policy_address is the account address of group policy.
  string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // proposers are the account addresses of the proposers.
  // Proposers signatures will be counted as yes votes.
  repeated string proposers = 2;

  // metadata is any arbitrary metadata attached to the proposal.
  string metadata = 3;

  // messages is a list of `sdk.Msg`s that will be executed if the proposal passes.
  repeated google.protobuf.Any messages = 4;

  // exec defines the mode of execution of the proposal,
  // whether it should be executed immediately on creation or not.
  // If so, proposers signatures are considered as Yes votes.
  Exec exec = 5;

  // title is the title of the proposal.
  //
  // Since: cosmos-sdk 0.47
  string title = 6;

  // summary is the summary of the proposal.
  //
  // Since: cosmos-sdk 0.47
  string summary = 7;
}

// MsgSubmitProposalResponse is the Msg/SubmitProposal response type.
message MsgSubmitProposalResponse {
  // proposal is the unique ID of the proposal.
  uint64 proposal_id = 1;
}

// MsgWithdrawProposal is the Msg/WithdrawProposal request type.
message MsgWithdrawProposal {
  option (cosmos.msg.v1.signer) = "address";
  option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";

  // proposal is the unique ID of the proposal.
  uint64 proposal_id = 1;

  // address is the admin of the group policy or one of the proposers of the proposal.
  string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type.
message MsgWithdrawProposalResponse {
}

// MsgVote is the Msg/Vote request type.
message MsgVote {
  option (cosmos.msg.v1.signer) = "voter";
  option (amino.name) = "cosmos-sdk/group/MsgVote";

  // proposal is the unique ID of the proposal.
  uint64 proposal_id = 1;

  // voter is the voter account address.
  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // option is the voter's choice on the proposal.
  VoteOption option = 3;

  // metadata is any arbitrary metadata attached to the vote.
  string metadata = 4;

  // exec defines whether the proposal should be executed
  // immediately after voting or not.
  Exec exec = 5;
}

// MsgVoteResponse is the Msg/Vote response type.
message MsgVoteResponse {
}

// MsgExec is the Msg/Exec request type.
message MsgExec {
  option (cosmos.msg.v1.signer) = "signer";
  option (amino.name) = "cosmos-sdk/group/MsgExec";

  // proposal is the unique ID of the proposal.
  uint64 proposal_id = 1;

  // executor is the account address used to execute the proposal.
  string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgExecResponse is the Msg/Exec response type.
message MsgExecResponse {
  // result is the final result of the proposal execution.
  ProposalExecutorResult result = 2;
}

// MsgLeaveGroup is the Msg/LeaveGroup request type.
message MsgLeaveGroup {
  option (cosmos.msg.v1.signer) = "address";
  option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";

  // address is the account address of the group member.
  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;
}

// MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
message MsgLeaveGroupResponse {
}
```

It's expected to fail if:

* the metadata length is greater than the `MaxMetadataLen` config,
* the members are not correctly set (e.g. wrong address format, duplicates, or 0 weight).

### Msg/UpdateGroupMembers

Group members can be updated with `MsgUpdateGroupMembers`.

```protobuf expandable
// Since: cosmos-sdk 0.46
syntax = "proto3";

package cosmos.group.v1;

option go_package = "github.com/cosmos/cosmos-sdk/x/group";

import "gogoproto/gogo.proto";
import "cosmos_proto/cosmos.proto";
import "google/protobuf/any.proto";
import "cosmos/group/v1/types.proto";
import "cosmos/msg/v1/msg.proto";
import "amino/amino.proto";

// Msg is the cosmos.group.v1 Msg service.
service Msg {
  option (cosmos.msg.v1.service) = true;

  // CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
  rpc CreateGroup(MsgCreateGroup) returns (MsgCreateGroupResponse);

  // UpdateGroupMembers updates the group members with given group id and admin address.
  rpc UpdateGroupMembers(MsgUpdateGroupMembers) returns (MsgUpdateGroupMembersResponse);

  // UpdateGroupAdmin updates the group admin with given group id and previous admin address.
  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) returns (MsgUpdateGroupAdminResponse);

  // UpdateGroupMetadata updates the group metadata with given group id and admin address.
  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) returns (MsgUpdateGroupMetadataResponse);

  // CreateGroupPolicy creates a new group policy using given DecisionPolicy.
  rpc CreateGroupPolicy(MsgCreateGroupPolicy) returns (MsgCreateGroupPolicyResponse);

  // CreateGroupWithPolicy creates a new group with policy.
  rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) returns (MsgCreateGroupWithPolicyResponse);

  // UpdateGroupPolicyAdmin updates a group policy admin.
  rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) returns (MsgUpdateGroupPolicyAdminResponse);

  // UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated.
  rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) returns (MsgUpdateGroupPolicyDecisionPolicyResponse);

  // UpdateGroupPolicyMetadata updates a group policy metadata.
  rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) returns (MsgUpdateGroupPolicyMetadataResponse);

  // SubmitProposal submits a new proposal.
  rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);

  // WithdrawProposal withdraws a proposal.
  rpc WithdrawProposal(MsgWithdrawProposal) returns (MsgWithdrawProposalResponse);

  // Vote allows a voter to vote on a proposal.
  rpc Vote(MsgVote) returns (MsgVoteResponse);

  // Exec executes a proposal.
  rpc Exec(MsgExec) returns (MsgExecResponse);

  // LeaveGroup allows a group member to leave the group.
  rpc LeaveGroup(MsgLeaveGroup) returns (MsgLeaveGroupResponse);
}

//
// Groups
//

// MsgCreateGroup is the Msg/CreateGroup request type.
message MsgCreateGroup {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgCreateGroup";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // members defines the group members.
  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];

  // metadata is any arbitrary metadata attached to the group.
  string metadata = 3;
}

// MsgCreateGroupResponse is the Msg/CreateGroup response type.
message MsgCreateGroupResponse {
  // group_id is the unique ID of the newly created group.
+ uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. 
+message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. 
+message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. 
+ VoteOption option = 3;
+
+ / metadata is any arbitrary metadata attached to the vote.
+ string metadata = 4;
+
+ / exec defines whether the proposal should be executed
+ / immediately after voting or not.
+ Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+In the list of `MemberUpdates`, an existing member can be removed by setting its weight to 0.
+
+It's expected to fail if:
+
+* the signer is not the admin of the group.
+* the decision policy `Validate()` method of any associated group policy fails against the updated group.
+
+### Msg/UpdateGroupAdmin
+
+The `UpdateGroupAdmin` message can be used to update the group admin.
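As an illustrative sketch only (the struct and `validateUpdateGroupAdmin` helper below are hypothetical, not the generated SDK types), the request carries the current admin address, the group ID, and the new admin address, and the basic stateless checks can be mirrored like so:

```go
package main

import (
	"errors"
	"fmt"
)

// MsgUpdateGroupAdmin mirrors the shape of the proto message of the same
// name (illustrative only, not the generated SDK type).
type MsgUpdateGroupAdmin struct {
	Admin    string // current admin account address
	GroupId  uint64 // unique ID of the group
	NewAdmin string // new admin account address
}

// validateUpdateGroupAdmin sketches the stateless sanity checks; on chain,
// the module additionally verifies that the signer is the group's current
// admin before applying the change.
func validateUpdateGroupAdmin(m MsgUpdateGroupAdmin) error {
	if m.GroupId == 0 {
		return errors.New("group id cannot be 0")
	}
	if m.Admin == "" || m.NewAdmin == "" {
		return errors.New("admin addresses cannot be empty")
	}
	if m.Admin == m.NewAdmin {
		return errors.New("new admin must be different from the current admin")
	}
	return nil
}

func main() {
	msg := MsgUpdateGroupAdmin{
		Admin:    "cosmos1currentadmin",
		GroupId:  1,
		NewAdmin: "cosmos1newadmin",
	}
	fmt.Println(validateUpdateGroupAdmin(msg)) // <nil>
}
```

The full message definition follows in the proto listing below.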
+ +```go expandable +/ Since: cosmos-sdk 0.46 +syntax = "proto3"; + +package cosmos.group.v1; + +option go_package = "github.com/cosmos/cosmos-sdk/x/group"; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; +import "google/protobuf/any.proto"; +import "cosmos/group/v1/types.proto"; +import "cosmos/msg/v1/msg.proto"; +import "amino/amino.proto"; + +/ Msg is the cosmos.group.v1 Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata. + rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. + rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. 
+ rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. 
+ repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. 
+ uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. 
+ string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. 
+message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. 
+ repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. 
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if the signer is not the admin of the group.
+
+### Msg/UpdateGroupMetadata
+
+The `UpdateGroupMetadata` message can be used to update the group metadata.
+
+```go expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+ option (cosmos.msg.v1.service) = true;
+
+ / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+ rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+ / UpdateGroupMembers updates the group members with given group id and admin address.
+ rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. 
+message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. 
+ string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. 
+message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. 
+message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. 
+message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. 
+message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. +message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. +message MsgExecResponse { + / result is the final result of the proposal execution. + ProposalExecutorResult result = 2; +} + +/ MsgLeaveGroup is the Msg/LeaveGroup request type. +message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + / address is the account address of the group member. 
+  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_id is the unique ID of the group.
+  uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* the new metadata length is greater than the `MaxMetadataLen` config.
+* the signer is not the admin of the group.
+
+### Msg/CreateGroupPolicy
+
+A new group policy can be created with `MsgCreateGroupPolicy`, which takes the group admin's address, a group ID, a decision policy, and optional metadata.
+
+```protobuf expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+  option (cosmos.msg.v1.service) = true;
+
+  / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+  rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+  / UpdateGroupMembers updates the group members with given group id and admin address.
+  rpc UpdateGroupMembers(MsgUpdateGroupMembers)
+
+returns (MsgUpdateGroupMembersResponse);
+
+  / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin)
+
+returns (MsgUpdateGroupAdminResponse);
+
+  / UpdateGroupMetadata updates the group metadata with given group id and admin address.
+  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata)
+
+returns (MsgUpdateGroupMetadataResponse);
+
+  / CreateGroupPolicy creates a new group policy using given DecisionPolicy.
+ rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. 
+ string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. 
+ uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. 
+message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. 
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. 
+message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. 
+message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. +message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. +message MsgExecResponse { + / result is the final result of the proposal execution. + ProposalExecutorResult result = 2; +} + +/ MsgLeaveGroup is the Msg/LeaveGroup request type. +message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + / address is the account address of the group member. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; +} + +/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type. +message MsgLeaveGroupResponse { +} +``` + +It's expected to fail if: + +* the signer is not the admin of the group. +* metadata length is greater than `MaxMetadataLen` config. +* the decision policy's `Validate()` method doesn't pass against the group. 
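+
+The `decision_policy` field of `MsgCreateGroupPolicy` is a protobuf `Any` that must resolve to a concrete `cosmos.group.v1.DecisionPolicy` implementation, such as `ThresholdDecisionPolicy`. As an illustration (the threshold and window values below are hypothetical), such a policy expressed in proto-JSON form could look like:
+
+```json
+{
+  "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+  "threshold": "2",
+  "windows": {
+    "voting_period": "86400s",
+    "min_execution_period": "0s"
+  }
+}
+```
+
+Here `threshold` is the minimum weighted sum of yes votes required for a proposal to pass, and the `windows` durations correspond to the `DecisionPolicyWindows` type defined in the group module's `types.proto`. The group module's CLI (e.g. `simd tx group create-group-policy`) accepts a decision policy in a JSON file of roughly this shape.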
+
+### Msg/CreateGroupWithPolicy
+
+A new group with a policy can be created with `MsgCreateGroupWithPolicy`, which takes an admin address, a list of members, a decision policy, optional metadata for both the group and the group policy, and a `group_policy_as_admin` flag; when the flag is set to true, the group policy account address becomes the admin of both the group and the group policy.
+
+```protobuf expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+  option (cosmos.msg.v1.service) = true;
+
+  / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+  rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+  / UpdateGroupMembers updates the group members with given group id and admin address.
+  rpc UpdateGroupMembers(MsgUpdateGroupMembers)
+
+returns (MsgUpdateGroupMembersResponse);
+
+  / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin)
+
+returns (MsgUpdateGroupAdminResponse);
+
+  / UpdateGroupMetadata updates the group metadata with given group id and admin address.
+  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata)
+
+returns (MsgUpdateGroupMetadataResponse);
+
+  / CreateGroupPolicy creates a new group policy using given DecisionPolicy.
+  rpc CreateGroupPolicy(MsgCreateGroupPolicy)
+
+returns (MsgCreateGroupPolicyResponse);
+
+  / CreateGroupWithPolicy creates a new group with policy.
+  rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy)
+
+returns (MsgCreateGroupWithPolicyResponse);
+
+  / UpdateGroupPolicyAdmin updates a group policy admin.
+ rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. 
+message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. 
+message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. 
+message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / proposers are the account addresses of the proposers.
+ / Proposers signatures will be counted as yes votes.
+ repeated string proposers = 2;
+
+ / metadata is any arbitrary metadata attached to the proposal.
+ string metadata = 3;
+
+ / messages is a list of `sdk.Msg`s that will be executed if the proposal passes.
+ repeated google.protobuf.Any messages = 4;
+
+ / exec defines the mode of execution of the proposal,
+ / whether it should be executed immediately on creation or not.
+ / If so, proposers signatures are considered as Yes votes.
+ Exec exec = 5;
+
+ / title is the title of the proposal.
+ /
+ / Since: cosmos-sdk 0.47
+ string title = 6;
+
+ / summary is the summary of the proposal.
+ /
+ / Since: cosmos-sdk 0.47
+ string summary = 7;
+}
+
+/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type.
+message MsgSubmitProposalResponse {
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+}
+
+/ MsgWithdrawProposal is the Msg/WithdrawProposal request type.
+message MsgWithdrawProposal {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / address is the admin of the group policy or one of the proposers of the proposal.
+ string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type.
+message MsgWithdrawProposalResponse {
+}
+
+/ MsgVote is the Msg/Vote request type.
+message MsgVote {
+ option (cosmos.msg.v1.signer) = "voter";
+ option (amino.name) = "cosmos-sdk/group/MsgVote";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / voter is the voter account address.
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / option is the voter's choice on the proposal.
+ VoteOption option = 3;
+
+ / metadata is any arbitrary metadata attached to the vote.
+ string metadata = 4;
+
+ / exec defines whether the proposal should be executed
+ / immediately after voting or not.
+ Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail for the same reasons as `Msg/CreateGroup` and `Msg/CreateGroupPolicy`.
+
+### Msg/UpdateGroupPolicyAdmin
+
+The `UpdateGroupPolicyAdmin` message can be used to update the admin of a group policy.
+
+```protobuf expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg { + option (cosmos.msg.v1.service) = true; + + / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata. + rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. + rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. 
+ rpc Vote(MsgVote)
+
+returns (MsgVoteResponse);
+
+ / Exec executes a proposal.
+ rpc Exec(MsgExec)
+
+returns (MsgExecResponse);
+
+ / LeaveGroup allows a group member to leave the group.
+ rpc LeaveGroup(MsgLeaveGroup)
+
+returns (MsgLeaveGroupResponse);
+}
+
+/
+/ Groups
+/
+
+/ MsgCreateGroup is the Msg/CreateGroup request type.
+message MsgCreateGroup {
+ option (cosmos.msg.v1.signer) = "admin";
+ option (amino.name) = "cosmos-sdk/MsgCreateGroup";
+
+ / admin is the account address of the group admin.
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / members defines the group members.
+ repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+ / metadata is any arbitrary metadata attached to the group.
+ string metadata = 3;
+}
+
+/ MsgCreateGroupResponse is the Msg/CreateGroup response type.
+message MsgCreateGroupResponse {
+ / group_id is the unique ID of the newly created group.
+ uint64 group_id = 1;
+}
+
+/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type.
+message MsgUpdateGroupMembers {
+ option (cosmos.msg.v1.signer) = "admin";
+ option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers";
+
+ / admin is the account address of the group admin.
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+
+ / member_updates is the list of members to update,
+ / set weight to 0 to remove a member.
+ repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+
+/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type.
+message MsgUpdateGroupMembersResponse {
+}
+
+/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type.
+message MsgUpdateGroupAdmin {
+ option (cosmos.msg.v1.signer) = "admin";
+ option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin";
+
+ / admin is the current account address of the group admin.
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. 
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. 
+ google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. 
+ /
+ / Since: cosmos-sdk 0.47
+ string summary = 7;
+}
+
+/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type.
+message MsgSubmitProposalResponse {
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+}
+
+/ MsgWithdrawProposal is the Msg/WithdrawProposal request type.
+message MsgWithdrawProposal {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / address is the admin of the group policy or one of the proposers of the proposal.
+ string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type.
+message MsgWithdrawProposalResponse {
+}
+
+/ MsgVote is the Msg/Vote request type.
+message MsgVote {
+ option (cosmos.msg.v1.signer) = "voter";
+ option (amino.name) = "cosmos-sdk/group/MsgVote";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / voter is the voter account address.
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / option is the voter's choice on the proposal.
+ VoteOption option = 3;
+
+ / metadata is any arbitrary metadata attached to the vote.
+ string metadata = 4;
+
+ / exec defines whether the proposal should be executed
+ / immediately after voting or not.
+ Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if the signer is not the admin of the group policy.
+
+### Msg/UpdateGroupPolicyDecisionPolicy
+
+The `UpdateGroupPolicyDecisionPolicy` message can be used to update the decision policy of a group policy.
+
+```protobuf expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+ option (cosmos.msg.v1.service) = true;
+
+ / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+ rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+ / UpdateGroupMembers updates the group members with given group id and admin address.
+ rpc UpdateGroupMembers(MsgUpdateGroupMembers)
+
+returns (MsgUpdateGroupMembersResponse);
+
+ / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+ rpc UpdateGroupAdmin(MsgUpdateGroupAdmin)
+
+returns (MsgUpdateGroupAdminResponse);
+
+ / UpdateGroupMetadata updates the group metadata with given group id and admin address.
+ rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. 
+ repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+ / metadata is any arbitrary metadata attached to the group.
+ string metadata = 3;
+}
+
+/ MsgCreateGroupResponse is the Msg/CreateGroup response type.
+message MsgCreateGroupResponse {
+ / group_id is the unique ID of the newly created group.
+ uint64 group_id = 1;
+}
+
+/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type.
+message MsgUpdateGroupMembers {
+ option (cosmos.msg.v1.signer) = "admin";
+ option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers";
+
+ / admin is the account address of the group admin.
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+
+ / member_updates is the list of members to update,
+ / set weight to 0 to remove a member.
+ repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+
+/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type.
+message MsgUpdateGroupMembersResponse {
+}
+
+/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type.
+message MsgUpdateGroupAdmin {
+ option (cosmos.msg.v1.signer) = "admin";
+ option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin";
+
+ / admin is the current account address of the group admin.
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+
+ / new_admin is the group new admin account address.
+ string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type.
+message MsgUpdateGroupAdminResponse {
+}
+
+/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type.
+message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. 
+message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. 
+ / If the proposal is not allowed per the DecisionPolicy,
+ / the proposal will still be open and could
+ / be executed at a later point.
+ EXEC_TRY = 1;
+}
+
+/ MsgSubmitProposal is the Msg/SubmitProposal request type.
+message MsgSubmitProposal {
+ option (cosmos.msg.v1.signer) = "proposers";
+ option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal";
+
+ option (gogoproto.goproto_getters) = false;
+
+ / group_policy_address is the account address of group policy.
+ string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / proposers are the account addresses of the proposers.
+ / Proposers signatures will be counted as yes votes.
+ repeated string proposers = 2;
+
+ / metadata is any arbitrary metadata attached to the proposal.
+ string metadata = 3;
+
+ / messages is a list of `sdk.Msg`s that will be executed if the proposal passes.
+ repeated google.protobuf.Any messages = 4;
+
+ / exec defines the mode of execution of the proposal,
+ / whether it should be executed immediately on creation or not.
+ / If so, proposers signatures are considered as Yes votes.
+ Exec exec = 5;
+
+ / title is the title of the proposal.
+ /
+ / Since: cosmos-sdk 0.47
+ string title = 6;
+
+ / summary is the summary of the proposal.
+ /
+ / Since: cosmos-sdk 0.47
+ string summary = 7;
+}
+
+/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type.
+message MsgSubmitProposalResponse {
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+}
+
+/ MsgWithdrawProposal is the Msg/WithdrawProposal request type.
+message MsgWithdrawProposal {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / address is the admin of the group policy or one of the proposers of the proposal.
+ string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. +message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. +message MsgExecResponse { + / result is the final result of the proposal execution. + ProposalExecutorResult result = 2; +} + +/ MsgLeaveGroup is the Msg/LeaveGroup request type. +message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + / address is the account address of the group member. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; +} + +/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type. 
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* the signer is not the admin of the group policy.
+* the new decision policy's `Validate()` method doesn't pass against the group.
+
+### Msg/UpdateGroupPolicyMetadata
+
+The `UpdateGroupPolicyMetadata` message can be used to update the metadata of a group policy.
+
+```protobuf expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+ option (cosmos.msg.v1.service) = true;
+
+ / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+ rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+ / UpdateGroupMembers updates the group members with given group id and admin address.
+ rpc UpdateGroupMembers(MsgUpdateGroupMembers)
+
+returns (MsgUpdateGroupMembersResponse);
+
+ / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+ rpc UpdateGroupAdmin(MsgUpdateGroupAdmin)
+
+returns (MsgUpdateGroupAdminResponse);
+
+ / UpdateGroupMetadata updates the group metadata with given group id and admin address.
+ rpc UpdateGroupMetadata(MsgUpdateGroupMetadata)
+
+returns (MsgUpdateGroupMetadataResponse);
+
+ / CreateGroupPolicy creates a new group policy using given DecisionPolicy.
+ rpc CreateGroupPolicy(MsgCreateGroupPolicy)
+
+returns (MsgCreateGroupPolicyResponse);
+
+ / CreateGroupWithPolicy creates a new group with policy.
+ rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy)
+
+returns (MsgCreateGroupWithPolicyResponse);
+
+ / UpdateGroupPolicyAdmin updates a group policy admin.
+ rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. 
+message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. 
+message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. 
+message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. 
+ VoteOption option = 3;
+
+ / metadata is any arbitrary metadata attached to the vote.
+ string metadata = 4;
+
+ / exec defines whether the proposal should be executed
+ / immediately after voting or not.
+ Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* new metadata length is greater than `MaxMetadataLen` config.
+* the signer is not the admin of the group policy.
+
+### Msg/SubmitProposal
+
+A new proposal can be created with `MsgSubmitProposal`, which takes a group policy account address, a list of proposers' addresses, a list of messages to execute if the proposal is accepted, and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after its creation; in that case, the proposers' signatures are counted as yes votes.
+ +```go expandable +/ Since: cosmos-sdk 0.46 +syntax = "proto3"; + +package cosmos.group.v1; + +option go_package = "github.com/cosmos/cosmos-sdk/x/group"; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; +import "google/protobuf/any.proto"; +import "cosmos/group/v1/types.proto"; +import "cosmos/msg/v1/msg.proto"; +import "amino/amino.proto"; + +/ Msg is the cosmos.group.v1 Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata. + rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. + rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. 
+ rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. 
+ repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. 
+ uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. 
+ string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. 
+message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. 
+ repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. 
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* metadata, title, or summary length is greater than `MaxMetadataLen` config.
+* any of the proposers is not a group member.
+
+### Msg/WithdrawProposal
+
+A proposal can be withdrawn using `MsgWithdrawProposal`, which takes an `address` (either a proposer or the group policy admin) and the `proposal_id` of the proposal to withdraw.
+
+```protobuf expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+ option (cosmos.msg.v1.service) = true;
+
+ / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+ rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. + rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. 
+ rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. 
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. 
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. 
+ google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. 
+ / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. +message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. 
+message MsgExecResponse {
+  / result is the final result of the proposal execution.
+  ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+  / address is the account address of the group member.
+  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_id is the unique ID of the group.
+  uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* the signer is neither the group policy admin nor a proposer of the proposal.
+* the proposal is already closed or aborted.
+
+### Msg/Vote
+
+A new vote can be cast with `MsgVote`, given a proposal ID, a voter address, a vote option (yes, no, veto, or abstain), and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after voting.
+
+```go expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+  option (cosmos.msg.v1.service) = true;
+
+  / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+  rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+  / UpdateGroupMembers updates the group members with given group id and admin address.
+  rpc UpdateGroupMembers(MsgUpdateGroupMembers)
+
+returns (MsgUpdateGroupMembersResponse);
+
+  / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+ rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. 
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. 
+message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. 
+message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. 
+ / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. 
+ string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. +message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. +message MsgExecResponse { + / result is the final result of the proposal execution. + ProposalExecutorResult result = 2; +} + +/ MsgLeaveGroup is the Msg/LeaveGroup request type. +message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + / address is the account address of the group member. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; +} + +/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type. 
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* metadata length is greater than the `MaxMetadataLen` config.
+* the proposal is no longer in its voting period.
+
+### Msg/Exec
+
+A proposal can be executed with `MsgExec`, given the ID of the proposal and the executor's account address.
+
+```go expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+  option (cosmos.msg.v1.service) = true;
+
+  / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+  rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+  / UpdateGroupMembers updates the group members with given group id and admin address.
+  rpc UpdateGroupMembers(MsgUpdateGroupMembers)
+
+returns (MsgUpdateGroupMembersResponse);
+
+  / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin)
+
+returns (MsgUpdateGroupAdminResponse);
+
+  / UpdateGroupMetadata updates the group metadata with given group id and admin address.
+  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata)
+
+returns (MsgUpdateGroupMetadataResponse);
+
+  / CreateGroupPolicy creates a new group policy using given DecisionPolicy.
+  rpc CreateGroupPolicy(MsgCreateGroupPolicy)
+
+returns (MsgCreateGroupPolicyResponse);
+
+  / CreateGroupWithPolicy creates a new group with policy.
+  rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy)
+
+returns (MsgCreateGroupWithPolicyResponse);
+
+  / UpdateGroupPolicyAdmin updates a group policy admin.
+ rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. 
+message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. 
+message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. 
+message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. 
+  VoteOption option = 3;
+
+  / metadata is any arbitrary metadata attached to the vote.
+  string metadata = 4;
+
+  / exec defines whether the proposal should be executed
+  / immediately after voting or not.
+  Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "signer";
+  option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+  / proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / executor is the account address used to execute the proposal.
+  string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec request type.
+message MsgExecResponse {
+  / result is the final result of the proposal execution.
+  ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+  / address is the account address of the group member.
+  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_id is the unique ID of the group.
+  uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+The messages that are part of this proposal won't be executed if:
+
+* the proposal has not been accepted by the group policy.
+* the proposal has already been successfully executed.
+
+### Msg/LeaveGroup
+
+The `MsgLeaveGroup` allows a group member to leave a group.
+ +```go expandable +/ Since: cosmos-sdk 0.46 +syntax = "proto3"; + +package cosmos.group.v1; + +option go_package = "github.com/cosmos/cosmos-sdk/x/group"; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; +import "google/protobuf/any.proto"; +import "cosmos/group/v1/types.proto"; +import "cosmos/msg/v1/msg.proto"; +import "amino/amino.proto"; + +/ Msg is the cosmos.group.v1 Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata. + rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. + rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. 
+ rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. 
+ repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. 
+ uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. 
+ string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. 
+message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. 
+ repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. 
+message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. +message MsgExecResponse { + / result is the final result of the proposal execution. + ProposalExecutorResult result = 2; +} + +/ MsgLeaveGroup is the Msg/LeaveGroup request type. +message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + / address is the account address of the group member. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; +} + +/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type. +message MsgLeaveGroupResponse { +} +``` + +It's expected to fail if: + +* the group member is not part of the group. +* for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group. 
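+
+The leave-group checks can be sketched in isolation. The following is a minimal, self-contained Go sketch — the `member`, `thresholdPolicy`, and `leaveGroup` names and the simplified "threshold must not exceed total weight" rule are illustrative assumptions, not the actual `x/group` keeper types or the full `Validate()` logic:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+type member struct {
+	addr   string
+	weight uint64
+}
+
+// thresholdPolicy is a stand-in for ThresholdDecisionPolicy; the real
+// Validate() performs more checks than the single rule sketched here.
+type thresholdPolicy struct {
+	threshold uint64
+}
+
+func (p thresholdPolicy) validate(totalWeight uint64) error {
+	if p.threshold > totalWeight {
+		return errors.New("threshold exceeds group total weight")
+	}
+	return nil
+}
+
+// leaveGroup removes addr from members, then re-validates every associated
+// group policy against the reduced total weight, mirroring the two failure
+// modes listed above.
+func leaveGroup(members []member, policies []thresholdPolicy, addr string) ([]member, error) {
+	idx := -1
+	for i, m := range members {
+		if m.addr == addr {
+			idx = i
+			break
+		}
+	}
+	if idx < 0 {
+		return nil, errors.New("not a group member")
+	}
+	remaining := append(append([]member{}, members[:idx]...), members[idx+1:]...)
+	var total uint64
+	for _, m := range remaining {
+		total += m.weight
+	}
+	for _, p := range policies {
+		if err := p.validate(total); err != nil {
+			return nil, err
+		}
+	}
+	return remaining, nil
+}
+
+func main() {
+	members := []member{{"cosmos1aaa", 2}, {"cosmos1bbb", 1}}
+	policies := []thresholdPolicy{{threshold: 2}}
+
+	if _, err := leaveGroup(members, policies, "cosmos1ccc"); err != nil {
+		fmt.Println("reject:", err)
+	}
+	if _, err := leaveGroup(members, policies, "cosmos1aaa"); err != nil {
+		fmt.Println("reject:", err)
+	}
+	if left, err := leaveGroup(members, policies, "cosmos1bbb"); err == nil {
+		fmt.Println("ok, remaining members:", len(left))
+	}
+}
+```
+
+In the real module these checks run inside the keeper's `LeaveGroup` handler; the sketch only illustrates the ordering: membership is checked first, then each policy is re-validated against the updated group.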
+ +## Events + +The group module emits the following events: + +### EventCreateGroup + +| Type | Attribute Key | Attribute Value | +| -------------------------------- | ------------- | -------------------------------- | +| message | action | /cosmos.group.v1.Msg/CreateGroup | +| cosmos.group.v1.EventCreateGroup | group\_id | `{groupId}` | + +### EventUpdateGroup + +| Type | Attribute Key | Attribute Value | +| -------------------------------- | ------------- | ---------------------------------------------------------- | +| message | action | `/cosmos.group.v1.Msg/UpdateGroup{Admin\|Metadata\|Members}` | +| cosmos.group.v1.EventUpdateGroup | group\_id | `{groupId}` | + +### EventCreateGroupPolicy + +| Type | Attribute Key | Attribute Value | +| -------------------------------------- | ------------- | -------------------------------------- | +| message | action | /cosmos.group.v1.Msg/CreateGroupPolicy | +| cosmos.group.v1.EventCreateGroupPolicy | address | `{groupPolicyAddress}` | + +### EventUpdateGroupPolicy + +| Type | Attribute Key | Attribute Value | +| -------------------------------------- | ------------- | ----------------------------------------------------------------------- | +| message | action | `/cosmos.group.v1.Msg/UpdateGroupPolicy{Admin\|Metadata\|DecisionPolicy}` | +| cosmos.group.v1.EventUpdateGroupPolicy | address | `{groupPolicyAddress}` | + +### EventCreateProposal + +| Type | Attribute Key | Attribute Value | +| ----------------------------------- | ------------- | ----------------------------------- | +| message | action | /cosmos.group.v1.Msg/CreateProposal | +| cosmos.group.v1.EventCreateProposal | proposal\_id | `{proposalId}` | + +### EventWithdrawProposal + +| Type | Attribute Key | Attribute Value | +| ------------------------------------- | ------------- | ------------------------------------- | +| message | action | /cosmos.group.v1.Msg/WithdrawProposal | +| cosmos.group.v1.EventWithdrawProposal | proposal\_id | `{proposalId}` | + +### 
EventVote
+
+| Type                      | Attribute Key | Attribute Value           |
+| ------------------------- | ------------- | ------------------------- |
+| message                   | action        | /cosmos.group.v1.Msg/Vote |
+| cosmos.group.v1.EventVote | proposal\_id  | `{proposalId}`            |
+
+### EventExec
+
+| Type                      | Attribute Key | Attribute Value           |
+| ------------------------- | ------------- | ------------------------- |
+| message                   | action        | /cosmos.group.v1.Msg/Exec |
+| cosmos.group.v1.EventExec | proposal\_id  | `{proposalId}`            |
+| cosmos.group.v1.EventExec | logs          | `{logs\_string}`          |
+
+### EventLeaveGroup
+
+| Type                            | Attribute Key | Attribute Value                 |
+| ------------------------------- | ------------- | ------------------------------- |
+| message                         | action        | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventLeaveGroup | group\_id     | `{groupId}`                     |
+| cosmos.group.v1.EventLeaveGroup | address       | `{address}`                     |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `group` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `group` state.
+
+```bash
+simd query group --help
+```
+
+##### group-info
+
+The `group-info` command allows users to query group info by a given group ID.
+
+```bash
+simd query group group-info [id] [flags]
+```
+
+Example:
+
+```bash
+simd query group group-info 1
+```
+
+Example Output:
+
+```bash
+admin: cosmos1..
+group_id: "1"
+metadata: AQ==
+total_weight: "3"
+version: "1"
+```
+
+##### group-policy-info
+
+The `group-policy-info` command allows users to query group policy info by the account address of the group policy.
+
+```bash
+simd query group group-policy-info [group-policy-account] [flags]
+```
+
+Example:
+
+```bash
+simd query group group-policy-info cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+address: cosmos1..
+admin: cosmos1..
+decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s +group_id: "1" +metadata: AQ== +version: "1" +``` + +##### group-members + +The `group-members` command allows users to query for group members by group id with pagination flags. + +```bash +simd query group group-members [id] [flags] +``` + +Example: + +```bash +simd query group group-members 1 +``` + +Example Output: + +```bash expandable +members: +- group_id: "1" + member: + address: cosmos1.. + metadata: AQ== + weight: "2" +- group_id: "1" + member: + address: cosmos1.. + metadata: AQ== + weight: "1" +pagination: + next_key: null + total: "2" +``` + +##### groups-by-admin + +The `groups-by-admin` command allows users to query for groups by admin account address with pagination flags. + +```bash +simd query group groups-by-admin [admin] [flags] +``` + +Example: + +```bash +simd query group groups-by-admin cosmos1.. +``` + +Example Output: + +```bash expandable +groups: +- admin: cosmos1.. + group_id: "1" + metadata: AQ== + total_weight: "3" + version: "1" +- admin: cosmos1.. + group_id: "2" + metadata: AQ== + total_weight: "3" + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### group-policies-by-group + +The `group-policies-by-group` command allows users to query for group policies by group id with pagination flags. + +```bash +simd query group group-policies-by-group [group-id] [flags] +``` + +Example: + +```bash +simd query group group-policies-by-group 1 +``` + +Example Output: + +```bash expandable +group_policies: +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +- address: cosmos1.. + admin: cosmos1.. 
+ decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### group-policies-by-admin + +The `group-policies-by-admin` command allows users to query for group policies by admin account address with pagination flags. + +```bash +simd query group group-policies-by-admin [admin] [flags] +``` + +Example: + +```bash +simd query group group-policies-by-admin cosmos1.. +``` + +Example Output: + +```bash expandable +group_policies: +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### proposal + +The `proposal` command allows users to query for proposal by id. + +```bash +simd query group proposal [id] [flags] +``` + +Example: + +```bash +simd query group proposal 1 +``` + +Example Output: + +```bash expandable +proposal: + address: cosmos1.. + executor_result: EXECUTOR_RESULT_NOT_RUN + group_policy_version: "1" + group_version: "1" + metadata: AQ== + msgs: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "100000000" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + proposal_id: "1" + proposers: + - cosmos1.. 
+ result: RESULT_UNFINALIZED + status: STATUS_SUBMITTED + submitted_at: "2021-12-17T07:06:26.310638964Z" + windows: + min_execution_period: 0s + voting_period: 432000s + vote_state: + abstain_count: "0" + no_count: "0" + veto_count: "0" + yes_count: "0" + summary: "Summary" + title: "Title" +``` + +##### proposals-by-group-policy + +The `proposals-by-group-policy` command allows users to query for proposals by account address of group policy with pagination flags. + +```bash +simd query group proposals-by-group-policy [group-policy-account] [flags] +``` + +Example: + +```bash +simd query group proposals-by-group-policy cosmos1.. +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "1" +proposals: +- address: cosmos1.. + executor_result: EXECUTOR_RESULT_NOT_RUN + group_policy_version: "1" + group_version: "1" + metadata: AQ== + msgs: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "100000000" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + proposal_id: "1" + proposers: + - cosmos1.. + result: RESULT_UNFINALIZED + status: STATUS_SUBMITTED + submitted_at: "2021-12-17T07:06:26.310638964Z" + windows: + min_execution_period: 0s + voting_period: 432000s + vote_state: + abstain_count: "0" + no_count: "0" + veto_count: "0" + yes_count: "0" + summary: "Summary" + title: "Title" +``` + +##### vote + +The `vote` command allows users to query for vote by proposal id and voter account address. + +```bash +simd query group vote [proposal-id] [voter] [flags] +``` + +Example: + +```bash +simd query group vote 1 cosmos1.. +``` + +Example Output: + +```bash +vote: + choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +##### votes-by-proposal + +The `votes-by-proposal` command allows users to query for votes by proposal id with pagination flags. 
+ +```bash +simd query group votes-by-proposal [proposal-id] [flags] +``` + +Example: + +```bash +simd query group votes-by-proposal 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +votes: +- choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +##### votes-by-voter + +The `votes-by-voter` command allows users to query for votes by voter account address with pagination flags. + +```bash +simd query group votes-by-voter [voter] [flags] +``` + +Example: + +```bash +simd query group votes-by-voter cosmos1.. +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +votes: +- choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +### Transactions + +The `tx` commands allow users to interact with the `group` module. + +```bash +simd tx group --help +``` + +#### create-group + +The `create-group` command allows users to create a group which is an aggregation of member accounts with associated weights and +an administrator account. + +```bash +simd tx group create-group [admin] [metadata] [members-json-file] +``` + +Example: + +```bash +simd tx group create-group cosmos1.. "AQ==" members.json +``` + +#### update-group-admin + +The `update-group-admin` command allows users to update a group's admin. + +```bash +simd tx group update-group-admin [admin] [group-id] [new-admin] [flags] +``` + +Example: + +```bash +simd tx group update-group-admin cosmos1.. 1 cosmos1.. +``` + +#### update-group-members + +The `update-group-members` command allows users to update a group's members. + +```bash +simd tx group update-group-members [admin] [group-id] [members-json-file] [flags] +``` + +Example: + +```bash +simd tx group update-group-members cosmos1.. 1 members.json +``` + +#### update-group-metadata + +The `update-group-metadata` command allows users to update a group's metadata. 
+ +```bash +simd tx group update-group-metadata [admin] [group-id] [metadata] [flags] +``` + +Example: + +```bash +simd tx group update-group-metadata cosmos1.. 1 "AQ==" +``` + +#### create-group-policy + +The `create-group-policy` command allows users to create a group policy which is an account associated with a group and a decision policy. + +```bash +simd tx group create-group-policy [admin] [group-id] [metadata] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group create-group-policy cosmos1.. 1 "AQ==" '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### create-group-with-policy + +The `create-group-with-policy` command allows users to create a group which is an aggregation of member accounts with associated weights and an administrator account with decision policy. If the `--group-policy-as-admin` flag is set to `true`, the group policy address becomes the group and group policy admin. + +```bash +simd tx group create-group-with-policy [admin] [group-metadata] [group-policy-metadata] [members-json-file] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group create-group-with-policy cosmos1.. "AQ==" "AQ==" members.json '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### update-group-policy-admin + +The `update-group-policy-admin` command allows users to update a group policy admin. + +```bash +simd tx group update-group-policy-admin [admin] [group-policy-account] [new-admin] [flags] +``` + +Example: + +```bash +simd tx group update-group-policy-admin cosmos1.. cosmos1.. cosmos1.. +``` + +#### update-group-policy-metadata + +The `update-group-policy-metadata` command allows users to update a group policy metadata. 
+
+```bash
+simd tx group update-group-policy-metadata [admin] [group-policy-account] [new-metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-policy-metadata cosmos1.. cosmos1.. "AQ=="
+```
+
+#### update-group-policy-decision-policy
+
+The `update-group-policy-decision-policy` command allows users to update a group policy's decision policy.
+
+```bash
+simd tx group update-group-policy-decision-policy [admin] [group-policy-account] [decision-policy] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-policy-decision-policy cosmos1.. cosmos1.. '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"2", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}'
+```
+
+#### create-proposal
+
+The `create-proposal` command allows users to submit a new proposal.
+
+```bash
+simd tx group create-proposal [group-policy-account] [proposer[,proposer]*] [msg_tx_json_file] [metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group create-proposal cosmos1.. cosmos1.. msg_tx.json "AQ=="
+```
+
+#### withdraw-proposal
+
+The `withdraw-proposal` command allows users to withdraw a proposal.
+
+```bash
+simd tx group withdraw-proposal [proposal-id] [group-policy-admin-or-proposer]
+```
+
+Example:
+
+```bash
+simd tx group withdraw-proposal 1 cosmos1..
+```
+
+#### vote
+
+The `vote` command allows users to vote on a proposal.
+
+```bash
+simd tx group vote [proposal-id] [voter] [choice] [metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group vote 1 cosmos1.. CHOICE_YES "AQ=="
+```
+
+#### exec
+
+The `exec` command allows users to execute a proposal.
+
+```bash
+simd tx group exec [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx group exec 1
+```
+
+#### leave-group
+
+The `leave-group` command allows a group member to leave the group.
+
+```bash
+simd tx group leave-group [member-address] [group-id]
+```
+
+Example:
+
+```bash
+simd tx group leave-group cosmos1... 
1 +``` + +### gRPC + +A user can query the `group` module using gRPC endpoints. + +#### GroupInfo + +The `GroupInfo` endpoint allows users to query for group info by given group id. + +```bash +cosmos.group.v1.Query/GroupInfo +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":1}' localhost:9090 cosmos.group.v1.Query/GroupInfo +``` + +Example Output: + +```bash +{ + "info": { + "groupId": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + } +} +``` + +#### GroupPolicyInfo + +The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy. + +```bash +cosmos.group.v1.Query/GroupPolicyInfo +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPolicyInfo +``` + +Example Output: + +```bash +{ + "info": { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows": {"voting_period": "120h", "min_execution_period": "0s"}}, + } +} +``` + +#### GroupMembers + +The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags. + +```bash +cosmos.group.v1.Query/GroupMembers +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupMembers +``` + +Example Output: + +```bash expandable +{ + "members": [ + { + "groupId": "1", + "member": { + "address": "cosmos1..", + "weight": "1" + } + }, + { + "groupId": "1", + "member": { + "address": "cosmos1..", + "weight": "2" + } + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupsByAdmin + +The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags. 
+ +```bash +cosmos.group.v1.Query/GroupsByAdmin +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupsByAdmin +``` + +Example Output: + +```bash expandable +{ + "groups": [ + { + "groupId": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + }, + { + "groupId": "2", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupPoliciesByGroup + +The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags. + +```bash +cosmos.group.v1.Query/GroupPoliciesByGroup +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByGroup +``` + +Example Output: + +```bash expandable +{ + "GroupPolicies": [ + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + }, + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupPoliciesByAdmin + +The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags. 
+ +```bash +cosmos.group.v1.Query/GroupPoliciesByAdmin +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByAdmin +``` + +Example Output: + +```bash expandable +{ + "GroupPolicies": [ + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + }, + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### Proposal + +The `Proposal` endpoint allows users to query for proposal by id. + +```bash +cosmos.group.v1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposalId": "1", + "address": "cosmos1..", + "proposers": [ + "cosmos1.." + ], + "submittedAt": "2021-12-17T07:06:26.310638964Z", + "groupVersion": "1", + "GroupPolicyVersion": "1", + "status": "STATUS_SUBMITTED", + "result": "RESULT_UNFINALIZED", + "voteState": { + "yesCount": "0", + "noCount": "0", + "abstainCount": "0", + "vetoCount": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executorResult": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "title": "Title", + "summary": "Summary", + } +} +``` + +#### ProposalsByGroupPolicy + +The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags. 
+ +```bash +cosmos.group.v1.Query/ProposalsByGroupPolicy +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/ProposalsByGroupPolicy +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposalId": "1", + "address": "cosmos1..", + "proposers": [ + "cosmos1.." + ], + "submittedAt": "2021-12-17T08:03:27.099649352Z", + "groupVersion": "1", + "GroupPolicyVersion": "1", + "status": "STATUS_CLOSED", + "result": "RESULT_ACCEPTED", + "voteState": { + "yesCount": "1", + "noCount": "0", + "abstainCount": "0", + "vetoCount": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executorResult": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "title": "Title", + "summary": "Summary", + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### VoteByProposalVoter + +The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address. + +```bash +cosmos.group.v1.Query/VoteByProposalVoter +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VoteByProposalVoter +``` + +Example Output: + +```bash +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } +} +``` + +#### VotesByProposal + +The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags. 
+ +```bash +cosmos.group.v1.Query/VotesByProposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/VotesByProposal +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### VotesByVoter + +The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags. + +```bash +cosmos.group.v1.Query/VotesByVoter +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VotesByVoter +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### REST + +A user can query the `group` module using REST endpoints. + +#### GroupInfo + +The `GroupInfo` endpoint allows users to query for group info by given group id. + +```bash +/cosmos/group/v1/group_info/{group_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_info/1 +``` + +Example Output: + +```bash +{ + "info": { + "id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "total_weight": "3" + } +} +``` + +#### GroupPolicyInfo + +The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy. + +```bash +/cosmos/group/v1/group_policy_info/{address} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policy_info/cosmos1.. 
+```
+
+Example Output:
+
+```bash expandable
+{
+  "info": {
+    "address": "cosmos1..",
+    "group_id": "1",
+    "admin": "cosmos1..",
+    "metadata": "AQ==",
+    "version": "1",
+    "decision_policy": {
+      "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+      "threshold": "1",
+      "windows": {
+        "voting_period": "120h",
+        "min_execution_period": "0s"
+      }
+    }
+  }
+}
+```
+
+#### GroupMembers
+
+The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags.
+
+```bash
+/cosmos/group/v1/group_members/{group_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_members/1
+```
+
+Example Output:
+
+```bash expandable
+{
+  "members": [
+    {
+      "group_id": "1",
+      "member": {
+        "address": "cosmos1..",
+        "weight": "1",
+        "metadata": "AQ=="
+      }
+    },
+    {
+      "group_id": "1",
+      "member": {
+        "address": "cosmos1..",
+        "weight": "2",
+        "metadata": "AQ=="
+      }
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### GroupsByAdmin
+
+The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags.
+
+```bash
+/cosmos/group/v1/groups_by_admin/{admin}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/groups_by_admin/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "groups": [
+    {
+      "id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "total_weight": "3"
+    },
+    {
+      "id": "2",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "total_weight": "3"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### GroupPoliciesByGroup
+
+The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags.
+ +```bash +/cosmos/group/v1/group_policies_by_group/{group_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policies_by_group/1 +``` + +Example Output: + +```bash expandable +{ + "group_policies": [ + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + }, + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### GroupPoliciesByAdmin + +The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags. + +```bash +/cosmos/group/v1/group_policies_by_admin/{admin} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policies_by_admin/cosmos1.. 
+```
+
+Example Output:
+
+```bash expandable
+{
+  "group_policies": [
+    {
+      "address": "cosmos1..",
+      "group_id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "decision_policy": {
+        "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+        "threshold": "1",
+        "windows": {
+          "voting_period": "120h",
+          "min_execution_period": "0s"
+        }
+      }
+    },
+    {
+      "address": "cosmos1..",
+      "group_id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "decision_policy": {
+        "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+        "threshold": "1",
+        "windows": {
+          "voting_period": "120h",
+          "min_execution_period": "0s"
+        }
+      }
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query for proposal by id.
+
+```bash
+/cosmos/group/v1/proposal/{proposal_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/proposal/1
+```
+
+Example Output:
+
+```bash expandable
+{
+  "proposal": {
+    "proposal_id": "1",
+    "address": "cosmos1..",
+    "metadata": "AQ==",
+    "proposers": [
+      "cosmos1.."
+    ],
+    "submitted_at": "2021-12-17T07:06:26.310638964Z",
+    "group_version": "1",
+    "group_policy_version": "1",
+    "status": "STATUS_SUBMITTED",
+    "result": "RESULT_UNFINALIZED",
+    "vote_state": {
+      "yes_count": "0",
+      "no_count": "0",
+      "abstain_count": "0",
+      "veto_count": "0"
+    },
+    "windows": {
+      "min_execution_period": "0s",
+      "voting_period": "432000s"
+    },
+    "executor_result": "EXECUTOR_RESULT_NOT_RUN",
+    "messages": [
+      {
+        "@type": "/cosmos.bank.v1beta1.MsgSend",
+        "from_address": "cosmos1..",
+        "to_address": "cosmos1..",
+        "amount": [
+          {
+            "denom": "stake",
+            "amount": "100000000"
+          }
+        ]
+      }
+    ],
+    "title": "Title",
+    "summary": "Summary"
+  }
+}
+```
+
+#### ProposalsByGroupPolicy
+
+The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags.
+
+```bash
+/cosmos/group/v1/proposals_by_group_policy/{address}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/proposals_by_group_policy/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "proposals": [
+    {
+      "id": "1",
+      "group_policy_address": "cosmos1..",
+      "metadata": "AQ==",
+      "proposers": [
+        "cosmos1.."
+      ],
+      "submit_time": "2021-12-17T08:03:27.099649352Z",
+      "group_version": "1",
+      "group_policy_version": "1",
+      "status": "STATUS_CLOSED",
+      "result": "RESULT_ACCEPTED",
+      "vote_state": {
+        "yes_count": "1",
+        "no_count": "0",
+        "abstain_count": "0",
+        "veto_count": "0"
+      },
+      "windows": {
+        "min_execution_period": "0s",
+        "voting_period": "432000s"
+      },
+      "executor_result": "EXECUTOR_RESULT_NOT_RUN",
+      "messages": [
+        {
+          "@type": "/cosmos.bank.v1beta1.MsgSend",
+          "from_address": "cosmos1..",
+          "to_address": "cosmos1..",
+          "amount": [
+            {
+              "denom": "stake",
+              "amount": "100000000"
+            }
+          ]
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+#### VoteByProposalVoter
+
+The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address.
+
+```bash
+/cosmos/group/v1/vote_by_proposal_voter/{proposal_id}/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/vote_by_proposal_voter/1/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+  "vote": {
+    "proposal_id": "1",
+    "voter": "cosmos1..",
+    "choice": "CHOICE_YES",
+    "metadata": "AQ==",
+    "submitted_at": "2021-12-17T08:05:02.490164009Z"
+  }
+}
+```
+
+#### VotesByProposal
+
+The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags.
+
+```bash
+/cosmos/group/v1/votes_by_proposal/{proposal_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/votes_by_proposal/1
+```
+
+Example Output:
+
+```bash expandable
+{
+  "votes": [
+    {
+      "proposal_id": "1",
+      "voter": "cosmos1..",
+      "option": "CHOICE_YES",
+      "metadata": "AQ==",
+      "submit_time": "2021-12-17T08:05:02.490164009Z"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+#### VotesByVoter
+
+The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags.
+
+```bash
+/cosmos/group/v1/votes_by_voter/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/votes_by_voter/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "votes": [
+    {
+      "proposal_id": "1",
+      "voter": "cosmos1..",
+      "choice": "CHOICE_YES",
+      "metadata": "AQ==",
+      "submitted_at": "2021-12-17T08:05:02.490164009Z"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+## Metadata
+
+The group module has four locations for metadata where users can provide further context about the on-chain actions they are taking. By default, each metadata field is limited to 255 characters, and metadata can be stored in JSON format, either on-chain or off-chain depending on the amount of data required. Here we provide a recommendation for the JSON structure and where the data should be stored. Two factors are important in making these recommendations. First, the group and gov modules should be consistent with one another; note that the number of proposals made by all groups may be quite large. Second, client applications such as block explorers and governance interfaces should be able to rely on a consistent metadata structure across chains.
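Client applications can enforce the on-chain length limit before broadcasting. The sketch below is illustrative only: the `maxMetadataLen` constant and the `buildVoteMetadata` helper are not part of the SDK, and the 255-character default simply mirrors the limit described above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// maxMetadataLen mirrors the module's default 255-character metadata limit.
const maxMetadataLen = 255

// voteMetadata follows the recommended on-chain vote metadata structure.
type voteMetadata struct {
	Justification string `json:"justification"`
}

// buildVoteMetadata serializes the justification and rejects metadata
// that would exceed the on-chain length limit.
func buildVoteMetadata(justification string) (string, error) {
	raw, err := json.Marshal(voteMetadata{Justification: justification})
	if err != nil {
		return "", err
	}
	if len(raw) > maxMetadataLen {
		return "", fmt.Errorf("metadata is %d bytes; limit is %d", len(raw), maxMetadataLen)
	}
	return string(raw), nil
}

func main() {
	meta, err := buildVoteMetadata("Supports the treasury spend")
	if err != nil {
		panic(err)
	}
	fmt.Println(meta)
}
```

The resulting string can then be passed as the `[metadata]` argument of `simd tx group vote`; the same check applies to the other on-chain metadata fields (vote and decision policy).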
+ +### Proposal + +Location: off-chain as json object stored on IPFS (mirrors [gov proposal](docs/sdk/v0.47/documentation/module-system/modules/gov/README#metadata)) + +```json +{ + "title": "", + "authors": [""], + "summary": "", + "details": "", + "proposal_forum_url": "", + "vote_option_context": "", +} +``` + + +The `authors` field is an array of strings, this is to allow for multiple authors to be listed in the metadata. +In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility. + + +### Vote + +Location: on-chain as json within 255 character limit (mirrors [gov vote](docs/sdk/v0.47/documentation/module-system/modules/gov/README#metadata)) + +```json +{ + "justification": "", +} +``` + +### Group + +Location: off-chain as json object stored on IPFS + +```json +{ + "name": "", + "description": "", + "group_website_url": "", + "group_forum_url": "", +} +``` + +### Decision policy + +Location: on-chain as json within 255 character limit + +```json +{ + "name": "", + "description": "", +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/mint/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/mint/README.mdx new file mode 100644 index 00000000..287bdaee --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/mint/README.mdx @@ -0,0 +1,431 @@ +--- +title: '`x/mint`' +description: >- + State Minter Params Begin-Block NextInflationRate NextAnnualProvisions + BlockProvision Parameters Events BeginBlocker Client CLI gRPC REST +--- + +## Contents + +* [State](#state) + * [Minter](#minter) + * [Params](#params) +* [Begin-Block](#begin-block) + * [NextInflationRate](#nextinflationrate) + * [NextAnnualProvisions](#nextannualprovisions) + * [BlockProvision](#blockprovision) +* [Parameters](#parameters) +* [Events](#events) + * [BeginBlocker](#beginblocker) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +### 
The Minting Mechanism

+The minting mechanism was designed to:
+
+* allow for a flexible inflation rate determined by market demand targeting a particular bonded-stake ratio
+* effect a balance between market liquidity and staked supply
+
+In order to best determine the appropriate market rate for inflation rewards, a
+moving change rate is used. The moving change rate mechanism ensures that if
+the % bonded is either over or under the goal %-bonded, the inflation rate will
+adjust to further incentivize or disincentivize being bonded, respectively. Setting the goal
+%-bonded at less than 100% encourages the network to maintain some non-staked tokens,
+which should help provide some liquidity.
+
+It can be broken down in the following way:
+
+* If the % bonded is below the goal %-bonded, the inflation rate will
+  increase until a maximum value is reached
+* If the goal % bonded (67% in Cosmos-Hub) is maintained, then the inflation
+  rate will stay constant
+* If the % bonded is above the goal %-bonded, the inflation rate will
+  decrease until a minimum value is reached
+
+## State
+
+### Minter
+
+The minter is a space for holding current inflation information.
+
+* Minter: `0x00 -> ProtocolBuffer(minter)`
+
+```protobuf
+// Minter represents the minting state.
+message Minter {
+  // current annual inflation rate
+  string inflation = 1 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // current annual expected provisions
+  string annual_provisions = 2 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+```
+
+### Params
+
+The mint module stores its params in state with the prefix of `0x01`;
+they can be updated with governance or by the address with authority.
+
+* Params: `mint/params -> legacy_amino(params)`
+
+```protobuf
+// Params defines the parameters for the x/mint module.
+message Params {
+  option (gogoproto.goproto_stringer) = false;
+  option (amino.name) = "cosmos-sdk/x/mint/Params";
+
+  // type of coin to mint
+  string mint_denom = 1;
+  // maximum annual change in inflation rate
+  string inflation_rate_change = 2 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // maximum inflation rate
+  string inflation_max = 3 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // minimum inflation rate
+  string inflation_min = 4 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // goal of percent bonded atoms
+  string goal_bonded = 5 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // expected blocks per year
+  uint64 blocks_per_year = 6;
+}
+```
+
+## Begin-Block
+
+Minting parameters are recalculated and inflation paid at the beginning of each block.
+
+### Inflation rate calculation
+
+The inflation rate is calculated using an "inflation calculation function" that's
+passed to the `NewAppModule` function. If no function is passed, then the SDK's
+default inflation function will be used (`NextInflationRate`). If custom
+inflation calculation logic is needed, it can be supplied by defining and
+passing a function that matches `InflationCalculationFn`'s signature.
+
+```go
+type InflationCalculationFn func(ctx sdk.Context, minter Minter, params Params, bondedRatio math.LegacyDec) math.LegacyDec
+```
+
+#### NextInflationRate
+
+The target annual inflation rate is recalculated each block.
+The inflation is also subject to a rate change (positive or negative)
+depending on the distance from the desired ratio (67%). The maximum rate change
+possible is defined to be 13% per year; however, the annual inflation is capped
+between 7% and 20%.
+
+```go expandable
+func NextInflationRate(params Params, bondedRatio math.LegacyDec) (inflation math.LegacyDec) {
+    inflationRateChangePerYear = (1 - bondedRatio/params.GoalBonded) * params.InflationRateChange
+    inflationRateChange = inflationRateChangePerYear / blocksPerYr
+
+    // increase the new annual inflation for this next block
+    inflation += inflationRateChange
+    if inflation > params.InflationMax {
+        inflation = params.InflationMax
+    }
+    if inflation < params.InflationMin {
+        inflation = params.InflationMin
+    }
+
+    return inflation
+}
+```
+
+### NextAnnualProvisions
+
+Calculate the annual provisions based on current total supply and inflation
+rate. This parameter is calculated once per block.
+
+```go
+func NextAnnualProvisions(params Params, totalSupply math.LegacyDec) (provisions math.LegacyDec) {
+    return Inflation * totalSupply
+}
+```
+
+### BlockProvision
+
+Calculate the provisions generated for each block based on current annual provisions. The provisions are then minted by the `mint` module's `ModuleMinterAccount` and then transferred to the `auth`'s `FeeCollector` `ModuleAccount`.
+
+```go
+BlockProvision(params Params) sdk.Coin {
+	provisionAmt = AnnualProvisions / params.BlocksPerYear
+	return sdk.NewCoin(params.MintDenom, provisionAmt.Truncate())
+}
+```
+
+## Parameters
+
+The minting module contains the following parameters:
+
+| Key                 | Type            | Example                |
+| ------------------- | --------------- | ---------------------- |
+| MintDenom           | string          | "uatom"                |
+| InflationRateChange | string (dec)    | "0.130000000000000000" |
+| InflationMax        | string (dec)    | "0.200000000000000000" |
+| InflationMin        | string (dec)    | "0.070000000000000000" |
+| GoalBonded          | string (dec)    | "0.670000000000000000" |
+| BlocksPerYear       | string (uint64) | "6311520"              |
+
+## Events
+
+The minting module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key      | Attribute Value      |
+| ---- | ------------------ | -------------------- |
+| mint | bonded\_ratio      | `{bondedRatio}`      |
+| mint | inflation          | `{inflation}`        |
+| mint | annual\_provisions | `{annualProvisions}` |
+| mint | amount             | `{amount}`           |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `mint` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `mint` state.
+
+```shell
+simd query mint --help
+```
+
+##### annual-provisions
+
+The `annual-provisions` command allows users to query the current minting annual provisions value.
+
+```shell
+simd query mint annual-provisions [flags]
+```
+
+Example:
+
+```shell
+simd query mint annual-provisions
+```
+
+Example Output:
+
+```shell
+22268504368893.612100895088410693
+```
+
+##### inflation
+
+The `inflation` command allows users to query the current minting inflation value.
+
+```shell
+simd query mint inflation [flags]
+```
+
+Example:
+
+```shell
+simd query mint inflation
+```
+
+Example Output:
+
+```shell
+0.199200302563256955
+```
+
+##### params
+
+The `params` command allows users to query the current minting parameters.
+
+```shell
+simd query mint params [flags]
+```
+
+Example Output:
+
+```yml
+blocks_per_year: "4360000"
+goal_bonded: "0.670000000000000000"
+inflation_max: "0.200000000000000000"
+inflation_min: "0.070000000000000000"
+inflation_rate_change: "0.130000000000000000"
+mint_denom: stake
+```
+
+### gRPC
+
+A user can query the `mint` module using gRPC endpoints.
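The values returned by these queries are tied together by the Begin-Block formulas above. As a rough illustration, here is a float64 sketch of `NextInflationRate`, `NextAnnualProvisions`, and `BlockProvision`; all numeric inputs (including the bonded ratio and total supply) are assumed purely for illustration, and the SDK itself uses `math.LegacyDec` rather than floats:

```go
package main

import "fmt"

// nextInflationRate mirrors the NextInflationRate pseudocode from the
// Begin-Block section, using float64 instead of math.LegacyDec.
func nextInflationRate(inflation, bondedRatio, goalBonded, rateChange, infMax, infMin, blocksPerYear float64) float64 {
	inflationRateChangePerYear := (1 - bondedRatio/goalBonded) * rateChange
	inflation += inflationRateChangePerYear / blocksPerYear
	if inflation > infMax {
		inflation = infMax
	}
	if inflation < infMin {
		inflation = infMin
	}
	return inflation
}

func main() {
	// Parameter values mirror the Parameters table above; the bonded ratio
	// (0.50) and total supply (350M) are illustrative assumptions.
	const blocksPerYear = 6311520.0
	inflation := nextInflationRate(0.13, 0.50, 0.67, 0.13, 0.20, 0.07, blocksPerYear)
	annualProvisions := inflation * 350_000_000       // NextAnnualProvisions: inflation * totalSupply
	blockProvision := annualProvisions / blocksPerYear // BlockProvision before truncation
	fmt.Printf("inflation=%.12f annual=%.0f perBlock=%.4f\n", inflation, annualProvisions, blockProvision)
}
```

Because the bonded ratio in the sketch is below the 67% goal, the rate ticks up slightly per block, and would be clamped at `InflationMax`/`InflationMin` otherwise.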
+
+#### AnnualProvisions
+
+The `AnnualProvisions` endpoint allows users to query the current minting annual provisions value.
+
+```shell
+/cosmos.mint.v1beta1.Query/AnnualProvisions
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/AnnualProvisions
+```
+
+Example Output:
+
+```json
+{
+  "annualProvisions": "1432452520532626265712995618"
+}
+```
+
+#### Inflation
+
+The `Inflation` endpoint allows users to query the current minting inflation value.
+
+```shell
+/cosmos.mint.v1beta1.Query/Inflation
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Inflation
+```
+
+Example Output:
+
+```json
+{
+  "inflation": "130197115720711261"
+}
+```
+
+#### Params
+
+The `Params` endpoint allows users to query the current minting parameters.
+
+```shell
+/cosmos.mint.v1beta1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "mintDenom": "stake",
+    "inflationRateChange": "130000000000000000",
+    "inflationMax": "200000000000000000",
+    "inflationMin": "70000000000000000",
+    "goalBonded": "670000000000000000",
+    "blocksPerYear": "6311520"
+  }
+}
+```
+
+### REST
+
+A user can query the `mint` module using REST endpoints.
+ +#### annual-provisions + +```shell +/cosmos/mint/v1beta1/annual_provisions +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/annual_provisions" +``` + +Example Output: + +```json +{ + "annualProvisions": "1432452520532626265712995618" +} +``` + +#### inflation + +```shell +/cosmos/mint/v1beta1/inflation +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/inflation" +``` + +Example Output: + +```json +{ + "inflation": "130197115720711261" +} +``` + +#### params + +```shell +/cosmos/mint/v1beta1/params +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/params" +``` + +Example Output: + +```json +{ + "params": { + "mintDenom": "stake", + "inflationRateChange": "130000000000000000", + "inflationMax": "200000000000000000", + "inflationMin": "70000000000000000", + "goalBonded": "670000000000000000", + "blocksPerYear": "6311520" + } +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/nft/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/nft/README.mdx new file mode 100644 index 00000000..d55e953b --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/nft/README.mdx @@ -0,0 +1,88 @@ +--- +title: '`x/nft`' +description: '## Abstract' +--- + +## Contents + +## Abstract + +`x/nft` is an implementation of a Cosmos SDK module, per [ADR 43](docs/sdk/next/documentation/legacy/adr-comprehensive), that allows you to create nft classification, create nft, transfer nft, update nft, and support various queries by integrating the module. It is fully compatible with the ERC721 specification. 
+
+* [Concepts](#concepts)
+  * [Class](#class)
+  * [NFT](#nft)
+* [State](#state)
+  * [Class](#class-1)
+  * [NFT](#nft-1)
+  * [NFTOfClassByOwner](#nftofclassbyowner)
+  * [Owner](#owner)
+  * [TotalSupply](#totalsupply)
+* [Messages](#messages)
+  * [MsgSend](#msgsend)
+* [Events](#events)
+
+## Concepts
+
+### Class
+
+The `x/nft` module defines a struct `Class` to describe the common characteristics of a class of nfts. Under a class you can create a variety of nfts; it is the equivalent of an ERC721 contract on Ethereum. The design is defined in [ADR 043](docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+### NFT
+
+NFT stands for non-fungible token. Because an NFT is not interchangeable, it can be used to represent unique things. The nft implemented by this module is fully compatible with the Ethereum ERC721 standard.
+
+## State
+
+### Class
+
+Class is mainly composed of `id`, `name`, `symbol`, `description`, `uri`, `uri_hash`, and `data`, where `id` is the unique identifier of the class (similar to an Ethereum ERC721 contract address); the other fields are optional.
+
+* Class: `0x01 | classID | -> ProtocolBuffer(Class)`
+
+### NFT
+
+NFT is mainly composed of `class_id`, `id`, `uri`, `uri_hash`, and `data`. Together, `class_id` and `id` form a two-tuple that uniquely identifies an nft. `uri` and `uri_hash` are optional and identify the off-chain storage location of the nft. `data` is an `Any` type; chains that use the `x/nft` module can customize it by extending this field.
+
+* NFT: `0x02 | classID | 0x00 | nftID |-> ProtocolBuffer(NFT)`
+
+### NFTOfClassByOwner
+
+NFTOfClassByOwner exists to support querying all nfts of a given class held by a given owner, and nothing more.
+
+* NFTOfClassByOwner: `0x03 | owner | 0x00 | classID | 0x00 | nftID |-> 0x01`
+
+### Owner
+
+Since there is no extra field in NFT to indicate its owner, an additional key-value pair is used to save the ownership of an nft.
With the transfer of an nft, the key-value pair is updated synchronously.
+
+* OwnerKey: `0x04 | classID | 0x00 | nftID |-> owner`
+
+### TotalSupply
+
+TotalSupply is responsible for tracking the number of all nfts under a certain class. When a mint operation is performed under a class, its supply increases by one; when a burn operation is performed, its supply decreases by one.
+
+* TotalSupply: `0x05 | classID |-> totalSupply`
+
+## Messages
+
+In this section we describe the processing of messages for the NFT module.
+
+> Note: The validation of `ClassID` and `NftID` is left to the app developer.
+> The SDK does not provide any validation for these fields.
+
+### MsgSend
+
+You can use the `MsgSend` message to transfer the ownership of an nft. This is a function provided by the `x/nft` module. Of course, you can use the `Transfer` method to implement your own transfer logic, but you need to pay extra attention to the transfer permissions.
+
+The message handling should fail if:
+
+* the provided `ClassID` does not exist.
+* the provided `Id` does not exist.
+* the provided `Sender` is not the owner of the nft.
+
+## Events
+
+The nft module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.nft.v1beta1).
diff --git a/docs/sdk/v0.47/documentation/module-system/modules/params/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/params/README.mdx
new file mode 100644
index 00000000..d258b01c
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/modules/params/README.mdx
@@ -0,0 +1,79 @@
+---
+title: '`x/params`'
+---
+
+> Note: The Params module has been deprecated in favour of each module housing its own parameters.
+
+## Abstract
+
+Package params provides a globally available parameter store.
+
+There are two main types, Keeper and Subspace. Subspace is an isolated namespace for a
+paramstore, where keys are prefixed by a preconfigured spacename. Keeper has
+permission to access all existing spaces.
+
+Subspace can be used by individual keepers, which need a private parameter store
+that the other keepers cannot modify. The params Keeper can be used to add a route to the `x/gov` router in order to modify any parameter in case a proposal passes.
+
+The following content explains how to use the params module for master and user modules.
+
+## Contents
+
+* [Keeper](#keeper)
+* [Subspace](#subspace)
+  * [Key](#key)
+  * [KeyTable](#keytable)
+  * [ParamSet](#paramset)
+
+## Keeper
+
+In the app initialization stage, [subspaces](#subspace) can be allocated for other modules' keepers using `Keeper.Subspace` and are stored in `Keeper.spaces`. Then, those modules can have a reference to their specific parameter store through `Keeper.GetSubspace`.
+
+Example:
+
+```go
+type ExampleKeeper struct {
+	paramSpace paramtypes.Subspace
+}
+
+func (k ExampleKeeper) SetParams(ctx sdk.Context, params types.Params) {
+	k.paramSpace.SetParamSet(ctx, &params)
+}
+```
+
+## Subspace
+
+`Subspace` is a prefixed subspace of the parameter store. Each module which uses the
+parameter store will take a `Subspace` to isolate permission to access.
+
+### Key
+
+Parameter keys are human readable alphanumeric strings. A parameter for the key
+`"ExampleParameter"` is stored under `[]byte("SubspaceName" + "/" + "ExampleParameter")`,
+where `"SubspaceName"` is the name of the subspace.
+
+Subkeys are secondary parameter keys that are used along with a primary parameter key.
+Subkeys can be used for grouping or dynamic parameter key generation during runtime.
+
+### KeyTable
+
+All of the parameter keys that will be used should be registered at compile
+time. `KeyTable` is essentially a `map[string]attribute`, where the `string` is a parameter key.
+
+Currently, `attribute` consists of a `reflect.Type`, which indicates the parameter
+type to check that the provided key and value are compatible and registered, as well as a function `ValueValidatorFn` to validate values.
+
+Only primary keys have to be registered on the `KeyTable`. Subkeys inherit the
+attribute of the primary key.
+
+### ParamSet
+
+Modules often define parameters as a proto message. The generated struct can implement
+the `ParamSet` interface to be used with the following methods:
+
+* `KeyTable.RegisterParamSet()`: registers all parameters in the struct
+* `Subspace.{Get, Set}ParamSet()`: gets parameters from, and sets parameters to, the struct
+
+The implementing type should be a pointer in order to use `GetParamSet()`.
diff --git a/docs/sdk/v0.47/documentation/module-system/modules/slashing/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/slashing/README.mdx
new file mode 100644
index 00000000..936895ac
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/modules/slashing/README.mdx
@@ -0,0 +1,858 @@
+---
+title: '`x/slashing`'
+description: >-
+  This section specifies the slashing module of the Cosmos SDK, which implements
+  functionality first outlined in the Cosmos Whitepaper in June 2016.
+---
+
+## Abstract
+
+This section specifies the slashing module of the Cosmos SDK, which implements functionality
+first outlined in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in June 2016.
+
+The slashing module enables Cosmos SDK-based blockchains to disincentivize any attributable action
+by a protocol-recognized actor with value at stake by penalizing them ("slashing").
+
+Penalties may include, but are not limited to:
+
+* Burning some amount of their stake
+* Removing their ability to vote on future blocks for a period of time.
+
+This module will be used by the Cosmos Hub, the first hub in the Cosmos ecosystem.
+
+## Contents
+
+* [Concepts](#concepts)
+  * [States](#states)
+  * [Tombstone Caps](#tombstone-caps)
+  * [Infraction Timelines](#infraction-timelines)
+* [State](#state)
+  * [Signing Info (Liveness)](#signing-info-liveness)
+  * [Params](#params)
+* [Messages](#messages)
+  * [Unjail](#unjail)
+* [BeginBlock](#beginblock)
+  * [Liveness Tracking](#liveness-tracking)
+* [Hooks](#hooks)
+* [Events](#events)
+* [Staking Tombstone](#staking-tombstone)
+* [Parameters](#parameters)
+* [CLI](#cli)
+  * [Query](#query)
+  * [Transactions](#transactions)
+  * [gRPC](#grpc)
+  * [REST](#rest)
+
+## Concepts
+
+### States
+
+At any given time, there are any number of validators registered in the state
+machine. Each block, the top `MaxValidators` (defined by `x/staking`) validators
+who are not jailed become *bonded*, meaning that they may propose and vote on
+blocks. Validators who are *bonded* are *at stake*, meaning that part or all of
+their stake and their delegators' stake is at risk if they commit a protocol fault.
+
+For each of these validators we keep a `ValidatorSigningInfo` record that contains
+information pertaining to the validator's liveness and other infraction related
+attributes.
+
+### Tombstone Caps
+
+In order to mitigate the impact of initially likely categories of non-malicious
+protocol faults, the Cosmos Hub implements for each validator
+a *tombstone* cap, which only allows a validator to be slashed once for a double
+sign fault. For example, if you misconfigure your HSM and double-sign a bunch of
+old blocks, you'll only be punished for the first double-sign (and then immediately tombstoned). This will still be quite expensive and desirable to avoid, but tombstone caps
+somewhat blunt the economic impact of unintentional misconfiguration.
+
+Liveness faults do not have caps, as they can't stack upon each other.
Liveness bugs are "detected" as soon as the infraction occurs, and validators are immediately put in jail, so it is not possible for them to commit multiple liveness faults without unjailing in between.
+
+### Infraction Timelines
+
+To illustrate how the `x/slashing` module handles submitted evidence through
+CometBFT consensus, consider the following examples:
+
+**Definitions**:
+
+*\[* : timeline start\
+*]* : timeline end\
+*Cn* : infraction `n` committed\
+*Dn* : infraction `n` discovered\
+*Vb* : validator bonded\
+*Vu* : validator unbonded
+
+#### Single Double Sign Infraction
+
+\[----------C1----D1,Vu-----]
+
+A single infraction is committed and then later discovered, at which point the
+validator is unbonded and slashed at the full amount for the infraction.
+
+#### Multiple Double Sign Infractions
+
+\[----------C1--C2---C3---D1,D2,D3,Vu-----]
+
+Multiple infractions are committed and then later discovered, at which point the
+validator is jailed and slashed for only one infraction. Because the validator
+is also tombstoned, they cannot rejoin the validator set.
+
+## State
+
+### Signing Info (Liveness)
+
+Every block includes a set of precommits by the validators for the previous block,
+known as the `LastCommitInfo` provided by CometBFT. A `LastCommitInfo` is valid so
+long as it contains precommits from +2/3 of total voting power.
+
+Proposers are incentivized to include precommits from all validators in the CometBFT `LastCommitInfo`
+by receiving additional fees proportional to the difference between the voting
+power included in the `LastCommitInfo` and +2/3 (see [fee distribution](docs/sdk/v0.47/documentation/module-system/modules/distribution/README#begin-block)).
+
+```go
+type LastCommitInfo struct {
+	Round int32
+	Votes []VoteInfo
+}
+```
+
+Validators are penalized for failing to be included in the `LastCommitInfo` for some
+number of blocks by being automatically jailed, potentially slashed, and unbonded.
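The +2/3 precommit requirement on `LastCommitInfo` can be sketched with a toy check; the `VoteInfo` type below is a simplified stand-in for the CometBFT structure, not the real API:

```go
package main

import "fmt"

// VoteInfo is a simplified stand-in for CometBFT's vote record: the
// validator's voting power and whether its precommit was included.
type VoteInfo struct {
	Power           int64
	SignedLastBlock bool
}

// hasQuorum reports whether the signed voting power strictly exceeds
// 2/3 of the total, the validity condition described above.
func hasQuorum(votes []VoteInfo) bool {
	var total, signed int64
	for _, v := range votes {
		total += v.Power
		if v.SignedLastBlock {
			signed += v.Power
		}
	}
	return signed*3 > total*2 // "+2/3" means strictly more than two-thirds
}

func main() {
	votes := []VoteInfo{{10, true}, {10, true}, {10, false}}
	fmt.Println(hasQuorum(votes)) // exactly 2/3 is not enough
}
```

Integer cross-multiplication (`signed*3 > total*2`) avoids floating-point comparison, so exactly two-thirds correctly fails the check.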
+ +Information about validator's liveness activity is tracked through `ValidatorSigningInfo`. +It is indexed in the store as follows: + +* ValidatorSigningInfo: `0x01 | ConsAddrLen (1 byte) | ConsAddress -> ProtocolBuffer(ValSigningInfo)` +* MissedBlocksBitArray: `0x02 | ConsAddrLen (1 byte) | ConsAddress | LittleEndianUint64(signArrayIndex) -> VarInt(didMiss)` (varint is a number encoding format) + +The first mapping allows us to easily lookup the recent signing info for a +validator based on the validator's consensus address. + +The second mapping (`MissedBlocksBitArray`) acts +as a bit-array of size `SignedBlocksWindow` that tells us if the validator missed +the block for a given index in the bit-array. The index in the bit-array is given +as little endian uint64. +The result is a `varint` that takes on `0` or `1`, where `0` indicates the +validator did not miss (did sign) the corresponding block, and `1` indicates +they missed the block (did not sign). + +Note that the `MissedBlocksBitArray` is not explicitly initialized up-front. Keys +are added as we progress through the first `SignedBlocksWindow` blocks for a newly +bonded validator. The `SignedBlocksWindow` parameter defines the size +(number of blocks) of the sliding window used to track validator liveness. + +The information stored for tracking validator liveness is as follows: + +```protobuf +// ValidatorSigningInfo defines a validator's signing info for monitoring their +// liveness activity. +message ValidatorSigningInfo { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // Height at which validator was first a candidate OR was unjailed + int64 start_height = 2; + // Index which is incremented each time the validator was a bonded + // in a block and may have signed a precommit or not. This in conjunction with the + // `SignedBlocksWindow` param determines the index in the `MissedBlocksBitArray`. 
+  int64 index_offset = 3;
+  // Timestamp until which the validator is jailed due to liveness downtime.
+  google.protobuf.Timestamp jailed_until = 4
+      [(gogoproto.stdtime) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // Whether or not a validator has been tombstoned (killed out of validator set). It is set
+  // once the validator commits an equivocation or for any other configured misbehavior.
+  bool tombstoned = 5;
+  // A counter kept to avoid unnecessary array reads.
+  // Note that `Sum(MissedBlocksBitArray)` always equals `MissedBlocksCounter`.
+  int64 missed_blocks_counter = 6;
+}
+```
+
+### Params
+
+The slashing module stores its params in state with the prefix of `0x00`;
+they can be updated via governance or by the address with authority.
+
+* Params: `0x00 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params represents the parameters used by the slashing module.
+message Params {
+  option (amino.name) = "cosmos-sdk/x/slashing/Params";
+
+  int64 signed_blocks_window = 1;
+  bytes min_signed_per_window = 2 [
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true
+  ];
+  google.protobuf.Duration downtime_jail_duration = 3
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true];
+  bytes slash_fraction_double_sign = 4 [
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true
+  ];
+  bytes slash_fraction_downtime = 5 [
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true
+  ];
+}
+```
+
+## Messages
+
+In this section we describe the processing of messages for the `slashing` module.
+
+### Unjail
+
+If a validator was automatically unbonded due to downtime and wishes to come back online and
+possibly rejoin the bonded set, it must send `MsgUnjail`:
+
+```protobuf
+// MsgUnjail is an sdk.Msg used for unjailing a jailed validator, thus returning
+// them into the bonded validator set, so they can begin receiving provisions
+// and rewards again.
+message MsgUnjail {
+  string validator_addr = 1;
+}
+```
+
+Below is pseudocode for the `MsgSrv/Unjail` RPC:
+
+```go expandable
+unjail(tx MsgUnjail)
+	validator = getValidator(tx.ValidatorAddr)
+	if validator == nil
+		fail with "No validator found"
+	if getSelfDelegation(validator) == 0
+		fail with "validator must self delegate before unjailing"
+	if !validator.Jailed
+		fail with "Validator not jailed, cannot unjail"
+
+	info = GetValidatorSigningInfo(operator)
+	if info.Tombstoned
+		fail with "Tombstoned validator cannot be unjailed"
+	if block time < info.JailedUntil
+		fail with "Validator still jailed, cannot unjail until period has expired"
+
+	validator.Jailed = false
+	setValidator(validator)
+
+	return
+```
+
+If the validator has enough stake to be in the top `n = MaximumBondedValidators`, it will be automatically rebonded,
+and all delegators still delegated to the validator will be rebonded and begin to again collect
+provisions and rewards.
+
+## BeginBlock
+
+### Liveness Tracking
+
+At the beginning of each block, we update the `ValidatorSigningInfo` for each
+validator and check if they've crossed below the liveness threshold over a
+sliding window. This sliding window is defined by `SignedBlocksWindow`, and the
+index in this window is determined by the `IndexOffset` found in the validator's
+`ValidatorSigningInfo`. For each block processed, the `IndexOffset` is incremented
+regardless of whether the validator signed. Once the index is determined, the
+`MissedBlocksBitArray` and `MissedBlocksCounter` are updated accordingly.
+
+Finally, in order to determine if a validator crosses below the liveness threshold,
+we fetch the maximum number of blocks missed, `maxMissed`, which is
+`SignedBlocksWindow - (MinSignedPerWindow * SignedBlocksWindow)`, and the minimum
+height at which we can determine liveness, `minHeight`. If the current block is
+greater than `minHeight` and the validator's `MissedBlocksCounter` is greater than
+`maxMissed`, they will be slashed by `SlashFractionDowntime`, will be jailed
+for `DowntimeJailDuration`, and have the following values reset:
+`MissedBlocksBitArray`, `MissedBlocksCounter`, and `IndexOffset`.
+
+**Note**: Liveness slashes do **NOT** lead to a tombstoning.
+
+```go expandable
+height := block.Height
+for vote in block.LastCommitInfo.Votes {
+	signInfo := GetValidatorSigningInfo(vote.Validator.Address)
+
+	// This is a relative index, so it counts blocks the validator SHOULD have
+	// signed. We use the 0-value default signing info if not present, except for
+	// start height.
+	index := signInfo.IndexOffset % SignedBlocksWindow()
+	signInfo.IndexOffset++
+
+	// Update MissedBlocksBitArray and MissedBlocksCounter. The MissedBlocksCounter
+	// just tracks the sum of MissedBlocksBitArray. That way we avoid needing to
+	// read/write the whole array each time.
+	missedPrevious := GetValidatorMissedBlockBitArray(vote.Validator.Address, index)
+	missed := !signed
+	switch {
+	case !missedPrevious && missed:
+		// array index has changed from not missed to missed, increment counter
+		SetValidatorMissedBlockBitArray(vote.Validator.Address, index, true)
+		signInfo.MissedBlocksCounter++
+	case missedPrevious && !missed:
+		// array index has changed from missed to not missed, decrement counter
+		SetValidatorMissedBlockBitArray(vote.Validator.Address, index, false)
+		signInfo.MissedBlocksCounter--
+	default:
+		// array index at this index has not changed; no need to update counter
+	}
+
+	if missed {
+		// emit events...
+	}
+
+	minHeight := signInfo.StartHeight + SignedBlocksWindow()
+	maxMissed := SignedBlocksWindow() - MinSignedPerWindow()
+
+	// If we are past the minimum height and the validator has missed too many
+	// blocks, jail and slash them.
+	if height > minHeight && signInfo.MissedBlocksCounter > maxMissed {
+		validator := ValidatorByConsAddr(vote.Validator.Address)
+
+		// emit events...
+
+		// We need to retrieve the stake distribution which signed the block, so we
+		// subtract ValidatorUpdateDelay from the block height, and subtract an
+		// additional 1 since this is the LastCommit.
+		//
+		// Note, that this CAN result in a negative "distributionHeight" up to
+		// -ValidatorUpdateDelay-1, i.e. at the end of the pre-genesis block (none) = at the beginning of the genesis block.
+		// That's fine since this is just used to filter unbonding delegations & redelegations.
+		distributionHeight := height - sdk.ValidatorUpdateDelay - 1
+
+		SlashWithInfractionReason(vote.Validator.Address, distributionHeight, vote.Validator.Power, SlashFractionDowntime(), stakingtypes.Downtime)
+		Jail(vote.Validator.Address)
+		signInfo.JailedUntil = block.Time.Add(DowntimeJailDuration())
+
+		// We need to reset the counter & array so that the validator won't be
+		// immediately slashed for downtime upon rebonding.
+		signInfo.MissedBlocksCounter = 0
+		signInfo.IndexOffset = 0
+		ClearValidatorMissedBlockBitArray(vote.Validator.Address)
+	}
+
+	SetValidatorSigningInfo(vote.Validator.Address, signInfo)
+}
+```
+
+## Hooks
+
+This section contains a description of the module's `hooks`. Hooks are operations that are executed automatically when events are raised.
+
+### Staking hooks
+
+The slashing module implements the `StakingHooks` interface defined in `x/staking`; the hooks are used for record-keeping of validator information. During app initialization, these hooks should be registered in the staking module struct.
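The fan-out pattern behind this registration can be sketched with simplified stand-in types; the interface and structs below are illustrative toys, not the SDK's actual `StakingHooks` API (in the SDK this wiring happens in `app.go` via the staking keeper's hook registration):

```go
package main

import "fmt"

// StakingHooks is a toy stand-in for the hooks interface the staking
// module calls into when validator lifecycle events occur.
type StakingHooks interface {
	AfterValidatorBonded(consAddr string)
}

// slashingHooks mimics the slashing module's record-keeping: on bonding,
// it creates signing info anchored at the current height.
type slashingHooks struct {
	signingInfo map[string]int64 // consAddr -> StartHeight
	height      int64
}

func (h *slashingHooks) AfterValidatorBonded(consAddr string) {
	h.signingInfo[consAddr] = h.height
}

// multiHooks fans one event out to every registered hook set, the same
// shape as the SDK's multi-hooks wrapper.
type multiHooks []StakingHooks

func (m multiHooks) AfterValidatorBonded(consAddr string) {
	for _, h := range m {
		h.AfterValidatorBonded(consAddr)
	}
}

func main() {
	sh := &slashingHooks{signingInfo: map[string]int64{}, height: 42}
	var hooks StakingHooks = multiHooks{sh} // "registration" step
	hooks.AfterValidatorBonded("valconsaddr")
	fmt.Println(sh.signingInfo["valconsaddr"]) // prints 42
}
```

Because staking only sees the interface, several modules (e.g. distribution and slashing) can observe the same event without staking knowing about either.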
+
+The following hooks impact the slashing state:
+
+* `AfterValidatorBonded` creates a `ValidatorSigningInfo` instance as described in the following section.
+* `AfterValidatorCreated` stores a validator's consensus key.
+* `AfterValidatorRemoved` removes a validator's consensus key.
+
+### Validator Bonded
+
+Upon successful first-time bonding of a new validator, we create a new `ValidatorSigningInfo` structure for the
+now-bonded validator, with its `StartHeight` set to the current block height.
+
+If the validator was out of the validator set and gets bonded again, its new bonded height is set.
+
+```go expandable
+onValidatorBonded(address sdk.ValAddress)
+	signingInfo, found = GetValidatorSigningInfo(address)
+	if !found {
+		signingInfo = ValidatorSigningInfo{
+			StartHeight:         CurrentHeight,
+			IndexOffset:         0,
+			JailedUntil:         time.Unix(0, 0),
+			Tombstoned:          false,
+			MissedBlocksCounter: 0,
+		}
+	} else {
+		signingInfo.StartHeight = CurrentHeight
+	}
+
+	setValidatorSigningInfo(signingInfo)
+
+	return
+```
+
+## Events
+
+The slashing module emits the following events:
+
+### MsgServer
+
+#### MsgUnjail
+
+| Type    | Attribute Key | Attribute Value      |
+| ------- | ------------- | -------------------- |
+| message | module        | slashing             |
+| message | sender        | `{validatorAddress}` |
+
+### Keeper
+
+#### BeginBlocker: HandleValidatorSignature
+
+| Type  | Attribute Key | Attribute Value               |
+| ----- | ------------- | ----------------------------- |
+| slash | address       | `{validatorConsensusAddress}` |
+| slash | power         | `{validatorPower}`            |
+| slash | reason        | `{slashReason}`               |
+| slash | jailed \[0]   | `{validatorConsensusAddress}` |
+| slash | burned coins  | `{math.Int}`                  |
+
+* \[0] Only included if the validator is jailed.
+ +| Type | Attribute Key | Attribute Value | +| -------- | -------------- | --------------------------- | +| liveness | address | `{validatorConsensusAddress}` | +| liveness | missed\_blocks | `{missedBlocksCounter}` | +| liveness | height | `{blockHeight}` | + +#### Slash + +* same as `"slash"` event from `HandleValidatorSignature`, but without the `jailed` attribute. + +#### Jail + +| Type | Attribute Key | Attribute Value | +| ----- | ------------- | ------------------ | +| slash | jailed | `{validatorAddress}` | + +## Staking Tombstone + +### Abstract + +In the current implementation of the `slashing` module, when the consensus engine +informs the state machine of a validator's consensus fault, the validator is +partially slashed, and put into a "jail period", a period of time in which they +are not allowed to rejoin the validator set. However, because of the nature of +consensus faults and ABCI, there can be a delay between an infraction occurring, +and evidence of the infraction reaching the state machine (this is one of the +primary reasons for the existence of the unbonding period). + +> Note: The tombstone concept, only applies to faults that have a delay between +> the infraction occurring and evidence reaching the state machine. For example, +> evidence of a validator double signing may take a while to reach the state machine +> due to unpredictable evidence gossip layer delays and the ability of validators to +> selectively reveal double-signatures (e.g. to infrequently-online light clients). +> Liveness slashing, on the other hand, is detected immediately as soon as the +> infraction occurs, and therefore no slashing period is needed. A validator is +> immediately put into jail period, and they cannot commit another liveness fault +> until they unjail. In the future, there may be other types of byzantine faults +> that have delays (for example, submitting evidence of an invalid proposal as a transaction). 
+> When implemented, it will have to be decided whether these future types of
+> byzantine faults will result in a tombstoning (and if not, the slash amounts
+> will not be capped by a slashing period).
+
+In the current system design, once a validator is put in jail for a consensus
+fault, after the `JailPeriod` they are allowed to send a transaction to `unjail`
+themselves, and thus rejoin the validator set.
+
+One of the "design desires" of the `slashing` module is that if multiple
+infractions occur before evidence is executed (and a validator is put in jail),
+they should only be punished for the single worst infraction, not cumulatively.
+For example, if the sequence of events is:
+
+1. Validator A commits Infraction 1 (worth 30% slash)
+2. Validator A commits Infraction 2 (worth 40% slash)
+3. Validator A commits Infraction 3 (worth 35% slash)
+4. Evidence for Infraction 1 reaches state machine (and validator is put in jail)
+5. Evidence for Infraction 2 reaches state machine
+6. Evidence for Infraction 3 reaches state machine
+
+Only Infraction 2 should have its slash take effect, as it is the worst. This
+is done so that in the case of a compromise of a validator's consensus key,
+they will only be punished once, even if the hacker double-signs many blocks.
+Because the unjailing has to be done with the validator's operator key, they
+have a chance to re-secure their consensus key, and then signal that they are
+ready using their operator key. We call this period during which we track only
+the max infraction the "slashing period".
+
+Once a validator rejoins by unjailing themselves, we begin a new slashing period;
+if they commit a new infraction after unjailing, it gets slashed cumulatively on
+top of the worst infraction from the previous slashing period.
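The "only the single worst infraction counts" bookkeeping within one slashing period can be sketched as follows; this toy uses float fractions, and the difference-based accounting is just one possible realization of the rule, not the SDK's actual implementation:

```go
package main

import "fmt"

// slashingPeriod tracks the worst slash fraction already applied within
// the current period, so later evidence never slashes cumulatively.
type slashingPeriod struct {
	maxSlashed float64
}

// apply returns the additional fraction to slash for newly executed
// evidence: zero if a worse infraction was already punished, otherwise
// only the difference up to the new worst fraction.
func (p *slashingPeriod) apply(fraction float64) float64 {
	if fraction <= p.maxSlashed {
		return 0 // already covered by a worse infraction this period
	}
	extra := fraction - p.maxSlashed
	p.maxSlashed = fraction
	return extra
}

func main() {
	var p slashingPeriod
	fmt.Printf("%.2f\n", p.apply(0.30)) // first evidence: slash 30%
	fmt.Printf("%.2f\n", p.apply(0.40)) // worse: slash only the extra 10%
	fmt.Printf("%.2f\n", p.apply(0.35)) // not worse: slash nothing
}
```

In the 30%/40%/35% example above, the total slashed converges to 40%, the single worst infraction, regardless of the order in which the evidence arrives.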
+
+However, while infractions are grouped based off of slashing periods, because
+evidence can be submitted up to an `unbondingPeriod` after the infraction, we
+still have to allow for evidence to be submitted for previous slashing periods.
+For example, if the sequence of events is:
+
+1. Validator A commits Infraction 1 (worth 30% slash)
+2. Validator A commits Infraction 2 (worth 40% slash)
+3. Evidence for Infraction 1 reaches state machine (and Validator A is put in jail)
+4. Validator A unjails
+
+We are now in a new slashing period; however, we still have to keep the door open
+for the previous infraction, as the evidence for Infraction 2 may still come in.
+As the number of slashing periods increases, it creates more complexity as we have
+to keep track of the highest infraction amount for every single slashing period.
+
+> Note: Currently, according to the `slashing` module spec, a new slashing period
+> is created every time a validator is unbonded then rebonded. This should probably
+> be changed to jailed/unjailed. See issue [#3205](https://github.com/cosmos/cosmos-sdk/issues/3205)
+> for further details. For the remainder of this, I will assume that we only start
+> a new slashing period when a validator gets unjailed.
+
+The maximum number of slashing periods is `len(UnbondingPeriod) / len(JailPeriod)`.
+The current defaults in Gaia for the `UnbondingPeriod` and `JailPeriod` are 3 weeks
+and 2 days, respectively. This means there could potentially be up to 11 slashing
+periods concurrently being tracked per validator. If we set `JailPeriod >= UnbondingPeriod`,
+we only have to track 1 slashing period (i.e., not have to track slashing periods at all).
+
+Currently, in the jail period implementation, once a validator unjails, all of
+their delegators who are delegated to them (and haven't unbonded / redelegated away)
+stay with them.
Given that consensus safety faults are so egregious +(way more so than liveness faults), it is probably prudent to have delegators not +"auto-rebond" to the validator. + +#### Proposal: infinite jail + +We propose setting the "jail time" for a +validator who commits a consensus safety fault, to `infinite` (i.e. a tombstone state). +This essentially kicks the validator out of the validator set and does not allow +them to re-enter the validator set. All of their delegators (including the operator themselves) +have to either unbond or redelegate away. The validator operator can create a new +validator if they would like, with a new operator key and consensus key, but they +have to "re-earn" their delegations back. + +Implementing the tombstone system and getting rid of the slashing period tracking +will make the `slashing` module way simpler, especially because we can remove all +of the hooks defined in the `slashing` module consumed by the `staking` module +(the `slashing` module still consumes hooks defined in `staking`). + +#### Single slashing amount + +Another optimization that can be made is that if we assume that all ABCI faults +for CometBFT consensus are slashed at the same level, we don't have to keep +track of "max slash". Once an ABCI fault happens, we don't have to worry about +comparing potential future ones to find the max. + +Currently the only CometBFT ABCI fault is: + +* Unjustified precommits (double signs) + +It is currently planned to include the following fault in the near future: + +* Signing a precommit when you're in unbonding phase (needed to make light client bisection safe) + +Given that these faults are both attributable byzantine faults, we will likely +want to slash them equally, and thus we can enact the above change. 
+ +> Note: This change may make sense for current CometBFT consensus, but maybe +> not for a different consensus algorithm or future versions of CometBFT that +> may want to punish at different levels (for example, partial slashing). + +## Parameters + +The slashing module contains the following parameters: + +| Key | Type | Example | +| ----------------------- | -------------- | ---------------------- | +| SignedBlocksWindow | string (int64) | "100" | +| MinSignedPerWindow | string (dec) | "0.500000000000000000" | +| DowntimeJailDuration | string (ns) | "600000000000" | +| SlashFractionDoubleSign | string (dec) | "0.050000000000000000" | +| SlashFractionDowntime | string (dec) | "0.010000000000000000" | + +## CLI + +A user can query and interact with the `slashing` module using the CLI. + +### Query + +The `query` commands allow users to query `slashing` state. + +```shell +simd query slashing --help +``` + +#### params + +The `params` command allows users to query genesis parameters for the slashing module. + +```shell +simd query slashing params [flags] +``` + +Example: + +```shell +simd query slashing params +``` + +Example Output: + +```yml +downtime_jail_duration: 600s +min_signed_per_window: "0.500000000000000000" +signed_blocks_window: "100" +slash_fraction_double_sign: "0.050000000000000000" +slash_fraction_downtime: "0.010000000000000000" +``` + +#### signing-info + +The `signing-info` command allows users to query signing-info of the validator using consensus public key. 
+
+```shell
+simd query slashing signing-info [flags]
+```
+
+Example:
+
+```shell
+simd query slashing signing-info '{"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys6jD5B6tPgC8="}'
+```
+
+Example Output:
+
+```yml
+address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+index_offset: "2068"
+jailed_until: "1970-01-01T00:00:00Z"
+missed_blocks_counter: "0"
+start_height: "0"
+tombstoned: false
+```
+
+#### signing-infos
+
+The `signing-infos` command allows users to query the signing info of all validators.
+
+```shell
+simd query slashing signing-infos [flags]
+```
+
+Example:
+
+```shell
+simd query slashing signing-infos
+```
+
+Example Output:
+
+```yml
+info:
+- address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+  index_offset: "2075"
+  jailed_until: "1970-01-01T00:00:00Z"
+  missed_blocks_counter: "0"
+  start_height: "0"
+  tombstoned: false
+pagination:
+  next_key: null
+  total: "0"
+```
+
+### Transactions
+
+The `tx` commands allow users to interact with the `slashing` module.
+
+```bash
+simd tx slashing --help
+```
+
+#### unjail
+
+The `unjail` command allows users to unjail a validator previously jailed for downtime.
+
+```bash
+simd tx slashing unjail --from mykey [flags]
+```
+
+Example:
+
+```bash
+simd tx slashing unjail --from mykey
+```
+
+### gRPC
+
+A user can query the `slashing` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query the parameters of the slashing module.
+ +```shell +cosmos.slashing.v1beta1.Query/Params +``` + +Example: + +```shell +grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/Params +``` + +Example Output: + +```json +{ + "params": { + "signedBlocksWindow": "100", + "minSignedPerWindow": "NTAwMDAwMDAwMDAwMDAwMDAw", + "downtimeJailDuration": "600s", + "slashFractionDoubleSign": "NTAwMDAwMDAwMDAwMDAwMDA=", + "slashFractionDowntime": "MTAwMDAwMDAwMDAwMDAwMDA=" + } +} +``` + +#### SigningInfo + +The SigningInfo queries the signing info of given cons address. + +```shell +cosmos.slashing.v1beta1.Query/SigningInfo +``` + +Example: + +```shell +grpcurl -plaintext -d '{"cons_address":"cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c"}' localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfo +``` + +Example Output: + +```json +{ + "valSigningInfo": { + "address": "cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c", + "indexOffset": "3493", + "jailedUntil": "1970-01-01T00:00:00Z" + } +} +``` + +#### SigningInfos + +The SigningInfos queries signing info of all validators. + +```shell +cosmos.slashing.v1beta1.Query/SigningInfos +``` + +Example: + +```shell +grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfos +``` + +Example Output: + +```json expandable +{ + "info": [ + { + "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c", + "indexOffset": "2467", + "jailedUntil": "1970-01-01T00:00:00Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### REST + +A user can query the `slashing` module using REST endpoints. 
+
+#### Params
+
+```shell
+/cosmos/slashing/v1beta1/params
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/params"
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "signed_blocks_window": "100",
+    "min_signed_per_window": "0.500000000000000000",
+    "downtime_jail_duration": "600s",
+    "slash_fraction_double_sign": "0.050000000000000000",
+    "slash_fraction_downtime": "0.010000000000000000"
+  }
+}
+```
+
+#### signing\_info
+
+```shell
+/cosmos/slashing/v1beta1/signing_infos/%s
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos/cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c"
+```
+
+Example Output:
+
+```json
+{
+  "val_signing_info": {
+    "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+    "start_height": "0",
+    "index_offset": "4184",
+    "jailed_until": "1970-01-01T00:00:00Z",
+    "tombstoned": false,
+    "missed_blocks_counter": "0"
+  }
+}
+```
+
+#### signing\_infos
+
+```shell
+/cosmos/slashing/v1beta1/signing_infos
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos"
+```
+
+Example Output:
+
+```json expandable
+{
+  "info": [
+    {
+      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+      "start_height": "0",
+      "index_offset": "4169",
+      "jailed_until": "1970-01-01T00:00:00Z",
+      "tombstoned": false,
+      "missed_blocks_counter": "0"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
diff --git a/docs/sdk/v0.47/documentation/module-system/modules/staking/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/staking/README.mdx
new file mode 100644
index 00000000..3f11b7b9
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/modules/staking/README.mdx
@@ -0,0 +1,3851 @@
+---
+title: '`x/staking`'
+description: >-
+  This paper specifies the Staking module of the Cosmos SDK that was first
+  described in the Cosmos Whitepaper in June 2016.
+---
+
+## Abstract
+
+This paper specifies the Staking module of the Cosmos SDK that was first
+described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper)
+in June 2016.
+
+The module enables a Cosmos SDK-based blockchain to support an advanced
+Proof-of-Stake (PoS) system. In this system, holders of the native staking token of
+the chain can become validators and can delegate tokens to validators,
+ultimately determining the effective validator set for the system.
+
+This module is used in the Cosmos Hub, the first Hub in the Cosmos
+network.
+
+## Contents
+
+* [State](#state)
+    * [Pool](#pool)
+    * [LastTotalPower](#lasttotalpower)
+    * [ValidatorUpdates](#validatorupdates)
+    * [UnbondingID](#unbondingid)
+    * [Params](#params)
+    * [Validator](#validator)
+    * [Delegation](#delegation)
+    * [UnbondingDelegation](#unbondingdelegation)
+    * [Redelegation](#redelegation)
+    * [Queues](#queues)
+    * [HistoricalInfo](#historicalinfo)
+* [State Transitions](#state-transitions)
+    * [Validators](#validators)
+    * [Delegations](#delegations)
+    * [Slashing](#slashing)
+    * [How Shares are calculated](#how-shares-are-calculated)
+* [Messages](#messages)
+    * [MsgCreateValidator](#msgcreatevalidator)
+    * [MsgEditValidator](#msgeditvalidator)
+    * [MsgDelegate](#msgdelegate)
+    * [MsgUndelegate](#msgundelegate)
+    * [MsgCancelUnbondingDelegation](#msgcancelunbondingdelegation)
+    * [MsgBeginRedelegate](#msgbeginredelegate)
+    * [MsgUpdateParams](#msgupdateparams)
+* [Begin-Block](#begin-block)
+    * [Historical Info Tracking](#historical-info-tracking)
+* [End-Block](#end-block)
+    * [Validator Set Changes](#validator-set-changes)
+    * [Queues](#queues-1)
+* [Hooks](#hooks)
+* [Events](#events)
+    * [EndBlocker](#endblocker)
+    * [Msg's](#msgs)
+* [Parameters](#parameters)
+* [Client](#client)
+    * [CLI](#cli)
+    * [gRPC](#grpc)
+    * [REST](#rest)
+
+## State
+
+### Pool
+
+Pool is used for tracking the bonded and not-bonded token supply of the bond denomination.
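To make the bookkeeping concrete, the pool can be pictured as two balances that tokens move between as delegations bond and unbond. The sketch below is illustrative Python, not the module's Go implementation, and the method names are hypothetical:

```python
class Pool:
    """Toy model of the staking pool: bonded vs. not-bonded token supply."""

    def __init__(self):
        self.bonded_tokens = 0
        self.not_bonded_tokens = 0

    def bond(self, amount):
        # A delegation to a bonded validator moves tokens into the bonded pool.
        self.bonded_tokens += amount

    def begin_unbonding(self, amount):
        # Starting an unbonding moves tokens from the bonded to the not-bonded pool.
        self.bonded_tokens -= amount
        self.not_bonded_tokens += amount

    def complete_unbonding(self, amount):
        # After the unbonding period, tokens leave the not-bonded pool
        # and return to the delegator's account.
        self.not_bonded_tokens -= amount
```

While at stake, every token is thus accounted for in exactly one of the two pools.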
+
+### LastTotalPower
+
+LastTotalPower tracks the total amount of bonded tokens recorded during the previous end block.
+Store entries prefixed with "Last" must remain unchanged until EndBlock.
+
+* LastTotalPower: `0x12 -> ProtocolBuffer(math.Int)`
+
+### ValidatorUpdates
+
+ValidatorUpdates contains the validator updates returned to ABCI at the end of every block.
+The values are overwritten in every block.
+
+* ValidatorUpdates: `0x61 -> []abci.ValidatorUpdate`
+
+### UnbondingID
+
+UnbondingID stores the ID of the latest unbonding operation. It is used to create unique IDs for unbonding operations: UnbondingID is incremented every time a new unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated.
+
+* UnbondingID: `0x37 -> uint64`
+
+### Params
+
+The staking module stores its params in state under the prefix `0x51`;
+they can be updated via governance or by the address with authority.
+
+* Params: `0x51 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params defines the parameters for the x/staking module.
+message Params {
+  option (amino.name) = "cosmos-sdk/x/staking/Params";
+  option (gogoproto.equal) = true;
+  option (gogoproto.goproto_stringer) = false;
+
+  // unbonding_time is the time duration of unbonding.
+  google.protobuf.Duration unbonding_time = 1
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true];
+  // max_validators is the maximum number of validators.
+  uint32 max_validators = 2;
+  // max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio).
+  uint32 max_entries = 3;
+  // historical_entries is the number of historical entries to persist.
+  uint32 historical_entries = 4;
+  // bond_denom defines the bondable coin denomination.
+  string bond_denom = 5;
+  // min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators
+  string min_commission_rate = 6 [
+    (gogoproto.moretags) = "yaml:\"min_commission_rate\"",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+```
+
+### Validator
+
+Validators can have one of three statuses:
+
+* `Unbonded`: The validator is not in the active set. They cannot sign blocks and do not earn
+  rewards. They can receive delegations.
+* `Bonded`: Once the validator receives sufficient bonded tokens they automatically join the
+  active set during [`EndBlock`](#validator-set-changes) and their status is updated to `Bonded`.
+  They sign blocks and receive rewards. They can receive further delegations.
+  They can be slashed for misbehavior. Delegators to this validator who unbond their delegation
+  must wait the duration of the UnbondingTime, a chain-specific param, during which time
+  they are still slashable for offences of the source validator if those offences were committed
+  during the period of time that the tokens were bonded.
+* `Unbonding`: When a validator leaves the active set, either by choice or due to slashing, jailing or
+  tombstoning, an unbonding of all their delegations begins. All delegations must then wait the UnbondingTime
+  before their tokens are moved to their accounts from the `BondedPool`.
+
+Tombstoning is permanent; once tombstoned, a validator's consensus key cannot be reused within the chain where the tombstoning happened.
+
+Validator objects should be primarily stored and accessed by the
+`OperatorAddr`, an SDK validator address for the operator of the validator. Two
+additional indices are maintained per validator object in order to fulfill
+required lookups for slashing and validator-set updates. A third special index
+(`LastValidatorPower`) is also maintained which however remains constant
+throughout each block, unlike the first two indices which mirror the validator
+records within a block.
+
+* Validators: `0x21 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(validator)`
+* ValidatorsByConsAddr: `0x22 | ConsAddrLen (1 byte) | ConsAddr -> OperatorAddr`
+* ValidatorsByPower: `0x23 | BigEndian(ConsensusPower) | OperatorAddrLen (1 byte) | OperatorAddr -> OperatorAddr`
+* LastValidatorsPower: `0x11 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(ConsensusPower)`
+* ValidatorsByUnbondingID: `0x38 | UnbondingID -> 0x21 | OperatorAddrLen (1 byte) | OperatorAddr`
+
+`Validators` is the primary index - it ensures that each operator can have only one
+associated validator, where the public key of that validator can change in the
+future. Delegators can refer to the immutable operator of the validator, without
+concern for the changing public key.
+
+`ValidatorsByUnbondingID` is an additional index that enables lookups for
+validators by the unbonding IDs corresponding to their current unbonding.
+
+`ValidatorByConsAddr` is an additional index that enables lookups for slashing.
+When CometBFT reports evidence, it provides the validator address, so this
+map is needed to find the operator. Note that the `ConsAddr` corresponds to the
+address which can be derived from the validator's `ConsPubKey`.
+
+`ValidatorsByPower` is an additional index that provides a sorted list of
+potential validators to quickly determine the current active set. Here
+ConsensusPower is validator.Tokens/10^6 by default. Note that all validators
+where `Jailed` is true are not stored within this index.
+
+`LastValidatorsPower` is a special index that provides a historical list of the
+last-block's bonded validators. This index remains constant during a block but
+is updated during the validator set update process which takes place in [`EndBlock`](#end-block).
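The key layouts above can be illustrated with a small sketch. This is hypothetical Python, not SDK code; it only shows why the big-endian power encoding matters: lexicographic ordering of the keys then matches numeric ordering of consensus power, which is what makes a power-sorted iteration over the store possible.

```python
import struct

POWER_REDUCTION = 10**6  # default: ConsensusPower = validator.Tokens / 10^6

def consensus_power(tokens: int) -> int:
    """Integer consensus power derived from staked tokens."""
    return tokens // POWER_REDUCTION

def validators_by_power_key(power: int, operator_addr: bytes) -> bytes:
    """0x23 | BigEndian(ConsensusPower) | OperatorAddrLen (1 byte) | OperatorAddr."""
    return (
        bytes([0x23])
        + struct.pack(">Q", power)     # 8-byte big-endian consensus power
        + bytes([len(operator_addr)])  # 1-byte address length
        + operator_addr
    )
```

Because the power bytes sort the same way the numbers do, iterating keys under the `0x23` prefix visits validators in power order without any extra sorting step.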
+ +Each validator's state is stored in a `Validator` struct: + +```protobuf +// Validator defines a validator, together with the total amount of the +// Validator's bond shares and their exchange rate to coins. Slashing results in +// a decrease in the exchange rate, allowing correct calculation of future +// undelegations without iterating over delegators. When coins are delegated to +// this validator, the validator is credited with a delegation whose number of +// bond shares is based on the amount of coins delegated divided by the current +// exchange rate. Voting power can be calculated as total bonded shares +// multiplied by exchange rate. +message Validator { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + + // operator_address defines the address of the validator's operator; bech encoded in JSON. + string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // consensus_pubkey is the consensus public key of the validator, as a Protobuf Any. + google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"]; + // jailed defined whether the validator has been jailed from bonded status or not. + bool jailed = 3; + // status is the validator status (bonded/unbonding/unbonded). + BondStatus status = 4; + // tokens define the delegated tokens (incl. self-delegation). + string tokens = 5 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // delegator_shares defines total shares issued to a validator's delegators. + string delegator_shares = 6 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // description defines the description terms for the validator. 
+  Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // unbonding_height defines, if unbonding, the height at which this validator has begun unbonding.
+  int64 unbonding_height = 8;
+  // unbonding_time defines, if unbonding, the min time for the validator to complete unbonding.
+  google.protobuf.Timestamp unbonding_time = 9
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+  // commission defines the commission parameters.
+  Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // min_self_delegation is the validator's self-declared minimum self delegation.
+  //
+  // Since: cosmos-sdk 0.46
+  string min_self_delegation = 11 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+
+  // strictly positive if this validator's unbonding has been stopped by external modules
+  int64 unbonding_on_hold_ref_count = 12;
+
+  // list of unbonding ids, each uniquely identifying an unbonding of this validator
+  repeated uint64 unbonding_ids = 13;
+}
+```
+
+```protobuf
+// CommissionRates defines the initial commission rates to be used for creating
+// a validator.
+message CommissionRates {
+  option (gogoproto.equal) = true;
+  option (gogoproto.goproto_stringer) = false;
+
+  // rate is the commission rate charged to delegators, as a fraction.
+  string rate = 1 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // max_rate defines the maximum commission rate which validator can ever charge, as a fraction.
+  string max_rate = 2 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // max_change_rate defines the maximum daily increase of the validator commission, as a fraction.
+  string max_change_rate = 3 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+
+// Commission defines commission parameters for a given validator.
+message Commission {
+  option (gogoproto.equal) = true;
+  option (gogoproto.goproto_stringer) = false;
+
+  // commission_rates defines the initial commission rates to be used for creating a validator.
+  CommissionRates commission_rates = 1
+      [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // update_time is the last time the commission rate was changed.
+  google.protobuf.Timestamp update_time = 2
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+
+// Description defines a validator description.
+message Description {
+  option (gogoproto.equal) = true;
+  option (gogoproto.goproto_stringer) = false;
+
+  // moniker defines a human-readable name for the validator.
+  string moniker = 1;
+  // identity defines an optional identity signature (ex. UPort or Keybase).
+  string identity = 2;
+  // website defines an optional website link.
+  string website = 3;
+  // security_contact defines an optional email for security contact.
+  string security_contact = 4;
+  // details define other optional details.
+  string details = 5;
+}
+```
+
+### Delegation
+
+Delegations are identified by combining `DelegatorAddr` (the address of the delegator)
+with the `ValidatorAddr`. Delegations are indexed in the store as follows:
+
+* Delegation: `0x31 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(delegation)`
+
+Stakeholders may delegate coins to validators; under this circumstance their
+funds are held in a `Delegation` data structure. It is owned by one
+delegator, and is associated with the shares for one validator. The sender of
+the transaction is the owner of the bond.
+
+```protobuf
+// Delegation represents the bond with tokens held by an account. It is
+// owned by one delegator, and is associated with the voting power of one
+// validator.
+message Delegation {
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.goproto_stringer) = false;
+
+  // delegator_address is the bech32-encoded address of the delegator.
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // validator_address is the bech32-encoded address of the validator.
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // shares define the delegation shares received.
+  string shares = 3 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+```
+
+#### Delegator Shares
+
+When a delegator delegates tokens to a validator, they are issued a number of delegator shares based on a
+dynamic exchange rate, calculated as follows from the total number of tokens delegated to the
+validator and the number of shares issued so far:
+
+`Shares per Token = validator.TotalShares() / validator.Tokens()`
+
+Only the number of shares received is stored on the DelegationEntry. When a delegator then
+undelegates, the token amount they receive is calculated from the number of shares they currently
+hold and the inverse exchange rate:
+
+`Tokens per Share = validator.Tokens() / validator.TotalShares()`
+
+These `Shares` are simply an accounting mechanism. They are not a fungible asset. The reason for
+this mechanism is to simplify the accounting around slashing. Rather than iteratively slashing the
+tokens of every delegation entry, instead the validator's total bonded tokens can be slashed,
+effectively reducing the value of each issued delegator share.
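The exchange-rate mechanics above can be sketched in a few lines. This is an illustrative Python model, not the SDK's Go implementation (which uses fixed-point `Dec` arithmetic rather than floats):

```python
class Validator:
    """Toy share accounting for one validator (floats for brevity)."""

    def __init__(self):
        self.tokens = 0.0
        self.total_shares = 0.0

    def delegate(self, amount: float) -> float:
        # First delegation mints 1 share per token; later delegations use the
        # current rate: Shares per Token = TotalShares / Tokens.
        if self.total_shares == 0:
            shares = float(amount)
        else:
            shares = amount * self.total_shares / self.tokens
        self.tokens += amount
        self.total_shares += shares
        return shares

    def undelegate(self, shares: float) -> float:
        # Tokens are returned at the inverse rate: Tokens per Share = Tokens / TotalShares.
        amount = shares * self.tokens / self.total_shares
        self.tokens -= amount
        self.total_shares -= shares
        return amount

    def slash(self, fraction: float) -> None:
        # Slashing burns tokens without touching shares, so every share loses value.
        self.tokens *= 1.0 - fraction
```

Note how a slash changes only `tokens`: every outstanding share is instantly worth fewer tokens, and delegations made after the slash receive proportionally more shares per token.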
+
+### UnbondingDelegation
+
+Shares in a `Delegation` can be unbonded, but they must for some time exist as
+an `UnbondingDelegation`, where shares can be reduced if Byzantine behavior is
+detected.
+
+`UnbondingDelegation` entries are indexed in the store as:
+
+* UnbondingDelegation: `0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(unbondingDelegation)`
+* UnbondingDelegationsFromValidator: `0x33 | ValidatorAddrLen (1 byte) | ValidatorAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* UnbondingDelegationByUnbondingId: `0x38 | UnbondingId -> 0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr`
+
+`UnbondingDelegation` is used in queries, to look up all unbonding delegations for
+a given delegator.
+
+`UnbondingDelegationsFromValidator` is used in slashing, to look up all
+unbonding delegations associated with a given validator that need to be
+slashed.
+
+`UnbondingDelegationByUnbondingId` is an additional index that enables
+lookups for unbonding delegations by the unbonding IDs of the containing
+unbonding delegation entries.
+
+An `UnbondingDelegation` object is created every time an unbonding is initiated.
+
+```protobuf
+// UnbondingDelegation stores all of a single delegator's unbonding bonds
+// for a single validator in a time-ordered list.
+message UnbondingDelegation {
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.goproto_stringer) = false;
+
+  // delegator_address is the bech32-encoded address of the delegator.
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // validator_address is the bech32-encoded address of the validator.
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // entries are the unbonding delegation entries.
+ repeated UnbondingDelegationEntry entries = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // unbonding delegation entries +} + +// UnbondingDelegationEntry defines an unbonding object with relevant metadata. +message UnbondingDelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // creation_height is the height which the unbonding took place. + int64 creation_height = 1; + // completion_time is the unix time for unbonding completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // initial_balance defines the tokens initially scheduled to receive at completion. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // balance defines the tokens to receive at completion. + string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + // Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} +``` + +### Redelegation + +The bonded tokens worth of a `Delegation` may be instantly redelegated from a +source validator to a different validator (destination validator). However when +this occurs they must be tracked in a `Redelegation` object, whereby their +shares can be slashed if their tokens have contributed to a Byzantine fault +committed by the source validator. 
+
+`Redelegation` objects are indexed in the store as:
+
+* Redelegations: `0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr -> ProtocolBuffer(redelegation)`
+* RedelegationsBySrc: `0x35 | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationsByDst: `0x36 | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationByUnbondingId: `0x38 | UnbondingId -> 0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr`
+
+`Redelegations` is used for queries, to look up all redelegations for a given
+delegator.
+
+`RedelegationsBySrc` is used for slashing based on the `ValidatorSrcAddr`.
+
+`RedelegationsByDst` is used for slashing based on the `ValidatorDstAddr`.
+
+`RedelegationByUnbondingId` is an additional index that enables
+lookups for redelegations by the unbonding IDs of the containing
+redelegation entries.
+
+A redelegation object is created every time a redelegation occurs. To prevent
+"redelegation hopping", redelegations may not occur when:
+
+* the (re)delegator already has another immature redelegation in progress
+  with a destination to a validator (let's call it `Validator X`)
+* and, the (re)delegator is attempting to create a *new* redelegation
+  where the source validator for this new redelegation is `Validator X`.
+
+```protobuf
+// RedelegationEntry defines a redelegation object with relevant metadata.
+message RedelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // creation_height defines the height which the redelegation took place. + int64 creation_height = 1; + // completion_time defines the unix time for redelegation completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // initial_balance defines the initial balance when redelegation started. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // shares_dst is the amount of destination-validator shares created by redelegation. + string shares_dst = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + // Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +// Redelegation contains the list of a particular delegator's redelegating bonds +// from a particular source validator to a particular destination validator. +message Redelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + // delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_src_address is the validator redelegation source operator address. + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_dst_address is the validator redelegation destination operator address. 
+  string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // entries are the redelegation entries.
+  repeated RedelegationEntry entries = 4
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // redelegation entries
+}
+```
+
+### Queues
+
+All queue objects are sorted by timestamp. The time used within any queue is
+first rounded to the nearest nanosecond then sorted. The sortable time format
+used is a slight modification of RFC3339Nano and uses the format string
+`"2006-01-02T15:04:05.000000000"`. Notably, this format:
+
+* right-pads all zeros
+* drops the time zone info (uses UTC)
+
+In all cases, the stored timestamp represents the maturation time of the queue
+element.
+
+#### UnbondingDelegationQueue
+
+For the purpose of tracking progress of unbonding delegations, the unbonding
+delegations queue is kept.
+
+* UnbondingDelegation: `0x41 | format(time) -> []DVPair`
+
+```protobuf
+// DVPair is a struct that just has a delegator-validator pair with no other data.
+// It is intended to be used as a marshalable pointer. For example, a DVPair can
+// be used to construct the key to getting an UnbondingDelegation from state.
+message DVPair {
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.goproto_stringer) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+#### RedelegationQueue
+
+For the purpose of tracking progress of redelegations, the redelegation queue is
+kept.
+
+* RedelegationQueue: `0x42 | format(time) -> []DVVTriplet`
+
+```protobuf
+// DVVTriplet is a struct that just has a delegator-validator-validator triplet
+// with no other data. It is intended to be used as a marshalable pointer. For
+// example, a DVVTriplet can be used to construct the key to getting a
+// Redelegation from state.
+message DVVTriplet {
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.goproto_stringer) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+#### ValidatorQueue
+
+To track the progress of unbonding validators, the validator queue is kept.
+
+* ValidatorQueueTime: `0x43 | format(time) -> []sdk.ValAddress`
+
+The stored object at each key is an array of validator operator addresses from
+which the validator object can be accessed. Typically only a single validator
+record is expected at a given timestamp, but it is possible for multiple
+validators to exist in the queue at the same location.
+
+### HistoricalInfo
+
+HistoricalInfo objects are stored and pruned at each block such that the staking keeper persists
+the `n` most recent historical info, where `n` is defined by the staking module parameter
+`HistoricalEntries`.
+
+```go expandable
+syntax = "proto3";
+package cosmos.staking.v1beta1;
+
+import "gogoproto/gogo.proto";
+import "google/protobuf/any.proto";
+import "google/protobuf/duration.proto";
+import "google/protobuf/timestamp.proto";
+
+import "cosmos_proto/cosmos.proto";
+import "cosmos/base/v1beta1/coin.proto";
+import "amino/amino.proto";
+import "tendermint/types/types.proto";
+import "tendermint/abci/types.proto";
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/staking/types";
+
+/ HistoricalInfo contains header and validator information for a given block.
+/ It is stored as part of staking module's state, which persists the `n` most
+/ recent HistoricalInfo
+/ (`n` is set by the staking module's `historical_entries` parameter).
+message HistoricalInfo { + tendermint.types.Header header = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated Validator valset = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ CommissionRates defines the initial commission rates to be used for creating +/ a validator. +message CommissionRates { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / rate is the commission rate charged to delegators, as a fraction. + string rate = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / max_rate defines the maximum commission rate which validator can ever charge, as a fraction. + string max_rate = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / max_change_rate defines the maximum daily increase of the validator commission, as a fraction. + string max_change_rate = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ Commission defines commission parameters for a given validator. +message Commission { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / commission_rates defines the initial commission rates to be used for creating a validator. + CommissionRates commission_rates = 1 + [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / update_time is the last time the commission rate was changed. + google.protobuf.Timestamp update_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; +} + +/ Description defines a validator description. 
+message Description { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / moniker defines a human-readable name for the validator. + string moniker = 1; + / identity defines an optional identity signature (ex. UPort or Keybase). + string identity = 2; + / website defines an optional website link. + string website = 3; + / security_contact defines an optional email for security contact. + string security_contact = 4; + / details define other optional details. + string details = 5; +} + +/ Validator defines a validator, together with the total amount of the +/ Validator's bond shares and their exchange rate to coins. Slashing results in +/ a decrease in the exchange rate, allowing correct calculation of future +/ undelegations without iterating over delegators. When coins are delegated to +/ this validator, the validator is credited with a delegation whose number of +/ bond shares is based on the amount of coins delegated divided by the current +/ exchange rate. Voting power can be calculated as total bonded shares +/ multiplied by exchange rate. +message Validator { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + + / operator_address defines the address of the validator's operator; bech encoded in JSON. + string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / consensus_pubkey is the consensus public key of the validator, as a Protobuf Any. + google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"]; + / jailed defined whether the validator has been jailed from bonded status or not. + bool jailed = 3; + / status is the validator status (bonded/unbonding/unbonded). + BondStatus status = 4; + / tokens define the delegated tokens (incl. self-delegation). 
+ string tokens = 5 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / delegator_shares defines total shares issued to a validator's delegators. + string delegator_shares = 6 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / description defines the description terms for the validator. + Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / unbonding_height defines, if unbonding, the height at which this validator has begun unbonding. + int64 unbonding_height = 8; + / unbonding_time defines, if unbonding, the min time for the validator to complete unbonding. + google.protobuf.Timestamp unbonding_time = 9 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / commission defines the commission parameters. + Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / min_self_delegation is the validator's self declared minimum self delegation. + / + / Since: cosmos-sdk 0.46 + string min_self_delegation = 11 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + + / strictly positive if this validator's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 12; + + / list of unbonding ids, each uniquely identifing an unbonding of this validator + repeated uint64 unbonding_ids = 13; +} + +/ BondStatus is the status of a validator. +enum BondStatus { + option (gogoproto.goproto_enum_prefix) = false; + + / UNSPECIFIED defines an invalid validator status. + BOND_STATUS_UNSPECIFIED = 0 [(gogoproto.enumvalue_customname) = "Unspecified"]; + / UNBONDED defines a validator that is not bonded. 
+ BOND_STATUS_UNBONDED = 1 [(gogoproto.enumvalue_customname) = "Unbonded"]; + / UNBONDING defines a validator that is unbonding. + BOND_STATUS_UNBONDING = 2 [(gogoproto.enumvalue_customname) = "Unbonding"]; + / BONDED defines a validator that is bonded. + BOND_STATUS_BONDED = 3 [(gogoproto.enumvalue_customname) = "Bonded"]; +} + +/ ValAddresses defines a repeated set of validator addresses. +message ValAddresses { + option (gogoproto.goproto_stringer) = false; + option (gogoproto.stringer) = true; + + repeated string addresses = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVPair is struct that just has a delegator-validator pair with no other data. +/ It is intended to be used as a marshalable pointer. For example, a DVPair can +/ be used to construct the key to getting an UnbondingDelegation from state. +message DVPair { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVPairs defines an array of DVPair objects. +message DVPairs { + repeated DVPair pairs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ DVVTriplet is struct that just has a delegator-validator-validator triplet +/ with no other data. It is intended to be used as a marshalable pointer. For +/ example, a DVVTriplet can be used to construct the key to getting a +/ Redelegation from state. 
+message DVVTriplet { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVVTriplets defines an array of DVVTriplet objects. +message DVVTriplets { + repeated DVVTriplet triplets = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ Delegation represents the bond with tokens held by an account. It is +/ owned by one delegator, and is associated with the voting power of one +/ validator. +message Delegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_address is the bech32-encoded address of the validator. + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / shares define the delegation shares received. + string shares = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ UnbondingDelegation stores all of a single delegator's unbonding bonds +/ for a single validator in an time-ordered list. +message UnbondingDelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_address is the bech32-encoded address of the validator. 
+ string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / entries are the unbonding delegation entries. + repeated UnbondingDelegationEntry entries = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; / unbonding delegation entries +} + +/ UnbondingDelegationEntry defines an unbonding object with relevant metadata. +message UnbondingDelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / creation_height is the height which the unbonding took place. + int64 creation_height = 1; + / completion_time is the unix time for unbonding completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / initial_balance defines the tokens initially scheduled to receive at completion. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / balance defines the tokens to receive at completion. + string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + / Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +/ RedelegationEntry defines a redelegation object with relevant metadata. +message RedelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / creation_height defines the height which the redelegation took place. + int64 creation_height = 1; + / completion_time defines the unix time for redelegation completion. 
+ google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / initial_balance defines the initial balance when redelegation started. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / shares_dst is the amount of destination-validator shares created by redelegation. + string shares_dst = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + / Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +/ Redelegation contains the list of a particular delegator's redelegating bonds +/ from a particular source validator to a particular destination validator. +message Redelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_src_address is the validator redelegation source operator address. + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_dst_address is the validator redelegation destination operator address. + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / entries are the redelegation entries. + repeated RedelegationEntry entries = 4 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; / redelegation entries +} + +/ Params defines the parameters for the x/staking module. 
+message Params { + option (amino.name) = "cosmos-sdk/x/staking/Params"; + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / unbonding_time is the time duration of unbonding. + google.protobuf.Duration unbonding_time = 1 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true]; + / max_validators is the maximum number of validators. + uint32 max_validators = 2; + / max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio). + uint32 max_entries = 3; + / historical_entries is the number of historical entries to persist. + uint32 historical_entries = 4; + / bond_denom defines the bondable coin denomination. + string bond_denom = 5; + / min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators + string min_commission_rate = 6 [ + (gogoproto.moretags) = "yaml:\"min_commission_rate\"", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ DelegationResponse is equivalent to Delegation except that it contains a +/ balance in addition to shares which is more suitable for client responses. +message DelegationResponse { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + + Delegation delegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + cosmos.base.v1beta1.Coin balance = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ RedelegationEntryResponse is equivalent to a RedelegationEntry except that it +/ contains a balance in addition to shares which is more suitable for client +/ responses. 
+message RedelegationEntryResponse { + option (gogoproto.equal) = true; + + RedelegationEntry redelegation_entry = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; +} + +/ RedelegationResponse is equivalent to a Redelegation except that its entries +/ contain a balance in addition to shares which is more suitable for client +/ responses. +message RedelegationResponse { + option (gogoproto.equal) = false; + + Redelegation redelegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated RedelegationEntryResponse entries = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ Pool is used for tracking bonded and not-bonded token supply of the bond +/ denomination. +message Pool { + option (gogoproto.description) = true; + option (gogoproto.equal) = true; + string not_bonded_tokens = 1 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "not_bonded_tokens", + (amino.dont_omitempty) = true + ]; + string bonded_tokens = 2 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "bonded_tokens", + (amino.dont_omitempty) = true + ]; +} + +/ Infraction indicates the infraction a validator commited. +enum Infraction { + / UNSPECIFIED defines an empty infraction. + INFRACTION_UNSPECIFIED = 0; + / DOUBLE_SIGN defines a validator that double-signs a block. + INFRACTION_DOUBLE_SIGN = 1; + / DOWNTIME defines a validator that missed signing too many blocks. + INFRACTION_DOWNTIME = 2; +} + +/ ValidatorUpdates defines an array of abci.ValidatorUpdate objects. 
+/ TODO: explore moving this to proto/cosmos/base to separate modules from tendermint dependence
+message ValidatorUpdates {
+  repeated tendermint.abci.ValidatorUpdate updates = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+At each BeginBlock, the staking keeper persists the current Header and the Validators that committed
+the current block in a `HistoricalInfo` object. The Validators are sorted by their address to ensure that
+they are in a deterministic order.
+The oldest entries are pruned so that only the parameter-defined number of
+historical entries remains.
+
+## State Transitions
+
+### Validators
+
+State transitions in validators are performed on every [`EndBlock`](#validator-set-changes)
+in order to check for changes in the active `ValidatorSet`.
+
+A validator can be `Unbonded`, `Unbonding` or `Bonded`. `Unbonded`
+and `Unbonding` are collectively called `Not Bonded`. A validator can move
+directly between all the states, except from `Bonded` to `Unbonded`.
+
+#### Not bonded to Bonded
+
+The following transition occurs when a validator's ranking in the `ValidatorPowerIndex` surpasses
+that of the `LastValidator`.
+
+* set `validator.Status` to `Bonded`
+* send the `validator.Tokens` from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* if it exists, delete any `ValidatorQueue` record for this validator
+
+#### Bonded to Unbonding
+
+When a validator begins the unbonding process the following operations occur:
+
+* send the `validator.Tokens` from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* set `validator.Status` to `Unbonding`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* insert a new record into the `ValidatorQueue` for this validator
+
+#### Unbonding to Unbonded
+
+A validator moves from `Unbonding` to `Unbonded` when its entry in the
+`ValidatorQueue` matures.
+
+* update the `Validator` object for this validator
+* set `validator.Status` to `Unbonded`
+
+#### Jail/Unjail
+
+When a validator is jailed it is effectively removed from the CometBFT validator set.
+This process may also be reversed.
In both cases, the following operations occur:
+
+* set `Validator.Jailed` and update the object
+* if jailed, delete the record from `ValidatorByPowerIndex`
+* if unjailed, add the record to `ValidatorByPowerIndex`
+
+Jailed validators are not present in any of the following stores:
+
+* the power store (from consensus power to address)
+
+### Delegations
+
+#### Delegate
+
+When a delegation occurs both the validator and the delegation objects are affected:
+
+* determine the delegator's shares based on the tokens delegated and the validator's exchange rate
+* remove the tokens from the sending account
+* add the shares to the delegation object, or add them to a newly created delegation object if one does not already exist
+* add the new delegator shares and update the `Validator` object
+* transfer the `delegation.Amount` from the delegator's account to the `BondedPool` or the `NotBondedPool` `ModuleAccount` depending on whether the `validator.Status` is `Bonded` or not
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+
+#### Begin Unbonding
+
+As part of the Undelegate and Complete Unbonding state transitions, Unbond
+Delegation may be called.
+
+* subtract the unbonded shares from the delegator
+* add the unbonded tokens to an `UnbondingDelegationEntry`
+* update the delegation, or remove the delegation if there are no more shares
+* if the delegation belongs to the operator of the validator and no more shares exist, then jail the validator
+* update the validator with the removed delegator shares and associated coins
+* if the validator state is `Bonded`, transfer the `Coins` worth of the unbonded
+  shares from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* remove the validator if it is unbonded and there are no more delegation shares
+* get a unique `unbondingId` and map it to the `UnbondingDelegationEntry` in `UnbondingDelegationByUnbondingId`
+* call the `AfterUnbondingInitiated(unbondingId)` hook
+* add the unbonding delegation to the `UnbondingDelegationQueue` with the completion time set to `UnbondingTime`
+
+#### Cancel an `UnbondingDelegation` Entry
+
+When a cancel unbonding delegation occurs, the `validator`, the `delegation` and the `UnbondingDelegationQueue` state are all updated.
+
+* if the cancel unbonding delegation amount equals the `UnbondingDelegation` entry `balance`, then the `UnbondingDelegation` entry is deleted from the `UnbondingDelegationQueue`
+* if the cancel unbonding delegation amount is less than the `UnbondingDelegation` entry `balance`, then the `UnbondingDelegation` entry is updated with the new balance in the `UnbondingDelegationQueue`
+* the cancel `amount` is [Delegated](#delegations) back to the original `validator`
+
+#### Complete Unbonding
+
+For undelegations which do not complete immediately, the following operations
+occur when the unbonding delegation queue element matures:
+
+* remove the entry from the `UnbondingDelegation` object
+* transfer the tokens from the `NotBondedPool` `ModuleAccount` to the delegator `Account`
+
+#### Begin Redelegation
+
+Redelegations affect the delegation, source and destination validators.
+
+* perform an `unbond` delegation from the source validator to retrieve the tokens worth of the unbonded shares
+* using the unbonded tokens, `Delegate` them to the destination validator
+* if the `sourceValidator.Status` is `Bonded`, and the `destinationValidator` is not,
+  transfer the newly delegated tokens from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* otherwise, if the `sourceValidator.Status` is not `Bonded`, and the `destinationValidator`
+  is `Bonded`, transfer the newly delegated tokens from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
+* record the token amount in a new entry in the relevant `Redelegation`
+
+From when a redelegation begins until it completes, the delegator is in a state of "pseudo-unbonding", and can still be
+slashed for infractions that occurred before the redelegation began.
+
+#### Complete Redelegation
+
+When a redelegation completes, the following occurs:
+
+* remove the entry from the `Redelegation` object
+
+### Slashing
+
+#### Slash Validator
+
+When a Validator is slashed, the following occurs:
+
+* The total `slashAmount` is calculated as the `slashFactor` (a chain parameter) \* `TokensFromConsensusPower`,
+  the total number of tokens bonded to the validator at the time of the infraction.
+* Every unbonding delegation and pseudo-unbonding redelegation from the validator for which the
+  infraction occurred before the unbonding or redelegation began is slashed by the `slashFactor`
+  percentage of the `initialBalance`.
+* Each amount slashed from redelegations and unbonding delegations is subtracted from the
+  total slash amount.
+* The `remainingSlashAmount` is then slashed from the validator's tokens in the `BondedPool` or
+  `NotBondedPool` depending on the validator's status. This reduces the total supply of tokens.
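+The slash arithmetic above can be sketched in Go. This is a minimal illustration with plain
+integers, not the actual `x/staking` implementation (which uses `sdk.Dec`/`sdk.Int` math and
+per-entry accounting); the function and parameter names here are hypothetical.
+
+```go
+package main
+
+import "fmt"
+
+// remainingSlash sketches the calculation described above: the total
+// slash amount is slashFactor * the tokens bonded at the infraction
+// height (consensus power * power reduction, i.e. TokensFromConsensusPower);
+// the amounts already slashed from unbonding delegations and redelegations
+// are subtracted, and the remainder is slashed from the bonded or
+// not-bonded pool. slashFactor is expressed as num/den (e.g. 5/100 for 5%).
+func remainingSlash(power, powerReduction, num, den, unbondingSlashed, redelegationSlashed int64) int64 {
+	tokens := power * powerReduction // TokensFromConsensusPower
+	total := tokens * num / den      // total slashAmount
+	remaining := total - unbondingSlashed - redelegationSlashed
+	if remaining < 0 {
+		remaining = 0 // the pool is never slashed by more than the computed total
+	}
+	return remaining
+}
+
+func main() {
+	// 10 consensus power at 1,000,000 tokens per unit of power and a 5%
+	// slash factor gives a total slashAmount of 500,000 tokens; 200,000
+	// of that was already taken from unbonding/redelegation entries.
+	fmt.Println(remainingSlash(10, 1_000_000, 5, 100, 120_000, 80_000)) // prints 300000
+}
+```
+
+The clamp to zero reflects that the amount slashed from the pool can never be negative, mirroring the caps applied to individual unbonding and redelegation entries.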
+
+In the case of a slash due to any infraction that requires evidence to be submitted (for example a double-sign), the slash
+occurs at the block where the evidence is included, not at the block where the infraction occurred.
+Put otherwise, validators are not slashed retroactively, only when they are caught.
+
+#### Slash Unbonding Delegation
+
+When a validator is slashed, so are those unbonding delegations from the validator that began unbonding
+after the time of the infraction. Every entry in every such unbonding delegation
+is slashed by `slashFactor`. The amount slashed is calculated from the `InitialBalance` of the
+delegation and is capped to prevent a resulting negative balance. Completed (or mature) unbondings are not slashed.
+
+#### Slash Redelegation
+
+When a validator is slashed, so are all redelegations from the validator that began after the
+infraction. Redelegations are slashed by `slashFactor`.
+Redelegations that began before the infraction are not slashed.
+The amount slashed is calculated from the `InitialBalance` of the delegation and is capped to
+prevent a resulting negative balance.
+Mature redelegations (that have completed pseudo-unbonding) are not slashed.
+
+### How Shares are calculated
+
+At any given point in time, each validator has a number of tokens, `T`, and has a number of shares issued, `S`.
+Each delegator, `i`, holds a number of shares, `S_i`.
+The number of tokens is the sum of all tokens delegated to the validator, plus the rewards, minus the slashes.
+
+The delegator is entitled to a portion of the underlying tokens proportional to their proportion of shares.
+So delegator `i` is entitled to `T * S_i / S` of the validator's tokens.
+
+When a delegator delegates new tokens to the validator, they receive a number of shares proportional to their contribution.
+So when delegator `j` delegates `T_j` tokens, they receive `S_j = S * T_j / T` shares.
+The total number of tokens is now `T + T_j`, and the total number of shares is `S + S_j`.
+`j`'s proportion of the shares is the same as their proportion of the total tokens contributed: `(S + S_j) / S = (T + T_j) / T`.
+
+A special case is the initial delegation, when `T = 0` and `S = 0`, so `T_j / T` is undefined.
+For the initial delegation, delegator `j` who delegates `T_j` tokens receives `S_j = T_j` shares.
+So a validator that hasn't received any rewards and has not been slashed will have `T = S`.
+
+## Messages
+
+In this section we describe the processing of the staking messages and the corresponding updates to the state. All created/modified state objects specified by each message are defined within the [state](#state) section.
+
+### MsgCreateValidator
+
+A validator is created using the `MsgCreateValidator` message.
+The validator must be created with an initial delegation from the operator.
+
+```protobuf
+  // CreateValidator defines a method for creating a new validator.
+  rpc CreateValidator(MsgCreateValidator) returns (MsgCreateValidatorResponse);
+```
+
+```protobuf
+// MsgCreateValidator defines a SDK message for creating a new validator.
+message MsgCreateValidator {
+  // NOTE(fdymylja): this is a particular case in which
+  // if validator_address == delegator_address then only one
+  // is expected to sign, otherwise both are.
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (cosmos.msg.v1.signer) = "validator_address";
+  option (amino.name) = "cosmos-sdk/MsgCreateValidator";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  Description description = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  CommissionRates commission = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  string min_self_delegation = 3 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+  string delegator_address = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 5 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  google.protobuf.Any pubkey = 6 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"];
+  cosmos.base.v1beta1.Coin value = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message is expected to fail if:
+
+* another validator with this operator address is already registered
+* another validator with this pubkey is already registered
+* the initial self-delegation tokens are of a denom not specified as the bonding denom
+* the commission parameters are faulty, namely:
+  * `MaxRate` is either > 1 or < 0
+  * the initial `Rate` is either negative or > `MaxRate`
+  * the initial `MaxChangeRate` is either negative or > `MaxRate`
+* the description fields are too large
+
+This message creates and stores the `Validator` object at appropriate indexes.
+Additionally, a self-delegation `Delegation` is made with the initial delegation
+tokens. The validator always starts as unbonded but may be bonded
+in the first end-block.
+
+### MsgEditValidator
+
+The `Description` and `CommissionRate` of a validator can be updated using the
+`MsgEditValidator` message.
+
+```protobuf
+  // EditValidator defines a method for editing an existing validator.
+  rpc EditValidator(MsgEditValidator) returns (MsgEditValidatorResponse);
+```
+
+```protobuf
+// MsgEditValidator defines a SDK message for editing an existing validator.
+message MsgEditValidator {
+  option (cosmos.msg.v1.signer) = "validator_address";
+  option (amino.name) = "cosmos-sdk/MsgEditValidator";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  Description description = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // We pass a reference to the new commission rate and min self delegation as
+  // it's not mandatory to update. If not updated, the deserialized rate will be
+  // zero with no way to distinguish if an update was intended.
+  // REF: #2373
+  string commission_rate = 3
+      [(cosmos_proto.scalar) = "cosmos.Dec", (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec"];
+  string min_self_delegation = 4
+      [(cosmos_proto.scalar) = "cosmos.Int", (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int"];
+}
+```
+
+This message is expected to fail if:
+
+* the new `CommissionRate` is either negative or > `MaxRate`
+* the `CommissionRate` has already been updated within the previous 24 hours
+* the `CommissionRate` change is > `MaxChangeRate`
+* the description fields are too large
+
+This message stores the updated `Validator` object.
+
+### MsgDelegate
+
+Within this message the delegator provides coins, and in return receives
+some amount of their validator's (newly created) delegator shares, which are
+assigned to `Delegation.Shares`.
+
+```protobuf
+  // Delegate defines a method for performing a delegation of coins
+  // from a delegator to a validator.
+  rpc Delegate(MsgDelegate) returns (MsgDelegateResponse);
+```
+
+```protobuf
+// MsgDelegate defines a SDK message for performing a delegation of coins
+// from a delegator to a validator.
+message MsgDelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgDelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the validator does not exist
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+* the exchange rate is invalid, meaning the validator has no tokens (due to slashing) but there are outstanding shares
+* the amount delegated is less than the minimum allowed delegation
+
+If a `Delegation` object for the provided addresses does not already exist, it
+is created as part of this message; otherwise, the existing `Delegation` is
+updated to include the newly received shares.
+
+The delegator receives newly minted shares at the current exchange rate.
+The exchange rate is the number of existing shares in the validator divided by
+the number of currently delegated tokens.
+
+The validator is updated in the `ValidatorByPower` index, and the delegation is
+tracked in the validator object in the `Validators` index.
+
+It is possible to delegate to a jailed validator, the only difference being that
+it will not be added to the power index until it is unjailed.
+
+![Delegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/delegation_sequence.svg)
+
+### MsgUndelegate
+
+The `MsgUndelegate` message allows delegators to undelegate their tokens from a
+validator.
+
+```protobuf
+  // Undelegate defines a method for performing an undelegation from a
+  // delegate and a validator.
+  rpc Undelegate(MsgUndelegate) returns (MsgUndelegateResponse);
+```
+
+```protobuf
+// MsgUndelegate defines a SDK message for performing an undelegation from a
+// delegate and a validator.
+message MsgUndelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgUndelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message returns a response containing the completion time of the undelegation:
+
+```protobuf
+// MsgUndelegateResponse defines the Msg/Undelegate response type.
+message MsgUndelegateResponse {
+  google.protobuf.Timestamp completion_time = 1
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the validator doesn't exist
+* the delegation has fewer shares than the ones worth of `Amount`
+* the existing `UnbondingDelegation` has the maximum number of entries as defined by `params.MaxEntries`
+* the `Amount` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares and remove that amount of tokens held within the validator
+* with those removed tokens, if the validator is:
+  * `Bonded` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. Update pool shares to reduce BondedTokens and increase NotBondedTokens by token worth of the shares.
+  * `Unbonding` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+  * `Unbonded` - then send the coins to the message `DelegatorAddr`
+* if there are no more `Shares` in the delegation, then the delegation object is removed from the store
+  * in this situation, if the delegation is the validator's self-delegation, the validator is also jailed.
+
+![Unbond sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/unbond_sequence.svg)
+
+### MsgCancelUnbondingDelegation
+
+The `MsgCancelUnbondingDelegation` message allows delegators to cancel an `unbondingDelegation` entry and delegate back to a previous validator.
+
+```protobuf
+  // CancelUnbondingDelegation defines a method for performing canceling the unbonding delegation
+  // and delegate back to previous validator.
+  //
+  // Since: cosmos-sdk 0.46
+  rpc CancelUnbondingDelegation(MsgCancelUnbondingDelegation) returns (MsgCancelUnbondingDelegationResponse);
+```
+
+```protobuf
+// MsgCancelUnbondingDelegation defines the SDK message for performing a cancel unbonding delegation for delegator
+//
+// Since: cosmos-sdk 0.46
+message MsgCancelUnbondingDelegation {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgCancelUnbondingDelegation";
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // amount is always less than or equal to unbonding delegation entry balance
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // creation_height is the height which the unbonding took place.
+  int64 creation_height = 4;
+}
+```
+
+This message is expected to fail if:
+
+* the `unbondingDelegation` entry is already processed.
+* the `cancel unbonding delegation` amount is greater than the `unbondingDelegation` entry balance.
+* the `cancel unbonding delegation` height doesn't exist in the `unbondingDelegationQueue` of the delegator.
+
+When this message is processed the following actions occur:
+
+* if the remaining `unbondingDelegation` entry balance is zero, the entry is removed from the `unbondingDelegationQueue`
+* otherwise, the `unbondingDelegationQueue` entry is updated with the new `unbondingDelegation` entry balance and initial balance
+* the validator's `DelegatorShares` and the delegation's `Shares` are both increased by the message `Amount`.
+
+### MsgBeginRedelegate
+
+The redelegation command allows delegators to instantly switch validators. Once
+the unbonding period has passed, the redelegation is automatically completed in
+the EndBlocker.
+
+```protobuf
+  // BeginRedelegate defines a method for performing a redelegation
+  // of coins from a delegator and source validator to a destination validator.
+  rpc BeginRedelegate(MsgBeginRedelegate) returns (MsgBeginRedelegateResponse);
+```
+
+```protobuf
+// MsgBeginRedelegate defines a SDK message for performing a redelegation
+// of coins from a delegator and source validator to a destination validator.
+message MsgBeginRedelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgBeginRedelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message returns a response containing the completion time of the redelegation:
+
+```protobuf
+// MsgBeginRedelegateResponse defines the Msg/BeginRedelegate response type.
+message MsgBeginRedelegateResponse {
+  google.protobuf.Timestamp completion_time = 1
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the source or destination validators don't exist
+* the delegation has fewer shares than the ones worth of `Amount`
+* the source validator has a receiving redelegation which is not matured (aka. the redelegation may be transitive)
+* the existing `Redelegation` has the maximum number of entries as defined by `params.MaxEntries`
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the source validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares and remove that amount of tokens held within the source validator
+* if the source validator is:
+  * `Bonded` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with a completion time a full unbonding period from the current time.
Update pool shares to reduce BondedTokens and increase NotBondedTokens by token worth of the shares (this may be effectively reversed in the next step however).
+  * `Unbonding` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+  * `Unbonded` - no action required in this step
+* Delegate the token worth to the destination validator, possibly moving tokens back to the bonded state.
+* if there are no more `Shares` in the source delegation, then the source delegation object is removed from the store
+  * in this situation, if the delegation is the validator's self-delegation, the validator is also jailed.
+
+![Begin redelegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/begin_redelegation_sequence.svg)
+
+### MsgUpdateParams
+
+The `MsgUpdateParams` message updates the staking module parameters.
+The params are updated through a governance proposal where the signer is the gov module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/x/staking/MsgUpdateParams";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // params defines the x/staking parameters to update.
+  //
+  // NOTE: All parameters must be supplied.
+  Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+};
+```
+
+The message handling can fail if:
+
+* the signer is not the authority defined in the staking keeper (usually the gov module account).
+
+## Begin-Block
+
+At each ABCI begin-block call, the historical info is stored and pruned
+according to the `HistoricalEntries` parameter.
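The store-and-prune behavior just described can be sketched roughly as follows. This is a simplified illustration with hypothetical names and a map-backed store, not the actual keeper API:

```go
package main

import "fmt"

// HistoricalInfo stands in for the real historical info record; the header
// and validator set are omitted for brevity.
type HistoricalInfo struct {
	Height int64
}

// trackHistoricalInfo persists the latest entry under its height and prunes
// anything older than height - HistoricalEntries. A parameter of 0 disables
// tracking entirely (BeginBlock becomes a no-op).
func trackHistoricalInfo(store map[int64]HistoricalInfo, height, historicalEntries int64) {
	if historicalEntries == 0 {
		return
	}
	store[height] = HistoricalInfo{Height: height}
	for h := range store {
		if h < height-historicalEntries {
			delete(store, h) // usually a single entry is pruned per block
		}
	}
}

func main() {
	store := map[int64]HistoricalInfo{}
	for h := int64(1); h <= 10; h++ {
		trackHistoricalInfo(store, h, 3)
	}
	fmt.Println(len(store)) // 4: heights 7 through 10 remain
}
```

Note that lowering `HistoricalEntries` mid-stream simply makes the pruning loop delete more than one stale entry on the next block.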
+
+### Historical Info Tracking
+
+If the `HistoricalEntries` parameter is 0, then `BeginBlock` performs a no-op.
+
+Otherwise, the latest historical info is stored under the key `historicalInfoKey|height`, while any entries older than `height - HistoricalEntries` are deleted.
+In most cases, this results in a single entry being pruned per block.
+However, if the parameter `HistoricalEntries` has changed to a lower value, there will be multiple entries in the store that must be pruned.
+
+## End-Block
+
+At each ABCI end-block call, the operations to update queues and process
+validator set changes are executed.
+
+### Validator Set Changes
+
+The staking validator set is updated during this process by state transitions
+that run at the end of every block. As part of this process, any updated
+validators are also returned to CometBFT for inclusion in the CometBFT
+validator set, which is responsible for validating CometBFT messages at the
+consensus layer. Operations are as follows:
+
+* the new validator set is taken as the top `params.MaxValidators` number of
+  validators retrieved from the `ValidatorsByPower` index
+* the previous validator set is compared with the new validator set:
+  * missing validators begin unbonding and their `Tokens` are transferred from the
+    `BondedPool` to the `NotBondedPool` `ModuleAccount`
+  * new validators are instantly bonded and their `Tokens` are transferred from the
+    `NotBondedPool` to the `BondedPool` `ModuleAccount`
+
+In all cases, any validators leaving or entering the bonded validator set or
+changing balances and staying within the bonded validator set incur an update
+message reporting their new consensus power which is passed back to CometBFT.
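The set comparison described above can be sketched as follows. This is a simplified illustration with hypothetical names and map-based sets, not the SDK's actual implementation:

```go
package main

import "fmt"

// validatorUpdates compares the previous bonded set with the new top-power
// set and reports a power update for every validator that entered, left, or
// changed power. A reported power of 0 tells the consensus engine that the
// validator has left the active set (and begins unbonding).
func validatorUpdates(prev, next map[string]int64) map[string]int64 {
	updates := map[string]int64{}
	for addr, power := range next {
		if prev[addr] != power { // newly bonded, or power changed
			updates[addr] = power
		}
	}
	for addr := range prev {
		if _, ok := next[addr]; !ok { // dropped out of the top set
			updates[addr] = 0
		}
	}
	return updates
}

func main() {
	prev := map[string]int64{"valA": 10, "valB": 5}
	next := map[string]int64{"valA": 10, "valC": 7} // valB dropped out, valC bonded
	fmt.Println(validatorUpdates(prev, next))       // map[valB:0 valC:7]
}
```

Validators whose power is unchanged produce no update, which keeps the per-block update set small.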
+
+The `LastTotalPower` and `LastValidatorsPower` hold the state of the total power
+and validator power from the end of the last block, and are used to check for
+changes that have occurred in `ValidatorsByPower` and the total new power, which
+is calculated during `EndBlock`.
+
+### Queues
+
+Within staking, certain state transitions are not instantaneous but take place
+over a duration of time (typically the unbonding period). When these
+transitions mature, certain operations must take place in order to complete
+the state operation. This is achieved through the use of queues which are
+checked/processed at the end of each block.
+
+#### Unbonding Validators
+
+When a validator is kicked out of the bonded validator set (either through
+being jailed or not having sufficient bonded tokens), it begins the unbonding
+process, and all of its delegations begin unbonding as well (while still being
+delegated to this validator). At this point the validator is said to be an
+"unbonding validator", whereby it will mature to become an "unbonded validator"
+after the unbonding period has passed.
+
+Each block, the validator queue is checked for mature unbonding validators
+(namely those with a completion time `<=` current time and completion height `<=` current
+block height). At this point any mature validators which do not have any
+delegations remaining are deleted from state. For all other mature unbonding
+validators that still have remaining delegations, the `validator.Status` is
+switched from `types.Unbonding` to `types.Unbonded`.
+
+Unbonding operations can be put on hold by external modules via the `PutUnbondingOnHold(unbondingId)` method.
+As a result, an unbonding operation (e.g., an unbonding delegation) that is on hold cannot complete
+even if it reaches maturity.
For an unbonding operation with `unbondingId` to eventually complete +(after it reaches maturity), every call to `PutUnbondingOnHold(unbondingId)` must be matched +by a call to `UnbondingCanComplete(unbondingId)`. + +#### Unbonding Delegations + +Complete the unbonding of all mature `UnbondingDelegations.Entries` within the +`UnbondingDelegations` queue with the following procedure: + +* transfer the balance coins to the delegator's wallet address +* remove the mature entry from `UnbondingDelegation.Entries` +* remove the `UnbondingDelegation` object from the store if there are no + remaining entries. + +#### Redelegations + +Complete the unbonding of all mature `Redelegation.Entries` within the +`Redelegations` queue with the following procedure: + +* remove the mature entry from `Redelegation.Entries` +* remove the `Redelegation` object from the store if there are no + remaining entries. + +## Hooks + +Other modules may register operations to execute when a certain event has +occurred within staking. These events can be registered to execute either +right `Before` or `After` the staking event (as per the hook name). 
The
+following hooks can be registered with staking:
+
+* `AfterValidatorCreated(Context, ValAddress) error`
+  * called when a validator is created
+* `BeforeValidatorModified(Context, ValAddress) error`
+  * called when a validator's state is changed
+* `AfterValidatorRemoved(Context, ConsAddress, ValAddress) error`
+  * called when a validator is deleted
+* `AfterValidatorBonded(Context, ConsAddress, ValAddress) error`
+  * called when a validator is bonded
+* `AfterValidatorBeginUnbonding(Context, ConsAddress, ValAddress) error`
+  * called when a validator begins unbonding
+* `BeforeDelegationCreated(Context, AccAddress, ValAddress) error`
+  * called when a delegation is created
+* `BeforeDelegationSharesModified(Context, AccAddress, ValAddress) error`
+  * called when a delegation's shares are modified
+* `AfterDelegationModified(Context, AccAddress, ValAddress) error`
+  * called when a delegation is created or modified
+* `BeforeDelegationRemoved(Context, AccAddress, ValAddress) error`
+  * called when a delegation is removed
+* `AfterUnbondingInitiated(Context, UnbondingID)`
+  * called when an unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated
+
+## Events
+
+The staking module emits the following events:
+
+### EndBlocker
+
+| Type                   | Attribute Key          | Attribute Value             |
+| ---------------------- | ---------------------- | --------------------------- |
+| complete\_unbonding    | amount                 | `{totalUnbondingAmount}`    |
+| complete\_unbonding    | validator              | `{validatorAddress}`        |
+| complete\_unbonding    | delegator              | `{delegatorAddress}`        |
+| complete\_redelegation | amount                 | `{totalRedelegationAmount}` |
+| complete\_redelegation | source\_validator      | `{srcValidatorAddress}`     |
+| complete\_redelegation | destination\_validator | `{dstValidatorAddress}`     |
+| complete\_redelegation | delegator              | `{delegatorAddress}`        |
+
+## Msg's
+
+### MsgCreateValidator
+
+| Type              | Attribute Key | Attribute Value      |
+| ----------------- | ------------- | 
------------------ | +| create\_validator | validator | `{validatorAddress}` | +| create\_validator | amount | `{delegationAmount}` | +| message | module | staking | +| message | action | create\_validator | +| message | sender | `{senderAddress}` | + +### MsgEditValidator + +| Type | Attribute Key | Attribute Value | +| --------------- | --------------------- | ------------------- | +| edit\_validator | commission\_rate | `{commissionRate}` | +| edit\_validator | min\_self\_delegation | `{minSelfDelegation}` | +| message | module | staking | +| message | action | edit\_validator | +| message | sender | `{senderAddress}` | + +### MsgDelegate + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| delegate | validator | `{validatorAddress}` | +| delegate | amount | `{delegationAmount}` | +| message | module | staking | +| message | action | delegate | +| message | sender | `{senderAddress}` | + +### MsgUndelegate + +| Type | Attribute Key | Attribute Value | +| ------- | --------------------- | ------------------ | +| unbond | validator | `{validatorAddress}` | +| unbond | amount | `{unbondAmount}` | +| unbond | completion\_time \[0] | `{completionTime}` | +| message | module | staking | +| message | action | begin\_unbonding | +| message | sender | `{senderAddress}` | + +* \[0] Time is formatted in the RFC3339 standard + +### MsgCancelUnbondingDelegation + +| Type | Attribute Key | Attribute Value | +| ----------------------------- | ---------------- | --------------------------------- | +| cancel\_unbonding\_delegation | validator | `{validatorAddress}` | +| cancel\_unbonding\_delegation | delegator | `{delegatorAddress}` | +| cancel\_unbonding\_delegation | amount | `{cancelUnbondingDelegationAmount}` | +| cancel\_unbonding\_delegation | creation\_height | `{unbondingCreationHeight}` | +| message | module | staking | +| message | action | cancel\_unbond | +| message | sender | `{senderAddress}` | + +### 
MsgBeginRedelegate + +| Type | Attribute Key | Attribute Value | +| ---------- | ---------------------- | --------------------- | +| redelegate | source\_validator | `{srcValidatorAddress}` | +| redelegate | destination\_validator | `{dstValidatorAddress}` | +| redelegate | amount | `{unbondAmount}` | +| redelegate | completion\_time \[0] | `{completionTime}` | +| message | module | staking | +| message | action | begin\_redelegate | +| message | sender | `{senderAddress}` | + +* \[0] Time is formatted in the RFC3339 standard + +## Parameters + +The staking module contains the following parameters: + +| Key | Type | Example | +| ----------------- | ---------------- | ---------------------- | +| UnbondingTime | string (time ns) | "259200000000000" | +| MaxValidators | uint16 | 100 | +| KeyMaxEntries | uint16 | 7 | +| HistoricalEntries | uint16 | 3 | +| BondDenom | string | "stake" | +| MinCommissionRate | string | "0.000000000000000000" | + +## Client + +### CLI + +A user can query and interact with the `staking` module using the CLI. + +#### Query + +The `query` commands allows users to query `staking` state. + +```bash +simd query staking --help +``` + +##### delegation + +The `delegation` command allows users to query delegations for an individual delegator on an individual validator. + +Usage: + +```bash +simd query staking delegation [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +balance: + amount: "10000000000" + denom: stake +delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### delegations + +The `delegations` command allows users to query delegations for an individual delegator on all validators. 
+ +Usage: + +```bash +simd query staking delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +delegation_responses: +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1x20lytyf6zkcrv5edpkfkn8sz578qg5sqfyqnp +pagination: + next_key: null + total: "0" +``` + +##### delegations-to + +The `delegations-to` command allows users to query delegations on an individual validator. + +Usage: + +```bash +simd query staking delegations-to [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations-to cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +- balance: + amount: "504000000" + denom: stake + delegation: + delegator_address: cosmos1q2qwwynhv8kh3lu5fkeex4awau9x8fwt45f5cp + shares: "504000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "78125000000" + denom: uixo + delegation: + delegator_address: cosmos1qvppl3479hw4clahe0kwdlfvf8uvjtcd99m2ca + shares: "78125000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +pagination: + next_key: null + total: "0" +``` + +##### historical-info + +The `historical-info` command allows users to query historical information at given height. 
+ +Usage: + +```bash +simd query staking historical-info [height] [flags] +``` + +Example: + +```bash +simd query staking historical-info 10 +``` + +Example Output: + +```bash expandable +header: + app_hash: Lbx8cXpI868wz8sgp4qPYVrlaKjevR5WP/IjUxwp3oo= + chain_id: testnet + consensus_hash: BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8= + data_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + evidence_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + height: "10" + last_block_id: + hash: RFbkpu6pWfSThXxKKl6EZVDnBSm16+U0l0xVjTX08Fk= + part_set_header: + hash: vpIvXD4rxD5GM4MXGz0Sad9I7/iVYLzZsEU4BVgWIU= + total: 1 + last_commit_hash: Ne4uXyx4QtNp4Zx89kf9UK7oG9QVbdB6e7ZwZkhy8K0= + last_results_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + next_validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + proposer_address: mMEP2c2IRPLr99LedSRtBg9eONM= + time: "2021-10-01T06:00:49.785790894Z" + validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + version: + app: "0" + block: "11" +valset: +- commission: + commission_rates: + max_change_rate: "0.010000000000000000" + max_rate: "0.200000000000000000" + rate: "0.100000000000000000" + update_time: "2021-10-01T05:52:50.380144238Z" + consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8= + delegator_shares: "10000000.000000000000000000" + description: + details: "" + identity: "" + moniker: myvalidator + security_contact: "" + website: "" + jailed: false + min_self_delegation: "1" + operator_address: cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc + status: BOND_STATUS_BONDED + tokens: "10000000" + unbonding_height: "0" + unbonding_time: "1970-01-01T00:00:00Z" +``` + +##### params + +The `params` command allows users to query values set as staking parameters. 
+ +Usage: + +```bash +simd query staking params [flags] +``` + +Example: + +```bash +simd query staking params +``` + +Example Output: + +```bash +bond_denom: stake +historical_entries: 10000 +max_entries: 7 +max_validators: 50 +unbonding_time: 1814400s +``` + +##### pool + +The `pool` command allows users to query values for amounts stored in the staking pool. + +Usage: + +```bash +simd q staking pool [flags] +``` + +Example: + +```bash +simd q staking pool +``` + +Example Output: + +```bash +bonded_tokens: "10000000" +not_bonded_tokens: "0" +``` + +##### redelegation + +The `redelegation` command allows users to query a redelegation record based on delegator and a source and destination validator address. + +Usage: + +```bash +simd query staking redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +pagination: null +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm + validator_src_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm +``` + +##### redelegations + +The `redelegations` command allows users to query all redelegation records for an individual delegator. 
+ +Usage: + +```bash +simd query staking redelegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp +- entries: + - balance: "562770000000" + redelegation_entry: + completion_time: "2021-10-25T21:42:07.336911677Z" + creation_height: 2.39735e+06 + initial_balance: "562770000000" + shares_dst: "562770000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp +``` + +##### redelegations-from + +The `redelegations-from` command allows users to query delegations that are redelegating *from* a validator. 
+ +Usage: + +```bash +simd query staking redelegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegations-from cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1pm6e78p4pgn0da365plzl4t56pxy8hwtqp2mph + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +- entries: + - balance: "221000000" + redelegation_entry: + completion_time: "2021-10-05T21:05:45.669420544Z" + creation_height: 2.120693e+06 + initial_balance: "221000000" + shares_dst: "221000000.000000000000000000" + redelegation: + delegator_address: cosmos1zqv8qxy2zgn4c58fz8jt8jmhs3d0attcussrf6 + entries: null + validator_dst_address: cosmosvaloper10mseqwnwtjaqfrwwp2nyrruwmjp6u5jhah4c3y + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +##### unbonding-delegation + +The `unbonding-delegation` command allows users to query unbonding delegations for an individual delegator on an individual validator. 
+ +Usage: + +```bash +simd query staking unbonding-delegation [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +entries: +- balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" +validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### unbonding-delegations + +The `unbonding-delegations` command allows users to query all unbonding-delegations records for one delegator. + +Usage: + +```bash +simd query staking unbonding-delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: + - balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" + validator_address: cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa + +``` + +##### unbonding-delegations-from + +The `unbonding-delegations-from` command allows users to query delegations that are unbonding *from* a validator. 
+ +Usage: + +```bash +simd query staking unbonding-delegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations-from cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1qqq9txnw4c77sdvzx0tkedsafl5s3vk7hn53fn + entries: + - balance: "150000000" + completion_time: "2021-11-01T21:41:13.098141574Z" + creation_height: "46823" + initial_balance: "150000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- delegator_address: cosmos1peteje73eklqau66mr7h7rmewmt2vt99y24f5z + entries: + - balance: "24000000" + completion_time: "2021-10-31T02:57:18.192280361Z" + creation_height: "21516" + initial_balance: "24000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### validator + +The `validator` command allows users to query details about an individual validator. + +Usage: + +```bash +simd query staking validator [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking validator cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. 
+
+  identity: 51468B615127273A
+  moniker: Witval
+  security_contact: ""
+  website: ""
+jailed: false
+min_self_delegation: "1"
+operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+status: BOND_STATUS_BONDED
+tokens: "32948270000"
+unbonding_height: "0"
+unbonding_time: "1970-01-01T00:00:00Z"
+```
+
+##### validators
+
+The `validators` command allows users to query details about all validators on a network.
+
+Usage:
+
+```bash
+simd query staking validators [flags]
+```
+
+Example:
+
+```bash
+simd query staking validators
+```
+
+Example Output:
+
+```bash expandable
+pagination:
+  next_key: FPTi7TKAjN63QqZh+BaXn6gBmD5/
+  total: "0"
+validators:
+- commission:
+    commission_rates:
+      max_change_rate: "0.020000000000000000"
+      max_rate: "0.200000000000000000"
+      rate: "0.050000000000000000"
+    update_time: "2021-10-01T19:24:52.663191049Z"
+  consensus_pubkey:
+    '@type': /cosmos.crypto.ed25519.PubKey
+    key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc=
+  delegator_shares: "32948270000.000000000000000000"
+  description:
+    details: Witval is the validator arm from Vitwit. Vitwit is into software consulting
+      and services business since 2015. We are working closely with Cosmos ecosystem
+      since 2018. We are also building tools for the ecosystem, Aneka is our explorer
+      for the cosmos ecosystem.
+    identity: 51468B615127273A
+    moniker: Witval
+    security_contact: ""
+    website: ""
+  jailed: false
+  min_self_delegation: "1"
+  operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+  status: BOND_STATUS_BONDED
+  tokens: "32948270000"
+  unbonding_height: "0"
+  unbonding_time: "1970-01-01T00:00:00Z"
+- commission:
+    commission_rates:
+      max_change_rate: "0.100000000000000000"
+      max_rate: "0.200000000000000000"
+      rate: "0.050000000000000000"
+    update_time: "2021-10-04T18:02:21.446645619Z"
+  consensus_pubkey:
+    '@type': /cosmos.crypto.ed25519.PubKey
+    key: GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=
+  delegator_shares: "559343421.000000000000000000"
+  description:
+    details: Noderunners is a professional validator in POS networks. We have a huge
+      node running experience, reliable soft and hardware. Our commissions are always
+      low, our support to delegators is always full. Stake with us and start receiving
+      your Cosmos rewards now!
+    identity: 812E82D12FEA3493
+    moniker: Noderunners
+    security_contact: info@noderunners.biz
+    website: http://noderunners.biz
+  jailed: false
+  min_self_delegation: "1"
+  operator_address: cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7
+  status: BOND_STATUS_BONDED
+  tokens: "559343421"
+  unbonding_height: "0"
+  unbonding_time: "1970-01-01T00:00:00Z"
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `staking` module.
+
+```bash
+simd tx staking --help
+```
+
+##### create-validator
+
+The command `create-validator` allows users to create a new validator initialized with a self-delegation to it.
+
+Usage:
+
+```bash
+simd tx staking create-validator [path/to/validator.json] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking create-validator /path/to/validator.json \
+  --chain-id="name_of_chain_id" \
+  --gas="auto" \
+  --gas-adjustment="1.2" \
+  --gas-prices="0.025stake" \
+  --from=mykey
+```
+
+where `validator.json` contains:
+
+```json expandable
+{
+  "pubkey": {
+    "@type": "/cosmos.crypto.ed25519.PubKey",
+    "key": "BnbwFpeONLqvWqJb3qaUbL5aoIcW3fSuAp9nT3z5f20="
+  },
+  "amount": "1000000stake",
+  "moniker": "my-moniker",
+  "website": "https://myweb.site",
+  "security": "security-contact@gmail.com",
+  "details": "description of your validator",
+  "commission-rate": "0.10",
+  "commission-max-rate": "0.20",
+  "commission-max-change-rate": "0.01",
+  "min-self-delegation": "1"
+}
+```
+
+and the `pubkey` can be obtained using the `simd tendermint show-validator` command.
+
+##### delegate
+
+The command `delegate` allows users to delegate liquid tokens to a validator.
+
+Usage:
+
+```bash
+simd tx staking delegate [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking delegate cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 1000stake --from mykey
+```
+
+##### edit-validator
+
+The command `edit-validator` allows users to edit an existing validator account.
+
+Usage:
+
+```bash
+simd tx staking edit-validator [flags]
+```
+
+Example:
+
+```bash
+simd tx staking edit-validator --moniker "new_moniker_name" --website "new_website_url" --from mykey
+```
+
+##### redelegate
+
+The command `redelegate` allows users to redelegate illiquid tokens from one validator to another.
+
+Usage:
+
+```bash
+simd tx staking redelegate [src-validator-addr] [dst-validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking redelegate cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 100stake --from mykey
+```
+
+##### unbond
+
+The command `unbond` allows users to unbond shares from a validator.
+
+Usage:
+
+```bash
+simd tx staking unbond [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake --from mykey
+```
+
+##### cancel-unbond
+
+The command `cancel-unbond` allows users to cancel an unbonding delegation entry and delegate back to the original validator.
+
+Usage:
+
+```bash
+simd tx staking cancel-unbond [validator-addr] [amount] [creation-height]
+```
+
+Example:
+
+```bash
+simd tx staking cancel-unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake 123123 --from mykey
+```
+
+### gRPC
+
+A user can query the `staking` module using gRPC endpoints.
+
+#### Validators
+
+The `Validators` endpoint queries all validators that match the given status.
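+
+Large validator sets are returned in pages. As a sketch of paging through them (the request shape is an assumption based on the standard `cosmos.base.query.v1beta1.PageRequest` fields; `grpcurl` and a node listening on `localhost:9090` are assumed):
+
+```bash
+# Filter by status and request at most 50 results per page.
+# To fetch the next page, copy pagination.next_key from the previous
+# response into a "key" field of the pagination object (it is base64-encoded).
+grpcurl -plaintext \
+-d '{"status":"BOND_STATUS_BONDED","pagination":{"limit":"50"}}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Validators
+```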
+ +```bash +cosmos.staking.v1beta1.Query/Validators +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Validators +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="}, + "status": "BOND_STATUS_BONDED", + "tokens": "10000000", + "delegatorShares": "10000000000000000000000000", + "description": { + "moniker": "myvalidator" + }, + "unbondingTime": "1970-01-01T00:00:00Z", + "commission": { + "commissionRates": { + "rate": "100000000000000000", + "maxRate": "200000000000000000", + "maxChangeRate": "10000000000000000" + }, + "updateTime": "2021-10-01T05:52:50.380144238Z" + }, + "minSelfDelegation": "1" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### Validator + +The `Validator` endpoint queries validator information for given validator address. 
+
+```bash
+cosmos.staking.v1beta1.Query/Validator
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Validator
+```
+
+Example Output:
+
+```bash expandable
+{
+  "validator": {
+    "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+    "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="},
+    "status": "BOND_STATUS_BONDED",
+    "tokens": "10000000",
+    "delegatorShares": "10000000000000000000000000",
+    "description": {
+      "moniker": "myvalidator"
+    },
+    "unbondingTime": "1970-01-01T00:00:00Z",
+    "commission": {
+      "commissionRates": {
+        "rate": "100000000000000000",
+        "maxRate": "200000000000000000",
+        "maxChangeRate": "10000000000000000"
+      },
+      "updateTime": "2021-10-01T05:52:50.380144238Z"
+    },
+    "minSelfDelegation": "1"
+  }
+}
+```
+
+#### ValidatorDelegations
+
+The `ValidatorDelegations` endpoint queries the delegations of a given validator.
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorDelegations
+```
+
+Example Output:
+
+```bash expandable
+{
+  "delegationResponses": [
+    {
+      "delegation": {
+        "delegatorAddress": "cosmos1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgy3ua5t",
+        "validatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+        "shares": "10000000000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+#### ValidatorUnbondingDelegations
+
+The `ValidatorUnbondingDelegations` endpoint queries the unbonding delegations of a given validator.
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example Output:
+
+```bash expandable
+{
+  "unbonding_responses": [
+    {
+      "delegator_address": "cosmos1z3pzzw84d6xn00pw9dy3yapqypfde7vg6965fy",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "entries": [
+        {
+          "creation_height": "25325",
+          "completion_time": "2021-10-31T09:24:36.797320636Z",
+          "initial_balance": "20000000",
+          "balance": "20000000"
+        }
+      ]
+    },
+    {
+      "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "entries": [
+        {
+          "creation_height": "13100",
+          "completion_time": "2021-10-30T12:53:02.272266791Z",
+          "initial_balance": "1000000",
+          "balance": "1000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "8"
+  }
+}
+```
+
+#### Delegation
+
+The `Delegation` endpoint queries delegate information for a given validator-delegator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example Output:
+
+```bash expandable
+{
+  "delegation_response": {
+    "delegation": {
+      "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "shares": "25083119936.000000000000000000"
+    },
+    "balance": {
+      "denom": "stake",
+      "amount": "25083119936"
+    }
+  }
+}
+```
+
+#### UnbondingDelegation
+
+The `UnbondingDelegation` endpoint queries unbonding information for a given delegator-validator pair.
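+
+Because the request body is JSON embedded in a shell string, quoting mistakes are easy to make; one way to sidestep them (a sketch, assuming `grpcurl` and a node on `localhost:9090`) is to build the payload in a variable first:
+
+```bash
+# Build the request body once, then pass it to grpcurl.
+DELEGATOR="cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"
+VALIDATOR="cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"
+REQUEST=$(printf '{"delegator_addr":"%s","validator_addr":"%s"}' "$DELEGATOR" "$VALIDATOR")
+grpcurl -plaintext -d "$REQUEST" localhost:9090 cosmos.staking.v1beta1.Query/UnbondingDelegation
+```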
+
+```bash
+cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example Output:
+
+```bash expandable
+{
+  "unbond": {
+    "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+    "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+    "entries": [
+      {
+        "creation_height": "136984",
+        "completion_time": "2021-11-08T05:38:47.505593891Z",
+        "initial_balance": "400000000",
+        "balance": "400000000"
+      },
+      {
+        "creation_height": "137005",
+        "completion_time": "2021-11-08T05:40:53.526196312Z",
+        "initial_balance": "385000000",
+        "balance": "385000000"
+      }
+    ]
+  }
+}
+```
+
+#### DelegatorDelegations
+
+The `DelegatorDelegations` endpoint queries all delegations of a given delegator address.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example Output:
+
+```bash
+{
+  "delegation_responses": [
+    {"delegation":{"delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77","validator_address":"cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8","shares":"25083339023.000000000000000000"},"balance":{"denom":"stake","amount":"25083339023"}}
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+#### DelegatorUnbondingDelegations
+
+The `DelegatorUnbondingDelegations` endpoint queries all unbonding delegations of a given delegator address.
+ +```bash +cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", + "validator_address": "cosmosvaloper1sjllsnramtg3ewxqwwrwjxfgc4n4ef9uxyejze", + "entries": [ + { + "creation_height": "136984", + "completion_time": "2021-11-08T05:38:47.505593891Z", + "initial_balance": "400000000", + "balance": "400000000" + }, + { + "creation_height": "137005", + "completion_time": "2021-11-08T05:40:53.526196312Z", + "initial_balance": "385000000", + "balance": "385000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### Redelegations + +The `Redelegations` endpoint queries redelegations of given address. + +```bash +cosmos.staking.v1beta1.Query/Redelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", "src_validator_addr" : "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", "dst_validator_addr" : "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/Redelegations +``` + +Example Output: + +```bash expandable +{ + "redelegation_responses": [ + { + "redelegation": { + "delegator_address": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", + "validator_src_address": "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", + "validator_dst_address": "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse", + "entries": null + }, + "entries": [ + { + "redelegation_entry": { + "creation_height": 135932, + "completion_time": "2021-11-08T03:52:55.299147901Z", + "initial_balance": "2900000", + "shares_dst": "2900000.000000000000000000" + }, + "balance": "2900000" + } + 
] + } + ], + "pagination": null +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` endpoint queries all validators information for given delegator. + +```bash +cosmos.staking.v1beta1.Query/DelegatorValidators +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidators +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "347260647559", + "delegator_shares": "347260647559.000000000000000000", + "description": { + "moniker": "BouBouNode", + "identity": "", + "website": "https://boubounode.com", + "security_contact": "", + "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.061000000000000000", + "max_rate": "0.300000000000000000", + "max_change_rate": "0.150000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidator + +The `DelegatorValidator` endpoint queries validator information for given delegator validator + +```bash +cosmos.staking.v1beta1.Query/DelegatorValidator +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1eh5mwu044gd5ntkkc2xgfg8247mgc56f3n8rr7", "validator_addr": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidator +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "347262754841", + "delegator_shares": "347262754841.000000000000000000", + "description": { + "moniker": "BouBouNode", + "identity": "", + "website": "https://boubounode.com", + "security_contact": "", + "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.061000000000000000", + "max_rate": "0.300000000000000000", + "max_change_rate": "0.150000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### HistoricalInfo + +```bash +cosmos.staking.v1beta1.Query/HistoricalInfo +``` + +Example: + +```bash +grpcurl -plaintext -d '{"height" : 1}' localhost:9090 cosmos.staking.v1beta1.Query/HistoricalInfo +``` + +Example Output: + +```bash expandable +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "simd-1", + "height": "140142", + "time": "2021-10-11T10:56:29.720079569Z", + "last_block_id": { + "hash": "9gri/4LLJUBFqioQ3NzZIP9/7YHR9QqaM6B2aJNQA7o=", + "part_set_header": { + "total": 1, + "hash": "Hk1+C864uQkl9+I6Zn7IurBZBKUevqlVtU7VqaZl1tc=" + } + }, + "last_commit_hash": "VxrcS27GtvGruS3I9+AlpT7udxIT1F0OrRklrVFSSKc=", + "data_hash": "80BjOrqNYUOkTnmgWyz9AQ8n7SoEmPVi4QmAe8RbQBY=", + "validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", + "next_validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "ZZaxnSY3E6Ex5Bvkm+RigYCK82g8SSUL53NymPITeOE=", + "last_results_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "aH6dO428B+ItuoqPq70efFHrSMY=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1426045203613", + "delegator_shares": "1426045203613.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website": 
"https://sg-1.online", + "security_contact": "", + "details": "SG-1 - your favorite validator on Witval. We offer 100% Soft Slash protection." + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.037500000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.030000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } + ] + } +} + +``` + +#### Pool + +The `Pool` endpoint queries the pool information. + +```bash +cosmos.staking.v1beta1.Query/Pool +``` + +Example: + +```bash +grpcurl -plaintext -d localhost:9090 cosmos.staking.v1beta1.Query/Pool +``` + +Example Output: + +```bash +{ + "pool": { + "not_bonded_tokens": "369054400189", + "bonded_tokens": "15657192425623" + } +} +``` + +#### Params + +The `Params` endpoint queries the pool information. + +```bash +cosmos.staking.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Params +``` + +Example Output: + +```bash +{ + "params": { + "unbondingTime": "1814400s", + "maxValidators": 100, + "maxEntries": 7, + "historicalEntries": 10000, + "bondDenom": "stake" + } +} +``` + +### REST + +A user can query the `staking` module using REST endpoints. + +#### DelegatorDelegations + +The `DelegtaorDelegations` REST endpoint queries all delegations of a given delegator address. 
+ +```bash +/cosmos/staking/v1beta1/delegations/{delegatorAddr} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/delegations/cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_responses": [ + { + "delegation": { + "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", + "validator_address": "cosmosvaloper1quqxfrxkycr0uzt4yk0d57tcq3zk7srm7sm6r8", + "shares": "256250000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "256250000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", + "validator_address": "cosmosvaloper194v8uwee2fvs2s8fa5k7j03ktwc87h5ym39jfv", + "shares": "255150000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "255150000" + } + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### Redelegations + +The `Redelegations` REST endpoint queries redelegations of given address. 
+ +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/redelegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e/redelegations?srcValidatorAddr=cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf&dstValidatorAddr=cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "redelegation_responses": [ + { + "redelegation": { + "delegator_address": "cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e", + "validator_src_address": "cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf", + "validator_dst_address": "cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4", + "entries": null + }, + "entries": [ + { + "redelegation_entry": { + "creation_height": 151523, + "completion_time": "2021-11-09T06:03:25.640682116Z", + "initial_balance": "200000000", + "shares_dst": "200000000.000000000000000000" + }, + "balance": "200000000" + } + ] + } + ], + "pagination": null +} +``` + +#### DelegatorUnbondingDelegations + +The `DelegatorUnbondingDelegations` REST endpoint queries all unbonding delegations of a given delegator address. 
+ +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/unbonding_delegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll/unbonding_delegations" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll", + "validator_address": "cosmosvaloper1e7mvqlz50ch6gw4yjfemsc069wfre4qwmw53kq", + "entries": [ + { + "creation_height": "2442278", + "completion_time": "2021-10-12T10:59:03.797335857Z", + "initial_balance": "50000000000", + "balance": "50000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` REST endpoint queries all validators information for given delegator address. + +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "21592843799", + "delegator_shares": "21592843799.000000000000000000", + "description": { + "moniker": "jabbey", + "identity": "", + "website": "https://twitter.com/JoeAbbey", + "security_contact": "", + "details": "just another dad in the cosmos" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.100000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + 
}, + "update_time": "2021-10-09T19:03:54.984821705Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidator + +The `DelegatorValidator` REST endpoint queries validator information for given delegator validator pair. + +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators/{validatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators/cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "21592843799", + "delegator_shares": "21592843799.000000000000000000", + "description": { + "moniker": "jabbey", + "identity": "", + "website": "https://twitter.com/JoeAbbey", + "security_contact": "", + "details": "just another dad in the cosmos" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.100000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-09T19:03:54.984821705Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### HistoricalInfo + +The `HistoricalInfo` REST endpoint queries the historical information for given height. 
+ +```bash +/cosmos/staking/v1beta1/historical_info/{height} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/historical_info/153332" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "cosmos-1", + "height": "153332", + "time": "2021-10-12T09:05:35.062230221Z", + "last_block_id": { + "hash": "NX8HevR5khb7H6NGKva+jVz7cyf0skF1CrcY9A0s+d8=", + "part_set_header": { + "total": 1, + "hash": "zLQ2FiKM5tooL3BInt+VVfgzjlBXfq0Hc8Iux/xrhdg=" + } + }, + "last_commit_hash": "P6IJrK8vSqU3dGEyRHnAFocoDGja0bn9euLuy09s350=", + "data_hash": "eUd+6acHWrNXYju8Js449RJ99lOYOs16KpqQl4SMrEM=", + "validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "next_validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "fuELArKRK+CptnZ8tu54h6xEleSWenHNmqC84W866fU=", + "last_results_hash": "p/BPexV4LxAzlVcPRvW+lomgXb6Yze8YLIQUo/4Kdgc=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "G0MeY8xQx7ooOsni8KE/3R/Ib3Q=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1416521659632", + "delegator_shares": "1416521659632.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website": "https://sg-1.online", + "security_contact": "", + "details": "SG-1 - your favorite validator on cosmos. We offer 100% Soft Slash protection." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.037500000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.030000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "uExZyjNLtr2+FFIhNDAMcQ8+yTrqE7ygYTsI7khkA5Y=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1348298958808", + "delegator_shares": "1348298958808.000000000000000000", + "description": { + "moniker": "Cosmostation", + "identity": "AE4C403A6E7AA1AC", + "website": "https://www.cosmostation.io", + "security_contact": "admin@stamper.network", + "details": "Cosmostation validator node. Delegate your tokens and Start Earning Staking Rewards" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "1.000000000000000000", + "max_change_rate": "0.200000000000000000" + }, + "update_time": "2021-10-01T15:06:38.821314287Z" + }, + "min_self_delegation": "1" + } + ] + } +} +``` + +#### Parameters + +The `Parameters` REST endpoint queries the staking parameters. + +```bash +/cosmos/staking/v1beta1/params +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/params" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "params": { + "unbonding_time": "2419200s", + "max_validators": 100, + "max_entries": 7, + "historical_entries": 10000, + "bond_denom": "stake" + } +} +``` + +#### Pool + +The `Pool` REST endpoint queries the pool information. 
+ +```bash +/cosmos/staking/v1beta1/pool +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/pool" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "pool": { + "not_bonded_tokens": "432805737458", + "bonded_tokens": "15783637712645" + } +} +``` + +#### Validators + +The `Validators` REST endpoint queries all validators that match the given status. + +```bash +/cosmos/staking/v1beta1/validators +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1q3jsx9dpfhtyqqgetwpe5tmk8f0ms5qywje8tw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "N7BPyek2aKuNZ0N/8YsrqSDhGZmgVaYUBuddY8pwKaE=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "383301887799", + "delegator_shares": "383301887799.000000000000000000", + "description": { + "moniker": "SmartNodes", + "identity": "D372724899D1EDC8", + "website": "https://smartnodes.co", + "security_contact": "", + "details": "Earn Rewards with Crypto Staking & Node Deployment" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-01T15:51:31.596618510Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=" + }, + "jailed": false, + "status": "BOND_STATUS_UNBONDING", + "tokens": "1017819654", + "delegator_shares": "1017819654.000000000000000000", + "description": { + "moniker": "Noderunners", + "identity": "812E82D12FEA3493", + "website": 
"http://noderunners.biz", + "security_contact": "info@noderunners.biz", + "details": "Noderunners is a professional validator in POS networks. We have a huge node running experience, reliable soft and hardware. Our commissions are always low, our support to delegators is always full. Stake with us and start receiving your cosmos rewards now!" + }, + "unbonding_height": "147302", + "unbonding_time": "2021-11-08T22:58:53.718662452Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-04T18:02:21.446645619Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": "FONDBFkE4tEEf7yxWWKOD49jC2NK", + "total": "2" + } +} +``` + +#### Validator + +The `Validator` REST endpoint queries validator information for given validator address. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "33027900000", + "delegator_shares": "33027900000.000000000000000000", + "description": { + "moniker": "Witval", + "identity": "51468B615127273A", + "website": "", + "security_contact": "", + "details": "Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.020000000000000000" + }, + "update_time": "2021-10-01T19:24:52.663191049Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### ValidatorDelegations + +The `ValidatorDelegations` REST endpoint queries delegate information for given validator. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_responses": [ + { + "delegation": { + "delegator_address": "cosmos190g5j8aszqhvtg7cprmev8xcxs6csra7xnk3n3", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "31000000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "31000000000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1ddle9tczl87gsvmeva3c48nenyng4n56qwq4ee", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "628470000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "628470000" + } + }, + { + "delegation": { + "delegator_address": "cosmos10fdvkczl76m040smd33lh9xn9j0cf26kk4s2nw", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "838120000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "838120000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "500000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "500000000" + } + }, + { + "delegation": { + 
"delegator_address": "cosmos16msryt3fqlxtvsy8u5ay7wv2p8mglfg9hrek2e", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "61310000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "61310000" + } + } + ], + "pagination": { + "next_key": null, + "total": "5" + } +} +``` + +#### Delegation + +The `Delegation` REST endpoint queries delegate information for given validator delegator pair. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations/cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_response": { + "delegation": { + "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "500000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "500000000" + } + } +} +``` + +#### UnbondingDelegation + +The `UnbondingDelegation` REST endpoint queries unbonding information for given validator delegator pair. 
+ +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}/unbonding_delegation +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/delegations/cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm/unbonding_delegation" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbond": { + "delegator_address": "cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "153687", + "completion_time": "2021-11-09T09:41:18.352401903Z", + "initial_balance": "525111", + "balance": "525111" + } + ] + } +} +``` + +#### ValidatorUnbondingDelegations + +The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/unbonding_delegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/unbonding_delegations" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1q9snn84jfrd9ge8t46kdcggpe58dua82vnj7uy", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "90998", + "completion_time": "2021-11-05T00:14:37.005841058Z", + "initial_balance": "24000000", + "balance": "24000000" + } + ] + }, + { + "delegator_address": "cosmos1qf36e6wmq9h4twhdvs6pyq9qcaeu7ye0s3dqq2", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "47478", + "completion_time": "2021-11-01T22:47:26.714116854Z", + "initial_balance": "8000000", + "balance": "8000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} 
+``` diff --git a/docs/sdk/v0.47/documentation/module-system/modules/upgrade/README.mdx b/docs/sdk/v0.47/documentation/module-system/modules/upgrade/README.mdx new file mode 100644 index 00000000..e7062a71 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/modules/upgrade/README.mdx @@ -0,0 +1,640 @@ +--- +title: '`x/upgrade`' +--- + +## Abstract + +`x/upgrade` is an implementation of a Cosmos SDK module that facilitates smoothly +upgrading a live Cosmos chain to a new (breaking) software version. It accomplishes this by +providing a `BeginBlocker` hook that prevents the blockchain state machine from +proceeding once a pre-defined upgrade block height has been reached. + +The module does not prescribe anything regarding how governance decides to do an +upgrade, but just the mechanism for coordinating the upgrade safely. Without software +support for upgrades, upgrading a live chain is risky because all of the validators +need to pause their state machines at exactly the same point in the process. If +this is not done correctly, there can be state inconsistencies which are hard to +recover from. + +* [Concepts](#concepts) +* [State](#state) +* [Events](#events) +* [Client](#client) + * [CLI](#cli) + * [REST](#rest) + * [gRPC](#grpc) +* [Resources](#resources) + +## Concepts + +### Plan + +The `x/upgrade` module defines a `Plan` type in which a live upgrade is scheduled +to occur. A `Plan` can be scheduled at a specific block height. +A `Plan` is created once a (frozen) release candidate along with an appropriate upgrade +`Handler` (see below) is agreed upon, where the `Name` of a `Plan` corresponds to a +specific `Handler`. Typically, a `Plan` is created through a governance proposal +process, where if voted upon and passed, will be scheduled. The `Info` of a `Plan` +may contain various metadata about the upgrade, typically application specific +upgrade info to be included on-chain such as a git commit that validators could +automatically upgrade to. 
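To make the coordination mechanism above concrete, here is a minimal, self-contained Go sketch (not the SDK's actual implementation) of how a scheduled `Plan` gates block processing: once the chain reaches the scheduled height, every validator halts at exactly the same block. The `Plan` fields mirror the type defined by the module; `shouldExecute` is a hypothetical helper introduced for illustration.

```go
package main

import "fmt"

// Plan mirrors the x/upgrade Plan fields (Name, Height, Info).
type Plan struct {
	Name   string
	Height int64
	Info   string
}

// shouldExecute is a hypothetical helper: a scheduled plan fires once the
// chain reaches its upgrade height, halting the state machine so that all
// validators stop at the same block.
func shouldExecute(p *Plan, height int64) bool {
	return p != nil && height >= p.Height
}

func main() {
	plan := &Plan{Name: "v2-upgrade", Height: 1_000_000, Info: `{"binaries":{}}`}
	for _, h := range []int64{999_999, 1_000_000} {
		fmt.Printf("height %d: execute=%v\n", h, shouldExecute(plan, h))
	}
	// height 999999: execute=false  (keep producing blocks)
	// height 1000000: execute=true  (halt and run the upgrade handler)
}
```

The same check is why upgrading without this module is risky: absent a single agreed-upon halt height, validators would stop at different points and diverge.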
+
+#### Sidecar Process
+
+If an operator running the application binary also runs a sidecar process to assist
+in the automatic download and upgrade of a binary, the `Info` allows this process to
+be seamless. This tool is [Cosmovisor](https://github.com/cosmos/cosmos-sdk/tree/main/tools/cosmovisor#readme).
+
+```go
+type Plan struct {
+  Name string
+  Height int64
+  Info string
+}
+```
+
+### Handler
+
+The `x/upgrade` module facilitates upgrading from major version X to major version Y. To
+accomplish this, node operators must first upgrade their current binary to a new
+binary that has a corresponding `Handler` for the new version Y. It is assumed that
+this version has been fully tested and approved by the community at large. This
+`Handler` defines what state migrations need to occur before the new binary Y
+can successfully run the chain. Naturally, this `Handler` is application specific
+and not defined on a per-module basis. Registering a `Handler` is done via
+`Keeper#SetUpgradeHandler` in the application.
+
+```go
+type UpgradeHandler func(Context, Plan, VersionMap) (VersionMap, error)
+```
+
+During each `BeginBlock` execution, the `x/upgrade` module checks if there exists a
+`Plan` that should execute (is scheduled at that height). If so, the corresponding
+`Handler` is executed. If the `Plan` is expected to execute but no `Handler` is registered
+or if the binary was upgraded too early, the node will gracefully panic and exit.
+
+### StoreLoader
+
+The `x/upgrade` module also facilitates store migrations as part of the upgrade. The
+`StoreLoader` sets the migrations that need to occur before the new binary can
+successfully run the chain. This `StoreLoader` is also application specific and
+not defined on a per-module basis. Registering this `StoreLoader` is done via
+`app#SetStoreLoader` in the application.
+
+```go
+func UpgradeStoreLoader(upgradeHeight int64, storeUpgrades *store.StoreUpgrades) baseapp.StoreLoader
+```
+
+If there's a planned upgrade and the upgrade height is reached, the old binary writes the `Plan` to disk before panicking.
+
+This information is critical to ensure the `StoreUpgrades` happen smoothly at the correct height for the
+expected upgrade. It eliminates the chance of the new binary executing `StoreUpgrades` multiple
+times on every restart. Also, if there are multiple upgrades planned at the same height, the `Name`
+ensures that these `StoreUpgrades` take place only in the planned upgrade handler.
+
+### Proposal
+
+Typically, a `Plan` is proposed and submitted through governance via a proposal
+containing a `MsgSoftwareUpgrade` message.
+This proposal adheres to the standard governance process. If the proposal passes,
+the `Plan`, which targets a specific `Handler`, is persisted and scheduled. The
+upgrade can be delayed or hastened by updating the `Plan.Height` in a new proposal.
+
+```protobuf
+// MsgSoftwareUpgrade is the Msg/SoftwareUpgrade request type.
+//
+// Since: cosmos-sdk 0.46
+message MsgSoftwareUpgrade {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/MsgSoftwareUpgrade";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // plan is the upgrade plan.
+  Plan plan = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+#### Cancelling Upgrade Proposals
+
+Upgrade proposals can be cancelled. There exists a gov-enabled `MsgCancelUpgrade`
+message type, which can be embedded in a proposal, voted on and, if passed, will
+remove the scheduled upgrade `Plan`.
+Of course this requires that the upgrade was known to be a bad idea well before the
+upgrade itself, to allow time for a vote.
+
+```protobuf
+// MsgCancelUpgrade is the Msg/CancelUpgrade request type.
+//
+// Since: cosmos-sdk 0.46
+message MsgCancelUpgrade {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/MsgCancelUpgrade";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+If such a possibility is desired, the upgrade height is to be
+`2 * (VotingPeriod + DepositPeriod) + (SafetyDelta)` from the beginning of the
+upgrade proposal. The `SafetyDelta` is the time available between the success of an
+upgrade proposal and the realization that it was a bad idea (due to external social consensus).
+
+A `MsgCancelUpgrade` proposal can also be made while the original
+`MsgSoftwareUpgrade` proposal is still being voted upon, as long as the `VotingPeriod`
+ends after the `MsgSoftwareUpgrade` proposal.
+
+## State
+
+The internal state of the `x/upgrade` module is relatively minimal and simple. The
+state contains the currently active upgrade `Plan` (if one exists) under key
+`0x0`, and whether a `Plan` is marked as "done" under key `0x1`. The state also
+contains the consensus versions of all app modules in the application. The versions
+are stored as big endian `uint64`, and can be accessed with prefix `0x2` appended
+by the corresponding module name of type `string`. The state maintains a
+`Protocol Version` which can be accessed by key `0x3`.
+
+* Plan: `0x0 -> Plan`
+* Done: `0x1 | byte(plan name) -> BigEndian(Block Height)`
+* ConsensusVersion: `0x2 | byte(module name) -> BigEndian(Module Consensus Version)`
+* ProtocolVersion: `0x3 -> BigEndian(Protocol Version)`
+
+The `x/upgrade` module contains no genesis state.
+
+## Events
+
+The `x/upgrade` module does not emit any events by itself. Any and all proposal related
+events are emitted through the `x/gov` module.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `upgrade` module using the CLI.
+ +#### Query + +The `query` commands allow users to query `upgrade` state. + +```bash +simd query upgrade --help +``` + +##### applied + +The `applied` command allows users to query the block header for height at which a completed upgrade was applied. + +```bash +simd query upgrade applied [upgrade-name] [flags] +``` + +If upgrade-name was previously executed on the chain, this returns the header for the block at which it was applied. +This helps a client determine which binary was valid over a given range of blocks, as well as more context to understand past migrations. + +Example: + +```bash +simd query upgrade applied "test-upgrade" +``` + +Example Output: + +```bash expandable +"block_id": { + "hash": "A769136351786B9034A5F196DC53F7E50FCEB53B48FA0786E1BFC45A0BB646B5", + "parts": { + "total": 1, + "hash": "B13CBD23011C7480E6F11BE4594EE316548648E6A666B3575409F8F16EC6939E" + } + }, + "block_size": "7213", + "header": { + "version": { + "block": "11" + }, + "chain_id": "testnet-2", + "height": "455200", + "time": "2021-04-10T04:37:57.085493838Z", + "last_block_id": { + "hash": "0E8AD9309C2DC411DF98217AF59E044A0E1CCEAE7C0338417A70338DF50F4783", + "parts": { + "total": 1, + "hash": "8FE572A48CD10BC2CBB02653CA04CA247A0F6830FF19DC972F64D339A355E77D" + } + }, + "last_commit_hash": "DE890239416A19E6164C2076B837CC1D7F7822FC214F305616725F11D2533140", + "data_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", + "next_validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", + "consensus_hash": "048091BC7DDC283F77BFBF91D73C44DA58C3DF8A9CBC867405D8B7F3DAADA22F", + "app_hash": "28ECC486AFC332BA6CC976706DBDE87E7D32441375E3F10FD084CD4BAF0DA021", + "last_results_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "evidence_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + 
"proposer_address": "2ABC4854B1A1C5AA8403C4EA853A81ACA901CC76"
+  },
+  "num_txs": "0"
+}
+```
+
+##### module versions
+
+The `module_versions` command gets a list of module names and their respective consensus versions.
+
+Following the command with a specific module name will return only
+that module's information.
+
+```bash
+simd query upgrade module_versions [optional module_name] [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions
+```
+
+Example Output:
+
+```bash expandable
+module_versions:
+- name: auth
+  version: "2"
+- name: authz
+  version: "1"
+- name: bank
+  version: "2"
+- name: crisis
+  version: "1"
+- name: distribution
+  version: "2"
+- name: evidence
+  version: "1"
+- name: feegrant
+  version: "1"
+- name: genutil
+  version: "1"
+- name: gov
+  version: "2"
+- name: ibc
+  version: "2"
+- name: mint
+  version: "1"
+- name: params
+  version: "1"
+- name: slashing
+  version: "2"
+- name: staking
+  version: "2"
+- name: transfer
+  version: "1"
+- name: upgrade
+  version: "1"
+- name: vesting
+  version: "1"
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions ibc
+```
+
+Example Output:
+
+```bash
+module_versions:
+- name: ibc
+  version: "2"
+```
+
+##### plan
+
+The `plan` command gets the currently scheduled upgrade plan, if one exists.
+
+```bash
+simd query upgrade plan [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade plan
+```
+
+Example Output:
+
+```bash
+height: "130"
+info: ""
+name: test-upgrade
+time: "0001-01-01T00:00:00Z"
+upgraded_client_state: null
+```
+
+#### Transactions
+
+The upgrade module supports the following transactions:
+
+* `software-upgrade` - submits an upgrade proposal:
+
+```bash
+simd tx upgrade software-upgrade v2 --title="Test Proposal" --summary="testing" --deposit="100000000stake" --upgrade-height 1000000 \
+--upgrade-info '{ "binaries": { "linux/amd64":"https://example.com/simd.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f" } }' --from cosmos1..
+```
+
+* `cancel-software-upgrade` - cancels a previously submitted upgrade proposal:
+
+```bash
+simd tx upgrade cancel-software-upgrade --title="Test Proposal" --summary="testing" --deposit="100000000stake" --from cosmos1..
+```
+
+### REST
+
+A user can query the `upgrade` module using REST endpoints.
+
+#### Applied Plan
+
+`AppliedPlan` queries a previously applied upgrade plan by its name.
+
+```bash
+/cosmos/upgrade/v1beta1/applied_plan/{name}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/applied_plan/v2.0-upgrade" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "height": "30"
+}
+```
+
+#### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+/cosmos/upgrade/v1beta1/current_plan
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/current_plan" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "plan": "v2.1-upgrade"
+}
+```
+
+#### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+/cosmos/upgrade/v1beta1/module_versions
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/module_versions" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash expandable
+{
+  "module_versions": [
+    {
+      "name": "auth",
+      "version": "2"
+    },
+    {
+      "name": "authz",
+      "version": "1"
+    },
+    {
+      "name": "bank",
+      "version": "2"
+    },
+    {
+      "name": "crisis",
+      "version": "1"
+    },
+    {
+      "name": "distribution",
+      "version": "2"
+    },
+    {
+      "name": "evidence",
+      "version": "1"
+    },
+    {
+      "name": "feegrant",
+      "version": "1"
+    },
+    {
+      "name": "genutil",
+      "version": "1"
+    },
+    {
+      "name": "gov",
+      "version": "2"
+    },
+    {
+      "name": "ibc",
+      "version": "2"
+    },
+    {
+      "name": "mint",
+      "version": "1"
+    },
+    {
+      "name": "params",
+      "version": "1"
+    },
+    {
+      "name": "slashing",
+      "version": "2"
+    },
+    {
+      "name": "staking",
+      "version": "2"
+    },
+    {
+      "name": "transfer",
+      "version": "1"
+    },
+    {
+      "name": "upgrade",
+      "version": "1"
+    },
+    {
+      "name": "vesting",
+      "version": "1"
+    }
+  ]
+}
+```
+
+### gRPC
+
+A user can query the `upgrade` module using gRPC endpoints.
+
+#### Applied Plan
+
+`AppliedPlan` queries a previously applied upgrade plan by its name.
+
+```bash
+cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"name":"v2.0-upgrade"}' \
+  localhost:9090 \
+  cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example Output:
+
+```bash
+{
+  "height": "30"
+}
+```
+
+#### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example Output:
+
+```bash
+{
+  "plan": "v2.1-upgrade"
+}
+```
+
+#### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example Output:
+
+```bash expandable
+{
+  "module_versions": [
+    {
+      "name": "auth",
+      "version": "2"
+    },
+    {
+      "name": "authz",
+      "version": "1"
+    },
+    {
+      "name": "bank",
+      "version": "2"
+    },
+    {
+      "name": "crisis",
+      "version": "1"
+    },
+    {
+      "name": "distribution",
+      "version": "2"
+    },
+    {
+      "name": "evidence",
+      "version": "1"
+    },
+    {
+      "name": "feegrant",
+      "version": "1"
+    },
+    {
+      "name": "genutil",
+      "version": "1"
+    },
+    {
+      "name": "gov",
+      "version": "2"
+    },
+    {
+      "name": "ibc",
+      "version": "2"
+    },
+    {
+      "name": "mint",
+      "version": "1"
+    },
+    {
+      "name": "params",
+      "version": "1"
+    },
+    {
+      "name": "slashing",
+      "version": "2"
+    },
+    {
+      "name": "staking",
+      "version": "2"
+    },
+    {
+      "name": "transfer",
+      "version": "1"
+    },
+    {
+      "name": "upgrade",
+      "version": "1"
+    },
+    {
+      "name": "vesting",
+      "version": "1"
+    }
+  ]
+}
+```
+
+## Resources
+
+A list of (external) resources to learn more about the `x/upgrade` module.
+
+* [Cosmos Dev Series: Cosmos Blockchain Upgrade](https://medium.com/web3-surfers/cosmos-dev-series-cosmos-sdk-based-blockchain-upgrade-b5e99181554c) - The blog post that explains how software upgrades work in detail.
diff --git a/docs/sdk/v0.47/documentation/module-system/msg-services.mdx b/docs/sdk/v0.47/documentation/module-system/msg-services.mdx
new file mode 100644
index 00000000..d0f057ca
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/module-system/msg-services.mdx
@@ -0,0 +1,3421 @@
+---
+title: '`Msg` Services'
+---
+
+
+**Synopsis**
+A Protobuf `Msg` service processes [messages](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages). Protobuf `Msg` services are specific to the module in which they are defined, and only process messages defined within the said module.
They are called from `BaseApp` during [`DeliverTx`](/docs/sdk/v0.47/learn/advanced/baseapp#delivertx).
+
+
+
+### Pre-requisite Readings
+
+* [Module Manager](/docs/sdk/v0.47/documentation/module-system/module-manager)
+* [Messages and Queries](/docs/sdk/v0.47/documentation/module-system/messages-and-queries)
+
+
+
+## Implementation of a module `Msg` service
+
+Each module should define a Protobuf `Msg` service, which will be responsible for processing requests (implementing `sdk.Msg`) and returning responses.
+
+As further described in [ADR 031](/docs/sdk/next/documentation/legacy/adr-comprehensive), this approach has the advantage of clearly specifying return types and generating server and client code.
+
+Protobuf generates a `MsgServer` interface based on a definition of `Msg` service. It is the role of the module developer to implement this interface, by implementing the state transition logic that should happen upon receipt of each `sdk.Msg`. As an example, here is the generated `MsgServer` interface for `x/bank`, which exposes two `sdk.Msg`s:

+```go expandable
+/ Code generated by protoc-gen-gogo. DO NOT EDIT.
+/ source: cosmos/bank/v1beta1/tx.proto
+
+package types
+
+import (
+
+  context "context"
+  fmt "fmt"
+  _ "github.com/cosmos/cosmos-proto"
+  github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types"
+  types "github.com/cosmos/cosmos-sdk/types"
+  _ "github.com/cosmos/cosmos-sdk/types/msgservice"
+  _ "github.com/cosmos/cosmos-sdk/types/tx/amino"
+  _ "github.com/cosmos/gogoproto/gogoproto"
+  grpc1 "github.com/cosmos/gogoproto/grpc"
+  proto "github.com/cosmos/gogoproto/proto"
+  grpc "google.golang.org/grpc"
+  codes "google.golang.org/grpc/codes"
+  status "google.golang.org/grpc/status"
+  io "io"
+  math "math"
+  math_bits "math/bits"
+)
+
+/ Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the proto package it is being compiled against. +/ A compilation error at this line likely means your copy of the +/ proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 / please upgrade the proto package + +/ MsgSend represents a message to send coins from one account to another. +type MsgSend struct { + FromAddress string `protobuf:"bytes,1,opt,name=from_address,json=fromAddress,proto3" json:"from_address,omitempty"` + ToAddress string `protobuf:"bytes,2,opt,name=to_address,json=toAddress,proto3" json:"to_address,omitempty"` + Amount github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=amount,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"amount"` +} + +func (m *MsgSend) + +Reset() { *m = MsgSend{ +} +} + +func (m *MsgSend) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSend) + +ProtoMessage() { +} + +func (*MsgSend) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{0 +} +} + +func (m *MsgSend) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSend) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSend.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSend) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSend.Merge(m, src) +} + +func (m *MsgSend) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSend) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSend.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSend proto.InternalMessageInfo + +/ MsgSendResponse defines the Msg/Send response type. 
+type MsgSendResponse struct { +} + +func (m *MsgSendResponse) + +Reset() { *m = MsgSendResponse{ +} +} + +func (m *MsgSendResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSendResponse) + +ProtoMessage() { +} + +func (*MsgSendResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{1 +} +} + +func (m *MsgSendResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSendResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSendResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSendResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSendResponse.Merge(m, src) +} + +func (m *MsgSendResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSendResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSendResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSendResponse proto.InternalMessageInfo + +/ MsgMultiSend represents an arbitrary multi-in, multi-out send message. +type MsgMultiSend struct { + / Inputs, despite being `repeated`, only allows one sender input. This is + / checked in MsgMultiSend's ValidateBasic. 
+ Inputs []Input `protobuf:"bytes,1,rep,name=inputs,proto3" json:"inputs"` + Outputs []Output `protobuf:"bytes,2,rep,name=outputs,proto3" json:"outputs"` +} + +func (m *MsgMultiSend) + +Reset() { *m = MsgMultiSend{ +} +} + +func (m *MsgMultiSend) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgMultiSend) + +ProtoMessage() { +} + +func (*MsgMultiSend) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{2 +} +} + +func (m *MsgMultiSend) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgMultiSend) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgMultiSend.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgMultiSend) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgMultiSend.Merge(m, src) +} + +func (m *MsgMultiSend) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgMultiSend) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgMultiSend.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgMultiSend proto.InternalMessageInfo + +func (m *MsgMultiSend) + +GetInputs() []Input { + if m != nil { + return m.Inputs +} + +return nil +} + +func (m *MsgMultiSend) + +GetOutputs() []Output { + if m != nil { + return m.Outputs +} + +return nil +} + +/ MsgMultiSendResponse defines the Msg/MultiSend response type. 
+type MsgMultiSendResponse struct { +} + +func (m *MsgMultiSendResponse) + +Reset() { *m = MsgMultiSendResponse{ +} +} + +func (m *MsgMultiSendResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgMultiSendResponse) + +ProtoMessage() { +} + +func (*MsgMultiSendResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{3 +} +} + +func (m *MsgMultiSendResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgMultiSendResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgMultiSendResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgMultiSendResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgMultiSendResponse.Merge(m, src) +} + +func (m *MsgMultiSendResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgMultiSendResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgMultiSendResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgMultiSendResponse proto.InternalMessageInfo + +/ MsgUpdateParams is the Msg/UpdateParams request type. +/ +/ Since: cosmos-sdk 0.47 +type MsgUpdateParams struct { + / authority is the address that controls the module (defaults to x/gov unless overwritten). + Authority string `protobuf:"bytes,1,opt,name=authority,proto3" json:"authority,omitempty"` + / params defines the x/bank parameters to update. + / + / NOTE: All parameters must be supplied. 
+	Params Params `protobuf:"bytes,2,opt,name=params,proto3" json:"params"`
+}
+
+func (m *MsgUpdateParams) Reset()         { *m = MsgUpdateParams{} }
+func (m *MsgUpdateParams) String() string { return proto.CompactTextString(m) }
+func (*MsgUpdateParams) ProtoMessage()    {}
+func (*MsgUpdateParams) Descriptor() ([]byte, []int) {
+	return fileDescriptor_1d8cb1613481f5b7, []int{4}
+}
+
+func (m *MsgUpdateParams) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+
+func (m *MsgUpdateParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_MsgUpdateParams.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+
+func (m *MsgUpdateParams) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_MsgUpdateParams.Merge(m, src)
+}
+
+func (m *MsgUpdateParams) XXX_Size() int {
+	return m.Size()
+}
+
+func (m *MsgUpdateParams) XXX_DiscardUnknown() {
+	xxx_messageInfo_MsgUpdateParams.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_MsgUpdateParams proto.InternalMessageInfo
+
+func (m *MsgUpdateParams) GetAuthority() string {
+	if m != nil {
+		return m.Authority
+	}
+	return ""
+}
+
+func (m *MsgUpdateParams) GetParams() Params {
+	if m != nil {
+		return m.Params
+	}
+	return Params{}
+}
+
+// MsgUpdateParamsResponse defines the response structure for executing a
+// MsgUpdateParams message.
+//
+// Since: cosmos-sdk 0.47
+type MsgUpdateParamsResponse struct {
+}
+
+func (m *MsgUpdateParamsResponse) Reset()         { *m = MsgUpdateParamsResponse{} }
+func (m *MsgUpdateParamsResponse) String() string { return proto.CompactTextString(m) }
+func (*MsgUpdateParamsResponse) ProtoMessage()    {}
+func (*MsgUpdateParamsResponse) Descriptor() ([]byte, []int) {
+	return fileDescriptor_1d8cb1613481f5b7, []int{5}
+}
+
+func (m *MsgUpdateParamsResponse) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+
+func (m *MsgUpdateParamsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_MsgUpdateParamsResponse.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+
+func (m *MsgUpdateParamsResponse) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_MsgUpdateParamsResponse.Merge(m, src)
+}
+
+func (m *MsgUpdateParamsResponse) XXX_Size() int {
+	return m.Size()
+}
+
+func (m *MsgUpdateParamsResponse) XXX_DiscardUnknown() {
+	xxx_messageInfo_MsgUpdateParamsResponse.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_MsgUpdateParamsResponse proto.InternalMessageInfo
+
+// MsgSetSendEnabled is the Msg/SetSendEnabled request type.
+//
+// Only entries to add/update/delete need to be included.
+// Existing SendEnabled entries that are not included in this
+// message are left unchanged.
+//
+// Since: cosmos-sdk 0.47
+type MsgSetSendEnabled struct {
+	Authority string `protobuf:"bytes,1,opt,name=authority,proto3" json:"authority,omitempty"`
+	// send_enabled is the list of entries to add or update.
+	SendEnabled []*SendEnabled `protobuf:"bytes,2,rep,name=send_enabled,json=sendEnabled,proto3" json:"send_enabled,omitempty"`
+	// use_default_for is a list of denoms that should use the params.default_send_enabled value.
+	// Denoms listed here will have their SendEnabled entries deleted.
+	// If a denom is included that doesn't have a SendEnabled entry,
+	// it will be ignored.
+	UseDefaultFor []string `protobuf:"bytes,3,rep,name=use_default_for,json=useDefaultFor,proto3" json:"use_default_for,omitempty"`
+}
+
+func (m *MsgSetSendEnabled) Reset()         { *m = MsgSetSendEnabled{} }
+func (m *MsgSetSendEnabled) String() string { return proto.CompactTextString(m) }
+func (*MsgSetSendEnabled) ProtoMessage()    {}
+func (*MsgSetSendEnabled) Descriptor() ([]byte, []int) {
+	return fileDescriptor_1d8cb1613481f5b7, []int{6}
+}
+
+func (m *MsgSetSendEnabled) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+
+func (m *MsgSetSendEnabled) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_MsgSetSendEnabled.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+
+func (m *MsgSetSendEnabled) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_MsgSetSendEnabled.Merge(m, src)
+}
+
+func (m *MsgSetSendEnabled) XXX_Size() int {
+	return m.Size()
+}
+
+func (m *MsgSetSendEnabled) XXX_DiscardUnknown() {
+	xxx_messageInfo_MsgSetSendEnabled.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_MsgSetSendEnabled proto.InternalMessageInfo
+
+func (m *MsgSetSendEnabled) GetAuthority() string {
+	if m != nil {
+		return m.Authority
+	}
+	return ""
+}
+
+func (m *MsgSetSendEnabled) GetSendEnabled() []*SendEnabled {
+	if m != nil {
+		return m.SendEnabled
+	}
+	return nil
+}
+
+func (m *MsgSetSendEnabled) GetUseDefaultFor() []string {
+	if m != nil {
+		return m.UseDefaultFor
+	}
+	return nil
+}
+
+// MsgSetSendEnabledResponse defines the Msg/SetSendEnabled response type.
+//
+// Since: cosmos-sdk 0.47
+type MsgSetSendEnabledResponse struct {
+}
+
+func (m *MsgSetSendEnabledResponse) Reset()         { *m = MsgSetSendEnabledResponse{} }
+func (m *MsgSetSendEnabledResponse) String() string { return proto.CompactTextString(m) }
+func (*MsgSetSendEnabledResponse) ProtoMessage()    {}
+func (*MsgSetSendEnabledResponse) Descriptor() ([]byte, []int) {
+	return fileDescriptor_1d8cb1613481f5b7, []int{7}
+}
+
+func (m *MsgSetSendEnabledResponse) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+
+func (m *MsgSetSendEnabledResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_MsgSetSendEnabledResponse.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+
+func (m *MsgSetSendEnabledResponse) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_MsgSetSendEnabledResponse.Merge(m, src)
+}
+
+func (m *MsgSetSendEnabledResponse) XXX_Size() int {
+	return m.Size()
+}
+
+func (m *MsgSetSendEnabledResponse) XXX_DiscardUnknown() {
+	xxx_messageInfo_MsgSetSendEnabledResponse.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_MsgSetSendEnabledResponse proto.InternalMessageInfo
+
+func init() {
+	proto.RegisterType((*MsgSend)(nil), "cosmos.bank.v1beta1.MsgSend")
+	proto.RegisterType((*MsgSendResponse)(nil), "cosmos.bank.v1beta1.MsgSendResponse")
+	proto.RegisterType((*MsgMultiSend)(nil), "cosmos.bank.v1beta1.MsgMultiSend")
+	proto.RegisterType((*MsgMultiSendResponse)(nil), "cosmos.bank.v1beta1.MsgMultiSendResponse")
+	proto.RegisterType((*MsgUpdateParams)(nil), "cosmos.bank.v1beta1.MsgUpdateParams")
+	proto.RegisterType((*MsgUpdateParamsResponse)(nil), "cosmos.bank.v1beta1.MsgUpdateParamsResponse")
+	proto.RegisterType((*MsgSetSendEnabled)(nil), "cosmos.bank.v1beta1.MsgSetSendEnabled")
+	proto.RegisterType((*MsgSetSendEnabledResponse)(nil),
"cosmos.bank.v1beta1.MsgSetSendEnabledResponse") +} + +func init() { + proto.RegisterFile("cosmos/bank/v1beta1/tx.proto", fileDescriptor_1d8cb1613481f5b7) +} + +var fileDescriptor_1d8cb1613481f5b7 = []byte{ + / 690 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x54, 0xcd, 0x4f, 0xd4, 0x5e, + 0x14, 0x6d, 0x99, 0xdf, 0x6f, 0x48, 0x1f, 0xa3, 0x84, 0x4a, 0x84, 0x29, 0xa4, 0x03, 0x8d, 0x21, + 0x80, 0xd2, 0x0a, 0x7e, 0x25, 0x63, 0x34, 0x3a, 0xa8, 0x89, 0x26, 0x13, 0xcd, 0x10, 0x17, 0xba, + 0x99, 0xb4, 0xf4, 0xd1, 0x69, 0xa0, 0x7d, 0x4d, 0xdf, 0x2b, 0x81, 0x9d, 0xba, 0x32, 0xae, 0xdc, + 0xbb, 0x61, 0x69, 0x5c, 0xb1, 0x70, 0x69, 0xe2, 0x96, 0x25, 0x71, 0xe5, 0x4a, 0x0d, 0x2c, 0xd0, + 0xff, 0xc2, 0xbc, 0x8f, 0x96, 0xce, 0x30, 0xc3, 0x10, 0x37, 0xd3, 0xce, 0xbb, 0xe7, 0x9c, 0x7b, + 0xcf, 0xed, 0x69, 0xc1, 0xe4, 0x2a, 0xc2, 0x01, 0xc2, 0x96, 0x63, 0x87, 0xeb, 0xd6, 0xe6, 0xa2, + 0x03, 0x89, 0xbd, 0x68, 0x91, 0x2d, 0x33, 0x8a, 0x11, 0x41, 0xea, 0x05, 0x5e, 0x35, 0x69, 0xd5, + 0x14, 0x55, 0x6d, 0xd4, 0x43, 0x1e, 0x62, 0x75, 0x8b, 0xde, 0x71, 0xa8, 0xa6, 0x67, 0x42, 0x18, + 0x66, 0x42, 0xab, 0xc8, 0x0f, 0x4f, 0xd4, 0x73, 0x8d, 0x98, 0x2e, 0xaf, 0x97, 0x79, 0xbd, 0xc9, + 0x85, 0x45, 0x5f, 0x5e, 0x1a, 0x13, 0xd4, 0x00, 0x7b, 0xd6, 0xe6, 0x22, 0xbd, 0x88, 0xc2, 0x88, + 0x1d, 0xf8, 0x21, 0xb2, 0xd8, 0x2f, 0x3f, 0x32, 0x3e, 0x0c, 0x80, 0xc1, 0x3a, 0xf6, 0x56, 0x60, + 0xe8, 0xaa, 0xb7, 0x41, 0x69, 0x2d, 0x46, 0x41, 0xd3, 0x76, 0xdd, 0x18, 0x62, 0x3c, 0x2e, 0x4f, + 0xc9, 0xb3, 0x4a, 0x6d, 0xfc, 0xdb, 0xe7, 0x85, 0x51, 0xa1, 0x7f, 0x9f, 0x57, 0x56, 0x48, 0xec, + 0x87, 0x5e, 0x63, 0x88, 0xa2, 0xc5, 0x91, 0x7a, 0x0b, 0x00, 0x82, 0x32, 0xea, 0x40, 0x1f, 0xaa, + 0x42, 0x50, 0x4a, 0x6c, 0x81, 0xa2, 0x1d, 0xa0, 0x24, 0x24, 0xe3, 0x85, 0xa9, 0xc2, 0xec, 0xd0, + 0x52, 0xd9, 0xcc, 0x96, 0x88, 0x61, 0xba, 0x44, 0x73, 0x19, 0xf9, 0x61, 0xed, 0xc6, 0xde, 0x8f, + 0x8a, 0xf4, 0xe9, 0x67, 0x65, 0xd6, 0xf3, 0x49, 0x2b, 0x71, 0xcc, 0x55, 0x14, 
0x08, 0xe7, 0xe2, + 0xb2, 0x80, 0xdd, 0x75, 0x8b, 0x6c, 0x47, 0x10, 0x33, 0x02, 0xfe, 0x78, 0xb4, 0x3b, 0x2f, 0x37, + 0x84, 0x7e, 0xf5, 0xea, 0xdb, 0x9d, 0x8a, 0xf4, 0x7b, 0xa7, 0x22, 0xbd, 0x39, 0xda, 0x9d, 0x6f, + 0xb3, 0xfa, 0xee, 0x68, 0x77, 0x5e, 0xcd, 0x49, 0x88, 0x8d, 0x18, 0x23, 0x60, 0x58, 0xdc, 0x36, + 0x20, 0x8e, 0x50, 0x88, 0xa1, 0xf1, 0x45, 0x06, 0xa5, 0x3a, 0xf6, 0xea, 0xc9, 0x06, 0xf1, 0xd9, + 0xd6, 0xee, 0x80, 0xa2, 0x1f, 0x46, 0x09, 0xa1, 0xfb, 0xa2, 0xf3, 0x6b, 0x66, 0x97, 0x10, 0x98, + 0x8f, 0x29, 0xa4, 0xa6, 0x50, 0x03, 0x62, 0x28, 0x4e, 0x52, 0xef, 0x81, 0x41, 0x94, 0x10, 0xc6, + 0x1f, 0x60, 0xfc, 0x89, 0xae, 0xfc, 0xa7, 0x0c, 0x93, 0x17, 0x48, 0x69, 0xd5, 0xcb, 0xa9, 0x25, + 0x21, 0x49, 0xcd, 0x8c, 0xb5, 0x9b, 0xc9, 0xa6, 0x35, 0x2e, 0x82, 0xd1, 0xfc, 0xff, 0xcc, 0xd6, + 0x57, 0x99, 0x59, 0x7d, 0x1e, 0xb9, 0x36, 0x81, 0xcf, 0xec, 0xd8, 0x0e, 0xb0, 0x7a, 0x13, 0x28, + 0x76, 0x42, 0x5a, 0x28, 0xf6, 0xc9, 0x76, 0xdf, 0x30, 0x1c, 0x43, 0xd5, 0xbb, 0xa0, 0x18, 0x31, + 0x05, 0x16, 0x83, 0x5e, 0x8e, 0x78, 0x93, 0xb6, 0x95, 0x70, 0x56, 0xf5, 0x3a, 0x35, 0x73, 0xac, + 0x47, 0xfd, 0x4c, 0xe7, 0xfc, 0x6c, 0xf1, 0x77, 0xa2, 0x63, 0x5a, 0xa3, 0x0c, 0xc6, 0x3a, 0x8e, + 0x32, 0x73, 0x7f, 0x64, 0x30, 0xc2, 0x9e, 0x23, 0xa1, 0x9e, 0x1f, 0x86, 0xb6, 0xb3, 0x01, 0xdd, + 0x7f, 0xb6, 0xb7, 0x0c, 0x4a, 0x18, 0x86, 0x6e, 0x13, 0x72, 0x1d, 0xf1, 0xd8, 0xa6, 0xba, 0x9a, + 0xcc, 0xf5, 0x6b, 0x0c, 0xe1, 0x5c, 0xf3, 0x19, 0x30, 0x9c, 0x60, 0xd8, 0x74, 0xe1, 0x9a, 0x9d, + 0x6c, 0x90, 0xe6, 0x1a, 0x8a, 0x59, 0xfc, 0x95, 0xc6, 0xb9, 0x04, 0xc3, 0x07, 0xfc, 0xf4, 0x11, + 0x8a, 0xab, 0xd6, 0xc9, 0x5d, 0x4c, 0x76, 0x06, 0x35, 0xef, 0xca, 0x98, 0x00, 0xe5, 0x13, 0x87, + 0xe9, 0x22, 0x96, 0x5e, 0x17, 0x40, 0xa1, 0x8e, 0x3d, 0xf5, 0x09, 0xf8, 0x8f, 0x65, 0x77, 0xb2, + 0xeb, 0xd0, 0x22, 0xf2, 0xda, 0xa5, 0xd3, 0xaa, 0xa9, 0xa6, 0xfa, 0x02, 0x28, 0xc7, 0x2f, 0xc3, + 0x74, 0x2f, 0x4a, 0x06, 0xd1, 0xe6, 0xfa, 0x42, 0x32, 0x69, 0x07, 0x94, 0xda, 0x02, 0xd9, 0x73, + 0xa0, 0x3c, 0x4a, 
0xbb, 0x72, 0x16, 0x54, 0xd6, 0xa3, 0x05, 0xce, 0x77, 0xe4, 0x62, 0xa6, 0xb7,
+	0xed, 0x3c, 0x4e, 0x33, 0xcf, 0x86, 0x4b, 0x3b, 0x69, 0xff, 0xbf, 0xa2, 0x29, 0xaf, 0x2d, 0xef,
+	0x1d, 0xe8, 0xf2, 0xfe, 0x81, 0x2e, 0xff, 0x3a, 0xd0, 0xe5, 0xf7, 0x87, 0xba, 0xb4, 0x7f, 0xa8,
+	0x4b, 0xdf, 0x0f, 0x75, 0xe9, 0xe5, 0xdc, 0xa9, 0x9f, 0x35, 0x11, 0x7b, 0xf6, 0x75, 0x73, 0x8a,
+	0xec, 0xeb, 0x7d, 0xed, 0x6f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x2f, 0xe8, 0x90, 0x24, 0x8f, 0x06,
+	0x00, 0x00,
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion4
+
+// MsgClient is the client API for Msg service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
+type MsgClient interface {
+	// Send defines a method for sending coins from one account to another account.
+	Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error)
+	// MultiSend defines a method for sending coins from some accounts to other accounts.
+	MultiSend(ctx context.Context, in *MsgMultiSend, opts ...grpc.CallOption) (*MsgMultiSendResponse, error)
+	// UpdateParams defines a governance operation for updating the x/bank module parameters.
+	// The authority is defined in the keeper.
+	//
+	// Since: cosmos-sdk 0.47
+	UpdateParams(ctx context.Context, in *MsgUpdateParams, opts ...grpc.CallOption) (*MsgUpdateParamsResponse, error)
+	// SetSendEnabled is a governance operation for setting the SendEnabled flag
+	// on any number of Denoms. Only the entries to add or update should be
+	// included. Entries that already exist in the store, but that aren't
+	// included in this message, will be left unchanged.
+	//
+	// Since: cosmos-sdk 0.47
+	SetSendEnabled(ctx context.Context, in *MsgSetSendEnabled, opts ...grpc.CallOption) (*MsgSetSendEnabledResponse, error)
+}
+
+type msgClient struct {
+	cc grpc1.ClientConn
+}
+
+func NewMsgClient(cc grpc1.ClientConn) MsgClient {
+	return &msgClient{cc}
+}
+
+func (c *msgClient) Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error) {
+	out := new(MsgSendResponse)
+	err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/Send", in, out, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *msgClient) MultiSend(ctx context.Context, in *MsgMultiSend, opts ...grpc.CallOption) (*MsgMultiSendResponse, error) {
+	out := new(MsgMultiSendResponse)
+	err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/MultiSend", in, out, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *msgClient) UpdateParams(ctx context.Context, in *MsgUpdateParams, opts ...grpc.CallOption) (*MsgUpdateParamsResponse, error) {
+	out := new(MsgUpdateParamsResponse)
+	err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/UpdateParams", in, out, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *msgClient) SetSendEnabled(ctx context.Context, in *MsgSetSendEnabled, opts ...grpc.CallOption) (*MsgSetSendEnabledResponse, error) {
+	out := new(MsgSetSendEnabledResponse)
+	err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/SetSendEnabled", in, out, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+// MsgServer is the server API for Msg service.
+type MsgServer interface {
+	// Send defines a method for sending coins from one account to another account.
+	Send(context.Context, *MsgSend) (*MsgSendResponse, error)
+	// MultiSend defines a method for sending coins from some accounts to other accounts.
+	MultiSend(context.Context, *MsgMultiSend) (*MsgMultiSendResponse, error)
+	// UpdateParams defines a governance operation for updating the x/bank module parameters.
+	// The authority is defined in the keeper.
+	//
+	// Since: cosmos-sdk 0.47
+	UpdateParams(context.Context, *MsgUpdateParams) (*MsgUpdateParamsResponse, error)
+	// SetSendEnabled is a governance operation for setting the SendEnabled flag
+	// on any number of Denoms. Only the entries to add or update should be
+	// included. Entries that already exist in the store, but that aren't
+	// included in this message, will be left unchanged.
+	//
+	// Since: cosmos-sdk 0.47
+	SetSendEnabled(context.Context, *MsgSetSendEnabled) (*MsgSetSendEnabledResponse, error)
+}
+
+// UnimplementedMsgServer can be embedded to have forward compatible implementations.
+type UnimplementedMsgServer struct {
+}
+
+func (*UnimplementedMsgServer) Send(ctx context.Context, req *MsgSend) (*MsgSendResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method Send not implemented")
+}
+
+func (*UnimplementedMsgServer) MultiSend(ctx context.Context, req *MsgMultiSend) (*MsgMultiSendResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method MultiSend not implemented")
+}
+
+func (*UnimplementedMsgServer) UpdateParams(ctx context.Context, req *MsgUpdateParams) (*MsgUpdateParamsResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method UpdateParams not implemented")
+}
+
+func (*UnimplementedMsgServer) SetSendEnabled(ctx context.Context, req *MsgSetSendEnabled) (*MsgSetSendEnabledResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method SetSendEnabled not implemented")
+}
+
+func RegisterMsgServer(s grpc1.Server, srv MsgServer) {
+	s.RegisterService(&_Msg_serviceDesc, srv)
+}
+
+func _Msg_Send_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(MsgSend)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MsgServer).Send(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/cosmos.bank.v1beta1.Msg/Send",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MsgServer).Send(ctx, req.(*MsgSend))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Msg_MultiSend_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(MsgMultiSend)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MsgServer).MultiSend(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/cosmos.bank.v1beta1.Msg/MultiSend",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MsgServer).MultiSend(ctx, req.(*MsgMultiSend))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Msg_UpdateParams_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(MsgUpdateParams)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MsgServer).UpdateParams(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/cosmos.bank.v1beta1.Msg/UpdateParams",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MsgServer).UpdateParams(ctx, req.(*MsgUpdateParams))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Msg_SetSendEnabled_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(MsgSetSendEnabled)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MsgServer).SetSendEnabled(ctx,
in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/cosmos.bank.v1beta1.Msg/SetSendEnabled",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MsgServer).SetSendEnabled(ctx, req.(*MsgSetSendEnabled))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+var _Msg_serviceDesc = grpc.ServiceDesc{
+	ServiceName: "cosmos.bank.v1beta1.Msg",
+	HandlerType: (*MsgServer)(nil),
+	Methods: []grpc.MethodDesc{
+		{
+			MethodName: "Send",
+			Handler:    _Msg_Send_Handler,
+		},
+		{
+			MethodName: "MultiSend",
+			Handler:    _Msg_MultiSend_Handler,
+		},
+		{
+			MethodName: "UpdateParams",
+			Handler:    _Msg_UpdateParams_Handler,
+		},
+		{
+			MethodName: "SetSendEnabled",
+			Handler:    _Msg_SetSendEnabled_Handler,
+		},
+	},
+	Streams:  []grpc.StreamDesc{},
+	Metadata: "cosmos/bank/v1beta1/tx.proto",
+}
+
+func (m *MsgSend) Marshal() (dAtA []byte, err error) {
+	size := m.Size()
+	dAtA = make([]byte, size)
+	n, err := m.MarshalToSizedBuffer(dAtA[:size])
+	if err != nil {
+		return nil, err
+	}
+	return dAtA[:n], nil
+}
+
+func (m *MsgSend) MarshalTo(dAtA []byte) (int, error) {
+	size := m.Size()
+	return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *MsgSend) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+	i := len(dAtA)
+	_ = i
+	var l int
+	_ = l
+	if len(m.Amount) > 0 {
+		for iNdEx := len(m.Amount) - 1; iNdEx >= 0; iNdEx-- {
+			{
+				size, err := m.Amount[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+				if err != nil {
+					return 0, err
+				}
+				i -= size
+				i = encodeVarintTx(dAtA, i, uint64(size))
+			}
+			i--
+			dAtA[i] = 0x1a
+		}
+	}
+	if len(m.ToAddress) > 0 {
+		i -= len(m.ToAddress)
+		copy(dAtA[i:], m.ToAddress)
+		i = encodeVarintTx(dAtA, i, uint64(len(m.ToAddress)))
+		i--
+		dAtA[i] = 0x12
+	}
+	if len(m.FromAddress) > 0 {
+		i -= len(m.FromAddress)
+		copy(dAtA[i:], m.FromAddress)
+		i = encodeVarintTx(dAtA, i, uint64(len(m.FromAddress)))
+		i--
+		dAtA[i] = 0xa
+	}
+	return len(dAtA) - i, nil
+}
+
+func (m *MsgSendResponse)
+
+Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSendResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSendResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgMultiSend) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgMultiSend) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgMultiSend) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Outputs) > 0 { + for iNdEx := len(m.Outputs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Outputs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 +} + +} + if len(m.Inputs) > 0 { + for iNdEx := len(m.Inputs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Inputs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +} + +return len(dAtA) - i, nil +} + +func (m *MsgMultiSendResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgMultiSendResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgMultiSendResponse) + 
+MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgUpdateParams) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgUpdateParams) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgUpdateParams) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + { + size, err := m.Params.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 + if len(m.Authority) > 0 { + i -= len(m.Authority) + +copy(dAtA[i:], m.Authority) + +i = encodeVarintTx(dAtA, i, uint64(len(m.Authority))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgUpdateParamsResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgUpdateParamsResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgUpdateParamsResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgSetSendEnabled) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSetSendEnabled) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSetSendEnabled) + +MarshalToSizedBuffer(dAtA []byte) (int, 
error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.UseDefaultFor) > 0 { + for iNdEx := len(m.UseDefaultFor) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.UseDefaultFor[iNdEx]) + +copy(dAtA[i:], m.UseDefaultFor[iNdEx]) + +i = encodeVarintTx(dAtA, i, uint64(len(m.UseDefaultFor[iNdEx]))) + +i-- + dAtA[i] = 0x1a +} + +} + if len(m.SendEnabled) > 0 { + for iNdEx := len(m.SendEnabled) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.SendEnabled[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 +} + +} + if len(m.Authority) > 0 { + i -= len(m.Authority) + +copy(dAtA[i:], m.Authority) + +i = encodeVarintTx(dAtA, i, uint64(len(m.Authority))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgSetSendEnabledResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSetSendEnabledResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSetSendEnabledResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func encodeVarintTx(dAtA []byte, offset int, v uint64) + +int { + offset -= sovTx(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + +v >>= 7 + offset++ +} + +dAtA[offset] = uint8(v) + +return base +} + +func (m *MsgSend) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.FromAddress) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + +l = len(m.ToAddress) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + if len(m.Amount) > 0 { + for _, e := range m.Amount { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgSendResponse) + 
+Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgMultiSend) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if len(m.Inputs) > 0 { + for _, e := range m.Inputs { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + if len(m.Outputs) > 0 { + for _, e := range m.Outputs { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgMultiSendResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgUpdateParams) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Authority) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + +l = m.Params.Size() + +n += 1 + l + sovTx(uint64(l)) + +return n +} + +func (m *MsgUpdateParamsResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgSetSendEnabled) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Authority) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + if len(m.SendEnabled) > 0 { + for _, e := range m.SendEnabled { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + if len(m.UseDefaultFor) > 0 { + for _, s := range m.UseDefaultFor { + l = len(s) + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgSetSendEnabledResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func sovTx(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} + +func sozTx(x uint64) (n int) { + return sovTx(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +func (m *MsgSend) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + 
break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSend: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSend: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FromAddress", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.FromAddress = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ToAddress", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.ToAddress = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Amount", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break 
+} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Amount = append(m.Amount, types.Coin{ +}) + if err := m.Amount[len(m.Amount)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSendResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSendResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSendResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgMultiSend) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := 
dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgMultiSend: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgMultiSend: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Inputs", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Inputs = append(m.Inputs, Input{ +}) + if err := m.Inputs[len(m.Inputs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Outputs", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Outputs = append(m.Outputs, Output{ +}) + if err := m.Outputs[len(m.Outputs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + 
skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgMultiSendResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgMultiSendResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgMultiSendResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgUpdateParams) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgUpdateParams: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgUpdateParams: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Authority", wireType) +} + +var stringLen uint64 + for 
shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Authority = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Params", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := m.Params.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgUpdateParamsResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return 
fmt.Errorf("proto: MsgUpdateParamsResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgUpdateParamsResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSetSendEnabled) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSetSendEnabled: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSetSendEnabled: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Authority", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Authority = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field SendEnabled", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.SendEnabled = append(m.SendEnabled, &SendEnabled{ +}) + if err := m.SendEnabled[len(m.SendEnabled)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UseDefaultFor", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.UseDefaultFor = append(m.UseDefaultFor, string(dAtA[iNdEx:postIndex])) + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSetSendEnabledResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := 
dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSetSendEnabledResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSetSendEnabledResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func skipTx(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + +iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break +} + +} + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + if length < 0 { + return 0, ErrInvalidLengthTx +} + +iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupTx +} + +depth-- + case 5: + iNdEx += 4 + default: + return 0, fmt.Errorf("proto: illegal wireType %d", wireType) +} + if iNdEx < 0 { + return 0, ErrInvalidLengthTx +} + if depth == 0 { + 
return iNdEx, nil +} + +} + +return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthTx = fmt.Errorf("proto: negative length found during unmarshaling") + +ErrIntOverflowTx = fmt.Errorf("proto: integer overflow") + +ErrUnexpectedEndOfGroupTx = fmt.Errorf("proto: unexpected end of group") +) +``` + +When possible, the existing module's [`Keeper`](/docs/sdk/v0.47/documentation/module-system/keeper) should implement `MsgServer`, otherwise a `msgServer` struct that embeds the `Keeper` can be created, typically in `./keeper/msg_server.go`: + +```go expandable +package keeper + +import ( + + "context" + "github.com/armon/go-metrics" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +type msgServer struct { + Keeper +} + +var _ types.MsgServer = msgServer{ +} + +/ NewMsgServerImpl returns an implementation of the bank MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + +from, err := sdk.AccAddressFromBech32(msg.FromAddress) + if err != nil { + return nil, err +} + +to, err := sdk.AccAddressFromBech32(msg.ToAddress) + if err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(goCtx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + + / NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + accAddr := sdk.MustAccAddressFromBech32(out.Address) + if k.BlockedAddr(accAddr) { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs, msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(goCtx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, 
sdkerrors.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority)
+}
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ if err := k.SetParams(ctx, req.Params); err != nil {
+ return nil, err
+}
+
+return &types.MsgUpdateParamsResponse{
+}, nil
+}
+
+func (k msgServer)
+
+SetSendEnabled(goCtx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) {
+ if k.GetAuthority() != msg.Authority {
+ return nil, sdkerrors.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority)
+}
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ if len(msg.SendEnabled) > 0 {
+ k.SetAllSendEnabled(ctx, msg.SendEnabled)
+}
+ if len(msg.UseDefaultFor) > 0 {
+ k.DeleteSendEnabled(ctx, msg.UseDefaultFor...)
+}
+
+return &types.MsgSetSendEnabledResponse{
+}, nil
+}
+```
+
+`msgServer` methods can retrieve the `sdk.Context` from their `context.Context` parameter using `sdk.UnwrapSDKContext`, as the `Send` method above does:
+
+```go
+func (k msgServer) Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {
+	ctx := sdk.UnwrapSDKContext(goCtx)
+	/ ...
+}
+```
+
+`sdk.Msg` processing usually follows these 3 steps:
+
+### Validation
+
+The message server must perform all validation required (both *stateful* and *stateless*) to make sure the `message` is valid.
+The `signer` is charged for the gas cost of this validation.
+
+For example, a `msgServer` method for a `transfer` message should check that the sending account has enough funds to actually perform the transfer.
+
+It is recommended to implement all validation checks in a separate function that takes the relevant state values as arguments. This makes the checks easy to test in isolation. As expected, expensive validation functions charge additional gas. Example:
+
+```go
+func ValidateMsgA(msg MsgA, now Time, gm GasMeter) error {
+	if now.After(msg.Expire) {
+		return sdkerrors.ErrInvalidRequest.Wrap("msg expired")
+	}
+	gm.ConsumeGas(1000, "signature verification")
+	return signatureVerification(msg.Prover, msg.Data)
+}
+```
+
+
+Previously, the `ValidateBasic` method was used to perform simple and stateless validation checks.
+This way of validating is deprecated, which means the `msgServer` must now perform all validation checks.
+
+
+### State Transition
+
+After the validation is successful, the `msgServer` method uses the [`keeper`](/docs/sdk/v0.47/documentation/module-system/keeper) functions to access the state and perform a state transition.
+
+### Events
+
+Before returning, `msgServer` methods generally emit one or more [events](/docs/sdk/v0.47/learn/advanced/events) by using the `EventManager` held in the `ctx`. Use the newer `EmitTypedEvent` function, which uses protobuf-based event types:
+
+```go
+ctx.EventManager().EmitTypedEvent(
+	&group.EventABC{
+		Key1: Value1,
+		Key2: Value2,
+	})
+```
+
+or the older `EmitEvent` function:
+
+```go
+ctx.EventManager().EmitEvent(
+	sdk.NewEvent(
+		eventType, / e.g. sdk.EventTypeMessage for a message, types.CustomEventType for a custom event defined in the module
+		sdk.NewAttribute(key1, value1),
+		sdk.NewAttribute(key2, value2),
+	),
+)
+```
+
+These events are relayed back to the underlying consensus engine and can be used by service providers to implement services around the application. Click [here](/docs/sdk/v0.47/learn/advanced/events) to learn more about events.
+
+The invoked `msgServer` method returns a `proto.Message` response and an `error`. These return values are then wrapped into an `*sdk.Result` or an `error` using `sdk.WrapServiceResult(ctx sdk.Context, res proto.Message, err error)`:
+
+```go expandable
+package baseapp
+
+import (
+
+	"context"
+	"fmt"
+
+	gogogrpc "github.com/cosmos/gogoproto/grpc"
+	"github.com/cosmos/gogoproto/proto"
+	"google.golang.org/grpc"
+
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+/ MsgServiceRouter routes fully-qualified Msg service methods to their handler.
+type MsgServiceRouter struct {
+	interfaceRegistry codectypes.InterfaceRegistry
+	routes map[string]MsgServiceHandler
+}
+
+var _ gogogrpc.Server = &MsgServiceRouter{
+}
+
+/ NewMsgServiceRouter creates a new MsgServiceRouter.
+func NewMsgServiceRouter() *MsgServiceRouter { + return &MsgServiceRouter{ + routes: map[string]MsgServiceHandler{ +}, +} +} + +/ MsgServiceHandler defines a function type which handles Msg service message. +type MsgServiceHandler = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) + +/ Handler returns the MsgServiceHandler for a given msg or nil if not found. +func (msr *MsgServiceRouter) + +Handler(msg sdk.Msg) + +MsgServiceHandler { + return msr.routes[sdk.MsgTypeURL(msg)] +} + +/ HandlerByTypeURL returns the MsgServiceHandler for a given query route path or nil +/ if not found. +func (msr *MsgServiceRouter) + +HandlerByTypeURL(typeURL string) + +MsgServiceHandler { + return msr.routes[typeURL] +} + +/ RegisterService implements the gRPC Server.RegisterService method. sd is a gRPC +/ service description, handler is an object which implements that gRPC service. +/ +/ This function PANICs: +/ - if it is called before the service `Msg`s have been registered using +/ RegisterInterfaces, +/ - or if a service is being registered twice. +func (msr *MsgServiceRouter) + +RegisterService(sd *grpc.ServiceDesc, handler interface{ +}) { + / Adds a top-level query handler based on the gRPC service name. + for _, method := range sd.Methods { + fqMethod := fmt.Sprintf("/%s/%s", sd.ServiceName, method.MethodName) + methodHandler := method.Handler + + var requestTypeName string + + / NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. + / This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. + / We use a no-op interceptor to avoid actually calling into the handler itself. 
+ _, _ = methodHandler(nil, context.Background(), func(i interface{ +}) + +error { + msg, ok := i.(sdk.Msg) + if !ok { + / We panic here because there is no other alternative and the app cannot be initialized correctly + / this should only happen if there is a problem with code generation in which case the app won't + / work correctly anyway. + panic(fmt.Errorf("unable to register service method %s: %T does not implement sdk.Msg", fqMethod, i)) +} + +requestTypeName = sdk.MsgTypeURL(msg) + +return nil +}, noopInterceptor) + + / Check that the service Msg fully-qualified method name has already + / been registered (via RegisterInterfaces). If the user registers a + / service without registering according service Msg type, there might be + / some unexpected behavior down the road. Since we can't return an error + / (`Server.RegisterService` interface restriction) + +we panic (at startup). + reqType, err := msr.interfaceRegistry.Resolve(requestTypeName) + if err != nil || reqType == nil { + panic( + fmt.Errorf( + "type_url %s has not been registered yet. "+ + "Before calling RegisterService, you must register all interfaces by calling the `RegisterInterfaces` "+ + "method on module.BasicManager. Each module should call `msgservice.RegisterMsgServiceDesc` inside its "+ + "`RegisterInterfaces` method with the `_Msg_serviceDesc` generated by proto-gen", + requestTypeName, + ), + ) +} + + / Check that each service is only registered once. If a service is + / registered more than once, then we should error. Since we can't + / return an error (`Server.RegisterService` interface restriction) + +we + / panic (at startup). + _, found := msr.routes[requestTypeName] + if found { + panic( + fmt.Errorf( + "msg service %s has already been registered. Please make sure to only register each service once. 
"+ + "This usually means that there are conflicting modules registering the same msg service", + fqMethod, + ), + ) +} + +msr.routes[requestTypeName] = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + interceptor := func(goCtx context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{ +}, error) { + goCtx = context.WithValue(goCtx, sdk.SdkContextKey, ctx) + +return handler(goCtx, req) +} + if err := req.ValidateBasic(); err != nil { + return nil, err +} + / Call the method handler from the service description with the handler object. + / We don't do any decoding here because the decoding was already done. + res, err := methodHandler(handler, sdk.WrapSDKContext(ctx), noopDecoder, interceptor) + if err != nil { + return nil, err +} + +resMsg, ok := res.(proto.Message) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "Expecting proto.Message, got %T", resMsg) +} + +return sdk.WrapServiceResult(ctx, resMsg, err) +} + +} +} + +/ SetInterfaceRegistry sets the interface registry for the router. +func (msr *MsgServiceRouter) + +SetInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) { + msr.interfaceRegistry = interfaceRegistry +} + +func noopDecoder(_ interface{ +}) + +error { + return nil +} + +func noopInterceptor(_ context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{ +}, error) { + return nil, nil +} +``` + +This method takes care of marshaling the `res` parameter to protobuf and attaching any events on the `ctx.EventManager()` to the `sdk.Result`. + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/base/abci/v1beta1/abci.proto#L88-L109 +``` + +This diagram shows a typical structure of a Protobuf `Msg` service, and how the message propagates through the module. 
+ +![Transaction flow](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/transaction_flow.svg) + +## Telemetry + +New [telemetry metrics](/docs/sdk/v0.47/learn/advanced/telemetry) can be created from `msgServer` methods when handling messages. + +This is an example from the `x/auth/vesting` module: + +```go expandable +package vesting + +import ( + + "context" + "github.com/armon/go-metrics" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" +) + +type msgServer struct { + keeper.AccountKeeper + types.BankKeeper +} + +/ NewMsgServerImpl returns an implementation of the vesting MsgServer interface, +/ wrapping the corresponding AccountKeeper and BankKeeper. +func NewMsgServerImpl(k keeper.AccountKeeper, bk types.BankKeeper) + +types.MsgServer { + return &msgServer{ + AccountKeeper: k, + BankKeeper: bk +} +} + +var _ types.MsgServer = msgServer{ +} + +func (s msgServer) + +CreateVestingAccount(goCtx context.Context, msg *types.MsgCreateVestingAccount) (*types.MsgCreateVestingAccountResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + ak := s.AccountKeeper + bk := s.BankKeeper + if err := bk.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + +from, err := sdk.AccAddressFromBech32(msg.FromAddress) + if err != nil { + return nil, err +} + +to, err := sdk.AccAddressFromBech32(msg.ToAddress) + if err != nil { + return nil, err +} + if bk.BlockedAddr(to) { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + if acc := ak.GetAccount(ctx, to); acc != nil { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + baseAccount := 
authtypes.NewBaseAccountWithAddress(to) + +baseAccount = ak.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + baseVestingAccount := types.NewBaseVestingAccount(baseAccount, msg.Amount.Sort(), msg.EndTime) + +var vestingAccount authtypes.AccountI + if msg.Delayed { + vestingAccount = types.NewDelayedVestingAccountRaw(baseVestingAccount) +} + +else { + vestingAccount = types.NewContinuousVestingAccountRaw(baseVestingAccount, ctx.BlockTime().Unix()) +} + +ak.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_vesting_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +err = bk.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +return &types.MsgCreateVestingAccountResponse{ +}, nil +} + +func (s msgServer) + +CreatePermanentLockedAccount(goCtx context.Context, msg *types.MsgCreatePermanentLockedAccount) (*types.MsgCreatePermanentLockedAccountResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + ak := s.AccountKeeper + bk := s.BankKeeper + if err := bk.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + +from, err := sdk.AccAddressFromBech32(msg.FromAddress) + if err != nil { + return nil, err +} + +to, err := sdk.AccAddressFromBech32(msg.ToAddress) + if err != nil { + return nil, err +} + if bk.BlockedAddr(to) { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + if acc := ak.GetAccount(ctx, to); acc != nil { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = ak.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + vestingAccount := types.NewPermanentLockedAccount(baseAccount, 
msg.Amount) + +ak.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_permanent_locked_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +err = bk.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +return &types.MsgCreatePermanentLockedAccountResponse{ +}, nil +} + +func (s msgServer) + +CreatePeriodicVestingAccount(goCtx context.Context, msg *types.MsgCreatePeriodicVestingAccount) (*types.MsgCreatePeriodicVestingAccountResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + ak := s.AccountKeeper + bk := s.BankKeeper + + from, err := sdk.AccAddressFromBech32(msg.FromAddress) + if err != nil { + return nil, err +} + +to, err := sdk.AccAddressFromBech32(msg.ToAddress) + if err != nil { + return nil, err +} + if acc := ak.GetAccount(ctx, to); acc != nil { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + +var totalCoins sdk.Coins + for _, period := range msg.VestingPeriods { + totalCoins = totalCoins.Add(period.Amount...) 
+} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = ak.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + vestingAccount := types.NewPeriodicVestingAccount(baseAccount, totalCoins.Sort(), msg.StartTime, msg.VestingPeriods) + +ak.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range totalCoins { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_periodic_vesting_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +err = bk.SendCoins(ctx, from, to, totalCoins) + if err != nil { + return nil, err +} + +return &types.MsgCreatePeriodicVestingAccountResponse{ +}, nil +} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/query-services.mdx b/docs/sdk/v0.47/documentation/module-system/query-services.mdx new file mode 100644 index 00000000..61946345 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/query-services.mdx @@ -0,0 +1,402 @@ +--- +title: Query Services +--- + + +**Synopsis** +A Protobuf Query service processes [`queries`](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#queries). Query services are specific to the module in which they are defined, and only process `queries` defined within said module. They are called from `BaseApp`'s [`Query` method](/docs/sdk/v0.47/learn/advanced/baseapp#query). 
+ + + + +### Pre-requisite Readings + +* [Module Manager](/docs/sdk/v0.47/documentation/module-system/module-manager) +* [Messages and Queries](/docs/sdk/v0.47/documentation/module-system/messages-and-queries) + + + +## Implementation of a module query service + +### gRPC Service + +When defining a Protobuf `Query` service, a `QueryServer` interface is generated for each module with all the service methods: + +```go +type QueryServer interface { + QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error) + +QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error) +} +``` + +These custom query methods should be implemented by a module's keeper, typically in `./keeper/grpc_query.go`. The first parameter of these methods is a generic `context.Context`. Therefore, the Cosmos SDK provides a function `sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided +`context.Context`. + +Here's an example implementation for the bank module: + +```go expandable +package keeper + +import ( + + "context" + "cosmossdk.io/math" + gogotypes "github.com/cosmos/gogoproto/types" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + "github.com/cosmos/cosmos-sdk/store/prefix" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/query" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var _ types.QueryServer = BaseKeeper{ +} + +/ Balance implements the Query/Balance gRPC method +func (k BaseKeeper) + +Balance(ctx context.Context, req *types.QueryBalanceRequest) (*types.QueryBalanceResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + +address, err := sdk.AccAddressFromBech32(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid
address: %s", err.Error()) +} + balance := k.GetBalance(sdkCtx, address, req.Denom) + +return &types.QueryBalanceResponse{ + Balance: &balance +}, nil +} + +/ AllBalances implements the Query/AllBalances gRPC method +func (k BaseKeeper) + +AllBalances(ctx context.Context, req *types.QueryAllBalancesRequest) (*types.QueryAllBalancesResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + +addr, err := sdk.AccAddressFromBech32(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + balances := sdk.NewCoins() + accountStore := k.getAccountStore(sdkCtx, addr) + +pageRes, err := query.Paginate(accountStore, req.Pagination, func(key, value []byte) + +error { + denom := string(key) + +balance, err := UnmarshalBalanceCompat(k.cdc, value, denom) + if err != nil { + return err +} + +balances = append(balances, balance) + +return nil +}) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "paginate: %v", err) +} + +return &types.QueryAllBalancesResponse{ + Balances: balances, + Pagination: pageRes +}, nil +} + +/ SpendableBalances implements a gRPC query handler for retrieving an account's +/ spendable balances. 
+func (k BaseKeeper) + +SpendableBalances(ctx context.Context, req *types.QuerySpendableBalancesRequest) (*types.QuerySpendableBalancesResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + +addr, err := sdk.AccAddressFromBech32(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + balances := sdk.NewCoins() + accountStore := k.getAccountStore(sdkCtx, addr) + zeroAmt := math.ZeroInt() + +pageRes, err := query.Paginate(accountStore, req.Pagination, func(key, _ []byte) + +error { + balances = append(balances, sdk.NewCoin(string(key), zeroAmt)) + +return nil +}) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "paginate: %v", err) +} + result := sdk.NewCoins() + spendable := k.SpendableCoins(sdkCtx, addr) + for _, c := range balances { + result = append(result, sdk.NewCoin(c.Denom, spendable.AmountOf(c.Denom))) +} + +return &types.QuerySpendableBalancesResponse{ + Balances: result, + Pagination: pageRes +}, nil +} + +/ SpendableBalanceByDenom implements a gRPC query handler for retrieving an account's +/ spendable balance for a specific denom. 
+func (k BaseKeeper) + +SpendableBalanceByDenom(ctx context.Context, req *types.QuerySpendableBalanceByDenomRequest) (*types.QuerySpendableBalanceByDenomResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + +addr, err := sdk.AccAddressFromBech32(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + spendable := k.SpendableCoin(sdkCtx, addr, req.Denom) + +return &types.QuerySpendableBalanceByDenomResponse{ + Balance: &spendable +}, nil +} + +/ TotalSupply implements the Query/TotalSupply gRPC method +func (k BaseKeeper) + +TotalSupply(ctx context.Context, req *types.QueryTotalSupplyRequest) (*types.QueryTotalSupplyResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +totalSupply, pageRes, err := k.GetPaginatedTotalSupply(sdkCtx, req.Pagination) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + +return &types.QueryTotalSupplyResponse{ + Supply: totalSupply, + Pagination: pageRes +}, nil +} + +/ SupplyOf implements the Query/SupplyOf gRPC method +func (k BaseKeeper) + +SupplyOf(c context.Context, req *types.QuerySupplyOfRequest) (*types.QuerySupplyOfResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + ctx := sdk.UnwrapSDKContext(c) + supply := k.GetSupply(ctx, req.Denom) + +return &types.QuerySupplyOfResponse{ + Amount: sdk.NewCoin(req.Denom, supply.Amount) +}, nil +} + +/ Params implements the gRPC service handler for querying x/bank parameters. 
+func (k BaseKeeper) + +Params(ctx context.Context, req *types.QueryParamsRequest) (*types.QueryParamsResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + params := k.GetParams(sdkCtx) + +return &types.QueryParamsResponse{ + Params: params +}, nil +} + +/ DenomsMetadata implements Query/DenomsMetadata gRPC method. +func (k BaseKeeper) + +DenomsMetadata(c context.Context, req *types.QueryDenomsMetadataRequest) (*types.QueryDenomsMetadataResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + ctx := sdk.UnwrapSDKContext(c) + store := prefix.NewStore(ctx.KVStore(k.storeKey), types.DenomMetadataPrefix) + metadatas := []types.Metadata{ +} + +pageRes, err := query.Paginate(store, req.Pagination, func(_, value []byte) + +error { + var metadata types.Metadata + k.cdc.MustUnmarshal(value, &metadata) + +metadatas = append(metadatas, metadata) + +return nil +}) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + +return &types.QueryDenomsMetadataResponse{ + Metadatas: metadatas, + Pagination: pageRes, +}, nil +} + +/ DenomMetadata implements Query/DenomMetadata gRPC method. 
+func (k BaseKeeper) + +DenomMetadata(c context.Context, req *types.QueryDenomMetadataRequest) (*types.QueryDenomMetadataResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + ctx := sdk.UnwrapSDKContext(c) + +metadata, found := k.GetDenomMetaData(ctx, req.Denom) + if !found { + return nil, status.Errorf(codes.NotFound, "client metadata for denom %s", req.Denom) +} + +return &types.QueryDenomMetadataResponse{ + Metadata: metadata, +}, nil +} + +func (k BaseKeeper) + +DenomOwners( + goCtx context.Context, + req *types.QueryDenomOwnersRequest, +) (*types.QueryDenomOwnersResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + ctx := sdk.UnwrapSDKContext(goCtx) + denomPrefixStore := k.getDenomAddressPrefixStore(ctx, req.Denom) + +var denomOwners []*types.DenomOwner + pageRes, err := query.FilteredPaginate( + denomPrefixStore, + req.Pagination, + func(key []byte, _ []byte, accumulate bool) (bool, error) { + if accumulate { + address, _, err := types.AddressAndDenomFromBalancesStore(key) + if err != nil { + return false, err +} + +denomOwners = append( + denomOwners, + &types.DenomOwner{ + Address: address.String(), + Balance: k.GetBalance(ctx, address, req.Denom), +}, + ) +} + +return true, nil +}, + ) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + +return &types.QueryDenomOwnersResponse{ + DenomOwners: denomOwners, + Pagination: pageRes +}, nil +} + +func (k BaseKeeper) + +SendEnabled(goCtx context.Context, req *types.QuerySendEnabledRequest) (*types.QuerySendEnabledResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + ctx := 
sdk.UnwrapSDKContext(goCtx) + resp := &types.QuerySendEnabledResponse{ +} + if len(req.Denoms) > 0 { + store := ctx.KVStore(k.storeKey) + for _, denom := range req.Denoms { + if se, ok := k.getSendEnabled(store, denom); ok { + resp.SendEnabled = append(resp.SendEnabled, types.NewSendEnabled(denom, se)) +} + +} + +} + +else { + store := k.getSendEnabledPrefixStore(ctx) + +var err error + + resp.Pagination, err = query.FilteredPaginate( + store, + req.Pagination, + func(key []byte, value []byte, accumulate bool) (bool, error) { + if accumulate { + var enabled gogotypes.BoolValue + k.cdc.MustUnmarshal(value, &enabled) + +resp.SendEnabled = append(resp.SendEnabled, types.NewSendEnabled(string(key), enabled.Value)) +} + +return true, nil +}, + ) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + +} + +return resp, nil +} +``` + +### Calling queries from the State Machine + +The Cosmos SDK v0.47 introduces a new `cosmos.query.v1.module_query_safe` Protobuf annotation, used to state that a query is safe to be called from within the state machine, for example: + +* a Keeper's query function can be called from another module's Keeper, +* ADR-033 intermodule query calls, +* CosmWasm contracts can also directly interact with these queries. + +If the `module_query_safe` annotation is set to `true`, it means: + +* The query is deterministic: given a block height it will return the same response upon multiple calls, and doesn't introduce any state-machine breaking changes across SDK patch versions. +* Gas consumption never fluctuates across calls and across patch versions.
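The determinism requirement can be illustrated with a stdlib-only sketch (the functions below are invented stand-ins, not SDK code): a `module_query_safe` query must derive its answer purely from state pinned at a block height, never from sources such as the wall clock.

```go
package main

import (
	"fmt"
	"time"
)

// safeSupply is a stand-in for a module_query_safe query: its result is a
// pure function of state at a given block height, so repeated calls at the
// same height always agree.
func safeSupply(height int64) int64 {
	return height * 1000 // pretend this reads the versioned store at `height`
}

// unsafeNow mixes in wall-clock time; two calls at the same height may
// disagree, so a query like this must NOT be marked module_query_safe.
func unsafeNow(height int64) int64 {
	return height*1000 + time.Now().UnixNano()%1000
}

func main() {
	fmt.Println(safeSupply(42) == safeSupply(42)) // prints "true"
	_ = unsafeNow
}
```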
+ +If you are a module developer and want to use the `module_query_safe` annotation for your own query, you must ensure the following: + +* the query is deterministic and won't introduce state-machine-breaking changes without coordinated upgrades +* it has its gas tracked, to avoid the attack vector where no gas is accounted for + on potentially high-computation queries. diff --git a/docs/sdk/v0.47/documentation/module-system/simulator.mdx b/docs/sdk/v0.47/documentation/module-system/simulator.mdx new file mode 100644 index 00000000..c58fbb15 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/simulator.mdx @@ -0,0 +1,1628 @@ +--- +title: Module Simulation +--- + + + +### Pre-requisite Readings + +* [Cosmos Blockchain Simulator](/docs/sdk/v0.47/learn/advanced/simulation) + + +## Synopsis + +This document details how to define each module's simulation functions to be +integrated with the application `SimulationManager`. + +* [Simulation package](#simulation-package) + * [Store decoders](#store-decoders) + * [Randomized genesis](#randomized-genesis) + * [Randomized parameter changes](#randomized-parameter-changes) + * [Random weighted operations](#random-weighted-operations) + * [Random proposal contents](#random-proposal-contents) +* [Registering simulation functions](#registering-simulation-functions) +* [App Simulator manager](#app-simulator-manager) + +## Simulation package + +Every module that implements the Cosmos SDK simulator needs to have an `x//simulation` +package which contains the primary functions required by the fuzz tests: store +decoders, randomized genesis state and parameters, weighted operations and proposal +contents. + +### Store decoders + +Registering the store decoders is required for the `AppImportExport`. This allows +for the key-value pairs from the stores to be decoded (*i.e.* unmarshalled) +to their corresponding types.
In particular, it matches the key to a concrete type +and then unmarshals the value from the `KVPair` to the type provided. + +You can use the example [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/distribution/simulation/decoder.go) from the distribution module to implement your store decoders. + +### Randomized genesis + +The simulator tests different scenarios and values for genesis parameters +in order to fully test the edge cases of specific modules. The `simulator` package from each module must expose a `RandomizedGenState` function to generate the initial random `GenesisState` from a given seed. + +Once the module genesis parameters are generated randomly (or with the key and +values defined in a `params` file), they are marshaled to JSON format and added +to the app genesis JSON for use in the simulations. + +You can check an example of how to create the randomized genesis [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/simulation/genesis.go). + +### Randomized parameter changes + +The simulator is able to test parameter changes at random. The simulator package from each module must contain a `RandomizedParams` func that will simulate parameter changes of the module throughout the simulation's lifespan. + +You can see an example of what is needed to fully test parameter changes [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/simulation/params.go). + +### Random weighted operations + +Operations are one of the crucial parts of the Cosmos SDK simulation. They are the transactions +(`Msg`) that are simulated with random field values. The sender of the operation +is also assigned randomly. + +Operations on the simulation are simulated using the full [transaction cycle](/docs/sdk/v0.47/learn/advanced/transactions) of an +`ABCI` application that exposes the `BaseApp`.
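Conceptually, each operation is drawn with probability proportional to its weight. A stdlib-only sketch of that selection (the `pick` helper below is invented for illustration, not SDK code):

```go
package main

import "fmt"

// pick maps a roll in [0, sum(weights)) to an operation index using
// cumulative sums, choosing each operation with probability proportional to
// its weight. This mirrors, in a toy form, how the simulator favours
// high-weight messages.
func pick(weights []int, roll int) int {
	cum := 0
	for i, w := range weights {
		cum += w
		if roll < cum {
			return i
		}
	}
	return len(weights) - 1
}

func main() {
	// e.g. weights 100 (create validator), 5 (edit validator), 100 (delegate)
	weights := []int{100, 5, 100}
	fmt.Println(pick(weights, 0), pick(weights, 100), pick(weights, 105)) // prints "0 1 2"
}
```

With weights `{100, 5, 100}`, rolls 0–99 select the first operation, 100–104 the second, and 105–204 the third, so the rarely weighted operation is drawn about forty times less often.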
+ +Shown below is how weights are set: + +```go expandable +package simulation + +import ( + + "fmt" + "math/rand" + "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/testutil" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation" + "github.com/cosmos/cosmos-sdk/x/staking/keeper" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ Simulation operation weights constants +/ +/nolint:gosec / these are not hardcoded credentials +const ( + DefaultWeightMsgCreateValidator int = 100 + DefaultWeightMsgEditValidator int = 5 + DefaultWeightMsgDelegate int = 100 + DefaultWeightMsgUndelegate int = 100 + DefaultWeightMsgBeginRedelegate int = 100 + DefaultWeightMsgCancelUnbondingDelegation int = 100 + + OpWeightMsgCreateValidator = "op_weight_msg_create_validator" + OpWeightMsgEditValidator = "op_weight_msg_edit_validator" + OpWeightMsgDelegate = "op_weight_msg_delegate" + OpWeightMsgUndelegate = "op_weight_msg_undelegate" + OpWeightMsgBeginRedelegate = "op_weight_msg_begin_redelegate" + OpWeightMsgCancelUnbondingDelegation = "op_weight_msg_cancel_unbonding_delegation" +) + +/ WeightedOperations returns all the operations from the module with their respective weights +func WeightedOperations( + appParams simtypes.AppParams, cdc codec.JSONCodec, ak types.AccountKeeper, + bk types.BankKeeper, k *keeper.Keeper, +) + +simulation.WeightedOperations { + var ( + weightMsgCreateValidator int + weightMsgEditValidator int + weightMsgDelegate int + weightMsgUndelegate int + weightMsgBeginRedelegate int + weightMsgCancelUnbondingDelegation int + ) + +appParams.GetOrGenerate(cdc, OpWeightMsgCreateValidator, &weightMsgCreateValidator, nil, + func(_ *rand.Rand) { + weightMsgCreateValidator = DefaultWeightMsgCreateValidator +}, + ) + 
+appParams.GetOrGenerate(cdc, OpWeightMsgEditValidator, &weightMsgEditValidator, nil, + func(_ *rand.Rand) { + weightMsgEditValidator = DefaultWeightMsgEditValidator +}, + ) + +appParams.GetOrGenerate(cdc, OpWeightMsgDelegate, &weightMsgDelegate, nil, + func(_ *rand.Rand) { + weightMsgDelegate = DefaultWeightMsgDelegate +}, + ) + +appParams.GetOrGenerate(cdc, OpWeightMsgUndelegate, &weightMsgUndelegate, nil, + func(_ *rand.Rand) { + weightMsgUndelegate = DefaultWeightMsgUndelegate +}, + ) + +appParams.GetOrGenerate(cdc, OpWeightMsgBeginRedelegate, &weightMsgBeginRedelegate, nil, + func(_ *rand.Rand) { + weightMsgBeginRedelegate = DefaultWeightMsgBeginRedelegate +}, + ) + +appParams.GetOrGenerate(cdc, OpWeightMsgCancelUnbondingDelegation, &weightMsgCancelUnbondingDelegation, nil, + func(_ *rand.Rand) { + weightMsgCancelUnbondingDelegation = DefaultWeightMsgCancelUnbondingDelegation +}, + ) + +return simulation.WeightedOperations{ + simulation.NewWeightedOperation( + weightMsgCreateValidator, + SimulateMsgCreateValidator(ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgEditValidator, + SimulateMsgEditValidator(ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgDelegate, + SimulateMsgDelegate(ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgUndelegate, + SimulateMsgUndelegate(ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgBeginRedelegate, + SimulateMsgBeginRedelegate(ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgCancelUnbondingDelegation, + SimulateMsgCancelUnbondingDelegate(ak, bk, k), + ), +} +} + +/ SimulateMsgCreateValidator generates a MsgCreateValidator with random values +func SimulateMsgCreateValidator(ak types.AccountKeeper, bk types.BankKeeper, k *keeper.Keeper) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + simAccount, _ := 
simtypes.RandomAcc(r, accs) + address := sdk.ValAddress(simAccount.Address) + + / ensure the validator doesn't exist already + _, found := k.GetValidator(ctx, address) + if found { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCreateValidator, "validator already exists"), nil, nil +} + denom := k.GetParams(ctx).BondDenom + balance := bk.GetBalance(ctx, simAccount.Address, denom).Amount + if !balance.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCreateValidator, "balance is negative"), nil, nil +} + +amount, err := simtypes.RandPositiveInt(r, balance) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCreateValidator, "unable to generate positive amount"), nil, err +} + selfDelegation := sdk.NewCoin(denom, amount) + account := ak.GetAccount(ctx, simAccount.Address) + spendable := bk.SpendableCoins(ctx, account.GetAddress()) + +var fees sdk.Coins + + coins, hasNeg := spendable.SafeSub(selfDelegation) + if !hasNeg { + fees, err = simtypes.RandomFees(r, ctx, coins) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCreateValidator, "unable to generate fees"), nil, err +} + +} + description := types.NewDescription( + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + ) + maxCommission := sdk.NewDecWithPrec(int64(simtypes.RandIntBetween(r, 0, 100)), 2) + commission := types.NewCommissionRates( + simtypes.RandomDecAmount(r, maxCommission), + maxCommission, + simtypes.RandomDecAmount(r, maxCommission), + ) + +msg, err := types.NewMsgCreateValidator(address, simAccount.ConsKey.PubKey(), selfDelegation, description, commission, math.OneInt()) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msg.Type(), "unable to create CreateValidator message"), nil, err +} + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: 
moduletestutil.MakeTestEncodingConfig().TxConfig, + Cdc: nil, + Msg: msg, + MsgType: msg.Type(), + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + ModuleName: types.ModuleName, +} + +return simulation.GenAndDeliverTx(txCtx, fees) +} +} + +/ SimulateMsgEditValidator generates a MsgEditValidator with random values +func SimulateMsgEditValidator(ak types.AccountKeeper, bk types.BankKeeper, k *keeper.Keeper) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + if len(k.GetAllValidators(ctx)) == 0 { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgEditValidator, "number of validators equal zero"), nil, nil +} + +val, ok := testutil.RandSliceElem(r, k.GetAllValidators(ctx)) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgEditValidator, "unable to pick a validator"), nil, nil +} + address := val.GetOperator() + newCommissionRate := simtypes.RandomDecAmount(r, val.Commission.MaxRate) + if err := val.Commission.ValidateNewRate(newCommissionRate, ctx.BlockHeader().Time); err != nil { + / skip as the commission is invalid + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgEditValidator, "invalid commission rate"), nil, nil +} + +simAccount, found := simtypes.FindAccount(accs, sdk.AccAddress(val.GetOperator())) + if !found { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgEditValidator, "unable to find account"), nil, fmt.Errorf("validator %s not found", val.GetOperator()) +} + account := ak.GetAccount(ctx, simAccount.Address) + spendable := bk.SpendableCoins(ctx, account.GetAddress()) + description := types.NewDescription( + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + ) + msg := types.NewMsgEditValidator(address, description, 
&newCommissionRate, nil) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: moduletestutil.MakeTestEncodingConfig().TxConfig, + Cdc: nil, + Msg: msg, + MsgType: msg.Type(), + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + Bankkeeper: bk, + ModuleName: types.ModuleName, + CoinsSpentInMsg: spendable, +} + +return simulation.GenAndDeliverTxWithRandFees(txCtx) +} +} + +/ SimulateMsgDelegate generates a MsgDelegate with random values +func SimulateMsgDelegate(ak types.AccountKeeper, bk types.BankKeeper, k *keeper.Keeper) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + denom := k.GetParams(ctx).BondDenom + if len(k.GetAllValidators(ctx)) == 0 { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgDelegate, "number of validators equal zero"), nil, nil +} + +simAccount, _ := simtypes.RandomAcc(r, accs) + +val, ok := testutil.RandSliceElem(r, k.GetAllValidators(ctx)) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgDelegate, "unable to pick a validator"), nil, nil +} + if val.InvalidExRate() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgDelegate, "validator's invalid echange rate"), nil, nil +} + amount := bk.GetBalance(ctx, simAccount.Address, denom).Amount + if !amount.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgDelegate, "balance is negative"), nil, nil +} + +amount, err := simtypes.RandPositiveInt(r, amount) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgDelegate, "unable to generate positive amount"), nil, err +} + bondAmt := sdk.NewCoin(denom, amount) + account := ak.GetAccount(ctx, simAccount.Address) + spendable := bk.SpendableCoins(ctx, account.GetAddress()) + +var fees sdk.Coins + + coins, hasNeg := spendable.SafeSub(bondAmt) + if !hasNeg { + fees, err = simtypes.RandomFees(r, ctx, coins) + 
if err != nil { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgDelegate, "unable to generate fees"), nil, err +} + +} + msg := types.NewMsgDelegate(simAccount.Address, val.GetOperator(), bondAmt) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: moduletestutil.MakeTestEncodingConfig().TxConfig, + Cdc: nil, + Msg: msg, + MsgType: msg.Type(), + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + ModuleName: types.ModuleName, +} + +return simulation.GenAndDeliverTx(txCtx, fees) +} +} + +/ SimulateMsgUndelegate generates a MsgUndelegate with random values +func SimulateMsgUndelegate(ak types.AccountKeeper, bk types.BankKeeper, k *keeper.Keeper) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + val, ok := testutil.RandSliceElem(r, k.GetAllValidators(ctx)) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgUndelegate, "validator is not ok"), nil, nil +} + valAddr := val.GetOperator() + delegations := k.GetValidatorDelegations(ctx, val.GetOperator()) + if delegations == nil { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgUndelegate, "keeper does have any delegation entries"), nil, nil +} + + / get random delegator from validator + delegation := delegations[r.Intn(len(delegations))] + delAddr := delegation.GetDelegatorAddr() + if k.HasMaxUnbondingDelegationEntries(ctx, delAddr, valAddr) { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgUndelegate, "keeper does have a max unbonding delegation entries"), nil, nil +} + totalBond := val.TokensFromShares(delegation.GetShares()).TruncateInt() + if !totalBond.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgUndelegate, "total bond is negative"), nil, nil +} + +unbondAmt, err := simtypes.RandPositiveInt(r, totalBond) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, 
types.TypeMsgUndelegate, "invalid unbond amount"), nil, err +} + if unbondAmt.IsZero() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgUndelegate, "unbond amount is zero"), nil, nil +} + msg := types.NewMsgUndelegate( + delAddr, valAddr, sdk.NewCoin(k.BondDenom(ctx), unbondAmt), + ) + + / need to retrieve the simulation account associated with delegation to retrieve PrivKey + var simAccount simtypes.Account + for _, simAcc := range accs { + if simAcc.Address.Equals(delAddr) { + simAccount = simAcc + break +} + +} + / if simaccount.PrivKey == nil, delegation address does not exist in accs. Return error + if simAccount.PrivKey == nil { + return simtypes.NoOpMsg(types.ModuleName, msg.Type(), "account private key is nil"), nil, fmt.Errorf("delegation addr: %s does not exist in simulation accounts", delAddr) +} + account := ak.GetAccount(ctx, delAddr) + spendable := bk.SpendableCoins(ctx, account.GetAddress()) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: moduletestutil.MakeTestEncodingConfig().TxConfig, + Cdc: nil, + Msg: msg, + MsgType: msg.Type(), + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + Bankkeeper: bk, + ModuleName: types.ModuleName, + CoinsSpentInMsg: spendable, +} + +return simulation.GenAndDeliverTxWithRandFees(txCtx) +} +} + +/ SimulateMsgCancelUnbondingDelegate generates a MsgCancelUnbondingDelegate with random values +func SimulateMsgCancelUnbondingDelegate(ak types.AccountKeeper, bk types.BankKeeper, k *keeper.Keeper) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + if len(k.GetAllValidators(ctx)) == 0 { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgDelegate, "number of validators equal zero"), nil, nil +} + +simAccount, _ := simtypes.RandomAcc(r, accs) + +val, ok := testutil.RandSliceElem(r, k.GetAllValidators(ctx)) + if !ok { + return 
simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCancelUnbondingDelegation, "validator is not ok"), nil, nil +} + if val.IsJailed() || val.InvalidExRate() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCancelUnbondingDelegation, "validator is jailed"), nil, nil +} + valAddr := val.GetOperator() + +unbondingDelegation, found := k.GetUnbondingDelegation(ctx, simAccount.Address, valAddr) + if !found { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCancelUnbondingDelegation, "account does have any unbonding delegation"), nil, nil +} + + / This is a temporary fix to make staking simulation pass. We should fetch + / the first unbondingDelegationEntry that matches the creationHeight, because + / currently the staking msgServer chooses the first unbondingDelegationEntry + / with the matching creationHeight. + / + / ref: https://github.com/cosmos/cosmos-sdk/issues/12932 + creationHeight := unbondingDelegation.Entries[r.Intn(len(unbondingDelegation.Entries))].CreationHeight + + var unbondingDelegationEntry types.UnbondingDelegationEntry + for _, entry := range unbondingDelegation.Entries { + if entry.CreationHeight == creationHeight { + unbondingDelegationEntry = entry + break +} + +} + if unbondingDelegationEntry.CompletionTime.Before(ctx.BlockTime()) { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCancelUnbondingDelegation, "unbonding delegation is already processed"), nil, nil +} + if !unbondingDelegationEntry.Balance.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCancelUnbondingDelegation, "delegator receiving balance is negative"), nil, nil +} + cancelBondAmt := simtypes.RandomAmount(r, unbondingDelegationEntry.Balance) + if cancelBondAmt.IsZero() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgCancelUnbondingDelegation, "cancelBondAmt amount is zero"), nil, nil +} + msg := types.NewMsgCancelUnbondingDelegation( + simAccount.Address, valAddr, unbondingDelegationEntry.CreationHeight, 
sdk.NewCoin(k.BondDenom(ctx), cancelBondAmt), + ) + spendable := bk.SpendableCoins(ctx, simAccount.Address) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: moduletestutil.MakeTestEncodingConfig().TxConfig, + Cdc: nil, + Msg: msg, + MsgType: msg.Type(), + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + Bankkeeper: bk, + ModuleName: types.ModuleName, + CoinsSpentInMsg: spendable, +} + +return simulation.GenAndDeliverTxWithRandFees(txCtx) +} +} + +/ SimulateMsgBeginRedelegate generates a MsgBeginRedelegate with random values +func SimulateMsgBeginRedelegate(ak types.AccountKeeper, bk types.BankKeeper, k *keeper.Keeper) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + allVals := k.GetAllValidators(ctx) + +srcVal, ok := testutil.RandSliceElem(r, allVals) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "unable to pick validator"), nil, nil +} + srcAddr := srcVal.GetOperator() + delegations := k.GetValidatorDelegations(ctx, srcAddr) + if delegations == nil { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "keeper does have any delegation entries"), nil, nil +} + + / get random delegator from src validator + delegation := delegations[r.Intn(len(delegations))] + delAddr := delegation.GetDelegatorAddr() + if k.HasReceivingRedelegation(ctx, delAddr, srcAddr) { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "receveing redelegation is not allowed"), nil, nil / skip +} + + / get random destination validator + destVal, ok := testutil.RandSliceElem(r, allVals) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "unable to pick validator"), nil, nil +} + destAddr := destVal.GetOperator() + if srcAddr.Equals(destAddr) || destVal.InvalidExRate() || 
k.HasMaxRedelegationEntries(ctx, delAddr, srcAddr, destAddr) { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "checks failed"), nil, nil +} + totalBond := srcVal.TokensFromShares(delegation.GetShares()).TruncateInt() + if !totalBond.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "total bond is negative"), nil, nil +} + +redAmt, err := simtypes.RandPositiveInt(r, totalBond) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "unable to generate positive amount"), nil, err +} + if redAmt.IsZero() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "amount is zero"), nil, nil +} + + / check if the shares truncate to zero + shares, err := srcVal.SharesFromTokens(redAmt) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "invalid shares"), nil, err +} + if srcVal.TokensFromShares(shares).TruncateInt().IsZero() { + return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "shares truncate to zero"), nil, nil / skip +} + + / need to retrieve the simulation account associated with delegation to retrieve PrivKey + var simAccount simtypes.Account + for _, simAcc := range accs { + if simAcc.Address.Equals(delAddr) { + simAccount = simAcc + break +} + +} + + / if simaccount.PrivKey == nil, delegation address does not exist in accs. 
Return error
+ if simAccount.PrivKey == nil {
+ return simtypes.NoOpMsg(types.ModuleName, types.TypeMsgBeginRedelegate, "account private key is nil"), nil, fmt.Errorf("delegation addr: %s does not exist in simulation accounts", delAddr)
+}
+ account := ak.GetAccount(ctx, delAddr)
+ spendable := bk.SpendableCoins(ctx, account.GetAddress())
+ msg := types.NewMsgBeginRedelegate(
+ delAddr, srcAddr, destAddr,
+ sdk.NewCoin(k.BondDenom(ctx), redAmt),
+ )
+ txCtx := simulation.OperationInput{
+ R: r,
+ App: app,
+ TxGen: moduletestutil.MakeTestEncodingConfig().TxConfig,
+ Cdc: nil,
+ Msg: msg,
+ MsgType: msg.Type(),
+ Context: ctx,
+ SimAccount: simAccount,
+ AccountKeeper: ak,
+ Bankkeeper: bk,
+ ModuleName: types.ModuleName,
+ CoinsSpentInMsg: spendable,
+}
+
+return simulation.GenAndDeliverTxWithRandFees(txCtx)
+}
+}
+```
+
+As you can see, the weights are predefined in this case. Options exist to override this behavior with different weights: one option is to use `*rand.Rand` to define a random weight for the operation, or you can inject your own predefined weights.
+
+The simulations are then driven by the `simapp` Makefile below; its `test-sim-*` targets pass such overrides to the simulation as flags (for example, `-Genesis` and `-Seed`):
+
+```makefile expandable
+#!/usr/bin/make -f
+
+PACKAGES_NOSIMULATION=$(shell go list ./... | grep -v '/simulation')
+
+PACKAGES_SIMTEST=$(shell go list ./... | grep '/simulation')
+
+export VERSION := $(shell echo $(shell git describe --always --match "v*") | sed 's/^v/')
+
+export TMVERSION := $(shell go list -m github.com/tendermint/tendermint | sed 's:.* ::')
+
+export COMMIT := $(shell git log -1 --format='%H')
+
+LEDGER_ENABLED ?= true
+BINDIR ?= $(GOPATH)/bin
+BUILDDIR ?= $(CURDIR)/build
+SIMAPP = ./simapp
+MOCKS_DIR = $(CURDIR)/tests/mocks
+HTTPS_GIT := https://github.com/cosmos/cosmos-sdk.git
+DOCKER := $(shell which docker)
+
+PROJECT_NAME = $(shell git remote get-url origin | xargs basename -s .git)
+
+DOCS_DOMAIN=docs.cosmos.network
+# RocksDB is a native dependency, so we don't assume the library is installed.
+# Instead, it must be explicitly enabled and we warn when it is not. +ENABLE_ROCKSDB ?= false + +# process build tags +build_tags = netgo + ifeq ($(LEDGER_ENABLED),true) + ifeq ($(OS),Windows_NT) + +GCCEXE = $(shell where gcc.exe 2> NUL) + ifeq ($(GCCEXE),) + $(error gcc.exe not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + else + UNAME_S = $(shell uname -s) + ifeq ($(UNAME_S),OpenBSD) + $(warning OpenBSD detected, disabling ledger support (https://github.com/cosmos/cosmos-sdk/issues/1988)) + +else + GCC = $(shell command -v gcc 2> /dev/null) + ifeq ($(GCC),) + $(error gcc not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + endif + endif +endif + ifeq (secp,$(findstring secp,$(COSMOS_BUILD_OPTIONS))) + +build_tags += libsecp256k1_sdk +endif + ifeq (legacy,$(findstring legacy,$(COSMOS_BUILD_OPTIONS))) + +build_tags += app_v1 +endif + whitespace := +whitespace += $(whitespace) + comma := , +build_tags_comma_sep := $(subst $(whitespace),$(comma),$(build_tags)) + +# process linker flags + +ldflags = -X github.com/cosmos/cosmos-sdk/version.Name=sim \ + -X github.com/cosmos/cosmos-sdk/version.AppName=simd \ + -X github.com/cosmos/cosmos-sdk/version.Version=$(VERSION) \ + -X github.com/cosmos/cosmos-sdk/version.Commit=$(COMMIT) \ + -X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)" \ + -X github.com/tendermint/tendermint/version.TMCoreSemVer=$(TMVERSION) + ifeq ($(ENABLE_ROCKSDB),true) + +BUILD_TAGS += rocksdb_build + test_tags += rocksdb_build +endif + +# DB backend selection + ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += gcc +endif + ifeq (badgerdb,$(findstring badgerdb,$(COSMOS_BUILD_OPTIONS))) + +BUILD_TAGS += badgerdb +endif +# handle rocksdb + ifeq (rocksdb,$(findstring rocksdb,$(COSMOS_BUILD_OPTIONS))) + ifneq ($(ENABLE_ROCKSDB),true) + $(error Cannot use RocksDB backend 
unless ENABLE_ROCKSDB=true)
+
+endif
+ CGO_ENABLED=1
+ BUILD_TAGS += rocksdb
+endif
+# handle boltdb
+ ifeq (boltdb,$(findstring boltdb,$(COSMOS_BUILD_OPTIONS)))
+
+BUILD_TAGS += boltdb
+endif
+ ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS)))
+
+ldflags += -w -s
+endif
+ldflags += $(LDFLAGS)
+ ldflags := $(strip $(ldflags))
+
+build_tags += $(BUILD_TAGS)
+
+build_tags := $(strip $(build_tags))
+
+BUILD_FLAGS := -tags "$(build_tags)" -ldflags '$(ldflags)'
+# check for nostrip option
+ ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS)))
+
+BUILD_FLAGS += -trimpath
+endif
+
+# Check for debug option
+ ifeq (debug,$(findstring debug,$(COSMOS_BUILD_OPTIONS)))
+
+BUILD_FLAGS += -gcflags "all=-N -l"
+endif
+
+all: tools build lint test vulncheck
+
+# The below include contains the tools and runsim targets.
+include contrib/devtools/Makefile
+
+###############################################################################
+###                                  Build                                  ###
+###############################################################################
+
+BUILD_TARGETS := build install
+
+build: BUILD_ARGS=-o $(BUILDDIR)/
+
+build-linux-amd64:
+ GOOS=linux GOARCH=amd64 LEDGER_ENABLED=false $(MAKE) build
+
+build-linux-arm64:
+ GOOS=linux GOARCH=arm64 LEDGER_ENABLED=false $(MAKE) build
+
+$(BUILD_TARGETS): go.sum $(BUILDDIR)/
+ cd ${CURRENT_DIR}/simapp && go $@ -mod=readonly $(BUILD_FLAGS) $(BUILD_ARGS) ./...
+
+$(BUILDDIR)/:
+ mkdir -p $(BUILDDIR)/
+
+cosmovisor:
+ $(MAKE) -C tools/cosmovisor cosmovisor
+
+.PHONY: build build-linux-amd64 build-linux-arm64 cosmovisor
+
+mocks: $(MOCKS_DIR)
+ @go install github.com/golang/mock/mockgen@v1.6.0
+ sh ./scripts/mockgen.sh
+.PHONY: mocks
+
+vulncheck: $(BUILDDIR)/
+ GOBIN=$(BUILDDIR) go install golang.org/x/vuln/cmd/govulncheck@latest
+ $(BUILDDIR)/govulncheck ./...
+
+$(MOCKS_DIR):
+ mkdir -p $(MOCKS_DIR)
+
+distclean: clean tools-clean
+clean:
+ rm -rf \
+ $(BUILDDIR)/ \
+ artifacts/ \
+ tmp-swagger-gen/
+
+.PHONY: distclean clean
+
+###############################################################################
+###                          Tools & Dependencies                           ###
+###############################################################################
+
+go.sum: go.mod
+ echo "Ensure dependencies have not been modified ..." >&2
+ go mod verify
+ go mod tidy
+
+###############################################################################
+###                              Documentation                              ###
+###############################################################################
+
+update-swagger-docs: statik
+ $(BINDIR)/statik -src=client/docs/swagger-ui -dest=client/docs -f -m
+ @if [ -n "$(git status --porcelain)" ]; then \
+ echo "\033[91mSwagger docs are out of sync!!!\033[0m";\
+ exit 1;\
+ else \
+ echo "\033[92mSwagger docs are in sync\033[0m";\
+ fi
+.PHONY: update-swagger-docs
+
+godocs:
+ @echo "--> Wait a few seconds and visit http://localhost:6060/pkg/github.com/cosmos/cosmos-sdk/types"
+ godoc -http=:6060
+
+# This builds the docs.cosmos.network docs using docusaurus.
+# Old documentation, which has not been migrated to docusaurus, is generated with vuepress.
+build-docs:
+ @echo "building docusaurus docs"
+ @cd docs && npm ci && npm run build
+ mv docs/build ~/output
+
+ @echo "building old docs"
+ @cd docs && \
+ while read -r branch path_prefix; do \
+ echo "building vuepress ${branch} docs" ; \
+ (git clean -fdx && git reset --hard && git checkout ${branch} && npm install && VUEPRESS_BASE="/${path_prefix}/" npm run build) ; \
+ mkdir -p ~/output/${path_prefix} ; \
+ cp -r .vuepress/dist/* ~/output/${path_prefix}/ ; \
+ done < vuepress_versions ;
+
+ @echo "setup domain"
+ @echo $(DOCS_DOMAIN) > ~/output/CNAME
+
+.PHONY: build-docs
+
+###############################################################################
+###                           Tests & Simulation                            ###
+###############################################################################
+
+test: test-unit
+test-e2e:
+ $(MAKE) -C tests test-e2e
+test-e2e-cov:
+ $(MAKE) -C tests test-e2e-cov
+test-integration:
+ $(MAKE) -C tests test-integration
+test-integration-cov:
+ $(MAKE) -C tests test-integration-cov
+test-all: test-unit test-e2e test-integration test-ledger-mock test-race
+
+TEST_PACKAGES=./...
+TEST_TARGETS := test-unit test-unit-amino test-unit-proto test-ledger-mock test-race test-ledger test-race
+
+# Test runs-specific rules. To add a new test target, just add
+# a new rule, customise ARGS or TEST_PACKAGES ad libitum, and
+# append the new rule to the TEST_TARGETS list.
+test-unit: test_tags += cgo ledger test_ledger_mock norace
+test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace
+test-ledger: test_tags += cgo ledger norace
+test-ledger-mock: test_tags += ledger test_ledger_mock norace
+test-race: test_tags += cgo ledger test_ledger_mock
+test-race: ARGS=-race
+test-race: TEST_PACKAGES=$(PACKAGES_NOSIMULATION)
+$(TEST_TARGETS): run-tests
+
+# check-* compiles and collects tests without running them
+# note: go test -c doesn't support multiple packages yet (https://github.com/golang/go/issues/15513)
+
+CHECK_TEST_TARGETS := check-test-unit check-test-unit-amino
+check-test-unit: test_tags += cgo ledger test_ledger_mock norace
+check-test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace
+$(CHECK_TEST_TARGETS): EXTRA_ARGS=-run=none
+$(CHECK_TEST_TARGETS): run-tests
+
+ARGS += -tags "$(test_tags)"
+SUB_MODULES = $(shell find . -type f -name 'go.mod' -print0 | xargs -0 -n1 dirname | sort)
+
+CURRENT_DIR = $(shell pwd)
+
+run-tests:
+ ifneq (,$(shell which tparse 2>/dev/null))
+ @echo "Starting unit tests"; \
+ finalec=0; \
+ for module in $(SUB_MODULES); do \
+ cd ${CURRENT_DIR}/$module; \
+ echo "Running unit tests for module $module"; \
+ go test -mod=readonly -json $(ARGS) $(TEST_PACKAGES) ./... | tparse; \
+ ec=$?; \
+ if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \
+ done; \
+ exit $finalec
+else
+ @echo "Starting unit tests"; \
+ finalec=0; \
+ for module in $(SUB_MODULES); do \
+ cd ${CURRENT_DIR}/$module; \
+ echo "Running unit tests for module $module"; \
+ go test -mod=readonly $(ARGS) $(TEST_PACKAGES) ./... ; \
+ ec=$?; \
+ if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \
+ done; \
+ exit $finalec
+endif
+
+.PHONY: run-tests test test-all $(TEST_TARGETS)
+
+test-sim-nondeterminism:
+ @echo "Running non-determinism test..."
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \
+ -NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h
+
+test-sim-custom-genesis-fast:
+ @echo "Running custom genesis simulation..."
+ @echo "By default, ${HOME}/.gaiad/config/genesis.json will be used."
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run TestFullAppSimulation -Genesis=${HOME}/.gaiad/config/genesis.json \
+ -Enabled=true -NumBlocks=100 -BlockSize=200 -Commit=true -Seed=99 -Period=5 -v -timeout 24h
+
+test-sim-import-export: runsim
+ @echo "Running application import/export simulation. This may take several minutes..."
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppImportExport
+
+test-sim-after-import: runsim
+ @echo "Running application simulation-after-import. This may take several minutes..."
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppSimulationAfterImport
+
+test-sim-custom-genesis-multi-seed: runsim
+ @echo "Running multi-seed custom genesis simulation..."
+ @echo "By default, ${HOME}/.gaiad/config/genesis.json will be used."
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Genesis=${HOME}/.gaiad/config/genesis.json -SimAppPkg=. -ExitOnFail 400 5 TestFullAppSimulation
+
+test-sim-multi-seed-long: runsim
+ @echo "Running long multi-seed application simulation. This may take awhile!"
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 500 50 TestFullAppSimulation
+
+test-sim-multi-seed-short: runsim
+ @echo "Running short multi-seed application simulation. This may take awhile!"
+ @cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 10 TestFullAppSimulation
+
+test-sim-benchmark-invariants:
+ @echo "Running simulation invariant benchmarks..."
+ @cd ${CURRENT_DIR}/simapp && go test -mod=readonly -benchmem -bench=BenchmarkInvariants -run=^$ \
+ -Enabled=true -NumBlocks=1000 -BlockSize=200 \
+ -Period=1 -Commit=true -Seed=57 -v -timeout 24h
+
+.PHONY: \
+test-sim-nondeterminism \
+test-sim-custom-genesis-fast \
+test-sim-import-export \
+test-sim-after-import \
+test-sim-custom-genesis-multi-seed \
+test-sim-multi-seed-short \
+test-sim-multi-seed-long \
+test-sim-benchmark-invariants
+
+SIM_NUM_BLOCKS ?= 500
+SIM_BLOCK_SIZE ?= 200
+SIM_COMMIT ?= true
+
+test-sim-benchmark:
+ @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+ @go test -mod=readonly -benchmem -run=^$ $(SIMAPP) -bench ^BenchmarkFullAppSimulation$ \
+ -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h
+
+test-sim-profile:
+ @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+ @go test -mod=readonly -benchmem -run=^$ $(SIMAPP) -bench ^BenchmarkFullAppSimulation$ \
+ -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out
+
+.PHONY: test-sim-profile test-sim-benchmark
+
+test-rosetta:
+ docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile .
+ docker-compose -f contrib/rosetta/docker-compose.yaml up --abort-on-container-exit --exit-code-from test_rosetta --build
+.PHONY: test-rosetta
+
+benchmark:
+ @go test -mod=readonly -bench=. $(PACKAGES_NOSIMULATION)
+.PHONY: benchmark
+
+###############################################################################
+###                                 Linting                                 ###
+###############################################################################
+
+golangci_lint_cmd=golangci-lint
+golangci_version=v1.50.0
+
+lint:
+ @echo "--> Running linter"
+ @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version)
+ @$(golangci_lint_cmd) run --timeout=10m
+
+lint-fix:
+ @echo "--> Running linter"
+ @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version)
+ @$(golangci_lint_cmd) run --fix --out-format=tab --issues-exit-code=0
+
+.PHONY: lint lint-fix
+
+format:
+ @go install mvdan.cc/gofumpt@latest
+ @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version)
+ find . -name '*.go' -type f -not -path "./vendor*" -not -path "*.git*" -not -path "./client/docs/statik/statik.go" -not -path "./tests/mocks/*" -not -name "*.pb.go" -not -name "*.pb.gw.go" -not -name "*.pulsar.go" -not -path "./crypto/keys/secp256k1/*" | xargs gofumpt -w -l
+ $(golangci_lint_cmd) run --fix
+.PHONY: format
+
+###############################################################################
+###                                 Devdoc                                  ###
+###############################################################################
+
+DEVDOC_SAVE = docker commit `docker ps -a -n 1 -q` devdoc:local
+
+devdoc-init:
+ $(DOCKER) run -it -v "$(CURDIR):/go/src/github.com/cosmos/cosmos-sdk" -w "/go/src/github.com/cosmos/cosmos-sdk" tendermint/devdoc echo
+ # TODO make this safer
+ $(call DEVDOC_SAVE)
+
+devdoc:
+ $(DOCKER) run -it -v "$(CURDIR):/go/src/github.com/cosmos/cosmos-sdk" -w "/go/src/github.com/cosmos/cosmos-sdk" devdoc:local bash
+
+devdoc-save:
+ # TODO make this safer
+ $(call DEVDOC_SAVE)
+
+devdoc-clean:
+ docker rmi -f $(docker images -f "dangling=true" -q)
+
+devdoc-update:
+ docker pull tendermint/devdoc
+
+.PHONY: devdoc devdoc-clean devdoc-init devdoc-save devdoc-update
+
+###############################################################################
+###                                Protobuf                                 ###
+###############################################################################
+
+protoVer=0.11.2
+protoImageName=ghcr.io/cosmos/proto-builder:$(protoVer)
+protoImage=$(DOCKER) run --rm -v $(CURDIR):/workspace --workdir /workspace $(protoImageName)
+
+proto-all: proto-format proto-lint proto-gen
+
+proto-gen:
+ @echo "Generating Protobuf files"
+ @$(protoImage) sh ./scripts/protocgen.sh
+
+proto-swagger-gen:
+ @echo "Generating Protobuf Swagger"
+ @$(protoImage) sh ./scripts/protoc-swagger-gen.sh
+
+proto-format:
+ @$(protoImage) find ./ -name "*.proto" -exec clang-format -i {} \;
+
+proto-lint:
+ @$(protoImage) buf lint --error-format=json
+
+proto-check-breaking:
+ @$(protoImage) buf breaking --against $(HTTPS_GIT)#branch=main
+
+TM_URL = https://raw.githubusercontent.com/tendermint/tendermint/v0.37.0-rc2/proto/tendermint
+
+TM_CRYPTO_TYPES = proto/tendermint/crypto
+TM_ABCI_TYPES = proto/tendermint/abci
+TM_TYPES = proto/tendermint/types
+TM_VERSION = proto/tendermint/version
+TM_LIBS = proto/tendermint/libs/bits
+TM_P2P = proto/tendermint/p2p
+
+proto-update-deps:
+ @echo "Updating Protobuf dependencies"
+
+ @mkdir -p $(TM_ABCI_TYPES)
+ @curl -sSL $(TM_URL)/abci/types.proto > $(TM_ABCI_TYPES)/types.proto
+
+ @mkdir -p $(TM_VERSION)
+ @curl -sSL $(TM_URL)/version/types.proto > $(TM_VERSION)/types.proto
+
+ @mkdir -p $(TM_TYPES)
+ @curl -sSL $(TM_URL)/types/types.proto > $(TM_TYPES)/types.proto
+ @curl -sSL $(TM_URL)/types/evidence.proto > $(TM_TYPES)/evidence.proto
+ @curl -sSL $(TM_URL)/types/params.proto > $(TM_TYPES)/params.proto
+ @curl -sSL $(TM_URL)/types/validator.proto > $(TM_TYPES)/validator.proto
+ @curl -sSL $(TM_URL)/types/block.proto > $(TM_TYPES)/block.proto
+
+ @mkdir -p $(TM_CRYPTO_TYPES)
+ @curl -sSL $(TM_URL)/crypto/proof.proto > $(TM_CRYPTO_TYPES)/proof.proto
+ @curl -sSL $(TM_URL)/crypto/keys.proto > $(TM_CRYPTO_TYPES)/keys.proto
+
+ @mkdir -p $(TM_LIBS)
+ @curl -sSL $(TM_URL)/libs/bits/types.proto > $(TM_LIBS)/types.proto
+
+ @mkdir -p $(TM_P2P)
+ @curl -sSL $(TM_URL)/p2p/types.proto > $(TM_P2P)/types.proto
+
+ $(DOCKER) run --rm -v $(CURDIR)/proto:/workspace --workdir /workspace $(protoImageName) buf mod update
+
+.PHONY: proto-all proto-gen proto-swagger-gen proto-format proto-lint proto-check-breaking proto-update-deps
+
+###############################################################################
+###                                Localnet                                 ###
+###############################################################################
+
+localnet-build-env:
+ $(MAKE) -C contrib/images simd-env
+localnet-build-dlv:
+ $(MAKE) -C contrib/images simd-dlv
+
+localnet-build-nodes:
+ $(DOCKER) run --rm -v $(CURDIR)/.testnets:/data cosmossdk/simd \
+ testnet init-files --v 4 -o /data --starting-ip-address 192.168.10.2 --keyring-backend=test
+ docker-compose up -d
+
+localnet-stop:
+ docker-compose down
+
+# localnet-start will run a 4-node testnet locally. The nodes are
+# based off the docker images in: ./contrib/images/simd-env
+localnet-start: localnet-stop localnet-build-env localnet-build-nodes
+
+# localnet-debug will run a 4-node testnet locally in debug mode
+# you can read more about the debug mode here: ./contrib/images/simd-dlv/README.md
+localnet-debug: localnet-stop localnet-build-dlv localnet-build-nodes
+
+.PHONY: localnet-start localnet-stop localnet-debug localnet-build-env localnet-build-dlv localnet-build-nodes
+
+###############################################################################
+###                                rosetta                                  ###
+###############################################################################
+# builds rosetta test data dir
+rosetta-data:
+ -docker container rm data_dir_build
+ docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile .
+ docker run --name data_dir_build -t rosetta-ci:latest sh /rosetta/data.sh
+ docker cp data_dir_build:/tmp/data.tar.gz "$(CURDIR)/contrib/rosetta/rosetta-ci/data.tar.gz"
+ docker container rm data_dir_build
+.PHONY: rosetta-data
+```
+
+For the last tests, a tool called [runsim](https://github.com/cosmos/tools/tree/master/cmd/runsim) is used. It parallelizes `go test` instances and reports to GitHub and Slack integrations, keeping your team informed on how the simulations are running.
+
+### Random proposal contents
+
+Randomized governance proposals are also supported on the Cosmos SDK simulator. Each
+module must define the governance proposal `Content`s that they expose and register
+them to be used in the simulation parameters.
+
+## Registering simulation functions
+
+Now that all the required functions are defined, we need to integrate them into the module pattern within the module's `module.go`:
+
+```go expandable
+package distribution
+
+import (
+
+ "context"
+ "encoding/json"
+ "fmt"
+
+ gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime"
+ "github.com/spf13/cobra"
+ abci "github.com/tendermint/tendermint/abci/types"
+
+ modulev1 "cosmossdk.io/api/cosmos/distribution/module/v1"
+ "cosmossdk.io/core/appmodule"
+ "cosmossdk.io/depinject"
+
+ sdkclient "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/codec"
+ cdctypes "github.com/cosmos/cosmos-sdk/codec/types"
+ store "github.com/cosmos/cosmos-sdk/store/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/module"
+ simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+ "github.com/cosmos/cosmos-sdk/x/distribution/client/cli"
+ "github.com/cosmos/cosmos-sdk/x/distribution/exported"
+ "github.com/cosmos/cosmos-sdk/x/distribution/keeper"
+ "github.com/cosmos/cosmos-sdk/x/distribution/simulation"
+ "github.com/cosmos/cosmos-sdk/x/distribution/types"
+ govtypes 
"github.com/cosmos/cosmos-sdk/x/gov/types" + staking "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ ConsensusVersion defines the current x/distribution module consensus version. +const ConsensusVersion = 3 + +var ( + _ module.BeginBlockAppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the distribution module. +type AppModuleBasic struct { + cdc codec.Codec +} + +/ Name returns the distribution module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the distribution module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the distribution +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the distribution module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(&data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the distribution module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the distribution module. 
+func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd() +} + +/ GetQueryCmd returns the root query command for the distribution module. +func (AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd() +} + +/ RegisterInterfaces implements InterfaceModule +func (b AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the distribution module. +type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + bankKeeper types.BankKeeper + stakingKeeper types.StakingKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +/ NewAppModule creates a new AppModule object +func NewAppModule( + cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, stakingKeeper types.StakingKeeper, ss exported.Subspace, +) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc +}, + keeper: keeper, + accountKeeper: accountKeeper, + bankKeeper: bankKeeper, + stakingKeeper: stakingKeeper, + legacySubspace: ss, +} +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ Name returns the distribution module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterInvariants registers the distribution module invariants. +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ RegisterServices registers module services. 
+func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQuerier(am.keeper)) + m := keeper.NewMigrator(am.keeper, am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} +} + +/ InitGenesis performs genesis initialization for the distribution module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.keeper.InitGenesis(ctx, genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the distribution +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ BeginBlock returns the begin blocker for the distribution module. +func (am AppModule) + +BeginBlock(ctx sdk.Context, req abci.RequestBeginBlock) { + BeginBlocker(ctx, req, am.keeper) +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the distribution module. 
+func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalContents returns all the distribution content functions used to +/ simulate governance proposals. +func (am AppModule) + +ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent { + return nil +} + +/ RegisterStoreDecoder registers a decoder for distribution module's types +func (am AppModule) + +RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[types.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, am.accountKeeper, am.bankKeeper, am.keeper, am.stakingKeeper, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type DistrInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + + AccountKeeper types.AccountKeeper + BankKeeper types.BankKeeper + StakingKeeper types.StakingKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace +} + +type DistrOutputs struct { + depinject.Out + + DistrKeeper keeper.Keeper + Module appmodule.AppModule + Hooks staking.StakingHooksWrapper +} + +func ProvideModule(in DistrInputs) + +DistrOutputs { + feeCollectorName := in.Config.FeeCollectorName + if feeCollectorName == "" { + feeCollectorName = authtypes.FeeCollectorName +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + k := keeper.NewKeeper( + in.Cdc, + in.Key, 
+ in.AccountKeeper,
+ in.BankKeeper,
+ in.StakingKeeper,
+ feeCollectorName,
+ authority.String(),
+ )
+ m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.StakingKeeper, in.LegacySubspace)
+
+return DistrOutputs{
+ DistrKeeper: k,
+ Module: m,
+ Hooks: staking.StakingHooksWrapper{
+ StakingHooks: k.Hooks()
+},
+}
+}
+```

## App Simulator manager

The next step is setting up the `SimulationManager` at the app level. This
is required for the simulation test files in the next step.

```go
type CustomApp struct {
 ...
 sm *module.SimulationManager
}
```

Then, at the instantiation of the application, we create the `SimulationManager`
instance in the same way we create the `ModuleManager`, but this time we only pass
the modules that implement the simulation functions from the `AppModuleSimulation`
interface described above.

```go expandable
func NewCustomApp(...) {
 / create the simulation manager and define the order of the modules for deterministic simulations
 app.sm = module.NewSimulationManager(
 auth.NewAppModule(app.accountKeeper),
 bank.NewAppModule(app.bankKeeper, app.accountKeeper),
 supply.NewAppModule(app.supplyKeeper, app.accountKeeper),
 gov.NewAppModule(app.govKeeper, app.accountKeeper, app.supplyKeeper),
 mint.NewAppModule(app.mintKeeper),
 distr.NewAppModule(app.distrKeeper, app.accountKeeper, app.supplyKeeper, app.stakingKeeper),
 staking.NewAppModule(app.stakingKeeper, app.accountKeeper, app.supplyKeeper),
 slashing.NewAppModule(app.slashingKeeper, app.accountKeeper, app.stakingKeeper),
 )

 / register the store decoders for simulation tests
 app.sm.RegisterStoreDecoders()
 ...
+} +``` diff --git a/docs/sdk/v0.47/documentation/module-system/structure.mdx b/docs/sdk/v0.47/documentation/module-system/structure.mdx new file mode 100644 index 00000000..39732c23 --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/structure.mdx @@ -0,0 +1,94 @@ +--- +title: Recommended Folder Structure +--- + + +**Synopsis** +This document outlines the recommended structure of Cosmos SDK modules. These ideas are meant to be applied as suggestions. Application developers are encouraged to improve upon and contribute to module structure and development design. + + +## Structure + +A typical Cosmos SDK module can be structured as follows: + +```shell +proto +└── {project_name} +    └── {module_name} +    └── {proto_version} +       ├── {module_name}.proto +       ├── event.proto +       ├── genesis.proto +       ├── query.proto +       └── tx.proto +``` + +* `{module_name}.proto`: The module's common message type definitions. +* `event.proto`: The module's message type definitions related to events. +* `genesis.proto`: The module's message type definitions related to genesis state. +* `query.proto`: The module's Query service and related message type definitions. +* `tx.proto`: The module's Msg service and related message type definitions. 
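
As a sketch, a minimal `tx.proto` for a hypothetical `blog` module might look like the following (all names here are illustrative, not taken from an actual SDK module):

```protobuf
syntax = "proto3";
package project.blog.v1;

// Msg defines the blog module's Msg service.
service Msg {
  // CreatePost submits a new post.
  rpc CreatePost(MsgCreatePost) returns (MsgCreatePostResponse);
}

// MsgCreatePost is the Msg/CreatePost request type.
message MsgCreatePost {
  string creator = 1;
  string title = 2;
  string body = 3;
}

// MsgCreatePostResponse is the Msg/CreatePost response type.
message MsgCreatePostResponse {
  uint64 id = 1;
}
```

The other files (`query.proto`, `genesis.proto`, `event.proto`) follow the same pattern for their respective service and message type definitions.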
+
+```shell expandable
+x/{module_name}
+├── client
+│   ├── cli
+│   │   ├── query.go
+│   │   └── tx.go
+│   └── testutil
+│       ├── cli_test.go
+│       └── suite.go
+├── exported
+│   └── exported.go
+├── keeper
+│   ├── genesis.go
+│   ├── grpc_query.go
+│   ├── hooks.go
+│   ├── invariants.go
+│   ├── keeper.go
+│   ├── keys.go
+│   ├── msg_server.go
+│   └── querier.go
+├── module
+│   ├── module.go
+│   ├── abci.go
+│   └── autocli.go
+├── simulation
+│   ├── decoder.go
+│   ├── genesis.go
+│   ├── operations.go
+│   └── params.go
+├── {module_name}.pb.go
+├── codec.go
+├── errors.go
+├── events.go
+├── events.pb.go
+├── expected_keepers.go
+├── genesis.go
+├── genesis.pb.go
+├── keys.go
+├── msgs.go
+├── params.go
+├── query.pb.go
+├── tx.pb.go
+└── README.md
+```
+
+* `client/`: The module's CLI client functionality implementation and the module's CLI testing suite.
+* `exported/`: The module's exported types - typically interface types. If a module relies on keepers from another module, it is expected to receive the keepers as interface contracts through the `expected_keepers.go` file (see below) in order to avoid a direct dependency on the module implementing the keepers. However, these interface contracts can define methods that operate on and/or return types that are specific to the module that is implementing the keepers and this is where `exported/` comes into play. The interface types that are defined in `exported/` use canonical types, allowing for the module to receive the keepers as interface contracts through the `expected_keepers.go` file. This pattern allows for code to remain DRY and also alleviates import cycle chaos.
+* `keeper/`: The module's `Keeper` and `MsgServer` implementation.
+* `module/`: The module's `AppModule` and `AppModuleBasic` implementation.
+ * `abci.go`: The module's `BeginBlocker` and `EndBlocker` implementations (this file is only required if `BeginBlocker` and/or `EndBlocker` need to be defined).
+ * `autocli.go`: The module [autocli](/docs/sdk/v0.47/documentation/operations/tooling/autocli) options.
+* `simulation/`: The module's [simulation](/docs/sdk/v0.47/documentation/module-system/simulator) package defines functions used by the blockchain simulator application (`simapp`).
+* `README.md`: The module's specification documents outlining important concepts, state storage structure, and message and event type definitions. Learn more about how to write module specs in the [spec guidelines](/docs/sdk/v0.47/documentation/protocol-development/SPEC_MODULE).
+* The root directory includes type definitions for messages, events, and genesis state, including the type definitions generated by Protocol Buffers.
+ * `codec.go`: The module's registry methods for interface types.
+ * `errors.go`: The module's sentinel errors.
+ * `events.go`: The module's event types and constructors.
+ * `expected_keepers.go`: The module's [expected keeper](/docs/sdk/v0.47/documentation/module-system/keeper#type-definition) interfaces.
+ * `genesis.go`: The module's genesis state methods and helper functions.
+ * `keys.go`: The module's store keys and associated helper functions.
+ * `msgs.go`: The module's message type definitions and associated methods.
+ * `params.go`: The module's parameter type definitions and associated methods.
+ * `*.pb.go`: The module's type definitions generated by Protocol Buffers (as defined in the respective `*.proto` files above). diff --git a/docs/sdk/v0.47/documentation/module-system/testing.mdx b/docs/sdk/v0.47/documentation/module-system/testing.mdx new file mode 100644 index 00000000..f0c141fb --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/testing.mdx @@ -0,0 +1,2391 @@ +--- +title: Testing +--- + +The Cosmos SDK contains different types of [tests](https://martinfowler.com/articles/practical-test-pyramid.html). +These tests have different goals and are used at different stages of the development cycle.
+As a general rule, we advise using tests at all stages of the development cycle.
+As a chain developer, it is advised to test your application and modules in a similar way to the SDK.
+
+The rationale behind testing can be found in [ADR-59](/docs/common/pages/adr-comprehensive#adr-059-test-scopes).
+
+## Unit Tests
+
+Unit tests are the lowest test category of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
+All packages and modules should have unit test coverage. Modules should have their dependencies mocked: this means mocking keepers.
+
+The SDK uses `mockgen` to generate mocks for keepers:
+
+```shell expandable
+#!/usr/bin/env bash
+
+mockgen_cmd="mockgen"
+$mockgen_cmd -source=client/account_retriever.go -package mock -destination testutil/mock/account_retriever.go
+$mockgen_cmd -package mock -destination testutil/mock/tendermint_tm_db_DB.go github.com/tendermint/tm-db DB
+$mockgen_cmd -source=types/module/module.go -package mock -destination testutil/mock/types_module_module.go
+$mockgen_cmd -source=types/module/mock_appmodule_test.go -package mock -destination testutil/mock/types_mock_appmodule.go
+$mockgen_cmd -source=types/invariant.go -package mock -destination testutil/mock/types_invariant.go
+$mockgen_cmd -package mock -destination testutil/mock/grpc_server.go github.com/cosmos/gogoproto/grpc Server
+$mockgen_cmd -package mock -destination testutil/mock/tendermint_tendermint_libs_log_DB.go github.com/tendermint/tendermint/libs/log Logger
+$mockgen_cmd -source=orm/model/ormtable/hooks.go -package ormmocks -destination orm/testing/ormmocks/hooks.go
+$mockgen_cmd -source=x/nft/expected_keepers.go -package testutil -destination x/nft/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/feegrant/expected_keepers.go -package testutil -destination x/feegrant/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/mint/types/expected_keepers.go -package testutil -destination x/mint/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/params/proposal_handler_test.go -package testutil -destination x/params/testutil/staking_keeper_mock.go
+$mockgen_cmd -source=x/crisis/types/expected_keepers.go -package testutil -destination x/crisis/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/auth/types/expected_keepers.go -package testutil -destination x/auth/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/auth/ante/expected_keepers.go -package testutil -destination x/auth/ante/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/authz/expected_keepers.go -package testutil -destination x/authz/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/bank/types/expected_keepers.go -package testutil -destination x/bank/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/group/testutil/expected_keepers.go -package testutil -destination x/group/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/evidence/types/expected_keepers.go -package testutil -destination x/evidence/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/distribution/types/expected_keepers.go -package testutil -destination x/distribution/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/slashing/types/expected_keepers.go -package testutil -destination x/slashing/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/genutil/types/expected_keepers.go -package testutil -destination x/genutil/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/gov/testutil/expected_keepers.go -package testutil -destination x/gov/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/staking/types/expected_keepers.go -package testutil -destination x/staking/testutil/expected_keepers_mocks.go
+$mockgen_cmd -source=x/auth/vesting/types/expected_keepers.go -package testutil -destination x/auth/vesting/testutil/expected_keepers_mocks.go
+```
+
+You can read more about mockgen [here](https://github.com/golang/mock).
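
While `mockgen` generates these mocks automatically, a mock is conceptually just a substitute implementation of an expected-keeper interface. The following dependency-free, hand-rolled sketch illustrates the idea (the `BankKeeper` interface and `hasFunds` helper are illustrative, not SDK code):

```go
package main

import "fmt"

// BankKeeper is a simplified stand-in for an expected-keeper interface
// (hypothetical; real modules define these in expected_keepers.go).
type BankKeeper interface {
	GetBalance(addr string) int64
}

// mockBankKeeper is a hand-written mock; mockgen generates the equivalent
// (plus call-expectation tracking) automatically.
type mockBankKeeper struct {
	balances map[string]int64
	calls    int
}

func (m *mockBankKeeper) GetBalance(addr string) int64 {
	m.calls++
	return m.balances[addr]
}

// hasFunds is the "module logic" under test; it only sees the interface,
// never a concrete keeper from another module.
func hasFunds(bk BankKeeper, addr string, amount int64) bool {
	return bk.GetBalance(addr) >= amount
}

func main() {
	mock := &mockBankKeeper{balances: map[string]int64{"alice": 100}}
	fmt.Println(hasFunds(mock, "alice", 50)) // true
	fmt.Println(hasFunds(mock, "bob", 1))    // false
	fmt.Println(mock.calls)                  // 2
}
```

Generated mocks add call-expectation tracking (`EXPECT()`, `Return`, `AnyTimes`) on top of this basic substitution, as seen in the examples below.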
+
+### Example
+
+As an example, we will walk through the [keeper tests](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/gov/keeper/keeper_test.go) of the `x/gov` module.
+
+The `x/gov` module has a `Keeper` type that requires a few external dependencies (i.e. imports outside `x/gov`) to work properly.
+
+```go expandable
+package keeper
+
+import (
+
+ "fmt"
+ "time"
+ "github.com/tendermint/tendermint/libs/log"
+ "github.com/cosmos/cosmos-sdk/baseapp"
+ "github.com/cosmos/cosmos-sdk/codec"
+ storetypes "github.com/cosmos/cosmos-sdk/store/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+ "github.com/cosmos/cosmos-sdk/x/gov/types"
+ v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1"
+ "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1"
+)
+
+/ Keeper defines the governance module Keeper
+type Keeper struct {
+ authKeeper types.AccountKeeper
+ bankKeeper types.BankKeeper
+
+ / The reference to the DelegationSet and ValidatorSet to get information about validators and delegators
+ sk types.StakingKeeper
+
+ / GovHooks
+ hooks types.GovHooks
+
+ / The (unexposed)
+
+keys used to access the stores from the Context.
+ storeKey storetypes.StoreKey
+
+ / The codec for binary encoding/decoding.
+ cdc codec.BinaryCodec
+
+ / Legacy Proposal router
+ legacyRouter v1beta1.Router
+
+ / Msg server router
+ router *baseapp.MsgServiceRouter
+
+ config types.Config
+
+ / the address capable of executing a MsgUpdateParams message. Typically, this
+ / should be the x/gov module account.
+ authority string
+}
+
+/ GetAuthority returns the x/gov module's authority.
+func (k Keeper)
+
+GetAuthority()
+
+string {
+ return k.authority
+}
+
+/ NewKeeper returns a governance keeper.
It handles: +/ - submitting governance proposals +/ - depositing funds into proposals, and activating upon sufficient funds being deposited +/ - users voting on proposals, with weight proportional to stake in the system +/ - and tallying the result of the vote. +/ +/ CONTRACT: the parameter Subspace must have the param key table already initialized +func NewKeeper( + cdc codec.BinaryCodec, key storetypes.StoreKey, authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, sk types.StakingKeeper, + router *baseapp.MsgServiceRouter, config types.Config, authority string, +) *Keeper { + / ensure governance module account is set + if addr := authKeeper.GetModuleAddress(types.ModuleName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.ModuleName)) +} + if _, err := sdk.AccAddressFromBech32(authority); err != nil { + panic(fmt.Sprintf("invalid authority address: %s", authority)) +} + + / If MaxMetadataLen not set by app developer, set to default value. + if config.MaxMetadataLen == 0 { + config.MaxMetadataLen = types.DefaultConfig().MaxMetadataLen +} + +return &Keeper{ + storeKey: key, + authKeeper: authKeeper, + bankKeeper: bankKeeper, + sk: sk, + cdc: cdc, + router: router, + config: config, + authority: authority, +} +} + +/ Hooks gets the hooks for governance *Keeper { + func (keeper *Keeper) + +Hooks() + +types.GovHooks { + if keeper.hooks == nil { + / return a no-op implementation if no hooks are set + return types.MultiGovHooks{ +} + +} + +return keeper.hooks +} + +/ SetHooks sets the hooks for governance +func (keeper *Keeper) + +SetHooks(gh types.GovHooks) *Keeper { + if keeper.hooks != nil { + panic("cannot set governance hooks twice") +} + +keeper.hooks = gh + + return keeper +} + +/ SetLegacyRouter sets the legacy router for governance +func (keeper *Keeper) + +SetLegacyRouter(router v1beta1.Router) { + / It is vital to seal the governance proposal router here as to not allow + / further handlers to be registered after the 
keeper is created since this + / could create invalid or non-deterministic behavior. + router.Seal() + +keeper.legacyRouter = router +} + +/ Logger returns a module-specific logger. +func (keeper Keeper) + +Logger(ctx sdk.Context) + +log.Logger { + return ctx.Logger().With("module", "x/"+types.ModuleName) +} + +/ Router returns the gov keeper's router +func (keeper Keeper) + +Router() *baseapp.MsgServiceRouter { + return keeper.router +} + +/ LegacyRouter returns the gov keeper's legacy router +func (keeper Keeper) + +LegacyRouter() + +v1beta1.Router { + return keeper.legacyRouter +} + +/ GetGovernanceAccount returns the governance ModuleAccount +func (keeper Keeper) + +GetGovernanceAccount(ctx sdk.Context) + +authtypes.ModuleAccountI { + return keeper.authKeeper.GetModuleAccount(ctx, types.ModuleName) +} + +/ ProposalQueues + +/ InsertActiveProposalQueue inserts a proposalID into the active proposal queue at endTime +func (keeper Keeper) + +InsertActiveProposalQueue(ctx sdk.Context, proposalID uint64, endTime time.Time) { + store := ctx.KVStore(keeper.storeKey) + bz := types.GetProposalIDBytes(proposalID) + +store.Set(types.ActiveProposalQueueKey(proposalID, endTime), bz) +} + +/ RemoveFromActiveProposalQueue removes a proposalID from the Active Proposal Queue +func (keeper Keeper) + +RemoveFromActiveProposalQueue(ctx sdk.Context, proposalID uint64, endTime time.Time) { + store := ctx.KVStore(keeper.storeKey) + +store.Delete(types.ActiveProposalQueueKey(proposalID, endTime)) +} + +/ InsertInactiveProposalQueue inserts a proposalID into the inactive proposal queue at endTime +func (keeper Keeper) + +InsertInactiveProposalQueue(ctx sdk.Context, proposalID uint64, endTime time.Time) { + store := ctx.KVStore(keeper.storeKey) + bz := types.GetProposalIDBytes(proposalID) + +store.Set(types.InactiveProposalQueueKey(proposalID, endTime), bz) +} + +/ RemoveFromInactiveProposalQueue removes a proposalID from the Inactive Proposal Queue +func (keeper Keeper) + 
+RemoveFromInactiveProposalQueue(ctx sdk.Context, proposalID uint64, endTime time.Time) { + store := ctx.KVStore(keeper.storeKey) + +store.Delete(types.InactiveProposalQueueKey(proposalID, endTime)) +} + +/ Iterators + +/ IterateActiveProposalsQueue iterates over the proposals in the active proposal queue +/ and performs a callback function +func (keeper Keeper) + +IterateActiveProposalsQueue(ctx sdk.Context, endTime time.Time, cb func(proposal v1.Proposal) (stop bool)) { + iterator := keeper.ActiveProposalQueueIterator(ctx, endTime) + +defer iterator.Close() + for ; iterator.Valid(); iterator.Next() { + proposalID, _ := types.SplitActiveProposalQueueKey(iterator.Key()) + +proposal, found := keeper.GetProposal(ctx, proposalID) + if !found { + panic(fmt.Sprintf("proposal %d does not exist", proposalID)) +} + if cb(proposal) { + break +} + +} +} + +/ IterateInactiveProposalsQueue iterates over the proposals in the inactive proposal queue +/ and performs a callback function +func (keeper Keeper) + +IterateInactiveProposalsQueue(ctx sdk.Context, endTime time.Time, cb func(proposal v1.Proposal) (stop bool)) { + iterator := keeper.InactiveProposalQueueIterator(ctx, endTime) + +defer iterator.Close() + for ; iterator.Valid(); iterator.Next() { + proposalID, _ := types.SplitInactiveProposalQueueKey(iterator.Key()) + +proposal, found := keeper.GetProposal(ctx, proposalID) + if !found { + panic(fmt.Sprintf("proposal %d does not exist", proposalID)) +} + if cb(proposal) { + break +} + +} +} + +/ ActiveProposalQueueIterator returns an sdk.Iterator for all the proposals in the Active Queue that expire by endTime +func (keeper Keeper) + +ActiveProposalQueueIterator(ctx sdk.Context, endTime time.Time) + +sdk.Iterator { + store := ctx.KVStore(keeper.storeKey) + +return store.Iterator(types.ActiveProposalQueuePrefix, sdk.PrefixEndBytes(types.ActiveProposalByTimeKey(endTime))) +} + +/ InactiveProposalQueueIterator returns an sdk.Iterator for all the proposals in the Inactive Queue 
that expire by endTime +func (keeper Keeper) + +InactiveProposalQueueIterator(ctx sdk.Context, endTime time.Time) + +sdk.Iterator { + store := ctx.KVStore(keeper.storeKey) + +return store.Iterator(types.InactiveProposalQueuePrefix, sdk.PrefixEndBytes(types.InactiveProposalByTimeKey(endTime))) +} + +/ assertMetadataLength returns an error if given metadata length +/ is greater than a pre-defined MaxMetadataLen. +func (keeper Keeper) + +assertMetadataLength(metadata string) + +error { + if metadata != "" && uint64(len(metadata)) > keeper.config.MaxMetadataLen { + return types.ErrMetadataTooLong.Wrapf("got metadata with length %d", len(metadata)) +} + +return nil +} +``` + +In order to only test `x/gov`, we mock the [expected keepers](https://docs.cosmos.network/v0.46/building-modules/keeper.html#type-definition) and instantiate the `Keeper` with the mocked dependencies. Note that we may need to configure the mocked dependencies to return the expected values: + +```go expandable +package keeper_test + +import ( + + "fmt" + "testing" + "cosmossdk.io/math" + "github.com/golang/mock/gomock" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + tmtime "github.com/tendermint/tendermint/types/time" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/testutil" + "github.com/cosmos/cosmos-sdk/testutil/testdata" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +var ( + _, _, addr = testdata.KeyTestPubAddr() + +govAcct = 
authtypes.NewModuleAddress(types.ModuleName) + +TestProposal = getTestProposal() +) + +/ getTestProposal creates and returns a test proposal message. +func getTestProposal() []sdk.Msg { + legacyProposalMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Title", "description"), authtypes.NewModuleAddress(types.ModuleName).String()) + if err != nil { + panic(err) +} + +return []sdk.Msg{ + banktypes.NewMsgSend(govAcct, addr, sdk.NewCoins(sdk.NewCoin("stake", sdk.NewInt(1000)))), + legacyProposalMsg, +} +} + +/ setupGovKeeper creates a govKeeper as well as all its dependencies. +func setupGovKeeper(t *testing.T) ( + *keeper.Keeper, + *govtestutil.MockAccountKeeper, + *govtestutil.MockBankKeeper, + *govtestutil.MockStakingKeeper, + moduletestutil.TestEncodingConfig, + sdk.Context, +) { + key := sdk.NewKVStoreKey(types.StoreKey) + testCtx := testutil.DefaultContextWithDB(t, key, sdk.NewTransientStoreKey("transient_test")) + ctx := testCtx.Ctx.WithBlockHeader(tmproto.Header{ + Time: tmtime.Now() +}) + encCfg := moduletestutil.MakeTestEncodingConfig() + +v1.RegisterInterfaces(encCfg.InterfaceRegistry) + +v1beta1.RegisterInterfaces(encCfg.InterfaceRegistry) + +banktypes.RegisterInterfaces(encCfg.InterfaceRegistry) + + / Create MsgServiceRouter, but don't populate it before creating the gov + / keeper. 
+ msr := baseapp.NewMsgServiceRouter()
+
+ / gomock initializations
+ ctrl := gomock.NewController(t)
+ acctKeeper := govtestutil.NewMockAccountKeeper(ctrl)
+ bankKeeper := govtestutil.NewMockBankKeeper(ctrl)
+ stakingKeeper := govtestutil.NewMockStakingKeeper(ctrl)
+
+acctKeeper.EXPECT().GetModuleAddress(types.ModuleName).Return(govAcct).AnyTimes()
+
+acctKeeper.EXPECT().GetModuleAccount(gomock.Any(), types.ModuleName).Return(authtypes.NewEmptyModuleAccount(types.ModuleName)).AnyTimes()
+
+trackMockBalances(bankKeeper)
+
+stakingKeeper.EXPECT().TokensFromConsensusPower(ctx, gomock.Any()).DoAndReturn(func(ctx sdk.Context, power int64)
+
+math.Int {
+ return sdk.TokensFromConsensusPower(power, math.NewIntFromUint64(1000000))
+}).AnyTimes()
+
+stakingKeeper.EXPECT().BondDenom(ctx).Return("stake").AnyTimes()
+
+stakingKeeper.EXPECT().IterateBondedValidatorsByPower(gomock.Any(), gomock.Any()).AnyTimes()
+
+stakingKeeper.EXPECT().IterateDelegations(gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes()
+
+stakingKeeper.EXPECT().TotalBondedTokens(gomock.Any()).Return(math.NewInt(10000000)).AnyTimes()
+
+ / Gov keeper initializations
+ govKeeper := keeper.NewKeeper(encCfg.Codec, key, acctKeeper, bankKeeper, stakingKeeper, msr, types.DefaultConfig(), govAcct.String())
+
+govKeeper.SetProposalID(ctx, 1)
+ govRouter := v1beta1.NewRouter() / Also register legacy gov handlers to test them too.
+ govRouter.AddRoute(types.RouterKey, v1beta1.ProposalHandler)
+
+govKeeper.SetLegacyRouter(govRouter)
+
+govKeeper.SetParams(ctx, v1.DefaultParams())
+
+ / Register all handlers for the MsgServiceRouter.
+ msr.SetInterfaceRegistry(encCfg.InterfaceRegistry)
+
+v1.RegisterMsgServer(msr, keeper.NewMsgServerImpl(govKeeper))
+
+banktypes.RegisterMsgServer(msr, nil) / Nil is fine here as long as we never execute the proposal's Msgs.
+ + return govKeeper, acctKeeper, bankKeeper, stakingKeeper, encCfg, ctx +} + +/ trackMockBalances sets up expected calls on the Mock BankKeeper, and also +/ locally tracks accounts balances (not modules balances). +func trackMockBalances(bankKeeper *govtestutil.MockBankKeeper) { + balances := make(map[string]sdk.Coins) + + / We don't track module account balances. + bankKeeper.EXPECT().MintCoins(gomock.Any(), minttypes.ModuleName, gomock.Any()).AnyTimes() + +bankKeeper.EXPECT().BurnCoins(gomock.Any(), types.ModuleName, gomock.Any()).AnyTimes() + +bankKeeper.EXPECT().SendCoinsFromModuleToModule(gomock.Any(), minttypes.ModuleName, types.ModuleName, gomock.Any()).AnyTimes() + + / But we do track normal account balances. + bankKeeper.EXPECT().SendCoinsFromAccountToModule(gomock.Any(), gomock.Any(), types.ModuleName, gomock.Any()).DoAndReturn(func(_ sdk.Context, sender sdk.AccAddress, _ string, coins sdk.Coins) + +error { + newBalance, negative := balances[sender.String()].SafeSub(coins...) + if negative { + return fmt.Errorf("not enough balance") +} + +balances[sender.String()] = newBalance + return nil +}).AnyTimes() + +bankKeeper.EXPECT().SendCoinsFromModuleToAccount(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, module string, rcpt sdk.AccAddress, coins sdk.Coins) + +error { + balances[rcpt.String()] = balances[rcpt.String()].Add(coins...) + +return nil +}).AnyTimes() + +bankKeeper.EXPECT().GetAllBalances(gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, addr sdk.AccAddress) + +sdk.Coins { + return balances[addr.String()] +}).AnyTimes() +} +``` + +This allows us to test the `x/gov` module without having to import other modules. 
+ +```go expandable +package keeper_test + +import ( + + "testing" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +type KeeperTestSuite struct { + suite.Suite + + cdc codec.Codec + ctx sdk.Context + govKeeper *keeper.Keeper + acctKeeper *govtestutil.MockAccountKeeper + bankKeeper *govtestutil.MockBankKeeper + stakingKeeper *govtestutil.MockStakingKeeper + queryClient v1.QueryClient + legacyQueryClient v1beta1.QueryClient + addrs []sdk.AccAddress + msgSrvr v1.MsgServer + legacyMsgSrvr v1beta1.MsgServer +} + +func (suite *KeeperTestSuite) + +SetupSuite() { + suite.reset() +} + +func (suite *KeeperTestSuite) + +reset() { + govKeeper, acctKeeper, bankKeeper, stakingKeeper, encCfg, ctx := setupGovKeeper(suite.T()) + + / Populate the gov account with some coins, as the TestProposal we have + / is a MsgSend from the gov account. 
+ coins := sdk.NewCoins(sdk.NewCoin("stake", sdk.NewInt(100000))) + err := bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, coins) + +suite.NoError(err) + +err = bankKeeper.SendCoinsFromModuleToModule(ctx, minttypes.ModuleName, types.ModuleName, coins) + +suite.NoError(err) + queryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) + +v1.RegisterQueryServer(queryHelper, govKeeper) + legacyQueryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) + +v1beta1.RegisterQueryServer(legacyQueryHelper, keeper.NewLegacyQueryServer(govKeeper)) + queryClient := v1.NewQueryClient(queryHelper) + legacyQueryClient := v1beta1.NewQueryClient(legacyQueryHelper) + +suite.ctx = ctx + suite.govKeeper = govKeeper + suite.acctKeeper = acctKeeper + suite.bankKeeper = bankKeeper + suite.stakingKeeper = stakingKeeper + suite.cdc = encCfg.Codec + suite.queryClient = queryClient + suite.legacyQueryClient = legacyQueryClient + suite.msgSrvr = keeper.NewMsgServerImpl(suite.govKeeper) + +suite.legacyMsgSrvr = keeper.NewLegacyMsgServerImpl(govAcct.String(), suite.msgSrvr) + +suite.addrs = simtestutil.AddTestAddrsIncremental(bankKeeper, stakingKeeper, ctx, 3, sdk.NewInt(30000000)) +} + +func TestIncrementProposalNumber(t *testing.T) { + govKeeper, _, _, _, _, ctx := setupGovKeeper(t) + tp := TestProposal + _, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + _, err = 
govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"))
+
+require.NoError(t, err)
+
+proposal6, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"))
+
+require.NoError(t, err)
+
+require.Equal(t, uint64(6), proposal6.Id)
+}
+
+func TestProposalQueues(t *testing.T) {
+ govKeeper, _, _, _, _, ctx := setupGovKeeper(t)
+
+ / create test proposals
+ tp := TestProposal
+ proposal, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"))
+
+require.NoError(t, err)
+ inactiveIterator := govKeeper.InactiveProposalQueueIterator(ctx, *proposal.DepositEndTime)
+
+require.True(t, inactiveIterator.Valid())
+ proposalID := types.GetProposalIDFromBytes(inactiveIterator.Value())
+
+require.Equal(t, proposalID, proposal.Id)
+
+inactiveIterator.Close()
+
+govKeeper.ActivateVotingPeriod(ctx, proposal)
+
+proposal, ok := govKeeper.GetProposal(ctx, proposal.Id)
+
+require.True(t, ok)
+ activeIterator := govKeeper.ActiveProposalQueueIterator(ctx, *proposal.VotingEndTime)
+
+require.True(t, activeIterator.Valid())
+
+proposalID, _ = types.SplitActiveProposalQueueKey(activeIterator.Key())
+
+require.Equal(t, proposalID, proposal.Id)
+
+activeIterator.Close()
+}
+
+func TestKeeperTestSuite(t *testing.T) {
+ suite.Run(t, new(KeeperTestSuite))
+}
+```
+
+We can then create unit tests using the newly created `Keeper` instance.
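
The ID assignment asserted in `TestIncrementProposalNumber` above (six consecutive submissions yield proposal ID 6) boils down to a monotonically increasing counter. As a dependency-free sketch of that behavior (`proposalStore` and `submitProposal` are hypothetical stand-ins, not the SDK keeper):

```go
package main

import "fmt"

// proposalStore is a hypothetical stand-in for the gov keeper's proposal
// ID state: each submission increments and returns the counter.
type proposalStore struct{ nextID uint64 }

func (s *proposalStore) submitProposal() uint64 {
	s.nextID++
	return s.nextID
}

func main() {
	s := &proposalStore{}
	var last uint64
	// six submissions, mirroring the six SubmitProposal calls in the test
	for i := 0; i < 6; i++ {
		last = s.submitProposal()
	}
	fmt.Println(last) // 6
}
```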
+ +```go expandable +package keeper_test + +import ( + + "testing" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +type KeeperTestSuite struct { + suite.Suite + + cdc codec.Codec + ctx sdk.Context + govKeeper *keeper.Keeper + acctKeeper *govtestutil.MockAccountKeeper + bankKeeper *govtestutil.MockBankKeeper + stakingKeeper *govtestutil.MockStakingKeeper + queryClient v1.QueryClient + legacyQueryClient v1beta1.QueryClient + addrs []sdk.AccAddress + msgSrvr v1.MsgServer + legacyMsgSrvr v1beta1.MsgServer +} + +func (suite *KeeperTestSuite) + +SetupSuite() { + suite.reset() +} + +func (suite *KeeperTestSuite) + +reset() { + govKeeper, acctKeeper, bankKeeper, stakingKeeper, encCfg, ctx := setupGovKeeper(suite.T()) + + / Populate the gov account with some coins, as the TestProposal we have + / is a MsgSend from the gov account. 
+ coins := sdk.NewCoins(sdk.NewCoin("stake", sdk.NewInt(100000))) + err := bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, coins) + +suite.NoError(err) + +err = bankKeeper.SendCoinsFromModuleToModule(ctx, minttypes.ModuleName, types.ModuleName, coins) + +suite.NoError(err) + queryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) + +v1.RegisterQueryServer(queryHelper, govKeeper) + legacyQueryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) + +v1beta1.RegisterQueryServer(legacyQueryHelper, keeper.NewLegacyQueryServer(govKeeper)) + queryClient := v1.NewQueryClient(queryHelper) + legacyQueryClient := v1beta1.NewQueryClient(legacyQueryHelper) + +suite.ctx = ctx + suite.govKeeper = govKeeper + suite.acctKeeper = acctKeeper + suite.bankKeeper = bankKeeper + suite.stakingKeeper = stakingKeeper + suite.cdc = encCfg.Codec + suite.queryClient = queryClient + suite.legacyQueryClient = legacyQueryClient + suite.msgSrvr = keeper.NewMsgServerImpl(suite.govKeeper) + +suite.legacyMsgSrvr = keeper.NewLegacyMsgServerImpl(govAcct.String(), suite.msgSrvr) + +suite.addrs = simtestutil.AddTestAddrsIncremental(bankKeeper, stakingKeeper, ctx, 3, sdk.NewInt(30000000)) +} + +func TestIncrementProposalNumber(t *testing.T) { + govKeeper, _, _, _, _, ctx := setupGovKeeper(t) + tp := TestProposal + _, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + _, err = 
govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"))
+
+require.NoError(t, err)
+
+proposal6, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"))
+
+require.NoError(t, err)
+
+require.Equal(t, uint64(6), proposal6.Id)
+}
+
+func TestProposalQueues(t *testing.T) {
+ govKeeper, _, _, _, _, ctx := setupGovKeeper(t)
+
+ / create test proposals
+ tp := TestProposal
+ proposal, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"))
+
+require.NoError(t, err)
+ inactiveIterator := govKeeper.InactiveProposalQueueIterator(ctx, *proposal.DepositEndTime)
+
+require.True(t, inactiveIterator.Valid())
+ proposalID := types.GetProposalIDFromBytes(inactiveIterator.Value())
+
+require.Equal(t, proposalID, proposal.Id)
+
+inactiveIterator.Close()
+
+govKeeper.ActivateVotingPeriod(ctx, proposal)
+
+proposal, ok := govKeeper.GetProposal(ctx, proposal.Id)
+
+require.True(t, ok)
+ activeIterator := govKeeper.ActiveProposalQueueIterator(ctx, *proposal.VotingEndTime)
+
+require.True(t, activeIterator.Valid())
+
+proposalID, _ = types.SplitActiveProposalQueueKey(activeIterator.Key())
+
+require.Equal(t, proposalID, proposal.Id)
+
+activeIterator.Close()
+}
+
+func TestKeeperTestSuite(t *testing.T) {
+ suite.Run(t, new(KeeperTestSuite))
+}
+```
+
+## Integration Tests
+
+Integration tests are at the second level of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
+In the SDK, we locate our integration tests under [`tests/integration`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/integration).
+
+The goal of these integration tests is to test how a component interacts with other dependencies. Compared to unit tests, integration tests do not mock dependencies. Instead, they use the direct dependencies of the component.
This also differs from end-to-end tests, which test the component with a full application.
+
+Integration tests interact with the tested module via the defined `Msg` and `Query` services. The result of the test can be verified by checking the state of the application, by checking the emitted events, or by checking the response. It is advised to combine two of these methods to verify the result of the test.
+
+The SDK provides small helpers for quickly setting up integration tests. These helpers can be found in [`testutil/integration`](https://github.com/cosmos/cosmos-sdk/blob/main/testutil/integration).
+
+### Example
+
+```go expandable
+package integration_test
+
+import (
+
+ "fmt"
+ "io"
+ "cosmossdk.io/log"
+ storetypes "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/runtime"
+ "github.com/cosmos/cosmos-sdk/testutil/integration"
+ moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil"
+ "github.com/cosmos/cosmos-sdk/x/auth"
+ authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+ authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+ "github.com/cosmos/cosmos-sdk/x/mint"
+ mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper"
+ minttypes "github.com/cosmos/cosmos-sdk/x/mint/types"
+ "github.com/google/go-cmp/cmp"
+)
+
+/ Example shows how to use the integration test framework to test the integration of modules.
+/ Panics are used in this example, but in a real test case, you should use the testing.T object and assertions.
+func Example() { + / in this example we are testing the integration of the following modules: + / - mint, which directly depends on auth, bank and staking + encodingCfg := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}, mint.AppModuleBasic{ +}) + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey, minttypes.StoreKey) + authority := authtypes.NewModuleAddress("gov").String() + accountKeeper := authkeeper.NewAccountKeeper( + encodingCfg.Codec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}}, + "cosmos", + authority, + ) + + / subspace is nil because we don't test params (which is legacy anyway) + authModule := auth.NewAppModule(encodingCfg.Codec, accountKeeper, authsims.RandomGenesisAccounts, nil) + + / here bankkeeper and staking keeper is nil because we are not testing them + / subspace is nil because we don't test params (which is legacy anyway) + mintKeeper := mintkeeper.NewKeeper(encodingCfg.Codec, keys[minttypes.StoreKey], nil, accountKeeper, nil, authtypes.FeeCollectorName, authority) + mintModule := mint.NewAppModule(encodingCfg.Codec, mintKeeper, accountKeeper, nil, nil) + + / create the application and register all the modules from the previous step + / replace the name and the logger by testing values in a real test case (e.g. 
t.Name() + +and log.NewTestLogger(t)) + integrationApp := integration.NewIntegrationApp("example", log.NewLogger(io.Discard), keys, authModule, mintModule) + + / register the message and query servers + authtypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), authkeeper.NewMsgServerImpl(accountKeeper)) + +minttypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), mintkeeper.NewMsgServerImpl(mintKeeper)) + +minttypes.RegisterQueryServer(integrationApp.QueryHelper(), mintKeeper) + params := minttypes.DefaultParams() + +params.BlocksPerYear = 10000 + + / now we can use the application to test a mint message + result, err := integrationApp.RunMsg(&minttypes.MsgUpdateParams{ + Authority: authority, + Params: params, +}) + if err != nil { + panic(err) +} + + / in this example the result is an empty response, a nil check is enough + / in other cases, it is recommended to check the result value. + if result == nil { + panic(fmt.Errorf("unexpected nil result")) +} + + / we now check the result + resp := minttypes.MsgUpdateParamsResponse{ +} + +err = encodingCfg.Codec.Unmarshal(result.Value, &resp) + if err != nil { + panic(err) +} + + / we should also check the state of the application + got := mintKeeper.GetParams(integrationApp.SDKContext()) + if diff := cmp.Diff(got, params); diff != "" { + panic(diff) +} + +fmt.Println(got.BlocksPerYear) + / Output: 10000 +} + +/ ExampleOneModule shows how to use the integration test framework to test the integration of a single module. +/ That module has no dependency on other modules. 
+func Example_oneModule() { + / in this example we are testing the integration of the auth module: + encodingCfg := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}) + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey) + authority := authtypes.NewModuleAddress("gov").String() + accountKeeper := authkeeper.NewAccountKeeper( + encodingCfg.Codec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}}, + "cosmos", + authority, + ) + + / subspace is nil because we don't test params (which is legacy anyway) + authModule := auth.NewAppModule(encodingCfg.Codec, accountKeeper, authsims.RandomGenesisAccounts, nil) + + / create the application and register all the modules from the previous step + / replace the name and the logger by testing values in a real test case (e.g. t.Name() + +and log.NewTestLogger(t)) + integrationApp := integration.NewIntegrationApp("example-one-module", log.NewLogger(io.Discard), keys, authModule) + + / register the message and query servers + authtypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), authkeeper.NewMsgServerImpl(accountKeeper)) + params := authtypes.DefaultParams() + +params.MaxMemoCharacters = 1000 + + / now we can use the application to test a mint message + result, err := integrationApp.RunMsg(&authtypes.MsgUpdateParams{ + Authority: authority, + Params: params, +}) + if err != nil { + panic(err) +} + + / in this example the result is an empty response, a nil check is enough + / in other cases, it is recommended to check the result value. 
+ if result == nil {
+ panic(fmt.Errorf("unexpected nil result"))
+}
+
+ / we now check the result
+ resp := authtypes.MsgUpdateParamsResponse{
+}
+
+err = encodingCfg.Codec.Unmarshal(result.Value, &resp)
+ if err != nil {
+ panic(err)
+}
+
+ / we should also check the state of the application
+ got := accountKeeper.GetParams(integrationApp.SDKContext())
+ if diff := cmp.Diff(got, params); diff != "" {
+ panic(diff)
+}
+
+fmt.Println(got.MaxMemoCharacters)
+ / Output: 1000
+}
+```
+
+## Deterministic and Regression tests
+
+Tests are written for queries in the Cosmos SDK that carry the `module_query_safe` Protobuf annotation.
+
+Each query is tested using two methods:
+
+* Property-based tests, written with the [`rapid`](https://pkg.go.dev/pgregory.net/rapid@v0.5.3) library, assert that the query response and gas consumption are identical across 1000 query calls.
+* Regression tests are written with hardcoded responses and gas, and verify they don't change upon 1000 calls and between SDK patch versions.
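Stripped of the SDK plumbing, the determinism property that these tests assert can be sketched in plain Go. This is only an illustration of the check, not SDK code: `result` and `checkDeterministic` are hypothetical names standing in for a query response and for `testdata.DeterministicIterations`.

```go
package main

import (
	"fmt"
	"reflect"
)

// result bundles a query response with the gas consumed to produce it.
type result struct {
	Balance string
	GasUsed uint64
}

// checkDeterministic re-runs query `iterations` times and verifies that
// every call yields an identical response and gas figure: the property
// that a module_query_safe query must satisfy.
func checkDeterministic(iterations int, query func() result) error {
	first := query()
	for i := 1; i < iterations; i++ {
		if got := query(); !reflect.DeepEqual(got, first) {
			return fmt.Errorf("iteration %d differs: %+v vs %+v", i, got, first)
		}
	}
	return nil
}

func main() {
	// A stand-in for a deterministic query: same response and gas on each call.
	balanceQuery := func() result { return result{Balance: "10stake", GasUsed: 1087} }
	if err := checkDeterministic(1000, balanceQuery); err != nil {
		panic(err)
	}
	fmt.Println("deterministic over 1000 calls")
}
```

In the real tests, `rapid` generates the addresses, denoms, and amounts fed into the query, so the property is checked over many random inputs rather than one fixed case.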
+ +Here's an example of regression tests: + +```go expandable +package keeper_test + +import ( + + "testing" + "github.com/stretchr/testify/suite" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + "pgregory.net/rapid" + "github.com/cosmos/cosmos-sdk/baseapp" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/testutil/configurator" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + "github.com/cosmos/cosmos-sdk/testutil/testdata" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktestutil "github.com/cosmos/cosmos-sdk/x/bank/testutil" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + + _ "github.com/cosmos/cosmos-sdk/x/auth" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + _ "github.com/cosmos/cosmos-sdk/x/bank" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" +) + +type DeterministicTestSuite struct { + suite.Suite + + ctx sdk.Context + bankKeeper keeper.BaseKeeper + + queryClient banktypes.QueryClient +} + +var ( + denomRegex = sdk.DefaultCoinDenomRegex() + +addr1 = sdk.MustAccAddressFromBech32("cosmos139f7kncmglres2nf3h4hc4tade85ekfr8sulz5") + +coin1 = sdk.NewCoin("denom", sdk.NewInt(10)) + +metadataAtom = banktypes.Metadata{ + Description: "The native staking token of the Cosmos Hub.", + DenomUnits: []*banktypes.DenomUnit{ + { + Denom: "uatom", + Exponent: 0, + Aliases: []string{"microatom" +}, +}, + { + Denom: "atom", + Exponent: 6, + Aliases: []string{"ATOM" +}, +}, +}, + Base: "uatom", + Display: "atom", +} +) + +func TestDeterministicTestSuite(t *testing.T) { + suite.Run(t, new(DeterministicTestSuite)) +} + +func (suite *DeterministicTestSuite) + +SetupTest() { + var interfaceRegistry codectypes.InterfaceRegistry + + app, err := simtestutil.Setup( + configurator.NewAppConfig( + 
configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.ConsensusModule(), + configurator.BankModule(), + configurator.StakingModule(), + ), + &suite.bankKeeper, + &interfaceRegistry, + ) + +suite.Require().NoError(err) + ctx := app.BaseApp.NewContext(false, tmproto.Header{ +}) + +suite.ctx = ctx + queryHelper := baseapp.NewQueryServerTestHelper(ctx, interfaceRegistry) + +banktypes.RegisterQueryServer(queryHelper, suite.bankKeeper) + +suite.queryClient = banktypes.NewQueryClient(queryHelper) +} + +func (suite *DeterministicTestSuite) + +fundAccount(addr sdk.AccAddress, coin ...sdk.Coin) { + err := banktestutil.FundAccount(suite.bankKeeper, suite.ctx, addr, sdk.NewCoins(coin...)) + +suite.Require().NoError(err) +} + +func (suite *DeterministicTestSuite) + +getCoin(t *rapid.T) + +sdk.Coin { + return sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(t, "denom"), + sdk.NewInt(rapid.Int64Min(1).Draw(t, "amount")), + ) +} + +func (suite *DeterministicTestSuite) + +TestGRPCQueryBalance() { + rapid.Check(suite.T(), func(t *rapid.T) { + addr := testdata.AddressGenerator(t).Draw(t, "address") + coin := suite.getCoin(t) + +suite.fundAccount(addr, coin) + req := banktypes.NewQueryBalanceRequest(addr, coin.GetDenom()) + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.Balance, 0, true) +}) + +suite.fundAccount(addr1, coin1) + req := banktypes.NewQueryBalanceRequest(addr1, coin1.GetDenom()) + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.Balance, 1087, false) +} + +func (suite *DeterministicTestSuite) + +TestGRPCQueryAllBalances() { + rapid.Check(suite.T(), func(t *rapid.T) { + addr := testdata.AddressGenerator(t).Draw(t, "address") + numCoins := rapid.IntRange(1, 10).Draw(t, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := suite.getCoin(t) + + / NewCoins sorts the denoms + coins = sdk.NewCoins(append(coins, 
coin)...) +} + +suite.fundAccount(addr, coins...) + req := banktypes.NewQueryAllBalancesRequest(addr, testdata.PaginationGenerator(t, uint64(numCoins)).Draw(t, "pagination")) + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.AllBalances, 0, true) +}) + coins := sdk.NewCoins( + sdk.NewCoin("stake", sdk.NewInt(10)), + sdk.NewCoin("denom", sdk.NewInt(100)), + ) + +suite.fundAccount(addr1, coins...) + req := banktypes.NewQueryAllBalancesRequest(addr1, nil) + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.AllBalances, 357, false) +} + +func (suite *DeterministicTestSuite) + +TestGRPCQuerySpendableBalances() { + rapid.Check(suite.T(), func(t *rapid.T) { + addr := testdata.AddressGenerator(t).Draw(t, "address") + numCoins := rapid.IntRange(1, 10).Draw(t, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(t, "denom"), + sdk.NewInt(rapid.Int64Min(1).Draw(t, "amount")), + ) + + / NewCoins sorts the denoms + coins = sdk.NewCoins(append(coins, coin)...) 
+} + err := banktestutil.FundAccount(suite.bankKeeper, suite.ctx, addr, coins) + +suite.Require().NoError(err) + req := banktypes.NewQuerySpendableBalancesRequest(addr, testdata.PaginationGenerator(t, uint64(numCoins)).Draw(t, "pagination")) + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.SpendableBalances, 0, true) +}) + coins := sdk.NewCoins( + sdk.NewCoin("stake", sdk.NewInt(10)), + sdk.NewCoin("denom", sdk.NewInt(100)), + ) + err := banktestutil.FundAccount(suite.bankKeeper, suite.ctx, addr1, coins) + +suite.Require().NoError(err) + req := banktypes.NewQuerySpendableBalancesRequest(addr1, nil) + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.SpendableBalances, 2032, false) +} + +func (suite *DeterministicTestSuite) + +TestGRPCQueryTotalSupply() { + res, err := suite.queryClient.TotalSupply(suite.ctx, &banktypes.QueryTotalSupplyRequest{ +}) + +suite.Require().NoError(err) + initialSupply := res.GetSupply() + +rapid.Check(suite.T(), func(t *rapid.T) { + numCoins := rapid.IntRange(1, 3).Draw(t, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(t, "denom"), + sdk.NewInt(rapid.Int64Min(1).Draw(t, "amount")), + ) + +coins = coins.Add(coin) +} + +suite.Require().NoError(suite.bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, coins)) + +initialSupply = initialSupply.Add(coins...) 
+ req := &banktypes.QueryTotalSupplyRequest{ + Pagination: testdata.PaginationGenerator(t, uint64(len(initialSupply))).Draw(t, "pagination"), +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.TotalSupply, 0, true) +}) + +suite.SetupTest() / reset + coins := sdk.NewCoins( + sdk.NewCoin("foo", sdk.NewInt(10)), + sdk.NewCoin("bar", sdk.NewInt(100)), + ) + +suite.Require().NoError(suite.bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, coins)) + req := &banktypes.QueryTotalSupplyRequest{ +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.TotalSupply, 243, false) +} + +func (suite *DeterministicTestSuite) + +TestGRPCQueryTotalSupplyOf() { + rapid.Check(suite.T(), func(t *rapid.T) { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(t, "denom"), + sdk.NewInt(rapid.Int64Min(1).Draw(t, "amount")), + ) + +suite.Require().NoError(suite.bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, sdk.NewCoins(coin))) + req := &banktypes.QuerySupplyOfRequest{ + Denom: coin.GetDenom() +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.SupplyOf, 0, true) +}) + coin := sdk.NewCoin("bar", sdk.NewInt(100)) + +suite.Require().NoError(suite.bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, sdk.NewCoins(coin))) + req := &banktypes.QuerySupplyOfRequest{ + Denom: coin.GetDenom() +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.SupplyOf, 1021, false) +} + +func (suite *DeterministicTestSuite) + +TestGRPCQueryParams() { + rapid.Check(suite.T(), func(t *rapid.T) { + enabledStatus := banktypes.SendEnabled{ + Denom: rapid.StringMatching(denomRegex).Draw(t, "denom"), + Enabled: rapid.Bool().Draw(t, "status"), +} + params := banktypes.Params{ + SendEnabled: []*banktypes.SendEnabled{&enabledStatus +}, + DefaultSendEnabled: rapid.Bool().Draw(t, "send"), +} + +suite.bankKeeper.SetParams(suite.ctx, params) + req := 
&banktypes.QueryParamsRequest{ +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.Params, 0, true) +}) + enabledStatus := banktypes.SendEnabled{ + Denom: "denom", + Enabled: true, +} + params := banktypes.Params{ + SendEnabled: []*banktypes.SendEnabled{&enabledStatus +}, + DefaultSendEnabled: false, +} + +suite.bankKeeper.SetParams(suite.ctx, params) + req := &banktypes.QueryParamsRequest{ +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.Params, 1003, false) +} + +func (suite *DeterministicTestSuite) + +createAndReturnMetadatas(t *rapid.T, count int) []banktypes.Metadata { + denomsMetadata := make([]banktypes.Metadata, 0, count) + for i := 0; i < count; i++ { + denom := rapid.StringMatching(denomRegex).Draw(t, "denom") + aliases := rapid.SliceOf(rapid.String()).Draw(t, "aliases") + / In the GRPC server code, empty arrays are returned as nil + if len(aliases) == 0 { + aliases = nil +} + metadata := banktypes.Metadata{ + Description: rapid.StringN(1, 100, 100).Draw(t, "desc"), + DenomUnits: []*banktypes.DenomUnit{ + { + Denom: denom, + Exponent: rapid.Uint32().Draw(t, "exponent"), + Aliases: aliases, +}, +}, + Base: denom, + Display: denom, + Name: rapid.String().Draw(t, "name"), + Symbol: rapid.String().Draw(t, "symbol"), + URI: rapid.String().Draw(t, "uri"), + URIHash: rapid.String().Draw(t, "uri-hash"), +} + +denomsMetadata = append(denomsMetadata, metadata) +} + +return denomsMetadata +} + +func (suite *DeterministicTestSuite) + +TestGRPCDenomsMetadata() { + rapid.Check(suite.T(), func(t *rapid.T) { + count := rapid.IntRange(1, 3).Draw(t, "count") + denomsMetadata := suite.createAndReturnMetadatas(t, count) + +suite.Require().Len(denomsMetadata, count) + for i := 0; i < count; i++ { + suite.bankKeeper.SetDenomMetaData(suite.ctx, denomsMetadata[i]) +} + req := &banktypes.QueryDenomsMetadataRequest{ + Pagination: testdata.PaginationGenerator(t, uint64(count)).Draw(t, "pagination"), +} + 
+testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.DenomsMetadata, 0, true) +}) + +suite.SetupTest() / reset + + suite.bankKeeper.SetDenomMetaData(suite.ctx, metadataAtom) + req := &banktypes.QueryDenomsMetadataRequest{ +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.DenomsMetadata, 660, false) +} + +func (suite *DeterministicTestSuite) + +TestGRPCDenomMetadata() { + rapid.Check(suite.T(), func(t *rapid.T) { + denomMetadata := suite.createAndReturnMetadatas(t, 1) + +suite.Require().Len(denomMetadata, 1) + +suite.bankKeeper.SetDenomMetaData(suite.ctx, denomMetadata[0]) + req := &banktypes.QueryDenomMetadataRequest{ + Denom: denomMetadata[0].Base, +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.DenomMetadata, 0, true) +}) + +suite.bankKeeper.SetDenomMetaData(suite.ctx, metadataAtom) + req := &banktypes.QueryDenomMetadataRequest{ + Denom: metadataAtom.Base, +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.DenomMetadata, 1300, false) +} + +func (suite *DeterministicTestSuite) + +TestGRPCSendEnabled() { + allDenoms := []string{ +} + +rapid.Check(suite.T(), func(t *rapid.T) { + count := rapid.IntRange(0, 10).Draw(t, "count") + denoms := make([]string, 0, count) + for i := 0; i < count; i++ { + coin := banktypes.SendEnabled{ + Denom: rapid.StringMatching(denomRegex).Draw(t, "denom"), + Enabled: rapid.Bool().Draw(t, "enabled-status"), +} + +suite.bankKeeper.SetSendEnabled(suite.ctx, coin.Denom, coin.Enabled) + +denoms = append(denoms, coin.Denom) +} + +allDenoms = append(allDenoms, denoms...) 
+ req := &banktypes.QuerySendEnabledRequest{ + Denoms: denoms, + / Pagination is only taken into account when `denoms` is an empty array + Pagination: testdata.PaginationGenerator(t, uint64(len(allDenoms))).Draw(t, "pagination"), +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.SendEnabled, 0, true) +}) + +coin1 := banktypes.SendEnabled{ + Denom: "falsecoin", + Enabled: false, +} + +coin2 := banktypes.SendEnabled{ + Denom: "truecoin", + Enabled: true, +} + +suite.bankKeeper.SetSendEnabled(suite.ctx, coin1.Denom, false) + +suite.bankKeeper.SetSendEnabled(suite.ctx, coin2.Denom, true) + req := &banktypes.QuerySendEnabledRequest{ + Denoms: []string{ + coin1.GetDenom(), coin2.GetDenom() +}, +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.SendEnabled, 4063, false) +} + +func (suite *DeterministicTestSuite) + +TestGRPCDenomOwners() { + rapid.Check(suite.T(), func(t *rapid.T) { + denom := rapid.StringMatching(denomRegex).Draw(t, "denom") + numAddr := rapid.IntRange(1, 10).Draw(t, "number-address") + for i := 0; i < numAddr; i++ { + addr := testdata.AddressGenerator(t).Draw(t, "address") + coin := sdk.NewCoin( + denom, + sdk.NewInt(rapid.Int64Min(1).Draw(t, "amount")), + ) + err := banktestutil.FundAccount(suite.bankKeeper, suite.ctx, addr, sdk.NewCoins(coin)) + +suite.Require().NoError(err) +} + req := &banktypes.QueryDenomOwnersRequest{ + Denom: denom, + Pagination: testdata.PaginationGenerator(t, uint64(numAddr)).Draw(t, "pagination"), +} + +testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.DenomOwners, 0, true) +}) + denomOwners := []*banktypes.DenomOwner{ + { + Address: "cosmos1qg65a9q6k2sqq7l3ycp428sqqpmqcucgzze299", + Balance: coin1, +}, + { + Address: "cosmos1qglnsqgpq48l7qqzgs8qdshr6fh3gqq9ej3qut", + Balance: coin1, +}, +} + for i := 0; i < len(denomOwners); i++ { + addr, err := sdk.AccAddressFromBech32(denomOwners[i].Address) + 
+suite.Require().NoError(err)
+
+err = banktestutil.FundAccount(suite.bankKeeper, suite.ctx, addr, sdk.NewCoins(coin1))
+
+suite.Require().NoError(err)
+}
+ req := &banktypes.QueryDenomOwnersRequest{
+ Denom: coin1.GetDenom(),
+}
+
+testdata.DeterministicIterations(suite.ctx, suite.Require(), req, suite.queryClient.DenomOwners, 2525, false)
+}
+```
+
+## Simulations
+
+Simulations also use a minimal application, built with [`depinject`](/docs/sdk/v0.47/documentation/module-system/depinject):
+
+
+You can also use the `AppConfig` `configurator` to create an `AppConfig` [inline](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/slashing/app_test.go#L54-L62). There is no difference between the two approaches; use whichever you prefer.
+
+
+The following is an example of `x/gov` simulations:
+
+```go expandable
+package simulation_test
+
+import (
+
+ "fmt"
+ "math/rand"
+ "testing"
+ "time"
+ "github.com/stretchr/testify/require"
+ abci "github.com/tendermint/tendermint/abci/types"
+ tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
+ "github.com/cosmos/cosmos-sdk/codec"
+ "github.com/cosmos/cosmos-sdk/runtime"
+ "github.com/cosmos/cosmos-sdk/testutil/configurator"
+ simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
+ _ "github.com/cosmos/cosmos-sdk/x/auth"
+ authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+ _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config"
+ _ "github.com/cosmos/cosmos-sdk/x/bank"
+ bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper"
+ "github.com/cosmos/cosmos-sdk/x/bank/testutil"
+ _ "github.com/cosmos/cosmos-sdk/x/consensus"
+ govcodec "github.com/cosmos/cosmos-sdk/x/gov/codec"
+ "github.com/cosmos/cosmos-sdk/x/gov/keeper"
+ "github.com/cosmos/cosmos-sdk/x/gov/simulation"
+ "github.com/cosmos/cosmos-sdk/x/gov/types"
+ v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1"
+
"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +type MockWeightedProposalContent struct { + n int +} + +func (m MockWeightedProposalContent) + +AppParamsKey() + +string { + return fmt.Sprintf("AppParamsKey-%d", m.n) +} + +func (m MockWeightedProposalContent) + +DefaultWeight() + +int { + return m.n +} + +func (m MockWeightedProposalContent) + +ContentSimulatorFn() + +simtypes.ContentSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +simtypes.Content { + return v1beta1.NewTextProposal( + fmt.Sprintf("title-%d: %s", m.n, simtypes.RandStringOfLength(r, 100)), + fmt.Sprintf("description-%d: %s", m.n, simtypes.RandStringOfLength(r, 4000)), + ) +} +} + +/ make sure the MockWeightedProposalContent satisfied the WeightedProposalContent interface +var _ simtypes.WeightedProposalContent = MockWeightedProposalContent{ +} + +func mockWeightedProposalContent(n int) []simtypes.WeightedProposalContent { + wpc := make([]simtypes.WeightedProposalContent, n) + for i := 0; i < n; i++ { + wpc[i] = MockWeightedProposalContent{ + i +} + +} + +return wpc +} + +/ TestWeightedOperations tests the weights of the operations. 
+func TestWeightedOperations(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + ctx.WithChainID("test-chain") + cdc := suite.cdc + appParams := make(simtypes.AppParams) + weightesOps := simulation.WeightedOperations(appParams, cdc, suite.AccountKeeper, + suite.BankKeeper, suite.GovKeeper, mockWeightedProposalContent(3), + ) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accs := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + expected := []struct { + weight int + opMsgRoute string + opMsgName string +}{ + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {1, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {2, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + { + simulation.DefaultWeightMsgDeposit, types.ModuleName, simulation.TypeMsgDeposit +}, + { + simulation.DefaultWeightMsgVote, types.ModuleName, simulation.TypeMsgVote +}, + { + simulation.DefaultWeightMsgVoteWeighted, types.ModuleName, simulation.TypeMsgVoteWeighted +}, +} + for i, w := range weightesOps { + operationMsg, _, _ := w.Op()(r, app.BaseApp, ctx, accs, ctx.ChainID()) + / require.NoError(t, err) / TODO check if it should be NoError + + / the following checks are very much dependent from the ordering of the output given + / by WeightedOperations. if the ordering in WeightedOperations changes some tests + / will fail + require.Equal(t, expected[i].weight, w.Weight(), "weight should be the same") + +require.Equal(t, expected[i].opMsgRoute, operationMsg.Route, "route should be the same") + +require.Equal(t, expected[i].opMsgName, operationMsg.Name, "operation Msg name should be the same") +} +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. 
+func TestSimulateMsgSubmitProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / begin a new block + app.BeginBlock(abci.RequestBeginBlock{ + Header: tmproto.Header{ + Height: app.LastBlockHeight() + 1, + AppHash: app.LastCommitID().Hash +}}) + + / execute operation + op := simulation.SimulateMsgSubmitProposal(suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposalContent{3 +}.ContentSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = govcodec.ModuleCdc.UnmarshalJSON(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1p8wcgrjr4pjju90xg6u9cgq55dxwq8j7u4x9a0", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "2686011stake", msg.InitialDeposit[0].String()) + +require.Equal(t, "title-3: ZBSpYuLyYggwexjxusrBqDOTtGTOWeLrQKjLxzIivHSlcxgdXhhuTSkuxKGLwQvuyNhYFmBZHeAerqyNEUzXPFGkqEGqiQWIXnku", msg.Messages[0].GetCachedValue().(*v1.MsgExecLegacyContent).Content.GetCachedValue().(v1beta1.Content).GetTitle()) + +require.Equal(t, "description-3: 
NJWzHdBNpAXKJPHWQdrGYcAHSctgVlqwqHoLfHsXUdStwfefwzqLuKEhmMyYLdbZrcPgYqjNHxPexsruwEGStAneKbWkQDDIlCWBLSiAASNhZqNFlPtfqPJoxKsgMdzjWqLWdqKQuJqWPMvwPQWZUtVMOTMYKJbfdlZsjdsomuScvDmbDkgRualsxDvRJuCAmPOXitIbcyWsKGSdrEunFAOdmXnsuyFVgJqEjbklvmwrUlsxjRSfKZxGcpayDdgoFcnVSutxjRgOSFzPwidAjubMncNweqpbxhXGchpZUxuFDOtpnhNUycJICRYqsPhPSCjPTWZFLkstHWJxvdPEAyEIxXgLwbNOjrgzmaujiBABBIXvcXpLrbcEWNNQsbjvgJFgJkflpRohHUutvnaUqoopuKjTDaemDeSdqbnOzcfJpcTuAQtZoiLZOoAIlboFDAeGmSNwkvObPRvRWQgWkGkxwtPauYgdkmypLjbqhlHJIQTntgWjXwZdOyYEdQRRLfMSdnxqppqUofqLbLQDUjwKVKfZJUJQPsWIPwIVaSTrmKskoAhvmZyJgeRpkaTfGgrJzAigcxtfshmiDCFkuiluqtMOkidknnTBtumyJYlIsWLnCQclqdVmikUoMOPdPWwYbJxXyqUVicNxFxyqJTenNblyyKSdlCbiXxUiYUiMwXZASYfvMDPFgxniSjWaZTjHkqlJvtBsXqwPpyVxnJVGFWhfSxgOcduoxkiopJvFjMmFabrGYeVtTXLhxVUEiGwYUvndjFGzDVntUvibiyZhfMQdMhgsiuysLMiePBNXifRLMsSmXPkwlPloUbJveCvUlaalhZHuvdkCnkSHbMbmOnrfEGPwQiACiPlnihiaOdbjPqPiTXaHDoJXjSlZmltGqNHHNrcKdlFSCdmVOuvDcBLdSklyGJmcLTbSFtALdGlPkqqecJrpLCXNPWefoTJNgEJlyMEPneVaxxduAAEqQpHWZodWyRkDAxzyMnFMcjSVqeRXLqsNyNtQBbuRvunZflWSbbvXXdkyLikYqutQhLPONXbvhcQZJPSWnOulqQaXmbfFxAkqfYeseSHOQidHwbcsOaMnSrrmGjjRmEMQNuknupMxJiIeVjmgZvbmjPIQTEhQFULQLBMPrxcFPvBinaOPYWGvYGRKxLZdwamfRQQFngcdSlvwjfaPbURasIsGJVHtcEAxnIIrhSriiXLOlbEBLXFElXJFGxHJczRBIxAuPKtBisjKBwfzZFagdNmjdwIRvwzLkFKWRTDPxJCmpzHUcrPiiXXHnOIlqNVoGSXZewdnCRhuxeYGPVTfrNTQNOxZmxInOazUYNTNDgzsxlgiVEHPKMfbesvPHUqpNkUqbzeuzfdrsuLDpKHMUbBMKczKKWOdYoIXoPYtEjfOnlQLoGnbQUCuERdEFaptwnsHzTJDsuZkKtzMpFaZobynZdzNydEeJJHDYaQcwUxcqvwfWwNUsCiLvkZQiSfzAHftYgAmVsXgtmcYgTqJIawstRYJrZdSxlfRiqTufgEQVambeZZmaAyRQbcmdjVUZZCgqDrSeltJGXPMgZnGDZqISrGDOClxXCxMjmKqEPwKHoOfOeyGmqWqihqjINXLqnyTesZePQRqaWDQNqpLgNrAUKulklmckTijUltQKuWQDwpLmDyxLppPVMwsmBIpOwQttYFMjgJQZLYFPmxWFLIeZihkRNnkzoypBICIxgEuYsVWGIGRbbxqVasYnstWomJnHwmtOhAFSpttRYYzBmyEtZXiCthvKvWszTXDbiJbGXMcrYpKAgvUVFtdKUfvdMfhAryctklUCEdjetjuGNfJjajZtvzdYaqInKtFPPLYmRaXPdQzxdSQfmZDEVHlHGEGNSPRFJuIfKLLfUmnHxHnRjmzQPNlqrXgifUdzAGKVabYqvcDeYoTYgPsBUqehrBhmQUgTvDnsdpuhUoxskDdppTsYMcnDIPSwKIqhXDCIxOuXrywahvV
avvHkPuaenjLmEbMgrkrQLHEAwrhHkPRNvonNQKqprqOFVZKAtpRSpvQUxMoXCMZLSSbnLEFsjVfANdQNQVwTmGxqVjVqRuxREAhuaDrFgEZpYKhwWPEKBevBfsOIcaZKyykQafzmGPLRAKDtTcJxJVgiiuUkmyMYuDUNEUhBEdoBLJnamtLmMJQgmLiUELIhLpiEvpOXOvXCPUeldLFqkKOwfacqIaRcnnZvERKRMCKUkMABbDHytQqQblrvoxOZkwzosQfDKGtIdfcXRJNqlBNwOCWoQBcEWyqrMlYZIAXYJmLfnjoJepgSFvrgajaBAIksoyeHqgqbGvpAstMIGmIhRYGGNPRIfOQKsGoKgxtsidhTaAePRCBFqZgPDWCIkqOJezGVkjfYUCZTlInbxBXwUAVRsxHTQtJFnnpmMvXDYCVlEmnZBKhmmxQOIQzxFWpJQkQoSAYzTEiDWEOsVLNrbfzeHFRyeYATakQQWmFDLPbVMCJcWjFGJjfqCoVzlbNNEsqxdSmNPjTjHYOkuEMFLkXYGaoJlraLqayMeCsTjWNRDPBywBJLAPVkGQqTwApVVwYAetlwSbzsdHWsTwSIcctkyKDuRWYDQikRqsKTMJchrliONJeaZIzwPQrNbTwxsGdwuduvibtYndRwpdsvyCktRHFalvUuEKMqXbItfGcNGWsGzubdPMYayOUOINjpcFBeESdwpdlTYmrPsLsVDhpTzoMegKrytNVZkfJRPuDCUXxSlSthOohmsuxmIZUedzxKmowKOdXTMcEtdpHaPWgIsIjrViKrQOCONlSuazmLuCUjLltOGXeNgJKedTVrrVCpWYWHyVrdXpKgNaMJVjbXxnVMSChdWKuZdqpisvrkBJPoURDYxWOtpjzZoOpWzyUuYNhCzRoHsMjmmWDcXzQiHIyjwdhPNwiPqFxeUfMVFQGImhykFgMIlQEoZCaRoqSBXTSWAeDumdbsOGtATwEdZlLfoBKiTvodQBGOEcuATWXfiinSjPmJKcWgQrTVYVrwlyMWhxqNbCMpIQNoSMGTiWfPTCezUjYcdWppnsYJihLQCqbNLRGgqrwHuIvsazapTpoPZIyZyeeSueJuTIhpHMEJfJpScshJubJGfkusuVBgfTWQoywSSliQQSfbvaHKiLnyjdSbpMkdBgXepoSsHnCQaYuHQqZsoEOmJCiuQUpJkmfyfbIShzlZpHFmLCsbknEAkKXKfRTRnuwdBeuOGgFbJLbDksHVapaRayWzwoYBEpmrlAxrUxYMUekKbpjPNfjUCjhbdMAnJmYQVZBQZkFVweHDAlaqJjRqoQPoOMLhyvYCzqEuQsAFoxWrzRnTVjStPadhsESlERnKhpEPsfDxNvxqcOyIulaCkmPdambLHvGhTZzysvqFauEgkFRItPfvisehFmoBhQqmkfbHVsgfHXDPJVyhwPllQpuYLRYvGodxKjkarnSNgsXoKEMlaSKxKdcVgvOkuLcfLFfdtXGTclqfPOfeoVLbqcjcXCUEBgAGplrkgsmIEhWRZLlGPGCwKWRaCKMkBHTAcypUrYjWwCLtOPVygMwMANGoQwFnCqFrUGMCRZUGJKTZIGPyldsifauoMnJPLTcDHmilcmahlqOELaAUYDBuzsVywnDQfwRLGIWozYaOAilMBcObErwgTDNGWnwQMUgFFSKtPDMEoEQCTKVREqrXZSGLqwTMcxHfWotDllNkIJPMbXzjDVjPOOjCFuIvTyhXKLyhUScOXvYthRXpPfKwMhptXaxIxgqBoUqzrWbaoLTVpQoottZyPFfNOoMioXHRuFwMRYUiKvcWPkrayyTLOCFJlAyslDameIuqVAuxErqFPEWIScKpBORIuZqoXlZuTvAjEdlEWDODFRregDTqGNoFBIHxvimmIZwLfFyKUfEWAnNBdtdzDmTPXtpHRGdIbuucfTjOygZsTxPjfweXhSUkMhPjMaxKlMIJMOXcnQfyzeOcbWwNbeH
", msg.Messages[0].GetCachedValue().(*v1.MsgExecLegacyContent).Content.GetCachedValue().(v1beta1.Content).GetDescription()) + +require.Equal(t, "gov", msg.Route()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, msg.Type()) +} + +/ TestSimulateMsgDeposit tests the normal scenario of a valid message of type TypeMsgDeposit. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgDeposit(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + depositPeriod := suite.GovKeeper.GetParams(ctx).MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, "", submitTime, submitTime.Add(*depositPeriod), "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + +suite.GovKeeper.SetProposal(ctx, proposal) + + / begin a new block + app.BeginBlock(abci.RequestBeginBlock{ + Header: tmproto.Header{ + Height: app.LastBlockHeight() + 1, + AppHash: app.LastCommitID().Hash, + Time: blockTime +}}) + + / execute operation + op := simulation.SimulateMsgDeposit(suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgDeposit + err = govcodec.ModuleCdc.UnmarshalJSON(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, 
"cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Depositor) + +require.NotEqual(t, len(msg.Amount), 0) + +require.Equal(t, "560969stake", msg.Amount[0].String()) + +require.Equal(t, "gov", msg.Route()) + +require.Equal(t, simulation.TypeMsgDeposit, msg.Type()) +} + +/ TestSimulateMsgVote tests the normal scenario of a valid message of type TypeMsgVote. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVote(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + depositPeriod := suite.GovKeeper.GetParams(ctx).MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, "", submitTime, submitTime.Add(*depositPeriod), "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + +suite.GovKeeper.ActivateVotingPeriod(ctx, proposal) + + / begin a new block + app.BeginBlock(abci.RequestBeginBlock{ + Header: tmproto.Header{ + Height: app.LastBlockHeight() + 1, + AppHash: app.LastCommitID().Hash, + Time: blockTime +}}) + + / execute operation + op := simulation.SimulateMsgVote(suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVote + govcodec.ModuleCdc.UnmarshalJSON(operationMsg.Msg, &msg) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, 
"cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.Equal(t, v1.OptionYes, msg.Option) + +require.Equal(t, "gov", msg.Route()) + +require.Equal(t, simulation.TypeMsgVote, msg.Type()) +} + +/ TestSimulateMsgVoteWeighted tests the normal scenario of a valid message of type TypeMsgVoteWeighted. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVoteWeighted(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + depositPeriod := suite.GovKeeper.GetParams(ctx).MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, "", submitTime, submitTime.Add(*depositPeriod), "text proposal", "test", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + +suite.GovKeeper.ActivateVotingPeriod(ctx, proposal) + + / begin a new block + app.BeginBlock(abci.RequestBeginBlock{ + Header: tmproto.Header{ + Height: app.LastBlockHeight() + 1, + AppHash: app.LastCommitID().Hash, + Time: blockTime +}}) + + / execute operation + op := simulation.SimulateMsgVoteWeighted(suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVoteWeighted + govcodec.ModuleCdc.UnmarshalJSON(operationMsg.Msg, &msg) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", 
msg.Voter) + +require.True(t, len(msg.Options) >= 1) + +require.Equal(t, "gov", msg.Route()) + +require.Equal(t, simulation.TypeMsgVoteWeighted, msg.Type()) +} + +type suite struct { + cdc codec.Codec + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + GovKeeper *keeper.Keeper + StakingKeeper *stakingkeeper.Keeper + App *runtime.App +} + +/ returns context and an app with updated mint keeper +func createTestSuite(t *testing.T, isCheckTx bool) (suite, sdk.Context) { + res := suite{ +} + +app, err := simtestutil.Setup(configurator.NewAppConfig( + configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.BankModule(), + configurator.StakingModule(), + configurator.ConsensusModule(), + configurator.GovModule(), + ), &res.AccountKeeper, &res.BankKeeper, &res.GovKeeper, &res.StakingKeeper, &res.cdc) + +require.NoError(t, err) + ctx := app.BaseApp.NewContext(isCheckTx, tmproto.Header{ +}) + +res.App = app + return res, ctx +} + +func getTestingAccounts( + t *testing.T, r *rand.Rand, + accountKeeper authkeeper.AccountKeeper, bankKeeper bankkeeper.Keeper, stakingKeeper *stakingkeeper.Keeper, + ctx sdk.Context, n int, +) []simtypes.Account { + accounts := simtypes.RandomAccounts(r, n) + initAmt := stakingKeeper.TokensFromConsensusPower(ctx, 200) + initCoins := sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, initAmt)) + + / add coins to the accounts + for _, account := range accounts { + acc := accountKeeper.NewAccountWithAddress(ctx, account.Address) + +accountKeeper.SetAccount(ctx, acc) + +require.NoError(t, testutil.FundAccount(bankKeeper, ctx, account.Address, initCoins)) +} + +return accounts +} +``` + +```go expandable +package simulation_test + +import ( + + "fmt" + "math/rand" + "testing" + "time" + "github.com/stretchr/testify/require" + abci "github.com/tendermint/tendermint/abci/types" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + "github.com/cosmos/cosmos-sdk/codec" + 
"github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/configurator" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + _ "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/bank/testutil" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + govcodec "github.com/cosmos/cosmos-sdk/x/gov/codec" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + "github.com/cosmos/cosmos-sdk/x/gov/simulation" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +type MockWeightedProposalContent struct { + n int +} + +func (m MockWeightedProposalContent) + +AppParamsKey() + +string { + return fmt.Sprintf("AppParamsKey-%d", m.n) +} + +func (m MockWeightedProposalContent) + +DefaultWeight() + +int { + return m.n +} + +func (m MockWeightedProposalContent) + +ContentSimulatorFn() + +simtypes.ContentSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +simtypes.Content { + return v1beta1.NewTextProposal( + fmt.Sprintf("title-%d: %s", m.n, simtypes.RandStringOfLength(r, 100)), + fmt.Sprintf("description-%d: %s", m.n, simtypes.RandStringOfLength(r, 4000)), + ) +} +} + +/ make sure the MockWeightedProposalContent satisfied the WeightedProposalContent interface +var _ simtypes.WeightedProposalContent = MockWeightedProposalContent{ +} + +func mockWeightedProposalContent(n int) []simtypes.WeightedProposalContent { + wpc := make([]simtypes.WeightedProposalContent, n) 
+ for i := 0; i < n; i++ { + wpc[i] = MockWeightedProposalContent{ + i +} + +} + +return wpc +} + +/ TestWeightedOperations tests the weights of the operations. +func TestWeightedOperations(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + ctx.WithChainID("test-chain") + cdc := suite.cdc + appParams := make(simtypes.AppParams) + weightesOps := simulation.WeightedOperations(appParams, cdc, suite.AccountKeeper, + suite.BankKeeper, suite.GovKeeper, mockWeightedProposalContent(3), + ) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accs := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + expected := []struct { + weight int + opMsgRoute string + opMsgName string +}{ + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {1, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {2, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + { + simulation.DefaultWeightMsgDeposit, types.ModuleName, simulation.TypeMsgDeposit +}, + { + simulation.DefaultWeightMsgVote, types.ModuleName, simulation.TypeMsgVote +}, + { + simulation.DefaultWeightMsgVoteWeighted, types.ModuleName, simulation.TypeMsgVoteWeighted +}, +} + for i, w := range weightesOps { + operationMsg, _, _ := w.Op()(r, app.BaseApp, ctx, accs, ctx.ChainID()) + / require.NoError(t, err) / TODO check if it should be NoError + + / the following checks are very much dependent from the ordering of the output given + / by WeightedOperations. if the ordering in WeightedOperations changes some tests + / will fail + require.Equal(t, expected[i].weight, w.Weight(), "weight should be the same") + +require.Equal(t, expected[i].opMsgRoute, operationMsg.Route, "route should be the same") + +require.Equal(t, expected[i].opMsgName, operationMsg.Name, "operation Msg name should be the same") +} +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. 
+/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / begin a new block + app.BeginBlock(abci.RequestBeginBlock{ + Header: tmproto.Header{ + Height: app.LastBlockHeight() + 1, + AppHash: app.LastCommitID().Hash +}}) + + / execute operation + op := simulation.SimulateMsgSubmitProposal(suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposalContent{3 +}.ContentSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = govcodec.ModuleCdc.UnmarshalJSON(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1p8wcgrjr4pjju90xg6u9cgq55dxwq8j7u4x9a0", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "2686011stake", msg.InitialDeposit[0].String()) + +require.Equal(t, "title-3: ZBSpYuLyYggwexjxusrBqDOTtGTOWeLrQKjLxzIivHSlcxgdXhhuTSkuxKGLwQvuyNhYFmBZHeAerqyNEUzXPFGkqEGqiQWIXnku", msg.Messages[0].GetCachedValue().(*v1.MsgExecLegacyContent).Content.GetCachedValue().(v1beta1.Content).GetTitle()) + +require.Equal(t, "description-3: 
NJWzHdBNpAXKJPHWQdrGYcAHSctgVlqwqHoLfHsXUdStwfefwzqLuKEhmMyYLdbZrcPgYqjNHxPexsruwEGStAneKbWkQDDIlCWBLSiAASNhZqNFlPtfqPJoxKsgMdzjWqLWdqKQuJqWPMvwPQWZUtVMOTMYKJbfdlZsjdsomuScvDmbDkgRualsxDvRJuCAmPOXitIbcyWsKGSdrEunFAOdmXnsuyFVgJqEjbklvmwrUlsxjRSfKZxGcpayDdgoFcnVSutxjRgOSFzPwidAjubMncNweqpbxhXGchpZUxuFDOtpnhNUycJICRYqsPhPSCjPTWZFLkstHWJxvdPEAyEIxXgLwbNOjrgzmaujiBABBIXvcXpLrbcEWNNQsbjvgJFgJkflpRohHUutvnaUqoopuKjTDaemDeSdqbnOzcfJpcTuAQtZoiLZOoAIlboFDAeGmSNwkvObPRvRWQgWkGkxwtPauYgdkmypLjbqhlHJIQTntgWjXwZdOyYEdQRRLfMSdnxqppqUofqLbLQDUjwKVKfZJUJQPsWIPwIVaSTrmKskoAhvmZyJgeRpkaTfGgrJzAigcxtfshmiDCFkuiluqtMOkidknnTBtumyJYlIsWLnCQclqdVmikUoMOPdPWwYbJxXyqUVicNxFxyqJTenNblyyKSdlCbiXxUiYUiMwXZASYfvMDPFgxniSjWaZTjHkqlJvtBsXqwPpyVxnJVGFWhfSxgOcduoxkiopJvFjMmFabrGYeVtTXLhxVUEiGwYUvndjFGzDVntUvibiyZhfMQdMhgsiuysLMiePBNXifRLMsSmXPkwlPloUbJveCvUlaalhZHuvdkCnkSHbMbmOnrfEGPwQiACiPlnihiaOdbjPqPiTXaHDoJXjSlZmltGqNHHNrcKdlFSCdmVOuvDcBLdSklyGJmcLTbSFtALdGlPkqqecJrpLCXNPWefoTJNgEJlyMEPneVaxxduAAEqQpHWZodWyRkDAxzyMnFMcjSVqeRXLqsNyNtQBbuRvunZflWSbbvXXdkyLikYqutQhLPONXbvhcQZJPSWnOulqQaXmbfFxAkqfYeseSHOQidHwbcsOaMnSrrmGjjRmEMQNuknupMxJiIeVjmgZvbmjPIQTEhQFULQLBMPrxcFPvBinaOPYWGvYGRKxLZdwamfRQQFngcdSlvwjfaPbURasIsGJVHtcEAxnIIrhSriiXLOlbEBLXFElXJFGxHJczRBIxAuPKtBisjKBwfzZFagdNmjdwIRvwzLkFKWRTDPxJCmpzHUcrPiiXXHnOIlqNVoGSXZewdnCRhuxeYGPVTfrNTQNOxZmxInOazUYNTNDgzsxlgiVEHPKMfbesvPHUqpNkUqbzeuzfdrsuLDpKHMUbBMKczKKWOdYoIXoPYtEjfOnlQLoGnbQUCuERdEFaptwnsHzTJDsuZkKtzMpFaZobynZdzNydEeJJHDYaQcwUxcqvwfWwNUsCiLvkZQiSfzAHftYgAmVsXgtmcYgTqJIawstRYJrZdSxlfRiqTufgEQVambeZZmaAyRQbcmdjVUZZCgqDrSeltJGXPMgZnGDZqISrGDOClxXCxMjmKqEPwKHoOfOeyGmqWqihqjINXLqnyTesZePQRqaWDQNqpLgNrAUKulklmckTijUltQKuWQDwpLmDyxLppPVMwsmBIpOwQttYFMjgJQZLYFPmxWFLIeZihkRNnkzoypBICIxgEuYsVWGIGRbbxqVasYnstWomJnHwmtOhAFSpttRYYzBmyEtZXiCthvKvWszTXDbiJbGXMcrYpKAgvUVFtdKUfvdMfhAryctklUCEdjetjuGNfJjajZtvzdYaqInKtFPPLYmRaXPdQzxdSQfmZDEVHlHGEGNSPRFJuIfKLLfUmnHxHnRjmzQPNlqrXgifUdzAGKVabYqvcDeYoTYgPsBUqehrBhmQUgTvDnsdpuhUoxskDdppTsYMcnDIPSwKIqhXDCIxOuXrywahvV
avvHkPuaenjLmEbMgrkrQLHEAwrhHkPRNvonNQKqprqOFVZKAtpRSpvQUxMoXCMZLSSbnLEFsjVfANdQNQVwTmGxqVjVqRuxREAhuaDrFgEZpYKhwWPEKBevBfsOIcaZKyykQafzmGPLRAKDtTcJxJVgiiuUkmyMYuDUNEUhBEdoBLJnamtLmMJQgmLiUELIhLpiEvpOXOvXCPUeldLFqkKOwfacqIaRcnnZvERKRMCKUkMABbDHytQqQblrvoxOZkwzosQfDKGtIdfcXRJNqlBNwOCWoQBcEWyqrMlYZIAXYJmLfnjoJepgSFvrgajaBAIksoyeHqgqbGvpAstMIGmIhRYGGNPRIfOQKsGoKgxtsidhTaAePRCBFqZgPDWCIkqOJezGVkjfYUCZTlInbxBXwUAVRsxHTQtJFnnpmMvXDYCVlEmnZBKhmmxQOIQzxFWpJQkQoSAYzTEiDWEOsVLNrbfzeHFRyeYATakQQWmFDLPbVMCJcWjFGJjfqCoVzlbNNEsqxdSmNPjTjHYOkuEMFLkXYGaoJlraLqayMeCsTjWNRDPBywBJLAPVkGQqTwApVVwYAetlwSbzsdHWsTwSIcctkyKDuRWYDQikRqsKTMJchrliONJeaZIzwPQrNbTwxsGdwuduvibtYndRwpdsvyCktRHFalvUuEKMqXbItfGcNGWsGzubdPMYayOUOINjpcFBeESdwpdlTYmrPsLsVDhpTzoMegKrytNVZkfJRPuDCUXxSlSthOohmsuxmIZUedzxKmowKOdXTMcEtdpHaPWgIsIjrViKrQOCONlSuazmLuCUjLltOGXeNgJKedTVrrVCpWYWHyVrdXpKgNaMJVjbXxnVMSChdWKuZdqpisvrkBJPoURDYxWOtpjzZoOpWzyUuYNhCzRoHsMjmmWDcXzQiHIyjwdhPNwiPqFxeUfMVFQGImhykFgMIlQEoZCaRoqSBXTSWAeDumdbsOGtATwEdZlLfoBKiTvodQBGOEcuATWXfiinSjPmJKcWgQrTVYVrwlyMWhxqNbCMpIQNoSMGTiWfPTCezUjYcdWppnsYJihLQCqbNLRGgqrwHuIvsazapTpoPZIyZyeeSueJuTIhpHMEJfJpScshJubJGfkusuVBgfTWQoywSSliQQSfbvaHKiLnyjdSbpMkdBgXepoSsHnCQaYuHQqZsoEOmJCiuQUpJkmfyfbIShzlZpHFmLCsbknEAkKXKfRTRnuwdBeuOGgFbJLbDksHVapaRayWzwoYBEpmrlAxrUxYMUekKbpjPNfjUCjhbdMAnJmYQVZBQZkFVweHDAlaqJjRqoQPoOMLhyvYCzqEuQsAFoxWrzRnTVjStPadhsESlERnKhpEPsfDxNvxqcOyIulaCkmPdambLHvGhTZzysvqFauEgkFRItPfvisehFmoBhQqmkfbHVsgfHXDPJVyhwPllQpuYLRYvGodxKjkarnSNgsXoKEMlaSKxKdcVgvOkuLcfLFfdtXGTclqfPOfeoVLbqcjcXCUEBgAGplrkgsmIEhWRZLlGPGCwKWRaCKMkBHTAcypUrYjWwCLtOPVygMwMANGoQwFnCqFrUGMCRZUGJKTZIGPyldsifauoMnJPLTcDHmilcmahlqOELaAUYDBuzsVywnDQfwRLGIWozYaOAilMBcObErwgTDNGWnwQMUgFFSKtPDMEoEQCTKVREqrXZSGLqwTMcxHfWotDllNkIJPMbXzjDVjPOOjCFuIvTyhXKLyhUScOXvYthRXpPfKwMhptXaxIxgqBoUqzrWbaoLTVpQoottZyPFfNOoMioXHRuFwMRYUiKvcWPkrayyTLOCFJlAyslDameIuqVAuxErqFPEWIScKpBORIuZqoXlZuTvAjEdlEWDODFRregDTqGNoFBIHxvimmIZwLfFyKUfEWAnNBdtdzDmTPXtpHRGdIbuucfTjOygZsTxPjfweXhSUkMhPjMaxKlMIJMOXcnQfyzeOcbWwNbeH
", msg.Messages[0].GetCachedValue().(*v1.MsgExecLegacyContent).Content.GetCachedValue().(v1beta1.Content).GetDescription()) + +require.Equal(t, "gov", msg.Route()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, msg.Type()) +} + +/ TestSimulateMsgDeposit tests the normal scenario of a valid message of type TypeMsgDeposit. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgDeposit(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + depositPeriod := suite.GovKeeper.GetParams(ctx).MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, "", submitTime, submitTime.Add(*depositPeriod), "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + +suite.GovKeeper.SetProposal(ctx, proposal) + + / begin a new block + app.BeginBlock(abci.RequestBeginBlock{ + Header: tmproto.Header{ + Height: app.LastBlockHeight() + 1, + AppHash: app.LastCommitID().Hash, + Time: blockTime +}}) + + / execute operation + op := simulation.SimulateMsgDeposit(suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgDeposit + err = govcodec.ModuleCdc.UnmarshalJSON(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, 
"cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Depositor) + +require.NotEqual(t, len(msg.Amount), 0) + +require.Equal(t, "560969stake", msg.Amount[0].String()) + +require.Equal(t, "gov", msg.Route()) + +require.Equal(t, simulation.TypeMsgDeposit, msg.Type()) +} + +/ TestSimulateMsgVote tests the normal scenario of a valid message of type TypeMsgVote. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVote(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + depositPeriod := suite.GovKeeper.GetParams(ctx).MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, "", submitTime, submitTime.Add(*depositPeriod), "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + +suite.GovKeeper.ActivateVotingPeriod(ctx, proposal) + + / begin a new block + app.BeginBlock(abci.RequestBeginBlock{ + Header: tmproto.Header{ + Height: app.LastBlockHeight() + 1, + AppHash: app.LastCommitID().Hash, + Time: blockTime +}}) + + / execute operation + op := simulation.SimulateMsgVote(suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVote + govcodec.ModuleCdc.UnmarshalJSON(operationMsg.Msg, &msg) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, 
"cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.Equal(t, v1.OptionYes, msg.Option) + +require.Equal(t, "gov", msg.Route()) + +require.Equal(t, simulation.TypeMsgVote, msg.Type()) +} + +/ TestSimulateMsgVoteWeighted tests the normal scenario of a valid message of type TypeMsgVoteWeighted. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVoteWeighted(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + depositPeriod := suite.GovKeeper.GetParams(ctx).MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, "", submitTime, submitTime.Add(*depositPeriod), "text proposal", "test", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r")) + +require.NoError(t, err) + +suite.GovKeeper.ActivateVotingPeriod(ctx, proposal) + + / begin a new block + app.BeginBlock(abci.RequestBeginBlock{ + Header: tmproto.Header{ + Height: app.LastBlockHeight() + 1, + AppHash: app.LastCommitID().Hash, + Time: blockTime +}}) + + / execute operation + op := simulation.SimulateMsgVoteWeighted(suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVoteWeighted + govcodec.ModuleCdc.UnmarshalJSON(operationMsg.Msg, &msg) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", 
msg.Voter) + +require.True(t, len(msg.Options) >= 1) + +require.Equal(t, "gov", msg.Route()) + +require.Equal(t, simulation.TypeMsgVoteWeighted, msg.Type()) +} + +type suite struct { + cdc codec.Codec + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + GovKeeper *keeper.Keeper + StakingKeeper *stakingkeeper.Keeper + App *runtime.App +} + +/ returns context and an app with updated mint keeper +func createTestSuite(t *testing.T, isCheckTx bool) (suite, sdk.Context) { + res := suite{ +} + +app, err := simtestutil.Setup(configurator.NewAppConfig( + configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.BankModule(), + configurator.StakingModule(), + configurator.ConsensusModule(), + configurator.GovModule(), + ), &res.AccountKeeper, &res.BankKeeper, &res.GovKeeper, &res.StakingKeeper, &res.cdc) + +require.NoError(t, err) + ctx := app.BaseApp.NewContext(isCheckTx, tmproto.Header{ +}) + +res.App = app + return res, ctx +} + +func getTestingAccounts( + t *testing.T, r *rand.Rand, + accountKeeper authkeeper.AccountKeeper, bankKeeper bankkeeper.Keeper, stakingKeeper *stakingkeeper.Keeper, + ctx sdk.Context, n int, +) []simtypes.Account { + accounts := simtypes.RandomAccounts(r, n) + initAmt := stakingKeeper.TokensFromConsensusPower(ctx, 200) + initCoins := sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, initAmt)) + + / add coins to the accounts + for _, account := range accounts { + acc := accountKeeper.NewAccountWithAddress(ctx, account.Address) + +accountKeeper.SetAccount(ctx, acc) + +require.NoError(t, testutil.FundAccount(bankKeeper, ctx, account.Address, initCoins)) +} + +return accounts +} +``` + +## End-to-end Tests + +End-to-end tests are at the top of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html). +They must test the whole application flow, from the user perspective (for instance, CLI tests). 
They are located under [`/tests/e2e`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e). + +{/* @julienrbrt: makes more sense to use an app wired app to have 0 simapp dependencies */} +For that, the SDK is using `simapp` but you should use your own application (`appd`). +Here are some examples: + +* SDK E2E tests: [Link](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e). +* Cosmos Hub E2E tests: [Link](https://github.com/cosmos/gaia/tree/main/tests/e2e). +* Osmosis E2E tests: [Link](https://github.com/osmosis-labs/osmosis/tree/main/tests/e2e). + + +**warning** +The SDK is in the process of creating its E2E tests, as defined in [ADR-59](/docs/common/pages/adr-comprehensive#adr-059-test-scopes). This page will eventually be updated with better examples. + + +## Learn More + +Learn more about testing scope in [ADR-59](/docs/common/pages/adr-comprehensive#adr-059-test-scopes). diff --git a/docs/sdk/v0.47/documentation/module-system/upgrade.mdx b/docs/sdk/v0.47/documentation/module-system/upgrade.mdx new file mode 100644 index 00000000..a698535a --- /dev/null +++ b/docs/sdk/v0.47/documentation/module-system/upgrade.mdx @@ -0,0 +1,126 @@ +--- +title: Upgrading Modules +--- + + +**Synopsis** +[In-Place Store Migrations](/docs/sdk/next/documentation/operations/upgrade-advanced) allow your modules to upgrade to new versions that include breaking changes. This document outlines how to build modules to take advantage of this functionality. + + + + +### Pre-requisite Readings + +* [In-Place Store Migration](/docs/sdk/next/documentation/operations/upgrade-advanced) + + + +## Consensus Version + +Successful upgrades of existing modules require each `AppModule` to implement the function `ConsensusVersion() uint64`. + +* The versions must be hard-coded by the module developer. +* The initial version **must** be set to 1. + +Consensus versions serve as state-breaking versions of app modules and must be incremented when the module introduces breaking changes. 
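A minimal, self-contained sketch of what implementing this looks like (the bare `AppModule` type and the version number 3 are hypothetical, standing in for a real module whose latest state-breaking release is its third):

```go
package main

import "fmt"

// AppModule stands in for a module's AppModule type (hypothetical).
type AppModule struct{}

// ConsensusVersion returns the module's state-breaking version.
// It starts at 1 and is incremented on every state-breaking change.
func (AppModule) ConsensusVersion() uint64 { return 3 }

func main() {
	fmt.Println(AppModule{}.ConsensusVersion()) // 3
}
```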
+ +## Registering Migrations + +To register the functionality that takes place during a module upgrade, you must register which migrations you want to take place. + +Migration registration takes place in the `Configurator` using the `RegisterMigration` method. The `AppModule` reference to the configurator is in the `RegisterServices` method. + +You can register one or more migrations. If you register more than one migration script, list the migrations in increasing order and ensure there are enough migrations that lead to the desired consensus version. For example, to migrate to version 3 of a module, register separate migrations for version 1 and version 2 as shown in the following example: + +```go +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + / --snip-- + cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) + +error { + / Perform in-place store migrations from ConsensusVersion 1 to 2. +}) + +cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) + +error { + / Perform in-place store migrations from ConsensusVersion 2 to 3. +}) +} +``` + +Since these migrations are functions that need access to a Keeper's store, use a wrapper around the keepers called `Migrator` as shown in this example: + +```go expandable +package keeper + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + v2 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v2" + v3 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v3" + v4 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v4" +) + +/ Migrator is a struct for handling in-place store migrations. +type Migrator struct { + keeper BaseKeeper + legacySubspace exported.Subspace +} + +/ NewMigrator returns a new Migrator. +func NewMigrator(keeper BaseKeeper, legacySubspace exported.Subspace) + +Migrator { + return Migrator{ + keeper: keeper, legacySubspace: legacySubspace +} +} + +/ Migrate1to2 migrates from version 1 to 2. 
+func (m Migrator) + +Migrate1to2(ctx sdk.Context) + +error { + return v2.MigrateStore(ctx, m.keeper.storeKey, m.keeper.cdc) +} + +/ Migrate2to3 migrates x/bank storage from version 2 to 3. +func (m Migrator) + +Migrate2to3(ctx sdk.Context) + +error { + return v3.MigrateStore(ctx, m.keeper.storeKey, m.keeper.cdc) +} + +/ Migrate3to4 migrates x/bank storage from version 3 to 4. +func (m Migrator) + +Migrate3to4(ctx sdk.Context) + +error { + return v4.MigrateStore(ctx, m.keeper.storeKey, m.legacySubspace, m.keeper.cdc) +} +``` + +## Writing Migration Scripts + +To define the functionality that takes place during an upgrade, write a migration script and place the functions in a `migrations/` directory. For example, to write migration scripts for the bank module, place the functions in `x/bank/migrations/`. Use the recommended naming convention for these functions. For example, `v2bank` is the script that migrates the package `x/bank/migrations/v2`: + +```go +/ Migrating bank module from version 1 to 2 +func (m Migrator) + +Migrate1to2(ctx sdk.Context) + +error { + return v2bank.MigrateStore(ctx, m.keeper.storeKey) / v2bank is package `x/bank/migrations/v2`. +} +``` + +To see example code of changes that were implemented in a migration of balance keys, check out [migrateBalanceKeys](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/bank/migrations/v2/store.go#L52-L73). For context, this code introduced migrations of the bank store that updated addresses to be prefixed by their length in bytes as outlined in [ADR-028](/docs/common/pages/adr-comprehensive#adr-028-public-key-addresses). 
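Conceptually, the upgrade flow then runs every registered handler in increasing order, from the version stored on chain up to the module's `ConsensusVersion()`. The following self-contained sketch models that loop with a toy in-memory state instead of a real store; all names here are illustrative, not the SDK's actual internals:

```go
package main

import "fmt"

// migration upgrades module state from version v to v+1
// (a toy in-memory map stands in for a real KVStore).
type migration func(state map[string]string) error

// runMigrations applies every registered handler between the stored
// version and the target consensus version, in increasing order.
func runMigrations(state map[string]string, from, to uint64, migrations map[uint64]migration) error {
	for v := from; v < to; v++ {
		handler, ok := migrations[v]
		if !ok {
			return fmt.Errorf("no migration registered for version %d", v)
		}
		if err := handler(state); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Registered in increasing order, mirroring successive RegisterMigration calls.
	migrations := map[uint64]migration{
		1: func(s map[string]string) error { s["schema"] = "v2"; return nil },
		2: func(s map[string]string) error { s["schema"] = "v3"; return nil },
	}
	state := map[string]string{"schema": "v1"}
	if err := runMigrations(state, 1, 3, migrations); err != nil {
		panic(err)
	}
	fmt.Println(state["schema"]) // v3
}
```

Note that a missing intermediate handler aborts the run, which is why registrations must cover every version between the old and new consensus versions.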
diff --git a/docs/sdk/v0.47/documentation/operations/intro.mdx b/docs/sdk/v0.47/documentation/operations/intro.mdx new file mode 100644 index 00000000..16032291 --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/intro.mdx @@ -0,0 +1,13 @@
+---
+title: SDK Migrations
+---
+
+To smooth the update to the latest stable release, the SDK includes a CLI command for hard-fork migrations (under the ` genesis migrate` subcommand).
+Additionally, the SDK includes in-place migrations for its core modules. These in-place migrations are useful to migrate between major releases.
+
+* Hard-fork migrations are supported from the last major release to the current one.
+* In-place module migrations are supported from the last two major releases to the current one.
+
+Migration from a version older than the last two major releases is not supported.
+
+When migrating from a previous version, refer to the [`UPGRADING.md`](/docs/sdk/v0.47/documentation/operations/upgrading) and the `CHANGELOG.md` of the version you are migrating to. diff --git a/docs/sdk/v0.47/documentation/operations/packages/README.mdx b/docs/sdk/v0.47/documentation/operations/packages/README.mdx new file mode 100644 index 00000000..867d699c --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/packages/README.mdx @@ -0,0 +1,41 @@
+---
+title: Packages
+description: >-
+  The Cosmos SDK is a collection of Go modules. This section provides
+  documentation on various packages that can be used when developing a Cosmos SDK
+  chain. It lists all standalone Go modules that are part of the Cosmos SDK.
+---
+
+The Cosmos SDK is a collection of Go modules. This section provides documentation on various packages that can be used when developing a Cosmos SDK chain.
+It lists all standalone Go modules that are part of the Cosmos SDK.
+
+
+For more information on SDK modules, see the [SDK Modules](https://docs.cosmos.network/main/modules) section.
+For more information on SDK tooling, see the [Tooling](https://docs.cosmos.network/main/tooling) section. + + +## Core + +* [Core](https://pkg.go.dev/cosmossdk.io/core) - Core library defining SDK interfaces ([ADR-063](/docs/common/pages/adr-comprehensive#adr-063-core-module-api)) +* [API](https://pkg.go.dev/cosmossdk.io/api) - API library containing generated SDK Pulsar API +* [Store](https://pkg.go.dev/cosmossdk.io/store) - Implementation of the Cosmos SDK store + +## State Management + +* [Collections](/docs/sdk/v0.47/documentation/operations/packages/collections) - State management library +* [ORM](/docs/sdk/v0.47/documentation/operations/packages/orm) - State management library + +## Automation + +* [Depinject](/docs/sdk/v0.47/documentation/module-system/depinject) - Dependency injection framework +* [Client/v2](https://pkg.go.dev/cosmossdk.io/client/v2) - Library powering [AutoCLI](https://docs.cosmos.network/main/building-modules/autocli) + +## Utilities + +* [Log](https://pkg.go.dev/cosmossdk.io/log) - Logging library +* [Errors](https://pkg.go.dev/cosmossdk.io/errors) - Error handling library +* [Math](https://pkg.go.dev/cosmossdk.io/math) - Math library for SDK arithmetic operations + +## Example + +* [SimApp](https://pkg.go.dev/cosmossdk.io/simapp) - SimApp is **the** sample Cosmos SDK chain. This package should not be imported in your application. diff --git a/docs/sdk/v0.47/documentation/operations/packages/collections.mdx b/docs/sdk/v0.47/documentation/operations/packages/collections.mdx new file mode 100644 index 00000000..a1915c14 --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/packages/collections.mdx @@ -0,0 +1,1276 @@ +--- +title: Collections +description: >- + Collections is a library meant to simplify the experience with respect to + module state handling. +--- + +Collections is a library meant to simplify the experience with respect to module state handling. + +Cosmos SDK modules handle their state using the `KVStore` interface. 
The problem with working with
+`KVStore` is that it forces you to think of state as raw byte KV pairings, when in reality the majority of
+state comes from complex concrete golang objects (strings, ints, structs, etc.).
+
+Collections allows you to work with state as if it were normal golang objects and removes the need
+for you to think of your state as raw bytes in your code.
+
+It also allows you to migrate your existing state without causing any state breakage that forces you into
+tedious and complex chain state migrations.
+
+## Installation
+
+To install collections in your cosmos-sdk chain project, run the following command:
+
+```shell
+go get cosmossdk.io/collections
+```
+
+## Core types
+
+Collections offers 5 different APIs to work with state, which will be explored in the next sections. These APIs are:
+
+* `Map`: to work with typed arbitrary KV pairings.
+* `KeySet`: to work with just typed keys.
+* `Item`: to work with just one typed value.
+* `Sequence`: a monotonically increasing number.
+* `IndexedMap`: combines `Map` and `KeySet` to provide a `Map` with indexing capabilities.
+
+## Preliminary components
+
+Before exploring the different collection types and their capabilities, it is necessary to introduce
+the three components that every collection shares. In fact, when instantiating a collection type by doing, for example,
+`collections.NewMap/collections.NewItem/...` you will find yourself having to pass them some common arguments.
+
+For example, in code:
+
+```go expandable
+package collections
+
+import (
+	"cosmossdk.io/collections"
+	storetypes "cosmossdk.io/store/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var AllowListPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema    collections.Schema
+	AllowList collections.KeySet[string]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		AllowList: collections.NewKeySet(sb, AllowListPrefix, "allow_list", collections.StringKey),
+	}
+}
+```
+
+Let's analyse the shared arguments, what they do, and why we need them.
+
+### SchemaBuilder
+
+The first argument passed is the `SchemaBuilder`.
+
+`SchemaBuilder` is a structure that keeps track of all the state of a module. It is not required by collections
+to deal with state, but it offers a dynamic and reflective way for clients to explore a module's state.
+
+We instantiate a `SchemaBuilder` by passing it a function that, given the module's store key, returns the module's specific store.
+
+We then need to pass the schema builder to every collection type we instantiate in our keeper, in our case the `AllowList`.
+
+### Prefix
+
+The second argument passed to our `KeySet` is a `collections.Prefix`. A prefix represents a partition of the module's `KVStore`
+where all the state of a specific collection will be saved.
+
+Since a module can have multiple collections, the following is expected:
+
+* module params will become a `collections.Item`
+* the `AllowList` is a `collections.KeySet`
+
+We don't want a collection to write over the state of another collection, so we pass it a prefix, which defines a storage
+partition owned by the collection.
+
+If you already built modules, the prefix translates to the items you were creating in your `types/keys.go` file. For example: [Link](https://github.com/cosmos/cosmos-sdk/blob/main/x/feegrant/key.go#L27)
+
+Your old:
+
+```go
+var (
+	// FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+	// - 0x00: allowance
+	FeeAllowanceKeyPrefix = []byte{0x00}
+
+	// FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+	// - 0x01:
+	FeeAllowanceQueueKeyPrefix = []byte{0x01}
+)
+```
+
+becomes:
+
+```go
+var (
+	// FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+	// - 0x00: allowance
+	FeeAllowanceKeyPrefix = collections.NewPrefix(0)
+
+	// FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+	// - 0x01:
+	FeeAllowanceQueueKeyPrefix = collections.NewPrefix(1)
+)
+```
+
+#### Rules
+
+`collections.NewPrefix` accepts either `uint8`, `string` or `[]byte`; it's good practice to use an always-increasing `uint8` for disk space efficiency.
+
+A collection **MUST NOT** share the same prefix as another collection in the same module, and a collection prefix **MUST NEVER** start with the same prefix as another. Examples:
+
+```go
+prefix1 := collections.NewPrefix("prefix")
+
+prefix2 := collections.NewPrefix("prefix") // THIS IS BAD!
+```
+
+```go
+prefix1 := collections.NewPrefix("a")
+
+prefix2 := collections.NewPrefix("aa") // prefix2 starts with the same as prefix1: BAD!!!
+```
+
+### Human-Readable Name
+
+The third parameter we pass to a collection is a string, which is a human-readable name.
+It is needed to make the role of a collection understandable by clients who have no clue about
+what a module is storing in state.
+
+#### Rules
+
+Each collection in a module **MUST** have a unique humanised name.
+
+## Key and Value Codecs
+
+A collection is generic over the type you can use as keys or values.
+This makes collections dumb, but also means that hypothetically we can store everything
+that can be a go type into a collection. We are not bound to any type of encoding (be it proto, json or whatever).
+
+So a collection needs to be given a way to understand how to convert your keys and values to bytes.
+This is achieved through `KeyCodec` and `ValueCodec`, which are arguments that you pass to your
+collections when you're instantiating them using the `collections.NewMap/collections.NewItem/...`
+instantiation functions.
+
+NOTE: Generally speaking you will never be required to implement your own `Key/ValueCodec` as
+the SDK and collections libraries already come with default, safe and fast implementations of those.
+You might need to implement them only if you're migrating to collections and there are state layout incompatibilities.
+
+Let's explore an example:
+
+```go expandable
+package collections
+
+import (
+	"cosmossdk.io/collections"
+	storetypes "cosmossdk.io/store/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var IDsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema collections.Schema
+	IDs    collections.Map[string, uint64]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		IDs: collections.NewMap(sb, IDsPrefix, "ids", collections.StringKey, collections.Uint64Value),
+	}
+}
+```
+
+We're now instantiating a map where the key is `string` and the value is `uint64`.
+We already know the first three arguments of the `NewMap` function.
+
+The fourth parameter is our `KeyCodec`; we know that the `Map` has `string` as its key, so we pass it a `KeyCodec` that handles strings as keys.
+
+The fifth parameter is our `ValueCodec`; we know that the `Map` has a `uint64` as its value, so we pass it a `ValueCodec` that handles `uint64`.
+
+Collections already comes with all the required implementations for golang primitive types.
+
+Let's make another example. This falls closer to what we build using the Cosmos SDK: let's say we want
+to create a `collections.Map` that maps account addresses to their base account. So we want to map an `sdk.AccAddress` to an `auth.BaseAccount` (which is a proto):
+
+```go expandable
+package collections
+
+import (
+	"cosmossdk.io/collections"
+	storetypes "cosmossdk.io/store/types"
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Accounts: collections.NewMap(sb, AccountsPrefix, "accounts",
+			sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc)),
+	}
+}
+```
+
+As we can see here, since our `collections.Map` maps `sdk.AccAddress` to `authtypes.BaseAccount`,
+we use the `sdk.AccAddressKey`, which is the `KeyCodec` implementation for `AccAddress`, and we use `codec.CollValue` to
+encode our proto type `BaseAccount`.
+
+Generally speaking you will always find the respective key and value codecs for types in the `go.mod` path you're using
+to import that type. If you want to encode proto values, refer to the `codec.CollValue` function, which allows you
+to encode any type implementing the `proto.Message` interface.
+ +### Example + +It's easier to explain a `collections.Map` capabilities through an example: + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "fmt" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.47/documentation/operations/packages/cdc)), +} +} + +func (k Keeper) + +CreateAccount(ctx sdk.Context, addr sdk.AccAddress, account authtypes.BaseAccount) + +error { + has, err := k.Accounts.Has(ctx, addr) + if err != nil { + return err +} + if has { + return fmt.Errorf("account already exists: %s", addr) +} + +err = k.Accounts.Set(ctx, addr, account) + if err != nil { + return err +} + +return nil +} + +func (k Keeper) + +GetAccount(ctx sdk.Context, addr sdk.AccAddress) (authtypes.BaseAccount, error) { + acc, err := k.Accounts.Get(ctx, addr) + if err != nil { + return authtypes.BaseAccount{ +}, err +} + +return acc, nil +} + +func (k Keeper) + +RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) + +error { + err := k.Accounts.Remove(ctx, addr) + if err != nil { + return err +} + +return nil +} +``` + +#### Set method + +Set maps with the provided `AccAddress` (the key) to the `auth.BaseAccount` (the value). + +Under the hood the `collections.Map` will convert the key and value to bytes using the [key and value codec](/docs/sdk/v0.47/documentation/operations/packages/README#key-and-value-codecs). 
+It will prepend to our bytes key the [prefix](/docs/sdk/v0.47/documentation/operations/packages/README#prefix) and store it in the KVStore of the module. + +#### Has method + +The has method reports if the provided key exists in the store. + +#### Get method + +The get method accepts the `AccAddress` and returns the associated `auth.BaseAccount` if it exists, otherwise it errors. + +#### Remove method + +The remove method accepts the `AccAddress` and removes it from the store. It won't report errors +if it does not exist, to check for existence before removal use the `Has` method. + +#### Iteration + +Iteration has a separate section. + +## KeySet + +The second type of collection is `collections.KeySet`, as the word suggests it maintains +only a set of keys without values. + +#### Implementation curiosity + +A `collections.KeySet` is just a `collections.Map` with a `key` but no value. +The value internally is always the same and is represented as an empty byte slice `[]byte{}`. + +### Example + +As always we explore the collection type through an example: + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "fmt" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var ValidatorsSetPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + ValidatorsSet collections.KeySet[sdk.ValAddress] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + ValidatorsSet: collections.NewKeySet(sb, ValidatorsSetPrefix, "validators_set", sdk.ValAddressKey), +} +} + +func (k Keeper) + +AddValidator(ctx sdk.Context, validator sdk.ValAddress) + +error { + has, err := k.ValidatorsSet.Has(ctx, validator) + if err != nil { + return err +} + if has { + return fmt.Errorf("validator already in set: %s", validator) +} + +err = k.ValidatorsSet.Set(ctx, validator) + if err != nil { + return err +} + 
+return nil
+}
+
+func (k Keeper)
+
+RemoveValidator(ctx sdk.Context, validator sdk.ValAddress)
+
+error {
+	err := k.ValidatorsSet.Remove(ctx, validator)
+	if err != nil {
+		return err
+}
+
+return nil
+}
+```
+
+The first difference we notice is that `KeySet` needs us to specify only one type parameter: the key (`sdk.ValAddress` in this case).
+The second difference we notice is that `KeySet` in its `NewKeySet` function does not require
+us to specify a `ValueCodec` but only a `KeyCodec`. This is because a `KeySet` only saves keys and not values.
+
+Let's explore the methods.
+
+#### Has method
+
+Has allows us to understand if a key is present in the `collections.KeySet` or not; it functions in the same way as `collections.Map.Has`.
+
+#### Set method
+
+Set inserts the provided key in the `KeySet`.
+
+#### Remove method
+
+Remove removes the provided key from the `KeySet`. It does not error if the key does not exist;
+if an existence check before removal is required, it needs to be coupled with the `Has` method.
+
+## Item
+
+The third type of collection is the `collections.Item`.
+It stores only one single item. It's useful, for example, for parameters, since there is always
+only one instance of parameters in state.
+
+#### Implementation curiosity
+
+A `collections.Item` is just a `collections.Map` with no key but just a value.
+The key is the prefix of the collection!
+ +### Example + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +var ParamsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Params collections.Item[stakingtypes.Params] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Params: collections.NewItem(sb, ParamsPrefix, "params", codec.CollValue[stakingtypes.Params](/docs/sdk/v0.47/learn/advanced/encoding)), +} +} + +func (k Keeper) + +UpdateParams(ctx sdk.Context, params stakingtypes.Params) + +error { + err := k.Params.Set(ctx, params) + if err != nil { + return err +} + +return nil +} + +func (k Keeper) + +GetParams(ctx sdk.Context) (stakingtypes.Params, error) { + return k.Params.Get(ctx) +} +``` + +The first key difference we notice is that we specify only one type parameter, which is the value we're storing. +The second key difference is that we don't specify the `KeyCodec`, since we store only one item we already know the key +and the fact that it is constant. + +## Iteration + +One of the key features of the `KVStore` is iterating over keys. + +Collections which deal with keys (so `Map`, `KeySet` and `IndexedMap`) allow you to iterate +over keys in a safe and typed way. They all share the same API, the only difference being +that `KeySet` returns a different type of `Iterator` because `KeySet` only deals with keys. + + + +Every collection shares the same `Iterator` semantics. 
+ + + +Let's have a look at the `Map.Iterate` method: + +```go +func (m Map[K, V]) + +Iterate(ctx context.Context, ranger Ranger[K]) (Iterator[K, V], error) +``` + +It accepts a `collections.Ranger[K]`, which is an API that instructs map on how to iterate over keys. +As always we don't need to implement anything here as `collections` already provides some generic `Ranger` implementers +that expose all you need to work with ranges. + +### Example + +We have a `collections.Map` that maps accounts using `uint64` IDs. + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts collections.Map[uint64, authtypes.BaseAccount] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", collections.Uint64Key, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.47/documentation/operations/packages/cdc)), +} +} + +func (k Keeper) + +GetAllAccounts(ctx sdk.Context) ([]authtypes.BaseAccount, error) { + / passing a nil Ranger equals to: iterate over every possible key + iter, err := k.Accounts.Iterate(ctx, nil) + if err != nil { + return nil, err +} + +accounts, err := iter.Values() + if err != nil { + return nil, err +} + +return accounts, err +} + +func (k Keeper) + +IterateAccountsBetween(ctx sdk.Context, start, end uint64) ([]authtypes.BaseAccount, error) { + / The collections.Range API offers a lot of capability + / like defining where the iteration starts or ends. + rng := new(collections.Range[uint64]). + StartInclusive(start). + EndExclusive(end). 
+ Descending()
+
+iter, err := k.Accounts.Iterate(ctx, rng)
+ if err != nil {
+ return nil, err
+}
+
+accounts, err := iter.Values()
+ if err != nil {
+ return nil, err
+}
+
+return accounts, nil
+}
+
+func (k Keeper)
+
+IterateAccounts(ctx sdk.Context, do func(id uint64, acc authtypes.BaseAccount) (stop bool))
+
+error {
+ iter, err := k.Accounts.Iterate(ctx, nil)
+ if err != nil {
+ return err
+}
+
+defer iter.Close()
+ for ; iter.Valid(); iter.Next() {
+ kv, err := iter.KeyValue()
+ if err != nil {
+ return err
+}
+ if do(kv.Key, kv.Value) {
+ break
+}
+
+}
+
+return nil
+}
+```
+
+Let's analyse each method in the example and how it makes use of the `Iterate` and the returned `Iterator` API.
+
+#### GetAllAccounts
+
+In `GetAllAccounts` we pass to our `Iterate` a nil `Ranger`. This means that the returned `Iterator` will include
+all the existing keys within the collection.
+
+Then we use the `Values` method from the returned `Iterator` API to collect all the values into a slice.
+
+`Iterator` offers other methods such as `Keys()` to collect only the keys and not the values and `KeyValues` to collect
+all the keys and values.
+
+#### IterateAccountsBetween
+
+Here we make use of the `collections.Range` helper to specialise our range.
+We make it start at one point through `StartInclusive` and end at the other with `EndExclusive`, then
+we instruct it to report results in reverse order through `Descending`.
+
+Then we pass the range instruction to `Iterate` and get an `Iterator`, which will contain only the results
+we specified in the range.
+
+Then we again use the `Values` method of the `Iterator` to collect all the results.
+
+`collections.Range` also offers a `Prefix` API which is not applicable to all key types;
+for example, `uint64` cannot be prefixed because it is of constant size, but a `string` key
+can be prefixed.
+
+#### IterateAccounts
+
+Here we showcase how to lazily collect values from an Iterator.
+
+
+
+`Keys/Values/KeyValues` fully consume and close the `Iterator`; when iterating manually as shown here, we need to explicitly do a `defer iter.Close()` call.
+
+
+
+`Iterator` also exposes a `Value` and `Key` method to collect only the current value or key, if collecting both is not needed.
+
+
+
+For this `callback` pattern, collections expose a `Walk` API.
+
+
+
+## Composite keys
+
+So far we've worked only with simple keys, like `uint64`, the account address, etc.
+There are some more complex cases in which we need to deal with composite keys.
+
+A key is composite when it is composed of multiple keys; for example, bank balances are stored as the composite key
+`(AccAddress, string)` where the first part is the address holding the coins and the second part is the denom.
+
+For example, let's say address `BOB` holds `10atom,15osmo`; this is how it is stored in state:
+
+```text
+(bob, atom) => 10
+(bob, osmo) => 15
+```
+
+This allows us to efficiently get a specific denom balance of an address, by simply `getting` `(address, denom)`, or getting all the balances
+of an address by prefixing over `(address)`.
+
+Let's now see how we can work with composite keys using collections.
+
+### Example
+
+In our example we will showcase how we can use collections when we are dealing with balances. Similar to bank,
+a balance is a mapping `(address, denom) => math.Int`; the composite key in our case is `(address, denom)`.
+ +## Instantiation of a composite key collection + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + "cosmossdk.io/math" + storetypes "cosmossdk.io/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var BalancesPrefix = collections.NewPrefix(1) + +type Keeper struct { + Schema collections.Schema + Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Balances: collections.NewMap( + sb, BalancesPrefix, "balances", + collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), + math.IntValue, + ), +} +} +``` + +#### The Map Key definition + +First of all we can see that in order to define a composite key of two elements we use the `collections.Pair` type: + +```go +collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] +``` + +`collections.Pair` defines a key composed of two other keys, in our case the first part is `sdk.AccAddress`, the second +part is `string`. + +#### The Key Codec instantiation + +The arguments to instantiate are always the same, the only thing that changes is how we instantiate +the `KeyCodec`, since this key is composed of two keys we use `collections.PairKeyCodec`, which generates +a `KeyCodec` composed of two key codecs. The first one will encode the first part of the key, the second one will +encode the second part of the key. 
+
+### Working with composite key collections
+
+Let's expand on the example we used before:
+
+```go expandable
+var BalancesPrefix = collections.NewPrefix(1)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Balances: collections.NewMap(
+			sb, BalancesPrefix, "balances",
+			collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey),
+			math.IntValue,
+		),
+	}
+}
+
+func (k Keeper) SetBalance(ctx sdk.Context, address sdk.AccAddress, denom string, amount math.Int) error {
+	key := collections.Join(address, denom)
+	return k.Balances.Set(ctx, key, amount)
+}
+
+func (k Keeper) GetBalance(ctx sdk.Context, address sdk.AccAddress, denom string) (math.Int, error) {
+	return k.Balances.Get(ctx, collections.Join(address, denom))
+}
+
+func (k Keeper) GetAllAddressBalances(ctx sdk.Context, address sdk.AccAddress) (sdk.Coins, error) {
+	balances := sdk.NewCoins()
+	rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address)
+
+	iter, err := k.Balances.Iterate(ctx, rng)
+	if err != nil {
+		return nil, err
+	}
+
+	kvs, err := iter.KeyValues()
+	if err != nil {
+		return nil, err
+	}
+	for _, kv := range kvs {
+		balances = balances.Add(sdk.NewCoin(kv.Key.K2(), kv.Value))
+	}
+
+	return balances, nil
+}
+
+func (k Keeper) GetAllAddressBalancesBetween(ctx sdk.Context, address sdk.AccAddress, startDenom, endDenom string) (sdk.Coins, error) {
+	rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address).
+		StartInclusive(startDenom).
+		EndInclusive(endDenom)
+
+	iter, err := k.Balances.Iterate(ctx, rng)
+	if err != nil {
+		return nil, err
+	}
+	...
+}
+```
+
+#### SetBalance
+
+As we can see here, we're setting the balance of an address for a specific denom.
+We use the `collections.Join` function to generate the composite key.
+`collections.Join` returns a `collections.Pair` (which is the key of our `collections.Map`).
+
+`collections.Pair` contains the two keys we have joined. It also exposes two methods: `K1` to fetch the 1st part of the
+key and `K2` to fetch the second part.
+
+As always, we use the `collections.Map.Set` method to map the composite key to our value (`math.Int` in this case).
+
+#### GetBalance
+
+To get a value in a composite key collection, we simply use `collections.Join` to compose the key.
+
+#### GetAllAddressBalances
+
+We use `collections.PrefixedPairRange` to iterate over all the keys starting with the provided address.
+Concretely, the iteration will report all the balances belonging to the provided address.
+
+First, we instantiate a `PrefixedPairRange`, which is a `Ranger` implementer aimed to help
+with `Pair` key iterations.
+
+```go
+rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address)
+```
+
+As we can see here, we're passing the type parameters of the `collections.Pair` because golang type inference
+with respect to generics is not as permissive as in other languages, so we need to explicitly state the types of the pair key.
+
+#### GetAllAddressBalancesBetween
+
+This showcases how we can further specialise our range to limit the results, by specifying
+the range over the second part of the key (in our case the denoms, which are strings).
+
+## IndexedMap
+
+`collections.IndexedMap` is a collection that uses a `collections.Map` under the hood, plus a struct which contains the indexes that we need to define.
+
+### Example
+
+Let's say we have an `auth.BaseAccount` struct which looks like the following:
+
+```go
+type BaseAccount struct {
+	AccountNumber uint64 `protobuf:"varint,3,opt,name=account_number,json=accountNumber,proto3" json:"account_number,omitempty"`
+	Sequence      uint64 `protobuf:"varint,4,opt,name=sequence,proto3" json:"sequence,omitempty"`
+}
+```
+
+First of all, when we save our accounts in state we map them using a primary key `sdk.AccAddress`.
+If it were to be a `collections.Map` it would be `collections.Map[sdk.AccAddress, authtypes.BaseAccount]`.
+
+Then we also want to be able to get an account not only by its `sdk.AccAddress`, but also by its `AccountNumber`.
+
+So we can say we want to create an `Index` that maps our `BaseAccount` to its `AccountNumber`.
+
+We also know that this `Index` is unique. Unique means that there can only be one `BaseAccount` that maps to a specific
+`AccountNumber`.
+
+First of all, we start by defining the object that contains our index:
+
+```go expandable
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+	Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+	return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{a.Number}
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+	return AccountsIndexes{
+		Number: indexes.NewUnique(
+			sb, AccountsNumberIndexPrefix, "accounts_by_number",
+			collections.Uint64Key, sdk.AccAddressKey,
+			func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+				return v.AccountNumber, nil
+			},
+		),
+	}
+}
+```
+
+We create an `AccountsIndexes` struct which contains a field: `Number`. This field represents our `AccountNumber` index.
+`AccountNumber` is a field of `authtypes.BaseAccount` and it's a `uint64`.
+
+Then we can see in our `AccountsIndexes` struct the `Number` field is defined as:
+
+```go
+*indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+```
+
+Where the first type parameter is `uint64`, which is the field type of our index.
+The second type parameter is the primary key `sdk.AccAddress`,
+and the third type parameter is the actual object we're storing, `authtypes.BaseAccount`.
+
+Then we implement a function called `IndexesList` on our `AccountsIndexes` struct; this will be used
+by the `IndexedMap` to keep the underlying map in sync with the indexes, in our case `Number`.
+This function just needs to return the slice of indexes contained in the struct.
+
+Then we create a `NewAccountIndexes` function that instantiates and returns the `AccountsIndexes` struct.
+
+The function takes a `SchemaBuilder`. Then we instantiate our `indexes.Unique`; let's analyse the arguments we pass to
+`indexes.NewUnique`.
+
+#### Instantiating an `indexes.Unique`
+
+We already know the first three arguments: the `SchemaBuilder`, the `Prefix` which is our index prefix (the partition
+where the index key relationships for the `Number` index will be maintained), and the human name for the `Number` index.
+
+The fourth argument is a `collections.Uint64Key`, which is a key codec to deal with `uint64` keys; we pass that because
+the key we're trying to index is a `uint64` key (the account number). Then we pass as the fifth argument the primary key codec,
+which in our case is `sdk.AccAddressKey` (remember: we're mapping `sdk.AccAddress` => `BaseAccount`).
+
+Then, as the last parameter, we pass a function that, given the `BaseAccount`, returns its `AccountNumber`.
+
+After this we can proceed with instantiating our `IndexedMap`.
+
+```go expandable
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Accounts: collections.NewIndexedMap(
+			sb, AccountsPrefix, "accounts",
+			sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+			NewAccountIndexes(sb),
+		),
+	}
+}
+```
+
+So far this is the same thing we did for `collections.Map`:
+we pass the `SchemaBuilder`, the `Prefix` where we plan to store the mapping between `sdk.AccAddress` and `authtypes.BaseAccount`,
+the human-readable name, and the respective `sdk.AccAddress` key codec and `authtypes.BaseAccount` value codec.
+
+Then we pass in our `AccountsIndexes`, instantiated through `NewAccountIndexes`.
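Under the hood, the `IndexedMap` does bookkeeping you would otherwise write by hand: every write to the primary map also updates each index, and a unique index additionally rejects duplicates. The following is a plain-Go sketch of that invariant using ordinary in-memory maps — this is not the `collections` API, just an illustration of the relationship the `Number` index maintains (the `Account` type and bech32-style addresses are simplified placeholders):

```go
package main

import (
	"errors"
	"fmt"
)

// Account mirrors the two BaseAccount fields used in the text.
type Account struct {
	AccountNumber uint64
	Sequence      uint64
}

// IndexedAccounts is a stand-in for collections.IndexedMap:
// a primary map keyed by address, plus a unique index keyed by account number.
type IndexedAccounts struct {
	byAddress map[string]Account // primary key: address -> account
	byNumber  map[uint64]string  // unique index: account number -> address
}

func NewIndexedAccounts() *IndexedAccounts {
	return &IndexedAccounts{
		byAddress: map[string]Account{},
		byNumber:  map[uint64]string{},
	}
}

// Set writes the account and keeps the unique index in sync,
// rejecting a second address that claims the same account number.
func (m *IndexedAccounts) Set(addr string, acc Account) error {
	if existing, ok := m.byNumber[acc.AccountNumber]; ok && existing != addr {
		return errors.New("account number already taken")
	}
	if old, ok := m.byAddress[addr]; ok {
		delete(m.byNumber, old.AccountNumber) // drop the stale index entry
	}
	m.byAddress[addr] = acc
	m.byNumber[acc.AccountNumber] = addr
	return nil
}

// GetByNumber resolves the index to the primary key, then the value —
// the same two-step lookup that MatchExact followed by Get performs.
func (m *IndexedAccounts) GetByNumber(n uint64) (string, Account, error) {
	addr, ok := m.byNumber[n]
	if !ok {
		return "", Account{}, errors.New("not found")
	}
	return addr, m.byAddress[addr], nil
}

func main() {
	accounts := NewIndexedAccounts()
	_ = accounts.Set("cosmos1aaa", Account{AccountNumber: 1})
	_ = accounts.Set("cosmos1bbb", Account{AccountNumber: 2})

	addr, acc, _ := accounts.GetByNumber(2)
	fmt.Println(addr, acc.AccountNumber) // cosmos1bbb 2

	// Uniqueness: a different address cannot reuse number 1.
	err := accounts.Set("cosmos1ccc", Account{AccountNumber: 1})
	fmt.Println(err != nil) // true
}
```

The real `IndexedMap` generalises exactly this pattern: the `IndexesList` method tells it which index entries to rewrite on every `Set` and `Remove`.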
+
+Full example:
+
+```go expandable
+package docs
+
+import (
+	"cosmossdk.io/collections"
+	"cosmossdk.io/collections/indexes"
+	storetypes "cosmossdk.io/store/types"
+
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+	Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+	return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{a.Number}
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+	return AccountsIndexes{
+		Number: indexes.NewUnique(
+			sb, AccountsNumberIndexPrefix, "accounts_by_number",
+			collections.Uint64Key, sdk.AccAddressKey,
+			func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+				return v.AccountNumber, nil
+			},
+		),
+	}
+}
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Accounts: collections.NewIndexedMap(
+			sb, AccountsPrefix, "accounts",
+			sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+			NewAccountIndexes(sb),
+		),
+	}
+}
+```
+
+### Working with IndexedMaps
+
+Whilst instantiating `collections.IndexedMap` is tedious, working with them is extremely smooth.
+
+Let's take the full example and expand it with some use-cases.
+
+```go expandable
+package docs
+
+import (
+	"cosmossdk.io/collections"
+	"cosmossdk.io/collections/indexes"
+	storetypes "cosmossdk.io/store/types"
+
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+	Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+	return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{a.Number}
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+	return AccountsIndexes{
+		Number: indexes.NewUnique(
+			sb, AccountsNumberIndexPrefix, "accounts_by_number",
+			collections.Uint64Key, sdk.AccAddressKey,
+			func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+				return v.AccountNumber, nil
+			},
+		),
+	}
+}
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Accounts: collections.NewIndexedMap(
+			sb, AccountsPrefix, "accounts",
+			sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+			NewAccountIndexes(sb),
+		),
+	}
+}
+
+func (k Keeper) CreateAccount(ctx sdk.Context, addr sdk.AccAddress) error {
+	nextAccountNumber := k.getNextAccountNumber()
+
+	newAcc := authtypes.BaseAccount{
+		AccountNumber: nextAccountNumber,
+		Sequence:      0,
+	}
+
+	return k.Accounts.Set(ctx, addr, newAcc)
+}
+
+func (k Keeper) RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) error {
+	return k.Accounts.Remove(ctx, addr)
+}
+
+func (k
Keeper) GetAccountByNumber(ctx sdk.Context, accNumber uint64) (sdk.AccAddress, authtypes.BaseAccount, error) {
+	accAddress, err := k.Accounts.Indexes.Number.MatchExact(ctx, accNumber)
+	if err != nil {
+		return nil, authtypes.BaseAccount{}, err
+	}
+
+	acc, err := k.Accounts.Get(ctx, accAddress)
+	if err != nil {
+		return nil, authtypes.BaseAccount{}, err
+	}
+
+	return accAddress, acc, nil
+}
+
+func (k Keeper) GetAccountsByNumber(ctx sdk.Context, startAccNum, endAccNum uint64) ([]authtypes.BaseAccount, error) {
+	rng := new(collections.Range[uint64]).
+		StartInclusive(startAccNum).
+		EndInclusive(endAccNum)
+
+	iter, err := k.Accounts.Indexes.Number.Iterate(ctx, rng)
+	if err != nil {
+		return nil, err
+	}
+
+	return indexes.CollectValues(ctx, k.Accounts, iter)
+}
+
+// getNextAccountNumber is a placeholder; a real implementation would
+// maintain a counter in state.
+func (k Keeper) getNextAccountNumber() uint64 {
+	return 0
+}
+```
+
+## Collections with interfaces as values
+
+Although the cosmos-sdk is shifting away from the usage of the interface registry, there are still some places where it is used.
+In order to support old code, we have to support collections with interface values.
+
+The generic `codec.CollValue` is not able to handle interface values, so we need to use a special type, `codec.CollInterfaceValue`.
+`codec.CollInterfaceValue` takes a `codec.BinaryCodec` as an argument and uses it to marshal and unmarshal values as interfaces.
+It lives in the `codec` package, whose import path is `github.com/cosmos/cosmos-sdk/codec`.
+
+### Instantiating Collections with interface values
+
+In order to instantiate a collection with interface values, we need to use `codec.CollInterfaceValue` instead of `codec.CollValue`.
+
+```go expandable
+package example
+
+import (
+	"cosmossdk.io/collections"
+	storetypes "cosmossdk.io/store/types"
+
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Accounts *collections.Map[sdk.AccAddress, sdk.AccountI]
+}
+
+func NewKeeper(cdc codec.BinaryCodec, storeKey *storetypes.KVStoreKey) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Accounts: collections.NewMap(
+			sb, AccountsPrefix, "accounts",
+			sdk.AccAddressKey, codec.CollInterfaceValue[sdk.AccountI](cdc),
+		),
+	}
+}
+
+func (k Keeper) SaveBaseAccount(ctx sdk.Context, account authtypes.BaseAccount) error {
+	return k.Accounts.Set(ctx, account.GetAddress(), account)
+}
+
+func (k Keeper) SaveModuleAccount(ctx sdk.Context, account authtypes.ModuleAccount) error {
+	return k.Accounts.Set(ctx, account.GetAddress(), account)
+}
+
+func (k Keeper) GetAccount(ctx sdk.Context, addr sdk.AccAddress) (sdk.AccountI, error) {
+	return k.Accounts.Get(ctx, addr)
+}
+```
diff --git a/docs/sdk/v0.47/documentation/operations/packages/depinject.mdx b/docs/sdk/v0.47/documentation/operations/packages/depinject.mdx
new file mode 100644
index 00000000..59b290a4
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/operations/packages/depinject.mdx
@@ -0,0 +1,713 @@
+---
+title: Depinject
+---
+
+> **DISCLAIMER**: This is a **beta** package. The SDK team is actively working on this feature and we are looking for feedback from the community. Please try it out and let us know what you think.
+
+## Overview
+
+`depinject` is a dependency injection (DI) framework for the Cosmos SDK, designed to streamline the process of building and configuring blockchain applications.
It works in conjunction with the `core/appconfig` module to replace the majority of boilerplate code in `app.go` with a configuration file in Go, YAML, or JSON format.
+
+`depinject` is particularly useful for developing blockchain applications:
+
+* With multiple interdependent components, modules, or services, helping manage their dependencies effectively.
+* That require decoupling of these components, making it easier to test, modify, or replace individual parts without affecting the entire system.
+* That want to simplify the setup and initialisation of modules and their dependencies by reducing boilerplate code and automating dependency management.
+
+By using `depinject`, developers can achieve:
+
+* Cleaner and more organised code.
+* Improved modularity and maintainability, ultimately enhancing development velocity and code quality.
+
+See also the [Go Doc](https://pkg.go.dev/cosmossdk.io/depinject).
+
+## Usage
+
+The `depinject` framework, based on dependency injection concepts, streamlines the management of dependencies within your blockchain application using its Configuration API. This API offers a set of functions and methods for creating easy-to-use configurations, making it simple to define, modify, and access dependencies and their relationships.
+
+A core component of the [Configuration API](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject#Config) is the `Provide` function, which allows you to register provider functions that supply dependencies. Inspired by constructor injection, these provider functions form the basis of the dependency tree, enabling the management and resolution of dependencies in a structured and maintainable manner. Additionally, `depinject` supports interface types as inputs to provider functions, offering flexibility and decoupling between components, similar to interface injection concepts.
+
+By leveraging `depinject` and its Configuration API, you can efficiently handle dependencies in your blockchain application, ensuring a clean, modular, and well-organised codebase.
+
+Example:
+
+```go expandable
+package main
+
+import (
+	"fmt"
+
+	"cosmossdk.io/depinject"
+)
+
+type AnotherInt int
+
+func main() {
+	var (
+		x int
+		y AnotherInt
+	)
+
+	fmt.Printf("Before (%v, %v)\n", x, y)
+
+	depinject.Inject(
+		depinject.Provide(
+			func() int { return 1 },
+			func() AnotherInt { return AnotherInt(2) },
+		),
+		&x,
+		&y,
+	)
+
+	fmt.Printf("After (%v, %v)\n", x, y)
+}
+```
+
+In this example, `depinject.Provide` registers two provider functions that return `int` and `AnotherInt` values. The `depinject.Inject` function is then used to inject these values into the variables `x` and `y`.
+
+Provider functions serve as the basis for the dependency tree. They are analysed to identify their inputs as dependencies and their outputs as dependents. These dependents can either be used by another provider function or be stored outside the DI container (e.g., `&x` and `&y` in the example above).
+
+### Interface type resolution
+
+`depinject` supports the use of interface types as inputs to provider functions, which helps decouple dependencies between modules. This approach is particularly useful for managing complex systems with multiple modules, such as the Cosmos SDK, where dependencies need to be flexible and maintainable.
+
+For example, `x/bank` expects an [AccountKeeper](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/x/bank/types#AccountKeeper) interface as [input to ProvideModule](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/bank/module.go#L208-L260). `SimApp` uses the implementation in `x/auth`, but the modular design allows for easy changes to the implementation if needed.
+
+Consider the following example:
+
+```go expandable
+package duck
+
+type Duck interface {
+	quack()
+}
+
+type AlsoDuck interface {
+	quack()
+}
+
+type Mallard struct{}
+
+type Canvasback struct{}
+
+func (duck Mallard) quack()    {}
+
+func (duck Canvasback) quack() {}
+
+type Pond struct {
+	Duck AlsoDuck
+}
+```
+
+In this example, there's a `Pond` struct that has a `Duck` field of type `AlsoDuck`. The `depinject` framework can automatically resolve the appropriate implementation when there's only one available, as shown below:
+
+```go
+var pond Pond
+
+depinject.Inject(
+	depinject.Provide(
+		func() Mallard { return Mallard{} },
+		func(duck Duck) Pond {
+			return Pond{Duck: duck}
+		}),
+	&pond)
+```
+
+This code snippet results in the `Duck` field of `Pond` being implicitly bound to the `Mallard` implementation, because it's the only implementation of the `Duck` interface in the container.
+
+However, if there are multiple implementations of the `Duck` interface, as in the following example, you'll encounter an error:
+
+```go
+var pond Pond
+
+depinject.Inject(
+	depinject.Provide(
+		func() Mallard { return Mallard{} },
+		func() Canvasback { return Canvasback{} },
+		func(duck Duck) Pond {
+			return Pond{Duck: duck}
+		}),
+	&pond)
+```
+
+A specific binding preference for `Duck` is required.
+
+#### `BindInterface` API
+
+In the above situation, registering a binding for the `Duck` interface may look like:
+
+```go expandable
+depinject.Inject(
+	depinject.Configs(
+		depinject.BindInterface(
+			"duck.Duck",
+			"duck.Mallard"),
+		depinject.Provide(
+			func() Mallard { return Mallard{} },
+			func() Canvasback { return Canvasback{} },
+			func(duck Duck) APond {
+				return Pond{Duck: duck}
+			})),
+	&pond)
+```
+
+Now `depinject` has enough information to provide `Mallard` as an input to `APond`.
+ +### Full example in real app + + +When using `depinject.Inject`, the injected types must be pointers. + + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + _ "embed" + "io" + "os" + "path/filepath" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + 
"github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" +) + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / nonceMempool = mempool.NewSenderNonceMempool() + / mempoolOpt = baseapp.SetMempool(nonceMempool) + / prepareOpt = func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt = func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. 
+ / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +authtypes.AccountI { + return authtypes.ProtoBaseAccount() +}, + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.CapabilityKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.CrisisKeeper, + &app.UpgradeKeeper, + &app.ParamsKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + ); err != nil { + panic(err) +} + +app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...) + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(app.App.BaseApp, appOpts, app.appCodec, logger, app.kvStoreKeys()); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + + /**** Module Options ****/ + + app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+ app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. 
+/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. 
+func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +## Debugging + +Issues with resolving dependencies in the container can be done with logs and [Graphviz](https://graphviz.org) renderings of the container tree. +By default, whenever there is an error, logs will be printed to stderr and a rendering of the dependency graph in Graphviz DOT format will be saved to `debug_container.dot`. + +Here is an example Graphviz rendering of a successful build of a dependency graph: +![Graphviz Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example.svg) + +Rectangles represent functions, ovals represent types, rounded rectangles represent modules and the single hexagon +represents the function which called `Build`. Black-colored shapes mark functions and types that were called/resolved +without an error. Gray-colored nodes mark functions and types that could have been called/resolved in the container but +were left unused. 
+
+Here is an example Graphviz rendering of a dependency graph build which failed:
+![Graphviz Error Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example_error.svg)
+
+Graphviz DOT files can be converted into SVGs for viewing in a web browser using the `dot` command-line tool, ex:
+
+```txt
+dot -Tsvg debug_container.dot > debug_container.svg
+```
+
+Many other tools, including some IDEs, support working with DOT files.
diff --git a/docs/sdk/v0.47/documentation/operations/packages/orm.mdx b/docs/sdk/v0.47/documentation/operations/packages/orm.mdx
new file mode 100644
index 00000000..636794ab
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/operations/packages/orm.mdx
@@ -0,0 +1,370 @@
+---
+title: ORM
+description: >-
+  The Cosmos SDK ORM is a state management library that provides a rich, but
+  opinionated set of tools for managing a module's state. It provides support
+  for:
+---
+
+The Cosmos SDK ORM is a state management library that provides a rich, but opinionated set of tools for managing a
+module's state. It provides support for:
+
+* type safe management of state
+* multipart keys
+* secondary indexes
+* unique indexes
+* easy prefix and range queries
+* automatic genesis import/export
+* automatic query services for clients, including support for light client proofs (still in development)
+* indexing state data in external databases (still in development)
+
+## Design and Philosophy
+
+The ORM's data model is inspired by the relational data model found in SQL databases. The core abstraction is a table
+with a primary key and optional secondary indexes.
+
+Because the Cosmos SDK uses protobuf as its encoding layer, ORM tables are defined directly in .proto files using
+protobuf options. Each table is defined by a single protobuf `message` type and a schema of multiple tables is
+represented by a single .proto file.
+
+Table structure is specified in the same file where messages are defined in order to make it easy to focus on better
+design of the state layer. Because blockchain state layout is part of the public API for clients (TODO: link to docs on
+light client proofs), it is important to think about the state layout as being part of the public API of a module.
+Changing the state layout actually breaks clients, so it is ideal to think through it carefully up front and to aim for
+a design that will eliminate or minimize breaking changes down the road. Also, good design of state enables building
+more performant and sophisticated applications. Providing users with a set of tools inspired by relational databases
+which have a long history of database design best practices, and allowing schema to be specified declaratively in a
+single place, are design choices the ORM makes to enable better design and more durable APIs.
+
+Also, by only supporting the table abstraction as opposed to key-value pair maps, it is easy to add new
+columns/fields to any data structure without causing a breaking change, and the data structures can easily be indexed in
+any off-the-shelf SQL database for more sophisticated queries.
+
+The encoding of fields in keys is designed to support ordered iteration for all protobuf primitive field types
+except for `bytes`, as well as the well-known types `google.protobuf.Timestamp` and `google.protobuf.Duration`. Encodings
+are optimized for storage space when it makes sense (see the documentation in `cosmos/orm/v1/orm.proto` for more details)
+and table rows do not use extra storage space to store key fields in the value.
+
+We recommend that users of the ORM attempt to follow database design best practices such as
+[normalization](https://en.wikipedia.org/wiki/Database_normalization) (at least 1NF).
+For instance, defining `repeated` fields in a table is considered an anti-pattern because it breaks first normal form (1NF).
+Although we support `repeated` fields in tables, they cannot be used as key fields for this reason. This may seem +restrictive but years of best practice (and also experience in the SDK) have shown that following this pattern +leads to easier to maintain schemas. + +To illustrate the motivation for these principles with an example from the SDK, historically balances were stored +as a mapping from account -> map of denom to amount. This did not scale well because an account with 100 token balances +needed to be encoded/decoded every time a single coin balance changed. Now balances are stored as account,denom -> amount +as in the example above. With the ORM's data model, if we wanted to add a new field to `Balance` such as +`unlocked_balance` (if vesting accounts were redesigned in this way), it would be easy to add it to this table without +requiring a data migration. Because of the ORM's optimizations, the account and denom are only stored in the key part +of storage and not in the value leading to both a flexible data model and efficient usage of storage. + +## Defining Tables + +To define a table: + +1. create a .proto file to describe the module's state (naming it `state.proto` is recommended for consistency), + and import "cosmos/orm/v1/orm.proto", ex: + +```protobuf +syntax = "proto3"; +package bank_example; + +import "cosmos/orm/v1/orm.proto"; +``` + +2. define a `message` for the table, ex: + +```protobuf +message Balance { + bytes account = 1; + string denom = 2; + uint64 balance = 3; +} +``` + +3. add the `cosmos.orm.v1.table` option to the table and give the table an `id` unique within this .proto file: + +```protobuf +message Balance { + option (cosmos.orm.v1.table) = { + id: 1 + }; + + bytes account = 1; + string denom = 2; + uint64 balance = 3; +} +``` + +4. 
define the primary key field or fields, as a comma-separated list of the fields from the message which should make
+   up the primary key:
+
+```protobuf
+message Balance {
+  option (cosmos.orm.v1.table) = {
+    id: 1
+    primary_key: { fields: "account,denom" }
+  };
+
+  bytes account = 1;
+  string denom = 2;
+  uint64 balance = 3;
+}
+```
+
+5. add any desired secondary indexes by specifying an `id` unique within the table and a comma-separated list of the
+   index fields:
+
+```protobuf expandable
+message Balance {
+  option (cosmos.orm.v1.table) = {
+    id: 1;
+    primary_key: { fields: "account,denom" }
+    index: { id: 1 fields: "denom" } / this allows querying for the accounts which own a denom
+  };
+
+  bytes account = 1;
+  string denom = 2;
+  uint64 amount = 3;
+}
+```
+
+### Auto-incrementing Primary Keys
+
+A common pattern in SDK modules and in database design is to define tables with a single integer `id` field with an
+automatically generated primary key. In the ORM we can do this by setting the `auto_increment` option to `true` on the
+primary key, ex:
+
+```protobuf
+message Account {
+  option (cosmos.orm.v1.table) = {
+    id: 2;
+    primary_key: { fields: "id", auto_increment: true }
+  };
+
+  uint64 id = 1;
+  bytes address = 2;
+}
+```
+
+### Unique Indexes
+
+A unique index can be added by setting the `unique` option to `true` on an index, ex:
+
+```protobuf
+message Account {
+  option (cosmos.orm.v1.table) = {
+    id: 2;
+    primary_key: { fields: "id", auto_increment: true }
+    index: {id: 1, fields: "address", unique: true}
+  };
+
+  uint64 id = 1;
+  bytes address = 2;
+}
+```
+
+### Singletons
+
+The ORM also supports a special type of table with only one row called a `singleton`. This can be used for storing
+module parameters. Singletons only need to define a unique `id` that cannot conflict with the ids of other
+tables or singletons in the same .proto file.
Ex: + +```protobuf +message Params { + option (cosmos.orm.v1.singleton) = { + id: 3; + }; + + google.protobuf.Duration voting_period = 1; + uint64 min_threshold = 2; +} +``` + +## Running Codegen + +NOTE: the ORM will only work with protobuf code that implements the [google.golang.org/protobuf](https://pkg.go.dev/google.golang.org/protobuf) +API. That means it will not work with code generated using gogo-proto. + +To install the ORM's code generator, run: + +```shell +go install cosmossdk.io/orm/cmd/protoc-gen-go-cosmos-orm@latest +``` + +The recommended way to run the code generator is to use [buf build](https://docs.buf.build/build/usage). +This is an example `buf.gen.yaml` that runs `protoc-gen-go`, `protoc-gen-go-grpc` and `protoc-gen-go-cosmos-orm` +using buf managed mode: + +```yaml expandable +version: v1 +managed: + enabled: true + go_package_prefix: + default: foo.bar/api # the go package prefix of your package + override: + buf.build/cosmos/cosmos-sdk: cosmossdk.io/api # required to import the Cosmos SDK api module +plugins: + - name: go + out: . + opt: paths=source_relative + - name: go-grpc + out: . + opt: paths=source_relative + - name: go-cosmos-orm + out: . + opt: paths=source_relative +``` + +## Using the ORM in a module + +### Initialization + +To use the ORM in a module, first create a `ModuleSchemaDescriptor`. This tells the ORM which .proto files have defined +an ORM schema and assigns them all a unique non-zero id. Ex: + +```go +var MyModuleSchema = &ormv1alpha1.ModuleSchemaDescriptor{ + SchemaFile: []*ormv1alpha1.ModuleSchemaDescriptor_FileEntry{ + { + Id: 1, + ProtoFileName: mymodule.File_my_module_state_proto.Path(), +}, +}, +} +``` + +In the ORM generated code for a file named `state.proto`, there should be an interface `StateStore` that got generated +with a constructor `NewStateStore` that takes a parameter of type `ormdb.ModuleDB`. Add a reference to `StateStore` +to your module's keeper struct. 
Ex:
+
+```go
+type Keeper struct {
+  db StateStore
+}
+```
+
+Then instantiate the `StateStore` instance via an `ormdb.ModuleDB` that is instantiated from the `SchemaDescriptor`
+above and one or more store services from `cosmossdk.io/core/store`. Ex:
+
+```go expandable
+func NewKeeper(storeService store.KVStoreService) (*Keeper, error) {
+  modDb, err := ormdb.NewModuleDB(MyModuleSchema, ormdb.ModuleDBOptions{
+    KVStoreService: storeService,
+  })
+  if err != nil {
+    return nil, err
+  }
+
+  db, err := NewStateStore(modDb)
+  if err != nil {
+    return nil, err
+  }
+
+  return &Keeper{
+    db: db,
+  }, nil
+}
+```
+
+### Using the generated code
+
+The generated code for the ORM contains methods for inserting, updating, deleting and querying table entries.
+For each table in a .proto file, there is a type-safe table interface implemented in generated code. For instance,
+for a table named `Balance` there should be a `BalanceTable` interface that looks like this:
+
+```go expandable
+type BalanceTable interface {
+  Insert(ctx context.Context, balance *Balance) error
+  Update(ctx context.Context, balance *Balance) error
+  Save(ctx context.Context, balance *Balance) error
+  Delete(ctx context.Context, balance *Balance) error
+  Has(ctx context.Context, account []byte, denom string) (found bool, err error)
+  / Get returns nil and an error which responds true to ormerrors.IsNotFound()
+  / if the record was not found.
+  Get(ctx context.Context, account []byte, denom string) (*Balance, error)
+  List(ctx context.Context, prefixKey BalanceIndexKey, opts ...ormlist.Option) (BalanceIterator, error)
+  ListRange(ctx context.Context, from, to BalanceIndexKey, opts ...ormlist.Option) (BalanceIterator, error)
+  DeleteBy(ctx context.Context, prefixKey BalanceIndexKey) error
+  DeleteRange(ctx context.Context, from, to BalanceIndexKey) error
+
+  doNotImplement()
+}
+```
+
+This `BalanceTable` should be accessible from the `StateStore` interface (assuming our file is named `state.proto`)
+via a `BalanceTable()` accessor method. If all the above example tables/singletons were in the same `state.proto`,
+then `StateStore` would get generated like this:
+
+```go
+type StateStore interface {
+  BalanceTable() BalanceTable
+  AccountTable() AccountTable
+  ParamsTable() ParamsTable
+
+  doNotImplement()
+}
+```
+
+So to work with the `BalanceTable` in a keeper method we could use code like this:
+
+```go expandable
+func (k keeper) AddBalance(ctx context.Context, acct []byte, denom string, amount uint64) error {
+  balance, err := k.db.BalanceTable().Get(ctx, acct, denom)
+  if err != nil && !ormerrors.IsNotFound(err) {
+    return err
+  }
+  if balance == nil {
+    balance = &Balance{
+      Account: acct,
+      Denom:   denom,
+      Amount:  amount,
+    }
+  } else {
+    balance.Amount = balance.Amount + amount
+  }
+  return k.db.BalanceTable().Save(ctx, balance)
+}
+```
+
+`List` methods take `IndexKey` parameters. For instance, `BalanceTable.List` takes `BalanceIndexKey`. `BalanceIndexKey`
+lets us represent index keys for the different indexes (primary and secondary) on the `Balance` table. The primary key
+in the `Balance` table gets a struct `BalanceAccountDenomIndexKey` and the first index gets an index key `BalanceDenomIndexKey`.
+If we wanted to list all the denoms and amounts that an account holds, we would use `BalanceAccountDenomIndexKey`
+with a `List` query just on the account prefix.
Ex: + +```go +it, err := keeper.db.BalanceTable().List(ctx, BalanceAccountDenomIndexKey{ +}.WithAccount(acct)) +``` diff --git a/docs/sdk/v0.47/documentation/operations/tooling/README.mdx b/docs/sdk/v0.47/documentation/operations/tooling/README.mdx new file mode 100644 index 00000000..5df85a1c --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/tooling/README.mdx @@ -0,0 +1,12 @@ +--- +title: Tools +description: >- + This section provides documentation on various tooling used in development of + a Cosmos SDK chain, operating a node and testing. +--- + +This section provides documentation on various tooling used in development of a Cosmos SDK chain, operating a node and testing. + +* [Protocol Buffers](/docs/sdk/v0.47/documentation/operations/tooling/protobuf) +* [Cosmovisor](/docs/sdk/v0.47/documentation/operations/tooling/cosmovisor) +* [Confix](/docs/sdk/v0.47/documentation/operations/tooling/confix) diff --git a/docs/sdk/v0.47/documentation/operations/tooling/autocli.mdx b/docs/sdk/v0.47/documentation/operations/tooling/autocli.mdx new file mode 100644 index 00000000..a904e75c --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/tooling/autocli.mdx @@ -0,0 +1,372 @@ +--- +title: AutoCLI +--- + + +**Synopsis** +This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. + + + + +## Pre-requisite Readings + +* [Building Modules Intro](docs/sdk/v0.47/documentation/module-system/intro) + + + +The `autocli` package is a [Go library](https://pkg.go.dev/cosmossdk.io/client/v2/autocli) for generating CLI (command line interface) interfaces for Cosmos SDK-based applications. It provides a simple way to add CLI commands to your application by generating them automatically based on your gRPC service definitions. Autocli generates CLI commands and flags directly from your protobuf messages, including options, input parameters, and output parameters. 
This means that you can easily add a CLI interface to your application without having to manually create and manage commands.
+
+## Getting Started
+
+Here are the steps to use the `autocli` package:
+
+1. Define your app's modules that implement the `appmodule.AppModule` interface.
+2. Configure how `autocli` command generation behaves by implementing the `func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions` method on the module. Learn more [here](#advanced-usage).
+3. Use the `autocli.AppOptions` struct to specify the modules you defined. If you are using the `depinject` package to manage your app's dependencies, it can automatically create an instance of `autocli.AppOptions` based on your app's configuration.
+4. Use the `EnhanceRootCommand()` method provided by `autocli` (it can be found in the `client/v2/autocli/app.go` file) to add the CLI commands for the specified modules to your root command. This method is additive only, meaning that it does not create commands if they are already registered for a module. Instead, it adds any missing commands to the root command.
+
+Here's an example of how to use `autocli`:
+
+```go expandable
+/ Define your app's modules
+testModules := map[string]appmodule.AppModule{
+  "testModule": &TestModule{},
+}
+
+/ Define the autocli AppOptions
+autoCliOpts := autocli.AppOptions{
+  Modules: testModules,
+}
+
+/ Get the root command
+rootCmd := &cobra.Command{
+  Use: "app",
+}
+
+/ Enhance the root command with autocli
+autocli.EnhanceRootCommand(rootCmd, autoCliOpts)
+
+/ Run the root command
+if err := rootCmd.Execute(); err != nil {
+  fmt.Println(err)
+}
+```
+
+## Flags
+
+`autocli` generates flags for each field in a protobuf message. By default, the names of the flags are generated based on the names of the fields in the message.
You can customise the flag names using the `namingOptions` parameter of the `Builder.AddMessageFlags()` method. + +To define flags for a message, you can use the `Builder.AddMessageFlags()` method. This method takes the `cobra.Command` instance and the message type as input, and generates flags for each field in the message. + +```go expandable +package autocli + +import ( + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + "fmt" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/spf13/cobra" + "google.golang.org/protobuf/reflect/protoreflect" + "sigs.k8s.io/yaml" + "cosmossdk.io/client/v2/internal/util" +) + +func (b *Builder) + +buildMethodCommandCommon(descriptor protoreflect.MethodDescriptor, options *autocliv1.RpcCommandOptions, exec func(cmd *cobra.Command, input protoreflect.Message) + +error) (*cobra.Command, error) { + if options == nil { + / use the defaults + options = &autocliv1.RpcCommandOptions{ +} + +} + long := options.Long + if long == "" { + long = util.DescriptorDocs(descriptor) +} + inputDesc := descriptor.Input() + inputType := util.ResolveMessageType(b.TypeResolver, inputDesc) + use := options.Use + if use == "" { + use = protoNameToCliName(descriptor.Name()) +} + cmd := &cobra.Command{ + Use: use, + Long: long, + Short: options.Short, + Example: options.Example, + Aliases: options.Alias, + SuggestFor: options.SuggestFor, + Deprecated: options.Deprecated, + Version: options.Version, +} + +binder, err := b.AddMessageFlags(cmd.Context(), cmd.Flags(), inputType, options) + if err != nil { + return nil, err +} + +cmd.Args = binder.CobraArgs + + cmd.RunE = func(cmd *cobra.Command, args []string) + +error { + input, err := binder.BuildMessage(args) + if err != nil { + return err +} + +return exec(cmd, input) +} + +return cmd, nil +} + +/ enhanceCommandCommon enhances the provided query or msg command with either generated commands based on the provided module +/ options or the provided custom commands for each module. 
If the provided query command already contains a command +/ for a module, that command is not over-written by this method. This allows a graceful addition of autocli to +/ automatically fill in missing commands. +func (b *Builder) + +enhanceCommandCommon(cmd *cobra.Command, moduleOptions map[string]*autocliv1.ModuleOptions, customCmds map[string]*cobra.Command, buildModuleCommand func(*cobra.Command, *autocliv1.ModuleOptions, string) + +error) + +error { + allModuleNames := map[string]bool{ +} + for moduleName := range moduleOptions { + allModuleNames[moduleName] = true +} + for moduleName := range customCmds { + allModuleNames[moduleName] = true +} + for moduleName := range allModuleNames { + / if we have an existing command skip adding one here + if cmd.HasSubCommands() { + if _, _, err := cmd.Find([]string{ + moduleName +}); err == nil { + / command already exists, skip + continue +} + +} + + / if we have a custom command use that instead of generating one + if custom := customCmds[moduleName]; custom != nil { + / custom commands get added lower down + cmd.AddCommand(custom) + +continue +} + + / check for autocli options + modOpts := moduleOptions[moduleName] + if modOpts == nil { + continue +} + err := buildModuleCommand(cmd, modOpts, moduleName) + if err != nil { + return err +} + +} + +return nil +} + +/ outOrStdoutFormat formats the output based on the output flag and writes it to the command's output stream. +func (b *Builder) + +outOrStdoutFormat(cmd *cobra.Command, out []byte) + +error { + var err error + outputType := cmd.Flag(flags.FlagOutput) + if outputType == nil { + return fmt.Errorf("output flag not found") +} + if outputType.Value.String() == "text" { + out, err = yaml.JSONToYAML(out) + if err != nil { + return err +} + +} + _, err = fmt.Fprintln(cmd.OutOrStdout(), string(out)) + +return nil +} +``` + +The `binder` variable returned by the `AddMessageFlags()` method is used to bind the command-line arguments to the fields in the message. 
+
+You can also customise the behavior of the flags using the `namingOptions` parameter of the `Builder.AddMessageFlags()` method. This parameter allows you to specify a custom prefix for the flags, and to specify whether to generate flags for repeated fields and whether to generate flags for fields with default values.
+
+## Commands and Queries
+
+The `autocli` package generates CLI commands and flags for each method defined in your gRPC service. By default, it generates commands for each RPC method that does not return a stream of messages. The commands are named based on the name of the service method.
+
+For example, given the following protobuf definition for a service:
+
+```protobuf
+service MyService {
+  rpc MyMethod(MyRequest) returns (MyResponse) {}
+}
+```
+
+`autocli` will generate a command named `my-method` for the `MyMethod` method. The command will have flags for each field in the `MyRequest` message.
+
+If you want to customise the behavior of a command, you can define a custom command by implementing the `autocli.Command` interface. You can then register the command with the `autocli.Builder` instance for your application.
+
+Similarly, you can define a custom query by implementing the `autocli.Query` interface. You can then register the query with the `autocli.Builder` instance for your application.
+
+To add a custom command or query, you can use the `Builder.AddCustomCommand` or `Builder.AddCustomQuery` methods, respectively. Each of these methods takes a `cobra.Command` instance, which can be used to define the behavior of the command or query.
+
+## Advanced Usage
+
+### Specifying Subcommands
+
+By default, `autocli` generates a command for each method in your gRPC service. However, you can specify subcommands to group related commands together. To specify subcommands, you can use the `autocliv1.ServiceCommandDescriptor` struct.
+ +This example shows how to use the `autocliv1.ServiceCommandDescriptor` struct to group related commands together and specify subcommands in your gRPC service by defining an instance of `autocliv1.ModuleOptions` in your `autocli.go` file. + +```go expandable +package gov + +import ( + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + govv1 "cosmossdk.io/api/cosmos/gov/v1" + govv1beta1 "cosmossdk.io/api/cosmos/gov/v1beta1" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Msg_ServiceDesc.ServiceName, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Msg_ServiceDesc.ServiceName +}, +}, +}, + Query: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Query_ServiceDesc.ServiceName, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Query_ServiceDesc.ServiceName +}, +}, +}, +} +} +``` + +The `AutoCLIOptions()` method in the autocli package allows you to specify the services and sub-commands to be mapped for your app. In the example code, an instance of the `autocliv1.ModuleOptions` struct is defined in the `appmodule.AppModule` implementation located in the `x/gov/autocli.go` file. This configuration groups related commands together and specifies subcommands for each service. + +### Positional Arguments + +Positional arguments are arguments that are passed to a command without being specified as a flag. They are typically used for providing additional context to a command, such as a filename or search query. + +To add positional arguments to a command, you can use the `autocliv1.PositionalArgDescriptor` struct, as seen in the example below. 
You need to specify the `ProtoField` parameter, which is the name of the protobuf field that should be used as the positional argument. In addition, if the parameter is a variable-length argument, you can specify the `Varargs` parameter as `true`. This can only be applied to the last positional parameter, and the `ProtoField` must be a repeated field.
+
+Here's an example of how to define a positional argument for the `Account` method of the `auth` service:
+
+```go expandable
+package auth
+
+import (
+  authv1beta1 "cosmossdk.io/api/cosmos/auth/v1beta1"
+  autocliv1 "cosmossdk.io/api/cosmos/autocli/v1"
+)
+
+/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface.
+func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions {
+  return &autocliv1.ModuleOptions{
+    Query: &autocliv1.ServiceCommandDescriptor{
+      Service: authv1beta1.Query_ServiceDesc.ServiceName,
+      RpcCommandOptions: []*autocliv1.RpcCommandOptions{
+        {
+          RpcMethod: "Account",
+          Use:       "account [address]",
+          Short:     "query account by address",
+          PositionalArgs: []*autocliv1.PositionalArgDescriptor{{
+            ProtoField: "address",
+          }},
+        },
+        {
+          RpcMethod: "AccountAddressByID",
+          Use:       "address-by-id [acc-num]",
+          Short:     "query account address by account number",
+          PositionalArgs: []*autocliv1.PositionalArgDescriptor{{
+            ProtoField: "id",
+          }},
+        },
+      },
+    },
+    Tx: &autocliv1.ServiceCommandDescriptor{
+      Service: authv1beta1.Msg_ServiceDesc.ServiceName,
+    },
+  }
+}
+```
+
+Here are some example commands that use the positional arguments we defined above:
+
+To query an account by address:
+
+```bash
+ query auth account cosmos1abcd...xyz
+```
+
+To query an account address by account number:
+
+```bash
+ query auth address-by-id 1
+```
+
+In both of these commands, the `auth` service is being queried with the `query` subcommand, followed by the specific method being called (`account` or `address-by-id`).
The positional argument is included at the end of the command (`cosmos1abcd...xyz` or `1`) to specify the address or account number, respectively. + +### Customising Flag Names + +By default, `autocli` generates flag names based on the names of the fields in your protobuf message. However, you can customise the flag names by providing a `FlagOptions` parameter to the `Builder.AddMessageFlags()` method. This parameter allows you to specify custom names for flags based on the names of the message fields. For example, if you have a message with the fields `test` and `test1`, you can use the following naming options to customise the flags + +```go +options := autocliv1.RpcCommandOptions{ + FlagOptions: map[string]*autocliv1.FlagOptions{ + "test": { + Name: "custom_name", +}, + "test1": { + Name: "other_name", +}, +}, +} + +builder.AddMessageFlags(message, options) +``` + +Note that `autocliv1.RpcCommandOptions` is a field of the `autocliv1.ServiceCommandDescriptor` struct, which is defined in the `autocliv1` package. To use this option, you can define an instance of `autocliv1.ModuleOptions` in your `appmodule.AppModule` implementation and specify the `FlagOptions` for the relevant service command descriptor. + +## Conclusion + +`autocli` is a powerful tool for adding CLI interfaces to your Cosmos SDK-based applications. It allows you to easily generate CLI commands and flags from your protobuf messages, and provides many options for customising the behavior of your CLI application. + +To further enhance your CLI experience with Cosmos SDK-based blockchains, you can use `Hubl`. `Hubl` is a tool that allows you to query any Cosmos SDK-based blockchain using the new AutoCLI feature of the Cosmos SDK. With hubl, you can easily configure a new chain and query modules with just a few simple commands. + +For more information on `Hubl`, including how to configure a new chain and query a module, see the [Hubl documentation](https://docs.cosmos.network/main/tooling/hubl). 
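As a closing illustration, the kebab-case mapping described in the Commands and Queries section (the conversion `protoNameToCliName` performs in the source quoted earlier) can be sketched in plain Go. This is a simplified, self-contained sketch, not the actual `autocli` implementation:

```go
package main

import (
	"fmt"
	"unicode"
)

// kebabCase sketches how an RPC method name such as "MyMethod" becomes the
// CLI command name "my-method". The real conversion lives inside
// cosmossdk.io/client/v2/autocli; this simplified version ignores edge cases
// like consecutive capitals (e.g. acronyms such as "ID").
func kebabCase(name string) string {
	var out []rune
	for i, r := range name {
		if unicode.IsUpper(r) {
			if i > 0 {
				out = append(out, '-')
			}
			out = append(out, unicode.ToLower(r))
		} else {
			out = append(out, r)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(kebabCase("MyMethod")) // prints "my-method"
}
```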
diff --git a/docs/sdk/v0.47/documentation/operations/tooling/confix.mdx b/docs/sdk/v0.47/documentation/operations/tooling/confix.mdx new file mode 100644 index 00000000..febc0375 --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/tooling/confix.mdx @@ -0,0 +1,131 @@ +--- +title: Confix +description: >- + Confix is a configuration management tool that allows you to manage your + configuration via CLI. +--- + +`Confix` is a configuration management tool that allows you to manage your configuration via CLI. + +It is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md). + +## Installation + +### Add Config Command + +To add the confix tool, it's required to add the `ConfigCommand` to your application's root command file (e.g. `simd/cmd/root.go`). + +Import the `confixCmd` package: + +```go +import "cosmossdk.io/tools/confix/cmd" +``` + +Find the following line: + +```go +initRootCmd(rootCmd, encodingConfig) +``` + +After that line, add the following: + +```go +rootCmd.AddCommand( + confixcmd.ConfigCommand(), +) +``` + +The `ConfixCommand` function builds the `config` root command and is defined in the `confixCmd` package (`cosmossdk.io/tools/confix/cmd`). +An implementation example can be found in `simapp`. + +The command will be available as `simd config`. + +### Using Confix Standalone + +To use Confix standalone, without having to add it in your application, install it with the following command: + +```bash +go install cosmossdk.io/tools/confix/cmd/confix@latest +``` + + +Currently, due to the replace directive in the Confix go.mod, it is not possible to use `go install`. +Building from source or importing in an application is required until that replace directive is removed. + + +Alternatively, for building from source, simply run `make confix`. The binary will be located in `tools/confix`. 
+ +## Usage + +Use standalone: + +```shell +confix --help +``` + +Use in simd: + +```shell +simd config fix --help +``` + +### Get + +Get a configuration value, e.g.: + +```shell +simd config get app pruning # gets the value pruning from app.toml +simd config get client chain-id # gets the value chain-id from client.toml +``` + +```shell +confix get ~/.simapp/config/app.toml pruning # gets the value pruning from app.toml +confix get ~/.simapp/config/client.toml chain-id # gets the value chain-id from client.toml +``` + +### Set + +Set a configuration value, e.g.: + +```shell +simd config set app pruning "enabled" # sets the value pruning from app.toml +simd config set client chain-id "foo-1" # sets the value chain-id from client.toml +``` + +```shell +confix set ~/.simapp/config/app.toml pruning "enabled" # sets the value pruning from app.toml +confix set ~/.simapp/config/client.toml chain-id "foo-1" # sets the value chain-id from client.toml +``` + +### Migrate + +Migrate a configuration file to a new version, e.g.: + +```shell +simd config migrate v0.47 # migrates defaultHome/config/app.toml to the latest v0.47 config +``` + +```shell +confix migrate v0.47 ~/.simapp/config/app.toml # migrate ~/.simapp/config/app.toml to the latest v0.47 config +``` + +### Diff + +Get the diff between a given configuration file and the default configuration file, e.g.: + +```shell +simd config diff v0.47 # gets the diff between defaultHome/config/app.toml and the latest v0.47 config +``` + +```shell +confix diff v0.47 ~/.simapp/config/app.toml # gets the diff between ~/.simapp/config/app.toml and the latest v0.47 config +``` + +### Maintainer + +At each SDK modification of the default configuration, add the default SDK config under `data/v0.XX-app.toml`. +This allows users to use the tool standalone. 
+ +## Credits + +This project is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md) and their own implementation of [confix](https://github.com/cometbft/cometbft/blob/v0.36.x/scripts/confix/confix.go). diff --git a/docs/sdk/v0.47/documentation/operations/tooling/cosmovisor.mdx b/docs/sdk/v0.47/documentation/operations/tooling/cosmovisor.mdx new file mode 100644 index 00000000..36363fd0 --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/tooling/cosmovisor.mdx @@ -0,0 +1,364 @@ +--- +title: Cosmovisor +--- + +`cosmovisor` is a small process manager for Cosmos SDK application binaries that monitors the governance module for incoming chain upgrade proposals. If it sees a proposal that gets approved, `cosmovisor` can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary. + +* [Design](#design) +* [Contributing](#contributing) +* [Setup](#setup) + * [Installation](#installation) + * [Command Line Arguments And Environment Variables](#command-line-arguments-and-environment-variables) + * [Folder Layout](#folder-layout) +* [Usage](#usage) + * [Initialization](#initialization) + * [Detecting Upgrades](#detecting-upgrades) + * [Auto-Download](#auto-download) +* [Example: SimApp Upgrade](#example-simapp-upgrade) + * [Chain Setup](#chain-setup) + * [Prepare Cosmovisor and Start the Chain](#prepare-cosmovisor-and-start-the-chain) + * [Update App](#update-app) + +## Design + +Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app: + +* it will pass arguments to the associated app (configured by `DAEMON_NAME` env variable). + Running `cosmovisor run arg1 arg2 ....` will run `app arg1 arg2 ...`; +* it will manage an app by restarting and upgrading if needed; +* it is configured using environment variables, not positional arguments. 
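The design bullets above boil down to a small amount of shell setup. The following sketch uses illustrative values (`simd`, `$HOME/.simapp`) together with the `cosmovisor/current/bin` path from the folder layout described later in this document; adapt the names to your own chain:

```shell
# Illustrative cosmovisor configuration (binary name and home are examples).
export DAEMON_NAME=simd            # the app binary cosmovisor manages
export DAEMON_HOME="$HOME/.simapp" # contains the cosmovisor/ directory

# cosmovisor resolves the binary to execute via the "current" symlink:
BINARY="$DAEMON_HOME/cosmovisor/current/bin/$DAEMON_NAME"
echo "$BINARY"

# "cosmovisor run start" then behaves like running "$BINARY start",
# with upgrade monitoring wrapped around it.
```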
+
+*Note: If new versions of the application are not set up to run in-place store migrations, migrations will need to be run manually before restarting `cosmovisor` with the new binary. For this reason, we recommend applications adopt in-place store migrations.*
+
+*Note: If validators would like to enable the auto-download option (which [we don't recommend](#auto-download)), and they are currently running an application using Cosmos SDK `v0.42`, they will need to use Cosmovisor [`v0.1`](https://github.com/cosmos/cosmos-sdk/releases/tag/cosmovisor%2Fv0.1.0). Later versions of Cosmovisor do not support Cosmos SDK `v0.44.3` or earlier if the auto-download option is enabled.*
+
+## Contributing
+
+Cosmovisor is part of the Cosmos SDK monorepo, but it's a separate module with its own release schedule.
+
+Release branches have the following format `release/cosmovisor/vA.B.x`, where A and B are numbers (e.g. `release/cosmovisor/v1.3.x`). Releases are tagged using the following format: `cosmovisor/vA.B.C`.
+
+## Setup
+
+### Installation
+
+You can download Cosmovisor from the [GitHub releases](https://github.com/cosmos/cosmos-sdk/releases/tag/cosmovisor%2Fv1.3.0).
+
+To install the latest version of `cosmovisor`, run the following command:
+
+```shell
+go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@latest
+```
+
+To install a previous version, you can specify the version. IMPORTANT: Chains that use Cosmos SDK v0.44.3 or earlier (e.g. v0.44.2) and want to use the auto-download feature MUST use `cosmovisor v0.1.0`:
+
+```shell
+go install github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor@v0.1.0
+```
+
+Run `cosmovisor version` to check the cosmovisor version.
+
+Alternatively, for building from source, simply run `make cosmovisor`. The binary will be located in `tools/cosmovisor`.
+
+
+Building from source using `make cosmovisor` won't display the correct `cosmovisor` version.
+
+
+### Command Line Arguments And Environment Variables
+
+The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are:
+
+* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration.
+* `run` - Run the configured binary using the rest of the provided arguments.
+* `version` - Output the `cosmovisor` version and also run the binary with the `version` argument.
+
+All arguments passed to `cosmovisor run` will be passed to the application binary (as a subprocess). `cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor run` cannot accept any command-line arguments other than those available to the application binary.
+
+*Note: Use of `cosmovisor` without one of the action arguments is deprecated. For backwards compatibility, if the first argument is not an action argument, `run` is assumed. However, this fallback might be removed in future versions, so it is recommended that you always provide `run`.*
+
+`cosmovisor` reads its configuration from environment variables:
+
+* `DAEMON_HOME` is the location where the `cosmovisor/` directory is kept that contains the genesis binary, the upgrade binaries, and any additional auxiliary files associated with each binary (e.g. `$HOME/.gaiad`, `$HOME/.regend`, `$HOME/.simd`, etc.).
+* `DAEMON_NAME` is the name of the binary itself (e.g. `gaiad`, `regend`, `simd`, etc.).
+* `DAEMON_ALLOW_DOWNLOAD_BINARIES` (*optional*), if set to `true`, will enable auto-downloading of new binaries (for security reasons, this is intended for full nodes rather than validators). By default, `cosmovisor` will not auto-download new binaries.
+* `DAEMON_RESTART_AFTER_UPGRADE` (*optional*, default = `true`), if `true`, restarts the subprocess with the same command-line arguments and flags (but with the new binary) after a successful upgrade. Otherwise (`false`), `cosmovisor` stops running after an upgrade and requires the system administrator to manually restart it. Note that this restart only applies after an upgrade; `cosmovisor` does not auto-restart the subprocess if an error occurs.
+* `DAEMON_RESTART_DELAY` (*optional*, default none), allows a node operator to define a delay between the node halt (for upgrade) and the backup. The value must be a duration (e.g. `1s`).
+* `DAEMON_POLL_INTERVAL` (*optional*, default 300 milliseconds), is the interval length for polling the upgrade plan file. The value must be a duration (e.g. `1s`).
+* `DAEMON_DATA_BACKUP_DIR` (*optional*), sets a custom backup directory. If not set, `DAEMON_HOME` is used.
+* `UNSAFE_SKIP_BACKUP` (defaults to `false`), if set to `true`, upgrades directly without performing a backup. Otherwise (`false`, default), backs up the data before trying the upgrade. The default value of `false` is useful and recommended in case of failures and when a backup is needed to roll back. We recommend using the default backup option `UNSAFE_SKIP_BACKUP=false`.
+* `DAEMON_PREUPGRADE_MAX_RETRIES` (defaults to `0`), the maximum number of times to call `pre-upgrade` in the application after an exit status of `31`. After the maximum number of retries, Cosmovisor fails the upgrade.
+* `COSMOVISOR_DISABLE_LOGS` (defaults to `false`), if set to `true`, disables Cosmovisor's own logs (but not those of the underlying process) completely. This may be useful, for example, when a Cosmovisor subcommand you are executing returns valid JSON that you are then parsing, as logs added by Cosmovisor would make this output invalid JSON.
+
+### Folder Layout
+
+`$DAEMON_HOME/cosmovisor` is expected to belong completely to `cosmovisor` and the subprocesses that are controlled by it. The folder content is organized as follows:
+
+```text
+.
+├── current -> genesis or upgrades/<name>
+├── genesis
+│   └── bin
+│       └── $DAEMON_NAME
+└── upgrades
+    └── <name>
+        ├── bin
+        │   └── $DAEMON_NAME
+        └── upgrade-info.json
+```
+
+The `cosmovisor/` directory includes a subdirectory for each version of the application (i.e. `genesis` or `upgrades/<name>`). Within each subdirectory is the application binary (i.e. `bin/$DAEMON_NAME`) and any additional auxiliary files associated with each binary. `current` is a symbolic link to the currently active directory (i.e. `genesis` or `upgrades/<name>`). The `name` variable in `upgrades/<name>` is the lowercased URI-encoded name of the upgrade as specified in the upgrade module plan. Note that upgrade name paths are normalized to lowercase: for instance, `MyUpgrade` is normalized to `myupgrade`, and its path is `upgrades/myupgrade`.
+
+Please note that `$DAEMON_HOME/cosmovisor` only stores the *application binaries*. The `cosmovisor` binary itself can be stored in any typical location (e.g. `/usr/local/bin`). The application will continue to store its data in the default data directory (e.g. `$HOME/.gaiad`) or the data directory specified with the `--home` flag. `$DAEMON_HOME` is independent of the data directory and can be set to any location. If you set `$DAEMON_HOME` to the same directory as the data directory, you will end up with a configuration like the following:
+
+```text
+.gaiad
+├── config
+├── data
+└── cosmovisor
+```
+
+## Usage
+
+The system administrator is responsible for:
+
+* installing the `cosmovisor` binary
+* configuring the host's init system (e.g. `systemd`, `launchd`, etc.)
+* appropriately setting the environment variables
+* creating the `<DAEMON_HOME>/cosmovisor` directory
+* creating the `<DAEMON_HOME>/cosmovisor/genesis/bin` folder
+* creating the `<DAEMON_HOME>/cosmovisor/upgrades/<name>/bin` folders
+* placing the different versions of the `<DAEMON_NAME>` executable in the appropriate `bin` folders.
+
+`cosmovisor` will set the `current` link to point to `genesis` at first start (i.e. when no `current` link exists) and then handle switching binaries at the correct points in time so that the system administrator can prepare days in advance and relax at upgrade time.
+
+In order to support downloadable binaries, a tarball for each upgrade binary will need to be packaged up and made available through a canonical URL. Additionally, a tarball that includes the genesis binary and all available upgrade binaries can be packaged up and made available so that all the necessary binaries required to sync a full node from start can be easily downloaded.
+
+The `DAEMON`-specific code and operations (e.g. CometBFT config, the application db, syncing blocks, etc.) all work as expected. The application binaries' directives such as command-line flags and environment variables also work as expected.
+
+### Initialization
+
+The `cosmovisor init <path to executable>` command creates the folder structure required for using cosmovisor.
+
+It does the following:
+
+* creates the `<DAEMON_HOME>/cosmovisor` folder if it doesn't yet exist
+* creates the `<DAEMON_HOME>/cosmovisor/genesis/bin` folder if it doesn't yet exist
+* copies the provided executable file to `<DAEMON_HOME>/cosmovisor/genesis/bin/`
+* creates the `current` link, pointing to the `genesis` folder
+
+It uses the `DAEMON_HOME` and `DAEMON_NAME` environment variables for folder location and executable name.
+
+The `cosmovisor init` command is specifically for initializing cosmovisor, and should not be confused with a chain's `init` command (e.g. `cosmovisor run init`).
+
+### Detecting Upgrades
+
+`cosmovisor` polls the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height.
+The following heuristic is applied to detect the upgrade:
+
+* When starting, `cosmovisor` doesn't know much about the currently running upgrade, except the binary in `current/bin/`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name.
+* If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exist, then `cosmovisor` will wait for the `data/upgrade-info.json` file to trigger an upgrade.
+* If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` tries immediately to make an upgrade according to the `name` attribute in `data/upgrade-info.json`.
+* Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger the upgrade mechanism.
+
+When the upgrade mechanism is triggered, `cosmovisor` will:
+
+1. if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor/<name>/bin` (where `<name>` is the `upgrade-info.json:name` attribute);
+2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`.
+
+### Auto-Download
+
+Generally, `cosmovisor` requires that the system administrator place all relevant binaries on disk before the upgrade happens. However, for people who don't need such control and want an automated setup (maybe they are syncing a non-validating full node and want to do little maintenance), there is another option.
+
+**NOTE: we don't recommend using auto-download** because it doesn't verify in advance if a binary is available. If there is any issue downloading a binary, `cosmovisor` will stop and won't restart the app (which could lead to a chain halt).
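The startup heuristic above can be condensed into a small decision function. This is an illustrative sketch only; the real `cosmovisor` reads and compares the actual JSON files rather than bare names:

```go
package main

import "fmt"

// decideStartup sketches cosmovisor's startup heuristic. currentName is the
// upgrade name read from cosmovisor/current/upgrade-info.json ("" if the file
// is missing); dataName is the name from data/upgrade-info.json ("" if missing).
func decideStartup(currentName, dataName string) string {
	switch {
	case dataName == "":
		return "wait" // no pending plan: keep watching data/upgrade-info.json
	case dataName != currentName:
		return "upgrade:" + dataName // new (or first) plan: trigger the upgrade
	default:
		return "wait" // plan already applied: watch for changes
	}
}

func main() {
	fmt.Println(decideStartup("", ""))     // wait
	fmt.Println(decideStartup("", "v2"))   // upgrade:v2
	fmt.Println(decideStartup("v2", "v2")) // wait
}
```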
+
+If `DAEMON_ALLOW_DOWNLOAD_BINARIES` is set to `true`, and no local binary can be found when an upgrade is triggered, `cosmovisor` will attempt to download and install the binary itself based on the instructions in the `info` attribute in the `data/upgrade-info.json` file. The file is constructed by the x/upgrade module and contains data from the upgrade `Plan` object. The `Plan` has an `info` field that is expected to have one of the following two valid formats to specify a download:
+
+1. Store an os/architecture -> binary URI map in the upgrade plan info field as JSON under the `"binaries"` key. For example:
+
+   ```json
+   {
+     "binaries": {
+       "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
+     }
+   }
+   ```
+
+   You can include multiple binaries at once to ensure more than one environment will receive the correct binaries:
+
+   ```json
+   {
+     "binaries": {
+       "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
+       "linux/arm64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
+       "darwin/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
+     }
+   }
+   ```
+
+   When submitting this as a proposal, ensure there are no spaces. An example command using `gaiad` could look like:
+
+   ```shell expandable
+   > gaiad tx upgrade software-upgrade Vega \
+   --title Vega \
+   --deposit 100uatom \
+   --upgrade-height 7368420 \
+   --upgrade-info '{"binaries":{"linux/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-amd64","linux/arm64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-arm64","darwin/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-darwin-amd64"}}' \
+   --summary "upgrade to Vega" \
+   --gas 400000 \
+   --from user \
+   --chain-id test \
+   --home test/val2 \
+   --node tcp://localhost:36657 \
+   --yes
+   ```
+
+2. Store a link to a file that contains all information in the above format (e.g. if you want to specify lots of binaries, changelog info, etc. without filling up the blockchain). For example:
+
+   ```text
+   https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e
+   ```
+
+When `cosmovisor` is triggered to download the new binary, `cosmovisor` will parse the `"binaries"` field, download the new binary with [go-getter](https://github.com/hashicorp/go-getter), and unpack the new binary in the `upgrades/<name>` folder so that it can be run as if it had been installed manually.
+
+Note that for this mechanism to provide strong security guarantees, all URLs should include a SHA 256/512 checksum. This ensures that no false binary is run, even if someone hacks the server or hijacks the DNS. `go-getter` will always ensure the downloaded file matches the checksum if it is provided. `go-getter` will also handle unpacking archives into directories (in this case the download link should point to a `zip` file of all data in the `bin` directory).
+
+To properly create a sha256 checksum on Linux, you can use the `sha256sum` utility.
+For example:
+
+```shell
+sha256sum ./testdata/repo/zip_directory/autod.zip
+```
+
+The result will look something like the following: `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`.
+
+You can also use `sha512sum` if you would prefer to use longer hashes, or `md5sum` if you would prefer to use broken hashes. Whichever you choose, make sure to set the hash algorithm properly in the checksum argument to the URL.
+
+## Example: SimApp Upgrade
+
+The following instructions provide a demonstration of `cosmovisor` using the simulation application (`simapp`) shipped with the Cosmos SDK's source code. The following commands are to be run from within the `cosmos-sdk` repository.
+
+### Chain Setup
+
+Let's create a new chain using the `v0.44` version of simapp (the Cosmos SDK demo app):
+
+```shell
+git checkout v0.44.6
+make build
+```
+
+Clean `~/.simapp` (never do this in a production environment):
+
+```shell
+./build/simd unsafe-reset-all
+```
+
+Set up the app config:
+
+```shell
+./build/simd config set client chain-id test
+./build/simd config set client keyring-backend test
+./build/simd config set client broadcast-mode sync
+```
+
+Initialize the node and overwrite any previous genesis file (never do this in a production environment):
+
+{/* TODO: init does not read chain-id from config */}
+
+```shell
+./build/simd init test --chain-id test --overwrite
+```
+
+Set the minimum gas price to `0stake` in `~/.simapp/config/app.toml`:
+
+```shell
+minimum-gas-prices = "0stake"
+```
+
+For the sake of this demonstration, amend `voting_period` in `genesis.json` to a reduced time of 20 seconds (`20s`):
+
+```shell
+cat <<< $(jq '.app_state.gov.voting_params.voting_period = "20s"' $HOME/.simapp/config/genesis.json) > $HOME/.simapp/config/genesis.json
+```
+
+Create a validator, and set up the genesis transaction:
+
+```shell
+./build/simd keys add validator
+./build/simd genesis add-genesis-account validator 1000000000stake --keyring-backend test
+./build/simd
genesis gentx validator 1000000stake --chain-id test +./build/simd genesis collect-gentxs +``` + +#### Prepare Cosmovisor and Start the Chain + +Set the required environment variables: + +```shell +export DAEMON_NAME=simd +export DAEMON_HOME=$HOME/.simapp +``` + +Set the optional environment variable to trigger an automatic app restart: + +```shell +export DAEMON_RESTART_AFTER_UPGRADE=true +``` + +Create the folder for the genesis binary and copy the `simd` binary: + +```shell +mkdir -p $DAEMON_HOME/cosmovisor/genesis/bin +cp ./build/simd $DAEMON_HOME/cosmovisor/genesis/bin +``` + +Now you can run cosmovisor with simapp v0.44: + +```shell +cosmovisor run start +``` + +#### Update App + +Update app to the latest version (e.g. v0.45). + +Next, we can add a migration - which is defined using `x/upgrade` [upgrade plan](https://github.com/cosmos/cosmos-sdk/blob/main/docs/advanced/13-upgrade.md) (you may refer to a past version if you are using an older Cosmos SDK release). In a migration we can do any deterministic state change. 
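Upgrade plan names map to folder names under `upgrades/` via the normalization rule from the Folder Layout section (lowercased, URI-encoded). A small helper sketching that mapping (illustrative only, not cosmovisor's exact code):

```go
package main

import (
	"fmt"
	"net/url"
	"path/filepath"
	"strings"
)

// upgradeBinPath derives the expected binary location for an upgrade plan:
// the plan name is URI-encoded and lowercased, per cosmovisor's folder layout.
func upgradeBinPath(daemonHome, planName, daemonName string) string {
	name := strings.ToLower(url.PathEscape(planName))
	return filepath.Join(daemonHome, "cosmovisor", "upgrades", name, "bin", daemonName)
}

func main() {
	// "MyUpgrade" normalizes to "myupgrade", as in the Folder Layout section.
	fmt.Println(upgradeBinPath("/home/user/.simapp", "MyUpgrade", "simd"))
}
```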
+
+Build the new version of the `simd` binary:
+
+```shell
+make build
+```
+
+Create the folder for the upgrade binary and copy the `simd` binary:
+
+```shell
+mkdir -p $DAEMON_HOME/cosmovisor/upgrades/test1/bin
+cp ./build/simd $DAEMON_HOME/cosmovisor/upgrades/test1/bin
+```
+
+Open a new terminal window and submit an upgrade proposal along with a deposit and a vote (these commands must be run within 20 seconds of each other):
+
+**v0.45 and earlier**:
+
+```shell
+./build/simd tx gov submit-proposal software-upgrade test1 --title upgrade --description upgrade --upgrade-height 200 --from validator --yes
+./build/simd tx gov deposit 1 10000000stake --from validator --yes
+./build/simd tx gov vote 1 yes --from validator --yes
+```
+
+**v0.46, v0.47**:
+
+```shell
+./build/simd tx gov submit-legacy-proposal software-upgrade test1 --title upgrade --description upgrade --upgrade-height 200 --from validator --yes
+./build/simd tx gov deposit 1 10000000stake --from validator --yes
+./build/simd tx gov vote 1 yes --from validator --yes
+```
+
+**v0.48+**:
+
+```shell
+./build/simd tx upgrade software-upgrade test1 --title upgrade --summary upgrade --upgrade-height 200 --from validator --yes
+./build/simd tx gov deposit 1 10000000stake --from validator --yes
+./build/simd tx gov vote 1 yes --from validator --yes
+```
+
+The upgrade will occur automatically at height 200. Note: you may need to change the upgrade height in the snippet above if your test run takes more time.
diff --git a/docs/sdk/v0.47/documentation/operations/tooling/depinject.mdx b/docs/sdk/v0.47/documentation/operations/tooling/depinject.mdx
new file mode 100644
index 00000000..c6977e0d
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/operations/tooling/depinject.mdx
@@ -0,0 +1,692 @@
+---
+title: Depinject
+---
+
+> **DISCLAIMER**: This is a **beta** package. The SDK team is actively working on this feature and we are looking for feedback from the community. Please try it out and let us know what you think.
+
+## Overview
+
+`depinject` is a dependency injection framework for the Cosmos SDK. This module, together with `core/appconfig`, is meant to simplify the definition of a blockchain by replacing most of `app.go`'s boilerplate code with a configuration file (Go, YAML, or JSON).
+
+* [Go Doc](https://pkg.go.dev/cosmossdk.io/depinject)
+
+## Usage
+
+`depinject` includes an expressive and composable [Configuration API](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject#Config).
+A core configuration function is `Provide`. The example below demonstrates the registration of free **provider functions** via the `Provide` API.
+
+```go expandable
+package main
+
+import (
+	"fmt"
+
+	"cosmossdk.io/depinject"
+)
+
+type AnotherInt int
+
+func main() {
+	var (
+		x int
+		y AnotherInt
+	)
+
+	fmt.Printf("Before (%v, %v)\n", x, y)
+
+	depinject.Inject(
+		depinject.Provide(
+			func() int { return 1 },
+			func() AnotherInt { return AnotherInt(2) },
+		),
+		&x,
+		&y,
+	)
+
+	fmt.Printf("After (%v, %v)\n", x, y)
+}
+```
+
+Provider functions form the basis of the dependency tree. They are introspected, and their inputs are identified as dependencies and their outputs as dependants, either of another provider function or of state stored outside the DI container, as is the case with `&x` and `&y` above.
+
+### Interface type resolution
+
+`depinject` supports interface types as inputs to provider functions. In the SDK's case this pattern is used to decouple
+`Keeper` dependencies between modules. For example `x/bank` expects an [AccountKeeper](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/x/bank/types#AccountKeeper) interface as [input to ProvideModule](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/bank/module.go#L208-L260).
+
+Concretely, `SimApp` uses the implementation in `x/auth`, but this design allows for this loose coupling to change.
+
+Given the following types:
+
+```go expandable
+package duck
+
+type Duck interface {
+	quack()
+}
+
+type AlsoDuck interface {
+	quack()
+}
+
+type Mallard struct{}
+
+type Canvasback struct{}
+
+func (duck Mallard) quack() {}
+
+func (duck Canvasback) quack() {}
+
+type Pond struct {
+	Duck AlsoDuck
+}
+```
+
+this usage
+
+```go
+var pond Pond
+
+depinject.Inject(
+	depinject.Provide(
+		func() Mallard { return Mallard{} },
+		func(duck Duck) Pond { return Pond{Duck: duck} },
+	),
+	&pond)
+```
+
+results in an *implicit* binding of `Duck` to `Mallard`. This works because there is only one implementation of `Duck` in the container.
+However, adding a second provider of `Duck` will result in an error:
+
+```go
+var pond Pond
+
+depinject.Inject(
+	depinject.Provide(
+		func() Mallard { return Mallard{} },
+		func() Canvasback { return Canvasback{} },
+		func(duck Duck) Pond { return Pond{Duck: duck} },
+	),
+	&pond)
+```
+
+A specific binding preference for `Duck` is required.
+
+#### `BindInterface` API
+
+In the above situation, registering a binding for a given interface may look like:
+
+```go expandable
+depinject.Inject(
+	depinject.Configs(
+		depinject.BindInterface(
+			"duck.Duck",
+			"duck.Mallard"),
+		depinject.Provide(
+			func() Mallard { return Mallard{} },
+			func() Canvasback { return Canvasback{} },
+			func(duck Duck) APond { return Pond{Duck: duck} },
+		)),
+	&pond)
+```
+
+Now `depinject` has enough information to provide `Mallard` as an input to `APond`.
+
+### Full example in real app
+
+
+When using `depinject.Inject`, the injected types must be pointers.
+
+```go expandable
+//go:build !app_v1
+
+package simapp
+
+import (
+	_ "embed"
+	"io"
+	"os"
+	"path/filepath"
+
+	"github.com/tendermint/tendermint/libs/log"
+	dbm "github.com/tendermint/tm-db"
+
+	"cosmossdk.io/depinject"
+
+	"github.com/cosmos/cosmos-sdk/baseapp"
+	"github.com/cosmos/cosmos-sdk/client"
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	"github.com/cosmos/cosmos-sdk/runtime"
+	"github.com/cosmos/cosmos-sdk/server"
+	"github.com/cosmos/cosmos-sdk/server/api"
+	"github.com/cosmos/cosmos-sdk/server/config"
+	servertypes "github.com/cosmos/cosmos-sdk/server/types"
+	"github.com/cosmos/cosmos-sdk/store/streaming"
+	storetypes "github.com/cosmos/cosmos-sdk/store/types"
+	"github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar"
+	"github.com/cosmos/cosmos-sdk/types/module"
+	"github.com/cosmos/cosmos-sdk/x/auth"
+	authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+	authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation"
+	_ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" // import for side-effects
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+	"github.com/cosmos/cosmos-sdk/x/auth/vesting"
+	authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper"
+	authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module"
+	"github.com/cosmos/cosmos-sdk/x/bank"
+	bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper"
+	"github.com/cosmos/cosmos-sdk/x/capability"
+	capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper"
+	consensus "github.com/cosmos/cosmos-sdk/x/consensus"
+	consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper"
+	"github.com/cosmos/cosmos-sdk/x/crisis"
+	crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper"
+	distr "github.com/cosmos/cosmos-sdk/x/distribution"
+	distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper"
+	"github.com/cosmos/cosmos-sdk/x/evidence"
+	evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper"
+	feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper"
+	feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module"
+	"github.com/cosmos/cosmos-sdk/x/genutil"
+	genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types"
+	"github.com/cosmos/cosmos-sdk/x/gov"
+	govclient "github.com/cosmos/cosmos-sdk/x/gov/client"
+	govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper"
+	groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper"
+	groupmodule "github.com/cosmos/cosmos-sdk/x/group/module"
+	"github.com/cosmos/cosmos-sdk/x/mint"
+	mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper"
+	nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper"
+	nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module"
+	"github.com/cosmos/cosmos-sdk/x/params"
+	paramsclient "github.com/cosmos/cosmos-sdk/x/params/client"
+	paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper"
+	paramstypes "github.com/cosmos/cosmos-sdk/x/params/types"
+	"github.com/cosmos/cosmos-sdk/x/slashing"
+	slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper"
+	"github.com/cosmos/cosmos-sdk/x/staking"
+	stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper"
+	"github.com/cosmos/cosmos-sdk/x/upgrade"
+	upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client"
+	upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper"
+)
+
+var (
+	// DefaultNodeHome default home directories for the application daemon
+	DefaultNodeHome string
+
+	// ModuleBasics defines the module BasicManager that is in charge of setting up basic,
+	// non-dependant module elements, such as codec registration
+	// and genesis verification.
+	ModuleBasics = module.NewBasicManager(
+		auth.AppModuleBasic{},
+		genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+		bank.AppModuleBasic{},
+		capability.AppModuleBasic{},
+		staking.AppModuleBasic{},
+		mint.AppModuleBasic{},
+		distr.AppModuleBasic{},
+		gov.NewAppModuleBasic(
+			[]govclient.ProposalHandler{
+				paramsclient.ProposalHandler,
+				upgradeclient.LegacyProposalHandler,
+				upgradeclient.LegacyCancelProposalHandler,
+			},
+		),
+		params.AppModuleBasic{},
+		crisis.AppModuleBasic{},
+		slashing.AppModuleBasic{},
+		feegrantmodule.AppModuleBasic{},
+		upgrade.AppModuleBasic{},
+		evidence.AppModuleBasic{},
+		authzmodule.AppModuleBasic{},
+		groupmodule.AppModuleBasic{},
+		vesting.AppModuleBasic{},
+		nftmodule.AppModuleBasic{},
+		consensus.AppModuleBasic{},
+	)
+)
+
+var (
+	_ runtime.AppI            = (*SimApp)(nil)
+	_ servertypes.Application = (*SimApp)(nil)
+)
+
+// SimApp extends an ABCI application, but with most of its parameters exported.
+// They are exported for convenience in creating helper functions, as object
+// capabilities aren't needed for testing.
+type SimApp struct {
+	*runtime.App
+	legacyAmino       *codec.LegacyAmino
+	appCodec          codec.Codec
+	txConfig          client.TxConfig
+	interfaceRegistry codectypes.InterfaceRegistry
+
+	// keepers
+	AccountKeeper         authkeeper.AccountKeeper
+	BankKeeper            bankkeeper.Keeper
+	CapabilityKeeper      *capabilitykeeper.Keeper
+	StakingKeeper         *stakingkeeper.Keeper
+	SlashingKeeper        slashingkeeper.Keeper
+	MintKeeper            mintkeeper.Keeper
+	DistrKeeper           distrkeeper.Keeper
+	GovKeeper             *govkeeper.Keeper
+	CrisisKeeper          *crisiskeeper.Keeper
+	UpgradeKeeper         *upgradekeeper.Keeper
+	ParamsKeeper          paramskeeper.Keeper
+	AuthzKeeper           authzkeeper.Keeper
+	EvidenceKeeper        evidencekeeper.Keeper
+	FeeGrantKeeper        feegrantkeeper.Keeper
+	GroupKeeper           groupkeeper.Keeper
+	NFTKeeper             nftkeeper.Keeper
+	ConsensusParamsKeeper consensuskeeper.Keeper
+
+	// simulation manager
+	sm *module.SimulationManager
+}
+
+func init() {
+	userHomeDir, err := os.UserHomeDir()
+	if err != nil {
+		panic(err)
+	}
+
+	DefaultNodeHome = filepath.Join(userHomeDir, ".simapp")
+}
+
+// NewSimApp returns a reference to an initialized SimApp.
+func NewSimApp(
+	logger log.Logger,
+	db dbm.DB,
+	traceStore io.Writer,
+	loadLatest bool,
+	appOpts servertypes.AppOptions,
+	baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+	var (
+		app        = &SimApp{}
+		appBuilder *runtime.AppBuilder
+
+		// Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal
+		// handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override
+		// them.
+		//
+		// nonceMempool = mempool.NewSenderNonceMempool()
+		// mempoolOpt   = baseapp.SetMempool(nonceMempool)
+		// prepareOpt   = func(app *baseapp.BaseApp) {
+		// 	app.SetPrepareProposal(app.DefaultPrepareProposal())
+		// }
+		// processOpt = func(app *baseapp.BaseApp) {
+		// 	app.SetProcessProposal(app.DefaultProcessProposal())
+		// }
+		//
+		// Further down we'd set the options in the AppBuilder like below.
+		// baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt)
+
+		// merge the AppConfig and other configuration in one config
+		appConfig = depinject.Configs(
+			AppConfig,
+			depinject.Supply(
+				// supply the application options
+				appOpts,
+
+				// ADVANCED CONFIGURATION
+
+				//
+				// AUTH
+				//
+				// For providing a custom function required in auth to generate custom account types
+				// add it below. By default the auth module uses simulation.RandomGenesisAccounts.
+				//
+				// authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts),
+
+				// For providing a custom base account type add it below.
+				// By default the auth module uses authtypes.ProtoBaseAccount().
+				//
+				// func() authtypes.AccountI { return authtypes.ProtoBaseAccount() },
+
+				//
+				// MINT
+				//
+
+				// For providing a custom inflation function for x/mint add here your
+				// custom function that implements the minttypes.InflationCalculationFn
+				// interface.
+			),
+		)
+	)
+
+	if err := depinject.Inject(appConfig,
+		&appBuilder,
+		&app.appCodec,
+		&app.legacyAmino,
+		&app.txConfig,
+		&app.interfaceRegistry,
+		&app.AccountKeeper,
+		&app.BankKeeper,
+		&app.CapabilityKeeper,
+		&app.StakingKeeper,
+		&app.SlashingKeeper,
+		&app.MintKeeper,
+		&app.DistrKeeper,
+		&app.GovKeeper,
+		&app.CrisisKeeper,
+		&app.UpgradeKeeper,
+		&app.ParamsKeeper,
+		&app.AuthzKeeper,
+		&app.EvidenceKeeper,
+		&app.FeeGrantKeeper,
+		&app.GroupKeeper,
+		&app.NFTKeeper,
+		&app.ConsensusParamsKeeper,
+	); err != nil {
+		panic(err)
+	}
+
+	app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...)
+
+	// load state streaming if enabled
+	if _, _, err := streaming.LoadStreamingServices(app.App.BaseApp, appOpts, app.appCodec, logger, app.kvStoreKeys()); err != nil {
+		logger.Error("failed to load state streaming", "err", err)
+		os.Exit(1)
+	}
+
+	/**** Module Options ****/
+
+	app.ModuleManager.RegisterInvariants(app.CrisisKeeper)
+
+	// RegisterUpgradeHandlers is used for registering any on-chain upgrades.
+	app.RegisterUpgradeHandlers()
+
+	// add test gRPC service for testing gRPC queries in isolation
+	testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{})
+
+	// create the simulation manager and define the order of the modules for deterministic simulations
+	//
+	// NOTE: this is not required for apps that don't use the simulator for fuzz testing
+	// transactions
+	overrideModules := map[string]module.AppModuleSimulation{
+		authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)),
+	}
+
+	app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)
+
+	app.sm.RegisterStoreDecoders()
+
+	// A custom InitChainer can be set if extra pre-init-genesis logic is required.
+	// By default, when using app wiring enabled modules, this is not required.
+	// For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring.
+	// However, when registering a module manually (i.e. that does not support app wiring), the module version map
+	// must be set manually as follows. The upgrade module will de-duplicate the module version map.
+	//
+	// app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain {
+	// 	app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap())
+	// 	return app.App.InitChainer(ctx, req)
+	// })
+
+	if err := app.Load(loadLatest); err != nil {
+		panic(err)
+	}
+
+	return app
+}
+
+// Name returns the name of the App
+func (app *SimApp) Name() string {
+	return app.BaseApp.Name()
+}
+
+// LegacyAmino returns SimApp's amino codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) LegacyAmino() *codec.LegacyAmino {
+	return app.legacyAmino
+}
+
+// AppCodec returns SimApp's app codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) AppCodec() codec.Codec {
+	return app.appCodec
+}
+
+// InterfaceRegistry returns SimApp's InterfaceRegistry
+func (app *SimApp) InterfaceRegistry() codectypes.InterfaceRegistry {
+	return app.interfaceRegistry
+}
+
+// TxConfig returns SimApp's TxConfig
+func (app *SimApp) TxConfig() client.TxConfig {
+	return app.txConfig
+}
+
+// GetKey returns the KVStoreKey for the provided store key.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetKey(storeKey string) *storetypes.KVStoreKey {
+	sk := app.UnsafeFindStoreKey(storeKey)
+	kvStoreKey, ok := sk.(*storetypes.KVStoreKey)
+	if !ok {
+		return nil
+	}
+	return kvStoreKey
+}
+
+func (app *SimApp) kvStoreKeys() map[string]*storetypes.KVStoreKey {
+	keys := make(map[string]*storetypes.KVStoreKey)
+	for _, k := range app.GetStoreKeys() {
+		if kv, ok := k.(*storetypes.KVStoreKey); ok {
+			keys[kv.Name()] = kv
+		}
+	}
+	return keys
+}
+
+// GetSubspace returns a param subspace for a given module name.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetSubspace(moduleName string) paramstypes.Subspace {
+	subspace, _ := app.ParamsKeeper.GetSubspace(moduleName)
+	return subspace
+}
+
+// SimulationManager implements the SimulationApp interface
+func (app *SimApp) SimulationManager() *module.SimulationManager {
+	return app.sm
+}
+
+// RegisterAPIRoutes registers all application module routes with the provided
+// API server.
+func (app *SimApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+	app.App.RegisterAPIRoutes(apiSvr, apiConfig)
+	// register swagger API in app.go so that other applications can override easily
+	if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+		panic(err)
+	}
+}
+
+// GetMaccPerms returns a copy of the module account permissions
+//
+// NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms() map[string][]string {
+	dup := make(map[string][]string)
+	for _, perms := range moduleAccPerms {
+		dup[perms.Account] = perms.Permissions
+	}
+	return dup
+}
+
+// BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses() map[string]bool {
+	result := make(map[string]bool)
+	if len(blockAccAddrs) > 0 {
+		for _, addr := range blockAccAddrs {
+			result[addr] = true
+		}
+	} else {
+		for addr := range GetMaccPerms() {
+			result[addr] = true
+		}
+	}
+	return result
+}
+```
+
+## Debugging
+
+Issues with resolving dependencies in the container can be debugged with logs and [Graphviz](https://graphviz.org) renderings of the container tree.
+By default, whenever there is an error, logs will be printed to stderr and a rendering of the dependency graph in Graphviz DOT format will be saved to `debug_container.dot`.
+
+Here is an example Graphviz rendering of a successful build of a dependency graph:
+![Graphviz Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example.svg)
+
+Rectangles represent functions, ovals represent types, rounded rectangles represent modules and the single hexagon
+represents the function which called `Build`. Black-colored shapes mark functions and types that were called/resolved
+without an error. Gray-colored nodes mark functions and types that could have been called/resolved in the container but
+were left unused.
+
+Here is an example Graphviz rendering of a dependency graph build which failed:
+![Graphviz Error Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example_error.svg)
+
+Graphviz DOT files can be converted into SVGs for viewing in a web browser using the `dot` command-line tool, e.g.:
+
+```shell
+dot -Tsvg debug_container.dot > debug_container.svg
+```
+
+Many other tools, including some IDEs, support working with DOT files.
diff --git a/docs/sdk/v0.47/documentation/operations/tooling/hubl.mdx b/docs/sdk/v0.47/documentation/operations/tooling/hubl.mdx new file mode 100644 index 00000000..f7b193ca --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/tooling/hubl.mdx @@ -0,0 +1,71 @@
+---
+title: Hubl
+---
+
+`Hubl` is a tool that allows you to query any Cosmos SDK-based blockchain.
+It takes advantage of the new [AutoCLI](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/client/v2@v2.0.0-20220916140313-c5245716b516/cli) feature {/* TODO replace with AutoCLI docs */} of the Cosmos SDK.
+
+## Installation
+
+Hubl can be installed using `go install`:
+
+```shell
+go install cosmossdk.io/tools/hubl/cmd/hubl@latest
+```
+
+Or build from source:
+
+```shell
+git clone --depth=1 https://github.com/cosmos/cosmos-sdk
+make hubl
+```
+
+The binary will be located in `tools/hubl`.
+
+## Usage
+
+```shell
+hubl --help
+```
+
+### Add chain
+
+To configure a new chain, run the `init` command with the name of the chain as it is listed in the [chain registry](https://github.com/cosmos/chain-registry).
+
+If the chain is not listed in the chain registry, you can use any unique name.
+
+```shell
+hubl init [chain-name]
+hubl init regen
+```
+
+The chain configuration is stored in `~/.hubl/config.toml`.
+
+
+
+When using an insecure gRPC endpoint, change the `insecure` field to `true` in the config file.
+
+```toml
+[chains]
+[chains.regen]
+[[chains.regen.trusted-grpc-endpoints]]
+endpoint = 'localhost:9090'
+insecure = true
+```
+
+Or use the `--insecure` flag:
+
+```shell
+hubl init regen --insecure
+```
+
+
+
+### Query
+
+To query a chain, you can use the `query` command,
+then specify which module you want to query and the query itself.
+
+```shell
+hubl regen query auth module-accounts
+```
diff --git a/docs/sdk/v0.47/documentation/operations/tooling/protobuf.mdx b/docs/sdk/v0.47/documentation/operations/tooling/protobuf.mdx new file mode 100644 index 00000000..115b7aef --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/tooling/protobuf.mdx @@ -0,0 +1,1073 @@
+---
+title: Protocol Buffers
+description: >-
+  The Cosmos SDK uses protocol buffers extensively; this document provides a
+  guide on how they are used in the cosmos-sdk.
+---
+
+The Cosmos SDK uses protocol buffers extensively; this document provides a guide on how they are used in the cosmos-sdk.
+
+To generate the proto files, the Cosmos SDK uses a Docker image that is provided for everyone to use as well. The latest version is `ghcr.io/cosmos/proto-builder:0.12.x`.
+
+Below is an example of the Cosmos SDK's commands for generating, linting, and formatting protobuf files that can be reused in any application's Makefile.
+
+```make expandable
+#!/usr/bin/make -f
+
+PACKAGES_NOSIMULATION=$(shell go list ./... | grep -v '/simulation')
+
+PACKAGES_SIMTEST=$(shell go list ./... 
| grep '/simulation') + +export VERSION := $(shell echo $(shell git describe --always --match "v*") | sed 's/^v/') + +export TMVERSION := $(shell go list -m github.com/tendermint/tendermint | sed 's:.* ::') + +export COMMIT := $(shell git log -1 --format='%H') + +LEDGER_ENABLED ?= true +BINDIR ?= $(GOPATH)/bin +BUILDDIR ?= $(CURDIR)/build +SIMAPP = ./simapp +MOCKS_DIR = $(CURDIR)/tests/mocks +HTTPS_GIT := https://github.com/cosmos/cosmos-sdk.git +DOCKER := $(shell which docker) + +PROJECT_NAME = $(shell git remote get-url origin | xargs basename -s .git) + +DOCS_DOMAIN=docs.cosmos.network +# RocksDB is a native dependency, so we don't assume the library is installed. +# Instead, it must be explicitly enabled and we warn when it is not. +ENABLE_ROCKSDB ?= false + +# process build tags +build_tags = netgo + ifeq ($(LEDGER_ENABLED),true) + ifeq ($(OS),Windows_NT) + +GCCEXE = $(shell where gcc.exe 2> NUL) + ifeq ($(GCCEXE),) + $(error gcc.exe not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + else + UNAME_S = $(shell uname -s) + ifeq ($(UNAME_S),OpenBSD) + $(warning OpenBSD detected, disabling ledger support (https://github.com/cosmos/cosmos-sdk/issues/1988)) + +else + GCC = $(shell command -v gcc 2> /dev/null) + ifeq ($(GCC),) + $(error gcc not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + endif + endif +endif + ifeq (secp,$(findstring secp,$(COSMOS_BUILD_OPTIONS))) + +build_tags += libsecp256k1_sdk +endif + ifeq (legacy,$(findstring legacy,$(COSMOS_BUILD_OPTIONS))) + +build_tags += app_v1 +endif + whitespace := +whitespace += $(whitespace) + comma := , +build_tags_comma_sep := $(subst $(whitespace),$(comma),$(build_tags)) + +# process linker flags + +ldflags = -X github.com/cosmos/cosmos-sdk/version.Name=sim \ + -X github.com/cosmos/cosmos-sdk/version.AppName=simd \ + -X github.com/cosmos/cosmos-sdk/version.Version=$(VERSION) \ + -X 
github.com/cosmos/cosmos-sdk/version.Commit=$(COMMIT) \ + -X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)" \ + -X github.com/tendermint/tendermint/version.TMCoreSemVer=$(TMVERSION) + ifeq ($(ENABLE_ROCKSDB),true) + +BUILD_TAGS += rocksdb_build + test_tags += rocksdb_build +endif + +# DB backend selection + ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += gcc +endif + ifeq (badgerdb,$(findstring badgerdb,$(COSMOS_BUILD_OPTIONS))) + +BUILD_TAGS += badgerdb +endif +# handle rocksdb + ifeq (rocksdb,$(findstring rocksdb,$(COSMOS_BUILD_OPTIONS))) + ifneq ($(ENABLE_ROCKSDB),true) + $(error Cannot use RocksDB backend unless ENABLE_ROCKSDB=true) + +endif + CGO_ENABLED=1 + BUILD_TAGS += rocksdb +endif +# handle boltdb + ifeq (boltdb,$(findstring boltdb,$(COSMOS_BUILD_OPTIONS))) + +BUILD_TAGS += boltdb +endif + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +ldflags += -w -s +endif +ldflags += $(LDFLAGS) + ldflags := $(strip $(ldflags)) + +build_tags += $(BUILD_TAGS) + +build_tags := $(strip $(build_tags)) + +BUILD_FLAGS := -tags "$(build_tags)" -ldflags '$(ldflags)' +# check for nostrip option + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -trimpath +endif + +# Check for debug option + ifeq (debug,$(findstring debug,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -gcflags "all=-N -l" +endif + +all: tools build lint test vulncheck + +# The below include contains the tools and runsim targets. 
+include contrib/devtools/Makefile + +############################################################################### +### Build ### +############################################################################### + +BUILD_TARGETS := build install + +build: BUILD_ARGS=-o $(BUILDDIR)/ + +build-linux-amd64: + GOOS=linux GOARCH=amd64 LEDGER_ENABLED=false $(MAKE) + +build + +build-linux-arm64: + GOOS=linux GOARCH=arm64 LEDGER_ENABLED=false $(MAKE) + +build + +$(BUILD_TARGETS): go.sum $(BUILDDIR)/ + cd ${ + CURRENT_DIR +}/simapp && go $@ -mod=readonly $(BUILD_FLAGS) $(BUILD_ARGS) ./... + +$(BUILDDIR)/: + mkdir -p $(BUILDDIR)/ + +cosmovisor: + $(MAKE) -C tools/cosmovisor cosmovisor + +.PHONY: build build-linux-amd64 build-linux-arm64 cosmovisor + +mocks: $(MOCKS_DIR) + @go install github.com/golang/mock/mockgen@v1.6.0 + sh ./scripts/mockgen.sh +.PHONY: mocks + +vulncheck: $(BUILDDIR)/ + GOBIN=$(BUILDDIR) + +go install golang.org/x/vuln/cmd/govulncheck@latest + $(BUILDDIR)/govulncheck ./... + +$(MOCKS_DIR): + mkdir -p $(MOCKS_DIR) + +distclean: clean tools-clean +clean: + rm -rf \ + $(BUILDDIR)/ \ + artifacts/ \ + tmp-swagger-gen/ + +.PHONY: distclean clean + +############################################################################### +### Tools & Dependencies ### +############################################################################### + +go.sum: go.mod + echo "Ensure dependencies have not been modified ..." 
>&2 + go mod verify + go mod tidy + +############################################################################### +### Documentation ### +############################################################################### + +update-swagger-docs: statik + $(BINDIR)/statik -src=client/docs/swagger-ui -dest=client/docs -f -m + @if [ -n "$(git status --porcelain)" ]; then \ + echo "\033[91mSwagger docs are out of sync!!!\033[0m";\ + exit 1;\ + else \ + echo "\033[92mSwagger docs are in sync\033[0m";\ + fi +.PHONY: update-swagger-docs + +godocs: + @echo "--> Wait a few seconds and visit http://localhost:6060/pkg/github.com/cosmos/cosmos-sdk/types" + godoc -http=:6060 + +# This builds the docs.cosmos.network docs using docusaurus. +# Old documentation, which have not been migrated to docusaurus are generated with vuepress. +build-docs: + @echo "building docusaurus docs" + @cd docs && npm ci && npm run build + mv docs/build ~/output + + @echo "building old docs" + @cd docs && \ + while read -r branch path_prefix; do \ + echo "building vuepress ${ + branch +} + +docs" ; \ + (git clean -fdx && git reset --hard && git checkout ${ + branch +} && npm install && VUEPRESS_BASE="/${ + path_prefix +}/" npm run build) ; \ + mkdir -p ~/output/${ + path_prefix +} ; \ + cp -r .vuepress/dist/* ~/output/${ + path_prefix +}/ ; \ + done < vuepress_versions ; + + @echo "setup domain" + @echo $(DOCS_DOMAIN) > ~/output/CNAME + +.PHONY: build-docs + +############################################################################### +### Tests & Simulation ### +############################################################################### + +test: test-unit +test-e2e: + $(MAKE) -C tests test-e2e +test-e2e-cov: + $(MAKE) -C tests test-e2e-cov +test-integration: + $(MAKE) -C tests test-integration +test-integration-cov: + $(MAKE) -C tests test-integration-cov +test-all: test-unit test-e2e test-integration test-ledger-mock test-race + +TEST_PACKAGES=./... 
+TEST_TARGETS := test-unit test-unit-amino test-unit-proto test-ledger-mock test-race test-ledger test-race + +# Test runs-specific rules. To add a new test target, just add +# a new rule, customise ARGS or TEST_PACKAGES ad libitum, and +# append the new rule to the TEST_TARGETS list. +test-unit: test_tags += cgo ledger test_ledger_mock norace +test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace +test-ledger: test_tags += cgo ledger norace +test-ledger-mock: test_tags += ledger test_ledger_mock norace +test-race: test_tags += cgo ledger test_ledger_mock +test-race: ARGS=-race +test-race: TEST_PACKAGES=$(PACKAGES_NOSIMULATION) +$(TEST_TARGETS): run-tests + +# check-* compiles and collects tests without running them +# note: go test -c doesn't support multiple packages yet (https://github.com/golang/go/issues/15513) + +CHECK_TEST_TARGETS := check-test-unit check-test-unit-amino +check-test-unit: test_tags += cgo ledger test_ledger_mock norace +check-test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace +$(CHECK_TEST_TARGETS): EXTRA_ARGS=-run=none +$(CHECK_TEST_TARGETS): run-tests + +ARGS += -tags "$(test_tags)" +SUB_MODULES = $(shell find . -type f -name 'go.mod' -print0 | xargs -0 -n1 dirname | sort) + +CURRENT_DIR = $(shell pwd) + +run-tests: + ifneq (,$(shell which tparse 2>/dev/null)) + @echo "Starting unit tests"; \ + finalec=0; \ + for module in $(SUB_MODULES); do \ + cd ${ + CURRENT_DIR +}/$module; \ + echo "Running unit tests for module $module"; \ + go test -mod=readonly -json $(ARGS) $(TEST_PACKAGES) ./... | tparse; \ + ec=$?; \ + if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \ + done; \ + exit $finalec +else + @echo "Starting unit tests"; \ + finalec=0; \ + for module in $(SUB_MODULES); do \ + cd ${ + CURRENT_DIR +}/$module; \ + echo "Running unit tests for module $module"; \ + go test -mod=readonly $(ARGS) $(TEST_PACKAGES) ./... 
; \ + ec=$?; \ + if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \ + done; \ + exit $finalec +endif + +.PHONY: run-tests test test-all $(TEST_TARGETS) + +test-sim-nondeterminism: + @echo "Running non-determinism test..." + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \ + -NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h + +test-sim-custom-genesis-fast: + @echo "Running custom genesis simulation..." + @echo "By default, ${ + HOME +}/.gaiad/config/genesis.json will be used." + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run TestFullAppSimulation -Genesis=${ + HOME +}/.gaiad/config/genesis.json \ + -Enabled=true -NumBlocks=100 -BlockSize=200 -Commit=true -Seed=99 -Period=5 -v -timeout 24h + +test-sim-import-export: runsim + @echo "Running application import/export simulation. This may take several minutes..." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppImportExport + +test-sim-after-import: runsim + @echo "Running application simulation-after-import. This may take several minutes..." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppSimulationAfterImport + +test-sim-custom-genesis-multi-seed: runsim + @echo "Running multi-seed custom genesis simulation..." + @echo "By default, ${ + HOME +}/.gaiad/config/genesis.json will be used." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Genesis=${ + HOME +}/.gaiad/config/genesis.json -SimAppPkg=. -ExitOnFail 400 5 TestFullAppSimulation + +test-sim-multi-seed-long: runsim + @echo "Running long multi-seed application simulation. This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 500 50 TestFullAppSimulation + +test-sim-multi-seed-short: runsim + @echo "Running short multi-seed application simulation. This may take awhile!" 
+ @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 10 TestFullAppSimulation + +test-sim-benchmark-invariants: + @echo "Running simulation invariant benchmarks..." + cd ${ + CURRENT_DIR +}/simapp && @go test -mod=readonly -benchmem -bench=BenchmarkInvariants -run=^$ \ + -Enabled=true -NumBlocks=1000 -BlockSize=200 \ + -Period=1 -Commit=true -Seed=57 -v -timeout 24h + +.PHONY: \ +test-sim-nondeterminism \ +test-sim-custom-genesis-fast \ +test-sim-import-export \ +test-sim-after-import \ +test-sim-custom-genesis-multi-seed \ +test-sim-multi-seed-short \ +test-sim-multi-seed-long \ +test-sim-benchmark-invariants + +SIM_NUM_BLOCKS ?= 500 +SIM_BLOCK_SIZE ?= 200 +SIM_COMMIT ?= true + +test-sim-benchmark: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @go test -mod=readonly -benchmem -run=^$ $(SIMAPP) -bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h + +test-sim-profile: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @go test -mod=readonly -benchmem -run=^$ $(SIMAPP) -bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out + +.PHONY: test-sim-profile test-sim-benchmark + +test-rosetta: + docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile . + docker-compose -f contrib/rosetta/docker-compose.yaml up --abort-on-container-exit --exit-code-from test_rosetta --build +.PHONY: test-rosetta + +benchmark: + @go test -mod=readonly -bench=. 
$(PACKAGES_NOSIMULATION) +.PHONY: benchmark + +############################################################################### +### Linting ### +############################################################################### + +golangci_lint_cmd=golangci-lint +golangci_version=v1.50.0 + +lint: + @echo "--> Running linter" + @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version) + @$(golangci_lint_cmd) + +run --timeout=10m + +lint-fix: + @echo "--> Running linter" + @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version) + @$(golangci_lint_cmd) + +run --fix --out-format=tab --issues-exit-code=0 + +.PHONY: lint lint-fix + format: + @go install mvdan.cc/gofumpt@latest + @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version) + +find . -name '*.go' -type f -not -path "./vendor*" -not -path "*.git*" -not -path "./client/docs/statik/statik.go" -not -path "./tests/mocks/*" -not -name "*.pb.go" -not -name "*.pb.gw.go" -not -name "*.pulsar.go" -not -path "./crypto/keys/secp256k1/*" | xargs gofumpt -w -l + $(golangci_lint_cmd) + +run --fix +.PHONY: format + +############################################################################### +### Devdoc ### +############################################################################### + +DEVDOC_SAVE = docker commit `docker ps -a -n 1 -q` devdoc:local + +devdoc-init: + $(DOCKER) + +run -it -v "$(CURDIR):/go/src/github.com/cosmos/cosmos-sdk" -w "/go/src/github.com/cosmos/cosmos-sdk" tendermint/devdoc echo + # TODO make this safer + $(call DEVDOC_SAVE) + +devdoc: + $(DOCKER) + +run -it -v "$(CURDIR):/go/src/github.com/cosmos/cosmos-sdk" -w "/go/src/github.com/cosmos/cosmos-sdk" devdoc:local bash + +devdoc-save: + # TODO make this safer + $(call DEVDOC_SAVE) + +devdoc-clean: + docker rmi -f $(docker images -f "dangling=true" -q) + +devdoc-update: + docker pull tendermint/devdoc + +.PHONY: devdoc devdoc-clean devdoc-init devdoc-save devdoc-update + 
+############################################################################### +### Protobuf ### +############################################################################### + +protoVer=0.11.2 +protoImageName=ghcr.io/cosmos/proto-builder:$(protoVer) + +protoImage=$(DOCKER) + +run --rm -v $(CURDIR):/workspace --workdir /workspace $(protoImageName) + +proto-all: proto-format proto-lint proto-gen + +proto-gen: + @echo "Generating Protobuf files" + @$(protoImage) + +sh ./scripts/protocgen.sh + +proto-swagger-gen: + @echo "Generating Protobuf Swagger" + @$(protoImage) + +sh ./scripts/protoc-swagger-gen.sh + +proto-format: + @$(protoImage) + +find ./ -name "*.proto" -exec clang-format -i { +} \; + +proto-lint: + @$(protoImage) + +buf lint --error-format=json + +proto-check-breaking: + @$(protoImage) + +buf breaking --against $(HTTPS_GIT)#branch=main + +TM_URL = https://raw.githubusercontent.com/tendermint/tendermint/v0.37.0-rc2/proto/tendermint + +TM_CRYPTO_TYPES = proto/tendermint/crypto +TM_ABCI_TYPES = proto/tendermint/abci +TM_TYPES = proto/tendermint/types +TM_VERSION = proto/tendermint/version +TM_LIBS = proto/tendermint/libs/bits +TM_P2P = proto/tendermint/p2p + +proto-update-deps: + @echo "Updating Protobuf dependencies" + + @mkdir -p $(TM_ABCI_TYPES) + @curl -sSL $(TM_URL)/abci/types.proto > $(TM_ABCI_TYPES)/types.proto + + @mkdir -p $(TM_VERSION) + @curl -sSL $(TM_URL)/version/types.proto > $(TM_VERSION)/types.proto + + @mkdir -p $(TM_TYPES) + @curl -sSL $(TM_URL)/types/types.proto > $(TM_TYPES)/types.proto + @curl -sSL $(TM_URL)/types/evidence.proto > $(TM_TYPES)/evidence.proto + @curl -sSL $(TM_URL)/types/params.proto > $(TM_TYPES)/params.proto + @curl -sSL $(TM_URL)/types/validator.proto > $(TM_TYPES)/validator.proto + @curl -sSL $(TM_URL)/types/block.proto > $(TM_TYPES)/block.proto + + @mkdir -p $(TM_CRYPTO_TYPES) + @curl -sSL $(TM_URL)/crypto/proof.proto > $(TM_CRYPTO_TYPES)/proof.proto + @curl -sSL $(TM_URL)/crypto/keys.proto > 
$(TM_CRYPTO_TYPES)/keys.proto + + @mkdir -p $(TM_LIBS) + @curl -sSL $(TM_URL)/libs/bits/types.proto > $(TM_LIBS)/types.proto + + @mkdir -p $(TM_P2P) + @curl -sSL $(TM_URL)/p2p/types.proto > $(TM_P2P)/types.proto + + $(DOCKER) + +run --rm -v $(CURDIR)/proto:/workspace --workdir /workspace $(protoImageName) + +buf mod update + +.PHONY: proto-all proto-gen proto-swagger-gen proto-format proto-lint proto-check-breaking proto-update-deps + +############################################################################### +### Localnet ### +############################################################################### + +localnet-build-env: + $(MAKE) -C contrib/images simd-env +localnet-build-dlv: + $(MAKE) -C contrib/images simd-dlv + +localnet-build-nodes: + $(DOCKER) + +run --rm -v $(CURDIR)/.testnets:/data cosmossdk/simd \ + testnet init-files --v 4 -o /data --starting-ip-address 192.168.10.2 --keyring-backend=test + docker-compose up -d + +localnet-stop: + docker-compose down + +# localnet-start will run a 4-node testnet locally. The nodes are +# based off the docker images in: ./contrib/images/simd-env +localnet-start: localnet-stop localnet-build-env localnet-build-nodes + +# localnet-debug will run a 4-node testnet locally in debug mode +# you can read more about the debug mode here: ./contrib/images/simd-dlv/README.md +localnet-debug: localnet-stop localnet-build-dlv localnet-build-nodes + +.PHONY: localnet-start localnet-stop localnet-debug localnet-build-env localnet-build-dlv localnet-build-nodes + +############################################################################### +### rosetta ### +############################################################################### +# builds rosetta test data dir +rosetta-data: + -docker container rm data_dir_build + docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile . 
+ docker run --name data_dir_build -t rosetta-ci:latest sh /rosetta/data.sh
+ docker cp data_dir_build:/tmp/data.tar.gz "$(CURDIR)/contrib/rosetta/rosetta-ci/data.tar.gz"
+ docker container rm data_dir_build
+.PHONY: rosetta-data
+```
+
+The script used to generate the protobuf files can be found in the `scripts/` directory.
+
+```shell
+#!/usr/bin/env bash
+
+# How to run manually:
+# docker build --pull --rm -f "contrib/devtools/Dockerfile" -t cosmossdk-proto:latest "contrib/devtools"
+# docker run --rm -v $(pwd):/workspace --workdir /workspace cosmossdk-proto sh ./scripts/protocgen.sh
+
+set -e
+
+echo "Generating gogo proto code"
+cd proto
+proto_dirs=$(find ./cosmos ./amino -path -prune -o -name '*.proto' -print0 | xargs -0 -n1 dirname | sort | uniq)
+for dir in $proto_dirs; do
+  for file in $(find "${dir}" -maxdepth 1 -name '*.proto'); do
+    # this regex checks if a proto file has its go_package set to cosmossdk.io/api/...
+    # gogo proto files SHOULD ONLY be generated if this is false
+    # we don't want gogo proto to run for proto files which are natively built for google.golang.org/protobuf
+    if grep -q "option go_package" "$file" && grep -H -o -c 'option go_package.*cosmossdk.io/api' "$file" | grep -q ':0$'; then
+      buf generate --template buf.gen.gogo.yaml $file
+    fi
+  done
+done
+
+cd ..
+
+# generate codec/testdata proto code
+(cd testutil/testdata; buf generate)
+
+# generate baseapp test messages
+(cd baseapp/testutil; buf generate)
+
+# move proto files to the right places
+cp -r github.com/cosmos/cosmos-sdk/* ./
+rm -rf github.com
+
+go mod tidy
+
+./scripts/protocgen-pulsar.sh
+```
+
+## Buf
+
+[Buf](https://buf.build) is a protobuf tool that abstracts away the need to use the complicated `protoc` toolchain and helps ensure you are using protobuf in accordance with the majority of the ecosystem. Within the cosmos-sdk repository there are a few files that have a buf prefix. Let's start with the top level and then dive into the various directories.
+
+### Workspace
+
+At the root level directory a workspace is defined using [buf workspaces](https://docs.buf.build/configuration/v1/buf-work-yaml). This helps if there are one or more directories containing protobuf files in your project.
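For a project that keeps protobuf files in more than one root, the workspace file simply lists each directory. A hypothetical two-root layout (the `third_party/proto` directory is illustrative, not taken from the Cosmos SDK) might look like:

```yaml
version: v1
directories:
  - proto
  - third_party/proto
```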
+
+Cosmos SDK example:
+
+```yaml
+version: v1
+directories:
+  - proto
+```
+
+### Proto Directory
+
+Next is the `proto/` directory where all of our protobuf files live. In here there are many different buf files defined, each serving a different purpose.
+
+```bash
+├── buf.gen.gogo.yaml
+├── buf.gen.pulsar.yaml
+├── buf.gen.swagger.yaml
+├── buf.lock
+├── buf.md
+├── buf.yaml
+├── cosmos
+└── tendermint
+```
+
+The listing above shows all the files and directories within the Cosmos SDK `proto/` directory.
+
+#### `buf.gen.gogo.yaml`
+
+`buf.gen.gogo.yaml` defines how the protobuf files should be generated for use within the module. This file uses [gogoproto](https://github.com/gogo/protobuf), a separate generator from the google go-proto generator that makes working with various objects more ergonomic, and it has more performant encode and decode steps.
+
+```yaml
+version: v1
+plugins:
+  - name: gocosmos
+    out: ..
+    opt: plugins=grpc,Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any
+  - name: grpc-gateway
+    out: ..
+    opt: logtostderr=true,allow_colon_final_segments=true
+```
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+#### `buf.gen.pulsar.yaml`
+
+`buf.gen.pulsar.yaml` defines how protobuf files should be generated using the [new golang apiv2 of protobuf](https://go.dev/blog/protobuf-apiv2). This generator is used instead of the google go-proto generator because it has some extra helpers for Cosmos SDK applications and will have more performant encode and decode than the google go-proto generator. You can follow the development of this generator [here](https://github.com/cosmos/cosmos-proto).
+
+```yaml expandable
+version: v1
+managed:
+  enabled: true
+  go_package_prefix:
+    default: cosmossdk.io/api
+    except:
+      - buf.build/googleapis/googleapis
+      - buf.build/cosmos/gogo-proto
+      - buf.build/cosmos/cosmos-proto
+    override:
+plugins:
+  - name: go-pulsar
+    out: ../api
+    opt: paths=source_relative
+  - name: go-grpc
+    out: ../api
+    opt: paths=source_relative
+```
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+#### `buf.gen.swagger.yaml`
+
+`buf.gen.swagger.yaml` generates the swagger documentation for the queries and messages of the chain. It will only define the REST API endpoints that were defined in the query and msg servers. You can find an example of this [here](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/bank/v1beta1/query.proto#L19)
+
+```yaml
+version: v1
+plugins:
+  - name: swagger
+    out: ../tmp-swagger-gen
+    opt: logtostderr=true,fqn_for_swagger_name=true,simple_operation_ids=true
+```
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+#### `buf.lock`
+
+This is an autogenerated file based on the dependencies required by the `.gen` files. There is no need to copy the current one. If you depend on cosmos-sdk proto definitions, a new entry for the Cosmos SDK will need to be provided. The dependency you will need to use is `buf.build/cosmos/cosmos-sdk`.
+
+```yaml expandable
+# Generated by buf. DO NOT EDIT.
+version: v1
+deps:
+  - remote: buf.build
+    owner: cosmos
+    repository: cosmos-proto
+    commit: 04467658e59e44bbb22fe568206e1f70
+    digest: shake256:73a640bd60e0c523b0f8237ff34eab67c45a38b64bbbde1d80224819d272dbf316ac183526bd245f994af6608b025f5130483d0133c5edd385531326b5990466
+  - remote: buf.build
+    owner: cosmos
+    repository: gogo-proto
+    commit: 88ef6483f90f478fb938c37dde52ece3
+    digest: shake256:89c45df2aa11e0cff97b0d695436713db3d993d76792e9f8dc1ae90e6ab9a9bec55503d48ceedd6b86069ab07d3041b32001b2bfe0227fa725dd515ff381e5ba
+  - remote: buf.build
+    owner: googleapis
+    repository: googleapis
+    commit: 751cbe31638d43a9bfb6162cd2352e67
+    digest: shake256:87f55470d9d124e2d1dedfe0231221f4ed7efbc55bc5268917c678e2d9b9c41573a7f9a557f6d8539044524d9fc5ca8fbb7db05eb81379d168285d76b57eb8a4
+  - remote: buf.build
+    owner: protocolbuffers
+    repository: wellknowntypes
+    commit: 3ddd61d1f53d485abd3d3a2b47a62b8e
+    digest: shake256:9e6799d56700d0470c3723a2fd027e8b4a41a07085a0c90c58e05f6c0038fac9b7a0170acd7692707a849983b1b8189aa33e7b73f91d68157f7136823115546b
+```
+
+#### `buf.yaml`
+
+`buf.yaml` defines the [name of your package](https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.yaml#L3), which [breakage checker](https://docs.buf.build/tour/detect-breaking-changes) to use, and how to [lint your protobuf files](https://docs.buf.build/tour/lint-your-api).
+
+```yaml expandable
+# This module represents buf.build/cosmos/cosmos-sdk
+version: v1
+name: buf.build/cosmos/cosmos-sdk
+deps:
+  - buf.build/cosmos/cosmos-proto
+  - buf.build/cosmos/gogo-proto
+  - buf.build/googleapis/googleapis
+  - buf.build/protocolbuffers/wellknowntypes
+breaking:
+  use:
+    - FILE
+  ignore:
+    - testpb
+lint:
+  use:
+    - STANDARD
+    - COMMENTS
+    - FILE_LOWER_SNAKE_CASE
+  except:
+    - UNARY_RPC
+    - COMMENT_FIELD
+    - SERVICE_SUFFIX
+    - PACKAGE_VERSION_SUFFIX
+    - RPC_REQUEST_STANDARD_NAME
+  ignore:
+    - tendermint
+```
+
+We use a variety of linters for the Cosmos SDK protobuf files. The repo also checks this in CI.
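Returning to the generation script shown earlier: its `go_package` filter is the subtlest step, and it can be exercised in isolation with plain shell. The two sample files below are hypothetical, created only to show which branch each would take:

```shell
#!/usr/bin/env sh
# Recreate the go_package check from scripts/protocgen.sh on two sample files.
tmp=$(mktemp -d)

# A file built natively by the pulsar (apiv2) toolchain...
printf 'syntax = "proto3";\noption go_package = "cosmossdk.io/api/cosmos/bank/v1beta1";\n' > "$tmp/pulsar.proto"
# ...and one that should still go through gogo generation.
printf 'syntax = "proto3";\noption go_package = "github.com/cosmos/cosmos-sdk/x/bank/types";\n' > "$tmp/gogo.proto"

for file in "$tmp"/*.proto; do
  # grep -c counts lines whose go_package points at cosmossdk.io/api; ':0' means none do
  if grep -q "option go_package" "$file" && grep -H -o -c 'option go_package.*cosmossdk.io/api' "$file" | grep -q ':0$'; then
    echo "gogo generation: $(basename "$file")"
  else
    echo "skipped (pulsar-generated): $(basename "$file")"
  fi
done

rm -r "$tmp"
```

Run against the real `proto/` tree, this same check decides, file by file, whether `buf generate --template buf.gen.gogo.yaml` is invoked.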
+ +A reference to the github actions can be found [here](https://github.com/cosmos/cosmos-sdk/blob/main/.github/workflows/proto.yml#L1-L32) + +```go expandable +name: Protobuf +# Protobuf runs buf (https://buf.build/) + +lint and check-breakage +# This workflow is only run when a .proto file has been changed +on: + pull_request: + paths: + - "proto/**" + +permissions: + contents: read + +jobs: + lint: + runs-on: depot-ubuntu-22.04-4 + timeout-minutes: 5 + steps: + - uses: actions/checkout@v5 + - uses: bufbuild/buf-setup-action@v1.50.0 + - uses: bufbuild/buf-lint-action@v1 + with: + input: "proto" + + break-check: + runs-on: depot-ubuntu-22.04-4 + steps: + - uses: actions/checkout@v5 + - uses: bufbuild/buf-setup-action@v1.50.0 + - uses: bufbuild/buf-breaking-action@v1 + with: + input: "proto" + against: "https://github.com/${{ + github.repository +}}.git#branch=${{ + github.event.pull_request.base.ref +}},ref=HEAD~1,subdir=proto" +``` +; then + buf generate --template buf.gen.gogo.yaml $file + fi + done +done + +cd .. + +# generate codec/testdata proto code +(cd testutil/testdata; buf generate) + +# generate baseapp test messages +(cd baseapp/testutil; buf generate) + +# move proto files to the right places +cp -r github.com/cosmos/cosmos-sdk/* ./ +rm -rf github.com + +go mod tidy + +./scripts/protocgen-pulsar.sh +``` + +## Buf + +[Buf](https://buf.build) is a protobuf tool that abstracts the needs to use the complicated `protoc` toolchain on top of various other things that ensure you are using protobuf in accordance with the majority of the ecosystem. Within the cosmos-sdk repository there are a few files that have a buf prefix. Lets start with the top level and then dive into the various directories. + +### Workspace + +At the root level directory a workspace is defined using [buf workspaces](https://docs.buf.build/configuration/v1/buf-work-yaml). This helps if there are one or more protobuf containing directories in your project. 
+
+Cosmos SDK example:
+
+```yaml
+version: v1
+directories:
+  - proto
+```
+
+### Proto Directory
+
+Next is the `proto/` directory where all of our protobuf files live. It contains many different buf files, each serving a different purpose.
+
+```bash
+├── 05-depinject.md
+├── buf.gen.gogo.yaml
+├── buf.gen.pulsar.yaml
+├── buf.gen.swagger.yaml
+├── buf.lock
+├── buf.md
+├── buf.yaml
+├── cosmos
+└── tendermint
+```
+
+The diagram above shows all the files and directories within the Cosmos SDK `proto/` directory.
+
+#### `buf.gen.gogo.yaml`
+
+`buf.gen.gogo.yaml` defines how the protobuf files should be generated for use within the module. This file uses [gogoproto](https://github.com/gogo/protobuf), a separate generator from the google go-proto generator that makes working with various objects more ergonomic, and it has more performant encode and decode steps.
+
+```yaml
+version: v1
+plugins:
+  - name: gocosmos
+    out: ..
+    opt: plugins=grpc,Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any
+  - name: grpc-gateway
+    out: ..
+    opt: logtostderr=true,allow_colon_final_segments=true
+```
+
+
+An example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+
+#### `buf.gen.pulsar.yaml`
+
+`buf.gen.pulsar.yaml` defines how protobuf files should be generated using the [new golang apiv2 of protobuf](https://go.dev/blog/protobuf-apiv2). This generator is used instead of the google go-proto generator because it has some extra helpers for Cosmos SDK applications and will have more performant encode and decode than the google go-proto generator. You can follow the development of this generator [here](https://github.com/cosmos/cosmos-proto).
+
+```yaml expandable
+version: v1
+managed:
+  enabled: true
+  go_package_prefix:
+    default: cosmossdk.io/api
+    except:
+      - buf.build/googleapis/googleapis
+      - buf.build/cosmos/gogo-proto
+      - buf.build/cosmos/cosmos-proto
+    override:
+plugins:
+  - name: go-pulsar
+    out: ../api
+    opt: paths=source_relative
+  - name: go-grpc
+    out: ../api
+    opt: paths=source_relative
+```
+
+
+An example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+
+#### `buf.gen.swagger.yaml`
+
+`buf.gen.swagger.yaml` generates the swagger documentation for the queries and messages of the chain. It will only define the REST API endpoints that were defined in the query and msg servers. You can find examples of this [here](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/bank/v1beta1/query.proto#L19)
+
+```yaml
+version: v1
+plugins:
+  - name: swagger
+    out: ../tmp-swagger-gen
+    opt: logtostderr=true,fqn_for_swagger_name=true,simple_operation_ids=true
+```
+
+
+An example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+
+#### `buf.lock`
+
+This is an autogenerated file based on the dependencies required by the `.gen` files. There is no need to copy the current one. If you depend on cosmos-sdk proto definitions, a new entry for the Cosmos SDK will need to be provided. The dependency you will need to use is `buf.build/cosmos/cosmos-sdk`.
+
+```yaml expandable
+# Generated by buf. DO NOT EDIT.
+version: v1 +deps: + - remote: buf.build + owner: cosmos + repository: cosmos-proto + commit: 04467658e59e44bbb22fe568206e1f70 + digest: shake256:73a640bd60e0c523b0f8237ff34eab67c45a38b64bbbde1d80224819d272dbf316ac183526bd245f994af6608b025f5130483d0133c5edd385531326b5990466 + - remote: buf.build + owner: cosmos + repository: gogo-proto + commit: 88ef6483f90f478fb938c37dde52ece3 + digest: shake256:89c45df2aa11e0cff97b0d695436713db3d993d76792e9f8dc1ae90e6ab9a9bec55503d48ceedd6b86069ab07d3041b32001b2bfe0227fa725dd515ff381e5ba + - remote: buf.build + owner: googleapis + repository: googleapis + commit: 751cbe31638d43a9bfb6162cd2352e67 + digest: shake256:87f55470d9d124e2d1dedfe0231221f4ed7efbc55bc5268917c678e2d9b9c41573a7f9a557f6d8539044524d9fc5ca8fbb7db05eb81379d168285d76b57eb8a4 + - remote: buf.build + owner: protocolbuffers + repository: wellknowntypes + commit: 3ddd61d1f53d485abd3d3a2b47a62b8e + digest: shake256:9e6799d56700d0470c3723a2fd027e8b4a41a07085a0c90c58e05f6c0038fac9b7a0170acd7692707a849983b1b8189aa33e7b73f91d68157f7136823115546b +``` + +#### `buf.yaml` + +`buf.yaml` defines the [name of your package](https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.yaml#L3), which [breakage checker](https://docs.buf.build/tour/detect-breaking-changes) to use and how to [lint your protobuf files](https://docs.buf.build/tour/lint-your-api). + +```go expandable +# This module represents buf.build/cosmos/cosmos-sdk +version: v1 +name: buf.build/cosmos/cosmos-sdk +deps: + - buf.build/cosmos/cosmos-proto + - buf.build/cosmos/gogo-proto + - buf.build/googleapis/googleapis + - buf.build/protocolbuffers/wellknowntypes +breaking: + use: + - FILE + ignore: + - testpb +lint: + use: + - STANDARD + - COMMENTS + - FILE_LOWER_SNAKE_CASE + except: + - UNARY_RPC + - COMMENT_FIELD + - SERVICE_SUFFIX + - PACKAGE_VERSION_SUFFIX + - RPC_REQUEST_STANDARD_NAME + ignore: + - tendermint +``` + +We use a variety of linters for the Cosmos SDK protobuf files. The repo also checks this in ci. 
+ +A reference to the github actions can be found [here](https://github.com/cosmos/cosmos-sdk/blob/main/.github/workflows/proto.yml#L1-L32) + +```go expandable +name: Protobuf +# Protobuf runs buf (https://buf.build/) + +lint and check-breakage +# This workflow is only run when a .proto file has been changed +on: + pull_request: + paths: + - "proto/**" + +permissions: + contents: read + +jobs: + lint: + runs-on: depot-ubuntu-22.04-4 + timeout-minutes: 5 + steps: + - uses: actions/checkout@v5 + - uses: bufbuild/buf-setup-action@v1.50.0 + - uses: bufbuild/buf-lint-action@v1 + with: + input: "proto" + + break-check: + runs-on: depot-ubuntu-22.04-4 + steps: + - uses: actions/checkout@v5 + - uses: bufbuild/buf-setup-action@v1.50.0 + - uses: bufbuild/buf-breaking-action@v1 + with: + input: "proto" + against: "https://github.com/${{ + github.repository +}}.git#branch=${{ + github.event.pull_request.base.ref +}},ref=HEAD~1,subdir=proto" +``` diff --git a/docs/sdk/v0.47/documentation/operations/upgrading.mdx b/docs/sdk/v0.47/documentation/operations/upgrading.mdx new file mode 100644 index 00000000..9cbfb2fb --- /dev/null +++ b/docs/sdk/v0.47/documentation/operations/upgrading.mdx @@ -0,0 +1,625 @@ +--- +title: Upgrading Cosmos SDK +description: >- + This guide provides instructions for upgrading to specific versions of Cosmos + SDK. Note, always read the SimApp section for more information on application + wiring updates. +--- + +This guide provides instructions for upgrading to specific versions of Cosmos SDK. +Note, always read the **SimApp** section for more information on application wiring updates. + +## \[Unreleased] + +### Migration to CometBFT (Part 2) + +The Cosmos SDK has migrated in its previous versions, to CometBFT. +Some functions have been renamed to reflect the naming change. 
+
+The following is an exhaustive list:
+
+* `client.TendermintRPC` -> `client.CometRPC`
+* `clitestutil.MockTendermintRPC` -> `clitestutil.MockCometRPC`
+* `clitestutilgenutil.CreateDefaultTendermintConfig` -> `clitestutilgenutil.CreateDefaultCometConfig`
+* Package `client/grpc/tmservice` -> `client/grpc/cmtservice`
+
+Additionally, the commands and flags mentioning `tendermint` have been renamed to `comet`.
+However, these commands and flags are still supported for backward compatibility.
+
+For backward compatibility, the `**/tendermint/**` gRPC services are still supported.
+
+Additionally, the SDK is starting its abstraction from CometBFT Go types throughout the codebase:
+
+* Usage of the CometBFT logger has been replaced by the Cosmos SDK logger interface (`cosmossdk.io/log.Logger`).
+* Usage of `github.com/cometbft/cometbft/libs/bytes.HexByte` has been replaced by `[]byte`.
+
+### Configuration
+
+A new tool has been created for migrating the configuration of the SDK. Use the following command to migrate your configuration:
+
+```bash
+simd config migrate v0.48
+```
+
+More information about [confix](/docs/sdk/v0.47/documentation/operations/tooling/confix).
+
+#### Events
+
+The log section of `abci.TxResult` is not populated in the case of successful msg(s) execution. Instead, a new attribute is added to all messages indicating the `msg_index`, which identifies which events and attributes relate to the same transaction.
+
+#### gRPC-Web
+
+gRPC-Web is now listening on the same address as the gRPC Gateway API server (default: `localhost:1317`).
+The possibility to listen on a different address has been removed, as well as its settings.
+Use `confix` to clean up your `app.toml`. An nginx (or similar) reverse proxy can be set up to keep the previous behavior.
+
+#### Database Support
+
+ClevelDB, BoltDB and BadgerDB are no longer supported. To migrate from an unsupported database to a supported one, please use the database migration tool.
+ +### Protobuf + +The SDK is in the process of removing all `gogoproto` annotations. + +#### Stringer + +The `gogoproto.goproto_stringer = false` annotation has been removed from most proto files. This means that the `String()` method is being generated for types that previously had this annotation. The generated `String()` method uses `proto.CompactTextString` for *stringifying* structs. +[Verify](https://github.com/cosmos/cosmos-sdk/pull/13850#issuecomment-1328889651) the usage of the modified `String()` methods and double-check that they are not used in state-machine code. + +### SimApp + +{/* TODO(@julienrbrt) collapse this section in 3 parts, general, app v1 and app v2 changes, now it is a bit confusing */} + +#### Module Assertions + +Previously, all modules were required to be set in `OrderBeginBlockers`, `OrderEndBlockers` and `OrderInitGenesis / OrderExportGenesis` in `app.go` / `app_config.go`. +This is no longer the case, the assertion has been loosened to only require modules implementing, respectively, the `module.BeginBlockAppModule`, `module.EndBlockAppModule` and `module.HasGenesis` interfaces. + +#### Modules Keepers + +The following modules `NewKeeper` function now take a `KVStoreService` instead of a `StoreKey`: + +* `x/auth` +* `x/authz` +* `x/bank` +* `x/consensus` +* `x/distribution` +* `x/feegrant` +* `x/nft` + +User manually wiring their chain need to use the `runtime.NewKVStoreService` method to create a `KVStoreService` from a `StoreKey`: + +```diff +app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, +- keys[consensusparamtypes.StoreKey] ++ runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +The following modules' `Keeper` methods now take in a `context.Context` instead of `sdk.Context`. 
Any module that has interfaces for them (like "expected keepers") will need to update and re-generate mocks if needed:
+
+* `x/authz`
+* `x/bank`
+* `x/distribution`
+
+**Users using depinject do not need any changes; this is automatically done for them.**
+
+#### Logger
+
+The following modules' `NewKeeper` function now takes a `log.Logger`:
+
+* `x/bank`
+
+`depinject` users must now supply the logger through the main `depinject.Supply` function instead of passing it to `appBuilder.Build`.
+
+```diff
+appConfig = depinject.Configs(
+  AppConfig,
+  depinject.Supply(
+    // supply the application options
+    appOpts,
++    logger,
+  ...
+```
+
+```diff
+- app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...)
++ app.App = appBuilder.Build(db, traceStore, baseAppOptions...)
+```
+
+Users manually wiring their chain need to add the logger argument when creating the keeper.
+
+#### Module Basics
+
+Previously, `ModuleBasics` was a global variable that was used to register all modules' `AppModuleBasic` implementations.
+The global variable has been removed and the basic module manager can now be created from the module manager.
+
+This is automatically done for depinject users; however, to supply different app module implementations, pass them via `depinject.Supply` in the main `AppConfig` (`app_config.go`):
+
+```go expandable
+depinject.Supply(
+	// supply custom module basics
+	map[string]module.AppModuleBasic{
+		genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+		govtypes.ModuleName: gov.NewAppModuleBasic(
+			[]govclient.ProposalHandler{
+				paramsclient.ProposalHandler,
+			},
+		),
+	},
+)
+```
+
+Users manually wiring their chain need to use the new `module.NewBasicManagerFromManager` function, after the module manager creation, and pass a `map[string]module.AppModuleBasic` as an argument to optionally override some modules' `AppModuleBasic`.
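+
+For users wiring manually, a minimal sketch of what this can look like in `app.go` (field names such as `app.BasicModuleManager`, the overridden modules, and the registration calls follow the SimApp layout and are illustrative, not required):
+
+```go
+// Build the basic manager from the module manager, overriding selected
+// modules, then register codecs on it (as SimApp does).
+app.BasicModuleManager = module.NewBasicManagerFromManager(
+	app.ModuleManager,
+	map[string]module.AppModuleBasic{
+		genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+		govtypes.ModuleName: gov.NewAppModuleBasic(
+			[]govclient.ProposalHandler{paramsclient.ProposalHandler},
+		),
+	},
+)
+app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino)
+app.BasicModuleManager.RegisterInterfaces(interfaceRegistry)
+```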
+
+### Packages
+
+#### Store
+
+References to `types/store.go`, which contained aliases for store types, have been remapped to point to the appropriate store/types, hence the `types/store.go` file is no longer needed and has been removed.
+
+##### Extract Store to a standalone module
+
+The `store` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the store imports are now renamed to use `cosmossdk.io/store` instead of `github.com/cosmos/cosmos-sdk/store` across the SDK.
+
+#### Client
+
+The return type of the interface method `TxConfig.SignModeHandler()` has been changed from `x/auth/signing.SignModeHandler` to `x/tx/signing.HandlerMap`. This change is transparent to most users, as the `TxConfig` interface is typically implemented by the private `x/auth/tx.config` struct (as returned by `auth.NewTxConfig`), which has been updated to return the new type. If users have implemented their own `TxConfig` interface, they will need to update their implementation to return the new type.
+
+### Modules
+
+#### `**all**`
+
+[RFC 001](docs/sdk/next/documentation/legacy/rfc-overview) has defined a simplification of the message validation process for modules.
+The `sdk.Msg` interface has been updated to not require the implementation of the `ValidateBasic` method.
+It is now recommended to validate messages directly in the message server. When the validation is performed in the message server, the `ValidateBasic` method on a message is no longer required and can be removed.
+
+#### `x/auth`
+
+For ante handler construction via `ante.NewAnteHandler`, the field `ante.HandlerOptions.SignModeHandler` has been updated to `x/tx/signing/HandlerMap` from `x/auth/signing/SignModeHandler`. Callers typically fetch this value from `client.TxConfig.SignModeHandler()` (which has also changed), so this change should be transparent to most users.
+
+#### `x/capability`
+
+Capability was moved to [IBC-GO](https://github.com/cosmos/ibc-go).
IBC v8 will contain the necessary changes to incorporate the new module location.
+
+#### `x/gov`
+
+##### Expedited Proposals
+
+The `gov` v1 module has been updated to support the ability to expedite governance proposals. When a proposal is expedited, the voting period is shortened to the `ExpeditedVotingPeriod` parameter. An expedited proposal must have a higher voting threshold than a classic proposal; that threshold is defined by the `ExpeditedThreshold` parameter.
+
+##### Cancelling Proposals
+
+The `gov` module has been updated to support the ability to cancel governance proposals. When a proposal is canceled, all the deposits of the proposal are either burnt or sent to the `ProposalCancelDest` address. The deposit burn rate is determined by a new parameter called `ProposalCancelRatio`.
+
+```text
+ 1. deposits * proposal_cancel_ratio will be burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty then deposits will be burned.
+ 2. deposits * (1 - proposal_cancel_ratio) will be sent to depositors.
+```
+
+By default, the new `ProposalCancelRatio` parameter is set to 0.5 during migration and `ProposalCancelDest` is set to an empty string (i.e. burnt).
+
+#### `x/evidence`
+
+##### Extract evidence to a standalone module
+
+The `x/evidence` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the evidence imports are now renamed to use `cosmossdk.io/x/evidence` instead of `github.com/cosmos/cosmos-sdk/x/evidence` across the SDK.
+
+#### `x/nft`
+
+##### Extract nft to a standalone module
+
+The `x/nft` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+
+#### `x/feegrant`
+
+##### Extract feegrant to a standalone module
+
+The `x/feegrant` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the feegrant imports are now renamed to use `cosmossdk.io/x/feegrant` instead of `github.com/cosmos/cosmos-sdk/x/feegrant` across the SDK.
+
+#### `x/upgrade`
+
+##### Extract upgrade to a standalone module
+
+The `x/upgrade` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the upgrade imports are now renamed to use `cosmossdk.io/x/upgrade` instead of `github.com/cosmos/cosmos-sdk/x/upgrade` across the SDK.
+
+## [v0.47.x](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.47.0)
+
+### Migration to CometBFT (Part 1)
+
+The Cosmos SDK has migrated to CometBFT as its default consensus engine.
+CometBFT is an implementation of the Tendermint consensus algorithm and the successor of Tendermint Core.
+Due to the import changes, this is a breaking change. Chains need to **entirely** remove their imports of Tendermint Core from their codebase, including direct and indirect imports in their `go.mod`.
+
+* Replace `github.com/tendermint/tendermint` by `github.com/cometbft/cometbft`
+* Replace `github.com/tendermint/tm-db` by `github.com/cometbft/cometbft-db`
+* Verify `github.com/tendermint/tendermint` is not an indirect or direct dependency
+* Run `make proto-gen`
+
+Other than that, the migration should be seamless.
+On the SDK side, the clean-up of variables and functions to reflect the new name will only happen from v0.48 (part 2).
+
+Note: It is possible that these steps must first be performed by your dependencies before you can perform them on your own codebase.
+
+### Simulation
+
+`RandomizedParams` has been removed from the `AppModuleSimulation` interface. Previously, it was used to generate random parameter changes during simulations; however, it did so through `ParamChangeProposal`, which is now legacy. Since all modules were migrated, we can now safely remove this from the `AppModuleSimulation` interface.
+
+Moreover, to support the `MsgUpdateParams` governance proposals for each module, `AppModuleSimulation` now defines an `AppModule.ProposalMsgs` method in addition to `AppModule.ProposalContents`. That method defines the messages that can be used to submit a proposal and that should be tested in simulation.
+
+When a module has no proposal messages or proposal content to be tested by simulation, the `AppModule.ProposalMsgs` and `AppModule.ProposalContents` methods can be deleted.
+
+### gRPC
+
+A new gRPC service, `proto/cosmos/base/node/v1beta1/query.proto`, has been introduced
+which exposes various operator configurations. App developers should be sure to
+register the service with the gRPC-gateway service via
+`nodeservice.RegisterGRPCGatewayRoutes` in their application construction, which
+is typically found in `RegisterAPIRoutes`.
+
+### AppModule Interface
+
+Support for the `AppModule` `Querier`, `Route` and `LegacyQuerier` methods has been entirely removed from the `AppModule`
+interface. This removes and fully deprecates all legacy queriers. All modules no longer support the REST API previously
+known as the LCD, and the `sdk.Msg#Route` method won't be used anymore.
+
+Most other existing `AppModule` methods have been moved to extension interfaces in preparation for the migration
+to the `cosmossdk.io/core/appmodule` API in the next release. Most `AppModule` implementations should not be broken
+by this change.
+
+### SimApp
+
+The `simapp` package **should not be imported in your own app**. Instead, you should import the `runtime.AppI` interface, which defines an `App`, and use the [`simtestutil` package](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/testutil/sims) for application testing.
+
+#### App Wiring
+
+SimApp's `app_v2.go` uses [App Wiring](/docs/sdk/v0.47/documentation/application-framework/app-go-v2), the dependency injection framework of the Cosmos SDK.
+This means that modules are injected directly into SimApp thanks to a [configuration file](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app_config.go).
+The previous behavior, without the dependency injection framework, is still present in [`app.go`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app.go) and is not going anywhere.
+
+If you are using an `app.go` without dependency injection, add the following lines to your `app.go` in order to provide newer gRPC services:
+
+```go
+autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules))
+
+reflectionSvc, err := runtimeservices.NewReflectionService()
+if err != nil {
+	panic(err)
+}
+
+reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc)
+```
+
+#### Constructor
+
+The constructor, `NewSimApp`, has been simplified:
+
+* `NewSimApp` does not take encoding parameters (`encodingConfig`) as input; instead, the encoding parameters are injected (when using app wiring) or created directly in the constructor. `SimApp` can be instantiated to retrieve the encoding configuration.
+* `NewSimApp` now uses `AppOptions` for getting the home path (`homePath`) and the invariant checks period (`invCheckPeriod`). These were unnecessary as arguments, since they were already present in the `AppOptions`.
+
+#### Encoding
+
+`simapp.MakeTestEncodingConfig()` was deprecated and has been removed. Instead, you can use the `TestEncodingConfig` from the `types/module/testutil` package.
+This means you can replace your usage of `simapp.MakeTestEncodingConfig` in tests with `moduletestutil.MakeTestEncodingConfig`, which takes a series of relevant `AppModuleBasic` as input (the module being tested and any potential dependencies).
+
+#### Export
+
+`ExportAppStateAndValidators` takes an extra argument, `modulesToExport`, which is a list of module names to export.
+That argument should be passed to the module manager's `ExportGenesisFromModules` method.
+
+#### Replaces
+
+The `GoLevelDB` version must be pinned to `v1.0.1-0.20210819022825-2ae1ddf74ef7` in the application; later versions might cause unexpected behavior.
+This can be done by adding `replace github.com/syndtr/goleveldb => github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7` to the `go.mod` file.
+
+* [issue #14949 on cosmos-sdk](https://github.com/cosmos/cosmos-sdk/issues/14949)
+* [issue #25413 on go-ethereum](https://github.com/ethereum/go-ethereum/pull/25413)
+
+### Protobuf
+
+The SDK has migrated from `gogo/protobuf` (which is currently unmaintained) to our own maintained fork, [`cosmos/gogoproto`](https://github.com/cosmos/gogoproto).
+
+This means you should replace all imports of `github.com/gogo/protobuf` with `github.com/cosmos/gogoproto`.
+This allows you to remove the replace directive `replace github.com/gogo/protobuf => github.com/regen-network/protobuf v1.3.3-alpha.regen.1` from your `go.mod` file.
+
+Please use the `ghcr.io/cosmos/proto-builder` image (version `>=` `0.11.5`) for generating protobuf files.
+
+See which buf commit for `cosmos/cosmos-sdk` to pin in your `buf.yaml` file [here](docs/sdk/v0.47/documentation/operations/tooling/README).
+
+#### Gogoproto Import Paths
+
+The SDK made a [patch fix](https://github.com/cosmos/gogoproto/pull/32) on its gogoproto repository to require that each proto file's package name matches its OS import path (relative to a protobuf root import path, usually the root `proto/` folder, set by the `protoc -I` flag).
+
+For example, assuming you put all your proto files in subfolders inside your root `proto/` folder, a proto file with package name `myapp.mymodule.v1` should be found in the `proto/myapp/mymodule/v1/` folder. If it is in another folder, the proto generation command will throw an error.
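+
+As a sketch, the layout for the example package above would look like this (the `tx.proto` filename is illustrative):
+
+```bash
+proto/
+└── myapp/
+    └── mymodule/
+        └── v1/
+            └── tx.proto   # package myapp.mymodule.v1
+```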
+
+If you are using a custom folder structure for your proto files, please reorganize them so that their OS path matches their proto package name.
+
+This allows the proto FileDescriptorSets to be correctly registered, and these standardized OS import paths allow [Hubl](/docs/sdk/v0.47/documentation/operations/tooling/hubl) to reflectively talk to any chain.
+
+#### `{accepts,implements}_interface` proto annotations
+
+The SDK is normalizing the strings inside the Protobuf `accepts_interface` and `implements_interface` annotations. We require them to be fully-scoped names. They will soon be used by code generators like Pulsar and Telescope to match which messages can or cannot be packed inside `Any`s.
+
+Here are the replacements that you need to perform on your proto files:
+
+```diff expandable
+- "Content"
++ "cosmos.gov.v1beta1.Content"
+- "Authorization"
++ "cosmos.authz.v1beta1.Authorization"
+- "sdk.Msg"
++ "cosmos.base.v1beta1.Msg"
+- "AccountI"
++ "cosmos.auth.v1beta1.AccountI"
+- "ModuleAccountI"
++ "cosmos.auth.v1beta1.ModuleAccountI"
+- "FeeAllowanceI"
++ "cosmos.feegrant.v1beta1.FeeAllowanceI"
+```
+
+Please also check in your own app's proto files that there are no single-word names for those two proto annotations. If there are, replace them with fully-qualified names, even though those names don't actually resolve to an actual protobuf entity.
+
+For more information, see the [encoding guide](/docs/sdk/v0.47/learn/advanced/encoding).
+
+### Transactions
+
+#### Broadcast Mode
+
+Broadcast mode `block` was deprecated and has been removed. Please use `sync` mode
+instead. When upgrading your tests from `block` to `sync` and checking for a
+transaction code, you need to query the transaction first (with its hash) to get
+the correct code.
+
+### Modules
+
+#### `**all**`
+
+`EventTypeMessage` events, with `sdk.AttributeKeyModule` and `sdk.AttributeKeySender`, are now emitted directly at message execution (in `baseapp`).
+This means that the following boilerplate should be removed from all your custom modules:
+
+```go
+ctx.EventManager().EmitEvent(
+	sdk.NewEvent(
+		sdk.EventTypeMessage,
+		sdk.NewAttribute(sdk.AttributeKeyModule, types.AttributeValueCategory),
+		sdk.NewAttribute(sdk.AttributeKeySender, `signer/sender`),
+	),
+)
+```
+
+The module name is assumed by `baseapp` to be the second element of the message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`.
+In case a module does not follow the standard message path (e.g. IBC), it is advised to keep emitting the module name event.
+`Baseapp` only emits that event if the module has not already done so.
+
+#### `x/params`
+
+The `params` module has been deprecated since v0.46. The Cosmos SDK has migrated away from `x/params` for its own modules.
+Cosmos SDK modules now store their parameters directly in their respective modules.
+The `params` module will be removed in `v0.48`, as mentioned [in the v0.46 release](https://github.com/cosmos/cosmos-sdk/blob/v0.46.1/UPGRADING#xparams). It is strongly encouraged to migrate away from `x/params` before `v0.48`.
+
+When performing a chain migration, the params table must be initialized manually. This was done in the module keepers in previous versions.
+Have a look at `simapp.RegisterUpgradeHandlers()` for an example.
+
+#### `x/gov`
+
+##### Minimum Proposal Deposit At Time of Submission
+
+The `gov` module has been updated to support a minimum proposal deposit at submission time. It is determined by a new
+parameter called `MinInitialDepositRatio`. When multiplied by the existing `MinDeposit` parameter, it produces
+the necessary proportion of coins needed at the proposal submission time. The motivation for this change is to prevent proposal spamming.
+
+By default, the new `MinInitialDepositRatio` parameter is set to zero during migration. A value of zero signifies that this
+feature is disabled.
If chains wish to utilize the minimum proposal deposits at time of submission, the migration logic needs to be
+modified to set the new parameter to the desired value.
+
+##### New Proposal.Proposer field
+
+The `Proposal` proto has been updated with a proposer field. For proposal state migration, developers can call `v4.AddProposerAddressToProposal` in their upgrade handler to update all existing proposals and make them compatible; **this migration is optional**.
+
+```go expandable
+import (
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/types/module"
+	v4 "github.com/cosmos/cosmos-sdk/x/gov/migrations/v4"
+	upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"
+)
+
+func (app SimApp) RegisterUpgradeHandlers() {
+	app.UpgradeKeeper.SetUpgradeHandler(UpgradeName,
+		func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+			// this migration is optional
+			// add proposal ids with proposers which are active (deposit or voting period)
+			proposals := make(map[uint64]string)
+			proposals[1] = "cosmos1luyncewxk4lm24k6gqy8y5dxkj0klr4tu0lmnj" ...
+			v4.AddProposerAddressToProposal(ctx, sdk.NewKVStoreKey(v4.ModuleName), app.appCodec, proposals)
+
+			return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM)
+		})
+}
+```
+
+#### `x/consensus`
+
+A new `x/consensus` module has been introduced to handle the management of Tendermint consensus
+parameters. For migration, it is required to call a specific migration to move
+existing parameters from the deprecated `x/params` module to the `x/consensus` module. App
+developers should ensure they call `baseapp.MigrateParams` in their upgrade handler.
+
+Example:
+
+```go expandable
+func (app SimApp) RegisterUpgradeHandlers() {
+	----> baseAppLegacySS := app.ParamsKeeper.Subspace(baseapp.Paramspace).WithKeyTable(paramstypes.ConsensusParamsKeyTable()) <----
+
+	app.UpgradeKeeper.SetUpgradeHandler(
+		UpgradeName,
+		func(ctx sdk.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+			// Migrate Tendermint consensus parameters from x/params module to a
+			// dedicated x/consensus module.
+			----> baseapp.MigrateParams(ctx, baseAppLegacySS, &app.ConsensusParamsKeeper) <----
+
+			// ...
+
+			return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM)
+		},
+	)
+
+	// ...
+}
+```
+
+The old params module must still be imported in your `app.go` in order to handle this migration.
+
+##### `app.go` changes
+
+When using an `app.go` without App Wiring, the following changes are required:
+
+```diff
+- bApp.SetParamStore(app.ParamsKeeper.Subspace(baseapp.Paramspace).WithKeyTable(paramstypes.ConsensusParamsKeyTable()))
++ app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[consensusparamstypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String())
++ bApp.SetParamStore(&app.ConsensusParamsKeeper)
+```
+
+When using App Wiring, the parameter store is automatically set for you.
+
+#### `x/nft`
+
+The SDK no longer validates the `classID` and `nftID` of an NFT, for extra flexibility in your NFT implementation.
+This means chain developers now need to validate the `classID` and `nftID` of an NFT themselves.
+
+### Ledger
+
+Ledger support has been generalized to enable use of different apps and keytypes that use `secp256k1`. The Ledger interface remains the same, but it can now be provided through the Keyring `Options`, allowing higher-level chains to connect to different Ledger apps or use custom implementations.
In addition, higher-level chains can provide custom key implementations around the Ledger public key, to enable greater flexibility with address generation and signing.
+
+This is not a breaking change, as all values will default to the standard Cosmos app implementation unless specified otherwise.
+
+## [v0.46.x](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.46.0)
+
+### Go API Changes
+
+The `replace google.golang.org/grpc` directive can be removed from the `go.mod` file; it is no longer required to pin the version.
+
+A few packages that were deprecated in the previous version are now removed.
+
+For instance, the REST API, deprecated in v0.45, is now removed. If you have not migrated yet, please follow the [instructions](https://docs.cosmos.network/v0.45/migrations/rest.html).
+
+To improve the clarity of the API, some renamings and improvements have been made:
+
+| Package   | Previous                           | Current                              |
+| --------- | ---------------------------------- | ------------------------------------ |
+| `simapp`  | `encodingConfig.Marshaler`         | `encodingConfig.Codec`               |
+| `simapp`  | `FundAccount`, `FundModuleAccount` | Functions moved to `x/bank/testutil` |
+| `types`   | `AccAddressFromHex`                | `AccAddressFromHexUnsafe`            |
+| `x/auth`  | `MempoolFeeDecorator`              | Use `DeductFeeDecorator` instead     |
+| `x/bank`  | `AddressFromBalancesStore`         | `AddressAndDenomFromBalancesStore`   |
+| `x/gov`   | `keeper.DeleteDeposits`            | `keeper.DeleteAndBurnDeposits`       |
+| `x/gov`   | `keeper.RefundDeposits`            | `keeper.RefundAndDeleteDeposits`     |
+| `x/{mod}` | package `legacy`                   | package `migrations`                 |
+
+For the exhaustive list of API renamings, please refer to the [CHANGELOG](https://github.com/cosmos/cosmos-sdk/blob/main/CHANGELOG).
+
+#### New packages
+
+Additionally, new packages have been introduced in order to further split the codebase.
Aliases are available for a new API breaking migration, but it is encouraged to migrate to these new packages:
+
+* `errors` should replace `types/errors` when registering errors or wrapping SDK errors.
+* `math` contains the `Int` and `Uint` types that are used in the SDK.
+* `x/nft`, an NFT base module.
+* `x/group`, a group module allowing the creation of DAOs, multisigs and policies. It composes greatly with `x/authz`.
+
+#### `x/authz`
+
+* `authz.NewMsgGrant` `expiration` is now a pointer. When `nil` is used, no expiration will be set (the grant won't expire).
+* `authz.NewGrant` takes a new argument: block time, to correctly validate the expiration time.
+
+### Keyring
+
+The keyring has been refactored in v0.46.
+
+* The `Unsafe*` interfaces have been removed from the keyring package. Please use interface casting if you wish to access those unsafe functions.
+* The keys' implementation has been refactored to be serialized as proto.
+* `keyring.NewInMemory` and `keyring.New` now take a `codec.Codec`.
+* Take `keyring.Record` instead of `Info` as the first argument in:
+  * `MkConsKeyOutput`
+  * `MkValKeyOutput`
+  * `MkAccKeyOutput`
+* Rename:
+  * `SavePubKey` to `SaveOfflineKey` and remove the `algo` argument.
+  * `NewMultiInfo`, `NewLedgerInfo` to `NewLegacyMultiInfo`, `newLegacyLedgerInfo` respectively.
+  * `NewOfflineInfo` to `newLegacyOfflineInfo` and move it to `migration_test.go`.
+
+### PostHandler
+
+A `postHandler` is like an `antehandler`, but is run *after* the `runMsgs` execution. It runs in the same store branch as `runMsgs`, meaning that the state transitions of both `runMsgs` and the `postHandler` are committed or reverted together. This allows custom logic to be run after the execution of the messages.
+
+### IAVL
+
+IAVL v0.19.0 introduces a new "fast" index. This index represents the latest state of the
+IAVL laid out in a format that preserves data locality by key. As a result, it allows for faster queries and iterations,
+since data can now be read in lexicographical order, which is common for Cosmos SDK chains.
+
+The first time the chain is started after the upgrade, the aforementioned index is created. The creation process
+might take time and depends on the size of the latest state of the chain. For example, Osmosis takes around 15 minutes to rebuild the index.
+
+While the index is being created, node operators can observe the following in the logs:
+"Upgrading IAVL storage for faster queries + execution on the live state. This may take a while". The store
+key is appended to the message. The message is printed for every module that has a non-transient store.
+As a result, it gives a good indication of the progress of the upgrade.
+
+There is also downgrade and re-upgrade protection. If a node operator chooses to downgrade to IAVL pre-fast index, and then upgrade again, the index is rebuilt from scratch. This implementation detail should not be relevant in most cases. It was added as a safeguard against operator
+mistakes.
+
+### Modules
+
+#### `x/params`
+
+* The `x/params` module has been deprecated in favour of each module housing its own parameters and providing a way to modify them. Each module that has parameters changeable during runtime has an authority; the authority can be a module or user account. The Cosmos SDK team recommends migrating modules away from using the param module. An example of what this can look like can be found [here](https://github.com/cosmos/cosmos-sdk/pull/12363).
+* The Param module will be maintained until April 18, 2023. At this point the module will reach end of life and be removed from the Cosmos SDK.
+
+#### `x/gov`
+
+The `gov` module has been greatly improved. The previous API has been moved to `v1beta1` while the new implementation is called `v1`.
+
+In order to submit a proposal with `submit-proposal` you now need to pass a `proposal.json` file.
+You can still use the old way by using `submit-legacy-proposal`, but this is not recommended.
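As a rough illustration of what such a file contains, here is a minimal `proposal.json` sketch for the `v1` `submit-proposal` command. The message, addresses, and metadata below are hypothetical placeholders (in practice, the signer of an embedded message such as `MsgSend` would typically be the gov module account); consult the gov client documentation for the authoritative schema:

```json
{
  "messages": [
    {
      "@type": "/cosmos.bank.v1beta1.MsgSend",
      "from_address": "cosmos1...",
      "to_address": "cosmos1...",
      "amount": [{ "denom": "stake", "amount": "10" }]
    }
  ],
  "metadata": "ipfs://CID",
  "deposit": "10stake"
}
```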
+More information can be found in the gov module [client documentation](https://docs.cosmos.network/v0.46/modules/gov/07_client.html).
+
+#### `x/staking`
+
+The `staking` module added a new message type to cancel unbonding delegations. Users that have unbonded by accident or wish to cancel an undelegation can now specify the amount and validator they would like to cancel the unbond from.
+
+### Protobuf
+
+The `third_party/proto` folder that existed in the [previous version](https://github.com/cosmos/cosmos-sdk/tree/v0.45.3/third_party/proto) no longer directly contains the [proto files](https://github.com/cosmos/cosmos-sdk/tree/release/v0.46.x/third_party/proto).
+
+Instead, the SDK uses [`buf`](https://buf.build). Clients should have their own [`buf.yaml`](https://docs.buf.build/configuration/v1/buf-yaml) with `buf.build/cosmos/cosmos-sdk` as a dependency, in order to avoid having to copy-paste these files.
+
+The protos can also be downloaded using `buf export buf.build/cosmos/cosmos-sdk:8cb30a2c4de74dc9bd8d260b1e75e176 --output `.
+
+Cosmos message protobufs should be extended with `cosmos.msg.v1.signer`:
+
+```protobuf
+message MsgSetWithdrawAddress {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string withdraw_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+When clients interact with a node, they are required to set a codec in `grpc.Dial`. More information can be found in this [doc](https://docs.cosmos.network/v0.46/run-node/interact-node.html#programmatically-via-go).
diff --git a/docs/sdk/v0.47/documentation/protocol-development/SPEC_MODULE.mdx b/docs/sdk/v0.47/documentation/protocol-development/SPEC_MODULE.mdx
new file mode 100644
index 00000000..0fc30607
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/protocol-development/SPEC_MODULE.mdx
@@ -0,0 +1,65 @@
+---
+title: Specification of Modules
+description: >-
+  This file intends to outline the common structure for specifications within
+  this directory.
+---
+
+This file intends to outline the common structure for specifications within
+this directory.
+
+## Tense
+
+For consistency, specs should be written in passive present tense.
+
+## Pseudo-Code
+
+Generally, pseudo-code should be minimized throughout the spec. Often, simple
+bulleted-lists which describe a function's operations are sufficient and should
+be considered preferable. In certain instances, due to the complex nature of
+the functionality being described, pseudo-code may be the most suitable form of
+specification. In these cases, use of pseudo-code is permissible, but should be
+presented in a concise manner, ideally restricted to only the complex
+element as a part of a larger description.
+
+## Common Layout
+
+The following generalized `README` structure should be used to break down
+specifications for modules. The following list is nonbinding and all sections are optional.
+
+* `# {Module Name}` - overview of the module
+* `## Concepts` - describe specialized concepts and definitions used throughout the spec
+* `## State` - specify and describe structures expected to be marshalled into the store, and their keys
+* `## State Transitions` - standard state transition operations triggered by hooks, messages, etc.
+* `## Messages` - specify message structure(s) and expected state machine behaviour(s)
+* `## Begin Block` - specify any begin-block operations
+* `## End Block` - specify any end-block operations
+* `## Hooks` - describe available hooks to be called by/from this module
+* `## Events` - list and describe event tags used
+* `## Client` - list and describe CLI commands and gRPC and REST endpoints
+* `## Params` - list all module parameters, their types (in JSON) and examples
+* `## Future Improvements` - describe future improvements of this module
+* `## Tests` - acceptance tests
+* `## Appendix` - supplementary details referenced elsewhere within the spec
+
+### Notation for key-value mapping
+
+Within `## State` the following notation `->` should be used to describe key to
+value mapping:
+
+```text
+key -> value
+```
+
+To represent byte concatenation, the `|` may be used. In addition, encoding
+type may be specified, for example:
+
+```text
+0x00 | addressBytes | address2Bytes -> amino(value_object)
+```
+
+Additionally, index mappings may be specified by mapping to the `nil` value, for example:
+
+```text
+0x01 | address2Bytes | addressBytes -> nil
+```
diff --git a/docs/sdk/v0.47/documentation/protocol-development/SPEC_STANDARD.mdx b/docs/sdk/v0.47/documentation/protocol-development/SPEC_STANDARD.mdx
new file mode 100644
index 00000000..b4c6656e
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/protocol-development/SPEC_STANDARD.mdx
@@ -0,0 +1,128 @@
+---
+title: What is an SDK standard?
+---
+
+An SDK standard is a design document describing a particular protocol, standard, or feature expected to be used by the Cosmos SDK. An SDK standard should list the desired properties of the standard, explain the design rationale, and provide a concise but comprehensive technical specification.
The primary author is responsible for pushing the proposal through the standardization process, soliciting input and support from the community, and communicating with relevant stakeholders to ensure (social) consensus.
+
+## Sections
+
+An SDK standard consists of:
+
+* a synopsis,
+* overview and basic concepts,
+* technical specification,
+* history log, and
+* copyright notice.
+
+All top-level sections are required. References should be included inline as links, or tabulated at the bottom of the section if necessary. Included sub-sections should be listed in the order specified below.
+
+### Table Of Contents
+
+Provide a table of contents at the top of the file to assist readers.
+
+### Synopsis
+
+The document should include a brief (\~200 word) synopsis providing a high-level description of and rationale for the specification.
+
+### Overview and basic concepts
+
+This section should include a motivation sub-section and a definitions sub-section if required:
+
+* *Motivation* - A rationale for the existence of the proposed feature, or the proposed changes to an existing feature.
+* *Definitions* - A list of new terms or concepts utilized in the document or required to understand it.
+
+### System model and properties
+
+This section should include an assumptions sub-section if any, the mandatory properties sub-section, and a dependencies sub-section. Note that the first two sub-sections are tightly coupled: how to enforce a property will depend directly on the assumptions made. This sub-section is important to capture the interactions of the specified feature with the "rest-of-the-world", i.e., with other features of the ecosystem.
+
+* *Assumptions* - A list of any assumptions made by the feature designer. It should capture which features are used by the feature under specification, and what we expect from them.
+* *Properties* - A list of the desired properties or characteristics of the feature specified, and expected effects or failures when the properties are violated. In case it is relevant, it can also include a list of properties that the feature does not guarantee.
+* *Dependencies* - A list of the features that use the feature under specification and how.
+
+### Technical specification
+
+This is the main section of the document, and should contain protocol documentation, design rationale, required references, and technical details where appropriate.
+The section may have any or all of the following sub-sections, as appropriate to the particular specification. The API sub-section is especially encouraged when appropriate.
+
+* *API* - A detailed description of the feature's API.
+* *Technical Details* - All technical details including syntax, diagrams, semantics, protocols, data structures, algorithms, and pseudocode as appropriate. The technical specification should be detailed enough that separate, correct implementations written without knowledge of each other are compatible.
+* *Backwards Compatibility* - A discussion of compatibility (or lack thereof) with previous feature or protocol versions.
+* *Known Issues* - A list of known issues. This sub-section is especially important for specifications of already in-use features.
+* *Example Implementation* - A concrete example implementation or description of an expected implementation to serve as the primary reference for implementers.
+
+### History
+
+A specification should include a history section, listing any inspiring documents and a plaintext log of significant changes.
+
+See an example history section [below](#history-1).
+
+### Copyright
+
+A specification should include a copyright section waiving rights via [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+
+## Formatting
+
+### General
+
+Specifications must be written in GitHub-flavoured Markdown.
+
+For a GitHub-flavoured Markdown cheat sheet, see [here](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). For a local Markdown renderer, see [here](https://github.com/joeyespo/grip).
+
+### Language
+
+Specifications should be written in Simple English, avoiding obscure terminology and unnecessary jargon. For excellent examples of Simple English, please see the [Simple English Wikipedia](https://simple.wikipedia.org/wiki/Main_Page).
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in specifications are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
+
+### Pseudocode
+
+Pseudocode in specifications should be language-agnostic and formatted in a simple imperative standard, with line numbers, variables, simple conditional blocks, for loops, and
+English fragments where necessary to explain further functionality such as scheduling timeouts. LaTeX images should be avoided because they are difficult to review in diff form.
+
+Pseudocode for structs can be written in a simple language like Typescript or golang, as interfaces.
+
+Example Golang pseudocode struct:
+
+```go
+type CacheKVStore interface {
+  cache: map[Key]Value
+  parent: KVStore
+  deleted: Key
+}
+```
+
+Pseudocode for algorithms should be written in simple Golang, as functions.
+
+Example pseudocode algorithm:
+
+```go expandable
+func get(store CacheKVStore, key Key) Value {
+  value = store.cache.get(Key)
+  if (value !== null) {
+    return value
+  } else {
+    value = store.parent.get(key)
+    store.cache.set(key, value)
+    return value
+  }
+}
+```
+
+## History
+
+This specification was significantly inspired by and derived from IBC's [ICS](https://github.com/cosmos/ibc/blob/main/spec/ics-001-ics-standard/README.md), which
+was in turn derived from Ethereum's [EIP 1](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md).
+
+Nov 24, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/docs/sdk/v0.47/documentation/protocol-development/addresses/README.mdx b/docs/sdk/v0.47/documentation/protocol-development/addresses/README.mdx
new file mode 100644
index 00000000..55735703
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/protocol-development/addresses/README.mdx
@@ -0,0 +1,5 @@
+---
+title: Addresses spec
+---
+
+* [Bech32](/docs/sdk/v0.47/documentation/protocol-development/addresses/bech32)
diff --git a/docs/sdk/v0.47/documentation/protocol-development/addresses/bech32.mdx b/docs/sdk/v0.47/documentation/protocol-development/addresses/bech32.mdx
new file mode 100644
index 00000000..ac52b2cf
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/protocol-development/addresses/bech32.mdx
@@ -0,0 +1,23 @@
+---
+title: Bech32 on Cosmos
+---
+
+The Cosmos network prefers to use the Bech32 address format wherever users must handle binary data. Bech32 encoding provides robust integrity checks on data, and the human readable part (HRP) provides contextual hints that can assist UI developers with providing informative error messages.
+
+In the Cosmos network, keys and addresses may refer to a number of different roles in the network, such as accounts, validators, etc.
+
+## HRP table
+
+| HRP           | Definition                         |
+| ------------- | ---------------------------------- |
+| cosmos        | Cosmos Account Address             |
+| cosmosvalcons | Cosmos Validator Consensus Address |
+| cosmosvaloper | Cosmos Validator Operator Address  |
+
+## Encoding
+
+While all user-facing interfaces to Cosmos software should expose Bech32 interfaces, many internal interfaces encode binary values in hex or base64 form.
+
+To convert between other binary representations of addresses and keys, it is important to first apply the Amino encoding process before Bech32 encoding.
+
+A complete implementation of the Amino serialization format is unnecessary in most cases. Simply prepending bytes from this [table](https://github.com/cometbft/cometbft/blob/main/spec/blockchain/05-encoding.md) to the byte string payload before Bech32 encoding will be sufficient for a compatible representation.
diff --git a/docs/sdk/v0.47/documentation/protocol-development/ics/README.mdx b/docs/sdk/v0.47/documentation/protocol-development/ics/README.mdx
new file mode 100644
index 00000000..19217aef
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/protocol-development/ics/README.mdx
@@ -0,0 +1,6 @@
+---
+title: Cosmos ICS
+description: ICS030 - Signed Messages
+---
+
+* [ICS030 - Signed Messages](/docs/sdk/v0.47/documentation/protocol-development/ics/ics-030-signed-messages)
diff --git a/docs/sdk/v0.47/documentation/protocol-development/ics/ics-030-signed-messages.mdx b/docs/sdk/v0.47/documentation/protocol-development/ics/ics-030-signed-messages.mdx
new file mode 100644
index 00000000..4787eb37
--- /dev/null
+++ b/docs/sdk/v0.47/documentation/protocol-development/ics/ics-030-signed-messages.mdx
@@ -0,0 +1,194 @@
+---
+title: 'ICS 030: Cosmos Signed Messages'
+---
+
+> TODO: Replace with valid ICS number and possibly move to new location.
+
+* [Changelog](#changelog)
+* [Abstract](#abstract)
+* [Preliminary](#preliminary)
+* [Specification](#specification)
+* [Future Adaptations](#future-adaptations)
+* [API](#api)
+* [References](#references)
+
+## Status
+
+Proposed.
+
+## Changelog
+
+## Abstract
+
+Having the ability to sign messages off-chain has proven to be a fundamental aspect
+of nearly any blockchain. The notion of signing messages off-chain has many
+added benefits such as saving on computational costs and reducing transaction
+throughput and overhead.
Within the context of the Cosmos, some of the major
+applications of signing such data include, but are not limited to, providing a
+cryptographically secure and verifiable means of proving validator identity and
+possibly associating it with some other framework or organization. In addition,
+it provides the ability to sign Cosmos messages with a Ledger or similar HSM device.
+
+A standardized protocol for hashing, signing, and verifying messages that can be
+implemented by the Cosmos SDK and other third-party organizations is needed. Such a
+standardized protocol subscribes to the following:
+
+* Contains a specification of human-readable and machine-verifiable typed structured data
+* Contains a framework for deterministic and injective encoding of structured data
+* Utilizes cryptographically secure hashing and signing algorithms
+* A framework for supporting extensions and domain separation
+* Is invulnerable to chosen ciphertext attacks
+* Has protection against potentially signing transactions a user did not intend to
+
+This specification is only concerned with the rationale and the standardized
+implementation of Cosmos signed messages. It does **not** concern itself with the
+concept of replay attacks as that will be left up to the higher-level application
+implementation. If you view signed messages as a means of authorizing some
+action or data, then such an application would have to either treat this as
+idempotent or have mechanisms in place to reject known signed messages.
+
+## Preliminary
+
+The Cosmos message signing protocol will be parameterized with a cryptographically
+secure hashing algorithm `SHA-256` and a signing algorithm `S` that contains
+the operations `sign` and `verify`, which provide a digital signature over a set
+of bytes and verification of a signature respectively.
+
+Note, our goal here is not to provide context and reasoning about why necessarily
+these algorithms were chosen, apart from the fact they are the de facto algorithms
+used in CometBFT and the Cosmos SDK, and that they satisfy our needs for
+cryptographic algorithms, such as having resistance to collision and second
+pre-image attacks, as well as being [deterministic](https://en.wikipedia.org/wiki/Hash_function#Determinism) and [uniform](https://en.wikipedia.org/wiki/Hash_function#Uniformity).
+
+## Specification
+
+CometBFT has a well-established protocol for signing messages using a canonical
+JSON representation as defined [here](https://github.com/cometbft/cometbft/blob/master/types/canonical.go).
+
+An example of such a canonical JSON structure is CometBFT's vote structure:
+
+```go
+type CanonicalJSONVote struct {
+  ChainID   string               `json:"@chain_id"`
+  Type      string               `json:"@type"`
+  BlockID   CanonicalJSONBlockID `json:"block_id"`
+  Height    int64                `json:"height"`
+  Round     int                  `json:"round"`
+  Timestamp string               `json:"timestamp"`
+  VoteType  byte                 `json:"type"`
+}
+```
+
+With such canonical JSON structures, the specification requires that they include
+meta fields: `@chain_id` and `@type`. These meta fields are reserved and must be
+included. They are both of type `string`. In addition, fields must be ordered
+in lexicographically ascending order.
+
+For the purposes of signing Cosmos messages, the `@chain_id` field must correspond
+to the Cosmos chain identifier. The user-agent should **refuse** signing if the
+`@chain_id` field does not match the currently active chain! The `@type` field
+must equal the constant `"message"`. The `@type` field corresponds to the type of
+structure the user will be signing in an application. For now, a user is only
+allowed to sign bytes of valid ASCII text ([see here](https://github.com/cometbft/cometbft/blob/v0.37.0/libs/strings/string.go#L35-L64)).
+However, this will change and evolve to support additional application-specific
+structures that are human-readable and machine-verifiable ([see Future Adaptations](#future-adaptations)).
+
+Thus, we can have a canonical JSON structure for signing Cosmos messages using
+the [JSON schema](http://json-schema.org/) specification as such:
+
+```json expandable
+{
+  "$schema": "http://json-schema.org/draft-04/schema#",
+  "$id": "cosmos/signing/typeData/schema",
+  "title": "The Cosmos signed message typed data schema.",
+  "type": "object",
+  "properties": {
+    "@chain_id": {
+      "type": "string",
+      "description": "The corresponding Cosmos chain identifier.",
+      "minLength": 1
+    },
+    "@type": {
+      "type": "string",
+      "description": "The message type. It must be 'message'.",
+      "enum": [
+        "message"
+      ]
+    },
+    "text": {
+      "type": "string",
+      "description": "The valid ASCII text to sign.",
+      "pattern": "^[\\x20-\\x7E]+$",
+      "minLength": 1
+    }
+  },
+  "required": [
+    "@chain_id",
+    "@type",
+    "text"
+  ]
+}
+```
+
+e.g.
+
+```json
+{
+  "@chain_id": "1",
+  "@type": "message",
+  "text": "Hello, you can identify me as XYZ on keybase."
+}
+```
+
+## Future Adaptations
+
+As applications can vary greatly in domain, it will be vital to support both
+domain separation and human-readable and machine-verifiable structures.
+
+Domain separation will allow application developers to prevent collisions of
+otherwise identical structures. It should be designed to be unique per application
+use and should be used directly in the signature encoding itself.
+
+Human-readable and machine-verifiable structures will allow end users to sign
+more complex structures, apart from just string messages, and still be able to
+know exactly what they are signing (as opposed to signing arbitrary bytes).
+
+Thus, in the future, the Cosmos signing message specification will be expected
+to expand upon its canonical JSON structure to include such functionality.
+ +## API + +Application developers and designers should formalize a standard set of APIs that +adhere to the following specification: + +*** + +### **cosmosSignBytes** + +Params: + +* `data`: the Cosmos signed message canonical JSON structure +* `address`: the Bech32 Cosmos account address to sign data with + +Returns: + +* `signature`: the Cosmos signature derived using signing algorithm `S` + +*** + +### Examples + +Using the `secp256k1` as the DSA, `S`: + +```javascript +data = { + "@chain_id": "1", + "@type": "message", + "text": "I hereby claim I am ABC on Keybase!" +} + +cosmosSignBytes(data, "cosmos1pvsch6cddahhrn5e8ekw0us50dpnugwnlfngt3") +> "0x7fc4a495473045022100dec81a9820df0102381cdbf7e8b0f1e2cb64c58e0ecda1324543742e0388e41a02200df37905a6505c1b56a404e23b7473d2c0bc5bcda96771d2dda59df6ed2b98f8" +``` + +## References diff --git a/docs/sdk/v0.47/learn.mdx b/docs/sdk/v0.47/learn.mdx deleted file mode 100644 index d607d0fa..00000000 --- a/docs/sdk/v0.47/learn.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: "Learn" -description: "Version: v0.47" ---- - -* [Introduction](/v0.47/learn/intro/overview) - Dive into the fundamentals of Cosmos SDK with an insightful introduction, laying the groundwork for understanding blockchain development. In this section we provide a High-Level Overview of the SDK, then dive deeper into Core concepts such as Application-Specific Blockchains, Blockchain Architecture, and finally we begin to explore what are the main components of the SDK. -* [Beginner](/v0.47/learn/beginner/overview-app) - Start your journey with beginner-friendly resources in the Cosmos SDK's "Learn" section, providing a gentle entry point for newcomers to blockchain development. Here we focus on a little more detail, covering the Anatomy of a Cosmos SDK Application, Transaction Lifecycles, Accounts and lastly, Gas and Fees. 
-* [Advanced](/v0.47/learn/advanced/baseapp) - Level up your Cosmos SDK expertise with advanced topics, tailored for experienced developers diving into intricate blockchain application development. We cover the Cosmos SDK on a lower level as we dive into the core of the SDK with BaseApp, Transactions, Context, Node Client (Daemon), Store, Encoding, gRPC, REST, and CometBFT Endpoints, CLI, Events, Telementry, Object-Capability Model, RunTx recovery middleware, Cosmos Blockchain Simulator, Protobuf Documentation, In-Place Store Migrations, Configuration and AutoCLI. diff --git a/docs/sdk/v0.47/learn/advanced/baseapp.mdx b/docs/sdk/v0.47/learn/advanced/baseapp.mdx index 4378fd07..2fcfca1e 100644 --- a/docs/sdk/v0.47/learn/advanced/baseapp.mdx +++ b/docs/sdk/v0.47/learn/advanced/baseapp.mdx @@ -1,182 +1,1440 @@ --- -title: "BaseApp" -description: "Version: v0.47" +title: BaseApp --- - - This document describes `BaseApp`, the abstraction that implements the core functionalities of a Cosmos SDK application. - +## Synopsis + +This document describes `BaseApp`, the abstraction that implements the core functionalities of a Cosmos SDK application. - ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of a Cosmos SDK application](/v0.47/learn/beginner/overview-app) - * [Lifecycle of a Cosmos SDK transaction](/v0.47/learn/beginner/tx-lifecycle) +### Pre-requisite Readings + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.47/learn/beginner/overview-app) +- [Lifecycle of a Cosmos SDK transaction](/docs/sdk/v0.47/learn/beginner/tx-lifecycle) + -## Introduction[​](#introduction "Direct link to Introduction") +## Introduction `BaseApp` is a base type that implements the core of a Cosmos SDK application, namely: -* The [Application Blockchain Interface](#main-abci-10-messages), for the state-machine to communicate with the underlying consensus engine (e.g. CometBFT). 
-* [Service Routers](#service-routers), to route messages and queries to the appropriate module. -* Different [states](#state-updates), as the state-machine can have different volatile states updated based on the ABCI message received. +- The [Application Blockchain Interface](#main-abci-10-messages), for the state-machine to communicate with the underlying consensus engine (e.g. CometBFT). +- [Service Routers](#service-routers), to route messages and queries to the appropriate module. +- Different [states](#state-updates), as the state-machine can have different volatile states updated based on the ABCI message received. -The goal of `BaseApp` is to provide the fundamental layer of a Cosmos SDK application that developers can easily extend to build their own custom application. Usually, developers will create a custom type for their application, like so: +The goal of `BaseApp` is to provide the fundamental layer of a Cosmos SDK application +that developers can easily extend to build their own custom application. Usually, +developers will create a custom type for their application, like so: -``` -type App struct { // reference to a BaseApp *baseapp.BaseApp // list of application store keys // list of application keepers // module manager} +```go +type App struct { + / reference to a BaseApp + *baseapp.BaseApp + + / list of application store keys + + / list of application keepers + + / module manager +} ``` -Extending the application with `BaseApp` gives the former access to all of `BaseApp`'s methods. This allows developers to compose their custom application with the modules they want, while not having to concern themselves with the hard work of implementing the ABCI, the service routers and state management logic. +Extending the application with `BaseApp` gives the former access to all of `BaseApp`'s methods. 
+This allows developers to compose their custom application with the modules they want, while not +having to concern themselves with the hard work of implementing the ABCI, the service routers and state +management logic. -## Type Definition[​](#type-definition "Direct link to Type Definition") +## Type Definition The `BaseApp` type holds many important parameters for any Cosmos SDK based application. -baseapp/baseapp.go +```go expandable +package baseapp + +import ( + + "errors" + "fmt" + "sort" + "strings" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/crypto/tmhash" + "github.com/tendermint/tendermint/libs/log" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + dbm "github.com/tendermint/tm-db" + "golang.org/x/exp/maps" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/snapshots" + "github.com/cosmos/cosmos-sdk/store" + "github.com/cosmos/cosmos-sdk/store/rootmulti" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +const ( + runTxModeCheck runTxMode = iota / Check a transaction + runTxModeReCheck / Recheck a (pending) + +transaction after a commit + runTxModeSimulate / Simulate a transaction + runTxModeDeliver / Deliver a transaction + runTxPrepareProposal + runTxProcessProposal +) + +var _ abci.Application = (*BaseApp)(nil) + +type ( + / Enum mode for app.runTx + runTxMode uint8 + + / StoreLoader defines a customizable function to control how we load the CommitMultiStore + / from disk. This is useful for state migration, when loading a datastore written with + / an older version of the software. In particular, if a module changed the substore key name + / (or removed a substore) + +between two versions of the software. 
+ StoreLoader func(ms sdk.CommitMultiStore) + +error +) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { /nolint: maligned + / initialized on creation + logger log.Logger + name string / application name from abci.Info + db dbm.DB / common DB backend + cms sdk.CommitMultiStore / Main (uncached) + +state + qms sdk.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.AnteHandler / post handler, optional, e.g. for tips + initChainer sdk.InitChainer / initialize state with validators and state blob + beginBlocker sdk.BeginBlocker / logic to run before any txs + processProposal sdk.ProcessProposalHandler / the handler which runs on ABCI ProcessProposal + prepareProposal sdk.PrepareProposalHandler / the handler which runs on ABCI PrepareProposal + endBlocker sdk.EndBlocker / logic to run after all txs, and to determine valset changes + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. 
dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / checkState is set on InitChain and reset on Commit + / deliverState is set on InitChain and BeginBlock and set to nil on Commit + checkState *state / for CheckTx + deliverState *state / for DeliverTx + processProposalState *state / for ProcessProposal + prepareProposalState *state / for PrepareProposal + + / an inter-block write-through cache provided to the context during deliverState + interBlockCache sdk.MultiStorePersistentCache + + / absent validators from begin block + voteInfos []abci.VoteInfo + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the baseapp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from Tendermint. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: Tendermint block pruning is dependant on this parameter in conunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. 
+ minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs Tendermint what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / abciListeners for hooking into the ABCI message processing of the BaseApp + / and exposing the requests and responses to external consumers + abciListeners []ABCIListener +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. +/ +/ NOTE: The db is used to store the version number for now. +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db), + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + if app.processProposal == nil { + app.SetProcessProposal(app.DefaultProcessProposal()) +} + if app.prepareProposal == nil { + app.SetPrepareProposal(app.DefaultPrepareProposal()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + +return app +} + +/ Name returns the name of the BaseApp. 
+func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} -``` -loading... -``` +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. 
+func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := maps.Keys(keys) + +sort.Strings(skeys) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. +func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms sdk.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. 
+func (app *BaseApp) + +CommitMultiStore() + +sdk.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. 
+func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + emptyHeader := tmproto.Header{ +} + + / needed for the export command which inits from store but never calls initchain + app.setState(runTxModeCheck, emptyHeader) + + / needed for ABCI Replay Blocks mode which calls Prepare/Process proposal (InitChain is not called) + +app.setState(runTxPrepareProposal, emptyHeader) + +app.setState(runTxProcessProposal, emptyHeader) + +app.Seal() + +rms, ok := app.cms.(*rootmulti.Store) + if !ok { + return fmt.Errorf("invalid commit multi-store; expected %T, got: %T", &rootmulti.Store{ +}, app.cms) +} + +return rms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache sdk.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/baseapp.go#L50-L146) +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. 
a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. +func (app *BaseApp) + +setState(mode runTxMode, header tmproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger), +} + switch mode { + case runTxModeCheck: + / Minimum gas prices are also set. It is set on InitChain and reset on Commit. + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + +app.checkState = baseState + case runTxModeDeliver: + / It is set on InitChain and BeginBlock and set to nil on Commit. + app.deliverState = baseState + case runTxPrepareProposal: + / It is set on InitChain and Commit. + app.prepareProposalState = baseState + case runTxProcessProposal: + / It is set on InitChain and Commit. + app.processProposalState = baseState + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. +func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) *tmproto.ConsensusParams { + if app.paramStore == nil { + return nil +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(err) +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the baseapp's param store. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp *tmproto.ConsensusParams) { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") +} + if cp == nil { + return +} + +app.paramStore.Set(ctx, cp) + / We're explicitly not storing the Tendermint app_version in the param store. It's + / stored instead in the x/upgrade store, with its own bump logic. +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. 
+func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp == nil || cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateHeight(req abci.RequestBeginBlock) + +error { + if req.Header.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Header.Height) +} + + / expectedHeight holds the expected height to validate. + var expectedHeight int64 + if app.LastBlockHeight() == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain (no + / previous commit). The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / - either there was already a previous commit in the store, in which + / case we increment the version from there, + / - or there was no previous commit, and initial version was not set, + / in which case we start at version 1. + expectedHeight = app.LastBlockHeight() + 1 +} + if req.Header.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Header.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. 
+func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + err := msg.ValidateBasic() + if err != nil { + return err +} + +} + +return nil +} + +/ Returns the application's deliverState if app is in runTxModeDeliver, +/ prepareProposalState if app is in runTxPrepareProposal, processProposalState +/ if app is in runTxProcessProposal, and checkState otherwise. +func (app *BaseApp) + +getState(mode runTxMode) *state { + switch mode { + case runTxModeDeliver: + return app.deliverState + case runTxPrepareProposal: + return app.prepareProposalState + case runTxProcessProposal: + return app.processProposalState + default: + return app.checkState +} +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode runTxMode, txBytes []byte) + +sdk.Context { + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.ctx. + WithTxBytes(txBytes). + WithVoteInfos(app.voteInfos) + +ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == runTxModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == runTxModeSimulate { + ctx, _ = ctx.CacheContext() +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. 
+func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, sdk.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + sdk.TraceContext( + map[string]interface{ +}{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(sdk.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +func (app *BaseApp) + +runTx(mode runTxMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, priority int64, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == runTxModeDeliver && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, 0, sdkerrors.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + / consumeBlockGas makes sure block gas is consumed at most once. It must happen after + / tx processing, and must be executed even if tx processing fails. Hence, we use trick with `defer` + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: This must exist in a separate defer function for the above recovery + / to recover from this one. + if mode == runTxModeDeliver { + defer consumeBlockGas() +} + +tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ +}, nil, nil, 0, err +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, 0, err +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache sdk.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. 
This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == runTxModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, 0, err +} + +priority = ctx.Priority() + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + if mode == runTxModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, priority, err +} + +} + +else if mode == runTxModeDeliver { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, priority, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + result, err = app.runMsgs(runMsgCtx, msgs, mode) + if err == nil { + + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. 
+ if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.postHandler(postCtx, tx, mode == runTxModeSimulate) + if err != nil { + return gInfo, nil, anteEvents, priority, err +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if mode == runTxModeDeliver { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == runTxModeDeliver || mode == runTxModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, priority, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, mode runTxMode) (*sdk.Result, error) { + msgLogs := make(sdk.ABCIMessageLogs, 0, len(msgs)) + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != runTxModeDeliver && mode != runTxModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, sdkerrors.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents := createEvents(msgResult.GetEvents(), msg) + + / append message events, data and logs + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + +msgLogs = append(msgLogs, sdk.NewABCIMessageLog(uint32(i), msgResult.Log, msgEvents)) +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, sdkerrors.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Log: strings.TrimSpace(msgLogs.String()), + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses +}) +} + +func createEvents(events sdk.Events, msg sdk.Msg) + +sdk.Events { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + if len(msg.GetSigners()) > 0 && !msg.GetSigners()[0].Empty() { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, msg.GetSigners()[0].String())) +} + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + / here we assume that routes module name is the second element of the route + / e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" + moduleName := strings.Split(eventMsgName, ".") + if len(moduleName) > 1 { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName[1])) +} + +} + +return sdk.Events{ + msgEvent +}.AppendEvents(events) +} + +/ DefaultPrepareProposal returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from Tendermint will simply be returned, which, by default, are in +/ FIFO order. 
+func (app *BaseApp) + +DefaultPrepareProposal() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req abci.RequestPrepareProposal) + +abci.ResponsePrepareProposal { + / If the mempool is nil or a no-op mempool, we simply return the transactions + / requested from Tendermint, which, by default, should be in FIFO order. + _, isNoOp := app.mempool.(mempool.NoOpMempool) + if app.mempool == nil || isNoOp { + return abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} + +var ( + txsBytes [][]byte + byteCount int64 + ) + iterator := app.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + +bz, err := app.txEncoder(memTx) + if err != nil { + panic(err) +} + txSize := int64(len(bz)) + + / NOTE: Since runTx was already executed in CheckTx, which calls + / mempool.Insert, ideally everything in the pool should be valid. But + / some mempool implementations may insert invalid txs, so we check again. + _, _, _, _, err = app.runTx(runTxPrepareProposal, bz) + if err != nil { + err := app.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +iterator = iterator.Next() + +continue +} + +else if byteCount += txSize; byteCount <= req.MaxTxBytes { + txsBytes = append(txsBytes, bz) +} + +else { + break +} + +iterator = iterator.Next() +} + +return abci.ResponsePrepareProposal{ + Txs: txsBytes +} + +} +} + +/ DefaultProcessProposal returns the default implementation for processing an ABCI proposal. +/ Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. Note that step (2) + +is identical to the +/ validation step performed in DefaultPrepareProposal. 
It is very important that the same validation logic is used +/ in both steps, and applications must ensure that this is the case in non-default handlers. +func (app *BaseApp) + +DefaultProcessProposal() + +sdk.ProcessProposalHandler { + return func(ctx sdk.Context, req abci.RequestProcessProposal) + +abci.ResponseProcessProposal { + for _, txBytes := range req.Txs { + _, err := app.txDecoder(txBytes) + if err != nil { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + + _, _, _, _, err = app.runTx(runTxProcessProposal, txBytes) + if err != nil { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +} + +return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +} + +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req abci.RequestPrepareProposal) + +abci.ResponsePrepareProposal { + return abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ abci.RequestProcessProposal) + +abci.ResponseProcessProposal { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +} + +} +} +``` Let us go through the most important components. -> **Note**: Not all parameters are described, only the most important ones. Refer to the type definition for the full list. +> **Note**: Not all parameters are described, only the most important ones. Refer to the +> type definition for the full list. 
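Note that the `NewBaseApp` constructor in the listing above takes a variadic `options ...func(*BaseApp)` argument: the standard Go functional-options pattern. Each option mutates the app before defaults (such as the no-op mempool) are filled in. A minimal, stdlib-only sketch of that pattern — the `App`, `WithName`, and `WithTrace` names here are illustrative, not SDK APIs:

```go
package main

import "fmt"

// App is a toy stand-in for BaseApp, used only to illustrate the
// functional-options construction style.
type App struct {
	name  string
	trace bool
}

// WithName and WithTrace mirror setters like SetMempool or SetStoreLoader:
// each returns a function that mutates the app under construction.
func WithName(name string) func(*App) { return func(a *App) { a.name = name } }
func WithTrace(trace bool) func(*App) { return func(a *App) { a.trace = trace } }

// NewApp applies every option in order, then fills in defaults for
// anything the options left unset, just as NewBaseApp does for the mempool
// and the proposal handlers.
func NewApp(options ...func(*App)) *App {
	app := &App{}
	for _, option := range options {
		option(app)
	}
	if app.name == "" {
		app.name = "default"
	}
	return app
}

func main() {
	app := NewApp(WithName("simapp"), WithTrace(true))
	fmt.Println(app.name, app.trace) // simapp true
}
```

In `BaseApp` itself the options run first and defaults are applied afterwards, which is why an application can override the mempool or the `PrepareProposal`/`ProcessProposal` handlers simply by passing an option.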
First, the important parameters that are initialized during the bootstrapping of the application: -* [`CommitMultiStore`](/v0.47/learn/advanced/store#commitmultistore): This is the main store of the application, which holds the canonical state that is committed at the [end of each block](#commit). This store is **not** cached, meaning it is not used to update the application's volatile (un-committed) states. The `CommitMultiStore` is a multi-store, meaning a store of stores. Each module of the application uses one or multiple `KVStores` in the multi-store to persist their subset of the state. -* Database: The `db` is used by the `CommitMultiStore` to handle data persistence. -* [`Msg` Service Router](#msg-service-router): The `msgServiceRouter` facilitates the routing of `sdk.Msg` requests to the appropriate module `Msg` service for processing. Here a `sdk.Msg` refers to the transaction component that needs to be processed by a service in order to update the application state, and not to ABCI message which implements the interface between the application and the underlying consensus engine. -* [gRPC Query Router](#grpc-query-router): The `grpcQueryRouter` facilitates the routing of gRPC queries to the appropriate module for it to be processed. These queries are not ABCI messages themselves, but they are relayed to the relevant module's gRPC `Query` service. -* [`TxDecoder`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types#TxDecoder): It is used to decode raw transaction bytes relayed by the underlying CometBFT engine. -* [`AnteHandler`](#antehandler): This handler is used to handle signature verification, fee payment, and other pre-message execution checks when a transaction is received. It's executed during [`CheckTx/RecheckTx`](#checktx) and [`DeliverTx`](#delivertx). 
-* [`InitChainer`](/v0.47/learn/beginner/overview-app#initchainer), [`BeginBlocker` and `EndBlocker`](/v0.47/learn/beginner/overview-app#beginblocker-and-endblocker): These are the functions executed when the application receives the `InitChain`, `BeginBlock` and `EndBlock` ABCI messages from the underlying CometBFT engine. +- [`CommitMultiStore`](/docs/sdk/v0.47/learn/advanced/store#commitmultistore): This is the main store of the application, + which holds the canonical state that is committed at the [end of each block](#commit). This store + is **not** cached, meaning it is not used to update the application's volatile (un-committed) states. + The `CommitMultiStore` is a multi-store, meaning a store of stores. Each module of the application + uses one or multiple `KVStores` in the multi-store to persist their subset of the state. +- Database: The `db` is used by the `CommitMultiStore` to handle data persistence. +- [`Msg` Service Router](#msg-service-router): The `msgServiceRouter` facilitates the routing of `sdk.Msg` requests to the appropriate + module `Msg` service for processing. Here a `sdk.Msg` refers to the transaction component that needs to be + processed by a service in order to update the application state, and not to an ABCI message, which implements + the interface between the application and the underlying consensus engine. +- [gRPC Query Router](#grpc-query-router): The `grpcQueryRouter` facilitates the routing of gRPC queries to the + appropriate module for them to be processed. These queries are not ABCI messages themselves, but they + are relayed to the relevant module's gRPC `Query` service. +- [`TxDecoder`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types#TxDecoder): It is used to decode + raw transaction bytes relayed by the underlying CometBFT engine. +- [`AnteHandler`](#antehandler): This handler performs signature verification, fee payment, + and other pre-message execution checks when a transaction is received.
It's executed during + [`CheckTx/RecheckTx`](#checktx) and [`DeliverTx`](#delivertx). +- [`InitChainer`](/docs/sdk/v0.47/learn/beginner/overview-app#initchainer), + [`BeginBlocker` and `EndBlocker`](/docs/sdk/v0.47/learn/beginner/overview-app#beginblocker-and-endblocker): These are + the functions executed when the application receives the `InitChain`, `BeginBlock` and `EndBlock` + ABCI messages from the underlying CometBFT engine. Then, parameters used to define [volatile states](#state-updates) (i.e. cached states): -* `checkState`: This state is updated during [`CheckTx`](#checktx), and reset on [`Commit`](#commit). -* `deliverState`: This state is updated during [`DeliverTx`](#delivertx), and set to `nil` on [`Commit`](#commit) and gets re-initialized on BeginBlock. -* `processProposalState`: This state is updated during [`ProcessProposal`](#process-proposal). -* `prepareProposalState`: This state is updated during [`PrepareProposal`](#prepare-proposal). +- `checkState`: This state is updated during [`CheckTx`](#checktx), and reset on [`Commit`](#commit). +- `deliverState`: This state is updated during [`DeliverTx`](#delivertx), and set to `nil` on + [`Commit`](#commit) and gets re-initialized on BeginBlock. +- `processProposalState`: This state is updated during [`ProcessProposal`](#process-proposal). +- `prepareProposalState`: This state is updated during [`PrepareProposal`](#prepare-proposal). Finally, a few more important parameters: -* `voteInfos`: This parameter carries the list of validators whose precommit is missing, either because they did not vote or because the proposer did not include their vote. This information is carried by the and can be used by the application for various things like punishing absent validators. -* `minGasPrices`: This parameter defines the minimum gas prices accepted by the node. This is a **local** parameter, meaning each full-node can set a different `minGasPrices`. 
It is used in the `AnteHandler` during [`CheckTx`](#checktx), mainly as a spam protection mechanism. The transaction enters the [mempool](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#mempool-methods) only if the gas prices of the transaction are greater than one of the minimum gas price in `minGasPrices` (e.g. if `minGasPrices == 1uatom,1photon`, the `gas-price` of the transaction must be greater than `1uatom` OR `1photon`). -* `appVersion`: Version of the application. It is set in the [application's constructor function](/v0.47/learn/beginner/overview-app#constructor-function). - -## Constructor[​](#constructor "Direct link to Constructor") - -``` -func NewBaseApp( name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp),) *BaseApp { // ...} +- `voteInfos`: This parameter carries the list of validators whose precommit is missing, either + because they did not vote or because the proposer did not include their vote. This information + can be used by the application for various things like + punishing absent validators. +- `minGasPrices`: This parameter defines the minimum gas prices accepted by the node. This is a + **local** parameter, meaning each full-node can set a different `minGasPrices`. It is used in the + `AnteHandler` during [`CheckTx`](#checktx), mainly as a spam protection mechanism. The transaction + enters the [mempool](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#mempool-methods) + only if the gas prices of the transaction are greater than one of the minimum gas prices in + `minGasPrices` (e.g. if `minGasPrices == 1uatom,1photon`, the `gas-price` of the transaction must be + greater than `1uatom` OR `1photon`). +- `appVersion`: Version of the application. It is set in the + [application's constructor function](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function).
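The OR semantics of the `minGasPrices` check can be sketched as follows. This is a conceptual model only: the plain string parsing and the function names are illustrative, whereas the SDK itself works with `sdk.DecCoins`.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitCoin splits a coin string like "1uatom" into its amount and denom.
func splitCoin(coin string) (float64, string) {
	i := 0
	for i < len(coin) && (coin[i] >= '0' && coin[i] <= '9' || coin[i] == '.') {
		i++
	}
	amount, _ := strconv.ParseFloat(coin[:i], 64)
	return amount, coin[i:]
}

// meetsMinGasPrices reports whether the tx gas price is strictly greater than
// at least ONE of the node's configured minimums (the OR semantics above).
func meetsMinGasPrices(txGasPrice, minGasPrices string) bool {
	txAmount, txDenom := splitCoin(txGasPrice)
	for _, entry := range strings.Split(minGasPrices, ",") {
		minAmount, minDenom := splitCoin(entry)
		if txDenom == minDenom && txAmount > minAmount {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(meetsMinGasPrices("2uatom", "1uatom,1photon"))    // true
	fmt.Println(meetsMinGasPrices("0.5photon", "1uatom,1photon")) // false
}
```

Because each full-node sets `minGasPrices` locally, the same transaction can be accepted into one node's mempool and rejected by another's.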
+ +## Constructor + +```go +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + + // ... +} ``` -The `BaseApp` constructor function is pretty straightforward. The only thing worth noting is the possibility to provide additional [`options`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/options.go) to the `BaseApp`, which will execute them in order. The `options` are generally `setter` functions for important parameters, like `SetPruning()` to set pruning options or `SetMinGasPrices()` to set the node's `min-gas-prices`. +The `BaseApp` constructor function is pretty straightforward. The only thing worth noting is the +possibility to provide additional [`options`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/options.go) +to the `BaseApp`, which will execute them in order. The `options` are generally `setter` functions +for important parameters, like `SetPruning()` to set pruning options or `SetMinGasPrices()` to set +the node's `min-gas-prices`. Naturally, developers can add additional `options` based on their application's needs. -## State Updates[​](#state-updates "Direct link to State Updates") +## State Updates -The `BaseApp` maintains four primary volatile states and a root or main state. The main state is the canonical state of the application and the volatile states, `checkState`, `deliverState`, `prepareProposalState`, `processPreposalState`, are used to handle state transitions in-between the main state made during [`Commit`](#commit). +The `BaseApp` maintains four primary volatile states and a root or main state. The main state +is the canonical state of the application and the volatile states, `checkState`, `deliverState`, `prepareProposalState`, `processProposalState`, +are used to handle state transitions in-between the main state made during [`Commit`](#commit).
-Internally, there is only a single `CommitMultiStore` which we refer to as the main or root state. From this root state, we derive four volatile states by using a mechanism called *store branching* (performed by `CacheWrap` function). The types can be illustrated as follows: +Internally, there is only a single `CommitMultiStore` which we refer to as the main or root state. +From this root state, we derive four volatile states by using a mechanism called _store branching_ (performed by the `CacheWrap` function). +The types can be illustrated as follows: -![Types](/images/v0.47/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png) +![Types](/docs/sdk/images/learn/advanced/baseapp_state.png) -### InitChain State Updates[​](#initchain-state-updates "Direct link to InitChain State Updates") +### InitChain State Updates -During `InitChain`, the four volatile states, `checkState`, `prepareProposalState`, `processProposalState` and `deliverState` are set by branching the root `CommitMultiStore`. Any subsequent reads and writes happen on branched versions of the `CommitMultiStore`. To avoid unnecessary roundtrip to the main state, all reads to the branched store are cached. +During `InitChain`, the four volatile states, `checkState`, `prepareProposalState`, `processProposalState` +and `deliverState` are set by branching the root `CommitMultiStore`. Any subsequent reads and writes happen +on branched versions of the `CommitMultiStore`. +To avoid unnecessary round trips to the main state, all reads to the branched store are cached.
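The store-branching behavior can be modeled with a toy key-value store (a hypothetical sketch, not the SDK's `CacheWrap` implementation): reads fall through to the parent and are cached, and writes remain in the branch until they are explicitly written back.

```go
package main

import "fmt"

// Store is a minimal key-value store modeling the CommitMultiStore and its
// branches. A branch buffers writes; Write() flushes them to the parent.
type Store struct {
	parent *Store
	data   map[string]string
}

func NewStore() *Store { return &Store{data: map[string]string{}} }

// Branch derives a volatile child store (store branching).
func (s *Store) Branch() *Store { return &Store{parent: s, data: map[string]string{}} }

func (s *Store) Get(k string) string {
	if v, ok := s.data[k]; ok {
		return v
	}
	if s.parent != nil {
		v := s.parent.Get(k)
		s.data[k] = v // cache the read to avoid another round trip
		return v
	}
	return ""
}

func (s *Store) Set(k, v string) { s.data[k] = v }

// Write flushes the branch's buffered changes down to its parent.
func (s *Store) Write() {
	for k, v := range s.data {
		s.parent.data[k] = v
	}
}

func main() {
	root := NewStore()
	root.Set("balance", "100")

	check := root.Branch() // e.g. checkState
	check.Set("balance", "90")

	fmt.Println(root.Get("balance")) // 100: branch writes are not visible yet
	check.Write()
	fmt.Println(root.Get("balance")) // 90: flushed to the root
}
```

Discarding a failed branch is then just a matter of never calling `Write()`, which is how the SDK's volatile states drop state transitions from failed executions.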
-![InitChain](/images/v0.47/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png) +![InitChain](/docs/sdk/images/learn/advanced/baseapp_state-initchain.png) -### CheckTx State Updates[​](#checktx-state-updates "Direct link to CheckTx State Updates") +### CheckTx State Updates -During `CheckTx`, the `checkState`, which is based off of the last committed state from the root store, is used for any reads and writes. Here we only execute the `AnteHandler` and verify a service router exists for every message in the transaction. Note, when we execute the `AnteHandler`, we branch the already branched `checkState`. This has the side effect that if the `AnteHandler` fails, the state transitions won't be reflected in the `checkState` -- i.e. `checkState` is only updated on success. +During `CheckTx`, the `checkState`, which is based off of the last committed state from the root +store, is used for any reads and writes. Here we only execute the `AnteHandler` and verify a service router +exists for every message in the transaction. Note, when we execute the `AnteHandler`, we branch +the already branched `checkState`. +This has the side effect that if the `AnteHandler` fails, the state transitions won't be reflected in the `checkState` +-- i.e. `checkState` is only updated on success. -![CheckTx](/images/v0.47/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png) +![CheckTx](/docs/sdk/images/learn/advanced/baseapp_state-checktx.png) -### PrepareProposal State Updates[​](#prepareproposal-state-updates "Direct link to PrepareProposal State Updates") +### PrepareProposal State Updates -During `PrepareProposal`, the `prepareProposalState` is set by branching the root `CommitMultiStore`. The `prepareProposalState` is used for any reads and writes that occur during the `PrepareProposal` phase. The function uses the `Select()` method of the mempool to iterate over the transactions.
`runTx` is then called, which encodes and validates each transaction and from there the `AnteHandler` is executed. If successful, valid transactions are returned inclusive of the events, tags, and data generated during the execution of the proposal. The described behavior is that of the default handler, applications have the flexibility to define their own [custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). +During `PrepareProposal`, the `prepareProposalState` is set by branching the root `CommitMultiStore`. +The `prepareProposalState` is used for any reads and writes that occur during the `PrepareProposal` phase. +The function uses the `Select()` method of the mempool to iterate over the transactions. `runTx` is then called, +which encodes and validates each transaction and from there the `AnteHandler` is executed. +If successful, valid transactions are returned inclusive of the events, tags, and data generated +during the execution of the proposal. +The described behavior is that of the default handler; applications have the flexibility to define their own +[custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). -![ProcessProposal](/images/v0.47/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png) +![PrepareProposal](/docs/sdk/images/learn/advanced/baseapp_state-prepareproposal.png) -### ProcessProposal State Updates[​](#processproposal-state-updates "Direct link to ProcessProposal State Updates") +### ProcessProposal State Updates -During `ProcessProposal`, the `processProposalState` is set based off of the last committed state from the root store and is used to process a signed proposal received from a validator.
In this state, `runTx` is called and the `AnteHandler` is executed and the context used in this state is built with information from the header and the main state, including the minimum gas prices, which are also set. Again we want to highlight that the described behavior is that of the default handler and applications have the flexibility to define their own [custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). +During `ProcessProposal`, the `processProposalState` is set based off of the last committed state +from the root store and is used to process a signed proposal received from a validator. +In this state, `runTx` is called and the `AnteHandler` is executed; the context used in this state is built with information +from the header and the main state, including the minimum gas prices, which are also set. +Again we want to highlight that the described behavior is that of the default handler and applications have the flexibility to define their own +[custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). -![ProcessProposal](/images/v0.47/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png) +![ProcessProposal](/docs/sdk/images/learn/advanced/baseapp_state-processproposal.png) -### BeginBlock State Updates[​](#beginblock-state-updates "Direct link to BeginBlock State Updates") +### BeginBlock State Updates -During `BeginBlock`, the `deliverState` is set for use in subsequent `DeliverTx` ABCI messages. The `deliverState` is based off of the last committed state from the root store and is branched. Note, the `deliverState` is set to `nil` on [`Commit`](#commit). +During `BeginBlock`, the `deliverState` is set for use in subsequent `DeliverTx` ABCI messages. The +`deliverState` is based off of the last committed state from the root store and is branched.
+Note, the `deliverState` is set to `nil` on [`Commit`](#commit). -![BeginBlock](/images/v0.47/learn/advanced/assets/images/baseapp_state-begin_block-dfbb5abb42a34744e7fe08df2f8d01e2.png) +![BeginBlock](/docs/sdk/images/learn/advanced/baseapp_state-begin_block.png) -### DeliverTx State Updates[​](#delivertx-state-updates "Direct link to DeliverTx State Updates") +### DeliverTx State Updates -The state flow for `DeliverTx` is nearly identical to `CheckTx` except state transitions occur on the `deliverState` and messages in a transaction are executed. Similarly to `CheckTx`, state transitions occur on a doubly branched state -- `deliverState`. Successful message execution results in writes being committed to `deliverState`. Note, if message execution fails, state transitions from the AnteHandler are persisted. +The state flow for `DeliverTx` is nearly identical to `CheckTx` except state transitions occur on +the `deliverState` and messages in a transaction are executed. Similarly to `CheckTx`, state transitions +occur on a doubly branched state -- `deliverState`. Successful message execution results in +writes being committed to `deliverState`. Note, if message execution fails, state transitions from +the AnteHandler are persisted. -![DeliverTx](/images/v0.47/learn/advanced/assets/images/baseapp_state-deliver_tx-5999f54501aa641d0c0a93279561f956.png) +![DeliverTx](/docs/sdk/images/learn/advanced/baseapp_state-deliver_tx.png) -### Commit State Updates[​](#commit-state-updates "Direct link to Commit State Updates") +### Commit State Updates -During `Commit` all the state transitions that occurred in the `deliverState` are finally written to the root `CommitMultiStore` which in turn is committed to disk and results in a new application root hash. These state transitions are now considered final. Finally, the `checkState` is set to the newly committed state and `deliverState` is set to `nil` to be reset on `BeginBlock`. 
+During `Commit` all the state transitions that occurred in the `deliverState` are finally written to +the root `CommitMultiStore` which in turn is committed to disk and results in a new application +root hash. These state transitions are now considered final. Finally, the `checkState` is set to the +newly committed state and `deliverState` is set to `nil` to be reset on `BeginBlock`. -![Commit](/images/v0.47/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png) +![Commit](/docs/sdk/images/learn/advanced/baseapp_state-commit.png) -## ParamStore[​](#paramstore "Direct link to ParamStore") +## ParamStore -During `InitChain`, the `RequestInitChain` provides `ConsensusParams` which contains parameters related to block execution such as maximum gas and size in addition to evidence parameters. If these parameters are non-nil, they are set in the BaseApp's `ParamStore`. Behind the scenes, the `ParamStore` is managed by an `x/consensus_params` module. This allows the parameters to be tweaked via on-chain governance. +During `InitChain`, the `RequestInitChain` provides `ConsensusParams` which contains parameters +related to block execution such as maximum gas and size in addition to evidence parameters. If these +parameters are non-nil, they are set in the BaseApp's `ParamStore`. Behind the scenes, the `ParamStore` +is managed by an `x/consensus_params` module. This allows the parameters to be tweaked via +on-chain governance. -## Service Routers[​](#service-routers "Direct link to Service Routers") +## Service Routers When messages and queries are received by the application, they must be routed to the appropriate module in order to be processed. Routing is done via `BaseApp`, which holds a `msgServiceRouter` for messages, and a `grpcQueryRouter` for queries. 
-### `Msg` Service Router[​](#msg-service-router "Direct link to msg-service-router") +### `Msg` Service Router -[`sdk.Msg`s](/v0.47/build/building-modules/messages-and-queries#messages) need to be routed after they are extracted from transactions, which are sent from the underlying CometBFT engine via the [`CheckTx`](#checktx) and [`DeliverTx`](#delivertx) ABCI messages. To do so, `BaseApp` holds a `msgServiceRouter` which maps fully-qualified service methods (`string`, defined in each module's Protobuf `Msg` service) to the appropriate module's `MsgServer` implementation. +[`sdk.Msg`s](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages) need to be routed after they are extracted from transactions, which are sent from the underlying CometBFT engine via the [`CheckTx`](#checktx) and [`DeliverTx`](#delivertx) ABCI messages. To do so, `BaseApp` holds a `msgServiceRouter` which maps fully-qualified service methods (`string`, defined in each module's Protobuf `Msg` service) to the appropriate module's `MsgServer` implementation. The [default `msgServiceRouter` included in `BaseApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/msg_service_router.go) is stateless. However, some applications may want to make use of more stateful routing mechanisms such as allowing governance to disable certain routes or point them to new modules for upgrade purposes. For this reason, the `sdk.Context` is also passed into each [route handler inside `msgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/msg_service_router.go#L31-L32). For a stateless router that doesn't want to make use of this, you can just ignore the `ctx`. 
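Conceptually, the `msgServiceRouter` maps fully-qualified Protobuf method names to handlers that receive the `ctx`. The sketch below is a minimal stand-alone model, not the SDK's implementation; the method name shown is the bank module's `Send` route, but the types are invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// Context stands in for sdk.Context; a stateful router could consult it.
type Context struct{ BlockHeight int64 }

// Handler processes a decoded message for one service method.
type Handler func(ctx Context, msg any) (string, error)

// MsgServiceRouter maps fully-qualified method names to module handlers.
type MsgServiceRouter struct{ routes map[string]Handler }

func NewMsgServiceRouter() *MsgServiceRouter {
	return &MsgServiceRouter{routes: map[string]Handler{}}
}

func (r *MsgServiceRouter) Register(method string, h Handler) { r.routes[method] = h }

// Route looks up the handler for a method and invokes it with the context.
func (r *MsgServiceRouter) Route(ctx Context, method string, msg any) (string, error) {
	h, ok := r.routes[method]
	if !ok {
		return "", errors.New("unrecognized message route: " + method)
	}
	return h(ctx, msg)
}

func main() {
	router := NewMsgServiceRouter()
	router.Register("/cosmos.bank.v1beta1.Msg/Send", func(ctx Context, msg any) (string, error) {
		return "transferred", nil // a stateless handler can simply ignore ctx
	})

	res, err := router.Route(Context{BlockHeight: 1}, "/cosmos.bank.v1beta1.Msg/Send", nil)
	fmt.Println(res, err)
}
```

Passing `ctx` through the lookup is what makes the stateful variants described above possible: a router could, for instance, consult on-chain parameters before dispatching.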
-The application's `msgServiceRouter` is initialized with all the routes using the application's [module manager](/v0.47/build/building-modules/module-manager#manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/v0.47/learn/beginner/overview-app#constructor-function). +The application's `msgServiceRouter` is initialized with all the routes using the application's [module manager](/docs/sdk/v0.47/documentation/module-system/module-manager#manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function). -### gRPC Query Router[​](#grpc-query-router "Direct link to gRPC Query Router") +### gRPC Query Router -Similar to `sdk.Msg`s, [`queries`](/v0.47/build/building-modules/messages-and-queries#queries) need to be routed to the appropriate module's [`Query` service](/v0.47/build/building-modules/query-services). To do so, `BaseApp` holds a `grpcQueryRouter`, which maps modules' fully-qualified service methods (`string`, defined in their Protobuf `Query` gRPC) to their `QueryServer` implementation. The `grpcQueryRouter` is called during the initial stages of query processing, which can be either by directly sending a gRPC query to the gRPC endpoint, or via the [`Query` ABCI message](#query) on the CometBFT RPC endpoint. +Similar to `sdk.Msg`s, [`queries`](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#queries) need to be routed to the appropriate module's [`Query` service](/docs/sdk/v0.47/documentation/module-system/query-services). To do so, `BaseApp` holds a `grpcQueryRouter`, which maps modules' fully-qualified service methods (`string`, defined in their Protobuf `Query` gRPC) to their `QueryServer` implementation. 
The `grpcQueryRouter` is called during the initial stages of query processing, which can be either by directly sending a gRPC query to the gRPC endpoint, or via the [`Query` ABCI message](#query) on the CometBFT RPC endpoint. -Just like the `msgServiceRouter`, the `grpcQueryRouter` is initialized with all the query routes using the application's [module manager](/v0.47/build/building-modules/module-manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/v0.47/learn/beginner/overview-app#app-constructor). +Just like the `msgServiceRouter`, the `grpcQueryRouter` is initialized with all the query routes using the application's [module manager](/docs/sdk/v0.47/documentation/module-system/module-manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/docs/sdk/v0.47/learn/beginner/overview-app#app-constructor). -## Main ABCI 1.0 Messages[​](#main-abci-10-messages "Direct link to Main ABCI 1.0 Messages") +## Main ABCI 1.0 Messages The [Application-Blockchain Interface](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md) (ABCI) is a generic interface that connects a state-machine with a consensus engine to form a functional full-node. It can be wrapped in any language, and needs to be implemented by each application-specific blockchain built on top of an ABCI-compatible consensus engine like CometBFT. The consensus engine handles two main tasks: -* The networking logic, which mainly consists in gossiping block parts, transactions and consensus votes. -* The consensus logic, which results in the deterministic ordering of transactions in the form of blocks. +- The networking logic, which mainly consists in gossiping block parts, transactions and consensus votes. +- The consensus logic, which results in the deterministic ordering of transactions in the form of blocks. 
It is **not** the role of the consensus engine to define the state or the validity of transactions. Generally, transactions are handled by the consensus engine in the form of `[]bytes`, and relayed to the application via the ABCI to be decoded and processed. At key moments in the networking and consensus processes (e.g. beginning of a block, commit of a block, reception of an unconfirmed transaction, ...), the consensus engine emits ABCI messages for the state-machine to act on. Developers building on top of the Cosmos SDK need not implement the ABCI themselves, as `BaseApp` comes with a built-in implementation of the interface. Let us go through the main ABCI messages that `BaseApp` implements: -* [`Prepare Proposal`](#prepare-proposal) -* [`Process Proposal`](#process-proposal) -* [`CheckTx`](#checktx) -* [`DeliverTx`](#delivertx) +- [`Prepare Proposal`](#prepare-proposal) +- [`Process Proposal`](#process-proposal) +- [`CheckTx`](#checktx) +- [`DeliverTx`](#delivertx) -### Prepare Proposal[​](#prepare-proposal "Direct link to Prepare Proposal") +### Prepare Proposal The `PrepareProposal` function is one of the new methods introduced in the Application Blockchain Interface (ABCI++) in CometBFT and is an important part of the application's overall governance system. In the Cosmos SDK, it allows the application to have more fine-grained control over the transactions that are processed, and ensures that only valid transactions are committed to the blockchain. Here is how the `PrepareProposal` function can be implemented: 1. Extract the `sdk.Msg`s from the transaction. -2. Perform *stateful* checks by calling `Validate()` on each of the `sdk.Msg`'s. This is done after *stateless* checks as *stateful* checks are more computationally expensive. If `Validate()` fails, `PrepareProposal` returns before running further checks, which saves resources. +2. Perform _stateful_ checks by calling `Validate()` on each of the `sdk.Msg`s.
This is done after _stateless_ checks as _stateful_ checks are more computationally expensive. If `Validate()` fails, `PrepareProposal` returns before running further checks, which saves resources. 3. Perform any additional checks that are specific to the application, such as checking account balances, or ensuring that certain conditions are met before a transaction is proposed. 4. Return the updated transactions to be processed by the consensus engine. @@ -186,12 +1444,12 @@ It's important to note that `PrepareProposal` complements the `ProcessProposal` `PrepareProposal` returns a response to the underlying consensus engine of type [`abci.ResponsePrepareProposal`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#prepareproposal). The response contains: -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. +- `Code (uint32)`: Response Code. `0` if successful. +- `Data ([]byte)`: Result bytes, if any. +- `Log (string):` The output of the application's logger. May be non-deterministic. +- `Info (string):` Additional information. May be non-deterministic. -### Process Proposal[​](#process-proposal "Direct link to Process Proposal") +### Process Proposal The `ProcessProposal` function is called by the BaseApp as part of the ABCI message flow, and is executed during the `BeginBlock` phase of the consensus process. The purpose of this function is to give more control to the application for block validation, allowing it to check all transactions in a proposed block before the validator sends the prevote for the block. It allows a validator to perform application-dependent work in a proposed block, enabling features such as immediate block execution, and allows the Application to reject invalid blocks.
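To make the division of labor between the two handlers concrete, here is a toy sketch of a prepare/process pair that shares one validation function, as the text stresses is required. All types and names are stand-ins, not the actual ABCI types.

```go
package main

import "fmt"

// Minimal stand-ins for the ABCI proposal request/response types.
type PrepareRequest struct {
	Txs        [][]byte
	MaxTxBytes int64
}
type PrepareResponse struct{ Txs [][]byte }
type ProcessRequest struct{ Txs [][]byte }

const (
	Accept = "ACCEPT"
	Reject = "REJECT"
)

// validateTx stands in for decoding plus stateful checks; here a tx is
// "valid" if it is non-empty. Both handlers MUST share this logic.
func validateTx(tx []byte) bool { return len(tx) > 0 }

// prepareProposal keeps only valid txs that fit within the size limit.
func prepareProposal(req PrepareRequest) PrepareResponse {
	var kept [][]byte
	var total int64
	for _, tx := range req.Txs {
		if !validateTx(tx) {
			continue // drop invalid txs instead of proposing them
		}
		if total+int64(len(tx)) > req.MaxTxBytes {
			break
		}
		total += int64(len(tx))
		kept = append(kept, tx)
	}
	return PrepareResponse{Txs: kept}
}

// processProposal rejects the whole block if any tx fails the SAME validation.
func processProposal(req ProcessRequest) string {
	for _, tx := range req.Txs {
		if !validateTx(tx) {
			return Reject
		}
	}
	return Accept
}

func main() {
	prep := prepareProposal(PrepareRequest{
		Txs:        [][]byte{[]byte("tx1"), {}, []byte("tx2")},
		MaxTxBytes: 1024,
	})
	fmt.Println(len(prep.Txs))                                  // 2: empty tx dropped
	fmt.Println(processProposal(ProcessRequest{Txs: prep.Txs})) // ACCEPT
}
```

If the two handlers diverged (say, `processProposal` used a stricter check), a proposer's honestly-built block could be rejected by its peers, which is exactly the failure mode the shared-validation requirement guards against.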
@@ -212,52 +1470,309 @@ However, developers must exercise greater caution when using these methods. Inco `ProcessProposal` returns a response to the underlying consensus engine of type [`abci.ResponseProcessProposal`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#processproposal). The response contains: -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. +- `Code (uint32)`: Response Code. `0` if successful. +- `Data ([]byte)`: Result bytes, if any. +- `Log (string):` The output of the application's logger. May be non-deterministic. +- `Info (string):` Additional information. May be non-deterministic. -### CheckTx[​](#checktx "Direct link to CheckTx") +### CheckTx -`CheckTx` is sent by the underlying consensus engine when a new unconfirmed (i.e. not yet included in a valid block) transaction is received by a full-node. The role of `CheckTx` is to guard the full-node's mempool (where unconfirmed transactions are stored until they are included in a block) from spam transactions. Unconfirmed transactions are relayed to peers only if they pass `CheckTx`. +`CheckTx` is sent by the underlying consensus engine when a new unconfirmed (i.e. not yet included in a valid block) +transaction is received by a full-node. The role of `CheckTx` is to guard the full-node's mempool +(where unconfirmed transactions are stored until they are included in a block) from spam transactions. +Unconfirmed transactions are relayed to peers only if they pass `CheckTx`.
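The guard role of `CheckTx` can be modeled with a toy example (all names hypothetical): fees are deducted from a volatile copy of the committed balances, so a second transaction from a drained account fails the check while the committed state is never touched.

```go
package main

import "fmt"

// Tx is a toy transaction carrying only a sender and a fee.
type Tx struct {
	Sender string
	Fee    int64
}

// App holds a committed (canonical) balance map and a volatile copy that
// plays the role of checkState.
type App struct {
	committed  map[string]int64
	checkState map[string]int64
}

func NewApp(balances map[string]int64) *App {
	app := &App{committed: balances}
	app.resetCheckState()
	return app
}

// resetCheckState mirrors the committed state, as happens after Commit.
func (a *App) resetCheckState() {
	a.checkState = map[string]int64{}
	for k, v := range a.committed {
		a.checkState[k] = v
	}
}

// CheckTx performs a lightweight fee check against checkState only; the
// committed balances are never modified here.
func (a *App) CheckTx(tx Tx) bool {
	if a.checkState[tx.Sender] < tx.Fee {
		return false // not enough funds left in the volatile state
	}
	a.checkState[tx.Sender] -= tx.Fee
	return true
}

func main() {
	app := NewApp(map[string]int64{"alice": 10})

	fmt.Println(app.CheckTx(Tx{Sender: "alice", Fee: 8})) // true
	fmt.Println(app.CheckTx(Tx{Sender: "alice", Fee: 8})) // false: checkState drained
	fmt.Println(app.committed["alice"])                   // 10: committed state untouched
}
```

This is the same spam-protection idea the section develops below: the node tracks tentative effects of pending transactions without ever committing them.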
+`CheckTx()` can perform both _stateful_ and _stateless_ checks, but developers should strive to +make the checks **lightweight** because gas fees are not charged for the resources (CPU, data load...) used during the `CheckTx`. -In the Cosmos SDK, after [decoding transactions](/v0.47/learn/advanced/encoding), `CheckTx()` is implemented to do the following checks: +In the Cosmos SDK, after [decoding transactions](/docs/sdk/v0.47/learn/advanced/encoding), `CheckTx()` is implemented +to do the following checks: 1. Extract the `sdk.Msg`s from the transaction. -2. **Optionally** perform *stateless* checks by calling `ValidateBasic()` on each of the `sdk.Msg`s. This is done first, as *stateless* checks are less computationally expensive than *stateful* checks. If `ValidateBasic()` fail, `CheckTx` returns before running *stateful* checks, which saves resources. This check is still performed for messages that have not yet migrated to the new message validation mechanism defined in [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) and still have a `ValidateBasic()` method. -3. Perform non-module related *stateful* checks on the [account](/v0.47/learn/beginner/accounts). This step is mainly about checking that the `sdk.Msg` signatures are valid, that enough fees are provided and that the sending account has enough funds to pay for said fees. Note that no precise [`gas`](/v0.47/learn/beginner/gas-fees) counting occurs here, as `sdk.Msg`s are not processed. Usually, the [`AnteHandler`](/v0.47/learn/beginner/gas-fees#antehandler) will check that the `gas` provided with the transaction is superior to a minimum reference gas amount based on the raw transaction size, in order to avoid spam with transactions that provide 0 gas. +2. **Optionally** perform _stateless_ checks by calling `ValidateBasic()` on each of the `sdk.Msg`s. This is done + first, as _stateless_ checks are less computationally expensive than _stateful_ checks. 
If
+ `ValidateBasic()` fails, `CheckTx` returns before running _stateful_ checks, which saves resources.
+ This check is still performed for messages that have not yet migrated to the new message validation mechanism defined in [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) and still have a `ValidateBasic()` method.
+3. Perform non-module related _stateful_ checks on the [account](/docs/sdk/v0.47/learn/beginner/accounts). This step is mainly about checking
+ that the `sdk.Msg` signatures are valid, that enough fees are provided and that the sending account
+ has enough funds to pay for said fees. Note that no precise [`gas`](/docs/sdk/v0.47/learn/beginner/gas-fees) counting occurs here,
+ as `sdk.Msg`s are not processed. Usually, the [`AnteHandler`](/docs/sdk/v0.47/learn/beginner/gas-fees#antehandler) will check that the `gas` provided
+ with the transaction is greater than a minimum reference gas amount based on the raw transaction size,
+ in order to avoid spam with transactions that provide 0 gas.

`CheckTx` does **not** process `sdk.Msg`s - they only need to be processed when the canonical state needs to be updated, which happens during `DeliverTx`.

-Steps 2. and 3. are performed by the [`AnteHandler`](/v0.47/learn/beginner/gas-fees#antehandler) in the [`RunTx()`](#runtx) function, which `CheckTx()` calls with the `runTxModeCheck` mode. During each step of `CheckTx()`, a special [volatile state](#state-updates) called `checkState` is updated. This state is used to keep track of the temporary changes triggered by the `CheckTx()` calls of each transaction without modifying the [main canonical state](#state-updates). For example, when a transaction goes through `CheckTx()`, the transaction's fees are deducted from the sender's account in `checkState`. 
If a second transaction is received from the same account before the first is processed, and the account has consumed all its funds in `checkState` during the first transaction, the second transaction will fail `CheckTx`() and be rejected. In any case, the sender's account will not actually pay the fees until the transaction is actually included in a block, because `checkState` never gets committed to the main state. The `checkState` is reset to the latest state of the main state each time a blocks gets [committed](#commit). - -`CheckTx` returns a response to the underlying consensus engine of type [`abci.ResponseCheckTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#checktx). The response contains: - -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. -* `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction. -* `GasUsed (int64)`: Amount of gas consumed by transaction. During `CheckTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction. Next is an example: - -x/auth/ante/basic.go - -``` -loading... +Steps 2. and 3. are performed by the [`AnteHandler`](/docs/sdk/v0.47/learn/beginner/gas-fees#antehandler) in the [`RunTx()`](#runtx) +function, which `CheckTx()` calls with the `runTxModeCheck` mode. During each step of `CheckTx()`, a +special [volatile state](#state-updates) called `checkState` is updated. This state is used to keep +track of the temporary changes triggered by the `CheckTx()` calls of each transaction without modifying +the [main canonical state](#state-updates). 
For example, when a transaction goes through `CheckTx()`, the
+transaction's fees are deducted from the sender's account in `checkState`. If a second transaction is
+received from the same account before the first is processed, and the account has consumed all its
+funds in `checkState` during the first transaction, the second transaction will fail `CheckTx()` and
+be rejected. In any case, the sender's account will not actually pay the fees until the transaction
+is actually included in a block, because `checkState` never gets committed to the main state. The
+`checkState` is reset to the latest state of the main state each time a block gets [committed](#commit).
+
+`CheckTx` returns a response to the underlying consensus engine of type [`abci.ResponseCheckTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#checktx).
+The response contains:
+
+- `Code (uint32)`: Response Code. `0` if successful.
+- `Data ([]byte)`: Result bytes, if any.
+- `Log (string):` The output of the application's logger. May be non-deterministic.
+- `Info (string):` Additional information. May be non-deterministic.
+- `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction.
+- `GasUsed (int64)`: Amount of gas consumed by transaction. During `CheckTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction. 
Next is an example: + +```go expandable +package ante + +import ( + + "github.com/cosmos/cosmos-sdk/codec/legacy" + "github.com/cosmos/cosmos-sdk/crypto/keys/multisig" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/migrations/legacytx" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ ValidateBasicDecorator will call tx.ValidateBasic and return any non-nil error. +/ If ValidateBasic passes, decorator calls next AnteHandler in chain. Note, +/ ValidateBasicDecorator decorator will not get executed on ReCheckTx since it +/ is not dependent on application state. +type ValidateBasicDecorator struct{ +} + +func NewValidateBasicDecorator() + +ValidateBasicDecorator { + return ValidateBasicDecorator{ +} +} + +func (vbd ValidateBasicDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + / no need to validate basic on recheck tx, call next antehandler + if ctx.IsReCheckTx() { + return next(ctx, tx, simulate) +} + if err := tx.ValidateBasic(); err != nil { + return ctx, err +} + +return next(ctx, tx, simulate) +} + +/ ValidateMemoDecorator will validate memo given the parameters passed in +/ If memo is too large decorator returns with error, otherwise call next AnteHandler +/ CONTRACT: Tx must implement TxWithMemo interface +type ValidateMemoDecorator struct { + ak AccountKeeper +} + +func NewValidateMemoDecorator(ak AccountKeeper) + +ValidateMemoDecorator { + return ValidateMemoDecorator{ + ak: ak, +} +} + +func (vmd ValidateMemoDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + memoTx, ok := tx.(sdk.TxWithMemo) + if !ok { + return ctx, sdkerrors.Wrap(sdkerrors.ErrTxDecode, "invalid transaction type") +} + memoLength := 
len(memoTx.GetMemo()) + if memoLength > 0 { + params := vmd.ak.GetParams(ctx) + if uint64(memoLength) > params.MaxMemoCharacters { + return ctx, sdkerrors.Wrapf(sdkerrors.ErrMemoTooLarge, + "maximum number of characters is %d but received %d characters", + params.MaxMemoCharacters, memoLength, + ) +} + +} + +return next(ctx, tx, simulate) +} + +/ ConsumeTxSizeGasDecorator will take in parameters and consume gas proportional +/ to the size of tx before calling next AnteHandler. Note, the gas costs will be +/ slightly over estimated due to the fact that any given signing account may need +/ to be retrieved from state. +/ +/ CONTRACT: If simulate=true, then signatures must either be completely filled +/ in or empty. +/ CONTRACT: To use this decorator, signatures of transaction must be represented +/ as legacytx.StdSignature otherwise simulate mode will incorrectly estimate gas cost. +type ConsumeTxSizeGasDecorator struct { + ak AccountKeeper +} + +func NewConsumeGasForTxSizeDecorator(ak AccountKeeper) + +ConsumeTxSizeGasDecorator { + return ConsumeTxSizeGasDecorator{ + ak: ak, +} +} + +func (cgts ConsumeTxSizeGasDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + sigTx, ok := tx.(authsigning.SigVerifiableTx) + if !ok { + return ctx, sdkerrors.Wrap(sdkerrors.ErrTxDecode, "invalid tx type") +} + params := cgts.ak.GetParams(ctx) + +ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*sdk.Gas(len(ctx.TxBytes())), "txSize") + + / simulate gas cost for signatures in simulate mode + if simulate { + / in simulate mode, each element should be a nil signature + sigs, err := sigTx.GetSignaturesV2() + if err != nil { + return ctx, err +} + n := len(sigs) + for i, signer := range sigTx.GetSigners() { + / if signature is already filled in, no need to simulate gas cost + if i < n && !isIncompleteSignature(sigs[i].Data) { + continue +} + +var pubkey cryptotypes.PubKey + acc := cgts.ak.GetAccount(ctx, signer) + + / use 
placeholder simSecp256k1Pubkey if sig is nil + if acc == nil || acc.GetPubKey() == nil { + pubkey = simSecp256k1Pubkey +} + +else { + pubkey = acc.GetPubKey() +} + + / use stdsignature to mock the size of a full signature + simSig := legacytx.StdSignature{ /nolint:staticcheck / this will be removed when proto is ready + Signature: simSecp256k1Sig[:], + PubKey: pubkey, +} + sigBz := legacy.Cdc.MustMarshal(simSig) + cost := sdk.Gas(len(sigBz) + 6) + + / If the pubkey is a multi-signature pubkey, then we estimate for the maximum + / number of signers. + if _, ok := pubkey.(*multisig.LegacyAminoPubKey); ok { + cost *= params.TxSigLimit +} + +ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*cost, "txSize") +} + +} + +return next(ctx, tx, simulate) +} + +/ isIncompleteSignature tests whether SignatureData is fully filled in for simulation purposes +func isIncompleteSignature(data signing.SignatureData) + +bool { + if data == nil { + return true +} + switch data := data.(type) { + case *signing.SingleSignatureData: + return len(data.Signature) == 0 + case *signing.MultiSignatureData: + if len(data.Signatures) == 0 { + return true +} + for _, s := range data.Signatures { + if isIncompleteSignature(s) { + return true +} + +} + +} + +return false +} + +type ( + / TxTimeoutHeightDecorator defines an AnteHandler decorator that checks for a + / tx height timeout. + TxTimeoutHeightDecorator struct{ +} + + / TxWithTimeoutHeight defines the interface a tx must implement in order for + / TxHeightTimeoutDecorator to process the tx. + TxWithTimeoutHeight interface { + sdk.Tx + + GetTimeoutHeight() + +uint64 +} +) + +/ TxTimeoutHeightDecorator defines an AnteHandler decorator that checks for a +/ tx height timeout. 
+func NewTxTimeoutHeightDecorator() + +TxTimeoutHeightDecorator { + return TxTimeoutHeightDecorator{ +} +} + +/ AnteHandle implements an AnteHandler decorator for the TxHeightTimeoutDecorator +/ type where the current block height is checked against the tx's height timeout. +/ If a height timeout is provided (non-zero) + +and is less than the current block +/ height, then an error is returned. +func (txh TxTimeoutHeightDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + timeoutTx, ok := tx.(TxWithTimeoutHeight) + if !ok { + return ctx, sdkerrors.Wrap(sdkerrors.ErrTxDecode, "expected tx to implement TxWithTimeoutHeight") +} + timeoutHeight := timeoutTx.GetTimeoutHeight() + if timeoutHeight > 0 && uint64(ctx.BlockHeight()) > timeoutHeight { + return ctx, sdkerrors.Wrapf( + sdkerrors.ErrTxTimeoutHeight, "block height: %d, timeout height: %d", ctx.BlockHeight(), timeoutHeight, + ) +} + +return next(ctx, tx, simulate) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/ante/basic.go#L96) +- `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). See [`event`s](/docs/sdk/v0.47/learn/advanced/events) for more. +- `Codespace (string)`: Namespace for the Code. -* `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). See [`event`s](/v0.47/learn/advanced/events) for more. -* `Codespace (string)`: Namespace for the Code. +#### RecheckTx -#### RecheckTx[​](#rechecktx "Direct link to RecheckTx") +After `Commit`, `CheckTx` is run again on all transactions that remain in the node's local mempool +excluding the transactions that are included in the block. To prevent the mempool from rechecking all transactions +every time a block is committed, the configuration option `mempool.recheck=false` can be set. 
As of
+Tendermint v0.32.1, an additional `Type` parameter is made available to the `CheckTx` function that
+indicates whether an incoming transaction is new (`CheckTxType_New`), or a recheck (`CheckTxType_Recheck`).
+This allows certain checks, like signature verification, to be skipped during `CheckTxType_Recheck`.

-After `Commit`, `CheckTx` is run again on all transactions that remain in the node's local mempool excluding the transactions that are included in the block. To prevent the mempool from rechecking all transactions every time a block is committed, the configuration option `mempool.recheck=false` can be set. As of Tendermint v0.32.1, an additional `Type` parameter is made available to the `CheckTx` function that indicates whether an incoming transaction is new (`CheckTxType_New`), or a recheck (`CheckTxType_Recheck`). This allows certain checks like signature verification can be skipped during `CheckTxType_Recheck`.
-
-### DeliverTx[​](#delivertx "Direct link to DeliverTx")
+### DeliverTx

When the underlying consensus engine receives a block proposal, each transaction in the block needs to be processed by the application. To that end, the underlying consensus engine sends a `DeliverTx` message to the application for each transaction in sequential order.

@@ -266,155 +1781,2931 @@ Before the first transaction of a given block is processed, a [volatile state](#

`DeliverTx` performs the **exact same steps as `CheckTx`**, with a little caveat at step 3 and the addition of a fifth step:

1. The `AnteHandler` does **not** check that the transaction's `gas-prices` is sufficient. That is because the `min-gas-prices` value that `gas-prices` is checked against is local to the node, and therefore what is enough for one full-node might not be for another. This means that the proposer can potentially include transactions for free, although they are not incentivised to do so, as they earn a bonus on the total fee of the block they propose.
-2. 
For each `sdk.Msg` in the transaction, route to the appropriate module's Protobuf [`Msg` service](/v0.47/build/building-modules/msg-services). Additional *stateful* checks are performed, and the branched multistore held in `deliverState`'s `context` is updated by the module's `keeper`. If the `Msg` service returns successfully, the branched multistore held in `context` is written to `deliverState` `CacheMultiStore`. +2. For each `sdk.Msg` in the transaction, route to the appropriate module's Protobuf [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services). Additional _stateful_ checks are performed, and the branched multistore held in `deliverState`'s `context` is updated by the module's `keeper`. If the `Msg` service returns successfully, the branched multistore held in `context` is written to `deliverState` `CacheMultiStore`. During the additional fifth step outlined in (2), each read/write to the store increases the value of `GasConsumed`. You can find the default cost of each operation: -store/types/gas.go +```go expandable +package types -``` -loading... -``` +import ( + + "fmt" + "math" +) + +/ Gas consumption descriptors. +const ( + GasIterNextCostFlatDesc = "IterNextFlat" + GasValuePerByteDesc = "ValuePerByte" + GasWritePerByteDesc = "WritePerByte" + GasReadPerByteDesc = "ReadPerByte" + GasWriteCostFlatDesc = "WriteFlat" + GasReadCostFlatDesc = "ReadFlat" + GasHasDesc = "Has" + GasDeleteDesc = "Delete" +) + +/ Gas measured by the SDK +type Gas = uint64 + +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} + +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. +type ErrorOutOfGas struct { + Descriptor string +} + +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. 
+type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/store/types/gas.go#L230-L241) +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ NOTE: This behaviour is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behaviour is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the trasaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. 
+func (g *infiniteGasMeter) + +String() + +string { + return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed) +} + +/ GasConfig defines gas cost for each operation on KVStores +type GasConfig struct { + HasCost Gas + DeleteCost Gas + ReadCostFlat Gas + ReadCostPerByte Gas + WriteCostFlat Gas + WriteCostPerByte Gas + IterNextCostFlat Gas +} + +/ KVGasConfig returns a default gas config for KVStores. +func KVGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 1000, + DeleteCost: 1000, + ReadCostFlat: 1000, + ReadCostPerByte: 3, + WriteCostFlat: 2000, + WriteCostPerByte: 30, + IterNextCostFlat: 30, +} +} + +/ TransientGasConfig returns a default gas config for TransientStores. +func TransientGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 100, + DeleteCost: 100, + ReadCostFlat: 100, + ReadCostPerByte: 0, + WriteCostFlat: 200, + WriteCostPerByte: 3, + IterNextCostFlat: 3, +} +} +``` At any point, if `GasConsumed > GasWanted`, the function returns with `Code != 0` and `DeliverTx` fails. `DeliverTx` returns a response to the underlying consensus engine of type [`abci.ResponseDeliverTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#delivertx). The response contains: -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. -* `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction. -* `GasUsed (int64)`: Amount of gas consumed by transaction. During `DeliverTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction, and by adding gas each time a read/write to the store occurs. -* `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). 
See [`event`s](/v0.47/learn/advanced/events) for more. -* `Codespace (string)`: Namespace for the Code. +- `Code (uint32)`: Response Code. `0` if successful. +- `Data ([]byte)`: Result bytes, if any. +- `Log (string):` The output of the application's logger. May be non-deterministic. +- `Info (string):` Additional information. May be non-deterministic. +- `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction. +- `GasUsed (int64)`: Amount of gas consumed by transaction. During `DeliverTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction, and by adding gas each time a read/write to the store occurs. +- `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). See [`event`s](/docs/sdk/v0.47/learn/advanced/events) for more. +- `Codespace (string)`: Namespace for the Code. -## RunTx, AnteHandler, RunMsgs, PostHandler[​](#runtx-antehandler-runmsgs-posthandler "Direct link to RunTx, AnteHandler, RunMsgs, PostHandler") +## RunTx, AnteHandler, RunMsgs, PostHandler -### RunTx[​](#runtx "Direct link to RunTx") +### RunTx `RunTx` is called from `CheckTx`/`DeliverTx` to handle the transaction, with `runTxModeCheck` or `runTxModeDeliver` as parameter to differentiate between the two modes of execution. Note that when `RunTx` receives a transaction, it has already been decoded. -The first thing `RunTx` does upon being called is to retrieve the `context`'s `CacheMultiStore` by calling the `getContextForTx()` function with the appropriate mode (either `runTxModeCheck` or `runTxModeDeliver`). This `CacheMultiStore` is a branch of the main store, with cache functionality (for query requests), instantiated during `BeginBlock` for `DeliverTx` and during the `Commit` of the previous block for `CheckTx`. After that, two `defer func()` are called for [`gas`](/v0.47/learn/beginner/gas-fees) management. 
They are executed when `runTx` returns and make sure `gas` is actually consumed, and will throw errors, if any.
+The first thing `RunTx` does upon being called is to retrieve the `context`'s `CacheMultiStore` by calling the `getContextForTx()` function with the appropriate mode (either `runTxModeCheck` or `runTxModeDeliver`). This `CacheMultiStore` is a branch of the main store, with cache functionality (for query requests), instantiated during `BeginBlock` for `DeliverTx` and during the `Commit` of the previous block for `CheckTx`. After that, two `defer func()` are called for [`gas`](/docs/sdk/v0.47/learn/beginner/gas-fees) management. They are executed when `runTx` returns and make sure `gas` is actually consumed, and will throw errors, if any.

-After that, `RunTx()` calls `ValidateBasic()`, when available and for backward compatibility, on each `sdk.Msg`in the `Tx`, which runs preliminary *stateless* validity checks. If any `sdk.Msg` fails to pass `ValidateBasic()`, `RunTx()` returns with an error.
+After that, `RunTx()` calls `ValidateBasic()`, when available and for backward compatibility, on each `sdk.Msg` in the `Tx`, which runs preliminary _stateless_ validity checks. If any `sdk.Msg` fails to pass `ValidateBasic()`, `RunTx()` returns with an error.

Then, the [`anteHandler`](#antehandler) of the application is run (if it exists). In preparation for this step, both the `checkState`/`deliverState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function.
-baseapp/baseapp.go +```go expandable +package baseapp + +import ( + + "errors" + "fmt" + "sort" + "strings" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/crypto/tmhash" + "github.com/tendermint/tendermint/libs/log" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + dbm "github.com/tendermint/tm-db" + "golang.org/x/exp/maps" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/snapshots" + "github.com/cosmos/cosmos-sdk/store" + "github.com/cosmos/cosmos-sdk/store/rootmulti" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +const ( + runTxModeCheck runTxMode = iota / Check a transaction + runTxModeReCheck / Recheck a (pending) + +transaction after a commit + runTxModeSimulate / Simulate a transaction + runTxModeDeliver / Deliver a transaction + runTxPrepareProposal + runTxProcessProposal +) + +var _ abci.Application = (*BaseApp)(nil) + +type ( + / Enum mode for app.runTx + runTxMode uint8 + + / StoreLoader defines a customizable function to control how we load the CommitMultiStore + / from disk. This is useful for state migration, when loading a datastore written with + / an older version of the software. In particular, if a module changed the substore key name + / (or removed a substore) + +between two versions of the software. + StoreLoader func(ms sdk.CommitMultiStore) + +error +) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { /nolint: maligned + / initialized on creation + logger log.Logger + name string / application name from abci.Info + db dbm.DB / common DB backend + cms sdk.CommitMultiStore / Main (uncached) + +state + qms sdk.MultiStore / Optional alternative multistore for querying only. 
+ storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.AnteHandler / post handler, optional, e.g. for tips + initChainer sdk.InitChainer / initialize state with validators and state blob + beginBlocker sdk.BeginBlocker / logic to run before any txs + processProposal sdk.ProcessProposalHandler / the handler which runs on ABCI ProcessProposal + prepareProposal sdk.PrepareProposalHandler / the handler which runs on ABCI PrepareProposal + endBlocker sdk.EndBlocker / logic to run after all txs, and to determine valset changes + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / checkState is set on InitChain and reset on Commit + / deliverState is set on InitChain and BeginBlock and set to nil on Commit + checkState *state / for CheckTx + deliverState *state / for DeliverTx + processProposalState *state / for ProcessProposal + prepareProposalState *state / for PrepareProposal + + / an inter-block write-through cache provided to the context during deliverState + interBlockCache sdk.MultiStorePersistentCache + + / absent validators from begin block + voteInfos []abci.VoteInfo + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. 
+ paramStore ParamStore + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the baseapp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from Tendermint. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: Tendermint block pruning is dependant on this parameter in conunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs Tendermint what to index. If empty, all events will be indexed. 
+ indexEvents map[string]struct{ +} + + / abciListeners for hooking into the ABCI message processing of the BaseApp + / and exposing the requests and responses to external consumers + abciListeners []ABCIListener +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. +/ +/ NOTE: The db is used to store the version number for now. +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db), + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + if app.processProposal == nil { + app.SetProcessProposal(app.DefaultProcessProposal()) +} + if app.prepareProposal == nil { + app.SetPrepareProposal(app.DefaultPrepareProposal()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. 
+func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} -``` -loading... -``` +} +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/baseapp.go#L663-L672) +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. +func (app *BaseApp) -This allows `RunTx` not to commit the changes made to the state during the execution of `anteHandler` if it ends up failing. It also prevents the module implementing the `anteHandler` from writing to state, which is an important part of the [object-capabilities](/v0.47/learn/advanced/ocap) of the Cosmos SDK. 
+MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} -Finally, the [`RunMsgs()`](#runmsgs) function is called to process the `sdk.Msg`s in the `Tx`. In preparation of this step, just like with the `anteHandler`, both the `checkState`/`deliverState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function. +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} -### AnteHandler[​](#antehandler "Direct link to AnteHandler") +} +} -The `AnteHandler` is a special handler that implements the `AnteHandler` interface and is used to authenticate the transaction before the transaction's internal messages are processed. +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) -types/handler.go +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := maps.Keys(keys) + +sort.Strings(skeys) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. 
+func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms sdk.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. +func (app *BaseApp) + +CommitMultiStore() + +sdk.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. 
+func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + emptyHeader := tmproto.Header{ +} + + / needed for the export command which inits from store but never calls initchain + app.setState(runTxModeCheck, emptyHeader) + + / needed for ABCI Replay Blocks mode which calls Prepare/Process proposal (InitChain is not called) + +app.setState(runTxPrepareProposal, emptyHeader) + +app.setState(runTxProcessProposal, emptyHeader) + +app.Seal() + +rms, ok := app.cms.(*rootmulti.Store) + if !ok { + return fmt.Errorf("invalid commit multi-store; expected %T, got: %T", &rootmulti.Store{ +}, app.cms) +} + +return rms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache sdk.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. 
+func (app *BaseApp) + +setState(mode runTxMode, header tmproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger), +} + switch mode { + case runTxModeCheck: + / Minimum gas prices are also set. It is set on InitChain and reset on Commit. + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + +app.checkState = baseState + case runTxModeDeliver: + / It is set on InitChain and BeginBlock and set to nil on Commit. + app.deliverState = baseState + case runTxPrepareProposal: + / It is set on InitChain and Commit. + app.prepareProposalState = baseState + case runTxProcessProposal: + / It is set on InitChain and Commit. + app.processProposalState = baseState + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. +func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) *tmproto.ConsensusParams { + if app.paramStore == nil { + return nil +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(err) +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the baseapp's param store. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp *tmproto.ConsensusParams) { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") +} + if cp == nil { + return +} + +app.paramStore.Set(ctx, cp) + / We're explicitly not storing the Tendermint app_version in the param store. It's + / stored instead in the x/upgrade store, with its own bump logic. +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. 
+func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp == nil || cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateHeight(req abci.RequestBeginBlock) + +error { + if req.Header.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Header.Height) +} + + / expectedHeight holds the expected height to validate. + var expectedHeight int64 + if app.LastBlockHeight() == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain (no + / previous commit). The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / - either there was already a previous commit in the store, in which + / case we increment the version from there, + / - or there was no previous commit, and initial version was not set, + / in which case we start at version 1. + expectedHeight = app.LastBlockHeight() + 1 +} + if req.Header.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Header.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. 
+func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + err := msg.ValidateBasic() + if err != nil { + return err +} + +} + +return nil +} + +/ Returns the application's deliverState if app is in runTxModeDeliver, +/ prepareProposalState if app is in runTxPrepareProposal, processProposalState +/ if app is in runTxProcessProposal, and checkState otherwise. +func (app *BaseApp) + +getState(mode runTxMode) *state { + switch mode { + case runTxModeDeliver: + return app.deliverState + case runTxPrepareProposal: + return app.prepareProposalState + case runTxProcessProposal: + return app.processProposalState + default: + return app.checkState +} +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode runTxMode, txBytes []byte) + +sdk.Context { + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.ctx. + WithTxBytes(txBytes). + WithVoteInfos(app.voteInfos) + +ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == runTxModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == runTxModeSimulate { + ctx, _ = ctx.CacheContext() +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. 
+func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, sdk.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + sdk.TraceContext( + map[string]interface{ +}{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(sdk.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +func (app *BaseApp) + +runTx(mode runTxMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, priority int64, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == runTxModeDeliver && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, 0, sdkerrors.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + / consumeBlockGas makes sure block gas is consumed at most once. It must happen after + / tx processing, and must be executed even if tx processing fails. Hence, we use trick with `defer` + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: This must exist in a separate defer function for the above recovery + / to recover from this one. + if mode == runTxModeDeliver { + defer consumeBlockGas() +} + +tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ +}, nil, nil, 0, err +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, 0, err +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache sdk.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. 
This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == runTxModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, 0, err +} + +priority = ctx.Priority() + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + if mode == runTxModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, priority, err +} + +} + +else if mode == runTxModeDeliver { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, priority, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + result, err = app.runMsgs(runMsgCtx, msgs, mode) + if err == nil { + + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. 
+ if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.postHandler(postCtx, tx, mode == runTxModeSimulate) + if err != nil { + return gInfo, nil, anteEvents, priority, err +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if mode == runTxModeDeliver { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == runTxModeDeliver || mode == runTxModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, priority, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, mode runTxMode) (*sdk.Result, error) { + msgLogs := make(sdk.ABCIMessageLogs, 0, len(msgs)) + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != runTxModeDeliver && mode != runTxModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, sdkerrors.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents := createEvents(msgResult.GetEvents(), msg) + + / append message events, data and logs + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + +msgLogs = append(msgLogs, sdk.NewABCIMessageLog(uint32(i), msgResult.Log, msgEvents)) +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, sdkerrors.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Log: strings.TrimSpace(msgLogs.String()), + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses +}) +} + +func createEvents(events sdk.Events, msg sdk.Msg) + +sdk.Events { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + if len(msg.GetSigners()) > 0 && !msg.GetSigners()[0].Empty() { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, msg.GetSigners()[0].String())) +} + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + / here we assume that routes module name is the second element of the route + / e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" + moduleName := strings.Split(eventMsgName, ".") + if len(moduleName) > 1 { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName[1])) +} + +} + +return sdk.Events{ + msgEvent +}.AppendEvents(events) +} + +/ DefaultPrepareProposal returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from Tendermint will simply be returned, which, by default, are in +/ FIFO order. 
+func (app *BaseApp) + +DefaultPrepareProposal() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req abci.RequestPrepareProposal) + +abci.ResponsePrepareProposal { + / If the mempool is nil or a no-op mempool, we simply return the transactions + / requested from Tendermint, which, by default, should be in FIFO order. + _, isNoOp := app.mempool.(mempool.NoOpMempool) + if app.mempool == nil || isNoOp { + return abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} + +var ( + txsBytes [][]byte + byteCount int64 + ) + iterator := app.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + +bz, err := app.txEncoder(memTx) + if err != nil { + panic(err) +} + txSize := int64(len(bz)) + + / NOTE: Since runTx was already executed in CheckTx, which calls + / mempool.Insert, ideally everything in the pool should be valid. But + / some mempool implementations may insert invalid txs, so we check again. + _, _, _, _, err = app.runTx(runTxPrepareProposal, bz) + if err != nil { + err := app.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +iterator = iterator.Next() + +continue +} + +else if byteCount += txSize; byteCount <= req.MaxTxBytes { + txsBytes = append(txsBytes, bz) +} + +else { + break +} + +iterator = iterator.Next() +} + +return abci.ResponsePrepareProposal{ + Txs: txsBytes +} + +} +} + +/ DefaultProcessProposal returns the default implementation for processing an ABCI proposal. +/ Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. Note that step (2) + +is identical to the +/ validation step performed in DefaultPrepareProposal. 
It is very important that the same validation logic is used +/ in both steps, and applications must ensure that this is the case in non-default handlers. +func (app *BaseApp) + +DefaultProcessProposal() + +sdk.ProcessProposalHandler { + return func(ctx sdk.Context, req abci.RequestProcessProposal) + +abci.ResponseProcessProposal { + for _, txBytes := range req.Txs { + _, err := app.txDecoder(txBytes) + if err != nil { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + + _, _, _, _, err = app.runTx(runTxProcessProposal, txBytes) + if err != nil { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +} + +return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +} + +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req abci.RequestPrepareProposal) + +abci.ResponsePrepareProposal { + return abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ abci.RequestProcessProposal) + +abci.ResponseProcessProposal { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +} + +} +} ``` -loading... -``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/handler.go#L6-L8) +This allows `RunTx` not to commit the changes made to the state during the execution of `anteHandler` if it ends up failing. It also prevents the module implementing the `anteHandler` from writing to state, which is an important part of the [object-capabilities](/docs/sdk/v0.47/learn/advanced/ocap) of the Cosmos SDK. 
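The branch-and-write behavior described above can be illustrated with a self-contained sketch. The `store` and `cacheStore` types below are simplified stand-ins, not SDK APIs: reads on a branch fall through to the parent, writes are buffered, and only an explicit `Write()` flushes them down, which is why a failing `anteHandler` leaves the committed state untouched.

```go
package main

import "fmt"

// store is a minimal key-value state standing in for the CommitMultiStore.
type store map[string]string

// cacheStore is a branch of the parent store, mirroring what
// cacheTxContext's CacheMultiStore does around the anteHandler call.
type cacheStore struct {
	parent store
	dirty  map[string]string
}

func newCache(parent store) *cacheStore {
	return &cacheStore{parent: parent, dirty: map[string]string{}}
}

// Set buffers a write in the branch without touching the parent.
func (c *cacheStore) Set(k, v string) { c.dirty[k] = v }

// Get reads from the branch first, then falls through to the parent.
func (c *cacheStore) Get(k string) string {
	if v, ok := c.dirty[k]; ok {
		return v
	}
	return c.parent[k]
}

// Write flushes buffered writes to the parent, like msCache.Write().
func (c *cacheStore) Write() {
	for k, v := range c.dirty {
		c.parent[k] = v
	}
}

func main() {
	state := store{"balance": "100"}

	// Failing path: the branch is simply discarded, the parent is untouched.
	failed := newCache(state)
	failed.Set("balance", "0")
	fmt.Println(state["balance"]) // 100

	// Successful path: the branch is committed with Write().
	ok := newCache(state)
	ok.Set("balance", "90")
	ok.Write()
	fmt.Println(state["balance"]) // 90
}
```

The same pattern repeats at two levels in `runTx`: one branch around the `anteHandler`, and a branch of that branch around `runMsgs`.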
+ +Finally, the [`RunMsgs()`](#runmsgs) function is called to process the `sdk.Msg`s in the `Tx`. In preparation of this step, just like with the `anteHandler`, both the `checkState`/`deliverState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function. + +### AnteHandler + +The `AnteHandler` is a special handler that implements the `AnteHandler` interface and is used to authenticate the transaction before the transaction's internal messages are processed. + +```go expandable +package types + +/ Handler defines the core of the state transition function of an application. +type Handler func(ctx Context, msg Msg) (*Result, error) + +/ AnteHandler authenticates transactions, before their internal messages are handled. +/ If newCtx.IsZero(), ctx is used instead. +type AnteHandler func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) + +/ AnteDecorator wraps the next AnteHandler to perform custom pre- and post-processing. +type AnteDecorator interface { + AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) +} + +/ ChainDecorator chains AnteDecorators together with each AnteDecorator +/ wrapping over the decorators further along chain and returns a single AnteHandler. +/ +/ NOTE: The first element is outermost decorator, while the last element is innermost +/ decorator. Decorator ordering is critical since some decorators will expect +/ certain checks and updates to be performed (e.g. the Context) + +before the decorator +/ is run. These expectations should be documented clearly in a CONTRACT docline +/ in the decorator's godoc. +/ +/ NOTE: Any application that uses GasMeter to limit transaction processing cost +/ MUST set GasMeter with the FIRST AnteDecorator. Failing to do so will cause +/ transactions to be processed with an infinite gasmeter and open a DOS attack vector. +/ Use `ante.SetUpContextDecorator` or a custom Decorator with similar functionality. 
+/ Returns nil when no AnteDecorator are supplied. +func ChainAnteDecorators(chain ...AnteDecorator) + +AnteHandler { + if len(chain) == 0 { + return nil +} + + / handle non-terminated decorators chain + if (chain[len(chain)-1] != Terminator{ +}) { + chain = append(chain, Terminator{ +}) +} + +return func(ctx Context, tx Tx, simulate bool) (Context, error) { + return chain[0].AnteHandle(ctx, tx, simulate, ChainAnteDecorators(chain[1:]...)) +} +} + +/ Terminator AnteDecorator will get added to the chain to simplify decorator code +/ Don't need to check if next == nil further up the chain +/ +/ ______ +/ <((((((\\\ +/ / . +}\ +/ ;--..--._| +} +/ (\ '--/\--' ) +/ \\ | '-' :'| +/ \\ . -==- .-| +/ \\ \.__.' \--._ +/ [\\ __.--| / _/'--. +/ \ \\ .'-._ ('-----'/ __/ \ +/ \ \\ / __>| | '--. | +/ \ \\ | \ | / / / +/ \ '\ / \ | | _/ / +/ \ \ \ | | / / +/ snd \ \ \ / +type Terminator struct{ +} + +/ Simply return provided Context and nil error +func (t Terminator) + +AnteHandle(ctx Context, _ Tx, _ bool, _ AnteHandler) (Context, error) { + return ctx, nil +} +``` The `AnteHandler` is theoretically optional, but still a very important component of public blockchain networks. It serves 3 primary purposes: -* Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](/v0.47/learn/advanced/transactions#transaction-generation) checking. -* Perform preliminary *stateful* validity checks like ensuring signatures are valid or that the sender has enough funds to pay for fees. -* Play a role in the incentivisation of stakeholders via the collection of transaction fees. +- Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](/docs/sdk/v0.47/learn/advanced/transactions#transaction-generation) checking. 
+- Perform preliminary _stateful_ validity checks like ensuring signatures are valid or that the sender has enough funds to pay for fees.
+- Play a role in the incentivisation of stakeholders via the collection of transaction fees.

-`BaseApp` holds an `anteHandler` as parameter that is initialized in the [application's constructor](/v0.47/learn/beginner/overview-app#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/ante/ante.go).
+`BaseApp` holds an `anteHandler` as a parameter that is initialized in the [application's constructor](/docs/sdk/v0.47/learn/beginner/overview-app#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/ante/ante.go).

-Click [here](/v0.47/learn/beginner/gas-fees#antehandler) for more on the `anteHandler`.
+Click [here](/docs/sdk/v0.47/learn/beginner/gas-fees#antehandler) for more on the `anteHandler`.

-### RunMsgs[​](#runmsgs "Direct link to RunMsgs")
+### RunMsgs

`RunMsgs` is called from `RunTx` with `runTxModeCheck` as a parameter to check the existence of a route for each message in the transaction, and with `runTxModeDeliver` to actually process the `sdk.Msg`s.

-First, it retrieves the `sdk.Msg`'s fully-qualified type name, by checking the `type_url` of the Protobuf `Any` representing the `sdk.Msg`. Then, using the application's [`msgServiceRouter`](#msg-service-router), it checks for the existence of `Msg` service method related to that `type_url`. At this point, if `mode == runTxModeCheck`, `RunMsgs` returns. Otherwise, if `mode == runTxModeDeliver`, the [`Msg` service](/v0.47/build/building-modules/msg-services) RPC is executed, before `RunMsgs` returns.
+First, it retrieves the `sdk.Msg`'s fully-qualified type name by checking the `type_url` of the Protobuf `Any` representing the `sdk.Msg`. Then, using the application's [`msgServiceRouter`](#msg-service-router), it checks for the existence of a `Msg` service method related to that `type_url`. At this point, if `mode == runTxModeCheck`, `RunMsgs` returns. Otherwise, if `mode == runTxModeDeliver`, the [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services) RPC is executed before `RunMsgs` returns.

-### PostHandler[​](#posthandler "Direct link to PostHandler")
+### PostHandler

`PostHandler` is similar to `AnteHandler`, but, as the name suggests, it executes custom post-transaction processing logic after [`RunMsgs`](#runmsgs) is called. `PostHandler` receives the `Result` of the `RunMsgs` call in order to enable this customizable behavior.

-Like `AnteHandler`s, `PostHandler`s are theoretically optional, one use case for `PostHandler`s is transaction tips (enabled by default in simapp). Other use cases like unused gas refund can also be enabled by `PostHandler`s.
+Like `AnteHandler`s, `PostHandler`s are theoretically optional; one use case for `PostHandler`s is transaction tips (enabled by default in simapp).
+Other use cases like unused gas refund can also be enabled by `PostHandler`s.

-x/auth/posthandler/post.go
+```go expandable
+package posthandler

-```
-loading...
-```
+import (
+
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ HandlerOptions are the options required for constructing a default SDK PostHandler.
+type HandlerOptions struct{
+}

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/posthandler/post.go#L1-L15)
+/ NewPostHandler returns an empty posthandler chain.
+func NewPostHandler(options HandlerOptions) (sdk.AnteHandler, error) {
+ postDecorators := []sdk.AnteDecorator{
+}
+
+return sdk.ChainAnteDecorators(postDecorators...), nil
+}
+```

Note that when `PostHandler`s fail, the state from `runMsgs` is also reverted, effectively making the transaction fail.

-## Other ABCI Messages[​](#other-abci-messages "Direct link to Other ABCI Messages") +## Other ABCI Messages -### InitChain[​](#initchain "Direct link to InitChain") +### InitChain The [`InitChain` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when the chain is first started. It is mainly used to **initialize** parameters and state like: -* [Consensus Parameters](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#consensus-parameters) via `setConsensusParams`. -* [`checkState` and `deliverState`](#state-updates) via `setState`. -* The [block gas meter](/v0.47/learn/beginner/gas-fees#block-gas-meter), with infinite gas to process genesis transactions. +- [Consensus Parameters](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#consensus-parameters) via `setConsensusParams`. +- [`checkState` and `deliverState`](#state-updates) via `setState`. +- The [block gas meter](/docs/sdk/v0.47/learn/beginner/gas-fees#block-gas-meter), with infinite gas to process genesis transactions. -Finally, the `InitChain(req abci.RequestInitChain)` method of `BaseApp` calls the [`initChainer()`](/v0.47/learn/beginner/overview-app#initchainer) of the application in order to initialize the main state of the application from the `genesis file` and, if defined, call the [`InitGenesis`](/v0.47/build/building-modules/genesis#initgenesis) function of each of the application's modules. +Finally, the `InitChain(req abci.RequestInitChain)` method of `BaseApp` calls the [`initChainer()`](/docs/sdk/v0.47/learn/beginner/overview-app#initchainer) of the application in order to initialize the main state of the application from the `genesis file` and, if defined, call the [`InitGenesis`](/docs/sdk/v0.47/documentation/module-system/genesis#initgenesis) function of each of the application's modules. 
-### BeginBlock[​](#beginblock "Direct link to BeginBlock") +### BeginBlock The [`BeginBlock` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when a block proposal created by the correct proposer is received, before [`DeliverTx`](#delivertx) is run for each transaction in the block. It allows developers to have logic be executed at the beginning of each block. In the Cosmos SDK, the `BeginBlock(req abci.RequestBeginBlock)` method does the following: -* Initialize [`deliverState`](#state-updates) with the latest header using the `req abci.RequestBeginBlock` passed as parameter via the `setState` function. +- Initialize [`deliverState`](#state-updates) with the latest header using the `req abci.RequestBeginBlock` passed as parameter via the `setState` function. + + ```go expandable + package baseapp + + import ( + + "errors" + "fmt" + "sort" + "strings" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/crypto/tmhash" + "github.com/tendermint/tendermint/libs/log" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + dbm "github.com/tendermint/tm-db" + "golang.org/x/exp/maps" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/snapshots" + "github.com/cosmos/cosmos-sdk/store" + "github.com/cosmos/cosmos-sdk/store/rootmulti" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + ) + + const ( + runTxModeCheck runTxMode = iota / Check a transaction + runTxModeReCheck / Recheck a (pending) + + transaction after a commit + runTxModeSimulate / Simulate a transaction + runTxModeDeliver / Deliver a transaction + runTxPrepareProposal + runTxProcessProposal + ) + + var _ abci.Application = 
(*BaseApp)(nil) + + type ( + / Enum mode for app.runTx + runTxMode uint8 + + / StoreLoader defines a customizable function to control how we load the CommitMultiStore + / from disk. This is useful for state migration, when loading a datastore written with + / an older version of the software. In particular, if a module changed the substore key name + / (or removed a substore) + + between two versions of the software. + StoreLoader func(ms sdk.CommitMultiStore) + + error + ) + + / BaseApp reflects the ABCI application implementation. + type BaseApp struct { /nolint: maligned + / initialized on creation + logger log.Logger + name string / application name from abci.Info + db dbm.DB / common DB backend + cms sdk.CommitMultiStore / Main (uncached) + + state + qms sdk.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + + grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.AnteHandler / post handler, optional, e.g. 
for tips + initChainer sdk.InitChainer / initialize state with validators and state blob + beginBlocker sdk.BeginBlocker / logic to run before any txs + processProposal sdk.ProcessProposalHandler / the handler which runs on ABCI ProcessProposal + prepareProposal sdk.PrepareProposalHandler / the handler which runs on ABCI PrepareProposal + endBlocker sdk.EndBlocker / logic to run after all txs, and to determine valset changes + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / checkState is set on InitChain and reset on Commit + / deliverState is set on InitChain and BeginBlock and set to nil on Commit + checkState *state / for CheckTx + deliverState *state / for DeliverTx + processProposalState *state / for ProcessProposal + prepareProposalState *state / for PrepareProposal + + / an inter-block write-through cache provided to the context during deliverState + interBlockCache sdk.MultiStorePersistentCache + + / absent validators from begin block + voteInfos []abci.VoteInfo + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. 
+ minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the baseapp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + + at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from Tendermint. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: Tendermint block pruning is dependant on this parameter in conunction + / with the unbonding (safety threshold) + + period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType + }.{ + attributeKey + }, + / which informs Tendermint what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ + } + + / abciListeners for hooking into the ABCI message processing of the BaseApp + / and exposing the requests and responses to external consumers + abciListeners []ABCIListener + } + + / NewBaseApp returns a reference to an initialized BaseApp. It accepts a + / variadic number of option functions, which act on the BaseApp to set + / configuration choices. 
+ / + / NOTE: The db is used to store the version number for now. + func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), + ) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db), + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + } + for _, option := range options { + option(app) + } + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ + }) + } + if app.processProposal == nil { + app.SetProcessProposal(app.DefaultProcessProposal()) + } + if app.prepareProposal == nil { + app.SetPrepareProposal(app.DefaultPrepareProposal()) + } + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) + } + + app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + return app + } + + / Name returns the name of the BaseApp. + func (app *BaseApp) + + Name() + + string { + return app.name + } + + / AppVersion returns the application's protocol version. + func (app *BaseApp) + + AppVersion() + + uint64 { + return app.appVersion + } + + / Version returns the application's version string. + func (app *BaseApp) + + Version() + + string { + return app.version + } + + / Logger returns the logger of the BaseApp. + func (app *BaseApp) + + Logger() + + log.Logger { + return app.logger + } + + / Trace returns the boolean value for logging error stack traces. + func (app *BaseApp) + + Trace() + + bool { + return app.trace + } + + / MsgServiceRouter returns the MsgServiceRouter of a BaseApp. + func (app *BaseApp) + + MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter + } + + / SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. 
+ func (app *BaseApp) + + SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter + } + + / MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp + / multistore. + func (app *BaseApp) + + MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) + } + + else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) + } + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + + default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) + } - baseapp/baseapp.go + } + } - ``` - loading... - ``` + / MountKVStores mounts all IAVL or DB stores to the provided keys in the + / BaseApp multistore. + func (app *BaseApp) + + MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) + } + + else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) + } + + } + } + + / MountTransientStores mounts all transient stores to the provided keys in + / the BaseApp multistore. + func (app *BaseApp) + + MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) + } + } + + / MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal + / commit multi-store. 
+ func (app *BaseApp) + + MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := maps.Keys(keys) + + sort.Strings(skeys) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) + } + } + + / MountStore mounts a store to the provided key in the BaseApp multistore, + / using the default DB. + func (app *BaseApp) + + MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) + } + + / LoadLatestVersion loads the latest application version. It will panic if + / called more than once on a running BaseApp. + func (app *BaseApp) + + LoadLatestVersion() + + error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) + } + + return app.Init() + } + + / DefaultStoreLoader will be used by default and loads the latest version + func DefaultStoreLoader(ms sdk.CommitMultiStore) + + error { + return ms.LoadLatestVersion() + } + + / CommitMultiStore returns the root multi-store. + / App constructor can use this to access the `cms`. + / UNSAFE: must not be used during the abci life cycle. + func (app *BaseApp) + + CommitMultiStore() + + sdk.CommitMultiStore { + return app.cms + } + + / SnapshotManager returns the snapshot manager. + / application use this to register extra extension snapshotters. + func (app *BaseApp) + + SnapshotManager() *snapshots.Manager { + return app.snapshotManager + } + + / LoadVersion loads the BaseApp application version. It will panic if called + / more than once on a running baseapp. + func (app *BaseApp) + + LoadVersion(version int64) + + error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) + } + + return app.Init() + } + + / LastCommitID returns the last CommitID of the multistore. 
+ func (app *BaseApp) + + LastCommitID() + + storetypes.CommitID { + return app.cms.LastCommitID() + } + + / LastBlockHeight returns the last committed block height. + func (app *BaseApp) + + LastBlockHeight() + + int64 { + return app.cms.LastCommitID().Version + } - [See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/baseapp.go#L406-L433) + / Init initializes the app. It seals the app, preventing any + / further modifications. In addition, it validates the app against + / the earlier provided settings. Returns an error if validation fails. + / nil otherwise. Panics if the app is already sealed. + func (app *BaseApp) + + Init() + + error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") + } + emptyHeader := tmproto.Header{ + } + + / needed for the export command which inits from store but never calls initchain + app.setState(runTxModeCheck, emptyHeader) + + / needed for ABCI Replay Blocks mode which calls Prepare/Process proposal (InitChain is not called) + + app.setState(runTxPrepareProposal, emptyHeader) + + app.setState(runTxProcessProposal, emptyHeader) + + app.Seal() + + rms, ok := app.cms.(*rootmulti.Store) + if !ok { + return fmt.Errorf("invalid commit multi-store; expected %T, got: %T", &rootmulti.Store{ + }, app.cms) + } + + return rms.GetPruning().Validate() + } + + func (app *BaseApp) + + setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices + } + + func (app *BaseApp) + + setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight + } + + func (app *BaseApp) + + setHaltTime(haltTime uint64) { + app.haltTime = haltTime + } + + func (app *BaseApp) + + setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks + } + + func (app *BaseApp) + + setInterBlockCache(cache sdk.MultiStorePersistentCache) { + app.interBlockCache = cache + } + + func (app *BaseApp) + + setTrace(trace bool) { + app.trace = trace + } + + func (app *BaseApp) + + 
setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ + }) + for _, e := range ie { + app.indexEvents[e] = struct{ + }{ + } + + } + } + + / Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. + func (app *BaseApp) + + Seal() { + app.sealed = true + } + + / IsSealed returns true if the BaseApp is sealed and false otherwise. + func (app *BaseApp) + + IsSealed() + + bool { + return app.sealed + } + + / setState sets the BaseApp's state for the corresponding mode with a branched + / multi-store (i.e. a CacheMultiStore) + + and a new Context with the same + / multi-store branch, and provided header. + func (app *BaseApp) + + setState(mode runTxMode, header tmproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger), + } + switch mode { + case runTxModeCheck: + / Minimum gas prices are also set. It is set on InitChain and reset on Commit. + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + + app.checkState = baseState + case runTxModeDeliver: + / It is set on InitChain and BeginBlock and set to nil on Commit. + app.deliverState = baseState + case runTxPrepareProposal: + / It is set on InitChain and Commit. + app.prepareProposalState = baseState + case runTxProcessProposal: + / It is set on InitChain and Commit. + app.processProposalState = baseState + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) + } + } + + / GetConsensusParams returns the current consensus parameters from the BaseApp's + / ParamStore. If the BaseApp has no ParamStore defined, nil is returned. + func (app *BaseApp) + + GetConsensusParams(ctx sdk.Context) *tmproto.ConsensusParams { + if app.paramStore == nil { + return nil + } + + cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(err) + } + + return cp + } + + / StoreConsensusParams sets the consensus parameters to the baseapp's param store. 
+ func (app *BaseApp) + + StoreConsensusParams(ctx sdk.Context, cp *tmproto.ConsensusParams) { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") + } + if cp == nil { + return + } + + app.paramStore.Set(ctx, cp) + / We're explicitly not storing the Tendermint app_version in the param store. It's + / stored instead in the x/upgrade store, with its own bump logic. + } + + / AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. + func (app *BaseApp) + + AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) + } + } + + / GetMaximumBlockGas gets the maximum gas from the consensus params. It panics + / if maximum block gas is less than negative one and returns zero if negative + / one. + func (app *BaseApp) + + GetMaximumBlockGas(ctx sdk.Context) + + uint64 { + cp := app.GetConsensusParams(ctx) + if cp == nil || cp.Block == nil { + return 0 + } + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) + } + } + + func (app *BaseApp) + + validateHeight(req abci.RequestBeginBlock) + + error { + if req.Header.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Header.Height) + } + + / expectedHeight holds the expected height to validate. + var expectedHeight int64 + if app.LastBlockHeight() == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain (no + / previous commit). The height we're expecting is the initial height. 
+ expectedHeight = app.initialHeight + } + + else { + / This case can mean two things: + / - either there was already a previous commit in the store, in which + / case we increment the version from there, + / - or there was no previous commit, and initial version was not set, + / in which case we start at version 1. + expectedHeight = app.LastBlockHeight() + 1 + } + if req.Header.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Header.Height, expectedHeight) + } + + return nil + } + + / validateBasicTxMsgs executes basic validator calls for messages. + func validateBasicTxMsgs(msgs []sdk.Msg) + + error { + if len(msgs) == 0 { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") + } + for _, msg := range msgs { + err := msg.ValidateBasic() + if err != nil { + return err + } + + } + + return nil + } + + / Returns the application's deliverState if app is in runTxModeDeliver, + / prepareProposalState if app is in runTxPrepareProposal, processProposalState + / if app is in runTxProcessProposal, and checkState otherwise. + func (app *BaseApp) + + getState(mode runTxMode) *state { + switch mode { + case runTxModeDeliver: + return app.deliverState + case runTxPrepareProposal: + return app.prepareProposalState + case runTxProcessProposal: + return app.processProposalState + default: + return app.checkState + } + } + + / retrieve the context for the tx w/ txBytes and other memoized values. + func (app *BaseApp) + + getContextForTx(mode runTxMode, txBytes []byte) + + sdk.Context { + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) + } + ctx := modeState.ctx. + WithTxBytes(txBytes). 
+ WithVoteInfos(app.voteInfos) + + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == runTxModeReCheck { + ctx = ctx.WithIsReCheckTx(true) + } + if mode == runTxModeSimulate { + ctx, _ = ctx.CacheContext() + } + + return ctx + } + + / cacheTxContext returns a new context based off of the provided context with + / a branched multi-store. + func (app *BaseApp) + + cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, sdk.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + sdk.TraceContext( + map[string]interface{ + }{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), + }, + ), + ).(sdk.CacheMultiStore) + } + + return ctx.WithMultiStore(msCache), msCache + } + + / runTx processes a transaction within a given execution mode, encoded transaction + / bytes, and the decoded transaction itself. All state transitions occur through + / a cached Context depending on the mode provided. State only gets persisted + / if all messages get executed successfully and the execution mode is DeliverTx. + / Note, gas execution info is always returned. A reference to a Result is + / returned if the tx does not run out of gas and if all the messages are valid + / and execute successfully. An error is returned otherwise. + func (app *BaseApp) + + runTx(mode runTxMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, priority int64, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == runTxModeDeliver && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, 0, sdkerrors.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") + } + + defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + + err, result = processRecovery(r, recoveryMW), nil + } + + gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() + } + + }() + blockGasConsumed := false + / consumeBlockGas makes sure block gas is consumed at most once. It must happen after + / tx processing, and must be executed even if tx processing fails. Hence, we use trick with `defer` + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) + } + + } + + / If BlockGasMeter() + + panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: This must exist in a separate defer function for the above recovery + / to recover from this one. + if mode == runTxModeDeliver { + defer consumeBlockGas() + } + + tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + }, nil, nil, 0, err + } + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ + }, nil, nil, 0, err + } + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache sdk.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. 
This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + + anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + + newCtx, err := app.anteHandler(anteCtx, tx, mode == runTxModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + + is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) + } + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, 0, err + } + + priority = ctx.Priority() + + msCache.Write() + + anteEvents = events.ToABCIEvents() + } + if mode == runTxModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, priority, err + } + + } + + else if mode == runTxModeDeliver { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, priority, + fmt.Errorf("failed to remove tx from mempool: %w", err) + } + + } + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + result, err = app.runMsgs(runMsgCtx, msgs, mode) + if err == nil { + + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. 
+ if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + + newCtx, err := app.postHandler(postCtx, tx, mode == runTxModeSimulate) + if err != nil { + return gInfo, nil, anteEvents, priority, err + } + + result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) + } + if mode == runTxModeDeliver { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + + msCache.Write() + } + if len(anteEvents) > 0 && (mode == runTxModeDeliver || mode == runTxModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) + } + + } + + return gInfo, result, anteEvents, priority, err + } + + / runMsgs iterates through a list of messages and executes them with the provided + / Context and execution mode. Messages will only be executed during simulation + / and DeliverTx. An error is returned if any single message fails or if a + / Handler does not exist for a given message route. Otherwise, a reference to a + / Result is returned. The caller must not commit state if an error is returned. + func (app *BaseApp) + + runMsgs(ctx sdk.Context, msgs []sdk.Msg, mode runTxMode) (*sdk.Result, error) { + msgLogs := make(sdk.ABCIMessageLogs, 0, len(msgs)) + events := sdk.EmptyEvents() + + var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != runTxModeDeliver && mode != runTxModeSimulate { + break + } + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) + } + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, sdkerrors.Wrapf(err, "failed to execute message; message index: %d", i) + } + + / create message events + msgEvents := createEvents(msgResult.GetEvents(), msg) + + / append message events, data and logs + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + + has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) + } + + msgResponses = append(msgResponses, msgResponse) + } + + msgLogs = append(msgLogs, sdk.NewABCIMessageLog(uint32(i), msgResult.Log, msgEvents)) + } + + data, err := makeABCIData(msgResponses) + if err != nil { + return nil, sdkerrors.Wrap(err, "failed to marshal tx data") + } + + return &sdk.Result{ + Data: data, + Log: strings.TrimSpace(msgLogs.String()), + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, + }, nil + } + + / makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+ func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses + }) + } + + func createEvents(events sdk.Events, msg sdk.Msg) + + sdk.Events { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + if len(msg.GetSigners()) > 0 && !msg.GetSigners()[0].Empty() { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, msg.GetSigners()[0].String())) + } + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + / here we assume that routes module name is the second element of the route + / e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" + moduleName := strings.Split(eventMsgName, ".") + if len(moduleName) > 1 { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName[1])) + } + + } + + return sdk.Events{ + msgEvent + }.AppendEvents(events) + } + + / DefaultPrepareProposal returns the default implementation for processing an + / ABCI proposal. The application's mempool is enumerated and all valid + / transactions are added to the proposal. Transactions are valid if they: + / + / 1) + + Successfully encode to bytes. + / 2) + + Are valid (i.e. pass runTx, AnteHandler only). + / + / Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is + / reached or the mempool is exhausted. + / + / Note: + / + / - Step (2) + + is identical to the validation step performed in + / DefaultProcessProposal. It is very important that the same validation logic + / is used in both steps, and applications must ensure that this is the case in + / non-default handlers. 
+ / + / - If no mempool is set or if the mempool is a no-op mempool, the transactions + / requested from Tendermint will simply be returned, which, by default, are in + / FIFO order. + func (app *BaseApp) + + DefaultPrepareProposal() + + sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req abci.RequestPrepareProposal) + + abci.ResponsePrepareProposal { + / If the mempool is nil or a no-op mempool, we simply return the transactions + / requested from Tendermint, which, by default, should be in FIFO order. + _, isNoOp := app.mempool.(mempool.NoOpMempool) + if app.mempool == nil || isNoOp { + return abci.ResponsePrepareProposal{ + Txs: req.Txs + } + + } + + var ( + txsBytes [][]byte + byteCount int64 + ) + iterator := app.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + + bz, err := app.txEncoder(memTx) + if err != nil { + panic(err) + } + txSize := int64(len(bz)) + + / NOTE: Since runTx was already executed in CheckTx, which calls + / mempool.Insert, ideally everything in the pool should be valid. But + / some mempool implementations may insert invalid txs, so we check again. + _, _, _, _, err = app.runTx(runTxPrepareProposal, bz) + if err != nil { + err := app.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) + } + + iterator = iterator.Next() + + continue + } + + else if byteCount += txSize; byteCount <= req.MaxTxBytes { + txsBytes = append(txsBytes, bz) + } + + else { + break + } + + iterator = iterator.Next() + } + + return abci.ResponsePrepareProposal{ + Txs: txsBytes + } + + } + } + + / DefaultProcessProposal returns the default implementation for processing an ABCI proposal. + / Every transaction in the proposal must pass 2 conditions: + / + / 1. The transaction bytes must decode to a valid transaction. + / 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) + / + / If any transaction fails to pass either condition, the proposal is rejected. 
Note that step (2) + + is identical to the + / validation step performed in DefaultPrepareProposal. It is very important that the same validation logic is used + / in both steps, and applications must ensure that this is the case in non-default handlers. + func (app *BaseApp) + + DefaultProcessProposal() + + sdk.ProcessProposalHandler { + return func(ctx sdk.Context, req abci.RequestProcessProposal) + + abci.ResponseProcessProposal { + for _, txBytes := range req.Txs { + _, err := app.txDecoder(txBytes) + if err != nil { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT + } + + } + + _, _, _, _, err = app.runTx(runTxProcessProposal, txBytes) + if err != nil { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT + } + + } + + } + + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT + } + + } + } + + / NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always + / return the transactions sent by the client's request. + func NoOpPrepareProposal() + + sdk.PrepareProposalHandler { + return func(_ sdk.Context, req abci.RequestPrepareProposal) + + abci.ResponsePrepareProposal { + return abci.ResponsePrepareProposal{ + Txs: req.Txs + } + + } + } + + / NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always + / return ACCEPT. + func NoOpProcessProposal() + + sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ abci.RequestProcessProposal) + + abci.ResponseProcessProposal { + return abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT + } + + } + } + ``` - This function also resets the [main gas meter](/v0.47/learn/beginner/gas-fees#main-gas-meter). + This function also resets the [main gas meter](/docs/sdk/v0.47/learn/beginner/gas-fees#main-gas-meter). -* Initialize the [block gas meter](/v0.47/learn/beginner/gas-fees#block-gas-meter) with the `maxGas` limit. 
The `gas` consumed within the block cannot go above `maxGas`. This parameter is defined in the application's consensus parameters. +- Initialize the [block gas meter](/docs/sdk/v0.47/learn/beginner/gas-fees#block-gas-meter) with the `maxGas` limit. The `gas` consumed within the block cannot go above `maxGas`. This parameter is defined in the application's consensus parameters. -* Run the application's [`beginBlocker()`](/v0.47/learn/beginner/overview-app#beginblocker-and-endblock), which mainly runs the [`BeginBlocker()`](/v0.47/build/building-modules/beginblock-endblock#beginblock) method of each of the application's modules. +- Run the application's [`beginBlocker()`](/docs/sdk/v0.47/learn/beginner/overview-app#beginblocker-and-endblock), which mainly runs the [`BeginBlocker()`](/docs/sdk/v0.47/documentation/module-system/beginblock-endblock#beginblock) method of each of the application's modules. -* Set the [`VoteInfos`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#voteinfo) of the application, i.e. the list of validators whose *precommit* for the previous block was included by the proposer of the current block. This information is carried into the [`Context`](/v0.47/learn/advanced/context) so that it can be used during `DeliverTx` and `EndBlock`. +- Set the [`VoteInfos`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#voteinfo) of the application, i.e. the list of validators whose _precommit_ for the previous block was included by the proposer of the current block. This information is carried into the [`Context`](/docs/sdk/v0.47/learn/advanced/context) so that it can be used during `DeliverTx` and `EndBlock`. 
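The `maxGas` cap enforced by the block gas meter can be illustrated with a small, self-contained sketch. This toy `blockGasMeter` is a hypothetical model for illustration only — the SDK's real meter lives in the store package and panics on overflow rather than returning an error:

```go
package main

import "fmt"

// blockGasMeter is a toy stand-in for the SDK's block gas meter: it tracks
// the cumulative gas consumed in a block against the consensus maxGas cap.
type blockGasMeter struct {
	limit    uint64 // maxGas from the consensus parameters
	consumed uint64
}

// consumeGas adds gas to the running total, refusing any consumption that
// would push the block past its limit (the real meter panics instead).
func (m *blockGasMeter) consumeGas(amount uint64) error {
	if m.consumed+amount > m.limit {
		return fmt.Errorf("out of block gas: %d + %d exceeds %d", m.consumed, amount, m.limit)
	}
	m.consumed += amount
	return nil
}

func main() {
	meter := &blockGasMeter{limit: 100000}

	// Two transactions fit under maxGas.
	for _, gas := range []uint64{40000, 50000} {
		if err := meter.consumeGas(gas); err != nil {
			fmt.Println("rejected:", err)
			return
		}
		fmt.Println("consumed so far:", meter.consumed)
	}

	// A third transaction would overflow the block and is rejected;
	// the running total is left unchanged.
	if err := meter.consumeGas(20000); err != nil {
		fmt.Println("rejected:", err) // the block keeps its 90000 total
	}
	fmt.Println("final total:", meter.consumed)
}
```

Because a fresh meter is initialized for every block during `BeginBlock`, per-block accounting never leaks across block boundaries.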
-### EndBlock[​](#endblock "Direct link to EndBlock") +### EndBlock -The [`EndBlock` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after [`DeliverTx`](#delivertx) as been run for each transaction in the block. It allows developers to have logic be executed at the end of each block. In the Cosmos SDK, the bulk `EndBlock(req abci.RequestEndBlock)` method is to run the application's [`EndBlocker()`](/v0.47/learn/beginner/overview-app#beginblocker-and-endblock), which mainly runs the [`EndBlocker()`](/v0.47/build/building-modules/beginblock-endblock#beginblock) method of each of the application's modules. +The [`EndBlock` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after [`DeliverTx`](#delivertx) has been run for each transaction in the block. It allows developers to have logic executed at the end of each block. In the Cosmos SDK, the bulk of the `EndBlock(req abci.RequestEndBlock)` method is to run the application's [`EndBlocker()`](/docs/sdk/v0.47/learn/beginner/overview-app#beginblocker-and-endblock), which mainly runs the [`EndBlocker()`](/docs/sdk/v0.47/documentation/module-system/beginblock-endblock#beginblock) method of each of the application's modules. -### Commit[​](#commit "Direct link to Commit") +### Commit -The [`Commit` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after the full-node has received *precommits* from 2/3+ of validators (weighted by voting power). On the `BaseApp` end, the `Commit(res abci.ResponseCommit)` function is implemented to commit all the valid state transitions that occurred during `BeginBlock`, `DeliverTx` and `EndBlock` and to reset state for the next block.
+The [`Commit` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after the full-node has received _precommits_ from 2/3+ of validators (weighted by voting power). On the `BaseApp` end, the `Commit(res abci.ResponseCommit)` function is implemented to commit all the valid state transitions that occurred during `BeginBlock`, `DeliverTx` and `EndBlock` and to reset state for the next block. To commit state-transitions, the `Commit` function calls the `Write()` function on `deliverState.ms`, where `deliverState.ms` is a branched multistore of the main store `app.cms`. Then, the `Commit` function sets `checkState` to the latest header (obtained from `deliverState.ctx.BlockHeader`) and `deliverState` to `nil`. Finally, `Commit` returns the hash of the commitment of `app.cms` back to the underlying consensus engine. This hash is used as a reference in the header of the next block. -### Info[​](#info "Direct link to Info") +### Info The [`Info` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is a simple query from the underlying consensus engine, notably used to sync the latter with the application during a handshake that happens on startup. When called, the `Info(res abci.ResponseInfo)` function from `BaseApp` will return the application's name, version and the hash of the last commit of `app.cms`. -### Query[​](#query "Direct link to Query") +### Query -The [`Query` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is used to serve queries received from the underlying consensus engine, including queries received via RPC like CometBFT RPC. 
It used to be the main entrypoint to build interfaces with the application, but with the introduction of [gRPC queries](/v0.47/build/building-modules/query-services) in Cosmos SDK v0.40, its usage is more limited. The application must respect a few rules when implementing the `Query` method, which are outlined [here](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#query). +The [`Query` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is used to serve queries received from the underlying consensus engine, including queries received via RPC like CometBFT RPC. It used to be the main entrypoint to build interfaces with the application, but with the introduction of [gRPC queries](/docs/sdk/v0.47/documentation/module-system/query-services) in Cosmos SDK v0.40, its usage is more limited. The application must respect a few rules when implementing the `Query` method, which are outlined [here](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#query). Each CometBFT `query` comes with a `path`, which is a `string` which denotes what to query. If the `path` matches a gRPC fully-qualified service method, then `BaseApp` will defer the query to the `grpcQueryRouter` and let it handle it like explained [above](#grpc-query-router). Otherwise, the `path` represents a query that is not (yet) handled by the gRPC router. `BaseApp` splits the `path` string with the `/` delimiter. By convention, the first element of the split string (`split[0]`) contains the category of `query` (`app`, `p2p`, `store` or `custom` ). The `BaseApp` implementation of the `Query(req abci.RequestQuery)` method is a simple dispatcher serving these 4 main categories of queries: -* Application-related queries like querying the application's version, which are served via the `handleQueryApp` method. 
-* Direct queries to the multistore, which are served by the `handlerQueryStore` method. These direct queries are different from custom queries which go through `app.queryRouter`, and are mainly used by third-party service provider like block explorers. -* P2P queries, which are served via the `handleQueryP2P` method. These queries return either `app.addrPeerFilter` or `app.ipPeerFilter` that contain the list of peers filtered by address or IP respectively. These lists are first initialized via `options` in `BaseApp`'s [constructor](#constructor). +- Application-related queries like querying the application's version, which are served via the `handleQueryApp` method. +- Direct queries to the multistore, which are served by the `handlerQueryStore` method. These direct queries are different from custom queries which go through `app.queryRouter`, and are mainly used by third-party service providers such as block explorers. +- P2P queries, which are served via the `handleQueryP2P` method. These queries return either `app.addrPeerFilter` or `app.ipPeerFilter`, which contain the list of peers filtered by address or IP respectively. These lists are first initialized via `options` in `BaseApp`'s [constructor](#constructor). diff --git a/docs/sdk/v0.47/learn/advanced/cli.mdx b/docs/sdk/v0.47/learn/advanced/cli.mdx index b08b0d28..308fd923 100644 --- a/docs/sdk/v0.47/learn/advanced/cli.mdx +++ b/docs/sdk/v0.47/learn/advanced/cli.mdx @@ -1,204 +1,2816 @@ --- -title: "Command-Line Interface" -description: "Version: v0.47" +title: Command-Line Interface --- - - This document describes how command-line interface (CLI) works on a high-level, for an [**application**](/v0.47/learn/beginner/overview-app). A separate document for implementing a CLI for a Cosmos SDK [**module**](/v0.47/build/building-modules/intro) can be found [here](/v0.47/build/building-modules/module-interfaces#cli).
- +## Synopsis -## Command-Line Interface[​](#command-line-interface-1 "Direct link to Command-Line Interface") +This document describes how the command-line interface (CLI) works at a high level, for an [**application**](/docs/sdk/v0.47/learn/beginner/overview-app). A separate document for implementing a CLI for a Cosmos SDK [**module**](/docs/sdk/v0.47/documentation/module-system/intro) can be found [here](/docs/sdk/v0.47/documentation/module-system/module-interfaces#cli). -### Example Command[​](#example-command "Direct link to Example Command") +## Command-Line Interface + +### Example Command There is no set way to create a CLI, but Cosmos SDK modules typically use the [Cobra Library](https://github.com/spf13/cobra). Building a CLI with Cobra entails defining commands, arguments, and flags. [**Commands**](#root-command) understand the actions users wish to take, such as `tx` for creating a transaction and `query` for querying the application. Each command can also have nested subcommands, necessary for naming the specific transaction type. Users also supply **Arguments**, such as account numbers to send coins to, and [**Flags**](#flags) to modify various aspects of the commands, such as gas prices or which node to broadcast to. Here is an example of a command a user might enter to interact with the simapp CLI `simd` in order to send some tokens: -``` +```bash simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --gas auto --gas-prices ``` The first four strings specify the command: -* The root command for the entire application `simd`. -* The subcommand `tx`, which contains all commands that let users create transactions. -* The subcommand `bank` to indicate which module to route the command to ([`x/bank`](/v0.47/build/modules/bank) module in this case). -* The type of transaction `send`. +- The root command for the entire application `simd`. +- The subcommand `tx`, which contains all commands that let users create transactions.
+- The subcommand `bank` to indicate which module to route the command to ([`x/bank`](/docs/sdk/v0.47/documentation/module-system/modules/bank/README) module in this case). +- The type of transaction `send`. The next three strings are arguments: the `from_address` the user wishes to send from, the `to_address` of the recipient, and the `amount` they want to send. Finally, the last few strings of the command are optional flags to indicate how much the user is willing to pay in fees (calculated using the amount of gas used to execute the transaction and the gas prices provided by the user). -The CLI interacts with a [node](/v0.47/learn/advanced/node) to handle this command. The interface itself is defined in a `main.go` file. +The CLI interacts with a [node](/docs/sdk/v0.47/learn/advanced/node) to handle this command. The interface itself is defined in a `main.go` file. -### Building the CLI[​](#building-the-cli "Direct link to Building the CLI") +### Building the CLI The `main.go` file needs to have a `main()` function that creates a root command, to which all the application commands will be added as subcommands. The root command additionally handles: -* **setting configurations** by reading in configuration files (e.g. the Cosmos SDK config file). -* **adding any flags** to it, such as `--chain-id`. -* **instantiating the `codec`** by calling the application's `MakeCodec()` function (called `MakeTestEncodingConfig` in `simapp`). The [`codec`](/v0.47/learn/advanced/encoding) is used to encode and decode data structures for the application - stores can only persist `[]byte`s so the developer must define a serialization format for their data structures or use the default, Protobuf. -* **adding subcommand** for all the possible user interactions, including [transaction commands](#transaction-commands) and [query commands](#query-commands). +- **setting configurations** by reading in configuration files (e.g. the Cosmos SDK config file).
+- **adding any flags** to it, such as `--chain-id`. +- **instantiating the `codec`** by calling the application's `MakeCodec()` function (called `MakeTestEncodingConfig` in `simapp`). The [`codec`](/docs/sdk/v0.47/learn/advanced/encoding) is used to encode and decode data structures for the application - stores can only persist `[]byte`s so the developer must define a serialization format for their data structures or use the default, Protobuf. +- **adding subcommand** for all the possible user interactions, including [transaction commands](#transaction-commands) and [query commands](#query-commands). The `main()` function finally creates an executor and [execute](https://pkg.go.dev/github.com/spf13/cobra#Command.Execute) the root command. See an example of `main()` function from the `simapp` application: -simapp/simd/main.go +```go expandable +package main -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/main.go#L12-L24) + "os" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/simd/cmd" + "github.com/cosmos/cosmos-sdk/server" + svrcmd "github.com/cosmos/cosmos-sdk/server/cmd" +) -The rest of the document will detail what needs to be implemented for each step and include smaller portions of code from the `simapp` CLI files. +func main() { + rootCmd := cmd.NewRootCmd() + if err := svrcmd.Execute(rootCmd, "", simapp.DefaultNodeHome); err != nil { + switch e := err.(type) { + case server.ErrorCode: + os.Exit(e.Code) -## Adding Commands to the CLI[​](#adding-commands-to-the-cli "Direct link to Adding Commands to the CLI") +default: + os.Exit(1) +} -Every application CLI first constructs a root command, then adds functionality by aggregating subcommands (often with further nested subcommands) using `rootCmd.AddCommand()`. The bulk of an application's unique capabilities lies in its transaction and query commands, called `TxCmd` and `QueryCmd` respectively. 
+} +} +``` -### Root Command[​](#root-command "Direct link to Root Command") +The rest of the document will detail what needs to be implemented for each step and include smaller portions of code from the `simapp` CLI files. -The root command (called `rootCmd`) is what the user first types into the command line to indicate which application they wish to interact with. The string used to invoke the command (the "Use" field) is typically the name of the application suffixed with `-d`, e.g. `simd` or `gaiad`. The root command typically includes the following commands to support basic functionality in the application. +## Adding Commands to the CLI -* **Status** command from the Cosmos SDK rpc client tools, which prints information about the status of the connected [`Node`](/v0.47/learn/advanced/node). The Status of a node includes `NodeInfo`,`SyncInfo` and `ValidatorInfo`. -* **Keys** [commands](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/keys) from the Cosmos SDK client tools, which includes a collection of subcommands for using the key functions in the Cosmos SDK crypto tools, including adding a new key and saving it to the keyring, listing all public keys stored in the keyring, and deleting a key. For example, users can type `simd keys add ` to add a new key and save an encrypted copy to the keyring, using the flag `--recover` to recover a private key from a seed phrase or the flag `--multisig` to group multiple keys together to create a multisig key. For full details on the `add` key command, see the code [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/keys/add.go). For more details about usage of `--keyring-backend` for storage of key credentials look at the [keyring docs](/v0.47/user/run-node/keyring). -* **Server** commands from the Cosmos SDK server package. 
These commands are responsible for providing the mechanisms necessary to start an ABCI CometBFT application and provides the CLI framework (based on [cobra](https://github.com/spf13/cobra)) necessary to fully bootstrap an application. The package exposes two core functions: `StartCmd` and `ExportCmd` which creates commands to start the application and export state respectively. Learn more [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server). -* [**Transaction**](#transaction-commands) commands. -* [**Query**](#query-commands) commands. +Every application CLI first constructs a root command, then adds functionality by aggregating subcommands (often with further nested subcommands) using `rootCmd.AddCommand()`. The bulk of an application's unique capabilities lies in its transaction and query commands, called `TxCmd` and `QueryCmd` respectively. -Next is an example `rootCmd` function from the `simapp` application. It instantiates the root command, adds a [*persistent* flag](#flags) and `PreRun` function to be run before every execution, and adds all of the necessary subcommands. +### Root Command -simapp/simd/cmd/root.go +The root command (called `rootCmd`) is what the user first types into the command line to indicate which application they wish to interact with. The string used to invoke the command (the "Use" field) is typically the name of the application suffixed with `-d`, e.g. `simd` or `gaiad`. The root command typically includes the following commands to support basic functionality in the application. -``` -loading... +- **Status** command from the Cosmos SDK rpc client tools, which prints information about the status of the connected [`Node`](/docs/sdk/v0.47/learn/advanced/node). The Status of a node includes `NodeInfo`,`SyncInfo` and `ValidatorInfo`. 
+- **Keys** [commands](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/keys) from the Cosmos SDK client tools, which include a collection of subcommands for using the key functions in the Cosmos SDK crypto tools, including adding a new key and saving it to the keyring, listing all public keys stored in the keyring, and deleting a key. For example, users can type `simd keys add ` to add a new key and save an encrypted copy to the keyring, using the flag `--recover` to recover a private key from a seed phrase or the flag `--multisig` to group multiple keys together to create a multisig key. For full details on the `add` key command, see the code [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/keys/add.go). For more details about usage of `--keyring-backend` for storage of key credentials look at the [keyring docs](/docs/sdk/v0.47/user/run-node/keyring). +- **Server** commands from the Cosmos SDK server package. These commands are responsible for providing the mechanisms necessary to start an ABCI CometBFT application and provide the CLI framework (based on [cobra](https://github.com/spf13/cobra)) necessary to fully bootstrap an application. The package exposes two core functions: `StartCmd` and `ExportCmd`, which create commands to start the application and export state respectively. + Learn more [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server). +- [**Transaction**](#transaction-commands) commands. +- [**Query**](#query-commands) commands. + +Next is an example `rootCmd` function from the `simapp` application. It instantiates the root command, adds a [_persistent_ flag](#flags) and `PreRun` function to be run before every execution, and adds all of the necessary subcommands.
+ +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. +func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). 
+ WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. +func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome) + for _, sub_cmd := range cmds { + cmd.AddCommand(sub_cmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +simapp.ModuleBasics.AddQueryCommands(cmd) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +simapp.ModuleBasics.AddTxCommands(cmd) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + var simApp *simapp.SimApp + + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L38-L92) - -`rootCmd` has a function called `initAppConfig()` which is useful for setting the application's custom configs. By default app uses CometBFT app config template from Cosmos SDK, which can be over-written via `initAppConfig()`. Here's an example code to override default `app.toml` template. - -simapp/simd/cmd/root.go - +`rootCmd` has a function called `initAppConfig()` that is used to set the application's custom configs. +By default, the app uses the CometBFT app config template from the Cosmos SDK, which can be overwritten via `initAppConfig()`. +Here is example code that overrides the default `app.toml` template. 
+ +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. +func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). 
+ WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. +func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome) + for _, sub_cmd := range cmds { + cmd.AddCommand(sub_cmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +simapp.ModuleBasics.AddQueryCommands(cmd) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +simapp.ModuleBasics.AddTxCommands(cmd) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + var simApp *simapp.SimApp + + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L106-L161) The `initAppConfig()` also allows overriding the Cosmos SDK's default [server config](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server/config/config.go#L235). One example is the `min-gas-prices` config, which defines the minimum gas prices a validator is willing to accept for processing a transaction. By default, the Cosmos SDK sets this parameter to `""` (empty string), which forces all validators to tweak their own `app.toml` and set a non-empty value, or else the node will halt on startup. This might not be the best UX for validators, so the chain developer can set a default `app.toml` value for validators inside this `initAppConfig()` function. 
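The embed-and-extend pattern used by `initAppConfig()` can be sketched in isolation with only the Go standard library: a default server config is embedded in a custom config struct, selected defaults are overridden, and the default template is extended with an app-specific section. The `ServerConfig`, `CustomConfig`, and `renderConfig` names below are illustrative stand-ins for the SDK's `serverconfig.Config` and `DefaultConfigTemplate`, not real SDK identifiers:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// ServerConfig is an illustrative stand-in for the SDK's serverconfig.Config;
// only the field relevant to this example is shown.
type ServerConfig struct {
	MinGasPrices string
}

// CustomConfig embeds the server config and adds an app-specific section,
// mirroring how CustomAppConfig embeds serverconfig.Config in the snippet above.
type CustomConfig struct {
	ServerConfig
	QueryGasLimit uint64
}

// defaultTemplate stands in for serverconfig.DefaultConfigTemplate; the chain
// developer appends custom sections to it before handing it to the server.
const defaultTemplate = `minimum-gas-prices = "{{ .MinGasPrices }}"
`

const customSection = `[wasm]
query_gas_limit = {{ .QueryGasLimit }}
`

// renderConfig renders the combined template against the config struct;
// promoted fields of the embedded ServerConfig resolve automatically.
func renderConfig(cfg CustomConfig) string {
	tmpl := template.Must(template.New("appConfig").Parse(defaultTemplate + customSection))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, cfg); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	cfg := CustomConfig{
		// A non-empty default means validators can still override the value in
		// their own app.toml, but the node no longer halts if they leave it alone.
		ServerConfig:  ServerConfig{MinGasPrices: "0stake"},
		QueryGasLimit: 300000,
	}
	fmt.Print(renderConfig(cfg))
}
```

Rendering the combined template yields an `app.toml` fragment with both the overridden default and the custom `[wasm]` section.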
-simapp/simd/cmd/root.go - +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. +func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). 
+ WithLegacyAmino(encodingConfig.Amino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. +func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome) + for _, sub_cmd := range cmds { + cmd.AddCommand(sub_cmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +simapp.ModuleBasics.AddQueryCommands(cmd) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +simapp.ModuleBasics.AddTxCommands(cmd) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + var simApp *simapp.SimApp + + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L126-L142) - -The root-level `status` and `keys` subcommands are common across most applications and do not interact with application state. The bulk of an application's functionality - what users can actually *do* with it - is enabled by its `tx` and `query` commands. - -### Transaction Commands[​](#transaction-commands "Direct link to Transaction Commands") - -[Transactions](/v0.47/learn/advanced/transactions) are objects wrapping [`Msg`s](/v0.47/build/building-modules/messages-and-queries#messages) that trigger state changes. 
To enable the creation of transactions using the CLI interface, a function `txCommand` is generally added to the `rootCmd`: - -simapp/simd/cmd/root.go +The root-level `status` and `keys` subcommands are common across most applications and do not interact with application state. The bulk of an application's functionality - what users can actually _do_ with it - is enabled by its `tx` and `query` commands. + +### Transaction Commands + +[Transactions](/docs/sdk/v0.47/learn/advanced/transactions) are objects wrapping [`Msg`s](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages) that trigger state changes. To enable the creation of transactions using the CLI interface, a function `txCommand` is generally added to the `rootCmd`: + +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root 
command for simd. It is called once in the +/ main function. +func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. 
+func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. + type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome) + for _, sub_cmd := range cmds { + cmd.AddCommand(sub_cmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +simapp.ModuleBasics.AddQueryCommands(cmd) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +simapp.ModuleBasics.AddTxCommands(cmd) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + var simApp *simapp.SimApp + + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L177-L184) This `txCommand` function adds all the transactions available to end-users for the application. This typically includes: -* **Sign command** from the [`auth`](/v0.47/build/modules/auth) module that signs messages in a transaction. To enable multisig, add the `auth` module's `MultiSign` command. Since every transaction requires some sort of signature in order to be valid, the signing command is necessary for every application. -* **Broadcast command** from the Cosmos SDK client tools, to broadcast transactions.
-* **All [module transaction commands](/v0.47/build/building-modules/module-interfaces#transaction-commands)** the application is dependent on, retrieved by using the [basic module manager's](/v0.47/build/building-modules/module-manager#basic-manager) `AddTxCommands()` function. +- **Sign command** from the [`auth`](/docs/sdk/v0.47/documentation/module-system/modules/auth) module that signs messages in a transaction. To enable multisig, add the `auth` module's `MultiSign` command. Since every transaction requires a signature to be valid, the signing command is necessary for every application. +- **Broadcast command** from the Cosmos SDK client tools, to broadcast transactions. +- **All [module transaction commands](/docs/sdk/v0.47/documentation/module-system/module-interfaces#transaction-commands)** the application is dependent on, retrieved by using the [basic module manager's](/docs/sdk/v0.47/documentation/module-system/module-manager#basic-manager) `AddTxCommands()` function.
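Taken together, these subcommands give end-users a complete sign-and-broadcast workflow. The following is a hypothetical shell transcript for an app binary named `simd` (the file names, key name, recipient address, and chain ID are illustrative, not taken from this page); a `run` helper merely echoes each command so the sketch stays runnable without a built chain.

```shell
# Dry-run helper: echo each command instead of executing it, since this
# sketch assumes a built `simd` binary and a running node.
run() { echo "+ $*"; }

# Sign a previously generated transaction with a local key (the auth module's sign command).
run simd tx sign tx.json --from mykey --chain-id my-chain

# Broadcast the signed transaction through a node (the broadcast command).
run simd tx broadcast signed.json --node tcp://localhost:26657

# A module transaction command wired in via AddTxCommands(), e.g. x/bank's send.
run simd tx bank send mykey cosmos1examplerecipient 1000stake --chain-id my-chain
```

In a real session you would drop the `run` helper and execute `simd` directly.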
Here is an example of a `txCommand` aggregating these subcommands from the `simapp` application: -simapp/simd/cmd/root.go - +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. +func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). 
+ WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. +func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome) + for _, sub_cmd := range cmds { + cmd.AddCommand(sub_cmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +simapp.ModuleBasics.AddQueryCommands(cmd) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +simapp.ModuleBasics.AddTxCommands(cmd) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + var simApp *simapp.SimApp + + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L227-L251) - -### Query Commands[​](#query-commands "Direct link to Query Commands") -[**Queries**](/v0.47/build/building-modules/messages-and-queries#queries) are objects that allow users to retrieve information about the application's state. To enable the creation of queries using the CLI interface, a function `queryCommand` is generally added to the `rootCmd`: - -simapp/simd/cmd/root.go - -``` -loading... +### Query Commands + +[**Queries**](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#queries) are objects that allow users to retrieve information about the application's state. 
To enable the creation of queries using the CLI interface, a function `queryCommand` is generally added to the `rootCmd`: + +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. +func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). 
+ WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. +func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome) + for _, sub_cmd := range cmds { + cmd.AddCommand(sub_cmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +simapp.ModuleBasics.AddQueryCommands(cmd) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +simapp.ModuleBasics.AddTxCommands(cmd) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + var simApp *simapp.SimApp + + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L177-L184) - This `queryCommand` function adds all the queries available to end-users for the application. This typically includes: -* **QueryTx** and/or other transaction query commands] from the `auth` module which allow the user to search for a transaction by inputting its hash, a list of tags, or a block height. These queries allow users to see if transactions have been included in a block. -* **Account command** from the `auth` module, which displays the state (e.g. account balance) of an account given an address. -* **Validator command** from the Cosmos SDK rpc client tools, which displays the validator set of a given height. 
-* **Block command** from the Cosmos SDK RPC client tools, which displays the block data for a given height. -* **All [module query commands](/v0.47/build/building-modules/module-interfaces#query-commands)** the application is dependent on, retrieved by using the [basic module manager's](/v0.47/build/building-modules/module-manager#basic-manager) `AddQueryCommands()` function. +- **QueryTx** and/or other transaction query commands from the `auth` module, which allow the user to search for a transaction by inputting its hash, a list of tags, or a block height. These queries allow users to see if transactions have been included in a block. +- **Account command** from the `auth` module, which displays the state (e.g. account balance) of an account given an address. +- **Validator command** from the Cosmos SDK RPC client tools, which displays the validator set at a given height. +- **Block command** from the Cosmos SDK RPC client tools, which displays the block data for a given height. +- **All [module query commands](/docs/sdk/v0.47/documentation/module-system/module-interfaces#query-commands)** the application is dependent on, retrieved by using the [basic module manager's](/docs/sdk/v0.47/documentation/module-system/module-manager#basic-manager) `AddQueryCommands()` function. Here is an example of a `queryCommand` aggregating subcommands from the `simapp` application: -simapp/simd/cmd/root.go - -``` -loading...
+```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. +func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). 
+ WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. +func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome) + for _, sub_cmd := range cmds { + cmd.AddCommand(sub_cmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +simapp.ModuleBasics.AddQueryCommands(cmd) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +simapp.ModuleBasics.AddTxCommands(cmd) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + var simApp *simapp.SimApp + + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L204-L225) +## Flags -## Flags[​](#flags "Direct link to Flags") +Flags are used to modify commands; developers can include them in a `flags.go` file with their CLI. Users can explicitly include them in commands or pre-configure them in their [`app.toml`](/docs/sdk/v0.47/user/run-node/interact-node#configuring-the-node-using-apptoml). Commonly pre-configured flags include the `--node` to connect to and the `--chain-id` of the blockchain the user wishes to interact with. -Flags are used to modify commands; developers can include them in a `flags.go` file with their CLI.
Users can explicitly include them in commands or pre-configure them by inside their [`app.toml`](/v0.47/user/run-node/interact-node#configuring-the-node-using-apptoml). Commonly pre-configured flags include the `--node` to connect to and `--chain-id` of the blockchain the user wishes to interact with. +A _persistent_ flag (as opposed to a _local_ flag) added to a command transcends all of its children: subcommands will inherit the configured values for these flags. Additionally, all flags have default values when they are added to commands; some toggle an option off but others are empty values that the user needs to override to create valid commands. A flag can be explicitly marked as _required_ so that an error is automatically thrown if the user does not provide a value, but it is also acceptable to handle unexpected missing flags differently. -A *persistent* flag (as opposed to a *local* flag) added to a command transcends all of its children: subcommands will inherit the configured values for these flags. Additionally, all flags have default values when they are added to commands; some toggle an option off but others are empty values that the user needs to override to create valid commands. A flag can be explicitly marked as *required* so that an error is automatically thrown if the user does not provide a value, but it is also acceptable to handle unexpected missing flags differently. +Flags are added to commands directly (generally in the [module's CLI file](/docs/sdk/v0.47/documentation/module-system/module-interfaces#flags) where module commands are defined) and no flag except for the `rootCmd` persistent flags has to be added at application level. It is common to add a _persistent_ flag for `--chain-id`, the unique identifier of the blockchain the application pertains to, to the root command. Adding this flag can be done in the `main()` function. Adding this flag makes sense as the chain ID should not be changing across commands in this application CLI. 
-Flags are added to commands directly (generally in the [module's CLI file](/v0.47/build/building-modules/module-interfaces#flags) where module commands are defined) and no flag except for the `rootCmd` persistent flags has to be added at application level. It is common to add a *persistent* flag for `--chain-id`, the unique identifier of the blockchain the application pertains to, to the root command. Adding this flag can be done in the `main()` function. Adding this flag makes sense as the chain ID should not be changing across commands in this application CLI. - -## Environment variables[​](#environment-variables "Direct link to Environment variables") +## Environment variables Each flag is bound to its respective named environment variable. The name of the environment variable consists of two parts: the upper-case `basename` followed by the name of the flag, with `-` substituted by `_`. For example, the flag `--home` for an application with basename `GAIA` is bound to `GAIA_HOME`. This reduces the number of flags typed for routine operations. For example, instead of: -``` +```shell gaia --home=./ --node= --chain-id="testchain-1" --keyring-backend=test tx ... --from= ``` this will be more convenient: -``` -# define env variables in .env, .envrc etcGAIA_HOME=GAIA_NODE=GAIA_CHAIN_ID="testchain-1"GAIA_KEYRING_BACKEND="test"# and later just usegaia tx ... --from= +```shell +# define env variables in .env, .envrc etc +GAIA_HOME= +GAIA_NODE= +GAIA_CHAIN_ID="testchain-1" +GAIA_KEYRING_BACKEND="test" + +# and later just use +gaia tx ... --from= ``` -## Configurations[​](#configurations "Direct link to Configurations") +## Configurations It is vital that the root command of an application uses the `PersistentPreRun()` cobra command property for executing the command, so all child commands have access to the server and client contexts.
These contexts are set as their default values initially and may be modified, scoped to the command, in their respective `PersistentPreRun()` functions. Note that the `client.Context` is typically pre-populated with "default" values that may be useful for all commands to inherit and override if necessary. Here is an example of a `PersistentPreRun()` function from `simapp`: -simapp/simd/cmd/root.go - -``` -loading... +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function.
+func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. 
+func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. + type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome) + for _, sub_cmd := range cmds { + cmd.AddCommand(sub_cmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +simapp.ModuleBasics.AddQueryCommands(cmd) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +simapp.ModuleBasics.AddTxCommands(cmd) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + var simApp *simapp.SimApp + + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L63-L86) - The `SetCmdClientContextHandler` call reads persistent flags via `ReadPersistentCommandFlags` which creates a `client.Context` and sets that on the root command's `Context`. The `InterceptConfigsPreRunHandler` call creates a viper literal, default `server.Context`, and a logger and sets that on the root command's `Context`. The `server.Context` will be modified and saved to disk. The internal `interceptConfigs` call reads or creates a CometBFT configuration based on the home path provided. In addition, `interceptConfigs` also reads and loads the application configuration, `app.toml`, and binds that to the `server.Context` viper literal. 
This is vital so the application can get access to not only the CLI flags, but also to the application configuration values provided by this file. - - When willing to configure which logger is used, do not to use `InterceptConfigsPreRunHandler`, which sets the default SDK logger, but instead use `InterceptConfigsAndCreateContext` and set the server context and the logger manually: + +If you want to configure which logger is used, do not use `InterceptConfigsPreRunHandler`, which sets the default SDK logger, but instead use `InterceptConfigsAndCreateContext` and set the server context and the logger manually: + +```diff expandable +-return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) + ++serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig) ++if err != nil { ++ return err ++} + ++/ overwrite default server logger ++logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout()) ++if err != nil { ++ return err ++} ++serverCtx.Logger = logger.With(log.ModuleKey, "server") + ++/ set server context ++return server.SetCmdServerContext(cmd, serverCtx) +``` - ``` - -return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig)+serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig)+if err != nil {+ return err+}+// overwrite default server logger+logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout())+if err != nil {+ return err+}+serverCtx.Logger = logger.With(log.ModuleKey, "server")+// set server context+return server.SetCmdServerContext(cmd, serverCtx) - ``` - + diff --git a/docs/sdk/v0.47/learn/advanced/config.mdx b/docs/sdk/v0.47/learn/advanced/config.mdx index 901d930b..a9526711 100644 --- a/docs/sdk/v0.47/learn/advanced/config.mdx +++ b/docs/sdk/v0.47/learn/advanced/config.mdx @@ -1,26 +1,288 @@ --- -title: "Configuration" -description:
"Version: v0.47" +title: Configuration +description: >- + This documentation refers to the app.toml, if you'd like to read about the + config.toml please visit CometBFT docs. --- This documentation refers to the app.toml, if you'd like to read about the config.toml please visit [CometBFT docs](https://docs.cometbft.com/v0.37/). -tools/confix/data/v0.47-app.toml +{/* the following is not a python reference, however syntax coloring makes the file more readable in the docs */} -``` -loading... -``` +```python +# This is a TOML config file. +# For more information, see https://github.com/toml-lang/toml + +############################################################################### +### Base Configuration ### +############################################################################### + +# The minimum gas prices a validator is willing to accept for processing a +# transaction. A transaction's fees must meet the minimum of any denomination +# specified in this config (e.g. 0.25token1,0.0001token2). +minimum-gas-prices = "0stake" + +# default: the last 362880 states are kept, pruning at 10 block intervals +# nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) +# everything: 2 latest states will be kept; pruning at 10 block intervals. +# custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' +pruning = "default" + +# These are applied if and only if the pruning strategy is custom. +pruning-keep-recent = "0" +pruning-interval = "0" + +# HaltHeight contains a non-zero block height at which a node will gracefully +# halt and shutdown that can be used to assist upgrades and testing. +# +# Note: Commitment of state will be attempted on the corresponding block. +halt-height = 0 + +# HaltTime contains a non-zero minimum block time (in Unix seconds) at which +# a node will gracefully halt and shutdown that can be used to assist upgrades +# and testing. 
+# +# Note: Commitment of state will be attempted on the corresponding block. +halt-time = 0 + +# MinRetainBlocks defines the minimum block height offset from the current +# block being committed, such that all blocks past this offset are pruned +# from Tendermint. It is used as part of the process of determining the +# ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates +# that no blocks should be pruned. +# +# This configuration value is only responsible for pruning Tendermint blocks. +# It has no bearing on application state pruning which is determined by the +# "pruning-*" configurations. +# +# Note: Tendermint block pruning is dependent on this parameter in conjunction +# with the unbonding (safety threshold) period, state pruning and state sync +# snapshot parameters to determine the correct minimum value of +# ResponseCommit.RetainHeight. +min-retain-blocks = 0 + +# InterBlockCache enables inter-block caching. +inter-block-cache = true + +# IndexEvents defines the set of events in the form {eventType}.{attributeKey}, +# which informs Tendermint what to index. If empty, all events will be indexed. +# +# Example: +# ["message.sender", "message.recipient"] +index-events = [] + +# IavlCacheSize set the size of the iavl tree cache (in number of nodes). +iavl-cache-size = 781250 + +# IAVLDisableFastNode enables or disables the fast node feature of IAVL. +# Default is false. +iavl-disable-fastnode = false + +# IAVLLazyLoading enable/disable the lazy loading of iavl store. +# Default is false. +iavl-lazy-loading = false + +# AppDBBackend defines the database backend type to use for the application and snapshots DBs. +# An empty string indicates that a fallback will be used. +# The fallback is the db_backend value set in Tendermint's config.toml. 
+app-db-backend = "" + +############################################################################### +### Telemetry Configuration ### +############################################################################### + +[telemetry] + +# Prefixed with keys to separate services. +service-name = "" + +# Enabled enables the application telemetry functionality. When enabled, +# an in-memory sink is also enabled by default. Operators may also enabled +# other sinks such as Prometheus. +enabled = false + +# Enable prefixing gauge values with hostname. +enable-hostname = false + +# Enable adding hostname to labels. +enable-hostname-label = false + +# Enable adding service to labels. +enable-service-label = false + +# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink. +prometheus-retention-time = 0 + +# GlobalLabels defines a global set of name/value label tuples applied to all +# metrics emitted using the wrapper functions defined in telemetry package. +# +# Example: +# [["chain_id", "cosmoshub-1"]] +global-labels = [] + +############################################################################### +### API Configuration ### +############################################################################### + +[api] + +# Enable defines if the API server should be enabled. +enable = false + +# Swagger defines if swagger documentation should automatically be registered. +swagger = false + +# Address defines the API server to listen on. +address = "tcp://localhost:1317" + +# MaxOpenConnections defines the number of maximum open connections. +max-open-connections = 1000 + +# RPCReadTimeout defines the Tendermint RPC read timeout (in seconds). +rpc-read-timeout = 10 + +# RPCWriteTimeout defines the Tendermint RPC write timeout (in seconds). +rpc-write-timeout = 0 + +# RPCMaxBodyBytes defines the Tendermint maximum request body (in bytes). 
+rpc-max-body-bytes = 1000000 + +# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk). +enabled-unsafe-cors = false + +############################################################################### +### Rosetta Configuration ### +############################################################################### -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/main/tools/confix/data/v0.47-app.toml) +[rosetta] + +# Enable defines if the Rosetta API server should be enabled. +enable = false + +# Address defines the Rosetta API server to listen on. +address = ":8080" + +# Network defines the name of the blockchain that will be returned by Rosetta. +blockchain = "app" + +# Network defines the name of the network that will be returned by Rosetta. +network = "network" + +# Retries defines the number of retries when connecting to the node before failing. +retries = 3 + +# Offline defines if Rosetta server should run in offline mode. +offline = false + +# EnableDefaultSuggestedFee defines if the server should suggest fee by default. +# If 'construction/medata' is called without gas limit and gas price, +# suggested fee based on gas-to-suggest and denom-to-suggest will be given. +enable-fee-suggestion = false + +# GasToSuggest defines gas limit when calculating the fee +gas-to-suggest = 200000 + +# DenomToSuggest defines the default denom for fee suggestion. +# Price must be in minimum-gas-prices. +denom-to-suggest = "uatom" + +############################################################################### +### gRPC Configuration ### +############################################################################### + +[grpc] + +# Enable defines if the gRPC server should be enabled. +enable = true + +# Address defines the gRPC server address to bind to. +address = "localhost:9090" + +# MaxRecvMsgSize defines the max message size in bytes the server can receive. +# The default value is 10MB. 
+max-recv-msg-size = "10485760" + +# MaxSendMsgSize defines the max message size in bytes the server can send. +# The default value is math.MaxInt32. +max-send-msg-size = "2147483647" + +############################################################################### +### gRPC Web Configuration ### +############################################################################### + +[grpc-web] + +# GRPCWebEnable defines if the gRPC-web should be enabled. +# NOTE: gRPC must also be enabled, otherwise, this configuration is a no-op. +enable = true + +# Address defines the gRPC-web server address to bind to. +address = "localhost:9091" + +# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk). +enable-unsafe-cors = false + +############################################################################### +### State Sync Configuration ### +############################################################################### + +# State sync snapshots allow other nodes to rapidly join the network without replaying historical +# blocks, instead downloading and applying a snapshot of the application state at a given height. +[state-sync] + +# snapshot-interval specifies the block interval at which local state sync snapshots are +# taken (0 to disable). +snapshot-interval = 0 + +# snapshot-keep-recent specifies the number of recent snapshots to keep and serve (0 to keep all). +snapshot-keep-recent = 2 + +############################################################################### +### Store / State Streaming ### +############################################################################### + +[store] +streamers = [] + +[streamers] +[streamers.file] +keys = ["*"] +write_dir = "" +prefix = "" + +# output-metadata specifies if output the metadata file which includes the abci request/responses +# during processing the block. +output-metadata = "true" + +# stop-node-on-error specifies if propagate the file streamer errors to consensus state machine. 
+stop-node-on-error = "true" + +# fsync specifies if call fsync after writing the files. +fsync = "false" + +############################################################################### +### Mempool ### +############################################################################### + +[mempool] +# Setting max-txs to 0 will allow for a unbounded amount of transactions in the mempool. +# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool. +# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount. +# +# Note, this configuration only applies to SDK built-in app-side mempool +# implementations. +max-txs = 5000 + +``` -## inter-block-cache[​](#inter-block-cache "Direct link to inter-block-cache") +## inter-block-cache This feature will consume more ram than a normal node, if enabled. -## iavl-cache-size[​](#iavl-cache-size "Direct link to iavl-cache-size") +## iavl-cache-size Using this feature will increase ram consumption -## iavl-lazy-loading[​](#iavl-lazy-loading "Direct link to iavl-lazy-loading") +## iavl-lazy-loading This feature is to be used for archive nodes, allowing them to have a faster start up time. diff --git a/docs/sdk/v0.47/learn/advanced/context.mdx b/docs/sdk/v0.47/learn/advanced/context.mdx index 31b9030b..30cbb287 100644 --- a/docs/sdk/v0.47/learn/advanced/context.mdx +++ b/docs/sdk/v0.47/learn/advanced/context.mdx @@ -1,78 +1,692 @@ --- -title: "Context" -description: "Version: v0.47" +title: Context --- - - The `context` is a data structure intended to be passed from function to function that carries information about the current state of the application. It provides access to a branched storage (a safe branch of the entire state) as well as useful objects and information like `gasMeter`, `block height`, `consensus parameters` and more. 
- +## Synopsis + +The `context` is a data structure intended to be passed from function to function that carries information about the current state of the application. It provides access to a branched storage (a safe branch of the entire state) as well as useful objects and information like `gasMeter`, `block height`, `consensus parameters` and more. - ### Pre-requisites Readings[​](#pre-requisites-readings "Direct link to Pre-requisites Readings") - * [Anatomy of a Cosmos SDK Application](/v0.47/learn/beginner/overview-app) - * [Lifecycle of a Transaction](/v0.47/learn/beginner/tx-lifecycle) +### Pre-requisites Readings + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.47/learn/beginner/overview-app) +- [Lifecycle of a Transaction](/docs/sdk/v0.47/learn/beginner/tx-lifecycle) + -## Context Definition[​](#context-definition "Direct link to Context Definition") +## Context Definition -The Cosmos SDK `Context` is a custom data structure that contains Go's stdlib [`context`](https://pkg.go.dev/context) as its base, and has many additional types within its definition that are specific to the Cosmos SDK. The `Context` is integral to transaction processing in that it allows modules to easily access their respective [store](/v0.47/learn/advanced/store#base-layer-kvstores) in the [`multistore`](/v0.47/learn/advanced/store#multistore) and retrieve transactional context such as the block header and gas meter. +The Cosmos SDK `Context` is a custom data structure that contains Go's stdlib [`context`](https://pkg.go.dev/context) as its base, and has many additional types within its definition that are specific to the Cosmos SDK. The `Context` is integral to transaction processing in that it allows modules to easily access their respective [store](/docs/sdk/v0.47/learn/advanced/store#base-layer-kvstores) in the [`multistore`](/docs/sdk/v0.47/learn/advanced/store#multistore) and retrieve transactional context such as the block header and gas meter. 
-types/context.go +```go expandable +package types -``` -loading... -``` +import ( + + "context" + "time" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + tmbytes "github.com/tendermint/tendermint/libs/bytes" + "github.com/tendermint/tendermint/libs/log" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + "github.com/cosmos/cosmos-sdk/store/gaskv" + storetypes "github.com/cosmos/cosmos-sdk/store/types" +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms MultiStore + header tmproto.Header + headerHash tmbytes.HexBytes + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter GasMeter + blockGasMeter GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + minGasPrice DecCoins + consParams *tmproto.ConsensusParams + eventManager *EventManager + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo 
{ + return c.voteInfo +} + +func (c Context) + +GasMeter() + +GasMeter { + return c.gasMeter +} + +func (c Context) + +BlockGasMeter() + +GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + +IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() *EventManager { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +/ clone the header before returning +func (c Context) + +BlockHeader() + +tmproto.Header { + msg := proto.Clone(&c.header).(*tmproto.Header) + +return *msg +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() + +tmbytes.HexBytes { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() *tmproto.ConsensusParams { + return proto.Clone(c.consParams).(*tmproto.ConsensusParams) +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms MultiStore, header tmproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: NewEventManager(), + kvGasConfig: 
storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated tendermint block header in UTC time. +func (c Context) + +WithBlockHeader(header tmproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated tendermint block header hash. +func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/context.go#L17-L44) +copy(temp, hash) -* **Base Context:** The base type is a Go [Context](https://pkg.go.dev/context), which is explained further in the [Go Context Package](#go-context-package) section below. -* **Multistore:** Every application's `BaseApp` contains a [`CommitMultiStore`](/v0.47/learn/advanced/store#multistore) which is provided when a `Context` is created. Calling the `KVStore()` and `TransientStore()` methods allows modules to fetch their respective [`KVStore`](/v0.47/learn/advanced/store#base-layer-kvstores) using their unique `StoreKey`. -* **Header:** The [header](https://docs.cometbft.com/v0.37/spec/core/data_structures#header) is a Blockchain type. It carries important information about the state of the blockchain, such as block height and proposer of the current block. -* **Header Hash:** The current block header hash, obtained during `abci.RequestBeginBlock`. -* **Chain ID:** The unique identification number of the blockchain a block pertains to. 
-* **Transaction Bytes:** The `[]byte` representation of a transaction being processed using the context. Every transaction is processed by various parts of the Cosmos SDK and consensus engine (e.g. CometBFT) throughout its [lifecycle](/v0.47/learn/beginner/tx-lifecycle), some of which do not have any understanding of transaction types. Thus, transactions are marshaled into the generic `[]byte` type using some kind of [encoding format](/v0.47/learn/advanced/encoding) such as [Amino](/v0.47/learn/advanced/encoding). -* **Logger:** A `logger` from the CometBFT libraries. Learn more about logs [here](https://docs.cometbft.com/v0.37/core/configuration). Modules call this method to create their own unique module-specific logger. -* **VoteInfo:** A list of the ABCI type [`VoteInfo`](https://docs.cometbft.com/master/spec/abci/abci.html#voteinfo), which includes the name of a validator and a boolean indicating whether they have signed the block. -* **Gas Meters:** Specifically, a [`gasMeter`](/v0.47/learn/beginner/gas-fees#main-gas-meter) for the transaction currently being processed using the context and a [`blockGasMeter`](/v0.47/learn/beginner/gas-fees#block-gas-meter) for the entire block it belongs to. Users specify how much in fees they wish to pay for the execution of their transaction; these gas meters keep track of how much [gas](/v0.47/learn/beginner/gas-fees) has been used in the transaction or block so far. If the gas meter runs out, execution halts. -* **CheckTx Mode:** A boolean value indicating whether a transaction should be processed in `CheckTx` or `DeliverTx` mode. -* **Min Gas Price:** The minimum [gas](/v0.47/learn/beginner/gas-fees) price a node is willing to take in order to include a transaction in its block. This price is a local value configured by each node individually, and should therefore **not be used in any functions used in sequences leading to state-transitions**. 
-* **Consensus Params:** The ABCI type [Consensus Parameters](https://docs.cometbft.com/master/spec/abci/apps.html#consensus-parameters), which specify certain limits for the blockchain, such as maximum gas for a block. -* **Event Manager:** The event manager allows any caller with access to a `Context` to emit [`Events`](/v0.47/learn/advanced/events). Modules may define module specific `Events` by defining various `Types` and `Attributes` or use the common definitions found in `types/`. Clients can subscribe or query for these `Events`. These `Events` are collected throughout `DeliverTx`, `BeginBlock`, and `EndBlock` and are returned to CometBFT for indexing. For example: -* **Priority:** The transaction priority, only relevant in `CheckTx`. -* **KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the `KVStore`. -* **Transient KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the transiant `KVStore`. +c.headerHash = temp + return c +} -## Go Context Package[​](#go-context-package "Direct link to Go Context Package") +/ WithBlockTime returns a Context with an updated tendermint block header time in UTC time +func (c Context) -A basic `Context` is defined in the [Golang Context Package](https://pkg.go.dev/context). A `Context` is an immutable data structure that carries request-scoped data across APIs and processes. Contexts are also designed to enable concurrency and to be used in goroutines. +WithBlockTime(newTime time.Time) -Contexts are intended to be **immutable**; they should never be edited. Instead, the convention is to create a child context from its parent using a `With` function. For example: +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.UTC() +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. 
+func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. +func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. +func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. 
+func (c Context) + +WithGasMeter(meter GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. 
+func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + return c +} + +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) + +WithMinGasPrices(gasPrices DecCoins) + +Context { + c.minGasPrice = gasPrices + return c +} + +/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) + +WithConsensusParams(params *tmproto.ConsensusParams) + +Context { + c.consParams = params + return c +} + +/ WithEventManager returns a Context with an updated event manager +func (c Context) + +WithEventManager(em *EventManager) + +Context { + c.eventManager = em + return c +} + +/ WithPriority returns a Context with an updated tx priority +func (c Context) + +WithPriority(p int64) + +Context { + c.priority = p + return c +} + +/ TODO: remove??? +func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value interface{ +}) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key interface{ +}) + +interface{ +} { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +KVStore { + return gaskv.NewStore(c.MultiStore().GetKVStore(key), c.GasMeter(), c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +KVStore { + return gaskv.NewStore(c.MultiStore().GetKVStore(key), c.GasMeter(), c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. 
The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.MultiStore().CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var _ context.Context = Context{ +} + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. It is useful for passing an sdk.Context through methods that take a +/ stdlib context.Context parameter such as generated gRPC methods. To get the original +/ sdk.Context back, call UnwrapSDKContext. +func WrapSDKContext(ctx Context) + +context.Context { + return ctx +} + +/ UnwrapSDKContext retrieves a Context from a context.Context instance +/ attached with WrapSDKContext. It panics if a Context was not properly +/ attached +func UnwrapSDKContext(ctx context.Context) + +Context { + if sdkCtx, ok := ctx.(Context); ok { + return sdkCtx +} + +return ctx.Value(SdkContextKey).(Context) +} ``` + +- **Base Context:** The base type is a Go [Context](https://pkg.go.dev/context), which is explained further in the [Go Context Package](#go-context-package) section below. +- **Multistore:** Every application's `BaseApp` contains a [`CommitMultiStore`](/docs/sdk/v0.47/learn/advanced/store#multistore) which is provided when a `Context` is created. 
Calling the `KVStore()` and `TransientStore()` methods allows modules to fetch their respective [`KVStore`](/docs/sdk/v0.47/learn/advanced/store#base-layer-kvstores) using their unique `StoreKey`. +- **Header:** The [header](https://docs.cometbft.com/v0.37/spec/core/data_structures#header) is a CometBFT block header type (`tmproto.Header`). It carries important information about the state of the blockchain, such as block height and proposer of the current block. +- **Header Hash:** The current block header hash, obtained during `abci.RequestBeginBlock`. +- **Chain ID:** The unique identifier of the blockchain a block pertains to. +- **Transaction Bytes:** The `[]byte` representation of a transaction being processed using the context. Every transaction is processed by various parts of the Cosmos SDK and consensus engine (e.g. CometBFT) throughout its [lifecycle](/docs/sdk/v0.47/learn/beginner/tx-lifecycle), some of which do not have any understanding of transaction types. Thus, transactions are marshaled into the generic `[]byte` type using some kind of [encoding format](/docs/sdk/v0.47/learn/advanced/encoding) such as [Amino](/docs/sdk/v0.47/learn/advanced/encoding). +- **Logger:** A `logger` from the CometBFT libraries. Learn more about logs [here](https://docs.cometbft.com/v0.37/core/configuration). Modules call this method to create their own unique module-specific logger. +- **VoteInfo:** A list of the ABCI type [`VoteInfo`](https://docs.cometbft.com/master/spec/abci/abci.html#voteinfo), which includes the name of a validator and a boolean indicating whether they have signed the block. +- **Gas Meters:** Specifically, a [`gasMeter`](/docs/sdk/v0.47/learn/beginner/gas-fees#main-gas-meter) for the transaction currently being processed using the context and a [`blockGasMeter`](/docs/sdk/v0.47/learn/beginner/gas-fees#block-gas-meter) for the entire block it belongs to.
Users specify how much in fees they wish to pay for the execution of their transaction; these gas meters keep track of how much [gas](/docs/sdk/v0.47/learn/beginner/gas-fees) has been used in the transaction or block so far. If the gas meter runs out, execution halts. +- **CheckTx Mode:** A boolean value indicating whether a transaction should be processed in `CheckTx` or `DeliverTx` mode. +- **Min Gas Price:** The minimum [gas](/docs/sdk/v0.47/learn/beginner/gas-fees) price a node is willing to take in order to include a transaction in its block. This price is a local value configured by each node individually, and should therefore **not be used in any functions used in sequences leading to state-transitions**. +- **Consensus Params:** The ABCI type [Consensus Parameters](https://docs.cometbft.com/master/spec/abci/apps.html#consensus-parameters), which specify certain limits for the blockchain, such as maximum gas for a block. +- **Event Manager:** The event manager allows any caller with access to a `Context` to emit [`Events`](/docs/sdk/v0.47/learn/advanced/events). Modules may define module-specific + `Events` by defining various `Types` and `Attributes` or use the common definitions found in `types/`. Clients can subscribe or query for these `Events`. These `Events` are collected throughout `DeliverTx`, `BeginBlock`, and `EndBlock` and are returned to CometBFT for indexing. +- **Priority:** The transaction priority, only relevant in `CheckTx`. +- **KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the `KVStore`. +- **Transient KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the transient `KVStore`. + +## Go Context Package + +A basic `Context` is defined in the [Golang Context Package](https://pkg.go.dev/context). A `Context` +is an immutable data structure that carries request-scoped data across APIs and processes. Contexts +are also designed to enable concurrency and to be used in goroutines.
+ +Contexts are intended to be **immutable**; they should never be edited. Instead, the convention is +to create a child context from its parent using a `With` function. For example: + +```go childCtx = parentCtx.WithBlockHeader(header) ``` -The [Golang Context Package](https://pkg.go.dev/context) documentation instructs developers to explicitly pass a context `ctx` as the first argument of a process. +The [Golang Context Package](https://pkg.go.dev/context) documentation instructs developers to +explicitly pass a context `ctx` as the first argument of a process. -## Store branching[​](#store-branching "Direct link to Store branching") +## Store branching -The `Context` contains a `MultiStore`, which allows for branchinig and caching functionality using `CacheMultiStore` (queries in `CacheMultiStore` are cached to avoid future round trips). Each `KVStore` is branched in a safe and isolated ephemeral storage. Processes are free to write changes to the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can be committed to the underlying store at the end of the sequence or disregard them if something goes wrong. The pattern of usage for a Context is as follows: +The `Context` contains a `MultiStore`, which allows for branching and caching functionality using `CacheMultiStore` +(queries in `CacheMultiStore` are cached to avoid future round trips). +Each `KVStore` is branched in a safe and isolated ephemeral storage. Processes are free to write changes to +the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can +be committed to the underlying store at the end of the sequence or disregarded if something +goes wrong. The pattern of usage for a Context is as follows: -1. A process receives a Context `ctx` from its parent process, which provides information needed to perform the process. -2. The `ctx.ms` is a **branched store**, i.e.
a branch of the [multistore](/v0.47/learn/advanced/store#multistore) is made so that the process can make changes to the state as it executes, without changing the original`ctx.ms`. This is useful to protect the underlying multistore in case the changes need to be reverted at some point in the execution. -3. The process may read and write from `ctx` as it is executing. It may call a subprocess and pass `ctx` to it as needed. -4. When a subprocess returns, it checks if the result is a success or failure. If a failure, nothing needs to be done - the branch `ctx` is simply discarded. If successful, the changes made to the `CacheMultiStore` can be committed to the original `ctx.ms` via `Write()`. +1. A process receives a Context `ctx` from its parent process, which provides information needed to + perform the process. +2. The `ctx.ms` is a **branched store**, i.e. a branch of the [multistore](/docs/sdk/v0.47/learn/advanced/store#multistore) is made so that the process can make changes to the state as it executes, without changing the original `ctx.ms`. This is useful to protect the underlying multistore in case the changes need to be reverted at some point in the execution. +3. The process may read and write from `ctx` as it is executing. It may call a subprocess and pass + `ctx` to it as needed. +4. When a subprocess returns, it checks if the result is a success or failure. If a failure, nothing + needs to be done - the branch `ctx` is simply discarded. If successful, the changes made to + the `CacheMultiStore` can be committed to the original `ctx.ms` via `Write()`.
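The branch-and-write pattern above can be sketched with a toy map-backed store; `branchStore` and its `Write` method are illustrative stand-ins for the SDK's `CacheMultiStore`, not real SDK APIs:

```go
package main

import "fmt"

// branchStore is a toy stand-in for a branched KVStore: writes land in an
// overlay map and only reach the parent map when Write() is called.
type branchStore struct {
	parent  map[string]string
	overlay map[string]string
}

func newBranch(parent map[string]string) *branchStore {
	return &branchStore{parent: parent, overlay: map[string]string{}}
}

// Set records a write on the branch only.
func (b *branchStore) Set(k, v string) { b.overlay[k] = v }

// Get reads through the overlay, falling back to the parent store.
func (b *branchStore) Get(k string) string {
	if v, ok := b.overlay[k]; ok {
		return v
	}
	return b.parent[k]
}

// Write commits the branched writes back to the parent store.
func (b *branchStore) Write() {
	for k, v := range b.overlay {
		b.parent[k] = v
	}
}

func main() {
	state := map[string]string{"balance": "100"}

	branch := newBranch(state)
	branch.Set("balance", "90") // state transition happens on the branch only

	fmt.Println(state["balance"]) // 100: parent untouched until Write
	branch.Write()                // success path: commit the branch
	fmt.Println(state["balance"]) // 90
}
```

On the failure path, the branch is simply dropped and the parent store never sees the writes.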
-For example, here is a snippet from the [`runTx`](/v0.47/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function in [`baseapp`](/v0.47/learn/advanced/baseapp): +For example, here is a snippet from the [`runTx`](/docs/sdk/v0.47/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function in [`baseapp`](/docs/sdk/v0.47/learn/advanced/baseapp): -``` -runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)result = app.runMsgs(runMsgCtx, msgs, mode)result.GasWanted = gasWantedif mode != runTxModeDeliver { return result}if result.IsOK() { msCache.Write()} +```go +runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + +result = app.runMsgs(runMsgCtx, msgs, mode) + +result.GasWanted = gasWanted + if mode != runTxModeDeliver { + return result +} + if result.IsOK() { + msCache.Write() +} ``` Here is the process: -1. Prior to calling `runMsgs` on the message(s) in the transaction, it uses `app.cacheTxContext()` to branch and cache the context and multistore. +1. Prior to calling `runMsgs` on the message(s) in the transaction, it uses `app.cacheTxContext()` + to branch and cache the context and multistore. 2. `runMsgCtx` - the context with branched store, is used in `runMsgs` to return a result. -3. If the process is running in [`checkTxMode`](/v0.47/learn/advanced/baseapp#checktx), there is no need to write the changes - the result is returned immediately. -4. If the process is running in [`deliverTxMode`](/v0.47/learn/advanced/baseapp#delivertx) and the result indicates a successful run over all the messages, the branched multistore is written back to the original. +3. If the process is running in [`checkTxMode`](/docs/sdk/v0.47/learn/advanced/baseapp#checktx), there is no need to write the + changes - the result is returned immediately. +4. 
If the process is running in [`deliverTxMode`](/docs/sdk/v0.47/learn/advanced/baseapp#delivertx) and the result indicates + a successful run over all the messages, the branched multistore is written back to the original. diff --git a/docs/sdk/v0.47/learn/advanced/encoding.mdx b/docs/sdk/v0.47/learn/advanced/encoding.mdx index 26b549d5..bb65551b 100644 --- a/docs/sdk/v0.47/learn/advanced/encoding.mdx +++ b/docs/sdk/v0.47/learn/advanced/encoding.mdx @@ -1,268 +1,2774 @@ --- -title: "Encoding" -description: "Version: v0.47" +title: Encoding --- - - While encoding in the Cosmos SDK used to be mainly handled by `go-amino` codec, the Cosmos SDK is moving towards using `gogoprotobuf` for both state and client-side encoding. - +## Synopsis + +While encoding in the Cosmos SDK used to be mainly handled by `go-amino` codec, the Cosmos SDK is moving towards using `gogoprotobuf` for both state and client-side encoding. - ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of a Cosmos SDK application](/v0.47/learn/beginner/overview-app) +### Pre-requisite Readings + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.47/learn/beginner/overview-app) + -## Encoding[​](#encoding-1 "Direct link to Encoding") +## Encoding -The Cosmos SDK utilizes two binary wire encoding protocols, [Amino](https://github.com/tendermint/go-amino/) which is an object encoding specification and [Protocol Buffers](https://developers.google.com/protocol-buffers), a subset of Proto3 with an extension for interface support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more information on Proto3, which Amino is largely compatible with (but not with Proto2). 
+The Cosmos SDK utilizes two binary wire encoding protocols, [Amino](https://github.com/tendermint/go-amino/) which is an object encoding specification and [Protocol Buffers](https://developers.google.com/protocol-buffers), a subset of Proto3 with an extension for +interface support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) +for more information on Proto3, which Amino is largely compatible with (but not with Proto2). -Due to Amino having significant performance drawbacks, being reflection-based, and not having any meaningful cross-language/client support, Protocol Buffers, specifically [gogoprotobuf](https://github.com/cosmos/gogoproto/), is being used in place of Amino. Note, this process of using Protocol Buffers over Amino is still an ongoing process. +Due to Amino having significant performance drawbacks, being reflection-based, and +not having any meaningful cross-language/client support, Protocol Buffers, specifically +[gogoprotobuf](https://github.com/cosmos/gogoproto/), is being used in place of Amino. +Note, this process of using Protocol Buffers over Amino is still an ongoing process. -Binary wire encoding of types in the Cosmos SDK can be broken down into two main categories, client encoding and store encoding. Client encoding mainly revolves around transaction processing and signing, whereas store encoding revolves around types used in state-machine transitions and what is ultimately stored in the Merkle tree. +Binary wire encoding of types in the Cosmos SDK can be broken down into two main +categories, client encoding and store encoding. Client encoding mainly revolves +around transaction processing and signing, whereas store encoding revolves around +types used in state-machine transitions and what is ultimately stored in the Merkle +tree. -For store encoding, protobuf definitions can exist for any type and will typically have an Amino-based "intermediary" type. 
Specifically, the protobuf-based type definition is used for serialization and persistence, whereas the Amino-based type is used for business logic in the state-machine where they may convert back-n-forth. Note, the Amino-based types may slowly be phased-out in the future, so developers should take note to use the protobuf message definitions where possible. +For store encoding, protobuf definitions can exist for any type and will typically +have an Amino-based "intermediary" type. Specifically, the protobuf-based type +definition is used for serialization and persistence, whereas the Amino-based type +is used for business logic in the state-machine where they may convert back-n-forth. +Note, the Amino-based types may slowly be phased-out in the future, so developers +should take note to use the protobuf message definitions where possible. -In the `codec` package, there exists two core interfaces, `BinaryCodec` and `JSONCodec`, where the former encapsulates the current Amino interface except it operates on types implementing the latter instead of generic `interface{}` types. +In the `codec` package, there exist two core interfaces, `BinaryCodec` and `JSONCodec`, +where the former encapsulates the current Amino interface except it operates on +types implementing the latter instead of generic `interface{}` types. -In addition, there exists two implementations of `Codec`. The first being `AminoCodec`, where both binary and JSON serialization is handled via Amino. The second being `ProtoCodec`, where both binary and JSON serialization is handled via Protobuf. +In addition, there exist two implementations of `Codec`. The first being +`AminoCodec`, where both binary and JSON serialization is handled via Amino. The +second being `ProtoCodec`, where both binary and JSON serialization is handled +via Protobuf. -This means that modules may use Amino or Protobuf encoding, but the types must implement `ProtoMarshaler`.
If modules wish to avoid implementing this interface for their types, they may use an Amino codec directly. +This means that modules may use Amino or Protobuf encoding, but the types must +implement `ProtoMarshaler`. If modules wish to avoid implementing this interface +for their types, they may use an Amino codec directly. -### Amino[​](#amino "Direct link to Amino") +### Amino -Every module uses an Amino codec to serialize types and interfaces. This codec typically has types and interfaces registered in that module's domain only (e.g. messages), but there are exceptions like `x/gov`. Each module exposes a `RegisterLegacyAminoCodec` function that allows a user to provide a codec and have all the types registered. An application will call this method for each necessary module. +Every module uses an Amino codec to serialize types and interfaces. This codec typically +has types and interfaces registered in that module's domain only (e.g. messages), +but there are exceptions like `x/gov`. Each module exposes a `RegisterLegacyAminoCodec` function +that allows a user to provide a codec and have all the types registered. An application +will call this method for each necessary module. -Where there is no protobuf-based type definition for a module (see below), Amino is used to encode and decode raw wire bytes to the concrete type or interface: +Where there is no protobuf-based type definition for a module (see below), Amino +is used to encode and decode raw wire bytes to the concrete type or interface: -``` -bz := keeper.cdc.MustMarshal(typeOrInterface)keeper.cdc.MustUnmarshal(bz, &typeOrInterface) +```go +bz := keeper.cdc.MustMarshal(typeOrInterface) + +keeper.cdc.MustUnmarshal(bz, &typeOrInterface) ``` -Note, there are length-prefixed variants of the above functionality and this is typically used for when the data needs to be streamed or grouped together (e.g. 
`ResponseDeliverTx.Data`)
+Note that there are length-prefixed variants of the above functionality, typically
+used when the data needs to be streamed or grouped together
+(e.g. `ResponseDeliverTx.Data`).

-#### Authz authorizations and Gov/Group proposals[​](#authz-authorizations-and-govgroup-proposals "Direct link to Authz authorizations and Gov/Group proposals")
+#### Authz authorizations and Gov/Group proposals

-Since authz's `MsgExec` and `MsgGrant` message types, as well as gov's and group's `MsgSubmitProposal`, can contain different messages instances, it is important that developers add the following code inside the `init` method of their module's `codec.go` file:
+Since authz's `MsgExec` and `MsgGrant` message types, as well as gov's and group's `MsgSubmitProposal`, can contain different message instances, it is important that developers
+add the following code inside the `init` method of their module's `codec.go` file:

-```
-import ( authzcodec "github.com/cosmos/cosmos-sdk/x/authz/codec" govcodec "github.com/cosmos/cosmos-sdk/x/gov/codec" groupcodec "github.com/cosmos/cosmos-sdk/x/group/codec")init() { // Register all Amino interfaces and concrete types on the authz and gov Amino codec so that this can later be // used to properly serialize MsgGrant, MsgExec and MsgSubmitProposal instances RegisterLegacyAminoCodec(authzcodec.Amino) RegisterLegacyAminoCodec(govcodec.Amino) RegisterLegacyAminoCodec(groupcodec.Amino)}
+```go expandable
+import (
+  authzcodec "github.com/cosmos/cosmos-sdk/x/authz/codec"
+  govcodec "github.com/cosmos/cosmos-sdk/x/gov/codec"
+  groupcodec "github.com/cosmos/cosmos-sdk/x/group/codec"
+)
+
+func init() {
+  / Register all Amino interfaces and concrete types on the authz, gov and group
+  / Amino codecs so that they can later be used to properly serialize MsgGrant,
+  / MsgExec and MsgSubmitProposal instances
+  RegisterLegacyAminoCodec(authzcodec.Amino)
+  RegisterLegacyAminoCodec(govcodec.Amino)
+  RegisterLegacyAminoCodec(groupcodec.Amino)
+}
```

-This will allow the `x/authz` module to properly serialize and de-serializes `MsgExec` instances using Amino, which is required when signing this kind of messages using a Ledger.
+This will allow the `x/authz` module to properly serialize and deserialize `MsgExec` instances using Amino,
+which is required when signing this kind of message using a Ledger.

-### Gogoproto[​](#gogoproto "Direct link to Gogoproto")
+### Gogoproto

Modules are encouraged to utilize Protobuf encoding for their respective types. In the Cosmos SDK, we use [Gogoproto](https://github.com/cosmos/gogoproto), a specific implementation of the Protobuf spec that offers speed and DX improvements compared to the official [Google protobuf implementation](https://github.com/protocolbuffers/protobuf).

-### Guidelines for protobuf message definitions[​](#guidelines-for-protobuf-message-definitions "Direct link to Guidelines for protobuf message definitions")
+### Guidelines for protobuf message definitions

In addition to [following official Protocol Buffer guidelines](https://developers.google.com/protocol-buffers/docs/proto3#simple), we recommend using these annotations in .proto files when dealing with interfaces:

-* use `cosmos_proto.accepts_interface` to annote `Any` fields that accept interfaces
+- use `cosmos_proto.accepts_interface` to annotate `Any` fields that accept interfaces
+  - pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
+  - example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (and not just `Content`)
+- annotate interface implementations with `cosmos_proto.implements_interface`
+  - pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
+  - example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (and not just `Authorization`)

- * pass the same fully qualified name as `protoName` to
`InterfaceRegistry.RegisterInterface` - * example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (and not just `Content`) +Code generators can then match the `accepts_interface` and `implements_interface` annotations to know whether some Protobuf messages are allowed to be packed in a given `Any` field or not. -* annotate interface implementations with `cosmos_proto.implements_interface` +### Transaction Encoding - * pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface` - * example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (and not just `Authorization`) +Another important use of Protobuf is the encoding and decoding of +[transactions](/docs/sdk/v0.47/learn/advanced/transactions). Transactions are defined by the application or +the Cosmos SDK but are then passed to the underlying consensus engine to be relayed to +other peers. Since the underlying consensus engine is agnostic to the application, +the consensus engine accepts only transactions in the form of raw bytes. -Code generators can then match the `accepts_interface` and `implements_interface` annotations to know whether some Protobuf messages are allowed to be packed in a given `Any` field or not. +- The `TxEncoder` object performs the encoding. +- The `TxDecoder` object performs the decoding. -### Transaction Encoding[​](#transaction-encoding "Direct link to Transaction Encoding") +```go expandable +package types -Another important use of Protobuf is the encoding and decoding of [transactions](/v0.47/learn/advanced/transactions). Transactions are defined by the application or the Cosmos SDK but are then passed to the underlying consensus engine to be relayed to other peers. Since the underlying consensus engine is agnostic to the application, the consensus engine accepts only transactions in the form of raw bytes. +import ( -* The `TxEncoder` object performs the encoding. 
-* The `TxDecoder` object performs the decoding. + "encoding/json" + fmt "fmt" + "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/codec" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) -types/tx\_msg.go +type ( + / Msg defines the interface a transaction message must fulfill. + Msg interface { + proto.Message -``` -loading... -``` + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/tx_msg.go#L76-L80) +error -A standard implementation of both these objects can be found in the [`auth/tx` module](/v0.47/build/modules/auth/tx): + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} -x/auth/tx/decoder.go + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() -``` -loading... -``` +uint64 + GetAmount() -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/tx/decoder.go) +Coins +} -x/auth/tx/encoder.go + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() -``` -loading... -``` +cryptotypes.PubKey + GetSignature() []byte +} + + / Tx defines the interface a transaction must fulfill. + Tx interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg + + / ValidateBasic does a simple and lightweight validation check that doesn't + / require access to any other information. 
+ ValidateBasic() + +error +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/tx/encoder.go) +uint64 + GetFee() -See [ADR-020](/v0.47/build/architecture/adr-020-protobuf-transaction-encoding) for details of how a transaction is encoded. +Coins + FeePayer() -### Interface Encoding and Usage of `Any`[​](#interface-encoding-and-usage-of-any "Direct link to interface-encoding-and-usage-of-any") +AccAddress + FeeGranter() -The Protobuf DSL is strongly typed, which can make inserting variable-typed fields difficult. Imagine we want to create a `Profile` protobuf message that serves as a wrapper over [an account](/v0.47/learn/beginner/accounts): +AccAddress +} + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. 
+func MsgTypeURL(msg Msg) + +string { + return "/" + proto.MessageName(msg) +} + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} +``` + +A standard implementation of both these objects can be found in the [`auth/tx` module](/docs/sdk/v0.47/documentation/module-system/modules/auth/tx): + +```go expandable +package tx + +import ( + + "fmt" + "google.golang.org/protobuf/encoding/protowire" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/unknownproto" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" +) + +/ DefaultTxDecoder returns a default protobuf TxDecoder using the provided Marshaler. +func DefaultTxDecoder(cdc codec.ProtoCodecMarshaler) + +sdk.TxDecoder { + return func(txBytes []byte) (sdk.Tx, error) { + / Make sure txBytes follow ADR-027. 
+ err := rejectNonADR027TxRaw(txBytes) + if err != nil { + return nil, sdkerrors.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +var raw tx.TxRaw + + / reject all unknown proto fields in the root TxRaw + err = unknownproto.RejectUnknownFieldsStrict(txBytes, &raw, cdc.InterfaceRegistry()) + if err != nil { + return nil, sdkerrors.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(txBytes, &raw) + if err != nil { + return nil, err +} + +var body tx.TxBody + + / allow non-critical unknown fields in TxBody + txBodyHasUnknownNonCriticals, err := unknownproto.RejectUnknownFields(raw.BodyBytes, &body, true, cdc.InterfaceRegistry()) + if err != nil { + return nil, sdkerrors.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(raw.BodyBytes, &body) + if err != nil { + return nil, sdkerrors.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +var authInfo tx.AuthInfo + + / reject all unknown proto fields in AuthInfo + err = unknownproto.RejectUnknownFieldsStrict(raw.AuthInfoBytes, &authInfo, cdc.InterfaceRegistry()) + if err != nil { + return nil, sdkerrors.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(raw.AuthInfoBytes, &authInfo) + if err != nil { + return nil, sdkerrors.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + theTx := &tx.Tx{ + Body: &body, + AuthInfo: &authInfo, + Signatures: raw.Signatures, +} + +return &wrapper{ + tx: theTx, + bodyBz: raw.BodyBytes, + authInfoBz: raw.AuthInfoBytes, + txBodyHasUnknownNonCriticals: txBodyHasUnknownNonCriticals, +}, nil +} +} + +/ DefaultJSONTxDecoder returns a default protobuf JSON TxDecoder using the provided Marshaler. 
+func DefaultJSONTxDecoder(cdc codec.ProtoCodecMarshaler) + +sdk.TxDecoder { + return func(txBytes []byte) (sdk.Tx, error) { + var theTx tx.Tx + err := cdc.UnmarshalJSON(txBytes, &theTx) + if err != nil { + return nil, sdkerrors.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +return &wrapper{ + tx: &theTx, +}, nil +} +} + +/ rejectNonADR027TxRaw rejects txBytes that do not follow ADR-027. This is NOT +/ a generic ADR-027 checker, it only applies decoding TxRaw. Specifically, it +/ only checks that: +/ - field numbers are in ascending order (1, 2, and potentially multiple 3s), +/ - and varints are as short as possible. +/ All other ADR-027 edge cases (e.g. default values) + +are not applicable with +/ TxRaw. +func rejectNonADR027TxRaw(txBytes []byte) + +error { + / Make sure all fields are ordered in ascending order with this variable. + prevTagNum := protowire.Number(0) + for len(txBytes) > 0 { + tagNum, wireType, m := protowire.ConsumeTag(txBytes) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + / TxRaw only has bytes fields. + if wireType != protowire.BytesType { + return fmt.Errorf("expected %d wire type, got %d", protowire.BytesType, wireType) +} + / Make sure fields are ordered in ascending order. + if tagNum < prevTagNum { + return fmt.Errorf("txRaw must follow ADR-027, got tagNum %d after tagNum %d", tagNum, prevTagNum) +} + +prevTagNum = tagNum + + / All 3 fields of TxRaw have wireType == 2, so their next component + / is a varint, so we can safely call ConsumeVarint here. + / Byte structure: + / Inner fields are verified in `DefaultTxDecoder` + lengthPrefix, m := protowire.ConsumeVarint(txBytes[m:]) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + / We make sure that this varint is as short as possible. 
+ n := varintMinLength(lengthPrefix) + if n != m { + return fmt.Errorf("length prefix varint for tagNum %d is not as short as possible, read %d, only need %d", tagNum, m, n) +} + + / Skip over the bytes that store fieldNumber and wireType bytes. + _, _, m = protowire.ConsumeField(txBytes) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + +txBytes = txBytes[m:] +} + +return nil +} + +/ varintMinLength returns the minimum number of bytes necessary to encode an +/ uint using varint encoding. +func varintMinLength(n uint64) + +int { + switch { + / Note: 1< valz[j].ConsensusPower(r) +} + +func (valz ValidatorsByVotingPower) + +Swap(i, j int) { + valz[i], valz[j] = valz[j], valz[i] +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (v Validators) + +UnpackInterfaces(c codectypes.AnyUnpacker) + +error { + for i := range v { + if err := v[i].UnpackInterfaces(c); err != nil { + return err +} + +} + +return nil +} + +/ return the redelegation +func MustMarshalValidator(cdc codec.BinaryCodec, validator *Validator) []byte { + return cdc.MustMarshal(validator) +} + +/ unmarshal a redelegation from a store value +func MustUnmarshalValidator(cdc codec.BinaryCodec, value []byte) + +Validator { + validator, err := UnmarshalValidator(cdc, value) + if err != nil { + panic(err) +} + +return validator +} + +/ unmarshal a redelegation from a store value +func UnmarshalValidator(cdc codec.BinaryCodec, value []byte) (v Validator, err error) { + err = cdc.Unmarshal(value, &v) + +return v, err +} + +/ IsBonded checks if the validator status equals Bonded +func (v Validator) + +IsBonded() + +bool { + return v.GetStatus() == Bonded +} + +/ IsUnbonded checks if the validator status equals Unbonded +func (v Validator) + +IsUnbonded() + +bool { + return v.GetStatus() == Unbonded +} + +/ IsUnbonding checks if the validator status equals Unbonding +func (v Validator) + +IsUnbonding() + +bool { + return v.GetStatus() == Unbonding +} 
+ +/ constant used in flags to indicate that description field should not be updated +const DoNotModifyDesc = "[do-not-modify]" + +func NewDescription(moniker, identity, website, securityContact, details string) + +Description { + return Description{ + Moniker: moniker, + Identity: identity, + Website: website, + SecurityContact: securityContact, + Details: details, +} +} + +/ String implements the Stringer interface for a Description object. +func (d Description) + +String() + +string { + out, _ := yaml.Marshal(d) + +return string(out) +} + +/ UpdateDescription updates the fields of a given description. An error is +/ returned if the resulting description contains an invalid length. +func (d Description) + +UpdateDescription(d2 Description) (Description, error) { + if d2.Moniker == DoNotModifyDesc { + d2.Moniker = d.Moniker +} + if d2.Identity == DoNotModifyDesc { + d2.Identity = d.Identity +} + if d2.Website == DoNotModifyDesc { + d2.Website = d.Website +} + if d2.SecurityContact == DoNotModifyDesc { + d2.SecurityContact = d.SecurityContact +} + if d2.Details == DoNotModifyDesc { + d2.Details = d.Details +} + +return NewDescription( + d2.Moniker, + d2.Identity, + d2.Website, + d2.SecurityContact, + d2.Details, + ).EnsureLength() +} + +/ EnsureLength ensures the length of a validator's description. 
+func (d Description) + +EnsureLength() (Description, error) { + if len(d.Moniker) > MaxMonikerLength { + return d, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid moniker length; got: %d, max: %d", len(d.Moniker), MaxMonikerLength) +} + if len(d.Identity) > MaxIdentityLength { + return d, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid identity length; got: %d, max: %d", len(d.Identity), MaxIdentityLength) +} + if len(d.Website) > MaxWebsiteLength { + return d, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid website length; got: %d, max: %d", len(d.Website), MaxWebsiteLength) +} + if len(d.SecurityContact) > MaxSecurityContactLength { + return d, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid security contact length; got: %d, max: %d", len(d.SecurityContact), MaxSecurityContactLength) +} + if len(d.Details) > MaxDetailsLength { + return d, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid details length; got: %d, max: %d", len(d.Details), MaxDetailsLength) +} + +return d, nil +} + +/ ABCIValidatorUpdate returns an abci.ValidatorUpdate from a staking validator type +/ with the full validator power +func (v Validator) + +ABCIValidatorUpdate(r math.Int) + +abci.ValidatorUpdate { + tmProtoPk, err := v.TmConsPublicKey() + if err != nil { + panic(err) +} + +return abci.ValidatorUpdate{ + PubKey: tmProtoPk, + Power: v.ConsensusPower(r), +} +} + +/ ABCIValidatorUpdateZero returns an abci.ValidatorUpdate from a staking validator type +/ with zero power used for validator updates. +func (v Validator) + +ABCIValidatorUpdateZero() + +abci.ValidatorUpdate { + tmProtoPk, err := v.TmConsPublicKey() + if err != nil { + panic(err) +} + +return abci.ValidatorUpdate{ + PubKey: tmProtoPk, + Power: 0, +} +} + +/ SetInitialCommission attempts to set a validator's initial commission. An +/ error is returned if the commission is invalid. 
+func (v Validator) + +SetInitialCommission(commission Commission) (Validator, error) { + if err := commission.Validate(); err != nil { + return v, err +} + +v.Commission = commission + + return v, nil +} + +/ In some situations, the exchange rate becomes invalid, e.g. if +/ Validator loses all tokens due to slashing. In this case, +/ make all future delegations invalid. +func (v Validator) + +InvalidExRate() + +bool { + return v.Tokens.IsZero() && v.DelegatorShares.IsPositive() +} + +/ calculate the token worth of provided shares +func (v Validator) + +TokensFromShares(shares sdk.Dec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).Quo(v.DelegatorShares) +} + +/ calculate the token worth of provided shares, truncated +func (v Validator) + +TokensFromSharesTruncated(shares sdk.Dec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).QuoTruncate(v.DelegatorShares) +} + +/ TokensFromSharesRoundUp returns the token worth of provided shares, rounded +/ up. +func (v Validator) + +TokensFromSharesRoundUp(shares sdk.Dec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).QuoRoundUp(v.DelegatorShares) +} + +/ SharesFromTokens returns the shares of a delegation given a bond amount. It +/ returns an error if the validator has no tokens. +func (v Validator) + +SharesFromTokens(amt math.Int) (sdk.Dec, error) { + if v.Tokens.IsZero() { + return math.LegacyZeroDec(), ErrInsufficientShares +} + +return v.GetDelegatorShares().MulInt(amt).QuoInt(v.GetTokens()), nil +} + +/ SharesFromTokensTruncated returns the truncated shares of a delegation given +/ a bond amount. It returns an error if the validator has no tokens. 
+func (v Validator) + +SharesFromTokensTruncated(amt math.Int) (sdk.Dec, error) { + if v.Tokens.IsZero() { + return math.LegacyZeroDec(), ErrInsufficientShares +} + +return v.GetDelegatorShares().MulInt(amt).QuoTruncate(sdk.NewDecFromInt(v.GetTokens())), nil +} + +/ get the bonded tokens which the validator holds +func (v Validator) + +BondedTokens() -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/types/validator.go#L41-L64) +math.Int { + if v.IsBonded() { + return v.Tokens +} -#### `Any`'s TypeURL[​](#anys-typeurl "Direct link to anys-typeurl") +return math.ZeroInt() +} + +/ ConsensusPower gets the consensus-engine power. Aa reduction of 10^6 from +/ validator tokens is applied +func (v Validator) + +ConsensusPower(r math.Int) + +int64 { + if v.IsBonded() { + return v.PotentialConsensusPower(r) +} + +return 0 +} + +/ PotentialConsensusPower returns the potential consensus-engine power. +func (v Validator) + +PotentialConsensusPower(r math.Int) + +int64 { + return sdk.TokensToConsensusPower(v.Tokens, r) +} + +/ UpdateStatus updates the location of the shares within a validator +/ to reflect the new status +func (v Validator) + +UpdateStatus(newStatus BondStatus) + +Validator { + v.Status = newStatus + return v +} + +/ AddTokensFromDel adds tokens to a validator +func (v Validator) + +AddTokensFromDel(amount math.Int) (Validator, sdk.Dec) { + / calculate the shares to issue + var issuedShares sdk.Dec + if v.DelegatorShares.IsZero() { + / the first delegation to a validator sets the exchange rate to one + issuedShares = sdk.NewDecFromInt(amount) +} + +else { + shares, err := v.SharesFromTokens(amount) + if err != nil { + panic(err) +} + +issuedShares = shares +} + +v.Tokens = v.Tokens.Add(amount) + +v.DelegatorShares = v.DelegatorShares.Add(issuedShares) + +return v, issuedShares +} + +/ RemoveTokens removes tokens from a validator +func (v Validator) + +RemoveTokens(tokens math.Int) + +Validator { + if 
tokens.IsNegative() { + panic(fmt.Sprintf("should not happen: trying to remove negative tokens %v", tokens)) +} + if v.Tokens.LT(tokens) { + panic(fmt.Sprintf("should not happen: only have %v tokens, trying to remove %v", v.Tokens, tokens)) +} + +v.Tokens = v.Tokens.Sub(tokens) + +return v +} + +/ RemoveDelShares removes delegator shares from a validator. +/ NOTE: because token fractions are left in the valiadator, +/ +/ the exchange rate of future shares of this validator can increase. +func (v Validator) + +RemoveDelShares(delShares sdk.Dec) (Validator, math.Int) { + remainingShares := v.DelegatorShares.Sub(delShares) + +var issuedTokens math.Int + if remainingShares.IsZero() { + / last delegation share gets any trimmings + issuedTokens = v.Tokens + v.Tokens = math.ZeroInt() +} + +else { + / leave excess tokens in the validator + / however fully use all the delegator shares + issuedTokens = v.TokensFromShares(delShares).TruncateInt() + +v.Tokens = v.Tokens.Sub(issuedTokens) + if v.Tokens.IsNegative() { + panic("attempting to remove more tokens than available in validator") +} + +} + +v.DelegatorShares = remainingShares + + return v, issuedTokens +} + +/ MinEqual defines a more minimum set of equality conditions when comparing two +/ validators. 
+func (v *Validator) + +MinEqual(other *Validator) + +bool { + return v.OperatorAddress == other.OperatorAddress && + v.Status == other.Status && + v.Tokens.Equal(other.Tokens) && + v.DelegatorShares.Equal(other.DelegatorShares) && + v.Description.Equal(other.Description) && + v.Commission.Equal(other.Commission) && + v.Jailed == other.Jailed && + v.MinSelfDelegation.Equal(other.MinSelfDelegation) && + v.ConsensusPubkey.Equal(other.ConsensusPubkey) +} + +/ Equal checks if the receiver equals the parameter +func (v *Validator) + +Equal(v2 *Validator) + +bool { + return v.MinEqual(v2) && + v.UnbondingHeight == v2.UnbondingHeight && + v.UnbondingTime.Equal(v2.UnbondingTime) +} + +func (v Validator) + +IsJailed() + +bool { + return v.Jailed +} + +func (v Validator) + +GetMoniker() + +string { + return v.Description.Moniker +} + +func (v Validator) + +GetStatus() + +BondStatus { + return v.Status +} + +func (v Validator) + +GetOperator() + +sdk.ValAddress { + if v.OperatorAddress == "" { + return nil +} + +addr, err := sdk.ValAddressFromBech32(v.OperatorAddress) + if err != nil { + panic(err) +} + +return addr +} + +/ ConsPubKey returns the validator PubKey as a cryptotypes.PubKey. +func (v Validator) + +ConsPubKey() (cryptotypes.PubKey, error) { + pk, ok := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expecting cryptotypes.PubKey, got %T", pk) +} + +return pk, nil +} + +/ TmConsPublicKey casts Validator.ConsensusPubkey to tmprotocrypto.PubKey. 
+func (v Validator) + +TmConsPublicKey() (tmprotocrypto.PublicKey, error) { + pk, err := v.ConsPubKey() + if err != nil { + return tmprotocrypto.PublicKey{ +}, err +} + +tmPk, err := cryptocodec.ToTmProtoPublicKey(pk) + if err != nil { + return tmprotocrypto.PublicKey{ +}, err +} + +return tmPk, nil +} + +/ GetConsAddr extracts Consensus key address +func (v Validator) + +GetConsAddr() (sdk.ConsAddress, error) { + pk, ok := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey) + if !ok { + return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expecting cryptotypes.PubKey, got %T", pk) +} + +return sdk.ConsAddress(pk.Address()), nil +} + +func (v Validator) + +GetTokens() + +math.Int { + return v.Tokens +} + +func (v Validator) + +GetBondedTokens() + +math.Int { + return v.BondedTokens() +} + +func (v Validator) + +GetConsensusPower(r math.Int) + +int64 { + return v.ConsensusPower(r) +} + +func (v Validator) + +GetCommission() + +math.LegacyDec { + return v.Commission.Rate +} + +func (v Validator) + +GetMinSelfDelegation() + +math.Int { + return v.MinSelfDelegation +} + +func (v Validator) + +GetDelegatorShares() + +math.LegacyDec { + return v.DelegatorShares +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (v Validator) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + var pk cryptotypes.PubKey + return unpacker.UnpackAny(v.ConsensusPubkey, &pk) +} +``` + +#### `Any`'s TypeURL When packing a protobuf message inside an `Any`, the message's type is uniquely defined by its type URL, which is the message's fully qualified name prefixed by a `/` (slash) character. In some implementations of `Any`, like the gogoproto one, there's generally [a resolvable prefix, e.g. `type.googleapis.com`](https://github.com/gogo/protobuf/blob/b03c65ea87cdc3521ede29f62fe3ce239267c1bc/protobuf/google/protobuf/any.proto#L87-L91). However, in the Cosmos SDK, we made the decision to not include such prefix, to have shorter type URLs. 
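The construction mirrors the `MsgTypeURL` helper shown earlier (`"/" + proto.MessageName(msg)`). As a minimal, stdlib-only sketch of the convention — using a plain fully qualified name string in place of a real proto message, and an invented `typeURL` helper rather than the actual SDK API:

```go
package main

import "fmt"

// typeURL sketches how the Cosmos SDK derives an Any type URL: the message's
// fully qualified proto name prefixed with a slash, with no resolvable
// "type.googleapis.com" host prepended.
func typeURL(fullyQualifiedName string) string {
	return "/" + fullyQualifiedName
}

func main() {
	fmt.Println(typeURL("cosmos.bank.v1beta1.MsgSend")) // /cosmos.bank.v1beta1.MsgSend
}
```

The resulting URL is what gets stored in the `Any`'s `type_url` field and later resolved against the interface registry.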
The Cosmos SDK's own `Any` implementation can be found in `github.com/cosmos/cosmos-sdk/codec/types`. The Cosmos SDK is also switching away from gogoproto to the official `google.golang.org/protobuf` (known as the Protobuf API v2). Its default `Any` implementation also contains the [`type.googleapis.com`](https://github.com/protocolbuffers/protobuf-go/blob/v1.28.1/types/known/anypb/any.pb.go#L266) prefix. To maintain compatibility with the SDK, the following methods from `"google.golang.org/protobuf/types/known/anypb"` should not be used: -* `anypb.New` -* `anypb.MarshalFrom` -* `anypb.Any#MarshalFrom` +- `anypb.New` +- `anypb.MarshalFrom` +- `anypb.Any#MarshalFrom` Instead, the Cosmos SDK provides helper functions in `"github.com/cosmos/cosmos-proto/anyutil"`, which create an official `anypb.Any` without inserting the prefixes: -* `anyutil.New` -* `anyutil.MarshalFrom` +- `anyutil.New` +- `anyutil.MarshalFrom` For example, to pack a `sdk.Msg` called `internalMsg`, use: -``` -import (- "google.golang.org/protobuf/types/known/anypb"+ "github.com/cosmos/cosmos-proto/anyutil")- anyMsg, err := anypb.New(internalMsg.Message().Interface())+ anyMsg, err := anyutil.New(internalMsg.Message().Interface())- fmt.Println(anyMsg.TypeURL) // type.googleapis.com/cosmos.bank.v1beta1.MsgSend+ fmt.Println(anyMsg.TypeURL) // /cosmos.bank.v1beta1.MsgSend +```diff +import ( +- "google.golang.org/protobuf/types/known/anypb" ++ "github.com/cosmos/cosmos-proto/anyutil" +) + +- anyMsg, err := anypb.New(internalMsg.Message().Interface()) ++ anyMsg, err := anyutil.New(internalMsg.Message().Interface()) + +- fmt.Println(anyMsg.TypeURL) / type.googleapis.com/cosmos.bank.v1beta1.MsgSend ++ fmt.Println(anyMsg.TypeURL) / /cosmos.bank.v1beta1.MsgSend ``` -## FAQ[​](#faq "Direct link to FAQ") +## FAQ -### How to create modules using protobuf encoding[​](#how-to-create-modules-using-protobuf-encoding "Direct link to How to create modules using protobuf encoding") +### How to create modules using 
protobuf encoding

-#### Defining module types[​](#defining-module-types "Direct link to Defining module types")
+#### Defining module types

Protobuf types can be defined to encode:

-* state
-* [`Msg`s](/v0.47/build/building-modules/messages-and-queries#messages)
-* [Query services](/v0.47/build/building-modules/query-services)
-* [genesis](/v0.47/build/building-modules/genesis)
+- state
+- [`Msg`s](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages)
+- [Query services](/docs/sdk/v0.47/documentation/module-system/query-services)
+- [genesis](/docs/sdk/v0.47/documentation/module-system/genesis)

-#### Naming and conventions[​](#naming-and-conventions "Direct link to Naming and conventions")
+#### Naming and conventions

-We encourage developers to follow industry guidelines: [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style) and [Buf](https://buf.build/docs/style-guide), see more details in [ADR 023](/v0.47/build/architecture/adr-023-protobuf-naming)
+We encourage developers to follow industry guidelines: the [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style)
+and [Buf](https://buf.build/docs/style-guide); see [ADR 023](/docs/common/pages/adr-comprehensive#adr-023-protocol-buffer-naming-and-versioning-conventions) for more details.

-### How to update modules to protobuf encoding[​](#how-to-update-modules-to-protobuf-encoding "Direct link to How to update modules to protobuf encoding")
+### How to update modules to protobuf encoding

-If modules do not contain any interfaces (e.g.
`Account` or `Content`), then they
+may simply migrate any existing types that
+are encoded and persisted via their concrete Amino codec to Protobuf (see 1. for further guidelines) and accept a `Marshaler` as the codec, which is implemented via the `ProtoCodec`,
+without any further customization.

However, if a module type composes an interface, it must wrap it in the `sdk.Any` (from `/types` package) type. To do that, a module-level .proto file must use [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto) for the respective interface types. For example, the `x/evidence` module defines an `Evidence` interface, which is used by `MsgSubmitEvidence`. The structure definition must use `sdk.Any` to wrap the evidence. In the proto file we define it as follows:

-```
-// proto/cosmos/evidence/v1beta1/tx.protomessage MsgSubmitEvidence { string submitter = 1; google.protobuf.Any evidence = 2 [(cosmos_proto.accepts_interface) = "cosmos.evidence.v1beta1.Evidence"];}
+```protobuf
+/ proto/cosmos/evidence/v1beta1/tx.proto
+
+message MsgSubmitEvidence {
+  string submitter = 1;
+  google.protobuf.Any evidence = 2 [(cosmos_proto.accepts_interface) = "cosmos.evidence.v1beta1.Evidence"];
+}
```

The Cosmos SDK `codec.Codec` interface provides the support methods `MarshalInterface` and `UnmarshalInterface` to ease encoding of state to `Any`.
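As a rough, stdlib-only sketch of the pattern behind these helpers (the names `anyEnvelope`, `marshalInterface` and `unmarshalInterface` are invented for illustration, and JSON stands in for the SDK's protobuf encoding): an interface value is stored as an envelope carrying a type URL plus the encoded value, and a registry maps type URLs back to concrete implementations on decode:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Evidence plays the role of a module interface; Equivocation is one
// registered concrete implementation. Both are illustrative stand-ins.
type Evidence interface{ Route() string }

type Equivocation struct {
	Height int64 `json:"height"`
}

func (e Equivocation) Route() string { return "equivocation" }

// anyEnvelope mimics the shape of a packed Any: a type URL plus raw bytes.
type anyEnvelope struct {
	TypeURL string          `json:"type_url"`
	Value   json.RawMessage `json:"value"`
}

// registry maps type URLs to constructors, standing in for InterfaceRegistry.
var registry = map[string]func() Evidence{
	"/cosmos.evidence.v1beta1.Equivocation": func() Evidence { return &Equivocation{} },
}

// marshalInterface wraps a concrete value in an envelope keyed by its type URL.
func marshalInterface(typeURL string, ev Evidence) ([]byte, error) {
	value, err := json.Marshal(ev)
	if err != nil {
		return nil, err
	}
	return json.Marshal(anyEnvelope{TypeURL: typeURL, Value: value})
}

// unmarshalInterface resolves the type URL against the registry, then decodes
// the payload into the registered concrete type.
func unmarshalInterface(bz []byte) (Evidence, error) {
	var env anyEnvelope
	if err := json.Unmarshal(bz, &env); err != nil {
		return nil, err
	}
	ctor, ok := registry[env.TypeURL]
	if !ok {
		return nil, fmt.Errorf("unregistered type URL %s", env.TypeURL)
	}
	ev := ctor()
	if err := json.Unmarshal(env.Value, ev); err != nil {
		return nil, err
	}
	return ev, nil
}

func main() {
	bz, _ := marshalInterface("/cosmos.evidence.v1beta1.Equivocation", Equivocation{Height: 10})
	ev, _ := unmarshalInterface(bz)
	fmt.Println(ev.Route())
}
```

Decoding fails for any type URL that was never registered, which is exactly the safety property the registry-based approach provides over blindly unpacking `Any` values.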
Module should register interfaces using `InterfaceRegistry` which provides a mechanism for registering interfaces: `RegisterInterface(protoName string, iface interface{}, impls ...proto.Message)` and implementations: `RegisterImplementations(iface interface{}, impls ...proto.Message)` that can be safely unpacked from Any, similarly to type registration with Amino: -codec/types/interface\_registry.go - +```go expandable +package types + +import ( + + "fmt" + "reflect" + "github.com/cosmos/gogoproto/jsonpb" + "github.com/cosmos/gogoproto/proto" +) + +/ AnyUnpacker is an interface which allows safely unpacking types packed +/ in Any's against a whitelist of registered types +type AnyUnpacker interface { + / UnpackAny unpacks the value in any to the interface pointer passed in as + / iface. Note that the type in any must have been registered in the + / underlying whitelist registry as a concrete type for that interface + / Ex: + / var msg sdk.Msg + / err := cdc.UnpackAny(any, &msg) + / ... + UnpackAny(any *Any, iface interface{ +}) + +error +} + +/ InterfaceRegistry provides a mechanism for registering interfaces and +/ implementations that can be safely unpacked from Any +type InterfaceRegistry interface { + AnyUnpacker + jsonpb.AnyResolver + + / RegisterInterface associates protoName as the public name for the + / interface passed in as iface. This is to be used primarily to create + / a public facing registry of interface implementations for clients. + / protoName should be a well-chosen public facing name that remains stable. + / RegisterInterface takes an optional list of impls to be registered + / as implementations of iface. + / + / Ex: + / registry.RegisterInterface("cosmos.base.v1beta1.Msg", (*sdk.Msg)(nil)) + +RegisterInterface(protoName string, iface interface{ +}, impls ...proto.Message) + + / RegisterImplementations registers impls as concrete implementations of + / the interface iface. 
+ / + / Ex: + / registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{ +}, &MsgMultiSend{ +}) + +RegisterImplementations(iface interface{ +}, impls ...proto.Message) + + / ListAllInterfaces list the type URLs of all registered interfaces. + ListAllInterfaces() []string + + / ListImplementations lists the valid type URLs for the given interface name that can be used + / for the provided interface type URL. + ListImplementations(ifaceTypeURL string) []string + + / EnsureRegistered ensures there is a registered interface for the given concrete type. + EnsureRegistered(iface interface{ +}) + +error +} + +/ UnpackInterfacesMessage is meant to extend protobuf types (which implement +/ proto.Message) + +to support a post-deserialization phase which unpacks +/ types packed within Any's using the whitelist provided by AnyUnpacker +type UnpackInterfacesMessage interface { + / UnpackInterfaces is implemented in order to unpack values packed within + / Any's using the AnyUnpacker. It should generally be implemented as + / follows: + / func (s *MyStruct) + +UnpackInterfaces(unpacker AnyUnpacker) + +error { + / var x AnyInterface + / / where X is an Any field on MyStruct + / err := unpacker.UnpackAny(s.X, &x) + / if err != nil { + / return nil + / +} + / / where Y is a field on MyStruct that implements UnpackInterfacesMessage itself + / err = s.Y.UnpackInterfaces(unpacker) + / if err != nil { + / return nil + / +} + / return nil + / +} + +UnpackInterfaces(unpacker AnyUnpacker) + +error +} + +type interfaceRegistry struct { + interfaceNames map[string]reflect.Type + interfaceImpls map[reflect.Type]interfaceMap + implInterfaces map[reflect.Type]reflect.Type + typeURLMap map[string]reflect.Type +} + +type interfaceMap = map[string]reflect.Type + +/ NewInterfaceRegistry returns a new InterfaceRegistry +func NewInterfaceRegistry() + +InterfaceRegistry { + return &interfaceRegistry{ + interfaceNames: map[string]reflect.Type{ +}, + interfaceImpls: map[reflect.Type]interfaceMap{ 
+}, + implInterfaces: map[reflect.Type]reflect.Type{ +}, + typeURLMap: map[string]reflect.Type{ +}, +} +} + +func (registry *interfaceRegistry) + +RegisterInterface(protoName string, iface interface{ +}, impls ...proto.Message) { + typ := reflect.TypeOf(iface) + if typ.Elem().Kind() != reflect.Interface { + panic(fmt.Errorf("%T is not an interface type", iface)) +} + +registry.interfaceNames[protoName] = typ + registry.RegisterImplementations(iface, impls...) +} + +/ EnsureRegistered ensures there is a registered interface for the given concrete type. +/ +/ Returns an error if not, and nil if so. +func (registry *interfaceRegistry) + +EnsureRegistered(impl interface{ +}) + +error { + if reflect.ValueOf(impl).Kind() != reflect.Ptr { + return fmt.Errorf("%T is not a pointer", impl) +} + if _, found := registry.implInterfaces[reflect.TypeOf(impl)]; !found { + return fmt.Errorf("%T does not have a registered interface", impl) +} + +return nil +} + +/ RegisterImplementations registers a concrete proto Message which implements +/ the given interface. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. +func (registry *interfaceRegistry) + +RegisterImplementations(iface interface{ +}, impls ...proto.Message) { + for _, impl := range impls { + typeURL := "/" + proto.MessageName(impl) + +registry.registerImpl(iface, typeURL, impl) +} +} + +/ RegisterCustomTypeURL registers a concrete type which implements the given +/ interface under `typeURL`. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. +func (registry *interfaceRegistry) + +RegisterCustomTypeURL(iface interface{ +}, typeURL string, impl proto.Message) { + registry.registerImpl(iface, typeURL, impl) +} + +/ registerImpl registers a concrete type which implements the given +/ interface under `typeURL`. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. 
+func (registry *interfaceRegistry) + +registerImpl(iface interface{ +}, typeURL string, impl proto.Message) { + ityp := reflect.TypeOf(iface).Elem() + +imap, found := registry.interfaceImpls[ityp] + if !found { + imap = map[string]reflect.Type{ +} + +} + implType := reflect.TypeOf(impl) + if !implType.AssignableTo(ityp) { + panic(fmt.Errorf("type %T doesn't actually implement interface %+v", impl, ityp)) +} + + / Check if we already registered something under the given typeURL. It's + / okay to register the same concrete type again, but if we are registering + / a new concrete type under the same typeURL, then we throw an error (here, + / we panic). + foundImplType, found := imap[typeURL] + if found && foundImplType != implType { + panic( + fmt.Errorf( + "concrete type %s has already been registered under typeURL %s, cannot register %s under same typeURL. "+ + "This usually means that there are conflicting modules registering different concrete types "+ + "for a same interface implementation", + foundImplType, + typeURL, + implType, + ), + ) +} + +imap[typeURL] = implType + registry.typeURLMap[typeURL] = implType + registry.implInterfaces[implType] = ityp + registry.interfaceImpls[ityp] = imap +} + +func (registry *interfaceRegistry) + +ListAllInterfaces() []string { + interfaceNames := registry.interfaceNames + keys := make([]string, 0, len(interfaceNames)) + for key := range interfaceNames { + keys = append(keys, key) +} + +return keys +} + +func (registry *interfaceRegistry) + +ListImplementations(ifaceName string) []string { + typ, ok := registry.interfaceNames[ifaceName] + if !ok { + return []string{ +} + +} + +impls, ok := registry.interfaceImpls[typ.Elem()] + if !ok { + return []string{ +} + +} + keys := make([]string, 0, len(impls)) + for key := range impls { + keys = append(keys, key) +} + +return keys +} + +func (registry *interfaceRegistry) + +UnpackAny(any *Any, iface interface{ +}) + +error { + / here we gracefully handle the case in which `any` 
itself is `nil`, which may occur in message decoding + if any == nil { + return nil +} + if any.TypeUrl == "" { + / if TypeUrl is empty return nil because without it we can't actually unpack anything + return nil +} + rv := reflect.ValueOf(iface) + if rv.Kind() != reflect.Ptr { + return fmt.Errorf("UnpackAny expects a pointer") +} + rt := rv.Elem().Type() + cachedValue := any.cachedValue + if cachedValue != nil { + if reflect.TypeOf(cachedValue).AssignableTo(rt) { + rv.Elem().Set(reflect.ValueOf(cachedValue)) + +return nil +} + +} + +imap, found := registry.interfaceImpls[rt] + if !found { + return fmt.Errorf("no registered implementations of type %+v", rt) +} + +typ, found := imap[any.TypeUrl] + if !found { + return fmt.Errorf("no concrete type registered for type URL %s against interface %T", any.TypeUrl, iface) +} + +msg, ok := reflect.New(typ.Elem()).Interface().(proto.Message) + if !ok { + return fmt.Errorf("can't proto unmarshal %T", msg) +} + err := proto.Unmarshal(any.Value, msg) + if err != nil { + return err +} + +err = UnpackInterfaces(msg, registry) + if err != nil { + return err +} + +rv.Elem().Set(reflect.ValueOf(msg)) + +any.cachedValue = msg + + return nil +} + +/ Resolve returns the proto message given its typeURL. It works with types +/ registered with RegisterInterface/RegisterImplementations, as well as those +/ registered with RegisterWithCustomTypeURL. 
+func (registry *interfaceRegistry) + +Resolve(typeURL string) (proto.Message, error) { + typ, found := registry.typeURLMap[typeURL] + if !found { + return nil, fmt.Errorf("unable to resolve type URL %s", typeURL) +} + +msg, ok := reflect.New(typ.Elem()).Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("can't resolve type URL %s", typeURL) +} + +return msg, nil +} + +/ UnpackInterfaces is a convenience function that calls UnpackInterfaces +/ on x if x implements UnpackInterfacesMessage +func UnpackInterfaces(x interface{ +}, unpacker AnyUnpacker) + +error { + if msg, ok := x.(UnpackInterfacesMessage); ok { + return msg.UnpackInterfaces(unpacker) +} + +return nil +} ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/codec/types/interface_registry.go#L24-L57) In addition, an `UnpackInterfaces` phase should be introduced to deserialization to unpack interfaces before they're needed. Protobuf types that contain a protobuf `Any` either directly or via one of their members should implement the `UnpackInterfacesMessage` interface: -``` -type UnpackInterfacesMessage interface { UnpackInterfaces(InterfaceUnpacker) error} +```go +type UnpackInterfacesMessage interface { + UnpackInterfaces(InterfaceUnpacker) + +error +} ``` -### Custom Stringer[​](#custom-stringer "Direct link to Custom Stringer") +### Custom Stringer -Using `option (gogoproto.goproto_stringer) = false;` in a proto message definition leads to unexpected behaviour, like returning wrong output or having missing fields in the output. For that reason a proto Message's `String()` must not be customized, and the `goproto_stringer` option must be avoided. +Using `option (gogoproto.goproto_stringer) = false;` in a proto message definition leads to unexpected behaviour, like returning wrong output or having missing fields in the output. 
+For that reason a proto Message's `String()` must not be customized, and the `goproto_stringer` option must be avoided. A correct YAML output can be obtained through ProtoJSON, using the `JSONToYAML` function: -codec/yaml.go +```go expandable +package codec -``` -loading... -``` +import ( + + "github.com/cosmos/gogoproto/proto" + "sigs.k8s.io/yaml" +) + +/ MarshalYAML marshals toPrint using JSONCodec to leverage specialized MarshalJSON methods +/ (usually related to serialize data with protobuf or amin depending on a configuration). +/ This involves additional roundtrip through JSON. +func MarshalYAML(cdc JSONCodec, toPrint proto.Message) ([]byte, error) { + / We are OK with the performance hit of the additional JSON roundtip. MarshalYAML is not + / used in any critical parts of the system. + bz, err := cdc.MarshalJSON(toPrint) + if err != nil { + return nil, err +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/codec/yaml.go#L8-L20) +return yaml.JSONToYAML(bz) +} +``` For example: -x/auth/types/account.go +```go expandable +package types + +import ( + + "bytes" + "encoding/json" + "errors" + "fmt" + "strings" + "github.com/cosmos/gogoproto/proto" + "github.com/tendermint/tendermint/crypto" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptocodec "github.com/cosmos/cosmos-sdk/crypto/codec" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var ( + _ AccountI = (*BaseAccount)(nil) + _ GenesisAccount = (*BaseAccount)(nil) + _ codectypes.UnpackInterfacesMessage = (*BaseAccount)(nil) + _ GenesisAccount = (*ModuleAccount)(nil) + _ ModuleAccountI = (*ModuleAccount)(nil) +) + +/ NewBaseAccount creates a new BaseAccount object +/ +/nolint:interfacer +func NewBaseAccount(address sdk.AccAddress, pubKey cryptotypes.PubKey, accountNumber, sequence uint64) *BaseAccount { + acc := &BaseAccount{ + Address: 
address.String(), + AccountNumber: accountNumber, + Sequence: sequence, +} + err := acc.SetPubKey(pubKey) + if err != nil { + panic(err) +} + +return acc +} + +/ ProtoBaseAccount - a prototype function for BaseAccount +func ProtoBaseAccount() + +AccountI { + return &BaseAccount{ +} +} -``` -loading... -``` +/ NewBaseAccountWithAddress - returns a new base account with a given address +/ leaving AccountNumber and Sequence to zero. +func NewBaseAccountWithAddress(addr sdk.AccAddress) *BaseAccount { + return &BaseAccount{ + Address: addr.String(), +} +} + +/ GetAddress - Implements sdk.AccountI. +func (acc BaseAccount) + +GetAddress() + +sdk.AccAddress { + addr, _ := sdk.AccAddressFromBech32(acc.Address) + +return addr +} + +/ SetAddress - Implements sdk.AccountI. +func (acc *BaseAccount) + +SetAddress(addr sdk.AccAddress) + +error { + if len(acc.Address) != 0 { + return errors.New("cannot override BaseAccount address") +} + +acc.Address = addr.String() + +return nil +} + +/ GetPubKey - Implements sdk.AccountI. +func (acc BaseAccount) + +GetPubKey() (pk cryptotypes.PubKey) { + if acc.PubKey == nil { + return nil +} + +content, ok := acc.PubKey.GetCachedValue().(cryptotypes.PubKey) + if !ok { + return nil +} + +return content +} + +/ SetPubKey - Implements sdk.AccountI. +func (acc *BaseAccount) + +SetPubKey(pubKey cryptotypes.PubKey) + +error { + if pubKey == nil { + acc.PubKey = nil + return nil +} + +any, err := codectypes.NewAnyWithValue(pubKey) + if err == nil { + acc.PubKey = any +} + +return err +} + +/ GetAccountNumber - Implements AccountI +func (acc BaseAccount) + +GetAccountNumber() + +uint64 { + return acc.AccountNumber +} + +/ SetAccountNumber - Implements AccountI +func (acc *BaseAccount) + +SetAccountNumber(accNumber uint64) + +error { + acc.AccountNumber = accNumber + return nil +} + +/ GetSequence - Implements sdk.AccountI. +func (acc BaseAccount) + +GetSequence() + +uint64 { + return acc.Sequence +} + +/ SetSequence - Implements sdk.AccountI. 
+func (acc *BaseAccount) + +SetSequence(seq uint64) + +error { + acc.Sequence = seq + return nil +} + +/ Validate checks for errors on the account fields +func (acc BaseAccount) + +Validate() + +error { + if acc.Address == "" || acc.PubKey == nil { + return nil +} + +accAddr, err := sdk.AccAddressFromBech32(acc.Address) + if err != nil { + return err +} + if !bytes.Equal(acc.GetPubKey().Address().Bytes(), accAddr.Bytes()) { + return errors.New("account address and pubkey address do not match") +} + +return nil +} + +/ MarshalYAML returns the YAML representation of an account. +func (acc BaseAccount) + +MarshalYAML() (interface{ +}, error) { + registry := codectypes.NewInterfaceRegistry() + +cryptocodec.RegisterInterfaces(registry) + +bz, err := codec.MarshalYAML(codec.NewProtoCodec(registry), &acc) + if err != nil { + return nil, err +} + +return string(bz), err +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (acc BaseAccount) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + if acc.PubKey == nil { + return nil +} + +var pubKey cryptotypes.PubKey + return unpacker.UnpackAny(acc.PubKey, &pubKey) +} + +/ NewModuleAddressOrAddress gets an input string and returns an AccAddress. +/ If the input is a valid address, it returns the address. +/ If the input is a module name, it returns the module address. 
+func NewModuleAddressOrBech32Address(input string) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/types/account.go#L141-L151) +sdk.AccAddress { + if addr, err := sdk.AccAddressFromBech32(input); err == nil { + return addr +} + +return NewModuleAddress(input) +} + +/ NewModuleAddress creates an AccAddress from the hash of the module's name +func NewModuleAddress(name string) + +sdk.AccAddress { + return sdk.AccAddress(crypto.AddressHash([]byte(name))) +} + +/ NewEmptyModuleAccount creates a empty ModuleAccount from a string +func NewEmptyModuleAccount(name string, permissions ...string) *ModuleAccount { + moduleAddress := NewModuleAddress(name) + baseAcc := NewBaseAccountWithAddress(moduleAddress) + if err := validatePermissions(permissions...); err != nil { + panic(err) +} + +return &ModuleAccount{ + BaseAccount: baseAcc, + Name: name, + Permissions: permissions, +} +} + +/ NewModuleAccount creates a new ModuleAccount instance +func NewModuleAccount(ba *BaseAccount, name string, permissions ...string) *ModuleAccount { + if err := validatePermissions(permissions...); err != nil { + panic(err) +} + +return &ModuleAccount{ + BaseAccount: ba, + Name: name, + Permissions: permissions, +} +} + +/ HasPermission returns whether or not the module account has permission. 
+func (ma ModuleAccount) + +HasPermission(permission string) + +bool { + for _, perm := range ma.Permissions { + if perm == permission { + return true +} + +} + +return false +} + +/ GetName returns the name of the holder's module +func (ma ModuleAccount) + +GetName() + +string { + return ma.Name +} + +/ GetPermissions returns permissions granted to the module account +func (ma ModuleAccount) + +GetPermissions() []string { + return ma.Permissions +} + +/ SetPubKey - Implements AccountI +func (ma ModuleAccount) + +SetPubKey(pubKey cryptotypes.PubKey) + +error { + return fmt.Errorf("not supported for module accounts") +} + +/ Validate checks for errors on the account fields +func (ma ModuleAccount) + +Validate() + +error { + if strings.TrimSpace(ma.Name) == "" { + return errors.New("module account name cannot be blank") +} + if ma.Address != sdk.AccAddress(crypto.AddressHash([]byte(ma.Name))).String() { + return fmt.Errorf("address %s cannot be derived from the module name '%s'", ma.Address, ma.Name) +} + +return ma.BaseAccount.Validate() +} + +type moduleAccountPretty struct { + Address sdk.AccAddress `json:"address"` + PubKey string `json:"public_key"` + AccountNumber uint64 `json:"account_number"` + Sequence uint64 `json:"sequence"` + Name string `json:"name"` + Permissions []string `json:"permissions"` +} + +/ MarshalYAML returns the YAML representation of a ModuleAccount. +func (ma ModuleAccount) + +MarshalYAML() (interface{ +}, error) { + accAddr, err := sdk.AccAddressFromBech32(ma.Address) + if err != nil { + return nil, err +} + +bs, err := yaml.Marshal(moduleAccountPretty{ + Address: accAddr, + PubKey: "", + AccountNumber: ma.AccountNumber, + Sequence: ma.Sequence, + Name: ma.Name, + Permissions: ma.Permissions, +}) + if err != nil { + return nil, err +} + +return string(bs), nil +} + +/ MarshalJSON returns the JSON representation of a ModuleAccount. 
+func (ma ModuleAccount) + +MarshalJSON() ([]byte, error) { + accAddr, err := sdk.AccAddressFromBech32(ma.Address) + if err != nil { + return nil, err +} + +return json.Marshal(moduleAccountPretty{ + Address: accAddr, + PubKey: "", + AccountNumber: ma.AccountNumber, + Sequence: ma.Sequence, + Name: ma.Name, + Permissions: ma.Permissions, +}) +} + +/ UnmarshalJSON unmarshals raw JSON bytes into a ModuleAccount. +func (ma *ModuleAccount) + +UnmarshalJSON(bz []byte) + +error { + var alias moduleAccountPretty + if err := json.Unmarshal(bz, &alias); err != nil { + return err +} + +ma.BaseAccount = NewBaseAccount(alias.Address, nil, alias.AccountNumber, alias.Sequence) + +ma.Name = alias.Name + ma.Permissions = alias.Permissions + + return nil +} + +/ AccountI is an interface used to store coins at a given address within state. +/ It presumes a notion of sequence numbers for replay protection, +/ a notion of account numbers for replay protection for previously pruned accounts, +/ and a pubkey for authentication purposes. +/ +/ Many complex conditions can be used in the concrete struct which implements AccountI. +type AccountI interface { + proto.Message + + GetAddress() + +sdk.AccAddress + SetAddress(sdk.AccAddress) + +error / errors if already set. + + GetPubKey() + +cryptotypes.PubKey / can return nil. + SetPubKey(cryptotypes.PubKey) + +error + + GetAccountNumber() + +uint64 + SetAccountNumber(uint64) + +error + + GetSequence() + +uint64 + SetSequence(uint64) + +error + + / Ensure that account implements stringer + String() + +string +} + +/ ModuleAccountI defines an account interface for modules that hold tokens in +/ an escrow. +type ModuleAccountI interface { + AccountI + + GetName() + +string + GetPermissions() []string + HasPermission(string) + +bool +} + +/ GenesisAccounts defines a slice of GenesisAccount objects +type GenesisAccounts []GenesisAccount + +/ Contains returns true if the given address exists in a slice of GenesisAccount +/ objects. 
+func (ga GenesisAccounts) + +Contains(addr sdk.Address) + +bool { + for _, acc := range ga { + if acc.GetAddress().Equals(addr) { + return true +} + +} + +return false +} + +/ GenesisAccount defines a genesis account that embeds an AccountI with validation capabilities. +type GenesisAccount interface { + AccountI + + Validate() + +error +} +``` diff --git a/docs/sdk/v0.47/learn/advanced/events.mdx b/docs/sdk/v0.47/learn/advanced/events.mdx index 04e9cf71..1e7fa2fb 100644 --- a/docs/sdk/v0.47/learn/advanced/events.mdx +++ b/docs/sdk/v0.47/learn/advanced/events.mdx @@ -1,147 +1,2235 @@ --- -title: "Events" -description: "Version: v0.47" +title: Events --- - - `Event`s are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallet to track the execution of various messages and index transactions. - +## Synopsis + +`Event`s are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallets to track the execution of various messages and index transactions. - ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of a Cosmos SDK application](/v0.47/learn/beginner/overview-app) - * [CometBFT Documentation on Events](https://docs.cometbft.com/v0.37/spec/abci/abci++_basic_concepts#events) - +### Pre-requisite Readings -## Events[​](#events-1 "Direct link to Events") +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.47/learn/beginner/overview-app) +- [CometBFT Documentation on Events](https://docs.cometbft.com/v0.37/spec/abci/abci++_basic_concepts#events) -Events are implemented in the Cosmos SDK as an alias of the ABCI `Event` type and take the form of: `{eventType}.{attributeKey}={attributeValue}`. + -proto/tendermint/abci/types.proto +## Events -``` -loading...
-``` +Events are implemented in the Cosmos SDK as an alias of the ABCI `Event` type and +take the form of: `{eventType}.{attributeKey}={attributeValue}`. -[See full example on GitHub](https://github.com/cometbft/cometbft/blob/v0.37.0/proto/tendermint/abci/types.proto#L334-L343) +```protobuf +// Event allows application developers to attach additional information to +// ResponseBeginBlock, ResponseEndBlock, ResponseCheckTx and ResponseDeliverTx. +// Later, transactions may be queried using these events. +message Event { + string type = 1; + repeated EventAttribute attributes = 2 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "attributes,omitempty" + ]; +} +``` An Event contains: -* A `type` to categorize the Event at a high-level; for example, the Cosmos SDK uses the `"message"` type to filter Events by `Msg`s. -* A list of `attributes` are key-value pairs that give more information about the categorized Event. For example, for the `"message"` type, we can filter Events by key-value pairs using `message.action={some_action}`, `message.module={some_module}` or `message.sender={some_sender}`. -* A `msg_index` to identify which messages relate to the same transaction +- A `type` to categorize the Event at a high-level; for example, the Cosmos SDK uses the `"message"` type to filter Events by `Msg`s. +- A list of `attributes`, key-value pairs that give more information about the categorized Event. For example, for the `"message"` type, we can filter Events by key-value pairs using `message.action={some_action}`, `message.module={some_module}` or `message.sender={some_sender}`. +- A `msg_index` to identify which messages relate to the same transaction - - To parse the attribute values as strings, make sure to add `'` (single quotes) around each attribute value. - + + To parse the attribute values as strings, make sure to add `'` (single quotes) + around each attribute value.
+ -*Typed Events* are Protobuf-defined [messages](/v0.47/build/architecture/adr-032-typed-events) used by the Cosmos SDK for emitting and querying Events. They are defined in a `event.proto` file, on a **per-module basis** and are read as `proto.Message`. *Legacy Events* are defined on a **per-module basis** in the module's `/types/events.go` file. They are triggered from the module's Protobuf [`Msg` service](/v0.47/build/building-modules/msg-services) by using the [`EventManager`](#eventmanager). +_Typed Events_ are Protobuf-defined [messages](/docs/common/pages/adr-comprehensive#adr-032-typed-events) used by the Cosmos SDK +for emitting and querying Events. They are defined in an `event.proto` file, on a **per-module basis** and are read as `proto.Message`. +_Legacy Events_ are defined on a **per-module basis** in the module's `/types/events.go` file. +They are triggered from the module's Protobuf [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services) +by using the [`EventManager`](#eventmanager). -In addition, each module documents its events under in the `Events` sections of its specs (x/\{moduleName}/`README.md`). +In addition, each module documents its events in the `Events` section of its specs (x/`{moduleName}`/`README.md`).
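The `{eventType}.{attributeKey}={attributeValue}` query form can be illustrated with a minimal sketch. The `Event` struct and `QueryTerms` helper below are hypothetical stand-ins for the ABCI type, not SDK code.

```go
package main

import "fmt"

// Event mirrors the shape described above: a type plus key/value attributes.
type Event struct {
	Type       string
	Attributes map[string]string
}

// QueryTerms renders each attribute as a {eventType}.{attributeKey}='{attributeValue}'
// term, single-quoting the value so it is parsed as a string.
func QueryTerms(e Event) []string {
	terms := make([]string, 0, len(e.Attributes))
	for k, v := range e.Attributes {
		terms = append(terms, fmt.Sprintf("%s.%s='%s'", e.Type, k, v))
	}
	return terms
}

func main() {
	e := Event{Type: "message", Attributes: map[string]string{"module": "bank"}}
	fmt.Println(QueryTerms(e)[0]) // message.module='bank'
}
```

Terms built this way are what the examples in the table below pass to the transaction query endpoints.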
Lastly, Events are returned to the underlying consensus engine in the response of the following ABCI messages: -* [`BeginBlock`](/v0.47/learn/advanced/baseapp#beginblock) -* [`EndBlock`](/v0.47/learn/advanced/baseapp#endblock) -* [`CheckTx`](/v0.47/learn/advanced/baseapp#checktx) -* [`DeliverTx`](/v0.47/learn/advanced/baseapp#delivertx) +- [`BeginBlock`](/docs/sdk/v0.47/learn/advanced/baseapp#beginblock) +- [`EndBlock`](/docs/sdk/v0.47/learn/advanced/baseapp#endblock) +- [`CheckTx`](/docs/sdk/v0.47/learn/advanced/baseapp#checktx) +- [`DeliverTx`](/docs/sdk/v0.47/learn/advanced/baseapp#delivertx) -### Examples[​](#examples "Direct link to Examples") +### Examples The following examples show how to query Events using the Cosmos SDK. -| Event | Description | -| ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------- | -| `tx.height=23` | Query all transactions at height 23 | -| `message.action='/cosmos.bank.v1beta1.Msg/Send'` | Query all transactions containing a x/bank `Send` [Service `Msg`](/v0.47/build/building-modules/msg-services). Note the `'`s around the value. | -| `message.module='bank'` | Query all transactions containing messages from the x/bank module. Note the `'`s around the value. | -| `create_validator.validator='cosmosval1...'` | x/staking-specific Event, see [x/staking SPEC](/v0.47/build/modules/staking). | +| Event | Description | +| ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `tx.height=23` | Query all transactions at height 23 | +| `message.action='/cosmos.bank.v1beta1.Msg/Send'` | Query all transactions containing a x/bank `Send` [Service `Msg`](/docs/sdk/v0.47/documentation/module-system/msg-services). Note the `'`s around the value. 
| +| `message.module='bank'` | Query all transactions containing messages from the x/bank module. Note the `'`s around the value. | +| `create_validator.validator='cosmosval1...'` | x/staking-specific Event, see [x/staking SPEC](/docs/sdk/v0.47/documentation/module-system/modules/staking). | -## EventManager[​](#eventmanager "Direct link to EventManager") +## EventManager -In Cosmos SDK applications, Events are managed by an abstraction called the `EventManager`. Internally, the `EventManager` tracks a list of Events for the entire execution flow of a transaction or `BeginBlock`/`EndBlock`. +In Cosmos SDK applications, Events are managed by an abstraction called the `EventManager`. +Internally, the `EventManager` tracks a list of Events for the entire execution flow of a +transaction or `BeginBlock`/`EndBlock`. -types/events.go +```go expandable +package types -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/events.go#L24-L27) + "encoding/json" + "fmt" + "reflect" + "sort" + "strings" + "golang.org/x/exp/maps" + "golang.org/x/exp/slices" + "github.com/cosmos/gogoproto/jsonpb" + proto "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/cosmos/cosmos-sdk/codec" +) -The `EventManager` comes with a set of useful methods to manage Events. The method that is used most by module and application developers is `EmitTypedEvent` or `EmitEvent` that tracks an Event in the `EventManager`. +/ ---------------------------------------------------------------------------- +/ Event Manager +/ ---------------------------------------------------------------------------- -types/events.go +/ EventManager implements a simple wrapper around a slice of Event objects that +/ can be emitted from. +type EventManager struct { + events Events +} -``` -loading... 
-``` +func NewEventManager() *EventManager { + return &EventManager{ + EmptyEvents() +} +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/events.go#L53-L62) +func (em *EventManager) -Module developers should handle Event emission via the `EventManager#EmitTypedEvent` or `EventManager#EmitEvent` in each message `Handler` and in each `BeginBlock`/`EndBlock` handler. The `EventManager` is accessed via the [`Context`](/v0.47/learn/advanced/context), where Event should be already registered, and emitted like this: +Events() -**Typed events:** +Events { + return em.events +} + +/ EmitEvent stores a single Event object. +/ Deprecated: Use EmitTypedEvent +func (em *EventManager) + +EmitEvent(event Event) { + em.events = em.events.AppendEvent(event) +} + +/ EmitEvents stores a series of Event objects. +/ Deprecated: Use EmitTypedEvents +func (em *EventManager) + +EmitEvents(events Events) { + em.events = em.events.AppendEvents(events) +} + +/ ABCIEvents returns all stored Event objects as abci.Event objects. 
+func (em EventManager) + +ABCIEvents() []abci.Event { + return em.events.ToABCIEvents() +} + +/ EmitTypedEvent takes typed event and emits converting it into Event +func (em *EventManager) + +EmitTypedEvent(tev proto.Message) + +error { + event, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +em.EmitEvent(event) + +return nil +} + +/ EmitTypedEvents takes series of typed events and emit +func (em *EventManager) + +EmitTypedEvents(tevs ...proto.Message) + +error { + events := make(Events, len(tevs)) + for i, tev := range tevs { + res, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +events[i] = res +} + +em.EmitEvents(events) + +return nil +} + +/ TypedEventToEvent takes typed event and converts to Event object +func TypedEventToEvent(tev proto.Message) (Event, error) { + evtType := proto.MessageName(tev) + +evtJSON, err := codec.ProtoMarshalJSON(tev, nil) + if err != nil { + return Event{ +}, err +} + +var attrMap map[string]json.RawMessage + err = json.Unmarshal(evtJSON, &attrMap) + if err != nil { + return Event{ +}, err +} + + / sort the keys to ensure the order is always the same + keys := maps.Keys(attrMap) + +slices.Sort(keys) + attrs := make([]abci.EventAttribute, 0, len(attrMap)) + for _, k := range keys { + v := attrMap[k] + attrs = append(attrs, abci.EventAttribute{ + Key: k, + Value: string(v), +}) +} + +return Event{ + Type: evtType, + Attributes: attrs, +}, nil +} + +/ ParseTypedEvent converts abci.Event back to typed event +func ParseTypedEvent(event abci.Event) (proto.Message, error) { + concreteGoType := proto.MessageType(event.Type) + if concreteGoType == nil { + return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type) +} + +var value reflect.Value + if concreteGoType.Kind() == reflect.Ptr { + value = reflect.New(concreteGoType.Elem()) +} + +else { + value = reflect.Zero(concreteGoType) +} + +protoMsg, ok := value.Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("%q does 
not implement proto.Message", event.Type) +} + attrMap := make(map[string]json.RawMessage) + for _, attr := range event.Attributes { + attrMap[attr.Key] = json.RawMessage(attr.Value) +} + +attrBytes, err := json.Marshal(attrMap) + if err != nil { + return nil, err +} + +err = jsonpb.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg) + if err != nil { + return nil, err +} + +return protoMsg, nil +} + +/ ---------------------------------------------------------------------------- +/ Events +/ ---------------------------------------------------------------------------- + +type ( + / Event is a type alias for an ABCI Event + Event abci.Event + + / Events defines a slice of Event objects + Events []Event +) + +/ NewEvent creates a new Event object with a given type and slice of one or more +/ attributes. +func NewEvent(ty string, attrs ...Attribute) + +Event { + e := Event{ + Type: ty +} + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ NewAttribute returns a new key/value Attribute object. +func NewAttribute(k, v string) + +Attribute { + return Attribute{ + k, v +} +} + +/ EmptyEvents returns an empty slice of events. +func EmptyEvents() + +Events { + return make(Events, 0) +} -x/group/keeper/msg\_server.go +func (a Attribute) +String() + +string { + return fmt.Sprintf("%s: %s", a.Key, a.Value) +} + +/ ToKVPair converts an Attribute object into a Tendermint key/value pair. +func (a Attribute) + +ToKVPair() + +abci.EventAttribute { + return abci.EventAttribute{ + Key: a.Key, + Value: a.Value +} +} + +/ AppendAttributes adds one or more attributes to an Event. +func (e Event) + +AppendAttributes(attrs ...Attribute) + +Event { + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ GetAttribute returns an attribute for a given key present in an event. +/ If the key is not found, the boolean value will be false. 
+func (e Event) + +GetAttribute(key string) (Attribute, bool) { + for _, attr := range e.Attributes { + if attr.Key == key { + return Attribute{ + Key: attr.Key, + Value: attr.Value +}, true +} + +} + +return Attribute{ +}, false +} + +/ AppendEvent adds an Event to a slice of events. +func (e Events) + +AppendEvent(event Event) + +Events { + return append(e, event) +} + +/ AppendEvents adds a slice of Event objects to an exist slice of Event objects. +func (e Events) + +AppendEvents(events Events) + +Events { + return append(e, events...) +} + +/ ToABCIEvents converts a slice of Event objects to a slice of abci.Event +/ objects. +func (e Events) + +ToABCIEvents() []abci.Event { + res := make([]abci.Event, len(e)) + for i, ev := range e { + res[i] = abci.Event{ + Type: ev.Type, + Attributes: ev.Attributes +} + +} + +return res +} + +/ GetAttributes returns all attributes matching a given key present in events. +/ If the key is not found, the boolean value will be false. +func (e Events) + +GetAttributes(key string) ([]Attribute, bool) { + attrs := make([]Attribute, 0) + for _, event := range e { + if attr, found := event.GetAttribute(key); found { + attrs = append(attrs, attr) +} + +} + +return attrs, len(attrs) > 0 +} + +/ Common event types and attribute keys +const ( + EventTypeTx = "tx" + + AttributeKeyAccountSequence = "acc_seq" + AttributeKeySignature = "signature" + AttributeKeyFee = "fee" + AttributeKeyFeePayer = "fee_payer" + + EventTypeMessage = "message" + + AttributeKeyAction = "action" + AttributeKeyModule = "module" + AttributeKeySender = "sender" + AttributeKeyAmount = "amount" +) + +type ( + / StringAttributes defines a slice of StringEvents objects. 
+ StringEvents []StringEvent +) + +func (se StringEvents) + +String() + +string { + var sb strings.Builder + for _, e := range se { + fmt.Fprintf(&sb, "\t\t- %s\n", e.Type) + for _, attr := range e.Attributes { + fmt.Fprintf(&sb, "\t\t\t- %s\n", attr) +} + +} + +return strings.TrimRight(sb.String(), "\n") +} + +/ Flatten returns a flattened version of StringEvents by grouping all attributes +/ per unique event type. +func (se StringEvents) + +Flatten() + +StringEvents { + flatEvents := make(map[string][]Attribute) + for _, e := range se { + flatEvents[e.Type] = append(flatEvents[e.Type], e.Attributes...) +} + keys := make([]string, 0, len(flatEvents)) + res := make(StringEvents, 0, len(flatEvents)) / appeneded to keys, same length of what is allocated to keys + for ty := range flatEvents { + keys = append(keys, ty) +} + +sort.Strings(keys) + for _, ty := range keys { + res = append(res, StringEvent{ + Type: ty, + Attributes: flatEvents[ty] +}) +} + +return res +} + +/ StringifyEvent converts an Event object to a StringEvent object. +func StringifyEvent(e abci.Event) + +StringEvent { + res := StringEvent{ + Type: e.Type +} + for _, attr := range e.Attributes { + res.Attributes = append( + res.Attributes, + Attribute{ + Key: attr.Key, + Value: attr.Value +}, + ) +} + +return res +} + +/ StringifyEvents converts a slice of Event objects into a slice of StringEvent +/ objects. +func StringifyEvents(events []abci.Event) + +StringEvents { + res := make(StringEvents, 0, len(events)) + for _, e := range events { + res = append(res, StringifyEvent(e)) +} + +return res.Flatten() +} + +/ MarkEventsToIndex returns the set of ABCI events, where each event's attribute +/ has it's index value marked based on the provided set of events to index. 
+func MarkEventsToIndex(events []abci.Event, indexSet map[string]struct{ +}) []abci.Event { + indexAll := len(indexSet) == 0 + updatedEvents := make([]abci.Event, len(events)) + for i, e := range events { + updatedEvent := abci.Event{ + Type: e.Type, + Attributes: make([]abci.EventAttribute, len(e.Attributes)), +} + for j, attr := range e.Attributes { + _, index := indexSet[fmt.Sprintf("%s.%s", e.Type, attr.Key)] + updatedAttr := abci.EventAttribute{ + Key: attr.Key, + Value: attr.Value, + Index: index || indexAll, +} + +updatedEvent.Attributes[j] = updatedAttr +} + +updatedEvents[i] = updatedEvent +} + +return updatedEvents +} ``` -loading... + +The `EventManager` comes with a set of useful methods to manage Events. The methods +used most by module and application developers are `EmitTypedEvent` and `EmitEvent`, which track +an Event in the `EventManager`. + +```go expandable +package types + +import ( + + "encoding/json" + "fmt" + "reflect" + "sort" + "strings" + "golang.org/x/exp/maps" + "golang.org/x/exp/slices" + "github.com/cosmos/gogoproto/jsonpb" + proto "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/cosmos/cosmos-sdk/codec" +) + +/ ---------------------------------------------------------------------------- +/ Event Manager +/ ---------------------------------------------------------------------------- + +/ EventManager implements a simple wrapper around a slice of Event objects that +/ can be emitted from. +type EventManager struct { + events Events +} + +func NewEventManager() *EventManager { + return &EventManager{ + EmptyEvents() +} +} + +func (em *EventManager) + +Events() + +Events { + return em.events +} + +/ EmitEvent stores a single Event object. +/ Deprecated: Use EmitTypedEvent +func (em *EventManager) + +EmitEvent(event Event) { + em.events = em.events.AppendEvent(event) +} + +/ EmitEvents stores a series of Event objects.
+/ Deprecated: Use EmitTypedEvents +func (em *EventManager) + +EmitEvents(events Events) { + em.events = em.events.AppendEvents(events) +} + +/ ABCIEvents returns all stored Event objects as abci.Event objects. +func (em EventManager) + +ABCIEvents() []abci.Event { + return em.events.ToABCIEvents() +} + +/ EmitTypedEvent takes typed event and emits converting it into Event +func (em *EventManager) + +EmitTypedEvent(tev proto.Message) + +error { + event, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +em.EmitEvent(event) + +return nil +} + +/ EmitTypedEvents takes series of typed events and emit +func (em *EventManager) + +EmitTypedEvents(tevs ...proto.Message) + +error { + events := make(Events, len(tevs)) + for i, tev := range tevs { + res, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +events[i] = res +} + +em.EmitEvents(events) + +return nil +} + +/ TypedEventToEvent takes typed event and converts to Event object +func TypedEventToEvent(tev proto.Message) (Event, error) { + evtType := proto.MessageName(tev) + +evtJSON, err := codec.ProtoMarshalJSON(tev, nil) + if err != nil { + return Event{ +}, err +} + +var attrMap map[string]json.RawMessage + err = json.Unmarshal(evtJSON, &attrMap) + if err != nil { + return Event{ +}, err +} + + / sort the keys to ensure the order is always the same + keys := maps.Keys(attrMap) + +slices.Sort(keys) + attrs := make([]abci.EventAttribute, 0, len(attrMap)) + for _, k := range keys { + v := attrMap[k] + attrs = append(attrs, abci.EventAttribute{ + Key: k, + Value: string(v), +}) +} + +return Event{ + Type: evtType, + Attributes: attrs, +}, nil +} + +/ ParseTypedEvent converts abci.Event back to typed event +func ParseTypedEvent(event abci.Event) (proto.Message, error) { + concreteGoType := proto.MessageType(event.Type) + if concreteGoType == nil { + return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type) +} + +var value reflect.Value + if concreteGoType.Kind() == 
reflect.Ptr { + value = reflect.New(concreteGoType.Elem()) +} + +else { + value = reflect.Zero(concreteGoType) +} + +protoMsg, ok := value.Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("%q does not implement proto.Message", event.Type) +} + attrMap := make(map[string]json.RawMessage) + for _, attr := range event.Attributes { + attrMap[attr.Key] = json.RawMessage(attr.Value) +} + +attrBytes, err := json.Marshal(attrMap) + if err != nil { + return nil, err +} + +err = jsonpb.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg) + if err != nil { + return nil, err +} + +return protoMsg, nil +} + +/ ---------------------------------------------------------------------------- +/ Events +/ ---------------------------------------------------------------------------- + +type ( + / Event is a type alias for an ABCI Event + Event abci.Event + + / Events defines a slice of Event objects + Events []Event +) + +/ NewEvent creates a new Event object with a given type and slice of one or more +/ attributes. +func NewEvent(ty string, attrs ...Attribute) + +Event { + e := Event{ + Type: ty +} + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ NewAttribute returns a new key/value Attribute object. +func NewAttribute(k, v string) + +Attribute { + return Attribute{ + k, v +} +} + +/ EmptyEvents returns an empty slice of events. +func EmptyEvents() + +Events { + return make(Events, 0) +} + +func (a Attribute) + +String() + +string { + return fmt.Sprintf("%s: %s", a.Key, a.Value) +} + +/ ToKVPair converts an Attribute object into a Tendermint key/value pair. +func (a Attribute) + +ToKVPair() + +abci.EventAttribute { + return abci.EventAttribute{ + Key: a.Key, + Value: a.Value +} +} + +/ AppendAttributes adds one or more attributes to an Event. 
+func (e Event) + +AppendAttributes(attrs ...Attribute) + +Event { + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ GetAttribute returns an attribute for a given key present in an event. +/ If the key is not found, the boolean value will be false. +func (e Event) + +GetAttribute(key string) (Attribute, bool) { + for _, attr := range e.Attributes { + if attr.Key == key { + return Attribute{ + Key: attr.Key, + Value: attr.Value +}, true +} + +} + +return Attribute{ +}, false +} + +/ AppendEvent adds an Event to a slice of events. +func (e Events) + +AppendEvent(event Event) + +Events { + return append(e, event) +} + +/ AppendEvents adds a slice of Event objects to an exist slice of Event objects. +func (e Events) + +AppendEvents(events Events) + +Events { + return append(e, events...) +} + +/ ToABCIEvents converts a slice of Event objects to a slice of abci.Event +/ objects. +func (e Events) + +ToABCIEvents() []abci.Event { + res := make([]abci.Event, len(e)) + for i, ev := range e { + res[i] = abci.Event{ + Type: ev.Type, + Attributes: ev.Attributes +} + +} + +return res +} + +/ GetAttributes returns all attributes matching a given key present in events. +/ If the key is not found, the boolean value will be false. 
+func (e Events) + +GetAttributes(key string) ([]Attribute, bool) { + attrs := make([]Attribute, 0) + for _, event := range e { + if attr, found := event.GetAttribute(key); found { + attrs = append(attrs, attr) +} + +} + +return attrs, len(attrs) > 0 +} + +/ Common event types and attribute keys +const ( + EventTypeTx = "tx" + + AttributeKeyAccountSequence = "acc_seq" + AttributeKeySignature = "signature" + AttributeKeyFee = "fee" + AttributeKeyFeePayer = "fee_payer" + + EventTypeMessage = "message" + + AttributeKeyAction = "action" + AttributeKeyModule = "module" + AttributeKeySender = "sender" + AttributeKeyAmount = "amount" +) + +type ( + / StringAttributes defines a slice of StringEvents objects. + StringEvents []StringEvent +) + +func (se StringEvents) + +String() + +string { + var sb strings.Builder + for _, e := range se { + fmt.Fprintf(&sb, "\t\t- %s\n", e.Type) + for _, attr := range e.Attributes { + fmt.Fprintf(&sb, "\t\t\t- %s\n", attr) +} + +} + +return strings.TrimRight(sb.String(), "\n") +} + +/ Flatten returns a flattened version of StringEvents by grouping all attributes +/ per unique event type. +func (se StringEvents) + +Flatten() + +StringEvents { + flatEvents := make(map[string][]Attribute) + for _, e := range se { + flatEvents[e.Type] = append(flatEvents[e.Type], e.Attributes...) +} + keys := make([]string, 0, len(flatEvents)) + res := make(StringEvents, 0, len(flatEvents)) / appeneded to keys, same length of what is allocated to keys + for ty := range flatEvents { + keys = append(keys, ty) +} + +sort.Strings(keys) + for _, ty := range keys { + res = append(res, StringEvent{ + Type: ty, + Attributes: flatEvents[ty] +}) +} + +return res +} + +/ StringifyEvent converts an Event object to a StringEvent object. 
+func StringifyEvent(e abci.Event) + +StringEvent { + res := StringEvent{ + Type: e.Type +} + for _, attr := range e.Attributes { + res.Attributes = append( + res.Attributes, + Attribute{ + Key: attr.Key, + Value: attr.Value +}, + ) +} + +return res +} + +/ StringifyEvents converts a slice of Event objects into a slice of StringEvent +/ objects. +func StringifyEvents(events []abci.Event) + +StringEvents { + res := make(StringEvents, 0, len(events)) + for _, e := range events { + res = append(res, StringifyEvent(e)) +} + +return res.Flatten() +} + +/ MarkEventsToIndex returns the set of ABCI events, where each event's attribute +/ has it's index value marked based on the provided set of events to index. +func MarkEventsToIndex(events []abci.Event, indexSet map[string]struct{ +}) []abci.Event { + indexAll := len(indexSet) == 0 + updatedEvents := make([]abci.Event, len(events)) + for i, e := range events { + updatedEvent := abci.Event{ + Type: e.Type, + Attributes: make([]abci.EventAttribute, len(e.Attributes)), +} + for j, attr := range e.Attributes { + _, index := indexSet[fmt.Sprintf("%s.%s", e.Type, attr.Key)] + updatedAttr := abci.EventAttribute{ + Key: attr.Key, + Value: attr.Value, + Index: index || indexAll, +} + +updatedEvent.Attributes[j] = updatedAttr +} + +updatedEvents[i] = updatedEvent +} + +return updatedEvents +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/group/keeper/msg_server.go#L88-L91) +Module developers should handle Event emission via the `EventManager#EmitTypedEvent` or `EventManager#EmitEvent` in each message +`Handler` and in each `BeginBlock`/`EndBlock` handler. 
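Before the full module listing that follows, the append-only pattern this paragraph describes can be sketched with a stdlib-only stand-in (assumption: `event` and `eventManager` below are simplified, hypothetical stand-ins for `sdk.Event` and the SDK's `EventManager`, not the real types):

```go
package main

import "fmt"

// Simplified stand-ins for sdk.Event and the SDK's EventManager
// (illustrative only; the real types appear in the listings in this page).
type event struct {
	Type  string
	Attrs map[string]string
}

type eventManager struct {
	events []event
}

// emitEvent mirrors EventManager#EmitEvent: an append-only collection.
func (em *eventManager) emitEvent(e event) {
	em.events = append(em.events, e)
}

// eventsList mirrors EventManager#Events: returns everything emitted so far.
func (em *eventManager) eventsList() []event {
	return em.events
}

func main() {
	// A message handler holds the manager via its Context and emits as it goes;
	// with the real SDK this would be ctx.EventManager().EmitTypedEvent(...).
	em := &eventManager{}
	em.emitEvent(event{Type: "message", Attrs: map[string]string{"action": "create_group"}})
	em.emitEvent(event{Type: "cosmos.group.v1.EventCreateGroup", Attrs: map[string]string{"group_id": "1"}})
	fmt.Println(len(em.eventsList())) // prints 2
}
```

At the end of message execution the collected events are converted to ABCI events and returned in the transaction response, which is what `ABCIEvents`/`ToABCIEvents` in the listings above do.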
The `EventManager` is accessed via +the [`Context`](/docs/sdk/v0.47/learn/advanced/context), where Events should already be registered, and can be emitted like this: -**Legacy events:** +**Typed events:** + +```go expandable +package keeper +import ( + + "context" + "encoding/binary" + "fmt" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/errors" + "github.com/cosmos/cosmos-sdk/x/group/internal/math" + "github.com/cosmos/cosmos-sdk/x/group/internal/orm" +) + +var _ group.MsgServer = Keeper{ +} + +/ TODO: Revisit this once we have proper gas fee framework. +/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(20) + +func (k Keeper) + +CreateGroup(goCtx context.Context, req *group.MsgCreateGroup) (*group.MsgCreateGroupResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + metadata := req.Metadata + members := group.MemberRequests{ + Members: req.Members +} + admin := req.Admin + if err := members.ValidateBasic(); err != nil { + return nil, err +} + if err := k.assertMetadataLength(metadata, "group metadata"); err != nil { + return nil, err +} + totalWeight := math.NewDecFromInt64(0) + for i := range members.Members { + m := members.Members[i] + if err := k.assertMetadataLength(m.Metadata, "member metadata"); err != nil { + return nil, err +} + + / Members of a group must have a positive weight. + weight, err := math.NewPositiveDecFromString(m.Weight) + if err != nil { + return nil, err +} + + / Adding up members weights to compute group total weight. + totalWeight, err = totalWeight.Add(weight) + if err != nil { + return nil, err +} + +} + + / Create a new group in the groupTable.
+ groupInfo := &group.GroupInfo{ + Id: k.groupTable.Sequence().PeekNextVal(ctx.KVStore(k.key)), + Admin: admin, + Metadata: metadata, + Version: 1, + TotalWeight: totalWeight.String(), + CreatedAt: ctx.BlockTime(), +} + +groupID, err := k.groupTable.Create(ctx.KVStore(k.key), groupInfo) + if err != nil { + return nil, sdkerrors.Wrap(err, "could not create group") +} + + / Create new group members in the groupMemberTable. + for i := range members.Members { + m := members.Members[i] + err := k.groupMemberTable.Create(ctx.KVStore(k.key), &group.GroupMember{ + GroupId: groupID, + Member: &group.Member{ + Address: m.Address, + Weight: m.Weight, + Metadata: m.Metadata, + AddedAt: ctx.BlockTime(), +}, +}) + if err != nil { + return nil, sdkerrors.Wrapf(err, "could not store member %d", i) +} + +} + +err = ctx.EventManager().EmitTypedEvent(&group.EventCreateGroup{ + GroupId: groupID +}) + if err != nil { + return nil, err +} + +return &group.MsgCreateGroupResponse{ + GroupId: groupID +}, nil +} + +func (k Keeper) + +UpdateGroupMembers(goCtx context.Context, req *group.MsgUpdateGroupMembers) (*group.MsgUpdateGroupMembersResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + totalWeight, err := math.NewNonNegativeDecFromString(g.TotalWeight) + if err != nil { + return sdkerrors.Wrap(err, "group total weight") +} + for i := range req.MemberUpdates { + if err := k.assertMetadataLength(req.MemberUpdates[i].Metadata, "group member metadata"); err != nil { + return err +} + groupMember := group.GroupMember{ + GroupId: req.GroupId, + Member: &group.Member{ + Address: req.MemberUpdates[i].Address, + Weight: req.MemberUpdates[i].Weight, + Metadata: req.MemberUpdates[i].Metadata, +}, +} + + / Checking if the group member is already part of the group + var found bool + var prevGroupMember group.GroupMember + switch err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), orm.PrimaryKey(&groupMember), &prevGroupMember); { + case err == 
nil: + found = true + case sdkerrors.ErrNotFound.Is(err): + found = false + default: + return sdkerrors.Wrap(err, "get group member") +} + +newMemberWeight, err := math.NewNonNegativeDecFromString(groupMember.Member.Weight) + if err != nil { + return err +} + + / Handle delete for members with zero weight. + if newMemberWeight.IsZero() { + / We can't delete a group member that doesn't already exist. + if !found { + return sdkerrors.Wrap(sdkerrors.ErrNotFound, "unknown member") +} + +previousMemberWeight, err := math.NewPositiveDecFromString(prevGroupMember.Member.Weight) + if err != nil { + return err +} + + / Subtract the weight of the group member to delete from the group total weight. + totalWeight, err = math.SubNonNegative(totalWeight, previousMemberWeight) + if err != nil { + return err +} + + / Delete group member in the groupMemberTable. + if err := k.groupMemberTable.Delete(ctx.KVStore(k.key), &groupMember); err != nil { + return sdkerrors.Wrap(err, "delete member") +} + +continue +} + / If group member already exists, handle update + if found { + previousMemberWeight, err := math.NewPositiveDecFromString(prevGroupMember.Member.Weight) + if err != nil { + return err +} + / Subtract previous weight from the group total weight. + totalWeight, err = math.SubNonNegative(totalWeight, previousMemberWeight) + if err != nil { + return err +} + / Save updated group member in the groupMemberTable. + groupMember.Member.AddedAt = prevGroupMember.Member.AddedAt + if err := k.groupMemberTable.Update(ctx.KVStore(k.key), &groupMember); err != nil { + return sdkerrors.Wrap(err, "add member") +} + +} + +else { / else handle create. + groupMember.Member.AddedAt = ctx.BlockTime() + if err := k.groupMemberTable.Create(ctx.KVStore(k.key), &groupMember); err != nil { + return sdkerrors.Wrap(err, "add member") +} + +} + / In both cases (handle + update), we need to add the new member's weight to the group total weight. 
+ totalWeight, err = totalWeight.Add(newMemberWeight) + if err != nil { + return err +} + +} + / Update group in the groupTable. + g.TotalWeight = totalWeight.String() + +g.Version++ + if err := k.validateDecisionPolicies(ctx, *g); err != nil { + return err +} + +return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + err := k.doUpdateGroup(ctx, req, action, "members updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupMembersResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupAdmin(goCtx context.Context, req *group.MsgUpdateGroupAdmin) (*group.MsgUpdateGroupAdminResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + g.Admin = req.NewAdmin + g.Version++ + + return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + err := k.doUpdateGroup(ctx, req, action, "admin updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupAdminResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupMetadata(goCtx context.Context, req *group.MsgUpdateGroupMetadata) (*group.MsgUpdateGroupMetadataResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + g.Metadata = req.Metadata + g.Version++ + return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.assertMetadataLength(req.Metadata, "group metadata"); err != nil { + return nil, err +} + err := k.doUpdateGroup(ctx, req, action, "metadata updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupMetadataResponse{ +}, nil +} + +func (k Keeper) + +CreateGroupWithPolicy(goCtx context.Context, req *group.MsgCreateGroupWithPolicy) (*group.MsgCreateGroupWithPolicyResponse, error) { + groupRes, err := k.CreateGroup(goCtx, &group.MsgCreateGroup{ + Admin: req.Admin, + Members: req.Members, + Metadata: req.GroupMetadata, +}) + if err != nil { + return nil, sdkerrors.Wrap(err, "group response") +} + groupID := groupRes.GroupId + + var groupPolicyAddr 
sdk.AccAddress + groupPolicyRes, err := k.CreateGroupPolicy(goCtx, &group.MsgCreateGroupPolicy{ + Admin: req.Admin, + GroupId: groupID, + Metadata: req.GroupPolicyMetadata, + DecisionPolicy: req.DecisionPolicy, +}) + if err != nil { + return nil, sdkerrors.Wrap(err, "group policy response") +} + policyAddr := groupPolicyRes.Address + + groupPolicyAddr, err = sdk.AccAddressFromBech32(policyAddr) + if err != nil { + return nil, sdkerrors.Wrap(err, "group policy address") +} + groupPolicyAddress := groupPolicyAddr.String() + if req.GroupPolicyAsAdmin { + updateAdminReq := &group.MsgUpdateGroupAdmin{ + GroupId: groupID, + Admin: req.Admin, + NewAdmin: groupPolicyAddress, +} + _, err = k.UpdateGroupAdmin(goCtx, updateAdminReq) + if err != nil { + return nil, err +} + updatePolicyAddressReq := &group.MsgUpdateGroupPolicyAdmin{ + Admin: req.Admin, + GroupPolicyAddress: groupPolicyAddress, + NewAdmin: groupPolicyAddress, +} + _, err = k.UpdateGroupPolicyAdmin(goCtx, updatePolicyAddressReq) + if err != nil { + return nil, err +} + +} + +return &group.MsgCreateGroupWithPolicyResponse{ + GroupId: groupID, + GroupPolicyAddress: groupPolicyAddress +}, nil +} + +func (k Keeper) + +CreateGroupPolicy(goCtx context.Context, req *group.MsgCreateGroupPolicy) (*group.MsgCreateGroupPolicyResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + +admin, err := sdk.AccAddressFromBech32(req.GetAdmin()) + if err != nil { + return nil, sdkerrors.Wrap(err, "request admin") +} + +policy, err := req.GetDecisionPolicy() + if err != nil { + return nil, sdkerrors.Wrap(err, "request decision policy") +} + groupID := req.GetGroupID() + metadata := req.GetMetadata() + if err := k.assertMetadataLength(metadata, "group policy metadata"); err != nil { + return nil, err +} + +g, err := k.getGroupInfo(ctx, groupID) + if err != nil { + return nil, err +} + +groupAdmin, err := sdk.AccAddressFromBech32(g.Admin) + if err != nil { + return nil, sdkerrors.Wrap(err, "group admin") +} + / Only current group 
admin is authorized to create a group policy for this + if !groupAdmin.Equals(admin) { + return nil, sdkerrors.Wrap(sdkerrors.ErrUnauthorized, "not group admin") +} + +err = policy.Validate(g, k.config) + if err != nil { + return nil, err +} + + / Generate account address of group policy. + var accountAddr sdk.AccAddress + / loop here in the rare case where a ADR-028-derived address creates a + / collision with an existing address. + for { + nextAccVal := k.groupPolicySeq.NextVal(ctx.KVStore(k.key)) + derivationKey := make([]byte, 8) + +binary.BigEndian.PutUint64(derivationKey, nextAccVal) + accountCredentials := authtypes.NewModuleCredential(group.ModuleName, [][]byte{{ + GroupPolicyTablePrefix +}, derivationKey +}) + +accountAddr = sdk.AccAddress(accountCredentials.Address()) + if k.accKeeper.GetAccount(ctx, accountAddr) != nil { + / handle a rare collision, in which case we just go on to the + / next sequence value and derive a new address. + continue +} + + / group policy accounts are unclaimable base accounts + account, err := authtypes.NewBaseAccountWithPubKey(accountCredentials) + if err != nil { + return nil, sdkerrors.Wrap(err, "could not create group policy account") +} + acc := k.accKeeper.NewAccount(ctx, account) + +k.accKeeper.SetAccount(ctx, acc) + +break +} + +groupPolicy, err := group.NewGroupPolicyInfo( + accountAddr, + groupID, + admin, + metadata, + 1, + policy, + ctx.BlockTime(), + ) + if err != nil { + return nil, err +} + if err := k.groupPolicyTable.Create(ctx.KVStore(k.key), &groupPolicy); err != nil { + return nil, sdkerrors.Wrap(err, "could not create group policy") +} + +err = ctx.EventManager().EmitTypedEvent(&group.EventCreateGroupPolicy{ + Address: accountAddr.String() +}) + if err != nil { + return nil, err +} + +return &group.MsgCreateGroupPolicyResponse{ + Address: accountAddr.String() +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyAdmin(goCtx context.Context, req *group.MsgUpdateGroupPolicyAdmin) 
(*group.MsgUpdateGroupPolicyAdminResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupPolicy.Admin = req.NewAdmin + groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + err := k.doUpdateGroupPolicy(ctx, req.GroupPolicyAddress, req.Admin, action, "group policy admin updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyAdminResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyDecisionPolicy(goCtx context.Context, req *group.MsgUpdateGroupPolicyDecisionPolicy) (*group.MsgUpdateGroupPolicyDecisionPolicyResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + +policy, err := req.GetDecisionPolicy() + if err != nil { + return nil, err +} + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + g, err := k.getGroupInfo(ctx, groupPolicy.GroupId) + if err != nil { + return err +} + +err = policy.Validate(g, k.config) + if err != nil { + return err +} + +err = groupPolicy.SetDecisionPolicy(policy) + if err != nil { + return err +} + +groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + +err = k.doUpdateGroupPolicy(ctx, req.GroupPolicyAddress, req.Admin, action, "group policy's decision policy updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyDecisionPolicyResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyMetadata(goCtx context.Context, req *group.MsgUpdateGroupPolicyMetadata) (*group.MsgUpdateGroupPolicyMetadataResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + metadata := req.GetMetadata() + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupPolicy.Metadata = metadata + groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err := k.assertMetadataLength(metadata, "group policy metadata"); err != nil { + return nil, err +} + err := k.doUpdateGroupPolicy(ctx, 
req.GroupPolicyAddress, req.Admin, action, "group policy metadata updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyMetadataResponse{ +}, nil +} + +func (k Keeper) + +SubmitProposal(goCtx context.Context, req *group.MsgSubmitProposal) (*group.MsgSubmitProposalResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + +groupPolicyAddr, err := sdk.AccAddressFromBech32(req.GroupPolicyAddress) + if err != nil { + return nil, sdkerrors.Wrap(err, "request account address of group policy") +} + metadata := req.Metadata + proposers := req.Proposers + msgs, err := req.GetMsgs() + if err != nil { + return nil, sdkerrors.Wrap(err, "request msgs") +} + if err := k.assertMetadataLength(metadata, "metadata"); err != nil { + return nil, err +} + if err := k.assertMetadataLength(req.Summary, "proposal summary"); err != nil { + return nil, err +} + if err := k.assertMetadataLength(req.Title, "proposal Title"); err != nil { + return nil, err +} + +policyAcc, err := k.getGroupPolicyInfo(ctx, req.GroupPolicyAddress) + if err != nil { + return nil, sdkerrors.Wrap(err, "load group policy") +} + +g, err := k.getGroupInfo(ctx, policyAcc.GroupId) + if err != nil { + return nil, sdkerrors.Wrap(err, "get group by groupId of group policy") +} + + / Only members of the group can submit a new proposal. + for i := range proposers { + if !k.groupMemberTable.Has(ctx.KVStore(k.key), orm.PrimaryKey(&group.GroupMember{ + GroupId: g.Id, + Member: &group.Member{ + Address: proposers[i] +}})) { + return nil, sdkerrors.Wrapf(errors.ErrUnauthorized, "not in group: %s", proposers[i]) +} + +} + + / Check that if the messages require signers, they are all equal to the given account address of group policy. 
+ if err := ensureMsgAuthZ(msgs, groupPolicyAddr); err != nil { + return nil, err +} + +policy, err := policyAcc.GetDecisionPolicy() + if err != nil { + return nil, sdkerrors.Wrap(err, "proposal group policy decision policy") +} + + / Prevent proposal that can not succeed. + err = policy.Validate(g, k.config) + if err != nil { + return nil, err +} + m := &group.Proposal{ + Id: k.proposalTable.Sequence().PeekNextVal(ctx.KVStore(k.key)), + GroupPolicyAddress: req.GroupPolicyAddress, + Metadata: metadata, + Proposers: proposers, + SubmitTime: ctx.BlockTime(), + GroupVersion: g.Version, + GroupPolicyVersion: policyAcc.Version, + Status: group.PROPOSAL_STATUS_SUBMITTED, + ExecutorResult: group.PROPOSAL_EXECUTOR_RESULT_NOT_RUN, + VotingPeriodEnd: ctx.BlockTime().Add(policy.GetVotingPeriod()), / The voting window begins as soon as the proposal is submitted. + FinalTallyResult: group.DefaultTallyResult(), + Title: req.Title, + Summary: req.Summary, +} + if err := m.SetMsgs(msgs); err != nil { + return nil, sdkerrors.Wrap(err, "create proposal") +} + +id, err := k.proposalTable.Create(ctx.KVStore(k.key), m) + if err != nil { + return nil, sdkerrors.Wrap(err, "create proposal") +} + +err = ctx.EventManager().EmitTypedEvent(&group.EventSubmitProposal{ + ProposalId: id +}) + if err != nil { + return nil, err +} + + / Try to execute proposal immediately + if req.Exec == group.Exec_EXEC_TRY { + / Consider proposers as Yes votes + for i := range proposers { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "vote on proposal") + _, err = k.Vote(sdk.WrapSDKContext(ctx), &group.MsgVote{ + ProposalId: id, + Voter: proposers[i], + Option: group.VOTE_OPTION_YES, +}) + if err != nil { + return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, sdkerrors.Wrapf(err, "the proposal was created but failed on vote for voter %s", proposers[i]) +} + +} + + / Then try to execute the proposal + _, err = k.Exec(sdk.WrapSDKContext(ctx), &group.MsgExec{ + ProposalId: id, + / We consider the first 
proposer as the MsgExecRequest signer
 + / but that could be revisited (e.g. using the group policy)
 +
 +Executor: proposers[0],
 +})
 + if err != nil {
 + return &group.MsgSubmitProposalResponse{
 + ProposalId: id
 +}, sdkerrors.Wrap(err, "the proposal was created but failed on exec")
 +}
 +
 +}
 +
 +return &group.MsgSubmitProposalResponse{
 + ProposalId: id
 +}, nil
 +}
 +
 +func (k Keeper)
 +
 +WithdrawProposal(goCtx context.Context, req *group.MsgWithdrawProposal) (*group.MsgWithdrawProposalResponse, error) {
 + ctx := sdk.UnwrapSDKContext(goCtx)
 + id := req.ProposalId
 + address := req.Address
 +
 + proposal, err := k.getProposal(ctx, id)
 + if err != nil {
 + return nil, err
 +}
 +
 + / Ensure the proposal can be withdrawn.
 + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED {
 + return nil, sdkerrors.Wrapf(errors.ErrInvalid, "cannot withdraw a proposal with the status of %s", proposal.Status.String())
 +}
 +
 +var policyInfo group.GroupPolicyInfo
 + if policyInfo, err = k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress); err != nil {
 + return nil, sdkerrors.Wrap(err, "load group policy")
 +}
 +
 + / Check that the address is either the group policy admin or one of the proposers. 
+ if address != policyInfo.Admin && !isProposer(proposal, address) { + return nil, sdkerrors.Wrapf(errors.ErrUnauthorized, "given address is neither group policy admin nor in proposers: %s", address) +} + +proposal.Status = group.PROPOSAL_STATUS_WITHDRAWN + if err := k.proposalTable.Update(ctx.KVStore(k.key), id, &proposal); err != nil { + return nil, err +} + +err = ctx.EventManager().EmitTypedEvent(&group.EventWithdrawProposal{ + ProposalId: id +}) + if err != nil { + return nil, err +} + +return &group.MsgWithdrawProposalResponse{ +}, nil +} + +func (k Keeper) + +Vote(goCtx context.Context, req *group.MsgVote) (*group.MsgVoteResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + id := req.ProposalId + voteOption := req.Option + metadata := req.Metadata + if err := k.assertMetadataLength(metadata, "metadata"); err != nil { + return nil, err +} + +proposal, err := k.getProposal(ctx, id) + if err != nil { + return nil, err +} + / Ensure that we can still accept votes for this proposal. + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED { + return nil, sdkerrors.Wrap(errors.ErrInvalid, "proposal not open for voting") +} + if ctx.BlockTime().After(proposal.VotingPeriodEnd) { + return nil, sdkerrors.Wrap(errors.ErrExpired, "voting period has ended already") +} + +policyInfo, err := k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress) + if err != nil { + return nil, sdkerrors.Wrap(err, "load group policy") +} + +electorate, err := k.getGroupInfo(ctx, policyInfo.GroupId) + if err != nil { + return nil, err +} + + / Count and store votes. 
+ voterAddr := req.Voter
 + voter := group.GroupMember{
 + GroupId: electorate.Id,
 + Member: &group.Member{
 + Address: voterAddr
 +}}
 + if err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), orm.PrimaryKey(&voter), &voter); err != nil {
 + return nil, sdkerrors.Wrapf(err, "voter address: %s", voterAddr)
 +}
 + newVote := group.Vote{
 + ProposalId: id,
 + Voter: voterAddr,
 + Option: voteOption,
 + Metadata: metadata,
 + SubmitTime: ctx.BlockTime(),
 +}
 +
 + / The ORM will return an error if the vote already exists,
 + / making sure that a voter hasn't already voted.
 + if err := k.voteTable.Create(ctx.KVStore(k.key), &newVote); err != nil {
 + return nil, sdkerrors.Wrap(err, "store vote")
 +}
 +
 +err = ctx.EventManager().EmitTypedEvent(&group.EventVote{
 + ProposalId: id
 +})
 + if err != nil {
 + return nil, err
 +}
 +
 + / Try to execute proposal immediately
 + if req.Exec == group.Exec_EXEC_TRY {
 + _, err = k.Exec(sdk.WrapSDKContext(ctx), &group.MsgExec{
 + ProposalId: id,
 + Executor: voterAddr,
 +})
 + if err != nil {
 + return nil, err
 +}
 +
 +}
 +
 +return &group.MsgVoteResponse{
 +}, nil
 +}
 +
 +/ doTallyAndUpdate performs a tally, and, if the tally result is final, then:
 +/ - updates the proposal's `Status` and `FinalTallyResult` fields,
 +/ - prunes all the votes.
 +func (k Keeper)
 +
 +doTallyAndUpdate(ctx sdk.Context, p *group.Proposal, electorate group.GroupInfo, policyInfo group.GroupPolicyInfo)
 +
 +error {
 + policy, err := policyInfo.GetDecisionPolicy()
 + if err != nil {
 + return err
 +}
 +
 +tallyResult, err := k.Tally(ctx, *p, policyInfo.GroupId)
 + if err != nil {
 + return err
 +}
 +
 +result, err := policy.Allow(tallyResult, electorate.TotalWeight)
 + if err != nil {
 + return sdkerrors.Wrap(err, "policy allow")
 +}
 +
 + / If the result was final (i.e. enough votes to pass) or if the voting
 + / period ended, then we consider the proposal as final. 
+ if isFinal := result.Final || ctx.BlockTime().After(p.VotingPeriodEnd); isFinal { + if err := k.pruneVotes(ctx, p.Id); err != nil { + return err +} + +p.FinalTallyResult = tallyResult + if result.Allow { + p.Status = group.PROPOSAL_STATUS_ACCEPTED +} + +else { + p.Status = group.PROPOSAL_STATUS_REJECTED +} + +} + +return nil +} + +/ Exec executes the messages from a proposal. +func (k Keeper) + +Exec(goCtx context.Context, req *group.MsgExec) (*group.MsgExecResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + id := req.ProposalId + + proposal, err := k.getProposal(ctx, id) + if err != nil { + return nil, err +} + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED && proposal.Status != group.PROPOSAL_STATUS_ACCEPTED { + return nil, sdkerrors.Wrapf(errors.ErrInvalid, "not possible to exec with proposal status %s", proposal.Status.String()) +} + +policyInfo, err := k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress) + if err != nil { + return nil, sdkerrors.Wrap(err, "load group policy") +} + + / If proposal is still in SUBMITTED phase, it means that the voting period + / didn't end yet, and tallying hasn't been done. In this case, we need to + / tally first. + if proposal.Status == group.PROPOSAL_STATUS_SUBMITTED { + electorate, err := k.getGroupInfo(ctx, policyInfo.GroupId) + if err != nil { + return nil, sdkerrors.Wrap(err, "load group") +} + if err := k.doTallyAndUpdate(ctx, &proposal, electorate, policyInfo); err != nil { + return nil, err +} + +} + + / Execute proposal payload. + var logs string + if proposal.Status == group.PROPOSAL_STATUS_ACCEPTED && proposal.ExecutorResult != group.PROPOSAL_EXECUTOR_RESULT_SUCCESS { + / Caching context so that we don't update the store in case of failure. 
+ cacheCtx, flush := ctx.CacheContext() + +addr, err := sdk.AccAddressFromBech32(policyInfo.Address) + if err != nil { + return nil, err +} + decisionPolicy := policyInfo.DecisionPolicy.GetCachedValue().(group.DecisionPolicy) + if results, err := k.doExecuteMsgs(cacheCtx, k.router, proposal, addr, decisionPolicy); err != nil { + proposal.ExecutorResult = group.PROPOSAL_EXECUTOR_RESULT_FAILURE + logs = fmt.Sprintf("proposal execution failed on proposal %d, because of error %s", id, err.Error()) + +k.Logger(ctx).Info("proposal execution failed", "cause", err, "proposalID", id) +} + +else { + proposal.ExecutorResult = group.PROPOSAL_EXECUTOR_RESULT_SUCCESS + flush() + for _, res := range results { + / NOTE: The sdk msg handler creates a new EventManager, so events must be correctly propagated back to the current context + ctx.EventManager().EmitEvents(res.GetEvents()) +} + +} + +} + + / Update proposal in proposalTable + / If proposal has successfully run, delete it from state. + if proposal.ExecutorResult == group.PROPOSAL_EXECUTOR_RESULT_SUCCESS { + if err := k.pruneProposal(ctx, proposal.Id); err != nil { + return nil, err +} + +} + +else { + store := ctx.KVStore(k.key) + if err := k.proposalTable.Update(store, id, &proposal); err != nil { + return nil, err +} + +} + +err = ctx.EventManager().EmitTypedEvent(&group.EventExec{ + ProposalId: id, + Logs: logs, + Result: proposal.ExecutorResult, +}) + if err != nil { + return nil, err +} + +return &group.MsgExecResponse{ + Result: proposal.ExecutorResult, +}, nil +} + +/ LeaveGroup implements the MsgServer/LeaveGroup method. 
+func (k Keeper) + +LeaveGroup(goCtx context.Context, req *group.MsgLeaveGroup) (*group.MsgLeaveGroupResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + _, err := sdk.AccAddressFromBech32(req.Address) + if err != nil { + return nil, err +} + +groupInfo, err := k.getGroupInfo(ctx, req.GroupId) + if err != nil { + return nil, sdkerrors.Wrap(err, "group") +} + +groupWeight, err := math.NewNonNegativeDecFromString(groupInfo.TotalWeight) + if err != nil { + return nil, err +} + +gm, err := k.getGroupMember(ctx, &group.GroupMember{ + GroupId: req.GroupId, + Member: &group.Member{ + Address: req.Address +}, +}) + if err != nil { + return nil, err +} + +memberWeight, err := math.NewPositiveDecFromString(gm.Member.Weight) + if err != nil { + return nil, err +} + +updatedWeight, err := math.SubNonNegative(groupWeight, memberWeight) + if err != nil { + return nil, err +} + + / delete group member in the groupMemberTable. + if err := k.groupMemberTable.Delete(ctx.KVStore(k.key), gm); err != nil { + return nil, sdkerrors.Wrap(err, "group member") +} + + / update group weight + groupInfo.TotalWeight = updatedWeight.String() + +groupInfo.Version++ + if err := k.validateDecisionPolicies(ctx, groupInfo); err != nil { + return nil, err +} + if err := k.groupTable.Update(ctx.KVStore(k.key), groupInfo.Id, &groupInfo); err != nil { + return nil, err +} + +ctx.EventManager().EmitTypedEvent(&group.EventLeaveGroup{ + GroupId: req.GroupId, + Address: req.Address, +}) + +return &group.MsgLeaveGroupResponse{ +}, nil +} + +func (k Keeper) + +getGroupMember(ctx sdk.Context, member *group.GroupMember) (*group.GroupMember, error) { + var groupMember group.GroupMember + switch err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), + orm.PrimaryKey(member), &groupMember); { + case err == nil: + break + case sdkerrors.ErrNotFound.Is(err): + return nil, sdkerrors.ErrNotFound.Wrapf("%s is not part of group %d", member.Member.Address, member.GroupId) + +default: + return nil, err +} + +return 
&groupMember, nil +} + +type authNGroupReq interface { + GetGroupID() + +uint64 + GetAdmin() + +string +} + +type ( + actionFn func(m *group.GroupInfo) + +error + groupPolicyActionFn func(m *group.GroupPolicyInfo) + +error +) + +/ doUpdateGroupPolicy first makes sure that the group policy admin initiated the group policy update, +/ before performing the group policy update and emitting an event. +func (k Keeper) + +doUpdateGroupPolicy(ctx sdk.Context, groupPolicy string, admin string, action groupPolicyActionFn, note string) + +error { + groupPolicyInfo, err := k.getGroupPolicyInfo(ctx, groupPolicy) + if err != nil { + return sdkerrors.Wrap(err, "load group policy") +} + +groupPolicyAddr, err := sdk.AccAddressFromBech32(groupPolicy) + if err != nil { + return sdkerrors.Wrap(err, "group policy address") +} + +groupPolicyAdmin, err := sdk.AccAddressFromBech32(admin) + if err != nil { + return sdkerrors.Wrap(err, "group policy admin") +} + + / Only current group policy admin is authorized to update a group policy. + if groupPolicyAdmin.String() != groupPolicyInfo.Admin { + return sdkerrors.Wrap(sdkerrors.ErrUnauthorized, "not group policy admin") +} + if err := action(&groupPolicyInfo); err != nil { + return sdkerrors.Wrap(err, note) +} + if err = k.abortProposals(ctx, groupPolicyAddr); err != nil { + return err +} + if err = ctx.EventManager().EmitTypedEvent(&group.EventUpdateGroupPolicy{ + Address: groupPolicyInfo.Address +}); err != nil { + return err +} + +return nil +} + +/ doUpdateGroup first makes sure that the group admin initiated the group update, +/ before performing the group update and emitting an event. 
+func (k Keeper) + +doUpdateGroup(ctx sdk.Context, req authNGroupReq, action actionFn, note string) + +error { + err := k.doAuthenticated(ctx, req, action, note) + if err != nil { + return err +} + +err = ctx.EventManager().EmitTypedEvent(&group.EventUpdateGroup{ + GroupId: req.GetGroupID() +}) + if err != nil { + return err +} + +return nil +} + +/ doAuthenticated makes sure that the group admin initiated the request, +/ and perform the provided action on the group. +func (k Keeper) + +doAuthenticated(ctx sdk.Context, req authNGroupReq, action actionFn, errNote string) + +error { + group, err := k.getGroupInfo(ctx, req.GetGroupID()) + if err != nil { + return err +} + +admin, err := sdk.AccAddressFromBech32(group.Admin) + if err != nil { + return sdkerrors.Wrap(err, "group admin") +} + +reqAdmin, err := sdk.AccAddressFromBech32(req.GetAdmin()) + if err != nil { + return sdkerrors.Wrap(err, "request admin") +} + if !admin.Equals(reqAdmin) { + return sdkerrors.Wrapf(sdkerrors.ErrUnauthorized, "not group admin; got %s, expected %s", req.GetAdmin(), group.Admin) +} + if err := action(&group); err != nil { + return sdkerrors.Wrap(err, errNote) +} + +return nil +} + +/ assertMetadataLength returns an error if given metadata length +/ is greater than a pre-defined maxMetadataLen. +func (k Keeper) + +assertMetadataLength(metadata string, description string) + +error { + if metadata != "" && uint64(len(metadata)) > k.config.MaxMetadataLen { + return sdkerrors.Wrapf(errors.ErrMaxLimit, description) +} + +return nil +} + +/ validateDecisionPolicies loops through all decision policies from the group, +/ and calls each of their Validate() + +method. 
+func (k Keeper) + +validateDecisionPolicies(ctx sdk.Context, g group.GroupInfo) + +error { + it, err := k.groupPolicyByGroupIndex.Get(ctx.KVStore(k.key), g.Id) + if err != nil { + return err +} + +defer it.Close() + for { + var groupPolicy group.GroupPolicyInfo + _, err = it.LoadNext(&groupPolicy) + if errors.ErrORMIteratorDone.Is(err) { + break +} + if err != nil { + return err +} + +err = groupPolicy.DecisionPolicy.GetCachedValue().(group.DecisionPolicy).Validate(g, k.config) + if err != nil { + return err +} + +} + +return nil +} + +/ isProposer checks that an address is a proposer of a given proposal. +func isProposer(proposal group.Proposal, address string) + +bool { + for _, proposer := range proposal.Proposers { + if proposer == address { + return true +} + +} + +return false +} ``` -ctx.EventManager().EmitEvent( sdk.NewEvent(eventType, sdk.NewAttribute(attributeKey, attributeValue)),) + +**Legacy events:** + +```go +ctx.EventManager().EmitEvent( + sdk.NewEvent(eventType, sdk.NewAttribute(attributeKey, attributeValue)), +) ``` Module's `handler` function should also set a new `EventManager` to the `context` to isolate emitted Events per `message`: -``` -func NewHandler(keeper Keeper) sdk.Handler { return func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { ctx = ctx.WithEventManager(sdk.NewEventManager()) switch msg := msg.(type) { +```go +func NewHandler(keeper Keeper) + +sdk.Handler { + return func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + switch msg := msg.(type) { ``` -See the [`Msg` services](/v0.47/build/building-modules/msg-services) concept doc for a more detailed view on how to typically implement Events and use the `EventManager` in modules. +See the [`Msg` services](/docs/sdk/v0.47/documentation/module-system/msg-services) concept doc for a more detailed +view on how to typically implement Events and use the `EventManager` in modules. 
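The per-message isolation pattern above can be sketched in plain Go. The toy `Event` and `EventManager` types below are illustrative stand-ins, not the SDK's actual `sdk.Event`/`sdk.EventManager`; they only show why giving each message handler a fresh manager keeps its emitted Events separate:

```go
package main

import "fmt"

// Event is a simplified stand-in for an SDK event (the real type
// also carries typed attributes).
type Event struct {
	Type string
}

// EventManager collects the events emitted while handling one message.
type EventManager struct {
	events []Event
}

func NewEventManager() *EventManager { return &EventManager{} }

// Emit records an event in this manager only.
func (em *EventManager) Emit(e Event) { em.events = append(em.events, e) }

// Events returns the events recorded so far.
func (em *EventManager) Events() []Event { return em.events }

// handleMsg mirrors the handler pattern shown above: each message is
// processed with a fresh EventManager, so its events never leak into
// another message's event set.
func handleMsg(msgType string) []Event {
	em := NewEventManager()
	em.Emit(Event{Type: msgType})
	return em.Events()
}

func main() {
	first := handleMsg("cosmos.bank.v1beta1.MsgSend")
	second := handleMsg("cosmos.group.v1.MsgVote")
	fmt.Println(len(first), len(second)) // each handler only sees its own events
}
```

In the real SDK this is exactly what `ctx.WithEventManager(sdk.NewEventManager())` accomplishes at the top of `NewHandler`.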
-## Subscribing to Events[​](#subscribing-to-events "Direct link to Subscribing to Events") +## Subscribing to Events You can use CometBFT's [Websocket](https://docs.cometbft.com/v0.37/core/subscription) to subscribe to Events by calling the `subscribe` RPC method: -``` -{ "jsonrpc": "2.0", "method": "subscribe", "id": "0", "params": { "query": "tm.event='eventCategory' AND eventType.eventAttribute='attributeValue'" }} +```json +{ + "jsonrpc": "2.0", + "method": "subscribe", + "id": "0", + "params": { + "query": "tm.event='eventCategory' AND eventType.eventAttribute='attributeValue'" + } +} ``` The main `eventCategory` you can subscribe to are: -* `NewBlock`: Contains Events triggered during `BeginBlock` and `EndBlock`. -* `Tx`: Contains Events triggered during `DeliverTx` (i.e. transaction processing). -* `ValidatorSetUpdates`: Contains validator set updates for the block. +- `NewBlock`: Contains Events triggered during `BeginBlock` and `EndBlock`. +- `Tx`: Contains Events triggered during `DeliverTx` (i.e. transaction processing). +- `ValidatorSetUpdates`: Contains validator set updates for the block. -These Events are triggered from the `state` package after a block is committed. You can get the full list of Event categories [on the CometBFT Go documentation](https://pkg.go.dev/github.com/cometbft/cometbft/types#pkg-constants). +These Events are triggered from the `state` package after a block is committed. You can get the +full list of Event categories [on the CometBFT Go documentation](https://pkg.go.dev/github.com/cometbft/cometbft/types#pkg-constants). The `type` and `attribute` value of the `query` allow you to filter the specific Event you are looking for. For example, a `Mint` transaction triggers an Event of type `EventMint` and has an `Id` and an `Owner` as `attributes` (as defined in the [`events.proto` file of the `NFT` module](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/nft/v1beta1/event.proto#L21-L31)). 
Subscribing to this Event would be done like so:

-```
-{ "jsonrpc": "2.0", "method": "subscribe", "id": "0", "params": { "query": "tm.event='Tx' AND mint.owner='ownerAddress'" }}
+```json
+{
+  "jsonrpc": "2.0",
+  "method": "subscribe",
+  "id": "0",
+  "params": {
+    "query": "tm.event='Tx' AND mint.owner='ownerAddress'"
+  }
+}
```

-where `ownerAddress` is an address following the [`AccAddress`](/v0.47/learn/beginner/accounts#addresses) format.
+where `ownerAddress` is an address following the [`AccAddress`](/docs/sdk/v0.47/learn/beginner/accounts#addresses) format.

The same way can be used to subscribe to [legacy events](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/bank/types/events.go).

-## Default Events[​](#default-events "Direct link to Default Events")
+## Default Events

There are a few events that are automatically emitted for all messages, directly from `baseapp`.

-* `message.action`: The name of the message type.
-* `message.sender`: The address of the message signer.
-* `message.module`: The name of the module that emitted the message.
+- `message.action`: The name of the message type.
+- `message.sender`: The address of the message signer.
+- `message.module`: The name of the module that emitted the message.

-
-  The module name is assumed by `baseapp` to be the second element of the message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`. In case a module does not follow the standard message path, (e.g. IBC), it is advised to keep emitting the module name event. `Baseapp` only emits that event if the module have not already done so.
-
+
+  The module name is assumed by `baseapp` to be the second element of the
+  message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`. In case a module
+  does not follow the standard message path (e.g. IBC), it is advised to keep
+  emitting the module name event. `Baseapp` only emits that event if the module
+  has not already done so. 
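The "second element of the message route" convention can be illustrated with a small helper. `moduleNameFromMsgType` below is a hypothetical function written for this example, not baseapp's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// moduleNameFromMsgType sketches how baseapp derives the
// `message.module` attribute: it takes the second dot-separated
// element of the fully-qualified message type,
// e.g. "cosmos.bank.v1beta1.MsgSend" -> "bank".
// Illustrative only; modules off the standard path (e.g. IBC)
// emit the module name event themselves.
func moduleNameFromMsgType(msgType string) string {
	parts := strings.Split(msgType, ".")
	if len(parts) < 2 {
		return msgType
	}
	return parts[1]
}

func main() {
	fmt.Println(moduleNameFromMsgType("cosmos.bank.v1beta1.MsgSend")) // bank
}
```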
+ diff --git a/docs/sdk/v0.47/learn/advanced/grpc_rest.mdx b/docs/sdk/v0.47/learn/advanced/grpc_rest.mdx index 6ab1f720..731fe536 100644 --- a/docs/sdk/v0.47/learn/advanced/grpc_rest.mdx +++ b/docs/sdk/v0.47/learn/advanced/grpc_rest.mdx @@ -1,108 +1,203 @@ --- title: "gRPC, REST, and CometBFT Endpoints" -description: "Version: v0.47" --- - - This document presents an overview of all the endpoints a node exposes: gRPC, REST as well as some other endpoints. - +## Synopsis -## An Overview of All Endpoints[​](#an-overview-of-all-endpoints "Direct link to An Overview of All Endpoints") +This document presents an overview of all the endpoints a node exposes: gRPC, REST as well as some other endpoints. -Each node exposes the following endpoints for users to interact with a node, each endpoint is served on a different port. Details on how to configure each endpoint is provided in the endpoint's own section. - -* the gRPC server (default port: `9090`), -* the REST server (default port: `1317`), -* the CometBFT RPC endpoint (default port: `26657`). - - - The node also exposes some other endpoints, such as the CometBFT P2P endpoint, or the [Prometheus endpoint](https://docs.cometbft.com/v0.37/core/metrics), which are not directly related to the Cosmos SDK. Please refer to the [CometBFT documentation](https://docs.cometbft.com/v0.37/core/configuration) for more information about these endpoints. - - -## gRPC Server[​](#grpc-server "Direct link to gRPC Server") +## An Overview of All Endpoints -In the Cosmos SDK, Protobuf is the main [encoding](/v0.47/learn/advanced/encoding) library. This brings a wide range of Protobuf-based tools that can be plugged into the Cosmos SDK. One such tool is [gRPC](https://grpc.io), a modern open-source high performance RPC framework that has decent client support in several languages. - -Each module exposes a [Protobuf `Query` service](/v0.47/build/building-modules/messages-and-queries#queries) that defines state queries. 
The `Query` services and a transaction service used to broadcast transactions are hooked up to the gRPC server via the following function inside the application: - -server/types/app.go +Each node exposes the following endpoints for users to interact with a node, each endpoint is served on a different port. Details on how to configure each endpoint is provided in the endpoint's own section. -``` -loading... +- the gRPC server (default port: `9090`), +- the REST server (default port: `1317`), +- the CometBFT RPC endpoint (default port: `26657`). + + + The node also exposes some other endpoints, such as the CometBFT P2P endpoint, + or the [Prometheus endpoint](https://docs.cometbft.com/v0.37/core/metrics), + which are not directly related to the Cosmos SDK. Please refer to the + [CometBFT documentation](https://docs.cometbft.com/v0.37/core/configuration) + for more information about these endpoints. + + +## gRPC Server + +In the Cosmos SDK, Protobuf is the main [encoding](/docs/sdk/v0.47/learn/advanced/encoding) library. This brings a wide range of Protobuf-based tools that can be plugged into the Cosmos SDK. One such tool is [gRPC](https://grpc.io), a modern open-source high performance RPC framework that has decent client support in several languages. + +Each module exposes a [Protobuf `Query` service](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#queries) that defines state queries. 
The `Query` services and a transaction service used to broadcast transactions are hooked up to the gRPC server via the following function inside the application: + +```go expandable +package types + +import ( + + "encoding/json" + "io" + "time" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + tmtypes "github.com/tendermint/tendermint/types" + dbm "github.com/tendermint/tm-db" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ ServerStartTime defines the time duration that the server need to stay running after startup +/ for the startup be considered successful +const ServerStartTime = 5 * time.Second + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +interface{ +} + +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. + Application interface { + abci.Application + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServer registers gRPC services directly with the gRPC + / server. 
+ RegisterGRPCServer(grpc.Server) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). + RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for tendermint queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +sdk.CommitMultiStore +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []tmtypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams *tmproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. + AppExporter func(log.Logger, dbm.DB, io.Writer, int64, bool, []string, AppOptions, []string) (ExportedApp, error) +) ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server/types/app.go#L46-L48) - -Note: It is not possible to expose any [Protobuf `Msg` service](/v0.47/build/building-modules/messages-and-queries#messages) endpoints via gRPC. Transactions must be generated and signed using the CLI or programmatically before they can be broadcasted using gRPC. 
See [Generating, Signing, and Broadcasting Transactions](/v0.47/user/run-node/txs) for more information. +Note: It is not possible to expose any [Protobuf `Msg` service](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages) endpoints via gRPC. Transactions must be generated and signed using the CLI or programmatically before they can be broadcasted using gRPC. See [Generating, Signing, and Broadcasting Transactions](/docs/sdk/v0.47/user/run-node/txs) for more information. The `grpc.Server` is a concrete gRPC server, which spawns and serves all gRPC query requests and a broadcast transaction request. This server can be configured inside `~/.simapp/config/app.toml`: -* `grpc.enable = true|false` field defines if the gRPC server should be enabled. Defaults to `true`. -* `grpc.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `localhost:9090`. +- `grpc.enable = true|false` field defines if the gRPC server should be enabled. Defaults to `true`. +- `grpc.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `localhost:9090`. - - `~/.simapp` is the directory where the node's configuration and databases are stored. By default, it's set to `~/.{app_name}`. - + + `~/.simapp` is the directory where the node's configuration and databases are + stored. By default, it's set to `~/.{app_name}`. + -Once the gRPC server is started, you can send requests to it using a gRPC client. Some examples are given in our [Interact with the Node](/v0.47/user/run-node/interact-node#using-grpc) tutorial. +Once the gRPC server is started, you can send requests to it using a gRPC client. Some examples are given in our [Interact with the Node](/docs/sdk/v0.47/user/run-node/interact-node#using-grpc) tutorial. An overview of all available gRPC endpoints shipped with the Cosmos SDK is [Protobuf documentation](https://buf.build/cosmos/cosmos-sdk). 
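Putting the two fields together, the relevant section of `~/.simapp/config/app.toml` looks roughly like this (default values shown; comments paraphrased from the generated file):

```toml
[grpc]

# Enable defines if the gRPC server should be enabled.
enable = true

# Address defines the gRPC server address to bind to.
address = "localhost:9090"
```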
-## REST Server[​](#rest-server "Direct link to REST Server") +## REST Server Cosmos SDK supports REST routes via gRPC-gateway. All routes are configured under the following fields in `~/.simapp/config/app.toml`: -* `api.enable = true|false` field defines if the REST server should be enabled. Defaults to `false`. -* `api.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `tcp://localhost:1317`. -* some additional API configuration options are defined in `~/.simapp/config/app.toml`, along with comments, please refer to that file directly. +- `api.enable = true|false` field defines if the REST server should be enabled. Defaults to `false`. +- `api.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `tcp://localhost:1317`. +- some additional API configuration options are defined in `~/.simapp/config/app.toml`, along with comments, please refer to that file directly. -### gRPC-gateway REST Routes[​](#grpc-gateway-rest-routes "Direct link to gRPC-gateway REST Routes") +### gRPC-gateway REST Routes If, for various reasons, you cannot use gRPC (for example, you are building a web application, and browsers don't support HTTP2 on which gRPC is built), then the Cosmos SDK offers REST routes via gRPC-gateway. [gRPC-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) is a tool to expose gRPC endpoints as REST endpoints. For each gRPC endpoint defined in a Protobuf `Query` service, the Cosmos SDK offers a REST equivalent. For instance, querying a balance could be done via the `/cosmos.bank.v1beta1.QueryAllBalances` gRPC endpoint, or alternatively via the gRPC-gateway `"/cosmos/bank/v1beta1/balances/{address}"` REST endpoint: both will return the same result. For each RPC method defined in a Protobuf `Query` service, the corresponding REST endpoint is defined as an option: -proto/cosmos/bank/v1beta1/query.proto - +```protobuf + // AllBalances queries the balance of all coins for a single account. 
+ // + // When called from another module, this query might consume a high amount of + // gas if the pagination field is incorrectly set. + rpc AllBalances(QueryAllBalancesRequest) returns (QueryAllBalancesResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/bank/v1beta1/balances/{address}"; + } ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/query.proto#L23-L30) For application developers, gRPC-gateway REST routes needs to be wired up to the REST server, this is done by calling the `RegisterGRPCGatewayRoutes` function on the ModuleManager. -### Swagger[​](#swagger "Direct link to Swagger") +### Swagger A [Swagger](https://swagger.io/) (or OpenAPIv2) specification file is exposed under the `/swagger` route on the API server. Swagger is an open specification describing the API endpoints a server serves, including description, input arguments, return types and much more about each endpoint. Enabling the `/swagger` endpoint is configurable inside `~/.simapp/config/app.toml` via the `api.swagger` field, which is set to true by default. -For application developers, you may want to generate your own Swagger definitions based on your custom modules. The Cosmos SDK's [Swagger generation script](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/scripts/protoc-swagger-gen.sh) is a good place to start. +For application developers, you may want to generate your own Swagger definitions based on your custom modules. +The Cosmos SDK's [Swagger generation script](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/scripts/protoc-swagger-gen.sh) is a good place to start. -## CometBFT RPC[​](#cometbft-rpc "Direct link to CometBFT RPC") +## CometBFT RPC Independently from the Cosmos SDK, CometBFT also exposes a RPC server. 
This RPC server can be configured by tuning parameters under the `rpc` table in the `~/.simapp/config/config.toml`; the default listening address is `tcp://localhost:26657`. An OpenAPI specification of all CometBFT RPC endpoints is available [here](https://docs.cometbft.com/master/rpc/). Some CometBFT RPC endpoints are directly related to the Cosmos SDK: -* `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings: - - * any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.Query/AllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf. - * `/app/simulate`: this will simulate a transaction, and return some information such as gas used. - * `/app/version`: this will return the application's version. - * `/store/{path}`: this will query the store directly. - * `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port. - * `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID. - -* `/broadcast_tx_{aync,async,commit}`: these 3 endpoint will broadcast a transaction to other peers. CLI, gRPC and REST expose [a way to broadcast transations](/v0.47/learn/advanced/transactions#broadcasting-the-transaction), but they all use these 3 CometBFT RPCs under the hood.
- -## Comparison Table[​](#comparison-table "Direct link to Comparison Table") - -| Name | Advantages | Disadvantages | -| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | -| gRPC | - can use code-generated stubs in various languages - supports streaming and bidirectional communication (HTTP2) - small wire binary sizes, faster transmission | - based on HTTP2, not available in browsers - learning curve (mostly due to Protobuf) | -| REST | - ubiquitous - client libraries in all languages, faster implementation | - only supports unary request-response communication (HTTP1.1) - bigger over-the-wire message sizes (JSON) | -| CometBFT RPC | - easy to use | - bigger over-the-wire message sizes (JSON) | +- `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings: + - any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.Query/AllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf. + - `/app/simulate`: this will simulate a transaction, and return some information such as gas used. + - `/app/version`: this will return the application's version. + - `/store/{path}`: this will query the store directly. + - `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port. + - `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID. +- `/broadcast_tx_{sync,async,commit}`: these 3 endpoints will broadcast a transaction to other peers. CLI, gRPC and REST expose [a way to broadcast transactions](/docs/sdk/v0.47/learn/advanced/transactions#broadcasting-the-transaction), but they all use these 3 CometBFT RPCs under the hood.
+
+## Comparison Table
+
+| Name | Advantages | Disadvantages |
+| ------------ | ------------- | ------------- |
+| gRPC | - can use code-generated stubs in various languages<br/>- supports streaming and bidirectional communication (HTTP2)<br/>- small wire binary sizes, faster transmission | - based on HTTP2, not available in browsers<br/>- learning curve (mostly due to Protobuf) |
+| REST | - ubiquitous<br/>- client libraries in all languages, faster implementation | - only supports unary request-response communication (HTTP1.1)<br/>- bigger over-the-wire message sizes (JSON) |
+| CometBFT RPC | - easy to use | - bigger over-the-wire message sizes (JSON) |
diff --git a/docs/sdk/v0.47/learn/advanced/interblock-cache.mdx b/docs/sdk/v0.47/learn/advanced/interblock-cache.mdx index c171a1f3..fd532244 100644 --- a/docs/sdk/v0.47/learn/advanced/interblock-cache.mdx +++ b/docs/sdk/v0.47/learn/advanced/interblock-cache.mdx @@ -1,61 +1,46 @@ --- -title: "Inter-block Cache" -description: "Version: v0.47" +title: Inter-block Cache --- * [Inter-block Cache](#inter-block-cache) - * [Synopsis](#synopsis) - * [Overview and basic concepts](#overview-and-basic-concepts) - * [Motivation](#motivation) * [Definitions](#definitions) - * [System model and properties](#system-model-and-properties) - * [Assumptions](#assumptions) - * [Properties](#properties) - * [Thread safety](#thread-safety) * [Crash recovery](#crash-recovery) * [Iteration](#iteration) - * [Technical specification](#technical-specification) - * [General design](#general-design) - * [API](#api) - * [CommitKVCacheManager](#commitkvcachemanager) * [CommitKVStoreCache](#commitkvstorecache) - * [Implementation details](#implementation-details) - * [History](#history) - * [Copyright](#copyright) -## Synopsis[​](#synopsis "Direct link to Synopsis") +## Synopsis The inter-block cache is an in-memory cache storing (in most cases) immutable state that modules need to read in between blocks. When enabled, all sub-stores of a multi store, e.g., `rootmulti`, are wrapped. -## Overview and basic concepts[​](#overview-and-basic-concepts "Direct link to Overview and basic concepts") +## Overview and basic concepts -### Motivation[​](#motivation "Direct link to Motivation") +### Motivation The goal of the inter-block cache is to allow SDK modules to have fast access to data that is typically queried during the execution of every block. This is data that does not change often, e.g. module parameters.
The inter-block cache wraps each `CommitKVStore` of a multi store such as `rootmulti` with a fixed size, write-through cache. Caches are not cleared after a block is committed, as opposed to other caching layers such as `cachekv`. -### Definitions[​](#definitions "Direct link to Definitions") +### Definitions * `Store key` uniquely identifies a store. * `KVCache` is a `CommitKVStore` wrapped with a cache. * `Cache manager` is a key component of the inter-block cache responsible for maintaining a map from `store keys` to `KVCaches`. -## System model and properties[​](#system-model-and-properties "Direct link to System model and properties") +## System model and properties -### Assumptions[​](#assumptions "Direct link to Assumptions") +### Assumptions This specification assumes that there exists a cache implementation accessible to the inter-block cache feature. @@ -76,47 +61,55 @@ The specification also assumes that `CommitKVStore` offers the following API: > Ideally, both `Cache` and `CommitKVStore` should be specified in a different document and referenced here. -### Properties[​](#properties "Direct link to Properties") +### Properties -#### Thread safety[​](#thread-safety "Direct link to Thread safety") +#### Thread safety -Accessing the `cache manager` or a `KVCache` is not thread-safe: no method is guarded with a lock. Note that this is true even if the cache implementation is thread-safe. +Accessing the `cache manager` or a `KVCache` is not thread-safe: no method is guarded with a lock. +Note that this is true even if the cache implementation is thread-safe. > For instance, assume that two `Set` operations are executed concurrently on the same key, each writing a different value. After both are executed, the cache and the underlying store may be inconsistent, each storing a different value under the same key. 
-#### Crash recovery[​](#crash-recovery "Direct link to Crash recovery") +#### Crash recovery -The inter-block cache transparently delegates `Commit()` to its aggregate `CommitKVStore`. If the aggregate `CommitKVStore` supports atomic writes and use them to guarantee that the store is always in a consistent state in disk, the inter-block cache can be transparently moved to a consistent state when a failure occurs. +The inter-block cache transparently delegates `Commit()` to its aggregate `CommitKVStore`. If the +aggregate `CommitKVStore` supports atomic writes and uses them to guarantee that the store is always in a consistent state on disk, the inter-block cache can be transparently moved to a consistent state when a failure occurs. > Note that this is the case for `IAVLStore`, the preferred `CommitKVStore`. On commit, it calls `SaveVersion()` on the underlying `MutableTree`. `SaveVersion` writes to disk are atomic via batching. This means that only consistent versions of the store (the tree) are written to the disk. Thus, in case of a failure during a `SaveVersion` call, on recovery from disk, the version of the store will be consistent. -#### Iteration[​](#iteration "Direct link to Iteration") +#### Iteration Iteration over each wrapped store is supported via the embedded `CommitKVStore` interface. -## Technical specification[​](#technical-specification "Direct link to Technical specification") +## Technical specification -### General design[​](#general-design "Direct link to General design") +### General design The inter-block cache feature is composed of two components: `CommitKVCacheManager` and `CommitKVCache`. `CommitKVCacheManager` implements the cache manager. It maintains a mapping from a store key to a `KVStore`.
-``` -type CommitKVStoreCacheManager interface{ cacheSize uint caches map[string]CommitKVStore} +```go
+type CommitKVStoreCacheManager struct {
+	cacheSize uint
+	caches    map[string]CommitKVStore
+}
``` `CommitKVStoreCache` implements a `KVStore`: a write-through cache that wraps a `CommitKVStore`. This means that deletes and writes always happen to both the cache and the underlying `CommitKVStore`. Reads on the other hand first hit the internal cache. During a cache miss, the read is delegated to the underlying `CommitKVStore` and cached. -``` -type CommitKVStoreCache interface{ store CommitKVStore cache Cache} +```go
+type CommitKVStoreCache struct {
+	store CommitKVStore
+	cache Cache
+}
``` To enable inter-block cache on `rootmulti`, one needs to instantiate a `CommitKVCacheManager` and set it by calling `SetInterBlockCache()` before calling one of `LoadLatestVersion()`, `LoadLatestVersionAndUpgrade(...)`, `LoadVersionAndUpgrade(...)` and `LoadVersion(version)`. -### API[​](#api "Direct link to API") +### API -#### CommitKVCacheManager[​](#commitkvcachemanager "Direct link to CommitKVCacheManager") +#### CommitKVCacheManager The method `NewCommitKVStoreCacheManager` creates a new cache manager and returns it. @@ -124,8 +117,16 @@ The method `NewCommitKVStoreCacheManager` creates a new cache manager and return | ---- | ------- | ------------------------------------------------------------------------ | | size | integer | Determines the capacity of each of the KVCache maintained by the manager | -``` -func NewCommitKVStoreCacheManager(size uint) CommitKVStoreCacheManager { manager = CommitKVStoreCacheManager{size, make(map[string]CommitKVStore)} return manager} +```go
+func NewCommitKVStoreCacheManager(size uint) CommitKVStoreCacheManager {
+	manager = CommitKVStoreCacheManager{size, make(map[string]CommitKVStore)}
+	return manager
+}
``` `GetStoreCache` returns a cache from the CommitStoreCacheManager for a given store key.
If no cache exists for the store key, then one is created and set. @@ -136,8 +137,27 @@ func NewCommitKVStoreCacheManager(size uint) CommitKVStoreCacheManager { mana | storeKey | string | The store key of the store being retrieved | | store | `CommitKVStore` | The store that it is cached in case the manager does not have any in its map of caches | -``` -func GetStoreCache( manager CommitKVStoreCacheManager, storeKey string, store CommitKVStore) CommitKVStore { if manager.caches.has(storeKey) { return manager.caches.get(storeKey) } else { cache = CommitKVStoreCacheManager{store, manager.cacheSize} manager.set(storeKey, cache) return cache }} +```go expandable
+func GetStoreCache(
+	manager CommitKVStoreCacheManager,
+	storeKey string,
+	store CommitKVStore) CommitKVStore {
+	if manager.caches.has(storeKey) {
+		return manager.caches.get(storeKey)
+	} else {
+		cache = CommitKVStoreCache{store, manager.cacheSize}
+		manager.set(storeKey, cache)
+		return cache
+	}
+}
``` `Unwrap` returns the underlying CommitKVStore for a given store key. @@ -147,8 +167,22 @@ func GetStoreCache( manager CommitKVStoreCacheManager, storeKey string, | manager | `CommitKVStoreCacheManager` | The cache manager | | storeKey | string | The store key of the store being unwrapped | -``` -func Unwrap( manager CommitKVStoreCacheManager, storeKey string) CommitKVStore { if manager.caches.has(storeKey) { cache = manager.caches.get(storeKey) return cache.store } else { return nil }} +```go expandable
+func Unwrap(
+	manager CommitKVStoreCacheManager,
+	storeKey string) CommitKVStore {
+	if manager.caches.has(storeKey) {
+		cache = manager.caches.get(storeKey)
+		return cache.store
+	} else {
+		return nil
+	}
+}
``` `Reset` resets the manager's map of caches.
@@ -157,11 +191,15 @@ func Unwrap( manager CommitKVStoreCacheManager, storeKey string) CommitKVS | ------- | --------------------------- | ----------------- | | manager | `CommitKVStoreCacheManager` | The cache manager | -``` -function Reset(manager CommitKVStoreCacheManager) { for (let storeKey of manager.caches.keys()) { manager.caches.delete(storeKey) }} +```go
+func Reset(manager CommitKVStoreCacheManager) {
+	for storeKey := range manager.caches {
+		delete(manager.caches, storeKey)
+	}
+}
``` -#### CommitKVStoreCache[​](#commitkvstorecache "Direct link to CommitKVStoreCache") +#### CommitKVStoreCache `NewCommitKVStoreCache` creates a new `CommitKVStoreCache` and returns it. @@ -170,8 +208,18 @@ func Reset(manager CommitKVStoreCacheManager) | store | CommitKVStore | The store to be cached | | size | string | Determines the capacity of the cache being created | -``` -func NewCommitKVStoreCache( store CommitKVStore, size uint) CommitKVStoreCache { KVCache = CommitKVStoreCache{store, NewCache(size)} return KVCache} +```go
+func NewCommitKVStoreCache(
+	store CommitKVStore,
+	size uint) CommitKVStoreCache {
+	KVCache = CommitKVStoreCache{store, NewCache(size)}
+	return KVCache
+}
``` `Get` retrieves a value by key. It first looks in the cache. If the key is not in the cache, the query is delegated to the underlying `CommitKVStore`. In the latter case, the key/value pair is cached. The method returns the value.
@@ -181,8 +229,25 @@ func NewCommitKVStoreCache( store CommitKVStore, size uint) CommitKVStoreC | KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is retrieved | | key | string | Key of the key/value pair being retrieved | -``` -func Get( KVCache CommitKVStoreCache, key string) []byte { valueCache, success := KVCache.cache.Get(key) if success { // cache hit return valueCache } else { // cache miss valueStore = KVCache.store.Get(key) KVCache.cache.Add(key, valueStore) return valueStore }} +```go expandable
+func Get(
+	KVCache CommitKVStoreCache,
+	key string) []byte {
+	valueCache, success := KVCache.cache.Get(key)
+	if success {
+		// cache hit
+		return valueCache
+	} else {
+		// cache miss
+		valueStore = KVCache.store.Get(key)
+		KVCache.cache.Add(key, valueStore)
+		return valueStore
+	}
+}
``` `Set` inserts a key/value pair into both the write-through cache and the underlying `CommitKVStore`. @@ -193,8 +258,15 @@ func Get( KVCache CommitKVStoreCache, key string) []byte { valueCache, | key | string | Key of the key/value pair being inserted | | value | \[]byte | Value of the key/value pair being inserted | -``` -func Set( KVCache CommitKVStoreCache, key string, value []byte) { KVCache.cache.Add(key, value) KVCache.store.Set(key, value)} +```go
+func Set(
+	KVCache CommitKVStoreCache,
+	key string,
+	value []byte) {
+	KVCache.cache.Add(key, value)
+	KVCache.store.Set(key, value)
+}
``` `Delete` removes a key/value pair from both the write-through cache and the underlying `CommitKVStore`.
@@ -204,8 +276,14 @@ func Set( KVCache CommitKVStoreCache, key string, value []byte) { KV | KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is deleted | | key | string | Key of the key/value pair being deleted | -``` -func Delete( KVCache CommitKVStoreCache, key string) { KVCache.cache.Remove(key) KVCache.store.Delete(key)} +```go
+func Delete(
+	KVCache CommitKVStoreCache,
+	key string) {
+	KVCache.cache.Remove(key)
+	KVCache.store.Delete(key)
+}
``` `CacheWrap` wraps a `CommitKVStoreCache` with another caching layer (`CacheKV`). @@ -216,18 +294,21 @@ func Delete( KVCache CommitKVStoreCache, key string) { KVCache.cache.Re | ------- | -------------------- | -------------------------------------- | | KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` being wrapped | -``` -func CacheWrap( KVCache CommitKVStoreCache) { return CacheKV.NewStore(KVCache)} +```go
+func CacheWrap(
+	KVCache CommitKVStoreCache) {
+	return CacheKV.NewStore(KVCache)
+}
``` -### Implementation details[​](#implementation-details "Direct link to Implementation details") +### Implementation details The inter-block cache implementation uses a fixed-size adaptive replacement cache (ARC) as cache. [The ARC implementation](https://github.com/hashicorp/golang-lru/blob/master/arc.go) is thread-safe. ARC is an enhancement over the standard LRU cache in that it tracks both frequency and recency of use. This avoids a burst in access to new entries from evicting the frequently used older entries. It adds some additional tracking overhead to a standard LRU cache; computationally it is roughly `2x` the cost, and the extra memory overhead is linear with the size of the cache. The default cache size is `1000`.
-## History[​](#history "Direct link to History") +## History Dec 20, 2022 - Initial draft finished and submitted as a PR -## Copyright[​](#copyright "Direct link to Copyright") +## Copyright All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). diff --git a/docs/sdk/v0.47/learn/advanced/node.mdx b/docs/sdk/v0.47/learn/advanced/node.mdx index 8fa18ab2..ed8f5532 100644 --- a/docs/sdk/v0.47/learn/advanced/node.mdx +++ b/docs/sdk/v0.47/learn/advanced/node.mdx @@ -1,63 +1,615 @@ --- -title: "Node Client (Daemon)" -description: "Version: v0.47" +title: Node Client (Daemon) --- - - The main endpoint of a Cosmos SDK application is the daemon client, otherwise known as the full-node client. The full-node runs the state-machine, starting from a genesis file. It connects to peers running the same client in order to receive and relay transactions, block proposals and signatures. The full-node is constituted of the application, defined with the Cosmos SDK, and of a consensus engine connected to the application via the ABCI. - +## Synopsis + +The main endpoint of a Cosmos SDK application is the daemon client, otherwise known as the full-node client. The full-node runs the state-machine, starting from a genesis file. It connects to peers running the same client in order to receive and relay transactions, block proposals and signatures. The full-node is constituted of the application, defined with the Cosmos SDK, and of a consensus engine connected to the application via the ABCI. - ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of an SDK application](/v0.47/learn/beginner/overview-app) +### Pre-requisite Readings + +- [Anatomy of an SDK application](/docs/sdk/v0.47/learn/beginner/overview-app) + -## `main` function[​](#main-function "Direct link to main-function") +## `main` function The full-node client of any Cosmos SDK application is built by running a `main` function. 
The client is generally named by appending the `-d` suffix to the application name (e.g. `appd` for an application named `app`), and the `main` function is defined in a `./appd/cmd/main.go` file. Running this function creates an executable `appd` that comes with a set of commands. For an app named `app`, the main command is [`appd start`](#start-command), which starts the full-node. In general, developers will implement the `main.go` function with the following structure: -* First, an [`encodingCodec`](/v0.47/learn/advanced/encoding) is instantiated for the application. -* Then, the `config` is retrieved and config parameters are set. This mainly involves setting the Bech32 prefixes for [addresses](/v0.47/learn/beginner/accounts#addresses). +- First, an [`encodingCodec`](/docs/sdk/v0.47/learn/advanced/encoding) is instantiated for the application. +- Then, the `config` is retrieved and config parameters are set. This mainly involves setting the Bech32 prefixes for [addresses](/docs/sdk/v0.47/learn/beginner/accounts#addresses). -types/config.go +```go expandable +package types -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/config.go#L14-L29) + "context" + "fmt" + "sync" + "github.com/cosmos/cosmos-sdk/version" +) -* Using [cobra](https://github.com/spf13/cobra), the root command of the full-node client is created. After that, all the custom commands of the application are added using the `AddCommand()` method of `rootCmd`. -* Add default server commands to `rootCmd` using the `server.AddCommands()` method. These commands are separated from the ones added above since they are standard and defined at Cosmos SDK level. They should be shared by all Cosmos SDK-based applications. They include the most important command: the [`start` command](#start-command). -* Prepare and execute the `executor`. +/ DefaultKeyringServiceName defines a default service name for the keyring. 
+const DefaultKeyringServiceName = "cosmos" -libs/cli/setup.go +/ Config is the structure that holds the SDK configuration parameters. +/ This could be used to initialize certain configuration parameters for the SDK. +type Config struct { + fullFundraiserPath string + bech32AddressPrefix map[string]string + txEncoder TxEncoder + addressVerifier func([]byte) -``` -loading... -``` +error + mtx sync.RWMutex -[See full example on GitHub](https://github.com/cometbft/cometbft/blob/v0.37.0/libs/cli/setup.go#L74-L78) + / SLIP-44 related + purpose uint32 + coinType uint32 -See an example of `main` function from the `simapp` application, the Cosmos SDK's application for demo purposes: + sealed bool + sealedch chan struct{ +} +} + +/ cosmos-sdk wide global singleton +var ( + sdkConfig *Config + initConfig sync.Once +) + +/ New returns a new Config with default values. +func NewConfig() *Config { + return &Config{ + sealedch: make(chan struct{ +}), + bech32AddressPrefix: map[string]string{ + "account_addr": Bech32PrefixAccAddr, + "validator_addr": Bech32PrefixValAddr, + "consensus_addr": Bech32PrefixConsAddr, + "account_pub": Bech32PrefixAccPub, + "validator_pub": Bech32PrefixValPub, + "consensus_pub": Bech32PrefixConsPub, +}, + fullFundraiserPath: FullFundraiserPath, + + purpose: Purpose, + coinType: CoinType, + txEncoder: nil, +} +} + +/ GetConfig returns the config instance for the SDK. +func GetConfig() *Config { + initConfig.Do(func() { + sdkConfig = NewConfig() +}) + +return sdkConfig +} + +/ GetSealedConfig returns the config instance for the SDK if/once it is sealed. 
+func GetSealedConfig(ctx context.Context) (*Config, error) { + config := GetConfig() + +select { + case <-config.sealedch: + return config, nil + case <-ctx.Done(): + return nil, ctx.Err() +} +} + +func (config *Config) + +assertNotSealed() { + config.mtx.Lock() + +defer config.mtx.Unlock() + if config.sealed { + panic("Config is sealed") +} +} + +/ SetBech32PrefixForAccount builds the Config with Bech32 addressPrefix and publKeyPrefix for accounts +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForAccount(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["account_addr"] = addressPrefix + config.bech32AddressPrefix["account_pub"] = pubKeyPrefix +} + +/ SetBech32PrefixForValidator builds the Config with Bech32 addressPrefix and publKeyPrefix for validators +/ +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForValidator(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["validator_addr"] = addressPrefix + config.bech32AddressPrefix["validator_pub"] = pubKeyPrefix +} + +/ SetBech32PrefixForConsensusNode builds the Config with Bech32 addressPrefix and publKeyPrefix for consensus nodes +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForConsensusNode(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["consensus_addr"] = addressPrefix + config.bech32AddressPrefix["consensus_pub"] = pubKeyPrefix +} + +/ SetTxEncoder builds the Config with TxEncoder used to marshal StdTx to bytes +func (config *Config) + +SetTxEncoder(encoder TxEncoder) { + config.assertNotSealed() + +config.txEncoder = encoder +} + +/ SetAddressVerifier builds the Config with the provided function for verifying that addresses +/ have the correct format +func (config *Config) + +SetAddressVerifier(addressVerifier func([]byte) + +error) { + config.assertNotSealed() + +config.addressVerifier = 
addressVerifier +} + +/ Set the FullFundraiserPath (BIP44Prefix) + +on the config. +/ +/ Deprecated: This method is supported for backward compatibility only and will be removed in a future release. Use SetPurpose and SetCoinType instead. +func (config *Config) + +SetFullFundraiserPath(fullFundraiserPath string) { + config.assertNotSealed() + +config.fullFundraiserPath = fullFundraiserPath +} + +/ Set the BIP-0044 Purpose code on the config +func (config *Config) + +SetPurpose(purpose uint32) { + config.assertNotSealed() + +config.purpose = purpose +} + +/ Set the BIP-0044 CoinType code on the config +func (config *Config) + +SetCoinType(coinType uint32) { + config.assertNotSealed() + +config.coinType = coinType +} + +/ Seal seals the config such that the config state could not be modified further +func (config *Config) + +Seal() *Config { + config.mtx.Lock() + if config.sealed { + config.mtx.Unlock() + +return config +} + + / signal sealed after state exposed/unlocked + config.sealed = true + config.mtx.Unlock() + +close(config.sealedch) + +return config +} + +/ GetBech32AccountAddrPrefix returns the Bech32 prefix for account address +func (config *Config) + +GetBech32AccountAddrPrefix() + +string { + return config.bech32AddressPrefix["account_addr"] +} + +/ GetBech32ValidatorAddrPrefix returns the Bech32 prefix for validator address +func (config *Config) + +GetBech32ValidatorAddrPrefix() + +string { + return config.bech32AddressPrefix["validator_addr"] +} + +/ GetBech32ConsensusAddrPrefix returns the Bech32 prefix for consensus node address +func (config *Config) + +GetBech32ConsensusAddrPrefix() + +string { + return config.bech32AddressPrefix["consensus_addr"] +} + +/ GetBech32AccountPubPrefix returns the Bech32 prefix for account public key +func (config *Config) + +GetBech32AccountPubPrefix() + +string { + return config.bech32AddressPrefix["account_pub"] +} -simapp/simd/main.go +/ GetBech32ValidatorPubPrefix returns the Bech32 prefix for validator public key 
+func (config *Config) +GetBech32ValidatorPubPrefix() + +string { + return config.bech32AddressPrefix["validator_pub"] +} + +/ GetBech32ConsensusPubPrefix returns the Bech32 prefix for consensus node public key +func (config *Config) + +GetBech32ConsensusPubPrefix() + +string { + return config.bech32AddressPrefix["consensus_pub"] +} + +/ GetTxEncoder return function to encode transactions +func (config *Config) + +GetTxEncoder() + +TxEncoder { + return config.txEncoder +} + +/ GetAddressVerifier returns the function to verify that addresses have the correct format +func (config *Config) + +GetAddressVerifier() + +func([]byte) + +error { + return config.addressVerifier +} + +/ GetPurpose returns the BIP-0044 Purpose code on the config. +func (config *Config) + +GetPurpose() + +uint32 { + return config.purpose +} + +/ GetCoinType returns the BIP-0044 CoinType code on the config. +func (config *Config) + +GetCoinType() + +uint32 { + return config.coinType +} + +/ GetFullFundraiserPath returns the BIP44Prefix. +/ +/ Deprecated: This method is supported for backward compatibility only and will be removed in a future release. Use GetFullBIP44Path instead. +func (config *Config) + +GetFullFundraiserPath() + +string { + return config.fullFundraiserPath +} + +/ GetFullBIP44Path returns the BIP44Prefix. +func (config *Config) + +GetFullBIP44Path() + +string { + return fmt.Sprintf("m/%d'/%d'/0'/0/0", config.purpose, config.coinType) +} + +func KeyringServiceName() + +string { + if len(version.Name) == 0 { + return DefaultKeyringServiceName +} + +return version.Name +} ``` -loading... + +- Using [cobra](https://github.com/spf13/cobra), the root command of the full-node client is created. After that, all the custom commands of the application are added using the `AddCommand()` method of `rootCmd`. +- Add default server commands to `rootCmd` using the `server.AddCommands()` method. 
These commands are separated from the ones added above since they are standard and defined at Cosmos SDK level. They should be shared by all Cosmos SDK-based applications. They include the most important command: the [`start` command](#start-command). +- Prepare and execute the `executor`. + +```go expandable +package cli + +import ( + + "fmt" + "os" + "path/filepath" + "runtime" + "strings" + "github.com/spf13/cobra" + "github.com/spf13/viper" +) + +const ( + HomeFlag = "home" + TraceFlag = "trace" + OutputFlag = "output" + EncodingFlag = "encoding" +) + +/ Executable is the minimal interface to *corba.Command, so we can +/ wrap if desired before the test +type Executable interface { + Execute() + +error +} + +/ PrepareBaseCmd is meant for CometBFT and other servers +func PrepareBaseCmd(cmd *cobra.Command, envPrefix, defaultHome string) + +Executor { + cobra.OnInitialize(func() { + initEnv(envPrefix) +}) + +cmd.PersistentFlags().StringP(HomeFlag, "", defaultHome, "directory for config and data") + +cmd.PersistentFlags().Bool(TraceFlag, false, "print out full stack trace on errors") + +cmd.PersistentPreRunE = concatCobraCmdFuncs(bindFlagsLoadViper, cmd.PersistentPreRunE) + +return Executor{ + cmd, os.Exit +} +} + +/ PrepareMainCmd is meant for client side libs that want some more flags +/ +/ This adds --encoding (hex, btc, base64) + +and --output (text, json) + +to +/ the command. These only really make sense in interactive commands. +func PrepareMainCmd(cmd *cobra.Command, envPrefix, defaultHome string) + +Executor { + cmd.PersistentFlags().StringP(EncodingFlag, "e", "hex", "Binary encoding (hex|b64|btc)") + +cmd.PersistentFlags().StringP(OutputFlag, "o", "text", "Output format (text|json)") + +cmd.PersistentPreRunE = concatCobraCmdFuncs(validateOutput, cmd.PersistentPreRunE) + +return PrepareBaseCmd(cmd, envPrefix, defaultHome) +} + +/ initEnv sets to use ENV variables if set. 
+func initEnv(prefix string) { + copyEnvVars(prefix) + + / env variables with TM prefix (eg. TM_ROOT) + +viper.SetEnvPrefix(prefix) + +viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_")) + +viper.AutomaticEnv() +} + +/ This copies all variables like TMROOT to TM_ROOT, +/ so we can support both formats for the user +func copyEnvVars(prefix string) { + prefix = strings.ToUpper(prefix) + ps := prefix + "_" + for _, e := range os.Environ() { + kv := strings.SplitN(e, "=", 2) + if len(kv) == 2 { + k, v := kv[0], kv[1] + if strings.HasPrefix(k, prefix) && !strings.HasPrefix(k, ps) { + k2 := strings.Replace(k, prefix, ps, 1) + +os.Setenv(k2, v) +} + +} + +} +} + +/ Executor wraps the cobra Command with a nicer Execute method +type Executor struct { + *cobra.Command + Exit func(int) / this is os.Exit by default, override in tests +} + +type ExitCoder interface { + ExitCode() + +int +} + +/ execute adds all child commands to the root command sets flags appropriately. +/ This is called by main.main(). It only needs to happen once to the rootCmd. +func (e Executor) + +Execute() + +error { + e.SilenceUsage = true + e.SilenceErrors = true + err := e.Command.Execute() + if err != nil { + if viper.GetBool(TraceFlag) { + const size = 64 << 10 + buf := make([]byte, size) + +buf = buf[:runtime.Stack(buf, false)] + fmt.Fprintf(os.Stderr, "ERROR: %v\n%s\n", err, buf) +} + +else { + fmt.Fprintf(os.Stderr, "ERROR: %v\n", err) +} + + / return error code 1 by default, can override it with a special error type + exitCode := 1 + if ec, ok := err.(ExitCoder); ok { + exitCode = ec.ExitCode() +} + +e.Exit(exitCode) +} + +return err +} + +type cobraCmdFunc func(cmd *cobra.Command, args []string) + +error + +/ Returns a single function that calls each argument function in sequence +/ RunE, PreRunE, PersistentPreRunE, etc. 
all have this same signature +func concatCobraCmdFuncs(fs ...cobraCmdFunc) + +cobraCmdFunc { + return func(cmd *cobra.Command, args []string) + +error { + for _, f := range fs { + if f != nil { + if err := f(cmd, args); err != nil { + return err +} + +} + +} + +return nil +} +} + +/ Bind all flags and read the config into viper +func bindFlagsLoadViper(cmd *cobra.Command, args []string) + +error { + / cmd.Flags() + +includes flags from this command and all persistent flags from the parent + if err := viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + homeDir := viper.GetString(HomeFlag) + +viper.Set(HomeFlag, homeDir) + +viper.SetConfigName("config") / name of config file (without extension) + +viper.AddConfigPath(homeDir) / search root directory + viper.AddConfigPath(filepath.Join(homeDir, "config")) / search root directory /config + + / If a config file is found, read it in. + if err := viper.ReadInConfig(); err == nil { + / stderr, so if we redirect output to json file, this doesn't appear + / fmt.Fprintln(os.Stderr, "Using config file:", viper.ConfigFileUsed()) +} + +else if _, ok := err.(viper.ConfigFileNotFoundError); !ok { + / ignore not found error, return other errors + return err +} + +return nil +} + +func validateOutput(cmd *cobra.Command, args []string) + +error { + / validate output format + output := viper.GetString(OutputFlag) + switch output { + case "text", "json": + default: + return fmt.Errorf("unsupported output format: %s", output) +} + +return nil +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/main.go) +See an example of the `main` function from `simapp`, the Cosmos SDK's demo application: -## `start` command[​](#start-command "Direct link to start-command") +```go expandable +package main -The `start` command is defined in the `/server` folder of the Cosmos SDK.
It is added to the root command of the full-node client in the [`main` function](#main-function) and called by the end-user to start their node: +import ( + "os" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/simd/cmd" + "github.com/cosmos/cosmos-sdk/server" + svrcmd "github.com/cosmos/cosmos-sdk/server/cmd" +) + +func main() { + rootCmd := cmd.NewRootCmd() + if err := svrcmd.Execute(rootCmd, "", simapp.DefaultNodeHome); err != nil { + switch e := err.(type) { + case server.ErrorCode: + os.Exit(e.Code) + +default: + os.Exit(1) +} + +} +} ``` -# For an example app named "app", the following command starts the full-node.appd start# Using the Cosmos SDK's own simapp, the following commands start the simapp node.simd start + +## `start` command + +The `start` command is defined in the `/server` folder of the Cosmos SDK. It is added to the root command of the full-node client in the [`main` function](#main-function) and called by the end-user to start their node: + +```bash +# For an example app named "app", the following command starts the full-node. +appd start + +# Using the Cosmos SDK's own simapp, the following commands start the simapp node. +simd start ``` As a reminder, the full-node is composed of three conceptual layers: the networking layer, the consensus layer and the application layer. The first two are generally bundled together in an entity called the consensus engine (CometBFT by default), while the third is the state-machine defined with the help of the Cosmos SDK. Currently, the Cosmos SDK uses CometBFT as the default consensus engine, meaning the start command is implemented to boot up a CometBFT node. @@ -66,58 +618,2253 @@ The flow of the `start` command is pretty straightforward. First, it retrieves t With the `db`, the `start` command creates a new instance of the application using an `appCreator` function: -server/start.go +```go expandable +package server -``` -loading... 
-``` +/ DONTCOVER -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server/start.go#L220) +import ( -Note that an `appCreator` is a function that fulfills the `AppCreator` signature: + "errors" + "fmt" + "net" + "net/http" + "os" + "runtime/pprof" + "time" + "github.com/spf13/cobra" + "github.com/tendermint/tendermint/abci/server" + tcmd "github.com/tendermint/tendermint/cmd/tendermint/commands" + "github.com/tendermint/tendermint/node" + "github.com/tendermint/tendermint/p2p" + pvm "github.com/tendermint/tendermint/privval" + "github.com/tendermint/tendermint/proxy" + "github.com/tendermint/tendermint/rpc/client/local" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + "cosmossdk.io/tools/rosetta" + crgserver "cosmossdk.io/tools/rosetta/lib/server" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + "github.com/cosmos/cosmos-sdk/server/types" + pruningtypes "github.com/cosmos/cosmos-sdk/store/pruning/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) -server/types/app.go +const ( + / Tendermint full-node start flags + flagWithTendermint = "with-tendermint" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" -``` -loading... 
-``` + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server/types/app.go#L64-L66) + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" -In practice, the [constructor of the application](/v0.47/learn/beginner/overview-app#constructor-function) is passed as the `appCreator`. + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" -simapp/simd/cmd/root.go + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + flagGRPCWebAddress = "grpc-web.address" -``` -loading... -``` + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/simd/cmd/root.go#L254-L268) +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ Tendermint. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with Tendermint in or out of process. By +default, the application will run with Tendermint in process. 
-Then, the instance of `app` is used to instantiate a new CometBFT node: +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. -server/start.go +For '--pruning' the options are as follows: -``` -loading... -``` +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server/start.go#L336-L348) +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' -The CometBFT node can be created with `app` because the latter satisfies the [`abci.Application` interface](https://github.com/cometbft/cometbft/blob/v0.37.0/abci/types/application.go#L9-L35) (given that `app` extends [`baseapp`](/v0.47/learn/advanced/baseapp)). As part of the `node.New` method, CometBFT makes sure that the height of the application (i.e. number of blocks since genesis) is equal to the height of the CometBFT node. The difference between these two heights should always be negative or null. If it is strictly negative, `node.New` will replay blocks until the height of the application reaches the height of the CometBFT node. Finally, if the height of the application is `0`, the CometBFT node will call [`InitChain`](/v0.47/learn/advanced/baseapp#initchain) on the application to initialize the state from the genesis file. +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. 
In addition, the node +will not be able to commit subsequent blocks. -Once the CometBFT node is instantiated and in sync with the application, the node can be started: +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. -server/start.go +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, Tendermint is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. +`, + PreRunE: func(cmd *cobra.Command, _ []string) -``` -loading... -``` +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. + if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withTM, _ := cmd.Flags().GetBool(flagWithTendermint) + if !withTM { + serverCtx.Logger.Info("starting ABCI without Tendermint") + +return startStandAlone(serverCtx, appCreator) +} + + / amino is needed here for backwards compatibility of REST routes + err = startInProcess(serverCtx, clientCtx, appCreator) + +errCode, ok := err.(ErrorCode) + if !ok { + return err +} + +serverCtx.Logger.Debug(fmt.Sprintf("received quit signal: %d", errCode.Code)) + +return nil +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithTendermint, true, "Run abci app embedded in-process with tendermint") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + 
+cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune Tendermint blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define 
the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the Tendermint RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the Tendermint RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the Tendermint maximum response body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no Tendermint process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().String(flagGRPCWebAddress, serverconfig.DefaultGRPCWebAddress, "The gRPC-Web server address to listen on") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / add support for all Tendermint-specific command line options + tcmd.AddNodeFlags(cmd) + +return cmd +} + +func startStandAlone(ctx *Context, appCreator types.AppCreator) + +error { + addr := ctx.Viper.GetString(flagAddress) + transport := ctx.Viper.GetString(flagTransport) + home := ctx.Viper.GetString(flags.FlagHome) + +db, err := openDB(home, GetAppDBBackend(ctx.Viper)) + if err != nil { + return err +} + traceWriterFile := ctx.Viper.GetString(flagTraceStore) + +traceWriter, err := openTraceWriter(traceWriterFile) + if err != nil { + return err +} + app := 
appCreator(ctx.Logger, db, traceWriter, ctx.Viper) + +config, err := serverconfig.GetConfig(ctx.Viper) + if err != nil { + return err +} + + _, err = startTelemetry(config) + if err != nil { + return err +} + +svr, err := server.NewServer(addr, transport, app) + if err != nil { + return fmt.Errorf("error creating listener: %v", err) +} + +svr.SetLogger(ctx.Logger.With("module", "abci-server")) + +err = svr.Start() + if err != nil { + fmt.Println(err.Error()) + +os.Exit(1) +} + +defer func() { + if err = svr.Stop(); err != nil { + fmt.Println(err.Error()) + +os.Exit(1) +} + +}() + + / Wait for SIGINT or SIGTERM signal + return WaitForQuitSignals() +} + +func startInProcess(ctx *Context, clientCtx client.Context, appCreator types.AppCreator) + +error { + cfg := ctx.Config + home := cfg.RootDir + var cpuProfileCleanup func() + if cpuProfile := ctx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +ctx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +cpuProfileCleanup = func() { + ctx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + ctx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +} + +} + +db, err := openDB(home, GetAppDBBackend(ctx.Viper)) + if err != nil { + return err +} + traceWriterFile := ctx.Viper.GetString(flagTraceStore) + +traceWriter, err := openTraceWriter(traceWriterFile) + if err != nil { + return err +} + + / Clean up the traceWriter in the cpuProfileCleanup routine that is invoked + / when the server is shutting down. 
+ fn := cpuProfileCleanup + cpuProfileCleanup = func() { + if fn != nil { + fn() +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + if err = traceWriter.Close(); err != nil { + ctx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +config, err := serverconfig.GetConfig(ctx.Viper) + if err != nil { + return err +} + if err := config.ValidateBasic(); err != nil { + return err +} + app := appCreator(ctx.Logger, db, traceWriter, ctx.Viper) + +nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return err +} + genDocProvider := node.DefaultGenesisDocProviderFunc(cfg) + +var ( + tmNode *node.Node + gRPCOnly = ctx.Viper.GetBool(flagGRPCOnly) + ) + if gRPCOnly { + ctx.Logger.Info("starting node in gRPC only mode; Tendermint is disabled") + +config.GRPC.Enable = true +} + +else { + ctx.Logger.Info("starting node with ABCI Tendermint in-process") + +tmNode, err = node.NewNode( + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(app), + genDocProvider, + node.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + ctx.Logger, + ) + if err != nil { + return err +} + if err := tmNode.Start(); err != nil { + return err +} + +} + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local tendermint RPC client. 
+ if (config.API.Enable || config.GRPC.Enable) && tmNode != nil { + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(local.New(tmNode)) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server/start.go#L350-L352) +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx) +} + +metrics, err := startTelemetry(config) + if err != nil { + return err +} + +var apiSrv *api.Server + if config.API.Enable { + genDoc, err := genDocProvider() + if err != nil { + return err +} + clientCtx := clientCtx.WithHomeDir(home).WithChainID(genDoc.ChainID) + if config.GRPC.Enable { + _, port, err := net.SplitHostPort(config.GRPC.Address) + if err != nil { + return err +} + maxSendMsgSize := config.GRPC.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.GRPC.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / If grpc is enabled, configure grpc client for grpc gateway. 
+ grpcClient, err := grpc.Dial( + grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +ctx.Logger.Debug("grpc client assigned to client context", "target", grpcAddress) +} + +apiSrv = api.New(clientCtx, ctx.Logger.With("module", "api-server")) + +app.RegisterAPIRoutes(apiSrv, config.API) + if config.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + errCh := make(chan error) + +go func() { + if err := apiSrv.Start(config); err != nil { + errCh <- err +} + +}() + +select { + case err := <-errCh: + return err + case <-time.After(types.ServerStartTime): / assume server started successfully +} + +} + +var ( + grpcSrv *grpc.Server + grpcWebSrv *http.Server + ) + if config.GRPC.Enable { + grpcSrv, err = servergrpc.StartGRPCServer(clientCtx, app, config.GRPC) + if err != nil { + return err +} + +defer grpcSrv.Stop() + if config.GRPCWeb.Enable { + grpcWebSrv, err = servergrpc.StartGRPCWeb(grpcSrv, config) + if err != nil { + ctx.Logger.Error("failed to start grpc-web http server: ", err) + +return err +} + +defer func() { + if err := grpcWebSrv.Close(); err != nil { + ctx.Logger.Error("failed to close grpc-web http server: ", err) +} + +}() +} + +} + + / At this point it is safe to block the process if we're in gRPC only mode as + / we do not need to start Rosetta or handle any Tendermint related processes. + if gRPCOnly { + / wait for signal capture and gracefully return + return WaitForQuitSignals() +} + +var rosettaSrv crgserver.Server + if config.Rosetta.Enable { + offlineMode := config.Rosetta.Offline + + / If GRPC is not enabled rosetta cannot work in online mode, so we throw an error. 
+ if !config.GRPC.Enable && !offlineMode { + return errors.New("'grpc' must be enable in online mode for Rosetta to work") +} + +minGasPrices, err := sdk.ParseDecCoins(config.MinGasPrices) + if err != nil { + ctx.Logger.Error("failed to parse minimum-gas-prices: ", err) + +return err +} + conf := &rosetta.Config{ + Blockchain: config.Rosetta.Blockchain, + Network: config.Rosetta.Network, + TendermintRPC: ctx.Config.RPC.ListenAddress, + GRPCEndpoint: config.GRPC.Address, + Addr: config.Rosetta.Address, + Retries: config.Rosetta.Retries, + Offline: offlineMode, + GasToSuggest: config.Rosetta.GasToSuggest, + EnableFeeSuggestion: config.Rosetta.EnableFeeSuggestion, + GasPrices: minGasPrices.Sort(), + Codec: clientCtx.Codec.(*codec.ProtoCodec), + InterfaceRegistry: clientCtx.InterfaceRegistry, +} + +rosettaSrv, err = rosetta.ServerFromConfig(conf) + if err != nil { + return err +} + errCh := make(chan error) + +go func() { + if err := rosettaSrv.Start(); err != nil { + errCh <- err +} + +}() + +select { + case err := <-errCh: + return err + case <-time.After(types.ServerStartTime): / assume server started successfully +} + +} + +defer func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + if cpuProfileCleanup != nil { + cpuProfileCleanup() +} + if apiSrv != nil { + _ = apiSrv.Close() +} + +ctx.Logger.Info("exiting...") +}() + + / wait for signal capture and gracefully return + return WaitForQuitSignals() +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} +``` + +Note that an `appCreator` is a function that fulfills the `AppCreator` signature: + +```go expandable +package types + +import ( + + "encoding/json" + "io" + "time" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + tmproto 
"github.com/tendermint/tendermint/proto/tendermint/types" + tmtypes "github.com/tendermint/tendermint/types" + dbm "github.com/tendermint/tm-db" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ ServerStartTime defines the time duration that the server need to stay running after startup +/ for the startup be considered successful +const ServerStartTime = 5 * time.Second + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +interface{ +} + +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. + Application interface { + abci.Application + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServer registers gRPC services directly with the gRPC + / server. + RegisterGRPCServer(grpc.Server) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). + RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for tendermint queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. 
+ RegisterNodeService(client.Context) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +sdk.CommitMultiStore +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []tmtypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams *tmproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. + AppExporter func(log.Logger, dbm.DB, io.Writer, int64, bool, []string, AppOptions, []string) (ExportedApp, error) +) +``` + +In practice, the [constructor of the application](/docs/sdk/v0.47/learn/beginner/overview-app#constructor-function) is passed as the `appCreator`. 
+ +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/spf13/cobra" + "github.com/spf13/viper" + tmcfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. +func NewRootCmd() *cobra.Command { + / we "pre"-instantiate the application for getting the injected/configured encoding configuration + tempApp := simapp.NewSimApp(log.NewNopLogger(), dbm.NewMemDB(), nil, true, simtestutil.NewAppOptionsWithFlagHome(simapp.DefaultNodeHome)) + encodingConfig := params.EncodingConfig{ + InterfaceRegistry: tempApp.InterfaceRegistry(), + Codec: tempApp.AppCodec(), + TxConfig: tempApp.TxConfig(), + Amino: tempApp.LegacyAmino(), +} + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). 
+ WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. +func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(simapp.ModuleBasics, simapp.DefaultNodeHome), + NewTestnetCmd(simapp.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + pruning.PruningCmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(encodingConfig), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
/ Users may provide application-specific commands as a parameter.
func genesisCommand(encodingConfig params.EncodingConfig, cmds ...*cobra.Command) *cobra.Command {
	cmd := genutilcli.GenesisCoreCommand(encodingConfig.TxConfig, simapp.ModuleBasics, simapp.DefaultNodeHome)
	for _, subCmd := range cmds {
		cmd.AddCommand(subCmd)
	}

	return cmd
}

func queryCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:                        "query",
		Aliases:                    []string{"q"},
		Short:                      "Querying subcommands",
		DisableFlagParsing:         false,
		SuggestionsMinimumDistance: 2,
		RunE:                       client.ValidateCmd,
	}

	cmd.AddCommand(
		authcmd.GetAccountCmd(),
		rpc.ValidatorCommand(),
		rpc.BlockCommand(),
		authcmd.QueryTxsByEventsCmd(),
		authcmd.QueryTxCmd(),
	)

	simapp.ModuleBasics.AddQueryCommands(cmd)

	return cmd
}

func txCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:                        "tx",
		Short:                      "Transactions subcommands",
		DisableFlagParsing:         false,
		SuggestionsMinimumDistance: 2,
		RunE:                       client.ValidateCmd,
	}

	cmd.AddCommand(
		authcmd.GetSignCommand(),
		authcmd.GetSignBatchCommand(),
		authcmd.GetMultiSignCommand(),
		authcmd.GetMultiSignBatchCmd(),
		authcmd.GetValidateSignaturesCommand(),
		authcmd.GetBroadcastCommand(),
		authcmd.GetEncodeCommand(),
		authcmd.GetDecodeCommand(),
		authcmd.GetAuxToFeeCommand(),
	)

	simapp.ModuleBasics.AddTxCommands(cmd)

	return cmd
}

/ newApp creates the application
func newApp(
	logger log.Logger,
	db dbm.DB,
	traceStore io.Writer,
	appOpts servertypes.AppOptions,
) servertypes.Application {
	baseappOptions := server.DefaultBaseappOptions(appOpts)

	return simapp.NewSimApp(
		logger, db, traceStore, true,
		appOpts,
		baseappOptions...,
	)
}

/ appExport creates a new simapp (optionally at a given height) and exports state.
func appExport(
	logger log.Logger,
	db dbm.DB,
	traceStore io.Writer,
	height int64,
	forZeroHeight bool,
	jailAllowedAddrs []string,
	appOpts servertypes.AppOptions,
	modulesToExport []string,
) (servertypes.ExportedApp, error) {
	var simApp *simapp.SimApp

	/ this check is necessary as we use the flag in x/upgrade.
	/ we can exit more gracefully by checking the flag here.
	homePath, ok := appOpts.Get(flags.FlagHome).(string)
	if !ok || homePath == "" {
		return servertypes.ExportedApp{}, errors.New("application home not set")
	}

	viperAppOpts, ok := appOpts.(*viper.Viper)
	if !ok {
		return servertypes.ExportedApp{}, errors.New("appOpts is not viper.Viper")
	}

	/ overwrite the FlagInvCheckPeriod
	viperAppOpts.Set(server.FlagInvCheckPeriod, 1)
	appOpts = viperAppOpts

	if height != -1 {
		simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts)
		if err := simApp.LoadHeight(height); err != nil {
			return servertypes.ExportedApp{}, err
		}
	} else {
		simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts)
	}

	return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport)
}
```

Then, the instance of `app` is used to instantiate a new CometBFT node:

```go expandable
package server

/ DONTCOVER

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"os"
	"runtime/pprof"
	"time"

	"github.com/spf13/cobra"
	"github.com/tendermint/tendermint/abci/server"
	tcmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/node"
	"github.com/tendermint/tendermint/p2p"
	pvm "github.com/tendermint/tendermint/privval"
	"github.com/tendermint/tendermint/proxy"
	"github.com/tendermint/tendermint/rpc/client/local"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"cosmossdk.io/tools/rosetta"
	crgserver "cosmossdk.io/tools/rosetta/lib/server"

	"github.com/cosmos/cosmos-sdk/client"
"github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + "github.com/cosmos/cosmos-sdk/server/types" + pruningtypes "github.com/cosmos/cosmos-sdk/store/pruning/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +const ( + / Tendermint full-node start flags + flagWithTendermint = "with-tendermint" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + flagGRPCWebAddress = "grpc-web.address" 
+ + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ Tendermint. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with Tendermint in or out of process. By +default, the application will run with Tendermint in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. + +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, Tendermint is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + PreRunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. + if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withTM, _ := cmd.Flags().GetBool(flagWithTendermint) + if !withTM { + serverCtx.Logger.Info("starting ABCI without Tendermint") + +return startStandAlone(serverCtx, appCreator) +} + + / amino is needed here for backwards compatibility of REST routes + err = startInProcess(serverCtx, clientCtx, appCreator) + +errCode, ok := err.(ErrorCode) + if !ok { + return err +} + +serverCtx.Logger.Debug(fmt.Sprintf("received quit signal: %d", errCode.Code)) + +return nil +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithTendermint, true, "Run abci app embedded in-process with tendermint") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 
0.01photino;0.0001stake)") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune Tendermint blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the Tendermint RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the Tendermint RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the 
Tendermint maximum response body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no Tendermint process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().String(flagGRPCWebAddress, serverconfig.DefaultGRPCWebAddress, "The gRPC-Web server address to listen on") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / add support for all Tendermint-specific command line options + tcmd.AddNodeFlags(cmd) + +return cmd +} + +func startStandAlone(ctx *Context, appCreator types.AppCreator) + +error { + addr := ctx.Viper.GetString(flagAddress) + transport := ctx.Viper.GetString(flagTransport) + home := ctx.Viper.GetString(flags.FlagHome) + +db, err := openDB(home, GetAppDBBackend(ctx.Viper)) + if err != nil { + return err +} + traceWriterFile := ctx.Viper.GetString(flagTraceStore) + +traceWriter, err := openTraceWriter(traceWriterFile) + if err != nil { + return err +} + app := appCreator(ctx.Logger, db, traceWriter, ctx.Viper) + +config, err := serverconfig.GetConfig(ctx.Viper) + if err != nil { + return err +} + + _, err = startTelemetry(config) + if err != nil { + return err +} + +svr, err := server.NewServer(addr, transport, app) + if err != nil { + return fmt.Errorf("error creating 
listener: %v", err) +} + +svr.SetLogger(ctx.Logger.With("module", "abci-server")) + +err = svr.Start() + if err != nil { + fmt.Println(err.Error()) + +os.Exit(1) +} + +defer func() { + if err = svr.Stop(); err != nil { + fmt.Println(err.Error()) + +os.Exit(1) +} + +}() + + / Wait for SIGINT or SIGTERM signal + return WaitForQuitSignals() +} + +func startInProcess(ctx *Context, clientCtx client.Context, appCreator types.AppCreator) + +error { + cfg := ctx.Config + home := cfg.RootDir + var cpuProfileCleanup func() + if cpuProfile := ctx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +ctx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +cpuProfileCleanup = func() { + ctx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + ctx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +} + +} + +db, err := openDB(home, GetAppDBBackend(ctx.Viper)) + if err != nil { + return err +} + traceWriterFile := ctx.Viper.GetString(flagTraceStore) + +traceWriter, err := openTraceWriter(traceWriterFile) + if err != nil { + return err +} + + / Clean up the traceWriter in the cpuProfileCleanup routine that is invoked + / when the server is shutting down. 
+ fn := cpuProfileCleanup + cpuProfileCleanup = func() { + if fn != nil { + fn() +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + if err = traceWriter.Close(); err != nil { + ctx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +config, err := serverconfig.GetConfig(ctx.Viper) + if err != nil { + return err +} + if err := config.ValidateBasic(); err != nil { + return err +} + app := appCreator(ctx.Logger, db, traceWriter, ctx.Viper) + +nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return err +} + genDocProvider := node.DefaultGenesisDocProviderFunc(cfg) + +var ( + tmNode *node.Node + gRPCOnly = ctx.Viper.GetBool(flagGRPCOnly) + ) + if gRPCOnly { + ctx.Logger.Info("starting node in gRPC only mode; Tendermint is disabled") + +config.GRPC.Enable = true +} + +else { + ctx.Logger.Info("starting node with ABCI Tendermint in-process") + +tmNode, err = node.NewNode( + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(app), + genDocProvider, + node.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + ctx.Logger, + ) + if err != nil { + return err +} + if err := tmNode.Start(); err != nil { + return err +} + +} + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local tendermint RPC client. 
+ if (config.API.Enable || config.GRPC.Enable) && tmNode != nil { + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx) +} + +metrics, err := startTelemetry(config) + if err != nil { + return err +} + +var apiSrv *api.Server + if config.API.Enable { + genDoc, err := genDocProvider() + if err != nil { + return err +} + clientCtx := clientCtx.WithHomeDir(home).WithChainID(genDoc.ChainID) + if config.GRPC.Enable { + _, port, err := net.SplitHostPort(config.GRPC.Address) + if err != nil { + return err +} + maxSendMsgSize := config.GRPC.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.GRPC.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / If grpc is enabled, configure grpc client for grpc gateway. 
+ grpcClient, err := grpc.Dial( + grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +ctx.Logger.Debug("grpc client assigned to client context", "target", grpcAddress) +} + +apiSrv = api.New(clientCtx, ctx.Logger.With("module", "api-server")) + +app.RegisterAPIRoutes(apiSrv, config.API) + if config.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + errCh := make(chan error) + +go func() { + if err := apiSrv.Start(config); err != nil { + errCh <- err +} + +}() + +select { + case err := <-errCh: + return err + case <-time.After(types.ServerStartTime): / assume server started successfully +} + +} + +var ( + grpcSrv *grpc.Server + grpcWebSrv *http.Server + ) + if config.GRPC.Enable { + grpcSrv, err = servergrpc.StartGRPCServer(clientCtx, app, config.GRPC) + if err != nil { + return err +} + +defer grpcSrv.Stop() + if config.GRPCWeb.Enable { + grpcWebSrv, err = servergrpc.StartGRPCWeb(grpcSrv, config) + if err != nil { + ctx.Logger.Error("failed to start grpc-web http server: ", err) + +return err +} + +defer func() { + if err := grpcWebSrv.Close(); err != nil { + ctx.Logger.Error("failed to close grpc-web http server: ", err) +} + +}() +} + +} + + / At this point it is safe to block the process if we're in gRPC only mode as + / we do not need to start Rosetta or handle any Tendermint related processes. + if gRPCOnly { + / wait for signal capture and gracefully return + return WaitForQuitSignals() +} + +var rosettaSrv crgserver.Server + if config.Rosetta.Enable { + offlineMode := config.Rosetta.Offline + + / If GRPC is not enabled rosetta cannot work in online mode, so we throw an error. 
+ if !config.GRPC.Enable && !offlineMode { + return errors.New("'grpc' must be enable in online mode for Rosetta to work") +} + +minGasPrices, err := sdk.ParseDecCoins(config.MinGasPrices) + if err != nil { + ctx.Logger.Error("failed to parse minimum-gas-prices: ", err) + +return err +} + conf := &rosetta.Config{ + Blockchain: config.Rosetta.Blockchain, + Network: config.Rosetta.Network, + TendermintRPC: ctx.Config.RPC.ListenAddress, + GRPCEndpoint: config.GRPC.Address, + Addr: config.Rosetta.Address, + Retries: config.Rosetta.Retries, + Offline: offlineMode, + GasToSuggest: config.Rosetta.GasToSuggest, + EnableFeeSuggestion: config.Rosetta.EnableFeeSuggestion, + GasPrices: minGasPrices.Sort(), + Codec: clientCtx.Codec.(*codec.ProtoCodec), + InterfaceRegistry: clientCtx.InterfaceRegistry, +} + +rosettaSrv, err = rosetta.ServerFromConfig(conf) + if err != nil { + return err +} + errCh := make(chan error) + +go func() { + if err := rosettaSrv.Start(); err != nil { + errCh <- err +} + +}() + +select { + case err := <-errCh: + return err + case <-time.After(types.ServerStartTime): / assume server started successfully +} + +} + +defer func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + if cpuProfileCleanup != nil { + cpuProfileCleanup() +} + if apiSrv != nil { + _ = apiSrv.Close() +} + +ctx.Logger.Info("exiting...") +}() + + / wait for signal capture and gracefully return + return WaitForQuitSignals() +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} +``` + +The CometBFT node can be created with `app` because the latter satisfies the [`abci.Application` interface](https://github.com/cometbft/cometbft/blob/v0.37.0/abci/types/application.go#L9-L35) (given that `app` extends [`baseapp`](/docs/sdk/v0.47/learn/advanced/baseapp)). As part of the `node.New` method, CometBFT makes sure that the height of the application (i.e. 
number of blocks since genesis) is equal to the height of the CometBFT node. The difference between these two heights should always be negative or zero. If it is strictly negative, `node.New` will replay blocks until the height of the application reaches the height of the CometBFT node. Finally, if the height of the application is `0`, the CometBFT node will call [`InitChain`](/docs/sdk/v0.47/learn/advanced/baseapp#initchain) on the application to initialize the state from the genesis file.

Once the CometBFT node is instantiated and in sync with the application, the node can be started:

```go expandable
package server

/ DONTCOVER

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"os"
	"runtime/pprof"
	"time"

	"github.com/spf13/cobra"
	"github.com/tendermint/tendermint/abci/server"
	tcmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/node"
	"github.com/tendermint/tendermint/p2p"
	pvm "github.com/tendermint/tendermint/privval"
	"github.com/tendermint/tendermint/proxy"
	"github.com/tendermint/tendermint/rpc/client/local"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"cosmossdk.io/tools/rosetta"
	crgserver "cosmossdk.io/tools/rosetta/lib/server"

	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/client/flags"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/server/api"
	serverconfig "github.com/cosmos/cosmos-sdk/server/config"
	servergrpc "github.com/cosmos/cosmos-sdk/server/grpc"
	"github.com/cosmos/cosmos-sdk/server/types"
	pruningtypes "github.com/cosmos/cosmos-sdk/store/pruning/types"
	"github.com/cosmos/cosmos-sdk/telemetry"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/mempool"
)

const (
	/ Tendermint full-node start flags
	flagWithTendermint = "with-tendermint"
	flagAddress        = "address"
	flagTransport      = "transport"
	flagTraceStore     = "trace-store"
	flagCPUProfile     =
"cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + flagGRPCWebAddress = "grpc-web.address" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ Tendermint. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with Tendermint in or out of process. By +default, the application will run with Tendermint in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. 
+ +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, Tendermint is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. +`, + PreRunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. 
+ if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withTM, _ := cmd.Flags().GetBool(flagWithTendermint) + if !withTM { + serverCtx.Logger.Info("starting ABCI without Tendermint") + +return startStandAlone(serverCtx, appCreator) +} + + / amino is needed here for backwards compatibility of REST routes + err = startInProcess(serverCtx, clientCtx, appCreator) + +errCode, ok := err.(ErrorCode) + if !ok { + return err +} + +serverCtx.Logger.Debug(fmt.Sprintf("received quit signal: %d", errCode.Code)) + +return nil +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithTendermint, true, "Run abci app embedded in-process with tendermint") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 
0.01photino;0.0001stake)") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune Tendermint blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the Tendermint RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the Tendermint RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the 
Tendermint maximum response body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no Tendermint process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().String(flagGRPCWebAddress, serverconfig.DefaultGRPCWebAddress, "The gRPC-Web server address to listen on") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / add support for all Tendermint-specific command line options + tcmd.AddNodeFlags(cmd) + +return cmd +} + +func startStandAlone(ctx *Context, appCreator types.AppCreator) + +error { + addr := ctx.Viper.GetString(flagAddress) + transport := ctx.Viper.GetString(flagTransport) + home := ctx.Viper.GetString(flags.FlagHome) + +db, err := openDB(home, GetAppDBBackend(ctx.Viper)) + if err != nil { + return err +} + traceWriterFile := ctx.Viper.GetString(flagTraceStore) + +traceWriter, err := openTraceWriter(traceWriterFile) + if err != nil { + return err +} + app := appCreator(ctx.Logger, db, traceWriter, ctx.Viper) + +config, err := serverconfig.GetConfig(ctx.Viper) + if err != nil { + return err +} + + _, err = startTelemetry(config) + if err != nil { + return err +} + +svr, err := server.NewServer(addr, transport, app) + if err != nil { + return fmt.Errorf("error creating 
listener: %v", err) +} + +svr.SetLogger(ctx.Logger.With("module", "abci-server")) + +err = svr.Start() + if err != nil { + fmt.Println(err.Error()) + +os.Exit(1) +} + +defer func() { + if err = svr.Stop(); err != nil { + fmt.Println(err.Error()) + +os.Exit(1) +} + +}() + + / Wait for SIGINT or SIGTERM signal + return WaitForQuitSignals() +} + +func startInProcess(ctx *Context, clientCtx client.Context, appCreator types.AppCreator) + +error { + cfg := ctx.Config + home := cfg.RootDir + var cpuProfileCleanup func() + if cpuProfile := ctx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +ctx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +cpuProfileCleanup = func() { + ctx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + ctx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +} + +} + +db, err := openDB(home, GetAppDBBackend(ctx.Viper)) + if err != nil { + return err +} + traceWriterFile := ctx.Viper.GetString(flagTraceStore) + +traceWriter, err := openTraceWriter(traceWriterFile) + if err != nil { + return err +} + + / Clean up the traceWriter in the cpuProfileCleanup routine that is invoked + / when the server is shutting down. 
+ fn := cpuProfileCleanup + cpuProfileCleanup = func() { + if fn != nil { + fn() +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + if err = traceWriter.Close(); err != nil { + ctx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +config, err := serverconfig.GetConfig(ctx.Viper) + if err != nil { + return err +} + if err := config.ValidateBasic(); err != nil { + return err +} + app := appCreator(ctx.Logger, db, traceWriter, ctx.Viper) + +nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return err +} + genDocProvider := node.DefaultGenesisDocProviderFunc(cfg) + +var ( + tmNode *node.Node + gRPCOnly = ctx.Viper.GetBool(flagGRPCOnly) + ) + if gRPCOnly { + ctx.Logger.Info("starting node in gRPC only mode; Tendermint is disabled") + +config.GRPC.Enable = true +} + +else { + ctx.Logger.Info("starting node with ABCI Tendermint in-process") + +tmNode, err = node.NewNode( + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(app), + genDocProvider, + node.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + ctx.Logger, + ) + if err != nil { + return err +} + if err := tmNode.Start(); err != nil { + return err +} + +} + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local tendermint RPC client. 
+ if (config.API.Enable || config.GRPC.Enable) && tmNode != nil { + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx) +} + +metrics, err := startTelemetry(config) + if err != nil { + return err +} + +var apiSrv *api.Server + if config.API.Enable { + genDoc, err := genDocProvider() + if err != nil { + return err +} + clientCtx := clientCtx.WithHomeDir(home).WithChainID(genDoc.ChainID) + if config.GRPC.Enable { + _, port, err := net.SplitHostPort(config.GRPC.Address) + if err != nil { + return err +} + maxSendMsgSize := config.GRPC.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.GRPC.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / If grpc is enabled, configure grpc client for grpc gateway. 
+ grpcClient, err := grpc.Dial( + grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +ctx.Logger.Debug("grpc client assigned to client context", "target", grpcAddress) +} + +apiSrv = api.New(clientCtx, ctx.Logger.With("module", "api-server")) + +app.RegisterAPIRoutes(apiSrv, config.API) + if config.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + errCh := make(chan error) + +go func() { + if err := apiSrv.Start(config); err != nil { + errCh <- err +} + +}() + +select { + case err := <-errCh: + return err + case <-time.After(types.ServerStartTime): / assume server started successfully +} + +} + +var ( + grpcSrv *grpc.Server + grpcWebSrv *http.Server + ) + if config.GRPC.Enable { + grpcSrv, err = servergrpc.StartGRPCServer(clientCtx, app, config.GRPC) + if err != nil { + return err +} + +defer grpcSrv.Stop() + if config.GRPCWeb.Enable { + grpcWebSrv, err = servergrpc.StartGRPCWeb(grpcSrv, config) + if err != nil { + ctx.Logger.Error("failed to start grpc-web http server: ", err) + +return err +} + +defer func() { + if err := grpcWebSrv.Close(); err != nil { + ctx.Logger.Error("failed to close grpc-web http server: ", err) +} + +}() +} + +} + + / At this point it is safe to block the process if we're in gRPC only mode as + / we do not need to start Rosetta or handle any Tendermint related processes. + if gRPCOnly { + / wait for signal capture and gracefully return + return WaitForQuitSignals() +} + +var rosettaSrv crgserver.Server + if config.Rosetta.Enable { + offlineMode := config.Rosetta.Offline + + / If GRPC is not enabled rosetta cannot work in online mode, so we throw an error. 
+ if !config.GRPC.Enable && !offlineMode { + return errors.New("'grpc' must be enable in online mode for Rosetta to work") +} + +minGasPrices, err := sdk.ParseDecCoins(config.MinGasPrices) + if err != nil { + ctx.Logger.Error("failed to parse minimum-gas-prices: ", err) + +return err +} + conf := &rosetta.Config{ + Blockchain: config.Rosetta.Blockchain, + Network: config.Rosetta.Network, + TendermintRPC: ctx.Config.RPC.ListenAddress, + GRPCEndpoint: config.GRPC.Address, + Addr: config.Rosetta.Address, + Retries: config.Rosetta.Retries, + Offline: offlineMode, + GasToSuggest: config.Rosetta.GasToSuggest, + EnableFeeSuggestion: config.Rosetta.EnableFeeSuggestion, + GasPrices: minGasPrices.Sort(), + Codec: clientCtx.Codec.(*codec.ProtoCodec), + InterfaceRegistry: clientCtx.InterfaceRegistry, +} + +rosettaSrv, err = rosetta.ServerFromConfig(conf) + if err != nil { + return err +} + errCh := make(chan error) + +go func() { + if err := rosettaSrv.Start(); err != nil { + errCh <- err +} + +}() + +select { + case err := <-errCh: + return err + case <-time.After(types.ServerStartTime): / assume server started successfully +} + +} + +defer func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + if cpuProfileCleanup != nil { + cpuProfileCleanup() +} + if apiSrv != nil { + _ = apiSrv.Close() +} + +ctx.Logger.Info("exiting...") +}() + + / wait for signal capture and gracefully return + return WaitForQuitSignals() +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} +``` Upon starting, the node will bootstrap its RPC and P2P server and start dialing peers. During handshake with its peers, if the node realizes they are ahead, it will query all the blocks sequentially in order to catch up. Then, it will wait for new block proposals and block signatures from validators in order to make progress. 
-## Other commands[​](#other-commands "Direct link to Other commands") +## Other commands -To discover how to concretely run a node and interact with it, please refer to our [Running a Node, API and CLI](/v0.47/user/run-node/run-node) guide. +To discover how to concretely run a node and interact with it, please refer to our [Running a Node, API and CLI](/docs/sdk/v0.47/user/run-node/run-node) guide. diff --git a/docs/sdk/v0.47/learn/advanced/ocap.mdx b/docs/sdk/v0.47/learn/advanced/ocap.mdx index 1b657c11..30552583 100644 --- a/docs/sdk/v0.47/learn/advanced/ocap.mdx +++ b/docs/sdk/v0.47/learn/advanced/ocap.mdx @@ -1,55 +1,955 @@ --- -title: "Object-Capability Model" -description: "Version: v0.47" +title: Object-Capability Model +description: >- + When thinking about security, it is good to start with a specific threat + model. Our threat model is the following: --- -## Intro[​](#intro "Direct link to Intro") +## Intro When thinking about security, it is good to start with a specific threat model. Our threat model is the following: > We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules. -The Cosmos SDK is designed to address this threat by being the foundation of an object capability system. +The Cosmos SDK is designed to address this threat by being the +foundation of an object capability system. -> The structural properties of object capability systems favor modularity in code design and ensure reliable encapsulation in code implementation. +> The structural properties of object capability systems favor +> modularity in code design and ensure reliable encapsulation in +> code implementation. > -> These structural properties facilitate the analysis of some security properties of an object-capability program or operating system. 
Some of these — in particular, information flow properties — can be analyzed at the level of object references and connectivity, independent of any knowledge or analysis of the code that determines the behavior of the objects. +> These structural properties facilitate the analysis of some +> security properties of an object-capability program or operating +> system. Some of these — in particular, information flow properties +> — can be analyzed at the level of object references and +> connectivity, independent of any knowledge or analysis of the code +> that determines the behavior of the objects. > -> As a consequence, these security properties can be established and maintained in the presence of new objects that contain unknown and possibly malicious code. +> As a consequence, these security properties can be established +> and maintained in the presence of new objects that contain unknown +> and possibly malicious code. > -> These structural properties stem from the two rules governing access to existing objects: +> These structural properties stem from the two rules governing +> access to existing objects: > -> 1. An object A can send a message to B only if object A holds a reference to B. -> 2. An object A can obtain a reference to C only if object A receives a message containing a reference to C. As a consequence of these two rules, an object can obtain a reference to another object only through a preexisting chain of references. In short, "Only connectivity begets connectivity." +> 1. An object A can send a message to B only if object A holds a +> reference to B. +> 2. An object A can obtain a reference to C only +> if object A receives a message containing a reference to C. As a +> consequence of these two rules, an object can obtain a reference +> to another object only through a preexisting chain of references. +> In short, "Only connectivity begets connectivity." 
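The two rules above can be made concrete in a few lines of Go. This is a hypothetical sketch (the `Store` and `Reporter` types are invented for illustration, not SDK code): a `Reporter` can only talk to a `Store` it holds a reference to, and the only way it acquires that reference is by receiving it in a message, here the constructor call.

```go
package main

import "fmt"

// Store holds a value; holding a *Store is the capability to read it.
type Store struct{ balance int }

func (s *Store) Read() int { return s.balance }

// Reporter can message the Store only through the reference it holds (rule 1).
type Reporter struct{ store *Store }

// NewReporter is the only way a Reporter obtains a Store reference:
// the reference arrives inside a message (rule 2).
func NewReporter(s *Store) *Reporter { return &Reporter{store: s} }

func (r *Reporter) Report() int { return r.store.Read() }

func main() {
	s := &Store{balance: 100}
	r := NewReporter(s) // "only connectivity begets connectivity"
	fmt.Println(r.Report()) // prints 100
}
```

There is no ambient global through which a `Reporter` could reach a `Store` it was never handed, which is exactly the property the keeper pattern below relies on.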
For an introduction to object-capabilities, see this [Wikipedia article](https://en.wikipedia.org/wiki/Object-capability_model). -## Ocaps in practice[​](#ocaps-in-practice "Direct link to Ocaps in practice") +## Ocaps in practice The idea is to only reveal what is necessary to get the work done. -For example, the following code snippet violates the object capabilities principle: +For example, the following code snippet violates the object capabilities +principle: -``` -type AppAccount struct {...}account := &AppAccount{ Address: pub.Address(), Coins: sdk.Coins{sdk.NewInt64Coin("ATM", 100)},}sumValue := externalModule.ComputeSumValue(account) +```go +type AppAccount struct {... +} + account := &AppAccount{ + Address: pub.Address(), + Coins: sdk.Coins{ + sdk.NewInt64Coin("ATM", 100) +}, +} + sumValue := externalModule.ComputeSumValue(account) ``` -The method `ComputeSumValue` implies a pure function, yet the implied capability of accepting a pointer value is the capability to modify that value. The preferred method signature should take a copy instead. +The method `ComputeSumValue` implies a pure function, yet the implied +capability of accepting a pointer value is the capability to modify that +value. The preferred method signature should take a copy instead. -``` +```go sumValue := externalModule.ComputeSumValue(*account) ``` In the Cosmos SDK, you can see the application of this principle in simapp. -simapp/app.go +```go expandable +/go:build app_v1 -``` -loading... 
-``` +package simapp + +import ( + + "encoding/json" + "io" + "os" + "path/filepath" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "github.com/spf13/cast" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + simappparams "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/client/grpc/tmservice" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + 
"github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + 
"github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + +app.MountMemoryStores(memKeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(encodingConfig.TxConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app.go#L294-L318) +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + 
+InitChainer(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetTKey(storeKey string) *storetypes.TransientStoreKey { + return app.tkeys[storeKey] +} + +/ GetMemKey returns the MemStoreKey for the provided mem key. +/ +/ NOTE: This is solely used for testing purposes. +func (app *SimApp) + +GetMemKey(storeKey string) *storetypes.MemoryStoreKey { + return app.memKeys[storeKey] +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new tendermint queries routes from grpc-gateway. + tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. 
+func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + tmservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + app.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter()) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable()) + 
+paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} + +func makeEncodingConfig() + +simappparams.EncodingConfig { + encodingConfig := simappparams.MakeTestEncodingConfig() + +std.RegisterLegacyAminoCodec(encodingConfig.Amino) + +std.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino) + +ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +return encodingConfig +} +``` The following diagram shows the current dependencies between keepers. -![Keeper dependencies](/images/v0.47/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg) +![Keeper dependencies](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg) diff --git a/docs/sdk/v0.47/learn/advanced/proto-docs.mdx b/docs/sdk/v0.47/learn/advanced/proto-docs.mdx index 6c427ed0..0f3593eb 100644 --- a/docs/sdk/v0.47/learn/advanced/proto-docs.mdx +++ b/docs/sdk/v0.47/learn/advanced/proto-docs.mdx @@ -1,6 +1,6 @@ --- -title: "Protobuf Documentation" -description: "Version: v0.47" +title: Protobuf Documentation +description: See Cosmos SDK Buf Proto-docs --- See [Cosmos SDK Buf Proto-docs](https://buf.build/cosmos/cosmos-sdk/docs/main) diff --git a/docs/sdk/v0.47/learn/advanced/runtx_middleware.mdx b/docs/sdk/v0.47/learn/advanced/runtx_middleware.mdx index 494ee358..2ac22bf9 100644 --- a/docs/sdk/v0.47/learn/advanced/runtx_middleware.mdx +++ b/docs/sdk/v0.47/learn/advanced/runtx_middleware.mdx @@ -1,22 +1,124 @@ --- -title: "RunTx recovery middleware" -description: "Version: v0.47" +title: RunTx recovery middleware --- -`BaseApp.runTx()` function handles Go panics that might occur during transactions execution, for example, keeper has faced an invalid state and paniced. Depending on the panic type different handler is used, for instance the default one prints an error log message. 
Recovery middleware is used to add custom panic recovery for Cosmos SDK application developers.
+The `BaseApp.runTx()` function handles Go panics that might occur during transaction execution, for example, when a keeper has encountered an invalid state and panicked.
+Depending on the panic type, a different handler is used; for instance, the default one prints an error log message.
+Recovery middleware is used to add custom panic recovery for Cosmos SDK application developers.

-More context can found in the corresponding [ADR-022](/v0.47/build/architecture/adr-022-custom-panic-handling) and the implementation in [recovery.go](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/recovery.go).
+More context can be found in the corresponding [ADR-022](/docs/common/pages/adr-comprehensive#adr-022-custom-baseapp-panic-handling) and the implementation in [recovery.go](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/recovery.go).

-## Interface[​](#interface "Direct link to Interface")
+## Interface

-baseapp/recovery.go
-
-```
-loading...
+```go expandable
+package baseapp
+
+import (
+
+ "fmt"
+ "runtime/debug"
+
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+/ RecoveryHandler handles recovery()
+
+object.
+/ Return a non-nil error if recoveryObj was processed.
+/ Return nil if recoveryObj was not processed.
+type RecoveryHandler func(recoveryObj interface{
+})
+
+error
+
+/ recoveryMiddleware is wrapper for RecoveryHandler to create chained recovery handling.
+/ returns (recoveryMiddleware, nil)
+ if recoveryObj was not processed and should be passed to the next middleware in chain.
+/ returns (nil, error)
+ if recoveryObj was processed and middleware chain processing should be stopped.
+type recoveryMiddleware func(recoveryObj interface{
+}) (recoveryMiddleware, error)
+
+/ processRecovery processes recoveryMiddleware chain for recovery()
+
+object.
+/ Chain processing stops on non-nil error or when chain is processed. +func processRecovery(recoveryObj interface{ +}, middleware recoveryMiddleware) + +error { + if middleware == nil { + return nil +} + +next, err := middleware(recoveryObj) + if err != nil { + return err +} + +return processRecovery(recoveryObj, next) +} + +/ newRecoveryMiddleware creates a RecoveryHandler middleware. +func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) + +recoveryMiddleware { + return func(recoveryObj interface{ +}) (recoveryMiddleware, error) { + if err := handler(recoveryObj); err != nil { + return nil, err +} + +return next, nil +} +} + +/ newOutOfGasRecoveryMiddleware creates a standard OutOfGas recovery middleware for app.runTx method. +func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) + +recoveryMiddleware { + handler := func(recoveryObj interface{ +}) + +error { + err, ok := recoveryObj.(sdk.ErrorOutOfGas) + if !ok { + return nil +} + +return sdkerrors.Wrap( + sdkerrors.ErrOutOfGas, fmt.Sprintf( + "out of gas in location: %v; gasWanted: %d, gasUsed: %d", + err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(), + ), + ) +} + +return newRecoveryMiddleware(handler, next) +} + +/ newDefaultRecoveryMiddleware creates a default (last in chain) + +recovery middleware for app.runTx method. +func newDefaultRecoveryMiddleware() + +recoveryMiddleware { + handler := func(recoveryObj interface{ +}) + +error { + return sdkerrors.Wrap( + sdkerrors.ErrPanic, fmt.Sprintf( + "recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack()), + ), + ) +} + +return newRecoveryMiddleware(handler, nil) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/baseapp/recovery.go#L11-L14) - `recoveryObj` is a return value for `recover()` function from the `buildin` Go package. **Contract:** @@ -24,24 +126,51 @@ loading... 
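The chaining contract documented in the code above (a handler returns `nil` to decline the recovery object and a non-nil error to claim it) can be illustrated with a small, self-contained sketch. The names here (`handler`, `newMiddleware`, `process`, `buildChain`) are illustrative stand-ins, not the SDK's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// handler mirrors the RecoveryHandler contract: a non-nil error means the
// recovery object was processed; nil means "pass it to the next handler".
type handler func(recoveryObj interface{}) error

// middleware chains handlers the same way recoveryMiddleware does.
type middleware func(recoveryObj interface{}) (middleware, error)

func newMiddleware(h handler, next middleware) middleware {
	return func(obj interface{}) (middleware, error) {
		if err := h(obj); err != nil {
			return nil, err
		}
		return next, nil
	}
}

// process walks the chain until a handler claims the object or the chain ends.
func process(obj interface{}, mw middleware) error {
	if mw == nil {
		return nil
	}
	next, err := mw(obj)
	if err != nil {
		return err
	}
	return process(obj, next)
}

// buildChain wires a specific handler in front of a catch-all default,
// just as custom handlers sit in front of the default recovery middleware.
func buildChain() middleware {
	catchAll := newMiddleware(func(obj interface{}) error {
		return fmt.Errorf("recovered: %v", obj)
	}, nil)
	return newMiddleware(func(obj interface{}) error {
		if s, ok := obj.(string); ok {
			return errors.New("string panic: " + s)
		}
		return nil // decline; fall through to catchAll
	}, catchAll)
}

func main() {
	mw := buildChain()
	fmt.Println(process("boom", mw)) // claimed by the string handler
	fmt.Println(process(42, mw))     // falls through to the catch-all
}
```

The recursive `process` stops as soon as any handler returns a non-nil error, which is exactly why handler registration order matters.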
* RecoveryHandler returns `nil` if `recoveryObj` wasn't handled and should be passed to the next recovery middleware; * RecoveryHandler returns a non-nil `error` if `recoveryObj` was handled; -## Custom RecoveryHandler register[​](#custom-recoveryhandler-register "Direct link to Custom RecoveryHandler register") +## Custom RecoveryHandler register `BaseApp.AddRunTxRecoveryHandler(handlers ...RecoveryHandler)` BaseApp method adds recovery middleware to the default recovery chain. -## Example[​](#example "Direct link to Example") +## Example Lets assume we want to emit the "Consensus failure" chain state if some particular error occurred. We have a module keeper that panics: -``` -func (k FooKeeper) Do(obj interface{}) { if obj == nil { // that shouldn't happen, we need to crash the app err := errorsmod.Wrap(fooTypes.InternalError, "obj is nil") panic(err) }} +```go +func (k FooKeeper) + +Do(obj interface{ +}) { + if obj == nil { + / that shouldn't happen, we need to crash the app + err := errorsmod.Wrap(fooTypes.InternalError, "obj is nil") + +panic(err) +} +} ``` By default that panic would be recovered and an error message will be printed to log. To override that behaviour we should register a custom RecoveryHandler: -``` -// Cosmos SDK application constructorcustomHandler := func(recoveryObj interface{}) error { err, ok := recoveryObj.(error) if !ok { return nil } if fooTypes.InternalError.Is(err) { panic(fmt.Errorf("FooKeeper did panic with error: %w", err)) } return nil}baseApp := baseapp.NewBaseApp(...)baseApp.AddRunTxRecoveryHandler(customHandler) +```go expandable +/ Cosmos SDK application constructor + customHandler := func(recoveryObj interface{ +}) + +error { + err, ok := recoveryObj.(error) + if !ok { + return nil +} + if fooTypes.InternalError.Is(err) { + panic(fmt.Errorf("FooKeeper did panic with error: %w", err)) +} + +return nil +} + baseApp := baseapp.NewBaseApp(...) 
+ +baseApp.AddRunTxRecoveryHandler(customHandler) ``` diff --git a/docs/sdk/v0.47/learn/advanced/simulation.mdx b/docs/sdk/v0.47/learn/advanced/simulation.mdx index 16547965..ae4333f5 100644 --- a/docs/sdk/v0.47/learn/advanced/simulation.mdx +++ b/docs/sdk/v0.47/learn/advanced/simulation.mdx @@ -1,67 +1,102 @@ --- -title: "Cosmos Blockchain Simulator" -description: "Version: v0.47" +title: Cosmos Blockchain Simulator +description: >- + The Cosmos SDK offers a full fledged simulation framework to fuzz test every + message defined by a module. --- -The Cosmos SDK offers a full fledged simulation framework to fuzz test every message defined by a module. +The Cosmos SDK offers a full fledged simulation framework to fuzz test every +message defined by a module. -On the Cosmos SDK, this functionality is provided by [`SimApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app_v2.go), which is a `Baseapp` application that is used for running the [`simulation`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/simulation) module. This module defines all the simulation logic as well as the operations for randomized parameters like accounts, balances etc. +On the Cosmos SDK, this functionality is provided by [`SimApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app_v2.go), which is a +`Baseapp` application that is used for running the [`simulation`](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/simulation) module. +This module defines all the simulation logic as well as the operations for +randomized parameters like accounts, balances etc. -## Goals[​](#goals "Direct link to Goals") +## Goals -The blockchain simulator tests how the blockchain application would behave under real life circumstances by generating and sending randomized messages. 
The goal of this is to detect and debug failures that could halt a live chain, by providing logs and statistics about the operations run by the simulator as well as exporting the latest application state when a failure was found.
+The blockchain simulator tests how the blockchain application would behave under
+real-life circumstances by generating and sending randomized messages.
+The goal of this is to detect and debug failures that could halt a live chain,
+by providing logs and statistics about the operations run by the simulator as
+well as exporting the latest application state when a failure was found.

-Its main difference with integration testing is that the simulator app allows you to pass parameters to customize the chain that's being simulated. This comes in handy when trying to reproduce bugs that were generated in the provided operations (randomized or not).
+Its main difference from integration testing is that the simulator app allows
+you to pass parameters to customize the chain that's being simulated.
+This comes in handy when trying to reproduce bugs that were generated in the
+provided operations (randomized or not).

-## Simulation commands[​](#simulation-commands "Direct link to Simulation commands")
+## Simulation commands

-The simulation app has different commands, each of which tests a different failure type:
+The simulation app has different commands, each of which tests a different
+failure type:

-* `AppImportExport`: The simulator exports the initial app state and then it creates a new app with the exported `genesis.json` as an input, checking for inconsistencies between the stores.
+* `AppImportExport`: The simulator exports the initial app state and then it
+  creates a new app with the exported `genesis.json` as an input, checking for
+  inconsistencies between the stores.
 * `AppSimulationAfterImport`: Queues two simulations together. The first one provides the app state (*i.e* genesis) to the second.
Useful to test software upgrades or hard-forks from a live chain. * `AppStateDeterminism`: Checks that all the nodes return the same values, in the same order. -* `BenchmarkInvariants`: Analysis of the performance of running all modules' invariants (*i.e* sequentially runs a [benchmark](https://pkg.go.dev/testing/#hdr-Benchmarks) test). An invariant checks for differences between the values that are on the store and the passive tracker. Eg: total coins held by accounts vs total supply tracker. +* `BenchmarkInvariants`: Analysis of the performance of running all modules' invariants (*i.e* sequentially runs a [benchmark](https://pkg.go.dev/testing/#hdr-Benchmarks) test). An invariant checks for + differences between the values that are on the store and the passive tracker. Eg: total coins held by accounts vs total supply tracker. * `FullAppSimulation`: General simulation mode. Runs the chain and the specified operations for a given number of blocks. Tests that there're no `panics` on the simulation. It does also run invariant checks on every `Period` but they are not benchmarked. -Each simulation must receive a set of inputs (*i.e* flags) such as the number of blocks that the simulation is run, seed, block size, etc. Check the full list of flags [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/simulation/client/cli/flags.go#L33-L57). +Each simulation must receive a set of inputs (*i.e* flags) such as the number of +blocks that the simulation is run, seed, block size, etc. +Check the full list of flags [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/simulation/client/cli/flags.go#L33-L57). -## Simulator Modes[​](#simulator-modes "Direct link to Simulator Modes") +## Simulator Modes In addition to the various inputs and commands, the simulator runs in three modes: -1. Completely random where the initial state, module parameters and simulation parameters are **pseudo-randomly generated**. -2. 
From a `genesis.json` file where the initial state and the module parameters are defined. This mode is helpful for running simulations on a known state such as a live network export where a new (mostly likely breaking) version of the application needs to be tested.
-3. From a `params.json` file where the initial state is pseudo-randomly generated but the module and simulation parameters can be provided manually. This allows for a more controlled and deterministic simulation setup while allowing the state space to still be pseudo-randomly simulated. The list of available parameters are listed [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/simulation/client/cli/flags.go#L59-L78).
-
-
-  These modes are not mutually exclusive. So you can for example run a randomly generated genesis state (`1`) with manually generated simulation params (`3`).
-
-
-## Usage[​](#usage "Direct link to Usage")
-
-This is a general example of how simulations are run. For more specific examples check the Cosmos SDK [Makefile](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/Makefile#L282-L318).
-
-```
- $ go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \ -run=TestApp \ ... -v -timeout 24h
+1. Completely random where the initial state, module parameters and simulation
+   parameters are **pseudo-randomly generated**.
+2. From a `genesis.json` file where the initial state and the module parameters are defined.
+   This mode is helpful for running simulations on a known state such as a live network export where a new (most likely breaking) version of the application needs to be tested.
+3. From a `params.json` file where the initial state is pseudo-randomly generated but the module and simulation parameters can be provided manually.
+   This allows for a more controlled and deterministic simulation setup while allowing the state space to still be pseudo-randomly simulated.
+ The list of available parameters is listed [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/simulation/client/cli/flags.go#L59-L78).
+
+
+These modes are not mutually exclusive. So you can for example run a randomly
+generated genesis state (`1`) with manually generated simulation params (`3`).
+
+
+## Usage
+
+This is a general example of how simulations are run. For more specific examples
+check the Cosmos SDK [Makefile](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/Makefile#L282-L318).
+
+```bash
+ $ go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \
+    -run=TestApp \
+    ...
+    -v -timeout 24h
 ```

-## Debugging Tips[​](#debugging-tips "Direct link to Debugging Tips")
+## Debugging Tips

 Here are some suggestions when encountering a simulation failure:

-* Export the app state at the height where the failure was found. You can do this by passing the `-ExportStatePath` flag to the simulator.
-* Use `-Verbose` logs. They could give you a better hint on all the operations involved.
-* Reduce the simulation `-Period`. This will run the invariants checks more frequently.
+* Export the app state at the height where the failure was found. You can do this
+  by passing the `-ExportStatePath` flag to the simulator.
+* Use `-Verbose` logs. They could give you a better hint on all the operations
+  involved.
+* Reduce the simulation `-Period`. This will run the invariant checks more
+  frequently.
 * Print all the failed invariants at once with `-PrintAllInvariants`.
-* Try using another `-Seed`. If it can reproduce the same error and if it fails sooner, you will spend less time running the simulations.
-* Reduce the `-NumBlocks` . How's the app state at the height previous to the failure?
-* Run invariants on every operation with `-SimulateEveryOperation`. *Note*: this will slow down your simulation **a lot**.
-* Try adding logs to operations that are not logged. 
You will have to define a [Logger](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/keeper/keeper.go#L65-L68) on your `Keeper`.
+* Try using another `-Seed`. If it can reproduce the same error and if it fails
+  sooner, you will spend less time running the simulations.
+* Reduce the `-NumBlocks`. How's the app state at the height previous to the
+  failure?
+* Run invariants on every operation with `-SimulateEveryOperation`. *Note*: this
+  will slow down your simulation **a lot**.
+* Try adding logs to operations that are not logged. You will have to define a
+  [Logger](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/keeper/keeper.go#L65-L68) on your `Keeper`.

-## Use simulation in your Cosmos SDK-based application[​](#use-simulation-in-your-cosmos-sdk-based-application "Direct link to Use simulation in your Cosmos SDK-based application")
+## Use simulation in your Cosmos SDK-based application

 Learn how you can integrate the simulation into your Cosmos SDK-based application:

 * Application Simulation Manager
-* [Building modules: Simulator](/v0.47/build/building-modules/simulator)
+* [Building modules: Simulator](/docs/sdk/v0.47/documentation/module-system/simulator)
 * Simulator tests

diff --git a/docs/sdk/v0.47/learn/advanced/store.mdx b/docs/sdk/v0.47/learn/advanced/store.mdx
index 20fa1064..741a2e0c 100644
--- a/docs/sdk/v0.47/learn/advanced/store.mdx
+++ b/docs/sdk/v0.47/learn/advanced/store.mdx
@@ -1,93 +1,94 @@
 ---
-title: "Store"
-description: "Version: v0.47"
+title: Store
 ---

-
-  A store is a data structure that holds the state of the application.
-
+## Synopsis
+
+A store is a data structure that holds the state of the application. 
- ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of a Cosmos SDK application](/v0.47/learn/beginner/overview-app) +### Pre-requisite Readings + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.47/learn/beginner/overview-app) + -## Introduction[​](#introduction "Direct link to Introduction") +## Introduction The Cosmos SDK store package provides interfaces, types, and abstractions for managing Merkleized state storage and commitment within a Cosmos SDK application. The package supplies various primitives for developers to work with, including state storage, state commitment, and wrapper KVStores. This document highlights the key abstractions and their significance. -## Multistore[​](#multistore "Direct link to Multistore") +## Multistore The main store in Cosmos SDK applications is a multistore, a store of stores, that supports modularity. Developers can add any number of key-value stores to the multistore based on their application needs. Each module can declare and manage its own subset of the state, allowing for a modular approach. Key-value stores within the multistore can only be accessed with a specific capability key, which is typically held in the keeper of the module that declared the store. -## Store Interfaces[​](#store-interfaces "Direct link to Store Interfaces") +## Store Interfaces -### KVStore[​](#kvstore "Direct link to KVStore") +### KVStore The `KVStore` interface defines a key-value store that can be used to store and retrieve data. The default implementation of `KVStore` used in `baseapp` is the `iavl.Store`, which is based on an IAVL Tree. KVStores can be accessed by objects that hold a specific key and can provide an `Iterator` method that returns an `Iterator` object, used to iterate over a range of keys. 
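To make the key-value access pattern concrete, here is a minimal in-memory sketch of the interface shape just described: byte-keyed `Set`/`Get`/`Has`/`Delete` plus ordered iteration over a key range. The toy `memKV` type is purely illustrative and is not the SDK's `iavl.Store`:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// memKV is a toy in-memory stand-in for the KVStore interface shape.
type memKV struct{ m map[string][]byte }

func newMemKV() *memKV { return &memKV{m: map[string][]byte{}} }

func (s *memKV) Set(key, value []byte) { s.m[string(key)] = value }
func (s *memKV) Get(key []byte) []byte { return s.m[string(key)] }
func (s *memKV) Has(key []byte) bool   { _, ok := s.m[string(key)]; return ok }
func (s *memKV) Delete(key []byte)     { delete(s.m, string(key)) }

// Iterate visits keys with the given prefix in ascending byte order, the way
// a prefix iterator over [prefix, prefixEnd) behaves on a real KVStore.
func (s *memKV) Iterate(prefix []byte, fn func(key, value []byte)) {
	keys := make([]string, 0, len(s.m))
	for k := range s.m {
		if strings.HasPrefix(k, string(prefix)) {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	for _, k := range keys {
		fn([]byte(k), s.m[k])
	}
}

func main() {
	store := newMemKV()
	store.Set([]byte("balances/alice"), []byte("10"))
	store.Set([]byte("balances/bob"), []byte("7"))
	store.Set([]byte("params/rate"), []byte("0.1"))

	// Only keys under the "balances/" prefix are visited, in sorted order.
	store.Iterate([]byte("balances/"), func(k, v []byte) {
		fmt.Printf("%s = %s\n", k, v)
	})
}
```

In a real module, the same pattern appears with a store key held by the keeper and module-defined key prefixes separating each data type.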
-### CommitKVStore[​](#commitkvstore "Direct link to CommitKVStore") +### CommitKVStore The `CommitKVStore` interface extends the `KVStore` interface and adds methods for state commitment. The default implementation of `CommitKVStore` used in `baseapp` is also the `iavl.Store`. -### StoreDB[​](#storedb "Direct link to StoreDB") +### StoreDB The `StoreDB` interface defines a database that can be used to persist key-value stores. The default implementation of `StoreDB` used in `baseapp` is the `dbm.DB`, which is a simple persistent key-value store. -### DBAdapter[​](#dbadapter "Direct link to DBAdapter") +### DBAdapter The `DBAdapter` interface defines an adapter for `dbm.DB` that fulfills the `KVStore` interface. This interface is used to provide compatibility between the `dbm.DB` implementation and the `KVStore` interface. -### TransientStore[​](#transientstore "Direct link to TransientStore") +### TransientStore The `TransientStore` interface defines a base-layer KVStore which is automatically discarded at the end of the block and is useful for persisting information that is only relevant per-block, like storing parameter changes. -## Store Abstractions[​](#store-abstractions "Direct link to Store Abstractions") +## Store Abstractions The store package provides a comprehensive set of abstractions for managing state commitment and storage in an SDK application. These abstractions include CacheWrapping, KVStore, and CommitMultiStore, which offer a range of features such as CRUD functionality, prefix-based iteration, and state commitment management. By utilizing these abstractions, developers can create modular applications with independent state management for each module. This approach allows for a more organized and maintainable application structure. -### CacheWrap[​](#cachewrap "Direct link to CacheWrap") +### CacheWrap CacheWrap is a wrapper around a KVStore that provides caching for both read and write operations. 
The CacheWrap can be used to improve performance by reducing the number of disk reads and writes required for state storage operations. The CacheWrap also includes a Write method that commits the pending writes to the underlying KVStore. -### HistoryStore[​](#historystore "Direct link to HistoryStore") +### HistoryStore The HistoryStore is an optional feature that can be used to store historical versions of the state. The HistoryStore can be used to track changes to the state over time, allowing developers to analyze changes in the state and roll back to previous versions if necessary. -### IndexStore[​](#indexstore "Direct link to IndexStore") +### IndexStore The IndexStore is a type of KVStore that is used to maintain indexes of data stored in other KVStores. IndexStores can be used to improve query performance by providing a way to quickly search for data based on specific criteria. -### Queryable[​](#queryable "Direct link to Queryable") +### Queryable The Queryable interface is used to provide a way for applications to query the state stored in a KVStore. The Queryable interface includes methods for retrieving data based on a key or a range of keys, as well as methods for retrieving data based on specific criteria. -### PrefixIterator[​](#prefixiterator "Direct link to PrefixIterator") +### PrefixIterator The PrefixIterator interface is used to iterate over a range of keys in a KVStore that share a common prefix. PrefixIterators can be used to efficiently retrieve subsets of data from a KVStore based on a specific prefix. -### RootMultiStore[​](#rootmultistore "Direct link to RootMultiStore") +### RootMultiStore The RootMultiStore is a Multistore that provides the ability to retrieve a snapshot of the state at a specific height. This is useful for implementing light clients. -### GasKVStore[​](#gaskvstore "Direct link to GasKVStore") +### GasKVStore The GasKVStore is a wrapper around a KVStore that provides gas measurement for read and write operations. 
The GasKVStore is typically used to measure the cost of executing transactions. -## Implementation Details[​](#implementation-details "Direct link to Implementation Details") +## Implementation Details While there are many interfaces that the store package provides, there is typically a core implementation for each main interface that modules and developers interact with that are defined in the Cosmos SDK. The `iavl.Store` provides the core implementation for state storage and commitment by implementing the following interfaces: -* `KVStore` -* `CommitStore` -* `CommitKVStore` -* `Queryable` -* `StoreWithInitialVersion` +- `KVStore` +- `CommitStore` +- `CommitKVStore` +- `Queryable` +- `StoreWithInitialVersion` The `iavl.Store` also provides the ability to remove historical state from the state commitment layer. @@ -97,6 +98,6 @@ Other store abstractions include `cachekv.Store`, `gaskv.Store`, `cachemulti.Sto Note that concurrent access to the `iavl.Store` tree is not safe, and it is the responsibility of the caller to ensure that concurrent access to the store is not performed. -## Store Migration[​](#store-migration "Direct link to Store Migration") +## Store Migration Store migration is the process of updating the structure of a KVStore to support new features or changes in the data model. Store migration can be a complex process, but it is essential for maintaining the integrity of the state stored in a KVStore. diff --git a/docs/sdk/v0.47/learn/advanced/telemetry.mdx b/docs/sdk/v0.47/learn/advanced/telemetry.mdx index 27a36ca2..051aff6d 100644 --- a/docs/sdk/v0.47/learn/advanced/telemetry.mdx +++ b/docs/sdk/v0.47/learn/advanced/telemetry.mdx @@ -1,62 +1,81 @@ --- -title: "Telemetry" -description: "Version: v0.47" +title: Telemetry --- - - Gather relevant insights about your application and modules with custom metrics and telemetry. 
- +## Synopsis -The Cosmos SDK enables operators and developers to gain insight into the performance and behavior of their application through the use of the `telemetry` package. To enable telemetrics, set `telemetry.enabled = true` in the app.toml config file. +Gather relevant insights about your application and modules with custom metrics and telemetry. + +The Cosmos SDK enables operators and developers to gain insight into the performance and behavior of +their application through the use of the `telemetry` package. To enable telemetry, set `telemetry.enabled = true` in the app.toml config file. The Cosmos SDK currently supports enabling in-memory and prometheus as telemetry sinks. In-memory sink is always attached (when the telemetry is enabled) with 10 second interval and 1 minute retention. This means that metrics will be aggregated over 10 seconds, and metrics will be kept alive for 1 minute. To query active metrics (see retention note above) you have to enable API server (`api.enabled = true` in the app.toml). Single API endpoint is exposed: `http://localhost:1317/metrics?format={text|prometheus}`, the default being `text`. -## Emitting metrics[​](#emitting-metrics "Direct link to Emitting metrics") +## Emitting metrics -If telemetry is enabled via configuration, a single global metrics collector is registered via the [go-metrics](https://github.com/armon/go-metrics) library. This allows emitting and collecting metrics through simple [API](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/telemetry/wrapper.go). Example: +If telemetry is enabled via configuration, a single global metrics collector is registered via the +[go-metrics](https://github.com/armon/go-metrics) library. This allows emitting and collecting +metrics through simple [API](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/telemetry/wrapper.go).
Example: -``` -func EndBlocker(ctx sdk.Context, k keeper.Keeper) { defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) // ...} +```go +func EndBlocker(ctx sdk.Context, k keeper.Keeper) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + + / ... +} ``` -Developers may use the `telemetry` package directly, which provides wrappers around metric APIs that include adding useful labels, or they must use the `go-metrics` library directly. It is preferable to add as much context and adequate dimensionality to metrics as possible, so the `telemetry` package is advised. Regardless of the package or method used, the Cosmos SDK supports the following metrics types: +Developers may use the `telemetry` package directly, which provides wrappers around metric APIs +that include adding useful labels, or they may use the `go-metrics` library directly. It is preferable +to add as much context and adequate dimensionality to metrics as possible, so the `telemetry` package +is advised. Regardless of the package or method used, the Cosmos SDK supports the following metrics +types: -* gauges -* summaries -* counters +- gauges +- summaries +- counters -## Labels[​](#labels "Direct link to Labels") +## Labels -Certain components of modules will have their name automatically added as a label (e.g. `BeginBlock`). Operators may also supply the application with a global set of labels that will be applied to all metrics emitted using the `telemetry` package (e.g. chain-id). Global labels are supplied as a list of \[name, value] tuples. +Certain components of modules will have their name automatically added as a label (e.g. `BeginBlock`). +Operators may also supply the application with a global set of labels that will be applied to all +metrics emitted using the `telemetry` package (e.g. chain-id). Global labels are supplied as a list +of \[name, value] tuples.
Example: -``` -global-labels = [ ["chain_id", "chain-OfXo4V"],] +```toml +global-labels = [ + ["chain_id", "chain-OfXo4V"], +] ``` -## Cardinality[​](#cardinality "Direct link to Cardinality") +## Cardinality -Cardinality is key, specifically label and key cardinality. Cardinality is how many unique values of something there are. So there is naturally a tradeoff between granularity and how much stress is put on the telemetry sink in terms of indexing, scrape, and query performance. +Cardinality is key, specifically label and key cardinality. Cardinality is how many unique values of +something there are. So there is naturally a tradeoff between granularity and how much stress is put +on the telemetry sink in terms of indexing, scrape, and query performance. -Developers should take care to support metrics with enough dimensionality and granularity to be useful, but not increase the cardinality beyond the sink's limits. A general rule of thumb is to not exceed a cardinality of 10. +Developers should take care to support metrics with enough dimensionality and granularity to be +useful, but not increase the cardinality beyond the sink's limits. A general rule of thumb is to not +exceed a cardinality of 10. 
Consider the following examples with enough granularity and adequate cardinality: -* begin/end blocker time -* tx gas used -* block gas used -* amount of tokens minted -* amount of accounts created +- begin/end blocker time +- tx gas used +- block gas used +- amount of tokens minted +- amount of accounts created The following examples expose too much cardinality and may not even prove to be useful: -* transfers between accounts with amount -* voting/deposit amount from unique addresses +- transfers between accounts with amount +- voting/deposit amount from unique addresses -## Supported Metrics[​](#supported-metrics "Direct link to Supported Metrics") +## Supported Metrics | Metric | Description | Unit | Type | | :------------------------------ | :---------------------------------------------------------------------------------------- | :-------------- | :------ | diff --git a/docs/sdk/v0.47/learn/advanced/transactions.mdx b/docs/sdk/v0.47/learn/advanced/transactions.mdx index 12903c7e..97e470ee 100644 --- a/docs/sdk/v0.47/learn/advanced/transactions.mdx +++ b/docs/sdk/v0.47/learn/advanced/transactions.mdx @@ -1,226 +1,1403 @@ --- -title: "Transactions" -description: "Version: v0.47" +title: Transactions --- - - `Transactions` are objects created by end-users to trigger state changes in the application. - +## Synopsis + +`Transactions` are objects created by end-users to trigger state changes in the application. 
- ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of a Cosmos SDK Application](/v0.47/learn/beginner/overview-app) +### Pre-requisite Readings + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.47/learn/beginner/overview-app) + -## Transactions[​](#transactions-1 "Direct link to Transactions") +## Transactions -Transactions are comprised of metadata held in [contexts](/v0.47/learn/advanced/context) and [`sdk.Msg`s](/v0.47/build/building-modules/messages-and-queries) that trigger state changes within a module through the module's Protobuf [`Msg` service](/v0.47/build/building-modules/msg-services). +Transactions are comprised of metadata held in [contexts](/docs/sdk/v0.47/learn/advanced/context) and [`sdk.Msg`s](/docs/sdk/v0.47/documentation/module-system/messages-and-queries) that trigger state changes within a module through the module's Protobuf [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services). -When users want to interact with an application and make state changes (e.g. sending coins), they create transactions. Each of a transaction's `sdk.Msg` must be signed using the private key associated with the appropriate account(s), before the transaction is broadcasted to the network. A transaction must then be included in a block, validated, and approved by the network through the consensus process. To read more about the lifecycle of a transaction, click [here](/v0.47/learn/beginner/tx-lifecycle). +When users want to interact with an application and make state changes (e.g. sending coins), they create transactions. Each of a transaction's `sdk.Msg` must be signed using the private key associated with the appropriate account(s), before the transaction is broadcasted to the network. A transaction must then be included in a block, validated, and approved by the network through the consensus process. 
To read more about the lifecycle of a transaction, click [here](/docs/sdk/v0.47/learn/beginner/tx-lifecycle). -## Type Definition[​](#type-definition "Direct link to Type Definition") +## Type Definition Transaction objects are Cosmos SDK types that implement the `Tx` interface -types/tx\_msg.go +```go expandable +package types -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/tx_msg.go#L42-L50) + "encoding/json" + fmt "fmt" + "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/codec" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) -It contains the following methods: +type ( + / Msg defines the interface a transaction message must fulfill. + Msg interface { + proto.Message + + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error + + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / Tx defines the interface a transaction must fulfill. + Tx interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg + + / ValidateBasic does a simple and lightweight validation check that doesn't + / require access to any other information. 
+ ValidateBasic() + +error +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() + +AccAddress + FeeGranter() + +AccAddress +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. +func MsgTypeURL(msg Msg) + +string { + return "/" + proto.MessageName(msg) +} + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} +``` -* **GetMsgs:** unwraps the transaction and returns a list of contained `sdk.Msg`s - one transaction may have one or multiple messages, which are defined by module developers. +It contains the following methods: -* **ValidateBasic:** lightweight, [*stateless*](/v0.47/learn/beginner/tx-lifecycle#types-of-checks) checks used by ABCI messages [`CheckTx`](/v0.47/learn/advanced/baseapp#checktx) and [`DeliverTx`](/v0.47/learn/advanced/baseapp#delivertx) to make sure transactions are not invalid. 
For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth) module's `ValidateBasic` function checks that its transactions are signed by the correct number of signers and that the fees do not exceed what the user's maximum. When [`runTx`](/v0.47/learn/advanced/baseapp#runtx) is checking a transaction created from the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth/spec) module, it first runs `ValidateBasic` on each message, then runs the `auth` module AnteHandler which calls `ValidateBasic` for the transaction itself. +- **GetMsgs:** unwraps the transaction and returns a list of contained `sdk.Msg`s - one transaction may have one or multiple messages, which are defined by module developers. +- **ValidateBasic:** lightweight, [_stateless_](/docs/sdk/v0.47/learn/beginner/tx-lifecycle#types-of-checks) checks used by ABCI messages [`CheckTx`](/docs/sdk/v0.47/learn/advanced/baseapp#checktx) and [`DeliverTx`](/docs/sdk/v0.47/learn/advanced/baseapp#delivertx) to make sure transactions are not invalid. For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth) module's `ValidateBasic` function checks that its transactions are signed by the correct number of signers and that the fees do not exceed the user's maximum. When [`runTx`](/docs/sdk/v0.47/learn/advanced/baseapp#runtx) is checking a transaction created from the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth/spec) module, it first runs `ValidateBasic` on each message, then runs the `auth` module AnteHandler which calls `ValidateBasic` for the transaction itself. - :::note This function is different from the deprecated `sdk.Msg` [`ValidateBasic`](/v0.47/learn/beginner/tx-lifecycle#ValidateBasic) methods, which was performing basic validity checks on messages only.
::: + :::note + This function is different from the deprecated `sdk.Msg` [`ValidateBasic`](/docs/sdk/v0.47/learn/beginner/tx-lifecycle#ValidateBasic) methods, which performed basic validity checks on messages only. As a developer, you should rarely manipulate `Tx` directly, as `Tx` is really an intermediate type used for transaction generation. Instead, developers should prefer the `TxBuilder` interface, which you can learn more about [below](#transaction-generation). -### Signing Transactions[​](#signing-transactions "Direct link to Signing Transactions") +### Signing Transactions Every message in a transaction must be signed by the addresses specified by its `GetSigners`. The Cosmos SDK currently allows signing transactions in two different ways. -#### `SIGN_MODE_DIRECT` (preferred)[​](#sign_mode_direct-preferred "Direct link to sign_mode_direct-preferred") +#### `SIGN_MODE_DIRECT` (preferred) The most used implementation of the `Tx` interface is the Protobuf `Tx` message, which is used in `SIGN_MODE_DIRECT`: -proto/cosmos/tx/v1beta1/tx.proto - -``` -loading... +```protobuf +// Tx is the standard type used for broadcasting transactions. +message Tx { + // body is the processable content of the transaction + TxBody body = 1; + + // auth_info is the authorization related content of the transaction, + // specifically signers, signer modes and fee + AuthInfo auth_info = 2; + + // signatures is a list of signatures that matches the length and order of + // AuthInfo's signer_infos to allow connecting signature meta information like + // public key and signing mode by position. + repeated bytes signatures = 3; +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/tx/v1beta1/tx.proto#L13-L26) +Because Protobuf serialization is not deterministic, the Cosmos SDK uses an additional `TxRaw` type to denote the pinned bytes over which a transaction is signed.
Any user can generate a valid `body` and `auth_info` for a transaction, and serialize these two messages using Protobuf. `TxRaw` then pins the user's exact binary representation of `body` and `auth_info`, called respectively `body_bytes` and `auth_info_bytes`. The document that is signed by all signers of the transaction is `SignDoc` (deterministically serialized using [ADR-027](/docs/sdk/next/documentation/legacy/adr-comprehensive)): -Because Protobuf serialization is not deterministic, the Cosmos SDK uses an additional `TxRaw` type to denote the pinned bytes over which a transaction is signed. Any user can generate a valid `body` and `auth_info` for a transaction, and serialize these two messages using Protobuf. `TxRaw` then pins the user's exact binary representation of `body` and `auth_info`, called respectively `body_bytes` and `auth_info_bytes`. The document that is signed by all signers of the transaction is `SignDoc` (deterministically serialized using [ADR-027](/v0.47/build/architecture/adr-027-deterministic-protobuf-serialization)): +```protobuf +// SignDoc is the type used for generating sign bytes for SIGN_MODE_DIRECT. +message SignDoc { + // body_bytes is protobuf serialization of a TxBody that matches the + // representation in TxRaw. + bytes body_bytes = 1; -proto/cosmos/tx/v1beta1/tx.proto + // auth_info_bytes is a protobuf serialization of an AuthInfo that matches the + // representation in TxRaw. + bytes auth_info_bytes = 2; -``` -loading... -``` + // chain_id is the unique identifier of the chain this transaction targets.
+ // It prevents signed transactions from being used on another chain by an + // attacker + string chain_id = 3; -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/tx/v1beta1/tx.proto#L48-L65) + // account_number is the account number of the account in state + uint64 account_number = 4; +} +``` Once signed by all signers, the `body_bytes`, `auth_info_bytes` and `signatures` are gathered into `TxRaw`, whose serialized bytes are broadcasted over the network. -#### `SIGN_MODE_LEGACY_AMINO_JSON`[​](#sign_mode_legacy_amino_json "Direct link to sign_mode_legacy_amino_json") +#### `SIGN_MODE_LEGACY_AMINO_JSON` The legacy implementation of the `Tx` interface is the `StdTx` struct from `x/auth`: -x/auth/migrations/legacytx/stdtx.go +```go expandable +package legacytx + +import ( + + "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/codec/legacy" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ Interface implementation checks +var ( + _ sdk.Tx = (*StdTx)(nil) + _ sdk.TxWithMemo = (*StdTx)(nil) + _ sdk.FeeTx = (*StdTx)(nil) + _ tx.TipTx = (*StdTx)(nil) + _ codectypes.UnpackInterfacesMessage = (*StdTx)(nil) + + _ codectypes.UnpackInterfacesMessage = (*StdSignature)(nil) +) + +/ StdFee includes the amount of coins paid in fees and the maximum +/ gas to be used by the transaction. The ratio yields an effective "gasprice", +/ which must be above some miminum to be accepted into the mempool. 
+/ [Deprecated] +type StdFee struct { + Amount sdk.Coins `json:"amount" yaml:"amount"` + Gas uint64 `json:"gas" yaml:"gas"` + Payer string `json:"payer,omitempty" yaml:"payer"` + Granter string `json:"granter,omitempty" yaml:"granter"` +} + +/ Deprecated: NewStdFee returns a new instance of StdFee +func NewStdFee(gas uint64, amount sdk.Coins) + +StdFee { + return StdFee{ + Amount: amount, + Gas: gas, +} +} + +/ GetGas returns the fee's (wanted) + +gas. +func (fee StdFee) + +GetGas() + +uint64 { + return fee.Gas +} + +/ GetAmount returns the fee's amount. +func (fee StdFee) + +GetAmount() + +sdk.Coins { + return fee.Amount +} + +/ Bytes returns the encoded bytes of a StdFee. +func (fee StdFee) + +Bytes() []byte { + if len(fee.Amount) == 0 { + fee.Amount = sdk.NewCoins() +} + +bz, err := legacy.Cdc.MarshalJSON(fee) + if err != nil { + panic(err) +} + +return bz +} + +/ GasPrices returns the gas prices for a StdFee. +/ +/ NOTE: The gas prices returned are not the true gas prices that were +/ originally part of the submitted transaction because the fee is computed +/ as fee = ceil(gasWanted * gasPrices). +func (fee StdFee) + +GasPrices() + +sdk.DecCoins { + return sdk.NewDecCoinsFromCoins(fee.Amount...).QuoDec(math.LegacyNewDec(int64(fee.Gas))) +} + +/ StdTip is the tips used in a tipped transaction. +type StdTip struct { + Amount sdk.Coins `json:"amount" yaml:"amount"` + Tipper string `json:"tipper" yaml:"tipper"` +} + +/ StdTx is the legacy transaction format for wrapping a Msg with Fee and Signatures. +/ It only works with Amino, please prefer the new protobuf Tx in types/tx. +/ NOTE: the first signature is the fee payer (Signatures must not be nil). 
+/ Deprecated +type StdTx struct { + Msgs []sdk.Msg `json:"msg" yaml:"msg"` + Fee StdFee `json:"fee" yaml:"fee"` + Signatures []StdSignature `json:"signatures" yaml:"signatures"` + Memo string `json:"memo" yaml:"memo"` + TimeoutHeight uint64 `json:"timeout_height" yaml:"timeout_height"` +} + +/ Deprecated +func NewStdTx(msgs []sdk.Msg, fee StdFee, sigs []StdSignature, memo string) + +StdTx { + return StdTx{ + Msgs: msgs, + Fee: fee, + Signatures: sigs, + Memo: memo, +} +} + +/ GetMsgs returns the all the transaction's messages. +func (tx StdTx) + +GetMsgs() []sdk.Msg { + return tx.Msgs +} + +/ ValidateBasic does a simple and lightweight validation check that doesn't +/ require access to any other information. +/ +/nolint:revive / we need to change the receiver name here, because otherwise we conflict with tx.MaxGasWanted. +func (stdTx StdTx) + +ValidateBasic() + +error { + stdSigs := stdTx.GetSignatures() + if stdTx.Fee.Gas > tx.MaxGasWanted { + return sdkerrors.Wrapf( + sdkerrors.ErrInvalidRequest, + "invalid gas supplied; %d > %d", stdTx.Fee.Gas, tx.MaxGasWanted, + ) +} + if stdTx.Fee.Amount.IsAnyNegative() { + return sdkerrors.Wrapf( + sdkerrors.ErrInsufficientFee, + "invalid fee provided: %s", stdTx.Fee.Amount, + ) +} + if len(stdSigs) == 0 { + return sdkerrors.ErrNoSignatures +} + if len(stdSigs) != len(stdTx.GetSigners()) { + return sdkerrors.Wrapf( + sdkerrors.ErrUnauthorized, + "wrong number of signers; expected %d, got %d", len(stdTx.GetSigners()), len(stdSigs), + ) +} + +return nil +} + +/ Deprecated: AsAny implements intoAny. It doesn't work for protobuf serialization, +/ so it can't be saved into protobuf configured storage. We are using it only for API +/ compatibility. +func (tx *StdTx) + +AsAny() *codectypes.Any { + return codectypes.UnsafePackAny(tx) +} + +/ GetSigners returns the addresses that must sign the transaction. +/ Addresses are returned in a deterministic order. 
+/ They are accumulated from the GetSigners method for each Msg +/ in the order they appear in tx.GetMsgs(). +/ Duplicate addresses will be omitted. +func (tx StdTx) + +GetSigners() []sdk.AccAddress { + var signers []sdk.AccAddress + seen := map[string]bool{ +} + for _, msg := range tx.GetMsgs() { + for _, addr := range msg.GetSigners() { + if !seen[addr.String()] { + signers = append(signers, addr) + +seen[addr.String()] = true +} + +} + +} + +return signers +} + +/ GetMemo returns the memo +func (tx StdTx) + +GetMemo() + +string { + return tx.Memo +} + +/ GetTimeoutHeight returns the transaction's timeout height (if set). +func (tx StdTx) + +GetTimeoutHeight() + +uint64 { + return tx.TimeoutHeight +} + +/ GetSignatures returns the signature of signers who signed the Msg. +/ CONTRACT: Length returned is same as length of +/ pubkeys returned from MsgKeySigners, and the order +/ matches. +/ CONTRACT: If the signature is missing (ie the Msg is +/ invalid), then the corresponding signature is +/ .Empty(). +func (tx StdTx) + +GetSignatures() [][]byte { + sigs := make([][]byte, len(tx.Signatures)) + for i, stdSig := range tx.Signatures { + sigs[i] = stdSig.Signature +} + +return sigs +} + +/ GetSignaturesV2 implements SigVerifiableTx.GetSignaturesV2 +func (tx StdTx) + +GetSignaturesV2() ([]signing.SignatureV2, error) { + res := make([]signing.SignatureV2, len(tx.Signatures)) + for i, sig := range tx.Signatures { + var err error + res[i], err = StdSignatureToSignatureV2(legacy.Cdc, sig) + if err != nil { + return nil, sdkerrors.Wrapf(err, "Unable to convert signature %v to V2", sig) +} + +} -``` -loading... 
-``` +return res, nil +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/migrations/legacytx/stdtx.go#L83-L93) +/ GetPubkeys returns the pubkeys of signers if the pubkey is included in the signature +/ If pubkey is not included in the signature, then nil is in the slice instead +func (tx StdTx) -The document signed by all signers is `StdSignDoc`: +GetPubKeys() ([]cryptotypes.PubKey, error) { + pks := make([]cryptotypes.PubKey, len(tx.Signatures)) + for i, stdSig := range tx.Signatures { + pks[i] = stdSig.GetPubKey() +} -x/auth/migrations/legacytx/stdsign.go +return pks, nil +} -``` -loading... -``` +/ GetGas returns the Gas in StdFee +func (tx StdTx) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/migrations/legacytx/stdsign.go#L38-L52) +GetGas() -which is encoded into bytes using Amino JSON. Once all signatures are gathered into `StdTx`, `StdTx` is serialized using Amino JSON, and these bytes are broadcasted over the network. +uint64 { + return tx.Fee.Gas +} -#### Other Sign Modes[​](#other-sign-modes "Direct link to Other Sign Modes") +/ GetFee returns the FeeAmount in StdFee +func (tx StdTx) -The Cosmos SDK also provides a couple of other sign modes for particular use cases. +GetFee() + +sdk.Coins { + return tx.Fee.Amount +} + +/ FeePayer returns the address that is responsible for paying fee +/ StdTx returns the first signer as the fee payer +/ If no signers for tx, return empty address +func (tx StdTx) + +FeePayer() -#### `SIGN_MODE_DIRECT_AUX`[​](#sign_mode_direct_aux "Direct link to sign_mode_direct_aux") +sdk.AccAddress { + if tx.GetSigners() != nil { + return tx.GetSigners()[0] +} -`SIGN_MODE_DIRECT_AUX` is a sign mode released in the Cosmos SDK v0.46 which targets transactions with multiple signers. Whereas `SIGN_MODE_DIRECT` expects each signer to sign over both `TxBody` and `AuthInfo` (which includes all other signers' signer infos, i.e. 
their account sequence, public key and mode info), `SIGN_MODE_DIRECT_AUX` allows N-1 signers to only sign over `TxBody` and *their own* signer info. Morever, each auxiliary signer (i.e. a signer using `SIGN_MODE_DIRECT_AUX`) doesn't need to sign over the fees: +return sdk.AccAddress{ +} +} -proto/cosmos/tx/v1beta1/tx.proto +/ FeeGranter always returns nil for StdTx +func (tx StdTx) +FeeGranter() + +sdk.AccAddress { + return nil +} + +/ GetTip always returns nil for StdTx +func (tx StdTx) + +GetTip() *tx.Tip { + return nil +} + +func (tx StdTx) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + for _, m := range tx.Msgs { + err := codectypes.UnpackInterfaces(m, unpacker) + if err != nil { + return err +} + +} + + / Signatures contain PubKeys, which need to be unpacked. + for _, s := range tx.Signatures { + err := s.UnpackInterfaces(unpacker) + if err != nil { + return err +} + +} + +return nil +} ``` -loading... + +The document signed by all signers is `StdSignDoc`: + +```go expandable +package legacytx + +import ( + + "encoding/json" + "fmt" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/legacy" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/crypto/types/multisig" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ LegacyMsg defines the old interface a message must fulfill, containing +/ Amino signing method and legacy router info. +/ Deprecated: Please use `Msg` instead. +type LegacyMsg interface { + sdk.Msg + + / Get the canonical byte representation of the Msg. + GetSignBytes() []byte + + / Return the message type. + / Must be alphanumeric or empty. 
+ Route() + +string + + / Returns a human-readable string for the message, intended for utilization + / within tags + Type() + +string +} + +/ StdSignDoc is replay-prevention structure. +/ It includes the result of msg.GetSignBytes(), +/ as well as the ChainID (prevent cross chain replay) +/ and the Sequence numbers for each signature (prevent +/ inchain replay and enforce tx ordering per account). +type StdSignDoc struct { + AccountNumber uint64 `json:"account_number" yaml:"account_number"` + Sequence uint64 `json:"sequence" yaml:"sequence"` + TimeoutHeight uint64 `json:"timeout_height,omitempty" yaml:"timeout_height"` + ChainID string `json:"chain_id" yaml:"chain_id"` + Memo string `json:"memo" yaml:"memo"` + Fee json.RawMessage `json:"fee" yaml:"fee"` + Msgs []json.RawMessage `json:"msgs" yaml:"msgs"` + Tip *StdTip `json:"tip,omitempty" yaml:"tip"` +} + +/ StdSignBytes returns the bytes to sign for a transaction. +func StdSignBytes(chainID string, accnum, sequence, timeout uint64, fee StdFee, msgs []sdk.Msg, memo string, tip *tx.Tip) []byte { + msgsBytes := make([]json.RawMessage, 0, len(msgs)) + for _, msg := range msgs { + legacyMsg, ok := msg.(LegacyMsg) + if !ok { + panic(fmt.Errorf("expected %T when using amino JSON", (*LegacyMsg)(nil))) +} + +msgsBytes = append(msgsBytes, json.RawMessage(legacyMsg.GetSignBytes())) +} + +var stdTip *StdTip + if tip != nil { + if tip.Tipper == "" { + panic(fmt.Errorf("tipper cannot be empty")) +} + +stdTip = &StdTip{ + Amount: tip.Amount, + Tipper: tip.Tipper +} + +} + +bz, err := legacy.Cdc.MarshalJSON(StdSignDoc{ + AccountNumber: accnum, + ChainID: chainID, + Fee: json.RawMessage(fee.Bytes()), + Memo: memo, + Msgs: msgsBytes, + Sequence: sequence, + TimeoutHeight: timeout, + Tip: stdTip, +}) + if err != nil { + panic(err) +} + +return sdk.MustSortJSON(bz) +} + +/ Deprecated: StdSignature represents a sig +type StdSignature struct { + cryptotypes.PubKey `json:"pub_key" yaml:"pub_key"` / optional + Signature []byte 
`json:"signature" yaml:"signature"` +} + +/ Deprecated +func NewStdSignature(pk cryptotypes.PubKey, sig []byte) + +StdSignature { + return StdSignature{ + PubKey: pk, + Signature: sig +} +} + +/ GetSignature returns the raw signature bytes. +func (ss StdSignature) + +GetSignature() []byte { + return ss.Signature +} + +/ GetPubKey returns the public key of a signature as a cryptotypes.PubKey using the +/ Amino codec. +func (ss StdSignature) + +GetPubKey() + +cryptotypes.PubKey { + return ss.PubKey +} + +/ MarshalYAML returns the YAML representation of the signature. +func (ss StdSignature) + +MarshalYAML() (interface{ +}, error) { + pk := "" + if ss.PubKey != nil { + pk = ss.PubKey.String() +} + +bz, err := yaml.Marshal(struct { + PubKey string `json:"pub_key"` + Signature string `json:"signature"` +}{ + pk, + fmt.Sprintf("%X", ss.Signature), +}) + if err != nil { + return nil, err +} + +return string(bz), err +} + +func (ss StdSignature) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + return codectypes.UnpackInterfaces(ss.PubKey, unpacker) +} + +/ StdSignatureToSignatureV2 converts a StdSignature to a SignatureV2 +func StdSignatureToSignatureV2(cdc *codec.LegacyAmino, sig StdSignature) (signing.SignatureV2, error) { + pk := sig.GetPubKey() + +data, err := pubKeySigToSigData(cdc, pk, sig.Signature) + if err != nil { + return signing.SignatureV2{ +}, err +} + +return signing.SignatureV2{ + PubKey: pk, + Data: data, +}, nil +} + +func pubKeySigToSigData(cdc *codec.LegacyAmino, key cryptotypes.PubKey, sig []byte) (signing.SignatureData, error) { + multiPK, ok := key.(multisig.PubKey) + if !ok { + return &signing.SingleSignatureData{ + SignMode: signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON, + Signature: sig, +}, nil +} + +var multiSig multisig.AminoMultisignature + err := cdc.Unmarshal(sig, &multiSig) + if err != nil { + return nil, err +} + sigs := multiSig.Sigs + sigDatas := make([]signing.SignatureData, len(sigs)) + pubKeys := multiPK.GetPubKeys() + 
bitArray := multiSig.BitArray + n := multiSig.BitArray.Count() + signatures := multisig.NewMultisig(n) + sigIdx := 0 + for i := 0; i < n; i++ { + if bitArray.GetIndex(i) { + data, err := pubKeySigToSigData(cdc, pubKeys[i], multiSig.Sigs[sigIdx]) + if err != nil { + return nil, sdkerrors.Wrapf(err, "Unable to convert Signature to SigData %d", sigIdx) +} + +sigDatas[sigIdx] = data + multisig.AddSignature(signatures, data, sigIdx) + +sigIdx++ +} + +} + +return signatures, nil +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/tx/v1beta1/tx.proto#L67-L97) +which is encoded into bytes using Amino JSON. Once all signatures are gathered into `StdTx`, `StdTx` is serialized using Amino JSON, and these bytes are broadcasted over the network. + +#### Other Sign Modes + +The Cosmos SDK also provides a couple of other sign modes for particular use cases. + +#### `SIGN_MODE_DIRECT_AUX` + +`SIGN_MODE_DIRECT_AUX` is a sign mode released in the Cosmos SDK v0.46 which targets transactions with multiple signers. Whereas `SIGN_MODE_DIRECT` expects each signer to sign over both `TxBody` and `AuthInfo` (which includes all other signers' signer infos, i.e. their account sequence, public key and mode info), `SIGN_MODE_DIRECT_AUX` allows N-1 signers to only sign over `TxBody` and _their own_ signer info. Morever, each auxiliary signer (i.e. a signer using `SIGN_MODE_DIRECT_AUX`) doesn't +need to sign over the fees: + +```protobuf +// SignDocDirectAux is the type used for generating sign bytes for +// SIGN_MODE_DIRECT_AUX. +// +// Since: cosmos-sdk 0.46 +message SignDocDirectAux { + // body_bytes is protobuf serialization of a TxBody that matches the + // representation in TxRaw. + bytes body_bytes = 1; + + // public_key is the public key of the signing account. + google.protobuf.Any public_key = 2; + + // chain_id is the identifier of the chain this transaction targets. 
+ // It prevents signed transactions from being used on another chain by an
+ // attacker.
+ string chain_id = 3;
+
+ // account_number is the account number of the account in state.
+ uint64 account_number = 4;
+
+ // sequence is the sequence number of the signing account.
+ uint64 sequence = 5;
+
+ // Tip is the optional tip used for transactions fees paid in another denom.
+ // It should be left empty if the signer is not the tipper for this
+ // transaction.
+ //
+ // This field is ignored if the chain didn't enable tips, i.e. didn't add the
+ // `TipDecorator` in its posthandler.
+ Tip tip = 6;
+}
+```

-The use case is a multi-signer transaction, where one of the signers is appointed to gather all signatures, broadcast the signature and pay for fees, and the others only care about the transaction body. This generally allows for a better multi-signing UX. If Alice, Bob and Charlie are part of a 3-signer transaction, then Alice and Bob can both use `SIGN_MODE_DIRECT_AUX` to sign over the `TxBody` and their own signer info (no need an additional step to gather other signers' ones, like in `SIGN_MODE_DIRECT`), without specifying a fee in their SignDoc. Charlie can then gather both signatures from Alice and Bob, and create the final transaction by appending a fee. Note that the fee payer of the transaction (in our case Charlie) must sign over the fees, so must use `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`.

+The use case is a multi-signer transaction, where one of the signers is appointed to gather all signatures, broadcast the transaction, and pay the fees, and the others only care about the transaction body. This generally allows for a better multi-signing UX. If Alice, Bob and Charlie are part of a 3-signer transaction, then Alice and Bob can both use `SIGN_MODE_DIRECT_AUX` to sign over the `TxBody` and their own signer info (no need for an additional step to gather the other signers' info, as in `SIGN_MODE_DIRECT`), without specifying a fee in their SignDoc.
Charlie can then gather both signatures from Alice and Bob, and
+create the final transaction by appending a fee. Note that the fee payer of the transaction (in our case Charlie) must sign over the fees, so must use `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`.

-#### `SIGN_MODE_TEXTUAL`[​](#sign_mode_textual "Direct link to sign_mode_textual")
+#### `SIGN_MODE_TEXTUAL`

`SIGN_MODE_TEXTUAL` is a new sign mode for delivering a better signing experience on hardware wallets; it is currently still under implementation. If you wish to learn more, please refer to [ADR-050](https://github.com/cosmos/cosmos-sdk/pull/10701).

-#### Custom Sign modes[​](#custom-sign-modes "Direct link to Custom Sign modes")
+#### Custom Sign modes

There is the opportunity to add your own custom sign mode to the Cosmos SDK. While we cannot accept the implementation of the sign mode into the repository, we can accept a pull request to add the custom sign mode to the `SignMode` enum located [here](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/tx/signing/v1beta1/signing.proto#L17).

-## Transaction Process[​](#transaction-process "Direct link to Transaction Process")
+## Transaction Process

The process of an end-user sending a transaction is:

-* decide on the messages to put into the transaction,
-* generate the transaction using the Cosmos SDK's `TxBuilder`,
-* broadcast the transaction using one of the available interfaces.
+- decide on the messages to put into the transaction,
+- generate the transaction using the Cosmos SDK's `TxBuilder`,
+- broadcast the transaction using one of the available interfaces.

The next paragraphs will describe each of these components, in this order.

-### Messages[​](#messages "Direct link to Messages")
+### Messages

-
-  Module `sdk.Msg`s are not to be confused with [ABCI Messages](https://docs.cometbft.com/v0.37/spec/abci/) which define interactions between the CometBFT and application layers.
-
+
+  Module `sdk.Msg`s are not to be confused with [ABCI
+  Messages](https://docs.cometbft.com/v0.37/spec/abci/) which define
+  interactions between the CometBFT and application layers.
+

-**Messages** (or `sdk.Msg`s) are module-specific objects that trigger state transitions within the scope of the module they belong to. Module developers define the messages for their module by adding methods to the Protobuf [`Msg` service](/v0.47/build/building-modules/msg-services), and also implement the corresponding `MsgServer`.
+**Messages** (or `sdk.Msg`s) are module-specific objects that trigger state transitions within the scope of the module they belong to. Module developers define the messages for their module by adding methods to the Protobuf [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services), and also implement the corresponding `MsgServer`.

-Each `sdk.Msg`s is related to exactly one Protobuf [`Msg` service](/v0.47/build/building-modules/msg-services) RPC, defined inside each module's `tx.proto` file. A SDK app router automatically maps every `sdk.Msg` to a corresponding RPC. Protobuf generates a `MsgServer` interface for each module `Msg` service, and the module developer needs to implement this interface. This design puts more responsibility on module developers, allowing application developers to reuse common functionalities without having to implement state transition logic repetitively.
+Each `sdk.Msg` is related to exactly one Protobuf [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services) RPC, defined inside each module's `tx.proto` file. An SDK app router automatically maps every `sdk.Msg` to a corresponding RPC. Protobuf generates a `MsgServer` interface for each module `Msg` service, and the module developer needs to implement this interface.
+This design puts more responsibility on module developers, allowing application developers to reuse common functionalities without having to implement state transition logic repetitively. -To learn more about Protobuf `Msg` services and how to implement `MsgServer`, click [here](/v0.47/build/building-modules/msg-services). +To learn more about Protobuf `Msg` services and how to implement `MsgServer`, click [here](/docs/sdk/v0.47/documentation/module-system/msg-services). While messages contain the information for state transition logic, a transaction's other metadata and relevant information are stored in the `TxBuilder` and `Context`. -### Transaction Generation[​](#transaction-generation "Direct link to Transaction Generation") +### Transaction Generation The `TxBuilder` interface contains data closely related with the generation of transactions, which an end-user can freely set to generate the desired transaction: -client/tx\_config.go +```go expandable +package client -``` -loading... -``` +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. 
+ TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() + +signing.SignModeHandler +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/tx_config.go#L33-L50) + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() -* `Msg`s, the array of [messages](#messages) included in the transaction. -* `GasLimit`, option chosen by the users for how to calculate how much gas they will need to pay. -* `Memo`, a note or comment to send with the transaction. -* `FeeAmount`, the maximum amount the user is willing to pay in fees. -* `TimeoutHeight`, block height until which the transaction is valid. -* `Signatures`, the array of signatures from all signers of the transaction. +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTip(tip *tx.Tip) + +SetTimeoutHeight(height uint64) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} +) +``` + +- `Msg`s, the array of [messages](#messages) included in the transaction. +- `GasLimit`, option chosen by the users for how to calculate how much gas they will need to pay. +- `Memo`, a note or comment to send with the transaction. +- `FeeAmount`, the maximum amount the user is willing to pay in fees. +- `TimeoutHeight`, block height until which the transaction is valid. +- `Signatures`, the array of signatures from all signers of the transaction. 
As there are currently two sign modes for signing transactions, there are also two implementations of `TxBuilder`:

-* [wrapper](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/tx/builder.go#L18-L34) for creating transactions for `SIGN_MODE_DIRECT`,
-* [StdTxBuilder](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/migrations/legacytx/stdtx_builder.go#L15-L21) for `SIGN_MODE_LEGACY_AMINO_JSON`.
+- [wrapper](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/tx/builder.go#L18-L34) for creating transactions for `SIGN_MODE_DIRECT`,
+- [StdTxBuilder](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/migrations/legacytx/stdtx_builder.go#L15-L21) for `SIGN_MODE_LEGACY_AMINO_JSON`.

However, the two implementations of `TxBuilder` should be hidden away from end-users, as they should prefer using the overarching `TxConfig` interface:

-client/tx\_config.go
+```go expandable
+package client

-```
-loading...
-```
+import (

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/tx_config.go#L22-L31)
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/tx"
+ signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing"
+ "github.com/cosmos/cosmos-sdk/x/auth/signing"
+)

-`TxConfig` is an app-wide configuration for managing transactions. Most importantly, it holds the information about whether to sign each transaction with `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. By calling `txBuilder := txConfig.NewTxBuilder()`, a new `TxBuilder` will be created with the appropriate sign mode.
+type (
+ / TxEncodingConfig defines an interface that contains transaction
+ / encoders and decoders
+ TxEncodingConfig interface {
+ TxEncoder()

-Once `TxBuilder` is correctly populated with the setters exposed above, `TxConfig` will also take care of correctly encoding the bytes (again, either using `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`).
Here's a pseudo-code snippet of how to generate and encode a transaction, using the `TxEncoder()` method: +sdk.TxEncoder + TxDecoder() -``` -txBuilder := txConfig.NewTxBuilder()txBuilder.SetMsgs(...) // and other setters on txBuilderbz, err := txConfig.TxEncoder()(txBuilder.GetTx())// bz are bytes to be broadcasted over the network -``` +sdk.TxDecoder + TxJSONEncoder() -### Broadcasting the Transaction[​](#broadcasting-the-transaction "Direct link to Broadcasting the Transaction") +sdk.TxEncoder + TxJSONDecoder() -Once the transaction bytes are generated, there are currently three ways of broadcasting it. +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) -#### CLI[​](#cli "Direct link to CLI") +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} -Application developers create entry points to the application by creating a [command-line interface](/v0.47/learn/advanced/cli), [gRPC and/or REST interface](/v0.47/learn/advanced/grpc_rest), typically found in the application's `./cmd` folder. These interfaces allow users to interact with the application through command-line. + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. + TxConfig interface { + TxEncodingConfig -For the [command-line interface](/v0.47/build/building-modules/module-interfaces#cli), module developers create subcommands to add as children to the application top-level transaction command `TxCmd`. CLI commands actually bundle all the steps of transaction processing into one simple command: creating messages, generating transactions and broadcasting. For concrete examples, see the [Interacting with a Node](/v0.47/user/run-node/interact-node) section. 
An example transaction made using CLI looks like: + NewTxBuilder() -``` -simd tx send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() + +signing.SignModeHandler +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTip(tip *tx.Tip) + +SetTimeoutHeight(height uint64) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} +) ``` -#### gRPC[​](#grpc "Direct link to gRPC") +`TxConfig` is an app-wide configuration for managing transactions. Most importantly, it holds the information about whether to sign each transaction with `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. By calling `txBuilder := txConfig.NewTxBuilder()`, a new `TxBuilder` will be created with the appropriate sign mode. + +Once `TxBuilder` is correctly populated with the setters exposed above, `TxConfig` will also take care of correctly encoding the bytes (again, either using `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`). Here's a pseudo-code snippet of how to generate and encode a transaction, using the `TxEncoder()` method: -[gRPC](https://grpc.io) is the main component for the Cosmos SDK's RPC layer. Its principal usage is in the context of modules' [`Query` services](/v0.47/build/building-modules/query-services). 
However, the Cosmos SDK also exposes a few other module-agnostic gRPC services, one of them being the `Tx` service: +```go +txBuilder := txConfig.NewTxBuilder() -proto/cosmos/tx/v1beta1/service.proto +txBuilder.SetMsgs(...) / and other setters on txBuilder +bz, err := txConfig.TxEncoder()(txBuilder.GetTx()) +/ bz are bytes to be broadcasted over the network ``` -loading... + +### Broadcasting the Transaction + +Once the transaction bytes are generated, there are currently three ways of broadcasting it. + +#### CLI + +Application developers create entry points to the application by creating a [command-line interface](/docs/sdk/v0.47/learn/advanced/cli), [gRPC and/or REST interface](/docs/sdk/v0.47/learn/advanced/grpc_rest), typically found in the application's `./cmd` folder. These interfaces allow users to interact with the application through command-line. + +For the [command-line interface](/docs/sdk/v0.47/documentation/module-system/module-interfaces#cli), module developers create subcommands to add as children to the application top-level transaction command `TxCmd`. CLI commands actually bundle all the steps of transaction processing into one simple command: creating messages, generating transactions and broadcasting. For concrete examples, see the [Interacting with a Node](/docs/sdk/v0.47/user/run-node/interact-node) section. An example transaction made using CLI looks like: + +```bash +simd tx send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/tx/v1beta1/service.proto) +#### gRPC + +[gRPC](https://grpc.io) is the main component for the Cosmos SDK's RPC layer. Its principal usage is in the context of modules' [`Query` services](/docs/sdk/v0.47/documentation/module-system/query-services). 
However, the Cosmos SDK also exposes a few other module-agnostic gRPC services, one of them being the `Tx` service: + +```go expandable +syntax = "proto3"; +package cosmos.tx.v1beta1; + +import "google/api/annotations.proto"; +import "cosmos/base/abci/v1beta1/abci.proto"; +import "cosmos/tx/v1beta1/tx.proto"; +import "cosmos/base/query/v1beta1/pagination.proto"; +import "tendermint/types/block.proto"; +import "tendermint/types/types.proto"; + +option go_package = "github.com/cosmos/cosmos-sdk/types/tx"; + +/ Service defines a gRPC service for interacting with transactions. +service Service { + / Simulate simulates executing a transaction for estimating gas usage. + rpc Simulate(SimulateRequest) + +returns (SimulateResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/simulate" + body: "*" +}; +} + / GetTx fetches a tx by hash. + rpc GetTx(GetTxRequest) + +returns (GetTxResponse) { + option (google.api.http).get = "/cosmos/tx/v1beta1/txs/{ + hash +}"; +} + / BroadcastTx broadcast transaction. + rpc BroadcastTx(BroadcastTxRequest) + +returns (BroadcastTxResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/txs" + body: "*" +}; +} + / GetTxsEvent fetches txs by event. + rpc GetTxsEvent(GetTxsEventRequest) + +returns (GetTxsEventResponse) { + option (google.api.http).get = "/cosmos/tx/v1beta1/txs"; +} + / GetBlockWithTxs fetches a block with decoded txs. + / + / Since: cosmos-sdk 0.45.2 + rpc GetBlockWithTxs(GetBlockWithTxsRequest) + +returns (GetBlockWithTxsResponse) { + option (google.api.http).get = "/cosmos/tx/v1beta1/txs/block/{ + height +}"; +} + / TxDecode decodes the transaction. + / + / Since: cosmos-sdk 0.47 + rpc TxDecode(TxDecodeRequest) + +returns (TxDecodeResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/decode" + body: "*" +}; +} + / TxEncode encodes the transaction. 
+ / + / Since: cosmos-sdk 0.47 + rpc TxEncode(TxEncodeRequest) + +returns (TxEncodeResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/encode" + body: "*" +}; +} + / TxEncodeAmino encodes an Amino transaction from JSON to encoded bytes. + / + / Since: cosmos-sdk 0.47 + rpc TxEncodeAmino(TxEncodeAminoRequest) + +returns (TxEncodeAminoResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/encode/amino" + body: "*" +}; +} + / TxDecodeAmino decodes an Amino transaction from encoded bytes to JSON. + / + / Since: cosmos-sdk 0.47 + rpc TxDecodeAmino(TxDecodeAminoRequest) + +returns (TxDecodeAminoResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/decode/amino" + body: "*" +}; +} +} + +/ GetTxsEventRequest is the request type for the Service.TxsByEvents +/ RPC method. +message GetTxsEventRequest { + / events is the list of transaction event type. + repeated string events = 1; + / pagination defines a pagination for the request. + / Deprecated post v0.46.x: use page and limit instead. + cosmos.base.query.v1beta1.PageRequest pagination = 2 [deprecated = true]; + + OrderBy order_by = 3; + / page is the page number to query, starts at 1. If not provided, will default to first page. + uint64 page = 4; + / limit is the total number of results to be returned in the result page. + / If left empty it will default to a value to be set by each app. + uint64 limit = 5; +} + +/ OrderBy defines the sorting order +enum OrderBy { + / ORDER_BY_UNSPECIFIED specifies an unknown sorting order. OrderBy defaults to ASC in this case. + ORDER_BY_UNSPECIFIED = 0; + / ORDER_BY_ASC defines ascending order + ORDER_BY_ASC = 1; + / ORDER_BY_DESC defines descending order + ORDER_BY_DESC = 2; +} + +/ GetTxsEventResponse is the response type for the Service.TxsByEvents +/ RPC method. +message GetTxsEventResponse { + / txs is the list of queried transactions. + repeated cosmos.tx.v1beta1.Tx txs = 1; + / tx_responses is the list of queried TxResponses. 
+ repeated cosmos.base.abci.v1beta1.TxResponse tx_responses = 2; + / pagination defines a pagination for the response. + / Deprecated post v0.46.x: use total instead. + cosmos.base.query.v1beta1.PageResponse pagination = 3 [deprecated = true]; + / total is total number of results available + uint64 total = 4; +} + +/ BroadcastTxRequest is the request type for the Service.BroadcastTxRequest +/ RPC method. +message BroadcastTxRequest { + / tx_bytes is the raw transaction. + bytes tx_bytes = 1; + BroadcastMode mode = 2; +} + +/ BroadcastMode specifies the broadcast mode for the TxService.Broadcast RPC method. +enum BroadcastMode { + / zero-value for mode ordering + BROADCAST_MODE_UNSPECIFIED = 0; + / DEPRECATED: use BROADCAST_MODE_SYNC instead, + / BROADCAST_MODE_BLOCK is not supported by the SDK from v0.47.x onwards. + BROADCAST_MODE_BLOCK = 1 [deprecated = true]; + / BROADCAST_MODE_SYNC defines a tx broadcasting mode where the client waits for + / a CheckTx execution response only. + BROADCAST_MODE_SYNC = 2; + / BROADCAST_MODE_ASYNC defines a tx broadcasting mode where the client returns + / immediately. + BROADCAST_MODE_ASYNC = 3; +} + +/ BroadcastTxResponse is the response type for the +/ Service.BroadcastTx method. +message BroadcastTxResponse { + / tx_response is the queried TxResponses. + cosmos.base.abci.v1beta1.TxResponse tx_response = 1; +} + +/ SimulateRequest is the request type for the Service.Simulate +/ RPC method. +message SimulateRequest { + / tx is the transaction to simulate. + / Deprecated. Send raw tx bytes instead. + cosmos.tx.v1beta1.Tx tx = 1 [deprecated = true]; + / tx_bytes is the raw transaction. + / + / Since: cosmos-sdk 0.43 + bytes tx_bytes = 2; +} + +/ SimulateResponse is the response type for the +/ Service.SimulateRPC method. +message SimulateResponse { + / gas_info is the information about gas used in the simulation. + cosmos.base.abci.v1beta1.GasInfo gas_info = 1; + / result is the result of the simulation. 
+ cosmos.base.abci.v1beta1.Result result = 2; +} + +/ GetTxRequest is the request type for the Service.GetTx +/ RPC method. +message GetTxRequest { + / hash is the tx hash to query, encoded as a hex string. + string hash = 1; +} + +/ GetTxResponse is the response type for the Service.GetTx method. +message GetTxResponse { + / tx is the queried transaction. + cosmos.tx.v1beta1.Tx tx = 1; + / tx_response is the queried TxResponses. + cosmos.base.abci.v1beta1.TxResponse tx_response = 2; +} + +/ GetBlockWithTxsRequest is the request type for the Service.GetBlockWithTxs +/ RPC method. +/ +/ Since: cosmos-sdk 0.45.2 +message GetBlockWithTxsRequest { + / height is the height of the block to query. + int64 height = 1; + / pagination defines a pagination for the request. + cosmos.base.query.v1beta1.PageRequest pagination = 2; +} + +/ GetBlockWithTxsResponse is the response type for the Service.GetBlockWithTxs method. +/ +/ Since: cosmos-sdk 0.45.2 +message GetBlockWithTxsResponse { + / txs are the transactions in the block. + repeated cosmos.tx.v1beta1.Tx txs = 1; + .tendermint.types.BlockID block_id = 2; + .tendermint.types.Block block = 3; + / pagination defines a pagination for the response. + cosmos.base.query.v1beta1.PageResponse pagination = 4; +} + +/ TxDecodeRequest is the request type for the Service.TxDecode +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxDecodeRequest { + / tx_bytes is the raw transaction. + bytes tx_bytes = 1; +} + +/ TxDecodeResponse is the response type for the +/ Service.TxDecode method. +/ +/ Since: cosmos-sdk 0.47 +message TxDecodeResponse { + / tx is the decoded transaction. + cosmos.tx.v1beta1.Tx tx = 1; +} + +/ TxEncodeRequest is the request type for the Service.TxEncode +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxEncodeRequest { + / tx is the transaction to encode. + cosmos.tx.v1beta1.Tx tx = 1; +} + +/ TxEncodeResponse is the response type for the +/ Service.TxEncode method. 
+/ +/ Since: cosmos-sdk 0.47 +message TxEncodeResponse { + / tx_bytes is the encoded transaction bytes. + bytes tx_bytes = 1; +} + +/ TxEncodeAminoRequest is the request type for the Service.TxEncodeAmino +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxEncodeAminoRequest { + string amino_json = 1; +} + +/ TxEncodeAminoResponse is the response type for the Service.TxEncodeAmino +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxEncodeAminoResponse { + bytes amino_binary = 1; +} + +/ TxDecodeAminoRequest is the request type for the Service.TxDecodeAmino +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxDecodeAminoRequest { + bytes amino_binary = 1; +} + +/ TxDecodeAminoResponse is the response type for the Service.TxDecodeAmino +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxDecodeAminoResponse { + string amino_json = 1; +} +``` The `Tx` service exposes a handful of utility functions, such as simulating a transaction or querying a transaction, and also one method to broadcast transactions. -Examples of broadcasting and simulating a transaction are shown [here](/v0.47/user/run-node/txs#programmatically-with-go). +Examples of broadcasting and simulating a transaction are shown [here](/docs/sdk/v0.47/user/run-node/txs#programmatically-with-go). -#### REST[​](#rest "Direct link to REST") +#### REST Each gRPC method has its corresponding REST endpoint, generated using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). Therefore, instead of using gRPC, you can also use HTTP to broadcast the same transaction, on the `POST /cosmos/tx/v1beta1/txs` endpoint. 
-An example can be seen [here](/v0.47/user/run-node/txs#using-rest) +An example can be seen [here](/docs/sdk/v0.47/user/run-node/txs#using-rest) -#### CometBFT RPC[​](#cometbft-rpc "Direct link to CometBFT RPC") +#### CometBFT RPC The three methods presented above are actually higher abstractions over the CometBFT RPC `/broadcast_tx_{async,sync,commit}` endpoints, documented [here](https://docs.cometbft.com/v0.37/core/rpc). This means that you can use the CometBFT RPC endpoints directly to broadcast the transaction, if you wish so. diff --git a/docs/sdk/v0.47/learn/advanced/upgrade.mdx b/docs/sdk/v0.47/learn/advanced/upgrade.mdx index 40847457..0f0d5389 100644 --- a/docs/sdk/v0.47/learn/advanced/upgrade.mdx +++ b/docs/sdk/v0.47/learn/advanced/upgrade.mdx @@ -1,112 +1,167 @@ --- -title: "In-Place Store Migrations" -description: "Version: v0.47" +title: In-Place Store Migrations --- - Read and understand all the in-place store migration documentation before you run a migration on a live chain. + Read and understand all the in-place store migration documentation before you + run a migration on a live chain. - - Upgrade your app modules smoothly with custom in-place store migration logic. - +## Synopsis + +Upgrade your app modules smoothly with custom in-place store migration logic. The Cosmos SDK uses two methods to perform upgrades: -* Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file. +- Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file. -* Perform upgrades in place, which significantly decrease the upgrade time for chains with a larger state. Use the [Module Upgrade Guide](/v0.47/build/building-modules/upgrade) to set up your application modules to take advantage of in-place upgrades. 
+- Perform upgrades in place, which significantly decrease the upgrade time for chains with a larger state. Use the [Module Upgrade Guide](/docs/sdk/v0.47/documentation/application-framework/app-upgrade) to set up your application modules to take advantage of in-place upgrades. This document provides steps to use the In-Place Store Migrations upgrade method. -## Tracking Module Versions[​](#tracking-module-versions "Direct link to Tracking Module Versions") +## Tracking Module Versions Each module gets assigned a consensus version by the module developer. The consensus version serves as the breaking change version of the module. The Cosmos SDK keeps track of all module consensus versions in the x/upgrade `VersionMap` store. During an upgrade, the difference between the old `VersionMap` stored in state and the new `VersionMap` is calculated by the Cosmos SDK. For each identified difference, the module-specific migrations are run and the respective consensus version of each upgraded module is incremented. -### Consensus Version[​](#consensus-version "Direct link to Consensus Version") +### Consensus Version The consensus version is defined on each app module by the module developer and serves as the breaking change version of the module. The consensus version informs the Cosmos SDK on which modules need to be upgraded. For example, if the bank module was version 2 and an upgrade introduces bank module 3, the Cosmos SDK upgrades the bank module and runs the "version 2 to 3" migration script. -### Version Map[​](#version-map "Direct link to Version Map") +### Version Map The version map is a mapping of module names to consensus versions. The map is persisted to x/upgrade's state for use during in-place migrations. When migrations finish, the updated version map is persisted in the state. -## Upgrade Handlers[​](#upgrade-handlers "Direct link to Upgrade Handlers") +## Upgrade Handlers Upgrades use an `UpgradeHandler` to facilitate migrations. 
The `UpgradeHandler` functions implemented by the app developer must conform to the following function signature. These functions retrieve the `VersionMap` from x/upgrade's state and return the new `VersionMap` to be stored in x/upgrade after the upgrade. The diff between the two `VersionMap`s determines which modules need upgrading. -``` +```go type UpgradeHandler func(ctx sdk.Context, plan Plan, fromVM VersionMap) (VersionMap, error) ``` Inside these functions, you must perform any upgrade logic to include in the provided `plan`. All upgrade handler functions must end with the following line of code: -``` - return app.mm.RunMigrations(ctx, cfg, fromVM) +```go +return app.mm.RunMigrations(ctx, cfg, fromVM) ``` -## Running Migrations[​](#running-migrations "Direct link to Running Migrations") +## Running Migrations Migrations are run inside of an `UpgradeHandler` using `app.mm.RunMigrations(ctx, cfg, vm)`. The `UpgradeHandler` functions describe the functionality to occur during an upgrade. The `RunMigration` function loops through the `VersionMap` argument and runs the migration scripts for all versions that are less than the versions of the new binary app module. After the migrations are finished, a new `VersionMap` is returned to persist the upgraded module versions to state. -``` -cfg := module.NewConfigurator(...)app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // ... // additional upgrade logic // ... // returns a VersionMap with the updated module ConsensusVersions return app.mm.RunMigrations(ctx, fromVM)}) +```go +cfg := module.NewConfigurator(...) + +app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + + / ... + / additional upgrade logic + / ... 
+ + / returns a VersionMap with the updated module ConsensusVersions + return app.mm.RunMigrations(ctx, fromVM) +}) ``` -To learn more about configuring migration scripts for your modules, see the [Module Upgrade Guide](/v0.47/build/building-modules/upgrade). +To learn more about configuring migration scripts for your modules, see the [Module Upgrade Guide](/docs/sdk/v0.47/documentation/application-framework/app-upgrade). -### Order Of Migrations[​](#order-of-migrations "Direct link to Order Of Migrations") +### Order Of Migrations By default, all migrations are run in module name alphabetical ascending order, except `x/auth` which is run last. The reason is state dependencies between x/auth and other modules (you can read more in [issue #10606](https://github.com/cosmos/cosmos-sdk/issues/10606)). If you want to change the order of migration, then you should call `app.mm.SetOrderMigrations(module1, module2, ...)` in your app.go file. The function will panic if you forget to include a module in the argument list. -## Adding New Modules During Upgrades[​](#adding-new-modules-during-upgrades "Direct link to Adding New Modules During Upgrades") +## Adding New Modules During Upgrades You can introduce entirely new modules to the application during an upgrade. New modules are recognized because they have not yet been registered in `x/upgrade`'s `VersionMap` store. In this case, `RunMigrations` calls the `InitGenesis` function from the corresponding module to set up its initial state. -### Add StoreUpgrades for New Modules[​](#add-storeupgrades-for-new-modules "Direct link to Add StoreUpgrades for New Modules") +### Add StoreUpgrades for New Modules All chains preparing to run in-place store migrations will need to manually add store upgrades for new modules and then configure the store loader to apply those upgrades. This ensures that the new module's stores are added to the multistore before the migrations begin. 
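The selection and ordering rules described above — diff the stored `VersionMap` against the binary's versions, migrate modules whose stored version is lower, call `InitGenesis` for modules not yet in the map, and run everything alphabetically with `x/auth` last — can be modeled with plain Go. This is an illustrative model of the behavior, not the SDK's implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// versionMap models the x/upgrade VersionMap: module name -> consensus version.
type versionMap map[string]uint64

// planMigrations models what RunMigrations decides from the stored
// VersionMap (fromVM) and the versions compiled into the new binary (toVM):
// modules with a lower stored version are migrated, modules missing from
// fromVM get InitGenesis, and the default order is alphabetical ascending
// with "auth" (x/auth) forced to run last.
func planMigrations(fromVM, toVM versionMap) (migrate, initGenesis []string) {
	names := make([]string, 0, len(toVM))
	for name := range toVM {
		names = append(names, name)
	}
	sort.Slice(names, func(i, j int) bool {
		if names[i] == "auth" {
			return false // auth sorts after everything else
		}
		if names[j] == "auth" {
			return true
		}
		return names[i] < names[j]
	})
	for _, name := range names {
		from, known := fromVM[name]
		switch {
		case !known:
			initGenesis = append(initGenesis, name) // new module: run InitGenesis
		case from < toVM[name]:
			migrate = append(migrate, name) // run the from -> to migration scripts
		}
	}
	return migrate, initGenesis
}

func main() {
	fromVM := versionMap{"auth": 2, "bank": 2, "staking": 2}
	toVM := versionMap{"auth": 3, "bank": 3, "staking": 2, "foo": 1}
	migrate, initGen := planMigrations(fromVM, toVM)
	fmt.Println(migrate, initGen) // [bank auth] [foo]
}
```

Note how `bank` migrates before `auth` even though `auth` sorts first alphabetically, and how the brand-new `foo` module is routed to `InitGenesis` rather than a migration script.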
-``` -upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()if err != nil { panic(err)}if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { storeUpgrades := storetypes.StoreUpgrades{ // add store upgrades for new modules // Example: // Added: []string{"foo", "bar"}, // ... } // configure store loader that checks if version == upgradeHeight and applies store upgrades app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))} +```go expandable +upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk() + if err != nil { + panic(err) +} + if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := storetypes.StoreUpgrades{ + / add store upgrades for new modules + / Example: + / Added: []string{"foo", "bar" +}, + / ... +} + + / configure store loader that checks if version == upgradeHeight and applies store upgrades + app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} ``` -## Genesis State[​](#genesis-state "Direct link to Genesis State") +## Genesis State When starting a new chain, the consensus version of each module MUST be saved to state during the application's genesis. To save the consensus version, add the following line to the `InitChainer` method in `app.go`: -``` -func (app *MyApp) InitChainer(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain { ...+ app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap()) ...} +```diff +func (app *MyApp) InitChainer(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain { + ... ++ app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap()) + ... +} ``` This information is used by the Cosmos SDK to detect when modules with newer versions are introduced to the app. For a new module `foo`, `InitGenesis` is called by `RunMigration` only when `foo` is registered in the module manager but it's not set in the `fromVM`. 
Therefore, if you want to skip `InitGenesis` when a new module is added to the app, then you should set its module version in `fromVM` to the module consensus version: -``` -app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // ... // Set foo's version to the latest ConsensusVersion in the VersionMap. // This will skip running InitGenesis on Foo fromVM[foo.ModuleName] = foo.AppModule{}.ConsensusVersion() return app.mm.RunMigrations(ctx, fromVM)}) +```go +app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / ... + + / Set foo's version to the latest ConsensusVersion in the VersionMap. + / This will skip running InitGenesis on Foo + fromVM[foo.ModuleName] = foo.AppModule{ +}.ConsensusVersion() + +return app.mm.RunMigrations(ctx, fromVM) +}) ``` -### Overwriting Genesis Functions[​](#overwriting-genesis-functions "Direct link to Overwriting Genesis Functions") +### Overwriting Genesis Functions The Cosmos SDK offers modules that the application developer can import in their app. These modules often have an `InitGenesis` function already defined. You can write your own `InitGenesis` function for an imported module. To do this, manually trigger your custom genesis function in the upgrade handler. - You MUST manually set the consensus version in the version map passed to the `UpgradeHandler` function. Without this, the SDK will run the Module's existing `InitGenesis` code even if you triggered your custom function in the `UpgradeHandler`. + You MUST manually set the consensus version in the version map passed to the + `UpgradeHandler` function. Without this, the SDK will run the Module's + existing `InitGenesis` code even if you triggered your custom function in the + `UpgradeHandler`. 
-``` -import foo "github.com/my/module/foo"app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // Register the consensus version in the version map // to avoid the SDK from triggering the default // InitGenesis function. fromVM["foo"] = foo.AppModule{}.ConsensusVersion() // Run custom InitGenesis for foo app.mm["foo"].InitGenesis(ctx, app.appCodec, myCustomGenesisState) return app.mm.RunMigrations(ctx, cfg, fromVM)}) +```go expandable +import foo "github.com/my/module/foo" + +app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + + / Register the consensus version in the version map + / to avoid the SDK from triggering the default + / InitGenesis function. + fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() + + / Run custom InitGenesis for foo + app.mm["foo"].InitGenesis(ctx, app.appCodec, myCustomGenesisState) + +return app.mm.RunMigrations(ctx, cfg, fromVM) +}) ``` -## Syncing a Full Node to an Upgraded Blockchain[​](#syncing-a-full-node-to-an-upgraded-blockchain "Direct link to Syncing a Full Node to an Upgraded Blockchain") +## Syncing a Full Node to an Upgraded Blockchain You can sync a full node to an existing blockchain which has been upgraded using Cosmovisor To successfully sync, you must start with the initial binary that the blockchain started with at genesis. If all Software Upgrade Plans contain binary instruction, then you can run Cosmovisor with auto-download option to automatically handle downloading and switching to the binaries associated with each sequential upgrade. Otherwise, you need to manually provide all binaries to Cosmovisor. -To learn more about Cosmovisor, see the [Cosmovisor Quick Start](/v0.47/build/tooling/cosmovisor). 
+To learn more about Cosmovisor, see the [Cosmovisor Quick Start](/docs/sdk/v0.47/documentation/operations/tooling/cosmovisor). diff --git a/docs/sdk/v0.47/learn/beginner/accounts.mdx b/docs/sdk/v0.47/learn/beginner/accounts.mdx index cb75bbf5..40cef618 100644 --- a/docs/sdk/v0.47/learn/beginner/accounts.mdx +++ b/docs/sdk/v0.47/learn/beginner/accounts.mdx @@ -1,31 +1,68 @@ --- -title: "Accounts" -description: "Version: v0.47" +title: Accounts --- - - This document describes the in-built account and public key system of the Cosmos SDK. - +## Synopsis + +This document describes the in-built account and public key system of the Cosmos SDK. - ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of a Cosmos SDK Application](/v0.47/learn/beginner/overview-app) +### Pre-requisite Readings + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.47/learn/beginner/overview-app) + -## Account Definition[​](#account-definition "Direct link to Account Definition") +## Account Definition -In the Cosmos SDK, an *account* designates a pair of *public key* `PubKey` and *private key* `PrivKey`. The `PubKey` can be derived to generate various `Addresses`, which are used to identify users (among other parties) in the application. `Addresses` are also associated with [`message`s](/v0.47/build/building-modules/messages-and-queries#messages) to identify the sender of the `message`. The `PrivKey` is used to generate [digital signatures](#keys-accounts-addresses-and-signatures) to prove that an `Address` associated with the `PrivKey` approved of a given `message`. +In the Cosmos SDK, an _account_ designates a pair of _public key_ `PubKey` and _private key_ `PrivKey`. The `PubKey` can be derived to generate various `Addresses`, which are used to identify users (among other parties) in the application. 
`Addresses` are also associated with [`message`s](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages) to identify the sender of the `message`. The `PrivKey` is used to generate [digital signatures](#keys-accounts-addresses-and-signatures) to prove that an `Address` associated with the `PrivKey` approved of a given `message`. For HD key derivation the Cosmos SDK uses a standard called [BIP32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki). The BIP32 allows users to create an HD wallet (as specified in [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)) - a set of accounts derived from an initial secret seed. A seed is usually created from a 12- or 24-word mnemonic. A single seed can derive any number of `PrivKey`s using a one-way cryptographic function. Then, a `PubKey` can be derived from the `PrivKey`. Naturally, the mnemonic is the most sensitive information, as private keys can always be re-generated if the mnemonic is preserved. 
-``` - Account 0 Account 1 Account 2+------------------+ +------------------+ +------------------+| | | | | || Address 0 | | Address 1 | | Address 2 || ^ | | ^ | | ^ || | | | | | | | || | | | | | | | || | | | | | | | || + | | + | | + || Public key 0 | | Public key 1 | | Public key 2 || ^ | | ^ | | ^ || | | | | | | | || | | | | | | | || | | | | | | | || + | | + | | + || Private key 0 | | Private key 1 | | Private key 2 || ^ | | ^ | | ^ |+------------------+ +------------------+ +------------------+ | | | | | | | | | +--------------------------------------------------------------------+ | | +---------+---------+ | | | Master PrivKey | | | +-------------------+ | | +---------+---------+ | | | Mnemonic (Seed) | | | +-------------------+ +```text expandable + Account 0 Account 1 Account 2 + ++------------------+ +------------------+ +------------------+ +| | | | | | +| Address 0 | | Address 1 | | Address 2 | +| ^ | | ^ | | ^ | +| | | | | | | | | +| | | | | | | | | +| | | | | | | | | +| + | | + | | + | +| Public key 0 | | Public key 1 | | Public key 2 | +| ^ | | ^ | | ^ | +| | | | | | | | | +| | | | | | | | | +| | | | | | | | | +| + | | + | | + | +| Private key 0 | | Private key 1 | | Private key 2 | +| ^ | | ^ | | ^ | ++------------------+ +------------------+ +------------------+ + | | | + | | | + | | | + +--------------------------------------------------------------------+ + | + | + +---------+---------+ + | | + | Master PrivKey | + | | + +-------------------+ + | + | + +---------+---------+ + | | + | Mnemonic (Seed) | + | | + +-------------------+ ``` In the Cosmos SDK, keys are stored and managed by using an object called a [`Keyring`](#keyring). -## Keys, accounts, addresses, and signatures[​](#keys-accounts-addresses-and-signatures "Direct link to Keys, accounts, addresses, and signatures") +## Keys, accounts, addresses, and signatures The principal way of authenticating a user is done using [digital signatures](https://en.wikipedia.org/wiki/Digital_signature). 
Users sign transactions using their own private key. Signature verification is done with the associated public key. For on-chain signature verification purposes, we store the public key in an `Account` object (alongside other data required for a proper transaction validation). @@ -33,39 +70,873 @@ In the node, all data is stored using Protocol Buffers serialization. The Cosmos SDK supports the following digital key schemes for creating digital signatures: -* `secp256k1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256k1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keys/secp256k1/secp256k1.go). -* `secp256r1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keys/secp256r1/pubkey.go), -* `tm-ed25519`, as implemented in the [Cosmos SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keys/ed25519/ed25519.go). This scheme is supported only for the consensus validation. +- `secp256k1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256k1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keys/secp256k1/secp256k1.go). +- `secp256r1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keys/secp256r1/pubkey.go), +- `tm-ed25519`, as implemented in the [Cosmos SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keys/ed25519/ed25519.go). This scheme is supported only for the consensus validation. 
| | Address length in bytes | Public key length in bytes | Used for transaction authentication | Used for consensus (cometbft) | | :----------: | :---------------------: | :------------------------: | :---------------------------------: | :---------------------------: | -| `secp256k1` | 20 | 33 | yes | no | -| `secp256r1` | 32 | 33 | yes | no | -| `tm-ed25519` | -- not used -- | 32 | no | yes | +| `secp256k1` | 20 | 33 | yes | no | +| `secp256r1` | 32 | 33 | yes | no | +| `tm-ed25519` | -- not used -- | 32 | no | yes | -## Addresses[​](#addresses "Direct link to Addresses") +## Addresses `Addresses` and `PubKey`s are both public information that identifies actors in the application. `Account` is used to store authentication information. The basic account implementation is provided by a `BaseAccount` object. Each account is identified using `Address` which is a sequence of bytes derived from a public key. In the Cosmos SDK, we define 3 types of addresses that specify a context where an account is used: -* `AccAddress` identifies users (the sender of a `message`). -* `ValAddress` identifies validator operators. -* `ConsAddress` identifies validator nodes that are participating in consensus. Validator nodes are derived using the **`ed25519`** curve. +- `AccAddress` identifies users (the sender of a `message`). +- `ValAddress` identifies validator operators. +- `ConsAddress` identifies validator nodes that are participating in consensus. Validator nodes are derived using the **`ed25519`** curve. 
These types implement the `Address` interface: -types/address.go +```go expandable +package types + +import ( + + "bytes" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "strings" + "sync" + "github.com/hashicorp/golang-lru/simplelru" + "sigs.k8s.io/yaml" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/internal/conv" + "github.com/cosmos/cosmos-sdk/types/address" + "github.com/cosmos/cosmos-sdk/types/bech32" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +const ( + / Constants defined here are the defaults value for address. + / You can use the specific values for your project. + / Add the follow lines to the `main()` of your server. + / + / config := sdk.GetConfig() + / config.SetBech32PrefixForAccount(yourBech32PrefixAccAddr, yourBech32PrefixAccPub) + / config.SetBech32PrefixForValidator(yourBech32PrefixValAddr, yourBech32PrefixValPub) + / config.SetBech32PrefixForConsensusNode(yourBech32PrefixConsAddr, yourBech32PrefixConsPub) + / config.SetPurpose(yourPurpose) + / config.SetCoinType(yourCoinType) + / config.Seal() + + / Bech32MainPrefix defines the main SDK Bech32 prefix of an account's address + Bech32MainPrefix = "cosmos" + + / Purpose is the ATOM purpose as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +Purpose = 44 + + / CoinType is the ATOM coin type as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +CoinType = 118 + + / FullFundraiserPath is the parts of the BIP44 HD path that are fixed by + / what we used during the ATOM fundraiser. 
+ FullFundraiserPath = "m/44'/118'/0'/0/0" + + / PrefixAccount is the prefix for account keys + PrefixAccount = "acc" + / PrefixValidator is the prefix for validator keys + PrefixValidator = "val" + / PrefixConsensus is the prefix for consensus keys + PrefixConsensus = "cons" + / PrefixPublic is the prefix for public keys + PrefixPublic = "pub" + / PrefixOperator is the prefix for operator keys + PrefixOperator = "oper" + + / PrefixAddress is the prefix for addresses + PrefixAddress = "addr" + + / Bech32PrefixAccAddr defines the Bech32 prefix of an account's address + Bech32PrefixAccAddr = Bech32MainPrefix + / Bech32PrefixAccPub defines the Bech32 prefix of an account's public key + Bech32PrefixAccPub = Bech32MainPrefix + PrefixPublic + / Bech32PrefixValAddr defines the Bech32 prefix of a validator's operator address + Bech32PrefixValAddr = Bech32MainPrefix + PrefixValidator + PrefixOperator + / Bech32PrefixValPub defines the Bech32 prefix of a validator's operator public key + Bech32PrefixValPub = Bech32MainPrefix + PrefixValidator + PrefixOperator + PrefixPublic + / Bech32PrefixConsAddr defines the Bech32 prefix of a consensus node address + Bech32PrefixConsAddr = Bech32MainPrefix + PrefixValidator + PrefixConsensus + / Bech32PrefixConsPub defines the Bech32 prefix of a consensus node public key + Bech32PrefixConsPub = Bech32MainPrefix + PrefixValidator + PrefixConsensus + PrefixPublic +) + +/ cache variables +var ( + / AccAddress.String() + +is expensive and if unoptimized dominantly showed up in profiles, + / yet has no mechanisms to trivially cache the result given that AccAddress is a []byte type. + accAddrMu sync.Mutex + accAddrCache *simplelru.LRU + consAddrMu sync.Mutex + consAddrCache *simplelru.LRU + valAddrMu sync.Mutex + valAddrCache *simplelru.LRU +) + +/ sentinel errors +var ( + ErrEmptyHexAddress = errors.New("decoding address from hex string failed: empty address") +) + +func init() { + var err error + / in total the cache size is 61k entries. 
Key is 32 bytes and value is around 50-70 bytes. + / That will make around 92 * 61k * 2 (LRU) + +bytes ~ 11 MB + if accAddrCache, err = simplelru.NewLRU(60000, nil); err != nil { + panic(err) +} + if consAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} + if valAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} +} + +/ Address is a common interface for different types of addresses used by the SDK +type Address interface { + Equals(Address) + +bool + Empty() + +bool + Marshal() ([]byte, error) + +MarshalJSON() ([]byte, error) + +Bytes() []byte + String() + +string + Format(s fmt.State, verb rune) +} + +/ Ensure that different address types implement the interface +var ( + _ Address = AccAddress{ +} + _ Address = ValAddress{ +} + _ Address = ConsAddress{ +} +) + +/ ---------------------------------------------------------------------------- +/ account +/ ---------------------------------------------------------------------------- + +/ AccAddress a wrapper around bytes meant to represent an account address. +/ When marshaled to a string or JSON, it uses Bech32. +type AccAddress []byte + +/ AccAddressFromHexUnsafe creates an AccAddress from a HEX-encoded string. +/ +/ Note, this function is considered unsafe as it may produce an AccAddress from +/ otherwise invalid input, such as a transaction hash. Please use +/ AccAddressFromBech32. +func AccAddressFromHexUnsafe(address string) (addr AccAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return AccAddress(bz), err +} + +/ VerifyAddressFormat verifies that the provided bytes form a valid address +/ according to the default address rules or a custom address verifier set by +/ GetConfig().SetAddressVerifier(). 
+/ TODO make an issue to get rid of global Config +/ ref: https://github.com/cosmos/cosmos-sdk/issues/9690 +func VerifyAddressFormat(bz []byte) + +error { + verifier := GetConfig().GetAddressVerifier() + if verifier != nil { + return verifier(bz) +} + if len(bz) == 0 { + return sdkerrors.Wrap(sdkerrors.ErrUnknownAddress, "addresses cannot be empty") +} + if len(bz) > address.MaxAddrLen { + return sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "address max length is %d, got %d", address.MaxAddrLen, len(bz)) +} + +return nil +} + +/ MustAccAddressFromBech32 calls AccAddressFromBech32 and panics on error. +func MustAccAddressFromBech32(address string) + +AccAddress { + addr, err := AccAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ AccAddressFromBech32 creates an AccAddress from a Bech32 string. +func AccAddressFromBech32(address string) (addr AccAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return AccAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixAccAddr := GetConfig().GetBech32AccountAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixAccAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return AccAddress(bz), nil +} + +/ Returns boolean for whether two AccAddresses are Equal +func (aa AccAddress) + +Equals(aa2 Address) + +bool { + if aa.Empty() && aa2.Empty() { + return true +} + +return bytes.Equal(aa.Bytes(), aa2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (aa AccAddress) + +Empty() + +bool { + return len(aa) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (aa AccAddress) + +Marshal() ([]byte, error) { + return aa, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. 
+func (aa *AccAddress) + +Unmarshal(data []byte) + +error { + *aa = data + return nil +} -``` -loading... -``` +/ MarshalJSON marshals to JSON using Bech32. +func (aa AccAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(aa.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (aa AccAddress) + +MarshalYAML() (interface{ +}, error) { + return aa.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ UnmarshalYAML unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ Bytes returns the raw address bytes. +func (aa AccAddress) + +Bytes() []byte { + return aa +} + +/ String implements the Stringer interface. +func (aa AccAddress) + +String() + +string { + if aa.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(aa) + +accAddrMu.Lock() + +defer accAddrMu.Unlock() + +addr, ok := accAddrCache.Get(key) + if ok { + return addr.(string) +} + +return cacheBech32Addr(GetConfig().GetBech32AccountAddrPrefix(), aa, accAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. 
+ +func (aa AccAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(aa.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", aa))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(aa)))) +} +} + +/ ---------------------------------------------------------------------------- +/ validator operator +/ ---------------------------------------------------------------------------- + +/ ValAddress defines a wrapper around bytes meant to present a validator's +/ operator. When marshaled to a string or JSON, it uses Bech32. +type ValAddress []byte + +/ ValAddressFromHex creates a ValAddress from a hex string. +func ValAddressFromHex(address string) (addr ValAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ValAddress(bz), err +} + +/ ValAddressFromBech32 creates a ValAddress from a Bech32 string. +func ValAddressFromBech32(address string) (addr ValAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ValAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixValAddr := GetConfig().GetBech32ValidatorAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixValAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ValAddress(bz), nil +} + +/ Returns boolean for whether two ValAddresses are Equal +func (va ValAddress) + +Equals(va2 Address) + +bool { + if va.Empty() && va2.Empty() { + return true +} + +return bytes.Equal(va.Bytes(), va2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (va ValAddress) + +Empty() + +bool { + return len(va) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (va ValAddress) + +Marshal() ([]byte, error) { + return va, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. 
+func (va *ValAddress) + +Unmarshal(data []byte) + +error { + *va = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (va ValAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(va.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (va ValAddress) + +MarshalYAML() (interface{ +}, error) { + return va.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ Bytes returns the raw address bytes. +func (va ValAddress) + +Bytes() []byte { + return va +} + +/ String implements the Stringer interface. +func (va ValAddress) + +String() + +string { + if va.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(va) + +valAddrMu.Lock() + +defer valAddrMu.Unlock() + +addr, ok := valAddrCache.Get(key) + if ok { + return addr.(string) +} + +return cacheBech32Addr(GetConfig().GetBech32ValidatorAddrPrefix(), va, valAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. 
+ +func (va ValAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(va.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", va))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(va)))) +} +} + +/ ---------------------------------------------------------------------------- +/ consensus node +/ ---------------------------------------------------------------------------- -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/address.go#L108-L124) +/ ConsAddress defines a wrapper around bytes meant to present a consensus node. +/ When marshaled to a string or JSON, it uses Bech32. +type ConsAddress []byte -Address construction algorithm is defined in [ADR-28](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md). Here is the standard way to obtain an account address from a `pub` public key: +/ ConsAddressFromHex creates a ConsAddress from a hex string. +func ConsAddressFromHex(address string) (addr ConsAddress, err error) { + bz, err := addressBytesFromHexString(address) +return ConsAddress(bz), err +} + +/ ConsAddressFromBech32 creates a ConsAddress from a Bech32 string. 
+func ConsAddressFromBech32(address string) (addr ConsAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ConsAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixConsAddr := GetConfig().GetBech32ConsensusAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixConsAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ConsAddress(bz), nil +} + +/ get ConsAddress from pubkey +func GetConsAddress(pubkey cryptotypes.PubKey) + +ConsAddress { + return ConsAddress(pubkey.Address()) +} + +/ Returns boolean for whether two ConsAddress are Equal +func (ca ConsAddress) + +Equals(ca2 Address) + +bool { + if ca.Empty() && ca2.Empty() { + return true +} + +return bytes.Equal(ca.Bytes(), ca2.Bytes()) +} + +/ Returns boolean for whether an ConsAddress is empty +func (ca ConsAddress) + +Empty() + +bool { + return len(ca) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (ca ConsAddress) + +Marshal() ([]byte, error) { + return ca, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (ca *ConsAddress) + +Unmarshal(data []byte) + +error { + *ca = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (ca ConsAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(ca.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (ca ConsAddress) + +MarshalYAML() (interface{ +}, error) { + return ca.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. 
+func (ca *ConsAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ Bytes returns the raw address bytes. +func (ca ConsAddress) + +Bytes() []byte { + return ca +} + +/ String implements the Stringer interface. +func (ca ConsAddress) + +String() + +string { + if ca.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(ca) + +consAddrMu.Lock() + +defer consAddrMu.Unlock() + +addr, ok := consAddrCache.Get(key) + if ok { + return addr.(string) +} + +return cacheBech32Addr(GetConfig().GetBech32ConsensusAddrPrefix(), ca, consAddrCache, key) +} + +/ Bech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. Returns an error if the bech32 conversion +/ fails or the prefix is empty. +func Bech32ifyAddressBytes(prefix string, bs []byte) (string, error) { + if len(bs) == 0 { + return "", nil +} + if len(prefix) == 0 { + return "", errors.New("prefix cannot be empty") +} + +return bech32.ConvertAndEncode(prefix, bs) +} + +/ MustBech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. It panics if the bech32 conversion +/ fails or the prefix is empty. 
+func MustBech32ifyAddressBytes(prefix string, bs []byte) + +string { + s, err := Bech32ifyAddressBytes(prefix, bs) + if err != nil { + panic(err) +} + +return s +} + +/ Format implements the fmt.Formatter interface. + +func (ca ConsAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(ca.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", ca))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(ca)))) +} +} + +/ ---------------------------------------------------------------------------- +/ auxiliary +/ ---------------------------------------------------------------------------- + +var errBech32EmptyAddress = errors.New("decoding Bech32 address failed: must provide a non empty address") + +/ GetFromBech32 decodes a bytestring from a Bech32 encoded string. +func GetFromBech32(bech32str, prefix string) ([]byte, error) { + if len(bech32str) == 0 { + return nil, errBech32EmptyAddress +} + +hrp, bz, err := bech32.DecodeAndConvert(bech32str) + if err != nil { + return nil, err +} + if hrp != prefix { + return nil, fmt.Errorf("invalid Bech32 prefix; expected %s, got %s", prefix, hrp) +} + +return bz, nil +} + +func addressBytesFromHexString(address string) ([]byte, error) { + if len(address) == 0 { + return nil, ErrEmptyHexAddress +} + +return hex.DecodeString(address) +} + +/ cacheBech32Addr is not concurrency safe. Concurrent access to cache causes race condition. +func cacheBech32Addr(prefix string, addr []byte, cache *simplelru.LRU, cacheKey string) + +string { + bech32Addr, err := bech32.ConvertAndEncode(prefix, addr) + if err != nil { + panic(err) +} + +cache.Add(cacheKey, bech32Addr) + +return bech32Addr +} ``` + +Address construction algorithm is defined in [ADR-28](/docs/common/pages/adr-comprehensive#adr-028-public-key-addresses). 
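The `String()` methods in the listing above avoid re-encoding the same address by memoizing Bech32 strings behind a mutex-guarded LRU (`accAddrMu`/`accAddrCache` and friends, filled in by `cacheBech32Addr`). The pattern can be sketched in a self-contained way — here a plain map stands in for `simplelru.LRU` (so there is no eviction), and `encode` is a hypothetical placeholder for an expensive conversion such as `bech32.ConvertAndEncode`:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// encode stands in for an expensive conversion such as
// bech32.ConvertAndEncode. It is a hypothetical placeholder,
// not the SDK's actual encoding.
func encode(prefix string, addr []byte) string {
	return prefix + strings.ToUpper(fmt.Sprintf("%x", addr))
}

// cache memoizes encoded strings behind a mutex, mirroring the
// accAddrCache/accAddrMu pattern; a plain map replaces simplelru.LRU,
// so this sketch never evicts entries.
type cache struct {
	mu sync.Mutex
	m  map[string]string
}

func (c *cache) lookup(prefix string, addr []byte) string {
	key := string(addr) // raw bytes as the cache key, like conv.UnsafeBytesToStr
	c.mu.Lock()
	defer c.mu.Unlock()
	if s, ok := c.m[key]; ok {
		return s // cache hit: no re-encoding
	}
	s := encode(prefix, addr)
	c.m[key] = s
	return s
}

func main() {
	c := &cache{m: make(map[string]string)}
	a := []byte{0xde, 0xad}
	fmt.Println(c.lookup("cosmos", a)) // computed on first use
	fmt.Println(c.lookup("cosmos", a)) // served from the cache
}
```

The mutex is what makes the real `cacheBech32Addr` safe to call from `String()`: as its comment notes, the LRU itself is not concurrency-safe, so every reader takes the lock before touching the cache.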
+Here is the standard way to obtain an account address from a `pub` public key:
+
+```go
+sdk.AccAddress(pub.Address().Bytes())
+```
@@ -73,13 +944,846 @@ Of note, the `Marshal()` and `Bytes()` methods both return the same raw `[]byte`

For user interaction, addresses are formatted using [Bech32](https://en.bitcoin.it/wiki/Bech32) and implemented by the `String` method. Bech32 is the only supported format to use when interacting with a blockchain. The Bech32 human-readable part (Bech32 prefix) is used to denote an address type. Example:

-types/address.go
+```go expandable
+package types
+
+import (
+
+ "bytes"
+ "encoding/hex"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "strings"
+ "sync"
+ "github.com/hashicorp/golang-lru/simplelru"
+ "sigs.k8s.io/yaml"
+
+ cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+ "github.com/cosmos/cosmos-sdk/internal/conv"
+ "github.com/cosmos/cosmos-sdk/types/address"
+ "github.com/cosmos/cosmos-sdk/types/bech32"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+const (
+ / Constants defined here are the default values for addresses.
+ / You can use specific values for your project.
+ / Add the following lines to the `main()` of your server.
+ / + / config := sdk.GetConfig() + / config.SetBech32PrefixForAccount(yourBech32PrefixAccAddr, yourBech32PrefixAccPub) + / config.SetBech32PrefixForValidator(yourBech32PrefixValAddr, yourBech32PrefixValPub) + / config.SetBech32PrefixForConsensusNode(yourBech32PrefixConsAddr, yourBech32PrefixConsPub) + / config.SetPurpose(yourPurpose) + / config.SetCoinType(yourCoinType) + / config.Seal() + + / Bech32MainPrefix defines the main SDK Bech32 prefix of an account's address + Bech32MainPrefix = "cosmos" + + / Purpose is the ATOM purpose as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +Purpose = 44 + + / CoinType is the ATOM coin type as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +CoinType = 118 + + / FullFundraiserPath is the parts of the BIP44 HD path that are fixed by + / what we used during the ATOM fundraiser. + FullFundraiserPath = "m/44'/118'/0'/0/0" + + / PrefixAccount is the prefix for account keys + PrefixAccount = "acc" + / PrefixValidator is the prefix for validator keys + PrefixValidator = "val" + / PrefixConsensus is the prefix for consensus keys + PrefixConsensus = "cons" + / PrefixPublic is the prefix for public keys + PrefixPublic = "pub" + / PrefixOperator is the prefix for operator keys + PrefixOperator = "oper" + + / PrefixAddress is the prefix for addresses + PrefixAddress = "addr" + + / Bech32PrefixAccAddr defines the Bech32 prefix of an account's address + Bech32PrefixAccAddr = Bech32MainPrefix + / Bech32PrefixAccPub defines the Bech32 prefix of an account's public key + Bech32PrefixAccPub = Bech32MainPrefix + PrefixPublic + / Bech32PrefixValAddr defines the Bech32 prefix of a validator's operator address + Bech32PrefixValAddr = Bech32MainPrefix + PrefixValidator + PrefixOperator + / Bech32PrefixValPub defines the Bech32 prefix of a validator's operator public key + Bech32PrefixValPub = Bech32MainPrefix + PrefixValidator + PrefixOperator + PrefixPublic + / 
Bech32PrefixConsAddr defines the Bech32 prefix of a consensus node address + Bech32PrefixConsAddr = Bech32MainPrefix + PrefixValidator + PrefixConsensus + / Bech32PrefixConsPub defines the Bech32 prefix of a consensus node public key + Bech32PrefixConsPub = Bech32MainPrefix + PrefixValidator + PrefixConsensus + PrefixPublic +) + +/ cache variables +var ( + / AccAddress.String() + +is expensive and if unoptimized dominantly showed up in profiles, + / yet has no mechanisms to trivially cache the result given that AccAddress is a []byte type. + accAddrMu sync.Mutex + accAddrCache *simplelru.LRU + consAddrMu sync.Mutex + consAddrCache *simplelru.LRU + valAddrMu sync.Mutex + valAddrCache *simplelru.LRU +) + +/ sentinel errors +var ( + ErrEmptyHexAddress = errors.New("decoding address from hex string failed: empty address") +) + +func init() { + var err error + / in total the cache size is 61k entries. Key is 32 bytes and value is around 50-70 bytes. + / That will make around 92 * 61k * 2 (LRU) + +bytes ~ 11 MB + if accAddrCache, err = simplelru.NewLRU(60000, nil); err != nil { + panic(err) +} + if consAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} + if valAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} +} + +/ Address is a common interface for different types of addresses used by the SDK +type Address interface { + Equals(Address) + +bool + Empty() + +bool + Marshal() ([]byte, error) + +MarshalJSON() ([]byte, error) + +Bytes() []byte + String() + +string + Format(s fmt.State, verb rune) +} + +/ Ensure that different address types implement the interface +var ( + _ Address = AccAddress{ +} + _ Address = ValAddress{ +} + _ Address = ConsAddress{ +} +) + +/ ---------------------------------------------------------------------------- +/ account +/ ---------------------------------------------------------------------------- + +/ AccAddress a wrapper around bytes meant to represent an account address. 
+/ When marshaled to a string or JSON, it uses Bech32. +type AccAddress []byte + +/ AccAddressFromHexUnsafe creates an AccAddress from a HEX-encoded string. +/ +/ Note, this function is considered unsafe as it may produce an AccAddress from +/ otherwise invalid input, such as a transaction hash. Please use +/ AccAddressFromBech32. +func AccAddressFromHexUnsafe(address string) (addr AccAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return AccAddress(bz), err +} + +/ VerifyAddressFormat verifies that the provided bytes form a valid address +/ according to the default address rules or a custom address verifier set by +/ GetConfig().SetAddressVerifier(). +/ TODO make an issue to get rid of global Config +/ ref: https://github.com/cosmos/cosmos-sdk/issues/9690 +func VerifyAddressFormat(bz []byte) + +error { + verifier := GetConfig().GetAddressVerifier() + if verifier != nil { + return verifier(bz) +} + if len(bz) == 0 { + return sdkerrors.Wrap(sdkerrors.ErrUnknownAddress, "addresses cannot be empty") +} + if len(bz) > address.MaxAddrLen { + return sdkerrors.Wrapf(sdkerrors.ErrUnknownAddress, "address max length is %d, got %d", address.MaxAddrLen, len(bz)) +} + +return nil +} + +/ MustAccAddressFromBech32 calls AccAddressFromBech32 and panics on error. +func MustAccAddressFromBech32(address string) + +AccAddress { + addr, err := AccAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ AccAddressFromBech32 creates an AccAddress from a Bech32 string. 
+func AccAddressFromBech32(address string) (addr AccAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return AccAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixAccAddr := GetConfig().GetBech32AccountAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixAccAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return AccAddress(bz), nil +} + +/ Returns boolean for whether two AccAddresses are Equal +func (aa AccAddress) + +Equals(aa2 Address) + +bool { + if aa.Empty() && aa2.Empty() { + return true +} + +return bytes.Equal(aa.Bytes(), aa2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (aa AccAddress) + +Empty() + +bool { + return len(aa) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (aa AccAddress) + +Marshal() ([]byte, error) { + return aa, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (aa *AccAddress) + +Unmarshal(data []byte) + +error { + *aa = data + return nil +} -``` -loading... -``` +/ MarshalJSON marshals to JSON using Bech32. +func (aa AccAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(aa.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (aa AccAddress) + +MarshalYAML() (interface{ +}, error) { + return aa.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ UnmarshalYAML unmarshals from JSON assuming Bech32 encoding. 
+func (aa *AccAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ Bytes returns the raw address bytes. +func (aa AccAddress) + +Bytes() []byte { + return aa +} + +/ String implements the Stringer interface. +func (aa AccAddress) + +String() + +string { + if aa.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(aa) + +accAddrMu.Lock() + +defer accAddrMu.Unlock() + +addr, ok := accAddrCache.Get(key) + if ok { + return addr.(string) +} + +return cacheBech32Addr(GetConfig().GetBech32AccountAddrPrefix(), aa, accAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. + +func (aa AccAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(aa.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", aa))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(aa)))) +} +} + +/ ---------------------------------------------------------------------------- +/ validator operator +/ ---------------------------------------------------------------------------- + +/ ValAddress defines a wrapper around bytes meant to present a validator's +/ operator. When marshaled to a string or JSON, it uses Bech32. +type ValAddress []byte + +/ ValAddressFromHex creates a ValAddress from a hex string. +func ValAddressFromHex(address string) (addr ValAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ValAddress(bz), err +} + +/ ValAddressFromBech32 creates a ValAddress from a Bech32 string. 
+func ValAddressFromBech32(address string) (addr ValAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ValAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixValAddr := GetConfig().GetBech32ValidatorAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixValAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ValAddress(bz), nil +} + +/ Returns boolean for whether two ValAddresses are Equal +func (va ValAddress) + +Equals(va2 Address) + +bool { + if va.Empty() && va2.Empty() { + return true +} + +return bytes.Equal(va.Bytes(), va2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (va ValAddress) + +Empty() + +bool { + return len(va) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (va ValAddress) + +Marshal() ([]byte, error) { + return va, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (va *ValAddress) + +Unmarshal(data []byte) + +error { + *va = data + return nil +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/types/address.go#L281-L295) +/ MarshalJSON marshals to JSON using Bech32. +func (va ValAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(va.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (va ValAddress) + +MarshalYAML() (interface{ +}, error) { + return va.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. 
+func (va *ValAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ Bytes returns the raw address bytes. +func (va ValAddress) + +Bytes() []byte { + return va +} + +/ String implements the Stringer interface. +func (va ValAddress) + +String() + +string { + if va.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(va) + +valAddrMu.Lock() + +defer valAddrMu.Unlock() + +addr, ok := valAddrCache.Get(key) + if ok { + return addr.(string) +} + +return cacheBech32Addr(GetConfig().GetBech32ValidatorAddrPrefix(), va, valAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. + +func (va ValAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(va.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", va))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(va)))) +} +} + +/ ---------------------------------------------------------------------------- +/ consensus node +/ ---------------------------------------------------------------------------- + +/ ConsAddress defines a wrapper around bytes meant to present a consensus node. +/ When marshaled to a string or JSON, it uses Bech32. +type ConsAddress []byte + +/ ConsAddressFromHex creates a ConsAddress from a hex string. +func ConsAddressFromHex(address string) (addr ConsAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ConsAddress(bz), err +} + +/ ConsAddressFromBech32 creates a ConsAddress from a Bech32 string. 
+func ConsAddressFromBech32(address string) (addr ConsAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ConsAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixConsAddr := GetConfig().GetBech32ConsensusAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixConsAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ConsAddress(bz), nil +} + +/ get ConsAddress from pubkey +func GetConsAddress(pubkey cryptotypes.PubKey) + +ConsAddress { + return ConsAddress(pubkey.Address()) +} + +/ Returns boolean for whether two ConsAddress are Equal +func (ca ConsAddress) + +Equals(ca2 Address) + +bool { + if ca.Empty() && ca2.Empty() { + return true +} + +return bytes.Equal(ca.Bytes(), ca2.Bytes()) +} + +/ Returns boolean for whether an ConsAddress is empty +func (ca ConsAddress) + +Empty() + +bool { + return len(ca) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (ca ConsAddress) + +Marshal() ([]byte, error) { + return ca, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (ca *ConsAddress) + +Unmarshal(data []byte) + +error { + *ca = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (ca ConsAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(ca.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (ca ConsAddress) + +MarshalYAML() (interface{ +}, error) { + return ca.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. 
+func (ca *ConsAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ Bytes returns the raw address bytes. +func (ca ConsAddress) + +Bytes() []byte { + return ca +} + +/ String implements the Stringer interface. +func (ca ConsAddress) + +String() + +string { + if ca.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(ca) + +consAddrMu.Lock() + +defer consAddrMu.Unlock() + +addr, ok := consAddrCache.Get(key) + if ok { + return addr.(string) +} + +return cacheBech32Addr(GetConfig().GetBech32ConsensusAddrPrefix(), ca, consAddrCache, key) +} + +/ Bech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. Returns an error if the bech32 conversion +/ fails or the prefix is empty. +func Bech32ifyAddressBytes(prefix string, bs []byte) (string, error) { + if len(bs) == 0 { + return "", nil +} + if len(prefix) == 0 { + return "", errors.New("prefix cannot be empty") +} + +return bech32.ConvertAndEncode(prefix, bs) +} + +/ MustBech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. It panics if the bech32 conversion +/ fails or the prefix is empty. 
+func MustBech32ifyAddressBytes(prefix string, bs []byte) + +string { + s, err := Bech32ifyAddressBytes(prefix, bs) + if err != nil { + panic(err) +} + +return s +} + +/ Format implements the fmt.Formatter interface. + +func (ca ConsAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(ca.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", ca))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(ca)))) +} +} + +/ ---------------------------------------------------------------------------- +/ auxiliary +/ ---------------------------------------------------------------------------- + +var errBech32EmptyAddress = errors.New("decoding Bech32 address failed: must provide a non empty address") + +/ GetFromBech32 decodes a bytestring from a Bech32 encoded string. +func GetFromBech32(bech32str, prefix string) ([]byte, error) { + if len(bech32str) == 0 { + return nil, errBech32EmptyAddress +} + +hrp, bz, err := bech32.DecodeAndConvert(bech32str) + if err != nil { + return nil, err +} + if hrp != prefix { + return nil, fmt.Errorf("invalid Bech32 prefix; expected %s, got %s", prefix, hrp) +} + +return bz, nil +} + +func addressBytesFromHexString(address string) ([]byte, error) { + if len(address) == 0 { + return nil, ErrEmptyHexAddress +} + +return hex.DecodeString(address) +} + +/ cacheBech32Addr is not concurrency safe. Concurrent access to cache causes race condition. +func cacheBech32Addr(prefix string, addr []byte, cache *simplelru.LRU, cacheKey string) + +string { + bech32Addr, err := bech32.ConvertAndEncode(prefix, addr) + if err != nil { + panic(err) +} + +cache.Add(cacheKey, bech32Addr) + +return bech32Addr +} +``` | | Address Bech32 Prefix | | ------------------ | --------------------- | @@ -87,115 +1791,1661 @@ loading... 
| Validator Operator | cosmosvaloper |
| Consensus Nodes | cosmosvalcons |

-### Public Keys[​](#public-keys "Direct link to Public Keys")
+### Public Keys

Public keys in the Cosmos SDK are defined by the `cryptotypes.PubKey` interface. Since public keys are saved in a store, `cryptotypes.PubKey` extends the `proto.Message` interface:

-crypto/types/types.go
+```go expandable
+package types

-```
-loading...
-```
+import (

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/types/types.go#L8-L17)
+ proto "github.com/cosmos/gogoproto/proto"
+ tmcrypto "github.com/tendermint/tendermint/crypto"
+)

-A compressed format is used for `secp256k1` and `secp256r1` serialization.
+/ PubKey defines a public key and extends proto.Message.
+type PubKey interface {
+ proto.Message

-* The first byte is a `0x02` byte if the `y`-coordinate is the lexicographically largest of the two associated with the `x`-coordinate.
-* Otherwise the first byte is a `0x03`.
+ Address()

-This prefix is followed by the `x`-coordinate.
+Address
+ Bytes() []byte
+ VerifySignature(msg []byte, sig []byte)

-Public Keys are not used to reference accounts (or users) and in general are not used when composing transaction messages (with few exceptions: `MsgCreateValidator`, `Validator` and `Multisig` messages). For user interactions, `PubKey` is formatted using Protobufs JSON ([ProtoMarshalJSON](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/codec/json.go#L14-L34) function). Example: +bool
+ Equals(PubKey)

-crypto/keyring/output.go

+bool
+ Type()

-```
-loading...
-```

+string
+}

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keyring/output.go#L23-L39)

+/ LedgerPrivKey defines a private key that is not a proto message. For now,
+/ LedgerSecp256k1 keys are not converted to proto.Message yet, this is why
+/ they use LedgerPrivKey instead of PrivKey. All other keys must use PrivKey
+/ instead of LedgerPrivKey.
+/ TODO https://github.com/cosmos/cosmos-sdk/issues/7357. +type LedgerPrivKey interface { + Bytes() []byte + Sign(msg []byte) ([]byte, error) -## Keyring[​](#keyring "Direct link to Keyring") +PubKey() -A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface: +PubKey + Equals(LedgerPrivKey) + +bool + Type() + +string +} -crypto/keyring/keyring.go +/ PrivKey defines a private key and extends proto.Message. For now, it extends +/ LedgerPrivKey (see godoc for LedgerPrivKey). Ultimately, we should remove +/ LedgerPrivKey and add its methods here directly. +/ TODO https://github.com/cosmos/cosmos-sdk/issues/7357. +type PrivKey interface { + proto.Message + LedgerPrivKey +} +type ( + Address = tmcrypto.Address +) ``` -loading... + +A compressed format is used for `secp256k1` and `secp256r1` serialization. + +- The first byte is a `0x02` byte if the `y`-coordinate is the lexicographically largest of the two associated with the `x`-coordinate. +- Otherwise the first byte is a `0x03`. + +This prefix is followed by the `x`-coordinate. + +Public Keys are not used to reference accounts (or users) and in general are not used when composing transaction messages (with few exceptions: `MsgCreateValidator`, `Validator` and `Multisig` messages). +For user interactions, `PubKey` is formatted using Protobufs JSON ([ProtoMarshalJSON](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/codec/json.go#L14-L34) function). Example: + +```go expandable +package keyring + +import ( + + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ TODO: Move this file to client/keys +/ Use protobuf interface marshaler rather then generic JSON + +/ KeyOutput defines a structure wrapping around an Info object used for output +/ functionality. 
+type KeyOutput struct { + Name string `json:"name" yaml:"name"` + Type string `json:"type" yaml:"type"` + Address string `json:"address" yaml:"address"` + PubKey string `json:"pubkey" yaml:"pubkey"` + Mnemonic string `json:"mnemonic,omitempty" yaml:"mnemonic"` +} + +/ NewKeyOutput creates a default KeyOutput instance without Mnemonic, Threshold and PubKeys +func NewKeyOutput(name string, keyType KeyType, a sdk.Address, pk cryptotypes.PubKey) (KeyOutput, error) { /nolint:interfacer + apk, err := codectypes.NewAnyWithValue(pk) + if err != nil { + return KeyOutput{ +}, err +} + +bz, err := codec.ProtoMarshalJSON(apk, nil) + if err != nil { + return KeyOutput{ +}, err +} + +return KeyOutput{ + Name: name, + Type: keyType.String(), + Address: a.String(), + PubKey: string(bz), +}, nil +} + +/ MkConsKeyOutput create a KeyOutput in with "cons" Bech32 prefixes. +func MkConsKeyOutput(k *Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.ConsAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkValKeyOutput create a KeyOutput in with "val" Bech32 prefixes. +func MkValKeyOutput(k *Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.ValAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkAccKeyOutput create a KeyOutput in with "acc" Bech32 prefixes. If the +/ public key is a multisig public key, then the threshold and constituent +/ public keys will be added. +func MkAccKeyOutput(k *Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.AccAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkAccKeysOutput returns a slice of KeyOutput objects, each with the "acc" +/ Bech32 prefixes, given a slice of Record objects. It returns an error if any +/ call to MkKeyOutput fails. 
+func MkAccKeysOutput(records []*Record) ([]KeyOutput, error) { + kos := make([]KeyOutput, len(records)) + +var err error + for i, r := range records { + kos[i], err = MkAccKeyOutput(r) + if err != nil { + return nil, err +} + +} + +return kos, nil +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keyring/keyring.go#L54-L101) +## Keyring + +A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface: + +```go expandable +package keyring + +import ( + + "bufio" + "encoding/hex" + "fmt" + "io" + "os" + "path/filepath" + "sort" + "strings" + "github.com/99designs/keyring" + "github.com/pkg/errors" + "github.com/tendermint/crypto/bcrypt" + tmcrypto "github.com/tendermint/tendermint/crypto" + "github.com/cosmos/cosmos-sdk/client/input" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/crypto" + "github.com/cosmos/cosmos-sdk/crypto/hd" + "github.com/cosmos/cosmos-sdk/crypto/ledger" + "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/go-bip39" +) + +/ Backend options for Keyring +const ( + BackendFile = "file" + BackendOS = "os" + BackendKWallet = "kwallet" + BackendPass = "pass" + BackendTest = "test" + BackendMemory = "memory" +) + +const ( + keyringFileDirName = "keyring-file" + keyringTestDirName = "keyring-test" + passKeyringPrefix = "keyring-%s" + + / temporary pass phrase for exporting a key during a key rename + passPhrase = "temp" +) + +var ( + _ Keyring = &keystore{ +} + +maxPassphraseEntryAttempts = 3 +) + +/ Keyring exposes operations over a backend supported by github.com/99designs/keyring. +type Keyring interface { + / Get the backend type used in the keyring config: "file", "os", "kwallet", "pass", "test", "memory". + Backend() + +string + / List all keys. 
+ List() ([]*Record, error) + + / Supported signing algorithms for Keyring and Ledger respectively. + SupportedAlgorithms() (SigningAlgoList, SigningAlgoList) + + / Key and KeyByAddress return keys by uid and address respectively. + Key(uid string) (*Record, error) + +KeyByAddress(address sdk.Address) (*Record, error) + + / Delete and DeleteByAddress remove keys from the keyring. + Delete(uid string) + +error + DeleteByAddress(address sdk.Address) + +error + + / Rename an existing key from the Keyring + Rename(from string, to string) + +error + + / NewMnemonic generates a new mnemonic, derives a hierarchical deterministic key from it, and + / persists the key to storage. Returns the generated mnemonic and the key Info. + / It returns an error if it fails to generate a key for the given algo type, or if + / another key is already stored under the same name or address. + / + / A passphrase set to the empty string will set the passphrase to the DefaultBIP39Passphrase value. + NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (*Record, string, error) + + / NewAccount converts a mnemonic to a private key and BIP-39 HD Path and persists it. + / It fails if there is an existing key Info with the same address. + NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error) + + / SaveLedgerKey retrieves a public key reference from a Ledger device and persists it. + SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (*Record, error) + + / SaveOfflineKey stores a public key and returns the persisted Info structure. + SaveOfflineKey(uid string, pubkey types.PubKey) (*Record, error) + + / SaveMultisig stores and returns a new multsig (offline) + +key reference. + SaveMultisig(uid string, pubkey types.PubKey) (*Record, error) + +Signer + + Importer + Exporter + + Migrator +} + +/ Signer is implemented by key stores that want to provide signing capabilities. 
+type Signer interface { + / Sign sign byte messages with a user key. + Sign(uid string, msg []byte) ([]byte, types.PubKey, error) + + / SignByAddress sign byte messages with a user key providing the address. + SignByAddress(address sdk.Address, msg []byte) ([]byte, types.PubKey, error) +} + +/ Importer is implemented by key stores that support import of public and private keys. +type Importer interface { + / ImportPrivKey imports ASCII armored passphrase-encrypted private keys. + ImportPrivKey(uid, armor, passphrase string) + +error + + / ImportPubKey imports ASCII armored public keys. + ImportPubKey(uid string, armor string) + +error +} + +/ Migrator is implemented by key stores and enables migration of keys from amino to proto +type Migrator interface { + MigrateAll() ([]*Record, error) +} + +/ Exporter is implemented by key stores that support export of public and private keys. +type Exporter interface { + / Export public key + ExportPubKeyArmor(uid string) (string, error) + +ExportPubKeyArmorByAddress(address sdk.Address) (string, error) + + / ExportPrivKeyArmor returns a private key in ASCII armored format. + / It returns an error if the key does not exist or a wrong encryption passphrase is supplied. + ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error) + +ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error) +} + +/ Option overrides keyring configuration options. +type Option func(options *Options) + +/ Options define the options of the Keyring. 
+type Options struct { + / supported signing algorithms for keyring + SupportedAlgos SigningAlgoList + / supported signing algorithms for Ledger + SupportedAlgosLedger SigningAlgoList + / define Ledger Derivation function + LedgerDerivation func() (ledger.SECP256K1, error) + / define Ledger key generation function + LedgerCreateKey func([]byte) + +types.PubKey + / define Ledger app name + LedgerAppName string + / indicate whether Ledger should skip DER Conversion on signature, + / depending on which format (DER or BER) + +the Ledger app returns signatures + LedgerSigSkipDERConv bool +} + +/ NewInMemory creates a transient keyring useful for testing +/ purposes and on-the-fly key generation. +/ Keybase options can be applied when generating this new Keybase. +func NewInMemory(cdc codec.Codec, opts ...Option) + +Keyring { + return NewInMemoryWithKeyring(keyring.NewArrayKeyring(nil), cdc, opts...) +} + +/ NewInMemoryWithKeyring returns an in memory keyring using the specified keyring.Keyring +/ as the backing keyring. +func NewInMemoryWithKeyring(kr keyring.Keyring, cdc codec.Codec, opts ...Option) + +Keyring { + return newKeystore(kr, cdc, BackendMemory, opts...) +} + +/ New creates a new instance of a keyring. +/ Keyring options can be applied when generating the new instance. +/ Available backends are "os", "file", "kwallet", "memory", "pass", "test". 
+func New( + appName, backend, rootDir string, userInput io.Reader, cdc codec.Codec, opts ...Option, +) (Keyring, error) { + var ( + db keyring.Keyring + err error + ) + switch backend { + case BackendMemory: + return NewInMemory(cdc, opts...), err + case BackendTest: + db, err = keyring.Open(newTestBackendKeyringConfig(appName, rootDir)) + case BackendFile: + db, err = keyring.Open(newFileBackendKeyringConfig(appName, rootDir, userInput)) + case BackendOS: + db, err = keyring.Open(newOSBackendKeyringConfig(appName, rootDir, userInput)) + case BackendKWallet: + db, err = keyring.Open(newKWalletBackendKeyringConfig(appName, rootDir, userInput)) + case BackendPass: + db, err = keyring.Open(newPassBackendKeyringConfig(appName, rootDir, userInput)) + +default: + return nil, fmt.Errorf("unknown keyring backend %v", backend) +} + if err != nil { + return nil, err +} + +return newKeystore(db, cdc, backend, opts...), nil +} + +type keystore struct { + db keyring.Keyring + cdc codec.Codec + backend string + options Options +} + +func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) + +keystore { + / Default options for keybase, these can be overwritten using the + / Option function + options := Options{ + SupportedAlgos: SigningAlgoList{ + hd.Secp256k1 +}, + SupportedAlgosLedger: SigningAlgoList{ + hd.Secp256k1 +}, +} + for _, optionFn := range opts { + optionFn(&options) +} + if options.LedgerDerivation != nil { + ledger.SetDiscoverLedger(options.LedgerDerivation) +} + if options.LedgerCreateKey != nil { + ledger.SetCreatePubkey(options.LedgerCreateKey) +} + if options.LedgerAppName != "" { + ledger.SetAppName(options.LedgerAppName) +} + if options.LedgerSigSkipDERConv { + ledger.SetSkipDERConversion() +} + +return keystore{ + db: kr, + cdc: cdc, + backend: backend, + options: options, +} +} + +/ Backend returns the keyring backend option used in the config +func (ks keystore) + +Backend() + +string { + return ks.backend +} + +func (ks 
keystore) + +ExportPubKeyArmor(uid string) (string, error) { + k, err := ks.Key(uid) + if err != nil { + return "", err +} + +key, err := k.GetPubKey() + if err != nil { + return "", err +} + +bz, err := ks.cdc.MarshalInterface(key) + if err != nil { + return "", err +} + +return crypto.ArmorPubKeyBytes(bz, key.Type()), nil +} + +func (ks keystore) + +ExportPubKeyArmorByAddress(address sdk.Address) (string, error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return "", err +} + +return ks.ExportPubKeyArmor(k.Name) +} + +/ ExportPrivKeyArmor exports encrypted privKey +func (ks keystore) + +ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error) { + priv, err := ks.ExportPrivateKeyObject(uid) + if err != nil { + return "", err +} + +return crypto.EncryptArmorPrivKey(priv, encryptPassphrase, priv.Type()), nil +} + +/ ExportPrivateKeyObject exports an armored private key object. +func (ks keystore) + +ExportPrivateKeyObject(uid string) (types.PrivKey, error) { + k, err := ks.Key(uid) + if err != nil { + return nil, err +} + +priv, err := extractPrivKeyFromRecord(k) + if err != nil { + return nil, err +} + +return priv, err +} + +func (ks keystore) + +ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return "", err +} + +return ks.ExportPrivKeyArmor(k.Name, encryptPassphrase) +} + +func (ks keystore) + +ImportPrivKey(uid, armor, passphrase string) + +error { + if k, err := ks.Key(uid); err == nil { + if uid == k.Name { + return fmt.Errorf("cannot overwrite key: %s", uid) +} + +} + +privKey, _, err := crypto.UnarmorDecryptPrivKey(armor, passphrase) + if err != nil { + return errors.Wrap(err, "failed to decrypt private key") +} + + _, err = ks.writeLocalKey(uid, privKey) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +ImportPubKey(uid string, armor string) + +error { + if _, err := ks.Key(uid); err == 
nil { + return fmt.Errorf("cannot overwrite key: %s", uid) +} + +pubBytes, _, err := crypto.UnarmorPubKeyBytes(armor) + if err != nil { + return err +} + +var pubKey types.PubKey + if err := ks.cdc.UnmarshalInterface(pubBytes, &pubKey); err != nil { + return err +} + + _, err = ks.writeOfflineKey(uid, pubKey) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +Sign(uid string, msg []byte) ([]byte, types.PubKey, error) { + k, err := ks.Key(uid) + if err != nil { + return nil, nil, err +} + switch { + case k.GetLocal() != nil: + priv, err := extractPrivKeyFromLocal(k.GetLocal()) + if err != nil { + return nil, nil, err +} + +sig, err := priv.Sign(msg) + if err != nil { + return nil, nil, err +} + +return sig, priv.PubKey(), nil + case k.GetLedger() != nil: + return SignWithLedger(k, msg) + + / multi or offline record + default: + pub, err := k.GetPubKey() + if err != nil { + return nil, nil, err +} + +return nil, pub, errors.New("cannot sign with offline keys") +} +} + +func (ks keystore) + +SignByAddress(address sdk.Address, msg []byte) ([]byte, types.PubKey, error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return nil, nil, err +} + +return ks.Sign(k.Name, msg) +} + +func (ks keystore) + +SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (*Record, error) { + if !ks.options.SupportedAlgosLedger.Contains(algo) { + return nil, fmt.Errorf( + "%w: signature algo %s is not defined in the keyring options", + ErrUnsupportedSigningAlgo, algo.Name(), + ) +} + hdPath := hd.NewFundraiserParams(account, coinType, index) + +priv, _, err := ledger.NewPrivKeySecp256k1(*hdPath, hrp) + if err != nil { + return nil, fmt.Errorf("failed to generate ledger key: %w", err) +} + +return ks.writeLedgerKey(uid, priv.PubKey(), hdPath) +} + +func (ks keystore) + +writeLedgerKey(name string, pk types.PubKey, path *hd.BIP44Params) (*Record, error) { + k, err := NewLedgerRecord(name, pk, path) + if err != nil { + 
return nil, err +} + +return k, ks.writeRecord(k) +} + +func (ks keystore) + +SaveMultisig(uid string, pubkey types.PubKey) (*Record, error) { + return ks.writeMultisigKey(uid, pubkey) +} + +func (ks keystore) + +SaveOfflineKey(uid string, pubkey types.PubKey) (*Record, error) { + return ks.writeOfflineKey(uid, pubkey) +} + +func (ks keystore) + +DeleteByAddress(address sdk.Address) + +error { + k, err := ks.KeyByAddress(address) + if err != nil { + return err +} + +err = ks.Delete(k.Name) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +Rename(oldName, newName string) + +error { + _, err := ks.Key(newName) + if err == nil { + return fmt.Errorf("rename failed: %s already exists in the keyring", newName) +} + +armor, err := ks.ExportPrivKeyArmor(oldName, passPhrase) + if err != nil { + return err +} + if err := ks.Delete(oldName); err != nil { + return err +} + if err := ks.ImportPrivKey(newName, armor, passPhrase); err != nil { + return err +} + +return nil +} + +/ Delete deletes a key in the keyring. `uid` represents the key name, without +/ the `.info` suffix. 
+func (ks keystore) + +Delete(uid string) + +error { + k, err := ks.Key(uid) + if err != nil { + return err +} + +addr, err := k.GetAddress() + if err != nil { + return err +} + +err = ks.db.Remove(addrHexKeyAsString(addr)) + if err != nil { + return err +} + +err = ks.db.Remove(infoKey(uid)) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +KeyByAddress(address sdk.Address) (*Record, error) { + ik, err := ks.db.Get(addrHexKeyAsString(address)) + if err != nil { + return nil, wrapKeyNotFound(err, fmt.Sprintf("key with address %s not found", address.String())) +} + if len(ik.Data) == 0 { + return nil, wrapKeyNotFound(err, fmt.Sprintf("key with address %s not found", address.String())) +} + +return ks.Key(string(ik.Data)) +} + +func wrapKeyNotFound(err error, msg string) + +error { + if err == keyring.ErrKeyNotFound { + return sdkerrors.Wrap(sdkerrors.ErrKeyNotFound, msg) +} + +return err +} + +func (ks keystore) + +List() ([]*Record, error) { + return ks.MigrateAll() +} + +func (ks keystore) + +NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (*Record, string, error) { + if language != English { + return nil, "", ErrUnsupportedLanguage +} + if !ks.isSupportedSigningAlgo(algo) { + return nil, "", ErrUnsupportedSigningAlgo +} + + / Default number of words (24): This generates a mnemonic directly from the + / number of words by reading system entropy. 
+ entropy, err := bip39.NewEntropy(defaultEntropySize) + if err != nil { + return nil, "", err +} + +mnemonic, err := bip39.NewMnemonic(entropy) + if err != nil { + return nil, "", err +} + if bip39Passphrase == "" { + bip39Passphrase = DefaultBIP39Passphrase +} + +k, err := ks.NewAccount(uid, mnemonic, bip39Passphrase, hdPath, algo) + if err != nil { + return nil, "", err +} + +return k, mnemonic, nil +} + +func (ks keystore) + +NewAccount(name string, mnemonic string, bip39Passphrase string, hdPath string, algo SignatureAlgo) (*Record, error) { + if !ks.isSupportedSigningAlgo(algo) { + return nil, ErrUnsupportedSigningAlgo +} + + / create master key and derive first key for keyring + derivedPriv, err := algo.Derive()(mnemonic, bip39Passphrase, hdPath) + if err != nil { + return nil, err +} + privKey := algo.Generate()(derivedPriv) + + / check if the a key already exists with the same address and return an error + / if found + address := sdk.AccAddress(privKey.PubKey().Address()) + if _, err := ks.KeyByAddress(address); err == nil { + return nil, errors.New("duplicated address created") +} + +return ks.writeLocalKey(name, privKey) +} + +func (ks keystore) + +isSupportedSigningAlgo(algo SignatureAlgo) + +bool { + return ks.options.SupportedAlgos.Contains(algo) +} + +func (ks keystore) + +Key(uid string) (*Record, error) { + k, err := ks.migrate(uid) + if err != nil { + return nil, err +} + +return k, nil +} + +/ SupportedAlgorithms returns the keystore Options' supported signing algorithm. +/ for the keyring and Ledger. +func (ks keystore) + +SupportedAlgorithms() (SigningAlgoList, SigningAlgoList) { + return ks.options.SupportedAlgos, ks.options.SupportedAlgosLedger +} + +/ SignWithLedger signs a binary message with the ledger device referenced by an Info object +/ and returns the signed bytes and the public key. It returns an error if the device could +/ not be queried or it returned an error. 
+func SignWithLedger(k *Record, msg []byte) (sig []byte, pub types.PubKey, err error) { + ledgerInfo := k.GetLedger() + if ledgerInfo == nil { + return nil, nil, errors.New("not a ledger object") +} + path := ledgerInfo.GetPath() + +priv, err := ledger.NewPrivKeySecp256k1Unsafe(*path) + if err != nil { + return +} + +sig, err = priv.Sign(msg) + if err != nil { + return nil, nil, err +} + if !priv.PubKey().VerifySignature(msg, sig) { + return nil, nil, errors.New("Ledger generated an invalid signature. Perhaps you have multiple ledgers and need to try another one") +} + +return sig, priv.PubKey(), nil +} + +func newOSBackendKeyringConfig(appName, dir string, buf io.Reader) + +keyring.Config { + return keyring.Config{ + ServiceName: appName, + FileDir: dir, + KeychainTrustApplication: true, + FilePasswordFunc: newRealPrompt(dir, buf), +} +} + +func newTestBackendKeyringConfig(appName, dir string) + +keyring.Config { + return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.FileBackend +}, + ServiceName: appName, + FileDir: filepath.Join(dir, keyringTestDirName), + FilePasswordFunc: func(_ string) (string, error) { + return "test", nil +}, +} +} + +func newKWalletBackendKeyringConfig(appName, _ string, _ io.Reader) + +keyring.Config { + return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.KWalletBackend +}, + ServiceName: "kdewallet", + KWalletAppID: appName, + KWalletFolder: "", +} +} + +func newPassBackendKeyringConfig(appName, _ string, _ io.Reader) + +keyring.Config { + prefix := fmt.Sprintf(passKeyringPrefix, appName) + +return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.PassBackend +}, + ServiceName: appName, + PassPrefix: prefix, +} +} + +func newFileBackendKeyringConfig(name, dir string, buf io.Reader) + +keyring.Config { + fileDir := filepath.Join(dir, keyringFileDirName) + +return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.FileBackend +}, + ServiceName: name, + FileDir: 
fileDir, + FilePasswordFunc: newRealPrompt(fileDir, buf), +} +} + +func newRealPrompt(dir string, buf io.Reader) + +func(string) (string, error) { + return func(prompt string) (string, error) { + keyhashStored := false + keyhashFilePath := filepath.Join(dir, "keyhash") + +var keyhash []byte + + _, err := os.Stat(keyhashFilePath) + switch { + case err == nil: + keyhash, err = os.ReadFile(keyhashFilePath) + if err != nil { + return "", fmt.Errorf("failed to read %s: %v", keyhashFilePath, err) +} + +keyhashStored = true + case os.IsNotExist(err): + keyhashStored = false + + default: + return "", fmt.Errorf("failed to open %s: %v", keyhashFilePath, err) +} + failureCounter := 0 + for { + failureCounter++ + if failureCounter > maxPassphraseEntryAttempts { + return "", fmt.Errorf("too many failed passphrase attempts") +} + buf := bufio.NewReader(buf) + +pass, err := input.GetPassword(fmt.Sprintf("Enter keyring passphrase (attempt %d/%d):", failureCounter, maxPassphraseEntryAttempts), buf) + if err != nil { + / NOTE: LGTM.io reports a false positive alert that states we are printing the password, + / but we only log the error. + / + / lgtm [go/clear-text-logging] + fmt.Fprintln(os.Stderr, err) + +continue +} + if keyhashStored { + if err := bcrypt.CompareHashAndPassword(keyhash, []byte(pass)); err != nil { + fmt.Fprintln(os.Stderr, "incorrect passphrase") + +continue +} + +return pass, nil +} + +reEnteredPass, err := input.GetPassword("Re-enter keyring passphrase:", buf) + if err != nil { + / NOTE: LGTM.io reports a false positive alert that states we are printing the password, + / but we only log the error. 
+ / + / lgtm [go/clear-text-logging] + fmt.Fprintln(os.Stderr, err) + +continue +} + if pass != reEnteredPass { + fmt.Fprintln(os.Stderr, "passphrase do not match") + +continue +} + saltBytes := tmcrypto.CRandBytes(16) + +passwordHash, err := bcrypt.GenerateFromPassword(saltBytes, []byte(pass), 2) + if err != nil { + fmt.Fprintln(os.Stderr, err) + +continue +} + if err := os.WriteFile(dir+"/keyhash", passwordHash, 0o555); err != nil { + return "", err +} + +return pass, nil +} + +} +} + +func (ks keystore) + +writeLocalKey(name string, privKey types.PrivKey) (*Record, error) { + k, err := NewLocalRecord(name, privKey, privKey.PubKey()) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +/ writeRecord persists a keyring item in keystore if it does not exist there. +/ For each key record, we actually write 2 items: +/ - one with key `.info`, with Data = the serialized protobuf key +/ - another with key `.address`, with Data = the uid (i.e. the key name) +/ This is to be able to query keys both by name and by address. +func (ks keystore) + +writeRecord(k *Record) + +error { + addr, err := k.GetAddress() + if err != nil { + return err +} + key := infoKey(k.Name) + +exists, err := ks.existsInDb(addr, key) + if err != nil { + return err +} + if exists { + return fmt.Errorf("public key %s already exists in keybase", key) +} + +serializedRecord, err := ks.cdc.Marshal(k) + if err != nil { + return fmt.Errorf("unable to serialize record; %+w", err) +} + item := keyring.Item{ + Key: key, + Data: serializedRecord, +} + if err := ks.SetItem(item); err != nil { + return err +} + +item = keyring.Item{ + Key: addrHexKeyAsString(addr), + Data: []byte(key), +} + if err := ks.SetItem(item); err != nil { + return err +} + +return nil +} + +/ existsInDb returns (true, nil) + if either addr or name exist is in keystore DB. 
+/ On the other hand, it returns (false, error) + if Get method returns error different from keyring.ErrKeyNotFound +/ In case of inconsistent keyring, it recovers it automatically. +func (ks keystore) + +existsInDb(addr sdk.Address, name string) (bool, error) { + _, errAddr := ks.db.Get(addrHexKeyAsString(addr)) + if errAddr != nil && !errors.Is(errAddr, keyring.ErrKeyNotFound) { + return false, errAddr +} + + _, errInfo := ks.db.Get(infoKey(name)) + if errInfo == nil { + return true, nil / uid lookup succeeds - info exists +} + +else if !errors.Is(errInfo, keyring.ErrKeyNotFound) { + return false, errInfo / received unexpected error - returns +} + + / looking for an issue, record with meta (getByAddress) + +exists, but record with public key itself does not + if errAddr == nil && errors.Is(errInfo, keyring.ErrKeyNotFound) { + fmt.Fprintf(os.Stderr, "address \"%s\" exists but pubkey itself does not\n", hex.EncodeToString(addr.Bytes())) + +fmt.Fprintln(os.Stderr, "recreating pubkey record") + err := ks.db.Remove(addrHexKeyAsString(addr)) + if err != nil { + return true, err +} + +return false, nil +} + + / both lookups failed, info does not exist + return false, nil +} + +func (ks keystore) + +writeOfflineKey(name string, pk types.PubKey) (*Record, error) { + k, err := NewOfflineRecord(name, pk) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +/ writeMultisigKey investigate where thisf function is called maybe remove it +func (ks keystore) + +writeMultisigKey(name string, pk types.PubKey) (*Record, error) { + k, err := NewMultiRecord(name, pk) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +func (ks keystore) + +MigrateAll() ([]*Record, error) { + keys, err := ks.db.Keys() + if err != nil { + return nil, err +} + if len(keys) == 0 { + return nil, nil +} + +sort.Strings(keys) + +var recs []*Record + for _, key := range keys { + / The keyring items only with `.info` consists the key info. 
+ if !strings.HasSuffix(key, infoSuffix) { + continue +} + +rec, err := ks.migrate(key) + if err != nil { + fmt.Printf("migrate err for key %s: %q\n", key, err) + +continue +} + +recs = append(recs, rec) +} + +return recs, nil +} + +/ migrate converts keyring.Item from amino to proto serialization format. +/ the `key` argument can be a key uid (e.g. "alice") + +or with the '.info' +/ suffix (e.g. "alice.info"). +/ +/ It operates as follows: +/ 1. retrieve any key +/ 2. try to decode it using protobuf +/ 3. if ok, then return the key, do nothing else +/ 4. if it fails, then try to decode it using amino +/ 5. convert from the amino struct to the protobuf struct +/ 6. write the proto-encoded key back to the keyring +func (ks keystore) + +migrate(key string) (*Record, error) { + if !strings.HasSuffix(key, infoSuffix) { + key = infoKey(key) +} + + / 1. get the key. + item, err := ks.db.Get(key) + if err != nil { + return nil, wrapKeyNotFound(err, key) +} + if len(item.Data) == 0 { + return nil, sdkerrors.Wrap(sdkerrors.ErrKeyNotFound, key) +} + + / 2. Try to deserialize using proto + k, err := ks.protoUnmarshalRecord(item.Data) + / 3. If ok then return the key + if err == nil { + return k, nil +} + + / 4. Try to decode with amino + legacyInfo, err := unMarshalLegacyInfo(item.Data) + if err != nil { + return nil, fmt.Errorf("unable to unmarshal item.Data, err: %w", err) +} + + / 5. Convert and serialize info using proto + k, err = ks.convertFromLegacyInfo(legacyInfo) + if err != nil { + return nil, fmt.Errorf("convertFromLegacyInfo, err: %w", err) +} + +serializedRecord, err := ks.cdc.Marshal(k) + if err != nil { + return nil, fmt.Errorf("unable to serialize record, err: %w", err) +} + +item = keyring.Item{ + Key: key, + Data: serializedRecord, +} + + / 6. Overwrite the keyring entry with the new proto-encoded key. 
+ if err := ks.SetItem(item); err != nil { + return nil, fmt.Errorf("unable to set keyring.Item, err: %w", err) +} + +fmt.Printf("Successfully migrated key %s.\n", key) + +return k, nil +} + +func (ks keystore) + +protoUnmarshalRecord(bz []byte) (*Record, error) { + k := new(Record) + if err := ks.cdc.Unmarshal(bz, k); err != nil { + return nil, err +} + +return k, nil +} + +func (ks keystore) + +SetItem(item keyring.Item) + +error { + return ks.db.Set(item) +} + +func (ks keystore) + +convertFromLegacyInfo(info LegacyInfo) (*Record, error) { + if info == nil { + return nil, errors.New("unable to convert LegacyInfo to Record cause info is nil") +} + name := info.GetName() + pk := info.GetPubKey() + switch info.GetType() { + case TypeLocal: + priv, err := privKeyFromLegacyInfo(info) + if err != nil { + return nil, err +} + +return NewLocalRecord(name, priv, pk) + case TypeOffline: + return NewOfflineRecord(name, pk) + case TypeMulti: + return NewMultiRecord(name, pk) + case TypeLedger: + path, err := info.GetPath() + if err != nil { + return nil, err +} + +return NewLedgerRecord(name, pk, path) + +default: + return nil, errors.New("unknown LegacyInfo type") +} +} + +func addrHexKeyAsString(address sdk.Address) + +string { + return fmt.Sprintf("%s.%s", hex.EncodeToString(address.Bytes()), addressSuffix) +} +``` The default implementation of `Keyring` comes from the third-party [`99designs/keyring`](https://github.com/99designs/keyring) library. A few notes on the `Keyring` methods: -* `Sign(uid string, msg []byte) ([]byte, types.PubKey, error)` strictly deals with the signature of the `msg` bytes. You must prepare and encode the transaction into a canonical `[]byte` form. 
Because protobuf is not deterministic, it has been decided in [ADR-020](/v0.47/build/architecture/adr-020-protobuf-transaction-encoding) that the canonical `payload` to sign is the `SignDoc` struct, deterministically encoded using [ADR-027](/v0.47/build/architecture/adr-027-deterministic-protobuf-serialization). Note that signature verification is not implemented in the Cosmos SDK by default, it is deferred to the [`anteHandler`](/v0.47/learn/advanced/baseapp#antehandler). +- `Sign(uid string, msg []byte) ([]byte, types.PubKey, error)` strictly deals with the signature of the `msg` bytes. You must prepare and encode the transaction into a canonical `[]byte` form. Because protobuf is not deterministic, it has been decided in [ADR-020](/docs/common/pages/adr-comprehensive#adr-020-protocol-buffer-transaction-encoding) that the canonical `payload` to sign is the `SignDoc` struct, deterministically encoded using [ADR-027](/docs/common/pages/adr-comprehensive#adr-027-deterministic-protobuf-serialization). Note that signature verification is not implemented in the Cosmos SDK by default, it is deferred to the [`anteHandler`](/docs/sdk/v0.47/learn/advanced/baseapp#antehandler). -proto/cosmos/tx/v1beta1/tx.proto +```protobuf +// SignDoc is the type used for generating sign bytes for SIGN_MODE_DIRECT. +message SignDoc { + // body_bytes is protobuf serialization of a TxBody that matches the + // representation in TxRaw. + bytes body_bytes = 1; -``` -loading... -``` + // auth_info_bytes is a protobuf serialization of an AuthInfo that matches the + // representation in TxRaw. + bytes auth_info_bytes = 2; + + // chain_id is the unique identifier of the chain this transaction targets. 
+ // It prevents signed transactions from being used on another chain by an + // attacker + string chain_id = 3; -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/tx/v1beta1/tx.proto#L48-L65) + // account_number is the account number of the account in state + uint64 account_number = 4; +} +``` -* `NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error)` creates a new account based on the [`bip44 path`](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) and persists it on disk. The `PrivKey` is **never stored unencrypted**, instead it is [encrypted with a passphrase](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/armor.go) before being persisted. In the context of this method, the key type and sequence number refer to the segment of the BIP44 derivation path (for example, `0`, `1`, `2`, ...) that is used to derive a private and a public key from the mnemonic. Using the same mnemonic and derivation path, the same `PrivKey`, `PubKey` and `Address` is generated. The following keys are supported by the keyring: +- `NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error)` creates a new account based on the [`bip44 path`](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) and persists it on disk. The `PrivKey` is **never stored unencrypted**, instead it is [encrypted with a passphrase](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/armor.go) before being persisted. In the context of this method, the key type and sequence number refer to the segment of the BIP44 derivation path (for example, `0`, `1`, `2`, ...) that is used to derive a private and a public key from the mnemonic. Using the same mnemonic and derivation path, the same `PrivKey`, `PubKey` and `Address` is generated. 
The following keys are supported by the keyring:

-* `secp256k1`
+- `secp256k1`

-* `ed25519`
+- `ed25519`

-* `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function.
+- `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function.

-### Create New Key Type[​](#create-new-key-type "Direct link to Create New Key Type")
+### Create New Key Type

To create a new key type for use in the keyring, the `keyring.SignatureAlgo` interface must be implemented.

-crypto/keyring/signing\_algorithms.go
+```go expandable
+package keyring

-```
-loading...
-```
+import (

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keyring/signing_algorithms.go#L10-L15)
+ "fmt"
+ "strings"
+ "github.com/cosmos/cosmos-sdk/crypto/hd"
+)

-The interface consists in three methods where `Name()` returns the name of the algorithm as a `hd.PubKeyType` and `Derive()` and `Generate()` must return the following functions respectively:
+/ SignatureAlgo defines the interface for a keyring supported algorithm.
+type SignatureAlgo interface {
+ Name()

-crypto/hd/algo.go
+hd.PubKeyType
+ Derive()
+hd.DeriveFn
+ Generate()
+
+hd.GenerateFn
+}
+
+/ NewSigningAlgoFromString creates a supported SignatureAlgo.
+func NewSigningAlgoFromString(str string, algoList SigningAlgoList) (SignatureAlgo, error) { + for _, algo := range algoList { + if str == string(algo.Name()) { + return algo, nil +} + +} + +return nil, fmt.Errorf("provided algorithm %q is not supported", str) +} + +/ SigningAlgoList is a slice of signature algorithms +type SigningAlgoList []SignatureAlgo + +/ Contains returns true if the SigningAlgoList the given SignatureAlgo. +func (sal SigningAlgoList) + +Contains(algo SignatureAlgo) + +bool { + for _, cAlgo := range sal { + if cAlgo.Name() == algo.Name() { + return true +} + +} + +return false +} + +/ String returns a comma separated string of the signature algorithm names in the list. +func (sal SigningAlgoList) + +String() + +string { + names := make([]string, len(sal)) + for i := range sal { + names[i] = string(sal[i].Name()) +} + +return strings.Join(names, ",") +} ``` -loading... -``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/hd/algo.go#L28-L31) +The interface consists in three methods where `Name()` returns the name of the algorithm as a `hd.PubKeyType` and `Derive()` and `Generate()` must return the following functions respectively: + +```go expandable +package hd + +import ( + + "github.com/cosmos/go-bip39" + "github.com/cosmos/cosmos-sdk/crypto/keys/secp256k1" + "github.com/cosmos/cosmos-sdk/crypto/types" +) + +/ PubKeyType defines an algorithm to derive key-pairs which can be used for cryptographic signing. +type PubKeyType string + +const ( + / MultiType implies that a pubkey is a multisignature + MultiType = PubKeyType("multi") + / Secp256k1Type uses the Bitcoin secp256k1 ECDSA parameters. + Secp256k1Type = PubKeyType("secp256k1") + / Ed25519Type represents the Ed25519Type signature system. + / It is currently not supported for end-user keys (wallets/ledgers). + Ed25519Type = PubKeyType("ed25519") + / Sr25519Type represents the Sr25519Type signature system. 
+ Sr25519Type = PubKeyType("sr25519") +) + +/ Secp256k1 uses the Bitcoin secp256k1 ECDSA parameters. +var Secp256k1 = secp256k1Algo{ +} + +type ( + DeriveFn func(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) + +GenerateFn func(bz []byte) + +types.PrivKey +) + +type WalletGenerator interface { + Derive(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) + +Generate(bz []byte) + +types.PrivKey +} + +type secp256k1Algo struct{ +} + +func (s secp256k1Algo) + +Name() + +PubKeyType { + return Secp256k1Type +} + +/ Derive derives and returns the secp256k1 private key for the given seed and HD path. +func (s secp256k1Algo) + +Derive() + +DeriveFn { + return func(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) { + seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase) + if err != nil { + return nil, err +} + +masterPriv, ch := ComputeMastersFromSeed(seed) + if len(hdPath) == 0 { + return masterPriv[:], nil +} + +derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath) + +return derivedKey, err +} +} + +/ Generate generates a secp256k1 private key from the given bytes. +func (s secp256k1Algo) + +Generate() + +GenerateFn { + return func(bz []byte) + +types.PrivKey { + bzArr := make([]byte, secp256k1.PrivKeySize) + +copy(bzArr, bz) + +return &secp256k1.PrivKey{ + Key: bzArr +} + +} +} +``` Once the `keyring.SignatureAlgo` has been implemented it must be added to the [list of supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keyring/keyring.go#L217) of the keyring. -For simplicity the implementation of a new key type should be done inside the `crypto/hd` package. There is an example of a working `secp256k1` implementation in [algo.go](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/hd/algo.go#L38). +For simplicity the implementation of a new key type should be done inside the `crypto/hd` package. 
+There is an example of a working `secp256k1` implementation in [algo.go](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/hd/algo.go#L38).
-#### Implementing secp256r1 algo[​](#implementing-secp256r1-algo "Direct link to Implementing secp256r1 algo")
+#### Implementing secp256r1 algo
Here is an example of how secp256r1 could be implemented. First a new function to create a private key from a secret number is needed in the secp256r1 package. This function could look like this:
-```
-// cosmos-sdk/crypto/keys/secp256r1/privkey.go// NewPrivKeyFromSecret creates a private key derived for the secret number// represented in big-endian. The `secret` must be a valid ECDSA field element.func NewPrivKeyFromSecret(secret []byte) (*PrivKey, error) { var d = new(big.Int).SetBytes(secret) if d.Cmp(secp256r1.Params().N) >= 1 { return nil, errorsmod.Wrap(errors.ErrInvalidRequest, "secret not in the curve base field") } sk := new(ecdsa.PrivKey) return &PrivKey{&ecdsaSK{*sk}}, nil}
+```go expandable
+/ cosmos-sdk/crypto/keys/secp256r1/privkey.go
+
+/ NewPrivKeyFromSecret creates a private key derived from the secret number
+/ represented in big-endian. The `secret` must be a valid, non-zero ECDSA field element.
+func NewPrivKeyFromSecret(secret []byte) (*PrivKey, error) {
+ var d = new(big.Int).SetBytes(secret)
+ if d.Sign() <= 0 || d.Cmp(secp256r1.Params().N) >= 0 {
+ return nil, errorsmod.Wrap(errors.ErrInvalidRequest, "secret not in the curve base field")
+ }
+ sk := new(ecdsa.PrivKey)
+ return &PrivKey{&ecdsaSK{*sk}}, nil
+}
```
After that `secp256r1Algo` can be implemented.
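As an aside, the scalar-range rule used by the function above can be exercised in isolation. The sketch below is standalone and hypothetical — `validSecret` is not an SDK function — and validates a big-endian secret against the P-256 (secp256r1) curve order using only the Go standard library:

```go
package main

import (
	"crypto/elliptic"
	"fmt"
	"math/big"
)

// validSecret reports whether a big-endian secret is a usable
// P-256 scalar, i.e. 0 < d < N where N is the curve order.
func validSecret(secret []byte) bool {
	d := new(big.Int).SetBytes(secret)
	n := elliptic.P256().Params().N
	return d.Sign() > 0 && d.Cmp(n) < 0
}

func main() {
	fmt.Println(validSecret([]byte{0x01})) // prints true: 1 is in range

	tooBig := make([]byte, 33)
	tooBig[0] = 0x01 // 2^256, larger than the curve order
	fmt.Println(validSecret(tooBig)) // prints false
}
```

Note the strict bounds `0 < d < N`: both zero and anything at or above the curve order must be rejected, which is why the check uses `Sign()` together with `Cmp`.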
-``` -// cosmos-sdk/crypto/hd/secp256r1Algo.gopackage hdimport ( "github.com/cosmos/go-bip39" "github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1" "github.com/cosmos/cosmos-sdk/crypto/types")// Secp256r1Type uses the secp256r1 ECDSA parameters.const Secp256r1Type = PubKeyType("secp256r1")var Secp256r1 = secp256r1Algo{}type secp256r1Algo struct{}func (s secp256r1Algo) Name() PubKeyType { return Secp256r1Type}// Derive derives and returns the secp256r1 private key for the given seed and HD path.func (s secp256r1Algo) Derive() DeriveFn { return func(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) { seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase) if err != nil { return nil, err } masterPriv, ch := ComputeMastersFromSeed(seed) if len(hdPath) == 0 { return masterPriv[:], nil } derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath) return derivedKey, err }}// Generate generates a secp256r1 private key from the given bytes.func (s secp256r1Algo) Generate() GenerateFn { return func(bz []byte) types.PrivKey { key, err := secp256r1.NewPrivKeyFromSecret(bz) if err != nil { panic(err) } return key }} +```go expandable +/ cosmos-sdk/crypto/hd/secp256r1Algo.go + +package hd + +import ( + + "github.com/cosmos/go-bip39" + "github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1" + "github.com/cosmos/cosmos-sdk/crypto/types" +) + +/ Secp256r1Type uses the secp256r1 ECDSA parameters. +const Secp256r1Type = PubKeyType("secp256r1") + +var Secp256r1 = secp256r1Algo{ +} + +type secp256r1Algo struct{ +} + +func (s secp256r1Algo) + +Name() + +PubKeyType { + return Secp256r1Type +} + +/ Derive derives and returns the secp256r1 private key for the given seed and HD path. 
+func (s secp256r1Algo) + +Derive() + +DeriveFn { + return func(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) { + seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase) + if err != nil { + return nil, err +} + +masterPriv, ch := ComputeMastersFromSeed(seed) + if len(hdPath) == 0 { + return masterPriv[:], nil +} + +derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath) + +return derivedKey, err +} +} + +/ Generate generates a secp256r1 private key from the given bytes. +func (s secp256r1Algo) + +Generate() + +GenerateFn { + return func(bz []byte) + +types.PrivKey { + key, err := secp256r1.NewPrivKeyFromSecret(bz) + if err != nil { + panic(err) +} + +return key +} +} ``` Finally, the algo must be added to the list of [supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/crypto/keyring/keyring.go#L217) by the keyring. -``` -// cosmos-sdk/crypto/keyring/keyring.gofunc newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) keystore { // Default options for keybase, these can be overwritten using the // Option function options := Options{ SupportedAlgos: SigningAlgoList{hd.Secp256k1, hd.Secp256r1}, // added here SupportedAlgosLedger: SigningAlgoList{hd.Secp256k1}, }... +```go +/ cosmos-sdk/crypto/keyring/keyring.go + +func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) + +keystore { + / Default options for keybase, these can be overwritten using the + / Option function + options := Options{ + SupportedAlgos: SigningAlgoList{ + hd.Secp256k1, hd.Secp256r1 +}, / added here + SupportedAlgosLedger: SigningAlgoList{ + hd.Secp256k1 +}, +} +... 
``` Hereafter to create new keys using your algo, you must specify it with the flag `--algo` : diff --git a/docs/sdk/v0.47/learn/beginner/gas-fees.mdx b/docs/sdk/v0.47/learn/beginner/gas-fees.mdx index 2794fea8..5e3d26e3 100644 --- a/docs/sdk/v0.47/learn/beginner/gas-fees.mdx +++ b/docs/sdk/v0.47/learn/beginner/gas-fees.mdx @@ -1,109 +1,878 @@ --- -title: "Gas and Fees" -description: "Version: v0.47" +title: Gas and Fees --- - - This document describes the default strategies to handle gas and fees within a Cosmos SDK application. - +## Synopsis + +This document describes the default strategies to handle gas and fees within a Cosmos SDK application. - ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of a Cosmos SDK Application](/v0.47/learn/beginner/overview-app) +### Pre-requisite Readings + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.47/learn/beginner/overview-app) + -## Introduction to `Gas` and `Fees`[​](#introduction-to-gas-and-fees "Direct link to introduction-to-gas-and-fees") +## Introduction to `Gas` and `Fees` In the Cosmos SDK, `gas` is a special unit that is used to track the consumption of resources during execution. `gas` is typically consumed whenever read and writes are made to the store, but it can also be consumed if expensive computation needs to be done. It serves two main purposes: -* Make sure blocks are not consuming too many resources and are finalized. This is implemented by default in the Cosmos SDK via the [block gas meter](#block-gas-meter). -* Prevent spam and abuse from end-user. To this end, `gas` consumed during [`message`](/v0.47/build/building-modules/messages-and-queries#messages) execution is typically priced, resulting in a `fee` (`fees = gas * gas-prices`). `fees` generally have to be paid by the sender of the `message`. Note that the Cosmos SDK does not enforce `gas` pricing by default, as there may be other ways to prevent spam (e.g. bandwidth schemes). 
Still, most applications implement `fee` mechanisms to prevent spam by using the [`AnteHandler`](#antehandler).
+- Make sure blocks are not consuming too many resources and are finalized. This is implemented by default in the Cosmos SDK via the [block gas meter](#block-gas-meter).
+- Prevent spam and abuse from end-users. To this end, `gas` consumed during [`message`](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages) execution is typically priced, resulting in a `fee` (`fees = gas * gas-prices`). `fees` generally have to be paid by the sender of the `message`. Note that the Cosmos SDK does not enforce `gas` pricing by default, as there may be other ways to prevent spam (e.g. bandwidth schemes). Still, most applications implement `fee` mechanisms to prevent spam by using the [`AnteHandler`](#antehandler).
-## Gas Meter[​](#gas-meter "Direct link to Gas Meter")
+## Gas Meter
-In the Cosmos SDK, `gas` is a simple alias for `uint64`, and is managed by an object called a *gas meter*. Gas meters implement the `GasMeter` interface
+In the Cosmos SDK, `gas` is a simple alias for `uint64`, and is managed by an object called a _gas meter_. Gas meters implement the `GasMeter` interface:
-store/types/gas.go
+```go expandable
+package types
-```
-loading...
-```
+import (
-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/store/types/gas.go#L40-L51)
+ "fmt"
+ "math"
+)
-where:
+/ Gas consumption descriptors.
+const (
+ GasIterNextCostFlatDesc = "IterNextFlat"
+ GasValuePerByteDesc = "ValuePerByte"
+ GasWritePerByteDesc = "WritePerByte"
+ GasReadPerByteDesc = "ReadPerByte"
+ GasWriteCostFlatDesc = "WriteFlat"
+ GasReadCostFlatDesc = "ReadFlat"
+ GasHasDesc = "Has"
+ GasDeleteDesc = "Delete"
+)
-* `GasConsumed()` returns the amount of gas that was consumed by the gas meter instance.
-* `GasConsumedToLimit()` returns the amount of gas that was consumed by gas meter instance, or the limit if it is reached.
-* `GasRemaining()` returns the gas left in the GasMeter. -* `Limit()` returns the limit of the gas meter instance. `0` if the gas meter is infinite. -* `ConsumeGas(amount Gas, descriptor string)` consumes the amount of `gas` provided. If the `gas` overflows, it panics with the `descriptor` message. If the gas meter is not infinite, it panics if `gas` consumed goes above the limit. -* `RefundGas()` deducts the given amount from the gas consumed. This functionality enables refunding gas to the transaction or block gas pools so that EVM-compatible chains can fully support the go-ethereum StateDB interface. -* `IsPastLimit()` returns `true` if the amount of gas consumed by the gas meter instance is strictly above the limit, `false` otherwise. -* `IsOutOfGas()` returns `true` if the amount of gas consumed by the gas meter instance is above or equal to the limit, `false` otherwise. +/ Gas measured by the SDK +type Gas = uint64 -The gas meter is generally held in [`ctx`](/v0.47/learn/advanced/context), and consuming gas is done with the following pattern: +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} -``` -ctx.GasMeter().ConsumeGas(amount, "description") -``` +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. +type ErrorOutOfGas struct { + Descriptor string +} -By default, the Cosmos SDK makes use of two different gas meters, the [main gas meter](#main-gas-meter) and the [block gas meter](#block-gas-meter). +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. +type ErrorGasOverflow struct { + Descriptor string +} -### Main Gas Meter[​](#main-gas-meter "Direct link to Main Gas Meter") +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() -`ctx.GasMeter()` is the main gas meter of the application. 
The main gas meter is initialized in `BeginBlock` via `setDeliverState`, and then tracks gas consumption during execution sequences that lead to state-transitions, i.e. those originally triggered by [`BeginBlock`](/v0.47/learn/advanced/baseapp#beginblock), [`DeliverTx`](/v0.47/learn/advanced/baseapp#delivertx) and [`EndBlock`](/v0.47/learn/advanced/baseapp#endblock). At the beginning of each `DeliverTx`, the main gas meter **must be set to 0** in the [`AnteHandler`](#antehandler), so that it can track gas consumption per-transaction. +Gas + GasConsumedToLimit() -Gas consumption can be done manually, generally by the module developer in the [`BeginBlocker`, `EndBlocker`](/v0.47/build/building-modules/beginblock-endblock) or [`Msg` service](/v0.47/build/building-modules/msg-services), but most of the time it is done automatically whenever there is a read or write to the store. This automatic gas consumption logic is implemented in a special store called [`GasKv`](/v0.47/learn/advanced/store#gaskv-store). +Gas + GasRemaining() -### Block Gas Meter[​](#block-gas-meter "Direct link to Block Gas Meter") +Gas + Limit() -`ctx.BlockGasMeter()` is the gas meter used to track gas consumption per block and make sure it does not go above a certain limit. A new instance of the `BlockGasMeter` is created each time [`BeginBlock`](/v0.47/learn/advanced/baseapp#beginblock) is called. The `BlockGasMeter` is finite, and the limit of gas per block is defined in the application's consensus parameters. By default, Cosmos SDK applications use the default consensus parameters provided by CometBFT: +Gas + ConsumeGas(amount Gas, descriptor string) -types/params.go +RefundGas(amount Gas, descriptor string) -``` -loading... +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. 
+func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ NOTE: This behaviour is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. +func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. 
+/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. +/ NOTE: This behaviour is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. 
+func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the trasaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. +func (g *infiniteGasMeter) + +String() + +string { + return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed) +} + +/ GasConfig defines gas cost for each operation on KVStores +type GasConfig struct { + HasCost Gas + DeleteCost Gas + ReadCostFlat Gas + ReadCostPerByte Gas + WriteCostFlat Gas + WriteCostPerByte Gas + IterNextCostFlat Gas +} + +/ KVGasConfig returns a default gas config for KVStores. +func KVGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 1000, + DeleteCost: 1000, + ReadCostFlat: 1000, + ReadCostPerByte: 3, + WriteCostFlat: 2000, + WriteCostPerByte: 30, + IterNextCostFlat: 30, +} +} + +/ TransientGasConfig returns a default gas config for TransientStores. 
+func TransientGasConfig()
+
+GasConfig {
+ return GasConfig{
+ HasCost: 100,
+ DeleteCost: 100,
+ ReadCostFlat: 100,
+ ReadCostPerByte: 0,
+ WriteCostFlat: 200,
+ WriteCostPerByte: 3,
+ IterNextCostFlat: 3,
+}
+}
```
-[See full example on GitHub](https://github.com/cometbft/cometbft/blob/v0.37.0/types/params.go#L66-L105)
+where:
-When a new [transaction](/v0.47/learn/advanced/transactions) is being processed via `DeliverTx`, the current value of `BlockGasMeter` is checked to see if it is above the limit. If it is, `DeliverTx` returns immediately. This can happen even with the first transaction in a block, as `BeginBlock` itself can consume gas. If not, the transaction is processed normally. At the end of `DeliverTx`, the gas tracked by `ctx.BlockGasMeter()` is increased by the amount consumed to process the transaction:
+- `GasConsumed()` returns the amount of gas that was consumed by the gas meter instance.
+- `GasConsumedToLimit()` returns the amount of gas that was consumed by the gas meter instance, or the limit if it is reached.
+- `GasRemaining()` returns the gas left in the GasMeter.
+- `Limit()` returns the limit of the gas meter instance, or `0` if the gas meter is infinite.
+- `ConsumeGas(amount Gas, descriptor string)` consumes the amount of `gas` provided. If the `gas` overflows, it panics with the `descriptor` message. If the gas meter is not infinite, it panics if `gas` consumed goes above the limit.
+- `RefundGas()` deducts the given amount from the gas consumed. This functionality enables refunding gas to the transaction or block gas pools so that EVM-compatible chains can fully support the go-ethereum StateDB interface.
+- `IsPastLimit()` returns `true` if the amount of gas consumed by the gas meter instance is strictly above the limit, `false` otherwise.
+- `IsOutOfGas()` returns `true` if the amount of gas consumed by the gas meter instance is above or equal to the limit, `false` otherwise.
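To make these semantics concrete, here is a minimal stand-alone meter. This is an illustrative sketch only — not the SDK's `basicGasMeter` — and it omits the overflow handling the real implementation performs:

```go
package main

import "fmt"

// gasMeter is a toy meter mirroring the semantics described above.
type gasMeter struct {
	limit    uint64
	consumed uint64
}

// ConsumeGas adds gas and panics with the descriptor once the limit is crossed.
func (g *gasMeter) ConsumeGas(amount uint64, descriptor string) {
	g.consumed += amount
	if g.consumed > g.limit {
		panic("out of gas: " + descriptor)
	}
}

// IsOutOfGas is true when consumption has reached or passed the limit.
func (g *gasMeter) IsOutOfGas() bool { return g.consumed >= g.limit }

// IsPastLimit is true only when consumption is strictly above the limit.
func (g *gasMeter) IsPastLimit() bool { return g.consumed > g.limit }

// GasRemaining returns the gas left before the limit is reached.
func (g *gasMeter) GasRemaining() uint64 {
	if g.consumed >= g.limit {
		return 0
	}
	return g.limit - g.consumed
}

func main() {
	m := &gasMeter{limit: 100}
	m.ConsumeGas(60, "store read")
	fmt.Println(m.GasRemaining()) // prints 40
	m.ConsumeGas(40, "store write")
	fmt.Println(m.IsOutOfGas(), m.IsPastLimit()) // prints true false
}
```

Note how `IsOutOfGas()` and `IsPastLimit()` differ only at exactly the limit: a meter that has consumed precisely its limit is out of gas but not yet past it.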
+The gas meter is generally held in [`ctx`](/docs/sdk/v0.47/learn/advanced/context), and consuming gas is done with the following pattern: + +```go +ctx.GasMeter().ConsumeGas(amount, "description") +``` + +By default, the Cosmos SDK makes use of two different gas meters, the [main gas meter](#main-gas-meter) and the [block gas meter](#block-gas-meter). + +### Main Gas Meter + +`ctx.GasMeter()` is the main gas meter of the application. The main gas meter is initialized in `BeginBlock` via `setDeliverState`, and then tracks gas consumption during execution sequences that lead to state-transitions, i.e. those originally triggered by [`BeginBlock`](/docs/sdk/v0.47/learn/advanced/baseapp#beginblock), [`DeliverTx`](/docs/sdk/v0.47/learn/advanced/baseapp#delivertx) and [`EndBlock`](/docs/sdk/v0.47/learn/advanced/baseapp#endblock). At the beginning of each `DeliverTx`, the main gas meter **must be set to 0** in the [`AnteHandler`](#antehandler), so that it can track gas consumption per-transaction. + +Gas consumption can be done manually, generally by the module developer in the [`BeginBlocker`, `EndBlocker`](/docs/sdk/v0.47/documentation/module-system/beginblock-endblock) or [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services), but most of the time it is done automatically whenever there is a read or write to the store. This automatic gas consumption logic is implemented in a special store called [`GasKv`](/docs/sdk/v0.47/learn/advanced/store#gaskv-store). + +### Block Gas Meter + +`ctx.BlockGasMeter()` is the gas meter used to track gas consumption per block and make sure it does not go above a certain limit. A new instance of the `BlockGasMeter` is created each time [`BeginBlock`](/docs/sdk/v0.47/learn/advanced/baseapp#beginblock) is called. The `BlockGasMeter` is finite, and the limit of gas per block is defined in the application's consensus parameters. 
By default, Cosmos SDK applications use the default consensus parameters provided by CometBFT: + +```go expandable +package types + +import ( + + "errors" + "fmt" + "time" + "github.com/cometbft/cometbft/crypto/ed25519" + "github.com/cometbft/cometbft/crypto/secp256k1" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" +) + +const ( + / MaxBlockSizeBytes is the maximum permitted size of the blocks. + MaxBlockSizeBytes = 104857600 / 100MB + + / BlockPartSizeBytes is the size of one block part. + BlockPartSizeBytes uint32 = 65536 / 64kB + + / MaxBlockPartsCount is the maximum number of block parts. + MaxBlockPartsCount = (MaxBlockSizeBytes / BlockPartSizeBytes) + 1 + + ABCIPubKeyTypeEd25519 = ed25519.KeyType + ABCIPubKeyTypeSecp256k1 = secp256k1.KeyType +) + +var ABCIPubKeyTypesToNames = map[string]string{ + ABCIPubKeyTypeEd25519: ed25519.PubKeyName, + ABCIPubKeyTypeSecp256k1: secp256k1.PubKeyName, +} + +/ ConsensusParams contains consensus critical parameters that determine the +/ validity of blocks. +type ConsensusParams struct { + Block BlockParams `json:"block"` + Evidence EvidenceParams `json:"evidence"` + Validator ValidatorParams `json:"validator"` + Version VersionParams `json:"version"` +} + +/ BlockParams define limits on the block size and gas plus minimum time +/ between blocks. +type BlockParams struct { + MaxBytes int64 `json:"max_bytes"` + MaxGas int64 `json:"max_gas"` +} + +/ EvidenceParams determine how we handle evidence of malfeasance. +type EvidenceParams struct { + MaxAgeNumBlocks int64 `json:"max_age_num_blocks"` / only accept new evidence more recent than this + MaxAgeDuration time.Duration `json:"max_age_duration"` + MaxBytes int64 `json:"max_bytes"` +} + +/ ValidatorParams restrict the public key types validators can use. +/ NOTE: uses ABCI pubkey naming, not Amino names. 
type ValidatorParams struct {
	PubKeyTypes []string `json:"pub_key_types"`
}

type VersionParams struct {
	App uint64 `json:"app"`
}

// DefaultConsensusParams returns a default ConsensusParams.
func DefaultConsensusParams() *ConsensusParams {
	return &ConsensusParams{
		Block:     DefaultBlockParams(),
		Evidence:  DefaultEvidenceParams(),
		Validator: DefaultValidatorParams(),
		Version:   DefaultVersionParams(),
	}
}

// DefaultBlockParams returns a default BlockParams.
func DefaultBlockParams() BlockParams {
	return BlockParams{
		MaxBytes: 22020096, // 21MB
		MaxGas:   -1,
	}
}

// DefaultEvidenceParams returns a default EvidenceParams.
func DefaultEvidenceParams() EvidenceParams {
	return EvidenceParams{
		MaxAgeNumBlocks: 100000, // 27.8 hrs at 1block/s
		MaxAgeDuration:  48 * time.Hour,
		MaxBytes:        1048576, // 1MB
	}
}

// DefaultValidatorParams returns a default ValidatorParams, which allows
// only ed25519 pubkeys.
func DefaultValidatorParams() ValidatorParams {
	return ValidatorParams{
		PubKeyTypes: []string{ABCIPubKeyTypeEd25519},
	}
}

func DefaultVersionParams() VersionParams {
	return VersionParams{
		App: 0,
	}
}

func IsValidPubkeyType(params ValidatorParams, pubkeyType string) bool {
	for i := 0; i < len(params.PubKeyTypes); i++ {
		if params.PubKeyTypes[i] == pubkeyType {
			return true
		}
	}
	return false
}

// Validate validates the ConsensusParams to ensure all values are within their
// allowed limits, and returns an error if they are not.
func (params ConsensusParams) ValidateBasic() error {
	if params.Block.MaxBytes <= 0 {
		return fmt.Errorf("block.MaxBytes must be greater than 0. Got %d",
			params.Block.MaxBytes)
	}
	if params.Block.MaxBytes > MaxBlockSizeBytes {
		return fmt.Errorf("block.MaxBytes is too big. %d > %d",
			params.Block.MaxBytes, MaxBlockSizeBytes)
	}
	if params.Block.MaxGas < -1 {
		return fmt.Errorf("block.MaxGas must be greater or equal to -1. Got %d",
			params.Block.MaxGas)
	}
	if params.Evidence.MaxAgeNumBlocks <= 0 {
		return fmt.Errorf("evidence.MaxAgeNumBlocks must be greater than 0. Got %d",
			params.Evidence.MaxAgeNumBlocks)
	}
	if params.Evidence.MaxAgeDuration <= 0 {
		return fmt.Errorf("evidence.MaxAgeDuration must be greater than 0 if provided, Got %v",
			params.Evidence.MaxAgeDuration)
	}
	if params.Evidence.MaxBytes > params.Block.MaxBytes {
		return fmt.Errorf("evidence.MaxBytesEvidence is greater than upper bound, %d > %d",
			params.Evidence.MaxBytes, params.Block.MaxBytes)
	}
	if params.Evidence.MaxBytes < 0 {
		return fmt.Errorf("evidence.MaxBytes must be non-negative. Got: %d",
			params.Evidence.MaxBytes)
	}
	if len(params.Validator.PubKeyTypes) == 0 {
		return errors.New("len(Validator.PubKeyTypes) must be greater than 0")
	}

	// Check if keyType is a known ABCIPubKeyType
	for i := 0; i < len(params.Validator.PubKeyTypes); i++ {
		keyType := params.Validator.PubKeyTypes[i]
		if _, ok := ABCIPubKeyTypesToNames[keyType]; !ok {
			return fmt.Errorf("params.Validator.PubKeyTypes[%d], %s, is an unknown pubkey type",
				i, keyType)
		}
	}
	return nil
}

// Hash returns a hash of a subset of the parameters to store in the block header.
// Only the Block.MaxBytes and Block.MaxGas are included in the hash.
// This allows the ConsensusParams to evolve more without breaking the block
// protocol. No need for a Merkle tree here, just a small struct to hash.
func (params ConsensusParams) Hash() []byte {
	hasher := tmhash.New()
	hp := cmtproto.HashedParams{
		BlockMaxBytes: params.Block.MaxBytes,
		BlockMaxGas:   params.Block.MaxGas,
	}

	bz, err := hp.Marshal()
	if err != nil {
		panic(err)
	}

	_, err = hasher.Write(bz)
	if err != nil {
		panic(err)
	}
	return hasher.Sum(nil)
}

// Update returns a copy of the params with updates from the non-zero fields of p2.
// NOTE: must not modify the original
func (params ConsensusParams) Update(params2 *cmtproto.ConsensusParams) ConsensusParams {
	res := params // explicit copy
	if params2 == nil {
		return res
	}

	// we must defensively consider any structs may be nil
	if params2.Block != nil {
		res.Block.MaxBytes = params2.Block.MaxBytes
		res.Block.MaxGas = params2.Block.MaxGas
	}
	if params2.Evidence != nil {
		res.Evidence.MaxAgeNumBlocks = params2.Evidence.MaxAgeNumBlocks
		res.Evidence.MaxAgeDuration = params2.Evidence.MaxAgeDuration
		res.Evidence.MaxBytes = params2.Evidence.MaxBytes
	}
	if params2.Validator != nil {
		// Copy params2.Validator.PubKeyTypes, and set result's value to the copy.
		// This avoids having to initialize the slice to 0 values, and then write to it again.
		res.Validator.PubKeyTypes = append([]string{}, params2.Validator.PubKeyTypes...)
	}
	if params2.Version != nil {
		res.Version.App = params2.Version.App
	}
	return res
}

func (params *ConsensusParams) ToProto() cmtproto.ConsensusParams {
	return cmtproto.ConsensusParams{
		Block: &cmtproto.BlockParams{
			MaxBytes: params.Block.MaxBytes,
			MaxGas:   params.Block.MaxGas,
		},
		Evidence: &cmtproto.EvidenceParams{
			MaxAgeNumBlocks: params.Evidence.MaxAgeNumBlocks,
			MaxAgeDuration:  params.Evidence.MaxAgeDuration,
			MaxBytes:        params.Evidence.MaxBytes,
		},
		Validator: &cmtproto.ValidatorParams{
			PubKeyTypes: params.Validator.PubKeyTypes,
		},
		Version: &cmtproto.VersionParams{
			App: params.Version.App,
		},
	}
}

func ConsensusParamsFromProto(pbParams cmtproto.ConsensusParams) ConsensusParams {
	return ConsensusParams{
		Block: BlockParams{
			MaxBytes: pbParams.Block.MaxBytes,
			MaxGas:   pbParams.Block.MaxGas,
		},
		Evidence: EvidenceParams{
			MaxAgeNumBlocks: pbParams.Evidence.MaxAgeNumBlocks,
			MaxAgeDuration:  pbParams.Evidence.MaxAgeDuration,
			MaxBytes:        pbParams.Evidence.MaxBytes,
		},
		Validator: ValidatorParams{
			PubKeyTypes: pbParams.Validator.PubKeyTypes,
		},
		Version: VersionParams{
			App: pbParams.Version.App,
		},
	}
}
```

When a new [transaction](/docs/sdk/v0.47/learn/advanced/transactions) is being processed via `DeliverTx`, the current value of `BlockGasMeter` is checked to see if it is above the limit. If it is, `DeliverTx` returns immediately. This can happen even with the first transaction in a block, as `BeginBlock` itself can consume gas. If not, the transaction is processed normally. At the end of `DeliverTx`, the gas tracked by `ctx.BlockGasMeter()` is increased by the amount consumed to process the transaction:

```go
ctx.BlockGasMeter().ConsumeGas(
	ctx.GasMeter().GasConsumedToLimit(),
	"block gas meter",
)
```

## AnteHandler

The `AnteHandler` is run for every transaction during `CheckTx` and `DeliverTx`, before the Protobuf `Msg` service method is executed for each `sdk.Msg` in the transaction. The `anteHandler` is not implemented in the core Cosmos SDK but in a module. That said, most applications today use the default implementation defined in the [`auth` module](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth). Here is what the `anteHandler` is intended to do in a normal Cosmos SDK application:

- Verify that the transactions are of the correct type. Transaction types are defined in the module that implements the `anteHandler`, and they follow the transaction interface:

```go expandable
package types

import (
	"encoding/json"
	fmt "fmt"

	"github.com/cosmos/gogoproto/proto"

	"github.com/cosmos/cosmos-sdk/codec"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
)

type (
	// Msg defines the interface a transaction message must fulfill.
	Msg interface {
		proto.Message

		// ValidateBasic does a simple validation check that
		// doesn't require access to any other information.
		ValidateBasic() error

		// GetSigners returns the addrs of signers that must sign.
		// CONTRACT: All signatures must be present to be valid.
		// CONTRACT: Returns addrs in some deterministic order.
		GetSigners() []AccAddress
	}

	// Fee defines an interface for an application-defined concrete
	// transaction type to be able to set and return the transaction fee.
	Fee interface {
		GetGas() uint64
		GetAmount() Coins
	}

	// Signature defines an interface for an application-defined concrete
	// transaction type to be able to set and return transaction signatures.
	Signature interface {
		GetPubKey() cryptotypes.PubKey
		GetSignature() []byte
	}

	// Tx defines the interface a transaction must fulfill.
	Tx interface {
		// GetMsgs gets all the transaction's messages.
		GetMsgs() []Msg

		// ValidateBasic does a simple and lightweight validation check that doesn't
		// require access to any other information.
		ValidateBasic() error
	}

	// FeeTx defines the interface to be implemented by Tx to use the FeeDecorators
	FeeTx interface {
		Tx
		GetGas() uint64
		GetFee() Coins
		FeePayer() AccAddress
		FeeGranter() AccAddress
	}

	// TxWithMemo must have GetMemo() method to use ValidateMemoDecorator
	TxWithMemo interface {
		Tx
		GetMemo() string
	}

	// TxWithTimeoutHeight extends the Tx interface by allowing a transaction to
	// set a height timeout.
	TxWithTimeoutHeight interface {
		Tx

		GetTimeoutHeight() uint64
	}
)

// TxDecoder unmarshals transaction bytes
type TxDecoder func(txBytes []byte) (Tx, error)

// TxEncoder marshals transaction to bytes
type TxEncoder func(tx Tx) ([]byte, error)

// MsgTypeURL returns the TypeURL of a `sdk.Msg`.
func MsgTypeURL(msg Msg) string {
	return "/" + proto.MessageName(msg)
}

// GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL
func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) {
	var msg Msg
	bz, err := json.Marshal(struct {
		Type string `json:"@type"`
	}{
		Type: input,
	})
	if err != nil {
		return nil, err
	}
	if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil {
		return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err)
	}
	return msg, nil
}
```

This enables developers to play with various types for the transaction of their application. In the default `auth` module, the default transaction type is `Tx`:

```protobuf
// Tx is the standard type used for broadcasting transactions.
message Tx {
  // body is the processable content of the transaction
  TxBody body = 1;

  // auth_info is the authorization related content of the transaction,
  // specifically signers, signer modes and fee
  AuthInfo auth_info = 2;

  // signatures is a list of signatures that matches the length and order of
  // AuthInfo's signer_infos to allow connecting signature meta information like
  // public key and signing mode by position.
  repeated bytes signatures = 3;
}
```

- Verify signatures for each [`message`](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#messages) contained in the transaction. Each `message` should be signed by one or multiple sender(s), and these signatures must be verified in the `anteHandler`.
- During `CheckTx`, verify that the gas prices provided with the transaction are greater than the local `min-gas-prices` (as a reminder, gas-prices can be deduced from the following equation: `fees = gas * gas-prices`). `min-gas-prices` is a parameter local to each full-node and is used during `CheckTx` to discard transactions that do not provide a minimum amount of fees. This ensures that the mempool cannot be spammed with garbage transactions.
- Verify that the sender of the transaction has enough funds to cover the `fees`. When the end-user generates a transaction, they must indicate 2 of the 3 following parameters (the third one being implicit): `fees`, `gas` and `gas-prices`. This signals how much they are willing to pay for nodes to execute their transaction. The provided `gas` value is stored in a parameter called `GasWanted` for later use.
- Set `newCtx.GasMeter` to 0, with a limit of `GasWanted`. **This step is crucial**, as it not only makes sure the transaction cannot consume infinite gas, but also that `ctx.GasMeter` is reset in-between each `DeliverTx` (`ctx` is set to `newCtx` after `anteHandler` is run, and the `anteHandler` is run each time `DeliverTx` is called).
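The `fees = gas * gas-prices` check above can be sketched in a few lines of plain Go. This is an illustrative model only, not the actual `auth` module API: `requiredFee` and `passesCheckTx` are hypothetical names, and fees are assumed to be integer amounts in the chain's base denom.

```go
package main

import "fmt"

// requiredFee computes the minimum fee a node accepts for a transaction,
// following fees = gas * gas-prices (truncated to an integer amount).
func requiredFee(gasWanted uint64, minGasPrice float64) uint64 {
	return uint64(float64(gasWanted) * minGasPrice)
}

// passesCheckTx mimics the CheckTx fee check: the offered fee must cover
// the node-local minimum for the declared gas.
func passesCheckTx(gasWanted, offeredFee uint64, minGasPrice float64) bool {
	return offeredFee >= requiredFee(gasWanted, minGasPrice)
}

func main() {
	const gasWanted = 200000
	const minGasPrice = 0.025 // node-local min-gas-prices

	fmt.Println(requiredFee(gasWanted, minGasPrice))         // 5000
	fmt.Println(passesCheckTx(gasWanted, 5000, minGasPrice)) // true
	fmt.Println(passesCheckTx(gasWanted, 4999, minGasPrice)) // false
}
```

Because `min-gas-prices` is node-local, a transaction rejected by one node's `CheckTx` may still be accepted into another node's mempool; only the gas consumed during execution is consensus-critical.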
As explained above, the `anteHandler` returns a maximum limit of `gas` the transaction can consume during execution, called `GasWanted`. The actual amount consumed in the end is denoted `GasUsed`, and we must therefore have `GasUsed <= GasWanted`. Both `GasWanted` and `GasUsed` are relayed to the underlying consensus engine when [`DeliverTx`](/docs/sdk/v0.47/learn/advanced/baseapp#delivertx) returns.

diff --git a/docs/sdk/v0.47/learn/beginner/overview-app.mdx b/docs/sdk/v0.47/learn/beginner/overview-app.mdx
index 9175ec2b..009b72ae 100644
--- a/docs/sdk/v0.47/learn/beginner/overview-app.mdx
+++ b/docs/sdk/v0.47/learn/beginner/overview-app.mdx
@@ -1,216 +1,4695 @@
---
title: Overview of a Cosmos SDK Application
---

## Synopsis

This document describes the core parts of a Cosmos SDK application, represented throughout the document as a placeholder application named `app`.

## Node Client

The Daemon, or [Full-Node Client](/docs/sdk/v0.47/learn/advanced/node), is the core process of a Cosmos SDK-based blockchain. Participants in the network run this process to initialize their state-machine, connect with other full-nodes, and update their state-machine as new blocks come in.

```text expandable
                ^  +-------------------------------+  ^
                |  |                               |  |
                |  |  State-machine = Application  |  |
                |  |                               |  |   Built with Cosmos SDK
                |  |            ^      +           |  |
                |  +----------- | ABCI | ----------+  v
                |  |            +      v           |  ^
                |  |                               |  |
Blockchain Node |  |           Consensus           |  |
                |  |                               |  |
                |  +-------------------------------+  |   CometBFT
                |  |                               |  |
                |  |           Networking          |  |
                |  |                               |  |
                v  +-------------------------------+  v
```

The blockchain full-node presents itself as a binary, generally suffixed by `d` for "daemon" (e.g. `appd` for `app` or `gaiad` for `gaia`). This binary is built by running a simple [`main.go`](/docs/sdk/v0.47/learn/advanced/node#main-function) function placed in `./cmd/appd/`.
This operation usually happens through the [Makefile](#dependencies-and-makefile).

Once the main binary is built, the node can be started by running the [`start` command](/docs/sdk/v0.47/learn/advanced/node#start-command). This command function primarily does three things:

1. Create an instance of the state-machine defined in [`app.go`](#core-application-file).
2. Initialize the state-machine with the latest known state, extracted from the `db` stored in the `~/.app/data` folder. At this point, the state-machine is at height `appBlockHeight`.
3. Create and start a new CometBFT instance. Among other things, the node performs a handshake with its peers. It gets the latest `blockHeight` from them and replays blocks to sync to this height if it is greater than the local `appBlockHeight`. If the node starts from genesis, CometBFT sends an `InitChain` message via the ABCI to the `app`, which triggers the [`InitChainer`](#initchainer).

When starting a CometBFT instance, the genesis file is the `0` height and the state within the genesis file is committed at block height `1`. When querying the state of the node, querying block height 0 will return an error.

## Core Application File

In general, the core of the state-machine is defined in a file called `app.go`. This file mainly contains the **type definition of the application** and functions to **create and initialize it**.

### Type Definition of the Application

The first thing defined in `app.go` is the `type` of the application. It is generally comprised of the following parts:

- **A reference to [`baseapp`](/docs/sdk/v0.47/learn/advanced/baseapp).** The custom application defined in `app.go` is an extension of `baseapp`.
When a transaction is relayed by CometBFT to the application, `app` uses `baseapp`'s methods to route it to the appropriate module. `baseapp` implements most of the core logic for the application, including all the [ABCI methods](https://docs.cometbft.com/v0.37/spec/abci/) and the [routing logic](/docs/sdk/v0.47/learn/advanced/baseapp#routing).
- **A list of store keys**. The [store](/docs/sdk/v0.47/learn/advanced/store), which contains the entire state, is implemented as a [`multistore`](/docs/sdk/v0.47/learn/advanced/store#multistore) (i.e. a store of stores) in the Cosmos SDK. Each module uses one or multiple stores in the multistore to persist their part of the state. These stores can be accessed with specific keys that are declared in the `app` type. These keys, along with the `keepers`, are at the heart of the [object-capabilities model](/docs/sdk/v0.47/learn/advanced/ocap) of the Cosmos SDK.
- **A list of module `keeper`s.** Each module defines an abstraction called [`keeper`](/docs/sdk/v0.47/documentation/module-system/keeper), which handles reads and writes for this module's store(s). The `keeper` methods of one module can be called from other modules (if authorized), which is why they are declared in the application's type and exported as interfaces to other modules, so that the latter can only access the authorized functions.
- **A reference to an [`appCodec`](/docs/sdk/v0.47/learn/advanced/encoding).** The application's `appCodec` is used to serialize and deserialize data structures in order to store them, as stores can only persist `[]byte`. The default codec is [Protocol Buffers](/docs/sdk/v0.47/learn/advanced/encoding).
- **A reference to a [`legacyAmino`](/docs/sdk/v0.47/learn/advanced/encoding) codec.** Some parts of the Cosmos SDK have not been migrated to use the `appCodec` above, and are still hardcoded to use Amino. Other parts explicitly use Amino for backwards compatibility. For these reasons, the application still holds a reference to the legacy Amino codec. Please note that the Amino codec will be removed from the SDK in upcoming releases.
- **A reference to a [module manager](/docs/sdk/v0.47/documentation/module-system/module-manager#manager)** and a [basic module manager](/docs/sdk/v0.47/documentation/module-system/module-manager#basicmanager). The module manager is an object that contains a list of the application's modules. It facilitates operations related to these modules, like registering their [`Msg` service](/docs/sdk/v0.47/learn/advanced/baseapp#msg-services) and [gRPC `Query` service](/docs/sdk/v0.47/learn/advanced/baseapp#grpc-query-services), or setting the order of execution between modules for various functions like [`InitChainer`](#initchainer), [`BeginBlocker` and `EndBlocker`](#beginblocker-and-endblocker).

See an example of application type definition from `simapp`, the Cosmos SDK's own app used for demo and testing purposes:

```go expandable
//go:build app_v1

package simapp

import (
	"encoding/json"
	"io"
	"os"
	"path/filepath"

	autocliv1 "cosmossdk.io/api/cosmos/autocli/v1"
	reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1"
	"github.com/spf13/cast"
	abci "github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
	dbm "github.com/tendermint/tm-db"

	simappparams "cosmossdk.io/simapp/params"
	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/client/flags"
	nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node"
	"github.com/cosmos/cosmos-sdk/client/grpc/tmservice"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
	"github.com/cosmos/cosmos-sdk/runtime"
	runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services"
	"github.com/cosmos/cosmos-sdk/server"
	"github.com/cosmos/cosmos-sdk/server/api"
	"github.com/cosmos/cosmos-sdk/server/config"
servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper 
"github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + 
upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. + ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + +app.MountMemoryStores(memKeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(encodingConfig.TxConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + var genesisState 
GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetTKey(storeKey string) *storetypes.TransientStoreKey { + return app.tkeys[storeKey] +} + +/ GetMemKey returns the MemStoreKey for the provided mem key. 
+/ +/ NOTE: This is solely used for testing purposes. +func (app *SimApp) + +GetMemKey(storeKey string) *storetypes.MemoryStoreKey { + return app.memKeys[storeKey] +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new tendermint queries routes from grpc-gateway. + tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. 
+func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + tmservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + app.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter()) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable()) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} + +func makeEncodingConfig() + +simappparams.EncodingConfig { + encodingConfig := simappparams.MakeTestEncodingConfig() + +std.RegisterLegacyAminoCodec(encodingConfig.Amino) + 
+std.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino) + +ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +return encodingConfig +} +``` + +### Constructor Function + +Also defined in `app.go` is the constructor function, which constructs a new application of the type defined in the preceding section. The function must fulfill the `AppCreator` signature in order to be used in the [`start` command](/docs/sdk/v0.47/learn/advanced/node#start-command) of the application's daemon command. + +```go expandable +package types + +import ( + + "encoding/json" + "io" + "time" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + tmproto "github.com/tendermint/tendermint/proto/tendermint/types" + tmtypes "github.com/tendermint/tendermint/types" + dbm "github.com/tendermint/tm-db" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ ServerStartTime defines the time duration that the server need to stay running after startup +/ for the startup be considered successful +const ServerStartTime = 5 * time.Second + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. 
+ AppOptions interface { + Get(string) + +interface{ +} + +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. + Application interface { + abci.Application + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServer registers gRPC services directly with the gRPC + / server. + RegisterGRPCServer(grpc.Server) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). + RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for tendermint queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +sdk.CommitMultiStore +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []tmtypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams *tmproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. 
+ AppExporter func(log.Logger, dbm.DB, io.Writer, int64, bool, []string, AppOptions, []string) (ExportedApp, error) +) +``` + +Here are the main actions performed by this function: + +- Instantiate a new [`codec`](/docs/sdk/v0.47/learn/advanced/encoding) and initialize the `codec` of each of the application's modules using the [basic manager](/docs/sdk/v0.47/documentation/module-system/module-manager#basicmanager). +- Instantiate a new application with a reference to a `baseapp` instance, a codec, and all the appropriate store keys. +- Instantiate all the [`keeper`](#keeper) objects defined in the application's `type` using the `NewKeeper` function of each of the application's modules. Note that keepers must be instantiated in the correct order, as the `NewKeeper` of one module might require a reference to another module's `keeper`. +- Instantiate the application's [module manager](/docs/sdk/v0.47/documentation/module-system/module-manager#manager) with the [`AppModule`](#application-module-interface) object of each of the application's modules. +- With the module manager, initialize the application's [`Msg` services](/docs/sdk/v0.47/learn/advanced/baseapp#msg-services), [gRPC `Query` services](/docs/sdk/v0.47/learn/advanced/baseapp#grpc-query-services), [legacy `Msg` routes](/docs/sdk/v0.47/learn/advanced/baseapp#routing), and [legacy query routes](/docs/sdk/v0.47/learn/advanced/baseapp#query-routing). When a transaction is relayed to the application by CometBFT via the ABCI, it is routed to the appropriate module's [`Msg` service](#msg-services) using the routes defined here. Likewise, when a gRPC query request is received by the application, it is routed to the appropriate module's [`gRPC query service`](#grpc-query-services) using the gRPC routes defined here. The Cosmos SDK still supports legacy `Msg`s and legacy CometBFT queries, which are routed using the legacy `Msg` routes and the legacy query routes, respectively. 
+- With the module manager, register the [application's modules' invariants](/docs/sdk/v0.47/documentation/module-system/invariants). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](/docs/sdk/v0.47/documentation/module-system/invariants#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different from the predicted one, special logic defined in the invariant registry is triggered (usually the chain is halted). This is useful to ensure that no critical bug goes unnoticed and produces long-lasting effects that are hard to fix.
+- With the module manager, set the order of execution between the `InitGenesis`, `BeginBlocker`, and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions.
+- Set the remaining application parameters:
+  - [`InitChainer`](#initchainer): used to initialize the application when it is first started.
+  - [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endblocker): called at the beginning and at the end of every block.
+  - [`anteHandler`](/docs/sdk/v0.47/learn/advanced/baseapp#antehandler): used to handle fees and signature verification.
+- Mount the stores.
+- Return the application.
+
+Note that the constructor function only creates an instance of the app, while the actual state is either carried over from the `~/.app/data` folder if the node is restarted, or generated from the genesis file if the node is started for the first time.
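+The "lazily initialize" contract of `AppCreator` can be illustrated with a stripped-down, self-contained sketch. Everything below (`Logger`, `DB`, `Application`, `demoApp`) is an illustrative stand-in for the real SDK and CometBFT types, not the actual definitions; only the overall shape — a function value the daemon stores and invokes later to build the app — mirrors `servertypes.AppCreator`:
+
+```go
+package main
+
+import "fmt"
+
+// Illustrative stand-ins for log.Logger, dbm.DB and servertypes.Application.
+// These are placeholders for the sketch, not the SDK interfaces.
+type Logger interface{ Info(msg string) }
+type DB interface{}
+type Application interface{ Name() string }
+
+// AppCreator mirrors the shape of servertypes.AppCreator: a function that
+// the start command can call to lazily construct the application from
+// infrastructure handles supplied at runtime.
+type AppCreator func(logger Logger, db DB, opts map[string]any) Application
+
+type consoleLogger struct{}
+
+func (consoleLogger) Info(msg string) { fmt.Println("INFO:", msg) }
+
+type demoApp struct{ name string }
+
+func (a demoApp) Name() string { return a.name }
+
+// newDemoApp plays the role NewSimApp plays for the real AppCreator:
+// it receives the handles and returns a fully constructed application.
+func newDemoApp(logger Logger, db DB, opts map[string]any) Application {
+	logger.Info("constructing application")
+	return demoApp{name: "demo"}
+}
+
+func main() {
+	// The daemon stores the creator and only invokes it on `start`.
+	var creator AppCreator = newDemoApp
+	app := creator(consoleLogger{}, nil, nil)
+	fmt.Println(app.Name())
+}
+```
+
+Because the constructor is passed around as a value rather than called eagerly, the daemon can wire CLI flags, config files, and the database handle first, then build the app exactly once with everything in place.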
+ +See an example of application constructor from `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "io" + "os" + "path/filepath" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "github.com/spf13/cast" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + simappparams "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/client/grpc/tmservice" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper 
"github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper 
"github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + +app.MountMemoryStores(memKeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(encodingConfig.TxConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + var genesisState 
GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetTKey(storeKey string) *storetypes.TransientStoreKey { + return app.tkeys[storeKey] +} + +/ GetMemKey returns the MemStoreKey for the provided mem key. 
+/ +/ NOTE: This is solely used for testing purposes. +func (app *SimApp) + +GetMemKey(storeKey string) *storetypes.MemoryStoreKey { + return app.memKeys[storeKey] +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new tendermint queries routes from grpc-gateway. + tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. 
+func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + tmservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + app.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter()) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable()) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} + +func makeEncodingConfig() + +simappparams.EncodingConfig { + encodingConfig := simappparams.MakeTestEncodingConfig() + +std.RegisterLegacyAminoCodec(encodingConfig.Amino) + 
+std.RegisterInterfaces(encodingConfig.InterfaceRegistry)
+
+ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino)
+
+ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry)
+
+return encodingConfig
+}
+```
+
+### InitChainer
+
+The `InitChainer` is a function that initializes the state of the application from a genesis file (e.g. the token balances of genesis accounts). It is called when the application receives the `InitChain` message from the CometBFT engine, which happens when the node is started at `appBlockHeight == 0` (i.e. on genesis). The application must set the `InitChainer` in its [constructor](#constructor-function) via the [`SetInitChainer`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetInitChainer) method.
+
+In general, the `InitChainer` is mostly composed of the [`InitGenesis`](/docs/sdk/v0.47/documentation/module-system/genesis#initgenesis) functions of each of the application's modules. It works by calling the `InitGenesis` function of the [module manager](/docs/sdk/v0.47/documentation/module-system/module-manager), which in turn calls the `InitGenesis` function of each of the modules it contains. The order in which these functions are called must be set on the module manager with its `SetOrderInitGenesis` method; this is done in the [application's constructor](#constructor-function), and `SetOrderInitGenesis` must be called before `SetInitChainer`.
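Stripped of SDK machinery, the flow that `InitChainer` and the module manager implement can be sketched as a self-contained toy program. The module names, the `order` slice, and the sample `app_state` JSON below are illustrative stand-ins, not SDK APIs; only `GenesisState` being a `map[string]json.RawMessage` mirrors the real type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GenesisState mirrors the SDK convention: raw genesis JSON keyed by module name.
type GenesisState map[string]json.RawMessage

// initGenesis is a stand-in for a module's InitGenesis handler.
type initGenesis func(raw json.RawMessage)

func main() {
	// Toy module manager: one InitGenesis handler per module.
	modules := map[string]initGenesis{
		"auth": func(raw json.RawMessage) { fmt.Println("auth:", string(raw)) },
		"bank": func(raw json.RawMessage) { fmt.Println("bank:", string(raw)) },
	}

	// The SetOrderInitGenesis equivalent: auth must run before bank,
	// because bank balances reference accounts that auth creates.
	order := []string{"auth", "bank"}

	// The InitChain request carries the genesis app_state as raw bytes.
	appStateBytes := []byte(`{"auth":{"accounts":[]},"bank":{"balances":[]}}`)

	// The InitChainer unmarshals the bytes into a GenesisState...
	var genesisState GenesisState
	if err := json.Unmarshal(appStateBytes, &genesisState); err != nil {
		panic(err)
	}

	// ...and hands each module its slice of the genesis, in order.
	for _, name := range order {
		modules[name](genesisState[name])
	}
}
```

In the real application, the loop at the end is what `ModuleManager.InitGenesis` performs, using the order previously fixed by `SetOrderInitGenesis` in the constructor.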
+ +See an example of an `InitChainer` from `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "io" + "os" + "path/filepath" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "github.com/spf13/cast" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + simappparams "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/client/grpc/tmservice" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper 
"github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper 
"github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + +app.MountMemoryStores(memKeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(encodingConfig.TxConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + var genesisState 
GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetTKey(storeKey string) *storetypes.TransientStoreKey { + return app.tkeys[storeKey] +} + +/ GetMemKey returns the MemStoreKey for the provided mem key. 
+/ +/ NOTE: This is solely used for testing purposes. +func (app *SimApp) + +GetMemKey(storeKey string) *storetypes.MemoryStoreKey { + return app.memKeys[storeKey] +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new tendermint queries routes from grpc-gateway. + tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. 
+func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + tmservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + app.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter()) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable()) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} + +func makeEncodingConfig() + +simappparams.EncodingConfig { + encodingConfig := simappparams.MakeTestEncodingConfig() + +std.RegisterLegacyAminoCodec(encodingConfig.Amino) + 
std.RegisterInterfaces(encodingConfig.InterfaceRegistry)

ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino)

ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry)

return encodingConfig
}
```

### BeginBlocker and EndBlocker

The Cosmos SDK lets developers run code automatically as part of their application through two functions, `BeginBlocker` and `EndBlocker`. They are called when the application receives the `BeginBlock` and `EndBlock` messages from the CometBFT engine, at the beginning and end of each block respectively. The application must set its `BeginBlocker` and `EndBlocker` in its [constructor](#constructor-function) via the [`SetBeginBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetBeginBlocker) and [`SetEndBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetEndBlocker) methods.

In general, `BeginBlocker` and `EndBlocker` are mostly composed of the [`BeginBlock` and `EndBlock`](/docs/sdk/v0.47/documentation/module-system/beginblock-endblock) functions of the application's modules: the application calls the module manager's `BeginBlock` and `EndBlock` functions, which in turn call those of each module they contain. The order in which the modules' `BeginBlock` and `EndBlock` functions run must be set on the [module manager](/docs/sdk/v0.47/documentation/module-system/module-manager) with the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods in the [application's constructor](#constructor-function), and both must be called before `SetBeginBlocker` and `SetEndBlocker`.
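The ordering contract described above can be illustrated with a small sketch in plain Go. The `toyManager` type below is a hypothetical stand-in, not the SDK's real `module.Manager` API: the point is only that hooks run in the explicitly configured slice order, never in Go's randomized map-iteration order, which is what keeps `BeginBlocker` deterministic across all nodes.

```go
package main

import "fmt"

// toyModule stands in for an SDK module; the names and types in this
// file are illustrative only, not the SDK's actual interfaces.
type toyModule struct{ name string }

// toyManager mimics the module manager's ordering contract: hooks run
// in exactly the order configured via SetOrderBeginBlockers.
type toyManager struct {
	modules    map[string]toyModule
	beginOrder []string
}

func newToyManager() *toyManager {
	return &toyManager{modules: map[string]toyModule{
		"upgrade": {"upgrade"}, "mint": {"mint"}, "distr": {"distr"},
	}}
}

func (mm *toyManager) SetOrderBeginBlockers(names ...string) { mm.beginOrder = names }

// BeginBlock visits modules in the configured slice order, never in map
// iteration order, so every node replays the hooks identically.
func (mm *toyManager) BeginBlock() []string {
	var log []string
	for _, n := range mm.beginOrder {
		log = append(log, "begin:"+mm.modules[n].name)
	}
	return log
}

func main() {
	mm := newToyManager()
	// As in SimApp, the order is explicit: upgrade first, then mint, then distr.
	mm.SetOrderBeginBlockers("upgrade", "mint", "distr")
	fmt.Println(mm.BeginBlock()) // [begin:upgrade begin:mint begin:distr]
}
```

Iterating `mm.modules` directly would give a different order on each run; routing every call through the configured slice is the design choice that makes per-block hooks safe for consensus.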
+ +As a sidenote, it is important to remember that application-specific blockchains are deterministic. Developers must be careful not to introduce non-determinism in `BeginBlocker` or `EndBlocker`, and must also be careful not to make them too computationally expensive, as [gas](/docs/sdk/v0.47/learn/beginner/gas-fees) does not constrain the cost of `BeginBlocker` and `EndBlocker` execution. + +See an example of `BeginBlocker` and `EndBlocker` functions from `simapp` + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "io" + "os" + "path/filepath" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "github.com/spf13/cast" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + simappparams "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/client/grpc/tmservice" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper 
"github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper 
"github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required for apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + +app.MountMemoryStores(memKeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(encodingConfig.TxConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if either fails. + / + / The SDK exposes a default postHandlers chain, which comprises only + / one decorator: the Transaction Tips decorator.
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + var genesisState 
GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) -The Daemon, or [Full-Node Client](/v0.47/learn/advanced/node), is the core process of a Cosmos SDK-based blockchain. Participants in the network run this process to initialize their state-machine, connect with other full-nodes, and update their state-machine as new blocks come in. +TxConfig() -```
-                ^  +-------------------------------+  ^
-                |  |                               |  |
-                |  |  State-machine = Application  |  |
-                |  |                               |  |   Built with Cosmos SDK
-                |  |            ^      +           |  |
-                |  +----------- | ABCI | ----------+  v
-                |  |            +      v           |  ^
-                |  |                               |  |
-Blockchain Node |  |           Consensus           |  |
-                |  |                               |  |
-                |  +-------------------------------+  |   CometBFT
-                |  |                               |  |
-                |  |           Networking          |  |
-                |  |                               |  |
-                v  +-------------------------------+  v
-``` +client.TxConfig { + return app.txConfig +} -The blockchain full-node presents itself as a binary, generally suffixed by `-d` for "daemon" (e.g. `appd` for `app` or `gaiad` for `gaia`).
This binary is built by running a simple [`main.go`](/v0.47/learn/advanced/node#main-function) function placed in `./cmd/appd/`. This operation usually happens through the [Makefile](#dependencies-and-makefile). +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) -Once the main binary is built, the node can be started by running the [`start` command](/v0.47/learn/advanced/node#start-command). This command function primarily does three things: +DefaultGenesis() -1. Create an instance of the state-machine defined in [`app.go`](#core-application-file). -2. Initialize the state-machine with the latest known state, extracted from the `db` stored in the `~/.app/data` folder. At this point, the state-machine is at height `appBlockHeight`. -3. Create and start a new CometBFT instance. Among other things, the node performs a handshake with its peers. It gets the latest `blockHeight` from them and replays blocks to sync to this height if it is greater than the local `appBlockHeight`. The node starts from genesis and CometBFT sends an `InitChain` message via the ABCI to the `app`, which triggers the [`InitChainer`](#initchainer). +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} - - When starting a CometBFT instance, the genesis file is the `0` height and the state within the genesis file is committed at block height `1`. When querying the state of the node, querying block height 0 will return an error. - +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) -## Core Application File[​](#core-application-file "Direct link to Core Application File") +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} -In general, the core of the state-machine is defined in a file called `app.go`. 
This file mainly contains the **type definition of the application** and functions to **create and initialize it**. +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) -### Type Definition of the Application[​](#type-definition-of-the-application "Direct link to Type Definition of the Application") +GetTKey(storeKey string) *storetypes.TransientStoreKey { + return app.tkeys[storeKey] +} -The first thing defined in `app.go` is the `type` of the application. It is generally comprised of the following parts: +/ GetMemKey returns the MemStoreKey for the provided mem key. +/ +/ NOTE: This is solely used for testing purposes. +func (app *SimApp) -* **A reference to [`baseapp`](/v0.47/learn/advanced/baseapp).** The custom application defined in `app.go` is an extension of `baseapp`. When a transaction is relayed by CometBFT to the application, `app` uses `baseapp`'s methods to route them to the appropriate module. `baseapp` implements most of the core logic for the application, including all the [ABCI methods](https://docs.cometbft.com/v0.37/spec/abci/) and the [routing logic](/v0.47/learn/advanced/baseapp#routing). -* **A list of store keys**. The [store](/v0.47/learn/advanced/store), which contains the entire state, is implemented as a [`multistore`](/v0.47/learn/advanced/store#multistore) (i.e. a store of stores) in the Cosmos SDK. Each module uses one or multiple stores in the multistore to persist their part of the state. These stores can be accessed with specific keys that are declared in the `app` type. These keys, along with the `keepers`, are at the heart of the [object-capabilities model](/v0.47/learn/advanced/ocap) of the Cosmos SDK. -* **A list of module's `keeper`s.** Each module defines an abstraction called [`keeper`](/v0.47/build/building-modules/keeper), which handles reads and writes for this module's store(s). 
The `keeper`'s methods of one module can be called from other modules (if authorized), which is why they are declared in the application's type and exported as interfaces to other modules so that the latter can only access the authorized functions. -* **A reference to an [`appCodec`](/v0.47/learn/advanced/encoding).** The application's `appCodec` is used to serialize and deserialize data structures in order to store them, as stores can only persist `[]bytes`. The default codec is [Protocol Buffers](/v0.47/learn/advanced/encoding). -* **A reference to a [`legacyAmino`](/v0.47/learn/advanced/encoding) codec.** Some parts of the Cosmos SDK have not been migrated to use the `appCodec` above, and are still hardcoded to use Amino. Other parts explicitly use Amino for backwards compatibility. For these reasons, the application still holds a reference to the legacy Amino codec. Please note that the Amino codec will be removed from the SDK in the upcoming releases. -* **A reference to a [module manager](/v0.47/build/building-modules/module-manager#manager)** and a [basic module manager](/v0.47/build/building-modules/module-manager#basicmanager). The module manager is an object that contains a list of the application's modules. It facilitates operations related to these modules, like registering their [`Msg` service](/v0.47/learn/advanced/baseapp#msg-services) and [gRPC `Query` service](/v0.47/learn/advanced/baseapp#grpc-query-services), or setting the order of execution between modules for various functions like [`InitChainer`](#initchainer), [`BeginBlocker` and `EndBlocker`](#beginblocker-and-endblocker). +GetMemKey(storeKey string) *storetypes.MemoryStoreKey { + return app.memKeys[storeKey] +} -See an example of application type definition from `simapp`, the Cosmos SDK's own app used for demo and testing purposes: +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) -simapp/app.go +GetSubspace(moduleName string) -``` -loading... -``` +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app.go#L161-L203) +return subspace +} -### Constructor Function[​](#constructor-function "Direct link to Constructor Function") +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) -Also defined in `app.go` is the constructor function, which constructs a new application of the type defined in the preceding section. The function must fulfill the `AppCreator` signature in order to be used in the [`start` command](/v0.47/learn/advanced/node#start-command) of the application's daemon command. +SimulationManager() *module.SimulationManager { + return app.sm +} -server/types/app.go +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) -``` -loading... -``` +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/server/types/app.go#L64-L66) + / Register new tendermint queries routes from grpc-gateway. + tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) -Here are the main actions performed by this function: + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) -* Instantiate a new [`codec`](/v0.47/learn/advanced/encoding) and initialize the `codec` of each of the application's modules using the [basic manager](/v0.47/build/building-modules/module-manager#basicmanager). + / Register grpc-gateway routes for all modules. 
+ ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) -* Instantiate a new application with a reference to a `baseapp` instance, a codec, and all the appropriate store keys. + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} -* Instantiate all the [`keeper`](#keeper) objects defined in the application's `type` using the `NewKeeper` function of each of the application's modules. Note that keepers must be instantiated in the correct order, as the `NewKeeper` of one module might require a reference to another module's `keeper`. +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) -* Instantiate the application's [module manager](/v0.47/build/building-modules/module-manager#manager) with the [`AppModule`](#application-module-interface) object of each of the application's modules. +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} -* With the module manager, initialize the application's [`Msg` services](/v0.47/learn/advanced/baseapp#msg-services), [gRPC `Query` services](/v0.47/learn/advanced/baseapp#grpc-query-services), [legacy `Msg` routes](/v0.47/learn/advanced/baseapp#routing), and [legacy query routes](/v0.47/learn/advanced/baseapp#query-routing). When a transaction is relayed to the application by CometBFT via the ABCI, it is routed to the appropriate module's [`Msg` service](#msg-services) using the routes defined here. Likewise, when a gRPC query request is received by the application, it is routed to the appropriate module's [`gRPC query service`](#grpc-query-services) using the gRPC routes defined here. 
The Cosmos SDK still supports legacy `Msg`s and legacy CometBFT queries, which are routed using the legacy `Msg` routes and the legacy query routes, respectively. +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) -* With the module manager, register the [application's modules' invariants](/v0.47/build/building-modules/invariants). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](/v0.47/build/building-modules/invariants#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different than the predicted one, special logic defined in the invariant registry is triggered (usually the chain is halted). This is useful to make sure that no critical bug goes unnoticed, producing long-lasting effects that are hard to fix. +RegisterTendermintService(clientCtx client.Context) { + tmservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + app.Query, + ) +} -* With the module manager, set the order of execution between the `InitGenesis`, `BeginBlocker`, and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions. +func (app *SimApp) -* Set the remaining application parameters: +RegisterNodeService(clientCtx client.Context) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter()) +} - * [`InitChainer`](#initchainer): used to initialize the application when it is first started. - * [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endlbocker): called at the beginning and at the end of every block. - * [`anteHandler`](/v0.47/learn/advanced/baseapp#antehandler): used to handle fees and signature verification. 
+/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() -* Mount the stores. +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} -* Return the application. +return dupMaccPerms +} -Note that the constructor function only creates an instance of the app, while the actual state is either carried over from the `~/.app/data` folder if the node is restarted, or generated from the genesis file if the node is started for the first time. +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() -See an example of application constructor from `simapp`: +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} -simapp/app.go + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) -``` -loading... -``` +return modAccAddrs +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app.go#L214-L522) +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) -### InitChainer[​](#initchainer "Direct link to InitChainer") +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) -The `InitChainer` is a function that initializes the state of the application from a genesis file (i.e. token balances of genesis accounts). It is called when the application receives the `InitChain` message from the CometBFT engine, which happens when the node is started at `appBlockHeight == 0` (i.e. on genesis). 
The application must set the `InitChainer` in its [constructor](#constructor-function) via the [`SetInitChainer`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetInitChainer) method. +paramsKeeper.Subspace(authtypes.ModuleName) -In general, the `InitChainer` is mostly composed of the [`InitGenesis`](/v0.47/build/building-modules/genesis#initgenesis) function of each of the application's modules. This is done by calling the `InitGenesis` function of the module manager, which in turn calls the `InitGenesis` function of each of the modules it contains. Note that the order in which the modules' `InitGenesis` functions must be called has to be set in the module manager using the [module manager's](/v0.47/build/building-modules/module-manager) `SetOrderInitGenesis` method. This is done in the [application's constructor](#constructor-function), and the `SetOrderInitGenesis` has to be called before the `SetInitChainer`. +paramsKeeper.Subspace(banktypes.ModuleName) -See an example of an `InitChainer` from `simapp`: +paramsKeeper.Subspace(stakingtypes.ModuleName) -simapp/app.go +paramsKeeper.Subspace(minttypes.ModuleName) -``` -loading... -``` +paramsKeeper.Subspace(distrtypes.ModuleName) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app.go#L569-L577) +paramsKeeper.Subspace(slashingtypes.ModuleName) -### BeginBlocker and EndBlocker[​](#beginblocker-and-endblocker "Direct link to BeginBlocker and EndBlocker") +paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable()) -The Cosmos SDK offers developers the possibility to implement automatic execution of code as part of their application. This is implemented through two functions called `BeginBlocker` and `EndBlocker`. They are called when the application receives the `BeginBlock` and `EndBlock` messages from the CometBFT engine, which happens respectively at the beginning and at the end of each block. 
The application must set the `BeginBlocker` and `EndBlocker` in its [constructor](#constructor-function) via the [`SetBeginBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetBeginBlocker) and [`SetEndBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetEndBlocker) methods. +paramsKeeper.Subspace(crisistypes.ModuleName) -In general, the `BeginBlocker` and `EndBlocker` functions are mostly composed of the [`BeginBlock` and `EndBlock`](/v0.47/build/building-modules/beginblock-endblock) functions of each of the application's modules. This is done by calling the `BeginBlock` and `EndBlock` functions of the module manager, which in turn calls the `BeginBlock` and `EndBlock` functions of each of the modules it contains. Note that the order in which the modules' `BeginBlock` and `EndBlock` functions must be called has to be set in the module manager using the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods, respectively. This is done via the [module manager](/v0.47/build/building-modules/module-manager) in the [application's constructor](#constructor-function), and the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods have to be called before the `SetBeginBlocker` and `SetEndBlocker` functions. +return paramsKeeper +} -As a sidenote, it is important to remember that application-specific blockchains are deterministic. Developers must be careful not to introduce non-determinism in `BeginBlocker` or `EndBlocker`, and must also be careful not to make them too computationally expensive, as [gas](/v0.47/learn/beginner/gas-fees) does not constrain the cost of `BeginBlocker` and `EndBlocker` execution. +func makeEncodingConfig() -See an example of `BeginBlocker` and `EndBlocker` functions from `simapp` +simappparams.EncodingConfig { + encodingConfig := simappparams.MakeTestEncodingConfig() -simapp/app.go +std.RegisterLegacyAminoCodec(encodingConfig.Amino) -``` -loading... 
-``` +std.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app.go#L555-L563) +ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry) -### Register Codec[​](#register-codec "Direct link to Register Codec") +return encodingConfig +} +``` + +### Register Codec The `EncodingConfig` structure is the last important part of the `app.go` file. The goal of this structure is to define the codecs that will be used throughout the app. -simapp/params/encoding.go +```go expandable +package params -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/params/encoding.go#L9-L16) + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" +) + +/ EncodingConfig specifies the concrete encoding types to use for a given app. +/ This is provided for compatibility between protobuf and amino implementations. +type EncodingConfig struct { + InterfaceRegistry types.InterfaceRegistry + Codec codec.Codec + TxConfig client.TxConfig + Amino *codec.LegacyAmino +} +``` Here are descriptions of what each of the four fields means: -* `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). `Any` could be thought as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. 
Each application module implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations. +- `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). `Any` could be thought of as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. Each application module implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations. + - You can read more about `Any` in [ADR-019](/docs/common/pages/adr-comprehensive#adr-019-protocol-buffer-state-encoding). + - To go into more detail, the Cosmos SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/cosmos/gogoproto). By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](/docs/common/pages/adr-comprehensive#adr-019-protocol-buffer-state-encoding). +- `Codec`: The default codec used throughout the Cosmos SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to the users (for example, in the [CLI](#cli)).
By default, the SDK uses Protobuf as `Codec`. +- `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](/docs/sdk/v0.47/learn/advanced/transactions). +- `Amino`: Some legacy parts of the Cosmos SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAmino` method to register the module's specific types within Amino. This `Amino` codec should not be used by app developers anymore, and will be removed in future releases. - * You can read more about `Any` in [ADR-019](/v0.47/build/architecture/adr-019-protobuf-state-encoding). - * To go more into details, the Cosmos SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/cosmos/gogoproto). By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](/v0.47/build/architecture/adr-019-protobuf-state-encoding). +An application should create its own encoding config. +See an example of a `simappparams.EncodingConfig` from `simapp`: -* `Codec`: The default codec used throughout the Cosmos SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to the users (for example, in the [CLI](#cli)). 
By default, the SDK uses Protobuf as `Codec`. +```go expandable +/go:build app_v1 -* `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](/v0.47/learn/advanced/transactions). +package simapp -* `Amino`: Some legacy parts of the Cosmos SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAmino` method to register the module's specific types within Amino. This `Amino` codec should not be used by app developers anymore, and will be removed in future releases. +import ( -An application should create its own encoding config. See an example of a `simappparams.EncodingConfig` from `simapp`: + "encoding/json" + "io" + "os" + "path/filepath" -simapp/app.go + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "github.com/spf13/cast" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" -``` -loading... 
-``` + simappparams "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/client/grpc/tmservice" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus 
"github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes 
"github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required for apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + +app.MountMemoryStores(memKeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(encodingConfig.TxConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises only + / one decorator: the Transaction Tips decorator.
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app.go#L731-L738) +/ InitChainer application update at chain initialization +func (app *SimApp) 
-## Modules[​](#modules "Direct link to Modules") +InitChainer(ctx sdk.Context, req abci.RequestInitChain) -[Modules](/v0.47/build/building-modules/intro) are the heart and soul of Cosmos SDK applications. They can be considered as state-machines nested within the state-machine. When a transaction is relayed from the underlying CometBFT engine via the ABCI to the application, it is routed by [`baseapp`](/v0.47/learn/advanced/baseapp) to the appropriate module in order to be processed. This paradigm enables developers to easily build complex state-machines, as most of the modules they need often already exist. **For developers, most of the work involved in building a Cosmos SDK application revolves around building custom modules required by their application that do not exist yet, and integrating them with modules that do already exist into one coherent application**. In the application directory, the standard practice is to store modules in the `x/` folder (not to be confused with the Cosmos SDK's `x/` folder, which contains already-built modules). +abci.ResponseInitChain { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} -### Application Module Interface[​](#application-module-interface "Direct link to Application Module Interface") +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) -Modules must implement [interfaces](/v0.47/build/building-modules/module-manager#application-module-interfaces) defined in the Cosmos SDK, [`AppModuleBasic`](/v0.47/build/building-modules/module-manager#appmodulebasic) and [`AppModule`](/v0.47/build/building-modules/module-manager#appmodule). The former implements basic non-dependent elements of the module, such as the `codec`, while the latter handles the bulk of the module methods (including methods that require references to other modules' `keeper`s). 
Both the `AppModule` and `AppModuleBasic` types are, by convention, defined in a file called `module.go`. +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} -`AppModule` exposes a collection of useful methods on the module that facilitates the composition of modules into a coherent application. These methods are called from the [`module manager`](/v0.47/build/building-modules/module-manager#manager), which manages the application's collection of modules. +/ LoadHeight loads a particular height +func (app *SimApp) -### `Msg` Services[​](#msg-services "Direct link to msg-services") +LoadHeight(height int64) -Each application module defines two [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods. Each Protobuf `Msg` service method is 1:1 related to a Protobuf request type, which must implement `sdk.Msg` interface. Note that `sdk.Msg`s are bundled in [transactions](/v0.47/learn/advanced/transactions), and each transaction contains one or multiple messages. +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. 
+func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetTKey(storeKey string) *storetypes.TransientStoreKey { + return app.tkeys[storeKey] +} + +/ GetMemKey returns the MemStoreKey for the provided mem key. +/ +/ NOTE: This is solely used for testing purposes. +func (app *SimApp) + +GetMemKey(storeKey string) *storetypes.MemoryStoreKey { + return app.memKeys[storeKey] +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new tendermint queries routes from grpc-gateway. + tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + tmservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + app.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter()) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable()) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} + +func makeEncodingConfig() + +simappparams.EncodingConfig { + encodingConfig := simappparams.MakeTestEncodingConfig() + +std.RegisterLegacyAminoCodec(encodingConfig.Amino) + +std.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino) + +ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry) + +return encodingConfig +} +``` + +## Modules + +[Modules](/docs/sdk/v0.47/documentation/module-system/intro) are the heart and soul of Cosmos SDK applications. They can be considered as state-machines nested within the state-machine. When a transaction is relayed from the underlying CometBFT engine via the ABCI to the application, it is routed by [`baseapp`](/docs/sdk/v0.47/learn/advanced/baseapp) to the appropriate module in order to be processed. 
This paradigm enables developers to easily build complex state-machines, as most of the modules they need often already exist. **For developers, most of the work involved in building a Cosmos SDK application revolves around building custom modules required by their application that do not exist yet, and integrating them with modules that do already exist into one coherent application**. In the application directory, the standard practice is to store modules in the `x/` folder (not to be confused with the Cosmos SDK's `x/` folder, which contains already-built modules). + +### Application Module Interface + +Modules must implement [interfaces](/docs/sdk/v0.47/documentation/module-system/module-manager#application-module-interfaces) defined in the Cosmos SDK, [`AppModuleBasic`](/docs/sdk/v0.47/documentation/module-system/module-manager#appmodulebasic) and [`AppModule`](/docs/sdk/v0.47/documentation/module-system/module-manager#appmodule). The former implements basic non-dependent elements of the module, such as the `codec`, while the latter handles the bulk of the module methods (including methods that require references to other modules' `keeper`s). Both the `AppModule` and `AppModuleBasic` types are, by convention, defined in a file called `module.go`. + +`AppModule` exposes a collection of useful methods on the module that facilitates the composition of modules into a coherent application. These methods are called from the [`module manager`](/docs/sdk/v0.47/documentation/module-system/module-manager#manager), which manages the application's collection of modules. + +### `Msg` Services + +Each application module defines two [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods. 
+Each Protobuf `Msg` service method is 1:1 related to a Protobuf request type, which must implement `sdk.Msg` interface. +Note that `sdk.Msg`s are bundled in [transactions](/docs/sdk/v0.47/learn/advanced/transactions), and each transaction contains one or multiple messages. When a valid block of transactions is received by the full-node, CometBFT relays each one to the application via [`DeliverTx`](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#specifics-of-responsedelivertx). Then, the application handles the transaction: 1. Upon receiving the transaction, the application first unmarshalls it from `[]byte`. -2. Then, it verifies a few things about the transaction like [fee payment and signatures](/v0.47/learn/beginner/gas-fees#antehandler) before extracting the `Msg`(s) contained in the transaction. +2. Then, it verifies a few things about the transaction like [fee payment and signatures](/docs/sdk/v0.47/learn/beginner/gas-fees#antehandler) before extracting the `Msg`(s) contained in the transaction. 3. `sdk.Msg`s are encoded using Protobuf [`Any`s](#register-codec). By analyzing each `Any`'s `type_url`, baseapp's `msgServiceRouter` routes the `sdk.Msg` to the corresponding module's `Msg` service. 4. If the message is successfully processed, the state is updated. -For more details, see [transaction lifecycle](/v0.47/learn/beginner/tx-lifecycle). +For more details, see [transaction lifecycle](/docs/sdk/v0.47/learn/beginner/tx-lifecycle). Module developers create custom `Msg` services when they build their own module. The general practice is to define the `Msg` Protobuf service in a `tx.proto` file. For example, the `x/bank` module defines a service with two methods to transfer tokens: -proto/cosmos/bank/v1beta1/tx.proto +```protobuf +// Msg defines the bank Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; -``` -loading... -``` + // Send defines a method for sending coins from one account to another account. 
+ rpc Send(MsgSend) returns (MsgSendResponse); + + // MultiSend defines a method for sending coins from some accounts to other accounts. + rpc MultiSend(MsgMultiSend) returns (MsgMultiSendResponse); -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L13-L36) + // UpdateParams defines a governance operation for updating the x/bank module parameters. + // The authority is defined in the keeper. + // + // Since: cosmos-sdk 0.47 + rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse); + + // SetSendEnabled is a governance operation for setting the SendEnabled flag + // on any number of Denoms. Only the entries to add or update should be + // included. Entries that already exist in the store, but that aren't + // included in this message, will be left unchanged. + // + // Since: cosmos-sdk 0.47 + rpc SetSendEnabled(MsgSetSendEnabled) returns (MsgSetSendEnabledResponse); +} +``` Service methods use `keeper` in order to update the module state. Each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterMsgServer` function provided by the generated Protobuf code. -### gRPC `Query` Services[​](#grpc-query-services "Direct link to grpc-query-services") +### gRPC `Query` Services -gRPC `Query` services allow users to query the state using [gRPC](https://grpc.io). They are enabled by default, and can be configured under the `grpc.enable` and `grpc.address` fields inside [`app.toml`](/v0.47/user/run-node/interact-node#configuring-the-node-using-apptoml). +gRPC `Query` services allow users to query the state using [gRPC](https://grpc.io). They are enabled by default, and can be configured under the `grpc.enable` and `grpc.address` fields inside [`app.toml`](/docs/sdk/v0.47/user/run-node/interact-node#configuring-the-node-using-apptoml). 
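For reference, the gRPC server is configured in its own section of `app.toml`; a typical fragment looks roughly like the following (the address shown is the common default, so verify against your node's generated config):

```toml
[grpc]

# enable defines if the gRPC server should be enabled.
enable = true

# address defines the gRPC server address to bind to.
address = "0.0.0.0:9090"
```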
gRPC `Query` services are defined in the module's Protobuf definition files, specifically inside `query.proto`. The `query.proto` definition file exposes a single `Query` [Protobuf service](https://developers.google.com/protocol-buffers/docs/proto#services). Each gRPC query endpoint corresponds to a service method, starting with the `rpc` keyword, inside the `Query` service. @@ -218,81 +4697,674 @@ Protobuf generates a `QueryServer` interface for each module, containing all the Finally, each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterQueryServer` function provided by the generated Protobuf code. -### Keeper[​](#keeper "Direct link to Keeper") +### Keeper -[`Keepers`](/v0.47/build/building-modules/keeper) are the gatekeepers of their module's store(s). To read or write in a module's store, it is mandatory to go through one of its `keeper`'s methods. This is ensured by the [object-capabilities](/v0.47/learn/advanced/ocap) model of the Cosmos SDK. Only objects that hold the key to a store can access it, and only the module's `keeper` should hold the key(s) to the module's store(s). +[`Keepers`](/docs/sdk/v0.47/documentation/module-system/keeper) are the gatekeepers of their module's store(s). To read or write in a module's store, it is mandatory to go through one of its `keeper`'s methods. This is ensured by the [object-capabilities](/docs/sdk/v0.47/learn/advanced/ocap) model of the Cosmos SDK. Only objects that hold the key to a store can access it, and only the module's `keeper` should hold the key(s) to the module's store(s). `Keepers` are generally defined in a file called `keeper.go`. It contains the `keeper`'s type definition and methods. The `keeper` type definition generally consists of the following: -* **Key(s)** to the module's store(s) in the multistore. -* Reference to **other module's `keepers`**. 
Only needed if the `keeper` needs to access other module's store(s) (either to read or write from them). -* A reference to the application's **codec**. The `keeper` needs it to marshal structs before storing them, or to unmarshal them when it retrieves them, because stores only accept `[]bytes` as value. +- **Key(s)** to the module's store(s) in the multistore. +- Reference to **other modules' `keepers`**. Only needed if the `keeper` needs to access other modules' store(s) (either to read or write from them). +- A reference to the application's **codec**. The `keeper` needs it to marshal structs before storing them, or to unmarshal them when it retrieves them, because stores only accept `[]byte` as values. Along with the type definition, the next important component of the `keeper.go` file is the `keeper`'s constructor function, `NewKeeper`. This function instantiates a new `keeper` of the type defined above with a `codec`, store `keys`, and potentially references to other modules' `keeper`s as parameters. The `NewKeeper` function is called from the [application's constructor](#constructor-function). The rest of the file defines the `keeper`'s methods, which are primarily getters and setters. -### Command-Line, gRPC Services and REST Interfaces[​](#command-line-grpc-services-and-rest-interfaces "Direct link to Command-Line, gRPC Services and REST Interfaces") +### Command-Line, gRPC Services and REST Interfaces Each module defines command-line commands, gRPC services, and REST routes to be exposed to the end-user via the [application's interfaces](#application-interface). This enables end-users to create messages of the types defined in the module, or to query the subset of the state managed by the module. -#### CLI[​](#cli "Direct link to CLI") +#### CLI -Generally, the [commands related to a module](/v0.47/build/building-modules/module-interfaces#cli) are defined in a folder called `client/cli` in the module's folder.
The CLI divides commands into two categories, transactions and queries, defined in `client/cli/tx.go` and `client/cli/query.go`, respectively. Both commands are built on top of the [Cobra Library](https://github.com/spf13/cobra): +Generally, the [commands related to a module](/docs/sdk/v0.47/documentation/module-system/module-interfaces#cli) are defined in a folder called `client/cli` in the module's folder. The CLI divides commands into two categories, transactions and queries, defined in `client/cli/tx.go` and `client/cli/query.go`, respectively. Both commands are built on top of the [Cobra Library](https://github.com/spf13/cobra): -* Transactions commands let users generate new transactions so that they can be included in a block and eventually update the state. One command should be created for each defined in the module. The command calls the constructor of the message with the parameters provided by the end-user, and wraps it into a transaction. The Cosmos SDK handles signing and the addition of other transaction metadata. -* Queries let users query the subset of the state defined by the module. Query commands forward queries to the [application's query router](/v0.47/learn/advanced/baseapp#query-routing), which routes them to the appropriate the `queryRoute` parameter supplied. +- Transaction commands let users generate new transactions so that they can be included in a block and eventually update the state. One command should be created for each message type defined in the module. The command calls the constructor of the message with the parameters provided by the end-user, and wraps it into a transaction. The Cosmos SDK handles signing and the addition of other transaction metadata. +- Queries let users query the subset of the state defined by the module. Query commands forward queries to the [application's query router](/docs/sdk/v0.47/learn/advanced/baseapp#query-routing), which routes them to the appropriate querier using the `queryRoute` parameter supplied.
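The grouping described above can be pictured with a small, self-contained sketch. Note that this uses a stand-in `Command` type rather than the real Cobra API, and the module and command names (`send`, `balances`) are purely illustrative:

```go
package main

import "fmt"

// Command is a minimal stand-in for cobra.Command: just a name and subcommands.
type Command struct {
	Name string
	Subs []*Command
}

// AddCommand attaches subcommands, mirroring cobra's (*Command).AddCommand.
func (c *Command) AddCommand(subs ...*Command) {
	c.Subs = append(c.Subs, subs...)
}

// buildRootCmd aggregates one module's tx and query commands under the two
// parent commands, the way an application CLI gathers every module's
// commands from client/cli/tx.go and client/cli/query.go.
func buildRootCmd() *Command {
	// Commands a hypothetical bank-like module would contribute.
	moduleTxCmds := []*Command{{Name: "send"}}
	moduleQueryCmds := []*Command{{Name: "balances"}}

	txCmd := &Command{Name: "tx"}
	txCmd.AddCommand(moduleTxCmds...)

	queryCmd := &Command{Name: "query"}
	queryCmd.AddCommand(moduleQueryCmds...)

	root := &Command{Name: "appd"}
	root.AddCommand(txCmd, queryCmd)
	return root
}

func main() {
	root := buildRootCmd()
	// End-users would then invoke e.g. `appd tx send` or `appd query balances`.
	fmt.Println(root.Subs[0].Name, root.Subs[0].Subs[0].Name)
	fmt.Println(root.Subs[1].Name, root.Subs[1].Subs[0].Name)
}
```

In a real module, the `GetTxCmd` and `GetQueryCmd` methods on `AppModuleBasic` return the actual `*cobra.Command` trees that the application aggregates this way.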
-#### gRPC[​](#grpc "Direct link to gRPC") +#### gRPC [gRPC](https://grpc.io) is a modern open-source high performance RPC framework that has support in multiple languages. It is the recommended way for external clients (such as wallets, browsers and other backend services) to interact with a node. Each module can expose gRPC endpoints called [service methods](https://grpc.io/docs/what-is-grpc/core-concepts/#service-definition), which are defined in the [module's Protobuf `query.proto` file](#grpc-query-services). A service method is defined by its name, input arguments, and output response. The module then needs to perform the following actions: -* Define a `RegisterGRPCGatewayRoutes` method on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module. -* For each service method, define a corresponding handler. The handler implements the core logic necessary to serve the gRPC request, and is located in the `keeper/grpc_query.go` file. +- Define a `RegisterGRPCGatewayRoutes` method on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module. +- For each service method, define a corresponding handler. The handler implements the core logic necessary to serve the gRPC request, and is located in the `keeper/grpc_query.go` file. -#### gRPC-gateway REST Endpoints[​](#grpc-gateway-rest-endpoints "Direct link to gRPC-gateway REST Endpoints") +#### gRPC-gateway REST Endpoints Some external clients may not wish to use gRPC. In this case, the Cosmos SDK provides a gRPC gateway service, which exposes each gRPC service as a corresponding REST endpoint. Please refer to the [grpc-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) documentation to learn more. The REST endpoints are defined in the Protobuf files, along with the gRPC services, using Protobuf annotations. Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods. 
By default, all REST endpoints defined in the SDK have a URL starting with the `/cosmos/` prefix. -The Cosmos SDK also provides a development endpoint to generate [Swagger](https://swagger.io/) definition files for these REST endpoints. This endpoint can be enabled inside the [`app.toml`](/v0.47/user/run-node/run-node#configuring-the-node-using-apptoml) config file, under the `api.swagger` key. +The Cosmos SDK also provides a development endpoint to generate [Swagger](https://swagger.io/) definition files for these REST endpoints. This endpoint can be enabled inside the [`app.toml`](/docs/sdk/v0.47/user/run-node/run-node#configuring-the-node-using-apptoml) config file, under the `api.swagger` key. -## Application Interface[​](#application-interface "Direct link to Application Interface") +## Application Interface [Interfaces](#command-line-grpc-services-and-rest-interfaces) let end-users interact with full-node clients. This means querying data from the full-node or creating and sending new transactions to be relayed by the full-node and eventually included in a block. -The main interface is the [Command-Line Interface](/v0.47/learn/advanced/cli). The CLI of a Cosmos SDK application is built by aggregating [CLI commands](#cli) defined in each of the modules used by the application. The CLI of an application is the same as the daemon (e.g. `appd`), and is defined in a file called `appd/main.go`. The file contains the following: +The main interface is the [Command-Line Interface](/docs/sdk/v0.47/learn/advanced/cli). The CLI of a Cosmos SDK application is built by aggregating [CLI commands](#cli) defined in each of the modules used by the application. The CLI of an application is the same as the daemon (e.g. `appd`), and is defined in a file called `appd/main.go`. The file contains the following: -* **A `main()` function**, which is executed to build the `appd` interface client. This function prepares each command and adds them to the `rootCmd` before building them. 
At the root of `appd`, the function adds generic commands like `status`, `keys`, and `config`, query commands, tx commands, and `rest-server`. -* **Query commands**, which are added by calling the `queryCmd` function. This function returns a Cobra command that contains the query commands defined in each of the application's modules (passed as an array of `sdk.ModuleClients` from the `main()` function), as well as some other lower level query commands such as block or validator queries. Query command are called by using the command `appd query [query]` of the CLI. -* **Transaction commands**, which are added by calling the `txCmd` function. Similar to `queryCmd`, the function returns a Cobra command that contains the tx commands defined in each of the application's modules, as well as lower level tx commands like transaction signing or broadcasting. Tx commands are called by using the command `appd tx [tx]` of the CLI. +- **A `main()` function**, which is executed to build the `appd` interface client. This function prepares each command and adds them to the `rootCmd` before building them. At the root of `appd`, the function adds generic commands like `status`, `keys`, and `config`, query commands, tx commands, and `rest-server`. +- **Query commands**, which are added by calling the `queryCmd` function. This function returns a Cobra command that contains the query commands defined in each of the application's modules (passed as an array of `sdk.ModuleClients` from the `main()` function), as well as some other lower level query commands such as block or validator queries. Query commands are called by using the command `appd query [query]` of the CLI. +- **Transaction commands**, which are added by calling the `txCmd` function. Similar to `queryCmd`, the function returns a Cobra command that contains the tx commands defined in each of the application's modules, as well as lower level tx commands like transaction signing or broadcasting.
Tx commands are called by using the command `appd tx [tx]` of the CLI. See an example of an application's main command-line file from the [Cosmos Hub](https://github.com/cosmos/gaia). -cmd/gaiad/cmd/root.go +```go expandable +package cmd -``` -loading... -``` +import ( + + "errors" + "io" + "os" + "path/filepath" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/snapshots" + snapshottypes "github.com/cosmos/cosmos-sdk/snapshots/types" + "github.com/cosmos/cosmos-sdk/store" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" + "github.com/spf13/cast" + "github.com/spf13/cobra" + tmcfg "github.com/tendermint/tendermint/config" + tmcli "github.com/tendermint/tendermint/libs/cli" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + gaia "github.com/cosmos/gaia/v8/app" + "github.com/cosmos/gaia/v8/app/params" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. +func NewRootCmd() (*cobra.Command, params.EncodingConfig) { + encodingConfig := gaia.MakeTestEncodingConfig() + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). 
+ WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(gaia.DefaultNodeHome). + WithViper("") + rootCmd := &cobra.Command{ + Use: "gaiad", + Short: "Stargate Cosmos Hub App", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err = client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customTemplate, customGaiaConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customTemplate, customGaiaConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd, encodingConfig +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. 
+func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +func initAppConfig() (string, interface{ +}) { + srvCfg := serverconfig.DefaultConfig() + +srvCfg.StateSync.SnapshotInterval = 1000 + srvCfg.StateSync.SnapshotKeepRecent = 10 + + return params.CustomConfigTemplate(), params.CustomAppConfig{ + Config: *srvCfg, + BypassMinFeeMsgTypes: gaia.GetDefaultBypassFeeMessages(), +} +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(gaia.ModuleBasics, gaia.DefaultNodeHome), + genutilcli.CollectGenTxsCmd(banktypes.GenesisBalancesIterator{ +}, gaia.DefaultNodeHome), + genutilcli.GenTxCmd(gaia.ModuleBasics, encodingConfig.TxConfig, banktypes.GenesisBalancesIterator{ +}, gaia.DefaultNodeHome), + genutilcli.ValidateGenesisCmd(gaia.ModuleBasics), + AddGenesisAccountCmd(gaia.DefaultNodeHome), + tmcli.NewCompletionCmd(rootCmd, true), + testnetCmd(gaia.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + ) + ac := appCreator{ + encCfg: encodingConfig, +} -[See full example on GitHub](https://github.com/cosmos/gaia/blob/26ae7c2/cmd/gaiad/cmd/root.go#L39-L80) +server.AddCommands(rootCmd, gaia.DefaultNodeHome, ac.newApp, ac.appExport, addModuleInitFlags) -## Dependencies and Makefile[​](#dependencies-and-makefile "Direct link to Dependencies and Makefile") + / add keybase, auxiliary RPC, query, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + queryCommand(), + txCommand(), + keys.Commands(gaia.DefaultNodeHome), + ) + +rootCmd.AddCommand(server.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +func queryCommand() *cobra.Command 
{ + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q"}, + Short: "Querying subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +gaia.ModuleBasics.AddQueryCommands(cmd) + +cmd.PersistentFlags().String(flags.FlagChainID, "", "The network chain ID") + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + flags.LineBreak, + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +gaia.ModuleBasics.AddTxCommands(cmd) + +cmd.PersistentFlags().String(flags.FlagChainID, "", "The network chain ID") + +return cmd +} + +type appCreator struct { + encCfg params.EncodingConfig +} + +func (ac appCreator) newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) servertypes.Application { + var cache sdk.MultiStorePersistentCache + if cast.ToBool(appOpts.Get(server.FlagInterBlockCache)) { + cache = store.NewCommitKVStoreCacheManager() +} + skipUpgradeHeights := make(map[int64]bool) + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + +pruningOpts, err := server.GetPruningOptionsFromFlags(appOpts) + if err != nil { + panic(err) +} + snapshotDir := filepath.Join(cast.ToString(appOpts.Get(flags.FlagHome)), "data", "snapshots") + +snapshotDB, err := dbm.NewDB("metadata", server.GetAppDBBackend(appOpts), snapshotDir) + if err != nil { + panic(err) +} + +snapshotStore, err := snapshots.NewStore(snapshotDB, snapshotDir) + if err != nil { + panic(err) +} + snapshotOptions := snapshottypes.NewSnapshotOptions( + cast.ToUint64(appOpts.Get(server.FlagStateSyncSnapshotInterval)), + cast.ToUint32(appOpts.Get(server.FlagStateSyncSnapshotKeepRecent)), + ) + +return gaia.NewGaiaApp( + logger, db, traceStore, true, skipUpgradeHeights, + cast.ToString(appOpts.Get(flags.FlagHome)), + cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)), + ac.encCfg, + appOpts, + baseapp.SetPruning(pruningOpts), + baseapp.SetMinGasPrices(cast.ToString(appOpts.Get(server.FlagMinGasPrices))), + baseapp.SetHaltHeight(cast.ToUint64(appOpts.Get(server.FlagHaltHeight))), + baseapp.SetHaltTime(cast.ToUint64(appOpts.Get(server.FlagHaltTime))), + baseapp.SetMinRetainBlocks(cast.ToUint64(appOpts.Get(server.FlagMinRetainBlocks))), + baseapp.SetInterBlockCache(cache), + baseapp.SetTrace(cast.ToBool(appOpts.Get(server.FlagTrace))), + baseapp.SetIndexEvents(cast.ToStringSlice(appOpts.Get(server.FlagIndexEvents))), + baseapp.SetSnapshot(snapshotStore, snapshotOptions), + ) +} + +func (ac appCreator) appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, +) (servertypes.ExportedApp, error) { + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{}, errors.New("application home is not set") +} + +var loadLatest bool + if height == -1 { + loadLatest = true +} + gaiaApp := gaia.NewGaiaApp( + logger, + db, + traceStore, + loadLatest, + map[int64]bool{}, + homePath, + cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)), + ac.encCfg, + appOpts, + ) + if height != -1 { + if err := gaiaApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{}, err +} + +} + +return gaiaApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs) +} +``` + 
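The `appCreator` above reads every runtime option (pruning, halt height, snapshots, and so on) through the generic `servertypes.AppOptions` lookup rather than from typed configuration. A stripped-down, dependency-free sketch of that pattern follows; the `AppOptions` shape mirrors the SDK interface, but `mapOpts`, `App`, and the option keys here are illustrative stand-ins, not SDK types:

```go
package main

import "fmt"

// AppOptions mirrors the read-only options interface the constructor
// receives: a string-keyed lookup over parsed config and flags.
type AppOptions interface {
	Get(key string) interface{}
}

// mapOpts is a toy AppOptions backed by a map (a stand-in for viper).
type mapOpts map[string]interface{}

func (m mapOpts) Get(key string) interface{} { return m[key] }

// App is a placeholder for the constructed application.
type App struct {
	home        string
	minGasPrice string
	interBlock  bool
}

// appCreator bundles what the constructor needs up front (the encoding
// config in the real snippet); everything else comes from AppOptions.
type appCreator struct{}

// newApp reads each option defensively with a type assertion, the same
// role the cast.To* helpers play in the real constructor.
func (ac appCreator) newApp(opts AppOptions) *App {
	app := &App{}
	if v, ok := opts.Get("home").(string); ok {
		app.home = v
	}
	if v, ok := opts.Get("minimum-gas-prices").(string); ok {
		app.minGasPrice = v
	}
	if v, ok := opts.Get("inter-block-cache").(bool); ok {
		app.interBlock = v
	}
	return app
}

func main() {
	opts := mapOpts{
		"home":               "/tmp/simapp",
		"minimum-gas-prices": "0.025stake",
		"inter-block-cache":  true,
	}
	app := appCreator{}.newApp(opts)
	fmt.Println(app.home, app.minGasPrice, app.interBlock)
}
```

The indirection is what lets the SDK server machinery construct the application without knowing its concrete config type: it only needs to hand the constructor something satisfying `AppOptions`.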
+## Dependencies and Makefile This section is optional, as developers are free to choose their dependency manager and project build method. That said, the most widely used tool for dependency versioning is [`go.mod`](https://github.com/golang/go/wiki/Modules). It ensures that each library used throughout the application is imported at the correct version. The following is the `go.mod` of the [Cosmos Hub](https://github.com/cosmos/gaia), provided as an example. -go.mod +```go expandable +module github.com/cosmos/gaia/v8 -``` -loading... -``` +go 1.18 + +require ( + cosmossdk.io/math v1.0.0-beta.3 + github.com/cosmos/cosmos-sdk v0.46.2 + github.com/cosmos/go-bip39 v1.0.0 / indirect + github.com/cosmos/ibc-go/v5 v5.0.0 + github.com/gogo/protobuf v1.3.3 + github.com/golang/protobuf v1.5.2 + github.com/golangci/golangci-lint v1.50.0 + github.com/gorilla/mux v1.8.0 + github.com/gravity-devs/liquidity/v2 v2.0.0 + github.com/grpc-ecosystem/grpc-gateway v1.16.0 + github.com/pkg/errors v0.9.1 + github.com/rakyll/statik v0.1.7 + github.com/spf13/cast v1.5.0 + github.com/spf13/cobra v1.6.0 + github.com/spf13/pflag v1.0.5 + github.com/spf13/viper v1.13.0 + github.com/strangelove-ventures/packet-forward-middleware/v2 v2.1.4-0.20220802012200-5a62a55a7f1d + github.com/stretchr/testify v1.8.0 + github.com/tendermint/tendermint v0.34.21 + github.com/tendermint/tm-db v0.6.7 + google.golang.org/genproto v0.0.0-20220815135757-37a418bb8959 + google.golang.org/grpc v1.50.0 +) -[See full example on GitHub](https://github.com/cosmos/gaia/blob/26ae7c2/go.mod#L1-L28) +require ( + 4d63.com/gochecknoglobals v0.1.0 / indirect + cloud.google.com/go v0.102.1 / indirect + cloud.google.com/go/compute v1.7.0 / indirect + cloud.google.com/go/iam v0.4.0 / indirect + cloud.google.com/go/storage v1.22.1 / indirect + cosmossdk.io/errors v1.0.0-beta.7 / indirect + filippo.io/edwards25519 v1.0.0-rc.1 / indirect + github.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4 / indirect + 
github.com/99designs/keyring v1.2.1 / indirect + github.com/Abirdcfly/dupword v0.0.7 / indirect + github.com/Antonboom/errname v0.1.7 / indirect + github.com/Antonboom/nilnil v0.1.1 / indirect + github.com/BurntSushi/toml v1.2.0 / indirect + github.com/ChainSafe/go-schnorrkel v0.0.0-20200405005733-88cbf1b4c40d / indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 / indirect + github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0 / indirect + github.com/Masterminds/semver v1.5.0 / indirect + github.com/OpenPeeDeeP/depguard v1.1.1 / indirect + github.com/Workiva/go-datastructures v1.0.53 / indirect + github.com/alexkohler/prealloc v1.0.0 / indirect + github.com/alingse/asasalint v0.0.11 / indirect + github.com/armon/go-metrics v0.4.0 / indirect + github.com/ashanbrown/forbidigo v1.3.0 / indirect + github.com/ashanbrown/makezero v1.1.1 / indirect + github.com/aws/aws-sdk-go v1.40.45 / indirect + github.com/beorn7/perks v1.0.1 / indirect + github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d / indirect + github.com/bgentry/speakeasy v0.1.0 / indirect + github.com/bkielbasa/cyclop v1.2.0 / indirect + github.com/blizzy78/varnamelen v0.8.0 / indirect + github.com/bombsimon/wsl/v3 v3.3.0 / indirect + github.com/breml/bidichk v0.2.3 / indirect + github.com/breml/errchkjson v0.3.0 / indirect + github.com/btcsuite/btcd v0.22.1 / indirect + github.com/butuzov/ireturn v0.1.1 / indirect + github.com/cenkalti/backoff/v4 v4.1.3 / indirect + github.com/cespare/xxhash v1.1.0 / indirect + github.com/cespare/xxhash/v2 v2.1.2 / indirect + github.com/charithe/durationcheck v0.0.9 / indirect + github.com/chavacava/garif v0.0.0-20220630083739-93517212f375 / indirect + github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e / indirect + github.com/cockroachdb/apd/v2 v2.0.2 / indirect + github.com/coinbase/rosetta-sdk-go v0.7.9 / indirect + github.com/confio/ics23/go v0.7.0 / indirect + github.com/cosmos/btcutil v1.0.4 / indirect + 
github.com/cosmos/cosmos-proto v1.0.0-alpha7 / indirect + github.com/cosmos/gorocksdb v1.2.0 / indirect + github.com/cosmos/iavl v0.19.2-0.20220916140702-9b6be3095313 / indirect + github.com/cosmos/ledger-cosmos-go v0.11.1 / indirect + github.com/cosmos/ledger-go v0.9.3 / indirect + github.com/creachadair/taskgroup v0.3.2 / indirect + github.com/curioswitch/go-reassign v0.2.0 / indirect + github.com/daixiang0/gci v0.8.0 / indirect + github.com/danieljoos/wincred v1.1.2 / indirect + github.com/davecgh/go-spew v1.1.1 / indirect + github.com/denis-tingaikin/go-header v0.4.3 / indirect + github.com/desertbit/timer v0.0.0-20180107155436-c41aec40b27f / indirect + github.com/dgraph-io/badger/v2 v2.2007.4 / indirect + github.com/dgraph-io/ristretto v0.1.0 / indirect + github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 / indirect + github.com/dustin/go-humanize v1.0.0 / indirect + github.com/dvsekhvalnov/jose2go v1.5.0 / indirect + github.com/esimonov/ifshort v1.0.4 / indirect + github.com/ettle/strcase v0.1.1 / indirect + github.com/fatih/color v1.13.0 / indirect + github.com/fatih/structtag v1.2.0 / indirect + github.com/felixge/httpsnoop v1.0.1 / indirect + github.com/firefart/nonamedreturns v1.0.4 / indirect + github.com/fsnotify/fsnotify v1.5.4 / indirect + github.com/fzipp/gocyclo v0.6.0 / indirect + github.com/go-critic/go-critic v0.6.5 / indirect + github.com/go-kit/kit v0.12.0 / indirect + github.com/go-kit/log v0.2.1 / indirect + github.com/go-logfmt/logfmt v0.5.1 / indirect + github.com/go-playground/validator/v10 v10.4.1 / indirect + github.com/go-toolsmith/astcast v1.0.0 / indirect + github.com/go-toolsmith/astcopy v1.0.2 / indirect + github.com/go-toolsmith/astequal v1.0.3 / indirect + github.com/go-toolsmith/astfmt v1.0.0 / indirect + github.com/go-toolsmith/astp v1.0.0 / indirect + github.com/go-toolsmith/strparse v1.0.0 / indirect + github.com/go-toolsmith/typep v1.0.2 / indirect + github.com/go-xmlfmt/xmlfmt v0.0.0-20191208150333-d5b6f63a941b / 
indirect + github.com/gobwas/glob v0.2.3 / indirect + github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 / indirect + github.com/gofrs/flock v0.8.1 / indirect + github.com/gogo/gateway v1.1.0 / indirect + github.com/golang/glog v1.0.0 / indirect + github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da / indirect + github.com/golang/snappy v0.0.4 / indirect + github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 / indirect + github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a / indirect + github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe / indirect + github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2 / indirect + github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 / indirect + github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca / indirect + github.com/golangci/misspell v0.3.5 / indirect + github.com/golangci/revgrep v0.0.0-20220804021717-745bb2f7c2e6 / indirect + github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 / indirect + github.com/google/btree v1.0.1 / indirect + github.com/google/go-cmp v0.5.9 / indirect + github.com/google/orderedcode v0.0.1 / indirect + github.com/google/uuid v1.3.0 / indirect + github.com/googleapis/enterprise-certificate-proxy v0.1.0 / indirect + github.com/googleapis/gax-go/v2 v2.4.0 / indirect + github.com/googleapis/go-type-adapters v1.0.0 / indirect + github.com/gordonklaus/ineffassign v0.0.0-20210914165742-4cc7213b9bc8 / indirect + github.com/gorilla/handlers v1.5.1 / indirect + github.com/gorilla/websocket v1.5.0 / indirect + github.com/gostaticanalysis/analysisutil v0.7.1 / indirect + github.com/gostaticanalysis/comment v1.4.2 / indirect + github.com/gostaticanalysis/forcetypeassert v0.1.0 / indirect + github.com/gostaticanalysis/nilerr v0.1.1 / indirect + github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 / indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.0.1 / indirect + github.com/gsterjov/go-libsecret 
v0.0.0-20161001094733-a6f4afe4910c / indirect + github.com/gtank/merlin v0.1.1 / indirect + github.com/gtank/ristretto255 v0.1.2 / indirect + github.com/hashicorp/errwrap v1.1.0 / indirect + github.com/hashicorp/go-cleanhttp v0.5.2 / indirect + github.com/hashicorp/go-getter v1.6.1 / indirect + github.com/hashicorp/go-immutable-radix v1.3.1 / indirect + github.com/hashicorp/go-multierror v1.1.1 / indirect + github.com/hashicorp/go-safetemp v1.0.0 / indirect + github.com/hashicorp/go-version v1.6.0 / indirect + github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d / indirect + github.com/hashicorp/hcl v1.0.0 / indirect + github.com/hdevalence/ed25519consensus v0.0.0-20220222234857-c00d1f31bab3 / indirect + github.com/hexops/gotextdiff v1.0.3 / indirect + github.com/improbable-eng/grpc-web v0.15.0 / indirect + github.com/inconshreveable/mousetrap v1.0.1 / indirect + github.com/jgautheron/goconst v1.5.1 / indirect + github.com/jingyugao/rowserrcheck v1.1.1 / indirect + github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af / indirect + github.com/jmespath/go-jmespath v0.4.0 / indirect + github.com/jmhodges/levigo v1.0.0 / indirect + github.com/julz/importas v0.1.0 / indirect + github.com/kisielk/errcheck v1.6.2 / indirect + github.com/kisielk/gotool v1.0.0 / indirect + github.com/kkHAIKE/contextcheck v1.1.2 / indirect + github.com/klauspost/compress v1.15.9 / indirect + github.com/kulti/thelper v0.6.3 / indirect + github.com/kunwardeep/paralleltest v1.0.6 / indirect + github.com/kyoh86/exportloopref v0.1.8 / indirect + github.com/ldez/gomoddirectives v0.2.3 / indirect + github.com/ldez/tagliatelle v0.3.1 / indirect + github.com/leonklingele/grouper v1.1.0 / indirect + github.com/lib/pq v1.10.6 / indirect + github.com/libp2p/go-buffer-pool v0.1.0 / indirect + github.com/lufeee/execinquery v1.2.1 / indirect + github.com/magiconair/properties v1.8.6 / indirect + github.com/manifoldco/promptui v0.9.0 / indirect + 
github.com/maratori/testableexamples v1.0.0 / indirect + github.com/maratori/testpackage v1.1.0 / indirect + github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 / indirect + github.com/mattn/go-colorable v0.1.13 / indirect + github.com/mattn/go-isatty v0.0.16 / indirect + github.com/mattn/go-runewidth v0.0.9 / indirect + github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 / indirect + github.com/mbilski/exhaustivestruct v1.2.0 / indirect + github.com/mgechev/revive v1.2.4 / indirect + github.com/mimoo/StrobeGo v0.0.0-20181016162300-f8f6d4d2b643 / indirect + github.com/minio/highwayhash v1.0.2 / indirect + github.com/mitchellh/go-homedir v1.1.0 / indirect + github.com/mitchellh/go-testing-interface v1.0.0 / indirect + github.com/mitchellh/mapstructure v1.5.0 / indirect + github.com/moricho/tparallel v0.2.1 / indirect + github.com/mtibben/percent v0.2.1 / indirect + github.com/nakabonne/nestif v0.3.1 / indirect + github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 / indirect + github.com/nishanths/exhaustive v0.8.3 / indirect + github.com/nishanths/predeclared v0.2.2 / indirect + github.com/olekukonko/tablewriter v0.0.5 / indirect + github.com/pelletier/go-toml v1.9.5 / indirect + github.com/pelletier/go-toml/v2 v2.0.5 / indirect + github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5 / indirect + github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d / indirect + github.com/pmezard/go-difflib v1.0.0 / indirect + github.com/polyfloyd/go-errorlint v1.0.5 / indirect + github.com/prometheus/client_golang v1.12.2 / indirect + github.com/prometheus/client_model v0.2.0 / indirect + github.com/prometheus/common v0.34.0 / indirect + github.com/prometheus/procfs v0.7.3 / indirect + github.com/quasilyte/go-ruleguard v0.3.18 / indirect + github.com/quasilyte/gogrep v0.0.0-20220828223005-86e4605de09f / indirect + github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95 / indirect + 
github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 / indirect + github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 / indirect + github.com/regen-network/cosmos-proto v0.3.1 / indirect + github.com/rs/cors v1.8.2 / indirect + github.com/rs/zerolog v1.27.0 / indirect + github.com/ryancurrah/gomodguard v1.2.4 / indirect + github.com/ryanrolds/sqlclosecheck v0.3.0 / indirect + github.com/sanposhiho/wastedassign/v2 v2.0.6 / indirect + github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa / indirect + github.com/sashamelentyev/interfacebloat v1.1.0 / indirect + github.com/sashamelentyev/usestdlibvars v1.20.0 / indirect + github.com/securego/gosec/v2 v2.13.1 / indirect + github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c / indirect + github.com/sirupsen/logrus v1.9.0 / indirect + github.com/sivchari/containedctx v1.0.2 / indirect + github.com/sivchari/nosnakecase v1.7.0 / indirect + github.com/sivchari/tenv v1.7.0 / indirect + github.com/sonatard/noctx v0.0.1 / indirect + github.com/sourcegraph/go-diff v0.6.1 / indirect + github.com/spf13/afero v1.8.2 / indirect + github.com/spf13/jwalterweatherman v1.1.0 / indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 / indirect + github.com/stbenjam/no-sprintf-host-port v0.1.1 / indirect + github.com/stretchr/objx v0.4.0 / indirect + github.com/subosito/gotenv v1.4.1 / indirect + github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 / indirect + github.com/tdakkota/asciicheck v0.1.1 / indirect + github.com/tendermint/btcd v0.1.1 / indirect + github.com/tendermint/crypto v0.0.0-20191022145703-50d29ede1e15 / indirect + github.com/tendermint/go-amino v0.16.0 / indirect + github.com/tetafro/godot v1.4.11 / indirect + github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144 / indirect + github.com/timonwong/loggercheck v0.9.3 / indirect + github.com/tomarrell/wrapcheck/v2 v2.6.2 / indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.0 / indirect + github.com/ulikunitz/xz 
v0.5.8 / indirect + github.com/ultraware/funlen v0.0.3 / indirect + github.com/ultraware/whitespace v0.0.5 / indirect + github.com/uudashr/gocognit v1.0.6 / indirect + github.com/yagipy/maintidx v1.0.0 / indirect + github.com/yeya24/promlinter v0.2.0 / indirect + github.com/zondax/hid v0.9.1-0.20220302062450-5552068d2266 / indirect + gitlab.com/bosi/decorder v0.2.3 / indirect + go.etcd.io/bbolt v1.3.6 / indirect + go.opencensus.io v0.23.0 / indirect + go.uber.org/atomic v1.9.0 / indirect + go.uber.org/multierr v1.8.0 / indirect + go.uber.org/zap v1.21.0 / indirect + golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa / indirect + golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e / indirect + golang.org/x/exp/typeparams v0.0.0-20220827204233-334a2380cb91 / indirect + golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 / indirect + golang.org/x/net v0.0.0-20220726230323-06994584191e / indirect + golang.org/x/oauth2 v0.0.0-20220622183110-fd043fe589d2 / indirect + golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde / indirect + golang.org/x/sys v0.0.0-20220915200043-7b5979e65e41 / indirect + golang.org/x/term v0.0.0-20220722155259-a9ba230a4035 / indirect + golang.org/x/text v0.3.7 / indirect + golang.org/x/tools v0.1.12 / indirect + golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f / indirect + google.golang.org/api v0.93.0 / indirect + google.golang.org/appengine v1.6.7 / indirect + google.golang.org/protobuf v1.28.1 / indirect + gopkg.in/ini.v1 v1.67.0 / indirect + gopkg.in/yaml.v2 v2.4.0 / indirect + gopkg.in/yaml.v3 v3.0.1 / indirect + honnef.co/go/tools v0.3.3 / indirect + mvdan.cc/gofumpt v0.4.0 / indirect + mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed / indirect + mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b / indirect + mvdan.cc/unparam v0.0.0-20220706161116-678bad134442 / indirect + nhooyr.io/websocket v1.8.6 / indirect + sigs.k8s.io/yaml v1.3.0 / indirect +) + +replace ( + github.com/gogo/protobuf => 
github.com/regen-network/protobuf v1.3.3-alpha.regen.1 + github.com/zondax/hid => github.com/zondax/hid v0.9.0 +) +``` For building the application, a [Makefile](https://en.wikipedia.org/wiki/Makefile) is generally used. The Makefile primarily ensures that the `go.mod` is run before building the two entrypoints to the application, [`appd`](#node-client) and [`appd`](#application-interface). diff --git a/docs/sdk/v0.47/learn/beginner/query-lifecycle.mdx b/docs/sdk/v0.47/learn/beginner/query-lifecycle.mdx index e631ff0c..e7176272 100644 --- a/docs/sdk/v0.47/learn/beginner/query-lifecycle.mdx +++ b/docs/sdk/v0.47/learn/beginner/query-lifecycle.mdx @@ -1,161 +1,3056 @@ --- -title: "Query Lifecycle" -description: "Version: v0.47" +title: Query Lifecycle --- - - This document describes the lifecycle of a query in a Cosmos SDK application, from the user interface to application stores and back. The query is referred to as `MyQuery`. - +## Synopsis + +This document describes the lifecycle of a query in a Cosmos SDK application, from the user interface to application stores and back. The query is referred to as `MyQuery`. - ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Transaction Lifecycle](/v0.47/learn/beginner/tx-lifecycle) +### Pre-requisite Readings + +- [Transaction Lifecycle](/docs/sdk/v0.47/learn/beginner/tx-lifecycle) + -## Query Creation[​](#query-creation "Direct link to Query Creation") +## Query Creation -A [**query**](/v0.47/build/building-modules/messages-and-queries#queries) is a request for information made by end-users of applications through an interface and processed by a full-node. Users can query information about the network, the application itself, and application state directly from the application's stores or modules. 
Note that queries are different from [transactions](/v0.47/learn/advanced/transactions) (view the lifecycle [here](/v0.47/learn/beginner/tx-lifecycle)), particularly in that they do not require consensus to be processed (as they do not trigger state-transitions); they can be fully handled by one full-node. +A [**query**](/docs/sdk/v0.47/documentation/module-system/messages-and-queries#queries) is a request for information made by end-users of applications through an interface and processed by a full-node. Users can query information about the network, the application itself, and application state directly from the application's stores or modules. Note that queries are different from [transactions](/docs/sdk/v0.47/learn/advanced/transactions) (view the lifecycle [here](/docs/sdk/v0.47/learn/beginner/tx-lifecycle)), particularly in that they do not require consensus to be processed (as they do not trigger state-transitions); they can be fully handled by one full-node. -For the purpose of explaining the query lifecycle, let's say the query, `MyQuery`, is requesting a list of delegations made by a certain delegator address in the application called `simapp`. As is to be expected, the [`staking`](/v0.47/build/modules/staking) module handles this query. But first, there are a few ways `MyQuery` can be created by users. +For the purpose of explaining the query lifecycle, let's say the query, `MyQuery`, is requesting a list of delegations made by a certain delegator address in the application called `simapp`. As is to be expected, the [`staking`](/docs/sdk/v0.47/documentation/module-system/modules/staking) module handles this query. But first, there are a few ways `MyQuery` can be created by users. -### CLI[​](#cli "Direct link to CLI") +### CLI The main interface for an application is the command-line interface. Users connect to a full-node and run the CLI directly from their machines - the CLI interacts directly with the full-node. 
To create `MyQuery` from their terminal, users type the following command: -``` +```bash simd query staking delegations ``` -This query command was defined by the [`staking`](/v0.47/build/modules/staking) module developer and added to the list of subcommands by the application developer when creating the CLI. +This query command was defined by the [`staking`](/docs/sdk/v0.47/documentation/module-system/modules/staking) module developer and added to the list of subcommands by the application developer when creating the CLI. Note that the general format is as follows: -``` +```bash simd query [moduleName] [command] --flag ``` -To provide values such as `--node` (the full-node the CLI connects to), the user can use the [`app.toml`](/v0.47/user/run-node/interact-node#configuring-the-node-using-apptoml) config file to set them or provide them as flags. +To provide values such as `--node` (the full-node the CLI connects to), the user can use the [`app.toml`](/docs/sdk/v0.47/user/run-node/interact-node#configuring-the-node-using-apptoml) config file to set them or provide them as flags. -The CLI understands a specific set of commands, defined in a hierarchical structure by the application developer: from the [root command](/v0.47/learn/advanced/cli#root-command) (`simd`), the type of command (`Myquery`), the module that contains the command (`staking`), and command itself (`delegations`). Thus, the CLI knows exactly which module handles this command and directly passes the call there. +The CLI understands a specific set of commands, defined in a hierarchical structure by the application developer: from the [root command](/docs/sdk/v0.47/learn/advanced/cli#root-command) (`simd`), the type of command (`MyQuery`), the module that contains the command (`staking`), and the command itself (`delegations`). Thus, the CLI knows exactly which module handles this command and directly passes the call there. 
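The hierarchical resolution described above can be sketched without cobra as a nested lookup from command type to module to command. All names below are illustrative stand-ins, not SDK APIs:

```go
package main

import (
	"fmt"
	"strings"
)

// handler processes a fully resolved subcommand with its remaining args.
type handler func(args []string) string

// commandTree mimics the CLI hierarchy: command type -> module -> command.
// The real tree is built with cobra; this map is an illustrative stand-in.
var commandTree = map[string]map[string]map[string]handler{
	"query": {
		"staking": {
			"delegations": func(args []string) string {
				return "delegations for " + strings.Join(args, " ")
			},
		},
	},
}

// dispatch walks the hierarchy the way the CLI does: the type of command
// ("query"), the owning module ("staking"), then the command itself.
func dispatch(argv []string) (string, error) {
	if len(argv) < 3 {
		return "", fmt.Errorf("usage: [type] [module] [command] ...")
	}
	modules, ok := commandTree[argv[0]]
	if !ok {
		return "", fmt.Errorf("unknown command type %q", argv[0])
	}
	cmds, ok := modules[argv[1]]
	if !ok {
		return "", fmt.Errorf("unknown module %q", argv[1])
	}
	h, ok := cmds[argv[2]]
	if !ok {
		return "", fmt.Errorf("unknown command %q", argv[2])
	}
	return h(argv[3:]), nil
}

func main() {
	out, err := dispatch([]string{"query", "staking", "delegations", "cosmos1..."})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Because the tree is keyed by module name, the CLI never needs global knowledge of every command: each module contributes its own subtree, and resolution is a series of map lookups.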
-### gRPC[​](#grpc "Direct link to gRPC") +### gRPC -Another interface through which users can make queries is [gRPC](https://grpc.io) requests to a [gRPC server](/v0.47/learn/advanced/grpc_rest#grpc-server). The endpoints are defined as [Protocol Buffers](https://developers.google.com/protocol-buffers) service methods inside `.proto` files, written in Protobuf's own language-agnostic interface definition language (IDL). The Protobuf ecosystem developed tools for code-generation from `*.proto` files into various languages. These tools allow to build gRPC clients easily. +Another interface through which users can make queries is [gRPC](https://grpc.io) requests to a [gRPC server](/docs/sdk/v0.47/learn/advanced/grpc_rest#grpc-server). The endpoints are defined as [Protocol Buffers](https://developers.google.com/protocol-buffers) service methods inside `.proto` files, written in Protobuf's own language-agnostic interface definition language (IDL). The Protobuf ecosystem developed tools for code generation from `*.proto` files into various languages. These tools make it easy to build gRPC clients. 
One such tool is [grpcurl](https://github.com/fullstorydev/grpcurl), and a gRPC request for `MyQuery` using this client looks like: -``` -grpcurl \ -plaintext # We want results in plain test -import-path ./proto \ # Import these .proto files -proto ./proto/cosmos/staking/v1beta1/query.proto \ # Look into this .proto file for the Query protobuf service -d '{"address":"$MY_DELEGATOR"}' \ # Query arguments localhost:9090 \ # gRPC server endpoint cosmos.staking.v1beta1.Query/Delegations # Fully-qualified service method name +```bash +grpcurl \ + -plaintext # We want results in plain text + -import-path ./proto \ # Import these .proto files + -proto ./proto/cosmos/staking/v1beta1/query.proto \ # Look into this .proto file for the Query protobuf service + -d '{"address":"$MY_DELEGATOR"}' \ # Query arguments + localhost:9090 \ # gRPC server endpoint + cosmos.staking.v1beta1.Query/Delegations # Fully-qualified service method name ``` -### REST[​](#rest "Direct link to REST") +### REST -Another interface through which users can make queries is through HTTP Requests to a [REST server](/v0.47/learn/advanced/grpc_rest#rest-server). The REST server is fully auto-generated from Protobuf services, using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). +Another interface through which users can make queries is HTTP requests to a [REST server](/docs/sdk/v0.47/learn/advanced/grpc_rest#rest-server). The REST server is fully auto-generated from Protobuf services, using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). An example HTTP request for `MyQuery` looks like: -``` +```bash GET http://localhost:1317/cosmos/staking/v1beta1/delegators/{delegatorAddr}/delegations ``` -## How Queries are Handled by the CLI[​](#how-queries-are-handled-by-the-cli "Direct link to How Queries are Handled by the CLI") +## How Queries are Handled by the CLI The preceding examples show how an external user can interact with a node by querying its state. 
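Since the REST surface is plain HTTP plus JSON, any HTTP client can consume it. The sketch below uses only Go's standard library, with `httptest` standing in for a real node's REST server on port 1317; the response shape is illustrative, not the exact staking API schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// delegationsResponse approximates the JSON a gRPC-gateway endpoint
// returns; field names here are illustrative, not the exact schema.
type delegationsResponse struct {
	DelegationResponses []struct {
		Delegation struct {
			DelegatorAddress string `json:"delegator_address"`
			ValidatorAddress string `json:"validator_address"`
		} `json:"delegation"`
	} `json:"delegation_responses"`
}

// queryDelegations issues the same GET shown above against whatever
// base URL it is given and decodes the JSON reply.
func queryDelegations(baseURL, delegator string) (delegationsResponse, error) {
	var out delegationsResponse
	resp, err := http.Get(baseURL + "/cosmos/staking/v1beta1/delegators/" + delegator + "/delegations")
	if err != nil {
		return out, err
	}
	defer resp.Body.Close()
	err = json.NewDecoder(resp.Body).Decode(&out)
	return out, err
}

// newMockNode stands in for a node's REST server, serving a canned reply.
func newMockNode() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprint(w, `{"delegation_responses":[{"delegation":{"delegator_address":"cosmos1abc","validator_address":"cosmosvaloper1xyz"}}]}`)
	}))
}

func main() {
	srv := newMockNode()
	defer srv.Close()

	out, err := queryDelegations(srv.URL, "cosmos1abc")
	if err != nil {
		panic(err)
	}
	fmt.Println(out.DelegationResponses[0].Delegation.ValidatorAddress)
}
```

Against a real node, the base URL would simply be `http://localhost:1317` (or whatever the node's REST address is), with no other change to the client code.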
To understand in more detail the exact lifecycle of a query, let's dig into how the CLI prepares the query, and how the node handles it. The interactions from the users' perspective are a bit different, but the underlying functions are almost identical because they are implementations of the same command defined by the module developer. This step of processing happens within the CLI, gRPC, or REST server, and heavily involves a `client.Context`. -### Context[​](#context "Direct link to Context") +### Context The first thing that is created in the execution of a CLI command is a `client.Context`. A `client.Context` is an object that stores all the data needed to process a request on the user side. In particular, a `client.Context` stores the following: -* **Codec**: The [encoder/decoder](/v0.47/learn/advanced/encoding) used by the application, used to marshal the parameters and query before making the CometBFT RPC request and unmarshal the returned response into a JSON object. The default codec used by the CLI is Protobuf. -* **Account Decoder**: The account decoder from the [`auth`](/v0.47/build/modules/auth) module, which translates `[]byte`s into accounts. -* **RPC Client**: The CometBFT RPC Client, or node, to which requests are relayed. -* **Keyring**: A [Key Manager](/v0.47/learn/beginner/accounts#keyring) used to sign transactions and handle other operations with keys. -* **Output Writer**: A [Writer](https://pkg.go.dev/io/#Writer) used to output the response. -* **Configurations**: The flags configured by the user for this command, including `--height`, specifying the height of the blockchain to query, and `--indent`, which indicates to add an indent to the JSON response. +- **Codec**: The [encoder/decoder](/docs/sdk/v0.47/learn/advanced/encoding) used by the application, used to marshal the parameters and query before making the CometBFT RPC request and unmarshal the returned response into a JSON object. The default codec used by the CLI is Protobuf. 
+- **Account Decoder**: The account decoder from the [`auth`](/docs/sdk/v0.47/documentation/module-system/modules/auth) module, which translates `[]byte`s into accounts. +- **RPC Client**: The CometBFT RPC Client, or node, to which requests are relayed. +- **Keyring**: A [Key Manager](/docs/sdk/v0.47/learn/beginner/accounts#keyring) used to sign transactions and handle other operations with keys. +- **Output Writer**: A [Writer](https://pkg.go.dev/io/#Writer) used to output the response. +- **Configurations**: The flags configured by the user for this command, including `--height`, specifying the height of the blockchain to query, and `--indent`, which indicates to add an indent to the JSON response. The `client.Context` also contains various functions such as `Query()`, which retrieves the RPC Client and makes an ABCI call to relay a query to a full-node. -client/context.go +```go expandable +package client + +import ( + + "bufio" + "encoding/json" + "fmt" + "io" + "os" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/viper" + "google.golang.org/grpc" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ PreprocessTxFn defines a hook by which chains can preprocess transactions before broadcasting +type PreprocessTxFn func(chainID string, key keyring.KeyType, tx TxBuilder) + +error + +/ Context implements a typical context created in SDK modules for transaction +/ handling and queries. 
+type Context struct { + FromAddress sdk.AccAddress + Client TendermintRPC + GRPCClient *grpc.ClientConn + ChainID string + Codec codec.Codec + InterfaceRegistry codectypes.InterfaceRegistry + Input io.Reader + Keyring keyring.Keyring + KeyringOptions []keyring.Option + Output io.Writer + OutputFormat string + Height int64 + HomeDir string + KeyringDir string + From string + BroadcastMode string + FromName string + SignModeStr string + UseLedger bool + Simulate bool + GenerateOnly bool + Offline bool + SkipConfirm bool + TxConfig TxConfig + AccountRetriever AccountRetriever + NodeURI string + FeePayer sdk.AccAddress + FeeGranter sdk.AccAddress + Viper *viper.Viper + LedgerHasProtobuf bool + PreprocessTxHook PreprocessTxFn + + / IsAux is true when the signer is an auxiliary signer (e.g. the tipper). + IsAux bool + + / TODO: Deprecated (remove). + LegacyAmino *codec.LegacyAmino +} + +/ WithKeyring returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyring(k keyring.Keyring) + +Context { + ctx.Keyring = k + return ctx +} + +/ WithKeyringOptions returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyringOptions(opts ...keyring.Option) + +Context { + ctx.KeyringOptions = opts + return ctx +} + +/ WithInput returns a copy of the context with an updated input. +func (ctx Context) + +WithInput(r io.Reader) + +Context { + / convert to a bufio.Reader to have a shared buffer between the keyring and the + / the Commands, ensuring a read from one advance the read pointer for the other. + / see https://github.com/cosmos/cosmos-sdk/issues/9566. + ctx.Input = bufio.NewReader(r) + +return ctx +} + +/ WithCodec returns a copy of the Context with an updated Codec. +func (ctx Context) + +WithCodec(m codec.Codec) + +Context { + ctx.Codec = m + return ctx +} + +/ WithLegacyAmino returns a copy of the context with an updated LegacyAmino codec. +/ TODO: Deprecated (remove). 
+func (ctx Context) + +WithLegacyAmino(cdc *codec.LegacyAmino) -``` -loading... -``` +Context { + ctx.LegacyAmino = cdc + return ctx +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/context.go#L24-L64) +/ WithOutput returns a copy of the context with an updated output writer (e.g. stdout). +func (ctx Context) -The `client.Context`'s primary role is to store data used during interactions with the end-user and provide methods to interact with this data - it is used before and after the query is processed by the full-node. Specifically, in handling `MyQuery`, the `client.Context` is utilized to encode the query parameters, retrieve the full-node, and write the output. Prior to being relayed to a full-node, the query needs to be encoded into a `[]byte` form, as full-nodes are application-agnostic and do not understand specific types. The full-node (RPC Client) itself is retrieved using the `client.Context`, which knows which node the user CLI is connected to. The query is relayed to this full-node to be processed. Finally, the `client.Context` contains a `Writer` to write output when the response is returned. These steps are further described in later sections. +WithOutput(w io.Writer) -### Arguments and Route Creation[​](#arguments-and-route-creation "Direct link to Arguments and Route Creation") +Context { + ctx.Output = w + return ctx +} -At this point in the lifecycle, the user has created a CLI command with all of the data they wish to include in their query. A `client.Context` exists to assist in the rest of the `MyQuery`'s journey. Now, the next step is to parse the command or request, extract the arguments, and encode everything. These steps all happen on the user side within the interface they are interacting with. +/ WithFrom returns a copy of the context with an updated from address or name. 
+func (ctx Context) -#### Encoding[​](#encoding "Direct link to Encoding") +WithFrom(from string) -In our case (querying an address's delegations), `MyQuery` contains an [address](/v0.47/learn/beginner/accounts#addresses) `delegatorAddress` as its only argument. However, the request can only contain `[]byte`s, as it is ultimately relayed to a consensus engine (e.g. CometBFT) of a full-node that has no inherent knowledge of the application types. Thus, the `codec` of `client.Context` is used to marshal the address. +Context { + ctx.From = from + return ctx +} -Here is what the code looks like for the CLI command: +/ WithOutputFormat returns a copy of the context with an updated OutputFormat field. +func (ctx Context) -x/staking/client/cli/query.go +WithOutputFormat(format string) -``` -loading... -``` +Context { + ctx.OutputFormat = format + return ctx +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/client/cli/query.go#L323-L326) +/ WithNodeURI returns a copy of the context with an updated node URI. +func (ctx Context) -#### gRPC Query Client Creation[​](#grpc-query-client-creation "Direct link to gRPC Query Client Creation") +WithNodeURI(nodeURI string) -The Cosmos SDK leverages code generated from Protobuf services to make queries. The `staking` module's `MyQuery` service generates a `queryClient`, which the CLI uses to make queries. Here is the relevant code: +Context { + ctx.NodeURI = nodeURI + return ctx +} -x/staking/client/cli/query.go +/ WithHeight returns a copy of the context with an updated height. +func (ctx Context) -``` -loading... +WithHeight(height int64) + +Context { + ctx.Height = height + return ctx +} + +/ WithClient returns a copy of the context with an updated RPC client +/ instance. +func (ctx Context) + +WithClient(client TendermintRPC) + +Context { + ctx.Client = client + return ctx +} + +/ WithGRPCClient returns a copy of the context with an updated GRPC client +/ instance. 
+func (ctx Context) + +WithGRPCClient(grpcClient *grpc.ClientConn) + +Context { + ctx.GRPCClient = grpcClient + return ctx +} + +/ WithUseLedger returns a copy of the context with an updated UseLedger flag. +func (ctx Context) + +WithUseLedger(useLedger bool) + +Context { + ctx.UseLedger = useLedger + return ctx +} + +/ WithChainID returns a copy of the context with an updated chain ID. +func (ctx Context) + +WithChainID(chainID string) + +Context { + ctx.ChainID = chainID + return ctx +} + +/ WithHomeDir returns a copy of the Context with HomeDir set. +func (ctx Context) + +WithHomeDir(dir string) + +Context { + if dir != "" { + ctx.HomeDir = dir +} + +return ctx +} + +/ WithKeyringDir returns a copy of the Context with KeyringDir set. +func (ctx Context) + +WithKeyringDir(dir string) + +Context { + ctx.KeyringDir = dir + return ctx +} + +/ WithGenerateOnly returns a copy of the context with updated GenerateOnly value +func (ctx Context) + +WithGenerateOnly(generateOnly bool) + +Context { + ctx.GenerateOnly = generateOnly + return ctx +} + +/ WithSimulation returns a copy of the context with updated Simulate value +func (ctx Context) + +WithSimulation(simulate bool) + +Context { + ctx.Simulate = simulate + return ctx +} + +/ WithOffline returns a copy of the context with updated Offline value. +func (ctx Context) + +WithOffline(offline bool) + +Context { + ctx.Offline = offline + return ctx +} + +/ WithFromName returns a copy of the context with an updated from account name. +func (ctx Context) + +WithFromName(name string) + +Context { + ctx.FromName = name + return ctx +} + +/ WithFromAddress returns a copy of the context with an updated from account +/ address. +func (ctx Context) + +WithFromAddress(addr sdk.AccAddress) + +Context { + ctx.FromAddress = addr + return ctx +} + +/ WithFeePayerAddress returns a copy of the context with an updated fee payer account +/ address. 
+func (ctx Context) + +WithFeePayerAddress(addr sdk.AccAddress) + +Context { + ctx.FeePayer = addr + return ctx +} + +/ WithFeeGranterAddress returns a copy of the context with an updated fee granter account +/ address. +func (ctx Context) + +WithFeeGranterAddress(addr sdk.AccAddress) + +Context { + ctx.FeeGranter = addr + return ctx +} + +/ WithBroadcastMode returns a copy of the context with an updated broadcast +/ mode. +func (ctx Context) + +WithBroadcastMode(mode string) + +Context { + ctx.BroadcastMode = mode + return ctx +} + +/ WithSignModeStr returns a copy of the context with an updated SignMode +/ value. +func (ctx Context) + +WithSignModeStr(signModeStr string) + +Context { + ctx.SignModeStr = signModeStr + return ctx +} + +/ WithSkipConfirmation returns a copy of the context with an updated SkipConfirm +/ value. +func (ctx Context) + +WithSkipConfirmation(skip bool) + +Context { + ctx.SkipConfirm = skip + return ctx +} + +/ WithTxConfig returns the context with an updated TxConfig +func (ctx Context) + +WithTxConfig(generator TxConfig) + +Context { + ctx.TxConfig = generator + return ctx +} + +/ WithAccountRetriever returns the context with an updated AccountRetriever +func (ctx Context) + +WithAccountRetriever(retriever AccountRetriever) + +Context { + ctx.AccountRetriever = retriever + return ctx +} + +/ WithInterfaceRegistry returns the context with an updated InterfaceRegistry +func (ctx Context) + +WithInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) + +Context { + ctx.InterfaceRegistry = interfaceRegistry + return ctx +} + +/ WithViper returns the context with Viper field. This Viper instance is used to read +/ client-side config from the config file. +func (ctx Context) + +WithViper(prefix string) + +Context { + v := viper.New() + +v.SetEnvPrefix(prefix) + +v.AutomaticEnv() + +ctx.Viper = v + return ctx +} + +/ WithAux returns a copy of the context with an updated IsAux value. 
+func (ctx Context) + +WithAux(isAux bool) + +Context { + ctx.IsAux = isAux + return ctx +} + +/ WithLedgerHasProto returns the context with the provided boolean value, indicating +/ whether the target Ledger application can support Protobuf payloads. +func (ctx Context) + +WithLedgerHasProtobuf(val bool) + +Context { + ctx.LedgerHasProtobuf = val + return ctx +} + +/ WithPreprocessTxHook returns the context with the provided preprocessing hook, which +/ enables chains to preprocess the transaction using the builder. +func (ctx Context) + +WithPreprocessTxHook(preprocessFn PreprocessTxFn) + +Context { + ctx.PreprocessTxHook = preprocessFn + return ctx +} + +/ PrintString prints the raw string to ctx.Output if it's defined, otherwise to os.Stdout +func (ctx Context) + +PrintString(str string) + +error { + return ctx.PrintBytes([]byte(str)) +} + +/ PrintBytes prints the raw bytes to ctx.Output if it's defined, otherwise to os.Stdout. +/ NOTE: for printing a complex state object, you should use ctx.PrintOutput +func (ctx Context) + +PrintBytes(o []byte) + +error { + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err := writer.Write(o) + +return err +} + +/ PrintProto outputs toPrint to the ctx.Output based on ctx.OutputFormat which is +/ either text or json. If text, toPrint will be YAML encoded. Otherwise, toPrint +/ will be JSON encoded using ctx.Codec. An error is returned upon failure. +func (ctx Context) + +PrintProto(toPrint proto.Message) + +error { + / always serialize JSON initially because proto json can't be directly YAML encoded + out, err := ctx.Codec.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintObjectLegacy is a variant of PrintProto that doesn't require a proto.Message type +/ and uses amino JSON encoding. +/ Deprecated: It will be removed in the near future! 
+func (ctx Context) + +PrintObjectLegacy(toPrint interface{ +}) + +error { + out, err := ctx.LegacyAmino.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintRaw is a variant of PrintProto that doesn't require a proto.Message type +/ and uses a raw JSON message. No marshaling is performed. +func (ctx Context) + +PrintRaw(toPrint json.RawMessage) + +error { + return ctx.printOutput(toPrint) +} + +func (ctx Context) + +printOutput(out []byte) + +error { + var err error + if ctx.OutputFormat == "text" { + out, err = yaml.JSONToYAML(out) + if err != nil { + return err +} + +} + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err = writer.Write(out) + if err != nil { + return err +} + if ctx.OutputFormat != "text" { + / append new-line for formats besides YAML + _, err = writer.Write([]byte("\n")) + if err != nil { + return err +} + +} + +return nil +} + +/ GetFromFields returns a from account address, account name and keyring type, given either an address or key name. 
+/ If clientCtx.Simulate is true the keystore is not accessed and a valid address must be provided +/ If clientCtx.GenerateOnly is true the keystore is only accessed if a key name is provided +func GetFromFields(clientCtx Context, kr keyring.Keyring, from string) (sdk.AccAddress, string, keyring.KeyType, error) { + if from == "" { + return nil, "", 0, nil +} + +addr, err := sdk.AccAddressFromBech32(from) + switch { + case clientCtx.Simulate: + if err != nil { + return nil, "", 0, fmt.Errorf("a valid bech32 address must be provided in simulation mode: %w", err) +} + +return addr, "", 0, nil + case clientCtx.GenerateOnly: + if err == nil { + return addr, "", 0, nil +} + +} + +var k *keyring.Record + if err == nil { + k, err = kr.KeyByAddress(addr) + if err != nil { + return nil, "", 0, err +} + +} + +else { + k, err = kr.Key(from) + if err != nil { + return nil, "", 0, err +} + +} + +addr, err = k.GetAddress() + if err != nil { + return nil, "", 0, err +} + +return addr, k.Name, k.GetType(), nil +} + +/ NewKeyringFromBackend gets a Keyring object from a backend +func NewKeyringFromBackend(ctx Context, backend string) (keyring.Keyring, error) { + if ctx.Simulate { + backend = keyring.BackendMemory +} + +return keyring.New(sdk.KeyringServiceName(), backend, ctx.KeyringDir, ctx.Input, ctx.Codec, ctx.KeyringOptions...) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/client/cli/query.go#L317-L343) +The `client.Context`'s primary role is to store data used during interactions with the end-user and provide methods to interact with this data - it is used before and after the query is processed by the full-node. Specifically, in handling `MyQuery`, the `client.Context` is utilized to encode the query parameters, retrieve the full-node, and write the output. Prior to being relayed to a full-node, the query needs to be encoded into a `[]byte` form, as full-nodes are application-agnostic and do not understand specific types. 
The full-node (RPC Client) itself is retrieved using the `client.Context`, which knows which node the user CLI is connected to. The query is relayed to this full-node to be processed. Finally, the `client.Context` contains a `Writer` to write output when the response is returned. These steps are further described in later sections. -Under the hood, the `client.Context` has a `Query()` function used to retrieve the pre-configured node and relay a query to it; the function takes the query fully-qualified service method name as path (in our case: `/cosmos.staking.v1beta1.Query/Delegations`), and arguments as parameters. It first retrieves the RPC Client (called the [**node**](/v0.47/learn/advanced/node)) configured by the user to relay this query to, and creates the `ABCIQueryOptions` (parameters formatted for the ABCI call). The node is then used to make the ABCI call, `ABCIQueryWithOptions()`. +### Arguments and Route Creation -Here is what the code looks like: +At this point in the lifecycle, the user has created a CLI command with all of the data they wish to include in their query. A `client.Context` exists to assist in the rest of the `MyQuery`'s journey. Now, the next step is to parse the command or request, extract the arguments, and encode everything. These steps all happen on the user side within the interface they are interacting with. + +#### Encoding -client/query.go +In our case (querying an address's delegations), `MyQuery` contains an [address](/docs/sdk/v0.47/learn/beginner/accounts#addresses) `delegatorAddress` as its only argument. However, the request can only contain `[]byte`s, as it is ultimately relayed to a consensus engine (e.g. CometBFT) of a full-node that has no inherent knowledge of the application types. Thus, the `codec` of `client.Context` is used to marshal the address. 
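
The encoding step can be sketched in isolation. The snippet below is a minimal, self-contained illustration of the idea only: a typed query argument is serialized into the `[]byte` form a full-node expects. The `QueryDelegatorDelegationsRequest` struct here is a hand-written stand-in for the protobuf-generated request type, and `encoding/json` stands in for the `client.Context`'s protobuf codec, purely so the sketch compiles on its own.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative stand-in for the protobuf request type the staking module
// generates; the real SDK marshals that type with the client.Context's
// protobuf Codec, not encoding/json.
type QueryDelegatorDelegationsRequest struct {
	DelegatorAddr string `json:"delegator_addr"`
}

// encodeRequest turns the typed request into the []byte form that is
// ultimately relayed to the consensus engine of a full-node, which has
// no knowledge of application-level types.
func encodeRequest(delegatorAddr string) ([]byte, error) {
	req := QueryDelegatorDelegationsRequest{DelegatorAddr: delegatorAddr}
	return json.Marshal(req)
}

func main() {
	bz, err := encodeRequest("cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(bz))
	// {"delegator_addr":"cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p"}
}
```

Whatever the wire encoding, the point is the same: by the time the request leaves the client, only opaque bytes remain, which is why the full-node can stay application-agnostic.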
+Here is what the code looks like for the CLI command: + +```go expandable +package cli + +import ( + + "fmt" + "strconv" + "strings" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ GetQueryCmd returns the cli query commands for this module +func GetQueryCmd() *cobra.Command { + stakingQueryCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Querying commands for the staking module", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +stakingQueryCmd.AddCommand( + GetCmdQueryDelegation(), + GetCmdQueryDelegations(), + GetCmdQueryUnbondingDelegation(), + GetCmdQueryUnbondingDelegations(), + GetCmdQueryRedelegation(), + GetCmdQueryRedelegations(), + GetCmdQueryValidator(), + GetCmdQueryValidators(), + GetCmdQueryValidatorDelegations(), + GetCmdQueryValidatorUnbondingDelegations(), + GetCmdQueryValidatorRedelegations(), + GetCmdQueryHistoricalInfo(), + GetCmdQueryParams(), + GetCmdQueryPool(), + ) + +return stakingQueryCmd +} + +/ GetCmdQueryValidator implements the validator query command. +func GetCmdQueryValidator() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "validator [validator-addr]", + Short: "Query a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query details about an individual validator. 
+ +Example: +$ %s query staking validator %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +addr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + params := &types.QueryValidatorRequest{ + ValidatorAddr: addr.String() +} + +res, err := queryClient.Validator(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Validator) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryValidators implements the query all validators command. +func GetCmdQueryValidators() *cobra.Command { + cmd := &cobra.Command{ + Use: "validators", + Short: "Query for all validators", + Args: cobra.NoArgs, + Long: strings.TrimSpace( + fmt.Sprintf(`Query details about all validators on a network. + +Example: +$ %s query staking validators +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + +result, err := queryClient.Validators(cmd.Context(), &types.QueryValidatorsRequest{ + / Leaving status empty on purpose to query all validators. + Pagination: pageReq, +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(result) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validators") + +return cmd +} + +/ GetCmdQueryValidatorUnbondingDelegations implements the query all unbonding delegatations from a validator command. 
+func GetCmdQueryValidatorUnbondingDelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegations-from [validator-addr]", + Short: "Query all unbonding delegatations from a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations that are unbonding _from_ a validator. + +Example: +$ %s query staking unbonding-delegations-from %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryValidatorUnbondingDelegationsRequest{ + ValidatorAddr: valAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.ValidatorUnbondingDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "unbonding delegations") + +return cmd +} + +/ GetCmdQueryValidatorRedelegations implements the query all redelegatations +/ from a validator command. +func GetCmdQueryValidatorRedelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegations-from [validator-addr]", + Short: "Query all outgoing redelegatations from a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations that are redelegating _from_ a validator. 
+ +Example: +$ %s query staking redelegations-from %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valSrcAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + SrcValidatorAddr: valSrcAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validator redelegations") + +return cmd +} + +/ GetCmdQueryDelegation the query delegation command. +func GetCmdQueryDelegation() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "delegation [delegator-addr] [validator-addr]", + Short: "Query a delegation based on address and validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations for an individual delegator on an individual validator. 
+ +Example: +$ %s query staking delegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +valAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + params := &types.QueryDelegationRequest{ + DelegatorAddr: delAddr.String(), + ValidatorAddr: valAddr.String(), +} + +res, err := queryClient.Delegation(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res.DelegationResponse) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryDelegations implements the command to query all the delegations +/ made from one delegator. +func GetCmdQueryDelegations() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "delegations [delegator-addr]", + Short: "Query all delegations made by one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations for an individual delegator on all validators. 
+ +Example: +$ %s query staking delegations %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryDelegatorDelegationsRequest{ + DelegatorAddr: delAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.DelegatorDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "delegations") + +return cmd +} + +/ GetCmdQueryValidatorDelegations implements the command to query all the +/ delegations to a specific validator. +func GetCmdQueryValidatorDelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "delegations-to [validator-addr]", + Short: "Query all delegations made to one validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations on an individual validator. 
+ +Example: +$ %s query staking delegations-to %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryValidatorDelegationsRequest{ + ValidatorAddr: valAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.ValidatorDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validator delegations") + +return cmd +} + +/ GetCmdQueryUnbondingDelegation implements the command to query a single +/ unbonding-delegation record. +func GetCmdQueryUnbondingDelegation() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegation [delegator-addr] [validator-addr]", + Short: "Query an unbonding-delegation record based on delegator and validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query unbonding delegations for an individual delegator on an individual validator. 
+ +Example: +$ %s query staking unbonding-delegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + params := &types.QueryUnbondingDelegationRequest{ + DelegatorAddr: delAddr.String(), + ValidatorAddr: valAddr.String(), +} + +res, err := queryClient.UnbondingDelegation(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Unbond) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryUnbondingDelegations implements the command to query all the +/ unbonding-delegation records for a delegator. +func GetCmdQueryUnbondingDelegations() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegations [delegator-addr]", + Short: "Query all unbonding-delegations records for one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query unbonding delegations for an individual delegator. 
+ +Example: +$ %s query staking unbonding-delegations %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delegatorAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryDelegatorUnbondingDelegationsRequest{ + DelegatorAddr: delegatorAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.DelegatorUnbondingDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "unbonding delegations") + +return cmd +} + +/ GetCmdQueryRedelegation implements the command to query a single +/ redelegation record. +func GetCmdQueryRedelegation() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr]", + Short: "Query a redelegation record based on delegator and a source and destination validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query a redelegation record for an individual delegator between a source and destination validator. 
+ +Example: +$ %s query staking redelegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +valSrcAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + +valDstAddr, err := sdk.ValAddressFromBech32(args[2]) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + DelegatorAddr: delAddr.String(), + DstValidatorAddr: valDstAddr.String(), + SrcValidatorAddr: valSrcAddr.String(), +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryRedelegations implements the command to query all the +/ redelegation records for a delegator. +func GetCmdQueryRedelegations() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegations [delegator-addr]", + Args: cobra.ExactArgs(1), + Short: "Query all redelegations records for one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query all redelegation records for an individual delegator. 
+ +Example: +$ %s query staking redelegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + DelegatorAddr: delAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "delegator redelegations") + +return cmd +} + +/ GetCmdQueryHistoricalInfo implements the historical info query command +func GetCmdQueryHistoricalInfo() *cobra.Command { + cmd := &cobra.Command{ + Use: "historical-info [height]", + Args: cobra.ExactArgs(1), + Short: "Query historical info at given height", + Long: strings.TrimSpace( + fmt.Sprintf(`Query historical info at given height. + +Example: +$ %s query staking historical-info 5 +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +height, err := strconv.ParseInt(args[0], 10, 64) + if err != nil || height < 0 { + return fmt.Errorf("height argument provided must be a non-negative-integer: %v", err) +} + params := &types.QueryHistoricalInfoRequest{ + Height: height +} + +res, err := queryClient.HistoricalInfo(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res.Hist) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryPool implements the pool query command. 
+func GetCmdQueryPool() *cobra.Command { + cmd := &cobra.Command{ + Use: "pool", + Args: cobra.NoArgs, + Short: "Query the current staking pool values", + Long: strings.TrimSpace( + fmt.Sprintf(`Query values for amounts stored in the staking pool. + +Example: +$ %s query staking pool +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Pool(cmd.Context(), &types.QueryPoolRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Pool) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryParams implements the params query command. +func GetCmdQueryParams() *cobra.Command { + cmd := &cobra.Command{ + Use: "params", + Args: cobra.NoArgs, + Short: "Query the current staking parameters information", + Long: strings.TrimSpace( + fmt.Sprintf(`Query values set as staking parameters. + +Example: +$ %s query staking params +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Params(cmd.Context(), &types.QueryParamsRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Params) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} ``` -loading... + +#### gRPC Query Client Creation + +The Cosmos SDK leverages code generated from Protobuf services to make queries. The `staking` module's `MyQuery` service generates a `queryClient`, which the CLI uses to make queries. 
Here is the relevant code: + +```go expandable +package cli + +import ( + + "fmt" + "strconv" + "strings" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ GetQueryCmd returns the cli query commands for this module +func GetQueryCmd() *cobra.Command { + stakingQueryCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Querying commands for the staking module", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +stakingQueryCmd.AddCommand( + GetCmdQueryDelegation(), + GetCmdQueryDelegations(), + GetCmdQueryUnbondingDelegation(), + GetCmdQueryUnbondingDelegations(), + GetCmdQueryRedelegation(), + GetCmdQueryRedelegations(), + GetCmdQueryValidator(), + GetCmdQueryValidators(), + GetCmdQueryValidatorDelegations(), + GetCmdQueryValidatorUnbondingDelegations(), + GetCmdQueryValidatorRedelegations(), + GetCmdQueryHistoricalInfo(), + GetCmdQueryParams(), + GetCmdQueryPool(), + ) + +return stakingQueryCmd +} + +/ GetCmdQueryValidator implements the validator query command. +func GetCmdQueryValidator() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "validator [validator-addr]", + Short: "Query a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query details about an individual validator. 
+ +Example: +$ %s query staking validator %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +addr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + params := &types.QueryValidatorRequest{ + ValidatorAddr: addr.String() +} + +res, err := queryClient.Validator(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Validator) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryValidators implements the query all validators command. +func GetCmdQueryValidators() *cobra.Command { + cmd := &cobra.Command{ + Use: "validators", + Short: "Query for all validators", + Args: cobra.NoArgs, + Long: strings.TrimSpace( + fmt.Sprintf(`Query details about all validators on a network. + +Example: +$ %s query staking validators +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + +result, err := queryClient.Validators(cmd.Context(), &types.QueryValidatorsRequest{ + / Leaving status empty on purpose to query all validators. + Pagination: pageReq, +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(result) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validators") + +return cmd +} + +/ GetCmdQueryValidatorUnbondingDelegations implements the query all unbonding delegatations from a validator command. 
+func GetCmdQueryValidatorUnbondingDelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegations-from [validator-addr]", + Short: "Query all unbonding delegatations from a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations that are unbonding _from_ a validator. + +Example: +$ %s query staking unbonding-delegations-from %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryValidatorUnbondingDelegationsRequest{ + ValidatorAddr: valAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.ValidatorUnbondingDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "unbonding delegations") + +return cmd +} + +/ GetCmdQueryValidatorRedelegations implements the query all redelegatations +/ from a validator command. +func GetCmdQueryValidatorRedelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegations-from [validator-addr]", + Short: "Query all outgoing redelegatations from a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations that are redelegating _from_ a validator. 
+ +Example: +$ %s query staking redelegations-from %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valSrcAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + SrcValidatorAddr: valSrcAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validator redelegations") + +return cmd +} + +/ GetCmdQueryDelegation the query delegation command. +func GetCmdQueryDelegation() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "delegation [delegator-addr] [validator-addr]", + Short: "Query a delegation based on address and validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations for an individual delegator on an individual validator. 
+ +Example: +$ %s query staking delegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +valAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + params := &types.QueryDelegationRequest{ + DelegatorAddr: delAddr.String(), + ValidatorAddr: valAddr.String(), +} + +res, err := queryClient.Delegation(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res.DelegationResponse) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryDelegations implements the command to query all the delegations +/ made from one delegator. +func GetCmdQueryDelegations() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "delegations [delegator-addr]", + Short: "Query all delegations made by one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations for an individual delegator on all validators. 
+ +Example: +$ %s query staking delegations %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryDelegatorDelegationsRequest{ + DelegatorAddr: delAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.DelegatorDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "delegations") + +return cmd +} + +/ GetCmdQueryValidatorDelegations implements the command to query all the +/ delegations to a specific validator. +func GetCmdQueryValidatorDelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "delegations-to [validator-addr]", + Short: "Query all delegations made to one validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations on an individual validator. 
+ +Example: +$ %s query staking delegations-to %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryValidatorDelegationsRequest{ + ValidatorAddr: valAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.ValidatorDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validator delegations") + +return cmd +} + +/ GetCmdQueryUnbondingDelegation implements the command to query a single +/ unbonding-delegation record. +func GetCmdQueryUnbondingDelegation() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegation [delegator-addr] [validator-addr]", + Short: "Query an unbonding-delegation record based on delegator and validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query unbonding delegations for an individual delegator on an individual validator. 
+ +Example: +$ %s query staking unbonding-delegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + params := &types.QueryUnbondingDelegationRequest{ + DelegatorAddr: delAddr.String(), + ValidatorAddr: valAddr.String(), +} + +res, err := queryClient.UnbondingDelegation(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Unbond) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryUnbondingDelegations implements the command to query all the +/ unbonding-delegation records for a delegator. +func GetCmdQueryUnbondingDelegations() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegations [delegator-addr]", + Short: "Query all unbonding-delegations records for one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query unbonding delegations for an individual delegator. 
+ +Example: +$ %s query staking unbonding-delegations %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delegatorAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryDelegatorUnbondingDelegationsRequest{ + DelegatorAddr: delegatorAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.DelegatorUnbondingDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "unbonding delegations") + +return cmd +} + +/ GetCmdQueryRedelegation implements the command to query a single +/ redelegation record. +func GetCmdQueryRedelegation() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr]", + Short: "Query a redelegation record based on delegator and a source and destination validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query a redelegation record for an individual delegator between a source and destination validator. 
+ +Example: +$ %s query staking redelegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +valSrcAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + +valDstAddr, err := sdk.ValAddressFromBech32(args[2]) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + DelegatorAddr: delAddr.String(), + DstValidatorAddr: valDstAddr.String(), + SrcValidatorAddr: valSrcAddr.String(), +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryRedelegations implements the command to query all the +/ redelegation records for a delegator. +func GetCmdQueryRedelegations() *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegations [delegator-addr]", + Args: cobra.ExactArgs(1), + Short: "Query all redelegations records for one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query all redelegation records for an individual delegator. 
+ +Example: +$ %s query staking redelegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +delAddr, err := sdk.AccAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + DelegatorAddr: delAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "delegator redelegations") + +return cmd +} + +/ GetCmdQueryHistoricalInfo implements the historical info query command +func GetCmdQueryHistoricalInfo() *cobra.Command { + cmd := &cobra.Command{ + Use: "historical-info [height]", + Args: cobra.ExactArgs(1), + Short: "Query historical info at given height", + Long: strings.TrimSpace( + fmt.Sprintf(`Query historical info at given height. + +Example: +$ %s query staking historical-info 5 +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +height, err := strconv.ParseInt(args[0], 10, 64) + if err != nil || height < 0 { + return fmt.Errorf("height argument provided must be a non-negative-integer: %v", err) +} + params := &types.QueryHistoricalInfoRequest{ + Height: height +} + +res, err := queryClient.HistoricalInfo(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res.Hist) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryPool implements the pool query command. 
+func GetCmdQueryPool() *cobra.Command { + cmd := &cobra.Command{ + Use: "pool", + Args: cobra.NoArgs, + Short: "Query the current staking pool values", + Long: strings.TrimSpace( + fmt.Sprintf(`Query values for amounts stored in the staking pool. + +Example: +$ %s query staking pool +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Pool(cmd.Context(), &types.QueryPoolRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Pool) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryParams implements the params query command. +func GetCmdQueryParams() *cobra.Command { + cmd := &cobra.Command{ + Use: "params", + Args: cobra.NoArgs, + Short: "Query the current staking parameters information", + Long: strings.TrimSpace( + fmt.Sprintf(`Query values set as staking parameters. + +Example: +$ %s query staking params +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Params(cmd.Context(), &types.QueryParamsRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Params) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/query.go#L79-L113) +Under the hood, the `client.Context` has a `Query()` function used to retrieve the pre-configured node and relay a query to it; the function takes the query fully-qualified service method name as path (in our case: `/cosmos.staking.v1beta1.Query/Delegations`), and arguments as parameters. 
It first retrieves the RPC Client (called the [**node**](/docs/sdk/v0.47/learn/advanced/node)) configured by the user to relay this query to, and creates the `ABCIQueryOptions` (parameters formatted for the ABCI call). The node is then used to make the ABCI call, `ABCIQueryWithOptions()`. + +Here is what the code looks like: + +```go expandable +package client -## RPC[​](#rpc "Direct link to RPC") +import ( -With a call to `ABCIQueryWithOptions()`, `MyQuery` is received by a [full-node](/v0.47/learn/advanced/encoding) which then processes the request. Note that, while the RPC is made to the consensus engine (e.g. CometBFT) of a full-node, queries are not part of consensus and so are not broadcasted to the rest of the network, as they do not require anything the network needs to agree upon. + "context" + "fmt" + "strings" + "github.com/pkg/errors" + abci "github.com/tendermint/tendermint/abci/types" + tmbytes "github.com/tendermint/tendermint/libs/bytes" + rpcclient "github.com/tendermint/tendermint/rpc/client" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + "github.com/cosmos/cosmos-sdk/store/rootmulti" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ GetNode returns an RPC client. If the context's client is not defined, an +/ error is returned. +func (ctx Context) + +GetNode() (TendermintRPC, error) { + if ctx.Client == nil { + return nil, errors.New("no RPC client is defined in offline mode") +} + +return ctx.Client, nil +} + +/ Query performs a query to a Tendermint node with the provided path. +/ It returns the result and height of the query upon success or an error if +/ the query fails. +func (ctx Context) + +Query(path string) ([]byte, int64, error) { + return ctx.query(path, nil) +} + +/ QueryWithData performs a query to a Tendermint node with the provided path +/ and a data payload. It returns the result and height of the query upon success +/ or an error if the query fails. 
+func (ctx Context) + +QueryWithData(path string, data []byte) ([]byte, int64, error) { + return ctx.query(path, data) +} + +/ QueryStore performs a query to a Tendermint node with the provided key and +/ store name. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +QueryStore(key tmbytes.HexBytes, storeName string) ([]byte, int64, error) { + return ctx.queryStore(key, storeName, "key") +} + +/ QueryABCI performs a query to a Tendermint node with the provide RequestQuery. +/ It returns the ResultQuery obtained from the query. The height used to perform +/ the query is the RequestQuery Height if it is non-zero, otherwise the context +/ height is used. +func (ctx Context) + +QueryABCI(req abci.RequestQuery) (abci.ResponseQuery, error) { + return ctx.queryABCI(req) +} + +/ GetFromAddress returns the from address from the context's name. +func (ctx Context) + +GetFromAddress() + +sdk.AccAddress { + return ctx.FromAddress +} + +/ GetFeePayerAddress returns the fee granter address from the context +func (ctx Context) + +GetFeePayerAddress() + +sdk.AccAddress { + return ctx.FeePayer +} + +/ GetFeeGranterAddress returns the fee granter address from the context +func (ctx Context) + +GetFeeGranterAddress() + +sdk.AccAddress { + return ctx.FeeGranter +} + +/ GetFromName returns the key name for the current context. 
+func (ctx Context) + +GetFromName() + +string { + return ctx.FromName +} + +func (ctx Context) + +queryABCI(req abci.RequestQuery) (abci.ResponseQuery, error) { + node, err := ctx.GetNode() + if err != nil { + return abci.ResponseQuery{ +}, err +} + +var queryHeight int64 + if req.Height != 0 { + queryHeight = req.Height +} + +else { + / fallback on the context height + queryHeight = ctx.Height +} + opts := rpcclient.ABCIQueryOptions{ + Height: queryHeight, + Prove: req.Prove, +} + +result, err := node.ABCIQueryWithOptions(context.Background(), req.Path, req.Data, opts) + if err != nil { + return abci.ResponseQuery{ +}, err +} + if !result.Response.IsOK() { + return abci.ResponseQuery{ +}, sdkErrorToGRPCError(result.Response) +} + + / data from trusted node or subspace query doesn't need verification + if !opts.Prove || !isQueryStoreWithProof(req.Path) { + return result.Response, nil +} + +return result.Response, nil +} + +func sdkErrorToGRPCError(resp abci.ResponseQuery) + +error { + switch resp.Code { + case sdkerrors.ErrInvalidRequest.ABCICode(): + return status.Error(codes.InvalidArgument, resp.Log) + case sdkerrors.ErrUnauthorized.ABCICode(): + return status.Error(codes.Unauthenticated, resp.Log) + case sdkerrors.ErrKeyNotFound.ABCICode(): + return status.Error(codes.NotFound, resp.Log) + +default: + return status.Error(codes.Unknown, resp.Log) +} +} + +/ query performs a query to a Tendermint node with the provided store name +/ and path. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +query(path string, key tmbytes.HexBytes) ([]byte, int64, error) { + resp, err := ctx.queryABCI(abci.RequestQuery{ + Path: path, + Data: key, + Height: ctx.Height, +}) + if err != nil { + return nil, 0, err +} + +return resp.Value, resp.Height, nil +} + +/ queryStore performs a query to a Tendermint node with the provided a store +/ name and path. 
It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +queryStore(key tmbytes.HexBytes, storeName, endPath string) ([]byte, int64, error) { + path := fmt.Sprintf("/store/%s/%s", storeName, endPath) + +return ctx.query(path, key) +} + +/ isQueryStoreWithProof expects a format like /// +/ queryType must be "store" and subpath must be "key" to require a proof. +func isQueryStoreWithProof(path string) + +bool { + if !strings.HasPrefix(path, "/") { + return false +} + paths := strings.SplitN(path[1:], "/", 3) + switch { + case len(paths) != 3: + return false + case paths[0] != "store": + return false + case rootmulti.RequireProof("/" + paths[2]): + return true +} + +return false +} +``` + +## RPC + +With a call to `ABCIQueryWithOptions()`, `MyQuery` is received by a [full-node](/docs/sdk/v0.47/learn/advanced/encoding) which then processes the request. Note that, while the RPC is made to the consensus engine (e.g. CometBFT) of a full-node, queries are not part of consensus and so are not broadcasted to the rest of the network, as they do not require anything the network needs to agree upon. Read more about ABCI Clients and CometBFT RPC in the [CometBFT documentation](https://docs.cometbft.com/v0.37/spec/rpc/). -## Application Query Handling[​](#application-query-handling "Direct link to Application Query Handling") +## Application Query Handling -When a query is received by the full-node after it has been relayed from the underlying consensus engine, it is at that point being handled within an environment that understands application-specific types and has a copy of the state. [`baseapp`](/v0.47/learn/advanced/baseapp) implements the ABCI [`Query()`](/v0.47/learn/advanced/baseapp#query) function and handles gRPC queries. 
The query route is parsed, and it matches the fully-qualified service method name of an existing service method (most likely in one of the modules), then `baseapp` relays the request to the relevant module. +When a query is received by the full-node after it has been relayed from the underlying consensus engine, it is at that point being handled within an environment that understands application-specific types and has a copy of the state. [`baseapp`](/docs/sdk/v0.47/learn/advanced/baseapp) implements the ABCI [`Query()`](/docs/sdk/v0.47/learn/advanced/baseapp#query) function and handles gRPC queries. The query route is parsed, and it matches the fully-qualified service method name of an existing service method (most likely in one of the modules), then `baseapp` relays the request to the relevant module. -Since `MyQuery` has a Protobuf fully-qualified service method name from the `staking` module (recall `/cosmos.staking.v1beta1.Query/Delegations`), `baseapp` first parses the path, then uses its own internal `GRPCQueryRouter` to retrieve the corresponding gRPC handler, and routes the query to the module. The gRPC handler is responsible for recognizing this query, retrieving the appropriate values from the application's stores, and returning a response. Read more about query services [here](/v0.47/build/building-modules/query-services). +Since `MyQuery` has a Protobuf fully-qualified service method name from the `staking` module (recall `/cosmos.staking.v1beta1.Query/Delegations`), `baseapp` first parses the path, then uses its own internal `GRPCQueryRouter` to retrieve the corresponding gRPC handler, and routes the query to the module. The gRPC handler is responsible for recognizing this query, retrieving the appropriate values from the application's stores, and returning a response. Read more about query services [here](/docs/sdk/v0.47/documentation/module-system/query-services). 
Once a result is received from the querier, `baseapp` begins the process of returning a response to the user.

-## Response[​](#response "Direct link to Response")
+## Response

Since `Query()` is an ABCI function, `baseapp` returns the response as an [`abci.ResponseQuery`](https://docs.cometbft.com/master/spec/abci/abci.html#query-2) type. The `client.Context` `Query()` routine receives the response and processes it for the interface that made the query.

-### CLI Response[​](#cli-response "Direct link to CLI Response")
+### CLI Response
+
+The application [`codec`](/docs/sdk/v0.47/learn/advanced/encoding) is used to unmarshal the response to JSON and the `client.Context` prints the output to the command line, applying any configurations such as the output type (text, JSON or YAML).
+
+```go expandable
+package client
+
+import (
+
+	"bufio"
+	"encoding/json"
+	"fmt"
+	"io"
+	"os"
+	"github.com/cosmos/gogoproto/proto"
+	"github.com/spf13/viper"
+	"google.golang.org/grpc"
+	"sigs.k8s.io/yaml"
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	"github.com/cosmos/cosmos-sdk/crypto/keyring"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ PreprocessTxFn defines a hook by which chains can preprocess transactions before broadcasting
+type PreprocessTxFn func(chainID string, key keyring.KeyType, tx TxBuilder)
+
+error
+
+/ Context implements a typical context created in SDK modules for transaction
+/ handling and queries.
+type Context struct { + FromAddress sdk.AccAddress + Client TendermintRPC + GRPCClient *grpc.ClientConn + ChainID string + Codec codec.Codec + InterfaceRegistry codectypes.InterfaceRegistry + Input io.Reader + Keyring keyring.Keyring + KeyringOptions []keyring.Option + Output io.Writer + OutputFormat string + Height int64 + HomeDir string + KeyringDir string + From string + BroadcastMode string + FromName string + SignModeStr string + UseLedger bool + Simulate bool + GenerateOnly bool + Offline bool + SkipConfirm bool + TxConfig TxConfig + AccountRetriever AccountRetriever + NodeURI string + FeePayer sdk.AccAddress + FeeGranter sdk.AccAddress + Viper *viper.Viper + LedgerHasProtobuf bool + PreprocessTxHook PreprocessTxFn + + / IsAux is true when the signer is an auxiliary signer (e.g. the tipper). + IsAux bool + + / TODO: Deprecated (remove). + LegacyAmino *codec.LegacyAmino +} + +/ WithKeyring returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyring(k keyring.Keyring) + +Context { + ctx.Keyring = k + return ctx +} + +/ WithKeyringOptions returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyringOptions(opts ...keyring.Option) + +Context { + ctx.KeyringOptions = opts + return ctx +} + +/ WithInput returns a copy of the context with an updated input. +func (ctx Context) + +WithInput(r io.Reader) + +Context { + / convert to a bufio.Reader to have a shared buffer between the keyring and the + / the Commands, ensuring a read from one advance the read pointer for the other. + / see https://github.com/cosmos/cosmos-sdk/issues/9566. + ctx.Input = bufio.NewReader(r) + +return ctx +} + +/ WithCodec returns a copy of the Context with an updated Codec. +func (ctx Context) + +WithCodec(m codec.Codec) + +Context { + ctx.Codec = m + return ctx +} + +/ WithLegacyAmino returns a copy of the context with an updated LegacyAmino codec. +/ TODO: Deprecated (remove). 
+func (ctx Context) + +WithLegacyAmino(cdc *codec.LegacyAmino) -The application [`codec`](/v0.47/learn/advanced/encoding) is used to unmarshal the response to a JSON and the `client.Context` prints the output to the command line, applying any configurations such as the output type (text, JSON or YAML). +Context { + ctx.LegacyAmino = cdc + return ctx +} -client/context.go +/ WithOutput returns a copy of the context with an updated output writer (e.g. stdout). +func (ctx Context) -``` -loading... -``` +WithOutput(w io.Writer) + +Context { + ctx.Output = w + return ctx +} + +/ WithFrom returns a copy of the context with an updated from address or name. +func (ctx Context) + +WithFrom(from string) + +Context { + ctx.From = from + return ctx +} + +/ WithOutputFormat returns a copy of the context with an updated OutputFormat field. +func (ctx Context) + +WithOutputFormat(format string) + +Context { + ctx.OutputFormat = format + return ctx +} + +/ WithNodeURI returns a copy of the context with an updated node URI. +func (ctx Context) + +WithNodeURI(nodeURI string) + +Context { + ctx.NodeURI = nodeURI + return ctx +} + +/ WithHeight returns a copy of the context with an updated height. +func (ctx Context) + +WithHeight(height int64) + +Context { + ctx.Height = height + return ctx +} + +/ WithClient returns a copy of the context with an updated RPC client +/ instance. +func (ctx Context) + +WithClient(client TendermintRPC) + +Context { + ctx.Client = client + return ctx +} + +/ WithGRPCClient returns a copy of the context with an updated GRPC client +/ instance. +func (ctx Context) + +WithGRPCClient(grpcClient *grpc.ClientConn) + +Context { + ctx.GRPCClient = grpcClient + return ctx +} + +/ WithUseLedger returns a copy of the context with an updated UseLedger flag. +func (ctx Context) + +WithUseLedger(useLedger bool) + +Context { + ctx.UseLedger = useLedger + return ctx +} + +/ WithChainID returns a copy of the context with an updated chain ID. 
+func (ctx Context) + +WithChainID(chainID string) + +Context { + ctx.ChainID = chainID + return ctx +} + +/ WithHomeDir returns a copy of the Context with HomeDir set. +func (ctx Context) + +WithHomeDir(dir string) + +Context { + if dir != "" { + ctx.HomeDir = dir +} + +return ctx +} + +/ WithKeyringDir returns a copy of the Context with KeyringDir set. +func (ctx Context) + +WithKeyringDir(dir string) + +Context { + ctx.KeyringDir = dir + return ctx +} + +/ WithGenerateOnly returns a copy of the context with updated GenerateOnly value +func (ctx Context) + +WithGenerateOnly(generateOnly bool) + +Context { + ctx.GenerateOnly = generateOnly + return ctx +} + +/ WithSimulation returns a copy of the context with updated Simulate value +func (ctx Context) + +WithSimulation(simulate bool) + +Context { + ctx.Simulate = simulate + return ctx +} + +/ WithOffline returns a copy of the context with updated Offline value. +func (ctx Context) + +WithOffline(offline bool) + +Context { + ctx.Offline = offline + return ctx +} + +/ WithFromName returns a copy of the context with an updated from account name. +func (ctx Context) + +WithFromName(name string) + +Context { + ctx.FromName = name + return ctx +} + +/ WithFromAddress returns a copy of the context with an updated from account +/ address. +func (ctx Context) + +WithFromAddress(addr sdk.AccAddress) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/context.go#L330-L358) +Context { + ctx.FromAddress = addr + return ctx +} + +/ WithFeePayerAddress returns a copy of the context with an updated fee payer account +/ address. +func (ctx Context) + +WithFeePayerAddress(addr sdk.AccAddress) + +Context { + ctx.FeePayer = addr + return ctx +} + +/ WithFeeGranterAddress returns a copy of the context with an updated fee granter account +/ address. 
+func (ctx Context) + +WithFeeGranterAddress(addr sdk.AccAddress) + +Context { + ctx.FeeGranter = addr + return ctx +} + +/ WithBroadcastMode returns a copy of the context with an updated broadcast +/ mode. +func (ctx Context) + +WithBroadcastMode(mode string) + +Context { + ctx.BroadcastMode = mode + return ctx +} + +/ WithSignModeStr returns a copy of the context with an updated SignMode +/ value. +func (ctx Context) + +WithSignModeStr(signModeStr string) + +Context { + ctx.SignModeStr = signModeStr + return ctx +} + +/ WithSkipConfirmation returns a copy of the context with an updated SkipConfirm +/ value. +func (ctx Context) + +WithSkipConfirmation(skip bool) + +Context { + ctx.SkipConfirm = skip + return ctx +} + +/ WithTxConfig returns the context with an updated TxConfig +func (ctx Context) + +WithTxConfig(generator TxConfig) + +Context { + ctx.TxConfig = generator + return ctx +} + +/ WithAccountRetriever returns the context with an updated AccountRetriever +func (ctx Context) + +WithAccountRetriever(retriever AccountRetriever) + +Context { + ctx.AccountRetriever = retriever + return ctx +} + +/ WithInterfaceRegistry returns the context with an updated InterfaceRegistry +func (ctx Context) + +WithInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) + +Context { + ctx.InterfaceRegistry = interfaceRegistry + return ctx +} + +/ WithViper returns the context with Viper field. This Viper instance is used to read +/ client-side config from the config file. +func (ctx Context) + +WithViper(prefix string) + +Context { + v := viper.New() + +v.SetEnvPrefix(prefix) + +v.AutomaticEnv() + +ctx.Viper = v + return ctx +} + +/ WithAux returns a copy of the context with an updated IsAux value. +func (ctx Context) + +WithAux(isAux bool) + +Context { + ctx.IsAux = isAux + return ctx +} + +/ WithLedgerHasProto returns the context with the provided boolean value, indicating +/ whether the target Ledger application can support Protobuf payloads. 
+func (ctx Context) + +WithLedgerHasProtobuf(val bool) + +Context { + ctx.LedgerHasProtobuf = val + return ctx +} + +/ WithPreprocessTxHook returns the context with the provided preprocessing hook, which +/ enables chains to preprocess the transaction using the builder. +func (ctx Context) + +WithPreprocessTxHook(preprocessFn PreprocessTxFn) + +Context { + ctx.PreprocessTxHook = preprocessFn + return ctx +} + +/ PrintString prints the raw string to ctx.Output if it's defined, otherwise to os.Stdout +func (ctx Context) + +PrintString(str string) + +error { + return ctx.PrintBytes([]byte(str)) +} + +/ PrintBytes prints the raw bytes to ctx.Output if it's defined, otherwise to os.Stdout. +/ NOTE: for printing a complex state object, you should use ctx.PrintOutput +func (ctx Context) + +PrintBytes(o []byte) + +error { + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err := writer.Write(o) + +return err +} + +/ PrintProto outputs toPrint to the ctx.Output based on ctx.OutputFormat which is +/ either text or json. If text, toPrint will be YAML encoded. Otherwise, toPrint +/ will be JSON encoded using ctx.Codec. An error is returned upon failure. +func (ctx Context) + +PrintProto(toPrint proto.Message) + +error { + / always serialize JSON initially because proto json can't be directly YAML encoded + out, err := ctx.Codec.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintObjectLegacy is a variant of PrintProto that doesn't require a proto.Message type +/ and uses amino JSON encoding. +/ Deprecated: It will be removed in the near future! +func (ctx Context) + +PrintObjectLegacy(toPrint interface{ +}) + +error { + out, err := ctx.LegacyAmino.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintRaw is a variant of PrintProto that doesn't require a proto.Message type +/ and uses a raw JSON message. No marshaling is performed. 
+func (ctx Context) + +PrintRaw(toPrint json.RawMessage) + +error { + return ctx.printOutput(toPrint) +} + +func (ctx Context) + +printOutput(out []byte) + +error { + var err error + if ctx.OutputFormat == "text" { + out, err = yaml.JSONToYAML(out) + if err != nil { + return err +} + +} + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err = writer.Write(out) + if err != nil { + return err +} + if ctx.OutputFormat != "text" { + / append new-line for formats besides YAML + _, err = writer.Write([]byte("\n")) + if err != nil { + return err +} + +} + +return nil +} + +/ GetFromFields returns a from account address, account name and keyring type, given either an address or key name. +/ If clientCtx.Simulate is true the keystore is not accessed and a valid address must be provided +/ If clientCtx.GenerateOnly is true the keystore is only accessed if a key name is provided +func GetFromFields(clientCtx Context, kr keyring.Keyring, from string) (sdk.AccAddress, string, keyring.KeyType, error) { + if from == "" { + return nil, "", 0, nil +} + +addr, err := sdk.AccAddressFromBech32(from) + switch { + case clientCtx.Simulate: + if err != nil { + return nil, "", 0, fmt.Errorf("a valid bech32 address must be provided in simulation mode: %w", err) +} + +return addr, "", 0, nil + case clientCtx.GenerateOnly: + if err == nil { + return addr, "", 0, nil +} + +} + +var k *keyring.Record + if err == nil { + k, err = kr.KeyByAddress(addr) + if err != nil { + return nil, "", 0, err +} + +} + +else { + k, err = kr.Key(from) + if err != nil { + return nil, "", 0, err +} + +} + +addr, err = k.GetAddress() + if err != nil { + return nil, "", 0, err +} + +return addr, k.Name, k.GetType(), nil +} + +/ NewKeyringFromBackend gets a Keyring object from a backend +func NewKeyringFromBackend(ctx Context, backend string) (keyring.Keyring, error) { + if ctx.Simulate { + backend = keyring.BackendMemory +} + +return keyring.New(sdk.KeyringServiceName(), backend, 
ctx.KeyringDir, ctx.Input, ctx.Codec, ctx.KeyringOptions...) +} +``` And that's a wrap! The result of the query is outputted to the console by the CLI. diff --git a/docs/sdk/v0.47/learn/beginner/tx-lifecycle.mdx b/docs/sdk/v0.47/learn/beginner/tx-lifecycle.mdx index 531cc56f..dc29440c 100644 --- a/docs/sdk/v0.47/learn/beginner/tx-lifecycle.mdx +++ b/docs/sdk/v0.47/learn/beginner/tx-lifecycle.mdx @@ -1,95 +1,113 @@ --- -title: "Transaction Lifecycle" -description: "Version: v0.47" +title: Transaction Lifecycle --- - - This document describes the lifecycle of a transaction from creation to committed state changes. Transaction definition is described in a [different doc](/v0.47/learn/advanced/transactions). The transaction is referred to as `Tx`. - +## Synopsis + +This document describes the lifecycle of a transaction from creation to committed state changes. Transaction definition is described in a [different doc](/docs/sdk/v0.47/learn/advanced/transactions). The transaction is referred to as `Tx`. - ### Pre-requisite Readings[​](#pre-requisite-readings "Direct link to Pre-requisite Readings") - * [Anatomy of a Cosmos SDK Application](/v0.47/learn/beginner/overview-app) +### Pre-requisite Readings + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.47/learn/beginner/overview-app) + -## Creation[​](#creation "Direct link to Creation") +## Creation -### Transaction Creation[​](#transaction-creation "Direct link to Transaction Creation") +### Transaction Creation -One of the main application interfaces is the command-line interface. The transaction `Tx` can be created by the user inputting a command in the following format from the [command-line](/v0.47/learn/advanced/cli), providing the type of transaction in `[command]`, arguments in `[args]`, and configurations such as gas prices in `[flags]`: +One of the main application interfaces is the command-line interface. 
The transaction `Tx` can be created by the user inputting a command in the following format from the [command-line](/docs/sdk/v0.47/learn/advanced/cli), providing the type of transaction in `[command]`, arguments in `[args]`, and configurations such as gas prices in `[flags]`: -``` +```bash [appname] tx [command] [args] [flags] ``` This command automatically **creates** the transaction, **signs** it using the account's private key, and **broadcasts** it to the specified peer node. -There are several required and optional flags for transaction creation. The `--from` flag specifies which [account](/v0.47/learn/beginner/accounts) the transaction is originating from. For example, if the transaction is sending coins, the funds are drawn from the specified `from` address. +There are several required and optional flags for transaction creation. The `--from` flag specifies which [account](/docs/sdk/v0.47/learn/beginner/accounts) the transaction is originating from. For example, if the transaction is sending coins, the funds are drawn from the specified `from` address. -#### Gas and Fees[​](#gas-and-fees "Direct link to Gas and Fees") +#### Gas and Fees -Additionally, there are several [flags](/v0.47/learn/advanced/cli) users can use to indicate how much they are willing to pay in [fees](/v0.47/learn/beginner/gas-fees): +Additionally, there are several [flags](/docs/sdk/v0.47/learn/advanced/cli) users can use to indicate how much they are willing to pay in [fees](/docs/sdk/v0.47/learn/beginner/gas-fees): -* `--gas` refers to how much [gas](/v0.47/learn/beginner/gas-fees), which represents computational resources, `Tx` consumes. Gas is dependent on the transaction and is not precisely calculated until execution, but can be estimated by providing `auto` as the value for `--gas`. -* `--gas-adjustment` (optional) can be used to scale `gas` up in order to avoid underestimating. For example, users can specify their gas adjustment as 1.5 to use 1.5 times the estimated gas. 
-* `--gas-prices` specifies how much the user is willing to pay per unit of gas, which can be one or multiple denominations of tokens. For example, `--gas-prices=0.025uatom, 0.025upho` means the user is willing to pay 0.025uatom AND 0.025upho per unit of gas. -* `--fees` specifies how much in fees the user is willing to pay in total. -* `--timeout-height` specifies a block timeout height to prevent the tx from being committed past a certain height. +- `--gas` refers to how much [gas](/docs/sdk/v0.47/learn/beginner/gas-fees), which represents computational resources, `Tx` consumes. Gas is dependent on the transaction and is not precisely calculated until execution, but can be estimated by providing `auto` as the value for `--gas`. +- `--gas-adjustment` (optional) can be used to scale `gas` up in order to avoid underestimating. For example, users can specify their gas adjustment as 1.5 to use 1.5 times the estimated gas. +- `--gas-prices` specifies how much the user is willing to pay per unit of gas, which can be one or multiple denominations of tokens. For example, `--gas-prices=0.025uatom, 0.025upho` means the user is willing to pay 0.025uatom AND 0.025upho per unit of gas. +- `--fees` specifies how much in fees the user is willing to pay in total. +- `--timeout-height` specifies a block timeout height to prevent the tx from being committed past a certain height. The ultimate value of the fees paid is equal to the gas multiplied by the gas prices. In other words, `fees = ceil(gas * gasPrices)`. Thus, since fees can be calculated using gas prices and vice versa, the users specify only one of the two. Later, validators decide whether or not to include the transaction in their block by comparing the given or calculated `gas-prices` to their local `min-gas-prices`. `Tx` is rejected if its `gas-prices` is not high enough, so users are incentivized to pay more. 
-#### CLI Example[​](#cli-example "Direct link to CLI Example") +#### CLI Example Users of the application `app` can enter the following command into their CLI to generate a transaction to send 1000uatom from a `senderAddress` to a `recipientAddress`. The command specifies how much gas they are willing to pay: an automatic estimate scaled up by 1.5 times, with a gas price of 0.025uatom per unit gas. -``` +```bash appd tx send <recipientAddress> 1000uatom --from <senderAddress> --gas auto --gas-adjustment 1.5 --gas-prices 0.025uatom ``` -#### Other Transaction Creation Methods[​](#other-transaction-creation-methods "Direct link to Other Transaction Creation Methods") +#### Other Transaction Creation Methods -The command-line is an easy way to interact with an application, but `Tx` can also be created using a [gRPC or REST interface](/v0.47/learn/advanced/grpc_rest) or some other entry point defined by the application developer. From the user's perspective, the interaction depends on the web interface or wallet they are using (e.g. creating `Tx` using [Lunie.io](https://lunie.io/#/) and signing it with a Ledger Nano S). +The command-line is an easy way to interact with an application, but `Tx` can also be created using a [gRPC or REST interface](/docs/sdk/v0.47/learn/advanced/grpc_rest) or some other entry point defined by the application developer. From the user's perspective, the interaction depends on the web interface or wallet they are using (e.g. creating `Tx` using [Lunie.io](https://lunie.io/#/) and signing it with a Ledger Nano S).
If the `Tx` passes the checks, it is held in the node's [**Mempool**](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool/), an in-memory pool of transactions unique to each node, pending inclusion in a block - honest nodes discard a `Tx` if it is found to be invalid. Prior to consensus, nodes continuously check incoming transactions and gossip them to their peers. +Each full-node (running CometBFT) that receives a `Tx` sends an [ABCI message](https://docs.cometbft.com/v0.37/spec/p2p/messages/), +`CheckTx`, to the application layer to check for validity, and receives an `abci.ResponseCheckTx`. If the `Tx` passes the checks, it is held in the node's +[**Mempool**](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool/), an in-memory pool of transactions unique to each node, pending inclusion in a block - honest nodes discard a `Tx` if it is found to be invalid. Prior to consensus, nodes continuously check incoming transactions and gossip them to their peers. -### Types of Checks[​](#types-of-checks "Direct link to Types of Checks") +### Types of Checks -The full-nodes perform stateless, then stateful checks on `Tx` during `CheckTx`, with the goal to identify and reject an invalid transaction as early on as possible to avoid wasted computation. +The full-nodes perform stateless, then stateful checks on `Tx` during `CheckTx`, with the goal to +identify and reject an invalid transaction as early on as possible to avoid wasted computation. -***Stateless*** checks do not require nodes to access state - light clients or offline nodes can do them - and are thus less computationally expensive. Stateless checks include making sure addresses are not empty, enforcing nonnegative numbers, and other logic specified in the definitions. +**_Stateless_** checks do not require nodes to access state - light clients or offline nodes can do +them - and are thus less computationally expensive. 
Stateless checks include making sure addresses +are not empty, enforcing nonnegative numbers, and other logic specified in the definitions. -***Stateful*** checks validate transactions and messages based on a committed state. Examples include checking that the relevant values exist and can be transacted with, the address has sufficient funds, and the sender is authorized or has the correct ownership to transact. At any given moment, full-nodes typically have [multiple versions](/v0.47/learn/advanced/baseapp#state-updates) of the application's internal state for different purposes. For example, nodes execute state changes while in the process of verifying transactions, but still need a copy of the last committed state in order to answer queries - they should not respond using state with uncommitted changes. +**_Stateful_** checks validate transactions and messages based on a committed state. Examples +include checking that the relevant values exist and can be transacted with, the address +has sufficient funds, and the sender is authorized or has the correct ownership to transact. +At any given moment, full-nodes typically have [multiple versions](/docs/sdk/v0.47/learn/advanced/baseapp#state-updates) +of the application's internal state for different purposes. For example, nodes execute state +changes while in the process of verifying transactions, but still need a copy of the last committed +state in order to answer queries - they should not respond using state with uncommitted changes. -In order to verify a `Tx`, full-nodes call `CheckTx`, which includes both *stateless* and *stateful* checks. Further validation happens later in the [`DeliverTx`](#delivertx) stage. `CheckTx` goes through several steps, beginning with decoding `Tx`. +In order to verify a `Tx`, full-nodes call `CheckTx`, which includes both _stateless_ and _stateful_ +checks. Further validation happens later in the [`DeliverTx`](#delivertx) stage. 
`CheckTx` goes +through several steps, beginning with decoding `Tx`. -### Decoding[​](#decoding "Direct link to Decoding") +### Decoding -When `Tx` is received by the application from the underlying consensus engine (e.g. CometBFT ), it is still in its [encoded](/v0.47/learn/advanced/encoding) `[]byte` form and needs to be unmarshaled in order to be processed. Then, the [`runTx`](/v0.47/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function is called to run in `runTxModeCheck` mode, meaning the function runs all checks but exits before executing messages and writing state changes. +When `Tx` is received by the application from the underlying consensus engine (e.g. CometBFT ), it is still in its [encoded](/docs/sdk/v0.47/learn/advanced/encoding) `[]byte` form and needs to be unmarshaled in order to be processed. Then, the [`runTx`](/docs/sdk/v0.47/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function is called to run in `runTxModeCheck` mode, meaning the function runs all checks but exits before executing messages and writing state changes. -### ValidateBasic (deprecated)[​](#validatebasic-deprecated "Direct link to ValidateBasic (deprecated)") +### ValidateBasic (deprecated) -Messages ([`sdk.Msg`](/v0.47/learn/advanced/transactions#messages)) are extracted from transactions (`Tx`). The `ValidateBasic` method of the `sdk.Msg` interface implemented by the module developer is run for each transaction. To discard obviously invalid messages, the `BaseApp` type calls the `ValidateBasic` method very early in the processing of the message in the [`CheckTx`](/v0.47/learn/advanced/baseapp#checktx) and [`DeliverTx`](/v0.47/learn/advanced/baseapp#delivertx) transactions. `ValidateBasic` can include only **stateless** checks (the checks that do not require access to the state). +Messages ([`sdk.Msg`](/docs/sdk/v0.47/learn/advanced/transactions#messages)) are extracted from transactions (`Tx`). 
The `ValidateBasic` method of the `sdk.Msg` interface implemented by the module developer is run for each transaction. +To discard obviously invalid messages, the `BaseApp` type calls the `ValidateBasic` method very early in the processing of the message in the [`CheckTx`](/docs/sdk/v0.47/learn/advanced/baseapp#checktx) and [`DeliverTx`](/docs/sdk/v0.47/learn/advanced/baseapp#delivertx) transactions. +`ValidateBasic` can include only **stateless** checks (the checks that do not require access to the state). - The `ValidateBasic` method on messages has been deprecated in favor of validating messages directly in their respective [`Msg` services](/v0.47/build/building-modules/msg-services#Validation). +The `ValidateBasic` method on messages has been deprecated in favor of validating messages directly in their respective [`Msg` services](/docs/sdk/v0.47/documentation/module-system/msg-services#Validation). + +Read [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) for more details. - Read [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) for more details. - `BaseApp` still calls `ValidateBasic` on messages that implements that method for backwards compatibility. + `BaseApp` still calls `ValidateBasic` on messages that implement that method + for backwards compatibility. -#### Guideline[​](#guideline "Direct link to Guideline") +#### Guideline -`ValidateBasic` should not be used anymore. Message validation should be performed in the `Msg` service when [handling a message](/v0.47/build/building-modules/03-msg-services#Validation) in a module Msg Server. +`ValidateBasic` should not be used anymore. Message validation should be performed in the `Msg` service when [handling a message](/docs/sdk/v0.47/documentation/module-system/msg-services#Validation) in a module Msg Server.
-### AnteHandler[​](#antehandler "Direct link to AnteHandler") +### AnteHandler `AnteHandler`s even though optional, are in practice very often used to perform signature verification, gas calculation, fee deduction, and other core operations related to blockchain transactions. @@ -97,58 +115,150 @@ A copy of the cached context is provided to the `AnteHandler`, which performs li For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth/spec) module `AnteHandler` checks and increments sequence numbers, checks signatures and account numbers, and deducts fees from the first signer of the transaction - all state changes are made using the `checkState`. -### Gas[​](#gas "Direct link to Gas") - -The [`Context`](/v0.47/learn/advanced/context), which keeps a `GasMeter` that tracks how much gas is used during the execution of `Tx`, is initialized. The user-provided amount of gas for `Tx` is known as `GasWanted`. If `GasConsumed`, the amount of gas consumed during execution, ever exceeds `GasWanted`, the execution stops and the changes made to the cached copy of the state are not committed. Otherwise, `CheckTx` sets `GasUsed` equal to `GasConsumed` and returns it in the result. After calculating the gas and fee values, validator-nodes check that the user-specified `gas-prices` is greater than their locally defined `min-gas-prices`. - -### Discard or Addition to Mempool[​](#discard-or-addition-to-mempool "Direct link to Discard or Addition to Mempool") - -If at any point during `CheckTx` the `Tx` fails, it is discarded and the transaction lifecycle ends there. Otherwise, if it passes `CheckTx` successfully, the default protocol is to relay it to peer nodes and add it to the Mempool so that the `Tx` becomes a candidate to be included in the next block. - -The **mempool** serves the purpose of keeping track of transactions seen by all full-nodes. 
Full-nodes keep a **mempool cache** of the last `mempool.cache_size` transactions they have seen, as a first line of defense to prevent replay attacks. Ideally, `mempool.cache_size` is large enough to encompass all of the transactions in the full mempool. If the mempool cache is too small to keep track of all the transactions, `CheckTx` is responsible for identifying and rejecting replayed transactions. - -Currently existing preventative measures include fees and a `sequence` (nonce) counter to distinguish replayed transactions from identical but valid ones. If an attacker tries to spam nodes with many copies of a `Tx`, full-nodes keeping a mempool cache reject all identical copies instead of running `CheckTx` on them. Even if the copies have incremented `sequence` numbers, attackers are disincentivized by the need to pay fees. - -Validator nodes keep a mempool to prevent replay attacks, just as full-nodes do, but also use it as a pool of unconfirmed transactions in preparation of block inclusion. Note that even if a `Tx` passes all checks at this stage, it is still possible to be found invalid later on, because `CheckTx` does not fully validate the transaction (that is, it does not actually execute the messages). - -## Inclusion in a Block[​](#inclusion-in-a-block "Direct link to Inclusion in a Block") - -Consensus, the process through which validator nodes come to agreement on which transactions to accept, happens in **rounds**. Each round begins with a proposer creating a block of the most recent transactions and ends with **validators**, special full-nodes with voting power responsible for consensus, agreeing to accept the block or go with a `nil` block instead. Validator nodes execute the consensus algorithm, such as [CometBFT](https://docs.cometbft.com/v0.37/spec/consensus/), confirming the transactions using ABCI requests to the application, in order to come to this agreement. - -The first step of consensus is the **block proposal**. 
One proposer amongst the validators is chosen by the consensus algorithm to create and propose a block - in order for a `Tx` to be included, it must be in this proposer's mempool. - -## State Changes[​](#state-changes "Direct link to State Changes") - -The next step of consensus is to execute the transactions to fully validate them. All full-nodes that receive a block proposal from the correct proposer execute the transactions by calling the ABCI functions [`BeginBlock`](/v0.47/learn/beginner/overview-app#beginblocker-and-endblocker), `DeliverTx` for each transaction, and [`EndBlock`](/v0.47/learn/beginner/overview-app#beginblocker-and-endblocker). While each full-node runs everything locally, this process yields a single, unambiguous result, since the messages' state transitions are deterministic and transactions are explicitly ordered in the block proposal. - +### Gas + +The [`Context`](/docs/sdk/v0.47/learn/advanced/context), which keeps a `GasMeter` that tracks how much gas is used during the execution of `Tx`, is initialized. The user-provided amount of gas for `Tx` is known as `GasWanted`. If `GasConsumed`, the amount of gas consumed during execution, ever exceeds `GasWanted`, the execution stops and the changes made to the cached copy of the state are not committed. Otherwise, `CheckTx` sets `GasUsed` equal to `GasConsumed` and returns it in the result. After calculating the gas and fee values, validator-nodes check that the user-specified `gas-prices` is greater than their locally defined `min-gas-prices`. + +### Discard or Addition to Mempool + +If at any point during `CheckTx` the `Tx` fails, it is discarded and the transaction lifecycle ends +there. Otherwise, if it passes `CheckTx` successfully, the default protocol is to relay it to peer +nodes and add it to the Mempool so that the `Tx` becomes a candidate to be included in the next block. + +The **mempool** serves the purpose of keeping track of transactions seen by all full-nodes. 
+Full-nodes keep a **mempool cache** of the last `mempool.cache_size` transactions they have seen, as a first line of +defense to prevent replay attacks. Ideally, `mempool.cache_size` is large enough to encompass all +of the transactions in the full mempool. If the mempool cache is too small to keep track of all +the transactions, `CheckTx` is responsible for identifying and rejecting replayed transactions. + +Currently existing preventative measures include fees and a `sequence` (nonce) counter to distinguish +replayed transactions from identical but valid ones. If an attacker tries to spam nodes with many +copies of a `Tx`, full-nodes keeping a mempool cache reject all identical copies instead of running +`CheckTx` on them. Even if the copies have incremented `sequence` numbers, attackers are +disincentivized by the need to pay fees. + +Validator nodes keep a mempool to prevent replay attacks, just as full-nodes do, but also use it as +a pool of unconfirmed transactions in preparation of block inclusion. Note that even if a `Tx` +passes all checks at this stage, it is still possible to be found invalid later on, because +`CheckTx` does not fully validate the transaction (that is, it does not actually execute the messages). + +## Inclusion in a Block + +Consensus, the process through which validator nodes come to agreement on which transactions to +accept, happens in **rounds**. Each round begins with a proposer creating a block of the most +recent transactions and ends with **validators**, special full-nodes with voting power responsible +for consensus, agreeing to accept the block or go with a `nil` block instead. Validator nodes +execute the consensus algorithm, such as [CometBFT](https://docs.cometbft.com/v0.37/spec/consensus/), +confirming the transactions using ABCI requests to the application, in order to come to this agreement. + +The first step of consensus is the **block proposal**. 
One proposer amongst the validators is chosen +by the consensus algorithm to create and propose a block - in order for a `Tx` to be included, it +must be in this proposer's mempool. + +## State Changes + +The next step of consensus is to execute the transactions to fully validate them. All full-nodes +that receive a block proposal from the correct proposer execute the transactions by calling the ABCI functions +[`BeginBlock`](/docs/sdk/v0.47/learn/beginner/overview-app#beginblocker-and-endblocker), `DeliverTx` for each transaction, +and [`EndBlock`](/docs/sdk/v0.47/learn/beginner/overview-app#beginblocker-and-endblocker). While each full-node runs everything +locally, this process yields a single, unambiguous result, since the messages' state transitions are deterministic and transactions are +explicitly ordered in the block proposal. + +```text expandable + ----------------------- + |Receive Block Proposal| + ----------------------- + | + v + ----------------------- + | BeginBlock | + ----------------------- + | + v + ----------------------- + | DeliverTx(tx0) | + | DeliverTx(tx1) | + | DeliverTx(tx2) | + | DeliverTx(tx3) | + | . | + | . | + | . | + ----------------------- + | + v + ----------------------- + | EndBlock | + ----------------------- + | + v + ----------------------- + | Consensus | + ----------------------- + | + v + ----------------------- + | Commit | + ----------------------- ``` - ----------------------- |Receive Block Proposal| ----------------------- | v ----------------------- | BeginBlock | ----------------------- | v ----------------------- | DeliverTx(tx0) | | DeliverTx(tx1) | | DeliverTx(tx2) | | DeliverTx(tx3) | | . | | . | | . 
| ----------------------- | v ----------------------- | EndBlock | ----------------------- | v ----------------------- | Consensus | ----------------------- | v ----------------------- | Commit | ----------------------- -``` - -### DeliverTx[​](#delivertx "Direct link to DeliverTx") - -The `DeliverTx` ABCI function defined in [`BaseApp`](/v0.47/learn/advanced/baseapp) does the bulk of the state transitions: it is run for each transaction in the block in sequential order as committed to during consensus. Under the hood, `DeliverTx` is almost identical to `CheckTx` but calls the [`runTx`](/v0.47/learn/advanced/baseapp#runtx) function in deliver mode instead of check mode. Instead of using their `checkState`, full-nodes use `deliverState`: - -* **Decoding:** Since `DeliverTx` is an ABCI call, `Tx` is received in the encoded `[]byte` form. Nodes first unmarshal the transaction, using the [`TxConfig`](/v0.47/learn/beginner/00-overview-app#register-codec) defined in the app, then call `runTx` in `runTxModeDeliver`, which is very similar to `CheckTx` but also executes and writes state changes. - -* **Checks and `AnteHandler`:** Full-nodes call `validateBasicMsgs` and `AnteHandler` again. This second check happens because they may not have seen the same transactions during the addition to Mempool stage and a malicious proposer may have included invalid ones. One difference here is that the `AnteHandler` does not compare `gas-prices` to the node's `min-gas-prices` since that value is local to each node - differing values across nodes yield nondeterministic results. - -* **`MsgServiceRouter`:** After `CheckTx` exits, `DeliverTx` continues to run [`runMsgs`](/v0.47/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) to fully execute each `Msg` within the transaction. Since the transaction may have messages from different modules, `BaseApp` needs to know which module to find the appropriate handler. 
This is achieved using `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's Protobuf [`Msg` service](/v0.47/build/building-modules/msg-services). For `LegacyMsg` routing, the `Route` function is called via the [module manager](/v0.47/build/building-modules/module-manager) to retrieve the route name and find the legacy [`Handler`](/v0.47/build/building-modules/msg-services#handler-type) within the module. - -* **`Msg` service:** Protobuf `Msg` service is responsible for executing each message in the `Tx` and causes state transitions to persist in `deliverTxState`. - -* **PostHandlers:** [`PostHandler`](/v0.47/learn/advanced/baseapp#posthandler)s run after the execution of the message. If they fail, the state change of `runMsgs`, as well of `PostHandlers`, are both reverted. - -* **Gas:** While a `Tx` is being delivered, a `GasMeter` is used to keep track of how much gas is being used; if execution completes, `GasUsed` is set and returned in the `abci.ResponseDeliverTx`. If execution halts because `BlockGasMeter` or `GasMeter` has run out or something else goes wrong, a deferred function at the end appropriately errors or panics. - -If there are any failed state changes resulting from a `Tx` being invalid or `GasMeter` running out, the transaction processing terminates and any state changes are reverted. Invalid transactions in a block proposal cause validator nodes to reject the block and vote for a `nil` block instead. - -### Commit[​](#commit "Direct link to Commit") - -The final step is for nodes to commit the block and state changes. Validator nodes perform the previous step of executing state transitions in order to validate the transactions, then sign the block to confirm it. Full nodes that are not validators do not participate in consensus - i.e. they cannot vote - but listen for votes to understand whether or not they should commit the state changes. 
- -When they receive enough validator votes (2/3+ *precommits* weighted by voting power), full nodes commit to a new block to be added to the blockchain and finalize the state transitions in the application layer. A new state root is generated to serve as a merkle proof for the state transitions. Applications use the [`Commit`](/v0.47/learn/advanced/baseapp#commit) ABCI method inherited from [Baseapp](/v0.47/learn/advanced/baseapp); it syncs all the state transitions by writing the `deliverState` into the application's internal state. As soon as the state changes are committed, `checkState` starts afresh from the most recently committed state and `deliverState` resets to `nil` in order to be consistent and reflect the changes. - -Note that not all blocks have the same number of transactions and it is possible for consensus to result in a `nil` block or one with none at all. In a public blockchain network, it is also possible for validators to be **byzantine**, or malicious, which may prevent a `Tx` from being committed in the blockchain. Possible malicious behaviors include the proposer deciding to censor a `Tx` by excluding it from the block or a validator voting against the block. -At this point, the transaction lifecycle of a `Tx` is over: nodes have verified its validity, delivered it by executing its state changes, and committed those changes. The `Tx` itself, in `[]byte` form, is stored in a block and appended to the blockchain. +### DeliverTx + +The `DeliverTx` ABCI function defined in [`BaseApp`](/docs/sdk/v0.47/learn/advanced/baseapp) does the bulk of the +state transitions: it is run for each transaction in the block in sequential order as committed +to during consensus. Under the hood, `DeliverTx` is almost identical to `CheckTx` but calls the +[`runTx`](/docs/sdk/v0.47/learn/advanced/baseapp#runtx) function in deliver mode instead of check mode. 
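The check/deliver split can be sketched with a toy dispatcher that runs the same transaction logic against different volatile states. The state names mirror the `checkState`/`deliverState` described here, but the types and the "set key" transaction are made up for illustration:

```go
package main

import "fmt"

// execMode distinguishes CheckTx-style validation from DeliverTx-style
// execution; both paths run transaction logic over different states.
type execMode int

const (
	modeCheck execMode = iota
	modeDeliver
)

type state map[string]string

// app holds the two volatile states: checkState for the mempool path
// and deliverState for block execution.
type app struct {
	checkState   state
	deliverState state
}

// runTx applies a toy "set key" transaction against the state matching
// the mode, mirroring how runTx branches on check vs. deliver.
func (a *app) runTx(mode execMode, key, value string) {
	target := a.checkState
	if mode == modeDeliver {
		target = a.deliverState
	}
	target[key] = value
}

func main() {
	a := &app{checkState: state{}, deliverState: state{}}
	a.runTx(modeCheck, "balance/alice", "90")
	a.runTx(modeDeliver, "balance/alice", "75")
	// The two paths never share writes: CheckTx cannot dirty block state.
	fmt.Println(a.checkState["balance/alice"], a.deliverState["balance/alice"])
}
```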
+Instead of using their `checkState`, full-nodes use `deliverState`: + +- **Decoding:** Since `DeliverTx` is an ABCI call, `Tx` is received in the encoded `[]byte` form. + Nodes first unmarshal the transaction, using the [`TxConfig`](/docs/sdk/v0.47/learn/beginner/overview-app#register-codec) defined in the app, then call `runTx` in `runTxModeDeliver`, which is very similar to `CheckTx` but also executes and writes state changes. + +- **Checks and `AnteHandler`:** Full-nodes call `validateBasicMsgs` and `AnteHandler` again. This second check + happens because they may not have seen the same transactions during the addition to Mempool stage + and a malicious proposer may have included invalid ones. One difference here is that the + `AnteHandler` does not compare `gas-prices` to the node's `min-gas-prices` since that value is local + to each node - differing values across nodes yield nondeterministic results. + +- **`MsgServiceRouter`:** After `CheckTx` exits, `DeliverTx` continues to run + [`runMsgs`](/docs/sdk/v0.47/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) to fully execute each `Msg` within the transaction. + Since the transaction may have messages from different modules, `BaseApp` needs to know in which module + to find the appropriate handler. This is achieved using `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's Protobuf [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services). + For `LegacyMsg` routing, the `Route` function is called via the [module manager](/docs/sdk/v0.47/documentation/module-system/module-manager) to retrieve the route name and find the legacy [`Handler`](/docs/sdk/v0.47/documentation/module-system/msg-services#handler-type) within the module. + +- **`Msg` service:** Protobuf `Msg` service is responsible for executing each message in the `Tx` and causes state transitions to persist in `deliverTxState`.
+ +- **PostHandlers:** [`PostHandler`](/docs/sdk/v0.47/learn/advanced/baseapp#posthandler)s run after the execution of the message. If they fail, the state changes of `runMsgs`, as well as those of `PostHandlers`, are both reverted. + +- **Gas:** While a `Tx` is being delivered, a `GasMeter` is used to keep track of how much + gas is being used; if execution completes, `GasUsed` is set and returned in the + `abci.ResponseDeliverTx`. If execution halts because `BlockGasMeter` or `GasMeter` has run out or something else goes + wrong, a deferred function at the end appropriately errors or panics. + +If there are any failed state changes resulting from a `Tx` being invalid or `GasMeter` running out, +the transaction processing terminates and any state changes are reverted. Invalid transactions in a +block proposal cause validator nodes to reject the block and vote for a `nil` block instead. + +### Commit + +The final step is for nodes to commit the block and state changes. Validator nodes +perform the previous step of executing state transitions in order to validate the transactions, +then sign the block to confirm it. Full nodes that are not validators do not +participate in consensus - i.e. they cannot vote - but listen for votes to understand whether or +not they should commit the state changes. + +When they receive enough validator votes (2/3+ _precommits_ weighted by voting power), full nodes commit to a new block to be added to the blockchain and +finalize the state transitions in the application layer. A new state root is generated to serve as +a merkle proof for the state transitions. Applications use the [`Commit`](/docs/sdk/v0.47/learn/advanced/baseapp#commit) +ABCI method inherited from [Baseapp](/docs/sdk/v0.47/learn/advanced/baseapp); it syncs all the state transitions by +writing the `deliverState` into the application's internal state.
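The commit step can be sketched as copying `deliverState` into the application's committed state and resetting the volatile states. The `state` and `app` types here are illustrative only, not BaseApp's actual internals:

```go
package main

import "fmt"

type state map[string]string

func cloneState(s state) state {
	c := make(state, len(s))
	for k, v := range s {
		c[k] = v
	}
	return c
}

// app holds a committed state plus the two volatile states.
type app struct {
	committed    state
	checkState   state
	deliverState state
}

// commit mirrors the behavior described above: deliverState is written
// into the application's internal state, checkState restarts from the
// newly committed state, and deliverState is reset to nil.
func (a *app) commit() {
	a.committed = cloneState(a.deliverState)
	a.checkState = cloneState(a.committed)
	a.deliverState = nil
}

func main() {
	a := &app{committed: state{}, checkState: state{}}
	a.deliverState = state{"balance/alice": "90"}
	a.commit()
	fmt.Println(a.committed["balance/alice"]) // persisted by commit
	fmt.Println(a.deliverState == nil)        // reset until the next block
}
```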
As soon as the state changes are +committed, `checkState` starts afresh from the most recently committed state and `deliverState` +resets to `nil` in order to be consistent and reflect the changes. + +Note that not all blocks have the same number of transactions and it is possible for consensus to +result in a `nil` block or one with none at all. In a public blockchain network, it is also possible +for validators to be **byzantine**, or malicious, which may prevent a `Tx` from being committed in +the blockchain. Possible malicious behaviors include the proposer deciding to censor a `Tx` by +excluding it from the block or a validator voting against the block. + +At this point, the transaction lifecycle of a `Tx` is over: nodes have verified its validity, +delivered it by executing its state changes, and committed those changes. The `Tx` itself, +in `[]byte` form, is stored in a block and appended to the blockchain. diff --git a/docs/sdk/v0.47/learn/glossary.mdx b/docs/sdk/v0.47/learn/glossary.mdx index 5dce4348..2d8a6493 100644 --- a/docs/sdk/v0.47/learn/glossary.mdx +++ b/docs/sdk/v0.47/learn/glossary.mdx @@ -1,72 +1,71 @@ --- -title: "Glossary" -description: "Version: v0.47" +title: Glossary --- -## ABCI (Application Blockchain Interface)[​](#abci-application-blockchain-interface "Direct link to ABCI (Application Blockchain Interface)") +## ABCI (Application Blockchain Interface) The interface between the Tendermint consensus engine and the application state machine, allowing them to communicate and perform state transitions. ABCI is a critical component of the Cosmos SDK, enabling developers to build applications using any programming language that can communicate via ABCI. -## ATOM[​](#atom "Direct link to ATOM") +## ATOM The native staking token of the Cosmos Hub, used for securing the network, participating in governance, and paying fees for transactions. 
-## CometBFT[​](#cometbft "Direct link to CometBFT") +## CometBFT A Byzantine Fault Tolerant (BFT) consensus engine that powers the Cosmos SDK. CometBFT is responsible for handling the consensus and networking layers of a blockchain. -## Cosmos Hub[​](#cosmos-hub "Direct link to Cosmos Hub") +## Cosmos Hub The first blockchain built with the Cosmos SDK, functioning as a hub for connecting other blockchains in the Cosmos ecosystem through IBC. -## Cosmos SDK[​](#cosmos-sdk "Direct link to Cosmos SDK") +## Cosmos SDK A framework for building blockchain applications, focusing on modularity, scalability, and interoperability. -## CosmWasm[​](#cosmwasm "Direct link to CosmWasm") +## CosmWasm A smart contract engine for the Cosmos SDK that enables developers to write and deploy smart contracts in WebAssembly (Wasm). CosmWasm is designed to be secure, efficient, and easy to use, allowing developers to build complex applications on top of the Cosmos SDK. -## Delegator[​](#delegator "Direct link to Delegator") +## Delegator A participant in a Proof of Stake network who delegates their tokens to a validator. Delegators share in the rewards and risks associated with the validator's performance in the consensus process. -## Gas[​](#gas "Direct link to Gas") +## Gas A measure of computational effort required to execute a transaction or smart contract on a blockchain. In the Cosmos ecosystem, gas is used to meter transactions and allocate resources fairly among users. Users must pay a gas fee, usually in the native token, to have their transactions processed by the network. -## Governance[​](#governance "Direct link to Governance") +## Governance The decision-making process in the Cosmos ecosystem, which allows token holders to propose and vote on network upgrades, parameter changes, and other critical decisions. 
-## IBC (Inter-Blockchain Communication)[​](#ibc-inter-blockchain-communication "Direct link to IBC (Inter-Blockchain Communication)") +## IBC (Inter-Blockchain Communication) A protocol for secure and reliable communication between heterogeneous blockchains built on the Cosmos SDK. IBC enables the transfer of tokens and data across multiple blockchains. -## Interoperability[​](#interoperability "Direct link to Interoperability") +## Interoperability The ability of different blockchains and distributed systems to communicate and interact with each other, enabling the seamless transfer of information, tokens, and other digital assets. In the context of Cosmos, the Inter-Blockchain Communication (IBC) protocol is a core technology that enables interoperability between blockchains built with the Cosmos SDK and other compatible blockchains. Interoperability allows for increased collaboration, innovation, and value creation across different blockchain ecosystems. -## Light Clients[​](#light-clients "Direct link to Light Clients") +## Light Clients Lightweight blockchain clients that verify and process only a small subset of the blockchain data, allowing users to interact with the network without downloading the entire blockchain. ABCI++ aims to enhance the security and performance of light clients by enabling them to efficiently verify state transitions and proofs. -## Module[​](#module "Direct link to Module") +## Module A self-contained, reusable piece of code that can be used to build blockchain functionality within a Cosmos SDK application. Modules can be developed by the community and shared for others to use. -## Slashing[​](#slashing "Direct link to Slashing") +## Slashing The process of penalizing validators or delegators by reducing their staked tokens if they behave maliciously or fail to meet the network's performance requirements. 
-## Staking[​](#staking "Direct link to Staking") +## Staking The process of locking up tokens as collateral to secure the network, participate in consensus, and earn rewards in a Proof of Stake (PoS) blockchain like Cosmos. -## State Sync[​](#state-sync "Direct link to State Sync") +## State Sync A feature that allows new nodes to quickly synchronize with the current state of the blockchain without downloading and processing all previous blocks. State Sync is particularly useful for nodes that have been offline for an extended period or are joining the network for the first time. ABCI++ aims to improve the efficiency and security of State Sync. -## Validator[​](#validator "Direct link to Validator") +## Validator A network participant responsible for proposing new blocks, validating transactions, and securing the Cosmos SDK-based blockchain through staking tokens. Validators play a crucial role in maintaining the security and integrity of the network. diff --git a/docs/sdk/v0.47/learn/intro/overview.mdx b/docs/sdk/v0.47/learn/intro/overview.mdx index 88a418a7..ce4e14f9 100644 --- a/docs/sdk/v0.47/learn/intro/overview.mdx +++ b/docs/sdk/v0.47/learn/intro/overview.mdx @@ -1,30 +1,29 @@ --- -title: "What is the Cosmos SDK" -description: "Version: v0.47" +title: What is the Cosmos SDK --- -The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is an open-source framework for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**. +The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is an open-source framework for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**. 
-The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. We envision the Cosmos SDK as the npm-like framework to build secure blockchain applications on top of [CometBFT](https://github.com/cometbft/cometbft). SDK-based blockchains are built out of composable [modules](/v0.47/build/building-modules/intro), most of which are open-source and readily available for any developers to use. Anyone can create a module for the Cosmos SDK, and integrating already-built modules is as simple as importing them into your blockchain application. What's more, the Cosmos SDK is a capabilities-based system that allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [Object-Capability Model](/v0.47/learn/advanced/ocap). +The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. We envision the Cosmos SDK as the npm-like framework to build secure blockchain applications on top of [CometBFT](https://github.com/cometbft/cometbft). SDK-based blockchains are built out of composable [modules](/docs/sdk/v0.47/documentation/module-system/intro), most of which are open-source and readily available for any developers to use. Anyone can create a module for the Cosmos SDK, and integrating already-built modules is as simple as importing them into your blockchain application. What's more, the Cosmos SDK is a capabilities-based system that allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [Object-Capability Model](/docs/sdk/v0.47/learn/advanced/ocap). 
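The composability idea above can be illustrated with a stripped-down module registry. This is a conceptual sketch: `Module`, `manager`, and the two toy modules are invented for the example and are not the SDK's actual `module.Manager` API:

```go
package main

import "fmt"

// Module is a minimal stand-in for an SDK module: each module owns a
// name and its own message-handling logic.
type Module interface {
	Name() string
	HandleMsg(msg string) string
}

type bankModule struct{}

func (bankModule) Name() string                { return "bank" }
func (bankModule) HandleMsg(msg string) string { return "bank handled " + msg }

type stakingModule struct{}

func (stakingModule) Name() string                { return "staking" }
func (stakingModule) HandleMsg(msg string) string { return "staking handled " + msg }

// manager composes independently developed modules into one application:
// integrating a module amounts to passing it to the constructor.
type manager struct {
	modules map[string]Module
}

func newManager(mods ...Module) *manager {
	m := &manager{modules: make(map[string]Module)}
	for _, mod := range mods {
		m.modules[mod.Name()] = mod
	}
	return m
}

func (m *manager) route(module, msg string) string {
	mod, ok := m.modules[module]
	if !ok {
		return "unknown module: " + module
	}
	return mod.HandleMsg(msg)
}

func main() {
	mgr := newManager(bankModule{}, stakingModule{})
	fmt.Println(mgr.route("bank", "MsgSend"))
}
```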
-## What are Application-Specific Blockchains[​](#what-are-application-specific-blockchains "Direct link to What are Application-Specific Blockchains") +## What are Application-Specific Blockchains One development paradigm in the blockchain world today is that of virtual-machine blockchains like Ethereum, where development generally revolves around building decentralized applications on top of an existing blockchain as a set of smart contracts. While smart contracts can be very good for some use cases like single-use applications (e.g. ICOs), they often fall short for building complex decentralized platforms. More generally, smart contracts can be limiting in terms of flexibility, sovereignty and performance. Application-specific blockchains offer a radically different development paradigm than virtual-machine blockchains. An application-specific blockchain is a blockchain customized to operate a single application: developers have all the freedom to make the design decisions required for the application to run optimally. They can also provide better sovereignty, security and performance. -Learn more about [application-specific blockchains](/v0.47/learn/intro/why-app-specific). +Learn more about [application-specific blockchains](/docs/sdk/v0.47/learn/intro/why-app-specific). -## Why the Cosmos SDK[​](#why-the-cosmos-sdk "Direct link to Why the Cosmos SDK") +## Why the Cosmos SDK The Cosmos SDK is the most advanced framework for building custom application-specific blockchains today. Here are a few reasons why you might want to consider building your decentralized application with the Cosmos SDK: * The default consensus engine available within the Cosmos SDK is [CometBFT](https://github.com/cometbft/cometbft). CometBFT is the most (and only) mature BFT consensus engine in existence. It is widely used across the industry and is considered the gold standard consensus engine for building Proof-of-Stake systems. 
-* The Cosmos SDK is open-source and designed to make it easy to build blockchains out of composable [modules](/v0.47/build/modules). As the ecosystem of open-source Cosmos SDK modules grows, it will become increasingly easier to build complex decentralized platforms with it. +* The Cosmos SDK is open-source and designed to make it easy to build blockchains out of composable [modules](/docs/sdk/v0.47/documentation/module-system/modules). As the ecosystem of open-source Cosmos SDK modules grows, it will become increasingly easier to build complex decentralized platforms with it. * The Cosmos SDK is inspired by capabilities-based security, and informed by years of wrestling with blockchain state-machines. This makes the Cosmos SDK a very secure environment to build blockchains. * Most importantly, the Cosmos SDK has already been used to build many application-specific blockchains that are already in production. Among others, we can cite [Cosmos Hub](https://hub.cosmos.network), [IRIS Hub](https://irisnet.org), [Binance Chain](https://docs.binance.org/), [Terra](https://terra.money/) or [Kava](https://www.kava.io/). [Many more](https://cosmos.network/ecosystem) are building on the Cosmos SDK. 
-## Getting started with the Cosmos SDK[​](#getting-started-with-the-cosmos-sdk "Direct link to Getting started with the Cosmos SDK") +## Getting started with the Cosmos SDK -* Learn more about the [architecture of a Cosmos SDK application](/v0.47/learn/intro/sdk-app-architecture) +* Learn more about the [architecture of a Cosmos SDK application](/docs/sdk/v0.47/learn/intro/sdk-app-architecture) * Learn how to build an application-specific blockchain from scratch with the [Cosmos SDK Tutorial](https://cosmos.network/docs/tutorial) diff --git a/docs/sdk/v0.47/learn/intro/sdk-app-architecture.mdx b/docs/sdk/v0.47/learn/intro/sdk-app-architecture.mdx index 80fc18ee..5bd0244d 100644 --- a/docs/sdk/v0.47/learn/intro/sdk-app-architecture.mdx +++ b/docs/sdk/v0.47/learn/intro/sdk-app-architecture.mdx @@ -1,9 +1,9 @@ --- -title: "Introduction to Blockchain Architecture" -description: "Version: v0.47" +title: Introduction to Blockchain Architecture +description: 'At its core, a blockchain is a replicated deterministic state machine.' --- -## State machine[​](#state-machine "Direct link to State machine") +## State machine At its core, a blockchain is a [replicated deterministic state machine](https://en.wikipedia.org/wiki/State_machine_replication). @@ -11,48 +11,82 @@ A state machine is a computer science concept whereby a machine can have multipl Given a state S and a transaction T, the state machine will return a new state S'. -``` -+--------+ +--------+| | | || S +---------------->+ S' || | apply(T) | |+--------+ +--------+ +```text ++--------+ +--------+ +| | | | +| S +---------------->+ S' | +| | apply(T) | | ++--------+ +--------+ ``` In practice, the transactions are bundled in blocks to make the process more efficient. Given a state S and a block of transactions B, the state machine will return a new state S'. 
-``` -+--------+ +--------+| | | || S +----------------------------> | S' || | For each T in B: apply(T) | |+--------+ +--------+ +```text ++--------+ +--------+ +| | | | +| S +----------------------------> | S' | +| | For each T in B: apply(T) | | ++--------+ +--------+ ``` In a blockchain context, the state machine is deterministic. This means that if a node is started at a given state and replays the same sequence of transactions, it will always end up with the same final state. The Cosmos SDK gives developers maximum flexibility to define the state of their application, transaction types and state transition functions. The process of building state-machines with the Cosmos SDK will be described more in depth in the following sections. But first, let us see how the state-machine is replicated using **CometBFT**. -## CometBFT[​](#cometbft "Direct link to CometBFT") +## CometBFT Thanks to the Cosmos SDK, developers just have to define the state machine, and [*CometBFT*](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) will handle replication over the network for them. -``` - ^ +-------------------------------+ ^ | | | | Built with Cosmos SDK | | State-machine = Application | | | | | v | +-------------------------------+ | | | ^Blockchain node | | Consensus | | | | | | | +-------------------------------+ | CometBFT | | | | | | Networking | | | | | | v +-------------------------------+ v +```text expandable + ^ +-------------------------------+ ^ + | | | | Built with Cosmos SDK + | | State-machine = Application | | + | | | v + | +-------------------------------+ + | | | ^ +Blockchain node | | Consensus | | + | | | | + | +-------------------------------+ | CometBFT + | | | | + | | Networking | | + | | | | + v +-------------------------------+ v ``` [CometBFT](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) is an application-agnostic engine that is responsible for handling the *networking* and *consensus* layers of a blockchain. 
In practice, this means that CometBFT is responsible for propagating and ordering transaction bytes. CometBFT relies on an eponymous Byzantine-Fault-Tolerant (BFT) algorithm to reach consensus on the order of transactions. The CometBFT [consensus algorithm](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft#consensus-overview) works with a set of special nodes called *Validators*. Validators are responsible for adding blocks of transactions to the blockchain. At any given block, there is a validator set V. A validator in V is chosen by the algorithm to be the proposer of the next block. This block is considered valid if more than two thirds of V signed a `prevote` and a `precommit` on it, and if all the transactions that it contains are valid. The validator set can be changed by rules written in the state-machine. -## ABCI[​](#abci "Direct link to ABCI") +## ABCI CometBFT passes transactions to the application through an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/), which the application must implement. -``` - +---------------------+ | | | Application | | | +--------+---+--------+ ^ | | | ABCI | v +--------+---+--------+ | | | | | CometBFT | | | | | +---------------------+ +```text expandable + +---------------------+ + | | + | Application | + | | + +--------+---+--------+ + ^ | + | | ABCI + | v + +--------+---+--------+ + | | + | | + | CometBFT | + | | + | | + +---------------------+ ``` Note that **CometBFT only handles transaction bytes**. It has no knowledge of what these bytes mean. All CometBFT does is order these transaction bytes deterministically. CometBFT passes the bytes to the application via the ABCI, and expects a return code to inform it if the messages contained in the transactions were successfully processed or not. Here are the most important messages of the ABCI: -* `CheckTx`: When a transaction is received by CometBFT, it is passed to the application to check if a few basic requirements are met. 
`CheckTx` is used to protect the mempool of full-nodes against spam transactions. . A special handler called the [`AnteHandler`](/v0.47/learn/beginner/gas-fees#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet. -* `DeliverTx`: When a [valid block](https://docs.cometbft.com/v0.37/spec/core/data_structures#block) is received by CometBFT, each transaction in the block is passed to the application via `DeliverTx` in order to be processed. It is during this stage that the state transitions occur. The `AnteHandler` executes again, along with the actual [`Msg` service](/v0.47/build/building-modules/msg-services) RPC for each message in the transaction. +* `CheckTx`: When a transaction is received by CometBFT, it is passed to the application to check if a few basic requirements are met. `CheckTx` is used to protect the mempool of full-nodes against spam transactions. A special handler called the [`AnteHandler`](/docs/sdk/v0.47/learn/beginner/gas-fees#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet. +* `DeliverTx`: When a [valid block](https://docs.cometbft.com/v0.37/spec/core/data_structures#block) is received by CometBFT, each transaction in the block is passed to the application via `DeliverTx` in order to be processed.
It is during this stage that the state transitions occur. The `AnteHandler` executes again, along with the actual [`Msg` service](/docs/sdk/v0.47/documentation/module-system/msg-services) RPC for each message in the transaction. * `BeginBlock`/`EndBlock`: These messages are executed at the beginning and the end of each block, whether the block contains transactions or not. It is useful to trigger automatic execution of logic. Proceed with caution though, as computationally expensive loops could slow down your blockchain, or even freeze it if the loop is infinite. Find a more detailed view of the ABCI methods from the [CometBFT docs](https://docs.cometbft.com/v0.37/spec/abci/). -Any application built on CometBFT needs to implement the ABCI interface in order to communicate with the underlying local CometBFT engine. Fortunately, you do not have to implement the ABCI interface. The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](/v0.47/learn/intro/sdk-design#baseapp). +Any application built on CometBFT needs to implement the ABCI interface in order to communicate with the underlying local CometBFT engine. Fortunately, you do not have to implement the ABCI interface. The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](/docs/sdk/v0.47/learn/intro/sdk-design#baseapp). diff --git a/docs/sdk/v0.47/learn/intro/sdk-design.mdx b/docs/sdk/v0.47/learn/intro/sdk-design.mdx index 61764786..39e9713a 100644 --- a/docs/sdk/v0.47/learn/intro/sdk-design.mdx +++ b/docs/sdk/v0.47/learn/intro/sdk-design.mdx @@ -1,9 +1,8 @@ --- -title: "Main Components of the Cosmos SDK" -description: "Version: v0.47" +title: Main Components of the Cosmos SDK --- -The Cosmos SDK is a framework that facilitates the development of secure state-machines on top of CometBFT. At its core, the Cosmos SDK is a boilerplate implementation of the [ABCI](/v0.47/learn/intro/sdk-app-architecture#abci) in Golang. 
It comes with a [`multistore`](/v0.47/learn/advanced/store#multistore) to persist data and a [`router`](/v0.47/learn/advanced/baseapp#routing) to handle transactions. +The Cosmos SDK is a framework that facilitates the development of secure state-machines on top of CometBFT. At its core, the Cosmos SDK is a boilerplate implementation of the [ABCI](/docs/sdk/v0.47/learn/intro/sdk-app-architecture#abci) in Golang. It comes with a [`multistore`](/docs/sdk/v0.47/learn/advanced/store#multistore) to persist data and a [`router`](/docs/sdk/v0.47/learn/advanced/baseapp#routing) to handle transactions. Here is a simplified view of how transactions are handled by an application built on top of the Cosmos SDK when transferred from CometBFT via `DeliverTx`: @@ -12,41 +11,953 @@ Here is a simplified view of how transactions are handled by an application buil 3. Route each message to the appropriate module so that it can be processed. 4. Commit state changes. -## `baseapp`[​](#baseapp "Direct link to baseapp") +## `baseapp` -`baseapp` is the boilerplate implementation of a Cosmos SDK application. It comes with an implementation of the ABCI to handle the connection with the underlying consensus engine. Typically, a Cosmos SDK application extends `baseapp` by embedding it in [`app.go`](/v0.47/learn/beginner/overview-app#core-application-file). +`baseapp` is the boilerplate implementation of a Cosmos SDK application. It comes with an implementation of the ABCI to handle the connection with the underlying consensus engine. Typically, a Cosmos SDK application extends `baseapp` by embedding it in [`app.go`](/docs/sdk/v0.47/learn/beginner/overview-app#core-application-file). Here is an example of this from `simapp`, the Cosmos SDK demonstration app: -simapp/app.go +```go expandable +/go:build app_v1 -``` -loading... 
-``` +package simapp + +import ( + + "encoding/json" + "io" + "os" + "path/filepath" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "github.com/spf13/cast" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + simappparams "cosmossdk.io/simapp/params" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/client/grpc/tmservice" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + 
"github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + 
"github.com/cosmos/cosmos-sdk/x/nft" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + memKeys map[string]*storetypes.MemoryStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + encodingConfig := makeEncodingConfig() + appCodec := encodingConfig.Codec + legacyAmino := encodingConfig.Amino + interfaceRegistry := encodingConfig.InterfaceRegistry + txConfig := encodingConfig.TxConfig + + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. 
+ / + / nonceMempool := mempool.NewSenderNonceMempool() + / mempoolOpt := baseapp.SetMempool(nonceMempool) + / prepareOpt := func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt := func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. + / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := sdk.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, capabilitytypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + tkeys := sdk.NewTransientStoreKeys(paramstypes.TStoreKey) + / NOTE: The testingkey is just mounted for testing purposes. Actual applications should + / not include this key. 
+ memKeys := sdk.NewMemoryStoreKeys(capabilitytypes.MemStoreKey, "testingkey") + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(bApp, appOpts, appCodec, logger, keys); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, + memKeys: memKeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, keys[upgradetypes.StoreKey], authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +bApp.SetParamStore(&app.ConsensusParamsKeeper) + +app.CapabilityKeeper = capabilitykeeper.NewKeeper(appCodec, keys[capabilitytypes.StoreKey], memKeys[capabilitytypes.MemStoreKey]) + / Applications that wish to enforce statically created ScopedKeepers should call `Seal` after creating + / their scoped modules in `NewApp` with `ScopeToModule` + app.CapabilityKeeper.Seal() + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, keys[authtypes.StoreKey], authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + keys[banktypes.StoreKey], + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, keys[minttypes.StoreKey], app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, keys[distrtypes.StoreKey], app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, keys[slashingtypes.StoreKey], app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, keys[crisistypes.StoreKey], invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, keys[feegrant.StoreKey], app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper(keys[authzkeeper.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, keys[upgradetypes.StoreKey], appCodec, homePath, app.BaseApp, 
authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://github.com/cosmos/cosmos-sdk/blob/release/v0.46.x/x/gov/spec/01_concepts.md#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). + AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, keys[govtypes.StoreKey], app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(keys[nftkeeper.StoreKey], appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, keys[evidencetypes.StoreKey], app.StakingKeeper, app.SlashingKeeper, + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. 
+ skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. + app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app.BaseApp.DeliverTx, + encodingConfig.TxConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + capability.NewAppModule(appCodec, *app.CapabilityKeeper, false), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName)), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, 
app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + ) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + / NOTE: capability module's beginblocker must come before any modules using capabilities (e.g. IBC) + +app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, capabilitytypes.ModuleName, minttypes.ModuleName, distrtypes.ModuleName, slashingtypes.ModuleName, + evidencetypes.ModuleName, stakingtypes.ModuleName, + authtypes.ModuleName, banktypes.ModuleName, govtypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, + authz.ModuleName, feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, + slashingtypes.ModuleName, minttypes.ModuleName, + genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, + paramstypes.ModuleName, upgradetypes.ModuleName, vestingtypes.ModuleName, consensusparamtypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + / NOTE: Capability module must occur first so that it can initialize any capabilities + / so that other modules that want to create or claim capabilities afterwards in InitChain + / can do so safely. 
+ genesisModuleOrder := []string{ + capabilitytypes.ModuleName, authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +app.ModuleManager.RegisterServices(app.configurator) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + +app.MountMemoryStores(memKeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(encodingConfig.TxConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + logger.Error("error on loading last version", "err", err) + +os.Exit(1) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + ) + if err != nil { + panic(err) +} + +app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/simapp/app.go#L164-L203) +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context, req abci.RequestBeginBlock) + +abci.ResponseBeginBlock { + return app.ModuleManager.BeginBlock(ctx, req) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context, req abci.RequestEndBlock) + +abci.ResponseEndBlock { + return app.ModuleManager.EndBlock(ctx, req) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + 
+InitChainer(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return ModuleBasics.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetTKey returns the TransientStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) GetTKey(storeKey string) *storetypes.TransientStoreKey {
+	return app.tkeys[storeKey]
+}
+
+// GetMemKey returns the MemStoreKey for the provided mem key.
+//
+// NOTE: This is solely used for testing purposes.
+func (app *SimApp) GetMemKey(storeKey string) *storetypes.MemoryStoreKey {
+	return app.memKeys[storeKey]
+}
+
+// GetSubspace returns a param subspace for a given module name.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetSubspace(moduleName string) paramstypes.Subspace {
+	subspace, _ := app.ParamsKeeper.GetSubspace(moduleName)
+	return subspace
+}
+
+// SimulationManager implements the SimulationApp interface
+func (app *SimApp) SimulationManager() *module.SimulationManager {
+	return app.sm
+}
+
+// RegisterAPIRoutes registers all application module routes with the provided
+// API server.
+func (app *SimApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+	clientCtx := apiSvr.ClientCtx
+	// Register new tx routes from grpc-gateway.
+	authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
+
+	// Register new tendermint queries routes from grpc-gateway.
+	tmservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
+
+	// Register node gRPC service for grpc-gateway.
+	nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
+
+	// Register grpc-gateway routes for all modules.
+	ModuleBasics.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
+
+	// register swagger API from root so that other applications can override easily
+	if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+		panic(err)
+	}
+}
+
+// RegisterTxService implements the Application.RegisterTxService method.
+func (app *SimApp) RegisterTxService(clientCtx client.Context) {
+	authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry)
+}
+
+// RegisterTendermintService implements the Application.RegisterTendermintService method.
+func (app *SimApp) RegisterTendermintService(clientCtx client.Context) {
+	tmservice.RegisterTendermintService(
+		clientCtx,
+		app.BaseApp.GRPCQueryRouter(),
+		app.interfaceRegistry,
+		app.Query,
+	)
+}
+
+func (app *SimApp) RegisterNodeService(clientCtx client.Context) {
+	nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter())
+}
+
+// GetMaccPerms returns a copy of the module account permissions
+//
+// NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms() map[string][]string {
+	dupMaccPerms := make(map[string][]string)
+	for k, v := range maccPerms {
+		dupMaccPerms[k] = v
+	}
+	return dupMaccPerms
+}
+
+// BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses() map[string]bool {
+	modAccAddrs := make(map[string]bool)
+	for acc := range GetMaccPerms() {
+		modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
+	}
+
+	// allow the following addresses to receive funds
+	delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())
+	return modAccAddrs
+}
+
+// initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) paramskeeper.Keeper {
+	paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey)
+	paramsKeeper.Subspace(authtypes.ModuleName)
+	paramsKeeper.Subspace(banktypes.ModuleName)
+	paramsKeeper.Subspace(stakingtypes.ModuleName)
+	paramsKeeper.Subspace(minttypes.ModuleName)
+	paramsKeeper.Subspace(distrtypes.ModuleName)
+	paramsKeeper.Subspace(slashingtypes.ModuleName)
+	paramsKeeper.Subspace(govtypes.ModuleName).WithKeyTable(govv1.ParamKeyTable())
+	paramsKeeper.Subspace(crisistypes.ModuleName)
+	return paramsKeeper
+}
+
+func makeEncodingConfig() simappparams.EncodingConfig {
+	encodingConfig := simappparams.MakeTestEncodingConfig()
+	std.RegisterLegacyAminoCodec(encodingConfig.Amino)
+	std.RegisterInterfaces(encodingConfig.InterfaceRegistry)
+	ModuleBasics.RegisterLegacyAminoCodec(encodingConfig.Amino)
+	ModuleBasics.RegisterInterfaces(encodingConfig.InterfaceRegistry)
+	return encodingConfig
+}
+```
 The goal of `baseapp` is to provide a secure interface between the store and the extensible state machine while defining as little about the state machine as possible (staying true to the ABCI). -For more on `baseapp`, please click [here](/v0.47/learn/advanced/baseapp). +For more on `baseapp`, please click [here](/docs/sdk/v0.47/learn/advanced/baseapp). -## Multistore[​](#multistore "Direct link to Multistore") +## Multistore -The Cosmos SDK provides a [`multistore`](/v0.47/learn/advanced/store#multistore) for persisting state. The multistore allows developers to declare any number of [`KVStores`](/v0.47/learn/advanced/store#base-layer-kvstores). These `KVStores` only accept the `[]byte` type as value and therefore any custom structure needs to be marshalled using [a codec](/v0.47/learn/advanced/encoding) before being stored. +The Cosmos SDK provides a [`multistore`](/docs/sdk/v0.47/learn/advanced/store#multistore) for persisting state. The multistore allows developers to declare any number of [`KVStores`](/docs/sdk/v0.47/learn/advanced/store#base-layer-kvstores). These `KVStores` only accept the `[]byte` type as value and therefore any custom structure needs to be marshalled using [a codec](/docs/sdk/v0.47/learn/advanced/encoding) before being stored. -The multistore abstraction is used to divide the state in distinct compartments, each managed by its own module.
For more on the multistore, click [here](/v0.47/learn/advanced/store#multistore) +The multistore abstraction is used to divide the state in distinct compartments, each managed by its own module. For more on the multistore, click [here](/docs/sdk/v0.47/learn/advanced/store#multistore) -## Modules[​](#modules "Direct link to Modules") +## Modules The power of the Cosmos SDK lies in its modularity. Cosmos SDK applications are built by aggregating a collection of interoperable modules. Each module defines a subset of the state and contains its own message/transaction processor, while the Cosmos SDK is responsible for routing each message to its respective module. Here is a simplified view of how a transaction is processed by the application of each full-node when it is received in a valid block: -``` - + | | Transaction relayed from the full-node's | CometBFT engine to the node's application | via DeliverTx | | +---------------------v--------------------------+ | APPLICATION | | | | Using baseapp's methods: Decode the Tx, | | extract and route the message(s) | | | +---------------------+--------------------------+ | | | +---------------------------+ | | | Message routed to | the correct module | to be processed | |+----------------+ +---------------+ +----------------+ +------v----------+| | | | | | | || AUTH MODULE | | BANK MODULE | | STAKING MODULE | | GOV MODULE || | | | | | | || | | | | | | Handles message,|| | | | | | | Updates state || | | | | | | |+----------------+ +---------------+ +----------------+ +------+----------+ | | | | +--------------------------+ | | Return result to CometBFT | (0=Ok, 1=Err) v +```text expandable + + + | + | Transaction relayed from the full-node's + | CometBFT engine to the node's application + | via DeliverTx + | + | + +---------------------v--------------------------+ + | APPLICATION | + | | + | Using baseapp's methods: Decode the Tx, | + | extract and route the message(s) | + | | + 
+---------------------+--------------------------+ + | + | + | + +---------------------------+ + | + | + | Message routed to + | the correct module + | to be processed + | + | ++----------------+ +---------------+ +----------------+ +------v----------+ +| | | | | | | | +| AUTH MODULE | | BANK MODULE | | STAKING MODULE | | GOV MODULE | +| | | | | | | | +| | | | | | | Handles message,| +| | | | | | | Updates state | +| | | | | | | | ++----------------+ +---------------+ +----------------+ +------+----------+ + | + | + | + | + +--------------------------+ + | + | Return result to CometBFT + | (0=Ok, 1=Err) + v ``` -Each module can be seen as a little state-machine. Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](/v0.47/learn/advanced/ocap). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities. +Each module can be seen as a little state-machine. Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. 
Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](/docs/sdk/v0.47/learn/advanced/ocap). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities. Cosmos SDK modules are defined in the `x/` folder of the Cosmos SDK. Some core modules include: diff --git a/docs/sdk/v0.47/learn/intro/why-app-specific.mdx b/docs/sdk/v0.47/learn/intro/why-app-specific.mdx index e326db91..a8f23e6e 100644 --- a/docs/sdk/v0.47/learn/intro/why-app-specific.mdx +++ b/docs/sdk/v0.47/learn/intro/why-app-specific.mdx @@ -1,69 +1,80 @@ --- -title: "Application-Specific Blockchains" -description: "Version: v0.47" +title: Application-Specific Blockchains --- - - This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts. - +## Synopsis -## What are application-specific blockchains[​](#what-are-application-specific-blockchains "Direct link to What are application-specific blockchains") +This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts. + +## What are application-specific blockchains Application-specific blockchains are blockchains customized to operate a single application. Instead of building a decentralized application on top of an underlying blockchain like Ethereum, developers build their own blockchain from the ground up. This means building a full-node client, a light-client, and all the necessary interfaces (CLI, REST, ...) to interact with the nodes. 
-``` - ^ +-------------------------------+ ^ | | | | Built with Cosmos SDK | | State-machine = Application | | | | | v | +-------------------------------+ | | | ^Blockchain node | | Consensus | | | | | | | +-------------------------------+ | CometBFT | | | | | | Networking | | | | | | v +-------------------------------+ v +```text expandable + ^ +-------------------------------+ ^ + | | | | Built with Cosmos SDK + | | State-machine = Application | | + | | | v + | +-------------------------------+ + | | | ^ +Blockchain node | | Consensus | | + | | | | + | +-------------------------------+ | CometBFT + | | | | + | | Networking | | + | | | | + v +-------------------------------+ v ``` -## What are the shortcomings of Smart Contracts[​](#what-are-the-shortcomings-of-smart-contracts "Direct link to What are the shortcomings of Smart Contracts") +## What are the shortcomings of Smart Contracts Virtual-machine blockchains like Ethereum addressed the demand for more programmability back in 2014. At the time, the options available for building decentralized applications were quite limited. Most developers would build on top of the complex and limited Bitcoin scripting language, or fork the Bitcoin codebase which was hard to work with and customize. Virtual-machine blockchains came in with a new value proposition. Their state-machine incorporates a virtual-machine that is able to interpret turing-complete programs called Smart Contracts. These Smart Contracts are very good for use cases like one-time events (e.g. ICOs), but they can fall short for building complex decentralized platforms. Here is why: -* Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. 
Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails. -* Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely restrain **performance**. And even if the state-machine were to be split in multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of 10x in performance when the virtual-machine is removed). -* Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralized application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it. +- Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails. 
+- Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely restrain **performance**. And even if the state-machine were to be split in multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of 10x in performance when the virtual-machine is removed). +- Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralized application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it. Application-Specific Blockchains are designed to address these shortcomings. -## Application-Specific Blockchains Benefits[​](#application-specific-blockchains-benefits "Direct link to Application-Specific Blockchains Benefits") +## Application-Specific Blockchains Benefits -### Flexibility[​](#flexibility "Direct link to Flexibility") +### Flexibility Application-specific blockchains give maximum flexibility to developers: -* In Cosmos blockchains, the state-machine is typically connected to the underlying consensus engine via an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/). This interface can be wrapped in any programming language, meaning developers can build their state-machine in the programming language of their choice. +- In Cosmos blockchains, the state-machine is typically connected to the underlying consensus engine via an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/). 
This interface can be wrapped in any programming language, meaning developers can build their state-machine in the programming language of their choice. -* Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...). Typically the choice will be made based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in Javascript, ...). +- Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...). Typically the choice will be made based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in Javascript, ...). -* The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. Today, only CometBFT is production-ready, but in the future other consensus engines are expected to emerge. +- The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. Today, only CometBFT is production-ready, but in the future other consensus engines are expected to emerge. -* Even when they settle for a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements in their pristine forms. +- Even when they settle for a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements in their pristine forms. -* Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...). 
+- Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...). -* Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains. +- Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains. The list above contains a few examples that show how much flexibility application-specific blockchains give to developers. The goal of Cosmos and the Cosmos SDK is to make developer tooling as generic and composable as possible, so that each part of the stack can be forked, tweaked and improved without losing compatibility. As the community grows, more alternatives for each of the core building blocks will emerge, giving more options to developers. -### Performance[​](#performance "Direct link to Performance") +### Performance decentralized applications built with Smart Contracts are inherently capped in performance by the underlying environment. For a decentralized application to optimise performance, it needs to be built as an application-specific blockchain. Next are some of the benefits an application-specific blockchain brings in terms of performance: -* Developers of application-specific blockchains can choose to operate with a novel consensus engine such as CometBFT BFT. 
Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput. -* An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage. -* Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them. +- Developers of application-specific blockchains can choose to operate with a novel consensus engine such as CometBFT BFT. Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput. +- An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage. +- Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them. -### Security[​](#security "Direct link to Security") +### Security Security is hard to quantify, and greatly varies from platform to platform. 
That said here are some important benefits an application-specific blockchain can bring in terms of security: -* Developers can choose proven programming languages like Go when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature. -* Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. They can use their own custom cryptography, and rely on well-audited crypto libraries. -* Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application. +- Developers can choose proven programming languages like Go when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature. +- Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. They can use their own custom cryptography, and rely on well-audited crypto libraries. +- Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application. -### Sovereignty[​](#sovereignty "Direct link to Sovereignty") +### Sovereignty One of the major benefits of application-specific blockchains is sovereignty. A decentralized application is an ecosystem that involves many actors: users, developers, third-party services, and more. When developers build on virtual-machine blockchain where many decentralized applications coexist, the community of the application is different than the community of the underlying blockchain, and the latter supersedes the former in the governance process. If there is a bug or if a new feature is needed, stakeholders of the application have very little leeway to upgrade the code. 
If the community of the underlying blockchain refuses to act, nothing can happen. diff --git a/docs/sdk/v0.47/learn/learn.mdx b/docs/sdk/v0.47/learn/learn.mdx new file mode 100644 index 00000000..1758bd6b --- /dev/null +++ b/docs/sdk/v0.47/learn/learn.mdx @@ -0,0 +1,10 @@ +--- +title: Learn +--- + +* [Introduction](/docs/sdk/v0.47/learn/intro/overview) - Dive into the fundamentals of Cosmos SDK with an insightful introduction, + laying the groundwork for understanding blockchain development. In this section we provide a High-Level Overview of the SDK, then dive deeper into Core concepts such as Application-Specific Blockchains, Blockchain Architecture, and finally we begin to explore the main components of the SDK. +* [Beginner](/docs/sdk/v0.47/learn/beginner/overview-app) - Start your journey with beginner-friendly resources in the Cosmos SDK's "Learn" + section, providing a gentle entry point for newcomers to blockchain development. Here we focus on a little more detail, covering the Anatomy of a Cosmos SDK Application, Transaction Lifecycles, Accounts and lastly, Gas and Fees. +* [Advanced](/docs/sdk/v0.47/learn/advanced/baseapp) - Level up your Cosmos SDK expertise with advanced topics, tailored for experienced + developers diving into intricate blockchain application development. We cover the Cosmos SDK on a lower level as we dive into the core of the SDK with BaseApp, Transactions, Context, Node Client (Daemon), Store, Encoding, gRPC, REST, and CometBFT Endpoints, CLI, Events, Telemetry, Object-Capability Model, RunTx recovery middleware, Cosmos Blockchain Simulator, Protobuf Documentation, In-Place Store Migrations, Configuration and AutoCLI.
diff --git a/docs/sdk/v0.47/user.mdx b/docs/sdk/v0.47/user.mdx deleted file mode 100644 index 8bd7ed22..00000000 --- a/docs/sdk/v0.47/user.mdx +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "User Guides" -description: "Version: v0.47" ---- - -This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features. - -* [Setting up keys](/v0.47/user/run-node/keyring) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application. -* [Running a node](/v0.47/user/run-node/run-node) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps. -* [CLI](/v0.47/user/run-node/interact-node) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively. diff --git a/docs/sdk/v0.47/user/run-node/interact-node.mdx b/docs/sdk/v0.47/user/run-node/interact-node.mdx index 8a922b21..412e1542 100644 --- a/docs/sdk/v0.47/user/run-node/interact-node.mdx +++ b/docs/sdk/v0.47/user/run-node/interact-node.mdx @@ -1,62 +1,73 @@ --- -title: "Interacting with the Node" -description: "Version: v0.47" +title: Interacting with the Node --- - - There are multiple ways to interact with a node: using the CLI, using gRPC or using the REST endpoints. - +## Synopsis + +There are multiple ways to interact with a node: using the CLI, using gRPC or using the REST endpoints. 
- * [gRPC, REST and CometBFT Endpoints](/v0.47/learn/advanced/grpc_rest) - * [Running a Node](/v0.47/user/run-node/run-node) +**Pre-requisite Readings** + +- [gRPC, REST and CometBFT Endpoints](/docs/sdk/v0.47/learn/advanced/grpc_rest) +- [Running a Node](/docs/sdk/v0.47/user/run-node/run-node) + -## Using the CLI[​](#using-the-cli "Direct link to Using the CLI") +## Using the CLI Now that your chain is running, it is time to try sending tokens from the first account you created to a second account. In a new terminal window, start by running the following query command: -``` +```bash simd query bank balances $MY_VALIDATOR_ADDRESS ``` You should see the current balance of the account you created, equal to the original balance of `stake` you granted it minus the amount you delegated via the `gentx`. Now, create a second account: -``` -simd keys add recipient --keyring-backend test# Put the generated address in a variable for later use.RECIPIENT=$(simd keys show recipient -a --keyring-backend test) +```bash +simd keys add recipient --keyring-backend test + +# Put the generated address in a variable for later use. +RECIPIENT=$(simd keys show recipient -a --keyring-backend test) ``` The command above creates a local key-pair that is not yet registered on the chain. An account is created the first time it receives tokens from another account. Now, run the following command to send tokens to the `recipient` account: -``` -simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test# Check that the recipient account did receive the tokens.simd query bank balances $RECIPIENT +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test + +# Check that the recipient account did receive the tokens. 
+simd query bank balances $RECIPIENT ``` Finally, delegate some of the stake tokens sent to the `recipient` account to the validator: -``` -simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test# Query the total delegations to `validator`.simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test) +```bash +simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test + +# Query the total delegations to `validator`. +simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test) ``` You should see two delegations, the first one made from the `gentx`, and the second one you just performed from the `recipient` account. -## Using gRPC[​](#using-grpc "Direct link to Using gRPC") +## Using gRPC -The Protobuf ecosystem developed tools for different use cases, including code-generation from `*.proto` files into various languages. These tools allow the building of clients easily. Often, the client connection (i.e. the transport) can be plugged and replaced very easily. Let's explore one of the most popular transport: [gRPC](/v0.47/learn/advanced/grpc_rest). +The Protobuf ecosystem developed tools for different use cases, including code-generation from `*.proto` files into various languages. These tools make it easy to build clients. Often, the client connection (i.e. the transport) can be plugged and replaced very easily. Let's explore one of the most popular transports: [gRPC](/docs/sdk/v0.47/learn/advanced/grpc_rest). Since the code generation library largely depends on your own tech stack, we will only present three alternatives:  -* `grpcurl` for generic debugging and testing, -* programmatically via Go, -* CosmJS for JavaScript/TypeScript developers.
+- `grpcurl` for generic debugging and testing, +- programmatically via Go, +- CosmJS for JavaScript/TypeScript developers. -### grpcurl[​](#grpcurl "Direct link to grpcurl") +### grpcurl [grpcurl](https://github.com/fullstorydev/grpcurl) is like `curl` but for gRPC. It is also available as a Go library, but we will use it only as a CLI command for debugging and testing purposes. Follow the instructions in the previous link to install it. -Assuming you have a local node running (either a localnet, or connected a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9000` by the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](/v0.47/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml)): +Assuming you have a local node running (either a localnet, or connected to a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9090` with the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](/docs/sdk/v0.47/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml)): -``` +```bash grpcurl -plaintext localhost:9090 list ```
This i In order to get a description of the service you can run the following command: -``` -grpcurl -plaintext \ localhost:9090 \ describe cosmos.bank.v1beta1.Query # Service we want to inspect +```bash +grpcurl -plaintext \ + localhost:9090 \ + describe cosmos.bank.v1beta1.Query # Service we want to inspect ``` It's also possible to execute an RPC call to query the node for information: -``` -grpcurl \ -plaintext \ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances +```bash +grpcurl \ + -plaintext \ + -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/AllBalances ``` The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786). -#### Query for historical state using grpcurl[​](#query-for-historical-state-using-grpcurl "Direct link to Query for historical state using grpcurl") +#### Query for historical state using grpcurl You may also query for historical data by passing some [gRPC metadata](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) to the query: the `x-cosmos-block-height` metadata should contain the block to query. Using grpcurl as above, the command looks like: -``` -grpcurl \ -plaintext \ -H "x-cosmos-block-height: 123" \ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances +```bash +grpcurl \ + -plaintext \ + -H "x-cosmos-block-height: 123" \ + -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/AllBalances ``` Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response. -### Programmatically via Go[​](#programmatically-via-go "Direct link to Programmatically via Go") +### Programmatically via Go The following snippet shows how to query the state using gRPC inside a Go program. 
The idea is to create a gRPC connection, and use the Protobuf-generated client code to query the gRPC server. -#### Install Cosmos SDK[​](#install-cosmos-sdk "Direct link to Install Cosmos SDK") +#### Install Cosmos SDK -``` +```bash go get github.com/cosmos/cosmos-sdk@main ``` -``` -package mainimport ( "context" "fmt" "google.golang.org/grpc" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types")func queryState() error { myAddress, err := sdk.AccAddressFromBech32("cosmos1...") // the my_validator or recipient address. if err != nil { return err } // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry // if the request/response types contain interface instead of 'nil' you should pass the application specific codec. grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), ) if err != nil { return err } defer grpcConn.Close() // This creates a gRPC client to query the x/bank service. bankClient := banktypes.NewQueryClient(grpcConn) bankRes, err := bankClient.Balance( context.Background(), &banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"}, ) if err != nil { return err } fmt.Println(bankRes.GetBalance()) // Prints the account balance return nil}func main() { if err := queryState(); err != nil { panic(err) }} +```go expandable +package main + +import ( + "context" + "fmt" + "google.golang.org/grpc" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +func queryState() error { + myAddress, err := sdk.AccAddressFromBech32("cosmos1...") // the my_validator or recipient address.
+ if err != nil { + return err +} + + // Create a connection to the gRPC server. + grpcConn, err := grpc.Dial( + "127.0.0.1:9090", // your gRPC server address. + grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. + // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry + // if the request/response types contain interface instead of 'nil' you should pass the application specific codec. + grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), + ) + if err != nil { + return err +} + +defer grpcConn.Close() + + // This creates a gRPC client to query the x/bank service. + bankClient := banktypes.NewQueryClient(grpcConn) + +bankRes, err := bankClient.Balance( + context.Background(), + &banktypes.QueryBalanceRequest{ + Address: myAddress.String(), + Denom: "stake", +}, + ) + if err != nil { + return err +} + +fmt.Println(bankRes.GetBalance()) // Prints the account balance + + return nil +} + +func main() { + if err := queryState(); err != nil { + panic(err) +} +} ``` You can replace the query client (here we are using `x/bank`'s) with one generated from any other Protobuf service. The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786). -#### Query for historical state using Go[​](#query-for-historical-state-using-go "Direct link to Query for historical state using Go") +#### Query for historical state using Go Querying for historical blocks is done by adding the block height metadata in the gRPC request.
-``` -package mainimport ( "context" "fmt" "google.golang.org/grpc" "google.golang.org/grpc/metadata" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" grpctypes "github.com/cosmos/cosmos-sdk/types/grpc" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types")func queryState() error { myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") // the my_validator or recipient address. if err != nil { return err } // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry // if the request/response types contain interface instead of 'nil' you should pass the application specific codec. grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), ) if err != nil { return err } defer grpcConn.Close() // This creates a gRPC client to query the x/bank service. 
bankClient := banktypes.NewQueryClient(grpcConn) var header metadata.MD _, err = bankClient.Balance( metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), // Add metadata to request &banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"}, grpc.Header(&header), // Retrieve header from response ) if err != nil { return err } blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader) fmt.Println(blockHeight) // Prints the block height (12) return nil}func main() { if err := queryState(); err != nil { panic(err) }} +```go expandable +package main + +import ( + "context" + "fmt" + "google.golang.org/grpc" + "google.golang.org/grpc/metadata" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + grpctypes "github.com/cosmos/cosmos-sdk/types/grpc" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +func queryState() error { + myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") // the my_validator or recipient address. + if err != nil { + return err +} + + // Create a connection to the gRPC server. + grpcConn, err := grpc.Dial( + "127.0.0.1:9090", // your gRPC server address. + grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. + // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry + // if the request/response types contain interface instead of 'nil' you should pass the application specific codec. + grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), + ) + if err != nil { + return err +} + +defer grpcConn.Close() + + // This creates a gRPC client to query the x/bank service.
+ bankClient := banktypes.NewQueryClient(grpcConn) + +var header metadata.MD + _, err = bankClient.Balance( + metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), // Add metadata to request + &banktypes.QueryBalanceRequest{ + Address: myAddress.String(), + Denom: "stake", +}, + grpc.Header(&header), // Retrieve header from response + ) + if err != nil { + return err +} + blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader) + +fmt.Println(blockHeight) // Prints the block height (12) + +return nil +} + +func main() { + if err := queryState(); err != nil { + panic(err) +} +} ``` -### CosmJS[​](#cosmjs "Direct link to CosmJS") +### CosmJS -CosmJS documentation can be found at [https://cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs). As of January 2021, CosmJS documentation is still work in progress. +CosmJS documentation can be found at [cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs). As of January 2021, CosmJS documentation is still a work in progress. -## Using the REST Endpoints[​](#using-the-rest-endpoints "Direct link to Using the REST Endpoints") +## Using the REST Endpoints -As described in the [gRPC guide](/v0.47/learn/advanced/grpc_rest), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway. The format of the URL path is based on the Protobuf service method's full-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters. +As described in the [gRPC guide](/docs/sdk/v0.47/learn/advanced/grpc_rest), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway.
The format of the URL path is based on the Protobuf service method's fully-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters. Note that the REST endpoints are not enabled by default. To enable them, edit the `api` section of your `~/.simapp/config/app.toml` file: -``` -# Enable defines if the API server should be enabled.enable = true +```toml +# Enable defines if the API server should be enabled. +enable = true ``` As a concrete example, the `curl` command to make a balances request is: -``` -curl \ -X GET \ -H "Content-Type: application/json" \ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS +```bash +curl \ + -X GET \ + -H "Content-Type: application/json" \ + http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS ``` Make sure to replace `localhost:1317` with the REST endpoint of your node, configured under the `api.address` field. -The list of all available REST endpoints is available as a Swagger specification file, it can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to true in your [`app.toml`](/v0.47/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) file. +The list of all available REST endpoints is available as a Swagger specification file; it can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to `true` in your [`app.toml`](/docs/sdk/v0.47/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) file. -### Query for historical state using REST[​](#query-for-historical-state-using-rest "Direct link to Query for historical state using REST") +### Query for historical state using REST Querying for historical state is done using the HTTP header `x-cosmos-block-height`.
For example, a curl command would look like: -``` -curl \ -X GET \ -H "Content-Type: application/json" \ -H "x-cosmos-block-height: 123" \ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS +```bash +curl \ + -X GET \ + -H "Content-Type: application/json" \ + -H "x-cosmos-block-height: 123" \ + http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS ``` Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response. -### Cross-Origin Resource Sharing (CORS)[​](#cross-origin-resource-sharing-cors "Direct link to Cross-Origin Resource Sharing (CORS)") +### Cross-Origin Resource Sharing (CORS) -[CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default to help with security. If you would like to use the rest-server in a public environment we recommend you provide a reverse proxy, this can be done with [nginx](https://www.nginx.com/). For testing and development purposes there is an `enabled-unsafe-cors` field inside [`app.toml`](/v0.47/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml). +[CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default to help with security. If you would like to use the rest-server in a public environment, we recommend you provide a reverse proxy; this can be done with [nginx](https://www.nginx.com/). For testing and development purposes there is an `enabled-unsafe-cors` field inside [`app.toml`](/docs/sdk/v0.47/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml).
diff --git a/docs/sdk/v0.47/user/run-node/keyring.mdx b/docs/sdk/v0.47/user/run-node/keyring.mdx index dc5631ae..bb99c802 100644 --- a/docs/sdk/v0.47/user/run-node/keyring.mdx +++ b/docs/sdk/v0.47/user/run-node/keyring.mdx @@ -1,89 +1,120 @@ --- -title: "Setting up the keyring" -description: "Version: v0.47" +title: Setting up the keyring --- - - This document describes how to configure and use the keyring and its various backends for an [**application**](/v0.47/learn/beginner/overview-app). - +## Synopsis -The keyring holds the private/public keypairs used to interact with a node. For instance, a validator key needs to be set up before running the blockchain node, so that blocks can be correctly signed. The private key can be stored in different locations, called "backends", such as a file or the operating system's own key storage. - -## Available backends for the keyring[​](#available-backends-for-the-keyring "Direct link to Available backends for the keyring") +This document describes how to configure and use the keyring and its various backends for an [**application**](/docs/sdk/v0.47/learn/beginner/overview-app). -Starting with the v0.38.0 release, Cosmos SDK comes with a new keyring implementation that provides a set of commands to manage cryptographic keys in a secure fashion. The new keyring supports multiple storage backends, some of which may not be available on all operating systems. - -### The `os` backend[​](#the-os-backend "Direct link to the-os-backend") +The keyring holds the private/public keypairs used to interact with a node. For instance, a validator key needs to be set up before running the blockchain node, so that blocks can be correctly signed. The private key can be stored in different locations, called "backends", such as a file or the operating system's own key storage. -The `os` backend relies on operating system-specific defaults to handle key storage securely. 
Typically, an operating system's credential sub-system handles password prompts, private keys storage, and user sessions according to the user's password policies. Here is a list of the most popular operating systems and their respective passwords manager: +## Available backends for the keyring -* macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac) +Starting with the v0.38.0 release, Cosmos SDK comes with a new keyring implementation +that provides a set of commands to manage cryptographic keys in a secure fashion. The +new keyring supports multiple storage backends, some of which may not be available on +all operating systems. -* Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management) +### The `os` backend -* GNU/Linux: +The `os` backend relies on operating system-specific defaults to handle key storage +securely. Typically, an operating system's credential sub-system handles password prompts, +private key storage, and user sessions according to the user's password policies. Here +is a list of the most popular operating systems and their respective password managers: - * [libsecret](https://gitlab.gnome.org/GNOME/libsecret) - * [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html) +- macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac) +- Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management) +- GNU/Linux: + - [libsecret](https://gitlab.gnome.org/GNOME/libsecret) + - [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html) -GNU/Linux distributions that use GNOME as default desktop environment typically come with [Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE based distributions are commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager).
Whilst the former is in fact a `libsecret` convenient frontend, the latter is a `kwallet` client. +GNU/Linux distributions that use GNOME as the default desktop environment typically come with +[Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE-based distributions are +commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager). +Whilst the former is in fact a convenient `libsecret` frontend, the latter is a `kwallet` +client. -`os` is the default option since operating system's default credentials managers are designed to meet users' most common needs and provide them with a comfortable experience without compromising on security. +`os` is the default option since the operating system's default credential managers are +designed to meet users' most common needs and provide them with a comfortable +experience without compromising on security. The recommended backends for headless environments are `file` and `pass`. -### The `file` backend[​](#the-file-backend "Direct link to the-file-backend") +### The `file` backend -The `file` backend more closely resembles the keybase implementation used prior to v0.38.1. It stores the keyring encrypted within the app's configuration directory. This keyring will request a password each time it is accessed, which may occur multiple times in a single command resulting in repeated password prompts. If using bash scripts to execute commands using the `file` option you may want to utilize the following format for multiple prompts: +The `file` backend more closely resembles the keybase implementation used prior to +v0.38.1. It stores the keyring encrypted within the app's configuration directory. This +keyring will request a password each time it is accessed, which may occur multiple +times in a single command resulting in repeated password prompts.
If using bash scripts +to execute commands using the `file` option you may want to utilize the following format +for multiple prompts: -``` -# assuming that KEYPASSWD is set in the environment$ gaiacli config keyring-backend file # use file backend$ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts$ echo $KEYPASSWD | gaiacli keys show me # single prompt +```shell +# assuming that KEYPASSWD is set in the environment +$ gaiacli config keyring-backend file # use file backend +$ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts +$ echo $KEYPASSWD | gaiacli keys show me # single prompt ``` - - The first time you add a key to an empty keyring, you will be prompted to type the password twice. - + + The first time you add a key to an empty keyring, you will be prompted to type + the password twice. + -### The `pass` backend[​](#the-pass-backend "Direct link to the-pass-backend") +### The `pass` backend -The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk encryption of keys' sensitive data and metadata. Keys are stored inside `gpg` encrypted files within app-specific directories. `pass` is available for the most popular UNIX operating systems as well as GNU/Linux distributions. Please refer to its manual page for information on how to download and install it. +The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk +encryption of keys' sensitive data and metadata. Keys are stored inside `gpg` encrypted files +within app-specific directories. `pass` is available for the most popular UNIX +operating systems as well as GNU/Linux distributions. Please refer to its manual page for +information on how to download and install it. - - **pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically invokes the `gpg-agent` daemon upon execution, which handles the caching of GnuPG credentials. 
Please refer to `gpg-agent` man page for more information on how to configure cache parameters such as credentials TTL and passphrase expiration. - + + **pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically + invokes the `gpg-agent` daemon upon execution, which handles the caching of + GnuPG credentials. Please refer to the `gpg-agent` man page for more information + on how to configure cache parameters such as credentials TTL and passphrase + expiration. + The password store must be set up prior to first use: -``` +```shell pass init ``` -Replace `` with your GPG key ID. You can use your personal GPG key or an alternative one you may want to use specifically to encrypt the password store. +Replace `` with your GPG key ID. You can use your personal GPG key or an alternative +one you may want to use specifically to encrypt the password store. -### The `kwallet` backend[​](#the-kwallet-backend "Direct link to the-kwallet-backend") +### The `kwallet` backend -The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on the GNU/Linux distributions that ships KDE as default desktop environment. Please refer to [KWallet Handbook](https://docs.kde.org/stable5/en/kdeutils/kwallet5/index.html) for more information. +The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on the +GNU/Linux distributions that ship KDE as the default desktop environment. Please refer to +[KWallet Handbook](https://docs.kde.org/stable5/en/kdeutils/kwallet5/index.html) for more +information. -### The `test` backend[​](#the-test-backend "Direct link to the-test-backend") +### The `test` backend -The `test` backend is a password-less variation of the `file` backend. Keys are stored unencrypted on disk. +The `test` backend is a password-less variation of the `file` backend. Keys are stored +unencrypted on disk. **Provided for testing purposes only. The `test` backend is not recommended for use in production environments**.
-### The `memory` backend[​](#the-memory-backend "Direct link to the-memory-backend") +### The `memory` backend The `memory` backend stores keys in memory. The keys are immediately deleted after the program has exited. **Provided for testing purposes only. The `memory` backend is not recommended for use in production environments**. -### Setting backend using the env variable[​](#setting-backend-using-the-env-variable "Direct link to Setting backend using the env variable") +### Setting backend using the env variable You can set the keyring-backend using the env variable: `BINNAME_KEYRING_BACKEND`. For example, if your binary name is `gaia-v5` then set: `export GAIA_V5_KEYRING_BACKEND=pass` -## Adding keys to the keyring[​](#adding-keys-to-the-keyring "Direct link to Adding keys to the keyring") +## Adding keys to the keyring - Make sure you can build your own binary, and replace `simd` with the name of your binary in the snippets. + Make sure you can build your own binary, and replace `simd` with the name of + your binary in the snippets. Applications developed using the Cosmos SDK come with the `keys` subcommand. For the purpose of this tutorial, we're running the `simd` CLI, which is an application built using the Cosmos SDK for testing and educational purposes. For more information, see [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). @@ -92,8 +123,11 @@ You can use `simd keys` for help about the keys command and `simd keys [command] To create a new key in the keyring, run the `add` subcommand with a `` argument. For the purpose of this tutorial, we will solely use the `test` backend, and call our new key `my_validator`. This key will be used in the next section.
-``` -$ simd keys add my_validator --keyring-backend test# Put the generated address in a variable for later use.MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test) +```bash +$ simd keys add my_validator --keyring-backend test + +# Put the generated address in a variable for later use. +MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test) ``` This command generates a new 24-word mnemonic phrase, persists it to the relevant backend, and outputs information about the keypair. If this keypair will be used to hold value-bearing tokens, be sure to write down the mnemonic phrase somewhere safe! diff --git a/docs/sdk/v0.47/user/run-node/multisig-guide.mdx b/docs/sdk/v0.47/user/run-node/multisig-guide.mdx index 769e23b8..1fdfd616 100644 --- a/docs/sdk/v0.47/user/run-node/multisig-guide.mdx +++ b/docs/sdk/v0.47/user/run-node/multisig-guide.mdx @@ -1,9 +1,12 @@ --- -title: "Guide to Multisig transactions" -description: "Version: v0.47" +title: Guide to Multisig transactions +description: >- + Multisignature accounts are accounts that are generated from multiple public + keys. A multisig necessitates that any transaction made on its behalf must be + signed by a specified threshold of its members. --- -## Overview[​](#overview "Direct link to Overview") +## Overview Multisignature accounts are accounts that are generated from multiple public keys. A multisig necessitates that any transaction made on its behalf must be signed by a specified threshold of its members. @@ -19,84 +22,89 @@ This is done by signing the transaction multiple times, once with each private k Once you have a transaction with the necessary signatures, it can be broadcasted to the network. The network will verify that the transaction has the necessary signatures from the accounts in the multisig key before it is executed. 
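The threshold rule described above can be sketched as a simple k-of-n check (an illustrative sketch only; the SDK's actual multisig verification checks real cryptographic signatures against a public key set, and `thresholdMet` is a hypothetical name, not an SDK function):

```go
package main

import "fmt"

// thresholdMet reports whether enough members of the multisig have signed.
// Illustration of the k-of-n rule only, not the SDK implementation.
func thresholdMet(signed map[string]bool, threshold int) bool {
	count := 0
	for _, ok := range signed {
		if ok {
			count++
		}
	}
	return count >= threshold
}

func main() {
	// A 2-of-2 multisig between alice and bob: the transaction is
	// valid only once both members have signed.
	sigs := map[string]bool{"alice": true, "bob": false}
	fmt.Println(thresholdMet(sigs, 2)) // false: only alice has signed

	sigs["bob"] = true
	fmt.Println(thresholdMet(sigs, 2)) // true: threshold reached
}
```

With a threshold of 1 the same two-member account would instead accept either signature on its own.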
-## Step by step guide to multisig transactions[​](#step-by-step-guide-to-multisig-transactions "Direct link to Step by step guide to multisig transactions") +## Step by step guide to multisig transactions -This tutorial will use the test keyring which will store the keys in the default home directory `~/.simapp` unless otherwise specified. Verify which keys are available in the test keyring by running `--keyring-backend test`. +This tutorial will use the test keyring which will store the keys in the default home directory `~/.simapp` unless otherwise specified. +Verify which keys are available in the test keyring by running `--keyring-backend test`. In order to specify a consistent keyring for the entirety of the tutorial, set the default keyring by running `simd config keyring-backend test`. -``` +```shell simd keys list ``` If you don't already have accounts listed, create the accounts using the commands below. -``` -simd keys add alicesimd keys add bobsimd keys add recipient +```shell +simd keys add alice +simd keys add bob +simd keys add recipient ``` Alternatively, the public keys comprising the multisig can be imported into the keyring. -``` +```shell simd keys add alice --pubkey --keyring-backend test ``` Create the multisig account between bob and alice. -``` +```shell simd keys add alice-bob-multisig --multisig alice,bob --multisig-threshold 2 ``` Before generating any transaction, verify the balance of each account and note the amount. This step is crucial to confirm that the transaction can be processed successfully. -``` -simd query bank balances $(simd keys show my_validator -a)simd query bank balances $(simd keys show alice-bob-multisig -a) +```shell +simd query bank balances $(simd keys show my_validator -a) +simd query bank balances $(simd keys show alice-bob-multisig -a) ```
In our case, the genesis account, my\_validator, holds our funds. Therefore, we will transfer funds from the `my_validator` account to the `alice-bob-multisig` account. Fund the multisig by sending it `stake` from the genesis account. +Ensure that the alice-bob-multisig account is funded with a sufficient balance to complete the transaction (gas included). In our case, the genesis account, my\_validator, holds our funds. Therefore, we will transfer funds from the `my_validator` account to the `alice-bob-multisig` account. +Fund the multisig by sending it `stake` from the genesis account. -``` +```shell simd tx bank send $(simd keys show my_validator -a) $(simd keys show alice-bob-multisig -a) "10000stake" ``` Check both accounts again to see if the funds have transferred. -``` +```shell simd query bank balances $(simd keys show alice-bob-multisig -a) ``` Initiate the transaction. This command will create a transaction from the multisignature account `alice-bob-multisig` to send 1000stake to the recipient account. The transaction will be generated but not broadcasted yet. -``` +```shell simd tx bank send $(simd keys show alice-bob-multisig -a) $(simd keys show recipient -a) 1000stake --generate-only --chain-id my-test-chain > tx.json ``` Alice signs the transaction using their key and refers to the multisig address. Execute the command below to accomplish this: -``` +```shell simd tx sign --from $(simd keys show alice -a) --multisig=cosmos1re6mg24kvzjzmwmly3dqrqzdkruxwvctw8wwds tx.json --chain-id my-test-chain > tx-signed-alice.json ``` Let's repeat for Bob. -``` +```shell simd tx sign --from $(simd keys show bob -a) --multisig=cosmos1re6mg24kvzjzmwmly3dqrqzdkruxwvctw8wwds tx.json --chain-id my-test-chain > tx-signed-bob.json ``` Execute a multisign transaction by using the `simd tx multisign` command. This command requires the names and signed transactions of all the participants in the multisig account. 
Here, Alice and Bob are the participants: -``` +```shell simd tx multisign tx.json alice-bob-multisig tx-signed-alice.json tx-signed-bob.json --chain-id my-test-chain > tx-signed.json ``` Once the multisigned transaction is generated, it needs to be broadcasted to the network. This is done using the simd tx broadcast command: -``` +```shell simd tx broadcast tx-signed.json --chain-id my-test-chain --gas auto --fees 250stake ``` Once the transaction is broadcasted, it's a good practice to verify if the transaction was successful. You can query the recipient's account balance again to confirm if the funds were indeed transferred: -``` +```shell simd query bank balances $(simd keys show alice-bob-multisig -a) ``` diff --git a/docs/sdk/v0.47/user/run-node/rosetta.mdx b/docs/sdk/v0.47/user/run-node/rosetta.mdx index 49e2629b..63c6ea2d 100644 --- a/docs/sdk/v0.47/user/run-node/rosetta.mdx +++ b/docs/sdk/v0.47/user/run-node/rosetta.mdx @@ -1,11 +1,14 @@ --- -title: "Rosetta" -description: "Version: v0.47" +title: Rosetta +description: >- + The rosetta package implements Coinbase's Rosetta API. This document provides + instructions on how to use the Rosetta API integration. For information about + the motivation and design choices, refer to ADR 035. --- -The `rosetta` package implements Coinbase's [Rosetta API](https://www.rosetta-api.org). This document provides instructions on how to use the Rosetta API integration. For information about the motivation and design choices, refer to [ADR 035](https://docs.cosmos.network/main/architecture/adr-035-rosetta-api-support). +The `rosetta` package implements Coinbase's [Rosetta API](https://www.rosetta-api.org). This document provides instructions on how to use the Rosetta API integration. For information about the motivation and design choices, refer to [ADR 035](/docs/common/pages/adr-comprehensive#adr-035-rosetta-api-support). 
-## Add Rosetta Command[​](#add-rosetta-command "Direct link to Add Rosetta Command") +## Add Rosetta Command The Rosetta API server is a stand-alone server that connects to a node of a chain developed with Cosmos SDK. @@ -13,20 +16,22 @@ To enable Rosetta API support, it's required to add the `RosettaCommand` to your Import the `rosettaCmd` package: -``` +```go import "cosmossdk.io/tools/rosetta/cmd" ``` Find the following line: -``` +```go initRootCmd(rootCmd, encodingConfig) ``` After that line, add the following: -``` -rootCmd.AddCommand( rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +```go +rootCmd.AddCommand( + rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec), +) ``` The `RosettaCommand` function builds the `rosetta` root command and is defined in the `rosettaCmd` package (`cosmossdk.io/tools/rosetta/cmd`). @@ -35,58 +40,88 @@ Since we’ve updated the Cosmos SDK to work with the Rosetta API, updating the An implementation example can be found in `simapp` package.
-## Use Rosetta Command[​](#use-rosetta-command "Direct link to Use Rosetta Command") +## Use Rosetta Command To run Rosetta in your application CLI, use the following command: -``` +```shell simd rosetta --help ``` To test and run Rosetta API endpoints for applications that are running and exposed, use the following command: -``` -simd rosetta --blockchain "your application name (ex: gaia)" --network "your chain identifier (ex: testnet-1)" --tendermint "tendermint endpoint (ex: localhost:26657)" --grpc "gRPC endpoint (ex: localhost:9090)" --addr "rosetta binding address (ex: :8080)" +```shell +simd rosetta \ + --blockchain "your application name (ex: gaia)" \ + --network "your chain identifier (ex: testnet-1)" \ + --tendermint "tendermint endpoint (ex: localhost:26657)" \ + --grpc "gRPC endpoint (ex: localhost:9090)" \ + --addr "rosetta binding address (ex: :8080)" ``` -## Use Rosetta Standalone[​](#use-rosetta-standalone "Direct link to Use Rosetta Standalone") +## Use Rosetta Standalone To use Rosetta standalone, without having to add it in your application, install it with the following command: -``` +```bash go install cosmossdk.io/tools/rosetta/cmd/rosetta ``` Alternatively, for building from source, simply run `make rosetta`. The binary will be located in `tools/rosetta`. -## Extensions[​](#extensions "Direct link to Extensions") +## Extensions There are two ways in which you can customize and extend the implementation with your custom settings. -### Message extension[​](#message-extension "Direct link to Message extension") +### Message extension In order to make an `sdk.Msg` understandable by rosetta, the only thing required is adding the methods to your messages that satisfy the `rosetta.Msg` interface. Examples on how to do so can be found in the staking types such as `MsgDelegate`, or in bank types such as `MsgSend`.
-### Client interface override[​](#client-interface-override "Direct link to Client interface override") +### Client interface override In case more customization is required, it's possible to embed the Client type and override the methods which require customizations. Example: -``` -package custom_clientimport ("context""github.com/coinbase/rosetta-sdk-go/types""cosmossdk.io/tools/rosetta/lib")// CustomClient embeds the standard cosmos client// which means that it implements the cosmos-rosetta-gateway Client// interface while at the same time allowing to customize certain methodstype CustomClient struct { *rosetta.Client}func (c *CustomClient) ConstructionPayload(_ context.Context, request *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error) { // provide custom signature bytes panic("implement me")} +```go expandable +package custom_client + +import ( + "context" + + "github.com/coinbase/rosetta-sdk-go/types" + + "cosmossdk.io/tools/rosetta/lib" +) + +// CustomClient embeds the standard cosmos client, +// which means that it implements the cosmos-rosetta-gateway Client +// interface while at the same time allowing to customize certain methods. +type CustomClient struct { + *rosetta.Client +} + +func (c *CustomClient) ConstructionPayload(_ context.Context, request *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error) { + // provide custom signature bytes + panic("implement me") +} ``` NOTE: when using a customized client, the command cannot be used as the constructors required **may** differ, so it's required to create a new one. We intend to provide a way to init a customized client without writing extra code in the future. -### Error extension[​](#error-extension "Direct link to Error extension") +### Error extension Rosetta requires 'returned' errors to be provided to network options. In order to declare a new rosetta error, we use the `errors` package in cosmos-rosetta-gateway.
Example: -``` -package custom_errorsimport crgerrs "cosmossdk.io/tools/rosetta/lib/errors"var customErrRetriable = truevar CustomError = crgerrs.RegisterError(100, "custom message", customErrRetriable, "description") +```go +package custom_errors +import crgerrs "cosmossdk.io/tools/rosetta/lib/errors" + +var customErrRetriable = true +var CustomError = crgerrs.RegisterError(100, "custom message", customErrRetriable, "description") ``` Note: errors must be registered before cosmos-rosetta-gateway's `Server`.`Start` method is called. Otherwise the registration will be ignored. Errors with same code will be ignored too. diff --git a/docs/sdk/v0.47/user/run-node/run-node.mdx b/docs/sdk/v0.47/user/run-node/run-node.mdx index 2456315e..a5d6a0c9 100644 --- a/docs/sdk/v0.47/user/run-node/run-node.mdx +++ b/docs/sdk/v0.47/user/run-node/run-node.mdx @@ -1,73 +1,99 @@ --- -title: "Running a Node" -description: "Version: v0.47" +title: Running a Node --- - - Now that the application is ready and the keyring populated, it's time to see how to run the blockchain node. In this section, the application we are running is called [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`. - +## Synopsis + +Now that the application is ready and the keyring populated, it's time to see how to run the blockchain node. In this section, the application we are running is called [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`. 
- * [Anatomy of a Cosmos SDK Application](/v0.47/learn/beginner/overview-app) - * [Setting up the keyring](/v0.47/user/run-node/keyring) +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.47/learn/beginner/overview-app) +- [Setting up the keyring](/docs/sdk/v0.47/user/run-node/keyring) + -## Initialize the Chain[​](#initialize-the-chain "Direct link to Initialize the Chain") +## Initialize the Chain - Make sure you can build your own binary, and replace `simd` with the name of your binary in the snippets. + Make sure you can build your own binary, and replace `simd` with the name of + your binary in the snippets. Before actually running the node, we need to initialize the chain, and most importantly its genesis file. This is done with the `init` subcommand: -``` -# The argument is the custom username of your node, it should be human-readable.simd init --chain-id my-test-chain +```bash +# The argument is the custom username of your node; it should be human-readable. +simd init --chain-id my-test-chain ``` The command above creates all the configuration files needed for your node to run, as well as a default genesis file, which defines the initial state of the network. - - All these configuration files are in `~/.simapp` by default, but you can overwrite the location of this folder by passing the `--home` flag to each commands, or set an `$APPD_HOME` environment variable (where `APPD` is the name of the binary). - + + All these configuration files are in `~/.simapp` by default, but you can + overwrite the location of this folder by passing the `--home` flag to each + command, or set an `$APPD_HOME` environment variable (where `APPD` is the + name of the binary). + The `~/.simapp` folder has the following structure: -``` -. # ~/.simapp |- data # Contains the databases used by the node. |- config/ |- app.toml # Application-related configuration file. |- config.toml # CometBFT-related configuration file.
|- genesis.json # The genesis file. |- node_key.json # Private key to use for node authentication in the p2p protocol. |- priv_validator_key.json # Private key to use as a validator in the consensus protocol. +```bash +. # ~/.simapp + |- data # Contains the databases used by the node. + |- config/ + |- app.toml # Application-related configuration file. + |- config.toml # CometBFT-related configuration file. + |- genesis.json # The genesis file. + |- node_key.json # Private key to use for node authentication in the p2p protocol. + |- priv_validator_key.json # Private key to use as a validator in the consensus protocol. ``` -## Updating Some Default Settings[​](#updating-some-default-settings "Direct link to Updating Some Default Settings") +## Updating Some Default Settings If you want to change any field values in configuration files (for example, `genesis.json`), you can use the `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) & `sed` commands to do that. A few examples are listed here.
-``` -# to change the chain-idjq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json# to enable the api serversed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml# to change the voting_periodjq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json# to change the inflationjq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json +```bash expandable +# to change the chain-id +jq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json + +# to enable the api server +sed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml + +# to change the voting_period +jq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json + +# to change the inflation +jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json ``` -### Client Interaction[​](#client-interaction "Direct link to Client Interaction") +### Client Interaction When instantiating a node, GRPC and REST are defaulted to localhost to avoid unknown exposure of your node to the public. It is recommended not to expose these endpoints unless a proxy that can handle load balancing or authentication is set up between your node and the public. - - A commonly used tool for this is [nginx](https://nginx.org). - +A commonly used tool for this is [nginx](https://nginx.org). -## Adding Genesis Accounts[​](#adding-genesis-accounts "Direct link to Adding Genesis Accounts") +## Adding Genesis Accounts -Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](/v0.47/user/run-node/keyring#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend).
+Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](/docs/sdk/v0.47/user/run-node/keyring#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend). Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file. Doing so will also make sure your chain is aware of this account's existence: -``` +```bash simd genesis add-genesis-account $MY_VALIDATOR_ADDRESS 100000000000stake ``` -Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](/v0.47/user/run-node/keyring#adding-keys-to-the-keyring). Also note that the tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is is a 18-digit-precision decimal number, and `denom` is the unique token identifier with its denomination key (e.g. `atom` or `uatom`). Here, we are granting `stake` tokens, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead. +Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](/docs/sdk/v0.47/user/run-node/keyring#adding-keys-to-the-keyring). Also note that the tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is an 18-digit-precision decimal number, and `denom` is the unique token identifier with its denomination key (e.g. `atom` or `uatom`). Here, we are granting `stake` tokens, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead. -Now that your account has some tokens, you need to add a validator to your chain.
Validators are special full-nodes that participate in the consensus process (implemented in the [underlying consensus engine](/v0.47/learn/intro/sdk-app-architecture#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, you will add your local node (created via the `init` command above) as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`: +Now that your account has some tokens, you need to add a validator to your chain. Validators are special full-nodes that participate in the consensus process (implemented in the [underlying consensus engine](/docs/sdk/v0.47/learn/intro/sdk-app-architecture#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, you will add your local node (created via the `init` command above) as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`: -``` -# Create a gentx.simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test# Add the gentx to the genesis file.simd genesis collect-gentxs +```bash +# Create a gentx. +simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test + +# Add the gentx to the genesis file. 
+simd genesis collect-gentxs ``` A `gentx` does three things: @@ -78,38 +104,49 @@ A `gentx` does three things: For more information on `gentx`, use the following command: -``` +```bash simd genesis gentx --help ``` -## Configuring the Node Using `app.toml` and `config.toml`[​](#configuring-the-node-using-apptoml-and-configtoml "Direct link to configuring-the-node-using-apptoml-and-configtoml") +## Configuring the Node Using `app.toml` and `config.toml` The Cosmos SDK automatically generates two configuration files inside `~/.simapp/config`: -* `config.toml`: used to configure the CometBFT, learn more on [CometBFT's documentation](https://docs.cometbft.com/v0.37/core/configuration), -* `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST servers configuration, state sync... +- `config.toml`: used to configure the CometBFT, learn more on [CometBFT's documentation](https://docs.cometbft.com/v0.37/core/configuration), +- `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST servers configuration, state sync... Both files are heavily commented, please refer to them directly to tweak your node. One example config to tweak is the `minimum-gas-prices` field inside `app.toml`, which defines the minimum gas prices the validator node is willing to accept for processing a transaction. Depending on the chain, it might be an empty string or not. If it's empty, make sure to edit the field with some value, for example `10token`, or else the node will halt on startup. For the purpose of this tutorial, let's set the minimum gas price to 0: +```toml + # The minimum gas prices a validator is willing to accept for processing a + # transaction. A transaction's fees must meet the minimum of any denomination + # specified in this config (e.g. 0.25token1;0.0001token2). 
+ minimum-gas-prices = "0stake" ``` - # The minimum gas prices a validator is willing to accept for processing a # transaction. A transaction's fees must meet the minimum of any denomination # specified in this config (e.g. 0.25token1;0.0001token2). minimum-gas-prices = "0stake" -``` - - When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`. + +When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`. + +```toml +[mempool] +# Setting max-txs to 0 will allow for an unbounded amount of transactions in the mempool. +# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool. +# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool by the specified amount. +# +# Note, this configuration only applies to SDK built-in app-side mempool +# implementations. +max-txs = "-1" +``` - ``` - [mempool]# Setting max-txs to 0 will allow for a unbounded amount of transactions in the mempool.# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool.# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount.## Note, this configuration only applies to SDK built-in app-side mempool# implementations.max-txs = "-1" - ``` - -## Run a Localnet[​](#run-a-localnet "Direct link to Run a Localnet") +## Run a Localnet Now that everything is set up, you can finally start your node: -``` +```bash simd start ``` @@ -119,7 +156,7 @@ The previous command allow you to run a single node. This is enough for the next The naive way would be to run the same commands again in separate terminal windows. This is possible, however in the Cosmos SDK, we leverage the power of [Docker Compose](https://docs.docker.com/compose/) to run a localnet.
If you need inspiration on how to set up your own localnet with Docker Compose, you can have a look at the Cosmos SDK's [`docker-compose.yml`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/docker-compose.yml). -## Logging[​](#logging "Direct link to Logging") +## Logging Logging provides a way to see what is going on with a node. By default the info level is set. This is a global level and all info logs will be output to the terminal. If you would like to filter specific logs to the terminal instead of all, then setting `module:log_level` is how this can work. @@ -127,37 +164,48 @@ Example: In config.toml: -``` +```toml log_level: "state:info,p2p:info,consensus:info,x/staking:info,x/ibc:info,*error" ``` -## State Sync[​](#state-sync "Direct link to State Sync") +## State Sync State sync is the act in which a node syncs the latest or close to the latest state of a blockchain. This is useful for users who don't want to sync all the blocks in history. Read more in [CometBFT documentation](https://docs.cometbft.com/v0.37/core/state-sync). State sync works thanks to snapshots. Read how the SDK handles snapshots [here](https://github.com/cosmos/cosmos-sdk/blob/825245d/store/snapshots/README.md). -### Local State Sync[​](#local-state-sync "Direct link to Local State Sync") +### Local State Sync Local state sync works similarly to normal state sync, except that it works off a local snapshot of state instead of one provided via the p2p network. The steps to start local state sync are similar to normal state sync with a few different designs. -1. As mentioned in [https://docs.cometbft.com/v0.37/core/state-sync](https://docs.cometbft.com/v0.37/core/state-sync), one must set a height and hash in the config.toml along with a few rpc servers (the afromentioned link has instructions on how to do this). -2. Run ` ` to restore a local snapshot (note: first load it from a file with the *load* command). +1.
As mentioned in [Link](https://docs.cometbft.com/v0.37/core/state-sync), one must set a height and hash in the config.toml along with a few rpc servers (the aforementioned link has instructions on how to do this). +2. Run ` ` to restore a local snapshot (note: first load it from a file with the _load_ command). 3. Bootstrap Comet state in order to start the node after the snapshot has been ingested. This can be done with the bootstrap command ` comet bootstrap-state` -### Snapshots Commands[​](#snapshots-commands "Direct link to Snapshots Commands") +### Snapshots Commands -The Cosmos SDK provides commands for managing snapshots. These commands can be added in an app with the following snippet in `cmd//root.go`: +The Cosmos SDK provides commands for managing snapshots. +These commands can be added in an app with the following snippet in `cmd//root.go`: -``` -import ( "github.com/cosmos/cosmos-sdk/client/snapshot")func initRootCmd(/* ... */) { // ... rootCmd.AddCommand( snapshot.Cmd(appCreator), )} +```go +import ( + "github.com/cosmos/cosmos-sdk/client/snapshot" +) + +func initRootCmd(/* ... */) { + // ...
+ rootCmd.AddCommand( + snapshot.Cmd(appCreator), + ) +} ``` The following commands are available at ` snapshots [command]`: -* **list**: list local snapshots -* **load**: Load a snapshot archive file into snapshot store -* **restore**: Restore app state from local snapshot -* **export**: Export app state to snapshot store -* **dump**: Dump the snapshot as portable archive format -* **delete**: Delete a local snapshot +- **list**: list local snapshots +- **load**: Load a snapshot archive file into snapshot store +- **restore**: Restore app state from local snapshot +- **export**: Export app state to snapshot store +- **dump**: Dump the snapshot as portable archive format +- **delete**: Delete a local snapshot diff --git a/docs/sdk/v0.47/user/run-node/run-production.mdx b/docs/sdk/v0.47/user/run-node/run-production.mdx index 5616c118..5551b06d 100644 --- a/docs/sdk/v0.47/user/run-node/run-production.mdx +++ b/docs/sdk/v0.47/user/run-node/run-production.mdx @@ -1,53 +1,56 @@ --- -title: "Running in Production" -description: "Version: v0.47" +title: Running in Production --- - - This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains. - +## Synopsis + +This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains. When operating a node, full node or validator, in production it is important to set your server up securely. - There are many different ways to secure a server and your node, the described steps here is one way. To see another way of setting up a server see the [run in production tutorial](https://tutorials.cosmos.network/hands-on-exercise/5-run-in-prod/1-overview.html). + There are many different ways to secure a server and your node; the + steps described here are one way.
To see another way of setting up a server see the [run + in production + tutorial](https://tutorials.cosmos.network/hands-on-exercise/5-run-in-prod/1-overview.html). - - This walkthrough assumes the underlying operating system is Ubuntu. - +This walkthrough assumes the underlying operating system is Ubuntu. -## Sever Setup[​](#sever-setup "Direct link to Sever Setup") +## Server Setup -### User[​](#user "Direct link to User") +### User When creating a server, most times it is created as the user `root`. This user has heightened privileges on the server. When operating a node, it is recommended to not run your node as the root user. 1. Create a new user -``` +```bash sudo adduser change_me ``` 2. We want to allow this user to perform sudo tasks -``` +```bash sudo usermod -aG sudo change_me ``` Now when logging into the server, the non `root` user can be used. -### Go[​](#go "Direct link to Go") +### Go 1. Install the [Go](https://go.dev/doc/install) version recommended by the application. - In the past, validators [have had issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using different versions of Go. It is recommended that the whole validator set uses the version of Go that is preconized by the application. + In the past, validators [have had + issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using + different versions of Go. It is recommended that the whole validator set uses + the version of Go that is recommended by the application. -### Firewall[​](#firewall "Direct link to Firewall") +### Firewall -Nodes should not have all ports open to the public, this is a simple way to get DDOS'd. Secondly it is recommended by [CometBFT](/v0.47/user/run-node/github.com/cometbft/cometbft) to never expose ports that are not required to operate a node. +Nodes should not have all ports open to the public; this is a simple way to get DDOS'd.
Secondly, it is recommended by [CometBFT](https://github.com/cometbft/cometbft) to never expose ports that are not required to operate a node. When setting up a firewall, there are a few ports that can be open when operating a Cosmos SDK node. There is the CometBFT json-RPC, prometheus, p2p, remote signer and Cosmos SDK GRPC and REST. If the node is being operated as a node that does not offer endpoints to be used for submission or querying then a max of three endpoints are needed. @@ -55,19 +58,20 @@ Most, if not all servers come equipped with [ufw](https://help.ubuntu.com/commun 1. Reset UFW to disallow all incoming connections and allow outgoing -``` -sudo ufw default deny incomingsudo ufw default allow outgoing +```bash +sudo ufw default deny incoming +sudo ufw default allow outgoing ``` 2. Let's make sure that port 22 (ssh) stays open. -``` +```bash sudo ufw allow ssh ``` or -``` +```bash sudo ufw allow 22 ``` @@ -75,81 +79,81 @@ Both of the above commands are the same. 3. Allow Port 26656 (cometbft p2p port). If the node has a modified p2p port then that port must be used here. -``` +```bash sudo ufw allow 26656/tcp ``` 4. Allow port 26660 (cometbft [prometheus](https://prometheus.io)). This acts as the application's monitoring port as well. -``` +```bash sudo ufw allow 26660/tcp ``` 5. If the node being set up should expose CometBFT's jsonRPC and the Cosmos SDK GRPC and REST endpoints, then follow this step. (Optional) -##### CometBFT JsonRPC[​](#cometbft-jsonrpc "Direct link to CometBFT JsonRPC") +##### CometBFT JsonRPC -``` +```bash sudo ufw allow 26657/tcp ``` -##### Cosmos SDK GRPC[​](#cosmos-sdk-grpc "Direct link to Cosmos SDK GRPC") +##### Cosmos SDK GRPC -``` +```bash sudo ufw allow 9090/tcp ``` -##### Cosmos SDK REST[​](#cosmos-sdk-rest "Direct link to Cosmos SDK REST") +##### Cosmos SDK REST -``` +```bash sudo ufw allow 1317/tcp ``` 6.
Lastly, enable ufw -``` +```bash sudo ufw enable ``` -### Signing[​](#signing "Direct link to Signing") +### Signing If the node that is being started is a validator, there are multiple ways a validator could sign blocks. -#### File[​](#file "Direct link to File") +#### File File based signing is the simplest and default approach. This approach works by storing the consensus key, generated on initialization, to sign blocks. This approach is only as safe as your server setup: if the server is compromised, so is your key. This key is located at `config/priv_val_key.json`, generated on initialization. A second file exists that the user must be aware of; it is located in the data directory at `data/priv_val_state.json`. This file protects your node from double signing. It keeps track of the consensus key's last sign height, round and latest signature. If the node crashes and needs to be recovered, this file must be kept in order to ensure that the consensus key will not be used for signing a block that was previously signed. -#### Remote Signer[​](#remote-signer "Direct link to Remote Signer") +#### Remote Signer A remote signer is a secondary server that is separate from the running node that signs blocks with the consensus key. This means that the consensus key does not live on the node itself. This increases security because your full node which is connected to the remote signer can be swapped without missing blocks. The two most used remote signers are [tmkms](https://github.com/iqlusioninc/tmkms) from [Iqlusion](https://www.iqlusion.io) and [horcrux](https://github.com/strangelove-ventures/horcrux) from [Strangelove](https://strange.love). -##### TMKMS[​](#tmkms "Direct link to TMKMS") +##### TMKMS -###### Dependencies[​](#dependencies "Direct link to Dependencies") +###### Dependencies 1. Update server dependencies and install extras needed. -``` +```sh sudo apt update -y && sudo apt install build-essential curl jq -y ``` 2.
Install Rust: -``` +```sh curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` 3. Install Libusb: -``` +```sh sudo apt install libusb-1.0-0-dev ``` -###### Setup[​](#setup "Direct link to Setup") +###### Setup There are two ways to install tmkms, from source or `cargo install`. In the examples we will cover downloading or building from source and using softsign. Softsign stands for software signing, but you could use a [yubihsm](https://www.yubico.com/products/hardware-security-module/) as your signing key if you wish. @@ -157,16 +161,23 @@ There are two ways to install tmkms, from source or `cargo install`. In the exam From source: -``` -cd $HOMEgit clone https://github.com/iqlusioninc/tmkms.gitcd $HOME/tmkmscargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./config/secrets/secret_connection_key +```bash +cd $HOME +git clone https://github.com/iqlusioninc/tmkms.git +cd $HOME/tmkms +cargo install tmkms --features=softsign +tmkms init config +tmkms softsign keygen ./config/secrets/secret_connection_key ``` or Cargo install: -``` -cargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./config/secrets/secret_connection_key +```bash +cargo install tmkms --features=softsign +tmkms init config +tmkms softsign keygen ./config/secrets/secret_connection_key ``` @@ -175,13 +186,13 @@ cargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./ 2. Migrate the validator key from the full node to the new tmkms instance. -``` +```bash scp user@123.456.32.123:~/.simd/config/priv_validator_key.json ~/tmkms/config/secrets ``` 3. Import the validator key into tmkms. -``` +```bash tmkms softsign import $HOME/tmkms/config/secrets/priv_validator_key.json $HOME/tmkms/config/secrets/priv_validator_key ``` @@ -189,40 +200,73 @@ At this point, it is necessary to delete the `priv_validator_key.json` from the 4. Modify the `tmkms.toml`.
-``` +```bash vim $HOME/tmkms/config/tmkms.toml ``` -This example shows a configuration that could be used for soft signing. The example has an IP of `123.456.12.345` with a port of `26659` a chain\_id of `test-chain-waSDSe`. These are items that most be modified for the usecase of tmkms and the network. +This example shows a configuration that could be used for soft signing. The example has an IP of `123.456.12.345`, a port of `26659`, and a chain_id of `test-chain-waSDSe`. These are items that must be modified for the use case of tmkms and the network. -``` -# CometBFT KMS configuration file## Chain Configuration[[chain]]id = "osmosis-1"key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" }state_file = "/root/tmkms/config/state/priv_validator_state.json"## Signing Provider Configuration### Software-based Signer Configuration[[providers.softsign]]chain_ids = ["test-chain-waSDSe"]key_type = "consensus"path = "/root/tmkms/config/secrets/priv_validator_key"## Validator Configuration[[validator]]chain_id = "test-chain-waSDSe"addr = "tcp://123.456.12.345:26659"secret_key = "/root/tmkms/config/secrets/secret_connection_key"protocol_version = "v0.34"reconnect = true +```toml expandable +# CometBFT KMS configuration file + +## Chain Configuration + +[[chain]] +id = "osmosis-1" +key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" } +state_file = "/root/tmkms/config/state/priv_validator_state.json" + +## Signing Provider Configuration + +### Software-based Signer Configuration + +[[providers.softsign]] +chain_ids = ["test-chain-waSDSe"] +key_type = "consensus" +path = "/root/tmkms/config/secrets/priv_validator_key" + +## Validator Configuration + +[[validator]] +chain_id = "test-chain-waSDSe" +addr = "tcp://123.456.12.345:26659" +secret_key = "/root/tmkms/config/secrets/secret_connection_key" +protocol_version = "v0.34" +reconnect = true ``` 5.
Set the address of the tmkms instance. -``` -vim $HOME/.simd/config/config.tomlpriv_validator_laddr = "tcp://127.0.0.1:26659" +```bash +vim $HOME/.simd/config/config.toml + +priv_validator_laddr = "tcp://127.0.0.1:26659" ``` - - The above address it set to `127.0.0.1` but it is recommended to set the tmkms server to secure the startup - + + The above address is set to `127.0.0.1`, but it is recommended to set the tmkms + server to secure the startup + - - It is recommended to comment or delete the lines that specify the path of the validator key and validator: + +It is recommended to comment or delete the lines that specify the path of the validator key and validator: - ``` - # Path to the JSON file containing the private key to use as a validator in the consensus protocol# priv_validator_key_file = "config/priv_validator_key.json"# Path to the JSON file containing the last sign state of a validator# priv_validator_state_file = "data/priv_validator_state.json" - ``` - +```toml +# Path to the JSON file containing the private key to use as a validator in the consensus protocol +# priv_validator_key_file = "config/priv_validator_key.json" + +# Path to the JSON file containing the last sign state of a validator +# priv_validator_state_file = "data/priv_validator_state.json" +``` + + 6. Start the two processes. -``` +```bash tmkms start -c $HOME/tmkms/config/tmkms.toml ``` -``` +```bash simd start ``` diff --git a/docs/sdk/v0.47/user/run-node/run-testnet.mdx b/docs/sdk/v0.47/user/run-node/run-testnet.mdx index db2fcd91..0cfa52c7 100644 --- a/docs/sdk/v0.47/user/run-node/run-testnet.mdx +++ b/docs/sdk/v0.47/user/run-node/run-testnet.mdx @@ -1,15 +1,14 @@ --- -title: "Running a Testnet" -description: "Version: v0.47" +title: Running a Testnet --- - - The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes.
- +## Synopsis -In addition to the commands for [running a node](/v0.47/user/run-node/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. +The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes. -## Initialize Files[​](#initialize-files "Direct link to Initialize Files") +In addition to the commands for [running a node](/docs/sdk/v0.47/user/run-node/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. + +## Initialize Files First, let's take a look at the `init-files` subcommand. @@ -19,27 +18,27 @@ The `init-files` subcommand initializes the necessary files to run a test networ In order to initialize the files for a test network, run the following command: -``` +```bash simd testnet init-files ``` You should see the following output in your terminal: -``` +```bash Successfully initialized 4 node directories ``` The default output directory is a relative `.testnets` directory. Let's take a look at the files created within the `.testnets` directory. -### gentxs[​](#gentxs "Direct link to gentxs") +### gentxs The `gentxs` directory includes a genesis transaction for each validator node. Each file includes a JSON encoded genesis transaction used to register a validator node at the time of genesis. The genesis transactions are added to the `genesis.json` file within each node directory during the initilization process. -### nodes[​](#nodes "Direct link to nodes") +### nodes A node directory is created for each validator node. Within each node directory is a `simd` directory. The `simd` directory is the home directory for each node, which includes the configuration and data files for that node (i.e. 
the same files included in the default `~/.simapp` directory when running a single node). -## Start Testnet[​](#start-testnet "Direct link to Start Testnet") +## Start Testnet Now, let's take a look at the `start` subcommand. @@ -47,38 +46,52 @@ The `start` subcommand both initializes and starts an in-process test network. T You can start the local test network by running the following command: -``` +```bash simd testnet start ``` You should see something similar to the following: -``` -acquiring test network lockpreparing test network with chain-id "chain-mtoD9v"+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++++ DO NOT USE IN PRODUCTION ++++ ++++ sustain know debris minute gate hybrid stereo custom ++++ divorce cross spoon machine latin vibrant term oblige ++++ moment beauty laundry repeat grab game bronze truly +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++starting test network...started test networkpress the Enter Key to terminate +```bash expandable +acquiring test network lock +preparing test network with chain-id "chain-mtoD9v" + ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++ +++ DO NOT USE IN PRODUCTION ++ +++ ++ +++ sustain know debris minute gate hybrid stereo custom ++ +++ divorce cross spoon machine latin vibrant term oblige ++ +++ moment beauty laundry repeat grab game bronze truly ++ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +starting test network... +started test network +press the Enter Key to terminate ``` The first validator node is now running in-process, which means the test network will terminate once you either close the terminal window or you press the Enter key. In the output, the mnemonic phrase for the first validator node is provided for testing purposes. 
The validator node is using the same default addresses being used when initializing and starting a single node (no need to provide a `--node` flag). Check the status of the first validator node: -``` +```shell simd status ``` Import the key from the provided mnemonic: -``` +```shell simd keys add test --recover --keyring-backend test ``` Check the balance of the account address: -``` +```shell simd q bank balances [address] ``` Use this test account to manually test against the test network. -## Testnet Options[​](#testnet-options "Direct link to Testnet Options") +## Testnet Options You can customize the configuration of the test network with flags. In order to see all flag options, append the `--help` flag to each command. diff --git a/docs/sdk/v0.47/user/run-node/txs.mdx b/docs/sdk/v0.47/user/run-node/txs.mdx index c98a80ed..7cf9d1bb 100644 --- a/docs/sdk/v0.47/user/run-node/txs.mdx +++ b/docs/sdk/v0.47/user/run-node/txs.mdx @@ -1,45 +1,44 @@ --- title: "Generating, Signing and Broadcasting Transactions" -description: "Version: v0.47" --- - - This document describes how to generate an (unsigned) transaction, signing it (with one or multiple keys), and broadcasting it to the network. - +## Synopsis -## Using the CLI[​](#using-the-cli "Direct link to Using the CLI") +This document describes how to generate an (unsigned) transaction, sign it (with one or multiple keys), and broadcast it to the network. -The easiest way to send transactions is using the CLI, as we have seen in the previous page when [interacting with a node](/v0.47/user/run-node/interact-node#using-the-cli). For example, running the following command +## Using the CLI -``` +The easiest way to send transactions is using the CLI, as we have seen in the previous page when [interacting with a node](/docs/sdk/v0.47/user/run-node/interact-node#using-the-cli).
For example, running the following command + +```bash simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --keyring-backend test ``` will run the following steps: -* generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console. -* ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account. -* fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because we have [set up the CLI's keyring](/v0.47/user/run-node/keyring) in a previous step. -* sign the generated transaction with the keyring's account. -* broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint. +- generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console. +- ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account. +- fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because we have [set up the CLI's keyring](/docs/sdk/v0.47/user/run-node/keyring) in a previous step. +- sign the generated transaction with the keyring's account. +- broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint. The CLI bundles all the necessary steps into a simple-to-use user experience. However, it's possible to run all the steps individually too. -### Generating a Transaction[​](#generating-a-transaction "Direct link to Generating a Transaction") +### Generating a Transaction Generating a transaction can simply be done by appending the `--generate-only` flag on any `tx` command, e.g.: -``` +```bash simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --generate-only ``` This will output the unsigned transaction as JSON in the console. 
We can also save the unsigned transaction to a file (to be passed around between signers more easily) by appending `> unsigned_tx.json` to the above command. -### Signing a Transaction[​](#signing-a-transaction "Direct link to Signing a Transaction") +### Signing a Transaction Signing a transaction using the CLI requires the unsigned transaction to be saved in a file. Let's assume the unsigned transaction is in a file called `unsigned_tx.json` in the current directory (see previous paragraph on how to do that). Then, simply run the following command: -``` +```bash simd tx sign unsigned_tx.json --chain-id my-test-chain --keyring-backend test --from $MY_VALIDATOR_ADDRESS ``` @@ -47,153 +46,457 @@ This command will decode the unsigned transaction and sign it with `SIGN_MODE_DI Some useful flags to consider in the `tx sign` command: -* `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`, -* `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for offline signing, i.e. signing in a secure environment which doesn't have access to the internet. +- `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`, +- `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for offline signing, i.e. signing in a secure environment which doesn't have access to the internet. 
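As a concrete sketch of the offline flow described above (the account number and sequence values below are placeholders; on a connected machine they can be read with `simd q auth account [address]` before going offline):

```bash
# Placeholder values: fetch the real account number and sequence beforehand,
# e.g. with `simd q auth account $MY_VALIDATOR_ADDRESS` on a connected machine
simd tx sign unsigned_tx.json \
  --chain-id my-test-chain \
  --keyring-backend test \
  --from $MY_VALIDATOR_ADDRESS \
  --offline \
  --account-number 42 \
  --sequence 7 > signed_tx.json
```

The resulting `signed_tx.json` can then be carried back to a connected machine and broadcast.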
-#### Signing with Multiple Signers[​](#signing-with-multiple-signers "Direct link to Signing with Multiple Signers") +#### Signing with Multiple Signers - Please note that signing a transaction with multiple signers or with a multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not yet possible. You may follow [this Github issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info. + Please note that signing a transaction with multiple signers or with a + multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not + yet possible. You may follow [this Github + issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info. Signing with multiple signers is done with the `tx multisign` command. This command assumes that all signers use `SIGN_MODE_LEGACY_AMINO_JSON`. The flow is similar to the `tx sign` command flow, but instead of signing an unsigned transaction file, each signer signs the file signed by previous signer(s). The `tx multisign` command will append signatures to the existing transactions. It is important that signers sign the transaction **in the same order** as given by the transaction, which is retrievable using the `GetSigners()` method. For example, starting with the `unsigned_tx.json`, and assuming the transaction has 4 signers, we would run: -``` -# Let signer1 sign the unsigned tx.simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json# Now signer1 will send the partial_tx_1.json to the signer2.# Signer2 appends their signature:simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json# Signer2 sends the partial_tx_2.json file to signer3, and signer3 can append his signature:simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json +```bash +# Let signer1 sign the unsigned tx. 
+simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json +# Now signer1 will send the partial_tx_1.json to signer2. +# Signer2 appends their signature: +simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json +# Signer2 sends the partial_tx_2.json file to signer3, and signer3 can append their signature: +simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json ``` -### Broadcasting a Transaction[​](#broadcasting-a-transaction "Direct link to Broadcasting a Transaction") +### Broadcasting a Transaction Broadcasting a transaction is done using the following command: -``` +```bash simd tx broadcast tx_signed.json ``` You may optionally pass the `--broadcast-mode` flag to specify which response to receive from the node: -* `sync`: the CLI waits for a CheckTx execution response only. -* `async`: the CLI returns immediately (transaction might fail). +- `sync`: the CLI waits for a CheckTx execution response only. +- `async`: the CLI returns immediately (transaction might fail). -### Encoding a Transaction[​](#encoding-a-transaction "Direct link to Encoding a Transaction") +### Encoding a Transaction In order to broadcast a transaction using the gRPC or REST endpoints, the transaction will need to be encoded first. This can be done using the CLI. Encoding a transaction is done using the following command: -``` +```bash simd tx encode tx_signed.json ``` This will read the transaction from the file, serialize it using Protobuf, and output the transaction bytes as base64 in the console. -### Decoding a Transaction[​](#decoding-a-transaction "Direct link to Decoding a Transaction") +### Decoding a Transaction The CLI can also be used to decode transaction bytes.
Decoding a transaction is done using the following command: -``` +```bash simd tx decode [protobuf-byte-string] ``` This will decode the transaction bytes and output the transaction as JSON in the console. You can also save the transaction to a file by appending `> tx.json` to the above command. -## Programmatically with Go[​](#programmatically-with-go "Direct link to Programmatically with Go") +## Programmatically with Go It is possible to manipulate transactions programmatically via Go using the Cosmos SDK's `TxBuilder` interface. -### Generating a Transaction[​](#generating-a-transaction-1 "Direct link to Generating a Transaction") +### Generating a Transaction Before generating a transaction, a new instance of a `TxBuilder` needs to be created. Since the Cosmos SDK supports both Amino and Protobuf transactions, the first step would be to decide which encoding scheme to use. All the subsequent steps remain unchanged, whether you're using Amino or Protobuf, as `TxBuilder` abstracts the encoding mechanisms. In the following snippet, we will use Protobuf. -``` -import ( "github.com/cosmos/cosmos-sdk/simapp")func sendTx() error { // Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function. app := simapp.NewSimApp(...) // Create a new TxBuilder. txBuilder := app.TxConfig().NewTxBuilder() // --snip--} +```go expandable +import ( + + "github.com/cosmos/cosmos-sdk/simapp" +) + +func sendTx() + +error { + / Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function. + app := simapp.NewSimApp(...) + + / Create a new TxBuilder. + txBuilder := app.TxConfig().NewTxBuilder() + + / --snip-- +} ``` We can also set up some keys and addresses that will send and receive the transactions. Here, for the purpose of the tutorial, we will be using some dummy data to create keys. 
-``` -import ( "github.com/cosmos/cosmos-sdk/testutil/testdata")priv1, _, addr1 := testdata.KeyTestPubAddr()priv2, _, addr2 := testdata.KeyTestPubAddr()priv3, _, addr3 := testdata.KeyTestPubAddr() +```go +import ( + + "github.com/cosmos/cosmos-sdk/testutil/testdata" +) + +priv1, _, addr1 := testdata.KeyTestPubAddr() + +priv2, _, addr2 := testdata.KeyTestPubAddr() + +priv3, _, addr3 := testdata.KeyTestPubAddr() ``` Populating the `TxBuilder` can be done via its methods: -client/tx\_config.go +```go expandable +package client -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/tx_config.go#L33-L50) + txsigning "cosmossdk.io/x/tx/signing" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) -``` -import ( banktypes "github.com/cosmos/cosmos-sdk/x/bank/types")func sendTx() error { // --snip-- // Define two x/bank MsgSend messages: // - from addr1 to addr3, // - from addr2 to addr3. // This means that the transactions needs two signers: addr1 and addr2. msg1 := banktypes.NewMsgSend(addr1, addr3, types.NewCoins(types.NewInt64Coin("atom", 12))) msg2 := banktypes.NewMsgSend(addr2, addr3, types.NewCoins(types.NewInt64Coin("atom", 34))) err := txBuilder.SetMsgs(msg1, msg2) if err != nil { return err } txBuilder.SetGasLimit(...) txBuilder.SetFeeAmount(...) txBuilder.SetMemo(...) 
txBuilder.SetTimeoutHeight(...)} +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. + TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() *txsigning.HandlerMap + SigningContext() *txsigning.Context +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTip(tip *tx.Tip) + +SetTimeoutHeight(height uint64) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} + + / ExtendedTxBuilder extends the TxBuilder interface, + / which is used to set extension options to be included in a transaction. + ExtendedTxBuilder interface { + SetExtensionOptions(extOpts ...*codectypes.Any) +} +) ``` -At this point, `TxBuilder`'s underlying transaction is ready to be signed. 
+```go expandable +import ( + + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +func sendTx() -### Signing a Transaction[​](#signing-a-transaction-1 "Direct link to Signing a Transaction") +error { + / --snip-- -We set encoding config to use Protobuf, which will use `SIGN_MODE_DIRECT` by default. As per [ADR-020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md), each signer needs to sign the `SignerInfo`s of all other signers. This means that we need to perform two steps sequentially: + / Define two x/bank MsgSend messages: + / - from addr1 to addr3, + / - from addr2 to addr3. + / This means that the transactions needs two signers: addr1 and addr2. + msg1 := banktypes.NewMsgSend(addr1, addr3, types.NewCoins(types.NewInt64Coin("atom", 12))) -* for each signer, populate the signer's `SignerInfo` inside `TxBuilder`, -* once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed). +msg2 := banktypes.NewMsgSend(addr2, addr3, types.NewCoins(types.NewInt64Coin("atom", 34))) + err := txBuilder.SetMsgs(msg1, msg2) + if err != nil { + return err +} -In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`. The current API requires us to first perform a round of `SetSignatures()` *with empty signatures*, only to populate `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload. +txBuilder.SetGasLimit(...) +txBuilder.SetFeeAmount(...) + +txBuilder.SetMemo(...) + +txBuilder.SetTimeoutHeight(...) 
+} ``` -import ( cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" "github.com/cosmos/cosmos-sdk/types/tx/signing" xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing")func sendTx() error { // --snip-- privs := []cryptotypes.PrivKey{priv1, priv2} accNums:= []uint64{..., ...} // The accounts' account numbers accSeqs:= []uint64{..., ...} // The accounts' sequence numbers // First round: we gather all the signer infos. We use the "set empty // signature" hack to do that. var sigsV2 []signing.SignatureV2 for i, priv := range privs { sigV2 := signing.SignatureV2{ PubKey: priv.PubKey(), Data: &signing.SingleSignatureData{ SignMode: encCfg.TxConfig.SignModeHandler().DefaultMode(), Signature: nil, }, Sequence: accSeqs[i], } sigsV2 = append(sigsV2, sigV2) } err := txBuilder.SetSignatures(sigsV2...) if err != nil { return err } // Second round: all signer infos are set, so each signer can sign. sigsV2 = []signing.SignatureV2{} for i, priv := range privs { signerData := xauthsigning.SignerData{ ChainID: chainID, AccountNumber: accNums[i], Sequence: accSeqs[i], } sigV2, err := tx.SignWithPrivKey( encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData, txBuilder, priv, encCfg.TxConfig, accSeqs[i]) if err != nil { return nil, err } sigsV2 = append(sigsV2, sigV2) } err = txBuilder.SetSignatures(sigsV2...) if err != nil { return err }} + +At this point, `TxBuilder`'s underlying transaction is ready to be signed. + +### Signing a Transaction + +We set encoding config to use Protobuf, which will use `SIGN_MODE_DIRECT` by default. As per [ADR-020](docs/sdk/next/documentation/legacy/adr-comprehensive), each signer needs to sign the `SignerInfo`s of all other signers. This means that we need to perform two steps sequentially: + +- for each signer, populate the signer's `SignerInfo` inside `TxBuilder`, +- once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed). 
+ +In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`. The current API requires us to first perform a round of `SetSignatures()` _with empty signatures_, only to populate `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload. + +```go expandable +import ( + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +func sendTx() + +error { + / --snip-- + privs := []cryptotypes.PrivKey{ + priv1, priv2 +} + accNums:= []uint64{..., ... +} / The accounts' account numbers + accSeqs:= []uint64{..., ... +} / The accounts' sequence numbers + + / First round: we gather all the signer infos. We use the "set empty + / signature" hack to do that. + var sigsV2 []signing.SignatureV2 + for i, priv := range privs { + sigV2 := signing.SignatureV2{ + PubKey: priv.PubKey(), + Data: &signing.SingleSignatureData{ + SignMode: encCfg.TxConfig.SignModeHandler().DefaultMode(), + Signature: nil, +}, + Sequence: accSeqs[i], +} + +sigsV2 = append(sigsV2, sigV2) +} + err := txBuilder.SetSignatures(sigsV2...) + if err != nil { + return err +} + + / Second round: all signer infos are set, so each signer can sign. + sigsV2 = []signing.SignatureV2{ +} + for i, priv := range privs { + signerData := xauthsigning.SignerData{ + ChainID: chainID, + AccountNumber: accNums[i], + Sequence: accSeqs[i], +} + +sigV2, err := tx.SignWithPrivKey( + encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData, + txBuilder, priv, encCfg.TxConfig, accSeqs[i]) + if err != nil { + return nil, err +} + +sigsV2 = append(sigsV2, sigV2) +} + +err = txBuilder.SetSignatures(sigsV2...) + if err != nil { + return err +} +} ``` The `TxBuilder` is now correctly populated. 
To print it, you can use the `TxConfig` interface from the initial encoding config `encCfg`: -``` -func sendTx() error { // --snip-- // Generated Protobuf-encoded bytes. txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx()) if err != nil { return err } // Generate a JSON string. txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx()) if err != nil { return err } txJSON := string(txJSONBytes)} -``` +```go expandable +func sendTx() -### Broadcasting a Transaction[​](#broadcasting-a-transaction-1 "Direct link to Broadcasting a Transaction") +error { + / --snip-- -The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also posible. An overview of the differences between these methods is exposed [here](/v0.47/learn/advanced/grpc_rest). For this tutorial, we will only describe the gRPC method. + / Generated Protobuf-encoded bytes. + txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx()) + if err != nil { + return err +} + / Generate a JSON string. + txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx()) + if err != nil { + return err +} + txJSON := string(txJSONBytes) +} ``` -import ( "context" "fmt" "google.golang.org/grpc" "github.com/cosmos/cosmos-sdk/types/tx")func sendTx(ctx context.Context) error { // --snip-- // Create a connection to the gRPC server. grpcConn := grpc.Dial( "127.0.0.1:9090", // Or your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. ) defer grpcConn.Close() // Broadcast the tx via gRPC. We create a new client for the Protobuf Tx // service. txClient := tx.NewServiceClient(grpcConn) // We then call the BroadcastTx method on this client. grpcRes, err := txClient.BroadcastTx( ctx, &tx.BroadcastTxRequest{ Mode: tx.BroadcastMode_BROADCAST_MODE_SYNC, TxBytes: txBytes, // Proto-binary of the signed transaction, see previous step. 
}, ) if err != nil { return err } fmt.Println(grpcRes.TxResponse.Code) // Should be `0` if the tx is successful return nil} ``` + +### Broadcasting a Transaction + +The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also possible. An overview of the differences between these methods is provided [here](/docs/sdk/v0.47/learn/advanced/grpc_rest). For this tutorial, we will only describe the gRPC method. + +```go expandable +import ( + + "context" + "fmt" + "google.golang.org/grpc" + "github.com/cosmos/cosmos-sdk/types/tx" +) + +func sendTx(ctx context.Context) + +error { + / --snip-- + + / Create a connection to the gRPC server. + grpcConn := grpc.Dial( + "127.0.0.1:9090", / Or your gRPC server address. + grpc.WithInsecure(), / The Cosmos SDK doesn't support any transport security mechanism. + ) + +defer grpcConn.Close() + + / Broadcast the tx via gRPC. We create a new client for the Protobuf Tx + / service. + txClient := tx.NewServiceClient(grpcConn) + / We then call the BroadcastTx method on this client. + grpcRes, err := txClient.BroadcastTx( + ctx, + &tx.BroadcastTxRequest{ + Mode: tx.BroadcastMode_BROADCAST_MODE_SYNC, + TxBytes: txBytes, / Proto-binary of the signed transaction, see previous step. +}, + ) + if err != nil { + return err +} + +fmt.Println(grpcRes.TxResponse.Code) / Should be `0` if the tx is successful + + return nil +} ``` -#### Simulating a Transaction[​](#simulating-a-transaction "Direct link to Simulating a Transaction") +#### Simulating a Transaction Before broadcasting a transaction, we sometimes may want to dry-run the transaction, to estimate some information about the transaction without actually committing it.
This is called simulating a transaction, and can be done as follows: -``` -import ( "context" "fmt" "testing" "github.com/cosmos/cosmos-sdk/client" "github.com/cosmos/cosmos-sdk/types/tx" authtx "github.com/cosmos/cosmos-sdk/x/auth/tx")func simulateTx() error { // --snip-- // Simulate the tx via gRPC. We create a new client for the Protobuf Tx // service. txClient := tx.NewServiceClient(grpcConn) txBytes := /* Fill in with your signed transaction bytes. */ // We then call the Simulate method on this client. grpcRes, err := txClient.Simulate( context.Background(), &tx.SimulateRequest{ TxBytes: txBytes, }, ) if err != nil { return err } fmt.Println(grpcRes.GasInfo) // Prints estimated gas used. return nil} +```go expandable +import ( + + "context" + "fmt" + "testing" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/types/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" +) + +func simulateTx() + +error { + / --snip-- + + / Simulate the tx via gRPC. We create a new client for the Protobuf Tx + / service. + txClient := tx.NewServiceClient(grpcConn) + txBytes := /* Fill in with your signed transaction bytes. */ + + / We then call the Simulate method on this client. + grpcRes, err := txClient.Simulate( + context.Background(), + &tx.SimulateRequest{ + TxBytes: txBytes, +}, + ) + if err != nil { + return err +} + +fmt.Println(grpcRes.GasInfo) / Prints estimated gas used. + + return nil +} ``` -## Using gRPC[​](#using-grpc "Direct link to Using gRPC") +## Using gRPC It is not possible to generate or sign a transaction using gRPC, only to broadcast one. In order to broadcast a transaction using gRPC, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. 
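Both broadcast endpoints expect the protobuf-encoded signed transaction as a base64 string in the `tx_bytes` field, i.e. exactly what `simd tx encode` prints. A minimal shell sketch of assembling the JSON request body (the dummy bytes below stand in for real encoder output):

```shell
# Stand-in for the base64 printed by `simd tx encode tx_signed.json`
TX_B64=$(printf 'dummy-signed-tx-bytes' | base64)

# JSON body accepted by cosmos.tx.v1beta1.Service/BroadcastTx
BODY=$(printf '{"tx_bytes":"%s","mode":"BROADCAST_MODE_SYNC"}' "$TX_B64")
echo "$BODY"
```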
-### Broadcasting a Transaction[​](#broadcasting-a-transaction-2 "Direct link to Broadcasting a Transaction") +### Broadcasting a Transaction Broadcasting a transaction using the gRPC endpoint can be done by sending a `BroadcastTx` request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: -``` -grpcurl -plaintext \ -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ localhost:9090 \ cosmos.tx.v1beta1.Service/BroadcastTx +```bash +grpcurl -plaintext \ + -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/BroadcastTx ``` -## Using REST[​](#using-rest "Direct link to Using REST") +## Using REST It is not possible to generate or sign a transaction using REST, only to broadcast one. In order to broadcast a transaction using REST, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. -### Broadcasting a Transaction[​](#broadcasting-a-transaction-3 "Direct link to Broadcasting a Transaction") +### Broadcasting a Transaction Broadcasting a transaction using the REST endpoint (served by `gRPC-gateway`) can be done by sending a POST request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: -``` -curl -X POST \ -H "Content-Type: application/json" \ -d'{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ localhost:1317/cosmos/tx/v1beta1/txs +```bash +curl -X POST \ + -H "Content-Type: application/json" \ + -d'{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ + localhost:1317/cosmos/tx/v1beta1/txs ``` -## Using CosmJS (JavaScript & TypeScript)[​](#using-cosmjs-javascript--typescript "Direct link to Using CosmJS (JavaScript & TypeScript)") +## Using CosmJS (JavaScript & TypeScript) -CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. 
Please see [https://cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs) for more information. As of January 2021, CosmJS documentation is still work in progress. +CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. Please see [Link](https://cosmos.github.io/cosmjs) for more information. As of January 2021, CosmJS documentation is still a work in progress. diff --git a/docs/sdk/v0.47/user/user.mdx b/docs/sdk/v0.47/user/user.mdx new file mode 100644 index 00000000..39e36d0a --- /dev/null +++ b/docs/sdk/v0.47/user/user.mdx @@ -0,0 +1,13 @@ +--- +title: User Guides +description: >- + This section is designed for developers who are using the Cosmos SDK to build + applications. It provides essential guides and references to effectively use + the SDK's features. +--- + +This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features. + +* [Setting up keys](/docs/sdk/v0.47/user/run-node/keyring) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application. +* [Running a node](/docs/sdk/v0.47/user/run-node/run-node) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps. +* [CLI](/docs/sdk/v0.47/user/run-node/interact-node) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively.
diff --git a/docs/sdk/v0.47/validate/run-testnet.mdx b/docs/sdk/v0.47/validate/run-testnet.mdx new file mode 100644 index 00000000..0cfa52c7 --- /dev/null +++ b/docs/sdk/v0.47/validate/run-testnet.mdx @@ -0,0 +1,97 @@ +--- +title: Running a Testnet +--- + +## Synopsis + +The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes. + +In addition to the commands for [running a node](/docs/sdk/v0.47/user/run-node/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. + +## Initialize Files + +First, let's take a look at the `init-files` subcommand. + +This is similar to the `init` command when initializing a single node, but in this case we are initializing multiple nodes, generating the genesis transactions for each node, and then collecting those transactions. + +The `init-files` subcommand initializes the necessary files to run a test network in a separate process (i.e. using a Docker container). Running this command is not a prerequisite for the `start` subcommand ([see below](#start-testnet)). + +In order to initialize the files for a test network, run the following command: + +```bash +simd testnet init-files +``` + +You should see the following output in your terminal: + +```bash +Successfully initialized 4 node directories +``` + +The default output directory is a relative `.testnets` directory. Let's take a look at the files created within the `.testnets` directory. + +### gentxs + +The `gentxs` directory includes a genesis transaction for each validator node. Each file includes a JSON encoded genesis transaction used to register a validator node at the time of genesis. The genesis transactions are added to the `genesis.json` file within each node directory during the initialization process. 
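As an illustration (not part of the command's output; the gentx filenames shown here are assumptions and vary with the node monikers), the `.testnets` directory produced by the default four-validator run looks roughly like:

```text
.testnets/
├── gentxs/            # one genesis transaction file per validator
│   ├── node0.json
│   ├── node1.json
│   ├── node2.json
│   └── node3.json
├── node0/
│   └── simd/          # node home directory (config/ and data/, as in ~/.simapp)
├── node1/
│   └── simd/
├── node2/
│   └── simd/
└── node3/
    └── simd/
```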
+ +### nodes + +A node directory is created for each validator node. Within each node directory is a `simd` directory. The `simd` directory is the home directory for each node, which includes the configuration and data files for that node (i.e. the same files included in the default `~/.simapp` directory when running a single node). + +## Start Testnet + +Now, let's take a look at the `start` subcommand. + +The `start` subcommand both initializes and starts an in-process test network. This is the fastest way to spin up a local test network for testing purposes. + +You can start the local test network by running the following command: + +```bash +simd testnet start +``` + +You should see something similar to the following: + +```bash expandable +acquiring test network lock +preparing test network with chain-id "chain-mtoD9v" + ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++ +++ DO NOT USE IN PRODUCTION ++ +++ ++ +++ sustain know debris minute gate hybrid stereo custom ++ +++ divorce cross spoon machine latin vibrant term oblige ++ +++ moment beauty laundry repeat grab game bronze truly ++ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +starting test network... +started test network +press the Enter Key to terminate +``` + +The first validator node is now running in-process, which means the test network will terminate once you either close the terminal window or you press the Enter key. In the output, the mnemonic phrase for the first validator node is provided for testing purposes. The validator node is using the same default addresses being used when initializing and starting a single node (no need to provide a `--node` flag). 
+ +Check the status of the first validator node: + +```shell +simd status +``` + +Import the key from the provided mnemonic: + +```shell +simd keys add test --recover --keyring-backend test +``` + +Check the balance of the account address: + +```shell +simd q bank balances [address] +``` + +Use this test account to manually test against the test network. + +## Testnet Options + +You can customize the configuration of the test network with flags. In order to see all flag options, append the `--help` flag to each command. diff --git a/docs/sdk/v0.50/build.mdx b/docs/sdk/v0.50/build.mdx deleted file mode 100644 index 7b1cb072..00000000 --- a/docs/sdk/v0.50/build.mdx +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: "Build" -description: "Version: v0.50" ---- - -* [Building Apps](/v0.50/build/building-apps/app-go) - The documentation in this section will guide you through the process of developing your dApp using the Cosmos SDK framework. -* [Modules](/v0.50/build/modules) - Information about the various modules available in the Cosmos SDK: Auth, Authz, Bank, Crisis, Distribution, Evidence, Feegrant, Governance, Mint, Params, Slashing, Staking, Upgrade, NFT, Consensus, Circuit, Genutil. -* [Migrations](/v0.50/build/migrations/intro) - See what has been updated in each release the process of the transition between versions. -* [Packages](/v0.50/build/packages) - Explore a curated collection of pre-built modules and functionalities, streamlining the development process. -* [Tooling](/v0.50/build/tooling) - A suite of utilities designed to enhance the development workflow, optimizing the efficiency of Cosmos SDK-based projects. -* [ADR's](/v0.50/build/architecture) - Provides a structured repository of key decisions made during the development process, which have been documented and offers rationale behind key decisions being made. -* [REST API](https://docs.cosmos.network/api) - A comprehensive reference for the application programming interfaces (APIs) provided by the SDK. 
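For scripted checks against a local test network, the `BroadcastTx` REST call from the transaction docs can be assembled programmatically. A minimal Python sketch (not from these docs; it assumes the default gRPC-gateway port 1317 and relies on the gRPC-gateway convention of encoding protobuf `bytes` fields as base64 in JSON):

```python
import base64
import json


def broadcast_body(tx_bytes: bytes, mode: str = "BROADCAST_MODE_SYNC") -> str:
    """Build the JSON body for POST /cosmos/tx/v1beta1/txs.

    The REST endpoint expects the protobuf-encoded, signed transaction
    as a base64 string in the "tx_bytes" field.
    """
    return json.dumps({
        "tx_bytes": base64.b64encode(tx_bytes).decode("ascii"),
        "mode": mode,
    })


# Placeholder bytes only -- not a valid signed transaction.
body = broadcast_body(b"\x0a\x00")
print(body)
```

The resulting body would then be POSTed to `http://localhost:1317/cosmos/tx/v1beta1/txs` (e.g. with `urllib.request` or `curl`) against a running testnet node.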
diff --git a/docs/sdk/v0.50/changelog/UPGRADING.md b/docs/sdk/v0.50/changelog/UPGRADING.md new file mode 100644 index 00000000..625ae3d4 --- /dev/null +++ b/docs/sdk/v0.50/changelog/UPGRADING.md @@ -0,0 +1,115 @@ +# Upgrading to Cosmos SDK v0.50 + +This document outlines the changes required when upgrading to Cosmos SDK v0.50. + +## Major Changes + +### AutoCLI Integration + +v0.50 introduces AutoCLI as a core feature for automatic CLI generation. + +### New Modules + +- **Circuit Breaker**: Emergency circuit breaking functionality +- **Epochs**: Time-based epoch management +- **NFT**: Native NFT support +- **Protocol Pool**: Community pool management + +### Module Updates + +- **Gov**: Enhanced governance with new proposal types +- **Staking**: Improved validator management +- **Auth**: Enhanced account management +- **Bank**: Multi-asset improvements + +## Breaking Changes + +### API Changes + +- Several keeper interfaces have been updated +- New dependency injection patterns +- Enhanced ABCI integration + +### Configuration Changes + +- New app configuration structure +- Enhanced client configuration +- Updated genesis format + +## Migration Steps + +### 1. Update Dependencies + +```bash +go get github.com/cosmos/cosmos-sdk@v0.50.x +go mod tidy +``` + +### 2. Update Imports + +Update any changed package paths in your application code. + +### 3. Update App Configuration + +- Review and update your app.go file +- Update keeper constructors with new parameters +- Implement new module interfaces if using custom modules + +### 4. Update CLI Commands + +- Leverage AutoCLI for automatic command generation +- Update custom CLI commands if needed + +### 5. Update Tests + +- Update test utilities usage +- Review simulation test configurations +- Update integration test patterns + +## New Features + +### AutoCLI + +AutoCLI automatically generates CLI commands based on your module's query and transaction services. 
+ +```go +// Enable AutoCLI in your module +func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: bankv1beta1.Query_ServiceDesc.ServiceName, + }, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: bankv1beta1.Msg_ServiceDesc.ServiceName, + }, + } +} +``` + +### Dependency Injection + +Enhanced dependency injection patterns for cleaner module architecture. + +### Enhanced ABCI + +Improved ABCI integration with better error handling and performance. + +## Deprecated Features + +- Legacy amino encoding (migrate to protobuf) +- Old REST endpoints (use gRPC or gRPC-gateway) +- Legacy CLI patterns (migrate to AutoCLI) + +## Testing Changes + +- Enhanced simulation framework +- New integration test utilities +- Improved mock generation + +## Performance Improvements + +- Optimized state machine execution +- Better memory management +- Enhanced caching mechanisms + +For detailed migration examples and additional information, see the [Cosmos SDK v0.50 release notes](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.50.0). \ No newline at end of file diff --git a/docs/sdk/v0.50/changelog/release-notes.mdx b/docs/sdk/v0.50/changelog/release-notes.mdx new file mode 100644 index 00000000..3a96aa24 --- /dev/null +++ b/docs/sdk/v0.50/changelog/release-notes.mdx @@ -0,0 +1,591 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + + This page tracks all releases and changes from the + [cosmos/cosmos-sdk](https://github.com/cosmos/cosmos-sdk) repository. For the latest + development updates, see the + [UNRELEASED](https://github.com/cosmos/cosmos-sdk/blob/main/CHANGELOG.md#unreleased) + section. + + + + + ### Bug Fixes * + [GHSA-p22h-3m2v-cmgh](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-p22h-3m2v-cmgh) + Fix x/distribution can halt when historical rewards overflow. 
+ + + + ### Bug Fixes * + [GHSA-47ww-ff84-4jrg](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-47ww-ff84-4jrg) + Fix x/group can halt when erroring in EndBlocker + + + + ### Bug Fixes * + [GHSA-x5vx-95h7-rv4p](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-x5vx-95h7-rv4p) + Fix Group module can halt chain when handling a malicious proposal + + + + ### Features * (crypto/keyring) + [#21653](https://github.com/cosmos/cosmos-sdk/pull/21653) New Linux-only + backend that adds Linux kernel's `keyctl` support. ### Improvements * (server) + [#21941](https://github.com/cosmos/cosmos-sdk/pull/21941) Regenerate + addrbook.json for in place testnet. ### Bug Fixes * Fix + [ABS-0043/ABS-0044](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-8wcc-m6j2-qxvm) + Limit recursion depth for unknown field detection and unpack any * (server) + [#22564](https://github.com/cosmos/cosmos-sdk/pull/22564) Fix fallback genesis + path in server * (x/group) + [#22425](https://github.com/cosmos/cosmos-sdk/pull/22425) Proper address + rendering in error * (sims) + [#21906](https://github.com/cosmos/cosmos-sdk/pull/21906) Skip sims test when + running dry on validators * (cli) + [#21919](https://github.com/cosmos/cosmos-sdk/pull/21919) Query + address-by-acc-num by account_id instead of id. * (x/group) + [#22229](https://github.com/cosmos/cosmos-sdk/pull/22229) Accept `1` and `try` + in CLI for group proposal exec. + + + + ### Features * (cli) [#20779](https://github.com/cosmos/cosmos-sdk/pull/20779) + Added `module-hash-by-height` command to query and retrieve module hashes at a + specified blockchain height, enhancing debugging capabilities. * (cli) + [#21372](https://github.com/cosmos/cosmos-sdk/pull/21372) Added a + `bulk-add-genesis-account` genesis command to add many genesis accounts at + once. * (types/collections) + [#21724](https://github.com/cosmos/cosmos-sdk/pull/21724) Added `LegacyDec` + collection value. 
### Improvements * (x/bank) + [#21460](https://github.com/cosmos/cosmos-sdk/pull/21460) Added `Sender` + attribute in `MsgMultiSend` event. * (genutil) + [#21701](https://github.com/cosmos/cosmos-sdk/pull/21701) Improved error + messages for genesis validation. * (testutil/integration) + [#21816](https://github.com/cosmos/cosmos-sdk/pull/21816) Allow to pass + baseapp options in `NewIntegrationApp`. ### Bug Fixes * (runtime) + [#21769](https://github.com/cosmos/cosmos-sdk/pull/21769) Fix baseapp options + ordering to avoid overwriting options set by modules. * (x/consensus) + [#21493](https://github.com/cosmos/cosmos-sdk/pull/21493) Fix regression that + prevented to upgrade to > v0.50.7 without consensus version params. * + (baseapp) [#21256](https://github.com/cosmos/cosmos-sdk/pull/21256) Halt + height will not commit the block indicated, meaning that if halt-height is set + to 10, only blocks until 9 (included) will be committed. This is to go back to + the original behavior before a change was introduced in v0.50.0. * (baseapp) + [#21444](https://github.com/cosmos/cosmos-sdk/pull/21444) Follow-up, Return + PreBlocker events in FinalizeBlockResponse. * (baseapp) + [#21413](https://github.com/cosmos/cosmos-sdk/pull/21413) Fix data race in sdk + mempool. + + + + * (baseapp) [#21159](https://github.com/cosmos/cosmos-sdk/pull/21159) Return + PreBlocker events in FinalizeBlockResponse. * + [#20939](https://github.com/cosmos/cosmos-sdk/pull/20939) Fix collection + reverse iterator to include `pagination.key` in the result. * (client/grpc) + [#20969](https://github.com/cosmos/cosmos-sdk/pull/20969) Fix + `node.NewQueryServer` method not setting `cfg`. * (testutil/integration) + [#21006](https://github.com/cosmos/cosmos-sdk/pull/21006) Fix + `NewIntegrationApp` method not writing default genesis to state. * (runtime) + [#21080](https://github.com/cosmos/cosmos-sdk/pull/21080) Fix `app.yaml` / + `app.json` incompatibility with `depinject v1.0.0`. 
+ + + + * (client) [#20690](https://github.com/cosmos/cosmos-sdk/pull/20690) Import + mnemonic from file * (x/authz,x/feegrant) + [#20590](https://github.com/cosmos/cosmos-sdk/pull/20590) Provide updated + keeper in depinject for authz and feegrant modules. * + [#20631](https://github.com/cosmos/cosmos-sdk/pull/20631) Fix json parsing in + the wait-tx command. * (x/auth) + [#20438](https://github.com/cosmos/cosmos-sdk/pull/20438) Add + `--skip-signature-verification` flag to multisign command to allow nested + multisigs. * (simulation) + [#17911](https://github.com/cosmos/cosmos-sdk/pull/17911) Fix all problems + with executing command `make test-sim-custom-genesis-fast` for simulation + test. * (simulation) [#18196](https://github.com/cosmos/cosmos-sdk/pull/18196) + Fix the problem of `validator set is empty after InitGenesis` in simulation + test. + + + + ### Improvements * (debug) + [#20328](https://github.com/cosmos/cosmos-sdk/pull/20328) Add consensus + address for debug cmd. * (runtime) + [#20264](https://github.com/cosmos/cosmos-sdk/pull/20264) Expose grpc query + router via depinject. * (x/consensus) + [#20381](https://github.com/cosmos/cosmos-sdk/pull/20381) Use Comet utility + for consensus module consensus param updates. * (client) + [#20356](https://github.com/cosmos/cosmos-sdk/pull/20356) Overwrite client + context when available in `SetCmdClientContext`. ### Bug Fixes * (baseapp) + [#20346](https://github.com/cosmos/cosmos-sdk/pull/20346) Correctly assign + `execModeSimulate` to context for `simulateTx`. * (baseapp) + [#20144](https://github.com/cosmos/cosmos-sdk/pull/20144) Remove txs from + mempool when AnteHandler fails in recheck. * (baseapp) + [#20107](https://github.com/cosmos/cosmos-sdk/pull/20107) Avoid header height + overwrite block height. * (cli) + [#20020](https://github.com/cosmos/cosmos-sdk/pull/20020) Make bootstrap-state + command support both new and legacy genesis format. 
* (testutil/sims) + [#20151](https://github.com/cosmos/cosmos-sdk/pull/20151) Set all signatures + and don't overwrite the previous one in `GenSignedMockTx`. + + + + ### Features * (types) + [#19759](https://github.com/cosmos/cosmos-sdk/pull/19759) Align + SignerExtractionAdapter in PriorityNonceMempool Remove. * (client) + [#19870](https://github.com/cosmos/cosmos-sdk/pull/19870) Add new query + command `wait-tx`. Alias `event-query-tx-for` to `wait-tx` for backward + compatibility. ### Improvements * (telemetry) + [#19903](https://github.com/cosmos/cosmos-sdk/pull/19903) Conditionally emit + metrics based on enablement. * **Introduction of `Now` Function**: Added a new + function called `Now` to the telemetry package. It returns the current system + time if telemetry is enabled, or a zero time if telemetry is not enabled. * + **Atomic Global Variable**: Implemented an atomic global variable to manage + the state of telemetry's enablement. This ensures thread safety for the + telemetry state. * **Conditional Telemetry Emission**: All telemetry functions + have been updated to emit metrics only when telemetry is enabled. They perform + a check with `isTelemetryEnabled()` and return early if telemetry is disabled, + minimizing unnecessary operations and overhead. * (deps) + [#19810](https://github.com/cosmos/cosmos-sdk/pull/19810) Upgrade prometheus + version and fix API breaking change due to prometheus bump. * (deps) + [#19810](https://github.com/cosmos/cosmos-sdk/pull/19810) Bump + `cosmossdk.io/store` to v1.1.0. * (server) + [#19884](https://github.com/cosmos/cosmos-sdk/pull/19884) Add start + customizability to start command options. * (x/gov) + [#19853](https://github.com/cosmos/cosmos-sdk/pull/19853) Emit `depositor` in + `EventTypeProposalDeposit`. * (x/gov) + [#19844](https://github.com/cosmos/cosmos-sdk/pull/19844) Emit the proposer of + governance proposals. 
* (baseapp) + [#19616](https://github.com/cosmos/cosmos-sdk/pull/19616) Don't share gas + meter in tx execution. * (x/authz) + [#20114](https://github.com/cosmos/cosmos-sdk/pull/20114) Follow up of + [GHSA-4j93-fm92-rp4m](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-4j93-fm92-rp4m) + for `x/authz`. * (crypto) + [#19691](https://github.com/cosmos/cosmos-sdk/pull/19745) Fix tx sign doesn't + throw an error when incorrect Ledger is used. * (baseapp) + [#19970](https://github.com/cosmos/cosmos-sdk/pull/19970) Fix default config + values to use no-op mempool as default. * (crypto) + [#20027](https://github.com/cosmos/cosmos-sdk/pull/20027) secp256r1 keys now + implement gogoproto's customtype interface. * (x/bank) + [#20028](https://github.com/cosmos/cosmos-sdk/pull/20028) Align query with + multi denoms for send-enabled. + + + + ### Features * (baseapp) + [#19626](https://github.com/cosmos/cosmos-sdk/pull/19626) Add + `DisableBlockGasMeter` option to `BaseApp`, which removes the block gas meter + during transaction execution. ### Improvements * (x/distribution) + [#19707](https://github.com/cosmos/cosmos-sdk/pull/19707) Add autocli config + for `DelegationTotalRewards` for CLI consistency with `q rewards` commands in + previous versions. * (x/auth) + [#19651](https://github.com/cosmos/cosmos-sdk/pull/19651) Allow empty public + keys in `GetSignBytesAdapter`. ### Bug Fixes * (x/gov) + [#19725](https://github.com/cosmos/cosmos-sdk/pull/19725) Fetch a failed + proposal tally from proposal.FinalTallyResult in the gprc query. * (types) + [#19709](https://github.com/cosmos/cosmos-sdk/pull/19709) Fix skip staking + genesis export when using `CoreAppModuleAdaptor` / `CoreAppModuleBasicAdaptor` + for it. * (x/auth) [#19549](https://github.com/cosmos/cosmos-sdk/pull/19549) + Accept custom get signers when injecting `x/auth/tx`. 
* (x/staking) Fix a + possible bypass of delegator slashing: + [GHSA-86h5-xcpx-cfqc](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-86h5-xcpx-cfqc) + * (baseapp) Fix a bug in `baseapp.ValidateVoteExtensions` helper + ([GHSA-95rx-m9m5-m94v](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-95rx-m9m5-m94v)). + The helper has been fixed and for avoiding API breaking changes + `currentHeight` and `chainID` arguments are ignored. Those arguments are + removed from the helper in v0.51+. + + + + ### Features * (server) + [#19280](https://github.com/cosmos/cosmos-sdk/pull/19280) Adds in-place + testnet CLI command. ### Improvements * (client) + [#19393](https://github.com/cosmos/cosmos-sdk/pull/19393/) Add + `ReadDefaultValuesFromDefaultClientConfig` to populate the default values from + the default client config in client.Context without creating a app folder. ### + Bug Fixes * (x/auth/vesting) [GHSA-4j93-fm92-rp4m](#bug-fixes) Add + `BlockedAddr` check in `CreatePeriodicVestingAccount`. * (baseapp) + [#19338](https://github.com/cosmos/cosmos-sdk/pull/19338) Set HeaderInfo in + context when calling `setState`. * (baseapp): + [#19200](https://github.com/cosmos/cosmos-sdk/pull/19200) Ensure that sdk side + ve math matches cometbft. * + [#19106](https://github.com/cosmos/cosmos-sdk/pull/19106) Allow empty public + keys when setting signatures. Public keys aren't needed for every transaction. + * (baseapp) [#19198](https://github.com/cosmos/cosmos-sdk/pull/19198) Remove + usage of pointers in logs in all optimistic execution goroutines. * (baseapp) + [#19177](https://github.com/cosmos/cosmos-sdk/pull/19177) Fix baseapp + `DefaultProposalHandler` same-sender non-sequential sequence. * (crypto) + [#19371](https://github.com/cosmos/cosmos-sdk/pull/19371) Avoid CLI redundant + log in stdout, log to stderr instead. 
+ + + + ### Features + * (types) [#18991](https://github.com/cosmos/cosmos-sdk/pull/18991) Add SignerExtractionAdapter to PriorityNonceMempool/Config and provide Default implementation matching existing behavior. + * (gRPC) [#19043](https://github.com/cosmos/cosmos-sdk/pull/19043) Add `halt_height` to the gRPC `/cosmos/base/node/v1beta1/config` request. + ### Improvements + * (x/bank) [#18956](https://github.com/cosmos/cosmos-sdk/pull/18956) Introduced a new `DenomOwnersByQuery` query method for `DenomOwners`, which accepts the denom value as a query string parameter, resolving issues with denoms containing slashes. + * (x/gov) [#18707](https://github.com/cosmos/cosmos-sdk/pull/18707) Improve genesis validation. + * (x/auth/tx) [#18772](https://github.com/cosmos/cosmos-sdk/pull/18772) Remove misleading gas wanted from tx simulation failure log. + * (client/tx) [#18852](https://github.com/cosmos/cosmos-sdk/pull/18852) Add `WithFromName` to tx factory. + * (types) [#18888](https://github.com/cosmos/cosmos-sdk/pull/18888) Speedup DecCoin.Sort() if len(coins) \<= 1 + * (types) [#18875](https://github.com/cosmos/cosmos-sdk/pull/18875) Speedup coins.Sort() if len(coins) \<= 1 + * (baseapp) [#18915](https://github.com/cosmos/cosmos-sdk/pull/18915) Add a new `ExecModeVerifyVoteExtension` exec mode and ensure it's populated in the `Context` during `VerifyVoteExtension` execution. + * (testutil) [#18930](https://github.com/cosmos/cosmos-sdk/pull/18930) Add NodeURI for clientCtx. + ### Bug Fixes + * (baseapp) [#19058](https://github.com/cosmos/cosmos-sdk/pull/19058) Fix baseapp posthandler branch would fail if the `runMsgs` had returned an error. + * (baseapp) [#18609](https://github.com/cosmos/cosmos-sdk/issues/18609) Fixed accounting in the block gas meter after module's beginBlock and before DeliverTx, ensuring transaction processing always starts with the expected zeroed out block gas meter. 
+ * (baseapp) [#18895](https://github.com/cosmos/cosmos-sdk/pull/18895) Fix de-duplicating vote extensions during validation in ValidateVoteExtensions. + + + + ### Features + * (debug) [#18219](https://github.com/cosmos/cosmos-sdk/pull/18219) Add debug commands for application codec types. + * (client/keys) [#17639](https://github.com/cosmos/cosmos-sdk/pull/17639) Allows using and saving public keys encoded as base64. + * (server) [#17094](https://github.com/cosmos/cosmos-sdk/pull/17094) Add a `shutdown-grace` flag for waiting a given time before exit. + ### Improvements + * (telemetry) [#18646](https://github.com/cosmos/cosmos-sdk/pull/18646) Enable statsd and dogstatsd telemetry sinks. + * (server) [#18478](https://github.com/cosmos/cosmos-sdk/pull/18478) Add command flag to disable colored logs. + * (x/gov) [#18025](https://github.com/cosmos/cosmos-sdk/pull/18025) Improve ` q gov proposer` by querying directly a proposal instead of tx events. It is an alias of `q gov proposal` as the proposer is a field of the proposal. + * (version) [#18063](https://github.com/cosmos/cosmos-sdk/pull/18063) Allow to define extra info to be displayed in ` version --long` command. + * (codec/unknownproto) [#18541](https://github.com/cosmos/cosmos-sdk/pull/18541) Remove the use of "protoc-gen-gogo/descriptor" in favour of using the official protobuf descriptorpb types inside unknownproto. + ### Bug Fixes + * (x/auth) [#18564](https://github.com/cosmos/cosmos-sdk/pull/18564) Fix total fees calculation when batch signing. + * (server) [#18537](https://github.com/cosmos/cosmos-sdk/pull/18537) Fix panic when defining minimum gas config as `100stake;100uatom`. Use a `,` delimiter instead of `;`. Fixes the server config getter to use the correct delimiter. + * [#18531](https://github.com/cosmos/cosmos-sdk/pull/18531) Baseapp's `GetConsensusParams` returns an empty struct instead of panicking if no params are found. 
+ * (client/tx) [#18472](https://github.com/cosmos/cosmos-sdk/pull/18472) Utilizes the correct Pubkey when simulating a transaction. + * (baseapp) [#18486](https://github.com/cosmos/cosmos-sdk/pull/18486) Fixed FinalizeBlock calls not being passed to ABCIListeners. + * (baseapp) [#18627](https://github.com/cosmos/cosmos-sdk/pull/18627) Post handlers are run on non successful transaction executions too. + * (baseapp) [#18654](https://github.com/cosmos/cosmos-sdk/pull/18654) Fixes an issue in which `gogoproto.Merge` does not work with gogoproto messages with custom types. + + + + ### Features + * (baseapp) [#18071](https://github.com/cosmos/cosmos-sdk/pull/18071) Add hybrid handlers to `MsgServiceRouter`. + * (server) [#18162](https://github.com/cosmos/cosmos-sdk/pull/18162) Start gRPC & API server in standalone mode. + * (baseapp & types) [#17712](https://github.com/cosmos/cosmos-sdk/pull/17712) Introduce `PreBlock`, which runs before begin blocker other modules, and allows to modify consensus parameters, and the changes are visible to the following state machine logics. Additionally it can be used for vote extensions. + * (genutil) [#17571](https://github.com/cosmos/cosmos-sdk/pull/17571) Allow creation of `AppGenesis` without a file lookup. + * (codec) [#17042](https://github.com/cosmos/cosmos-sdk/pull/17042) Add `CollValueV2` which supports encoding of protov2 messages in collections. + * (x/gov) [#16976](https://github.com/cosmos/cosmos-sdk/pull/16976) Add `failed_reason` field to `Proposal` under `x/gov` to indicate the reason for a failed proposal. Referenced from [#238](https://github.com/bnb-chain/greenfield-cosmos-sdk/pull/238) under `bnb-chain/greenfield-cosmos-sdk`. + * (baseapp) [#16898](https://github.com/cosmos/cosmos-sdk/pull/16898) Add `preFinalizeBlockHook` to allow vote extensions persistence. 
+ * (cli) [#16887](https://github.com/cosmos/cosmos-sdk/pull/16887) Add two new CLI commands: ` tx simulate` for simulating a transaction; ` query block-results` for querying CometBFT RPC for block results. + * (x/bank) [#16852](https://github.com/cosmos/cosmos-sdk/pull/16852) Add `DenomMetadataByQueryString` query in bank module to support metadata query by query string. + * (baseapp) [#16581](https://github.com/cosmos/cosmos-sdk/pull/16581) Implement Optimistic Execution as an experimental feature (not enabled by default). + * (types) [#16257](https://github.com/cosmos/cosmos-sdk/pull/16257) Allow setting the base denom in the denom registry. + * (baseapp) [#16239](https://github.com/cosmos/cosmos-sdk/pull/16239) Add Gas Limits to allow node operators to resource bound queries. + * (cli) [#16209](https://github.com/cosmos/cosmos-sdk/pull/16209) Make `StartCmd` more customizable. + * (types/simulation) [#16074](https://github.com/cosmos/cosmos-sdk/pull/16074) Add generic SimulationStoreDecoder for modules using collections. + * (genutil) [#16046](https://github.com/cosmos/cosmos-sdk/pull/16046) Add "module-name" flag to genutil `add-genesis-account` to enable initializing module accounts at genesis. + * [#15970](https://github.com/cosmos/cosmos-sdk/pull/15970) Enable SIGN_MODE_TEXTUAL. + * (types) [#15958](https://github.com/cosmos/cosmos-sdk/pull/15958) Add `module.NewBasicManagerFromManager` for creating a basic module manager from a module manager. + * (types/module) [#15829](https://github.com/cosmos/cosmos-sdk/pull/15829) Add new endblocker interface to handle valset updates. + * (runtime) [#15818](https://github.com/cosmos/cosmos-sdk/pull/15818) Provide logger through `depinject` instead of appBuilder. + * (types) [#15735](https://github.com/cosmos/cosmos-sdk/pull/15735) Make `ValidateBasic() error` method of `Msg` interface optional. 
Modules should validate messages directly in their message handlers ([RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation)). + * (x/genutil) [#15679](https://github.com/cosmos/cosmos-sdk/pull/15679) Allow applications to specify a custom genesis migration function for the `genesis migrate` command. + * (telemetry) [#15657](https://github.com/cosmos/cosmos-sdk/pull/15657) Emit more data (go version, sdk version, upgrade height) in prom metrics. + * (client) [#15597](https://github.com/cosmos/cosmos-sdk/pull/15597) Add status endpoint for clients. + * (testutil/integration) [#15556](https://github.com/cosmos/cosmos-sdk/pull/15556) Introduce `testutil/integration` package for module integration testing. + * (runtime) [#15547](https://github.com/cosmos/cosmos-sdk/pull/15547) Allow runtime to pass event core api service to modules. + * (client) [#15458](https://github.com/cosmos/cosmos-sdk/pull/15458) Add a `CmdContext` field to client.Context initialized to cobra command's context. + * (x/genutil) [#15301](https://github.com/cosmos/cosmos-sdk/pull/15031) Add application genesis. The genesis is now entirely managed by the application and passed to CometBFT at node instantiation. Functions that were taking a `cmttypes.GenesisDoc{}` now take a `genutiltypes.AppGenesis{}`. + * (core) [#15133](https://github.com/cosmos/cosmos-sdk/pull/15133) Implement RegisterServices in the module manager. + * (x/bank) [#14894](https://github.com/cosmos/cosmos-sdk/pull/14894) Return a human readable denomination for IBC vouchers when querying bank balances. Added a `ResolveDenom` parameter to `types.QueryAllBalancesRequest` and `--resolve-denom` flag to `GetBalancesCmd()`. + * (core) [#14860](https://github.com/cosmos/cosmos-sdk/pull/14860) Add `Precommit` and `PrepareCheckState` AppModule callbacks. + * (x/gov) [#14720](https://github.com/cosmos/cosmos-sdk/pull/14720) Upstream expedited proposals from Osmosis. 
+ * (cli) [#14659](https://github.com/cosmos/cosmos-sdk/pull/14659) Added ability to query blocks by events with queries directly passed to Tendermint, which will allow for full query operator support, e.g. `>`. + * (x/auth) [#14650](https://github.com/cosmos/cosmos-sdk/pull/14650) Add Textual SignModeHandler. Enable `SIGN_MODE_TEXTUAL` by following the [UPGRADING.md](./UPGRADING.md) instructions. + * (x/crisis) [#14588](https://github.com/cosmos/cosmos-sdk/pull/14588) Use CacheContext() in AssertInvariants(). + * (mempool) [#14484](https://github.com/cosmos/cosmos-sdk/pull/14484) Add priority nonce mempool option for transaction replacement. + * (query) [#14468](https://github.com/cosmos/cosmos-sdk/pull/14468) Implement pagination for collections. + * (x/gov) [#14373](https://github.com/cosmos/cosmos-sdk/pull/14373) Add new proto field `constitution` of type `string` to gov module genesis state, which allows chain builders to lay a strong foundation by specifying purpose. + * (client) [#14342](https://github.com/cosmos/cosmos-sdk/pull/14342) Add ` config` command is now a sub-command, for setting, getting and migrating Cosmos SDK configuration files. + * (x/distribution) [#14322](https://github.com/cosmos/cosmos-sdk/pull/14322) Introduce a new gRPC message handler, `DepositValidatorRewardsPool`, that allows explicit funding of a validator's reward pool. + * (x/bank) [#14224](https://github.com/cosmos/cosmos-sdk/pull/14224) Allow injection of restrictions on transfers using `AppendSendRestriction` or `PrependSendRestriction`. + ### Improvements + * (x/gov) [#18189](https://github.com/cosmos/cosmos-sdk/pull/18189) Limit the accepted deposit coins for a proposal to the minimum proposal deposit denoms. + * (x/staking) [#18049](https://github.com/cosmos/cosmos-sdk/pull/18049) Return early if Slash encounters zero tokens to burn. 
* (x/staking) [#18035](https://github.com/cosmos/cosmos-sdk/pull/18035) Hoist the parsing of the non-changing validator and delegator addresses out of the redelegation loop.
* (keyring) [#17913](https://github.com/cosmos/cosmos-sdk/pull/17913) Add `NewAutoCLIKeyring` for creating an AutoCLI keyring from an SDK keyring.
* (x/consensus) [#18041](https://github.com/cosmos/cosmos-sdk/pull/18041) Let `ToProtoConsensusParams()` return an error.
* (x/gov) [#17780](https://github.com/cosmos/cosmos-sdk/pull/17780) Recover panics and turn them into errors when executing x/gov proposals.
* (baseapp) [#17667](https://github.com/cosmos/cosmos-sdk/pull/17667) Close databases opened by the SDK in `baseApp.Close()`.
* (types/module) [#17554](https://github.com/cosmos/cosmos-sdk/pull/17554) Introduce `HasABCIGenesis`, which is implemented by a module only when a validatorset update needs to be returned.
* (cli) [#17389](https://github.com/cosmos/cosmos-sdk/pull/17389) gRPC CometBFT commands have been added under ` q consensus comet`. The placement of CometBFT commands in the SDK has been simplified. See the exhaustive list below.
    * `client/rpc.StatusCommand()` is now at `server.StatusCommand()`
* (testutil) [#17216](https://github.com/cosmos/cosmos-sdk/issues/17216) Add `DefaultContextWithKeys` to the `testutil` package.
* (cli) [#17187](https://github.com/cosmos/cosmos-sdk/pull/17187) Do not use `ctx.PrintObjectLegacy` in commands anymore.
    * ` q gov proposer [proposal-id]` now returns the proposal id as an int instead of a string.
* (x/staking) [#17164](https://github.com/cosmos/cosmos-sdk/pull/17164) Add `BondedTokensAndPubKeyByConsAddr` to the keeper to enable vote extension verification.
* (x/group, x/gov) [#17109](https://github.com/cosmos/cosmos-sdk/pull/17109) Let the proposal summary be 40x longer than the metadata limit.
* (version) [#17096](https://github.com/cosmos/cosmos-sdk/pull/17096) Improve `getSDKVersion()` to handle module replacements.
* (types) [#16890](https://github.com/cosmos/cosmos-sdk/pull/16890) Remove `GetTxCmd() *cobra.Command` and `GetQueryCmd() *cobra.Command` from the `module.AppModuleBasic` interface.
* (x/authz) [#16869](https://github.com/cosmos/cosmos-sdk/pull/16869) Improve the error message when a grant is not found.
* (all) [#16497](https://github.com/cosmos/cosmos-sdk/pull/16497) Removed all exported vestiges of `sdk.MustSortJSON` and `sdk.SortJSON`.
* (server) [#16238](https://github.com/cosmos/cosmos-sdk/pull/16238) Don't set up p2p node keys when starting a node in gRPC-only mode.
* (cli) [#16206](https://github.com/cosmos/cosmos-sdk/pull/16206) Make the ABCI handshake profileable.
* (types) [#16076](https://github.com/cosmos/cosmos-sdk/pull/16076) Optimize `ChainAnteDecorators`/`ChainPostDecorators` to instantiate the functions once instead of on every invocation of the returned `AnteHandler`/`PostHandler`.
* (server) [#16071](https://github.com/cosmos/cosmos-sdk/pull/16071) When `mempool.max-txs` is set to a negative value, use a no-op mempool (effectively disabling the app mempool).
* (types/query) [#16041](https://github.com/cosmos/cosmos-sdk/pull/16041) Change the pagination max limit to a variable so it can be modified by application devs.
* (simapp) [#15958](https://github.com/cosmos/cosmos-sdk/pull/15958) Refactor SimApp to remove the global basic manager.
* (all modules) [#15901](https://github.com/cosmos/cosmos-sdk/issues/15901) All core Cosmos SDK module query commands have migrated to [AutoCLI](https://docs.cosmos.network/main/core/autocli), ensuring parity between gRPC and CLI queries.
* (x/auth) [#15867](https://github.com/cosmos/cosmos-sdk/pull/15867) Support better logging for signature verification failure.
* (store/cachekv) [#15767](https://github.com/cosmos/cosmos-sdk/pull/15767) Reduce peak RAM usage during and after `InitGenesis`.
* (x/bank) [#15764](https://github.com/cosmos/cosmos-sdk/pull/15764) Speed up x/bank `InitGenesis`.
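One of the entries above ([#16076]) changes `ChainAnteDecorators`/`ChainPostDecorators` to compose the decorator chain once, rather than re-composing it on every call. The idea can be sketched with toy, stdlib-only types (`Handler` and `Decorator` here are illustrative stand-ins, not the SDK's actual signatures):

```go
package main

import "fmt"

// Handler is a toy stand-in for an AnteHandler: it processes a tx string.
type Handler func(tx string) (string, error)

// Decorator wraps a Handler with extra behavior.
type Decorator func(next Handler) Handler

// Chain composes the decorators into a single Handler ONCE; invoking the
// returned Handler repeatedly does no further composition work.
func Chain(decorators ...Decorator) Handler {
	h := Handler(func(tx string) (string, error) { return tx, nil })
	for i := len(decorators) - 1; i >= 0; i-- {
		h = decorators[i](h)
	}
	return h
}

// tag builds a Decorator that records its name around the inner result.
func tag(name string) Decorator {
	return func(next Handler) Handler {
		return func(tx string) (string, error) {
			out, err := next(tx)
			if err != nil {
				return "", err
			}
			return name + "(" + out + ")", nil
		}
	}
}

func main() {
	handler := Chain(tag("sigverify"), tag("deductfee")) // composed once
	out, _ := handler("tx1")
	fmt.Println(out)
}
```

The composed closure can then be called per transaction with no per-call allocation of the chain itself, which is the gist of the optimization.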
* (x/slashing) [#15580](https://github.com/cosmos/cosmos-sdk/pull/15580) Refactor the validator's missed block signing window to be a chunked bitmap instead of a "logical" bitmap, significantly reducing the storage footprint.
* (x/gov) [#15554](https://github.com/cosmos/cosmos-sdk/pull/15554) Add proposal result log in `active_proposal` event. When a proposal passes but fails to execute, the proposal result is logged in the `active_proposal` event.
* (x/consensus) [#15553](https://github.com/cosmos/cosmos-sdk/pull/15553) Migrate the consensus module to use collections.
* (server) [#15358](https://github.com/cosmos/cosmos-sdk/pull/15358) Add `server.InterceptConfigsAndCreateContext` as an alternative to `server.InterceptConfigsPreRunHandler` that does not set the server context and the default SDK logger.
* (mempool) [#15328](https://github.com/cosmos/cosmos-sdk/pull/15328) Improve the `PriorityNonceMempool`:
    * Support generic transaction prioritization, instead of `ctx.Priority()`
    * Improve construction through the use of a single `PriorityNonceMempoolConfig` instead of option functions
* (x/authz) [#15164](https://github.com/cosmos/cosmos-sdk/pull/15164) Add `MsgCancelUnbondingDelegation` to staking authorization.
* (server) [#15041](https://github.com/cosmos/cosmos-sdk/pull/15041) Remove unnecessary sleeps from gRPC and API server initiation. The servers start and accept requests as soon as they're ready.
* (baseapp) [#15023](https://github.com/cosmos/cosmos-sdk/pull/15023) & [#15213](https://github.com/cosmos/cosmos-sdk/pull/15213) Add a `MessageRouter` interface to baseapp and pass it to authz, gov and groups instead of a concrete type.
* [#15011](https://github.com/cosmos/cosmos-sdk/pull/15011) Introduce the `cosmossdk.io/log` package to provide a consistent logging interface throughout the SDK. The CometBFT logger is now replaced by `cosmossdk.io/log.Logger`.
* (x/staking) [#14864](https://github.com/cosmos/cosmos-sdk/pull/14864) The ` tx staking create-validator` CLI command now takes a JSON file as an argument instead of using required flags.
* (x/auth) [#14758](https://github.com/cosmos/cosmos-sdk/pull/14758) Allow transaction event queries to be passed directly to Tendermint, which allows for full query operator support, e.g. `>`.
* (x/evidence) [#14757](https://github.com/cosmos/cosmos-sdk/pull/14757) Evidence messages no longer need to implement a `.Type()` method.
* (x/auth/tx) [#14751](https://github.com/cosmos/cosmos-sdk/pull/14751) Remove the `.Type()` and `Route()` methods from all msgs and the `legacytx.LegacyMsg` interface.
* (cli) [#14659](https://github.com/cosmos/cosmos-sdk/pull/14659) Added ability to query blocks by either height or hash: ` q block --type=height|hash `.
* (x/staking) [#14590](https://github.com/cosmos/cosmos-sdk/pull/14590) Return the undelegated amount in `MsgUndelegateResponse`.
* [#14529](https://github.com/cosmos/cosmos-sdk/pull/14529) Add new property `BondDenom` to the `SimulationState` struct.
* (store) [#14439](https://github.com/cosmos/cosmos-sdk/pull/14439) Remove the global metric gatherer from store.
    * By default the store has a no-op metric gatherer; the application developer must set another metric gatherer or use the provided one in `store/metrics`.
* (store) [#14438](https://github.com/cosmos/cosmos-sdk/pull/14438) Pass the logger from baseapp to store.
* (baseapp) [#14417](https://github.com/cosmos/cosmos-sdk/pull/14417) The store package no longer has a dependency on baseapp.
* (module) [#14415](https://github.com/cosmos/cosmos-sdk/pull/14415) Loosen assertions in `SetOrderBeginBlockers()` and `SetOrderEndBlockers()`.
* (store) [#14410](https://github.com/cosmos/cosmos-sdk/pull/14410) `rootmulti.Store.loadVersion` now validates that every module store's height is correct; it errors if any module store has an incorrect height.
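The `rootmulti.Store.loadVersion` validation in the last entry boils down to checking every module store's recorded height against the target version and failing loudly on any mismatch. A minimal, self-contained sketch of that check (toy data and function name, not the actual `rootmulti` code):

```go
package main

import (
	"fmt"
	"sort"
)

// validateStoreHeights returns an error naming every module store whose
// height does not match the target version, mirroring the idea of erroring
// out instead of silently loading mismatched stores.
func validateStoreHeights(target int64, heights map[string]int64) error {
	var bad []string
	for name, h := range heights {
		if h != target {
			bad = append(bad, fmt.Sprintf("%s (height %d)", name, h))
		}
	}
	if len(bad) > 0 {
		sort.Strings(bad) // deterministic error message
		return fmt.Errorf("version %d: mismatched module stores: %v", target, bad)
	}
	return nil
}

func main() {
	heights := map[string]int64{"bank": 100, "staking": 100, "gov": 98}
	if err := validateStoreHeights(100, heights); err != nil {
		fmt.Println(err)
	}
}
```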
* [#14406](https://github.com/cosmos/cosmos-sdk/issues/14406) Migrate usage of `types/store.go` to `store/types/..`.
* (context) [#14384](https://github.com/cosmos/cosmos-sdk/pull/14384) Pass `EventManager` to the context as an interface.
* (types) [#14354](https://github.com/cosmos/cosmos-sdk/pull/14354) Improve performance of `Context.KVStore` and `Context.TransientStore` by 40%.
* (crypto/keyring) [#14151](https://github.com/cosmos/cosmos-sdk/pull/14151) Move keys presentation from `crypto/keyring` to `client/keys`.
* (signing) [#14087](https://github.com/cosmos/cosmos-sdk/pull/14087) Add a `SignModeHandlerWithContext` interface with a new `GetSignBytesWithContext` to get the sign bytes using `context.Context` as an argument to access state.
* (server) [#14062](https://github.com/cosmos/cosmos-sdk/pull/14062) Remove Rosetta from server start.
* (crypto) [#3129](https://github.com/cosmos/cosmos-sdk/pull/3129) New armor and keyring key derivation uses AEAD and encryption uses chacha20poly.

### State Machine Breaking

* (x/gov) [#18146](https://github.com/cosmos/cosmos-sdk/pull/18146) Add a denom check to reject denoms outside of those listed in `MinDeposit`. A new `MinDepositRatio` param is added (with a default value of `0.001`), and deposits are now required to be at least `MinDepositRatio*MinDeposit` to be accepted.
* (x/group,x/gov) [#16235](https://github.com/cosmos/cosmos-sdk/pull/16235) A group or gov proposal is rejected if the proposal metadata title and summary do not match the proposal title and summary.
* (baseapp) [#15930](https://github.com/cosmos/cosmos-sdk/pull/15930) Change the vote info provided by prepare and process proposal to the one in the block.
* (x/staking) [#15731](https://github.com/cosmos/cosmos-sdk/pull/15731) Introduce a new index to retrieve the delegations by validator efficiently.
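The delegations-by-validator index in the last entry is an instance of the secondary-index pattern: maintain a second mapping keyed by validator so lookups avoid scanning every delegation. A toy, in-memory sketch of the idea (Go maps standing in for KV store key prefixes; not the SDK's actual implementation):

```go
package main

import "fmt"

// DelegationStore keeps delegations keyed by (delegator, validator) plus a
// secondary index keyed by validator, so "all delegations to validator X"
// is a direct lookup rather than a scan over every delegation.
type DelegationStore struct {
	byPair      map[[2]string]int64 // (delegator, validator) -> shares
	byValidator map[string][]string // validator -> delegators (the index)
}

func NewDelegationStore() *DelegationStore {
	return &DelegationStore{
		byPair:      map[[2]string]int64{},
		byValidator: map[string][]string{},
	}
}

// Set writes a delegation and keeps the secondary index in sync.
func (s *DelegationStore) Set(delegator, validator string, shares int64) {
	key := [2]string{delegator, validator}
	if _, exists := s.byPair[key]; !exists {
		s.byValidator[validator] = append(s.byValidator[validator], delegator)
	}
	s.byPair[key] = shares
}

// DelegatorsOf answers "who delegates to this validator" via the index.
func (s *DelegationStore) DelegatorsOf(validator string) []string {
	return s.byValidator[validator]
}

func main() {
	s := NewDelegationStore()
	s.Set("alice", "val1", 10)
	s.Set("bob", "val1", 5)
	s.Set("alice", "val2", 3)
	fmt.Println(s.DelegatorsOf("val1"))
}
```

The trade-off is the usual one for secondary indexes: every write touches two keyspaces, in exchange for O(1)-per-result reads on the indexed dimension.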
* (x/staking) [#15701](https://github.com/cosmos/cosmos-sdk/pull/15701) The `HistoricalInfoKey` has been updated to use a binary format.
* (x/slashing) [#15580](https://github.com/cosmos/cosmos-sdk/pull/15580) The validator slashing window now stores "chunked" bitmap entries for each validator's signing window instead of a single boolean entry per signing window index.
* (x/staking) [#14590](https://github.com/cosmos/cosmos-sdk/pull/14590) `MsgUndelegateResponse` now includes the undelegated amount. The `x/staking` module's `keeper.Undelegate` now returns 3 values (completionTime, undelegateAmount, error) instead of 2.
* (x/feegrant) [#14294](https://github.com/cosmos/cosmos-sdk/pull/14294) Moved the logic of rejecting duplicate grants from the `msg_server` to the `keeper` method.

### API Breaking Changes

* (x/auth) [#17787](https://github.com/cosmos/cosmos-sdk/pull/17787) Remove Tip functionality.
* (types) `module.EndBlockAppModule` has been replaced by Core API `appmodule.HasEndBlocker` or `module.HasABCIEndBlock` when needing validator updates.
* (types) `module.BeginBlockAppModule` has been replaced by Core API `appmodule.HasBeginBlocker`.
* (types) [#17358](https://github.com/cosmos/cosmos-sdk/pull/17358) Remove the deprecated `sdk.Handler`; use `baseapp.MsgServiceHandler` instead.
* (client) [#17197](https://github.com/cosmos/cosmos-sdk/pull/17197) `keys.Commands` no longer takes a home directory; it is inferred from the root command.
* (x/staking) [#17157](https://github.com/cosmos/cosmos-sdk/pull/17157) `GetValidatorsByPowerIndexKey` and `ValidateBasic` for historical info take a validator address codec in order to be able to decode/encode addresses.
    * `GetOperator()` now returns the address as it is represented in state; by default this is an encoded address.
    * `GetConsAddr() ([]byte, error)` returns `[]byte` instead of `sdk.ConsAddress`.
    * `FromABCIEvidence` & `GetConsensusAddress(consAc address.Codec)` now take a consensus address codec to be able to decode the incoming address.
    * (x/distribution) The `Delegate` & `SlashValidator` helper functions now take the mock staking keeper as an additional parameter.
* (x/staking) [#17098](https://github.com/cosmos/cosmos-sdk/pull/17098) `NewMsgCreateValidator`, `NewValidator`, `NewMsgCancelUnbondingDelegation`, `NewMsgUndelegate`, `NewMsgBeginRedelegate`, `NewMsgDelegate` and `NewMsgEditValidator` take a string instead of `sdk.ValAddress` or `sdk.AccAddress`:
    * `NewRedelegation` and `NewUnbondingDelegation` take a validatorAddressCodec and a delegatorAddressCodec in order to decode the addresses.
    * `NewRedelegationResponse` takes a string instead of `sdk.ValAddress` or `sdk.AccAddress`.
    * `NewMsgCreateValidator.Validate()` takes an address codec in order to decode the address.
    * `BuildCreateValidatorMsg` takes a ValidatorAddressCodec in order to decode addresses.
* (x/slashing) [#17098](https://github.com/cosmos/cosmos-sdk/pull/17098) `NewMsgUnjail` takes a string instead of `sdk.ValAddress`.
* (x/genutil) [#17098](https://github.com/cosmos/cosmos-sdk/pull/17098) `GenAppStateFromConfig`, `AddGenesisAccountCmd` and `GenTxCmd` take an address codec to decode addresses.
* (x/distribution) [#17098](https://github.com/cosmos/cosmos-sdk/pull/17098) `NewMsgDepositValidatorRewardsPool`, `NewMsgFundCommunityPool`, `NewMsgWithdrawValidatorCommission` and `NewMsgWithdrawDelegatorReward` take a string instead of `sdk.ValAddress` or `sdk.AccAddress`.
* (x/staking) [#16959](https://github.com/cosmos/cosmos-sdk/pull/16959) Add validator and consensus address codecs as staking keeper arguments.
* (x/staking) [#16958](https://github.com/cosmos/cosmos-sdk/pull/16958) The `DelegationI` interface's `GetDelegatorAddr` & `GetValidatorAddr` now return a string instead of `sdk.AccAddress` and `sdk.ValAddress` respectively.
`stakingtypes.NewDelegation` takes a string instead of `sdk.AccAddress` and `sdk.ValAddress`.
* (testutil) [#16899](https://github.com/cosmos/cosmos-sdk/pull/16899) The *cli testutil* `QueryBalancesExec` has been removed. Use the gRPC or REST query instead.
* (x/staking) [#16795](https://github.com/cosmos/cosmos-sdk/pull/16795) `DelegationToDelegationResponse`, `DelegationsToDelegationResponses` and `RedelegationsToRedelegationResponses` are no longer exported.
* (x/auth/vesting) [#16741](https://github.com/cosmos/cosmos-sdk/pull/16741) Vesting account constructors now return an error with the result of their validate function.
* (x/auth) [#16650](https://github.com/cosmos/cosmos-sdk/pull/16650) The *cli testutil* `QueryAccountExec` has been removed. Use the gRPC or REST query instead.
* (x/auth) [#16621](https://github.com/cosmos/cosmos-sdk/pull/16621) Pass an address codec to the auth keeper constructor.
* (x/auth) [#16423](https://github.com/cosmos/cosmos-sdk/pull/16423) `helpers.AddGenesisAccount` has been moved to `x/genutil` to remove the cyclic dependency between `x/auth` and `x/genutil`.
* (baseapp) [#16342](https://github.com/cosmos/cosmos-sdk/pull/16342) `NewContext` was renamed to `NewContextLegacy`. The replacement (`NewContext`) no longer takes a header; instead, set the header via `WithHeaderInfo` or `WithBlockHeight`. Note that `WithBlockHeight` will soon be deprecated; it is recommended to use `WithHeaderInfo`.
* (x/mint) [#16329](https://github.com/cosmos/cosmos-sdk/pull/16329) Use collections for state management:
    * Removed: keeper `GetParams`, `SetParams`, `GetMinter`, `SetMinter`.
* (x/crisis) [#16328](https://github.com/cosmos/cosmos-sdk/pull/16328) Use collections for state management:
    * Removed: keeper `GetConstantFee`, `SetConstantFee`
* (x/staking) [#16324](https://github.com/cosmos/cosmos-sdk/pull/16324) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`, and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error`. Notable changes:
    * The `Validator` method now returns `types.ErrNoValidatorFound` instead of `nil` when not found.
* (x/distribution) [#16302](https://github.com/cosmos/cosmos-sdk/pull/16302) Use collections for FeePool state management.
    * Removed: keeper `GetFeePool`, `SetFeePool`, `GetFeePoolCommunityCoins`
* (types) [#16272](https://github.com/cosmos/cosmos-sdk/pull/16272) `FeeGranter` in the `FeeTx` interface returns `[]byte` instead of `string`.
* (x/gov) [#16268](https://github.com/cosmos/cosmos-sdk/pull/16268) Use collections for proposal state management (part 2):
    * This finalizes the gov collections migration.
    * Removed: types: all the key-related functions
    * Removed: keeper `InsertActiveProposalsQueue`, `RemoveActiveProposalsQueue`, `InsertInactiveProposalsQueue`, `RemoveInactiveProposalsQueue`, `IterateInactiveProposalsQueue`, `IterateActiveProposalsQueue`, `ActiveProposalsQueueIterator`, `InactiveProposalsQueueIterator`
* (x/slashing) [#16246](https://github.com/cosmos/cosmos-sdk/issues/16246) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`, and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error`. `GetValidatorSigningInfo` now returns an error instead of a `found bool`; the error can be `nil` (found), `ErrNoSigningInfoFound` (not found) or any other error.
* (module) [#16227](https://github.com/cosmos/cosmos-sdk/issues/16227) `manager.RunMigrations()` now takes a `context.Context` instead of a `sdk.Context`.
* (x/crisis) [#16216](https://github.com/cosmos/cosmos-sdk/issues/16216) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`; methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error` instead of panicking.
* (x/distribution) [#16211](https://github.com/cosmos/cosmos-sdk/pull/16211) Use collections for params state management.
* (cli) [#16209](https://github.com/cosmos/cosmos-sdk/pull/16209) Add API `StartCmdWithOptions` to create a customized start command.
* (x/mint) [#16179](https://github.com/cosmos/cosmos-sdk/issues/16179) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`, and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error`.
* (x/gov) [#16171](https://github.com/cosmos/cosmos-sdk/pull/16171) Use collections for proposal state management (part 1):
    * Removed: keeper `GetProposal`, `UnmarshalProposal`, `MarshalProposal`, `IterateProposal`, `GetProposal`, `GetProposalFiltered`, `GetProposals`, `GetProposalID`, `SetProposalID`
    * Removed: unused errors
* (x/gov) [#16164](https://github.com/cosmos/cosmos-sdk/pull/16164) Use collections for vote state management:
    * Removed: types `VoteKey`, `VoteKeys`
    * Removed: keeper `IterateVotes`, `IterateAllVotes`, `GetVotes`, `GetVote`, `SetVote`
* (sims) [#16155](https://github.com/cosmos/cosmos-sdk/pull/16155)
    * `simulation.NewOperationMsg` now marshals the operation msg as proto bytes instead of legacy amino JSON bytes.
    * `simulation.NewOperationMsg` is now 2-arity instead of 3-arity, with the obsolete `codec.ProtoCodec` argument removed.
    * The field `OperationMsg.Msg` is now of type `[]byte` instead of `json.RawMessage`.
* (x/gov) [#16127](https://github.com/cosmos/cosmos-sdk/pull/16127) Use collections for deposit state management:
    * The following methods are removed from the gov keeper: `GetDeposit`, `GetAllDeposits`, `IterateAllDeposits`.
    * The following functions are removed from the gov types: `DepositKey`, `DepositsKey`.
* (x/gov) [#16118](https://github.com/cosmos/cosmos-sdk/pull/16118/) Use collections for constitution and params state management.
* (x/gov) [#16106](https://github.com/cosmos/cosmos-sdk/pull/16106) Remove gRPC query methods from the gov keeper.
* (x/*all*) [#16052](https://github.com/cosmos/cosmos-sdk/pull/16062) `GetSignBytes` implementations on messages and global legacy amino codec definitions have been removed from all modules.
* (sims) [#16052](https://github.com/cosmos/cosmos-sdk/pull/16062) `GetOrGenerate` no longer requires a codec argument and is now 4-arity instead of 5-arity.
* (types/math) [#16040](https://github.com/cosmos/cosmos-sdk/pull/16798) Remove aliases in `types/math.go` (part 2).
* (types/math) [#16040](https://github.com/cosmos/cosmos-sdk/pull/16040) Remove aliases in `types/math.go` (part 1).
* (x/auth) [#16016](https://github.com/cosmos/cosmos-sdk/pull/16016) Use collections for accounts state management:
    * Removed: keeper `HasAccountByID`, `AccountAddressByID`, `SetParams`
* (x/genutil) [#15999](https://github.com/cosmos/cosmos-sdk/pull/15999) Genutil now takes the `GenesisTxHanlder` interface instead of deliverTx. The interface is implemented on baseapp.
* (x/gov) [#15988](https://github.com/cosmos/cosmos-sdk/issues/15988) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`; methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context` and return an `error` (instead of panicking or returning a `found bool`). Iterator callback functions now return an error instead of a `bool`.
* (x/auth) [#15985](https://github.com/cosmos/cosmos-sdk/pull/15985) The `AccountKeeper` no longer exposes the `QueryServer` and `MsgServer` APIs.
* (x/authz) [#15962](https://github.com/cosmos/cosmos-sdk/issues/15962) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`; methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`. The `Authorization` interface's `Accept` method now takes a `context.Context` instead of a `sdk.Context`.
* (x/distribution) [#15948](https://github.com/cosmos/cosmos-sdk/issues/15948) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey` and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`. Keeper methods also now return an `error`.
* (x/bank) [#15891](https://github.com/cosmos/cosmos-sdk/issues/15891) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey` and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`. Also, `FundAccount` and `FundModuleAccount` from the `testutil` package accept a `context.Context` instead of a `sdk.Context`, and its position was moved to first place.
* (x/slashing) [#15875](https://github.com/cosmos/cosmos-sdk/pull/15875) `x/slashing.NewAppModule` now requires an `InterfaceRegistry` parameter.
* (x/crisis) [#15852](https://github.com/cosmos/cosmos-sdk/pull/15852) The crisis keeper now takes an instance of the address codec to be able to decode user addresses.
* (x/auth) [#15822](https://github.com/cosmos/cosmos-sdk/pull/15822) The type of the struct field `ante.HandlerOptions.SignModeHandler` has been changed to `x/tx/signing.HandlerMap`.
* (client) [#15822](https://github.com/cosmos/cosmos-sdk/pull/15822) The return type of the interface method `TxConfig.SignModeHandler` has been changed to `x/tx/signing.HandlerMap`.
    * The signature of `VerifySignature` has been changed to accept a `x/tx/signing.HandlerMap` and other structs from `x/tx` as arguments.
    * `NewTxConfigWithTextual` has been deprecated and its signature changed to accept a `SignModeOptions`.
    * The signature of `NewSigVerificationDecorator` has been changed to accept a `x/tx/signing.HandlerMap`.
* (x/bank) [#15818](https://github.com/cosmos/cosmos-sdk/issues/15818) `BaseViewKeeper`'s `Logger` method no longer requires a context. `NewBaseKeeper`, `NewBaseSendKeeper` and `NewBaseViewKeeper` now also require a `log.Logger` to be passed in.
* (x/genutil) [#15679](https://github.com/cosmos/cosmos-sdk/pull/15679) `MigrateGenesisCmd` now takes a `MigrationMap` instead of having the SDK genesis migration hardcoded.
* (client) [#15673](https://github.com/cosmos/cosmos-sdk/pull/15673) Move `client/keys.OutputFormatJSON` and `client/keys.OutputFormatText` to the `client/flags` package.
* (x/*all*) [#15648](https://github.com/cosmos/cosmos-sdk/issues/15648) Make `SetParams` consistent across all modules and validate the params at message handling instead of in the `SetParams` method.
* (codec) [#15600](https://github.com/cosmos/cosmos-sdk/pull/15600) [#15873](https://github.com/cosmos/cosmos-sdk/pull/15873) Add support for getting signers to `codec.Codec` and `InterfaceRegistry`:
    * `InterfaceRegistry` now has unexported methods and implements `protodesc.Resolver` plus the `RangeFiles` and `SigningContext` methods. All implementations of `InterfaceRegistry` by other users must now embed the official implementation.
    * `Codec` has new methods `InterfaceRegistry`, `GetMsgAnySigners`, `GetMsgV1Signers`, and `GetMsgV2Signers`, as well as unexported methods. All implementations of `Codec` by other users must now embed an official implementation from the `codec` package.
    * `AminoCodec` is marked as deprecated and no longer implements `Codec`.
* (client) [#15597](https://github.com/cosmos/cosmos-sdk/pull/15597) `RegisterNodeService` now requires a config parameter.
* (x/nft) [#15588](https://github.com/cosmos/cosmos-sdk/pull/15588) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey` and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`.
* (baseapp) [#15568](https://github.com/cosmos/cosmos-sdk/pull/15568) `SetIAVLLazyLoading` is removed from baseapp.
* (x/genutil) [#15567](https://github.com/cosmos/cosmos-sdk/pull/15567) `CollectGenTxsCmd` & `GenTxCmd` take an `address.Codec` to be able to decode addresses.
* (x/bank) [#15567](https://github.com/cosmos/cosmos-sdk/pull/15567) `GenesisBalance.GetAddress` now returns a string instead of `sdk.AccAddress`.
    * The `MsgSendExec` test helper function now takes an `address.Codec`.
* (x/auth) [#15520](https://github.com/cosmos/cosmos-sdk/pull/15520) `NewAccountKeeper` now takes a `KVStoreService` instead of a `StoreKey` and methods in the `Keeper` now take a `context.Context` instead of a `sdk.Context`.
* (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) `runTxMode`s were renamed to `execMode`. `ModeDeliver` was changed to `ModeFinalize` and a new `ModeVoteExtension` was added for vote extensions.
* (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) Writing of state to the multistore was moved to `FinalizeBlock`. `Commit` still handles committing values to disk.
* (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) Calls to `BeginBlock` and `EndBlock` have been replaced with the core API BeginBlock & EndBlock.
* (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) `BeginBlock` and `EndBlock` are now internal to baseapp. For testing, users must call `FinalizeBlock`.
* (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) All calls to ABCI methods now accept a pointer to the ABCI request and response types.
* (x/consensus) [#15517](https://github.com/cosmos/cosmos-sdk/pull/15517) `NewKeeper` now takes a `KVStoreService` instead of a `StoreKey`.
* (x/bank) [#15477](https://github.com/cosmos/cosmos-sdk/pull/15477) `banktypes.NewMsgMultiSend` and `keeper.InputOutputCoins` only accept one input.
* (server) [#15358](https://github.com/cosmos/cosmos-sdk/pull/15358) Remove `server.ErrorCode`, which was not used anywhere.
* (x/capability) [#15344](https://github.com/cosmos/cosmos-sdk/pull/15344) The capability module was removed and is now housed in [IBC-GO](https://github.com/cosmos/ibc-go).
* (mempool) [#15328](https://github.com/cosmos/cosmos-sdk/pull/15328) The `PriorityNonceMempool` is now generic over type `C comparable` and takes a single `PriorityNonceMempoolConfig[C]` argument. See `DefaultPriorityNonceMempoolConfig` for how to construct the configuration and a `TxPriority` type.
* [#15299](https://github.com/cosmos/cosmos-sdk/pull/15299) Remove the `StdTx` transaction and signing APIs. No SDK version has actually supported `StdTx` since before Stargate.
* [#15284](https://github.com/cosmos/cosmos-sdk/pull/15284)
    * (x/gov) [#15284](https://github.com/cosmos/cosmos-sdk/pull/15284) `NewKeeper` now requires `codec.Codec`.
    * (x/authz) [#15284](https://github.com/cosmos/cosmos-sdk/pull/15284) `NewKeeper` now requires `codec.Codec`.
    * `types/tx.Tx` no longer implements `sdk.Tx`.
    * `sdk.Tx` now requires a new method `GetMsgsV2()`.
    * `sdk.Msg.GetSigners` was deprecated and is no longer supported. Use the `cosmos.msg.v1.signer` protobuf annotation instead.
    * `TxConfig` has a new method `SigningContext() *signing.Context`.
    * `SigVerifiableTx.GetSigners()` now returns `([][]byte, error)` instead of `[]sdk.AccAddress`.
    * `AccountKeeper` now has an `AddressCodec() address.Codec` method, and the expected `AccountKeeper` for `x/auth/ante` expects this method.
* [#15211](https://github.com/cosmos/cosmos-sdk/pull/15211) Remove usage of `github.com/cometbft/cometbft/libs/bytes.HexBytes` in favor of `[]byte` throughout the SDK.
* (crypto) [#15070](https://github.com/cosmos/cosmos-sdk/pull/15070) `GenerateFromPassword` and `Cost` from `bcrypt.go` now take a `uint32` instead of an `int` type.
* (types) [#15067](https://github.com/cosmos/cosmos-sdk/pull/15067) Remove deprecated aliases from `types/errors`. Use `cosmossdk.io/errors` instead.
* (server) [#15041](https://github.com/cosmos/cosmos-sdk/pull/15041) Refactor how gRPC and API servers are started to remove unnecessary sleeps:
    * `api.Server#Start` now accepts a `context.Context`. The caller is responsible for ensuring that the context is canceled such that the API server can gracefully exit. The caller does not need to stop the server.
    * To start the gRPC server you must first create the server via `NewGRPCServer`, after which you can start the gRPC server via `StartGRPCServer`, which accepts a `context.Context`. The caller is responsible for ensuring that the context is canceled such that the gRPC server can gracefully exit. The caller does not need to stop the server.
    * Rename `WaitForQuitSignals` to `ListenForQuitSignals`. Note that this function is no longer blocking; the caller is expected to provide a `context.CancelFunc` so that when a signal is caught, any spawned processes can gracefully exit.
    * Remove the `ServerStartTime` constant.
* [#15011](https://github.com/cosmos/cosmos-sdk/pull/15011) All functions that were taking a CometBFT logger now take `cosmossdk.io/log.Logger` instead.
* (simapp) [#14977](https://github.com/cosmos/cosmos-sdk/pull/14977) Move simulation helper functions (`AppStateFn` and `AppStateRandomizedFn`) to `testutil/sims`.
These take an extra `genesisState` argument, which is the default state of the app.
* (x/bank) [#14894](https://github.com/cosmos/cosmos-sdk/pull/14894) Allow a human-readable denomination for coins when querying bank balances. Added a `ResolveDenom` parameter to `types.QueryAllBalancesRequest`.
* [#14847](https://github.com/cosmos/cosmos-sdk/pull/14847) App and ModuleManager methods `InitGenesis`, `ExportGenesis`, `BeginBlock` and `EndBlock` now also return an error.
* (x/upgrade) [#14764](https://github.com/cosmos/cosmos-sdk/pull/14764) The `x/upgrade` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
* (x/auth) [#14758](https://github.com/cosmos/cosmos-sdk/pull/14758) Refactor transaction searching:
    * Refactor `QueryTxsByEvents` to accept a `query` of type `string` instead of `events` of type `[]string`
    * Refactor CLI methods to accept a `--query` flag instead of `--events`
    * Pass `prove=false` to Tendermint's `TxSearch` RPC method
* (simulation) [#14751](https://github.com/cosmos/cosmos-sdk/pull/14751) Remove the `MsgType` field from the `simulation.OperationInput` struct.
* (store) [#14746](https://github.com/cosmos/cosmos-sdk/pull/14746) Extract Store into its own go.mod and rename the package to `cosmossdk.io/store`.
* (x/nft) [#14725](https://github.com/cosmos/cosmos-sdk/pull/14725) Extract NFT into its own go.mod and rename the package to `cosmossdk.io/x/nft`.
* (x/gov) [#14720](https://github.com/cosmos/cosmos-sdk/pull/14720) Add an expedited field in the gov v1 proposal and `MsgNewMsgProposal`.
* (x/feegrant) [#14649](https://github.com/cosmos/cosmos-sdk/pull/14649) Extract Feegrant into its own go.mod and rename the package to `cosmossdk.io/x/feegrant`.
* (tx) [#14634](https://github.com/cosmos/cosmos-sdk/pull/14634) Move the `tx` go module to `x/tx`.
+ * (store/streaming) [#14603](https://github.com/cosmos/cosmos-sdk/pull/14603) `StoreDecoderRegistry` moved from store to `types/simulations`; this breaks the `AppModuleSimulation` interface. + * (snapshots) [#14597](https://github.com/cosmos/cosmos-sdk/pull/14597) Move `snapshots` to `store/snapshots`, rename and bump proto package to v1. + * (x/staking) [#14590](https://github.com/cosmos/cosmos-sdk/pull/14590) `MsgUndelegateResponse` now includes undelegated amount. `x/staking` module's `keeper.Undelegate` now returns 3 values (completionTime, undelegateAmount, error) instead of 2. + * (crypto/keyring) [#14151](https://github.com/cosmos/cosmos-sdk/pull/14151) Move keys presentation from `crypto/keyring` to `client/keys` + * (baseapp) [#14050](https://github.com/cosmos/cosmos-sdk/pull/14050) Refactor `ABCIListener` interface to accept Go contexts. + * (x/auth) [#13850](https://github.com/cosmos/cosmos-sdk/pull/13850/) Remove `MarshalYAML` methods from module (`x/...`) types. + * (modules) [#13850](https://github.com/cosmos/cosmos-sdk/pull/13850) and [#14046](https://github.com/cosmos/cosmos-sdk/pull/14046) Remove gogoproto stringer annotations. This removes the custom `String()` methods on all types that were using the annotations. + * (x/evidence) [#14724](https://github.com/cosmos/cosmos-sdk/pull/14724) Extract Evidence into its own go.mod and rename the package to `cosmossdk.io/x/evidence`. + * (crypto/keyring) [#13734](https://github.com/cosmos/cosmos-sdk/pull/13834) The keyring's `Sign` method now takes a new `signMode` argument. It is only used if the signing key is a Ledger hardware device. You can set it to 0 in all other cases. + * (snapshots) [#14048](https://github.com/cosmos/cosmos-sdk/pull/14048) Move the Snapshot package to the store package. This is done in an effort to group all storage-related logic under one package. + * (signing) [#13701](https://github.com/cosmos/cosmos-sdk/pull/) Add `context.Context` as an argument to `x/auth/signing.VerifySignature`.
+ * (store) [#11825](https://github.com/cosmos/cosmos-sdk/pull/11825) Make the extension snapshotter interface safer to use and rename the util function `WriteExtensionItem` to `WriteExtensionPayload`. + ### Client Breaking Changes + * (x/gov) [#17910](https://github.com/cosmos/cosmos-sdk/pull/17910) Remove telemetry for counting votes and proposals. It was incorrectly counting votes. Use alternatives, such as state streaming. + * (abci) [#15845](https://github.com/cosmos/cosmos-sdk/pull/15845) Remove duplicated events in `logs`. + * (abci) [#15845](https://github.com/cosmos/cosmos-sdk/pull/15845) Add `msg_index` to all event attributes to associate events and messages. + * (x/staking) [#15701](https://github.com/cosmos/cosmos-sdk/pull/15701) `HistoricalInfoKey` now has a binary format. + * (store/streaming) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) State streaming no longer emits BeginBlock, EndBlock and DeliverTx in favour of emitting FinalizeBlock. + * (baseapp) [#15519](https://github.com/cosmos/cosmos-sdk/pull/15519/files) BeginBlock & EndBlock events now mark whether they were emitted during begin block or end block, in order to identify which stage they come from, since they are returned to CometBFT as FinalizeBlock events. + * (grpc-web) [#14652](https://github.com/cosmos/cosmos-sdk/pull/14652) Use the same port for gRPC-Web and the API server. + ### CLI Breaking Changes + * (all) The migration of modules to [AutoCLI](https://docs.cosmos.network/main/core/autocli) led to no changes in UX but a [small change in CLI outputs](https://github.com/cosmos/cosmos-sdk/issues/16651) where results can be nested.
+ * (all) Query pagination flags have been renamed with the migration to AutoCLI: + * `--reverse` -> `--page-reverse` + * `--offset` -> `--page-offset` + * `--limit` -> `--page-limit` + * `--count-total` -> `--page-count-total` + * (cli) [#17184](https://github.com/cosmos/cosmos-sdk/pull/17184) All JSON keys returned by the `status` command are now snake case instead of pascal case. + * (server) [#17177](https://github.com/cosmos/cosmos-sdk/pull/17177) Remove `iavl-lazy-loading` configuration. + * (x/gov) [#16987](https://github.com/cosmos/cosmos-sdk/pull/16987) In ` query gov proposals` the proposal status flag has been renamed from `--status` to `--proposal-status`. Additionally, the flag now uses the enum values: `PROPOSAL_STATUS_DEPOSIT_PERIOD`, `PROPOSAL_STATUS_VOTING_PERIOD`, `PROPOSAL_STATUS_PASSED`, `PROPOSAL_STATUS_REJECTED`, `PROPOSAL_STATUS_FAILED`. + * (x/bank) [#16899](https://github.com/cosmos/cosmos-sdk/pull/16899) With the migration to AutoCLI some bank commands have been split in two: + * Use `total-supply` (or `total`) for querying the total supply and `total-supply-of` for querying the supply of a specific denom. + * Use `denoms-metadata` for querying all denom metadata and `denom-metadata` for querying a specific denom metadata. + * (rosetta) [#16276](https://github.com/cosmos/cosmos-sdk/issues/16276) Rosetta has been migrated to a standalone repo. + * (cli) [#15826](https://github.com/cosmos/cosmos-sdk/pull/15826) Remove ` q account` command. Use ` q auth account` instead. + * (cli) [#15299](https://github.com/cosmos/cosmos-sdk/pull/15299) Remove `--amino` flag from `sign` and `multi-sign` commands. Amino `StdTx` has been deprecated for a while. Amino JSON signing still works as expected. + * (x/gov) [#14880](https://github.com/cosmos/cosmos-sdk/pull/14880) Remove ` tx gov submit-legacy-proposal cancel-software-upgrade` and `software-upgrade` commands. These commands are now in the `x/upgrade` module and using gov v1.
Use `tx upgrade software-upgrade` instead. + * (x/staking) [#14864](https://github.com/cosmos/cosmos-sdk/pull/14864) ` tx staking create-validator` CLI command now takes a JSON file as an argument instead of using required flags. + * (cli) [#14659](https://github.com/cosmos/cosmos-sdk/pull/14659) ` q block ` is removed as it just output JSON. The new command allows either height/hash and is ` q block --type=height|hash `. + * (grpc-web) [#14652](https://github.com/cosmos/cosmos-sdk/pull/14652) Remove `grpc-web.address` flag. + * (client) [#14342](https://github.com/cosmos/cosmos-sdk/pull/14342) ` config` command is now a sub-command using Confix. Use ` config --help` to learn more. + ### Bug Fixes + * (server) [#18254](https://github.com/cosmos/cosmos-sdk/pull/18254) Don't hardcode gRPC address to localhost. + * (x/gov) [#18173](https://github.com/cosmos/cosmos-sdk/pull/18173) Gov hooks now return an error and are *blocking* when they fail. Except for `AfterProposalFailedMinDeposit` and `AfterProposalVotingPeriodEnded`, which log the error and continue. + * (x/gov) [#17873](https://github.com/cosmos/cosmos-sdk/pull/17873) Fail any inactive and active proposals that cannot be decoded. + * (x/slashing) [#18016](https://github.com/cosmos/cosmos-sdk/pull/18016) Fixed builder function for missed blocks key (`validatorMissedBlockBitArrayPrefixKey`) in slashing/migration/v4. + * (x/bank) [#18107](https://github.com/cosmos/cosmos-sdk/pull/18107) Add missing keypair of SendEnabled to restore legacy param set before migration. + * (baseapp) [#17769](https://github.com/cosmos/cosmos-sdk/pull/17769) Ensure we respect block size constraints in the `DefaultProposalHandler`'s `PrepareProposal` handler when a nil or no-op mempool is used. We provide a `TxSelector` type to assist in making transaction selection generalized. We also fix a comparison bug in tx selection when `req.maxTxBytes` is reached.
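The block-size constraint from #17769 above amounts to a greedy selection bounded by `maxTxBytes`. A minimal sketch of that idea (hypothetical `selectTxs` helper; the SDK's actual `TxSelector` interface is richer than this):

```go
package main

import "fmt"

// selectTxs greedily keeps transactions while the running byte total stays
// within maxTxBytes. Transactions that make the total exactly reach the
// limit are still included; the upstream comparison bug was at this boundary.
func selectTxs(txs [][]byte, maxTxBytes int) [][]byte {
	var selected [][]byte
	total := 0
	for _, tx := range txs {
		if total+len(tx) > maxTxBytes {
			continue // this tx would exceed the block size budget
		}
		total += len(tx)
		selected = append(selected, tx)
	}
	return selected
}

func main() {
	txs := [][]byte{[]byte("aaa"), []byte("bbbb"), []byte("cc")}
	// With a 5-byte budget, "aaa" (3) and "cc" (2) fit; "bbbb" is skipped.
	fmt.Println(len(selectTxs(txs, 5))) // 2
}
```

Generalizing this loop into a `TxSelector` type is what lets `PrepareProposal` respect the budget even when a nil or no-op mempool supplies the transactions.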
+ * (mempool) [#17668](https://github.com/cosmos/cosmos-sdk/pull/17668) Fix `PriorityNonceIterator.Next()` nil pointer ref for min priority at the end of iteration. + * (config) [#17649](https://github.com/cosmos/cosmos-sdk/pull/17649) Fix invalid `mempool.max-txs` configuration in `app.config`. + * (baseapp) [#17518](https://github.com/cosmos/cosmos-sdk/pull/17518) Use voting power from vote extensions (CometBFT) instead of the current bonded tokens (x/staking) to determine whether a set of vote extensions is valid. + * (baseapp) [#17251](https://github.com/cosmos/cosmos-sdk/pull/17251) `VerifyVoteExtensions` and `ExtendVote` initialize their own contexts/states, allowing `VerifyVoteExtensions` to be called without `ExtendVote`. + * (x/distribution) [#17236](https://github.com/cosmos/cosmos-sdk/pull/17236) Use `validateCommunityTax` in `Params.ValidateBasic`, preventing a panic when the `CommunityTax` field is nil. + * (x/bank) [#17170](https://github.com/cosmos/cosmos-sdk/pull/17170) Avoid empty spendable error message on send coins. + * (x/group) [#17146](https://github.com/cosmos/cosmos-sdk/pull/17146) Rename x/group legacy ORM package's error codespace from "orm" to "legacy_orm", preventing collisions with ORM's error codespace "orm". + * (types/query) [#16905](https://github.com/cosmos/cosmos-sdk/pull/16905) Collections Pagination now applies proper count when filtering results. + * (x/bank) [#16841](https://github.com/cosmos/cosmos-sdk/pull/16841) Correctly process legacy `DenomAddressIndex` values. + * (x/auth/vesting) [#16733](https://github.com/cosmos/cosmos-sdk/pull/16733) Panic on overflowing and negative EndTimes when creating a PeriodicVestingAccount. + * (x/consensus) [#16713](https://github.com/cosmos/cosmos-sdk/pull/16713) Add missing ABCI param in `MsgUpdateParams`. + * (baseapp) [#16700](https://github.com/cosmos/cosmos-sdk/pull/16700) Fix consensus failure in returning no response to malformed transactions.
+ * [#16639](https://github.com/cosmos/cosmos-sdk/pull/16639) Make sure we don't execute blocks beyond the halt height. + * (baseapp) [#16613](https://github.com/cosmos/cosmos-sdk/pull/16613) Ensure each message in a transaction has a registered handler, otherwise `CheckTx` will fail. + * (baseapp) [#16596](https://github.com/cosmos/cosmos-sdk/pull/16596) Return error during `ExtendVote` and `VerifyVoteExtension` if the request height is earlier than `VoteExtensionsEnableHeight`. + * (baseapp) [#16259](https://github.com/cosmos/cosmos-sdk/pull/16259) Ensure the `Context` block height is correct after `InitChain` and prior to the second block. + * (x/gov) [#16231](https://github.com/cosmos/cosmos-sdk/pull/16231) Fix Rawlog JSON formatting of proposal_vote option field. + * (cli) [#16138](https://github.com/cosmos/cosmos-sdk/pull/16138) Fix snapshot commands panicking if the snapshot doesn't exist. + * (x/staking) [#16043](https://github.com/cosmos/cosmos-sdk/pull/16043) Call `AfterUnbondingInitiated` hook for new unbonding entries only and fix `UnbondingDelegation` entries handling. This is a behavior change compared to Cosmos SDK v0.47.x: now the hook is called only for new unbonding entries. + * (types) [#16010](https://github.com/cosmos/cosmos-sdk/pull/16010) Let `module.CoreAppModuleBasicAdaptor` fall back to legacy genesis handling. + * (types) [#15691](https://github.com/cosmos/cosmos-sdk/pull/15691) Make `Coin.Validate()` check that `.Amount` is not nil. + * (x/crypto) [#15258](https://github.com/cosmos/cosmos-sdk/pull/15258) Write keyhash file with permissions 0600 instead of 0555. + * (x/auth) [#15059](https://github.com/cosmos/cosmos-sdk/pull/15059) `ante.CountSubKeys` returns 0 when passing a nil `Pubkey`.
+ * (x/capability) [#15030](https://github.com/cosmos/cosmos-sdk/pull/15030) Prevent `x/capability` from consuming `GasMeter` gas during `InitMemStore`. + * (types/coin) [#14739](https://github.com/cosmos/cosmos-sdk/pull/14739) Deprecate the method `Coin.IsEqual` in favour of `Coin.Equal`. The difference between the two methods is that the first one results in a panic when denoms are not equal. This panic led to unexpected behavior. + ### Deprecated + * (types) [#16980](https://github.com/cosmos/cosmos-sdk/pull/16980) Deprecate `IntProto` and `DecProto`. Instead, `math.Int` and `math.LegacyDec` should be used respectively. Both types support `Marshal` and `Unmarshal` for binary serialization. + * (x/staking) [#14567](https://github.com/cosmos/cosmos-sdk/pull/14567) The `delegator_address` field of `MsgCreateValidator` has been deprecated. + diff --git a/docs/sdk/v0.50/documentation/application-framework/app-go-v2.mdx b/docs/sdk/v0.50/documentation/application-framework/app-go-v2.mdx new file mode 100644 index 00000000..cba8dd35 --- /dev/null +++ b/docs/sdk/v0.50/documentation/application-framework/app-go-v2.mdx @@ -0,0 +1,2923 @@ +--- +title: Overview of `app_v2.go` +--- + + +**Synopsis** + +The Cosmos SDK allows much easier wiring of an `app.go` thanks to App Wiring and [`depinject`](/docs/sdk/v0.50/documentation/module-system/depinject). +Learn more about the rationale of App Wiring in [ADR-057](/docs/sdk/next/documentation/legacy/adr-comprehensive). + + + + +**Pre-requisite Readings** + +* [ADR 057: App Wiring](/docs/sdk/next/documentation/legacy/adr-comprehensive) +* [Depinject Documentation](/docs/sdk/v0.50/documentation/module-system/depinject) +* [Modules depinject-ready](/docs/sdk/v0.50/documentation/module-system/depinject) + + + +This section is intended to provide an overview of the `SimApp` `app_v2.go` file with App Wiring. + +## `app_config.go` + +The `app_config.go` file is the single place to configure all module parameters. + +1.
Create the `AppConfig` variable: + + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/depinject" + + _ "cosmossdk.io/x/circuit" / import for side-effects + _ "cosmossdk.io/x/evidence" / import for side-effects + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for 
side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/crisis" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + "github.com/cosmos/cosmos-sdk/x/genutil" + "github.com/cosmos/cosmos-sdk/x/gov" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/params" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + + "cosmossdk.io/core/appconfig" + circuittypes "cosmossdk.io/x/circuit/types" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + "cosmossdk.io/x/nft" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / 
module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + circuittypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: []string{ + }, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: 
consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + }, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + }, + ), + }, + )) + ) + ``` + +2. Configure the `runtime` module: + + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + 
"cosmossdk.io/depinject" + + _ "cosmossdk.io/x/circuit" / import for side-effects + _ "cosmossdk.io/x/evidence" / import for side-effects + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/crisis" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + "github.com/cosmos/cosmos-sdk/x/genutil" + "github.com/cosmos/cosmos-sdk/x/gov" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/params" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + + "cosmossdk.io/core/appconfig" + circuittypes "cosmossdk.io/x/circuit/types" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + "cosmossdk.io/x/nft" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + 
genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + circuittypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: []string{ + }, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: 
consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + }, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + }, + ), + }, + )) + ) + ``` + +3. Configure the modules defined in the `PreBlocker`, `BeginBlocker` and `EndBlocker` and the `tx` module: + + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 
"cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/depinject" + + _ "cosmossdk.io/x/circuit" / import for side-effects + _ "cosmossdk.io/x/evidence" / import for side-effects + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/crisis" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + "github.com/cosmos/cosmos-sdk/x/genutil" + "github.com/cosmos/cosmos-sdk/x/gov" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/params" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + + "cosmossdk.io/core/appconfig" + circuittypes "cosmossdk.io/x/circuit/types" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + "cosmossdk.io/x/nft" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes 
"github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to 
keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + circuittypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: []string{ + }, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: 
consensustypes.ModuleName,
+ Config: appconfig.WrapAny(&consensusmodulev1.Module{
+ }),
+ },
+ {
+ Name: circuittypes.ModuleName,
+ Config: appconfig.WrapAny(&circuitmodulev1.Module{
+ }),
+ },
+ },
+ }),
+ depinject.Supply(
+ / supply custom module basics
+ map[string]module.AppModuleBasic{
+ genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+ govtypes.ModuleName: gov.NewAppModuleBasic(
+ []govclient.ProposalHandler{
+ paramsclient.ProposalHandler,
+ },
+ ),
+ },
+ ))
+ )
+ ```
+
+### Complete `app_config.go`
+
+```go expandable
+package simapp
+
+import (
+
+ "time"
+ "google.golang.org/protobuf/types/known/durationpb"
+
+ runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1"
+ appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1"
+ authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1"
+ authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1"
+ bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1"
+ circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1"
+ consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1"
+ crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1"
+ distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1"
+ evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1"
+ feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1"
+ genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1"
+ govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1"
+ groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1"
+ mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1"
+ nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1"
+ paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1"
+ slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1"
+ stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1"
+ txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1"
+ upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1"
+ vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1"
+ 
"cosmossdk.io/depinject" + + _ "cosmossdk.io/x/circuit" / import for side-effects + _ "cosmossdk.io/x/evidence" / import for side-effects + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/crisis" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + "github.com/cosmos/cosmos-sdk/x/genutil" + "github.com/cosmos/cosmos-sdk/x/gov" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/params" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + + "cosmossdk.io/core/appconfig" + circuittypes "cosmossdk.io/x/circuit/types" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + "cosmossdk.io/x/nft" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + 
genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName +}, + { + Account: distrtypes.ModuleName +}, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter +}}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner +}}, + { + Account: nft.ModuleName +}, +} + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName +} + + / application configuration (used by depinject) + +AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, +}, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, +}, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", +}, +}, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + circuittypes.ModuleName, +}, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: []string{ +}, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ +}, +}), +}, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address +}), +}, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ +}), +}, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, +}), +}, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ +}), +}, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ +}), +}, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ +}), +}, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ +}), +}, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ +}), +}, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ +}), +}, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ +}), +}, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ +}), +}, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ +}), +}, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ +}), +}, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, +}), +}, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ +}), +}, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ +}), +}, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ +}), +}, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ +}), +}, + { + Name: consensustypes.ModuleName, + Config: 
appconfig.WrapAny(&consensusmodulev1.Module{
+}),
+},
+ {
+ Name: circuittypes.ModuleName,
+ Config: appconfig.WrapAny(&circuitmodulev1.Module{
+}),
+},
+},
+}),
+ depinject.Supply(
+ / supply custom module basics
+ map[string]module.AppModuleBasic{
+ genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+ govtypes.ModuleName: gov.NewAppModuleBasic(
+ []govclient.ProposalHandler{
+ paramsclient.ProposalHandler,
+},
+ ),
+},
+ ))
+)
+```
+
+### Alternative formats
+
+The example above shows how to create an `AppConfig` using Go. However, it is also possible to create an `AppConfig` using YAML or JSON.\
+The configuration can then be embedded with `go:embed` and read with [`appconfig.LoadYAML`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadYAML) or [`appconfig.LoadJSON`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadJSON) in `app_v2.go`.
+
+```go
+//go:embed app_config.yaml
+var (
+ appConfigYaml []byte
+ appConfig = appconfig.LoadYAML(appConfigYaml)
+)
+```
+
+```yaml expandable
+modules:
+  - name: runtime
+    config:
+      "@type": cosmos.app.runtime.v1alpha1.Module
+      app_name: SimApp
+      begin_blockers: [staking, auth, bank]
+      end_blockers: [bank, auth, staking]
+      init_genesis: [bank, auth, staking]
+  - name: auth
+    config:
+      "@type": cosmos.auth.module.v1.Module
+      bech32_prefix: cosmos
+  - name: bank
+    config:
+      "@type": cosmos.bank.module.v1.Module
+  - name: staking
+    config:
+      "@type": cosmos.staking.module.v1.Module
+  - name: tx
+    config:
+      "@type": cosmos.tx.config.v1.Config
+```
+
+A more complete example of `app.yaml` can be found [here](https://github.com/cosmos/cosmos-sdk/blob/91b1d83f1339e235a1dfa929ecc00084101a19e3/simapp/app.yaml).
+
+## `app_v2.go`
+
+`app_v2.go` is the place where `SimApp` is constructed. `depinject.Inject` facilitates that by automatically wiring the app modules and keepers, given an application configuration (`AppConfig`). 
`SimApp` is then constructed by calling `appBuilder.Build(...)` on the injected `*runtime.AppBuilder`.\
+In short, `depinject` and the [`runtime` package](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/runtime) abstract away the wiring of the app, and the `AppBuilder` is where the app is constructed. [`runtime`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/runtime) takes care of registering the codecs, KV stores and subspaces, and instantiating `baseapp`.
+
+```go expandable
+//go:build !app_v1
+
+package simapp
+
+import (
+
+ "io"
+ "os"
+ "path/filepath"
+ "cosmossdk.io/log"
+ dbm "github.com/cosmos/cosmos-db"
+ "cosmossdk.io/depinject"
+ storetypes "cosmossdk.io/store/types"
+ circuitkeeper "cosmossdk.io/x/circuit/keeper"
+ evidencekeeper "cosmossdk.io/x/evidence/keeper"
+ feegrantkeeper "cosmossdk.io/x/feegrant/keeper"
+ nftkeeper "cosmossdk.io/x/nft/keeper"
+ upgradekeeper "cosmossdk.io/x/upgrade/keeper"
+ "github.com/cosmos/cosmos-sdk/baseapp"
+ "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/codec"
+ codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+ "github.com/cosmos/cosmos-sdk/runtime"
+ "github.com/cosmos/cosmos-sdk/server"
+ "github.com/cosmos/cosmos-sdk/server/api"
+ "github.com/cosmos/cosmos-sdk/server/config"
+ servertypes "github.com/cosmos/cosmos-sdk/server/types"
+ testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb"
+ "github.com/cosmos/cosmos-sdk/types/module"
+ "github.com/cosmos/cosmos-sdk/x/auth"
+ authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+ authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+ authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper"
+ bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper"
+ consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper"
+ crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper"
+ distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper"
+ govkeeper 
"github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitBreakerKeeper circuitkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.CrisisKeeper, + &app.UpgradeKeeper, + &app.ParamsKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitBreakerKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) 
+ / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) + + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. 
+ / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + + +When using `depinject.Inject`, the injected types must be pointers. + + +### Advanced Configuration + +In advanced cases, it is possible to inject extra (module) configuration in a way that is not (yet) supported by `AppConfig`.\ +In this case, use `depinject.Configs` for combining the extra configuration and `AppConfig`, and `depinject.Supply` to provide that extra configuration. +More information on how `depinject.Configs` and `depinject.Supply` work can be found in the [`depinject` documentation](https://pkg.go.dev/cosmossdk.io/depinject). + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/depinject" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes 
"github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitBreakerKeeper circuitkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). 
+ / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.CrisisKeeper, + &app.UpgradeKeeper, + &app.ParamsKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitBreakerKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. 
+ / + / app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +### Registering non app wiring modules + +It is possible to combine app wiring / depinject enabled modules with non app wiring modules. +To do so, use the `app.RegisterModules` method to register the modules on your app, as well as `app.RegisterStores` for registering the extra stores needed. + +```go expandable +/ .... +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ +/ register module manually +app.RegisterStores(storetypes.NewKVStoreKey(example.ModuleName)) + +app.ExampleKeeper = examplekeeper.NewKeeper(app.appCodec, app.AccountKeeper.AddressCodec(), runtime.NewKVStoreService(app.GetKey(example.ModuleName)), authtypes.NewModuleAddress(govtypes.ModuleName).String()) + exampleAppModule := examplemodule.NewAppModule(app.ExampleKeeper) + if err := app.RegisterModules(&exampleAppModule); err != nil { + panic(err) +} + +/ .... +``` + + +When using AutoCLI and combining app wiring and non app wiring modules, the AutoCLI options should be constructed manually instead of injected. +Otherwise, the non depinject modules will be missed and their CLI will not be registered. + + +### Complete `app_v2.go` + + +Note that in the complete `SimApp` `app_v2.go` file, testing utilities are also defined, but they could just as well be defined in a separate file. + + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/depinject" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims 
"github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitBreakerKeeper circuitkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). 
+ / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.CrisisKeeper, + &app.UpgradeKeeper, + &app.ParamsKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitBreakerKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. 
+ / + / app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) + +abci.ResponseInitChain { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` diff --git a/docs/sdk/v0.50/documentation/application-framework/app-go.mdx b/docs/sdk/v0.50/documentation/application-framework/app-go.mdx new file mode 100644 index 00000000..f1691fa5 --- /dev/null +++ b/docs/sdk/v0.50/documentation/application-framework/app-go.mdx @@ -0,0 +1,940 @@ +--- +title: Overview of `app.go` +description: >- + This section is intended to provide an overview of the SimApp app.go file and + is still a work in progress. For now please instead read the tutorials for a + deep dive on how to build a chain. 
+--- + +This section is intended to provide an overview of the `SimApp` `app.go` file and is still a work in progress. +For now please instead read the [tutorials](https://tutorials.cosmos.network) for a deep dive on how to build a chain. + +## Complete `app.go` + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + 
"github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient 
"github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. 
+type stdAccAddressCodec struct{ +} + +func (g stdAccAddressCodec) + +StringToBytes(text string) ([]byte, error) { + if text == "" { + return nil, nil +} + +return sdk.AccAddressFromBech32(text) +} + +func (g stdAccAddressCodec) + +BytesToString(bz []byte) (string, error) { + if bz == nil { + return "", nil +} + +return sdk.AccAddress(bz).String(), nil +} + +/ stdValAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. +type stdValAddressCodec struct{ +} + +func (g stdValAddressCodec) + +StringToBytes(text string) ([]byte, error) { + return sdk.ValAddressFromBech32(text) +} + +func (g stdValAddressCodec) + +BytesToString(bz []byte) (string, error) { + return sdk.ValAddress(bz).String(), nil +} + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + + / At startup, after all modules have been registered, check that all prot + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application 
updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses()
+
+map[string]bool {
+	modAccAddrs := make(map[string]bool)
+	for acc := range GetMaccPerms() {
+		modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
+}
+
+	/ allow the following addresses to receive funds
+	delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())
+
+return modAccAddrs
+}
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey)
+
+paramskeeper.Keeper {
+	paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey)
+
+paramsKeeper.Subspace(authtypes.ModuleName)
+
+paramsKeeper.Subspace(banktypes.ModuleName)
+
+paramsKeeper.Subspace(stakingtypes.ModuleName)
+
+paramsKeeper.Subspace(minttypes.ModuleName)
+
+paramsKeeper.Subspace(distrtypes.ModuleName)
+
+paramsKeeper.Subspace(slashingtypes.ModuleName)
+
+paramsKeeper.Subspace(govtypes.ModuleName)
+
+paramsKeeper.Subspace(crisistypes.ModuleName)
+
+return paramsKeeper
+}
+```
diff --git a/docs/sdk/v0.50/documentation/application-framework/app-mempool.mdx b/docs/sdk/v0.50/documentation/application-framework/app-mempool.mdx
new file mode 100644
index 00000000..9557373e
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/application-framework/app-mempool.mdx
@@ -0,0 +1,851 @@
+---
+title: Application Mempool
+---
+
+
+**Synopsis**
+This section describes how the app-side mempool can be used and replaced.
+
+
+Since `v0.47` the application has its own mempool to allow much more granular
+block building than previous versions. This change was enabled by
+[ABCI 1.0](https://github.com/cometbft/cometbft/blob/v0.37.0/spec/abci).
+Notably, it introduces the `PrepareProposal` and `ProcessProposal` steps of ABCI++.
+
+
+**Pre-requisite Readings**
+
+* [BaseApp](/docs/sdk/v0.50/learn/advanced/baseapp)
+
+
+
+## Prepare Proposal
+
+`PrepareProposal` handles construction of the block, meaning that when a proposer
+is preparing to propose a block, it requests the application to evaluate a
+`RequestPrepareProposal`, which contains a series of transactions from CometBFT's
+mempool. At this point, the application has complete control over the proposal.
+It can modify, delete, and inject transactions from its own app-side mempool into
+the proposal, or even ignore all the transactions altogether. What the application
+does with the transactions provided to it by `RequestPrepareProposal` has no
+effect on CometBFT's mempool.
+
+Note that the application defines the semantics of `PrepareProposal`; it
+MAY be non-deterministic and is only executed by the current block proposer.
+
+Reading "mempool" twice in the previous paragraph may be confusing, so let's break it down.
+CometBFT has a mempool that handles gossiping transactions to other nodes
+in the network. How these transactions are ordered is determined by CometBFT's
+mempool, typically FIFO. However, since the application is able to fully inspect
+all transactions, it can provide greater control over transaction ordering.
+Allowing the application to handle ordering enables the application to define how
+it would like the block constructed.
+
+The Cosmos SDK defines the `DefaultProposalHandler` type, which provides applications with
+`PrepareProposal` and `ProcessProposal` handlers. If you decide to implement your
+own `PrepareProposal` handler, you must ensure that the selected transactions
+DO NOT exceed the maximum block gas (if set) and the maximum bytes provided
+by `req.MaxBytes`.
+ +```go expandable +package baseapp + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtcrypto "github.com/cometbft/cometbft/crypto" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +/ VoteExtensionThreshold defines the total voting power % that must be +/ submitted in order for all vote extensions to be considered valid for a +/ given height. +var VoteExtensionThreshold = math.LegacyNewDecWithPrec(667, 3) + +type ( + / Validator defines the interface contract require for verifying vote extension + / signatures. Typically, this will be implemented by the x/staking module, + / which has knowledge of the CometBFT public key. + Validator interface { + CmtConsPublicKey() (cmtprotocrypto.PublicKey, error) + +BondedTokens() + +math.Int +} + + / ValidatorStore defines the interface contract require for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. + ValidatorStore interface { + GetValidatorByConsAddr(sdk.Context, cryptotypes.Address) (Validator, error) + +TotalBondedTokens(ctx sdk.Context) + +math.Int +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in ProcessProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. 
+func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + currentHeight int64, + chainID string, + extCommit abci.ExtendedCommitInfo, +) + +error { + cp := ctx.ConsensusParams() + extsEnabled := cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight > 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var sumVP math.Int + for _, vote := range extCommit.Votes { + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := cmtcrypto.Address(vote.Validator.Address) + +validator, err := valStore.GetValidatorByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X: %w", valConsAddr, err) +} + if validator == nil { + return fmt.Errorf("validator %X not found", valConsAddr) +} + +cmtPubKeyProto, err := validator.CmtConsPublicKey() + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(cmtPubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return 
fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP = sumVP.Add(validator.BondedTokens()) +} + + / Ensure we have at least 2/3 voting power that submitted valid vote + / extensions. + totalVP := valStore.TotalBondedTokens(ctx) + percentSubmitted := math.LegacyNewDecFromInt(sumVP).Quo(math.LegacyNewDecFromInt(totalVP)) + if percentSubmitted.LT(VoteExtensionThreshold) { + return fmt.Errorf("insufficient cumulative voting power received to verify vote extensions; got: %s, expected: >=%s", percentSubmitted, VoteExtensionThreshold) +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) + +DefaultProposalHandler { + return DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, +} +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. 
+/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} + +var ( + selectedTxs [][]byte + totalTxBytes int64 + ) + iterator := h.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. + bz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + err := h.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +} + +else { + txSize := int64(len(bz)) + if totalTxBytes += txSize; totalTxBytes <= req.MaxTxBytes { + selectedTxs = append(selectedTxs, bz) +} + +else { + / We've reached capacity per req.MaxTxBytes so we cannot select any + / more transactions. + break +} + +} + +iterator = iterator.Next() +} + +return &abci.ResponsePrepareProposal{ + Txs: selectedTxs +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. 
Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +func (h DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + for _, txBytes := range req.Txs { + _, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. 
+func NoOpProcessProposal()
+
+sdk.ProcessProposalHandler {
+	return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) {
+		return &abci.ResponseProcessProposal{
+			Status: abci.ResponseProcessProposal_ACCEPT
+}, nil
+}
+}
+
+/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an
+/ empty byte slice as the vote extension.
+func NoOpExtendVote()
+
+sdk.ExtendVoteHandler {
+	return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+		return &abci.ResponseExtendVote{
+			VoteExtension: []byte{
+}}, nil
+}
+}
+
+/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It
+/ will always return an ACCEPT status with no error.
+func NoOpVerifyVoteExtensionHandler()
+
+sdk.VerifyVoteExtensionHandler {
+	return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
+		return &abci.ResponseVerifyVoteExtension{
+			Status: abci.ResponseVerifyVoteExtension_ACCEPT
+}, nil
+}
+}
+```
+
+This default implementation can be overridden by the application developer in
+favor of a custom implementation in [`app.go`](/docs/sdk/v0.50/documentation/application-framework/app-go-v2):
+
+```go
+prepareOpt := func(app *baseapp.BaseApp) {
+	abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+
+app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, prepareOpt)
+```
+
+## Process Proposal
+
+`ProcessProposal` handles the validation of a proposal from `PrepareProposal`,
+which also includes a block header. That is, after a block has been proposed,
+the other validators have the right to vote on it. In the default
+implementation, each validator runs basic validity checks on every
+transaction in the proposal.
+
+Note that `ProcessProposal` MAY NOT be non-deterministic, i.e. it must be deterministic.
+This means if `ProcessProposal` panics or fails and we reject, all honest validator
+processes will prevote nil and the CometBFT round will proceed again until a valid
+proposal is proposed.
+
+Here is the default implementation:
+
+```go expandable
+package baseapp
+
+import (
+
+	"bytes"
+	"fmt"
+	"cosmossdk.io/math"
+	"github.com/cockroachdb/errors"
+	abci "github.com/cometbft/cometbft/abci/types"
+	cmtcrypto "github.com/cometbft/cometbft/crypto"
+	cryptoenc "github.com/cometbft/cometbft/crypto/encoding"
+	cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto"
+	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+	protoio "github.com/cosmos/gogoproto/io"
+	"github.com/cosmos/gogoproto/proto"
+
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/types/mempool"
+)
+
+/ VoteExtensionThreshold defines the total voting power % that must be
+/ submitted in order for all vote extensions to be considered valid for a
+/ given height.
+var VoteExtensionThreshold = math.LegacyNewDecWithPrec(667, 3)
+
+type (
+	/ Validator defines the interface contract require for verifying vote extension
+	/ signatures. Typically, this will be implemented by the x/staking module,
+	/ which has knowledge of the CometBFT public key.
+	Validator interface {
+		CmtConsPublicKey() (cmtprotocrypto.PublicKey, error)
+
+BondedTokens()
+
+math.Int
+}
+
+	/ ValidatorStore defines the interface contract require for verifying vote
+	/ extension signatures. Typically, this will be implemented by the x/staking
+	/ module, which has knowledge of the CometBFT public key.
+ ValidatorStore interface { + GetValidatorByConsAddr(sdk.Context, cryptotypes.Address) (Validator, error) + +TotalBondedTokens(ctx sdk.Context) + +math.Int +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in ProcessProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + currentHeight int64, + chainID string, + extCommit abci.ExtendedCommitInfo, +) + +error { + cp := ctx.ConsensusParams() + extsEnabled := cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight > 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var sumVP math.Int + for _, vote := range extCommit.Votes { + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := cmtcrypto.Address(vote.Validator.Address) + +validator, err := valStore.GetValidatorByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X: %w", valConsAddr, err) +} + if validator == nil { + return fmt.Errorf("validator %X not found", valConsAddr) +} + +cmtPubKeyProto, err := validator.CmtConsPublicKey() + if err != nil { + return fmt.Errorf("failed to get validator %X 
public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(cmtPubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP = sumVP.Add(validator.BondedTokens()) +} + + / Ensure we have at least 2/3 voting power that submitted valid vote + / extensions. + totalVP := valStore.TotalBondedTokens(ctx) + percentSubmitted := math.LegacyNewDecFromInt(sumVP).Quo(math.LegacyNewDecFromInt(totalVP)) + if percentSubmitted.LT(VoteExtensionThreshold) { + return fmt.Errorf("insufficient cumulative voting power received to verify vote extensions; got: %s, expected: >=%s", percentSubmitted, VoteExtensionThreshold) +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. 
+ DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) + +DefaultProposalHandler { + return DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, +} +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} + +var ( + selectedTxs [][]byte + totalTxBytes int64 + ) + iterator := h.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. 
But some mempool implementations may insert invalid txs, so we + / check again. + bz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + err := h.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +} + +else { + txSize := int64(len(bz)) + if totalTxBytes += txSize; totalTxBytes <= req.MaxTxBytes { + selectedTxs = append(selectedTxs, bz) +} + +else { + / We've reached capacity per req.MaxTxBytes so we cannot select any + / more transactions. + break +} + +} + +iterator = iterator.Next() +} + +return &abci.ResponsePrepareProposal{ + Txs: selectedTxs +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +func (h DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + for _, txBytes := range req.Txs { + _, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. +func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. 
+func NoOpVerifyVoteExtensionHandler()
+
+sdk.VerifyVoteExtensionHandler {
+	return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
+		return &abci.ResponseVerifyVoteExtension{
+			Status: abci.ResponseVerifyVoteExtension_ACCEPT
+}, nil
+}
+}
+```
+
+Like `PrepareProposal`, this implementation is the default and can be modified by
+the application developer in [`app.go`](/docs/sdk/v0.50/documentation/application-framework/app-go-v2). If you decide to implement
+your own `ProcessProposal` handler, you must ensure that the transactions
+provided in the proposal DO NOT exceed the maximum block gas (if set).
+
+```go
+processOpt := func(app *baseapp.BaseApp) {
+	abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+
+app.SetProcessProposal(abciPropHandler.ProcessProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, processOpt)
+```
+
+## Mempool
+
+Now that we have walked through `PrepareProposal` and `ProcessProposal`, we can move on to the mempool itself.
+
+There are countless mempool designs that an application developer could write; the SDK opted to provide only simple mempool implementations.
+Namely, the SDK provides the following mempools:
+
+* [No-op Mempool](#no-op-mempool)
+* [Sender Nonce Mempool](#sender-nonce-mempool)
+* [Priority Nonce Mempool](#priority-nonce-mempool)
+
+The default SDK mempool is a [No-op Mempool](#no-op-mempool), but it can be replaced by the application developer in [`app.go`](/docs/sdk/v0.50/documentation/application-framework/app-go-v2):
+
+```go
+nonceMempool := mempool.NewSenderNonceMempool()
+	mempoolOpt := baseapp.SetMempool(nonceMempool)
+
+baseAppOptions = append(baseAppOptions, mempoolOpt)
+```
+
+### No-op Mempool
+
+A no-op mempool is a mempool where transactions are completely discarded and ignored when BaseApp interacts with the mempool.
+When this mempool is used, it is assumed that an application will rely on CometBFT's transaction ordering defined in `RequestPrepareProposal`,
+which is FIFO-ordered by default.
+
+> Note: If a NoOp mempool is used, PrepareProposal and ProcessProposal both should be aware of this as
+> PrepareProposal could include transactions that could fail verification in ProcessProposal.
+
+### Sender Nonce Mempool
+
+The sender nonce mempool keeps transactions from each sender sorted by nonce in order to avoid nonce-ordering issues.
+It works by storing each sender's transactions in a list sorted by nonce. When the proposer asks for transactions to be included in a block, it randomly selects a sender and gets the first transaction in that sender's list. It repeats this until the mempool is empty or the block is full.
+
+It is configurable with the following parameters:
+
+#### MaxTxs
+
+It is an integer value that sets the mempool in one of three modes: *bounded*, *unbounded*, or *disabled*.
+
+* **negative**: Disabled; the mempool does not insert new transactions and returns early.
+* **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
+* **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches the `maxTx` value.
+
+#### Seed
+
+Sets the seed for the random number generator used to select transactions from the mempool.
+
+### Priority Nonce Mempool
+
+The [priority nonce mempool](https://github.com/cosmos/cosmos-sdk/blob/main/types/mempool/priority_nonce_spec) is a mempool implementation that stores txs in a set partially ordered by 2 dimensions:
+
+* priority
+* sender-nonce (sequence number)
+
+Internally it uses one priority ordered [skip list](https://pkg.go.dev/github.com/huandu/skiplist) and one skip list per sender ordered by sender-nonce (sequence number).
When there are multiple txs from the same sender, they are not always comparable by priority to other sender txs and must be partially ordered by both sender-nonce and priority.
+
+It is configurable with the following parameters:
+
+#### MaxTxs
+
+It is an integer value that sets the mempool in one of three modes: *bounded*, *unbounded*, or *disabled*.
+
+* **negative**: Disabled; the mempool does not insert new transactions and returns early.
+* **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
+* **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches the `maxTx` value.
+
+#### Callback
+
+The priority nonce mempool provides mempool options allowing the application to set callbacks.
+
+* **OnRead**: Sets a callback to be called when a transaction is read from the mempool.
+* **TxReplacement**: Sets a callback to be called when a duplicate transaction nonce is detected during mempool insertion. The application can define a transaction replacement rule based on tx priority or certain transaction fields.
+
+More information on the SDK mempool implementation can be found in the [godocs](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/mempool).
diff --git a/docs/sdk/v0.50/documentation/application-framework/app-testnet.mdx b/docs/sdk/v0.50/documentation/application-framework/app-testnet.mdx
new file mode 100644
index 00000000..4f08d1f5
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/application-framework/app-testnet.mdx
@@ -0,0 +1,257 @@
+---
+title: Application Testnets
+description: >-
+  Building an application is complicated and requires a lot of testing. The
+  Cosmos SDK provides a way to test your application in a real-world
+  environment: a testnet.
+---
+
+Building an application is complicated and requires a lot of testing. The Cosmos SDK provides a way to test your application in a real-world environment: a testnet.
+
+We allow developers to take the state from their mainnet and run tests against that state. This is useful for testing upgrade migrations, or for testing the application in a real-world environment.
+
+## Testnet Setup
+
+We will be breaking down the steps to create a testnet from mainnet state.
+
+```go
+/ InitSimAppForTestnet is broken down into two sections:
+	/ Required Changes: Changes that, if not made, will cause the testnet to halt or panic
+	/ Optional Changes: Changes to customize the testnet to one's liking (lower vote times, fund accounts, etc)
+
+func InitSimAppForTestnet(app *SimApp, newValAddr bytes.HexBytes, newValPubKey crypto.PubKey, newOperatorAddress, upgradeToTrigger string) *SimApp {
+	...
+}
+```
+
+### Required Changes
+
+#### Staking
+
+When creating a testnet, the important part is migrating the validator set from many validators to one or a few. This allows developers to spin up the chain without needing to replace validator keys.
+
+```go expandable
+ctx := app.BaseApp.NewUncachedContext(true, tmproto.Header{
+})
+	pubkey := &ed25519.PubKey{
+	Key: newValPubKey.Bytes()
+}
+
+pubkeyAny, err := types.NewAnyWithValue(pubkey)
+	if err != nil {
+	tmos.Exit(err.Error())
+}
+
+	/ STAKING
+	/
+
+	/ Create Validator struct for our new validator.
+ _, bz, err := bech32.DecodeAndConvert(newOperatorAddress) + if err != nil { + tmos.Exit(err.Error()) +} + +bech32Addr, err := bech32.ConvertAndEncode("simvaloper", bz) + if err != nil { + tmos.Exit(err.Error()) +} + newVal := stakingtypes.Validator{ + OperatorAddress: bech32Addr, + ConsensusPubkey: pubkeyAny, + Jailed: false, + Status: stakingtypes.Bonded, + Tokens: sdk.NewInt(900000000000000), + DelegatorShares: sdk.MustNewDecFromStr("10000000"), + Description: stakingtypes.Description{ + Moniker: "Testnet Validator", +}, + Commission: stakingtypes.Commission{ + CommissionRates: stakingtypes.CommissionRates{ + Rate: sdk.MustNewDecFromStr("0.05"), + MaxRate: sdk.MustNewDecFromStr("0.1"), + MaxChangeRate: sdk.MustNewDecFromStr("0.05"), +}, +}, + MinSelfDelegation: sdk.OneInt(), +} + + / Remove all validators from power store + stakingKey := app.GetKey(stakingtypes.ModuleName) + stakingStore := ctx.KVStore(stakingKey) + iterator := app.StakingKeeper.ValidatorsPowerStoreIterator(ctx) + for ; iterator.Valid(); iterator.Next() { + stakingStore.Delete(iterator.Key()) +} + +iterator.Close() + + / Remove all validators from last validators store + iterator = app.StakingKeeper.LastValidatorsIterator(ctx) + for ; iterator.Valid(); iterator.Next() { + app.StakingKeeper.LastValidatorPower.Delete(iterator.Key()) +} + +iterator.Close() + + / Add our validator to power and last validators store + app.StakingKeeper.SetValidator(ctx, newVal) + +err = app.StakingKeeper.SetValidatorByConsAddr(ctx, newVal) + if err != nil { + panic(err) +} + +app.StakingKeeper.SetValidatorByPowerIndex(ctx, newVal) + +app.StakingKeeper.SetLastValidatorPower(ctx, newVal.GetOperator(), 0) + if err := app.StakingKeeper.Hooks().AfterValidatorCreated(ctx, newVal.GetOperator()); err != nil { + panic(err) +} +``` + +#### Distribution + +Since the validator set has changed, we need to update the distribution records for the new validator.
+ +```go +/ Initialize records for this validator across all distribution stores + app.DistrKeeper.ValidatorHistoricalRewards.Set(ctx, newVal.GetOperator(), 0, distrtypes.NewValidatorHistoricalRewards(sdk.DecCoins{ +}, 1)) + +app.DistrKeeper.ValidatorCurrentRewards.Set(ctx, newVal.GetOperator(), distrtypes.NewValidatorCurrentRewards(sdk.DecCoins{ +}, 1)) + +app.DistrKeeper.ValidatorAccumulatedCommission.Set(ctx, newVal.GetOperator(), distrtypes.InitialValidatorAccumulatedCommission()) + +app.DistrKeeper.ValidatorOutstandingRewards.Set(ctx, newVal.GetOperator(), distrtypes.ValidatorOutstandingRewards{ + Rewards: sdk.DecCoins{ +}}) +``` + +#### Slashing + +We also need to set the validator signing info for the new validator. + +```go expandable +/ SLASHING + / + + / Set validator signing info for our new validator. + newConsAddr := sdk.ConsAddress(newValAddr.Bytes()) + newValidatorSigningInfo := slashingtypes.ValidatorSigningInfo{ + Address: newConsAddr.String(), + StartHeight: app.LastBlockHeight() - 1, + Tombstoned: false, +} + +app.SlashingKeeper.ValidatorSigningInfo.Set(ctx, newConsAddr, newValidatorSigningInfo) +``` + +#### Bank + +It is useful to create new accounts for your testing purposes. This avoids the need to have the same key as you may have on mainnet. 
+ +```go expandable +/ BANK + / + defaultCoins := sdk.NewCoins(sdk.NewInt64Coin("ustake", 1000000000000)) + localSimAppAccounts := []sdk.AccAddress{ + sdk.MustAccAddressFromBech32("cosmos12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj"), + sdk.MustAccAddressFromBech32("cosmos1cyyzpxplxdzkeea7kwsydadg87357qnahakaks"), + sdk.MustAccAddressFromBech32("cosmos18s5lynnmx37hq4wlrw9gdn68sg2uxp5rgk26vv"), + sdk.MustAccAddressFromBech32("cosmos1qwexv7c6sm95lwhzn9027vyu2ccneaqad4w8ka"), + sdk.MustAccAddressFromBech32("cosmos14hcxlnwlqtq75ttaxf674vk6mafspg8xwgnn53"), + sdk.MustAccAddressFromBech32("cosmos12rr534cer5c0vj53eq4y32lcwguyy7nndt0u2t"), + sdk.MustAccAddressFromBech32("cosmos1nt33cjd5auzh36syym6azgc8tve0jlvklnq7jq"), + sdk.MustAccAddressFromBech32("cosmos10qfrpash5g2vk3hppvu45x0g860czur8ff5yx0"), + sdk.MustAccAddressFromBech32("cosmos1f4tvsdukfwh6s9swrc24gkuz23tp8pd3e9r5fa"), + sdk.MustAccAddressFromBech32("cosmos1myv43sqgnj5sm4zl98ftl45af9cfzk7nhjxjqh"), + sdk.MustAccAddressFromBech32("cosmos14gs9zqh8m49yy9kscjqu9h72exyf295afg6kgk"), + sdk.MustAccAddressFromBech32("cosmos1jllfytsz4dryxhz5tl7u73v29exsf80vz52ucc") +} + + / Fund localSimApp accounts + for _, account := range localSimAppAccounts { + err := app.BankKeeper.MintCoins(ctx, minttypes.ModuleName, defaultCoins) + if err != nil { + tmos.Exit(err.Error()) +} + +err = app.BankKeeper.SendCoinsFromModuleToAccount(ctx, minttypes.ModuleName, account, defaultCoins) + if err != nil { + tmos.Exit(err.Error()) +} + +} +``` + +#### Upgrade + +If you would like to schedule an upgrade the below can be used. 
+ +```go expandable +/ UPGRADE + / + if upgradeToTrigger != "" { + upgradePlan := upgradetypes.Plan{ + Name: upgradeToTrigger, + Height: app.LastBlockHeight(), +} + +err = app.UpgradeKeeper.ScheduleUpgrade(ctx, upgradePlan) + if err != nil { + panic(err) +} + +} +``` + +### Optional Changes + +If you have custom modules that rely on specific state from the above modules and/or you would like to test your custom module, you will need to update the state of your custom module to reflect your needs. + +## Running the Testnet + +Before we can run the testnet, we must plug everything together. + +In `root.go`, in the `initRootCmd` function, we add: + +```diff + server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, createSimAppAndExport, addModuleInitFlags) + ++ server.AddTestnetCreatorCommand(rootCmd, simapp.DefaultNodeHome, newTestnetApp, addModuleInitFlags) +``` + +Next, we will add a `newTestnetApp` helper function: + +```diff expandable +/ newTestnetApp starts by running the normal newApp method. From there, the app interface returned is modified in order +/ for a testnet to be created from the provided app.
+func newTestnetApp(logger log.Logger, db cometbftdb.DB, traceStore io.Writer, appOpts servertypes.AppOptions) servertypes.Application { + / Create an app and type cast to an SimApp + app := newApp(logger, db, traceStore, appOpts) + simApp, ok := app.(*simapp.SimApp) + if !ok { + panic("app created from newApp is not of type simApp") + } + + newValAddr, ok := appOpts.Get(server.KeyNewValAddr).(bytes.HexBytes) + if !ok { + panic("newValAddr is not of type bytes.HexBytes") + } + newValPubKey, ok := appOpts.Get(server.KeyUserPubKey).(crypto.PubKey) + if !ok { + panic("newValPubKey is not of type crypto.PubKey") + } + newOperatorAddress, ok := appOpts.Get(server.KeyNewOpAddr).(string) + if !ok { + panic("newOperatorAddress is not of type string") + } + upgradeToTrigger, ok := appOpts.Get(server.KeyTriggerTestnetUpgrade).(string) + if !ok { + panic("upgradeToTrigger is not of type string") + } + + / Make modifications to the normal SimApp required to run the network locally + return simapp.InitSimAppForTestnet(simApp, newValAddr, newValPubKey, newOperatorAddress, upgradeToTrigger) +} +``` diff --git a/docs/sdk/v0.50/documentation/application-framework/app-upgrade.mdx b/docs/sdk/v0.50/documentation/application-framework/app-upgrade.mdx new file mode 100644 index 00000000..b1440511 --- /dev/null +++ b/docs/sdk/v0.50/documentation/application-framework/app-upgrade.mdx @@ -0,0 +1,219 @@ +--- +title: Application Upgrade +--- + + +This document describes how to upgrade your application. If you are looking specifically for the changes to perform between SDK versions, see the [SDK migrations documentation](https://docs.cosmos.network/main/migrations/intro). + + + +This section is currently incomplete. Track the progress of this document [here](https://github.com/cosmos/cosmos-sdk/issues/11504). 
+ + + +**Pre-requisite Readings** + +* [`x/upgrade` Documentation](https://docs.cosmos.network/main/modules/upgrade) + + + +## General Workflow + +Let's assume we are running v0.38.0 of our software in our testnet and want to upgrade to v0.40.0. +How would this look in practice? First of all, we want to finalize the v0.40.0 release candidate +and then install a specially named upgrade handler (e.g. "testnet-v2" or even "v0.40.0"). An upgrade +handler should be defined in a new version of the software to define what migrations +to run to migrate from the older version of the software. Naturally, this is app-specific rather +than module-specific, and must be defined in `app.go`, even if it imports logic from various +modules to perform the actions. You can register them with `upgradeKeeper.SetUpgradeHandler` +during the app initialization (before starting the ABCI server), and they serve not only to +perform a migration, but also to identify if this is the old or new version (e.g. the presence of +a handler registered for the named upgrade). + +Once the release candidate along with an appropriate upgrade handler is frozen, +we can have a governance vote to approve this upgrade at some future block height (e.g. 200000). +This is known as an `upgrade.Plan`. The v0.38.0 code will not know of this handler, but will +continue to run until block 200000, when the plan kicks in at `BeginBlock`. It will check +for the existence of the handler and, finding it missing, know that it is running the obsolete software, +and gracefully exit. + +Generally, the application binary will restart on exit, but it will then execute this BeginBlocker +again and exit, causing a restart loop. Either the operator can manually install the new software, +or you can make use of an external watcher daemon to possibly download and then switch binaries, +also potentially doing a backup. The SDK tool for doing this is called [Cosmovisor](https://docs.cosmos.network/main/tooling/cosmovisor).
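The halt/upgrade decision described above boils down to a named plan plus a check for a registered handler. The following is a minimal, self-contained sketch of that logic; `Plan` here is a local stand-in mirroring the shape of `x/upgrade`'s `Plan` type, and the handler maps are illustrative, not the SDK's actual registry:

```go
package main

import "fmt"

// Plan is a local stand-in for x/upgrade's Plan type, shown only to
// illustrate the data a governance vote agrees on.
type Plan struct {
	Name   string // must match a handler registered via SetUpgradeHandler
	Height int64  // block height at which old binaries halt
	Info   string // optional metadata
}

// isKnownUpgrade mimics the check performed at the planned height: a handler
// registered under the plan's name means "new version, run the migration";
// a missing handler means "obsolete binary, gracefully exit".
func isKnownUpgrade(handlers map[string]func(), p Plan) bool {
	_, ok := handlers[p.Name]
	return ok
}

func main() {
	plan := Plan{Name: "v0.40.0", Height: 200000}

	oldBinary := map[string]func(){}                     // no handler registered
	newBinary := map[string]func(){"v0.40.0": func() {}} // handler registered

	fmt.Println(isKnownUpgrade(oldBinary, plan)) // false: old binary halts
	fmt.Println(isKnownUpgrade(newBinary, plan)) // true: new binary migrates
}
```

This mirrors why the handler name and the plan name must match exactly: the name is the only signal the running binary has to distinguish itself from the obsolete version.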
+ +When the binary restarts with the upgraded version (here v0.40.0), it will detect we have registered the +"testnet-v2" upgrade handler in the code, and realize it is the new version. It will then run the upgrade handler +and *migrate the database in-place*. Once finished, it marks the upgrade as done, and continues processing +the rest of the block as normal. Once 2/3 of the voting power has upgraded, the blockchain will immediately +resume the consensus mechanism. If the majority of operators add a custom `do-upgrade` script, this should +be a matter of minutes and not even require them to be awake at that time. + +## Integrating With An App + + +The following is not required for users using `depinject`; this is abstracted for them. + + +In addition to basic module wiring, set up the upgrade keeper for the app and then define a `PreBlocker` that calls the upgrade +keeper's PreBlocker method: + +```go +func (app *myApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + / For demonstration's sake, the app PreBlocker only returns the upgrade module pre-blocker. + / In a real app, the module manager should call all pre-blockers: + / return app.ModuleManager.PreBlock(ctx, req) + +return app.upgradeKeeper.PreBlocker(ctx, req) +} +``` + +The app must then integrate the upgrade keeper with its governance module as appropriate. The governance module +should call `ScheduleUpgrade` to schedule an upgrade and `ClearUpgradePlan` to cancel a pending upgrade.
In order for the upgrade +module to know that the upgrade has been safely applied, a handler with the name of the upgrade must be installed. +Here is an example handler for an upgrade named "my-fancy-upgrade": + +```go +app.upgradeKeeper.SetUpgradeHandler("my-fancy-upgrade", func(ctx context.Context, plan upgrade.Plan) { + / Perform any migrations of the state store needed for this upgrade +}) +``` + +This upgrade handler performs the dual function of alerting the upgrade module that the named upgrade has been applied, +as well as providing the opportunity for the upgraded software to perform any necessary state migrations. Both the halt +(with the old binary) and applying the migration (with the new binary) are enforced in the state machine. Actually +switching the binaries is an ops task and not handled inside the SDK / ABCI app. + +Here is sample code to set store migrations with an upgrade: + +```go expandable +/ this configures a no-op upgrade handler for the "my-fancy-upgrade" upgrade +app.UpgradeKeeper.SetUpgradeHandler("my-fancy-upgrade", func(ctx context.Context, plan upgrade.Plan) { + / upgrade changes here +}) + +upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk() + if err != nil { + / handle error +} + if upgradeInfo.Name == "my-fancy-upgrade" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := store.StoreUpgrades{ + Renamed: []store.StoreRename{{ + OldKey: "foo", + NewKey: "bar", +}}, + Deleted: []string{ +}, +} + / configure store loader that checks if version == upgradeHeight and applies store upgrades + app.SetStoreLoader(upgrade.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} +``` + +## Halt Behavior + +Before halting the ABCI state machine in the BeginBlocker method, the upgrade module will log an error +that looks like: + +```text + UPGRADE "<Name>" NEEDED at height: <Height>: <Info> +``` + +where `Name` and `Info` are the values of the respective fields on the upgrade `Plan`.
+ +To perform the actual halt of the blockchain, the upgrade keeper simply panics, which prevents the ABCI state machine +from proceeding but doesn't actually exit the process. Exiting the process can cause issues for other nodes that start +to lose connectivity with the exiting nodes; thus this module prefers to halt but not exit. + +## Automation + +Read more about [Cosmovisor](https://docs.cosmos.network/main/tooling/cosmovisor), the tool for automating upgrades. + +## Canceling Upgrades + +There are two ways to cancel a planned upgrade - with on-chain governance or off-chain social consensus. +For the first one, there is a `CancelSoftwareUpgrade` governance proposal, which can be voted on and will +remove the scheduled upgrade plan. Of course, this requires that the upgrade was known to be a bad idea +well before the upgrade itself, to allow time for a vote. If you want to allow such a possibility, you +should set the upgrade height to be `2 * (votingperiod + depositperiod) + (safety delta)` from the beginning of +the first upgrade proposal. Safety delta is the time available from the success of an upgrade proposal +and the realization it was a bad idea (due to external testing). You can also start a `CancelSoftwareUpgrade` +proposal while the original `SoftwareUpgrade` proposal is still being voted upon, as long as the voting +period ends after the `SoftwareUpgrade` proposal. + +However, let's assume that we don't realize the upgrade has a bug until shortly before it will occur +(or while we try it out - hitting some panic in the migration). It would seem the blockchain is stuck, +but we need to allow an escape for social consensus to overrule the planned upgrade. To do so, there's +a `--unsafe-skip-upgrades` flag to the start command, which will cause the node to mark the upgrade +as done upon hitting the planned upgrade height(s), without halting and without actually performing a migration.
+If over two-thirds run their nodes with this flag on the old binary, it will allow the chain to continue through +the upgrade with a manual override. (This must be well-documented for anyone syncing from genesis later on). + +Example: + +```shell +<appd> start --unsafe-skip-upgrades <height> ... +``` + +## Pre-Upgrade Handling + +Cosmovisor supports custom pre-upgrade handling. Use pre-upgrade handling when you need to implement application config changes that are required in the newer version before you perform the upgrade. + +Using Cosmovisor pre-upgrade handling is optional. If pre-upgrade handling is not implemented, the upgrade continues. + +For example, make the required new-version changes to `app.toml` settings during the pre-upgrade handling. The pre-upgrade handling process means that the file does not have to be manually updated after the upgrade. + +Before the application binary is upgraded, Cosmovisor calls a `pre-upgrade` command that can be implemented by the application. + +The `pre-upgrade` command does not take in any command-line arguments and is expected to terminate with the following exit codes: + +| Exit status code | How it is handled in Cosmovisor | +| ---------------- | ------------------------------------------------------------------------------------------------------------------- | +| `0` | Assumes `pre-upgrade` command executed successfully and continues the upgrade. | +| `1` | Default exit code when `pre-upgrade` command has not been implemented. | +| `30` | `pre-upgrade` command was executed but failed. This fails the entire upgrade. | +| `31` | `pre-upgrade` command was executed but failed. But the command is retried until exit code `1` or `30` is returned.
| + +## Sample + +Here is a sample structure of the `pre-upgrade` command: + +```go expandable +func preUpgradeCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "pre-upgrade", + Short: "Pre-upgrade command", + Long: "Pre-upgrade command to implement custom pre-upgrade handling", + Run: func(cmd *cobra.Command, args []string) { + err := HandlePreUpgrade() + if err != nil { + os.Exit(30) +} + +os.Exit(0) +}, +} + +return cmd +} +``` + +Ensure that the `pre-upgrade` command has been registered in the application: + +```go +rootCmd.AddCommand( + / .. + preUpgradeCommand(), + / .. + ) +``` + +When not using Cosmovisor, ensure you run `<appd> pre-upgrade` before starting the application binary. diff --git a/docs/sdk/v0.50/documentation/application-framework/vote-extensions.mdx b/docs/sdk/v0.50/documentation/application-framework/vote-extensions.mdx new file mode 100644 index 00000000..00f16f8f --- /dev/null +++ b/docs/sdk/v0.50/documentation/application-framework/vote-extensions.mdx @@ -0,0 +1,186 @@ +--- +title: Vote Extensions +--- + + +**Synopsis** +This section describes how the application can define and use vote extensions +defined in ABCI++. + + +## Extend Vote + +ABCI++ allows an application to extend a pre-commit vote with arbitrary data. This +process does NOT have to be deterministic, and the data returned can be unique to the +validator process. The Cosmos SDK defines `baseapp.ExtendVoteHandler`: + +```go +type ExtendVoteHandler func(Context, *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) +``` + +An application can set this handler in `app.go` via the `baseapp.SetExtendVoteHandler` +`BaseApp` option function. The `sdk.ExtendVoteHandler`, if defined, is called during +the `ExtendVote` ABCI method. Note that if an application decides to implement +`baseapp.ExtendVoteHandler`, it MUST return a non-nil `VoteExtension`. However, the vote +extension can be empty.
See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#extendvote) +for more details. + +There are many decentralized censorship-resistant use cases for vote extensions. +For example, a validator may want to submit prices for a price oracle or encryption +shares for an encrypted transaction mempool. Note, an application should be careful +to consider the size of the vote extensions as they could increase latency in block +production. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/docs/qa/CometBFT-QA-38.md#vote-extensions-testbed) +for more details. + +## Verify Vote Extension + +Similar to extending a vote, an application can also verify vote extensions from +other validators when validating their pre-commits. For a given vote extension, +this process MUST be deterministic. The Cosmos SDK defines `sdk.VerifyVoteExtensionHandler`: + +```go expandable +package types + +import ( + + abci "github.com/cometbft/cometbft/abci/types" +) + +/ InitChainer initializes application state at genesis +type InitChainer func(ctx Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) + +/ PrepareCheckStater runs code during commit after the block has been committed, and the `checkState` +/ has been branched for the new block. +type PrepareCheckStater func(ctx Context) + +/ Precommiter runs code during commit immediately before the `deliverState` is written to the `rootMultiStore`. 
+type Precommiter func(ctx Context) + +/ PeerFilter responds to p2p filtering queries from Tendermint +type PeerFilter func(info string) *abci.ResponseQuery + +/ ProcessProposalHandler defines a function type alias for processing a proposer +type ProcessProposalHandler func(Context, *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) + +/ PrepareProposalHandler defines a function type alias for preparing a proposal +type PrepareProposalHandler func(Context, *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) + +/ ExtendVoteHandler defines a function type alias for extending a pre-commit vote. +type ExtendVoteHandler func(Context, *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) + +/ VerifyVoteExtensionHandler defines a function type alias for verifying a +/ pre-commit vote extension. +type VerifyVoteExtensionHandler func(Context, *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) + +/ BeginBlocker defines a function type alias for executing application +/ business logic before transactions are executed. +/ +/ Note: The BeginBlock ABCI method no longer exists in the ABCI specification +/ as of CometBFT v0.38.0. This function type alias is provided for backwards +/ compatibility with applications that still use the BeginBlock ABCI method +/ and allows for existing BeginBlock functionality within applications. +type BeginBlocker func(Context) (BeginBlock, error) + +/ EndBlocker defines a function type alias for executing application +/ business logic after transactions are executed but before committing. +/ +/ Note: The EndBlock ABCI method no longer exists in the ABCI specification +/ as of CometBFT v0.38.0. This function type alias is provided for backwards +/ compatibility with applications that still use the EndBlock ABCI method +/ and allows for existing EndBlock functionality within applications. 
+type EndBlocker func(Context) (EndBlock, error) + +/ EndBlock defines a type which contains endblock events and validator set updates +type EndBlock struct { + ValidatorUpdates []abci.ValidatorUpdate + Events []abci.Event +} + +/ BeginBlock defines a type which contains beginBlock events +type BeginBlock struct { + Events []abci.Event +} +``` + +An application can set this handler in `app.go` via the `baseapp.SetVerifyVoteExtensionHandler` +`BaseApp` option function. The `sdk.VerifyVoteExtensionHandler`, if defined, is called +during the `VerifyVoteExtension` ABCI method. If an application defines a vote +extension handler, it should also define a verification handler. Note that not all +validators will share the same view of what vote extensions they verify depending +on how votes are propagated. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#verifyvoteextension) +for more details. + +## Vote Extension Propagation + +The agreed-upon vote extensions at height `H` are provided to the proposing validator +at height `H+1` during `PrepareProposal`. As a result, the vote extensions are +not natively provided or exposed to the remaining validators during `ProcessProposal`. +Consequently, if an application requires that the agreed-upon vote extensions from +height `H` are available to all validators at `H+1`, the application must propagate +these vote extensions manually in the block proposal itself. This can be done by +"injecting" them into the block proposal, since the `Txs` field in `PrepareProposal` +is just a slice of byte slices. + +`FinalizeBlock` will ignore any byte slice that doesn't implement an `sdk.Tx`, so +any injected vote extensions will safely be ignored in `FinalizeBlock`. For more +details on propagation, see the [ABCI++ 2.0 ADR](docs/sdk/next/documentation/legacy/adr-comprehensive#vote-extension-propagation--verification).
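The injection side of this pattern can be sketched as follows. This is a simplified illustration with the SDK and ABCI types elided: `txs` stands in for the proposal's `Txs` field, and placing the injected blob at index zero is an app-level convention assumed here, not something the SDK mandates:

```go
package main

import "fmt"

// injectVoteExtensions prepends an encoded blob of vote extensions to the
// proposal's transactions. Since Txs is just a slice of byte slices, the
// blob rides along with ordinary transactions; FinalizeBlock will safely
// skip any entry that does not decode to a valid sdk.Tx.
func injectVoteExtensions(txs [][]byte, veBlob []byte) [][]byte {
	proposal := make([][]byte, 0, len(txs)+1)
	proposal = append(proposal, veBlob) // convention: injected blob comes first
	return append(proposal, txs...)
}

func main() {
	txs := [][]byte{[]byte("tx-a"), []byte("tx-b")}
	proposal := injectVoteExtensions(txs, []byte("encoded-vote-extensions"))
	fmt.Println(len(proposal)) // 3: blob + two txs
	fmt.Println(string(proposal[0]))
}
```

A fixed, documented position for the blob is what lets the recovery side (shown next) find and strip it deterministically.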
+ +### Recovery of injected Vote Extensions + +As stated before, vote extensions can be injected into a block proposal (along with +other transactions in the `Txs` field). The Cosmos SDK provides a pre-FinalizeBlock +hook to allow applications to recover vote extensions, perform any necessary +computation on them, and then store the results in the cached store. These results +will be available to the application during the subsequent `FinalizeBlock` call. + +An example of how a pre-FinalizeBlock hook could look like is shown below: + +```go expandable +app.SetPreBlocker(func(ctx sdk.Context, req *abci.RequestFinalizeBlock) + +error { + allVEs := []VE{ +} / store all parsed vote extensions here + for _, tx := range req.Txs { + / define a custom function that tries to parse the tx as a vote extension + ve, ok := parseVoteExtension(tx) + if !ok { + continue +} + +allVEs = append(allVEs, ve) +} + + / perform any necessary computation on the vote extensions and store the result + / in the cached store + result := compute(allVEs) + err := storeVEResult(ctx, result) + if err != nil { + return err +} + +return nil +}) +``` + +Then, in an app's module, the application can retrieve the result of the computation +of vote extensions from the cached store: + +```go expandable +func (k Keeper) + +BeginBlocker(ctx context.Context) + +error { + / retrieve the result of the computation of vote extensions from the cached store + result, err := k.GetVEResult(ctx) + if err != nil { + return err +} + + / use the result of the computation of vote extensions + k.setSomething(result) + +return nil +} +``` diff --git a/docs/sdk/v0.50/documentation/consensus-block-production/introduction.mdx b/docs/sdk/v0.50/documentation/consensus-block-production/introduction.mdx new file mode 100644 index 00000000..acf6199a --- /dev/null +++ b/docs/sdk/v0.50/documentation/consensus-block-production/introduction.mdx @@ -0,0 +1,56 @@ +--- +title: Introduction +description: >- + ABC, Application Blockchain 
Interface is the interface between CometBFT and + the application, more information about ABCI can be found here. Within the + release of ABCI 2.0 for the 0.38 CometBFT release there were additional + methods introduced. +--- + +## What is ABCI? + +ABCI, the Application Blockchain Interface, is the interface between CometBFT and the application; more information about ABCI can be found [here](https://docs.cometbft.com/v0.38/spec/abci/). The ABCI 2.0 release, which shipped with CometBFT 0.38, introduced additional methods. + +The five methods introduced in ABCI 2.0 are: + +* `PrepareProposal` +* `ProcessProposal` +* `ExtendVote` +* `VerifyVoteExtension` +* `FinalizeBlock` + +## The Flow + +## PrepareProposal + +Based on their voting power, CometBFT chooses a block proposer and calls `PrepareProposal` on the block proposer's application (Cosmos SDK). The selected block proposer is responsible for collecting outstanding transactions from the mempool, adhering to the application's specifications. The application can enforce custom transaction ordering and incorporate additional transactions, potentially generated from vote extensions in the previous block. + +To perform this manipulation on the application side, a custom handler must be implemented. By default, the Cosmos SDK provides `PrepareProposalHandler`, used in conjunction with an application-specific mempool. A custom handler can be written by the application developer; if a no-op handler is provided, all transactions are considered valid. Please see [this](https://github.com/fatal-fruit/abci-workshop) tutorial for more information on custom handlers. + +Please note that vote extensions will only be available starting at the height after the one at which vote extensions are enabled. More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/03-vote-extensions.md). + +After creating the proposal, the proposer returns it to CometBFT. + +`PrepareProposal` CAN be non-deterministic.
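The kind of custom ordering a handler can enforce might look like the following sketch. It is deliberately simplified: the SDK and ABCI types are replaced with plain values, and the per-transaction fee field is a hypothetical stand-in for whatever a real handler would decode from each byte slice in the request:

```go
package main

import (
	"fmt"
	"sort"
)

// tx is a stand-in for a decoded transaction; a real handler would decode
// each byte slice from the PrepareProposal request and read its fee.
type tx struct {
	raw []byte
	fee int64
}

// orderByFee reorders the mempool's FIFO list highest-fee-first and stops
// adding transactions once the byte budget would be exceeded, mirroring the
// size limits a real PrepareProposal handler must respect.
func orderByFee(txs []tx, maxBytes int) [][]byte {
	sort.SliceStable(txs, func(i, j int) bool { return txs[i].fee > txs[j].fee })

	var out [][]byte
	total := 0
	for _, t := range txs {
		if total+len(t.raw) > maxBytes {
			break
		}
		total += len(t.raw)
		out = append(out, t.raw)
	}
	return out
}

func main() {
	txs := []tx{
		{raw: []byte("cheap"), fee: 1},
		{raw: []byte("pricey"), fee: 100},
	}
	for _, raw := range orderByFee(txs, 64) {
		fmt.Println(string(raw))
	}
}
```

Because this runs only on the proposer, such ordering may be non-deterministic; it is validation in `ProcessProposal` that must be deterministic.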
+ +## ProcessProposal + +This method allows validators to perform application-specific checks on the block proposal and is called on all validators. This is an important step in the consensus process, as it ensures that the block is valid and meets the requirements of the application. For example, validators could check that the block contains all the required transactions or that the block does not create any invalid state transitions. + +The implementation of `ProcessProposal` MUST be deterministic. + +## ExtendVote and VerifyVoteExtensions + +These methods allow applications to extend the voting process by requiring validators to perform additional actions beyond simply validating blocks. + +If vote extensions are enabled, `ExtendVote` will be called on every validator and each one will return its vote extension, which in practice is just a byte slice. As mentioned above, this data (the vote extension) can only be retrieved at the next block height, during `PrepareProposal`. Additionally, this data can be arbitrary, but in the provided tutorials, it serves as an oracle or proof of transactions in the mempool. Essentially, vote extensions are processed and injected as transactions. Examples of use-cases for vote extensions include prices for a price oracle or encryption shares for an encrypted transaction mempool. `ExtendVote` CAN be non-deterministic. + +`VerifyVoteExtensions` is performed on every validator multiple times in order to verify other validators' vote extensions. This check validates the integrity and validity of the vote extensions, preventing malicious or invalid vote extensions. + +Additionally, applications must keep the vote extension data concise, as it can degrade the performance of their chain; see testing results [here](https://docs.cometbft.com/v0.38/qa/cometbft-qa-38#vote-extensions-testbed). + +`VerifyVoteExtensions` MUST be deterministic.
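For instance, the price-oracle use case mentioned above could encode and verify extensions along these lines. This is a deliberately simplified sketch with plain byte slices in place of the ABCI request/response types, and the price band is a hypothetical application rule:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// extendVote encodes this validator's locally observed price as its vote
// extension. The observation itself may differ between validators, which is
// fine: ExtendVote CAN be non-deterministic.
func extendVote(price uint64) []byte {
	ext := make([]byte, 8)
	binary.BigEndian.PutUint64(ext, price)
	return ext
}

// verifyVoteExtension applies a deterministic sanity check to another
// validator's extension: well-formed and within an accepted price band.
// Given the same inputs, every honest validator reaches the same verdict.
func verifyVoteExtension(ext []byte, minPrice, maxPrice uint64) error {
	if len(ext) != 8 {
		return errors.New("malformed vote extension")
	}
	price := binary.BigEndian.Uint64(ext)
	if price < minPrice || price > maxPrice {
		return fmt.Errorf("price %d outside accepted band", price)
	}
	return nil
}

func main() {
	ext := extendVote(42_000)
	fmt.Println(verifyVoteExtension(ext, 1_000, 100_000))          // <nil>: accepted
	fmt.Println(verifyVoteExtension(ext, 50_000, 100_000) != nil) // true: rejected
}
```

The split mirrors the rule stated above: the extension's contents are free-form, but the verification of those contents must be a pure function of the bytes and the chain's rules.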
+ +## FinalizeBlock + +`FinalizeBlock` is then called and is responsible for updating the state of the blockchain and making the block available to users. diff --git a/docs/sdk/v0.50/documentation/consensus-block-production/prepare-proposal.mdx b/docs/sdk/v0.50/documentation/consensus-block-production/prepare-proposal.mdx new file mode 100644 index 00000000..4339de6c --- /dev/null +++ b/docs/sdk/v0.50/documentation/consensus-block-production/prepare-proposal.mdx @@ -0,0 +1,386 @@ +--- +title: Prepare Proposal +--- + +`PrepareProposal` handles construction of the block, meaning that when a proposer +is preparing to propose a block, it requests the application to evaluate a +`RequestPrepareProposal`, which contains a series of transactions from CometBFT's +mempool. At this point, the application has complete control over the proposal. +It can modify, delete, and inject transactions from its own app-side mempool into +the proposal or even ignore all the transactions altogether. What the application +does with the transactions provided to it by `RequestPrepareProposal` has no +effect on CometBFT's mempool. + +Note that the application defines the semantics of `PrepareProposal`; it +MAY be non-deterministic, and it is only executed by the current block proposer. + +Now, reading "mempool" twice in the previous sentence is confusing, so let's break it down. +CometBFT has a mempool that handles gossiping transactions to other nodes +in the network. The order of these transactions is determined by CometBFT's mempool, +using FIFO as the sole ordering mechanism. It's worth noting that the priority mempool +in Comet has been deprecated. +However, since the application is able to fully inspect +all transactions, it can provide greater control over transaction ordering. +Allowing the application to handle ordering enables the application to define how +it would like the block constructed.
+ +The Cosmos SDK defines the `DefaultProposalHandler` type, which provides applications with +`PrepareProposal` and `ProcessProposal` handlers. If you decide to implement your +own `PrepareProposal` handler, you must be sure to ensure that the transactions +selected DO NOT exceed the maximum block gas (if set) and the maximum bytes provided +by `req.MaxBytes`. + +```go expandable +package baseapp + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtcrypto "github.com/cometbft/cometbft/crypto" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +/ VoteExtensionThreshold defines the total voting power % that must be +/ submitted in order for all vote extensions to be considered valid for a +/ given height. +var VoteExtensionThreshold = math.LegacyNewDecWithPrec(667, 3) + +type ( + / Validator defines the interface contract require for verifying vote extension + / signatures. Typically, this will be implemented by the x/staking module, + / which has knowledge of the CometBFT public key. + Validator interface { + CmtConsPublicKey() (cmtprotocrypto.PublicKey, error) + +BondedTokens() + +math.Int +} + + / ValidatorStore defines the interface contract require for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. 
+ ValidatorStore interface { + GetValidatorByConsAddr(sdk.Context, cryptotypes.Address) (Validator, error) + +TotalBondedTokens(ctx sdk.Context) + +math.Int +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in ProcessProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + currentHeight int64, + chainID string, + extCommit abci.ExtendedCommitInfo, +) + +error { + cp := ctx.ConsensusParams() + extsEnabled := cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight > 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var sumVP math.Int + for _, vote := range extCommit.Votes { + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := cmtcrypto.Address(vote.Validator.Address) + +validator, err := valStore.GetValidatorByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X: %w", valConsAddr, err) +} + if validator == nil { + return fmt.Errorf("validator %X not found", valConsAddr) +} + +cmtPubKeyProto, err := validator.CmtConsPublicKey() + if err != nil { + return fmt.Errorf("failed to get validator %X 
public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(cmtPubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP = sumVP.Add(validator.BondedTokens()) +} + + / Ensure we have at least 2/3 voting power that submitted valid vote + / extensions. + totalVP := valStore.TotalBondedTokens(ctx) + percentSubmitted := math.LegacyNewDecFromInt(sumVP).Quo(math.LegacyNewDecFromInt(totalVP)) + if percentSubmitted.LT(VoteExtensionThreshold) { + return fmt.Errorf("insufficient cumulative voting power received to verify vote extensions; got: %s, expected: >=%s", percentSubmitted, VoteExtensionThreshold) +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. 
+ DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) + +DefaultProposalHandler { + return DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, +} +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} + +var ( + selectedTxs [][]byte + totalTxBytes int64 + ) + iterator := h.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. 
But some mempool implementations may insert invalid txs, so we + / check again. + bz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + err := h.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +} + +else { + txSize := int64(len(bz)) + if totalTxBytes += txSize; totalTxBytes <= req.MaxTxBytes { + selectedTxs = append(selectedTxs, bz) +} + +else { + / We've reached capacity per req.MaxTxBytes so we cannot select any + / more transactions. + break +} + +} + +iterator = iterator.Next() +} + +return &abci.ResponsePrepareProposal{ + Txs: selectedTxs +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +func (h DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + for _, txBytes := range req.Txs { + _, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. +func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. 
+func NoOpVerifyVoteExtensionHandler()
+
+sdk.VerifyVoteExtensionHandler {
+    return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
+        return &abci.ResponseVerifyVoteExtension{
+            Status: abci.ResponseVerifyVoteExtension_ACCEPT
+}, nil
+}
+}
+```
+
+This default implementation can be overridden by the application developer in
+favor of a custom implementation in [`app.go`](/docs/sdk/v0.50/documentation/application-framework/app-go-v2):
+
+```go
+prepareOpt := func(app *baseapp.BaseApp) {
+    abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+
+app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, prepareOpt)
+```
diff --git a/docs/sdk/v0.50/documentation/consensus-block-production/process-proposal.mdx b/docs/sdk/v0.50/documentation/consensus-block-production/process-proposal.mdx
new file mode 100644
index 00000000..ebf02bc6
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/consensus-block-production/process-proposal.mdx
@@ -0,0 +1,373 @@
+---
+title: Process Proposal
+---
+
+`ProcessProposal` handles the validation of a proposal from `PrepareProposal`,
+which also includes a block header. That is, after a block has been proposed,
+the other validators have the right to vote on it. The default implementation
+of `ProcessProposal` runs basic validity checks on each transaction.
+
+Note that, unlike `PrepareProposal`, `ProcessProposal` MUST be deterministic.
+This means that if `ProcessProposal` panics or fails and we reject, all honest validator
+processes will prevote nil and the CometBFT round will proceed again until a valid
+proposal is proposed.
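As a concrete illustration of an application-specific, deterministic check (e.g. "the block contains all the required transactions"), a helper like the following could be called from a custom `ProcessProposal` handler. The helper and its inputs are hypothetical; a real handler would receive the transactions via `abci.RequestProcessProposal`:

```go
package main

import (
	"bytes"
	"fmt"
)

// containsRequiredTx deterministically checks that a required transaction
// (e.g. an injected oracle update) appears in the proposed block. Every
// honest validator evaluating the same proposal reaches the same verdict,
// which is exactly the property ProcessProposal demands.
func containsRequiredTx(proposalTxs [][]byte, required []byte) bool {
	for _, tx := range proposalTxs {
		if bytes.Equal(tx, required) {
			return true
		}
	}
	return false
}

func main() {
	proposal := [][]byte{[]byte("oracle-update"), []byte("transfer")}
	fmt.Println(containsRequiredTx(proposal, []byte("oracle-update")))
}
```

If the check fails, the handler would return `ResponseProcessProposal_REJECT`, causing honest validators to prevote nil as described above.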
+ +Here is the implementation of the default implementation: + +```go expandable +package baseapp + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtcrypto "github.com/cometbft/cometbft/crypto" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +/ VoteExtensionThreshold defines the total voting power % that must be +/ submitted in order for all vote extensions to be considered valid for a +/ given height. +var VoteExtensionThreshold = math.LegacyNewDecWithPrec(667, 3) + +type ( + / Validator defines the interface contract require for verifying vote extension + / signatures. Typically, this will be implemented by the x/staking module, + / which has knowledge of the CometBFT public key. + Validator interface { + CmtConsPublicKey() (cmtprotocrypto.PublicKey, error) + +BondedTokens() + +math.Int +} + + / ValidatorStore defines the interface contract require for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. + ValidatorStore interface { + GetValidatorByConsAddr(sdk.Context, cryptotypes.Address) (Validator, error) + +TotalBondedTokens(ctx sdk.Context) + +math.Int +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in ProcessProposal. 
It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + currentHeight int64, + chainID string, + extCommit abci.ExtendedCommitInfo, +) + +error { + cp := ctx.ConsensusParams() + extsEnabled := cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight > 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var sumVP math.Int + for _, vote := range extCommit.Votes { + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := cmtcrypto.Address(vote.Validator.Address) + +validator, err := valStore.GetValidatorByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X: %w", valConsAddr, err) +} + if validator == nil { + return fmt.Errorf("validator %X not found", valConsAddr) +} + +cmtPubKeyProto, err := validator.CmtConsPublicKey() + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(cmtPubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous 
height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP = sumVP.Add(validator.BondedTokens()) +} + + / Ensure we have at least 2/3 voting power that submitted valid vote + / extensions. + totalVP := valStore.TotalBondedTokens(ctx) + percentSubmitted := math.LegacyNewDecFromInt(sumVP).Quo(math.LegacyNewDecFromInt(totalVP)) + if percentSubmitted.LT(VoteExtensionThreshold) { + return fmt.Errorf("insufficient cumulative voting power received to verify vote extensions; got: %s, expected: >=%s", percentSubmitted, VoteExtensionThreshold) +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) + +DefaultProposalHandler { + return DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, +} +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). 
+/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} + +var ( + selectedTxs [][]byte + totalTxBytes int64 + ) + iterator := h.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. + bz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + err := h.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +} + +else { + txSize := int64(len(bz)) + if totalTxBytes += txSize; totalTxBytes <= req.MaxTxBytes { + selectedTxs = append(selectedTxs, bz) +} + +else { + / We've reached capacity per req.MaxTxBytes so we cannot select any + / more transactions. 
+ break +} + +} + +iterator = iterator.Next() +} + +return &abci.ResponsePrepareProposal{ + Txs: selectedTxs +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +func (h DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + for _, txBytes := range req.Txs { + _, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. 
+func NoOpProcessProposal()
+
+sdk.ProcessProposalHandler {
+    return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) {
+        return &abci.ResponseProcessProposal{
+            Status: abci.ResponseProcessProposal_ACCEPT
+}, nil
+}
+}
+
+/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an
+/ empty byte slice as the vote extension.
+func NoOpExtendVote()
+
+sdk.ExtendVoteHandler {
+    return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+        return &abci.ResponseExtendVote{
+            VoteExtension: []byte{
+}}, nil
+}
+}
+
+/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It
+/ will always return an ACCEPT status with no error.
+func NoOpVerifyVoteExtensionHandler()
+
+sdk.VerifyVoteExtensionHandler {
+    return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
+        return &abci.ResponseVerifyVoteExtension{
+            Status: abci.ResponseVerifyVoteExtension_ACCEPT
+}, nil
+}
+}
+```
+
+Like `PrepareProposal`, this implementation is the default and can be modified by
+the application developer in [`app.go`](/docs/sdk/v0.50/documentation/application-framework/app-go-v2). If you decide to implement
+your own `ProcessProposal` handler, you must ensure that the transactions
+provided in the proposal DO NOT exceed the maximum block gas and `maxtxbytes` (if set).
+ +```go +processOpt := func(app *baseapp.BaseApp) { + abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app) + +app.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) +} + +baseAppOptions = append(baseAppOptions, processOpt) +``` diff --git a/docs/sdk/v0.50/documentation/consensus-block-production/vote-extensions.mdx b/docs/sdk/v0.50/documentation/consensus-block-production/vote-extensions.mdx new file mode 100644 index 00000000..6400a54c --- /dev/null +++ b/docs/sdk/v0.50/documentation/consensus-block-production/vote-extensions.mdx @@ -0,0 +1,130 @@ +--- +title: Vote Extensions +--- + + +**Synopsis** +This section describes how the application can define and use vote extensions +defined in ABCI++. + + +## Extend Vote + +ABCI++ allows an application to extend a pre-commit vote with arbitrary data. This +process does NOT have to be deterministic, and the data returned can be unique to the +validator process. The Cosmos SDK defines [`baseapp.ExtendVoteHandler`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/types/abci.go#L26-L27): + +```go +type ExtendVoteHandler func(Context, *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) +``` + +An application can set this handler in `app.go` via the `baseapp.SetExtendVoteHandler` +`BaseApp` option function. The `sdk.ExtendVoteHandler`, if defined, is called during +the `ExtendVote` ABCI method. Note, if an application decides to implement +`baseapp.ExtendVoteHandler`, it MUST return a non-nil `VoteExtension`. However, the vote +extension can be empty. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#extendvote) +for more details. + +There are many decentralized censorship-resistant use cases for vote extensions. +For example, a validator may want to submit prices for a price oracle or encryption +shares for an encrypted transaction mempool. 
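As a sketch of the price-oracle use case, the extension returned by `ExtendVote` is just bytes, so an application could encode a price observation like this. The encoding scheme and helper names are hypothetical illustrations, not SDK APIs:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodePriceExtension packs an oracle price observation into the opaque
// byte slice a validator would return from its ExtendVote handler.
func encodePriceExtension(priceMicroUSD uint64) []byte {
	ext := make([]byte, 8)
	binary.BigEndian.PutUint64(ext, priceMicroUSD)
	return ext
}

// decodePriceExtension recovers the price on the receiving side, e.g. when
// the next proposer reads vote extensions during PrepareProposal.
func decodePriceExtension(ext []byte) (uint64, error) {
	if len(ext) != 8 {
		return 0, fmt.Errorf("unexpected extension length %d", len(ext))
	}
	return binary.BigEndian.Uint64(ext), nil
}

func main() {
	ext := encodePriceExtension(42_000_000) // e.g. $42.000000 in micro-USD
	price, _ := decodePriceExtension(ext)
	fmt.Println(price) // 42000000
}
```

A fixed-width encoding like this also makes the verification side simple, since `VerifyVoteExtension` can deterministically reject any extension that is not exactly 8 bytes.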
Note, an application should be careful +to consider the size of the vote extensions as they could increase latency in block +production. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/docs/qa/CometBFT-QA-38.md#vote-extensions-testbed) +for more details. + +Click [here](https://docs.cosmos.network/main/user/tutorials/vote-extensions) if you would like a walkthrough of how to implement vote extensions. + +## Verify Vote Extension + +Similar to extending a vote, an application can also verify vote extensions from +other validators when validating their pre-commits. For a given vote extension, +this process MUST be deterministic. The Cosmos SDK defines [`sdk.VerifyVoteExtensionHandler`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/types/abci.go#L29-L31): + +```go +type VerifyVoteExtensionHandler func(Context, *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) +``` + +An application can set this handler in `app.go` via the `baseapp.SetVerifyVoteExtensionHandler` +`BaseApp` option function. The `sdk.VerifyVoteExtensionHandler`, if defined, is called +during the `VerifyVoteExtension` ABCI method. If an application defines a vote +extension handler, it should also define a verification handler. Note, not all +validators will share the same view of what vote extensions they verify depending +on how votes are propagated. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#verifyvoteextension) +for more details. + +Additionally, please keep in mind that performance can be degraded if vote extensions are too big ([Link](https://docs.cometbft.com/v0.38/qa/cometbft-qa-38#vote-extensions-testbed)), so we highly recommend a size validation in `VerifyVoteExtensions`. + +## Vote Extension Propagation + +The agreed upon vote extensions at height `H` are provided to the proposing validator +at height `H+1` during `PrepareProposal`. 
As a result, the vote extensions are
+not natively provided or exposed to the remaining validators during `ProcessProposal`.
+Therefore, if an application requires that the agreed upon vote extensions from
+height `H` are available to all validators at `H+1`, the application must propagate
+these vote extensions manually in the block proposal itself. This can be done by
+"injecting" them into the block proposal, since the `Txs` field in `PrepareProposal`
+is just a slice of byte slices.
+
+`FinalizeBlock` will ignore any byte slice that doesn't implement an `sdk.Tx`, so
+any injected vote extensions will safely be ignored in `FinalizeBlock`. For more
+details on propagation, see the [ABCI++ 2.0 ADR](docs/sdk/next/documentation/legacy/adr-comprehensive#vote-extension-propagation--verification).
+
+### Recovery of injected Vote Extensions
+
+As stated before, vote extensions can be injected into a block proposal (along with
+other transactions in the `Txs` field). The Cosmos SDK provides a pre-FinalizeBlock
+hook to allow applications to recover vote extensions, perform any necessary
+computation on them, and then store the results in the cached store. These results
+will be available to the application during the subsequent `FinalizeBlock` call.
+ +An example of how a pre-FinalizeBlock hook could look like is shown below: + +```go expandable +app.SetPreBlocker(func(ctx sdk.Context, req *abci.RequestFinalizeBlock) + +error { + allVEs := []VE{ +} / store all parsed vote extensions here + for _, tx := range req.Txs { + / define a custom function that tries to parse the tx as a vote extension + ve, ok := parseVoteExtension(tx) + if !ok { + continue +} + +allVEs = append(allVEs, ve) +} + + / perform any necessary computation on the vote extensions and store the result + / in the cached store + result := compute(allVEs) + err := storeVEResult(ctx, result) + if err != nil { + return err +} + +return nil +}) +``` + +Then, in an app's module, the application can retrieve the result of the computation +of vote extensions from the cached store: + +```go expandable +func (k Keeper) + +BeginBlocker(ctx context.Context) + +error { + / retrieve the result of the computation of vote extensions from the cached store + result, err := k.GetVEResult(ctx) + if err != nil { + return err +} + + / use the result of the computation of vote extensions + k.setSomething(result) + +return nil +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/beginblock-endblock.mdx b/docs/sdk/v0.50/documentation/module-system/beginblock-endblock.mdx new file mode 100644 index 00000000..8d18f6cc --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/beginblock-endblock.mdx @@ -0,0 +1,113 @@ +--- +title: BeginBlocker and EndBlocker +--- + + +**Synopsis** +`BeginBlocker` and `EndBlocker` are optional methods module developers can implement in their module. They will be triggered at the beginning and at the end of each block respectively, when the [`BeginBlock`](/docs/sdk/v0.50/learn/advanced/baseapp#beginblock) and [`EndBlock`](/docs/sdk/v0.50/learn/advanced/baseapp#endblock) ABCI messages are received from the underlying consensus engine. 
+
+
+
+**Pre-requisite Readings**
+
+* [Module Manager](/docs/sdk/v0.50/documentation/module-system/module-manager)
+
+
+
+## BeginBlocker and EndBlocker
+
+`BeginBlocker` and `EndBlocker` are a way for module developers to add automatic execution of logic to their module. This is a powerful tool that should be used carefully, as complex automatic functions can slow down or even halt the chain.
+
+In v0.47.0, `PrepareProposal` and `ProcessProposal` were introduced, allowing app developers to perform arbitrary work at those phases, but they do not influence the work done in `BeginBlock`. If an application requires logic to execute prior to `BeginBlock`, that is not possible today (v0.50.0).
+
+When needed, `BeginBlocker` and `EndBlocker` are implemented as part of the [`HasBeginBlocker`, `HasABCIEndBlocker` and `HasEndBlocker` interfaces](/docs/sdk/v0.50/documentation/module-system/module-manager#appmodule). This means either can be left out if not required. The `BeginBlock` and `EndBlock` methods of the interface implemented in `module.go` generally defer to the `BeginBlocker` and `EndBlocker` methods respectively, which are usually implemented in `abci.go`.
+
+The actual implementations of `BeginBlocker` and `EndBlocker` in `abci.go` are very similar to those of a [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services):
+
+* They generally use the [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper) and [`ctx`](/docs/sdk/v0.50/learn/advanced/context) to retrieve information about the latest state.
+* If needed, they use the `keeper` and `ctx` to trigger state-transitions.
+* If needed, they can emit [`events`](/docs/sdk/v0.50/learn/advanced/events) via the `ctx`'s `EventManager`.
This is the preferred way to implement custom validator changes.
+
+It is possible for developers to define the order of execution between the `BeginBlocker`/`EndBlocker` functions of each of their application's modules via the module manager's `SetOrderBeginBlocker`/`SetOrderEndBlocker` methods. For more on the module manager, click [here](/docs/sdk/v0.50/documentation/module-system/module-manager#manager).
+
+See an example implementation of `BeginBlocker` from the `distribution` module:
+
+```go expandable
+package distribution
+
+import (
+	"time"
+
+	"github.com/cosmos/cosmos-sdk/telemetry"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/x/distribution/keeper"
+	"github.com/cosmos/cosmos-sdk/x/distribution/types"
+)
+
+// BeginBlocker sets the proposer for determining distribution during endblock
+// and distribute rewards for the previous block.
+func BeginBlocker(ctx sdk.Context, k keeper.Keeper) error {
+	defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker)
+
+	// determine the total power signing the block
+	var previousTotalPower int64
+	for _, voteInfo := range ctx.VoteInfos() {
+		previousTotalPower += voteInfo.Validator.Power
+	}
+
+	// TODO this is Tendermint-dependent
+	// ref https://github.com/cosmos/cosmos-sdk/issues/3095
+	if ctx.BlockHeight() > 1 {
+		k.AllocateTokens(ctx, previousTotalPower, ctx.VoteInfos())
+	}
+
+	// record the proposer for when we payout on the next block
+	consAddr := sdk.ConsAddress(ctx.BlockHeader().ProposerAddress)
+	k.SetPreviousProposerConsAddr(ctx, consAddr)
+
+	return nil
+}
+```
+
+and an example implementation of `EndBlocker` from the `staking` module:
+
+```go expandable
+package keeper
+
+import (
+	"context"
+	"time"
+
+	abci "github.com/cometbft/cometbft/abci/types"
+
+	"github.com/cosmos/cosmos-sdk/telemetry"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/x/staking/types"
+)
+
+// BeginBlocker will persist the current header and validator set as a historical entry
+// and prune the oldest entry based on the HistoricalEntries parameter
+func (k *Keeper) BeginBlocker(ctx sdk.Context) {
+	defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker)
+
+	k.TrackHistoricalInfo(ctx)
+}
+
+// Called every block, update validator set
+func (k *Keeper) EndBlocker(ctx context.Context) ([]abci.ValidatorUpdate, error) {
+	defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker)
+
+	return k.BlockValidatorUpdates(sdk.UnwrapSDKContext(ctx)), nil
+}
+```
+
+{/* TODO: leaving this here to update docs with core api changes */}
diff --git a/docs/sdk/v0.50/documentation/module-system/depinject.mdx b/docs/sdk/v0.50/documentation/module-system/depinject.mdx
new file mode 100644
index 00000000..73291469
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/depinject.mdx
@@ -0,0 +1,3519 @@
+---
+title: Modules depinject-ready
+---
+
+
+**Pre-requisite Readings**
+
+* [Depinject Documentation](/docs/sdk/v0.50/documentation/module-system/depinject)
+
+
+
+[`depinject`](/docs/sdk/v0.50/documentation/module-system/depinject) is used to wire any module in `app.go`.
+All core modules are already configured to support dependency injection.
+
+To work with `depinject`, a module must define its configuration and requirements so that `depinject` can provide the right dependencies.
+
+In brief, as a module developer, the following steps are required:
+
+1. Define the module configuration using Protobuf
+2. Define the module dependencies in `x/{moduleName}/module.go`
+
+A chain developer can then use the module by following these two steps:
+
+1. Configure the module in `app_config.go` or `app.yaml`
+2. Inject the module in `app.go`
+
+## Module Configuration
+
+The module's available configuration is defined in a Protobuf file, located at `{moduleName}/module/v1/module.proto`.
+
+```protobuf
+syntax = "proto3";
+
+package cosmos.group.module.v1;
+
+import "cosmos/app/v1alpha1/module.proto";
+import "gogoproto/gogo.proto";
+import "google/protobuf/duration.proto";
+import "amino/amino.proto";
+
+// Module is the config object of the group module.
+message Module {
+  option (cosmos.app.v1alpha1.module) = {
+    go_import: "github.com/cosmos/cosmos-sdk/x/group"
+  };
+
+  // max_execution_period defines the max duration after a proposal's voting period ends that members can send a MsgExec
+  // to execute the proposal.
+  google.protobuf.Duration max_execution_period = 1
+      [(gogoproto.stdduration) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // max_metadata_len defines the max length of the metadata bytes field for various entities within the group module.
+  // Defaults to 255 if not explicitly set.
+  uint64 max_metadata_len = 2;
+}
+```
+
+* `go_import` must point to the Go package of the custom module.
+* Message fields define the module configuration.
+  That configuration can be set in the `app_config.go` / `app.yaml` file for a chain developer to configure the module.\
+  Taking `group` as an example, a chain developer can decide, thanks to `uint64 max_metadata_len`, what the maximum metadata length allowed for a group proposal is.
+ + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/depinject" + + _ "cosmossdk.io/x/circuit" / import for side-effects + _ "cosmossdk.io/x/evidence" / import for side-effects + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ 
"github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/crisis" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + "github.com/cosmos/cosmos-sdk/x/genutil" + "github.com/cosmos/cosmos-sdk/x/gov" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/params" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + + "cosmossdk.io/core/appconfig" + circuittypes "cosmossdk.io/x/circuit/types" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + "cosmossdk.io/x/nft" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account 
permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + circuittypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: []string{ + }, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: 
consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + }, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + }, + ), + }, + )) + ) + ``` + +That message is generated using [`pulsar`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/scripts/protocgen-pulsar.sh) (by running `make proto-gen`). +In the case of the `group` module, this file is generated here: [Link](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/api/cosmos/group/module/v1/module.pulsar.go). + +The part that is relevant for the module configuration is: + +```go expandable +/ Code generated by protoc-gen-go-pulsar. DO NOT EDIT. 
+package modulev1 + +import ( + + _ "cosmossdk.io/api/amino" + _ "cosmossdk.io/api/cosmos/app/v1alpha1" + fmt "fmt" + runtime "github.com/cosmos/cosmos-proto/runtime" + _ "github.com/cosmos/gogoproto/gogoproto" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoiface "google.golang.org/protobuf/runtime/protoiface" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + durationpb "google.golang.org/protobuf/types/known/durationpb" + io "io" + reflect "reflect" + sync "sync" +) + +var ( + md_Module protoreflect.MessageDescriptor + fd_Module_max_execution_period protoreflect.FieldDescriptor + fd_Module_max_metadata_len protoreflect.FieldDescriptor +) + +func init() { + file_cosmos_group_module_v1_module_proto_init() + +md_Module = File_cosmos_group_module_v1_module_proto.Messages().ByName("Module") + +fd_Module_max_execution_period = md_Module.Fields().ByName("max_execution_period") + +fd_Module_max_metadata_len = md_Module.Fields().ByName("max_metadata_len") +} + +var _ protoreflect.Message = (*fastReflection_Module)(nil) + +type fastReflection_Module Module + +func (x *Module) + +ProtoReflect() + +protoreflect.Message { + return (*fastReflection_Module)(x) +} + +func (x *Module) + +slowProtoReflect() + +protoreflect.Message { + mi := &file_cosmos_group_module_v1_module_proto_msgTypes[0] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) +} + +return ms +} + +return mi.MessageOf(x) +} + +var _fastReflection_Module_messageType fastReflection_Module_messageType +var _ protoreflect.MessageType = fastReflection_Module_messageType{ +} + +type fastReflection_Module_messageType struct{ +} + +func (x fastReflection_Module_messageType) + +Zero() + +protoreflect.Message { + return (*fastReflection_Module)(nil) +} + +func (x fastReflection_Module_messageType) + +New() + +protoreflect.Message { + return new(fastReflection_Module) +} + +func (x 
fastReflection_Module_messageType) + +Descriptor() + +protoreflect.MessageDescriptor { + return md_Module +} + +/ Descriptor returns message descriptor, which contains only the protobuf +/ type information for the message. +func (x *fastReflection_Module) + +Descriptor() + +protoreflect.MessageDescriptor { + return md_Module +} + +/ Type returns the message type, which encapsulates both Go and protobuf +/ type information. If the Go type information is not needed, +/ it is recommended that the message descriptor be used instead. +func (x *fastReflection_Module) + +Type() + +protoreflect.MessageType { + return _fastReflection_Module_messageType +} + +/ New returns a newly allocated and mutable empty message. +func (x *fastReflection_Module) + +New() + +protoreflect.Message { + return new(fastReflection_Module) +} + +/ Interface unwraps the message reflection interface and +/ returns the underlying ProtoMessage interface. +func (x *fastReflection_Module) + +Interface() + +protoreflect.ProtoMessage { + return (*Module)(x) +} + +/ Range iterates over every populated field in an undefined order, +/ calling f for each field descriptor and value encountered. +/ Range returns immediately if f returns false. +/ While iterating, mutating operations may only be performed +/ on the current field descriptor. +func (x *fastReflection_Module) + +Range(f func(protoreflect.FieldDescriptor, protoreflect.Value) + +bool) { + if x.MaxExecutionPeriod != nil { + value := protoreflect.ValueOfMessage(x.MaxExecutionPeriod.ProtoReflect()) + if !f(fd_Module_max_execution_period, value) { + return +} + +} + if x.MaxMetadataLen != uint64(0) { + value := protoreflect.ValueOfUint64(x.MaxMetadataLen) + if !f(fd_Module_max_metadata_len, value) { + return +} + +} +} + +/ Has reports whether a field is populated. 
+/ +/ Some fields have the property of nullability where it is possible to +/ distinguish between the default value of a field and whether the field +/ was explicitly populated with the default value. Singular message fields, +/ member fields of a oneof, and proto2 scalar fields are nullable. Such +/ fields are populated only if explicitly set. +/ +/ In other cases (aside from the nullable cases above), +/ a proto3 scalar field is populated if it contains a non-zero value, and +/ a repeated field is populated if it is non-empty. +func (x *fastReflection_Module) + +Has(fd protoreflect.FieldDescriptor) + +bool { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + return x.MaxExecutionPeriod != nil + case "cosmos.group.module.v1.Module.max_metadata_len": + return x.MaxMetadataLen != uint64(0) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Clear clears the field such that a subsequent Has call reports false. +/ +/ Clearing an extension field clears both the extension type and value +/ associated with the given field number. +/ +/ Clear is a mutating operation and unsafe for concurrent use. +func (x *fastReflection_Module) + +Clear(fd protoreflect.FieldDescriptor) { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + x.MaxExecutionPeriod = nil + case "cosmos.group.module.v1.Module.max_metadata_len": + x.MaxMetadataLen = uint64(0) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Get retrieves the value for a field. 
+/ +/ For unpopulated scalars, it returns the default value, where +/ the default value of a bytes scalar is guaranteed to be a copy. +/ For unpopulated composite types, it returns an empty, read-only view +/ of the value; to obtain a mutable reference, use Mutable. +func (x *fastReflection_Module) + +Get(descriptor protoreflect.FieldDescriptor) + +protoreflect.Value { + switch descriptor.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + value := x.MaxExecutionPeriod + return protoreflect.ValueOfMessage(value.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + value := x.MaxMetadataLen + return protoreflect.ValueOfUint64(value) + +default: + if descriptor.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", descriptor.FullName())) +} +} + +/ Set stores the value for a field. +/ +/ For a field belonging to a oneof, it implicitly clears any other field +/ that may be currently set within the same oneof. +/ For extension fields, it implicitly stores the provided ExtensionType. +/ When setting a composite type, it is unspecified whether the stored value +/ aliases the source's memory in any way. If the composite value is an +/ empty, read-only value, then it panics. +/ +/ Set is a mutating operation and unsafe for concurrent use. 
+func (x *fastReflection_Module) + +Set(fd protoreflect.FieldDescriptor, value protoreflect.Value) { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + x.MaxExecutionPeriod = value.Message().Interface().(*durationpb.Duration) + case "cosmos.group.module.v1.Module.max_metadata_len": + x.MaxMetadataLen = value.Uint() + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Mutable returns a mutable reference to a composite type. +/ +/ If the field is unpopulated, it may allocate a composite value. +/ For a field belonging to a oneof, it implicitly clears any other field +/ that may be currently set within the same oneof. +/ For extension fields, it implicitly stores the provided ExtensionType +/ if not already stored. +/ It panics if the field does not contain a composite type. +/ +/ Mutable is a mutating operation and unsafe for concurrent use. +func (x *fastReflection_Module) + +Mutable(fd protoreflect.FieldDescriptor) + +protoreflect.Value { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + if x.MaxExecutionPeriod == nil { + x.MaxExecutionPeriod = new(durationpb.Duration) +} + +return protoreflect.ValueOfMessage(x.MaxExecutionPeriod.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + panic(fmt.Errorf("field max_metadata_len of message cosmos.group.module.v1.Module is not mutable")) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ NewField returns a new value that is assignable to the field +/ for the given descriptor. For scalars, this returns the default value. 
+/ For lists, maps, and messages, this returns a new, empty, mutable value. +func (x *fastReflection_Module) + +NewField(fd protoreflect.FieldDescriptor) + +protoreflect.Value { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + m := new(durationpb.Duration) + +return protoreflect.ValueOfMessage(m.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + return protoreflect.ValueOfUint64(uint64(0)) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ WhichOneof reports which field within the oneof is populated, +/ returning nil if none are populated. +/ It panics if the oneof descriptor does not belong to this message. +func (x *fastReflection_Module) + +WhichOneof(d protoreflect.OneofDescriptor) + +protoreflect.FieldDescriptor { + switch d.FullName() { + default: + panic(fmt.Errorf("%s is not a oneof field in cosmos.group.module.v1.Module", d.FullName())) +} + +panic("unreachable") +} + +/ GetUnknown retrieves the entire list of unknown fields. +/ The caller may only mutate the contents of the RawFields +/ if the mutated bytes are stored back into the message with SetUnknown. +func (x *fastReflection_Module) + +GetUnknown() + +protoreflect.RawFields { + return x.unknownFields +} + +/ SetUnknown stores an entire list of unknown fields. +/ The raw fields must be syntactically valid according to the wire format. +/ An implementation may panic if this is not the case. +/ Once stored, the caller must not mutate the content of the RawFields. +/ An empty RawFields may be passed to clear the fields. +/ +/ SetUnknown is a mutating operation and unsafe for concurrent use. 
+func (x *fastReflection_Module) + +SetUnknown(fields protoreflect.RawFields) { + x.unknownFields = fields +} + +/ IsValid reports whether the message is valid. +/ +/ An invalid message is an empty, read-only value. +/ +/ An invalid message often corresponds to a nil pointer of the concrete +/ message type, but the details are implementation dependent. +/ Validity is not part of the protobuf data model, and may not +/ be preserved in marshaling or other operations. +func (x *fastReflection_Module) + +IsValid() + +bool { + return x != nil +} + +/ ProtoMethods returns optional fastReflectionFeature-path implementations of various operations. +/ This method may return nil. +/ +/ The returned methods type is identical to +/ "google.golang.org/protobuf/runtime/protoiface".Methods. +/ Consult the protoiface package documentation for details. +func (x *fastReflection_Module) + +ProtoMethods() *protoiface.Methods { + size := func(input protoiface.SizeInput) + +protoiface.SizeOutput { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.SizeOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Size: 0, +} + +} + options := runtime.SizeInputToOptions(input) + _ = options + var n int + var l int + _ = l + if x.MaxExecutionPeriod != nil { + l = options.Size(x.MaxExecutionPeriod) + +n += 1 + l + runtime.Sov(uint64(l)) +} + if x.MaxMetadataLen != 0 { + n += 1 + runtime.Sov(uint64(x.MaxMetadataLen)) +} + if x.unknownFields != nil { + n += len(x.unknownFields) +} + +return protoiface.SizeOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Size: n, +} + +} + marshal := func(input protoiface.MarshalInput) (protoiface.MarshalOutput, error) { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, nil +} + options := runtime.MarshalInputToOptions(input) + _ = options + size := options.Size(x) + dAtA := make([]byte, size) + i := len(dAtA) + _ = i + 
var l int + _ = l + if x.unknownFields != nil { + i -= len(x.unknownFields) + +copy(dAtA[i:], x.unknownFields) +} + if x.MaxMetadataLen != 0 { + i = runtime.EncodeVarint(dAtA, i, uint64(x.MaxMetadataLen)) + +i-- + dAtA[i] = 0x10 +} + if x.MaxExecutionPeriod != nil { + encoded, err := options.Marshal(x.MaxExecutionPeriod) + if err != nil { + return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, err +} + +i -= len(encoded) + +copy(dAtA[i:], encoded) + +i = runtime.EncodeVarint(dAtA, i, uint64(len(encoded))) + +i-- + dAtA[i] = 0xa +} + if input.Buf != nil { + input.Buf = append(input.Buf, dAtA...) +} + +else { + input.Buf = dAtA +} + +return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, nil +} + unmarshal := func(input protoiface.UnmarshalInput) (protoiface.UnmarshalOutput, error) { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags, +}, nil +} + options := runtime.UnmarshalInputToOptions(input) + _ = options + dAtA := input.Buf + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: Module: wiretype end group for non-group") +} + if fieldNum <= 0 { + return protoiface.UnmarshalOutput{ + 
NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: Module: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: wrong wireType = %d for field MaxExecutionPeriod", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + if postIndex > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + if x.MaxExecutionPeriod == nil { + x.MaxExecutionPeriod = &durationpb.Duration{ +} + +} + if err := options.Unmarshal(dAtA[iNdEx:postIndex], x.MaxExecutionPeriod); err != nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, err +} + +iNdEx = postIndex + case 2: + if wireType != 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: wrong wireType = %d for field MaxMetadataLen", wireType) +} + +x.MaxMetadataLen = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + 
NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + x.MaxMetadataLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + +default: + iNdEx = preIndex + skippy, err := runtime.Skip(dAtA[iNdEx:]) + if err != nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + if (iNdEx + skippy) > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + if !options.DiscardUnknown { + x.unknownFields = append(x.unknownFields, dAtA[iNdEx:iNdEx+skippy]...) +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + +return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, nil +} + +return &protoiface.Methods{ + NoUnkeyedLiterals: struct{ +}{ +}, + Flags: protoiface.SupportMarshalDeterministic | protoiface.SupportUnmarshalDiscardUnknown, + Size: size, + Marshal: marshal, + Unmarshal: unmarshal, + Merge: nil, + CheckInitialized: nil, +} +} + +/ Code generated by protoc-gen-go. DO NOT EDIT. +/ versions: +/ protoc-gen-go v1.27.0 +/ protoc (unknown) +/ source: cosmos/group/module/v1/module.proto + +const ( + / Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + / Verify that runtime/protoimpl is sufficiently up-to-date. 
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+/ Module is the config object of the group module.
+type Module struct {
+	state         protoimpl.MessageState
+	sizeCache     protoimpl.SizeCache
+	unknownFields protoimpl.UnknownFields
+
+	/ max_execution_period defines the max duration after a proposal's voting period ends that members can send a MsgExec
+	/ to execute the proposal.
+	MaxExecutionPeriod *durationpb.Duration `protobuf:"bytes,1,opt,name=max_execution_period,json=maxExecutionPeriod,proto3" json:"max_execution_period,omitempty"`
+	/ max_metadata_len defines the max length of the metadata bytes field for various entities within the group module.
+	/ Defaults to 255 if not explicitly set.
+	MaxMetadataLen uint64 `protobuf:"varint,2,opt,name=max_metadata_len,json=maxMetadataLen,proto3" json:"max_metadata_len,omitempty"`
+}
+
+func (x *Module) Reset() {
+	*x = Module{}
+	if protoimpl.UnsafeEnabled {
+		mi := &file_cosmos_group_module_v1_module_proto_msgTypes[0]
+		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+		ms.StoreMessageInfo(mi)
+	}
+}
+
+func (x *Module) String() string {
+	return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Module) ProtoMessage() {}
+
+/ Deprecated: Use Module.ProtoReflect.Descriptor instead.
+func (*Module) + +Descriptor() ([]byte, []int) { + return file_cosmos_group_module_v1_module_proto_rawDescGZIP(), []int{0 +} +} + +func (x *Module) + +GetMaxExecutionPeriod() *durationpb.Duration { + if x != nil { + return x.MaxExecutionPeriod +} + +return nil +} + +func (x *Module) + +GetMaxMetadataLen() + +uint64 { + if x != nil { + return x.MaxMetadataLen +} + +return 0 +} + +var File_cosmos_group_module_v1_module_proto protoreflect.FileDescriptor + +var file_cosmos_group_module_v1_module_proto_rawDesc = []byte{ + 0x0a, 0x23, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x2f, 0x6d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x16, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x67, 0x72, + 0x6f, 0x75, 0x70, 0x2e, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x1a, 0x20, 0x63, + 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x61, 0x70, 0x70, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, + 0x61, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, + 0x14, 0x67, 0x6f, 0x67, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x67, 0x6f, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x11, 0x61, 0x6d, 0x69, 0x6e, 0x6f, 0x2f, 0x61, 0x6d, 0x69, + 0x6e, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xbc, 0x01, 0x0a, 0x06, 0x4d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x12, 0x5a, 0x0a, 0x14, 0x6d, 0x61, 0x78, 0x5f, 0x65, 0x78, 0x65, 0x63, 0x75, + 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x0d, 0xc8, 0xde, + 0x1f, 
0x00, 0x98, 0xdf, 0x1f, 0x01, 0xa8, 0xe7, 0xb0, 0x2a, 0x01, 0x52, 0x12, 0x6d, 0x61, 0x78, + 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x12, + 0x28, 0x0a, 0x10, 0x6d, 0x61, 0x78, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x5f, + 0x6c, 0x65, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0e, 0x6d, 0x61, 0x78, 0x4d, 0x65, + 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x4c, 0x65, 0x6e, 0x3a, 0x2c, 0xba, 0xc0, 0x96, 0xda, 0x01, + 0x26, 0x0a, 0x24, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, + 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2d, 0x73, 0x64, 0x6b, 0x2f, + 0x78, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x42, 0xd6, 0x01, 0x0a, 0x1a, 0x63, 0x6f, 0x6d, 0x2e, + 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x2e, 0x6d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x42, 0x0b, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x50, 0x72, + 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x30, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x73, 0x64, 0x6b, + 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x67, + 0x72, 0x6f, 0x75, 0x70, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x3b, 0x6d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x76, 0x31, 0xa2, 0x02, 0x03, 0x43, 0x47, 0x4d, 0xaa, 0x02, 0x16, + 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x2e, 0x4d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x2e, 0x56, 0x31, 0xca, 0x02, 0x16, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x5c, + 0x47, 0x72, 0x6f, 0x75, 0x70, 0x5c, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5c, 0x56, 0x31, 0xe2, + 0x02, 0x22, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x5c, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x5c, 0x4d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5c, 0x56, 0x31, 0x5c, 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, + 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x19, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x3a, 0x3a, 0x47, + 0x72, 0x6f, 0x75, 0x70, 0x3a, 0x3a, 0x4d, 
0x6f, 0x64, 0x75, 0x6c, 0x65, 0x3a, 0x3a, 0x56, 0x31, + 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, +} + +var ( + file_cosmos_group_module_v1_module_proto_rawDescOnce sync.Once + file_cosmos_group_module_v1_module_proto_rawDescData = file_cosmos_group_module_v1_module_proto_rawDesc +) + +func file_cosmos_group_module_v1_module_proto_rawDescGZIP() []byte { + file_cosmos_group_module_v1_module_proto_rawDescOnce.Do(func() { + file_cosmos_group_module_v1_module_proto_rawDescData = protoimpl.X.CompressGZIP(file_cosmos_group_module_v1_module_proto_rawDescData) +}) + +return file_cosmos_group_module_v1_module_proto_rawDescData +} + +var file_cosmos_group_module_v1_module_proto_msgTypes = make([]protoimpl.MessageInfo, 1) + +var file_cosmos_group_module_v1_module_proto_goTypes = []interface{ +}{ + (*Module)(nil), / 0: cosmos.group.module.v1.Module + (*durationpb.Duration)(nil), / 1: google.protobuf.Duration +} + +var file_cosmos_group_module_v1_module_proto_depIdxs = []int32{ + 1, / 0: cosmos.group.module.v1.Module.max_execution_period:type_name -> google.protobuf.Duration + 1, / [1:1] is the sub-list for method output_type + 1, / [1:1] is the sub-list for method input_type + 1, / [1:1] is the sub-list for extension type_name + 1, / [1:1] is the sub-list for extension extendee + 0, / [0:1] is the sub-list for field type_name +} + +func init() { + file_cosmos_group_module_v1_module_proto_init() +} + +func file_cosmos_group_module_v1_module_proto_init() { + if File_cosmos_group_module_v1_module_proto != nil { + return +} + if !protoimpl.UnsafeEnabled { + file_cosmos_group_module_v1_module_proto_msgTypes[0].Exporter = func(v interface{ +}, i int) + +interface{ +} { + switch v := v.(*Module); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil +} + +} + +} + +type x struct{ +} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{ +}).PkgPath(), + 
RawDescriptor: file_cosmos_group_module_v1_module_proto_rawDesc,
+			NumEnums:      0,
+			NumMessages:   1,
+			NumExtensions: 0,
+			NumServices:   0,
+		},
+		GoTypes:           file_cosmos_group_module_v1_module_proto_goTypes,
+		DependencyIndexes: file_cosmos_group_module_v1_module_proto_depIdxs,
+		MessageInfos:      file_cosmos_group_module_v1_module_proto_msgTypes,
+	}.Build()
+
+	File_cosmos_group_module_v1_module_proto = out.File
+	file_cosmos_group_module_v1_module_proto_rawDesc = nil
+	file_cosmos_group_module_v1_module_proto_goTypes = nil
+	file_cosmos_group_module_v1_module_proto_depIdxs = nil
+}
+```
+
+Pulsar is optional. The official [`protoc-gen-go`](https://developers.google.com/protocol-buffers/docs/reference/go-generated) plugin can be used as well.
+
+## Dependency Definition
+
+Once the configuration proto is defined, the module's `module.go` must declare which dependencies the module requires.
+The boilerplate is similar for all modules.
+
+All methods, structs, and their fields must be public (exported) for `depinject` to work.
+
+1.
Import the module configuration generated package: + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, 
in.AccountKeeper, in.BankKeeper, in.Registry)
+
+	return GroupOutputs{
+		GroupKeeper: k,
+		Module:      m,
+	}
+}
+	```
+
+	Define an `init()` function for the `providers` of the module configuration: it registers the module configuration message and the wiring of the module (see the `init()` function in the listing above).
+
+2.
Ensure that the module implements the `appmodule.AppModule` interface: + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.EndBlockAppModule = AppModule{ + } + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var _ appmodule.AppModule = AppModule{ + } + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + func (am AppModule) + + NewHandler() + + sdk.Handler { + return nil + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + + return []abci.ValidatorUpdate{ + } + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, 
k, in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +3. Define a struct that inherits `depinject.In` and define the module inputs (i.e. module dependencies): + + * `depinject` provides the right dependencies to the module. + * `depinject` also checks that all dependencies are provided. + + :::tip + For making a dependency optional, add the `optional:"true"` struct tag.\ + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. 
+ const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. + func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. 
+ func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. + func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. 
+ func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. + func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. 
+ func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +4. Define the module outputs with a public struct that inherits `depinject.Out`: + The module outputs are the dependencies that the module provides to other modules. It is usually the module itself and its keeper. 
+ + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, 
in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +5. Create a function named `ProvideModule` (as called in 1.) and use the inputs for instantiating the module outputs. + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. 
+const ConsensusVersion = 2 + +var ( + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var ( + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasEndBlocker = AppModule{ +} +) + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. 
+func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. +func (am AppModule) + +EndBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return EndBlocker(c, am.keeper) +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +The `ProvideModule` function should return an instance of `cosmossdk.io/core/appmodule.AppModule` which implements +one or more app module extension interfaces for initializing the module. 
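The inputs-to-outputs flow described above can be sketched without the SDK to make the pattern concrete. The following is a hypothetical, dependency-free illustration of what `depinject` automates; the type names mirror the example but are simplified stand-ins, not the real SDK types:

```go
package main

import "fmt"

// Hypothetical stand-ins for the module's configuration and keeper.
type Config struct{ MaxMetadataLen uint64 }
type Keeper struct{ Cfg Config }

// GroupInputs mirrors the depinject.In pattern: every field is a
// dependency the container must supply before calling the provider.
type GroupInputs struct {
	Config Config
}

// GroupOutputs mirrors depinject.Out: what the module offers to others.
type GroupOutputs struct {
	Keeper Keeper
}

// ProvideModule is the provider: a pure function from inputs to outputs,
// which lets the container order construction by the dependency graph.
func ProvideModule(in GroupInputs) GroupOutputs {
	return GroupOutputs{Keeper: Keeper{Cfg: in.Config}}
}

func main() {
	// What depinject does under the hood, sketched by hand: resolve the
	// inputs, call the provider, and record the outputs for other modules.
	out := ProvideModule(GroupInputs{Config: Config{MaxMetadataLen: 255}})
	fmt.Println(out.Keeper.Cfg.MaxMetadataLen) // 255
}
```

In the real wiring, `depinject` inspects the `GroupInputs` struct, resolves each field from the outputs of other registered modules, and invokes `ProvideModule` for you.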
+ +Following is the complete app wiring configuration for `group`: + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var ( + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasEndBlocker = AppModule{ +} +) + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. 
+func (am AppModule) + +EndBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return EndBlocker(c, am.keeper) +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, 
in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +The module is now ready to be used with `depinject` by a chain developer. + +## Integrate in an application + +The App Wiring is done in `app_config.go` / `app.yaml` and `app_v2.go` and is explained in detail in the [overview of `app_v2.go`](/docs/sdk/v0.50/documentation/application-framework/app-go-v2). diff --git a/docs/sdk/v0.50/documentation/module-system/errors.mdx b/docs/sdk/v0.50/documentation/module-system/errors.mdx new file mode 100644 index 00000000..9d91abd2 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/errors.mdx @@ -0,0 +1,702 @@ +--- +title: Errors +--- + + +**Synopsis** +This document outlines the recommended usage and APIs for error handling in Cosmos SDK modules. + + +Modules are encouraged to define and register their own errors to provide better +context on failed message or handler execution. Typically, these errors should be +common or general errors which can be further wrapped to provide additional specific +execution context. + +## Registration + +Modules should define and register their custom errors in `x/{module}/errors.go`. +Registration of errors is handled via the [`errors` package](https://github.com/cosmos/cosmos-sdk/blob/main/errors/errors.go). 
Example:

```go expandable
package types

import "cosmossdk.io/errors"

// x/distribution module sentinel errors
var (
	ErrEmptyDelegatorAddr      = errors.Register(ModuleName, 2, "delegator address is empty")
	ErrEmptyWithdrawAddr       = errors.Register(ModuleName, 3, "withdraw address is empty")
	ErrEmptyValidatorAddr      = errors.Register(ModuleName, 4, "validator address is empty")
	ErrEmptyDelegationDistInfo = errors.Register(ModuleName, 5, "no delegation distribution info")
	ErrNoValidatorDistInfo     = errors.Register(ModuleName, 6, "no validator distribution info")
	ErrNoValidatorCommission   = errors.Register(ModuleName, 7, "no validator commission to withdraw")
	ErrSetWithdrawAddrDisabled = errors.Register(ModuleName, 8, "set withdraw address disabled")
	ErrBadDistribution         = errors.Register(ModuleName, 9, "community pool does not have sufficient coins to distribute")
	ErrInvalidProposalAmount   = errors.Register(ModuleName, 10, "invalid community pool spend proposal amount")
	ErrEmptyProposalRecipient  = errors.Register(ModuleName, 11, "invalid community pool spend proposal recipient")
	ErrNoValidatorExists       = errors.Register(ModuleName, 12, "validator does not exist")
	ErrNoDelegationExists      = errors.Register(ModuleName, 13, "delegation does not exist")
)
```

Each custom module error must provide the codespace, which is typically the module name (e.g. "distribution") and is unique per module, and a uint32 code. Together, the codespace and code provide a globally unique Cosmos SDK error. Typically, the code is monotonically increasing but does not necessarily have to be. The only restrictions on error codes are the following:

* Must be greater than one, as a code value of one is reserved for internal errors.
* Must be unique within the module.

Note, the Cosmos SDK provides a core set of *common* errors. These errors are defined in [`types/errors/errors.go`](https://github.com/cosmos/cosmos-sdk/blob/main/types/errors/errors.go).
+ +## Wrapping + +The custom module errors can be returned as their concrete type as they already fulfill the `error` +interface. However, module errors can be wrapped to provide further context and meaning to failed +execution. + +Example: + +```go expandable +package keeper + +import ( + + "context" + "errors" + "fmt" + "cosmossdk.io/collections" + "cosmossdk.io/core/store" + "cosmossdk.io/log" + "cosmossdk.io/math" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/query" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var _ Keeper = (*BaseKeeper)(nil) + +/ Keeper defines a module interface that facilitates the transfer of coins +/ between accounts. +type Keeper interface { + SendKeeper + WithMintCoinsRestriction(MintingRestrictionFn) + +BaseKeeper + + InitGenesis(context.Context, *types.GenesisState) + +ExportGenesis(context.Context) *types.GenesisState + + GetSupply(ctx context.Context, denom string) + +sdk.Coin + HasSupply(ctx context.Context, denom string) + +bool + GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) + +IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) + +bool) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) + +HasDenomMetaData(ctx context.Context, denom string) + +bool + SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) + +GetAllDenomMetaData(ctx context.Context) []types.Metadata + IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) + +SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) + +error + 
SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + + DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error + UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error + + types.QueryServer +} + +/ BaseKeeper manages transfers between accounts. It implements the Keeper interface. +type BaseKeeper struct { + BaseSendKeeper + + ak types.AccountKeeper + cdc codec.BinaryCodec + storeService store.KVStoreService + mintCoinsRestrictionFn MintingRestrictionFn + logger log.Logger +} + +type MintingRestrictionFn func(ctx context.Context, coins sdk.Coins) + +error + +/ GetPaginatedTotalSupply queries for the supply, ignoring 0 coins, with a given pagination +func (k BaseKeeper) + +GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) { + results, pageResp, err := query.CollectionPaginate[string, math.Int](/docs/sdk/v0.50/documentation/module-system/ctx, k.Supply, pagination) + if err != nil { + return nil, nil, err +} + coins := sdk.NewCoins() + for _, res := range results { + coins = coins.Add(sdk.NewCoin(res.Key, res.Value)) +} + +return coins, pageResp, nil +} + +/ NewBaseKeeper returns a new BaseKeeper object with a given codec, dedicated +/ store key, an AccountKeeper implementation, and a parameter Subspace used to +/ store and fetch module parameters. The BaseKeeper also accepts a +/ blocklist map. 
This blocklist describes the set of addresses that are not allowed +/ to receive funds through direct and explicit actions, for example, by using a MsgSend or +/ by using a SendCoinsFromModuleToAccount execution. +func NewBaseKeeper( + cdc codec.BinaryCodec, + storeService store.KVStoreService, + ak types.AccountKeeper, + blockedAddrs map[string]bool, + authority string, + logger log.Logger, +) + +BaseKeeper { + if _, err := ak.AddressCodec().StringToBytes(authority); err != nil { + panic(fmt.Errorf("invalid bank authority address: %w", err)) +} + + / add the module name to the logger + logger = logger.With(log.ModuleKey, "x/"+types.ModuleName) + +return BaseKeeper{ + BaseSendKeeper: NewBaseSendKeeper(cdc, storeService, ak, blockedAddrs, authority, logger), + ak: ak, + cdc: cdc, + storeService: storeService, + mintCoinsRestrictionFn: func(ctx context.Context, coins sdk.Coins) + +error { + return nil +}, + logger: logger, +} +} + +/ WithMintCoinsRestriction restricts the bank Keeper used within a specific module to +/ have restricted permissions on minting via function passed in parameter. +/ Previous restriction functions can be nested as such: +/ +/ bankKeeper.WithMintCoinsRestriction(restriction1).WithMintCoinsRestriction(restriction2) + +func (k BaseKeeper) + +WithMintCoinsRestriction(check MintingRestrictionFn) + +BaseKeeper { + oldRestrictionFn := k.mintCoinsRestrictionFn + k.mintCoinsRestrictionFn = func(ctx context.Context, coins sdk.Coins) + +error { + err := check(ctx, coins) + if err != nil { + return err +} + +err = oldRestrictionFn(ctx, coins) + if err != nil { + return err +} + +return nil +} + +return k +} + +/ DelegateCoins performs delegation by deducting amt coins from an account with +/ address addr. For vesting accounts, delegations amounts are tracked for both +/ vesting and vested coins. The coins are then transferred from the delegator +/ address to a ModuleAccount address. 
If any of the delegation amounts are negative, +/ an error is returned. +func (k BaseKeeper) + +DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error { + moduleAcc := k.ak.GetAccount(ctx, moduleAccAddr) + if moduleAcc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleAccAddr) +} + if !amt.IsValid() { + return errorsmod.Wrap(sdkerrors.ErrInvalidCoins, amt.String()) +} + balances := sdk.NewCoins() + for _, coin := range amt { + balance := k.GetBalance(ctx, delegatorAddr, coin.GetDenom()) + if balance.IsLT(coin) { + return errorsmod.Wrapf( + sdkerrors.ErrInsufficientFunds, "failed to delegate; %s is smaller than %s", balance, amt, + ) +} + +balances = balances.Add(balance) + err := k.setBalance(ctx, delegatorAddr, balance.Sub(coin)) + if err != nil { + return err +} + +} + if err := k.trackDelegation(ctx, delegatorAddr, balances, amt); err != nil { + return errorsmod.Wrap(err, "failed to track delegation") +} + / emit coin spent event + sdkCtx := sdk.UnwrapSDKContext(ctx) + +sdkCtx.EventManager().EmitEvent( + types.NewCoinSpentEvent(delegatorAddr, amt), + ) + err := k.addCoins(ctx, moduleAccAddr, amt) + if err != nil { + return err +} + +return nil +} + +/ UndelegateCoins performs undelegation by crediting amt coins to an account with +/ address addr. For vesting accounts, undelegation amounts are tracked for both +/ vesting and vested coins. The coins are then transferred from a ModuleAccount +/ address to the delegator address. If any of the undelegation amounts are +/ negative, an error is returned. 
+func (k BaseKeeper) + +UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error { + moduleAcc := k.ak.GetAccount(ctx, moduleAccAddr) + if moduleAcc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleAccAddr) +} + if !amt.IsValid() { + return errorsmod.Wrap(sdkerrors.ErrInvalidCoins, amt.String()) +} + err := k.subUnlockedCoins(ctx, moduleAccAddr, amt) + if err != nil { + return err +} + if err := k.trackUndelegation(ctx, delegatorAddr, amt); err != nil { + return errorsmod.Wrap(err, "failed to track undelegation") +} + +err = k.addCoins(ctx, delegatorAddr, amt) + if err != nil { + return err +} + +return nil +} + +/ GetSupply retrieves the Supply from store +func (k BaseKeeper) + +GetSupply(ctx context.Context, denom string) + +sdk.Coin { + amt, err := k.Supply.Get(ctx, denom) + if err != nil { + return sdk.NewCoin(denom, math.ZeroInt()) +} + +return sdk.NewCoin(denom, amt) +} + +/ HasSupply checks if the supply coin exists in store. +func (k BaseKeeper) + +HasSupply(ctx context.Context, denom string) + +bool { + has, err := k.Supply.Has(ctx, denom) + +return has && err == nil +} + +/ GetDenomMetaData retrieves the denomination metadata. returns the metadata and true if the denom exists, +/ false otherwise. +func (k BaseKeeper) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) { + m, err := k.BaseViewKeeper.DenomMetadata.Get(ctx, denom) + +return m, err == nil +} + +/ HasDenomMetaData checks if the denomination metadata exists in store. 
+func (k BaseKeeper) + +HasDenomMetaData(ctx context.Context, denom string) + +bool { + has, err := k.BaseViewKeeper.DenomMetadata.Has(ctx, denom) + +return has && err == nil +} + +/ GetAllDenomMetaData retrieves all denominations metadata +func (k BaseKeeper) + +GetAllDenomMetaData(ctx context.Context) []types.Metadata { + denomMetaData := make([]types.Metadata, 0) + +k.IterateAllDenomMetaData(ctx, func(metadata types.Metadata) + +bool { + denomMetaData = append(denomMetaData, metadata) + +return false +}) + +return denomMetaData +} + +/ IterateAllDenomMetaData iterates over all the denominations metadata and +/ provides the metadata to a callback. If true is returned from the +/ callback, iteration is halted. +func (k BaseKeeper) + +IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) { + err := k.BaseViewKeeper.DenomMetadata.Walk(ctx, nil, func(_ string, metadata types.Metadata) (stop bool, err error) { + return cb(metadata), nil +}) + if err != nil && !errors.Is(err, collections.ErrInvalidIterator) { + panic(err) +} +} + +/ SetDenomMetaData sets the denominations metadata +func (k BaseKeeper) + +SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) { + _ = k.BaseViewKeeper.DenomMetadata.Set(ctx, denomMetaData.Base, denomMetaData) +} + +/ SendCoinsFromModuleToAccount transfers coins from a ModuleAccount to an AccAddress. +/ It will panic if the module account does not exist. An error is returned if +/ the recipient address is black-listed or if sending the tokens fails. 
+func (k BaseKeeper) + +SendCoinsFromModuleToAccount( + ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins, +) + +error { + senderAddr := k.ak.GetModuleAddress(senderModule) + if senderAddr == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + if k.BlockedAddr(recipientAddr) { + return errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", recipientAddr) +} + +return k.SendCoins(ctx, senderAddr, recipientAddr, amt) +} + +/ SendCoinsFromModuleToModule transfers coins from a ModuleAccount to another. +/ It will panic if either module account does not exist. +func (k BaseKeeper) + +SendCoinsFromModuleToModule( + ctx context.Context, senderModule, recipientModule string, amt sdk.Coins, +) + +error { + senderAddr := k.ak.GetModuleAddress(senderModule) + if senderAddr == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + +return k.SendCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ SendCoinsFromAccountToModule transfers coins from an AccAddress to a ModuleAccount. +/ It will panic if the module account does not exist. +func (k BaseKeeper) + +SendCoinsFromAccountToModule( + ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins, +) + +error { + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + +return k.SendCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ DelegateCoinsFromAccountToModule delegates coins and transfers them from a +/ delegator account to a module account. 
It will panic if the module account +/ does not exist or is unauthorized. +func (k BaseKeeper) + +DelegateCoinsFromAccountToModule( + ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins, +) + +error { + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + if !recipientAcc.HasPermission(authtypes.Staking) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to receive delegated coins", recipientModule)) +} + +return k.DelegateCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ UndelegateCoinsFromModuleToAccount undelegates the unbonding coins and transfers +/ them from a module account to the delegator account. It will panic if the +/ module account does not exist or is unauthorized. +func (k BaseKeeper) + +UndelegateCoinsFromModuleToAccount( + ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins, +) + +error { + acc := k.ak.GetModuleAccount(ctx, senderModule) + if acc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + if !acc.HasPermission(authtypes.Staking) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to undelegate coins", senderModule)) +} + +return k.UndelegateCoins(ctx, acc.GetAddress(), recipientAddr, amt) +} + +/ MintCoins creates new coins from thin air and adds it to the module account. +/ It will panic if the module account does not exist or is unauthorized. 
+func (k BaseKeeper) + +MintCoins(ctx context.Context, moduleName string, amounts sdk.Coins) + +error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + err := k.mintCoinsRestrictionFn(ctx, amounts) + if err != nil { + k.logger.Error(fmt.Sprintf("Module %q attempted to mint coins %s it doesn't have permission for, error %v", moduleName, amounts, err)) + +return err +} + acc := k.ak.GetModuleAccount(ctx, moduleName) + if acc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleName)) +} + if !acc.HasPermission(authtypes.Minter) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to mint tokens", moduleName)) +} + +err = k.addCoins(ctx, acc.GetAddress(), amounts) + if err != nil { + return err +} + for _, amount := range amounts { + supply := k.GetSupply(ctx, amount.GetDenom()) + +supply = supply.Add(amount) + +k.setSupply(ctx, supply) +} + +k.logger.Debug("minted coins from module account", "amount", amounts.String(), "from", moduleName) + + / emit mint event + sdkCtx.EventManager().EmitEvent( + types.NewCoinMintEvent(acc.GetAddress(), amounts), + ) + +return nil +} + +/ BurnCoins burns coins deletes coins from the balance of the module account. +/ It will panic if the module account does not exist or is unauthorized. 
+func (k BaseKeeper) + +BurnCoins(ctx context.Context, moduleName string, amounts sdk.Coins) + +error { + acc := k.ak.GetModuleAccount(ctx, moduleName) + if acc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleName)) +} + if !acc.HasPermission(authtypes.Burner) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to burn tokens", moduleName)) +} + err := k.subUnlockedCoins(ctx, acc.GetAddress(), amounts) + if err != nil { + return err +} + for _, amount := range amounts { + supply := k.GetSupply(ctx, amount.GetDenom()) + +supply = supply.Sub(amount) + +k.setSupply(ctx, supply) +} + +k.logger.Debug("burned tokens from module account", "amount", amounts.String(), "from", moduleName) + + / emit burn event + sdkCtx := sdk.UnwrapSDKContext(ctx) + +sdkCtx.EventManager().EmitEvent( + types.NewCoinBurnEvent(acc.GetAddress(), amounts), + ) + +return nil +} + +/ setSupply sets the supply for the given coin +func (k BaseKeeper) + +setSupply(ctx context.Context, coin sdk.Coin) { + / Bank invariants and IBC requires to remove zero coins. 
	if coin.IsZero() {
		_ = k.Supply.Remove(ctx, coin.Denom)
	} else {
		_ = k.Supply.Set(ctx, coin.Denom, coin.Amount)
	}
}

// trackDelegation tracks the delegation of the given account if it is a vesting account
func (k BaseKeeper) trackDelegation(ctx context.Context, addr sdk.AccAddress, balance, amt sdk.Coins) error {
	acc := k.ak.GetAccount(ctx, addr)
	if acc == nil {
		return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr)
	}

	vacc, ok := acc.(types.VestingAccount)
	if ok {
		// TODO: return error on account.TrackDelegation
		sdkCtx := sdk.UnwrapSDKContext(ctx)
		vacc.TrackDelegation(sdkCtx.BlockHeader().Time, balance, amt)
		k.ak.SetAccount(ctx, acc)
	}

	return nil
}

// trackUndelegation tracks undelegation of the given account if it is a vesting account
func (k BaseKeeper) trackUndelegation(ctx context.Context, addr sdk.AccAddress, amt sdk.Coins) error {
	acc := k.ak.GetAccount(ctx, addr)
	if acc == nil {
		return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr)
	}

	vacc, ok := acc.(types.VestingAccount)
	if ok {
		// TODO: return error on account.TrackUndelegation
		vacc.TrackUndelegation(amt)
		k.ak.SetAccount(ctx, acc)
	}

	return nil
}

// IterateTotalSupply iterates over the total supply, calling the given cb (callback)
// function with the balance of each coin.
// The iteration stops if the callback returns true.
func (k BaseViewKeeper) IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) bool) {
	err := k.Supply.Walk(ctx, nil, func(s string, m math.Int) (bool, error) {
		return cb(sdk.NewCoin(s, m)), nil
	})
	if err != nil && !errors.Is(err, collections.ErrInvalidIterator) {
		panic(err)
	}
}
```

Regardless of whether an error is wrapped, the Cosmos SDK's `errors` package provides the `Is` function to determine whether an error is of a particular kind.
## ABCI

If a module error is registered, the Cosmos SDK `errors` package allows ABCI information to be extracted through the `ABCIInfo` function. The package also provides `ResponseCheckTx` and `ResponseDeliverTx` as auxiliary functions to automatically build `CheckTx` and `DeliverTx` responses from an error.

diff --git a/docs/sdk/v0.50/documentation/module-system/genesis.mdx b/docs/sdk/v0.50/documentation/module-system/genesis.mdx
new file mode 100644
index 00000000..db2378f1
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/genesis.mdx
@@ -0,0 +1,767 @@
---
title: Module Genesis
---

**Synopsis**
Modules generally handle a subset of the state and, as such, they need to define the related subset of the genesis file as well as methods to initialize, verify and export it.

**Pre-requisite Readings**

* [Module Manager](/docs/sdk/v0.50/documentation/module-system/module-manager)
* [Keepers](/docs/sdk/v0.50/documentation/module-system/keeper)

## Type Definition

The subset of the genesis state defined by a given module is generally defined in a `genesis.proto` file ([more info](/docs/sdk/v0.50/learn/advanced/encoding#gogoproto) on how to define protobuf messages). The struct defining the module's subset of the genesis state is usually called `GenesisState` and contains all the module-related values that need to be initialized during the genesis process.

See an example of the `GenesisState` protobuf message definition from the `auth` module:

```protobuf
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/auth/v1beta1/genesis.proto
```

Next, we present the main genesis-related methods that module developers need to implement in order for their module to be used in Cosmos SDK applications.

### `DefaultGenesis`

The `DefaultGenesis()` method is a simple method that calls the constructor function for `GenesisState` with the default value for each parameter.
See an example from the `auth` module: + +```go expandable +package auth + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/depinject" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + + modulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/exported" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ ConsensusVersion defines the current x/auth module consensus version. +const ( + ConsensusVersion = 5 + GovModuleName = "gov" +) + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the auth module. +type AppModuleBasic struct { + ac address.Codec +} + +/ Name returns the auth module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the auth module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the auth +/ module. 
+func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the auth module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the auth module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the auth module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd returns the root query command for the auth module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the auth module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the auth module. +type AppModule struct { + AppModuleBasic + + accountKeeper keeper.AccountKeeper + randGenAccountsFn types.RandomGenesisAccountsFn + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (am AppModule) + +IsAppModule() { +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, accountKeeper keeper.AccountKeeper, randGenAccountsFn types.RandomGenesisAccountsFn, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + ac: accountKeeper.AddressCodec() +}, + accountKeeper: accountKeeper, + randGenAccountsFn: randGenAccountsFn, + legacySubspace: ss, +} +} + +/ Name returns the auth module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterServices registers a GRPC query service to respond to the +/ module-specific GRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.accountKeeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQueryServer(am.accountKeeper)) + m := keeper.NewMigrator(am.accountKeeper, cfg.QueryServer(), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 3 to 4: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 4, m.Migrate4To5); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 4 to 5", types.ModuleName)) +} +} + +/ InitGenesis performs genesis initialization for the auth module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.accountKeeper.InitGenesis(ctx, genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the auth +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.accountKeeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the auth module +func (am AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState, am.randGenAccountsFn) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for auth module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.accountKeeper.Schema) +} + +/ WeightedOperations doesn't return any auth module operation. +func (AppModule) + +WeightedOperations(_ module.SimulationState) []simtypes.WeightedOperation { + return nil +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideAddressCodec), + appmodule.Provide(ProvideModule), + ) +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. 
+func ProvideAddressCodec(config *modulev1.Module) + +address.Codec { + return authcodec.NewBech32Codec(config.Bech32Prefix) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + StoreService store.KVStoreService + Cdc codec.Codec + + RandomGenesisAccountsFn types.RandomGenesisAccountsFn `optional:"true"` + AccountI func() + +sdk.AccountI `optional:"true"` + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + AccountKeeper keeper.AccountKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + maccPerms := map[string][]string{ +} + for _, permission := range in.Config.ModuleAccountPermissions { + maccPerms[permission.Account] = permission.Permissions +} + + / default to governance authority if not provided + authority := types.NewModuleAddress(GovModuleName) + if in.Config.Authority != "" { + authority = types.NewModuleAddressOrBech32Address(in.Config.Authority) +} + if in.RandomGenesisAccountsFn == nil { + in.RandomGenesisAccountsFn = simulation.RandomGenesisAccounts +} + if in.AccountI == nil { + in.AccountI = types.ProtoBaseAccount +} + k := keeper.NewAccountKeeper(in.Cdc, in.StoreService, in.AccountI, maccPerms, in.Config.Bech32Prefix, authority.String()) + m := NewAppModule(in.Cdc, k, in.RandomGenesisAccountsFn, in.LegacySubspace) + +return ModuleOutputs{ + AccountKeeper: k, + Module: m +} +} +``` + +### `ValidateGenesis` + +The `ValidateGenesis(data GenesisState)` method is called to verify that the provided `genesisState` is correct. It should perform validity checks on each of the parameters listed in `GenesisState`. 
See an example from the `auth` module: + +```go expandable +package types + +import ( + + "encoding/json" + "fmt" + "sort" + + proto "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" +) + +var _ types.UnpackInterfacesMessage = GenesisState{ +} + +/ RandomGenesisAccountsFn defines the function required to generate custom account types +type RandomGenesisAccountsFn func(simState *module.SimulationState) + +GenesisAccounts + +/ NewGenesisState - Create a new genesis state +func NewGenesisState(params Params, accounts GenesisAccounts) *GenesisState { + genAccounts, err := PackAccounts(accounts) + if err != nil { + panic(err) +} + +return &GenesisState{ + Params: params, + Accounts: genAccounts, +} +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (g GenesisState) + +UnpackInterfaces(unpacker types.AnyUnpacker) + +error { + for _, any := range g.Accounts { + var account GenesisAccount + err := unpacker.UnpackAny(any, &account) + if err != nil { + return err +} + +} + +return nil +} + +/ DefaultGenesisState - Return a default genesis state +func DefaultGenesisState() *GenesisState { + return NewGenesisState(DefaultParams(), GenesisAccounts{ +}) +} + +/ GetGenesisStateFromAppState returns x/auth GenesisState given raw application +/ genesis state. +func GetGenesisStateFromAppState(cdc codec.Codec, appState map[string]json.RawMessage) + +GenesisState { + var genesisState GenesisState + if appState[ModuleName] != nil { + cdc.MustUnmarshalJSON(appState[ModuleName], &genesisState) +} + +return genesisState +} + +/ ValidateGenesis performs basic validation of auth genesis data returning an +/ error for any failed validation criteria. 
+func ValidateGenesis(data GenesisState) + +error { + if err := data.Params.Validate(); err != nil { + return err +} + +genAccs, err := UnpackAccounts(data.Accounts) + if err != nil { + return err +} + +return ValidateGenAccounts(genAccs) +} + +/ SanitizeGenesisAccounts sorts accounts and coin sets. +func SanitizeGenesisAccounts(genAccs GenesisAccounts) + +GenesisAccounts { + / Make sure there aren't any duplicated account numbers by fixing the duplicates with the lowest unused values. + / seenAccNum = easy lookup for used account numbers. + seenAccNum := map[uint64]bool{ +} + / dupAccNum = a map of account number to accounts with duplicate account numbers (excluding the 1st one seen). + dupAccNum := map[uint64]GenesisAccounts{ +} + for _, acc := range genAccs { + num := acc.GetAccountNumber() + if !seenAccNum[num] { + seenAccNum[num] = true +} + +else { + dupAccNum[num] = append(dupAccNum[num], acc) +} + +} + + / dupAccNums a sorted list of the account numbers with duplicates. + var dupAccNums []uint64 + for num := range dupAccNum { + dupAccNums = append(dupAccNums, num) +} + +sort.Slice(dupAccNums, func(i, j int) + +bool { + return dupAccNums[i] < dupAccNums[j] +}) + + / Change the account number of the duplicated ones to the first unused value. + globalNum := uint64(0) + for _, dupNum := range dupAccNums { + accs := dupAccNum[dupNum] + for _, acc := range accs { + for seenAccNum[globalNum] { + globalNum++ +} + if err := acc.SetAccountNumber(globalNum); err != nil { + panic(err) +} + +seenAccNum[globalNum] = true +} + +} + + / Then sort them all by account number. 
+ sort.Slice(genAccs, func(i, j int) + +bool { + return genAccs[i].GetAccountNumber() < genAccs[j].GetAccountNumber() +}) + +return genAccs +} + +/ ValidateGenAccounts validates an array of GenesisAccounts and checks for duplicates +func ValidateGenAccounts(accounts GenesisAccounts) + +error { + addrMap := make(map[string]bool, len(accounts)) + for _, acc := range accounts { + / check for duplicated accounts + addrStr := acc.GetAddress().String() + if _, ok := addrMap[addrStr]; ok { + return fmt.Errorf("duplicate account found in genesis state; address: %s", addrStr) +} + +addrMap[addrStr] = true + + / check account specific validation + if err := acc.Validate(); err != nil { + return fmt.Errorf("invalid account found in genesis state; address: %s, error: %s", addrStr, err.Error()) +} + +} + +return nil +} + +/ GenesisAccountIterator implements genesis account iteration. +type GenesisAccountIterator struct{ +} + +/ IterateGenesisAccounts iterates over all the genesis accounts found in +/ appGenesis and invokes a callback on each genesis account. If any call +/ returns true, iteration stops. 
+func (GenesisAccountIterator) + +IterateGenesisAccounts( + cdc codec.Codec, appGenesis map[string]json.RawMessage, cb func(sdk.AccountI) (stop bool), +) { + for _, genAcc := range GetGenesisStateFromAppState(cdc, appGenesis).Accounts { + acc, ok := genAcc.GetCachedValue().(sdk.AccountI) + if !ok { + panic("expected account") +} + if cb(acc) { + break +} + +} +} + +/ PackAccounts converts GenesisAccounts to Any slice +func PackAccounts(accounts GenesisAccounts) ([]*types.Any, error) { + accountsAny := make([]*types.Any, len(accounts)) + for i, acc := range accounts { + msg, ok := acc.(proto.Message) + if !ok { + return nil, fmt.Errorf("cannot proto marshal %T", acc) +} + +any, err := types.NewAnyWithValue(msg) + if err != nil { + return nil, err +} + +accountsAny[i] = any +} + +return accountsAny, nil +} + +/ UnpackAccounts converts Any slice to GenesisAccounts +func UnpackAccounts(accountsAny []*types.Any) (GenesisAccounts, error) { + accounts := make(GenesisAccounts, len(accountsAny)) + for i, any := range accountsAny { + acc, ok := any.GetCachedValue().(GenesisAccount) + if !ok { + return nil, fmt.Errorf("expected genesis account") +} + +accounts[i] = acc +} + +return accounts, nil +} +``` + +## Other Genesis Methods + +Other than the methods related directly to `GenesisState`, module developers are expected to implement two other methods as part of the [`AppModuleGenesis` interface](/docs/sdk/v0.50/documentation/module-system/module-manager#appmodulegenesis) (only if the module needs to initialize a subset of state in genesis). These methods are [`InitGenesis`](#initgenesis) and [`ExportGenesis`](#exportgenesis). + +### `InitGenesis` + +The `InitGenesis` method is executed during [`InitChain`](/docs/sdk/v0.50/learn/advanced/baseapp#initchain) when the application is first started. 
Given a `GenesisState`, it initializes the subset of the state managed by the module by using the module's [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper) setter function on each parameter within the `GenesisState`. + +The [module manager](/docs/sdk/v0.50/documentation/module-system/module-manager#manager) of the application is responsible for calling the `InitGenesis` method of each of the application's modules in order. This order is set by the application developer via the manager's `SetOrderInitGenesis` method, which is called in the [application's constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). + +See an example of `InitGenesis` from the `auth` module: + +```go expandable +package keeper + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ InitGenesis - Init store state from genesis data +/ +/ CONTRACT: old coins from the FeeCollectionKeeper need to be transferred through +/ a genesis port script to the new fee collector account +func (ak AccountKeeper) + +InitGenesis(ctx sdk.Context, data types.GenesisState) { + if err := ak.Params.Set(ctx, data.Params); err != nil { + panic(err) +} + +accounts, err := types.UnpackAccounts(data.Accounts) + if err != nil { + panic(err) +} + +accounts = types.SanitizeGenesisAccounts(accounts) + + / Set the accounts and make sure the global account number matches the largest account number (even if zero). 
+ var lastAccNum *uint64 + for _, acc := range accounts { + accNum := acc.GetAccountNumber() + for lastAccNum == nil || *lastAccNum < accNum { + n := ak.NextAccountNumber(ctx) + +lastAccNum = &n +} + +ak.SetAccount(ctx, acc) +} + +ak.GetModuleAccount(ctx, types.FeeCollectorName) +} + +/ ExportGenesis returns a GenesisState for a given context and keeper +func (ak AccountKeeper) + +ExportGenesis(ctx sdk.Context) *types.GenesisState { + params := ak.GetParams(ctx) + +var genAccounts types.GenesisAccounts + ak.IterateAccounts(ctx, func(account sdk.AccountI) + +bool { + genAccount := account.(types.GenesisAccount) + +genAccounts = append(genAccounts, genAccount) + +return false +}) + +return types.NewGenesisState(params, genAccounts) +} +``` + +### `ExportGenesis` + +The `ExportGenesis` method is executed whenever an export of the state is made. It takes the latest known version of the subset of the state managed by the module and creates a new `GenesisState` out of it. This is mainly used when the chain needs to be upgraded via a hard fork. + +See an example of `ExportGenesis` from the `auth` module. + +```go expandable +package keeper + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ InitGenesis - Init store state from genesis data +/ +/ CONTRACT: old coins from the FeeCollectionKeeper need to be transferred through +/ a genesis port script to the new fee collector account +func (ak AccountKeeper) + +InitGenesis(ctx sdk.Context, data types.GenesisState) { + if err := ak.Params.Set(ctx, data.Params); err != nil { + panic(err) +} + +accounts, err := types.UnpackAccounts(data.Accounts) + if err != nil { + panic(err) +} + +accounts = types.SanitizeGenesisAccounts(accounts) + + / Set the accounts and make sure the global account number matches the largest account number (even if zero). 
+ var lastAccNum *uint64 + for _, acc := range accounts { + accNum := acc.GetAccountNumber() + for lastAccNum == nil || *lastAccNum < accNum { + n := ak.NextAccountNumber(ctx) + +lastAccNum = &n +} + +ak.SetAccount(ctx, acc) +} + +ak.GetModuleAccount(ctx, types.FeeCollectorName) +} + +/ ExportGenesis returns a GenesisState for a given context and keeper +func (ak AccountKeeper) + +ExportGenesis(ctx sdk.Context) *types.GenesisState { + params := ak.GetParams(ctx) + +var genAccounts types.GenesisAccounts + ak.IterateAccounts(ctx, func(account sdk.AccountI) + +bool { + genAccount := account.(types.GenesisAccount) + +genAccounts = append(genAccounts, genAccount) + +return false +}) + +return types.NewGenesisState(params, genAccounts) +} +``` + +### GenesisTxHandler + +`GenesisTxHandler` is a way for modules to submit state transitions prior to the first block. This is used by `x/genutil` to submit the genesis transactions for the validators to be added to staking. + +```go +package genesis + +/ TxHandler is an interface that modules can implement to provide genesis state transitions +type TxHandler interface { + ExecuteGenesisTx([]byte) + +error +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/intro.mdx b/docs/sdk/v0.50/documentation/module-system/intro.mdx new file mode 100644 index 00000000..2806f96b --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/intro.mdx @@ -0,0 +1,332 @@ +--- +title: Introduction to Cosmos SDK Modules +--- + + +**Synopsis** +Modules define most of the logic of Cosmos SDK applications. Developers compose modules together using the Cosmos SDK to build their custom application-specific blockchains. This document outlines the basic concepts behind SDK modules and how to approach module management. 
+ + + +**Pre-requisite Readings** + +* [Anatomy of a Cosmos SDK application](/docs/sdk/v0.50/learn/beginner/app-anatomy) +* [Lifecycle of a Cosmos SDK transaction](/docs/sdk/v0.50/learn/beginner/tx-lifecycle) + + + +## Role of Modules in a Cosmos SDK Application + +The Cosmos SDK can be thought of as the Ruby-on-Rails of blockchain development. It comes with a core that provides the basic functionalities every blockchain application needs, like a [boilerplate implementation of the ABCI](/docs/sdk/v0.50/learn/advanced/baseapp) to communicate with the underlying consensus engine, a [`multistore`](/docs/sdk/v0.50/learn/advanced/store#multistore) to persist state, a [server](/docs/sdk/v0.50/learn/advanced/node) to form a full-node and [interfaces](/docs/sdk/v0.50/documentation/module-system/module-interfaces) to handle queries. + +On top of this core, the Cosmos SDK enables developers to build modules that implement the business logic of their application. In other words, SDK modules implement the bulk of the logic of applications, while the core does the wiring and enables modules to be composed together. The end goal is to build a robust ecosystem of open-source Cosmos SDK modules, making it increasingly easier to build complex blockchain applications. + +Cosmos SDK modules can be seen as little state-machines within the state-machine. They generally define a subset of the state using one or more `KVStore`s in the [main multistore](/docs/sdk/v0.50/learn/advanced/store), as well as a subset of [message types](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages). These messages are routed by one of the main components of Cosmos SDK core, [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp), to a module Protobuf [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services) that defines them. 
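The routing described above can be sketched in plain Go. This is a simplified illustration, not the actual `BaseApp` implementation: the `Msg`, `Handler`, and `Router` types here are hypothetical stand-ins for the SDK's Protobuf-based `Msg` service machinery.

```go
package main

import "fmt"

// Msg is a hypothetical stand-in for an sdk.Msg; real messages are Protobuf
// types served by a module's `Msg` service.
type Msg interface {
	Route() string // name of the module that defines the message
}

// Handler stands in for a module's Msg service method.
type Handler func(msg Msg) (string, error)

// Router mimics the role BaseApp plays: it maps each message type to the
// module that defines it.
type Router struct {
	routes map[string]Handler
}

func NewRouter() *Router {
	return &Router{routes: map[string]Handler{}}
}

func (r *Router) AddRoute(module string, h Handler) {
	r.routes[module] = h
}

// Dispatch routes a decoded message to its module's handler, the way BaseApp
// routes the messages extracted from a transaction.
func (r *Router) Dispatch(msg Msg) (string, error) {
	h, ok := r.routes[msg.Route()]
	if !ok {
		return "", fmt.Errorf("unrecognized message route: %s", msg.Route())
	}
	return h(msg)
}

// MsgSend is a toy bank-module message.
type MsgSend struct{ From, To, Amount string }

func (MsgSend) Route() string { return "bank" }

func main() {
	r := NewRouter()
	r.AddRoute("bank", func(msg Msg) (string, error) {
		send := msg.(MsgSend)
		return fmt.Sprintf("sent %s from %s to %s", send.Amount, send.From, send.To), nil
	})

	res, err := r.Dispatch(MsgSend{From: "alice", To: "bob", Amount: "10atom"})
	if err != nil {
		panic(err)
	}
	fmt.Println(res) // prints "sent 10atom from alice to bob"
}
```

In the real SDK the routing table is built from each module's registered `Msg` service, and a message that no module claims fails decoding long before this point; the diagram below shows the same flow end to end.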
+ +```text expandable + + + | + | Transaction relayed from the full-node's consensus engine + | to the node's application via DeliverTx + | + | + | + +---------------------v--------------------------+ + | APPLICATION | + | | + | Using baseapp's methods: Decode the Tx, | + | extract and route the message(s) | + | | + +---------------------+--------------------------+ + | + | + | + +---------------------------+ + | + | + | + | Message routed to the correct + | module to be processed + | + | ++----------------+ +---------------+ +----------------+ +------v----------+ +| | | | | | | | +| AUTH MODULE | | BANK MODULE | | STAKING MODULE | | GOV MODULE | +| | | | | | | | +| | | | | | | Handles message,| +| | | | | | | Updates state | +| | | | | | | | ++----------------+ +---------------+ +----------------+ +------+----------+ + | + | + | + | + +--------------------------+ + | + | Return result to the underlying consensus engine (e.g. CometBFT) + | (0=Ok, 1=Err) + v +``` + +As a result of this architecture, building a Cosmos SDK application usually revolves around writing modules to implement the specialized logic of the application and composing them with existing modules to complete the application. Developers will generally work on modules that implement logic needed for their specific use case that do not exist yet, and will use existing modules for more generic functionalities like staking, accounts, or token management. + +### Modules as Sudo + +Modules have the ability to perform actions that are not available to regular users. This is because modules are given sudo permissions by the state machine. Modules can reject another module's desire to execute a function, but this logic must be explicit. 
Examples of this can be seen when modules create functions to modify parameters: + +```go expandable +package keeper + +import ( + + "context" + "github.com/hashicorp/go-metrics" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/x/bank/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +type msgServer struct { + Keeper +} + +var _ types.MsgServer = msgServer{ +} + +/ NewMsgServerImpl returns an implementation of the bank MsgServer interface +/ for the provided Keeper. +func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + var ( + from, to []byte + err error + ) + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !msg.Amount.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if !msg.Amount.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + 
float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(ctx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + if len(msg.Inputs) == 0 { + return nil, types.ErrNoInputs +} + if len(msg.Inputs) != 1 { + return nil, types.ErrMultipleSenders +} + if len(msg.Outputs) == 0 { + return nil, types.ErrNoOutputs +} + if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err != nil { + return nil, err +} + + / NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + if base, ok := k.Keeper.(BaseKeeper); ok { + accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) + if err != nil { + return nil, err +} + if k.BlockedAddr(accAddr) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(ctx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, errorsmod.Wrapf(types.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) +} + if err := req.Params.Validate(); err != nil { + return nil, err +} + if err := k.SetParams(ctx, req.Params); err != nil { + return nil, err +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} + +func (k msgServer) + +SetSendEnabled(ctx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { + if 
k.GetAuthority() != msg.Authority { + return nil, errorsmod.Wrapf(types.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) +} + seen := map[string]bool{ +} + for _, se := range msg.SendEnabled { + if _, alreadySeen := seen[se.Denom]; alreadySeen { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) +} + +seen[se.Denom] = true + if err := se.Validate(); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err) +} + +} + for _, denom := range msg.UseDefaultFor { + if err := sdk.ValidateDenom(denom); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err) +} + +} + if len(msg.SendEnabled) > 0 { + k.SetAllSendEnabled(ctx, msg.SendEnabled) +} + if len(msg.UseDefaultFor) > 0 { + k.DeleteSendEnabled(ctx, msg.UseDefaultFor...) +} + +return &types.MsgSetSendEnabledResponse{ +}, nil +} + +func (k msgServer) + +Burn(goCtx context.Context, msg *types.MsgBurn) (*types.MsgBurnResponse, error) { + var ( + from []byte + err error + ) + +var coins sdk.Coins + for _, coin := range msg.Amount { + coins = coins.Add(sdk.NewCoin(coin.Denom, coin.Amount)) +} + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !coins.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, coins.String()) +} + if !coins.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, coins.String()) +} + +err = k.BurnCoins(goCtx, from, coins) + if err != nil { + return nil, err +} + +return &types.MsgBurnResponse{ +}, nil +} +``` + +## How to Approach Building Modules as a Developer + +While there are no definitive guidelines 
for writing modules, here are some important design principles developers should keep in mind when building them: + +* **Composability**: Cosmos SDK applications are almost always composed of multiple modules. This means developers need to carefully consider the integration of their module not only with the core of the Cosmos SDK, but also with other modules. The former is achieved by following standard design patterns outlined [here](#main-components-of-sdk-modules), while the latter is achieved by properly exposing the store(s) of the module via the [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper). +* **Specialization**: A direct consequence of the **composability** feature is that modules should be **specialized**. Developers should carefully establish the scope of their module and not batch multiple functionalities into the same module. This separation of concerns enables modules to be re-used in other projects and improves the upgradability of the application. **Specialization** also plays an important role in the [object-capabilities model](/docs/sdk/v0.50/learn/advanced/ocap) of the Cosmos SDK. +* **Capabilities**: Most modules need to read and/or write to the store(s) of other modules. However, in an open-source environment, it is possible for some modules to be malicious. That is why module developers need to carefully think not only about how their module interacts with other modules, but also about how to give access to the module's store(s). The Cosmos SDK takes a capabilities-oriented approach to inter-module security. This means that each store defined by a module is accessed by a `key`, which is held by the module's [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper). This `keeper` defines how to access the store(s) and under what conditions. Access to the module's store(s) is done by passing a reference to the module's `keeper`. 
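The capabilities idea can be illustrated with a toy sketch (hypothetical types only; the real SDK uses `KVStoreService` and typed store keys rather than plain maps): each module's state sits behind a keeper that privately holds its store key, so other modules can only interact with that state through the methods the keeper chooses to expose.

```go
package main

import "fmt"

// store is a toy stand-in for one module's KVStore in the multistore.
type store map[string]string

// Keeper privately holds the key to its module's store. Other modules never
// see the key; they are handed a reference to the Keeper and get exactly the
// access its exported methods grant - nothing more.
type Keeper struct {
	key        string
	multistore map[string]store
}

// NewKeeper claims a store in the multistore under the module's key.
func NewKeeper(key string, multistore map[string]store) Keeper {
	multistore[key] = store{}
	return Keeper{key: key, multistore: multistore}
}

// Set and Get are the only ways in or out of the module's store.
func (k Keeper) Set(field, value string) { k.multistore[k.key][field] = value }
func (k Keeper) Get(field string) string { return k.multistore[k.key][field] }

func main() {
	multistore := map[string]store{}
	bankKeeper := NewKeeper("bank", multistore)
	bankKeeper.Set("supply", "1000atom")

	// A module that receives bankKeeper can read the supply through Get, but
	// it holds no store key of its own for the bank store, so it cannot write
	// to it unless the bank keeper exposes a setter.
	fmt.Println(bankKeeper.Get("supply")) // prints "1000atom"
}
```

Because access is mediated entirely by the keeper's method set, narrowing a module's exported interface is how a developer narrows what other modules can do to its state.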
+ +## Main Components of Cosmos SDK Modules + +Modules are by convention defined in the `./x/` subfolder (e.g. the `bank` module will be defined in the `./x/bank` folder). They generally share the same core components: + +* A [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper), used to access the module's store(s) and update the state. +* A [`Msg` service](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages), used to process messages when they are routed to the module by [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp#message-routing) and trigger state-transitions. +* A [query service](/docs/sdk/v0.50/documentation/module-system/query-services), used to process user queries when they are routed to the module by [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp#query-routing). +* Interfaces, for end users to query the subset of the state defined by the module and create `message`s of the custom types defined in the module. + +In addition to these components, modules implement the `AppModule` interface in order to be managed by the [`module manager`](/docs/sdk/v0.50/documentation/module-system/module-manager). + +Please refer to the [structure document](/docs/sdk/v0.50/documentation/module-system/structure) to learn about the recommended structure of a module's directory. diff --git a/docs/sdk/v0.50/documentation/module-system/invariants.mdx b/docs/sdk/v0.50/documentation/module-system/invariants.mdx new file mode 100644 index 00000000..4c635b72 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/invariants.mdx @@ -0,0 +1,529 @@ +--- +title: Invariants +--- + + +**Synopsis** +An invariant is a property of the application that should always be true. In the context of the Cosmos SDK, an `Invariant` is a function that checks for a particular invariant. These functions are useful to detect bugs early on and act upon them to limit their potential consequences (e.g. by halting the chain). 
They are also useful in the development process of the application to detect bugs via simulations. + + + +**Pre-requisite Readings** + +* [Keepers](/docs/sdk/v0.50/documentation/module-system/keeper) + + + +## Implementing `Invariant`s + +An `Invariant` is a function that checks for a particular invariant within a module. Module `Invariant`s must follow the `Invariant` type: + +```go expandable +package types + +import "fmt" + +/ An Invariant is a function which tests a particular invariant. +/ The invariant returns a descriptive message about what happened +/ and a boolean indicating whether the invariant has been broken. +/ The simulator will then halt and print the logs. +type Invariant func(ctx Context) (string, bool) + +/ Invariants defines a group of invariants +type Invariants []Invariant + +/ expected interface for registering invariants +type InvariantRegistry interface { + RegisterRoute(moduleName, route string, invar Invariant) +} + +/ FormatInvariant returns a standardized invariant message. +func FormatInvariant(module, name, msg string) + +string { + return fmt.Sprintf("%s: %s invariant\n%s\n", module, name, msg) +} +``` + +The `string` return value is the invariant message, which can be used when printing logs, and the `bool` return value is the actual result of the invariant check. + +In practice, each module implements `Invariant`s in a `keeper/invariants.go` file within the module's folder. 
The standard is to implement one `Invariant` function per logical grouping of invariants with the following model: + +```go +/ Example for an Invariant that checks balance-related invariants + +func BalanceInvariants(k Keeper) + +sdk.Invariant { + return func(ctx context.Context) (string, bool) { + / Implement checks for balance-related invariants +} +} +``` + +Additionally, module developers should generally implement an `AllInvariants` function that runs all the `Invariant`s functions of the module: + +```go expandable +/ AllInvariants runs all invariants of the module. +/ In this example, the module implements two Invariants: BalanceInvariants and DepositsInvariants + +func AllInvariants(k Keeper) + +sdk.Invariant { + return func(ctx context.Context) (string, bool) { + res, stop := BalanceInvariants(k)(ctx) + if stop { + return res, stop +} + +return DepositsInvariant(k)(ctx) +} +} +``` + +Finally, module developers need to implement the `RegisterInvariants` method as part of the [`AppModule` interface](/docs/sdk/v0.50/documentation/module-system/module-manager#appmodule). Indeed, the `RegisterInvariants` method of the module, implemented in the `module/module.go` file, typically only defers the call to a `RegisterInvariants` method implemented in the `keeper/invariants.go` file. 
The `RegisterInvariants` method registers a route for each `Invariant` function in the [`InvariantRegistry`](#invariant-registry): + +```go expandable +package keeper + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ RegisterInvariants registers all staking invariants +func RegisterInvariants(ir sdk.InvariantRegistry, k *Keeper) { + ir.RegisterRoute(types.ModuleName, "module-accounts", + ModuleAccountInvariants(k)) + +ir.RegisterRoute(types.ModuleName, "nonnegative-power", + NonNegativePowerInvariant(k)) + +ir.RegisterRoute(types.ModuleName, "positive-delegation", + PositiveDelegationInvariant(k)) + +ir.RegisterRoute(types.ModuleName, "delegator-shares", + DelegatorSharesInvariant(k)) +} + +/ AllInvariants runs all invariants of the staking module. +func AllInvariants(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + res, stop := ModuleAccountInvariants(k)(ctx) + if stop { + return res, stop +} + +res, stop = NonNegativePowerInvariant(k)(ctx) + if stop { + return res, stop +} + +res, stop = PositiveDelegationInvariant(k)(ctx) + if stop { + return res, stop +} + +return DelegatorSharesInvariant(k)(ctx) +} +} + +/ ModuleAccountInvariants checks that the bonded and notBonded ModuleAccounts pools +/ reflects the tokens actively bonded and not bonded +func ModuleAccountInvariants(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + bonded := math.ZeroInt() + notBonded := math.ZeroInt() + bondedPool := k.GetBondedPool(ctx) + notBondedPool := k.GetNotBondedPool(ctx) + bondDenom := k.BondDenom(ctx) + +k.IterateValidators(ctx, func(_ int64, validator types.ValidatorI) + +bool { + switch validator.GetStatus() { + case types.Bonded: + bonded = bonded.Add(validator.GetTokens()) + case types.Unbonding, types.Unbonded: + notBonded = notBonded.Add(validator.GetTokens()) + +default: + panic("invalid validator status") +} + 
+return false +}) + +k.IterateUnbondingDelegations(ctx, func(_ int64, ubd types.UnbondingDelegation) + +bool { + for _, entry := range ubd.Entries { + notBonded = notBonded.Add(entry.Balance) +} + +return false +}) + poolBonded := k.bankKeeper.GetBalance(ctx, bondedPool.GetAddress(), bondDenom) + poolNotBonded := k.bankKeeper.GetBalance(ctx, notBondedPool.GetAddress(), bondDenom) + broken := !poolBonded.Amount.Equal(bonded) || !poolNotBonded.Amount.Equal(notBonded) + + / Bonded tokens should equal sum of tokens with bonded validators + / Not-bonded tokens should equal unbonding delegations plus tokens on unbonded validators + return sdk.FormatInvariant(types.ModuleName, "bonded and not bonded module account coins", fmt.Sprintf( + "\tPool's bonded tokens: %v\n"+ + "\tsum of bonded tokens: %v\n"+ + "not bonded token invariance:\n"+ + "\tPool's not bonded tokens: %v\n"+ + "\tsum of not bonded tokens: %v\n"+ + "module accounts total (bonded + not bonded):\n"+ + "\tModule Accounts' tokens: %v\n"+ + "\tsum tokens: %v\n", + poolBonded, bonded, poolNotBonded, notBonded, poolBonded.Add(poolNotBonded), bonded.Add(notBonded))), broken +} +} + +/ NonNegativePowerInvariant checks that all stored validators have >= 0 power. 
+func NonNegativePowerInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + broken bool + ) + iterator := k.ValidatorsPowerStoreIterator(ctx) + for ; iterator.Valid(); iterator.Next() { + validator, found := k.GetValidator(ctx, iterator.Value()) + if !found { + panic(fmt.Sprintf("validator record not found for address: %X\n", iterator.Value())) +} + powerKey := types.GetValidatorsByPowerIndexKey(validator, k.PowerReduction(ctx)) + if !bytes.Equal(iterator.Key(), powerKey) { + broken = true + msg += fmt.Sprintf("power store invariance:\n\tvalidator.Power: %v"+ + "\n\tkey should be: %v\n\tkey in store: %v\n", + validator.GetConsensusPower(k.PowerReduction(ctx)), powerKey, iterator.Key()) +} + if validator.Tokens.IsNegative() { + broken = true + msg += fmt.Sprintf("\tnegative tokens for validator: %v\n", validator) +} + +} + +iterator.Close() + +return sdk.FormatInvariant(types.ModuleName, "nonnegative power", fmt.Sprintf("found invalid validator powers\n%s", msg)), broken +} +} + +/ PositiveDelegationInvariant checks that all stored delegations have > 0 shares. +func PositiveDelegationInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + count int + ) + delegations := k.GetAllDelegations(ctx) + for _, delegation := range delegations { + if delegation.Shares.IsNegative() { + count++ + msg += fmt.Sprintf("\tdelegation with negative shares: %+v\n", delegation) +} + if delegation.Shares.IsZero() { + count++ + msg += fmt.Sprintf("\tdelegation with zero shares: %+v\n", delegation) +} + +} + broken := count != 0 + + return sdk.FormatInvariant(types.ModuleName, "positive delegations", fmt.Sprintf( + "%d invalid delegations found\n%s", count, msg)), broken +} +} + +/ DelegatorSharesInvariant checks whether all the delegator shares which persist +/ in the delegator object add up to the correct total delegator shares +/ amount stored in each validator. 
+func DelegatorSharesInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + broken bool + ) + validators := k.GetAllValidators(ctx) + validatorsDelegationShares := map[string]math.LegacyDec{ +} + + / initialize a map: validator -> its delegation shares + for _, validator := range validators { + validatorsDelegationShares[validator.GetOperator().String()] = math.LegacyZeroDec() +} + + / iterate through all the delegations to calculate the total delegation shares for each validator + delegations := k.GetAllDelegations(ctx) + for _, delegation := range delegations { + delegationValidatorAddr := delegation.GetValidatorAddr().String() + validatorDelegationShares := validatorsDelegationShares[delegationValidatorAddr] + validatorsDelegationShares[delegationValidatorAddr] = validatorDelegationShares.Add(delegation.Shares) +} + + / for each validator, check if its total delegation shares calculated from the step above equals to its expected delegation shares + for _, validator := range validators { + expValTotalDelShares := validator.GetDelegatorShares() + calculatedValTotalDelShares := validatorsDelegationShares[validator.GetOperator().String()] + if !calculatedValTotalDelShares.Equal(expValTotalDelShares) { + broken = true + msg += fmt.Sprintf("broken delegator shares invariance:\n"+ + "\tvalidator.DelegatorShares: %v\n"+ + "\tsum of Delegator.Shares: %v\n", expValTotalDelShares, calculatedValTotalDelShares) +} + +} + +return sdk.FormatInvariant(types.ModuleName, "delegator shares", msg), broken +} +} +``` + +For more, see an example of [`Invariant`s implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/invariants.go). + +## Invariant Registry + +The `InvariantRegistry` is a registry where the `Invariant`s of all the modules of an application are registered. 
There is only one `InvariantRegistry` per **application**, meaning module developers do not need to implement their own `InvariantRegistry` when building a module. **All module developers need to do is register their modules' invariants in the `InvariantRegistry`, as explained in the section above.** The rest of this section gives more information on the `InvariantRegistry` itself, and does not contain anything directly relevant to module developers.

At its core, the `InvariantRegistry` is defined in the Cosmos SDK as an interface:

```go expandable
package types

import "fmt"

// An Invariant is a function which tests a particular invariant.
// The invariant returns a descriptive message about what happened
// and a boolean indicating whether the invariant has been broken.
// The simulator will then halt and print the logs.
type Invariant func(ctx Context) (string, bool)

// Invariants defines a group of invariants
type Invariants []Invariant

// expected interface for registering invariants
type InvariantRegistry interface {
	RegisterRoute(moduleName, route string, invar Invariant)
}

// FormatInvariant returns a standardized invariant message.
func FormatInvariant(module, name, msg string) string {
	return fmt.Sprintf("%s: %s invariant\n%s\n", module, name, msg)
}
```

Typically, this interface is implemented in the `keeper` of a specific module.
The most used implementation of an `InvariantRegistry` can be found in the `crisis` module: + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "time" + "cosmossdk.io/collections" + "cosmossdk.io/core/address" + "cosmossdk.io/log" + + storetypes "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/crisis/types" +) + +/ Keeper - crisis keeper +type Keeper struct { + routes []types.InvarRoute + invCheckPeriod uint + storeService storetypes.KVStoreService + cdc codec.BinaryCodec + + / the address capable of executing a MsgUpdateParams message. Typically, this + / should be the x/gov module account. + authority string + + supplyKeeper types.SupplyKeeper + + feeCollectorName string / name of the FeeCollector ModuleAccount + + addressCodec address.Codec + + Schema collections.Schema + ConstantFee collections.Item[sdk.Coin] +} + +/ NewKeeper creates a new Keeper object +func NewKeeper( + cdc codec.BinaryCodec, storeService storetypes.KVStoreService, invCheckPeriod uint, + supplyKeeper types.SupplyKeeper, feeCollectorName, authority string, ac address.Codec, +) *Keeper { + sb := collections.NewSchemaBuilder(storeService) + k := &Keeper{ + storeService: storeService, + cdc: cdc, + routes: make([]types.InvarRoute, 0), + invCheckPeriod: invCheckPeriod, + supplyKeeper: supplyKeeper, + feeCollectorName: feeCollectorName, + authority: authority, + addressCodec: ac, + ConstantFee: collections.NewItem(sb, types.ConstantFeeKey, "constant_fee", codec.CollValue[sdk.Coin](/docs/sdk/v0.50/documentation/module-system/cdc)), +} + +schema, err := sb.Build() + if err != nil { + panic(err) +} + +k.Schema = schema + return k +} + +/ GetAuthority returns the x/crisis module's authority. +func (k *Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ Logger returns a module-specific logger. 
+func (k *Keeper) + +Logger(ctx context.Context) + +log.Logger { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +return sdkCtx.Logger().With("module", "x/"+types.ModuleName) +} + +/ RegisterRoute register the routes for each of the invariants +func (k *Keeper) + +RegisterRoute(moduleName, route string, invar sdk.Invariant) { + invarRoute := types.NewInvarRoute(moduleName, route, invar) + +k.routes = append(k.routes, invarRoute) +} + +/ Routes - return the keeper's invariant routes +func (k *Keeper) + +Routes() []types.InvarRoute { + return k.routes +} + +/ Invariants returns a copy of all registered Crisis keeper invariants. +func (k *Keeper) + +Invariants() []sdk.Invariant { + invars := make([]sdk.Invariant, len(k.routes)) + for i, route := range k.routes { + invars[i] = route.Invar +} + +return invars +} + +/ AssertInvariants asserts all registered invariants. If any invariant fails, +/ the method panics. +func (k *Keeper) + +AssertInvariants(ctx sdk.Context) { + logger := k.Logger(ctx) + start := time.Now() + invarRoutes := k.Routes() + n := len(invarRoutes) + for i, ir := range invarRoutes { + logger.Info("asserting crisis invariants", "inv", fmt.Sprint(i+1, "/", n), "name", ir.FullRoute()) + +invCtx, _ := ctx.CacheContext() + if res, stop := ir.Invar(invCtx); stop { + / TODO: Include app name as part of context to allow for this to be + / variable. + panic(fmt.Errorf("invariant broken: %s\n"+ + "\tCRITICAL please submit the following transaction:\n"+ + "\t\t tx crisis invariant-broken %s %s", res, ir.ModuleName, ir.Route)) +} + +} + diff := time.Since(start) + +logger.Info("asserted all invariants", "duration", diff, "height", ctx.BlockHeight()) +} + +/ InvCheckPeriod returns the invariant checks period. +func (k *Keeper) + +InvCheckPeriod() + +uint { + return k.invCheckPeriod +} + +/ SendCoinsFromAccountToFeeCollector transfers amt to the fee collector account. 
+func (k *Keeper) + +SendCoinsFromAccountToFeeCollector(ctx context.Context, senderAddr sdk.AccAddress, amt sdk.Coins) + +error { + return k.supplyKeeper.SendCoinsFromAccountToModule(ctx, senderAddr, k.feeCollectorName, amt) +} +``` + +The `InvariantRegistry` is therefore typically instantiated by instantiating the `keeper` of the `crisis` module in the [application's constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). + +`Invariant`s can be checked manually via [`message`s](/docs/sdk/v0.50/documentation/module-system/messages-and-queries), but most often they are checked automatically at the end of each block. Here is an example from the `crisis` module: + +```go expandable +package crisis + +import ( + + "context" + "time" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis/types" +) + +/ check all registered invariants +func EndBlocker(ctx context.Context, k keeper.Keeper) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + sdkCtx := sdk.UnwrapSDKContext(ctx) + if k.InvCheckPeriod() == 0 || sdkCtx.BlockHeight()%int64(k.InvCheckPeriod()) != 0 { + / skip running the invariant check + return +} + +k.AssertInvariants(sdkCtx) +} +``` + +In both cases, if one of the `Invariant`s returns false, the `InvariantRegistry` can trigger special logic (e.g. have the application panic and print the `Invariant`s message in the log). diff --git a/docs/sdk/v0.50/documentation/module-system/keeper.mdx b/docs/sdk/v0.50/documentation/module-system/keeper.mdx new file mode 100644 index 00000000..809bcf7d --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/keeper.mdx @@ -0,0 +1,458 @@ +--- +title: Keepers +--- + + +**Synopsis** +`Keeper`s refer to a Cosmos SDK abstraction whose role is to manage access to the subset of the state defined by various modules. 
`Keeper`s are module-specific, i.e. the subset of state defined by a module can only be accessed by a `keeper` defined in said module. If a module needs to access the subset of state defined by another module, a reference to the second module's internal `keeper` needs to be passed to the first one. This is done in `app.go` during the instantiation of module keepers.

**Pre-requisite Readings**

* [Introduction to Cosmos SDK Modules](/docs/sdk/v0.50/learn/intro/overview)

## Motivation

The Cosmos SDK is a framework that makes it easy for developers to build complex decentralized applications from scratch, mainly by composing modules together. As the ecosystem of open-source modules for the Cosmos SDK expands, it will become increasingly likely that some of these modules contain vulnerabilities, as a result of the negligence or malice of their developers.

The Cosmos SDK adopts an [object-capabilities-based approach](/docs/sdk/v0.50/learn/advanced/ocap) to help developers better protect their application from unwanted inter-module interactions, and `keeper`s are at the core of this approach. A `keeper` can be considered quite literally to be the gatekeeper of a module's store(s). Each store (typically an [`IAVL` Store](/docs/sdk/v0.50/learn/advanced/store#iavl-store)) defined within a module comes with a `storeKey`, which grants unlimited access to it. The module's `keeper` holds this `storeKey` (which should otherwise remain unexposed), and defines [methods](#implementing-methods) for reading and writing to the store(s).

The core idea behind the object-capabilities approach is to only reveal what is necessary to get the work done.
In practice, this means that instead of handling permissions of modules through access-control lists, module `keeper`s are passed a reference to the specific instance of the other modules' `keeper`s that they need to access (this is done in the [application's constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function)). As a consequence, a module can only interact with the subset of state defined in another module via the methods exposed by the instance of the other module's `keeper`. This is a great way for developers to control the interactions that their own module can have with modules developed by external developers.

## Type Definition

`keeper`s are generally implemented in a `/keeper/keeper.go` file located in the module's folder. By convention, the type `keeper` of a module is simply named `Keeper` and usually follows the following structure:

```go
type Keeper struct {
	// External keepers, if any

	// Store key(s)

	// codec

	// authority
}
```

For example, here is the type definition of the `keeper` from the `staking` module:

```go expandable
package keeper

import (
	"fmt"

	"cosmossdk.io/log"
	"cosmossdk.io/math"
	abci "github.com/cometbft/cometbft/abci/types"

	storetypes "cosmossdk.io/store/types"
	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/x/staking/types"
)

// Implements ValidatorSet interface
var _ types.ValidatorSet = Keeper{}

// Implements DelegationSet interface
var _ types.DelegationSet = Keeper{}

// Keeper of the x/staking store
type Keeper struct {
	storeKey   storetypes.StoreKey
	cdc        codec.BinaryCodec
	authKeeper types.AccountKeeper
	bankKeeper types.BankKeeper
	hooks      types.StakingHooks
	authority  string
}

// NewKeeper creates a new staking Keeper instance
func NewKeeper(
	cdc codec.BinaryCodec,
	key storetypes.StoreKey,
	ak types.AccountKeeper,
	bk types.BankKeeper,
	authority string,
) *Keeper {
	// ensure bonded and not bonded module accounts are set
	if addr := ak.GetModuleAddress(types.BondedPoolName); addr == nil {
		panic(fmt.Sprintf("%s module account has not been set", types.BondedPoolName))
	}

	if addr := ak.GetModuleAddress(types.NotBondedPoolName); addr == nil {
		panic(fmt.Sprintf("%s module account has not been set", types.NotBondedPoolName))
	}

	// ensure that authority is a valid AccAddress
	if _, err := ak.AddressCodec().StringToBytes(authority); err != nil {
		panic("authority is not a valid acc address")
	}

	return &Keeper{
		storeKey:   key,
		cdc:        cdc,
		authKeeper: ak,
		bankKeeper: bk,
		hooks:      nil,
		authority:  authority,
	}
}

// Logger returns a module-specific logger.
func (k Keeper) Logger(ctx sdk.Context) log.Logger {
	return ctx.Logger().With("module", "x/"+types.ModuleName)
}

// Hooks gets the hooks for staking *Keeper
func (k *Keeper) Hooks() types.StakingHooks {
	if k.hooks == nil {
		// return a no-op implementation if no hooks are set
		return types.MultiStakingHooks{}
	}

	return k.hooks
}

// SetHooks sets the validator hooks. In contrast to other receivers, this method must take a pointer due to the nature
// of the hooks interface and SDK start up sequence.
func (k *Keeper) SetHooks(sh types.StakingHooks) {
	if k.hooks != nil {
		panic("cannot set validator hooks twice")
	}

	k.hooks = sh
}

// GetLastTotalPower loads the last total validator power.
func (k Keeper) GetLastTotalPower(ctx sdk.Context) math.Int {
	store := ctx.KVStore(k.storeKey)
	bz := store.Get(types.LastTotalPowerKey)
	if bz == nil {
		return math.ZeroInt()
	}

	ip := sdk.IntProto{}
	k.cdc.MustUnmarshal(bz, &ip)

	return ip.Int
}

// SetLastTotalPower sets the last total validator power.
func (k Keeper) SetLastTotalPower(ctx sdk.Context, power math.Int) {
	store := ctx.KVStore(k.storeKey)
	bz := k.cdc.MustMarshal(&sdk.IntProto{Int: power})
	store.Set(types.LastTotalPowerKey, bz)
}

// GetAuthority returns the x/staking module's authority.
func (k Keeper) GetAuthority() string {
	return k.authority
}

// SetValidatorUpdates sets the ABCI validator power updates for the current block.
func (k Keeper) SetValidatorUpdates(ctx sdk.Context, valUpdates []abci.ValidatorUpdate) {
	store := ctx.KVStore(k.storeKey)
	bz := k.cdc.MustMarshal(&types.ValidatorUpdates{Updates: valUpdates})
	store.Set(types.ValidatorUpdatesKey, bz)
}

// GetValidatorUpdates returns the ABCI validator power updates within the current block.
func (k Keeper) GetValidatorUpdates(ctx sdk.Context) []abci.ValidatorUpdate {
	store := ctx.KVStore(k.storeKey)
	bz := store.Get(types.ValidatorUpdatesKey)

	var valUpdates types.ValidatorUpdates
	k.cdc.MustUnmarshal(bz, &valUpdates)

	return valUpdates.Updates
}
```

### Modern Keeper with Collections (ADR-062)

The recommended approach for new modules is to use the Collections framework for type-safe state management:

```go
package keeper

import (
	"cosmossdk.io/collections"
	"cosmossdk.io/core/store"

	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// types here refers to this module's own types package.
type Keeper struct {
	cdc          codec.BinaryCodec
	storeService store.KVStoreService

	// State management with collections
	Schema      collections.Schema
	Params      collections.Item[types.Params]
	Validators  collections.Map[[]byte, types.Validator]
	Delegations collections.Map[collections.Pair[sdk.AccAddress, sdk.ValAddress], types.Delegation]

	// Indexes for efficient queries
	DelegationsByValidator collections.Map[sdk.ValAddress, []sdk.AccAddress]

	// External keepers
	accountKeeper types.AccountKeeper
	bankKeeper    types.BankKeeper

	authority string
}

func NewKeeper(
	cdc codec.BinaryCodec,
	storeService store.KVStoreService,
	ak types.AccountKeeper,
	bk types.BankKeeper,
	authority string,
) Keeper {
	sb := collections.NewSchemaBuilder(storeService)

	k := Keeper{
		cdc:           cdc,
		storeService:  storeService,
		authority:     authority,
		accountKeeper: ak,
		bankKeeper:    bk,

		// Initialize collections
		Params: collections.NewItem(
			sb,
			collections.NewPrefix(0),
			"params",
			codec.CollValue[types.Params](cdc),
		),

		Validators: collections.NewMap(
			sb,
			collections.NewPrefix(1),
			"validators",
			collections.BytesKey,
			codec.CollValue[types.Validator](cdc),
		),

		Delegations: collections.NewMap(
			sb,
			collections.NewPrefix(2),
			"delegations",
			collections.PairKeyCodec(
				sdk.AccAddressKey,
				sdk.ValAddressKey,
			),
			codec.CollValue[types.Delegation](cdc),
		),
	}

	schema, err := sb.Build()
	if err != nil {
		panic(err)
	}
	k.Schema = schema

	return k
}
```

Let us go through the different parameters:

* An expected `keeper` is a `keeper` external to a module that is required by the internal `keeper` of said module. External `keeper`s are listed in the internal `keeper`'s type definition as interfaces. These interfaces are themselves defined in an `expected_keepers.go` file in the root of the module's folder. In this context, interfaces are used to reduce the number of dependencies, as well as to facilitate the maintenance of the module itself.
* `storeKey`s grant access to the store(s) of the [multistore](/docs/sdk/v0.50/learn/advanced/store) managed by the module. They should always remain unexposed to external modules.
* `cdc` is the [codec](/docs/sdk/v0.50/learn/advanced/encoding) used to marshal and unmarshal structs to/from `[]byte`. The `cdc` can be any of `codec.BinaryCodec`, `codec.JSONCodec` or `codec.Codec` based on your requirements. It can be either a proto or amino codec as long as they implement these interfaces.
* The authority listed is a module account or user account that has the right to change module-level parameters. Previously this was handled by the param module, which has been deprecated.
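To make the first bullet concrete, here is a self-contained sketch of the narrowing that an `expected_keepers.go` interface performs. The types and method names below are invented for illustration and use no SDK imports; a real module would declare the interface against the actual keeper it consumes:

```go
package main

import "fmt"

// Coin is a stand-in for sdk.Coin; illustrative only.
type Coin struct {
	Denom  string
	Amount int64
}

// BankKeeper is what this module would declare in expected_keepers.go:
// only the single method it actually calls, not the full x/bank surface.
type BankKeeper interface {
	SpendableBalance(addr, denom string) Coin
}

// fullBank stands in for the real x/bank keeper, which exposes far more.
type fullBank struct{ balances map[string]Coin }

func (b fullBank) SpendableBalance(addr, denom string) Coin { return b.balances[addr] }

// Burn is never reachable through the BankKeeper interface above.
func (b fullBank) Burn(addr string) { delete(b.balances, addr) }

// Keeper receives only the narrow capability at construction time.
type Keeper struct{ bank BankKeeper }

func main() {
	bk := fullBank{balances: map[string]Coin{"alice": {Denom: "stake", Amount: 42}}}
	k := Keeper{bank: bk}
	fmt.Println(k.bank.SpendableBalance("alice", "stake").Amount) // 42
}
```

Because `Keeper` holds the interface rather than the concrete type, methods like `Burn` simply do not exist from its point of view, which is the object-capabilities idea in miniature.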
Of course, it is possible to define different types of internal `keeper`s for the same module (e.g. a read-only `keeper`). Each type of `keeper` comes with its own constructor function, which is called from the [application's constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy). This is where `keeper`s are instantiated, and where developers make sure to pass correct instances of modules' `keeper`s to other modules that require them.

## Implementing Methods

`Keeper`s primarily expose getter and setter methods for the store(s) managed by their module. These methods should remain as simple as possible and strictly be limited to getting or setting the requested value, as validity checks should have already been performed by the [`Msg` server](/docs/sdk/v0.50/documentation/module-system/msg-services) when `keeper`s' methods are called.

Typically, a *getter* method will have the following signature

```go
func (k Keeper) Get(ctx context.Context, key string) returnType
```

and the method will go through the following steps:

1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. Then it's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety.
2. If it exists, get the `[]byte` value stored at location `[]byte(key)` using the `Get(key []byte)` method of the store.
3. Unmarshal the retrieved value from `[]byte` to `returnType` using the codec `cdc`. Return the value.

Similarly, a *setter* method will have the following signature

```go
func (k Keeper) Set(ctx context.Context, key string, value valueType)
```

and the method will go through the following steps:

1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`.
It's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety.
2. Marshal `value` to `[]byte` using the codec `cdc`.
3. Set the encoded value in the store at location `key` using the `Set(key []byte, value []byte)` method of the store.

For more, see an example of `keeper`'s [methods implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/keeper.go).

The [module `KVStore`](/docs/sdk/v0.50/learn/advanced/store#kvstore-and-commitkvstore-interfaces) also provides an `Iterator()` method which returns an `Iterator` object to iterate over a domain of keys.

This is an example from the `auth` module to iterate accounts:

```go expandable
package keeper

import (
	"context"
	"errors"

	"cosmossdk.io/collections"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

// NewAccountWithAddress implements AccountKeeperI.
func (ak AccountKeeper) NewAccountWithAddress(ctx context.Context, addr sdk.AccAddress) sdk.AccountI {
	acc := ak.proto()
	err := acc.SetAddress(addr)
	if err != nil {
		panic(err)
	}

	return ak.NewAccount(ctx, acc)
}

// NewAccount sets the next account number to a given account interface
func (ak AccountKeeper) NewAccount(ctx context.Context, acc sdk.AccountI) sdk.AccountI {
	if err := acc.SetAccountNumber(ak.NextAccountNumber(ctx)); err != nil {
		panic(err)
	}

	return acc
}

// HasAccount implements AccountKeeperI.
func (ak AccountKeeper) HasAccount(ctx context.Context, addr sdk.AccAddress) bool {
	has, _ := ak.Accounts.Has(ctx, addr)
	return has
}

// GetAccount implements AccountKeeperI.
func (ak AccountKeeper) GetAccount(ctx context.Context, addr sdk.AccAddress) sdk.AccountI {
	acc, err := ak.Accounts.Get(ctx, addr)
	if err != nil && !errors.Is(err, collections.ErrNotFound) {
		panic(err)
	}

	return acc
}

// GetAllAccounts returns all accounts in the accountKeeper.
func (ak AccountKeeper) GetAllAccounts(ctx context.Context) (accounts []sdk.AccountI) {
	ak.IterateAccounts(ctx, func(acc sdk.AccountI) (stop bool) {
		accounts = append(accounts, acc)
		return false
	})

	return accounts
}

// SetAccount implements AccountKeeperI.
func (ak AccountKeeper) SetAccount(ctx context.Context, acc sdk.AccountI) {
	err := ak.Accounts.Set(ctx, acc.GetAddress(), acc)
	if err != nil {
		panic(err)
	}
}

// RemoveAccount removes an account for the account mapper store.
// NOTE: this will cause supply invariant violation if called
func (ak AccountKeeper) RemoveAccount(ctx context.Context, acc sdk.AccountI) {
	err := ak.Accounts.Remove(ctx, acc.GetAddress())
	if err != nil {
		panic(err)
	}
}

// IterateAccounts iterates over all the stored accounts and performs a callback function.
// Stops iteration when callback returns true.
func (ak AccountKeeper) IterateAccounts(ctx context.Context, cb func(account sdk.AccountI) (stop bool)) {
	err := ak.Accounts.Walk(ctx, nil, func(_ sdk.AccAddress, value sdk.AccountI) (bool, error) {
		return cb(value), nil
	})
	if err != nil {
		panic(err)
	}
}
```
diff --git a/docs/sdk/v0.50/documentation/module-system/messages-and-queries.mdx b/docs/sdk/v0.50/documentation/module-system/messages-and-queries.mdx
new file mode 100644
index 00000000..acb8d178
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/messages-and-queries.mdx
@@ -0,0 +1,1595 @@
---
title: Messages and Queries
---

**Synopsis**
`Msg`s and `Queries` are the two primary objects handled by modules.
Most of the core components defined in a module, like `Msg` services, `keeper`s and `Query` services, exist to process `message`s and `queries`.

**Pre-requisite Readings**

* [Introduction to Cosmos SDK Modules](/docs/sdk/v0.50/learn/intro/overview)

## Messages

`Msg`s are objects whose end goal is to trigger state transitions. They are wrapped in [transactions](/docs/sdk/v0.50/learn/advanced/transactions), which may contain one or more of them.

When a transaction is relayed from the underlying consensus engine to the Cosmos SDK application, it is first decoded by [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp). Then, each message contained in the transaction is extracted and routed to the appropriate module via `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services). For a more detailed explanation of the lifecycle of a transaction, click [here](/docs/sdk/v0.50/learn/beginner/tx-lifecycle).

### `Msg` Services

Defining Protobuf `Msg` services is the recommended way to handle messages. A Protobuf `Msg` service should be created for each module, typically in `tx.proto` (see more info about [conventions and naming](/docs/sdk/v0.50/learn/advanced/encoding#faq)). It must have an RPC service method defined for each message in the module.

Each `Msg` service method must have exactly one argument, which must implement the `sdk.Msg` interface, and a Protobuf response. The naming convention is to call the RPC argument `Msg` and the RPC response `MsgResponse`. For example:

```protobuf
rpc Send(MsgSend) returns (MsgSendResponse);
```

See an example of a `Msg` service definition from the `x/bank` module:

```protobuf
// Reference: https://github.com/cosmos/cosmos-sdk/blob/28fa3b8/x/bank/proto/cosmos/bank/v1beta1/tx.proto#L13-L41
```

### `sdk.Msg` Interface

`sdk.Msg` is an alias of `proto.Message`.
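The `MsgServiceRouter` dispatch described earlier in this section can be pictured with a self-contained toy. No SDK types are used here; the real router resolves handlers from registered `Msg` services and is considerably more involved:

```go
package main

import (
	"errors"
	"fmt"
)

// handler processes one decoded message; the result stands in for a MsgResponse.
type handler func(msg any) (string, error)

// router is a toy stand-in for BaseApp's MsgServiceRouter: it maps a
// message's type URL to the module handler that processes it.
type router struct{ routes map[string]handler }

func (r *router) register(typeURL string, h handler) { r.routes[typeURL] = h }

func (r *router) route(typeURL string, msg any) (string, error) {
	h, ok := r.routes[typeURL]
	if !ok {
		return "", errors.New("unroutable message: " + typeURL)
	}
	return h(msg)
}

func main() {
	r := &router{routes: map[string]handler{}}
	r.register("/cosmos.bank.v1beta1.MsgSend", func(msg any) (string, error) {
		return "handled by x/bank", nil
	})

	res, _ := r.route("/cosmos.bank.v1beta1.MsgSend", nil)
	fmt.Println(res) // handled by x/bank

	_, err := r.route("/cosmos.gov.v1.MsgVote", nil)
	fmt.Println(err) // unroutable message: /cosmos.gov.v1.MsgVote
}
```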
+ +To attach a `ValidateBasic()` method to a message then you must add methods to the type adhereing to the `HasValidateBasic`. + +```go expandable +package types + +import ( + + "encoding/json" + fmt "fmt" + strings "strings" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "github.com/cosmos/cosmos-sdk/codec" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) + +type ( + / Msg defines the interface a transaction message needed to fulfill. + Msg = proto.Message + + / LegacyMsg defines the interface a transaction message needed to fulfill up through + / v0.47. + LegacyMsg interface { + Msg + + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / HasMsgs defines an interface a transaction must fulfill. + HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. 
+ GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() []byte +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / HasValidateBasic defines a type that has a ValidateBasic method. + / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. +func MsgTypeURL(msg proto.Message) + +string { + if m, ok := msg.(protov2.Message); ok { + return "/" + string(m.ProtoReflect().Descriptor().FullName()) +} + +return "/" + proto.MessageName(msg) +} + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. 
"cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} +``` + +Signers from the `GetSigners()` call is automated via a protobuf annotation. +Read more about the signer field [here](/docs/sdk/v0.50/documentation/module-system/protobuf-annotations). + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/e6848d99b55a65d014375b295bdd7f9641aac95e/proto/cosmos/bank/v1beta1/tx.proto#L40 +``` + +If there is a need for custom signers then there is an alternative path which can be taken. A function which returns `signing.CustomGetSigner` for a specific message can be defined. + +```go expandable +func ProvideCustomMsgTransactionGetSigners() + +signing.CustomGetSigner { + / Extract the signer from the signature. + signer, err := coretypes.LatestSigner(Tx).Sender(ethTx) + if err != nil { + return nil, err +} + + / Return the signer in the required format. + return signing.CustomGetSigner{ + MsgType: protoreflect.FullName(gogoproto.MessageName(&types.CustomMsg{ +})), + Fn: func(msg proto.Message) ([][]byte, error) { + return [][]byte{ + signer +}, nil +} + +} +} +``` + +This can be provided to the application using depinject's `Provide` method in the module that defines the type: + +```diff +func init() { + appconfig.RegisterModule(&modulev1.Module{}, +- appconfig.Provide(ProvideModule), ++ appconfig.Provide(ProvideModule, ProvideCustomMsgTransactionGetSigners), + ) +} +``` + +The Cosmos SDK uses Protobuf definitions to generate client and server code: + +* `MsgServer` interface defines the server API for the `Msg` service and its implementation is described as part of the [`Msg` services](/docs/sdk/v0.50/documentation/module-system/msg-services) documentation. +* Structures are generated for all RPC request and response types. 
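The `GetModuleNameFromTypeURL` helper quoted earlier in this section is simple enough to exercise in isolation. Re-implemented here without SDK imports (the function name differs from the SDK's only in casing):

```go
package main

import (
	"fmt"
	"strings"
)

// moduleNameFromTypeURL mirrors GetModuleNameFromTypeURL: the module name is
// assumed to be the second dot-separated element of the message's full name,
// e.g. "cosmos.bank.v1beta1.MsgSend" -> "bank". Inputs without a dot yield "".
func moduleNameFromTypeURL(input string) string {
	parts := strings.Split(input, ".")
	if len(parts) > 1 {
		return parts[1]
	}
	return ""
}

func main() {
	fmt.Println(moduleNameFromTypeURL("cosmos.bank.v1beta1.MsgSend"))        // bank
	fmt.Println(moduleNameFromTypeURL("cosmos.staking.v1beta1.MsgDelegate")) // staking
}
```

Note that a leading slash, as in a full type URL, does not change the result, since the slash stays attached to the first element.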
A `RegisterMsgServer` method is also generated and should be used to register the module's `MsgServer` implementation in the `RegisterServices` method from the [`AppModule` interface](/docs/sdk/v0.50/documentation/module-system/module-manager#appmodule).

In order for clients (CLI and gRPC-gateway) to have these URLs registered, the Cosmos SDK provides the function `RegisterMsgServiceDesc(registry codectypes.InterfaceRegistry, sd *grpc.ServiceDesc)` that should be called inside the module's [`RegisterInterfaces`](/docs/sdk/v0.50/documentation/module-system/module-manager#appmodulebasic) method, using the proto-generated `&_Msg_serviceDesc` as the `*grpc.ServiceDesc` argument.

## Queries

A `query` is a request for information made by end-users of applications through an interface and processed by a full-node. A `query` is received by a full-node through its consensus engine and relayed to the application via the ABCI. It is then routed to the appropriate module via `BaseApp`'s `QueryRouter` so that it can be processed by the module's [query service](/docs/sdk/v0.50/documentation/module-system/query-services). For a deeper look at the lifecycle of a `query`, click [here](/docs/sdk/v0.50/learn/beginner/query-lifecycle).

### gRPC Queries

Queries should be defined using [Protobuf services](https://protobuf.dev/programming-guides/proto2/). A `Query` service should be created per module in `query.proto`. This service lists endpoints starting with `rpc`.

Here's an example of such a `Query` service definition:

```protobuf
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/auth/v1beta1/query.proto#L14-L89
```

As `proto.Message`s, generated `Response` types implement by default the `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer).
+ +A `RegisterQueryServer` method is also generated and should be used to register the module's query server in the `RegisterServices` method from the [`AppModule` interface](/docs/sdk/v0.50/documentation/module-system/module-manager#appmodule). + +### Store Queries + +Store queries query directly for store keys. They use `clientCtx.QueryABCI(req abci.RequestQuery)` to return the full `abci.ResponseQuery` with inclusion Merkle proofs. + +See following examples: + +```go expandable +package baseapp + +import ( + + "context" + "crypto/sha256" + "fmt" + "os" + "sort" + "strings" + "syscall" + "time" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/store/rootmulti" + snapshottypes "cosmossdk.io/store/snapshots/types" + storetypes "cosmossdk.io/store/types" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc/codes" + grpcstatus "google.golang.org/grpc/status" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ Supported ABCI Query prefixes and paths +const ( + QueryPathApp = "app" + QueryPathCustom = "custom" + QueryPathP2P = "p2p" + QueryPathStore = "store" + + QueryPathBroadcastTx = "/cosmos.tx.v1beta1.Service/BroadcastTx" +) + +func (app *BaseApp) + +InitChain(req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + if req.ChainId != app.chainID { + return nil, fmt.Errorf("invalid chain-id on InitChain; expected: %s, got: %s", app.chainID, req.ChainId) +} + + / On a new chain, we consider the init chain block height as 0, even though + / req.InitialHeight is 1 by default. 
+ initHeader := cmtproto.Header{ + ChainID: req.ChainId, + Time: req.Time +} + +app.initialHeight = req.InitialHeight + + app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId) + + / Set the initial height, which will be used to determine if we are proposing + / or processing the first block or not. + app.initialHeight = req.InitialHeight + + / if req.InitialHeight is > 1, then we set the initial version on all stores + if req.InitialHeight > 1 { + initHeader.Height = req.InitialHeight + if err := app.cms.SetInitialVersion(req.InitialHeight); err != nil { + return nil, err +} + +} + + / initialize states with a correct header + app.setState(execModeFinalize, initHeader) + +app.setState(execModeCheck, initHeader) + + / Store the consensus params in the BaseApp's param store. Note, this must be + / done after the finalizeBlockState and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + err := app.StoreConsensusParams(app.finalizeBlockState.ctx, *req.ConsensusParams) + if err != nil { + return nil, err +} + +} + +defer func() { + / InitChain represents the state of the application BEFORE the first block, + / i.e. the genesis block. This means that when processing the app's InitChain + / handler, the block height is zero by default. However, after Commit is called + / the height needs to reflect the true block height. 
+ initHeader.Height = req.InitialHeight + app.checkState.ctx = app.checkState.ctx.WithBlockHeader(initHeader) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithBlockHeader(initHeader) +}() + if app.initChainer == nil { + return &abci.ResponseInitChain{ +}, nil +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithBlockGasMeter(storetypes.NewInfiniteGasMeter()) + +res, err := app.initChainer(app.finalizeBlockState.ctx, req) + if err != nil { + return nil, err +} + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + return nil, fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ) +} + +sort.Sort(abci.ValidatorUpdates(req.Validators)) + +sort.Sort(abci.ValidatorUpdates(res.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + return nil, fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i) +} + +} + +} + + / In the case of a new chain, AppHash will be the hash of an empty string. + / During an upgrade, it'll be the hash of the last committed block. + var appHash []byte + if !app.LastCommitID().IsZero() { + appHash = app.LastCommitID().Hash +} + +else { + / $ echo -n '' | sha256sum + / e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 + emptyHash := sha256.Sum256([]byte{ +}) + +appHash = emptyHash[:] +} + + / NOTE: We don't commit, but FinalizeBlock for block InitialHeight starts from + / this FinalizeBlockState. 
+ return &abci.ResponseInitChain{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: appHash, +}, nil +} + +func (app *BaseApp) + +Info(req *abci.RequestInfo) (*abci.ResponseInfo, error) { + lastCommitID := app.cms.LastCommitID() + +return &abci.ResponseInfo{ + Data: app.name, + Version: app.version, + AppVersion: app.appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +}, nil +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. +func (app *BaseApp) + +Query(_ context.Context, req *abci.RequestQuery) (resp *abci.ResponseQuery, err error) { + / add panic recovery for all queries + / + / Ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + resp = sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + +defer telemetry.MeasureSince(time.Now(), req.Path) + if req.Path == QueryPathBroadcastTx { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "can't route a broadcast tx message"), app.trace), nil +} + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req), nil +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), app.trace), nil +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + resp = handleQueryApp(app, path, req) + case QueryPathStore: + resp = handleQueryStore(app, 
path, *req) + case QueryPathP2P: + resp = handleQueryP2P(app, path) + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +return resp, nil +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ListSnapshots(req *abci.RequestListSnapshots) (*abci.ResponseListSnapshots, error) { + resp := &abci.ResponseListSnapshots{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp, nil +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return nil, err +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to convert ABCI snapshots", "err", err) + +return nil, err +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp, nil +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req *abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) { + if app.snapshotManager == nil { + return &abci.ResponseLoadSnapshotChunk{ +}, nil +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return nil, err +} + +return &abci.ResponseLoadSnapshotChunk{ + Chunk: chunk +}, nil +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req *abci.RequestOfferSnapshot) (*abci.ResponseOfferSnapshot, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT_FORMAT +}, nil + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil + + default: + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a + / different snapshot, so we ask CometBFT to abort all snapshot restoration. + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +ApplySnapshotChunk(req *abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +}, nil + + default: + app.logger.Error("failed to restore snapshot", "err", err) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req *abci.RequestCheckTx) (*abci.ResponseCheckTx, error) { + var mode execMode + switch { + case req.Type == abci.CheckTxType_New: + mode = execModeCheck + case req.Type == abci.CheckTxType_Recheck: + mode = execModeReCheck + + default: + return nil, fmt.Errorf("unknown RequestCheckTx type: %s", req.Type) +} + +gInfo, result, anteEvents, err := app.runTx(mode, req.Tx) + if err != nil { + return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace), nil +} + +return &abci.ResponseCheckTx{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +}, nil +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req *abci.RequestPrepareProposal) (resp *abci.ResponsePrepareProposal, err error) { + if app.prepareProposal == nil { + return nil, errors.New("PrepareProposal handler not set") +} + + / Always reset state given that PrepareProposal can timeout and be called + / again in a subsequent round. 
+ header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + +app.setState(execModePrepareProposal, header) + + / CometBFT must never call PrepareProposal with a height of 0. + / + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("PrepareProposal called with invalid height") +} + +app.prepareProposalState.ctx = app.getContextForProposal(app.prepareProposalState.ctx, req.Height). + WithVoteInfos(toVoteInfo(req.LocalLastCommit.Votes)). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithProposer(req.ProposerAddress). + WithExecMode(sdk.ExecModePrepareProposal). + WithCometInfo(prepareProposalInfo{ + req +}) + +app.prepareProposalState.ctx = app.prepareProposalState.ctx. + WithConsensusParams(app.GetConsensusParams(app.prepareProposalState.ctx)). + WithBlockGasMeter(app.getBlockGasMeter(app.prepareProposalState.ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = &abci.ResponsePrepareProposal{ +} + +} + +}() + +resp, err = app.prepareProposal(app.prepareProposalState.ctx, req) + if err != nil { + app.logger.Error("failed to prepare proposal", "height", req.Height, "error", err) + +return &abci.ResponsePrepareProposal{ +}, nil +} + +return resp, nil +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. 
In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ CometBFT that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { + if app.processProposal == nil { + return nil, errors.New("ProcessProposal handler not set") +} + + / CometBFT must never call ProcessProposal with a height of 0. + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("ProcessProposal called with invalid height") +} + + / Always reset state given that ProcessProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + +app.setState(execModeProcessProposal, header) + + / Since the application can get access to FinalizeBlock state and write to it, + / we must be sure to reset it in case ProcessProposal timeouts and is called + / again in a subsequent round. However, we only want to do this after we've + / processed the first block, as we want to avoid overwriting the finalizeState + / after state changes during InitChain. 
+ if req.Height > app.initialHeight { + app.setState(execModeFinalize, header) +} + +app.processProposalState.ctx = app.getContextForProposal(app.processProposalState.ctx, req.Height). + WithVoteInfos(req.ProposedLastCommit.Votes). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithCometInfo(cometInfo{ + ProposerAddress: req.ProposerAddress, + ValidatorsHash: req.NextValidatorsHash, + Misbehavior: req.Misbehavior, + LastCommit: req.ProposedLastCommit +}). + WithExecMode(sdk.ExecModeProcessProposal) + +app.processProposalState.ctx = app.processProposalState.ctx. + WithConsensusParams(app.GetConsensusParams(app.processProposalState.ctx)). + WithBlockGasMeter(app.getBlockGasMeter(app.processProposalState.ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +}() + +resp, err = app.processProposal(app.processProposalState.ctx, req) + if err != nil { + app.logger.Error("failed to process proposal", "height", req.Height, "error", err) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +return resp, nil +} + +/ ExtendVote implements the ExtendVote ABCI method and returns a ResponseExtendVote. +/ It calls the application's ExtendVote handler which is responsible for performing +/ application-specific business logic when sending a pre-commit for the NEXT +/ block height. The extensions response may be non-deterministic but must always +/ be returned, even if empty. +/ +/ Agreed upon vote extensions are made available to the proposer of the next +/ height and are committed in the subsequent height, i.e. H+2. 
An error is +/ returned if vote extensions are not enabled or if extendVote fails or panics. +func (app *BaseApp) + +ExtendVote(_ context.Context, req *abci.RequestExtendVote) (resp *abci.ResponseExtendVote, err error) { + / Always reset state given that ExtendVote and VerifyVoteExtension can timeout + / and be called again in a subsequent round. + emptyHeader := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height +} + +app.setState(execModeVoteExtension, emptyHeader) + if app.extendVote == nil { + return nil, errors.New("application ExtendVote handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(app.voteExtensionState.ctx) + if cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight <= 0 { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to ExtendVote at height %d", req.Height) +} + +app.voteExtensionState.ctx = app.voteExtensionState.ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVoteExtension) + + / add a deferred recover handler in case extendVote panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in ExtendVote", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +err = fmt.Errorf("recovered application panic in ExtendVote: %v", r) +} + +}() + +resp, err = app.extendVote(app.voteExtensionState.ctx, req) + if err != nil { + app.logger.Error("failed to extend vote", "height", req.Height, "error", err) + +return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} + +return resp, err +} + +/ VerifyVoteExtension implements the VerifyVoteExtension ABCI method and returns +/ a ResponseVerifyVoteExtension. 
It calls the applications' VerifyVoteExtension +/ handler which is responsible for performing application-specific business +/ logic in verifying a vote extension from another validator during the pre-commit +/ phase. The response MUST be deterministic. An error is returned if vote +/ extensions are not enabled or if verifyVoteExt fails or panics. +func (app *BaseApp) + +VerifyVoteExtension(req *abci.RequestVerifyVoteExtension) (resp *abci.ResponseVerifyVoteExtension, err error) { + if app.verifyVoteExt == nil { + return nil, errors.New("application VerifyVoteExtension handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(app.voteExtensionState.ctx) + if cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight <= 0 { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to VerifyVoteExtension at height %d", req.Height) +} + + / add a deferred recover handler in case verifyVoteExt panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in VerifyVoteExtension", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "validator", fmt.Sprintf("%X", req.ValidatorAddress), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in VerifyVoteExtension: %v", r) +} + +}() + +resp, err = app.verifyVoteExt(app.voteExtensionState.ctx, req) + if err != nil { + app.logger.Error("failed to verify vote extension", "height", req.Height, "error", err) + +return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_REJECT +}, nil +} + +return resp, err +} + +/ FinalizeBlock will execute the block proposal provided by RequestFinalizeBlock. +/ Specifically, it will execute an application's BeginBlock (if defined), followed +/ by the transactions in the proposal, finally followed by the application's +/ EndBlock (if defined). +/ +/ For each raw transaction, i.e. 
a byte slice, BaseApp will only execute it if +/ it adheres to the sdk.Tx interface. Otherwise, the raw transaction will be +/ skipped. This is to support compatibility with proposers injecting vote +/ extensions into the proposal, which should not themselves be executed in cases +/ where they adhere to the sdk.Tx interface. +func (app *BaseApp) + +FinalizeBlock(req *abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) { + var events []abci.Event + if err := app.validateFinalizeBlockHeight(req); err != nil { + return nil, err +} + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(storetypes.TraceContext( + map[string]any{"blockHeight": req.Height +}, + )) +} + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + + / Initialize the FinalizeBlock state. If this is the first block, it should + / already be initialized in InitChain. Otherwise app.finalizeBlockState will be + / nil, since it is reset on Commit. + if app.finalizeBlockState == nil { + app.setState(execModeFinalize, header) +} + +else { + / In the first block, app.finalizeBlockState.ctx will already be initialized + / by InitChain. Context is now updated with Header information. + app.finalizeBlockState.ctx = app.finalizeBlockState.ctx. + WithBlockHeader(header). + WithBlockHeight(req.Height) +} + gasMeter := app.getBlockGasMeter(app.finalizeBlockState.ctx) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash). + WithConsensusParams(app.GetConsensusParams(app.finalizeBlockState.ctx)). + WithVoteInfos(req.DecidedLastCommit.Votes). + WithExecMode(sdk.ExecModeFinalize) + if app.checkState != nil { + app.checkState.ctx = app.checkState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash) +} + beginBlock := app.beginBlock(req) + +events = append(events, beginBlock.Events...) 
+ + / Iterate over all raw transactions in the proposal and attempt to execute + / them, gathering the execution results. + / + / NOTE: Not all raw transactions may adhere to the sdk.Tx interface, e.g. + / vote extensions, so skip those. + txResults := make([]*abci.ExecTxResult, 0, len(req.Txs)) + for _, rawTx := range req.Txs { + if _, err := app.txDecoder(rawTx); err == nil { + txResults = append(txResults, app.deliverTx(rawTx)) +} + +} + if app.finalizeBlockState.ms.TracingEnabled() { + app.finalizeBlockState.ms = app.finalizeBlockState.ms.SetTracingContext(nil).(storetypes.CacheMultiStore) +} + +endBlock, err := app.endBlock(app.finalizeBlockState.ctx) + if err != nil { + return nil, err +} + +events = append(events, endBlock.Events...) + cp := app.GetConsensusParams(app.finalizeBlockState.ctx) + +return &abci.ResponseFinalizeBlock{ + Events: events, + TxResults: txResults, + ValidatorUpdates: endBlock.ValidatorUpdates, + ConsensusParamUpdates: &cp, + AppHash: app.workingHash(), +}, nil +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. 
+func (app *BaseApp) + +Commit() (*abci.ResponseCommit, error) { + header := app.finalizeBlockState.ctx.BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + if app.precommiter != nil { + app.precommiter(app.finalizeBlockState.ctx) +} + +rms, ok := app.cms.(*rootmulti.Store) + if ok { + rms.SetCommitHeader(header) +} + +app.cms.Commit() + resp := &abci.ResponseCommit{ + RetainHeight: retainHeight, +} + abciListeners := app.streamingManager.ABCIListeners + if len(abciListeners) > 0 { + ctx := app.finalizeBlockState.ctx + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + for _, abciListener := range abciListeners { + if err := abciListener.ListenCommit(ctx, *resp, changeSet); err != nil { + app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err) +} + +} + +} + + / Reset the CheckTx state to the latest committed. + / + / NOTE: This is safe because CometBFT holds a lock on the mempool for + / Commit. Use the header from this latest block. + app.setState(execModeCheck, header) + +app.finalizeBlockState = nil + if app.prepareCheckStater != nil { + app.prepareCheckStater(app.checkState.ctx) +} + +var halt bool + switch { + case app.haltHeight > 0 && uint64(header.Height) >= app.haltHeight: + halt = true + case app.haltTime > 0 && header.Time.Unix() >= int64(app.haltTime): + halt = true +} + if halt { + / Halt the binary and allow CometBFT to receive the ResponseCommit + / response with the commit ID hash. This will allow the node to successfully + / restart and process blocks assuming the halt configuration has been + / reset or moved to a more distant value. + app.halt() +} + +go app.snapshotManager.SnapshotIfApplicable(header.Height) + +return resp, nil +} + +/ workingHash gets the apphash that will be finalized in commit. +/ These writes will be persisted to the root multi-store (app.cms) + +and flushed to +/ disk in the Commit phase. 
This means when the ABCI client requests Commit(), the application +/ state transitions will be flushed to disk and as a result, but we already have +/ an application Merkle root. +func (app *BaseApp) + +workingHash() []byte { + / Write the FinalizeBlock state into branched storage and commit the MultiStore. + / The write to the FinalizeBlock state writes all state transitions to the root + / MultiStore (app.cms) + +so when Commit() + +is called it persists those values. + app.finalizeBlockState.ms.Write() + + / Get the hash of all writes in order to return the apphash to the comet in finalizeBlock. + commitHash := app.cms.WorkingHash() + +app.logger.Debug("hash of all writes", "workingHash", fmt.Sprintf("%X", commitHash)) + +return commitHash +} + +/ halt attempts to gracefully shutdown the node via SIGINT and SIGTERM falling +/ back on os.Exit if both fail. +func (app *BaseApp) + +halt() { + app.logger.Info("halting node per configuration", "height", app.haltHeight, "time", app.haltTime) + +p, err := os.FindProcess(os.Getpid()) + if err == nil { + / attempt cascading signals in case SIGINT fails (os dependent) + sigIntErr := p.Signal(syscall.SIGINT) + sigTermErr := p.Signal(syscall.SIGTERM) + if sigIntErr == nil || sigTermErr == nil { + return +} + +} + + / Resort to exiting immediately if the process could not be found or killed + / via SIGINT/SIGTERM signals. 
+ app.logger.Info("failed to send SIGINT/SIGTERM; exiting...") + +os.Exit(0) +} + +func handleQueryApp(app *BaseApp, path []string, req *abci.RequestQuery) *abci.ResponseQuery { + if len(path) >= 2 { + switch path[1] { + case "simulate": + txBytes := req.Data + + gInfo, res, err := app.Simulate(txBytes) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to simulate tx"), app.trace) +} + simRes := &sdk.SimulationResponse{ + GasInfo: gInfo, + Result: res, +} + +bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to JSON encode simulation response"), app.trace) +} + +return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: bz, +} + case "version": + return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: []byte(app.version), +} + +default: + return sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace) +} + +} + +return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrUnknownRequest, + "expected second parameter to be either 'simulate' or 'version', neither was present", + ), app.trace) +} + +func handleQueryStore(app *BaseApp, path []string, req abci.RequestQuery) *abci.ResponseQuery { + / "/store" prefix for store queries + queryable, ok := app.cms.(storetypes.Queryable) + if !ok { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "multi-store does not support queries"), app.trace) +} + +req.Path = "/" + strings.Join(path[1:], "/") + if req.Height <= 1 && req.Prove { + return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ), app.trace) +} + sdkReq := storetypes.RequestQuery(req) + +resp, err := queryable.Query(&sdkReq) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + 
+resp.Height = req.Height + abciResp := abci.ResponseQuery(*resp) + +return &abciResp +} + +func handleQueryP2P(app *BaseApp, path []string) *abci.ResponseQuery { + / "/p2p" prefix for p2p queries + if len(path) < 4 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "path should be p2p filter "), app.trace) +} + +var resp *abci.ResponseQuery + + cmd, typ, arg := path[1], path[2], path[3] + switch cmd { + case "filter": + switch typ { + case "addr": + resp = app.FilterPeerByAddrPort(arg) + case "id": + resp = app.FilterPeerByID(arg) +} + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace) +} + +return resp +} + +/ SplitABCIQueryPath splits a string path using the delimiter '/'. +/ +/ e.g. "this/is/funny" becomes []string{"this", "is", "funny" +} + +func SplitABCIQueryPath(requestPath string) (path []string) { + path = strings.Split(requestPath, "/") + + / first element is empty string + if len(path) > 0 && path[0] == "" { + path = path[1:] +} + +return path +} + +/ FilterPeerByAddrPort filters peers by address/port. +func (app *BaseApp) + +FilterPeerByAddrPort(info string) *abci.ResponseQuery { + if app.addrPeerFilter != nil { + return app.addrPeerFilter(info) +} + +return &abci.ResponseQuery{ +} +} + +/ FilterPeerByID filters peers by node ID. +func (app *BaseApp) + +FilterPeerByID(info string) *abci.ResponseQuery { + if app.idPeerFilter != nil { + return app.idPeerFilter(info) +} + +return &abci.ResponseQuery{ +} +} + +/ getContextForProposal returns the correct Context for PrepareProposal and +/ ProcessProposal. We use finalizeBlockState on the first block to be able to +/ access any state changes made in InitChain. 
+func (app *BaseApp) + +getContextForProposal(ctx sdk.Context, height int64) + +sdk.Context { + if height == app.initialHeight { + ctx, _ = app.finalizeBlockState.ctx.CacheContext() + + / clear all context data set during InitChain to avoid inconsistent behavior + ctx = ctx.WithBlockHeader(cmtproto.Header{ +}) + +return ctx +} + +return ctx +} + +func (app *BaseApp) + +handleQueryGRPC(handler GRPCQueryHandler, req *abci.RequestQuery) *abci.ResponseQuery { + ctx, err := app.CreateQueryContext(req.Height, req.Prove) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + +resp, err := handler(ctx, req) + if err != nil { + resp = sdkerrors.QueryResult(gRPCErrorToSDKError(err), app.trace) + +resp.Height = req.Height + return resp +} + +return resp +} + +func gRPCErrorToSDKError(err error) + +error { + status, ok := grpcstatus.FromError(err) + if !ok { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) +} + switch status.Code() { + case codes.NotFound: + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, err.Error()) + case codes.InvalidArgument: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.FailedPrecondition: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.Unauthenticated: + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, err.Error()) + +default: + return errorsmod.Wrap(sdkerrors.ErrUnknownRequest, err.Error()) +} +} + +func checkNegativeHeight(height int64) + +error { + if height < 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "cannot query with height < 0; please provide a valid height") +} + +return nil +} + +/ createQueryContext creates a new sdk.Context for a query, taking as args +/ the block height and whether the query needs a proof or not. 
+func (app *BaseApp) + +CreateQueryContext(height int64, prove bool) (sdk.Context, error) { + if err := checkNegativeHeight(height); err != nil { + return sdk.Context{ +}, err +} + + / use custom query multi-store if provided + qms := app.qms + if qms == nil { + qms = app.cms.(storetypes.MultiStore) +} + lastBlockHeight := qms.LatestVersion() + if lastBlockHeight == 0 { + return sdk.Context{ +}, errorsmod.Wrapf(sdkerrors.ErrInvalidHeight, "%s is not ready; please wait for first block", app.Name()) +} + if height > lastBlockHeight { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidHeight, + "cannot query with height in the future; please provide a valid height", + ) +} + + / when a client did not provide a query height, manually inject the latest + if height == 0 { + height = lastBlockHeight +} + if height <= 1 && prove { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ) +} + +cacheMS, err := qms.CacheMultiStoreWithVersion(height) + if err != nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrInvalidRequest, + "failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight, + ) +} + + / branch the commit multi-store for safety + ctx := sdk.NewContext(cacheMS, app.checkState.ctx.BlockHeader(), true, app.logger). + WithMinGasPrices(app.minGasPrices). + WithBlockHeight(height) + if height != lastBlockHeight { + rms, ok := app.cms.(*rootmulti.Store) + if ok { + cInfo, err := rms.GetCommitInfo(height) + if cInfo != nil && err == nil { + ctx = ctx.WithBlockTime(cInfo.Timestamp) +} + +} + +} + +return ctx, nil +} + +/ GetBlockRetentionHeight returns the height for which all blocks below this height +/ are pruned from CometBFT. 
Given a commitment height and a non-zero local
+/ minRetainBlocks configuration, the retentionHeight is the smallest height that
+/ satisfies:
+/
+/ - Unbonding (safety threshold) time: The block interval in which validators
+/ can be economically punished for misbehavior. Blocks in this interval must be
+/ auditable e.g. by the light client.
+/
+/ - Logical store snapshot interval: The block interval at which the underlying
+/ logical store database is persisted to disk, e.g. every 10000 heights. Blocks
+/ since the last IAVL snapshot must be available for replay on application restart.
+/
+/ - State sync snapshots: Blocks since the oldest available snapshot must be
+/ available for state sync nodes to catch up (oldest because a node may be
+/ restoring an old snapshot while a new snapshot was taken).
+/
+/ - Local (minRetainBlocks) config: Archive nodes may want to retain more or
+/ all blocks, e.g. via a local config option min-retain-blocks. There may also
+/ be a need to vary retention for other nodes, e.g. sentry nodes which do not
+/ need historical blocks.
+func (app *BaseApp) GetBlockRetentionHeight(commitHeight int64) int64 {
+	/ pruning is disabled if minRetainBlocks is zero
+	if app.minRetainBlocks == 0 {
+		return 0
+	}
+
+	minNonZero := func(x, y int64) int64 {
+		switch {
+		case x == 0:
+			return y
+		case y == 0:
+			return x
+		case x < y:
+			return x
+		default:
+			return y
+		}
+	}
+
+	/ Define retentionHeight as the minimum value that satisfies all non-zero
+	/ constraints. All blocks below (commitHeight-retentionHeight) are pruned
+	/ from CometBFT.
+	var retentionHeight int64
+
+	/ Define the number of blocks needed to protect against misbehaving validators
+	/ which allows light clients to operate safely. Note, we piggyback off the
+	/ evidence parameters instead of computing an estimated number of blocks based
+	/ on the unbonding period and block commitment time as the two should be
+	/ equivalent.
+ cp := app.GetConsensusParams(app.finalizeBlockState.ctx) + if cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 { + retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks +} + if app.snapshotManager != nil { + snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights() + if snapshotRetentionHeights > 0 { + retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights) +} + +} + v := commitHeight - int64(app.minRetainBlocks) + +retentionHeight = minNonZero(retentionHeight, v) + if retentionHeight <= 0 { + / prune nothing in the case of a non-positive height + return 0 +} + +return retentionHeight +} + +/ toVoteInfo converts the new ExtendedVoteInfo to VoteInfo. +func toVoteInfo(votes []abci.ExtendedVoteInfo) []abci.VoteInfo { + legacyVotes := make([]abci.VoteInfo, len(votes)) + for i, vote := range votes { + legacyVotes[i] = abci.VoteInfo{ + Validator: abci.Validator{ + Address: vote.Validator.Address, + Power: vote.Validator.Power, +}, + BlockIdFlag: vote.BlockIdFlag, +} + +} + +return legacyVotes +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/module-interfaces.mdx b/docs/sdk/v0.50/documentation/module-system/module-interfaces.mdx new file mode 100644 index 00000000..2c70ccd3 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/module-interfaces.mdx @@ -0,0 +1,1081 @@ +--- +title: Module Interfaces +--- + + +**Synopsis** +This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. + + + +**Pre-requisite Readings** + +* [Building Modules Intro](docs/sdk/v0.50/learn/intro/overview) + + + +## CLI + +One of the main interfaces for an application is the [command-line interface](/docs/sdk/v0.50/learn/advanced/cli). 
This entrypoint adds commands from the application's modules enabling end-users to create [**messages**](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages) wrapped in transactions and [**queries**](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#queries). The CLI files are typically found in the module's `./client/cli` folder. + +### Transaction Commands + +In order to create messages that trigger state changes, end-users must create [transactions](/docs/sdk/v0.50/learn/advanced/transactions) that wrap and deliver the messages. A transaction command creates a transaction that includes one or more messages. + +Transaction commands typically have their own `tx.go` file that lives within the module's `./client/cli` folder. The commands are specified in getter functions and the name of the function should include the name of the command. + +Here is an example from the `x/bank` module: + +```go expandable +package cli + +import ( + + "fmt" + "cosmossdk.io/core/address" + sdkmath "cosmossdk.io/math" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/tx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var FlagSplit = "split" + +/ NewTxCmd returns a root CLI command handler for all x/bank transaction commands. +func NewTxCmd(ac address.Codec) *cobra.Command { + txCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Bank transaction subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +txCmd.AddCommand( + NewSendTxCmd(ac), + NewMultiSendTxCmd(ac), + ) + +return txCmd +} + +/ NewSendTxCmd returns a CLI command handler for creating a MsgSend transaction. 
+func NewSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "send [from_key_or_address] [to_address] [amount]", + Short: "Send funds from one account to another.", + Long: `Send funds from one account to another. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. +`, + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +toAddr, err := ac.StringToBytes(args[1]) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[2]) + if err != nil { + return err +} + if len(coins) == 0 { + return fmt.Errorf("invalid coins") +} + msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, coins) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} + +/ NewMultiSendTxCmd returns a CLI command handler for creating a MsgMultiSend transaction. +/ For a better UX this command is limited to send funds from one account to two or more accounts. +func NewMultiSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "multi-send [from_key_or_address] [to_address_1, to_address_2, ...] [amount]", + Short: "Send funds from one account to two or more accounts.", + Long: `Send funds from one account to two or more accounts. +By default, sends the [amount] to each address of the list. +Using the '--split' flag, the [amount] is split equally between the addresses. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`,
+		Args: cobra.MinimumNArgs(4),
+		RunE: func(cmd *cobra.Command, args []string) error {
+			cmd.Flags().Set(flags.FlagFrom, args[0])
+
+			clientCtx, err := client.GetClientTxContext(cmd)
+			if err != nil {
+				return err
+			}
+
+			coins, err := sdk.ParseCoinsNormalized(args[len(args)-1])
+			if err != nil {
+				return err
+			}
+
+			if coins.IsZero() {
+				return fmt.Errorf("must send positive amount")
+			}
+
+			split, err := cmd.Flags().GetBool(FlagSplit)
+			if err != nil {
+				return err
+			}
+
+			totalAddrs := sdkmath.NewInt(int64(len(args) - 2))
+			/ coins to be received by the addresses
+			sendCoins := coins
+			if split {
+				sendCoins = coins.QuoInt(totalAddrs)
+			}
+
+			var output []types.Output
+			for _, arg := range args[1 : len(args)-1] {
+				toAddr, err := ac.StringToBytes(arg)
+				if err != nil {
+					return err
+				}
+
+				output = append(output, types.NewOutput(toAddr, sendCoins))
+			}
+
+			/ amount to be sent from the from address
+			var amount sdk.Coins
+			if split {
+				/ user input: 1000stake to send to 3 addresses
+				/ actual: 333stake to each address (=> 999stake actually sent)
+				amount = sendCoins.MulInt(totalAddrs)
+			} else {
+				amount = coins.MulInt(totalAddrs)
+			}
+
+			msg := types.NewMsgMultiSend(types.NewInput(clientCtx.FromAddress, amount), output)
+
+			return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg)
+		},
+	}
+
+	cmd.Flags().Bool(FlagSplit, false, "Send the equally split token amount to each address")
+
+	flags.AddTxFlagsToCmd(cmd)
+
+	return cmd
+}
+```
+
+In the example, `NewSendTxCmd()` creates and returns the transaction command for a transaction that wraps and delivers `MsgSend`. `MsgSend` is the message used to send tokens from one account to another.
+
+In general, the getter function does the following:
+
+* **Constructs the command:** Read the [Cobra Documentation](https://pkg.go.dev/github.com/spf13/cobra) for more detailed information on how to create commands.
+  * **Use:** Specifies the format of the user input required to invoke the command. In the example above, `send` is the name of the transaction command and `[from_key_or_address]`, `[to_address]`, and `[amount]` are the arguments.
+  * **Args:** The number of arguments the user provides. In this case, there are exactly three: `[from_key_or_address]`, `[to_address]`, and `[amount]`.
+  * **Short and Long:** Descriptions for the command. A `Short` description is expected. A `Long` description can be used to provide additional information that is displayed when a user adds the `--help` flag.
+  * **RunE:** Defines a function that can return an error. This is the function that is called when the command is executed. It encapsulates all of the logic to create a new transaction.
+    * The function typically starts by getting the `clientCtx`, which can be done with `client.GetClientTxContext(cmd)`. The `clientCtx` contains information relevant to transaction handling, including information about the user. In this example, the `clientCtx` is used to retrieve the address of the sender by calling `clientCtx.GetFromAddress()`.
+    * If applicable, the command's arguments are parsed. In this example, the arguments `[to_address]` and `[amount]` are both parsed.
+    * A [message](/docs/sdk/v0.50/documentation/module-system/messages-and-queries) is created using the parsed arguments and information from the `clientCtx`. The constructor function of the message type is called directly, in this case `types.NewMsgSend(fromAddr, toAddr, amount)`. It's good practice to call the necessary [message validation methods](/docs/sdk/v0.50/documentation/module-system/msg-services#Validation) before broadcasting the message, when possible.
+    * Depending on what the user wants, the transaction is either generated offline or signed and broadcast to the preconfigured node using `tx.GenerateOrBroadcastTxCLI(clientCtx, flags, msg)`.
+* **Adds transaction flags:** All transaction commands must add a set of transaction [flags](#flags). The transaction flags are used to collect additional information from the user (e.g. the amount of fees the user is willing to pay). The transaction flags are added to the constructed command using `AddTxFlagsToCmd(cmd)`. +* **Returns the command:** Finally, the transaction command is returned. + +Each module can implement `NewTxCmd()`, which aggregates all of the transaction commands of the module. Here is an example from the `x/bank` module: + +```go expandable +package cli + +import ( + + "fmt" + "cosmossdk.io/core/address" + sdkmath "cosmossdk.io/math" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/tx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var FlagSplit = "split" + +/ NewTxCmd returns a root CLI command handler for all x/bank transaction commands. +func NewTxCmd(ac address.Codec) *cobra.Command { + txCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Bank transaction subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +txCmd.AddCommand( + NewSendTxCmd(ac), + NewMultiSendTxCmd(ac), + ) + +return txCmd +} + +/ NewSendTxCmd returns a CLI command handler for creating a MsgSend transaction. +func NewSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "send [from_key_or_address] [to_address] [amount]", + Short: "Send funds from one account to another.", + Long: `Send funds from one account to another. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +toAddr, err := ac.StringToBytes(args[1]) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[2]) + if err != nil { + return err +} + if len(coins) == 0 { + return fmt.Errorf("invalid coins") +} + msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, coins) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} + +/ NewMultiSendTxCmd returns a CLI command handler for creating a MsgMultiSend transaction. +/ For a better UX this command is limited to send funds from one account to two or more accounts. +func NewMultiSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "multi-send [from_key_or_address] [to_address_1, to_address_2, ...] [amount]", + Short: "Send funds from one account to two or more accounts.", + Long: `Send funds from one account to two or more accounts. +By default, sends the [amount] to each address of the list. +Using the '--split' flag, the [amount] is split equally between the addresses. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.MinimumNArgs(4), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[len(args)-1]) + if err != nil { + return err +} + if coins.IsZero() { + return fmt.Errorf("must send positive amount") +} + +split, err := cmd.Flags().GetBool(FlagSplit) + if err != nil { + return err +} + totalAddrs := sdkmath.NewInt(int64(len(args) - 2)) + / coins to be received by the addresses + sendCoins := coins + if split { + sendCoins = coins.QuoInt(totalAddrs) +} + +var output []types.Output + for _, arg := range args[1 : len(args)-1] { + toAddr, err := ac.StringToBytes(arg) + if err != nil { + return err +} + +output = append(output, types.NewOutput(toAddr, sendCoins)) +} + + / amount to be send from the from address + var amount sdk.Coins + if split { + / user input: 1000stake to send to 3 addresses + / actual: 333stake to each address (=> 999stake actually sent) + +amount = sendCoins.MulInt(totalAddrs) +} + +else { + amount = coins.MulInt(totalAddrs) +} + msg := types.NewMsgMultiSend(types.NewInput(clientCtx.FromAddress, amount), output) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +cmd.Flags().Bool(FlagSplit, false, "Send the equally split token amount to each address") + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} +``` + +Each module then can also implement a `GetTxCmd()` method that simply returns `NewTxCmd()`. This allows the root command to easily aggregate all of the transaction commands for each module. 
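The `--split` rounding that the multi-send command's comments describe (1000stake split across 3 addresses sends 333stake to each, 999stake in total, with the remainder never debited from the sender) comes down to truncating integer division. A self-contained sketch, with plain `int64` standing in for `sdkmath.Int`:

```go
package main

import "fmt"

func main() {
	// User asks to split 1000stake across 3 recipient addresses.
	total := int64(1000)
	addrs := int64(3)

	// QuoInt-style integer division: each address receives the truncated share.
	perAddr := total / addrs // 333

	// MulInt-style: the amount debited from the sender covers only what is sent.
	debited := perAddr * addrs // 999

	fmt.Println(perAddr, debited)
}
```

The 1stake of dust is simply never included in the transaction's input.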
Here is an example: + +```go expandable +package bank + +import ( + + "context" + "encoding/json" + "fmt" + "time" + + modulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + corestore "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/client/cli" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + v1bank "github.com/cosmos/cosmos-sdk/x/bank/migrations/v1" + "github.com/cosmos/cosmos-sdk/x/bank/simulation" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +/ ConsensusVersion defines the current x/bank module consensus version. +const ConsensusVersion = 4 + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the bank module. +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the bank module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the bank module's types on the LegacyAmino codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the bank +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the bank module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return data.Validate() +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the bank module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the bank module. +func (ab AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd(ab.ac) +} + +/ GetQueryCmd returns no root query command for the bank module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the bank module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) + + / Register legacy interfaces for migration scripts. + v1bank.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the bank module. 
+type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper.(keeper.BaseKeeper), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 1 to 2: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 2 to 3: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 3 to 4: %v", err)) +} +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: accountKeeper.AddressCodec() +}, + keeper: keeper, + accountKeeper: accountKeeper, + legacySubspace: ss, +} +} + +/ Name returns the bank module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterInvariants registers the bank module invariants. 
+func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ QuerierRoute returns the bank module's querier route name. +func (AppModule) + +QuerierRoute() + +string { + return types.RouterKey +} + +/ InitGenesis performs genesis initialization for the bank module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + start := time.Now() + +var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +telemetry.MeasureSince(start, "InitGenesis", "crisis", "unmarshal") + +am.keeper.InitGenesis(ctx, &genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the bank +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the bank module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for supply module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.keeper.(keeper.BaseKeeper).Schema) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, simState.TxConfig, am.accountKeeper, am.keeper, + ) +} + +/ App Wiring Setup + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + Cdc codec.Codec + StoreService corestore.KVStoreService + Logger log.Logger + + AccountKeeper types.AccountKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + BankKeeper keeper.BaseKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + / Configure blocked module accounts. + / + / Default behavior for blockedAddresses is to regard any module mentioned in + / AccountKeeper's module account permissions as blocked. 
+	blockedAddresses := make(map[string]bool)
+	if len(in.Config.BlockedModuleAccountsOverride) > 0 {
+		for _, moduleName := range in.Config.BlockedModuleAccountsOverride {
+			blockedAddresses[authtypes.NewModuleAddress(moduleName).String()] = true
+		}
+	} else {
+		for _, permission := range in.AccountKeeper.GetModulePermissions() {
+			blockedAddresses[permission.GetAddress().String()] = true
+		}
+	}
+
+	/ default to governance authority if not provided
+	authority := authtypes.NewModuleAddress(govtypes.ModuleName)
+	if in.Config.Authority != "" {
+		authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority)
+	}
+
+	bankKeeper := keeper.NewBaseKeeper(
+		in.Cdc,
+		in.StoreService,
+		in.AccountKeeper,
+		blockedAddresses,
+		authority.String(),
+		in.Logger,
+	)
+
+	m := NewAppModule(in.Cdc, bankKeeper, in.AccountKeeper, in.LegacySubspace)
+
+	return ModuleOutputs{
+		BankKeeper: bankKeeper,
+		Module:     m,
+	}
+}
+```
+
+### Query Commands
+
+This section is being rewritten. Refer to [AutoCLI](/docs/sdk/v0.50/learn/advanced/autocli) while this section is being updated.
+
+## gRPC
+
+[gRPC](https://grpc.io/) is a Remote Procedure Call (RPC) framework. RPC is the preferred way for external clients like wallets and exchanges to interact with a blockchain.
+
+In addition to providing an ABCI query pathway, the Cosmos SDK provides a gRPC proxy server that routes gRPC query requests to ABCI query requests.
+
+In order to do that, modules must implement `RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux)` on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module.
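That routing is driven by the query path: baseapp's `SplitABCIQueryPath` helper (shown earlier in this document) splits a request path such as `/cosmos.bank.v1beta1.Query/AllBalances` on `/` and drops the leading empty element produced by the leading slash. A self-contained sketch mirroring that helper:

```go
package main

import (
	"fmt"
	"strings"
)

// splitABCIQueryPath mirrors baseapp's SplitABCIQueryPath: split on '/'
// and drop the empty first element produced by the path's leading slash.
func splitABCIQueryPath(requestPath string) []string {
	path := strings.Split(requestPath, "/")
	if len(path) > 0 && path[0] == "" {
		path = path[1:]
	}
	return path
}

func main() {
	fmt.Println(splitABCIQueryPath("/cosmos.bank.v1beta1.Query/AllBalances"))
	// → [cosmos.bank.v1beta1.Query AllBalances]
}
```

The first element then selects the handler (a gRPC query service route here; `store`, `app`, or `p2p` for the built-in ABCI handlers).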
+ +Here's an example from the `x/auth` module: + +```go expandable +package auth + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/depinject" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + + modulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/exported" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ ConsensusVersion defines the current x/auth module consensus version. +const ( + ConsensusVersion = 5 + GovModuleName = "gov" +) + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the auth module. +type AppModuleBasic struct { + ac address.Codec +} + +/ Name returns the auth module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the auth module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the auth +/ module. 
+func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the auth module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the auth module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the auth module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd returns the root query command for the auth module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the auth module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the auth module. +type AppModule struct { + AppModuleBasic + + accountKeeper keeper.AccountKeeper + randGenAccountsFn types.RandomGenesisAccountsFn + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (am AppModule) + +IsAppModule() { +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, accountKeeper keeper.AccountKeeper, randGenAccountsFn types.RandomGenesisAccountsFn, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + ac: accountKeeper.AddressCodec() +}, + accountKeeper: accountKeeper, + randGenAccountsFn: randGenAccountsFn, + legacySubspace: ss, +} +} + +/ Name returns the auth module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterServices registers a GRPC query service to respond to the +/ module-specific GRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.accountKeeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQueryServer(am.accountKeeper)) + m := keeper.NewMigrator(am.accountKeeper, cfg.QueryServer(), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 3 to 4: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 4, m.Migrate4To5); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 4 to 5", types.ModuleName)) +} +} + +/ InitGenesis performs genesis initialization for the auth module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.accountKeeper.InitGenesis(ctx, genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the auth +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.accountKeeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the auth module +func (am AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState, am.randGenAccountsFn) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for auth module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.accountKeeper.Schema) +} + +/ WeightedOperations doesn't return any auth module operation. +func (AppModule) + +WeightedOperations(_ module.SimulationState) []simtypes.WeightedOperation { + return nil +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideAddressCodec), + appmodule.Provide(ProvideModule), + ) +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. 
+func ProvideAddressCodec(config *modulev1.Module)
+
+address.Codec {
+ return authcodec.NewBech32Codec(config.Bech32Prefix)
+}
+
+type ModuleInputs struct {
+ depinject.In
+
+ Config *modulev1.Module
+ StoreService store.KVStoreService
+ Cdc codec.Codec
+
+ RandomGenesisAccountsFn types.RandomGenesisAccountsFn `optional:"true"`
+ AccountI func()
+
+sdk.AccountI `optional:"true"`
+
+ / LegacySubspace is used solely for migration of x/params managed parameters
+ LegacySubspace exported.Subspace `optional:"true"`
+}
+
+type ModuleOutputs struct {
+ depinject.Out
+
+ AccountKeeper keeper.AccountKeeper
+ Module appmodule.AppModule
+}
+
+func ProvideModule(in ModuleInputs)
+
+ModuleOutputs {
+ maccPerms := map[string][]string{
+}
+ for _, permission := range in.Config.ModuleAccountPermissions {
+ maccPerms[permission.Account] = permission.Permissions
+}
+
+ / default to governance authority if not provided
+ authority := types.NewModuleAddress(GovModuleName)
+ if in.Config.Authority != "" {
+ authority = types.NewModuleAddressOrBech32Address(in.Config.Authority)
+}
+ if in.RandomGenesisAccountsFn == nil {
+ in.RandomGenesisAccountsFn = simulation.RandomGenesisAccounts
+}
+ if in.AccountI == nil {
+ in.AccountI = types.ProtoBaseAccount
+}
+ k := keeper.NewAccountKeeper(in.Cdc, in.StoreService, in.AccountI, maccPerms, in.Config.Bech32Prefix, authority.String())
+ m := NewAppModule(in.Cdc, k, in.RandomGenesisAccountsFn, in.LegacySubspace)
+
+return ModuleOutputs{
+ AccountKeeper: k,
+ Module: m
+}
+}
+```
+
+## gRPC-gateway REST
+
+Applications need to support web services that use HTTP requests (e.g. a web wallet like [Keplr](https://keplr.app)). [grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) translates REST calls into gRPC calls, which might be useful for clients that do not use gRPC.
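+
+Concretely, grpc-gateway works off `google.api.http` annotations placed on `rpc` methods. As a hedged sketch (reconstructed from the `x/auth` query service, not copied verbatim), such an annotation looks like:
+
+```protobuf
+service Query {
+  // Account returns account details based on the given address.
+  rpc Account(QueryAccountRequest) returns (QueryAccountResponse) {
+    option (google.api.http).get = "/cosmos/auth/v1beta1/accounts/{address}";
+  }
+}
+```
+
+At runtime the gateway matches the HTTP GET path, fills `address` from the URL, and forwards the request to the gRPC `Account` handler.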
+
+Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods, such as in the example below from the `x/auth` module:
+
+```protobuf
+/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/auth/v1beta1/query.proto#L14-L89
+```
+
+gRPC gateway is started in-process alongside the application and CometBFT. It can be enabled or disabled by setting `enable` in the gRPC configuration section of [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml).
+
+The Cosmos SDK provides a command for generating [Swagger](https://swagger.io/) documentation (`protoc-gen-swagger`). Setting `swagger` in [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) determines whether Swagger documentation is registered automatically.
diff --git a/docs/sdk/v0.50/documentation/module-system/module-manager.mdx b/docs/sdk/v0.50/documentation/module-system/module-manager.mdx
new file mode 100644
index 00000000..e7bb2115
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/module-manager.mdx
@@ -0,0 +1,15115 @@
+---
+title: Module Manager
+---
+
+
+**Synopsis**
+Cosmos SDK modules need to implement the [`AppModule` interfaces](#application-module-interfaces) in order to be managed by the application's [module manager](#module-manager). The module manager plays an important role in [`message` and `query` routing](/docs/sdk/v0.50/learn/advanced/baseapp#routing), and allows application developers to set the order of execution of a variety of functions like [`PreBlocker`](/docs/sdk/v0.50/learn/beginner/app-anatomy#preblocker) and [`BeginBlocker` and `EndBlocker`](/docs/sdk/v0.50/learn/beginner/app-anatomy#begingblocker-and-endblocker).
+
+
+
+**Pre-requisite Readings**
+
+* [Introduction to Cosmos SDK Modules](/docs/sdk/v0.50/learn/intro/overview)
+
+
+
+## Application Module Interfaces
+
+Application module interfaces exist to facilitate the composition of modules together to form a functional Cosmos SDK application.
+
+
+
+It is recommended to implement interfaces from the [Core API](/docs/sdk/next/documentation/legacy/adr-comprehensive) `appmodule` package. This makes modules less dependent on the SDK.
+For legacy reasons, modules can still implement interfaces from the SDK `module` package.
+
+
+There are two main application module interfaces:
+
+* [`appmodule.AppModule` / `module.AppModule`](#appmodule) for inter-dependent module functionalities (except genesis-related functionalities).
+* (legacy) [`module.AppModuleBasic`](#appmodulebasic) for independent module functionalities. New modules can use `module.CoreAppModuleBasicAdaptor` instead.
+
+The above interfaces mostly embed smaller (extension) interfaces that define specific functionalities:
+
+* (legacy) `module.HasName`: Allows the module to provide its own name for legacy purposes.
+* (legacy) [`module.HasGenesisBasics`](#modulehasgenesisbasics): The legacy interface for stateless genesis methods.
+* [`module.HasGenesis`](#modulehasgenesis) for inter-dependent genesis-related module functionalities.
+* [`module.HasABCIGenesis`](#modulehasabcigenesis) for inter-dependent genesis-related module functionalities.
+* [`appmodule.HasGenesis` / `module.HasGenesis`](#appmodulehasgenesis): The extension interface for stateful genesis methods.
+* [`appmodule.HasPreBlocker`](#haspreblocker): The extension interface that contains information about the `AppModule` and `PreBlock`.
+* [`appmodule.HasBeginBlocker`](#hasbeginblocker): The extension interface that contains information about the `AppModule` and `BeginBlock`.
+* [`appmodule.HasEndBlocker`](#hasendblocker): The extension interface that contains information about the `AppModule` and `EndBlock`. +* [`appmodule.HasPrecommit`](#hasprecommit): The extension interface that contains information about the `AppModule` and `Precommit`. +* [`appmodule.HasPrepareCheckState`](#haspreparecheckstate): The extension interface that contains information about the `AppModule` and `PrepareCheckState`. +* [`appmodule.HasService` / `module.HasServices`](#hasservices): The extension interface for modules to register services. +* [`module.HasABCIEndBlock`](#hasabciendblock): The extension interface that contains information about the `AppModule`, `EndBlock` and returns an updated validator set. +* (legacy) [`module.HasInvariants`](#hasinvariants): The extension interface for registering invariants. +* (legacy) [`module.HasConsensusVersion`](#hasconsensusversion): The extension interface for declaring a module consensus version. + +The `AppModuleBasic` interface exists to define independent methods of the module, i.e. those that do not depend on other modules in the application. This allows for the construction of the basic application structure early in the application definition, generally in the `init()` function of the [main application file](/docs/sdk/v0.50/learn/beginner/app-anatomy#core-application-file). + +The `AppModule` interface exists to define inter-dependent module methods. Many modules need to interact with other modules, typically through [`keeper`s](/docs/sdk/v0.50/documentation/module-system/keeper), which means there is a need for an interface where modules list their `keeper`s and other methods that require a reference to another module's object. 
`AppModule` interface extensions, such as `HasBeginBlocker` and `HasEndBlocker`, also enable the module manager to set the order of execution between modules' methods like `BeginBlock` and `EndBlock`, which is important in cases where the order of execution between modules matters in the context of the application.
+
+The usage of extension interfaces allows modules to define only the functionalities they need. For example, a module that does not need an `EndBlock` does not need to define the `HasEndBlocker` interface and thus the `EndBlock` method. `AppModule` and `AppModuleGenesis` are intentionally small interfaces that can take advantage of the `Module` patterns without having to define many placeholder functions.
+
+### `AppModuleBasic`
+
+
+Use `module.CoreAppModuleBasicAdaptor` instead for creating an `AppModuleBasic` from an `appmodule.AppModule`.
+
+
+The `AppModuleBasic` interface defines the independent methods modules need to implement.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+  - independent module functionality (AppModuleBasic)
+  - inter-dependent module genesis functionality (AppModuleGenesis)
+  - inter-dependent module simulation functionality (AppModuleSimulation)
+  - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+ +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. 
+/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides 
default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. 
+type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context) + +error { + return nil +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) { + return []abci.ValidatorUpdate{ +}, nil +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasEndBlock := module.(HasABCIEndblock) + +return !hasEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ 
SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. +func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. 
Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return &abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export 
genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + 
+genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called insde an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) 
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Internally, RunMigrations will perform the following steps:
//   - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
//   - make a diff of `fromVM` and `updatedVM`, and for each module:
//   - if the module's `fromVM` version is less than its `updatedVM` version,
//     then run in-place store migrations for that module between those versions.
//   - if the module does not exist in the `fromVM` (which means that it's a new module,
//     because it was not in the previous x/upgrade's store), then run
//     `InitGenesis` on that module.
//   - return the `updatedVM` to be persisted in the x/upgrade's store.
//
// Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
// defined by the `DefaultMigrationsOrder` function.
//
// As an app developer, if you wish to skip running InitGenesis for your new
// module "foo", you need to manually pass a `fromVM` argument to this function
// with foo's module version set to its latest ConsensusVersion. That way, the diff
// between the function's `fromVM` and `updatedVM` will be empty, hence not
// running anything for foo.
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    // Assume "foo" is a new module.
//	    // `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist
//	    // before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
//	    // run InitGenesis on foo.
//	    // To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
//	    // consensus version:
//	    fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
//
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
func (m Manager) RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
	c, ok := cfg.(*configurator)
	if !ok {
		return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{}, cfg)
	}

	modules := m.OrderMigrations
	if modules == nil {
		modules = DefaultMigrationsOrder(m.ModuleNames())
	}

	sdkCtx := sdk.UnwrapSDKContext(ctx)
	updatedVM := VersionMap{}
	for _, moduleName := range modules {
		module := m.Modules[moduleName]
		fromVersion, exists := fromVM[moduleName]
		toVersion := uint64(0)
		if module, ok := module.(HasConsensusVersion); ok {
			toVersion = module.ConsensusVersion()
		}

		// We run migration if the module is specified in `fromVM`.
		// Otherwise we run InitGenesis.
		//
		// The module won't exist in the fromVM in two cases:
		// 1. A new module is added. In this case we run InitGenesis with an
		// empty genesis state.
		// 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
		// In this case, all modules have yet to be added to x/upgrade's VersionMap store.
		if exists {
			err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion)
			if err != nil {
				return nil, err
			}
		} else {
			sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
			if module, ok := m.Modules[moduleName].(HasGenesis); ok {
				moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc))
				// The module manager assumes only one module will update the
				// validator set, and it can't be a new module.
				if len(moduleValUpdates) > 0 {
					return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
				}
			}
		}

		updatedVM[moduleName] = toVersion
	}

	return updatedVM, nil
}

// BeginBlock performs begin block functionality for all modules. It creates a
// child context with an event manager to aggregate events emitted from all
// modules.
func (m *Manager) BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) {
	ctx = ctx.WithEventManager(sdk.NewEventManager())
	for _, moduleName := range m.OrderBeginBlockers {
		if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok {
			err := module.BeginBlock(ctx)
			if err != nil {
				return sdk.BeginBlock{}, err
			}
		}
	}

	return sdk.BeginBlock{
		Events: ctx.EventManager().ABCIEvents(),
	}, nil
}

// EndBlock performs end block functionality for all modules. It creates a
// child context with an event manager to aggregate events emitted from all
// modules.
func (m *Manager) EndBlock(ctx sdk.Context) (sdk.EndBlock, error) {
	ctx = ctx.WithEventManager(sdk.NewEventManager())
	validatorUpdates := []abci.ValidatorUpdate{}
	for _, moduleName := range m.OrderEndBlockers {
		if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok {
			err := module.EndBlock(ctx)
			if err != nil {
				return sdk.EndBlock{}, err
			}
		} else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok {
			moduleValUpdates, err := module.EndBlock(ctx)
			if err != nil {
				return sdk.EndBlock{}, err
			}
			// use these validator updates if provided, the module manager assumes
			// only one module will update the validator set
			if len(moduleValUpdates) > 0 {
				if len(validatorUpdates) > 0 {
					return sdk.EndBlock{}, errors.New("validator EndBlock updates already set by a previous module")
				}
				for _, updates := range moduleValUpdates {
					validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{
						PubKey: updates.PubKey,
						Power:  updates.Power,
					})
				}
			}
		} else {
			continue
		}
	}

	return sdk.EndBlock{
		ValidatorUpdates: validatorUpdates,
		Events:           ctx.EventManager().ABCIEvents(),
	}, nil
}

// Precommit performs precommit functionality for all modules.
func (m *Manager) Precommit(ctx sdk.Context) error {
	for _, moduleName := range m.OrderPrecommiters {
		module, ok := m.Modules[moduleName].(appmodule.HasPrecommit)
		if !ok {
			continue
		}
		if err := module.Precommit(ctx); err != nil {
			return err
		}
	}

	return nil
}

// PrepareCheckState performs functionality for preparing the check state for all modules.
func (m *Manager) PrepareCheckState(ctx sdk.Context) error {
	for _, moduleName := range m.OrderPrepareCheckStaters {
		module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState)
		if !ok {
			continue
		}
		if err := module.PrepareCheckState(ctx); err != nil {
			return err
		}
	}

	return nil
}

// GetVersionMap gets the consensus version from all modules
func (m *Manager) GetVersionMap() VersionMap {
	vermap := make(VersionMap)
	for name, v := range m.Modules {
		version := uint64(0)
		if v, ok := v.(HasConsensusVersion); ok {
			version = v.ConsensusVersion()
		}
		name := name
		vermap[name] = version
	}

	return vermap
}

// ModuleNames returns a list of all module names, in no particular order.
func (m *Manager) ModuleNames() []string {
	return maps.Keys(m.Modules)
}

// DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
// except x/auth which will run last, see:
// https://github.com/cosmos/cosmos-sdk/issues/10591
func DefaultMigrationsOrder(modules []string) []string {
	const authName = "auth"
	out := make([]string, 0, len(modules))
	hasAuth := false
	for _, m := range modules {
		if m == authName {
			hasAuth = true
		} else {
			out = append(out, m)
		}
	}

	sort.Strings(out)
	if hasAuth {
		out = append(out, authName)
	}

	return out
}
```

Let us go through the methods:

* `RegisterLegacyAminoCodec(*codec.LegacyAmino)`: Registers the `amino` codec for the module, which is used to marshal and unmarshal structs to/from `[]byte` in order to persist them in the module's `KVStore`.
* `RegisterInterfaces(codectypes.InterfaceRegistry)`: Registers a module's interface types and their concrete implementations as `proto.Message`.
* `RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)`: Registers gRPC routes for the module.

All of an application's `AppModuleBasic` implementations are managed by the [`BasicManager`](#basicmanager).

### `HasName`

```go expandable
/*
Package module contains application module patterns and associated "manager" functionality.
The module pattern has been broken down by:
  - independent module functionality (AppModuleBasic)
  - inter-dependent module genesis functionality (AppModuleGenesis)
  - inter-dependent module simulation functionality (AppModuleSimulation)
  - inter-dependent module full functionality (AppModule)

Inter-dependent module functionality is module functionality which somehow
depends on other modules, typically through the module keeper. Many of the
module keepers are dependent on each other, thus in order to access the full
set of module functionality we need to define all the keepers/params-store/keys
etc.
This full set of advanced functionality is defined by the AppModule interface.

Independent module functions are separated to allow for the construction of the
basic application structures required early on in the application definition
and used to enable the definition of full module functionality later in the
process. This separation is necessary, however we still want to allow for a
high level pattern for modules to follow - for instance, such that we don't
have to manually register all of the codecs for all the modules. This basic
procedure as well as other basic patterns are handled through the use of
BasicManager.

Lastly, the interface for genesis functionality (AppModuleGenesis) has been
separated out from full module functionality (AppModule) so that modules which
are only used for genesis can take advantage of the Module patterns without
needlessly defining many placeholder functions.
*/
package module

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"sort"

	"cosmossdk.io/core/appmodule"
	"cosmossdk.io/core/genesis"
	abci "github.com/cometbft/cometbft/abci/types"
	"github.com/grpc-ecosystem/grpc-gateway/runtime"
	"github.com/spf13/cobra"
	"golang.org/x/exp/maps"

	storetypes "cosmossdk.io/store/types"

	errorsmod "cosmossdk.io/errors"

	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

// AppModuleBasic is the standard form for basic non-dependent elements of an application module.
type AppModuleBasic interface {
	HasName
	RegisterLegacyAminoCodec(*codec.LegacyAmino)
	RegisterInterfaces(types.InterfaceRegistry)

	// client functionality
	RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)
	GetTxCmd() *cobra.Command
	GetQueryCmd() *cobra.Command
}

// HasName allows the module to provide its own name for legacy purposes.
// Newer apps should specify the name for their modules using a map
// using NewManagerFromMap.
type HasName interface {
	Name() string
}

// HasGenesisBasics is the legacy interface for stateless genesis methods.
type HasGenesisBasics interface {
	DefaultGenesis(codec.JSONCodec) json.RawMessage
	ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) error
}

// BasicManager is a collection of AppModuleBasic
type BasicManager map[string]AppModuleBasic

// NewBasicManager creates a new BasicManager object
func NewBasicManager(modules ...AppModuleBasic) BasicManager {
	moduleMap := make(map[string]AppModuleBasic)
	for _, module := range modules {
		moduleMap[module.Name()] = module
	}

	return moduleMap
}

// NewBasicManagerFromManager creates a new BasicManager from a Manager.
// The BasicManager will contain all AppModuleBasic from the AppModule Manager.
// A module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map.
func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) BasicManager {
	moduleMap := make(map[string]AppModuleBasic)
	for name, module := range manager.Modules {
		if customBasicMod, ok := customModuleBasics[name]; ok {
			moduleMap[name] = customBasicMod
			continue
		}
		if appModule, ok := module.(appmodule.AppModule); ok {
			moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule)
			continue
		}
		if basicMod, ok := module.(AppModuleBasic); ok {
			moduleMap[name] = basicMod
		}
	}

	return moduleMap
}

// RegisterLegacyAminoCodec registers all module codecs
func (bm BasicManager) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) {
	for _, b := range bm {
		b.RegisterLegacyAminoCodec(cdc)
	}
}

// RegisterInterfaces registers all module interface types
func (bm BasicManager) RegisterInterfaces(registry types.InterfaceRegistry) {
	for _, m := range bm {
		m.RegisterInterfaces(registry)
	}
}

// DefaultGenesis provides default genesis information for all modules
func (bm BasicManager) DefaultGenesis(cdc codec.JSONCodec) map[string]json.RawMessage {
	genesisData := make(map[string]json.RawMessage)
	for _, b := range bm {
		if mod, ok := b.(HasGenesisBasics); ok {
			genesisData[b.Name()] = mod.DefaultGenesis(cdc)
		}
	}

	return genesisData
}

// ValidateGenesis performs genesis state validation for all modules
func (bm BasicManager) ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) error {
	for _, b := range bm {
		// first check if the module is an adapted Core API Module
		if mod, ok := b.(HasGenesisBasics); ok {
			if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil {
				return err
			}
		}
	}

	return nil
}

// RegisterGRPCGatewayRoutes registers all module rest routes
func (bm BasicManager) RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) {
	for _, b := range bm {
		b.RegisterGRPCGatewayRoutes(clientCtx, rtr)
	}
}

// AddTxCommands adds all tx commands to the rootTxCmd.
func (bm BasicManager) AddTxCommands(rootTxCmd *cobra.Command) {
	for _, b := range bm {
		if cmd := b.GetTxCmd(); cmd != nil {
			rootTxCmd.AddCommand(cmd)
		}
	}
}

// AddQueryCommands adds all query commands to the rootQueryCmd.
func (bm BasicManager) AddQueryCommands(rootQueryCmd *cobra.Command) {
	for _, b := range bm {
		if cmd := b.GetQueryCmd(); cmd != nil {
			rootQueryCmd.AddCommand(cmd)
		}
	}
}

// AppModuleGenesis is the standard form for an application module's genesis functions
type AppModuleGenesis interface {
	AppModuleBasic
	HasGenesis
}

// HasGenesis is the extension interface for stateful genesis methods.
type HasGenesis interface {
	HasGenesisBasics
	InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
	ExportGenesis(sdk.Context, codec.JSONCodec) json.RawMessage
}

// AppModule is the form for an application module. Most of
// its functionality has been moved to extension interfaces.
type AppModule interface {
	AppModuleBasic
}

// HasInvariants is the interface for registering invariants.
type HasInvariants interface {
	// RegisterInvariants registers module invariants.
	RegisterInvariants(sdk.InvariantRegistry)
}

// HasServices is the interface for modules to register services.
type HasServices interface {
	// RegisterServices allows a module to register services.
	RegisterServices(Configurator)
}

// HasConsensusVersion is the interface for declaring a module consensus version.
type HasConsensusVersion interface {
	// ConsensusVersion is a sequence number for state-breaking change of the
	// module. It should be incremented on each consensus-breaking change
	// introduced by the module. To avoid wrong/empty versions, the initial version
	// should be set to 1.
	ConsensusVersion() uint64
}

type HasABCIEndblock interface {
	AppModule
	EndBlock(context.Context) ([]abci.ValidatorUpdate, error)
}

// GenesisOnlyAppModule is an AppModule that only has import/export functionality
type GenesisOnlyAppModule struct {
	AppModuleGenesis
}

// NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object
func NewGenesisOnlyAppModule(amg AppModuleGenesis) GenesisOnlyAppModule {
	return GenesisOnlyAppModule{
		AppModuleGenesis: amg,
	}
}

// IsOnePerModuleType implements the depinject.OnePerModuleType interface.
func (GenesisOnlyAppModule) IsOnePerModuleType() {}

// IsAppModule implements the appmodule.AppModule interface.
func (GenesisOnlyAppModule) IsAppModule() {}

// RegisterInvariants is a placeholder function that registers no invariants
func (GenesisOnlyAppModule) RegisterInvariants(_ sdk.InvariantRegistry) {}

// QuerierRoute returns an empty module querier route
func (GenesisOnlyAppModule) QuerierRoute() string { return "" }

// RegisterServices registers all services.
func (gam GenesisOnlyAppModule) RegisterServices(Configurator) {}

// ConsensusVersion implements AppModule/ConsensusVersion.
func (gam GenesisOnlyAppModule) ConsensusVersion() uint64 { return 1 }

// BeginBlock returns an empty module begin-block
func (gam GenesisOnlyAppModule) BeginBlock(ctx sdk.Context) error { return nil }

// EndBlock returns an empty module end-block
func (GenesisOnlyAppModule) EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) {
	return []abci.ValidatorUpdate{}, nil
}

// Manager defines a module manager that provides the high level utility for managing and executing
// operations for a group of modules
type Manager struct {
	Modules                  map[string]interface{} // interface{} is used now to support the legacy AppModule as well as the new core appmodule.AppModule.
	OrderInitGenesis         []string
	OrderExportGenesis       []string
	OrderBeginBlockers       []string
	OrderEndBlockers         []string
	OrderPrepareCheckStaters []string
	OrderPrecommiters        []string
	OrderMigrations          []string
}

// NewManager creates a new Manager object.
func NewManager(modules ...AppModule) *Manager {
	moduleMap := make(map[string]interface{})
	modulesStr := make([]string, 0, len(modules))
	for _, module := range modules {
		moduleMap[module.Name()] = module
		modulesStr = append(modulesStr, module.Name())
	}

	return &Manager{
		Modules:                  moduleMap,
		OrderInitGenesis:         modulesStr,
		OrderExportGenesis:       modulesStr,
		OrderBeginBlockers:       modulesStr,
		OrderPrepareCheckStaters: modulesStr,
		OrderPrecommiters:        modulesStr,
		OrderEndBlockers:         modulesStr,
	}
}

// NewManagerFromMap creates a new Manager object from a map of module names to module implementations.
// This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API.
func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager {
	simpleModuleMap := make(map[string]interface{})
	modulesStr := make([]string, 0, len(simpleModuleMap))
	for name, module := range moduleMap {
		simpleModuleMap[name] = module
		modulesStr = append(modulesStr, name)
	}

	// Sort the modules by name. Given that we are using a map above we can't guarantee the order.
	sort.Strings(modulesStr)

	return &Manager{
		Modules:                  simpleModuleMap,
		OrderInitGenesis:         modulesStr,
		OrderExportGenesis:       modulesStr,
		OrderBeginBlockers:       modulesStr,
		OrderEndBlockers:         modulesStr,
		OrderPrecommiters:        modulesStr,
		OrderPrepareCheckStaters: modulesStr,
	}
}

// SetOrderInitGenesis sets the order of init genesis calls
func (m *Manager) SetOrderInitGenesis(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) bool {
		module := m.Modules[moduleName]
		if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis {
			return !hasGenesis
		}

		_, hasGenesis := module.(HasGenesis)
		return !hasGenesis
	})

	m.OrderInitGenesis = moduleNames
}

// SetOrderExportGenesis sets the order of export genesis calls
func (m *Manager) SetOrderExportGenesis(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) bool {
		module := m.Modules[moduleName]
		if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis {
			return !hasGenesis
		}

		_, hasGenesis := module.(HasGenesis)
		return !hasGenesis
	})

	m.OrderExportGenesis = moduleNames
}

// SetOrderBeginBlockers sets the order of begin-blocker calls
func (m *Manager) SetOrderBeginBlockers(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames,
		func(moduleName string) bool {
			module := m.Modules[moduleName]
			_, hasBeginBlock := module.(appmodule.HasBeginBlocker)
			return !hasBeginBlock
		})

	m.OrderBeginBlockers = moduleNames
}

// SetOrderEndBlockers sets the order of end-blocker calls
func (m *Manager) SetOrderEndBlockers(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames,
		func(moduleName string) bool {
			module := m.Modules[moduleName]
			_, hasEndBlock := module.(HasABCIEndblock)
			return !hasEndBlock
		})

	m.OrderEndBlockers = moduleNames
}

// SetOrderPrepareCheckStaters sets the order of prepare-check-stater calls
func (m *Manager) SetOrderPrepareCheckStaters(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames,
		func(moduleName string) bool {
			module := m.Modules[moduleName]
			_, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState)
			return !hasPrepareCheckState
		})

	m.OrderPrepareCheckStaters = moduleNames
}

// SetOrderPrecommiters sets the order of precommiter calls
func (m *Manager) SetOrderPrecommiters(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames,
		func(moduleName string) bool {
			module := m.Modules[moduleName]
			_, hasPrecommit := module.(appmodule.HasPrecommit)
			return !hasPrecommit
		})

	m.OrderPrecommiters = moduleNames
}

// SetOrderMigrations sets the order of migrations to be run. If not set
// then migrations will be run with an order defined in `DefaultMigrationsOrder`.
func (m *Manager) SetOrderMigrations(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil)
	m.OrderMigrations = moduleNames
}

// RegisterInvariants registers all module invariants
func (m *Manager) RegisterInvariants(ir sdk.InvariantRegistry) {
	for _, module := range m.Modules {
		if module, ok := module.(HasInvariants); ok {
			module.RegisterInvariants(ir)
		}
	}
}

// RegisterServices registers all module services
func (m *Manager) RegisterServices(cfg Configurator) error {
	for _, module := range m.Modules {
		if module, ok := module.(HasServices); ok {
			module.RegisterServices(cfg)
		}

		if module, ok := module.(appmodule.HasServices); ok {
			err := module.RegisterServices(cfg)
			if err != nil {
				return err
			}
		}

		if cfg.Error() != nil {
			return cfg.Error()
		}
	}

	return nil
}

// InitGenesis performs init genesis functionality for modules. Exactly one
// module must return a non-empty validator set update to correctly initialize
// the chain.
func (m *Manager) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) {
	var validatorUpdates []abci.ValidatorUpdate
	ctx.Logger().Info("initializing blockchain state from genesis.json")
	for _, moduleName := range m.OrderInitGenesis {
		if genesisData[moduleName] == nil {
			continue
		}

		mod := m.Modules[moduleName]
		// we might get an adapted module, a native core API module or a legacy module
		if module, ok := mod.(appmodule.HasGenesis); ok {
			ctx.Logger().Debug("running initialization for module", "module", moduleName)
			// core API genesis
			source, err := genesis.SourceFromRawJSON(genesisData[moduleName])
			if err != nil {
				return &abci.ResponseInitChain{}, err
			}

			err = module.InitGenesis(ctx, source)
			if err != nil {
				return &abci.ResponseInitChain{}, err
			}
		} else if module, ok := mod.(HasGenesis); ok {
			ctx.Logger().Debug("running initialization for module", "module", moduleName)
			moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName])

			// use these validator updates if provided, the module manager assumes
			// only one module will update the validator set
			if len(moduleValUpdates) > 0 {
				if len(validatorUpdates) > 0 {
					return &abci.ResponseInitChain{}, errors.New("validator InitGenesis updates already set by a previous module")
				}
				validatorUpdates = moduleValUpdates
			}
		}
	}

	// a chain must initialize with a non-empty validator set
	if len(validatorUpdates) == 0 {
		return &abci.ResponseInitChain{}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)
	}

	return &abci.ResponseInitChain{
		Validators: validatorUpdates,
	}, nil
}

// ExportGenesis performs export
genesis functionality for modules
func (m *Manager) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) {
	return m.ExportGenesisForModules(ctx, cdc, []string{})
}

// ExportGenesisForModules performs export genesis functionality for modules
func (m *Manager) ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) {
	if len(modulesToExport) == 0 {
		modulesToExport = m.OrderExportGenesis
	}
	// verify modules exist in the app, so that we don't panic in the middle of an export
	if err := m.checkModulesExists(modulesToExport); err != nil {
		return nil, err
	}

	type genesisResult struct {
		bz  json.RawMessage
		err error
	}

	channels := make(map[string]chan genesisResult)
	for _, moduleName := range modulesToExport {
		mod := m.Modules[moduleName]
		if module, ok := mod.(appmodule.HasGenesis); ok {
			// core API genesis
			channels[moduleName] = make(chan genesisResult)
			go func(module appmodule.HasGenesis, ch chan genesisResult) {
				ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) // avoid race conditions
				target := genesis.RawJSONTarget{}
				err := module.ExportGenesis(ctx, target.Target())
				if err != nil {
					ch <- genesisResult{nil, err}
					return
				}

				rawJSON, err := target.JSON()
				if err != nil {
					ch <- genesisResult{nil, err}
					return
				}

				ch <- genesisResult{rawJSON, nil}
			}(module, channels[moduleName])
		} else if module, ok := mod.(HasGenesis); ok {
			channels[moduleName] = make(chan genesisResult)
			go func(module HasGenesis, ch chan genesisResult) {
				ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) // avoid race conditions
				ch <- genesisResult{module.ExportGenesis(ctx, cdc), nil}
			}(module, channels[moduleName])
		}
	}

	genesisData := make(map[string]json.RawMessage)
	for moduleName := range channels {
		res := <-channels[moduleName]
		if res.err != nil {
			return nil, res.err
		}

		genesisData[moduleName] = res.bz
	}

	return genesisData, nil
}

// checkModulesExists verifies that all modules in the list exist in the app
func (m *Manager) checkModulesExists(moduleName []string) error {
	for _, name := range moduleName {
		if _, ok := m.Modules[name]; !ok {
			return fmt.Errorf("module %s does not exist", name)
		}
	}

	return nil
}

// assertNoForgottenModules checks that we didn't forget any modules in the
// SetOrder* functions.
// `pass` is a closure which allows one to omit modules from `moduleNames`.
// If you provide a non-nil `pass` and it returns true, the module will not be
// subject to the assertion.
func (m *Manager) assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) bool) {
	ms := make(map[string]bool)
	for _, m := range moduleNames {
		ms[m] = true
	}

	var missing []string
	for m := range m.Modules {
		m := m
		if pass != nil && pass(m) {
			continue
		}
		if !ms[m] {
			missing = append(missing, m)
		}
	}
	if len(missing) != 0 {
		sort.Strings(missing)
		panic(fmt.Sprintf(
			"all modules must be defined when setting %s, missing: %v", setOrderFnName, missing))
	}
}

// MigrationHandler is the migration function that each module registers.
type MigrationHandler func(sdk.Context) error

// VersionMap is a map of moduleName -> version
type VersionMap map[string]uint64

// RunMigrations performs in-place store migrations for all modules. This
// function MUST be called inside an x/upgrade UpgradeHandler.
//
// Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
// x/upgrade's store, and the function needs to return the target VersionMap
// that will in turn be persisted to the x/upgrade's store. In general,
// returning RunMigrations should be enough:
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Internally, RunMigrations will perform the following steps:
//   - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
//   - make a diff of `fromVM` and `updatedVM`, and for each module:
//   - if the module's `fromVM` version is less than its `updatedVM` version,
//     then run in-place store migrations for that module between those versions.
//   - if the module does not exist in the `fromVM` (which means that it's a new module,
//     because it was not in the previous x/upgrade's store), then run
//     `InitGenesis` on that module.
//   - return the `updatedVM` to be persisted in the x/upgrade's store.
//
// Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
// defined by the `DefaultMigrationsOrder` function.
//
// As an app developer, if you wish to skip running InitGenesis for your new
// module "foo", you need to manually pass a `fromVM` argument to this function
// with foo's module version set to its latest ConsensusVersion. That way, the diff
// between the function's `fromVM` and `updatedVM` will be empty, hence not
// running anything for foo.
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    // Assume "foo" is a new module.
//	    // `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist
//	    // before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
//	    // run InitGenesis on foo.
//	    // To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
//	    // consensus version:
//	    fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
//
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
func (m Manager) RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
	c, ok := cfg.(*configurator)
	if !ok {
		return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{}, cfg)
	}

	modules := m.OrderMigrations
	if modules == nil {
		modules = DefaultMigrationsOrder(m.ModuleNames())
	}

	sdkCtx := sdk.UnwrapSDKContext(ctx)
	updatedVM := VersionMap{}
	for _, moduleName := range modules {
		module := m.Modules[moduleName]
		fromVersion, exists := fromVM[moduleName]
		toVersion := uint64(0)
		if module, ok := module.(HasConsensusVersion); ok {
			toVersion = module.ConsensusVersion()
		}

		// We run migration if the module is specified in `fromVM`.
		// Otherwise we run InitGenesis.
		//
		// The module won't exist in the fromVM in two cases:
		// 1. A new module is added. In this case we run InitGenesis with an
		// empty genesis state.
		// 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
		// In this case, all modules have yet to be added to x/upgrade's VersionMap store.
		if exists {
			err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion)
			if err != nil {
				return nil, err
			}
		} else {
			sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
			if module, ok := m.Modules[moduleName].(HasGenesis); ok {
				moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc))
				// The module manager assumes only one module will update the
				// validator set, and it can't be a new module.
+ if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + err := module.BeginBlock(ctx) + if err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: 
updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. +func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +* `HasName` is an interface that has a method `Name()`. This method returns the name of the module as a `string`. + +### Genesis + + +For easily creating an `AppModule` that only has genesis functionalities, use `module.GenesisOnlyAppModule`. + + +#### `module.HasGenesisBasics` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. 
This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. 
+type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context) + +error { + return nil +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) { + return []abci.ValidatorUpdate{ +}, nil +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. + sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = 
moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasEndBlock := module.(HasABCIEndblock) + +return !hasEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return &abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) 
(map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all 
modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide a non-nil `pass` and it returns true, the module will not be subject to the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ with foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from the existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo.
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module.
+ if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + err := module.BeginBlock(ctx) + if err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: 
updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. +func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +Let us go through the methods: + +* `DefaultGenesis(codec.JSONCodec)`: Returns a default [`GenesisState`](/docs/sdk/v0.50/documentation/module-system/genesis#genesisstate) for the module, marshalled to `json.RawMessage`. The default `GenesisState` needs to be defined by the module developer and is primarily used for testing. +* `ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`: Used to validate the `GenesisState` defined by a module, given in its `json.RawMessage` form. It will usually unmarshal the JSON before running a custom [`ValidateGenesis`](/docs/sdk/v0.50/documentation/module-system/genesis#validategenesis) function defined by the module developer. + +#### `module.HasGenesis` + +`HasGenesis` is an extension interface for allowing modules to implement genesis functionalities. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper.
Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. 
+type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { 
+ b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. 
+func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. +type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. 
+ ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ genesisOnlyModule is an interface need to return GenesisOnlyAppModule struct in order to wrap two interfaces +type genesisOnlyModule interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + genesisOnlyModule +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg genesisOnlyModule) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + genesisOnlyModule: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", 
moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndblock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) +/ defined by the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + +module1, ok := m.Modules[moduleName].(HasGenesis) + if ok { + module1.InitGenesis(sdkCtx, c.cdc, module1.DefaultGenesis(c.cdc)) +} + if module2, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module2.InitGenesis(sdkCtx, c.cdc, module1.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ RunMigrationBeginBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was executed or not and an error if fails. +func (m *Manager) + +RunMigrationBeginBlock(ctx sdk.Context) (bool, error) { + for _, moduleName := range m.OrderBeginBlockers { + if mod, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := mod.(appmodule.UpgradeModule); ok { + err := mod.BeginBlock(ctx) + +return err == nil, err +} + +} + +} + +return false, nil +} + +/ BeginBlock performs begin block functionality for non-upgrade modules. It creates a +/ child context with an event manager to aggregate events emitted from non-upgrade +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := module.(appmodule.UpgradeModule); !ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +#### `module.HasABCIGenesis` + +`HasABCIGenesis` is an extension interface for allowing modules to implement genesis functionalities and return validator set updates.
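
The invariant the manager enforces on top of this interface — exactly one module may supply the initial validator set from `InitGenesis` — can be sketched in plain Go. This is a simplified, self-contained illustration; `ValUpdate`, `ABCIGenesisModule`, and the `staking`/`bank` stubs below are stand-ins, not SDK APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// ValUpdate is a stand-in for abci.ValidatorUpdate (illustrative only).
type ValUpdate struct {
	PubKey string
	Power  int64
}

// ABCIGenesisModule mirrors the shape of HasABCIGenesis: InitGenesis may
// return validator updates instead of only mutating state.
type ABCIGenesisModule interface {
	Name() string
	InitGenesis() []ValUpdate
}

// initGenesis walks modules in the given order and enforces the manager's
// invariants: at most one module returns a non-empty validator set, and the
// resulting set must be non-empty.
func initGenesis(order []string, mods map[string]ABCIGenesisModule) ([]ValUpdate, error) {
	var updates []ValUpdate
	for _, name := range order {
		mod, ok := mods[name]
		if !ok {
			continue
		}
		if vu := mod.InitGenesis(); len(vu) > 0 {
			if len(updates) > 0 {
				return nil, errors.New("validator InitGenesis updates already set by a previous module")
			}
			updates = vu
		}
	}
	if len(updates) == 0 {
		return nil, errors.New("validator set is empty after InitGenesis")
	}
	return updates, nil
}

// staking returns the initial validator set; bank returns none.
type staking struct{}

func (staking) Name() string             { return "staking" }
func (staking) InitGenesis() []ValUpdate { return []ValUpdate{{PubKey: "val1", Power: 10}} }

type bank struct{}

func (bank) Name() string             { return "bank" }
func (bank) InitGenesis() []ValUpdate { return nil }

func main() {
	mods := map[string]ABCIGenesisModule{"staking": staking{}, "bank": bank{}}
	updates, err := initGenesis([]string{"bank", "staking"}, mods)
	fmt.Println(updates, err) // staking's single update, nil error
}
```

Note that the order of iteration matters only for determinism here; the uniqueness check fires regardless of where in `OrderInitGenesis` the second update-returning module appears.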
+ +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. 
+ ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ genesisOnlyModule is an interface need to return GenesisOnlyAppModule struct in order to wrap two interfaces +type genesisOnlyModule interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + genesisOnlyModule +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg genesisOnlyModule) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + genesisOnlyModule: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", 
moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndblock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/	cfg := module.NewConfigurator(...)
+/	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/	return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
+
+defined by
+/ the `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function with
+/ foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/	cfg := module.NewConfigurator(...)
+/	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/	/ Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist
+/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
+/ / run InitGenesis on foo.
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+/ / consensus version:
+/	fromVM["foo"] = foo.AppModule{
+}.ConsensusVersion()
+/
+/	return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
+func (m Manager)
+
+RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
+	c, ok := cfg.(*configurator)
+	if !ok {
+		return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{
+}, cfg)
+}
+	modules := m.OrderMigrations
+	if modules == nil {
+		modules = DefaultMigrationsOrder(m.ModuleNames())
+}
+	sdkCtx := sdk.UnwrapSDKContext(ctx)
+	updatedVM := VersionMap{
+}
+	for _, moduleName := range modules {
+		module := m.Modules[moduleName]
+		fromVersion, exists := fromVM[moduleName]
+		toVersion := uint64(0)
+		if module, ok := module.(HasConsensusVersion); ok {
+			toVersion = module.ConsensusVersion()
+}
+
+		/ We run migration if the module is specified in `fromVM`.
+		/ Otherwise we run InitGenesis.
+		/
+		/ The module won't exist in the fromVM in two cases:
+		/ 1. A new module is added. In this case we run InitGenesis with an
+		/ empty genesis state.
+		/ 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
+		/ In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + +module1, ok := m.Modules[moduleName].(HasGenesis) + if ok { + module1.InitGenesis(sdkCtx, c.cdc, module1.DefaultGenesis(c.cdc)) +} + if module2, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module2.InitGenesis(sdkCtx, c.cdc, module1.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ RunMigrationBeginBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was executed or not and an error if fails. +func (m *Manager) + +RunMigrationBeginBlock(ctx sdk.Context) (bool, error) { + for _, moduleName := range m.OrderBeginBlockers { + if mod, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := mod.(appmodule.UpgradeModule); ok { + err := mod.BeginBlock(ctx) + +return err == nil, err +} + +} + +} + +return false, nil +} + +/ BeginBlock performs begin block functionality for non-upgrade modules. It creates a +/ child context with an event manager to aggregate events emitted from non-upgrade +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := module.(appmodule.UpgradeModule); !ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +#### `appmodule.HasGenesis` + + +`appmodule.HasGenesis` is experimental and should be considered unstable, it is recommended to not use this interface at this time. 
+ + +```go expandable +package appmodule + +import ( + + "context" + "io" +) + +/ HasGenesis is the extension interface that modules should implement to handle +/ genesis data and state initialization. +/ WARNING: This interface is experimental and may change at any time. +type HasGenesis interface { + AppModule + + / DefaultGenesis writes the default genesis for this module to the target. + DefaultGenesis(GenesisTarget) + +error + + / ValidateGenesis validates the genesis data read from the source. + ValidateGenesis(GenesisSource) + +error + + / InitGenesis initializes module state from the genesis source. + InitGenesis(context.Context, GenesisSource) + +error + + / ExportGenesis exports module state to the genesis target. + ExportGenesis(context.Context, GenesisTarget) + +error +} + +/ GenesisSource is a source for genesis data in JSON format. It may abstract over a +/ single JSON object or separate files for each field in a JSON object that can +/ be streamed over. Modules should open a separate io.ReadCloser for each field that +/ is required. When fields represent arrays they can efficiently be streamed +/ over. If there is no data for a field, this function should return nil, nil. It is +/ important that the caller closes the reader when done with it. +type GenesisSource = func(field string) (io.ReadCloser, error) + +/ GenesisTarget is a target for writing genesis data in JSON format. It may +/ abstract over a single JSON object or JSON in separate files that can be +/ streamed over. Modules should open a separate io.WriteCloser for each field +/ and should prefer writing fields as arrays when possible to support efficient +/ iteration. It is important the caller closers the writer AND checks the error +/ when done with it. It is expected that a stream of JSON data is written +/ to the writer. +type GenesisTarget = func(field string) (io.WriteCloser, error) +``` + +### `AppModule` + +The `AppModule` interface defines a module. 
Modules can declare their functionalities by implementing extensions interfaces. +`AppModule`s are managed by the [module manager](#manager), which checks which extension interfaces are implemented by the module. + +#### `appmodule.AppModule` + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. 
+type HasPrecommit interface {
+ AppModule
+ Precommit(context.Context)
+
+error
+}
+
+/ HasBeginBlocker is the extension interface that modules should implement to run
+/ custom logic before transaction processing in a block.
+type HasBeginBlocker interface {
+ AppModule
+
+ / BeginBlock is a method that will be run before transactions are processed in
+ / a block.
+ BeginBlock(context.Context)
+
+error
+}
+
+/ HasEndBlocker is the extension interface that modules should implement to run
+/ custom logic after transaction processing in a block.
+type HasEndBlocker interface {
+ AppModule
+
+ / EndBlock is a method that will be run after transactions are processed in
+ / a block.
+ EndBlock(context.Context)
+
+error
+}
+```
+
+#### `module.AppModule`
+
+
+Previously, the `module.AppModule` interface contained all the methods that are now defined in the extension interfaces. This led to significant boilerplate for modules that did not need all of this functionality.
+
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+ - independent module functionality (AppModuleBasic)
+ - inter-dependent module genesis functionality (AppModuleGenesis)
+ - inter-dependent module simulation functionality (AppModuleSimulation)
+ - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+ +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. 
+/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides 
default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. 
+type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context) + +error { + return nil +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) { + return []abci.ValidatorUpdate{ +}, nil +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasEndBlock := module.(HasABCIEndblock) + +return !hasEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ 
SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. +func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. 
Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return &abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export 
genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + 
+genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called insde an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) 
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of module with their latest ConsensusVersion +/ - make a diff of `fromVM` and `udpatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `udpatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. 
+/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. 
+ if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + err := module.BeginBlock(ctx) + if err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: 
updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. +func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager)
+
+ModuleNames() []string {
+ return maps.Keys(m.Modules)
+}
+
+/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+/ except x/auth which will run last, see:
+/ https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+ const authName = "auth"
+ out := make([]string, 0, len(modules))
+ hasAuth := false
+ for _, m := range modules {
+ if m == authName {
+ hasAuth = true
+}
+
+else {
+ out = append(out, m)
+}
+
+}
+
+sort.Strings(out)
+ if hasAuth {
+ out = append(out, authName)
+}
+
+return out
+}
+```
+
+### `HasInvariants`
+
+This interface defines one method, `RegisterInvariants`, which allows a module to register its invariants.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+ - independent module functionality (AppModuleBasic)
+ - inter-dependent module genesis functionality (AppModuleGenesis)
+ - inter-dependent module simulation functionality (AppModuleSimulation)
+ - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+
+Independent module functions are separated to allow for the construction of the
+basic application structures required early on in the application definition
+and used to enable the definition of full module functionality later in the
+process.
This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. 
+type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context) + +error { + return nil +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) { + return []abci.ValidatorUpdate{ +}, nil +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. + sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = 
moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasEndBlock := module.(HasABCIEndblock) + +return !hasEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return &abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) 
(map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all 
modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide non-nil `pass` and it returns true, the module would not be subject to the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ with foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo.
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module.
+ if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + err := module.BeginBlock(ctx) + if err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: 
updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. +func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +* `RegisterInvariants(sdk.InvariantRegistry)`: Registers the [`invariants`](/docs/sdk/v0.50/documentation/module-system/invariants) of the module. If an invariant deviates from its predicted value, the [`InvariantRegistry`](/docs/sdk/v0.50/documentation/module-system/invariants#registry) triggers appropriate logic (most often the chain will be halted). + +### `HasServices` + +This interface defines one method. It allows a module to register its services. + +#### `appmodule.HasServices` + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar.
+ / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. +type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} +``` + +#### `module.HasServices` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. 
+The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. 
+type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context) + +error { + return nil +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) { + return []abci.ValidatorUpdate{ +}, nil +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. + sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = 
moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasEndBlock := module.(HasABCIEndblock) + +return !hasEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return &abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) 
(map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all 
modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide non-nil `pass` and it returns true, the module will not be subject to the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) 
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. 
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. 
+ if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + err := module.BeginBlock(ctx) + if err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: 
updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. +func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +* `RegisterServices(Configurator)`: Allows a module to register services. + +### `HasConsensusVersion` + +This interface defines one method for checking a module consensus version. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. 
This separation is necessary; however, we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly, the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ with NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. 
+type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context) + +error { + return nil +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) { + return []abci.ValidatorUpdate{ +}, nil +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. + sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = 
moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasEndBlock := module.(HasABCIEndblock) + +return !hasEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return &abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) 
(map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all 
modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide a non-nil `pass` and it returns true, the module will not be subject to the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) 
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. 
+/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. 
+ if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + err := module.BeginBlock(ctx) + if err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: 
updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. +func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +* `ConsensusVersion() uint64`: Returns the consensus version of the module. + +### `HasPreBlocker` + +The `HasPreBlocker` is an extension interface from `appmodule.AppModule`. All modules that have a `PreBlock` method implement this interface. + +### `HasBeginBlocker` + +The `HasBeginBlocker` is an extension interface from `appmodule.AppModule`. All modules that have a `BeginBlock` method implement this interface. + +```go expandable +package appmodule + +import ( + + "context" + "cosmossdk.io/depinject" + "google.golang.org/grpc" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. 
+ / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. +type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} +``` + +* `BeginBlock(context.Context) error`: This method gives module developers the option to implement logic that is automatically triggered at the beginning of each block. + +### `HasEndBlocker` + +The `HasEndBlocker` is an extension interface from `appmodule.AppModule`. All modules that have an `EndBlock` method implement this interface. 
If a module needs to return validator set updates (e.g. staking), it can use `HasABCIEndBlock` + +```go expandable +package appmodule + +import ( + + "context" + "cosmossdk.io/depinject" + "google.golang.org/grpc" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. 
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} +``` + +* `EndBlock(context.Context) error`: This method gives module developers the option to implement logic that is automatically triggered at the end of each block. + +### `HasABCIEndBlock` + +The `HasABCIEndBlock` is an extension interface from `module.AppModule`. All modules that have an `EndBlock` which return validator set updates implement this interface. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. 
+ +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. 
+/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides 
default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. 
+type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context) + +error { + return nil +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) { + return []abci.ValidatorUpdate{ +}, nil +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasEndBlock := module.(HasABCIEndblock) + +return !hasEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ 
SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. +func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. 
Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. +func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return &abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export 
genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + 
+genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called insde an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) 
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of module with their latest ConsensusVersion +/ - make a diff of `fromVM` and `udpatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `udpatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. 
+/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. 
+ if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + err := module.BeginBlock(ctx) + if err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: 
updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. +func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
func (m *Manager) ModuleNames() []string {
	return maps.Keys(m.Modules)
}

// DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
// except x/auth which will run last, see:
// https://github.com/cosmos/cosmos-sdk/issues/10591
func DefaultMigrationsOrder(modules []string) []string {
	const authName = "auth"
	out := make([]string, 0, len(modules))
	hasAuth := false
	for _, m := range modules {
		if m == authName {
			hasAuth = true
		} else {
			out = append(out, m)
		}
	}
	sort.Strings(out)
	if hasAuth {
		out = append(out, authName)
	}
	return out
}
```

* `EndBlock(context.Context) ([]abci.ValidatorUpdate, error)`: This method gives module developers the option to inform the underlying consensus engine of validator set changes (e.g. the `staking` module).

### `HasPrecommit`

`HasPrecommit` is an extension interface from `appmodule.AppModule`. All modules that have a `Precommit` method implement this interface.

```go expandable
package appmodule

import (
	"context"

	"cosmossdk.io/depinject"
	"google.golang.org/grpc"
)

// AppModule is a tag interface for app module implementations to use as a basis
// for extension interfaces. It provides no functionality itself, but is the
// type that all valid app modules should provide so that they can be identified
// by other modules (usually via depinject) as app modules.
type AppModule interface {
	depinject.OnePerModuleType

	// IsAppModule is a dummy method to tag a struct as implementing an AppModule.
	IsAppModule()
}

// HasServices is the extension interface that modules should implement to register
// implementations of services defined in .proto files.
type HasServices interface {
	AppModule

	// RegisterServices registers the module's services with the app's service
	// registrar.
	//
	// Two types of services are currently supported:
	// - read-only gRPC query services, which are the default.
	// - transaction message services, which must have the protobuf service
	//   option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto")
	//   set to true.
	//
	// The service registrar will figure out which type of service you are
	// implementing based on the presence (or absence) of protobuf options. You
	// do not need to specify this in golang code.
	RegisterServices(grpc.ServiceRegistrar) error
}

// HasPrepareCheckState is an extension interface that contains information about the AppModule
// and PrepareCheckState.
type HasPrepareCheckState interface {
	AppModule
	PrepareCheckState(context.Context) error
}

// HasPrecommit is an extension interface that contains information about the AppModule and Precommit.
type HasPrecommit interface {
	AppModule
	Precommit(context.Context) error
}

// HasBeginBlocker is the extension interface that modules should implement to run
// custom logic before transaction processing in a block.
type HasBeginBlocker interface {
	AppModule

	// BeginBlock is a method that will be run before transactions are processed in
	// a block.
	BeginBlock(context.Context) error
}

// HasEndBlocker is the extension interface that modules should implement to run
// custom logic after transaction processing in a block.
type HasEndBlocker interface {
	AppModule

	// EndBlock is a method that will be run after transactions are processed in
	// a block.
	EndBlock(context.Context) error
}
```

* `Precommit(context.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](/docs/sdk/v0.50/learn/advanced/00-baseapp#commit) of each block, using the [`finalizeblockstate`](/docs/sdk/v0.50/learn/advanced/00-baseapp#state-updates) of the block to be committed. Leave the implementation empty if no logic needs to be triggered during `Commit` for this module.

### `HasPrepareCheckState`

`HasPrepareCheckState` is an extension interface from `appmodule.AppModule`. All modules that have a `PrepareCheckState` method implement this interface.

```go expandable
package appmodule

import (
	"context"

	"cosmossdk.io/depinject"
	"google.golang.org/grpc"
)

// AppModule is a tag interface for app module implementations to use as a basis
// for extension interfaces. It provides no functionality itself, but is the
// type that all valid app modules should provide so that they can be identified
// by other modules (usually via depinject) as app modules.
type AppModule interface {
	depinject.OnePerModuleType

	// IsAppModule is a dummy method to tag a struct as implementing an AppModule.
	IsAppModule()
}

// HasServices is the extension interface that modules should implement to register
// implementations of services defined in .proto files.
type HasServices interface {
	AppModule

	// RegisterServices registers the module's services with the app's service
	// registrar.
	//
	// Two types of services are currently supported:
	// - read-only gRPC query services, which are the default.
	// - transaction message services, which must have the protobuf service
	//   option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto")
	//   set to true.
	//
	// The service registrar will figure out which type of service you are
	// implementing based on the presence (or absence) of protobuf options. You
	// do not need to specify this in golang code.
	RegisterServices(grpc.ServiceRegistrar) error
}

// HasPrepareCheckState is an extension interface that contains information about the AppModule
// and PrepareCheckState.
type HasPrepareCheckState interface {
	AppModule
	PrepareCheckState(context.Context) error
}

// HasPrecommit is an extension interface that contains information about the AppModule and Precommit.
type HasPrecommit interface {
	AppModule
	Precommit(context.Context) error
}

// HasBeginBlocker is the extension interface that modules should implement to run
// custom logic before transaction processing in a block.
type HasBeginBlocker interface {
	AppModule

	// BeginBlock is a method that will be run before transactions are processed in
	// a block.
	BeginBlock(context.Context) error
}

// HasEndBlocker is the extension interface that modules should implement to run
// custom logic after transaction processing in a block.
type HasEndBlocker interface {
	AppModule

	// EndBlock is a method that will be run after transactions are processed in
	// a block.
	EndBlock(context.Context) error
}
```

* `PrepareCheckState(context.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](/docs/sdk/v0.50/learn/advanced/00-baseapp#commit) of each block, using the [`checkState`](/docs/sdk/v0.50/learn/advanced/00-baseapp#state-updates) of the next block. Leave the implementation empty if no logic needs to be triggered during `Commit` for this module.

### Implementing the Application Module Interfaces

Typically, the various application module interfaces are implemented in a file called `module.go`, located in the module's folder (e.g. `./x/module/module.go`).

Almost every module needs to implement the `AppModuleBasic` and `AppModule` interfaces. If the module is only used for genesis, it will implement `AppModuleGenesis` instead of `AppModule`. The concrete type that implements the interface can add parameters that are required for the implementation of the various methods of the interface. For example, the `Route()` function often calls a `NewMsgServerImpl(k keeper)` function defined in `keeper/msg_server.go` and therefore needs to pass the module's [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper) as a parameter.

```go
// example
type AppModule struct {
	AppModuleBasic
	keeper Keeper
}
```

In the example above, you can see that the `AppModule` concrete type references an `AppModuleBasic`, and not an `AppModuleGenesis`. That is because `AppModuleGenesis` only needs to be implemented in modules that focus on genesis-related functionalities. In most modules, the concrete `AppModule` type will have a reference to an `AppModuleBasic` and implement the two added methods of `AppModuleGenesis` directly in the `AppModule` type.

If no parameter is required (which is often the case for `AppModuleBasic`), just declare an empty concrete type like so:

```go
type AppModuleBasic struct{}
```

## Module Managers

Module managers are used to manage collections of `AppModuleBasic` and `AppModule`.

### `BasicManager`

The `BasicManager` is a structure that lists all the `AppModuleBasic` of an application:

```go expandable
/*
Package module contains application module patterns and associated "manager" functionality.
The module pattern has been broken down by:
  - independent module functionality (AppModuleBasic)
  - inter-dependent module genesis functionality (AppModuleGenesis)
  - inter-dependent module simulation functionality (AppModuleSimulation)
  - inter-dependent module full functionality (AppModule)

inter-dependent module functionality is module functionality which somehow
depends on other modules, typically through the module keeper. Many of the
module keepers are dependent on each other, thus in order to access the full
set of module functionality we need to define all the keepers/params-store/keys
etc. This full set of advanced functionality is defined by the AppModule interface.

Independent module functions are separated to allow for the construction of the
basic application structures required early on in the application definition
and used to enable the definition of full module functionality later in the
process. This separation is necessary, however we still want to allow for a
high level pattern for modules to follow - for instance, such that we don't
have to manually register all of the codecs for all the modules. This basic
procedure as well as other basic patterns are handled through the use of
BasicManager.

Lastly the interface for genesis functionality (AppModuleGenesis) has been
separated out from full module functionality (AppModule) so that modules which
are only used for genesis can take advantage of the Module patterns without
needlessly defining many placeholder functions
*/
package module

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"sort"

	"cosmossdk.io/core/appmodule"
	"cosmossdk.io/core/genesis"
	abci "github.com/cometbft/cometbft/abci/types"
	"github.com/grpc-ecosystem/grpc-gateway/runtime"
	"github.com/spf13/cobra"
	"golang.org/x/exp/maps"

	storetypes "cosmossdk.io/store/types"

	errorsmod "cosmossdk.io/errors"
	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

// AppModuleBasic is the standard form for basic non-dependant elements of an application module.
type AppModuleBasic interface {
	HasName
	RegisterLegacyAminoCodec(*codec.LegacyAmino)
	RegisterInterfaces(types.InterfaceRegistry)

	// client functionality
	RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)
	GetTxCmd() *cobra.Command
	GetQueryCmd() *cobra.Command
}

// HasName allows the module to provide its own name for legacy purposes.
// Newer apps should specify the name for their modules using a map
// using NewManagerFromMap.
type HasName interface {
	Name() string
}

// HasGenesisBasics is the legacy interface for stateless genesis methods.
type HasGenesisBasics interface {
	DefaultGenesis(codec.JSONCodec) json.RawMessage
	ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) error
}

// BasicManager is a collection of AppModuleBasic
type BasicManager map[string]AppModuleBasic

// NewBasicManager creates a new BasicManager object
func NewBasicManager(modules ...AppModuleBasic) BasicManager {
	moduleMap := make(map[string]AppModuleBasic)
	for _, module := range modules {
		moduleMap[module.Name()] = module
	}
	return moduleMap
}

// NewBasicManagerFromManager creates a new BasicManager from a Manager
// The BasicManager will contain all AppModuleBasic from the AppModule Manager
// Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map
func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) BasicManager {
	moduleMap := make(map[string]AppModuleBasic)
	for name, module := range manager.Modules {
		if customBasicMod, ok := customModuleBasics[name]; ok {
			moduleMap[name] = customBasicMod
			continue
		}
		if appModule, ok := module.(appmodule.AppModule); ok {
			moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule)
			continue
		}
		if basicMod, ok := module.(AppModuleBasic); ok {
			moduleMap[name] = basicMod
		}
	}
	return moduleMap
}

// RegisterLegacyAminoCodec registers all module codecs
func (bm BasicManager) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) {
	for _, b := range bm {
		b.RegisterLegacyAminoCodec(cdc)
	}
}

// RegisterInterfaces registers all module interface types
func (bm BasicManager) RegisterInterfaces(registry types.InterfaceRegistry) {
	for _, m := range bm {
		m.RegisterInterfaces(registry)
	}
}

// DefaultGenesis provides default genesis information for all modules
func (bm BasicManager) DefaultGenesis(cdc codec.JSONCodec) map[string]json.RawMessage {
	genesisData := make(map[string]json.RawMessage)
	for _, b := range bm {
		if mod,
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if cmd := b.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} +} + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasGenesis +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. 
+type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ QuerierRoute returns an empty module querier route +func (GenesisOnlyAppModule) + +QuerierRoute() + +string { + return "" +} + +/ RegisterServices registers all services. +func (gam GenesisOnlyAppModule) + +RegisterServices(Configurator) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. 
+func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ BeginBlock returns an empty module begin-block +func (gam GenesisOnlyAppModule) + +BeginBlock(ctx sdk.Context) + +error { + return nil +} + +/ EndBlock returns an empty module end-block +func (GenesisOnlyAppModule) + +EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) { + return []abci.ValidatorUpdate{ +}, nil +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. +func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. 
+func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. + sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = 
moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasEndBlock := module.(HasABCIEndblock) + +return !hasEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return &abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) 
(map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all 
modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the +/ SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. This +/ function MUST be called insde an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) 
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of module with their latest ConsensusVersion +/ - make a diff of `fromVM` and `udpatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `udpatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. +/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. 
+/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. + if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. 
+ if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + err := module.BeginBlock(ctx) + if err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: 
updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. +func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. 
+func (m *Manager) ModuleNames() []string {
+	return maps.Keys(m.Modules)
+}
+
+/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+/ except x/auth which will run last, see:
+/ https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+	const authName = "auth"
+	out := make([]string, 0, len(modules))
+	hasAuth := false
+	for _, m := range modules {
+		if m == authName {
+			hasAuth = true
+		} else {
+			out = append(out, m)
+		}
+	}
+	sort.Strings(out)
+	if hasAuth {
+		out = append(out, authName)
+	}
+	return out
+}
+```
+
+It implements the following methods:
+
+* `NewBasicManager(modules ...AppModuleBasic)`: Constructor function. It takes a list of the application's `AppModuleBasic` and builds a new `BasicManager`. This function is generally called in the `init()` function of [`app.go`](/docs/sdk/v0.50/learn/beginner/app-anatomy#core-application-file) to quickly initialize the independent elements of the application's modules (see [this example](https://github.com/cosmos/gaia/blob/main/app/app.go#L59-L74)).
+* `NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic)`: Constructor function. It creates a new `BasicManager` from a `Manager`. The `BasicManager` will contain all `AppModuleBasic` from the `AppModule` manager, using `CoreAppModuleBasicAdaptor` whenever possible. A module's `AppModuleBasic` can be overridden by passing a custom `AppModuleBasic` map.
+* `RegisterLegacyAminoCodec(cdc *codec.LegacyAmino)`: Registers the [`codec.LegacyAmino`s](/docs/sdk/v0.50/learn/advanced/encoding#amino) of each of the application's `AppModuleBasic`. This function is usually called early on in the [application's construction](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor).
+* `RegisterInterfaces(registry codectypes.InterfaceRegistry)`: Registers interface types and implementations of each of the application's `AppModuleBasic`.
+* `DefaultGenesis(cdc codec.JSONCodec)`: Provides default genesis information for modules in the application by calling the [`DefaultGenesis(cdc codec.JSONCodec)`](/docs/sdk/v0.50/documentation/module-system/genesis#defaultgenesis) function of each module. It only calls modules that implement the `HasGenesisBasics` interface.
+* `ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage)`: Validates the genesis information of modules by calling the [`ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`](/docs/sdk/v0.50/documentation/module-system/genesis#validategenesis) function of modules implementing the `HasGenesisBasics` interface.
+* `RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux)`: Registers gRPC routes for modules.
+* `AddTxCommands(rootTxCmd *cobra.Command)`: Adds modules' transaction commands (defined as `GetTxCmd() *cobra.Command`) to the application's [`rootTxCommand`](/docs/sdk/v0.50/learn/advanced/cli#transaction-commands). This function is usually called from the `main.go` of the [application's command-line interface](/docs/sdk/v0.50/learn/advanced/cli).
+* `AddQueryCommands(rootQueryCmd *cobra.Command)`: Adds modules' query commands (defined as `GetQueryCmd() *cobra.Command`) to the application's [`rootQueryCommand`](/docs/sdk/v0.50/learn/advanced/cli#query-commands). This function is usually called from the `main.go` of the [application's command-line interface](/docs/sdk/v0.50/learn/advanced/cli).
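+
+In essence, `NewBasicManager` keys each module by its `Name()`, and every aggregate method simply ranges over the resulting map, delegating to each module. That pattern can be sketched in a few lines of self-contained Go; here the interface is trimmed down to `Name()` only, and `bankBasic`/`stakingBasic` are hypothetical stand-in modules, not the real SDK types:
+
+```go
+package main
+
+import (
+	"fmt"
+	"sort"
+)
+
+// AppModuleBasic is trimmed to the single method needed for this sketch.
+type AppModuleBasic interface {
+	Name() string
+}
+
+// BasicManager mirrors the SDK's `map[string]AppModuleBasic` collection type.
+type BasicManager map[string]AppModuleBasic
+
+// NewBasicManager keys each module by its Name(), as the SDK constructor does.
+func NewBasicManager(modules ...AppModuleBasic) BasicManager {
+	bm := make(BasicManager, len(modules))
+	for _, m := range modules {
+		bm[m.Name()] = m
+	}
+	return bm
+}
+
+// ModuleNames returns the registered module names in sorted order.
+func (bm BasicManager) ModuleNames() []string {
+	names := make([]string, 0, len(bm))
+	for name := range bm {
+		names = append(names, name)
+	}
+	sort.Strings(names)
+	return names
+}
+
+// bankBasic and stakingBasic are hypothetical placeholder modules.
+type bankBasic struct{}
+
+func (bankBasic) Name() string { return "bank" }
+
+type stakingBasic struct{}
+
+func (stakingBasic) Name() string { return "staking" }
+
+func main() {
+	bm := NewBasicManager(bankBasic{}, stakingBasic{})
+	fmt.Println(bm.ModuleNames()) // [bank staking]
+}
+```
+
+Aggregate methods like `RegisterLegacyAminoCodec` or `AddTxCommands` follow the same shape as `ModuleNames`: a single loop over the map that calls the corresponding method on each registered module.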
+ +### `Manager` + +The `Manager` is a structure that holds all the `AppModule` of an application, and defines the order of execution between several key components of these modules: + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module genesis functionality (AppModuleGenesis) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (AppModuleGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + + / client functionality + RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) + +GetTxCmd() *cobra.Command + GetQueryCmd() *cobra.Command +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
type HasGenesisBasics interface {
	DefaultGenesis(codec.JSONCodec) json.RawMessage
	ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) error
}

// BasicManager is a collection of AppModuleBasic
type BasicManager map[string]AppModuleBasic

// NewBasicManager creates a new BasicManager object
func NewBasicManager(modules ...AppModuleBasic) BasicManager {
	moduleMap := make(map[string]AppModuleBasic)
	for _, module := range modules {
		moduleMap[module.Name()] = module
	}

	return moduleMap
}

// NewBasicManagerFromManager creates a new BasicManager from a Manager
// The BasicManager will contain all AppModuleBasic from the AppModule Manager
// Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map
func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) BasicManager {
	moduleMap := make(map[string]AppModuleBasic)
	for name, module := range manager.Modules {
		if customBasicMod, ok := customModuleBasics[name]; ok {
			moduleMap[name] = customBasicMod
			continue
		}
		if appModule, ok := module.(appmodule.AppModule); ok {
			moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule)
			continue
		}
		if basicMod, ok := module.(AppModuleBasic); ok {
			moduleMap[name] = basicMod
		}
	}

	return moduleMap
}

// RegisterLegacyAminoCodec registers all module codecs
func (bm BasicManager) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) {
	for _, b := range bm {
		b.RegisterLegacyAminoCodec(cdc)
	}
}

// RegisterInterfaces registers all module interface types
func (bm BasicManager) RegisterInterfaces(registry types.InterfaceRegistry) {
	for _, m := range bm {
		m.RegisterInterfaces(registry)
	}
}

// DefaultGenesis provides default genesis information for all modules
func (bm BasicManager) DefaultGenesis(cdc codec.JSONCodec) map[string]json.RawMessage {
	genesisData := make(map[string]json.RawMessage)
	for _, b := range bm {
		if mod, ok := b.(HasGenesisBasics); ok {
			genesisData[b.Name()] = mod.DefaultGenesis(cdc)
		}
	}

	return genesisData
}

// ValidateGenesis performs genesis state validation for all modules
func (bm BasicManager) ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) error {
	for _, b := range bm {
		// first check if the module is an adapted Core API Module
		if mod, ok := b.(HasGenesisBasics); ok {
			if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil {
				return err
			}
		}
	}

	return nil
}

// RegisterGRPCGatewayRoutes registers all module rest routes
func (bm BasicManager) RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) {
	for _, b := range bm {
		b.RegisterGRPCGatewayRoutes(clientCtx, rtr)
	}
}

// AddTxCommands adds all tx commands to the rootTxCmd.
func (bm BasicManager) AddTxCommands(rootTxCmd *cobra.Command) {
	for _, b := range bm {
		if cmd := b.GetTxCmd(); cmd != nil {
			rootTxCmd.AddCommand(cmd)
		}
	}
}

// AddQueryCommands adds all query commands to the rootQueryCmd.
func (bm BasicManager) AddQueryCommands(rootQueryCmd *cobra.Command) {
	for _, b := range bm {
		if cmd := b.GetQueryCmd(); cmd != nil {
			rootQueryCmd.AddCommand(cmd)
		}
	}
}

// AppModuleGenesis is the standard form for an application module genesis functions
type AppModuleGenesis interface {
	AppModuleBasic
	HasGenesis
}

// HasGenesis is the extension interface for stateful genesis methods.
type HasGenesis interface {
	HasGenesisBasics
	InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
	ExportGenesis(sdk.Context, codec.JSONCodec) json.RawMessage
}

// AppModule is the form for an application module. Most of
// its functionality has been moved to extension interfaces.
type AppModule interface {
	AppModuleBasic
}

// HasInvariants is the interface for registering invariants.
type HasInvariants interface {
	// RegisterInvariants registers module invariants.
	RegisterInvariants(sdk.InvariantRegistry)
}

// HasServices is the interface for modules to register services.
type HasServices interface {
	// RegisterServices allows a module to register services.
	RegisterServices(Configurator)
}

// HasConsensusVersion is the interface for declaring a module consensus version.
type HasConsensusVersion interface {
	// ConsensusVersion is a sequence number for state-breaking change of the
	// module. It should be incremented on each consensus-breaking change
	// introduced by the module. To avoid wrong/empty versions, the initial version
	// should be set to 1.
	ConsensusVersion() uint64
}

type HasABCIEndblock interface {
	AppModule
	EndBlock(context.Context) ([]abci.ValidatorUpdate, error)
}

// GenesisOnlyAppModule is an AppModule that only has import/export functionality
type GenesisOnlyAppModule struct {
	AppModuleGenesis
}

// NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object
func NewGenesisOnlyAppModule(amg AppModuleGenesis) GenesisOnlyAppModule {
	return GenesisOnlyAppModule{
		AppModuleGenesis: amg,
	}
}

// IsOnePerModuleType implements the depinject.OnePerModuleType interface.
func (GenesisOnlyAppModule) IsOnePerModuleType() {}

// IsAppModule implements the appmodule.AppModule interface.
func (GenesisOnlyAppModule) IsAppModule() {}

// RegisterInvariants is a placeholder function register no invariants
func (GenesisOnlyAppModule) RegisterInvariants(_ sdk.InvariantRegistry) {}

// QuerierRoute returns an empty module querier route
func (GenesisOnlyAppModule) QuerierRoute() string { return "" }

// RegisterServices registers all services.
func (gam GenesisOnlyAppModule) RegisterServices(Configurator) {}

// ConsensusVersion implements AppModule/ConsensusVersion.
func (gam GenesisOnlyAppModule) ConsensusVersion() uint64 { return 1 }

// BeginBlock returns an empty module begin-block
func (gam GenesisOnlyAppModule) BeginBlock(ctx sdk.Context) error {
	return nil
}

// EndBlock returns an empty module end-block
func (GenesisOnlyAppModule) EndBlock(sdk.Context) ([]abci.ValidatorUpdate, error) {
	return []abci.ValidatorUpdate{}, nil
}

// Manager defines a module manager that provides the high level utility for managing and executing
// operations for a group of modules
type Manager struct {
	Modules                  map[string]interface{} // interface{} is used now to support the legacy AppModule as well as new core appmodule.AppModule.
	OrderInitGenesis         []string
	OrderExportGenesis       []string
	OrderBeginBlockers       []string
	OrderEndBlockers         []string
	OrderPrepareCheckStaters []string
	OrderPrecommiters        []string
	OrderMigrations          []string
}

// NewManager creates a new Manager object.
func NewManager(modules ...AppModule) *Manager {
	moduleMap := make(map[string]interface{})
	modulesStr := make([]string, 0, len(modules))
	for _, module := range modules {
		moduleMap[module.Name()] = module
		modulesStr = append(modulesStr, module.Name())
	}

	return &Manager{
		Modules:                  moduleMap,
		OrderInitGenesis:         modulesStr,
		OrderExportGenesis:       modulesStr,
		OrderBeginBlockers:       modulesStr,
		OrderPrepareCheckStaters: modulesStr,
		OrderPrecommiters:        modulesStr,
		OrderEndBlockers:         modulesStr,
	}
}

// NewManagerFromMap creates a new Manager object from a map of module names to module implementations.
// This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API.
func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager {
	simpleModuleMap := make(map[string]interface{})
	modulesStr := make([]string, 0, len(simpleModuleMap))
	for name, module := range moduleMap {
		simpleModuleMap[name] = module
		modulesStr = append(modulesStr, name)
	}

	// Sort the modules by name. Given that we are using a map above we can't guarantee the order.
	sort.Strings(modulesStr)

	return &Manager{
		Modules:                  simpleModuleMap,
		OrderInitGenesis:         modulesStr,
		OrderExportGenesis:       modulesStr,
		OrderBeginBlockers:       modulesStr,
		OrderEndBlockers:         modulesStr,
		OrderPrecommiters:        modulesStr,
		OrderPrepareCheckStaters: modulesStr,
	}
}

// SetOrderInitGenesis sets the order of init genesis calls
func (m *Manager) SetOrderInitGenesis(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) bool {
		module := m.Modules[moduleName]
		if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis {
			return !hasGenesis
		}

		_, hasGenesis := module.(HasGenesis)
		return !hasGenesis
	})

	m.OrderInitGenesis = moduleNames
}

// SetOrderExportGenesis sets the order of export genesis calls
func (m *Manager) SetOrderExportGenesis(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) bool {
		module := m.Modules[moduleName]
		if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis {
			return !hasGenesis
		}

		_, hasGenesis := module.(HasGenesis)
		return !hasGenesis
	})

	m.OrderExportGenesis = moduleNames
}

// SetOrderBeginBlockers sets the order of set begin-blocker calls
func (m *Manager) SetOrderBeginBlockers(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames,
		func(moduleName string) bool {
			module := m.Modules[moduleName]
			_, hasBeginBlock := module.(appmodule.HasBeginBlocker)
			return !hasBeginBlock
		})

	m.OrderBeginBlockers = moduleNames
}

// SetOrderEndBlockers sets the order of set end-blocker calls
func (m *Manager) SetOrderEndBlockers(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames,
		func(moduleName string) bool {
			module := m.Modules[moduleName]
			_, hasEndBlock := module.(HasABCIEndblock)
			return !hasEndBlock
		})

	m.OrderEndBlockers = moduleNames
}

// SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls
func (m *Manager) SetOrderPrepareCheckStaters(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames,
		func(moduleName string) bool {
			module := m.Modules[moduleName]
			_, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState)
			return !hasPrepareCheckState
		})

	m.OrderPrepareCheckStaters = moduleNames
}

// SetOrderPrecommiters sets the order of set precommiter calls
func (m *Manager) SetOrderPrecommiters(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames,
		func(moduleName string) bool {
			module := m.Modules[moduleName]
			_, hasPrecommit := module.(appmodule.HasPrecommit)
			return !hasPrecommit
		})

	m.OrderPrecommiters = moduleNames
}

// SetOrderMigrations sets the order of migrations to be run. If not set
// then migrations will be run with an order defined in `DefaultMigrationsOrder`.
func (m *Manager) SetOrderMigrations(moduleNames ...string) {
	m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil)
	m.OrderMigrations = moduleNames
}

// RegisterInvariants registers all module invariants
func (m *Manager) RegisterInvariants(ir sdk.InvariantRegistry) {
	for _, module := range m.Modules {
		if module, ok := module.(HasInvariants); ok {
			module.RegisterInvariants(ir)
		}
	}
}

// RegisterServices registers all module services
func (m *Manager) RegisterServices(cfg Configurator) error {
	for _, module := range m.Modules {
		if module, ok := module.(HasServices); ok {
			module.RegisterServices(cfg)
		}
		if module, ok := module.(appmodule.HasServices); ok {
			err := module.RegisterServices(cfg)
			if err != nil {
				return err
			}
		}
		if cfg.Error() != nil {
			return cfg.Error()
		}
	}

	return nil
}

// InitGenesis performs init genesis functionality for modules. Exactly one
// module must return a non-empty validator set update to correctly initialize
// the chain.
func (m *Manager) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) {
	var validatorUpdates []abci.ValidatorUpdate
	ctx.Logger().Info("initializing blockchain state from genesis.json")
	for _, moduleName := range m.OrderInitGenesis {
		if genesisData[moduleName] == nil {
			continue
		}
		mod := m.Modules[moduleName]
		// we might get an adapted module, a native core API module or a legacy module
		if module, ok := mod.(appmodule.HasGenesis); ok {
			ctx.Logger().Debug("running initialization for module", "module", moduleName)
			// core API genesis
			source, err := genesis.SourceFromRawJSON(genesisData[moduleName])
			if err != nil {
				return &abci.ResponseInitChain{}, err
			}

			err = module.InitGenesis(ctx, source)
			if err != nil {
				return &abci.ResponseInitChain{}, err
			}
		} else if module, ok := mod.(HasGenesis); ok {
			ctx.Logger().Debug("running initialization for module", "module", moduleName)
			moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName])

			// use these validator updates if provided, the module manager assumes
			// only one module will update the validator set
			if len(moduleValUpdates) > 0 {
				if len(validatorUpdates) > 0 {
					return &abci.ResponseInitChain{}, errors.New("validator InitGenesis updates already set by a previous module")
				}

				validatorUpdates = moduleValUpdates
			}
		}
	}

	// a chain must initialize with a non-empty validator set
	if len(validatorUpdates) == 0 {
		return &abci.ResponseInitChain{}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction)
	}

	return &abci.ResponseInitChain{
		Validators: validatorUpdates,
	}, nil
}

// ExportGenesis performs export genesis functionality for modules
func (m *Manager) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) {
	return m.ExportGenesisForModules(ctx, cdc, []string{})
}

// ExportGenesisForModules performs export genesis functionality for modules
func (m *Manager) ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) {
	if len(modulesToExport) == 0 {
		modulesToExport = m.OrderExportGenesis
	}
	// verify modules exists in app, so that we don't panic in the middle of an export
	if err := m.checkModulesExists(modulesToExport); err != nil {
		return nil, err
	}

	type genesisResult struct {
		bz  json.RawMessage
		err error
	}
	channels := make(map[string]chan genesisResult)
	for _, moduleName := range modulesToExport {
		mod := m.Modules[moduleName]
		if module, ok := mod.(appmodule.HasGenesis); ok {
			// core API genesis
			channels[moduleName] = make(chan genesisResult)

			go func(module appmodule.HasGenesis, ch chan genesisResult) {
				ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) // avoid race conditions
				target := genesis.RawJSONTarget{}
				err := module.ExportGenesis(ctx, target.Target())
				if err != nil {
					ch <- genesisResult{nil, err}
					return
				}

				rawJSON, err := target.JSON()
				if err != nil {
					ch <- genesisResult{nil, err}
					return
				}

				ch <- genesisResult{rawJSON, nil}
			}(module, channels[moduleName])
		} else if module, ok := mod.(HasGenesis); ok {
			channels[moduleName] = make(chan genesisResult)

			go func(module HasGenesis, ch chan genesisResult) {
				ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) // avoid race conditions
				ch <- genesisResult{module.ExportGenesis(ctx, cdc), nil}
			}(module, channels[moduleName])
		}
	}
	genesisData := make(map[string]json.RawMessage)
	for moduleName := range channels {
		res := <-channels[moduleName]
		if res.err != nil {
			return nil, res.err
		}

		genesisData[moduleName] = res.bz
	}

	return genesisData, nil
}

// checkModulesExists verifies that all modules in the list exist in the app
func (m *Manager) checkModulesExists(moduleName []string) error {
	for _, name := range moduleName {
		if _, ok := m.Modules[name]; !ok {
			return fmt.Errorf("module %s does not exist", name)
		}
	}

	return nil
}

// assertNoForgottenModules checks that we didn't forget any modules in the
// SetOrder* functions.
// `pass` is a closure which allows one to omit modules from `moduleNames`. If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion.
func (m *Manager) assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) bool) {
	ms := make(map[string]bool)
	for _, m := range moduleNames {
		ms[m] = true
	}

	var missing []string
	for m := range m.Modules {
		m := m
		if pass != nil && pass(m) {
			continue
		}
		if !ms[m] {
			missing = append(missing, m)
		}
	}
	if len(missing) != 0 {
		sort.Strings(missing)

		panic(fmt.Sprintf(
			"all modules must be defined when setting %s, missing: %v", setOrderFnName, missing))
	}
}

// MigrationHandler is the migration function that each module registers.
type MigrationHandler func(sdk.Context) error

// VersionMap is a map of moduleName -> version
type VersionMap map[string]uint64

// RunMigrations performs in-place store migrations for all modules. This
// function MUST be called inside an x/upgrade UpgradeHandler.
//
// Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
// x/upgrade's store, and the function needs to return the target VersionMap
// that will in turn be persisted to the x/upgrade's store. In general,
// returning RunMigrations should be enough:
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Internally, RunMigrations will perform the following steps:
//   - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
//   - make a diff of `fromVM` and `updatedVM`, and for each module:
//   - if the module's `fromVM` version is less than its `updatedVM` version,
//     then run in-place store migrations for that module between those versions.
//   - if the module does not exist in the `fromVM` (which means that it's a new module,
//     because it was not in the previous x/upgrade's store), then run
//     `InitGenesis` on that module.
//   - return the `updatedVM` to be persisted in the x/upgrade's store.
//
// Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) defined by
// the `DefaultMigrationsOrder` function.
//
// As an app developer, if you wish to skip running InitGenesis for your new
// module "foo", you need to manually pass a `fromVM` argument to this function with
// foo's module version set to its latest ConsensusVersion. That way, the diff
// between the function's `fromVM` and `updatedVM` will be empty, hence not
// running anything for foo.
//
// Example:
//
//	cfg := module.NewConfigurator(...)
//	app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
//	    // Assume "foo" is a new module.
//	    // `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist
//	    // before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default
//	    // run InitGenesis on foo.
//	    // To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
//	    // consensus version:
//	    fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
//
//	    return app.mm.RunMigrations(ctx, cfg, fromVM)
//	})
//
// Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
func (m Manager) RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
	c, ok := cfg.(*configurator)
	if !ok {
		return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{}, cfg)
	}
	modules := m.OrderMigrations
	if modules == nil {
		modules = DefaultMigrationsOrder(m.ModuleNames())
	}
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	updatedVM := VersionMap{}
	for _, moduleName := range modules {
		module := m.Modules[moduleName]
		fromVersion, exists := fromVM[moduleName]
		toVersion := uint64(0)
		if module, ok := module.(HasConsensusVersion); ok {
			toVersion = module.ConsensusVersion()
		}

		// We run migration if the module is specified in `fromVM`.
		// Otherwise we run InitGenesis.
		//
		// The module won't exist in the fromVM in two cases:
		// 1. A new module is added. In this case we run InitGenesis with an
		// empty genesis state.
		// 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
		// In this case, all modules have yet to be added to x/upgrade's VersionMap store.
		if exists {
			err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion)
			if err != nil {
				return nil, err
			}
		} else {
			sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
			if module, ok := m.Modules[moduleName].(HasGenesis); ok {
				moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc))
				// The module manager assumes only one module will update the
				// validator set, and it can't be a new module.
				if len(moduleValUpdates) > 0 {
					return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
				}
			}
		}

		updatedVM[moduleName] = toVersion
	}

	return updatedVM, nil
}

// BeginBlock performs begin block functionality for all modules. It creates a
// child context with an event manager to aggregate events emitted from all
// modules.
func (m *Manager) BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) {
	ctx = ctx.WithEventManager(sdk.NewEventManager())
	for _, moduleName := range m.OrderBeginBlockers {
		if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok {
			err := module.BeginBlock(ctx)
			if err != nil {
				return sdk.BeginBlock{}, err
			}
		}
	}

	return sdk.BeginBlock{
		Events: ctx.EventManager().ABCIEvents(),
	}, nil
}

// EndBlock performs end block functionality for all modules. It creates a
// child context with an event manager to aggregate events emitted from all
// modules.
func (m *Manager) EndBlock(ctx sdk.Context) (sdk.EndBlock, error) {
	ctx = ctx.WithEventManager(sdk.NewEventManager())
	validatorUpdates := []abci.ValidatorUpdate{}
	for _, moduleName := range m.OrderEndBlockers {
		if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok {
			err := module.EndBlock(ctx)
			if err != nil {
				return sdk.EndBlock{}, err
			}
		} else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok {
			moduleValUpdates, err := module.EndBlock(ctx)
			if err != nil {
				return sdk.EndBlock{}, err
			}
			// use these validator updates if provided, the module manager assumes
			// only one module will update the validator set
			if len(moduleValUpdates) > 0 {
				if len(validatorUpdates) > 0 {
					return sdk.EndBlock{}, errors.New("validator EndBlock updates already set by a previous module")
				}
				for _, updates := range moduleValUpdates {
					validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{
						PubKey: updates.PubKey,
						Power:  updates.Power,
					})
				}
			}
		} else {
			continue
		}
	}

	return sdk.EndBlock{
		ValidatorUpdates: validatorUpdates,
		Events:           ctx.EventManager().ABCIEvents(),
	}, nil
}

// Precommit performs precommit functionality for all modules.
func (m *Manager) Precommit(ctx sdk.Context) error {
	for _, moduleName := range m.OrderPrecommiters {
		module, ok := m.Modules[moduleName].(appmodule.HasPrecommit)
		if !ok {
			continue
		}
		if err := module.Precommit(ctx); err != nil {
			return err
		}
	}

	return nil
}

// PrepareCheckState performs functionality for preparing the check state for all modules.
func (m *Manager) PrepareCheckState(ctx sdk.Context) error {
	for _, moduleName := range m.OrderPrepareCheckStaters {
		module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState)
		if !ok {
			continue
		}
		if err := module.PrepareCheckState(ctx); err != nil {
			return err
		}
	}

	return nil
}

// GetVersionMap gets consensus version from all modules
func (m *Manager) GetVersionMap() VersionMap {
	vermap := make(VersionMap)
	for name, v := range m.Modules {
		version := uint64(0)
		if v, ok := v.(HasConsensusVersion); ok {
			version = v.ConsensusVersion()
		}
		name := name
		vermap[name] = version
	}

	return vermap
}

// ModuleNames returns list of all module names, without any particular order.
func (m *Manager) ModuleNames() []string {
	return maps.Keys(m.Modules)
}

// DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
// except x/auth which will run last, see:
// https://github.com/cosmos/cosmos-sdk/issues/10591
func DefaultMigrationsOrder(modules []string) []string {
	const authName = "auth"
	out := make([]string, 0, len(modules))
	hasAuth := false
	for _, m := range modules {
		if m == authName {
			hasAuth = true
		} else {
			out = append(out, m)
		}
	}

	sort.Strings(out)
	if hasAuth {
		out = append(out, authName)
	}

	return out
}
```

The module manager is used throughout the application whenever an action on a collection of modules is required. It implements the following methods:

* `NewManager(modules ...AppModule)`: Constructor function. It takes a list of the application's `AppModule`s and builds a new `Manager`. It is generally called from the application's main [constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function).
* `SetOrderInitGenesis(moduleNames ...string)`: Sets the order in which the [`InitGenesis`](/docs/sdk/v0.50/documentation/module-system/genesis#initgenesis) function of each module will be called when the application is first started. This function is generally called from the application's main [constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function).
  To initialize modules successfully, module dependencies should be considered. For example, the `genutil` module must come after the `staking` module so that the pools are properly initialized with tokens from genesis accounts; `genutil` must also come after `auth` so that it can access the params from `auth`; and IBC's `capability` module should be initialized before all other modules so that it can initialize any capabilities.
+* `SetOrderExportGenesis(moduleNames ...string)`: Sets the order in which the [`ExportGenesis`](/docs/sdk/v0.50/documentation/module-system/genesis#exportgenesis) function of each module will be called in case of an export. This function is generally called from the application's main [constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). +* `SetOrderPreBlockers(moduleNames ...string)`: Sets the order in which the `PreBlock()` function of each module will be called before `BeginBlock()` of all modules. This function is generally called from the application's main [constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). +* `SetOrderBeginBlockers(moduleNames ...string)`: Sets the order in which the `BeginBlock()` function of each module will be called at the beginning of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). +* `SetOrderEndBlockers(moduleNames ...string)`: Sets the order in which the `EndBlock()` function of each module will be called at the end of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). +* `SetOrderPrecommiters(moduleNames ...string)`: Sets the order in which the `Precommit()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). +* `SetOrderPrepareCheckStaters(moduleNames ...string)`: Sets the order in which the `PrepareCheckState()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). 
* `SetOrderMigrations(moduleNames ...string)`: Sets the order of migrations to be run. If not set, migrations will be run in the order defined by `DefaultMigrationsOrder`.
* `RegisterInvariants(ir sdk.InvariantRegistry)`: Registers the [invariants](/docs/sdk/v0.50/documentation/module-system/invariants) of modules implementing the `HasInvariants` interface.
* `RegisterServices(cfg Configurator)`: Registers the services of modules implementing the `HasServices` interface.
* `InitGenesis(ctx context.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage)`: Calls the [`InitGenesis`](/docs/sdk/v0.50/documentation/module-system/genesis#initgenesis) function of each module when the application is first started, in the order defined in `OrderInitGenesis`. Returns an `abci.ResponseInitChain` to the underlying consensus engine, which can contain validator updates.
* `ExportGenesis(ctx context.Context, cdc codec.JSONCodec)`: Calls the [`ExportGenesis`](/docs/sdk/v0.50/documentation/module-system/genesis#exportgenesis) function of each module, in the order defined in `OrderExportGenesis`. The export constructs a genesis file from a previously existing state, and is mainly used when a hard-fork upgrade of the chain is required.
* `ExportGenesisForModules(ctx context.Context, cdc codec.JSONCodec, modulesToExport []string)`: Behaves the same as `ExportGenesis`, except it takes a list of modules to export.
* `BeginBlock(ctx context.Context) error`: At the beginning of each block, this function is called from [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp#beginblock) and, in turn, calls the [`BeginBlock`](/docs/sdk/v0.50/documentation/module-system/beginblock-endblock) function of each module implementing the `appmodule.HasBeginBlocker` interface, in the order defined in `OrderBeginBlockers`. It creates a child [context](/docs/sdk/v0.50/learn/advanced/context) with an event manager to aggregate [events](/docs/sdk/v0.50/learn/advanced/events) emitted from each module.
* `EndBlock(ctx context.Context) error`: At the end of each block, this function is called from [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp#endblock) and, in turn, calls the [`EndBlock`](/docs/sdk/v0.50/documentation/module-system/beginblock-endblock) function of each module implementing the `appmodule.HasEndBlocker` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](/docs/sdk/v0.50/learn/advanced/context) with an event manager to aggregate [events](/docs/sdk/v0.50/learn/advanced/events) emitted from all modules. The function returns an `sdk.EndBlock` containing the aforementioned events, as well as validator set updates (if any).
* `EndBlock(context.Context) ([]abci.ValidatorUpdate, error)`: At the end of each block, this function is called from [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp#endblock) and, in turn, calls the [`EndBlock`](/docs/sdk/v0.50/documentation/module-system/beginblock-endblock) function of each module implementing the `module.HasABCIEndBlock` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](/docs/sdk/v0.50/learn/advanced/context) with an event manager to aggregate [events](/docs/sdk/v0.50/learn/advanced/events) emitted from all modules. The function returns an `sdk.EndBlock` containing the aforementioned events, as well as validator set updates (if any).
* `Precommit(ctx context.Context)`: During [`Commit`](/docs/sdk/v0.50/learn/advanced/baseapp#commit), this function is called from `BaseApp` immediately before the [`deliverState`](/docs/sdk/v0.50/learn/advanced/baseapp#state-updates) is written to the underlying [`rootMultiStore`](/docs/sdk/v0.50/learn/advanced/store#commitmultistore) and, in turn, calls the `Precommit` function of each module implementing the `HasPrecommit` interface, in the order defined in `OrderPrecommiters`. It creates a child [context](/docs/sdk/v0.50/learn/advanced/context) where the underlying `CacheMultiStore` is that of the newly committed block's [`finalizeblockstate`](/docs/sdk/v0.50/learn/advanced/baseapp#state-updates).
* `PrepareCheckState(ctx context.Context)`: During [`Commit`](/docs/sdk/v0.50/learn/advanced/baseapp#commit), this function is called from `BaseApp` immediately after the [`deliverState`](/docs/sdk/v0.50/learn/advanced/baseapp#state-updates) is written to the underlying [`rootMultiStore`](/docs/sdk/v0.50/learn/advanced/store#commitmultistore) and, in turn, calls the `PrepareCheckState` function of each module implementing the `HasPrepareCheckState` interface, in the order defined in `OrderPrepareCheckStaters`. It creates a child [context](/docs/sdk/v0.50/learn/advanced/context) where the underlying `CacheMultiStore` is that of the next block's [`checkState`](/docs/sdk/v0.50/learn/advanced/baseapp#state-updates). Writes to this state will be present in the [`checkState`](/docs/sdk/v0.50/learn/advanced/baseapp#state-updates) of the next block, and therefore this method can be used to prepare the `checkState` for the next block.
Here's an example of a concrete integration within `simapp`:

```go expandable
//go:build app_v1

package simapp

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"cosmossdk.io/log"
	"cosmossdk.io/x/tx/signing"

	autocliv1 "cosmossdk.io/api/cosmos/autocli/v1"
	reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1"
	"cosmossdk.io/client/v2/autocli"
	"cosmossdk.io/core/appmodule"
	"github.com/cosmos/cosmos-sdk/codec/address"

	authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec"
	"github.com/cosmos/cosmos-sdk/x/auth/tx"

	abci "github.com/cometbft/cometbft/abci/types"
	dbm "github.com/cosmos/cosmos-db"
	"github.com/cosmos/gogoproto/proto"
	"github.com/spf13/cast"

	storetypes "cosmossdk.io/store/types"
	"cosmossdk.io/x/evidence"
	evidencekeeper "cosmossdk.io/x/evidence/keeper"
	evidencetypes "cosmossdk.io/x/evidence/types"
	"cosmossdk.io/x/feegrant"
	feegrantkeeper "cosmossdk.io/x/feegrant/keeper"
	feegrantmodule "cosmossdk.io/x/feegrant/module"
	"cosmossdk.io/x/nft"
	nftkeeper "cosmossdk.io/x/nft/keeper"
	nftmodule "cosmossdk.io/x/nft/module"
	"cosmossdk.io/x/upgrade"
	upgradekeeper "cosmossdk.io/x/upgrade/keeper"
	upgradetypes "cosmossdk.io/x/upgrade/types"
	"cosmossdk.io/x/circuit"
	circuitkeeper "cosmossdk.io/x/circuit/keeper"
	circuittypes "cosmossdk.io/x/circuit/types"

	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/client/flags"
	"github.com/cosmos/cosmos-sdk/client/grpc/cmtservice"
	nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
	"github.com/cosmos/cosmos-sdk/runtime"
	runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services"
	"github.com/cosmos/cosmos-sdk/server"
	"github.com/cosmos/cosmos-sdk/server/api"
	"github.com/cosmos/cosmos-sdk/server/config"
	servertypes "github.com/cosmos/cosmos-sdk/server/types"
	"github.com/cosmos/cosmos-sdk/std"
	testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/module"
	"github.com/cosmos/cosmos-sdk/types/msgservice"
	"github.com/cosmos/cosmos-sdk/version"
	"github.com/cosmos/cosmos-sdk/x/auth"
	"github.com/cosmos/cosmos-sdk/x/auth/ante"
	authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
	"github.com/cosmos/cosmos-sdk/x/auth/posthandler"
	authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation"
	authtx "github.com/cosmos/cosmos-sdk/x/auth/tx"
	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
	"github.com/cosmos/cosmos-sdk/x/auth/vesting"
	vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types"
	"github.com/cosmos/cosmos-sdk/x/authz"
	authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper"
	authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module"
	"github.com/cosmos/cosmos-sdk/x/bank"
	bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper"
	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
	consensus "github.com/cosmos/cosmos-sdk/x/consensus"
	consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper"
	consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types"
	"github.com/cosmos/cosmos-sdk/x/crisis"
	crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper"
	crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types"
	distr "github.com/cosmos/cosmos-sdk/x/distribution"
	distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper"
	distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types"
	"github.com/cosmos/cosmos-sdk/x/genutil"
	genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types"
	"github.com/cosmos/cosmos-sdk/x/gov"
	govclient "github.com/cosmos/cosmos-sdk/x/gov/client"
	govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper"
	govtypes "github.com/cosmos/cosmos-sdk/x/gov/types"
	govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1"
	"github.com/cosmos/cosmos-sdk/x/group"
	groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper"
	groupmodule "github.com/cosmos/cosmos-sdk/x/group/module"
	"github.com/cosmos/cosmos-sdk/x/mint"
	mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper"
	minttypes "github.com/cosmos/cosmos-sdk/x/mint/types"
	"github.com/cosmos/cosmos-sdk/x/params"
	paramsclient "github.com/cosmos/cosmos-sdk/x/params/client"
	paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper"
	paramstypes "github.com/cosmos/cosmos-sdk/x/params/types"
	paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal"
	"github.com/cosmos/cosmos-sdk/x/slashing"
	slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper"
	slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types"
	"github.com/cosmos/cosmos-sdk/x/staking"
	stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

const appName = "SimApp"

var (
	// DefaultNodeHome default home directories for the application daemon
	DefaultNodeHome string

	// module account permissions
	maccPerms = map[string][]string{
		authtypes.FeeCollectorName:     nil,
		distrtypes.ModuleName:          nil,
		minttypes.ModuleName:           {authtypes.Minter},
		stakingtypes.BondedPoolName:    {authtypes.Burner, authtypes.Staking},
		stakingtypes.NotBondedPoolName: {authtypes.Burner, authtypes.Staking},
		govtypes.ModuleName:            {authtypes.Burner},
		nft.ModuleName:                 nil,
	}
)

var (
	_ runtime.AppI            = (*SimApp)(nil)
	_ servertypes.Application = (*SimApp)(nil)
)

// stdAccAddressCodec is a temporary address codec that we will use until we
// can populate it with the correct bech32 prefixes without depending on the global.
+type stdAccAddressCodec struct{ +} + +func (g stdAccAddressCodec) + +StringToBytes(text string) ([]byte, error) { + if text == "" { + return nil, nil +} + +return sdk.AccAddressFromBech32(text) +} + +func (g stdAccAddressCodec) + +BytesToString(bz []byte) (string, error) { + if bz == nil { + return "", nil +} + +return sdk.AccAddress(bz).String(), nil +} + +/ stdValAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. +type stdValAddressCodec struct{ +} + +func (g stdValAddressCodec) + +StringToBytes(text string) ([]byte, error) { + return sdk.ValAddressFromBech32(text) +} + +func (g stdValAddressCodec) + +BytesToString(bz []byte) (string, error) { + return sdk.ValAddress(bz).String(), nil +} + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + + / At startup, after all modules have been registered, check that all prot + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application 
updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} +``` + +This is the same example from `runtime` (the package that powers app di): + +```go expandable +package runtime + +import ( + + "fmt" + "os" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoregistry" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/event" + "cosmossdk.io/core/genesis" + "cosmossdk.io/core/header" + "cosmossdk.io/core/store" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/std" + sdk "github.com/cosmos/cosmos-sdk/types" + 
"github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type appModule struct { + app *App +} + +func (m appModule) + +RegisterServices(configurator module.Configurator) { + err := m.app.registerRuntimeServices(configurator) + if err != nil { + panic(err) +} +} + +func (m appModule) + +IsOnePerModuleType() { +} + +func (m appModule) + +IsAppModule() { +} + +var ( + _ appmodule.AppModule = appModule{ +} + _ module.HasServices = appModule{ +} +) + +/ BaseAppOption is a depinject.AutoGroupType which can be used to pass +/ BaseApp options into the depinject. It should be used carefully. +type BaseAppOption func(*baseapp.BaseApp) + +/ IsManyPerContainerType indicates that this is a depinject.ManyPerContainerType. +func (b BaseAppOption) + +IsManyPerContainerType() { +} + +func init() { + appmodule.Register(&runtimev1alpha1.Module{ +}, + appmodule.Provide( + ProvideApp, + ProvideInterfaceRegistry, + ProvideKVStoreKey, + ProvideTransientStoreKey, + ProvideMemoryStoreKey, + ProvideGenesisTxHandler, + ProvideKVStoreService, + ProvideMemoryStoreService, + ProvideTransientStoreService, + ProvideEventService, + ProvideHeaderInfoService, + ProvideCometInfoService, + ProvideBasicManager, + ), + appmodule.Invoke(SetupAppBuilder), + ) +} + +func ProvideApp(interfaceRegistry codectypes.InterfaceRegistry) ( + codec.Codec, + *codec.LegacyAmino, + *AppBuilder, + codec.ProtoCodecMarshaler, + *baseapp.MsgServiceRouter, + appmodule.AppModule, + protodesc.Resolver, + protoregistry.MessageTypeResolver, + error, +) { + protoFiles := proto.HybridResolver + protoTypes := protoregistry.GlobalTypes + + / At startup, check that all proto annotations are correct. + if err := msgservice.ValidateProtoAnnotations(protoFiles); err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. 
+ _, _ = fmt.Fprintln(os.Stderr, err.Error()) +} + amino := codec.NewLegacyAmino() + +std.RegisterInterfaces(interfaceRegistry) + +std.RegisterLegacyAminoCodec(amino) + cdc := codec.NewProtoCodec(interfaceRegistry) + msgServiceRouter := baseapp.NewMsgServiceRouter() + app := &App{ + storeKeys: nil, + interfaceRegistry: interfaceRegistry, + cdc: cdc, + amino: amino, + basicManager: module.BasicManager{ +}, + msgServiceRouter: msgServiceRouter, +} + appBuilder := &AppBuilder{ + app +} + +return cdc, amino, appBuilder, cdc, msgServiceRouter, appModule{ + app +}, protoFiles, protoTypes, nil +} + +type AppInputs struct { + depinject.In + + AppConfig *appv1alpha1.Config + Config *runtimev1alpha1.Module + AppBuilder *AppBuilder + Modules map[string]appmodule.AppModule + CustomModuleBasics map[string]module.AppModuleBasic `optional:"true"` + BaseAppOptions []BaseAppOption + InterfaceRegistry codectypes.InterfaceRegistry + LegacyAmino *codec.LegacyAmino + Logger log.Logger +} + +func SetupAppBuilder(inputs AppInputs) { + app := inputs.AppBuilder.app + app.baseAppOptions = inputs.BaseAppOptions + app.config = inputs.Config + app.appConfig = inputs.AppConfig + app.logger = inputs.Logger + app.ModuleManager = module.NewManagerFromMap(inputs.Modules) + for name, mod := range inputs.Modules { + if customBasicMod, ok := inputs.CustomModuleBasics[name]; ok { + app.basicManager[name] = customBasicMod + customBasicMod.RegisterInterfaces(inputs.InterfaceRegistry) + +customBasicMod.RegisterLegacyAminoCodec(inputs.LegacyAmino) + +continue +} + coreAppModuleBasic := module.CoreAppModuleBasicAdaptor(name, mod) + +app.basicManager[name] = coreAppModuleBasic + coreAppModuleBasic.RegisterInterfaces(inputs.InterfaceRegistry) + +coreAppModuleBasic.RegisterLegacyAminoCodec(inputs.LegacyAmino) +} +} + +func ProvideInterfaceRegistry(customGetSigners []signing.CustomGetSigner) (codectypes.InterfaceRegistry, error) { + signingOptions := signing.Options{ + / using the global prefixes is a temporary 
solution until we refactor this + / to get the address.Codec's from the container + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +} + for _, signer := range customGetSigners { + signingOptions.DefineCustomGetSigners(signer.MsgType, signer.Fn) +} + +interfaceRegistry, err := codectypes.NewInterfaceRegistryWithOptions(codectypes.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signingOptions, +}) + if err != nil { + return nil, err +} + +err = interfaceRegistry.SigningContext().Validate() + if err != nil { + return nil, err +} + +return interfaceRegistry, nil +} + +func registerStoreKey(wrapper *AppBuilder, key storetypes.StoreKey) { + wrapper.app.storeKeys = append(wrapper.app.storeKeys, key) +} + +func storeKeyOverride(config *runtimev1alpha1.Module, moduleName string) *runtimev1alpha1.StoreKeyConfig { + for _, cfg := range config.OverrideStoreKeys { + if cfg.ModuleName == moduleName { + return cfg +} + +} + +return nil +} + +func ProvideKVStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.KVStoreKey { + override := storeKeyOverride(config, key.Name()) + +var storeKeyName string + if override != nil { + storeKeyName = override.KvStoreKey +} + +else { + storeKeyName = key.Name() +} + storeKey := storetypes.NewKVStoreKey(storeKeyName) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideTransientStoreKey(key depinject.ModuleKey, app *AppBuilder) *storetypes.TransientStoreKey { + storeKey := storetypes.NewTransientStoreKey(fmt.Sprintf("transient:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideMemoryStoreKey(key depinject.ModuleKey, app *AppBuilder) *storetypes.MemoryStoreKey { + storeKey := storetypes.NewMemoryStoreKey(fmt.Sprintf("memory:%s", key.Name())) + 
+registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideGenesisTxHandler(appBuilder *AppBuilder) + +genesis.TxHandler { + return appBuilder.app +} + +func ProvideKVStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.KVStoreService { + storeKey := ProvideKVStoreKey(config, key, app) + +return kvStoreService{ + key: storeKey +} +} + +func ProvideMemoryStoreService(key depinject.ModuleKey, app *AppBuilder) + +store.MemoryStoreService { + storeKey := ProvideMemoryStoreKey(key, app) + +return memStoreService{ + key: storeKey +} +} + +func ProvideTransientStoreService(key depinject.ModuleKey, app *AppBuilder) + +store.TransientStoreService { + storeKey := ProvideTransientStoreKey(key, app) + +return transientStoreService{ + key: storeKey +} +} + +func ProvideEventService() + +event.Service { + return EventService{ +} +} + +func ProvideCometInfoService() + +comet.BlockInfoService { + return cometInfoService{ +} +} + +func ProvideHeaderInfoService(app *AppBuilder) + +header.Service { + return headerInfoService{ +} +} + +func ProvideBasicManager(app *AppBuilder) + +module.BasicManager { + return app.app.basicManager +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/README.mdx new file mode 100644 index 00000000..7428c32b --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/README.mdx @@ -0,0 +1,64 @@ +--- +title: List of Modules +description: >- + Here are some production-grade modules that can be used in Cosmos SDK + applications, along with their respective documentation: +--- + +Here are some production-grade modules that can be used in Cosmos SDK applications, along with their respective documentation: + +## Essential Modules + +Essential modules include functionality that *must* be included in your Cosmos SDK blockchain.
+These modules provide the core behaviors that are needed by users and operators, such as balance tracking, +proof-of-stake capabilities, and governance. + +* [Auth](/docs/sdk/v0.50/documentation/module-system/modules/auth/README) - Authentication of accounts and transactions for Cosmos SDK applications. +* [Bank](/docs/sdk/v0.50/documentation/module-system/modules/bank/README) - Token transfer functionalities. +* [Circuit](/docs/sdk/v0.50/documentation/module-system/modules/circuit/README) - Circuit breaker module for pausing messages. +* [Consensus](/docs/sdk/v0.50/documentation/module-system/modules/consensus/README) - Consensus module for modifying CometBFT's ABCI consensus params. +* [Distribution](/docs/sdk/v0.50/documentation/module-system/modules/distribution/README) - Fee distribution and staking token provision distribution. +* [Evidence](/docs/sdk/v0.50/documentation/module-system/modules/evidence/README) - Evidence handling for double signing, misbehaviour, etc. +* [Governance](/docs/sdk/v0.50/documentation/module-system/modules/gov/README) - On-chain proposals and voting. +* [Genutil](/docs/sdk/v0.50/documentation/module-system/modules/genutil/README) - Genesis utilities for the Cosmos SDK. +* [Mint](/docs/sdk/v0.50/documentation/module-system/modules/mint/README) - Creation of new units of staking token. +* [Slashing](/docs/sdk/v0.50/documentation/module-system/modules/slashing/README) - Validator punishment mechanisms. +* [Staking](/docs/sdk/v0.50/documentation/module-system/modules/staking/README) - Proof-of-Stake layer for public blockchains. +* [Upgrade](/docs/sdk/v0.50/documentation/module-system/modules/upgrade/README) - Software upgrade handling and coordination. + +## Supplementary Modules + +Supplementary modules are maintained in the Cosmos SDK but are not necessary for +the core functionality of your blockchain. They can be thought of as ways to extend the +capabilities of your blockchain or further specialize it.
+ +* [Authz](/docs/sdk/v0.50/documentation/module-system/modules/authz/README) - Authorization for accounts to perform actions on behalf of other accounts. +* [Epochs](/docs/sdk/v0.50/documentation/module-system/modules/epochs/README) - Allows SDK modules to register logic that is executed on timed tickers. +* [Feegrant](/docs/sdk/v0.50/documentation/module-system/modules/feegrant/README) - Grant fee allowances for executing transactions. +* [Group](/docs/sdk/v0.50/documentation/module-system/modules/group/README) - Allows for the creation and management of on-chain multisig accounts. +* [NFT](/docs/sdk/v0.50/documentation/module-system/modules/nft/README) - NFT module implemented based on [ADR43](https://docs.cosmos.network/main/architecture/adr-043-nft-module.html). +* [ProtocolPool](/docs/sdk/v0.50/documentation/module-system/modules/protocolpool/README) - Extended management of community pool functionality. + +## Deprecated Modules + +The following modules are deprecated. They will no longer be maintained and will eventually be removed +in an upcoming release of the Cosmos SDK per our [release process](https://github.com/cosmos/cosmos-sdk/blob/main/RELEASE_PROCESS.md). + +* [Crisis](/docs/sdk/v0.50/documentation/module-system/modules/crisis/README) - *Deprecated* - Halts the blockchain under certain circumstances (e.g. if an invariant is broken). +* [Params](/docs/sdk/v0.50/documentation/module-system/modules/params/README) - *Deprecated* - Globally available parameter store. + +To learn more about the process of building modules, visit the [building modules reference documentation](https://docs.cosmos.network/main/building-modules/intro). + +## IBC + +The IBC module for the SDK is maintained by the IBC Go team in its [own repository](https://github.com/cosmos/ibc-go).
+ +Additionally, the [capability module](https://github.com/cosmos/ibc-go/tree/fdd664698d79864f1e00e147f9879e58497b5ef1/modules/capability) has, since v0.50, been maintained by the IBC Go team in its [own repository](https://github.com/cosmos/ibc-go/tree/fdd664698d79864f1e00e147f9879e58497b5ef1/modules/capability). + +## CosmWasm + +The CosmWasm module enables smart contracts. Learn more at its [documentation site](https://book.cosmwasm.com/), or visit [the repository](https://github.com/CosmWasm/cosmwasm). + +## EVM + +Read more about writing smart contracts in Solidity at the official [`evm` documentation page](https://evm.cosmos.network/). diff --git a/docs/sdk/v0.50/documentation/module-system/modules/auth/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/auth/README.mdx new file mode 100644 index 00000000..486f6d54 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/auth/README.mdx @@ -0,0 +1,738 @@ +--- +title: '`x/auth`' +description: This document specifies the auth module of the Cosmos SDK. +--- + +## Abstract + +This document specifies the auth module of the Cosmos SDK. + +The auth module is responsible for specifying the base transaction and account types +for an application, since the SDK itself is agnostic to these particulars. It contains +the middlewares, where all basic transaction validity checks (signatures, nonces, auxiliary fields) +are performed, and exposes the account keeper, which allows other modules to read, write, and modify accounts. + +This module is used in the Cosmos Hub.
+ +## Contents + +* [Concepts](#concepts) + * [Gas & Fees](#gas--fees) +* [State](#state) + * [Accounts](#accounts) +* [AnteHandlers](#antehandlers) +* [Keepers](#keepers) + * [Account Keeper](#account-keeper) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +**Note:** The auth module is different from the [authz module](/docs/sdk/v0.50/documentation/module-system/modules/authz/README). + +The differences are: + +* `auth` - authentication of accounts and transactions for Cosmos SDK applications; it is responsible for specifying the base transaction and account types. +* `authz` - authorization for accounts to perform actions on behalf of other accounts; it enables a granter to grant authorizations to a grantee that allow the grantee to execute messages on behalf of the granter. + +### Gas & Fees + +Fees serve two purposes for an operator of the network. + +Fees limit the growth of the state stored by every full node and allow for +general-purpose censorship of transactions of little economic value. Fees +are best suited as an anti-spam mechanism where validators are disinterested in +the use of the network and identities of users. + +Fees are determined by the gas limits and gas prices transactions provide, where +`fees = ceil(gasLimit * gasPrices)`. Txs incur gas costs for all state reads/writes, +signature verification, as well as costs proportional to the tx size. Operators +should set minimum gas prices when starting their nodes. They must set the unit +costs of gas in each token denomination they wish to support: + +`simd start ... --minimum-gas-prices=0.00001stake;0.05photinos` + +When adding transactions to the mempool or gossiping them, validators check +if the transaction's gas prices, which are determined by the provided fees, meet
any of the validator's minimum gas prices. In other words, a transaction must +provide a fee of at least one denomination that matches a validator's minimum +gas price. + +CometBFT does not currently provide fee-based mempool prioritization, and fee-based +mempool filtering is local to each node and not part of consensus. But with +minimum gas prices set, such a mechanism could be implemented by node operators. + +Because the market value for tokens will fluctuate, validators are expected to +dynamically adjust their minimum gas prices to a level that would encourage the +use of the network. + +## State + +### Accounts + +Accounts contain authentication information for a uniquely identified external user of an SDK blockchain, +including public key, address, and account number / sequence number for replay protection. For efficiency, +since account balances must also be fetched to pay fees, account structs also store the balance of a user +as `sdk.Coins`. + +Accounts are exposed externally as an interface, and stored internally as +either a base account or vesting account. Module clients wishing to add more +account types may do so. + +* `0x01 | Address -> ProtocolBuffer(account)` + +#### Account Interface + +The account interface exposes methods to read and write standard account information. +Note that all of these methods operate on an account struct conforming to the +interface - in order to write the account to the store, the account keeper will +need to be used. + +```go expandable +/ AccountI is an interface used to store coins at a given address within state. +/ It presumes a notion of sequence numbers for replay protection, +/ a notion of account numbers for replay protection for previously pruned accounts, +/ and a pubkey for authentication purposes. +/ +/ Many complex conditions can be used in the concrete struct which implements AccountI. +type AccountI interface { + proto.Message + + GetAddress() + +sdk.AccAddress + SetAddress(sdk.AccAddress) + +error / errors if already set.
+ + GetPubKey() + +crypto.PubKey / can return nil. + SetPubKey(crypto.PubKey) + +error + + GetAccountNumber() + +uint64 + SetAccountNumber(uint64) + +error + + GetSequence() + +uint64 + SetSequence(uint64) + +error + + / Ensure that account implements stringer + String() + +string +} +``` + +##### Base Account + +A base account is the simplest and most common account type, which just stores all requisite +fields directly in a struct. + +```protobuf +/ BaseAccount defines a base account type. It contains all the necessary fields +/ for basic account functionality. Any custom account type should extend this +/ type for additional functionality (e.g. vesting). +message BaseAccount { + string address = 1; + google.protobuf.Any pub_key = 2; + uint64 account_number = 3; + uint64 sequence = 4; +} +``` + +### Vesting Account + +See [Vesting](https://docs.cosmos.network/main/modules/auth/vesting/). + +## AnteHandlers + +The `x/auth` module presently has no transaction handlers of its own, but does expose the special `AnteHandler`, used for performing basic validity checks on a transaction, such that it could be thrown out of the mempool. +The `AnteHandler` can be seen as a set of decorators that check transactions within the current context, per [ADR 010](docs/sdk/next/documentation/legacy/adr-comprehensive). + +Note that the `AnteHandler` is called on both `CheckTx` and `DeliverTx`, as CometBFT proposers presently have the ability to include in their proposed block transactions which fail `CheckTx`. + +### Decorators + +The auth module provides `AnteDecorator`s that are recursively chained together into a single `AnteHandler` in the following order: + +* `SetUpContextDecorator`: Sets the `GasMeter` in the `Context` and wraps the next `AnteHandler` with a defer clause to recover from any downstream `OutOfGas` panics in the `AnteHandler` chain to return an error with information on gas provided and gas used. 
+ +* `RejectExtensionOptionsDecorator`: Rejects all extension options which can optionally be included in protobuf transactions. + +* `MempoolFeeDecorator`: Checks if the `tx` fee is above local mempool `minFee` parameter during `CheckTx`. + +* `ValidateBasicDecorator`: Calls `tx.ValidateBasic` and returns any non-nil error. + +* `TxTimeoutHeightDecorator`: Check for a `tx` height timeout. + +* `ValidateMemoDecorator`: Validates `tx` memo with application parameters and returns any non-nil error. + +* `ConsumeGasTxSizeDecorator`: Consumes gas proportional to the `tx` size based on application parameters. + +* `DeductFeeDecorator`: Deducts the `FeeAmount` from first signer of the `tx`. If the `x/feegrant` module is enabled and a fee granter is set, it deducts fees from the fee granter account. + +* `SetPubKeyDecorator`: Sets the pubkey from a `tx`'s signers that does not already have its corresponding pubkey saved in the state machine and in the current context. + +* `ValidateSigCountDecorator`: Validates the number of signatures in `tx` based on app-parameters. + +* `SigGasConsumeDecorator`: Consumes parameter-defined amount of gas for each signature. This requires pubkeys to be set in context for all signers as part of `SetPubKeyDecorator`. + +* `SigVerificationDecorator`: Verifies all signatures are valid. This requires pubkeys to be set in context for all signers as part of `SetPubKeyDecorator`. + +* `IncrementSequenceDecorator`: Increments the account sequence for each signer to prevent replay attacks. + +## Keepers + +The auth module only exposes one keeper, the account keeper, which can be used to read and write accounts. + +### Account Keeper + +Presently only one fully-permissioned account keeper is exposed, which has the ability to both read and write +all fields of all accounts, and to iterate over all stored accounts. + +```go expandable +/ AccountKeeperI is the interface contract that x/auth's keeper implements. 
+type AccountKeeperI interface { + / Return a new account with the next account number and the specified address. Does not save the new account to the store. + NewAccountWithAddress(sdk.Context, sdk.AccAddress) + +types.AccountI + + / Return a new account with the next account number. Does not save the new account to the store. + NewAccount(sdk.Context, types.AccountI) + +types.AccountI + + / Check if an account exists in the store. + HasAccount(sdk.Context, sdk.AccAddress) + +bool + + / Retrieve an account from the store. + GetAccount(sdk.Context, sdk.AccAddress) + +types.AccountI + + / Set an account in the store. + SetAccount(sdk.Context, types.AccountI) + + / Remove an account from the store. + RemoveAccount(sdk.Context, types.AccountI) + + / Iterate over all accounts, calling the provided function. Stop iteration when it returns true. + IterateAccounts(sdk.Context, func(types.AccountI) + +bool) + + / Fetch the public key of an account at a specified address + GetPubKey(sdk.Context, sdk.AccAddress) (crypto.PubKey, error) + + / Fetch the sequence of an account at a specified address. + GetSequence(sdk.Context, sdk.AccAddress) (uint64, error) + + / Fetch the next account number, and increment the internal counter. + NextAccountNumber(sdk.Context) + +uint64 +} +``` + +## Parameters + +The auth module contains the following parameters: + +| Key | Type | Example | +| ---------------------- | ------ | ------- | +| MaxMemoCharacters | uint64 | 256 | +| TxSigLimit | uint64 | 7 | +| TxSizeCostPerByte | uint64 | 10 | +| SigVerifyCostED25519 | uint64 | 590 | +| SigVerifyCostSecp256k1 | uint64 | 1000 | + +## Client + +### CLI + +A user can query and interact with the `auth` module using the CLI. + +### Query + +The `query` commands allow users to query `auth` state. + +```bash +simd query auth --help +``` + +#### account + +The `account` command allows users to query for an account by its address.
+ +```bash +simd query auth account [address] [flags] +``` + +Example: + +```bash +simd query auth account cosmos1... +``` + +Example Output: + +```bash +'@type': /cosmos.auth.v1beta1.BaseAccount +account_number: "0" +address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 +pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD +sequence: "1" +``` + +#### accounts + +The `accounts` command allows users to query all the available accounts. + +```bash +simd query auth accounts [flags] +``` + +Example: + +```bash +simd query auth accounts +``` + +Example Output: + +```bash expandable +accounts: +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "0" + address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 + pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD + sequence: "1" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "8" + address: cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr + pub_key: null + sequence: "0" + name: transfer + permissions: + - minter + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "4" + address: cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh + pub_key: null + sequence: "0" + name: bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "5" + address: cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r + pub_key: null + sequence: "0" + name: not_bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "6" + address: cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn + pub_key: null + sequence: "0" + name: gov + permissions: + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "3" + address: cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl + pub_key: null + sequence: "0" + name: distribution + permissions: [] +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "1" + address: cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j + pub_key: null + sequence: "0" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "7" + address: cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q + pub_key: null + sequence: "0" + name: mint + permissions: + - minter +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "2" + address: cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta + pub_key: null + sequence: "0" + name: fee_collector + permissions: [] +pagination: + next_key: null + total: "0" +``` + +#### params + +The `params` command allows users to query the current auth parameters. + +```bash +simd query auth params [flags] +``` + +Example: + +```bash +simd query auth params +``` + +Example Output: + +```bash +max_memo_characters: "256" +sig_verify_cost_ed25519: "590" +sig_verify_cost_secp256k1: "1000" +tx_sig_limit: "7" +tx_size_cost_per_byte: "10" +``` + +### Transactions + +The `auth` module supports transaction commands for signing and more. Unlike other modules, its transaction commands are accessed directly under the top-level `tx` command. + +Use the `--help` flag to get more information about the `tx` command. + +```bash +simd tx --help +``` + +#### `sign` + +The `sign` command allows users to sign transactions that were generated offline. + +```bash +simd tx sign tx.json --from $ALICE > tx.signed.json +``` + +The result is a signed transaction that can be broadcast to the network using the broadcast command. + +More information about the `sign` command can be found running `simd tx sign --help`. + +#### `sign-batch` + +The `sign-batch` command allows users to sign multiple offline-generated transactions. +The transactions can be in one file, with one tx per line, or in multiple files.
+
+```bash
+simd tx sign-batch txs.json --from $ALICE > tx.signed.json
+```
+
+or
+
+```bash
+simd tx sign-batch tx1.json tx2.json tx3.json --from $ALICE > tx.signed.json
+```
+
+The result is multiple signed transactions. To combine the signed transactions into one transaction, use the `--append` flag.
+
+More information about the `sign-batch` command can be found running `simd tx sign-batch --help`.
+
+#### `multi-sign`
+
+The `multi-sign` command allows users to sign transactions that were generated offline by a multisig account.
+
+```bash
+simd tx multisign transaction.json k1k2k3 k1sig.json k2sig.json k3sig.json
+```
+
+Where `k1k2k3` is the multisig account address, `k1sig.json` is the signature of the first signer, `k2sig.json` is the signature of the second signer, and `k3sig.json` is the signature of the third signer.
+
+##### Nested multisig transactions
+
+To allow transactions to be signed by nested multisigs, meaning that a participant of a multisig account can be another multisig account, the `--skip-signature-verification` flag must be used.
+
+```bash
+# First aggregate signatures of the multisig participant
+simd tx multi-sign transaction.json ms1 ms1p1sig.json ms1p2sig.json --signature-only --skip-signature-verification > ms1sig.json
+
+# Then use the aggregated signatures and the other signatures to sign the final transaction
+simd tx multi-sign transaction.json k1ms1 k1sig.json ms1sig.json --skip-signature-verification
+```
+
+Where `ms1` is the nested multisig account address, `ms1p1sig.json` is the signature of the first participant of the nested multisig account, `ms1p2sig.json` is the signature of the second participant of the nested multisig account, and `ms1sig.json` is the aggregated signature of the nested multisig account.
+
+`k1ms1` is a multisig account comprised of an individual signer and another nested multisig account (`ms1`). `k1sig.json` is the signature of the individual member.
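The rule the CLI enforces for such accounts is a k-of-n threshold: the transaction becomes valid once enough participants have contributed a signature. A minimal illustrative sketch of that bookkeeping (the `multisig` type and its methods are hypothetical, not SDK code):

```go
package main

import "fmt"

// multisig is a toy model of a k-of-n multisig account: the transaction
// is complete once at least `threshold` distinct participants have signed.
type multisig struct {
	threshold int
	signed    map[string]bool // participant name -> has signed
}

func (m *multisig) addSignature(participant string) {
	m.signed[participant] = true
}

func (m *multisig) complete() bool {
	return len(m.signed) >= m.threshold
}

func main() {
	// Model k1k2k3 as a 2-of-3 multisig: two signature files suffice.
	m := &multisig{threshold: 2, signed: map[string]bool{}}
	m.addSignature("k1") // k1sig.json
	fmt.Println(m.complete())
	m.addSignature("k2") // k2sig.json
	fmt.Println(m.complete())
}
```

In the real CLI this aggregation is what `simd tx multisign` performs when it combines the individual `*sig.json` files into a single signed transaction.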
+
+More information about the `multi-sign` command can be found running `simd tx multi-sign --help`.
+
+#### `multisign-batch`
+
+The `multisign-batch` command works the same way as `sign-batch`, but for multisig accounts, with the difference that it requires all transactions to be in one file and the `--append` flag does not exist.
+
+More information about the `multisign-batch` command can be found running `simd tx multisign-batch --help`.
+
+#### `validate-signatures`
+
+The `validate-signatures` command allows users to validate the signatures of a signed transaction.
+
+```bash
+$ simd tx validate-signatures tx.signed.json
+Signers:
+  0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275
+
+Signatures:
+  0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275 [OK]
+```
+
+More information about the `validate-signatures` command can be found running `simd tx validate-signatures --help`.
+
+#### `broadcast`
+
+The `broadcast` command allows users to broadcast a signed transaction to the network.
+
+```bash
+simd tx broadcast tx.signed.json
+```
+
+More information about the `broadcast` command can be found running `simd tx broadcast --help`.
+
+### gRPC
+
+A user can query the `auth` module using gRPC endpoints.
+
+#### Account
+
+The `account` endpoint allows users to query for an account by its address.
+
+```bash
+cosmos.auth.v1beta1.Query/Account
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"address":"cosmos1.."}' \
+  localhost:9090 \
+  cosmos.auth.v1beta1.Query/Account
+```
+
+Example Output:
+
+```bash expandable
+{
+  "account":{
+    "@type":"/cosmos.auth.v1beta1.BaseAccount",
+    "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2",
+    "pubKey":{
+      "@type":"/cosmos.crypto.secp256k1.PubKey",
+      "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD"
+    },
+    "sequence":"1"
+  }
+}
+```
+
+#### Accounts
+
+The `accounts` endpoint allows users to query all the available accounts.
+ +```bash +cosmos.auth.v1beta1.Query/Accounts +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.auth.v1beta1.Query/Accounts +``` + +Example Output: + +```bash expandable +{ + "accounts":[ + { + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2", + "pubKey":{ + "@type":"/cosmos.crypto.secp256k1.PubKey", + "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD" + }, + "sequence":"1" + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr", + "accountNumber":"8" + }, + "name":"transfer", + "permissions":[ + "minter", + "burner" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh", + "accountNumber":"4" + }, + "name":"bonded_tokens_pool", + "permissions":[ + "burner", + "staking" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r", + "accountNumber":"5" + }, + "name":"not_bonded_tokens_pool", + "permissions":[ + "burner", + "staking" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn", + "accountNumber":"6" + }, + "name":"gov", + "permissions":[ + "burner" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl", + "accountNumber":"3" + }, + "name":"distribution" + }, + { + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "accountNumber":"1", + "address":"cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j" + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q", + "accountNumber":"7" + }, + "name":"mint", + "permissions":[ + "minter" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + 
"baseAccount":{
+        "address":"cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta",
+        "accountNumber":"2"
+      },
+      "name":"fee_collector"
+    }
+  ],
+  "pagination":{
+    "total":"9"
+  }
+}
+```
+
+#### Params
+
+The `params` endpoint allows users to query the current auth parameters.
+
+```bash
+cosmos.auth.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  localhost:9090 \
+  cosmos.auth.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+  "params": {
+    "maxMemoCharacters": "256",
+    "txSigLimit": "7",
+    "txSizeCostPerByte": "10",
+    "sigVerifyCostEd25519": "590",
+    "sigVerifyCostSecp256k1": "1000"
+  }
+}
+```
+
+### REST
+
+A user can query the `auth` module using REST endpoints.
+
+#### Account
+
+The `account` endpoint allows users to query for an account by its address.
+
+```bash
+/cosmos/auth/v1beta1/account?address={address}
+```
+
+#### Accounts
+
+The `accounts` endpoint allows users to query all the available accounts.
+
+```bash
+/cosmos/auth/v1beta1/accounts
+```
+
+#### Params
+
+The `params` endpoint allows users to query the current auth parameters.
+
+```bash
+/cosmos/auth/v1beta1/params
+```
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/auth/tx.mdx b/docs/sdk/v0.50/documentation/module-system/modules/auth/tx.mdx
new file mode 100644
index 00000000..ac4a86d5
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/auth/tx.mdx
@@ -0,0 +1,536 @@
+---
+title: '`x/auth/tx`'
+---
+
+
+**Pre-requisite Readings**
+
+* [Transactions](https://docs.cosmos.network/main/core/transactions#transaction-generation)
+* [Encoding](https://docs.cosmos.network/main/core/encoding#transaction-encoding)
+
+
+
+## Abstract
+
+This document specifies the `x/auth/tx` package of the Cosmos SDK.
+
+This package represents the Cosmos SDK implementation of the `client.TxConfig`, `client.TxBuilder`, `client.TxEncoder` and `client.TxDecoder` interfaces.
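The encoder/decoder pair is a simple functional shape: in the SDK, `sdk.TxEncoder` is `func(tx Tx) ([]byte, error)` and `sdk.TxDecoder` is `func(txBytes []byte) (Tx, error)`. A self-contained sketch of that shape, using a toy `Tx` struct and JSON as a stand-in wire format (the real implementation uses protobuf):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Tx is a stand-in for sdk.Tx; the real type carries messages, fees, and
// signatures rather than just a memo.
type Tx struct {
	Memo string `json:"memo"`
}

// TxEncoder and TxDecoder mirror the function-type shape of
// sdk.TxEncoder / sdk.TxDecoder.
type TxEncoder func(tx Tx) ([]byte, error)
type TxDecoder func(txBytes []byte) (Tx, error)

var encode TxEncoder = func(tx Tx) ([]byte, error) { return json.Marshal(tx) }

var decode TxDecoder = func(b []byte) (Tx, error) {
	var tx Tx
	err := json.Unmarshal(b, &tx)
	return tx, err
}

func main() {
	// A transaction must round-trip through the wire format unchanged.
	bz, _ := encode(Tx{Memo: "foobar"})
	tx, _ := decode(bz)
	fmt.Println(tx.Memo)
}
```

`x/auth/tx` provides concrete implementations of both directions (`DefaultTxEncoder`/`DefaultTxDecoder`), wired together by the `TxConfig` described below.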
+ +## Contents + +* [Transactions](#transactions) + * [`TxConfig`](#txconfig) + * [`TxBuilder`](#txbuilder) + * [`TxEncoder`/ `TxDecoder`](#txencoder-txdecoder) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + +## Transactions + +### `TxConfig` + +`client.TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. +The interface defines a set of methods for creating a `client.TxBuilder`. + +```go expandable +package client + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. + TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() + +signing.SignModeHandler +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. 
+ TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTip(tip *tx.Tip) + +SetTimeoutHeight(height uint64) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} +) +``` + +The default implementation of `client.TxConfig` is instantiated by `NewTxConfig` in `x/auth/tx` module. + +```go expandable +package tx + +import ( + + "fmt" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type config struct { + handler signing.SignModeHandler + decoder sdk.TxDecoder + encoder sdk.TxEncoder + jsonDecoder sdk.TxDecoder + jsonEncoder sdk.TxEncoder + protoCodec codec.ProtoCodecMarshaler +} + +/ NewTxConfig returns a new protobuf TxConfig using the provided ProtoCodec and sign modes. The +/ first enabled sign mode will become the default sign mode. +/ NOTE: Use NewTxConfigWithHandler to provide a custom signing handler in case the sign mode +/ is not supported by default (eg: SignMode_SIGN_MODE_EIP_191). +func NewTxConfig(protoCodec codec.ProtoCodecMarshaler, enabledSignModes []signingtypes.SignMode) + +client.TxConfig { + return NewTxConfigWithHandler(protoCodec, makeSignModeHandler(enabledSignModes)) +} + +/ NewTxConfig returns a new protobuf TxConfig using the provided ProtoCodec and signing handler. 
+func NewTxConfigWithHandler(protoCodec codec.ProtoCodecMarshaler, handler signing.SignModeHandler) + +client.TxConfig { + return &config{ + handler: handler, + decoder: DefaultTxDecoder(protoCodec), + encoder: DefaultTxEncoder(), + jsonDecoder: DefaultJSONTxDecoder(protoCodec), + jsonEncoder: DefaultJSONTxEncoder(protoCodec), + protoCodec: protoCodec, +} +} + +func (g config) + +NewTxBuilder() + +client.TxBuilder { + return newBuilder(g.protoCodec) +} + +/ WrapTxBuilder returns a builder from provided transaction +func (g config) + +WrapTxBuilder(newTx sdk.Tx) (client.TxBuilder, error) { + newBuilder, ok := newTx.(*wrapper) + if !ok { + return nil, fmt.Errorf("expected %T, got %T", &wrapper{ +}, newTx) +} + +return newBuilder, nil +} + +func (g config) + +SignModeHandler() + +signing.SignModeHandler { + return g.handler +} + +func (g config) + +TxEncoder() + +sdk.TxEncoder { + return g.encoder +} + +func (g config) + +TxDecoder() + +sdk.TxDecoder { + return g.decoder +} + +func (g config) + +TxJSONEncoder() + +sdk.TxEncoder { + return g.jsonEncoder +} + +func (g config) + +TxJSONDecoder() + +sdk.TxDecoder { + return g.jsonDecoder +} +``` + +### `TxBuilder` + +```go expandable +package client + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. 
The type returned must + / implement TxBuilder. + TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() + +signing.SignModeHandler +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTip(tip *tx.Tip) + +SetTimeoutHeight(height uint64) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} +) +``` + +The [`client.TxBuilder`](https://docs.cosmos.network/main/core/transactions#transaction-generation) interface is as well implemented by `x/auth/tx`. +A `client.TxBuilder` can be accessed with `TxConfig.NewTxBuilder()`. + +### `TxEncoder`/ `TxDecoder` + +More information about `TxEncoder` and `TxDecoder` can be found [here](https://docs.cosmos.network/main/core/encoding#transaction-encoding). + +## Client + +### CLI + +#### Query + +The `x/auth/tx` module provides a CLI command to query any transaction, given its hash, transaction sequence or signature. + +Without any argument, the command will query the transaction using the transaction hash. 
+
+```shell
+simd query tx DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
+```
+
+When querying a transaction from an account given its sequence, use the `--type=acc_seq` flag:
+
+```shell
+simd query tx --type=acc_seq cosmos1u69uyr6v9qwe6zaaeaqly2h6wnedac0xpxq325/1
+```
+
+When querying a transaction given its signature, use the `--type=signature` flag:
+
+```shell
+simd query tx --type=signature Ofjvgrqi8twZfqVDmYIhqwRLQjZZ40XbxEamk/veH3gQpRF0hL2PH4ejRaDzAX+2WChnaWNQJQ41ekToIi5Wqw==
+```
+
+When querying a transaction given its events, use the `--type=events` flag:
+
+```shell
+simd query txs --events 'message.sender=cosmos...' --page 1 --limit 30
+```
+
+The `x/auth/tx` module also provides a CLI command to query any block, given its hash, height, or events.
+
+When querying a block by its hash, use the `--type=hash` flag:
+
+```shell
+simd query block --type=hash DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
+```
+
+When querying a block by its height, use the `--type=height` flag:
+
+```shell
+simd query block --type=height 1357
+```
+
+When querying a block by its events, use the `--query` flag:
+
+```shell
+simd query blocks --query 'message.sender=cosmos...' --page 1 --limit 30
+```
+
+#### Transactions
+
+The `x/auth/tx` module provides a convenient CLI command for decoding and encoding transactions.
+
+#### `encode`
+
+The `encode` command encodes a transaction created with the `--generate-only` flag or signed with the sign command.
+The transaction is serialized to Protobuf and returned as base64.
+
+```bash
+$ simd tx encode tx.json
+Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
+$ simd tx encode tx.signed.json
+```
+
+More information about the `encode` command can be found running `simd tx encode --help`.
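The second step of `encode` is plain base64 over the serialized protobuf bytes. A minimal sketch of that step using Go's standard library (the byte slice is a placeholder, not a real serialized transaction):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeTxBytes performs the final step of `simd tx encode`: base64-encode
// the raw protobuf bytes of a serialized transaction for transport.
func encodeTxBytes(txBytes []byte) string {
	return base64.StdEncoding.EncodeToString(txBytes)
}

func main() {
	// Placeholder bytes; a real input would come from protobuf marshaling.
	fmt.Println(encodeTxBytes([]byte{0x0a, 0x00})) // CgA=
}
```

Decoding reverses the process: base64-decode first, then protobuf-unmarshal, which is exactly what the `decode` command below does.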
+ +#### `decode` + +The `decode` commands decodes a transaction encoded with the `encode` command. + +```bash +simd tx decode Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA== +``` + +More information about the `decode` command can be found running `simd tx decode --help`. + +### gRPC + +A user can query the `x/auth/tx` module using gRPC endpoints. + +#### `TxDecode` + +The `TxDecode` endpoint allows to decode a transaction. + +```shell +cosmos.tx.v1beta1.Service/TxDecode +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"tx_bytes":"Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/TxDecode +``` + +Example Output: + +```json expandable +{ + "tx": { + "body": { + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "amount": [ + { + "denom": "stake", + "amount": "100" + } + ], + "fromAddress": "cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275", + "toAddress": "cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3" + } + ] + }, + "authInfo": { + "fee": { + "gasLimit": "200000" + } + } + } +} +``` + +#### `TxEncode` + +The `TxEncode` endpoint allows to encode a transaction. 
+ +```shell +cosmos.tx.v1beta1.Service/TxEncode +``` + +Example: + +```shell expandable +grpcurl -plaintext \ + -d '{"tx": { + "body": { + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100"}],"fromAddress":"cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275","toAddress":"cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"} + ] + }, + "authInfo": { + "fee": { + "gasLimit": "200000" + } + } + }}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/TxEncode +``` + +Example Output: + +```json +{ + "txBytes": "Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==" +} +``` + +#### `TxDecodeAmino` + +The `TxDecode` endpoint allows to decode an amino transaction. + +```shell +cosmos.tx.v1beta1.Service/TxDecodeAmino +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/TxDecodeAmino +``` + +Example Output: + +```json +{ + "aminoJson": "{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}" +} +``` + +#### `TxEncodeAmino` + +The `TxEncodeAmino` endpoint allows to encode an amino transaction. 
+ +```shell +cosmos.tx.v1beta1.Service/TxEncodeAmino +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"amino_json":"{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/TxEncodeAmino +``` + +Example Output: + +```json +{ + "amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy" +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/auth/vesting.mdx b/docs/sdk/v0.50/documentation/module-system/modules/auth/vesting.mdx new file mode 100644 index 00000000..0cef59d3 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/auth/vesting.mdx @@ -0,0 +1,752 @@ +--- +title: '`x/auth/vesting`' +--- + +* [Intro and Requirements](#intro-and-requirements) +* [Note](#note) +* [Vesting Account Types](#vesting-account-types) + * [BaseVestingAccount](#basevestingaccount) + * [ContinuousVestingAccount](#continuousvestingaccount) + * [DelayedVestingAccount](#delayedvestingaccount) + * [Period](#period) + * [PeriodicVestingAccount](#periodicvestingaccount) + * [PermanentLockedAccount](#permanentlockedaccount) +* [Vesting Account Specification](#vesting-account-specification) + * [Determining Vesting & Vested Amounts](#determining-vesting--vested-amounts) + * [Periodic Vesting Accounts](#periodic-vesting-accounts) + * [Transferring/Sending](#transferringsending) + * [Delegating](#delegating) + * [Undelegating](#undelegating) +* [Keepers & Handlers](#keepers--handlers) +* 
[Genesis Initialization](#genesis-initialization)
+* [Examples](#examples)
+  * [Simple](#simple)
+  * [Slashing](#slashing)
+  * [Periodic Vesting](#periodic-vesting)
+* [Glossary](#glossary)
+
+## Intro and Requirements
+
+This specification defines the vesting account implementation that is used by the Cosmos Hub. The requirements for this vesting account are that it should be initialized during genesis with a starting balance `X` and a vesting end time `ET`. A vesting account may be initialized with a vesting start time `ST` and a number of vesting periods `P`. If a vesting start time is included, the vesting period does not begin until start time is reached. If vesting periods are included, the vesting occurs over the specified number of periods.
+
+For all vesting accounts, the owner of the vesting account is able to delegate and undelegate from validators; however, they cannot transfer coins to another account until those coins are vested. This specification allows for four different kinds of vesting:
+
+* Delayed vesting, where all coins are vested once `ET` is reached.
+* Continuous vesting, where coins begin to vest at `ST` and vest linearly with respect to time until `ET` is reached.
+* Periodic vesting, where coins begin to vest at `ST` and vest periodically according to the number of periods and the vesting amount per period. The number of periods, length per period, and amount per period are configurable. A periodic vesting account is distinguished from a continuous vesting account in that coins can be released in staggered tranches. For example, a periodic vesting account could be used for vesting arrangements where coins are released quarterly, yearly, or over any other function of tokens over time.
+* Permanent locked vesting, where coins are locked forever. Coins in this account can still be used for delegating and for governance votes even while locked.
+
+## Note
+
+Vesting accounts can be initialized with some vesting and non-vesting coins.
The non-vesting coins would be immediately transferable. DelayedVesting, ContinuousVesting, PeriodicVesting, and PermanentLocked accounts can be created with normal messages after genesis. Other types of vesting accounts must be created at genesis, or as part of a manual network upgrade. The current specification only allows for *unconditional* vesting (i.e. there is no possibility of reaching `ET` and
+having coins fail to vest).
+
+## Vesting Account Types
+
+```go expandable
+// VestingAccount defines an interface that any vesting account type must
+// implement.
+type VestingAccount interface {
+    Account
+
+    GetVestedCoins(Time) Coins
+    GetVestingCoins(Time) Coins
+
+    // TrackDelegation performs internal vesting accounting necessary when
+    // delegating from a vesting account. It accepts the current block time, the
+    // delegation amount and balance of all coins whose denomination exists in
+    // the account's original vesting balance.
+    TrackDelegation(Time, Coins, Coins)
+
+    // TrackUndelegation performs internal vesting accounting necessary when a
+    // vesting account performs an undelegation.
+    TrackUndelegation(Coins)
+
+    GetStartTime() int64
+    GetEndTime() int64
+}
+```
+
+### BaseVestingAccount
+
+```protobuf
+// BaseVestingAccount implements the VestingAccount interface. It contains all
+// the necessary fields needed for any vesting account implementation.
+message BaseVestingAccount { + option (amino.name) = "cosmos-sdk/BaseVestingAccount"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + cosmos.auth.v1beta1.BaseAccount base_account = 1 [(gogoproto.embed) = true]; + repeated cosmos.base.v1beta1.Coin original_vesting = 2 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + repeated cosmos.base.v1beta1.Coin delegated_free = 3 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + repeated cosmos.base.v1beta1.Coin delegated_vesting = 4 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + int64 end_time = 5; +} +``` + +### ContinuousVestingAccount + +```protobuf +// ContinuousVestingAccount implements the VestingAccount interface. It +// continuously vests by unlocking coins linearly with respect to time. +message ContinuousVestingAccount { + option (amino.name) = "cosmos-sdk/ContinuousVestingAccount"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true]; + int64 start_time = 2; +} +``` + +### DelayedVestingAccount + +```protobuf +// DelayedVestingAccount implements the VestingAccount interface. It vests all +// coins after a specific time, but non prior. In other words, it keeps them +// locked until a specified time. +message DelayedVestingAccount { + option (amino.name) = "cosmos-sdk/DelayedVestingAccount"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true]; +} +``` + +### Period + +```protobuf +// Period defines a length of time and amount of coins that will vest. 
+message Period { + option (gogoproto.goproto_stringer) = false; + + int64 length = 1; + repeated cosmos.base.v1beta1.Coin amount = 2 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; +} +``` + +```go +/ Stores all vesting periods passed as part of a PeriodicVestingAccount +type Periods []Period +``` + +### PeriodicVestingAccount + +```protobuf +// PeriodicVestingAccount implements the VestingAccount interface. It +// periodically vests by unlocking coins during each specified period. +message PeriodicVestingAccount { + option (amino.name) = "cosmos-sdk/PeriodicVestingAccount"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true]; + int64 start_time = 2; + repeated Period vesting_periods = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +In order to facilitate less ad-hoc type checking and assertions and to support flexibility in account balance usage, the existing `x/bank` `ViewKeeper` interface is updated to contain the following: + +```go +type ViewKeeper interface { + / ... + + / Calculates the total locked account balance. + LockedCoins(ctx sdk.Context, addr sdk.AccAddress) + +sdk.Coins + + / Calculates the total spendable balance that can be sent to other accounts. + SpendableCoins(ctx sdk.Context, addr sdk.AccAddress) + +sdk.Coins +} +``` + +### PermanentLockedAccount + +```protobuf +// PermanentLockedAccount implements the VestingAccount interface. It does +// not ever release coins, locking them indefinitely. Coins in this account can +// still be used for delegating and for governance votes even while locked. 
+// +// Since: cosmos-sdk 0.43 +message PermanentLockedAccount { + option (amino.name) = "cosmos-sdk/PermanentLockedAccount"; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true]; +} +``` + +## Vesting Account Specification + +Given a vesting account, we define the following in the proceeding operations: + +* `OV`: The original vesting coin amount. It is a constant value. +* `V`: The number of `OV` coins that are still *vesting*. It is derived by + `OV`, `StartTime` and `EndTime`. This value is computed on demand and not on a per-block basis. +* `V'`: The number of `OV` coins that are *vested* (unlocked). This value is computed on demand and not a per-block basis. +* `DV`: The number of delegated *vesting* coins. It is a variable value. It is stored and modified directly in the vesting account. +* `DF`: The number of delegated *vested* (unlocked) coins. It is a variable value. It is stored and modified directly in the vesting account. +* `BC`: The number of `OV` coins less any coins that are transferred + (which can be negative or delegated). It is considered to be balance of the embedded base account. It is stored and modified directly in the vesting account. + +### Determining Vesting & Vested Amounts + +It is important to note that these values are computed on demand and not on a mandatory per-block basis (e.g. `BeginBlocker` or `EndBlocker`). + +#### Continuously Vesting Accounts + +To determine the amount of coins that are vested for a given block time `T`, the +following is performed: + +1. Compute `X := T - StartTime` +2. Compute `Y := EndTime - StartTime` +3. Compute `V' := OV * (X / Y)` +4. Compute `V := OV - V'` + +Thus, the total amount of *vested* coins is `V'` and the remaining amount, `V`, +is *vesting*. 
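The linear formula above can be checked with a small numeric sketch. The `OV`, `StartTime`, and `EndTime` values here are made up for illustration, and bare integers stand in for `sdk.Coins`:

```go
package main

import "fmt"

// vestedCoins computes V' = OV * (X / Y) for a continuous vesting account,
// with X = T - StartTime and Y = EndTime - StartTime, clamped outside the
// vesting window.
func vestedCoins(ov, startTime, endTime, t int64) int64 {
	if t <= startTime {
		return 0 // nothing vests before the schedule starts
	}
	if t >= endTime {
		return ov // everything is vested after the schedule ends
	}
	x := t - startTime
	y := endTime - startTime
	return ov * x / y
}

func main() {
	const ov, start, end = 1000, 100, 200
	// Halfway through the schedule, half the original vesting is vested.
	fmt.Println(vestedCoins(ov, start, end, 150))      // V'
	fmt.Println(ov - vestedCoins(ov, start, end, 150)) // V, still vesting
}
```

At `T = 150`, `X = 50` and `Y = 100`, so `V' = 1000 * 50/100 = 500` and `V = 500`.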
+
+```go expandable
+func (cva ContinuousVestingAccount) GetVestedCoins(t Time) Coins {
+    if t <= cva.StartTime {
+        // We must handle the case where the start time for a vesting account has
+        // been set into the future or when the start of the chain is not exactly
+        // known.
+        return ZeroCoins
+    } else if t >= cva.EndTime {
+        return cva.OriginalVesting
+    }
+
+    x := t - cva.StartTime
+    y := cva.EndTime - cva.StartTime
+
+    return cva.OriginalVesting * (x / y)
+}
+
+func (cva ContinuousVestingAccount) GetVestingCoins(t Time) Coins {
+    return cva.OriginalVesting - cva.GetVestedCoins(t)
+}
+```
+
+#### Periodic Vesting Accounts
+
+Periodic vesting accounts require calculating the coins released during each period for a given block time `T`. Note that multiple periods could have passed when calling `GetVestedCoins`, so we must iterate over each period until the end of that period is after `T`.
+
+1. Set `CT := StartTime`
+2. Set `V' := 0`
+
+For each period `P`:
+
+1. Compute `X := T - CT`
+2. IF `X >= P.Length`
+   1. Compute `V' += P.Amount`
+   2. Compute `CT += P.Length`
+3. ELSE break
+
+Finally, compute `V := OV - V'`.
+
+```go expandable
+func (pva PeriodicVestingAccount) GetVestedCoins(t Time) Coins {
+    if t < pva.StartTime {
+        return ZeroCoins
+    }
+
+    ct := pva.StartTime // The start of the vesting schedule
+    vested := 0
+    periods := pva.GetPeriods()
+    for _, period := range periods {
+        if t-ct < period.Length {
+            break
+        }
+        vested += period.Amount
+        ct += period.Length // increment ct to the start of the next vesting period
+    }
+
+    return vested
+}
+
+func (pva PeriodicVestingAccount) GetVestingCoins(t Time) Coins {
+    return pva.OriginalVesting - pva.GetVestedCoins(t)
+}
+```
+
+#### Delayed/Discrete Vesting Accounts
+
+Delayed vesting accounts are easier to reason about, as they only have the full amount vesting up until a certain time, after which all the coins become vested (unlocked). This does not include any unlocked coins the account may have initially.
+
+```go expandable
+func (dva DelayedVestingAccount) GetVestedCoins(t Time) Coins {
+  if t >= dva.EndTime {
+    return dva.OriginalVesting
+  }
+
+  return ZeroCoins
+}
+
+func (dva DelayedVestingAccount) GetVestingCoins(t Time) Coins {
+  return dva.OriginalVesting - dva.GetVestedCoins(t)
+}
+```
+
+### Transferring/Sending
+
+At any given time, a vesting account may transfer: `min((BC + DV) - V, BC)`.
+
+In other words, a vesting account may transfer the minimum of the base account balance and the base account balance plus the number of currently delegated vesting coins less the number of coins still vesting.
+
+However, given that account balances are tracked via the `x/bank` module and that we want to avoid loading the entire account balance, we can instead determine the locked balance, which can be defined as `max(V - DV, 0)`, and infer the spendable balance from that.
+
+```go
+func (va VestingAccount) LockedCoins(t Time) Coins {
+  return max(va.GetVestingCoins(t) - va.DelegatedVesting, 0)
+}
+```
+
+The `x/bank` `ViewKeeper` can then provide APIs to determine locked and spendable coins for any account:
+
+```go expandable
+func (k Keeper) LockedCoins(ctx Context, addr AccAddress) Coins {
+  acc := k.GetAccount(ctx, addr)
+  if acc != nil {
+    if acc.IsVesting() {
+      return acc.LockedCoins(ctx.BlockTime())
+    }
+  }
+
+  // non-vesting accounts do not have any locked coins
+  return NewCoins()
+}
+```
+
+#### Keepers/Handlers
+
+The corresponding `x/bank` keeper should appropriately handle sending coins based on whether the account is a vesting account or not.
+
+```go expandable
+func (k Keeper) SendCoins(ctx Context, from Account, to Account, amount Coins) {
+  bc := k.GetBalances(ctx, from)
+  v := k.LockedCoins(ctx, from)
+
+  spendable := bc - v
+  newCoins := spendable - amount
+  assert(newCoins >= 0)
+
+  from.SetBalance(newCoins)
+  to.AddBalance(amount)
+
+  // save balances...
+}
+```
+
+### Delegating
+
+For a vesting account attempting to delegate `D` coins, the following is performed:
+
+1. Verify `BC >= D > 0`
+2. Compute `X := min(max(V - DV, 0), D)` (portion of `D` that is vesting)
+3. Compute `Y := D - X` (portion of `D` that is free)
+4. Set `DV += X`
+5. Set `DF += Y`
+
+```go
+func (va VestingAccount) TrackDelegation(t Time, balance Coins, amount Coins) {
+  assert(amount <= balance)
+
+  x := min(max(va.GetVestingCoins(t) - va.DelegatedVesting, 0), amount)
+  y := amount - x
+
+  va.DelegatedVesting += x
+  va.DelegatedFree += y
+}
+```
+
+**Note** `TrackDelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by subtracting `amount`.
+
+#### Keepers/Handlers
+
+```go
+func DelegateCoins(t Time, from Account, amount Coins) {
+  if isVesting(from) {
+    from.TrackDelegation(t, amount)
+  } else {
+    from.SetBalance(sc - amount)
+  }
+
+  // save account...
+}
+```
+
+### Undelegating
+
+For a vesting account attempting to undelegate `D` coins, the following is performed:
+
+> NOTE: `DV < D` and `(DV + DF) < D` may be possible due to quirks in the rounding of delegation/undelegation logic.
+
+1. Verify `D > 0`
+2. Compute `X := min(DF, D)` (portion of `D` that should become free, prioritizing free coins)
+3. Compute `Y := min(DV, D - X)` (portion of `D` that should remain vesting)
+4. Set `DF -= X`
+5. Set `DV -= Y`
+
+```go
+func (cva ContinuousVestingAccount) TrackUndelegation(amount Coins) {
+  x := min(cva.DelegatedFree, amount)
+  y := amount - x
+
+  cva.DelegatedFree -= x
+  cva.DelegatedVesting -= y
+}
+```
+
+**Note** `TrackUndelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by adding `amount`.
+
+**Note**: If a delegation is slashed, the continuous vesting account ends up with an excess `DV` amount, even after all its coins have vested.
This is because undelegation prioritizes free coins.
+
+**Note**: The undelegation (bond refund) amount may exceed the delegated vesting (bond) amount due to the way undelegation truncates the bond refund, which can increase the validator's exchange rate (tokens/shares) slightly if the undelegated tokens are non-integral.
+
+#### Keepers/Handlers
+
+```go expandable
+func UndelegateCoins(to Account, amount Coins) {
+  if isVesting(to) {
+    if to.DelegatedFree + to.DelegatedVesting >= amount {
+      to.TrackUndelegation(amount)
+      // save account ...
+    }
+  } else {
+    AddBalance(to, amount)
+    // save account...
+  }
+}
+```
+
+## Keepers & Handlers
+
+The `VestingAccount` implementations reside in `x/auth`. However, any keeper in a module (e.g. staking in `x/staking`) wishing to potentially utilize any vesting coins must call explicit methods on the `x/bank` keeper (e.g. `DelegateCoins`) as opposed to `SendCoins` and `SubtractCoins`.
+
+In addition, the vesting account should also be able to spend any coins it receives from other users. Thus, the bank module's `MsgSend` handler should error if a vesting account is trying to send an amount that exceeds its unlocked coin amount.
+
+See the above specification for full implementation details.
+
+## Genesis Initialization
+
+To initialize both vesting and non-vesting accounts, the `GenesisAccount` struct includes new fields: `Vesting`, `StartTime`, and `EndTime`. Accounts meant to be of type `BaseAccount` or any non-vesting type have `Vesting = false`. The genesis initialization logic (e.g. `initFromGenesisState`) must parse and return the correct accounts accordingly based on these fields.
+
+```go expandable
+type GenesisAccount struct {
+  / ...
+
+  / vesting account fields
+  OriginalVesting sdk.Coins `json:"original_vesting"`
+  DelegatedFree sdk.Coins `json:"delegated_free"`
+  DelegatedVesting sdk.Coins `json:"delegated_vesting"`
+  StartTime int64 `json:"start_time"`
+  EndTime int64 `json:"end_time"`
+}
+
+func ToAccount(gacc GenesisAccount) Account {
+  bacc := NewBaseAccount(gacc)
+
+  if gacc.OriginalVesting > 0 {
+    if gacc.StartTime != 0 && gacc.EndTime != 0 {
+      / return a continuous vesting account
+    } else if gacc.EndTime != 0 {
+      / return a delayed vesting account
+    } else {
+      / invalid genesis vesting account provided
+      panic()
+    }
+  }
+
+  return bacc
+}
+```
+
+## Examples
+
+### Simple
+
+Given a continuous vesting account with 10 vesting coins.
+
+```text
+OV = 10
+DF = 0
+DV = 0
+BC = 10
+V = 10
+V' = 0
+```
+
+1. Immediately receives 1 coin
+
+   ```text
+   BC = 11
+   ```
+
+2. Time passes, 2 coins vest
+
+   ```text
+   V = 8
+   V' = 2
+   ```
+
+3. Delegates 4 coins to validator A
+
+   ```text
+   DV = 4
+   BC = 7
+   ```
+
+4. Sends 3 coins
+
+   ```text
+   BC = 4
+   ```
+
+5. More time passes, 2 more coins vest
+
+   ```text
+   V = 6
+   V' = 4
+   ```
+
+6. Sends 2 coins. At this point the account cannot send anymore until further
+   coins vest or it receives additional coins. It can still, however, delegate.
+
+   ```text
+   BC = 2
+   ```
+
+### Slashing
+
+Same initial starting conditions as the simple example.
+
+1. Time passes, 5 coins vest
+
+   ```text
+   V = 5
+   V' = 5
+   ```
+
+2. Delegate 5 coins to validator A
+
+   ```text
+   DV = 5
+   BC = 5
+   ```
+
+3. Delegate 5 coins to validator B
+
+   ```text
+   DF = 5
+   BC = 0
+   ```
+
+4. Validator A gets slashed by 50%, making the delegation to A now worth 2.5 coins
+
+5. Undelegate from validator A (2.5 coins)
+
+   ```text
+   DF = 5 - 2.5 = 2.5
+   BC = 0 + 2.5 = 2.5
+   ```
+
+6. Undelegate from validator B (5 coins). The account at this point can only
+   send 2.5 coins unless it receives more coins or until more coins vest.
+   It can still, however, delegate.
+
+   ```text
+   DV = 5 - 2.5 = 2.5
+   DF = 2.5 - 2.5 = 0
+   BC = 2.5 + 5 = 7.5
+   ```
+
+   Notice how we have an excess amount of `DV`.
+
+### Periodic Vesting
+
+A vesting account is created where 100 tokens will be released over 1 year, with
+1/4 of tokens vesting each quarter. The vesting schedule would be as follows:
+
+```yaml
+Periods:
+- amount: 25stake, length: 7884000
+- amount: 25stake, length: 7884000
+- amount: 25stake, length: 7884000
+- amount: 25stake, length: 7884000
+```
+
+```text
+OV = 100
+DF = 0
+DV = 0
+BC = 100
+V = 100
+V' = 0
+```
+
+1. Immediately receives 1 coin
+
+   ```text
+   BC = 101
+   ```
+
+2. Vesting period 1 passes, 25 coins vest
+
+   ```text
+   V = 75
+   V' = 25
+   ```
+
+3. During vesting period 2, 5 coins are transferred and 5 coins are delegated
+
+   ```text
+   DV = 5
+   BC = 91
+   ```
+
+4. Vesting period 2 passes, 25 coins vest
+
+   ```text
+   V = 50
+   V' = 50
+   ```
+
+## Glossary
+
+* OriginalVesting: The amount of coins (per denomination) that are initially
+  part of a vesting account. These coins are set at genesis.
+* StartTime: The BFT time at which a vesting account starts to vest.
+* EndTime: The BFT time at which a vesting account is fully vested.
+* DelegatedFree: The tracked amount of coins (per denomination) that are
+  delegated from a vesting account that have been fully vested at time of delegation.
+* DelegatedVesting: The tracked amount of coins (per denomination) that are
+  delegated from a vesting account that were vesting at time of delegation.
+* ContinuousVestingAccount: A vesting account implementation that vests coins
+  linearly over time.
+* DelayedVestingAccount: A vesting account implementation that only fully vests
+  all coins at a given time.
+* PeriodicVestingAccount: A vesting account implementation that vests coins
+  according to a custom vesting schedule.
+* PermanentLockedAccount: A vesting account implementation that never releases coins, locking them indefinitely.
+  Coins in this account can still be used for delegating and for governance votes even while locked.
+
+## CLI
+
+A user can query and interact with the `vesting` module using the CLI.
+
+### Transactions
+
+The `tx` commands allow users to interact with the `vesting` module.
+
+```bash
+simd tx vesting --help
+```
+
+#### create-periodic-vesting-account
+
+The `create-periodic-vesting-account` command creates a new vesting account funded with an allocation of tokens that vest according to a sequence of periods, each defined by an amount of coins and a period length in seconds. Periods are sequential, in that the duration of a period only starts at the end of the previous period. The duration of the first period starts upon account creation.
+
+```bash
+simd tx vesting create-periodic-vesting-account [to_address] [periods_json_file] [flags]
+```
+
+Example:
+
+```bash
+simd tx vesting create-periodic-vesting-account cosmos1.. periods.json
+```
+
+#### create-vesting-account
+
+The `create-vesting-account` command creates a new vesting account funded with an allocation of tokens. The account can either be a delayed or continuous vesting account, which is determined by the '--delayed' flag. All vesting accounts created will have their start time set by the committed block's time. The end\_time must be provided as a UNIX epoch timestamp.
+
+```bash
+simd tx vesting create-vesting-account [to_address] [amount] [end_time] [flags]
+```
+
+Example:
+
+```bash
+simd tx vesting create-vesting-account cosmos1..
100stake 2592000
+```
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/authz/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/authz/README.mdx
new file mode 100644
index 00000000..cc72be3b
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/authz/README.mdx
@@ -0,0 +1,1430 @@
+---
+title: '`x/authz`'
+---
+
+## Abstract
+
+`x/authz` is an implementation of a Cosmos SDK module, per [ADR 30](docs/sdk/next/documentation/legacy/adr-comprehensive), that allows
+granting arbitrary privileges from one account (the granter) to another account (the grantee). Authorizations must be granted for a particular Msg service method one by one using an implementation of the `Authorization` interface.
+
+## Contents
+
+* [Concepts](#concepts)
+  * [Authorization and Grant](#authorization-and-grant)
+  * [Built-in Authorizations](#built-in-authorizations)
+  * [Gas](#gas)
+* [State](#state)
+  * [Grant](#grant)
+  * [GrantQueue](#grantqueue)
+* [Messages](#messages)
+  * [MsgGrant](#msggrant)
+  * [MsgRevoke](#msgrevoke)
+  * [MsgExec](#msgexec)
+* [Events](#events)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+  * [REST](#rest)
+
+## Concepts
+
+### Authorization and Grant
+
+The `x/authz` module defines interfaces and messages to grant authorizations to perform actions
+on behalf of one account to other accounts. The design is defined in [ADR 030](docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+A *grant* is an allowance to execute a Msg by the grantee on behalf of the granter.
+Authorization is an interface that must be implemented by a concrete authorization logic to validate and execute grants. Authorizations are extensible and can be defined for any Msg service method, even outside of the module where the Msg method is defined. See the `SendAuthorization` example in the next section for more details.
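Before looking at the interface itself, the granter/grantee relationship can be made concrete with a self-contained toy sketch of the grant-then-execute flow. This is purely illustrative; the types and method names are ours, not the SDK API:

```go
package main

import "fmt"

// grantKey identifies a grant by (granter, grantee, msg type URL),
// mirroring the triple that identifies a grant in authz state.
type grantKey struct{ granter, grantee, msgType string }

type authzStore map[grantKey]bool

// grant records that grantee may execute msgType on behalf of granter.
func (s authzStore) grant(granter, grantee, msgType string) {
	s[grantKey{granter, grantee, msgType}] = true
}

// exec succeeds only if a matching grant exists, mirroring how an
// execution is rejected without a prior grant.
func (s authzStore) exec(granter, grantee, msgType string) error {
	if !s[grantKey{granter, grantee, msgType}] {
		return fmt.Errorf("no authorization for %s", msgType)
	}
	return nil
}

func main() {
	s := authzStore{}
	s.grant("alice", "bob", "/cosmos.bank.v1beta1.MsgSend")

	// bob may send on alice's behalf, but has no delegate grant
	fmt.Println(s.exec("alice", "bob", "/cosmos.bank.v1beta1.MsgSend") == nil)        // true
	fmt.Println(s.exec("alice", "bob", "/cosmos.staking.v1beta1.MsgDelegate") == nil) // false
}
```

The real module additionally runs the `Authorization`'s own acceptance logic (spend limits, allow lists, expiration) before dispatching the Msg.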
+ +**Note:** The authz module is different from the [auth (authentication)](docs/sdk/v0.50/documentation/module-system/modules/auth/README) module that is responsible for specifying the base transaction and account types. + +```go expandable +package authz + +import ( + + "github.com/cosmos/gogoproto/proto" + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ Authorization represents the interface of various Authorization types implemented +/ by other modules. +type Authorization interface { + proto.Message + + / MsgTypeURL returns the fully-qualified Msg service method URL (as described in ADR 031), + / which will process and accept or reject a request. + MsgTypeURL() + +string + + / Accept determines whether this grant permits the provided sdk.Msg to be performed, + / and if so provides an upgraded authorization instance. + Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) + + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} + +/ AcceptResponse instruments the controller of an authz message if the request is accepted +/ and if it should be updated or deleted. +type AcceptResponse struct { + / If Accept=true, the controller can accept and authorization and handle the update. + Accept bool + / If Delete=true, the controller must delete the authorization object and release + / storage resources. + Delete bool + / Controller, who is calling Authorization.Accept must check if `Updated != nil`. If yes, + / it must use the updated version and handle the update on the storage level. + Updated Authorization +} +``` + +### Built-in Authorizations + +The Cosmos SDK `x/authz` module comes with following authorization types: + +#### GenericAuthorization + +`GenericAuthorization` implements the `Authorization` interface that gives unrestricted permission to execute the provided Msg on behalf of granter's account. 
+ +```protobuf +// GenericAuthorization gives the grantee unrestricted permissions to execute +// the provided method on behalf of the granter's account. +message GenericAuthorization { + option (amino.name) = "cosmos-sdk/GenericAuthorization"; + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + + // Msg, identified by it's type URL, to grant unrestricted permissions to execute + string msg = 1; +} +``` + +```go expandable +package authz + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var _ Authorization = &GenericAuthorization{ +} + +/ NewGenericAuthorization creates a new GenericAuthorization object. +func NewGenericAuthorization(msgTypeURL string) *GenericAuthorization { + return &GenericAuthorization{ + Msg: msgTypeURL, +} +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a GenericAuthorization) + +MsgTypeURL() + +string { + return a.Msg +} + +/ Accept implements Authorization.Accept. +func (a GenericAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) { + return AcceptResponse{ + Accept: true +}, nil +} + +/ ValidateBasic implements Authorization.ValidateBasic. +func (a GenericAuthorization) + +ValidateBasic() + +error { + return nil +} +``` + +* `msg` stores Msg type URL. + +#### SendAuthorization + +`SendAuthorization` implements the `Authorization` interface for the `cosmos.bank.v1beta1.MsgSend` Msg. + +* It takes a (positive) `SpendLimit` that specifies the maximum amount of tokens the grantee can spend. The `SpendLimit` is updated as the tokens are spent. +* It takes an (optional) `AllowList` that specifies to which addresses a grantee can send token. + +```protobuf +// SendAuthorization allows the grantee to spend up to spend_limit coins from +// the granter's account. 
+// +// Since: cosmos-sdk 0.43 +message SendAuthorization { + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + option (amino.name) = "cosmos-sdk/SendAuthorization"; + + repeated cosmos.base.v1beta1.Coin spend_limit = 1 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + + // allow_list specifies an optional list of addresses to whom the grantee can send tokens on behalf of the + // granter. If omitted, any recipient is allowed. + // + // Since: cosmos-sdk 0.47 + repeated string allow_list = 2; +} +``` + +```go expandable +package types + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have proper gas fee framework. +/ Ref: https://github.com/cosmos/cosmos-sdk/issues/9054 +/ Ref: https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(10) + +var _ authz.Authorization = &SendAuthorization{ +} + +/ NewSendAuthorization creates a new SendAuthorization object. +func NewSendAuthorization(spendLimit sdk.Coins, allowed []sdk.AccAddress) *SendAuthorization { + return &SendAuthorization{ + AllowList: toBech32Addresses(allowed), + SpendLimit: spendLimit, +} +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a SendAuthorization) + +MsgTypeURL() + +string { + return sdk.MsgTypeURL(&MsgSend{ +}) +} + +/ Accept implements Authorization.Accept. +func (a SendAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { + mSend, ok := msg.(*MsgSend) + if !ok { + return authz.AcceptResponse{ +}, sdkerrors.ErrInvalidType.Wrap("type mismatch") +} + toAddr := mSend.ToAddress + + limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount...) 
+ if isNegative { + return authz.AcceptResponse{ +}, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit") +} + if limitLeft.IsZero() { + return authz.AcceptResponse{ + Accept: true, + Delete: true +}, nil +} + isAddrExists := false + allowedList := a.GetAllowList() + for _, addr := range allowedList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "send authorization") + if addr == toAddr { + isAddrExists = true + break +} + +} + if len(allowedList) > 0 && !isAddrExists { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot send to %s address", toAddr) +} + +return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &SendAuthorization{ + SpendLimit: limitLeft, + AllowList: allowedList +}}, nil +} + +/ ValidateBasic implements Authorization.ValidateBasic. +func (a SendAuthorization) + +ValidateBasic() + +error { + if a.SpendLimit == nil { + return sdkerrors.ErrInvalidCoins.Wrap("spend limit cannot be nil") +} + if !a.SpendLimit.IsAllPositive() { + return sdkerrors.ErrInvalidCoins.Wrapf("spend limit must be positive") +} + found := make(map[string]bool, 0) + for i := 0; i < len(a.AllowList); i++ { + if found[a.AllowList[i]] { + return ErrDuplicateEntry +} + +found[a.AllowList[i]] = true +} + +return nil +} + +func toBech32Addresses(allowed []sdk.AccAddress) []string { + if len(allowed) == 0 { + return nil +} + allowedAddrs := make([]string, len(allowed)) + for i, addr := range allowed { + allowedAddrs[i] = addr.String() +} + +return allowedAddrs +} +``` + +* `spend_limit` keeps track of how many coins are left in the authorization. +* `allow_list` specifies an optional list of addresses to whom the grantee can send tokens on behalf of the granter. + +#### StakeAuthorization + +`StakeAuthorization` implements the `Authorization` interface for messages in the [staking module](https://docs.cosmos.network/v0.50/build/modules/staking). 
It takes an `AuthorizationType` to specify whether you want to authorise delegating, undelegating or redelegating (i.e. these have to be authorised separately). It also takes an optional `MaxTokens` that keeps track of a limit to the amount of tokens that can be delegated/undelegated/redelegated. If left empty, the amount is unlimited. Additionally, this Msg takes an `AllowList` or a `DenyList`, which allows you to select which validators you allow or deny grantees to stake with. + +```protobuf +// StakeAuthorization defines authorization for delegate/undelegate/redelegate. +// +// Since: cosmos-sdk 0.43 +message StakeAuthorization { + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + option (amino.name) = "cosmos-sdk/StakeAuthorization"; + + // max_tokens specifies the maximum amount of tokens can be delegate to a validator. If it is + // empty, there is no spend limit and any amount of coins can be delegated. + cosmos.base.v1beta1.Coin max_tokens = 1 [(gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coin"]; + // validators is the oneof that represents either allow_list or deny_list + oneof validators { + // allow_list specifies list of validator addresses to whom grantee can delegate tokens on behalf of granter's + // account. + Validators allow_list = 2; + // deny_list specifies list of validator addresses to whom grantee can not delegate tokens. + Validators deny_list = 3; + } + // Validators defines list of validator addresses. + message Validators { + repeated string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + } + // authorization_type defines one of AuthorizationType. + AuthorizationType authorization_type = 4; +} +``` + +```go expandable +package types + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have propoer gas fee framework. 
+/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(10) + +var _ authz.Authorization = &StakeAuthorization{ +} + +/ NewStakeAuthorization creates a new StakeAuthorization object. +func NewStakeAuthorization(allowed []sdk.ValAddress, denied []sdk.ValAddress, authzType AuthorizationType, amount *sdk.Coin) (*StakeAuthorization, error) { + allowedValidators, deniedValidators, err := validateAllowAndDenyValidators(allowed, denied) + if err != nil { + return nil, err +} + a := StakeAuthorization{ +} + if allowedValidators != nil { + a.Validators = &StakeAuthorization_AllowList{ + AllowList: &StakeAuthorization_Validators{ + Address: allowedValidators +}} + +} + +else { + a.Validators = &StakeAuthorization_DenyList{ + DenyList: &StakeAuthorization_Validators{ + Address: deniedValidators +}} + +} + if amount != nil { + a.MaxTokens = amount +} + +a.AuthorizationType = authzType + + return &a, nil +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a StakeAuthorization) + +MsgTypeURL() + +string { + authzType, err := normalizeAuthzType(a.AuthorizationType) + if err != nil { + panic(err) +} + +return authzType +} + +func (a StakeAuthorization) + +ValidateBasic() + +error { + if a.MaxTokens != nil && a.MaxTokens.IsNegative() { + return sdkerrors.Wrapf(authz.ErrNegativeMaxTokens, "negative coin amount: %v", a.MaxTokens) +} + if a.AuthorizationType == AuthorizationType_AUTHORIZATION_TYPE_UNSPECIFIED { + return authz.ErrUnknownAuthorizationType +} + +return nil +} + +/ Accept implements Authorization.Accept. 
+func (a StakeAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { + var validatorAddress string + var amount sdk.Coin + switch msg := msg.(type) { + case *MsgDelegate: + validatorAddress = msg.ValidatorAddress + amount = msg.Amount + case *MsgUndelegate: + validatorAddress = msg.ValidatorAddress + amount = msg.Amount + case *MsgBeginRedelegate: + validatorAddress = msg.ValidatorDstAddress + amount = msg.Amount + default: + return authz.AcceptResponse{ +}, sdkerrors.ErrInvalidRequest.Wrap("unknown msg type") +} + isValidatorExists := false + allowedList := a.GetAllowList().GetAddress() + for _, validator := range allowedList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization") + if validator == validatorAddress { + isValidatorExists = true + break +} + +} + denyList := a.GetDenyList().GetAddress() + for _, validator := range denyList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization") + if validator == validatorAddress { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validator) +} + +} + if len(allowedList) > 0 && !isValidatorExists { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validatorAddress) +} + if a.MaxTokens == nil { + return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &StakeAuthorization{ + Validators: a.GetValidators(), + AuthorizationType: a.GetAuthorizationType() +}, +}, nil +} + +limitLeft, err := a.MaxTokens.SafeSub(amount) + if err != nil { + return authz.AcceptResponse{ +}, err +} + if limitLeft.IsZero() { + return authz.AcceptResponse{ + Accept: true, + Delete: true +}, nil +} + +return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &StakeAuthorization{ + Validators: a.GetValidators(), + AuthorizationType: a.GetAuthorizationType(), + MaxTokens: &limitLeft +}, +}, nil +} + +func 
validateAllowAndDenyValidators(allowed []sdk.ValAddress, denied []sdk.ValAddress) ([]string, []string, error) {
+    if len(allowed) == 0 && len(denied) == 0 {
+        return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("both allowed & deny list cannot be empty")
+}
+    if len(allowed) > 0 && len(denied) > 0 {
+        return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("cannot set both allowed & deny list")
+}
+    allowedValidators := make([]string, len(allowed))
+    if len(allowed) > 0 {
+        for i, validator := range allowed {
+            allowedValidators[i] = validator.String()
+}
+
+return allowedValidators, nil, nil
+}
+    deniedValidators := make([]string, len(denied))
+    for i, validator := range denied {
+        deniedValidators[i] = validator.String()
+}
+
+return nil, deniedValidators, nil
+}
+
+/ Normalized Msg type URLs
+func normalizeAuthzType(authzType AuthorizationType) (string, error) {
+    switch authzType {
+    case AuthorizationType_AUTHORIZATION_TYPE_DELEGATE:
+        return sdk.MsgTypeURL(&MsgDelegate{
+}), nil
+    case AuthorizationType_AUTHORIZATION_TYPE_UNDELEGATE:
+        return sdk.MsgTypeURL(&MsgUndelegate{
+}), nil
+    case AuthorizationType_AUTHORIZATION_TYPE_REDELEGATE:
+        return sdk.MsgTypeURL(&MsgBeginRedelegate{
+}), nil
+    default:
+        return "", sdkerrors.Wrapf(authz.ErrUnknownAuthorizationType, "cannot normalize authz type with %T", authzType)
+}
+}
+```
+
+### Gas
+
+In order to prevent DoS attacks, granting `StakeAuthorization`s with `x/authz` incurs gas. `StakeAuthorization` allows you to authorize another account to delegate, undelegate, or redelegate to validators. The authorizer can define a list of validators they allow or deny delegations to. The Cosmos SDK iterates over these lists and charges 10 gas for each validator in both of the lists.
+
+Since the state maintains a list per (granter, grantee) pair with the same expiration, revoking a grant for a particular `msgType` requires iterating over that list to remove it, and 20 gas is charged per iteration.
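As a rough, self-contained illustration of the costs just described (10 gas per validator entry checked, 20 gas per pruning iteration on revoke), the worst-case charge can be sketched as follows. The helper names and constants are ours, written to match the figures above, not SDK APIs:

```go
package main

import "fmt"

const (
	gasPerValidatorCheck = uint64(10) // per validator entry in an allow/deny list
	gasPerQueueIteration = uint64(20) // per entry scanned when revoking a grant
)

// stakeAuthzAcceptGas estimates the gas consumed checking a validator against
// an allow (or deny) list of the given size; every entry is visited in the
// worst case.
func stakeAuthzAcceptGas(listLen int) uint64 {
	return uint64(listLen) * gasPerValidatorCheck
}

// revokeGas estimates gas for scanning n same-expiration queue entries
// while removing a revoked grant.
func revokeGas(n int) uint64 {
	return uint64(n) * gasPerQueueIteration
}

func main() {
	// an allow list of 25 validators costs up to 250 gas per Accept call
	fmt.Println(stakeAuthzAcceptGas(25)) // 250
	// revoking a grant among 3 same-expiration entries costs up to 60 gas
	fmt.Println(revokeGas(3)) // 60
}
```

The takeaway for granters is that very long validator lists make every authorized staking action proportionally more expensive.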
+
+## State
+
+### Grant
+
+Grants are identified by combining the granter address (the address bytes of the granter), the grantee address (the address bytes of the grantee) and the Authorization type (its type URL). Hence we only allow one grant per (granter, grantee, Authorization) triple.
+
+* Grant: `0x01 | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes | msgType_bytes -> ProtocolBuffer(AuthorizationGrant)`
+
+The grant object encapsulates an `Authorization` type and an expiration timestamp:
+
+```protobuf
+// Grant gives permissions to execute
+// the provide method with expiration time.
+message Grant {
+  google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
+  // time when the grant will expire and will be pruned. If null, then the grant
+  // doesn't have a time expiration (other conditions in `authorization`
+  // may apply to invalidate the grant)
+  google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = true];
+}
+```
+
+### GrantQueue
+
+We maintain a queue for authz pruning. Whenever a grant is created, an item is added to the `GrantQueue` with a key of expiration, granter, grantee.
+
+In `EndBlock` (which runs every block) we prune expired grants: we form a prefix key from the current block time, iterate through all `GrantQueue` records whose stored expiration has passed, and delete them from both the `GrantQueue` and the `Grant`s store.
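The `Grant` key layout described under State above (`0x01 | len | granter | len | grantee | msgType`) can be sketched as a small runnable program. This is an illustrative reconstruction of the layout, not the SDK's actual key helper:

```go
package main

import "fmt"

// grantStoreKey builds
//   0x01 | len(granter) (1 byte) | granter | len(grantee) (1 byte) | grantee | msgType,
// following the key layout described in the State section. Single-byte length
// prefixes keep the variable-length address fields unambiguous.
func grantStoreKey(granter, grantee []byte, msgType string) []byte {
	key := []byte{0x01} // Grant store prefix
	key = append(key, byte(len(granter)))
	key = append(key, granter...)
	key = append(key, byte(len(grantee)))
	key = append(key, grantee...)
	key = append(key, []byte(msgType)...)
	return key
}

func main() {
	k := grantStoreKey([]byte{0xAA, 0xBB}, []byte{0xCC}, "/cosmos.bank.v1beta1.MsgSend")
	fmt.Printf("%x\n", k[:6]) // 0102aabb01cc
}
```

Because the msg type URL is the final component, a prefix scan over `0x01 | len | granter | len | grantee` yields all grants between one granter/grantee pair.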
+ +```go expandable +package keeper + +import ( + + "fmt" + "strconv" + "time" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have propoer gas fee framework. +/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, +/ https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(20) + +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + router *baseapp.MsgServiceRouter + authKeeper authz.AccountKeeper +} + +/ NewKeeper constructs a message authorization Keeper +func NewKeeper(storeKey storetypes.StoreKey, cdc codec.BinaryCodec, router *baseapp.MsgServiceRouter, ak authz.AccountKeeper) + +Keeper { + return Keeper{ + storeKey: storeKey, + cdc: cdc, + router: router, + authKeeper: ak, +} +} + +/ Logger returns a module-specific logger. +func (k Keeper) + +Logger(ctx sdk.Context) + +log.Logger { + return ctx.Logger().With("module", fmt.Sprintf("x/%s", authz.ModuleName)) +} + +/ getGrant returns grant stored at skey. 
+func (k Keeper) + +getGrant(ctx sdk.Context, skey []byte) (grant authz.Grant, found bool) { + store := ctx.KVStore(k.storeKey) + bz := store.Get(skey) + if bz == nil { + return grant, false +} + +k.cdc.MustUnmarshal(bz, &grant) + +return grant, true +} + +func (k Keeper) + +update(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, updated authz.Authorization) + +error { + skey := grantStoreKey(grantee, granter, updated.MsgTypeURL()) + +grant, found := k.getGrant(ctx, skey) + if !found { + return authz.ErrNoAuthorizationFound +} + +msg, ok := updated.(proto.Message) + if !ok { + return sdkerrors.ErrPackAny.Wrapf("cannot proto marshal %T", updated) +} + +any, err := codectypes.NewAnyWithValue(msg) + if err != nil { + return err +} + +grant.Authorization = any + store := ctx.KVStore(k.storeKey) + +store.Set(skey, k.cdc.MustMarshal(&grant)) + +return nil +} + +/ DispatchActions attempts to execute the provided messages via authorization +/ grants from the message signer to the grantee. +func (k Keeper) + +DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) ([][]byte, error) { + results := make([][]byte, len(msgs)) + now := ctx.BlockTime() + for i, msg := range msgs { + signers := msg.GetSigners() + if len(signers) != 1 { + return nil, authz.ErrAuthorizationNumOfSigners +} + granter := signers[0] + + / If granter != grantee then check authorization.Accept, otherwise we + / implicitly accept. 
+ if !granter.Equals(grantee) { + skey := grantStoreKey(grantee, granter, sdk.MsgTypeURL(msg)) + +grant, found := k.getGrant(ctx, skey) + if !found { + return nil, sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to update grant with key %s", string(skey)) +} + if grant.Expiration != nil && grant.Expiration.Before(now) { + return nil, authz.ErrAuthorizationExpired +} + +authorization, err := grant.GetAuthorization() + if err != nil { + return nil, err +} + +resp, err := authorization.Accept(ctx, msg) + if err != nil { + return nil, err +} + if resp.Delete { + err = k.DeleteGrant(ctx, grantee, granter, sdk.MsgTypeURL(msg)) +} + +else if resp.Updated != nil { + err = k.update(ctx, grantee, granter, resp.Updated) +} + if err != nil { + return nil, err +} + if !resp.Accept { + return nil, sdkerrors.ErrUnauthorized +} + +} + handler := k.router.Handler(msg) + if handler == nil { + return nil, sdkerrors.ErrUnknownRequest.Wrapf("unrecognized message route: %s", sdk.MsgTypeURL(msg)) +} + +msgResp, err := handler(ctx, msg) + if err != nil { + return nil, sdkerrors.Wrapf(err, "failed to execute message; message %v", msg) +} + +results[i] = msgResp.Data + + / emit the events from the dispatched actions + events := msgResp.Events + sdkEvents := make([]sdk.Event, 0, len(events)) + for _, event := range events { + e := event + e.Attributes = append(e.Attributes, abci.EventAttribute{ + Key: "authz_msg_index", + Value: strconv.Itoa(i) +}) + +sdkEvents = append(sdkEvents, sdk.Event(e)) +} + +ctx.EventManager().EmitEvents(sdkEvents) +} + +return results, nil +} + +/ SaveGrant method grants the provided authorization to the grantee on the granter's account +/ with the provided expiration time and insert authorization key into the grants queue. If there is an existing authorization grant for the +/ same `sdk.Msg` type, this grant overwrites that. 
+func (k Keeper) + +SaveGrant(ctx sdk.Context, grantee, granter sdk.AccAddress, authorization authz.Authorization, expiration *time.Time) + +error { + store := ctx.KVStore(k.storeKey) + msgType := authorization.MsgTypeURL() + skey := grantStoreKey(grantee, granter, msgType) + +grant, err := authz.NewGrant(ctx.BlockTime(), authorization, expiration) + if err != nil { + return err +} + +var oldExp *time.Time + if oldGrant, found := k.getGrant(ctx, skey); found { + oldExp = oldGrant.Expiration +} + if oldExp != nil && (expiration == nil || !oldExp.Equal(*expiration)) { + if err = k.removeFromGrantQueue(ctx, skey, granter, grantee, *oldExp); err != nil { + return err +} + +} + + / If the expiration didn't change, then we don't remove it and we should not insert again + if expiration != nil && (oldExp == nil || !oldExp.Equal(*expiration)) { + if err = k.insertIntoGrantQueue(ctx, granter, grantee, msgType, *expiration); err != nil { + return err +} + +} + bz := k.cdc.MustMarshal(&grant) + +store.Set(skey, bz) + +return ctx.EventManager().EmitTypedEvent(&authz.EventGrant{ + MsgTypeUrl: authorization.MsgTypeURL(), + Granter: granter.String(), + Grantee: grantee.String(), +}) +} + +/ DeleteGrant revokes any authorization for the provided message type granted to the grantee +/ by the granter. 
+func (k Keeper)

+DeleteGrant(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string)

+error {
+ store := ctx.KVStore(k.storeKey)
+ skey := grantStoreKey(grantee, granter, msgType)

+grant, found := k.getGrant(ctx, skey)
+ if !found {
+ return sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to delete grant with key %s", string(skey))
+}
+ if grant.Expiration != nil {
+ err := k.removeFromGrantQueue(ctx, skey, granter, grantee, *grant.Expiration)
+ if err != nil {
+ return err
+}

+}

+store.Delete(skey)

+return ctx.EventManager().EmitTypedEvent(&authz.EventRevoke{
+ MsgTypeUrl: msgType,
+ Granter: granter.String(),
+ Grantee: grantee.String(),
+})
+}

+/ GetAuthorizations returns the list of `Authorizations` granted to the grantee by the granter.
+func (k Keeper)

+GetAuthorizations(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress) ([]authz.Authorization, error) {
+ store := ctx.KVStore(k.storeKey)
+ key := grantStoreKey(grantee, granter, "")
+ iter := sdk.KVStorePrefixIterator(store, key)

+defer iter.Close()

+var authorization authz.Grant
+ var authorizations []authz.Authorization
+ for ; iter.Valid(); iter.Next() {
+ if err := k.cdc.Unmarshal(iter.Value(), &authorization); err != nil {
+ return nil, err
+}

+a, err := authorization.GetAuthorization()
+ if err != nil {
+ return nil, err
+}

+authorizations = append(authorizations, a)
+}

+return authorizations, nil
+}

+/ GetAuthorization returns an Authorization and its expiration time.
+/ A nil Authorization is returned under the following circumstances:
+/ - No grant is found.
+/ - A grant is found, but it is expired.
+/ - There was an error getting the authorization from the grant. 
+func (k Keeper)

+GetAuthorization(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) (authz.Authorization, *time.Time) {
+ grant, found := k.getGrant(ctx, grantStoreKey(grantee, granter, msgType))
+ if !found || (grant.Expiration != nil && grant.Expiration.Before(ctx.BlockHeader().Time)) {
+ return nil, nil
+}

+auth, err := grant.GetAuthorization()
+ if err != nil {
+ return nil, nil
+}

+return auth, grant.Expiration
+}

+/ IterateGrants iterates over all authorization grants
+/ This function should be used with caution because it can involve significant IO operations.
+/ It should not be used in query or msg services without charging additional gas.
+/ The iteration stops when the handler function returns true or the iterator is exhausted.
+func (k Keeper)

+IterateGrants(ctx sdk.Context,
+ handler func(granterAddr sdk.AccAddress, granteeAddr sdk.AccAddress, grant authz.Grant)

+bool,
+) {
+ store := ctx.KVStore(k.storeKey)
+ iter := sdk.KVStorePrefixIterator(store, GrantKey)

+defer iter.Close()
+ for ; iter.Valid(); iter.Next() {
+ var grant authz.Grant
+ granterAddr, granteeAddr, _ := parseGrantStoreKey(iter.Key())

+k.cdc.MustUnmarshal(iter.Value(), &grant)
+ if handler(granterAddr, granteeAddr, grant) {
+ break
+}

+}
+}

+func (k Keeper)

+getGrantQueueItem(ctx sdk.Context, expiration time.Time, granter, grantee sdk.AccAddress) (*authz.GrantQueueItem, error) {
+ store := ctx.KVStore(k.storeKey)
+ bz := store.Get(GrantQueueKey(expiration, granter, grantee))
+ if bz == nil {
+ return &authz.GrantQueueItem{
+}, nil
+}

+var queueItems authz.GrantQueueItem
+ if err := k.cdc.Unmarshal(bz, &queueItems); err != nil {
+ return nil, err
+}

+return &queueItems, nil
+}

+func (k Keeper)

+setGrantQueueItem(ctx sdk.Context, expiration time.Time,
+ granter sdk.AccAddress, grantee sdk.AccAddress, queueItems *authz.GrantQueueItem,
+)

+error {
+ store := ctx.KVStore(k.storeKey)

+bz, err := k.cdc.Marshal(queueItems)
+ if err 
!= nil { + return err +} + +store.Set(GrantQueueKey(expiration, granter, grantee), bz) + +return nil +} + +/ insertIntoGrantQueue inserts a grant key into the grant queue +func (k Keeper) + +insertIntoGrantQueue(ctx sdk.Context, granter, grantee sdk.AccAddress, msgType string, expiration time.Time) + +error { + queueItems, err := k.getGrantQueueItem(ctx, expiration, granter, grantee) + if err != nil { + return err +} + if len(queueItems.MsgTypeUrls) == 0 { + k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{ + MsgTypeUrls: []string{ + msgType +}, +}) +} + +else { + queueItems.MsgTypeUrls = append(queueItems.MsgTypeUrls, msgType) + +k.setGrantQueueItem(ctx, expiration, granter, grantee, queueItems) +} + +return nil +} + +/ removeFromGrantQueue removes a grant key from the grant queue +func (k Keeper) + +removeFromGrantQueue(ctx sdk.Context, grantKey []byte, granter, grantee sdk.AccAddress, expiration time.Time) + +error { + store := ctx.KVStore(k.storeKey) + key := GrantQueueKey(expiration, granter, grantee) + bz := store.Get(key) + if bz == nil { + return sdkerrors.Wrap(authz.ErrNoGrantKeyFound, "can't remove grant from the expire queue, grant key not found") +} + +var queueItem authz.GrantQueueItem + if err := k.cdc.Unmarshal(bz, &queueItem); err != nil { + return err +} + + _, _, msgType := parseGrantStoreKey(grantKey) + queueItems := queueItem.MsgTypeUrls + for index, typeURL := range queueItems { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "grant queue") + if typeURL == msgType { + end := len(queueItem.MsgTypeUrls) - 1 + queueItems[index] = queueItems[end] + queueItems = queueItems[:end] + if err := k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{ + MsgTypeUrls: queueItems, +}); err != nil { + return err +} + +break +} + +} + +return nil +} + +/ DequeueAndDeleteExpiredGrants deletes expired grants from the state and grant queue. 
+func (k Keeper)

+DequeueAndDeleteExpiredGrants(ctx sdk.Context)

+error {
+ store := ctx.KVStore(k.storeKey)
+ iterator := store.Iterator(GrantQueuePrefix, sdk.InclusiveEndBytes(GrantQueueTimePrefix(ctx.BlockTime())))

+defer iterator.Close()
+ for ; iterator.Valid(); iterator.Next() {
+ var queueItem authz.GrantQueueItem
+ if err := k.cdc.Unmarshal(iterator.Value(), &queueItem); err != nil {
+ return err
+}

+ _, granter, grantee, err := parseGrantQueueKey(iterator.Key())
+ if err != nil {
+ return err
+}

+store.Delete(iterator.Key())
+ for _, typeURL := range queueItem.MsgTypeUrls {
+ store.Delete(grantStoreKey(grantee, granter, typeURL))
+}

+}

+return nil
+}
+```

+* GrantQueue: `0x02 | expiration_bytes | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes -> ProtocolBuffer(GrantQueueItem)`

+The `expiration_bytes` are the expiration date in UTC with the format `"2006-01-02T15:04:05.000000000"`.

+```go expandable
+package keeper

+import (

+ "time"
+ "github.com/cosmos/cosmos-sdk/internal/conv"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/address"
+ "github.com/cosmos/cosmos-sdk/types/kv"
+ "github.com/cosmos/cosmos-sdk/x/authz"
+)

+/ Keys for store prefixes
+/ Items are stored with the following key: values
+/
+/ - 0x01: Grant
+/ - 0x02: GrantQueueItem
+var (
+ GrantKey = []byte{0x01
+} / prefix for each key
+ GrantQueuePrefix = []byte{0x02
+}
+)

+var lenTime = len(sdk.FormatTimeBytes(time.Now()))

+/ StoreKey is the store key string for authz
+const StoreKey = authz.ModuleName

+/ grantStoreKey - return authorization store key
+/ Items are stored with the following key: values
+/
+/ - 0x01: Grant
+func grantStoreKey(grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) []byte {
+ m := conv.UnsafeStrToBytes(msgType)

+granter = address.MustLengthPrefix(granter)

+grantee = address.MustLengthPrefix(grantee)
+ key := 
sdk.AppendLengthPrefixedBytes(GrantKey, granter, grantee, m) + +return key +} + +/ parseGrantStoreKey - split granter, grantee address and msg type from the authorization key +func parseGrantStoreKey(key []byte) (granterAddr, granteeAddr sdk.AccAddress, msgType string) { + / key is of format: + / 0x01 + + granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, 1) / ignore key[0] since it is a prefix key + granterAddr, granterAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0])) + +granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrEndIndex+1, 1) + +granteeAddr, granteeAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0])) + +kv.AssertKeyAtLeastLength(key, granteeAddrEndIndex+1) + +return granterAddr, granteeAddr, conv.UnsafeBytesToStr(key[(granteeAddrEndIndex + 1):]) +} + +/ parseGrantQueueKey split expiration time, granter and grantee from the grant queue key +func parseGrantQueueKey(key []byte) (time.Time, sdk.AccAddress, sdk.AccAddress, error) { + / key is of format: + / 0x02 + + expBytes, expEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, lenTime) + +exp, err := sdk.ParseTimeBytes(expBytes) + if err != nil { + return exp, nil, nil, err +} + +granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, expEndIndex+1, 1) + +granter, granterEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0])) + +granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterEndIndex+1, 1) + +grantee, _ := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0])) + +return exp, granter, grantee, nil +} + +/ GrantQueueKey - return grant queue store key. If a given grant doesn't have a defined +/ expiration, then it should not be used in the pruning queue. 
+/ Key format is: +/ +/ 0x02: GrantQueueItem +func GrantQueueKey(expiration time.Time, granter sdk.AccAddress, grantee sdk.AccAddress) []byte { + exp := sdk.FormatTimeBytes(expiration) + +granter = address.MustLengthPrefix(granter) + +grantee = address.MustLengthPrefix(grantee) + +return sdk.AppendLengthPrefixedBytes(GrantQueuePrefix, exp, granter, grantee) +} + +/ GrantQueueTimePrefix - return grant queue time prefix +func GrantQueueTimePrefix(expiration time.Time) []byte { + return append(GrantQueuePrefix, sdk.FormatTimeBytes(expiration)...) +} + +/ firstAddressFromGrantStoreKey parses the first address only +func firstAddressFromGrantStoreKey(key []byte) + +sdk.AccAddress { + addrLen := key[0] + return sdk.AccAddress(key[1 : 1+addrLen]) +} +``` + +The `GrantQueueItem` object contains the list of type urls between granter and grantee that expire at the time indicated in the key. + +## Messages + +In this section we describe the processing of messages for the authz module. + +### MsgGrant + +An authorization grant is created using the `MsgGrant` message. +If there is already a grant for the `(granter, grantee, Authorization)` triple, then the new grant overwrites the previous one. To update or extend an existing grant, a new grant with the same `(granter, grantee, Authorization)` triple should be created. + +```protobuf +// MsgGrant is a request type for Grant method. It declares authorization to the grantee +// on behalf of the granter with the provided expiration time. +message MsgGrant { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgGrant"; + + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + cosmos.authz.v1beta1.Grant grant = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message handling should fail if: + +* both granter and grantee have the same address. 
+* the provided `Expiration` time is less than the current unix timestamp (but a grant will be created if no `expiration` time is provided, since `expiration` is optional).
+* the provided `Grant.Authorization` is not implemented.
+* `Authorization.MsgTypeURL()` is not defined in the router (there is no handler defined in the app router to handle that `Msg` type).

+### MsgRevoke

+A grant can be removed with the `MsgRevoke` message.

+```protobuf
+// MsgRevoke revokes any authorization with the provided sdk.Msg type on the
+// granter's account that has been granted to the grantee.
+message MsgRevoke {
+  option (cosmos.msg.v1.signer) = "granter";
+  option (amino.name) = "cosmos-sdk/MsgRevoke";
+
+  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string msg_type_url = 3;
+}
+```

+The message handling should fail if:

+* both granter and grantee have the same address.
+* the provided `MsgTypeUrl` is empty.

+NOTE: The `MsgExec` message removes a grant if the grant has expired.

+### MsgExec

+When a grantee wants to execute a transaction on behalf of a granter, they must send `MsgExec`.

+```protobuf
+// MsgExec attempts to execute the provided messages using
+// authorizations granted to the grantee. Each message should have only
+// one signer corresponding to the granter of the authorization.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "grantee";
+  option (amino.name) = "cosmos-sdk/MsgExec";
+
+  string grantee = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // Execute Msg.
+  // The x/authz will try to find a grant matching (msg.signers[0], grantee, MsgTypeURL(msg))
+  // triple and validate it.
+  repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = "cosmos.base.v1beta1.Msg"];
+}
+```

+The message handling should fail if:

+* the provided `Authorization` is not implemented.
+* the grantee doesn't have permission to run the transaction. 
+* the granted authorization has expired.

+## Events

+The authz module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main/cosmos.authz.v1beta1#cosmos.authz.v1beta1.EventGrant).

+## Client

+### CLI

+A user can query and interact with the `authz` module using the CLI.

+#### Query

+The `query` commands allow users to query `authz` state.

+```bash
+simd query authz --help
+```

+##### grants

+The `grants` command allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type.

+```bash
+simd query authz grants [granter-addr] [grantee-addr] [msg-type-url]? [flags]
+```

+Example:

+```bash
+simd query authz grants cosmos1.. cosmos1.. /cosmos.bank.v1beta1.MsgSend
+```

+Example Output:

+```bash
+grants:
+- authorization:
+    '@type': /cosmos.bank.v1beta1.SendAuthorization
+    spend_limit:
+    - amount: "100"
+      denom: stake
+  expiration: "2022-01-01T00:00:00Z"
+pagination: null
+```

+#### Transactions

+The `tx` commands allow users to interact with the `authz` module.

+```bash
+simd tx authz --help
+```

+##### exec

+The `exec` command allows a grantee to execute a transaction on behalf of a granter.

+```bash
+ simd tx authz exec [tx-json-file] --from [grantee] [flags]
+```

+Example:

+```bash
+simd tx authz exec tx.json --from=cosmos1..
+```

+##### grant

+The `grant` command allows a granter to grant an authorization to a grantee.

+```bash
+simd tx authz grant --from [flags]
+```

+* The `send` authorization\_type refers to the built-in `SendAuthorization` type. The custom flags available are `spend-limit` (required) and `allow-list` (optional), documented [here](#SendAuthorization).

+Example:

+```bash
+ simd tx authz grant cosmos1.. send --spend-limit=100stake --allow-list=cosmos1...,cosmos2... --from=cosmos1..
+```

+* The `generic` authorization\_type refers to the built-in `GenericAuthorization` type. 
The custom flag available is `msg-type` (required), documented [here](#GenericAuthorization).

+> Note: `msg-type` is any valid Cosmos SDK `Msg` type URL.

+Example:

+```bash
+ simd tx authz grant cosmos1.. generic --msg-type=/cosmos.bank.v1beta1.MsgSend --from=cosmos1..
+```

+* The `delegate`, `unbond`, `redelegate` authorization\_types refer to the built-in `StakeAuthorization` type. The custom flags available are `spend-limit` (optional), `allowed-validators` (optional) and `deny-validators` (optional), documented [here](#StakeAuthorization).

+> Note: `allowed-validators` and `deny-validators` cannot both be empty. `spend-limit` represents the `MaxTokens`.

+Example:

+```bash
+simd tx authz grant cosmos1.. delegate --spend-limit=100stake --allowed-validators=cosmos...,cosmos... --deny-validators=cosmos... --from=cosmos1..
+```

+##### revoke

+The `revoke` command allows a granter to revoke an authorization from a grantee.

+```bash
+simd tx authz revoke [grantee] [msg-type-url] --from=[granter] [flags]
+```

+Example:

+```bash
+simd tx authz revoke cosmos1.. /cosmos.bank.v1beta1.MsgSend --from=cosmos1..
+```

+### gRPC

+A user can query the `authz` module using gRPC endpoints.

+#### Grants

+The `Grants` endpoint allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type.

+```bash
+cosmos.authz.v1beta1.Query/Grants
+```

+Example:

+```bash
+grpcurl -plaintext \
+    -d '{"granter":"cosmos1..","grantee":"cosmos1..","msg_type_url":"/cosmos.bank.v1beta1.MsgSend"}' \
+    localhost:9090 \
+    cosmos.authz.v1beta1.Query/Grants
+```

+Example Output:

+```bash expandable
+{
+  "grants": [
+    {
+      "authorization": {
+        "@type": "/cosmos.bank.v1beta1.SendAuthorization",
+        "spendLimit": [
+          {
+            "denom":"stake",
+            "amount":"100"
+          }
+        ]
+      },
+      "expiration": "2022-01-01T00:00:00Z"
+    }
+  ]
+}
+```

+### REST

+A user can query the `authz` module using REST endpoints. 
+ +```bash +/cosmos/authz/v1beta1/grants +``` + +Example: + +```bash +curl "localhost:1317/cosmos/authz/v1beta1/grants?granter=cosmos1..&grantee=cosmos1..&msg_type_url=/cosmos.bank.v1beta1.MsgSend" +``` + +Example Output: + +```bash expandable +{ + "grants": [ + { + "authorization": { + "@type": "/cosmos.bank.v1beta1.SendAuthorization", + "spend_limit": [ + { + "denom": "stake", + "amount": "100" + } + ] + }, + "expiration": "2022-01-01T00:00:00Z" + } + ], + "pagination": null +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/bank/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/bank/README.mdx new file mode 100644 index 00000000..5a639dd0 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/bank/README.mdx @@ -0,0 +1,1209 @@ +--- +title: '`x/bank`' +description: This document specifies the bank module of the Cosmos SDK. +--- + +## Abstract + +This document specifies the bank module of the Cosmos SDK. + +The bank module is responsible for handling multi-asset coin transfers between +accounts and tracking special-case pseudo-transfers which must work differently +with particular kinds of accounts (notably delegating/undelegating for vesting +accounts). It exposes several interfaces with varying capabilities for secure +interaction with other modules which must alter user balances. + +In addition, the bank module tracks and provides query support for the total +supply of all assets used in the application. + +This module is used in the Cosmos Hub. 
+
+## Contents
+
+* [Supply](#supply)
+  * [Total Supply](#total-supply)
+* [Module Accounts](#module-accounts)
+  * [Permissions](#permissions)
+* [State](#state)
+* [Params](#params)
+* [Keepers](#keepers)
+* [Messages](#messages)
+* [Events](#events)
+  * [Message Events](#message-events)
+  * [Keeper Events](#keeper-events)
+* [Parameters](#parameters)
+  * [SendEnabled](#sendenabled)
+  * [DefaultSendEnabled](#defaultsendenabled)
+* [Client](#client)
+  * [CLI](#cli)
+    * [Query](#query)
+    * [Transactions](#transactions)
+* [gRPC](#grpc)
+
+## Supply
+
+The `supply` functionality:
+
+* passively tracks the total supply of coins within a chain,
+* provides a pattern for modules to hold/interact with `Coins`, and
+* introduces the invariant check to verify a chain's total supply.
+
+### Total Supply
+
+The total `Supply` of the network is equal to the sum of all coins across all
+accounts. The total supply is updated every time a `Coin` is minted (e.g. as part
+of the inflation mechanism) or burned (e.g. due to slashing or if a governance
+proposal is vetoed).
+
+## Module Accounts
+
+The supply functionality introduces a new type of `auth.Account` which can be used by
+modules to allocate tokens and in special cases mint or burn tokens. At a base
+level these module accounts are capable of sending/receiving tokens to and from
+`auth.Account`s and other module accounts. This design replaces previous
+alternative designs where, to hold tokens, modules would burn the incoming
+tokens from the sender account, and then track those tokens internally. Later,
+in order to send tokens, the module would need to effectively mint tokens
+within a destination account. The new design removes duplicate logic between
+modules to perform this accounting. 
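To make the accounting difference concrete, here is a minimal, self-contained sketch (plain Go, no SDK types; the `ledger` type and account names are purely illustrative). It shows why holding tokens in a module account is just an ordinary transfer: the total supply stays constant, whereas the old burn-then-mint pattern required two supply-changing steps.

```go
package main

import "fmt"

// ledger is a toy stand-in for bank balances, keyed by account name.
type ledger map[string]int64

// totalSupply sums every balance, mirroring the invariant the bank
// module checks the tracked supply against.
func (l ledger) totalSupply() int64 {
	var sum int64
	for _, bal := range l {
		sum += bal
	}
	return sum
}

// send moves amt from one account to another; unlike mint or burn,
// the total supply is untouched.
func (l ledger) send(from, to string, amt int64) error {
	if l[from] < amt {
		return fmt.Errorf("insufficient funds in %s", from)
	}
	l[from] -= amt
	l[to] += amt
	return nil
}

func main() {
	l := ledger{"user": 100, "module/staking": 0}
	before := l.totalSupply()
	if err := l.send("user", "module/staking", 40); err != nil {
		panic(err)
	}
	fmt.Println(l["module/staking"], l.totalSupply() == before)
}
```

Running it prints `40 true`: the module account now holds the coins and total supply is unchanged, with no mint/burn bookkeeping required.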
+
+The `ModuleAccount` interface is defined as follows:
+
+```go
+type ModuleAccount interface {
+ auth.Account / same methods as the Account interface
+
+ GetName()
+
+string / name of the module; used to obtain the address
+ GetPermissions() []string / permissions of module account
+ HasPermission(string)
+
+bool
+}
+```
+
+> **WARNING!**
+> Any module or message handler that allows either direct or indirect sending of funds must explicitly guarantee those funds cannot be sent to module accounts (unless allowed).
+
+The supply `Keeper` also introduces new wrapper functions for the auth `Keeper`
+and the bank `Keeper` that are related to `ModuleAccount`s in order to be able
+to:
+
+* Get and set `ModuleAccount`s by providing the `Name`.
+* Send coins from and to other `ModuleAccount`s or standard `Account`s
+  (`BaseAccount` or `VestingAccount`) by passing only the `Name`.
+* `Mint` or `Burn` coins for a `ModuleAccount` (restricted to its permissions).
+
+### Permissions
+
+Each `ModuleAccount` has a different set of permissions that provide different
+object capabilities to perform certain actions. Permissions need to be
+registered upon the creation of the supply `Keeper` so that every time a
+`ModuleAccount` calls the allowed functions, the `Keeper` can look up the
+permissions for that specific account and allow or disallow the action.
+
+The available permissions are:
+
+* `Minter`: allows for a module to mint a specific amount of coins.
+* `Burner`: allows for a module to burn a specific amount of coins.
+* `Staking`: allows for a module to delegate and undelegate a specific amount of coins.
+
+## State
+
+The `x/bank` module keeps state of the following primary objects:
+
+1. Account balances
+2. Denomination metadata
+3. The total supply of all balances
+4. Information on which denominations are allowed to be sent. 
+
+In addition, the `x/bank` module keeps the following indexes to manage the
+aforementioned state:
+
+* Supply Index: `0x0 | byte(denom) -> byte(amount)`
+* Denom Metadata Index: `0x1 | byte(denom) -> ProtocolBuffer(Metadata)`
+* Balances Index: `0x2 | byte(address length) | []byte(address) | []byte(balance.Denom) -> ProtocolBuffer(balance)`
+* Reverse Denomination to Address Index: `0x03 | byte(denom) | 0x00 | []byte(address) -> 0`
+
+## Params
+
+The bank module stores its params in state with the prefix `0x05`;
+they can be updated via governance or by the address with authority.
+
+* Params: `0x05 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params defines the parameters for the bank module.
+message Params {
+  option (amino.name) = "cosmos-sdk/x/bank/Params";
+  option (gogoproto.goproto_stringer) = false;
+  // Deprecated: Use of SendEnabled in params is deprecated.
+  // For genesis, use the newly added send_enabled field in the genesis object.
+  // Storage, lookup, and manipulation of this information is now in the keeper.
+  //
+  // As of cosmos-sdk 0.47, this only exists for backwards compatibility of genesis files.
+  repeated SendEnabled send_enabled = 1 [deprecated = true];
+  bool default_send_enabled = 2;
+}
+```
+
+## Keepers
+
+The bank module provides these exported keeper interfaces that can be
+passed to other modules that read or update account balances. Modules
+should use the least-permissive interface that provides the functionality they
+require.
+
+Best practices dictate careful review of `bank` module code to ensure that
+permissions are limited in the way that you expect.
+
+### Denied Addresses
+
+The `x/bank` module accepts a map of addresses that are considered blocklisted
+from directly and explicitly receiving funds through means such as `MsgSend` and
+`MsgMultiSend` and direct API calls like `SendCoinsFromModuleToAccount`.
+
+Typically, these addresses are module accounts. 
If these addresses receive funds
+outside the expected rules of the state machine, invariants are likely to be
+broken and could result in a halted network.
+
+By providing the `x/bank` module with a blocklisted set of addresses, an error occurs for the operation if a user or client attempts to directly or indirectly send funds to a blocklisted account, for example, by using [IBC](https://ibc.cosmos.network).
+
+### Common Types
+
+#### Input
+
+An input of a multiparty transfer.
+
+```protobuf
+/ Input models transaction input.
+message Input {
+  string address = 1;
+  repeated cosmos.base.v1beta1.Coin coins = 2;
+}
+```
+
+#### Output
+
+An output of a multiparty transfer.
+
+```protobuf
+/ Output models transaction outputs.
+message Output {
+  string address = 1;
+  repeated cosmos.base.v1beta1.Coin coins = 2;
+}
+```
+
+### BaseKeeper
+
+The base keeper provides full-permission access: the ability to arbitrarily modify any account's balance and to mint or burn coins.
+
+Minting can be restricted on a per-module basis by wrapping the base keeper with `WithMintCoinsRestriction` (e.g. to allow minting only a certain denom).
+
+```go expandable
+/ Keeper defines a module interface that facilitates the transfer of coins
+/ between accounts. 
+type Keeper interface { + SendKeeper + WithMintCoinsRestriction(MintingRestrictionFn) + +BaseKeeper + + InitGenesis(context.Context, *types.GenesisState) + +ExportGenesis(context.Context) *types.GenesisState + + GetSupply(ctx context.Context, denom string) + +sdk.Coin + HasSupply(ctx context.Context, denom string) + +bool + GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) + +IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) + +bool) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) + +HasDenomMetaData(ctx context.Context, denom string) + +bool + SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) + +IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) + +SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) + +error + SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + + DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error + UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error + + / GetAuthority gets the address capable of executing governance proposal messages. Usually the gov module account. 
+ GetAuthority() + +string + + types.QueryServer +} +``` + +### SendKeeper + +The send keeper provides access to account balances and the ability to transfer coins between +accounts. The send keeper does not alter the total supply (mint or burn coins). + +```go expandable +/ SendKeeper defines a module interface that facilitates the transfer of coins +/ between accounts without the possibility of creating coins. +type SendKeeper interface { + ViewKeeper + + AppendSendRestriction(restriction SendRestrictionFn) + +PrependSendRestriction(restriction SendRestrictionFn) + +ClearSendRestriction() + +InputOutputCoins(ctx context.Context, input types.Input, outputs []types.Output) + +error + SendCoins(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) + +error + + GetParams(ctx context.Context) + +types.Params + SetParams(ctx context.Context, params types.Params) + +error + + IsSendEnabledDenom(ctx context.Context, denom string) + +bool + SetSendEnabled(ctx context.Context, denom string, value bool) + +SetAllSendEnabled(ctx context.Context, sendEnableds []*types.SendEnabled) + +DeleteSendEnabled(ctx context.Context, denom string) + +IterateSendEnabledEntries(ctx context.Context, cb func(denom string, sendEnabled bool) (stop bool)) + +GetAllSendEnabledEntries(ctx context.Context) []types.SendEnabled + + IsSendEnabledCoin(ctx context.Context, coin sdk.Coin) + +bool + IsSendEnabledCoins(ctx context.Context, coins ...sdk.Coin) + +error + + BlockedAddr(addr sdk.AccAddress) + +bool +} +``` + +#### Send Restrictions + +The `SendKeeper` applies a `SendRestrictionFn` before each transfer of funds. + +```golang +/ A SendRestrictionFn can restrict sends and/or provide a new receiver address. 
+type SendRestrictionFn func(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (newToAddr sdk.AccAddress, err error)
+```
+
+After the `SendKeeper` (or `BaseKeeper`) has been created, send restrictions can be added to it using the `AppendSendRestriction` or `PrependSendRestriction` functions.
+Both functions compose the provided restriction with any previously provided restrictions.
+`AppendSendRestriction` adds the provided restriction to be run after any previously provided send restrictions.
+`PrependSendRestriction` adds the restriction to be run before any previously provided send restrictions.
+The composition short-circuits when an error is encountered, i.e. if the first restriction returns an error, the second is not run.
+
+During `SendCoins`, the send restriction is applied after coins are removed from the from address, but before adding them to the to address.
+During `InputOutputCoins`, the send restriction is applied after the input coins are removed and once for each output before the funds are added.
+
+A send restriction function should make use of a custom value in the context to allow bypassing that specific restriction.
+
+Send restrictions are not applied to `ModuleToAccount` or `ModuleToModule` transfers, for three reasons. First, modules need to move funds to user accounts and other module accounts; letting the state machine do so without restriction is a deliberate design decision that keeps it flexible. Second, restricting these transfers would limit the state machine's use of its own funds: users could not receive rewards, and funds could not move between module accounts. If, for example, a user sends funds to the community pool and a governance proposal later returns those tokens to the user's account, how to handle that is left to the discretion of the app-chain developer; no strong assumptions can be made here. Third, such restrictions could halt the chain if a disabled token were moved during begin/end block processing. For these reasons, restricting module transfers would be more damaging than beneficial for users.
+
+For example, in your module's keeper package, you'd define the send restriction function:
+
+```golang expandable
+var _ banktypes.SendRestrictionFn = Keeper{}.SendRestrictionFn
+
+func (k Keeper) SendRestrictionFn(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (sdk.AccAddress, error) {
+  / Bypass if the context says to.
+  if mymodule.HasBypass(ctx) {
+    return toAddr, nil
+  }
+
+  / Your custom send restriction logic goes here.
+  return nil, errors.New("not implemented")
+}
+```
+
+The bank keeper should be provided to your keeper's constructor so the send restriction can be added to it:
+
+```golang
+func NewKeeper(cdc codec.BinaryCodec, storeKey storetypes.StoreKey, bankKeeper mymodule.BankKeeper) Keeper {
+  rv := Keeper{ /*...*/ }
+  bankKeeper.AppendSendRestriction(rv.SendRestrictionFn)
+  return rv
+}
+```
+
+Then, in the `mymodule` package, define the context helpers:
+
+```golang expandable
+const bypassKey = "bypass-mymodule-restriction"
+
+/ WithBypass returns a new context that will cause the mymodule bank send restriction to be skipped.
+func WithBypass(ctx context.Context) context.Context {
+  return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, true)
+}
+
+/ WithoutBypass returns a new context that will cause the mymodule bank send restriction to not be skipped.
+func WithoutBypass(ctx context.Context) context.Context {
+  return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, false)
+}
+
+/ HasBypass checks the context to see if the mymodule bank send restriction should be skipped.
+func HasBypass(ctx context.Context) bool {
+  bypassValue := ctx.Value(bypassKey)
+  if bypassValue == nil {
+    return false
+  }
+  bypass, isBool := bypassValue.(bool)
+  return isBool && bypass
+}
+```
+
+Now, anywhere you want to use `SendCoins` or `InputOutputCoins` without your send restriction being applied:
+
+```golang
+func (k Keeper) DoThing(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) error {
+  return k.bankKeeper.SendCoins(mymodule.WithBypass(ctx), fromAddr, toAddr, amt)
+}
+```
+
+### ViewKeeper
+
+The view keeper provides read-only access to account balances. The view keeper does not have balance alteration functionality. All balance lookups are `O(1)`.
+
+```go expandable
+/ ViewKeeper defines a module interface that facilitates read only access to
+/ account balances.
+type ViewKeeper interface {
+  ValidateBalance(ctx context.Context, addr sdk.AccAddress) error
+  HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) bool
+
+  GetAllBalances(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+  GetAccountsBalances(ctx context.Context) []types.Balance
+  GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin
+  LockedCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+  SpendableCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+  SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin
+
+  IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool))
+  IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool))
+}
+```
+
+## Messages
+
+### MsgSend
+
+Send coins from one address to another.
+
+```protobuf
+// MsgSend represents a message to send coins from one account to another.
+message MsgSend {
+  option (cosmos.msg.v1.signer) = "from_address";
+  option (amino.name) = "cosmos-sdk/MsgSend";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string from_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string to_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  repeated cosmos.base.v1beta1.Coin amount = 3 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+}
+```
+
+The message will fail under the following conditions:
+
+* The coins do not have sending enabled
+* The `to` address is restricted
+
+### MsgMultiSend
+
+Send coins from one sender to a series of different addresses. If any of the receiving addresses do not correspond to an existing account, a new account is created.
+
+```protobuf
+// MsgMultiSend represents an arbitrary multi-in, multi-out send message.
+message MsgMultiSend {
+  option (cosmos.msg.v1.signer) = "inputs";
+  option (amino.name) = "cosmos-sdk/MsgMultiSend";
+
+  option (gogoproto.equal) = false;
+
+  // Inputs, despite being `repeated`, only allows one sender input. This is
+  // checked in MsgMultiSend's ValidateBasic.
+  repeated Input inputs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  repeated Output outputs = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+The message will fail under the following conditions:
+
+* Any of the coins do not have sending enabled
+* Any of the `to` addresses are restricted
+* Any of the coins are locked
+* The inputs and outputs do not correctly correspond to one another
+
+### MsgUpdateParams
+
+The `bank` module params can be updated through `MsgUpdateParams`, which is done via a governance proposal. The signer will always be the `gov` module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+  option (cosmos.msg.v1.signer) = "authority";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  option (amino.name) = "cosmos-sdk/x/bank/MsgUpdateParams";
+
+  // params defines the x/bank parameters to update.
+  //
+  // NOTE: All parameters must be supplied.
+  Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+The message handling can fail if:
+
+* The signer is not the gov module account address.
+
+### MsgSetSendEnabled
+
+Used with the x/gov module to create or edit SendEnabled entries.
+
+```protobuf
+// MsgSetSendEnabled is the Msg/SetSendEnabled request type.
+//
+// Only entries to add/update/delete need to be included.
+// Existing SendEnabled entries that are not included in this
+// message are left unchanged.
+//
+// Since: cosmos-sdk 0.47
+message MsgSetSendEnabled {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/MsgSetSendEnabled";
+
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // send_enabled is the list of entries to add or update.
+  repeated SendEnabled send_enabled = 2;
+
+  // use_default_for is a list of denoms that should use the params.default_send_enabled value.
+  // Denoms listed here will have their SendEnabled entries deleted.
+  // If a denom is included that doesn't have a SendEnabled entry,
+  // it will be ignored.
+  repeated string use_default_for = 3;
+}
+```
+
+The message will fail under the following conditions:
+
+* The authority is not a bech32 address.
+* The authority is not the x/gov module's address.
+* There are multiple SendEnabled entries with the same Denom.
+* One or more SendEnabled entries have an invalid Denom.
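
To make the semantics above concrete — per-denom `SendEnabled` entries override `DefaultSendEnabled`, and `use_default_for` deletes entries so the default applies again — here is a minimal, self-contained Go sketch. The type and function names are hypothetical; this is a simplified model of the behavior described above, not the SDK's actual keeper code.

```go
package main

import "fmt"

// sendEnabledState is a simplified (hypothetical) model of x/bank's
// send-enabled resolution: a per-denom entry overrides the chain-wide
// default; denoms without an entry fall back to DefaultSendEnabled.
type sendEnabledState struct {
	defaultSendEnabled bool
	entries            map[string]bool // denom -> enabled
}

// isSendEnabled reports whether a denom may be transferred.
func (s sendEnabledState) isSendEnabled(denom string) bool {
	if enabled, ok := s.entries[denom]; ok {
		return enabled
	}
	return s.defaultSendEnabled
}

// applyMsgSetSendEnabled mirrors the two lists carried by MsgSetSendEnabled:
// send_enabled adds or updates entries, use_default_for deletes entries so
// the default applies again.
func (s *sendEnabledState) applyMsgSetSendEnabled(sendEnabled map[string]bool, useDefaultFor []string) {
	for denom, enabled := range sendEnabled {
		s.entries[denom] = enabled
	}
	for _, denom := range useDefaultFor {
		delete(s.entries, denom) // a denom with no entry is simply ignored
	}
}

func main() {
	s := sendEnabledState{defaultSendEnabled: true, entries: map[string]bool{}}

	s.applyMsgSetSendEnabled(map[string]bool{"foocoin": false}, nil)
	fmt.Println(s.isSendEnabled("foocoin")) // false: explicit entry wins
	fmt.Println(s.isSendEnabled("stake"))   // true: falls back to the default

	s.applyMsgSetSendEnabled(nil, []string{"foocoin"})
	fmt.Println(s.isSendEnabled("foocoin")) // true: entry deleted, default applies
}
```

Note that, as in the real module, deleting an entry is not the same as enabling the denom: it only reinstates whatever `DefaultSendEnabled` currently is.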
+ +## Events + +The bank module emits the following events: + +### Message Events + +#### MsgSend + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| transfer | recipient | `{recipientAddress}` | +| transfer | amount | `{amount}` | +| message | module | bank | +| message | action | send | +| message | sender | `{senderAddress}` | + +#### MsgMultiSend + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| transfer | recipient | `{recipientAddress}` | +| transfer | amount | `{amount}` | +| message | module | bank | +| message | action | multisend | +| message | sender | `{senderAddress}` | + +### Keeper Events + +In addition to message events, the bank keeper will produce events when the following methods are called (or any method which ends up calling them) + +#### MintCoins + +```json expandable +{ + "type": "coinbase", + "attributes": [ + { + "key": "minter", + "value": "{{sdk.AccAddress of the module minting coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being minted}}", + "index": true + } + ] +} +``` + +```json expandable +{ + "type": "coin_received", + "attributes": [ + { + "key": "receiver", + "value": "{{sdk.AccAddress of the module minting coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being received}}", + "index": true + } + ] +} +``` + +#### BurnCoins + +```json expandable +{ + "type": "burn", + "attributes": [ + { + "key": "burner", + "value": "{{sdk.AccAddress of the module burning coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being burned}}", + "index": true + } + ] +} +``` + +```json expandable +{ + "type": "coin_spent", + "attributes": [ + { + "key": "spender", + "value": "{{sdk.AccAddress of the module burning coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being burned}}", + "index": true + } + ] +} +``` + +#### addCoins + +```json 
expandable
+{
+  "type": "coin_received",
+  "attributes": [
+    {
+      "key": "receiver",
+      "value": "{{sdk.AccAddress of the address beneficiary of the coins}}",
+      "index": true
+    },
+    {
+      "key": "amount",
+      "value": "{{sdk.Coins being received}}",
+      "index": true
+    }
+  ]
+}
+```
+
+#### subUnlockedCoins/DelegateCoins
+
+```json expandable
+{
+  "type": "coin_spent",
+  "attributes": [
+    {
+      "key": "spender",
+      "value": "{{sdk.AccAddress of the address which is spending coins}}",
+      "index": true
+    },
+    {
+      "key": "amount",
+      "value": "{{sdk.Coins being spent}}",
+      "index": true
+    }
+  ]
+}
+```
+
+## Parameters
+
+The bank module contains the following parameters:
+
+### SendEnabled
+
+The SendEnabled parameter is now deprecated and not to be used. It has been replaced
+with state store records.
+
+### DefaultSendEnabled
+
+The default send enabled value controls send transfer capability for all
+coin denominations unless specifically included in the array of `SendEnabled`
+parameters.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `bank` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `bank` state.
+
+```shell
+simd query bank --help
+```
+
+##### balances
+
+The `balances` command allows users to query account balances by address.
+
+```shell
+simd query bank balances [address] [flags]
+```
+
+Example:
+
+```shell
+simd query bank balances cosmos1..
+```
+
+Example Output:
+
+```yml
+balances:
+- amount: "1000000000"
+  denom: stake
+pagination:
+  next_key: null
+  total: "0"
+```
+
+##### denom-metadata
+
+The `denom-metadata` command allows users to query metadata for coin denominations. A user can query metadata for a single denomination using the `--denom` flag or all denominations without it.
+ +```shell +simd query bank denom-metadata [flags] +``` + +Example: + +```shell +simd query bank denom-metadata --denom stake +``` + +Example Output: + +```yml +metadata: + base: stake + denom_units: + - aliases: + - STAKE + denom: stake + description: native staking token of simulation app + display: stake + name: SimApp Token + symbol: STK +``` + +##### total + +The `total` command allows users to query the total supply of coins. A user can query the total supply for a single coin using the `--denom` flag or all coins without it. + +```shell +simd query bank total [flags] +``` + +Example: + +```shell +simd query bank total --denom stake +``` + +Example Output: + +```yml +amount: "10000000000" +denom: stake +``` + +##### send-enabled + +The `send-enabled` command allows users to query for all or some SendEnabled entries. + +```shell +simd query bank send-enabled [denom1 ...] [flags] +``` + +Example: + +```shell +simd query bank send-enabled +``` + +Example output: + +```yml +send_enabled: +- denom: foocoin + enabled: true +- denom: barcoin +pagination: + next-key: null + total: 2 +``` + +#### Transactions + +The `tx` commands allow users to interact with the `bank` module. + +```shell +simd tx bank --help +``` + +##### send + +The `send` command allows users to send funds from one account to another. + +```shell +simd tx bank send [from_key_or_address] [to_address] [amount] [flags] +``` + +Example: + +```shell +simd tx bank send cosmos1.. cosmos1.. 100stake +``` + +## gRPC + +A user can query the `bank` module using gRPC endpoints. + +### Balance + +The `Balance` endpoint allows users to query account balance by address for a given denomination. 
+ +```shell +cosmos.bank.v1beta1.Query/Balance +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1..","denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/Balance +``` + +Example Output: + +```json +{ + "balance": { + "denom": "stake", + "amount": "1000000000" + } +} +``` + +### AllBalances + +The `AllBalances` endpoint allows users to query account balance by address for all denominations. + +```shell +cosmos.bank.v1beta1.Query/AllBalances +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/AllBalances +``` + +Example Output: + +```json expandable +{ + "balances": [ + { + "denom": "stake", + "amount": "1000000000" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### DenomMetadata + +The `DenomMetadata` endpoint allows users to query metadata for a single coin denomination. + +```shell +cosmos.bank.v1beta1.Query/DenomMetadata +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomMetadata +``` + +Example Output: + +```json expandable +{ + "metadata": { + "description": "native staking token of simulation app", + "denomUnits": [ + { + "denom": "stake", + "aliases": [ + "STAKE" + ] + } + ], + "base": "stake", + "display": "stake", + "name": "SimApp Token", + "symbol": "STK" + } +} +``` + +### DenomsMetadata + +The `DenomsMetadata` endpoint allows users to query metadata for all coin denominations. 
+
+```shell
+cosmos.bank.v1beta1.Query/DenomsMetadata
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    localhost:9090 \
+    cosmos.bank.v1beta1.Query/DenomsMetadata
+```
+
+Example Output:
+
+```json expandable
+{
+  "metadatas": [
+    {
+      "description": "native staking token of simulation app",
+      "denomUnits": [
+        {
+          "denom": "stake",
+          "aliases": [
+            "STAKE"
+          ]
+        }
+      ],
+      "base": "stake",
+      "display": "stake",
+      "name": "SimApp Token",
+      "symbol": "STK"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+### DenomOwners
+
+The `DenomOwners` endpoint allows users to query the accounts holding a balance of a given coin denomination, along with each account's balance.
+
+```shell
+cosmos.bank.v1beta1.Query/DenomOwners
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"denom":"stake"}' \
+    localhost:9090 \
+    cosmos.bank.v1beta1.Query/DenomOwners
+```
+
+Example Output:
+
+```json expandable
+{
+  "denomOwners": [
+    {
+      "address": "cosmos1..",
+      "balance": {
+        "denom": "stake",
+        "amount": "5000000000"
+      }
+    },
+    {
+      "address": "cosmos1..",
+      "balance": {
+        "denom": "stake",
+        "amount": "5000000000"
+      }
+    }
+  ],
+  "pagination": {
+    "total": "2"
+  }
+}
+```
+
+### TotalSupply
+
+The `TotalSupply` endpoint allows users to query the total supply of all coins.
+
+```shell
+cosmos.bank.v1beta1.Query/TotalSupply
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    localhost:9090 \
+    cosmos.bank.v1beta1.Query/TotalSupply
+```
+
+Example Output:
+
+```json expandable
+{
+  "supply": [
+    {
+      "denom": "stake",
+      "amount": "10000000000"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+### SupplyOf
+
+The `SupplyOf` endpoint allows users to query the total supply of a single coin.
+
+```shell
+cosmos.bank.v1beta1.Query/SupplyOf
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"denom":"stake"}' \
+    localhost:9090 \
+    cosmos.bank.v1beta1.Query/SupplyOf
+```
+
+Example Output:
+
+```json
+{
+  "amount": {
+    "denom": "stake",
+    "amount": "10000000000"
+  }
+}
+```
+
+### Params
+
+The `Params` endpoint allows users to query the parameters of the `bank` module.
+
+```shell
+cosmos.bank.v1beta1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    localhost:9090 \
+    cosmos.bank.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "defaultSendEnabled": true
+  }
+}
+```
+
+### SendEnabled
+
+The `SendEnabled` endpoint allows users to query the SendEnabled entries of the `bank` module.
+
+Any denominations NOT returned use the `Params.DefaultSendEnabled` value.
+
+```shell
+cosmos.bank.v1beta1.Query/SendEnabled
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    localhost:9090 \
+    cosmos.bank.v1beta1.Query/SendEnabled
+```
+
+Example Output:
+
+```json expandable
+{
+  "send_enabled": [
+    {
+      "denom": "foocoin",
+      "enabled": true
+    },
+    {
+      "denom": "barcoin"
+    }
+  ],
+  "pagination": {
+    "next-key": null,
+    "total": 2
+  }
+}
+```
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/circuit/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/circuit/README.mdx
new file mode 100644
index 00000000..1ea49d47
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/circuit/README.mdx
@@ -0,0 +1,504 @@
+---
+title: '`x/circuit`'
+---
+
+## Concepts
+
+Circuit Breaker is a module designed to avoid having to halt or shut down a chain in the presence of a vulnerability; instead, the module allows specific messages, or all messages, to be disabled. For an app-specific chain, a halt is less detrimental, but when applications are built on top of the chain, halting is expensive because of the disruption to those applications.
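
The checks described below all reduce to one idea: a registry of disabled `Msg` type URLs consulted before a message is executed. As a minimal, self-contained Go sketch (type and function names are hypothetical; this is an illustration of the concept, not the actual `x/circuit` keeper):

```go
package main

import "fmt"

// circuit is a toy model of the circuit breaker: a set of tripped
// (disabled) Msg type URLs checked before dispatch.
type circuit struct {
	disabled map[string]struct{} // tripped Msg type URLs
}

// trip disables a message type; reset re-enables it.
func (c *circuit) trip(typeURL string)  { c.disabled[typeURL] = struct{}{} }
func (c *circuit) reset(typeURL string) { delete(c.disabled, typeURL) }

// isAllowed mirrors the IsAllowed check used by both the ante handler
// and the message router: a tripped URL is rejected before execution.
func (c *circuit) isAllowed(typeURL string) bool {
	_, tripped := c.disabled[typeURL]
	return !tripped
}

// execute stands in for message dispatch: it refuses tripped types.
func (c *circuit) execute(typeURL string) error {
	if !c.isAllowed(typeURL) {
		return fmt.Errorf("tx type not allowed: %s", typeURL)
	}
	return nil // dispatch to the real handler would happen here
}

func main() {
	c := &circuit{disabled: map[string]struct{}{}}
	c.trip("/cosmos.bank.v1beta1.MsgSend")

	fmt.Println(c.execute("/cosmos.bank.v1beta1.MsgSend") != nil) // true: tripped, rejected
	c.reset("/cosmos.bank.v1beta1.MsgSend")
	fmt.Println(c.execute("/cosmos.bank.v1beta1.MsgSend") == nil) // true: allowed again
}
```

The real module layers permissions on top of this registry, so that only authorized accounts may trip or reset a message type.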
+ +Circuit Breaker works with the idea that an address or set of addresses have the right to block messages from being executed and/or included in the mempool. Any address with a permission is able to reset the circuit breaker for the message. + +The transactions are checked and can be rejected at two points: + +* In `CircuitBreakerDecorator` [ante handler](https://docs.cosmos.network/main/learn/advanced/baseapp#antehandler): + +```go expandable +package ante + +import ( + + "context" + "github.com/cockroachdb/errors" + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ CircuitBreaker is an interface that defines the methods for a circuit breaker. +type CircuitBreaker interface { + IsAllowed(ctx context.Context, typeURL string) (bool, error) +} + +/ CircuitBreakerDecorator is an AnteDecorator that checks if the transaction type is allowed to enter the mempool or be executed +type CircuitBreakerDecorator struct { + circuitKeeper CircuitBreaker +} + +func NewCircuitBreakerDecorator(ck CircuitBreaker) + +CircuitBreakerDecorator { + return CircuitBreakerDecorator{ + circuitKeeper: ck, +} +} + +func (cbd CircuitBreakerDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + / loop through all the messages and check if the message type is allowed + for _, msg := range tx.GetMsgs() { + isAllowed, err := cbd.circuitKeeper.IsAllowed(ctx, sdk.MsgTypeURL(msg)) + if err != nil { + return ctx, err +} + if !isAllowed { + return ctx, errors.New("tx type not allowed") +} + +} + +return next(ctx, tx, simulate) +} +``` + +* With a [message router check](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router): + +```go expandable +package baseapp + +import ( + + "context" + "fmt" + + gogogrpc "github.com/cosmos/gogoproto/grpc" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc" + "google.golang.org/protobuf/runtime/protoiface" + + errorsmod "cosmossdk.io/errors" + 
"github.com/cosmos/cosmos-sdk/baseapp/internal/protocompat" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ MessageRouter ADR 031 request type routing +/ docs/sdk/next/documentation/legacy/adr-comprehensive +type MessageRouter interface { + Handler(msg sdk.Msg) + +MsgServiceHandler + HandlerByTypeURL(typeURL string) + +MsgServiceHandler +} + +/ MsgServiceRouter routes fully-qualified Msg service methods to their handler. +type MsgServiceRouter struct { + interfaceRegistry codectypes.InterfaceRegistry + routes map[string]MsgServiceHandler + hybridHandlers map[string]func(ctx context.Context, req, resp protoiface.MessageV1) + +error + circuitBreaker CircuitBreaker +} + +var _ gogogrpc.Server = &MsgServiceRouter{ +} + +/ NewMsgServiceRouter creates a new MsgServiceRouter. +func NewMsgServiceRouter() *MsgServiceRouter { + return &MsgServiceRouter{ + routes: map[string]MsgServiceHandler{ +}, + hybridHandlers: map[string]func(ctx context.Context, req, resp protoiface.MessageV1) + +error{ +}, +} +} + +func (msr *MsgServiceRouter) + +SetCircuit(cb CircuitBreaker) { + msr.circuitBreaker = cb +} + +/ MsgServiceHandler defines a function type which handles Msg service message. +type MsgServiceHandler = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) + +/ Handler returns the MsgServiceHandler for a given msg or nil if not found. +func (msr *MsgServiceRouter) + +Handler(msg sdk.Msg) + +MsgServiceHandler { + return msr.routes[sdk.MsgTypeURL(msg)] +} + +/ HandlerByTypeURL returns the MsgServiceHandler for a given query route path or nil +/ if not found. +func (msr *MsgServiceRouter) + +HandlerByTypeURL(typeURL string) + +MsgServiceHandler { + return msr.routes[typeURL] +} + +/ RegisterService implements the gRPC Server.RegisterService method. 
sd is a gRPC +/ service description, handler is an object which implements that gRPC service. +/ +/ This function PANICs: +/ - if it is called before the service `Msg`s have been registered using +/ RegisterInterfaces, +/ - or if a service is being registered twice. +func (msr *MsgServiceRouter) + +RegisterService(sd *grpc.ServiceDesc, handler interface{ +}) { + / Adds a top-level query handler based on the gRPC service name. + for _, method := range sd.Methods { + err := msr.registerMsgServiceHandler(sd, method, handler) + if err != nil { + panic(err) +} + +err = msr.registerHybridHandler(sd, method, handler) + if err != nil { + panic(err) +} + +} +} + +func (msr *MsgServiceRouter) + +HybridHandlerByMsgName(msgName string) + +func(ctx context.Context, req, resp protoiface.MessageV1) + +error { + return msr.hybridHandlers[msgName] +} + +func (msr *MsgServiceRouter) + +registerHybridHandler(sd *grpc.ServiceDesc, method grpc.MethodDesc, handler interface{ +}) + +error { + inputName, err := protocompat.RequestFullNameFromMethodDesc(sd, method) + if err != nil { + return err +} + cdc := codec.NewProtoCodec(msr.interfaceRegistry) + +hybridHandler, err := protocompat.MakeHybridHandler(cdc, sd, method, handler) + if err != nil { + return err +} + / if circuit breaker is not nil, then we decorate the hybrid handler with the circuit breaker + if msr.circuitBreaker == nil { + msr.hybridHandlers[string(inputName)] = hybridHandler + return nil +} + / decorate the hybrid handler with the circuit breaker + circuitBreakerHybridHandler := func(ctx context.Context, req, resp protoiface.MessageV1) + +error { + messageName := codectypes.MsgTypeURL(req) + +allowed, err := msr.circuitBreaker.IsAllowed(ctx, messageName) + if err != nil { + return err +} + if !allowed { + return fmt.Errorf("circuit breaker disallows execution of message %s", messageName) +} + +return hybridHandler(ctx, req, resp) +} + +msr.hybridHandlers[string(inputName)] = circuitBreakerHybridHandler + return nil +} + 
+func (msr *MsgServiceRouter) + +registerMsgServiceHandler(sd *grpc.ServiceDesc, method grpc.MethodDesc, handler interface{ +}) + +error { + fqMethod := fmt.Sprintf("/%s/%s", sd.ServiceName, method.MethodName) + methodHandler := method.Handler + + var requestTypeName string + + / NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. + / This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. + / We use a no-op interceptor to avoid actually calling into the handler itself. + _, _ = methodHandler(nil, context.Background(), func(i interface{ +}) + +error { + msg, ok := i.(sdk.Msg) + if !ok { + / We panic here because there is no other alternative and the app cannot be initialized correctly + / this should only happen if there is a problem with code generation in which case the app won't + / work correctly anyway. + panic(fmt.Errorf("unable to register service method %s: %T does not implement sdk.Msg", fqMethod, i)) +} + +requestTypeName = sdk.MsgTypeURL(msg) + +return nil +}, noopInterceptor) + + / Check that the service Msg fully-qualified method name has already + / been registered (via RegisterInterfaces). If the user registers a + / service without registering according service Msg type, there might be + / some unexpected behavior down the road. Since we can't return an error + / (`Server.RegisterService` interface restriction) + +we panic (at startup). + reqType, err := msr.interfaceRegistry.Resolve(requestTypeName) + if err != nil || reqType == nil { + return fmt.Errorf( + "type_url %s has not been registered yet. "+ + "Before calling RegisterService, you must register all interfaces by calling the `RegisterInterfaces` "+ + "method on module.BasicManager. 
Each module should call `msgservice.RegisterMsgServiceDesc` inside its "+ + "`RegisterInterfaces` method with the `_Msg_serviceDesc` generated by proto-gen", + requestTypeName, + ) +} + + / Check that each service is only registered once. If a service is + / registered more than once, then we should error. Since we can't + / return an error (`Server.RegisterService` interface restriction) + +we + / panic (at startup). + _, found := msr.routes[requestTypeName] + if found { + return fmt.Errorf( + "msg service %s has already been registered. Please make sure to only register each service once. "+ + "This usually means that there are conflicting modules registering the same msg service", + fqMethod, + ) +} + +msr.routes[requestTypeName] = func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + interceptor := func(goCtx context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{ +}, error) { + goCtx = context.WithValue(goCtx, sdk.SdkContextKey, ctx) + +return handler(goCtx, msg) +} + if m, ok := msg.(sdk.HasValidateBasic); ok { + if err := m.ValidateBasic(); err != nil { + return nil, err +} + +} + if msr.circuitBreaker != nil { + msgURL := sdk.MsgTypeURL(msg) + +isAllowed, err := msr.circuitBreaker.IsAllowed(ctx, msgURL) + if err != nil { + return nil, err +} + if !isAllowed { + return nil, fmt.Errorf("circuit breaker disables execution of this message: %s", msgURL) +} + +} + + / Call the method handler from the service description with the handler object. + / We don't do any decoding here because the decoding was already done. 
+ res, err := methodHandler(handler, ctx, noopDecoder, interceptor)
+ if err != nil {
+ return nil, err
+}
+
+resMsg, ok := res.(proto.Message)
+ if !ok {
+ return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "Expecting proto.Message, got %T", resMsg)
+}
+
+return sdk.WrapServiceResult(ctx, resMsg, err)
+}
+
+return nil
+}
+
+/ SetInterfaceRegistry sets the interface registry for the router.
+func (msr *MsgServiceRouter) SetInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) {
+ msr.interfaceRegistry = interfaceRegistry
+}
+
+func noopDecoder(_ interface{}) error {
+ return nil
+}
+
+func noopInterceptor(_ context.Context, _ interface{}, _ *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{}, error) {
+ return nil, nil
+}
+```
+
+
+The `CircuitBreakerDecorator` works for most use cases, but [does not check the inner messages of a transaction](https://docs.cosmos.network/main/learn/beginner/tx-lifecycle#antehandler). Thus, some transactions (such as `x/authz` transactions or some `x/gov` transactions) may pass the ante handler. **This does not affect the circuit breaker** as the message router check will still fail the transaction.
+This tradeoff is made to avoid introducing more dependencies in the `x/circuit` module. Chains can re-define the `CircuitBreakerDecorator` to check for inner messages if they wish to do so.
+
+
+## State
+
+### Accounts
+
+* AccountPermissions `0x1 | account_address -> ProtocolBuffer(CircuitBreakerPermissions)`
+
+```go expandable
+type level int32
+
+const (
+ / LEVEL_NONE_UNSPECIFIED indicates that the account will have no circuit
+ / breaker permissions.
+ LEVEL_NONE_UNSPECIFIED = iota
+ / LEVEL_SOME_MSGS indicates that the account will have permission to
+ / trip or reset the circuit breaker for some Msg type URLs. If this level
+ / is chosen, a non-empty list of Msg type URLs must be provided in
+ / limit_type_urls.
+ LEVEL_SOME_MSGS
+ / LEVEL_ALL_MSGS indicates that the account can trip or reset the circuit
+ / breaker for Msg's of all type URLs.
+ LEVEL_ALL_MSGS
+ / LEVEL_SUPER_ADMIN indicates that the account can take all circuit breaker
+ / actions and can grant permissions to other accounts.
+ LEVEL_SUPER_ADMIN
+)
+
+type Access struct {
+ level int32
+ msgs []string / if full permission, msgs can be empty
+}
+```
+
+### Disable List
+
+List of type urls that are disabled.
+
+* DisableList `0x2 | msg_type_url -> []byte{}` {/* - should this be stored in json to skip encoding and decoding each block, does it matter? */}
+
+## State Transitions
+
+### Authorize
+
+Authorize is called by the module authority (by default the governance module account) or any account with `LEVEL_SUPER_ADMIN` to grant another account permission to disable/enable messages. Three levels of permission can be granted: `LEVEL_SOME_MSGS` limits the grant to a specific set of messages that can be disabled, `LEVEL_ALL_MSGS` permits all messages to be disabled, and `LEVEL_SUPER_ADMIN` allows an account to take all circuit breaker actions, including authorizing and deauthorizing other accounts.
+
+```protobuf
+  / AuthorizeCircuitBreaker allows a super-admin to grant (or revoke) another
+  / account's circuit breaker permissions.
+  rpc AuthorizeCircuitBreaker(MsgAuthorizeCircuitBreaker) returns (MsgAuthorizeCircuitBreakerResponse);
+```
+
+### Trip
+
+Trip is called by an authorized account to disable message execution for a specific msgURL. If empty, all messages will be disabled.
+
+```protobuf
+  / TripCircuitBreaker pauses processing of Msg's in the state machine.
+  rpc TripCircuitBreaker(MsgTripCircuitBreaker) returns (MsgTripCircuitBreakerResponse);
+```
+
+### Reset
+
+Reset is called by an authorized account to re-enable execution of a previously disabled message for a specific msgURL. If empty, all disabled messages will be enabled.
+
+```protobuf
+ / ResetCircuitBreaker resumes processing of Msg's in the state machine that
+ / have been paused using TripCircuitBreaker.
+ rpc ResetCircuitBreaker(MsgResetCircuitBreaker) returns (MsgResetCircuitBreakerResponse);
+```
+
+## Messages
+
+### MsgAuthorizeCircuitBreaker
+
+```protobuf
+/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L25-L75
+```
+
+This message is expected to fail if:
+
+* the granter is not an account with permission level `LEVEL_SUPER_ADMIN` or the module authority
+
+### MsgTripCircuitBreaker
+
+```protobuf
+/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L77-L93
+```
+
+This message is expected to fail if:
+
+* the signer does not have a permission level with the ability to disable the specified type url message
+
+### MsgResetCircuitBreaker
+
+```protobuf
+/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L95-L109
+```
+
+This message is expected to fail if:
+
+* the type url is not disabled
+
+## Events
+
+The circuit module emits the following events:
+
+### Message Events
+
+#### MsgAuthorizeCircuitBreaker
+
+| Type | Attribute Key | Attribute Value |
+| ------- | ------------- | --------------------------- |
+| string | granter | `{granterAddress}` |
+| string | grantee | `{granteeAddress}` |
+| string | permission | `{granteePermissions}` |
+| message | module | circuit |
+| message | action | authorize\_circuit\_breaker |
+
+#### MsgTripCircuitBreaker
+
+| Type | Attribute Key | Attribute Value |
+| --------- | ------------- | ---------------------- |
+| string | authority | `{authorityAddress}` |
+| \[]string | msg\_urls | \[]string`{msg\_urls}` |
+| message | module | circuit |
+| message | action | trip\_circuit\_breaker |
+
+#### ResetCircuitBreaker
+
+| Type | Attribute Key | Attribute Value |
+| --------- | ------------- | -----------------------
|
+| string | authority | `{authorityAddress}` |
+| \[]string | msg\_urls | \[]string`{msg\_urls}` |
+| message | module | circuit |
+| message | action | reset\_circuit\_breaker |
+
+## Keys
+
+* `AccountPermissionPrefix` - `0x01`
+* `DisableListPrefix` - `0x02`
+
+## Client
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/consensus/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/consensus/README.mdx
new file mode 100644
index 00000000..0a377fe4
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/consensus/README.mdx
@@ -0,0 +1,6 @@
+---
+title: '`x/consensus`'
+description: Functionality to modify CometBFT's ABCI consensus params.
+---
+
+Functionality to modify CometBFT's ABCI consensus params.
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/crisis/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/crisis/README.mdx
new file mode 100644
index 00000000..e96a0cf2
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/crisis/README.mdx
@@ -0,0 +1,128 @@
+---
+title: '`x/crisis`'
+description: >-
+  The crisis module halts the blockchain under the circumstance that a
+  blockchain invariant is broken. Invariants can be registered with the
+  application during the application initialization process.
+---
+
+## Overview
+
+The crisis module halts the blockchain under the circumstance that a blockchain
+invariant is broken. Invariants can be registered with the application during the
+application initialization process.
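To make the registration idea concrete, here is a small self-contained Go sketch of the pattern: modules register named invariant routes, and verifying a broken invariant halts processing. The types and function names below are illustrative only — they model the concept and are not the SDK's actual API.

```go
package main

import "fmt"

// Invariant reports a description and whether the invariant is broken.
type Invariant func() (msg string, broken bool)

// registry maps "module/route" to a registered invariant (illustrative).
var registry = map[string]Invariant{}

// RegisterRoute records an invariant under a module name and route.
func RegisterRoute(module, route string, inv Invariant) {
	registry[module+"/"+route] = inv
}

// VerifyInvariant looks up a route and "halts" (here: returns an error)
// if the invariant is broken.
func VerifyInvariant(module, route string) error {
	inv, ok := registry[module+"/"+route]
	if !ok {
		return fmt.Errorf("invariant route %s/%s not registered", module, route)
	}
	if msg, broken := inv(); broken {
		return fmt.Errorf("invariant broken, halting chain: %s", msg)
	}
	return nil
}

func main() {
	supply, tracked := 1000, 1000
	RegisterRoute("bank", "total-supply", func() (string, bool) {
		return "total supply mismatch", supply != tracked
	})
	fmt.Println(VerifyInvariant("bank", "total-supply")) // prints "<nil>"
	tracked = 999
	fmt.Println(VerifyInvariant("bank", "total-supply"))
}
```

In the real module, `VerifyInvariant` is triggered by `MsgVerifyInvariant` (described below) and a broken invariant panics rather than returning an error.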
+
+## Contents
+
+* [State](#state)
+* [Messages](#messages)
+* [Events](#events)
+* [Parameters](#parameters)
+* [Client](#client)
+  * [CLI](#cli)
+
+## State
+
+### ConstantFee
+
+Due to the anticipated large gas cost requirement to verify an invariant (and
+potential to exceed the maximum allowable block gas limit), a constant fee is
+used instead of the standard gas consumption method. The constant fee is
+intended to be larger than the anticipated gas cost of running the invariant
+with the standard gas consumption method.
+
+The ConstantFee param is stored in the module params state with the prefix of `0x01`;
+it can be updated with governance or the address with authority.
+
+* Params: `crisis/params -> legacy_amino(sdk.Coin)`
+
+## Messages
+
+In this section we describe the processing of the crisis messages and the
+corresponding updates to the state.
+
+### MsgVerifyInvariant
+
+Blockchain invariants can be checked using the `MsgVerifyInvariant` message.
+
+```protobuf
+// MsgVerifyInvariant represents a message to verify a particular invariance.
+message MsgVerifyInvariant {
+ option (cosmos.msg.v1.signer) = "sender";
+ option (amino.name) = "cosmos-sdk/MsgVerifyInvariant";
+
+ option (gogoproto.equal) = false;
+ option (gogoproto.goproto_getters) = false;
+
+ // sender is the account address of private key to send coins to fee collector account.
+ string sender = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ // name of the invariant module.
+ string invariant_module_name = 2;
+
+ // invariant_route is the msg's invariant route.
+ string invariant_route = 3;
+}
+```
+
+This message is expected to fail if:
+
+* the sender does not have enough coins for the constant fee
+* the invariant route is not registered
+
+This message checks the invariant provided, and if the invariant is broken it
+panics, halting the blockchain.
If the invariant is broken, the constant fee is
+never deducted as the transaction is never committed to a block (equivalent to
+being refunded). However, if the invariant is not broken, the constant fee will
+not be refunded.
+
+## Events
+
+The crisis module emits the following events:
+
+### Handlers
+
+#### MsgVerifyInvariant
+
+| Type | Attribute Key | Attribute Value |
+| --------- | ------------- | ------------------ |
+| invariant | route | `{invariantRoute}` |
+| message | module | crisis |
+| message | action | verify\_invariant |
+| message | sender | `{senderAddress}` |
+
+## Parameters
+
+The crisis module contains the following parameters:
+
+| Key | Type | Example |
+| ----------- | ------------- | ----------------------------------- |
+| ConstantFee | object (coin) | `{"denom":"uatom","amount":"1000"}` |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `crisis` module using the CLI.
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `crisis` module.
+
+```bash
+simd tx crisis --help
+```
+
+##### invariant-broken
+
+The `invariant-broken` command submits proof that an invariant is broken in order to halt the chain.
+
+```bash
+simd tx crisis invariant-broken [module-name] [invariant-route] [flags]
+```
+
+Example:
+
+```bash
+simd tx crisis invariant-broken bank total-supply --from=[keyname or address]
+```
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/distribution/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/distribution/README.mdx
new file mode 100644
index 00000000..496147ca
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/distribution/README.mdx
@@ -0,0 +1,1132 @@
+---
+title: '`x/distribution`'
+---
+
+## Overview
+
+This *simple* distribution mechanism describes a functional way to passively
+distribute rewards between validators and delegators.
Note that this mechanism does
+not distribute funds as precisely as active reward distribution mechanisms and
+will therefore be upgraded in the future.
+
+The mechanism operates as follows. Collected rewards are pooled globally and
+divided out passively to validators and delegators. Each validator has the
+opportunity to charge commission to the delegators on the rewards collected on
+behalf of the delegators. Fees are collected directly into a global reward pool
+and validator proposer-reward pool. Due to the nature of passive accounting,
+whenever changes to parameters which affect the rate of reward distribution
+occur, withdrawal of rewards must also occur.
+
+* Whenever withdrawing, one must withdraw the maximum amount they are entitled
+  to, leaving nothing in the pool.
+* Whenever bonding, unbonding, or re-delegating tokens to an existing account, a
+  full withdrawal of the rewards must occur (as the rules for lazy accounting
+  change).
+* Whenever a validator chooses to change the commission on rewards, all accumulated
+  commission rewards must be simultaneously withdrawn.
+
+The above scenarios are covered in `hooks.md`.
+
+The distribution mechanism outlined herein is used to lazily distribute the
+following rewards between validators and associated delegators:
+
+* multi-token fees to be socially distributed
+* inflated staked asset provisions
+* validator commission on all rewards earned by their delegators' stake
+
+Fees are pooled within a global pool. The mechanisms used allow for validators
+and delegators to independently and lazily withdraw their rewards.
+
+## Shortcomings
+
+As a part of the lazy computations, each delegator holds an accumulation term
+specific to each validator which is used to estimate the approximate
+fair portion of the tokens held in the global fee pool that is owed to them.
+
+```text
+entitlement = delegator-accumulation / all-delegators-accumulation
+```
+
+Under the circumstance that there was constant and equal flow of incoming
+reward tokens every block, this distribution mechanism would be equal to the
+active distribution (distribute individually to all delegators each block).
+However, this is unrealistic, so deviations from the active distribution will
+occur based on fluctuations of incoming reward tokens as well as timing of
+reward withdrawal by other delegators.
+
+If you happen to know that incoming rewards are about to significantly increase,
+you are incentivized to not withdraw until after this event, increasing the
+worth of your existing *accum*. See [#2764](https://github.com/cosmos/cosmos-sdk/issues/2764)
+for further details.
+
+## Effect on Staking
+
+Charging commission on Atom provisions while also allowing for Atom-provisions
+to be auto-bonded (distributed directly to the validators bonded stake) is
+problematic within BPoS. Fundamentally, these two mechanisms are mutually
+exclusive. If both commission and auto-bonding mechanisms are simultaneously
+applied to the staking-token then the distribution of staking-tokens between
+any validator and its delegators will change with each block. This then
+necessitates a calculation for each delegation record for each block -
+which is considered computationally expensive.
+
+In conclusion, we can only have Atom commission and unbonded atom
+provisions or bonded atom provisions with no Atom commission, and we elect to
+implement the former. Stakeholders wishing to rebond their provisions may elect
+to set up a script to periodically withdraw and rebond rewards.
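As a rough numeric illustration of the accumulation formula above (all figures are hypothetical, not SDK code): with 100 tokens in the global pool, a delegator whose accumulation term is 30 out of a total of 120 is entitled to 25 tokens.

```go
package main

import "fmt"

func main() {
	// Hypothetical figures illustrating
	// entitlement = delegator-accumulation / all-delegators-accumulation.
	pool := 100.0 // tokens currently in the global fee pool
	delegators := []struct {
		name  string
		accum float64 // per-validator accumulation term
	}{
		{"alice", 30},
		{"bob", 90},
	}

	total := 0.0
	for _, d := range delegators {
		total += d.accum
	}
	for _, d := range delegators {
		fmt.Printf("%s is entitled to %.1f tokens\n", d.name, pool*d.accum/total)
	}
	// alice is entitled to 25.0 tokens
	// bob is entitled to 75.0 tokens
}
```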
+ +## Contents + +* [Concepts](#concepts) +* [State](#state) + * [FeePool](#feepool) + * [Validator Distribution](#validator-distribution) + * [Delegation Distribution](#delegation-distribution) + * [Params](#params) +* [Begin Block](#begin-block) +* [Messages](#messages) +* [Hooks](#hooks) +* [Events](#events) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + +## Concepts + +In Proof of Stake (PoS) blockchains, rewards gained from transaction fees are paid to validators. The fee distribution module fairly distributes the rewards to the validators' constituent delegators. + +Rewards are calculated per period. The period is updated each time a validator's delegation changes, for example, when the validator receives a new delegation. +The rewards for a single validator can then be calculated by taking the total rewards for the period before the delegation started, minus the current total rewards. +To learn more, see the [F1 Fee Distribution paper](https://github.com/cosmos/cosmos-sdk/tree/main/docs/spec/fee_distribution/f1_fee_distr.pdf). + +The commission to the validator is paid when the validator is removed or when the validator requests a withdrawal. +The commission is calculated and incremented at every `BeginBlock` operation to update accumulated fee amounts. + +The rewards to a delegator are distributed when the delegation is changed or removed, or a withdrawal is requested. +Before rewards are distributed, all slashes to the validator that occurred during the current delegation are applied. + +### Reference Counting in F1 Fee Distribution + +In F1 fee distribution, the rewards a delegator receives are calculated when their delegation is withdrawn. This calculation must read the terms of the summation of rewards divided by the share of tokens from the period which they ended when they delegated, and the final period that was created for the withdrawal. 
+ +Additionally, as slashes change the amount of tokens a delegation will have (but we calculate this lazily, +only when a delegator un-delegates), we must calculate rewards in separate periods before / after any slashes +which occurred in between when a delegator delegated and when they withdrew their rewards. Thus slashes, like +delegations, reference the period which was ended by the slash event. + +All stored historical rewards records for periods which are no longer referenced by any delegations +or any slashes can thus be safely removed, as they will never be read (future delegations and future +slashes will always reference future periods). This is implemented by tracking a `ReferenceCount` +along with each historical reward storage entry. Each time a new object (delegation or slash) +is created which might need to reference the historical record, the reference count is incremented. +Each time one object which previously needed to reference the historical record is deleted, the reference +count is decremented. If the reference count hits zero, the historical record is deleted. + +## State + +### FeePool + +All globally tracked parameters for distribution are stored within +`FeePool`. Rewards are collected and added to the reward pool and +distributed to validators/delegators from here. + +Note that the reward pool holds decimal coins (`DecCoins`) to allow +for fractions of coins to be received from operations like inflation. +When coins are distributed from the pool they are truncated back to +`sdk.Coins` which are non-decimal. + +* FeePool: `0x00 -> ProtocolBuffer(FeePool)` + +```go +/ coins with decimal +type DecCoins []DecCoin + +type DecCoin struct { + Amount math.LegacyDec + Denom string +} +``` + +```protobuf +// FeePool is the global fee pool for distribution. 
+message FeePool {
+ repeated cosmos.base.v1beta1.DecCoin community_pool = 1 [
+  (gogoproto.nullable) = false,
+  (amino.dont_omitempty) = true,
+  (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.DecCoins"
+ ];
+}
+```
+
+### Validator Distribution
+
+Validator distribution information for the relevant validator is updated each time:
+
+1. delegation amount to a validator is updated,
+2. any delegator withdraws from a validator, or
+3. the validator withdraws its commission.
+
+* ValidatorDistInfo: `0x02 | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(validatorDistribution)`
+
+```go
+type ValidatorDistInfo struct {
+ OperatorAddress sdk.AccAddress
+ SelfBondRewards sdkmath.DecCoins
+ ValidatorCommission types.ValidatorAccumulatedCommission
+}
+```
+
+### Delegation Distribution
+
+Each delegation distribution only needs to record the height at which it last
+withdrew fees. Because a delegation must withdraw fees each time its
+properties change (e.g., bonded tokens), its properties will remain constant
+and the delegator's *accumulation* factor can be calculated passively knowing
+only the height of the last withdrawal and its current properties.
+
+* DelegationDistInfo: `0x02 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(delegatorDist)`
+
+```go
+type DelegationDistInfo struct {
+ WithdrawalHeight int64 / last time this delegation withdrew rewards
+}
+```
+
+### Params
+
+The distribution module stores its params in state with the prefix of `0x09`;
+they can be updated with governance or the address with authority.
+
+* Params: `0x09 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params defines the set of params for the distribution module.
+message Params { + option (amino.name) = "cosmos-sdk/x/distribution/Params"; + option (gogoproto.goproto_stringer) = false; + + string community_tax = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + + // Deprecated: The base_proposer_reward field is deprecated and is no longer used + // in the x/distribution module's reward mechanism. + string base_proposer_reward = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + deprecated = true + ]; + + // Deprecated: The bonus_proposer_reward field is deprecated and is no longer used + // in the x/distribution module's reward mechanism. + string bonus_proposer_reward = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + deprecated = true + ]; + + bool withdraw_addr_enabled = 4; +} +``` + +## Begin Block + +At each `BeginBlock`, all fees received in the previous block are transferred to +the distribution `ModuleAccount` account. When a delegator or validator +withdraws their rewards, they are taken out of the `ModuleAccount`. During begin +block, the different claims on the fees collected are updated as follows: + +* The reserve community tax is charged. +* The remainder is distributed proportionally by voting power to all bonded validators + +### The Distribution Scheme + +See [params](#params) for description of parameters. + +Let `fees` be the total fees collected in the previous block, including +inflationary rewards to the stake. All fees are collected in a specific module +account during the block. During `BeginBlock`, they are sent to the +`"distribution"` `ModuleAccount`. No other sending of tokens occurs. 
Instead, the +rewards each account is entitled to are stored, and withdrawals can be triggered +through the messages `FundCommunityPool`, `WithdrawValidatorCommission` and +`WithdrawDelegatorReward`. + +#### Reward to the Community Pool + +The community pool gets `community_tax * fees`, plus any remaining dust after +validators get their rewards that are always rounded down to the nearest +integer value. + +#### Reward To the Validators + +The proposer receives no extra rewards. All fees are distributed among all the +bonded validators, including the proposer, in proportion to their consensus power. + +```text +powFrac = validator power / total bonded validator power +voteMul = 1 - community_tax +``` + +All validators receive `fees * voteMul * powFrac`. + +#### Rewards to Delegators + +Each validator's rewards are distributed to its delegators. The validator also +has a self-delegation that is treated like a regular delegation in +distribution calculations. + +The validator sets a commission rate. The commission rate is flexible, but each +validator sets a maximum rate and a maximum daily increase. These maximums cannot be exceeded and protect delegators from sudden increases of validator commission rates to prevent validators from taking all of the rewards. + +The outstanding rewards that the operator is entitled to are stored in +`ValidatorAccumulatedCommission`, while the rewards the delegators are entitled +to are stored in `ValidatorCurrentRewards`. The [F1 fee distribution scheme](#concepts) is used to calculate the rewards per delegator as they +withdraw or update their delegation, and is thus not handled in `BeginBlock`. + +#### Example Distribution + +For this example distribution, the underlying consensus engine selects block proposers in +proportion to their power relative to the entire bonded power. + +All validators are equally performant at including pre-commits in their proposed +blocks. 
Then hold `(pre_commits included) / (total bonded validator power)` +constant so that the amortized block reward for the validator is `( validator power / total bonded power) * (1 - community tax rate)` of +the total rewards. Consequently, the reward for a single delegator is: + +```text +(delegator proportion of the validator power / validator power) * (validator power / total bonded power) + * (1 - community tax rate) * (1 - validator commission rate) += (delegator proportion of the validator power / total bonded power) * (1 - +community tax rate) * (1 - validator commission rate) +``` + +## Messages + +### MsgSetWithdrawAddress + +By default, the withdraw address is the delegator address. To change its withdraw address, a delegator must send a `MsgSetWithdrawAddress` message. +Changing the withdraw address is possible only if the parameter `WithdrawAddrEnabled` is set to `true`. + +The withdraw address cannot be any of the module accounts. These accounts are blocked from being withdraw addresses by being added to the distribution keeper's `blockedAddrs` array at initialization. + +Response: + +```protobuf +// MsgSetWithdrawAddress sets the withdraw address for +// a delegator (or validator self-delegation). 
+message MsgSetWithdrawAddress {
+ option (cosmos.msg.v1.signer) = "delegator_address";
+ option (amino.name) = "cosmos-sdk/MsgModifyWithdrawAddress";
+
+ option (gogoproto.equal) = false;
+ option (gogoproto.goproto_getters) = false;
+
+ string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ string withdraw_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+```go
+func (k Keeper) SetWithdrawAddr(ctx context.Context, delegatorAddr sdk.AccAddress, withdrawAddr sdk.AccAddress) error {
+ if k.blockedAddrs[withdrawAddr.String()] {
+  fail with "`{withdrawAddr}` is not allowed to receive external funds"
+ }
+ if !k.GetWithdrawAddrEnabled(ctx) {
+  fail with `ErrSetWithdrawAddrDisabled`
+ }
+ k.SetDelegatorWithdrawAddr(ctx, delegatorAddr, withdrawAddr)
+}
+```
+
+### MsgWithdrawDelegatorReward
+
+A delegator can withdraw its rewards.
+Internally in the distribution module, this transaction simultaneously removes the previous delegation with associated rewards, the same as if the delegator simply started a new delegation of the same value.
+The rewards are sent immediately from the distribution `ModuleAccount` to the withdraw address.
+Any remainder (truncated decimals) are sent to the community pool.
+The starting height of the delegation is set to the current validator period, and the reference count for the previous period is decremented.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+
+In the F1 distribution, the total rewards are calculated per validator period, and a delegator receives a piece of those rewards in proportion to their stake in the validator.
+In basic F1, the total rewards that all the delegators are entitled to between two periods are calculated the following way.
+Let `R(X)` be the total accumulated rewards up to period `X` divided by the tokens staked at that time. The delegator allocation is `R(X) * delegator_stake`.
+Then the rewards for all the delegators for staking between periods `A` and `B` are `(R(B) - R(A)) * total stake`.
+However, these calculated rewards don't account for slashing.
+
+Taking the slashes into account requires iteration.
+Let `F(X)` be the fraction a validator is to be slashed for a slashing event that happened at period `X`.
+If the validator was slashed at periods `P1, ..., PN`, where `A < P1`, `PN < B`, the distribution module calculates the individual delegator's rewards, `T(A, B)`, as follows:
+
+```go
+stake := initial stake
+rewards := 0
+previous := A
+for P in P1, ..., PN:
+    rewards = rewards + (R(P) - R(previous)) * stake
+    stake = stake * F(P)
+    previous = P
+rewards = rewards + (R(B) - R(PN)) * stake
+```
+
+The historical rewards are calculated retroactively by playing back all the slashes and then attenuating the delegator's stake at each step.
+The final calculated stake is equivalent to the actual staked coins in the delegation with a margin of error due to rounding errors.
+
+Response:
+
+```protobuf
+// MsgWithdrawDelegatorReward represents delegation withdrawal to a delegator
+// from a single validator.
+message MsgWithdrawDelegatorReward {
+ option (cosmos.msg.v1.signer) = "delegator_address";
+ option (amino.name) = "cosmos-sdk/MsgWithdrawDelegationReward";
+
+ option (gogoproto.equal) = false;
+ option (gogoproto.goproto_getters) = false;
+
+ string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+### WithdrawValidatorCommission
+
+The validator can send the WithdrawValidatorCommission message to withdraw their accumulated commission.
+The commission is calculated in every block during `BeginBlock`, so no iteration is required to withdraw.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+Only integer amounts can be sent.
If the accumulated rewards have decimals, the amount is truncated before the withdrawal is sent, and the remainder is left to be withdrawn later.
+
+### FundCommunityPool
+
+This message sends coins directly from the sender to the community pool.
+
+The transaction fails if the amount cannot be transferred from the sender to the distribution module account.
+
+```go expandable
+func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error {
+ if err := k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount); err != nil {
+  return err
+ }
+
+ feePool, err := k.FeePool.Get(ctx)
+ if err != nil {
+  return err
+ }
+
+ feePool.CommunityPool = feePool.CommunityPool.Add(sdk.NewDecCoinsFromCoins(amount...)...)
+ if err := k.FeePool.Set(ctx, feePool); err != nil {
+  return err
+ }
+
+ return nil
+}
+```
+
+### Common distribution operations
+
+These operations take place during many different messages.
+
+#### Initialize delegation
+
+Each time a delegation is changed, the rewards are withdrawn and the delegation is reinitialized.
+Initializing a delegation increments the validator period and keeps track of the starting period of the delegation.
+
+```go expandable
+/ initialize starting info for a new delegation
+func (k Keeper) initializeDelegation(ctx context.Context, val sdk.ValAddress, del sdk.AccAddress) {
+ / period has already been incremented - we want to store the period ended by this delegation action
+ previousPeriod := k.GetValidatorCurrentRewards(ctx, val).Period - 1
+
+ / increment reference count for the period we're going to track
+ k.incrementReferenceCount(ctx, val, previousPeriod)
+ validator := k.stakingKeeper.Validator(ctx, val)
+ delegation := k.stakingKeeper.Delegation(ctx, del, val)
+
+ / calculate delegation stake in tokens
+ / we don't store directly, so multiply delegation shares * (tokens per share)
+ / note: necessary to truncate so we don't allow withdrawing more rewards than owed
+ stake := validator.TokensFromSharesTruncated(delegation.GetShares())
+
+ k.SetDelegatorStartingInfo(ctx, val, del, types.NewDelegatorStartingInfo(previousPeriod, stake, uint64(ctx.BlockHeight())))
+}
+```
+
+### MsgUpdateParams
+
+Distribution module params can be updated through `MsgUpdateParams`, which is done via a governance proposal; the signer will always be the gov module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+ option (cosmos.msg.v1.signer) = "authority";
+ option (amino.name) = "cosmos-sdk/distribution/MsgUpdateParams";
+
+ // authority is the address that controls the module (defaults to x/gov unless overwritten).
+ string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ // params defines the x/distribution parameters to update.
+ //
+ // NOTE: All parameters must be supplied.
+ Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+The message handling can fail if:
+
+* the signer is not the gov module account address.
+
+## Hooks
+
+Available hooks that can be called by and from this module.
+ +### Create or modify delegation distribution + +* triggered-by: `staking.MsgDelegate`, `staking.MsgBeginRedelegate`, `staking.MsgUndelegate` + +#### Before + +* The delegation rewards are withdrawn to the withdraw address of the delegator. + The rewards include the current period and exclude the starting period. +* The validator period is incremented. + The validator period is incremented because the validator's power and share distribution might have changed. +* The reference count for the delegator's starting period is decremented. + +#### After + +The starting height of the delegation is set to the previous period. +Because of the `Before`-hook, this period is the last period for which the delegator was rewarded. + +### Validator created + +* triggered-by: `staking.MsgCreateValidator` + +When a validator is created, the following validator variables are initialized: + +* Historical rewards +* Current accumulated rewards +* Accumulated commission +* Total outstanding rewards +* Period + +By default, all values are set to a `0`, except period, which is set to `1`. + +### Validator removed + +* triggered-by: `staking.RemoveValidator` + +Outstanding commission is sent to the validator's self-delegation withdrawal address. +Remaining delegator rewards get sent to the community fee pool. + +Note: The validator gets removed only when it has no remaining delegations. +At that time, all outstanding delegator rewards will have been withdrawn. +Any remaining rewards are dust amounts. + +### Validator is slashed + +* triggered-by: `staking.Slash` +* The current validator period reference count is incremented. + The reference count is incremented because the slash event has created a reference to it. +* The validator period is incremented. +* The slash event is stored for later use. + The slash event will be referenced when calculating delegator rewards. 
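The reference counting these hooks perform (incrementing on new delegations and slash events, decrementing on withdrawal) follows the scheme described under "Reference Counting in F1 Fee Distribution": a historical reward record is pruned as soon as nothing can read it again. A minimal self-contained Go sketch of that lifecycle — the types are illustrative, not the module's actual storage layer:

```go
package main

import "fmt"

// historical models a stored historical-rewards record for one period.
type historical struct {
	cumulativeRewardRatio float64
	referenceCount        int
}

// store maps period -> record (illustrative stand-in for module state).
type store map[uint64]*historical

// incrementReferenceCount is called when a new delegation or slash
// event may later need to read this period's record.
func (s store) incrementReferenceCount(period uint64) {
	s[period].referenceCount++
}

// decrementReferenceCount is called when such an object is deleted;
// at zero references the record can never be read again and is pruned.
func (s store) decrementReferenceCount(period uint64) {
	rec := s[period]
	rec.referenceCount--
	if rec.referenceCount == 0 {
		delete(s, period)
	}
}

func main() {
	s := store{7: {cumulativeRewardRatio: 0.42}}
	s.incrementReferenceCount(7) // a new delegation references period 7
	s.incrementReferenceCount(7) // a slash event references period 7
	s.decrementReferenceCount(7) // the delegation is withdrawn
	fmt.Println(len(s)) // prints 1: still referenced by the slash
	s.decrementReferenceCount(7) // the slash record is deleted
	fmt.Println(len(s)) // prints 0: record pruned
}
```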
+ +## Events + +The distribution module emits the following events: + +### BeginBlocker + +| Type | Attribute Key | Attribute Value | +| ---------------- | ------------- | ------------------ | +| proposer\_reward | validator | `{validatorAddress}` | +| proposer\_reward | reward | `{proposerReward}` | +| commission | amount | `{commissionAmount}` | +| commission | validator | `{validatorAddress}` | +| rewards | amount | `{rewardAmount}` | +| rewards | validator | `{validatorAddress}` | + +### Handlers + +#### MsgSetWithdrawAddress + +| Type | Attribute Key | Attribute Value | +| ---------------------- | ----------------- | ---------------------- | +| set\_withdraw\_address | withdraw\_address | `{withdrawAddress}` | +| message | module | distribution | +| message | action | set\_withdraw\_address | +| message | sender | `{senderAddress}` | + +#### MsgWithdrawDelegatorReward + +| Type | Attribute Key | Attribute Value | +| ----------------- | ------------- | --------------------------- | +| withdraw\_rewards | amount | `{rewardAmount}` | +| withdraw\_rewards | validator | `{validatorAddress}` | +| message | module | distribution | +| message | action | withdraw\_delegator\_reward | +| message | sender | `{senderAddress}` | + +#### MsgWithdrawValidatorCommission + +| Type | Attribute Key | Attribute Value | +| -------------------- | ------------- | ------------------------------- | +| withdraw\_commission | amount | `{commissionAmount}` | +| message | module | distribution | +| message | action | withdraw\_validator\_commission | +| message | sender | `{senderAddress}` | + +## Parameters + +The distribution module contains the following parameters: + +| Key | Type | Example | +| ------------------- | ------------ | --------------------------- | +| communitytax | string (dec) | "0.020000000000000000" \[0] | +| withdrawaddrenabled | bool | true | + +* \[0] `communitytax` must be positive and cannot exceed 1.00. 
+* `baseproposerreward` and `bonusproposerreward` are parameters that were deprecated in v0.47 and are no longer used.
+
+
+The reserve pool is the pool of collected funds for use by governance taken via the `CommunityTax`.
+Currently with the Cosmos SDK, tokens collected by the CommunityTax are accounted for but unspendable.
+
+
+## Client
+
+### CLI
+
+A user can query and interact with the `distribution` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `distribution` state.
+
+```shell
+simd query distribution --help
+```
+
+##### commission
+
+The `commission` command allows users to query validator commission rewards by address.
+
+```shell
+simd query distribution commission [address] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution commission cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### community-pool
+
+The `community-pool` command allows users to query all coin balances within the community pool.
+
+```shell
+simd query distribution community-pool [flags]
+```
+
+Example:
+
+```shell
+simd query distribution community-pool
+```
+
+Example Output:
+
+```yml
+pool:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### params
+
+The `params` command allows users to query the parameters of the `distribution` module.
+
+```shell
+simd query distribution params [flags]
+```
+
+Example:
+
+```shell
+simd query distribution params
+```
+
+Example Output:
+
+```yml
+base_proposer_reward: "0.000000000000000000"
+bonus_proposer_reward: "0.000000000000000000"
+community_tax: "0.020000000000000000"
+withdraw_addr_enabled: true
+```
+
+##### rewards
+
+The `rewards` command allows users to query delegator rewards. Users can optionally include the validator address to query rewards earned from a specific validator.
+
+```shell
+simd query distribution rewards [delegator-addr] [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution rewards cosmos1...
+```
+
+Example Output:
+
+```yml
+rewards:
+- reward:
+  - amount: "1000000.000000000000000000"
+    denom: stake
+  validator_address: cosmosvaloper1...
+total:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### slashes
+
+The `slashes` command allows users to query all slashes for a given block range.
+
+```shell
+simd query distribution slashes [validator] [start-height] [end-height] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution slashes cosmosvaloper1... 1 1000
+```
+
+Example Output:
+
+```yml
+pagination:
+  next_key: null
+  total: "0"
+slashes:
+- validator_period: 20
+  fraction: "0.009999999999999999"
+```
+
+##### validator-outstanding-rewards
+
+The `validator-outstanding-rewards` command allows users to query all outstanding (un-withdrawn) rewards for a validator and all their delegations.
+
+```shell
+simd query distribution validator-outstanding-rewards [validator] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution validator-outstanding-rewards cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+rewards:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### validator-distribution-info
+
+The `validator-distribution-info` command allows users to query validator commission and self-delegation rewards for a validator.
+
+```shell
+simd query distribution validator-distribution-info cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "100000.000000000000000000"
+  denom: stake
+operator_address: cosmosvaloper1...
+self_bond_rewards:
+- amount: "100000.000000000000000000"
+  denom: stake
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `distribution` module.
+
+```shell
+simd tx distribution --help
+```
+
+##### fund-community-pool
+
+The `fund-community-pool` command allows users to send funds to the community pool.
+
+```shell
+simd tx distribution fund-community-pool [amount] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution fund-community-pool 100stake --from cosmos1...
+```
+
+##### set-withdraw-addr
+
+The `set-withdraw-addr` command allows users to set the withdraw address for rewards associated with a delegator address.
+
+```shell
+simd tx distribution set-withdraw-addr [withdraw-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution set-withdraw-addr cosmos1... --from cosmos1...
+```
+
+##### withdraw-all-rewards
+
+The `withdraw-all-rewards` command allows users to withdraw all rewards for a delegator.
+
+```shell
+simd tx distribution withdraw-all-rewards [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-all-rewards --from cosmos1...
+```
+
+##### withdraw-rewards
+
+The `withdraw-rewards` command allows users to withdraw all rewards from a given delegation address,
+and optionally withdraw validator commission if the delegation address given is a validator operator and the user provides the `--commission` flag.
+
+```shell
+simd tx distribution withdraw-rewards [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-rewards cosmosvaloper1... --from cosmos1... --commission
+```
+
+### gRPC
+
+A user can query the `distribution` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query parameters of the `distribution` module.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "communityTax": "20000000000000000",
+    "baseProposerReward": "00000000000000000",
+    "bonusProposerReward": "00000000000000000",
+    "withdrawAddrEnabled": true
+  }
+}
+```
+
+#### ValidatorDistributionInfo
+
+The `ValidatorDistributionInfo` endpoint allows users to query validator commission and self-delegation rewards for a validator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"validator_address":"cosmosvaloper1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/ValidatorDistributionInfo
+```
+
+Example Output:
+
+```json
+{
+  "commission": {
+    "commission": [
+      {
+        "denom": "stake",
+        "amount": "1000000000000000"
+      }
+    ]
+  },
+  "self_bond_rewards": [
+    {
+      "denom": "stake",
+      "amount": "1000000000000000"
+    }
+  ],
+  "validator_address": "cosmosvaloper1..."
+}
+```
+
+#### ValidatorOutstandingRewards
+
+The `ValidatorOutstandingRewards` endpoint allows users to query rewards of a validator address.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"validator_address":"cosmosvaloper1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/ValidatorOutstandingRewards
+```
+
+Example Output:
+
+```json
+{
+  "rewards": {
+    "rewards": [
+      {
+        "denom": "stake",
+        "amount": "1000000000000000"
+      }
+    ]
+  }
+}
+```
+
+#### ValidatorCommission
+
+The `ValidatorCommission` endpoint allows users to query accumulated commission for a validator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"validator_address":"cosmosvaloper1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/ValidatorCommission
+```
+
+Example Output:
+
+```json
+{
+  "commission": {
+    "commission": [
+      {
+        "denom": "stake",
+        "amount": "1000000000000000"
+      }
+    ]
+  }
+}
+```
+
+#### ValidatorSlashes
+
+The `ValidatorSlashes` endpoint allows users to query slash events of a validator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"validator_address":"cosmosvaloper1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/ValidatorSlashes
+```
+
+Example Output:
+
+```json
+{
+  "slashes": [
+    {
+      "validator_period": "20",
+      "fraction": "0.009999999999999999"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+#### DelegationRewards
+
+The `DelegationRewards` endpoint allows users to query the total rewards accrued by a delegation.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"delegator_address":"cosmos1...","validator_address":"cosmosvaloper1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/DelegationRewards
+```
+
+Example Output:
+
+```json
+{
+  "rewards": [
+    {
+      "denom": "stake",
+      "amount": "1000000000000000"
+    }
+  ]
+}
+```
+
+#### DelegationTotalRewards
+
+The `DelegationTotalRewards` endpoint allows users to query the total rewards accrued by a delegator, broken down per validator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"delegator_address":"cosmos1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/DelegationTotalRewards
+```
+
+Example Output:
+
+```json
+{
+  "rewards": [
+    {
+      "validatorAddress": "cosmosvaloper1...",
+      "reward": [
+        {
+          "denom": "stake",
+          "amount": "1000000000000000"
+        }
+      ]
+    }
+  ],
+  "total": [
+    {
+      "denom": "stake",
+      "amount": "1000000000000000"
+    }
+  ]
+}
+```
+
+#### DelegatorValidators
+
+The `DelegatorValidators` endpoint allows users to query all validators for a given delegator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"delegator_address":"cosmos1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/DelegatorValidators
+```
+
+Example Output:
+
+```json
+{
+  "validators": ["cosmosvaloper1..."]
+}
+```
+
+#### DelegatorWithdrawAddress
+
+The `DelegatorWithdrawAddress` endpoint allows users to query the withdraw address of a delegator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"delegator_address":"cosmos1..."}' \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/DelegatorWithdrawAddress
+```
+
+Example Output:
+
+```json
+{
+  "withdrawAddress": "cosmos1..."
+}
+```
+
+#### CommunityPool
+
+The `CommunityPool` endpoint allows users to query the community pool coins.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    localhost:9090 \
+    cosmos.distribution.v1beta1.Query/CommunityPool
+```
+
+Example Output:
+
+```json
+{
+  "pool": [
+    {
+      "denom": "stake",
+      "amount": "1000000000000000000"
+    }
+  ]
+}
+```
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/epochs/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/epochs/README.mdx
new file mode 100644
index 00000000..900dac7d
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/epochs/README.mdx
@@ -0,0 +1,179 @@
+---
+title: '`x/epochs`'
+---
+
+## Abstract
+
+Often in the SDK, we would like to run certain code every so often. The
+purpose of the `epochs` module is to allow other modules to register to
+be signaled once every period. For example, another module can specify
+that it wants to execute code once a week, starting at UTC-time = x.
+`epochs` creates a generalized epoch interface for other modules so that
+they can easily be signaled upon such events.
+
+## Contents
+
+1. **[Concepts](#concepts)**
+2. **[State](#state)**
+3. **[Events](#events)**
+4. **[Keeper](#keepers)**
+5. **[Hooks](#hooks)**
+6. **[Queries](#queries)**
+
+## Concepts
+
+The epochs module defines on-chain timers that execute at fixed time intervals.
+Other SDK modules can then register logic to be executed at the timer ticks.
+We refer to the period in between two timer ticks as an "epoch".
+
+Every timer has a unique identifier.
+Every epoch will have a start time, and an end time, where `end time = start time + timer interval`.
+On mainnet, we only utilize one identifier, with a time interval of `one day`.
+
+The timer will tick at the first block whose block time is greater than the timer end time,
+and set the start as the prior timer end time. (Notably, it's not set to the block time!)
+This means that if the chain has been down for a while, you will get one timer tick per block,
+until the timer has caught up.
+
+## State
+
+The Epochs module keeps a single `EpochInfo` per identifier.
+This contains the current state of the timer with the corresponding identifier.
+Its fields are modified at every timer tick.
+EpochInfos are initialized as part of genesis initialization or upgrade logic,
+and are only modified on begin blockers.
+
+## Events
+
+The `epochs` module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key | Attribute Value |
+| ------------ | ------------- | ---------------- |
+| epoch\_start | epoch\_number | `{epoch_number}` |
+| epoch\_start | start\_time | `{start_time}` |
+
+### EndBlocker
+
+| Type | Attribute Key | Attribute Value |
+| ---------- | ------------- | ---------------- |
+| epoch\_end | epoch\_number | `{epoch_number}` |
+
+## Keepers
+
+### Keeper functions
+
+The epochs keeper provides utility functions to manage epochs.
+
+## Hooks
+
+```go
+// the first block whose timestamp is after the duration is counted as the end of the epoch
+AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64)
+// the new epoch is the next block after the epoch-end block
+BeforeEpochStart(ctx sdk.Context, epochIdentifier string, epochNumber int64)
+```
+
+### How modules receive hooks
+
+In their hook receiver functions, other modules need to filter on
+`epochIdentifier` and execute logic only for specific identifiers.
+The `epochIdentifier` to filter on can be stored in the other module's
+`Params` so that it can be modified by governance.
+
+The standard pattern looks like this:
+
+```golang
+func (k MyModuleKeeper) AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64) {
+	params := k.GetParams(ctx)
+	if epochIdentifier == params.DistrEpochIdentifier {
+		// my logic
+	}
+}
+```
+
+### Panic isolation
+
+If a given epoch hook panics, its state update is reverted, but we keep
+proceeding through the remaining hooks. This allows more advanced epoch
+logic to be used, without concern over state machine halting, or halting
+subsequent modules.
+
+This does mean that if there is behavior you expect from a prior epoch
+hook, and that epoch hook reverted, your hook may also have an issue. So
+do keep in mind "what if a prior hook didn't get executed" in the safety
+checks you consider for a new epoch hook.
+
+## Queries
+
+The Epochs module provides the following queries to check the module's state.
+
+```protobuf
+service Query {
+  // EpochInfos provide running epochInfos
+  rpc EpochInfos(QueryEpochsInfoRequest) returns (QueryEpochsInfoResponse) {}
+  // CurrentEpoch provide current epoch of specified identifier
+  rpc CurrentEpoch(QueryCurrentEpochRequest) returns (QueryCurrentEpochResponse) {}
+}
+```
+
+### Epoch Infos
+
+Query the currently running epochInfos
+
+```sh
+ query epochs epoch-infos
+```
+
+**Example**
+
+An example output:
+
+```sh expandable
+epochs:
+- current_epoch: "183"
+  current_epoch_start_height: "2438409"
+  current_epoch_start_time: "2021-12-18T17:16:09.898160996Z"
+  duration: 86400s
+  epoch_counting_started: true
+  identifier: day
+  start_time: "2021-06-18T17:00:00Z"
+- current_epoch: "26"
+  current_epoch_start_height: "2424854"
+  current_epoch_start_time: "2021-12-17T17:02:07.229632445Z"
+  duration: 604800s
+  epoch_counting_started: true
+  identifier: week
+  start_time: "2021-06-18T17:00:00Z"
+```
+
+### Current Epoch
+
+Query the current epoch by the specified identifier
+
+```sh
+ query epochs current-epoch [identifier]
+```
+
+**Example**
+
+Query the
current `day` epoch: + +```sh + query epochs current-epoch day +``` + +Which in this example outputs: + +```sh +current_epoch: "183" +``` + + diff --git a/docs/sdk/v0.50/documentation/module-system/modules/evidence/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/evidence/README.mdx new file mode 100644 index 00000000..f409db40 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/evidence/README.mdx @@ -0,0 +1,624 @@ +--- +title: '`x/evidence`' +description: Concepts State Messages Events Parameters BeginBlock Client CLI REST gRPC +--- + +* [Concepts](#concepts) +* [State](#state) +* [Messages](#messages) +* [Events](#events) +* [Parameters](#parameters) +* [BeginBlock](#beginblock) +* [Client](#client) + * [CLI](#cli) + * [REST](#rest) + * [gRPC](#grpc) + +## Abstract + +`x/evidence` is an implementation of a Cosmos SDK module, per [ADR 009](docs/sdk/next/documentation/legacy/adr-comprehensive), +that allows for the submission and handling of arbitrary evidence of misbehavior such +as equivocation and counterfactual signing. + +The evidence module differs from standard evidence handling which typically expects the +underlying consensus engine, e.g. CometBFT, to automatically submit evidence when +it is discovered by allowing clients and foreign chains to submit more complex evidence +directly. + +All concrete evidence types must implement the `Evidence` interface contract. Submitted +`Evidence` is first routed through the evidence module's `Router` in which it attempts +to find a corresponding registered `Handler` for that specific `Evidence` type. +Each `Evidence` type must have a `Handler` registered with the evidence module's +keeper in order for it to be successfully routed and executed. + +Each corresponding handler must also fulfill the `Handler` interface contract. The +`Handler` for a given `Evidence` type can perform any arbitrary state transitions +such as slashing, jailing, and tombstoning. 
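The submission flow sketched in this abstract — a concrete evidence type routed by name to a registered handler — can be made concrete with toy stand-ins. Note these are not the SDK's actual types; `Router` as a map, `Submit`, and the `equivocation` struct below are illustrative only:

```go
package main

import (
	"errors"
	"fmt"
)

// Toy stand-ins for the module's Evidence and Handler contracts.
type Evidence interface {
	Route() string
	Hash() []byte
}

type Handler func(Evidence) error

// Router maps an evidence route name to its registered handler, mirroring
// how submitted evidence is looked up via its Route() method.
type Router map[string]Handler

// Submit routes evidence to its handler, failing if no handler is registered.
func (r Router) Submit(ev Evidence) error {
	h, ok := r[ev.Route()]
	if !ok {
		return errors.New("no handler registered for route: " + ev.Route())
	}
	return h(ev)
}

// equivocation is a toy concrete evidence type.
type equivocation struct{ height int64 }

func (e equivocation) Route() string { return "equivocation" }
func (e equivocation) Hash() []byte  { return []byte(fmt.Sprintf("equivocation/%d", e.height)) }

func main() {
	router := Router{
		"equivocation": func(ev Evidence) error {
			fmt.Println("handling evidence on route", ev.Route())
			return nil // a real handler would slash, jail, and tombstone here
		},
	}
	if err := router.Submit(equivocation{height: 11}); err != nil {
		panic(err)
	}
}
```

The point of the indirection is that the module itself stays agnostic: new evidence types only need a route name and a registered handler, with no changes to the routing code.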
+
+## Concepts
+
+### Evidence
+
+Any concrete type of evidence submitted to the `x/evidence` module must fulfill the
+`Evidence` contract outlined below. Not all concrete types of evidence will fulfill
+this contract in the same way and some data may be entirely irrelevant to certain
+types of evidence. An additional `ValidatorEvidence`, which extends `Evidence`,
+has also been created to define a contract for evidence against malicious validators.
+
+```go expandable
+// Evidence defines the contract which concrete evidence types of misbehavior
+// must implement.
+type Evidence interface {
+	proto.Message
+
+	Route() string
+	String() string
+	Hash() []byte
+	ValidateBasic() error
+
+	// Height at which the infraction occurred
+	GetHeight() int64
+}
+
+// ValidatorEvidence extends Evidence interface to define contract
+// for evidence against malicious validators
+type ValidatorEvidence interface {
+	Evidence
+
+	// The consensus address of the malicious validator at time of infraction
+	GetConsensusAddress() sdk.ConsAddress
+
+	// The total power of the malicious validator at time of infraction
+	GetValidatorPower() int64
+
+	// The total validator set power at time of infraction
+	GetTotalPower() int64
+}
+```
+
+### Registration & Handling
+
+The `x/evidence` module must first know about all types of evidence it is expected
+to handle. This is accomplished by registering the `Route` method in the `Evidence`
+contract with what is known as a `Router` (defined below). The `Router` accepts
+`Evidence` and attempts to find the corresponding `Handler` for the `Evidence`
+via the `Route` method.
+
+```go
+type Router interface {
+	AddRoute(r string, h Handler) Router
+	HasRoute(r string) bool
+	GetRoute(path string) Handler
+	Seal()
+	Sealed() bool
+}
+```
+
+The `Handler` (defined below) is responsible for executing the entirety of the
+business logic for handling `Evidence`.
+This typically includes validating the
+evidence, both stateless checks via `ValidateBasic` and stateful checks via any
+keepers provided to the `Handler`. In addition, the `Handler` may also perform
+capabilities such as slashing and jailing a validator. All `Evidence` handled
+by the `Handler` should be persisted.
+
+```go
+// Handler defines an agnostic Evidence handler. The handler is responsible
+// for executing all corresponding business logic necessary for verifying the
+// evidence as valid. In addition, the Handler may execute any necessary
+// slashing and potential jailing.
+type Handler func(context.Context, Evidence) error
+```
+
+## State
+
+Currently the `x/evidence` module only stores valid submitted `Evidence` in state.
+The evidence state is also stored and exported in the `x/evidence` module's `GenesisState`.
+
+```protobuf
+// GenesisState defines the evidence module's genesis state.
+message GenesisState {
+  // evidence defines all the evidence at genesis.
+  repeated google.protobuf.Any evidence = 1;
+}
+```
+
+All `Evidence` is retrieved and stored via a prefix `KVStore` using prefix `0x00` (`KeyPrefixEvidence`).
+
+## Messages
+
+### MsgSubmitEvidence
+
+Evidence is submitted through a `MsgSubmitEvidence` message:
+
+```protobuf
+// MsgSubmitEvidence represents a message that supports submitting arbitrary
+// Evidence of misbehavior such as equivocation or counterfactual signing.
+message MsgSubmitEvidence {
+  string submitter = 1;
+  google.protobuf.Any evidence = 2;
+}
+```
+
+Note, the `Evidence` of a `MsgSubmitEvidence` message must have a corresponding
+`Handler` registered with the `x/evidence` module's `Router` in order to be processed
+and routed correctly.
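The prefixed-key layout noted in the State section can be sketched as follows. This is a hypothetical illustration only; `keyPrefixEvidence` and `evidenceKey` are made-up names standing in for the module's internal key construction:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// Hypothetical sketch: evidence records live in a KVStore under a one-byte
// prefix (0x00) followed by the evidence hash. Names are illustrative.
var keyPrefixEvidence = []byte{0x00}

// evidenceKey builds the store key: prefix byte followed by the hash bytes.
func evidenceKey(hash []byte) []byte {
	return append(append([]byte{}, keyPrefixEvidence...), hash...)
}

func main() {
	hash, err := hex.DecodeString("df0c23e8")
	if err != nil {
		panic(err)
	}
	key := evidenceKey(hash)
	fmt.Println(strings.ToUpper(hex.EncodeToString(key))) // prints 00DF0C23E8
}
```

Because every evidence key shares the same one-byte prefix, iterating over the prefix range yields all stored evidence, which is how a genesis export can collect the full set.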
+
+Given the `Evidence` is registered with a corresponding `Handler`, it is processed
+as follows:
+
+```go expandable
+func SubmitEvidence(ctx Context, evidence Evidence) error {
+	if _, err := GetEvidence(ctx, evidence.Hash()); err == nil {
+		return errorsmod.Wrap(types.ErrEvidenceExists, strings.ToUpper(hex.EncodeToString(evidence.Hash())))
+	}
+	if !router.HasRoute(evidence.Route()) {
+		return errorsmod.Wrap(types.ErrNoEvidenceHandlerExists, evidence.Route())
+	}
+
+	handler := router.GetRoute(evidence.Route())
+	if err := handler(ctx, evidence); err != nil {
+		return errorsmod.Wrap(types.ErrInvalidEvidence, err.Error())
+	}
+
+	ctx.EventManager().EmitEvent(
+		sdk.NewEvent(
+			types.EventTypeSubmitEvidence,
+			sdk.NewAttribute(types.AttributeKeyEvidenceHash, strings.ToUpper(hex.EncodeToString(evidence.Hash()))),
+		),
+	)
+
+	SetEvidence(ctx, evidence)
+
+	return nil
+}
+```
+
+First, there must not already exist valid submitted `Evidence` of the exact same
+type. Secondly, the `Evidence` is routed to the `Handler` and executed. Finally,
+if there is no error in handling the `Evidence`, an event is emitted and it is persisted to state.
+
+## Events
+
+The `x/evidence` module emits the following events:
+
+### Handlers
+
+#### MsgSubmitEvidence
+
+| Type | Attribute Key | Attribute Value |
+| ---------------- | -------------- | ----------------- |
+| submit\_evidence | evidence\_hash | `{evidenceHash}` |
+| message | module | evidence |
+| message | sender | `{senderAddress}` |
+| message | action | submit\_evidence |
+
+## Parameters
+
+The evidence module does not contain any parameters.
+
+## BeginBlock
+
+### Evidence Handling
+
+CometBFT blocks can include
+[Evidence](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md#evidence) that indicates if a validator committed malicious behavior.
+The relevant information is forwarded to the application as ABCI Evidence in `abci.RequestBeginBlock` so that the validator can be punished accordingly.
+
+#### Equivocation
+
+The Cosmos SDK handles two types of evidence inside the ABCI `BeginBlock`:
+
+* `DuplicateVoteEvidence`,
+* `LightClientAttackEvidence`.
+
+The evidence module handles these two evidence types the same way. First, the Cosmos SDK converts the CometBFT concrete evidence type to an SDK `Evidence` interface using `Equivocation` as the concrete type.
+
+```protobuf
+// Equivocation implements the Evidence interface and defines evidence of double
+// signing misbehavior.
+message Equivocation {
+  option (amino.name) = "cosmos-sdk/Equivocation";
+  option (gogoproto.goproto_stringer) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.equal) = false;
+
+  // height is the equivocation height.
+  int64 height = 1;
+
+  // time is the equivocation time.
+  google.protobuf.Timestamp time = 2
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+
+  // power is the equivocation validator power.
+  int64 power = 3;
+
+  // consensus_address is the equivocation validator consensus address.
+  string consensus_address = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+For some `Equivocation` submitted in `block` to be valid, it must satisfy:
+
+`Evidence.Timestamp >= block.Timestamp - MaxEvidenceAge`
+
+Where:
+
+* `Evidence.Timestamp` is the timestamp in the block at height `Evidence.Height`
+* `block.Timestamp` is the current block timestamp.
+
+If valid `Equivocation` evidence is included in a block, the validator is slashed by
+`SlashFractionDoubleSign`, as defined by the `x/slashing` module, of the stake they
+had when the infraction occurred, rather than when the evidence was discovered.
+
+We want to "follow the stake", i.e., the stake that contributed to the infraction
+should be slashed, even if it has since been redelegated or started unbonding.
+
+In addition, the validator is permanently jailed and tombstoned to make it impossible for that
+validator to ever re-enter the validator set.
+
+The `Equivocation` evidence is handled as follows:
+
+```go expandable
+package keeper
+
+import (
+	"fmt"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/x/evidence/types"
+	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
+)
+
+// HandleEquivocationEvidence implements an equivocation evidence handler. Assuming the
+// evidence is valid, the validator committing the misbehavior will be slashed,
+// jailed and tombstoned. Once tombstoned, the validator will not be able to
+// recover. Note, the evidence contains the block time and height at the time of
+// the equivocation.
+//
+// The evidence is considered invalid if:
+// - the evidence is too old
+// - the validator is unbonded or does not exist
+// - the signing info does not exist (will panic)
+// - is already tombstoned
+//
+// TODO: Some of the invalid constraints listed above may need to be reconsidered
+// in the case of a lunatic attack.
+func (k Keeper) HandleEquivocationEvidence(ctx sdk.Context, evidence *types.Equivocation) {
+	logger := k.Logger(ctx)
+	consAddr := evidence.GetConsensusAddress()
+
+	if _, err := k.slashingKeeper.GetPubkey(ctx, consAddr.Bytes()); err != nil {
+		// Ignore evidence that cannot be handled.
+		//
+		// NOTE: We used to panic with:
+		// `panic(fmt.Sprintf("Validator consensus-address %v not found", consAddr))`,
+		// but this couples the expectations of the app to both Tendermint and
+		// the simulator. Both are expected to provide the full range of
+		// allowable but none of the disallowed evidence types. Instead of
+		// getting this coordination right, it is easier to relax the
+		// constraints and ignore evidence that cannot be handled.
+		return
+	}
+
+	// calculate the age of the evidence
+	infractionHeight := evidence.GetHeight()
+	infractionTime := evidence.GetTime()
+	ageDuration := ctx.BlockHeader().Time.Sub(infractionTime)
+	ageBlocks := ctx.BlockHeader().Height - infractionHeight
+
+	// Reject evidence if the double-sign is too old. Evidence is considered stale
+	// if the difference in time and number of blocks is greater than the allowed
+	// parameters defined.
+	cp := ctx.ConsensusParams()
+	if cp != nil && cp.Evidence != nil {
+		if ageDuration > cp.Evidence.MaxAgeDuration && ageBlocks > cp.Evidence.MaxAgeNumBlocks {
+			logger.Info(
+				"ignored equivocation; evidence too old",
+				"validator", consAddr,
+				"infraction_height", infractionHeight,
+				"max_age_num_blocks", cp.Evidence.MaxAgeNumBlocks,
+				"infraction_time", infractionTime,
+				"max_age_duration", cp.Evidence.MaxAgeDuration,
+			)
+			return
+		}
+	}
+
+	validator := k.stakingKeeper.ValidatorByConsAddr(ctx, consAddr)
+	if validator == nil || validator.IsUnbonded() {
+		// Defensive: Simulation doesn't take unbonding periods into account, and
+		// Tendermint might break this assumption at some point.
+		return
+	}
+
+	if !validator.GetOperator().Empty() {
+		if _, err := k.slashingKeeper.GetPubkey(ctx, consAddr.Bytes()); err != nil {
+			// Ignore evidence that cannot be handled.
+			//
+			// NOTE: We used to panic with:
+			// `panic(fmt.Sprintf("Validator consensus-address %v not found", consAddr))`,
+			// but this couples the expectations of the app to both Tendermint and
+			// the simulator. Both are expected to provide the full range of
+			// allowable but none of the disallowed evidence types. Instead of
+			// getting this coordination right, it is easier to relax the
+			// constraints and ignore evidence that cannot be handled.
+			return
+		}
+	}
+
+	if ok := k.slashingKeeper.HasValidatorSigningInfo(ctx, consAddr); !ok {
+		panic(fmt.Sprintf("expected signing info for validator %s but not found", consAddr))
+	}
+
+	// ignore if the validator is already tombstoned
+	if k.slashingKeeper.IsTombstoned(ctx, consAddr) {
+		logger.Info(
+			"ignored equivocation; validator already tombstoned",
+			"validator", consAddr,
+			"infraction_height", infractionHeight,
+			"infraction_time", infractionTime,
+		)
+		return
+	}
+
+	logger.Info(
+		"confirmed equivocation",
+		"validator", consAddr,
+		"infraction_height", infractionHeight,
+		"infraction_time", infractionTime,
+	)
+
+	// We need to retrieve the stake distribution which signed the block, so we
+	// subtract ValidatorUpdateDelay from the evidence height.
+	// Note, that this *can* result in a negative "distributionHeight", up to
+	// -ValidatorUpdateDelay, i.e. at the end of the
+	// pre-genesis block (none) = at the beginning of the genesis block.
+	// That's fine since this is just used to filter unbonding delegations & redelegations.
+	distributionHeight := infractionHeight - sdk.ValidatorUpdateDelay
+
+	// Slash validator. The `power` is the int64 power of the validator as provided
+	// to/by Tendermint. This value is validator.Tokens as sent to Tendermint via
+	// ABCI, and now received as evidence. The fraction is passed in separately
+	// to slash unbonding and rebonding delegations.
+	k.slashingKeeper.SlashWithInfractionReason(
+		ctx,
+		consAddr,
+		k.slashingKeeper.SlashFractionDoubleSign(ctx),
+		evidence.GetValidatorPower(), distributionHeight,
+		stakingtypes.Infraction_INFRACTION_DOUBLE_SIGN,
+	)
+
+	// Jail the validator if not already jailed. This will begin unbonding the
+	// validator if not already unbonding (tombstoned).
+	if !validator.IsJailed() {
+		k.slashingKeeper.Jail(ctx, consAddr)
+	}
+
+	k.slashingKeeper.JailUntil(ctx, consAddr, types.DoubleSignJailEndTime)
+	k.slashingKeeper.Tombstone(ctx, consAddr)
+	k.SetEvidence(ctx, evidence)
+}
+```
+
+**Note:** The slashing, jailing, and tombstoning calls are delegated through the `x/slashing` module
+that emits informative events and finally delegates calls to the `x/staking` module. See documentation
+on slashing and jailing in [State Transitions](docs/sdk/v0.50/documentation/module-system/modules/staking/README#state-transitions).
+
+## Client
+
+### CLI
+
+A user can query and interact with the `evidence` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `evidence` state.
+
+```bash
+simd query evidence --help
+```
+
+##### evidence
+
+The `evidence` command allows users to list all evidence or evidence by hash.
+
+Usage:
+
+```bash
+simd query evidence evidence [flags]
+```
+
+To query evidence by hash:
+
+Example:
+
+```bash
+simd query evidence evidence "DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
+```
+
+Example Output:
+
+```bash
+evidence:
+  consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h
+  height: 11
+  power: 100
+  time: "2021-10-20T16:08:38.194017624Z"
+```
+
+To get all evidence:
+
+Example:
+
+```bash
+simd query evidence list
+```
+
+Example Output:
+
+```bash
+evidence:
+  consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h
+  height: 11
+  power: 100
+  time: "2021-10-20T16:08:38.194017624Z"
+pagination:
+  next_key: null
+  total: "1"
+```
+
+### REST
+
+A user can query the `evidence` module using REST endpoints.
+ +#### Evidence + +Get evidence by hash + +```bash +/cosmos/evidence/v1beta1/evidence/{hash} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence/DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660" +``` + +Example Output: + +```bash +{ + "evidence": { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } +} +``` + +#### All evidence + +Get all evidence + +```bash +/cosmos/evidence/v1beta1/evidence +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence" +``` + +Example Output: + +```bash expandable +{ + "evidence": [ + { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### gRPC + +A user can query the `evidence` module using gRPC endpoints. 
+ +#### Evidence + +Get evidence by hash + +```bash +cosmos.evidence.v1beta1.Query/Evidence +``` + +Example: + +```bash +grpcurl -plaintext -d '{"evidence_hash":"DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"}' localhost:9090 cosmos.evidence.v1beta1.Query/Evidence +``` + +Example Output: + +```bash +{ + "evidence": { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } +} +``` + +#### All evidence + +Get all evidence + +```bash +cosmos.evidence.v1beta1.Query/AllEvidence +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.evidence.v1beta1.Query/AllEvidence +``` + +Example Output: + +```bash expandable +{ + "evidence": [ + { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } + ], + "pagination": { + "total": "1" + } +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/feegrant/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/feegrant/README.mdx new file mode 100644 index 00000000..33ec8702 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/feegrant/README.mdx @@ -0,0 +1,3771 @@ +--- +title: '`x/feegrant`' +description: >- + This document specifies the fee grant module. For the full ADR, please see Fee + Grant ADR-029. +--- + +## Abstract + +This document specifies the fee grant module. For the full ADR, please see [Fee Grant ADR-029](docs/sdk/next/documentation/legacy/adr-comprehensive). + +This module allows accounts to grant fee allowances and to use fees from their accounts. Grantees can execute any transaction without the need to maintain sufficient fees. 
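Conceptually, when a transaction names a fee granter, fees are deducted from the granter's account instead of the grantee's, provided a matching allowance exists and covers the fee. A minimal sketch of that flow, using plain integers instead of `sdk.Coins` (all names here are illustrative, not the SDK API):

```go
package main

import (
	"errors"
	"fmt"
)

// bank maps an account name to its balance.
type bank map[string]int64

// grants maps a {granter, grantee} pair to the remaining fee allowance.
type grants map[[2]string]int64

// deductFee charges the fee to the granter when a grant exists and
// covers it, otherwise to the payer directly. The grant's remaining
// limit is reduced on use, mirroring how an allowance is spent down.
func deductFee(b bank, g grants, payer, granter string, fee int64) error {
	account := payer
	if granter != "" {
		key := [2]string{granter, payer}
		remaining, ok := g[key]
		if !ok {
			return errors.New("fee grant not found")
		}
		if remaining < fee {
			return errors.New("fee exceeds remaining allowance")
		}
		g[key] = remaining - fee
		account = granter
	}
	if b[account] < fee {
		return errors.New("insufficient funds")
	}
	b[account] -= fee
	return nil
}

func main() {
	b := bank{"alice": 1000, "bob": 0}
	g := grants{{"alice", "bob"}: 500}
	// bob has no funds, but alice granted him an allowance
	fmt.Println(deductFee(b, g, "bob", "alice", 200), b["alice"]) // <nil> 800
}
```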
+
+## Contents
+
+* [Concepts](#concepts)
+* [State](#state)
+  * [FeeAllowance](#feeallowance)
+  * [FeeAllowanceQueue](#feeallowancequeue)
+* [Messages](#messages)
+  * [Msg/GrantAllowance](#msggrantallowance)
+  * [Msg/RevokeAllowance](#msgrevokeallowance)
+* [Events](#events)
+* [Msg Server](#msg-server)
+  * [MsgGrantAllowance](#msggrantallowance-1)
+  * [MsgRevokeAllowance](#msgrevokeallowance-1)
+  * [Exec fee allowance](#exec-fee-allowance)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+
+## Concepts
+
+### Grant
+
+`Grant` is stored in the KVStore to record a grant with full context. Every grant contains `granter`, `grantee` and the kind of `allowance` granted. `granter` is the account address giving permission to `grantee` (the beneficiary account address) to pay for some or all of `grantee`'s transaction fees. `allowance` defines what kind of fee allowance (`BasicAllowance` or `PeriodicAllowance`, see below) is granted to `grantee`. `allowance` accepts an interface implementing `FeeAllowanceI`, encoded as an `Any` type. There can be only one fee grant for a given `granter` and `grantee` pair; self grants are not allowed.
+
+```protobuf
+// Grant is stored in the KVStore to record a grant with full context
+message Grant {
+  // granter is the address of the user granting an allowance of their funds.
+  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // grantee is the address of the user being granted an allowance of another user's funds.
+  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // allowance can be any of basic, periodic, allowed fee allowance.
+  google.protobuf.Any allowance = 3 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"];
+}
+```
+
+`FeeAllowanceI` looks like:
+
+```go expandable
+package feegrant
+
+import (
+	"time"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ FeeAllowance implementations are tied to a given fee delegator and delegatee,
+/ and are used to enforce fee grant limits.
+type FeeAllowanceI interface {
+	/ Accept can use fee payment requested as well as timestamp of the current block
+	/ to determine whether or not to process this. This is checked in
+	/ Keeper.UseGrantedFees and the return values should match how it is handled there.
+	/
+	/ If it returns an error, the fee payment is rejected, otherwise it is accepted.
+	/ The FeeAllowance implementation is expected to update its internal state
+	/ and will be saved again after an acceptance.
+	/
+	/ If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
+	/ (eg. when it is used up). (See call to RevokeAllowance in Keeper.UseGrantedFees)
+	Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)
+
+	/ ValidateBasic should evaluate this FeeAllowance for internal consistency.
+	/ Don't allow negative amounts, or negative periods for example.
+	ValidateBasic() error
+
+	/ ExpiresAt returns the expiry time of the allowance.
+	ExpiresAt() (*time.Time, error)
+}
+```
+
+### Fee Allowance types
+
+There are three types of fee allowances present at the moment:
+
+* `BasicAllowance`
+* `PeriodicAllowance`
+* `AllowedMsgAllowance`
+
+### BasicAllowance
+
+`BasicAllowance` is permission for the `grantee` to use fees from the `granter`'s account. If either the `spend_limit` or the `expiration` is reached, the grant is removed from state.
+
+```protobuf
+// BasicAllowance implements Allowance with a one-time grant of coins
+// that optionally expires. The grantee can use up to SpendLimit to cover fees.
+message BasicAllowance {
+  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
+  option (amino.name) = "cosmos-sdk/BasicAllowance";
+
+  // spend_limit specifies the maximum amount of coins that can be spent
+  // by this allowance and will be updated as coins are spent. If it is
+  // empty, there is no spend limit and any amount of coins can be spent.
+  repeated cosmos.base.v1beta1.Coin spend_limit = 1 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+```
+
+* `spend_limit` is the limit of coins allowed to be used from the `granter` account. If it is empty, there is no spend limit and `grantee` can use any amount of available coins from the `granter` account address before the expiration.
+
+* `expiration` specifies an optional time when this allowance expires. If the value is left empty, there is no expiry for the grant.
+
+* When a grant is created with empty values for `spend_limit` and `expiration`, it is still a valid grant. It won't restrict the `grantee` from using any amount of coins from `granter` and it won't have any expiration. The only way to restrict the `grantee` is by revoking the grant.
+
+### PeriodicAllowance
+
+`PeriodicAllowance` is a repeating fee allowance for a given period: it can define when the grant expires, when each period resets, and the maximum amount of coins that can be spent within a period.
+
+```protobuf
+// PeriodicAllowance extends Allowance to allow for both a maximum cap,
+// as well as a limit per time period.
+message PeriodicAllowance { + option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"; + option (amino.name) = "cosmos-sdk/PeriodicAllowance"; + + // basic specifies a struct of `BasicAllowance` + BasicAllowance basic = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // period specifies the time duration in which period_spend_limit coins can + // be spent before that allowance is reset + google.protobuf.Duration period = 2 + [(gogoproto.stdduration) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // period_spend_limit specifies the maximum number of coins that can be spent + // in the period + repeated cosmos.base.v1beta1.Coin period_spend_limit = 3 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + + // period_can_spend is the number of coins left to be spent before the period_reset time + repeated cosmos.base.v1beta1.Coin period_can_spend = 4 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + + // period_reset is the time at which this period resets and a new one begins, + // it is calculated from the start time of the first transaction after the + // last period ended + google.protobuf.Timestamp period_reset = 5 + [(gogoproto.stdtime) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +* `basic` is the instance of `BasicAllowance` which is optional for periodic fee allowance. If empty, the grant will have no `expiration` and no `spend_limit`. + +* `period` is the specific period of time, after each period passes, `period_can_spend` will be reset. + +* `period_spend_limit` specifies the maximum number of coins that can be spent in the period. + +* `period_can_spend` is the number of coins left to be spent before the period\_reset time. 
+
+* `period_reset` keeps track of when the next period reset should happen.
+
+### AllowedMsgAllowance
+
+`AllowedMsgAllowance` is a fee allowance that wraps either a `BasicAllowance` or a `PeriodicAllowance`, restricted to the allowed message types specified by the granter.
+
+```protobuf
+// AllowedMsgAllowance creates allowance only for specified message types.
+message AllowedMsgAllowance {
+  option (gogoproto.goproto_getters) = false;
+  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
+  option (amino.name) = "cosmos-sdk/AllowedMsgAllowance";
+
+  // allowance can be any of basic and periodic fee allowance.
+  google.protobuf.Any allowance = 1 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"];
+
+  // allowed_messages are the messages for which the grantee has the access.
+  repeated string allowed_messages = 2;
+}
+```
+
+* `allowance` is either `BasicAllowance` or `PeriodicAllowance`.
+
+* `allowed_messages` is an array of messages the grantee is allowed to execute under the given allowance.
+
+### FeeGranter flag
+
+The `feegrant` module introduces a `FeeGranter` flag to the CLI for executing transactions with a fee granter. When this flag is set, `clientCtx` will append the granter account address to transactions generated through the CLI.
+
+```go expandable
+package client
+
+import (
+
+	"crypto/tls"
+	"fmt"
+	"strings"
+	"github.com/pkg/errors"
+	"github.com/spf13/cobra"
+	"github.com/spf13/pflag"
+	"github.com/tendermint/tendermint/libs/cli"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/credentials"
+	"google.golang.org/grpc/credentials/insecure"
+	"github.com/cosmos/cosmos-sdk/client/flags"
+	"github.com/cosmos/cosmos-sdk/crypto/keyring"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ ClientContextKey defines the context key used to retrieve a client.Context from
+/ a command's Context.
+const ClientContextKey = sdk.ContextKey("client.context") + +/ SetCmdClientContextHandler is to be used in a command pre-hook execution to +/ read flags that populate a Context and sets that to the command's Context. +func SetCmdClientContextHandler(clientCtx Context, cmd *cobra.Command) (err error) { + clientCtx, err = ReadPersistentCommandFlags(clientCtx, cmd.Flags()) + if err != nil { + return err +} + +return SetCmdClientContext(cmd, clientCtx) +} + +/ ValidateCmd returns unknown command error or Help display if help flag set +func ValidateCmd(cmd *cobra.Command, args []string) + +error { + var unknownCmd string + var skipNext bool + for _, arg := range args { + / search for help flag + if arg == "--help" || arg == "-h" { + return cmd.Help() +} + + / check if the current arg is a flag + switch { + case len(arg) > 0 && (arg[0] == '-'): + / the next arg should be skipped if the current arg is a + / flag and does not use "=" to assign the flag's value + if !strings.Contains(arg, "=") { + skipNext = true +} + +else { + skipNext = false +} + case skipNext: + / skip current arg + skipNext = false + case unknownCmd == "": + / unknown command found + / continue searching for help flag + unknownCmd = arg +} + +} + + / return the help screen if no unknown command is found + if unknownCmd != "" { + err := fmt.Sprintf("unknown command \"%s\" for \"%s\"", unknownCmd, cmd.CalledAs()) + + / build suggestions for unknown argument + if suggestions := cmd.SuggestionsFor(unknownCmd); len(suggestions) > 0 { + err += "\n\nDid you mean this?\n" + for _, s := range suggestions { + err += fmt.Sprintf("\t%v\n", s) +} + +} + +return errors.New(err) +} + +return cmd.Help() +} + +/ ReadPersistentCommandFlags returns a Context with fields set for "persistent" +/ or common flags that do not necessarily change with context. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func ReadPersistentCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + if clientCtx.OutputFormat == "" || flagSet.Changed(cli.OutputFlag) { + output, _ := flagSet.GetString(cli.OutputFlag) + +clientCtx = clientCtx.WithOutputFormat(output) +} + if clientCtx.HomeDir == "" || flagSet.Changed(flags.FlagHome) { + homeDir, _ := flagSet.GetString(flags.FlagHome) + +clientCtx = clientCtx.WithHomeDir(homeDir) +} + if !clientCtx.Simulate || flagSet.Changed(flags.FlagDryRun) { + dryRun, _ := flagSet.GetBool(flags.FlagDryRun) + +clientCtx = clientCtx.WithSimulation(dryRun) +} + if clientCtx.KeyringDir == "" || flagSet.Changed(flags.FlagKeyringDir) { + keyringDir, _ := flagSet.GetString(flags.FlagKeyringDir) + + / The keyring directory is optional and falls back to the home directory + / if omitted. 
+ if keyringDir == "" { + keyringDir = clientCtx.HomeDir +} + +clientCtx = clientCtx.WithKeyringDir(keyringDir) +} + if clientCtx.ChainID == "" || flagSet.Changed(flags.FlagChainID) { + chainID, _ := flagSet.GetString(flags.FlagChainID) + +clientCtx = clientCtx.WithChainID(chainID) +} + if clientCtx.Keyring == nil || flagSet.Changed(flags.FlagKeyringBackend) { + keyringBackend, _ := flagSet.GetString(flags.FlagKeyringBackend) + if keyringBackend != "" { + kr, err := NewKeyringFromBackend(clientCtx, keyringBackend) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithKeyring(kr) +} + +} + if clientCtx.Client == nil || flagSet.Changed(flags.FlagNode) { + rpcURI, _ := flagSet.GetString(flags.FlagNode) + if rpcURI != "" { + clientCtx = clientCtx.WithNodeURI(rpcURI) + +client, err := NewClientFromNode(rpcURI) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithClient(client) +} + +} + if clientCtx.GRPCClient == nil || flagSet.Changed(flags.FlagGRPC) { + grpcURI, _ := flagSet.GetString(flags.FlagGRPC) + if grpcURI != "" { + var dialOpts []grpc.DialOption + + useInsecure, _ := flagSet.GetBool(flags.FlagGRPCInsecure) + if useInsecure { + dialOpts = append(dialOpts, grpc.WithTransportCredentials(insecure.NewCredentials())) +} + +else { + dialOpts = append(dialOpts, grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{ + MinVersion: tls.VersionTLS12, +}))) +} + +grpcClient, err := grpc.Dial(grpcURI, dialOpts...) + if err != nil { + return Context{ +}, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) +} + +} + +return clientCtx, nil +} + +/ readQueryCommandFlags returns an updated Context with fields set based on flags +/ defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func readQueryCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + if clientCtx.Height == 0 || flagSet.Changed(flags.FlagHeight) { + height, _ := flagSet.GetInt64(flags.FlagHeight) + +clientCtx = clientCtx.WithHeight(height) +} + if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { + useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) + +clientCtx = clientCtx.WithUseLedger(useLedger) +} + +return ReadPersistentCommandFlags(clientCtx, flagSet) +} + +/ readTxCommandFlags returns an updated Context with fields set based on flags +/ defined in AddTxFlagsToCmd. An error is returned if any flag query fails. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func readTxCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + clientCtx, err := ReadPersistentCommandFlags(clientCtx, flagSet) + if err != nil { + return clientCtx, err +} + if !clientCtx.GenerateOnly || flagSet.Changed(flags.FlagGenerateOnly) { + genOnly, _ := flagSet.GetBool(flags.FlagGenerateOnly) + +clientCtx = clientCtx.WithGenerateOnly(genOnly) +} + if !clientCtx.Offline || flagSet.Changed(flags.FlagOffline) { + offline, _ := flagSet.GetBool(flags.FlagOffline) + +clientCtx = clientCtx.WithOffline(offline) +} + if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { + useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) + +clientCtx = clientCtx.WithUseLedger(useLedger) +} + if clientCtx.BroadcastMode == "" || flagSet.Changed(flags.FlagBroadcastMode) { + bMode, _ := flagSet.GetString(flags.FlagBroadcastMode) + +clientCtx = clientCtx.WithBroadcastMode(bMode) +} + if !clientCtx.SkipConfirm || flagSet.Changed(flags.FlagSkipConfirmation) { + skipConfirm, _ := flagSet.GetBool(flags.FlagSkipConfirmation) + +clientCtx = clientCtx.WithSkipConfirmation(skipConfirm) +} + if clientCtx.SignModeStr == "" || flagSet.Changed(flags.FlagSignMode) { + signModeStr, _ := flagSet.GetString(flags.FlagSignMode) + +clientCtx = clientCtx.WithSignModeStr(signModeStr) +} + if clientCtx.FeePayer == nil || flagSet.Changed(flags.FlagFeePayer) { + payer, _ := flagSet.GetString(flags.FlagFeePayer) + if payer != "" { + payerAcc, err := sdk.AccAddressFromBech32(payer) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFeePayerAddress(payerAcc) +} + +} + if clientCtx.FeeGranter == 
nil || flagSet.Changed(flags.FlagFeeGranter) { + granter, _ := flagSet.GetString(flags.FlagFeeGranter) + if granter != "" { + granterAcc, err := sdk.AccAddressFromBech32(granter) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFeeGranterAddress(granterAcc) +} + +} + if clientCtx.From == "" || flagSet.Changed(flags.FlagFrom) { + from, _ := flagSet.GetString(flags.FlagFrom) + +fromAddr, fromName, keyType, err := GetFromFields(clientCtx, clientCtx.Keyring, from) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFrom(from).WithFromAddress(fromAddr).WithFromName(fromName) + + / If the `from` signer account is a ledger key, we need to use + / SIGN_MODE_AMINO_JSON, because ledger doesn't support proto yet. + / ref: https://github.com/cosmos/cosmos-sdk/issues/8109 + if keyType == keyring.TypeLedger && clientCtx.SignModeStr != flags.SignModeLegacyAminoJSON && !clientCtx.LedgerHasProtobuf { + fmt.Println("Default sign-mode 'direct' not supported by Ledger, using sign-mode 'amino-json'.") + +clientCtx = clientCtx.WithSignModeStr(flags.SignModeLegacyAminoJSON) +} + +} + if !clientCtx.IsAux || flagSet.Changed(flags.FlagAux) { + isAux, _ := flagSet.GetBool(flags.FlagAux) + +clientCtx = clientCtx.WithAux(isAux) + if isAux { + / If the user didn't explicitly set an --output flag, use JSON by + / default. + if clientCtx.OutputFormat == "" || !flagSet.Changed(cli.OutputFlag) { + clientCtx = clientCtx.WithOutputFormat("json") +} + + / If the user didn't explicitly set a --sign-mode flag, use + / DIRECT_AUX by default. + if clientCtx.SignModeStr == "" || !flagSet.Changed(flags.FlagSignMode) { + clientCtx = clientCtx.WithSignModeStr(flags.SignModeDirectAux) +} + +} + +} + +return clientCtx, nil +} + +/ GetClientQueryContext returns a Context from a command with fields set based on flags +/ defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. 
+/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func GetClientQueryContext(cmd *cobra.Command) (Context, error) { + ctx := GetClientContextFromCmd(cmd) + +return readQueryCommandFlags(ctx, cmd.Flags()) +} + +/ GetClientTxContext returns a Context from a command with fields set based on flags +/ defined in AddTxFlagsToCmd. An error is returned if any flag query fails. +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func GetClientTxContext(cmd *cobra.Command) (Context, error) { + ctx := GetClientContextFromCmd(cmd) + +return readTxCommandFlags(ctx, cmd.Flags()) +} + +/ GetClientContextFromCmd returns a Context from a command or an empty Context +/ if it has not been set. +func GetClientContextFromCmd(cmd *cobra.Command) + +Context { + if v := cmd.Context().Value(ClientContextKey); v != nil { + clientCtxPtr := v.(*Context) + +return *clientCtxPtr +} + +return Context{ +} +} + +/ SetCmdClientContext sets a command's Context value to the provided argument. 
+func SetCmdClientContext(cmd *cobra.Command, clientCtx Context) + +error { + v := cmd.Context().Value(ClientContextKey) + if v == nil { + return errors.New("client context not set") +} + clientCtxPtr := v.(*Context) + *clientCtxPtr = clientCtx + + return nil +} +``` + +```go expandable +package tx + +import ( + + "bufio" + "context" + "encoding/json" + "errors" + "fmt" + "os" + + gogogrpc "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/pflag" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/input" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ GenerateOrBroadcastTxCLI will either generate and print and unsigned transaction +/ or sign it and broadcast it returning an error upon failure. +func GenerateOrBroadcastTxCLI(clientCtx client.Context, flagSet *pflag.FlagSet, msgs ...sdk.Msg) + +error { + txf := NewFactoryCLI(clientCtx, flagSet) + +return GenerateOrBroadcastTxWithFactory(clientCtx, txf, msgs...) +} + +/ GenerateOrBroadcastTxWithFactory will either generate and print and unsigned transaction +/ or sign it and broadcast it returning an error upon failure. +func GenerateOrBroadcastTxWithFactory(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) + +error { + / Validate all msgs before generating or broadcasting the tx. + / We were calling ValidateBasic separately in each CLI handler before. + / Right now, we're factorizing that call inside this function. + / ref: https://github.com/cosmos/cosmos-sdk/pull/9236#discussion_r623803504 + for _, msg := range msgs { + if err := msg.ValidateBasic(); err != nil { + return err +} + +} + + / If the --aux flag is set, we simply generate and print the AuxSignerData. 
+ if clientCtx.IsAux { + auxSignerData, err := makeAuxSignerData(clientCtx, txf, msgs...) + if err != nil { + return err +} + +return clientCtx.PrintProto(&auxSignerData) +} + if clientCtx.GenerateOnly { + return txf.PrintUnsignedTx(clientCtx, msgs...) +} + +return BroadcastTx(clientCtx, txf, msgs...) +} + +/ BroadcastTx attempts to generate, sign and broadcast a transaction with the +/ given set of messages. It will also simulate gas requirements if necessary. +/ It will return an error upon failure. +func BroadcastTx(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) + +error { + txf, err := txf.Prepare(clientCtx) + if err != nil { + return err +} + if txf.SimulateAndExecute() || clientCtx.Simulate { + _, adjusted, err := CalculateGas(clientCtx, txf, msgs...) + if err != nil { + return err +} + +txf = txf.WithGas(adjusted) + _, _ = fmt.Fprintf(os.Stderr, "%s\n", GasEstimateResponse{ + GasEstimate: txf.Gas() +}) +} + if clientCtx.Simulate { + return nil +} + +tx, err := txf.BuildUnsignedTx(msgs...) 
+ if err != nil { + return err +} + if !clientCtx.SkipConfirm { + txBytes, err := clientCtx.TxConfig.TxJSONEncoder()(tx.GetTx()) + if err != nil { + return err +} + if err := clientCtx.PrintRaw(json.RawMessage(txBytes)); err != nil { + _, _ = fmt.Fprintf(os.Stderr, "%s\n", txBytes) +} + buf := bufio.NewReader(os.Stdin) + +ok, err := input.GetConfirmation("confirm transaction before signing and broadcasting", buf, os.Stderr) + if err != nil || !ok { + _, _ = fmt.Fprintf(os.Stderr, "%s\n", "cancelled transaction") + +return err +} + +} + +err = Sign(txf, clientCtx.GetFromName(), tx, true) + if err != nil { + return err +} + +txBytes, err := clientCtx.TxConfig.TxEncoder()(tx.GetTx()) + if err != nil { + return err +} + + / broadcast to a Tendermint node + res, err := clientCtx.BroadcastTx(txBytes) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +} + +/ CalculateGas simulates the execution of a transaction and returns the +/ simulation response obtained by the query and the adjusted gas amount. +func CalculateGas( + clientCtx gogogrpc.ClientConn, txf Factory, msgs ...sdk.Msg, +) (*tx.SimulateResponse, uint64, error) { + txBytes, err := txf.BuildSimTx(msgs...) + if err != nil { + return nil, 0, err +} + txSvcClient := tx.NewServiceClient(clientCtx) + +simRes, err := txSvcClient.Simulate(context.Background(), &tx.SimulateRequest{ + TxBytes: txBytes, +}) + if err != nil { + return nil, 0, err +} + +return simRes, uint64(txf.GasAdjustment() * float64(simRes.GasInfo.GasUsed)), nil +} + +/ SignWithPrivKey signs a given tx with the given private key, and returns the +/ corresponding SignatureV2 if the signing is successful. +func SignWithPrivKey( + signMode signing.SignMode, signerData authsigning.SignerData, + txBuilder client.TxBuilder, priv cryptotypes.PrivKey, txConfig client.TxConfig, + accSeq uint64, +) (signing.SignatureV2, error) { + var sigV2 signing.SignatureV2 + + / Generate the bytes to be signed. 
+ signBytes, err := txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) + if err != nil { + return sigV2, err +} + + / Sign those bytes + signature, err := priv.Sign(signBytes) + if err != nil { + return sigV2, err +} + + / Construct the SignatureV2 struct + sigData := signing.SingleSignatureData{ + SignMode: signMode, + Signature: signature, +} + +sigV2 = signing.SignatureV2{ + PubKey: priv.PubKey(), + Data: &sigData, + Sequence: accSeq, +} + +return sigV2, nil +} + +/ countDirectSigners counts the number of DIRECT signers in a signature data. +func countDirectSigners(data signing.SignatureData) + +int { + switch data := data.(type) { + case *signing.SingleSignatureData: + if data.SignMode == signing.SignMode_SIGN_MODE_DIRECT { + return 1 +} + +return 0 + case *signing.MultiSignatureData: + directSigners := 0 + for _, d := range data.Signatures { + directSigners += countDirectSigners(d) +} + +return directSigners + default: + panic("unreachable case") +} +} + +/ checkMultipleSigners checks that there can be maximum one DIRECT signer in +/ a tx. +func checkMultipleSigners(tx authsigning.Tx) + +error { + directSigners := 0 + sigsV2, err := tx.GetSignaturesV2() + if err != nil { + return err +} + for _, sig := range sigsV2 { + directSigners += countDirectSigners(sig.Data) + if directSigners > 1 { + return sdkerrors.ErrNotSupported.Wrap("txs signed with CLI can have maximum 1 DIRECT signer") +} + +} + +return nil +} + +/ Sign signs a given tx with a named key. The bytes signed over are canconical. +/ The resulting signature will be added to the transaction builder overwriting the previous +/ ones if overwrite=true (otherwise, the signature will be appended). +/ Signing a transaction with mutltiple signers in the DIRECT mode is not supprted and will +/ return an error. +/ An error is returned upon failure. 
+func Sign(txf Factory, name string, txBuilder client.TxBuilder, overwriteSig bool) + +error { + if txf.keybase == nil { + return errors.New("keybase must be set prior to signing a transaction") +} + signMode := txf.signMode + if signMode == signing.SignMode_SIGN_MODE_UNSPECIFIED { + / use the SignModeHandler's default mode if unspecified + signMode = txf.txConfig.SignModeHandler().DefaultMode() +} + +k, err := txf.keybase.Key(name) + if err != nil { + return err +} + +pubKey, err := k.GetPubKey() + if err != nil { + return err +} + signerData := authsigning.SignerData{ + ChainID: txf.chainID, + AccountNumber: txf.accountNumber, + Sequence: txf.sequence, + PubKey: pubKey, + Address: sdk.AccAddress(pubKey.Address()).String(), +} + + / For SIGN_MODE_DIRECT, calling SetSignatures calls setSignerInfos on + / TxBuilder under the hood, and SignerInfos is needed to generated the + / sign bytes. This is the reason for setting SetSignatures here, with a + / nil signature. + / + / Note: this line is not needed for SIGN_MODE_LEGACY_AMINO, but putting it + / also doesn't affect its generated sign bytes, so for code's simplicity + / sake, we put it here. + sigData := signing.SingleSignatureData{ + SignMode: signMode, + Signature: nil, +} + sig := signing.SignatureV2{ + PubKey: pubKey, + Data: &sigData, + Sequence: txf.Sequence(), +} + +var prevSignatures []signing.SignatureV2 + if !overwriteSig { + prevSignatures, err = txBuilder.GetTx().GetSignaturesV2() + if err != nil { + return err +} + +} + / Overwrite or append signer infos. + var sigs []signing.SignatureV2 + if overwriteSig { + sigs = []signing.SignatureV2{ + sig +} + +} + +else { + sigs = append(sigs, prevSignatures...) + +sigs = append(sigs, sig) +} + if err := txBuilder.SetSignatures(sigs...); err != nil { + return err +} + if err := checkMultipleSigners(txBuilder.GetTx()); err != nil { + return err +} + + / Generate the bytes to be signed. 
+ bytesToSign, err := txf.txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) + if err != nil { + return err +} + + / Sign those bytes + sigBytes, _, err := txf.keybase.Sign(name, bytesToSign) + if err != nil { + return err +} + + / Construct the SignatureV2 struct + sigData = signing.SingleSignatureData{ + SignMode: signMode, + Signature: sigBytes, +} + +sig = signing.SignatureV2{ + PubKey: pubKey, + Data: &sigData, + Sequence: txf.Sequence(), +} + if overwriteSig { + err = txBuilder.SetSignatures(sig) +} + +else { + prevSignatures = append(prevSignatures, sig) + +err = txBuilder.SetSignatures(prevSignatures...) +} + if err != nil { + return fmt.Errorf("unable to set signatures on payload: %w", err) +} + + / Run optional preprocessing if specified. By default, this is unset + / and will return nil. + return txf.PreprocessTx(name, txBuilder) +} + +/ GasEstimateResponse defines a response definition for tx gas estimation. +type GasEstimateResponse struct { + GasEstimate uint64 `json:"gas_estimate" yaml:"gas_estimate"` +} + +func (gr GasEstimateResponse) + +String() + +string { + return fmt.Sprintf("gas estimate: %d", gr.GasEstimate) +} + +/ makeAuxSignerData generates an AuxSignerData from the client inputs. +func makeAuxSignerData(clientCtx client.Context, f Factory, msgs ...sdk.Msg) (tx.AuxSignerData, error) { + b := NewAuxTxBuilder() + +fromAddress, name, _, err := client.GetFromFields(clientCtx, clientCtx.Keyring, clientCtx.From) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetAddress(fromAddress.String()) + if clientCtx.Offline { + b.SetAccountNumber(f.accountNumber) + +b.SetSequence(f.sequence) +} + +else { + accNum, seq, err := clientCtx.AccountRetriever.GetAccountNumberSequence(clientCtx, fromAddress) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetAccountNumber(accNum) + +b.SetSequence(seq) +} + +err = b.SetMsgs(msgs...) 
+ if err != nil { + return tx.AuxSignerData{ +}, err +} + if f.tip != nil { + if _, err := sdk.AccAddressFromBech32(f.tip.Tipper); err != nil { + return tx.AuxSignerData{ +}, sdkerrors.ErrInvalidAddress.Wrap("tipper must be a bech32 address") +} + +b.SetTip(f.tip) +} + +err = b.SetSignMode(f.SignMode()) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +key, err := clientCtx.Keyring.Key(name) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +pub, err := key.GetPubKey() + if err != nil { + return tx.AuxSignerData{ +}, err +} + +err = b.SetPubKey(pub) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetChainID(clientCtx.ChainID) + +signBz, err := b.GetSignBytes() + if err != nil { + return tx.AuxSignerData{ +}, err +} + +sig, _, err := clientCtx.Keyring.Sign(name, signBz) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetSignature(sig) + +return b.GetAuxSignerData() +} +``` + +```go expandable +package tx + +import ( + + "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ wrapper is a wrapper around the tx.Tx proto.Message which retain the raw +/ body and auth_info bytes. +type wrapper struct { + cdc codec.Codec + + tx *tx.Tx + + / bodyBz represents the protobuf encoding of TxBody. This should be encoding + / from the client using TxRaw if the tx was decoded from the wire + bodyBz []byte + + / authInfoBz represents the protobuf encoding of TxBody. 
This should be encoding + / from the client using TxRaw if the tx was decoded from the wire + authInfoBz []byte + + txBodyHasUnknownNonCriticals bool +} + +var ( + _ authsigning.Tx = &wrapper{ +} + _ client.TxBuilder = &wrapper{ +} + _ tx.TipTx = &wrapper{ +} + _ ante.HasExtensionOptionsTx = &wrapper{ +} + _ ExtensionOptionsTxBuilder = &wrapper{ +} + _ tx.TipTx = &wrapper{ +} +) + +/ ExtensionOptionsTxBuilder defines a TxBuilder that can also set extensions. +type ExtensionOptionsTxBuilder interface { + client.TxBuilder + + SetExtensionOptions(...*codectypes.Any) + +SetNonCriticalExtensionOptions(...*codectypes.Any) +} + +func newBuilder(cdc codec.Codec) *wrapper { + return &wrapper{ + cdc: cdc, + tx: &tx.Tx{ + Body: &tx.TxBody{ +}, + AuthInfo: &tx.AuthInfo{ + Fee: &tx.Fee{ +}, +}, +}, +} +} + +func (w *wrapper) + +GetMsgs() []sdk.Msg { + return w.tx.GetMsgs() +} + +func (w *wrapper) + +ValidateBasic() + +error { + return w.tx.ValidateBasic() +} + +func (w *wrapper) + +getBodyBytes() []byte { + if len(w.bodyBz) == 0 { + / if bodyBz is empty, then marshal the body. bodyBz will generally + / be set to nil whenever SetBody is called so the result of calling + / this method should always return the correct bytes. Note that after + / decoding bodyBz is derived from TxRaw so that it matches what was + / transmitted over the wire + var err error + w.bodyBz, err = proto.Marshal(w.tx.Body) + if err != nil { + panic(err) +} + +} + +return w.bodyBz +} + +func (w *wrapper) + +getAuthInfoBytes() []byte { + if len(w.authInfoBz) == 0 { + / if authInfoBz is empty, then marshal the body. authInfoBz will generally + / be set to nil whenever SetAuthInfo is called so the result of calling + / this method should always return the correct bytes. 
Note that after + / decoding authInfoBz is derived from TxRaw so that it matches what was + / transmitted over the wire + var err error + w.authInfoBz, err = proto.Marshal(w.tx.AuthInfo) + if err != nil { + panic(err) +} + +} + +return w.authInfoBz +} + +func (w *wrapper) + +GetSigners() []sdk.AccAddress { + return w.tx.GetSigners() +} + +func (w *wrapper) + +GetPubKeys() ([]cryptotypes.PubKey, error) { + signerInfos := w.tx.AuthInfo.SignerInfos + pks := make([]cryptotypes.PubKey, len(signerInfos)) + for i, si := range signerInfos { + / NOTE: it is okay to leave this nil if there is no PubKey in the SignerInfo. + / PubKey's can be left unset in SignerInfo. + if si.PublicKey == nil { + continue +} + pkAny := si.PublicKey.GetCachedValue() + +pk, ok := pkAny.(cryptotypes.PubKey) + if ok { + pks[i] = pk +} + +else { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "Expecting PubKey, got: %T", pkAny) +} + +} + +return pks, nil +} + +func (w *wrapper) + +GetGas() + +uint64 { + return w.tx.AuthInfo.Fee.GasLimit +} + +func (w *wrapper) + +GetFee() + +sdk.Coins { + return w.tx.AuthInfo.Fee.Amount +} + +func (w *wrapper) + +FeePayer() + +sdk.AccAddress { + feePayer := w.tx.AuthInfo.Fee.Payer + if feePayer != "" { + return sdk.MustAccAddressFromBech32(feePayer) +} + / use first signer as default if no payer specified + return w.GetSigners()[0] +} + +func (w *wrapper) + +FeeGranter() + +sdk.AccAddress { + feePayer := w.tx.AuthInfo.Fee.Granter + if feePayer != "" { + return sdk.MustAccAddressFromBech32(feePayer) +} + +return nil +} + +func (w *wrapper) + +GetTip() *tx.Tip { + return w.tx.AuthInfo.Tip +} + +func (w *wrapper) + +GetMemo() + +string { + return w.tx.Body.Memo +} + +/ GetTimeoutHeight returns the transaction's timeout height (if set). 
+func (w *wrapper) + +GetTimeoutHeight() + +uint64 { + return w.tx.Body.TimeoutHeight +} + +func (w *wrapper) + +GetSignaturesV2() ([]signing.SignatureV2, error) { + signerInfos := w.tx.AuthInfo.SignerInfos + sigs := w.tx.Signatures + pubKeys, err := w.GetPubKeys() + if err != nil { + return nil, err +} + n := len(signerInfos) + res := make([]signing.SignatureV2, n) + for i, si := range signerInfos { + / handle nil signatures (in case of simulation) + if si.ModeInfo == nil { + res[i] = signing.SignatureV2{ + PubKey: pubKeys[i], +} + +} + +else { + var err error + sigData, err := ModeInfoAndSigToSignatureData(si.ModeInfo, sigs[i]) + if err != nil { + return nil, err +} + / sequence number is functionally a transaction nonce and referred to as such in the SDK + nonce := si.GetSequence() + +res[i] = signing.SignatureV2{ + PubKey: pubKeys[i], + Data: sigData, + Sequence: nonce, +} + + +} + +} + +return res, nil +} + +func (w *wrapper) + +SetMsgs(msgs ...sdk.Msg) + +error { + anys, err := tx.SetMsgs(msgs) + if err != nil { + return err +} + +w.tx.Body.Messages = anys + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil + + return nil +} + +/ SetTimeoutHeight sets the transaction's height timeout. 
+func (w *wrapper) + +SetTimeoutHeight(height uint64) { + w.tx.Body.TimeoutHeight = height + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil +} + +func (w *wrapper) + +SetMemo(memo string) { + w.tx.Body.Memo = memo + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil +} + +func (w *wrapper) + +SetGasLimit(limit uint64) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.GasLimit = limit + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeeAmount(coins sdk.Coins) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Amount = coins + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetTip(tip *tx.Tip) { + w.tx.AuthInfo.Tip = tip + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeePayer(feePayer sdk.AccAddress) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Payer = feePayer.String() + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeeGranter(feeGranter sdk.AccAddress) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Granter = feeGranter.String() + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetSignatures(signatures ...signing.SignatureV2) + +error { + n := len(signatures) + signerInfos := make([]*tx.SignerInfo, n) + rawSigs := make([][]byte, n) + for i, sig := range signatures { + var modeInfo *tx.ModeInfo + modeInfo, rawSigs[i] = SignatureDataToModeInfoAndSig(sig.Data) + +any, 
err := codectypes.NewAnyWithValue(sig.PubKey) + if err != nil { + return err +} + +signerInfos[i] = &tx.SignerInfo{ + PublicKey: any, + ModeInfo: modeInfo, + Sequence: sig.Sequence, +} + +} + +w.setSignerInfos(signerInfos) + +w.setSignatures(rawSigs) + +return nil +} + +func (w *wrapper) + +setSignerInfos(infos []*tx.SignerInfo) { + w.tx.AuthInfo.SignerInfos = infos + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +setSignerInfoAtIndex(index int, info *tx.SignerInfo) { + if w.tx.AuthInfo.SignerInfos == nil { + w.tx.AuthInfo.SignerInfos = make([]*tx.SignerInfo, len(w.GetSigners())) +} + +w.tx.AuthInfo.SignerInfos[index] = info + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +setSignatures(sigs [][]byte) { + w.tx.Signatures = sigs +} + +func (w *wrapper) + +setSignatureAtIndex(index int, sig []byte) { + if w.tx.Signatures == nil { + w.tx.Signatures = make([][]byte, len(w.GetSigners())) +} + +w.tx.Signatures[index] = sig +} + +func (w *wrapper) + +GetTx() + +authsigning.Tx { + return w +} + +func (w *wrapper) + +GetProtoTx() *tx.Tx { + return w.tx +} + +/ Deprecated: AsAny extracts proto Tx and wraps it into Any. +/ NOTE: You should probably use `GetProtoTx` if you want to serialize the transaction. +func (w *wrapper) + +AsAny() *codectypes.Any { + return codectypes.UnsafePackAny(w.tx) +} + +/ WrapTx creates a TxBuilder wrapper around a tx.Tx proto message. 
+func WrapTx(protoTx *tx.Tx) + +client.TxBuilder { + return &wrapper{ + tx: protoTx, +} +} + +func (w *wrapper) + +GetExtensionOptions() []*codectypes.Any { + return w.tx.Body.ExtensionOptions +} + +func (w *wrapper) + +GetNonCriticalExtensionOptions() []*codectypes.Any { + return w.tx.Body.NonCriticalExtensionOptions +} + +func (w *wrapper) + +SetExtensionOptions(extOpts ...*codectypes.Any) { + w.tx.Body.ExtensionOptions = extOpts + w.bodyBz = nil +} + +func (w *wrapper) + +SetNonCriticalExtensionOptions(extOpts ...*codectypes.Any) { + w.tx.Body.NonCriticalExtensionOptions = extOpts + w.bodyBz = nil +} + +func (w *wrapper) + +AddAuxSignerData(data tx.AuxSignerData) + +error { + err := data.ValidateBasic() + if err != nil { + return err +} + +w.bodyBz = data.SignDoc.BodyBytes + + var body tx.TxBody + err = w.cdc.Unmarshal(w.bodyBz, &body) + if err != nil { + return err +} + if w.tx.Body.Memo != "" && w.tx.Body.Memo != body.Memo { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has memo %s, got %s in AuxSignerData", w.tx.Body.Memo, body.Memo) +} + if w.tx.Body.TimeoutHeight != 0 && w.tx.Body.TimeoutHeight != body.TimeoutHeight { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has timeout height %d, got %d in AuxSignerData", w.tx.Body.TimeoutHeight, body.TimeoutHeight) +} + if len(w.tx.Body.ExtensionOptions) != 0 { + if len(w.tx.Body.ExtensionOptions) != len(body.ExtensionOptions) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d extension options, got %d in AuxSignerData", len(w.tx.Body.ExtensionOptions), len(body.ExtensionOptions)) +} + for i, o := range w.tx.Body.ExtensionOptions { + if !o.Equal(body.ExtensionOptions[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.ExtensionOptions[i]) +} + +} + +} + if len(w.tx.Body.NonCriticalExtensionOptions) != 0 { + if len(w.tx.Body.NonCriticalExtensionOptions) != len(body.NonCriticalExtensionOptions) { + return 
sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d non-critical extension options, got %d in AuxSignerData", len(w.tx.Body.NonCriticalExtensionOptions), len(body.NonCriticalExtensionOptions)) +} + for i, o := range w.tx.Body.NonCriticalExtensionOptions { + if !o.Equal(body.NonCriticalExtensionOptions[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has non-critical extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.NonCriticalExtensionOptions[i]) +} + +} + +} + if len(w.tx.Body.Messages) != 0 { + if len(w.tx.Body.Messages) != len(body.Messages) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d Msgs, got %d in AuxSignerData", len(w.tx.Body.Messages), len(body.Messages)) +} + for i, o := range w.tx.Body.Messages { + if !o.Equal(body.Messages[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has Msg %+v at index %d, got %+v in AuxSignerData", o, i, body.Messages[i]) +} + +} + +} + if w.tx.AuthInfo.Tip != nil && data.SignDoc.Tip != nil { + if !w.tx.AuthInfo.Tip.Amount.IsEqual(data.SignDoc.Tip.Amount) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tip %+v, got %+v in AuxSignerData", w.tx.AuthInfo.Tip.Amount, data.SignDoc.Tip.Amount) +} + if w.tx.AuthInfo.Tip.Tipper != data.SignDoc.Tip.Tipper { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tipper %s, got %s in AuxSignerData", w.tx.AuthInfo.Tip.Tipper, data.SignDoc.Tip.Tipper) +} + +} + +w.SetMemo(body.Memo) + +w.SetTimeoutHeight(body.TimeoutHeight) + +w.SetExtensionOptions(body.ExtensionOptions...) + +w.SetNonCriticalExtensionOptions(body.NonCriticalExtensionOptions...) + msgs := make([]sdk.Msg, len(body.Messages)) + for i, msgAny := range body.Messages { + msgs[i] = msgAny.GetCachedValue().(sdk.Msg) +} + +w.SetMsgs(msgs...) + +w.SetTip(data.GetSignDoc().GetTip()) + + / Get the aux signer's index in GetSigners. 
+ signerIndex := -1 + for i, signer := range w.GetSigners() { + if signer.String() == data.Address { + signerIndex = i +} + +} + if signerIndex < 0 { + return sdkerrors.ErrLogic.Wrapf("address %s is not a signer", data.Address) +} + +w.setSignerInfoAtIndex(signerIndex, &tx.SignerInfo{ + PublicKey: data.SignDoc.PublicKey, + ModeInfo: &tx.ModeInfo{ + Sum: &tx.ModeInfo_Single_{ + Single: &tx.ModeInfo_Single{ + Mode: data.Mode +}}}, + Sequence: data.SignDoc.Sequence, +}) + +w.setSignatureAtIndex(signerIndex, data.Sig) + +return nil +} +``` + +```protobuf +// Fee includes the amount of coins paid in fees and the maximum +// gas to be used by the transaction. The ratio yields an effective "gasprice", +// which must be above some miminum to be accepted into the mempool. +message Fee { + // amount is the amount of coins to be paid as a fee + repeated cosmos.base.v1beta1.Coin amount = 1 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; + + // gas_limit is the maximum gas that can be used in transaction processing + // before an out of gas error occurs + uint64 gas_limit = 2; + + // if unset, the first signer is responsible for paying the fees. If set, the specified account must pay the fees. + // the payer must be a tx signer (and thus have signed this field in AuthInfo). + // setting this field does *not* change the ordering of required signers for the transaction. + string payer = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // if set, the fee payer (either the first signer or the value of the payer field) requests that a fee grant be used + // to pay fees instead of the fee payer's own balance. 
If an appropriate fee grant does not exist or the chain does
  // not support fee grants, this will fail
  string granter = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}
```

Example command:

```bash
./simd tx gov submit-proposal --title="Test Proposal" --description="My awesome proposal" --type="Text" --from validator-key --fee-granter=cosmos1xh44hxt7spr67hqaa7nyx5gnutrz5fraw6grxn --chain-id=testnet --fees="10stake"
```

### Granted Fee Deductions

Fees are deducted from grants in the `x/auth` ante handler. To learn more about how ante handlers work, read the [Auth Module AnteHandlers Guide](/docs/sdk/v0.50/auth/README#antehandlers).

### Gas

To prevent DoS attacks, using a filtered `x/feegrant` incurs gas. The SDK must ensure that all of the `grantee`'s transactions conform to the filter set by the `granter`. It does this by iterating over the allowed messages in the filter, charging 10 gas per filtered message, and then iterating over the messages being sent by the `grantee` to verify they adhere to the filter, again charging 10 gas per message. Iteration stops and the transaction fails as soon as a message that does not conform to the filter is found.

**WARNING**: The gas is charged against the granted allowance. Ensure your messages conform to the filter, if any, before sending transactions using your allowance.

### Pruning

A queue is maintained in state, keyed by grant expiration. On every `EndBlock`, the queue is checked against the current block time and expired grants are pruned.

## State

### FeeAllowance

Fee allowances are identified by combining the `Grantee` (the account address of the fee allowance grantee) with the `Granter` (the account address of the fee allowance granter).
+ +Fee allowance grants are stored in the state as follows: + +* Grant: `0x00 | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> ProtocolBuffer(Grant)` + +```go expandable +/ Code generated by protoc-gen-gogo. DO NOT EDIT. +/ source: cosmos/feegrant/v1beta1/feegrant.proto + +package feegrant + +import ( + + fmt "fmt" + _ "github.com/cosmos/cosmos-proto" + types1 "github.com/cosmos/cosmos-sdk/codec/types" + github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types" + types "github.com/cosmos/cosmos-sdk/types" + _ "github.com/cosmos/cosmos-sdk/types/tx/amino" + _ "github.com/cosmos/gogoproto/gogoproto" + proto "github.com/cosmos/gogoproto/proto" + github_com_cosmos_gogoproto_types "github.com/cosmos/gogoproto/types" + _ "google.golang.org/protobuf/types/known/durationpb" + _ "google.golang.org/protobuf/types/known/timestamppb" + io "io" + math "math" + math_bits "math/bits" + time "time" +) + +/ Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf +var _ = time.Kitchen + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the proto package it is being compiled against. +/ A compilation error at this line likely means your copy of the +/ proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 / please upgrade the proto package + +/ BasicAllowance implements Allowance with a one-time grant of coins +/ that optionally expires. The grantee can use up to SpendLimit to cover fees. +type BasicAllowance struct { + / spend_limit specifies the maximum amount of coins that can be spent + / by this allowance and will be updated as coins are spent. If it is + / empty, there is no spend limit and any amount of coins can be spent. 
+ SpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,1,rep,name=spend_limit,json=spendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"spend_limit"` + / expiration specifies an optional time when this allowance expires + Expiration *time.Time `protobuf:"bytes,2,opt,name=expiration,proto3,stdtime" json:"expiration,omitempty"` +} + +func (m *BasicAllowance) + +Reset() { *m = BasicAllowance{ +} +} + +func (m *BasicAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*BasicAllowance) + +ProtoMessage() { +} + +func (*BasicAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{0 +} +} + +func (m *BasicAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *BasicAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_BasicAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *BasicAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_BasicAllowance.Merge(m, src) +} + +func (m *BasicAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *BasicAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_BasicAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_BasicAllowance proto.InternalMessageInfo + +func (m *BasicAllowance) + +GetSpendLimit() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.SpendLimit +} + +return nil +} + +func (m *BasicAllowance) + +GetExpiration() *time.Time { + if m != nil { + return m.Expiration +} + +return nil +} + +/ PeriodicAllowance extends Allowance to allow for both a maximum cap, +/ as well as a limit per time period. 
+type PeriodicAllowance struct { + / basic specifies a struct of `BasicAllowance` + Basic BasicAllowance `protobuf:"bytes,1,opt,name=basic,proto3" json:"basic"` + / period specifies the time duration in which period_spend_limit coins can + / be spent before that allowance is reset + Period time.Duration `protobuf:"bytes,2,opt,name=period,proto3,stdduration" json:"period"` + / period_spend_limit specifies the maximum number of coins that can be spent + / in the period + PeriodSpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=period_spend_limit,json=periodSpendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_spend_limit"` + / period_can_spend is the number of coins left to be spent before the period_reset time + PeriodCanSpend github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,4,rep,name=period_can_spend,json=periodCanSpend,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_can_spend"` + / period_reset is the time at which this period resets and a new one begins, + / it is calculated from the start time of the first transaction after the + / last period ended + PeriodReset time.Time `protobuf:"bytes,5,opt,name=period_reset,json=periodReset,proto3,stdtime" json:"period_reset"` +} + +func (m *PeriodicAllowance) + +Reset() { *m = PeriodicAllowance{ +} +} + +func (m *PeriodicAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*PeriodicAllowance) + +ProtoMessage() { +} + +func (*PeriodicAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{1 +} +} + +func (m *PeriodicAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *PeriodicAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_PeriodicAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + 
return nil, err +} + +return b[:n], nil +} +} + +func (m *PeriodicAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_PeriodicAllowance.Merge(m, src) +} + +func (m *PeriodicAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *PeriodicAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_PeriodicAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_PeriodicAllowance proto.InternalMessageInfo + +func (m *PeriodicAllowance) + +GetBasic() + +BasicAllowance { + if m != nil { + return m.Basic +} + +return BasicAllowance{ +} +} + +func (m *PeriodicAllowance) + +GetPeriod() + +time.Duration { + if m != nil { + return m.Period +} + +return 0 +} + +func (m *PeriodicAllowance) + +GetPeriodSpendLimit() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.PeriodSpendLimit +} + +return nil +} + +func (m *PeriodicAllowance) + +GetPeriodCanSpend() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.PeriodCanSpend +} + +return nil +} + +func (m *PeriodicAllowance) + +GetPeriodReset() + +time.Time { + if m != nil { + return m.PeriodReset +} + +return time.Time{ +} +} + +/ AllowedMsgAllowance creates allowance only for specified message types. +type AllowedMsgAllowance struct { + / allowance can be any of basic and periodic fee allowance. + Allowance *types1.Any `protobuf:"bytes,1,opt,name=allowance,proto3" json:"allowance,omitempty"` + / allowed_messages are the messages for which the grantee has the access. 
+ AllowedMessages []string `protobuf:"bytes,2,rep,name=allowed_messages,json=allowedMessages,proto3" json:"allowed_messages,omitempty"` +} + +func (m *AllowedMsgAllowance) + +Reset() { *m = AllowedMsgAllowance{ +} +} + +func (m *AllowedMsgAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*AllowedMsgAllowance) + +ProtoMessage() { +} + +func (*AllowedMsgAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{2 +} +} + +func (m *AllowedMsgAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *AllowedMsgAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AllowedMsgAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *AllowedMsgAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_AllowedMsgAllowance.Merge(m, src) +} + +func (m *AllowedMsgAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *AllowedMsgAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_AllowedMsgAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_AllowedMsgAllowance proto.InternalMessageInfo + +/ Grant is stored in the KVStore to record a grant with full context +type Grant struct { + / granter is the address of the user granting an allowance of their funds. + Granter string `protobuf:"bytes,1,opt,name=granter,proto3" json:"granter,omitempty"` + / grantee is the address of the user being granted an allowance of another user's funds. + Grantee string `protobuf:"bytes,2,opt,name=grantee,proto3" json:"grantee,omitempty"` + / allowance can be any of basic, periodic, allowed fee allowance. 
+ Allowance *types1.Any `protobuf:"bytes,3,opt,name=allowance,proto3" json:"allowance,omitempty"` +} + +func (m *Grant) + +Reset() { *m = Grant{ +} +} + +func (m *Grant) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*Grant) + +ProtoMessage() { +} + +func (*Grant) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{3 +} +} + +func (m *Grant) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *Grant) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_Grant.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *Grant) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_Grant.Merge(m, src) +} + +func (m *Grant) + +XXX_Size() + +int { + return m.Size() +} + +func (m *Grant) + +XXX_DiscardUnknown() { + xxx_messageInfo_Grant.DiscardUnknown(m) +} + +var xxx_messageInfo_Grant proto.InternalMessageInfo + +func (m *Grant) + +GetGranter() + +string { + if m != nil { + return m.Granter +} + +return "" +} + +func (m *Grant) + +GetGrantee() + +string { + if m != nil { + return m.Grantee +} + +return "" +} + +func (m *Grant) + +GetAllowance() *types1.Any { + if m != nil { + return m.Allowance +} + +return nil +} + +func init() { + proto.RegisterType((*BasicAllowance)(nil), "cosmos.feegrant.v1beta1.BasicAllowance") + +proto.RegisterType((*PeriodicAllowance)(nil), "cosmos.feegrant.v1beta1.PeriodicAllowance") + +proto.RegisterType((*AllowedMsgAllowance)(nil), "cosmos.feegrant.v1beta1.AllowedMsgAllowance") + +proto.RegisterType((*Grant)(nil), "cosmos.feegrant.v1beta1.Grant") +} + +func init() { + proto.RegisterFile("cosmos/feegrant/v1beta1/feegrant.proto", fileDescriptor_7279582900c30aea) +} + +var fileDescriptor_7279582900c30aea = []byte{ + / 639 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 
	/ ... (remaining gzipped file descriptor bytes elided) ...
0x66, 0xef, 0xc1, + 0xdb, 0xe3, 0xfd, 0x85, 0x4a, 0x4e, 0xd8, 0x49, 0xf7, 0xd5, 0xcf, 0xa3, 0x60, 0x6a, 0x0b, 0x45, + 0x98, 0xb8, 0xf9, 0x9e, 0xdc, 0x07, 0x63, 0x76, 0x92, 0xa7, 0x48, 0x4c, 0xdb, 0x9c, 0x7e, 0x1a, + 0xd5, 0x49, 0x34, 0xb3, 0x94, 0xf4, 0x86, 0xfb, 0xe5, 0x00, 0xf2, 0x2a, 0x18, 0x0f, 0x19, 0xbc, + 0xb0, 0x59, 0x19, 0xb2, 0x79, 0x57, 0x4c, 0xc8, 0xbc, 0x92, 0x14, 0xbf, 0xef, 0x68, 0x12, 0x07, + 0x10, 0x75, 0xf2, 0x6b, 0x20, 0xf3, 0x53, 0x23, 0x3f, 0xa6, 0xe2, 0x05, 0x8d, 0x69, 0x92, 0x73, + 0x6d, 0xf7, 0x87, 0xf5, 0x0a, 0x88, 0x58, 0xc3, 0x81, 0x01, 0xd7, 0xa0, 0x8c, 0x5e, 0x10, 0xfb, + 0x04, 0x67, 0x5a, 0x83, 0x01, 0x13, 0x20, 0x6f, 0x80, 0xcb, 0x82, 0x3b, 0x42, 0x14, 0xc5, 0xca, + 0xd8, 0x3f, 0x57, 0x85, 0x35, 0xb1, 0x9d, 0x35, 0xb1, 0xcc, 0xcb, 0xad, 0xa4, 0x7a, 0xe5, 0xe1, + 0xb9, 0x96, 0xe6, 0x46, 0x4e, 0xe8, 0xd0, 0x86, 0x54, 0x7f, 0x49, 0xe0, 0x1a, 0xbb, 0x21, 0x77, + 0x93, 0x7a, 0xfd, 0xcd, 0x79, 0x02, 0x4a, 0x30, 0xbd, 0x88, 0xed, 0x99, 0x1e, 0x92, 0x5b, 0x0f, + 0x5a, 0xe6, 0xfc, 0x99, 0xc5, 0x58, 0x7d, 0x44, 0x79, 0x1e, 0x4c, 0x42, 0xce, 0xda, 0xf0, 0x11, + 0xa5, 0xd0, 0x43, 0x54, 0x19, 0x99, 0x2d, 0xd6, 0x4a, 0xd6, 0x55, 0x11, 0xdf, 0x14, 0xe1, 0x95, + 0xad, 0x37, 0x1f, 0xb4, 0xc2, 0xb9, 0x1c, 0xab, 0x39, 0xc7, 0x7f, 0xf0, 0x56, 0xfd, 0x2a, 0x81, + 0xb1, 0xf5, 0x04, 0x42, 0x5e, 0x06, 0x97, 0x18, 0x16, 0x8a, 0x98, 0xc7, 0x92, 0xa9, 0x7c, 0x3b, + 0x58, 0x9c, 0x16, 0x44, 0x75, 0xd7, 0x8d, 0x10, 0xa5, 0xdb, 0x71, 0x84, 0x03, 0xcf, 0x4a, 0x13, + 0xfb, 0x35, 0x88, 0xfd, 0x14, 0xce, 0x50, 0x33, 0xd0, 0xcd, 0xe2, 0xff, 0xee, 0xa6, 0x59, 0x3f, + 0xec, 0xaa, 0xd2, 0x51, 0x57, 0x95, 0x7e, 0x76, 0x55, 0xa9, 0xdd, 0x53, 0x0b, 0x47, 0x3d, 0xb5, + 0xf0, 0xbd, 0xa7, 0x16, 0x1e, 0xcf, 0xfd, 0x75, 0x6f, 0xf7, 0xb2, 0xff, 0x0b, 0x7b, 0x9c, 0xc9, + 0xb8, 0xfd, 0x3b, 0x00, 0x00, 0xff, 0xff, 0xe4, 0x3d, 0x09, 0x1d, 0x5a, 0x06, 0x00, 0x00, +} + +func (m *BasicAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, 
err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *BasicAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *BasicAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Expiration != nil { + n1, err1 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(*m.Expiration, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration):]) + if err1 != nil { + return 0, err1 +} + +i -= n1 + i = encodeVarintFeegrant(dAtA, i, uint64(n1)) + +i-- + dAtA[i] = 0x12 +} + if len(m.SpendLimit) > 0 { + for iNdEx := len(m.SpendLimit) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.SpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +} + +return len(dAtA) - i, nil +} + +func (m *PeriodicAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *PeriodicAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PeriodicAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + n2, err2 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(m.PeriodReset, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset):]) + if err2 != nil { + return 0, err2 +} + +i -= n2 + i = encodeVarintFeegrant(dAtA, i, uint64(n2)) + +i-- + dAtA[i] = 0x2a + if len(m.PeriodCanSpend) > 0 { + for iNdEx := len(m.PeriodCanSpend) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PeriodCanSpend[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size 
+ i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x22 +} + +} + if len(m.PeriodSpendLimit) > 0 { + for iNdEx := len(m.PeriodSpendLimit) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PeriodSpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + +} + +n3, err3 := github_com_cosmos_gogoproto_types.StdDurationMarshalTo(m.Period, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period):]) + if err3 != nil { + return 0, err3 +} + +i -= n3 + i = encodeVarintFeegrant(dAtA, i, uint64(n3)) + +i-- + dAtA[i] = 0x12 + { + size, err := m.Basic.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *AllowedMsgAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *AllowedMsgAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AllowedMsgAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.AllowedMessages) > 0 { + for iNdEx := len(m.AllowedMessages) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.AllowedMessages[iNdEx]) + +copy(dAtA[i:], m.AllowedMessages[iNdEx]) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.AllowedMessages[iNdEx]))) + +i-- + dAtA[i] = 0x12 +} + +} + if m.Allowance != nil { + { + size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *Grant) + +Marshal() (dAtA []byte, err error) { + size := 
m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *Grant) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Grant) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Allowance != nil { + { + size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + if len(m.Grantee) > 0 { + i -= len(m.Grantee) + +copy(dAtA[i:], m.Grantee) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Grantee))) + +i-- + dAtA[i] = 0x12 +} + if len(m.Granter) > 0 { + i -= len(m.Granter) + +copy(dAtA[i:], m.Granter) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Granter))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func encodeVarintFeegrant(dAtA []byte, offset int, v uint64) + +int { + offset -= sovFeegrant(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + +v >>= 7 + offset++ +} + +dAtA[offset] = uint8(v) + +return base +} + +func (m *BasicAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if len(m.SpendLimit) > 0 { + for _, e := range m.SpendLimit { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + if m.Expiration != nil { + l = github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration) + +n += 1 + l + sovFeegrant(uint64(l)) +} + +return n +} + +func (m *PeriodicAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = m.Basic.Size() + +n += 1 + l + sovFeegrant(uint64(l)) + +l = github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period) + +n += 1 + l + sovFeegrant(uint64(l)) + if len(m.PeriodSpendLimit) > 0 { + for _, e := range m.PeriodSpendLimit { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} 
+ +} + if len(m.PeriodCanSpend) > 0 { + for _, e := range m.PeriodCanSpend { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + +l = github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset) + +n += 1 + l + sovFeegrant(uint64(l)) + +return n +} + +func (m *AllowedMsgAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if m.Allowance != nil { + l = m.Allowance.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + if len(m.AllowedMessages) > 0 { + for _, s := range m.AllowedMessages { + l = len(s) + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + +return n +} + +func (m *Grant) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Granter) + if l > 0 { + n += 1 + l + sovFeegrant(uint64(l)) +} + +l = len(m.Grantee) + if l > 0 { + n += 1 + l + sovFeegrant(uint64(l)) +} + if m.Allowance != nil { + l = m.Allowance.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +return n +} + +func sovFeegrant(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} + +func sozFeegrant(x uint64) (n int) { + return sovFeegrant(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +func (m *BasicAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BasicAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: BasicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SpendLimit", wireType) +} + +var msglen int + for shift := uint(0); ; 
shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.SpendLimit = append(m.SpendLimit, types.Coin{ +}) + if err := m.SpendLimit[len(m.SpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Expiration", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Expiration == nil { + m.Expiration = new(time.Time) +} + if err := github_com_cosmos_gogoproto_types.StdTimeUnmarshal(m.Expiration, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *PeriodicAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 
io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PeriodicAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: PeriodicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Basic", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := m.Basic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Period", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := github_com_cosmos_gogoproto_types.StdDurationUnmarshal(&m.Period, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodSpendLimit", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { 
+ return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.PeriodSpendLimit = append(m.PeriodSpendLimit, types.Coin{ +}) + if err := m.PeriodSpendLimit[len(m.PeriodSpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodCanSpend", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.PeriodCanSpend = append(m.PeriodCanSpend, types.Coin{ +}) + if err := m.PeriodCanSpend[len(m.PeriodCanSpend)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodReset", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := 
github_com_cosmos_gogoproto_types.StdTimeUnmarshal(&m.PeriodReset, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *AllowedMsgAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AllowedMsgAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: AllowedMsgAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Allowance == nil { + m.Allowance = &types1.Any{ +} + +} + if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field AllowedMessages", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.AllowedMessages = append(m.AllowedMessages, string(dAtA[iNdEx:postIndex])) + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *Grant) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Grant: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: Grant: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Granter", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift 
+ if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Granter = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Grantee", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Grantee = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Allowance == nil { + m.Allowance = &types1.Any{ +} + +} + if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return 
io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func skipFeegrant(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + +iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break +} + +} + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + if length < 0 { + return 0, ErrInvalidLengthFeegrant +} + +iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupFeegrant +} + +depth-- + case 5: + iNdEx += 4 + default: + return 0, fmt.Errorf("proto: illegal wireType %d", wireType) +} + if iNdEx < 0 { + return 0, ErrInvalidLengthFeegrant +} + if depth == 0 { + return iNdEx, nil +} + +} + +return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthFeegrant = fmt.Errorf("proto: negative length found during unmarshaling") + +ErrIntOverflowFeegrant = fmt.Errorf("proto: integer overflow") + +ErrUnexpectedEndOfGroupFeegrant = fmt.Errorf("proto: unexpected end of group") +) +``` + +### FeeAllowanceQueue + +Fee Allowances queue items are identified by combining the `FeeAllowancePrefixQueue` (i.e., 0x01), `expiration`, `grantee` (the account address of fee allowance grantee), `granter` (the account address of fee allowance granter). 
The `EndBlocker` checks the `FeeAllowanceQueue` state for expired grants and prunes any it finds from the `FeeAllowance` store. + +Fee allowance queue keys are stored in the state as follows: + +* Grant: `0x01 | expiration_bytes | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> EmptyBytes` + +## Messages + +### Msg/GrantAllowance + +A fee allowance grant is created with the `MsgGrantAllowance` message. + +```protobuf +// MsgGrantAllowance adds permission for Grantee to spend up to Allowance +// of fees from the account of Granter. +message MsgGrantAllowance { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgGrantAllowance"; + + // granter is the address of the user granting an allowance of their funds. + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // grantee is the address of the user being granted an allowance of another user's funds. + string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // allowance can be any of basic, periodic, allowed fee allowance. + google.protobuf.Any allowance = 3 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"]; +} +``` + +### Msg/RevokeAllowance + +An existing fee allowance can be revoked with the `MsgRevokeAllowance` message. + +```protobuf +// MsgGrantAllowanceResponse defines the Msg/GrantAllowanceResponse response type. +message MsgGrantAllowanceResponse {} + +// MsgRevokeAllowance removes any existing Allowance from Granter to Grantee. +message MsgRevokeAllowance { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgRevokeAllowance"; + + // granter is the address of the user granting an allowance of their funds. + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // grantee is the address of the user being granted an allowance of another user's funds.
+ string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +## Events + +The feegrant module emits the following events: + +## Msg Server + +### MsgGrantAllowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | set\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### MsgRevokeAllowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | revoke\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### Exec fee allowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | use\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### Prune fee allowances + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | --------------- | +| message | action | prune\_feegrant | +| message | pruner | `{prunerAddress}` | + +## Client + +### CLI + +A user can query and interact with the `feegrant` module using the CLI. + +#### Query + +The `query` commands allow users to query `feegrant` state. + +```shell +simd query feegrant --help +``` + +##### grant + +The `grant` command allows users to query a grant for a given granter-grantee pair. + +```shell +simd query feegrant grant [granter] [grantee] [flags] +``` + +Example: + +```shell +simd query feegrant grant cosmos1.. cosmos1.. +``` + +Example Output: + +```yml +allowance: + '@type': /cosmos.feegrant.v1beta1.BasicAllowance + expiration: null + spend_limit: + - amount: "100" + denom: stake +grantee: cosmos1.. +granter: cosmos1.. +``` + +##### grants + +The `grants` command allows users to query all grants for a given grantee. + +```shell +simd query feegrant grants [grantee] [flags] +``` + +Example: + +```shell +simd query feegrant grants cosmos1.. 
+``` + +Example Output: + +```yml expandable +allowances: +- allowance: + '@type': /cosmos.feegrant.v1beta1.BasicAllowance + expiration: null + spend_limit: + - amount: "100" + denom: stake + grantee: cosmos1.. + granter: cosmos1.. +pagination: + next_key: null + total: "0" +``` + +#### Transactions + +The `tx` commands allow users to interact with the `feegrant` module. + +```shell +simd tx feegrant --help +``` + +##### grant + +The `grant` command allows users to grant fee allowances to another account. The fee allowance can have an expiration date, a total spend limit, and/or a periodic spend limit. + +```shell +simd tx feegrant grant [granter] [grantee] [flags] +``` + +Example (one-time spend limit): + +```shell +simd tx feegrant grant cosmos1.. cosmos1.. --spend-limit 100stake +``` + +Example (periodic spend limit): + +```shell +simd tx feegrant grant cosmos1.. cosmos1.. --period 3600 --period-limit 10stake +``` + +##### revoke + +The `revoke` command allows users to revoke a granted fee allowance. + +```shell +simd tx feegrant revoke [granter] [grantee] [flags] +``` + +Example: + +```shell +simd tx feegrant revoke cosmos1.. cosmos1.. +``` + +### gRPC + +A user can query the `feegrant` module using gRPC endpoints. + +#### Allowance + +The `Allowance` endpoint allows users to query a granted fee allowance. + +```shell +cosmos.feegrant.v1beta1.Query/Allowance +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"grantee":"cosmos1..","granter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.feegrant.v1beta1.Query/Allowance +``` + +Example Output: + +```json +{ + "allowance": { + "granter": "cosmos1..", + "grantee": "cosmos1..", + "allowance": { + "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", + "spendLimit": [ + { + "denom": "stake", + "amount": "100" + } + ] + } + } +} +``` + +#### Allowances + +The `Allowances` endpoint allows users to query all granted fee allowances for a given grantee. 
+ +```shell +cosmos.feegrant.v1beta1.Query/Allowances +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.feegrant.v1beta1.Query/Allowances +``` + +Example Output: + +```json expandable +{ + "allowances": [ + { + "granter": "cosmos1..", + "grantee": "cosmos1..", + "allowance": { + "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", + "spendLimit": [ + { + "denom": "stake", + "amount": "100" + } + ] + } + } + ], + "pagination": { + "total": "1" + } +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/genutil/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/genutil/README.mdx new file mode 100644 index 00000000..556d5cd2 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/genutil/README.mdx @@ -0,0 +1,1251 @@ +--- +title: '`x/genutil`' +description: >- + The genutil package contains a variety of genesis utility functionalities for + usage within a blockchain application. Namely: +--- + +## Concepts + +The `genutil` package contains a variety of genesis utility functionalities for usage within a blockchain application. Namely: + +* Genesis transaction (gentx) related functionality +* Commands for collection and creation of gentxs +* `InitChain` processing of gentxs +* Genesis file creation +* Genesis file validation +* Genesis file migration +* CometBFT-related initialization + * Translation of an app genesis to a CometBFT genesis + +## Genesis + +Genutil contains the data structure that defines an application genesis. +An application genesis consists of a consensus genesis (e.g. the CometBFT genesis) and application-related genesis data.
+ +```go expandable +package types + +import ( + + "bytes" + "encoding/json" + "errors" + "fmt" + "os" + "time" + + cmtjson "github.com/cometbft/cometbft/libs/json" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + cmttime "github.com/cometbft/cometbft/types/time" + "github.com/cosmos/cosmos-sdk/version" +) + +const ( + / MaxChainIDLen is the maximum length of a chain ID. + MaxChainIDLen = cmttypes.MaxChainIDLen +) + +/ AppGenesis defines the app's genesis. +type AppGenesis struct { + AppName string `json:"app_name"` + AppVersion string `json:"app_version"` + GenesisTime time.Time `json:"genesis_time"` + ChainID string `json:"chain_id"` + InitialHeight int64 `json:"initial_height"` + AppHash []byte `json:"app_hash"` + AppState json.RawMessage `json:"app_state,omitempty"` + Consensus *ConsensusGenesis `json:"consensus,omitempty"` +} + +/ NewAppGenesisWithVersion returns a new AppGenesis with the app name and app version already. +func NewAppGenesisWithVersion(chainID string, appState json.RawMessage) *AppGenesis { + return &AppGenesis{ + AppName: version.AppName, + AppVersion: version.Version, + ChainID: chainID, + AppState: appState, + Consensus: &ConsensusGenesis{ + Validators: nil, +}, +} +} + +/ ValidateAndComplete performs validation and completes the AppGenesis. 
+func (ag *AppGenesis) + +ValidateAndComplete() + +error { + if ag.ChainID == "" { + return errors.New("genesis doc must include non-empty chain_id") +} + if len(ag.ChainID) > MaxChainIDLen { + return fmt.Errorf("chain_id in genesis doc is too long (max: %d)", MaxChainIDLen) +} + if ag.InitialHeight < 0 { + return fmt.Errorf("initial_height cannot be negative (got %v)", ag.InitialHeight) +} + if ag.InitialHeight == 0 { + ag.InitialHeight = 1 +} + if ag.GenesisTime.IsZero() { + ag.GenesisTime = cmttime.Now() +} + if err := ag.Consensus.ValidateAndComplete(); err != nil { + return err +} + +return nil +} + +/ SaveAs is a utility method for saving AppGenesis as a JSON file. +func (ag *AppGenesis) + +SaveAs(file string) + +error { + appGenesisBytes, err := json.MarshalIndent(ag, "", " + ") + if err != nil { + return err +} + +return os.WriteFile(file, appGenesisBytes, 0o600) +} + +/ AppGenesisFromFile reads the AppGenesis from the provided file. +func AppGenesisFromFile(genFile string) (*AppGenesis, error) { + jsonBlob, err := os.ReadFile(genFile) + if err != nil { + return nil, fmt.Errorf("couldn't read AppGenesis file (%s): %w", genFile, err) +} + +var appGenesis AppGenesis + if err := json.Unmarshal(jsonBlob, &appGenesis); err != nil { + / fallback to CometBFT genesis + var ctmGenesis cmttypes.GenesisDoc + if err2 := cmtjson.Unmarshal(jsonBlob, &ctmGenesis); err2 != nil { + return nil, fmt.Errorf("error unmarshalling AppGenesis at %s: %w\n failed fallback to CometBFT GenDoc: %w", genFile, err, err2) +} + +appGenesis = AppGenesis{ + AppName: version.AppName, + / AppVersion is not filled as we do not know it from a CometBFT genesis + GenesisTime: ctmGenesis.GenesisTime, + ChainID: ctmGenesis.ChainID, + InitialHeight: ctmGenesis.InitialHeight, + AppHash: ctmGenesis.AppHash, + AppState: ctmGenesis.AppState, + Consensus: &ConsensusGenesis{ + Validators: ctmGenesis.Validators, + Params: ctmGenesis.ConsensusParams, +}, +} + +} + +return &appGenesis, nil +} + +/ 
-------------------------- +/ CometBFT Genesis Handling +/ -------------------------- + +/ ToGenesisDoc converts the AppGenesis to a CometBFT GenesisDoc. +func (ag *AppGenesis) + +ToGenesisDoc() (*cmttypes.GenesisDoc, error) { + return &cmttypes.GenesisDoc{ + GenesisTime: ag.GenesisTime, + ChainID: ag.ChainID, + InitialHeight: ag.InitialHeight, + AppHash: ag.AppHash, + AppState: ag.AppState, + Validators: ag.Consensus.Validators, + ConsensusParams: ag.Consensus.Params, +}, nil +} + +/ ConsensusGenesis defines the consensus layer's genesis. +/ TODO(@julienrbrt) + +eventually abstract from CometBFT types +type ConsensusGenesis struct { + Validators []cmttypes.GenesisValidator `json:"validators,omitempty"` + Params *cmttypes.ConsensusParams `json:"params,omitempty"` +} + +/ NewConsensusGenesis returns a ConsensusGenesis with given values. +/ It takes a proto consensus params so it can called from server export command. +func NewConsensusGenesis(params cmtproto.ConsensusParams, validators []cmttypes.GenesisValidator) *ConsensusGenesis { + return &ConsensusGenesis{ + Params: &cmttypes.ConsensusParams{ + Block: cmttypes.BlockParams{ + MaxBytes: params.Block.MaxBytes, + MaxGas: params.Block.MaxGas, +}, + Evidence: cmttypes.EvidenceParams{ + MaxAgeNumBlocks: params.Evidence.MaxAgeNumBlocks, + MaxAgeDuration: params.Evidence.MaxAgeDuration, + MaxBytes: params.Evidence.MaxBytes, +}, + Validator: cmttypes.ValidatorParams{ + PubKeyTypes: params.Validator.PubKeyTypes, +}, +}, + Validators: validators, +} +} + +func (cs *ConsensusGenesis) + +MarshalJSON() ([]byte, error) { + type Alias ConsensusGenesis + return cmtjson.Marshal(&Alias{ + Validators: cs.Validators, + Params: cs.Params, +}) +} + +func (cs *ConsensusGenesis) + +UnmarshalJSON(b []byte) + +error { + type Alias ConsensusGenesis + result := Alias{ +} + if err := cmtjson.Unmarshal(b, &result); err != nil { + return err +} + +cs.Params = result.Params + cs.Validators = result.Validators + + return nil +} + +func (cs 
*ConsensusGenesis) ValidateAndComplete() error {
	if cs == nil {
		return fmt.Errorf("consensus genesis cannot be nil")
	}
	if cs.Params == nil {
		cs.Params = cmttypes.DefaultConsensusParams()
	} else if err := cs.Params.ValidateBasic(); err != nil {
		return err
	}
	for i, v := range cs.Validators {
		if v.Power == 0 {
			return fmt.Errorf("the genesis file cannot contain validators with no voting power: %v", v)
		}
		if len(v.Address) > 0 && !bytes.Equal(v.PubKey.Address(), v.Address) {
			return fmt.Errorf("incorrect address for validator %v in the genesis file, should be %v", v, v.PubKey.Address())
		}
		if len(v.Address) == 0 {
			cs.Validators[i].Address = v.PubKey.Address()
		}
	}
	return nil
}
```

The application genesis can then be translated into the format expected by the consensus engine via `ToGenesisDoc`:

```go
// ToGenesisDoc converts the AppGenesis to a CometBFT GenesisDoc.
func (ag *AppGenesis) ToGenesisDoc() (*cmttypes.GenesisDoc, error) {
	return &cmttypes.GenesisDoc{
		GenesisTime:     ag.GenesisTime,
		ChainID:         ag.ChainID,
		InitialHeight:   ag.InitialHeight,
		AppHash:         ag.AppHash,
		AppState:        ag.AppState,
		Validators:      ag.Consensus.Validators,
		ConsensusParams: ag.Consensus.Params,
	}, nil
}
```

The server `start` command then loads this genesis through a genesis document provider (see `getGenDocProvider` below):

```go expandable
package server

import (
	"context"
	"errors"
	"fmt"
	"io"
	"net"
	"os"
	"runtime/pprof"

	"github.com/cometbft/cometbft/abci/server"
	cmtcmd
"github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + "github.com/cometbft/cometbft/proxy" + "github.com/cometbft/cometbft/rpc/client/local" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = 
"iavl-disable-fastnode" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. 
+func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. + +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + PreRunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. + if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +return wrapCPUProfile(serverCtx, func() + +error { + return start(serverCtx, clientCtx, appCreator, withCMT, opts) +}) +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. 
Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the 
CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, app, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func 
startStandAlone(svrCtx *Context, app types.Application, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %v", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. + <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return errors.Join(svr.Stop(), app.Close()) +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + home := cmtCfg.RootDir + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. 
+ if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, cmtCfg, svrCfg, clientCtx, svrCtx, app, home, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. +func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() + _ = app.Close() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := 
serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. +func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, port, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( 
+ grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", grpcAddress) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + cmtCfg *cmtcfg.Config, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + +app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) + +cleanupFn = 
func() {
		traceCleanupFn()
		if localErr := app.Close(); localErr != nil {
			svrCtx.Logger.Error(localErr.Error())
		}
	}

	return app, cleanupFn, nil
}
```

## Client

### CLI

The genutil commands are available under the `genesis` subcommand.

#### add-genesis-account

Add a genesis account to `genesis.json`. Learn more [here](https://docs.cosmos.network/main/run-node/run-node#adding-genesis-accounts).

#### collect-gentxs

Collect genesis txs and output a `genesis.json` file.

```shell
simd genesis collect-gentxs
```

This will create a new `genesis.json` file that includes data from all the validators (we sometimes call it the "super genesis file" to distinguish it from single-validator genesis files).

#### gentx

Generate a genesis tx carrying a self-delegation.

```shell
simd genesis gentx [key_name] [amount] --chain-id [chain-id]
```

This will create the genesis transaction for your new chain. Here `amount` should be at least `1000000000stake`.
If you provide too much or too little, you will encounter an error when starting a node.

#### migrate

Migrate a genesis file to a specified target (SDK) version.

```shell
simd genesis migrate [target-version]
```

The `migrate` command is extensible and takes a `MigrationMap`. This map is a mapping of target versions to genesis migration functions.
When not using the default `MigrationMap`, it is recommended to still call the default `MigrationMap` corresponding to the SDK version of the chain and prepend/append your own genesis migrations.

#### validate-genesis

Validates the genesis file at the default location, or at the location passed as an argument.

```shell
simd genesis validate-genesis
```

`validate-genesis` only checks that the genesis file is valid for the **current application binary**. To validate a genesis file from a previous version of the application, use the `migrate` command to migrate the genesis to the current version.
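The `MigrationMap` pattern described under `migrate` can be sketched as follows. Note that the type names here are illustrative stand-ins, not the SDK's exact signatures; the point is the shape of the extension: keep the default migration for your target version and append your own.

```go
package main

import "fmt"

// AppMap stands in for raw per-module genesis state (hypothetical type).
type AppMap map[string]string

// MigrationCallback migrates an AppMap to a target version.
type MigrationCallback func(AppMap) AppMap

// MigrationMap maps target version names to migration functions.
type MigrationMap map[string]MigrationCallback

// defaultMigrations stands in for the SDK's default MigrationMap.
var defaultMigrations = MigrationMap{
	"v0.50": func(st AppMap) AppMap { st["bank"] = "v0.50"; return st },
}

// withCustom wraps the default migration for a version so the default
// runs first and the chain-specific migration runs after it, as the
// documentation recommends.
func withCustom(base MigrationMap, version string, custom MigrationCallback) MigrationMap {
	out := MigrationMap{}
	for k, v := range base {
		out[k] = v
	}
	def := out[version]
	out[version] = func(st AppMap) AppMap {
		if def != nil {
			st = def(st) // default SDK migration first
		}
		return custom(st) // then the chain's own migration
	}
	return out
}

func main() {
	mm := withCustom(defaultMigrations, "v0.50", func(st AppMap) AppMap {
		st["mymodule"] = "v0.50"
		return st
	})
	st := mm["v0.50"](AppMap{})
	fmt.Println(st["bank"], st["mymodule"]) // v0.50 v0.50
}
```

Both migrations run in order, so chain-specific modules are migrated without losing the default SDK module migrations.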

diff --git a/docs/sdk/v0.50/documentation/module-system/modules/gov/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/gov/README.mdx
new file mode 100644
index 00000000..e1188577
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/gov/README.mdx
@@ -0,0 +1,2727 @@
---
title: '`x/gov`'
description: >-
  This paper specifies the Governance module of the Cosmos SDK, which was first
  described in the Cosmos Whitepaper in June 2016.
---

## Abstract

This paper specifies the Governance module of the Cosmos SDK, which was first
described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in
June 2016.

The module enables a Cosmos SDK based blockchain to support an on-chain governance
system. In this system, holders of the chain's native staking token can vote on
proposals on a one-token, one-vote basis. The module currently supports the
following features:

* **Proposal submission:** Users can submit proposals with a deposit. Once the
  minimum deposit is reached, the proposal enters the voting period. The minimum
  deposit can be reached by collecting deposits from different users (including
  the proposer) within the deposit period.
* **Vote:** Participants can vote on proposals that reached `MinDeposit` and
  entered the voting period.
* **Inheritance and penalties:** Delegators inherit their validator's vote if
  they don't vote themselves.
* **Claiming deposit:** Users that deposited on proposals can recover their
  deposits if the proposal was accepted or rejected. If the proposal was vetoed,
  or never entered the voting period (minimum deposit not reached within the
  deposit period), the deposit is burned.

This module is in use on the Cosmos Hub (a.k.a. [gaia](https://github.com/cosmos/gaia)).
Features that may be added in the future are described in [Future Improvements](#future-improvements).

## Contents

The following specification uses *ATOM* as the native staking token.
The module +can be adapted to any Proof-Of-Stake blockchain by replacing *ATOM* with the native +staking token of the chain. + +* [Concepts](#concepts) + * [Proposal submission](#proposal-submission) + * [Deposit](#deposit) + * [Vote](#vote) + * [Software Upgrade](#software-upgrade) +* [State](#state) + * [Proposals](#proposals) + * [Parameters and base types](#parameters-and-base-types) + * [Deposit](#deposit-1) + * [ValidatorGovInfo](#validatorgovinfo) + * [Stores](#stores) + * [Proposal Processing Queue](#proposal-processing-queue) + * [Legacy Proposal](#legacy-proposal) +* [Messages](#messages) + * [Proposal Submission](#proposal-submission-1) + * [Deposit](#deposit-2) + * [Vote](#vote-1) +* [Events](#events) + * [EndBlocker](#endblocker) + * [Handlers](#handlers) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) +* [Metadata](#metadata) + * [Proposal](#proposal-3) + * [Vote](#vote-5) +* [Future Improvements](#future-improvements) + +## Concepts + +*Disclaimer: This is work in progress. Mechanisms are susceptible to change.* + +The governance process is divided into a few steps that are outlined below: + +* **Proposal submission:** Proposal is submitted to the blockchain with a + deposit. +* **Vote:** Once the deposit reaches a certain value (`MinDeposit`), the proposal is + confirmed and the vote opens. Bonded Atom holders can then send `TxGovVote` + transactions to vote on the proposal. +* **Execution:** After a period of time, the votes are tallied and depending + on the result, the messages in the proposal will be executed. + +### Proposal submission + +#### Right to submit a proposal + +Every account can submit proposals by sending a `MsgSubmitProposal` transaction. +Once a proposal is submitted, it is identified by its unique `proposalID`. + +#### Proposal Messages + +A proposal includes an array of `sdk.Msg`s which are executed automatically if the +proposal passes.
The messages are executed by the governance `ModuleAccount` itself. Modules +such as `x/upgrade` that want to allow certain messages to be executed only by governance +should add a whitelist within the respective msg server, granting the governance +module the right to execute the message once a quorum has been reached. The governance +module uses the `MsgServiceRouter` to check that these messages are correctly constructed +and have a respective path to execute on, but does not perform a full validity check. + +### Deposit + +To prevent spam, proposals must be submitted with a deposit in the coins defined by +the `MinDeposit` param. + +When a proposal is submitted, it has to be accompanied by a deposit that must be +strictly positive, but can be lower than `MinDeposit`. The submitter doesn't need +to pay for the entire deposit on their own. The newly created proposal is stored in +an *inactive proposal queue* and stays there until its deposit passes the `MinDeposit`. +Other token holders can increase the proposal's deposit by sending a `Deposit` +transaction. If a proposal doesn't pass the `MinDeposit` before the deposit end time +(the time when deposits are no longer accepted), the proposal will be destroyed: the +proposal will be removed from state and the deposit will be burned (see x/gov `EndBlocker`). +When a proposal deposit passes the `MinDeposit` threshold (even during the proposal +submission) before the deposit end time, the proposal will be moved into the +*active proposal queue* and the voting period will begin. + +The deposit is kept in escrow and held by the governance `ModuleAccount` until the +proposal is finalized (passed or rejected).
+ +#### Deposit refund and burn + +When a proposal is finalized, the coins from the deposit are either refunded or burned +according to the final tally of the proposal: + +* If the proposal is approved or rejected but *not* vetoed, each deposit will be + automatically refunded to its respective depositor (transferred from the governance + `ModuleAccount`). +* When the proposal is vetoed with greater than 1/3, deposits will be burned from the + governance `ModuleAccount` and the proposal information along with its deposit + information will be removed from state. +* All refunded or burned deposits are removed from the state. Events are issued when + burning or refunding a deposit. + +### Vote + +#### Participants + +*Participants* are users that have the right to vote on proposals. On the +Cosmos Hub, participants are bonded Atom holders. Unbonded Atom holders and +other users do not get the right to participate in governance. However, they +can submit and deposit on proposals. + +Note that when *participants* have bonded and unbonded Atoms, their voting power is calculated from their bonded Atom holdings only. + +#### Voting period + +Once a proposal reaches `MinDeposit`, it immediately enters `Voting period`. We +define `Voting period` as the interval between the moment the vote opens and +the moment the vote closes. The initial value of `Voting period` is 2 weeks. + +#### Option set + +The option set of a proposal refers to the set of choices a participant can +choose from when casting its vote. + +The initial option set includes the following options: + +* `Yes` +* `No` +* `NoWithVeto` +* `Abstain` + +`NoWithVeto` counts as `No` but also adds a `Veto` vote. `Abstain` option +allows voters to signal that they do not intend to vote in favor or against the +proposal but accept the result of the vote. 
+ +*Note: from the UI, for urgent proposals we should maybe add a ‘Not Urgent’ option that casts a `NoWithVeto` vote.* + +#### Weighted Votes + +[ADR-037](docs/sdk/next/documentation/legacy/adr-comprehensive) introduces the weighted vote feature, which allows a staker to split their vote into several voting options. For example, they could use 70% of their voting power to vote Yes and 30% of their voting power to vote No. + +Oftentimes the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, and so it makes sense to allow them to split their voting power. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll. + +To represent a weighted vote on chain, we use the following Protobuf message. + +```protobuf +// WeightedVoteOption defines a unit of vote for vote split. +// +// Since: cosmos-sdk 0.43 +message WeightedVoteOption { + // option defines the valid vote options, it must not contain duplicate vote options. + VoteOption option = 1; + + // weight is the vote weight associated with the vote option. + string weight = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} +``` + +```protobuf +// Vote defines a vote on a governance proposal. +// A Vote consists of a proposal ID, the voter, and the vote option. +message Vote { + option (gogoproto.goproto_stringer) = false; + option (gogoproto.equal) = false; + + // proposal_id defines the unique id of the proposal. + uint64 proposal_id = 1 [(gogoproto.jsontag) = "id", (amino.field_name) = "id", (amino.dont_omitempty) = true]; + + // voter is the voter address of the proposal.
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // Deprecated: Prefer to use `options` instead. This field is set in queries + // if and only if `len(options) == 1` and that option has weight 1. In all + // other cases, this field will default to VOTE_OPTION_UNSPECIFIED. + VoteOption option = 3 [deprecated = true]; + + // options is the weighted vote options. + // + // Since: cosmos-sdk 0.43 + repeated WeightedVoteOption options = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +For a weighted vote to be valid, the `options` field must not contain duplicate vote options, and the sum of weights of all options must be equal to 1. + +### Quorum + +Quorum is defined as the minimum percentage of voting power that needs to be +cast on a proposal for the result to be valid. + +### Expedited Proposals + +A proposal can be expedited, making it use a shorter voting duration and a higher tally threshold by default. If an expedited proposal fails to meet the threshold within the shorter voting duration, the expedited proposal is then converted to a regular proposal and restarts voting under regular voting conditions. + +#### Threshold + +Threshold is defined as the minimum proportion of `Yes` votes (excluding +`Abstain` votes) for the proposal to be accepted. + +Initially, the threshold is set at 50% of `Yes` votes, excluding `Abstain` +votes. A possibility to veto exists if more than 1/3rd of all votes are +`NoWithVeto` votes. Note, both of these values are derived from the `TallyParams` +on-chain parameter, which is modifiable by governance. +This means that proposals are accepted iff: + +* There exist bonded tokens. +* Quorum has been achieved. +* The proportion of `Abstain` votes is inferior to 1/1. +* The proportion of `NoWithVeto` votes is inferior to 1/3, including + `Abstain` votes. +* The proportion of `Yes` votes, excluding `Abstain` votes, at the end of + the voting period is superior to 1/2.
+ +For expedited proposals, by default, the threshold is higher than with a *normal proposal*, namely, 66.7%. + +#### Inheritance + +If a delegator does not vote, it will inherit its validator vote. + +* If the delegator votes before its validator, it will not inherit from the + validator's vote. +* If the delegator votes after its validator, it will override its validator + vote with its own. If the proposal is urgent, it is possible + that the vote will close before delegators have a chance to react and + override their validator's vote. This is not a problem, as proposals require more than 2/3rd of the total voting power to pass, when tallied at the end of the voting period. Because as little as 1/3 + 1 validation power could collude to censor transactions, non-collusion is already assumed for ranges exceeding this threshold. + +#### Validator’s punishment for non-voting + +At present, validators are not punished for failing to vote. + +#### Governance address + +Later, we may add permissioned keys that could only sign txs from certain modules. For the MVP, the `Governance address` will be the main validator address generated at account creation. This address corresponds to a different PrivKey than the CometBFT PrivKey which is responsible for signing consensus messages. Validators thus do not have to sign governance transactions with the sensitive CometBFT PrivKey. + +#### Burnable Params + +There are three parameters that define whether the deposit of a proposal should be burned or returned to the depositors. + +* `BurnVoteVeto` burns the proposal deposit if the proposal gets vetoed. +* `BurnVoteQuorum` burns the proposal deposit if the vote does not reach quorum. +* `BurnProposalDepositPrevote` burns the proposal deposit if it does not enter the voting phase. + +> Note: These parameters are modifiable via governance. + +## State + +### Constitution + +`Constitution` is found in the genesis state.
It is a string field intended to be used to describe the purpose of a particular blockchain, and its expected norms. A few examples of how the constitution field can be used: + +* define the purpose of the chain, laying a foundation for its future development +* set expectations for delegators +* set expectations for validators +* define the chain's relationship to "meatspace" entities, like a foundation or corporation + +Since this is more of a social feature than a technical feature, we'll now get into some items that may have been useful to have in a genesis constitution: + +* What limitations on governance exist, if any? + * is it okay for the community to slash the wallet of a whale that they no longer feel that they want around? (viz: Juno Proposal 4 and 16) + * can governance "socially slash" a validator who is using unapproved MEV? (viz: commonwealth.im/osmosis) + * In the event of an economic emergency, what should validators do? + * The Terra crash of May 2022 saw validators choose to run a new binary with code that had not been approved by governance, because the governance token had been inflated to nothing. +* What is the purpose of the chain, specifically? + * the best example of this is the Cosmos Hub, where different founding groups have different interpretations of the purpose of the network. + +This genesis entry, "constitution", hasn't been designed for existing chains, which should likely just ratify a constitution using their governance system. Instead, this is for new chains. It will allow validators to have a much clearer idea of the purpose and the expectations placed on them while operating their nodes. Likewise, for community members, the constitution will give them some idea of what to expect from both the "chain team" and the validators, respectively.
+ +This constitution is designed to be immutable, and placed only in genesis, though that could change over time by a pull request to the cosmos-sdk that allows for the constitution to be changed by governance. Communities wishing to make amendments to their original constitution should use the governance mechanism and a "signaling proposal" to do exactly that. + +**Ideal use scenario for a cosmos chain constitution** + +As a chain developer, you decide that you'd like to provide clarity to your key user groups: + +* validators +* token holders +* developers (yourself) + +You use the constitution to immutably store some Markdown in genesis, so that when difficult questions come up, the constitution can provide guidance to the community. + +### Proposals + +`Proposal` objects are used to tally votes and generally track the proposal's state. +They contain an array of arbitrary `sdk.Msg`'s which the governance module will attempt +to resolve and then execute if the proposal passes. `Proposal`'s are identified by a +unique id and contain a series of timestamps: `submit_time`, `deposit_end_time`, +`voting_start_time`, `voting_end_time` which track the lifecycle of a proposal. + +```protobuf +// Proposal defines the core field members of a governance proposal. +message Proposal { + // id defines the unique id of the proposal. + uint64 id = 1; + + // messages are the arbitrary messages to be executed if the proposal passes. + repeated google.protobuf.Any messages = 2; + + // status defines the proposal status. + ProposalStatus status = 3; + + // final_tally_result is the final tally result of the proposal. When + // querying a proposal via gRPC, this field is not populated until the + // proposal's voting period has ended. + TallyResult final_tally_result = 4; + + // submit_time is the time of proposal submission. + google.protobuf.Timestamp submit_time = 5 [(gogoproto.stdtime) = true]; + + // deposit_end_time is the end time for deposition.
+ google.protobuf.Timestamp deposit_end_time = 6 [(gogoproto.stdtime) = true]; + + // total_deposit is the total deposit on the proposal. + repeated cosmos.base.v1beta1.Coin total_deposit = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // voting_start_time is the starting time to vote on a proposal. + google.protobuf.Timestamp voting_start_time = 8 [(gogoproto.stdtime) = true]; + + // voting_end_time is the end time of voting on a proposal. + google.protobuf.Timestamp voting_end_time = 9 [(gogoproto.stdtime) = true]; + + // metadata is any arbitrary metadata attached to the proposal. + string metadata = 10; + + // title is the title of the proposal + // + // Since: cosmos-sdk 0.47 + string title = 11; + + // summary is a short summary of the proposal + // + // Since: cosmos-sdk 0.47 + string summary = 12; + + // Proposer is the address of the proposal submitter + // + // Since: cosmos-sdk 0.47 + string proposer = 13 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +A proposal will generally require more than just a set of messages to explain its +purpose, but needs some greater justification and a means for interested participants +to discuss and debate the proposal. +In most cases, **it is encouraged to have an off-chain system that supports the on-chain governance process**. +To accommodate this, a proposal contains a special **`metadata`** field, a string, +which can be used to add context to the proposal. The `metadata` field allows custom use for networks; +however, it is expected that the field contains a URL or some form of CID using a system such as +[IPFS](https://docs.ipfs.io/concepts/content-addressing/). To support the case of +interoperability across networks, the SDK recommends that the `metadata` represents +the following `JSON` template: + +```json +{ + "title": "...", + "description": "...", + "forum": "...", / a link to the discussion platform (i.e. Discord)
/ any extra data that doesn't correspond to the other fields +} +``` + +This makes it far easier for clients to support multiple networks. + +The metadata has a maximum length that is chosen by the app developer, and +passed into the gov keeper as a config. The default maximum length in the SDK is 255 characters. + +#### Writing a module that uses governance + +There are many aspects of a chain, or of its individual modules, that you may want to +control through governance, such as changing various parameters. This is very simple +to do. First, write out your message types and `MsgServer` implementation. Add an +`authority` field to the keeper which will be populated in the constructor with the +governance module account: `govKeeper.GetGovernanceAccount().GetAddress()`. Then for +the methods in the `msg_server.go`, perform a check on the message that the signer +matches `authority`. This will prevent any user other than the governance module account from executing that message. + +### Parameters and base types + +`Parameters` define the rules according to which votes are run. There can only +be one active parameter set at any given time. If governance wants to change a +parameter set, either to modify a value or add/remove a parameter field, a new +parameter set has to be created and the previous one rendered inactive. + +#### DepositParams + +```protobuf +// DepositParams defines the params for deposits on governance proposals. +message DepositParams { + // Minimum deposit for a proposal to enter voting period. + repeated cosmos.base.v1beta1.Coin min_deposit = 1 + [(gogoproto.nullable) = false, (gogoproto.jsontag) = "min_deposit,omitempty"]; + + // Maximum period for Atom holders to deposit on a proposal. Initial value: 2 + // months. + google.protobuf.Duration max_deposit_period = 2 + [(gogoproto.stdduration) = true, (gogoproto.jsontag) = "max_deposit_period,omitempty"]; +} +``` + +#### VotingParams + +```protobuf +// VotingParams defines the params for voting on governance proposals.
+message VotingParams { + // Duration of the voting period. + google.protobuf.Duration voting_period = 1 [(gogoproto.stdduration) = true]; +} +``` + +#### TallyParams + +```protobuf +// TallyParams defines the params for tallying votes on governance proposals. +message TallyParams { + // Minimum percentage of total stake needed to vote for a result to be + // considered valid. + string quorum = 1 [(cosmos_proto.scalar) = "cosmos.Dec"]; + + // Minimum proportion of Yes votes for proposal to pass. Default value: 0.5. + string threshold = 2 [(cosmos_proto.scalar) = "cosmos.Dec"]; + + // Minimum value of Veto votes to Total votes ratio for proposal to be + // vetoed. Default value: 1/3. + string veto_threshold = 3 [(cosmos_proto.scalar) = "cosmos.Dec"]; +} +``` + +Parameters are stored in a global `GlobalParams` KVStore. + +Additionally, we introduce some basic types: + +```go expandable +type Vote byte + +const ( + VoteYes = 0x1 + VoteNo = 0x2 + VoteNoWithVeto = 0x3 + VoteAbstain = 0x4 +) + +type ProposalType string + +const ( + ProposalTypePlainText = "Text" + ProposalTypeSoftwareUpgrade = "SoftwareUpgrade" +) + +type ProposalStatus byte + +const ( + StatusNil ProposalStatus = 0x00 + StatusDepositPeriod ProposalStatus = 0x01 / Proposal is submitted. Participants can deposit on it but not vote + StatusVotingPeriod ProposalStatus = 0x02 / MinDeposit is reached, participants can vote + StatusPassed ProposalStatus = 0x03 / Proposal passed and successfully executed + StatusRejected ProposalStatus = 0x04 / Proposal has been rejected + StatusFailed ProposalStatus = 0x05 / Proposal passed but failed execution +) +``` + +### Deposit + +```protobuf +// Deposit defines an amount deposited by an account address to an active +// proposal. +message Deposit { + // proposal_id defines the unique id of the proposal. + uint64 proposal_id = 1; + + // depositor defines the deposit addresses from the proposals. 
+ string depositor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // amount to be deposited by depositor. + repeated cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +### ValidatorGovInfo + +This type is used in a temp map when tallying. + +```go +type ValidatorGovInfo struct { + Minus sdk.Dec + Vote Vote +} +``` + +## Stores + + +Stores are KVStores in the multi-store. The key to find the store is the first parameter in the list. + + +We will use one KVStore `Governance` to store four mappings: + +* A mapping from `proposalID|'proposal'` to `Proposal`. +* A mapping from `proposalID|'addresses'|address` to `Vote`. This mapping allows + us to query all addresses that voted on the proposal along with their vote by + doing a range query on `proposalID:addresses`. +* A mapping from `ParamsKey|'Params'` to `Params`. This map allows querying all + x/gov params. +* A mapping from `VotingPeriodProposalKeyPrefix|proposalID` to a single byte. This allows + us to know if a proposal is in the voting period or not with very low gas cost. + +For pseudocode purposes, here are the two functions we will use to read or write in stores: + +* `load(StoreKey, Key)`: Retrieve item stored at key `Key` in store found at key `StoreKey` in the multistore +* `store(StoreKey, Key, Value)`: Write value `Value` at key `Key` in store found at key `StoreKey` in the multistore + +### Proposal Processing Queue + +**Store:** + +* `ProposalProcessingQueue`: A queue `queue[proposalID]` containing all the + `ProposalIDs` of proposals that reached `MinDeposit`. During each `EndBlock`, + all the proposals that have reached the end of their voting period are processed. + To process a finished proposal, the application tallies the votes, computes the + votes of each validator and checks if every validator in the validator set has + voted. If the proposal is accepted, deposits are refunded.
Finally, the proposal + content `Handler` is executed. + +And the pseudocode for the `ProposalProcessingQueue`: + +```go expandable +in EndBlock do + for finishedProposalID in GetAllFinishedProposalIDs(block.Time) + proposal = load(Governance, ) / proposal is a const key + + validators = Keeper.getAllValidators() + tmpValMap := map(sdk.AccAddress)ValidatorGovInfo + + / Initiate mapping at 0. This is the amount of shares of the validator's vote that will be overridden by their delegator's votes + for each validator in validators + tmpValMap(validator.OperatorAddr).Minus = 0 + + / Tally + voterIterator = rangeQuery(Governance, ) / return all the addresses that voted on the proposal + for each (voterAddress, vote) in voterIterator + delegations = stakingKeeper.getDelegations(voterAddress) / get all delegations for current voter + for each delegation in delegations + / make sure delegation.Shares does NOT include shares being unbonded + tmpValMap(delegation.ValidatorAddr).Minus += delegation.Shares + proposal.updateTally(vote, delegation.Shares) + + _, isVal = stakingKeeper.getValidator(voterAddress) + if (isVal) + tmpValMap(voterAddress).Vote = vote + + tallyingParam = load(GlobalParams, 'TallyingParam') + + / Update tally if validator voted + for each validator in validators + if tmpValMap(validator).HasVoted + proposal.updateTally(tmpValMap(validator).Vote, (validator.TotalShares - tmpValMap(validator).Minus)) + + / Check if proposal is accepted or rejected + totalNonAbstain := proposal.YesVotes + proposal.NoVotes + proposal.NoWithVetoVotes + if (proposal.Votes.YesVotes/totalNonAbstain > tallyingParam.Threshold AND proposal.Votes.NoWithVetoVotes/totalNonAbstain < tallyingParam.Veto) + / proposal was accepted at the end of the voting period + / refund deposits (non-voters already punished) + for each (amount, depositor) in proposal.Deposits + depositor.AtomBalance += amount + + stateWriter, err := proposal.Handler() + if err != nil + / proposal passed but
failed during state execution + proposal.CurrentStatus = ProposalStatusFailed + else + / proposal passed and state is persisted + proposal.CurrentStatus = ProposalStatusAccepted + stateWriter.save() + +else + / proposal was rejected + proposal.CurrentStatus = ProposalStatusRejected + + store(Governance, , proposal) +``` + +### Legacy Proposal + + +Legacy proposals are deprecated. Use the new proposal flow by granting the governance module the right to execute the message. + + +A legacy proposal is the old implementation of a governance proposal. +Unlike proposals, which can contain any messages, a legacy proposal only allows the submission of a set of pre-defined proposal types. +These proposals are defined by their types and handled by handlers that are registered in the gov v1beta1 router. + +More information on how to submit proposals is available in the [client section](#client). + +## Messages + +### Proposal Submission + +Proposals can be submitted by any account via a `MsgSubmitProposal` transaction. + +```protobuf +// MsgSubmitProposal defines an sdk.Msg type that supports submitting arbitrary +// proposal Content. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposer"; + option (amino.name) = "cosmos-sdk/v1/MsgSubmitProposal"; + + // messages are the arbitrary messages to be executed if proposal passes. + repeated google.protobuf.Any messages = 1; + + // initial_deposit is the deposit value that must be paid at proposal submission. + repeated cosmos.base.v1beta1.Coin initial_deposit = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // proposer is the account address of the proposer. + string proposer = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // metadata is any arbitrary metadata attached to the proposal. + string metadata = 4; + + // title is the title of the proposal.
+ // + // Since: cosmos-sdk 0.47 + string title = 5; + + // summary is the summary of the proposal + // + // Since: cosmos-sdk 0.47 + string summary = 6; +} +``` + +All `sdk.Msgs` passed into the `messages` field of a `MsgSubmitProposal` message +must be registered in the app's `MsgServiceRouter`. Each of these messages must +have one signer, namely the gov module account. Finally, the metadata length +must not be larger than the `maxMetadataLen` config passed into the gov keeper. +The `initialDeposit` must be strictly positive and conform to the accepted denom of the `MinDeposit` param. + +**State modifications:** + +* Generate new `proposalID` +* Create new `Proposal` +* Initialise `Proposal`'s attributes +* Decrease balance of sender by `InitialDeposit` +* If `MinDeposit` is reached: + * Push `proposalID` in `ProposalProcessingQueue` +* Transfer `InitialDeposit` from the `Proposer` to the governance `ModuleAccount` + +### Deposit + +Once a proposal is submitted, if `Proposal.TotalDeposit < ActiveParam.MinDeposit`, Atom holders can send +`MsgDeposit` transactions to increase the proposal's deposit. + +A deposit is accepted iff: + +* The proposal exists +* The proposal is not in the voting period +* The deposited coins conform to the accepted denom from the `MinDeposit` param + +```protobuf +// MsgDeposit defines a message to submit a deposit to an existing proposal. +message MsgDeposit { + option (cosmos.msg.v1.signer) = "depositor"; + option (amino.name) = "cosmos-sdk/v1/MsgDeposit"; + + // proposal_id defines the unique id of the proposal. + uint64 proposal_id = 1 [(gogoproto.jsontag) = "proposal_id", (amino.dont_omitempty) = true]; + + // depositor defines the deposit addresses from the proposals. + string depositor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // amount to be deposited by depositor.
+ repeated cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +**State modifications:** + +* Decrease balance of sender by `deposit` +* Add `deposit` of sender in `proposal.Deposits` +* Increase `proposal.TotalDeposit` by sender's `deposit` +* If `MinDeposit` is reached: + * Push `proposalID` in `ProposalProcessingQueue` +* Transfer `Deposit` from the `depositor` to the governance `ModuleAccount` + +### Vote + +Once `ActiveParam.MinDeposit` is reached, the voting period starts. From there, +bonded Atom holders are able to send `MsgVote` transactions to cast their +vote on the proposal. + +```protobuf +// MsgVote defines a message to cast a vote. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/v1/MsgVote"; + + // proposal_id defines the unique id of the proposal. + uint64 proposal_id = 1 [(gogoproto.jsontag) = "proposal_id", (amino.dont_omitempty) = true]; + + // voter is the voter address for the proposal. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // option defines the vote option. + VoteOption option = 3; + + // metadata is any arbitrary metadata attached to the Vote. + string metadata = 4; +} +``` + +**State modifications:** + +* Record `Vote` of sender + + +Gas cost for this message has to take into account the future tallying of the vote in EndBlocker.
+ + +## Events + +The governance module emits the following events: + +### EndBlocker + +| Type | Attribute Key | Attribute Value | +| ------------------ | ---------------- | ---------------- | +| inactive\_proposal | proposal\_id | `{proposalID}` | +| inactive\_proposal | proposal\_result | `{proposalResult}` | +| active\_proposal | proposal\_id | `{proposalID}` | +| active\_proposal | proposal\_result | `{proposalResult}` | + +### Handlers + +#### MsgSubmitProposal + +| Type | Attribute Key | Attribute Value | +| --------------------- | --------------------- | ---------------- | +| submit\_proposal | proposal\_id | `{proposalID}` | +| submit\_proposal \[0] | voting\_period\_start | `{proposalID}` | +| proposal\_deposit | amount | `{depositAmount}` | +| proposal\_deposit | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | submit\_proposal | +| message | sender | `{senderAddress}` | + +* \[0] Event only emitted if the voting period starts during the submission. 
+ +#### MsgVote + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| proposal\_vote | option | `{voteOption}` | +| proposal\_vote | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | vote | +| message | sender | `{senderAddress}` | + +#### MsgVoteWeighted + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------------- | +| proposal\_vote | option | `{weightedVoteOptions}` | +| proposal\_vote | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | vote | +| message | sender | `{senderAddress}` | + +#### MsgDeposit + +| Type | Attribute Key | Attribute Value | +| ---------------------- | --------------------- | --------------- | +| proposal\_deposit | amount | `{depositAmount}` | +| proposal\_deposit | proposal\_id | `{proposalID}` | +| proposal\_deposit \[0] | voting\_period\_start | `{proposalID}` | +| message | module | governance | +| message | action | deposit | +| message | sender | `{senderAddress}` | + +* \[0] Event only emitted if the voting period starts during the submission. 
+
+## Parameters
+
+The governance module contains the following parameters:
+
+| Key | Type | Example |
+| -------------------------------- | ---------------- | ---------------------------------------- |
+| min\_deposit | array (coins) | \[`{"denom":"uatom","amount":"10000000"}`] |
+| max\_deposit\_period | string (time ns) | "172800000000000" (172800s) |
+| voting\_period | string (time ns) | "172800000000000" (172800s) |
+| quorum | string (dec) | "0.334000000000000000" |
+| threshold | string (dec) | "0.500000000000000000" |
+| veto\_threshold | string (dec) | "0.334000000000000000" |
+| expedited\_threshold | string (dec) | "0.667000000000000000" |
+| expedited\_voting\_period | string (time ns) | "86400000000000" (86400s) |
+| expedited\_min\_deposit | array (coins) | \[`{"denom":"uatom","amount":"50000000"}`] |
+| burn\_proposal\_deposit\_prevote | bool | false |
+| burn\_vote\_quorum | bool | false |
+| burn\_vote\_veto | bool | true |
+| min\_initial\_deposit\_ratio | string | "0.1" |
+
+**NOTE**: Unlike other modules, the governance module's parameters are objects. To
+change only a subset of parameters, include just the desired fields rather than
+the entire parameter object structure.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `gov` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `gov` state.
+
+```bash
+simd query gov --help
+```
+
+##### deposit
+
+The `deposit` command allows users to query a deposit for a given proposal from a given depositor.
+
+```bash
+simd query gov deposit [proposal-id] [depositor-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query gov deposit 1 cosmos1..
+```
+
+Example Output:
+
+```bash
+amount:
+- amount: "100"
+  denom: stake
+depositor: cosmos1..
+proposal_id: "1"
+```
+
+##### deposits
+
+The `deposits` command allows users to query all deposits for a given proposal.
+ +```bash +simd query gov deposits [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov deposits 1 +``` + +Example Output: + +```bash +deposits: +- amount: + - amount: "100" + denom: stake + depositor: cosmos1.. + proposal_id: "1" +pagination: + next_key: null + total: "0" +``` + +##### param + +The `param` command allows users to query a given parameter for the `gov` module. + +```bash +simd query gov param [param-type] [flags] +``` + +Example: + +```bash +simd query gov param voting +``` + +Example Output: + +```bash +voting_period: "172800000000000" +``` + +##### params + +The `params` command allows users to query all parameters for the `gov` module. + +```bash +simd query gov params [flags] +``` + +Example: + +```bash +simd query gov params +``` + +Example Output: + +```bash expandable +deposit_params: + max_deposit_period: 172800s + min_deposit: + - amount: "10000000" + denom: stake +params: + expedited_min_deposit: + - amount: "50000000" + denom: stake + expedited_threshold: "0.670000000000000000" + expedited_voting_period: 86400s + max_deposit_period: 172800s + min_deposit: + - amount: "10000000" + denom: stake + min_initial_deposit_ratio: "0.000000000000000000" + proposal_cancel_burn_rate: "0.500000000000000000" + quorum: "0.334000000000000000" + threshold: "0.500000000000000000" + veto_threshold: "0.334000000000000000" + voting_period: 172800s +tally_params: + quorum: "0.334000000000000000" + threshold: "0.500000000000000000" + veto_threshold: "0.334000000000000000" +voting_params: + voting_period: 172800s +``` + +##### proposal + +The `proposal` command allows users to query a given proposal. 
+ +```bash +simd query gov proposal [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposal 1 +``` + +Example Output: + +```bash expandable +deposit_end_time: "2022-03-30T11:50:20.819676256Z" +final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" +id: "1" +messages: +- '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. +metadata: AQ== +status: PROPOSAL_STATUS_DEPOSIT_PERIOD +submit_time: "2022-03-28T11:50:20.819676256Z" +total_deposit: +- amount: "10" + denom: stake +voting_end_time: null +voting_start_time: null +``` + +##### proposals + +The `proposals` command allows users to query all proposals with optional filters. + +```bash +simd query gov proposals [flags] +``` + +Example: + +```bash +simd query gov proposals +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +proposals: +- deposit_end_time: "2022-03-30T11:50:20.819676256Z" + final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" + id: "1" + messages: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + metadata: AQ== + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2022-03-28T11:50:20.819676256Z" + total_deposit: + - amount: "10" + denom: stake + voting_end_time: null + voting_start_time: null +- deposit_end_time: "2022-03-30T14:02:41.165025015Z" + final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" + id: "2" + messages: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. 
+ metadata: AQ== + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2022-03-28T14:02:41.165025015Z" + total_deposit: + - amount: "10" + denom: stake + voting_end_time: null + voting_start_time: null +``` + +##### proposer + +The `proposer` command allows users to query the proposer for a given proposal. + +```bash +simd query gov proposer [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposer 1 +``` + +Example Output: + +```bash +proposal_id: "1" +proposer: cosmos1.. +``` + +##### tally + +The `tally` command allows users to query the tally of a given proposal vote. + +```bash +simd query gov tally [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov tally 1 +``` + +Example Output: + +```bash +abstain: "0" +"no": "0" +no_with_veto: "0" +"yes": "1" +``` + +##### vote + +The `vote` command allows users to query a vote for a given proposal. + +```bash +simd query gov vote [proposal-id] [voter-addr] [flags] +``` + +Example: + +```bash +simd query gov vote 1 cosmos1.. +``` + +Example Output: + +```bash +option: VOTE_OPTION_YES +options: +- option: VOTE_OPTION_YES + weight: "1.000000000000000000" +proposal_id: "1" +voter: cosmos1.. +``` + +##### votes + +The `votes` command allows users to query all votes for a given proposal. + +```bash +simd query gov votes [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov votes 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "0" +votes: +- option: VOTE_OPTION_YES + options: + - option: VOTE_OPTION_YES + weight: "1.000000000000000000" + proposal_id: "1" + voter: cosmos1.. +``` + +#### Transactions + +The `tx` commands allow users to interact with the `gov` module. + +```bash +simd tx gov --help +``` + +##### deposit + +The `deposit` command allows users to deposit tokens for a given proposal. + +```bash +simd tx gov deposit [proposal-id] [deposit] [flags] +``` + +Example: + +```bash +simd tx gov deposit 1 10000000stake --from cosmos1.. 
+```
+
+##### draft-proposal
+
+The `draft-proposal` command allows users to draft any type of proposal.
+The command returns a `draft_proposal.json`, to be used by `submit-proposal` once completed.
+The `draft_metadata.json` is meant to be uploaded to [IPFS](#metadata).
+
+```bash
+simd tx gov draft-proposal
+```
+
+##### submit-proposal
+
+The `submit-proposal` command allows users to submit a governance proposal along with some messages and metadata.
+Messages, metadata and deposit are defined in a JSON file.
+
+```bash
+simd tx gov submit-proposal [path-to-proposal-json] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-proposal /path/to/proposal.json --from cosmos1..
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "messages": [
+    {
+      "@type": "/cosmos.bank.v1beta1.MsgSend",
+      "from_address": "cosmos1...", // The gov module address
+      "to_address": "cosmos1...",
+      "amount":[{
+        "denom": "stake",
+        "amount": "10"}]
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "10stake",
+  "title": "Proposal Title",
+  "summary": "Proposal Summary"
+}
+```
+
+
+By default, the metadata, summary and title are each limited to 255 characters; this can be overridden by the application developer.
+
+
+
+When metadata is not specified, the title is limited to 255 characters and the summary to 40x the title length.
+
+
+##### submit-legacy-proposal
+
+The `submit-legacy-proposal` command allows users to submit a legacy governance proposal along with an initial deposit.
+
+```bash
+simd tx gov submit-legacy-proposal [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-legacy-proposal --title="Test Proposal" --description="testing" --type="Text" --deposit="100000000stake" --from cosmos1..
+```
+
+Example (`param-change`):
+
+```bash
+simd tx gov submit-legacy-proposal param-change proposal.json --from cosmos1..
+```
+
+```json expandable
+{
+  "title": "Test Proposal",
+  "description": "testing, testing, 1, 2, 3",
+  "changes": [
+    {
+      "subspace": "staking",
+      "key": "MaxValidators",
+      "value": 100
+    }
+  ],
+  "deposit": "10000000stake"
+}
+```
+
+##### cancel-proposal
+
+Once a proposal is canceled, `deposits * proposal_cancel_ratio` of the proposal's deposits is burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, that portion is burned instead. The remaining deposits are returned to the depositors.
+
+```bash
+simd tx gov cancel-proposal [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov cancel-proposal 1 --from cosmos1...
+```
+
+##### vote
+
+The `vote` command allows users to submit a vote for a given governance proposal.
+
+```bash
+simd tx gov vote [proposal-id] [option] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov vote 1 yes --from cosmos1..
+```
+
+##### weighted-vote
+
+The `weighted-vote` command allows users to submit a weighted vote for a given governance proposal.
+
+```bash
+simd tx gov weighted-vote [proposal-id] [weighted-options] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov weighted-vote 1 yes=0.5,no=0.5 --from cosmos1..
+```
+
+### gRPC
+
+A user can query the `gov` module using gRPC endpoints.
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query a given proposal.
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposalId": "1", + "content": {"@type":"/cosmos.gov.v1beta1.TextProposal","description":"testing, testing, 1, 2, 3","title":"Test Proposal"}, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2021-09-16T19:40:08.712440474Z", + "depositEndTime": "2021-09-18T19:40:08.712440474Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "votingStartTime": "2021-09-16T19:40:08.712440474Z", + "votingEndTime": "2021-09-18T19:40:08.712440474Z", + "title": "Test Proposal", + "summary": "testing, testing, 1, 2, 3" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "id": "1", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Test Proposal", + "summary": "testing, testing, 1, 2, 3" + } +} +``` + +#### Proposals + +The `Proposals` endpoint allows users to query all proposals with optional filters. 
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Proposals +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposalId": "1", + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": "2022-03-30T14:25:26.644857113Z" + }, + { + "proposalId": "2", + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2022-03-28T14:02:41.165025015Z", + "depositEndTime": "2022-03-30T14:02:41.165025015Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "votingStartTime": "0001-01-01T00:00:00Z", + "votingEndTime": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "total": "2" + } +} + +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Proposals +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.gov.v1.Query/Proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": 
"2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + }, + { + "id": "2", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T14:02:41.165025015Z", + "depositEndTime": "2022-03-30T14:02:41.165025015Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### Vote + +The `Vote` endpoint allows users to query a vote for a given proposal. + +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Vote +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Vote +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1000000000000000000" + } + ] + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Vote +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Vote +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +#### Votes + +The `Votes` endpoint allows users to query all votes for a given proposal. 
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Votes +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1000000000000000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Votes +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### Params + +The `Params` endpoint allows users to query all parameters for the `gov` module. + +{/* TODO: #10197 Querying governance params outputs nil values */} + +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"params_type":"voting"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Params +``` + +Example Output: + +```bash expandable +{ + "votingParams": { + "votingPeriod": "172800s" + }, + "depositParams": { + "maxDepositPeriod": "0s" + }, + "tallyParams": { + "quorum": "MA==", + "threshold": "MA==", + "vetoThreshold": "MA==" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"params_type":"voting"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Params +``` + +Example Output: + +```bash +{ + "votingParams": { + "votingPeriod": "172800s" + } +} +``` + +#### Deposit + +The `Deposit` endpoint allows users to query a deposit for a given proposal from a given depositor. 
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+  localhost:9090 \
+  cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+  localhost:9090 \
+  cosmos.gov.v1.Query/Deposit
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+#### Deposits
+
+The `Deposits` endpoint allows users to query all deposits for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"proposal_id":"1"}' \
+  localhost:9090 \
+  cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposits": [
+    {
+      "proposalId": "1",
+      "depositor": "cosmos1..",
+      "amount": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"proposal_id":"1"}' \
+  localhost:9090 \
+  cosmos.gov.v1.Query/Deposits
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposits": [
+    {
+      "proposalId": "1",
+      "depositor": "cosmos1..",
+      "amount": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+#### TallyResult
+
+The `TallyResult` endpoint allows users to query the tally of a given proposal.
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +### REST + +A user can query the `gov` module using REST endpoints. + +#### proposal + +The `proposals` endpoint allows users to query a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposal_id": "1", + "content": null, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "id": "1", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": 
"PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } +} +``` + +#### proposals + +The `proposals` endpoint also allows users to query all proposals with optional filters. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposal_id": "1", + "content": null, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z" + }, + { + "proposal_id": "2", + "content": null, + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T14:02:41.165025015Z", + "deposit_end_time": "2022-03-30T14:02:41.165025015Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "voting_start_time": "0001-01-01T00:00:00Z", + "voting_end_time": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals +``` + +Example: + +```bash +curl 
localhost:1317/cosmos/gov/v1/proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + }, + { + "id": "2", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T14:02:41.165025015Z", + "deposit_end_time": "2022-03-30T14:02:41.165025015Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "voting_start_time": null, + "voting_end_time": null, + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### voter vote + +The `votes` endpoint allows users to query a vote for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/votes/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ], + "metadata": "" + } +} +``` + +#### votes + +The `votes` endpoint allows users to query all votes for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ], + "metadata": "" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### params + +The `params` endpoint allows users to query all parameters for the `gov` module. 
+ +{/* TODO: #10197 Querying governance params outputs nil values */} + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/params/voting +``` + +Example Output: + +```bash expandable +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/params/voting +``` + +Example Output: + +```bash expandable +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +#### deposits + +The `deposits` endpoint allows users to query a deposit for a given proposal from a given depositor. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/deposits/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +#### proposal deposits + +The `deposits` endpoint allows users to query all deposits for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits +``` + +Example Output: + +```bash expandable +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/deposits +``` + +Example Output: + +```bash expandable +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### tally + +The `tally` endpoint allows users to query the tally of a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` + +## Metadata + +The gov module has two locations for metadata where users can provide further context about the on-chain actions they are taking. 
By default, each metadata field is limited to 255 characters; metadata can be stored in JSON format, either on-chain or off-chain, depending on the amount of data required. Here we provide a recommendation for the JSON structure and where the data should be stored. There are two important factors in making these recommendations. First, the gov and group modules should be consistent with one another; note that the number of proposals made by all groups may be quite large. Second, client applications such as block explorers and governance interfaces should be able to rely on a consistent metadata structure across chains.
+
+### Proposal
+
+Location: off-chain as JSON object stored on IPFS (mirrors [group proposal](docs/sdk/v0.50/documentation/module-system/modules/group/README#metadata))
+
+```json
+{
+  "title": "",
+  "authors": [""],
+  "summary": "",
+  "details": "",
+  "proposal_forum_url": "",
+  "vote_option_context": ""
+}
+```
+
+
+The `authors` field is an array of strings; this allows multiple authors to be listed in the metadata.
+In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility.
+
+
+### Vote
+
+Location: on-chain as JSON within the 255-character limit (mirrors [group vote](docs/sdk/v0.50/documentation/module-system/modules/group/README#metadata))
+
+```json
+{
+  "justification": ""
+}
+```
+
+## Future Improvements
+
+The current documentation only describes the minimum viable product for the
+governance module. Future improvements may include:
+
+* **`BountyProposals`:** If accepted, a `BountyProposal` creates an open
+  bounty. The `BountyProposal` specifies how many Atoms will be given upon
+  completion. These Atoms will be taken from the `reserve pool`. After a
+  `BountyProposal` is accepted by governance, anybody can submit a
+  `SoftwareUpgradeProposal` with the code to claim the bounty.
Note that once a
+  `BountyProposal` is accepted, the corresponding funds in the `reserve pool`
+  are locked so that payment can always be honored. In order to link a
+  `SoftwareUpgradeProposal` to an open bounty, the submitter of the
+  `SoftwareUpgradeProposal` will use the `Proposal.LinkedProposal` attribute.
+  If a `SoftwareUpgradeProposal` linked to an open bounty is accepted by
+  governance, the funds that were reserved are automatically transferred to the
+  submitter.
+* **Complex delegation:** Delegators could choose other representatives than
+  their validators. Ultimately, the chain of representatives would always end
+  up at a validator, but delegators could inherit the vote of their chosen
+  representative before they inherit the vote of their validator. In other
+  words, they would only inherit the vote of their validator if their other
+  appointed representative did not vote.
+* **Better process for proposal review:** There would be two parts to
+  `proposal.Deposit`, one for anti-spam (same as in MVP) and another one to
+  reward third party auditors.
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/group/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/group/README.mdx
new file mode 100644
index 00000000..4bfaf244
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/group/README.mdx
@@ -0,0 +1,9016 @@
+---
+title: '`x/group`'
+description: The following documents specify the group module.
+---
+
+## Abstract
+
+The following documents specify the group module.
+
+This module allows the creation and management of on-chain multisig accounts and enables voting for message execution based on configurable decision policies.
+ +## Contents + +* [Concepts](#concepts) + * [Group](#group) + * [Group Policy](#group-policy) + * [Decision Policy](#decision-policy) + * [Proposal](#proposal) + * [Pruning](#pruning) +* [State](#state) + * [Group Table](#group-table) + * [Group Member Table](#group-member-table) + * [Group Policy Table](#group-policy-table) + * [Proposal Table](#proposal-table) + * [Vote Table](#vote-table) +* [Msg Service](#msg-service) + * [Msg/CreateGroup](#msgcreategroup) + * [Msg/UpdateGroupMembers](#msgupdategroupmembers) + * [Msg/UpdateGroupAdmin](#msgupdategroupadmin) + * [Msg/UpdateGroupMetadata](#msgupdategroupmetadata) + * [Msg/CreateGroupPolicy](#msgcreategrouppolicy) + * [Msg/CreateGroupWithPolicy](#msgcreategroupwithpolicy) + * [Msg/UpdateGroupPolicyAdmin](#msgupdategrouppolicyadmin) + * [Msg/UpdateGroupPolicyDecisionPolicy](#msgupdategrouppolicydecisionpolicy) + * [Msg/UpdateGroupPolicyMetadata](#msgupdategrouppolicymetadata) + * [Msg/SubmitProposal](#msgsubmitproposal) + * [Msg/WithdrawProposal](#msgwithdrawproposal) + * [Msg/Vote](#msgvote) + * [Msg/Exec](#msgexec) + * [Msg/LeaveGroup](#msgleavegroup) +* [Events](#events) + * [EventCreateGroup](#eventcreategroup) + * [EventUpdateGroup](#eventupdategroup) + * [EventCreateGroupPolicy](#eventcreategrouppolicy) + * [EventUpdateGroupPolicy](#eventupdategrouppolicy) + * [EventCreateProposal](#eventcreateproposal) + * [EventWithdrawProposal](#eventwithdrawproposal) + * [EventVote](#eventvote) + * [EventExec](#eventexec) + * [EventLeaveGroup](#eventleavegroup) + * [EventProposalPruned](#eventproposalpruned) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) +* [Metadata](#metadata) + +## Concepts + +### Group + +A group is simply an aggregation of accounts with associated weights. It is not +an account and doesn't have a balance. It doesn't in and of itself have any +sort of voting or decision weight. 
It does have an "administrator" which has
+the ability to add, remove and update members in the group. Note that a
+group policy account could be an administrator of a group, and that the
+administrator doesn't necessarily have to be a member of the group.
+
+### Group Policy
+
+A group policy is an account associated with a group and a decision policy.
+Group policies are abstracted from groups because a single group may have
+multiple decision policies for different types of actions. Managing group
+membership separately from decision policies results in the least overhead
+and keeps membership consistent across different policies. The recommended
+pattern is to have a single master group policy for a given group,
+and then to create separate group policies with different decision policies
+and delegate the desired permissions from the master account to
+those "sub-accounts" using the `x/authz` module.
+
+### Decision Policy
+
+A decision policy is the mechanism by which members of a group can vote on
+proposals, as well as the rules that dictate whether a proposal should pass
+or not based on its tally outcome.
+
+All decision policies generally have a minimum execution period and a
+maximum voting window. The minimum execution period is the minimum amount of time
+that must pass after submission in order for a proposal to potentially be executed, and it may
+be set to 0. The maximum voting window is the maximum time after submission that a proposal may
+be voted on before it is tallied.
+
+The chain developer also defines an app-wide maximum execution period, which is
+the maximum amount of time after a proposal's voting period end where users are
+allowed to execute a proposal.
+
+The current group module comes shipped with two decision policies: threshold
+and percentage.
Any chain developer can extend upon these two, by creating +custom decision policies, as long as they adhere to the `DecisionPolicy` +interface: + +```go expandable +package group + +import ( + + "fmt" + "time" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/group/errors" + "github.com/cosmos/cosmos-sdk/x/group/internal/math" + "github.com/cosmos/cosmos-sdk/x/group/internal/orm" +) + +/ DecisionPolicyResult is the result of whether a proposal passes or not a +/ decision policy. +type DecisionPolicyResult struct { + / Allow determines if the proposal is allowed to pass. + Allow bool + / Final determines if the tally result is final or not. If final, then + / votes are pruned, and the tally result is saved in the proposal's + / `FinalTallyResult` field. + Final bool +} + +/ DecisionPolicy is the persistent set of rules to determine the result of election on a proposal. +type DecisionPolicy interface { + codec.ProtoMarshaler + + / GetVotingPeriod returns the duration after proposal submission where + / votes are accepted. + GetVotingPeriod() + +time.Duration + / GetMinExecutionPeriod returns the minimum duration after submission + / where we can execution a proposal. It can be set to 0 or to a value + / lesser than VotingPeriod to allow TRY_EXEC. + GetMinExecutionPeriod() + +time.Duration + / Allow defines policy-specific logic to allow a proposal to pass or not, + / based on its tally result, the group's total power and the time since + / the proposal was submitted. 
+ Allow(tallyResult TallyResult, totalPower string) (DecisionPolicyResult, error) + +ValidateBasic() + +error + Validate(g GroupInfo, config Config) + +error +} + +/ Implements DecisionPolicy Interface +var _ DecisionPolicy = &ThresholdDecisionPolicy{ +} + +/ NewThresholdDecisionPolicy creates a threshold DecisionPolicy +func NewThresholdDecisionPolicy(threshold string, votingPeriod time.Duration, minExecutionPeriod time.Duration) + +DecisionPolicy { + return &ThresholdDecisionPolicy{ + threshold, &DecisionPolicyWindows{ + votingPeriod, minExecutionPeriod +}} +} + +/ GetVotingPeriod returns the voitng period of ThresholdDecisionPolicy +func (p ThresholdDecisionPolicy) + +GetVotingPeriod() + +time.Duration { + return p.Windows.VotingPeriod +} + +/ GetMinExecutionPeriod returns the minimum execution period of ThresholdDecisionPolicy +func (p ThresholdDecisionPolicy) + +GetMinExecutionPeriod() + +time.Duration { + return p.Windows.MinExecutionPeriod +} + +/ ValidateBasic does basic validation on ThresholdDecisionPolicy +func (p ThresholdDecisionPolicy) + +ValidateBasic() + +error { + if _, err := math.NewPositiveDecFromString(p.Threshold); err != nil { + return sdkerrors.Wrap(err, "threshold") +} + if p.Windows == nil || p.Windows.VotingPeriod == 0 { + return sdkerrors.Wrap(errors.ErrInvalid, "voting period cannot be zero") +} + +return nil +} + +/ Allow allows a proposal to pass when the tally of yes votes equals or exceeds the threshold before the timeout. 
+func (p ThresholdDecisionPolicy) + +Allow(tallyResult TallyResult, totalPower string) (DecisionPolicyResult, error) { + threshold, err := math.NewPositiveDecFromString(p.Threshold) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "threshold") +} + +yesCount, err := math.NewNonNegativeDecFromString(tallyResult.YesCount) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "yes count") +} + +totalPowerDec, err := math.NewNonNegativeDecFromString(totalPower) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "total power") +} + + / the real threshold of the policy is `min(threshold,total_weight)`. If + / the group member weights changes (member leaving, member weight update) + / and the threshold doesn't, we can end up with threshold > total_weight. + / In this case, as long as everyone votes yes (in which case + / `yesCount`==`realThreshold`), then the proposal still passes. + realThreshold := min(threshold, totalPowerDec) + if yesCount.Cmp(realThreshold) >= 0 { + return DecisionPolicyResult{ + Allow: true, + Final: true +}, nil +} + +totalCounts, err := tallyResult.TotalCounts() + if err != nil { + return DecisionPolicyResult{ +}, err +} + +undecided, err := math.SubNonNegative(totalPowerDec, totalCounts) + if err != nil { + return DecisionPolicyResult{ +}, err +} + / maxYesCount is the max potential number of yes count, i.e the current yes count + / plus all undecided count (supposing they all vote yes). + maxYesCount, err := yesCount.Add(undecided) + if err != nil { + return DecisionPolicyResult{ +}, err +} + if maxYesCount.Cmp(realThreshold) < 0 { + return DecisionPolicyResult{ + Allow: false, + Final: true +}, nil +} + +return DecisionPolicyResult{ + Allow: false, + Final: false +}, nil +} + +func min(a, b math.Dec) + +math.Dec { + if a.Cmp(b) < 0 { + return a +} + +return b +} + +/ Validate validates the policy against the group. 
Note that the threshold +/ can actually be greater than the group's total weight: in the Allow method +/ we check the tally weight against `min(threshold,total_weight)`. +func (p *ThresholdDecisionPolicy) + +Validate(g GroupInfo, config Config) + +error { + _, err := math.NewPositiveDecFromString(p.Threshold) + if err != nil { + return sdkerrors.Wrap(err, "threshold") +} + _, err = math.NewNonNegativeDecFromString(g.TotalWeight) + if err != nil { + return sdkerrors.Wrap(err, "group total weight") +} + if p.Windows.MinExecutionPeriod > p.Windows.VotingPeriod+config.MaxExecutionPeriod { + return sdkerrors.Wrap(errors.ErrInvalid, "min_execution_period should be smaller than voting_period + max_execution_period") +} + +return nil +} + +/ Implements DecisionPolicy Interface +var _ DecisionPolicy = &PercentageDecisionPolicy{ +} + +/ NewPercentageDecisionPolicy creates a new percentage DecisionPolicy +func NewPercentageDecisionPolicy(percentage string, votingPeriod time.Duration, executionPeriod time.Duration) + +DecisionPolicy { + return &PercentageDecisionPolicy{ + percentage, &DecisionPolicyWindows{ + votingPeriod, executionPeriod +}} +} + +/ GetVotingPeriod returns the voitng period of PercentageDecisionPolicy +func (p PercentageDecisionPolicy) + +GetVotingPeriod() + +time.Duration { + return p.Windows.VotingPeriod +} + +/ GetMinExecutionPeriod returns the minimum execution period of PercentageDecisionPolicy +func (p PercentageDecisionPolicy) + +GetMinExecutionPeriod() + +time.Duration { + return p.Windows.MinExecutionPeriod +} + +/ ValidateBasic does basic validation on PercentageDecisionPolicy +func (p PercentageDecisionPolicy) + +ValidateBasic() + +error { + percentage, err := math.NewPositiveDecFromString(p.Percentage) + if err != nil { + return sdkerrors.Wrap(err, "percentage threshold") +} + if percentage.Cmp(math.NewDecFromInt64(1)) == 1 { + return sdkerrors.Wrap(errors.ErrInvalid, "percentage must be > 0 and <= 1") +} + if p.Windows == nil || 
p.Windows.VotingPeriod == 0 { + return sdkerrors.Wrap(errors.ErrInvalid, "voting period cannot be 0") +} + +return nil +} + +/ Validate validates the policy against the group. +func (p *PercentageDecisionPolicy) + +Validate(g GroupInfo, config Config) + +error { + if p.Windows.MinExecutionPeriod > p.Windows.VotingPeriod+config.MaxExecutionPeriod { + return sdkerrors.Wrap(errors.ErrInvalid, "min_execution_period should be smaller than voting_period + max_execution_period") +} + +return nil +} + +/ Allow allows a proposal to pass when the tally of yes votes equals or exceeds the percentage threshold before the timeout. +func (p PercentageDecisionPolicy) + +Allow(tally TallyResult, totalPower string) (DecisionPolicyResult, error) { + percentage, err := math.NewPositiveDecFromString(p.Percentage) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "percentage") +} + +yesCount, err := math.NewNonNegativeDecFromString(tally.YesCount) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "yes count") +} + +totalPowerDec, err := math.NewNonNegativeDecFromString(totalPower) + if err != nil { + return DecisionPolicyResult{ +}, sdkerrors.Wrap(err, "total power") +} + +yesPercentage, err := yesCount.Quo(totalPowerDec) + if err != nil { + return DecisionPolicyResult{ +}, err +} + if yesPercentage.Cmp(percentage) >= 0 { + return DecisionPolicyResult{ + Allow: true, + Final: true +}, nil +} + +totalCounts, err := tally.TotalCounts() + if err != nil { + return DecisionPolicyResult{ +}, err +} + +undecided, err := math.SubNonNegative(totalPowerDec, totalCounts) + if err != nil { + return DecisionPolicyResult{ +}, err +} + +sum, err := yesCount.Add(undecided) + if err != nil { + return DecisionPolicyResult{ +}, err +} + +sumPercentage, err := sum.Quo(totalPowerDec) + if err != nil { + return DecisionPolicyResult{ +}, err +} + if sumPercentage.Cmp(percentage) < 0 { + return DecisionPolicyResult{ + Allow: false, + Final: true +}, nil +} + 
+return DecisionPolicyResult{ + Allow: false, + Final: false +}, nil +} + +var _ orm.Validateable = GroupPolicyInfo{ +} + +/ NewGroupPolicyInfo creates a new GroupPolicyInfo instance +func NewGroupPolicyInfo(address sdk.AccAddress, group uint64, admin sdk.AccAddress, metadata string, + version uint64, decisionPolicy DecisionPolicy, createdAt time.Time, +) (GroupPolicyInfo, error) { + p := GroupPolicyInfo{ + Address: address.String(), + GroupId: group, + Admin: admin.String(), + Metadata: metadata, + Version: version, + CreatedAt: createdAt, +} + err := p.SetDecisionPolicy(decisionPolicy) + if err != nil { + return GroupPolicyInfo{ +}, err +} + +return p, nil +} + +/ SetDecisionPolicy sets the decision policy for GroupPolicyInfo. +func (g *GroupPolicyInfo) + +SetDecisionPolicy(decisionPolicy DecisionPolicy) + +error { + any, err := codectypes.NewAnyWithValue(decisionPolicy) + if err != nil { + return err +} + +g.DecisionPolicy = any + return nil +} + +/ GetDecisionPolicy gets the decision policy of GroupPolicyInfo +func (g GroupPolicyInfo) + +GetDecisionPolicy() (DecisionPolicy, error) { + decisionPolicy, ok := g.DecisionPolicy.GetCachedValue().(DecisionPolicy) + if !ok { + return nil, sdkerrors.ErrInvalidType.Wrapf("expected %T, got %T", (DecisionPolicy)(nil), g.DecisionPolicy.GetCachedValue()) +} + +return decisionPolicy, nil +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (g GroupPolicyInfo) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + var decisionPolicy DecisionPolicy + return unpacker.UnpackAny(g.DecisionPolicy, &decisionPolicy) +} + +func (g GroupInfo) + +PrimaryKeyFields() []interface{ +} { + return []interface{ +}{ + g.Id +} +} + +/ ValidateBasic does basic validation on group info. 
+func (g GroupInfo) + +ValidateBasic() + +error { + if g.Id == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "group's GroupId") +} + + _, err := sdk.AccAddressFromBech32(g.Admin) + if err != nil { + return sdkerrors.Wrap(err, "admin") +} + if _, err := math.NewNonNegativeDecFromString(g.TotalWeight); err != nil { + return sdkerrors.Wrap(err, "total weight") +} + if g.Version == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "version") +} + +return nil +} + +func (g GroupPolicyInfo) + +PrimaryKeyFields() []interface{ +} { + addr := sdk.MustAccAddressFromBech32(g.Address) + +return []interface{ +}{ + addr.Bytes() +} +} + +func (g Proposal) + +PrimaryKeyFields() []interface{ +} { + return []interface{ +}{ + g.Id +} +} + +/ ValidateBasic does basic validation on group policy info. +func (g GroupPolicyInfo) + +ValidateBasic() + +error { + _, err := sdk.AccAddressFromBech32(g.Admin) + if err != nil { + return sdkerrors.Wrap(err, "group policy admin") +} + _, err = sdk.AccAddressFromBech32(g.Address) + if err != nil { + return sdkerrors.Wrap(err, "group policy account address") +} + if g.GroupId == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "group policy's group id") +} + if g.Version == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "group policy version") +} + +policy, err := g.GetDecisionPolicy() + if err != nil { + return sdkerrors.Wrap(err, "group policy decision policy") +} + if err := policy.ValidateBasic(); err != nil { + return sdkerrors.Wrap(err, "group policy's decision policy") +} + +return nil +} + +func (g GroupMember) + +PrimaryKeyFields() []interface{ +} { + addr := sdk.MustAccAddressFromBech32(g.Member.Address) + +return []interface{ +}{ + g.GroupId, addr.Bytes() +} +} + +/ ValidateBasic does basic validation on group member. 
+func (g GroupMember) + +ValidateBasic() + +error { + if g.GroupId == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "group member's group id") +} + err := MemberToMemberRequest(g.Member).ValidateBasic() + if err != nil { + return sdkerrors.Wrap(err, "group member") +} + +return nil +} + +/ MemberToMemberRequest converts a `Member` (used for storage) +/ to a `MemberRequest` (used in requests). The only difference +/ between the two is that `MemberRequest` doesn't have any `AddedAt` field +/ since it cannot be set as part of requests. +func MemberToMemberRequest(m *Member) + +MemberRequest { + return MemberRequest{ + Address: m.Address, + Weight: m.Weight, + Metadata: m.Metadata, +} +} + +/ ValidateBasic does basic validation on proposal. +func (g Proposal) + +ValidateBasic() + +error { + if g.Id == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "proposal id") +} + _, err := sdk.AccAddressFromBech32(g.GroupPolicyAddress) + if err != nil { + return sdkerrors.Wrap(err, "proposal group policy address") +} + if g.GroupVersion == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "proposal group version") +} + if g.GroupPolicyVersion == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "proposal group policy version") +} + _, err = g.FinalTallyResult.GetYesCount() + if err != nil { + return sdkerrors.Wrap(err, "proposal FinalTallyResult yes count") +} + _, err = g.FinalTallyResult.GetNoCount() + if err != nil { + return sdkerrors.Wrap(err, "proposal FinalTallyResult no count") +} + _, err = g.FinalTallyResult.GetAbstainCount() + if err != nil { + return sdkerrors.Wrap(err, "proposal FinalTallyResult abstain count") +} + _, err = g.FinalTallyResult.GetNoWithVetoCount() + if err != nil { + return sdkerrors.Wrap(err, "proposal FinalTallyResult veto count") +} + +return nil +} + +func (v Vote) + +PrimaryKeyFields() []interface{ +} { + addr := sdk.MustAccAddressFromBech32(v.Voter) + +return []interface{ +}{ + v.ProposalId, addr.Bytes() +} +} + +var _ orm.Validateable = Vote{ +} + +/ 
ValidateBasic does basic validation on vote. +func (v Vote) + +ValidateBasic() + +error { + _, err := sdk.AccAddressFromBech32(v.Voter) + if err != nil { + return sdkerrors.Wrap(err, "voter") +} + if v.ProposalId == 0 { + return sdkerrors.Wrap(errors.ErrEmpty, "voter ProposalId") +} + if v.Option == VOTE_OPTION_UNSPECIFIED { + return sdkerrors.Wrap(errors.ErrEmpty, "voter vote option") +} + if _, ok := VoteOption_name[int32(v.Option)]; !ok { + return sdkerrors.Wrap(errors.ErrInvalid, "vote option") +} + +return nil +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (q QueryGroupPoliciesByGroupResponse) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + return unpackGroupPolicies(unpacker, q.GroupPolicies) +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (q QueryGroupPoliciesByAdminResponse) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + return unpackGroupPolicies(unpacker, q.GroupPolicies) +} + +func unpackGroupPolicies(unpacker codectypes.AnyUnpacker, accs []*GroupPolicyInfo) + +error { + for _, g := range accs { + err := g.UnpackInterfaces(unpacker) + if err != nil { + return err +} + +} + +return nil +} + +type operation func(x, y math.Dec) (math.Dec, error) + +func (t *TallyResult) + +operation(vote Vote, weight string, op operation) + +error { + weightDec, err := math.NewPositiveDecFromString(weight) + if err != nil { + return err +} + +yesCount, err := t.GetYesCount() + if err != nil { + return sdkerrors.Wrap(err, "yes count") +} + +noCount, err := t.GetNoCount() + if err != nil { + return sdkerrors.Wrap(err, "no count") +} + +abstainCount, err := t.GetAbstainCount() + if err != nil { + return sdkerrors.Wrap(err, "abstain count") +} + +vetoCount, err := t.GetNoWithVetoCount() + if err != nil { + return sdkerrors.Wrap(err, "veto count") +} + switch vote.Option { + case VOTE_OPTION_YES: + yesCount, err := op(yesCount, weightDec) + if err != nil { + return 
sdkerrors.Wrap(err, "yes count") +} + +t.YesCount = yesCount.String() + case VOTE_OPTION_NO: + noCount, err := op(noCount, weightDec) + if err != nil { + return sdkerrors.Wrap(err, "no count") +} + +t.NoCount = noCount.String() + case VOTE_OPTION_ABSTAIN: + abstainCount, err := op(abstainCount, weightDec) + if err != nil { + return sdkerrors.Wrap(err, "abstain count") +} + +t.AbstainCount = abstainCount.String() + case VOTE_OPTION_NO_WITH_VETO: + vetoCount, err := op(vetoCount, weightDec) + if err != nil { + return sdkerrors.Wrap(err, "veto count") +} + +t.NoWithVetoCount = vetoCount.String() + +default: + return sdkerrors.Wrapf(errors.ErrInvalid, "unknown vote option %s", vote.Option.String()) +} + +return nil +} + +/ GetYesCount returns the number of yes counts from tally result. +func (t TallyResult) + +GetYesCount() (math.Dec, error) { + yesCount, err := math.NewNonNegativeDecFromString(t.YesCount) + if err != nil { + return math.Dec{ +}, err +} + +return yesCount, nil +} + +/ GetNoCount returns the number of no counts from tally result. +func (t TallyResult) + +GetNoCount() (math.Dec, error) { + noCount, err := math.NewNonNegativeDecFromString(t.NoCount) + if err != nil { + return math.Dec{ +}, err +} + +return noCount, nil +} + +/ GetAbstainCount returns the number of abstain counts from tally result. +func (t TallyResult) + +GetAbstainCount() (math.Dec, error) { + abstainCount, err := math.NewNonNegativeDecFromString(t.AbstainCount) + if err != nil { + return math.Dec{ +}, err +} + +return abstainCount, nil +} + +/ GetNoWithVetoCount returns the number of no with veto counts from tally result. 
+func (t TallyResult) + +GetNoWithVetoCount() (math.Dec, error) { + vetoCount, err := math.NewNonNegativeDecFromString(t.NoWithVetoCount) + if err != nil { + return math.Dec{ +}, err +} + +return vetoCount, nil +} + +func (t *TallyResult) + +Add(vote Vote, weight string) + +error { + if err := t.operation(vote, weight, math.Add); err != nil { + return err +} + +return nil +} + +/ TotalCounts is the sum of all weights. +func (t TallyResult) + +TotalCounts() (math.Dec, error) { + yesCount, err := t.GetYesCount() + if err != nil { + return math.Dec{ +}, sdkerrors.Wrap(err, "yes count") +} + +noCount, err := t.GetNoCount() + if err != nil { + return math.Dec{ +}, sdkerrors.Wrap(err, "no count") +} + +abstainCount, err := t.GetAbstainCount() + if err != nil { + return math.Dec{ +}, sdkerrors.Wrap(err, "abstain count") +} + +vetoCount, err := t.GetNoWithVetoCount() + if err != nil { + return math.Dec{ +}, sdkerrors.Wrap(err, "veto count") +} + totalCounts := math.NewDecFromInt64(0) + +totalCounts, err = totalCounts.Add(yesCount) + if err != nil { + return math.Dec{ +}, err +} + +totalCounts, err = totalCounts.Add(noCount) + if err != nil { + return math.Dec{ +}, err +} + +totalCounts, err = totalCounts.Add(abstainCount) + if err != nil { + return math.Dec{ +}, err +} + +totalCounts, err = totalCounts.Add(vetoCount) + if err != nil { + return math.Dec{ +}, err +} + +return totalCounts, nil +} + +/ DefaultTallyResult returns a TallyResult with all counts set to 0. +func DefaultTallyResult() + +TallyResult { + return TallyResult{ + YesCount: "0", + NoCount: "0", + NoWithVetoCount: "0", + AbstainCount: "0", +} +} + +/ VoteOptionFromString returns a VoteOption from a string. It returns an error +/ if the string is invalid. 
+func VoteOptionFromString(str string) (VoteOption, error) {
+	vo, ok := VoteOption_value[str]
+	if !ok {
+		return VOTE_OPTION_UNSPECIFIED, fmt.Errorf("'%s' is not a valid vote option", str)
+	}
+
+	return VoteOption(vo), nil
+}
+```
+
+#### Threshold decision policy
+
+A threshold decision policy defines a threshold of yes votes (based on a tally
+of voter weights) that must be achieved in order for a proposal to pass. For
+this decision policy, abstain and veto are simply treated as no's.
+
+This decision policy also has a VotingPeriod window and a MinExecutionPeriod
+window. The former defines the duration after proposal submission during which members
+are allowed to vote, after which tallying is performed. The latter specifies
+the minimum duration after proposal submission before the proposal can be
+executed. If set to 0, then the proposal is allowed to be executed immediately
+on submission (using the `TRY_EXEC` option). Note that MinExecutionPeriod
+cannot be greater than VotingPeriod+MaxExecutionPeriod (where MaxExecutionPeriod is
+the app-defined duration that specifies the window after voting ends during which a
+proposal can be executed).
+
+#### Percentage decision policy
+
+A percentage decision policy is similar to a threshold decision policy, except
+that the threshold is not defined as a constant weight, but as a percentage.
+It's better suited for groups where the group members' weights can be updated, as
+the percentage threshold stays the same and doesn't depend on how those member
+weights are updated.
+
+Like the threshold decision policy, the percentage decision policy has the
+VotingPeriod and MinExecutionPeriod parameters.
+
+### Proposal
+
+Any member(s) of a group can submit a proposal for a group policy account to decide upon.
+A proposal consists of a set of messages that will be executed if the proposal
+passes, as well as any metadata associated with the proposal.
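To make the threshold/percentage contrast above concrete, here is a minimal sketch — not SDK code: it assumes plain integer weights and a float percentage instead of the SDK's arbitrary-precision `math.Dec` arithmetic — of how the two shipped policies decide whether a tally passes:

```go
package main

import "fmt"

// passThreshold mirrors the threshold policy's rule: the yes weight must
// reach min(threshold, totalPower), an absolute amount of voting weight.
func passThreshold(yes, totalPower, threshold int64) bool {
	if totalPower < threshold {
		threshold = totalPower // real threshold is min(threshold, totalPower)
	}
	return yes >= threshold
}

// passPercentage mirrors the percentage policy's rule: the required yes
// weight scales with the group's total power.
func passPercentage(yes, totalPower int64, pct float64) bool {
	return float64(yes)/float64(totalPower) >= pct
}

func main() {
	// 6 yes weight out of 10 total: both policies pass.
	fmt.Println(passThreshold(6, 10, 5), passPercentage(6, 10, 0.5)) // true true
	// The group grows to 20 total weight. The absolute threshold still
	// passes, but the percentage policy now requires 10 yes weight.
	fmt.Println(passThreshold(6, 20, 5), passPercentage(6, 20, 0.5)) // true false
}
```

This is why the percentage policy is recommended for groups whose member weights change over time: the bar moves with the group, while a fixed threshold does not.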
+
+#### Voting
+
+There are four voting choices: yes, no, abstain, and veto. Not
+all decision policies will take the four choices into account. Votes can contain some optional metadata.
+In the current implementation, the voting window begins as soon as a proposal
+is submitted, and the end is defined by the group policy's decision policy.
+
+#### Withdrawing Proposals
+
+Proposals can be withdrawn any time before the voting period end, either by the
+admin of the group policy or by one of the proposers. Once withdrawn, a proposal is
+marked as `PROPOSAL_STATUS_WITHDRAWN`, and no more voting or execution is
+allowed on it.
+
+#### Aborted Proposals
+
+If the group policy is updated during the voting period of the proposal, then
+the proposal is marked as `PROPOSAL_STATUS_ABORTED`, and no more voting or
+execution is allowed on it. This is because the group policy defines the rules
+of proposal voting and execution, so if those rules change during the lifecycle
+of a proposal, then the proposal should be marked as stale.
+
+#### Tallying
+
+Tallying is the counting of all votes on a proposal. It happens only once in
+the lifecycle of a proposal, but can be triggered by two factors, whichever
+happens first:
+
+* either someone tries to execute the proposal (see next section), which can
+  happen on a `Msg/Exec` transaction, or a `Msg/{SubmitProposal,Vote}`
+  transaction with the `Exec` field set. When a proposal execution is attempted,
+  a tally is done first to make sure the proposal passes.
+* or on `EndBlock` when the proposal's voting period end just passed.
+
+If the tally result passes the decision policy's rules, then the proposal is
+marked as `PROPOSAL_STATUS_ACCEPTED`, or else it is marked as
+`PROPOSAL_STATUS_REJECTED`. In any case, no further voting is allowed, and the tally
+result is persisted to state in the proposal's `FinalTallyResult`.
+
+#### Executing Proposals
+
+Proposals are executed only when the tallying is done, and the group account's
+decision policy allows the proposal to pass based on the tally outcome. They
+are marked by the status `PROPOSAL_STATUS_ACCEPTED`. Execution must happen
+before a duration of `MaxExecutionPeriod` (set by the chain developer) after
+each proposal's voting period end.
+
+Proposals will not be automatically executed by the chain in this current design,
+but rather a user must submit a `Msg/Exec` transaction to attempt to execute the
+proposal based on the current votes and decision policy. Any user (not only the
+group members) can execute proposals that have been accepted, and execution fees are
+paid by the proposal executor.
+It's also possible to try to execute a proposal immediately on creation or on
+new votes using the `Exec` field of `Msg/SubmitProposal` and `Msg/Vote` requests.
+In the former case, proposers' signatures are considered as yes votes.
+In these cases, if the proposal can't be executed (i.e. it didn't pass the
+decision policy's rules), it will remain open for new votes and
+could be tallied and executed later on.
+
+A successful proposal execution will have its `ExecutorResult` marked as
+`PROPOSAL_EXECUTOR_RESULT_SUCCESS`. The proposal will be automatically pruned
+after execution. On the other hand, a failed proposal execution will be marked
+as `PROPOSAL_EXECUTOR_RESULT_FAILURE`. Such a proposal can be re-executed
+multiple times, until it expires `MaxExecutionPeriod` after the voting period
+end.
+
+### Pruning
+
+Proposals and votes are automatically pruned to avoid state bloat.
+
+Votes are pruned:
+
+* either after a successful tally, i.e. a tally whose result passes the decision
+  policy's rules, which can be triggered by a `Msg/Exec` or a
+  `Msg/{SubmitProposal,Vote}` with the `Exec` field set,
+* or on `EndBlock` right after the proposal's voting period end.
This applies to proposals with status `aborted` or `withdrawn` too,
+
+whichever happens first.
+
+Proposals are pruned:
+
+* on `EndBlock`, if the proposal's status is `withdrawn` or `aborted` at the end of its voting period, before tallying,
+* and either after a successful proposal execution,
+* or on `EndBlock` right after the proposal's `voting_period_end`
+  + `max_execution_period` (defined as an app-wide configuration) is passed,
+
+whichever happens first.
+
+## State
+
+The `group` module uses the `orm` package which provides table storage with support for
+primary keys and secondary indexes. `orm` also defines `Sequence`, a persistent unique key generator based on a counter that can be used along with `Table`s.
+
+Here's the list of tables and associated sequences and indexes stored as part of the `group` module.
+
+### Group Table
+
+The `groupTable` stores `GroupInfo`: `0x0 | BigEndian(GroupId) -> ProtocolBuffer(GroupInfo)`.
+
+#### groupSeq
+
+The value of `groupSeq` is incremented when creating a new group and corresponds to the new `GroupId`: `0x1 | 0x1 -> BigEndian`.
+
+The second `0x1` corresponds to the ORM `sequenceStorageKey`.
+
+#### groupByAdminIndex
+
+`groupByAdminIndex` allows to retrieve groups by admin address:
+`0x2 | len([]byte(group.Admin)) | []byte(group.Admin) | BigEndian(GroupId) -> []byte()`.
+
+### Group Member Table
+
+The `groupMemberTable` stores `GroupMember`s: `0x10 | BigEndian(GroupId) | []byte(member.Address) -> ProtocolBuffer(GroupMember)`.
+
+The `groupMemberTable` is a primary key table and its `PrimaryKey` is given by
+`BigEndian(GroupId) | []byte(member.Address)` which is used by the following indexes.
+
+#### groupMemberByGroupIndex
+
+`groupMemberByGroupIndex` allows to retrieve group members by group id:
+`0x11 | BigEndian(GroupId) | PrimaryKey -> []byte()`.
+ +#### groupMemberByMemberIndex + +`groupMemberByMemberIndex` allows retrieving group members by member address: +`0x12 | len([]byte(member.Address)) | []byte(member.Address) | PrimaryKey -> []byte()`. + +### Group Policy Table + +The `groupPolicyTable` stores `GroupPolicyInfo`: `0x20 | len([]byte(Address)) | []byte(Address) -> ProtocolBuffer(GroupPolicyInfo)`. + +The `groupPolicyTable` is a primary key table and its `PrimaryKey` is given by +`len([]byte(Address)) | []byte(Address)`, which is used by the following indexes. + +#### groupPolicySeq + +The value of `groupPolicySeq` is incremented when creating a new group policy and is used to generate the new group policy account `Address`: +`0x21 | 0x1 -> BigEndian`. + +The second `0x1` corresponds to the ORM `sequenceStorageKey`. + +#### groupPolicyByGroupIndex + +`groupPolicyByGroupIndex` allows retrieving group policies by group id: +`0x22 | BigEndian(GroupId) | PrimaryKey -> []byte()`. + +#### groupPolicyByAdminIndex + +`groupPolicyByAdminIndex` allows retrieving group policies by admin address: +`0x23 | len([]byte(Address)) | []byte(Address) | PrimaryKey -> []byte()`. + +### Proposal Table + +The `proposalTable` stores `Proposal`s: `0x30 | BigEndian(ProposalId) -> ProtocolBuffer(Proposal)`. + +#### proposalSeq + +The value of `proposalSeq` is incremented when creating a new proposal and corresponds to the new `ProposalId`: `0x31 | 0x1 -> BigEndian`. + +The second `0x1` corresponds to the ORM `sequenceStorageKey`. + +#### proposalByGroupPolicyIndex + +`proposalByGroupPolicyIndex` allows retrieving proposals by group policy account address: +`0x32 | len([]byte(account.Address)) | []byte(account.Address) | BigEndian(ProposalId) -> []byte()`. + +#### proposalsByVotingPeriodEndIndex + +`proposalsByVotingPeriodEndIndex` allows retrieving proposals sorted chronologically by `voting_period_end`: +`0x33 | sdk.FormatTimeBytes(proposal.VotingPeriodEnd) | BigEndian(ProposalId) -> []byte()`. 
+ +This index is used when tallying the proposal votes at the end of the voting period, and for pruning proposals at `VotingPeriodEnd + MaxExecutionPeriod`. + +### Vote Table + +The `voteTable` stores `Vote`s: `0x40 | BigEndian(ProposalId) | []byte(voter.Address) -> ProtocolBuffer(Vote)`. + +The `voteTable` is a primary key table and its `PrimaryKey` is given by +`BigEndian(ProposalId) | []byte(voter.Address)`, which is used by the following indexes. + +#### voteByProposalIndex + +`voteByProposalIndex` allows retrieving votes by proposal id: +`0x41 | BigEndian(ProposalId) | PrimaryKey -> []byte()`. + +#### voteByVoterIndex + +`voteByVoterIndex` allows retrieving votes by voter address: +`0x42 | len([]byte(voter.Address)) | []byte(voter.Address) | PrimaryKey -> []byte()`. + +## Msg Service + +### Msg/CreateGroup + +A new group can be created with `Msg/CreateGroup`, which takes an admin address, a list of members, and some optional metadata. + +The metadata has a maximum length chosen by the app developer and +passed into the group keeper as a config. + +```go expandable +/ Since: cosmos-sdk 0.46 +syntax = "proto3"; + +package cosmos.group.v1; + +option go_package = "github.com/cosmos/cosmos-sdk/x/group"; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; +import "google/protobuf/any.proto"; +import "cosmos/group/v1/types.proto"; +import "cosmos/msg/v1/msg.proto"; +import "amino/amino.proto"; + +/ Msg is the cosmos.group.v1 Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata. + rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. 
+ rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. 
+message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. 
+ string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. 
+message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. 
+message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. 
+message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. 
+message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. +message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. +message MsgExecResponse { + / result is the final result of the proposal execution. + ProposalExecutorResult result = 2; +} + +/ MsgLeaveGroup is the Msg/LeaveGroup request type. +message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + / address is the account address of the group member. 
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; +} + +/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type. +message MsgLeaveGroupResponse { +} +``` + +It's expected to fail if + +* metadata length is greater than `MaxMetadataLen` config +* members are not correctly set (e.g. wrong address format, duplicates, or with 0 weight). + +### Msg/UpdateGroupMembers + +Group members can be updated with the `UpdateGroupMembers`. + +```go expandable +/ Since: cosmos-sdk 0.46 +syntax = "proto3"; + +package cosmos.group.v1; + +option go_package = "github.com/cosmos/cosmos-sdk/x/group"; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; +import "google/protobuf/any.proto"; +import "cosmos/group/v1/types.proto"; +import "cosmos/msg/v1/msg.proto"; +import "amino/amino.proto"; + +/ Msg is the cosmos.group.v1 Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata. + rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. + rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. 
+ rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. 
+ uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. 
+message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. 
+message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. 
+ VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. +message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. +message MsgExecResponse { + / result is the final result of the proposal execution. + ProposalExecutorResult result = 2; +} + +/ MsgLeaveGroup is the Msg/LeaveGroup request type. +message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + / address is the account address of the group member. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; +} + +/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type. +message MsgLeaveGroupResponse { +} +``` + +In the list of `MemberUpdates`, an existing member can be removed by setting its weight to 0. + +It's expected to fail if: + +* the signer is not the admin of the group. +* for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group. + +### Msg/UpdateGroupAdmin + +The `UpdateGroupAdmin` can be used to update a group admin. 
```protobuf
// MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type.
message MsgUpdateGroupAdmin {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin";

  // admin is the current account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // new_admin is the group's new admin account address.
  string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type.
message MsgUpdateGroupAdminResponse {
}
```

It's expected to fail if the signer is not the admin of the group.

### Msg/UpdateGroupMetadata

The `UpdateGroupMetadata` message can be used to update the group metadata.

```protobuf
// MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type.
message MsgUpdateGroupMetadata {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // metadata is the updated group's metadata.
  string metadata = 3;
}

// MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type.
message MsgUpdateGroupMetadataResponse {
}
```

It's expected to fail if:

* the new metadata length is greater than the `MaxMetadataLen` config.
* the signer is not the admin of the group.

### Msg/CreateGroupPolicy

A new group policy can be created with the `MsgCreateGroupPolicy`, which has an admin address, a group id, a decision policy and some optional metadata.

```protobuf expandable
// Since: cosmos-sdk 0.46
syntax = "proto3";

package cosmos.group.v1;

option go_package = "github.com/cosmos/cosmos-sdk/x/group";

import "gogoproto/gogo.proto";
import "cosmos_proto/cosmos.proto";
import "google/protobuf/any.proto";
import "cosmos/group/v1/types.proto";
import "cosmos/msg/v1/msg.proto";
import "amino/amino.proto";

// Msg is the cosmos.group.v1 Msg service.
service Msg {
  option (cosmos.msg.v1.service) = true;

  // CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
  rpc CreateGroup(MsgCreateGroup) returns (MsgCreateGroupResponse);

  // UpdateGroupMembers updates the group members with given group id and admin address.
  rpc UpdateGroupMembers(MsgUpdateGroupMembers) returns (MsgUpdateGroupMembersResponse);

  // UpdateGroupAdmin updates the group admin with given group id and previous admin address.
  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) returns (MsgUpdateGroupAdminResponse);

  // UpdateGroupMetadata updates the group metadata with given group id and admin address.
  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) returns (MsgUpdateGroupMetadataResponse);

  // CreateGroupPolicy creates a new group policy using given DecisionPolicy.
  rpc CreateGroupPolicy(MsgCreateGroupPolicy) returns (MsgCreateGroupPolicyResponse);

  // CreateGroupWithPolicy creates a new group with policy.
  rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) returns (MsgCreateGroupWithPolicyResponse);

  // UpdateGroupPolicyAdmin updates a group policy admin.
  rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) returns (MsgUpdateGroupPolicyAdminResponse);

  // UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated.
  rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) returns (MsgUpdateGroupPolicyDecisionPolicyResponse);

  // UpdateGroupPolicyMetadata updates a group policy metadata.
  rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) returns (MsgUpdateGroupPolicyMetadataResponse);

  // SubmitProposal submits a new proposal.
  rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);

  // WithdrawProposal withdraws a proposal.
  rpc WithdrawProposal(MsgWithdrawProposal) returns (MsgWithdrawProposalResponse);

  // Vote allows a voter to vote on a proposal.
  rpc Vote(MsgVote) returns (MsgVoteResponse);

  // Exec executes a proposal.
  rpc Exec(MsgExec) returns (MsgExecResponse);

  // LeaveGroup allows a group member to leave the group.
  rpc LeaveGroup(MsgLeaveGroup) returns (MsgLeaveGroupResponse);
}

//
// Groups
//

// MsgCreateGroup is the Msg/CreateGroup request type.
message MsgCreateGroup {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgCreateGroup";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // members defines the group members.
  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];

  // metadata is any arbitrary metadata attached to the group.
  string metadata = 3;
}

// MsgCreateGroupResponse is the Msg/CreateGroup response type.
message MsgCreateGroupResponse {
  // group_id is the unique ID of the newly created group.
  uint64 group_id = 1;
}

// MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type.
message MsgUpdateGroupMembers {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // member_updates is the list of members to update,
  // set weight to 0 to remove a member.
  repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}

// MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type.
message MsgUpdateGroupMembersResponse {
}

// MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type.
message MsgUpdateGroupAdmin {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin";

  // admin is the current account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // new_admin is the group's new admin account address.
  string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type.
message MsgUpdateGroupAdminResponse {
}

// MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type.
message MsgUpdateGroupMetadata {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // metadata is the updated group's metadata.
  string metadata = 3;
}

// MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type.
message MsgUpdateGroupMetadataResponse {
}

//
// Group Policies
//

// MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type.
message MsgCreateGroupPolicy {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy";

  option (gogoproto.goproto_getters) = false;

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_id is the unique ID of the group.
  uint64 group_id = 2;

  // metadata is any arbitrary metadata attached to the group policy.
  string metadata = 3;

  // decision_policy specifies the group policy's decision policy.
  google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
}

// MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type.
message MsgCreateGroupPolicyResponse {
  // address is the account address of the newly created group policy.
  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type.
message MsgUpdateGroupPolicyAdmin {
  option (cosmos.msg.v1.signer) = "admin";
  option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin";

  // admin is the account address of the group admin.
  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // group_policy_address is the account address of the group policy.
  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];

  // new_admin is the new group policy admin.
  string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type.
+message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. 
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. 
+message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. 
+message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. +message MsgExec { + option (cosmos.msg.v1.signer) = "signer"; + option (amino.name) = "cosmos-sdk/group/MsgExec"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / executor is the account address used to execute the proposal. + string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgExecResponse is the Msg/Exec request type. +message MsgExecResponse { + / result is the final result of the proposal execution. + ProposalExecutorResult result = 2; +} + +/ MsgLeaveGroup is the Msg/LeaveGroup request type. +message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + / address is the account address of the group member. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; +} + +/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type. +message MsgLeaveGroupResponse { +} +``` + +It's expected to fail if: + +* the signer is not the admin of the group. +* metadata length is greater than `MaxMetadataLen` config. +* the decision policy's `Validate()` method doesn't pass against the group. 
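+
+For a concrete sense of what goes into the `decision_policy` field, here is a sketch of a `ThresholdDecisionPolicy` encoded as a JSON `Any`, as it would appear in a decision-policy file passed to the group CLI (the threshold and window values below are illustrative assumptions, not defaults):
+
+```json
+{
+  "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+  "threshold": "2",
+  "windows": {
+    "voting_period": "86400s",
+    "min_execution_period": "0s"
+  }
+}
+```
+
+Under such a policy, a proposal passes once the sum of yes-vote weights reaches the threshold, and it can only be executed after `min_execution_period` has elapsed within the one-day voting window.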
+
+### Msg/CreateGroupWithPolicy
+
+A new group with a group policy can be created with `MsgCreateGroupWithPolicy`, which has an admin address, a list of members, a decision policy, a `group_policy_as_admin` field (when set to true, the group policy address becomes the admin of both the group and the group policy), and some optional metadata for the group and the group policy.
+
+```protobuf expandable
+// MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type.
+message MsgCreateGroupWithPolicy {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy";
+  option (gogoproto.goproto_getters) = false;
+
+  // admin is the account address of the group and group policy admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // members defines the group members.
+  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // group_metadata is any arbitrary metadata attached to the group.
+  string group_metadata = 3;
+
+  // group_policy_metadata is any arbitrary metadata attached to the group policy.
+  string group_policy_metadata = 4;
+
+  // group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group
+  // and group policy admin.
+  bool group_policy_as_admin = 5;
+
+  // decision_policy specifies the group policy's decision policy.
+  google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
+}
+
+// MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type.
+message MsgCreateGroupWithPolicyResponse {
+  // group_id is the unique ID of the newly created group with policy.
+  uint64 group_id = 1;
+
+  // group_policy_address is the account address of the newly created group policy.
+  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+It's expected to fail for the same reasons as `Msg/CreateGroup` and `Msg/CreateGroupPolicy`.
+
+### Msg/UpdateGroupPolicyAdmin
+
+`MsgUpdateGroupPolicyAdmin` can be used to update the admin of a group policy.
+
+```protobuf expandable
+// Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+// Msg is the cosmos.group.v1 Msg service.
+service Msg {
+  option (cosmos.msg.v1.service) = true;
+
+  // CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+  rpc CreateGroup(MsgCreateGroup) returns (MsgCreateGroupResponse);
+
+  // UpdateGroupMembers updates the group members with given group id and admin address.
+  rpc UpdateGroupMembers(MsgUpdateGroupMembers) returns (MsgUpdateGroupMembersResponse);
+
+  // UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) returns (MsgUpdateGroupAdminResponse);
+
+  // UpdateGroupMetadata updates the group metadata with given group id and admin address.
+  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) returns (MsgUpdateGroupMetadataResponse);
+
+  // CreateGroupPolicy creates a new group policy using given DecisionPolicy.
+  rpc CreateGroupPolicy(MsgCreateGroupPolicy) returns (MsgCreateGroupPolicyResponse);
+
+  // CreateGroupWithPolicy creates a new group with policy.
+  rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) returns (MsgCreateGroupWithPolicyResponse);
+
+  // UpdateGroupPolicyAdmin updates a group policy admin.
+  rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) returns (MsgUpdateGroupPolicyAdminResponse);
+
+  // UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated.
+  rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) returns (MsgUpdateGroupPolicyDecisionPolicyResponse);
+
+  // UpdateGroupPolicyMetadata updates a group policy metadata.
+  rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) returns (MsgUpdateGroupPolicyMetadataResponse);
+
+  // SubmitProposal submits a new proposal.
+  rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);
+
+  // WithdrawProposal withdraws a proposal.
+  rpc WithdrawProposal(MsgWithdrawProposal) returns (MsgWithdrawProposalResponse);
+
+  // Vote allows a voter to vote on a proposal.
+  rpc Vote(MsgVote) returns (MsgVoteResponse);
+
+  // Exec executes a proposal.
+  rpc Exec(MsgExec) returns (MsgExecResponse);
+
+  // LeaveGroup allows a group member to leave the group.
+  rpc LeaveGroup(MsgLeaveGroup) returns (MsgLeaveGroupResponse);
+}
+
+//
+// Group Policies
+//
+
+// MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type.
+message MsgUpdateGroupPolicyAdmin {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin";
+
+  // admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // group_policy_address is the account address of the group policy.
+  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // new_admin is the new group policy admin.
+  string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+// MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type.
+message MsgUpdateGroupPolicyAdminResponse {
+}
+
+// MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type.
+message MsgUpdateGroupPolicyMetadata {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata";
+
+  // admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // group_policy_address is the account address of group policy.
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. 
+  /
+  / Since: cosmos-sdk 0.47
+  string summary = 7;
+}
+
+/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type.
+message MsgSubmitProposalResponse {
+  / proposal_id is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+}
+
+/ MsgWithdrawProposal is the Msg/WithdrawProposal request type.
+message MsgWithdrawProposal {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";
+
+  / proposal_id is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / address is the admin of the group policy or one of the proposers of the proposal.
+  string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type.
+message MsgWithdrawProposalResponse {
+}
+
+/ MsgVote is the Msg/Vote request type.
+message MsgVote {
+  option (cosmos.msg.v1.signer) = "voter";
+  option (amino.name) = "cosmos-sdk/group/MsgVote";
+
+  / proposal_id is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / voter is the voter account address.
+  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / option is the voter's choice on the proposal.
+  VoteOption option = 3;
+
+  / metadata is any arbitrary metadata attached to the vote.
+  string metadata = 4;
+
+  / exec defines whether the proposal should be executed
+  / immediately after voting or not.
+  Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "signer";
+  option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+  / proposal_id is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / executor is the account address used to execute the proposal.
+  string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+  / result is the final result of the proposal execution.
+  ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+  / address is the account address of the group member.
+  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_id is the unique ID of the group.
+  uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if the signer is not the admin of the group policy.
+
+### Msg/UpdateGroupPolicyDecisionPolicy
+
+The `UpdateGroupPolicyDecisionPolicy` message can be used to update a group policy's decision policy. The full `Msg` service definition is shown above, so only the request and response types for this call are repeated here:
+
+```go
+/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type.
+message MsgUpdateGroupPolicyDecisionPolicy {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy";
+
+  option (gogoproto.goproto_getters) = false;
+
+  / admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_policy_address is the account address of the group policy.
+  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / decision_policy is the updated group policy's decision policy.
+  google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
+}
+
+/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type.
+message MsgUpdateGroupPolicyDecisionPolicyResponse {
+}
+```
+
+It's expected to fail if:
+
+* the signer is not the admin of the group policy.
+* the new decision policy's `Validate()` method doesn't pass against the group.
+
+### Msg/UpdateGroupPolicyMetadata
+
+The `UpdateGroupPolicyMetadata` message can be used to update the group policy metadata.
+
+```go expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+  option (cosmos.msg.v1.service) = true;
+
+  / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+  rpc CreateGroup(MsgCreateGroup) returns (MsgCreateGroupResponse);
+
+  / UpdateGroupMembers updates the group members with given group id and admin address.
+  rpc UpdateGroupMembers(MsgUpdateGroupMembers) returns (MsgUpdateGroupMembersResponse);
+
+  / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) returns (MsgUpdateGroupAdminResponse);
+
+  / UpdateGroupMetadata updates the group metadata with given group id and admin address.
+  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) returns (MsgUpdateGroupMetadataResponse);
+
+  / CreateGroupPolicy creates a new group policy using given DecisionPolicy.
+  rpc CreateGroupPolicy(MsgCreateGroupPolicy) returns (MsgCreateGroupPolicyResponse);
+
+  / CreateGroupWithPolicy creates a new group with policy.
+  rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) returns (MsgCreateGroupWithPolicyResponse);
+
+  / UpdateGroupPolicyAdmin updates a group policy admin.
+  rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) returns (MsgUpdateGroupPolicyAdminResponse);
+
+  / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated.
+  rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) returns (MsgUpdateGroupPolicyDecisionPolicyResponse);
+
+  / UpdateGroupPolicyMetadata updates a group policy metadata.
+  rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) returns (MsgUpdateGroupPolicyMetadataResponse);
+
+  / SubmitProposal submits a new proposal.
+  rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);
+
+  / WithdrawProposal withdraws a proposal.
+  rpc WithdrawProposal(MsgWithdrawProposal) returns (MsgWithdrawProposalResponse);
+
+  / Vote allows a voter to vote on a proposal.
+  rpc Vote(MsgVote) returns (MsgVoteResponse);
+
+  / Exec executes a proposal.
+  rpc Exec(MsgExec) returns (MsgExecResponse);
+
+  / LeaveGroup allows a group member to leave the group.
+  rpc LeaveGroup(MsgLeaveGroup) returns (MsgLeaveGroupResponse);
+}
+
+/
+/ Groups
+/
+
+/ MsgCreateGroup is the Msg/CreateGroup request type.
+message MsgCreateGroup {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgCreateGroup";
+
+  / admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / members defines the group members.
+  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  / metadata is any arbitrary metadata attached to the group.
+  string metadata = 3;
+}
+
+/ MsgCreateGroupResponse is the Msg/CreateGroup response type.
+message MsgCreateGroupResponse {
+  / group_id is the unique ID of the newly created group.
+  uint64 group_id = 1;
+}
+
+/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type.
+message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. 
+message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. 
+message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. 
+  VoteOption option = 3;
+
+  / metadata is any arbitrary metadata attached to the vote.
+  string metadata = 4;
+
+  / exec defines whether the proposal should be executed
+  / immediately after voting or not.
+  Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "signer";
+  option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+  / proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / executor is the account address used to execute the proposal.
+  string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec request type.
+message MsgExecResponse {
+  / result is the final result of the proposal execution.
+  ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+  / address is the account address of the group member.
+  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_id is the unique ID of the group.
+  uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* the new metadata length is greater than the `MaxMetadataLen` config.
+* the signer is not the admin of the group.
+
+### Msg/SubmitProposal
+
+A new proposal can be created with `MsgSubmitProposal`, which takes a group policy account address, a list of proposer addresses, a list of messages to execute if the proposal is accepted, and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after proposal creation. In that case, proposers' signatures are counted as yes votes.
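+In practice, on chains that expose the group module through a CLI, such a proposal is usually drafted as a JSON file and passed to a `submit-proposal`-style command. The sketch below is illustrative only: the `cosmos1...` addresses are placeholders, the embedded `MsgSend` is just one example of an `sdk.Msg` to execute (its `from_address` would normally be the group policy account itself), and the exact file format accepted by a given chain's tooling may differ.
+
+```json
+{
+  "group_policy_address": "cosmos1...",
+  "messages": [
+    {
+      "@type": "/cosmos.bank.v1beta1.MsgSend",
+      "from_address": "cosmos1...",
+      "to_address": "cosmos1...",
+      "amount": [{ "denom": "stake", "amount": "100" }]
+    }
+  ],
+  "metadata": "example metadata",
+  "proposers": ["cosmos1..."],
+  "title": "Send 100stake to a contributor",
+  "summary": "Pays a contributor from the group policy account"
+}
+```
+
+`title` and `summary` map to the `Since: cosmos-sdk 0.47` fields of `MsgSubmitProposal`; on earlier versions they would be omitted.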
+ +```go expandable +/ Since: cosmos-sdk 0.46 +syntax = "proto3"; + +package cosmos.group.v1; + +option go_package = "github.com/cosmos/cosmos-sdk/x/group"; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; +import "google/protobuf/any.proto"; +import "cosmos/group/v1/types.proto"; +import "cosmos/msg/v1/msg.proto"; +import "amino/amino.proto"; + +/ Msg is the cosmos.group.v1 Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata. + rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. + rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. 
+ rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. 
+ repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. 
+ uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. 
+ string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. 
+message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. 
+ repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. + VoteOption option = 3; + + / metadata is any arbitrary metadata attached to the vote. + string metadata = 4; + + / exec defines whether the proposal should be executed + / immediately after voting or not. + Exec exec = 5; +} + +/ MsgVoteResponse is the Msg/Vote response type. +message MsgVoteResponse { +} + +/ MsgExec is the Msg/Exec request type. 
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "signer";
+  option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+  / proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / executor is the account address used to execute the proposal.
+  string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec request type.
+message MsgExecResponse {
+  / result is the final result of the proposal execution.
+  ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+  / address is the account address of the group member.
+  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_id is the unique ID of the group.
+  uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* the metadata, title, or summary length is greater than the `MaxMetadataLen` config.
+* any of the proposers is not a group member.
+
+### Msg/WithdrawProposal
+
+A proposal can be withdrawn using `MsgWithdrawProposal`, which takes an `address` (either a proposer of the proposal or the group policy admin) and the `proposal_id` of the proposal to withdraw.
+
+```go expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+  option (cosmos.msg.v1.service) = true;
+
+  / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+ rpc CreateGroup(MsgCreateGroup) + +returns (MsgCreateGroupResponse); + + / UpdateGroupMembers updates the group members with given group id and admin address. + rpc UpdateGroupMembers(MsgUpdateGroupMembers) + +returns (MsgUpdateGroupMembersResponse); + + / UpdateGroupAdmin updates the group admin with given group id and previous admin address. + rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. 
+ rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. 
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. 
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. 
+ google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. 
+ /
+ / Since: cosmos-sdk 0.47
+ string summary = 7;
+}
+
+/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type.
+message MsgSubmitProposalResponse {
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+}
+
+/ MsgWithdrawProposal is the Msg/WithdrawProposal request type.
+message MsgWithdrawProposal {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / address is the admin of the group policy or one of the proposer of the proposal.
+ string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type.
+message MsgWithdrawProposalResponse {
+}
+
+/ MsgVote is the Msg/Vote request type.
+message MsgVote {
+ option (cosmos.msg.v1.signer) = "voter";
+ option (amino.name) = "cosmos-sdk/group/MsgVote";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / voter is the voter account address.
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / option is the voter's choice on the proposal.
+ VoteOption option = 3;
+
+ / metadata is any arbitrary metadata attached to the vote.
+ string metadata = 4;
+
+ / exec defines whether the proposal should be executed
+ / immediately after voting or not.
+ Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* the signer is neither the group policy admin nor a proposer of the proposal.
+* the proposal is already closed or aborted.
+
+### Msg/Vote
+
+A new vote can be created with `MsgVote`, given a proposal ID, a voter address, a choice (yes, no, veto, or abstain), and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after voting.
+
+```go expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+ option (cosmos.msg.v1.service) = true;
+
+ / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+ rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+ / UpdateGroupMembers updates the group members with given group id and admin address.
+ rpc UpdateGroupMembers(MsgUpdateGroupMembers)
+
+returns (MsgUpdateGroupMembersResponse);
+
+ / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+ rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) + +returns (MsgUpdateGroupAdminResponse); + + / UpdateGroupMetadata updates the group metadata with given group id and admin address. + rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) + +returns (MsgUpdateGroupMetadataResponse); + + / CreateGroupPolicy creates a new group policy using given DecisionPolicy. + rpc CreateGroupPolicy(MsgCreateGroupPolicy) + +returns (MsgCreateGroupPolicyResponse); + + / CreateGroupWithPolicy creates a new group with policy. + rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) + +returns (MsgCreateGroupWithPolicyResponse); + + / UpdateGroupPolicyAdmin updates a group policy admin. + rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. 
+ string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. +message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. 
+message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. 
+message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. 
+ / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. 
+ string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type.
+message MsgWithdrawProposalResponse {
+}
+
+/ MsgVote is the Msg/Vote request type.
+message MsgVote {
+ option (cosmos.msg.v1.signer) = "voter";
+ option (amino.name) = "cosmos-sdk/group/MsgVote";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / voter is the voter account address.
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / option is the voter's choice on the proposal.
+ VoteOption option = 3;
+
+ / metadata is any arbitrary metadata attached to the vote.
+ string metadata = 4;
+
+ / exec defines whether the proposal should be executed
+ / immediately after voting or not.
+ Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* the metadata length is greater than the `MaxMetadataLen` config.
+* the proposal is no longer in its voting period.
+
+### Msg/Exec
+
+A proposal can be executed with `MsgExec`.
+
+```go expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+ option (cosmos.msg.v1.service) = true;
+
+ / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+ rpc CreateGroup(MsgCreateGroup)
+
+returns (MsgCreateGroupResponse);
+
+ / UpdateGroupMembers updates the group members with given group id and admin address.
+ rpc UpdateGroupMembers(MsgUpdateGroupMembers)
+
+returns (MsgUpdateGroupMembersResponse);
+
+ / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+ rpc UpdateGroupAdmin(MsgUpdateGroupAdmin)
+
+returns (MsgUpdateGroupAdminResponse);
+
+ / UpdateGroupMetadata updates the group metadata with given group id and admin address.
+ rpc UpdateGroupMetadata(MsgUpdateGroupMetadata)
+
+returns (MsgUpdateGroupMetadataResponse);
+
+ / CreateGroupPolicy creates a new group policy using given DecisionPolicy.
+ rpc CreateGroupPolicy(MsgCreateGroupPolicy)
+
+returns (MsgCreateGroupPolicyResponse);
+
+ / CreateGroupWithPolicy creates a new group with policy.
+ rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy)
+
+returns (MsgCreateGroupWithPolicyResponse);
+
+ / UpdateGroupPolicyAdmin updates a group policy admin.
+ rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) + +returns (MsgUpdateGroupPolicyAdminResponse); + + / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated. + rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) + +returns (MsgUpdateGroupPolicyDecisionPolicyResponse); + + / UpdateGroupPolicyMetadata updates a group policy metadata. + rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) + +returns (MsgUpdateGroupPolicyMetadataResponse); + + / SubmitProposal submits a new proposal. + rpc SubmitProposal(MsgSubmitProposal) + +returns (MsgSubmitProposalResponse); + + / WithdrawProposal withdraws a proposal. + rpc WithdrawProposal(MsgWithdrawProposal) + +returns (MsgWithdrawProposalResponse); + + / Vote allows a voter to vote on a proposal. + rpc Vote(MsgVote) + +returns (MsgVoteResponse); + + / Exec executes a proposal. + rpc Exec(MsgExec) + +returns (MsgExecResponse); + + / LeaveGroup allows a group member to leave the group. + rpc LeaveGroup(MsgLeaveGroup) + +returns (MsgLeaveGroupResponse); +} + +/ +/ Groups +/ + +/ MsgCreateGroup is the Msg/CreateGroup request type. +message MsgCreateGroup { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroup"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / metadata is any arbitrary metadata to attached to the group. + string metadata = 3; +} + +/ MsgCreateGroupResponse is the Msg/CreateGroup response type. +message MsgCreateGroupResponse { + / group_id is the unique ID of the newly created group. + uint64 group_id = 1; +} + +/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type. 
+message MsgUpdateGroupMembers { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / member_updates is the list of members to update, + / set weight to 0 to remove a member. + repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. 
+message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. 
+message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. + string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. +message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. 
+ string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. + repeated google.protobuf.Any messages = 4; + + / exec defines the mode of execution of the proposal, + / whether it should be executed immediately on creation or not. + / If so, proposers signatures are considered as Yes votes. + Exec exec = 5; + + / title is the title of the proposal. + / + / Since: cosmos-sdk 0.47 + string title = 6; + + / summary is the summary of the proposal. + / + / Since: cosmos-sdk 0.47 + string summary = 7; +} + +/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type. +message MsgSubmitProposalResponse { + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; +} + +/ MsgWithdrawProposal is the Msg/WithdrawProposal request type. +message MsgWithdrawProposal { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / address is the admin of the group policy or one of the proposer of the proposal. + string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type. +message MsgWithdrawProposalResponse { +} + +/ MsgVote is the Msg/Vote request type. +message MsgVote { + option (cosmos.msg.v1.signer) = "voter"; + option (amino.name) = "cosmos-sdk/group/MsgVote"; + + / proposal is the unique ID of the proposal. + uint64 proposal_id = 1; + + / voter is the voter account address. + string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / option is the voter's choice on the proposal. 
+ VoteOption option = 3;
+
+ / metadata is any arbitrary metadata attached to the vote.
+ string metadata = 4;
+
+ / exec defines whether the proposal should be executed
+ / immediately after voting or not.
+ Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+ option (cosmos.msg.v1.signer) = "signer";
+ option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+ / proposal is the unique ID of the proposal.
+ uint64 proposal_id = 1;
+
+ / executor is the account address used to execute the proposal.
+ string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+ / result is the final result of the proposal execution.
+ ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+ option (cosmos.msg.v1.signer) = "address";
+ option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+ / address is the account address of the group member.
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+ / group_id is the unique ID of the group.
+ uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+The messages in the proposal will not be executed if:
+
+* the proposal has not been accepted by the group policy.
+* the proposal has already been successfully executed.
+
+### Msg/LeaveGroup
+
+The `MsgLeaveGroup` allows a group member to leave a group.
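+Leaving matters because the member's weight stops counting toward the group's total, so pending proposals may no longer be able to satisfy a threshold decision policy. The Go sketch below illustrates that effect with a hypothetical, deliberately simplified `Group` type (integer weights instead of the SDK's decimal-string weights; not the module's actual implementation):
+
+```go
+package main
+
+import "fmt"
+
+// Group is a hypothetical, simplified model for illustration only;
+// the SDK stores members with decimal-string weights in state.
+type Group struct {
+	Members map[string]int // member address -> voting weight
+}
+
+// Leave mirrors the effect of MsgLeaveGroup: the member is removed
+// and their weight no longer counts toward the group total.
+func (g *Group) Leave(addr string) {
+	delete(g.Members, addr)
+}
+
+// TotalWeight sums all member weights; a threshold decision policy
+// compares accumulated yes-vote weight against a threshold that is
+// effectively capped by this total.
+func (g *Group) TotalWeight() int {
+	total := 0
+	for _, w := range g.Members {
+		total += w
+	}
+	return total
+}
+
+func main() {
+	g := &Group{Members: map[string]int{"alice": 2, "bob": 1, "carol": 1}}
+	fmt.Println(g.TotalWeight()) // 4
+	g.Leave("carol")
+	fmt.Println(g.TotalWeight()) // 3
+}
+```
+
+In the actual module, changing group membership while proposals are in their voting period can invalidate those proposals; consult the module specification for the exact semantics.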
+
+```go expandable
+/ Since: cosmos-sdk 0.46
+syntax = "proto3";
+
+package cosmos.group.v1;
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/group";
+
+import "gogoproto/gogo.proto";
+import "cosmos_proto/cosmos.proto";
+import "google/protobuf/any.proto";
+import "cosmos/group/v1/types.proto";
+import "cosmos/msg/v1/msg.proto";
+import "amino/amino.proto";
+
+/ Msg is the cosmos.group.v1 Msg service.
+service Msg {
+  option (cosmos.msg.v1.service) = true;
+
+  / CreateGroup creates a new group with an admin account address, a list of members and some optional metadata.
+  rpc CreateGroup(MsgCreateGroup) returns (MsgCreateGroupResponse);
+
+  / UpdateGroupMembers updates the group members with given group id and admin address.
+  rpc UpdateGroupMembers(MsgUpdateGroupMembers) returns (MsgUpdateGroupMembersResponse);
+
+  / UpdateGroupAdmin updates the group admin with given group id and previous admin address.
+  rpc UpdateGroupAdmin(MsgUpdateGroupAdmin) returns (MsgUpdateGroupAdminResponse);
+
+  / UpdateGroupMetadata updates the group metadata with given group id and admin address.
+  rpc UpdateGroupMetadata(MsgUpdateGroupMetadata) returns (MsgUpdateGroupMetadataResponse);
+
+  / CreateGroupPolicy creates a new group policy using given DecisionPolicy.
+  rpc CreateGroupPolicy(MsgCreateGroupPolicy) returns (MsgCreateGroupPolicyResponse);
+
+  / CreateGroupWithPolicy creates a new group with policy.
+  rpc CreateGroupWithPolicy(MsgCreateGroupWithPolicy) returns (MsgCreateGroupWithPolicyResponse);
+
+  / UpdateGroupPolicyAdmin updates a group policy admin.
+  rpc UpdateGroupPolicyAdmin(MsgUpdateGroupPolicyAdmin) returns (MsgUpdateGroupPolicyAdminResponse);
+
+  / UpdateGroupPolicyDecisionPolicy allows a group policy's decision policy to be updated.
+  rpc UpdateGroupPolicyDecisionPolicy(MsgUpdateGroupPolicyDecisionPolicy) returns (MsgUpdateGroupPolicyDecisionPolicyResponse);
+
+  / UpdateGroupPolicyMetadata updates a group policy metadata.
+  rpc UpdateGroupPolicyMetadata(MsgUpdateGroupPolicyMetadata) returns (MsgUpdateGroupPolicyMetadataResponse);
+
+  / SubmitProposal submits a new proposal.
+  rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);
+
+  / WithdrawProposal withdraws a proposal.
+  rpc WithdrawProposal(MsgWithdrawProposal) returns (MsgWithdrawProposalResponse);
+
+  / Vote allows a voter to vote on a proposal.
+  rpc Vote(MsgVote) returns (MsgVoteResponse);
+
+  / Exec executes a proposal.
+  rpc Exec(MsgExec) returns (MsgExecResponse);
+
+  / LeaveGroup allows a group member to leave the group.
+  rpc LeaveGroup(MsgLeaveGroup) returns (MsgLeaveGroupResponse);
+}
+
+/
+/ Groups
+/
+
+/ MsgCreateGroup is the Msg/CreateGroup request type.
+message MsgCreateGroup {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgCreateGroup";
+
+  / admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / members defines the group members.
+  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  / metadata is any arbitrary metadata attached to the group.
+  string metadata = 3;
+}
+
+/ MsgCreateGroupResponse is the Msg/CreateGroup response type.
+message MsgCreateGroupResponse {
+  / group_id is the unique ID of the newly created group.
+  uint64 group_id = 1;
+}
+
+/ MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type.
+message MsgUpdateGroupMembers {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers";
+
+  / admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_id is the unique ID of the group.
+  uint64 group_id = 2;
+
+  / member_updates is the list of members to update,
+  / set weight to 0 to remove a member.
+ repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ MsgUpdateGroupMembersResponse is the Msg/UpdateGroupMembers response type. +message MsgUpdateGroupMembersResponse { +} + +/ MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type. +message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + / admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupAdminResponse is the Msg/UpdateGroupAdmin response type. +message MsgUpdateGroupAdminResponse { +} + +/ MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. + uint64 group_id = 2; + + / metadata is the updated group's metadata. + string metadata = 3; +} + +/ MsgUpdateGroupMetadataResponse is the Msg/UpdateGroupMetadata response type. +message MsgUpdateGroupMetadataResponse { +} + +/ +/ Group Policies +/ + +/ MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_id is the unique ID of the group. 
+ uint64 group_id = 2; + + / metadata is any arbitrary metadata attached to the group policy. + string metadata = 3; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupPolicyResponse is the Msg/CreateGroupPolicy response type. +message MsgCreateGroupPolicyResponse { + / address is the account address of the newly created group policy. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyAdminResponse is the Msg/UpdateGroupPolicyAdmin response type. +message MsgUpdateGroupPolicyAdminResponse { +} + +/ MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type. +message MsgCreateGroupWithPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy"; + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group and group policy admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / members defines the group members. + repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + / group_metadata is any arbitrary metadata attached to the group. 
+ string group_metadata = 3; + + / group_policy_metadata is any arbitrary metadata attached to the group policy. + string group_policy_metadata = 4; + + / group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group + / and group policy admin. + bool group_policy_as_admin = 5; + + / decision_policy specifies the group policy's decision policy. + google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgCreateGroupWithPolicyResponse is the Msg/CreateGroupWithPolicy response type. +message MsgCreateGroupWithPolicyResponse { + / group_id is the unique ID of the newly created group with policy. + uint64 group_id = 1; + + / group_policy_address is the account address of the newly created group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / decision_policy is the updated group policy's decision policy. + google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} + +/ MsgUpdateGroupPolicyDecisionPolicyResponse is the Msg/UpdateGroupPolicyDecisionPolicy response type. +message MsgUpdateGroupPolicyDecisionPolicyResponse { +} + +/ MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type. 
+message MsgUpdateGroupPolicyMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata"; + + / admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / metadata is the group policy metadata to be updated. + string metadata = 3; +} + +/ MsgUpdateGroupPolicyMetadataResponse is the Msg/UpdateGroupPolicyMetadata response type. +message MsgUpdateGroupPolicyMetadataResponse { +} + +/ +/ Proposals and Voting +/ + +/ Exec defines modes of execution of a proposal on creation or on new vote. +enum Exec { + / An empty value means that there should be a separate + / MsgExec request for the proposal to execute. + EXEC_UNSPECIFIED = 0; + + / Try to execute the proposal immediately. + / If the proposal is not allowed per the DecisionPolicy, + / the proposal will still be open and could + / be executed at a later point. + EXEC_TRY = 1; +} + +/ MsgSubmitProposal is the Msg/SubmitProposal request type. +message MsgSubmitProposal { + option (cosmos.msg.v1.signer) = "proposers"; + option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal"; + + option (gogoproto.goproto_getters) = false; + + / group_policy_address is the account address of group policy. + string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + / proposers are the account addresses of the proposers. + / Proposers signatures will be counted as yes votes. + repeated string proposers = 2; + + / metadata is any arbitrary metadata attached to the proposal. + string metadata = 3; + + / messages is a list of `sdk.Msg`s that will be executed if the proposal passes. 
+  repeated google.protobuf.Any messages = 4;
+
+  / exec defines the mode of execution of the proposal,
+  / whether it should be executed immediately on creation or not.
+  / If so, proposers signatures are considered as Yes votes.
+  Exec exec = 5;
+
+  / title is the title of the proposal.
+  /
+  / Since: cosmos-sdk 0.47
+  string title = 6;
+
+  / summary is the summary of the proposal.
+  /
+  / Since: cosmos-sdk 0.47
+  string summary = 7;
+}
+
+/ MsgSubmitProposalResponse is the Msg/SubmitProposal response type.
+message MsgSubmitProposalResponse {
+  / proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+}
+
+/ MsgWithdrawProposal is the Msg/WithdrawProposal request type.
+message MsgWithdrawProposal {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";
+
+  / proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / address is the admin of the group policy or one of the proposers of the proposal.
+  string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgWithdrawProposalResponse is the Msg/WithdrawProposal response type.
+message MsgWithdrawProposalResponse {
+}
+
+/ MsgVote is the Msg/Vote request type.
+message MsgVote {
+  option (cosmos.msg.v1.signer) = "voter";
+  option (amino.name) = "cosmos-sdk/group/MsgVote";
+
+  / proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / voter is the voter account address.
+  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / option is the voter's choice on the proposal.
+  VoteOption option = 3;
+
+  / metadata is any arbitrary metadata attached to the vote.
+  string metadata = 4;
+
+  / exec defines whether the proposal should be executed
+  / immediately after voting or not.
+  Exec exec = 5;
+}
+
+/ MsgVoteResponse is the Msg/Vote response type.
+message MsgVoteResponse {
+}
+
+/ MsgExec is the Msg/Exec request type.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "signer";
+  option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+  / proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  / executor is the account address used to execute the proposal.
+  string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+
+/ MsgExecResponse is the Msg/Exec response type.
+message MsgExecResponse {
+  / result is the final result of the proposal execution.
+  ProposalExecutorResult result = 2;
+}
+
+/ MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup";
+
+  / address is the account address of the group member.
+  string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  / group_id is the unique ID of the group.
+  uint64 group_id = 2;
+}
+
+/ MsgLeaveGroupResponse is the Msg/LeaveGroup response type.
+message MsgLeaveGroupResponse {
+}
+```
+
+It's expected to fail if:
+
+* the group member is not part of the group.
+* for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group.
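+
+The two failure conditions above can be sketched in plain Go. This is a hypothetical, simplified model for illustration only: the `Group`, `ThresholdPolicy`, and `LeaveGroup` names here are stand-ins, and the `Validate` rule shown (threshold must remain reachable) is an assumed example of a group-dependent check, not the actual `x/group` implementation.
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// Member and Group are simplified stand-ins; field names mirror the proto
+// definitions above, but this is not the SDK code.
+type Member struct {
+	Address string
+	Weight  uint64
+}
+
+type Group struct {
+	Members []Member
+}
+
+func (g Group) TotalWeight() uint64 {
+	var total uint64
+	for _, m := range g.Members {
+		total += m.Weight
+	}
+	return total
+}
+
+// ThresholdPolicy is an illustrative decision policy whose Validate rejects a
+// group that can no longer reach the voting threshold.
+type ThresholdPolicy struct {
+	Threshold uint64
+}
+
+func (p ThresholdPolicy) Validate(g Group) error {
+	if g.TotalWeight() < p.Threshold {
+		return errors.New("threshold exceeds the group's total weight")
+	}
+	return nil
+}
+
+// LeaveGroup mirrors the two failure conditions listed above: the member must
+// be part of the group, and every associated policy must still validate
+// against the updated group.
+func LeaveGroup(g Group, policies []ThresholdPolicy, addr string) (Group, error) {
+	idx := -1
+	for i, m := range g.Members {
+		if m.Address == addr {
+			idx = i
+			break
+		}
+	}
+	if idx < 0 {
+		return g, errors.New("member is not part of the group")
+	}
+	updated := Group{Members: append(append([]Member{}, g.Members[:idx]...), g.Members[idx+1:]...)}
+	for _, p := range policies {
+		if err := p.Validate(updated); err != nil {
+			return g, err
+		}
+	}
+	return updated, nil
+}
+
+func main() {
+	g := Group{Members: []Member{{Address: "cosmos1aaa", Weight: 2}, {Address: "cosmos1bbb", Weight: 1}}}
+	policies := []ThresholdPolicy{{Threshold: 2}}
+
+	// The weight-1 member can leave: total weight drops to 2, still >= threshold.
+	updated, err := LeaveGroup(g, policies, "cosmos1bbb")
+	fmt.Println(len(updated.Members), err)
+
+	// The remaining member cannot leave: the policy no longer validates.
+	_, err = LeaveGroup(updated, policies, "cosmos1aaa")
+	fmt.Println(err != nil)
+}
+```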
+ +## Events + +The group module emits the following events: + +### EventCreateGroup + +| Type | Attribute Key | Attribute Value | +| -------------------------------- | ------------- | -------------------------------- | +| message | action | /cosmos.group.v1.Msg/CreateGroup | +| cosmos.group.v1.EventCreateGroup | group\_id | `{groupId}` | + +### EventUpdateGroup + +| Type | Attribute Key | Attribute Value | +| -------------------------------- | ------------- | ---------------------------------------------------------- | +| message | action | `/cosmos.group.v1.Msg/UpdateGroup{Admin\|Metadata\|Members}` | +| cosmos.group.v1.EventUpdateGroup | group\_id | `{groupId}` | + +### EventCreateGroupPolicy + +| Type | Attribute Key | Attribute Value | +| -------------------------------------- | ------------- | -------------------------------------- | +| message | action | /cosmos.group.v1.Msg/CreateGroupPolicy | +| cosmos.group.v1.EventCreateGroupPolicy | address | `{groupPolicyAddress}` | + +### EventUpdateGroupPolicy + +| Type | Attribute Key | Attribute Value | +| -------------------------------------- | ------------- | ----------------------------------------------------------------------- | +| message | action | `/cosmos.group.v1.Msg/UpdateGroupPolicy{Admin\|Metadata\|DecisionPolicy}` | +| cosmos.group.v1.EventUpdateGroupPolicy | address | `{groupPolicyAddress}` | + +### EventCreateProposal + +| Type | Attribute Key | Attribute Value | +| ----------------------------------- | ------------- | ----------------------------------- | +| message | action | /cosmos.group.v1.Msg/CreateProposal | +| cosmos.group.v1.EventCreateProposal | proposal\_id | `{proposalId}` | + +### EventWithdrawProposal + +| Type | Attribute Key | Attribute Value | +| ------------------------------------- | ------------- | ------------------------------------- | +| message | action | /cosmos.group.v1.Msg/WithdrawProposal | +| cosmos.group.v1.EventWithdrawProposal | proposal\_id | `{proposalId}` | + +### 
EventVote
+
+| Type                      | Attribute Key | Attribute Value           |
+| ------------------------- | ------------- | ------------------------- |
+| message                   | action        | /cosmos.group.v1.Msg/Vote |
+| cosmos.group.v1.EventVote | proposal\_id  | `{proposalId}`            |
+
+### EventExec
+
+| Type                      | Attribute Key | Attribute Value           |
+| ------------------------- | ------------- | ------------------------- |
+| message                   | action        | /cosmos.group.v1.Msg/Exec |
+| cosmos.group.v1.EventExec | proposal\_id  | `{proposalId}`            |
+| cosmos.group.v1.EventExec | logs          | `{logs\_string}`          |
+
+### EventLeaveGroup
+
+| Type                            | Attribute Key | Attribute Value                 |
+| ------------------------------- | ------------- | ------------------------------- |
+| message                         | action        | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventLeaveGroup | group\_id     | `{groupId}`                     |
+| cosmos.group.v1.EventLeaveGroup | address       | `{address}`                     |
+
+### EventProposalPruned
+
+| Type                                | Attribute Key | Attribute Value                 |
+| ----------------------------------- | ------------- | ------------------------------- |
+| message                             | action        | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventProposalPruned | proposal\_id  | `{proposalId}`                  |
+| cosmos.group.v1.EventProposalPruned | status        | `{ProposalStatus}`              |
+| cosmos.group.v1.EventProposalPruned | tally\_result | `{TallyResult}`                 |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `group` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `group` state.
+
+```bash
+simd query group --help
+```
+
+##### group-info
+
+The `group-info` command allows users to query for group info by given group id.
+
+```bash
+simd query group group-info [id] [flags]
+```
+
+Example:
+
+```bash
+simd query group group-info 1
+```
+
+Example Output:
+
+```bash
+admin: cosmos1..
+group_id: "1" +metadata: AQ== +total_weight: "3" +version: "1" +``` + +##### group-policy-info + +The `group-policy-info` command allows users to query for group policy info by account address of group policy . + +```bash +simd query group group-policy-info [group-policy-account] [flags] +``` + +Example: + +```bash +simd query group group-policy-info cosmos1.. +``` + +Example Output: + +```bash expandable +address: cosmos1.. +admin: cosmos1.. +decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s +group_id: "1" +metadata: AQ== +version: "1" +``` + +##### group-members + +The `group-members` command allows users to query for group members by group id with pagination flags. + +```bash +simd query group group-members [id] [flags] +``` + +Example: + +```bash +simd query group group-members 1 +``` + +Example Output: + +```bash expandable +members: +- group_id: "1" + member: + address: cosmos1.. + metadata: AQ== + weight: "2" +- group_id: "1" + member: + address: cosmos1.. + metadata: AQ== + weight: "1" +pagination: + next_key: null + total: "2" +``` + +##### groups-by-admin + +The `groups-by-admin` command allows users to query for groups by admin account address with pagination flags. + +```bash +simd query group groups-by-admin [admin] [flags] +``` + +Example: + +```bash +simd query group groups-by-admin cosmos1.. +``` + +Example Output: + +```bash expandable +groups: +- admin: cosmos1.. + group_id: "1" + metadata: AQ== + total_weight: "3" + version: "1" +- admin: cosmos1.. + group_id: "2" + metadata: AQ== + total_weight: "3" + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### group-policies-by-group + +The `group-policies-by-group` command allows users to query for group policies by group id with pagination flags. 
+ +```bash +simd query group group-policies-by-group [group-id] [flags] +``` + +Example: + +```bash +simd query group group-policies-by-group 1 +``` + +Example Output: + +```bash expandable +group_policies: +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### group-policies-by-admin + +The `group-policies-by-admin` command allows users to query for group policies by admin account address with pagination flags. + +```bash +simd query group group-policies-by-admin [admin] [flags] +``` + +Example: + +```bash +simd query group group-policies-by-admin cosmos1.. +``` + +Example Output: + +```bash expandable +group_policies: +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### proposal + +The `proposal` command allows users to query for proposal by id. + +```bash +simd query group proposal [id] [flags] +``` + +Example: + +```bash +simd query group proposal 1 +``` + +Example Output: + +```bash expandable +proposal: + address: cosmos1.. 
+ executor_result: EXECUTOR_RESULT_NOT_RUN + group_policy_version: "1" + group_version: "1" + metadata: AQ== + msgs: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "100000000" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + proposal_id: "1" + proposers: + - cosmos1.. + result: RESULT_UNFINALIZED + status: STATUS_SUBMITTED + submitted_at: "2021-12-17T07:06:26.310638964Z" + windows: + min_execution_period: 0s + voting_period: 432000s + vote_state: + abstain_count: "0" + no_count: "0" + veto_count: "0" + yes_count: "0" + summary: "Summary" + title: "Title" +``` + +##### proposals-by-group-policy + +The `proposals-by-group-policy` command allows users to query for proposals by account address of group policy with pagination flags. + +```bash +simd query group proposals-by-group-policy [group-policy-account] [flags] +``` + +Example: + +```bash +simd query group proposals-by-group-policy cosmos1.. +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "1" +proposals: +- address: cosmos1.. + executor_result: EXECUTOR_RESULT_NOT_RUN + group_policy_version: "1" + group_version: "1" + metadata: AQ== + msgs: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "100000000" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + proposal_id: "1" + proposers: + - cosmos1.. + result: RESULT_UNFINALIZED + status: STATUS_SUBMITTED + submitted_at: "2021-12-17T07:06:26.310638964Z" + windows: + min_execution_period: 0s + voting_period: 432000s + vote_state: + abstain_count: "0" + no_count: "0" + veto_count: "0" + yes_count: "0" + summary: "Summary" + title: "Title" +``` + +##### vote + +The `vote` command allows users to query for vote by proposal id and voter account address. + +```bash +simd query group vote [proposal-id] [voter] [flags] +``` + +Example: + +```bash +simd query group vote 1 cosmos1.. 
+``` + +Example Output: + +```bash +vote: + choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +##### votes-by-proposal + +The `votes-by-proposal` command allows users to query for votes by proposal id with pagination flags. + +```bash +simd query group votes-by-proposal [proposal-id] [flags] +``` + +Example: + +```bash +simd query group votes-by-proposal 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +votes: +- choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +##### votes-by-voter + +The `votes-by-voter` command allows users to query for votes by voter account address with pagination flags. + +```bash +simd query group votes-by-voter [voter] [flags] +``` + +Example: + +```bash +simd query group votes-by-voter cosmos1.. +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +votes: +- choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +### Transactions + +The `tx` commands allow users to interact with the `group` module. + +```bash +simd tx group --help +``` + +#### create-group + +The `create-group` command allows users to create a group which is an aggregation of member accounts with associated weights and +an administrator account. + +```bash +simd tx group create-group [admin] [metadata] [members-json-file] +``` + +Example: + +```bash +simd tx group create-group cosmos1.. "AQ==" members.json +``` + +#### update-group-admin + +The `update-group-admin` command allows users to update a group's admin. + +```bash +simd tx group update-group-admin [admin] [group-id] [new-admin] [flags] +``` + +Example: + +```bash +simd tx group update-group-admin cosmos1.. 1 cosmos1.. 
+``` + +#### update-group-members + +The `update-group-members` command allows users to update a group's members. + +```bash +simd tx group update-group-members [admin] [group-id] [members-json-file] [flags] +``` + +Example: + +```bash +simd tx group update-group-members cosmos1.. 1 members.json +``` + +#### update-group-metadata + +The `update-group-metadata` command allows users to update a group's metadata. + +```bash +simd tx group update-group-metadata [admin] [group-id] [metadata] [flags] +``` + +Example: + +```bash +simd tx group update-group-metadata cosmos1.. 1 "AQ==" +``` + +#### create-group-policy + +The `create-group-policy` command allows users to create a group policy which is an account associated with a group and a decision policy. + +```bash +simd tx group create-group-policy [admin] [group-id] [metadata] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group create-group-policy cosmos1.. 1 "AQ==" '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### create-group-with-policy + +The `create-group-with-policy` command allows users to create a group which is an aggregation of member accounts with associated weights and an administrator account with decision policy. If the `--group-policy-as-admin` flag is set to `true`, the group policy address becomes the group and group policy admin. + +```bash +simd tx group create-group-with-policy [admin] [group-metadata] [group-policy-metadata] [members-json-file] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group create-group-with-policy cosmos1.. "AQ==" "AQ==" members.json '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### update-group-policy-admin + +The `update-group-policy-admin` command allows users to update a group policy admin. 
+
+```bash
+simd tx group update-group-policy-admin [admin] [group-policy-account] [new-admin] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-policy-admin cosmos1.. cosmos1.. cosmos1..
+```
+
+#### update-group-policy-metadata
+
+The `update-group-policy-metadata` command allows users to update a group policy metadata.
+
+```bash
+simd tx group update-group-policy-metadata [admin] [group-policy-account] [new-metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-policy-metadata cosmos1.. cosmos1.. "AQ=="
+```
+
+#### update-group-policy-decision-policy
+
+The `update-group-policy-decision-policy` command allows users to update a group policy's decision policy.
+
+```bash
+simd tx group update-group-policy-decision-policy [admin] [group-policy-account] [decision-policy] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-policy-decision-policy cosmos1.. cosmos1.. '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"2", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}'
+```
+
+#### submit-proposal
+
+The `submit-proposal` command allows users to submit a new proposal.
+
+```bash
+simd tx group submit-proposal [group-policy-account] [proposer[,proposer]*] [msg_tx_json_file] [metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group submit-proposal cosmos1.. cosmos1.. msg_tx.json "AQ=="
+```
+
+#### withdraw-proposal
+
+The `withdraw-proposal` command allows users to withdraw a proposal.
+
+```bash
+simd tx group withdraw-proposal [proposal-id] [group-policy-admin-or-proposer]
+```
+
+Example:
+
+```bash
+simd tx group withdraw-proposal 1 cosmos1..
+```
+
+#### vote
+
+The `vote` command allows users to vote on a proposal.
+
+```bash
+simd tx group vote [proposal-id] [voter] [choice] [metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group vote 1 cosmos1.. CHOICE_YES "AQ=="
+```
+
+#### exec
+
+The `exec` command allows users to execute a proposal.
+
+```bash
+simd tx group exec [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx group exec 1
+```
+
+#### leave-group
+
+The `leave-group` command allows a group member to leave the group.
+
+```bash
+simd tx group leave-group [member-address] [group-id]
+```
+
+Example:
+
+```bash
+simd tx group leave-group cosmos1.. 1
+```
+
+### gRPC
+
+A user can query the `group` module using gRPC endpoints.
+
+#### GroupInfo
+
+The `GroupInfo` endpoint allows users to query for group info by given group id.
+
+```bash
+cosmos.group.v1.Query/GroupInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"group_id":1}' localhost:9090 cosmos.group.v1.Query/GroupInfo
+```
+
+Example Output:
+
+```bash
+{
+  "info": {
+    "groupId": "1",
+    "admin": "cosmos1..",
+    "metadata": "AQ==",
+    "version": "1",
+    "totalWeight": "3"
+  }
+}
+```
+
+#### GroupPolicyInfo
+
+The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy.
+
+```bash
+cosmos.group.v1.Query/GroupPolicyInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPolicyInfo
+```
+
+Example Output:
+
+```bash
+{
+  "info": {
+    "address": "cosmos1..",
+    "groupId": "1",
+    "admin": "cosmos1..",
+    "version": "1",
+    "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows": {"voting_period": "120h", "min_execution_period": "0s"}}
+  }
+}
+```
+
+#### GroupMembers
+
+The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags.
+ +```bash +cosmos.group.v1.Query/GroupMembers +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupMembers +``` + +Example Output: + +```bash expandable +{ + "members": [ + { + "groupId": "1", + "member": { + "address": "cosmos1..", + "weight": "1" + } + }, + { + "groupId": "1", + "member": { + "address": "cosmos1..", + "weight": "2" + } + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupsByAdmin + +The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags. + +```bash +cosmos.group.v1.Query/GroupsByAdmin +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupsByAdmin +``` + +Example Output: + +```bash expandable +{ + "groups": [ + { + "groupId": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + }, + { + "groupId": "2", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupPoliciesByGroup + +The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags. 
+ +```bash +cosmos.group.v1.Query/GroupPoliciesByGroup +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByGroup +``` + +Example Output: + +```bash expandable +{ + "GroupPolicies": [ + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + }, + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupPoliciesByAdmin + +The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags. + +```bash +cosmos.group.v1.Query/GroupPoliciesByAdmin +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByAdmin +``` + +Example Output: + +```bash expandable +{ + "GroupPolicies": [ + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + }, + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### Proposal + +The `Proposal` endpoint allows users to query for proposal by id. 
+ +```bash +cosmos.group.v1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposalId": "1", + "address": "cosmos1..", + "proposers": [ + "cosmos1.." + ], + "submittedAt": "2021-12-17T07:06:26.310638964Z", + "groupVersion": "1", + "GroupPolicyVersion": "1", + "status": "STATUS_SUBMITTED", + "result": "RESULT_UNFINALIZED", + "voteState": { + "yesCount": "0", + "noCount": "0", + "abstainCount": "0", + "vetoCount": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executorResult": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "title": "Title", + "summary": "Summary", + } +} +``` + +#### ProposalsByGroupPolicy + +The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags. + +```bash +cosmos.group.v1.Query/ProposalsByGroupPolicy +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/ProposalsByGroupPolicy +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposalId": "1", + "address": "cosmos1..", + "proposers": [ + "cosmos1.." 
+ ], + "submittedAt": "2021-12-17T08:03:27.099649352Z", + "groupVersion": "1", + "GroupPolicyVersion": "1", + "status": "STATUS_CLOSED", + "result": "RESULT_ACCEPTED", + "voteState": { + "yesCount": "1", + "noCount": "0", + "abstainCount": "0", + "vetoCount": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executorResult": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "title": "Title", + "summary": "Summary", + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### VoteByProposalVoter + +The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address. + +```bash +cosmos.group.v1.Query/VoteByProposalVoter +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VoteByProposalVoter +``` + +Example Output: + +```bash +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } +} +``` + +#### VotesByProposal + +The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags. + +```bash +cosmos.group.v1.Query/VotesByProposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/VotesByProposal +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### VotesByVoter + +The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags. 
+ +```bash +cosmos.group.v1.Query/VotesByVoter +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VotesByVoter +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### REST + +A user can query the `group` module using REST endpoints. + +#### GroupInfo + +The `GroupInfo` endpoint allows users to query for group info by given group id. + +```bash +/cosmos/group/v1/group_info/{group_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_info/1 +``` + +Example Output: + +```bash +{ + "info": { + "id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "total_weight": "3" + } +} +``` + +#### GroupPolicyInfo + +The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy. + +```bash +/cosmos/group/v1/group_policy_info/{address} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policy_info/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "info": { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + } +} +``` + +#### GroupMembers + +The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags. 
+
+```bash
+/cosmos/group/v1/group_members/{group_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_members/1
+```
+
+Example Output:
+
+```bash expandable
+{
+  "members": [
+    {
+      "group_id": "1",
+      "member": {
+        "address": "cosmos1..",
+        "weight": "1",
+        "metadata": "AQ=="
+      }
+    },
+    {
+      "group_id": "1",
+      "member": {
+        "address": "cosmos1..",
+        "weight": "2",
+        "metadata": "AQ=="
+      }
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### GroupsByAdmin
+
+The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags.
+
+```bash
+/cosmos/group/v1/groups_by_admin/{admin}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/groups_by_admin/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "groups": [
+    {
+      "id": "1",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "total_weight": "3"
+    },
+    {
+      "id": "2",
+      "admin": "cosmos1..",
+      "metadata": "AQ==",
+      "version": "1",
+      "total_weight": "3"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+#### GroupPoliciesByGroup
+
+The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags.
+ +```bash +/cosmos/group/v1/group_policies_by_group/{group_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policies_by_group/1 +``` + +Example Output: + +```bash expandable +{ + "group_policies": [ + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + }, + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### GroupPoliciesByAdmin + +The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags. + +```bash +/cosmos/group/v1/group_policies_by_admin/{admin} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policies_by_admin/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "group_policies": [ + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + }, + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +``` + +#### Proposal + +The `Proposal` endpoint allows users to query for proposal by id. + +```bash +/cosmos/group/v1/proposal/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/proposal/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposal_id": "1", + "address": "cosmos1..", + "metadata": "AQ==", + "proposers": [ + "cosmos1.." + ], + "submitted_at": "2021-12-17T07:06:26.310638964Z", + "group_version": "1", + "group_policy_version": "1", + "status": "STATUS_SUBMITTED", + "result": "RESULT_UNFINALIZED", + "vote_state": { + "yes_count": "0", + "no_count": "0", + "abstain_count": "0", + "veto_count": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executor_result": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "100000000" + } + ] + } + ], + "title": "Title", + "summary": "Summary", + } +} +``` + +#### ProposalsByGroupPolicy + +The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags. 
+
+```bash
+/cosmos/group/v1/proposals_by_group_policy/{address}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/proposals_by_group_policy/cosmos1..
+```
+
+Example Output:
+
+```bash expandable
+{
+  "proposals": [
+    {
+      "id": "1",
+      "group_policy_address": "cosmos1..",
+      "metadata": "AQ==",
+      "proposers": [
+        "cosmos1.."
+      ],
+      "submit_time": "2021-12-17T08:03:27.099649352Z",
+      "group_version": "1",
+      "group_policy_version": "1",
+      "status": "STATUS_CLOSED",
+      "result": "RESULT_ACCEPTED",
+      "vote_state": {
+        "yes_count": "1",
+        "no_count": "0",
+        "abstain_count": "0",
+        "veto_count": "0"
+      },
+      "windows": {
+        "min_execution_period": "0s",
+        "voting_period": "432000s"
+      },
+      "executor_result": "EXECUTOR_RESULT_NOT_RUN",
+      "messages": [
+        {
+          "@type": "/cosmos.bank.v1beta1.MsgSend",
+          "from_address": "cosmos1..",
+          "to_address": "cosmos1..",
+          "amount": [
+            {
+              "denom": "stake",
+              "amount": "100000000"
+            }
+          ]
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+#### VoteByProposalVoter
+
+The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address.
+
+```bash
+/cosmos/group/v1/vote_by_proposal_voter/{proposal_id}/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/vote_by_proposal_voter/1/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+  "vote": {
+    "proposal_id": "1",
+    "voter": "cosmos1..",
+    "choice": "CHOICE_YES",
+    "metadata": "AQ==",
+    "submitted_at": "2021-12-17T08:05:02.490164009Z"
+  }
+}
+```
+
+#### VotesByProposal
+
+The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags.
+ +```bash +/cosmos/group/v1/votes_by_proposal/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/votes_by_proposal/1 +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "CHOICE_YES", + "metadata": "AQ==", + "submit_time": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### VotesByVoter + +The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags. + +```bash +/cosmos/group/v1/votes_by_voter/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/votes_by_voter/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "metadata": "AQ==", + "submitted_at": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +## Metadata + +The group module has four locations for metadata where users can provide further context about the on-chain actions they are taking. By default all metadata fields have a 255 character length field where metadata can be stored in json format, either on-chain or off-chain depending on the amount of data required. Here we provide a recommendation for the json structure and where the data should be stored. There are two important factors in making these recommendations. First, that the group and gov modules are consistent with one another, note the number of proposals made by all groups may be quite large. Second, that client applications such as block explorers and governance interfaces have confidence in the consistency of metadata structure across chains. 
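As a rough illustration of the default 255-character limit and JSON recommendation described above, a client or module might validate metadata before submission like this. The helper below is purely hypothetical for illustration; it is not part of the SDK API:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// MaxMetadataLen mirrors the default 255-character limit described above.
const MaxMetadataLen = 255

// validateMetadata checks that metadata fits within the length limit and,
// when non-empty, is well-formed JSON as the recommendations suggest.
func validateMetadata(metadata string) error {
	if len(metadata) > MaxMetadataLen {
		return fmt.Errorf("metadata length %d exceeds limit %d", len(metadata), MaxMetadataLen)
	}
	if metadata == "" {
		return nil
	}
	if !json.Valid([]byte(metadata)) {
		return errors.New("metadata is not valid JSON")
	}
	return nil
}

func main() {
	fmt.Println(validateMetadata(`{"justification": "supports the proposal"}`)) // <nil>
}
```

A chain may of course configure a different limit; the check shown here only captures the default recommendation.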
+ +### Proposal + +Location: off-chain as json object stored on IPFS (mirrors [gov proposal](docs/sdk/v0.50/documentation/module-system/modules/gov/README#metadata)) + +```json +{ + "title": "", + "authors": [""], + "summary": "", + "details": "", + "proposal_forum_url": "", + "vote_option_context": "", +} +``` + + +The `authors` field is an array of strings, this is to allow for multiple authors to be listed in the metadata. +In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility. + + +### Vote + +Location: on-chain as json within 255 character limit (mirrors [gov vote](docs/sdk/v0.50/documentation/module-system/modules/gov/README#metadata)) + +```json +{ + "justification": "", +} +``` + +### Group + +Location: off-chain as json object stored on IPFS + +```json +{ + "name": "", + "description": "", + "group_website_url": "", + "group_forum_url": "", +} +``` + +### Decision policy + +Location: on-chain as json within 255 character limit + +```json +{ + "name": "", + "description": "", +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/mint/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/mint/README.mdx new file mode 100644 index 00000000..03f7948a --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/mint/README.mdx @@ -0,0 +1,431 @@ +--- +title: '`x/mint`' +description: >- + State Minter Params Begin-Block NextInflationRate NextAnnualProvisions + BlockProvision Parameters Events BeginBlocker Client CLI gRPC REST +--- + +## Contents + +* [State](#state) + * [Minter](#minter) + * [Params](#params) +* [Begin-Block](#begin-block) + * [NextInflationRate](#nextinflationrate) + * [NextAnnualProvisions](#nextannualprovisions) + * [BlockProvision](#blockprovision) +* [Parameters](#parameters) +* [Events](#events) + * [BeginBlocker](#beginblocker) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +### 
The Minting Mechanism + +The minting mechanism was designed to: + +* allow for a flexible inflation rate determined by market demand targeting a particular bonded-stake ratio +* effect a balance between market liquidity and staked supply + +In order to best determine the appropriate market rate for inflation rewards, a +moving change rate is used. The moving change rate mechanism ensures that if +the % bonded is either over or under the goal %-bonded, the inflation rate will +adjust to further incentivize or disincentivize being bonded, respectively. Setting the goal +%-bonded at less than 100% encourages the network to maintain some non-staked tokens +which should help provide some liquidity. + +It can be broken down in the following way: + +* If the actual percentage of bonded tokens is below the goal %-bonded the inflation rate will + increase until a maximum value is reached +* If the goal % bonded (67% in Cosmos-Hub) is maintained, then the inflation + rate will stay constant +* If the actual percentage of bonded tokens is above the goal %-bonded the inflation rate will + decrease until a minimum value is reached + +## State + +### Minter + +The minter is a space for holding current inflation information. + +* Minter: `0x00 -> ProtocolBuffer(minter)` + +```protobuf +// Minter represents the minting state. +message Minter { + // current annual inflation rate + string inflation = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // current annual expected provisions + string annual_provisions = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} +``` + +### Params + +The mint module stores its params in state with the prefix of `0x01`, +it can be updated with governance or the address with authority. 
+
+* Params: `mint/params -> legacy_amino(params)`
+
+```protobuf
+// Params defines the parameters for the x/mint module.
+message Params {
+  option (gogoproto.goproto_stringer) = false;
+  option (amino.name) = "cosmos-sdk/x/mint/Params";
+
+  // type of coin to mint
+  string mint_denom = 1;
+  // maximum annual change in inflation rate
+  string inflation_rate_change = 2 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // maximum inflation rate
+  string inflation_max = 3 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // minimum inflation rate
+  string inflation_min = 4 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // goal of percent bonded atoms
+  string goal_bonded = 5 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  // expected blocks per year
+  uint64 blocks_per_year = 6;
+}
+```
+
+## Begin-Block
+
+Minting parameters are recalculated and inflation paid at the beginning of each block.
+
+### Inflation rate calculation
+
+The inflation rate is calculated using an "inflation calculation function" that's
+passed to the `NewAppModule` function. If no function is passed, then the SDK's
+default inflation function will be used (`NextInflationRate`). In case custom
+inflation calculation logic is needed, this can be achieved by defining and
+passing a function that matches `InflationCalculationFn`'s signature.
+
+```go
+type InflationCalculationFn func(ctx sdk.Context, minter Minter, params Params, bondedRatio math.LegacyDec) math.LegacyDec
+```
+
+#### NextInflationRate
+
+The target annual inflation rate is recalculated each block.
+The inflation is also subject to a rate change (positive or negative)
+depending on the distance from the desired ratio (67%). The maximum rate change
+possible is defined to be 13% per year; however, the annual inflation is capped
+between 7% and 20%.
+
+```go expandable
+func NextInflationRate(params Params, bondedRatio math.LegacyDec) (inflation math.LegacyDec) {
+	inflationRateChangePerYear := (1 - bondedRatio/params.GoalBonded) * params.InflationRateChange
+	inflationRateChange := inflationRateChangePerYear / blocksPerYr
+
+	// increase the new annual inflation for this next block
+	inflation += inflationRateChange
+	if inflation > params.InflationMax {
+		inflation = params.InflationMax
+	}
+	if inflation < params.InflationMin {
+		inflation = params.InflationMin
+	}
+
+	return inflation
+}
+```
+
+### NextAnnualProvisions
+
+Calculate the annual provisions based on the current total supply and inflation
+rate. This parameter is calculated once per block.
+
+```go
+func NextAnnualProvisions(params Params, totalSupply math.LegacyDec) (provisions math.LegacyDec) {
+	return Inflation * totalSupply
+}
+```
+
+### BlockProvision
+
+Calculate the provisions generated for each block based on current annual provisions. The provisions are then minted by the `mint` module's `ModuleMinterAccount` and then transferred to the `auth`'s `FeeCollector` `ModuleAccount`.
+
+```go
+func BlockProvision(params Params) sdk.Coin {
+	provisionAmt := AnnualProvisions / params.BlocksPerYear
+	return sdk.NewCoin(params.MintDenom, provisionAmt.Truncate())
+}
+```
+
+## Parameters
+
+The minting module contains the following parameters:
+
+| Key                 | Type            | Example                |
+| ------------------- | --------------- | ---------------------- |
+| MintDenom           | string          | "uatom"                |
+| InflationRateChange | string (dec)    | "0.130000000000000000" |
+| InflationMax        | string (dec)    | "0.200000000000000000" |
+| InflationMin        | string (dec)    | "0.070000000000000000" |
+| GoalBonded          | string (dec)    | "0.670000000000000000" |
+| BlocksPerYear       | string (uint64) | "6311520"              |
+
+## Events
+
+The minting module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key      | Attribute Value      |
+| ---- | ------------------ | -------------------- |
+| mint | bonded\_ratio      | `{bondedRatio}`      |
+| mint | inflation          | `{inflation}`        |
+| mint | annual\_provisions | `{annualProvisions}` |
+| mint | amount             | `{amount}`           |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `mint` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `mint` state.
+ +```shell +simd query mint --help +``` + +##### annual-provisions + +The `annual-provisions` command allows users to query the current minting annual provisions value + +```shell +simd query mint annual-provisions [flags] +``` + +Example: + +```shell +simd query mint annual-provisions +``` + +Example Output: + +```shell +22268504368893.612100895088410693 +``` + +##### inflation + +The `inflation` command allows users to query the current minting inflation value + +```shell +simd query mint inflation [flags] +``` + +Example: + +```shell +simd query mint inflation +``` + +Example Output: + +```shell +0.199200302563256955 +``` + +##### params + +The `params` command allows users to query the current minting parameters + +```shell +simd query mint params [flags] +``` + +Example: + +```yml +blocks_per_year: "4360000" +goal_bonded: "0.670000000000000000" +inflation_max: "0.200000000000000000" +inflation_min: "0.070000000000000000" +inflation_rate_change: "0.130000000000000000" +mint_denom: stake +``` + +### gRPC + +A user can query the `mint` module using gRPC endpoints. 
+ +#### AnnualProvisions + +The `AnnualProvisions` endpoint allows users to query the current minting annual provisions value + +```shell +/cosmos.mint.v1beta1.Query/AnnualProvisions +``` + +Example: + +```shell +grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/AnnualProvisions +``` + +Example Output: + +```json +{ + "annualProvisions": "1432452520532626265712995618" +} +``` + +#### Inflation + +The `Inflation` endpoint allows users to query the current minting inflation value + +```shell +/cosmos.mint.v1beta1.Query/Inflation +``` + +Example: + +```shell +grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Inflation +``` + +Example Output: + +```json +{ + "inflation": "130197115720711261" +} +``` + +#### Params + +The `Params` endpoint allows users to query the current minting parameters + +```shell +/cosmos.mint.v1beta1.Query/Params +``` + +Example: + +```shell +grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Params +``` + +Example Output: + +```json +{ + "params": { + "mintDenom": "stake", + "inflationRateChange": "130000000000000000", + "inflationMax": "200000000000000000", + "inflationMin": "70000000000000000", + "goalBonded": "670000000000000000", + "blocksPerYear": "6311520" + } +} +``` + +### REST + +A user can query the `mint` module using REST endpoints. 
+ +#### annual-provisions + +```shell +/cosmos/mint/v1beta1/annual_provisions +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/annual_provisions" +``` + +Example Output: + +```json +{ + "annualProvisions": "1432452520532626265712995618" +} +``` + +#### inflation + +```shell +/cosmos/mint/v1beta1/inflation +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/inflation" +``` + +Example Output: + +```json +{ + "inflation": "130197115720711261" +} +``` + +#### params + +```shell +/cosmos/mint/v1beta1/params +``` + +Example: + +```shell +curl "localhost:1317/cosmos/mint/v1beta1/params" +``` + +Example Output: + +```json +{ + "params": { + "mintDenom": "stake", + "inflationRateChange": "130000000000000000", + "inflationMax": "200000000000000000", + "inflationMin": "70000000000000000", + "goalBonded": "670000000000000000", + "blocksPerYear": "6311520" + } +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/nft/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/nft/README.mdx new file mode 100644 index 00000000..d55e953b --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/nft/README.mdx @@ -0,0 +1,88 @@ +--- +title: '`x/nft`' +description: '## Abstract' +--- + +## Contents + +## Abstract + +`x/nft` is an implementation of a Cosmos SDK module, per [ADR 43](docs/sdk/next/documentation/legacy/adr-comprehensive), that allows you to create nft classification, create nft, transfer nft, update nft, and support various queries by integrating the module. It is fully compatible with the ERC721 specification. 
+
+* [Concepts](#concepts)
+  * [Class](#class)
+  * [NFT](#nft)
+* [State](#state)
+  * [Class](#class-1)
+  * [NFT](#nft-1)
+  * [NFTOfClassByOwner](#nftofclassbyowner)
+  * [Owner](#owner)
+  * [TotalSupply](#totalsupply)
+* [Messages](#messages)
+  * [MsgSend](#msgsend)
+* [Events](#events)
+
+## Concepts
+
+### Class
+
+The `x/nft` module defines a `Class` struct that describes the common characteristics of a class of nfts. Under a class, you can create a variety of nfts; a class is the equivalent of an ERC721 contract on Ethereum. The design is defined in [ADR 043](docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+### NFT
+
+NFT stands for non-fungible token. Because an NFT is not interchangeable with any other token, it can be used to represent unique things. The nft implemented by this module is fully compatible with the Ethereum ERC721 standard.
+
+## State
+
+### Class
+
+Class is mainly composed of `id`, `name`, `symbol`, `description`, `uri`, `uri_hash` and `data`, where `id` is the unique identifier of the class, similar to an Ethereum ERC721 contract address; the other fields are optional.
+
+* Class: `0x01 | classID | -> ProtocolBuffer(Class)`
+
+### NFT
+
+NFT is mainly composed of `class_id`, `id`, `uri`, `uri_hash` and `data`. Among them, `class_id` and `id` form a two-tuple that uniquely identifies the nft; `uri` and `uri_hash` are optional and identify the off-chain storage location of the nft; and `data` is an `Any` type, so chains integrating `x/nft` can customize the module by extending this field.
+
+* NFT: `0x02 | classID | 0x00 | nftID |-> ProtocolBuffer(NFT)`
+
+### NFTOfClassByOwner
+
+NFTOfClassByOwner exists solely to support querying all nfts by classID and owner.
+
+* NFTOfClassByOwner: `0x03 | owner | 0x00 | classID | 0x00 | nftID |-> 0x01`
+
+### Owner
+
+Since there is no field in NFT that indicates its owner, an additional key-value pair is used to save the ownership of each nft.
With the transfer of an nft, the key-value pair is updated synchronously.
+
+* OwnerKey: `0x04 | classID | 0x00 | nftID |-> owner`
+
+### TotalSupply
+
+TotalSupply tracks the number of all nfts under a certain class. Minting an nft under a class increases the supply by one, and burning one decreases it by one.
+
+* TotalSupply: `0x05 | classID |-> totalSupply`
+
+## Messages
+
+In this section we describe the processing of messages for the NFT module.
+
+
+The validation of `ClassID` and `NftID` is left to the app developer.\
+The SDK does not provide any validation for these fields.
+
+
+### MsgSend
+
+You can use the `MsgSend` message to transfer the ownership of an nft. This is a function provided by the `x/nft` module. Of course, you can use the `Transfer` method to implement your own transfer logic, but you need to pay extra attention to the transfer permissions.
+
+The message handling should fail if:
+
+* the provided `ClassID` does not exist.
+* the provided `Id` does not exist.
+* the provided `Sender` is not the owner of the nft.
+
+## Events
+
+The nft module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.nft.v1beta1).
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/params/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/params/README.mdx
new file mode 100644
index 00000000..d258b01c
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/params/README.mdx
@@ -0,0 +1,79 @@
+---
+title: '`x/params`'
+---
+
+> Note: The Params module has been deprecated in favour of each module housing its own parameters.
+
+## Abstract
+
+Package params provides a globally available parameter store.
+
+There are two main types, Keeper and Subspace. Subspace is an isolated namespace for a
+paramstore, where keys are prefixed by a preconfigured spacename. Keeper has
+permission to access all existing spaces.
+
+Subspace can be used by the individual keepers, which need a private parameter store
+that the other keepers cannot modify. The params Keeper can be used to add a route to the `x/gov` router in order to modify any parameter in case a proposal passes.
+
+The following sections explain how to use the params module for master and user modules.
+
+## Contents
+
+* [Keeper](#keeper)
+* [Subspace](#subspace)
+  * [Key](#key)
+  * [KeyTable](#keytable)
+  * [ParamSet](#paramset)
+
+## Keeper
+
+In the app initialization stage, [subspaces](#subspace) can be allocated for other modules' keepers using `Keeper.Subspace` and are stored in `Keeper.spaces`. Then, those modules can have a reference to their specific parameter store through `Keeper.GetSubspace`.
+
+Example:
+
+```go
+type ExampleKeeper struct {
+	paramSpace paramtypes.Subspace
+}
+
+func (k ExampleKeeper) SetParams(ctx sdk.Context, params types.Params) {
+	k.paramSpace.SetParamSet(ctx, &params)
+}
+```
+
+## Subspace
+
+`Subspace` is a prefixed subspace of the parameter store. Each module which uses the
+parameter store will take a `Subspace` to isolate permission to access.
+
+### Key
+
+Parameter keys are human readable alphanumeric strings. A parameter for the key
+`"ExampleParameter"` is stored under `[]byte("SubspaceName" + "/" + "ExampleParameter")`,
+where `"SubspaceName"` is the name of the subspace.
+
+Subkeys are secondary parameter keys that are used along with a primary parameter key.
+Subkeys can be used for grouping or dynamic parameter key generation during runtime.
+
+### KeyTable
+
+All of the parameter keys that will be used should be registered at compile
+time. `KeyTable` is essentially a `map[string]attribute`, where the `string` is a parameter key.
+
+Currently, `attribute` consists of a `reflect.Type`, which indicates the parameter
+type to check that provided key and value are compatible and registered, as well as a function `ValueValidatorFn` to validate values.
+ +Only primary keys have to be registered on the `KeyTable`. Subkeys inherit the +attribute of the primary key. + +### ParamSet + +Modules often define parameters as a proto message. The generated struct can implement +the `ParamSet` interface to be used with the following methods: + +* `KeyTable.RegisterParamSet()`: registers all parameters in the struct +* `Subspace.{Get, Set}ParamSet()`: gets parameters from & sets parameters to the struct + +The implementor should be a pointer in order to use `GetParamSet()`. diff --git a/docs/sdk/v0.50/documentation/module-system/modules/protocolpool/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/protocolpool/README.mdx new file mode 100644 index 00000000..26851081 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/protocolpool/README.mdx @@ -0,0 +1,694 @@ +--- +title: '`x/protocolpool`' +--- + +## Concepts + +`x/protocolpool` is a supplemental Cosmos SDK module that handles functionality for community pool funds. The module provides a separate module account for the community pool, making it easier to track the pool assets. Starting with v0.53 of the Cosmos SDK, community funds can be tracked using this module instead of the `x/distribution` module. Funds are migrated from the `x/distribution` module's community pool to `x/protocolpool`'s module account automatically. + +This module is `supplemental`; it is not required to run a Cosmos SDK chain. `x/protocolpool` enhances the community pool functionality provided by `x/distribution` and enables custom modules to further extend the community pool.
+ +Note: *as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool.* + +## Usage Limitations + +The following `x/distribution` handlers will now return an error when the `protocolpool` module is used with `x/distribution`: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents. + +## State Transitions + +### FundCommunityPool + +FundCommunityPool can be called by any valid account to send funds to the `x/protocolpool` module account. + +```protobuf + / FundCommunityPool defines a method to allow an account to directly + / fund the community pool. + rpc FundCommunityPool(MsgFundCommunityPool) returns (MsgFundCommunityPoolResponse); +``` + +### CommunityPoolSpend + +CommunityPoolSpend can be called by the module authority (default governance module account) or any account with authorization to spend funds from the `x/protocolpool` module account to a receiver address. + +```protobuf + / CommunityPoolSpend defines a governance operation for sending tokens from + / the community pool in the x/protocolpool module to another account, which + / could be the governance module itself. The authority is defined in the + / keeper. + rpc CommunityPoolSpend(MsgCommunityPoolSpend) returns (MsgCommunityPoolSpendResponse); +``` + +### CreateContinuousFund + +CreateContinuousFund is a message used to initiate a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only on withdraw request for the recipient. The fund distribution continues until the expiry time is reached or the continuous fund request is canceled. +NOTE: This feature is designed to work with the SDK's default bond denom.
+ +```protobuf + / CreateContinuousFund defines a method to distribute a percentage of funds to an address continuously. + / This ContinuousFund can be indefinite or run until a given expiry time. + / Funds come from validator block rewards from x/distribution, but may also come from + / any user who funds the ProtocolPoolEscrow module account directly through x/bank. + rpc CreateContinuousFund(MsgCreateContinuousFund) returns (MsgCreateContinuousFundResponse); +``` + +### CancelContinuousFund + +CancelContinuousFund is a message used to cancel an existing continuous fund proposal for a specific recipient. Cancelling a continuous fund stops further distribution of funds, and the state object is removed from storage. + +```protobuf + / CancelContinuousFund defines a method for cancelling a continuous fund. + rpc CancelContinuousFund(MsgCancelContinuousFund) returns (MsgCancelContinuousFundResponse); +``` + +## Messages + +### MsgFundCommunityPool + +This message sends coins directly from the sender to the community pool. + + +If you know the `x/protocolpool` module account address, you can directly use a bank `send` transaction instead. + + +```protobuf +message MsgFundCommunityPool { + option (cosmos.msg.v1.signer) = "depositor"; + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + string depositor = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + repeated cosmos.base.v1beta1.Coin amount = 2 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; +} + +``` + +* The message will fail if the amount cannot be transferred from the sender to the `x/protocolpool` module account.
+ +```go +func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error { + return k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount) +} +``` + +### MsgCommunityPoolSpend + +This message distributes funds from the `x/protocolpool` module account to the recipient using the `DistributeFromCommunityPool` keeper method. + +```protobuf +// pool to another account. This message is typically executed via a governance +// proposal with the governance module being the executing authority. +message MsgCommunityPoolSpend { + option (cosmos.msg.v1.signer) = "authority"; + + // Authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string recipient = 2; + repeated cosmos.base.v1beta1.Coin amount = 3 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; +} + +``` + +The message will fail under the following conditions: + +* The amount cannot be transferred to the recipient from the `x/protocolpool` module account. +* The `recipient` address is restricted. + +```go +func (k Keeper) DistributeFromCommunityPool(ctx context.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error { + return k.bankKeeper.SendCoinsFromModuleToAccount(ctx, types.ModuleName, receiveAddr, amount) +} +``` + +### MsgCreateContinuousFund + +This message is used to create a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only on withdraw request for the recipient. This fund distribution continues until the expiry time is reached or the continuous fund request is canceled. + +```protobuf + string recipient = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +// MsgUpdateParams is the Msg/UpdateParams request type.
+message MsgUpdateParams { + option (cosmos.msg.v1.signer) = "authority"; + + // authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // params defines the x/protocolpool parameters to update. + // + // NOTE: All parameters must be supplied. + Params params = 2 [(gogoproto.nullable) = false]; +} + +// MsgUpdateParamsResponse defines the response structure for executing a +``` + +The message will fail under the following conditions: + +* The recipient address is empty or restricted. +* The percentage is zero/negative/greater than one. +* The Expiry time is less than the current block time. + + +If two continuous fund proposals to the same address are created, the previous ContinuousFund will be updated with the new ContinuousFund. + + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "cosmossdk.io/math" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) + +type MsgServer struct { + Keeper +} + +var _ types.MsgServer = MsgServer{ +} + +/ NewMsgServerImpl returns an implementation of the protocolpool MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &MsgServer{ + Keeper: keeper +} +} + +func (k MsgServer) + +FundCommunityPool(ctx context.Context, msg *types.MsgFundCommunityPool) (*types.MsgFundCommunityPoolResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +depositor, err := k.authKeeper.AddressCodec().StringToBytes(msg.Depositor) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + + / send funds to community pool module account + if err := k.Keeper.FundCommunityPool(sdkCtx, msg.Amount, depositor); err != nil { + return nil, err +} + +return &types.MsgFundCommunityPoolResponse{ +}, nil +} + +func (k MsgServer) + +CommunityPoolSpend(ctx context.Context, msg *types.MsgCommunityPoolSpend) (*types.MsgCommunityPoolSpendResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + +recipient, err := k.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / distribute funds from community pool module account + if err := k.DistributeFromCommunityPool(sdkCtx, msg.Amount, recipient); err != nil { + return nil, err +} + +sdkCtx.Logger().Debug("transferred from the community pool", "amount", msg.Amount.String(), "recipient", msg.Recipient) + +return &types.MsgCommunityPoolSpendResponse{ +}, nil +} + +func (k MsgServer) + +CreateContinuousFund(ctx context.Context, msg *types.MsgCreateContinuousFund) (*types.MsgCreateContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / deny creation if we know this 
address is blocked from receiving funds + if k.bankKeeper.BlockedAddr(recipient) { + return nil, fmt.Errorf("recipient is blocked in the bank keeper: %s", msg.Recipient) +} + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, err +} + if has { + return nil, fmt.Errorf("continuous fund already exists for recipient %s", msg.Recipient) +} + + / Validate the message fields + err = validateContinuousFund(sdkCtx, *msg) + if err != nil { + return nil, err +} + + / Check if total funds percentage exceeds 100% + / If exceeds, we should not setup continuous fund proposal. + totalStreamFundsPercentage := math.LegacyZeroDec() + +err = k.ContinuousFunds.Walk(sdkCtx, nil, func(key sdk.AccAddress, value types.ContinuousFund) (stop bool, err error) { + totalStreamFundsPercentage = totalStreamFundsPercentage.Add(value.Percentage) + +return false, nil +}) + if err != nil { + return nil, err +} + +totalStreamFundsPercentage = totalStreamFundsPercentage.Add(msg.Percentage) + if totalStreamFundsPercentage.GT(math.LegacyOneDec()) { + return nil, fmt.Errorf("cannot set continuous fund proposal\ntotal funds percentage exceeds 100\ncurrent total percentage: %s", totalStreamFundsPercentage.Sub(msg.Percentage).MulInt64(100).TruncateInt().String()) +} + + / Create continuous fund proposal + cf := types.ContinuousFund{ + Recipient: msg.Recipient, + Percentage: msg.Percentage, + Expiry: msg.Expiry, +} + + / Set continuous fund to the state + err = k.ContinuousFunds.Set(sdkCtx, recipient, cf) + if err != nil { + return nil, err +} + +return &types.MsgCreateContinuousFundResponse{ +}, nil +} + +func (k MsgServer) + +CancelContinuousFund(ctx context.Context, msg *types.MsgCancelContinuousFund) (*types.MsgCancelContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + 
return nil, err +} + canceledHeight := sdkCtx.BlockHeight() + canceledTime := sdkCtx.BlockTime() + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, fmt.Errorf("cannot get continuous fund for recipient %w", err) +} + if !has { + return nil, fmt.Errorf("cannot cancel continuous fund for recipient %s - does not exist", msg.Recipient) +} + if err := k.ContinuousFunds.Remove(sdkCtx, recipient); err != nil { + return nil, fmt.Errorf("failed to remove continuous fund for recipient %s: %w", msg.Recipient, err) +} + +return &types.MsgCancelContinuousFundResponse{ + CanceledTime: canceledTime, + CanceledHeight: uint64(canceledHeight), + Recipient: msg.Recipient, +}, nil +} + +func (k MsgServer) + +UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.GetAuthority()); err != nil { + return nil, err +} + if err := msg.Params.Validate(); err != nil { + return nil, fmt.Errorf("invalid params: %w", err) +} + if err := k.Params.Set(sdkCtx, msg.Params); err != nil { + return nil, fmt.Errorf("failed to set params: %w", err) +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} +``` + +### MsgCancelContinuousFund + +This message is used to cancel an existing continuous fund proposal for a specific recipient. Once canceled, the continuous fund will no longer distribute funds at each begin block, and the state object will be removed. + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/x/protocolpool/proto/cosmos/protocolpool/v1/tx.proto#L136-L161 +``` + +The message will fail under the following conditions: + +* The recipient address is empty or restricted. +* The ContinuousFund for the recipient does not exist. 
+ +```go expandable +package keeper + +import ( + + "context" + "fmt" + "cosmossdk.io/math" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) + +type MsgServer struct { + Keeper +} + +var _ types.MsgServer = MsgServer{ +} + +/ NewMsgServerImpl returns an implementation of the protocolpool MsgServer interface +/ for the provided Keeper. +func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &MsgServer{ + Keeper: keeper +} +} + +func (k MsgServer) + +FundCommunityPool(ctx context.Context, msg *types.MsgFundCommunityPool) (*types.MsgFundCommunityPoolResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +depositor, err := k.authKeeper.AddressCodec().StringToBytes(msg.Depositor) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + + / send funds to community pool module account + if err := k.Keeper.FundCommunityPool(sdkCtx, msg.Amount, depositor); err != nil { + return nil, err +} + +return &types.MsgFundCommunityPoolResponse{ +}, nil +} + +func (k MsgServer) + +CommunityPoolSpend(ctx context.Context, msg *types.MsgCommunityPoolSpend) (*types.MsgCommunityPoolSpendResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + +recipient, err := k.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / distribute funds from community pool module account + if err := k.DistributeFromCommunityPool(sdkCtx, msg.Amount, recipient); err != nil { + return nil, err +} + +sdkCtx.Logger().Debug("transferred from the community pool", "amount", msg.Amount.String(), "recipient", msg.Recipient) + +return &types.MsgCommunityPoolSpendResponse{ 
+}, nil +} + +func (k MsgServer) + +CreateContinuousFund(ctx context.Context, msg *types.MsgCreateContinuousFund) (*types.MsgCreateContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / deny creation if we know this address is blocked from receiving funds + if k.bankKeeper.BlockedAddr(recipient) { + return nil, fmt.Errorf("recipient is blocked in the bank keeper: %s", msg.Recipient) +} + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, err +} + if has { + return nil, fmt.Errorf("continuous fund already exists for recipient %s", msg.Recipient) +} + + / Validate the message fields + err = validateContinuousFund(sdkCtx, *msg) + if err != nil { + return nil, err +} + + / Check if total funds percentage exceeds 100% + / If exceeds, we should not setup continuous fund proposal. 
+ totalStreamFundsPercentage := math.LegacyZeroDec() + +err = k.ContinuousFunds.Walk(sdkCtx, nil, func(key sdk.AccAddress, value types.ContinuousFund) (stop bool, err error) { + totalStreamFundsPercentage = totalStreamFundsPercentage.Add(value.Percentage) + +return false, nil +}) + if err != nil { + return nil, err +} + +totalStreamFundsPercentage = totalStreamFundsPercentage.Add(msg.Percentage) + if totalStreamFundsPercentage.GT(math.LegacyOneDec()) { + return nil, fmt.Errorf("cannot set continuous fund proposal\ntotal funds percentage exceeds 100\ncurrent total percentage: %s", totalStreamFundsPercentage.Sub(msg.Percentage).MulInt64(100).TruncateInt().String()) +} + + / Create continuous fund proposal + cf := types.ContinuousFund{ + Recipient: msg.Recipient, + Percentage: msg.Percentage, + Expiry: msg.Expiry, +} + + / Set continuous fund to the state + err = k.ContinuousFunds.Set(sdkCtx, recipient, cf) + if err != nil { + return nil, err +} + +return &types.MsgCreateContinuousFundResponse{ +}, nil +} + +func (k MsgServer) + +CancelContinuousFund(ctx context.Context, msg *types.MsgCancelContinuousFund) (*types.MsgCancelContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + canceledHeight := sdkCtx.BlockHeight() + canceledTime := sdkCtx.BlockTime() + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, fmt.Errorf("cannot get continuous fund for recipient %w", err) +} + if !has { + return nil, fmt.Errorf("cannot cancel continuous fund for recipient %s - does not exist", msg.Recipient) +} + if err := k.ContinuousFunds.Remove(sdkCtx, recipient); err != nil { + return nil, fmt.Errorf("failed to remove continuous fund for recipient %s: %w", msg.Recipient, err) +} + +return &types.MsgCancelContinuousFundResponse{ + 
 CanceledTime: canceledTime, + CanceledHeight: uint64(canceledHeight), + Recipient: msg.Recipient, +}, nil +} + +func (k MsgServer) + +UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.GetAuthority()); err != nil { + return nil, err +} + if err := msg.Params.Validate(); err != nil { + return nil, fmt.Errorf("invalid params: %w", err) +} + if err := k.Params.Set(sdkCtx, msg.Params); err != nil { + return nil, fmt.Errorf("failed to set params: %w", err) +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} +``` + +## Client + +It takes advantage of `AutoCLI`: + +```go expandable +package protocolpool + +import ( + + "fmt" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + poolv1 "cosmossdk.io/api/cosmos/protocolpool/v1" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: poolv1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "CommunityPool", + Use: "community-pool", + Short: "Query the amount of coins in the community pool", + Example: fmt.Sprintf(`%s query protocolpool community-pool`, version.AppName), +}, + { + RpcMethod: "ContinuousFunds", + Use: "continuous-funds", + Short: "Query all continuous funds", + Example: fmt.Sprintf(`%s query protocolpool continuous-funds`, version.AppName), +}, + { + RpcMethod: "ContinuousFund", + Use: "continuous-fund ", + Short: "Query a continuous fund by its recipient address", + Example: fmt.Sprintf(`%s query protocolpool continuous-fund cosmos1...`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "recipient" +}}, +}, +}, +}, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service:
poolv1.Msg_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "FundCommunityPool", + Use: "fund-community-pool ", + Short: "Funds the community pool with the specified amount", + Example: fmt.Sprintf(`%s tx protocolpool fund-community-pool 100uatom --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "amount" +}}, +}, + { + RpcMethod: "CreateContinuousFund", + Use: "create-continuous-fund ", + Short: "Create continuous fund for a recipient with optional expiry", + Example: fmt.Sprintf(`%s tx protocolpool create-continuous-fund cosmos1... 0.2 2023-11-31T12:34:56.789Z --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "recipient" +}, + { + ProtoField: "percentage" +}, + { + ProtoField: "expiry", + Optional: true +}, +}, + GovProposal: true, +}, + { + RpcMethod: "CancelContinuousFund", + Use: "cancel-continuous-fund ", + Short: "Cancel continuous fund for a specific recipient", + Example: fmt.Sprintf(`%s tx protocolpool cancel-continuous-fund cosmos1... --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "recipient" +}, +}, + GovProposal: true, +}, + { + RpcMethod: "UpdateParams", + Use: "update-params-proposal ", + Short: "Submit a proposal to update protocolpool module params. 
Note: the entire params must be provided.", + Example: fmt.Sprintf(`%s tx protocolpool update-params-proposal '{ "enabled_distribution_denoms": ["stake", "foo"] +}'`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "params" +}}, + GovProposal: true, +}, +}, +}, +} +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/slashing/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/slashing/README.mdx new file mode 100644 index 00000000..936895ac --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/slashing/README.mdx @@ -0,0 +1,858 @@ +--- +title: '`x/slashing`' +description: >- + This section specifies the slashing module of the Cosmos SDK, which implements + functionality first outlined in the Cosmos Whitepaper in June 2016. +--- + +## Abstract + +This section specifies the slashing module of the Cosmos SDK, which implements functionality +first outlined in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in June 2016. + +The slashing module enables Cosmos SDK-based blockchains to disincentivize any attributable action +by a protocol-recognized actor with value at stake by penalizing them ("slashing"). + +Penalties may include, but are not limited to: + +* Burning some amount of their stake +* Removing their ability to vote on future blocks for a period of time. + +This module will be used by the Cosmos Hub, the first hub in the Cosmos ecosystem. 
+ +## Contents + +* [Concepts](#concepts) + * [States](#states) + * [Tombstone Caps](#tombstone-caps) + * [Infraction Timelines](#infraction-timelines) +* [State](#state) + * [Signing Info (Liveness)](#signing-info-liveness) + * [Params](#params) +* [Messages](#messages) + * [Unjail](#unjail) +* [BeginBlock](#beginblock) + * [Liveness Tracking](#liveness-tracking) +* [Hooks](#hooks) +* [Events](#events) +* [Staking Tombstone](#staking-tombstone) +* [Parameters](#parameters) +* [CLI](#cli) + * [Query](#query) + * [Transactions](#transactions) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +### States + +At any given time, there are any number of validators registered in the state +machine. Each block, the top `MaxValidators` (defined by `x/staking`) validators +who are not jailed become *bonded*, meaning that they may propose and vote on +blocks. Validators who are *bonded* are *at stake*, meaning that part or all of +their stake and their delegators' stake is at risk if they commit a protocol fault. + +For each of these validators we keep a `ValidatorSigningInfo` record that contains +information pertaining to the validator's liveness and other infraction-related +attributes. + +### Tombstone Caps + +In order to mitigate the impact of initially likely categories of non-malicious +protocol faults, the Cosmos Hub implements for each validator +a *tombstone* cap, which only allows a validator to be slashed once for a double +sign fault. For example, if you misconfigure your HSM and double-sign a bunch of +old blocks, you'll only be punished for the first double-sign (and then immediately tombstoned). This will still be quite expensive and desirable to avoid, but tombstone caps +somewhat blunt the economic impact of unintentional misconfiguration. + +Liveness faults do not have caps, as they can't stack upon each other.
Liveness bugs are "detected" as soon as the infraction occurs, and the validators are immediately put in jail, so it is not possible for them to commit multiple liveness faults without unjailing in between. + +### Infraction Timelines + +To illustrate how the `x/slashing` module handles submitted evidence through +CometBFT consensus, consider the following examples: + +**Definitions**: + +*\[* : timeline start\ +*]* : timeline end\ +*Cn* : infraction `n` committed\ +*Dn* : infraction `n` discovered\ +*Vb* : validator bonded\ +*Vu* : validator unbonded + +#### Single Double Sign Infraction + +\[----------C1----D1,Vu-----] + +A single infraction is committed then later discovered, at which point the +validator is unbonded and slashed at the full amount for the infraction. + +#### Multiple Double Sign Infractions + +\[----------C1--C2---C3---D1,D2,D3Vu-----] + +Multiple infractions are committed and then later discovered, at which point the +validator is jailed and slashed for only one infraction. Because the validator +is also tombstoned, they can not rejoin the validator set. + +## State + +### Signing Info (Liveness) + +Every block includes a set of precommits by the validators for the previous block, +known as the `LastCommitInfo` provided by CometBFT. A `LastCommitInfo` is valid so +long as it contains precommits from +2/3 of total voting power. + +Proposers are incentivized to include precommits from all validators in the CometBFT `LastCommitInfo` +by receiving additional fees proportional to the difference between the voting +power included in the `LastCommitInfo` and +2/3 (see [fee distribution](docs/sdk/v0.47/documentation/module-system/modules/distribution/README#begin-block)). + +```go +type LastCommitInfo struct { + Round int32 + Votes []VoteInfo +} +``` + +Validators are penalized for failing to be included in the `LastCommitInfo` for some +number of blocks by being automatically jailed, potentially slashed, and unbonded. 
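The +2/3 validity condition on `LastCommitInfo` can be sketched with plain integer arithmetic. The `Signed` field and `hasQuorum` name below are illustrative additions for this sketch, not the CometBFT API:

```go
package main

import "fmt"

// VoteInfo is a simplified stand-in for a validator's precommit record.
type VoteInfo struct {
	Power  int64
	Signed bool
}

// LastCommitInfo mirrors the struct shown above, with per-vote power added
// for this illustration.
type LastCommitInfo struct {
	Round int32
	Votes []VoteInfo
}

// hasQuorum reports whether signed voting power strictly exceeds two thirds
// of total voting power (the +2/3 condition). Using 3*signed > 2*total
// avoids floating-point rounding.
func hasQuorum(ci LastCommitInfo) bool {
	var total, signed int64
	for _, v := range ci.Votes {
		total += v.Power
		if v.Signed {
			signed += v.Power
		}
	}
	return 3*signed > 2*total
}

func main() {
	ci := LastCommitInfo{Votes: []VoteInfo{
		{Power: 10, Signed: true},
		{Power: 10, Signed: true},
		{Power: 10, Signed: false},
	}}
	// exactly 2/3 of the power signed, which is not *more* than 2/3
	fmt.Println(hasQuorum(ci)) // false
}
```

Note that the condition is strict: a commit carrying exactly two thirds of the voting power does not satisfy +2/3.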
+ +Information about validator's liveness activity is tracked through `ValidatorSigningInfo`. +It is indexed in the store as follows: + +* ValidatorSigningInfo: `0x01 | ConsAddrLen (1 byte) | ConsAddress -> ProtocolBuffer(ValSigningInfo)` +* MissedBlocksBitArray: `0x02 | ConsAddrLen (1 byte) | ConsAddress | LittleEndianUint64(signArrayIndex) -> VarInt(didMiss)` (varint is a number encoding format) + +The first mapping allows us to easily lookup the recent signing info for a +validator based on the validator's consensus address. + +The second mapping (`MissedBlocksBitArray`) acts +as a bit-array of size `SignedBlocksWindow` that tells us if the validator missed +the block for a given index in the bit-array. The index in the bit-array is given +as little endian uint64. +The result is a `varint` that takes on `0` or `1`, where `0` indicates the +validator did not miss (did sign) the corresponding block, and `1` indicates +they missed the block (did not sign). + +Note that the `MissedBlocksBitArray` is not explicitly initialized up-front. Keys +are added as we progress through the first `SignedBlocksWindow` blocks for a newly +bonded validator. The `SignedBlocksWindow` parameter defines the size +(number of blocks) of the sliding window used to track validator liveness. + +The information stored for tracking validator liveness is as follows: + +```protobuf +// ValidatorSigningInfo defines a validator's signing info for monitoring their +// liveness activity. +message ValidatorSigningInfo { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // Height at which validator was first a candidate OR was unjailed + int64 start_height = 2; + // Index which is incremented each time the validator was a bonded + // in a block and may have signed a precommit or not. This in conjunction with the + // `SignedBlocksWindow` param determines the index in the `MissedBlocksBitArray`. 
+ int64 index_offset = 3; + // Timestamp until which the validator is jailed due to liveness downtime. + google.protobuf.Timestamp jailed_until = 4 + [(gogoproto.stdtime) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + // Whether or not a validator has been tombstoned (killed out of validator set). It is set + // once the validator commits an equivocation or for any other configured misbehavior. + bool tombstoned = 5; + // A counter kept to avoid unnecessary array reads. + // Note that `Sum(MissedBlocksBitArray)` always equals `MissedBlocksCounter`. + int64 missed_blocks_counter = 6; +} +``` + +### Params + +The slashing module stores its params in state with the prefix `0x00`; +they can be updated via governance or by the address with authority. + +* Params: `0x00 | ProtocolBuffer(Params)` + +```protobuf +// Params represents the parameters used by the slashing module. +message Params { + option (amino.name) = "cosmos-sdk/x/slashing/Params"; + + int64 signed_blocks_window = 1; + bytes min_signed_per_window = 2 [ + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true + ]; + google.protobuf.Duration downtime_jail_duration = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true]; + bytes slash_fraction_double_sign = 4 [ + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true + ]; + bytes slash_fraction_downtime = 5 [ + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true + ]; +} +``` + +## Messages + +In this section we describe the processing of messages for the `slashing` module.
+ +### Unjail + +If a validator was automatically unbonded due to downtime and wishes to come back online & +possibly rejoin the bonded set, it must send `MsgUnjail`: + +```protobuf +/ MsgUnjail is an sdk.Msg used for unjailing a jailed validator, thus returning +/ them into the bonded validator set, so they can begin receiving provisions +/ and rewards again. +message MsgUnjail { + string validator_addr = 1; +} +``` + +Below is pseudocode of the `MsgSrv/Unjail` RPC: + +```go expandable +unjail(tx MsgUnjail) + +validator = getValidator(tx.ValidatorAddr) + if validator == nil + fail with "No validator found" + if getSelfDelegation(validator) == 0 + fail with "validator must self delegate before unjailing" + if !validator.Jailed + fail with "Validator not jailed, cannot unjail" + + info = GetValidatorSigningInfo(operator) + if info.Tombstoned + fail with "Tombstoned validator cannot be unjailed" + if block time < info.JailedUntil + fail with "Validator still jailed, cannot unjail until period has expired" + + validator.Jailed = false + setValidator(validator) + +return +``` + +If the validator has enough stake to be in the top `n = MaximumBondedValidators`, it will be automatically rebonded, +and all delegators still delegated to the validator will be rebonded and begin to again collect +provisions and rewards. + +## BeginBlock + +### Liveness Tracking + +At the beginning of each block, we update the `ValidatorSigningInfo` for each +validator and check if they've crossed below the liveness threshold over a +sliding window. This sliding window is defined by `SignedBlocksWindow` and the +index in this window is determined by `IndexOffset` found in the validator's +`ValidatorSigningInfo`. For each block processed, the `IndexOffset` is incremented +regardless of whether the validator signed or not. Once the index is determined, the +`MissedBlocksBitArray` and `MissedBlocksCounter` are updated accordingly.
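The window bookkeeping just described can be sketched in self-contained Go. The field names follow the text; the window size and `RecordBlock` helper are illustrative, not the module's implementation:

```go
package main

import "fmt"

// SignedBlocksWindow is the sliding window size (illustrative value).
const SignedBlocksWindow = 5

// SigningInfo mirrors the liveness fields of ValidatorSigningInfo.
type SigningInfo struct {
	IndexOffset          int64
	MissedBlocksCounter  int64
	MissedBlocksBitArray [SignedBlocksWindow]bool
}

// RecordBlock updates the bit array and counter for one processed block.
// IndexOffset is incremented regardless of whether the validator signed,
// and the counter only changes when the bit at the window index flips.
func (s *SigningInfo) RecordBlock(signed bool) {
	index := s.IndexOffset % SignedBlocksWindow
	s.IndexOffset++

	missedPrevious := s.MissedBlocksBitArray[index]
	missed := !signed
	switch {
	case !missedPrevious && missed:
		// slot flipped from signed to missed: increment counter
		s.MissedBlocksBitArray[index] = true
		s.MissedBlocksCounter++
	case missedPrevious && !missed:
		// slot flipped from missed to signed: decrement counter
		s.MissedBlocksBitArray[index] = false
		s.MissedBlocksCounter--
	}
	// otherwise the slot is unchanged and the counter stays in sync
}

func main() {
	var info SigningInfo
	for _, signed := range []bool{true, false, false, true, true, false} {
		info.RecordBlock(signed)
	}
	// the sixth block wraps around and overwrites the first window slot
	fmt.Println(info.MissedBlocksCounter) // 3
}
```

This is why `Sum(MissedBlocksBitArray)` always equals `MissedBlocksCounter`: the counter is only adjusted when a bit in the array actually flips.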

Finally, in order to determine if a validator crosses below the liveness threshold,
we fetch the maximum number of blocks missed, `maxMissed`, which is
`SignedBlocksWindow - (MinSignedPerWindow * SignedBlocksWindow)`, and the minimum
height at which we can determine liveness, `minHeight`. If the current block is
greater than `minHeight` and the validator's `MissedBlocksCounter` is greater than
`maxMissed`, they will be slashed by `SlashFractionDowntime`, will be jailed
for `DowntimeJailDuration`, and have the following values reset:
`MissedBlocksBitArray`, `MissedBlocksCounter`, and `IndexOffset`.

**Note**: Liveness slashes do **NOT** lead to a tombstoning.

```go expandable
height := block.Height

for vote in block.LastCommitInfo.Votes {
  signInfo := GetValidatorSigningInfo(vote.Validator.Address)

  // This is a relative index, so it counts blocks the validator SHOULD have
  // signed. We use the 0-value default signing info if not present, except for
  // start height.
  index := signInfo.IndexOffset % SignedBlocksWindow()
  signInfo.IndexOffset++

  // Update MissedBlocksBitArray and MissedBlocksCounter. The MissedBlocksCounter
  // just tracks the sum of MissedBlocksBitArray. That way we avoid needing to
  // read/write the whole array each time.
  missedPrevious := GetValidatorMissedBlockBitArray(vote.Validator.Address, index)
  missed := !vote.SignedLastBlock
  switch {
  case !missedPrevious && missed:
    // array index has changed from not missed to missed, increment counter
    SetValidatorMissedBlockBitArray(vote.Validator.Address, index, true)
    signInfo.MissedBlocksCounter++

  case missedPrevious && !missed:
    // array index has changed from missed to not missed, decrement counter
    SetValidatorMissedBlockBitArray(vote.Validator.Address, index, false)
    signInfo.MissedBlocksCounter--

  default:
    // array index at this index has not changed; no need to update counter
  }

  if missed {
    // emit events...
  }

  minHeight := signInfo.StartHeight + SignedBlocksWindow()
  maxMissed := SignedBlocksWindow() - MinSignedPerWindow()

  // If we are past the minimum height and the validator has missed too many
  // blocks, jail and slash them.
  if height > minHeight && signInfo.MissedBlocksCounter > maxMissed {
    validator := ValidatorByConsAddr(vote.Validator.Address)

    // emit events...

    // We need to retrieve the stake distribution which signed the block, so we
    // subtract ValidatorUpdateDelay from the block height, and subtract an
    // additional 1 since this is the LastCommit.
    //
    // Note, that this CAN result in a negative "distributionHeight" up to
    // -ValidatorUpdateDelay-1, i.e. at the end of the pre-genesis block (none) = at the beginning of the genesis block.
    // That's fine since this is just used to filter unbonding delegations & redelegations.
    distributionHeight := height - sdk.ValidatorUpdateDelay - 1

    SlashWithInfractionReason(vote.Validator.Address, distributionHeight, vote.Validator.Power, SlashFractionDowntime(), stakingtypes.Downtime)
    Jail(vote.Validator.Address)

    signInfo.JailedUntil = block.Time.Add(DowntimeJailDuration())

    // We need to reset the counter & array so that the validator won't be
    // immediately slashed for downtime upon rebonding.
    signInfo.MissedBlocksCounter = 0
    signInfo.IndexOffset = 0
    ClearValidatorMissedBlockBitArray(vote.Validator.Address)
  }

  SetValidatorSigningInfo(vote.Validator.Address, signInfo)
}
```

## Hooks

This section contains a description of the module's `hooks`. Hooks are operations that are executed automatically when events are raised.

### Staking hooks

The slashing module implements the `StakingHooks` interface defined in `x/staking`; the hooks are used for record-keeping of validator information. During app initialization, these hooks should be registered in the staking module struct.
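The registration pattern can be sketched without the SDK: a multi-hooks wrapper fans each staking event out to every registered module. The interface below is a trimmed stand-in, not the real `StakingHooks` interface (which has more methods and SDK-typed arguments); `multiHooks` mirrors the role of `stakingtypes.NewMultiStakingHooks`.

```go
package main

import "fmt"

// A trimmed stand-in for the x/staking hooks interface; the real
// StakingHooks interface has more methods and SDK-typed arguments.
type stakingHooks interface {
	AfterValidatorBonded(consAddr string)
	AfterValidatorRemoved(consAddr string)
}

// multiHooks fans one staking event out to every registered module's hooks,
// the same pattern used to wire slashing (and other modules) into the
// staking keeper at app initialization.
type multiHooks []stakingHooks

func (m multiHooks) AfterValidatorBonded(a string) {
	for _, h := range m {
		h.AfterValidatorBonded(a)
	}
}

func (m multiHooks) AfterValidatorRemoved(a string) {
	for _, h := range m {
		h.AfterValidatorRemoved(a)
	}
}

// slashingHooks does the module's record-keeping: it creates signing info
// when a validator bonds and drops the record when it is removed.
type slashingHooks struct{ signingInfo map[string]bool }

func (s slashingHooks) AfterValidatorBonded(a string)  { s.signingInfo[a] = true }
func (s slashingHooks) AfterValidatorRemoved(a string) { delete(s.signingInfo, a) }

func main() {
	slash := slashingHooks{signingInfo: map[string]bool{}}
	hooks := multiHooks{slash}

	hooks.AfterValidatorBonded("consaddr-1")
	fmt.Println(slash.signingInfo["consaddr-1"]) // true
	hooks.AfterValidatorRemoved("consaddr-1")
	fmt.Println(len(slash.signingInfo)) // 0
}
```

This keeps `x/staking` decoupled from the modules that react to its events: staking calls one hook set, and the wrapper distributes the call.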

The following hooks impact the slashing state:

* `AfterValidatorBonded` creates a `ValidatorSigningInfo` instance as described in the following section.
* `AfterValidatorCreated` stores a validator's consensus key.
* `AfterValidatorRemoved` removes a validator's consensus key.

### Validator Bonded

Upon successful first-time bonding of a new validator, we create a new `ValidatorSigningInfo` structure for the
now-bonded validator, with `StartHeight` set to the height of the current block.

If the validator was out of the validator set and gets bonded again, its new bonded height is set.

```go expandable
onValidatorBonded(address sdk.ValAddress)
  signingInfo, found = GetValidatorSigningInfo(address)
  if !found {
    signingInfo = ValidatorSigningInfo {
      StartHeight         : CurrentHeight,
      IndexOffset         : 0,
      JailedUntil         : time.Unix(0, 0),
      Tombstoned          : false,
      MissedBlocksCounter : 0
    }
  } else {
    signingInfo.StartHeight = CurrentHeight
  }

  setValidatorSigningInfo(signingInfo)

return
```

## Events

The slashing module emits the following events:

### MsgServer

#### MsgUnjail

| Type    | Attribute Key | Attribute Value      |
| ------- | ------------- | -------------------- |
| message | module        | slashing             |
| message | sender        | `{validatorAddress}` |

### Keeper

#### BeginBlocker: HandleValidatorSignature

| Type  | Attribute Key | Attribute Value               |
| ----- | ------------- | ----------------------------- |
| slash | address       | `{validatorConsensusAddress}` |
| slash | power         | `{validatorPower}`            |
| slash | reason        | `{slashReason}`               |
| slash | jailed \[0]   | `{validatorConsensusAddress}` |
| slash | burned coins  | `{math.Int}`                  |

* \[0] Only included if the validator is jailed.

| Type     | Attribute Key  | Attribute Value               |
| -------- | -------------- | ----------------------------- |
| liveness | address        | `{validatorConsensusAddress}` |
| liveness | missed\_blocks | `{missedBlocksCounter}`       |
| liveness | height         | `{blockHeight}`               |

#### Slash

* same as `"slash"` event from `HandleValidatorSignature`, but without the `jailed` attribute.

#### Jail

| Type  | Attribute Key | Attribute Value      |
| ----- | ------------- | -------------------- |
| slash | jailed        | `{validatorAddress}` |

## Staking Tombstone

### Abstract

In the current implementation of the `slashing` module, when the consensus engine
informs the state machine of a validator's consensus fault, the validator is
partially slashed and put into a "jail period", a period of time in which they
are not allowed to rejoin the validator set. However, because of the nature of
consensus faults and ABCI, there can be a delay between an infraction occurring
and evidence of the infraction reaching the state machine (this is one of the
primary reasons for the existence of the unbonding period).

> Note: The tombstone concept only applies to faults that have a delay between
> the infraction occurring and evidence reaching the state machine. For example,
> evidence of a validator double signing may take a while to reach the state machine
> due to unpredictable evidence gossip layer delays and the ability of validators to
> selectively reveal double-signatures (e.g. to infrequently-online light clients).
> Liveness slashing, on the other hand, is detected immediately as soon as the
> infraction occurs, and therefore no slashing period is needed. A validator is
> immediately put into jail period, and they cannot commit another liveness fault
> until they unjail. In the future, there may be other types of byzantine faults
> that have delays (for example, submitting evidence of an invalid proposal as a transaction).
> When implemented, it will have to be decided whether these future types of
> byzantine faults will result in a tombstoning (and if not, the slash amounts
> will not be capped by a slashing period).

In the current system design, once a validator is put in jail for a consensus
fault, after the `JailPeriod` they are allowed to send a transaction to `unjail`
themselves, and thus rejoin the validator set.

One of the "design desires" of the `slashing` module is that if multiple
infractions occur before evidence is executed (and a validator is put in jail),
they should only be punished for the single worst infraction, not cumulatively.
For example, if the sequence of events is:

1. Validator A commits Infraction 1 (worth 30% slash)
2. Validator A commits Infraction 2 (worth 40% slash)
3. Validator A commits Infraction 3 (worth 35% slash)
4. Evidence for Infraction 1 reaches state machine (and validator is put in jail)
5. Evidence for Infraction 2 reaches state machine
6. Evidence for Infraction 3 reaches state machine

Only Infraction 2 should have its slash take effect, as it is the highest. This
is done so that in the case of the compromise of a validator's consensus key,
they will only be punished once, even if the hacker double-signs many blocks.
Because the unjailing has to be done with the validator's operator key, they
have a chance to re-secure their consensus key, and then signal that they are
ready using their operator key. We call this period during which we track only
the max infraction the "slashing period".

Once a validator rejoins by unjailing themselves, we begin a new slashing period;
if they commit a new infraction after unjailing, it gets slashed cumulatively on
top of the worst infraction from the previous slashing period.
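The "max slash" bookkeeping described above can be sketched as follows. This is an illustrative sketch, not the module's implementation: `slashingPeriod` and `record` are hypothetical names, and fractions are in basis points (10000 = 100%) to keep the arithmetic exact, whereas the SDK uses fixed-point `Dec` values.

```go
package main

import "fmt"

// slashingPeriod tracks only the worst infraction seen since the validator
// last unjailed.
type slashingPeriod struct{ maxBps int64 }

// record returns the additional amount (in bps) to slash when evidence for an
// infraction arrives: only the amount by which it exceeds the period's worst,
// so the total slashed over the period equals the single worst infraction.
func (p *slashingPeriod) record(bps int64) int64 {
	if bps <= p.maxBps {
		return 0
	}
	extra := bps - p.maxBps
	p.maxBps = bps
	return extra
}

func main() {
	var p slashingPeriod
	// Evidence for infractions worth 30%, 40%, then 35% arrives in order:
	fmt.Println(p.record(3000)) // 3000: first evidence slashes the full 30%
	fmt.Println(p.record(4000)) // 1000: top up to the new worst, 40% total
	fmt.Println(p.record(3500)) // 0:    already covered by the 40% slash
}
```

Summed over the period, the validator loses exactly 40% — the single worst infraction — no matter how many pieces of evidence arrive or in what order their amounts compare.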

However, while infractions are grouped based off of the slashing periods, because
evidence can be submitted up to an `unbondingPeriod` after the infraction, we
still have to allow for evidence to be submitted for previous slashing periods.
For example, if the sequence of events is:

1. Validator A commits Infraction 1 (worth 30% slash)
2. Validator A commits Infraction 2 (worth 40% slash)
3. Evidence for Infraction 1 reaches state machine (and Validator A is put in jail)
4. Validator A unjails

We are now in a new slashing period; however, we still have to keep the door open
for the previous infraction, as the evidence for Infraction 2 may still come in.
As the number of slashing periods increases, it creates more complexity as we have
to keep track of the highest infraction amount for every single slashing period.

> Note: Currently, according to the `slashing` module spec, a new slashing period
> is created every time a validator is unbonded then rebonded. This should probably
> be changed to jailed/unjailed. See issue [#3205](https://github.com/cosmos/cosmos-sdk/issues/3205)
> for further details. For the remainder of this, I will assume that we only start
> a new slashing period when a validator gets unjailed.

The maximum number of slashing periods is `UnbondingPeriod / JailPeriod`.
The current defaults in Gaia for the `UnbondingPeriod` and `JailPeriod` are 3 weeks
and 2 days, respectively. This means there could potentially be up to 11 slashing
periods concurrently being tracked per validator. If we set `JailPeriod >= UnbondingPeriod`,
we only have to track 1 slashing period (i.e. not have to track slashing periods).

Currently, in the jail period implementation, once a validator unjails, all of
their delegators who are delegated to them (and haven't unbonded / redelegated away)
stay with them.
Given that consensus safety faults are so egregious +(way more so than liveness faults), it is probably prudent to have delegators not +"auto-rebond" to the validator. + +#### Proposal: infinite jail + +We propose setting the "jail time" for a +validator who commits a consensus safety fault, to `infinite` (i.e. a tombstone state). +This essentially kicks the validator out of the validator set and does not allow +them to re-enter the validator set. All of their delegators (including the operator themselves) +have to either unbond or redelegate away. The validator operator can create a new +validator if they would like, with a new operator key and consensus key, but they +have to "re-earn" their delegations back. + +Implementing the tombstone system and getting rid of the slashing period tracking +will make the `slashing` module way simpler, especially because we can remove all +of the hooks defined in the `slashing` module consumed by the `staking` module +(the `slashing` module still consumes hooks defined in `staking`). + +#### Single slashing amount + +Another optimization that can be made is that if we assume that all ABCI faults +for CometBFT consensus are slashed at the same level, we don't have to keep +track of "max slash". Once an ABCI fault happens, we don't have to worry about +comparing potential future ones to find the max. + +Currently the only CometBFT ABCI fault is: + +* Unjustified precommits (double signs) + +It is currently planned to include the following fault in the near future: + +* Signing a precommit when you're in unbonding phase (needed to make light client bisection safe) + +Given that these faults are both attributable byzantine faults, we will likely +want to slash them equally, and thus we can enact the above change. 
+ +> Note: This change may make sense for current CometBFT consensus, but maybe +> not for a different consensus algorithm or future versions of CometBFT that +> may want to punish at different levels (for example, partial slashing). + +## Parameters + +The slashing module contains the following parameters: + +| Key | Type | Example | +| ----------------------- | -------------- | ---------------------- | +| SignedBlocksWindow | string (int64) | "100" | +| MinSignedPerWindow | string (dec) | "0.500000000000000000" | +| DowntimeJailDuration | string (ns) | "600000000000" | +| SlashFractionDoubleSign | string (dec) | "0.050000000000000000" | +| SlashFractionDowntime | string (dec) | "0.010000000000000000" | + +## CLI + +A user can query and interact with the `slashing` module using the CLI. + +### Query + +The `query` commands allow users to query `slashing` state. + +```shell +simd query slashing --help +``` + +#### params + +The `params` command allows users to query genesis parameters for the slashing module. + +```shell +simd query slashing params [flags] +``` + +Example: + +```shell +simd query slashing params +``` + +Example Output: + +```yml +downtime_jail_duration: 600s +min_signed_per_window: "0.500000000000000000" +signed_blocks_window: "100" +slash_fraction_double_sign: "0.050000000000000000" +slash_fraction_downtime: "0.010000000000000000" +``` + +#### signing-info + +The `signing-info` command allows users to query signing-info of the validator using consensus public key. 

```shell
simd query slashing signing-info [validator-conspub] [flags]
```

Example:

```shell
simd query slashing signing-info '{"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys6jD5B6tPgC8="}'
```

Example Output:

```yml
address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
index_offset: "2068"
jailed_until: "1970-01-01T00:00:00Z"
missed_blocks_counter: "0"
start_height: "0"
tombstoned: false
```

#### signing-infos

The `signing-infos` command allows users to query the signing infos of all validators.

```shell
simd query slashing signing-infos [flags]
```

Example:

```shell
simd query slashing signing-infos
```

Example Output:

```yml
info:
- address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
  index_offset: "2075"
  jailed_until: "1970-01-01T00:00:00Z"
  missed_blocks_counter: "0"
  start_height: "0"
  tombstoned: false
pagination:
  next_key: null
  total: "0"
```

### Transactions

The `tx` commands allow users to interact with the `slashing` module.

```bash
simd tx slashing --help
```

#### unjail

The `unjail` command allows users to unjail a validator previously jailed for downtime.

```bash
simd tx slashing unjail --from mykey [flags]
```

Example:

```bash
simd tx slashing unjail --from mykey
```

### gRPC

A user can query the `slashing` module using gRPC endpoints.

#### Params

The `Params` endpoint allows users to query the parameters of the slashing module.

```shell
cosmos.slashing.v1beta1.Query/Params
```

Example:

```shell
grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/Params
```

Example Output:

```json
{
  "params": {
    "signedBlocksWindow": "100",
    "minSignedPerWindow": "NTAwMDAwMDAwMDAwMDAwMDAw",
    "downtimeJailDuration": "600s",
    "slashFractionDoubleSign": "NTAwMDAwMDAwMDAwMDAwMDA=",
    "slashFractionDowntime": "MTAwMDAwMDAwMDAwMDAwMDA="
  }
}
```

#### SigningInfo

The `SigningInfo` endpoint queries the signing info of a given consensus address.

```shell
cosmos.slashing.v1beta1.Query/SigningInfo
```

Example:

```shell
grpcurl -plaintext -d '{"cons_address":"cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c"}' localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfo
```

Example Output:

```json
{
  "valSigningInfo": {
    "address": "cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c",
    "indexOffset": "3493",
    "jailedUntil": "1970-01-01T00:00:00Z"
  }
}
```

#### SigningInfos

The `SigningInfos` endpoint queries the signing info of all validators.

```shell
cosmos.slashing.v1beta1.Query/SigningInfos
```

Example:

```shell
grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfos
```

Example Output:

```json expandable
{
  "info": [
    {
      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
      "indexOffset": "2467",
      "jailedUntil": "1970-01-01T00:00:00Z"
    }
  ],
  "pagination": {
    "total": "1"
  }
}
```

### REST

A user can query the `slashing` module using REST endpoints.

#### Params

```shell
/cosmos/slashing/v1beta1/params
```

Example:

```shell
curl "localhost:1317/cosmos/slashing/v1beta1/params"
```

Example Output:

```json
{
  "params": {
    "signed_blocks_window": "100",
    "min_signed_per_window": "0.500000000000000000",
    "downtime_jail_duration": "600s",
    "slash_fraction_double_sign": "0.050000000000000000",
    "slash_fraction_downtime": "0.010000000000000000"
  }
}
```

#### signing\_info

```shell
/cosmos/slashing/v1beta1/signing_infos/%s
```

Example:

```shell
curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos/cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c"
```

Example Output:

```json
{
  "val_signing_info": {
    "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
    "start_height": "0",
    "index_offset": "4184",
    "jailed_until": "1970-01-01T00:00:00Z",
    "tombstoned": false,
    "missed_blocks_counter": "0"
  }
}
```

#### signing\_infos

```shell
/cosmos/slashing/v1beta1/signing_infos
```

Example:

```shell
curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos"
```

Example Output:

```json expandable
{
  "info": [
    {
      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
      "start_height": "0",
      "index_offset": "4169",
      "jailed_until": "1970-01-01T00:00:00Z",
      "tombstoned": false,
      "missed_blocks_counter": "0"
    }
  ],
  "pagination": {
    "next_key": null,
    "total": "1"
  }
}
```
diff --git a/docs/sdk/v0.50/documentation/module-system/modules/staking/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/staking/README.mdx
new file mode 100644
index 00000000..4d199c7a
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/modules/staking/README.mdx
@@ -0,0 +1,3851 @@
---
title: '`x/staking`'
description: >-
  This paper specifies the Staking module of the Cosmos SDK that was first
  described in the Cosmos Whitepaper in June 2016.
---

## Abstract

This paper specifies the Staking module of the Cosmos SDK that was first
described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper)
in June 2016.

The module enables a Cosmos SDK-based blockchain to support an advanced
Proof-of-Stake (PoS) system. In this system, holders of the native staking token of
the chain can become validators and can delegate tokens to validators,
ultimately determining the effective validator set for the system.

This module is used in the Cosmos Hub, the first Hub in the Cosmos
network.

## Contents

* [State](#state)
  * [Pool](#pool)
  * [LastTotalPower](#lasttotalpower)
  * [ValidatorUpdates](#validatorupdates)
  * [UnbondingID](#unbondingid)
  * [Params](#params)
  * [Validator](#validator)
  * [Delegation](#delegation)
  * [UnbondingDelegation](#unbondingdelegation)
  * [Redelegation](#redelegation)
  * [Queues](#queues)
  * [HistoricalInfo](#historicalinfo)
* [State Transitions](#state-transitions)
  * [Validators](#validators)
  * [Delegations](#delegations)
  * [Slashing](#slashing)
  * [How Shares are calculated](#how-shares-are-calculated)
* [Messages](#messages)
  * [MsgCreateValidator](#msgcreatevalidator)
  * [MsgEditValidator](#msgeditvalidator)
  * [MsgDelegate](#msgdelegate)
  * [MsgUndelegate](#msgundelegate)
  * [MsgCancelUnbondingDelegation](#msgcancelunbondingdelegation)
  * [MsgBeginRedelegate](#msgbeginredelegate)
  * [MsgUpdateParams](#msgupdateparams)
* [Begin-Block](#begin-block)
  * [Historical Info Tracking](#historical-info-tracking)
* [End-Block](#end-block)
  * [Validator Set Changes](#validator-set-changes)
  * [Queues](#queues-1)
* [Hooks](#hooks)
* [Events](#events)
  * [EndBlocker](#endblocker)
  * [Msg's](#msgs)
* [Parameters](#parameters)
* [Client](#client)
  * [CLI](#cli)
  * [gRPC](#grpc)
  * [REST](#rest)

## State

### Pool

Pool is used for tracking the bonded and not-bonded token supply of the bond denomination.

### LastTotalPower

LastTotalPower tracks the total amount of bonded tokens recorded during the previous end block.
Store entries prefixed with "Last" must remain unchanged until EndBlock.

* LastTotalPower: `0x12 -> ProtocolBuffer(math.Int)`

### ValidatorUpdates

ValidatorUpdates contains the validator updates returned to ABCI at the end of every block.
The values are overwritten in every block.

* ValidatorUpdates `0x61 -> []abci.ValidatorUpdate`

### UnbondingID

UnbondingID stores the ID of the latest unbonding operation. It enables creating unique IDs for unbonding operations, i.e., UnbondingID is incremented every time a new unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated.

* UnbondingID: `0x37 -> uint64`

### Params

The staking module stores its params in state with the prefix of `0x51`.
They can be updated via governance or by the address with authority.

* Params: `0x51 | ProtocolBuffer(Params)`

```protobuf
// Params defines the parameters for the x/staking module.
message Params {
  option (amino.name) = "cosmos-sdk/x/staking/Params";
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // unbonding_time is the time duration of unbonding.
  google.protobuf.Duration unbonding_time = 1
      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true];
  // max_validators is the maximum number of validators.
  uint32 max_validators = 2;
  // max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio).
  uint32 max_entries = 3;
  // historical_entries is the number of historical entries to persist.
  uint32 historical_entries = 4;
  // bond_denom defines the bondable coin denomination.
  string bond_denom = 5;
  // min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators
  string min_commission_rate = 6 [
    (gogoproto.moretags) = "yaml:\"min_commission_rate\"",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
}
```

### Validator

Validators can have one of three statuses:

* `Unbonded`: The validator is not in the active set. They cannot sign blocks and do not earn
  rewards. They can receive delegations.
* `Bonded`: Once the validator receives sufficient bonded tokens they automatically join the
  active set during [`EndBlock`](#validator-set-changes) and their status is updated to `Bonded`.
  They are signing blocks and receiving rewards. They can receive further delegations.
  They can be slashed for misbehavior. Delegators to this validator who unbond their delegation
  must wait the duration of the UnbondingTime, a chain-specific param, during which time
  they are still slashable for offences of the source validator if those offences were committed
  during the period of time that the tokens were bonded.
* `Unbonding`: When a validator leaves the active set, either by choice or due to slashing, jailing or
  tombstoning, an unbonding of all their delegations begins. All delegations must then wait the UnbondingTime
  before their tokens are moved to their accounts from the `BondedPool`.

Tombstoning is permanent: once tombstoned, a validator's consensus key can not be reused within the chain where the tombstoning happened.

Validator objects should be primarily stored and accessed by the
`OperatorAddr`, an SDK validator address for the operator of the validator. Two
additional indices are maintained per validator object in order to fulfill
required lookups for slashing and validator-set updates.
A third special index +(`LastValidatorPower`) is also maintained which however remains constant +throughout each block, unlike the first two indices which mirror the validator +records within a block. + +* Validators: `0x21 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(validator)` +* ValidatorsByConsAddr: `0x22 | ConsAddrLen (1 byte) | ConsAddr -> OperatorAddr` +* ValidatorsByPower: `0x23 | BigEndian(ConsensusPower) | OperatorAddrLen (1 byte) | OperatorAddr -> OperatorAddr` +* LastValidatorsPower: `0x11 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(ConsensusPower)` +* ValidatorsByUnbondingID: `0x38 | UnbondingID -> 0x21 | OperatorAddrLen (1 byte) | OperatorAddr` + +`Validators` is the primary index - it ensures that each operator can have only one +associated validator, where the public key of that validator can change in the +future. Delegators can refer to the immutable operator of the validator, without +concern for the changing public key. + +`ValidatorsByUnbondingID` is an additional index that enables lookups for +validators by the unbonding IDs corresponding to their current unbonding. + +`ValidatorByConsAddr` is an additional index that enables lookups for slashing. +When CometBFT reports evidence, it provides the validator address, so this +map is needed to find the operator. Note that the `ConsAddr` corresponds to the +address which can be derived from the validator's `ConsPubKey`. + +`ValidatorsByPower` is an additional index that provides a sorted list of +potential validators to quickly determine the current active set. Here +ConsensusPower is validator.Tokens/10^6 by default. Note that all validators +where `Jailed` is true are not stored within this index. + +`LastValidatorsPower` is a special index that provides a historical list of the +last-block's bonded validators. This index remains constant during a block but +is updated during the validator set update process which takes place in [`EndBlock`](#end-block). 
+ +Each validator's state is stored in a `Validator` struct: + +```protobuf +// Validator defines a validator, together with the total amount of the +// Validator's bond shares and their exchange rate to coins. Slashing results in +// a decrease in the exchange rate, allowing correct calculation of future +// undelegations without iterating over delegators. When coins are delegated to +// this validator, the validator is credited with a delegation whose number of +// bond shares is based on the amount of coins delegated divided by the current +// exchange rate. Voting power can be calculated as total bonded shares +// multiplied by exchange rate. +message Validator { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + + // operator_address defines the address of the validator's operator; bech encoded in JSON. + string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // consensus_pubkey is the consensus public key of the validator, as a Protobuf Any. + google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"]; + // jailed defined whether the validator has been jailed from bonded status or not. + bool jailed = 3; + // status is the validator status (bonded/unbonding/unbonded). + BondStatus status = 4; + // tokens define the delegated tokens (incl. self-delegation). + string tokens = 5 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // delegator_shares defines total shares issued to a validator's delegators. + string delegator_shares = 6 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // description defines the description terms for the validator. 
  Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  // unbonding_height defines, if unbonding, the height at which this validator has begun unbonding.
  int64 unbonding_height = 8;
  // unbonding_time defines, if unbonding, the min time for the validator to complete unbonding.
  google.protobuf.Timestamp unbonding_time = 9
      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
  // commission defines the commission parameters.
  Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  // min_self_delegation is the validator's self declared minimum self delegation.
  //
  // Since: cosmos-sdk 0.46
  string min_self_delegation = 11 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false
  ];

  // strictly positive if this validator's unbonding has been stopped by external modules
  int64 unbonding_on_hold_ref_count = 12;

  // list of unbonding ids, each uniquely identifying an unbonding of this validator
  repeated uint64 unbonding_ids = 13;
}
```

```protobuf
// CommissionRates defines the initial commission rates to be used for creating
// a validator.
message CommissionRates {
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // rate is the commission rate charged to delegators, as a fraction.
  string rate = 1 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
  // max_rate defines the maximum commission rate which validator can ever charge, as a fraction.
  string max_rate = 2 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
  // max_change_rate defines the maximum daily increase of the validator commission, as a fraction.
  string max_change_rate = 3 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
}

// Commission defines commission parameters for a given validator.
message Commission {
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // commission_rates defines the initial commission rates to be used for creating a validator.
  CommissionRates commission_rates = 1
      [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  // update_time is the last time the commission rate was changed.
  google.protobuf.Timestamp update_time = 2
      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
}

// Description defines a validator description.
message Description {
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // moniker defines a human-readable name for the validator.
  string moniker = 1;
  // identity defines an optional identity signature (ex. UPort or Keybase).
  string identity = 2;
  // website defines an optional website link.
  string website = 3;
  // security_contact defines an optional email for security contact.
  string security_contact = 4;
  // details define other optional details.
  string details = 5;
}
```

### Delegation

Delegations are identified by combining `DelegatorAddr` (the address of the delegator)
with the `ValidatorAddr`. Delegations are indexed in the store as follows:

* Delegation: `0x31 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(delegation)`

Stake holders may delegate coins to validators; under this circumstance their
funds are held in a `Delegation` data structure. It is owned by one
delegator, and is associated with the shares for one validator. The sender of
the transaction is the owner of the bond.
+
+```protobuf
+// Delegation represents the bond with tokens held by an account. It is
+// owned by one delegator, and is associated with the voting power of one
+// validator.
+message Delegation {
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.goproto_stringer) = false;
+
+  // delegator_address is the bech32-encoded address of the delegator.
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // validator_address is the bech32-encoded address of the validator.
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // shares define the delegation shares received.
+  string shares = 3 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+```
+
+#### Delegator Shares
+
+When one delegates tokens to a validator, they are issued a number of delegator shares based on a
+dynamic exchange rate, calculated as follows from the total number of tokens delegated to the
+validator and the number of shares issued so far:
+
+`Shares per Token = validator.TotalShares() / validator.Tokens()`
+
+Only the number of shares received is stored on the DelegationEntry. When a delegator then
+undelegates, the token amount they receive is calculated from the number of shares they currently
+hold and the inverse exchange rate:
+
+`Tokens per Share = validator.Tokens() / validator.TotalShares()`
+
+These `Shares` are simply an accounting mechanism. They are not a fungible asset. The reason for
+this mechanism is to simplify the accounting around slashing. Rather than iteratively slashing the
+tokens of every delegation entry, the validator's total bonded tokens can be slashed,
+effectively reducing the value of each issued delegator share.
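+The exchange-rate mechanics above can be sketched as follows. This is a toy illustration:
+the type and method names are hypothetical, and `float64` is used for readability where the
+SDK performs this arithmetic with `sdk.Dec` fixed-point values.
+
+```go
+package main
+
+import "fmt"
+
+// Validator tracks only the two totals the exchange rate depends on.
+type Validator struct {
+	Tokens float64 // total bonded tokens
+	Shares float64 // total issued delegator shares
+}
+
+// AddTokensFromDel issues new shares at the current shares-per-token rate.
+func (v *Validator) AddTokensFromDel(amount float64) float64 {
+	var issued float64
+	if v.Shares == 0 {
+		issued = amount // first delegation: one share per token
+	} else {
+		issued = amount * v.Shares / v.Tokens
+	}
+	v.Tokens += amount
+	v.Shares += issued
+	return issued
+}
+
+// TokensFromShares converts shares back to tokens at the inverse rate.
+func (v *Validator) TokensFromShares(shares float64) float64 {
+	return shares * v.Tokens / v.Shares
+}
+
+func main() {
+	v := &Validator{}
+	a := v.AddTokensFromDel(100) // delegator A receives 100 shares
+	b := v.AddTokensFromDel(50)  // delegator B receives 50 shares
+
+	// Slash 10% by reducing the validator's tokens only -- no per-delegation loop.
+	v.Tokens *= 0.9
+
+	fmt.Println(v.TokensFromShares(a)) // 90
+	fmt.Println(v.TokensFromShares(b)) // 45
+}
+```
+
+Note how the slash touches a single field yet proportionally reduces what every
+delegator can withdraw, which is exactly the accounting simplification described above.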
+
+### UnbondingDelegation
+
+Shares in a `Delegation` can be unbonded, but they must for some time exist as
+an `UnbondingDelegation`, where shares can be reduced if Byzantine behavior is
+detected.
+
+`UnbondingDelegation` objects are indexed in the store as:
+
+* UnbondingDelegation: `0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(unbondingDelegation)`
+* UnbondingDelegationsFromValidator: `0x33 | ValidatorAddrLen (1 byte) | ValidatorAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* UnbondingDelegationByUnbondingId: `0x38 | UnbondingId -> 0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr`
+
+`UnbondingDelegation` is used in queries, to look up all unbonding delegations for
+a given delegator.
+
+`UnbondingDelegationsFromValidator` is used in slashing, to look up all
+unbonding delegations associated with a given validator that need to be
+slashed.
+
+`UnbondingDelegationByUnbondingId` is an additional index that enables
+lookups for unbonding delegations by the unbonding IDs of the containing
+unbonding delegation entries.
+
+An `UnbondingDelegation` object is created every time an unbonding is initiated.
+
+```protobuf
+// UnbondingDelegation stores all of a single delegator's unbonding bonds
+// for a single validator in an time-ordered list.
+message UnbondingDelegation {
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.goproto_stringer) = false;
+
+  // delegator_address is the bech32-encoded address of the delegator.
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // validator_address is the bech32-encoded address of the validator.
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // entries are the unbonding delegation entries.
+ repeated UnbondingDelegationEntry entries = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // unbonding delegation entries +} + +// UnbondingDelegationEntry defines an unbonding object with relevant metadata. +message UnbondingDelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // creation_height is the height which the unbonding took place. + int64 creation_height = 1; + // completion_time is the unix time for unbonding completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // initial_balance defines the tokens initially scheduled to receive at completion. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // balance defines the tokens to receive at completion. + string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + // Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} +``` + +### Redelegation + +The bonded tokens worth of a `Delegation` may be instantly redelegated from a +source validator to a different validator (destination validator). However when +this occurs they must be tracked in a `Redelegation` object, whereby their +shares can be slashed if their tokens have contributed to a Byzantine fault +committed by the source validator. 
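+The slashing condition implied above can be sketched as a predicate over a redelegation
+entry: the entry is at risk only if it was created at or after the infraction height (so
+its tokens were still bonded to the source validator when the fault occurred) and has not
+yet matured. This is a simplified, hypothetical illustration; the SDK's actual slashing
+code also computes partial slash amounts and burns tokens.
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// RedelegationEntry carries the two fields the predicate needs.
+type RedelegationEntry struct {
+	CreationHeight int64
+	CompletionTime time.Time
+}
+
+// slashable reports whether the entry can be slashed for a source-validator
+// infraction committed at infractionHeight, evaluated at time now.
+func slashable(e RedelegationEntry, infractionHeight int64, now time.Time) bool {
+	if e.CreationHeight < infractionHeight {
+		return false // tokens had already left the source validator before the fault
+	}
+	if !e.CompletionTime.After(now) {
+		return false // entry has matured and is no longer at risk
+	}
+	return true
+}
+
+func main() {
+	now := time.Now()
+	e := RedelegationEntry{CreationHeight: 100, CompletionTime: now.Add(21 * 24 * time.Hour)}
+	fmt.Println(slashable(e, 50, now))  // true: redelegated after the fault, still maturing
+	fmt.Println(slashable(e, 150, now)) // false: tokens were gone before the fault
+}
+```
+
+This is why the `RedelegationsBySrc` index below exists: slashing must find every
+still-maturing redelegation that left the offending source validator.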
+
+`Redelegation` objects are indexed in the store as:
+
+* Redelegations: `0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr -> ProtocolBuffer(redelegation)`
+* RedelegationsBySrc: `0x35 | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationsByDst: `0x36 | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationByUnbondingId: `0x38 | UnbondingId -> 0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr`
+
+`Redelegations` is used for queries, to look up all redelegations for a given
+delegator.
+
+`RedelegationsBySrc` is used for slashing based on the `ValidatorSrcAddr`.
+
+`RedelegationsByDst` is used for slashing based on the `ValidatorDstAddr`.
+
+`RedelegationByUnbondingId` is an additional index that enables
+lookups for redelegations by the unbonding IDs of the containing
+redelegation entries.
+
+A redelegation object is created every time a redelegation occurs. To prevent
+"redelegation hopping", redelegations may not occur when:
+
+* the (re)delegator already has another immature redelegation in progress
+  with a destination to a validator (let's call it `Validator X`)
+* and, the (re)delegator is attempting to create a *new* redelegation
+  where the source validator for this new redelegation is `Validator X`
+
+```protobuf
+// RedelegationEntry defines a redelegation object with relevant metadata.
+message RedelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // creation_height defines the height which the redelegation took place. + int64 creation_height = 1; + // completion_time defines the unix time for redelegation completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // initial_balance defines the initial balance when redelegation started. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // shares_dst is the amount of destination-validator shares created by redelegation. + string shares_dst = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + // Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +// Redelegation contains the list of a particular delegator's redelegating bonds +// from a particular source validator to a particular destination validator. +message Redelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + // delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_src_address is the validator redelegation source operator address. + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_dst_address is the validator redelegation destination operator address. 
+ string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // entries are the redelegation entries. + repeated RedelegationEntry entries = 4 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // redelegation entries +} +``` + +### Queues + +All queue objects are sorted by timestamp. The time used within any queue is +firstly converted to UTC, rounded to the nearest nanosecond then sorted. The sortable time format +used is a slight modification of the RFC3339Nano and uses the format string +`"2006-01-02T15:04:05.000000000"`. Notably this format: + +* right pads all zeros +* drops the time zone info (we already use UTC) + +In all cases, the stored timestamp represents the maturation time of the queue +element. + +#### UnbondingDelegationQueue + +For the purpose of tracking progress of unbonding delegations the unbonding +delegations queue is kept. + +* UnbondingDelegation: `0x41 | format(time) -> []DVPair` + +```protobuf +// DVPair is struct that just has a delegator-validator pair with no other data. +// It is intended to be used as a marshalable pointer. For example, a DVPair can +// be used to construct the key to getting an UnbondingDelegation from state. +message DVPair { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +#### RedelegationQueue + +For the purpose of tracking progress of redelegations the redelegation queue is +kept. + +* RedelegationQueue: `0x42 | format(time) -> []DVVTriplet` + +```protobuf +// DVVTriplet is struct that just has a delegator-validator-validator triplet +// with no other data. It is intended to be used as a marshalable pointer. For +// example, a DVVTriplet can be used to construct the key to getting a +// Redelegation from state. 
+message DVVTriplet { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +#### ValidatorQueue + +For the purpose of tracking progress of unbonding validators the validator +queue is kept. + +* ValidatorQueueTime: `0x43 | format(time) -> []sdk.ValAddress` + +The stored object by each key is an array of validator operator addresses from +which the validator object can be accessed. Typically it is expected that only +a single validator record will be associated with a given timestamp however it is possible +that multiple validators exist in the queue at the same location. + +### HistoricalInfo + +HistoricalInfo objects are stored and pruned at each block such that the staking keeper persists +the `n` most recent historical info defined by staking module parameter: `HistoricalEntries`. + +```go expandable +syntax = "proto3"; +package cosmos.staking.v1beta1; + +import "gogoproto/gogo.proto"; +import "google/protobuf/any.proto"; +import "google/protobuf/duration.proto"; +import "google/protobuf/timestamp.proto"; + +import "cosmos_proto/cosmos.proto"; +import "cosmos/base/v1beta1/coin.proto"; +import "amino/amino.proto"; +import "tendermint/types/types.proto"; +import "tendermint/abci/types.proto"; + +option go_package = "github.com/cosmos/cosmos-sdk/x/staking/types"; + +/ HistoricalInfo contains header and validator information for a given block. +/ It is stored as part of staking module's state, which persists the `n` most +/ recent HistoricalInfo +/ (`n` is set by the staking module's `historical_entries` parameter). 
+message HistoricalInfo { + tendermint.types.Header header = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated Validator valset = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ CommissionRates defines the initial commission rates to be used for creating +/ a validator. +message CommissionRates { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / rate is the commission rate charged to delegators, as a fraction. + string rate = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / max_rate defines the maximum commission rate which validator can ever charge, as a fraction. + string max_rate = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / max_change_rate defines the maximum daily increase of the validator commission, as a fraction. + string max_change_rate = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ Commission defines commission parameters for a given validator. +message Commission { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / commission_rates defines the initial commission rates to be used for creating a validator. + CommissionRates commission_rates = 1 + [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / update_time is the last time the commission rate was changed. + google.protobuf.Timestamp update_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; +} + +/ Description defines a validator description. 
+message Description { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / moniker defines a human-readable name for the validator. + string moniker = 1; + / identity defines an optional identity signature (ex. UPort or Keybase). + string identity = 2; + / website defines an optional website link. + string website = 3; + / security_contact defines an optional email for security contact. + string security_contact = 4; + / details define other optional details. + string details = 5; +} + +/ Validator defines a validator, together with the total amount of the +/ Validator's bond shares and their exchange rate to coins. Slashing results in +/ a decrease in the exchange rate, allowing correct calculation of future +/ undelegations without iterating over delegators. When coins are delegated to +/ this validator, the validator is credited with a delegation whose number of +/ bond shares is based on the amount of coins delegated divided by the current +/ exchange rate. Voting power can be calculated as total bonded shares +/ multiplied by exchange rate. +message Validator { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + + / operator_address defines the address of the validator's operator; bech encoded in JSON. + string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / consensus_pubkey is the consensus public key of the validator, as a Protobuf Any. + google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"]; + / jailed defined whether the validator has been jailed from bonded status or not. + bool jailed = 3; + / status is the validator status (bonded/unbonding/unbonded). + BondStatus status = 4; + / tokens define the delegated tokens (incl. self-delegation). 
+ string tokens = 5 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / delegator_shares defines total shares issued to a validator's delegators. + string delegator_shares = 6 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / description defines the description terms for the validator. + Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / unbonding_height defines, if unbonding, the height at which this validator has begun unbonding. + int64 unbonding_height = 8; + / unbonding_time defines, if unbonding, the min time for the validator to complete unbonding. + google.protobuf.Timestamp unbonding_time = 9 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / commission defines the commission parameters. + Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / min_self_delegation is the validator's self declared minimum self delegation. + / + / Since: cosmos-sdk 0.46 + string min_self_delegation = 11 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + + / strictly positive if this validator's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 12; + + / list of unbonding ids, each uniquely identifing an unbonding of this validator + repeated uint64 unbonding_ids = 13; +} + +/ BondStatus is the status of a validator. +enum BondStatus { + option (gogoproto.goproto_enum_prefix) = false; + + / UNSPECIFIED defines an invalid validator status. + BOND_STATUS_UNSPECIFIED = 0 [(gogoproto.enumvalue_customname) = "Unspecified"]; + / UNBONDED defines a validator that is not bonded. 
+ BOND_STATUS_UNBONDED = 1 [(gogoproto.enumvalue_customname) = "Unbonded"]; + / UNBONDING defines a validator that is unbonding. + BOND_STATUS_UNBONDING = 2 [(gogoproto.enumvalue_customname) = "Unbonding"]; + / BONDED defines a validator that is bonded. + BOND_STATUS_BONDED = 3 [(gogoproto.enumvalue_customname) = "Bonded"]; +} + +/ ValAddresses defines a repeated set of validator addresses. +message ValAddresses { + option (gogoproto.goproto_stringer) = false; + option (gogoproto.stringer) = true; + + repeated string addresses = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVPair is struct that just has a delegator-validator pair with no other data. +/ It is intended to be used as a marshalable pointer. For example, a DVPair can +/ be used to construct the key to getting an UnbondingDelegation from state. +message DVPair { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVPairs defines an array of DVPair objects. +message DVPairs { + repeated DVPair pairs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ DVVTriplet is struct that just has a delegator-validator-validator triplet +/ with no other data. It is intended to be used as a marshalable pointer. For +/ example, a DVVTriplet can be used to construct the key to getting a +/ Redelegation from state. 
+message DVVTriplet { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVVTriplets defines an array of DVVTriplet objects. +message DVVTriplets { + repeated DVVTriplet triplets = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ Delegation represents the bond with tokens held by an account. It is +/ owned by one delegator, and is associated with the voting power of one +/ validator. +message Delegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_address is the bech32-encoded address of the validator. + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / shares define the delegation shares received. + string shares = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ UnbondingDelegation stores all of a single delegator's unbonding bonds +/ for a single validator in an time-ordered list. +message UnbondingDelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_address is the bech32-encoded address of the validator. 
+ string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / entries are the unbonding delegation entries. + repeated UnbondingDelegationEntry entries = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; / unbonding delegation entries +} + +/ UnbondingDelegationEntry defines an unbonding object with relevant metadata. +message UnbondingDelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / creation_height is the height which the unbonding took place. + int64 creation_height = 1; + / completion_time is the unix time for unbonding completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / initial_balance defines the tokens initially scheduled to receive at completion. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / balance defines the tokens to receive at completion. + string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + / Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +/ RedelegationEntry defines a redelegation object with relevant metadata. +message RedelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / creation_height defines the height which the redelegation took place. + int64 creation_height = 1; + / completion_time defines the unix time for redelegation completion. 
+ google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / initial_balance defines the initial balance when redelegation started. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / shares_dst is the amount of destination-validator shares created by redelegation. + string shares_dst = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + / Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +/ Redelegation contains the list of a particular delegator's redelegating bonds +/ from a particular source validator to a particular destination validator. +message Redelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_src_address is the validator redelegation source operator address. + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_dst_address is the validator redelegation destination operator address. + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / entries are the redelegation entries. + repeated RedelegationEntry entries = 4 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; / redelegation entries +} + +/ Params defines the parameters for the x/staking module. 
+message Params { + option (amino.name) = "cosmos-sdk/x/staking/Params"; + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / unbonding_time is the time duration of unbonding. + google.protobuf.Duration unbonding_time = 1 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true]; + / max_validators is the maximum number of validators. + uint32 max_validators = 2; + / max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio). + uint32 max_entries = 3; + / historical_entries is the number of historical entries to persist. + uint32 historical_entries = 4; + / bond_denom defines the bondable coin denomination. + string bond_denom = 5; + / min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators + string min_commission_rate = 6 [ + (gogoproto.moretags) = "yaml:\"min_commission_rate\"", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ DelegationResponse is equivalent to Delegation except that it contains a +/ balance in addition to shares which is more suitable for client responses. +message DelegationResponse { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + + Delegation delegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + cosmos.base.v1beta1.Coin balance = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ RedelegationEntryResponse is equivalent to a RedelegationEntry except that it +/ contains a balance in addition to shares which is more suitable for client +/ responses. 
+message RedelegationEntryResponse { + option (gogoproto.equal) = true; + + RedelegationEntry redelegation_entry = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; +} + +/ RedelegationResponse is equivalent to a Redelegation except that its entries +/ contain a balance in addition to shares which is more suitable for client +/ responses. +message RedelegationResponse { + option (gogoproto.equal) = false; + + Redelegation redelegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated RedelegationEntryResponse entries = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ Pool is used for tracking bonded and not-bonded token supply of the bond +/ denomination. +message Pool { + option (gogoproto.description) = true; + option (gogoproto.equal) = true; + string not_bonded_tokens = 1 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "not_bonded_tokens", + (amino.dont_omitempty) = true + ]; + string bonded_tokens = 2 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "bonded_tokens", + (amino.dont_omitempty) = true + ]; +} + +/ Infraction indicates the infraction a validator commited. +enum Infraction { + / UNSPECIFIED defines an empty infraction. + INFRACTION_UNSPECIFIED = 0; + / DOUBLE_SIGN defines a validator that double-signs a block. + INFRACTION_DOUBLE_SIGN = 1; + / DOWNTIME defines a validator that missed signing too many blocks. + INFRACTION_DOWNTIME = 2; +} + +/ ValidatorUpdates defines an array of abci.ValidatorUpdate objects. 
+/ TODO: explore moving this to proto/cosmos/base to separate modules from tendermint dependence +message ValidatorUpdates { + repeated tendermint.abci.ValidatorUpdate updates = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +At each BeginBlock, the staking keeper will persist the current Header and the Validators that committed +the current block in a `HistoricalInfo` object. The Validators are sorted on their address to ensure that +they are in a deterministic order. +The oldest HistoricalEntries will be pruned to ensure that there only exist the parameter-defined number of +historical entries. + +## State Transitions + +### Validators + +State transitions in validators are performed on every [`EndBlock`](#validator-set-changes) +in order to check for changes in the active `ValidatorSet`. + +A validator can be `Unbonded`, `Unbonding` or `Bonded`. `Unbonded` +and `Unbonding` are collectively called `Not Bonded`. A validator can move +directly between all the states, except for from `Bonded` to `Unbonded`. + +#### Not bonded to Bonded + +The following transition occurs when a validator's ranking in the `ValidatorPowerIndex` surpasses +that of the `LastValidator`. 
+
+* set `validator.Status` to `Bonded`
+* send the `validator.Tokens` from the `NotBondedTokens` to the `BondedPool` `ModuleAccount`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* if it exists, delete any `ValidatorQueue` record for this validator
+
+#### Bonded to Unbonding
+
+When a validator begins the unbonding process, the following operations occur:
+
+* send the `validator.Tokens` from the `BondedPool` to the `NotBondedTokens` `ModuleAccount`
+* set `validator.Status` to `Unbonding`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* insert a new record into the `ValidatorQueue` for this validator
+
+#### Unbonding to Unbonded
+
+A validator moves from unbonding to unbonded when the `ValidatorQueue` object
+moves from bonded to unbonded.
+
+* update the `Validator` object for this validator
+* set `validator.Status` to `Unbonded`
+
+#### Jail/Unjail
+
+When a validator is jailed, it is effectively removed from the CometBFT set.
+This process may also be reversed. The following operations occur:
+
+* set `Validator.Jailed` and update the object
+* if jailed, delete the record from `ValidatorByPowerIndex`
+* if unjailed, add the record to `ValidatorByPowerIndex`
+
+Jailed validators are not present in any of the following stores:
+
+* the power store (from consensus power to address)
+
+### Delegations
+
+#### Delegate
+
+When a delegation occurs, both the validator and the delegation objects are affected:
+
+* determine the delegator's shares based on the tokens delegated and the validator's exchange rate
+* remove the tokens from the sending account
+* add the shares to the delegation object, creating the object if it does not already exist
+* add the new delegator shares and update the `Validator` object
+* transfer the `delegation.Amount` from the delegator's account to the `BondedPool` or the `NotBondedPool` `ModuleAccount`, depending on whether the `validator.Status` is `Bonded` or not
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+
+#### Begin Unbonding
+
+As part of the Undelegate and Complete Unbonding state transitions, Unbond
+Delegation may be called.
+
+* subtract the unbonded shares from the delegator
+* add the unbonded tokens to an `UnbondingDelegationEntry`
+* update the delegation, or remove the delegation if there are no more shares
+* if the delegation belongs to the validator's operator and no more shares exist, jail the validator
+* update the validator, removing the delegator shares and associated coins
+* if the validator state is `Bonded`, transfer the `Coins` worth of the unbonded
+  shares from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* remove the validator if it is unbonded and there are no more delegation shares
+* remove the validator if it is unbonded and there are no more delegation shares
+* get a unique `unbondingId` and map it to the `UnbondingDelegationEntry` in `UnbondingDelegationByUnbondingId`
+* call the `AfterUnbondingInitiated(unbondingId)` hook
+* add the unbonding delegation to `UnbondingDelegationQueue` with the completion time set to `UnbondingTime`
+
+#### Cancel an `UnbondingDelegation` Entry
+
+When a `cancel unbond delegation` occurs, the `validator`, the `delegation` and the `UnbondingDelegationQueue` state are all updated.
+
+* if the cancel unbonding delegation amount equals the `UnbondingDelegation` entry `balance`, then the `UnbondingDelegation` entry is deleted from the `UnbondingDelegationQueue`
+* if the cancel unbonding delegation amount is less than the `UnbondingDelegation` entry balance, then the `UnbondingDelegation` entry is updated with the new balance in the `UnbondingDelegationQueue`
+* the cancel `amount` is [Delegated](#delegations) back to the original `validator`
+
+#### Complete Unbonding
+
+For undelegations which do not complete immediately, the following operations
+occur when the unbonding delegation queue element matures:
+
+* remove the entry from the `UnbondingDelegation` object
+* transfer the tokens from the `NotBondedPool` `ModuleAccount` to the delegator `Account`
+
+#### Begin Redelegation
+
+Redelegations affect the delegation, the source validator and the destination validator.
+
+* perform an `unbond` delegation from the source validator to retrieve the tokens worth of the unbonded shares
+* using the unbonded tokens, `Delegate` them to the destination validator
+* if the `sourceValidator.Status` is `Bonded`, and the `destinationValidator` is not,
+  transfer the newly delegated tokens from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* otherwise, if the `sourceValidator.Status` is not `Bonded`, and the `destinationValidator`
+  is `Bonded`, transfer the newly delegated tokens from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
+* record the token amount in a new entry in the relevant `Redelegation`
+
+From when a redelegation begins until it completes, the delegator is in a state of "pseudo-unbonding", and can still be
+slashed for infractions that occurred before the redelegation began.
+
+#### Complete Redelegation
+
+When a redelegation completes, the following occurs:
+
+* remove the entry from the `Redelegation` object
+
+### Slashing
+
+#### Slash Validator
+
+When a Validator is slashed, the following occurs:
+
+* The total `slashAmount` is calculated as the `slashFactor` (a chain parameter) \* `TokensFromConsensusPower`,
+  the total number of tokens bonded to the validator at the time of the infraction.
+* Every unbonding delegation and pseudo-unbonding redelegation from the validator whose unbonding or
+  redelegation began after the infraction occurred is slashed by the `slashFactor` percentage of its `initialBalance`.
+* Each amount slashed from redelegations and unbonding delegations is subtracted from the
+  total slash amount.
+* The `remainingSlashAmount` is then slashed from the validator's tokens in the `BondedPool` or
+  `NotBondedPool` depending on the validator's status. This reduces the total supply of tokens.
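The slash accounting above can be sketched as follows. This is a minimal illustrative sketch under assumed names (`slash`, integer token amounts, a float `slash_factor`), not the SDK implementation:

```python
def slash(tokens_at_infraction: int, slash_factor: float,
          unbonding_balances: list, redelegation_balances: list) -> int:
    """Sketch of the slash accounting described above (not the SDK code).

    Returns the remaining amount to slash from the validator's pool tokens
    after unbonding delegations and redelegations absorb their share.
    """
    # Total slash amount: slashFactor * tokens bonded at the infraction height.
    remaining = int(tokens_at_infraction * slash_factor)
    # Unbonding delegations and pseudo-unbonding redelegations that began after
    # the infraction are slashed by slashFactor of their initial balance,
    # capped so no balance goes negative.
    for balance in unbonding_balances + redelegation_balances:
        burned = min(int(balance * slash_factor), balance)
        remaining -= burned
    # The remainder is slashed from the BondedPool or NotBondedPool tokens.
    return max(remaining, 0)
```

For example, slashing 5% of a validator that had 1000 tokens at the infraction, with one unbonding delegation and one redelegation of initial balance 100 each, burns 5 from each entry and leaves 40 to slash from the pool.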
+
+In the case of a slash due to any infraction that requires evidence to be submitted (for example double-sign), the slash
+occurs at the block where the evidence is included, not at the block where the infraction occurred.
+Put otherwise, validators are not slashed retroactively, only when they are caught.
+
+#### Slash Unbonding Delegation
+
+When a validator is slashed, so are those unbonding delegations from the validator that began unbonding
+after the time of the infraction. Every entry in every unbonding delegation from the validator
+is slashed by `slashFactor`. The amount slashed is calculated from the `InitialBalance` of the
+delegation and is capped to prevent a resulting negative balance. Completed (or mature) unbondings are not slashed.
+
+#### Slash Redelegation
+
+When a validator is slashed, so are all redelegations from the validator that began after the
+infraction. Redelegations are slashed by `slashFactor`.
+Redelegations that began before the infraction are not slashed.
+The amount slashed is calculated from the `InitialBalance` of the delegation and is capped to
+prevent a resulting negative balance.
+Mature redelegations (that have completed pseudo-unbonding) are not slashed.
+
+### How Shares are calculated
+
+At any given point in time, each validator has a number of tokens, `T`, and has a number of shares issued, `S`.
+Each delegator, `i`, holds a number of shares, `S_i`.
+The number of tokens is the sum of all tokens delegated to the validator, plus the rewards, minus the slashes.
+
+The delegator is entitled to a portion of the underlying tokens proportional to their proportion of shares.
+So delegator `i` is entitled to `T * S_i / S` of the validator's tokens.
+
+When a delegator delegates new tokens to the validator, they receive a number of shares proportional to their contribution.
+So when delegator `j` delegates `T_j` tokens, they receive `S_j = S * T_j / T` shares.
+The total number of tokens is now `T + T_j`, and the total number of shares is `S + S_j`.
+`j`'s proportion of the shares is the same as their proportion of the total tokens contributed: `(S + S_j) / S = (T + T_j) / T`.
+
+A special case is the initial delegation, when `T = 0` and `S = 0`, so `T_j / T` is undefined.
+For the initial delegation, delegator `j` who delegates `T_j` tokens receives `S_j = T_j` shares.
+So a validator that hasn't received any rewards and has not been slashed will have `T = S`.
+
+## Messages
+
+In this section we describe the processing of the staking messages and the corresponding updates to the state. All created/modified state objects specified by each message are defined within the [state](#state) section.
+
+### MsgCreateValidator
+
+A validator is created using the `MsgCreateValidator` message.
+The validator must be created with an initial delegation from the operator.
+
+```protobuf
+  // CreateValidator defines a method for creating a new validator.
+  rpc CreateValidator(MsgCreateValidator) returns (MsgCreateValidatorResponse);
+```
+
+```protobuf
+// MsgCreateValidator defines a SDK message for creating a new validator.
+message MsgCreateValidator {
+  // NOTE(fdymylja): this is a particular case in which
+  // if validator_address == delegator_address then only one
+  // is expected to sign, otherwise both are.
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (cosmos.msg.v1.signer) = "validator_address";
+  option (amino.name) = "cosmos-sdk/MsgCreateValidator";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  Description description = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  CommissionRates commission = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  string min_self_delegation = 3 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+  string delegator_address = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 5 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  google.protobuf.Any pubkey = 6 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"];
+  cosmos.base.v1beta1.Coin value = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message is expected to fail if:
+
+* another validator with this operator address is already registered
+* another validator with this pubkey is already registered
+* the initial self-delegation tokens are of a denom not specified as the bonding denom
+* the commission parameters are faulty, namely:
+  * `MaxRate` is either > 1 or < 0
+  * the initial `Rate` is either negative or > `MaxRate`
+  * the initial `MaxChangeRate` is either negative or > `MaxRate`
+* the description fields are too large
+
+This message creates and stores the `Validator` object at appropriate indexes.
+Additionally, a self-delegation of the initial tokens is made, creating a
+`Delegation` object. The validator always starts as unbonded but may be bonded
+in the first end-block.
+
+### MsgEditValidator
+
+The `Description` and `CommissionRate` of a validator can be updated using the
+`MsgEditValidator` message.
+
+```protobuf
+  // EditValidator defines a method for editing an existing validator.
+  rpc EditValidator(MsgEditValidator) returns (MsgEditValidatorResponse);
+```
+
+```protobuf
+// MsgEditValidator defines a SDK message for editing an existing validator.
+message MsgEditValidator {
+  option (cosmos.msg.v1.signer) = "validator_address";
+  option (amino.name) = "cosmos-sdk/MsgEditValidator";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  Description description = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // We pass a reference to the new commission rate and min self delegation as
+  // it's not mandatory to update. If not updated, the deserialized rate will be
+  // zero with no way to distinguish if an update was intended.
+  // REF: #2373
+  string commission_rate = 3
+      [(cosmos_proto.scalar) = "cosmos.Dec", (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec"];
+  string min_self_delegation = 4
+      [(cosmos_proto.scalar) = "cosmos.Int", (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int"];
+}
+```
+
+This message is expected to fail if:
+
+* the new `CommissionRate` is either negative or > `MaxRate`
+* the `CommissionRate` has already been updated within the previous 24 hours
+* the `CommissionRate` change is > `MaxChangeRate`
+* the description fields are too large
+
+This message stores the updated `Validator` object.
+
+### MsgDelegate
+
+Within this message the delegator provides coins, and in return receives
+some amount of their validator's (newly created) delegator-shares that are
+assigned to `Delegation.Shares`.
+
+```protobuf
+  // Delegate defines a method for performing a delegation of coins
+  // from a delegator to a validator.
+  rpc Delegate(MsgDelegate) returns (MsgDelegateResponse);
+```
+
+```protobuf
+// MsgDelegate defines a SDK message for performing a delegation of coins
+// from a delegator to a validator.
+message MsgDelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgDelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the validator does not exist
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+* the exchange rate is invalid, meaning the validator has no tokens (due to slashing) but there are outstanding shares
+* the amount delegated is less than the minimum allowed delegation
+
+If an existing `Delegation` object for the provided addresses does not already
+exist, then it is created as part of this message; otherwise the existing
+`Delegation` is updated to include the newly received shares.
+
+The delegator receives newly minted shares at the current exchange rate.
+The exchange rate is the number of existing shares in the validator divided by
+the number of currently delegated tokens.
+
+The validator is updated in the `ValidatorByPower` index, and the delegation is
+tracked in the validator object in the `Validators` index.
+
+It is possible to delegate to a jailed validator, the only difference being that it
+will not be added to the power index until it is unjailed.
+
+![Delegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/delegation_sequence.svg)
+
+### MsgUndelegate
+
+The `MsgUndelegate` message allows delegators to undelegate their tokens from a
+validator.
+
+```protobuf
+  // Undelegate defines a method for performing an undelegation from a
+  // delegate and a validator.
+  rpc Undelegate(MsgUndelegate) returns (MsgUndelegateResponse);
+```
+
+```protobuf
+// MsgUndelegate defines a SDK message for performing an undelegation from a
+// delegate and a validator.
+message MsgUndelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgUndelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message returns a response containing the completion time of the undelegation:
+
+```protobuf
+// MsgUndelegateResponse defines the Msg/Undelegate response type.
+message MsgUndelegateResponse {
+  google.protobuf.Timestamp completion_time = 1
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the validator doesn't exist
+* the delegation has fewer shares than the ones worth of `Amount`
+* the existing `UnbondingDelegation` has the maximum number of entries as defined by `params.MaxEntries`
+* the `Amount` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares and remove that amount of tokens held within the validator
+* with those removed tokens, if the validator is:
+  * `Bonded` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. Update pool shares to reduce BondedTokens and increase NotBondedTokens by token worth of the shares.
+  * `Unbonding` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+  * `Unbonded` - then send the coins to the message's `DelegatorAddr`
+* if there are no more `Shares` in the delegation, then the delegation object is removed from the store
+  * in this situation, if the delegation is the validator's self-delegation, then also jail the validator.
+
+![Unbond sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/unbond_sequence.svg)
+
+### MsgCancelUnbondingDelegation
+
+The `MsgCancelUnbondingDelegation` message allows delegators to cancel an `unbondingDelegation` entry and delegate back to a previous validator.
+
+```protobuf
+  // CancelUnbondingDelegation defines a method for performing canceling the unbonding delegation
+  // and delegate back to previous validator.
+  //
+  // Since: cosmos-sdk 0.46
+  rpc CancelUnbondingDelegation(MsgCancelUnbondingDelegation) returns (MsgCancelUnbondingDelegationResponse);
+```
+
+```protobuf
+// MsgCancelUnbondingDelegation defines the SDK message for performing a cancel unbonding delegation for delegator
+//
+// Since: cosmos-sdk 0.46
+message MsgCancelUnbondingDelegation {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgCancelUnbondingDelegation";
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // amount is always less than or equal to unbonding delegation entry balance
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // creation_height is the height which the unbonding took place.
+  int64 creation_height = 4;
+}
+```
+
+This message is expected to fail if:
+
+* the `unbondingDelegation` entry is already processed
+* the `cancel unbonding delegation` amount is greater than the `unbondingDelegation` entry balance
+* the `cancel unbonding delegation` height doesn't exist in the `unbondingDelegationQueue` of the delegator
+
+When this message is processed the following actions occur:
+
+* if the remaining `unbondingDelegation` entry balance is zero, the `unbondingDelegation` entry is removed from the `unbondingDelegationQueue`
+* otherwise the `unbondingDelegationQueue` is updated with the new `unbondingDelegation` entry balance and initial balance
+* the validator's `DelegatorShares` and the delegation's `Shares` are both increased by the message `Amount`
+
+### MsgBeginRedelegate
+
+The redelegation command allows delegators to instantly switch validators. Once
+the unbonding period has passed, the redelegation is automatically completed in
+the EndBlocker.
+
+```protobuf
+  // BeginRedelegate defines a method for performing a redelegation
+  // of coins from a delegator and source validator to a destination validator.
+  rpc BeginRedelegate(MsgBeginRedelegate) returns (MsgBeginRedelegateResponse);
+```
+
+```protobuf
+// MsgBeginRedelegate defines a SDK message for performing a redelegation
+// of coins from a delegator and source validator to a destination validator.
+message MsgBeginRedelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgBeginRedelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message returns a response containing the completion time of the redelegation:
+
+```protobuf
+
+// MsgBeginRedelegateResponse defines the Msg/BeginRedelegate response type.
+message MsgBeginRedelegateResponse {
+  google.protobuf.Timestamp completion_time = 1
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the source or destination validators don't exist
+* the delegation has fewer shares than the ones worth of `Amount`
+* the source validator has a receiving redelegation which is not matured (i.e., the redelegation may be transitive)
+* the existing `Redelegation` has the maximum number of entries as defined by `params.MaxEntries`
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the source validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares and remove that amount of tokens held within the source validator.
+* if the source validator is:
+  * `Bonded` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. 
Update pool shares to reduce BondedTokens and increase NotBondedTokens by token worth of the shares (this may be effectively reversed in the next step however).
+  * `Unbonding` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+  * `Unbonded` - no action required in this step
+* Delegate the token worth to the destination validator, possibly moving tokens back to the bonded state.
+* if there are no more `Shares` in the source delegation, then the source delegation object is removed from the store
+  * in this situation, if the delegation is the validator's self-delegation, then also jail the validator.
+
+![Begin redelegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/begin_redelegation_sequence.svg)
+
+### MsgUpdateParams
+
+The `MsgUpdateParams` message updates the staking module parameters.
+The params are updated through a governance proposal where the signer is the gov module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/x/staking/MsgUpdateParams";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // params defines the x/staking parameters to update.
+  //
+  // NOTE: All parameters must be supplied.
+  Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+};
+```
+
+The message handling can fail if:
+
+* the signer is not the authority defined in the staking keeper (usually the gov module account).
+
+## Begin-Block
+
+At each ABCI begin-block call, the historical info is stored and pruned
+according to the `HistoricalEntries` parameter.
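The store-and-prune step described above can be sketched as follows. This is a minimal in-memory sketch with hypothetical names (`track_historical_info`, a dict keyed by height), not the SDK keeper:

```python
def track_historical_info(store: dict, height: int, historical_entries: int, info) -> None:
    """Sketch of the BeginBlock historical-info logic (not the SDK code)."""
    if historical_entries == 0:
        # A parameter value of 0 disables tracking entirely (no-op).
        return
    # Prune any entries older than height - HistoricalEntries ...
    for h in [h for h in store if h < height - historical_entries]:
        del store[h]
    # ... then persist the latest HistoricalInfo under the current height.
    store[height] = info
```

In the common case a single old entry falls below the cutoff per block; if `HistoricalEntries` was lowered by governance, several entries are pruned at once.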
+
+### Historical Info Tracking
+
+If the `HistoricalEntries` parameter is 0, then `BeginBlock` performs a no-op.
+
+Otherwise, the latest historical info is stored under the key `historicalInfoKey|height`, while any entries older than `height - HistoricalEntries` are deleted.
+In most cases, this results in a single entry being pruned per block.
+However, if the parameter `HistoricalEntries` has changed to a lower value, there will be multiple entries in the store that must be pruned.
+
+## End-Block
+
+At each ABCI end-block call, the operations to update queues and apply validator set
+changes are executed.
+
+### Validator Set Changes
+
+The staking validator set is updated during this process by state transitions
+that run at the end of every block. As a part of this process any updated
+validators are also returned to CometBFT for inclusion in the CometBFT
+validator set, which is responsible for validating CometBFT messages at the
+consensus layer. Operations are as follows:
+
+* the new validator set is taken as the top `params.MaxValidators` number of
+  validators retrieved from the `ValidatorsByPower` index
+* the previous validator set is compared with the new validator set:
+  * missing validators begin unbonding and their `Tokens` are transferred from the
+    `BondedPool` to the `NotBondedPool` `ModuleAccount`
+  * new validators are instantly bonded and their `Tokens` are transferred from the
+    `NotBondedPool` to the `BondedPool` `ModuleAccount`
+
+In all cases, any validators leaving or entering the bonded validator set or
+changing balances and staying within the bonded validator set incur an update
+message reporting their new consensus power, which is passed back to CometBFT.
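The comparison described above can be sketched as a diff over address-to-power maps. This is an illustrative sketch (the `validator_updates` helper is hypothetical), not the SDK implementation:

```python
def validator_updates(prev_powers: dict, new_powers: dict) -> dict:
    """Sketch of the EndBlock validator-set diff (not the SDK code).

    Returns the consensus-power updates passed back to CometBFT:
    power 0 for validators leaving the bonded set, and the new power
    for validators entering the set or changing power.
    """
    updates = {}
    # Validators missing from the new set begin unbonding and are
    # reported to CometBFT with power 0.
    for addr in prev_powers:
        if addr not in new_powers:
            updates[addr] = 0
    # New or power-changed validators report their new consensus power.
    for addr, power in new_powers.items():
        if prev_powers.get(addr) != power:
            updates[addr] = power
    return updates
```

Validators present in both sets with unchanged power produce no update message.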
+
+The `LastTotalPower` and `LastValidatorsPower` hold the state of the total power
+and validator power from the end of the last block, and are used to check for
+changes that have occurred in `ValidatorsByPower` and the total new power, which
+is calculated during `EndBlock`.
+
+### Queues
+
+Within staking, certain state-transitions are not instantaneous but take place
+over a duration of time (typically the unbonding period). When these
+transitions mature, certain operations must take place in order to complete
+the state operation. This is achieved through the use of queues which are
+checked/processed at the end of each block.
+
+#### Unbonding Validators
+
+When a validator is kicked out of the bonded validator set (either through
+being jailed, or not having sufficient bonded tokens) it begins the unbonding
+process, and all its delegations begin unbonding as well (while still being
+delegated to this validator). At this point the validator is said to be an
+"unbonding validator", whereby it will mature to become an "unbonded validator"
+after the unbonding period has passed.
+
+Each block, the validator queue is checked for mature unbonding validators
+(namely with a completion time `<=` current time and completion height `<=` current
+block height). At this point any mature validators which do not have any
+delegations remaining are deleted from state. For all other mature unbonding
+validators that still have remaining delegations, the `validator.Status` is
+switched from `types.Unbonding` to
+`types.Unbonded`.
+
+Unbonding operations can be put on hold by external modules via the `PutUnbondingOnHold(unbondingId)` method.
+As a result, an unbonding operation (e.g., an unbonding delegation) that is on hold cannot complete
+even if it reaches maturity.
For an unbonding operation with `unbondingId` to eventually complete +(after it reaches maturity), every call to `PutUnbondingOnHold(unbondingId)` must be matched +by a call to `UnbondingCanComplete(unbondingId)`. + +#### Unbonding Delegations + +Complete the unbonding of all mature `UnbondingDelegations.Entries` within the +`UnbondingDelegations` queue with the following procedure: + +* transfer the balance coins to the delegator's wallet address +* remove the mature entry from `UnbondingDelegation.Entries` +* remove the `UnbondingDelegation` object from the store if there are no + remaining entries. + +#### Redelegations + +Complete the unbonding of all mature `Redelegation.Entries` within the +`Redelegations` queue with the following procedure: + +* remove the mature entry from `Redelegation.Entries` +* remove the `Redelegation` object from the store if there are no + remaining entries. + +## Hooks + +Other modules may register operations to execute when a certain event has +occurred within staking. These events can be registered to execute either +right `Before` or `After` the staking event (as per the hook name). 
The
+following hooks can be registered with staking:
+
+* `AfterValidatorCreated(Context, ValAddress) error`
+  * called when a validator is created
+* `BeforeValidatorModified(Context, ValAddress) error`
+  * called when a validator's state is changed
+* `AfterValidatorRemoved(Context, ConsAddress, ValAddress) error`
+  * called when a validator is deleted
+* `AfterValidatorBonded(Context, ConsAddress, ValAddress) error`
+  * called when a validator is bonded
+* `AfterValidatorBeginUnbonding(Context, ConsAddress, ValAddress) error`
+  * called when a validator begins unbonding
+* `BeforeDelegationCreated(Context, AccAddress, ValAddress) error`
+  * called when a delegation is created
+* `BeforeDelegationSharesModified(Context, AccAddress, ValAddress) error`
+  * called when a delegation's shares are modified
+* `AfterDelegationModified(Context, AccAddress, ValAddress) error`
+  * called when a delegation is created or modified
+* `BeforeDelegationRemoved(Context, AccAddress, ValAddress) error`
+  * called when a delegation is removed
+* `AfterUnbondingInitiated(Context, UnbondingID)`
+  * called when an unbonding operation (validator unbonding, unbonding delegation, redelegation) was initiated
+
+## Events
+
+The staking module emits the following events:
+
+### EndBlocker
+
+| Type | Attribute Key | Attribute Value |
+| ---------------------- | ---------------------- | ------------------------- |
+| complete\_unbonding | amount | `{totalUnbondingAmount}` |
+| complete\_unbonding | validator | `{validatorAddress}` |
+| complete\_unbonding | delegator | `{delegatorAddress}` |
+| complete\_redelegation | amount | `{totalRedelegationAmount}` |
+| complete\_redelegation | source\_validator | `{srcValidatorAddress}` |
+| complete\_redelegation | destination\_validator | `{dstValidatorAddress}` |
+| complete\_redelegation | delegator | `{delegatorAddress}` |
+
+## Msgs
+
+### MsgCreateValidator
+
+| Type | Attribute Key | Attribute Value |
+| ----------------- | ------------- | 
------------------ | +| create\_validator | validator | `{validatorAddress}` | +| create\_validator | amount | `{delegationAmount}` | +| message | module | staking | +| message | action | create\_validator | +| message | sender | `{senderAddress}` | + +### MsgEditValidator + +| Type | Attribute Key | Attribute Value | +| --------------- | --------------------- | ------------------- | +| edit\_validator | commission\_rate | `{commissionRate}` | +| edit\_validator | min\_self\_delegation | `{minSelfDelegation}` | +| message | module | staking | +| message | action | edit\_validator | +| message | sender | `{senderAddress}` | + +### MsgDelegate + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| delegate | validator | `{validatorAddress}` | +| delegate | amount | `{delegationAmount}` | +| message | module | staking | +| message | action | delegate | +| message | sender | `{senderAddress}` | + +### MsgUndelegate + +| Type | Attribute Key | Attribute Value | +| ------- | --------------------- | ------------------ | +| unbond | validator | `{validatorAddress}` | +| unbond | amount | `{unbondAmount}` | +| unbond | completion\_time \[0] | `{completionTime}` | +| message | module | staking | +| message | action | begin\_unbonding | +| message | sender | `{senderAddress}` | + +* \[0] Time is formatted in the RFC3339 standard + +### MsgCancelUnbondingDelegation + +| Type | Attribute Key | Attribute Value | +| ----------------------------- | ---------------- | --------------------------------- | +| cancel\_unbonding\_delegation | validator | `{validatorAddress}` | +| cancel\_unbonding\_delegation | delegator | `{delegatorAddress}` | +| cancel\_unbonding\_delegation | amount | `{cancelUnbondingDelegationAmount}` | +| cancel\_unbonding\_delegation | creation\_height | `{unbondingCreationHeight}` | +| message | module | staking | +| message | action | cancel\_unbond | +| message | sender | `{senderAddress}` | + +### 
MsgBeginRedelegate
+
+| Type | Attribute Key | Attribute Value |
+| ---------- | ---------------------- | --------------------- |
+| redelegate | source\_validator | `{srcValidatorAddress}` |
+| redelegate | destination\_validator | `{dstValidatorAddress}` |
+| redelegate | amount | `{unbondAmount}` |
+| redelegate | completion\_time \[0] | `{completionTime}` |
+| message | module | staking |
+| message | action | begin\_redelegate |
+| message | sender | `{senderAddress}` |
+
+* \[0] Time is formatted in the RFC3339 standard
+
+## Parameters
+
+The staking module contains the following parameters:
+
+| Key | Type | Example |
+| ----------------- | ---------------- | ---------------------- |
+| UnbondingTime | string (time ns) | "259200000000000" |
+| MaxValidators | uint16 | 100 |
+| KeyMaxEntries | uint16 | 7 |
+| HistoricalEntries | uint16 | 3 |
+| BondDenom | string | "stake" |
+| MinCommissionRate | string | "0.000000000000000000" |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `staking` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `staking` state.
+
+```bash
+simd query staking --help
+```
+
+##### delegation
+
+The `delegation` command allows users to query delegations for an individual delegator on an individual validator.
+
+Usage:
+
+```bash
+simd query staking delegation [delegator-addr] [validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+balance:
+  amount: "10000000000"
+  denom: stake
+delegation:
+  delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+  shares: "10000000000.000000000000000000"
+  validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+##### delegations
+
+The `delegations` command allows users to query delegations for an individual delegator on all validators.
+ +Usage: + +```bash +simd query staking delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +delegation_responses: +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1x20lytyf6zkcrv5edpkfkn8sz578qg5sqfyqnp +pagination: + next_key: null + total: "0" +``` + +##### delegations-to + +The `delegations-to` command allows users to query delegations on an individual validator. + +Usage: + +```bash +simd query staking delegations-to [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations-to cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +- balance: + amount: "504000000" + denom: stake + delegation: + delegator_address: cosmos1q2qwwynhv8kh3lu5fkeex4awau9x8fwt45f5cp + shares: "504000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "78125000000" + denom: uixo + delegation: + delegator_address: cosmos1qvppl3479hw4clahe0kwdlfvf8uvjtcd99m2ca + shares: "78125000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +pagination: + next_key: null + total: "0" +``` + +##### historical-info + +The `historical-info` command allows users to query historical information at a given height.
+ +Usage: + +```bash +simd query staking historical-info [height] [flags] +``` + +Example: + +```bash +simd query staking historical-info 10 +``` + +Example Output: + +```bash expandable +header: + app_hash: Lbx8cXpI868wz8sgp4qPYVrlaKjevR5WP/IjUxwp3oo= + chain_id: testnet + consensus_hash: BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8= + data_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + evidence_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + height: "10" + last_block_id: + hash: RFbkpu6pWfSThXxKKl6EZVDnBSm16+U0l0xVjTX08Fk= + part_set_header: + hash: vpIvXD4rxD5GM4MXGz0Sad9I7/iVYLzZsEU4BVgWIU= + total: 1 + last_commit_hash: Ne4uXyx4QtNp4Zx89kf9UK7oG9QVbdB6e7ZwZkhy8K0= + last_results_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + next_validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + proposer_address: mMEP2c2IRPLr99LedSRtBg9eONM= + time: "2021-10-01T06:00:49.785790894Z" + validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + version: + app: "0" + block: "11" +valset: +- commission: + commission_rates: + max_change_rate: "0.010000000000000000" + max_rate: "0.200000000000000000" + rate: "0.100000000000000000" + update_time: "2021-10-01T05:52:50.380144238Z" + consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8= + delegator_shares: "10000000.000000000000000000" + description: + details: "" + identity: "" + moniker: myvalidator + security_contact: "" + website: "" + jailed: false + min_self_delegation: "1" + operator_address: cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc + status: BOND_STATUS_BONDED + tokens: "10000000" + unbonding_height: "0" + unbonding_time: "1970-01-01T00:00:00Z" +``` + +##### params + +The `params` command allows users to query values set as staking parameters. 
+ +Usage: + +```bash +simd query staking params [flags] +``` + +Example: + +```bash +simd query staking params +``` + +Example Output: + +```bash +bond_denom: stake +historical_entries: 10000 +max_entries: 7 +max_validators: 50 +unbonding_time: 1814400s +``` + +##### pool + +The `pool` command allows users to query values for amounts stored in the staking pool. + +Usage: + +```bash +simd q staking pool [flags] +``` + +Example: + +```bash +simd q staking pool +``` + +Example Output: + +```bash +bonded_tokens: "10000000" +not_bonded_tokens: "0" +``` + +##### redelegation + +The `redelegation` command allows users to query a redelegation record based on delegator and a source and destination validator address. + +Usage: + +```bash +simd query staking redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +pagination: null +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm + validator_src_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm +``` + +##### redelegations + +The `redelegations` command allows users to query all redelegation records for an individual delegator. 
+ +Usage: + +```bash +simd query staking redelegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp +- entries: + - balance: "562770000000" + redelegation_entry: + completion_time: "2021-10-25T21:42:07.336911677Z" + creation_height: 2.39735e+06 + initial_balance: "562770000000" + shares_dst: "562770000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp +``` + +##### redelegations-from + +The `redelegations-from` command allows users to query delegations that are redelegating *from* a validator.
+ +Usage: + +```bash +simd query staking redelegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegations-from cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1pm6e78p4pgn0da365plzl4t56pxy8hwtqp2mph + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +- entries: + - balance: "221000000" + redelegation_entry: + completion_time: "2021-10-05T21:05:45.669420544Z" + creation_height: 2.120693e+06 + initial_balance: "221000000" + shares_dst: "221000000.000000000000000000" + redelegation: + delegator_address: cosmos1zqv8qxy2zgn4c58fz8jt8jmhs3d0attcussrf6 + entries: null + validator_dst_address: cosmosvaloper10mseqwnwtjaqfrwwp2nyrruwmjp6u5jhah4c3y + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +##### unbonding-delegation + +The `unbonding-delegation` command allows users to query unbonding delegations for an individual delegator on an individual validator. 
+ +Usage: + +```bash +simd query staking unbonding-delegation [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +entries: +- balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" +validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### unbonding-delegations + +The `unbonding-delegations` command allows users to query all unbonding-delegations records for one delegator. + +Usage: + +```bash +simd query staking unbonding-delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: + - balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" + validator_address: cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa + +``` + +##### unbonding-delegations-from + +The `unbonding-delegations-from` command allows users to query delegations that are unbonding *from* a validator. 
+ +Usage: + +```bash +simd query staking unbonding-delegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations-from cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1qqq9txnw4c77sdvzx0tkedsafl5s3vk7hn53fn + entries: + - balance: "150000000" + completion_time: "2021-11-01T21:41:13.098141574Z" + creation_height: "46823" + initial_balance: "150000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- delegator_address: cosmos1peteje73eklqau66mr7h7rmewmt2vt99y24f5z + entries: + - balance: "24000000" + completion_time: "2021-10-31T02:57:18.192280361Z" + creation_height: "21516" + initial_balance: "24000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### validator + +The `validator` command allows users to query details about an individual validator. + +Usage: + +```bash +simd query staking validator [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking validator cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. 
+ identity: 51468B615127273A + moniker: Witval + security_contact: "" + website: "" +jailed: false +min_self_delegation: "1" +operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +status: BOND_STATUS_BONDED +tokens: "32948270000" +unbonding_height: "0" +unbonding_time: "1970-01-01T00:00:00Z" +``` + +##### validators + +The `validators` command allows users to query details about all validators on a network. + +Usage: + +```bash +simd query staking validators [flags] +``` + +Example: + +```bash +simd query staking validators +``` + +Example Output: + +```bash expandable +pagination: + next_key: FPTi7TKAjN63QqZh+BaXn6gBmD5/ + total: "0" +validators: +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. 
+ identity: 51468B615127273A + moniker: Witval + security_contact: "" + website: "" + jailed: false + min_self_delegation: "1" + operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj + status: BOND_STATUS_BONDED + tokens: "32948270000" + unbonding_height: "0" + unbonding_time: "1970-01-01T00:00:00Z" +- commission: + commission_rates: + max_change_rate: "0.100000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-04T18:02:21.446645619Z" + consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA= + delegator_shares: "559343421.000000000000000000" + description: + details: Noderunners is a professional validator in POS networks. We have a huge + node running experience, reliable soft and hardware. Our commissions are always + low, our support to delegators is always full. Stake with us and start receiving + your Cosmos rewards now! + identity: 812E82D12FEA3493 + moniker: Noderunners + security_contact: info@noderunners.biz + website: http://noderunners.biz + jailed: false + min_self_delegation: "1" + operator_address: cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7 + status: BOND_STATUS_BONDED + tokens: "559343421" + unbonding_height: "0" + unbonding_time: "1970-01-01T00:00:00Z" +``` + +#### Transactions + +The `tx` commands allow users to interact with the `staking` module. + +```bash +simd tx staking --help +``` + +##### create-validator + +The command `create-validator` allows users to create a new validator initialized with a self-delegation.
+ +Usage: + +```bash +simd tx staking create-validator [path/to/validator.json] [flags] +``` + +Example: + +```bash +simd tx staking create-validator /path/to/validator.json \ + --chain-id="name_of_chain_id" \ + --gas="auto" \ + --gas-adjustment="1.2" \ + --gas-prices="0.025stake" \ + --from=mykey +``` + +where `validator.json` contains: + +```json expandable +{ + "pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "BnbwFpeONLqvWqJb3qaUbL5aoIcW3fSuAp9nT3z5f20=" + }, + "amount": "1000000stake", + "moniker": "my-moniker", + "website": "https://myweb.site", + "security": "security-contact@gmail.com", + "details": "description of your validator", + "commission-rate": "0.10", + "commission-max-rate": "0.20", + "commission-max-change-rate": "0.01", + "min-self-delegation": "1" +} +``` + +The pubkey can be obtained using the `simd tendermint show-validator` command. + +##### delegate + +The command `delegate` allows users to delegate liquid tokens to a validator. + +Usage: + +```bash +simd tx staking delegate [validator-addr] [amount] [flags] +``` + +Example: + +```bash +simd tx staking delegate cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 1000stake --from mykey +``` + +##### edit-validator + +The command `edit-validator` allows users to edit an existing validator account. + +Usage: + +```bash +simd tx staking edit-validator [flags] +``` + +Example: + +```bash +simd tx staking edit-validator --moniker "new_moniker_name" --website "new_website_url" --from mykey +``` + +##### redelegate + +The command `redelegate` allows users to redelegate illiquid tokens from one validator to another.
+ +Usage: + +```bash +simd tx staking redelegate [src-validator-addr] [dst-validator-addr] [amount] [flags] +``` + +Example: + +```bash +simd tx staking redelegate cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 100stake --from mykey +``` + +##### unbond + +The command `unbond` allows users to unbond shares from a validator. + +Usage: + +```bash +simd tx staking unbond [validator-addr] [amount] [flags] +``` + +Example: + +```bash +simd tx staking unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake --from mykey +``` + +##### cancel unbond + +The command `cancel-unbond` allows users to cancel an unbonding delegation entry and delegate back to the original validator. + +Usage: + +```bash +simd tx staking cancel-unbond [validator-addr] [amount] [creation-height] +``` + +Example: + +```bash +simd tx staking cancel-unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake 123123 --from mykey +``` + +### gRPC + +A user can query the `staking` module using gRPC endpoints. + +#### Validators + +The `Validators` endpoint queries all validators that match the given status.
+ +```bash +cosmos.staking.v1beta1.Query/Validators +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Validators +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="}, + "status": "BOND_STATUS_BONDED", + "tokens": "10000000", + "delegatorShares": "10000000000000000000000000", + "description": { + "moniker": "myvalidator" + }, + "unbondingTime": "1970-01-01T00:00:00Z", + "commission": { + "commissionRates": { + "rate": "100000000000000000", + "maxRate": "200000000000000000", + "maxChangeRate": "10000000000000000" + }, + "updateTime": "2021-10-01T05:52:50.380144238Z" + }, + "minSelfDelegation": "1" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### Validator + +The `Validator` endpoint queries validator information for given validator address. 
+ +```bash +cosmos.staking.v1beta1.Query/Validator +``` + +Example: + +```bash +grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/Validator +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="}, + "status": "BOND_STATUS_BONDED", + "tokens": "10000000", + "delegatorShares": "10000000000000000000000000", + "description": { + "moniker": "myvalidator" + }, + "unbondingTime": "1970-01-01T00:00:00Z", + "commission": { + "commissionRates": { + "rate": "100000000000000000", + "maxRate": "200000000000000000", + "maxChangeRate": "10000000000000000" + }, + "updateTime": "2021-10-01T05:52:50.380144238Z" + }, + "minSelfDelegation": "1" + } +} +``` + +#### ValidatorDelegations + +The `ValidatorDelegations` endpoint queries delegation information for a given validator. + +```bash +cosmos.staking.v1beta1.Query/ValidatorDelegations +``` + +Example: + +```bash +grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/ValidatorDelegations +``` + +Example Output: + +```bash expandable +{ + "delegationResponses": [ + { + "delegation": { + "delegatorAddress": "cosmos1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgy3ua5t", + "validatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "shares": "10000000000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "10000000" + } + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### ValidatorUnbondingDelegations + +The `ValidatorUnbondingDelegations` endpoint queries the unbonding delegations of a given validator.
+ +```bash +cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations +``` + +Example: + +```bash +grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1z3pzzw84d6xn00pw9dy3yapqypfde7vg6965fy", + "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "entries": [ + { + "creation_height": "25325", + "completion_time": "2021-10-31T09:24:36.797320636Z", + "initial_balance": "20000000", + "balance": "20000000" + } + ] + }, + { + "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", + "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "entries": [ + { + "creation_height": "13100", + "completion_time": "2021-10-30T12:53:02.272266791Z", + "initial_balance": "1000000", + "balance": "1000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "8" + } +} +``` + +#### Delegation + +The `Delegation` endpoint queries delegation information for a given validator-delegator pair. + +```bash +cosmos.staking.v1beta1.Query/Delegation +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/Delegation +``` + +Example Output: + +```bash expandable +{ + "delegation_response": + { + "delegation": + { + "delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", + "validator_address":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "shares":"25083119936.000000000000000000" + }, + "balance": + { + "denom":"stake", + "amount":"25083119936" + } + } +} +``` + +#### UnbondingDelegation + +The `UnbondingDelegation` endpoint queries unbonding information for a given validator-delegator pair.
+ +```bash +cosmos.staking.v1beta1.Query/UnbondingDelegation +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/UnbondingDelegation +``` + +Example Output: + +```bash expandable +{ + "unbond": { + "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", + "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "entries": [ + { + "creation_height": "136984", + "completion_time": "2021-11-08T05:38:47.505593891Z", + "initial_balance": "400000000", + "balance": "400000000" + }, + { + "creation_height": "137005", + "completion_time": "2021-11-08T05:40:53.526196312Z", + "initial_balance": "385000000", + "balance": "385000000" + } + ] + } +} +``` + +#### DelegatorDelegations + +The `DelegatorDelegations` endpoint queries all delegations of a given delegator address. + +```bash +cosmos.staking.v1beta1.Query/DelegatorDelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorDelegations +``` + +Example Output: + +```bash +{ + "delegation_responses": [ + {"delegation":{"delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77","validator_address":"cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8","shares":"25083339023.000000000000000000"},"balance":{"denom":"stake","amount":"25083339023"}} + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorUnbondingDelegations + +The `DelegatorUnbondingDelegations` endpoint queries all unbonding delegations of a given delegator address.
+ +```bash +cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", + "validator_address": "cosmosvaloper1sjllsnramtg3ewxqwwrwjxfgc4n4ef9uxyejze", + "entries": [ + { + "creation_height": "136984", + "completion_time": "2021-11-08T05:38:47.505593891Z", + "initial_balance": "400000000", + "balance": "400000000" + }, + { + "creation_height": "137005", + "completion_time": "2021-11-08T05:40:53.526196312Z", + "initial_balance": "385000000", + "balance": "385000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### Redelegations + +The `Redelegations` endpoint queries redelegations of given address. + +```bash +cosmos.staking.v1beta1.Query/Redelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", "src_validator_addr" : "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", "dst_validator_addr" : "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/Redelegations +``` + +Example Output: + +```bash expandable +{ + "redelegation_responses": [ + { + "redelegation": { + "delegator_address": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", + "validator_src_address": "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", + "validator_dst_address": "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse", + "entries": null + }, + "entries": [ + { + "redelegation_entry": { + "creation_height": 135932, + "completion_time": "2021-11-08T03:52:55.299147901Z", + "initial_balance": "2900000", + "shares_dst": "2900000.000000000000000000" + }, + "balance": "2900000" + } + 
] + } + ], + "pagination": null +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` endpoint queries information for all validators of a given delegator. + +```bash +cosmos.staking.v1beta1.Query/DelegatorValidators +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidators +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "347260647559", + "delegator_shares": "347260647559.000000000000000000", + "description": { + "moniker": "BouBouNode", + "identity": "", + "website": "https://boubounode.com", + "security_contact": "", + "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love."
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.061000000000000000", + "max_rate": "0.300000000000000000", + "max_change_rate": "0.150000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidator + +The `DelegatorValidator` endpoint queries validator information for a given delegator-validator pair. + +```bash +cosmos.staking.v1beta1.Query/DelegatorValidator +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1eh5mwu044gd5ntkkc2xgfg8247mgc56f3n8rr7", "validator_addr": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidator +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "347262754841", + "delegator_shares": "347262754841.000000000000000000", + "description": { + "moniker": "BouBouNode", + "identity": "", + "website": "https://boubounode.com", + "security_contact": "", + "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love."
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.061000000000000000", + "max_rate": "0.300000000000000000", + "max_change_rate": "0.150000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### HistoricalInfo + +The `HistoricalInfo` endpoint queries the historical information for a given height. + +```bash +cosmos.staking.v1beta1.Query/HistoricalInfo +``` + +Example: + +```bash +grpcurl -plaintext -d '{"height" : 1}' localhost:9090 cosmos.staking.v1beta1.Query/HistoricalInfo +``` + +Example Output: + +```bash expandable +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "simd-1", + "height": "140142", + "time": "2021-10-11T10:56:29.720079569Z", + "last_block_id": { + "hash": "9gri/4LLJUBFqioQ3NzZIP9/7YHR9QqaM6B2aJNQA7o=", + "part_set_header": { + "total": 1, + "hash": "Hk1+C864uQkl9+I6Zn7IurBZBKUevqlVtU7VqaZl1tc=" + } + }, + "last_commit_hash": "VxrcS27GtvGruS3I9+AlpT7udxIT1F0OrRklrVFSSKc=", + "data_hash": "80BjOrqNYUOkTnmgWyz9AQ8n7SoEmPVi4QmAe8RbQBY=", + "validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", + "next_validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "ZZaxnSY3E6Ex5Bvkm+RigYCK82g8SSUL53NymPITeOE=", + "last_results_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "aH6dO428B+ItuoqPq70efFHrSMY=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1426045203613", + "delegator_shares": "1426045203613.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website":
"https://sg-1.online", + "security_contact": "", + "details": "SG-1 - your favorite validator on Witval. We offer 100% Soft Slash protection." + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.037500000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.030000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } + ] + } +} + +``` + +#### Pool + +The `Pool` endpoint queries the pool information. + +```bash +cosmos.staking.v1beta1.Query/Pool +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Pool +``` + +Example Output: + +```bash +{ + "pool": { + "not_bonded_tokens": "369054400189", + "bonded_tokens": "15657192425623" + } +} +``` + +#### Params + +The `Params` endpoint queries the staking parameters. + +```bash +cosmos.staking.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Params +``` + +Example Output: + +```bash +{ + "params": { + "unbondingTime": "1814400s", + "maxValidators": 100, + "maxEntries": 7, + "historicalEntries": 10000, + "bondDenom": "stake" + } +} +``` + +### REST + +A user can query the `staking` module using REST endpoints. + +#### DelegatorDelegations + +The `DelegatorDelegations` REST endpoint queries all delegations of a given delegator address.
+ +```bash +/cosmos/staking/v1beta1/delegations/{delegatorAddr} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/delegations/cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_responses": [ + { + "delegation": { + "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", + "validator_address": "cosmosvaloper1quqxfrxkycr0uzt4yk0d57tcq3zk7srm7sm6r8", + "shares": "256250000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "256250000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", + "validator_address": "cosmosvaloper194v8uwee2fvs2s8fa5k7j03ktwc87h5ym39jfv", + "shares": "255150000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "255150000" + } + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### Redelegations + +The `Redelegations` REST endpoint queries redelegations of given address. 
+ +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/redelegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e/redelegations?srcValidatorAddr=cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf&dstValidatorAddr=cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "redelegation_responses": [ + { + "redelegation": { + "delegator_address": "cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e", + "validator_src_address": "cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf", + "validator_dst_address": "cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4", + "entries": null + }, + "entries": [ + { + "redelegation_entry": { + "creation_height": 151523, + "completion_time": "2021-11-09T06:03:25.640682116Z", + "initial_balance": "200000000", + "shares_dst": "200000000.000000000000000000" + }, + "balance": "200000000" + } + ] + } + ], + "pagination": null +} +``` + +#### DelegatorUnbondingDelegations + +The `DelegatorUnbondingDelegations` REST endpoint queries all unbonding delegations of a given delegator address. 
+ +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/unbonding_delegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll/unbonding_delegations" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll", + "validator_address": "cosmosvaloper1e7mvqlz50ch6gw4yjfemsc069wfre4qwmw53kq", + "entries": [ + { + "creation_height": "2442278", + "completion_time": "2021-10-12T10:59:03.797335857Z", + "initial_balance": "50000000000", + "balance": "50000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` REST endpoint queries all validators information for given delegator address. + +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "21592843799", + "delegator_shares": "21592843799.000000000000000000", + "description": { + "moniker": "jabbey", + "identity": "", + "website": "https://twitter.com/JoeAbbey", + "security_contact": "", + "details": "just another dad in the cosmos" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.100000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + 
}, + "update_time": "2021-10-09T19:03:54.984821705Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidator + +The `DelegatorValidator` REST endpoint queries validator information for given delegator validator pair. + +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators/{validatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators/cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "21592843799", + "delegator_shares": "21592843799.000000000000000000", + "description": { + "moniker": "jabbey", + "identity": "", + "website": "https://twitter.com/JoeAbbey", + "security_contact": "", + "details": "just another dad in the cosmos" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.100000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-09T19:03:54.984821705Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### HistoricalInfo + +The `HistoricalInfo` REST endpoint queries the historical information for given height. 
+ +```bash +/cosmos/staking/v1beta1/historical_info/{height} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/historical_info/153332" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "cosmos-1", + "height": "153332", + "time": "2021-10-12T09:05:35.062230221Z", + "last_block_id": { + "hash": "NX8HevR5khb7H6NGKva+jVz7cyf0skF1CrcY9A0s+d8=", + "part_set_header": { + "total": 1, + "hash": "zLQ2FiKM5tooL3BInt+VVfgzjlBXfq0Hc8Iux/xrhdg=" + } + }, + "last_commit_hash": "P6IJrK8vSqU3dGEyRHnAFocoDGja0bn9euLuy09s350=", + "data_hash": "eUd+6acHWrNXYju8Js449RJ99lOYOs16KpqQl4SMrEM=", + "validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "next_validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "fuELArKRK+CptnZ8tu54h6xEleSWenHNmqC84W866fU=", + "last_results_hash": "p/BPexV4LxAzlVcPRvW+lomgXb6Yze8YLIQUo/4Kdgc=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "G0MeY8xQx7ooOsni8KE/3R/Ib3Q=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1416521659632", + "delegator_shares": "1416521659632.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website": "https://sg-1.online", + "security_contact": "", + "details": "SG-1 - your favorite validator on cosmos. We offer 100% Soft Slash protection." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.037500000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.030000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "uExZyjNLtr2+FFIhNDAMcQ8+yTrqE7ygYTsI7khkA5Y=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1348298958808", + "delegator_shares": "1348298958808.000000000000000000", + "description": { + "moniker": "Cosmostation", + "identity": "AE4C403A6E7AA1AC", + "website": "https://www.cosmostation.io", + "security_contact": "admin@stamper.network", + "details": "Cosmostation validator node. Delegate your tokens and Start Earning Staking Rewards" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "1.000000000000000000", + "max_change_rate": "0.200000000000000000" + }, + "update_time": "2021-10-01T15:06:38.821314287Z" + }, + "min_self_delegation": "1" + } + ] + } +} +``` + +#### Parameters + +The `Parameters` REST endpoint queries the staking parameters. + +```bash +/cosmos/staking/v1beta1/params +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/params" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "params": { + "unbonding_time": "2419200s", + "max_validators": 100, + "max_entries": 7, + "historical_entries": 10000, + "bond_denom": "stake" + } +} +``` + +#### Pool + +The `Pool` REST endpoint queries the pool information. 
+ +```bash +/cosmos/staking/v1beta1/pool +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/pool" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "pool": { + "not_bonded_tokens": "432805737458", + "bonded_tokens": "15783637712645" + } +} +``` + +#### Validators + +The `Validators` REST endpoint queries all validators that match the given status. + +```bash +/cosmos/staking/v1beta1/validators +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1q3jsx9dpfhtyqqgetwpe5tmk8f0ms5qywje8tw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "N7BPyek2aKuNZ0N/8YsrqSDhGZmgVaYUBuddY8pwKaE=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "383301887799", + "delegator_shares": "383301887799.000000000000000000", + "description": { + "moniker": "SmartNodes", + "identity": "D372724899D1EDC8", + "website": "https://smartnodes.co", + "security_contact": "", + "details": "Earn Rewards with Crypto Staking & Node Deployment" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-01T15:51:31.596618510Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=" + }, + "jailed": false, + "status": "BOND_STATUS_UNBONDING", + "tokens": "1017819654", + "delegator_shares": "1017819654.000000000000000000", + "description": { + "moniker": "Noderunners", + "identity": "812E82D12FEA3493", + "website": 
"http://noderunners.biz", + "security_contact": "info@noderunners.biz", + "details": "Noderunners is a professional validator in POS networks. We have a huge node running experience, reliable soft and hardware. Our commissions are always low, our support to delegators is always full. Stake with us and start receiving your cosmos rewards now!" + }, + "unbonding_height": "147302", + "unbonding_time": "2021-11-08T22:58:53.718662452Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-04T18:02:21.446645619Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": "FONDBFkE4tEEf7yxWWKOD49jC2NK", + "total": "2" + } +} +``` + +#### Validator + +The `Validator` REST endpoint queries validator information for given validator address. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "33027900000", + "delegator_shares": "33027900000.000000000000000000", + "description": { + "moniker": "Witval", + "identity": "51468B615127273A", + "website": "", + "security_contact": "", + "details": "Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.020000000000000000" + }, + "update_time": "2021-10-01T19:24:52.663191049Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### ValidatorDelegations + +The `ValidatorDelegations` REST endpoint queries delegate information for given validator. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_responses": [ + { + "delegation": { + "delegator_address": "cosmos190g5j8aszqhvtg7cprmev8xcxs6csra7xnk3n3", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "31000000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "31000000000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1ddle9tczl87gsvmeva3c48nenyng4n56qwq4ee", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "628470000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "628470000" + } + }, + { + "delegation": { + "delegator_address": "cosmos10fdvkczl76m040smd33lh9xn9j0cf26kk4s2nw", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "838120000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "838120000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "500000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "500000000" + } + }, + { + "delegation": { + 
"delegator_address": "cosmos16msryt3fqlxtvsy8u5ay7wv2p8mglfg9hrek2e", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "61310000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "61310000" + } + } + ], + "pagination": { + "next_key": null, + "total": "5" + } +} +``` + +#### Delegation + +The `Delegation` REST endpoint queries delegate information for given validator delegator pair. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations/cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_response": { + "delegation": { + "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "500000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "500000000" + } + } +} +``` + +#### UnbondingDelegation + +The `UnbondingDelegation` REST endpoint queries unbonding information for given validator delegator pair. 
+ +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}/unbonding_delegation +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/delegations/cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm/unbonding_delegation" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbond": { + "delegator_address": "cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "153687", + "completion_time": "2021-11-09T09:41:18.352401903Z", + "initial_balance": "525111", + "balance": "525111" + } + ] + } +} +``` + +#### ValidatorUnbondingDelegations + +The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/unbonding_delegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/unbonding_delegations" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1q9snn84jfrd9ge8t46kdcggpe58dua82vnj7uy", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "90998", + "completion_time": "2021-11-05T00:14:37.005841058Z", + "initial_balance": "24000000", + "balance": "24000000" + } + ] + }, + { + "delegator_address": "cosmos1qf36e6wmq9h4twhdvs6pyq9qcaeu7ye0s3dqq2", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "47478", + "completion_time": "2021-11-01T22:47:26.714116854Z", + "initial_balance": "8000000", + "balance": "8000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} 
+``` diff --git a/docs/sdk/v0.50/documentation/module-system/modules/upgrade/README.mdx b/docs/sdk/v0.50/documentation/module-system/modules/upgrade/README.mdx new file mode 100644 index 00000000..3e4f3230 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/modules/upgrade/README.mdx @@ -0,0 +1,640 @@ +--- +title: '`x/upgrade`' +--- + +## Abstract + +`x/upgrade` is an implementation of a Cosmos SDK module that facilitates smoothly +upgrading a live Cosmos chain to a new (breaking) software version. It accomplishes this by +providing a `PreBlocker` hook that prevents the blockchain state machine from +proceeding once a pre-defined upgrade block height has been reached. + +The module does not prescribe anything regarding how governance decides to do an +upgrade, but just the mechanism for coordinating the upgrade safely. Without software +support for upgrades, upgrading a live chain is risky because all of the validators +need to pause their state machines at exactly the same point in the process. If +this is not done correctly, there can be state inconsistencies which are hard to +recover from. + +* [Concepts](#concepts) +* [State](#state) +* [Events](#events) +* [Client](#client) + * [CLI](#cli) + * [REST](#rest) + * [gRPC](#grpc) +* [Resources](#resources) + +## Concepts + +### Plan + +The `x/upgrade` module defines a `Plan` type in which a live upgrade is scheduled +to occur. A `Plan` can be scheduled at a specific block height. +A `Plan` is created once a (frozen) release candidate along with an appropriate upgrade +`Handler` (see below) is agreed upon, where the `Name` of a `Plan` corresponds to a +specific `Handler`. Typically, a `Plan` is created through a governance proposal +process, where if voted upon and passed, will be scheduled. The `Info` of a `Plan` +may contain various metadata about the upgrade, typically application specific +upgrade info to be included on-chain such as a git commit that validators could +automatically upgrade to. 
+ +```go +type Plan struct { + Name string + Height int64 + Info string +} +``` + +#### Sidecar Process + +If an operator running the application binary also runs a sidecar process to assist +in the automatic download and upgrade of a binary, the `Info` allows this process to +be seamless. This tool is [Cosmovisor](https://github.com/cosmos/cosmos-sdk/tree/main/tools/cosmovisor#readme). + +### Handler + +The `x/upgrade` module facilitates upgrading from major version X to major version Y. To +accomplish this, node operators must first upgrade their current binary to a new +binary that has a corresponding `Handler` for the new version Y. It is assumed that +this version has fully been tested and approved by the community at large. This +`Handler` defines what state migrations need to occur before the new binary Y +can successfully run the chain. Naturally, this `Handler` is application specific +and not defined on a per-module basis. Registering a `Handler` is done via +`Keeper#SetUpgradeHandler` in the application. + +```go +type UpgradeHandler func(Context, Plan, VersionMap) (VersionMap, error) +``` + +During each `EndBlock` execution, the `x/upgrade` module checks if there exists a +`Plan` that should execute (is scheduled at that height). If so, the corresponding +`Handler` is executed. If the `Plan` is expected to execute but no `Handler` is registered +or if the binary was upgraded too early, the node will gracefully panic and exit. + +### StoreLoader + +The `x/upgrade` module also facilitates store migrations as part of the upgrade. The +`StoreLoader` sets the migrations that need to occur before the new binary can +successfully run the chain. This `StoreLoader` is also application specific and +not defined on a per-module basis. Registering this `StoreLoader` is done via +`app#SetStoreLoader` in the application. 
+
+```go
+func UpgradeStoreLoader(upgradeHeight int64, storeUpgrades *store.StoreUpgrades) baseapp.StoreLoader
+```
+
+If there's a planned upgrade and the upgrade height is reached, the old binary writes the `Plan` to disk before panicking.
+
+This information is critical to ensure that the `StoreUpgrades` happen smoothly at the correct height for the
+expected upgrade. It eliminates the chance of the new binary executing `StoreUpgrades` multiple
+times on every restart. Also, if there are multiple upgrades planned at the same height, the `Name`
+ensures these `StoreUpgrades` take place only with the planned upgrade handler.
+
+### Proposal
+
+Typically, a `Plan` is proposed and submitted through governance via a proposal
+containing a `MsgSoftwareUpgrade` message.
+This proposal follows the standard governance process. If the proposal passes,
+the `Plan`, which targets a specific `Handler`, is persisted and scheduled. The
+upgrade can be delayed or hastened by updating the `Plan.Height` in a new proposal.
+
+```protobuf
+// MsgSoftwareUpgrade is the Msg/SoftwareUpgrade request type.
+//
+// Since: cosmos-sdk 0.46
+message MsgSoftwareUpgrade {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/MsgSoftwareUpgrade";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // plan is the upgrade plan.
+  Plan plan = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+#### Cancelling Upgrade Proposals
+
+Upgrade proposals can be cancelled. There exists a gov-enabled `MsgCancelUpgrade`
+message type, which can be embedded in a proposal, voted on and, if passed, will
+remove the scheduled upgrade `Plan`.
+Of course this requires that the upgrade was known to be a bad idea well before the
+upgrade itself, to allow time for a vote.
+
+```protobuf
+// MsgCancelUpgrade is the Msg/CancelUpgrade request type.
+//
+// Since: cosmos-sdk 0.46
+message MsgCancelUpgrade {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/MsgCancelUpgrade";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+If such a possibility is desired, the upgrade height should be set at least
+`2 * (VotingPeriod + DepositPeriod) + SafetyDelta` from the beginning of the
+upgrade proposal. The `SafetyDelta` is the time available between the passing of an
+upgrade proposal and the realization that it was a bad idea (due to external social consensus).
+
+A `MsgCancelUpgrade` proposal can also be made while the original
+`MsgSoftwareUpgrade` proposal is still being voted upon, as long as the `VotingPeriod`
+ends after the `MsgSoftwareUpgrade` proposal.
+
+## State
+
+The internal state of the `x/upgrade` module is relatively minimal and simple. The
+state contains the currently active upgrade `Plan` (if one exists) under key
+`0x0`, and whether a `Plan` is marked as "done" under key `0x1`. The state also
+contains the consensus versions of all app modules in the application. The versions
+are stored as big endian `uint64`, and can be accessed with prefix `0x2` appended
+by the corresponding module name of type `string`. The state maintains a
+`Protocol Version` which can be accessed by key `0x3`.
+
+* Plan: `0x0 -> Plan`
+* Done: `0x1 | byte(plan name) -> BigEndian(Block Height)`
+* ConsensusVersion: `0x2 | byte(module name) -> BigEndian(Module Consensus Version)`
+* ProtocolVersion: `0x3 -> BigEndian(Protocol Version)`
+
+The `x/upgrade` module contains no genesis state.
+
+## Events
+
+The `x/upgrade` module does not emit any events by itself. Any and all proposal related
+events are emitted through the `x/gov` module.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `upgrade` module using the CLI.
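+
+As a quick illustration of the `2 * (VotingPeriod + DepositPeriod) + SafetyDelta` guidance from the
+Cancelling Upgrade Proposals section, the sketch below converts those periods into a concrete block
+height that could be passed as `--upgrade-height` to the transaction commands below. All block-time
+and period values here are illustrative assumptions, not chain defaults:
+
+```go
+package main
+
+import "fmt"
+
+func main() {
+	// Illustrative assumptions only: 5s average block time,
+	// 2-day voting period, 1-day deposit period, 6h safety delta.
+	const blockTimeSec = 5
+	votingPeriodSec := 2 * 24 * 3600
+	depositPeriodSec := 1 * 24 * 3600
+	safetyDeltaSec := 6 * 3600
+
+	// 2*(VotingPeriod + DepositPeriod) + SafetyDelta, converted to blocks.
+	offsetBlocks := (2*(votingPeriodSec+depositPeriodSec) + safetyDeltaSec) / blockTimeSec
+
+	currentHeight := 1000000
+	fmt.Println("suggested --upgrade-height:", currentHeight+offsetBlocks)
+}
+```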
+ +#### Query + +The `query` commands allow users to query `upgrade` state. + +```bash +simd query upgrade --help +``` + +##### applied + +The `applied` command allows users to query the block header for height at which a completed upgrade was applied. + +```bash +simd query upgrade applied [upgrade-name] [flags] +``` + +If upgrade-name was previously executed on the chain, this returns the header for the block at which it was applied. +This helps a client determine which binary was valid over a given range of blocks, as well as more context to understand past migrations. + +Example: + +```bash +simd query upgrade applied "test-upgrade" +``` + +Example Output: + +```bash expandable +"block_id": { + "hash": "A769136351786B9034A5F196DC53F7E50FCEB53B48FA0786E1BFC45A0BB646B5", + "parts": { + "total": 1, + "hash": "B13CBD23011C7480E6F11BE4594EE316548648E6A666B3575409F8F16EC6939E" + } + }, + "block_size": "7213", + "header": { + "version": { + "block": "11" + }, + "chain_id": "testnet-2", + "height": "455200", + "time": "2021-04-10T04:37:57.085493838Z", + "last_block_id": { + "hash": "0E8AD9309C2DC411DF98217AF59E044A0E1CCEAE7C0338417A70338DF50F4783", + "parts": { + "total": 1, + "hash": "8FE572A48CD10BC2CBB02653CA04CA247A0F6830FF19DC972F64D339A355E77D" + } + }, + "last_commit_hash": "DE890239416A19E6164C2076B837CC1D7F7822FC214F305616725F11D2533140", + "data_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", + "next_validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", + "consensus_hash": "048091BC7DDC283F77BFBF91D73C44DA58C3DF8A9CBC867405D8B7F3DAADA22F", + "app_hash": "28ECC486AFC332BA6CC976706DBDE87E7D32441375E3F10FD084CD4BAF0DA021", + "last_results_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "evidence_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + 
"proposer_address": "2ABC4854B1A1C5AA8403C4EA853A81ACA901CC76"
+  },
+  "num_txs": "0"
+}
+```
+
+##### module versions
+
+The `module_versions` command gets a list of module names and their respective consensus versions.
+
+Following the command with a specific module name will return only
+that module's information.
+
+```bash
+simd query upgrade module_versions [optional module_name] [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions
+```
+
+Example Output:
+
+```bash expandable
+module_versions:
+- name: auth
+  version: "2"
+- name: authz
+  version: "1"
+- name: bank
+  version: "2"
+- name: crisis
+  version: "1"
+- name: distribution
+  version: "2"
+- name: evidence
+  version: "1"
+- name: feegrant
+  version: "1"
+- name: genutil
+  version: "1"
+- name: gov
+  version: "2"
+- name: ibc
+  version: "2"
+- name: mint
+  version: "1"
+- name: params
+  version: "1"
+- name: slashing
+  version: "2"
+- name: staking
+  version: "2"
+- name: transfer
+  version: "1"
+- name: upgrade
+  version: "1"
+- name: vesting
+  version: "1"
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions ibc
+```
+
+Example Output:
+
+```bash
+module_versions:
+- name: ibc
+  version: "2"
+```
+
+##### plan
+
+The `plan` command gets the currently scheduled upgrade plan, if one exists.
+
+```bash
+simd query upgrade plan [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade plan
+```
+
+Example Output:
+
+```bash
+height: "130"
+info: ""
+name: test-upgrade
+time: "0001-01-01T00:00:00Z"
+upgraded_client_state: null
+```
+
+#### Transactions
+
+The upgrade module supports the following transactions:
+
+* `software-upgrade` - submits an upgrade proposal:
+
+```bash
+simd tx upgrade software-upgrade v2 --title="Test Proposal" --summary="testing" --deposit="100000000stake" --upgrade-height 1000000 \
+--upgrade-info '{ "binaries": { "linux/amd64":"https://example.com/simd.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f" } }' --from cosmos1..
+```
+
+* `cancel-software-upgrade` - cancels a previously submitted upgrade proposal:
+
+```bash
+simd tx upgrade cancel-software-upgrade --title="Test Proposal" --summary="testing" --deposit="100000000stake" --from cosmos1..
+```
+
+### REST
+
+A user can query the `upgrade` module using REST endpoints.
+
+#### Applied Plan
+
+`AppliedPlan` queries a previously applied upgrade plan by its name.
+
+```bash
+/cosmos/upgrade/v1beta1/applied_plan/{name}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/applied_plan/v2.0-upgrade" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "height": "30"
+}
+```
+
+#### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+/cosmos/upgrade/v1beta1/current_plan
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/current_plan" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "plan": "v2.1-upgrade"
+}
+```
+
+#### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+/cosmos/upgrade/v1beta1/module_versions
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/module_versions" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash expandable
+{
+  "module_versions": [
+    {
+      "name": "auth",
+      "version": "2"
+    },
+    {
+      "name": "authz",
+      "version": "1"
+    },
+    {
+      "name": "bank",
+      "version": "2"
+    },
+    {
+      "name": "crisis",
+      "version": "1"
+    },
+    {
+      "name": "distribution",
+      "version": "2"
+    },
+    {
+      "name": "evidence",
+      "version": "1"
+    },
+    {
+      "name": "feegrant",
+      "version": "1"
+    },
+    {
+      "name": "genutil",
+      "version": "1"
+    },
+    {
+      "name": "gov",
+      "version": "2"
+    },
+    {
+      "name": "ibc",
+      "version": "2"
+    },
+    {
+      "name": "mint",
+      "version": "1"
+    },
+    {
+      "name": "params",
+      "version": "1"
+    },
+    {
+      "name": "slashing",
+      "version": "2"
+    },
+    {
+      "name": "staking",
+      "version": "2"
+    },
+    {
+      "name": "transfer",
+      "version": "1"
+    },
+    {
+      "name": "upgrade",
+      "version": "1"
+    },
+    {
+      "name": "vesting",
+      "version": "1"
+    }
+  ]
+}
+```
+
+### gRPC
+
+A user can query the `upgrade` module using gRPC endpoints.
+
+#### Applied Plan
+
+`AppliedPlan` queries a previously applied upgrade plan by its name.
+
+```bash
+cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+  -d '{"name":"v2.0-upgrade"}' \
+  localhost:9090 \
+  cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example Output:
+
+```bash
+{
+  "height": "30"
+}
+```
+
+#### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example Output:
+
+```bash
+{
+  "plan": "v2.1-upgrade"
+}
+```
+
+#### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example Output:
+
+```bash expandable
+{
+  "module_versions": [
+    {
+      "name": "auth",
+      "version": "2"
+    },
+    {
+      "name": "authz",
+      "version": "1"
+    },
+    {
+      "name": "bank",
+      "version": "2"
+    },
+    {
+      "name": "crisis",
+      "version": "1"
+    },
+    {
+      "name": "distribution",
+      "version": "2"
+    },
+    {
+      "name": "evidence",
+      "version": "1"
+    },
+    {
+      "name": "feegrant",
+      "version": "1"
+    },
+    {
+      "name": "genutil",
+      "version": "1"
+    },
+    {
+      "name": "gov",
+      "version": "2"
+    },
+    {
+      "name": "ibc",
+      "version": "2"
+    },
+    {
+      "name": "mint",
+      "version": "1"
+    },
+    {
+      "name": "params",
+      "version": "1"
+    },
+    {
+      "name": "slashing",
+      "version": "2"
+    },
+    {
+      "name": "staking",
+      "version": "2"
+    },
+    {
+      "name": "transfer",
+      "version": "1"
+    },
+    {
+      "name": "upgrade",
+      "version": "1"
+    },
+    {
+      "name": "vesting",
+      "version": "1"
+    }
+  ]
+}
+```
+
+## Resources
+
+A list of (external) resources to learn more about the `x/upgrade` module.
+
+* [Cosmos Dev Series: Cosmos Blockchain Upgrade](https://medium.com/web3-surfers/cosmos-dev-series-cosmos-sdk-based-blockchain-upgrade-b5e99181554c) - A blog post that explains in detail how software upgrades work.
diff --git a/docs/sdk/v0.50/documentation/module-system/msg-services.mdx b/docs/sdk/v0.50/documentation/module-system/msg-services.mdx
new file mode 100644
index 00000000..1d1cb61a
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/msg-services.mdx
@@ -0,0 +1,3599 @@
+---
+title: '`Msg` Services'
+---
+
+
+**Synopsis**
+A Protobuf `Msg` service processes [messages](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages). Protobuf `Msg` services are specific to the module in which they are defined, and only process messages defined within said module. 
They are called from `BaseApp` during [`DeliverTx`](/docs/sdk/v0.50/learn/advanced/baseapp#delivertx).
+
+
+
+**Pre-requisite Readings**
+
+* [Module Manager](/docs/sdk/v0.50/documentation/module-system/module-manager)
+* [Messages and Queries](/docs/sdk/v0.50/documentation/module-system/messages-and-queries)
+
+
+
+## Implementation of a module `Msg` service
+
+Each module should define a Protobuf `Msg` service, which will be responsible for processing requests (implementing `sdk.Msg`) and returning responses.
+
+As further described in [ADR 031](/docs/sdk/next/documentation/legacy/adr-comprehensive), this approach has the advantage of clearly specifying return types and generating server and client code.
+
+Protobuf generates a `MsgServer` interface based on the definition of the `Msg` service. It is the role of the module developer to implement this interface, by implementing the state transition logic that should happen upon receipt of each `sdk.Msg`. As an example, here is the generated `MsgServer` interface for `x/bank`, which exposes two `sdk.Msg`s:
+
+```go expandable
+/ Code generated by protoc-gen-gogo. DO NOT EDIT.
+/ source: cosmos/bank/v1beta1/tx.proto
+
+package types
+
+import (
+
+	context "context"
+	fmt "fmt"
+	_ "github.com/cosmos/cosmos-proto"
+	github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types"
+	types "github.com/cosmos/cosmos-sdk/types"
+	_ "github.com/cosmos/cosmos-sdk/types/msgservice"
+	_ "github.com/cosmos/cosmos-sdk/types/tx/amino"
+	_ "github.com/cosmos/gogoproto/gogoproto"
+	grpc1 "github.com/cosmos/gogoproto/grpc"
+	proto "github.com/cosmos/gogoproto/proto"
+	grpc "google.golang.org/grpc"
+	codes "google.golang.org/grpc/codes"
+	status "google.golang.org/grpc/status"
+	io "io"
+	math "math"
+	math_bits "math/bits"
+)
+
+/ Reference imports to suppress errors if they are not otherwise used. 
+var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the proto package it is being compiled against. +/ A compilation error at this line likely means your copy of the +/ proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 / please upgrade the proto package + +/ MsgSend represents a message to send coins from one account to another. +type MsgSend struct { + FromAddress string `protobuf:"bytes,1,opt,name=from_address,json=fromAddress,proto3" json:"from_address,omitempty"` + ToAddress string `protobuf:"bytes,2,opt,name=to_address,json=toAddress,proto3" json:"to_address,omitempty"` + Amount github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=amount,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"amount"` +} + +func (m *MsgSend) + +Reset() { *m = MsgSend{ +} +} + +func (m *MsgSend) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSend) + +ProtoMessage() { +} + +func (*MsgSend) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{0 +} +} + +func (m *MsgSend) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSend) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSend.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSend) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSend.Merge(m, src) +} + +func (m *MsgSend) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSend) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSend.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSend proto.InternalMessageInfo + +/ MsgSendResponse defines the Msg/Send response type. 
+type MsgSendResponse struct { +} + +func (m *MsgSendResponse) + +Reset() { *m = MsgSendResponse{ +} +} + +func (m *MsgSendResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSendResponse) + +ProtoMessage() { +} + +func (*MsgSendResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{1 +} +} + +func (m *MsgSendResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSendResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSendResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSendResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSendResponse.Merge(m, src) +} + +func (m *MsgSendResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSendResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSendResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSendResponse proto.InternalMessageInfo + +/ MsgMultiSend represents an arbitrary multi-in, multi-out send message. +type MsgMultiSend struct { + / Inputs, despite being `repeated`, only allows one sender input. This is + / checked in MsgMultiSend's ValidateBasic. 
+ Inputs []Input `protobuf:"bytes,1,rep,name=inputs,proto3" json:"inputs"` + Outputs []Output `protobuf:"bytes,2,rep,name=outputs,proto3" json:"outputs"` +} + +func (m *MsgMultiSend) + +Reset() { *m = MsgMultiSend{ +} +} + +func (m *MsgMultiSend) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgMultiSend) + +ProtoMessage() { +} + +func (*MsgMultiSend) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{2 +} +} + +func (m *MsgMultiSend) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgMultiSend) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgMultiSend.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgMultiSend) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgMultiSend.Merge(m, src) +} + +func (m *MsgMultiSend) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgMultiSend) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgMultiSend.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgMultiSend proto.InternalMessageInfo + +func (m *MsgMultiSend) + +GetInputs() []Input { + if m != nil { + return m.Inputs +} + +return nil +} + +func (m *MsgMultiSend) + +GetOutputs() []Output { + if m != nil { + return m.Outputs +} + +return nil +} + +/ MsgMultiSendResponse defines the Msg/MultiSend response type. 
+type MsgMultiSendResponse struct { +} + +func (m *MsgMultiSendResponse) + +Reset() { *m = MsgMultiSendResponse{ +} +} + +func (m *MsgMultiSendResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgMultiSendResponse) + +ProtoMessage() { +} + +func (*MsgMultiSendResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{3 +} +} + +func (m *MsgMultiSendResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgMultiSendResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgMultiSendResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgMultiSendResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgMultiSendResponse.Merge(m, src) +} + +func (m *MsgMultiSendResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgMultiSendResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgMultiSendResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgMultiSendResponse proto.InternalMessageInfo + +/ MsgUpdateParams is the Msg/UpdateParams request type. +/ +/ Since: cosmos-sdk 0.47 +type MsgUpdateParams struct { + / authority is the address that controls the module (defaults to x/gov unless overwritten). + Authority string `protobuf:"bytes,1,opt,name=authority,proto3" json:"authority,omitempty"` + / params defines the x/bank parameters to update. + / + / NOTE: All parameters must be supplied. 
+ Params Params `protobuf:"bytes,2,opt,name=params,proto3" json:"params"` +} + +func (m *MsgUpdateParams) + +Reset() { *m = MsgUpdateParams{ +} +} + +func (m *MsgUpdateParams) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgUpdateParams) + +ProtoMessage() { +} + +func (*MsgUpdateParams) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{4 +} +} + +func (m *MsgUpdateParams) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgUpdateParams) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgUpdateParams.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgUpdateParams) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgUpdateParams.Merge(m, src) +} + +func (m *MsgUpdateParams) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgUpdateParams) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgUpdateParams.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgUpdateParams proto.InternalMessageInfo + +func (m *MsgUpdateParams) + +GetAuthority() + +string { + if m != nil { + return m.Authority +} + +return "" +} + +func (m *MsgUpdateParams) + +GetParams() + +Params { + if m != nil { + return m.Params +} + +return Params{ +} +} + +/ MsgUpdateParamsResponse defines the response structure for executing a +/ MsgUpdateParams message. 
+/ +/ Since: cosmos-sdk 0.47 +type MsgUpdateParamsResponse struct { +} + +func (m *MsgUpdateParamsResponse) + +Reset() { *m = MsgUpdateParamsResponse{ +} +} + +func (m *MsgUpdateParamsResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgUpdateParamsResponse) + +ProtoMessage() { +} + +func (*MsgUpdateParamsResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{5 +} +} + +func (m *MsgUpdateParamsResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgUpdateParamsResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgUpdateParamsResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgUpdateParamsResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgUpdateParamsResponse.Merge(m, src) +} + +func (m *MsgUpdateParamsResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgUpdateParamsResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgUpdateParamsResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgUpdateParamsResponse proto.InternalMessageInfo + +/ MsgSetSendEnabled is the Msg/SetSendEnabled request type. +/ +/ Only entries to add/update/delete need to be included. +/ Existing SendEnabled entries that are not included in this +/ message are left unchanged. +/ +/ Since: cosmos-sdk 0.47 +type MsgSetSendEnabled struct { + Authority string `protobuf:"bytes,1,opt,name=authority,proto3" json:"authority,omitempty"` + / send_enabled is the list of entries to add or update. + SendEnabled []*SendEnabled `protobuf:"bytes,2,rep,name=send_enabled,json=sendEnabled,proto3" json:"send_enabled,omitempty"` + / use_default_for is a list of denoms that should use the params.default_send_enabled value. 
+ / Denoms listed here will have their SendEnabled entries deleted. + / If a denom is included that doesn't have a SendEnabled entry, + / it will be ignored. + UseDefaultFor []string `protobuf:"bytes,3,rep,name=use_default_for,json=useDefaultFor,proto3" json:"use_default_for,omitempty"` +} + +func (m *MsgSetSendEnabled) + +Reset() { *m = MsgSetSendEnabled{ +} +} + +func (m *MsgSetSendEnabled) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSetSendEnabled) + +ProtoMessage() { +} + +func (*MsgSetSendEnabled) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{6 +} +} + +func (m *MsgSetSendEnabled) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSetSendEnabled) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSetSendEnabled.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSetSendEnabled) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSetSendEnabled.Merge(m, src) +} + +func (m *MsgSetSendEnabled) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSetSendEnabled) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSetSendEnabled.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSetSendEnabled proto.InternalMessageInfo + +func (m *MsgSetSendEnabled) + +GetAuthority() + +string { + if m != nil { + return m.Authority +} + +return "" +} + +func (m *MsgSetSendEnabled) + +GetSendEnabled() []*SendEnabled { + if m != nil { + return m.SendEnabled +} + +return nil +} + +func (m *MsgSetSendEnabled) + +GetUseDefaultFor() []string { + if m != nil { + return m.UseDefaultFor +} + +return nil +} + +/ MsgSetSendEnabledResponse defines the Msg/SetSendEnabled response type. 
+/ +/ Since: cosmos-sdk 0.47 +type MsgSetSendEnabledResponse struct { +} + +func (m *MsgSetSendEnabledResponse) + +Reset() { *m = MsgSetSendEnabledResponse{ +} +} + +func (m *MsgSetSendEnabledResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSetSendEnabledResponse) + +ProtoMessage() { +} + +func (*MsgSetSendEnabledResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{7 +} +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSetSendEnabledResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSetSendEnabledResponse.Merge(m, src) +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSetSendEnabledResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSetSendEnabledResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSetSendEnabledResponse proto.InternalMessageInfo + +func init() { + proto.RegisterType((*MsgSend)(nil), "cosmos.bank.v1beta1.MsgSend") + +proto.RegisterType((*MsgSendResponse)(nil), "cosmos.bank.v1beta1.MsgSendResponse") + +proto.RegisterType((*MsgMultiSend)(nil), "cosmos.bank.v1beta1.MsgMultiSend") + +proto.RegisterType((*MsgMultiSendResponse)(nil), "cosmos.bank.v1beta1.MsgMultiSendResponse") + +proto.RegisterType((*MsgUpdateParams)(nil), "cosmos.bank.v1beta1.MsgUpdateParams") + +proto.RegisterType((*MsgUpdateParamsResponse)(nil), "cosmos.bank.v1beta1.MsgUpdateParamsResponse") + +proto.RegisterType((*MsgSetSendEnabled)(nil), "cosmos.bank.v1beta1.MsgSetSendEnabled") + +proto.RegisterType((*MsgSetSendEnabledResponse)(nil), 
"cosmos.bank.v1beta1.MsgSetSendEnabledResponse") +} + +func init() { + proto.RegisterFile("cosmos/bank/v1beta1/tx.proto", fileDescriptor_1d8cb1613481f5b7) +} + +var fileDescriptor_1d8cb1613481f5b7 = []byte{ + / 700 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x54, 0xcf, 0x4f, 0xd3, 0x50, + 0x1c, 0x5f, 0x99, 0x8e, 0xec, 0x31, 0x25, 0x54, 0x22, 0xac, 0x90, 0x0e, 0x16, 0x43, 0x00, 0xa5, + 0x15, 0x34, 0x9a, 0xcc, 0x68, 0x74, 0x28, 0x89, 0x26, 0x8b, 0x66, 0xc4, 0x83, 0x5e, 0x96, 0xd7, + 0xf5, 0x51, 0x1a, 0xd6, 0xbe, 0xa6, 0xef, 0x95, 0xb0, 0x9b, 0x7a, 0x32, 0x9e, 0x3c, 0x7b, 0xe2, + 0x68, 0x8c, 0x07, 0x0e, 0x1e, 0x4d, 0xbc, 0x72, 0x24, 0x9e, 0x3c, 0xa9, 0x81, 0x03, 0xfa, 0x5f, + 0x98, 0xf7, 0xa3, 0xa5, 0x8c, 0x8d, 0x11, 0x2f, 0x6b, 0xf7, 0x3e, 0x3f, 0xbe, 0xef, 0xf3, 0xed, + 0xf7, 0x3d, 0x30, 0xd9, 0xc4, 0xc4, 0xc3, 0xc4, 0xb4, 0xa0, 0xbf, 0x61, 0x6e, 0x2e, 0x5a, 0x88, + 0xc2, 0x45, 0x93, 0x6e, 0x19, 0x41, 0x88, 0x29, 0x56, 0x2f, 0x09, 0xd4, 0x60, 0xa8, 0x21, 0x51, + 0x6d, 0xd4, 0xc1, 0x0e, 0xe6, 0xb8, 0xc9, 0xde, 0x04, 0x55, 0xd3, 0x13, 0x23, 0x82, 0x12, 0xa3, + 0x26, 0x76, 0xfd, 0x13, 0x78, 0xaa, 0x10, 0xf7, 0x15, 0x78, 0x51, 0xe0, 0x0d, 0x61, 0x2c, 0xeb, + 0x0a, 0x68, 0x4c, 0x4a, 0x3d, 0xe2, 0x98, 0x9b, 0x8b, 0xec, 0x21, 0x81, 0x11, 0xe8, 0xb9, 0x3e, + 0x36, 0xf9, 0xaf, 0x58, 0x2a, 0x7f, 0x1e, 0x00, 0x83, 0x35, 0xe2, 0xac, 0x22, 0xdf, 0x56, 0xef, + 0x80, 0xc2, 0x5a, 0x88, 0xbd, 0x06, 0xb4, 0xed, 0x10, 0x11, 0x32, 0xae, 0x4c, 0x29, 0xb3, 0xf9, + 0xea, 0xf8, 0xf7, 0x2f, 0x0b, 0xa3, 0xd2, 0xff, 0x81, 0x40, 0x56, 0x69, 0xe8, 0xfa, 0x4e, 0x7d, + 0x88, 0xb1, 0xe5, 0x92, 0x7a, 0x1b, 0x00, 0x8a, 0x13, 0xe9, 0x40, 0x1f, 0x69, 0x9e, 0xe2, 0x58, + 0xd8, 0x06, 0x39, 0xe8, 0xe1, 0xc8, 0xa7, 0xe3, 0xd9, 0xa9, 0xec, 0xec, 0xd0, 0x52, 0xd1, 0x48, + 0x9a, 0x48, 0x50, 0xdc, 0x44, 0x63, 0x19, 0xbb, 0x7e, 0x75, 0x65, 0xf7, 0x67, 0x29, 0xf3, 0xe9, + 0x57, 0x69, 0xd6, 0x71, 0xe9, 0x7a, 0x64, 0x19, 0x4d, 0xec, 0xc9, 0xe4, 0xf2, 
0xb1, 0x40, 0xec, + 0x0d, 0x93, 0xb6, 0x03, 0x44, 0xb8, 0x80, 0x7c, 0x38, 0xdc, 0x99, 0x2f, 0xb4, 0x90, 0x03, 0x9b, + 0xed, 0x06, 0xeb, 0x2d, 0xf9, 0x78, 0xb8, 0x33, 0xaf, 0xd4, 0x65, 0xc1, 0xca, 0xf5, 0xb7, 0xdb, + 0xa5, 0xcc, 0x9f, 0xed, 0x52, 0xe6, 0x0d, 0xe3, 0xa5, 0xb3, 0xbf, 0x3b, 0xdc, 0x99, 0x57, 0x53, + 0x9e, 0xb2, 0x45, 0xe5, 0x11, 0x30, 0x2c, 0x5f, 0xeb, 0x88, 0x04, 0xd8, 0x27, 0xa8, 0xfc, 0x55, + 0x01, 0x85, 0x1a, 0x71, 0x6a, 0x51, 0x8b, 0xba, 0xbc, 0x8d, 0x77, 0x41, 0xce, 0xf5, 0x83, 0x88, + 0xb2, 0x06, 0xb2, 0x40, 0x9a, 0xd1, 0x65, 0x2a, 0x8c, 0xc7, 0x8c, 0x52, 0xcd, 0xb3, 0x44, 0x72, + 0x53, 0x42, 0xa4, 0xde, 0x07, 0x83, 0x38, 0xa2, 0x5c, 0x3f, 0xc0, 0xf5, 0x13, 0x5d, 0xf5, 0x4f, + 0x39, 0x27, 0x6d, 0x10, 0xcb, 0x2a, 0x57, 0xe3, 0x48, 0xd2, 0x92, 0x85, 0x19, 0x3b, 0x1e, 0x26, + 0xd9, 0x6d, 0xf9, 0x32, 0x18, 0x4d, 0xff, 0x4f, 0x62, 0x7d, 0x53, 0x78, 0xd4, 0xe7, 0x81, 0x0d, + 0x29, 0x7a, 0x06, 0x43, 0xe8, 0x11, 0xf5, 0x16, 0xc8, 0xc3, 0x88, 0xae, 0xe3, 0xd0, 0xa5, 0xed, + 0xbe, 0xd3, 0x71, 0x44, 0x55, 0xef, 0x81, 0x5c, 0xc0, 0x1d, 0xf8, 0x5c, 0xf4, 0x4a, 0x24, 0x8a, + 0x1c, 0x6b, 0x89, 0x50, 0x55, 0x6e, 0xb2, 0x30, 0x47, 0x7e, 0x2c, 0xcf, 0x74, 0x2a, 0xcf, 0x96, + 0x38, 0x24, 0x1d, 0xbb, 0x2d, 0x17, 0xc1, 0x58, 0xc7, 0x52, 0x12, 0xee, 0xaf, 0x02, 0x46, 0xf8, + 0x77, 0xa4, 0x2c, 0xf3, 0x23, 0x1f, 0x5a, 0x2d, 0x64, 0xff, 0x77, 0xbc, 0x65, 0x50, 0x20, 0xc8, + 0xb7, 0x1b, 0x48, 0xf8, 0xc8, 0xcf, 0x36, 0xd5, 0x35, 0x64, 0xaa, 0x5e, 0x7d, 0x88, 0xa4, 0x8a, + 0xcf, 0x80, 0xe1, 0x88, 0xa0, 0x86, 0x8d, 0xd6, 0x60, 0xd4, 0xa2, 0x8d, 0x35, 0x1c, 0xf2, 0xf3, + 0x90, 0xaf, 0x5f, 0x88, 0x08, 0x7a, 0x28, 0x56, 0x57, 0x70, 0x58, 0x31, 0x4f, 0xf6, 0x62, 0xb2, + 0x73, 0x50, 0xd3, 0xa9, 0xca, 0x13, 0xa0, 0x78, 0x62, 0x31, 0x6e, 0xc4, 0xd2, 0xeb, 0x2c, 0xc8, + 0xd6, 0x88, 0xa3, 0x3e, 0x01, 0xe7, 0xf8, 0xec, 0x4e, 0x76, 0xdd, 0xb4, 0x1c, 0x79, 0xed, 0xca, + 0x69, 0x68, 0xec, 0xa9, 0xbe, 0x00, 0xf9, 0xa3, 0xc3, 0x30, 0xdd, 0x4b, 0x92, 0x50, 0xb4, 0xb9, + 0xbe, 0x94, 0xc4, 
0xda, 0x02, 0x85, 0x63, 0x03, 0xd9, 0x73, 0x43, 0x69, 0x96, 0x76, 0xed, 0x2c, + 0xac, 0xa4, 0xc6, 0x3a, 0xb8, 0xd8, 0x31, 0x17, 0x33, 0xbd, 0x63, 0xa7, 0x79, 0x9a, 0x71, 0x36, + 0x5e, 0x5c, 0x49, 0x3b, 0xff, 0x8a, 0x4d, 0x79, 0x75, 0x79, 0x77, 0x5f, 0x57, 0xf6, 0xf6, 0x75, + 0xe5, 0xf7, 0xbe, 0xae, 0xbc, 0x3f, 0xd0, 0x33, 0x7b, 0x07, 0x7a, 0xe6, 0xc7, 0x81, 0x9e, 0x79, + 0x39, 0x77, 0xea, 0x3d, 0x27, 0xc7, 0x9e, 0x5f, 0x77, 0x56, 0x8e, 0x5f, 0xe7, 0x37, 0xfe, 0x05, + 0x00, 0x00, 0xff, 0xff, 0x5b, 0x5b, 0x43, 0xa9, 0xa0, 0x06, 0x00, 0x00, +} + +/ Reference imports to suppress errors if they are not otherwise used. +var _ context.Context +var _ grpc.ClientConn + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the grpc package it is being compiled against. +const _ = grpc.SupportPackageIsVersion4 + +/ MsgClient is the client API for Msg service. +/ +/ For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. +type MsgClient interface { + / Send defines a method for sending coins from one account to another account. + Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error) + / MultiSend defines a method for sending coins from some accounts to other accounts. + MultiSend(ctx context.Context, in *MsgMultiSend, opts ...grpc.CallOption) (*MsgMultiSendResponse, error) + / UpdateParams defines a governance operation for updating the x/bank module parameters. + / The authority is defined in the keeper. + / + / Since: cosmos-sdk 0.47 + UpdateParams(ctx context.Context, in *MsgUpdateParams, opts ...grpc.CallOption) (*MsgUpdateParamsResponse, error) + / SetSendEnabled is a governance operation for setting the SendEnabled flag + / on any number of Denoms. Only the entries to add or update should be + / included. 
Entries that already exist in the store, but that aren't + / included in this message, will be left unchanged. + / + / Since: cosmos-sdk 0.47 + SetSendEnabled(ctx context.Context, in *MsgSetSendEnabled, opts ...grpc.CallOption) (*MsgSetSendEnabledResponse, error) +} + +type msgClient struct { + cc grpc1.ClientConn +} + +func NewMsgClient(cc grpc1.ClientConn) + +MsgClient { + return &msgClient{ + cc +} +} + +func (c *msgClient) + +Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error) { + out := new(MsgSendResponse) + err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/Send", in, out, opts...) + if err != nil { + return nil, err +} + +return out, nil +} + +func (c *msgClient) + +MultiSend(ctx context.Context, in *MsgMultiSend, opts ...grpc.CallOption) (*MsgMultiSendResponse, error) { + out := new(MsgMultiSendResponse) + err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/MultiSend", in, out, opts...) + if err != nil { + return nil, err +} + +return out, nil +} + +func (c *msgClient) + +UpdateParams(ctx context.Context, in *MsgUpdateParams, opts ...grpc.CallOption) (*MsgUpdateParamsResponse, error) { + out := new(MsgUpdateParamsResponse) + err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/UpdateParams", in, out, opts...) + if err != nil { + return nil, err +} + +return out, nil +} + +func (c *msgClient) + +SetSendEnabled(ctx context.Context, in *MsgSetSendEnabled, opts ...grpc.CallOption) (*MsgSetSendEnabledResponse, error) { + out := new(MsgSetSendEnabledResponse) + err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/SetSendEnabled", in, out, opts...) + if err != nil { + return nil, err +} + +return out, nil +} + +/ MsgServer is the server API for Msg service. +type MsgServer interface { + / Send defines a method for sending coins from one account to another account. + Send(context.Context, *MsgSend) (*MsgSendResponse, error) + / MultiSend defines a method for sending coins from some accounts to other accounts. 
+ MultiSend(context.Context, *MsgMultiSend) (*MsgMultiSendResponse, error) + / UpdateParams defines a governance operation for updating the x/bank module parameters. + / The authority is defined in the keeper. + / + / Since: cosmos-sdk 0.47 + UpdateParams(context.Context, *MsgUpdateParams) (*MsgUpdateParamsResponse, error) + / SetSendEnabled is a governance operation for setting the SendEnabled flag + / on any number of Denoms. Only the entries to add or update should be + / included. Entries that already exist in the store, but that aren't + / included in this message, will be left unchanged. + / + / Since: cosmos-sdk 0.47 + SetSendEnabled(context.Context, *MsgSetSendEnabled) (*MsgSetSendEnabledResponse, error) +} + +/ UnimplementedMsgServer can be embedded to have forward compatible implementations. +type UnimplementedMsgServer struct { +} + +func (*UnimplementedMsgServer) + +Send(ctx context.Context, req *MsgSend) (*MsgSendResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method Send not implemented") +} + +func (*UnimplementedMsgServer) + +MultiSend(ctx context.Context, req *MsgMultiSend) (*MsgMultiSendResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method MultiSend not implemented") +} + +func (*UnimplementedMsgServer) + +UpdateParams(ctx context.Context, req *MsgUpdateParams) (*MsgUpdateParamsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpdateParams not implemented") +} + +func (*UnimplementedMsgServer) + +SetSendEnabled(ctx context.Context, req *MsgSetSendEnabled) (*MsgSetSendEnabledResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method SetSendEnabled not implemented") +} + +func RegisterMsgServer(s grpc1.Server, srv MsgServer) { + s.RegisterService(&_Msg_serviceDesc, srv) +} + +func _Msg_Send_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgSend) + if err 
:= dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).Send(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/Send", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).Send(ctx, req.(*MsgSend)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_MultiSend_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgMultiSend) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).MultiSend(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/MultiSend", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).MultiSend(ctx, req.(*MsgMultiSend)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_UpdateParams_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgUpdateParams) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).UpdateParams(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/UpdateParams", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).UpdateParams(ctx, req.(*MsgUpdateParams)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_SetSendEnabled_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgSetSendEnabled) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).SetSendEnabled(ctx, 
in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/SetSendEnabled", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).SetSendEnabled(ctx, req.(*MsgSetSendEnabled)) +} + +return interceptor(ctx, in, info, handler) +} + +var _Msg_serviceDesc = grpc.ServiceDesc{ + ServiceName: "cosmos.bank.v1beta1.Msg", + HandlerType: (*MsgServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "Send", + Handler: _Msg_Send_Handler, +}, + { + MethodName: "MultiSend", + Handler: _Msg_MultiSend_Handler, +}, + { + MethodName: "UpdateParams", + Handler: _Msg_UpdateParams_Handler, +}, + { + MethodName: "SetSendEnabled", + Handler: _Msg_SetSendEnabled_Handler, +}, +}, + Streams: []grpc.StreamDesc{ +}, + Metadata: "cosmos/bank/v1beta1/tx.proto", +} + +func (m *MsgSend) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSend) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSend) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Amount) > 0 { + for iNdEx := len(m.Amount) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Amount[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + +} + if len(m.ToAddress) > 0 { + i -= len(m.ToAddress) + +copy(dAtA[i:], m.ToAddress) + +i = encodeVarintTx(dAtA, i, uint64(len(m.ToAddress))) + +i-- + dAtA[i] = 0x12 +} + if len(m.FromAddress) > 0 { + i -= len(m.FromAddress) + +copy(dAtA[i:], m.FromAddress) + +i = encodeVarintTx(dAtA, i, uint64(len(m.FromAddress))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgSendResponse) + 
+Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSendResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSendResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgMultiSend) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgMultiSend) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgMultiSend) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Outputs) > 0 { + for iNdEx := len(m.Outputs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Outputs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 +} + +} + if len(m.Inputs) > 0 { + for iNdEx := len(m.Inputs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Inputs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +} + +return len(dAtA) - i, nil +} + +func (m *MsgMultiSendResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgMultiSendResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgMultiSendResponse) + 
+MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgUpdateParams) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgUpdateParams) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgUpdateParams) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + { + size, err := m.Params.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 + if len(m.Authority) > 0 { + i -= len(m.Authority) + +copy(dAtA[i:], m.Authority) + +i = encodeVarintTx(dAtA, i, uint64(len(m.Authority))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgUpdateParamsResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgUpdateParamsResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgUpdateParamsResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgSetSendEnabled) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSetSendEnabled) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSetSendEnabled) + +MarshalToSizedBuffer(dAtA []byte) (int, 
error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.UseDefaultFor) > 0 { + for iNdEx := len(m.UseDefaultFor) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.UseDefaultFor[iNdEx]) + +copy(dAtA[i:], m.UseDefaultFor[iNdEx]) + +i = encodeVarintTx(dAtA, i, uint64(len(m.UseDefaultFor[iNdEx]))) + +i-- + dAtA[i] = 0x1a +} + +} + if len(m.SendEnabled) > 0 { + for iNdEx := len(m.SendEnabled) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.SendEnabled[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 +} + +} + if len(m.Authority) > 0 { + i -= len(m.Authority) + +copy(dAtA[i:], m.Authority) + +i = encodeVarintTx(dAtA, i, uint64(len(m.Authority))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgSetSendEnabledResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSetSendEnabledResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSetSendEnabledResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func encodeVarintTx(dAtA []byte, offset int, v uint64) + +int { + offset -= sovTx(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + +v >>= 7 + offset++ +} + +dAtA[offset] = uint8(v) + +return base +} + +func (m *MsgSend) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.FromAddress) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + +l = len(m.ToAddress) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + if len(m.Amount) > 0 { + for _, e := range m.Amount { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgSendResponse) + 
+Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgMultiSend) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if len(m.Inputs) > 0 { + for _, e := range m.Inputs { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + if len(m.Outputs) > 0 { + for _, e := range m.Outputs { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgMultiSendResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgUpdateParams) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Authority) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + +l = m.Params.Size() + +n += 1 + l + sovTx(uint64(l)) + +return n +} + +func (m *MsgUpdateParamsResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgSetSendEnabled) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Authority) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + if len(m.SendEnabled) > 0 { + for _, e := range m.SendEnabled { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + if len(m.UseDefaultFor) > 0 { + for _, s := range m.UseDefaultFor { + l = len(s) + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgSetSendEnabledResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func sovTx(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} + +func sozTx(x uint64) (n int) { + return sovTx(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +func (m *MsgSend) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + 
break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSend: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSend: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FromAddress", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.FromAddress = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ToAddress", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.ToAddress = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Amount", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break 
+} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Amount = append(m.Amount, types.Coin{ +}) + if err := m.Amount[len(m.Amount)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSendResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSendResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSendResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgMultiSend) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := 
dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgMultiSend: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgMultiSend: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Inputs", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Inputs = append(m.Inputs, Input{ +}) + if err := m.Inputs[len(m.Inputs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Outputs", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Outputs = append(m.Outputs, Output{ +}) + if err := m.Outputs[len(m.Outputs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + 
skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgMultiSendResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgMultiSendResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgMultiSendResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgUpdateParams) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgUpdateParams: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgUpdateParams: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Authority", wireType) +} + +var stringLen uint64 + for 
shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Authority = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Params", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := m.Params.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgUpdateParamsResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return 
fmt.Errorf("proto: MsgUpdateParamsResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgUpdateParamsResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSetSendEnabled) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSetSendEnabled: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSetSendEnabled: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Authority", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Authority = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field SendEnabled", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.SendEnabled = append(m.SendEnabled, &SendEnabled{ +}) + if err := m.SendEnabled[len(m.SendEnabled)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UseDefaultFor", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.UseDefaultFor = append(m.UseDefaultFor, string(dAtA[iNdEx:postIndex])) + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSetSendEnabledResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := 
dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSetSendEnabledResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSetSendEnabledResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func skipTx(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + +iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break +} + +} + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + if length < 0 { + return 0, ErrInvalidLengthTx +} + +iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupTx +} + +depth-- + case 5: + iNdEx += 4 + default: + return 0, fmt.Errorf("proto: illegal wireType %d", wireType) +} + if iNdEx < 0 { + return 0, ErrInvalidLengthTx +} + if depth == 0 { + 
return iNdEx, nil +} + +} + +return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthTx = fmt.Errorf("proto: negative length found during unmarshaling") + +ErrIntOverflowTx = fmt.Errorf("proto: integer overflow") + +ErrUnexpectedEndOfGroupTx = fmt.Errorf("proto: unexpected end of group") +) +``` + +When possible, the existing module's [`Keeper`](/docs/sdk/v0.50/documentation/module-system/keeper) should implement `MsgServer`, otherwise a `msgServer` struct that embeds the `Keeper` can be created, typically in `./keeper/msg_server.go`: + +```go expandable +package keeper + +import ( + + "context" + "github.com/armon/go-metrics" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +type msgServer struct { + Keeper +} + +var _ types.MsgServer = msgServer{ +} + +/ NewMsgServerImpl returns an implementation of the bank MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + var ( + from, to []byte + err error + ) + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !msg.Amount.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if !msg.Amount.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(goCtx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + if len(msg.Inputs) == 0 { + return nil, types.ErrNoInputs +} + if len(msg.Inputs) != 1 { + return nil, types.ErrMultipleSenders +} + if len(msg.Outputs) == 0 { + return nil, types.ErrNoOutputs +} + if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err 
!= nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + + / NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + if base, ok := k.Keeper.(BaseKeeper); ok { + accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) + if err != nil { + return nil, err +} + if k.BlockedAddr(accAddr) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(goCtx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) +} + if err := req.Params.Validate(); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.SetParams(ctx, req.Params); err != nil { + return nil, err +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} + +func (k msgServer) + +SetSendEnabled(goCtx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { + if k.GetAuthority() != msg.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) +} + seen := map[string]bool{ +} + for _, se := range msg.SendEnabled { + if _, alreadySeen := seen[se.Denom]; alreadySeen { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) +} + +seen[se.Denom] = true + if err := se.Validate(); err != nil { + return 
nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err) +} + +} + for _, denom := range msg.UseDefaultFor { + if err := sdk.ValidateDenom(denom); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err) +} + +} + ctx := sdk.UnwrapSDKContext(goCtx) + if len(msg.SendEnabled) > 0 { + k.SetAllSendEnabled(ctx, msg.SendEnabled) +} + if len(msg.UseDefaultFor) > 0 { + k.DeleteSendEnabled(ctx, msg.UseDefaultFor...) +} + +return &types.MsgSetSendEnabledResponse{ +}, nil +} +``` + +`msgServer` methods can retrieve the `context.Context` from the `context.Context` parameter method using the `sdk.UnwrapSDKContext`: + +```go expandable +package keeper + +import ( + + "context" + "github.com/armon/go-metrics" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +type msgServer struct { + Keeper +} + +var _ types.MsgServer = msgServer{ +} + +/ NewMsgServerImpl returns an implementation of the bank MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + var ( + from, to []byte + err error + ) + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !msg.Amount.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if !msg.Amount.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(goCtx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + if len(msg.Inputs) == 0 { + return nil, types.ErrNoInputs +} + if len(msg.Inputs) != 1 { + return nil, types.ErrMultipleSenders +} + if len(msg.Outputs) == 0 { + return nil, types.ErrNoOutputs +} + if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err 
!= nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + + / NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + if base, ok := k.Keeper.(BaseKeeper); ok { + accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) + if err != nil { + return nil, err +} + if k.BlockedAddr(accAddr) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(goCtx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) +} + if err := req.Params.Validate(); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.SetParams(ctx, req.Params); err != nil { + return nil, err +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} + +func (k msgServer) + +SetSendEnabled(goCtx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { + if k.GetAuthority() != msg.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) +} + seen := map[string]bool{ +} + for _, se := range msg.SendEnabled { + if _, alreadySeen := seen[se.Denom]; alreadySeen { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) +} + +seen[se.Denom] = true + if err := se.Validate(); err != nil { + return 
nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err)
+}
+
+}
+ for _, denom := range msg.UseDefaultFor {
+ if err := sdk.ValidateDenom(denom); err != nil {
+ return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err)
+}
+
+}
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ if len(msg.SendEnabled) > 0 {
+ k.SetAllSendEnabled(ctx, msg.SendEnabled)
+}
+ if len(msg.UseDefaultFor) > 0 {
+ k.DeleteSendEnabled(ctx, msg.UseDefaultFor...)
+}
+
+return &types.MsgSetSendEnabledResponse{
+}, nil
+}
+```
+
+`sdk.Msg` processing usually follows these 3 steps:
+
+### Validation
+
+The message server must perform all validation required (both *stateful* and *stateless*) to make sure the `message` is valid.
+The `signer` is charged for the gas cost of this validation.
+
+For example, a `msgServer` method for a `transfer` message should check that the sending account has enough funds to actually perform the transfer.
+
+It is recommended to implement all validation checks in a separate function that takes state values as arguments. This simplifies testing. As expected, expensive validation functions charge additional gas. Example:
+
+```go
+ValidateMsgA(msg MsgA, now Time, gm GasMeter)
+
+error {
+ if msg.Expire.Before(now) {
+ return sdkerrors.ErrInvalidRequest.Wrap("msg expired")
+}
+
+gm.ConsumeGas(1000, "signature verification")
+
+return signatureVerification(msg.Prover, msg.Data)
+}
+```
+
+
+Previously, the `ValidateBasic` method was used to perform simple and stateless validation checks.
+This way of validating is deprecated, which means the `msgServer` must now perform all validation checks.
+
+
+### State Transition
+
+After the validation is successful, the `msgServer` method uses the [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper) functions to access the state and perform a state transition.
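As a plain-Go illustration of this step (not SDK code — `bankState` and `transfer` are hypothetical stand-ins for a keeper and one of its methods), the state transition only mutates module state once its checks have passed:

```go
package main

import (
	"errors"
	"fmt"
)

// bankState is a toy stand-in for a module keeper's store access.
type bankState struct {
	balances map[string]int64
}

// transfer mirrors how a msgServer method performs a state transition:
// stateful checks first, then the actual writes to state.
func (s *bankState) transfer(from, to string, amount int64) error {
	if amount <= 0 {
		return errors.New("invalid amount")
	}
	if s.balances[from] < amount {
		return errors.New("insufficient funds")
	}
	s.balances[from] -= amount
	s.balances[to] += amount
	return nil
}

func main() {
	s := &bankState{balances: map[string]int64{"alice": 100}}
	if err := s.transfer("alice", "bob", 40); err != nil {
		panic(err)
	}
	fmt.Println(s.balances["alice"], s.balances["bob"]) // 60 40
}
```

In a real module the writes would go through keeper methods backed by the KV store rather than an in-memory map, but the ordering — validate, then mutate — is the same.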
+
+### Events
+
+Before returning, `msgServer` methods generally emit one or more [events](/docs/sdk/v0.50/learn/advanced/events) by using the `EventManager` held in the `ctx`. Use the new `EmitTypedEvent` function that uses protobuf-based event types:
+
+```go
+ctx.EventManager().EmitTypedEvent(
+ &group.EventABC{
+ Key1: Value1, Key2: Value2
+})
+```
+
+or the older `EmitEvent` function:
+
+```go
+ctx.EventManager().EmitEvent(
+ sdk.NewEvent(
+ eventType, / e.g. sdk.EventTypeMessage for a message, types.CustomEventType for a custom event defined in the module
+ sdk.NewAttribute(key1, value1),
+ sdk.NewAttribute(key2, value2),
+ ),
+)
+```
+
+These events are relayed back to the underlying consensus engine and can be used by service providers to implement services around the application. Click [here](/docs/sdk/v0.50/learn/advanced/events) to learn more about events.
+
+The invoked `msgServer` method returns a `proto.Message` response and an `error`. These return values are then wrapped into an `*sdk.Result` or an `error` using `sdk.WrapServiceResult(ctx context.Context, res proto.Message, err error)`:
+
+```go expandable
+package baseapp
+
+import (
+
+ "context"
+ "fmt"
+
+ gogogrpc "github.com/cosmos/gogoproto/grpc"
+ "github.com/cosmos/gogoproto/proto"
+ "google.golang.org/grpc"
+
+ errorsmod "cosmossdk.io/errors"
+
+ codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+/ MessageRouter ADR 031 request type routing
+/ docs/sdk/next/documentation/legacy/adr-comprehensive
+type MessageRouter interface {
+ Handler(msg sdk.Msg)
+
+MsgServiceHandler
+ HandlerByTypeURL(typeURL string)
+
+MsgServiceHandler
+}
+
+/ MsgServiceRouter routes fully-qualified Msg service methods to their handler.
+type MsgServiceRouter struct { + interfaceRegistry codectypes.InterfaceRegistry + routes map[string]MsgServiceHandler + circuitBreaker CircuitBreaker +} + +var _ gogogrpc.Server = &MsgServiceRouter{ +} + +/ NewMsgServiceRouter creates a new MsgServiceRouter. +func NewMsgServiceRouter() *MsgServiceRouter { + return &MsgServiceRouter{ + routes: map[string]MsgServiceHandler{ +}, +} +} + +func (msr *MsgServiceRouter) + +SetCircuit(cb CircuitBreaker) { + msr.circuitBreaker = cb +} + +/ MsgServiceHandler defines a function type which handles Msg service message. +type MsgServiceHandler = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) + +/ Handler returns the MsgServiceHandler for a given msg or nil if not found. +func (msr *MsgServiceRouter) + +Handler(msg sdk.Msg) + +MsgServiceHandler { + return msr.routes[sdk.MsgTypeURL(msg)] +} + +/ HandlerByTypeURL returns the MsgServiceHandler for a given query route path or nil +/ if not found. +func (msr *MsgServiceRouter) + +HandlerByTypeURL(typeURL string) + +MsgServiceHandler { + return msr.routes[typeURL] +} + +/ RegisterService implements the gRPC Server.RegisterService method. sd is a gRPC +/ service description, handler is an object which implements that gRPC service. +/ +/ This function PANICs: +/ - if it is called before the service `Msg`s have been registered using +/ RegisterInterfaces, +/ - or if a service is being registered twice. +func (msr *MsgServiceRouter) + +RegisterService(sd *grpc.ServiceDesc, handler interface{ +}) { + / Adds a top-level query handler based on the gRPC service name. + for _, method := range sd.Methods { + fqMethod := fmt.Sprintf("/%s/%s", sd.ServiceName, method.MethodName) + methodHandler := method.Handler + + var requestTypeName string + + / NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. + / This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. 
+ / We use a no-op interceptor to avoid actually calling into the handler itself. + _, _ = methodHandler(nil, context.Background(), func(i interface{ +}) + +error { + msg, ok := i.(sdk.Msg) + if !ok { + / We panic here because there is no other alternative and the app cannot be initialized correctly + / this should only happen if there is a problem with code generation in which case the app won't + / work correctly anyway. + panic(fmt.Errorf("unable to register service method %s: %T does not implement sdk.Msg", fqMethod, i)) +} + +requestTypeName = sdk.MsgTypeURL(msg) + +return nil +}, noopInterceptor) + + / Check that the service Msg fully-qualified method name has already + / been registered (via RegisterInterfaces). If the user registers a + / service without registering according service Msg type, there might be + / some unexpected behavior down the road. Since we can't return an error + / (`Server.RegisterService` interface restriction) + +we panic (at startup). + reqType, err := msr.interfaceRegistry.Resolve(requestTypeName) + if err != nil || reqType == nil { + panic( + fmt.Errorf( + "type_url %s has not been registered yet. "+ + "Before calling RegisterService, you must register all interfaces by calling the `RegisterInterfaces` "+ + "method on module.BasicManager. Each module should call `msgservice.RegisterMsgServiceDesc` inside its "+ + "`RegisterInterfaces` method with the `_Msg_serviceDesc` generated by proto-gen", + requestTypeName, + ), + ) +} + + / Check that each service is only registered once. If a service is + / registered more than once, then we should error. Since we can't + / return an error (`Server.RegisterService` interface restriction) + +we + / panic (at startup). + _, found := msr.routes[requestTypeName] + if found { + panic( + fmt.Errorf( + "msg service %s has already been registered. Please make sure to only register each service once. 
"+ + "This usually means that there are conflicting modules registering the same msg service", + fqMethod, + ), + ) +} + +msr.routes[requestTypeName] = func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + interceptor := func(goCtx context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{ +}, error) { + goCtx = context.WithValue(goCtx, sdk.SdkContextKey, ctx) + +return handler(goCtx, msg) +} + if m, ok := msg.(sdk.HasValidateBasic); ok { + if err := m.ValidateBasic(); err != nil { + return nil, err +} + +} + if msr.circuitBreaker != nil { + msgURL := sdk.MsgTypeURL(msg) + +isAllowed, err := msr.circuitBreaker.IsAllowed(ctx, msgURL) + if err != nil { + return nil, err +} + if !isAllowed { + return nil, fmt.Errorf("circuit breaker disables execution of this message: %s", msgURL) +} + +} + + / Call the method handler from the service description with the handler object. + / We don't do any decoding here because the decoding was already done. + res, err := methodHandler(handler, ctx, noopDecoder, interceptor) + if err != nil { + return nil, err +} + +resMsg, ok := res.(proto.Message) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "Expecting proto.Message, got %T", resMsg) +} + +return sdk.WrapServiceResult(ctx, resMsg, err) +} + +} +} + +/ SetInterfaceRegistry sets the interface registry for the router. +func (msr *MsgServiceRouter) + +SetInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) { + msr.interfaceRegistry = interfaceRegistry +} + +func noopDecoder(_ interface{ +}) + +error { + return nil +} + +func noopInterceptor(_ context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{ +}, error) { + return nil, nil +} +``` + +This method takes care of marshaling the `res` parameter to protobuf and attaching any events on the `ctx.EventManager()` to the `sdk.Result`. 
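The routing mechanics above reduce to a map from fully-qualified type URL to handler. This toy sketch (plain Go, no gRPC or codecs — `router` and `handler` are simplified stand-ins for `MsgServiceRouter` and `MsgServiceHandler`) shows the register/dispatch flow, including the panic on duplicate registration that the real router raises at startup:

```go
package main

import "fmt"

// handler is a simplified MsgServiceHandler: it takes a message and
// returns a result or an error.
type handler func(msg string) (string, error)

// router maps type URLs to handlers, mirroring MsgServiceRouter.routes.
type router struct {
	routes map[string]handler
}

func newRouter() *router { return &router{routes: map[string]handler{}} }

// register panics on duplicate registration; like the real
// RegisterService, it cannot return an error, so it panics at startup.
func (r *router) register(typeURL string, h handler) {
	if _, found := r.routes[typeURL]; found {
		panic(fmt.Errorf("msg service %s has already been registered", typeURL))
	}
	r.routes[typeURL] = h
}

// dispatch looks up the handler for a type URL and invokes it.
func (r *router) dispatch(typeURL, msg string) (string, error) {
	h, ok := r.routes[typeURL]
	if !ok {
		return "", fmt.Errorf("unrecognized message route: %s", typeURL)
	}
	return h(msg)
}

func main() {
	r := newRouter()
	r.register("/cosmos.bank.v1beta1.MsgSend", func(msg string) (string, error) {
		return "handled: " + msg, nil
	})
	res, err := r.dispatch("/cosmos.bank.v1beta1.MsgSend", "10atom")
	fmt.Println(res, err) // handled: 10atom <nil>
}
```

The real router additionally wraps each handler to reset the event manager, run `ValidateBasic`, consult the circuit breaker, and wrap the response via `sdk.WrapServiceResult`, but the lookup itself is exactly this map access.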
+
+```protobuf
+/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/base/abci/v1beta1/abci.proto#L93-L113
+```
+
+This diagram shows a typical structure of a Protobuf `Msg` service, and how the message propagates through the module.
+
+![Transaction flow](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/transaction_flow.svg)
+
+## Telemetry
+
+New [telemetry metrics](/docs/sdk/v0.50/learn/advanced/telemetry) can be created from `msgServer` methods when handling messages.
+
+This is an example from the `x/auth/vesting` module:
+
+```go expandable
+package vesting
+
+import (
+
+ "context"
+ "github.com/armon/go-metrics"
+
+ errorsmod "cosmossdk.io/errors"
+ "github.com/cosmos/cosmos-sdk/telemetry"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+ "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+ "github.com/cosmos/cosmos-sdk/x/auth/vesting/types"
+)
+
+type msgServer struct {
+ keeper.AccountKeeper
+ types.BankKeeper
+}
+
+/ NewMsgServerImpl returns an implementation of the vesting MsgServer interface,
+/ wrapping the corresponding AccountKeeper and BankKeeper.
+func NewMsgServerImpl(k keeper.AccountKeeper, bk types.BankKeeper) + +types.MsgServer { + return &msgServer{ + AccountKeeper: k, + BankKeeper: bk +} +} + +var _ types.MsgServer = msgServer{ +} + +func (s msgServer) + +CreateVestingAccount(goCtx context.Context, msg *types.MsgCreateVestingAccount) (*types.MsgCreateVestingAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + if msg.EndTime <= 0 { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "invalid end time") +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := s.BankKeeper.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if s.BankKeeper.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + if acc := s.AccountKeeper.GetAccount(ctx, to); acc != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + baseVestingAccount := types.NewBaseVestingAccount(baseAccount, msg.Amount.Sort(), msg.EndTime) + +var vestingAccount sdk.AccountI + if msg.Delayed { + vestingAccount = types.NewDelayedVestingAccountRaw(baseVestingAccount) +} + +else { + vestingAccount = types.NewContinuousVestingAccountRaw(baseVestingAccount, ctx.BlockTime().Unix()) +} + +s.AccountKeeper.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + 
telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_vesting_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + if err = s.BankKeeper.SendCoins(ctx, from, to, msg.Amount); err != nil { + return nil, err +} + +return &types.MsgCreateVestingAccountResponse{ +}, nil +} + +func (s msgServer) + +CreatePermanentLockedAccount(goCtx context.Context, msg *types.MsgCreatePermanentLockedAccount) (*types.MsgCreatePermanentLockedAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := s.BankKeeper.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if s.BankKeeper.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + if acc := s.AccountKeeper.GetAccount(ctx, to); acc != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + vestingAccount := types.NewPermanentLockedAccount(baseAccount, msg.Amount) + +s.AccountKeeper.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_permanent_locked_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} 
+ +} + +}() + if err = s.BankKeeper.SendCoins(ctx, from, to, msg.Amount); err != nil { + return nil, err +} + +return &types.MsgCreatePermanentLockedAccountResponse{ +}, nil +} + +func (s msgServer) + +CreatePeriodicVestingAccount(goCtx context.Context, msg *types.MsgCreatePeriodicVestingAccount) (*types.MsgCreatePeriodicVestingAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if msg.StartTime < 1 { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "invalid start time of %d, length must be greater than 0", msg.StartTime) +} + +var totalCoins sdk.Coins + for i, period := range msg.VestingPeriods { + if period.Length < 1 { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "invalid period length of %d in period %d, length must be greater than 0", period.Length, i) +} + +totalCoins = totalCoins.Add(period.Amount...) 
+}
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ if acc := s.AccountKeeper.GetAccount(ctx, to); acc != nil {
+ return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress)
+}
+ if err := s.BankKeeper.IsSendEnabledCoins(ctx, totalCoins...); err != nil {
+ return nil, err
+}
+ baseAccount := authtypes.NewBaseAccountWithAddress(to)
+
+baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount)
+ vestingAccount := types.NewPeriodicVestingAccount(baseAccount, totalCoins.Sort(), msg.StartTime, msg.VestingPeriods)
+
+s.AccountKeeper.SetAccount(ctx, vestingAccount)
+
+defer func() {
+ telemetry.IncrCounter(1, "new", "account")
+ for _, a := range totalCoins {
+ if a.Amount.IsInt64() {
+ telemetry.SetGaugeWithLabels(
+ []string{"tx", "msg", "create_periodic_vesting_account"
+},
+ float32(a.Amount.Int64()),
+ []metrics.Label{
+ telemetry.NewLabel("denom", a.Denom)
+},
+ )
+}
+
+}
+
+}()
+ if err = s.BankKeeper.SendCoins(ctx, from, to, totalCoins); err != nil {
+ return nil, err
+}
+
+return &types.MsgCreatePeriodicVestingAccountResponse{
+}, nil
+}
+
+func validateAmount(amount sdk.Coins)
+
+error {
+ if !amount.IsValid() {
+ return sdkerrors.ErrInvalidCoins.Wrap(amount.String())
+}
+ if !amount.IsAllPositive() {
+ return sdkerrors.ErrInvalidCoins.Wrap(amount.String())
+}
+
+return nil
+}
+```
diff --git a/docs/sdk/v0.50/documentation/module-system/preblock.mdx b/docs/sdk/v0.50/documentation/module-system/preblock.mdx
new file mode 100644
index 00000000..ea330cc3
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/module-system/preblock.mdx
@@ -0,0 +1,32 @@
+---
+title: PreBlocker
+---
+
+
+**Synopsis**
+`PreBlocker` is an optional method module developers can implement in their module. It is triggered before [`BeginBlock`](/docs/sdk/v0.50/learn/advanced/baseapp#beginblock).
+ + + +**Pre-requisite Readings** + +* [Module Manager](/docs/sdk/v0.50/documentation/module-system/module-manager) + + + +## PreBlocker + +There are two semantics around the new lifecycle method: + +* It runs before the `BeginBlocker` of all modules +* It can modify consensus parameters in storage, and signal the caller through the return value. + +When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameter in the deliver context: + +``` +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams()) +``` + +The new ctx must be passed to all the other lifecycle methods. + +{/* TODO: leaving this here to update docs with core api changes */} diff --git a/docs/sdk/v0.50/documentation/module-system/protobuf-annotations.mdx b/docs/sdk/v0.50/documentation/module-system/protobuf-annotations.mdx new file mode 100644 index 00000000..c37f4bf4 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/protobuf-annotations.mdx @@ -0,0 +1,417 @@ +--- +title: Protocol Buffer Annotations +description: >- + Comprehensive guide to protobuf annotations, code generation, and best practices + for Cosmos SDK module development +--- + +This comprehensive guide covers all aspects of Protocol Buffer usage in the Cosmos SDK, including annotations, code generation, and integration with the module system. 
+ +## Project Setup and Code Generation + +### Directory Structure + +Organize your protobuf files following this structure: + +```shell +proto/ +├── buf.yaml # Buf configuration +├── buf.gen.gogo.yaml # Code generation config +└── myapp/ + └── mymodule/ + └── v1/ + ├── types.proto # Core type definitions + ├── tx.proto # Transaction messages + ├── query.proto # Query services + ├── genesis.proto # Genesis state + └── events.proto # Event definitions +``` + +### Buf Configuration + +`proto/buf.yaml`: +```yaml +version: v1 +name: buf.build/myorg/myapp +deps: + - buf.build/cosmos/cosmos-sdk + - buf.build/cosmos/cosmos-proto + - buf.build/cosmos/gogo-proto +breaking: + use: + - FILE +lint: + use: + - STANDARD + - COMMENTS + - FILE_LOWER_SNAKE_CASE +``` + +`proto/buf.gen.gogo.yaml`: +```yaml +version: v1 +plugins: + - name: gocosmos + out: .. + opt: plugins=grpc,Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any + - name: grpc-gateway + out: .. + opt: logtostderr=true,allow_colon_final_segments=true +``` + +### Generating Code + +```bash +# Install buf +curl -sSL https://github.com/bufbuild/buf/releases/download/v1.28.1/buf-$(uname -s)-$(uname -m) \ + -o /usr/local/bin/buf && chmod +x /usr/local/bin/buf + +# Generate protobuf code +cd proto +buf generate + +# Move generated files to correct location +cp -r github.com/myorg/myapp/* ../ +rm -rf github.com +``` + +## Message Annotations + +### Signer + +Specifies which field(s) contain the transaction signer(s): + +```protobuf +message MsgSend { + option (cosmos.msg.v1.signer) = "from_address"; + + string from_address = 1; + string to_address = 2; + repeated cosmos.base.v1beta1.Coin amount = 3; +} + +// Multiple signers +message MsgMultiSend { + option (cosmos.msg.v1.signer) = "inputs"; // Repeated field with signers + + repeated Input inputs = 1; + repeated Output outputs = 2; +} +``` + +## Scalar Types + +Scalar annotations provide type information for client libraries and validation: + +### Address Types 
+ +```protobuf +message MsgDelegate { + // Delegator address (AccAddress) + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // Validator operator address (ValAddress) + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.ValidatorAddressString"]; + + // Consensus address (ConsAddress) - used in evidence + string consensus_address = 3 [(cosmos_proto.scalar) = "cosmos.ConsensusAddressString"]; +} +``` + +### Numeric Types + +```protobuf +message Proposal { + // Large integers (sdk.Int) + string yes_count = 1 [(cosmos_proto.scalar) = "cosmos.Int"]; + string no_count = 2 [(cosmos_proto.scalar) = "cosmos.Int"]; + + // Decimals for precise calculations (sdk.Dec) + string quorum = 3 [(cosmos_proto.scalar) = "cosmos.Dec"]; + string threshold = 4 [(cosmos_proto.scalar) = "cosmos.Dec"]; +} +``` + +### Complete Scalar Reference + +| Scalar Type | Go Type | Usage | +|------------|---------|--------| +| `cosmos.AddressString` | `sdk.AccAddress` | User account addresses | +| `cosmos.ValidatorAddressString` | `sdk.ValAddress` | Validator operator addresses | +| `cosmos.ConsensusAddressString` | `sdk.ConsAddress` | Consensus key addresses | +| `cosmos.Int` | `sdk.Int` | Arbitrary precision integers | +| `cosmos.Dec` | `sdk.Dec` | Arbitrary precision decimals | + +## Interface Annotations + +### Implements Interface + +Marks a message as implementing a specific interface: + +```protobuf +// Mark as account implementation +message BaseAccount { + option (cosmos_proto.implements_interface) = "cosmos.auth.v1beta1.AccountI"; + + string address = 1; + google.protobuf.Any pub_key = 2; + uint64 account_number = 3; + uint64 sequence = 4; +} + +// Custom authorization implementation +message SendAuthorization { + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + + repeated cosmos.base.v1beta1.Coin spend_limit = 1; +} +``` + +### Accepts Interface + +Specifies which interface an `Any` field accepts: + 
+```protobuf +message Grant { + // Accepts any Authorization implementation + google.protobuf.Any authorization = 1 [ + (cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization" + ]; + + google.protobuf.Timestamp expiration = 2; +} + +message Account { + // Accepts any AccountI implementation + google.protobuf.Any account = 1 [ + (cosmos_proto.accepts_interface) = "cosmos.auth.v1beta1.AccountI" + ]; +} +``` + +These annotations enable: +- Type-safe `Any` unpacking +- Client code generation +- Automatic validation + +## Versioning Annotations + +Track when features were added for client compatibility: + +### Method Added In + +```protobuf +service Msg { + // Available since v0.47 + rpc Send(MsgSend) returns (MsgSendResponse); + + // Added in v0.50 + rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse) { + option (cosmos_proto.method_added_in) = "cosmos-sdk v0.50"; + } +} +``` + +### Field Added In + +```protobuf +message Params { + bool send_enabled = 1; + + // Added in v0.50 + string default_send_enabled = 2 [ + (cosmos_proto.field_added_in) = "cosmos-sdk v0.50" + ]; +} +``` + +### Message Added In + +```protobuf +// Added in v0.47 +message MsgUpdateParams { + option (cosmos_proto.message_added_in) = "cosmos-sdk v0.47"; + + string authority = 1; + Params params = 2; +} +``` + +Format: `"[module] v[version]"` +- `"cosmos-sdk v0.50.1"` +- `"x/gov v2.0.0"` +- `"ibc-go v8.0.0"` + +## Gogoproto Annotations + +Gogoproto provides Go-specific optimizations: + +### Performance Optimizations + +```protobuf +message OptimizedMessage { + option (gogoproto.equal) = true; // Generate Equal() method + option (gogoproto.goproto_getters) = false; // Disable getters for direct field access + option (gogoproto.goproto_stringer) = false; // Custom String() method + + string id = 1; + + // Non-nullable fields (avoid pointer overhead) + Item item = 2 [(gogoproto.nullable) = false]; + + // Non-nullable repeated fields + repeated Item items = 3 [ + 
(gogoproto.nullable) = false, + (gogoproto.castrepeated) = "Items" // Custom slice type + ]; + + // Custom type casting + bytes custom_data = 4 [(gogoproto.casttype) = "CustomType"]; +} +``` + +### Common Gogoproto Options + +| Option | Effect | Usage | +|--------|--------|-------| +| `(gogoproto.nullable) = false` | Non-pointer fields | Reduce allocations | +| `(gogoproto.equal) = true` | Generate `Equal()` method | Equality checks | +| `(gogoproto.goproto_getters) = false` | No getter methods | Direct field access | +| `(gogoproto.castrepeated)` | Custom slice type | Type-safe collections | +| `(gogoproto.casttype)` | Custom Go type | Custom implementations | +| `(gogoproto.customname)` | Custom field name | Go naming conventions | + +## Amino Annotations (Legacy) + +Amino annotations maintain backwards compatibility for signing: + + +Amino is deprecated since v0.50. Only use for backwards compatibility. + + +### Message Name + +```protobuf +message MsgSend { + option (amino.name) = "cosmos-sdk/MsgSend"; // Display name for signing + // ... 
+} +``` + +### Field Annotations + +```protobuf +message Account { + // Custom field name in amino encoding + google.protobuf.Any pub_key = 1 [ + (amino.field_name) = "public_key" + ]; + + // Always include in JSON (even if empty) + repeated cosmos.base.v1beta1.Coin coins = 2 [ + (amino.dont_omitempty) = true, + (amino.encoding) = "legacy_coins" // Special coin encoding + ]; +} +``` + +## Service Annotations + +### Message Service + +Mark a service as a Msg service for transactions: + +```protobuf +service Msg { + option (cosmos.msg.v1.service) = true; // This is a Msg service + + rpc Send(MsgSend) returns (MsgSendResponse); + rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse); +} +``` + +## Complete Example + +Here's a complete example using all annotation types: + +```protobuf +syntax = "proto3"; +package myapp.mymodule.v1; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; +import "cosmos/msg/v1/msg.proto"; +import "amino/amino.proto"; +import "google/protobuf/any.proto"; +import "cosmos/base/v1beta1/coin.proto"; + +option go_package = "github.com/myorg/myapp/x/mymodule/types"; + +// Service definition +service Msg { + option (cosmos.msg.v1.service) = true; + + rpc CreateItem(MsgCreateItem) returns (MsgCreateItemResponse); +} + +// Message with all annotations +message MsgCreateItem { + option (cosmos.msg.v1.signer) = "creator"; + option (amino.name) = "mymodule/CreateItem"; + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + // Address with scalar + string creator = 1 [ + (cosmos_proto.scalar) = "cosmos.AddressString" + ]; + + // Non-nullable nested message + Item item = 2 [ + (gogoproto.nullable) = false + ]; + + // Coins with special handling + repeated cosmos.base.v1beta1.Coin fee = 3 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (amino.encoding) = "legacy_coins", + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; +} + +message 
MsgCreateItemResponse { + string id = 1; +} + +// Type implementing an interface +message Item { + option (cosmos_proto.implements_interface) = "mymodule.ItemI"; + + string id = 1; + string data = 2; + + // Any field accepting interface + google.protobuf.Any extension = 3 [ + (cosmos_proto.accepts_interface) = "mymodule.ItemExtension" + ]; +} +``` + +## Best Practices + +1. **Always use scalar annotations** for addresses and numeric types +2. **Use `nullable=false`** for required fields to reduce allocations +3. **Implement interfaces** with proper annotations for Any type safety +4. **Version your APIs** with added_in annotations +5. **Document breaking changes** with deprecated field markers +6. **Generate code regularly** and commit .pb.go files +7. **Use buf** for linting and breaking change detection + +## External Resources + +- [Protocol Buffers Documentation](https://protobuf.dev/) +- [Buf Build](https://buf.build/) - Modern protobuf tooling +- [Gogoproto](https://github.com/cosmos/gogoproto) - Go-specific protobuf extensions +- [Cosmos Proto](https://github.com/cosmos/cosmos-proto) - Cosmos SDK protobuf extensions +- [Buf Registry - Cosmos SDK](https://buf.build/cosmos/cosmos-sdk) - Published SDK protos diff --git a/docs/sdk/v0.50/documentation/module-system/query-services.mdx b/docs/sdk/v0.50/documentation/module-system/query-services.mdx new file mode 100644 index 00000000..72cb4757 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/query-services.mdx @@ -0,0 +1,391 @@ +--- +title: Query Services +--- + + +**Synopsis** +A Protobuf Query service processes [`queries`](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#queries). Query services are specific to the module in which they are defined, and only process `queries` defined within said module. They are called from `BaseApp`'s [`Query` method](/docs/sdk/v0.50/learn/advanced/baseapp#query). 
+
+
+
+**Pre-requisite Readings**
+
+* [Module Manager](/docs/sdk/v0.50/documentation/module-system/module-manager)
+* [Messages and Queries](/docs/sdk/v0.50/documentation/module-system/messages-and-queries)
+
+
+
+## Implementation of a module query service
+
+### gRPC Service
+
+When defining a Protobuf `Query` service, a `QueryServer` interface is generated for each module with all the service methods:
+
+```go
+type QueryServer interface {
+ QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error)
+
+QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error)
+}
+```
+
+These custom query methods should be implemented by a module's keeper, typically in `./keeper/grpc_query.go`. The first parameter of these methods is a generic `context.Context`. Therefore, the Cosmos SDK provides a function `sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided
+`context.Context`.
+
+Here's an example implementation for the bank module:
+
+```go expandable
+package keeper
+
+import (
+
+ "context"
+ "cosmossdk.io/collections"
+ "cosmossdk.io/math"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+ "cosmossdk.io/store/prefix"
+ "github.com/cosmos/cosmos-sdk/runtime"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/query"
+ "github.com/cosmos/cosmos-sdk/x/bank/types"
+)
+
+type Querier struct {
+ BaseKeeper
+}
+
+var _ types.QueryServer = BaseKeeper{
+}
+
+func NewQuerier(keeper *BaseKeeper)
+
+Querier {
+ return Querier{
+ BaseKeeper: *keeper
+}
+}
+
+/ Balance implements the Query/Balance gRPC method
+func (k BaseKeeper)
+
+Balance(ctx context.Context, req *types.QueryBalanceRequest) (*types.QueryBalanceResponse, error) {
+ if req == nil {
+ return nil, status.Error(codes.InvalidArgument, "empty request")
+}
+ if err := sdk.ValidateDenom(req.Denom); err != nil {
+ return nil, status.Error(codes.InvalidArgument, err.Error())
+}
+ sdkCtx := sdk.UnwrapSDKContext(ctx)
+
+address, err := k.ak.AddressCodec().StringToBytes(req.Address)
+ if err != nil {
+ return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error())
+}
+ balance := k.GetBalance(sdkCtx, address, req.Denom)
+
+return &types.QueryBalanceResponse{
+ Balance: &balance
+}, nil
+}
+
+/ AllBalances implements the Query/AllBalances gRPC method
+func (k BaseKeeper)
+
+AllBalances(ctx context.Context, req *types.QueryAllBalancesRequest) (*types.QueryAllBalancesResponse, error) {
+ if req == nil {
+ return nil, status.Error(codes.InvalidArgument, "empty request")
+}
+
+addr, err := k.ak.AddressCodec().StringToBytes(req.Address)
+ if err != nil {
+ return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error())
+}
+ sdkCtx := sdk.UnwrapSDKContext(ctx)
+ balances := sdk.NewCoins()
+
+ _, pageRes, err := query.CollectionFilteredPaginate(ctx, k.Balances, req.Pagination, func(key collections.Pair[sdk.AccAddress, string], value math.Int) (include bool, err error) {
+ denom := key.K2()
+ if req.ResolveDenom {
+ if metadata, ok := k.GetDenomMetaData(sdkCtx, denom); ok {
+ denom = metadata.Display
+}
+
+}
+
+balances = append(balances, sdk.NewCoin(denom, value))
+
+return false, nil / we don't include results because we're appending them here.
+}, query.WithCollectionPaginationPairPrefix[sdk.AccAddress, string](addr))
+ if err != nil {
+ return nil, status.Errorf(codes.InvalidArgument, "paginate: %v", err)
+}
+
+return &types.QueryAllBalancesResponse{
+ Balances: balances,
+ Pagination: pageRes
+}, nil
+}
+
+/ SpendableBalances implements a gRPC query handler for retrieving an account's
+/ spendable balances.
+func (k BaseKeeper) + +SpendableBalances(ctx context.Context, req *types.QuerySpendableBalancesRequest) (*types.QuerySpendableBalancesResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + +addr, err := k.ak.AddressCodec().StringToBytes(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + balances := sdk.NewCoins() + zeroAmt := math.ZeroInt() + + _, pageRes, err := query.CollectionFilteredPaginate(ctx, k.Balances, req.Pagination, func(key collections.Pair[sdk.AccAddress, string], _ math.Int) (include bool, err error) { + balances = append(balances, sdk.NewCoin(key.K2(), zeroAmt)) + +return false, nil / not including results as they're appended here +}, query.WithCollectionPaginationPairPrefix[sdk.AccAddress, string](/docs/sdk/v0.50/documentation/module-system/addr)) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "paginate: %v", err) +} + result := sdk.NewCoins() + spendable := k.SpendableCoins(sdkCtx, addr) + for _, c := range balances { + result = append(result, sdk.NewCoin(c.Denom, spendable.AmountOf(c.Denom))) +} + +return &types.QuerySpendableBalancesResponse{ + Balances: result, + Pagination: pageRes +}, nil +} + +/ SpendableBalanceByDenom implements a gRPC query handler for retrieving an account's +/ spendable balance for a specific denom. 
+func (k BaseKeeper) + +SpendableBalanceByDenom(ctx context.Context, req *types.QuerySpendableBalanceByDenomRequest) (*types.QuerySpendableBalanceByDenomResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + +addr, err := k.ak.AddressCodec().StringToBytes(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + spendable := k.SpendableCoin(sdkCtx, addr, req.Denom) + +return &types.QuerySpendableBalanceByDenomResponse{ + Balance: &spendable +}, nil +} + +/ TotalSupply implements the Query/TotalSupply gRPC method +func (k BaseKeeper) + +TotalSupply(ctx context.Context, req *types.QueryTotalSupplyRequest) (*types.QueryTotalSupplyResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +totalSupply, pageRes, err := k.GetPaginatedTotalSupply(sdkCtx, req.Pagination) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + +return &types.QueryTotalSupplyResponse{ + Supply: totalSupply, + Pagination: pageRes +}, nil +} + +/ SupplyOf implements the Query/SupplyOf gRPC method +func (k BaseKeeper) + +SupplyOf(c context.Context, req *types.QuerySupplyOfRequest) (*types.QuerySupplyOfResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + ctx := sdk.UnwrapSDKContext(c) + supply := k.GetSupply(ctx, req.Denom) + +return &types.QuerySupplyOfResponse{ + Amount: sdk.NewCoin(req.Denom, supply.Amount) +}, nil +} + +/ Params implements the gRPC service handler for querying x/bank parameters. 
+func (k BaseKeeper) + +Params(ctx context.Context, req *types.QueryParamsRequest) (*types.QueryParamsResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + params := k.GetParams(sdkCtx) + +return &types.QueryParamsResponse{ + Params: params +}, nil +} + +/ DenomsMetadata implements Query/DenomsMetadata gRPC method. +func (k BaseKeeper) + +DenomsMetadata(c context.Context, req *types.QueryDenomsMetadataRequest) (*types.QueryDenomsMetadataResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + kvStore := runtime.KVStoreAdapter(k.storeService.OpenKVStore(c)) + store := prefix.NewStore(kvStore, types.DenomMetadataPrefix) + metadatas := []types.Metadata{ +} + +pageRes, err := query.Paginate(store, req.Pagination, func(_, value []byte) + +error { + var metadata types.Metadata + k.cdc.MustUnmarshal(value, &metadata) + +metadatas = append(metadatas, metadata) + +return nil +}) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + +return &types.QueryDenomsMetadataResponse{ + Metadatas: metadatas, + Pagination: pageRes, +}, nil +} + +/ DenomMetadata implements Query/DenomMetadata gRPC method. 
+func (k BaseKeeper) + +DenomMetadata(c context.Context, req *types.QueryDenomMetadataRequest) (*types.QueryDenomMetadataResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + ctx := sdk.UnwrapSDKContext(c) + +metadata, found := k.GetDenomMetaData(ctx, req.Denom) + if !found { + return nil, status.Errorf(codes.NotFound, "client metadata for denom %s", req.Denom) +} + +return &types.QueryDenomMetadataResponse{ + Metadata: metadata, +}, nil +} + +func (k BaseKeeper) + +DenomOwners( + goCtx context.Context, + req *types.QueryDenomOwnersRequest, +) (*types.QueryDenomOwnersResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + +var denomOwners []*types.DenomOwner + + _, pageRes, err := query.CollectionFilteredPaginate(goCtx, k.Balances.Indexes.Denom, req.Pagination, + func(key collections.Pair[string, sdk.AccAddress], value collections.NoValue) (include bool, err error) { + amt, err := k.Balances.Get(goCtx, collections.Join(key.K2(), req.Denom)) + if err != nil { + return false, err +} + +denomOwners = append(denomOwners, &types.DenomOwner{ + Address: key.K2().String(), + Balance: sdk.NewCoin(req.Denom, amt), +}) + +return false, nil +}, + query.WithCollectionPaginationPairPrefix[string, sdk.AccAddress](/docs/sdk/v0.50/documentation/module-system/req.Denom), + ) + if err != nil { + return nil, err +} + +return &types.QueryDenomOwnersResponse{ + DenomOwners: denomOwners, + Pagination: pageRes +}, nil +} + +func (k BaseKeeper) + +SendEnabled(goCtx context.Context, req *types.QuerySendEnabledRequest) (*types.QuerySendEnabledResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") 
+} + ctx := sdk.UnwrapSDKContext(goCtx) + resp := &types.QuerySendEnabledResponse{ +} + if len(req.Denoms) > 0 { + for _, denom := range req.Denoms { + if se, ok := k.getSendEnabled(ctx, denom); ok { + resp.SendEnabled = append(resp.SendEnabled, types.NewSendEnabled(denom, se)) +} + +} + +} + +else { + results, pageResp, err := query.CollectionPaginate[string, bool](ctx, k.BaseViewKeeper.SendEnabled, req.Pagination) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + for _, r := range results { + resp.SendEnabled = append(resp.SendEnabled, &types.SendEnabled{ + Denom: r.Key, + Enabled: r.Value, +}) +} + +resp.Pagination = pageResp +} + +return resp, nil +} +``` + +### Calling queries from the State Machine + +The Cosmos SDK v0.47 introduces a new `cosmos.query.v1.module_query_safe` Protobuf annotation, which is used to state that a query is safe to be called from within the state machine, for example: + +* a Keeper's query function can be called from another module's Keeper, +* ADR-033 intermodule query calls, +* CosmWasm contracts can also directly interact with these queries. + +If the `module_query_safe` annotation is set to `true`, it means: + +* The query is deterministic: given a block height it will return the same response upon multiple calls, and doesn't introduce any state-machine-breaking changes across SDK patch versions. +* Gas consumption never fluctuates across calls and across patch versions. + +If you are a module developer and want to use the `module_query_safe` annotation for your own query, you have to ensure the following things: + +* the query is deterministic and won't introduce state-machine-breaking changes without coordinated upgrades +* it has its gas tracked, to avoid the attack vector where no gas is accounted for + on potentially high-computation queries.
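These requirements are easy to violate accidentally in Go, where map iteration order changes between runs. The sketch below (toy types, not SDK APIs) shows the pattern a `module_query_safe`-style handler must follow: sort the keys before paginating, so that repeated calls over the same state always return the same page:

```go
package main

import (
	"fmt"
	"sort"
)

// toyStore stands in for a module's key-value state at a fixed block height.
type toyStore map[string]int64

// queryAll returns entries in a deterministic order. Ranging over the map
// directly would make the response differ between calls, breaking the
// determinism guarantee described above.
func queryAll(s toyStore, offset, limit int) []string {
	keys := make([]string, 0, len(s))
	for k := range s {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order regardless of map layout

	out := []string{}
	for i := offset; i < len(keys) && i < offset+limit; i++ {
		out = append(out, fmt.Sprintf("%s=%d", keys[i], s[keys[i]]))
	}
	return out
}

func main() {
	s := toyStore{"uatom": 7, "stake": 3, "denomX": 1}
	fmt.Println(queryAll(s, 0, 2)) // [denomX=1 stake=3]
}
```

Gas tracking is the other half of the contract: in a real keeper, the cost of each iteration would be charged to the gas meter so an unbounded query cannot be used as a free computation vector.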
diff --git a/docs/sdk/v0.50/documentation/module-system/simulator.mdx b/docs/sdk/v0.50/documentation/module-system/simulator.mdx new file mode 100644 index 00000000..2cc9b6fb --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/simulator.mdx @@ -0,0 +1,1718 @@ +--- +title: Module Simulation +--- + + +**Pre-requisite Readings** + +* [Cosmos Blockchain Simulator](/docs/sdk/v0.50/learn/advanced/simulation) + + +## Synopsis + +This document details how to define each module's simulation functions so they can be +integrated with the application `SimulationManager`. + +* [Simulation package](#simulation-package) + * [Store decoders](#store-decoders) + * [Randomized genesis](#randomized-genesis) + * [Random weighted operations](#random-weighted-operations) + * [Random proposal contents](#random-proposal-contents) +* [Registering simulation functions](#registering-simulation-functions) +* [App Simulator manager](#app-simulator-manager) + +## Simulation package + +Every module that implements the Cosmos SDK simulator needs to have an `x//simulation` +package which contains the primary functions required by the fuzz tests: store +decoders, randomized genesis state and parameters, weighted operations and proposal +contents. + +### Store decoders + +Registering the store decoders is required for the `AppImportExport`. This allows +the key-value pairs from the stores to be decoded (*i.e.* unmarshalled) +to their corresponding types. In particular, it matches the key to a concrete type +and then unmarshals the value from the `KVPair` to the type provided. + +You can use the example [here](https://github.com/cosmos/cosmos-sdk/blob/v/x/distribution/simulation/decoder.go) from the distribution module to implement your store decoders. + +### Randomized genesis + +The simulator tests different scenarios and values for genesis parameters +in order to fully test the edge cases of specific modules.
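The core idea can be sketched with only the standard library: derive every genesis field from a seeded `*rand.Rand` and marshal the result to JSON, which is what the simulator does when assembling the app genesis. The `GenesisState` shape here is hypothetical, not a real module's:

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/rand"
)

// GenesisState is a stand-in for a module's genesis type.
type GenesisState struct {
	MaxEntries uint32 `json:"max_entries"`
	Enabled    bool   `json:"enabled"`
}

// RandomizedGenState mirrors the simulator contract: the same seed must
// always yield the same genesis, so every run is reproducible.
func RandomizedGenState(seed int64) ([]byte, error) {
	r := rand.New(rand.NewSource(seed))
	gs := GenesisState{
		MaxEntries: uint32(r.Intn(100) + 1), // random value in 1..100
		Enabled:    r.Intn(2) == 0,
	}
	return json.Marshal(gs)
}

func main() {
	a, _ := RandomizedGenState(42)
	b, _ := RandomizedGenState(42)
	fmt.Println(string(a) == string(b)) // same seed, same genesis: true
}
```

Because the generator is a pure function of the seed, a failing simulation run can be reproduced exactly by re-running with the same seed.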
The `simulator` package from each module must expose a `RandomizedGenState` function to generate the initial random `GenesisState` from a given seed. + +Once the module genesis parameters are generated randomly (or with the key and +values defined in a `params` file), they are marshaled to JSON format and added +to the app genesis JSON to be used in the simulations. + +You can check an example of how to create the randomized genesis [here](https://github.com/cosmos/cosmos-sdk/blob/v/x/staking/simulation/genesis.go). + +### Randomized parameter changes + +The simulator is able to test parameter changes at random. The simulator package from each module must contain a `RandomizedParams` func that will simulate parameter changes of the module throughout the simulation's lifespan. + +You can see an example of what is needed to fully test parameter changes [here](https://github.com/cosmos/cosmos-sdk/blob/v/x/staking/simulation/params.go). + +### Random weighted operations + +Operations are one of the crucial parts of the Cosmos SDK simulation. They are the transactions +(`Msg`) that are simulated with random field values. The sender of the operation +is also assigned randomly. + +Operations on the simulation are simulated using the full [transaction cycle](/docs/sdk/v0.50/learn/advanced/transactions) of an +`ABCI` application that exposes the `BaseApp`.
+ +Shown below is how weights are set: + +```go expandable +package simulation + +import ( + + "bytes" + "fmt" + "math/rand" + "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/testutil" + sdk "github.com/cosmos/cosmos-sdk/types" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation" + "github.com/cosmos/cosmos-sdk/x/staking/keeper" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ Simulation operation weights constants +const ( + DefaultWeightMsgCreateValidator int = 100 + DefaultWeightMsgEditValidator int = 5 + DefaultWeightMsgDelegate int = 100 + DefaultWeightMsgUndelegate int = 100 + DefaultWeightMsgBeginRedelegate int = 100 + DefaultWeightMsgCancelUnbondingDelegation int = 100 + + OpWeightMsgCreateValidator = "op_weight_msg_create_validator" + OpWeightMsgEditValidator = "op_weight_msg_edit_validator" + OpWeightMsgDelegate = "op_weight_msg_delegate" + OpWeightMsgUndelegate = "op_weight_msg_undelegate" + OpWeightMsgBeginRedelegate = "op_weight_msg_begin_redelegate" + OpWeightMsgCancelUnbondingDelegation = "op_weight_msg_cancel_unbonding_delegation" +) + +/ WeightedOperations returns all the operations from the module with their respective weights +func WeightedOperations( + appParams simtypes.AppParams, + cdc codec.JSONCodec, + txGen client.TxConfig, + ak types.AccountKeeper, + bk types.BankKeeper, + k *keeper.Keeper, +) + +simulation.WeightedOperations { + var ( + weightMsgCreateValidator int + weightMsgEditValidator int + weightMsgDelegate int + weightMsgUndelegate int + weightMsgBeginRedelegate int + weightMsgCancelUnbondingDelegation int + ) + +appParams.GetOrGenerate(OpWeightMsgCreateValidator, &weightMsgCreateValidator, nil, func(_ *rand.Rand) { + weightMsgCreateValidator = DefaultWeightMsgCreateValidator +}) + +appParams.GetOrGenerate(OpWeightMsgEditValidator, 
&weightMsgEditValidator, nil, func(_ *rand.Rand) { + weightMsgEditValidator = DefaultWeightMsgEditValidator +}) + +appParams.GetOrGenerate(OpWeightMsgDelegate, &weightMsgDelegate, nil, func(_ *rand.Rand) { + weightMsgDelegate = DefaultWeightMsgDelegate +}) + +appParams.GetOrGenerate(OpWeightMsgUndelegate, &weightMsgUndelegate, nil, func(_ *rand.Rand) { + weightMsgUndelegate = DefaultWeightMsgUndelegate +}) + +appParams.GetOrGenerate(OpWeightMsgBeginRedelegate, &weightMsgBeginRedelegate, nil, func(_ *rand.Rand) { + weightMsgBeginRedelegate = DefaultWeightMsgBeginRedelegate +}) + +appParams.GetOrGenerate(OpWeightMsgCancelUnbondingDelegation, &weightMsgCancelUnbondingDelegation, nil, func(_ *rand.Rand) { + weightMsgCancelUnbondingDelegation = DefaultWeightMsgCancelUnbondingDelegation +}) + +return simulation.WeightedOperations{ + simulation.NewWeightedOperation( + weightMsgCreateValidator, + SimulateMsgCreateValidator(txGen, ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgEditValidator, + SimulateMsgEditValidator(txGen, ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgDelegate, + SimulateMsgDelegate(txGen, ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgUndelegate, + SimulateMsgUndelegate(txGen, ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgBeginRedelegate, + SimulateMsgBeginRedelegate(txGen, ak, bk, k), + ), + simulation.NewWeightedOperation( + weightMsgCancelUnbondingDelegation, + SimulateMsgCancelUnbondingDelegate(txGen, ak, bk, k), + ), +} +} + +/ SimulateMsgCreateValidator generates a MsgCreateValidator with random values +func SimulateMsgCreateValidator( + txGen client.TxConfig, + ak types.AccountKeeper, + bk types.BankKeeper, + k *keeper.Keeper, +) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + msgType := sdk.MsgTypeURL(&types.MsgCreateValidator{ 
+}) + +simAccount, _ := simtypes.RandomAcc(r, accs) + address := sdk.ValAddress(simAccount.Address) + + / ensure the validator doesn't exist already + _, err := k.GetValidator(ctx, address) + if err == nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "validator already exists"), nil, nil +} + +denom, err := k.BondDenom(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "bond denom not found"), nil, err +} + balance := bk.GetBalance(ctx, simAccount.Address, denom).Amount + if !balance.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "balance is negative"), nil, nil +} + +amount, err := simtypes.RandPositiveInt(r, balance) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to generate positive amount"), nil, err +} + selfDelegation := sdk.NewCoin(denom, amount) + account := ak.GetAccount(ctx, simAccount.Address) + spendable := bk.SpendableCoins(ctx, account.GetAddress()) + +var fees sdk.Coins + + coins, hasNeg := spendable.SafeSub(selfDelegation) + if !hasNeg { + fees, err = simtypes.RandomFees(r, ctx, coins) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to generate fees"), nil, err +} + +} + description := types.NewDescription( + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + ) + maxCommission := math.LegacyNewDecWithPrec(int64(simtypes.RandIntBetween(r, 0, 100)), 2) + commission := types.NewCommissionRates( + simtypes.RandomDecAmount(r, maxCommission), + maxCommission, + simtypes.RandomDecAmount(r, maxCommission), + ) + +msg, err := types.NewMsgCreateValidator(address.String(), simAccount.ConsKey.PubKey(), selfDelegation, description, commission, math.OneInt()) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, sdk.MsgTypeURL(msg), "unable to create CreateValidator message"), nil, err +} + txCtx := 
simulation.OperationInput{ + R: r, + App: app, + TxGen: txGen, + Cdc: nil, + Msg: msg, + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + ModuleName: types.ModuleName, +} + +return simulation.GenAndDeliverTx(txCtx, fees) +} +} + +/ SimulateMsgEditValidator generates a MsgEditValidator with random values +func SimulateMsgEditValidator( + txGen client.TxConfig, + ak types.AccountKeeper, + bk types.BankKeeper, + k *keeper.Keeper, +) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + msgType := sdk.MsgTypeURL(&types.MsgEditValidator{ +}) + +vals, err := k.GetAllValidators(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to get validators"), nil, err +} + if len(vals) == 0 { + return simtypes.NoOpMsg(types.ModuleName, msgType, "number of validators equal zero"), nil, nil +} + +val, ok := testutil.RandSliceElem(r, vals) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to pick a validator"), nil, nil +} + address := val.GetOperator() + newCommissionRate := simtypes.RandomDecAmount(r, val.Commission.MaxRate) + if err := val.Commission.ValidateNewRate(newCommissionRate, ctx.BlockHeader().Time); err != nil { + / skip as the commission is invalid + return simtypes.NoOpMsg(types.ModuleName, msgType, "invalid commission rate"), nil, nil +} + +bz, err := k.ValidatorAddressCodec().StringToBytes(val.GetOperator()) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting validator address bytes"), nil, err +} + +simAccount, found := simtypes.FindAccount(accs, sdk.AccAddress(bz)) + if !found { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to find account"), nil, fmt.Errorf("validator %s not found", val.GetOperator()) +} + account := ak.GetAccount(ctx, simAccount.Address) + spendable := bk.SpendableCoins(ctx, 
account.GetAddress()) + description := types.NewDescription( + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + simtypes.RandStringOfLength(r, 10), + ) + msg := types.NewMsgEditValidator(address, description, &newCommissionRate, nil) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: txGen, + Cdc: nil, + Msg: msg, + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + Bankkeeper: bk, + ModuleName: types.ModuleName, + CoinsSpentInMsg: spendable, +} + +return simulation.GenAndDeliverTxWithRandFees(txCtx) +} +} + +/ SimulateMsgDelegate generates a MsgDelegate with random values +func SimulateMsgDelegate( + txGen client.TxConfig, + ak types.AccountKeeper, + bk types.BankKeeper, + k *keeper.Keeper, +) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + msgType := sdk.MsgTypeURL(&types.MsgDelegate{ +}) + +denom, err := k.BondDenom(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "bond denom not found"), nil, err +} + +vals, err := k.GetAllValidators(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to get validators"), nil, err +} + if len(vals) == 0 { + return simtypes.NoOpMsg(types.ModuleName, msgType, "number of validators equal zero"), nil, nil +} + +simAccount, _ := simtypes.RandomAcc(r, accs) + +val, ok := testutil.RandSliceElem(r, vals) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to pick a validator"), nil, nil +} + if val.InvalidExRate() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "validator's invalid echange rate"), nil, nil +} + amount := bk.GetBalance(ctx, simAccount.Address, denom).Amount + if !amount.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "balance is 
negative"), nil, nil +} + +amount, err = simtypes.RandPositiveInt(r, amount) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to generate positive amount"), nil, err +} + bondAmt := sdk.NewCoin(denom, amount) + account := ak.GetAccount(ctx, simAccount.Address) + spendable := bk.SpendableCoins(ctx, account.GetAddress()) + +var fees sdk.Coins + + coins, hasNeg := spendable.SafeSub(bondAmt) + if !hasNeg { + fees, err = simtypes.RandomFees(r, ctx, coins) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to generate fees"), nil, err +} + +} + msg := types.NewMsgDelegate(simAccount.Address.String(), val.GetOperator(), bondAmt) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: txGen, + Cdc: nil, + Msg: msg, + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + ModuleName: types.ModuleName, +} + +return simulation.GenAndDeliverTx(txCtx, fees) +} +} + +/ SimulateMsgUndelegate generates a MsgUndelegate with random values +func SimulateMsgUndelegate( + txGen client.TxConfig, + ak types.AccountKeeper, + bk types.BankKeeper, + k *keeper.Keeper, +) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + msgType := sdk.MsgTypeURL(&types.MsgUndelegate{ +}) + +vals, err := k.GetAllValidators(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to get validators"), nil, err +} + if len(vals) == 0 { + return simtypes.NoOpMsg(types.ModuleName, msgType, "number of validators equal zero"), nil, nil +} + +val, ok := testutil.RandSliceElem(r, vals) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, msgType, "validator is not ok"), nil, nil +} + +valAddr, err := k.ValidatorAddressCodec().StringToBytes(val.GetOperator()) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting validator address 
bytes"), nil, err +} + +delegations, err := k.GetValidatorDelegations(ctx, valAddr) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting validator delegations"), nil, nil +} + if delegations == nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "keeper does have any delegation entries"), nil, nil +} + + / get random delegator from validator + delegation := delegations[r.Intn(len(delegations))] + delAddr := delegation.GetDelegatorAddr() + +delAddrBz, err := ak.AddressCodec().StringToBytes(delAddr) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting delegator address bytes"), nil, err +} + +hasMaxUD, err := k.HasMaxUnbondingDelegationEntries(ctx, delAddrBz, valAddr) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting max unbonding delegation entries"), nil, err +} + if hasMaxUD { + return simtypes.NoOpMsg(types.ModuleName, msgType, "keeper does have a max unbonding delegation entries"), nil, nil +} + totalBond := val.TokensFromShares(delegation.GetShares()).TruncateInt() + if !totalBond.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "total bond is negative"), nil, nil +} + +unbondAmt, err := simtypes.RandPositiveInt(r, totalBond) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "invalid unbond amount"), nil, err +} + if unbondAmt.IsZero() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unbond amount is zero"), nil, nil +} + +bondDenom, err := k.BondDenom(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "bond denom not found"), nil, err +} + msg := types.NewMsgUndelegate( + delAddr, val.GetOperator(), sdk.NewCoin(bondDenom, unbondAmt), + ) + + / need to retrieve the simulation account associated with delegation to retrieve PrivKey + var simAccount simtypes.Account + for _, simAcc := range accs { + if simAcc.Address.Equals(sdk.AccAddress(delAddrBz)) { + simAccount = simAcc + break +} + 
+} + / if simaccount.PrivKey == nil, delegation address does not exist in accs. However, since smart contracts and module accounts can stake, we can ignore the error + if simAccount.PrivKey == nil { + return simtypes.NoOpMsg(types.ModuleName, sdk.MsgTypeURL(msg), "account private key is nil"), nil, nil +} + account := ak.GetAccount(ctx, delAddrBz) + spendable := bk.SpendableCoins(ctx, account.GetAddress()) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: txGen, + Cdc: nil, + Msg: msg, + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + Bankkeeper: bk, + ModuleName: types.ModuleName, + CoinsSpentInMsg: spendable, +} + +return simulation.GenAndDeliverTxWithRandFees(txCtx) +} +} + +/ SimulateMsgCancelUnbondingDelegate generates a MsgCancelUnbondingDelegate with random values +func SimulateMsgCancelUnbondingDelegate( + txGen client.TxConfig, + ak types.AccountKeeper, + bk types.BankKeeper, + k *keeper.Keeper, +) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + msgType := sdk.MsgTypeURL(&types.MsgCancelUnbondingDelegation{ +}) + +vals, err := k.GetAllValidators(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to get validators"), nil, err +} + if len(vals) == 0 { + return simtypes.NoOpMsg(types.ModuleName, msgType, "number of validators equal zero"), nil, nil +} + +simAccount, _ := simtypes.RandomAcc(r, accs) + +val, ok := testutil.RandSliceElem(r, vals) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, msgType, "validator is not ok"), nil, nil +} + if val.IsJailed() || val.InvalidExRate() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "validator is jailed"), nil, nil +} + +valAddr, err := k.ValidatorAddressCodec().StringToBytes(val.GetOperator()) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting validator 
address bytes"), nil, err +} + +unbondingDelegation, err := k.GetUnbondingDelegation(ctx, simAccount.Address, valAddr) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "account does have any unbonding delegation"), nil, nil +} + + / This is a temporary fix to make staking simulation pass. We should fetch + / the first unbondingDelegationEntry that matches the creationHeight, because + / currently the staking msgServer chooses the first unbondingDelegationEntry + / with the matching creationHeight. + / + / ref: https://github.com/cosmos/cosmos-sdk/issues/12932 + creationHeight := unbondingDelegation.Entries[r.Intn(len(unbondingDelegation.Entries))].CreationHeight + + var unbondingDelegationEntry types.UnbondingDelegationEntry + for _, entry := range unbondingDelegation.Entries { + if entry.CreationHeight == creationHeight { + unbondingDelegationEntry = entry + break +} + +} + if unbondingDelegationEntry.CompletionTime.Before(ctx.BlockTime()) { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unbonding delegation is already processed"), nil, nil +} + if !unbondingDelegationEntry.Balance.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "delegator receiving balance is negative"), nil, nil +} + cancelBondAmt := simtypes.RandomAmount(r, unbondingDelegationEntry.Balance) + if cancelBondAmt.IsZero() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "cancelBondAmt amount is zero"), nil, nil +} + +bondDenom, err := k.BondDenom(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "bond denom not found"), nil, err +} + msg := types.NewMsgCancelUnbondingDelegation( + simAccount.Address.String(), val.GetOperator(), unbondingDelegationEntry.CreationHeight, sdk.NewCoin(bondDenom, cancelBondAmt), + ) + spendable := bk.SpendableCoins(ctx, simAccount.Address) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: txGen, + Cdc: nil, + Msg: msg, + Context: ctx, + SimAccount: simAccount, + 
AccountKeeper: ak, + Bankkeeper: bk, + ModuleName: types.ModuleName, + CoinsSpentInMsg: spendable, +} + +return simulation.GenAndDeliverTxWithRandFees(txCtx) +} +} + +/ SimulateMsgBeginRedelegate generates a MsgBeginRedelegate with random values +func SimulateMsgBeginRedelegate( + txGen client.TxConfig, + ak types.AccountKeeper, + bk types.BankKeeper, + k *keeper.Keeper, +) + +simtypes.Operation { + return func( + r *rand.Rand, app *baseapp.BaseApp, ctx sdk.Context, accs []simtypes.Account, chainID string, + ) (simtypes.OperationMsg, []simtypes.FutureOperation, error) { + msgType := sdk.MsgTypeURL(&types.MsgBeginRedelegate{ +}) + +allVals, err := k.GetAllValidators(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to get validators"), nil, err +} + if len(allVals) == 0 { + return simtypes.NoOpMsg(types.ModuleName, msgType, "number of validators equal zero"), nil, nil +} + +srcVal, ok := testutil.RandSliceElem(r, allVals) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to pick validator"), nil, nil +} + +srcAddr, err := k.ValidatorAddressCodec().StringToBytes(srcVal.GetOperator()) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting validator address bytes"), nil, err +} + +delegations, err := k.GetValidatorDelegations(ctx, srcAddr) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting validator delegations"), nil, nil +} + if delegations == nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "keeper does have any delegation entries"), nil, nil +} + + / get random delegator from src validator + delegation := delegations[r.Intn(len(delegations))] + delAddr := delegation.GetDelegatorAddr() + +delAddrBz, err := ak.AddressCodec().StringToBytes(delAddr) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting delegator address bytes"), nil, err +} + +hasRecRedel, err := k.HasReceivingRedelegation(ctx, delAddrBz, 
srcAddr) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting receiving redelegation"), nil, err +} + if hasRecRedel { + return simtypes.NoOpMsg(types.ModuleName, msgType, "receveing redelegation is not allowed"), nil, nil / skip +} + + / get random destination validator + destVal, ok := testutil.RandSliceElem(r, allVals) + if !ok { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to pick validator"), nil, nil +} + +destAddr, err := k.ValidatorAddressCodec().StringToBytes(destVal.GetOperator()) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting validator address bytes"), nil, err +} + +hasMaxRedel, err := k.HasMaxRedelegationEntries(ctx, delAddrBz, srcAddr, destAddr) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "error getting max redelegation entries"), nil, err +} + if bytes.Equal(srcAddr, destAddr) || destVal.InvalidExRate() || hasMaxRedel { + return simtypes.NoOpMsg(types.ModuleName, msgType, "checks failed"), nil, nil +} + totalBond := srcVal.TokensFromShares(delegation.GetShares()).TruncateInt() + if !totalBond.IsPositive() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "total bond is negative"), nil, nil +} + +redAmt, err := simtypes.RandPositiveInt(r, totalBond) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "unable to generate positive amount"), nil, err +} + if redAmt.IsZero() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "amount is zero"), nil, nil +} + + / check if the shares truncate to zero + shares, err := srcVal.SharesFromTokens(redAmt) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "invalid shares"), nil, err +} + if srcVal.TokensFromShares(shares).TruncateInt().IsZero() { + return simtypes.NoOpMsg(types.ModuleName, msgType, "shares truncate to zero"), nil, nil / skip +} + + / need to retrieve the simulation account associated with delegation to retrieve PrivKey + var simAccount 
simtypes.Account + for _, simAcc := range accs { + if simAcc.Address.Equals(sdk.AccAddress(delAddrBz)) { + simAccount = simAcc + break +} + +} + + / if simaccount.PrivKey == nil, delegation address does not exist in accs. However, since smart contracts and module accounts can stake, we can ignore the error + if simAccount.PrivKey == nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "account private key is nil"), nil, nil +} + account := ak.GetAccount(ctx, delAddrBz) + spendable := bk.SpendableCoins(ctx, account.GetAddress()) + +bondDenom, err := k.BondDenom(ctx) + if err != nil { + return simtypes.NoOpMsg(types.ModuleName, msgType, "bond denom not found"), nil, err +} + msg := types.NewMsgBeginRedelegate( + delAddr, srcVal.GetOperator(), destVal.GetOperator(), + sdk.NewCoin(bondDenom, redAmt), + ) + txCtx := simulation.OperationInput{ + R: r, + App: app, + TxGen: txGen, + Cdc: nil, + Msg: msg, + Context: ctx, + SimAccount: simAccount, + AccountKeeper: ak, + Bankkeeper: bk, + ModuleName: types.ModuleName, + CoinsSpentInMsg: spendable, +} + +return simulation.GenAndDeliverTxWithRandFees(txCtx) +} +} +``` + +As you can see, the weights are predefined in this case, but they can be overridden: one option is to use `*rand.Rand` to derive a random weight for an operation; another is to inject your own predefined weights through the simulation application parameters. + +The simulation tests themselves are driven by `Makefile` targets (see the `test-sim-*` rules below). For reference, here is the Cosmos SDK `Makefile` that defines them: + +```makefile expandable +#!/usr/bin/make -f + +PACKAGES_NOSIMULATION=$(shell go list ./... | grep -v '/simulation') + +PACKAGES_SIMTEST=$(shell go list ./...
| grep '/simulation') + +export VERSION := $(shell echo $(shell git describe --tags --always --match "v*") | sed 's/^v/') + +export CMTVERSION := $(shell go list -m github.com/cometbft/cometbft | sed 's:.* ::') + +export COMMIT := $(shell git log -1 --format='%H') + +LEDGER_ENABLED ?= true +BINDIR ?= $(GOPATH)/bin +BUILDDIR ?= $(CURDIR)/build +SIMAPP = ./simapp +MOCKS_DIR = $(CURDIR)/tests/mocks +HTTPS_GIT := https://github.com/cosmos/cosmos-sdk.git +DOCKER := $(shell which docker) + +PROJECT_NAME = $(shell git remote get-url origin | xargs basename -s .git) + +# process build tags +build_tags = netgo + ifeq ($(LEDGER_ENABLED),true) + ifeq ($(OS),Windows_NT) + +GCCEXE = $(shell where gcc.exe 2> NUL) + ifeq ($(GCCEXE),) + $(error gcc.exe not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + else + UNAME_S = $(shell uname -s) + ifeq ($(UNAME_S),OpenBSD) + $(warning OpenBSD detected, disabling ledger support (https://github.com/cosmos/cosmos-sdk/issues/1988)) + +else + GCC = $(shell command -v gcc 2> /dev/null) + ifeq ($(GCC),) + $(error gcc not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + endif + endif +endif + ifeq (secp,$(findstring secp,$(COSMOS_BUILD_OPTIONS))) + +build_tags += libsecp256k1_sdk +endif + ifeq (legacy,$(findstring legacy,$(COSMOS_BUILD_OPTIONS))) + +build_tags += app_v1 +endif + whitespace := +whitespace += $(whitespace) + comma := , +build_tags_comma_sep := $(subst $(whitespace),$(comma),$(build_tags)) + +# process linker flags + +ldflags = -X github.com/cosmos/cosmos-sdk/version.Name=sim \ + -X github.com/cosmos/cosmos-sdk/version.AppName=simd \ + -X github.com/cosmos/cosmos-sdk/version.Version=$(VERSION) \ + -X github.com/cosmos/cosmos-sdk/version.Commit=$(COMMIT) \ + -X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)" \ + -X github.com/cometbft/cometbft/version.TMCoreSemVer=$(CMTVERSION) 
+ +# DB backend selection + ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += gcc +endif + ifeq (badgerdb,$(findstring badgerdb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += badgerdb +endif +# handle rocksdb + ifeq (rocksdb,$(findstring rocksdb,$(COSMOS_BUILD_OPTIONS))) + +CGO_ENABLED=1 + build_tags += rocksdb +endif +# handle boltdb + ifeq (boltdb,$(findstring boltdb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += boltdb +endif + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +ldflags += -w -s +endif +ldflags += $(LDFLAGS) + ldflags := $(strip $(ldflags)) + +build_tags += $(BUILD_TAGS) + +build_tags := $(strip $(build_tags)) + +BUILD_FLAGS := -tags "$(build_tags)" -ldflags '$(ldflags)' +# check for nostrip option + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -trimpath +endif + +# Check for debug option + ifeq (debug,$(findstring debug,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -gcflags "all=-N -l" +endif + +all: tools build lint test vulncheck + +# The below include contains the tools and runsim targets. +include contrib/devtools/Makefile + +############################################################################### +### Build ### +############################################################################### + +BUILD_TARGETS := build install + +build: BUILD_ARGS=-o $(BUILDDIR)/ + +build-linux-amd64: + GOOS=linux GOARCH=amd64 LEDGER_ENABLED=false $(MAKE) + +build + +build-linux-arm64: + GOOS=linux GOARCH=arm64 LEDGER_ENABLED=false $(MAKE) + +build + +$(BUILD_TARGETS): go.sum $(BUILDDIR)/ + cd ${ + CURRENT_DIR +}/simapp && go $@ -mod=readonly $(BUILD_FLAGS) $(BUILD_ARGS) ./... 
+ +$(BUILDDIR)/: + mkdir -p $(BUILDDIR)/ + +cosmovisor: + $(MAKE) -C tools/cosmovisor cosmovisor + +confix: + $(MAKE) -C tools/confix confix + +hubl: + $(MAKE) -C tools/hubl hubl + +.PHONY: build build-linux-amd64 build-linux-arm64 cosmovisor confix + +mocks: $(MOCKS_DIR) + @go install github.com/golang/mock/mockgen@v1.6.0 + sh ./scripts/mockgen.sh +.PHONY: mocks + +vulncheck: $(BUILDDIR)/ + GOBIN=$(BUILDDIR) + +go install golang.org/x/vuln/cmd/govulncheck@latest + $(BUILDDIR)/govulncheck ./... + +$(MOCKS_DIR): + mkdir -p $(MOCKS_DIR) + +distclean: clean tools-clean +clean: + rm -rf \ + $(BUILDDIR)/ \ + artifacts/ \ + tmp-swagger-gen/ \ + .testnets + +.PHONY: distclean clean + +############################################################################### +### Tools & Dependencies ### +############################################################################### + +go.sum: go.mod + echo "Ensure dependencies have not been modified ..." >&2 + go mod verify + go mod tidy + +############################################################################### +### Documentation ### +############################################################################### + +godocs: + @echo "--> Wait a few seconds and visit http:/localhost:6060/pkg/github.com/cosmos/cosmos-sdk/types" + go install golang.org/x/tools/cmd/godoc@latest + godoc -http=:6060 + +build-docs: + @cd docs && DOCS_DOMAIN=docs.cosmos.network sh ./build-all.sh + +.PHONY: build-docs + +############################################################################### +### Tests & Simulation ### +############################################################################### + +# make init-simapp initializes a single local node network +# it is useful for testing and development +# Usage: make install && make init-simapp && simd start +# Warning: make init-simapp will remove all data in simapp home directory +init-simapp: + ./scripts/init-simapp.sh + +test: test-unit +test-e2e: + $(MAKE) -C tests test-e2e 
+test-e2e-cov: + $(MAKE) -C tests test-e2e-cov +test-integration: + $(MAKE) -C tests test-integration +test-integration-cov: + $(MAKE) -C tests test-integration-cov +test-all: test-unit test-e2e test-integration test-ledger-mock test-race + +TEST_PACKAGES=./... +TEST_TARGETS := test-unit test-unit-amino test-unit-proto test-ledger-mock test-race test-ledger test-race + +# Test runs-specific rules. To add a new test target, just add +# a new rule, customise ARGS or TEST_PACKAGES ad libitum, and +# append the new rule to the TEST_TARGETS list. +test-unit: test_tags += cgo ledger test_ledger_mock norace +test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace +test-ledger: test_tags += cgo ledger norace +test-ledger-mock: test_tags += ledger test_ledger_mock norace +test-race: test_tags += cgo ledger test_ledger_mock +test-race: ARGS=-race +test-race: TEST_PACKAGES=$(PACKAGES_NOSIMULATION) +$(TEST_TARGETS): run-tests + +# check-* compiles and collects tests without running them +# note: go test -c doesn't support multiple packages yet (https://github.com/golang/go/issues/15513) + +CHECK_TEST_TARGETS := check-test-unit check-test-unit-amino +check-test-unit: test_tags += cgo ledger test_ledger_mock norace +check-test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace +$(CHECK_TEST_TARGETS): EXTRA_ARGS=-run=none +$(CHECK_TEST_TARGETS): run-tests + +ARGS += -tags "$(test_tags)" +SUB_MODULES = $(shell find . -type f -name 'go.mod' -print0 | xargs -0 -n1 dirname | sort) + +CURRENT_DIR = $(shell pwd) + +run-tests: + ifneq (,$(shell which tparse 2>/dev/null)) + @echo "Starting unit tests"; \ + finalec=0; \ + for module in $(SUB_MODULES); do \ + cd ${ + CURRENT_DIR +}/$module; \ + echo "Running unit tests for $(grep '^module' go.mod)"; \ + go test -mod=readonly -json $(ARGS) $(TEST_PACKAGES) ./... 
| tparse; \ + ec=$?; \ + if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \ + done; \ + exit $finalec +else + @echo "Starting unit tests"; \ + finalec=0; \ + for module in $(SUB_MODULES); do \ + cd ${ + CURRENT_DIR +}/$module; \ + echo "Running unit tests for $(grep '^module' go.mod)"; \ + go test -mod=readonly $(ARGS) $(TEST_PACKAGES) ./... ; \ + ec=$?; \ + if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \ + done; \ + exit $finalec +endif + +.PHONY: run-tests test test-all $(TEST_TARGETS) + +test-sim-nondeterminism: + @echo "Running non-determinism test..." + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \ + -NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h + +# Requires an exported plugin. See store/streaming/README.md for documentation. +# +# example: +# export COSMOS_SDK_ABCI_V1= +# make test-sim-nondeterminism-streaming +# +# Using the built-in examples: +# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file +# make test-sim-nondeterminism-streaming +test-sim-nondeterminism-streaming: + @echo "Running non-determinism-streaming test..." + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \ + -NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h -EnableStreaming=true + +test-sim-custom-genesis-fast: + @echo "Running custom genesis simulation..." + @echo "By default, ${ + HOME +}/.simapp/config/genesis.json will be used." + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run TestFullAppSimulation -Genesis=${ + HOME +}/.simapp/config/genesis.json \ + -Enabled=true -NumBlocks=100 -BlockSize=200 -Commit=true -Seed=99 -Period=5 -SigverifyTx=false -v -timeout 24h + +test-sim-import-export: runsim + @echo "Running application import/export simulation. This may take several minutes..." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. 
-ExitOnFail 50 5 TestAppImportExport + +test-sim-after-import: runsim + @echo "Running application simulation-after-import. This may take several minutes..." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppSimulationAfterImport + +test-sim-custom-genesis-multi-seed: runsim + @echo "Running multi-seed custom genesis simulation..." + @echo "By default, ${ + HOME +}/.simapp/config/genesis.json will be used." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Genesis=${ + HOME +}/.simapp/config/genesis.json -SigverifyTx=false -SimAppPkg=. -ExitOnFail 400 5 TestFullAppSimulation + +test-sim-multi-seed-long: runsim + @echo "Running long multi-seed application simulation. This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 500 50 TestFullAppSimulation + +test-sim-multi-seed-short: runsim + @echo "Running short multi-seed application simulation. This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 10 TestFullAppSimulation + +test-sim-benchmark-invariants: + @echo "Running simulation invariant benchmarks..." + cd ${ + CURRENT_DIR +}/simapp && @go test -mod=readonly -benchmem -bench=BenchmarkInvariants -run=^$ \ + -Enabled=true -NumBlocks=1000 -BlockSize=200 \ + -Period=1 -Commit=true -Seed=57 -v -timeout 24h + +.PHONY: \ +test-sim-nondeterminism \ +test-sim-nondeterminism-streaming \ +test-sim-custom-genesis-fast \ +test-sim-import-export \ +test-sim-after-import \ +test-sim-custom-genesis-multi-seed \ +test-sim-multi-seed-short \ +test-sim-multi-seed-long \ +test-sim-benchmark-invariants + +SIM_NUM_BLOCKS ?= 500 +SIM_BLOCK_SIZE ?= 200 +SIM_COMMIT ?= true + +test-sim-benchmark: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run=^$ $(.) 
-bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h + +# Requires an exported plugin. See store/streaming/README.md for documentation. +# +# example: +# export COSMOS_SDK_ABCI_V1= +# make test-sim-benchmark-streaming +# +# Using the built-in examples: +# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file +# make test-sim-benchmark-streaming +test-sim-benchmark-streaming: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -EnableStreaming=true + +test-sim-profile: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -benchmem -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out + +# Requires an exported plugin. See store/streaming/README.md for documentation. +# +# example: +# export COSMOS_SDK_ABCI_V1= +# make test-sim-profile-streaming +# +# Using the built-in examples: +# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file +# make test-sim-profile-streaming +test-sim-profile-streaming: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -benchmem -run=^$ $(.) 
-bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out -EnableStreaming=true + +.PHONY: test-sim-profile test-sim-benchmark + +benchmark: + @go test -mod=readonly -bench=. $(PACKAGES_NOSIMULATION) +.PHONY: benchmark + +############################################################################### +### Linting ### +############################################################################### + +golangci_version=v1.51.2 + +lint-install: + @echo "--> Installing golangci-lint $(golangci_version)" + @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version) + +lint: + @echo "--> Running linter" + $(MAKE) + +lint-install + @./scripts/go-lint-all.bash --timeout=15m + +lint-fix: + @echo "--> Running linter" + $(MAKE) + +lint-install + @./scripts/go-lint-all.bash --fix + +.PHONY: lint lint-fix + +############################################################################### +### Protobuf ### +############################################################################### + +protoVer=0.14.0 +protoImageName=ghcr.io/cosmos/proto-builder:$(protoVer) + +protoImage=$(DOCKER) + +run --rm -v $(CURDIR):/workspace --workdir /workspace $(protoImageName) + +proto-all: proto-format proto-lint proto-gen + +proto-gen: + @echo "Generating Protobuf files" + @$(protoImage) + +sh ./scripts/protocgen.sh + +proto-swagger-gen: + @echo "Generating Protobuf Swagger" + @$(protoImage) + +sh ./scripts/protoc-swagger-gen.sh + +proto-format: + @$(protoImage) + +find ./ -name "*.proto" -exec clang-format -i { +} \; + +proto-lint: + @$(protoImage) + +buf lint --error-format=json + +proto-check-breaking: + @$(protoImage) + +buf breaking --against $(HTTPS_GIT)#branch=main + +CMT_URL = https://raw.githubusercontent.com/cometbft/cometbft/v0.38.0/proto/tendermint + +CMT_CRYPTO_TYPES = proto/tendermint/crypto +CMT_ABCI_TYPES = proto/tendermint/abci 
+CMT_TYPES = proto/tendermint/types +CMT_VERSION = proto/tendermint/version +CMT_LIBS = proto/tendermint/libs/bits +CMT_P2P = proto/tendermint/p2p + +proto-update-deps: + @echo "Updating Protobuf dependencies" + + @mkdir -p $(CMT_ABCI_TYPES) + @curl -sSL $(CMT_URL)/abci/types.proto > $(CMT_ABCI_TYPES)/types.proto + + @mkdir -p $(CMT_VERSION) + @curl -sSL $(CMT_URL)/version/types.proto > $(CMT_VERSION)/types.proto + + @mkdir -p $(CMT_TYPES) + @curl -sSL $(CMT_URL)/types/types.proto > $(CMT_TYPES)/types.proto + @curl -sSL $(CMT_URL)/types/evidence.proto > $(CMT_TYPES)/evidence.proto + @curl -sSL $(CMT_URL)/types/params.proto > $(CMT_TYPES)/params.proto + @curl -sSL $(CMT_URL)/types/validator.proto > $(CMT_TYPES)/validator.proto + @curl -sSL $(CMT_URL)/types/block.proto > $(CMT_TYPES)/block.proto + + @mkdir -p $(CMT_CRYPTO_TYPES) + @curl -sSL $(CMT_URL)/crypto/proof.proto > $(CMT_CRYPTO_TYPES)/proof.proto + @curl -sSL $(CMT_URL)/crypto/keys.proto > $(CMT_CRYPTO_TYPES)/keys.proto + + @mkdir -p $(CMT_LIBS) + @curl -sSL $(CMT_URL)/libs/bits/types.proto > $(CMT_LIBS)/types.proto + + @mkdir -p $(CMT_P2P) + @curl -sSL $(CMT_URL)/p2p/types.proto > $(CMT_P2P)/types.proto + + $(DOCKER) + +run --rm -v $(CURDIR)/proto:/workspace --workdir /workspace $(protoImageName) + +buf mod update + +.PHONY: proto-all proto-gen proto-swagger-gen proto-format proto-lint proto-check-breaking proto-update-deps + +############################################################################### +### Localnet ### +############################################################################### + +localnet-build-env: + $(MAKE) -C contrib/images simd-env +localnet-build-dlv: + $(MAKE) -C contrib/images simd-dlv + +localnet-build-nodes: + $(DOCKER) + +run --rm -v $(CURDIR)/.testnets:/data cosmossdk/simd \ + testnet init-files --v 4 -o /data --starting-ip-address 192.168.10.2 --keyring-backend=test + docker compose up -d + +localnet-stop: + docker compose down + +# localnet-start will run a 4-node 
testnet locally. The nodes are +# based off the docker images in: ./contrib/images/simd-env +localnet-start: localnet-stop localnet-build-env localnet-build-nodes + +# localnet-debug will run a 4-node testnet locally in debug mode +# you can read more about the debug mode here: ./contrib/images/simd-dlv/README.md +localnet-debug: localnet-stop localnet-build-dlv localnet-build-nodes + +.PHONY: localnet-start localnet-stop localnet-debug localnet-build-env localnet-build-dlv localnet-build-nodes +``` + +For the last few tests, a tool called [runsim](https://github.com/cosmos/tools/tree/master/cmd/runsim) is used to parallelize `go test` instances; it also provides GitHub and Slack integrations that keep your team informed about how the simulations are running. + +### Random proposal contents + +Randomized governance proposals are also supported by the Cosmos SDK simulator. Each module must define the governance proposal `Content`s that it exposes and register them to be used via the simulation parameters.
+ +## Registering simulation functions + +Now that all the required functions are defined, we need to integrate them into the module pattern within the `module.go`: + +```go expandable +package distribution + +import ( + + "context" + "encoding/json" + "fmt" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/distribution/client/cli" + "github.com/cosmos/cosmos-sdk/x/distribution/exported" + "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + "github.com/cosmos/cosmos-sdk/x/distribution/simulation" + "github.com/cosmos/cosmos-sdk/x/distribution/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + staking "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ ConsensusVersion defines the current x/distribution module consensus version. +const ConsensusVersion = 3 + +var ( + _ module.AppModuleBasic = AppModule{ +} + _ module.AppModuleSimulation = AppModule{ +} + _ module.HasGenesis = AppModule{ +} + _ module.HasServices = AppModule{ +} + _ module.HasInvariants = AppModule{ +} + + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasBeginBlocker = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the distribution module. +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the distribution module's name. 
+func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the distribution module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the distribution +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the distribution module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(&data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the distribution module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the distribution module. +func (ab AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd(ab.cdc.InterfaceRegistry().SigningContext().ValidatorAddressCodec(), ab.cdc.InterfaceRegistry().SigningContext().AddressCodec()) +} + +/ RegisterInterfaces implements InterfaceModule +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the distribution module. 
+type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + bankKeeper types.BankKeeper + stakingKeeper types.StakingKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +/ NewAppModule creates a new AppModule object +func NewAppModule( + cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, stakingKeeper types.StakingKeeper, ss exported.Subspace, +) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: accountKeeper.AddressCodec() +}, + keeper: keeper, + accountKeeper: accountKeeper, + bankKeeper: bankKeeper, + stakingKeeper: stakingKeeper, + legacySubspace: ss, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterInvariants registers the distribution module invariants. +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ RegisterServices registers module services. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQuerier(am.keeper)) + m := keeper.NewMigrator(am.keeper, am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} +} + +/ InitGenesis performs genesis initialization for the distribution module. 
It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.keeper.InitGenesis(ctx, genesisState) +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the distribution +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ BeginBlock returns the begin blocker for the distribution module. +func (am AppModule) + +BeginBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return BeginBlocker(c, am.keeper) +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the distribution module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(_ module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for distribution module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accountKeeper, am.bankKeeper, am.keeper, am.stakingKeeper, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + StoreService store.KVStoreService + Cdc codec.Codec + + AccountKeeper types.AccountKeeper + BankKeeper types.BankKeeper + StakingKeeper types.StakingKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + DistrKeeper keeper.Keeper + Module appmodule.AppModule + Hooks staking.StakingHooksWrapper +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + feeCollectorName := in.Config.FeeCollectorName + if feeCollectorName == "" { + feeCollectorName = authtypes.FeeCollectorName +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + k := keeper.NewKeeper( + in.Cdc, + in.StoreService, + in.AccountKeeper, + in.BankKeeper, + in.StakingKeeper, + feeCollectorName, + authority.String(), + ) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.StakingKeeper, in.LegacySubspace) + +return ModuleOutputs{ + DistrKeeper: k, + Module: m, + Hooks: staking.StakingHooksWrapper{ + StakingHooks: k.Hooks() +}, +} +} +``` + +## App Simulator manager + +The following step is setting up the `SimulatorManager` at the app level. This +is required for the simulation test files on the next step. + +```go +type CustomApp struct { + ... 
+ sm *module.SimulationManager +} +``` + +Then at the instantiation of the application, we create the `SimulationManager` +instance in the same way we create the `ModuleManager` but this time we only pass +the modules that implement the simulation functions from the `AppModuleSimulation` +interface described above. + +```go expandable +func NewCustomApp(...) { + / create the simulation manager and define the order of the modules for deterministic simulations + app.sm = module.NewSimulationManager( + auth.NewAppModule(app.accountKeeper), + bank.NewAppModule(app.bankKeeper, app.accountKeeper), + supply.NewAppModule(app.supplyKeeper, app.accountKeeper), + gov.NewAppModule(app.govKeeper, app.accountKeeper, app.supplyKeeper), + mint.NewAppModule(app.mintKeeper), + distr.NewAppModule(app.distrKeeper, app.accountKeeper, app.supplyKeeper, app.stakingKeeper), + staking.NewAppModule(app.stakingKeeper, app.accountKeeper, app.supplyKeeper), + slashing.NewAppModule(app.slashingKeeper, app.accountKeeper, app.stakingKeeper), + ) + + / register the store decoders for simulation tests + app.sm.RegisterStoreDecoders() + ... +} +``` diff --git a/docs/sdk/v0.50/documentation/module-system/structure.mdx b/docs/sdk/v0.50/documentation/module-system/structure.mdx new file mode 100644 index 00000000..9ac22181 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/structure.mdx @@ -0,0 +1,94 @@ +--- +title: Recommended Folder Structure +--- + + +**Synopsis** +This document outlines the recommended structure of Cosmos SDK modules. These ideas are meant to be applied as suggestions. Application developers are encouraged to improve upon and contribute to module structure and development design. 
+ + +## Structure + +A typical Cosmos SDK module can be structured as follows: + +```shell
+proto
+└── {project_name}
+    └── {module_name}
+        └── {proto_version}
+            ├── {module_name}.proto
+            ├── event.proto
+            ├── genesis.proto
+            ├── query.proto
+            └── tx.proto
+``` + +* `{module_name}.proto`: The module's common message type definitions. +* `event.proto`: The module's message type definitions related to events. +* `genesis.proto`: The module's message type definitions related to genesis state. +* `query.proto`: The module's Query service and related message type definitions. +* `tx.proto`: The module's Msg service and related message type definitions. + +```shell expandable
+x/{module_name}
+├── client
+│   ├── cli
+│   │   ├── query.go
+│   │   └── tx.go
+│   └── testutil
+│       ├── cli_test.go
+│       └── suite.go
+├── exported
+│   └── exported.go
+├── keeper
+│   ├── genesis.go
+│   ├── grpc_query.go
+│   ├── hooks.go
+│   ├── invariants.go
+│   ├── keeper.go
+│   ├── keys.go
+│   ├── msg_server.go
+│   └── querier.go
+├── module
+│   ├── module.go
+│   ├── abci.go
+│   └── autocli.go
+├── simulation
+│   ├── decoder.go
+│   ├── genesis.go
+│   ├── operations.go
+│   └── params.go
+├── {module_name}.pb.go
+├── codec.go
+├── errors.go
+├── events.go
+├── events.pb.go
+├── expected_keepers.go
+├── genesis.go
+├── genesis.pb.go
+├── keys.go
+├── msgs.go
+├── params.go
+├── query.pb.go
+├── tx.pb.go
+└── README.md
+``` + +* `client/`: The module's CLI client functionality implementation and the module's CLI testing suite. +* `exported/`: The module's exported types - typically interface types. If a module relies on keepers from another module, it is expected to receive the keepers as interface contracts through the `expected_keepers.go` file (see below) in order to avoid a direct dependency on the module implementing the keepers.
However, these interface contracts can define methods that operate on and/or return types that are specific to the module that is implementing the keepers, and this is where `exported/` comes into play. The interface types that are defined in `exported/` use canonical types, allowing for the module to receive the keepers as interface contracts through the `expected_keepers.go` file. This pattern allows for code to remain DRY and also alleviates import cycle chaos. +* `keeper/`: The module's `Keeper` and `MsgServer` implementation. +* `module/`: The module's `AppModule` and `AppModuleBasic` implementation. + * `abci.go`: The module's `BeginBlocker` and `EndBlocker` implementations (this file is only required if `BeginBlocker` and/or `EndBlocker` need to be defined). + * `autocli.go`: The module [autocli](https://docs.cosmos.network/main/core/autocli) options. +* `simulation/`: The module's [simulation](/docs/sdk/v0.50/documentation/module-system/simulator) package defines functions used by the blockchain simulator application (`simapp`). +* `README.md`: The module's specification documents outlining important concepts, state storage structure, and message and event type definitions. Learn more about how to write module specs in the [spec guidelines](/docs/sdk/v0.50/documentation/protocol-development/SPEC_MODULE). +* The root directory includes type definitions for messages, events, and genesis state, including the type definitions generated by Protocol Buffers. + * `codec.go`: The module's registry methods for interface types. + * `errors.go`: The module's sentinel errors. + * `events.go`: The module's event types and constructors. + * `expected_keepers.go`: The module's [expected keeper](/docs/sdk/v0.50/documentation/module-system/keeper#type-definition) interfaces. + * `genesis.go`: The module's genesis state methods and helper functions. + * `keys.go`: The module's store keys and associated helper functions.
+ * `msgs.go`: The module's message type definitions and associated methods. + * `params.go`: The module's parameter type definitions and associated methods. + * `*.pb.go`: The module's type definitions generated by Protocol Buffers (as defined in the respective `*.proto` files above). diff --git a/docs/sdk/v0.50/documentation/module-system/testing.mdx b/docs/sdk/v0.50/documentation/module-system/testing.mdx new file mode 100644 index 00000000..d24bc2c2 --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/testing.mdx @@ -0,0 +1,2781 @@ +--- +title: Testing +--- + +The Cosmos SDK contains different types of [tests](https://martinfowler.com/articles/practical-test-pyramid.html). +These tests have different goals and are used at different stages of the development cycle. +As a general rule, we recommend using tests at all stages of the development cycle. +As a chain developer, you are advised to test your application and modules in a similar way to the SDK. + +The rationale behind testing can be found in [ADR-59](https://docs.cosmos.network/main/architecture/adr-059-test-scopes.html). + +## Unit Tests + +Unit tests are the lowest test category of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html). +All packages and modules should have unit test coverage. Modules should have their dependencies mocked: this means mocking keepers.
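As a minimal, self-contained sketch of what mocking a keeper means, the following uses a hand-written stub in place of a generated mock; `BankKeeper`, `mockBankKeeper`, and `hasFunds` are illustrative names, not actual SDK types:

```go
package main

import "fmt"

// BankKeeper is a hypothetical expected-keeper interface, similar to what a
// module would declare in its expected_keepers.go file.
type BankKeeper interface {
	GetBalance(addr string) int64
}

// mockBankKeeper is a hand-written stub; in the SDK, mockgen generates
// equivalent mocks automatically.
type mockBankKeeper struct {
	balances map[string]int64
}

func (m mockBankKeeper) GetBalance(addr string) int64 {
	return m.balances[addr]
}

// hasFunds stands in for module logic under test; it depends only on the
// interface, so it can be unit-tested without wiring up a real bank module.
func hasFunds(bk BankKeeper, addr string, amount int64) bool {
	return bk.GetBalance(addr) >= amount
}

func main() {
	bk := mockBankKeeper{balances: map[string]int64{"alice": 100}}
	fmt.Println(hasFunds(bk, "alice", 50))  // true
	fmt.Println(hasFunds(bk, "alice", 200)) // false
}
```

Because the module only sees the interface, the mock fully controls what the "keeper" returns, keeping the test isolated to the module's own logic.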
+ +The SDK uses `mockgen` to generate mocks for keepers: + +```shell expandable +#!/usr/bin/env bash + +mockgen_cmd="mockgen" +$mockgen_cmd -source=client/account_retriever.go -package mock -destination testutil/mock/account_retriever.go +$mockgen_cmd -package mock -destination store/mock/cosmos_cosmos_db_DB.go github.com/cosmos/cosmos-db DB +$mockgen_cmd -source=types/module/module.go -package mock -destination testutil/mock/types_module_module.go +$mockgen_cmd -source=types/module/mock_appmodule_test.go -package mock -destination testutil/mock/types_mock_appmodule.go +$mockgen_cmd -source=types/invariant.go -package mock -destination testutil/mock/types_invariant.go +$mockgen_cmd -package mock -destination testutil/mock/grpc_server.go github.com/cosmos/gogoproto/grpc Server +$mockgen_cmd -package mock -destination testutil/mock/logger.go cosmossdk.io/log Logger +$mockgen_cmd -source=orm/model/ormtable/hooks.go -package ormmocks -destination orm/testing/ormmocks/hooks.go +$mockgen_cmd -source=x/nft/expected_keepers.go -package testutil -destination x/nft/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/feegrant/expected_keepers.go -package testutil -destination x/feegrant/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/mint/types/expected_keepers.go -package testutil -destination x/mint/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/params/proposal_handler_test.go -package testutil -destination x/params/testutil/staking_keeper_mock.go +$mockgen_cmd -source=x/crisis/types/expected_keepers.go -package testutil -destination x/crisis/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/auth/tx/config/expected_keepers.go -package testutil -destination x/auth/tx/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/auth/types/expected_keepers.go -package testutil -destination x/auth/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/auth/ante/expected_keepers.go -package testutil -destination 
x/auth/ante/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/authz/expected_keepers.go -package testutil -destination x/authz/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/bank/types/expected_keepers.go -package testutil -destination x/bank/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/group/testutil/expected_keepers.go -package testutil -destination x/group/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/evidence/types/expected_keepers.go -package testutil -destination x/evidence/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/distribution/types/expected_keepers.go -package testutil -destination x/distribution/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/slashing/types/expected_keepers.go -package testutil -destination x/slashing/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/genutil/types/expected_keepers.go -package testutil -destination x/genutil/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/gov/testutil/expected_keepers.go -package testutil -destination x/gov/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/staking/types/expected_keepers.go -package testutil -destination x/staking/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/auth/vesting/types/expected_keepers.go -package testutil -destination x/auth/vesting/testutil/expected_keepers_mocks.go +``` + +You can read more about mockgen [here](https://github.com/golang/mock). + +### Example + +As an example, we will walk through the [keeper tests](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/gov/keeper/keeper_test.go) of the `x/gov` module. + +The `x/gov` module has a `Keeper` type, which requires a few external dependencies (i.e., imports outside `x/gov` to work properly).
+ +```go expandable +package keeper + +import ( + + "context" + "fmt" + "time" + "cosmossdk.io/collections" + + corestoretypes "cosmossdk.io/core/store" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" +) + +/ Keeper defines the governance module Keeper +type Keeper struct { + authKeeper types.AccountKeeper + bankKeeper types.BankKeeper + distrKeeper types.DistributionKeeper + + / The reference to the DelegationSet and ValidatorSet to get information about validators and delegators + sk types.StakingKeeper + + / GovHooks + hooks types.GovHooks + + / The (unexposed) + +keys used to access the stores from the Context. + storeService corestoretypes.KVStoreService + + / The codec for binary encoding/decoding. + cdc codec.Codec + + / Legacy Proposal router + legacyRouter v1beta1.Router + + / Msg server router + router baseapp.MessageRouter + + config types.Config + + / the address capable of executing a MsgUpdateParams message. Typically, this + / should be the x/gov module account. + authority string + + Schema collections.Schema + Constitution collections.Item[string] + Params collections.Item[v1.Params] + Deposits collections.Map[collections.Pair[uint64, sdk.AccAddress], v1.Deposit] + Votes collections.Map[collections.Pair[uint64, sdk.AccAddress], v1.Vote] + ProposalID collections.Sequence + Proposals collections.Map[uint64, v1.Proposal] + ActiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] / TODO(tip): this should be simplified and go into an index. + InactiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] / TODO(tip): this should be simplified and go into an index. + VotingPeriodProposals collections.Map[uint64, []byte] / TODO(tip): this could be a keyset or index. 
+} + +/ GetAuthority returns the x/gov module's authority. +func (k Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ NewKeeper returns a governance keeper. It handles: +/ - submitting governance proposals +/ - depositing funds into proposals, and activating upon sufficient funds being deposited +/ - users voting on proposals, with weight proportional to stake in the system +/ - and tallying the result of the vote. +/ +/ CONTRACT: the parameter Subspace must have the param key table already initialized +func NewKeeper( + cdc codec.Codec, storeService corestoretypes.KVStoreService, authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, sk types.StakingKeeper, distrKeeper types.DistributionKeeper, + router baseapp.MessageRouter, config types.Config, authority string, +) *Keeper { + / ensure governance module account is set + if addr := authKeeper.GetModuleAddress(types.ModuleName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.ModuleName)) +} + if _, err := authKeeper.AddressCodec().StringToBytes(authority); err != nil { + panic(fmt.Sprintf("invalid authority address: %s", authority)) +} + + / If MaxMetadataLen not set by app developer, set to default value. 
+ if config.MaxMetadataLen == 0 { + config.MaxMetadataLen = types.DefaultConfig().MaxMetadataLen +} + sb := collections.NewSchemaBuilder(storeService) + k := &Keeper{ + storeService: storeService, + authKeeper: authKeeper, + bankKeeper: bankKeeper, + distrKeeper: distrKeeper, + sk: sk, + cdc: cdc, + router: router, + config: config, + authority: authority, + Constitution: collections.NewItem(sb, types.ConstitutionKey, "constitution", collections.StringValue), + Params: collections.NewItem(sb, types.ParamsKey, "params", codec.CollValue[v1.Params](/docs/sdk/v0.50/documentation/module-system/cdc)), + Deposits: collections.NewMap(sb, types.DepositsKeyPrefix, "deposits", collections.PairKeyCodec(collections.Uint64Key, sdk.AddressKeyAsIndexKey(sdk.AccAddressKey)), codec.CollValue[v1.Deposit](/docs/sdk/v0.50/documentation/module-system/cdc)), / nolint: staticcheck / sdk.AddressKeyAsIndexKey is needed to retain state compatibility + Votes: collections.NewMap(sb, types.VotesKeyPrefix, "votes", collections.PairKeyCodec(collections.Uint64Key, sdk.AddressKeyAsIndexKey(sdk.AccAddressKey)), codec.CollValue[v1.Vote](/docs/sdk/v0.50/documentation/module-system/cdc)), / nolint: staticcheck / sdk.AddressKeyAsIndexKey is needed to retain state compatibility + ProposalID: collections.NewSequence(sb, types.ProposalIDKey, "proposal_id"), + Proposals: collections.NewMap(sb, types.ProposalsKeyPrefix, "proposals", collections.Uint64Key, codec.CollValue[v1.Proposal](/docs/sdk/v0.50/documentation/module-system/cdc)), + ActiveProposalsQueue: collections.NewMap(sb, types.ActiveProposalQueuePrefix, "active_proposals_queue", collections.PairKeyCodec(sdk.TimeKey, collections.Uint64Key), collections.Uint64Value), / sdk.TimeKey is needed to retain state compatibility + InactiveProposalsQueue: collections.NewMap(sb, types.InactiveProposalQueuePrefix, "inactive_proposals_queue", collections.PairKeyCodec(sdk.TimeKey, collections.Uint64Key), collections.Uint64Value), / sdk.TimeKey is needed to retain 
state compatibility + VotingPeriodProposals: collections.NewMap(sb, types.VotingPeriodProposalKeyPrefix, "voting_period_proposals", collections.Uint64Key, collections.BytesValue), +} + +schema, err := sb.Build() + if err != nil { + panic(err) +} + +k.Schema = schema + return k +} + +/ Hooks gets the hooks for governance *Keeper { + func (k *Keeper) + +Hooks() + +types.GovHooks { + if k.hooks == nil { + / return a no-op implementation if no hooks are set + return types.MultiGovHooks{ +} + +} + +return k.hooks +} + +/ SetHooks sets the hooks for governance +func (k *Keeper) + +SetHooks(gh types.GovHooks) *Keeper { + if k.hooks != nil { + panic("cannot set governance hooks twice") +} + +k.hooks = gh + + return k +} + +/ SetLegacyRouter sets the legacy router for governance +func (k *Keeper) + +SetLegacyRouter(router v1beta1.Router) { + / It is vital to seal the governance proposal router here as to not allow + / further handlers to be registered after the keeper is created since this + / could create invalid or non-deterministic behavior. + router.Seal() + +k.legacyRouter = router +} + +/ Logger returns a module-specific logger. 
+func (k Keeper) + +Logger(ctx context.Context) + +log.Logger { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +return sdkCtx.Logger().With("module", "x/"+types.ModuleName) +} + +/ Router returns the gov keeper's router +func (k Keeper) + +Router() + +baseapp.MessageRouter { + return k.router +} + +/ LegacyRouter returns the gov keeper's legacy router +func (k Keeper) + +LegacyRouter() + +v1beta1.Router { + return k.legacyRouter +} + +/ GetGovernanceAccount returns the governance ModuleAccount +func (k Keeper) + +GetGovernanceAccount(ctx context.Context) + +sdk.ModuleAccountI { + return k.authKeeper.GetModuleAccount(ctx, types.ModuleName) +} + +/ ModuleAccountAddress returns gov module account address +func (k Keeper) + +ModuleAccountAddress() + +sdk.AccAddress { + return k.authKeeper.GetModuleAddress(types.ModuleName) +} + +/ assertMetadataLength returns an error if given metadata length +/ is greater than a pre-defined MaxMetadataLen. +func (k Keeper) + +assertMetadataLength(metadata string) + +error { + if metadata != "" && uint64(len(metadata)) > k.config.MaxMetadataLen { + return types.ErrMetadataTooLong.Wrapf("got metadata with length %d", len(metadata)) +} + +return nil +} +``` + +In order to only test `x/gov`, we mock the [expected keepers](https://docs.cosmos.network/v0.46/building-modules/keeper.html#type-definition) and instantiate the `Keeper` with the mocked dependencies. 
Note that we may need to configure the mocked dependencies to return the expected values: + +```go expandable +package keeper_test + +import ( + + "fmt" + "testing" + "github.com/stretchr/testify/require" + "cosmossdk.io/math" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttime "github.com/cometbft/cometbft/types/time" + "github.com/golang/mock/gomock" + + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil" + "github.com/cosmos/cosmos-sdk/testutil/testdata" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + disttypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +var ( + _, _, addr = testdata.KeyTestPubAddr() + +govAcct = authtypes.NewModuleAddress(types.ModuleName) + +distAcct = authtypes.NewModuleAddress(disttypes.ModuleName) + +TestProposal = getTestProposal() +) + +/ getTestProposal creates and returns a test proposal message. +func getTestProposal() []sdk.Msg { + legacyProposalMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Title", "description"), authtypes.NewModuleAddress(types.ModuleName).String()) + if err != nil { + panic(err) +} + +return []sdk.Msg{ + banktypes.NewMsgSend(govAcct, addr, sdk.NewCoins(sdk.NewCoin("stake", math.NewInt(1000)))), + legacyProposalMsg, +} +} + +/ setupGovKeeper creates a govKeeper as well as all its dependencies. 
+func setupGovKeeper(t *testing.T) ( + *keeper.Keeper, + *govtestutil.MockAccountKeeper, + *govtestutil.MockBankKeeper, + *govtestutil.MockStakingKeeper, + *govtestutil.MockDistributionKeeper, + moduletestutil.TestEncodingConfig, + sdk.Context, +) { + key := storetypes.NewKVStoreKey(types.StoreKey) + storeService := runtime.NewKVStoreService(key) + testCtx := testutil.DefaultContextWithDB(t, key, storetypes.NewTransientStoreKey("transient_test")) + ctx := testCtx.Ctx.WithBlockHeader(cmtproto.Header{ + Time: cmttime.Now() +}) + encCfg := moduletestutil.MakeTestEncodingConfig() + +v1.RegisterInterfaces(encCfg.InterfaceRegistry) + +v1beta1.RegisterInterfaces(encCfg.InterfaceRegistry) + +banktypes.RegisterInterfaces(encCfg.InterfaceRegistry) + + / Create MsgServiceRouter, but don't populate it before creating the gov + / keeper. + msr := baseapp.NewMsgServiceRouter() + + / gomock initializations + ctrl := gomock.NewController(t) + acctKeeper := govtestutil.NewMockAccountKeeper(ctrl) + bankKeeper := govtestutil.NewMockBankKeeper(ctrl) + stakingKeeper := govtestutil.NewMockStakingKeeper(ctrl) + distributionKeeper := govtestutil.NewMockDistributionKeeper(ctrl) + +acctKeeper.EXPECT().GetModuleAddress(types.ModuleName).Return(govAcct).AnyTimes() + +acctKeeper.EXPECT().GetModuleAddress(disttypes.ModuleName).Return(distAcct).AnyTimes() + +acctKeeper.EXPECT().GetModuleAccount(gomock.Any(), types.ModuleName).Return(authtypes.NewEmptyModuleAccount(types.ModuleName)).AnyTimes() + +acctKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes() + +trackMockBalances(bankKeeper, distributionKeeper) + +stakingKeeper.EXPECT().TokensFromConsensusPower(ctx, gomock.Any()).DoAndReturn(func(ctx sdk.Context, power int64) + +math.Int { + return sdk.TokensFromConsensusPower(power, math.NewIntFromUint64(1000000)) +}).AnyTimes() + +stakingKeeper.EXPECT().BondDenom(ctx).Return("stake").AnyTimes() + +stakingKeeper.EXPECT().IterateBondedValidatorsByPower(gomock.Any(), 
gomock.Any()).AnyTimes() + +stakingKeeper.EXPECT().IterateDelegations(gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes() + +stakingKeeper.EXPECT().TotalBondedTokens(gomock.Any()).Return(math.NewInt(10000000)).AnyTimes() + +distributionKeeper.EXPECT().FundCommunityPool(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).AnyTimes() + + / Gov keeper initializations + govKeeper := keeper.NewKeeper(encCfg.Codec, storeService, acctKeeper, bankKeeper, stakingKeeper, distributionKeeper, msr, types.DefaultConfig(), govAcct.String()) + +require.NoError(t, govKeeper.ProposalID.Set(ctx, 1)) + govRouter := v1beta1.NewRouter() / Also register legacy gov handlers to test them too. + govRouter.AddRoute(types.RouterKey, v1beta1.ProposalHandler) + +govKeeper.SetLegacyRouter(govRouter) + err := govKeeper.Params.Set(ctx, v1.DefaultParams()) + +require.NoError(t, err) + + / Register all handlers for the MegServiceRouter. + msr.SetInterfaceRegistry(encCfg.InterfaceRegistry) + +v1.RegisterMsgServer(msr, keeper.NewMsgServerImpl(govKeeper)) + +banktypes.RegisterMsgServer(msr, nil) / Nil is fine here as long as we never execute the proposal's Msgs. + + return govKeeper, acctKeeper, bankKeeper, stakingKeeper, distributionKeeper, encCfg, ctx +} + +/ trackMockBalances sets up expected calls on the Mock BankKeeper, and also +/ locally tracks accounts balances (not modules balances). +func trackMockBalances(bankKeeper *govtestutil.MockBankKeeper, distributionKeeper *govtestutil.MockDistributionKeeper) { + balances := make(map[string]sdk.Coins) + +balances[distAcct.String()] = sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, math.NewInt(0))) + + / We don't track module account balances. 
+ bankKeeper.EXPECT().MintCoins(gomock.Any(), minttypes.ModuleName, gomock.Any()).AnyTimes() + +bankKeeper.EXPECT().BurnCoins(gomock.Any(), types.ModuleName, gomock.Any()).AnyTimes() + +bankKeeper.EXPECT().SendCoinsFromModuleToModule(gomock.Any(), minttypes.ModuleName, types.ModuleName, gomock.Any()).AnyTimes() + + / But we do track normal account balances. + bankKeeper.EXPECT().SendCoinsFromAccountToModule(gomock.Any(), gomock.Any(), types.ModuleName, gomock.Any()).DoAndReturn(func(_ sdk.Context, sender sdk.AccAddress, _ string, coins sdk.Coins) + +error { + newBalance, negative := balances[sender.String()].SafeSub(coins...) + if negative { + return fmt.Errorf("not enough balance") +} + +balances[sender.String()] = newBalance + return nil +}).AnyTimes() + +bankKeeper.EXPECT().SendCoinsFromModuleToAccount(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, module string, rcpt sdk.AccAddress, coins sdk.Coins) + +error { + balances[rcpt.String()] = balances[rcpt.String()].Add(coins...) + +return nil +}).AnyTimes() + +bankKeeper.EXPECT().GetAllBalances(gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, addr sdk.AccAddress) + +sdk.Coins { + return balances[addr.String()] +}).AnyTimes() + +bankKeeper.EXPECT().GetBalance(gomock.Any(), gomock.Any(), sdk.DefaultBondDenom).DoAndReturn(func(_ sdk.Context, addr sdk.AccAddress, _ string) + +sdk.Coin { + balances := balances[addr.String()] + for _, balance := range balances { + if balance.Denom == sdk.DefaultBondDenom { + return balance +} + +} + +return sdk.NewCoin(sdk.DefaultBondDenom, math.NewInt(0)) +}).AnyTimes() + +distributionKeeper.EXPECT().FundCommunityPool(gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, coins sdk.Coins, sender sdk.AccAddress) + +error { + / sender balance + newBalance, negative := balances[sender.String()].SafeSub(coins...) 
+ if negative { + return fmt.Errorf("not enough balance") +} + +balances[sender.String()] = newBalance + / receiver balance + balances[distAcct.String()] = balances[distAcct.String()].Add(coins...) + +return nil +}).AnyTimes() +} +``` + +This allows us to test the `x/gov` module without having to import other modules. We can then create unit tests using the newly created `Keeper` instance.
+ +```go expandable +package keeper_test + +import ( + + "testing" + "cosmossdk.io/collections" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + + sdkmath "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +var address1 = "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r" + +type KeeperTestSuite struct { + suite.Suite + + cdc codec.Codec + ctx sdk.Context + govKeeper *keeper.Keeper + acctKeeper *govtestutil.MockAccountKeeper + bankKeeper *govtestutil.MockBankKeeper + stakingKeeper *govtestutil.MockStakingKeeper + distKeeper *govtestutil.MockDistributionKeeper + queryClient v1.QueryClient + legacyQueryClient v1beta1.QueryClient + addrs []sdk.AccAddress + msgSrvr v1.MsgServer + legacyMsgSrvr v1beta1.MsgServer +} + +func (suite *KeeperTestSuite) + +SetupSuite() { + suite.reset() +} + +func (suite *KeeperTestSuite) + +reset() { + govKeeper, acctKeeper, bankKeeper, stakingKeeper, distKeeper, encCfg, ctx := setupGovKeeper(suite.T()) + + / Populate the gov account with some coins, as the TestProposal we have + / is a MsgSend from the gov account. 
+ coins := sdk.NewCoins(sdk.NewCoin("stake", sdkmath.NewInt(100000))) + err := bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, coins) + +suite.NoError(err) + +err = bankKeeper.SendCoinsFromModuleToModule(ctx, minttypes.ModuleName, types.ModuleName, coins) + +suite.NoError(err) + queryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) + +v1.RegisterQueryServer(queryHelper, keeper.NewQueryServer(govKeeper)) + legacyQueryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) + +v1beta1.RegisterQueryServer(legacyQueryHelper, keeper.NewLegacyQueryServer(govKeeper)) + queryClient := v1.NewQueryClient(queryHelper) + legacyQueryClient := v1beta1.NewQueryClient(legacyQueryHelper) + +suite.ctx = ctx + suite.govKeeper = govKeeper + suite.acctKeeper = acctKeeper + suite.bankKeeper = bankKeeper + suite.stakingKeeper = stakingKeeper + suite.distKeeper = distKeeper + suite.cdc = encCfg.Codec + suite.queryClient = queryClient + suite.legacyQueryClient = legacyQueryClient + suite.msgSrvr = keeper.NewMsgServerImpl(suite.govKeeper) + +suite.legacyMsgSrvr = keeper.NewLegacyMsgServerImpl(govAcct.String(), suite.msgSrvr) + +suite.addrs = simtestutil.AddTestAddrsIncremental(bankKeeper, stakingKeeper, ctx, 3, sdkmath.NewInt(30000000)) + +suite.acctKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes() +} + +func TestIncrementProposalNumber(t *testing.T) { + govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t) + +authKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes() + ac := address.NewBech32Codec("cosmos") + +addrBz, err := ac.StringToBytes(address1) + +require.NoError(t, err) + tp := TestProposal + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", 
"summary", addrBz, true) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, true) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + +proposal6, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + +require.Equal(t, uint64(6), proposal6.Id) +} + +func TestProposalQueues(t *testing.T) { + govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t) + ac := address.NewBech32Codec("cosmos") + +addrBz, err := ac.StringToBytes(address1) + +require.NoError(t, err) + +authKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes() + + / create test proposals + tp := TestProposal + proposal, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + +has, err := govKeeper.InactiveProposalsQueue.Has(ctx, collections.Join(*proposal.DepositEndTime, proposal.Id)) + +require.NoError(t, err) + +require.True(t, has) + +require.NoError(t, govKeeper.ActivateVotingPeriod(ctx, proposal)) + +proposal, err = govKeeper.Proposals.Get(ctx, proposal.Id) + +require.Nil(t, err) + +has, err = govKeeper.ActiveProposalsQueue.Has(ctx, collections.Join(*proposal.VotingEndTime, proposal.Id)) + +require.NoError(t, err) + +require.True(t, has) +} + +func TestKeeperTestSuite(t *testing.T) { + suite.Run(t, new(KeeperTestSuite)) +} +``` + +## Integration Tests + +Integration tests are at the second level of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html). +In the SDK, we locate our integration tests under [`/tests/integrations`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/integration). + +The goal of these integration tests is to test how a component interacts with other dependencies. Compared to unit tests, integration tests do not mock dependencies. 
Instead, they use the direct dependencies of the component. This also differs from end-to-end tests, which test the component within a full application.
+
+Integration tests interact with the tested module via the defined `Msg` and `Query` services. The result of the test can be verified by checking the state of the application, the emitted events, or the response. It is advised to combine two of these methods to verify the result of the test.
+
+The SDK provides small helpers for quickly setting up an integration test. These helpers can be found at [`testutil/integration`](https://github.com/cosmos/cosmos-sdk/blob/main/testutil/integration).
+
+### Example
+
+```go expandable
+package integration_test
+
+import (
+
+ "fmt"
+ "io"
+
+ cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+ "github.com/google/go-cmp/cmp"
+ "cosmossdk.io/core/appmodule"
+ "cosmossdk.io/log"
+ storetypes "cosmossdk.io/store/types"
+
+ addresscodec "github.com/cosmos/cosmos-sdk/codec/address"
+ "github.com/cosmos/cosmos-sdk/runtime"
+ "github.com/cosmos/cosmos-sdk/testutil/integration"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil"
+ "github.com/cosmos/cosmos-sdk/x/auth"
+ authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+ authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+ "github.com/cosmos/cosmos-sdk/x/mint"
+ mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper"
+ minttypes "github.com/cosmos/cosmos-sdk/x/mint/types"
+)
+
+/ Example shows how to use the integration test framework to test the integration of SDK modules.
+/ Panics are used in this example, but in a real test case, you should use the testing.T object and assertions.
+func Example() { + / in this example we are testing the integration of the following modules: + / - mint, which directly depends on auth, bank and staking + encodingCfg := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}, mint.AppModuleBasic{ +}) + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey, minttypes.StoreKey) + authority := authtypes.NewModuleAddress("gov").String() + + / replace the logger by testing values in a real test case (e.g. log.NewTestLogger(t)) + logger := log.NewNopLogger() + cms := integration.CreateMultiStore(keys, logger) + newCtx := sdk.NewContext(cms, cmtproto.Header{ +}, true, logger) + accountKeeper := authkeeper.NewAccountKeeper( + encodingCfg.Codec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}}, + addresscodec.NewBech32Codec("cosmos"), + "cosmos", + authority, + ) + + / subspace is nil because we don't test params (which is legacy anyway) + authModule := auth.NewAppModule(encodingCfg.Codec, accountKeeper, authsims.RandomGenesisAccounts, nil) + + / here bankkeeper and staking keeper is nil because we are not testing them + / subspace is nil because we don't test params (which is legacy anyway) + mintKeeper := mintkeeper.NewKeeper(encodingCfg.Codec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), nil, accountKeeper, nil, authtypes.FeeCollectorName, authority) + mintModule := mint.NewAppModule(encodingCfg.Codec, mintKeeper, accountKeeper, nil, nil) + + / create the application and register all the modules from the previous step + integrationApp := integration.NewIntegrationApp( + newCtx, + logger, + keys, + encodingCfg.Codec, + map[string]appmodule.AppModule{ + authtypes.ModuleName: authModule, + minttypes.ModuleName: mintModule, +}, + ) + + / register the message and query servers + authtypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), authkeeper.NewMsgServerImpl(accountKeeper)) + 
+minttypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), mintkeeper.NewMsgServerImpl(mintKeeper)) + +minttypes.RegisterQueryServer(integrationApp.QueryHelper(), mintkeeper.NewQueryServerImpl(mintKeeper)) + params := minttypes.DefaultParams() + +params.BlocksPerYear = 10000 + + / now we can use the application to test a mint message + result, err := integrationApp.RunMsg(&minttypes.MsgUpdateParams{ + Authority: authority, + Params: params, +}) + if err != nil { + panic(err) +} + + / in this example the result is an empty response, a nil check is enough + / in other cases, it is recommended to check the result value. + if result == nil { + panic(fmt.Errorf("unexpected nil result")) +} + + / we now check the result + resp := minttypes.MsgUpdateParamsResponse{ +} + +err = encodingCfg.Codec.Unmarshal(result.Value, &resp) + if err != nil { + panic(err) +} + sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context()) + + / we should also check the state of the application + got, err := mintKeeper.Params.Get(sdkCtx) + if err != nil { + panic(err) +} + if diff := cmp.Diff(got, params); diff != "" { + panic(diff) +} + +fmt.Println(got.BlocksPerYear) + / Output: 10000 +} + +/ ExampleOneModule shows how to use the integration test framework to test the integration of a single module. +/ That module has no dependency on other modules. +func Example_oneModule() { + / in this example we are testing the integration of the auth module: + encodingCfg := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}) + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey) + authority := authtypes.NewModuleAddress("gov").String() + + / replace the logger by testing values in a real test case (e.g. 
log.NewTestLogger(t)) + logger := log.NewLogger(io.Discard) + cms := integration.CreateMultiStore(keys, logger) + newCtx := sdk.NewContext(cms, cmtproto.Header{ +}, true, logger) + accountKeeper := authkeeper.NewAccountKeeper( + encodingCfg.Codec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}}, + addresscodec.NewBech32Codec("cosmos"), + "cosmos", + authority, + ) + + / subspace is nil because we don't test params (which is legacy anyway) + authModule := auth.NewAppModule(encodingCfg.Codec, accountKeeper, authsims.RandomGenesisAccounts, nil) + + / create the application and register all the modules from the previous step + integrationApp := integration.NewIntegrationApp( + newCtx, + logger, + keys, + encodingCfg.Codec, + map[string]appmodule.AppModule{ + authtypes.ModuleName: authModule, +}, + ) + + / register the message and query servers + authtypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), authkeeper.NewMsgServerImpl(accountKeeper)) + params := authtypes.DefaultParams() + +params.MaxMemoCharacters = 1000 + + / now we can use the application to test a mint message + result, err := integrationApp.RunMsg(&authtypes.MsgUpdateParams{ + Authority: authority, + Params: params, +}, + / this allows to the begin and end blocker of the module before and after the message + integration.WithAutomaticFinalizeBlock(), + / this allows to commit the state after the message + integration.WithAutomaticCommit(), + ) + if err != nil { + panic(err) +} + + / verify that the begin and end blocker were called + / NOTE: in this example, we are testing auth, which doesn't have any begin or end blocker + / so verifying the block height is enough + if integrationApp.LastBlockHeight() != 2 { + panic(fmt.Errorf("expected block height to be 2, got %d", integrationApp.LastBlockHeight())) +} + + / in this example the result is an empty response, a nil check is enough + / in 
other cases, it is recommended to check the result value.
+ if result == nil {
+ panic(fmt.Errorf("unexpected nil result"))
+}
+
+ / we now check the result
+ resp := authtypes.MsgUpdateParamsResponse{
+}
+
+err = encodingCfg.Codec.Unmarshal(result.Value, &resp)
+ if err != nil {
+ panic(err)
+}
+ sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context())
+
+ / we should also check the state of the application
+ got := accountKeeper.GetParams(sdkCtx)
+ if diff := cmp.Diff(got, params); diff != "" {
+ panic(diff)
+}
+
+fmt.Println(got.MaxMemoCharacters)
+ / Output: 1000
+}
+```
+
+## Deterministic and Regression Tests
+
+Tests are written for queries in the Cosmos SDK that have the `module_query_safe` Protobuf annotation.
+
+Each query is tested using two methods:
+
+* Use property-based testing with the [`rapid`](https://pkg.go.dev/pgregory.net/rapid@v0.5.3) library. The property tested is that the query response and gas consumption are the same across 1000 query calls.
+* Regression tests are written with hardcoded responses and gas, and verify that they don't change across 1000 calls and between SDK patch versions.
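The determinism property from the first bullet can be sketched without any SDK machinery. The following standalone program is a minimal illustration, not the real helper: `queryResult`, `balanceQuery`, and `assertDeterministic` are hypothetical stand-ins for a query handler and for the loop that `testdata.DeterministicIterations` runs against an actual gRPC query client.

```go
package main

import "fmt"

// queryResult pairs a query response with the gas consumed serving it.
// (Hypothetical type for illustration only.)
type queryResult struct {
	response string
	gas      uint64
}

// balanceQuery stands in for a module_query_safe query handler; a real test
// would call the module's query client instead.
func balanceQuery(addr string) queryResult {
	return queryResult{response: addr + ":10stake", gas: 1087}
}

// assertDeterministic calls the query n times and verifies that neither the
// response nor the gas consumption ever varies between calls.
func assertDeterministic(query func(string) queryResult, addr string, n int) error {
	first := query(addr)
	for i := 1; i < n; i++ {
		if got := query(addr); got != first {
			return fmt.Errorf("iteration %d: got %+v, want %+v", i, got, first)
		}
	}
	return nil
}

func main() {
	if err := assertDeterministic(balanceQuery, "cosmos1exampleaddr", 1000); err != nil {
		panic(err)
	}
	fmt.Println("deterministic over 1000 calls")
}
```

The regression variant additionally pins the expected gas to a hardcoded value, so a change in gas consumption between SDK patch versions fails the test.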
+ +Here's an example of regression tests: + +```go expandable +package keeper_test + +import ( + + "testing" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "gotest.tools/v3/assert" + "pgregory.net/rapid" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/integration" + "github.com/cosmos/cosmos-sdk/testutil/testdata" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktestutil "github.com/cosmos/cosmos-sdk/x/bank/testutil" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" +) + +var ( + denomRegex = sdk.DefaultCoinDenomRegex() + +addr1 = sdk.MustAccAddressFromBech32("cosmos139f7kncmglres2nf3h4hc4tade85ekfr8sulz5") + +coin1 = sdk.NewCoin("denom", sdk.NewInt(10)) + +metadataAtom = banktypes.Metadata{ + Description: "The native staking token of the Cosmos Hub.", + DenomUnits: []*banktypes.DenomUnit{ + { + Denom: "uatom", + Exponent: 0, + Aliases: []string{"microatom" +}, +}, + { + Denom: "atom", + Exponent: 6, + Aliases: []string{"ATOM" +}, +}, +}, + Base: "uatom", + Display: "atom", +} +) + +type deterministicFixture struct { + ctx sdk.Context + bankKeeper keeper.BaseKeeper + queryClient banktypes.QueryClient +} + +func initDeterministicFixture(t *testing.T) *deterministicFixture { + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey, 
banktypes.StoreKey) + cdc := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}, bank.AppModuleBasic{ +}).Codec + logger := log.NewTestLogger(t) + cms := integration.CreateMultiStore(keys, logger) + newCtx := sdk.NewContext(cms, cmtproto.Header{ +}, true, logger) + authority := authtypes.NewModuleAddress("gov") + maccPerms := map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}, +} + accountKeeper := authkeeper.NewAccountKeeper( + cdc, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + sdk.Bech32MainPrefix, + authority.String(), + ) + blockedAddresses := map[string]bool{ + accountKeeper.GetAuthority(): false, +} + bankKeeper := keeper.NewBaseKeeper( + cdc, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + accountKeeper, + blockedAddresses, + authority.String(), + log.NewNopLogger(), + ) + authModule := auth.NewAppModule(cdc, accountKeeper, authsims.RandomGenesisAccounts, nil) + bankModule := bank.NewAppModule(cdc, bankKeeper, accountKeeper, nil) + integrationApp := integration.NewIntegrationApp(newCtx, logger, keys, cdc, authModule, bankModule) + sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context()) + + / Register MsgServer and QueryServer + banktypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), keeper.NewMsgServerImpl(bankKeeper)) + +banktypes.RegisterQueryServer(integrationApp.QueryHelper(), keeper.NewQuerier(&bankKeeper)) + qr := integrationApp.QueryHelper() + queryClient := banktypes.NewQueryClient(qr) + f := deterministicFixture{ + ctx: sdkCtx, + bankKeeper: bankKeeper, + queryClient: queryClient, +} + +return &f +} + +func fundAccount(f *deterministicFixture, addr sdk.AccAddress, coin ...sdk.Coin) { + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, sdk.NewCoins(coin...)) + +assert.NilError(&testing.T{ +}, err) +} + +func getCoin(rt *rapid.T) + +sdk.Coin { + return sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + 
sdk.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) +} + +func TestGRPCQueryBalance(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + coin := getCoin(rt) + +fundAccount(f, addr, coin) + req := banktypes.NewQueryBalanceRequest(addr, coin.GetDenom()) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Balance, 0, true) +}) + +fundAccount(f, addr1, coin1) + req := banktypes.NewQueryBalanceRequest(addr1, coin1.GetDenom()) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Balance, 1087, false) +} + +func TestGRPCQueryAllBalances(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + numCoins := rapid.IntRange(1, 10).Draw(rt, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := getCoin(rt) + + / NewCoins sorts the denoms + coins = sdk.NewCoins(append(coins, coin)...) +} + +fundAccount(f, addr, coins...) + req := banktypes.NewQueryAllBalancesRequest(addr, testdata.PaginationGenerator(rt, uint64(numCoins)).Draw(rt, "pagination"), false) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.AllBalances, 0, true) +}) + coins := sdk.NewCoins( + sdk.NewCoin("stake", sdk.NewInt(10)), + sdk.NewCoin("denom", sdk.NewInt(100)), + ) + +fundAccount(f, addr1, coins...) + req := banktypes.NewQueryAllBalancesRequest(addr1, nil, false) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.AllBalances, 357, false) +} + +func TestGRPCQuerySpendableBalances(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + + / Denoms must be unique, otherwise sdk.NewCoins will panic. 
+ denoms := rapid.SliceOfNDistinct(rapid.StringMatching(denomRegex), 1, 10, rapid.ID[string]).Draw(rt, "denoms") + coins := make(sdk.Coins, 0, len(denoms)) + for _, denom := range denoms { + coin := sdk.NewCoin( + denom, + sdk.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + + / NewCoins sorts the denoms + coins = sdk.NewCoins(append(coins, coin)...) +} + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, coins) + +assert.NilError(t, err) + req := banktypes.NewQuerySpendableBalancesRequest(addr, testdata.PaginationGenerator(rt, uint64(len(denoms))).Draw(rt, "pagination")) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SpendableBalances, 0, true) +}) + coins := sdk.NewCoins( + sdk.NewCoin("stake", sdk.NewInt(10)), + sdk.NewCoin("denom", sdk.NewInt(100)), + ) + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr1, coins) + +assert.NilError(t, err) + req := banktypes.NewQuerySpendableBalancesRequest(addr1, nil) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SpendableBalances, 2032, false) +} + +func TestGRPCQueryTotalSupply(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +res, err := f.queryClient.TotalSupply(f.ctx, &banktypes.QueryTotalSupplyRequest{ +}) + +assert.NilError(t, err) + initialSupply := res.GetSupply() + +rapid.Check(t, func(rt *rapid.T) { + numCoins := rapid.IntRange(1, 3).Draw(rt, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + sdk.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + +coins = coins.Add(coin) +} + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, coins)) + +initialSupply = initialSupply.Add(coins...) 
+ req := &banktypes.QueryTotalSupplyRequest{ + Pagination: testdata.PaginationGenerator(rt, uint64(len(initialSupply))).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.TotalSupply, 0, true) +}) + +f = initDeterministicFixture(t) / reset + coins := sdk.NewCoins( + sdk.NewCoin("foo", sdk.NewInt(10)), + sdk.NewCoin("bar", sdk.NewInt(100)), + ) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, coins)) + req := &banktypes.QueryTotalSupplyRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.TotalSupply, 150, false) +} + +func TestGRPCQueryTotalSupplyOf(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + sdk.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, sdk.NewCoins(coin))) + req := &banktypes.QuerySupplyOfRequest{ + Denom: coin.GetDenom() +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SupplyOf, 0, true) +}) + coin := sdk.NewCoin("bar", sdk.NewInt(100)) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, sdk.NewCoins(coin))) + req := &banktypes.QuerySupplyOfRequest{ + Denom: coin.GetDenom() +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SupplyOf, 1021, false) +} + +func TestGRPCQueryParams(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + enabledStatus := banktypes.SendEnabled{ + Denom: rapid.StringMatching(denomRegex).Draw(rt, "denom"), + Enabled: rapid.Bool().Draw(rt, "status"), +} + params := banktypes.Params{ + SendEnabled: []*banktypes.SendEnabled{&enabledStatus +}, + DefaultSendEnabled: rapid.Bool().Draw(rt, "send"), +} + +f.bankKeeper.SetParams(f.ctx, params) + req := &banktypes.QueryParamsRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, 
f.queryClient.Params, 0, true) +}) + enabledStatus := banktypes.SendEnabled{ + Denom: "denom", + Enabled: true, +} + params := banktypes.Params{ + SendEnabled: []*banktypes.SendEnabled{&enabledStatus +}, + DefaultSendEnabled: false, +} + +f.bankKeeper.SetParams(f.ctx, params) + req := &banktypes.QueryParamsRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Params, 1003, false) +} + +func createAndReturnMetadatas(t *rapid.T, count int) []banktypes.Metadata { + denomsMetadata := make([]banktypes.Metadata, 0, count) + for i := 0; i < count; i++ { + denom := rapid.StringMatching(denomRegex).Draw(t, "denom") + aliases := rapid.SliceOf(rapid.String()).Draw(t, "aliases") + / In the GRPC server code, empty arrays are returned as nil + if len(aliases) == 0 { + aliases = nil +} + metadata := banktypes.Metadata{ + Description: rapid.StringN(1, 100, 100).Draw(t, "desc"), + DenomUnits: []*banktypes.DenomUnit{ + { + Denom: denom, + Exponent: rapid.Uint32().Draw(t, "exponent"), + Aliases: aliases, +}, +}, + Base: denom, + Display: denom, + Name: rapid.String().Draw(t, "name"), + Symbol: rapid.String().Draw(t, "symbol"), + URI: rapid.String().Draw(t, "uri"), + URIHash: rapid.String().Draw(t, "uri-hash"), +} + +denomsMetadata = append(denomsMetadata, metadata) +} + +return denomsMetadata +} + +func TestGRPCDenomsMetadata(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + count := rapid.IntRange(1, 3).Draw(rt, "count") + denomsMetadata := createAndReturnMetadatas(rt, count) + +assert.Assert(t, len(denomsMetadata) == count) + for i := 0; i < count; i++ { + f.bankKeeper.SetDenomMetaData(f.ctx, denomsMetadata[i]) +} + req := &banktypes.QueryDenomsMetadataRequest{ + Pagination: testdata.PaginationGenerator(rt, uint64(count)).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomsMetadata, 0, true) +}) + +f = initDeterministicFixture(t) / reset + + 
f.bankKeeper.SetDenomMetaData(f.ctx, metadataAtom) + req := &banktypes.QueryDenomsMetadataRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomsMetadata, 660, false) +} + +func TestGRPCDenomMetadata(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + denomMetadata := createAndReturnMetadatas(rt, 1) + +assert.Assert(t, len(denomMetadata) == 1) + +f.bankKeeper.SetDenomMetaData(f.ctx, denomMetadata[0]) + req := &banktypes.QueryDenomMetadataRequest{ + Denom: denomMetadata[0].Base, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomMetadata, 0, true) +}) + +f.bankKeeper.SetDenomMetaData(f.ctx, metadataAtom) + req := &banktypes.QueryDenomMetadataRequest{ + Denom: metadataAtom.Base, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomMetadata, 1300, false) +} + +func TestGRPCSendEnabled(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + allDenoms := []string{ +} + +rapid.Check(t, func(rt *rapid.T) { + count := rapid.IntRange(0, 10).Draw(rt, "count") + denoms := make([]string, 0, count) + for i := 0; i < count; i++ { + coin := banktypes.SendEnabled{ + Denom: rapid.StringMatching(denomRegex).Draw(rt, "denom"), + Enabled: rapid.Bool().Draw(rt, "enabled-status"), +} + +f.bankKeeper.SetSendEnabled(f.ctx, coin.Denom, coin.Enabled) + +denoms = append(denoms, coin.Denom) +} + +allDenoms = append(allDenoms, denoms...) 
+ req := &banktypes.QuerySendEnabledRequest{ + Denoms: denoms, + / Pagination is only taken into account when `denoms` is an empty array + Pagination: testdata.PaginationGenerator(rt, uint64(len(allDenoms))).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SendEnabled, 0, true) +}) + +coin1 := banktypes.SendEnabled{ + Denom: "falsecoin", + Enabled: false, +} + +coin2 := banktypes.SendEnabled{ + Denom: "truecoin", + Enabled: true, +} + +f.bankKeeper.SetSendEnabled(f.ctx, coin1.Denom, false) + +f.bankKeeper.SetSendEnabled(f.ctx, coin2.Denom, true) + req := &banktypes.QuerySendEnabledRequest{ + Denoms: []string{ + coin1.GetDenom(), coin2.GetDenom() +}, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SendEnabled, 4063, false) +} + +func TestGRPCDenomOwners(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + denom := rapid.StringMatching(denomRegex).Draw(rt, "denom") + numAddr := rapid.IntRange(1, 10).Draw(rt, "number-address") + for i := 0; i < numAddr; i++ { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + coin := sdk.NewCoin( + denom, + sdk.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, sdk.NewCoins(coin)) + +assert.NilError(t, err) +} + req := &banktypes.QueryDenomOwnersRequest{ + Denom: denom, + Pagination: testdata.PaginationGenerator(rt, uint64(numAddr)).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomOwners, 0, true) +}) + denomOwners := []*banktypes.DenomOwner{ + { + Address: "cosmos1qg65a9q6k2sqq7l3ycp428sqqpmqcucgzze299", + Balance: coin1, +}, + { + Address: "cosmos1qglnsqgpq48l7qqzgs8qdshr6fh3gqq9ej3qut", + Balance: coin1, +}, +} + for i := 0; i < len(denomOwners); i++ { + addr, err := sdk.AccAddressFromBech32(denomOwners[i].Address) + +assert.NilError(t, err) + +err = banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, 
sdk.NewCoins(coin1))
+
+assert.NilError(t, err)
+}
+ req := &banktypes.QueryDenomOwnersRequest{
+ Denom: coin1.GetDenom(),
+}
+
+testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomOwners, 2516, false)
+}
+```
+
+## Simulations
+
+Simulations also use a minimal application, built with [`depinject`](/docs/sdk/v0.50/documentation/module-system/depinject):
+
+
+You can also use the `AppConfig` `configurator` for creating an `AppConfig` [inline](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/slashing/app_test.go#L54-L62). There is no difference between the two approaches; use whichever you prefer.
+
+
+The following is an example of `x/gov` simulations:
+
+```go expandable
+package simulation_test
+
+import (
+
+ "fmt"
+ "math/rand"
+ "testing"
+ "time"
+ "cosmossdk.io/depinject"
+ "cosmossdk.io/log"
+ abci "github.com/cometbft/cometbft/abci/types"
+ "github.com/cosmos/gogoproto/proto"
+ "github.com/stretchr/testify/require"
+ "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/runtime"
+ "github.com/cosmos/cosmos-sdk/testutil/configurator"
+ simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
+ _ "github.com/cosmos/cosmos-sdk/x/auth"
+ authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+ _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config"
+ _ "github.com/cosmos/cosmos-sdk/x/bank"
+ bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper"
+ "github.com/cosmos/cosmos-sdk/x/bank/testutil"
+ _ "github.com/cosmos/cosmos-sdk/x/consensus"
+ _ "github.com/cosmos/cosmos-sdk/x/distribution"
+ dk "github.com/cosmos/cosmos-sdk/x/distribution/keeper"
+ _ "github.com/cosmos/cosmos-sdk/x/gov"
+ "github.com/cosmos/cosmos-sdk/x/gov/keeper"
+ "github.com/cosmos/cosmos-sdk/x/gov/simulation"
+ "github.com/cosmos/cosmos-sdk/x/gov/types"
+ v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1"
+
"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +var ( + _ simtypes.WeightedProposalMsg = MockWeightedProposals{ +} + _ simtypes.WeightedProposalContent = MockWeightedProposals{ +} /nolint:staticcheck / testing legacy code path +) + +type MockWeightedProposals struct { + n int +} + +func (m MockWeightedProposals) + +AppParamsKey() + +string { + return fmt.Sprintf("AppParamsKey-%d", m.n) +} + +func (m MockWeightedProposals) + +DefaultWeight() + +int { + return m.n +} + +func (m MockWeightedProposals) + +MsgSimulatorFn() + +simtypes.MsgSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +sdk.Msg { + return nil +} +} + +func (m MockWeightedProposals) + +ContentSimulatorFn() + +simtypes.ContentSimulatorFn { /nolint:staticcheck / testing legacy code path + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +simtypes.Content { /nolint:staticcheck / testing legacy code path + return v1beta1.NewTextProposal( + fmt.Sprintf("title-%d: %s", m.n, simtypes.RandStringOfLength(r, 100)), + fmt.Sprintf("description-%d: %s", m.n, simtypes.RandStringOfLength(r, 4000)), + ) +} +} + +func mockWeightedProposalMsg(n int) []simtypes.WeightedProposalMsg { + wpc := make([]simtypes.WeightedProposalMsg, n) + for i := 0; i < n; i++ { + wpc[i] = MockWeightedProposals{ + i +} + +} + +return wpc +} + +func mockWeightedLegacyProposalContent(n int) []simtypes.WeightedProposalContent { /nolint:staticcheck / testing legacy code path + wpc := make([]simtypes.WeightedProposalContent, n) /nolint:staticcheck / testing legacy code path + for i := 0; i < n; i++ { + wpc[i] = MockWeightedProposals{ + i +} + +} + +return wpc +} + +/ TestWeightedOperations tests the weights of the operations. 
+func TestWeightedOperations(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + ctx.WithChainID("test-chain") + appParams := make(simtypes.AppParams) + weightesOps := simulation.WeightedOperations(appParams, suite.TxConfig, suite.AccountKeeper, + suite.BankKeeper, suite.GovKeeper, mockWeightedProposalMsg(3), mockWeightedLegacyProposalContent(1), + ) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accs := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + expected := []struct { + weight int + opMsgRoute string + opMsgName string +}{ + { + simulation.DefaultWeightMsgDeposit, types.ModuleName, simulation.TypeMsgDeposit +}, + { + simulation.DefaultWeightMsgVote, types.ModuleName, simulation.TypeMsgVote +}, + { + simulation.DefaultWeightMsgVoteWeighted, types.ModuleName, simulation.TypeMsgVoteWeighted +}, + { + simulation.DefaultWeightMsgCancelProposal, types.ModuleName, simulation.TypeMsgCancelProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {1, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {2, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, +} + +require.Equal(t, len(weightesOps), len(expected), "number of operations should be the same") + for i, w := range weightesOps { + operationMsg, _, err := w.Op()(r, app.BaseApp, ctx, accs, ctx.ChainID()) + +require.NoError(t, err) + + / the following checks are very much dependent from the ordering of the output given + / by WeightedOperations. 
if the ordering in WeightedOperations changes some tests + / will fail + require.Equal(t, expected[i].weight, w.Weight(), "weight should be the same") + +require.Equal(t, expected[i].opMsgRoute, operationMsg.Route, "route should be the same") + +require.Equal(t, expected[i].opMsgName, operationMsg.Name, "operation Msg name should be the same") +} +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgSubmitProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.MsgSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "47841094stake", msg.InitialDeposit[0].String()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. 
+func TestSimulateMsgSubmitLegacyProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgSubmitLegacyProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.ContentSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +var msgLegacyContent v1.MsgExecLegacyContent + err = proto.Unmarshal(msg.Messages[0].Value, &msgLegacyContent) + +require.NoError(t, err) + +var textProposal v1beta1.TextProposal + err = proto.Unmarshal(msgLegacyContent.Content.Value, &textProposal) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1p8wcgrjr4pjju90xg6u9cgq55dxwq8j7u4x9a0", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "25166256stake", msg.InitialDeposit[0].String()) + +require.Equal(t, "title-3: ZBSpYuLyYggwexjxusrBqDOTtGTOWeLrQKjLxzIivHSlcxgdXhhuTSkuxKGLwQvuyNhYFmBZHeAerqyNEUzXPFGkqEGqiQWIXnku", + textProposal.GetTitle()) + +require.Equal(t, "description-3: 
NJWzHdBNpAXKJPHWQdrGYcAHSctgVlqwqHoLfHsXUdStwfefwzqLuKEhmMyYLdbZrcPgYqjNHxPexsruwEGStAneKbWkQDDIlCWBLSiAASNhZqNFlPtfqPJoxKsgMdzjWqLWdqKQuJqWPMvwPQWZUtVMOTMYKJbfdlZsjdsomuScvDmbDkgRualsxDvRJuCAmPOXitIbcyWsKGSdrEunFAOdmXnsuyFVgJqEjbklvmwrUlsxjRSfKZxGcpayDdgoFcnVSutxjRgOSFzPwidAjubMncNweqpbxhXGchpZUxuFDOtpnhNUycJICRYqsPhPSCjPTWZFLkstHWJxvdPEAyEIxXgLwbNOjrgzmaujiBABBIXvcXpLrbcEWNNQsbjvgJFgJkflpRohHUutvnaUqoopuKjTDaemDeSdqbnOzcfJpcTuAQtZoiLZOoAIlboFDAeGmSNwkvObPRvRWQgWkGkxwtPauYgdkmypLjbqhlHJIQTntgWjXwZdOyYEdQRRLfMSdnxqppqUofqLbLQDUjwKVKfZJUJQPsWIPwIVaSTrmKskoAhvmZyJgeRpkaTfGgrJzAigcxtfshmiDCFkuiluqtMOkidknnTBtumyJYlIsWLnCQclqdVmikUoMOPdPWwYbJxXyqUVicNxFxyqJTenNblyyKSdlCbiXxUiYUiMwXZASYfvMDPFgxniSjWaZTjHkqlJvtBsXqwPpyVxnJVGFWhfSxgOcduoxkiopJvFjMmFabrGYeVtTXLhxVUEiGwYUvndjFGzDVntUvibiyZhfMQdMhgsiuysLMiePBNXifRLMsSmXPkwlPloUbJveCvUlaalhZHuvdkCnkSHbMbmOnrfEGPwQiACiPlnihiaOdbjPqPiTXaHDoJXjSlZmltGqNHHNrcKdlFSCdmVOuvDcBLdSklyGJmcLTbSFtALdGlPkqqecJrpLCXNPWefoTJNgEJlyMEPneVaxxduAAEqQpHWZodWyRkDAxzyMnFMcjSVqeRXLqsNyNtQBbuRvunZflWSbbvXXdkyLikYqutQhLPONXbvhcQZJPSWnOulqQaXmbfFxAkqfYeseSHOQidHwbcsOaMnSrrmGjjRmEMQNuknupMxJiIeVjmgZvbmjPIQTEhQFULQLBMPrxcFPvBinaOPYWGvYGRKxLZdwamfRQQFngcdSlvwjfaPbURasIsGJVHtcEAxnIIrhSriiXLOlbEBLXFElXJFGxHJczRBIxAuPKtBisjKBwfzZFagdNmjdwIRvwzLkFKWRTDPxJCmpzHUcrPiiXXHnOIlqNVoGSXZewdnCRhuxeYGPVTfrNTQNOxZmxInOazUYNTNDgzsxlgiVEHPKMfbesvPHUqpNkUqbzeuzfdrsuLDpKHMUbBMKczKKWOdYoIXoPYtEjfOnlQLoGnbQUCuERdEFaptwnsHzTJDsuZkKtzMpFaZobynZdzNydEeJJHDYaQcwUxcqvwfWwNUsCiLvkZQiSfzAHftYgAmVsXgtmcYgTqJIawstRYJrZdSxlfRiqTufgEQVambeZZmaAyRQbcmdjVUZZCgqDrSeltJGXPMgZnGDZqISrGDOClxXCxMjmKqEPwKHoOfOeyGmqWqihqjINXLqnyTesZePQRqaWDQNqpLgNrAUKulklmckTijUltQKuWQDwpLmDyxLppPVMwsmBIpOwQttYFMjgJQZLYFPmxWFLIeZihkRNnkzoypBICIxgEuYsVWGIGRbbxqVasYnstWomJnHwmtOhAFSpttRYYzBmyEtZXiCthvKvWszTXDbiJbGXMcrYpKAgvUVFtdKUfvdMfhAryctklUCEdjetjuGNfJjajZtvzdYaqInKtFPPLYmRaXPdQzxdSQfmZDEVHlHGEGNSPRFJuIfKLLfUmnHxHnRjmzQPNlqrXgifUdzAGKVabYqvcDeYoTYgPsBUqehrBhmQUgTvDnsdpuhUoxskDdppTsYMcnDIPSwKIqhXDCIxOuXrywahvV
avvHkPuaenjLmEbMgrkrQLHEAwrhHkPRNvonNQKqprqOFVZKAtpRSpvQUxMoXCMZLSSbnLEFsjVfANdQNQVwTmGxqVjVqRuxREAhuaDrFgEZpYKhwWPEKBevBfsOIcaZKyykQafzmGPLRAKDtTcJxJVgiiuUkmyMYuDUNEUhBEdoBLJnamtLmMJQgmLiUELIhLpiEvpOXOvXCPUeldLFqkKOwfacqIaRcnnZvERKRMCKUkMABbDHytQqQblrvoxOZkwzosQfDKGtIdfcXRJNqlBNwOCWoQBcEWyqrMlYZIAXYJmLfnjoJepgSFvrgajaBAIksoyeHqgqbGvpAstMIGmIhRYGGNPRIfOQKsGoKgxtsidhTaAePRCBFqZgPDWCIkqOJezGVkjfYUCZTlInbxBXwUAVRsxHTQtJFnnpmMvXDYCVlEmnZBKhmmxQOIQzxFWpJQkQoSAYzTEiDWEOsVLNrbfzeHFRyeYATakQQWmFDLPbVMCJcWjFGJjfqCoVzlbNNEsqxdSmNPjTjHYOkuEMFLkXYGaoJlraLqayMeCsTjWNRDPBywBJLAPVkGQqTwApVVwYAetlwSbzsdHWsTwSIcctkyKDuRWYDQikRqsKTMJchrliONJeaZIzwPQrNbTwxsGdwuduvibtYndRwpdsvyCktRHFalvUuEKMqXbItfGcNGWsGzubdPMYayOUOINjpcFBeESdwpdlTYmrPsLsVDhpTzoMegKrytNVZkfJRPuDCUXxSlSthOohmsuxmIZUedzxKmowKOdXTMcEtdpHaPWgIsIjrViKrQOCONlSuazmLuCUjLltOGXeNgJKedTVrrVCpWYWHyVrdXpKgNaMJVjbXxnVMSChdWKuZdqpisvrkBJPoURDYxWOtpjzZoOpWzyUuYNhCzRoHsMjmmWDcXzQiHIyjwdhPNwiPqFxeUfMVFQGImhykFgMIlQEoZCaRoqSBXTSWAeDumdbsOGtATwEdZlLfoBKiTvodQBGOEcuATWXfiinSjPmJKcWgQrTVYVrwlyMWhxqNbCMpIQNoSMGTiWfPTCezUjYcdWppnsYJihLQCqbNLRGgqrwHuIvsazapTpoPZIyZyeeSueJuTIhpHMEJfJpScshJubJGfkusuVBgfTWQoywSSliQQSfbvaHKiLnyjdSbpMkdBgXepoSsHnCQaYuHQqZsoEOmJCiuQUpJkmfyfbIShzlZpHFmLCsbknEAkKXKfRTRnuwdBeuOGgFbJLbDksHVapaRayWzwoYBEpmrlAxrUxYMUekKbpjPNfjUCjhbdMAnJmYQVZBQZkFVweHDAlaqJjRqoQPoOMLhyvYCzqEuQsAFoxWrzRnTVjStPadhsESlERnKhpEPsfDxNvxqcOyIulaCkmPdambLHvGhTZzysvqFauEgkFRItPfvisehFmoBhQqmkfbHVsgfHXDPJVyhwPllQpuYLRYvGodxKjkarnSNgsXoKEMlaSKxKdcVgvOkuLcfLFfdtXGTclqfPOfeoVLbqcjcXCUEBgAGplrkgsmIEhWRZLlGPGCwKWRaCKMkBHTAcypUrYjWwCLtOPVygMwMANGoQwFnCqFrUGMCRZUGJKTZIGPyldsifauoMnJPLTcDHmilcmahlqOELaAUYDBuzsVywnDQfwRLGIWozYaOAilMBcObErwgTDNGWnwQMUgFFSKtPDMEoEQCTKVREqrXZSGLqwTMcxHfWotDllNkIJPMbXzjDVjPOOjCFuIvTyhXKLyhUScOXvYthRXpPfKwMhptXaxIxgqBoUqzrWbaoLTVpQoottZyPFfNOoMioXHRuFwMRYUiKvcWPkrayyTLOCFJlAyslDameIuqVAuxErqFPEWIScKpBORIuZqoXlZuTvAjEdlEWDODFRregDTqGNoFBIHxvimmIZwLfFyKUfEWAnNBdtdzDmTPXtpHRGdIbuucfTjOygZsTxPjfweXhSUkMhPjMaxKlMIJMOXcnQfyzeOcbWwNbeH
", + textProposal.GetDescription()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgCancelProposal tests the normal scenario of a valid message of type TypeMsgCancelProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgCancelProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + / setup a proposal + proposer := accounts[0].Address + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "title", "summary", proposer, false) + +require.NoError(t, err) + +suite.GovKeeper.SetProposal(ctx, proposal) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgCancelProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgCancelProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, proposer.String(), msg.Proposer) + +require.Equal(t, simulation.TypeMsgCancelProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgDeposit tests 
the normal scenario of a valid message of type TypeMsgDeposit. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgDeposit(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +suite.GovKeeper.SetProposal(ctx, proposal) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgDeposit(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgDeposit + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Depositor) + +require.NotEqual(t, len(msg.Amount), 0) + +require.Equal(t, "560969stake", msg.Amount[0].String()) + +require.Equal(t, simulation.TypeMsgDeposit, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgVote tests the normal scenario of a valid message of type 
TypeMsgVote. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVote(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +suite.GovKeeper.ActivateVotingPeriod(ctx, proposal) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgVote(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVote + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.Equal(t, v1.OptionYes, msg.Option) + +require.Equal(t, simulation.TypeMsgVote, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgVoteWeighted tests the normal scenario of a valid message of type TypeMsgVoteWeighted. +/ Abnormal scenarios, where errors occur, are not tested here. 
+func TestSimulateMsgVoteWeighted(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "test", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +suite.GovKeeper.ActivateVotingPeriod(ctx, proposal) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgVoteWeighted(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVoteWeighted + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.True(t, len(msg.Options) >= 1) + +require.Equal(t, simulation.TypeMsgVoteWeighted, sdk.MsgTypeURL(&msg)) +} + +type suite struct { + TxConfig client.TxConfig + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + GovKeeper *keeper.Keeper + StakingKeeper *stakingkeeper.Keeper + DistributionKeeper dk.Keeper + App *runtime.App +} + +/ returns 
context and an app with updated mint keeper +func createTestSuite(t *testing.T, isCheckTx bool) (suite, sdk.Context) { + res := suite{ +} + +app, err := simtestutil.Setup( + depinject.Configs( + configurator.NewAppConfig( + configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.BankModule(), + configurator.StakingModule(), + configurator.ConsensusModule(), + configurator.DistributionModule(), + configurator.GovModule(), + ), + depinject.Supply(log.NewNopLogger()), + ), + &res.TxConfig, &res.AccountKeeper, &res.BankKeeper, &res.GovKeeper, &res.StakingKeeper, &res.DistributionKeeper) + +require.NoError(t, err) + ctx := app.BaseApp.NewContext(isCheckTx) + +res.App = app + return res, ctx +} + +func getTestingAccounts( + t *testing.T, r *rand.Rand, + accountKeeper authkeeper.AccountKeeper, bankKeeper bankkeeper.Keeper, stakingKeeper *stakingkeeper.Keeper, + ctx sdk.Context, n int, +) []simtypes.Account { + accounts := simtypes.RandomAccounts(r, n) + initAmt := stakingKeeper.TokensFromConsensusPower(ctx, 200) + initCoins := sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, initAmt)) + + / add coins to the accounts + for _, account := range accounts { + acc := accountKeeper.NewAccountWithAddress(ctx, account.Address) + +accountKeeper.SetAccount(ctx, acc) + +require.NoError(t, testutil.FundAccount(ctx, bankKeeper, account.Address, initCoins)) +} + +return accounts +} +``` + +```go expandable +package simulation_test + +import ( + + "fmt" + "math/rand" + "testing" + "time" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/gogoproto/proto" + "github.com/stretchr/testify/require" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/configurator" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + simtypes 
"github.com/cosmos/cosmos-sdk/types/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + _ "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/bank/testutil" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + _ "github.com/cosmos/cosmos-sdk/x/distribution" + dk "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + _ "github.com/cosmos/cosmos-sdk/x/gov" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + "github.com/cosmos/cosmos-sdk/x/gov/simulation" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +var ( + _ simtypes.WeightedProposalMsg = MockWeightedProposals{ +} + _ simtypes.WeightedProposalContent = MockWeightedProposals{ +} /nolint:staticcheck / testing legacy code path +) + +type MockWeightedProposals struct { + n int +} + +func (m MockWeightedProposals) + +AppParamsKey() + +string { + return fmt.Sprintf("AppParamsKey-%d", m.n) +} + +func (m MockWeightedProposals) + +DefaultWeight() + +int { + return m.n +} + +func (m MockWeightedProposals) + +MsgSimulatorFn() + +simtypes.MsgSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +sdk.Msg { + return nil +} +} + +func (m MockWeightedProposals) + +ContentSimulatorFn() + +simtypes.ContentSimulatorFn { /nolint:staticcheck / testing legacy code path + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) + +simtypes.Content { /nolint:staticcheck / testing legacy code path + return v1beta1.NewTextProposal( + fmt.Sprintf("title-%d: %s", m.n, simtypes.RandStringOfLength(r, 100)), + fmt.Sprintf("description-%d: %s", m.n, 
simtypes.RandStringOfLength(r, 4000)), + ) +} +} + +func mockWeightedProposalMsg(n int) []simtypes.WeightedProposalMsg { + wpc := make([]simtypes.WeightedProposalMsg, n) + for i := 0; i < n; i++ { + wpc[i] = MockWeightedProposals{ + i +} + +} + +return wpc +} + +func mockWeightedLegacyProposalContent(n int) []simtypes.WeightedProposalContent { /nolint:staticcheck / testing legacy code path + wpc := make([]simtypes.WeightedProposalContent, n) /nolint:staticcheck / testing legacy code path + for i := 0; i < n; i++ { + wpc[i] = MockWeightedProposals{ + i +} + +} + +return wpc +} + +/ TestWeightedOperations tests the weights of the operations. +func TestWeightedOperations(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + ctx.WithChainID("test-chain") + appParams := make(simtypes.AppParams) + weightesOps := simulation.WeightedOperations(appParams, suite.TxConfig, suite.AccountKeeper, + suite.BankKeeper, suite.GovKeeper, mockWeightedProposalMsg(3), mockWeightedLegacyProposalContent(1), + ) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accs := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + expected := []struct { + weight int + opMsgRoute string + opMsgName string +}{ + { + simulation.DefaultWeightMsgDeposit, types.ModuleName, simulation.TypeMsgDeposit +}, + { + simulation.DefaultWeightMsgVote, types.ModuleName, simulation.TypeMsgVote +}, + { + simulation.DefaultWeightMsgVoteWeighted, types.ModuleName, simulation.TypeMsgVoteWeighted +}, + { + simulation.DefaultWeightMsgCancelProposal, types.ModuleName, simulation.TypeMsgCancelProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {1, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {2, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, +} + +require.Equal(t, len(weightesOps), len(expected), "number of operations should be the same") 
+ for i, w := range weightesOps { + operationMsg, _, err := w.Op()(r, app.BaseApp, ctx, accs, ctx.ChainID()) + +require.NoError(t, err) + + / the following checks are very much dependent from the ordering of the output given + / by WeightedOperations. if the ordering in WeightedOperations changes some tests + / will fail + require.Equal(t, expected[i].weight, w.Weight(), "weight should be the same") + +require.Equal(t, expected[i].opMsgRoute, operationMsg.Route, "route should be the same") + +require.Equal(t, expected[i].opMsgName, operationMsg.Name, "operation Msg name should be the same") +} +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgSubmitProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.MsgSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "47841094stake", msg.InitialDeposit[0].String()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type 
TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitLegacyProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgSubmitLegacyProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.ContentSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +var msgLegacyContent v1.MsgExecLegacyContent + err = proto.Unmarshal(msg.Messages[0].Value, &msgLegacyContent) + +require.NoError(t, err) + +var textProposal v1beta1.TextProposal + err = proto.Unmarshal(msgLegacyContent.Content.Value, &textProposal) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1p8wcgrjr4pjju90xg6u9cgq55dxwq8j7u4x9a0", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "25166256stake", msg.InitialDeposit[0].String()) + +require.Equal(t, "title-3: ZBSpYuLyYggwexjxusrBqDOTtGTOWeLrQKjLxzIivHSlcxgdXhhuTSkuxKGLwQvuyNhYFmBZHeAerqyNEUzXPFGkqEGqiQWIXnku", + textProposal.GetTitle()) + +require.Equal(t, "description-3: 
NJWzHdBNpAXKJPHWQdrGYcAHSctgVlqwqHoLfHsXUdStwfefwzqLuKEhmMyYLdbZrcPgYqjNHxPexsruwEGStAneKbWkQDDIlCWBLSiAASNhZqNFlPtfqPJoxKsgMdzjWqLWdqKQuJqWPMvwPQWZUtVMOTMYKJbfdlZsjdsomuScvDmbDkgRualsxDvRJuCAmPOXitIbcyWsKGSdrEunFAOdmXnsuyFVgJqEjbklvmwrUlsxjRSfKZxGcpayDdgoFcnVSutxjRgOSFzPwidAjubMncNweqpbxhXGchpZUxuFDOtpnhNUycJICRYqsPhPSCjPTWZFLkstHWJxvdPEAyEIxXgLwbNOjrgzmaujiBABBIXvcXpLrbcEWNNQsbjvgJFgJkflpRohHUutvnaUqoopuKjTDaemDeSdqbnOzcfJpcTuAQtZoiLZOoAIlboFDAeGmSNwkvObPRvRWQgWkGkxwtPauYgdkmypLjbqhlHJIQTntgWjXwZdOyYEdQRRLfMSdnxqppqUofqLbLQDUjwKVKfZJUJQPsWIPwIVaSTrmKskoAhvmZyJgeRpkaTfGgrJzAigcxtfshmiDCFkuiluqtMOkidknnTBtumyJYlIsWLnCQclqdVmikUoMOPdPWwYbJxXyqUVicNxFxyqJTenNblyyKSdlCbiXxUiYUiMwXZASYfvMDPFgxniSjWaZTjHkqlJvtBsXqwPpyVxnJVGFWhfSxgOcduoxkiopJvFjMmFabrGYeVtTXLhxVUEiGwYUvndjFGzDVntUvibiyZhfMQdMhgsiuysLMiePBNXifRLMsSmXPkwlPloUbJveCvUlaalhZHuvdkCnkSHbMbmOnrfEGPwQiACiPlnihiaOdbjPqPiTXaHDoJXjSlZmltGqNHHNrcKdlFSCdmVOuvDcBLdSklyGJmcLTbSFtALdGlPkqqecJrpLCXNPWefoTJNgEJlyMEPneVaxxduAAEqQpHWZodWyRkDAxzyMnFMcjSVqeRXLqsNyNtQBbuRvunZflWSbbvXXdkyLikYqutQhLPONXbvhcQZJPSWnOulqQaXmbfFxAkqfYeseSHOQidHwbcsOaMnSrrmGjjRmEMQNuknupMxJiIeVjmgZvbmjPIQTEhQFULQLBMPrxcFPvBinaOPYWGvYGRKxLZdwamfRQQFngcdSlvwjfaPbURasIsGJVHtcEAxnIIrhSriiXLOlbEBLXFElXJFGxHJczRBIxAuPKtBisjKBwfzZFagdNmjdwIRvwzLkFKWRTDPxJCmpzHUcrPiiXXHnOIlqNVoGSXZewdnCRhuxeYGPVTfrNTQNOxZmxInOazUYNTNDgzsxlgiVEHPKMfbesvPHUqpNkUqbzeuzfdrsuLDpKHMUbBMKczKKWOdYoIXoPYtEjfOnlQLoGnbQUCuERdEFaptwnsHzTJDsuZkKtzMpFaZobynZdzNydEeJJHDYaQcwUxcqvwfWwNUsCiLvkZQiSfzAHftYgAmVsXgtmcYgTqJIawstRYJrZdSxlfRiqTufgEQVambeZZmaAyRQbcmdjVUZZCgqDrSeltJGXPMgZnGDZqISrGDOClxXCxMjmKqEPwKHoOfOeyGmqWqihqjINXLqnyTesZePQRqaWDQNqpLgNrAUKulklmckTijUltQKuWQDwpLmDyxLppPVMwsmBIpOwQttYFMjgJQZLYFPmxWFLIeZihkRNnkzoypBICIxgEuYsVWGIGRbbxqVasYnstWomJnHwmtOhAFSpttRYYzBmyEtZXiCthvKvWszTXDbiJbGXMcrYpKAgvUVFtdKUfvdMfhAryctklUCEdjetjuGNfJjajZtvzdYaqInKtFPPLYmRaXPdQzxdSQfmZDEVHlHGEGNSPRFJuIfKLLfUmnHxHnRjmzQPNlqrXgifUdzAGKVabYqvcDeYoTYgPsBUqehrBhmQUgTvDnsdpuhUoxskDdppTsYMcnDIPSwKIqhXDCIxOuXrywahvV
avvHkPuaenjLmEbMgrkrQLHEAwrhHkPRNvonNQKqprqOFVZKAtpRSpvQUxMoXCMZLSSbnLEFsjVfANdQNQVwTmGxqVjVqRuxREAhuaDrFgEZpYKhwWPEKBevBfsOIcaZKyykQafzmGPLRAKDtTcJxJVgiiuUkmyMYuDUNEUhBEdoBLJnamtLmMJQgmLiUELIhLpiEvpOXOvXCPUeldLFqkKOwfacqIaRcnnZvERKRMCKUkMABbDHytQqQblrvoxOZkwzosQfDKGtIdfcXRJNqlBNwOCWoQBcEWyqrMlYZIAXYJmLfnjoJepgSFvrgajaBAIksoyeHqgqbGvpAstMIGmIhRYGGNPRIfOQKsGoKgxtsidhTaAePRCBFqZgPDWCIkqOJezGVkjfYUCZTlInbxBXwUAVRsxHTQtJFnnpmMvXDYCVlEmnZBKhmmxQOIQzxFWpJQkQoSAYzTEiDWEOsVLNrbfzeHFRyeYATakQQWmFDLPbVMCJcWjFGJjfqCoVzlbNNEsqxdSmNPjTjHYOkuEMFLkXYGaoJlraLqayMeCsTjWNRDPBywBJLAPVkGQqTwApVVwYAetlwSbzsdHWsTwSIcctkyKDuRWYDQikRqsKTMJchrliONJeaZIzwPQrNbTwxsGdwuduvibtYndRwpdsvyCktRHFalvUuEKMqXbItfGcNGWsGzubdPMYayOUOINjpcFBeESdwpdlTYmrPsLsVDhpTzoMegKrytNVZkfJRPuDCUXxSlSthOohmsuxmIZUedzxKmowKOdXTMcEtdpHaPWgIsIjrViKrQOCONlSuazmLuCUjLltOGXeNgJKedTVrrVCpWYWHyVrdXpKgNaMJVjbXxnVMSChdWKuZdqpisvrkBJPoURDYxWOtpjzZoOpWzyUuYNhCzRoHsMjmmWDcXzQiHIyjwdhPNwiPqFxeUfMVFQGImhykFgMIlQEoZCaRoqSBXTSWAeDumdbsOGtATwEdZlLfoBKiTvodQBGOEcuATWXfiinSjPmJKcWgQrTVYVrwlyMWhxqNbCMpIQNoSMGTiWfPTCezUjYcdWppnsYJihLQCqbNLRGgqrwHuIvsazapTpoPZIyZyeeSueJuTIhpHMEJfJpScshJubJGfkusuVBgfTWQoywSSliQQSfbvaHKiLnyjdSbpMkdBgXepoSsHnCQaYuHQqZsoEOmJCiuQUpJkmfyfbIShzlZpHFmLCsbknEAkKXKfRTRnuwdBeuOGgFbJLbDksHVapaRayWzwoYBEpmrlAxrUxYMUekKbpjPNfjUCjhbdMAnJmYQVZBQZkFVweHDAlaqJjRqoQPoOMLhyvYCzqEuQsAFoxWrzRnTVjStPadhsESlERnKhpEPsfDxNvxqcOyIulaCkmPdambLHvGhTZzysvqFauEgkFRItPfvisehFmoBhQqmkfbHVsgfHXDPJVyhwPllQpuYLRYvGodxKjkarnSNgsXoKEMlaSKxKdcVgvOkuLcfLFfdtXGTclqfPOfeoVLbqcjcXCUEBgAGplrkgsmIEhWRZLlGPGCwKWRaCKMkBHTAcypUrYjWwCLtOPVygMwMANGoQwFnCqFrUGMCRZUGJKTZIGPyldsifauoMnJPLTcDHmilcmahlqOELaAUYDBuzsVywnDQfwRLGIWozYaOAilMBcObErwgTDNGWnwQMUgFFSKtPDMEoEQCTKVREqrXZSGLqwTMcxHfWotDllNkIJPMbXzjDVjPOOjCFuIvTyhXKLyhUScOXvYthRXpPfKwMhptXaxIxgqBoUqzrWbaoLTVpQoottZyPFfNOoMioXHRuFwMRYUiKvcWPkrayyTLOCFJlAyslDameIuqVAuxErqFPEWIScKpBORIuZqoXlZuTvAjEdlEWDODFRregDTqGNoFBIHxvimmIZwLfFyKUfEWAnNBdtdzDmTPXtpHRGdIbuucfTjOygZsTxPjfweXhSUkMhPjMaxKlMIJMOXcnQfyzeOcbWwNbeH
", + textProposal.GetDescription()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgCancelProposal tests the normal scenario of a valid message of type TypeMsgCancelProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgCancelProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + / setup a proposal + proposer := accounts[0].Address + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "title", "summary", proposer, false) + +require.NoError(t, err) + +suite.GovKeeper.SetProposal(ctx, proposal) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgCancelProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgCancelProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, proposer.String(), msg.Proposer) + +require.Equal(t, simulation.TypeMsgCancelProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgDeposit tests 
the normal scenario of a valid message of type TypeMsgDeposit. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgDeposit(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +suite.GovKeeper.SetProposal(ctx, proposal) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgDeposit(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgDeposit + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Depositor) + +require.NotEqual(t, len(msg.Amount), 0) + +require.Equal(t, "560969stake", msg.Amount[0].String()) + +require.Equal(t, simulation.TypeMsgDeposit, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgVote tests the normal scenario of a valid message of type 
TypeMsgVote. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVote(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +suite.GovKeeper.ActivateVotingPeriod(ctx, proposal) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgVote(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVote + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.Equal(t, v1.OptionYes, msg.Option) + +require.Equal(t, simulation.TypeMsgVote, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgVoteWeighted tests the normal scenario of a valid message of type TypeMsgVoteWeighted. +/ Abnormal scenarios, where errors occur, are not tested here. 
+func TestSimulateMsgVoteWeighted(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "test", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +suite.GovKeeper.ActivateVotingPeriod(ctx, proposal) + +app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + + / execute operation + op := simulation.SimulateMsgVoteWeighted(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVoteWeighted + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.True(t, len(msg.Options) >= 1) + +require.Equal(t, simulation.TypeMsgVoteWeighted, sdk.MsgTypeURL(&msg)) +} + +type suite struct { + TxConfig client.TxConfig + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + GovKeeper *keeper.Keeper + StakingKeeper *stakingkeeper.Keeper + DistributionKeeper dk.Keeper + App *runtime.App +} + +/ returns 
context and an app with updated mint keeper +func createTestSuite(t *testing.T, isCheckTx bool) (suite, sdk.Context) { + res := suite{ +} + +app, err := simtestutil.Setup( + depinject.Configs( + configurator.NewAppConfig( + configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.BankModule(), + configurator.StakingModule(), + configurator.ConsensusModule(), + configurator.DistributionModule(), + configurator.GovModule(), + ), + depinject.Supply(log.NewNopLogger()), + ), + &res.TxConfig, &res.AccountKeeper, &res.BankKeeper, &res.GovKeeper, &res.StakingKeeper, &res.DistributionKeeper) + +require.NoError(t, err) + ctx := app.BaseApp.NewContext(isCheckTx) + +res.App = app + return res, ctx +} + +func getTestingAccounts( + t *testing.T, r *rand.Rand, + accountKeeper authkeeper.AccountKeeper, bankKeeper bankkeeper.Keeper, stakingKeeper *stakingkeeper.Keeper, + ctx sdk.Context, n int, +) []simtypes.Account { + accounts := simtypes.RandomAccounts(r, n) + initAmt := stakingKeeper.TokensFromConsensusPower(ctx, 200) + initCoins := sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, initAmt)) + + / add coins to the accounts + for _, account := range accounts { + acc := accountKeeper.NewAccountWithAddress(ctx, account.Address) + +accountKeeper.SetAccount(ctx, acc) + +require.NoError(t, testutil.FundAccount(ctx, bankKeeper, account.Address, initCoins)) +} + +return accounts +} +``` + +## End-to-end Tests + +End-to-end tests are at the top of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html). +They must test the whole application flow, from the user perspective (for instance, CLI tests). They are located under [`/tests/e2e`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e). + +{/* @julienrbrt: makes more sense to use an app wired app to have 0 simapp dependencies */} +For that, the SDK is using `simapp` but you should use your own application (`appd`). 
+Here are some examples: + +* SDK E2E tests: [Link](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e). +* Cosmos Hub E2E tests: [Link](https://github.com/cosmos/gaia/tree/main/tests/e2e). +* Osmosis E2E tests: [Link](https://github.com/osmosis-labs/osmosis/tree/main/tests/e2e). + + +**warning** +The SDK is in the process of creating its E2E tests, as defined in [ADR-59](https://docs.cosmos.network/main/architecture/adr-059-test-scopes.html). This page will eventually be updated with better examples. + + +## Learn More + +Learn more about testing scope in [ADR-59](https://docs.cosmos.network/main/architecture/adr-059-test-scopes.html). diff --git a/docs/sdk/v0.50/documentation/module-system/upgrade.mdx b/docs/sdk/v0.50/documentation/module-system/upgrade.mdx new file mode 100644 index 00000000..e0d875cb --- /dev/null +++ b/docs/sdk/v0.50/documentation/module-system/upgrade.mdx @@ -0,0 +1,125 @@ +--- +title: Upgrading Modules +--- + + +**Synopsis** +[In-Place Store Migrations](/docs/sdk/next/documentation/operations/upgrade-advanced) allow your modules to upgrade to new versions that include breaking changes. This document outlines how to build modules to take advantage of this functionality. + + + +**Pre-requisite Readings** + +* [In-Place Store Migration](/docs/sdk/next/documentation/operations/upgrade-advanced) + + + +## Consensus Version + +Successful upgrades of existing modules require each `AppModule` to implement the function `ConsensusVersion() uint64`. + +* The versions must be hard-coded by the module developer. +* The initial version **must** be set to 1. + +Consensus versions serve as state-breaking versions of app modules and must be incremented when the module introduces breaking changes. + +## Registering Migrations + +To register the functionality that takes place during a module upgrade, you must register which migrations you want to take place. + +Migration registration takes place in the `Configurator` using the `RegisterMigration` method. 
The `AppModule` reference to the configurator is in the `RegisterServices` method. + +You can register one or more migrations. If you register more than one migration script, list the migrations in increasing order and ensure there are enough migrations that lead to the desired consensus version. For example, to migrate to version 3 of a module, register separate migrations for version 1 and version 2 as shown in the following example: + +```go +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + / --snip-- + cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) + +error { + / Perform in-place store migrations from ConsensusVersion 1 to 2. +}) + +cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) + +error { + / Perform in-place store migrations from ConsensusVersion 2 to 3. +}) +} +``` + +Since these migrations are functions that need access to a Keeper's store, use a wrapper around the keepers called `Migrator` as shown in this example: + +```go expandable +package keeper + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + v2 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v2" + v3 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v3" + v4 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v4" +) + +/ Migrator is a struct for handling in-place store migrations. +type Migrator struct { + keeper BaseKeeper + legacySubspace exported.Subspace +} + +/ NewMigrator returns a new Migrator. +func NewMigrator(keeper BaseKeeper, legacySubspace exported.Subspace) + +Migrator { + return Migrator{ + keeper: keeper, legacySubspace: legacySubspace +} +} + +/ Migrate1to2 migrates from version 1 to 2. +func (m Migrator) + +Migrate1to2(ctx sdk.Context) + +error { + return v2.MigrateStore(ctx, m.keeper.storeService, m.keeper.cdc) +} + +/ Migrate2to3 migrates x/bank storage from version 2 to 3. 
+func (m Migrator) + +Migrate2to3(ctx sdk.Context) + +error { + return v3.MigrateStore(ctx, m.keeper.storeService, m.keeper.cdc) +} + +/ Migrate3to4 migrates x/bank storage from version 3 to 4. +func (m Migrator) + +Migrate3to4(ctx sdk.Context) + +error { + return v4.MigrateStore(ctx, m.keeper.storeService, m.legacySubspace, m.keeper.cdc) +} +``` + +## Writing Migration Scripts + +To define the functionality that takes place during an upgrade, write a migration script and place the functions in a `migrations/` directory. For example, to write migration scripts for the bank module, place the functions in `x/bank/migrations/`. Use the recommended naming convention for these functions. For example, `v2bank` is the script that migrates the package `x/bank/migrations/v2`: + +```go +/ Migrating bank module from version 1 to 2 +func (m Migrator) + +Migrate1to2(ctx sdk.Context) + +error { + return v2bank.MigrateStore(ctx, m.keeper.storeKey) / v2bank is package `x/bank/migrations/v2`. +} +``` + +To see example code of changes that were implemented in a migration of balance keys, check out [migrateBalanceKeys](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/migrations/v2/store.go#L55-L76). For context, this code introduced migrations of the bank store that updated addresses to be prefixed by their length in bytes as outlined in [ADR-028](/docs/common/pages/adr-comprehensive#adr-028-public-key-addresses). diff --git a/docs/sdk/v0.50/documentation/operations/intro.mdx b/docs/sdk/v0.50/documentation/operations/intro.mdx new file mode 100644 index 00000000..405e061e --- /dev/null +++ b/docs/sdk/v0.50/documentation/operations/intro.mdx @@ -0,0 +1,13 @@ +--- +title: SDK Migrations +--- + +To smoothen the update to the latest stable release, the SDK includes a CLI command for hard-fork migrations (under the ` genesis migrate` subcommand). +Additionally, the SDK includes in-place migrations for its core modules. 
These in-place migrations are useful to migrate between major releases.
+
+* Hard-fork migrations are supported from the last major release to the current one.
+* [In-place module migrations](https://docs.cosmos.network/main/core/upgrade#overwriting-genesis-functions) are supported from the last two major releases to the current one.
+
+Migration from a version older than the last two major releases is not supported.
+
+When migrating from a previous version, refer to the [`UPGRADING.md`](/docs/sdk/v0.50/documentation/operations/upgrading) and the `CHANGELOG.md` of the version you are migrating to.
diff --git a/docs/sdk/v0.50/documentation/operations/packages/README.mdx b/docs/sdk/v0.50/documentation/operations/packages/README.mdx
new file mode 100644
index 00000000..71d607bb
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/packages/README.mdx
@@ -0,0 +1,41 @@
+---
+title: Packages
+description: >-
+  The Cosmos SDK is a collection of Go modules. This section provides
+  documentation on various packages that can be used when developing a Cosmos SDK
+  chain. It lists all standalone Go modules that are part of the Cosmos SDK.
+---
+
+The Cosmos SDK is a collection of Go modules. This section provides documentation on various packages that can be used when developing a Cosmos SDK chain.
+It lists all standalone Go modules that are part of the Cosmos SDK.
+
+
+For more information on SDK modules, see the [SDK Modules](https://docs.cosmos.network/main/modules) section.
+For more information on SDK tooling, see the [Tooling](https://docs.cosmos.network/main/tooling) section.
+ + +## Core + +* [Core](https://pkg.go.dev/cosmossdk.io/core) - Core library defining SDK interfaces ([ADR-063](/docs/common/pages/adr-comprehensive#adr-063-core-module-api)) +* [API](https://pkg.go.dev/cosmossdk.io/api) - API library containing generated SDK Pulsar API +* [Store](https://pkg.go.dev/cosmossdk.io/store) - Implementation of the Cosmos SDK store + +## State Management + +* [Collections](/docs/sdk/v0.50/documentation/operations/packages/collections) - State management library +* [ORM](/docs/sdk/v0.47/build/packages/orm) - State management library + +## Automation + +* [Depinject](/docs/sdk/v0.50/documentation/module-system/depinject) - Dependency injection framework +* [Client/v2](https://pkg.go.dev/cosmossdk.io/client/v2) - Library powering [AutoCLI](https://docs.cosmos.network/main/core/autocli) + +## Utilities + +* [Log](https://pkg.go.dev/cosmossdk.io/log) - Logging library +* [Errors](https://pkg.go.dev/cosmossdk.io/errors) - Error handling library +* [Math](https://pkg.go.dev/cosmossdk.io/math) - Math library for SDK arithmetic operations + +## Example + +* [SimApp](https://pkg.go.dev/cosmossdk.io/simapp) - SimApp is **the** sample Cosmos SDK chain. This package should not be imported in your application. diff --git a/docs/sdk/v0.50/documentation/operations/packages/collections.mdx b/docs/sdk/v0.50/documentation/operations/packages/collections.mdx new file mode 100644 index 00000000..f5260a09 --- /dev/null +++ b/docs/sdk/v0.50/documentation/operations/packages/collections.mdx @@ -0,0 +1,1276 @@ +--- +title: Collections +description: >- + Collections is a library meant to simplify the experience with respect to + module state handling. +--- + +Collections is a library meant to simplify the experience with respect to module state handling. + +Cosmos SDK modules handle their state using the `KVStore` interface. 
The problem with working with
+`KVStore` is that it forces you to think of state as raw byte key-value pairs, when in reality the majority of
+state consists of concrete Go objects (strings, ints, structs, etc.).
+
+Collections allows you to work with state as if it were made of normal Go objects, removing the need
+to think of your state as raw bytes in your code.
+
+It also allows you to migrate your existing state without state breakage that would otherwise force you into
+tedious and complex chain state migrations.
+
+## Installation
+
+To install collections in your cosmos-sdk chain project, run the following command:
+
+```shell
+go get cosmossdk.io/collections
+```
+
+## Core types
+
+Collections offers five different APIs to work with state, which are explored in the next sections:
+
+* `Map`: works with typed arbitrary KV pairings.
+* `KeySet`: works with just typed keys.
+* `Item`: works with just one typed value.
+* `Sequence`: a monotonically increasing number.
+* `IndexedMap`: combines `Map` and `KeySet` to provide a `Map` with indexing capabilities.
+
+## Preliminary components
+
+Before exploring the different collection types and their capabilities, it is necessary to introduce
+the three components that every collection shares. When instantiating a collection type via, for example,
+`collections.NewMap`/`collections.NewItem`/..., you will find yourself having to pass some common arguments.
+ +For example, in code: + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var AllowListPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + AllowList collections.KeySet[string] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + AllowList: collections.NewKeySet(sb, AllowListPrefix, "allow_list", collections.StringKey), +} +} +``` + +Let's analyse the shared arguments, what they do, and why we need them. + +### SchemaBuilder + +The first argument passed is the `SchemaBuilder` + +`SchemaBuilder` is a structure that keeps track of all the state of a module, it is not required by the collections +to deal with state but it offers a dynamic and reflective way for clients to explore a module's state. + +We instantiate a `SchemaBuilder` by passing it a function that given the modules store key returns the module's specific store. + +We then need to pass the schema builder to every collection type we instantiate in our keeper, in our case the `AllowList`. + +### Prefix + +The second argument passed to our `KeySet` is a `collections.Prefix`, a prefix represents a partition of the module's `KVStore` +where all the state of a specific collection will be saved. + +Since a module can have multiple collections, the following is expected: + +* module params will become a `collections.Item` +* the `AllowList` is a `collections.KeySet` + +We don't want a collection to write over the state of the other collection so we pass it a prefix, which defines a storage +partition owned by the collection. 
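The role of the prefix can be pictured with a tiny stdlib-only sketch. This is an illustration of the idea only, not collections' actual byte layout: every entry of a collection is stored under `prefix || encodedKey`, so collections with distinct prefixes own disjoint partitions of the module's flat keyspace.

```go
package main

import "fmt"

// storeKey is a hypothetical illustration (not collections' real encoding):
// a collection writes every entry under its own one-byte prefix, so two
// collections with distinct prefixes can never collide in the module KVStore.
func storeKey(prefix byte, key string) string {
	return string([]byte{prefix}) + key
}

func main() {
	flatStore := map[string]string{}

	const allowListPrefix = 0x00 // partition owned by the KeySet
	const paramsPrefix = 0x01    // partition owned by a params Item

	flatStore[storeKey(allowListPrefix, "alice")] = "" // KeySet entry: empty value
	flatStore[storeKey(paramsPrefix, "")] = "params"   // Item entry: constant (empty) key

	fmt.Println(len(flatStore)) // two distinct entries, disjoint partitions
}
```

This is also why two collections in the same module must never share (or byte-prefix) each other's prefix, as the rules below spell out.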
+
+If you already built modules, the prefix translates to the items you were creating in your `types/keys.go` file; for example: [Link](https://github.com/cosmos/cosmos-sdk/blob/main/x/feegrant/key.go#L27)
+
+your old:
+
+```go
+var (
+	/ FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+	/ - 0x00: allowance
+	FeeAllowanceKeyPrefix = []byte{0x00}
+
+	/ FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+	/ - 0x01:
+	FeeAllowanceQueueKeyPrefix = []byte{0x01}
+)
+```
+
+becomes:
+
+```go
+var (
+	/ FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+	/ - 0x00: allowance
+	FeeAllowanceKeyPrefix = collections.NewPrefix(0)
+
+	/ FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+	/ - 0x01:
+	FeeAllowanceQueueKeyPrefix = collections.NewPrefix(1)
+)
+```
+
+#### Rules
+
+`collections.NewPrefix` accepts either `uint8`, `string` or `[]byte`; it's good practice to use an always-increasing `uint8` for disk space efficiency.
+
+A collection **MUST NOT** share the same prefix as another collection in the same module, and a collection prefix **MUST NEVER** start with the same prefix as another. Examples:
+
+```go
+prefix1 := collections.NewPrefix("prefix")
+prefix2 := collections.NewPrefix("prefix") / THIS IS BAD!
+```
+
+```go
+prefix1 := collections.NewPrefix("a")
+prefix2 := collections.NewPrefix("aa") / prefix2 starts with the same bytes as prefix1: BAD!
+```
+
+### Human-Readable Name
+
+The third parameter we pass to a collection is a string, which is a human-readable name.
+It is needed to make the role of a collection understandable by clients that otherwise have no knowledge of
+what a module is storing in state.
+
+#### Rules
+
+Each collection in a module **MUST** have a unique humanised name.
+
+## Key and Value Codecs
+
+A collection is generic over the type you can use as keys or values.
This makes collections dumb, but it also means that hypothetically we can store anything
+that can be expressed as a Go type in a collection. We are not bound to any specific encoding (be it proto, JSON or anything else).
+
+So a collection needs to be given a way to understand how to convert your keys and values to bytes.
+This is achieved through `KeyCodec` and `ValueCodec`, which are arguments that you pass to your
+collections when you're instantiating them using the `collections.NewMap`/`collections.NewItem`/...
+instantiation functions.
+
+NOTE: Generally speaking you will never be required to implement your own `Key/ValueCodec`, as
+the SDK and collections libraries already come with default, safe and fast implementations of those.
+You might need to implement them only if you're migrating to collections and there are state layout incompatibilities.
+
+Let's explore an example:
+
+```go expandable
+package collections
+
+import (
+	"cosmossdk.io/collections"
+	storetypes "cosmossdk.io/store/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var IDsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema collections.Schema
+	IDs    collections.Map[string, uint64]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		IDs: collections.NewMap(sb, IDsPrefix, "ids", collections.StringKey, collections.Uint64Value),
+	}
+}
+```
+
+We're now instantiating a map where the key is `string` and the value is `uint64`.
+We already know the first three arguments of the `NewMap` function.
+
+The fourth parameter is our `KeyCodec`: we know that the `Map` has `string` as its key, so we pass it a `KeyCodec` that handles strings as keys.
+
+The fifth parameter is our `ValueCodec`: we know that the `Map` has a `uint64` as its value, so we pass it a `ValueCodec` that handles uint64.
+
+Collections already comes with all the required implementations for Go primitive types.
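To make the codec idea concrete, here is a drastically simplified, stdlib-only stand-in for what a `uint64` key codec does. The real `collections.Uint64Key` and the `KeyCodec` interface have more methods and may encode differently; this sketch only shows the bytes-in/bytes-out contract:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// uint64Key is an illustrative stand-in for a key codec: it turns a typed
// key into bytes and back. Big-endian is used so that byte-wise (store)
// ordering matches numeric ordering, which is what makes range iteration
// over numeric keys behave intuitively.
type uint64Key struct{}

func (uint64Key) Encode(v uint64) []byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, v)
	return b
}

func (uint64Key) Decode(b []byte) (uint64, error) {
	if len(b) != 8 {
		return 0, fmt.Errorf("invalid key length %d, want 8", len(b))
	}
	return binary.BigEndian.Uint64(b), nil
}

func main() {
	c := uint64Key{}
	v, err := c.Decode(c.Encode(42))
	fmt.Println(v, err)
}
```

The fixed 8-byte width is also why a `uint64` key, unlike a `string` key, cannot be meaningfully prefixed, as noted later in the iteration section.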
+
+Let's make another example, closer to what we build using the Cosmos SDK. Let's say we want
+to create a `collections.Map` that maps account addresses to their base account, i.e. map an `sdk.AccAddress` to an `authtypes.BaseAccount` (which is a proto message):
+
+```go expandable
+package collections
+
+import (
+	"cosmossdk.io/collections"
+	storetypes "cosmossdk.io/store/types"
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Accounts: collections.NewMap(sb, AccountsPrefix, "accounts",
+			sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc)),
+	}
+}
+```
+
+As we can see here, since our `collections.Map` maps `sdk.AccAddress` to `authtypes.BaseAccount`,
+we use `sdk.AccAddressKey`, which is the `KeyCodec` implementation for `AccAddress`, and we use `codec.CollValue` to
+encode our proto type `BaseAccount`.
+
+Generally speaking you will always find the respective key and value codecs for types in the `go.mod` path you're using
+to import that type. If you want to encode proto values, refer to the `codec.CollValue` function, which allows you
+to encode any type implementing the `proto.Message` interface.
+
+## Map
+
+We analyse the first and most important collection type, the `collections.Map`.
+This is the type that everything else builds on top of.
+
+### Use case
+
+A `collections.Map` is used to map arbitrary keys with arbitrary values.
+ +### Example + +It's easier to explain a `collections.Map` capabilities through an example: + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "fmt" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.50/documentation/operations/packages/cdc)), +} +} + +func (k Keeper) + +CreateAccount(ctx sdk.Context, addr sdk.AccAddress, account authtypes.BaseAccount) + +error { + has, err := k.Accounts.Has(ctx, addr) + if err != nil { + return err +} + if has { + return fmt.Errorf("account already exists: %s", addr) +} + +err = k.Accounts.Set(ctx, addr, account) + if err != nil { + return err +} + +return nil +} + +func (k Keeper) + +GetAccount(ctx sdk.Context, addr sdk.AccAddress) (authtypes.BaseAccount, error) { + acc, err := k.Accounts.Get(ctx, addr) + if err != nil { + return authtypes.BaseAccount{ +}, err +} + +return acc, nil +} + +func (k Keeper) + +RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) + +error { + err := k.Accounts.Remove(ctx, addr) + if err != nil { + return err +} + +return nil +} +``` + +#### Set method + +Set maps with the provided `AccAddress` (the key) to the `auth.BaseAccount` (the value). + +Under the hood the `collections.Map` will convert the key and value to bytes using the [key and value codec](/docs/sdk/v0.50/documentation/operations/packages/README#key-and-value-codecs). 
+It will prepend to our bytes key the [prefix](/docs/sdk/v0.50/documentation/operations/packages/README#prefix) and store it in the KVStore of the module. + +#### Has method + +The has method reports if the provided key exists in the store. + +#### Get method + +The get method accepts the `AccAddress` and returns the associated `auth.BaseAccount` if it exists, otherwise it errors. + +#### Remove method + +The remove method accepts the `AccAddress` and removes it from the store. It won't report errors +if it does not exist, to check for existence before removal use the `Has` method. + +#### Iteration + +Iteration has a separate section. + +## KeySet + +The second type of collection is `collections.KeySet`, as the word suggests it maintains +only a set of keys without values. + +#### Implementation curiosity + +A `collections.KeySet` is just a `collections.Map` with a `key` but no value. +The value internally is always the same and is represented as an empty byte slice `[]byte{}`. + +### Example + +As always we explore the collection type through an example: + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "fmt" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var ValidatorsSetPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + ValidatorsSet collections.KeySet[sdk.ValAddress] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + ValidatorsSet: collections.NewKeySet(sb, ValidatorsSetPrefix, "validators_set", sdk.ValAddressKey), +} +} + +func (k Keeper) + +AddValidator(ctx sdk.Context, validator sdk.ValAddress) + +error { + has, err := k.ValidatorsSet.Has(ctx, validator) + if err != nil { + return err +} + if has { + return fmt.Errorf("validator already in set: %s", validator) +} + +err = k.ValidatorsSet.Set(ctx, validator) + if err != nil { + return err +} + 
+return nil
+}
+
+func (k Keeper)
+
+RemoveValidator(ctx sdk.Context, validator sdk.ValAddress)
+
+error {
+	err := k.ValidatorsSet.Remove(ctx, validator)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+```
+
+The first difference we notice is that `KeySet` requires us to specify only one type parameter: the key (`sdk.ValAddress` in this case).
+The second difference we notice is that `KeySet` in its `NewKeySet` function does not require
+us to specify a `ValueCodec` but only a `KeyCodec`. This is because a `KeySet` only saves keys and not values.
+
+Let's explore the methods.
+
+#### Has method
+
+Has allows us to understand whether a key is present in the `collections.KeySet` or not; it functions in the same way as `collections.Map.Has`.
+
+#### Set method
+
+Set inserts the provided key in the `KeySet`.
+
+#### Remove method
+
+Remove removes the provided key from the `KeySet`. It does not error if the key does not exist; if an existence check before removal is required, couple it with the `Has` method.
+
+## Item
+
+The third type of collection is the `collections.Item`.
+It stores only one single item. It's useful, for example, for parameters, of which there is always exactly one instance in state.
+
+#### Implementation curiosity
+
+A `collections.Item` is just a `collections.Map` with no key but just a value.
+The key is the prefix of the collection!
+ +### Example + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +var ParamsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Params collections.Item[stakingtypes.Params] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Params: collections.NewItem(sb, ParamsPrefix, "params", codec.CollValue[stakingtypes.Params](/docs/sdk/v0.50/learn/advanced/encoding)), +} +} + +func (k Keeper) + +UpdateParams(ctx sdk.Context, params stakingtypes.Params) + +error { + err := k.Params.Set(ctx, params) + if err != nil { + return err +} + +return nil +} + +func (k Keeper) + +GetParams(ctx sdk.Context) (stakingtypes.Params, error) { + return k.Params.Get(ctx) +} +``` + +The first key difference we notice is that we specify only one type parameter, which is the value we're storing. +The second key difference is that we don't specify the `KeyCodec`, since we store only one item we already know the key +and the fact that it is constant. + +## Iteration + +One of the key features of the `KVStore` is iterating over keys. + +Collections which deal with keys (so `Map`, `KeySet` and `IndexedMap`) allow you to iterate +over keys in a safe and typed way. They all share the same API, the only difference being +that `KeySet` returns a different type of `Iterator` because `KeySet` only deals with keys. + + + +Every collection shares the same `Iterator` semantics. 
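Those shared semantics can be pictured with a toy, stdlib-only iterator that follows the same usage pattern (`Valid`/`Next`/`Key`/`Value`/`Close`). This is an illustrative sketch, not the library's actual interface:

```go
package main

import (
	"fmt"
	"sort"
)

// iterator is a toy with the usage pattern the collection iterators share.
// Keys come back in byte-wise sorted order, mirroring KVStore iteration.
type iterator struct {
	keys []string
	vals map[string]int
	pos  int
}

func newIterator(vals map[string]int) *iterator {
	keys := make([]string, 0, len(vals))
	for k := range vals {
		keys = append(keys, k)
	}
	sort.Strings(keys) // KVStore iteration is ordered by key bytes
	return &iterator{keys: keys, vals: vals}
}

func (it *iterator) Valid() bool  { return it.pos < len(it.keys) }
func (it *iterator) Next()        { it.pos++ }
func (it *iterator) Key() string  { return it.keys[it.pos] }
func (it *iterator) Value() int   { return it.vals[it.Key()] }
func (it *iterator) Close() error { return nil }

func main() {
	it := newIterator(map[string]int{"b": 2, "a": 1})
	defer it.Close()
	for ; it.Valid(); it.Next() {
		fmt.Println(it.Key(), it.Value())
	}
}
```

The `for ; it.Valid(); it.Next()` loop with a deferred `Close` is the same consumption shape used with the real collection iterators.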
+ + + +Let's have a look at the `Map.Iterate` method: + +```go +func (m Map[K, V]) + +Iterate(ctx context.Context, ranger Ranger[K]) (Iterator[K, V], error) +``` + +It accepts a `collections.Ranger[K]`, which is an API that instructs map on how to iterate over keys. +As always we don't need to implement anything here as `collections` already provides some generic `Ranger` implementers +that expose all you need to work with ranges. + +### Example + +We have a `collections.Map` that maps accounts using `uint64` IDs. + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts collections.Map[uint64, authtypes.BaseAccount] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", collections.Uint64Key, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.50/documentation/operations/packages/cdc)), +} +} + +func (k Keeper) + +GetAllAccounts(ctx sdk.Context) ([]authtypes.BaseAccount, error) { + / passing a nil Ranger equals to: iterate over every possible key + iter, err := k.Accounts.Iterate(ctx, nil) + if err != nil { + return nil, err +} + +accounts, err := iter.Values() + if err != nil { + return nil, err +} + +return accounts, err +} + +func (k Keeper) + +IterateAccountsBetween(ctx sdk.Context, start, end uint64) ([]authtypes.BaseAccount, error) { + / The collections.Range API offers a lot of capabilities + / like defining where the iteration starts or ends. + rng := new(collections.Range[uint64]). + StartInclusive(start). + EndExclusive(end). 
+		Descending()
+
+	iter, err := k.Accounts.Iterate(ctx, rng)
+	if err != nil {
+		return nil, err
+	}
+
+	accounts, err := iter.Values()
+	if err != nil {
+		return nil, err
+	}
+
+	return accounts, nil
+}
+
+func (k Keeper)
+
+IterateAccounts(ctx sdk.Context, do func(id uint64, acc authtypes.BaseAccount) (stop bool))
+
+error {
+	iter, err := k.Accounts.Iterate(ctx, nil)
+	if err != nil {
+		return err
+	}
+
+	defer iter.Close()
+	for ; iter.Valid(); iter.Next() {
+		kv, err := iter.KeyValue()
+		if err != nil {
+			return err
+		}
+		if do(kv.Key, kv.Value) {
+			break
+		}
+	}
+
+	return nil
+}
+```
+
+Let's analyse each method in the example and how it makes use of `Iterate` and the returned `Iterator` API.
+
+#### GetAllAccounts
+
+In `GetAllAccounts` we pass a nil `Ranger` to `Iterate`. This means that the returned `Iterator` will include
+all the existing keys within the collection.
+
+Then we use the `Values` method from the returned `Iterator` API to collect all the values into a slice.
+
+`Iterator` offers other methods such as `Keys()` to collect only the keys and not the values, and `KeyValues` to collect
+all the keys and values.
+
+#### IterateAccountsBetween
+
+Here we make use of the `collections.Range` helper to specialise our range.
+We make it start at one point through `StartInclusive` and end at the other with `EndExclusive`, then
+we instruct it to report results in reverse order through `Descending`.
+
+Then we pass the range instruction to `Iterate` and get an `Iterator`, which will contain only the results
+we specified in the range.
+
+Then we again use the `Values` method of the `Iterator` to collect all the results.
+
+`collections.Range` also offers a `Prefix` API, which is not applicable to all key types;
+for example, `uint64` cannot be prefixed because it is of constant size, but a `string` key
+can be prefixed.
+
+#### IterateAccounts
+
+Here we showcase how to lazily collect values from an Iterator.
+
+
+
+`Keys/Values/KeyValues` fully consume and close the `Iterator`; here we need to explicitly do a `defer iter.Close()` call.
+
+
+
+`Iterator` also exposes a `Value` and a `Key` method to collect only the current value or key, if collecting both is not needed.
+
+
+
+For this `callback` pattern, collections expose a `Walk` API.
+
+
+
+## Composite keys
+
+So far we've worked only with simple keys, like `uint64`, the account address, etc.
+There are more complex cases in which we need to deal with composite keys.
+
+A key is composite when it is composed of multiple keys; for example, bank balances are stored as the composite key
+`(AccAddress, string)`, where the first part is the address holding the coins and the second part is the denom.
+
+For example, let's say address `BOB` holds `10atom,15osmo`; this is how it is stored in state:
+
+```javascript
+(bob, atom) => 10
+(bob, osmo) => 15
+```
+
+This allows us to efficiently get a specific denom balance of an address, by simply `getting` `(address, denom)`, or to get all the balances
+of an address by prefixing over `(address)`.
+
+Let's see now how we can work with composite keys using collections.
+
+### Example
+
+In our example we will showcase how we can use collections when dealing with balances. Similar to bank,
+a balance is a mapping between `(address, denom) => math.Int`; the composite key in our case is `(address, denom)`.
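
Before wiring this into collections, the layout above can be sketched with the standard library alone: if composite keys are encoded so that the address sorts first, fetching all balances of one address is just a scan over keys sharing that prefix. The `"address/denom"` flattening and the `prefixScan` helper below are illustrative assumptions, not the actual encoding used by `collections`.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// prefixScan returns, in sorted (state) order, every key that starts with
// the given prefix - mimicking how a KV store iterates over a key range.
func prefixScan(keys []string, prefix string) []string {
	sort.Strings(keys)
	var out []string
	for _, k := range keys {
		if strings.HasPrefix(k, prefix) {
			out = append(out, k)
		}
	}
	return out
}

func main() {
	// Composite keys (address, denom) flattened as "address/denom".
	state := []string{"bob/atom", "alice/atom", "bob/osmo"}
	fmt.Println(prefixScan(state, "bob/")) // all of bob's balances, in key order
}
```

Because the store keeps keys sorted, the prefix scan touches only bob's contiguous slice of the key space, which is what makes the `(address)` prefix iteration cheap.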
+
+### Instantiation of a composite key collection
+
+```go expandable
+package collections
+
+import (
+
+	"cosmossdk.io/collections"
+	"cosmossdk.io/math"
+	storetypes "cosmossdk.io/store/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var BalancesPrefix = collections.NewPrefix(1)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey)
+
+Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+return Keeper{
+	Balances: collections.NewMap(
+		sb, BalancesPrefix, "balances",
+		collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey),
+		math.IntValue,
+	),
+}
+}
+```
+
+#### The Map Key definition
+
+To define a composite key of two elements we use the `collections.Pair` type:
+
+```go
+collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
+```
+
+`collections.Pair` defines a key composed of two other keys; in our case the first part is `sdk.AccAddress` and the second
+part is `string`.
+
+#### The Key Codec instantiation
+
+The arguments to instantiate the map are the same as before; the only thing that changes is how we instantiate
+the `KeyCodec`. Since this key is composed of two keys, we use `collections.PairKeyCodec`, which generates
+a `KeyCodec` composed of two key codecs: the first one encodes the first part of the key, the second one
+encodes the second part.
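
To make the "codec composed of two key codecs" idea concrete, here is a minimal, hypothetical pair codec built only on the standard library. Length-prefixing the first key is one way to make decoding unambiguous; the real `collections.PairKeyCodec` has its own encoding rules, so treat every name below as an illustration.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// KeyCodec turns a single key into bytes and back.
type KeyCodec[K any] interface {
	Encode(k K) []byte
	Decode(b []byte) K
}

// StringKey is a trivial codec for string keys.
type StringKey struct{}

func (StringKey) Encode(s string) []byte { return []byte(s) }
func (StringKey) Decode(b []byte) string { return string(b) }

// Pair is a composite key made of two parts.
type Pair[K1, K2 any] struct {
	K1 K1
	K2 K2
}

// PairCodec composes two codecs: the first part is length-prefixed so
// decoding knows where it ends, the second part takes the remaining bytes.
type PairCodec[K1, K2 any] struct {
	C1 KeyCodec[K1]
	C2 KeyCodec[K2]
}

func (p PairCodec[K1, K2]) Encode(k Pair[K1, K2]) []byte {
	b1 := p.C1.Encode(k.K1)
	out := binary.AppendUvarint(nil, uint64(len(b1)))
	out = append(out, b1...)
	return append(out, p.C2.Encode(k.K2)...)
}

func (p PairCodec[K1, K2]) Decode(b []byte) Pair[K1, K2] {
	n, read := binary.Uvarint(b)
	k1 := p.C1.Decode(b[read : read+int(n)])
	k2 := p.C2.Decode(b[read+int(n):])
	return Pair[K1, K2]{K1: k1, K2: k2}
}

func main() {
	codec := PairCodec[string, string]{C1: StringKey{}, C2: StringKey{}}
	key := Pair[string, string]{K1: "bob", K2: "atom"}
	fmt.Println(codec.Decode(codec.Encode(key))) // round-trips the composite key
}
```

Each half of the pair is handled by its own single-key codec, which is exactly the division of labour described above.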
+ +### Working with composite key collections + +Let's expand on the example we used before: + +```go expandable +var BalancesPrefix = collections.NewPrefix(1) + +type Keeper struct { + Schema collections.Schema + Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Balances: collections.NewMap( + sb, BalancesPrefix, "balances", + collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), + math.IntValue, + ), +} +} + +func (k Keeper) + +SetBalance(ctx sdk.Context, address sdk.AccAddress, denom string, amount math.Int) + +error { + key := collections.Join(address, denom) + +return k.Balances.Set(ctx, key, amount) +} + +func (k Keeper) + +GetBalance(ctx sdk.Context, address sdk.AccAddress, denom string) (math.Int, error) { + return k.Balances.Get(ctx, collections.Join(address, denom)) +} + +func (k Keeper) + +GetAllAddressBalances(ctx sdk.Context, address sdk.AccAddress) (sdk.Coins, error) { + balances := sdk.NewCoins() + rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/v0.50/documentation/operations/packages/address) + +iter, err := k.Balances.Iterate(ctx, rng) + if err != nil { + return nil, err +} + +kvs, err := iter.KeyValues() + if err != nil { + return nil, err +} + for _, kv := range kvs { + balances = balances.Add(sdk.NewCoin(kv.Key.K2(), kv.Value)) +} + +return balances, nil +} + +func (k Keeper) + +GetAllAddressBalancesBetween(ctx sdk.Context, address sdk.AccAddress, startDenom, endDenom string) (sdk.Coins, error) { + rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/v0.50/documentation/operations/packages/address). + StartInclusive(startDenom). + EndInclusive(endDenom) + +iter, err := k.Balances.Iterate(ctx, rng) + if err != nil { + return nil, err +} + ... 
+}
+```
+
+#### SetBalance
+
+Here we're setting the balance of an address for a specific denom.
+We use the `collections.Join` function to generate the composite key.
+`collections.Join` returns a `collections.Pair` (which is the key of our `collections.Map`).
+
+`collections.Pair` contains the two keys we have joined. It also exposes two methods: `K1` to fetch the first part of the
+key and `K2` to fetch the second part.
+
+As always, we use the `collections.Map.Set` method to map the composite key to our value (`math.Int` in this case).
+
+#### GetBalance
+
+To get a value in a composite key collection, we simply use `collections.Join` to compose the key.
+
+#### GetAllAddressBalances
+
+We use `collections.PrefixedPairRange` to iterate over all the keys starting with the provided address.
+Concretely, the iteration will report all the balances belonging to the provided address.
+
+First, we instantiate a `PrefixedPairRange`, which is a `Ranger` implementation that helps
+with `Pair` key iterations.
+
+```go
+rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/v0.50/documentation/operations/packages/address)
+```
+
+As we can see, here we're passing the type parameters of the `collections.Pair` because Go type inference
+with respect to generics is not as permissive as in other languages, so we need to explicitly state the types of the pair key.
+
+#### GetAllAddressBalancesBetween
+
+This showcases how we can further specialise our range to limit the results, by bounding
+the second part of the key (in our case the denoms, which are strings).
+
+## IndexedMap
+
+`collections.IndexedMap` is a collection that uses a `collections.Map` under the hood, together with a struct which contains the indexes that we need to define.
+
+### Example
+
+Let's say we have an `auth.BaseAccount` struct which looks like the following:
+
+```go
+type BaseAccount struct {
+	AccountNumber uint64 `protobuf:"varint,3,opt,name=account_number,json=accountNumber,proto3" json:"account_number,omitempty"`
+	Sequence      uint64 `protobuf:"varint,4,opt,name=sequence,proto3" json:"sequence,omitempty"`
+}
+```
+
+First of all, when we save our accounts in state we map them using a primary key, `sdk.AccAddress`.
+If it were a `collections.Map` it would be `collections.Map[sdk.AccAddress, authtypes.BaseAccount]`.
+
+Then we also want to be able to get an account not only by its `sdk.AccAddress`, but also by its `AccountNumber`.
+
+So we can say we want to create an `Index` that maps our `BaseAccount` to its `AccountNumber`.
+
+We also know that this `Index` is unique. Unique means that there can only be one `BaseAccount` that maps to a specific
+`AccountNumber`.
+
+We start by defining the object that contains our index:
+
+```go expandable
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+	Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func (a AccountsIndexes)
+
+IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+	return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{
+	a.Number
+}
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder)
+
+AccountsIndexes {
+	return AccountsIndexes{
+	Number: indexes.NewUnique(
+		sb, AccountsNumberIndexPrefix, "accounts_by_number",
+		collections.Uint64Key, sdk.AccAddressKey,
+		func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+			return v.AccountNumber, nil
+},
+	),
+}
+}
+```
+
+We create an `AccountsIndexes` struct which contains a field: `Number`. This field represents our `AccountNumber` index.
+`AccountNumber` is a field of `authtypes.BaseAccount` and it's a `uint64`.
+
+Then we can see that in our `AccountsIndexes` struct the `Number` field is defined as:
+
+```go
+*indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+```
+
+The first type parameter is `uint64`, which is the field type of our index.
+The second type parameter is the primary key, `sdk.AccAddress`.
+And the third type parameter is the actual object we're storing, `authtypes.BaseAccount`.
+
+Then we implement a method called `IndexesList` on our `AccountsIndexes` struct; this will be used
+by the `IndexedMap` to keep the underlying map in sync with the indexes, in our case `Number`.
+This method just needs to return the slice of indexes contained in the struct.
+
+Then we create a `NewAccountIndexes` function that instantiates and returns the `AccountsIndexes` struct.
+
+The function takes a `SchemaBuilder`. Then we instantiate our `indexes.Unique`; let's analyse the arguments we pass to
+`indexes.NewUnique`.
+
+#### Instantiating an `indexes.Unique`
+
+We already know the first three arguments. They are: the `SchemaBuilder`, the `Prefix` which is our index prefix (the partition
+where the key relationships for the `Number` index will be maintained), and the human name for the `Number` index.
+
+The fourth argument is `collections.Uint64Key`, which is a key codec for `uint64` keys; we pass it because
+the key we're trying to index is a `uint64` key (the account number). As the fifth argument we pass the primary key codec,
+which in our case is `sdk.AccAddressKey` (remember: we're mapping `sdk.AccAddress` => `BaseAccount`).
+
+As the last parameter we pass a function that, given a `BaseAccount`, returns its `AccountNumber`.
+
+After this we can proceed to instantiate our `IndexedMap`.
+ +```go expandable +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewIndexedMap( + sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.50/documentation/operations/packages/cdc), + NewAccountIndexes(sb), + ), +} +} +``` + +As we can see here what we do, for now, is the same thing as we did for `collections.Map`. +We pass it the `SchemaBuilder`, the `Prefix` where we plan to store the mapping between `sdk.AccAddress` and `authtypes.BaseAccount`, +the human name and the respective `sdk.AccAddress` key codec and `authtypes.BaseAccount` value codec. + +Then we pass the instantiation of our `AccountIndexes` through `NewAccountIndexes`. 
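
The bookkeeping an `IndexedMap` performs on every write can be pictured with a small, purely illustrative stand-in (plain maps instead of store-backed collections, and hypothetical names throughout): each `Set` updates the primary mapping and keeps the unique number index in sync, rejecting a number already taken by another address.

```go
package main

import (
	"errors"
	"fmt"
)

// Account is a simplified stand-in for authtypes.BaseAccount.
type Account struct {
	Number   uint64
	Sequence uint64
}

// IndexedAccounts pairs a primary mapping (address -> account) with a
// unique secondary index (account number -> address).
type IndexedAccounts struct {
	byAddress map[string]Account
	byNumber  map[uint64]string
}

func NewIndexedAccounts() *IndexedAccounts {
	return &IndexedAccounts{
		byAddress: map[string]Account{},
		byNumber:  map[uint64]string{},
	}
}

// Set writes the account and keeps the unique index in sync.
func (m *IndexedAccounts) Set(addr string, acc Account) error {
	if other, ok := m.byNumber[acc.Number]; ok && other != addr {
		return errors.New("account number already taken") // uniqueness violated
	}
	if old, ok := m.byAddress[addr]; ok {
		delete(m.byNumber, old.Number) // drop the stale index entry on overwrite
	}
	m.byAddress[addr] = acc
	m.byNumber[acc.Number] = addr
	return nil
}

// GetByNumber resolves the primary key through the index, then the value.
func (m *IndexedAccounts) GetByNumber(n uint64) (string, Account, bool) {
	addr, ok := m.byNumber[n]
	if !ok {
		return "", Account{}, false
	}
	return addr, m.byAddress[addr], true
}

func main() {
	m := NewIndexedAccounts()
	m.Set("cosmos1bob", Account{Number: 7})
	addr, _, _ := m.GetByNumber(7)
	fmt.Println(addr)
}
```

This is the same two-step lookup a unique index performs: index key to primary key, then primary key to value.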
+ +Full example: + +```go expandable +package docs + +import ( + + "cosmossdk.io/collections" + "cosmossdk.io/collections/indexes" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsNumberIndexPrefix = collections.NewPrefix(1) + +type AccountsIndexes struct { + Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount] +} + +func (a AccountsIndexes) + +IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] { + return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{ + a.Number +} +} + +func NewAccountIndexes(sb *collections.SchemaBuilder) + +AccountsIndexes { + return AccountsIndexes{ + Number: indexes.NewUnique( + sb, AccountsNumberIndexPrefix, "accounts_by_number", + collections.Uint64Key, sdk.AccAddressKey, + func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) { + return v.AccountNumber, nil +}, + ), +} +} + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewIndexedMap( + sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.50/documentation/operations/packages/cdc), + NewAccountIndexes(sb), + ), +} +} +``` + +### Working with IndexedMaps + +Whilst instantiating `collections.IndexedMap` is tedious, working with them is extremely smooth. + +Let's take the full example, and expand it with some use-cases. 
+ +```go expandable +package docs + +import ( + + "cosmossdk.io/collections" + "cosmossdk.io/collections/indexes" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsNumberIndexPrefix = collections.NewPrefix(1) + +type AccountsIndexes struct { + Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount] +} + +func (a AccountsIndexes) + +IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] { + return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{ + a.Number +} +} + +func NewAccountIndexes(sb *collections.SchemaBuilder) + +AccountsIndexes { + return AccountsIndexes{ + Number: indexes.NewUnique( + sb, AccountsNumberIndexPrefix, "accounts_by_number", + collections.Uint64Key, sdk.AccAddressKey, + func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) { + return v.AccountNumber, nil +}, + ), +} +} + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewIndexedMap( + sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.50/documentation/operations/packages/cdc), + NewAccountIndexes(sb), + ), +} +} + +func (k Keeper) + +CreateAccount(ctx sdk.Context, addr sdk.AccAddress) + +error { + nextAccountNumber := k.getNextAccountNumber() + newAcc := authtypes.BaseAccount{ + AccountNumber: nextAccountNumber, + Sequence: 0, +} + +return k.Accounts.Set(ctx, addr, newAcc) +} + +func (k Keeper) + +RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) + +error { + return k.Accounts.Remove(ctx, addr) +} + +func (k 
Keeper)

+GetAccountByNumber(ctx sdk.Context, accNumber uint64) (sdk.AccAddress, authtypes.BaseAccount, error) {
+	accAddress, err := k.Accounts.Indexes.Number.MatchExact(ctx, accNumber)
+	if err != nil {
+		return nil, authtypes.BaseAccount{
+}, err
+}
+
+acc, err := k.Accounts.Get(ctx, accAddress)
+	if err != nil {
+		return nil, authtypes.BaseAccount{
+}, err
+}
+
+return accAddress, acc, nil
+}
+
+func (k Keeper)
+
+GetAccountsByNumber(ctx sdk.Context, startAccNum, endAccNum uint64) ([]authtypes.BaseAccount, error) {
+	rng := new(collections.Range[uint64]).
+	StartInclusive(startAccNum).
+	EndInclusive(endAccNum)
+
+iter, err := k.Accounts.Indexes.Number.Iterate(ctx, rng)
+	if err != nil {
+		return nil, err
+}
+
+return indexes.CollectValues(ctx, k.Accounts, iter)
+}
+
+func (k Keeper)
+
+getNextAccountNumber()
+
+uint64 {
+	return 0
+}
+```
+
+## Collections with interfaces as values
+
+Although the cosmos-sdk is shifting away from the usage of the interface registry, there are still some places where it is used.
+In order to support old code, collections have to support interface values.
+
+The generic `codec.CollValue` is not able to handle interface values, so we need to use a special type, `codec.CollInterfaceValue`.
+`codec.CollInterfaceValue` takes a `codec.BinaryCodec` as an argument, and uses it to marshal and unmarshal values as interfaces.
+`codec.CollInterfaceValue` lives in the `codec` package, whose import path is `github.com/cosmos/cosmos-sdk/codec`.
+
+### Instantiating Collections with interface values
+
+In order to instantiate a collection with interface values, we need to use `codec.CollInterfaceValue` instead of `codec.CollValue`.
+ +```go expandable +package example + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts *collections.Map[sdk.AccAddress, sdk.AccountI] +} + +func NewKeeper(cdc codec.BinaryCodec, storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap( + sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollInterfaceValue[sdk.AccountI](/docs/sdk/v0.50/learn/advanced/encoding), + ), +} +} + +func (k Keeper) + +SaveBaseAccount(ctx sdk.Context, account authtypes.BaseAccount) + +error { + return k.Accounts.Set(ctx, account.GetAddress(), account) +} + +func (k Keeper) + +SaveModuleAccount(ctx sdk.Context, account authtypes.ModuleAccount) + +error { + return k.Accounts.Set(ctx, account.GetAddress(), account) +} + +func (k Keeper) + +GetAccount(ctx sdk.context, addr sdk.AccAddress) (sdk.AccountI, error) { + return k.Accounts.Get(ctx, addr) +} +``` diff --git a/docs/sdk/v0.50/documentation/operations/packages/depinject.mdx b/docs/sdk/v0.50/documentation/operations/packages/depinject.mdx new file mode 100644 index 00000000..59b290a4 --- /dev/null +++ b/docs/sdk/v0.50/documentation/operations/packages/depinject.mdx @@ -0,0 +1,713 @@ +--- +title: Depinject +--- + +> **DISCLAIMER**: This is a **beta** package. The SDK team is actively working on this feature and we are looking for feedback from the community. Please try it out and let us know what you think. + +## Overview + +`depinject` is a dependency injection (DI) framework for the Cosmos SDK, designed to streamline the process of building and configuring blockchain applications. 
It works in conjunction with the `core/appconfig` module to replace the majority of boilerplate code in `app.go` with a configuration file in Go, YAML, or JSON format.
+
+`depinject` is particularly useful for developing blockchain applications:
+
+* With multiple interdependent components, modules, or services, helping manage their dependencies effectively.
+* That require decoupling of these components, making it easier to test, modify, or replace individual parts without affecting the entire system.
+* That want to simplify the setup and initialisation of modules and their dependencies by reducing boilerplate code and automating dependency management.
+
+By using `depinject`, developers can achieve:
+
+* Cleaner and more organised code.
+
+* Improved modularity and maintainability.
+
+* A more maintainable and modular structure for their blockchain applications, ultimately enhancing development velocity and code quality.
+
+* [Go Doc](https://pkg.go.dev/cosmossdk.io/depinject)
+
+## Usage
+
+The `depinject` framework, based on dependency injection concepts, streamlines the management of dependencies within your blockchain application using its Configuration API. This API offers a set of functions and methods to create easy-to-use configurations, making it simple to define, modify, and access dependencies and their relationships.
+
+A core component of the [Configuration API](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject#Config) is the `Provide` function, which allows you to register provider functions that supply dependencies. Inspired by constructor injection, these provider functions form the basis of the dependency tree, enabling the management and resolution of dependencies in a structured and maintainable manner. Additionally, `depinject` supports interface types as inputs to provider functions, offering flexibility and decoupling between components, similar to interface injection concepts.
+ +By leveraging `depinject` and its Configuration API, you can efficiently handle dependencies in your blockchain application, ensuring a clean, modular, and well-organised codebase. + +Example: + +```go expandable +package main + +import ( + + "fmt" + "cosmossdk.io/depinject" +) + +type AnotherInt int + +func main() { + var ( + x int + y AnotherInt + ) + +fmt.Printf("Before (%v, %v)\n", x, y) + +depinject.Inject( + depinject.Provide( + func() + +int { + return 1 +}, + func() + +AnotherInt { + return AnotherInt(2) +}, + ), + &x, + &y, + ) + +fmt.Printf("After (%v, %v)\n", x, y) +} +``` + +In this example, `depinject.Provide` registers two provider functions that return `int` and `AnotherInt` values. The `depinject.Inject` function is then used to inject these values into the variables `x` and `y`. + +Provider functions serve as the basis for the dependency tree. They are analysed to identify their inputs as dependencies and their outputs as dependents. These dependents can either be used by another provider function or be stored outside the DI container (e.g., `&x` and `&y` in the example above). + +### Interface type resolution + +`depinject` supports the use of interface types as inputs to provider functions, which helps decouple dependencies between modules. This approach is particularly useful for managing complex systems with multiple modules, such as the Cosmos SDK, where dependencies need to be flexible and maintainable. + +For example, `x/bank` expects an [AccountKeeper](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/x/bank/types#AccountKeeper) interface as [input to ProvideModule](https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/bank/module.go#L208-L260). `SimApp` uses the implementation in `x/auth`, but the modular design allows for easy changes to the implementation if needed. 
+ +Consider the following example: + +```go expandable +package duck + +type Duck interface { + quack() +} + +type AlsoDuck interface { + quack() +} + +type Mallard struct{ +} + +type Canvasback struct{ +} + +func (duck Mallard) + +quack() { +} + +func (duck Canvasback) + +quack() { +} + +type Pond struct { + Duck AlsoDuck +} +``` + +In this example, there's a `Pond` struct that has a `Duck` field of type `AlsoDuck`. The `depinject` framework can automatically resolve the appropriate implementation when there's only one available, as shown below: + +```go +var pond Pond + +depinject.Inject( + depinject.Provide( + func() + +Mallard { + return Mallard{ +} +}, + func(duck Duck) + +Pond { + return Pond{ + Duck: duck +} + +}), + &pond) +``` + +This code snippet results in the `Duck` field of `Pond` being implicitly bound to the `Mallard` implementation because it's the only implementation of the `Duck` interface in the container. + +However, if there are multiple implementations of the `Duck` interface, as in the following example, you'll encounter an error: + +```go +var pond Pond + +depinject.Inject( + depinject.Provide( + func() + +Mallard { + return Mallard{ +} +}, + func() + +Canvasback { + return Canvasback{ +} +}, + func(duck Duck) + +Pond { + return Pond{ + Duck: duck +} + +}), + &pond) +``` + +A specific binding preference for `Duck` is required. + +#### `BindInterface` API + +In the above situation registering a binding for a given interface binding may look like: + +```go expandable +depinject.Inject( + depinject.Configs( + depinject.BindInterface( + "duck.Duck", + "duck.Mallard"), + depinject.Provide( + func() + +Mallard { + return Mallard{ +} +}, + func() + +Canvasback { + return Canvasback{ +} +}, + func(duck Duck) + +APond { + return Pond{ + Duck: duck +} + +})), + &pond) +``` + +Now `depinject` has enough information to provide `Mallard` as an input to `APond`. 
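
The resolution mechanics described above can be sketched in a few dozen lines of reflection code. This toy container is a hypothetical illustration, not depinject's implementation: it resolves only exact concrete types and has no interface matching or `BindInterface` equivalent, but it shows the core idea that provider outputs are indexed by type and each provider's inputs are resolved recursively.

```go
package main

import (
	"fmt"
	"reflect"
)

type AnotherInt int

// Container maps an output type to the provider function that produces it.
type Container struct {
	providers map[reflect.Type]reflect.Value
	cache     map[reflect.Type]reflect.Value
}

func NewContainer() *Container {
	return &Container{
		providers: map[reflect.Type]reflect.Value{},
		cache:     map[reflect.Type]reflect.Value{},
	}
}

// Provide registers provider functions, indexed by their first output type.
func (c *Container) Provide(fns ...any) {
	for _, fn := range fns {
		v := reflect.ValueOf(fn)
		c.providers[v.Type().Out(0)] = v
	}
}

// resolve builds a value of type t, recursively resolving provider inputs.
func (c *Container) resolve(t reflect.Type) reflect.Value {
	if v, ok := c.cache[t]; ok {
		return v
	}
	fn, ok := c.providers[t]
	if !ok {
		panic(fmt.Sprintf("no provider for %v", t))
	}
	args := make([]reflect.Value, fn.Type().NumIn())
	for i := range args {
		args[i] = c.resolve(fn.Type().In(i))
	}
	out := fn.Call(args)[0]
	c.cache[t] = out
	return out
}

// Inject fills each pointer target with a resolved value of its element type.
func (c *Container) Inject(targets ...any) {
	for _, tgt := range targets {
		p := reflect.ValueOf(tgt)
		p.Elem().Set(c.resolve(p.Type().Elem()))
	}
}

func main() {
	var (
		x int
		y AnotherInt
	)
	c := NewContainer()
	c.Provide(
		func() int { return 1 },
		func(i int) AnotherInt { return AnotherInt(i + 1) },
	)
	c.Inject(&x, &y)
	fmt.Printf("(%v, %v)\n", x, y) // x from the int provider, y derived from it
}
```

Note how the second provider's `int` input is satisfied by the first provider's output, mirroring how depinject wires a dependency tree from registered constructors.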
+ +### Full example in real app + + +When using `depinject.Inject`, the injected types must be pointers. + + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + _ "embed" + "io" + "os" + "path/filepath" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/store/streaming" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/testutil/testdata_pulsar" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/capability" + capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + 
"github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + nftkeeper "github.com/cosmos/cosmos-sdk/x/nft/keeper" + nftmodule "github.com/cosmos/cosmos-sdk/x/nft/module" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradeclient "github.com/cosmos/cosmos-sdk/x/upgrade/client" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" +) + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / ModuleBasics defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration + / and genesis verification. 
+ ModuleBasics = module.NewBasicManager( + auth.AppModuleBasic{ +}, + genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + bank.AppModuleBasic{ +}, + capability.AppModuleBasic{ +}, + staking.AppModuleBasic{ +}, + mint.AppModuleBasic{ +}, + distr.AppModuleBasic{ +}, + gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + upgradeclient.LegacyProposalHandler, + upgradeclient.LegacyCancelProposalHandler, +}, + ), + params.AppModuleBasic{ +}, + crisis.AppModuleBasic{ +}, + slashing.AppModuleBasic{ +}, + feegrantmodule.AppModuleBasic{ +}, + upgrade.AppModuleBasic{ +}, + evidence.AppModuleBasic{ +}, + authzmodule.AppModuleBasic{ +}, + groupmodule.AppModuleBasic{ +}, + vesting.AppModuleBasic{ +}, + nftmodule.AppModuleBasic{ +}, + consensus.AppModuleBasic{ +}, + ) +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + CapabilityKeeper *capabilitykeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + / Below we could construct and set an application specific mempool and ABCI 1.0 Prepare and Process Proposal + / handlers. These defaults are already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / nonceMempool = mempool.NewSenderNonceMempool() + / mempoolOpt = baseapp.SetMempool(nonceMempool) + / prepareOpt = func(app *baseapp.BaseApp) { + / app.SetPrepareProposal(app.DefaultPrepareProposal()) + / +} + / processOpt = func(app *baseapp.BaseApp) { + / app.SetProcessProposal(app.DefaultProcessProposal()) + / +} + / + / Further down we'd set the options in the AppBuilder like below. 
+ / baseAppOptions = append(baseAppOptions, mempoolOpt, prepareOpt, processOpt) + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +authtypes.AccountI { + return authtypes.ProtoBaseAccount() +}, + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.CapabilityKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.CrisisKeeper, + &app.UpgradeKeeper, + &app.ParamsKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + ); err != nil { + panic(err) +} + +app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...) + + / load state streaming if enabled + if _, _, err := streaming.LoadStreamingServices(app.App.BaseApp, appOpts, app.appCodec, logger, app.kvStoreKeys()); err != nil { + logger.Error("failed to load state streaming", "err", err) + +os.Exit(1) +} + + /**** Module Options ****/ + + app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+	app.RegisterUpgradeHandlers()
+
+	// add test gRPC service for testing gRPC queries in isolation
+	testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{})
+
+	// create the simulation manager and define the order of the modules for deterministic simulations
+	//
+	// NOTE: this is not required for apps that don't use the simulator for fuzz testing
+	// transactions
+	overrideModules := map[string]module.AppModuleSimulation{
+		authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)),
+	}
+	app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)
+
+	app.sm.RegisterStoreDecoders()
+
+	// A custom InitChainer can be set if extra pre-init-genesis logic is required.
+	// By default, when using an app wiring enabled module, this is not required.
+	// For instance, the upgrade module will automatically set the module version map in its init genesis thanks to app wiring.
+	// However, when registering a module manually (i.e. one that does not support app wiring), the module version map
+	// must be set manually as follows. The upgrade module will de-duplicate the module version map.
+	//
+	// app.SetInitChainer(func(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain {
+	// 	app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap())
+	// 	return app.App.InitChainer(ctx, req)
+	// })
+
+	if err := app.Load(loadLatest); err != nil {
+		panic(err)
+	}
+
+	return app
+}
+
+// Name returns the name of the App
+func (app *SimApp) Name() string { return app.BaseApp.Name() }
+
+// LegacyAmino returns SimApp's amino codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) LegacyAmino() *codec.LegacyAmino {
+	return app.legacyAmino
+}
+
+// AppCodec returns SimApp's app codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) AppCodec() codec.Codec {
+	return app.appCodec
+}
+
+// InterfaceRegistry returns SimApp's InterfaceRegistry
+func (app *SimApp) InterfaceRegistry() codectypes.InterfaceRegistry {
+	return app.interfaceRegistry
+}
+
+// TxConfig returns SimApp's TxConfig
+func (app *SimApp) TxConfig() client.TxConfig {
+	return app.txConfig
+}
+
+// GetKey returns the KVStoreKey for the provided store key.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetKey(storeKey string) *storetypes.KVStoreKey {
+	sk := app.UnsafeFindStoreKey(storeKey)
+	kvStoreKey, ok := sk.(*storetypes.KVStoreKey)
+	if !ok {
+		return nil
+	}
+	return kvStoreKey
+}
+
+func (app *SimApp) kvStoreKeys() map[string]*storetypes.KVStoreKey {
+	keys := make(map[string]*storetypes.KVStoreKey)
+	for _, k := range app.GetStoreKeys() {
+		if kv, ok := k.(*storetypes.KVStoreKey); ok {
+			keys[kv.Name()] = kv
+		}
+	}
+	return keys
+}
+
+// GetSubspace returns a param subspace for a given module name.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetSubspace(moduleName string) paramstypes.Subspace {
+	subspace, _ := app.ParamsKeeper.GetSubspace(moduleName)
+	return subspace
+}
+
+// SimulationManager implements the SimulationApp interface
+func (app *SimApp) SimulationManager() *module.SimulationManager {
+	return app.sm
+}
+
+// RegisterAPIRoutes registers all application module routes with the provided
+// API server.
+func (app *SimApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+	app.App.RegisterAPIRoutes(apiSvr, apiConfig)
+	// register swagger API in app.go so that other applications can override easily
+	if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+		panic(err)
+	}
+}
+
+// GetMaccPerms returns a copy of the module account permissions
+//
+// NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms() map[string][]string {
+	dup := make(map[string][]string)
+	for _, perms := range moduleAccPerms {
+		dup[perms.Account] = perms.Permissions
+	}
+	return dup
+}
+
+// BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses() map[string]bool {
+	result := make(map[string]bool)
+	if len(blockAccAddrs) > 0 {
+		for _, addr := range blockAccAddrs {
+			result[addr] = true
+		}
+	} else {
+		for addr := range GetMaccPerms() {
+			result[addr] = true
+		}
+	}
+	return result
+}
+```
+
+## Debugging
+
+Issues with resolving dependencies in the container can be debugged using logs and [Graphviz](https://graphviz.org) renderings of the container tree.
+By default, whenever there is an error, logs will be printed to stderr and a rendering of the dependency graph in Graphviz DOT format will be saved to `debug_container.dot`.
+
+Here is an example Graphviz rendering of a successful build of a dependency graph:
+![Graphviz Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example.svg)
+
+Rectangles represent functions, ovals represent types, rounded rectangles represent modules and the single hexagon
+represents the function which called `Build`. Black-colored shapes mark functions and types that were called/resolved
+without an error. Gray-colored nodes mark functions and types that could have been called/resolved in the container but
+were left unused.
+
+Here is an example Graphviz rendering of a dependency graph build which failed:
+![Graphviz Error Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example_error.svg)
+
+Graphviz DOT files can be converted into SVGs for viewing in a web browser using the `dot` command-line tool, for example:
+
+```shell
+dot -Tsvg debug_container.dot > debug_container.svg
+```
+
+Many other tools including some IDEs support working with DOT files.
diff --git a/docs/sdk/v0.50/documentation/operations/tooling/README.mdx b/docs/sdk/v0.50/documentation/operations/tooling/README.mdx
new file mode 100644
index 00000000..bf2df935
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/tooling/README.mdx
@@ -0,0 +1,21 @@
+---
+title: Tools
+description: >-
+  This section provides documentation on various tooling maintained by the SDK
+  team. This includes tools for development, operating a node, and ease of use
+  of a Cosmos SDK chain.
+---
+
+This section provides documentation on various tooling maintained by the SDK team.
+This includes tools for development, operating a node, and ease of use of a Cosmos SDK chain.
+
+## CLI Tools
+
+* [Cosmovisor](/docs/sdk/v0.50/documentation/operations/tooling/cosmovisor)
+* [Confix](/docs/sdk/v0.50/documentation/operations/tooling/confix)
+* [Hubl](/docs/sdk/v0.50/documentation/operations/tooling/hubl)
+* [Rosetta](https://docs.cosmos.network/main/run-node/rosetta)
+
+## Other Tools
+
+* [Protocol Buffers](/docs/sdk/v0.50/documentation/operations/tooling/protobuf)
diff --git a/docs/sdk/v0.50/documentation/operations/tooling/confix.mdx b/docs/sdk/v0.50/documentation/operations/tooling/confix.mdx
new file mode 100644
index 00000000..244a143c
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/tooling/confix.mdx
@@ -0,0 +1,138 @@
+---
+title: Confix
+description: >-
+  Confix is a configuration management tool that allows you to manage your
+  configuration via CLI.
+---
+
+`Confix` is a configuration management tool that allows you to manage your configuration via CLI.
+
+It is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md).
+
+## Installation
+
+### Add Config Command
+
+To add the confix tool, add the `ConfigCommand` to your application's root command file (e.g. `/cmd/root.go`).
+
+Import the `confixCmd` package:
+
+```go
+import "cosmossdk.io/tools/confix/cmd"
+```
+
+Find the following line:
+
+```go
+initRootCmd(rootCmd, encodingConfig)
+```
+
+After that line, add the following:
+
+```go
+rootCmd.AddCommand(
+    confixcmd.ConfigCommand(),
+)
+```
+
+The `ConfigCommand` function builds the `config` root command and is defined in the `confixCmd` package (`cosmossdk.io/tools/confix/cmd`).
+An implementation example can be found in `simapp`.
+
+The command will be available as `simd config`.
+
+### Using Confix Standalone
+
+To use Confix standalone, without having to add it to your application, install it with the following command:
+
+```bash
+go install cosmossdk.io/tools/confix/cmd/confix@latest
+```
+
+Alternatively, for building from source, simply run `make confix`. The binary will be located in `tools/confix`.
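For orientation before the usage examples below: the values Confix reads and writes live in the node's TOML configuration files. A minimal, purely illustrative fragment of an `app.toml` showing the kind of keys involved might look like:

```toml
# app.toml (fragment, illustrative values)
pruning = "default"
minimum-gas-prices = "0stake"
```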
+
+## Usage
+
+Use standalone:
+
+```shell
+confix --help
+```
+
+Use in simd:
+
+```shell
+simd config fix --help
+```
+
+### Get
+
+Get a configuration value, e.g.:
+
+```shell
+simd config get app pruning # gets the value pruning from app.toml
+simd config get client chain-id # gets the value chain-id from client.toml
+```
+
+```shell
+confix get ~/.simapp/config/app.toml pruning # gets the value pruning from app.toml
+confix get ~/.simapp/config/client.toml chain-id # gets the value chain-id from client.toml
+```
+
+### Set
+
+Set a configuration value, e.g.:
+
+```shell
+simd config set app pruning "enabled" # sets the value pruning in app.toml
+simd config set client chain-id "foo-1" # sets the value chain-id in client.toml
+```
+
+```shell
+confix set ~/.simapp/config/app.toml pruning "enabled" # sets the value pruning in app.toml
+confix set ~/.simapp/config/client.toml chain-id "foo-1" # sets the value chain-id in client.toml
+```
+
+### Migrate
+
+Migrate a configuration file to a new version, e.g.:
+
+```shell
+simd config migrate v0.47 # migrates defaultHome/config/app.toml to the latest v0.47 config
+```
+
+```shell
+confix migrate v0.47 ~/.simapp/config/app.toml # migrates ~/.simapp/config/app.toml to the latest v0.47 config
+```
+
+### Diff
+
+Get the diff between a given configuration file and the default configuration file, e.g.:
+
+```shell
+simd config diff v0.47 # gets the diff between defaultHome/config/app.toml and the latest v0.47 config
+```
+
+```shell
+confix diff v0.47 ~/.simapp/config/app.toml # gets the diff between ~/.simapp/config/app.toml and the latest v0.47 config
+```
+
+### View
+
+View a configuration file, e.g.:
+
+```shell
+simd config view client # views the current app client config
+```
+
+```shell
+confix view ~/.simapp/config/client.toml # views the current app client config
+```
+
+### Maintainer
+
+Each time the SDK's default configuration changes, add the new default SDK config under `data/v0.XX-app.toml`.
+This allows users to use the tool standalone.
+
+## Credits
+
+This project is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md) and the CometBFT implementation of [confix](https://github.com/cometbft/cometbft/blob/v0.36.x/scripts/confix/confix.go).
diff --git a/docs/sdk/v0.50/documentation/operations/tooling/cosmovisor.mdx b/docs/sdk/v0.50/documentation/operations/tooling/cosmovisor.mdx
new file mode 100644
index 00000000..8b87984b
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/tooling/cosmovisor.mdx
@@ -0,0 +1,380 @@
+---
+title: Cosmovisor
+---
+
+`cosmovisor` is a process manager for Cosmos SDK application binaries that automates the application binary switch at chain upgrades.
+It polls the `upgrade-info.json` file that is created by the x/upgrade module at upgrade height, and then can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary.
+
+* [Cosmovisor](#cosmovisor)
+  * [Design](#design)
+  * [Contributing](#contributing)
+  * [Setup](#setup)
+    * [Installation](#installation)
+    * [Command Line Arguments And Environment Variables](#command-line-arguments-and-environment-variables)
+    * [Folder Layout](#folder-layout)
+  * [Usage](#usage)
+    * [Initialization](#initialization)
+    * [Detecting Upgrades](#detecting-upgrades)
+    * [Auto-Download](#auto-download)
+  * [Example: SimApp Upgrade](#example-simapp-upgrade)
+    * [Chain Setup](#chain-setup)
+      * [Prepare Cosmovisor and Start the Chain](#prepare-cosmovisor-and-start-the-chain)
+      * [Update App](#update-app)
+
+## Design
+
+Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app:
+
+* it will pass arguments to the associated app (configured by `DAEMON_NAME` env variable).
+  Running `cosmovisor run arg1 arg2 ....` will run `app arg1 arg2 ...`;
+* it will manage an app by restarting and upgrading if needed;
+* it is configured using environment variables, not positional arguments.
+
+*Note: If new versions of the application are not set up to run in-place store migrations, migrations will need to be run manually before restarting `cosmovisor` with the new binary. For this reason, we recommend applications adopt in-place store migrations.*
+
+
+Only the latest version of cosmovisor is actively developed/maintained.
+
+
+
+Versions prior to v1.0.0 have a vulnerability that could lead to a DoS. Please upgrade to the latest version.
+
+
+## Contributing
+
+Cosmovisor is part of the Cosmos SDK monorepo, but it's a separate module with its own release schedule.
+
+Release branches have the following format `release/cosmovisor/vA.B.x`, where A and B are a number (e.g. `release/cosmovisor/v1.3.x`). Releases are tagged using the following format: `cosmovisor/vA.B.C`.
+
+## Setup
+
+### Installation
+
+You can download Cosmovisor from the [GitHub releases](https://github.com/cosmos/cosmos-sdk/releases/tag/cosmovisor%2Fv1.3.0).
+
+To install the latest version of `cosmovisor`, run the following command:
+
+```shell
+go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@latest
+```
+
+To install a previous version, you can specify the version. IMPORTANT: Chains that use Cosmos SDK v0.44.3 or earlier (e.g. v0.44.2) and want to use the auto-download feature MUST use `cosmovisor v0.1.0`:
+
+```shell
+go install github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor@v0.1.0
+```
+
+Run `cosmovisor version` to check the cosmovisor version.
+
+Alternatively, for building from source, simply run `make cosmovisor`. The binary will be located in `tools/cosmovisor`.
+
+
+Building from source using `make cosmovisor` won't display the correct `cosmovisor` version.
+
+
+### Command Line Arguments And Environment Variables
+
+The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are:
+
+* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration.
+* `run` - Run the configured binary using the rest of the provided arguments.
+* `version` - Output the `cosmovisor` version and also run the binary with the `version` argument.
+* `config` - Display the current `cosmovisor` configuration, i.e. the values of the environment variables that `cosmovisor` is using.
+* `add-upgrade` - Add an upgrade manually to `cosmovisor`.
+
+All arguments passed to `cosmovisor run` will be passed to the application binary (as a subprocess). `cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor run` cannot accept any command-line arguments other than those available to the application binary.
+
+
+Use of `cosmovisor` without one of the action arguments is deprecated. For backwards compatibility, if the first argument is not an action argument, `run` is assumed. However, this fallback might be removed in future versions, so it is recommended that you always provide `run`.
+
+
+`cosmovisor` reads its configuration from environment variables:
+
+* `DAEMON_HOME` is the location where the `cosmovisor/` directory is kept that contains the genesis binary, the upgrade binaries, and any additional auxiliary files associated with each binary (e.g. `$HOME/.gaiad`, `$HOME/.regend`, `$HOME/.simd`, etc.).
+* `DAEMON_NAME` is the name of the binary itself (e.g. `gaiad`, `regend`, `simd`, etc.).
+* `DAEMON_ALLOW_DOWNLOAD_BINARIES` (*optional*), if set to `true`, will enable auto-downloading of new binaries (for security reasons, this is intended for full nodes rather than validators). By default, `cosmovisor` will not auto-download new binaries.
+* `DAEMON_RESTART_AFTER_UPGRADE` (*optional*, default = `true`), if `true`, restarts the subprocess with the same command-line arguments and flags (but with the new binary) after a successful upgrade. Otherwise (`false`), `cosmovisor` stops running after an upgrade and requires the system administrator to manually restart it. Note that the restart happens only after an upgrade; `cosmovisor` does not auto-restart the subprocess after an error occurs.
+* `DAEMON_RESTART_DELAY` (*optional*, default none), allows a node operator to define a delay between the node halt (for upgrade) and the backup. The value must be a duration (e.g. `1s`).
+* `DAEMON_POLL_INTERVAL` (*optional*, default 300 milliseconds), is the interval length for polling the upgrade plan file. The value must be a duration (e.g. `1s`).
+* `DAEMON_DATA_BACKUP_DIR` (*optional*), sets a custom backup directory. If not set, `DAEMON_HOME` is used.
+* `UNSAFE_SKIP_BACKUP` (defaults to `false`), if set to `true`, upgrades directly without performing a backup. Otherwise (`false`, default), the data is backed up before trying the upgrade. The default value of `false` is recommended, since a backup is needed to roll back in case of a failed upgrade.
+* `DAEMON_PREUPGRADE_MAX_RETRIES` (defaults to `0`). The maximum number of times to call [`pre-upgrade`](https://docs.cosmos.network/main/building-apps/app-upgrade#pre-upgrade-handling) in the application after an exit status of `31`. After the maximum number of retries, Cosmovisor fails the upgrade.
+* `COSMOVISOR_DISABLE_LOGS` (defaults to `false`). If set to `true`, this will completely disable Cosmovisor logs (but not those of the underlying process). This may be useful, for example, when a Cosmovisor subcommand you are executing returns valid JSON that you are then parsing, as logs added by Cosmovisor would make the output invalid JSON.
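Putting the variables above together, a minimal environment for a `simd`-based node might look like the following sketch (the values are illustrative, not prescriptive):

```shell
#!/bin/sh
# Illustrative cosmovisor environment for a simd-based node.
export DAEMON_NAME=simd                      # binary name
export DAEMON_HOME="$HOME/.simapp"           # where the cosmovisor/ directory lives
export DAEMON_RESTART_AFTER_UPGRADE=true     # restart the subprocess after an upgrade
export DAEMON_ALLOW_DOWNLOAD_BINARIES=false  # validators should pre-install binaries
export DAEMON_POLL_INTERVAL=300ms            # how often upgrade-info.json is polled
export UNSAFE_SKIP_BACKUP=false              # keep the pre-upgrade data backup

echo "cosmovisor will manage: $DAEMON_HOME/cosmovisor/current/bin/$DAEMON_NAME"
```

Sourcing such a file before running `cosmovisor run start` keeps the configuration in one place for the init system to pick up.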
+
+### Folder Layout
+
+`$DAEMON_HOME/cosmovisor` is expected to belong completely to `cosmovisor` and the subprocesses that are controlled by it. The folder content is organized as follows:
+
+```text
+.
+├── current -> genesis or upgrades/
+├── genesis
+│   └── bin
+│       └── $DAEMON_NAME
+└── upgrades
+    └── 
+        ├── bin
+        │   └── $DAEMON_NAME
+        └── upgrade-info.json
+```
+
+The `cosmovisor/` directory includes a subdirectory for each version of the application (i.e. `genesis` or `upgrades/`). Within each subdirectory is the application binary (i.e. `bin/$DAEMON_NAME`) and any additional auxiliary files associated with each binary. `current` is a symbolic link to the currently active directory (i.e. `genesis` or `upgrades/`). The `name` variable in `upgrades/` is the lowercased URI-encoded name of the upgrade as specified in the upgrade module plan. Note that upgrade name paths are normalized to lowercase: for instance, `MyUpgrade` is normalized to `myupgrade`, and its path is `upgrades/myupgrade`.
+
+Please note that `$DAEMON_HOME/cosmovisor` only stores the *application binaries*. The `cosmovisor` binary itself can be stored in any typical location (e.g. `/usr/local/bin`). The application will continue to store its data in the default data directory (e.g. `$HOME/.simapp`) or the data directory specified with the `--home` flag. `$DAEMON_HOME` is independent of the data directory and can be set to any location. If you set `$DAEMON_HOME` to the same directory as the data directory, you will end up with a configuration like the following:
+
+```text
+.simapp
+├── config
+├── data
+└── cosmovisor
+```
+
+## Usage
+
+The system administrator is responsible for:
+
+* installing the `cosmovisor` binary
+* configuring the host's init system (e.g. `systemd`, `launchd`, etc.)
+* appropriately setting the environment variables
+* creating the `/cosmovisor` directory
+* creating the `/cosmovisor/genesis/bin` folder
+* creating the `/cosmovisor/upgrades//bin` folders
+* placing the different versions of the `` executable in the appropriate `bin` folders.
+
+`cosmovisor` will set the `current` link to point to `genesis` at first start (i.e. when no `current` link exists) and then handle switching binaries at the correct points in time so that the system administrator can prepare days in advance and relax at upgrade time.
+
+In order to support downloadable binaries, a tarball for each upgrade binary will need to be packaged up and made available through a canonical URL. Additionally, a tarball that includes the genesis binary and all available upgrade binaries can be packaged up and made available so that all the necessary binaries required to sync a fullnode from start can be easily downloaded.
+
+The `DAEMON` specific code and operations (e.g. CometBFT config, the application db, syncing blocks, etc.) all work as expected. The application binaries' directives such as command-line flags and environment variables also work as expected.
+
+### Initialization
+
+The `cosmovisor init ` command creates the folder structure required for using cosmovisor.
+
+It does the following:
+
+* creates the `/cosmovisor` folder if it doesn't yet exist
+* creates the `/cosmovisor/genesis/bin` folder if it doesn't yet exist
+* copies the provided executable file to `/cosmovisor/genesis/bin/`
+* creates the `current` link, pointing to the `genesis` folder
+
+It uses the `DAEMON_HOME` and `DAEMON_NAME` environment variables for folder location and executable name.
+
+The `cosmovisor init` command is specifically for initializing cosmovisor, and should not be confused with a chain's `init` command (e.g. `cosmovisor run init`).
+
+### Detecting Upgrades
+
+`cosmovisor` polls the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. 
The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height.
+The following heuristic is applied to detect the upgrade:
+
+* When starting, `cosmovisor` doesn't know much about the currently running upgrade, except the binary which is `current/bin/`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name.
+* If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exist, then `cosmovisor` will wait for the `data/upgrade-info.json` file to trigger an upgrade.
+* If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` tries immediately to make an upgrade according to the `name` attribute in `data/upgrade-info.json`.
+* Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger the upgrade mechanism.
+
+When the upgrade mechanism is triggered, `cosmovisor` will:
+
+1. if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor//bin` (where `` is the `upgrade-info.json:name` attribute);
+2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`.
+
+### Auto-Download
+
+Generally, `cosmovisor` requires that the system administrator place all relevant binaries on disk before the upgrade happens. However, for people who don't need such control and want an automated setup (maybe they are syncing a non-validating fullnode and want to do little maintenance), there is another option.
+
+**NOTE: we don't recommend using auto-download** because it doesn't verify in advance if a binary is available. 
If there is any issue downloading a binary, `cosmovisor` will stop and won't restart the app (which could lead to a chain halt).

+If `DAEMON_ALLOW_DOWNLOAD_BINARIES` is set to `true`, and no local binary can be found when an upgrade is triggered, `cosmovisor` will attempt to download and install the binary itself based on the instructions in the `info` attribute in the `data/upgrade-info.json` file. The file is constructed by the x/upgrade module and contains data from the upgrade `Plan` object. The `Plan` has an `info` field that is expected to have one of the following two valid formats to specify a download:
+
+1. Store an os/architecture -> binary URI map in the upgrade plan info field as JSON under the `"binaries"` key. For example:
+
+   ```json
+   {
+     "binaries": {
+       "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
+     }
+   }
+   ```
+
+   You can include multiple binaries at once to ensure more than one environment will receive the correct binaries:
+
+   ```json
+   {
+     "binaries": {
+       "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
+       "linux/arm64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
+       "darwin/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
+     }
+   }
+   ```
+
+   When submitting this as a proposal, ensure there are no spaces. 
An example command using `gaiad` could look like: + + ```shell expandable + > gaiad tx upgrade software-upgrade Vega \ + --title Vega \ + --deposit 100uatom \ + --upgrade-height 7368420 \ + --upgrade-info '{"binaries":{"linux/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-amd64","linux/arm64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-arm64","darwin/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-darwin-amd64"}}' \ + --summary "upgrade to Vega" \ + --gas 400000 \ + --from user \ + --chain-id test \ + --home test/val2 \ + --node tcp://localhost:36657 \ + --yes + ``` + +2. Store a link to a file that contains all information in the above format (e.g. if you want to specify lots of binaries, changelog info, etc. without filling up the blockchain). For example: + + ```text + https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e + ``` + +When `cosmovisor` is triggered to download the new binary, `cosmovisor` will parse the `"binaries"` field, download the new binary with [go-getter](https://github.com/hashicorp/go-getter), and unpack the new binary in the `upgrades/` folder so that it can be run as if it was installed manually. + +Note that for this mechanism to provide strong security guarantees, all URLs should include a SHA 256/512 checksum. This ensures that no false binary is run, even if someone hacks the server or hijacks the DNS. `go-getter` will always ensure the downloaded file matches the checksum if it is provided. `go-getter` will also handle unpacking archives into directories (in this case the download link should point to a `zip` file of all data in the `bin` directory). + +To properly create a sha256 checksum on linux, you can use the `sha256sum` utility. 
For example: + +```shell +sha256sum ./testdata/repo/zip_directory/autod.zip +``` + +The result will look something like the following: `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`. + +You can also use `sha512sum` if you would prefer to use longer hashes, or `md5sum` if you would prefer to use broken hashes. Whichever you choose, make sure to set the hash algorithm properly in the checksum argument to the URL. + +## Example: SimApp Upgrade + +The following instructions provide a demonstration of `cosmovisor` using the simulation application (`simapp`) shipped with the Cosmos SDK's source code. The following commands are to be run from within the `cosmos-sdk` repository. + +### Chain Setup + +Let's create a new chain using the `v0.44` version of simapp (the Cosmos SDK demo app): + +```shell +git checkout v0.44.6 +make build +``` + +Clean `~/.simapp` (never do this in a production environment): + +```shell +./build/simd unsafe-reset-all +``` + +Set up app config: + +```shell +./build/simd config set client chain-id test +./build/simd config set client keyring-backend test +./build/simd config set client broadcast-mode sync +``` + +Initialize the node and overwrite any previous genesis file (never do this in a production environment): + +```shell +./build/simd init test --chain-id test --overwrite +``` + +Set the minimum gas price to `0stake` in `~/.simapp/config/app.toml`: + +```shell +minimum-gas-prices = "0stake" +``` + +For the sake of this demonstration, amend `voting_period` in `genesis.json` to a reduced time of 20 seconds (`20s`): + +```shell +cat <<< $(jq '.app_state.gov.voting_params.voting_period = "20s"' $HOME/.simapp/config/genesis.json) > $HOME/.simapp/config/genesis.json +``` + +Create a validator, and setup genesis transaction: + +```shell +./build/simd keys add validator +./build/simd genesis add-genesis-account validator 1000000000stake --keyring-backend test +./build/simd genesis gentx validator 1000000stake --chain-id test 
+./build/simd genesis collect-gentxs +``` + +#### Prepare Cosmovisor and Start the Chain + +Set the required environment variables: + +```shell +export DAEMON_NAME=simd +export DAEMON_HOME=$HOME/.simapp +``` + +Set the optional environment variable to trigger an automatic app restart: + +```shell +export DAEMON_RESTART_AFTER_UPGRADE=true +``` + +Create the folder for the genesis binary and copy the `simd` binary: + +```shell +mkdir -p $DAEMON_HOME/cosmovisor/genesis/bin +cp ./build/simd $DAEMON_HOME/cosmovisor/genesis/bin +``` + +Now you can run cosmovisor with simapp v0.44: + +```shell +cosmovisor run start +``` + +#### Update App + +Update app to the latest version (e.g. v0.45). + +Next, we can add a migration - which is defined using `x/upgrade` [upgrade plan](https://github.com/cosmos/cosmos-sdk/blob/main/docs/core/upgrade.md) (you may refer to a past version if you are using an older Cosmos SDK release). In a migration we can do any deterministic state change. + +Build the new version `simd` binary: + +```shell +make build +``` + +Add the new `simd` binary and the upgrade name: + + + +The migration name must match the one defined in the migration plan. 
+
+
+
+```shell
+mkdir -p $DAEMON_HOME/cosmovisor/upgrades/test1/bin
+cp ./build/simd $DAEMON_HOME/cosmovisor/upgrades/test1/bin
+```
+
+Open a new terminal window and submit an upgrade proposal along with a deposit and a vote (these commands must be run within 20 seconds of each other):
+
+**v0.45 and earlier**:
+
+```shell
+./build/simd tx gov submit-proposal software-upgrade test1 --title upgrade --description upgrade --upgrade-height 200 --from validator --yes
+./build/simd tx gov deposit 1 10000000stake --from validator --yes
+./build/simd tx gov vote 1 yes --from validator --yes
+```
+
+**v0.46, v0.47**:
+
+```shell
+./build/simd tx gov submit-legacy-proposal software-upgrade test1 --title upgrade --description upgrade --upgrade-height 200 --from validator --yes
+./build/simd tx gov deposit 1 10000000stake --from validator --yes
+./build/simd tx gov vote 1 yes --from validator --yes
+```
+
+**v0.50+**:
+
+```shell
+./build/simd tx upgrade software-upgrade test1 --title upgrade --summary upgrade --upgrade-height 200 --from validator --yes
+./build/simd tx gov deposit 1 10000000stake --from validator --yes
+./build/simd tx gov vote 1 yes --from validator --yes
+```
+
+The upgrade will occur automatically at height 200. Note: you may need to change the upgrade height in the snippet above if your test run takes more time.
diff --git a/docs/sdk/v0.50/documentation/operations/tooling/hubl.mdx b/docs/sdk/v0.50/documentation/operations/tooling/hubl.mdx
new file mode 100644
index 00000000..f7b193ca
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/tooling/hubl.mdx
@@ -0,0 +1,71 @@
+---
+title: Hubl
+---
+
+`Hubl` is a tool that allows you to query any Cosmos SDK based blockchain.
+It takes advantage of the new [AutoCLI](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/client/v2@v2.0.0-20220916140313-c5245716b516/cli) feature {/* TODO replace with AutoCLI docs */} of the Cosmos SDK.
+
+## Installation
+
+Hubl can be installed using `go install`:
+
+```shell
+go install cosmossdk.io/tools/hubl/cmd/hubl@latest
+```
+
+Or build from source:
+
+```shell
+git clone --depth=1 https://github.com/cosmos/cosmos-sdk
+make hubl
+```
+
+The binary will be located in `tools/hubl`.
+
+## Usage
+
+```shell
+hubl --help
+```
+
+### Add chain
+
+To configure a new chain, run the `init` command with the name of the chain as it is listed in the chain registry ([Link](https://github.com/cosmos/chain-registry)).
+
+If the chain is not listed in the chain registry, you can use any unique name.
+
+```shell
+hubl init [chain-name]
+hubl init regen
+```
+
+The chain configuration is stored in `~/.hubl/config.toml`.
+
+
+
+When using an insecure gRPC endpoint, change the `insecure` field to `true` in the config file.
+
+```toml
+[chains]
+[chains.regen]
+[[chains.regen.trusted-grpc-endpoints]]
+endpoint = 'localhost:9090'
+insecure = true
+```
+
+Or use the `--insecure` flag:
+
+```shell
+hubl init regen --insecure
+```
+
+
+
+### Query
+
+To query a chain, you can use the `query` command.
+Then specify which module you want to query and the query itself.
+
+```shell
+hubl regen query auth module-accounts
+```
diff --git a/docs/sdk/v0.50/documentation/operations/tooling/protobuf.mdx b/docs/sdk/v0.50/documentation/operations/tooling/protobuf.mdx
new file mode 100644
index 00000000..da1e18d2
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/tooling/protobuf.mdx
@@ -0,0 +1,1052 @@
+---
+title: Protocol Buffers
+description: >-
+  The Cosmos SDK uses protocol buffers extensively; this document is meant to
+  provide a guide on how they are used in the cosmos-sdk.
+---
+
+The Cosmos SDK uses protocol buffers extensively; this document is meant to provide a guide on how they are used in the cosmos-sdk.
+
+To generate the proto files, the Cosmos SDK uses a Docker image; this image is provided for anyone to use as well.
The latest version is `ghcr.io/cosmos/proto-builder:0.12.x` + +Below is the example of the Cosmos SDK's commands for generating, linting, and formatting protobuf files that can be reused in any applications makefile. + +```go expandable +#!/usr/bin/make -f + +PACKAGES_NOSIMULATION=$(shell go list ./... | grep -v '/simulation') + +PACKAGES_SIMTEST=$(shell go list ./... | grep '/simulation') + +export VERSION := $(shell echo $(shell git describe --tags --always --match "v*") | sed 's/^v/') + +export CMTVERSION := $(shell go list -m github.com/cometbft/cometbft | sed 's:.* ::') + +export COMMIT := $(shell git log -1 --format='%H') + +LEDGER_ENABLED ?= true +BINDIR ?= $(GOPATH)/bin +BUILDDIR ?= $(CURDIR)/build +SIMAPP = ./simapp +MOCKS_DIR = $(CURDIR)/tests/mocks +HTTPS_GIT := https://github.com/cosmos/cosmos-sdk.git +DOCKER := $(shell which docker) + +PROJECT_NAME = $(shell git remote get-url origin | xargs basename -s .git) + +# process build tags +build_tags = netgo + ifeq ($(LEDGER_ENABLED),true) + ifeq ($(OS),Windows_NT) + +GCCEXE = $(shell where gcc.exe 2> NUL) + ifeq ($(GCCEXE),) + $(error gcc.exe not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + else + UNAME_S = $(shell uname -s) + ifeq ($(UNAME_S),OpenBSD) + $(warning OpenBSD detected, disabling ledger support (https://github.com/cosmos/cosmos-sdk/issues/1988)) + +else + GCC = $(shell command -v gcc 2> /dev/null) + ifeq ($(GCC),) + $(error gcc not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + endif + endif +endif + ifeq (secp,$(findstring secp,$(COSMOS_BUILD_OPTIONS))) + +build_tags += libsecp256k1_sdk +endif + ifeq (legacy,$(findstring legacy,$(COSMOS_BUILD_OPTIONS))) + +build_tags += app_v1 +endif + whitespace := +whitespace += $(whitespace) + comma := , +build_tags_comma_sep := $(subst $(whitespace),$(comma),$(build_tags)) + +# process linker flags + +ldflags = -X 
github.com/cosmos/cosmos-sdk/version.Name=sim \ + -X github.com/cosmos/cosmos-sdk/version.AppName=simd \ + -X github.com/cosmos/cosmos-sdk/version.Version=$(VERSION) \ + -X github.com/cosmos/cosmos-sdk/version.Commit=$(COMMIT) \ + -X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)" \ + -X github.com/cometbft/cometbft/version.TMCoreSemVer=$(CMTVERSION) + +# DB backend selection + ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += gcc +endif + ifeq (badgerdb,$(findstring badgerdb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += badgerdb +endif +# handle rocksdb + ifeq (rocksdb,$(findstring rocksdb,$(COSMOS_BUILD_OPTIONS))) + +CGO_ENABLED=1 + build_tags += rocksdb +endif +# handle boltdb + ifeq (boltdb,$(findstring boltdb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += boltdb +endif + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +ldflags += -w -s +endif +ldflags += $(LDFLAGS) + ldflags := $(strip $(ldflags)) + +build_tags += $(BUILD_TAGS) + +build_tags := $(strip $(build_tags)) + +BUILD_FLAGS := -tags "$(build_tags)" -ldflags '$(ldflags)' +# check for nostrip option + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -trimpath +endif + +# Check for debug option + ifeq (debug,$(findstring debug,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -gcflags "all=-N -l" +endif + +all: tools build lint test vulncheck + +# The below include contains the tools and runsim targets. 
+include contrib/devtools/Makefile + +############################################################################### +### Build ### +############################################################################### + +BUILD_TARGETS := build install + +build: BUILD_ARGS=-o $(BUILDDIR)/ + +build-linux-amd64: + GOOS=linux GOARCH=amd64 LEDGER_ENABLED=false $(MAKE) + +build + +build-linux-arm64: + GOOS=linux GOARCH=arm64 LEDGER_ENABLED=false $(MAKE) + +build + +$(BUILD_TARGETS): go.sum $(BUILDDIR)/ + cd ${ + CURRENT_DIR +}/simapp && go $@ -mod=readonly $(BUILD_FLAGS) $(BUILD_ARGS) ./... + +$(BUILDDIR)/: + mkdir -p $(BUILDDIR)/ + +cosmovisor: + $(MAKE) -C tools/cosmovisor cosmovisor + +rosetta: + $(MAKE) -C tools/rosetta rosetta + +confix: + $(MAKE) -C tools/confix confix + +hubl: + $(MAKE) -C tools/hubl hubl + +.PHONY: build build-linux-amd64 build-linux-arm64 cosmovisor rosetta confix + +mocks: $(MOCKS_DIR) + @go install github.com/golang/mock/mockgen@v1.6.0 + sh ./scripts/mockgen.sh +.PHONY: mocks + +vulncheck: $(BUILDDIR)/ + GOBIN=$(BUILDDIR) + +go install golang.org/x/vuln/cmd/govulncheck@latest + $(BUILDDIR)/govulncheck ./... + +$(MOCKS_DIR): + mkdir -p $(MOCKS_DIR) + +distclean: clean tools-clean +clean: + rm -rf \ + $(BUILDDIR)/ \ + artifacts/ \ + tmp-swagger-gen/ \ + .testnets + +.PHONY: distclean clean + +############################################################################### +### Tools & Dependencies ### +############################################################################### + +go.sum: go.mod + echo "Ensure dependencies have not been modified ..." 
>&2 + go mod verify + go mod tidy + +############################################################################### +### Documentation ### +############################################################################### + +godocs: + @echo "--> Wait a few seconds and visit http://localhost:6060/pkg/github.com/cosmos/cosmos-sdk/types" + go install golang.org/x/tools/cmd/godoc@latest + godoc -http=:6060 + +build-docs: + @cd docs && DOCS_DOMAIN=docs.cosmos.network sh ./build-all.sh + +.PHONY: build-docs + +############################################################################### +### Tests & Simulation ### +############################################################################### + +# make init-simapp initializes a single local node network +# it is useful for testing and development +# Usage: make install && make init-simapp && simd start +# Warning: make init-simapp will remove all data in simapp home directory +init-simapp: + ./scripts/init-simapp.sh + +test: test-unit +test-e2e: + $(MAKE) -C tests test-e2e +test-e2e-cov: + $(MAKE) -C tests test-e2e-cov +test-integration: + $(MAKE) -C tests test-integration +test-integration-cov: + $(MAKE) -C tests test-integration-cov +test-all: test-unit test-e2e test-integration test-ledger-mock test-race + +TEST_PACKAGES=./... +TEST_TARGETS := test-unit test-unit-amino test-unit-proto test-ledger-mock test-race test-ledger test-race + +# Test runs-specific rules. To add a new test target, just add +# a new rule, customise ARGS or TEST_PACKAGES ad libitum, and +# append the new rule to the TEST_TARGETS list. 
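+# Example (hypothetical): a target that runs only the bank module tests
+# could be added as:
+#   test-bank: test_tags += cgo ledger test_ledger_mock norace
+#   test-bank: TEST_PACKAGES=./x/bank/...
+# and then appended to the TEST_TARGETS list so it picks up the shared
+# run-tests rule below.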
+test-unit: test_tags += cgo ledger test_ledger_mock norace +test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace +test-ledger: test_tags += cgo ledger norace +test-ledger-mock: test_tags += ledger test_ledger_mock norace +test-race: test_tags += cgo ledger test_ledger_mock +test-race: ARGS=-race +test-race: TEST_PACKAGES=$(PACKAGES_NOSIMULATION) +$(TEST_TARGETS): run-tests + +# check-* compiles and collects tests without running them +# note: go test -c doesn't support multiple packages yet (https://github.com/golang/go/issues/15513) + +CHECK_TEST_TARGETS := check-test-unit check-test-unit-amino +check-test-unit: test_tags += cgo ledger test_ledger_mock norace +check-test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace +$(CHECK_TEST_TARGETS): EXTRA_ARGS=-run=none +$(CHECK_TEST_TARGETS): run-tests + +ARGS += -tags "$(test_tags)" +SUB_MODULES = $(shell find . -type f -name 'go.mod' -print0 | xargs -0 -n1 dirname | sort) + +CURRENT_DIR = $(shell pwd) + +run-tests: + ifneq (,$(shell which tparse 2>/dev/null)) + @echo "Starting unit tests"; \ + finalec=0; \ + for module in $(SUB_MODULES); do \ + cd ${ + CURRENT_DIR +}/$module; \ + echo "Running unit tests for $(grep '^module' go.mod)"; \ + go test -mod=readonly -json $(ARGS) $(TEST_PACKAGES) ./... | tparse; \ + ec=$?; \ + if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \ + done; \ + exit $finalec +else + @echo "Starting unit tests"; \ + finalec=0; \ + for module in $(SUB_MODULES); do \ + cd ${ + CURRENT_DIR +}/$module; \ + echo "Running unit tests for $(grep '^module' go.mod)"; \ + go test -mod=readonly $(ARGS) $(TEST_PACKAGES) ./... ; \ + ec=$?; \ + if [ "$ec" -ne '0' ]; then finalec=$ec; fi; \ + done; \ + exit $finalec +endif + +.PHONY: run-tests test test-all $(TEST_TARGETS) + +test-sim-nondeterminism: + @echo "Running non-determinism test..." 
+ @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \ + -NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h + +# Requires an exported plugin. See store/streaming/README.md for documentation. +# +# example: +# export COSMOS_SDK_ABCI_V1= +# make test-sim-nondeterminism-streaming +# +# Using the built-in examples: +# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file +# make test-sim-nondeterminism-streaming +test-sim-nondeterminism-streaming: + @echo "Running non-determinism-streaming test..." + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \ + -NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h -EnableStreaming=true + +test-sim-custom-genesis-fast: + @echo "Running custom genesis simulation..." + @echo "By default, ${ + HOME +}/.gaiad/config/genesis.json will be used." + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run TestFullAppSimulation -Genesis=${ + HOME +}/.gaiad/config/genesis.json \ + -Enabled=true -NumBlocks=100 -BlockSize=200 -Commit=true -Seed=99 -Period=5 -v -timeout 24h + +test-sim-import-export: runsim + @echo "Running application import/export simulation. This may take several minutes..." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppImportExport + +test-sim-after-import: runsim + @echo "Running application simulation-after-import. This may take several minutes..." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppSimulationAfterImport + +test-sim-custom-genesis-multi-seed: runsim + @echo "Running multi-seed custom genesis simulation..." + @echo "By default, ${ + HOME +}/.gaiad/config/genesis.json will be used." + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Genesis=${ + HOME +}/.gaiad/config/genesis.json -SimAppPkg=. 
-ExitOnFail 400 5 TestFullAppSimulation + +test-sim-multi-seed-long: runsim + @echo "Running long multi-seed application simulation. This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 500 50 TestFullAppSimulation + +test-sim-multi-seed-short: runsim + @echo "Running short multi-seed application simulation. This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 10 TestFullAppSimulation + +test-sim-benchmark-invariants: + @echo "Running simulation invariant benchmarks..." + cd ${ + CURRENT_DIR +}/simapp && @go test -mod=readonly -benchmem -bench=BenchmarkInvariants -run=^$ \ + -Enabled=true -NumBlocks=1000 -BlockSize=200 \ + -Period=1 -Commit=true -Seed=57 -v -timeout 24h + +.PHONY: \ +test-sim-nondeterminism \ +test-sim-nondeterminism-streaming \ +test-sim-custom-genesis-fast \ +test-sim-import-export \ +test-sim-after-import \ +test-sim-custom-genesis-multi-seed \ +test-sim-multi-seed-short \ +test-sim-multi-seed-long \ +test-sim-benchmark-invariants + +SIM_NUM_BLOCKS ?= 500 +SIM_BLOCK_SIZE ?= 200 +SIM_COMMIT ?= true + +test-sim-benchmark: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h + +# Requires an exported plugin. See store/streaming/README.md for documentation. +# +# example: +# export COSMOS_SDK_ABCI_V1= +# make test-sim-benchmark-streaming +# +# Using the built-in examples: +# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file +# make test-sim-benchmark-streaming +test-sim-benchmark-streaming: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" 
+ @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -EnableStreaming=true + +test-sim-profile: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -benchmem -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out + +# Requires an exported plugin. See store/streaming/README.md for documentation. +# +# example: +# export COSMOS_SDK_ABCI_V1= +# make test-sim-profile-streaming +# +# Using the built-in examples: +# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file +# make test-sim-profile-streaming +test-sim-profile-streaming: + @echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!" + @cd ${ + CURRENT_DIR +}/simapp && go test -mod=readonly -benchmem -run=^$ $(.) -bench ^BenchmarkFullAppSimulation$ \ + -Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out -EnableStreaming=true + +.PHONY: test-sim-profile test-sim-benchmark + +test-rosetta: + docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile . + docker-compose -f contrib/rosetta/docker-compose.yaml up --abort-on-container-exit --exit-code-from test_rosetta --build +.PHONY: test-rosetta + +benchmark: + @go test -mod=readonly -bench=. 
$(PACKAGES_NOSIMULATION) +.PHONY: benchmark + +############################################################################### +### Linting ### +############################################################################### + +golangci_lint_cmd=golangci-lint +golangci_version=v1.51.2 + +lint: + @echo "--> Running linter" + @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version) + @./scripts/go-lint-all.bash --timeout=15m + +lint-fix: + @echo "--> Running linter" + @go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version) + @./scripts/go-lint-all.bash --fix + +.PHONY: lint lint-fix + +############################################################################### +### Protobuf ### +############################################################################### + +protoVer=0.13.2 +protoImageName=ghcr.io/cosmos/proto-builder:$(protoVer) + +protoImage=$(DOCKER) + +run --rm -v $(CURDIR):/workspace --workdir /workspace $(protoImageName) + +proto-all: proto-format proto-lint proto-gen + +proto-gen: + @echo "Generating Protobuf files" + @$(protoImage) + +sh ./scripts/protocgen.sh + +proto-swagger-gen: + @echo "Generating Protobuf Swagger" + @$(protoImage) + +sh ./scripts/protoc-swagger-gen.sh + +proto-format: + @$(protoImage) + +find ./ -name "*.proto" -exec clang-format -i { +} \; + +proto-lint: + @$(protoImage) + +buf lint --error-format=json + +proto-check-breaking: + @$(protoImage) + +buf breaking --against $(HTTPS_GIT)#branch=main + +CMT_URL = https://raw.githubusercontent.com/cometbft/cometbft/v0.38.0-alpha.2/proto/tendermint + +CMT_CRYPTO_TYPES = proto/tendermint/crypto +CMT_ABCI_TYPES = proto/tendermint/abci +CMT_TYPES = proto/tendermint/types +CMT_VERSION = proto/tendermint/version +CMT_LIBS = proto/tendermint/libs/bits +CMT_P2P = proto/tendermint/p2p + +proto-update-deps: + @echo "Updating Protobuf dependencies" + + @mkdir -p $(CMT_ABCI_TYPES) + @curl -sSL $(CMT_URL)/abci/types.proto > 
$(CMT_ABCI_TYPES)/types.proto + + @mkdir -p $(CMT_VERSION) + @curl -sSL $(CMT_URL)/version/types.proto > $(CMT_VERSION)/types.proto + + @mkdir -p $(CMT_TYPES) + @curl -sSL $(CMT_URL)/types/types.proto > $(CMT_TYPES)/types.proto + @curl -sSL $(CMT_URL)/types/evidence.proto > $(CMT_TYPES)/evidence.proto + @curl -sSL $(CMT_URL)/types/params.proto > $(CMT_TYPES)/params.proto + @curl -sSL $(CMT_URL)/types/validator.proto > $(CMT_TYPES)/validator.proto + @curl -sSL $(CMT_URL)/types/block.proto > $(CMT_TYPES)/block.proto + + @mkdir -p $(CMT_CRYPTO_TYPES) + @curl -sSL $(CMT_URL)/crypto/proof.proto > $(CMT_CRYPTO_TYPES)/proof.proto + @curl -sSL $(CMT_URL)/crypto/keys.proto > $(CMT_CRYPTO_TYPES)/keys.proto + + @mkdir -p $(CMT_LIBS) + @curl -sSL $(CMT_URL)/libs/bits/types.proto > $(CMT_LIBS)/types.proto + + @mkdir -p $(CMT_P2P) + @curl -sSL $(CMT_URL)/p2p/types.proto > $(CMT_P2P)/types.proto + + $(DOCKER) + +run --rm -v $(CURDIR)/proto:/workspace --workdir /workspace $(protoImageName) + +buf mod update + +.PHONY: proto-all proto-gen proto-swagger-gen proto-format proto-lint proto-check-breaking proto-update-deps + +############################################################################### +### Localnet ### +############################################################################### + +localnet-build-env: + $(MAKE) -C contrib/images simd-env +localnet-build-dlv: + $(MAKE) -C contrib/images simd-dlv + +localnet-build-nodes: + $(DOCKER) + +run --rm -v $(CURDIR)/.testnets:/data cosmossdk/simd \ + testnet init-files --v 4 -o /data --starting-ip-address 192.168.10.2 --keyring-backend=test + docker-compose up -d + +localnet-stop: + docker-compose down + +# localnet-start will run a 4-node testnet locally. 
The nodes are +# based off the docker images in: ./contrib/images/simd-env +localnet-start: localnet-stop localnet-build-env localnet-build-nodes + +# localnet-debug will run a 4-node testnet locally in debug mode +# you can read more about the debug mode here: ./contrib/images/simd-dlv/README.md +localnet-debug: localnet-stop localnet-build-dlv localnet-build-nodes + +.PHONY: localnet-start localnet-stop localnet-debug localnet-build-env localnet-build-dlv localnet-build-nodes + +############################################################################### +### rosetta ### +############################################################################### +# builds rosetta test data dir +rosetta-data: + -docker container rm data_dir_build + docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile . + docker run --name data_dir_build -t rosetta-ci:latest sh /rosetta/data.sh + docker cp data_dir_build:/tmp/data.tar.gz "$(CURDIR)/contrib/rosetta/rosetta-ci/data.tar.gz" + docker container rm data_dir_build +.PHONY: rosetta-data +``` + +The script used to generate the protobuf files can be found in the `scripts/` directory. + +```shell +#!/usr/bin/env bash + +# How to run manually: +# docker build --pull --rm -f "contrib/devtools/Dockerfile" -t cosmossdk-proto:latest "contrib/devtools" +# docker run --rm -v $(pwd):/workspace --workdir /workspace cosmossdk-proto sh ./scripts/protocgen.sh + +set -e + +echo "Generating gogo proto code" +cd proto +proto_dirs=$(find ./cosmos ./amino -path -prune -o -name '*.proto' -print0 | xargs -0 -n1 dirname | sort | uniq) +for dir in $proto_dirs; do + for file in $(find "${dir}" -maxdepth 1 -name '*.proto'); do + # this regex checks if a proto file has its go_package set to cosmossdk.io/api/... 
+  # gogo proto files SHOULD ONLY be generated if this is false
+  # we don't want gogo proto to run for proto files which are natively built for google.golang.org/protobuf
+    if grep -q "option go_package" "$file" && grep -H -o -c 'option go_package.*cosmossdk.io/api' "$file" | grep -q ':0$'; then
+      buf generate --template buf.gen.gogo.yaml $file
+    fi
+  done
+done
+
+cd ..
+
+# generate codec/testdata proto code
+(cd testutil/testdata; buf generate)
+
+# generate baseapp test messages
+(cd baseapp/testutil; buf generate)
+
+# move proto files to the right places
+cp -r github.com/cosmos/cosmos-sdk/* ./
+cp -r cosmossdk.io/** ./
+rm -rf github.com cosmossdk.io
+
+go mod tidy
+
+./scripts/protocgen-pulsar.sh
+
+```
+
+## Buf
+
+[Buf](https://buf.build) is a protobuf tool that abstracts away the need to use the complicated `protoc` toolchain and, on top of that, ensures you are using protobuf in accordance with the majority of the ecosystem. Within the cosmos-sdk repository there are a few files that have a buf prefix. Let's start with the top level and then dive into the various directories.
+
+### Workspace
+
+At the root level directory a workspace is defined using [buf workspaces](https://docs.buf.build/configuration/v1/buf-work-yaml). This helps if there are one or more protobuf-containing directories in your project.
+
+Cosmos SDK example:
+
+```go
+version: v1
+directories:
+  - proto
+```
+
+### Proto Directory
+
+Next is the `proto/` directory where all of our protobuf files live. In here there are many different buf files defined, each serving a different purpose.
+
+```bash
+├── README.md
+├── buf.gen.gogo.yaml
+├── buf.gen.pulsar.yaml
+├── buf.gen.swagger.yaml
+├── buf.lock
+├── buf.md
+├── buf.yaml
+├── cosmos
+└── tendermint
+```
+
+The above diagram shows all the files and directories within the Cosmos SDK `proto/` directory.
+
+#### `buf.gen.gogo.yaml`
+
+`buf.gen.gogo.yaml` defines how the protobuf files should be generated for use within the module. This file uses [gogoproto](https://github.com/gogo/protobuf), a separate generator from the google go-proto generator that makes working with various objects more ergonomic, and it has more performant encode and decode steps.
+
+```go
+version: v1
+plugins:
+  - name: gocosmos
+    out: ..
+ opt: plugins=grpc,Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any + - name: grpc-gateway + out: .. + opt: logtostderr=true,allow_colon_final_segments=true +``` + + +Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code) + + +#### `buf.gen.pulsar.yaml` + +`buf.gen.pulsar.yaml` defines how protobuf files should be generated using the [new golang apiv2 of protobuf](https://go.dev/blog/protobuf-apiv2). This generator is used instead of the google go-proto generator because it has some extra helpers for Cosmos SDK applications and will have more performant encode and decode than the google go-proto generator. You can follow the development of this generator [here](https://github.com/cosmos/cosmos-proto). + +```go expandable +version: v1 +managed: + enabled: true + go_package_prefix: + default: cosmossdk.io/api + except: + - buf.build/googleapis/googleapis + - buf.build/cosmos/gogo-proto + - buf.build/cosmos/cosmos-proto + override: +plugins: + - name: go-pulsar + out: ../api + opt: paths=source_relative + - name: go-grpc + out: ../api + opt: paths=source_relative +``` + + +Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code) + + +#### `buf.gen.swagger.yaml` + +`buf.gen.swagger.yaml` generates the swagger documentation for the query and messages of the chain. This will only define the REST API end points that were defined in the query and msg servers. You can find examples of this [here](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/bank/v1beta1/query.proto#L19) + +```go +version: v1 +plugins: + - name: swagger + out: ../tmp-swagger-gen + opt: logtostderr=true,fqn_for_swagger_name=true,simple_operation_ids=true +``` + + +Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code) + + +#### `buf.lock` + +This is an autogenerated file based off the dependencies required by the `.gen` files. 
There is no need to copy the current one. If you depend on the Cosmos SDK proto definitions, a new entry for the Cosmos SDK will need to be provided; the dependency to use is `buf.build/cosmos/cosmos-sdk`.

```yaml expandable
# Generated by buf. DO NOT EDIT.
version: v1
deps:
  - remote: buf.build
    owner: cosmos
    repository: cosmos-proto
    commit: 04467658e59e44bbb22fe568206e1f70
    digest: shake256:73a640bd60e0c523b0f8237ff34eab67c45a38b64bbbde1d80224819d272dbf316ac183526bd245f994af6608b025f5130483d0133c5edd385531326b5990466
  - remote: buf.build
    owner: cosmos
    repository: gogo-proto
    commit: 88ef6483f90f478fb938c37dde52ece3
    digest: shake256:89c45df2aa11e0cff97b0d695436713db3d993d76792e9f8dc1ae90e6ab9a9bec55503d48ceedd6b86069ab07d3041b32001b2bfe0227fa725dd515ff381e5ba
  - remote: buf.build
    owner: googleapis
    repository: googleapis
    commit: 751cbe31638d43a9bfb6162cd2352e67
    digest: shake256:87f55470d9d124e2d1dedfe0231221f4ed7efbc55bc5268917c678e2d9b9c41573a7f9a557f6d8539044524d9fc5ca8fbb7db05eb81379d168285d76b57eb8a4
  - remote: buf.build
    owner: protocolbuffers
    repository: wellknowntypes
    commit: 3ddd61d1f53d485abd3d3a2b47a62b8e
    digest: shake256:9e6799d56700d0470c3723a2fd027e8b4a41a07085a0c90c58e05f6c0038fac9b7a0170acd7692707a849983b1b8189aa33e7b73f91d68157f7136823115546b
```

#### `buf.yaml`

`buf.yaml` defines the [name of your package](https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.yaml#L3), which [breakage checker](https://docs.buf.build/tour/detect-breaking-changes) to use, and how to [lint your protobuf files](https://buf.build/docs/tutorials/getting-started-with-buf-cli#lint-your-api).

```yaml expandable
# This module represents buf.build/cosmos/cosmos-sdk
version: v1
name: buf.build/cosmos/cosmos-sdk
deps:
  - buf.build/cosmos/cosmos-proto
  - buf.build/cosmos/gogo-proto
  - buf.build/googleapis/googleapis
  - buf.build/protocolbuffers/wellknowntypes
breaking:
  use:
    - FILE
  ignore:
    - testpb
lint:
  use:
    - STANDARD
    - COMMENTS
    - FILE_LOWER_SNAKE_CASE
  except:
    - UNARY_RPC
    - COMMENT_FIELD
    - SERVICE_SUFFIX
    - PACKAGE_VERSION_SUFFIX
    - RPC_REQUEST_STANDARD_NAME
  ignore:
    - tendermint
```

We use a variety of linters for the Cosmos SDK protobuf files. The repo also checks this in CI.

A reference to the GitHub Actions workflow can be found [here](https://github.com/cosmos/cosmos-sdk/blob/main/.github/workflows/proto.yml#L1-L32)

```yaml expandable
name: Protobuf
# Protobuf runs buf (https://buf.build/) lint and check-breakage
# This workflow is only run when a .proto file has been changed
on:
  pull_request:
    paths:
      - "proto/**"

permissions:
  contents: read

jobs:
  lint:
    runs-on: depot-ubuntu-22.04-4
    timeout-minutes: 5
    steps:
      - uses: actions/checkout@v5
      - uses: bufbuild/buf-setup-action@v1.50.0
      - uses: bufbuild/buf-lint-action@v1
        with:
          input: "proto"

  break-check:
    runs-on: depot-ubuntu-22.04-4
    steps:
      - uses: actions/checkout@v5
      - uses: bufbuild/buf-setup-action@v1.50.0
      - uses: bufbuild/buf-breaking-action@v1
        with:
          input: "proto"
          against: "https://github.com/${{ github.repository }}.git#branch=${{ github.event.pull_request.base.ref }},ref=HEAD~1,subdir=proto"
```
diff --git a/docs/sdk/v0.50/documentation/operations/upgrade-guide.mdx b/docs/sdk/v0.50/documentation/operations/upgrade-guide.mdx
new file mode 100644
index 00000000..ba46e60e
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/upgrade-guide.mdx
@@ -0,0 +1,519 @@
---
title: Upgrade Guide
description: >-
  This document provides a full guide for upgrading a Cosmos SDK chain from
  v0.50.x to v0.53.x.
+--- + +This document provides a full guide for upgrading a Cosmos SDK chain from `v0.50.x` to `v0.53.x`. + +This guide includes one **required** change and three **optional** features. + +After completing this guide, applications will have: + +* The `x/protocolpool` module +* The `x/epochs` module +* Unordered Transaction support + +## Table of Contents + +* [App Wiring Changes (REQUIRED)](#app-wiring-changes-required) +* [Adding ProtocolPool Module (OPTIONAL)](#adding-protocolpool-module-optional) + * [ProtocolPool Manual Wiring](#protocolpool-manual-wiring) + * [ProtocolPool DI Wiring](#protocolpool-di-wiring) +* [Adding Epochs Module (OPTIONAL)](#adding-epochs-module-optional) + * [Epochs Manual Wiring](#epochs-manual-wiring) + * [Epochs DI Wiring](#epochs-di-wiring) +* [Enable Unordered Transactions (OPTIONAL)](#enable-unordered-transactions-optional) +* [Upgrade Handler](#upgrade-handler) + +## App Wiring Changes **REQUIRED** + +The `x/auth` module now contains a `PreBlocker` that *must* be set in the module manager's `SetOrderPreBlockers` method. + +```go +app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, / NEW +) +``` + +## Adding ProtocolPool Module **OPTIONAL** + + + +Using an external community pool such as `x/protocolpool` will cause the following `x/distribution` handlers to return an error: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If your services depend on this functionality from `x/distribution`, please update them to use either `x/protocolpool` or your custom external community pool alternatives. + + + +### Manual Wiring + +Import the following: + +```go +import ( + + / ... + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) +``` + +Set the module account permissions. 

```go
maccPerms = map[string][]string{
  / ...
  protocolpooltypes.ModuleName:                nil,
  protocolpooltypes.ProtocolPoolEscrowAccount: nil,
}
```

Add the protocol pool keeper to your application struct.

```go
ProtocolPoolKeeper protocolpoolkeeper.Keeper
```

Add the store key:

```go
keys := storetypes.NewKVStoreKeys(
  / ...
  protocolpooltypes.StoreKey,
)
```

Instantiate the keeper.

Make sure to do this before the distribution module instantiation, as you will pass the keeper there next.

```go
app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper(
  appCodec,
  runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]),
  app.AccountKeeper,
  app.BankKeeper,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
)
```

Pass the protocolpool keeper to the distribution keeper:

```go
app.DistrKeeper = distrkeeper.NewKeeper(
  appCodec,
  runtime.NewKVStoreService(keys[distrtypes.StoreKey]),
  app.AccountKeeper,
  app.BankKeeper,
  app.StakingKeeper,
  authtypes.FeeCollectorName,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
  distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), / NEW
)
```

Add the protocolpool module to the module manager:

```go
app.ModuleManager = module.NewManager(
  / ...
  protocolpool.NewAppModule(appCodec, app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper),
)
```

Add an entry for SetOrderBeginBlockers, SetOrderEndBlockers, SetOrderInitGenesis, and SetOrderExportGenesis.

```go
app.ModuleManager.SetOrderBeginBlockers(
  / must come AFTER distribution.
  distrtypes.ModuleName,
  protocolpooltypes.ModuleName,
)
```

```go
app.ModuleManager.SetOrderEndBlockers(
  / order does not matter.
  protocolpooltypes.ModuleName,
)
```

```go
app.ModuleManager.SetOrderInitGenesis(
  / order does not matter.
  protocolpooltypes.ModuleName,
)
```

```go
app.ModuleManager.SetOrderExportGenesis(
  protocolpooltypes.ModuleName, / must be exported before bank.
+ banktypes.ModuleName, +) +``` + +### DI Wiring + +Note: *as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool.* + +First, set up the keeper for the application. + +Import the protocolpool keeper: + +```go +protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" +``` + +Add the keeper to your application struct: + +```go +ProtocolPoolKeeper protocolpoolkeeper.Keeper +``` + +Add the keeper to the depinject system: + +```go +depinject.Inject( + appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + / ... other modules + &app.ProtocolPoolKeeper, / NEW MODULE! +) +``` + +Next, set up configuration for the module. + +Import the following: + +```go +import ( + + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) +``` + +The protocolpool module has module accounts that handle funds. Add them to the module account permission configuration: + +```go +moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + / ... + { + Account: protocolpooltypes.ModuleName +}, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount +}, +} +``` + +Next, add an entry for BeginBlockers, EndBlockers, InitGenesis, and ExportGenesis. + +```go +BeginBlockers: []string{ + / ... + / must be AFTER distribution. + distrtypes.ModuleName, + protocolpooltypes.ModuleName, +}, +``` + +```go +EndBlockers: []string{ + / ... + / order for protocolpool does not matter. + protocolpooltypes.ModuleName, +}, +``` + +```go +InitGenesis: []string{ + / ... must be AFTER distribution. + distrtypes.ModuleName, + protocolpooltypes.ModuleName, +}, +``` + +```go +ExportGenesis: []string{ + / ... + / Must be exported before x/bank. 
  protocolpooltypes.ModuleName,
  banktypes.ModuleName,
},
```

Lastly, add an entry for protocolpool in the ModuleConfig.

```go
{
  Name:   protocolpooltypes.ModuleName,
  Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{}),
},
```

## Adding Epochs Module **OPTIONAL**

### Manual Wiring

Import the following:

```go
import (
  / ...
  "github.com/cosmos/cosmos-sdk/x/epochs"
  epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper"
  epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types"
)
```

Add the epochs keeper to your application struct:

```go
EpochsKeeper epochskeeper.Keeper
```

Add the store key:

```go
keys := storetypes.NewKVStoreKeys(
  / ...
  epochstypes.StoreKey,
)
```

Instantiate the keeper:

```go
app.EpochsKeeper = epochskeeper.NewKeeper(
  runtime.NewKVStoreService(keys[epochstypes.StoreKey]),
  appCodec,
)
```

Set up hooks for the epochs keeper:

To learn how to write hooks for the epoch keeper, see the [x/epochs README](https://github.com/cosmos/cosmos-sdk/blob/main/x/epochs/README.md)

```go
app.EpochsKeeper.SetHooks(
  epochstypes.NewMultiEpochHooks(
    / insert epoch hooks receivers here
    app.SomeOtherModule,
  ),
)
```

Add the epochs module to the module manager:

```go
app.ModuleManager = module.NewManager(
  / ...
  epochs.NewAppModule(appCodec, app.EpochsKeeper),
)
```

Add entries for SetOrderBeginBlockers and SetOrderInitGenesis:

```go
app.ModuleManager.SetOrderBeginBlockers(
  / ...
  epochstypes.ModuleName,
)
```

```go
app.ModuleManager.SetOrderInitGenesis(
  / ...
  epochstypes.ModuleName,
)
```

### DI Wiring

First, set up the keeper for the application.
+ +Import the epochs keeper: + +```go +epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" +``` + +Add the keeper to your application struct: + +```go +EpochsKeeper epochskeeper.Keeper +``` + +Add the keeper to the depinject system: + +```go +depinject.Inject( + appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + / ... other modules + &app.EpochsKeeper, / NEW MODULE! +) +``` + +Next, set up configuration for the module. + +Import the following: + +```go +import ( + + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" +) +``` + +Add an entry for BeginBlockers and InitGenesis: + +```go +BeginBlockers: []string{ + / ... + epochstypes.ModuleName, +}, +``` + +```go +InitGenesis: []string{ + / ... + epochstypes.ModuleName, +}, +``` + +Lastly, add an entry for epochs in the ModuleConfig: + +```go +{ + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ +}), +}, +``` + +## Enable Unordered Transactions **OPTIONAL** + +To enable unordered transaction support on an application, the `x/auth` keeper must be supplied with the `WithUnorderedTransactions` option. + +Note that unordered transactions require sequence values to be zero, and will **FAIL** if a non-zero sequence value is set. +Please ensure no sequence value is set when submitting an unordered transaction. +Services that rely on prior assumptions about sequence values should be updated to handle unordered transactions. +Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. 
+ +```go +app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), / new option! + ) +``` + +If using dependency injection, update the auth module config. + +```go +{ + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + EnableUnorderedTransactions: true, / remove this line if you do not want unordered transactions. +}), +}, +``` + +By default, unordered transactions use a transaction timeout duration of 10 minutes and a default gas charge of 2240 gas units. +To modify these default values, pass in the corresponding options to the new `SigVerifyOptions` field in `x/auth's` `ante.HandlerOptions`. + +```go +options := ante.HandlerOptions{ + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimoutDuration), +}, +} +``` + +```go +anteDecorators := []sdk.AnteDecorator{ + / ... other decorators ... + ante.NewSigVerificationDecorator(options.AccountKeeper, options.SignModeHandler, options.SigVerifyOptions...), / supply new options +} +``` + +## Upgrade Handler + +The upgrade handler only requires adding the store upgrades for the modules added above. +If your application is not adding `x/protocolpool` or `x/epochs`, you do not need to add the store upgrade. + +```go expandable +/ UpgradeName defines the on-chain upgrade name for the sample SimApp upgrade +/ from v050 to v053. 
/
/ NOTE: This upgrade defines a reference implementation of what an upgrade
/ could look like when an application is migrating from Cosmos SDK version
/ v0.50.x to v0.53.x.
const UpgradeName = "v050-to-v053"

func (app SimApp) RegisterUpgradeHandlers() {
  app.UpgradeKeeper.SetUpgradeHandler(
    UpgradeName,
    func(ctx context.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
      return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM)
    },
  )

  upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
  if err != nil {
    panic(err)
  }

  if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
    storeUpgrades := storetypes.StoreUpgrades{
      Added: []string{
        epochstypes.ModuleName,       / if not adding x/epochs to your chain, remove this line.
        protocolpooltypes.ModuleName, / if not adding x/protocolpool to your chain, remove this line.
      },
    }

    / configure store loader that checks if version == upgradeHeight and applies store upgrades
    app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
  }
}
```
diff --git a/docs/sdk/v0.50/documentation/operations/upgrade-reference.mdx b/docs/sdk/v0.50/documentation/operations/upgrade-reference.mdx
new file mode 100644
index 00000000..395771cf
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/upgrade-reference.mdx
@@ -0,0 +1,234 @@
---
title: Upgrade Reference
description: >-
  This document provides a quick reference for the upgrades from v0.50.x to
  v0.53.x of Cosmos SDK.
---

This document provides a quick reference for the upgrades from `v0.50.x` to `v0.53.x` of Cosmos SDK.

Note, always read the **App Wiring Changes** section for more information on application wiring updates.

🚨Upgrading to v0.53.x will require a **coordinated** chain upgrade.🚨

### TLDR;

Unordered transactions, `x/protocolpool`, and `x/epochs` are the major new features added in v0.53.x.
+ +We also added the ability to add a `CheckTx` handler and enabled ed25519 signature verification. + +For a full list of changes, see the [Changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/CHANGELOG.md). + +### Unordered Transactions + +The Cosmos SDK now supports unordered transactions. *This is an opt-in feature*. + +Clients that use this feature may now submit their transactions in a fire-and-forget manner to chains that enabled unordered transactions. + +To submit an unordered transaction, clients must set the `unordered` flag to +`true` and ensure a reasonable `timeout_timestamp` is set. The `timeout_timestamp` is +used as a TTL for the transaction and provides replay protection. Each transaction's `timeout_timestamp` must be +unique to the account; however, the difference may be as small as a nanosecond. See [ADR-070](docs/sdk/next/documentation/legacy/adr-comprehensive) for more details. + +Note that unordered transactions require sequence values to be zero, and will **FAIL** if a non-zero sequence value is set. +Please ensure no sequence value is set when submitting an unordered transaction. +Services that rely on prior assumptions about sequence values should be updated to handle unordered transactions. +Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. + +#### Enabling Unordered Transactions + +To enable unordered transactions, supply the `WithUnorderedTransactions` option to the `x/auth` keeper: + +```go +app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), / new option! + ) +``` + +If using dependency injection, update the auth module config. 
+ +```go +{ + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + EnableUnorderedTransactions: true, / remove this line if you do not want unordered transactions. +}), +}, +``` + +By default, unordered transactions use a transaction timeout duration of 10 minutes and a default gas charge of 2240 gas units. +To modify these default values, pass in the corresponding options to the new `SigVerifyOptions` field in `x/auth's` `ante.HandlerOptions`. + +```go +options := ante.HandlerOptions{ + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimoutDuration), +}, +} +``` + +```go +anteDecorators := []sdk.AnteDecorator{ + / ... other decorators ... + ante.NewSigVerificationDecorator(options.AccountKeeper, options.SignModeHandler, options.SigVerifyOptions...), / supply new options +} +``` + +### App Wiring Changes + +In this section, we describe the required app wiring changes to run a v0.53.x Cosmos SDK application. + +**These changes are directly applicable to your application wiring.** + +The `x/auth` module now contains a `PreBlocker` that *must* be set in the module manager's `SetOrderPreBlockers` method. + +```go +app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, / NEW +) +``` + +That's it. + +### New Modules + +Below are some **optional** new modules you can include in your chain. +To see a full example of wiring these modules, please check out the [SimApp](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/simapp/app.go). + +#### Epochs + +⚠️Adding this module requires a `StoreUpgrade`⚠️ + +The new, supplemental `x/epochs` module provides Cosmos SDK modules functionality to register and execute custom logic at fixed time-intervals. 
+ +Required wiring: + +* Keeper Instantiation +* StoreKey addition +* Hooks Registration +* App Module Registration +* entry in SetOrderBeginBlockers +* entry in SetGenesisModuleOrder +* entry in SetExportModuleOrder + +#### ProtocolPool + + + +Using `protocolpool` will cause the following `x/distribution` handlers to return an error: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents. + + + +⚠️Adding this module requires a `StoreUpgrade`⚠️ + +The new, supplemental `x/protocolpool` module provides extended functionality for managing and distributing block reward revenue. + +Required wiring: + +* Module Account Permissions + * protocolpooltypes.ModuleName (nil) + * protocolpooltypes.ProtocolPoolEscrowAccount (nil) +* Keeper Instantiation +* StoreKey addition +* Passing the keeper to the Distribution Keeper + * `distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper)` +* App Module Registration +* entry in SetOrderBeginBlockers +* entry in SetOrderEndBlockers +* entry in SetGenesisModuleOrder +* entry in SetExportModuleOrder **before `x/bank`** + +## Custom Minting Function in `x/mint` + +This release introduces the ability to configure a custom mint function in `x/mint`. The minting logic is now abstracted as a `MintFn` with a default implementation that can be overridden. + +### What’s New + +* **Configurable Mint Function:**\ + A new `MintFn` abstraction is introduced. By default, the module uses `DefaultMintFn`, but you can supply your own implementation. + +* **Deprecated InflationCalculationFn Parameter:**\ + The `InflationCalculationFn` argument previously provided to `mint.NewAppModule()` is now ignored and must be `nil`. 
To customize the default minter’s inflation behavior, wrap your custom function with `mintkeeper.DefaultMintFn` and pass it via the `WithMintFn` option:

```go
mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(customInflationFn))
```

### How to Upgrade

1. **Using the Default Minting Function**

   No action is needed if you’re happy with the default behavior. Make sure your application wiring initializes the MintKeeper like this:

```go
mintKeeper := mintkeeper.NewKeeper(
  appCodec,
  storeService,
  stakingKeeper,
  accountKeeper,
  bankKeeper,
  authtypes.FeeCollectorName,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
)
```

2. **Using a Custom Minting Function**

   To use a custom minting function, define it as follows and pass it to your mintKeeper when constructing it:

```go expandable
func myCustomMintFunc(ctx sdk.Context, k *mintkeeper.Keeper) {
  / do minting...
}

/ ...
mintKeeper := mintkeeper.NewKeeper(
  appCodec,
  storeService,
  stakingKeeper,
  accountKeeper,
  bankKeeper,
  authtypes.FeeCollectorName,
  authtypes.NewModuleAddress(govtypes.ModuleName).String(),
  mintkeeper.WithMintFn(myCustomMintFunc), / Use custom minting function
)
```

### Misc Changes

#### Testnet's init-files Command

Some changes were made to `testnet`'s `init-files` command to support our new testing framework, `Systemtest`.

##### Flag Changes

* The flag for validator count was changed from `--v` to `--validator-count` (shorthand: `-v`).

##### Flag Additions

* `--staking-denom` allows changing the default stake denom, `stake`.
* `--commit-timeout` enables changing the commit timeout of the chain.
* `--single-host` enables running a multi-node network on a single host. This bumps each subsequent node's network addresses by 1. For example, node1's gRPC address will be 9090, node2's 9091, etc.
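The `--single-host` offset rule above can be sketched in plain Go. This is an illustration of the documented behavior, not SDK code; the 9090/1317 defaults and the +1-per-node rule are taken from the flag description.

```go
package main

import "fmt"

// portsFor illustrates how --single-host bumps each subsequent node's
// network addresses by one, starting from the default gRPC (9090) and
// API (1317) ports. Hypothetical helper, not part of the SDK.
func portsFor(offset int) (grpcPort, apiPort int) {
	const defaultGRPC, defaultAPI = 9090, 1317
	return defaultGRPC + offset, defaultAPI + offset
}

func main() {
	for offset := 0; offset < 3; offset++ {
		g, a := portsFor(offset)
		fmt.Printf("node%d: grpc=%d api=%d\n", offset+1, g, a)
	}
}
```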
diff --git a/docs/sdk/v0.50/documentation/operations/upgrading.mdx b/docs/sdk/v0.50/documentation/operations/upgrading.mdx
new file mode 100644
index 00000000..0aeb5609
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/operations/upgrading.mdx
@@ -0,0 +1,533 @@
---
title: Upgrading Cosmos SDK
description: >-
  This guide provides instructions for upgrading to specific versions of Cosmos
  SDK. Note, always read the SimApp section for more information on application
  wiring updates.
---

This guide provides instructions for upgrading to specific versions of Cosmos SDK.
Note, always read the **SimApp** section for more information on application wiring updates.

## [v0.50.x](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.50.0)

### Migration to CometBFT (Part 2)

In its previous versions, the Cosmos SDK migrated to CometBFT.
Some functions have been renamed to reflect the naming change.

The following is an exhaustive list:

* `client.TendermintRPC` -> `client.CometRPC`
* `clitestutil.MockTendermintRPC` -> `clitestutil.MockCometRPC`
* `clitestutilgenutil.CreateDefaultTendermintConfig` -> `clitestutilgenutil.CreateDefaultCometConfig`
* Package `client/grpc/tmservice` -> `client/grpc/cmtservice`

Additionally, the commands and flags mentioning `tendermint` have been renamed to `comet`.
These commands and flags are still supported for backward compatibility.

For backward compatibility, the `**/tendermint/**` gRPC services are still supported.

Additionally, the SDK is starting its abstraction from CometBFT Go types through the codebase:

* The usage of the CometBFT logger has been replaced by the Cosmos SDK logger interface (`cosmossdk.io/log.Logger`).
* The usage of `github.com/cometbft/cometbft/libs/bytes.HexByte` has been replaced by `[]byte`.
* Usage of an application genesis (see [genutil](#xgenutil)).
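For example, the logger swap in the first bullet typically looks like the following sketch; `log.NewLogger` is the constructor in `cosmossdk.io/log` as of v0.50, but verify the call against your SDK version.

```go
import (
	"os"

	"cosmossdk.io/log"
)

/ Where a CometBFT logger was constructed before, build the SDK logger
/ instead; it satisfies the cosmossdk.io/log.Logger interface used
/ throughout the SDK.
logger := log.NewLogger(os.Stderr)
```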

#### Enable Vote Extensions


This is an optional feature that is disabled by default.


Once all the code changes required to implement Vote Extensions are in place,
they can be enabled by setting the consensus param `Abci.VoteExtensionsEnableHeight`
to a value greater than zero.

In a new chain, this can be done in the `genesis.json` file.

For existing chains, this can be done in two ways:

* During an upgrade, the value is set in an upgrade handler.
* A governance proposal that changes the consensus param **after a coordinated upgrade has taken place**.

### BaseApp

All ABCI methods now accept a pointer to the request and response types defined
by CometBFT. In addition, they also return errors. An ABCI method should only
return errors in cases where a catastrophic failure has occurred and the application
should halt. However, this is abstracted away from the application developer. Any
handler that an application can define or set that returns an error will gracefully
be handled by `BaseApp` on behalf of the application.

BaseApp calls of `BeginBlock` & `EndBlock` are now private but are still exposed
to the application to define via the `Manager` type. `FinalizeBlock` is public
and should be used in order to test and run operations. This means that although
`BeginBlock` & `EndBlock` no longer exist in the ABCI interface, they are automatically
called by `BaseApp` during `FinalizeBlock`. Specifically, the order of operations
is `BeginBlock` -> `DeliverTx` (for all txs) -> `EndBlock`.

ABCI++ 2.0 also brings `ExtendVote` and `VerifyVoteExtension` ABCI methods. These
methods allow applications to extend and verify pre-commit votes. The Cosmos SDK
allows an application to define handlers for these methods via `ExtendVoteHandler`
and `VerifyVoteExtensionHandler` respectively. Please see [here](/docs/sdk/v0.50/documentation/consensus-block-production/vote-extensions)
for more info.
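As a sketch of where these handlers are wired, the setters below are the BaseApp methods as of v0.50; the handler bodies are placeholders, and the exact signatures should be verified against your SDK version.

```go
/ A sketch of wiring vote-extension handlers onto BaseApp (v0.50 types).
app.SetExtendVoteHandler(func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
	/ produce the application-specific vote extension bytes here
	return &abci.ResponseExtendVote{VoteExtension: []byte("my-extension")}, nil
})

app.SetVerifyVoteExtensionHandler(func(ctx sdk.Context, req *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
	/ validate extensions received from other validators; reject malformed ones
	return &abci.ResponseVerifyVoteExtension{Status: abci.ResponseVerifyVoteExtension_ACCEPT}, nil
})
```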

#### Set PreBlocker

A `SetPreBlocker` method has been added to BaseApp. This is essential for BaseApp to run `PreBlock`, which runs before the begin blockers of other modules and allows consensus parameters to be modified, with the changes visible to the state machine logic that follows.
Read more about other use cases [here](docs/sdk/next/documentation/legacy/adr-comprehensive).

`depinject` / app di users need to add `x/upgrade` in their `app_config.go` / `app.yml`:

```diff
+ PreBlockers: []string{
+   upgradetypes.ModuleName,
+ },
BeginBlockers: []string{
-   upgradetypes.ModuleName,
    minttypes.ModuleName,
}
```

When using (legacy) application wiring, the following must be added to `app.go`:

```diff expandable
+app.ModuleManager.SetOrderPreBlockers(
+   upgradetypes.ModuleName,
+)

app.ModuleManager.SetOrderBeginBlockers(
-   upgradetypes.ModuleName,
)

+ app.SetPreBlocker(app.PreBlocker)

/ ... /

+func (app *SimApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
+  return app.ModuleManager.PreBlock(ctx, req)
+}
```

#### Events

The log section of `abci.TxResult` is not populated in the case of successful
msg(s) execution. Instead, a new attribute is added to all messages indicating
the `msg_index`, which identifies which events and attributes relate to the same
transaction.

`BeginBlock` & `EndBlock` events are now emitted through `FinalizeBlock` but have
an added attribute, `mode=BeginBlock|EndBlock`, to identify whether the event belongs
to `BeginBlock` or `EndBlock`.

### Config files

Confix is a new SDK tool for modifying and migrating configuration of the SDK.
It is the replacement of the `config.Cmd` command from the `client/config` package.

Use the following command to migrate your configuration:

```bash
simd config migrate v0.50
```

If you were using ` config [key]` or ` config [key] [value]` to set and get values from the `client.toml`, replace it with ` config get client [key]` and ` config set client [key] [value]`. The extra verbosity is due to the extra functionalities added in config.

More information about [confix](/docs/sdk/v0.50/documentation/operations/tooling/confix) and how to add it to your application binary can be found in the [documentation](/docs/sdk/v0.50/documentation/operations/tooling/confix).

#### gRPC-Web

gRPC-Web is now listening on the same address and port as the gRPC Gateway API server (default: `localhost:1317`).
The possibility to listen on a different address has been removed, as well as its settings.
Use `confix` to clean up your `app.toml`. An nginx (or similar) reverse proxy can be set up to keep the previous behavior.

#### Database Support

ClevelDB, BoltDB and BadgerDB are no longer supported. To migrate from an unsupported database to a supported database, please use a database migration tool.

### Protobuf

With the deprecation of the Amino JSON codec defined in [cosmos/gogoproto](https://github.com/cosmos/gogoproto) in favor of the protoreflect-powered x/tx/aminojson codec, module developers are encouraged to verify that their messages have the correct protobuf annotations to deterministically produce identical output from both codecs.

For core SDK types, equivalence is asserted by generative testing of [SignableTypes](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/tests/integration/rapidgen/rapidgen.go#L102) in [TestAminoJSON\_Equivalence](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/tests/integration/tx/aminojson/aminojson_test.go#L94).
+
+**TODO: summarize proto annotation requirements.**
+
+#### Stringer
+
+The `gogoproto.goproto_stringer = false` annotation has been removed from most proto files. This means that the `String()` method is now generated for types that previously had this annotation. The generated `String()` method uses `proto.CompactTextString` for *stringifying* structs.
+[Verify](https://github.com/cosmos/cosmos-sdk/pull/13850#issuecomment-1328889651) the usage of the modified `String()` methods and double-check that they are not used in state-machine code.
+
+### SimApp
+
+In this section we describe the changes made in the Cosmos SDK's SimApp.
+**These changes are directly applicable to your application wiring.**
+
+#### Module Assertions
+
+Previously, all modules were required to be set in `OrderBeginBlockers`, `OrderEndBlockers` and `OrderInitGenesis / OrderExportGenesis` in `app.go` / `app_config.go`. This is no longer the case; the assertion has been loosened to only require modules implementing, respectively, the `appmodule.HasBeginBlocker`, `appmodule.HasEndBlocker` and `appmodule.HasGenesis` / `module.HasGenesis` interfaces.
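The loosened assertion relies on plain Go interface checks. The sketch below is a generic, self-contained illustration of the pattern; the interface and module names are stand-ins for the actual SDK types:

```go
package main

import "fmt"

// Stand-ins for the SDK's appmodule.HasBeginBlocker / appmodule.HasEndBlocker.
type HasBeginBlocker interface{ BeginBlock() error }
type HasEndBlocker interface{ EndBlock() error }

// A hypothetical module that only implements a begin blocker.
type MyModule struct{}

func (MyModule) BeginBlock() error { return nil }

// Compile-time assertion: the build breaks if MyModule stops satisfying the interface.
var _ HasBeginBlocker = (*MyModule)(nil)

func main() {
	// At runtime, a manager can detect optional capabilities with type assertions,
	// which is why a module no longer has to be listed in every ordering.
	var m any = MyModule{}
	_, hasBegin := m.(HasBeginBlocker)
	_, hasEnd := m.(HasEndBlocker)
	fmt.Println(hasBegin, hasEnd) // true false
}
```

The `var _ Iface = (*T)(nil)` idiom is the same one shown for module interface assertions later in this guide.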
+
+#### Module wiring
+
+The following modules' `NewKeeper` functions now take a `KVStoreService` instead of a `StoreKey`:
+
+* `x/auth`
+* `x/authz`
+* `x/bank`
+* `x/consensus`
+* `x/crisis`
+* `x/distribution`
+* `x/evidence`
+* `x/feegrant`
+* `x/gov`
+* `x/mint`
+* `x/nft`
+* `x/slashing`
+* `x/upgrade`
+
+**Users using `depinject` / app di do not need any changes; this is abstracted for them.**
+
+Users manually wiring their chain need to use the `runtime.NewKVStoreService` method to create a `KVStoreService` from a `StoreKey`:
+
+```diff
+app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(
+ appCodec,
+- keys[consensusparamtypes.StoreKey],
++ runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]),
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+)
+```
+
+#### Logger
+
+Replace all your CometBFT logger imports with `cosmossdk.io/log`.
+
+Additionally, `depinject` / app di users must now supply a logger through the main `depinject.Supply` function instead of passing it to `appBuilder.Build`.
+
+```diff
+appConfig = depinject.Configs(
+ AppConfig,
+ depinject.Supply(
+ // supply the application options
+ appOpts,
++ logger,
+ ...
+```
+
+```diff
+- app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...)
++ app.App = appBuilder.Build(db, traceStore, baseAppOptions...)
+```
+
+Users manually wiring their chain need to add the logger argument when creating the `x/bank` keeper.
+
+#### Module Basics
+
+Previously, `ModuleBasics` was a global variable used to register all modules' `AppModuleBasic` implementations.
+The global variable has been removed and the basic module manager can now be created from the module manager.
+
+This is automatically done for `depinject` / app di users; however, to supply different app module implementations, pass them via `depinject.Supply` in the main `AppConfig` (`app_config.go`):
+
+```go expandable
+depinject.Supply(
+	// supply custom module basics
+	map[string]module.AppModuleBasic{
+		genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+		govtypes.ModuleName: gov.NewAppModuleBasic(
+			[]govclient.ProposalHandler{
+				paramsclient.ProposalHandler,
+			},
+		),
+	},
+)
+```
+
+Users manually wiring their chain need to use the new `module.NewBasicManagerFromManager` function after the module manager creation, passing a `map[string]module.AppModuleBasic` as argument to optionally override some modules' `AppModuleBasic`.
+
+#### AutoCLI
+
+[`AutoCLI`](/docs/sdk/v0.50/learn/advanced/autocli) has been implemented by the SDK for all its module CLI queries. This means chains must add the following in their `root.go` to enable `AutoCLI` in their application:
+
+```go
+if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil {
+	panic(err)
+}
+```
+
+Where `autoCliOpts` is the autocli options of the app, containing all modules and codecs.
+That value can be injected by depinject ([see root\_v2.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/simapp/simd/cmd/root_v2.go#L49-L67)) or manually provided by the app ([see legacy app.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/simapp/app.go#L636-L655)).
+
+Not doing this will result in all core SDK module queries being excluded from the binary.
+
+Additionally, `AutoCLI` automatically adds custom module commands to the root command for all modules implementing the [`appmodule.AppModule`](https://pkg.go.dev/cosmossdk.io/core/appmodule#AppModule) interface.
+This means that, after ensuring all the used modules implement this interface, the following can be removed from your `root.go`:
+
+```diff
+func txCommand() *cobra.Command {
+ ....
+- appd.ModuleBasics.AddTxCommands(cmd)
+}
+```
+
+```diff
+func queryCommand() *cobra.Command {
+ ....
+- appd.ModuleBasics.AddQueryCommands(cmd)
+}
+```
+
+### Packages
+
+#### Math
+
+References to `types/math.go`, which contained aliases for the `cosmossdk.io/math` package, have been removed.
+Import the `cosmossdk.io/math` package directly instead.
+
+#### Store
+
+References to `types/store.go`, which contained aliases for store types, have been remapped to point to the appropriate `store/types`; hence the `types/store.go` file is no longer needed and has been removed.
+
+##### Extract Store to a standalone module
+
+The `store` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the store imports are now renamed to use `cosmossdk.io/store` instead of `github.com/cosmos/cosmos-sdk/store` across the SDK.
+
+##### Streaming
+
+[ADR-38](docs/sdk/next/documentation/legacy/adr-comprehensive) has been implemented in the SDK.
+
+To continue using state streaming, replace `streaming.LoadStreamingServices` with the following in your `app.go`:
+
+```go
+if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil {
+	panic(err)
+}
+```
+
+#### Client
+
+The return type of the interface method `TxConfig.SignModeHandler()` has been changed from `x/auth/signing.SignModeHandler` to `x/tx/signing.HandlerMap`. This change is transparent to most users, as the `TxConfig` interface is typically implemented by the private `x/auth/tx.config` struct (as returned by `auth.NewTxConfig`), which has been updated to return the new type. Users who have implemented their own `TxConfig` interface will need to update their implementation to return the new type.
+
+##### Textual sign mode
+
+A new sign mode is available in the SDK that produces more human-readable output. It is currently only available on Ledger devices, but will soon be implemented in other UIs.
+
+This sign mode does not allow offline signing.
+
+When using (legacy) application wiring, the following must be added to `app.go` after setting the app's bank keeper:
+
+```go expandable
+enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL)
+txConfigOpts := tx.ConfigOptions{
+	EnabledSignModes:           enabledSignModes,
+	TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper),
+}
+
+txConfig, err := tx.NewTxConfigWithOptions(
+	appCodec,
+	txConfigOpts,
+)
+if err != nil {
+	log.Fatalf("Failed to create new TxConfig with options: %v", err)
+}
+
+app.txConfig = txConfig
+```
+
+When using `depinject` / `app di`, **it's enabled by default** if there's a bank keeper present.
+
+And in the application client (usually `root.go`):
+
+```go expandable
+if !clientCtx.Offline {
+	txConfigOpts.EnabledSignModes = append(txConfigOpts.EnabledSignModes, signing.SignMode_SIGN_MODE_TEXTUAL)
+	txConfigOpts.TextualCoinMetadataQueryFn = txmodule.NewGRPCCoinMetadataQueryFn(clientCtx)
+
+	txConfigWithTextual, err := tx.NewTxConfigWithOptions(
+		codec.NewProtoCodec(clientCtx.InterfaceRegistry),
+		txConfigOpts,
+	)
+	if err != nil {
+		return err
+	}
+
+	clientCtx = clientCtx.WithTxConfig(txConfigWithTextual)
+}
+```
+
+When using `depinject` / `app di`, the tx config should be recreated from the `txConfigOpts` to use `NewGRPCCoinMetadataQueryFn` instead of depending on the bank keeper (which is used in the server).
+
+To learn more, see the [docs](/docs/sdk/v0.50/learn/advanced/transactions#sign_mode_textual) and [ADR-050](docs/sdk/next/documentation/legacy/adr-comprehensive).
+
+### Modules
+
+#### `**all**`
+
+* [RFC 001](docs/sdk/next/documentation/legacy/rfc-overview) has defined a simplification of the message validation process for modules.
+  The `sdk.Msg` interface has been updated to not require the implementation of the `ValidateBasic` method.
+  It is now recommended to validate messages directly in the message server. When validation is performed in the message server, the `ValidateBasic` method on a message is no longer required and can be removed.
+
+* Messages no longer need to implement the `LegacyMsg` interface, and implementations of `GetSignBytes` can be deleted. Because of this change, global legacy Amino codec definitions and their registration in `init()` can safely be removed as well.
+
+* The `AppModuleBasic` interface has been simplified. Defining `GetTxCmd() *cobra.Command` and `GetQueryCmd() *cobra.Command` is no longer required. The module manager detects when module commands are defined. If AutoCLI is enabled, `EnhanceRootCommand()` will add the auto-generated commands to the root command, unless a custom module command is defined, in which case that one is registered instead.
+
+* The following modules' `Keeper` methods now take a `context.Context` instead of `sdk.Context`. Any module that has interfaces for them (like "expected keepers") will need to update and re-generate mocks if needed:
+
+  * `x/authz`
+  * `x/bank`
+  * `x/mint`
+  * `x/crisis`
+  * `x/distribution`
+  * `x/evidence`
+  * `x/gov`
+  * `x/slashing`
+  * `x/upgrade`
+
+* `BeginBlock` and `EndBlock` have changed their signatures, so it is important that any module implementing them is updated accordingly.
+
+```diff
+- BeginBlock(sdk.Context, abci.RequestBeginBlock)
++ BeginBlock(context.Context) error
+```
+
+```diff
+- EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate
++ EndBlock(context.Context) error
+```
+
+In case a module needs to return `abci.ValidatorUpdate` from `EndBlock`, it can use the `HasABCIEndBlock` interface instead.
+
+```diff
+- EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate
++ EndBlock(context.Context) ([]abci.ValidatorUpdate, error)
+```
+
+It is possible to ensure that a module implements the correct interfaces by using compiler assertions in your `x/{moduleName}/module.go`:
+
+```go
+var (
+	_ module.AppModuleBasic      = (*AppModule)(nil)
+	_ module.AppModuleSimulation = (*AppModule)(nil)
+	_ module.HasGenesis          = (*AppModule)(nil)
+
+	_ appmodule.AppModule       = (*AppModule)(nil)
+	_ appmodule.HasBeginBlocker = (*AppModule)(nil)
+	_ appmodule.HasEndBlocker   = (*AppModule)(nil)
+	...
+)
+```
+
+Read more on those interfaces [here](/docs/sdk/v0.50/documentation/module-system/module-manager#application-module-interfaces).
+
+* `GetSigners()` is no longer required to be implemented on `Msg` types. The SDK will automatically infer the signers from the `Signer` field on the message. The signer field is required on all messages unless using a custom signer function.
+
+To find out more, please read the [signer field](/docs/sdk/v0.50/documentation/module-system/protobuf-annotations#signer) & [here](https://github.com/cosmos/cosmos-sdk/blob/7352d0bce8e72121e824297df453eb1059c28da8/docs/docs/build/building-modules/02-messages-and-queries#L40) documentation.
+{/* Link to docs once redeployed */}
+
+#### `x/auth`
+
+For ante handler construction via `ante.NewAnteHandler`, the field `ante.HandlerOptions.SignModeHandler` has been updated to `x/tx/signing.HandlerMap` from `x/auth/signing.SignModeHandler`. Callers typically fetch this value from `client.TxConfig.SignModeHandler()` (which has also changed), so this change should be transparent to most users.
+
+#### `x/capability`
+
+The capability module has been moved to [cosmos/ibc-go](https://github.com/cosmos/ibc-go). IBC v8 will contain the necessary changes to incorporate the new module location.
In your `app.go`, you must import the capability module from the new location:
+
+```diff
++ "github.com/cosmos/ibc-go/modules/capability"
++ capabilitykeeper "github.com/cosmos/ibc-go/modules/capability/keeper"
++ capabilitytypes "github.com/cosmos/ibc-go/modules/capability/types"
+- "github.com/cosmos/cosmos-sdk/x/capability"
+- capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper"
+- capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types"
+```
+
+Similar to previous versions, your module manager must include the capability module.
+
+```go
+app.ModuleManager = module.NewManager(
+	capability.NewAppModule(encodingConfig.Codec, *app.CapabilityKeeper, true),
+	// remaining modules
+)
+```
+
+#### `x/genutil`
+
+The Cosmos SDK has migrated from a CometBFT genesis to an application-managed genesis file.
+The genesis is now fully handled by `x/genutil`. This has no consequences for running chains:
+
+* Importing a CometBFT genesis is still supported.
+* Exporting a genesis now exports the genesis as an application genesis.
+
+When needing to read an application genesis, use the following helpers from the `x/genutil/types` package:
+
+```go
+// AppGenesisFromReader reads the AppGenesis from the reader.
+func AppGenesisFromReader(reader io.Reader) (*AppGenesis, error)
+
+// AppGenesisFromFile reads the AppGenesis from the provided file.
+func AppGenesisFromFile(genFile string) (*AppGenesis, error)
+```
+
+#### `x/gov`
+
+##### Expedited Proposals
+
+The `gov` v1 module now supports expedited governance proposals. When a proposal is expedited, the voting period is shortened to the `ExpeditedVotingPeriod` parameter. An expedited proposal must have a higher voting threshold than a classic proposal; that threshold is defined by the `ExpeditedThreshold` parameter.
+
+##### Cancelling Proposals
+
+The `gov` module now supports cancelling governance proposals.
When a proposal is canceled, all the deposits of the proposal are either burnt or sent to the `ProposalCancelDest` address. The deposit burn rate is determined by a new parameter called `ProposalCancelRatio`.
+
+```text
+1. deposits * proposal_cancel_ratio will be burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, the deposits will be burned.
+2. deposits * (1 - proposal_cancel_ratio) will be sent back to the depositors.
+```
+
+By default, the new `ProposalCancelRatio` parameter is set to `0.5` during migration and `ProposalCancelDest` is set to an empty string (i.e., deposits are burned).
+
+#### `x/evidence`
+
+##### Extract evidence to a standalone module
+
+The `x/evidence` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the evidence imports are now renamed to use `cosmossdk.io/x/evidence` instead of `github.com/cosmos/cosmos-sdk/x/evidence` across the SDK.
+
+#### `x/nft`
+
+##### Extract nft to a standalone module
+
+The `x/nft` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the nft imports are now renamed to use `cosmossdk.io/x/nft` instead of `github.com/cosmos/cosmos-sdk/x/nft` across the SDK.
+
+#### `x/feegrant`
+
+##### Extract feegrant to a standalone module
+
+The `x/feegrant` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the feegrant imports are now renamed to use `cosmossdk.io/x/feegrant` instead of `github.com/cosmos/cosmos-sdk/x/feegrant` across the SDK.
+
+#### `x/upgrade`
+
+##### Extract upgrade to a standalone module
+
+The `x/upgrade` module is extracted to have a separate go.mod file, which allows it to be a standalone module.
+All the upgrade imports are now renamed to use `cosmossdk.io/x/upgrade` instead of `github.com/cosmos/cosmos-sdk/x/upgrade` across the SDK.
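The deposit split described under "Cancelling Proposals" earlier in this section amounts to simple arithmetic. The following is an illustrative sketch with plain integer math; the actual `x/gov` implementation uses the SDK's decimal types, and the function name here is hypothetical:

```go
package main

import "fmt"

// splitCancelledDeposit applies the ProposalCancelRatio rule:
// deposit * ratio is destroyed (burned or sent to ProposalCancelDest),
// and deposit * (1 - ratio) is refunded to depositors.
// The ratio is passed as a fraction to keep the math integral.
func splitCancelledDeposit(deposit, ratioNum, ratioDen int64) (destroyed, refunded int64) {
	destroyed = deposit * ratioNum / ratioDen
	refunded = deposit - destroyed
	return destroyed, refunded
}

func main() {
	// Default migration value: ProposalCancelRatio = 0.5
	destroyed, refunded := splitCancelledDeposit(1_000_000, 1, 2)
	fmt.Println(destroyed, refunded) // 500000 500000
}
```

Note that the two shares always sum back to the original deposit, so no funds are created or lost by the split.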
+
+### Tooling
+
+#### Rosetta
+
+Rosetta has moved to its own [repo](/docs/sdk/v0.50/user/run-node/rosetta) and is no longer imported by the Cosmos SDK SimApp by default.
+Any user interested in using the tool can connect it standalone to any node without needing to add it as part of the node binary.
+
+The Rosetta tool also allows multi-chain connections.
diff --git a/docs/sdk/v0.50/documentation/protocol-development/README.mdx b/docs/sdk/v0.50/documentation/protocol-development/README.mdx
new file mode 100644
index 00000000..f6b36ed8
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/README.mdx
@@ -0,0 +1,26 @@
+---
+title: Specifications
+description: >-
+  This directory contains specifications for the modules of the Cosmos SDK as
+  well as Interchain Standards (ICS) and other specifications.
+---
+
+This directory contains specifications for the modules of the Cosmos SDK as well as Interchain Standards (ICS) and other specifications.
+
+Cosmos SDK applications hold this state in a Merkle store. Updates to
+the store may be made during transactions and at the beginning and end of every
+block.
+
+## Cosmos SDK specifications
+
+* [Store](/docs/sdk/v0.50/learn/advanced/store) - The core Merkle store that holds the state.
+* [Bech32](/docs/sdk/v0.50/documentation/protocol-development/addresses/bech32) - Address format for Cosmos SDK applications.
+
+## Modules specifications
+
+Go to the [module directory](https://docs.cosmos.network/main/modules).
+
+## CometBFT
+
+For details on the underlying blockchain and p2p protocols, see
+the [CometBFT specification](https://github.com/cometbft/cometbft/tree/main/spec).
diff --git a/docs/sdk/v0.50/documentation/protocol-development/SPEC_MODULE.mdx b/docs/sdk/v0.50/documentation/protocol-development/SPEC_MODULE.mdx
new file mode 100644
index 00000000..0fc30607
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/SPEC_MODULE.mdx
@@ -0,0 +1,65 @@
+---
+title: Specification of Modules
+description: >-
+  This file intends to outline the common structure for specifications within
+  this directory.
+---
+
+This file intends to outline the common structure for specifications within
+this directory.
+
+## Tense
+
+For consistency, specs should be written in passive present tense.
+
+## Pseudo-Code
+
+Generally, pseudo-code should be minimized throughout the spec. Often, simple
+bulleted lists which describe a function's operations are sufficient and should
+be considered preferable. In certain instances, due to the complex nature of
+the functionality being described, pseudo-code may be the most suitable form of
+specification. In these cases, use of pseudo-code is permissible, but it should be
+presented in a concise manner, ideally restricted to only the complex
+element as part of a larger description.
+
+## Common Layout
+
+The following generalized `README` structure should be used to break down
+specifications for modules. The following list is nonbinding and all sections are optional.
+
+* `# {Module Name}` - overview of the module
+* `## Concepts` - describe specialized concepts and definitions used throughout the spec
+* `## State` - specify and describe structures expected to be marshalled into the store, and their keys
+* `## State Transitions` - standard state transition operations triggered by hooks, messages, etc.
+* `## Messages` - specify message structure(s) and expected state machine behaviour(s)
+* `## Begin Block` - specify any begin-block operations
+* `## End Block` - specify any end-block operations
+* `## Hooks` - describe available hooks to be called by/from this module
+* `## Events` - list and describe event tags used
+* `## Client` - list and describe CLI commands and gRPC and REST endpoints
+* `## Params` - list all module parameters, their types (in JSON) and examples
+* `## Future Improvements` - describe future improvements of this module
+* `## Tests` - acceptance tests
+* `## Appendix` - supplementary details referenced elsewhere within the spec
+
+### Notation for key-value mapping
+
+Within `## State`, the following notation `->` should be used to describe key-to-value mapping:
+
+```text
+key -> value
+```
+
+To represent byte concatenation, the `|` may be used. In addition, the encoding
+type may be specified, for example:
+
+```text
+0x00 | addressBytes | address2Bytes -> amino(value_object)
+```
+
+Additionally, index mappings may be specified by mapping to the `nil` value, for example:
+
+```text
+0x01 | address2Bytes | addressBytes -> nil
+```
diff --git a/docs/sdk/v0.50/documentation/protocol-development/SPEC_STANDARD.mdx b/docs/sdk/v0.50/documentation/protocol-development/SPEC_STANDARD.mdx
new file mode 100644
index 00000000..b4c6656e
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/SPEC_STANDARD.mdx
@@ -0,0 +1,128 @@
+---
+title: What is an SDK standard?
+---
+
+An SDK standard is a design document describing a particular protocol, standard, or feature expected to be used by the Cosmos SDK. An SDK standard should list the desired properties of the standard, explain the design rationale, and provide a concise but comprehensive technical specification.
The primary author is responsible for pushing the proposal through the standardization process, soliciting input and support from the community, and communicating with relevant stakeholders to ensure (social) consensus.
+
+## Sections
+
+An SDK standard consists of:
+
+* a synopsis,
+* overview and basic concepts,
+* technical specification,
+* history log, and
+* copyright notice.
+
+All top-level sections are required. References should be included inline as links, or tabulated at the bottom of the section if necessary. Included sub-sections should be listed in the order specified below.
+
+### Table Of Contents
+
+Provide a table of contents at the top of the file to assist readers.
+
+### Synopsis
+
+The document should include a brief (\~200 word) synopsis providing a high-level description of and rationale for the specification.
+
+### Overview and basic concepts
+
+This section should include a motivation sub-section and a definitions sub-section if required:
+
+* *Motivation* - A rationale for the existence of the proposed feature, or the proposed changes to an existing feature.
+* *Definitions* - A list of new terms or concepts utilized in the document or required to understand it.
+
+### System model and properties
+
+This section should include an assumptions sub-section if any, the mandatory properties sub-section, and a dependencies sub-section. Note that the first two sub-sections are tightly coupled: how to enforce a property will depend directly on the assumptions made. This sub-section is important to capture the interactions of the specified feature with the "rest of the world", i.e., with other features of the ecosystem.
+
+* *Assumptions* - A list of any assumptions made by the feature designer. It should capture which features are used by the feature under specification, and what we expect from them.
+* *Properties* - A list of the desired properties or characteristics of the specified feature, and the expected effects or failures when the properties are violated. Where relevant, it can also include a list of properties that the feature does not guarantee.
+* *Dependencies* - A list of the features that use the feature under specification and how.
+
+### Technical specification
+
+This is the main section of the document, and should contain protocol documentation, design rationale, required references, and technical details where appropriate.
+The section may have any or all of the following sub-sections, as appropriate to the particular specification. The API sub-section is especially encouraged when appropriate.
+
+* *API* - A detailed description of the feature's API.
+* *Technical Details* - All technical details including syntax, diagrams, semantics, protocols, data structures, algorithms, and pseudocode as appropriate. The technical specification should be detailed enough that separate correct implementations of the specification, written without knowledge of each other, are compatible.
+* *Backwards Compatibility* - A discussion of compatibility (or lack thereof) with previous feature or protocol versions.
+* *Known Issues* - A list of known issues. This sub-section is especially important for specifications of features already in use.
+* *Example Implementation* - A concrete example implementation or description of an expected implementation to serve as the primary reference for implementers.
+
+### History
+
+A specification should include a history section, listing any inspiring documents and a plaintext log of significant changes.
+
+See an example history section [below](#history-1).
+
+### Copyright
+
+A specification should include a copyright section waiving rights via [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+
+## Formatting
+
+### General
+
+Specifications must be written in GitHub-flavoured Markdown.
+
+For a GitHub-flavoured Markdown cheat sheet, see [here](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). For a local Markdown renderer, see [here](https://github.com/joeyespo/grip).
+
+### Language
+
+Specifications should be written in Simple English, avoiding obscure terminology and unnecessary jargon. For excellent examples of Simple English, please see the [Simple English Wikipedia](https://simple.wikipedia.org/wiki/Main_Page).
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in specifications are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
+
+### Pseudocode
+
+Pseudocode in specifications should be language-agnostic and formatted in a simple imperative standard, with line numbers, variables, simple conditional blocks, for loops, and
+English fragments where necessary to explain further functionality such as scheduling timeouts. LaTeX images should be avoided because they are difficult to review in diff form.
+
+Pseudocode for structs can be written in a simple language like TypeScript or Go, as interfaces.
+
+Example Go pseudocode struct:
+
+```go
+type CacheKVStore interface {
+	cache: map[Key]Value
+	parent: KVStore
+	deleted: Key
+}
+```
+
+Pseudocode for algorithms should be written in simple Go, as functions.
+
+Example pseudocode algorithm:
+
+```go expandable
+func get(store CacheKVStore, key Key) Value {
+	value = store.cache.get(key)
+	if value != null {
+		return value
+	} else {
+		value = store.parent.get(key)
+		store.cache.set(key, value)
+		return value
+	}
+}
+```
+
+## History
+
+This specification was significantly inspired by and derived from IBC's [ICS](https://github.com/cosmos/ibc/blob/main/spec/ics-001-ics-standard/README.md), which
+was in turn derived from Ethereum's [EIP 1](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md).
+
+Nov 24, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/docs/sdk/v0.50/documentation/protocol-development/_ics/README.mdx b/docs/sdk/v0.50/documentation/protocol-development/_ics/README.mdx
new file mode 100644
index 00000000..ed0f1589
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/_ics/README.mdx
@@ -0,0 +1,6 @@
+---
+title: Cosmos ICS
+description: ICS030 - Signed Messages
+---
+
+* [ICS030 - Signed Messages](/docs/sdk/v0.50/documentation/protocol-development/_ics/ics-030-signed-messages)
diff --git a/docs/sdk/v0.50/documentation/protocol-development/_ics/ics-030-signed-messages.mdx b/docs/sdk/v0.50/documentation/protocol-development/_ics/ics-030-signed-messages.mdx
new file mode 100644
index 00000000..104b1941
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/_ics/ics-030-signed-messages.mdx
@@ -0,0 +1,194 @@
+---
+title: 'ICS 030: Cosmos Signed Messages'
+---
+
+> TODO: Replace with valid ICS number and possibly move to new location.
+
+* [Changelog](#changelog)
+* [Abstract](#abstract)
+* [Preliminary](#preliminary)
+* [Specification](#specification)
+* [Future Adaptations](#future-adaptations)
+* [API](#api)
+* [References](#references)
+
+## Status
+
+Proposed.
+
+## Changelog
+
+## Abstract
+
+Having the ability to sign messages off-chain has proven to be a fundamental aspect
+of nearly any blockchain. The notion of signing messages off-chain has many
+added benefits such as saving on computational costs and reducing transaction
+throughput and overhead. Within the context of the Cosmos, some of the major
+applications of signing such data include, but are not limited to, providing a
+cryptographically secure and verifiable means of proving validator identity and
+possibly associating it with some other framework or organization.
In addition,
+it is also valuable to be able to sign Cosmos messages with a Ledger or similar HSM device.
+
+A standardized protocol for hashing, signing, and verifying messages that can be
+implemented by the Cosmos SDK and other third-party organizations is needed. Such a
+standardized protocol subscribes to the following:
+
+* Contains a specification of human-readable and machine-verifiable typed structured data
+* Contains a framework for deterministic and injective encoding of structured data
+* Utilizes cryptographically secure hashing and signing algorithms
+* A framework for supporting extensions and domain separation
+* Is invulnerable to chosen ciphertext attacks
+* Has protection against potentially signing transactions a user did not intend to
+
+This specification is only concerned with the rationale and the standardized
+implementation of Cosmos signed messages. It does **not** concern itself with the
+concept of replay attacks, as that is left up to the higher-level application
+implementation. If you view signed messages as a means of authorizing some
+action or data, then such an application would have to either treat this as
+idempotent or have mechanisms in place to reject known signed messages.
+
+## Preliminary
+
+The Cosmos message signing protocol will be parameterized with a cryptographically
+secure hashing algorithm `SHA-256` and a signing algorithm `S` that contains
+the operations `sign` and `verify`, which provide a digital signature over a set
+of bytes and verification of a signature, respectively.
+
+Note that our goal here is not to provide context and reasoning about why these
+algorithms were chosen, apart from the fact that they are the de facto algorithms
+used in CometBFT and the Cosmos SDK and that they satisfy our needs for such
+cryptographic algorithms, such as having resistance to collision and second
+pre-image attacks, as well as being [deterministic](https://en.wikipedia.org/wiki/Hash_function#Determinism) and [uniform](https://en.wikipedia.org/wiki/Hash_function#Uniformity).
+
+## Specification
+
+CometBFT has a well-established protocol for signing messages using a canonical
+JSON representation as defined [here](https://github.com/cometbft/cometbft/blob/master/types/canonical.go).
+
+An example of such a canonical JSON structure is CometBFT's vote structure:
+
+```go
+type CanonicalJSONVote struct {
+	ChainID   string               `json:"@chain_id"`
+	Type      string               `json:"@type"`
+	BlockID   CanonicalJSONBlockID `json:"block_id"`
+	Height    int64                `json:"height"`
+	Round     int                  `json:"round"`
+	Timestamp string               `json:"timestamp"`
+	VoteType  byte                 `json:"type"`
+}
+```
+
+With such canonical JSON structures, the specification requires that they include
+meta fields: `@chain_id` and `@type`. These meta fields are reserved and must be
+included. They are both of type `string`. In addition, fields must be ordered
+in lexicographically ascending order.
+
+For the purposes of signing Cosmos messages, the `@chain_id` field must correspond
+to the Cosmos chain identifier. The user-agent should **refuse** signing if the
+`@chain_id` field does not match the currently active chain! The `@type` field
+must equal the constant `"message"`. The `@type` field corresponds to the type of
+structure the user will be signing in an application. For now, a user is only
+allowed to sign bytes of valid ASCII text ([see here](https://github.com/cometbft/cometbft/blob/v0.37.0/libs/strings/string.go#L35-L64)).
+However, this will change and evolve to support additional application-specific
+structures that are human-readable and machine-verifiable ([see Future Adaptations](#future-adaptations)).
+
+Thus, we can have a canonical JSON structure for signing Cosmos messages using
+the [JSON schema](http://json-schema.org/) specification as follows:
+
+```json expandable
+{
+  "$schema": "http://json-schema.org/draft-04/schema#",
+  "$id": "cosmos/signing/typeData/schema",
+  "title": "The Cosmos signed message typed data schema.",
+  "type": "object",
+  "properties": {
+    "@chain_id": {
+      "type": "string",
+      "description": "The corresponding Cosmos chain identifier.",
+      "minLength": 1
+    },
+    "@type": {
+      "type": "string",
+      "description": "The message type. It must be 'message'.",
+      "enum": [
+        "message"
+      ]
+    },
+    "text": {
+      "type": "string",
+      "description": "The valid ASCII text to sign.",
+      "pattern": "^[\\x20-\\x7E]+$",
+      "minLength": 1
+    }
+  },
+  "required": [
+    "@chain_id",
+    "@type",
+    "text"
+  ]
+}
+```
+
+e.g.
+
+```json
+{
+  "@chain_id": "1",
+  "@type": "message",
+  "text": "Hello, you can identify me as XYZ on keybase."
+}
+```
+
+## Future Adaptations
+
+As applications can vary greatly in domain, it will be vital to support both
+domain separation and human-readable and machine-verifiable structures.
+
+Domain separation will allow application developers to prevent collisions of
+otherwise identical structures. It should be designed to be unique per application
+use and should directly be used in the signature encoding itself.
+
+Human-readable and machine-verifiable structures will allow end users to sign
+more complex structures, apart from just string messages, and still be able to
+know exactly what they are signing (as opposed to signing a bunch of arbitrary bytes).
+
+Thus, in the future, the Cosmos signing message specification will be expected
+to expand upon its canonical JSON structure to include such functionality.
+
+## API
+
+Application developers and designers should formalize a standard set of APIs that
+adhere to the following specification:
+
+***
+
+### **cosmosSignBytes**
+
+Params:
+
+* `data`: the Cosmos signed message canonical JSON structure
+* `address`: the Bech32 Cosmos account address to sign data with
+
+Returns:
+
+* `signature`: the Cosmos signature derived using signing algorithm `S`
+
+***
+
+### Examples
+
+Using `secp256k1` as the DSA, `S`:
+
+```javascript
+data = {
+  "@chain_id": "1",
+  "@type": "message",
+  "text": "I hereby claim I am ABC on Keybase!"
+}
+
+cosmosSignBytes(data, "cosmos1pvsch6cddahhrn5e8ekw0us50dpnugwnlfngt3")
+> "0x7fc4a495473045022100dec81a9820df0102381cdbf7e8b0f1e2cb64c58e0ecda1324543742e0388e41a02200df37905a6505c1b56a404e23b7473d2c0bc5bcda96771d2dda59df6ed2b98f8"
+```
+
+## References

diff --git a/docs/sdk/v0.50/documentation/protocol-development/addresses/README.mdx b/docs/sdk/v0.50/documentation/protocol-development/addresses/README.mdx
new file mode 100644
index 00000000..fb4e38a0
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/addresses/README.mdx
@@ -0,0 +1,5 @@
+---
+title: Addresses spec
+---
+
+* [Bech32](/docs/sdk/v0.50/documentation/protocol-development/addresses/bech32)

diff --git a/docs/sdk/v0.50/documentation/protocol-development/addresses/bech32.mdx b/docs/sdk/v0.50/documentation/protocol-development/addresses/bech32.mdx
new file mode 100644
index 00000000..73091371
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/addresses/bech32.mdx
@@ -0,0 +1,23 @@
+---
+title: Bech32 on Cosmos
+---
+
+The Cosmos network prefers to use the Bech32 address format wherever users must handle binary data. Bech32 encoding provides robust integrity checks on data and the human-readable part (HRP) provides contextual hints that can assist UI developers with providing informative error messages.
+
+In the Cosmos network, keys and addresses may refer to a number of different roles in the network, such as accounts, validators, etc.
+
+## HRP table
+
+| HRP           | Definition                         |
+| ------------- | ---------------------------------- |
+| cosmos        | Cosmos Account Address             |
+| cosmosvalcons | Cosmos Validator Consensus Address |
+| cosmosvaloper | Cosmos Validator Operator Address  |
+
+## Encoding
+
+While all user-facing interfaces to Cosmos software should expose Bech32 interfaces, many internal interfaces encode binary values in hex or base64 encoded form.
+
+To convert between other binary representations of addresses and keys, it is important to first apply the Amino encoding process before Bech32 encoding.
+
+A complete implementation of the Amino serialization format is unnecessary in most cases. Simply prepending bytes from this [table](https://github.com/cometbft/cometbft/blob/main/spec/blockchain/encoding.md) to the byte string payload before Bech32 encoding will be sufficient for a compatible representation.

diff --git a/docs/sdk/v0.50/documentation/protocol-development/store/README.mdx b/docs/sdk/v0.50/documentation/protocol-development/store/README.mdx
new file mode 100644
index 00000000..8ae75dd5
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/store/README.mdx
@@ -0,0 +1,242 @@
+---
+title: Store
+---
+
+The store package defines the interfaces, types and abstractions for Cosmos SDK
+modules to read and write to Merkleized state within a Cosmos SDK application.
+The store package provides many primitives for developers to use in order to
+work with both state storage and state commitment. Below we describe the various
+abstractions.
+
+## Types
+
+### `Store`
+
+The bulk of the store interfaces are defined [here](https://github.com/cosmos/cosmos-sdk/blob/main/store/types/store.go),
+where the base primitive interface, on which the other interfaces build, is
+the `Store` type.
The `Store` interface defines the ability to tell the type of
+the implementing store and the ability to cache wrap via the `CacheWrapper` interface.
+
+### `CacheWrapper` & `CacheWrap`
+
+One of the most important capabilities a store provides is the
+ability to cache wrap. Cache wrapping is essentially the underlying store wrapping
+itself within another store type that performs caching for both reads and writes
+with the ability to flush writes via `Write()`.
+
+### `KVStore` & `CacheKVStore`
+
+One of the most important interfaces that both developers and modules interface
+with, which also provides the basis of most state storage and commitment operations,
+is the `KVStore`. The `KVStore` interface provides basic CRUD abilities and
+prefix-based iteration, including reverse iteration.
+
+Typically, each module has its own dedicated `KVStore` instance, which it can
+get access to via the `sdk.Context` and the use of a pointer-based named key --
+`KVStoreKey`. The `KVStoreKey` provides pseudo-OCAP. How exactly a `KVStoreKey`
+maps to a `KVStore` will be illustrated below through the `CommitMultiStore`.
+
+Note, a `KVStore` cannot directly commit state. Instead, a `KVStore` can be wrapped
+by a `CacheKVStore` which extends a `KVStore` and provides the ability for the
+caller to execute `Write()` which commits state to the underlying state storage.
+Note, this doesn't actually flush writes to disk as writes are held in memory
+until `Commit()` is called on the `CommitMultiStore`.
+
+### `CommitMultiStore`
+
+The `CommitMultiStore` interface exposes the top-level interface that is used
+to manage state commitment and storage by an SDK application and abstracts the
+concept of multiple `KVStore`s which are used by multiple modules. Specifically,
+it supports the following high-level primitives:
+
+* Allows for a caller to retrieve a `KVStore` by providing a `KVStoreKey`.
+* Exposes pruning mechanisms to remove state pinned against a specific height/version
+  in the past.
+* Allows for loading state storage at a particular height/version in the past to
+  provide current head and historical queries.
+* Provides the ability to roll back state to a previous height/version.
+* Provides the ability to load state storage at a particular height/version
+  while also performing store upgrades, which are used during live hard-fork
+  application state migrations.
+* Provides the ability to commit all current accumulated state to disk and performs
+  Merkle commitment.
+
+## Implementation Details
+
+While there are many interfaces that the `store` package provides, there is
+typically a core implementation for each main interface that modules and
+developers interact with that is defined in the Cosmos SDK.
+
+### `iavl.Store`
+
+The `iavl.Store` provides the core implementation for state storage and commitment
+by implementing the following interfaces:
+
+* `KVStore`
+* `CommitStore`
+* `CommitKVStore`
+* `Queryable`
+* `StoreWithInitialVersion`
+
+It allows for all CRUD operations to be performed along with allowing current
+and historical state queries, prefix iteration, and state commitment along with
+Merkle proof operations. The `iavl.Store` also provides the ability to remove
+historical state from the state commitment layer.
+
+An overview of the IAVL implementation can be found [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md).
+It is important to note that the IAVL store provides both state commitment and
+logical storage operations, which comes with drawbacks: the operations mentioned
+above carry various performance impacts, some of which are very drastic.
+
+When dealing with state management in modules and clients, the Cosmos SDK provides
+various layers of abstractions or "store wrapping", where the `iavl.Store` is the
+bottommost layer.
When requesting a store to perform reads or writes in a module,
+the typical abstraction layer in order is defined as follows:
+
+```text
+iavl.Store <- cachekv.Store <- gaskv.Store <- cachemulti.Store <- rootmulti.Store
+```
+
+### Concurrent use of IAVL store
+
+The tree under `iavl.Store` is not safe for concurrent use. It is the
+responsibility of the caller to ensure that concurrent access to the store is
+not performed.
+
+The main issue with concurrent use is when data is written at the same time as
+it's being iterated over. Doing so will cause an irrecoverable fatal error because
+of concurrent reads and writes to an internal map.
+
+Although it's not recommended, you can iterate through values while writing to
+them by disabling "FastNode" **without guarantees that the values being written will
+be returned during the iteration** (if you need this, you might want to reconsider
+the design of your application). This is done by setting `iavl-disable-fastnode`
+to `true` in the config TOML file.
+
+### `cachekv.Store`
+
+The `cachekv.Store` store wraps an underlying `KVStore`, typically an `iavl.Store`,
+and contains an in-memory cache for storing pending writes to the underlying `KVStore`.
+`Set` and `Delete` calls are executed on the in-memory cache, whereas `Has` calls
+are proxied to the underlying `KVStore`.
+
+One of the most important calls to a `cachekv.Store` is `Write()`, which ensures
+that key-value pairs are written to the underlying `KVStore` in a deterministic
+and ordered manner by sorting the keys first. The store keeps track of "dirty"
+keys and uses these to determine what keys to sort. In addition, it also keeps
+track of deleted keys and ensures these are also removed from the underlying
+`KVStore`.
+
+The `cachekv.Store` also provides the ability to perform iteration and reverse
+iteration. Iteration is performed through the `cacheMergeIterator` type and uses
+both the dirty cache and underlying `KVStore` to iterate over key-value pairs.
+
+Note, all calls to CRUD and iteration operations on a `cachekv.Store` are thread-safe.
+
+### `gaskv.Store`
+
+The `gaskv.Store` store provides a simple implementation of a `KVStore`.
+Specifically, it just wraps an existing `KVStore`, such as a cache-wrapped
+`iavl.Store`, and incurs configurable gas costs for CRUD operations via
+`ConsumeGas()` calls defined on the `GasMeter` which exists in a `sdk.Context`
+and then proxies the underlying CRUD call to the underlying store. Note, the
+`GasMeter` is reset on each block.
+
+### `cachemulti.Store` & `rootmulti.Store`
+
+The `rootmulti.Store` acts as an abstraction around a series of stores. Namely,
+it implements the `CommitMultiStore` and `Queryable` interfaces. Through the
+`rootmulti.Store`, an SDK module can request access to a `KVStore` to perform
+state CRUD operations and queries by holding access to a unique `KVStoreKey`.
+
+The `rootmulti.Store` ensures these queries and state operations are performed
+through cache-wrapped instances of `cachekv.Store`, which is described above. The
+`rootmulti.Store` implementation is also responsible for committing all accumulated
+state from each `KVStore` to disk and returning an application state Merkle root.
+
+Queries can be performed to return state data along with associated state
+commitment proofs for both previous heights/versions and the current state root.
+Queries are routed based on store name, i.e. a module, along with other parameters
+which are defined in `abci.RequestQuery`.
+
+The `rootmulti.Store` also provides primitives for pruning data at a given
+height/version from state storage. When a height is committed, the `rootmulti.Store`
+will determine if other previous heights should be considered for removal based
+on the operator's pruning settings defined by `PruningOptions`, which defines
+how many recent versions to keep on disk and the interval at which to remove
+"staged" pruned heights from disk.
During each interval, the staged heights are
+removed from each `KVStore`. Note, it is up to the underlying `KVStore`
+implementation to determine how pruning is actually performed. The `PruningOptions`
+are defined as follows:
+
+```go
+type PruningOptions struct {
+	/ KeepRecent defines how many recent heights to keep on disk.
+	KeepRecent uint64
+
+	/ Interval defines when the pruned heights are removed from disk.
+	Interval uint64
+
+	/ Strategy defines the kind of pruning strategy. See below for more information on each.
+	Strategy PruningStrategy
+}
+```
+
+The Cosmos SDK defines a preset number of pruning "strategies": `default`, `everything`,
+`nothing`, and `custom`.
+
+It is important to note that the `rootmulti.Store` considers each `KVStore` as a
+separate logical store. In other words, they do not share a Merkle tree or
+comparable data structure. This means that when state is committed via
+`rootmulti.Store`, each store is committed in sequence and thus is not atomic.
+
+In terms of store construction and wiring, each Cosmos SDK application contains
+a `BaseApp` instance which internally has a reference to a `CommitMultiStore`
+that is implemented by a `rootmulti.Store`. The application then registers one or
+more `KVStoreKey`s that pertain to a unique module and thus a `KVStore`. Through
+the use of an `sdk.Context` and a `KVStoreKey`, each module can get direct access
+to its respective `KVStore` instance.
+
+Example:
+
+```go expandable
+func NewApp(...) Application {
+	/ ...
+	bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+	bApp.SetCommitMultiStoreTracer(traceStore)
+	bApp.SetVersion(version.Version)
+	bApp.SetInterfaceRegistry(interfaceRegistry)
+
+	/ ...
+	keys := sdk.NewKVStoreKeys(...)
+	transientKeys := sdk.NewTransientStoreKeys(...)
+	memKeys := sdk.NewMemoryStoreKeys(...)
+
+	/ ...
+
+	/ initialize stores
+	app.MountKVStores(keys)
+	app.MountTransientStores(transientKeys)
+	app.MountMemoryStores(memKeys)
+
+	/ ...
+}
+```
+
+The `rootmulti.Store` itself can be cache-wrapped, which returns an instance of a
+`cachemulti.Store`. For each block, `BaseApp` ensures that the proper abstractions
+are created on the `CommitMultiStore`, i.e. ensuring that the `rootmulti.Store`
+is cache-wrapped and uses the resulting `cachemulti.Store` to be set on the
+`sdk.Context` which is then used for block and transaction execution. As a result,
+all state mutations due to block and transaction execution are actually held
+ephemerally until `Commit()` is called by the ABCI client. This concept is further
+expanded upon when the AnteHandler is executed per transaction to ensure state
+is not committed for transactions that failed CheckTx.

diff --git a/docs/sdk/v0.50/documentation/protocol-development/store/interblock-cache.mdx b/docs/sdk/v0.50/documentation/protocol-development/store/interblock-cache.mdx
new file mode 100644
index 00000000..fd532244
--- /dev/null
+++ b/docs/sdk/v0.50/documentation/protocol-development/store/interblock-cache.mdx
@@ -0,0 +1,314 @@
+---
+title: Inter-block Cache
+---
+
+* [Inter-block Cache](#inter-block-cache)
+  * [Synopsis](#synopsis)
+  * [Overview and basic concepts](#overview-and-basic-concepts)
+    * [Motivation](#motivation)
+    * [Definitions](#definitions)
+  * [System model and properties](#system-model-and-properties)
+    * [Assumptions](#assumptions)
+    * [Properties](#properties)
+      * [Thread safety](#thread-safety)
+      * [Crash recovery](#crash-recovery)
+      * [Iteration](#iteration)
+  * [Technical specification](#technical-specification)
+    * [General design](#general-design)
+    * [API](#api)
+      * [CommitKVCacheManager](#commitkvcachemanager)
+      * [CommitKVStoreCache](#commitkvstorecache)
+    * [Implementation details](#implementation-details)
+  * [History](#history)
+  * [Copyright](#copyright)
+
+## Synopsis
+
+The inter-block cache is
an in-memory cache storing (in most cases) immutable state that modules need to read in between blocks. When enabled, all sub-stores of a multi store, e.g., `rootmulti`, are wrapped.
+
+## Overview and basic concepts
+
+### Motivation
+
+The goal of the inter-block cache is to allow SDK modules to have fast access to data that is typically queried during the execution of every block. This is data that does not change often, e.g. module parameters. The inter-block cache wraps each `CommitKVStore` of a multi store such as `rootmulti` with a fixed-size, write-through cache. Caches are not cleared after a block is committed, as opposed to other caching layers such as `cachekv`.
+
+### Definitions
+
+* `Store key` uniquely identifies a store.
+* `KVCache` is a `CommitKVStore` wrapped with a cache.
+* `Cache manager` is a key component of the inter-block cache responsible for maintaining a map from `store keys` to `KVCaches`.
+
+## System model and properties
+
+### Assumptions
+
+This specification assumes that there exists a cache implementation accessible to the inter-block cache feature.
+
+> The implementation uses an adaptive replacement cache (ARC), an enhancement over the standard least-recently-used (LRU) cache in that it tracks both frequency and recency of use.
+
+The inter-block cache requires the cache implementation to provide methods to create a cache, add a key/value pair, remove a key/value pair and retrieve the value associated with a key. In this specification, we assume that a `Cache` feature offers this functionality through the following methods:
+
+* `NewCache(size int)` creates a new cache with `size` capacity and returns it.
+* `Get(key string)` attempts to retrieve a key/value pair from `Cache`. It returns `(value []byte, success bool)`. If `Cache` contains the key, `value` contains the associated value and `success=true`. Otherwise, `success=false` and `value` should be ignored.
+* `Add(key string, value []byte)` inserts a key/value pair into the `Cache`.
+* `Remove(key string)` removes the key/value pair identified by `key` from `Cache`.
+
+The specification also assumes that `CommitKVStore` offers the following API:
+
+* `Get(key string)` attempts to retrieve a key/value pair from `CommitKVStore`.
+* `Set(key string, value []byte)` inserts a key/value pair into the `CommitKVStore`.
+* `Delete(key string)` removes the key/value pair identified by `key` from `CommitKVStore`.
+
+> Ideally, both `Cache` and `CommitKVStore` should be specified in a different document and referenced here.
+
+### Properties
+
+#### Thread safety
+
+Accessing the `cache manager` or a `KVCache` is not thread-safe: no method is guarded with a lock.
+Note that this is true even if the cache implementation is thread-safe.
+
+> For instance, assume that two `Set` operations are executed concurrently on the same key, each writing a different value. After both are executed, the cache and the underlying store may be inconsistent, each storing a different value under the same key.
+
+#### Crash recovery
+
+The inter-block cache transparently delegates `Commit()` to its aggregate `CommitKVStore`. If the
+aggregate `CommitKVStore` supports atomic writes and uses them to guarantee that the store is always in a consistent state on disk, the inter-block cache can be transparently moved to a consistent state when a failure occurs.
+
+> Note that this is the case for `IAVLStore`, the preferred `CommitKVStore`. On commit, it calls `SaveVersion()` on the underlying `MutableTree`. `SaveVersion` writes to disk are atomic via batching. This means that only consistent versions of the store (the tree) are written to the disk. Thus, in case of a failure during a `SaveVersion` call, on recovery from disk, the version of the store will be consistent.
+
+#### Iteration
+
+Iteration over each wrapped store is supported via the embedded `CommitKVStore` interface.
+
+## Technical specification
+
+### General design
+
+The inter-block cache feature is composed of two components: `CommitKVCacheManager` and `CommitKVCache`.
+
+`CommitKVCacheManager` implements the cache manager. It maintains a mapping from a store key to a `KVStore`.
+
+```go
+type CommitKVStoreCacheManager struct {
+	cacheSize uint
+	caches map[string]CommitKVStore
+}
+```
+
+`CommitKVStoreCache` implements a `KVStore`: a write-through cache that wraps a `CommitKVStore`. This means that deletes and writes always happen to both the cache and the underlying `CommitKVStore`. Reads on the other hand first hit the internal cache. During a cache miss, the read is delegated to the underlying `CommitKVStore` and cached.
+
+```go
+type CommitKVStoreCache struct {
+	store CommitKVStore
+	cache Cache
+}
+```
+
+To enable the inter-block cache on `rootmulti`, one needs to instantiate a `CommitKVCacheManager` and set it by calling `SetInterBlockCache()` before calling one of `LoadLatestVersion()`, `LoadLatestVersionAndUpgrade(...)`, `LoadVersionAndUpgrade(...)` or `LoadVersion(version)`.
+
+### API
+
+#### CommitKVCacheManager
+
+The method `NewCommitKVStoreCacheManager` creates a new cache manager and returns it.
+
+| Name | Type    | Description                                                               |
+| ---- | ------- | ------------------------------------------------------------------------- |
+| size | integer | Determines the capacity of each of the KVCaches maintained by the manager |
+
+```go
+func NewCommitKVStoreCacheManager(size uint) CommitKVStoreCacheManager {
+	manager = CommitKVStoreCacheManager{size, make(map[string]CommitKVStore)}
+	return manager
+}
+```
+
+`GetStoreCache` returns a cache from the `CommitKVStoreCacheManager` for a given store key. If no cache exists for the store key, then one is created and set.
+
+| Name     | Type                        | Description                                                                |
+| -------- | --------------------------- | -------------------------------------------------------------------------- |
+| manager  | `CommitKVStoreCacheManager` | The cache manager                                                          |
+| storeKey | string                      | The store key of the store being retrieved                                 |
+| store    | `CommitKVStore`             | The store to cache in case the manager does not have one in its map of caches |
+
+```go expandable
+func GetStoreCache(
+	manager CommitKVStoreCacheManager,
+	storeKey string,
+	store CommitKVStore) CommitKVStore {
+	if manager.caches.has(storeKey) {
+		return manager.caches.get(storeKey)
+	} else {
+		cache = CommitKVStoreCache{store, manager.cacheSize}
+		manager.caches.set(storeKey, cache)
+		return cache
+	}
+}
+```
+
+`Unwrap` returns the underlying CommitKVStore for a given store key.
+
+| Name     | Type                        | Description                                |
+| -------- | --------------------------- | ------------------------------------------ |
+| manager  | `CommitKVStoreCacheManager` | The cache manager                          |
+| storeKey | string                      | The store key of the store being unwrapped |
+
+```go expandable
+func Unwrap(
+	manager CommitKVStoreCacheManager,
+	storeKey string) CommitKVStore {
+	if manager.caches.has(storeKey) {
+		cache = manager.caches.get(storeKey)
+		return cache.store
+	} else {
+		return nil
+	}
+}
+```
+
+`Reset` resets the manager's map of caches.
+
+| Name    | Type                        | Description       |
+| ------- | --------------------------- | ----------------- |
+| manager | `CommitKVStoreCacheManager` | The cache manager |
+
+```go
+func Reset(manager CommitKVStoreCacheManager) {
+	for storeKey := range manager.caches {
+		manager.caches.delete(storeKey)
+	}
+}
+```
+
+#### CommitKVStoreCache
+
+`NewCommitKVStoreCache` creates a new `CommitKVStoreCache` and returns it.
+
+| Name  | Type          | Description                                        |
+| ----- | ------------- | -------------------------------------------------- |
+| store | CommitKVStore | The store to be cached                             |
+| size  | integer       | Determines the capacity of the cache being created |
+
+```go
+func NewCommitKVStoreCache(
+	store CommitKVStore,
+	size uint) CommitKVStoreCache {
+	KVCache = CommitKVStoreCache{store, NewCache(size)}
+	return KVCache
+}
+```
+
+`Get` retrieves a value by key. It first looks in the cache. If the key is not in the cache, the query is delegated to the underlying `CommitKVStore`. In the latter case, the key/value pair is cached. The method returns the value.
+
+| Name    | Type                 | Description                                                         |
+| ------- | -------------------- | ------------------------------------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is retrieved |
+| key     | string               | Key of the key/value pair being retrieved                           |
+
+```go expandable
+func Get(
+	KVCache CommitKVStoreCache,
+	key string) []byte {
+	valueCache, success := KVCache.cache.Get(key)
+	if success {
+		/ cache hit
+		return valueCache
+	} else {
+		/ cache miss
+		valueStore = KVCache.store.Get(key)
+		KVCache.cache.Add(key, valueStore)
+		return valueStore
+	}
+}
+```
+
+`Set` inserts a key/value pair into both the write-through cache and the underlying `CommitKVStore`.
+
+| Name    | Type                 | Description                                                      |
+| ------- | -------------------- | ---------------------------------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` to which the key/value pair is inserted |
+| key     | string               | Key of the key/value pair being inserted                         |
+| value   | \[]byte              | Value of the key/value pair being inserted                       |
+
+```go
+func Set(
+	KVCache CommitKVStoreCache,
+	key string,
+	value []byte) {
+	KVCache.cache.Add(key, value)
+	KVCache.store.Set(key, value)
+}
+```
+
+`Delete` removes a key/value pair from both the write-through cache and the underlying `CommitKVStore`.
+
+| Name    | Type                 | Description                                                       |
+| ------- | -------------------- | ----------------------------------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is deleted |
+| key     | string               | Key of the key/value pair being deleted                           |
+
+```go
+func Delete(
+	KVCache CommitKVStoreCache,
+	key string) {
+	KVCache.cache.Remove(key)
+	KVCache.store.Delete(key)
+}
+```
+
+`CacheWrap` wraps a `CommitKVStoreCache` with another caching layer (`CacheKV`).
+
+> It is unclear whether there is a use case for `CacheWrap`.
+
+| Name    | Type                 | Description                            |
+| ------- | -------------------- | -------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` being wrapped |
+
+```go
+func CacheWrap(
+	KVCache CommitKVStoreCache) {
+	return CacheKV.NewStore(KVCache)
+}
+```
+
+### Implementation details
+
+The inter-block cache implementation uses a fixed-sized adaptive replacement cache (ARC) as the cache. [The ARC implementation](https://github.com/hashicorp/golang-lru/blob/master/arc.go) is thread-safe. ARC is an enhancement over the standard LRU cache in that it tracks both frequency and recency of use. This avoids a burst in access to new entries from evicting the frequently used older entries.
It adds some additional tracking overhead to a standard LRU cache; computationally it is roughly `2x` the cost, and the extra memory overhead is linear with the size of the cache. The default cache size is `1000`.
+
+## History
+
+Dec 20, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).

diff --git a/docs/sdk/v0.50/learn.mdx b/docs/sdk/v0.50/learn.mdx
deleted file mode 100644
index f7c1b0a9..00000000
--- a/docs/sdk/v0.50/learn.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: "Learn"
-description: "Version: v0.50"
----
-
-* [Introduction](/v0.50/learn/intro/overview) - Dive into the fundamentals of Cosmos SDK with an insightful introduction, laying the groundwork for understanding blockchain development. In this section we provide a High-Level Overview of the SDK, then dive deeper into Core concepts such as Application-Specific Blockchains, Blockchain Architecture, and finally we begin to explore what are the main components of the SDK.
-* [Beginner](/v0.50/learn/beginner/app-anatomy) - Start your journey with beginner-friendly resources in the Cosmos SDK's "Learn" section, providing a gentle entry point for newcomers to blockchain development. Here we focus on a little more detail, covering the Anatomy of a Cosmos SDK Application, Transaction Lifecycles, Accounts and lastly, Gas and Fees.
-* [Advanced](/v0.50/learn/advanced/baseapp) - Level up your Cosmos SDK expertise with advanced topics, tailored for experienced developers diving into intricate blockchain application development. We cover the Cosmos SDK on a lower level as we dive into the core of the SDK with BaseApp, Transactions, Context, Node Client (Daemon), Store, Encoding, gRPC, REST, and CometBFT Endpoints, CLI, Events, Telementry, Object-Capability Model, RunTx recovery middleware, Cosmos Blockchain Simulator, Protobuf Documentation, In-Place Store Migrations, Configuration and AutoCLI.
diff --git a/docs/sdk/v0.50/learn/advanced/autocli.mdx b/docs/sdk/v0.50/learn/advanced/autocli.mdx index ef1b82d7..5919fb32 100644 --- a/docs/sdk/v0.50/learn/advanced/autocli.mdx +++ b/docs/sdk/v0.50/learn/advanced/autocli.mdx @@ -1,33 +1,37 @@ --- -title: "AutoCLI" -description: "Version: v0.50" +title: AutoCLI --- - - This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. - +## Synopsis + +This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. - * [CLI](https://docs.cosmos.network/main/core/cli) +**Pre-requisite Readings** + +- [CLI](https://docs.cosmos.network/main/core/cli) + The `autocli` (also known as `client/v2`) package is a [Go library](https://pkg.go.dev/cosmossdk.io/client/v2/autocli) for generating CLI (command line interface) interfaces for Cosmos SDK-based applications. It provides a simple way to add CLI commands to your application by generating them automatically based on your gRPC service definitions. Autocli generates CLI commands and flags directly from your protobuf messages, including options, input parameters, and output parameters. This means that you can easily add a CLI interface to your application without having to manually create and manage commands. -## Overview[​](#overview "Direct link to Overview") +## Overview `autocli` generates CLI commands and flags for each method defined in your gRPC service. By default, it generates commands for each gRPC services. The commands are named based on the name of the service method. For example, given the following protobuf definition for a service: -``` -service MyService { rpc MyMethod(MyRequest) returns (MyResponse) {}} +```protobuf +service MyService { + rpc MyMethod(MyRequest) returns (MyResponse) {} +} ``` For instance, `autocli` would generate a command named `my-method` for the `MyMethod` method. 
The command will have flags for each field in the `MyRequest` message. It is possible to customize the generation of transactions and queries by defining options for each service. -## Application Wiring[​](#application-wiring "Direct link to Application Wiring") +## Application Wiring Here are the steps to use AutoCLI: @@ -36,69 +40,240 @@ Here are the steps to use AutoCLI: 3. Use the `autocli.AppOptions` struct to specify the modules you defined. If you are using `depinject`, it can automatically create an instance of `autocli.AppOptions` based on your app's configuration. 4. Use the `EnhanceRootCommand()` method provided by `autocli` to add the CLI commands for the specified modules to your root command. - - AutoCLI is additive only, meaning *enhancing* the root command will only add subcommands that are not already registered. This means that you can use AutoCLI alongside other custom commands within your app. - + + AutoCLI is additive only, meaning *enhancing* the root command will only add + subcommands that are not already registered. This means that you can use + AutoCLI alongside other custom commands within your app. 
+ +Here's an example of how to use `autocli` in your app: -``` -// Define your app's modulestestModules := map[string]appmodule.AppModule{ "testModule": &TestModule{},}// Define the autocli AppOptionsautoCliOpts := autocli.AppOptions{ Modules: testModules,}// Create the root commandrootCmd := &cobra.Command{ Use: "app",}if err := appOptions.EnhanceRootCommand(rootCmd); err != nil { return err}// Run the root commandif err := rootCmd.Execute(); err != nil { return err} +```go expandable +/ Define your app's modules + testModules := map[string]appmodule.AppModule{ + "testModule": &TestModule{ +}, +} + +/ Define the autocli AppOptions + autoCliOpts := autocli.AppOptions{ + Modules: testModules, +} + +/ Create the root command + rootCmd := &cobra.Command{ + Use: "app", +} + if err := appOptions.EnhanceRootCommand(rootCmd); err != nil { + return err +} + +/ Run the root command + if err := rootCmd.Execute(); err != nil { + return err +} ``` -### Keyring[​](#keyring "Direct link to Keyring") +### Keyring `autocli` uses a keyring for resolving key names and signing transactions. - - AutoCLI provides a better UX than normal CLI as it allows to resolve key names directly from the keyring in all transactions and commands. + +AutoCLI provides a better UX than normal CLI as it allows resolving key names directly from the keyring in all transactions and commands. - ``` - q bank balances alice tx bank send alice bob 1000denom - ``` - +```sh + q bank balances alice + tx bank send alice bob 1000denom +``` + + -The keyring used for resolving names and signing transactions is provided via the `client.Context`. 
+The keyring is then converted to the `client/v2/autocli/keyring` interface. +If no keyring is provided, the `autocli` generated command will not be able to sign transactions, but will still be able to query the chain. - - The Cosmos SDK keyring and Hubl keyring both implement the `client/v2/autocli/keyring` interface, thanks to the following wrapper: + +The Cosmos SDK keyring and Hubl keyring both implement the `client/v2/autocli/keyring` interface, thanks to the following wrapper: + +```go +keyring.NewAutoCLIKeyring(kb) +``` - ``` - keyring.NewAutoCLIKeyring(kb) - ``` - + -## Signing[​](#signing "Direct link to Signing") +## Signing -`autocli` supports signing transactions with the keyring. The [`cosmos.msg.v1.signer` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) defines the signer field of the message. This field is automatically filled when using the `--from` flag or defining the signer as a positional argument. +`autocli` supports signing transactions with the keyring. +The [`cosmos.msg.v1.signer` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) defines the signer field of the message. +This field is automatically filled when using the `--from` flag or defining the signer as a positional argument. - - AutoCLI currently supports only one signer per transaction. - +AutoCLI currently supports only one signer per transaction. -## Module Wiring & Customization[​](#module-wiring--customization "Direct link to Module Wiring & Customization") +## Module Wiring & Customization The `AutoCLIOptions()` method on your module allows you to specify custom commands, sub-commands or flags for each service, as if it were a `cobra.Command` instance, within the `RpcCommandOptions` struct. Defining such options will customize the behavior of the `autocli` command generation, which by default generates a command for each method in your gRPC service. 
-``` -*autocliv1.RpcCommandOptions{ RpcMethod: "Params", // The name of the gRPC service Use: "params", // Command usage that is displayed in the help Short: "Query the parameters of the governance process", // Short description of the command Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) to filter results.", // Long description of the command PositionalArgs: []*autocliv1.PositionalArgDescriptor{ {ProtoField: "params_type", Optional: true}, // Transform a flag into a positional argument },} +```go +*autocliv1.RpcCommandOptions{ + RpcMethod: "Params", / The name of the gRPC service + Use: "params", / Command usage that is displayed in the help + Short: "Query the parameters of the governance process", / Short description of the command + Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) + +to filter results.", / Long description of the command + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "params_type", + Optional: true +}, / Transform a flag into a positional argument +}, +} ``` -### Specifying Subcommands[​](#specifying-subcommands "Direct link to Specifying Subcommands") +### Specifying Subcommands By default, `autocli` generates a command for each method in your gRPC service. However, you can specify subcommands to group related commands together. To specify subcommands, use the `autocliv1.ServiceCommandDescriptor` struct. This example shows how to use the `autocliv1.ServiceCommandDescriptor` struct to group related commands together and specify subcommands in your gRPC service by defining an instance of `autocliv1.ModuleOptions` in your `autocli.go`. -x/gov/autocli.go - -``` -loading... 
+```go expandable +package gov + +import ( + + "fmt" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + govv1 "cosmossdk.io/api/cosmos/gov/v1" + govv1beta1 "cosmossdk.io/api/cosmos/gov/v1beta1" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "Params", + Use: "params", + Short: "Query the parameters of the governance process", + Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) + +to filter results.", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "params_type", + Optional: true +}, +}, +}, + { + RpcMethod: "Proposals", + Use: "proposals", + Short: "Query proposals with optional filters", + Example: fmt.Sprintf("%[1]s query gov proposals --depositor cosmos1...\n%[1]s query gov proposals --voter cosmos1...\n%[1]s query gov proposals --proposal-status (PROPOSAL_STATUS_DEPOSIT_PERIOD|PROPOSAL_STATUS_VOTING_PERIOD|PROPOSAL_STATUS_PASSED|PROPOSAL_STATUS_REJECTED|PROPOSAL_STATUS_FAILED)", version.AppName), +}, + { + RpcMethod: "Proposal", + Use: "proposal [proposal-id]", + Short: "Query details of a single proposal", + Example: fmt.Sprintf("%s query gov proposal 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Vote", + Use: "vote [proposal-id] [voter-addr]", + Short: "Query details of a single vote", + Example: fmt.Sprintf("%s query gov vote 1 cosmos1...", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "voter" +}, +}, +}, + { + RpcMethod: "Votes", + Use: "votes [proposal-id]", + Short: 
"Query votes of a single proposal", + Example: fmt.Sprintf("%s query gov votes 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Deposit", + Use: "deposit [proposal-id] [depositer-addr]", + Short: "Query details of a deposit", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "depositor" +}, +}, +}, + { + RpcMethod: "Deposits", + Use: "deposits [proposal-id]", + Short: "Query deposits on a proposal", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "TallyResult", + Use: "tally [proposal-id]", + Short: "Query the tally of a proposal vote", + Example: fmt.Sprintf("%s query gov tally 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Constitution", + Use: "constitution", + Short: "Query the current chain constitution", +}, +}, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Query_ServiceDesc.ServiceName +}, +}, +}, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Msg_ServiceDesc.ServiceName, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Msg_ServiceDesc.ServiceName +}, +}, +}, +} +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/x/gov/autocli.go#L94-L97) - -### Positional Arguments[​](#positional-arguments "Direct link to Positional Arguments") +### Positional Arguments By default `autocli` generates a flag for each field in your protobuf message. However, you can choose to use positional arguments instead of flags for certain fields. 
@@ -106,73 +281,393 @@ To add positional arguments to a command, use the `autocliv1.PositionalArgDescri Here's an example of how to define a positional argument for the `Account` method of the `auth` service: -x/auth/autocli.go - +```go expandable +package auth + +import ( + + "fmt" + + authv1beta1 "cosmossdk.io/api/cosmos/auth/v1beta1" + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + _ "cosmossdk.io/api/cosmos/crypto/secp256k1" / register to that it shows up in protoregistry.GlobalTypes + _ "cosmossdk.io/api/cosmos/crypto/secp256r1" / register to that it shows up in protoregistry.GlobalTypes + + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: authv1beta1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "Accounts", + Use: "accounts", + Short: "Query all the accounts", +}, + { + RpcMethod: "Account", + Use: "account [address]", + Short: "Query account by address", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address" +}}, +}, + { + RpcMethod: "AccountInfo", + Use: "account-info [address]", + Short: "Query account info which is common to all account types.", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address" +}}, +}, + { + RpcMethod: "AccountAddressByID", + Use: "address-by-acc-num [acc-num]", + Short: "Query account address by account number", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "id" +}}, +}, + { + RpcMethod: "ModuleAccounts", + Use: "module-accounts", + Short: "Query all module accounts", +}, + { + RpcMethod: "ModuleAccountByName", + Use: "module-account [module-name]", + Short: "Query module account info by module name", + Example: fmt.Sprintf("%s q auth module-account gov", version.AppName), + 
PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "name" +}}, +}, + { + RpcMethod: "AddressBytesToString", + Use: "address-bytes-to-string [address-bytes]", + Short: "Transform an address bytes to string", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address_bytes" +}}, +}, + { + RpcMethod: "AddressStringToBytes", + Use: "address-string-to-bytes [address-string]", + Short: "Transform an address string to bytes", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address_string" +}}, +}, + { + RpcMethod: "Bech32Prefix", + Use: "bech32-prefix", + Short: "Query the chain bech32 prefix (if applicable)", +}, + { + RpcMethod: "Params", + Use: "params", + Short: "Query the current auth parameters", +}, +}, +}, + / Tx is purposely left empty, as the only tx is MsgUpdateParams which is gov gated. +} +} ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/x/auth/autocli.go#L25-L30) Then the command can be used as follows, instead of having to specify the `--address` flag: -``` +```bash query auth account cosmos1abcd...xyz ``` -### Customising Flag Names[​](#customising-flag-names "Direct link to Customising Flag Names") +### Customising Flag Names By default, `autocli` generates flag names based on the names of the fields in your protobuf message. However, you can customise the flag names by providing a `FlagOptions`. This parameter allows you to specify custom names for flags based on the names of the message fields. 
For example, if you have a message with the fields `test` and `test1`, you can use the following naming options to customise the flags: -``` -autocliv1.RpcCommandOptions{ FlagOptions: map[string]*autocliv1.FlagOptions{ "test": { Name: "custom_name", }, "test1": { Name: "other_name", }, }, } +```go +autocliv1.RpcCommandOptions{ + FlagOptions: map[string]*autocliv1.FlagOptions{ + "test": { + Name: "custom_name", +}, + "test1": { + Name: "other_name", +}, +}, +} ``` `FlagOptions` is defined like subcommands in the `AutoCLIOptions()` method on your module. -### Combining AutoCLI with Other Commands Within A Module[​](#combining-autocli-with-other-commands-within-a-module "Direct link to Combining AutoCLI with Other Commands Within A Module") +### Combining AutoCLI with Other Commands Within A Module AutoCLI can be used alongside other commands within a module. For example, the `gov` module uses AutoCLI to generate commands for the `query` subcommand, but also defines custom commands for the `proposer` subcommands. To enable this behavior, set the `EnhanceCustomCommand` field to `true` in `AutoCLIOptions()` for the command type (queries and/or transactions) you want to enhance. -x/gov/autocli.go - +```go expandable +package gov + +import ( + + "fmt" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + govv1 "cosmossdk.io/api/cosmos/gov/v1" + govv1beta1 "cosmossdk.io/api/cosmos/gov/v1beta1" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "Params", + Use: "params", + Short: "Query the parameters of the governance process", + Long: "Query the parameters of the governance process. 
Specify specific param types (voting|tallying|deposit) + +to filter results.", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "params_type", + Optional: true +}, +}, +}, + { + RpcMethod: "Proposals", + Use: "proposals", + Short: "Query proposals with optional filters", + Example: fmt.Sprintf("%[1]s query gov proposals --depositor cosmos1...\n%[1]s query gov proposals --voter cosmos1...\n%[1]s query gov proposals --proposal-status (PROPOSAL_STATUS_DEPOSIT_PERIOD|PROPOSAL_STATUS_VOTING_PERIOD|PROPOSAL_STATUS_PASSED|PROPOSAL_STATUS_REJECTED|PROPOSAL_STATUS_FAILED)", version.AppName), +}, + { + RpcMethod: "Proposal", + Use: "proposal [proposal-id]", + Short: "Query details of a single proposal", + Example: fmt.Sprintf("%s query gov proposal 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Vote", + Use: "vote [proposal-id] [voter-addr]", + Short: "Query details of a single vote", + Example: fmt.Sprintf("%s query gov vote 1 cosmos1...", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "voter" +}, +}, +}, + { + RpcMethod: "Votes", + Use: "votes [proposal-id]", + Short: "Query votes of a single proposal", + Example: fmt.Sprintf("%s query gov votes 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Deposit", + Use: "deposit [proposal-id] [depositer-addr]", + Short: "Query details of a deposit", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "depositor" +}, +}, +}, + { + RpcMethod: "Deposits", + Use: "deposits [proposal-id]", + Short: "Query deposits on a proposal", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "TallyResult", + Use: "tally [proposal-id]", + 
Short: "Query the tally of a proposal vote", + Example: fmt.Sprintf("%s query gov tally 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Constitution", + Use: "constitution", + Short: "Query the current chain constitution", +}, +}, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Query_ServiceDesc.ServiceName +}, +}, + EnhanceCustomCommand: true, / We still have manual commands in gov that we want to keep +}, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Msg_ServiceDesc.ServiceName, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Msg_ServiceDesc.ServiceName +}, +}, +}, +} +} ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/fa4d87ef7e6d87aaccc94c337ffd2fe90fcb7a9d/x/gov/autocli.go#L98) If not set to true, `AutoCLI` will not generate commands for the module if there are already commands registered for the module (when `GetTxCmd()` or `GetQueryCmd()` are defined). -### Skip a command[​](#skip-a-command "Direct link to Skip a command") +### Skip a command AutoCLI automatically skips unsupported commands when the [`cosmos_proto.method_added_in` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) is present. 
Additionally, a command can be manually skipped using the `autocliv1.RpcCommandOptions`: -``` -*autocliv1.RpcCommandOptions{ RpcMethod: "Params", // The name of the gRPC service Skip: true,} +```go +*autocliv1.RpcCommandOptions{ + RpcMethod: "Params", / The name of the gRPC service + Skip: true, +} ``` -### Use AutoCLI for non module commands[​](#use-autocli-for-non-module-commands "Direct link to Use AutoCLI for non module commands") +### Use AutoCLI for non-module commands It is possible to use `AutoCLI` for non-module commands. The trick is still to implement the `appmodule.Module` interface and append it to the `appOptions.ModuleOptions` map. For example, here is how the SDK does it for `cometbft` gRPC commands: -v2.0.0-beta.1/client/grpc/cmtservice/autocli.go - -``` -loading... +```go expandable +package cmtservice + +import ( + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + cmtv1beta1 "cosmossdk.io/api/cosmos/base/tendermint/v1beta1" +) + +var CometBFTAutoCLIDescriptor = &autocliv1.ServiceCommandDescriptor{ + Service: cmtv1beta1.Service_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "GetNodeInfo", + Use: "node-info", + Short: "Query the current node info", +}, + { + RpcMethod: "GetSyncing", + Use: "syncing", + Short: "Query node syncing status", +}, + { + RpcMethod: "GetLatestBlock", + Use: "block-latest", + Short: "Query for the latest committed block", +}, + { + RpcMethod: "GetBlockByHeight", + Use: "block-by-height [height]", + Short: "Query for a committed block by height", + Long: "Query for a specific committed block using the CometBFT RPC `block_by_height` method", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "height" +}}, +}, + { + RpcMethod: "GetLatestValidatorSet", + Use: "validator-set", + Alias: []string{"validator-set-latest", "comet-validator-set", "cometbft-validator-set", "tendermint-validator-set" +}, + Short: "Query for the latest validator set", +}, + { + 
RpcMethod: "GetValidatorSetByHeight", + Use: "validator-set-by-height [height]", + Short: "Query for a validator set by height", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "height" +}}, +}, + { + RpcMethod: "ABCIQuery", + Skip: true, +}, +}, +} + +/ NewCometBFTCommands is a fake `appmodule.Module` to be considered as a module +/ and be added in AutoCLI. +func NewCometBFTCommands() *cometModule { /nolint:revive / fake module and limiting import of core + return &cometModule{ +} +} + +type cometModule struct{ +} + +func (m cometModule) + +IsOnePerModuleType() { +} + +func (m cometModule) + +IsAppModule() { +} + +func (m cometModule) + +Name() + +string { + return "comet" +} + +func (m cometModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: CometBFTAutoCLIDescriptor, +} +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/client/v2.0.0-beta.1/client/grpc/cmtservice/autocli.go#L52-L71) - -## Summary[​](#summary "Direct link to Summary") +## Summary `autocli` lets you generate a CLI for your Cosmos SDK-based applications without any cobra boilerplate. It allows you to easily generate CLI commands and flags from your protobuf messages, and provides many options for customising the behavior of your CLI application. diff --git a/docs/sdk/v0.50/learn/advanced/baseapp.mdx b/docs/sdk/v0.50/learn/advanced/baseapp.mdx index 523094fb..15711348 100644 --- a/docs/sdk/v0.50/learn/advanced/baseapp.mdx +++ b/docs/sdk/v0.50/learn/advanced/baseapp.mdx @@ -1,176 +1,1540 @@ --- -title: "BaseApp" -description: "Version: v0.50" +title: BaseApp --- - - This document describes `BaseApp`, the abstraction that implements the core functionalities of a Cosmos SDK application. - +## Synopsis + +This document describes `BaseApp`, the abstraction that implements the core functionalities of a Cosmos SDK application. 
- * [Anatomy of a Cosmos SDK application](/v0.50/learn/beginner/app-anatomy) - * [Lifecycle of a Cosmos SDK transaction](/v0.50/learn/beginner/tx-lifecycle) +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.50/learn/beginner/app-anatomy) +- [Lifecycle of a Cosmos SDK transaction](/docs/sdk/v0.50/learn/beginner/tx-lifecycle) + -## Introduction[​](#introduction "Direct link to Introduction") +## Introduction `BaseApp` is a base type that implements the core of a Cosmos SDK application, namely: -* The [Application Blockchain Interface](#main-abci-messages), for the state-machine to communicate with the underlying consensus engine (e.g. CometBFT). -* [Service Routers](#service-routers), to route messages and queries to the appropriate module. -* Different [states](#state-updates), as the state-machine can have different volatile states updated based on the ABCI message received. +- The [Application Blockchain Interface](#main-abci-messages), for the state-machine to communicate with the underlying consensus engine (e.g. CometBFT). +- [Service Routers](#service-routers), to route messages and queries to the appropriate module. +- Different [states](#state-updates), as the state-machine can have different volatile states updated based on the ABCI message received. -The goal of `BaseApp` is to provide the fundamental layer of a Cosmos SDK application that developers can easily extend to build their own custom application. Usually, developers will create a custom type for their application, like so: +The goal of `BaseApp` is to provide the fundamental layer of a Cosmos SDK application +that developers can easily extend to build their own custom application. 
Usually, +developers will create a custom type for their application, like so: -``` -type App struct { // reference to a BaseApp *baseapp.BaseApp // list of application store keys // list of application keepers // module manager} +```go +type App struct { + / reference to a BaseApp + *baseapp.BaseApp + + / list of application store keys + + / list of application keepers + + / module manager +} ``` -Extending the application with `BaseApp` gives the former access to all of `BaseApp`'s methods. This allows developers to compose their custom application with the modules they want, while not having to concern themselves with the hard work of implementing the ABCI, the service routers and state management logic. +Extending the application with `BaseApp` gives the former access to all of `BaseApp`'s methods. +This allows developers to compose their custom application with the modules they want, while not +having to concern themselves with the hard work of implementing the ABCI, the service routers and state +management logic. -## Type Definition[​](#type-definition "Direct link to Type Definition") +## Type Definition The `BaseApp` type holds many important parameters for any Cosmos SDK based application. 
-baseapp/baseapp.go +```go expandable +package baseapp + +import ( + + "context" + "fmt" + "sort" + "strconv" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "golang.org/x/exp/maps" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +type ( + execMode uint8 + + / StoreLoader defines a customizable function to control how we load the + / CommitMultiStore from disk. This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. + StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. 
+type BaseApp struct { + / initialized on creation + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional, e.g. for tips + + initChainer sdk.InitChainer / ABCI InitChain handler + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. 
dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - voteExtensionState: Used for ExtendVote and VerifyVoteExtension, which is + / set based on the previous block's state. This state is never committed. In + / case of multiple rounds, the state is always reset to the previous block's + / state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + voteExtensionState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. + interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. 
+ minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. 
+func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs base app will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. 
+func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) -``` -loading... -``` +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. +func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. 
+func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := maps.Keys(keys) + +sort.Strings(skeys) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. +func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. +func (app *BaseApp) + +CommitMultiStore() + +storetypes.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. 
+func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ ChainID returns the chainID of the app. +func (app *BaseApp) + +ChainID() + +string { + return app.chainID +} + +/ AnteHandler returns the AnteHandler of the app. +func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. 
+func (app *BaseApp) + +Init() -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/baseapp.go#L58-L182) +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + emptyHeader := cmtproto.Header{ + ChainID: app.chainID +} + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + +app.Seal() + if app.cms == nil { + return errors.New("commit multi-store must not be nil") +} + +return app.cms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. 
+func (app *BaseApp) + +setState(mode execMode, header cmtproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger).WithStreamingManager(app.streamingManager), +} + switch mode { + case execModeCheck: + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + +app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeVoteExtension: + app.voteExtensionState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ GetFinalizeBlockStateCtx returns the Context associated with the FinalizeBlock +/ state. This Context can be used to write data derived from processing vote +/ extensions to application state during ProcessProposal. +/ +/ NOTE: +/ - Do NOT use or write to state using this Context unless you intend for +/ that state to be committed. +/ - Do NOT use or write to state using this Context on the first block. +func (app *BaseApp) + +GetFinalizeBlockStateCtx() + +sdk.Context { + return app.finalizeBlockState.ctx +} + +/ SetCircuitBreaker sets the circuit breaker for the BaseApp. +/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. +func (app *BaseApp) + +SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") +} + +app.msgServiceRouter.SetCircuit(cb) +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. 
+func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) + +cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ +} + +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(fmt.Errorf("consensus key is nil: %w", err)) +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the BaseApp's param +/ store. +/ +/ NOTE: We're explicitly not storing the CometBFT app_version in the param store. +/ It's stored instead in the x/upgrade store, with its own bump logic. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. 
+func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. + expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. 
+func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return err +} + +} + +return nil +} + +func (app *BaseApp) + +getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState +} +} + +func (app *BaseApp) + +getBlockGasMeter(ctx sdk.Context) + +storetypes.GasMeter { + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) +} + +return storetypes.NewInfiniteGasMeter() +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode execMode, txBytes []byte) + +sdk.Context { + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.ctx. + WithTxBytes(txBytes) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. 
+func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]interface{ +}{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(storetypes.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +func (app *BaseApp) + +beginBlock(req *abci.RequestFinalizeBlock) + +sdk.BeginBlock { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.ctx) + if err != nil { + panic(err) +} + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" +}, + ) +} + +resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) +} + +return resp +} + +func (app *BaseApp) + +deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + +return resp +} + +resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), 
+} + +return resp +} + +/ endBlock is an application-defined function that is called after transactions +/ have been processed in FinalizeBlock. +func (app *BaseApp) + +endBlock(ctx context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.ctx) + if err != nil { + panic(err) +} + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" +}, + ) +} + +eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + +endblock = eb +} + +return endblock, nil +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +func (app *BaseApp) + +runTx(mode execMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). + if mode == execModeFinalize { + defer consumeBlockGas() +} + +tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. 
+ / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, err +} + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + if mode == execModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err +} + +} + +else if mode == execModeFinalize { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. 
+ msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) +} + if err == nil { + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if err != nil { + return gInfo, nil, anteEvents, err +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) +} + +events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + + +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
    return proto.Marshal(&sdk.TxMsgData{
	MsgResponses: msgResponses
})
}

func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message)

sdk.Events {
	eventMsgName := sdk.MsgTypeURL(msg)
	msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))

	/ we set the signer attribute as the sender
	signers, err := cdc.GetMsgV2Signers(msgV2)
	if err != nil {
		panic(err)
}
	if len(signers) > 0 && signers[0] != nil {
		addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
		if err != nil {
			panic(err)
}

msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
}

	/ verify that events have no module attribute set
	if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
		if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
			msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
}

}

return sdk.Events{
	msgEvent
}.AppendEvents(events)
}

/ PrepareProposalVerifyTx performs transaction verification when a proposer is
/ creating a block proposal during PrepareProposal. Any state committed to the
/ PrepareProposal state internally will be discarded. An error will be
/ returned if the transaction cannot be encoded. nil will be returned if
/ the transaction is valid, otherwise an error will be returned.
func (app *BaseApp)

PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
	bz, err := app.txEncoder(tx)
	if err != nil {
		return nil, err
}

	_, _, _, err = app.runTx(execModePrepareProposal, bz)
	if err != nil {
		return nil, err
}

return bz, nil
}

/ ProcessProposalVerifyTx performs transaction verification when receiving a
/ block proposal during ProcessProposal. Any state committed to the
/ ProcessProposal state internally will be discarded. 
An error will be
/ returned if the transaction cannot be decoded. nil will be returned if
/ the transaction is valid, otherwise an error will be returned.
func (app *BaseApp)

ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
	tx, err := app.txDecoder(txBz)
	if err != nil {
		return nil, err
}

	_, _, _, err = app.runTx(execModeProcessProposal, txBz)
	if err != nil {
		return nil, err
}

return tx, nil
}

/ Close is called in start cmd to gracefully cleanup resources.
func (app *BaseApp)

Close()

error {
	return nil
}
```

Let us go through the most important components.

-> **Note**: Not all parameters are described, only the most important ones. Refer to the type definition for the full list.
+> **Note**: Not all parameters are described, only the most important ones. Refer to the
+> type definition for the full list.

First, the important parameters that are initialized during the bootstrapping of the application:

-* [`CommitMultiStore`](/v0.50/learn/advanced/store#commitmultistore): This is the main store of the application, which holds the canonical state that is committed at the [end of each block](#commit). This store is **not** cached, meaning it is not used to update the application's volatile (un-committed) states. The `CommitMultiStore` is a multi-store, meaning a store of stores. Each module of the application uses one or multiple `KVStores` in the multi-store to persist their subset of the state.
-* Database: The `db` is used by the `CommitMultiStore` to handle data persistence.
-* [`Msg` Service Router](#msg-service-router): The `msgServiceRouter` facilitates the routing of `sdk.Msg` requests to the appropriate module `Msg` service for processing. Here a `sdk.Msg` refers to the transaction component that needs to be processed by a service in order to update the application state, and not to ABCI message which implements the interface between the application and the underlying consensus engine. 
-* [gRPC Query Router](#grpc-query-router): The `grpcQueryRouter` facilitates the routing of gRPC queries to the appropriate module for it to be processed. These queries are not ABCI messages themselves, but they are relayed to the relevant module's gRPC `Query` service. -* [`TxDecoder`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types#TxDecoder): It is used to decode raw transaction bytes relayed by the underlying CometBFT engine. -* [`AnteHandler`](#antehandler): This handler is used to handle signature verification, fee payment, and other pre-message execution checks when a transaction is received. It's executed during [`CheckTx/RecheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock). -* [`InitChainer`](/v0.50/learn/beginner/app-anatomy#initchainer), [`PreBlocker`](/v0.50/learn/beginner/app-anatomy#preblocker), [`BeginBlocker` and `EndBlocker`](/v0.50/learn/beginner/app-anatomy#beginblocker-and-endblocker): These are the functions executed when the application receives the `InitChain` and `FinalizeBlock` ABCI messages from the underlying CometBFT engine. +- [`CommitMultiStore`](/docs/sdk/v0.50/learn/advanced/store#commitmultistore): This is the main store of the application, + which holds the canonical state that is committed at the [end of each block](#commit). This store + is **not** cached, meaning it is not used to update the application's volatile (un-committed) states. + The `CommitMultiStore` is a multi-store, meaning a store of stores. Each module of the application + uses one or multiple `KVStores` in the multi-store to persist their subset of the state. +- Database: The `db` is used by the `CommitMultiStore` to handle data persistence. +- [`Msg` Service Router](#msg-service-router): The `msgServiceRouter` facilitates the routing of `sdk.Msg` requests to the appropriate + module `Msg` service for processing. 
Here a `sdk.Msg` refers to the transaction component that needs to be + processed by a service in order to update the application state, and not to ABCI message which implements + the interface between the application and the underlying consensus engine. +- [gRPC Query Router](#grpc-query-router): The `grpcQueryRouter` facilitates the routing of gRPC queries to the + appropriate module for it to be processed. These queries are not ABCI messages themselves, but they + are relayed to the relevant module's gRPC `Query` service. +- [`TxDecoder`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types#TxDecoder): It is used to decode + raw transaction bytes relayed by the underlying CometBFT engine. +- [`AnteHandler`](#antehandler): This handler is used to handle signature verification, fee payment, + and other pre-message execution checks when a transaction is received. It's executed during + [`CheckTx/RecheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock). +- [`InitChainer`](/docs/sdk/v0.50/learn/beginner/app-anatomy#initchainer), [`PreBlocker`](/docs/sdk/v0.50/learn/beginner/app-anatomy#preblocker), [`BeginBlocker` and `EndBlocker`](/docs/sdk/v0.50/learn/beginner/app-anatomy#beginblocker-and-endblocker): These are + the functions executed when the application receives the `InitChain` and `FinalizeBlock` + ABCI messages from the underlying CometBFT engine. Then, parameters used to define [volatile states](#state-updates) (i.e. cached states): -* `checkState`: This state is updated during [`CheckTx`](#checktx), and reset on [`Commit`](#commit). -* `finalizeBlockState`: This state is updated during [`FinalizeBlock`](#finalizeblock), and set to `nil` on [`Commit`](#commit) and gets re-initialized on `FinalizeBlock`. -* `processProposalState`: This state is updated during [`ProcessProposal`](#process-proposal). -* `prepareProposalState`: This state is updated during [`PrepareProposal`](#prepare-proposal). 
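All four volatile states are branches of the same underlying root store. As a rough sketch (hypothetical types, not the SDK's actual `CacheWrap`/`CommitMultiStore` interfaces), branching can be modeled as a write-buffering overlay whose `Write` flushes buffered writes to its parent only when explicitly called:

```go
package main

import "fmt"

// Store is a minimal key-value store (illustrative only, not the
// SDK's store types). A branch caches reads and buffers writes.
type Store struct {
	parent *Store            // nil for the root (committed) store
	cache  map[string]string // buffered writes / cached reads
}

func NewRoot() *Store { return &Store{cache: map[string]string{}} }

// Branch returns a volatile store layered over s, like CacheWrap.
func (s *Store) Branch() *Store {
	return &Store{parent: s, cache: map[string]string{}}
}

func (s *Store) Get(k string) string {
	if v, ok := s.cache[k]; ok {
		return v
	}
	if s.parent != nil {
		return s.parent.Get(k)
	}
	return ""
}

func (s *Store) Set(k, v string) { s.cache[k] = v }

// Write flushes buffered writes down to the parent, the way Commit
// writes finalizeBlockState back to the root CommitMultiStore.
func (s *Store) Write() {
	for k, v := range s.cache {
		s.parent.Set(k, v)
	}
}

func main() {
	root := NewRoot()
	root.Set("balance/alice", "100")

	checkState := root.Branch()    // volatile: never written back
	finalizeState := root.Branch() // written back on Commit

	checkState.Set("balance/alice", "90")    // visible only in checkState
	finalizeState.Set("balance/alice", "80") // visible only in finalizeState
	finalizeState.Write()                    // "Commit": root now reflects it

	fmt.Println(root.Get("balance/alice")) // 80
}
```

The point of the sketch: writes to `checkState` are simply discarded (no `Write` call), while `finalizeBlockState` is the one branch whose buffered writes reach the root on `Commit`.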
+- `checkState`: This state is updated during [`CheckTx`](#checktx), and reset on [`Commit`](#commit). +- `finalizeBlockState`: This state is updated during [`FinalizeBlock`](#finalizeblock), and set to `nil` on + [`Commit`](#commit) and gets re-initialized on `FinalizeBlock`. +- `processProposalState`: This state is updated during [`ProcessProposal`](#process-proposal). +- `prepareProposalState`: This state is updated during [`PrepareProposal`](#prepare-proposal). Finally, a few more important parameters: -* `voteInfos`: This parameter carries the list of validators whose precommit is missing, either because they did not vote or because the proposer did not include their vote. This information is carried by the [Context](/v0.50/learn/advanced/context) and can be used by the application for various things like punishing absent validators. -* `minGasPrices`: This parameter defines the minimum gas prices accepted by the node. This is a **local** parameter, meaning each full-node can set a different `minGasPrices`. It is used in the `AnteHandler` during [`CheckTx`](#checktx), mainly as a spam protection mechanism. The transaction enters the [mempool](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#mempool-methods) only if the gas prices of the transaction are greater than one of the minimum gas price in `minGasPrices` (e.g. if `minGasPrices == 1uatom,1photon`, the `gas-price` of the transaction must be greater than `1uatom` OR `1photon`). -* `appVersion`: Version of the application. It is set in the [application's constructor function](/v0.50/learn/beginner/app-anatomy#constructor-function). 
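The `minGasPrices` rule described above (the transaction passes if its gas price meets the minimum for at least one listed denomination) can be sketched as follows. All names here are hypothetical helpers, not SDK API; real nodes compare `sdk.DecCoins` inside the `AnteHandler`:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// coin is a simplified (amount, denom) pair; the SDK uses sdk.DecCoins.
type coin struct {
	amount int64
	denom  string
}

// parseCoins parses "1uatom,1photon"-style lists; it assumes each
// entry is "<integer amount><denom>" (illustrative only).
func parseCoins(s string) []coin {
	var out []coin
	for _, part := range strings.Split(s, ",") {
		i := strings.IndexFunc(part, func(r rune) bool { return r < '0' || r > '9' })
		amt, _ := strconv.ParseInt(part[:i], 10, 64)
		out = append(out, coin{amount: amt, denom: part[i:]})
	}
	return out
}

// passesMinGasPrices mirrors the OR semantics: the tx enters the
// mempool if its gas price meets the node's minimum for AT LEAST ONE
// denomination (the >= vs > detail is simplified here).
func passesMinGasPrices(txGasPrices, minGasPrices []coin) bool {
	for _, min := range minGasPrices {
		for _, offered := range txGasPrices {
			if offered.denom == min.denom && offered.amount >= min.amount {
				return true
			}
		}
	}
	return false
}

func main() {
	min := parseCoins("1uatom,1photon")
	fmt.Println(passesMinGasPrices(parseCoins("2uatom"), min)) // meets the uatom minimum
	fmt.Println(passesMinGasPrices(parseCoins("1stake"), min)) // no overlapping denom
}
```

Because the parameter is local, two full-nodes with different `minGasPrices` can legitimately disagree on whether the same transaction is admitted to their mempools.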
- -## Constructor[​](#constructor "Direct link to Constructor") - -``` -func NewBaseApp( name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp),) *BaseApp { // ...} +- `voteInfos`: This parameter carries the list of validators whose precommit is missing, either + because they did not vote or because the proposer did not include their vote. This information is + carried by the [Context](/docs/sdk/v0.50/learn/advanced/context) and can be used by the application for various things like + punishing absent validators. +- `minGasPrices`: This parameter defines the minimum gas prices accepted by the node. This is a + **local** parameter, meaning each full-node can set a different `minGasPrices`. It is used in the + `AnteHandler` during [`CheckTx`](#checktx), mainly as a spam protection mechanism. The transaction + enters the [mempool](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#mempool-methods) + only if the gas prices of the transaction are greater than one of the minimum gas price in + `minGasPrices` (e.g. if `minGasPrices == 1uatom,1photon`, the `gas-price` of the transaction must be + greater than `1uatom` OR `1photon`). +- `appVersion`: Version of the application. It is set in the + [application's constructor function](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). + +## Constructor + +```go +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + + / ... +} ``` -The `BaseApp` constructor function is pretty straightforward. The only thing worth noting is the possibility to provide additional [`options`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/options.go) to the `BaseApp`, which will execute them in order. 
The `options` are generally `setter` functions for important parameters, like `SetPruning()` to set pruning options or `SetMinGasPrices()` to set the node's `min-gas-prices`. +The `BaseApp` constructor function is pretty straightforward. The only thing worth noting is the +possibility to provide additional [`options`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/options.go) +to the `BaseApp`, which will execute them in order. The `options` are generally `setter` functions +for important parameters, like `SetPruning()` to set pruning options or `SetMinGasPrices()` to set +the node's `min-gas-prices`. Naturally, developers can add additional `options` based on their application's needs. -## State Updates[​](#state-updates "Direct link to State Updates") +## State Updates -The `BaseApp` maintains four primary volatile states and a root or main state. The main state is the canonical state of the application and the volatile states, `checkState`, `prepareProposalState`, `processProposalState` and `finalizeBlockState` are used to handle state transitions in-between the main state made during [`Commit`](#commit). +The `BaseApp` maintains four primary volatile states and a root or main state. The main state +is the canonical state of the application and the volatile states, `checkState`, `prepareProposalState`, `processProposalState` and `finalizeBlockState` +are used to handle state transitions in-between the main state made during [`Commit`](#commit). -Internally, there is only a single `CommitMultiStore` which we refer to as the main or root state. From this root state, we derive four volatile states by using a mechanism called *store branching* (performed by `CacheWrap` function). The types can be illustrated as follows: +Internally, there is only a single `CommitMultiStore` which we refer to as the main or root state. 
+From this root state, we derive four volatile states by using a mechanism called _store branching_ (performed by `CacheWrap` function). +The types can be illustrated as follows: -![Types](/images/v0.50/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png) +![Types](/docs/sdk/images/learn/advanced/baseapp_state.png) -### InitChain State Updates[​](#initchain-state-updates "Direct link to InitChain State Updates") +### InitChain State Updates -During `InitChain`, the four volatile states, `checkState`, `prepareProposalState`, `processProposalState` and `finalizeBlockState` are set by branching the root `CommitMultiStore`. Any subsequent reads and writes happen on branched versions of the `CommitMultiStore`. To avoid unnecessary roundtrip to the main state, all reads to the branched store are cached. +During `InitChain`, the four volatile states, `checkState`, `prepareProposalState`, `processProposalState` +and `finalizeBlockState` are set by branching the root `CommitMultiStore`. Any subsequent reads and writes happen +on branched versions of the `CommitMultiStore`. +To avoid unnecessary roundtrip to the main state, all reads to the branched store are cached. -![InitChain](/images/v0.50/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png) +![InitChain](/docs/sdk/images/learn/advanced/baseapp_state-initchain.png) -### CheckTx State Updates[​](#checktx-state-updates "Direct link to CheckTx State Updates") +### CheckTx State Updates -During `CheckTx`, the `checkState`, which is based off of the last committed state from the root store, is used for any reads and writes. Here we only execute the `AnteHandler` and verify a service router exists for every message in the transaction. Note, when we execute the `AnteHandler`, we branch the already branched `checkState`. This has the side effect that if the `AnteHandler` fails, the state transitions won't be reflected in the `checkState` -- i.e. 
`checkState` is only updated on success. +During `CheckTx`, the `checkState`, which is based off of the last committed state from the root +store, is used for any reads and writes. Here we only execute the `AnteHandler` and verify a service router +exists for every message in the transaction. Note, when we execute the `AnteHandler`, we branch +the already branched `checkState`. +This has the side effect that if the `AnteHandler` fails, the state transitions won't be reflected in the `checkState` +\-- i.e. `checkState` is only updated on success. -![CheckTx](/images/v0.50/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png) +![CheckTx](/docs/sdk/images/learn/advanced/baseapp_state-checktx.png) -### PrepareProposal State Updates[​](#prepareproposal-state-updates "Direct link to PrepareProposal State Updates") +### PrepareProposal State Updates -During `PrepareProposal`, the `prepareProposalState` is set by branching the root `CommitMultiStore`. The `prepareProposalState` is used for any reads and writes that occur during the `PrepareProposal` phase. The function uses the `Select()` method of the mempool to iterate over the transactions. `runTx` is then called, which encodes and validates each transaction and from there the `AnteHandler` is executed. If successful, valid transactions are returned inclusive of the events, tags, and data generated during the execution of the proposal. The described behavior is that of the default handler, applications have the flexibility to define their own [custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). +During `PrepareProposal`, the `prepareProposalState` is set by branching the root `CommitMultiStore`. +The `prepareProposalState` is used for any reads and writes that occur during the `PrepareProposal` phase. +The function uses the `Select()` method of the mempool to iterate over the transactions. 
`runTx` is then called, +which encodes and validates each transaction and from there the `AnteHandler` is executed. +If successful, valid transactions are returned inclusive of the events, tags, and data generated +during the execution of the proposal. +The described behavior is that of the default handler; applications have the flexibility to define their own +[custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). -![ProcessProposal](/images/v0.50/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png) +![ProcessProposal](/docs/sdk/images/learn/advanced/baseapp_state-prepareproposal.png) -### ProcessProposal State Updates[​](#processproposal-state-updates "Direct link to ProcessProposal State Updates") +### ProcessProposal State Updates -During `ProcessProposal`, the `processProposalState` is set based off of the last committed state from the root store and is used to process a signed proposal received from a validator. In this state, `runTx` is called and the `AnteHandler` is executed and the context used in this state is built with information from the header and the main state, including the minimum gas prices, which are also set. +During `ProcessProposal`, the `processProposalState` is set based off of the last committed state +from the root store and is used to process a signed proposal received from a validator. +In this state, `runTx` is called and the `AnteHandler` is executed and the context used in this state is built with information +from the header and the main state, including the minimum gas prices, which are also set.
+Again we want to highlight that the described behavior is that of the default handler and applications have the flexibility to define their own +[custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). -![ProcessProposal](/images/v0.50/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png) +![ProcessProposal](/docs/sdk/images/learn/advanced/baseapp_state-processproposal.png) -### FinalizeBlock State Updates[​](#finalizeblock-state-updates "Direct link to FinalizeBlock State Updates") +### FinalizeBlock State Updates -During `FinalizeBlock`, the `finalizeBlockState` is set for use during transaction execution and endblock. The `finalizeBlockState` is based off of the last committed state from the root store and is branched. Note, the `finalizeBlockState` is set to `nil` on [`Commit`](#commit). +During `FinalizeBlock`, the `finalizeBlockState` is set for use during transaction execution and endblock. The +`finalizeBlockState` is based off of the last committed state from the root store and is branched. +Note, the `finalizeBlockState` is set to `nil` on [`Commit`](#commit). -The state flow for transaction execution is nearly identical to `CheckTx` except state transitions occur on the `finalizeBlockState` and messages in a transaction are executed. Similarly to `CheckTx`, state transitions occur on a doubly branched state -- `finalizeBlockState`. Successful message execution results in writes being committed to `finalizeBlockState`. Note, if message execution fails, state transitions from the AnteHandler are persisted. +The state flow for transaction execution is nearly identical to `CheckTx` except state transitions occur on +the `finalizeBlockState` and messages in a transaction are executed. Similarly to `CheckTx`, state transitions +occur on a doubly branched state -- `finalizeBlockState`. 
Successful message execution results in +writes being committed to `finalizeBlockState`. Note, if message execution fails, state transitions from +the AnteHandler are persisted. -### Commit State Updates[​](#commit-state-updates "Direct link to Commit State Updates") +### Commit State Updates -During `Commit` all the state transitions that occurred in the `finalizeBlockState` are finally written to the root `CommitMultiStore` which in turn is committed to disk and results in a new application root hash. These state transitions are now considered final. Finally, the `checkState` is set to the newly committed state and `finalizeBlockState` is set to `nil` to be reset on `FinalizeBlock`. +During `Commit` all the state transitions that occurred in the `finalizeBlockState` are finally written to +the root `CommitMultiStore` which in turn is committed to disk and results in a new application +root hash. These state transitions are now considered final. Finally, the `checkState` is set to the +newly committed state and `finalizeBlockState` is set to `nil` to be reset on `FinalizeBlock`. -![Commit](/images/v0.50/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png) +![Commit](/docs/sdk/images/learn/advanced/baseapp_state-commit.png) -## ParamStore[​](#paramstore "Direct link to ParamStore") +## ParamStore -During `InitChain`, the `RequestInitChain` provides `ConsensusParams` which contains parameters related to block execution such as maximum gas and size in addition to evidence parameters. If these parameters are non-nil, they are set in the BaseApp's `ParamStore`. Behind the scenes, the `ParamStore` is managed by an `x/consensus_params` module. This allows the parameters to be tweaked via on-chain governance. +During `InitChain`, the `RequestInitChain` provides `ConsensusParams` which contains parameters +related to block execution such as maximum gas and size in addition to evidence parameters. 
If these +parameters are non-nil, they are set in the BaseApp's `ParamStore`. Behind the scenes, the `ParamStore` +is managed by an `x/consensus_params` module. This allows the parameters to be tweaked via +on-chain governance. -## Service Routers[​](#service-routers "Direct link to Service Routers") +## Service Routers When messages and queries are received by the application, they must be routed to the appropriate module in order to be processed. Routing is done via `BaseApp`, which holds a `msgServiceRouter` for messages, and a `grpcQueryRouter` for queries. -### `Msg` Service Router[​](#msg-service-router "Direct link to msg-service-router") +### `Msg` Service Router -[`sdk.Msg`s](/v0.50/build/building-modules/messages-and-queries#messages) need to be routed after they are extracted from transactions, which are sent from the underlying CometBFT engine via the [`CheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock) ABCI messages. To do so, `BaseApp` holds a `msgServiceRouter` which maps fully-qualified service methods (`string`, defined in each module's Protobuf `Msg` service) to the appropriate module's `MsgServer` implementation. +[`sdk.Msg`s](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages) need to be routed after they are extracted from transactions, which are sent from the underlying CometBFT engine via the [`CheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock) ABCI messages. To do so, `BaseApp` holds a `msgServiceRouter` which maps fully-qualified service methods (`string`, defined in each module's Protobuf `Msg` service) to the appropriate module's `MsgServer` implementation. The [default `msgServiceRouter` included in `BaseApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/msg_service_router.go) is stateless. However, some applications may want to make use of more stateful routing mechanisms such as allowing governance to disable certain routes or point them to new modules for upgrade purposes. 
For this reason, the `sdk.Context` is also passed into each [route handler inside `msgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/msg_service_router.go#L31-L32). For a stateless router that doesn't want to make use of this, you can just ignore the `ctx`. -The application's `msgServiceRouter` is initialized with all the routes using the application's [module manager](/v0.50/build/building-modules/module-manager#manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/v0.50/learn/beginner/app-anatomy#constructor-function). +The application's `msgServiceRouter` is initialized with all the routes using the application's [module manager](/docs/sdk/v0.50/documentation/module-system/module-manager#manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function). -### gRPC Query Router[​](#grpc-query-router "Direct link to gRPC Query Router") +### gRPC Query Router -Similar to `sdk.Msg`s, [`queries`](/v0.50/build/building-modules/messages-and-queries#queries) need to be routed to the appropriate module's [`Query` service](/v0.50/build/building-modules/query-services). To do so, `BaseApp` holds a `grpcQueryRouter`, which maps modules' fully-qualified service methods (`string`, defined in their Protobuf `Query` gRPC) to their `QueryServer` implementation. The `grpcQueryRouter` is called during the initial stages of query processing, which can be either by directly sending a gRPC query to the gRPC endpoint, or via the [`Query` ABCI message](#query) on the CometBFT RPC endpoint. +Similar to `sdk.Msg`s, [`queries`](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#queries) need to be routed to the appropriate module's [`Query` service](/docs/sdk/v0.50/documentation/module-system/query-services). 
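The routing table itself is conceptually just a map from fully-qualified service method names to handlers. A minimal sketch (hypothetical types and names, not the SDK's `msgServiceRouter` implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// handler processes one decoded message; in the SDK this would be a
// module's MsgServer method (the types here are illustrative).
type handler func(msg any) (string, error)

// msgServiceRouter maps fully-qualified service methods, e.g.
// "/cosmos.bank.v1beta1.Msg/Send", to their handlers.
type msgServiceRouter struct {
	routes map[string]handler
}

func newRouter() *msgServiceRouter {
	return &msgServiceRouter{routes: map[string]handler{}}
}

// Register is what a module manager's RegisterServices pass would do
// for each Msg service method (hypothetical signature).
func (r *msgServiceRouter) Register(method string, h handler) {
	r.routes[method] = h
}

// Route looks up the handler for a method and invokes it.
func (r *msgServiceRouter) Route(method string, msg any) (string, error) {
	h, ok := r.routes[method]
	if !ok {
		return "", errors.New("unrecognized message route: " + method)
	}
	return h(msg)
}

func main() {
	r := newRouter()
	r.Register("/cosmos.bank.v1beta1.Msg/Send", func(msg any) (string, error) {
		return "transferred", nil
	})

	res, _ := r.Route("/cosmos.bank.v1beta1.Msg/Send", nil)
	fmt.Println(res) // transferred

	_, err := r.Route("/unknown.Msg/Do", nil)
	fmt.Println(err) // unrecognized route -> error
}
```

A stateful variant would simply consult the `sdk.Context` (or on-chain params) inside `Route` before dispatching, which is why the context is threaded through each route handler.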
To do so, `BaseApp` holds a `grpcQueryRouter`, which maps modules' fully-qualified service methods (`string`, defined in their Protobuf `Query` gRPC) to their `QueryServer` implementation. The `grpcQueryRouter` is called during the initial stages of query processing, which can be either by directly sending a gRPC query to the gRPC endpoint, or via the [`Query` ABCI message](#query) on the CometBFT RPC endpoint. -Just like the `msgServiceRouter`, the `grpcQueryRouter` is initialized with all the query routes using the application's [module manager](/v0.50/build/building-modules/module-manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/v0.50/learn/beginner/app-anatomy#app-constructor). +Just like the `msgServiceRouter`, the `grpcQueryRouter` is initialized with all the query routes using the application's [module manager](/docs/sdk/v0.50/documentation/module-system/module-manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/docs/sdk/v0.50/learn/beginner/app-anatomy#app-constructor). -## Main ABCI 2.0 Messages[​](#main-abci-20-messages "Direct link to Main ABCI 2.0 Messages") +## Main ABCI 2.0 Messages The [Application-Blockchain Interface](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md) (ABCI) is a generic interface that connects a state-machine with a consensus engine to form a functional full-node. It can be wrapped in any language, and needs to be implemented by each application-specific blockchain built on top of an ABCI-compatible consensus engine like CometBFT. The consensus engine handles two main tasks: -* The networking logic, which mainly consists in gossiping block parts, transactions and consensus votes. -* The consensus logic, which results in the deterministic ordering of transactions in the form of blocks. 
+- The networking logic, which mainly consists of gossiping block parts, transactions and consensus votes. +- The consensus logic, which results in the deterministic ordering of transactions in the form of blocks. It is **not** the role of the consensus engine to define the state or the validity of transactions. Generally, transactions are handled by the consensus engine in the form of `[]bytes`, and relayed to the application via the ABCI to be decoded and processed. At key moments in the networking and consensus processes (e.g. beginning of a block, commit of a block, reception of an unconfirmed transaction, ...), the consensus engine emits ABCI messages for the state-machine to act on. Developers building on top of the Cosmos SDK need not implement the ABCI themselves, as `BaseApp` comes with a built-in implementation of the interface. Let us go through the main ABCI messages that `BaseApp` implements: -* [`Prepare Proposal`](#prepare-proposal) -* [`Process Proposal`](#process-proposal) -* [`CheckTx`](#checktx) -* [`FinalizeBlock`](#finalizeblock) -* [`ExtendVote`](#extendvote) -* [`VerifyVoteExtension`](#verifyvoteextension) +- [`Prepare Proposal`](#prepare-proposal) +- [`Process Proposal`](#process-proposal) +- [`CheckTx`](#checktx) +- [`FinalizeBlock`](#finalizeblock) +- [`ExtendVote`](#extendvote) +- [`VerifyVoteExtension`](#verifyvoteextension) -### Prepare Proposal[​](#prepare-proposal "Direct link to Prepare Proposal") +### Prepare Proposal The `PrepareProposal` function is part of the new methods introduced in Application Blockchain Interface (ABCI++) in CometBFT and is an important part of the application's overall governance system. In the Cosmos SDK, it allows the application to have more fine-grained control over the transactions that are processed, and ensures that only valid transactions are committed to the blockchain. Here is how the `PrepareProposal` function can be implemented: 1. Extract the `sdk.Msg`s from the transaction. -2.
Perform *stateful* checks by calling `Validate()` on each of the `sdk.Msg`'s. This is done after *stateless* checks as *stateful* checks are more computationally expensive. If `Validate()` fails, `PrepareProposal` returns before running further checks, which saves resources. +2. Perform _stateful_ checks by calling `Validate()` on each of the `sdk.Msg`'s. This is done after _stateless_ checks as _stateful_ checks are more computationally expensive. If `Validate()` fails, `PrepareProposal` returns before running further checks, which saves resources. 3. Perform any additional checks that are specific to the application, such as checking account balances, or ensuring that certain conditions are met before a transaction is proposed. Modify the transactions as needed before they are processed by the consensus engine, if necessary. 4. Return the updated transactions to be processed by the consensus engine. @@ -180,12 +1544,12 @@ It's important to note that `PrepareProposal` complements the `ProcessProposal` `PrepareProposal` returns a response to the underlying consensus engine of type [`abci.ResponseCheckTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#processproposal). The response contains: -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. +- `Code (uint32)`: Response Code. `0` if successful. +- `Data ([]byte)`: Result bytes, if any. +- `Log (string):` The output of the application's logger. May be non-deterministic. +- `Info (string):` Additional information. May be non-deterministic. -### Process Proposal[​](#process-proposal "Direct link to Process Proposal") +### Process Proposal The `ProcessProposal` function is called by the BaseApp as part of the ABCI message flow, and is executed during the `FinalizeBlock` phase of the consensus process.
The purpose of this function is to give more control to the application for block validation, allowing it to check all transactions in a proposed block before the validator sends the prevote for the block. It allows a validator to perform application-dependent work in a proposed block, enabling features such as immediate block execution, and allows the Application to reject invalid blocks. @@ -206,104 +1570,1773 @@ However, developers must exercise greater caution when using these methods. Inco `ProcessProposal` returns a response to the underlying consensus engine of type [`abci.ResponseCheckTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#processproposal). The response contains: -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. +- `Code (uint32)`: Response Code. `0` if successful. +- `Data ([]byte)`: Result bytes, if any. +- `Log (string):` The output of the application's logger. May be non-deterministic. +- `Info (string):` Additional information. May be non-deterministic. -### CheckTx[​](#checktx "Direct link to CheckTx") +### CheckTx -`CheckTx` is sent by the underlying consensus engine when a new unconfirmed (i.e. not yet included in a valid block) transaction is received by a full-node. The role of `CheckTx` is to guard the full-node's mempool (where unconfirmed transactions are stored until they are included in a block) from spam transactions. Unconfirmed transactions are relayed to peers only if they pass `CheckTx`. +`CheckTx` is sent by the underlying consensus engine when a new unconfirmed (i.e. not yet included in a valid block) +transaction is received by a full-node. 
The role of `CheckTx` is to guard the full-node's mempool +(where unconfirmed transactions are stored until they are included in a block) from spam transactions. +Unconfirmed transactions are relayed to peers only if they pass `CheckTx`. -`CheckTx()` can perform both *stateful* and *stateless* checks, but developers should strive to make the checks **lightweight** because gas fees are not charged for the resources (CPU, data load...) used during the `CheckTx`. +`CheckTx()` can perform both _stateful_ and _stateless_ checks, but developers should strive to +make the checks **lightweight** because gas fees are not charged for the resources (CPU, data load...) used during the `CheckTx`. -In the Cosmos SDK, after [decoding transactions](/v0.50/learn/advanced/encoding), `CheckTx()` is implemented to do the following checks: +In the Cosmos SDK, after [decoding transactions](/docs/sdk/v0.50/learn/advanced/encoding), `CheckTx()` is implemented +to do the following checks: 1. Extract the `sdk.Msg`s from the transaction. -2. **Optionally** perform *stateless* checks by calling `ValidateBasic()` on each of the `sdk.Msg`s. This is done first, as *stateless* checks are less computationally expensive than *stateful* checks. If `ValidateBasic()` fail, `CheckTx` returns before running *stateful* checks, which saves resources. This check is still performed for messages that have not yet migrated to the new message validation mechanism defined in [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) and still have a `ValidateBasic()` method. -3. Perform non-module related *stateful* checks on the [account](/v0.50/learn/beginner/accounts). This step is mainly about checking that the `sdk.Msg` signatures are valid, that enough fees are provided and that the sending account has enough funds to pay for said fees. Note that no precise [`gas`](/v0.50/learn/beginner/gas-fees) counting occurs here, as `sdk.Msg`s are not processed. 
Usually, the [`AnteHandler`](/v0.50/learn/beginner/gas-fees#antehandler) will check that the `gas` provided with the transaction is superior to a minimum reference gas amount based on the raw transaction size, in order to avoid spam with transactions that provide 0 gas. +2. **Optionally** perform _stateless_ checks by calling `ValidateBasic()` on each of the `sdk.Msg`s. This is done + first, as _stateless_ checks are less computationally expensive than _stateful_ checks. If + `ValidateBasic()` fail, `CheckTx` returns before running _stateful_ checks, which saves resources. + This check is still performed for messages that have not yet migrated to the new message validation mechanism defined in [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) and still have a `ValidateBasic()` method. +3. Perform non-module related _stateful_ checks on the [account](/docs/sdk/v0.50/learn/beginner/accounts). This step is mainly about checking + that the `sdk.Msg` signatures are valid, that enough fees are provided and that the sending account + has enough funds to pay for said fees. Note that no precise [`gas`](/docs/sdk/v0.50/learn/beginner/gas-fees) counting occurs here, + as `sdk.Msg`s are not processed. Usually, the [`AnteHandler`](/docs/sdk/v0.50/learn/beginner/gas-fees#antehandler) will check that the `gas` provided + with the transaction is superior to a minimum reference gas amount based on the raw transaction size, + in order to avoid spam with transactions that provide 0 gas. `CheckTx` does **not** process `sdk.Msg`s - they only need to be processed when the canonical state needs to be updated, which happens during `FinalizeBlock`. -Steps 2. and 3. are performed by the [`AnteHandler`](/v0.50/learn/beginner/gas-fees#antehandler) in the [`RunTx()`](#runtx-antehandler-and-runmsgs) function, which `CheckTx()` calls with the `runTxModeCheck` mode. During each step of `CheckTx()`, a special [volatile state](#state-updates) called `checkState` is updated. 
This state is used to keep track of the temporary changes triggered by the `CheckTx()` calls of each transaction without modifying the [main canonical state](#main-state). For example, when a transaction goes through `CheckTx()`, the transaction's fees are deducted from the sender's account in `checkState`. If a second transaction is received from the same account before the first is processed, and the account has consumed all its funds in `checkState` during the first transaction, the second transaction will fail `CheckTx`() and be rejected. In any case, the sender's account will not actually pay the fees until the transaction is actually included in a block, because `checkState` never gets committed to the main state. The `checkState` is reset to the latest state of the main state each time a blocks gets [committed](#commit). +Steps 2. and 3. are performed by the [`AnteHandler`](/docs/sdk/v0.50/learn/beginner/gas-fees#antehandler) in the [`RunTx()`](#runtx-antehandler-and-runmsgs) +function, which `CheckTx()` calls with the `runTxModeCheck` mode. During each step of `CheckTx()`, a +special [volatile state](#state-updates) called `checkState` is updated. This state is used to keep +track of the temporary changes triggered by the `CheckTx()` calls of each transaction without modifying +the [main canonical state](#main-state). For example, when a transaction goes through `CheckTx()`, the +transaction's fees are deducted from the sender's account in `checkState`. If a second transaction is +received from the same account before the first is processed, and the account has consumed all its +funds in `checkState` during the first transaction, the second transaction will fail `CheckTx()` and +be rejected. In any case, the sender's account will not actually pay the fees until the transaction +is actually included in a block, because `checkState` never gets committed to the main state.
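The fee-deduction example above can be sketched with a toy `checkState` in Go (stdlib only; `deductFees` and the map-based balances are illustrative stand-ins, not the SDK's actual keeper or store types):

```go
package main

import "fmt"

// deductFees charges a fee against a working copy of balances, the way
// CheckTx deducts fees in checkState rather than in committed state.
func deductFees(state map[string]int, acct string, fee int) error {
	if state[acct] < fee {
		return fmt.Errorf("insufficient funds for %s", acct)
	}
	state[acct] -= fee
	return nil
}

func main() {
	committed := map[string]int{"alice": 10}

	// checkState starts as a branch (copy) of the committed state.
	checkState := map[string]int{}
	for k, v := range committed {
		checkState[k] = v
	}

	fmt.Println(deductFees(checkState, "alice", 8)) // first tx passes CheckTx
	fmt.Println(deductFees(checkState, "alice", 8)) // second tx rejected: funds already spent in checkState
	fmt.Println(committed["alice"])                 // committed state is untouched
}
```

Both deductions run only against the branched copy, so the committed balance stays at 10 until the transaction is actually included in a block.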
The +`checkState` is reset to the latest state of the main state each time a block gets [committed](#commit). + +`CheckTx` returns a response to the underlying consensus engine of type [`abci.ResponseCheckTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#checktx). +The response contains: + +- `Code (uint32)`: Response Code. `0` if successful. +- `Data ([]byte)`: Result bytes, if any. +- `Log (string)`: The output of the application's logger. May be non-deterministic. +- `Info (string)`: Additional information. May be non-deterministic. +- `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction. +- `GasUsed (int64)`: Amount of gas consumed by transaction. During `CheckTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction. Next is an example: + +```go expandable +package ante + +import ( + + storetypes "cosmossdk.io/store/types" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec/legacy" + "github.com/cosmos/cosmos-sdk/crypto/keys/multisig" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/migrations/legacytx" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ ValidateBasicDecorator will call tx.ValidateBasic and return any non-nil error. +/ If ValidateBasic passes, decorator calls next AnteHandler in chain. Note, +/ ValidateBasicDecorator decorator will not get executed on ReCheckTx since it +/ is not dependent on application state.
+type ValidateBasicDecorator struct{ +} + +func NewValidateBasicDecorator() + +ValidateBasicDecorator { + return ValidateBasicDecorator{ +} +} + +func (vbd ValidateBasicDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + / no need to validate basic on recheck tx, call next antehandler + if ctx.IsReCheckTx() { + return next(ctx, tx, simulate) +} + if validateBasic, ok := tx.(sdk.HasValidateBasic); ok { + if err := validateBasic.ValidateBasic(); err != nil { + return ctx, err +} + +} + +return next(ctx, tx, simulate) +} + +/ ValidateMemoDecorator will validate memo given the parameters passed in +/ If memo is too large decorator returns with error, otherwise call next AnteHandler +/ CONTRACT: Tx must implement TxWithMemo interface +type ValidateMemoDecorator struct { + ak AccountKeeper +} + +func NewValidateMemoDecorator(ak AccountKeeper) + +ValidateMemoDecorator { + return ValidateMemoDecorator{ + ak: ak, +} +} + +func (vmd ValidateMemoDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + memoTx, ok := tx.(sdk.TxWithMemo) + if !ok { + return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid transaction type") +} + memoLength := len(memoTx.GetMemo()) + if memoLength > 0 { + params := vmd.ak.GetParams(ctx) + if uint64(memoLength) > params.MaxMemoCharacters { + return ctx, errorsmod.Wrapf(sdkerrors.ErrMemoTooLarge, + "maximum number of characters is %d but received %d characters", + params.MaxMemoCharacters, memoLength, + ) +} + +} + +return next(ctx, tx, simulate) +} + +/ ConsumeTxSizeGasDecorator will take in parameters and consume gas proportional +/ to the size of tx before calling next AnteHandler. Note, the gas costs will be +/ slightly over estimated due to the fact that any given signing account may need +/ to be retrieved from state. +/ +/ CONTRACT: If simulate=true, then signatures must either be completely filled +/ in or empty. 
+/ CONTRACT: To use this decorator, signatures of transaction must be represented +/ as legacytx.StdSignature otherwise simulate mode will incorrectly estimate gas cost. +type ConsumeTxSizeGasDecorator struct { + ak AccountKeeper +} + +func NewConsumeGasForTxSizeDecorator(ak AccountKeeper) + +ConsumeTxSizeGasDecorator { + return ConsumeTxSizeGasDecorator{ + ak: ak, +} +} + +func (cgts ConsumeTxSizeGasDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + sigTx, ok := tx.(authsigning.SigVerifiableTx) + if !ok { + return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid tx type") +} + params := cgts.ak.GetParams(ctx) + +ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*storetypes.Gas(len(ctx.TxBytes())), "txSize") + + / simulate gas cost for signatures in simulate mode + if simulate { + / in simulate mode, each element should be a nil signature + sigs, err := sigTx.GetSignaturesV2() + if err != nil { + return ctx, err +} + n := len(sigs) + +signers, err := sigTx.GetSigners() + if err != nil { + return sdk.Context{ +}, err +} + for i, signer := range signers { + / if signature is already filled in, no need to simulate gas cost + if i < n && !isIncompleteSignature(sigs[i].Data) { + continue +} + +var pubkey cryptotypes.PubKey + acc := cgts.ak.GetAccount(ctx, signer) + + / use placeholder simSecp256k1Pubkey if sig is nil + if acc == nil || acc.GetPubKey() == nil { + pubkey = simSecp256k1Pubkey +} + +else { + pubkey = acc.GetPubKey() +} + + / use stdsignature to mock the size of a full signature + simSig := legacytx.StdSignature{ /nolint:staticcheck / SA1019: legacytx.StdSignature is deprecated + Signature: simSecp256k1Sig[:], + PubKey: pubkey, +} + sigBz := legacy.Cdc.MustMarshal(simSig) + cost := storetypes.Gas(len(sigBz) + 6) + + / If the pubkey is a multi-signature pubkey, then we estimate for the maximum + / number of signers. 
+ if _, ok := pubkey.(*multisig.LegacyAminoPubKey); ok { + cost *= params.TxSigLimit +} + +ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*cost, "txSize") +} + +} + +return next(ctx, tx, simulate) +} + +/ isIncompleteSignature tests whether SignatureData is fully filled in for simulation purposes +func isIncompleteSignature(data signing.SignatureData) + +bool { + if data == nil { + return true +} + switch data := data.(type) { + case *signing.SingleSignatureData: + return len(data.Signature) == 0 + case *signing.MultiSignatureData: + if len(data.Signatures) == 0 { + return true +} + for _, s := range data.Signatures { + if isIncompleteSignature(s) { + return true +} + +} + +} + +return false +} + +type ( + / TxTimeoutHeightDecorator defines an AnteHandler decorator that checks for a + / tx height timeout. + TxTimeoutHeightDecorator struct{ +} + + / TxWithTimeoutHeight defines the interface a tx must implement in order for + / TxHeightTimeoutDecorator to process the tx. + TxWithTimeoutHeight interface { + sdk.Tx + + GetTimeoutHeight() + +uint64 +} +) + +/ TxTimeoutHeightDecorator defines an AnteHandler decorator that checks for a +/ tx height timeout. +func NewTxTimeoutHeightDecorator() + +TxTimeoutHeightDecorator { + return TxTimeoutHeightDecorator{ +} +} + +/ AnteHandle implements an AnteHandler decorator for the TxHeightTimeoutDecorator +/ type where the current block height is checked against the tx's height timeout. +/ If a height timeout is provided (non-zero) + +and is less than the current block +/ height, then an error is returned. 
+func (txh TxTimeoutHeightDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + timeoutTx, ok := tx.(TxWithTimeoutHeight) + if !ok { + return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "expected tx to implement TxWithTimeoutHeight") +} + timeoutHeight := timeoutTx.GetTimeoutHeight() + if timeoutHeight > 0 && uint64(ctx.BlockHeight()) > timeoutHeight { + return ctx, errorsmod.Wrapf( + sdkerrors.ErrTxTimeoutHeight, "block height: %d, timeout height: %d", ctx.BlockHeight(), timeoutHeight, + ) +} + +return next(ctx, tx, simulate) +} +``` -`CheckTx` returns a response to the underlying consensus engine of type [`abci.ResponseCheckTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#checktx). The response contains: +- `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). See [`event`s](/docs/sdk/v0.50/learn/advanced/events) for more. +- `Codespace (string)`: Namespace for the Code. -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. -* `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction. -* `GasUsed (int64)`: Amount of gas consumed by transaction. During `CheckTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction. Next is an example: +#### RecheckTx -x/auth/ante/basic.go +After `Commit`, `CheckTx` is run again on all transactions that remain in the node's local mempool +excluding the transactions that are included in the block. To prevent the mempool from rechecking all transactions +every time a block is committed, the configuration option `mempool.recheck=false` can be set. 
As of +Tendermint v0.32.1, an additional `Type` parameter is made available to the `CheckTx` function that +indicates whether an incoming transaction is new (`CheckTxType_New`), or a recheck (`CheckTxType_Recheck`). +This allows certain checks like signature verification can be skipped during `CheckTxType_Recheck`. -``` -loading... -``` +## RunTx, AnteHandler, RunMsgs, PostHandler -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/ante/basic.go#L102) +### RunTx -* `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). See [`event`s](/v0.50/learn/advanced/events) for more. -* `Codespace (string)`: Namespace for the Code. +`RunTx` is called from `CheckTx`/`Finalizeblock` to handle the transaction, with `execModeCheck` or `execModeFinalize` as parameter to differentiate between the two modes of execution. Note that when `RunTx` receives a transaction, it has already been decoded. -#### RecheckTx[​](#rechecktx "Direct link to RecheckTx") +The first thing `RunTx` does upon being called is to retrieve the `context`'s `CacheMultiStore` by calling the `getContextForTx()` function with the appropriate mode (either `runTxModeCheck` or `execModeFinalize`). This `CacheMultiStore` is a branch of the main store, with cache functionality (for query requests), instantiated during `FinalizeBlock` for transaction execution and during the `Commit` of the previous block for `CheckTx`. After that, two `defer func()` are called for [`gas`](/docs/sdk/v0.50/learn/beginner/gas-fees) management. They are executed when `runTx` returns and make sure `gas` is actually consumed, and will throw errors, if any. -After `Commit`, `CheckTx` is run again on all transactions that remain in the node's local mempool excluding the transactions that are included in the block. 
To prevent the mempool from rechecking all transactions every time a block is committed, the configuration option `mempool.recheck=false` can be set. As of Tendermint v0.32.1, an additional `Type` parameter is made available to the `CheckTx` function that indicates whether an incoming transaction is new (`CheckTxType_New`), or a recheck (`CheckTxType_Recheck`). This allows certain checks like signature verification can be skipped during `CheckTxType_Recheck`. +After that, `RunTx()` calls `ValidateBasic()`, when available and for backward compatibility, on each `sdk.Msg` in the `Tx`, which runs preliminary _stateless_ validity checks. If any `sdk.Msg` fails to pass `ValidateBasic()`, `RunTx()` returns with an error. -## RunTx, AnteHandler, RunMsgs, PostHandler[​](#runtx-antehandler-runmsgs-posthandler "Direct link to RunTx, AnteHandler, RunMsgs, PostHandler") +Then, the [`anteHandler`](#antehandler) of the application is run (if it exists). In preparation for this step, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function.
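The branching performed by `cacheTxContext()` can be sketched with a toy cache-wrapped store in Go (stdlib only; `store`, `cacheStore`, and `branch` are illustrative stand-ins, not the SDK's actual `CacheMultiStore` API):

```go
package main

import "fmt"

// store is a minimal key-value state, a stand-in for the main multistore.
type store map[string]string

// cacheStore buffers writes on top of a parent, like a branched CacheMultiStore.
type cacheStore struct {
	parent store
	dirty  store
}

// branch creates a write-buffered view of the parent store.
func branch(parent store) *cacheStore {
	return &cacheStore{parent: parent, dirty: store{}}
}

// Get reads through the cache: buffered writes shadow the parent.
func (c *cacheStore) Get(k string) string {
	if v, ok := c.dirty[k]; ok {
		return v
	}
	return c.parent[k]
}

// Set records a write in the buffer only; the parent is untouched.
func (c *cacheStore) Set(k, v string) { c.dirty[k] = v }

// Write flushes buffered writes to the parent; skipping it discards them.
func (c *cacheStore) Write() {
	for k, v := range c.dirty {
		c.parent[k] = v
	}
}

func main() {
	state := store{"fees": "unpaid"}

	anteState := branch(state) // branch before running the anteHandler
	anteState.Set("fees", "deducted")

	fmt.Println(state["fees"]) // parent still unchanged while the ante runs
	anteState.Write()          // anteHandler succeeded: flush the branch
	fmt.Println(state["fees"]) // parent now reflects the write
}
```

If the `anteHandler` fails, `Write()` is simply never called and the buffered writes are dropped, which is the behavior the paragraph above describes: a failed ante step cannot leak state changes into the main store.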
-### RunTx[​](#runtx "Direct link to RunTx") +```go expandable +package baseapp + +import ( + + "context" + "fmt" + "sort" + "strconv" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "golang.org/x/exp/maps" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +type ( + execMode uint8 + + / StoreLoader defines a customizable function to control how we load the + / CommitMultiStore from disk. This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. + StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. 
+type BaseApp struct { + / initialized on creation + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional, e.g. for tips + + initChainer sdk.InitChainer / ABCI InitChain handler + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. 
dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - voteExtensionState: Used for ExtendVote and VerifyVoteExtension, which is + / set based on the previous block's state. This state is never committed. In + / case of multiple rounds, the state is always reset to the previous block's + / state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + voteExtensionState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. + interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. 
+ minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. 
+func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs base app will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. 
+func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) -`RunTx` is called from `CheckTx`/`Finalizeblock` to handle the transaction, with `execModeCheck` or `execModeFinalize` as parameter to differentiate between the two modes of execution. Note that when `RunTx` receives a transaction, it has already been decoded. +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} -The first thing `RunTx` does upon being called is to retrieve the `context`'s `CacheMultiStore` by calling the `getContextForTx()` function with the appropriate mode (either `runTxModeCheck` or `execModeFinalize`). 
This `CacheMultiStore` is a branch of the main store, with cache functionality (for query requests), instantiated during `FinalizeBlock` for transaction execution and during the `Commit` of the previous block for `CheckTx`. After that, two `defer func()` are called for [`gas`](/v0.50/learn/beginner/gas-fees) management. They are executed when `runTx` returns and make sure `gas` is actually consumed, and will throw errors, if any. +} +} -After that, `RunTx()` calls `ValidateBasic()`, when available and for backward compatibility, on each `sdk.Msg`in the `Tx`, which runs preliminary *stateless* validity checks. If any `sdk.Msg` fails to pass `ValidateBasic()`, `RunTx()` returns with an error. +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. +func (app *BaseApp) -Then, the [`anteHandler`](#antehandler) of the application is run (if it exists). In preparation of this step, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function. +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} -baseapp/baseapp.go +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} -``` -loading... -``` +} +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/baseapp.go#L663-L680) +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) -This allows `RunTx` not to commit the changes made to the state during the execution of `anteHandler` if it ends up failing. 
It also prevents the module implementing the `anteHandler` from writing to state, which is an important part of the [object-capabilities](/v0.50/learn/advanced/ocap) of the Cosmos SDK. +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} -Finally, the [`RunMsgs()`](#runmsgs) function is called to process the `sdk.Msg`s in the `Tx`. In preparation of this step, just like with the `anteHandler`, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function. +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) -### AnteHandler[​](#antehandler "Direct link to AnteHandler") +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := maps.Keys(keys) -The `AnteHandler` is a special handler that implements the `AnteHandler` interface and is used to authenticate the transaction before the transaction's internal messages are processed. +sort.Strings(skeys) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} -types/handler.go +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. 
+func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. +func (app *BaseApp) + +CommitMultiStore() + +storetypes.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ ChainID returns the chainID of the app. +func (app *BaseApp) + +ChainID() + +string { + return app.chainID +} + +/ AnteHandler returns the AnteHandler of the app. +func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. 
In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. +func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + emptyHeader := cmtproto.Header{ + ChainID: app.chainID +} + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + +app.Seal() + if app.cms == nil { + return errors.New("commit multi-store must not be nil") +} + +return app.cms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. 
+func (app *BaseApp) + +setState(mode execMode, header cmtproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger).WithStreamingManager(app.streamingManager), +} + switch mode { + case execModeCheck: + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + +app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeVoteExtension: + app.voteExtensionState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ GetFinalizeBlockStateCtx returns the Context associated with the FinalizeBlock +/ state. This Context can be used to write data derived from processing vote +/ extensions to application state during ProcessProposal. +/ +/ NOTE: +/ - Do NOT use or write to state using this Context unless you intend for +/ that state to be committed. +/ - Do NOT use or write to state using this Context on the first block. +func (app *BaseApp) + +GetFinalizeBlockStateCtx() + +sdk.Context { + return app.finalizeBlockState.ctx +} + +/ SetCircuitBreaker sets the circuit breaker for the BaseApp. +/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. +func (app *BaseApp) + +SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") +} + +app.msgServiceRouter.SetCircuit(cb) +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. 
+func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) + +cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ +} + +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(fmt.Errorf("consensus key is nil: %w", err)) +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the BaseApp's param +/ store. +/ +/ NOTE: We're explicitly not storing the CometBFT app_version in the param store. +/ It's stored instead in the x/upgrade store, with its own bump logic. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. 
+func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. + expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. 
+func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return err +} + +} + +return nil +} + +func (app *BaseApp) + +getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState +} +} + +func (app *BaseApp) + +getBlockGasMeter(ctx sdk.Context) + +storetypes.GasMeter { + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) +} + +return storetypes.NewInfiniteGasMeter() +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode execMode, txBytes []byte) + +sdk.Context { + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.ctx. + WithTxBytes(txBytes) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. 
+func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]interface{ +}{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(storetypes.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +func (app *BaseApp) + +beginBlock(req *abci.RequestFinalizeBlock) + +sdk.BeginBlock { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.ctx) + if err != nil { + panic(err) +} + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" +}, + ) +} + +resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) +} + +return resp +} + +func (app *BaseApp) + +deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + +return resp +} + +resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), 
+} + +return resp +} + +/ endBlock is an application-defined function that is called after transactions +/ have been processed in FinalizeBlock. +func (app *BaseApp) + +endBlock(ctx context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.ctx) + if err != nil { + panic(err) +} + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" +}, + ) +} + +eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + +endblock = eb +} + +return endblock, nil +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +func (app *BaseApp) + +runTx(mode execMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). + if mode == execModeFinalize { + defer consumeBlockGas() +} + +tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. 
+ / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, err +} + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + if mode == execModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err +} + +} + +else if mode == execModeFinalize { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. 
+ msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) +} + if err == nil { + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if err != nil { + return gInfo, nil, anteEvents, err +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) +} + +events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + + +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses +}) +} + +func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) + +sdk.Events { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + signers, err := cdc.GetMsgV2Signers(msgV2) + if err != nil { + panic(err) +} + if len(signers) > 0 && signers[0] != nil { + addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0]) + if err != nil { + panic(err) +} + +msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr)) +} + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName)) +} + +} + +return sdk.Events{ + msgEvent +}.AppendEvents(events) +} + +/ PrepareProposalVerifyTx performs transaction verification when a proposer is +/ creating a block proposal during PrepareProposal. Any state committed to the +/ PrepareProposal state internally will be discarded. will be +/ returned if the transaction cannot be encoded. will be returned if +/ the transaction is valid, otherwise will be returned. +func (app *BaseApp) + +PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) { + bz, err := app.txEncoder(tx) + if err != nil { + return nil, err +} + + _, _, _, err = app.runTx(execModePrepareProposal, bz) + if err != nil { + return nil, err +} + +return bz, nil +} + +/ ProcessProposalVerifyTx performs transaction verification when receiving a +/ block proposal during ProcessProposal. Any state committed to the +/ ProcessProposal state internally will be discarded. 
(nil, err) will be
+/ returned if the transaction cannot be decoded. (tx, nil) will be returned if
+/ the transaction is valid, otherwise (nil, err) will be returned.
+func (app *BaseApp) ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
+  tx, err := app.txDecoder(txBz)
+  if err != nil {
+    return nil, err
+  }
+
+  _, _, _, err = app.runTx(execModeProcessProposal, txBz)
+  if err != nil {
+    return nil, err
+  }
+
+  return tx, nil
+}
+
+/ Close is called in the start cmd to gracefully clean up resources.
+func (app *BaseApp) Close() error {
+  return nil
+}
```
-loading...
-```
-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/handler.go#L6-L8)
+This allows `RunTx` not to commit the changes made to the state during the execution of the `anteHandler` if it ends up failing. It also prevents the module implementing the `anteHandler` from writing to state, which is an important part of the [object-capabilities](/docs/sdk/v0.50/learn/advanced/ocap) model of the Cosmos SDK.
+
+Finally, the [`RunMsgs()`](#runmsgs) function is called to process the `sdk.Msg`s in the `Tx`. In preparation for this step, just like with the `anteHandler`, both the `checkState`/`finalizeBlockState`'s `context` and the `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function.
+
+### AnteHandler
+
+The `AnteHandler` is a special handler that implements the `AnteHandler` interface and is used to authenticate the transaction before the transaction's internal messages are processed.
+
+```go expandable
+package types
+
+/ Handler defines the core of the state transition function of an application.
+type Handler func(ctx Context, msg Msg) (*Result, error)
+
+/ AnteHandler authenticates transactions, before their internal messages are handled.
+/ If newCtx.IsZero(), ctx is used instead.
+type AnteHandler func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error)
+
+/ PostHandler like AnteHandler but it executes after RunMsgs.
Runs on success +/ or failure and enables use cases like gas refunding. +type PostHandler func(ctx Context, tx Tx, simulate, success bool) (newCtx Context, err error) + +/ AnteDecorator wraps the next AnteHandler to perform custom pre-processing. +type AnteDecorator interface { + AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) +} + +/ PostDecorator wraps the next PostHandler to perform custom post-processing. +type PostDecorator interface { + PostHandle(ctx Context, tx Tx, simulate, success bool, next PostHandler) (newCtx Context, err error) +} + +/ ChainAnteDecorators ChainDecorator chains AnteDecorators together with each AnteDecorator +/ wrapping over the decorators further along chain and returns a single AnteHandler. +/ +/ NOTE: The first element is outermost decorator, while the last element is innermost +/ decorator. Decorator ordering is critical since some decorators will expect +/ certain checks and updates to be performed (e.g. the Context) + +before the decorator +/ is run. These expectations should be documented clearly in a CONTRACT docline +/ in the decorator's godoc. +/ +/ NOTE: Any application that uses GasMeter to limit transaction processing cost +/ MUST set GasMeter with the FIRST AnteDecorator. Failing to do so will cause +/ transactions to be processed with an infinite gasmeter and open a DOS attack vector. +/ Use `ante.SetUpContextDecorator` or a custom Decorator with similar functionality. +/ Returns nil when no AnteDecorator are supplied. 
+func ChainAnteDecorators(chain ...AnteDecorator) + +AnteHandler { + if len(chain) == 0 { + return nil +} + handlerChain := make([]AnteHandler, len(chain)+1) + / set the terminal AnteHandler decorator + handlerChain[len(chain)] = func(ctx Context, tx Tx, simulate bool) (Context, error) { + return ctx, nil +} + for i := 0; i < len(chain); i++ { + ii := i + handlerChain[ii] = func(ctx Context, tx Tx, simulate bool) (Context, error) { + return chain[ii].AnteHandle(ctx, tx, simulate, handlerChain[ii+1]) +} + +} + +return handlerChain[0] +} + +/ ChainPostDecorators chains PostDecorators together with each PostDecorator +/ wrapping over the decorators further along chain and returns a single PostHandler. +/ +/ NOTE: The first element is outermost decorator, while the last element is innermost +/ decorator. Decorator ordering is critical since some decorators will expect +/ certain checks and updates to be performed (e.g. the Context) + +before the decorator +/ is run. These expectations should be documented clearly in a CONTRACT docline +/ in the decorator's godoc. +func ChainPostDecorators(chain ...PostDecorator) + +PostHandler { + if len(chain) == 0 { + return nil +} + handlerChain := make([]PostHandler, len(chain)+1) + / set the terminal PostHandler decorator + handlerChain[len(chain)] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) { + return ctx, nil +} + for i := 0; i < len(chain); i++ { + ii := i + handlerChain[ii] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) { + return chain[ii].PostHandle(ctx, tx, simulate, success, handlerChain[ii+1]) +} + +} + +return handlerChain[0] +} + +/ Terminator AnteDecorator will get added to the chain to simplify decorator code +/ Don't need to check if next == nil further up the chain +/ +/ ______ +/ <((((((\\\ +/ / . +}\ +/ ;--..--._| +} +/ (\ '--/\--' ) +/ \\ | '-' :'| +/ \\ . -==- .-| +/ \\ \.__.' \--._ +/ [\\ __.--| / _/'--. +/ \ \\ .'-._ ('-----'/ __/ \ +/ \ \\ / __>| | '--. 
| +/ \ \\ | \ | / / / +/ \ '\ / \ | | _/ / +/ \ \ \ | | / / +/ snd \ \ \ / +type Terminator struct{ +} + +/ AnteHandle returns the provided Context and nil error +func (t Terminator) + +AnteHandle(ctx Context, _ Tx, _ bool, _ AnteHandler) (Context, error) { + return ctx, nil +} + +/ PostHandle returns the provided Context and nil error +func (t Terminator) + +PostHandle(ctx Context, _ Tx, _, _ bool, _ PostHandler) (Context, error) { + return ctx, nil +} +``` The `AnteHandler` is theoretically optional, but still a very important component of public blockchain networks. It serves 3 primary purposes: -* Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](/v0.50/learn/advanced/transactions#transaction-generation) checking. -* Perform preliminary *stateful* validity checks like ensuring signatures are valid or that the sender has enough funds to pay for fees. -* Play a role in the incentivisation of stakeholders via the collection of transaction fees. +- Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](/docs/sdk/v0.50/learn/advanced/transactions#transaction-generation) checking. +- Perform preliminary _stateful_ validity checks like ensuring signatures are valid or that the sender has enough funds to pay for fees. +- Play a role in the incentivisation of stakeholders via the collection of transaction fees. -`BaseApp` holds an `anteHandler` as parameter that is initialized in the [application's constructor](/v0.50/learn/beginner/app-anatomy#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/ante/ante.go). 
+`BaseApp` holds an `anteHandler` as a parameter that is initialized in the [application's constructor](/docs/sdk/v0.50/learn/beginner/app-anatomy#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/ante/ante.go).

-Click [here](/v0.50/learn/beginner/gas-fees#antehandler) for more on the `anteHandler`.
+Click [here](/docs/sdk/v0.50/learn/beginner/gas-fees#antehandler) for more on the `anteHandler`.

-### RunMsgs[​](#runmsgs "Direct link to RunMsgs")
+### RunMsgs

`RunMsgs` is called from `RunTx` with `runTxModeCheck` as a parameter to check the existence of a route for each message in the transaction, and with `execModeFinalize` to actually process the `sdk.Msg`s.

-First, it retrieves the `sdk.Msg`'s fully-qualified type name, by checking the `type_url` of the Protobuf `Any` representing the `sdk.Msg`. Then, using the application's [`msgServiceRouter`](#msg-service-router), it checks for the existence of `Msg` service method related to that `type_url`. At this point, if `mode == runTxModeCheck`, `RunMsgs` returns. Otherwise, if `mode == execModeFinalize`, the [`Msg` service](/v0.50/build/building-modules/msg-services) RPC is executed, before `RunMsgs` returns.
+First, it retrieves the `sdk.Msg`'s fully-qualified type name by checking the `type_url` of the Protobuf `Any` representing the `sdk.Msg`. Then, using the application's [`msgServiceRouter`](#msg-service-router), it checks for the existence of a `Msg` service method related to that `type_url`. At this point, if `mode == runTxModeCheck`, `RunMsgs` returns. Otherwise, if `mode == execModeFinalize`, the [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services) RPC is executed before `RunMsgs` returns.
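The routing step described above can be sketched as a plain map keyed by the message's fully-qualified type URL. This is a simplified, hypothetical stand-in for the SDK's `msgServiceRouter` (the names `newRouter`, `register`, and `handler` are illustrative, not the SDK's API):

```go
package main

import "fmt"

// handlerFn is a stand-in for a Msg service handler.
type handlerFn func(msg string) (string, error)

// msgServiceRouter sketches type_url based routing: handlers are registered
// under a fully-qualified proto type URL and looked up per message.
type msgServiceRouter struct {
	routes map[string]handlerFn
}

func newRouter() *msgServiceRouter {
	return &msgServiceRouter{routes: map[string]handlerFn{}}
}

func (r *msgServiceRouter) register(typeURL string, h handlerFn) {
	r.routes[typeURL] = h
}

// handler returns nil when no route exists, which is the condition runMsgs
// reports as an unknown-request error.
func (r *msgServiceRouter) handler(typeURL string) handlerFn {
	return r.routes[typeURL]
}

func main() {
	r := newRouter()
	r.register("/cosmos.bank.v1beta1.MsgSend", func(msg string) (string, error) {
		return "executed " + msg, nil
	})

	if h := r.handler("/cosmos.bank.v1beta1.MsgSend"); h != nil {
		out, _ := h("MsgSend")
		fmt.Println(out)
	}
	if r.handler("/unknown.Msg") == nil {
		fmt.Println("can't route message")
	}
}
```

In check mode the lookup alone is enough to accept or reject the message; only in finalize (or simulate) mode is the resolved handler actually invoked.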
-### PostHandler[​](#posthandler "Direct link to PostHandler") +### PostHandler `PostHandler` is similar to `AnteHandler`, but it, as the name suggests, executes custom post tx processing logic after [`RunMsgs`](#runmsgs) is called. `PostHandler` receives the `Result` of the `RunMsgs` in order to enable this customizable behavior. @@ -311,164 +3344,6332 @@ Like `AnteHandler`s, `PostHandler`s are theoretically optional. Other use cases like unused gas refund can also be enabled by `PostHandler`s. -x/auth/posthandler/post.go +```go expandable +package posthandler -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/posthandler/post.go#L1-L15) + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ HandlerOptions are the options required for constructing a default SDK PostHandler. +type HandlerOptions struct{ +} + +/ NewPostHandler returns an empty PostHandler chain. +func NewPostHandler(_ HandlerOptions) (sdk.PostHandler, error) { + postDecorators := []sdk.PostDecorator{ +} + +return sdk.ChainPostDecorators(postDecorators...), nil +} +``` Note, when `PostHandler`s fail, the state from `runMsgs` is also reverted, effectively making the transaction fail. -## Other ABCI Messages[​](#other-abci-messages "Direct link to Other ABCI Messages") +## Other ABCI Messages -### InitChain[​](#initchain "Direct link to InitChain") +### InitChain The [`InitChain` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when the chain is first started. It is mainly used to **initialize** parameters and state like: -* [Consensus Parameters](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#consensus-parameters) via `setConsensusParams`. -* [`checkState` and `finalizeBlockState`](#state-updates) via `setState`. 
-* The [block gas meter](/v0.50/learn/beginner/gas-fees#block-gas-meter), with infinite gas to process genesis transactions.
+- [Consensus Parameters](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#consensus-parameters) via `setConsensusParams`.
+- [`checkState` and `finalizeBlockState`](#state-updates) via `setState`.
+- The [block gas meter](/docs/sdk/v0.50/learn/beginner/gas-fees#block-gas-meter), with infinite gas to process genesis transactions.

-Finally, the `InitChain(req abci.RequestInitChain)` method of `BaseApp` calls the [`initChainer()`](/v0.50/learn/beginner/app-anatomy#initchainer) of the application in order to initialize the main state of the application from the `genesis file` and, if defined, call the [`InitGenesis`](/v0.50/build/building-modules/genesis#initgenesis) function of each of the application's modules.
+Finally, the `InitChain(req abci.RequestInitChain)` method of `BaseApp` calls the [`initChainer()`](/docs/sdk/v0.50/learn/beginner/app-anatomy#initchainer) of the application in order to initialize the main state of the application from the `genesis file` and, if defined, call the [`InitGenesis`](/docs/sdk/v0.50/documentation/module-system/genesis#initgenesis) function of each of the application's modules.

-### FinalizeBlock[​](#finalizeblock "Direct link to FinalizeBlock")
+### FinalizeBlock

The [`FinalizeBlock` ABCI message](https://github.com/cometbft/cometbft/blob/v0.38.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when a block proposal created by the correct proposer is received. The previous `BeginBlock`, `DeliverTx` and `EndBlock` calls are private methods on the `BaseApp` struct.

-baseapp/abci.go
-
-```
-loading...
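Before the full listing, the height rule that `FinalizeBlock` enforces through `validateFinalizeBlockHeight` can be isolated into a small self-contained sketch. The helper names `expectedHeight` and `validateHeight` are illustrative; in the SDK this logic lives as a method on `BaseApp`:

```go
package main

import "fmt"

// expectedHeight reproduces the rule in validateFinalizeBlockHeight: a fresh
// chain (no commits yet) with a configured initial height > 1 must start
// there; otherwise heights advance by exactly one past the last commit.
func expectedHeight(lastBlockHeight, initialHeight int64) int64 {
	if lastBlockHeight == 0 && initialHeight > 1 {
		return initialHeight
	}
	return lastBlockHeight + 1
}

// validateHeight mirrors the checks applied to RequestFinalizeBlock.Height.
func validateHeight(reqHeight, lastBlockHeight, initialHeight int64) error {
	if reqHeight < 1 {
		return fmt.Errorf("invalid height: %d", reqHeight)
	}
	if want := expectedHeight(lastBlockHeight, initialHeight); reqHeight != want {
		return fmt.Errorf("invalid height: %d; expected: %d", reqHeight, want)
	}
	return nil
}

func main() {
	fmt.Println(expectedHeight(0, 100)) // fresh chain configured to start at 100
	fmt.Println(expectedHeight(41, 1))  // normal progression: last commit + 1
	fmt.Println(validateHeight(5, 3, 1)) // gap in heights is rejected
}
```

Note the asymmetry: the initial height only matters when nothing has been committed yet; once a commit exists, only strictly sequential heights are accepted.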
+```go expandable +package baseapp + +import ( + + "context" + "crypto/sha256" + "fmt" + "os" + "sort" + "strings" + "syscall" + "time" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/store/rootmulti" + snapshottypes "cosmossdk.io/store/snapshots/types" + storetypes "cosmossdk.io/store/types" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc/codes" + grpcstatus "google.golang.org/grpc/status" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ Supported ABCI Query prefixes and paths +const ( + QueryPathApp = "app" + QueryPathCustom = "custom" + QueryPathP2P = "p2p" + QueryPathStore = "store" + + QueryPathBroadcastTx = "/cosmos.tx.v1beta1.Service/BroadcastTx" +) + +func (app *BaseApp) + +InitChain(req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + if req.ChainId != app.chainID { + return nil, fmt.Errorf("invalid chain-id on InitChain; expected: %s, got: %s", app.chainID, req.ChainId) +} + + / On a new chain, we consider the init chain block height as 0, even though + / req.InitialHeight is 1 by default. + initHeader := cmtproto.Header{ + ChainID: req.ChainId, + Time: req.Time +} + +app.initialHeight = req.InitialHeight + + app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId) + + / Set the initial height, which will be used to determine if we are proposing + / or processing the first block or not. 
+ app.initialHeight = req.InitialHeight + + / if req.InitialHeight is > 1, then we set the initial version on all stores + if req.InitialHeight > 1 { + initHeader.Height = req.InitialHeight + if err := app.cms.SetInitialVersion(req.InitialHeight); err != nil { + return nil, err +} + +} + + / initialize states with a correct header + app.setState(execModeFinalize, initHeader) + +app.setState(execModeCheck, initHeader) + + / Store the consensus params in the BaseApp's param store. Note, this must be + / done after the finalizeBlockState and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + err := app.StoreConsensusParams(app.finalizeBlockState.ctx, *req.ConsensusParams) + if err != nil { + return nil, err +} + +} + +defer func() { + / InitChain represents the state of the application BEFORE the first block, + / i.e. the genesis block. This means that when processing the app's InitChain + / handler, the block height is zero by default. However, after Commit is called + / the height needs to reflect the true block height. 
+ initHeader.Height = req.InitialHeight + app.checkState.ctx = app.checkState.ctx.WithBlockHeader(initHeader) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithBlockHeader(initHeader) +}() + if app.initChainer == nil { + return &abci.ResponseInitChain{ +}, nil +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithBlockGasMeter(storetypes.NewInfiniteGasMeter()) + +res, err := app.initChainer(app.finalizeBlockState.ctx, req) + if err != nil { + return nil, err +} + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + return nil, fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ) +} + +sort.Sort(abci.ValidatorUpdates(req.Validators)) + +sort.Sort(abci.ValidatorUpdates(res.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + return nil, fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i) +} + +} + +} + + / In the case of a new chain, AppHash will be the hash of an empty string. + / During an upgrade, it'll be the hash of the last committed block. + var appHash []byte + if !app.LastCommitID().IsZero() { + appHash = app.LastCommitID().Hash +} + +else { + / $ echo -n '' | sha256sum + / e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 + emptyHash := sha256.Sum256([]byte{ +}) + +appHash = emptyHash[:] +} + + / NOTE: We don't commit, but FinalizeBlock for block InitialHeight starts from + / this FinalizeBlockState. 
+ return &abci.ResponseInitChain{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: appHash, +}, nil +} + +func (app *BaseApp) + +Info(req *abci.RequestInfo) (*abci.ResponseInfo, error) { + lastCommitID := app.cms.LastCommitID() + +return &abci.ResponseInfo{ + Data: app.name, + Version: app.version, + AppVersion: app.appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +}, nil +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. +func (app *BaseApp) + +Query(_ context.Context, req *abci.RequestQuery) (resp *abci.ResponseQuery, err error) { + / add panic recovery for all queries + / + / Ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + resp = sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + +defer telemetry.MeasureSince(time.Now(), req.Path) + if req.Path == QueryPathBroadcastTx { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "can't route a broadcast tx message"), app.trace), nil +} + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req), nil +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), app.trace), nil +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + resp = handleQueryApp(app, path, req) + case QueryPathStore: + resp = handleQueryStore(app, 
path, *req) + case QueryPathP2P: + resp = handleQueryP2P(app, path) + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +return resp, nil +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ListSnapshots(req *abci.RequestListSnapshots) (*abci.ResponseListSnapshots, error) { + resp := &abci.ResponseListSnapshots{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp, nil +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return nil, err +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to convert ABCI snapshots", "err", err) + +return nil, err +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp, nil +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req *abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) { + if app.snapshotManager == nil { + return &abci.ResponseLoadSnapshotChunk{ +}, nil +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return nil, err +} + +return &abci.ResponseLoadSnapshotChunk{ + Chunk: chunk +}, nil +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req *abci.RequestOfferSnapshot) (*abci.ResponseOfferSnapshot, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT_FORMAT +}, nil + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil + + default: + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a + / different snapshot, so we ask CometBFT to abort all snapshot restoration. + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +ApplySnapshotChunk(req *abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +}, nil + + default: + app.logger.Error("failed to restore snapshot", "err", err) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req *abci.RequestCheckTx) (*abci.ResponseCheckTx, error) { + var mode execMode + switch { + case req.Type == abci.CheckTxType_New: + mode = execModeCheck + case req.Type == abci.CheckTxType_Recheck: + mode = execModeReCheck + + default: + return nil, fmt.Errorf("unknown RequestCheckTx type: %s", req.Type) +} + +gInfo, result, anteEvents, err := app.runTx(mode, req.Tx) + if err != nil { + return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace), nil +} + +return &abci.ResponseCheckTx{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +}, nil +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req *abci.RequestPrepareProposal) (resp *abci.ResponsePrepareProposal, err error) { + if app.prepareProposal == nil { + return nil, errors.New("PrepareProposal handler not set") +} + + / Always reset state given that PrepareProposal can timeout and be called + / again in a subsequent round. 
+ header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + +app.setState(execModePrepareProposal, header) + + / CometBFT must never call PrepareProposal with a height of 0. + / + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("PrepareProposal called with invalid height") +} + +app.prepareProposalState.ctx = app.getContextForProposal(app.prepareProposalState.ctx, req.Height). + WithVoteInfos(toVoteInfo(req.LocalLastCommit.Votes)). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithProposer(req.ProposerAddress). + WithExecMode(sdk.ExecModePrepareProposal). + WithCometInfo(prepareProposalInfo{ + req +}) + +app.prepareProposalState.ctx = app.prepareProposalState.ctx. + WithConsensusParams(app.GetConsensusParams(app.prepareProposalState.ctx)). + WithBlockGasMeter(app.getBlockGasMeter(app.prepareProposalState.ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = &abci.ResponsePrepareProposal{ +} + +} + +}() + +resp, err = app.prepareProposal(app.prepareProposalState.ctx, req) + if err != nil { + app.logger.Error("failed to prepare proposal", "height", req.Height, "error", err) + +return &abci.ResponsePrepareProposal{ +}, nil +} + +return resp, nil +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. 
In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ CometBFT that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { + if app.processProposal == nil { + return nil, errors.New("ProcessProposal handler not set") +} + + / CometBFT must never call ProcessProposal with a height of 0. + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("ProcessProposal called with invalid height") +} + + / Always reset state given that ProcessProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + +app.setState(execModeProcessProposal, header) + + / Since the application can get access to FinalizeBlock state and write to it, + / we must be sure to reset it in case ProcessProposal timeouts and is called + / again in a subsequent round. However, we only want to do this after we've + / processed the first block, as we want to avoid overwriting the finalizeState + / after state changes during InitChain. 
+ if req.Height > app.initialHeight { + app.setState(execModeFinalize, header) +} + +app.processProposalState.ctx = app.getContextForProposal(app.processProposalState.ctx, req.Height). + WithVoteInfos(req.ProposedLastCommit.Votes). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithCometInfo(cometInfo{ + ProposerAddress: req.ProposerAddress, + ValidatorsHash: req.NextValidatorsHash, + Misbehavior: req.Misbehavior, + LastCommit: req.ProposedLastCommit +}). + WithExecMode(sdk.ExecModeProcessProposal) + +app.processProposalState.ctx = app.processProposalState.ctx. + WithConsensusParams(app.GetConsensusParams(app.processProposalState.ctx)). + WithBlockGasMeter(app.getBlockGasMeter(app.processProposalState.ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +}() + +resp, err = app.processProposal(app.processProposalState.ctx, req) + if err != nil { + app.logger.Error("failed to process proposal", "height", req.Height, "error", err) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +return resp, nil +} + +/ ExtendVote implements the ExtendVote ABCI method and returns a ResponseExtendVote. +/ It calls the application's ExtendVote handler which is responsible for performing +/ application-specific business logic when sending a pre-commit for the NEXT +/ block height. The extensions response may be non-deterministic but must always +/ be returned, even if empty. +/ +/ Agreed upon vote extensions are made available to the proposer of the next +/ height and are committed in the subsequent height, i.e. H+2. 
An error is +/ returned if vote extensions are not enabled or if extendVote fails or panics. +func (app *BaseApp) + +ExtendVote(_ context.Context, req *abci.RequestExtendVote) (resp *abci.ResponseExtendVote, err error) { + / Always reset state given that ExtendVote and VerifyVoteExtension can timeout + / and be called again in a subsequent round. + emptyHeader := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height +} + +app.setState(execModeVoteExtension, emptyHeader) + if app.extendVote == nil { + return nil, errors.New("application ExtendVote handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(app.voteExtensionState.ctx) + if cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight <= 0 { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to ExtendVote at height %d", req.Height) +} + +app.voteExtensionState.ctx = app.voteExtensionState.ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVoteExtension) + + / add a deferred recover handler in case extendVote panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in ExtendVote", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +err = fmt.Errorf("recovered application panic in ExtendVote: %v", r) +} + +}() + +resp, err = app.extendVote(app.voteExtensionState.ctx, req) + if err != nil { + app.logger.Error("failed to extend vote", "height", req.Height, "error", err) + +return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} + +return resp, err +} + +/ VerifyVoteExtension implements the VerifyVoteExtension ABCI method and returns +/ a ResponseVerifyVoteExtension. 
It calls the applications' VerifyVoteExtension +/ handler which is responsible for performing application-specific business +/ logic in verifying a vote extension from another validator during the pre-commit +/ phase. The response MUST be deterministic. An error is returned if vote +/ extensions are not enabled or if verifyVoteExt fails or panics. +func (app *BaseApp) + +VerifyVoteExtension(req *abci.RequestVerifyVoteExtension) (resp *abci.ResponseVerifyVoteExtension, err error) { + if app.verifyVoteExt == nil { + return nil, errors.New("application VerifyVoteExtension handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(app.voteExtensionState.ctx) + if cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight <= 0 { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to VerifyVoteExtension at height %d", req.Height) +} + + / add a deferred recover handler in case verifyVoteExt panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in VerifyVoteExtension", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "validator", fmt.Sprintf("%X", req.ValidatorAddress), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in VerifyVoteExtension: %v", r) +} + +}() + +resp, err = app.verifyVoteExt(app.voteExtensionState.ctx, req) + if err != nil { + app.logger.Error("failed to verify vote extension", "height", req.Height, "error", err) + +return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_REJECT +}, nil +} + +return resp, err +} + +/ FinalizeBlock will execute the block proposal provided by RequestFinalizeBlock. +/ Specifically, it will execute an application's BeginBlock (if defined), followed +/ by the transactions in the proposal, finally followed by the application's +/ EndBlock (if defined). +/ +/ For each raw transaction, i.e. 
a byte slice, BaseApp will only execute it if +/ it adheres to the sdk.Tx interface. Otherwise, the raw transaction will be +/ skipped. This is to support compatibility with proposers injecting vote +/ extensions into the proposal, which should not themselves be executed in cases +/ where they adhere to the sdk.Tx interface. +func (app *BaseApp) + +FinalizeBlock(req *abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) { + var events []abci.Event + if err := app.validateFinalizeBlockHeight(req); err != nil { + return nil, err +} + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(storetypes.TraceContext( + map[string]any{"blockHeight": req.Height +}, + )) +} + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + + / Initialize the FinalizeBlock state. If this is the first block, it should + / already be initialized in InitChain. Otherwise app.finalizeBlockState will be + / nil, since it is reset on Commit. + if app.finalizeBlockState == nil { + app.setState(execModeFinalize, header) +} + +else { + / In the first block, app.finalizeBlockState.ctx will already be initialized + / by InitChain. Context is now updated with Header information. + app.finalizeBlockState.ctx = app.finalizeBlockState.ctx. + WithBlockHeader(header). + WithBlockHeight(req.Height) +} + gasMeter := app.getBlockGasMeter(app.finalizeBlockState.ctx) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash). + WithConsensusParams(app.GetConsensusParams(app.finalizeBlockState.ctx)). + WithVoteInfos(req.DecidedLastCommit.Votes). + WithExecMode(sdk.ExecModeFinalize) + if app.checkState != nil { + app.checkState.ctx = app.checkState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash) +} + beginBlock := app.beginBlock(req) + +events = append(events, beginBlock.Events...) 
+ + / Iterate over all raw transactions in the proposal and attempt to execute + / them, gathering the execution results. + / + / NOTE: Not all raw transactions may adhere to the sdk.Tx interface, e.g. + / vote extensions, so skip those. + txResults := make([]*abci.ExecTxResult, 0, len(req.Txs)) + for _, rawTx := range req.Txs { + if _, err := app.txDecoder(rawTx); err == nil { + txResults = append(txResults, app.deliverTx(rawTx)) +} + +} + if app.finalizeBlockState.ms.TracingEnabled() { + app.finalizeBlockState.ms = app.finalizeBlockState.ms.SetTracingContext(nil).(storetypes.CacheMultiStore) +} + +endBlock, err := app.endBlock(app.finalizeBlockState.ctx) + if err != nil { + return nil, err +} + +events = append(events, endBlock.Events...) + cp := app.GetConsensusParams(app.finalizeBlockState.ctx) + +return &abci.ResponseFinalizeBlock{ + Events: events, + TxResults: txResults, + ValidatorUpdates: endBlock.ValidatorUpdates, + ConsensusParamUpdates: &cp, + AppHash: app.workingHash(), +}, nil +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. 
+func (app *BaseApp) + +Commit() (*abci.ResponseCommit, error) { + header := app.finalizeBlockState.ctx.BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + if app.precommiter != nil { + app.precommiter(app.finalizeBlockState.ctx) +} + +rms, ok := app.cms.(*rootmulti.Store) + if ok { + rms.SetCommitHeader(header) +} + +app.cms.Commit() + resp := &abci.ResponseCommit{ + RetainHeight: retainHeight, +} + abciListeners := app.streamingManager.ABCIListeners + if len(abciListeners) > 0 { + ctx := app.finalizeBlockState.ctx + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + for _, abciListener := range abciListeners { + if err := abciListener.ListenCommit(ctx, *resp, changeSet); err != nil { + app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err) +} + +} + +} + + / Reset the CheckTx state to the latest committed. + / + / NOTE: This is safe because CometBFT holds a lock on the mempool for + / Commit. Use the header from this latest block. + app.setState(execModeCheck, header) + +app.finalizeBlockState = nil + if app.prepareCheckStater != nil { + app.prepareCheckStater(app.checkState.ctx) +} + +var halt bool + switch { + case app.haltHeight > 0 && uint64(header.Height) >= app.haltHeight: + halt = true + case app.haltTime > 0 && header.Time.Unix() >= int64(app.haltTime): + halt = true +} + if halt { + / Halt the binary and allow CometBFT to receive the ResponseCommit + / response with the commit ID hash. This will allow the node to successfully + / restart and process blocks assuming the halt configuration has been + / reset or moved to a more distant value. + app.halt() +} + +go app.snapshotManager.SnapshotIfApplicable(header.Height) + +return resp, nil +} + +/ workingHash gets the apphash that will be finalized in commit. +/ These writes will be persisted to the root multi-store (app.cms) + +and flushed to +/ disk in the Commit phase. 
This means when the ABCI client requests Commit(), the application
+/ state transitions will have been flushed to disk; as a result, we already have
+/ an application Merkle root.
+func (app *BaseApp) workingHash() []byte {
+	/ Write the FinalizeBlock state into branched storage and commit the MultiStore.
+	/ The write to the FinalizeBlock state writes all state transitions to the root
+	/ MultiStore (app.cms), so when Commit() is called it persists those values.
+	app.finalizeBlockState.ms.Write()
+
+	/ Get the hash of all writes in order to return the apphash to CometBFT in FinalizeBlock.
+	commitHash := app.cms.WorkingHash()
+	app.logger.Debug("hash of all writes", "workingHash", fmt.Sprintf("%X", commitHash))
+
+	return commitHash
+}
+
+/ halt attempts to gracefully shut down the node via SIGINT and SIGTERM, falling
+/ back on os.Exit if both fail.
+func (app *BaseApp) halt() {
+	app.logger.Info("halting node per configuration", "height", app.haltHeight, "time", app.haltTime)
+
+	p, err := os.FindProcess(os.Getpid())
+	if err == nil {
+		/ attempt cascading signals in case SIGINT fails (os dependent)
+		sigIntErr := p.Signal(syscall.SIGINT)
+		sigTermErr := p.Signal(syscall.SIGTERM)
+		if sigIntErr == nil || sigTermErr == nil {
+			return
+		}
+	}
+
+	/ Resort to exiting immediately if the process could not be found or killed
+	/ via SIGINT/SIGTERM signals.
+ app.logger.Info("failed to send SIGINT/SIGTERM; exiting...") + +os.Exit(0) +} + +func handleQueryApp(app *BaseApp, path []string, req *abci.RequestQuery) *abci.ResponseQuery { + if len(path) >= 2 { + switch path[1] { + case "simulate": + txBytes := req.Data + + gInfo, res, err := app.Simulate(txBytes) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to simulate tx"), app.trace) +} + simRes := &sdk.SimulationResponse{ + GasInfo: gInfo, + Result: res, +} + +bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to JSON encode simulation response"), app.trace) +} + +return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: bz, +} + case "version": + return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: []byte(app.version), +} + +default: + return sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace) +} + +} + +return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrUnknownRequest, + "expected second parameter to be either 'simulate' or 'version', neither was present", + ), app.trace) +} + +func handleQueryStore(app *BaseApp, path []string, req abci.RequestQuery) *abci.ResponseQuery { + / "/store" prefix for store queries + queryable, ok := app.cms.(storetypes.Queryable) + if !ok { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "multi-store does not support queries"), app.trace) +} + +req.Path = "/" + strings.Join(path[1:], "/") + if req.Height <= 1 && req.Prove { + return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ), app.trace) +} + sdkReq := storetypes.RequestQuery(req) + +resp, err := queryable.Query(&sdkReq) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + 
+resp.Height = req.Height
+	abciResp := abci.ResponseQuery(*resp)
+
+	return &abciResp
+}
+
+func handleQueryP2P(app *BaseApp, path []string) *abci.ResponseQuery {
+	/ "/p2p" prefix for p2p queries
+	if len(path) < 4 {
+		return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "path should be p2p filter <addr|id> <parameter>"), app.trace)
+	}
+
+	var resp *abci.ResponseQuery
+
+	cmd, typ, arg := path[1], path[2], path[3]
+	switch cmd {
+	case "filter":
+		switch typ {
+		case "addr":
+			resp = app.FilterPeerByAddrPort(arg)
+		case "id":
+			resp = app.FilterPeerByID(arg)
+		}
+	default:
+		resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace)
+	}
+
+	return resp
+}
+
+/ SplitABCIQueryPath splits a string path using the delimiter '/'.
+/
+/ e.g. "this/is/funny" becomes []string{"this", "is", "funny"}
+func SplitABCIQueryPath(requestPath string) (path []string) {
+	path = strings.Split(requestPath, "/")
+
+	/ first element is empty string
+	if len(path) > 0 && path[0] == "" {
+		path = path[1:]
+	}
+
+	return path
+}
+
+/ FilterPeerByAddrPort filters peers by address/port.
+func (app *BaseApp) FilterPeerByAddrPort(info string) *abci.ResponseQuery {
+	if app.addrPeerFilter != nil {
+		return app.addrPeerFilter(info)
+	}
+
+	return &abci.ResponseQuery{}
+}
+
+/ FilterPeerByID filters peers by node ID.
+func (app *BaseApp) FilterPeerByID(info string) *abci.ResponseQuery {
+	if app.idPeerFilter != nil {
+		return app.idPeerFilter(info)
+	}
+
+	return &abci.ResponseQuery{}
+}
+
+/ getContextForProposal returns the correct Context for PrepareProposal and
+/ ProcessProposal. We use finalizeBlockState on the first block to be able to
+/ access any state changes made in InitChain.
+func (app *BaseApp) + +getContextForProposal(ctx sdk.Context, height int64) + +sdk.Context { + if height == app.initialHeight { + ctx, _ = app.finalizeBlockState.ctx.CacheContext() + + / clear all context data set during InitChain to avoid inconsistent behavior + ctx = ctx.WithBlockHeader(cmtproto.Header{ +}) + +return ctx +} + +return ctx +} + +func (app *BaseApp) + +handleQueryGRPC(handler GRPCQueryHandler, req *abci.RequestQuery) *abci.ResponseQuery { + ctx, err := app.CreateQueryContext(req.Height, req.Prove) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + +resp, err := handler(ctx, req) + if err != nil { + resp = sdkerrors.QueryResult(gRPCErrorToSDKError(err), app.trace) + +resp.Height = req.Height + return resp +} + +return resp +} + +func gRPCErrorToSDKError(err error) + +error { + status, ok := grpcstatus.FromError(err) + if !ok { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) +} + switch status.Code() { + case codes.NotFound: + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, err.Error()) + case codes.InvalidArgument: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.FailedPrecondition: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.Unauthenticated: + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, err.Error()) + +default: + return errorsmod.Wrap(sdkerrors.ErrUnknownRequest, err.Error()) +} +} + +func checkNegativeHeight(height int64) + +error { + if height < 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "cannot query with height < 0; please provide a valid height") +} + +return nil +} + +/ createQueryContext creates a new sdk.Context for a query, taking as args +/ the block height and whether the query needs a proof or not. 
+func (app *BaseApp) + +CreateQueryContext(height int64, prove bool) (sdk.Context, error) { + if err := checkNegativeHeight(height); err != nil { + return sdk.Context{ +}, err +} + + / use custom query multi-store if provided + qms := app.qms + if qms == nil { + qms = app.cms.(storetypes.MultiStore) +} + lastBlockHeight := qms.LatestVersion() + if lastBlockHeight == 0 { + return sdk.Context{ +}, errorsmod.Wrapf(sdkerrors.ErrInvalidHeight, "%s is not ready; please wait for first block", app.Name()) +} + if height > lastBlockHeight { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidHeight, + "cannot query with height in the future; please provide a valid height", + ) +} + + / when a client did not provide a query height, manually inject the latest + if height == 0 { + height = lastBlockHeight +} + if height <= 1 && prove { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ) +} + +cacheMS, err := qms.CacheMultiStoreWithVersion(height) + if err != nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrInvalidRequest, + "failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight, + ) +} + + / branch the commit multi-store for safety + ctx := sdk.NewContext(cacheMS, app.checkState.ctx.BlockHeader(), true, app.logger). + WithMinGasPrices(app.minGasPrices). + WithBlockHeight(height) + if height != lastBlockHeight { + rms, ok := app.cms.(*rootmulti.Store) + if ok { + cInfo, err := rms.GetCommitInfo(height) + if cInfo != nil && err == nil { + ctx = ctx.WithBlockTime(cInfo.Timestamp) +} + +} + +} + +return ctx, nil +} + +/ GetBlockRetentionHeight returns the height for which all blocks below this height +/ are pruned from CometBFT. 
Given a commitment height and a non-zero local +/ minRetainBlocks configuration, the retentionHeight is the smallest height that +/ satisfies: +/ +/ - Unbonding (safety threshold) + +time: The block interval in which validators +/ can be economically punished for misbehavior. Blocks in this interval must be +/ auditable e.g. by the light client. +/ +/ - Logical store snapshot interval: The block interval at which the underlying +/ logical store database is persisted to disk, e.g. every 10000 heights. Blocks +/ since the last IAVL snapshot must be available for replay on application restart. +/ +/ - State sync snapshots: Blocks since the oldest available snapshot must be +/ available for state sync nodes to catch up (oldest because a node may be +/ restoring an old snapshot while a new snapshot was taken). +/ +/ - Local (minRetainBlocks) + +config: Archive nodes may want to retain more or +/ all blocks, e.g. via a local config option min-retain-blocks. There may also +/ be a need to vary retention for other nodes, e.g. sentry nodes which do not +/ need historical blocks. +func (app *BaseApp) + +GetBlockRetentionHeight(commitHeight int64) + +int64 { + / pruning is disabled if minRetainBlocks is zero + if app.minRetainBlocks == 0 { + return 0 +} + minNonZero := func(x, y int64) + +int64 { + switch { + case x == 0: + return y + case y == 0: + return x + case x < y: + return x + + default: + return y +} + +} + + / Define retentionHeight as the minimum value that satisfies all non-zero + / constraints. All blocks below (commitHeight-retentionHeight) + +are pruned + / from CometBFT. + var retentionHeight int64 + + / Define the number of blocks needed to protect against misbehaving validators + / which allows light clients to operate safely. Note, we piggyback off the + / evidence parameters instead of computing an estimated number of blocks based + / on the unbonding period and block commitment time as the two should be + / equivalent.
+ cp := app.GetConsensusParams(app.finalizeBlockState.ctx) + if cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 { + retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks +} + if app.snapshotManager != nil { + snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights() + if snapshotRetentionHeights > 0 { + retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights) +} + +} + v := commitHeight - int64(app.minRetainBlocks) + +retentionHeight = minNonZero(retentionHeight, v) + if retentionHeight <= 0 { + / prune nothing in the case of a non-positive height + return 0 +} + +return retentionHeight +} + +/ toVoteInfo converts the new ExtendedVoteInfo to VoteInfo. +func toVoteInfo(votes []abci.ExtendedVoteInfo) []abci.VoteInfo { + legacyVotes := make([]abci.VoteInfo, len(votes)) + for i, vote := range votes { + legacyVotes[i] = abci.VoteInfo{ + Validator: abci.Validator{ + Address: vote.Validator.Address, + Power: vote.Validator.Power, +}, + BlockIdFlag: vote.BlockIdFlag, +} + +} + +return legacyVotes +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/abci.go#L623) +#### PreBlock + +- Run the application's [`preBlocker()`](/docs/sdk/v0.50/learn/beginner/app-anatomy#preblocker), which mainly runs the [`PreBlocker()`](/docs/sdk/v0.50/documentation/module-system/preblock#preblock) method of each of the modules. + +#### BeginBlock + +- Initialize [`finalizeBlockState`](#state-updates) with the latest header using the `req abci.RequestFinalizeBlock` passed as parameter via the `setState` function. 
+ + ```go expandable + package baseapp + + import ( + + "context" + "fmt" + "sort" + "strconv" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "golang.org/x/exp/maps" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + ) + + type ( + execMode uint8 + + / StoreLoader defines a customizable function to control how we load the + / CommitMultiStore from disk. This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. + StoreLoader func(ms storetypes.CommitMultiStore) + + error + ) + + const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + + transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeFinalize / Finalize a block proposal + ) + + var _ servertypes.ABCI = (*BaseApp)(nil) + + / BaseApp reflects the ABCI application implementation. 
+ type BaseApp struct { + / initialized on creation + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + + state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + + grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional, e.g. for tips + + initChainer sdk.InitChainer / ABCI InitChain handler + beginBlocker sdk.BeginBlocker / (legacy ABCI) + + BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + + EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. 
dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - voteExtensionState: Used for ExtendVote and VerifyVoteExtension, which is + / set based on the previous block's state. This state is never committed. In + / case of multiple rounds, the state is always reset to the previous block's + / state. + / + / - processProposalState: Used for ProcessProposal, which is set based on + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + voteExtensionState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. + interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention.
+ minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + + at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependent on this parameter in conjunction + / with the unbonding (safety threshold) + + period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType + }.{ + attributeKey + }, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ + } + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + } + + / NewBaseApp returns a reference to an initialized BaseApp. It accepts a + / variadic number of option functions, which act on the BaseApp to set + / configuration choices.
+ func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), + ) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + } + for _, option := range options { + option(app) + } + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ + }) + } + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) + } + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) + } + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) + } + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) + } + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) + } + + app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs base app will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + + return app + } + + / Name returns the name of the BaseApp. + func (app *BaseApp) + + Name() + + string { + return app.name + } + + / AppVersion returns the application's protocol version. + func (app *BaseApp) + + AppVersion() + + uint64 { + return app.appVersion + } + + / Version returns the application's version string. + func (app *BaseApp) + + Version() + + string { + return app.version + } + + / Logger returns the logger of the BaseApp. 
+ func (app *BaseApp) + + Logger() + + log.Logger { + return app.logger + } + + / Trace returns the boolean value for logging error stack traces. + func (app *BaseApp) + + Trace() + + bool { + return app.trace + } + + / MsgServiceRouter returns the MsgServiceRouter of a BaseApp. + func (app *BaseApp) + + MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter + } + + / SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. + func (app *BaseApp) + + SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter + } + + / MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp + / multistore. + func (app *BaseApp) + + MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) + } + + else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) + } + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) -#### PreBlock[​](#preblock "Direct link to PreBlock") + default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) + } -* Run the application's [`preBlocker()`](/v0.50/learn/beginner/app-anatomy#preblocker), which mainly runs the [`PreBlocker()`](/v0.50/build/building-modules/preblock#preblock) method of each of the modules. + } + } -#### BeginBlock[​](#beginblock "Direct link to BeginBlock") + / MountKVStores mounts all IAVL or DB stores to the provided keys in the + / BaseApp multistore. + func (app *BaseApp) -* Initialize [`finalizeBlockState`](#state-updates) with the latest header using the `req abci.RequestFinalizeBlock` passed as parameter via the `setState` function. 
+ MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) + } - baseapp/baseapp.go + else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) + } - ``` - loading... - ``` + } + } + + / MountTransientStores mounts all transient stores to the provided keys in + / the BaseApp multistore. + func (app *BaseApp) + + MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) + } + } + + / MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal + / commit multi-store. + func (app *BaseApp) + + MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := maps.Keys(keys) + + sort.Strings(skeys) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) + } + } + + / MountStore mounts a store to the provided key in the BaseApp multistore, + / using the default DB. + func (app *BaseApp) + + MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) + } + + / LoadLatestVersion loads the latest application version. It will panic if + / called more than once on a running BaseApp. + func (app *BaseApp) + + LoadLatestVersion() + + error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) + } + + return app.Init() + } + + / DefaultStoreLoader will be used by default and loads the latest version + func DefaultStoreLoader(ms storetypes.CommitMultiStore) + + error { + return ms.LoadLatestVersion() + } + + / CommitMultiStore returns the root multi-store. + / App constructor can use this to access the `cms`. + / UNSAFE: must not be used during the abci life cycle. 
+ func (app *BaseApp) + + CommitMultiStore() + + storetypes.CommitMultiStore { + return app.cms + } + + / SnapshotManager returns the snapshot manager. + / application use this to register extra extension snapshotters. + func (app *BaseApp) + + SnapshotManager() *snapshots.Manager { + return app.snapshotManager + } + + / LoadVersion loads the BaseApp application version. It will panic if called + / more than once on a running baseapp. + func (app *BaseApp) + + LoadVersion(version int64) + + error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) + } + + return app.Init() + } + + / LastCommitID returns the last CommitID of the multistore. + func (app *BaseApp) + + LastCommitID() + + storetypes.CommitID { + return app.cms.LastCommitID() + } + + / LastBlockHeight returns the last committed block height. + func (app *BaseApp) + + LastBlockHeight() + + int64 { + return app.cms.LastCommitID().Version + } + + / ChainID returns the chainID of the app. + func (app *BaseApp) + + ChainID() + + string { + return app.chainID + } + + / AnteHandler returns the AnteHandler of the app. + func (app *BaseApp) + + AnteHandler() + + sdk.AnteHandler { + return app.anteHandler + } + + / Init initializes the app. It seals the app, preventing any + / further modifications. In addition, it validates the app against + / the earlier provided settings. Returns an error if validation fails. + / nil otherwise. Panics if the app is already sealed. 
+ func (app *BaseApp) + + Init() - [See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/baseapp.go#L682-L706) + error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") + } + emptyHeader := cmtproto.Header{ + ChainID: app.chainID + } - This function also resets the [main gas meter](/v0.50/learn/beginner/gas-fees#main-gas-meter). + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) -* Initialize the [block gas meter](/v0.50/learn/beginner/gas-fees#block-gas-meter) with the `maxGas` limit. The `gas` consumed within the block cannot go above `maxGas`. This parameter is defined in the application's consensus parameters. + app.Seal() + if app.cms == nil { + return errors.New("commit multi-store must not be nil") + } -* Run the application's [`beginBlocker()`](/v0.50/learn/beginner/app-anatomy#beginblocker-and-endblocker), which mainly runs the [`BeginBlocker()`](/v0.50/build/building-modules/beginblock-endblock#beginblock) method of each of the modules. + return app.cms.GetPruning().Validate() + } -* Set the [`VoteInfos`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#voteinfo) of the application, i.e. the list of validators whose *precommit* for the previous block was included by the proposer of the current block. This information is carried into the [`Context`](/v0.50/learn/advanced/context) so that it can be used during transaction execution and EndBlock. 
+ func (app *BaseApp) -#### Transaction Execution[​](#transaction-execution "Direct link to Transaction Execution") + setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices + } + + func (app *BaseApp) + + setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight + } + + func (app *BaseApp) + + setHaltTime(haltTime uint64) { + app.haltTime = haltTime + } + + func (app *BaseApp) + + setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks + } + + func (app *BaseApp) + + setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache + } + + func (app *BaseApp) + + setTrace(trace bool) { + app.trace = trace + } + + func (app *BaseApp) + + setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ + }) + for _, e := range ie { + app.indexEvents[e] = struct{ + }{ + } + + } + } + + / Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. + func (app *BaseApp) + + Seal() { + app.sealed = true + } + + / IsSealed returns true if the BaseApp is sealed and false otherwise. + func (app *BaseApp) + + IsSealed() + + bool { + return app.sealed + } + + / setState sets the BaseApp's state for the corresponding mode with a branched + / multi-store (i.e. a CacheMultiStore) + + and a new Context with the same + / multi-store branch, and provided header. 
+ func (app *BaseApp) + + setState(mode execMode, header cmtproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger).WithStreamingManager(app.streamingManager), + } + switch mode { + case execModeCheck: + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + + app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeVoteExtension: + app.voteExtensionState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) + } + } + + / GetFinalizeBlockStateCtx returns the Context associated with the FinalizeBlock + / state. This Context can be used to write data derived from processing vote + / extensions to application state during ProcessProposal. + / + / NOTE: + / - Do NOT use or write to state using this Context unless you intend for + / that state to be committed. + / - Do NOT use or write to state using this Context on the first block. + func (app *BaseApp) + + GetFinalizeBlockStateCtx() + + sdk.Context { + return app.finalizeBlockState.ctx + } + + / SetCircuitBreaker sets the circuit breaker for the BaseApp. + / The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. + func (app *BaseApp) + + SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") + } + + app.msgServiceRouter.SetCircuit(cb) + } + + / GetConsensusParams returns the current consensus parameters from the BaseApp's + / ParamStore. If the BaseApp has no ParamStore defined, nil is returned. 
+ func (app *BaseApp) + + GetConsensusParams(ctx sdk.Context) + + cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ + } + + } + + cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(fmt.Errorf("consensus key is nil: %w", err)) + } + + return cp + } + + / StoreConsensusParams sets the consensus parameters to the BaseApp's param + / store. + / + / NOTE: We're explicitly not storing the CometBFT app_version in the param store. + / It's stored instead in the x/upgrade store, with its own bump logic. + func (app *BaseApp) + + StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + + error { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") + } + + return app.paramStore.Set(ctx, cp) + } + + / AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. + func (app *BaseApp) + + AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) + } + } + + / GetMaximumBlockGas gets the maximum gas from the consensus params. It panics + / if maximum block gas is less than negative one and returns zero if negative + / one. 
+ func (app *BaseApp) + + GetMaximumBlockGas(ctx sdk.Context) + + uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 + } + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) + } + } + + func (app *BaseApp) + + validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + + error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) + } + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight + } + + else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. + expectedHeight = lastBlockHeight + 1 + } + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) + } + + return nil + } + + / validateBasicTxMsgs executes basic validator calls for messages. 
+ func validateBasicTxMsgs(msgs []sdk.Msg) + + error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") + } + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue + } + if err := m.ValidateBasic(); err != nil { + return err + } + + } + + return nil + } + + func (app *BaseApp) + + getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState + } + } + + func (app *BaseApp) + + getBlockGasMeter(ctx sdk.Context) + + storetypes.GasMeter { + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) + } + + return storetypes.NewInfiniteGasMeter() + } + + / retrieve the context for the tx w/ txBytes and other memoized values. + func (app *BaseApp) + + getContextForTx(mode execMode, txBytes []byte) + + sdk.Context { + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) + } + ctx := modeState.ctx. + WithTxBytes(txBytes) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) + } + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() + } + + return ctx + } + + / cacheTxContext returns a new context based off of the provided context with + / a branched multi-store. 
+ func (app *BaseApp) + + cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]interface{ + }{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), + }, + ), + ).(storetypes.CacheMultiStore) + } + + return ctx.WithMultiStore(msCache), msCache + } + + func (app *BaseApp) + + beginBlock(req *abci.RequestFinalizeBlock) + + sdk.BeginBlock { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.ctx) + if err != nil { + panic(err) + } + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" + }, + ) + } + + resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) + } + + return resp + } + + func (app *BaseApp) + + deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ + } + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + + telemetry.IncrCounter(1, "tx", resultStr) + + telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + + telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") + }() + + gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + + return resp + } + + resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: 
sdk.MarkEventsToIndex(result.Events, app.indexEvents), + } + + return resp + } + + / endBlock is an application-defined function that is called after transactions + / have been processed in FinalizeBlock. + func (app *BaseApp) + + endBlock(ctx context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.ctx) + if err != nil { + panic(err) + } + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" + }, + ) + } + + eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + + endblock = eb + } + + return endblock, nil + } + + / runTx processes a transaction within a given execution mode, encoded transaction + / bytes, and the decoded transaction itself. All state transitions occur through + / a cached Context depending on the mode provided. State only gets persisted + / if all messages get executed successfully and the execution mode is DeliverTx. + / Note, gas execution info is always returned. A reference to a Result is + / returned if the tx does not run out of gas and if all the messages are valid + / and execute successfully. An error is returned otherwise. + func (app *BaseApp) + + runTx(mode execMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") + } + + defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + + err, result = processRecovery(r, recoveryMW), nil + } + + gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() + } + + }() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) + } + + } + + / If BlockGasMeter() + + panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). + if mode == execModeFinalize { + defer consumeBlockGas() + } + + tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + }, nil, nil, err + } + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ + }, nil, nil, err + } + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. 
+ / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + + anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + + newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + + is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) + } + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, err + } + + msCache.Write() + + anteEvents = events.ToABCIEvents() + } + if mode == execModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err + } + + } + + else if mode == execModeFinalize { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) + } + + } + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. 
+ msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) + } + if err == nil { + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + + newCtx, err := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if err != nil { + return gInfo, nil, anteEvents, err + } + + result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) + } + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + + msCache.Write() + } + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) + } + + } + + return gInfo, result, anteEvents, err + } + + / runMsgs iterates through a list of messages and executes them with the provided + / Context and execution mode. Messages will only be executed during simulation + / and DeliverTx. An error is returned if any single message fails or if a + / Handler does not exist for a given message route. Otherwise, a reference to a + / Result is returned. The caller must not commit state if an error is returned. + func (app *BaseApp) + + runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + + var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break + } + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) + } + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) + } + + / create message events + msgEvents := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) + } + + events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + + has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) + } + + msgResponses = append(msgResponses, msgResponse) + } + + + } + + data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") + } + + return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, + }, nil + } + + / makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
	return proto.Marshal(&sdk.TxMsgData{MsgResponses: msgResponses})
}

func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) sdk.Events {
	eventMsgName := sdk.MsgTypeURL(msg)
	msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))

	/ we set the signer attribute as the sender
	signers, err := cdc.GetMsgV2Signers(msgV2)
	if err != nil {
		panic(err)
	}
	if len(signers) > 0 && signers[0] != nil {
		addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
		if err != nil {
			panic(err)
		}

		msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
	}

	/ verify that events have no module attribute set
	if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
		if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
			msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
		}
	}

	return sdk.Events{msgEvent}.AppendEvents(events)
}

/ PrepareProposalVerifyTx performs transaction verification when a proposer is
/ creating a block proposal during PrepareProposal. Any state committed to the
/ PrepareProposal state internally will be discarded. An error will be
/ returned if the transaction cannot be encoded; the encoded transaction bytes
/ are returned if the transaction is valid, otherwise an error is returned.
func (app *BaseApp) PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
	bz, err := app.txEncoder(tx)
	if err != nil {
		return nil, err
	}

	_, _, _, err = app.runTx(execModePrepareProposal, bz)
	if err != nil {
		return nil, err
	}

	return bz, nil
}

/ ProcessProposalVerifyTx performs transaction verification when receiving a
/ block proposal during ProcessProposal.
Any state committed to the
/ ProcessProposal state internally will be discarded. An error will be
/ returned if the transaction cannot be decoded; the decoded transaction is
/ returned if it is valid, otherwise an error is returned.
func (app *BaseApp) ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
	tx, err := app.txDecoder(txBz)
	if err != nil {
		return nil, err
	}

	_, _, _, err = app.runTx(execModeProcessProposal, txBz)
	if err != nil {
		return nil, err
	}

	return tx, nil
}

/ Close is called in start cmd to gracefully cleanup resources.
func (app *BaseApp) Close() error {
	return nil
}
```

This function also resets the [main gas meter](/docs/sdk/v0.50/learn/beginner/gas-fees#main-gas-meter).

- Initialize the [block gas meter](/docs/sdk/v0.50/learn/beginner/gas-fees#block-gas-meter) with the `maxGas` limit. The `gas` consumed within the block cannot go above `maxGas`. This parameter is defined in the application's consensus parameters.

- Run the application's [`beginBlocker()`](/docs/sdk/v0.50/learn/beginner/app-anatomy#beginblocker-and-endblocker), which mainly runs the [`BeginBlocker()`](/docs/sdk/v0.50/documentation/module-system/beginblock-endblock#beginblock) method of each of the modules.

- Set the [`VoteInfos`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#voteinfo) of the application, i.e. the list of validators whose _precommit_ for the previous block was included by the proposer of the current block. This information is carried into the [`Context`](/docs/sdk/v0.50/learn/advanced/context) so that it can be used during transaction execution and `EndBlock`.

#### Transaction Execution

When the underlying consensus engine receives a block proposal, each transaction in the block needs to be processed by the application. To that end, the consensus engine sends the transactions to the application as part of the `FinalizeBlock` message, in sequential order.
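The sequential flow described above can be sketched in a heavily simplified form. Note that `miniApp`, `execTx`, and the `txResult` type below are illustrative stand-ins and not part of the SDK API; the real loop lives inside `BaseApp`'s `FinalizeBlock` implementation, which calls `deliverTx` for each transaction against a single volatile state:

```go
package main

import "fmt"

// txResult is a stand-in for abci.ExecTxResult.
type txResult struct {
	Code uint32
	Log  string
}

// miniApp is a toy application; state plays the role of finalizeBlockState.
type miniApp struct {
	state []string
}

// execTx plays the role of BaseApp.deliverTx: run one tx, report a result.
func (app *miniApp) execTx(tx []byte) txResult {
	if len(tx) == 0 {
		return txResult{Code: 1, Log: "failed: empty tx"}
	}
	// the state update is only kept when the tx succeeds
	app.state = append(app.state, string(tx))
	return txResult{Code: 0, Log: "successful"}
}

// finalizeBlock processes every transaction of the proposal in order,
// collecting one result per transaction, mirroring FinalizeBlock.
func (app *miniApp) finalizeBlock(txs [][]byte) []txResult {
	results := make([]txResult, 0, len(txs))
	for _, tx := range txs {
		results = append(results, app.execTx(tx))
	}
	return results
}

func main() {
	app := &miniApp{}
	res := app.finalizeBlock([][]byte{[]byte("send"), {}, []byte("delegate")})
	for i, r := range res {
		fmt.Printf("tx %d: code=%d log=%s\n", i, r.Code, r.Log)
	}
	fmt.Println("state entries:", len(app.state))
}
```

A failed transaction yields a non-zero result code but does not stop the block: subsequent transactions are still executed, which is exactly why each transaction runs against a branched (cached) store in the real implementation.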
Before the first transaction of a given block is processed, a [volatile state](#state-updates) called `finalizeBlockState` is initialized during `FinalizeBlock`. This state is updated each time a transaction is processed via `FinalizeBlock`, and committed to the [main state](#main-state) when the block is [committed](#commit), after which it is set to `nil`.

```go expandable
package baseapp

import (
	"context"
	"fmt"
	"sort"
	"strconv"

	errorsmod "cosmossdk.io/errors"
	"cosmossdk.io/log"
	"github.com/cockroachdb/errors"
	abci "github.com/cometbft/cometbft/abci/types"
	"github.com/cometbft/cometbft/crypto/tmhash"
	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
	dbm "github.com/cosmos/cosmos-db"
	"github.com/cosmos/gogoproto/proto"
	"golang.org/x/exp/maps"
	protov2 "google.golang.org/protobuf/proto"

	"cosmossdk.io/store"
	storemetrics "cosmossdk.io/store/metrics"
	"cosmossdk.io/store/snapshots"
	storetypes "cosmossdk.io/store/types"

	"github.com/cosmos/cosmos-sdk/codec"
	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
	servertypes "github.com/cosmos/cosmos-sdk/server/types"
	"github.com/cosmos/cosmos-sdk/telemetry"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
	"github.com/cosmos/cosmos-sdk/types/mempool"
)

type (
	execMode uint8

	/ StoreLoader defines a customizable function to control how we load the
	/ CommitMultiStore from disk. This is useful for state migration, when
	/ loading a datastore written with an older version of the software. In
	/ particular, if a module changed the substore key name (or removed a substore)
	/ between two versions of the software.
+ StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { + / initialized on creation + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional, e.g. 
for tips + + initChainer sdk.InitChainer / ABCI InitChain handler + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - voteExtensionState: Used for ExtendVote and VerifyVoteExtension, which is + / set based on the previous block's state. This state is never committed. In + / case of multiple rounds, the state is always reset to the previous block's + / state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. 
In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + voteExtensionState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. + interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. 
+ minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + 
app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs base app will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. 
	app.MountStore(key, storetypes.StoreTypeDB)
			}
		case *storetypes.TransientStoreKey:
			app.MountStore(key, storetypes.StoreTypeTransient)
		case *storetypes.MemoryStoreKey:
			app.MountStore(key, storetypes.StoreTypeMemory)
		default:
			panic(fmt.Sprintf("Unrecognized store key type :%T", key))
		}
	}
}

/ MountKVStores mounts all IAVL or DB stores to the provided keys in the
/ BaseApp multistore.
func (app *BaseApp) MountKVStores(keys map[string]*storetypes.KVStoreKey) {
	for _, key := range keys {
		if !app.fauxMerkleMode {
			app.MountStore(key, storetypes.StoreTypeIAVL)
		} else {
			/ StoreTypeDB doesn't do anything upon commit, and it doesn't
			/ retain history, but it's useful for faster simulation.
			app.MountStore(key, storetypes.StoreTypeDB)
		}
	}
}

/ MountTransientStores mounts all transient stores to the provided keys in
/ the BaseApp multistore.
func (app *BaseApp) MountTransientStores(keys map[string]*storetypes.TransientStoreKey) {
	for _, key := range keys {
		app.MountStore(key, storetypes.StoreTypeTransient)
	}
}

/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal
/ commit multi-store.
func (app *BaseApp) MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) {
	skeys := maps.Keys(keys)
	sort.Strings(skeys)
	for _, key := range skeys {
		memKey := keys[key]
		app.MountStore(memKey, storetypes.StoreTypeMemory)
	}
}

/ MountStore mounts a store to the provided key in the BaseApp multistore,
/ using the default DB.
func (app *BaseApp) MountStore(key storetypes.StoreKey, typ storetypes.StoreType) {
	app.cms.MountStoreWithDB(key, typ, nil)
}

/ LoadLatestVersion loads the latest application version. It will panic if
/ called more than once on a running BaseApp.
func (app *BaseApp) LoadLatestVersion() error {
	err := app.storeLoader(app.cms)
	if err != nil {
		return fmt.Errorf("failed to load latest version: %w", err)
	}

	return app.Init()
}

/ DefaultStoreLoader will be used by default and loads the latest version
func DefaultStoreLoader(ms storetypes.CommitMultiStore) error {
	return ms.LoadLatestVersion()
}

/ CommitMultiStore returns the root multi-store.
/ App constructor can use this to access the `cms`.
/ UNSAFE: must not be used during the abci life cycle.
func (app *BaseApp) CommitMultiStore() storetypes.CommitMultiStore {
	return app.cms
}

/ SnapshotManager returns the snapshot manager.
/ application use this to register extra extension snapshotters.
func (app *BaseApp) SnapshotManager() *snapshots.Manager {
	return app.snapshotManager
}

/ LoadVersion loads the BaseApp application version. It will panic if called
/ more than once on a running baseapp.
func (app *BaseApp) LoadVersion(version int64) error {
	app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n")
	err := app.cms.LoadVersion(version)
	if err != nil {
		return fmt.Errorf("failed to load version %d: %w", version, err)
	}

	return app.Init()
}

/ LastCommitID returns the last CommitID of the multistore.
func (app *BaseApp) LastCommitID() storetypes.CommitID {
	return app.cms.LastCommitID()
}

/ LastBlockHeight returns the last committed block height.
func (app *BaseApp) LastBlockHeight() int64 {
	return app.cms.LastCommitID().Version
}

/ ChainID returns the chainID of the app.
func (app *BaseApp) ChainID() string {
	return app.chainID
}

/ AnteHandler returns the AnteHandler of the app.
+func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. +func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + emptyHeader := cmtproto.Header{ + ChainID: app.chainID +} + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + +app.Seal() + if app.cms == nil { + return errors.New("commit multi-store must not be nil") +} + +return app.cms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. 
a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. +func (app *BaseApp) + +setState(mode execMode, header cmtproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger).WithStreamingManager(app.streamingManager), +} + switch mode { + case execModeCheck: + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + +app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeVoteExtension: + app.voteExtensionState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ GetFinalizeBlockStateCtx returns the Context associated with the FinalizeBlock +/ state. This Context can be used to write data derived from processing vote +/ extensions to application state during ProcessProposal. +/ +/ NOTE: +/ - Do NOT use or write to state using this Context unless you intend for +/ that state to be committed. +/ - Do NOT use or write to state using this Context on the first block. +func (app *BaseApp) + +GetFinalizeBlockStateCtx() + +sdk.Context { + return app.finalizeBlockState.ctx +} + +/ SetCircuitBreaker sets the circuit breaker for the BaseApp. +/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. +func (app *BaseApp) + +SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") +} + +app.msgServiceRouter.SetCircuit(cb) +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. 
+func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) + +cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ +} + +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(fmt.Errorf("consensus key is nil: %w", err)) +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the BaseApp's param +/ store. +/ +/ NOTE: We're explicitly not storing the CometBFT app_version in the param store. +/ It's stored instead in the x/upgrade store, with its own bump logic. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. 
+func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. + expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. 
func validateBasicTxMsgs(msgs []sdk.Msg) error {
	if len(msgs) == 0 {
		return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message")
	}

	for _, msg := range msgs {
		m, ok := msg.(sdk.HasValidateBasic)
		if !ok {
			continue
		}

		if err := m.ValidateBasic(); err != nil {
			return err
		}
	}

	return nil
}

func (app *BaseApp) getState(mode execMode) *state {
	switch mode {
	case execModeFinalize:
		return app.finalizeBlockState

	case execModePrepareProposal:
		return app.prepareProposalState

	case execModeProcessProposal:
		return app.processProposalState

	default:
		return app.checkState
	}
}

func (app *BaseApp) getBlockGasMeter(ctx sdk.Context) storetypes.GasMeter {
	if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 {
		return storetypes.NewGasMeter(maxGas)
	}

	return storetypes.NewInfiniteGasMeter()
}

// retrieve the context for the tx w/ txBytes and other memoized values.
func (app *BaseApp) getContextForTx(mode execMode, txBytes []byte) sdk.Context {
	modeState := app.getState(mode)
	if modeState == nil {
		panic(fmt.Sprintf("state is nil for mode %v", mode))
	}

	ctx := modeState.ctx.
		WithTxBytes(txBytes)
	// WithVoteInfos(app.voteInfos) // TODO: identify if this is needed

	ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))

	if mode == execModeReCheck {
		ctx = ctx.WithIsReCheckTx(true)
	}

	if mode == execModeSimulate {
		ctx, _ = ctx.CacheContext()
	}

	return ctx
}

// cacheTxContext returns a new context based off of the provided context with
// a branched multi-store.
func (app *BaseApp) cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) {
	ms := ctx.MultiStore()
	// TODO: https://github.com/cosmos/cosmos-sdk/issues/2824
	msCache := ms.CacheMultiStore()
	if msCache.TracingEnabled() {
		msCache = msCache.SetTracingContext(
			storetypes.TraceContext(
				map[string]interface{}{
					"txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)),
				},
			),
		).(storetypes.CacheMultiStore)
	}

	return ctx.WithMultiStore(msCache), msCache
}

func (app *BaseApp) beginBlock(req *abci.RequestFinalizeBlock) sdk.BeginBlock {
	var (
		resp sdk.BeginBlock
		err  error
	)

	if app.beginBlocker != nil {
		resp, err = app.beginBlocker(app.finalizeBlockState.ctx)
		if err != nil {
			panic(err)
		}

		// append BeginBlock attributes to all events in the EndBlock response
		for i, event := range resp.Events {
			resp.Events[i].Attributes = append(
				event.Attributes,
				abci.EventAttribute{Key: "mode", Value: "BeginBlock"},
			)
		}

		resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents)
	}

	return resp
}

func (app *BaseApp) deliverTx(tx []byte) *abci.ExecTxResult {
	gInfo := sdk.GasInfo{}
	resultStr := "successful"

	var resp *abci.ExecTxResult

	defer func() {
		telemetry.IncrCounter(1, "tx", "count")
		telemetry.IncrCounter(1, "tx", resultStr)
		telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used")
		telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted")
	}()

	gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx)
	if err != nil {
		resultStr = "failed"
		resp = sdkerrors.ResponseExecTxResultWithEvents(
			err,
			gInfo.GasWanted,
			gInfo.GasUsed,
			sdk.MarkEventsToIndex(anteEvents, app.indexEvents),
			app.trace,
		)
		return resp
	}

	resp = &abci.ExecTxResult{
		GasWanted: int64(gInfo.GasWanted),
		GasUsed:   int64(gInfo.GasUsed),
		Log:       result.Log,
		Data:      result.Data,
		Events:    sdk.MarkEventsToIndex(result.Events, app.indexEvents),
	}

	return resp
}

// endBlock is an application-defined function that is called after transactions
// have been processed in FinalizeBlock.
func (app *BaseApp) endBlock(ctx context.Context) (sdk.EndBlock, error) {
	var endblock sdk.EndBlock

	if app.endBlocker != nil {
		eb, err := app.endBlocker(app.finalizeBlockState.ctx)
		if err != nil {
			panic(err)
		}

		// append EndBlock attributes to all events in the EndBlock response
		for i, event := range eb.Events {
			eb.Events[i].Attributes = append(
				event.Attributes,
				abci.EventAttribute{Key: "mode", Value: "EndBlock"},
			)
		}

		eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents)
		endblock = eb
	}

	return endblock, nil
}

// runTx processes a transaction within a given execution mode, encoded transaction
// bytes, and the decoded transaction itself. All state transitions occur through
// a cached Context depending on the mode provided. State only gets persisted
// if all messages get executed successfully and the execution mode is DeliverTx.
// Note, gas execution info is always returned. A reference to a Result is
// returned if the tx does not run out of gas and if all the messages are valid
// and execute successfully. An error is returned otherwise.
func (app *BaseApp) runTx(mode execMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) {
	// NOTE: GasWanted should be returned by the AnteHandler. GasUsed is
	// determined by the GasMeter. We need access to the context to get the gas
	// meter, so we initialize upfront.
	var gasWanted uint64

	ctx := app.getContextForTx(mode, txBytes)
	ms := ctx.MultiStore()

	// only run the tx if there is block gas remaining
	if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() {
		return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx")
	}

	defer func() {
		if r := recover(); r != nil {
			recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)
			err, result = processRecovery(r, recoveryMW), nil
		}

		gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}
	}()

	blockGasConsumed := false

	// consumeBlockGas makes sure block gas is consumed at most once. It must
	// happen after tx processing, and must be executed even if tx processing
	// fails. Hence, its execution is deferred.
	consumeBlockGas := func() {
		if !blockGasConsumed {
			blockGasConsumed = true
			ctx.BlockGasMeter().ConsumeGas(
				ctx.GasMeter().GasConsumedToLimit(), "block gas meter",
			)
		}
	}

	// If BlockGasMeter() panics it will be caught by the above recover and will
	// return an error - in any case BlockGasMeter will consume gas past the limit.
	//
	// NOTE: consumeBlockGas must exist in a separate defer function from the
	// general deferred recovery function to recover from consumeBlockGas as it'll
	// be executed first (deferred statements are executed as stack).
	if mode == execModeFinalize {
		defer consumeBlockGas()
	}

	tx, err := app.txDecoder(txBytes)
	if err != nil {
		return sdk.GasInfo{}, nil, nil, err
	}

	msgs := tx.GetMsgs()
	if err := validateBasicTxMsgs(msgs); err != nil {
		return sdk.GasInfo{}, nil, nil, err
	}

	if app.anteHandler != nil {
		var (
			anteCtx sdk.Context
			msCache storetypes.CacheMultiStore
		)

		// Branch context before AnteHandler call in case it aborts.
		// This is required for both CheckTx and DeliverTx.
		// Ref: https://github.com/cosmos/cosmos-sdk/issues/2772
		//
		// NOTE: Alternatively, we could require that AnteHandler ensures that
		// writes do not happen if aborted/failed. This may have some
		// performance benefits, but it'll be more difficult to get right.
		anteCtx, msCache = app.cacheTxContext(ctx, txBytes)
		anteCtx = anteCtx.WithEventManager(sdk.NewEventManager())

		newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate)

		if !newCtx.IsZero() {
			// At this point, newCtx.MultiStore() is a store branch, or something else
			// replaced by the AnteHandler. We want the original multistore.
			//
			// Also, in the case of the tx aborting, we need to track gas consumed via
			// the instantiated gas meter in the AnteHandler, so we update the context
			// prior to returning.
			ctx = newCtx.WithMultiStore(ms)
		}

		events := ctx.EventManager().Events()

		// GasMeter expected to be set in AnteHandler
		gasWanted = ctx.GasMeter().Limit()

		if err != nil {
			return gInfo, nil, nil, err
		}

		msCache.Write()
		anteEvents = events.ToABCIEvents()
	}

	if mode == execModeCheck {
		err = app.mempool.Insert(ctx, tx)
		if err != nil {
			return gInfo, nil, anteEvents, err
		}
	} else if mode == execModeFinalize {
		err = app.mempool.Remove(tx)
		if err != nil && !errors.Is(err, mempool.ErrTxNotFound) {
			return gInfo, nil, anteEvents,
				fmt.Errorf("failed to remove tx from mempool: %w", err)
		}
	}

	// Create a new Context based off of the existing Context with a MultiStore branch
	// in case message processing fails. At this point, the MultiStore
	// is a branch of a branch.
	runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)

	// Attempt to execute all messages and only update state if all messages pass
	// and we're in DeliverTx. Note, runMsgs will never return a reference to a
	// Result if any single message fails or does not have a registered Handler.
	msgsV2, err := tx.GetMsgsV2()
	if err == nil {
		result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode)
	}

	if err == nil {
		// Run optional postHandlers.
		//
		// Note: If the postHandler fails, we also revert the runMsgs state.
		if app.postHandler != nil {
			// The runMsgCtx context currently contains events emitted by the ante handler.
			// We clear this to correctly order events without duplicates.
			// Note that the state is still preserved.
			postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager())

			newCtx, err := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil)
			if err != nil {
				return gInfo, nil, anteEvents, err
			}

			result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...)
		}

		if mode == execModeFinalize {
			// When block gas exceeds, it'll panic and won't commit the cached store.
			consumeBlockGas()

			msCache.Write()
		}

		if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) {
			// append the events in the order of occurrence
			result.Events = append(anteEvents, result.Events...)
		}
	}

	return gInfo, result, anteEvents, err
}

// runMsgs iterates through a list of messages and executes them with the provided
// Context and execution mode. Messages will only be executed during simulation
// and DeliverTx. An error is returned if any single message fails or if a
// Handler does not exist for a given message route. Otherwise, a reference to a
// Result is returned. The caller must not commit state if an error is returned.
func (app *BaseApp) runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) {
	events := sdk.EmptyEvents()

	var msgResponses []*codectypes.Any

	// NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter.
	for i, msg := range msgs {
		if mode != execModeFinalize && mode != execModeSimulate {
			break
		}

		handler := app.msgServiceRouter.Handler(msg)
		if handler == nil {
			return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg)
		}

		// ADR 031 request type routing
		msgResult, err := handler(ctx, msg)
		if err != nil {
			return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i)
		}

		// create message events
		msgEvents := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i])

		// append message events and data
		//
		// Note: Each message result's data must be length-prefixed in order to
		// separate each result.
		for j, event := range msgEvents {
			// append message index to all events
			msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i)))
		}

		events = events.AppendEvents(msgEvents)

		// Each individual sdk.Result that went through the MsgServiceRouter
		// (which should represent 99% of the Msgs now, since everyone should
		// be using protobuf Msgs) has exactly one Msg response, set inside
		// `WrapServiceResult`. We take that Msg response, and aggregate it
		// into an array.
		if len(msgResult.MsgResponses) > 0 {
			msgResponse := msgResult.MsgResponses[0]
			if msgResponse == nil {
				return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg))
			}
			msgResponses = append(msgResponses, msgResponse)
		}
	}

	data, err := makeABCIData(msgResponses)
	if err != nil {
		return nil, errorsmod.Wrap(err, "failed to marshal tx data")
	}

	return &sdk.Result{
		Data:         data,
		Events:       events.ToABCIEvents(),
		MsgResponses: msgResponses,
	}, nil
}

// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx.
func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
	return proto.Marshal(&sdk.TxMsgData{MsgResponses: msgResponses})
}

func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) sdk.Events {
	eventMsgName := sdk.MsgTypeURL(msg)
	msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))

	// we set the signer attribute as the sender
	signers, err := cdc.GetMsgV2Signers(msgV2)
	if err != nil {
		panic(err)
	}

	if len(signers) > 0 && signers[0] != nil {
		addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
		if err != nil {
			panic(err)
		}
		msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
	}

	// verify that events have no module attribute set
	if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
		if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
			msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
		}
	}

	return sdk.Events{msgEvent}.AppendEvents(events)
}

// PrepareProposalVerifyTx performs transaction verification when a proposer is
// creating a block proposal during PrepareProposal. Any state committed to the
// PrepareProposal state internally will be discarded. <nil, err> will be
// returned if the transaction cannot be encoded. <bz, nil> will be returned if
// the transaction is valid, otherwise <nil, err> will be returned.
func (app *BaseApp) PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
	bz, err := app.txEncoder(tx)
	if err != nil {
		return nil, err
	}

	_, _, _, err = app.runTx(execModePrepareProposal, bz)
	if err != nil {
		return nil, err
	}

	return bz, nil
}

// ProcessProposalVerifyTx performs transaction verification when receiving a
// block proposal during ProcessProposal. Any state committed to the
// ProcessProposal state internally will be discarded.
// <nil, err> will be
// returned if the transaction cannot be decoded. <Tx, nil> will be returned if
// the transaction is valid, otherwise <nil, err> will be returned.
func (app *BaseApp) ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
	tx, err := app.txDecoder(txBz)
	if err != nil {
		return nil, err
	}

	_, _, _, err = app.runTx(execModeProcessProposal, txBz)
	if err != nil {
		return nil, err
	}

	return tx, nil
}

// Close is called in start cmd to gracefully cleanup resources.
func (app *BaseApp) Close() error {
	return nil
}
```

Transaction execution within `FinalizeBlock` performs the **exact same steps as `CheckTx`**, with a little caveat at step 3 and the addition of a fifth step:

1. The `AnteHandler` does **not** check that the transaction's `gas-prices` is sufficient. That is because the `min-gas-prices` value `gas-prices` is checked against is local to the node, and therefore what is enough for one full-node might not be for another. This means that the proposer can potentially include transactions for free, although they are not incentivised to do so, as they earn a bonus on the total fee of the block they propose.
2. For each `sdk.Msg` in the transaction, route to the appropriate module's Protobuf [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services). Additional *stateful* checks are performed, and the branched multistore held in `finalizeBlockState`'s `context` is updated by the module's `keeper`.
If the `Msg` service returns successfully, the branched multistore held in `context` is written to `finalizeBlockState` `CacheMultiStore`.

During the additional fifth step outlined in (2), each read/write to the store increases the value of `GasConsumed`. You can find the default cost of each operation:

```go expandable
package types

import (
	"fmt"
	"math"
)

// Gas consumption descriptors.
const (
	GasIterNextCostFlatDesc = "IterNextFlat"
	GasValuePerByteDesc     = "ValuePerByte"
	GasWritePerByteDesc     = "WritePerByte"
	GasReadPerByteDesc      = "ReadPerByte"
	GasWriteCostFlatDesc    = "WriteFlat"
	GasReadCostFlatDesc     = "ReadFlat"
	GasHasDesc              = "Has"
	GasDeleteDesc           = "Delete"
)

// Gas measured by the SDK
type Gas = uint64

// ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a
// negative gas consumed amount.
type ErrorNegativeGasConsumed struct {
	Descriptor string
}

// ErrorOutOfGas defines an error thrown when an action results in out of gas.
type ErrorOutOfGas struct {
	Descriptor string
}

// ErrorGasOverflow defines an error thrown when an action results gas consumption
// unsigned integer overflow.
type ErrorGasOverflow struct {
	Descriptor string
}

// GasMeter interface to track gas consumption
type GasMeter interface {
	GasConsumed() Gas
	GasConsumedToLimit() Gas
	GasRemaining() Gas
	Limit() Gas
	ConsumeGas(amount Gas, descriptor string)
	RefundGas(amount Gas, descriptor string)
	IsPastLimit() bool
	IsOutOfGas() bool
	String() string
}

type basicGasMeter struct {
	limit    Gas
	consumed Gas
}

// NewGasMeter returns a reference to a new basicGasMeter.
func NewGasMeter(limit Gas) GasMeter {
	return &basicGasMeter{
		limit:    limit,
		consumed: 0,
	}
}

// GasConsumed returns the gas consumed from the GasMeter.
func (g *basicGasMeter) GasConsumed() Gas {
	return g.consumed
}

// GasRemaining returns the gas left in the GasMeter.
func (g *basicGasMeter) GasRemaining() Gas {
	if g.IsPastLimit() {
		return 0
	}
	return g.limit - g.consumed
}

// Limit returns the gas limit of the GasMeter.
func (g *basicGasMeter) Limit() Gas {
	return g.limit
}

// GasConsumedToLimit returns the gas limit if gas consumed is past the limit,
// otherwise it returns the consumed gas.
//
// NOTE: This behavior is only called when recovering from panic when
// BlockGasMeter consumes gas past the limit.
func (g *basicGasMeter) GasConsumedToLimit() Gas {
	if g.IsPastLimit() {
		return g.limit
	}
	return g.consumed
}

// addUint64Overflow performs the addition operation on two uint64 integers and
// returns a boolean on whether or not the result overflows.
func addUint64Overflow(a, b uint64) (uint64, bool) {
	if math.MaxUint64-a < b {
		return 0, true
	}
	return a + b, false
}

// ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas.
func (g *basicGasMeter) ConsumeGas(amount Gas, descriptor string) {
	var overflow bool
	g.consumed, overflow = addUint64Overflow(g.consumed, amount)
	if overflow {
		g.consumed = math.MaxUint64
		panic(ErrorGasOverflow{descriptor})
	}
	if g.consumed > g.limit {
		panic(ErrorOutOfGas{descriptor})
	}
}

// RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the
// gas consumed, the function will panic.
//
// Use case: This functionality enables refunding gas to the transaction or block gas pools so that
// EVM-compatible chains can fully support the go-ethereum StateDb interface.
// See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference.
func (g *basicGasMeter) RefundGas(amount Gas, descriptor string) {
	if g.consumed < amount {
		panic(ErrorNegativeGasConsumed{Descriptor: descriptor})
	}
	g.consumed -= amount
}

// IsPastLimit returns true if gas consumed is past limit, otherwise it returns false.
func (g *basicGasMeter) IsPastLimit() bool {
	return g.consumed > g.limit
}

// IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false.
func (g *basicGasMeter) IsOutOfGas() bool {
	return g.consumed >= g.limit
}

// String returns the BasicGasMeter's gas limit and gas consumed.
func (g *basicGasMeter) String() string {
	return fmt.Sprintf("BasicGasMeter:\n  limit: %d\n  consumed: %d", g.limit, g.consumed)
}

type infiniteGasMeter struct {
	consumed Gas
}

// NewInfiniteGasMeter returns a new gas meter without a limit.
func NewInfiniteGasMeter() GasMeter {
	return &infiniteGasMeter{
		consumed: 0,
	}
}

// GasConsumed returns the gas consumed from the GasMeter.
func (g *infiniteGasMeter) GasConsumed() Gas {
	return g.consumed
}

// GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit.
// NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit.
func (g *infiniteGasMeter) GasConsumedToLimit() Gas {
	return g.consumed
}

// GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter.
func (g *infiniteGasMeter) GasRemaining() Gas {
	return math.MaxUint64
}

// Limit returns MaxUint64 since limit is not confined in infiniteGasMeter.
func (g *infiniteGasMeter) Limit() Gas {
	return math.MaxUint64
}

// ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit.
func (g *infiniteGasMeter) ConsumeGas(amount Gas, descriptor string) {
	var overflow bool
	// TODO: Should we set the consumed field after overflow checking?
	g.consumed, overflow = addUint64Overflow(g.consumed, amount)
	if overflow {
		panic(ErrorGasOverflow{descriptor})
	}
}

// RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the
// gas consumed, the function will panic.
//
// Use case: This functionality enables refunding gas to the transaction or block gas pools so that
// EVM-compatible chains can fully support the go-ethereum StateDb interface.
// See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference.
func (g *infiniteGasMeter) RefundGas(amount Gas, descriptor string) {
	if g.consumed < amount {
		panic(ErrorNegativeGasConsumed{Descriptor: descriptor})
	}
	g.consumed -= amount
}

// IsPastLimit returns false since the gas limit is not confined.
func (g *infiniteGasMeter) IsPastLimit() bool {
	return false
}

// IsOutOfGas returns false since the gas limit is not confined.
func (g *infiniteGasMeter) IsOutOfGas() bool {
	return false
}

// String returns the InfiniteGasMeter's gas consumed.
func (g *infiniteGasMeter) String() string {
	return fmt.Sprintf("InfiniteGasMeter:\n  consumed: %d", g.consumed)
}

// GasConfig defines gas cost for each operation on KVStores
type GasConfig struct {
	HasCost          Gas
	DeleteCost       Gas
	ReadCostFlat     Gas
	ReadCostPerByte  Gas
	WriteCostFlat    Gas
	WriteCostPerByte Gas
	IterNextCostFlat Gas
}

// KVGasConfig returns a default gas config for KVStores.
func KVGasConfig() GasConfig {
	return GasConfig{
		HasCost:          1000,
		DeleteCost:       1000,
		ReadCostFlat:     1000,
		ReadCostPerByte:  3,
		WriteCostFlat:    2000,
		WriteCostPerByte: 30,
		IterNextCostFlat: 30,
	}
}

// TransientGasConfig returns a default gas config for TransientStores.
func TransientGasConfig() GasConfig {
	return GasConfig{
		HasCost:          100,
		DeleteCost:       100,
		ReadCostFlat:     100,
		ReadCostPerByte:  0,
		WriteCostFlat:    200,
		WriteCostPerByte: 3,
		IterNextCostFlat: 3,
	}
}
```

At any point, if `GasConsumed > GasWanted`, the function returns with `Code != 0` and the execution fails.

Each transaction returns a response to the underlying consensus engine of type [`abci.ExecTxResult`](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci%2B%2B_methods.md#exectxresult). The response contains:

- `Code (uint32)`: Response Code. `0` if successful.
- `Data ([]byte)`: Result bytes, if any.
- `Log (string)`: The output of the application's logger. May be non-deterministic.
- `Info (string)`: Additional information. May be non-deterministic.
- `GasWanted (int64)`: Amount of gas requested for the transaction. It is provided by users when they generate the transaction.
- `GasUsed (int64)`: Amount of gas consumed by the transaction.
During transaction execution, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction, and by adding gas each time a read/write to the store occurs.
- `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). See [`event`s](/docs/sdk/v0.50/learn/advanced/events) for more.
- `Codespace (string)`: Namespace for the Code.

#### EndBlock

`EndBlock` is run after transaction execution completes. It allows developers to have logic executed at the end of each block. In the Cosmos SDK, the bulk of the `EndBlock()` method is to run the application's `EndBlocker()`, which mainly runs the `EndBlocker()` method of each of the application's modules.

```go expandable
package baseapp

import (
	"context"
	"fmt"
	"sort"
	"strconv"

	errorsmod "cosmossdk.io/errors"
	"cosmossdk.io/log"
	"github.com/cockroachdb/errors"
	abci "github.com/cometbft/cometbft/abci/types"
	"github.com/cometbft/cometbft/crypto/tmhash"
	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
	dbm "github.com/cosmos/cosmos-db"
	"github.com/cosmos/gogoproto/proto"
	"golang.org/x/exp/maps"
	protov2 "google.golang.org/protobuf/proto"

	"cosmossdk.io/store"
	storemetrics "cosmossdk.io/store/metrics"
	"cosmossdk.io/store/snapshots"
	storetypes "cosmossdk.io/store/types"

	"github.com/cosmos/cosmos-sdk/codec"
	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
	servertypes "github.com/cosmos/cosmos-sdk/server/types"
	"github.com/cosmos/cosmos-sdk/telemetry"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
	"github.com/cosmos/cosmos-sdk/types/mempool"
)

type (
	execMode uint8

	// StoreLoader defines a customizable function to control how we load the
	// CommitMultiStore from disk.
	// This is useful for state migration, when
	// loading a datastore written with an older version of the software. In
	// particular, if a module changed the substore key name (or removed a substore)
	// between two versions of the software.
	StoreLoader func(ms storetypes.CommitMultiStore) error
)

const (
	execModeCheck           execMode = iota // Check a transaction
	execModeReCheck                         // Recheck a (pending) transaction after a commit
	execModeSimulate                        // Simulate a transaction
	execModePrepareProposal                 // Prepare a block proposal
	execModeProcessProposal                 // Process a block proposal
	execModeVoteExtension                   // Extend or verify a pre-commit vote
	execModeFinalize                        // Finalize a block proposal
)

var _ servertypes.ABCI = (*BaseApp)(nil)

// BaseApp reflects the ABCI application implementation.
type BaseApp struct {
	// initialized on creation
	logger      log.Logger
	name        string                      // application name from abci.BlockInfo
	db          dbm.DB                      // common DB backend
	cms         storetypes.CommitMultiStore // Main (uncached) state
	qms         storetypes.MultiStore       // Optional alternative multistore for querying only.
	storeLoader StoreLoader                 // function to handle store loading, may be overridden with SetStoreLoader()

	grpcQueryRouter   *GRPCQueryRouter  // router for redirecting gRPC query calls
	msgServiceRouter  *MsgServiceRouter // router for redirecting Msg service messages
	interfaceRegistry codectypes.InterfaceRegistry
	txDecoder         sdk.TxDecoder // unmarshal []byte into sdk.Tx
	txEncoder         sdk.TxEncoder // marshal sdk.Tx into []byte

	mempool     mempool.Mempool // application side mempool
	anteHandler sdk.AnteHandler // ante handler for fee and auth
	postHandler sdk.PostHandler // post handler, optional, e.g.
	// for tips

	initChainer        sdk.InitChainer                // ABCI InitChain handler
	beginBlocker       sdk.BeginBlocker               // (legacy ABCI) BeginBlock handler
	endBlocker         sdk.EndBlocker                 // (legacy ABCI) EndBlock handler
	processProposal    sdk.ProcessProposalHandler     // ABCI ProcessProposal handler
	prepareProposal    sdk.PrepareProposalHandler     // ABCI PrepareProposal
	extendVote         sdk.ExtendVoteHandler          // ABCI ExtendVote handler
	verifyVoteExt      sdk.VerifyVoteExtensionHandler // ABCI VerifyVoteExtension handler
	prepareCheckStater sdk.PrepareCheckStater         // logic to run during commit using the checkState
	precommiter        sdk.Precommiter                // logic to run during commit using the deliverState

	addrPeerFilter sdk.PeerFilter // filter peers by address and port
	idPeerFilter   sdk.PeerFilter // filter peers by node ID
	fauxMerkleMode bool           // if true, IAVL MountStores uses MountStoresDB for simulation speed.

	// manages snapshots, i.e. dumps of app state at certain intervals
	snapshotManager *snapshots.Manager

	// volatile states:
	//
	// - checkState is set on InitChain and reset on Commit
	// - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil
	// on Commit.
	//
	// - checkState: Used for CheckTx, which is set based on the previous block's
	// state. This state is never committed.
	//
	// - prepareProposalState: Used for PrepareProposal, which is set based on the
	// previous block's state. This state is never committed. In case of multiple
	// consensus rounds, the state is always reset to the previous block's state.
	//
	// - voteExtensionState: Used for ExtendVote and VerifyVoteExtension, which is
	// set based on the previous block's state. This state is never committed. In
	// case of multiple rounds, the state is always reset to the previous block's
	// state.
	//
	// - processProposalState: Used for ProcessProposal, which is set based on
	// the previous block's state. This state is never committed.
	// In case of
	// multiple rounds, the state is always reset to the previous block's state.
	//
	// - finalizeBlockState: Used for FinalizeBlock, which is set based on the
	// previous block's state. This state is committed.
	checkState           *state
	prepareProposalState *state
	processProposalState *state
	voteExtensionState   *state
	finalizeBlockState   *state

	// An inter-block write-through cache provided to the context during the ABCI
	// FinalizeBlock call.
	interBlockCache storetypes.MultiStorePersistentCache

	// paramStore is used to query for ABCI consensus parameters from an
	// application parameter store.
	paramStore ParamStore

	// The minimum gas prices a validator is willing to accept for processing a
	// transaction. This is mainly used for DoS and spam prevention.
	minGasPrices sdk.DecCoins

	// initialHeight is the initial height at which we start the BaseApp
	initialHeight int64

	// flag for sealing options and parameters to a BaseApp
	sealed bool

	// block height at which to halt the chain and gracefully shutdown
	haltHeight uint64

	// minimum block time (in Unix seconds) at which to halt the chain and gracefully shutdown
	haltTime uint64

	// minRetainBlocks defines the minimum block height offset from the current
	// block being committed, such that all blocks past this offset are pruned
	// from CometBFT. It is used as part of the process of determining the
	// ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates
	// that no blocks should be pruned.
	//
	// Note: CometBFT block pruning is dependent on this parameter in conjunction
	// with the unbonding (safety threshold) period, state pruning and state sync
	// snapshot parameters to determine the correct minimum value of
	// ResponseCommit.RetainHeight.
+ minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger, + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + 
app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs base app will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ SetMsgServiceRouter sets the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +SetMsgServiceRouter(msgServiceRouter *MsgServiceRouter) { + app.msgServiceRouter = msgServiceRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. 
+ app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) -``` -loading... -``` +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. +func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := maps.Keys(keys) + +sort.Strings(skeys) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. 
+func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. +func (app *BaseApp) + +CommitMultiStore() + +storetypes.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/baseapp.go#L747-L769) +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} -### Commit[​](#commit "Direct link to Commit") +return app.Init() +} -The [`Commit` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after the full-node has received *precommits* from 2/3+ of validators (weighted by voting power). On the `BaseApp` end, the `Commit(res abci.ResponseCommit)` function is implemented to commit all the valid state transitions that occurred during `FinalizeBlock` and to reset state for the next block. +/ LastCommitID returns the last CommitID of the multistore. 
+func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ ChainID returns the chainID of the app. +func (app *BaseApp) + +ChainID() + +string { + return app.chainID +} + +/ AnteHandler returns the AnteHandler of the app. +func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. +func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + emptyHeader := cmtproto.Header{ + ChainID: app.chainID +} + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + +app.Seal() + if app.cms == nil { + return errors.New("commit multi-store must not be nil") +} + +return app.cms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + 
+/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. +func (app *BaseApp) + +setState(mode execMode, header cmtproto.Header) { + ms := app.cms.CacheMultiStore() + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, header, false, app.logger).WithStreamingManager(app.streamingManager), +} + switch mode { + case execModeCheck: + baseState.ctx = baseState.ctx.WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices) + +app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeVoteExtension: + app.voteExtensionState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ GetFinalizeBlockStateCtx returns the Context associated with the FinalizeBlock +/ state. This Context can be used to write data derived from processing vote +/ extensions to application state during ProcessProposal. +/ +/ NOTE: +/ - Do NOT use or write to state using this Context unless you intend for +/ that state to be committed. +/ - Do NOT use or write to state using this Context on the first block. +func (app *BaseApp) + +GetFinalizeBlockStateCtx() + +sdk.Context { + return app.finalizeBlockState.ctx +} + +/ SetCircuitBreaker sets the circuit breaker for the BaseApp. +/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. 
+func (app *BaseApp) + +SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") +} + +app.msgServiceRouter.SetCircuit(cb) +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. +func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) + +cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ +} + +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + panic(fmt.Errorf("consensus key is nil: %w", err)) +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the BaseApp's param +/ store. +/ +/ NOTE: We're explicitly not storing the CometBFT app_version in the param store. +/ It's stored instead in the x/upgrade store, with its own bump logic. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + panic("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. 
+func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. + expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. 
+func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return err +} + +} + +return nil +} + +func (app *BaseApp) + +getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState +} +} + +func (app *BaseApp) + +getBlockGasMeter(ctx sdk.Context) + +storetypes.GasMeter { + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) +} + +return storetypes.NewInfiniteGasMeter() +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode execMode, txBytes []byte) + +sdk.Context { + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.ctx. + WithTxBytes(txBytes) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. 
+func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + / TODO: https://github.com/cosmos/cosmos-sdk/issues/2824 + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]interface{ +}{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(storetypes.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +func (app *BaseApp) + +beginBlock(req *abci.RequestFinalizeBlock) + +sdk.BeginBlock { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.ctx) + if err != nil { + panic(err) +} + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" +}, + ) +} + +resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) +} + +return resp +} + +func (app *BaseApp) + +deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + +return resp +} + +resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), 
+} + +return resp +} + +/ endBlock is an application-defined function that is called after transactions +/ have been processed in FinalizeBlock. +func (app *BaseApp) + +endBlock(ctx context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.ctx) + if err != nil { + panic(err) +} + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" +}, + ) +} + +eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + +endblock = eb +} + +return endblock, nil +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +func (app *BaseApp) + +runTx(mode execMode, txBytes []byte) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). + if mode == execModeFinalize { + defer consumeBlockGas() +} + +tx, err := app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. 
+ / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. + ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + return gInfo, nil, nil, err +} + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + if mode == execModeCheck { + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err +} + +} + +else if mode == execModeFinalize { + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. 
+ msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) +} + if err == nil { + / Run optional postHandlers. + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. + postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if err != nil { + return gInfo, nil, anteEvents, err +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "can't route message %+v", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) +} + +events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + + +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses +}) +} + +func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) + +sdk.Events { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + signers, err := cdc.GetMsgV2Signers(msgV2) + if err != nil { + panic(err) +} + if len(signers) > 0 && signers[0] != nil { + addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0]) + if err != nil { + panic(err) +} + +msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr)) +} + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName)) +} + +} + +return sdk.Events{ + msgEvent +}.AppendEvents(events) +} + +/ PrepareProposalVerifyTx performs transaction verification when a proposer is +/ creating a block proposal during PrepareProposal. Any state committed to the +/ PrepareProposal state internally will be discarded. <nil, err> will be +/ returned if the transaction cannot be encoded. <bz, nil> will be returned if +/ the transaction is valid, otherwise <nil, err> will be returned. +func (app *BaseApp) + +PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) { + bz, err := app.txEncoder(tx) + if err != nil { + return nil, err +} + + _, _, _, err = app.runTx(execModePrepareProposal, bz) + if err != nil { + return nil, err +} + +return bz, nil +} + +/ ProcessProposalVerifyTx performs transaction verification when receiving a +/ block proposal during ProcessProposal. Any state committed to the +/ ProcessProposal state internally will be discarded.
<nil, err> will be +/ returned if the transaction cannot be decoded. <Tx, nil> will be returned if +/ the transaction is valid, otherwise <nil, err> will be returned. +func (app *BaseApp) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) { + tx, err := app.txDecoder(txBz) + if err != nil { + return nil, err +} + + _, _, _, err = app.runTx(execModeProcessProposal, txBz) + if err != nil { + return nil, err +} + +return tx, nil +} + +/ Close is called in start cmd to gracefully cleanup resources. +func (app *BaseApp) + +Close() + +error { + return nil +} +``` + +### Commit + +The [`Commit` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after the full node has received _precommits_ from 2/3+ of validators (weighted by voting power). On the `BaseApp` end, the `Commit(res abci.ResponseCommit)` function is implemented to commit all the valid state transitions that occurred during `FinalizeBlock` and to reset state for the next block. To commit state transitions, the `Commit` function calls the `Write()` function on `finalizeBlockState.ms`, where `finalizeBlockState.ms` is a branched multistore of the main store `app.cms`. Then, the `Commit` function sets `checkState` to the latest header (obtained from `finalizeBlockState.ctx.BlockHeader`) and `finalizeBlockState` to `nil`. Finally, `Commit` returns the hash of the commitment of `app.cms` back to the underlying consensus engine. This hash is used as a reference in the header of the next block. -### Info[](#info "Direct link to Info") +### Info The [`Info` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is a simple query from the underlying consensus engine, notably used to sync the latter with the application during a handshake that happens on startup.
When called, the `Info(res abci.ResponseInfo)` function from `BaseApp` will return the application's name, version and the hash of the last commit of `app.cms`. -### Query[​](#query "Direct link to Query") +### Query -The [`Query` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is used to serve queries received from the underlying consensus engine, including queries received via RPC like CometBFT RPC. It used to be the main entrypoint to build interfaces with the application, but with the introduction of [gRPC queries](/v0.50/build/building-modules/query-services) in Cosmos SDK v0.40, its usage is more limited. The application must respect a few rules when implementing the `Query` method, which are outlined [here](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#query). +The [`Query` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is used to serve queries received from the underlying consensus engine, including queries received via RPC like CometBFT RPC. It used to be the main entrypoint to build interfaces with the application, but with the introduction of [gRPC queries](/docs/sdk/v0.50/documentation/module-system/query-services) in Cosmos SDK v0.40, its usage is more limited. The application must respect a few rules when implementing the `Query` method, which are outlined [here](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#query). Each CometBFT `query` comes with a `path`, which is a `string` which denotes what to query. If the `path` matches a gRPC fully-qualified service method, then `BaseApp` will defer the query to the `grpcQueryRouter` and let it handle it like explained [above](#grpc-query-router). Otherwise, the `path` represents a query that is not (yet) handled by the gRPC router. `BaseApp` splits the `path` string with the `/` delimiter. 
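As a rough illustration of this split-and-dispatch logic (a hypothetical, simplified sketch — `dispatchQuery` and its string return values are not part of the actual `BaseApp` API, which builds an `abci.ResponseQuery`):

```go
package main

import (
	"fmt"
	"strings"
)

// dispatchQuery mimics how BaseApp routes a query path: the path is split
// on "/" and the first element selects the handler category. Here we only
// return the category name; the real dispatcher first checks for a gRPC
// route, and otherwise serves these categories via handleQueryApp,
// handlerQueryStore, and handleQueryP2P.
func dispatchQuery(path string) string {
	split := strings.Split(strings.Trim(path, "/"), "/")
	switch split[0] {
	case "app", "p2p", "store", "custom":
		return split[0]
	default:
		return "unhandled" // e.g. a gRPC fully-qualified service method
	}
}

func main() {
	fmt.Println(dispatchQuery("/store/bank/key")) // store
	fmt.Println(dispatchQuery("app/version"))     // app
}
```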
By convention, the first element of the split string (`split[0]`) contains the category of `query` (`app`, `p2p`, `store` or `custom` ). The `BaseApp` implementation of the `Query(req abci.RequestQuery)` method is a simple dispatcher serving these 4 main categories of queries: -* Application-related queries like querying the application's version, which are served via the `handleQueryApp` method. -* Direct queries to the multistore, which are served by the `handlerQueryStore` method. These direct queries are different from custom queries which go through `app.queryRouter`, and are mainly used by third-party service provider like block explorers. -* P2P queries, which are served via the `handleQueryP2P` method. These queries return either `app.addrPeerFilter` or `app.ipPeerFilter` that contain the list of peers filtered by address or IP respectively. These lists are first initialized via `options` in `BaseApp`'s [constructor](#constructor). +- Application-related queries like querying the application's version, which are served via the `handleQueryApp` method. +- Direct queries to the multistore, which are served by the `handlerQueryStore` method. These direct queries are different from custom queries which go through `app.queryRouter`, and are mainly used by third-party service provider like block explorers. +- P2P queries, which are served via the `handleQueryP2P` method. These queries return either `app.addrPeerFilter` or `app.ipPeerFilter` that contain the list of peers filtered by address or IP respectively. These lists are first initialized via `options` in `BaseApp`'s [constructor](#constructor). -### ExtendVote[​](#extendvote "Direct link to ExtendVote") +### ExtendVote `ExtendVote` allows an application to extend a pre-commit vote with arbitrary data. This process does NOT have to be deterministic and the data returned can be unique to the validator process. In the Cosmos-SDK this is implemented as a NoOp: -baseapp/abci\_utils.go - -``` -loading... 
+```go expandable +package baseapp + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtcrypto "github.com/cometbft/cometbft/crypto" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +/ VoteExtensionThreshold defines the total voting power % that must be +/ submitted in order for all vote extensions to be considered valid for a +/ given height. +var VoteExtensionThreshold = math.LegacyNewDecWithPrec(667, 3) + +type ( + / Validator defines the interface contract require for verifying vote extension + / signatures. Typically, this will be implemented by the x/staking module, + / which has knowledge of the CometBFT public key. + Validator interface { + CmtConsPublicKey() (cmtprotocrypto.PublicKey, error) + +BondedTokens() + +math.Int +} + + / ValidatorStore defines the interface contract require for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. + ValidatorStore interface { + GetValidatorByConsAddr(sdk.Context, cryptotypes.Address) (Validator, error) + +TotalBondedTokens(ctx sdk.Context) + +math.Int +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in ProcessProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. 
+func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + currentHeight int64, + chainID string, + extCommit abci.ExtendedCommitInfo, +) + +error { + cp := ctx.ConsensusParams() + extsEnabled := cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight > 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var sumVP math.Int + for _, vote := range extCommit.Votes { + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := cmtcrypto.Address(vote.Validator.Address) + +validator, err := valStore.GetValidatorByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X: %w", valConsAddr, err) +} + if validator == nil { + return fmt.Errorf("validator %X not found", valConsAddr) +} + +cmtPubKeyProto, err := validator.CmtConsPublicKey() + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(cmtPubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return 
fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP = sumVP.Add(validator.BondedTokens()) +} + + / Ensure we have at least 2/3 voting power that submitted valid vote + / extensions. + totalVP := valStore.TotalBondedTokens(ctx) + percentSubmitted := math.LegacyNewDecFromInt(sumVP).Quo(math.LegacyNewDecFromInt(totalVP)) + if percentSubmitted.LT(VoteExtensionThreshold) { + return fmt.Errorf("insufficient cumulative voting power received to verify vote extensions; got: %s, expected: >=%s", percentSubmitted, VoteExtensionThreshold) +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) + +DefaultProposalHandler { + return DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, +} +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. 
+/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} + +var ( + selectedTxs [][]byte + totalTxBytes int64 + ) + iterator := h.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. + bz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + err := h.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +} + +else { + txSize := int64(len(bz)) + if totalTxBytes += txSize; totalTxBytes <= req.MaxTxBytes { + selectedTxs = append(selectedTxs, bz) +} + +else { + / We've reached capacity per req.MaxTxBytes so we cannot select any + / more transactions. + break +} + +} + +iterator = iterator.Next() +} + +return &abci.ResponsePrepareProposal{ + Txs: selectedTxs +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. 
Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +func (h DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + for _, txBytes := range req.Txs { + _, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. 
+func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. +func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. +func NoOpVerifyVoteExtensionHandler() + +sdk.VerifyVoteExtensionHandler { + return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/abci_utils.go#L274-L281) - -### VerifyVoteExtension[​](#verifyvoteextension "Direct link to VerifyVoteExtension") +### VerifyVoteExtension `VerifyVoteExtension` allows an application to verify that the data returned by `ExtendVote` is valid. This process MUST be deterministic. Moreover, the value of ResponseVerifyVoteExtension.status MUST exclusively depend on the parameters passed in the call to RequestVerifyVoteExtension, and the last committed Application state. In the Cosmos-SDK this is implemented as a NoOp: -baseapp/abci\_utils.go - -``` -loading... 
+```go expandable +package baseapp + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtcrypto "github.com/cometbft/cometbft/crypto" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +/ VoteExtensionThreshold defines the total voting power % that must be +/ submitted in order for all vote extensions to be considered valid for a +/ given height. +var VoteExtensionThreshold = math.LegacyNewDecWithPrec(667, 3) + +type ( + / Validator defines the interface contract require for verifying vote extension + / signatures. Typically, this will be implemented by the x/staking module, + / which has knowledge of the CometBFT public key. + Validator interface { + CmtConsPublicKey() (cmtprotocrypto.PublicKey, error) + +BondedTokens() + +math.Int +} + + / ValidatorStore defines the interface contract require for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. + ValidatorStore interface { + GetValidatorByConsAddr(sdk.Context, cryptotypes.Address) (Validator, error) + +TotalBondedTokens(ctx sdk.Context) + +math.Int +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in ProcessProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. 
+func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + currentHeight int64, + chainID string, + extCommit abci.ExtendedCommitInfo, +) + +error { + cp := ctx.ConsensusParams() + extsEnabled := cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight > 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var sumVP math.Int + for _, vote := range extCommit.Votes { + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := cmtcrypto.Address(vote.Validator.Address) + +validator, err := valStore.GetValidatorByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X: %w", valConsAddr, err) +} + if validator == nil { + return fmt.Errorf("validator %X not found", valConsAddr) +} + +cmtPubKeyProto, err := validator.CmtConsPublicKey() + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(cmtPubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return 
fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP = sumVP.Add(validator.BondedTokens()) +} + + / Ensure we have at least 2/3 voting power that submitted valid vote + / extensions. + totalVP := valStore.TotalBondedTokens(ctx) + percentSubmitted := math.LegacyNewDecFromInt(sumVP).Quo(math.LegacyNewDecFromInt(totalVP)) + if percentSubmitted.LT(VoteExtensionThreshold) { + return fmt.Errorf("insufficient cumulative voting power received to verify vote extensions; got: %s, expected: >=%s", percentSubmitted, VoteExtensionThreshold) +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) + +DefaultProposalHandler { + return DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, +} +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. 
+/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} + +var ( + selectedTxs [][]byte + totalTxBytes int64 + ) + iterator := h.mempool.Select(ctx, req.Txs) + for iterator != nil { + memTx := iterator.Tx() + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. + bz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + err := h.mempool.Remove(memTx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + panic(err) +} + +} + +else { + txSize := int64(len(bz)) + if totalTxBytes += txSize; totalTxBytes <= req.MaxTxBytes { + selectedTxs = append(selectedTxs, bz) +} + +else { + / We've reached capacity per req.MaxTxBytes so we cannot select any + / more transactions. + break +} + +} + +iterator = iterator.Next() +} + +return &abci.ResponsePrepareProposal{ + Txs: selectedTxs +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. 
Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +func (h DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + for _, txBytes := range req.Txs { + _, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. 
+func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. +func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. +func NoOpVerifyVoteExtensionHandler() + +sdk.VerifyVoteExtensionHandler { + return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} ``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/abci_utils.go#L282-L288) diff --git a/docs/sdk/v0.50/learn/advanced/cli.mdx b/docs/sdk/v0.50/learn/advanced/cli.mdx index 1c1f5ed2..8674d6b6 100644 --- a/docs/sdk/v0.50/learn/advanced/cli.mdx +++ b/docs/sdk/v0.50/learn/advanced/cli.mdx @@ -1,216 +1,3186 @@ --- -title: "Command-Line Interface" -description: "Version: v0.50" +title: Command-Line Interface --- - - This document describes how command-line interface (CLI) works on a high-level, for an [**application**](/v0.50/learn/beginner/app-anatomy). A separate document for implementing a CLI for a Cosmos SDK [**module**](/v0.50/build/building-modules/intro) can be found [here](/v0.50/build/building-modules/module-interfaces#cli). 
- + +## Synopsis -## Command-Line Interface[​](#command-line-interface-1 "Direct link to Command-Line Interface") +This document describes how the command-line interface (CLI) works at a high level, for an [**application**](/docs/sdk/v0.50/learn/beginner/app-anatomy). A separate document for implementing a CLI for a Cosmos SDK [**module**](/docs/sdk/v0.50/documentation/module-system/intro) can be found [here](/docs/sdk/v0.50/documentation/module-system/module-interfaces#cli). -### Example Command[​](#example-command "Direct link to Example Command") +## Command-Line Interface + +### Example Command There is no set way to create a CLI, but Cosmos SDK modules typically use the [Cobra Library](https://github.com/spf13/cobra). Building a CLI with Cobra entails defining commands, arguments, and flags. [**Commands**](#root-command) understand the actions users wish to take, such as `tx` for creating a transaction and `query` for querying the application. Each command can also have nested subcommands, necessary for naming the specific transaction type. Users also supply **Arguments**, such as account numbers to send coins to, and [**Flags**](#flags) to modify various aspects of the commands, such as gas prices or which node to broadcast to. Here is an example of a command a user might enter to interact with the simapp CLI `simd` in order to send some tokens: -``` +```bash simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --gas auto --gas-prices ``` The first four strings specify the command: -* The root command for the entire application `simd`. -* The subcommand `tx`, which contains all commands that let users create transactions. -* The subcommand `bank` to indicate which module to route the command to ([`x/bank`](/v0.50/build/modules/bank) module in this case). -* The type of transaction `send`. +- The root command for the entire application `simd`. +- The subcommand `tx`, which contains all commands that let users create transactions.
+- The subcommand `bank` to indicate which module to route the command to ([`x/bank`](docs/sdk/v0.50/documentation/module-system/modules/bank/README) module in this case). +- The type of transaction `send`. The next two strings are arguments: the `from_address` the user wishes to send from, the `to_address` of the recipient, and the `amount` they want to send. Finally, the last few strings of the command are optional flags to indicate how much the user is willing to pay in fees (calculated using the amount of gas used to execute the transaction and the gas prices provided by the user). -The CLI interacts with a [node](/v0.50/learn/advanced/node) to handle this command. The interface itself is defined in a `main.go` file. +The CLI interacts with a [node](/docs/sdk/v0.50/learn/advanced/node) to handle this command. The interface itself is defined in a `main.go` file. -### Building the CLI[​](#building-the-cli "Direct link to Building the CLI") +### Building the CLI The `main.go` file needs to have a `main()` function that creates a root command, to which all the application commands will be added as subcommands. The root command additionally handles: -* **setting configurations** by reading in configuration files (e.g. the Cosmos SDK config file). -* **adding any flags** to it, such as `--chain-id`. -* **instantiating the `codec`** by injecting the application codecs. The [`codec`](/v0.50/learn/advanced/encoding) is used to encode and decode data structures for the application - stores can only persist `[]byte`s so the developer must define a serialization format for their data structures or use the default, Protobuf. -* **adding subcommand** for all the possible user interactions, including [transaction commands](#transaction-commands) and [query commands](#query-commands). +- **setting configurations** by reading in configuration files (e.g. the Cosmos SDK config file). +- **adding any flags** to it, such as `--chain-id`. 
+- **instantiating the `codec`** by injecting the application codecs. The [`codec`](/docs/sdk/v0.50/learn/advanced/encoding) is used to encode and decode data structures for the application - stores can only persist `[]byte`s so the developer must define a serialization format for their data structures or use the default, Protobuf. +- **adding subcommands** for all the possible user interactions, including [transaction commands](#transaction-commands) and [query commands](#query-commands). The `main()` function finally creates an executor and [executes](https://pkg.go.dev/github.com/spf13/cobra#Command.Execute) the root command. See an example of the `main()` function from the `simapp` application: -simapp/simd/main.go +```go expandable +package main -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/main.go#L12-L24) + "os" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/simd/cmd" + svrcmd "github.com/cosmos/cosmos-sdk/server/cmd" +) + +func main() { + rootCmd := cmd.NewRootCmd() + if err := svrcmd.Execute(rootCmd, "", simapp.DefaultNodeHome); err != nil { + log.NewLogger(rootCmd.OutOrStderr()).Error("failure when running app", "err", err) + +os.Exit(1) +} +} +``` The rest of the document will detail what needs to be implemented for each step and include smaller portions of code from the `simapp` CLI files. -## Adding Commands to the CLI[​](#adding-commands-to-the-cli "Direct link to Adding Commands to the CLI") +## Adding Commands to the CLI Every application CLI first constructs a root command, then adds functionality by aggregating subcommands (often with further nested subcommands) using `rootCmd.AddCommand()`. The bulk of an application's unique capabilities lies in its transaction and query commands, called `TxCmd` and `QueryCmd` respectively.
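The aggregation pattern above can be sketched as a command tree, where each level of `rootCmd.AddCommand()` contributes one token of an invocation like `simd tx bank send`. The following is an illustrative, stdlib-only sketch: the `command` type and its `addCommand`, `execute`, and `buildRoot` helpers are hypothetical stand-ins for cobra's `*cobra.Command` API, shown only to make the routing explicit.

```go
package main

import (
	"fmt"
	"strings"
)

// command is a toy stand-in for cobra's *cobra.Command: a name, an
// optional handler, and subcommands registered under it.
type command struct {
	name string
	run  func(args []string) string
	subs map[string]*command
}

// addCommand registers a subcommand, mirroring rootCmd.AddCommand().
func (c *command) addCommand(sub *command) {
	if c.subs == nil {
		c.subs = map[string]*command{}
	}
	c.subs[sub.name] = sub
}

// execute descends the tree one token at a time; leftover tokens become
// positional arguments for the leaf command's handler.
func (c *command) execute(args []string) (string, error) {
	if len(args) > 0 {
		if sub, ok := c.subs[args[0]]; ok {
			return sub.execute(args[1:])
		}
	}
	if c.run == nil {
		return "", fmt.Errorf("unknown command: %q", strings.Join(args, " "))
	}
	return c.run(args), nil
}

// buildRoot wires up root -> tx -> bank -> send, the same nesting used by
// the simd example command earlier in this document.
func buildRoot() *command {
	send := &command{name: "send", run: func(args []string) string {
		return "send " + strings.Join(args, " ")
	}}
	bank := &command{name: "bank"}
	bank.addCommand(send)
	tx := &command{name: "tx"}
	tx.addCommand(bank)
	root := &command{name: "simd"}
	root.addCommand(tx)
	return root
}

func main() {
	out, err := buildRoot().execute([]string{"tx", "bank", "send", "addr1", "addr2", "1000stake"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // send addr1 addr2 1000stake
}
```

With cobra itself the same shape is expressed as `rootCmd.AddCommand(txCmd)`, `txCmd.AddCommand(bankTxCmd)`, and so on, with `Execute()` performing the traversal and flag parsing.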
-### Root Command[​](#root-command "Direct link to Root Command") +### Root Command The root command (called `rootCmd`) is what the user first types into the command line to indicate which application they wish to interact with. The string used to invoke the command (the "Use" field) is typically the name of the application suffixed with `-d`, e.g. `simd` or `gaiad`. The root command typically includes the following commands to support basic functionality in the application. -* **Status** command from the Cosmos SDK rpc client tools, which prints information about the status of the connected [`Node`](/v0.50/learn/advanced/node). The Status of a node includes `NodeInfo`,`SyncInfo` and `ValidatorInfo`. -* **Keys** [commands](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/keys) from the Cosmos SDK client tools, which includes a collection of subcommands for using the key functions in the Cosmos SDK crypto tools, including adding a new key and saving it to the keyring, listing all public keys stored in the keyring, and deleting a key. For example, users can type `simd keys add ` to add a new key and save an encrypted copy to the keyring, using the flag `--recover` to recover a private key from a seed phrase or the flag `--multisig` to group multiple keys together to create a multisig key. For full details on the `add` key command, see the code [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/keys/add.go). For more details about usage of `--keyring-backend` for storage of key credentials look at the [keyring docs](/v0.50/user/run-node/keyring). -* **Server** commands from the Cosmos SDK server package. These commands are responsible for providing the mechanisms necessary to start an ABCI CometBFT application and provides the CLI framework (based on [cobra](https://github.com/spf13/cobra)) necessary to fully bootstrap an application. 
The package exposes two core functions: `StartCmd` and `ExportCmd` which creates commands to start the application and export state respectively. Learn more [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server). -* [**Transaction**](#transaction-commands) commands. -* [**Query**](#query-commands) commands. - -Next is an example `rootCmd` function from the `simapp` application. It instantiates the root command, adds a [*persistent* flag](#flags) and `PreRun` function to be run before every execution, and adds all of the necessary subcommands. - -simapp/simd/cmd/root\_v2.go - +- **Status** command from the Cosmos SDK rpc client tools, which prints information about the status of the connected [`Node`](/docs/sdk/v0.50/learn/advanced/node). The Status of a node includes `NodeInfo`, `SyncInfo` and `ValidatorInfo`. +- **Keys** [commands](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/keys) from the Cosmos SDK client tools, which include a collection of subcommands for using the key functions in the Cosmos SDK crypto tools, including adding a new key and saving it to the keyring, listing all public keys stored in the keyring, and deleting a key. For example, users can type `simd keys add ` to add a new key and save an encrypted copy to the keyring, using the flag `--recover` to recover a private key from a seed phrase or the flag `--multisig` to group multiple keys together to create a multisig key. For full details on the `add` key command, see the code [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/keys/add.go). For more details about usage of `--keyring-backend` for storage of key credentials, look at the [keyring docs](/docs/sdk/v0.50/user/run-node/keyring). +- **Server** commands from the Cosmos SDK server package.
These commands are responsible for providing the mechanisms necessary to start an ABCI CometBFT application and provide the CLI framework (based on [cobra](https://github.com/spf13/cobra)) necessary to fully bootstrap an application. The package exposes two core functions: `StartCmd` and `ExportCmd` which create commands to start the application and export state respectively. + Learn more [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server). +- [**Transaction**](#transaction-commands) commands. +- [**Query**](#query-commands) commands. + +Next is an example `rootCmd` function from the `simapp` application. It instantiates the root command, adds a [_persistent_ flag](#flags) and a `PreRun` function to be run before every execution, and adds all of the necessary subcommands. + +```go expandable +/go:build !app_v1 + +package cmd + +import ( + + "errors" + "io" + "os" + + cmtcfg "github.com/cometbft/cometbft/config" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/viper" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + confixcmd "cosmossdk.io/tools/confix/cmd" + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/client/snapshot" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd
"github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. +func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. 
+ enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. +func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + + var simApp *simapp.SimApp + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L47-L130) - - - Use the `EnhanceRootCommand()` from the AutoCLI options to automatically add auto-generated commands from the modules to the root command. Additionnally it adds all manually defined modules commands (`tx` and `query`) as well. Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section. - -`rootCmd` has a function called `initAppConfig()` which is useful for setting the application's custom configs. By default app uses CometBFT app config template from Cosmos SDK, which can be over-written via `initAppConfig()`. Here's an example code to override default `app.toml` template. 
- -simapp/simd/cmd/root\_v2.go - -``` -loading... + + Use the `EnhanceRootCommand()` from the AutoCLI options to automatically add + auto-generated commands from the modules to the root command. Additionally it + adds all manually defined module commands (`tx` and `query`) as well. Read + more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its + dedicated section. + + +`rootCmd` has a function called `initAppConfig()` which is useful for setting the application's custom configs. +By default the app uses the CometBFT app config template from the Cosmos SDK, which can be overwritten via `initAppConfig()`. +Here's example code to override the default `app.toml` template. + +```go expandable +/go:build !app_v1 + +package cmd + +import ( + + "errors" + "io" + "os" + + cmtcfg "github.com/cometbft/cometbft/config" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/viper" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + confixcmd "cosmossdk.io/tools/confix/cmd" + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/client/snapshot" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule
"github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. +func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. 
+ enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. +func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + + var simApp *simapp.SimApp + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L144-L199) - The `initAppConfig()` also allows overriding the default Cosmos SDK's [server config](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server/config/config.go#L235). One example is the `min-gas-prices` config, which defines the minimum gas prices a validator is willing to accept for processing a transaction. By default, the Cosmos SDK sets this parameter to `""` (empty string), which forces all validators to tweak their own `app.toml` and set a non-empty value, or else the node will halt on startup. This might not be the best UX for validators, so the chain developer can set a default `app.toml` value for validators inside this `initAppConfig()` function. 
-simapp/simd/cmd/root\_v2.go - -``` -loading... +```go expandable +/go:build !app_v1 + +package cmd + +import ( + + "errors" + "io" + "os" + + cmtcfg "github.com/cometbft/cometbft/config" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/viper" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + confixcmd "cosmossdk.io/tools/confix/cmd" + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/client/snapshot" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. 
+func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. 
+ enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. +func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + + var simApp *simapp.SimApp + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L164-L180) - -The root-level `status` and `keys` subcommands are common across most applications and do not interact with application state. The bulk of an application's functionality - what users can actually *do* with it - is enabled by its `tx` and `query` commands. - -### Transaction Commands[​](#transaction-commands "Direct link to Transaction Commands") - -[Transactions](/v0.50/learn/advanced/transactions) are objects wrapping [`Msg`s](/v0.50/build/building-modules/messages-and-queries#messages) that trigger state changes. 
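For reference, when the custom template returned by `initAppConfig` above is used, the generated `app.toml` would contain a fragment roughly like the following. This is a sketch, not generated output: the `query_gas_limit` and `lru_size` keys come from the `mapstructure` tags, `minimum-gas-prices` reflects the `srvCfg.MinGasPrices = "0stake"` override, and the exact surrounding content depends on `serverconfig.DefaultConfigTemplate`.

```toml
# Chain developer's default, overriding the SDK's empty value.
minimum-gas-prices = "0stake"

[wasm]
# This is the maximum sdk gas (wasm and storage)
# that we allow for any x/wasm "smart" queries
query_gas_limit = 300000
# This is the number of wasm vm instances we keep cached in memory for speed-up
# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally
lru_size = 0
```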
To enable the creation of transactions using the CLI interface, a function `txCommand` is generally added to the `rootCmd`: - -simapp/simd/cmd/root\_v2.go - -``` -loading... +The root-level `status` and `keys` subcommands are common across most applications and do not interact with application state. The bulk of an application's functionality - what users can actually _do_ with it - is enabled by its `tx` and `query` commands. + +### Transaction Commands + +[Transactions](/docs/sdk/v0.50/learn/advanced/transactions) are objects wrapping [`Msg`s](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages) that trigger state changes. To enable the creation of transactions using the CLI interface, a function `txCommand` is generally added to the `rootCmd`: + +```go expandable +/go:build !app_v1 + +package cmd + +import ( + + "errors" + "io" + "os" + + cmtcfg "github.com/cometbft/cometbft/config" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/viper" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + confixcmd "cosmossdk.io/tools/confix/cmd" + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/client/snapshot" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd 
"github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. +func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. 
+ enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. +func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport(
+ logger log.Logger,
+ db dbm.DB,
+ traceStore io.Writer,
+ height int64,
+ forZeroHeight bool,
+ jailAllowedAddrs []string,
+ appOpts servertypes.AppOptions,
+ modulesToExport []string,
+) (servertypes.ExportedApp, error) {
+ / this check is necessary as we use the flag in x/upgrade.
+ / we can exit more gracefully by checking the flag here.
+ homePath, ok := appOpts.Get(flags.FlagHome).(string)
+ if !ok || homePath == "" {
+ return servertypes.ExportedApp{
+}, errors.New("application home not set")
+}
+
+viperAppOpts, ok := appOpts.(*viper.Viper)
+ if !ok {
+ return servertypes.ExportedApp{
+}, errors.New("appOpts is not viper.Viper")
+}
+
+ / overwrite the FlagInvCheckPeriod
+ viperAppOpts.Set(server.FlagInvCheckPeriod, 1)
+
+appOpts = viperAppOpts
+
+ var simApp *simapp.SimApp
+ if height != -1 {
+ simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts)
+ if err := simApp.LoadHeight(height); err != nil {
+ return servertypes.ExportedApp{
+}, err
+}
+
+}
+
+else {
+ simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts)
+}
+
+return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport)
+}
```
-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L222-L229)
-
This `txCommand` function adds all the transaction commands available to end-users of the application. This typically includes:
-* **Sign command** from the [`auth`](/v0.50/build/modules/auth) module that signs messages in a transaction. To enable multisig, add the `auth` module's `MultiSign` command. Since every transaction requires some sort of signature in order to be valid, the signing command is necessary for every application.
-* **Broadcast command** from the Cosmos SDK client tools, to broadcast transactions.
-* **All [module transaction commands](/v0.50/build/building-modules/module-interfaces#transaction-commands)** the application is dependent on, retrieved by using the [basic module manager's](/v0.50/build/building-modules/module-manager#basic-manager) `AddTxCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli).
+- **Sign command** from the [`auth`](/docs/sdk/v0.50/documentation/module-system/modules/auth) module that signs messages in a transaction. To enable multisig, add the `auth` module's `MultiSign` command. Since every transaction requires a signature to be valid, the signing command is necessary for every application.
+- **Broadcast command** from the Cosmos SDK client tools, to broadcast transactions.
+- **All [module transaction commands](/docs/sdk/v0.50/documentation/module-system/module-interfaces#transaction-commands)** the application depends on, retrieved using the [basic module manager's](/docs/sdk/v0.50/documentation/module-system/module-manager#basic-manager) `AddTxCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli).

Here is an example of a `txCommand` aggregating these subcommands from the `simapp` application:

-simapp/simd/cmd/root\_v2.go
-
-```
-loading...
+```go expandable +/go:build !app_v1 + +package cmd + +import ( + + "errors" + "io" + "os" + + cmtcfg "github.com/cometbft/cometbft/config" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/viper" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + confixcmd "cosmossdk.io/tools/confix/cmd" + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/client/snapshot" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. 
+func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. 
+ enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. +func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + + var simApp *simapp.SimApp + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L270-L292) - - - When using AutoCLI to generate module transaction commands, `EnhanceRootCommand()` automatically adds the module `tx` command to the root command. Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section. - - -### Query Commands[​](#query-commands "Direct link to Query Commands") - -[**Queries**](/v0.50/build/building-modules/messages-and-queries#queries) are objects that allow users to retrieve information about the application's state. To enable the creation of queries using the CLI interface, a function `queryCommand` is generally added to the `rootCmd`: - -simapp/simd/cmd/root\_v2.go - -``` -loading... 
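Once wired in, these subcommands are reached through the binary's `tx` namespace. As an illustrative session (the addresses, key name, and chain ID below are placeholders, and a running node is assumed), the sign-and-broadcast flow enabled by the commands above might look like:

```shell
# Build an unsigned bank transfer, sign it with the local key "alice",
# then broadcast the signed transaction through a node.
simd tx bank send $ALICE_ADDR $BOB_ADDR 1000stake \
  --generate-only --chain-id my-chain > unsigned.json
simd tx sign unsigned.json --from alice --chain-id my-chain > signed.json
simd tx broadcast signed.json
```

The same pattern applies to any module `Msg`: generate, sign (or `multisign`), then broadcast.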
+ + When using AutoCLI to generate module transaction commands, + `EnhanceRootCommand()` automatically adds the module `tx` command to the root + command. Read more about + [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated + section. + + +### Query Commands + +[**Queries**](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#queries) are objects that allow users to retrieve information about the application's state. To enable the creation of queries using the CLI interface, a function `queryCommand` is generally added to the `rootCmd`: + +```go expandable +/go:build !app_v1 + +package cmd + +import ( + + "errors" + "io" + "os" + + cmtcfg "github.com/cometbft/cometbft/config" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/viper" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + confixcmd "cosmossdk.io/tools/confix/cmd" + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/client/snapshot" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes 
"github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. +func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. 
+ enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. +func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + + var simApp *simapp.SimApp + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L222-L229) - This `queryCommand` function adds all the queries available to end-users for the application. This typically includes: -* **QueryTx** and/or other transaction query commands from the `auth` module which allow the user to search for a transaction by inputting its hash, a list of tags, or a block height. These queries allow users to see if transactions have been included in a block. -* **Account command** from the `auth` module, which displays the state (e.g. account balance) of an account given an address. -* **Validator command** from the Cosmos SDK rpc client tools, which displays the validator set of a given height. 
-* **Block command** from the Cosmos SDK RPC client tools, which displays the block data for a given height.
-* **All [module query commands](/v0.50/build/building-modules/module-interfaces#query-commands)** the application is dependent on, retrieved by using the [basic module manager's](/v0.50/build/building-modules/module-manager#basic-manager) `AddQueryCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli).
+- **QueryTx** and/or other transaction query commands from the `auth` module, which allow the user to search for a transaction by its hash, a list of tags, or a block height. These queries let users check whether transactions have been included in a block.
+- **Account command** from the `auth` module, which displays the state (e.g. account balance) of an account given an address.
+- **Validator command** from the Cosmos SDK RPC client tools, which displays the validator set at a given height.
+- **Block command** from the Cosmos SDK RPC client tools, which displays the block data for a given height.
+- **All [module query commands](/docs/sdk/v0.50/documentation/module-system/module-interfaces#query-commands)** the application is dependent on, retrieved by using the [basic module manager's](/docs/sdk/v0.50/documentation/module-system/module-manager#basic-manager) `AddQueryCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli).

Here is an example of a `queryCommand` aggregating subcommands from the `simapp` application:

-simapp/simd/cmd/root\_v2.go
-
-```
-loading...
+```go expandable +/go:build !app_v1 + +package cmd + +import ( + + "errors" + "io" + "os" + + cmtcfg "github.com/cometbft/cometbft/config" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/viper" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + confixcmd "cosmossdk.io/tools/confix/cmd" + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/client/snapshot" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. 
+func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. 
+ enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. +func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + + var simApp *simapp.SimApp + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L249-L268) + + When using AutoCLI to generate module query commands, `EnhanceRootCommand()` + automatically adds the module `query` command to the root command. Read more + about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its + dedicated section. + - - When using AutoCLI to generate module query commands, `EnhanceRootCommand()` automatically adds the module `query` command to the root command. Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section. - +## Flags -## Flags[​](#flags "Direct link to Flags") +Flags are used to modify commands; developers can include them in a `flags.go` file with their CLI. 
Users can explicitly include them in commands or pre-configure them inside their [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml). Commonly pre-configured flags include the `--node` to connect to and the `--chain-id` of the blockchain the user wishes to interact with.

-Flags are used to modify commands; developers can include them in a `flags.go` file with their CLI. Users can explicitly include them in commands or pre-configure them by inside their [`app.toml`](/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml). Commonly pre-configured flags include the `--node` to connect to and `--chain-id` of the blockchain the user wishes to interact with.

+A _persistent_ flag (as opposed to a _local_ flag) added to a command transcends all of its children: subcommands will inherit the configured values for these flags. Additionally, all flags have default values when they are added to commands; some toggle an option off but others are empty values that the user needs to override to create valid commands. A flag can be explicitly marked as _required_ so that an error is automatically thrown if the user does not provide a value, but it is also acceptable to handle unexpected missing flags differently.

-A *persistent* flag (as opposed to a *local* flag) added to a command transcends all of its children: subcommands will inherit the configured values for these flags. Additionally, all flags have default values when they are added to commands; some toggle an option off but others are empty values that the user needs to override to create valid commands. A flag can be explicitly marked as *required* so that an error is automatically thrown if the user does not provide a value, but it is also acceptable to handle unexpected missing flags differently.
+Flags are added to commands directly (generally in the [module's CLI file](/docs/sdk/v0.50/documentation/module-system/module-interfaces#flags) where module commands are defined) and no flag except for the `rootCmd` persistent flags has to be added at application level. It is common to add a _persistent_ flag for `--chain-id`, the unique identifier of the blockchain the application pertains to, to the root command. This flag can be added in the `main()` function, which makes sense as the chain ID should not change across commands in this application CLI.

-Flags are added to commands directly (generally in the [module's CLI file](/v0.50/build/building-modules/module-interfaces#flags) where module commands are defined) and no flag except for the `rootCmd` persistent flags has to be added at application level. It is common to add a *persistent* flag for `--chain-id`, the unique identifier of the blockchain the application pertains to, to the root command. Adding this flag can be done in the `main()` function. Adding this flag makes sense as the chain ID should not be changing across commands in this application CLI.
-
-## Environment variables[​](#environment-variables "Direct link to Environment variables")
+## Environment variables

Each flag is bound to its respective named environment variable. The name of the environment variable consists of two parts: the capitalized `basename` followed by the flag name, with `-` substituted by `_`. For example, the flag `--node` for an application with basename `GAIA` is bound to `GAIA_NODE`. This reduces the number of flags typed for routine operations. For example, instead of:

-```
+```shell
gaia --home=./ --node= --chain-id="testchain-1" --keyring-backend=test tx ... --from=
```

this will be more convenient:

-```
-# define env variables in .env, .envrc etcGAIA_HOME=GAIA_NODE=GAIA_CHAIN_ID="testchain-1"GAIA_KEYRING_BACKEND="test"# and later just usegaia tx ...
--from=
+```shell
+# define env variables in .env, .envrc etc
+GAIA_HOME=
+GAIA_NODE=
+GAIA_CHAIN_ID="testchain-1"
+GAIA_KEYRING_BACKEND="test"
+
+# and later just use
+gaia tx ... --from=
```

-## Configurations[​](#configurations "Direct link to Configurations")
+## Configurations

It is vital that the root command of an application uses the `PersistentPreRun()` cobra command property for executing the command, so all child commands have access to the server and client contexts. These contexts are set to their default values initially and may be modified, scoped to the command, in their respective `PersistentPreRun()` functions. Note that the `client.Context` is typically pre-populated with "default" values that may be useful for all commands to inherit and override if necessary. Here is an example of a `PersistentPreRun()` function from `simapp`:

-simapp/simd/cmd/root\_v2.go
-
-```
-loading...
+```go expandable
+/go:build !app_v1
+
+package cmd
+
+import (
+
+ "errors"
+ "io"
+ "os"
+
+ cmtcfg "github.com/cometbft/cometbft/config"
+ dbm "github.com/cosmos/cosmos-db"
+ "github.com/spf13/cobra"
+ "github.com/spf13/viper"
+ "cosmossdk.io/client/v2/autocli"
+ "cosmossdk.io/depinject"
+ "cosmossdk.io/log"
+ "cosmossdk.io/simapp"
+ confixcmd "cosmossdk.io/tools/confix/cmd"
+ rosettaCmd "cosmossdk.io/tools/rosetta/cmd"
+ "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/client/config"
+ "github.com/cosmos/cosmos-sdk/client/debug"
+ "github.com/cosmos/cosmos-sdk/client/flags"
+ "github.com/cosmos/cosmos-sdk/client/keys"
+ "github.com/cosmos/cosmos-sdk/client/pruning"
+ "github.com/cosmos/cosmos-sdk/client/rpc"
+ "github.com/cosmos/cosmos-sdk/client/snapshot"
+ "github.com/cosmos/cosmos-sdk/codec"
+ codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+ "github.com/cosmos/cosmos-sdk/server"
+ serverconfig "github.com/cosmos/cosmos-sdk/server/config"
+ servertypes "github.com/cosmos/cosmos-sdk/server/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+
"github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. +func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. 
+ rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. + enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. 
+func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. + type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + + var simApp *simapp.SimApp + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L81-L120) - The `SetCmdClientContextHandler` call reads persistent flags via `ReadPersistentCommandFlags` which creates a `client.Context` and sets that on the root command's `Context`. The `InterceptConfigsPreRunHandler` call creates a viper literal, default `server.Context`, and a logger and sets that on the root command's `Context`. The `server.Context` will be modified and saved to disk. The internal `interceptConfigs` call reads or creates a CometBFT configuration based on the home path provided. In addition, `interceptConfigs` also reads and loads the application configuration, `app.toml`, and binds that to the `server.Context` viper literal. 
This is vital so the application can get access to not only the CLI flags, but also to the application configuration values provided by this file. - - When willing to configure which logger is used, do not use `InterceptConfigsPreRunHandler`, which sets the default SDK logger, but instead use `InterceptConfigsAndCreateContext` and set the server context and the logger manually: + +When willing to configure which logger is used, do not use `InterceptConfigsPreRunHandler`, which sets the default SDK logger, but instead use `InterceptConfigsAndCreateContext` and set the server context and the logger manually: + +```diff expandable +-return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) + ++serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig) ++if err != nil { ++ return err ++} + ++/ overwrite default server logger ++logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout()) ++if err != nil { ++ return err ++} ++serverCtx.Logger = logger.With(log.ModuleKey, "server") + ++/ set server context ++return server.SetCmdServerContext(cmd, serverCtx) +``` - ``` - -return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig)+serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig)+if err != nil {+ return err+}+// overwrite default server logger+logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout())+if err != nil {+ return err+}+serverCtx.Logger = logger.With(log.ModuleKey, "server")+// set server context+return server.SetCmdServerContext(cmd, serverCtx) - ``` - + diff --git a/docs/sdk/v0.50/learn/advanced/config.mdx b/docs/sdk/v0.50/learn/advanced/config.mdx index 156038c4..a9526711 100644 --- a/docs/sdk/v0.50/learn/advanced/config.mdx +++ b/docs/sdk/v0.50/learn/advanced/config.mdx @@ -1,26 +1,288 @@ --- -title: "Configuration" -description: 
"Version: v0.50" +title: Configuration +description: >- + This documentation refers to the app.toml, if you'd like to read about the + config.toml please visit CometBFT docs. --- This documentation refers to the app.toml, if you'd like to read about the config.toml please visit [CometBFT docs](https://docs.cometbft.com/v0.37/). -tools/confix/data/v0.47-app.toml +{/* the following is not a python reference, however syntax coloring makes the file more readable in the docs */} -``` -loading... -``` +```python +# This is a TOML config file. +# For more information, see https://github.com/toml-lang/toml + +############################################################################### +### Base Configuration ### +############################################################################### + +# The minimum gas prices a validator is willing to accept for processing a +# transaction. A transaction's fees must meet the minimum of any denomination +# specified in this config (e.g. 0.25token1,0.0001token2). +minimum-gas-prices = "0stake" + +# default: the last 362880 states are kept, pruning at 10 block intervals +# nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) +# everything: 2 latest states will be kept; pruning at 10 block intervals. +# custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' +pruning = "default" + +# These are applied if and only if the pruning strategy is custom. +pruning-keep-recent = "0" +pruning-interval = "0" + +# HaltHeight contains a non-zero block height at which a node will gracefully +# halt and shutdown that can be used to assist upgrades and testing. +# +# Note: Commitment of state will be attempted on the corresponding block. +halt-height = 0 + +# HaltTime contains a non-zero minimum block time (in Unix seconds) at which +# a node will gracefully halt and shutdown that can be used to assist upgrades +# and testing. 
+# +# Note: Commitment of state will be attempted on the corresponding block. +halt-time = 0 + +# MinRetainBlocks defines the minimum block height offset from the current +# block being committed, such that all blocks past this offset are pruned +# from Tendermint. It is used as part of the process of determining the +# ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates +# that no blocks should be pruned. +# +# This configuration value is only responsible for pruning Tendermint blocks. +# It has no bearing on application state pruning which is determined by the +# "pruning-*" configurations. +# +# Note: Tendermint block pruning is dependent on this parameter in conjunction +# with the unbonding (safety threshold) period, state pruning and state sync +# snapshot parameters to determine the correct minimum value of +# ResponseCommit.RetainHeight. +min-retain-blocks = 0 + +# InterBlockCache enables inter-block caching. +inter-block-cache = true + +# IndexEvents defines the set of events in the form {eventType}.{attributeKey}, +# which informs Tendermint what to index. If empty, all events will be indexed. +# +# Example: +# ["message.sender", "message.recipient"] +index-events = [] + +# IavlCacheSize set the size of the iavl tree cache (in number of nodes). +iavl-cache-size = 781250 + +# IAVLDisableFastNode enables or disables the fast node feature of IAVL. +# Default is false. +iavl-disable-fastnode = false + +# IAVLLazyLoading enable/disable the lazy loading of iavl store. +# Default is false. +iavl-lazy-loading = false + +# AppDBBackend defines the database backend type to use for the application and snapshots DBs. +# An empty string indicates that a fallback will be used. +# The fallback is the db_backend value set in Tendermint's config.toml. 
+app-db-backend = "" + +############################################################################### +### Telemetry Configuration ### +############################################################################### + +[telemetry] + +# Prefixed with keys to separate services. +service-name = "" + +# Enabled enables the application telemetry functionality. When enabled, +# an in-memory sink is also enabled by default. Operators may also enabled +# other sinks such as Prometheus. +enabled = false + +# Enable prefixing gauge values with hostname. +enable-hostname = false + +# Enable adding hostname to labels. +enable-hostname-label = false + +# Enable adding service to labels. +enable-service-label = false + +# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink. +prometheus-retention-time = 0 + +# GlobalLabels defines a global set of name/value label tuples applied to all +# metrics emitted using the wrapper functions defined in telemetry package. +# +# Example: +# [["chain_id", "cosmoshub-1"]] +global-labels = [] + +############################################################################### +### API Configuration ### +############################################################################### + +[api] + +# Enable defines if the API server should be enabled. +enable = false + +# Swagger defines if swagger documentation should automatically be registered. +swagger = false + +# Address defines the API server to listen on. +address = "tcp://localhost:1317" + +# MaxOpenConnections defines the number of maximum open connections. +max-open-connections = 1000 + +# RPCReadTimeout defines the Tendermint RPC read timeout (in seconds). +rpc-read-timeout = 10 + +# RPCWriteTimeout defines the Tendermint RPC write timeout (in seconds). +rpc-write-timeout = 0 + +# RPCMaxBodyBytes defines the Tendermint maximum request body (in bytes). 
+rpc-max-body-bytes = 1000000 + +# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk). +enabled-unsafe-cors = false + +############################################################################### +### Rosetta Configuration ### +############################################################################### -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/main/tools/confix/data/v0.47-app.toml) +[rosetta] + +# Enable defines if the Rosetta API server should be enabled. +enable = false + +# Address defines the Rosetta API server to listen on. +address = ":8080" + +# Network defines the name of the blockchain that will be returned by Rosetta. +blockchain = "app" + +# Network defines the name of the network that will be returned by Rosetta. +network = "network" + +# Retries defines the number of retries when connecting to the node before failing. +retries = 3 + +# Offline defines if Rosetta server should run in offline mode. +offline = false + +# EnableDefaultSuggestedFee defines if the server should suggest fee by default. +# If 'construction/medata' is called without gas limit and gas price, +# suggested fee based on gas-to-suggest and denom-to-suggest will be given. +enable-fee-suggestion = false + +# GasToSuggest defines gas limit when calculating the fee +gas-to-suggest = 200000 + +# DenomToSuggest defines the default denom for fee suggestion. +# Price must be in minimum-gas-prices. +denom-to-suggest = "uatom" + +############################################################################### +### gRPC Configuration ### +############################################################################### + +[grpc] + +# Enable defines if the gRPC server should be enabled. +enable = true + +# Address defines the gRPC server address to bind to. +address = "localhost:9090" + +# MaxRecvMsgSize defines the max message size in bytes the server can receive. +# The default value is 10MB. 
+max-recv-msg-size = "10485760" + +# MaxSendMsgSize defines the max message size in bytes the server can send. +# The default value is math.MaxInt32. +max-send-msg-size = "2147483647" + +############################################################################### +### gRPC Web Configuration ### +############################################################################### + +[grpc-web] + +# GRPCWebEnable defines if the gRPC-web should be enabled. +# NOTE: gRPC must also be enabled, otherwise, this configuration is a no-op. +enable = true + +# Address defines the gRPC-web server address to bind to. +address = "localhost:9091" + +# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk). +enable-unsafe-cors = false + +############################################################################### +### State Sync Configuration ### +############################################################################### + +# State sync snapshots allow other nodes to rapidly join the network without replaying historical +# blocks, instead downloading and applying a snapshot of the application state at a given height. +[state-sync] + +# snapshot-interval specifies the block interval at which local state sync snapshots are +# taken (0 to disable). +snapshot-interval = 0 + +# snapshot-keep-recent specifies the number of recent snapshots to keep and serve (0 to keep all). +snapshot-keep-recent = 2 + +############################################################################### +### Store / State Streaming ### +############################################################################### + +[store] +streamers = [] + +[streamers] +[streamers.file] +keys = ["*"] +write_dir = "" +prefix = "" + +# output-metadata specifies if output the metadata file which includes the abci request/responses +# during processing the block. +output-metadata = "true" + +# stop-node-on-error specifies if propagate the file streamer errors to consensus state machine. 
+stop-node-on-error = "true" + +# fsync specifies if call fsync after writing the files. +fsync = "false" + +############################################################################### +### Mempool ### +############################################################################### + +[mempool] +# Setting max-txs to 0 will allow for a unbounded amount of transactions in the mempool. +# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool. +# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount. +# +# Note, this configuration only applies to SDK built-in app-side mempool +# implementations. +max-txs = 5000 + +``` -## inter-block-cache[​](#inter-block-cache "Direct link to inter-block-cache") +## inter-block-cache This feature will consume more ram than a normal node, if enabled. -## iavl-cache-size[​](#iavl-cache-size "Direct link to iavl-cache-size") +## iavl-cache-size Using this feature will increase ram consumption -## iavl-lazy-loading[​](#iavl-lazy-loading "Direct link to iavl-lazy-loading") +## iavl-lazy-loading This feature is to be used for archive nodes, allowing them to have a faster start up time. diff --git a/docs/sdk/v0.50/learn/advanced/context.mdx b/docs/sdk/v0.50/learn/advanced/context.mdx index d0416027..5a8e2360 100644 --- a/docs/sdk/v0.50/learn/advanced/context.mdx +++ b/docs/sdk/v0.50/learn/advanced/context.mdx @@ -1,79 +1,802 @@ --- -title: "Context" -description: "Version: v0.50" +title: Context --- - - The `context` is a data structure intended to be passed from function to function that carries information about the current state of the application. It provides access to a branched storage (a safe branch of the entire state) as well as useful objects and information like `gasMeter`, `block height`, `consensus parameters` and more. 
- +## Synopsis + +The `context` is a data structure intended to be passed from function to function that carries information about the current state of the application. It provides access to a branched storage (a safe branch of the entire state) as well as useful objects and information like `gasMeter`, `block height`, `consensus parameters` and more. - * [Anatomy of a Cosmos SDK Application](/v0.50/learn/beginner/app-anatomy) - * [Lifecycle of a Transaction](/v0.50/learn/beginner/tx-lifecycle) +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.50/learn/beginner/app-anatomy) +- [Lifecycle of a Transaction](/docs/sdk/v0.50/learn/beginner/tx-lifecycle) + -## Context Definition[​](#context-definition "Direct link to Context Definition") +## Context Definition -The Cosmos SDK `Context` is a custom data structure that contains Go's stdlib [`context`](https://pkg.go.dev/context) as its base, and has many additional types within its definition that are specific to the Cosmos SDK. The `Context` is integral to transaction processing in that it allows modules to easily access their respective [store](/v0.50/learn/advanced/store#base-layer-kvstores) in the [`multistore`](/v0.50/learn/advanced/store#multistore) and retrieve transactional context such as the block header and gas meter. +The Cosmos SDK `Context` is a custom data structure that contains Go's stdlib [`context`](https://pkg.go.dev/context) as its base, and has many additional types within its definition that are specific to the Cosmos SDK. The `Context` is integral to transaction processing in that it allows modules to easily access their respective [store](/docs/sdk/v0.50/learn/advanced/store#base-layer-kvstores) in the [`multistore`](/docs/sdk/v0.50/learn/advanced/store#multistore) and retrieve transactional context such as the block header and gas meter. -types/context.go +```go expandable +package types -``` -loading... 
-``` +import ( + + "context" + "time" + "cosmossdk.io/log" + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cosmos/gogoproto/proto" + "cosmossdk.io/store/gaskv" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/header" +) + +/ ExecMode defines the execution mode which can be set on a Context. +type ExecMode uint8 + +/ All possible execution modes. +const ( + ExecModeCheck ExecMode = iota + ExecModeReCheck + ExecModeSimulate + ExecModePrepareProposal + ExecModeProcessProposal + ExecModeVoteExtension + ExecModeFinalize +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c 
Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + +BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + +IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ clone the header before returning +func (c Context) + +BlockHeader() + +cmtproto.Header { + msg := proto.Clone(&c.header).(*cmtproto.Header) + +return *msg +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, 
c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. +func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. 
+func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/context.go#L41-L67) - -* **Base Context:** The base type is a Go [Context](https://pkg.go.dev/context), which is explained further in the [Go Context Package](#go-context-package) section below. -* **Multistore:** Every application's `BaseApp` contains a [`CommitMultiStore`](/v0.50/learn/advanced/store#multistore) which is provided when a `Context` is created. Calling the `KVStore()` and `TransientStore()` methods allows modules to fetch their respective [`KVStore`](/v0.50/learn/advanced/store#base-layer-kvstores) using their unique `StoreKey`. -* **Header:** The [header](https://docs.cometbft.com/v0.37/spec/core/data_structures#header) is a Blockchain type. It carries important information about the state of the blockchain, such as block height and proposer of the current block. -* **Header Hash:** The current block header hash, obtained during `abci.FinalizeBlock`. -* **Chain ID:** The unique identification number of the blockchain a block pertains to. -* **Transaction Bytes:** The `[]byte` representation of a transaction being processed using the context. Every transaction is processed by various parts of the Cosmos SDK and consensus engine (e.g. CometBFT) throughout its [lifecycle](/v0.50/learn/beginner/tx-lifecycle), some of which do not have any understanding of transaction types. Thus, transactions are marshaled into the generic `[]byte` type using some kind of [encoding format](/v0.50/learn/advanced/encoding) such as [Amino](/v0.50/learn/advanced/encoding). -* **Logger:** A `logger` from the CometBFT libraries. Learn more about logs [here](https://docs.cometbft.com/v0.37/core/configuration). Modules call this method to create their own unique module-specific logger. 
-* **VoteInfo:** A list of the ABCI type [`VoteInfo`](https://docs.cometbft.com/master/spec/abci/abci.html#voteinfo), which includes the name of a validator and a boolean indicating whether they have signed the block. -* **Gas Meters:** Specifically, a [`gasMeter`](/v0.50/learn/beginner/gas-fees#main-gas-meter) for the transaction currently being processed using the context and a [`blockGasMeter`](/v0.50/learn/beginner/gas-fees#block-gas-meter) for the entire block it belongs to. Users specify how much in fees they wish to pay for the execution of their transaction; these gas meters keep track of how much [gas](/v0.50/learn/beginner/gas-fees) has been used in the transaction or block so far. If the gas meter runs out, execution halts. -* **CheckTx Mode:** A boolean value indicating whether a transaction should be processed in `CheckTx` or `DeliverTx` mode. -* **Min Gas Price:** The minimum [gas](/v0.50/learn/beginner/gas-fees) price a node is willing to take in order to include a transaction in its block. This price is a local value configured by each node individually, and should therefore **not be used in any functions used in sequences leading to state-transitions**. -* **Consensus Params:** The ABCI type [Consensus Parameters](https://docs.cometbft.com/master/spec/abci/apps.html#consensus-parameters), which specify certain limits for the blockchain, such as maximum gas for a block. -* **Event Manager:** The event manager allows any caller with access to a `Context` to emit [`Events`](/v0.50/learn/advanced/events). Modules may define module specific `Events` by defining various `Types` and `Attributes` or use the common definitions found in `types/`. Clients can subscribe or query for these `Events`. These `Events` are collected throughout `FinalizeBlock` and are returned to CometBFT for indexing. -* **Priority:** The transaction priority, only relevant in `CheckTx`. -* **KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the `KVStore`. 
-* **Transient KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the transiant `KVStore`. -* **StreamingManager:** The streamingManager field provides access to the streaming manager, which allows modules to subscribe to state changes emitted by the blockchain. The streaming manager is used by the state listening API, which is described in [ADR 038](https://docs.cosmos.network/main/architecture/adr-038-state-listening). -* **CometInfo:** A lightweight field that contains information about the current block, such as the block height, time, and hash. This information can be used for validating evidence, providing historical data, and enhancing the user experience. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/comet/service.go#L14). -* **HeaderInfo:** The `headerInfo` field contains information about the current block header, such as the chain ID, gas limit, and timestamp. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/header/service.go#L14). - -## Go Context Package[​](#go-context-package "Direct link to Go Context Package") - -A basic `Context` is defined in the [Golang Context Package](https://pkg.go.dev/context). A `Context` is an immutable data structure that carries request-scoped data across APIs and processes. Contexts are also designed to enable concurrency and to be used in goroutines. - -Contexts are intended to be **immutable**; they should never be edited. Instead, the convention is to create a child context from its parent using a `With` function. For example: +c.headerHash = temp + return c +} +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. 
+func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. +func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. +func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. 
+func (c Context) + +WithGasMeter(meter storetypes.GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter storetypes.GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. +func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} + +/ WithExecMode returns a Context with an updated ExecMode. 
+func (c Context) + +WithExecMode(m ExecMode) + +Context { + c.execMode = m + return c +} + +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) + +WithMinGasPrices(gasPrices DecCoins) + +Context { + c.minGasPrice = gasPrices + return c +} + +/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) + +WithConsensusParams(params cmtproto.ConsensusParams) + +Context { + c.consParams = params + return c +} + +/ WithEventManager returns a Context with an updated event manager +func (c Context) + +WithEventManager(em EventManagerI) + +Context { + c.eventManager = em + return c +} + +/ WithPriority returns a Context with an updated tx priority +func (c Context) + +WithPriority(p int64) + +Context { + c.priority = p + return c +} + +/ WithStreamingManager returns a Context with an updated streaming manager +func (c Context) + +WithStreamingManager(sm storetypes.StreamingManager) + +Context { + c.streamingManager = sm + return c +} + +/ WithCometInfo returns a Context with an updated comet info +func (c Context) + +WithCometInfo(cometInfo comet.BlockInfo) + +Context { + c.cometInfo = cometInfo + return c +} + +/ WithHeaderInfo returns a Context with an updated header info +func (c Context) + +WithHeaderInfo(headerInfo header.Info) + +Context { + / Settime to UTC + headerInfo.Time = headerInfo.Time.UTC() + +c.headerInfo = headerInfo + return c +} + +/ TODO: remove??? 
+func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value interface{ +}) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key interface{ +}) + +interface{ +} { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. 
It is useful for passing an sdk.Context through methods that take a +/ stdlib context.Context parameter such as generated gRPC methods. To get the original +/ sdk.Context back, call UnwrapSDKContext. +/ +/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context. +func WrapSDKContext(ctx Context) + +context.Context { + return ctx +} + +/ UnwrapSDKContext retrieves a Context from a context.Context instance +/ attached with WrapSDKContext. It panics if a Context was not properly +/ attached +func UnwrapSDKContext(ctx context.Context) + +Context { + if sdkCtx, ok := ctx.(Context); ok { + return sdkCtx +} + +return ctx.Value(SdkContextKey).(Context) +} ``` + +- **Base Context:** The base type is a Go [Context](https://pkg.go.dev/context), which is explained further in the [Go Context Package](#go-context-package) section below. +- **Multistore:** Every application's `BaseApp` contains a [`CommitMultiStore`](/docs/sdk/v0.50/learn/advanced/store#multistore) which is provided when a `Context` is created. Calling the `KVStore()` and `TransientStore()` methods allows modules to fetch their respective [`KVStore`](/docs/sdk/v0.50/learn/advanced/store#base-layer-kvstores) using their unique `StoreKey`. +- **Header:** The [header](https://docs.cometbft.com/v0.37/spec/core/data_structures#header) is a Blockchain type. It carries important information about the state of the blockchain, such as block height and proposer of the current block. +- **Header Hash:** The current block header hash, obtained during `abci.FinalizeBlock`. +- **Chain ID:** The unique identification number of the blockchain a block pertains to. +- **Transaction Bytes:** The `[]byte` representation of a transaction being processed using the context. Every transaction is processed by various parts of the Cosmos SDK and consensus engine (e.g. 
CometBFT) throughout its [lifecycle](/docs/sdk/v0.50/learn/beginner/tx-lifecycle), some of which do not have any understanding of transaction types. Thus, transactions are marshaled into the generic `[]byte` type using some kind of [encoding format](/docs/sdk/v0.50/learn/advanced/encoding) such as [Amino](/docs/sdk/v0.50/learn/advanced/encoding). +- **Logger:** A `logger` from the CometBFT libraries. Learn more about logs [here](https://docs.cometbft.com/v0.37/core/configuration). Modules call this method to create their own unique module-specific logger. +- **VoteInfo:** A list of the ABCI type [`VoteInfo`](https://docs.cometbft.com/master/spec/abci/abci.html#voteinfo), which includes the name of a validator and a boolean indicating whether they have signed the block. +- **Gas Meters:** Specifically, a [`gasMeter`](/docs/sdk/v0.50/learn/beginner/gas-fees#main-gas-meter) for the transaction currently being processed using the context and a [`blockGasMeter`](/docs/sdk/v0.50/learn/beginner/gas-fees#block-gas-meter) for the entire block it belongs to. Users specify how much in fees they wish to pay for the execution of their transaction; these gas meters keep track of how much [gas](/docs/sdk/v0.50/learn/beginner/gas-fees) has been used in the transaction or block so far. If the gas meter runs out, execution halts. +- **CheckTx Mode:** A boolean value indicating whether a transaction should be processed in `CheckTx` or `DeliverTx` mode. +- **Min Gas Price:** The minimum [gas](/docs/sdk/v0.50/learn/beginner/gas-fees) price a node is willing to take in order to include a transaction in its block. This price is a local value configured by each node individually, and should therefore **not be used in any functions used in sequences leading to state-transitions**. 
+- **Consensus Params:** The ABCI type [Consensus Parameters](https://docs.cometbft.com/master/spec/abci/apps.html#consensus-parameters), which specify certain limits for the blockchain, such as maximum gas for a block. +- **Event Manager:** The event manager allows any caller with access to a `Context` to emit [`Events`](/docs/sdk/v0.50/learn/advanced/events). Modules may define module-specific + `Events` by defining various `Types` and `Attributes` or use the common definitions found in `types/`. Clients can subscribe or query for these `Events`. These `Events` are collected throughout `FinalizeBlock` and are returned to CometBFT for indexing. +- **Priority:** The transaction priority, only relevant in `CheckTx`. +- **KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the `KVStore`. +- **Transient KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the transient `KVStore`. +- **StreamingManager:** The `streamingManager` field provides access to the streaming manager, which allows modules to subscribe to state changes emitted by the blockchain. The streaming manager is used by the state listening API, which is described in [ADR 038](/docs/common/pages/adr-comprehensive#adr-038-kvstore-state-listening). +- **CometInfo:** A lightweight field that contains information about the current block, such as the block height, time, and hash. This information can be used for validating evidence, providing historical data, and enhancing the user experience. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/comet/service.go#L14). +- **HeaderInfo:** The `headerInfo` field contains information about the current block header, such as the chain ID, gas limit, and timestamp. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/header/service.go#L14). + +## Go Context Package + +A basic `Context` is defined in the [Golang Context Package](https://pkg.go.dev/context).
A `Context` +is an immutable data structure that carries request-scoped data across APIs and processes. Contexts +are also designed to enable concurrency and to be used in goroutines. + +Contexts are intended to be **immutable**; they should never be edited. Instead, the convention is +to create a child context from its parent using a `With` function. For example: + +```go childCtx = parentCtx.WithBlockHeader(header) ``` -The [Golang Context Package](https://pkg.go.dev/context) documentation instructs developers to explicitly pass a context `ctx` as the first argument of a process. +The [Golang Context Package](https://pkg.go.dev/context) documentation instructs developers to +explicitly pass a context `ctx` as the first argument of a process. -## Store branching[​](#store-branching "Direct link to Store branching") +## Store branching -The `Context` contains a `MultiStore`, which allows for branching and caching functionality using `CacheMultiStore` (queries in `CacheMultiStore` are cached to avoid future round trips). Each `KVStore` is branched in a safe and isolated ephemeral storage. Processes are free to write changes to the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can be committed to the underlying store at the end of the sequence or disregard them if something goes wrong. The pattern of usage for a Context is as follows: +The `Context` contains a `MultiStore`, which allows for branching and caching functionality using `CacheMultiStore` +(queries in `CacheMultiStore` are cached to avoid future round trips). +Each `KVStore` is branched in a safe and isolated ephemeral storage. Processes are free to write changes to +the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can +be committed to the underlying store at the end of the sequence or disregard them if something +goes wrong. The pattern of usage for a Context is as follows: -1. 
A process receives a Context `ctx` from its parent process, which provides information needed to perform the process. -2. The `ctx.ms` is a **branched store**, i.e. a branch of the [multistore](/v0.50/learn/advanced/store#multistore) is made so that the process can make changes to the state as it executes, without changing the original`ctx.ms`. This is useful to protect the underlying multistore in case the changes need to be reverted at some point in the execution. -3. The process may read and write from `ctx` as it is executing. It may call a subprocess and pass `ctx` to it as needed. -4. When a subprocess returns, it checks if the result is a success or failure. If a failure, nothing needs to be done - the branch `ctx` is simply discarded. If successful, the changes made to the `CacheMultiStore` can be committed to the original `ctx.ms` via `Write()`. +1. A process receives a Context `ctx` from its parent process, which provides information needed to + perform the process. +2. The `ctx.ms` is a **branched store**, i.e. a branch of the [multistore](/docs/sdk/v0.50/learn/advanced/store#multistore) is made so that the process can make changes to the state as it executes, without changing the original `ctx.ms`. This is useful to protect the underlying multistore in case the changes need to be reverted at some point in the execution. +3. The process may read and write from `ctx` as it is executing. It may call a subprocess and pass + `ctx` to it as needed. +4. When a subprocess returns, it checks if the result is a success or failure. If a failure, nothing + needs to be done - the branch `ctx` is simply discarded. If successful, the changes made to + the `CacheMultiStore` can be committed to the original `ctx.ms` via `Write()`.
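
The four steps above can be sketched with a self-contained toy store. This is an illustration of the branch-and-write semantics only, not the SDK's real `CacheMultiStore`; the `Store`, `cached`, and `branch` names are invented for this sketch:

```go
package main

import "fmt"

// Store is a toy key/value state standing in for a MultiStore.
type Store map[string]string

// cached buffers writes on top of a parent Store, mirroring the
// branch-and-write semantics of CacheMultiStore described above.
type cached struct {
	parent Store
	dirty  map[string]string
}

// branch creates an isolated, ephemeral branch of the parent store.
func branch(parent Store) *cached {
	return &cached{parent: parent, dirty: map[string]string{}}
}

// Get reads through the write buffer first, then the parent.
func (c *cached) Get(k string) string {
	if v, ok := c.dirty[k]; ok {
		return v
	}
	return c.parent[k]
}

// Set records a write in the branch only.
func (c *cached) Set(k, v string) { c.dirty[k] = v }

// Write commits the buffered writes back to the parent store.
func (c *cached) Write() {
	for k, v := range c.dirty {
		c.parent[k] = v
	}
}

func main() {
	state := Store{"balance": "100"}

	cc := branch(state)
	cc.Set("balance", "90")
	fmt.Println(state["balance"]) // parent untouched until Write: 100

	cc.Write() // success path: commit the branch
	fmt.Println(state["balance"]) // now 90
}
```

On the failure path the branch is simply dropped without calling `Write`, leaving the parent state untouched: exactly the discard-or-commit choice in step 4.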
-For example, here is a snippet from the [`runTx`](/v0.50/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function in [`baseapp`](/v0.50/learn/advanced/baseapp): +For example, here is a snippet from the [`runTx`](/docs/sdk/v0.50/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function in [`baseapp`](/docs/sdk/v0.50/learn/advanced/baseapp): -``` -runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)result = app.runMsgs(runMsgCtx, msgs, mode)result.GasWanted = gasWantedif mode != runTxModeDeliver { return result}if result.IsOK() { msCache.Write()} +```go +runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + +result = app.runMsgs(runMsgCtx, msgs, mode) + +result.GasWanted = gasWanted + if mode != runTxModeDeliver { + return result +} + if result.IsOK() { + msCache.Write() +} ``` Here is the process: -1. Prior to calling `runMsgs` on the message(s) in the transaction, it uses `app.cacheTxContext()` to branch and cache the context and multistore. +1. Prior to calling `runMsgs` on the message(s) in the transaction, it uses `app.cacheTxContext()` + to branch and cache the context and multistore. 2. `runMsgCtx` - the context with branched store, is used in `runMsgs` to return a result. -3. If the process is running in [`checkTxMode`](/v0.50/learn/advanced/baseapp#checktx), there is no need to write the changes - the result is returned immediately. -4. If the process is running in [`deliverTxMode`](/v0.50/learn/advanced/baseapp#delivertx) and the result indicates a successful run over all the messages, the branched multistore is written back to the original. +3. If the process is running in [`checkTxMode`](/docs/sdk/v0.50/learn/advanced/baseapp#checktx), there is no need to write the + changes - the result is returned immediately. +4. 
If the process is running in [`deliverTxMode`](/docs/sdk/v0.50/learn/advanced/baseapp#delivertx) and the result indicates + a successful run over all the messages, the branched multistore is written back to the original. diff --git a/docs/sdk/v0.50/learn/advanced/encoding.mdx b/docs/sdk/v0.50/learn/advanced/encoding.mdx index 872aa809..4112169b 100644 --- a/docs/sdk/v0.50/learn/advanced/encoding.mdx +++ b/docs/sdk/v0.50/learn/advanced/encoding.mdx @@ -1,220 +1,1895 @@ --- -title: "Encoding" -description: "Version: v0.50" +title: Encoding --- - - While encoding in the Cosmos SDK used to be mainly handled by `go-amino` codec, the Cosmos SDK is moving towards using `gogoprotobuf` for both state and client-side encoding. - +## Synopsis + +While encoding in the Cosmos SDK used to be mainly handled by `go-amino` codec, the Cosmos SDK is moving towards using `gogoprotobuf` for both state and client-side encoding. - * [Anatomy of a Cosmos SDK application](/v0.50/learn/beginner/app-anatomy) +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.50/learn/beginner/app-anatomy) + -## Encoding[​](#encoding-1 "Direct link to Encoding") +## Encoding -The Cosmos SDK utilizes two binary wire encoding protocols, [Amino](https://github.com/tendermint/go-amino/) which is an object encoding specification and [Protocol Buffers](https://developers.google.com/protocol-buffers), a subset of Proto3 with an extension for interface support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more information on Proto3, which Amino is largely compatible with (but not with Proto2). +The Cosmos SDK utilizes two binary wire encoding protocols, [Amino](https://github.com/tendermint/go-amino/) which is an object encoding specification and [Protocol Buffers](https://developers.google.com/protocol-buffers), a subset of Proto3 with an extension for +interface support. 
See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) +for more information on Proto3, which Amino is largely compatible with (but not with Proto2). -Due to Amino having significant performance drawbacks, being reflection-based, and not having any meaningful cross-language/client support, Protocol Buffers, specifically [gogoprotobuf](https://github.com/cosmos/gogoproto/), is being used in place of Amino. Note, this process of using Protocol Buffers over Amino is still an ongoing process. +Due to Amino having significant performance drawbacks, being reflection-based, and +not having any meaningful cross-language/client support, Protocol Buffers, specifically +[gogoprotobuf](https://github.com/cosmos/gogoproto/), is being used in place of Amino. +Note, this process of using Protocol Buffers over Amino is still an ongoing process. -Binary wire encoding of types in the Cosmos SDK can be broken down into two main categories, client encoding and store encoding. Client encoding mainly revolves around transaction processing and signing, whereas store encoding revolves around types used in state-machine transitions and what is ultimately stored in the Merkle tree. +Binary wire encoding of types in the Cosmos SDK can be broken down into two main +categories, client encoding and store encoding. Client encoding mainly revolves +around transaction processing and signing, whereas store encoding revolves around +types used in state-machine transitions and what is ultimately stored in the Merkle +tree. -For store encoding, protobuf definitions can exist for any type and will typically have an Amino-based "intermediary" type. Specifically, the protobuf-based type definition is used for serialization and persistence, whereas the Amino-based type is used for business logic in the state-machine where they may convert back-n-forth. 
Note, the Amino-based types may slowly be phased-out in the future, so developers should take note to use the protobuf message definitions where possible. +For store encoding, protobuf definitions can exist for any type and will typically +have an Amino-based "intermediary" type. Specifically, the protobuf-based type +definition is used for serialization and persistence, whereas the Amino-based type +is used for business logic in the state-machine where they may convert back-n-forth. +Note, the Amino-based types may slowly be phased-out in the future, so developers +should take note to use the protobuf message definitions where possible. -In the `codec` package, there exists two core interfaces, `BinaryCodec` and `JSONCodec`, where the former encapsulates the current Amino interface except it operates on types implementing the latter instead of generic `interface{}` types. +In the `codec` package, there exists two core interfaces, `BinaryCodec` and `JSONCodec`, +where the former encapsulates the current Amino interface except it operates on +types implementing the latter instead of generic `interface{}` types. -The `ProtoCodec`, where both binary and JSON serialization is handled via Protobuf. This means that modules may use Protobuf encoding, but the types must implement `ProtoMarshaler`. If modules wish to avoid implementing this interface for their types, this is autogenerated via [buf](https://buf.build/) +The `ProtoCodec`, where both binary and JSON serialization is handled +via Protobuf. This means that modules may use Protobuf encoding, but the types must +implement `ProtoMarshaler`. If modules wish to avoid implementing this interface +for their types, this is autogenerated via [buf](https://buf.build/) -If modules use [Collections](/v0.50/build/packages/collections) or [ORM](/v0.50/build/packages/03-orm.md), encoding and decoding are handled, marshal and unmarshal should not be handled manually unless for specific cases identified by the developer. 
+If modules use [Collections](/docs/sdk/v0.50/documentation/operations/packages/collections) or [ORM](/docs/sdk/v0.47/build/packages/orm), encoding and decoding are handled; marshal and unmarshal should not be handled manually except for specific cases identified by the developer. -### Gogoproto[​](#gogoproto "Direct link to Gogoproto") +### Gogoproto Modules are encouraged to utilize Protobuf encoding for their respective types. In the Cosmos SDK, we use the [Gogoproto](https://github.com/cosmos/gogoproto) specific implementation of the Protobuf spec that offers speed and DX improvements compared to the official [Google protobuf implementation](https://github.com/protocolbuffers/protobuf). -### Guidelines for protobuf message definitions[​](#guidelines-for-protobuf-message-definitions "Direct link to Guidelines for protobuf message definitions") +### Guidelines for protobuf message definitions In addition to [following official Protocol Buffer guidelines](https://developers.google.com/protocol-buffers/docs/proto3#simple), we recommend using these annotations in .proto files when dealing with interfaces: -* use `cosmos_proto.accepts_interface` to annote `Any` fields that accept interfaces +- use `cosmos_proto.accepts_interface` to annotate `Any` fields that accept interfaces + - pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface` + - example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (and not just `Content`) +- annotate interface implementations with `cosmos_proto.implements_interface` + - pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface` + - example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (and not just `Authorization`) - * pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface` - * example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (and not just `Content`) +Code
generators can then match the `accepts_interface` and `implements_interface` annotations to know whether some Protobuf messages are allowed to be packed in a given `Any` field or not. -* annotate interface implementations with `cosmos_proto.implements_interface` +### Transaction Encoding - * pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface` - * example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (and not just `Authorization`) +Another important use of Protobuf is the encoding and decoding of +[transactions](/docs/sdk/v0.50/learn/advanced/transactions). Transactions are defined by the application or +the Cosmos SDK but are then passed to the underlying consensus engine to be relayed to +other peers. Since the underlying consensus engine is agnostic to the application, +the consensus engine accepts only transactions in the form of raw bytes. -Code generators can then match the `accepts_interface` and `implements_interface` annotations to know whether some Protobuf messages are allowed to be packed in a given `Any` field or not. +- The `TxEncoder` object performs the encoding. +- The `TxDecoder` object performs the decoding. + +```go expandable +package types -### Transaction Encoding[​](#transaction-encoding "Direct link to Transaction Encoding") +import ( + + "encoding/json" + fmt "fmt" + strings "strings" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "github.com/cosmos/cosmos-sdk/codec" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) -Another important use of Protobuf is the encoding and decoding of [transactions](/v0.50/learn/advanced/transactions). Transactions are defined by the application or the Cosmos SDK but are then passed to the underlying consensus engine to be relayed to other peers. Since the underlying consensus engine is agnostic to the application, the consensus engine accepts only transactions in the form of raw bytes. 
+type ( + / Msg defines the interface a transaction message needed to fulfill. + Msg = proto.Message -* The `TxEncoder` object performs the encoding. -* The `TxDecoder` object performs the decoding. + / LegacyMsg defines the interface a transaction message needed to fulfill up through + / v0.47. + LegacyMsg interface { + Msg -types/tx\_msg.go + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / HasMsgs defines an interface a transaction must fulfill. + HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. + GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() + +string +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / HasValidateBasic defines a type that has a ValidateBasic method. 
+ / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. +func MsgTypeURL(msg proto.Message) + +string { + if m, ok := msg.(protov2.Message); ok { + return "/" + string(m.ProtoReflect().Descriptor().FullName()) +} + +return "/" + proto.MessageName(msg) +} + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} ``` -loading... 
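
To make the type-URL convention in the listing above concrete, the `GetModuleNameFromTypeURL` logic can be re-implemented as a standalone sketch. This is for illustration only, not the SDK helper itself; it additionally tolerates the leading `/` that `MsgTypeURL` produces:

```go
package main

import (
	"fmt"
	"strings"
)

// moduleNameFromTypeURL mirrors the convention shown above: the module
// name is the second dot-separated element of a message type URL,
// e.g. "cosmos.bank.v1beta1.MsgSend" => "bank". A leading "/" is
// stripped first.
func moduleNameFromTypeURL(input string) string {
	parts := strings.Split(strings.TrimPrefix(input, "/"), ".")
	if len(parts) > 1 {
		return parts[1]
	}
	return "" // not a valid type URL
}

func main() {
	fmt.Println(moduleNameFromTypeURL("/cosmos.bank.v1beta1.MsgSend"))  // bank
	fmt.Println(moduleNameFromTypeURL("cosmos.authz.v1beta1.MsgGrant")) // authz
}
```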
+ +A standard implementation of both these objects can be found in the [`auth/tx` module](/docs/sdk/v0.50/documentation/module-system/modules/auth/tx): + +```go expandable +package tx + +import ( + + "fmt" + "google.golang.org/protobuf/encoding/protowire" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/unknownproto" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" +) + +/ DefaultTxDecoder returns a default protobuf TxDecoder using the provided Marshaler. +func DefaultTxDecoder(cdc codec.ProtoCodecMarshaler) + +sdk.TxDecoder { + return func(txBytes []byte) (sdk.Tx, error) { + / Make sure txBytes follow ADR-027. + err := rejectNonADR027TxRaw(txBytes) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +var raw tx.TxRaw + + / reject all unknown proto fields in the root TxRaw + err = unknownproto.RejectUnknownFieldsStrict(txBytes, &raw, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(txBytes, &raw) + if err != nil { + return nil, err +} + +var body tx.TxBody + + / allow non-critical unknown fields in TxBody + txBodyHasUnknownNonCriticals, err := unknownproto.RejectUnknownFields(raw.BodyBytes, &body, true, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(raw.BodyBytes, &body) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +var authInfo tx.AuthInfo + + / reject all unknown proto fields in AuthInfo + err = unknownproto.RejectUnknownFieldsStrict(raw.AuthInfoBytes, &authInfo, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(raw.AuthInfoBytes, &authInfo) + if err != nil { + return nil, 
errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + theTx := &tx.Tx{ + Body: &body, + AuthInfo: &authInfo, + Signatures: raw.Signatures, +} + +return &wrapper{ + tx: theTx, + bodyBz: raw.BodyBytes, + authInfoBz: raw.AuthInfoBytes, + txBodyHasUnknownNonCriticals: txBodyHasUnknownNonCriticals, + cdc: cdc, +}, nil +} +} + +/ DefaultJSONTxDecoder returns a default protobuf JSON TxDecoder using the provided Marshaler. +func DefaultJSONTxDecoder(cdc codec.ProtoCodecMarshaler) + +sdk.TxDecoder { + return func(txBytes []byte) (sdk.Tx, error) { + var theTx tx.Tx + err := cdc.UnmarshalJSON(txBytes, &theTx) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +return &wrapper{ + tx: &theTx, + cdc: cdc, +}, nil +} +} + +/ rejectNonADR027TxRaw rejects txBytes that do not follow ADR-027. This is NOT +/ a generic ADR-027 checker, it only applies decoding TxRaw. Specifically, it +/ only checks that: +/ - field numbers are in ascending order (1, 2, and potentially multiple 3s), +/ - and varints are as short as possible. +/ All other ADR-027 edge cases (e.g. default values) + +are not applicable with +/ TxRaw. +func rejectNonADR027TxRaw(txBytes []byte) + +error { + / Make sure all fields are ordered in ascending order with this variable. + prevTagNum := protowire.Number(0) + for len(txBytes) > 0 { + tagNum, wireType, m := protowire.ConsumeTag(txBytes) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + / TxRaw only has bytes fields. + if wireType != protowire.BytesType { + return fmt.Errorf("expected %d wire type, got %d", protowire.BytesType, wireType) +} + / Make sure fields are ordered in ascending order. + if tagNum < prevTagNum { + return fmt.Errorf("txRaw must follow ADR-027, got tagNum %d after tagNum %d", tagNum, prevTagNum) +} + +prevTagNum = tagNum + + / All 3 fields of TxRaw have wireType == 2, so their next component + / is a varint, so we can safely call ConsumeVarint here. 
+ / Byte structure: + / Inner fields are verified in `DefaultTxDecoder` + lengthPrefix, m := protowire.ConsumeVarint(txBytes[m:]) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + / We make sure that this varint is as short as possible. + n := varintMinLength(lengthPrefix) + if n != m { + return fmt.Errorf("length prefix varint for tagNum %d is not as short as possible, read %d, only need %d", tagNum, m, n) +} + + / Skip over the bytes that store fieldNumber and wireType bytes. + _, _, m = protowire.ConsumeField(txBytes) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + +txBytes = txBytes[m:] +} + +return nil +} + +/ varintMinLength returns the minimum number of bytes necessary to encode an +/ uint using varint encoding. +func varintMinLength(n uint64) + +int { + switch { + / Note: 1< valz[j].ConsensusPower(r) +} + +func (valz ValidatorsByVotingPower) + +Swap(i, j int) { + valz[i], valz[j] = valz[j], valz[i] +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (v Validators) + +UnpackInterfaces(c codectypes.AnyUnpacker) + +error { + for i := range v { + if err := v[i].UnpackInterfaces(c); err != nil { + return err +} + +} + +return nil +} + +/ return the redelegation +func MustMarshalValidator(cdc codec.BinaryCodec, validator *Validator) []byte { + return cdc.MustMarshal(validator) +} + +/ unmarshal a redelegation from a store value +func MustUnmarshalValidator(cdc codec.BinaryCodec, value []byte) + +Validator { + validator, err := UnmarshalValidator(cdc, value) + if err != nil { + panic(err) +} + +return validator +} + +/ unmarshal a redelegation from a store value +func UnmarshalValidator(cdc codec.BinaryCodec, value []byte) (v Validator, err error) { + err = cdc.Unmarshal(value, &v) + +return v, err +} + +/ IsBonded checks if the validator status equals Bonded +func (v Validator) + +IsBonded() + +bool { + return v.GetStatus() == Bonded +} + +/ IsUnbonded checks if 
the validator status equals Unbonded +func (v Validator) + +IsUnbonded() + +bool { + return v.GetStatus() == Unbonded +} + +/ IsUnbonding checks if the validator status equals Unbonding +func (v Validator) + +IsUnbonding() + +bool { + return v.GetStatus() == Unbonding +} + +/ constant used in flags to indicate that description field should not be updated +const DoNotModifyDesc = "[do-not-modify]" + +func NewDescription(moniker, identity, website, securityContact, details string) + +Description { + return Description{ + Moniker: moniker, + Identity: identity, + Website: website, + SecurityContact: securityContact, + Details: details, +} +} + +/ UpdateDescription updates the fields of a given description. An error is +/ returned if the resulting description contains an invalid length. +func (d Description) + +UpdateDescription(d2 Description) (Description, error) { + if d2.Moniker == DoNotModifyDesc { + d2.Moniker = d.Moniker +} + if d2.Identity == DoNotModifyDesc { + d2.Identity = d.Identity +} + if d2.Website == DoNotModifyDesc { + d2.Website = d.Website +} + if d2.SecurityContact == DoNotModifyDesc { + d2.SecurityContact = d.SecurityContact +} + if d2.Details == DoNotModifyDesc { + d2.Details = d.Details +} + +return NewDescription( + d2.Moniker, + d2.Identity, + d2.Website, + d2.SecurityContact, + d2.Details, + ).EnsureLength() +} + +/ EnsureLength ensures the length of a validator's description. 
+func (d Description) + +EnsureLength() (Description, error) { + if len(d.Moniker) > MaxMonikerLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid moniker length; got: %d, max: %d", len(d.Moniker), MaxMonikerLength) +} + if len(d.Identity) > MaxIdentityLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid identity length; got: %d, max: %d", len(d.Identity), MaxIdentityLength) +} + if len(d.Website) > MaxWebsiteLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid website length; got: %d, max: %d", len(d.Website), MaxWebsiteLength) +} + if len(d.SecurityContact) > MaxSecurityContactLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid security contact length; got: %d, max: %d", len(d.SecurityContact), MaxSecurityContactLength) +} + if len(d.Details) > MaxDetailsLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid details length; got: %d, max: %d", len(d.Details), MaxDetailsLength) +} + +return d, nil +} + +/ ABCIValidatorUpdate returns an abci.ValidatorUpdate from a staking validator type +/ with the full validator power +func (v Validator) + +ABCIValidatorUpdate(r math.Int) + +abci.ValidatorUpdate { + tmProtoPk, err := v.TmConsPublicKey() + if err != nil { + panic(err) +} + +return abci.ValidatorUpdate{ + PubKey: tmProtoPk, + Power: v.ConsensusPower(r), +} +} + +/ ABCIValidatorUpdateZero returns an abci.ValidatorUpdate from a staking validator type +/ with zero power used for validator updates. +func (v Validator) + +ABCIValidatorUpdateZero() + +abci.ValidatorUpdate { + tmProtoPk, err := v.TmConsPublicKey() + if err != nil { + panic(err) +} + +return abci.ValidatorUpdate{ + PubKey: tmProtoPk, + Power: 0, +} +} + +/ SetInitialCommission attempts to set a validator's initial commission. An +/ error is returned if the commission is invalid. 
+func (v Validator) + +SetInitialCommission(commission Commission) (Validator, error) { + if err := commission.Validate(); err != nil { + return v, err +} + +v.Commission = commission + + return v, nil +} + +/ In some situations, the exchange rate becomes invalid, e.g. if +/ Validator loses all tokens due to slashing. In this case, +/ make all future delegations invalid. +func (v Validator) + +InvalidExRate() + +bool { + return v.Tokens.IsZero() && v.DelegatorShares.IsPositive() +} + +/ calculate the token worth of provided shares +func (v Validator) + +TokensFromShares(shares math.LegacyDec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).Quo(v.DelegatorShares) +} + +/ calculate the token worth of provided shares, truncated +func (v Validator) + +TokensFromSharesTruncated(shares math.LegacyDec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).QuoTruncate(v.DelegatorShares) +} + +/ TokensFromSharesRoundUp returns the token worth of provided shares, rounded +/ up. +func (v Validator) + +TokensFromSharesRoundUp(shares math.LegacyDec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).QuoRoundUp(v.DelegatorShares) +} + +/ SharesFromTokens returns the shares of a delegation given a bond amount. It +/ returns an error if the validator has no tokens. +func (v Validator) + +SharesFromTokens(amt math.Int) (math.LegacyDec, error) { + if v.Tokens.IsZero() { + return math.LegacyZeroDec(), ErrInsufficientShares +} + +return v.GetDelegatorShares().MulInt(amt).QuoInt(v.GetTokens()), nil +} + +/ SharesFromTokensTruncated returns the truncated shares of a delegation given +/ a bond amount. It returns an error if the validator has no tokens. 
+func (v Validator) + +SharesFromTokensTruncated(amt math.Int) (math.LegacyDec, error) { + if v.Tokens.IsZero() { + return math.LegacyZeroDec(), ErrInsufficientShares +} + +return v.GetDelegatorShares().MulInt(amt).QuoTruncate(math.LegacyNewDecFromInt(v.GetTokens())), nil +} + +/ get the bonded tokens which the validator holds +func (v Validator) + +BondedTokens() + +math.Int { + if v.IsBonded() { + return v.Tokens +} + +return math.ZeroInt() +} + +/ ConsensusPower gets the consensus-engine power. Aa reduction of 10^6 from +/ validator tokens is applied +func (v Validator) + +ConsensusPower(r math.Int) + +int64 { + if v.IsBonded() { + return v.PotentialConsensusPower(r) +} + +return 0 +} + +/ PotentialConsensusPower returns the potential consensus-engine power. +func (v Validator) + +PotentialConsensusPower(r math.Int) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/types/validator.go#L41-L64) +int64 { + return sdk.TokensToConsensusPower(v.Tokens, r) +} -#### `Any`'s TypeURL[​](#anys-typeurl "Direct link to anys-typeurl") +/ UpdateStatus updates the location of the shares within a validator +/ to reflect the new status +func (v Validator) + +UpdateStatus(newStatus BondStatus) + +Validator { + v.Status = newStatus + return v +} + +/ AddTokensFromDel adds tokens to a validator +func (v Validator) + +AddTokensFromDel(amount math.Int) (Validator, math.LegacyDec) { + / calculate the shares to issue + var issuedShares math.LegacyDec + if v.DelegatorShares.IsZero() { + / the first delegation to a validator sets the exchange rate to one + issuedShares = math.LegacyNewDecFromInt(amount) +} + +else { + shares, err := v.SharesFromTokens(amount) + if err != nil { + panic(err) +} + +issuedShares = shares +} + +v.Tokens = v.Tokens.Add(amount) + +v.DelegatorShares = v.DelegatorShares.Add(issuedShares) + +return v, issuedShares +} + +/ RemoveTokens removes tokens from a validator +func (v Validator) + +RemoveTokens(tokens math.Int) 
+ +Validator { + if tokens.IsNegative() { + panic(fmt.Sprintf("should not happen: trying to remove negative tokens %v", tokens)) +} + if v.Tokens.LT(tokens) { + panic(fmt.Sprintf("should not happen: only have %v tokens, trying to remove %v", v.Tokens, tokens)) +} + +v.Tokens = v.Tokens.Sub(tokens) + +return v +} + +/ RemoveDelShares removes delegator shares from a validator. +/ NOTE: because token fractions are left in the validator, +/ +/ the exchange rate of future shares of this validator can increase. +func (v Validator) + +RemoveDelShares(delShares math.LegacyDec) (Validator, math.Int) { + remainingShares := v.DelegatorShares.Sub(delShares) + +var issuedTokens math.Int + if remainingShares.IsZero() { + / last delegation share gets any trimmings + issuedTokens = v.Tokens + v.Tokens = math.ZeroInt() +} + +else { + / leave excess tokens in the validator + / however fully use all the delegator shares + issuedTokens = v.TokensFromShares(delShares).TruncateInt() + +v.Tokens = v.Tokens.Sub(issuedTokens) + if v.Tokens.IsNegative() { + panic("attempting to remove more tokens than available in validator") +} + +} + +v.DelegatorShares = remainingShares + + return v, issuedTokens +} + +/ MinEqual defines a minimal set of equality conditions when comparing two +/ validators.
+func (v *Validator) + +MinEqual(other *Validator) + +bool { + return v.OperatorAddress == other.OperatorAddress && + v.Status == other.Status && + v.Tokens.Equal(other.Tokens) && + v.DelegatorShares.Equal(other.DelegatorShares) && + v.Description.Equal(other.Description) && + v.Commission.Equal(other.Commission) && + v.Jailed == other.Jailed && + v.MinSelfDelegation.Equal(other.MinSelfDelegation) && + v.ConsensusPubkey.Equal(other.ConsensusPubkey) +} + +/ Equal checks if the receiver equals the parameter +func (v *Validator) + +Equal(v2 *Validator) + +bool { + return v.MinEqual(v2) && + v.UnbondingHeight == v2.UnbondingHeight && + v.UnbondingTime.Equal(v2.UnbondingTime) +} + +func (v Validator) + +IsJailed() + +bool { + return v.Jailed +} + +func (v Validator) + +GetMoniker() + +string { + return v.Description.Moniker +} + +func (v Validator) + +GetStatus() + +BondStatus { + return v.Status +} + +func (v Validator) + +GetOperator() + +sdk.ValAddress { + if v.OperatorAddress == "" { + return nil +} + +addr, err := sdk.ValAddressFromBech32(v.OperatorAddress) + if err != nil { + panic(err) +} + +return addr +} + +/ ConsPubKey returns the validator PubKey as a cryptotypes.PubKey. +func (v Validator) + +ConsPubKey() (cryptotypes.PubKey, error) { + pk, ok := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey) + if !ok { + return nil, errors.Wrapf(sdkerrors.ErrInvalidType, "expecting cryptotypes.PubKey, got %T", pk) +} + +return pk, nil +} + +/ Deprecated: use CmtConsPublicKey instead +func (v Validator) + +TmConsPublicKey() (cmtprotocrypto.PublicKey, error) { + return v.CmtConsPublicKey() +} + +/ CmtConsPublicKey casts Validator.ConsensusPubkey to cmtprotocrypto.PubKey. 
+func (v Validator) + +CmtConsPublicKey() (cmtprotocrypto.PublicKey, error) { + pk, err := v.ConsPubKey() + if err != nil { + return cmtprotocrypto.PublicKey{ +}, err +} + +tmPk, err := cryptocodec.ToCmtProtoPublicKey(pk) + if err != nil { + return cmtprotocrypto.PublicKey{ +}, err +} + +return tmPk, nil +} + +/ GetConsAddr extracts Consensus key address +func (v Validator) + +GetConsAddr() (sdk.ConsAddress, error) { + pk, ok := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey) + if !ok { + return nil, errors.Wrapf(sdkerrors.ErrInvalidType, "expecting cryptotypes.PubKey, got %T", pk) +} + +return sdk.ConsAddress(pk.Address()), nil +} + +func (v Validator) + +GetTokens() + +math.Int { + return v.Tokens +} + +func (v Validator) + +GetBondedTokens() + +math.Int { + return v.BondedTokens() +} + +func (v Validator) + +GetConsensusPower(r math.Int) + +int64 { + return v.ConsensusPower(r) +} + +func (v Validator) + +GetCommission() + +math.LegacyDec { + return v.Commission.Rate +} + +func (v Validator) + +GetMinSelfDelegation() + +math.Int { + return v.MinSelfDelegation +} + +func (v Validator) + +GetDelegatorShares() + +math.LegacyDec { + return v.DelegatorShares +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (v Validator) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + var pk cryptotypes.PubKey + return unpacker.UnpackAny(v.ConsensusPubkey, &pk) +} +``` + +#### `Any`'s TypeURL When packing a protobuf message inside an `Any`, the message's type is uniquely defined by its type URL, which is the message's fully qualified name prefixed by a `/` (slash) character. In some implementations of `Any`, like the gogoproto one, there's generally [a resolvable prefix, e.g. `type.googleapis.com`](https://github.com/gogo/protobuf/blob/b03c65ea87cdc3521ede29f62fe3ce239267c1bc/protobuf/google/protobuf/any.proto#L87-L91). However, in the Cosmos SDK, we made the decision to not include such prefix, to have shorter type URLs. 
The Cosmos SDK's own `Any` implementation can be found in `github.com/cosmos/cosmos-sdk/codec/types`. The Cosmos SDK is also switching away from gogoproto to the official `google.golang.org/protobuf` (known as the Protobuf API v2). Its default `Any` implementation also contains the [`type.googleapis.com`](https://github.com/protocolbuffers/protobuf-go/blob/v1.28.1/types/known/anypb/any.pb.go#L266) prefix. To maintain compatibility with the SDK, the following methods from `"google.golang.org/protobuf/types/known/anypb"` should not be used: -* `anypb.New` -* `anypb.MarshalFrom` -* `anypb.Any#MarshalFrom` +- `anypb.New` +- `anypb.MarshalFrom` +- `anypb.Any#MarshalFrom` Instead, the Cosmos SDK provides helper functions in `"github.com/cosmos/cosmos-proto/anyutil"`, which create an official `anypb.Any` without inserting the prefixes: -* `anyutil.New` -* `anyutil.MarshalFrom` +- `anyutil.New` +- `anyutil.MarshalFrom` For example, to pack a `sdk.Msg` called `internalMsg`, use: -``` -import (- "google.golang.org/protobuf/types/known/anypb"+ "github.com/cosmos/cosmos-proto/anyutil")- anyMsg, err := anypb.New(internalMsg.Message().Interface())+ anyMsg, err := anyutil.New(internalMsg.Message().Interface())- fmt.Println(anyMsg.TypeURL) // type.googleapis.com/cosmos.bank.v1beta1.MsgSend+ fmt.Println(anyMsg.TypeURL) // /cosmos.bank.v1beta1.MsgSend +```diff +import ( +- "google.golang.org/protobuf/types/known/anypb" ++ "github.com/cosmos/cosmos-proto/anyutil" +) + +- anyMsg, err := anypb.New(internalMsg.Message().Interface()) ++ anyMsg, err := anyutil.New(internalMsg.Message().Interface()) + +- fmt.Println(anyMsg.TypeURL) / type.googleapis.com/cosmos.bank.v1beta1.MsgSend ++ fmt.Println(anyMsg.TypeURL) / /cosmos.bank.v1beta1.MsgSend ``` -## FAQ[​](#faq "Direct link to FAQ") +## FAQ -### How to create modules using protobuf encoding[​](#how-to-create-modules-using-protobuf-encoding "Direct link to How to create modules using protobuf encoding") +### How to create modules using 
protobuf encoding -#### Defining module types[​](#defining-module-types "Direct link to Defining module types") +#### Defining module types Protobuf types can be defined to encode: -* state -* [`Msg`s](/v0.50/build/building-modules/messages-and-queries#messages) -* [Query services](/v0.50/build/building-modules/query-services) -* [genesis](/v0.50/build/building-modules/genesis) +- state +- [`Msg`s](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages) +- [Query services](/docs/sdk/v0.50/documentation/module-system/query-services) +- [genesis](/docs/sdk/v0.50/documentation/module-system/genesis) -#### Naming and conventions[​](#naming-and-conventions "Direct link to Naming and conventions") +#### Naming and conventions -We encourage developers to follow industry guidelines: [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style) and [Buf](https://buf.build/docs/style-guide), see more details in [ADR 023](/v0.50/architecture/adr-023-protobuf-naming.md) +We encourage developers to follow industry guidelines: [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style) +and [Buf](https://buf.build/docs/style-guide), see more details in [ADR 023](/docs/common/pages/adr-comprehensive#adr-023-protocol-buffer-naming-and-versioning-conventions) -### How to update modules to protobuf encoding[​](#how-to-update-modules-to-protobuf-encoding "Direct link to How to update modules to protobuf encoding") +### How to update modules to protobuf encoding -If modules do not contain any interfaces (e.g. `Account` or `Content`), then they may simply migrate any existing types that are encoded and persisted via their concrete Amino codec to Protobuf (see 1. for further guidelines) and accept a `Marshaler` as the codec which is implemented via the `ProtoCodec` without any further customization. +If modules do not contain any interfaces (e.g. 
`Account` or `Content`), then they +may simply migrate any existing types that +are encoded and persisted via their concrete Amino codec to Protobuf (see 1. for further guidelines) and accept a `Marshaler` as the codec, which is implemented via the `ProtoCodec` +without any further customization. However, if a module type composes an interface, it must wrap it in the `sdk.Any` (from `/types` package) type. To do that, a module-level .proto file must use [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto) for the respective interface message types. For example, the `x/evidence` module defines an `Evidence` interface, which is used by `MsgSubmitEvidence`. The structure definition must use `sdk.Any` to wrap the evidence. In the proto file we define it as follows: -``` -// proto/cosmos/evidence/v1beta1/tx.protomessage MsgSubmitEvidence { string submitter = 1; google.protobuf.Any evidence = 2 [(cosmos_proto.accepts_interface) = "cosmos.evidence.v1beta1.Evidence"];} +```protobuf +/ proto/cosmos/evidence/v1beta1/tx.proto + +message MsgSubmitEvidence { + string submitter = 1; + google.protobuf.Any evidence = 2 [(cosmos_proto.accepts_interface) = "cosmos.evidence.v1beta1.Evidence"]; +} ``` The Cosmos SDK `codec.Codec` interface provides support methods `MarshalInterface` and `UnmarshalInterface` to ease encoding of state to `Any`. Modules should register interfaces using the `InterfaceRegistry`, which provides a mechanism for registering interfaces: `RegisterInterface(protoName string, iface interface{}, impls ...proto.Message)` and implementations: `RegisterImplementations(iface interface{}, impls ...proto.Message)` that can be safely unpacked from Any, similarly to type registration with Amino: -codec/types/interface\_registry.go - -``` -loading...
+```go expandable +package types + +import ( + + "fmt" + "reflect" + "github.com/cosmos/gogoproto/jsonpb" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoreflect" + "cosmossdk.io/x/tx/signing" +) + +/ AnyUnpacker is an interface which allows safely unpacking types packed +/ in Any's against a whitelist of registered types +type AnyUnpacker interface { + / UnpackAny unpacks the value in any to the interface pointer passed in as + / iface. Note that the type in any must have been registered in the + / underlying whitelist registry as a concrete type for that interface + / Ex: + / var msg sdk.Msg + / err := cdc.UnpackAny(any, &msg) + / ... + UnpackAny(any *Any, iface interface{ +}) + +error +} + +/ InterfaceRegistry provides a mechanism for registering interfaces and +/ implementations that can be safely unpacked from Any +type InterfaceRegistry interface { + AnyUnpacker + jsonpb.AnyResolver + + / RegisterInterface associates protoName as the public name for the + / interface passed in as iface. This is to be used primarily to create + / a public facing registry of interface implementations for clients. + / protoName should be a well-chosen public facing name that remains stable. + / RegisterInterface takes an optional list of impls to be registered + / as implementations of iface. + / + / Ex: + / registry.RegisterInterface("cosmos.base.v1beta1.Msg", (*sdk.Msg)(nil)) + +RegisterInterface(protoName string, iface interface{ +}, impls ...proto.Message) + + / RegisterImplementations registers impls as concrete implementations of + / the interface iface. + / + / Ex: + / registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{ +}, &MsgMultiSend{ +}) + +RegisterImplementations(iface interface{ +}, impls ...proto.Message) + + / ListAllInterfaces list the type URLs of all registered interfaces. 
+ ListAllInterfaces() []string + + / ListImplementations lists the valid type URLs for the given interface name that can be used + / for the provided interface type URL. + ListImplementations(ifaceTypeURL string) []string + + / EnsureRegistered ensures there is a registered interface for the given concrete type. + EnsureRegistered(iface interface{ +}) + +error + + protodesc.Resolver + + / RangeFiles iterates over all registered files and calls f on each one. This + / implements the part of protoregistry.Files that is needed for reflecting over + / the entire FileDescriptorSet. + RangeFiles(f func(protoreflect.FileDescriptor) + +bool) + +SigningContext() *signing.Context + + / mustEmbedInterfaceRegistry requires that all implementations of InterfaceRegistry embed an official implementation + / from this package. This allows new methods to be added to the InterfaceRegistry interface without breaking + / backwards compatibility. + mustEmbedInterfaceRegistry() +} + +/ UnpackInterfacesMessage is meant to extend protobuf types (which implement +/ proto.Message) + +to support a post-deserialization phase which unpacks +/ types packed within Any's using the whitelist provided by AnyUnpacker +type UnpackInterfacesMessage interface { + / UnpackInterfaces is implemented in order to unpack values packed within + / Any's using the AnyUnpacker. 
It should generally be implemented as + / follows: + / func (s *MyStruct) + +UnpackInterfaces(unpacker AnyUnpacker) + +error { + / var x AnyInterface + / / where X is an Any field on MyStruct + / err := unpacker.UnpackAny(s.X, &x) + / if err != nil { + / return nil + / +} + / / where Y is a field on MyStruct that implements UnpackInterfacesMessage itself + / err = s.Y.UnpackInterfaces(unpacker) + / if err != nil { + / return nil + / +} + / return nil + / +} + +UnpackInterfaces(unpacker AnyUnpacker) + +error +} + +type interfaceRegistry struct { + signing.ProtoFileResolver + interfaceNames map[string]reflect.Type + interfaceImpls map[reflect.Type]interfaceMap + implInterfaces map[reflect.Type]reflect.Type + typeURLMap map[string]reflect.Type + signingCtx *signing.Context +} + +type interfaceMap = map[string]reflect.Type + +/ NewInterfaceRegistry returns a new InterfaceRegistry +func NewInterfaceRegistry() + +InterfaceRegistry { + registry, err := NewInterfaceRegistryWithOptions(InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: failingAddressCodec{ +}, + ValidatorAddressCodec: failingAddressCodec{ +}, +}, +}) + if err != nil { + panic(err) +} + +return registry +} + +/ InterfaceRegistryOptions are options for creating a new InterfaceRegistry. +type InterfaceRegistryOptions struct { + / ProtoFiles is the set of files to use for the registry. It is required. + ProtoFiles signing.ProtoFileResolver + + / SigningOptions are the signing options to use for the registry. + SigningOptions signing.Options +} + +/ NewInterfaceRegistryWithOptions returns a new InterfaceRegistry with the given options. 
+func NewInterfaceRegistryWithOptions(options InterfaceRegistryOptions) (InterfaceRegistry, error) { + if options.ProtoFiles == nil { + return nil, fmt.Errorf("proto files must be provided") +} + +options.SigningOptions.FileResolver = options.ProtoFiles + signingCtx, err := signing.NewContext(options.SigningOptions) + if err != nil { + return nil, err +} + +return &interfaceRegistry{ + interfaceNames: map[string]reflect.Type{ +}, + interfaceImpls: map[reflect.Type]interfaceMap{ +}, + implInterfaces: map[reflect.Type]reflect.Type{ +}, + typeURLMap: map[string]reflect.Type{ +}, + ProtoFileResolver: options.ProtoFiles, + signingCtx: signingCtx, +}, nil +} + +func (registry *interfaceRegistry) + +RegisterInterface(protoName string, iface interface{ +}, impls ...proto.Message) { + typ := reflect.TypeOf(iface) + if typ.Elem().Kind() != reflect.Interface { + panic(fmt.Errorf("%T is not an interface type", iface)) +} + +registry.interfaceNames[protoName] = typ + registry.RegisterImplementations(iface, impls...) +} + +/ EnsureRegistered ensures there is a registered interface for the given concrete type. +/ +/ Returns an error if not, and nil if so. +func (registry *interfaceRegistry) + +EnsureRegistered(impl interface{ +}) + +error { + if reflect.ValueOf(impl).Kind() != reflect.Ptr { + return fmt.Errorf("%T is not a pointer", impl) +} + if _, found := registry.implInterfaces[reflect.TypeOf(impl)]; !found { + return fmt.Errorf("%T does not have a registered interface", impl) +} + +return nil +} + +/ RegisterImplementations registers a concrete proto Message which implements +/ the given interface. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. 
+func (registry *interfaceRegistry) + +RegisterImplementations(iface interface{ +}, impls ...proto.Message) { + for _, impl := range impls { + typeURL := "/" + proto.MessageName(impl) + +registry.registerImpl(iface, typeURL, impl) +} +} + +/ RegisterCustomTypeURL registers a concrete type which implements the given +/ interface under `typeURL`. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. +func (registry *interfaceRegistry) + +RegisterCustomTypeURL(iface interface{ +}, typeURL string, impl proto.Message) { + registry.registerImpl(iface, typeURL, impl) +} + +/ registerImpl registers a concrete type which implements the given +/ interface under `typeURL`. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. +func (registry *interfaceRegistry) + +registerImpl(iface interface{ +}, typeURL string, impl proto.Message) { + ityp := reflect.TypeOf(iface).Elem() + +imap, found := registry.interfaceImpls[ityp] + if !found { + imap = map[string]reflect.Type{ +} + +} + implType := reflect.TypeOf(impl) + if !implType.AssignableTo(ityp) { + panic(fmt.Errorf("type %T doesn't actually implement interface %+v", impl, ityp)) +} + + / Check if we already registered something under the given typeURL. It's + / okay to register the same concrete type again, but if we are registering + / a new concrete type under the same typeURL, then we throw an error (here, + / we panic). + foundImplType, found := imap[typeURL] + if found && foundImplType != implType { + panic( + fmt.Errorf( + "concrete type %s has already been registered under typeURL %s, cannot register %s under same typeURL. 
"+ + "This usually means that there are conflicting modules registering different concrete types "+ + "for a same interface implementation", + foundImplType, + typeURL, + implType, + ), + ) +} + +imap[typeURL] = implType + registry.typeURLMap[typeURL] = implType + registry.implInterfaces[implType] = ityp + registry.interfaceImpls[ityp] = imap +} + +func (registry *interfaceRegistry) + +ListAllInterfaces() []string { + interfaceNames := registry.interfaceNames + keys := make([]string, 0, len(interfaceNames)) + for key := range interfaceNames { + keys = append(keys, key) +} + +return keys +} + +func (registry *interfaceRegistry) + +ListImplementations(ifaceName string) []string { + typ, ok := registry.interfaceNames[ifaceName] + if !ok { + return []string{ +} + +} + +impls, ok := registry.interfaceImpls[typ.Elem()] + if !ok { + return []string{ +} + +} + keys := make([]string, 0, len(impls)) + for key := range impls { + keys = append(keys, key) +} + +return keys +} + +func (registry *interfaceRegistry) + +UnpackAny(any *Any, iface interface{ +}) + +error { + / here we gracefully handle the case in which `any` itself is `nil`, which may occur in message decoding + if any == nil { + return nil +} + if any.TypeUrl == "" { + / if TypeUrl is empty return nil because without it we can't actually unpack anything + return nil +} + rv := reflect.ValueOf(iface) + if rv.Kind() != reflect.Ptr { + return fmt.Errorf("UnpackAny expects a pointer") +} + rt := rv.Elem().Type() + cachedValue := any.cachedValue + if cachedValue != nil { + if reflect.TypeOf(cachedValue).AssignableTo(rt) { + rv.Elem().Set(reflect.ValueOf(cachedValue)) + +return nil +} + +} + +imap, found := registry.interfaceImpls[rt] + if !found { + return fmt.Errorf("no registered implementations of type %+v", rt) +} + +typ, found := imap[any.TypeUrl] + if !found { + return fmt.Errorf("no concrete type registered for type URL %s against interface %T", any.TypeUrl, iface) +} + +msg, ok := 
reflect.New(typ.Elem()).Interface().(proto.Message) + if !ok { + return fmt.Errorf("can't proto unmarshal %T", msg) +} + err := proto.Unmarshal(any.Value, msg) + if err != nil { + return err +} + +err = UnpackInterfaces(msg, registry) + if err != nil { + return err +} + +rv.Elem().Set(reflect.ValueOf(msg)) + +any.cachedValue = msg + + return nil +} + +/ Resolve returns the proto message given its typeURL. It works with types +/ registered with RegisterInterface/RegisterImplementations, as well as those +/ registered with RegisterWithCustomTypeURL. +func (registry *interfaceRegistry) + +Resolve(typeURL string) (proto.Message, error) { + typ, found := registry.typeURLMap[typeURL] + if !found { + return nil, fmt.Errorf("unable to resolve type URL %s", typeURL) +} + +msg, ok := reflect.New(typ.Elem()).Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("can't resolve type URL %s", typeURL) +} + +return msg, nil +} + +func (registry *interfaceRegistry) + +SigningContext() *signing.Context { + return registry.signingCtx +} + +func (registry *interfaceRegistry) + +mustEmbedInterfaceRegistry() { +} + +/ UnpackInterfaces is a convenience function that calls UnpackInterfaces +/ on x if x implements UnpackInterfacesMessage +func UnpackInterfaces(x interface{ +}, unpacker AnyUnpacker) + +error { + if msg, ok := x.(UnpackInterfacesMessage); ok { + return msg.UnpackInterfaces(unpacker) +} + +return nil +} + +type failingAddressCodec struct{ +} + +func (f failingAddressCodec) + +StringToBytes(string) ([]byte, error) { + return nil, fmt.Errorf("InterfaceRegistry requires a proper address codec implementation to do address conversion") +} + +func (f failingAddressCodec) + +BytesToString([]byte) (string, error) { + return "", fmt.Errorf("InterfaceRegistry requires a proper address codec implementation to do address conversion") +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/codec/types/interface_registry.go#L28-L75) - In 
addition, an `UnpackInterfaces` phase should be introduced to deserialization to unpack interfaces before they're needed. Protobuf types that contain a protobuf `Any` either directly or via one of their members should implement the `UnpackInterfacesMessage` interface: -``` -type UnpackInterfacesMessage interface { UnpackInterfaces(InterfaceUnpacker) error} +```go +type UnpackInterfacesMessage interface { + UnpackInterfaces(InterfaceUnpacker) + +error +} ``` diff --git a/docs/sdk/v0.50/learn/advanced/events.mdx b/docs/sdk/v0.50/learn/advanced/events.mdx index e22bc01a..6774de8a 100644 --- a/docs/sdk/v0.50/learn/advanced/events.mdx +++ b/docs/sdk/v0.50/learn/advanced/events.mdx @@ -1,141 +1,2329 @@ --- -title: "Events" -description: "Version: v0.50" +title: Events --- - - `Event`s are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallet to track the execution of various messages and index transactions. - +## Synopsis + +`Event`s are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallet to track the execution of various messages and index transactions. - * [Anatomy of a Cosmos SDK application](/v0.50/learn/beginner/app-anatomy) - * [CometBFT Documentation on Events](https://docs.cometbft.com/v0.37/spec/abci/abci++_basic_concepts#events) - +**Pre-requisite Readings** -## Events[​](#events-1 "Direct link to Events") +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.50/learn/beginner/app-anatomy) +- [CometBFT Documentation on Events](https://docs.cometbft.com/v0.37/spec/abci/abci++_basic_concepts#events) -Events are implemented in the Cosmos SDK as an alias of the ABCI `Event` type and take the form of: `{eventType}.{attributeKey}={attributeValue}`. + -proto/tendermint/abci/types.proto +## Events -``` -loading... 
-``` +Events are implemented in the Cosmos SDK as an alias of the ABCI `Event` type and +take the form of: `{eventType}.{attributeKey}={attributeValue}`. -[See full example on GitHub](https://github.com/cometbft/cometbft/blob/v0.37.0/proto/tendermint/abci/types.proto#L334-L343) +```protobuf +// Event allows application developers to attach additional information to +// ResponseBeginBlock, ResponseEndBlock, ResponseCheckTx and ResponseDeliverTx. +// Later, transactions may be queried using these events. +message Event { + string type = 1; + repeated EventAttribute attributes = 2 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "attributes,omitempty" + ]; +} +``` An Event contains: -* A `type` to categorize the Event at a high-level; for example, the Cosmos SDK uses the `"message"` type to filter Events by `Msg`s. -* A list of `attributes` are key-value pairs that give more information about the categorized Event. For example, for the `"message"` type, we can filter Events by key-value pairs using `message.action={some_action}`, `message.module={some_module}` or `message.sender={some_sender}`. -* A `msg_index` to identify which messages relate to the same transaction +- A `type` to categorize the Event at a high-level; for example, the Cosmos SDK uses the `"message"` type to filter Events by `Msg`s. +- A list of `attributes` are key-value pairs that give more information about the categorized Event. For example, for the `"message"` type, we can filter Events by key-value pairs using `message.action={some_action}`, `message.module={some_module}` or `message.sender={some_sender}`. +- A `msg_index` to identify which messages relate to the same transaction - - To parse the attribute values as strings, make sure to add `'` (single quotes) around each attribute value. - + + To parse the attribute values as strings, make sure to add `'` (single quotes) + around each attribute value. 
+

-*Typed Events* are Protobuf-defined [messages](/v0.50/build/architecture/adr-032-typed-events) used by the Cosmos SDK for emitting and querying Events. They are defined in a `event.proto` file, on a **per-module basis** and are read as `proto.Message`. *Legacy Events* are defined on a **per-module basis** in the module's `/types/events.go` file. They are triggered from the module's Protobuf [`Msg` service](/v0.50/build/building-modules/msg-services) by using the [`EventManager`](#eventmanager).
+_Typed Events_ are Protobuf-defined [messages](/docs/common/pages/adr-comprehensive#adr-032-typed-events) used by the Cosmos SDK
+for emitting and querying Events. They are defined in an `event.proto` file, on a **per-module basis** and are read as `proto.Message`.
+_Legacy Events_ are defined on a **per-module basis** in the module's `/types/events.go` file.
+They are triggered from the module's Protobuf [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services)
+by using the [`EventManager`](#eventmanager).

-In addition, each module documents its events under in the `Events` sections of its specs (x/\{moduleName}/`README.md`).
+In addition, each module documents its events in the `Events` section of its spec (x/`{moduleName}`/`README.md`).
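+The `{eventType}.{attributeKey}={attributeValue}` form described above can be sketched in plain Go. The `Event` and `EventAttribute` structs below are simplified stand-ins for the ABCI types shown earlier, and `queryKeys` is a hypothetical helper (not part of the SDK) that renders the single-quoted query pairs used in the examples that follow:

```go
package main

import "fmt"

// Simplified stand-ins for the ABCI Event/EventAttribute types; the real
// definitions live in the ABCI protobuf and the SDK's types/events.go.
type EventAttribute struct {
	Key, Value string
}

type Event struct {
	Type       string
	Attributes []EventAttribute
}

// queryKeys is a hypothetical helper that renders each attribute in the
// `{eventType}.{attributeKey}={attributeValue}` form, single-quoting the
// value so it is parsed as a string by the query engine.
func queryKeys(e Event) []string {
	out := make([]string, 0, len(e.Attributes))
	for _, attr := range e.Attributes {
		out = append(out, fmt.Sprintf("%s.%s='%s'", e.Type, attr.Key, attr.Value))
	}
	return out
}

func main() {
	ev := Event{
		Type: "message",
		Attributes: []EventAttribute{
			{Key: "action", Value: "/cosmos.bank.v1beta1.Msg/Send"},
			{Key: "module", Value: "bank"},
		},
	}
	for _, q := range queryKeys(ev) {
		fmt.Println(q)
	}
}
```

+Running the sketch prints one query pair per attribute, matching the entries in the examples table below.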
Lastly, Events are returned to the underlying consensus engine in the response of the following ABCI messages:

-* [`BeginBlock`](/v0.50/learn/advanced/baseapp#beginblock)
-* [`EndBlock`](/v0.50/learn/advanced/baseapp#endblock)
-* [`CheckTx`](/v0.50/learn/advanced/baseapp#checktx)
-* [`Transaction Execution`](/v0.50/learn/advanced/baseapp#transactionexecution)
+- [`BeginBlock`](/docs/sdk/v0.50/learn/advanced/baseapp#beginblock)
+- [`EndBlock`](/docs/sdk/v0.50/learn/advanced/baseapp#endblock)
+- [`CheckTx`](/docs/sdk/v0.50/learn/advanced/baseapp#checktx)
+- [`Transaction Execution`](/docs/sdk/v0.50/learn/advanced/baseapp#transactionexecution)

-### Examples[​](#examples "Direct link to Examples")
+### Examples

 The following examples show how to query Events using the Cosmos SDK.

-| Event | Description |
-| ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------- |
-| `tx.height=23` | Query all transactions at height 23 |
-| `message.action='/cosmos.bank.v1beta1.Msg/Send'` | Query all transactions containing a x/bank `Send` [Service `Msg`](/v0.50/build/building-modules/msg-services). Note the `'`s around the value. |
-| `message.module='bank'` | Query all transactions containing messages from the x/bank module. Note the `'`s around the value. |
-| `create_validator.validator='cosmosval1...'` | x/staking-specific Event, see [x/staking SPEC](/v0.50/build/modules/staking). |
+| Event | Description |
+| ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `tx.height=23` | Query all transactions at height 23 |
+| `message.action='/cosmos.bank.v1beta1.Msg/Send'` | Query all transactions containing an x/bank `Send` [Service `Msg`](/docs/sdk/v0.50/documentation/module-system/msg-services).
Note the `'`s around the value. |
+| `message.module='bank'` | Query all transactions containing messages from the x/bank module. Note the `'`s around the value. |
+| `create_validator.validator='cosmosval1...'` | x/staking-specific Event, see [x/staking SPEC](/docs/sdk/v0.50/documentation/module-system/modules/staking/README). |

-## EventManager[​](#eventmanager "Direct link to EventManager")
+## EventManager

-In Cosmos SDK applications, Events are managed by an abstraction called the `EventManager`. Internally, the `EventManager` tracks a list of Events for the entire execution flow of `FinalizeBlock` (i.e. transaction execution, `BeginBlock`, `EndBlock`).
+In Cosmos SDK applications, Events are managed by an abstraction called the `EventManager`.
+Internally, the `EventManager` tracks a list of Events for the entire execution flow of `FinalizeBlock`
+(i.e. transaction execution, `BeginBlock`, `EndBlock`).

-types/events.go
+```go expandable
+package types

-```
-loading...
-```
+import (

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/events.go#L19-L26)
+
+ "encoding/json"
+ "fmt"
+ "reflect"
+ "strings"
+ "golang.org/x/exp/maps"
+ "golang.org/x/exp/slices"

-The `EventManager` comes with a set of useful methods to manage Events. The method that is used most by module and application developers is `EmitTypedEvent` or `EmitEvent` that tracks an Event in the `EventManager`.
+ abci "github.com/cometbft/cometbft/abci/types"
+ "github.com/cosmos/gogoproto/jsonpb"
+ proto "github.com/cosmos/gogoproto/proto"
+ "github.com/cosmos/cosmos-sdk/codec"
+)

-types/events.go
+type EventManagerI interface {
+ Events()
-``` +Events + ABCIEvents() []abci.Event + EmitTypedEvent(tev proto.Message) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/events.go#L53-L62) +error + EmitTypedEvents(tevs ...proto.Message) -Module developers should handle Event emission via the `EventManager#EmitTypedEvent` or `EventManager#EmitEvent` in each message `Handler` and in each `BeginBlock`/`EndBlock` handler. The `EventManager` is accessed via the [`Context`](/v0.50/learn/advanced/context), where Event should be already registered, and emitted like this: +error + EmitEvent(event Event) -**Typed events:** +EmitEvents(events Events) +} + +/ ---------------------------------------------------------------------------- +/ Event Manager +/ ---------------------------------------------------------------------------- + +var _ EventManagerI = (*EventManager)(nil) + +/ EventManager implements a simple wrapper around a slice of Event objects that +/ can be emitted from. +type EventManager struct { + events Events +} + +func NewEventManager() *EventManager { + return &EventManager{ + EmptyEvents() +} +} + +func (em *EventManager) + +Events() + +Events { + return em.events +} + +/ EmitEvent stores a single Event object. +/ Deprecated: Use EmitTypedEvent +func (em *EventManager) + +EmitEvent(event Event) { + em.events = em.events.AppendEvent(event) +} + +/ EmitEvents stores a series of Event objects. +/ Deprecated: Use EmitTypedEvents +func (em *EventManager) + +EmitEvents(events Events) { + em.events = em.events.AppendEvents(events) +} + +/ ABCIEvents returns all stored Event objects as abci.Event objects. 
+func (em EventManager) + +ABCIEvents() []abci.Event { + return em.events.ToABCIEvents() +} + +/ EmitTypedEvent takes typed event and emits converting it into Event +func (em *EventManager) + +EmitTypedEvent(tev proto.Message) + +error { + event, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +em.EmitEvent(event) + +return nil +} + +/ EmitTypedEvents takes series of typed events and emit +func (em *EventManager) + +EmitTypedEvents(tevs ...proto.Message) + +error { + events := make(Events, len(tevs)) + for i, tev := range tevs { + res, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +events[i] = res +} + +em.EmitEvents(events) + +return nil +} + +/ TypedEventToEvent takes typed event and converts to Event object +func TypedEventToEvent(tev proto.Message) (Event, error) { + evtType := proto.MessageName(tev) + +evtJSON, err := codec.ProtoMarshalJSON(tev, nil) + if err != nil { + return Event{ +}, err +} + +var attrMap map[string]json.RawMessage + err = json.Unmarshal(evtJSON, &attrMap) + if err != nil { + return Event{ +}, err +} + + / sort the keys to ensure the order is always the same + keys := maps.Keys(attrMap) + +slices.Sort(keys) + attrs := make([]abci.EventAttribute, 0, len(attrMap)) + for _, k := range keys { + v := attrMap[k] + attrs = append(attrs, abci.EventAttribute{ + Key: k, + Value: string(v), +}) +} + +return Event{ + Type: evtType, + Attributes: attrs, +}, nil +} + +/ ParseTypedEvent converts abci.Event back to a typed event. 
+func ParseTypedEvent(event abci.Event) (proto.Message, error) { + concreteGoType := proto.MessageType(event.Type) + if concreteGoType == nil { + return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type) +} + +var value reflect.Value + if concreteGoType.Kind() == reflect.Ptr { + value = reflect.New(concreteGoType.Elem()) +} + +else { + value = reflect.Zero(concreteGoType) +} + +protoMsg, ok := value.Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("%q does not implement proto.Message", event.Type) +} + attrMap := make(map[string]json.RawMessage) + for _, attr := range event.Attributes { + attrMap[attr.Key] = json.RawMessage(attr.Value) +} + +attrBytes, err := json.Marshal(attrMap) + if err != nil { + return nil, err +} + unmarshaler := jsonpb.Unmarshaler{ + AllowUnknownFields: true +} + if err := unmarshaler.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg); err != nil { + return nil, err +} + +return protoMsg, nil +} + +/ ---------------------------------------------------------------------------- +/ Events +/ ---------------------------------------------------------------------------- + +type ( + / Event is a type alias for an ABCI Event + Event abci.Event + + / Events defines a slice of Event objects + Events []Event +) + +/ NewEvent creates a new Event object with a given type and slice of one or more +/ attributes. +func NewEvent(ty string, attrs ...Attribute) + +Event { + e := Event{ + Type: ty +} + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} -x/group/keeper/msg\_server.go +return e +} +/ NewAttribute returns a new key/value Attribute object. +func NewAttribute(k, v string) + +Attribute { + return Attribute{ + k, v +} +} + +/ EmptyEvents returns an empty slice of events. 
+func EmptyEvents() + +Events { + return make(Events, 0) +} + +func (a Attribute) + +String() + +string { + return fmt.Sprintf("%s: %s", a.Key, a.Value) +} + +/ ToKVPair converts an Attribute object into a CometBFT key/value pair. +func (a Attribute) + +ToKVPair() + +abci.EventAttribute { + return abci.EventAttribute{ + Key: a.Key, + Value: a.Value +} +} + +/ AppendAttributes adds one or more attributes to an Event. +func (e Event) + +AppendAttributes(attrs ...Attribute) + +Event { + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ GetAttribute returns an attribute for a given key present in an event. +/ If the key is not found, the boolean value will be false. +func (e Event) + +GetAttribute(key string) (Attribute, bool) { + for _, attr := range e.Attributes { + if attr.Key == key { + return Attribute{ + Key: attr.Key, + Value: attr.Value +}, true +} + +} + +return Attribute{ +}, false +} + +/ AppendEvent adds an Event to a slice of events. +func (e Events) + +AppendEvent(event Event) + +Events { + return append(e, event) +} + +/ AppendEvents adds a slice of Event objects to an exist slice of Event objects. +func (e Events) + +AppendEvents(events Events) + +Events { + return append(e, events...) +} + +/ ToABCIEvents converts a slice of Event objects to a slice of abci.Event +/ objects. +func (e Events) + +ToABCIEvents() []abci.Event { + res := make([]abci.Event, len(e)) + for i, ev := range e { + res[i] = abci.Event{ + Type: ev.Type, + Attributes: ev.Attributes +} + +} + +return res +} + +/ GetAttributes returns all attributes matching a given key present in events. +/ If the key is not found, the boolean value will be false. 
+func (e Events) + +GetAttributes(key string) ([]Attribute, bool) { + attrs := make([]Attribute, 0) + for _, event := range e { + if attr, found := event.GetAttribute(key); found { + attrs = append(attrs, attr) +} + +} + +return attrs, len(attrs) > 0 +} + +/ Common event types and attribute keys +const ( + EventTypeTx = "tx" + + AttributeKeyAccountSequence = "acc_seq" + AttributeKeySignature = "signature" + AttributeKeyFee = "fee" + AttributeKeyFeePayer = "fee_payer" + + EventTypeMessage = "message" + + AttributeKeyAction = "action" + AttributeKeyModule = "module" + AttributeKeySender = "sender" + AttributeKeyAmount = "amount" +) + +type ( + / StringAttributes defines a slice of StringEvents objects. + StringEvents []StringEvent +) + +func (se StringEvents) + +String() + +string { + var sb strings.Builder + for _, e := range se { + fmt.Fprintf(&sb, "\t\t- %s\n", e.Type) + for _, attr := range e.Attributes { + fmt.Fprintf(&sb, "\t\t\t- %s\n", attr) +} + +} + +return strings.TrimRight(sb.String(), "\n") +} + +/ StringifyEvent converts an Event object to a StringEvent object. +func StringifyEvent(e abci.Event) + +StringEvent { + res := StringEvent{ + Type: e.Type +} + for _, attr := range e.Attributes { + res.Attributes = append( + res.Attributes, + Attribute{ + Key: attr.Key, + Value: attr.Value +}, + ) +} + +return res +} + +/ StringifyEvents converts a slice of Event objects into a slice of StringEvent +/ objects. +func StringifyEvents(events []abci.Event) + +StringEvents { + res := make(StringEvents, 0, len(events)) + for _, e := range events { + res = append(res, StringifyEvent(e)) +} + +return res +} + +/ MarkEventsToIndex returns the set of ABCI events, where each event's attribute +/ has it's index value marked based on the provided set of events to index. 
+func MarkEventsToIndex(events []abci.Event, indexSet map[string]struct{ +}) []abci.Event { + indexAll := len(indexSet) == 0 + updatedEvents := make([]abci.Event, len(events)) + for i, e := range events { + updatedEvent := abci.Event{ + Type: e.Type, + Attributes: make([]abci.EventAttribute, len(e.Attributes)), +} + for j, attr := range e.Attributes { + _, index := indexSet[fmt.Sprintf("%s.%s", e.Type, attr.Key)] + updatedAttr := abci.EventAttribute{ + Key: attr.Key, + Value: attr.Value, + Index: index || indexAll, +} + +updatedEvent.Attributes[j] = updatedAttr +} + +updatedEvents[i] = updatedEvent +} + +return updatedEvents +} ``` -loading... + +The `EventManager` comes with a set of useful methods to manage Events. The method +that is used most by module and application developers is `EmitTypedEvent` or `EmitEvent` that tracks +an Event in the `EventManager`. + +```go expandable +package types + +import ( + + "encoding/json" + "fmt" + "reflect" + "strings" + "golang.org/x/exp/maps" + "golang.org/x/exp/slices" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/gogoproto/jsonpb" + proto "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/codec" +) + +type EventManagerI interface { + Events() + +Events + ABCIEvents() []abci.Event + EmitTypedEvent(tev proto.Message) + +error + EmitTypedEvents(tevs ...proto.Message) + +error + EmitEvent(event Event) + +EmitEvents(events Events) +} + +/ ---------------------------------------------------------------------------- +/ Event Manager +/ ---------------------------------------------------------------------------- + +var _ EventManagerI = (*EventManager)(nil) + +/ EventManager implements a simple wrapper around a slice of Event objects that +/ can be emitted from. 
+type EventManager struct { + events Events +} + +func NewEventManager() *EventManager { + return &EventManager{ + EmptyEvents() +} +} + +func (em *EventManager) + +Events() + +Events { + return em.events +} + +/ EmitEvent stores a single Event object. +/ Deprecated: Use EmitTypedEvent +func (em *EventManager) + +EmitEvent(event Event) { + em.events = em.events.AppendEvent(event) +} + +/ EmitEvents stores a series of Event objects. +/ Deprecated: Use EmitTypedEvents +func (em *EventManager) + +EmitEvents(events Events) { + em.events = em.events.AppendEvents(events) +} + +/ ABCIEvents returns all stored Event objects as abci.Event objects. +func (em EventManager) + +ABCIEvents() []abci.Event { + return em.events.ToABCIEvents() +} + +/ EmitTypedEvent takes typed event and emits converting it into Event +func (em *EventManager) + +EmitTypedEvent(tev proto.Message) + +error { + event, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +em.EmitEvent(event) + +return nil +} + +/ EmitTypedEvents takes series of typed events and emit +func (em *EventManager) + +EmitTypedEvents(tevs ...proto.Message) + +error { + events := make(Events, len(tevs)) + for i, tev := range tevs { + res, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +events[i] = res +} + +em.EmitEvents(events) + +return nil +} + +/ TypedEventToEvent takes typed event and converts to Event object +func TypedEventToEvent(tev proto.Message) (Event, error) { + evtType := proto.MessageName(tev) + +evtJSON, err := codec.ProtoMarshalJSON(tev, nil) + if err != nil { + return Event{ +}, err +} + +var attrMap map[string]json.RawMessage + err = json.Unmarshal(evtJSON, &attrMap) + if err != nil { + return Event{ +}, err +} + + / sort the keys to ensure the order is always the same + keys := maps.Keys(attrMap) + +slices.Sort(keys) + attrs := make([]abci.EventAttribute, 0, len(attrMap)) + for _, k := range keys { + v := attrMap[k] + attrs = append(attrs, abci.EventAttribute{ + Key: k, + 
Value: string(v), +}) +} + +return Event{ + Type: evtType, + Attributes: attrs, +}, nil +} + +/ ParseTypedEvent converts abci.Event back to a typed event. +func ParseTypedEvent(event abci.Event) (proto.Message, error) { + concreteGoType := proto.MessageType(event.Type) + if concreteGoType == nil { + return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type) +} + +var value reflect.Value + if concreteGoType.Kind() == reflect.Ptr { + value = reflect.New(concreteGoType.Elem()) +} + +else { + value = reflect.Zero(concreteGoType) +} + +protoMsg, ok := value.Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("%q does not implement proto.Message", event.Type) +} + attrMap := make(map[string]json.RawMessage) + for _, attr := range event.Attributes { + attrMap[attr.Key] = json.RawMessage(attr.Value) +} + +attrBytes, err := json.Marshal(attrMap) + if err != nil { + return nil, err +} + unmarshaler := jsonpb.Unmarshaler{ + AllowUnknownFields: true +} + if err := unmarshaler.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg); err != nil { + return nil, err +} + +return protoMsg, nil +} + +/ ---------------------------------------------------------------------------- +/ Events +/ ---------------------------------------------------------------------------- + +type ( + / Event is a type alias for an ABCI Event + Event abci.Event + + / Events defines a slice of Event objects + Events []Event +) + +/ NewEvent creates a new Event object with a given type and slice of one or more +/ attributes. +func NewEvent(ty string, attrs ...Attribute) + +Event { + e := Event{ + Type: ty +} + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ NewAttribute returns a new key/value Attribute object. +func NewAttribute(k, v string) + +Attribute { + return Attribute{ + k, v +} +} + +/ EmptyEvents returns an empty slice of events. 
+func EmptyEvents() + +Events { + return make(Events, 0) +} + +func (a Attribute) + +String() + +string { + return fmt.Sprintf("%s: %s", a.Key, a.Value) +} + +/ ToKVPair converts an Attribute object into a CometBFT key/value pair. +func (a Attribute) + +ToKVPair() + +abci.EventAttribute { + return abci.EventAttribute{ + Key: a.Key, + Value: a.Value +} +} + +/ AppendAttributes adds one or more attributes to an Event. +func (e Event) + +AppendAttributes(attrs ...Attribute) + +Event { + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ GetAttribute returns an attribute for a given key present in an event. +/ If the key is not found, the boolean value will be false. +func (e Event) + +GetAttribute(key string) (Attribute, bool) { + for _, attr := range e.Attributes { + if attr.Key == key { + return Attribute{ + Key: attr.Key, + Value: attr.Value +}, true +} + +} + +return Attribute{ +}, false +} + +/ AppendEvent adds an Event to a slice of events. +func (e Events) + +AppendEvent(event Event) + +Events { + return append(e, event) +} + +/ AppendEvents adds a slice of Event objects to an exist slice of Event objects. +func (e Events) + +AppendEvents(events Events) + +Events { + return append(e, events...) +} + +/ ToABCIEvents converts a slice of Event objects to a slice of abci.Event +/ objects. +func (e Events) + +ToABCIEvents() []abci.Event { + res := make([]abci.Event, len(e)) + for i, ev := range e { + res[i] = abci.Event{ + Type: ev.Type, + Attributes: ev.Attributes +} + +} + +return res +} + +/ GetAttributes returns all attributes matching a given key present in events. +/ If the key is not found, the boolean value will be false. 
+func (e Events) + +GetAttributes(key string) ([]Attribute, bool) { + attrs := make([]Attribute, 0) + for _, event := range e { + if attr, found := event.GetAttribute(key); found { + attrs = append(attrs, attr) +} + +} + +return attrs, len(attrs) > 0 +} + +/ Common event types and attribute keys +const ( + EventTypeTx = "tx" + + AttributeKeyAccountSequence = "acc_seq" + AttributeKeySignature = "signature" + AttributeKeyFee = "fee" + AttributeKeyFeePayer = "fee_payer" + + EventTypeMessage = "message" + + AttributeKeyAction = "action" + AttributeKeyModule = "module" + AttributeKeySender = "sender" + AttributeKeyAmount = "amount" +) + +type ( + / StringAttributes defines a slice of StringEvents objects. + StringEvents []StringEvent +) + +func (se StringEvents) + +String() + +string { + var sb strings.Builder + for _, e := range se { + fmt.Fprintf(&sb, "\t\t- %s\n", e.Type) + for _, attr := range e.Attributes { + fmt.Fprintf(&sb, "\t\t\t- %s\n", attr) +} + +} + +return strings.TrimRight(sb.String(), "\n") +} + +/ StringifyEvent converts an Event object to a StringEvent object. +func StringifyEvent(e abci.Event) + +StringEvent { + res := StringEvent{ + Type: e.Type +} + for _, attr := range e.Attributes { + res.Attributes = append( + res.Attributes, + Attribute{ + Key: attr.Key, + Value: attr.Value +}, + ) +} + +return res +} + +/ StringifyEvents converts a slice of Event objects into a slice of StringEvent +/ objects. +func StringifyEvents(events []abci.Event) + +StringEvents { + res := make(StringEvents, 0, len(events)) + for _, e := range events { + res = append(res, StringifyEvent(e)) +} + +return res +} + +/ MarkEventsToIndex returns the set of ABCI events, where each event's attribute +/ has it's index value marked based on the provided set of events to index. 
+func MarkEventsToIndex(events []abci.Event, indexSet map[string]struct{ +}) []abci.Event { + indexAll := len(indexSet) == 0 + updatedEvents := make([]abci.Event, len(events)) + for i, e := range events { + updatedEvent := abci.Event{ + Type: e.Type, + Attributes: make([]abci.EventAttribute, len(e.Attributes)), +} + for j, attr := range e.Attributes { + _, index := indexSet[fmt.Sprintf("%s.%s", e.Type, attr.Key)] + updatedAttr := abci.EventAttribute{ + Key: attr.Key, + Value: attr.Value, + Index: index || indexAll, +} + +updatedEvent.Attributes[j] = updatedAttr +} + +updatedEvents[i] = updatedEvent +} + +return updatedEvents +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/group/keeper/msg_server.go#L95-L97) +Module developers should handle Event emission via the `EventManager#EmitTypedEvent` or `EventManager#EmitEvent` in each message +`Handler` and in each `BeginBlock`/`EndBlock` handler. The `EventManager` is accessed via +the [`Context`](/docs/sdk/v0.50/learn/advanced/context), where Event should be already registered, and emitted like this: -**Legacy events:** +**Typed events:** + +```go expandable +package keeper + +import ( + + "bytes" + "context" + "encoding/binary" + "encoding/json" + "fmt" + "strings" + + errorsmod "cosmossdk.io/errors" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/errors" + "github.com/cosmos/cosmos-sdk/x/group/internal/math" + "github.com/cosmos/cosmos-sdk/x/group/internal/orm" + + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +var _ group.MsgServer = Keeper{ +} + +/ TODO: Revisit this once we have proper gas fee framework. 
+/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(20) + +func (k Keeper) + +CreateGroup(goCtx context.Context, msg *group.MsgCreateGroup) (*group.MsgCreateGroupResponse, error) { + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid admin address: %s", msg.Admin) +} + if err := k.validateMembers(msg.Members); err != nil { + return nil, errorsmod.Wrap(err, "members") +} + if err := k.assertMetadataLength(msg.Metadata, "group metadata"); err != nil { + return nil, err +} + totalWeight := math.NewDecFromInt64(0) + for _, m := range msg.Members { + if err := k.assertMetadataLength(m.Metadata, "member metadata"); err != nil { + return nil, err +} + + / Members of a group must have a positive weight. + / NOTE: group member with zero weight are only allowed when updating group members. + / If the member has a zero weight, it will be removed from the group. + weight, err := math.NewPositiveDecFromString(m.Weight) + if err != nil { + return nil, err +} + + / Adding up members weights to compute group total weight. + totalWeight, err = totalWeight.Add(weight) + if err != nil { + return nil, err +} + +} + + / Create a new group in the groupTable. + ctx := sdk.UnwrapSDKContext(goCtx) + groupInfo := &group.GroupInfo{ + Id: k.groupTable.Sequence().PeekNextVal(ctx.KVStore(k.key)), + Admin: msg.Admin, + Metadata: msg.Metadata, + Version: 1, + TotalWeight: totalWeight.String(), + CreatedAt: ctx.BlockTime(), +} + +groupID, err := k.groupTable.Create(ctx.KVStore(k.key), groupInfo) + if err != nil { + return nil, errorsmod.Wrap(err, "could not create group") +} + + / Create new group members in the groupMemberTable. 
+ for i, m := range msg.Members { + err := k.groupMemberTable.Create(ctx.KVStore(k.key), &group.GroupMember{ + GroupId: groupID, + Member: &group.Member{ + Address: m.Address, + Weight: m.Weight, + Metadata: m.Metadata, + AddedAt: ctx.BlockTime(), +}, +}) + if err != nil { + return nil, errorsmod.Wrapf(err, "could not store member %d", i) +} + +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventCreateGroup{ + GroupId: groupID +}); err != nil { + return nil, err +} + +return &group.MsgCreateGroupResponse{ + GroupId: groupID +}, nil +} + +func (k Keeper) + +UpdateGroupMembers(goCtx context.Context, msg *group.MsgUpdateGroupMembers) (*group.MsgUpdateGroupMembersResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if len(msg.MemberUpdates) == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "member updates") +} + if err := k.validateMembers(msg.MemberUpdates); err != nil { + return nil, errorsmod.Wrap(err, "members") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + totalWeight, err := math.NewNonNegativeDecFromString(g.TotalWeight) + if err != nil { + return errorsmod.Wrap(err, "group total weight") +} + for _, member := range msg.MemberUpdates { + if err := k.assertMetadataLength(member.Metadata, "group member metadata"); err != nil { + return err +} + groupMember := group.GroupMember{ + GroupId: msg.GroupId, + Member: &group.Member{ + Address: member.Address, + Weight: member.Weight, + Metadata: member.Metadata, +}, +} + + / Checking if the group member is already part of the group + var found bool + var prevGroupMember group.GroupMember + switch err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), orm.PrimaryKey(&groupMember), &prevGroupMember); { + case err == nil: + found = true + case sdkerrors.ErrNotFound.Is(err): + found = false + default: + return errorsmod.Wrap(err, "get group member") +} + +newMemberWeight, err := 
math.NewNonNegativeDecFromString(groupMember.Member.Weight) + if err != nil { + return err +} + + / Handle delete for members with zero weight. + if newMemberWeight.IsZero() { + / We can't delete a group member that doesn't already exist. + if !found { + return errorsmod.Wrap(sdkerrors.ErrNotFound, "unknown member") +} + +previousMemberWeight, err := math.NewPositiveDecFromString(prevGroupMember.Member.Weight) + if err != nil { + return err +} + + / Subtract the weight of the group member to delete from the group total weight. + totalWeight, err = math.SubNonNegative(totalWeight, previousMemberWeight) + if err != nil { + return err +} + + / Delete group member in the groupMemberTable. + if err := k.groupMemberTable.Delete(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "delete member") +} + +continue +} + / If group member already exists, handle update + if found { + previousMemberWeight, err := math.NewPositiveDecFromString(prevGroupMember.Member.Weight) + if err != nil { + return err +} + / Subtract previous weight from the group total weight. + totalWeight, err = math.SubNonNegative(totalWeight, previousMemberWeight) + if err != nil { + return err +} + / Save updated group member in the groupMemberTable. + groupMember.Member.AddedAt = prevGroupMember.Member.AddedAt + if err := k.groupMemberTable.Update(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "add member") +} + +} + +else { / else handle create. + groupMember.Member.AddedAt = ctx.BlockTime() + if err := k.groupMemberTable.Create(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "add member") +} + +} + / In both cases (handle + update), we need to add the new member's weight to the group total weight. + totalWeight, err = totalWeight.Add(newMemberWeight) + if err != nil { + return err +} + +} + / Update group in the groupTable. 
+ g.TotalWeight = totalWeight.String() + +g.Version++ + if err := k.validateDecisionPolicies(ctx, *g); err != nil { + return err +} + +return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "members updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupMembersResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupAdmin(goCtx context.Context, msg *group.MsgUpdateGroupAdmin) (*group.MsgUpdateGroupAdminResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if strings.EqualFold(msg.Admin, msg.NewAdmin) { + return nil, errorsmod.Wrap(errors.ErrInvalid, "new and old admin are the same") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "admin address") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.NewAdmin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "new admin address") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + g.Admin = msg.NewAdmin + g.Version++ + + return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "admin updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupAdminResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupMetadata(goCtx context.Context, msg *group.MsgUpdateGroupMetadata) (*group.MsgUpdateGroupMetadataResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if err := k.assertMetadataLength(msg.Metadata, "group metadata"); err != nil { + return nil, err +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "admin address") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g 
*group.GroupInfo) + +error { + g.Metadata = msg.Metadata + g.Version++ + return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "metadata updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupMetadataResponse{ +}, nil +} + +func (k Keeper) + +CreateGroupWithPolicy(ctx context.Context, msg *group.MsgCreateGroupWithPolicy) (*group.MsgCreateGroupWithPolicyResponse, error) { + / NOTE: admin, and group message validation is performed in the CreateGroup method + groupRes, err := k.CreateGroup(ctx, &group.MsgCreateGroup{ + Admin: msg.Admin, + Members: msg.Members, + Metadata: msg.GroupMetadata, +}) + if err != nil { + return nil, errorsmod.Wrap(err, "group response") +} + groupID := groupRes.GroupId + + / NOTE: group policy message validation is performed in the CreateGroupPolicy method + groupPolicyRes, err := k.CreateGroupPolicy(ctx, &group.MsgCreateGroupPolicy{ + Admin: msg.Admin, + GroupId: groupID, + Metadata: msg.GroupPolicyMetadata, + DecisionPolicy: msg.DecisionPolicy, +}) + if err != nil { + return nil, errorsmod.Wrap(err, "group policy response") +} + if msg.GroupPolicyAsAdmin { + updateAdminReq := &group.MsgUpdateGroupAdmin{ + GroupId: groupID, + Admin: msg.Admin, + NewAdmin: groupPolicyRes.Address, +} + _, err = k.UpdateGroupAdmin(ctx, updateAdminReq) + if err != nil { + return nil, err +} + updatePolicyAddressReq := &group.MsgUpdateGroupPolicyAdmin{ + Admin: msg.Admin, + GroupPolicyAddress: groupPolicyRes.Address, + NewAdmin: groupPolicyRes.Address, +} + _, err = k.UpdateGroupPolicyAdmin(ctx, updatePolicyAddressReq) + if err != nil { + return nil, err +} + +} + +return &group.MsgCreateGroupWithPolicyResponse{ + GroupId: groupID, + GroupPolicyAddress: groupPolicyRes.Address +}, nil +} + +func (k Keeper) + +CreateGroupPolicy(goCtx context.Context, msg *group.MsgCreateGroupPolicy) (*group.MsgCreateGroupPolicyResponse, error) { + if msg.GroupId == 0 { + return 
nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if err := k.assertMetadataLength(msg.GetMetadata(), "group policy metadata"); err != nil { + return nil, err +} + +policy, err := msg.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "request decision policy") +} + if err := policy.ValidateBasic(); err != nil { + return nil, errorsmod.Wrap(err, "decision policy") +} + +reqGroupAdmin, err := k.accKeeper.AddressCodec().StringToBytes(msg.GetAdmin()) + if err != nil { + return nil, errorsmod.Wrap(err, "request admin") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +groupInfo, err := k.getGroupInfo(ctx, msg.GetGroupID()) + if err != nil { + return nil, err +} + +groupAdmin, err := k.accKeeper.AddressCodec().StringToBytes(groupInfo.Admin) + if err != nil { + return nil, errorsmod.Wrap(err, "group admin") +} + + / Only current group admin is authorized to create a group policy for this + if !bytes.Equal(groupAdmin, reqGroupAdmin) { + return nil, errorsmod.Wrap(sdkerrors.ErrUnauthorized, "not group admin") +} + if err := policy.Validate(groupInfo, k.config); err != nil { + return nil, err +} + + / Generate account address of group policy. + var accountAddr sdk.AccAddress + / loop here in the rare case where a ADR-028-derived address creates a + / collision with an existing address. + for { + nextAccVal := k.groupPolicySeq.NextVal(ctx.KVStore(k.key)) + derivationKey := make([]byte, 8) + +binary.BigEndian.PutUint64(derivationKey, nextAccVal) + +ac, err := authtypes.NewModuleCredential(group.ModuleName, []byte{ + GroupPolicyTablePrefix +}, derivationKey) + if err != nil { + return nil, err +} + +accountAddr = sdk.AccAddress(ac.Address()) + if k.accKeeper.GetAccount(ctx, accountAddr) != nil { + / handle a rare collision, in which case we just go on to the + / next sequence value and derive a new address. 
+ continue +} + + / group policy accounts are unclaimable base accounts + account, err := authtypes.NewBaseAccountWithPubKey(ac) + if err != nil { + return nil, errorsmod.Wrap(err, "could not create group policy account") +} + acc := k.accKeeper.NewAccount(ctx, account) + +k.accKeeper.SetAccount(ctx, acc) + +break +} + +groupPolicy, err := group.NewGroupPolicyInfo( + accountAddr, + msg.GetGroupID(), + reqGroupAdmin, + msg.GetMetadata(), + 1, + policy, + ctx.BlockTime(), + ) + if err != nil { + return nil, err +} + if err := k.groupPolicyTable.Create(ctx.KVStore(k.key), &groupPolicy); err != nil { + return nil, errorsmod.Wrap(err, "could not create group policy") +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventCreateGroupPolicy{ + Address: accountAddr.String() +}); err != nil { + return nil, err +} + +return &group.MsgCreateGroupPolicyResponse{ + Address: accountAddr.String() +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyAdmin(goCtx context.Context, msg *group.MsgUpdateGroupPolicyAdmin) (*group.MsgUpdateGroupPolicyAdminResponse, error) { + if strings.EqualFold(msg.Admin, msg.NewAdmin) { + return nil, errorsmod.Wrap(errors.ErrInvalid, "new and old admin are same") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupPolicy.Admin = msg.NewAdmin + groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err := k.doUpdateGroupPolicy(ctx, msg.GroupPolicyAddress, msg.Admin, action, "group policy admin updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyAdminResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyDecisionPolicy(goCtx context.Context, msg *group.MsgUpdateGroupPolicyDecisionPolicy) (*group.MsgUpdateGroupPolicyDecisionPolicyResponse, error) { + policy, err := msg.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "decision policy") +} + if err := policy.ValidateBasic(); err != nil { + return 
nil, errorsmod.Wrap(err, "decision policy") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupInfo, err := k.getGroupInfo(ctx, groupPolicy.GroupId) + if err != nil { + return err +} + +err = policy.Validate(groupInfo, k.config) + if err != nil { + return err +} + +err = groupPolicy.SetDecisionPolicy(policy) + if err != nil { + return err +} + +groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err = k.doUpdateGroupPolicy(ctx, msg.GroupPolicyAddress, msg.Admin, action, "group policy's decision policy updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyDecisionPolicyResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyMetadata(goCtx context.Context, msg *group.MsgUpdateGroupPolicyMetadata) (*group.MsgUpdateGroupPolicyMetadataResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + metadata := msg.GetMetadata() + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupPolicy.Metadata = metadata + groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err := k.assertMetadataLength(metadata, "group policy metadata"); err != nil { + return nil, err +} + err := k.doUpdateGroupPolicy(ctx, msg.GroupPolicyAddress, msg.Admin, action, "group policy metadata updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyMetadataResponse{ +}, nil +} + +func (k Keeper) + +SubmitProposal(goCtx context.Context, msg *group.MsgSubmitProposal) (*group.MsgSubmitProposalResponse, error) { + if len(msg.Proposers) == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposers") +} + if err := k.validateProposers(msg.Proposers); err != nil { + return nil, err +} + +groupPolicyAddr, err := k.accKeeper.AddressCodec().StringToBytes(msg.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrap(err, "request account address of group policy") +} + if err := 
k.assertMetadataLength(msg.Title, "proposal Title"); err != nil { + return nil, err +} + if err := k.assertMetadataLength(msg.Summary, "proposal summary"); err != nil { + return nil, err +} + if err := k.assertMetadataLength(msg.Metadata, "metadata"); err != nil { + return nil, err +} + + / verify that if present, the metadata title and summary equals the proposal title and summary + if len(msg.Metadata) != 0 { + proposalMetadata := govtypes.ProposalMetadata{ +} + if err := json.Unmarshal([]byte(msg.Metadata), &proposalMetadata); err == nil { + if proposalMetadata.Title != msg.Title { + return nil, fmt.Errorf("metadata title '%s' must equal proposal title '%s'", proposalMetadata.Title, msg.Title) +} + if proposalMetadata.Summary != msg.Summary { + return nil, fmt.Errorf("metadata summary '%s' must equal proposal summary '%s'", proposalMetadata.Summary, msg.Summary) +} + +} + + / if we can't unmarshal the metadata, this means the client didn't use the recommended metadata format + / nothing can be done here, and this is still a valid case, so we ignore the error +} + +msgs, err := msg.GetMsgs() + if err != nil { + return nil, errorsmod.Wrap(err, "request msgs") +} + if err := validateMsgs(msgs); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + +policyAcc, err := k.getGroupPolicyInfo(ctx, msg.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrapf(err, "load group policy: %s", msg.GroupPolicyAddress) +} + +groupInfo, err := k.getGroupInfo(ctx, policyAcc.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "get group by groupId of group policy") +} + + / Only members of the group can submit a new proposal. 
+ for _, proposer := range msg.Proposers { + if !k.groupMemberTable.Has(ctx.KVStore(k.key), orm.PrimaryKey(&group.GroupMember{ + GroupId: groupInfo.Id, + Member: &group.Member{ + Address: proposer +}})) { + return nil, errorsmod.Wrapf(errors.ErrUnauthorized, "not in group: %s", proposer) +} + +} + + / Check that if the messages require signers, they are all equal to the given account address of group policy. + if err := ensureMsgAuthZ(msgs, groupPolicyAddr, k.cdc); err != nil { + return nil, err +} + +policy, err := policyAcc.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "proposal group policy decision policy") +} + + / Prevent proposal that cannot succeed. + if err = policy.Validate(groupInfo, k.config); err != nil { + return nil, err +} + m := &group.Proposal{ + Id: k.proposalTable.Sequence().PeekNextVal(ctx.KVStore(k.key)), + GroupPolicyAddress: msg.GroupPolicyAddress, + Metadata: msg.Metadata, + Proposers: msg.Proposers, + SubmitTime: ctx.BlockTime(), + GroupVersion: groupInfo.Version, + GroupPolicyVersion: policyAcc.Version, + Status: group.PROPOSAL_STATUS_SUBMITTED, + ExecutorResult: group.PROPOSAL_EXECUTOR_RESULT_NOT_RUN, + VotingPeriodEnd: ctx.BlockTime().Add(policy.GetVotingPeriod()), / The voting window begins as soon as the proposal is submitted. 
+ FinalTallyResult: group.DefaultTallyResult(), + Title: msg.Title, + Summary: msg.Summary, +} + if err := m.SetMsgs(msgs); err != nil { + return nil, errorsmod.Wrap(err, "create proposal") +} + +id, err := k.proposalTable.Create(ctx.KVStore(k.key), m) + if err != nil { + return nil, errorsmod.Wrap(err, "create proposal") +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventSubmitProposal{ + ProposalId: id +}); err != nil { + return nil, err +} + + / Try to execute proposal immediately + if msg.Exec == group.Exec_EXEC_TRY { + / Consider proposers as Yes votes + for _, proposer := range msg.Proposers { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "vote on proposal") + _, err = k.Vote(ctx, &group.MsgVote{ + ProposalId: id, + Voter: proposer, + Option: group.VOTE_OPTION_YES, +}) + if err != nil { + return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, errorsmod.Wrapf(err, "the proposal was created but failed on vote for voter %s", proposer) +} + +} + + / Then try to execute the proposal + _, err = k.Exec(ctx, &group.MsgExec{ + ProposalId: id, + / We consider the first proposer as the MsgExecRequest signer + / but that could be revisited (eg using the group policy) + +Executor: msg.Proposers[0], +}) + if err != nil { + return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, errorsmod.Wrap(err, "the proposal was created but failed on exec") +} + +} + +return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, nil +} + +func (k Keeper) + +WithdrawProposal(goCtx context.Context, msg *group.MsgWithdrawProposal) (*group.MsgWithdrawProposalResponse, error) { + if msg.ProposalId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Address); err != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid group policy admin / proposer address: %s", msg.Address) +} + ctx := sdk.UnwrapSDKContext(goCtx) + +proposal, err := k.getProposal(ctx, msg.ProposalId) + 
if err != nil {
+ return nil, err
+}
+
+ / Ensure the proposal can be withdrawn.
+ if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED {
+ return nil, errorsmod.Wrapf(errors.ErrInvalid, "cannot withdraw a proposal with the status of %s", proposal.Status.String())
+}
+
+var policyInfo group.GroupPolicyInfo
+ if policyInfo, err = k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress); err != nil {
+ return nil, errorsmod.Wrap(err, "load group policy")
+}
+
+ / Check that the address is the group policy admin or is in the proposers list.
+ if msg.Address != policyInfo.Admin && !isProposer(proposal, msg.Address) {
+ return nil, errorsmod.Wrapf(errors.ErrUnauthorized, "given address is neither group policy admin nor in proposers: %s", msg.Address)
+}
+
+proposal.Status = group.PROPOSAL_STATUS_WITHDRAWN
+ if err := k.proposalTable.Update(ctx.KVStore(k.key), msg.ProposalId, &proposal); err != nil {
+ return nil, err
+}
+ if err := ctx.EventManager().EmitTypedEvent(&group.EventWithdrawProposal{
+ ProposalId: msg.ProposalId
+}); err != nil {
+ return nil, err
+}
+
+return &group.MsgWithdrawProposalResponse{
+}, nil
+}
+
+func (k Keeper)
+
+Vote(goCtx context.Context, msg *group.MsgVote) (*group.MsgVoteResponse, error) {
+ if msg.ProposalId == 0 {
+ return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id")
+}
+
+ / verify vote options
+ if msg.Option == group.VOTE_OPTION_UNSPECIFIED {
+ return nil, errorsmod.Wrap(errors.ErrEmpty, "vote option")
+}
+ if _, ok := group.VoteOption_name[int32(msg.Option)]; !ok {
+ return nil, errorsmod.Wrap(errors.ErrInvalid, "vote option")
+}
+ if err := k.assertMetadataLength(msg.Metadata, "metadata"); err != nil {
+ return nil, err
+}
+ if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Voter); err != nil {
+ return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid voter address: %s", msg.Voter)
+}
+ ctx := sdk.UnwrapSDKContext(goCtx)
+
+proposal, err := k.getProposal(ctx, msg.ProposalId)
+ if err != nil {
+ return nil, err
+}
+
+ / Ensure that we can still accept votes for this proposal.
+ if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED {
+ return nil, errorsmod.Wrap(errors.ErrInvalid, "proposal not open for voting")
+}
+ if ctx.BlockTime().After(proposal.VotingPeriodEnd) {
+ return nil, errorsmod.Wrap(errors.ErrExpired, "voting period has ended already")
+}
+
+policyInfo, err := k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress)
+ if err != nil {
+ return nil, errorsmod.Wrap(err, "load group policy")
+}
+
+groupInfo, err := k.getGroupInfo(ctx, policyInfo.GroupId)
+ if err != nil {
+ return nil, err
+}
+
+ / Count and store votes.
+ voter := group.GroupMember{
+ GroupId: groupInfo.Id,
+ Member: &group.Member{
+ Address: msg.Voter
+}}
+ if err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), orm.PrimaryKey(&voter), &voter); err != nil {
+ return nil, errorsmod.Wrapf(err, "voter address: %s", msg.Voter)
+}
+ newVote := group.Vote{
+ ProposalId: msg.ProposalId,
+ Voter: msg.Voter,
+ Option: msg.Option,
+ Metadata: msg.Metadata,
+ SubmitTime: ctx.BlockTime(),
+}
+
+ / The ORM will return an error if the vote already exists,
+ / making sure that a voter hasn't already voted.
+ if err := k.voteTable.Create(ctx.KVStore(k.key), &newVote); err != nil {
+ return nil, errorsmod.Wrap(err, "store vote")
+}
+ if err := ctx.EventManager().EmitTypedEvent(&group.EventVote{
+ ProposalId: msg.ProposalId
+}); err != nil {
+ return nil, err
+}
+
+ / Try to execute proposal immediately
+ if msg.Exec == group.Exec_EXEC_TRY {
+ _, err = k.Exec(ctx, &group.MsgExec{
+ ProposalId: msg.ProposalId,
+ Executor: msg.Voter
+})
+ if err != nil {
+ return nil, err
+}
+
+}
+
+return &group.MsgVoteResponse{
+}, nil
+}
+
+/ doTallyAndUpdate performs a tally, and, if the tally result is final, then:
+/ - updates the proposal's `Status` and `FinalTallyResult` fields,
+/ - prunes all the votes.
+func (k Keeper) + +doTallyAndUpdate(ctx sdk.Context, p *group.Proposal, groupInfo group.GroupInfo, policyInfo group.GroupPolicyInfo) + +error { + policy, err := policyInfo.GetDecisionPolicy() + if err != nil { + return err +} + +tallyResult, err := k.Tally(ctx, *p, policyInfo.GroupId) + if err != nil { + return err +} + +result, err := policy.Allow(tallyResult, groupInfo.TotalWeight) + if err != nil { + return errorsmod.Wrap(err, "policy allow") +} + + / If the result was final (i.e. enough votes to pass) + +or if the voting + / period ended, then we consider the proposal as final. + if isFinal := result.Final || ctx.BlockTime().After(p.VotingPeriodEnd); isFinal { + if err := k.pruneVotes(ctx, p.Id); err != nil { + return err +} + +p.FinalTallyResult = tallyResult + if result.Allow { + p.Status = group.PROPOSAL_STATUS_ACCEPTED +} + +else { + p.Status = group.PROPOSAL_STATUS_REJECTED +} + + +} + +return nil +} + +/ Exec executes the messages from a proposal. +func (k Keeper) + +Exec(goCtx context.Context, msg *group.MsgExec) (*group.MsgExecResponse, error) { + if msg.ProposalId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +proposal, err := k.getProposal(ctx, msg.ProposalId) + if err != nil { + return nil, err +} + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED && proposal.Status != group.PROPOSAL_STATUS_ACCEPTED { + return nil, errorsmod.Wrapf(errors.ErrInvalid, "not possible to exec with proposal status %s", proposal.Status.String()) +} + +policyInfo, err := k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrap(err, "load group policy") +} + + / If proposal is still in SUBMITTED phase, it means that the voting period + / didn't end yet, and tallying hasn't been done. In this case, we need to + / tally first. 
+ if proposal.Status == group.PROPOSAL_STATUS_SUBMITTED { + groupInfo, err := k.getGroupInfo(ctx, policyInfo.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "load group") +} + if err = k.doTallyAndUpdate(ctx, &proposal, groupInfo, policyInfo); err != nil { + return nil, err +} + +} + + / Execute proposal payload. + var logs string + if proposal.Status == group.PROPOSAL_STATUS_ACCEPTED && proposal.ExecutorResult != group.PROPOSAL_EXECUTOR_RESULT_SUCCESS { + / Caching context so that we don't update the store in case of failure. + cacheCtx, flush := ctx.CacheContext() + +addr, err := k.accKeeper.AddressCodec().StringToBytes(policyInfo.Address) + if err != nil { + return nil, err +} + decisionPolicy := policyInfo.DecisionPolicy.GetCachedValue().(group.DecisionPolicy) + if results, err := k.doExecuteMsgs(cacheCtx, k.router, proposal, addr, decisionPolicy); err != nil { + proposal.ExecutorResult = group.PROPOSAL_EXECUTOR_RESULT_FAILURE + logs = fmt.Sprintf("proposal execution failed on proposal %d, because of error %s", proposal.Id, err.Error()) + +k.Logger(ctx).Info("proposal execution failed", "cause", err, "proposalID", proposal.Id) +} + +else { + proposal.ExecutorResult = group.PROPOSAL_EXECUTOR_RESULT_SUCCESS + flush() + for _, res := range results { + / NOTE: The sdk msg handler creates a new EventManager, so events must be correctly propagated back to the current context + ctx.EventManager().EmitEvents(res.GetEvents()) +} + +} + +} + + / Update proposal in proposalTable + / If proposal has successfully run, delete it from state. 
+ if proposal.ExecutorResult == group.PROPOSAL_EXECUTOR_RESULT_SUCCESS { + if err := k.pruneProposal(ctx, proposal.Id); err != nil { + return nil, err +} + + / Emit event for proposal finalized with its result + if err := ctx.EventManager().EmitTypedEvent( + &group.EventProposalPruned{ + ProposalId: proposal.Id, + Status: proposal.Status, + TallyResult: &proposal.FinalTallyResult, +}); err != nil { + return nil, err +} + +} + +else { + store := ctx.KVStore(k.key) + if err := k.proposalTable.Update(store, proposal.Id, &proposal); err != nil { + return nil, err +} + +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventExec{ + ProposalId: proposal.Id, + Logs: logs, + Result: proposal.ExecutorResult, +}); err != nil { + return nil, err +} + +return &group.MsgExecResponse{ + Result: proposal.ExecutorResult, +}, nil +} + +/ LeaveGroup implements the MsgServer/LeaveGroup method. +func (k Keeper) + +LeaveGroup(goCtx context.Context, msg *group.MsgLeaveGroup) (*group.MsgLeaveGroupResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group-id") +} + + _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Address) + if err != nil { + return nil, errorsmod.Wrap(err, "group member") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +groupInfo, err := k.getGroupInfo(ctx, msg.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "group") +} + +groupWeight, err := math.NewNonNegativeDecFromString(groupInfo.TotalWeight) + if err != nil { + return nil, err +} + +gm, err := k.getGroupMember(ctx, &group.GroupMember{ + GroupId: msg.GroupId, + Member: &group.Member{ + Address: msg.Address +}, +}) + if err != nil { + return nil, err +} + +memberWeight, err := math.NewPositiveDecFromString(gm.Member.Weight) + if err != nil { + return nil, err +} + +updatedWeight, err := math.SubNonNegative(groupWeight, memberWeight) + if err != nil { + return nil, err +} + + / delete group member in the groupMemberTable. 
+ if err := k.groupMemberTable.Delete(ctx.KVStore(k.key), gm); err != nil { + return nil, errorsmod.Wrap(err, "group member") +} + + / update group weight + groupInfo.TotalWeight = updatedWeight.String() + +groupInfo.Version++ + if err := k.validateDecisionPolicies(ctx, groupInfo); err != nil { + return nil, err +} + if err := k.groupTable.Update(ctx.KVStore(k.key), groupInfo.Id, &groupInfo); err != nil { + return nil, err +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventLeaveGroup{ + GroupId: msg.GroupId, + Address: msg.Address, +}); err != nil { + return nil, err +} + +return &group.MsgLeaveGroupResponse{ +}, nil +} + +func (k Keeper) + +getGroupMember(ctx sdk.Context, member *group.GroupMember) (*group.GroupMember, error) { + var groupMember group.GroupMember + switch err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), + orm.PrimaryKey(member), &groupMember); { + case err == nil: + break + case sdkerrors.ErrNotFound.Is(err): + return nil, sdkerrors.ErrNotFound.Wrapf("%s is not part of group %d", member.Member.Address, member.GroupId) + +default: + return nil, err +} + +return &groupMember, nil +} + +type ( + actionFn func(m *group.GroupInfo) + +error + groupPolicyActionFn func(m *group.GroupPolicyInfo) + +error +) + +/ doUpdateGroupPolicy first makes sure that the group policy admin initiated the group policy update, +/ before performing the group policy update and emitting an event. 
+func (k Keeper) + +doUpdateGroupPolicy(ctx sdk.Context, reqGroupPolicy, reqAdmin string, action groupPolicyActionFn, note string) + +error { + groupPolicyAddr, err := k.accKeeper.AddressCodec().StringToBytes(reqGroupPolicy) + if err != nil { + return errorsmod.Wrap(err, "group policy address") +} + + _, err = k.accKeeper.AddressCodec().StringToBytes(reqAdmin) + if err != nil { + return errorsmod.Wrap(err, "group policy admin") +} + +groupPolicyInfo, err := k.getGroupPolicyInfo(ctx, reqGroupPolicy) + if err != nil { + return errorsmod.Wrap(err, "load group policy") +} + + / Only current group policy admin is authorized to update a group policy. + if reqAdmin != groupPolicyInfo.Admin { + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, "not group policy admin") +} + if err := action(&groupPolicyInfo); err != nil { + return errorsmod.Wrap(err, note) +} + if err = k.abortProposals(ctx, groupPolicyAddr); err != nil { + return err +} + if err = ctx.EventManager().EmitTypedEvent(&group.EventUpdateGroupPolicy{ + Address: groupPolicyInfo.Address +}); err != nil { + return err +} + +return nil +} + +/ doUpdateGroup first makes sure that the group admin initiated the group update, +/ before performing the group update and emitting an event. +func (k Keeper) + +doUpdateGroup(ctx sdk.Context, groupID uint64, reqGroupAdmin string, action actionFn, errNote string) + +error { + groupInfo, err := k.getGroupInfo(ctx, groupID) + if err != nil { + return err +} + if !strings.EqualFold(groupInfo.Admin, reqGroupAdmin) { + return errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "not group admin; got %s, expected %s", reqGroupAdmin, groupInfo.Admin) +} + if err := action(&groupInfo); err != nil { + return errorsmod.Wrap(err, errNote) +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventUpdateGroup{ + GroupId: groupID +}); err != nil { + return err +} + +return nil +} + +/ assertMetadataLength returns an error if given metadata length +/ is greater than a pre-defined maxMetadataLen. 
+func (k Keeper)
+
+assertMetadataLength(metadata, description string)
+
+error {
+ if metadata != "" && uint64(len(metadata)) > k.config.MaxMetadataLen {
+ return errorsmod.Wrapf(errors.ErrMaxLimit, description)
+}
+
+return nil
+}
+
+/ validateDecisionPolicies loops through all decision policies from the group,
+/ and calls each of their Validate() method.
+func (k Keeper)
+
+validateDecisionPolicies(ctx sdk.Context, g group.GroupInfo)
+
+error {
+ it, err := k.groupPolicyByGroupIndex.Get(ctx.KVStore(k.key), g.Id)
+ if err != nil {
+ return err
+}
+
+defer it.Close()
+ for {
+ var groupPolicy group.GroupPolicyInfo
+ _, err = it.LoadNext(&groupPolicy)
+ if errors.ErrORMIteratorDone.Is(err) {
+ break
+}
+ if err != nil {
+ return err
+}
+
+err = groupPolicy.DecisionPolicy.GetCachedValue().(group.DecisionPolicy).Validate(g, k.config)
+ if err != nil {
+ return err
+}
+
+}
+
+return nil
+}
+
+/ validateProposers checks that all proposer addresses are valid.
+/ It also verifies that there is no duplicate address.
+func (k Keeper)
+
+validateProposers(proposers []string)
+
+error {
+ index := make(map[string]struct{
+}, len(proposers))
+ for _, proposer := range proposers {
+ if _, exists := index[proposer]; exists {
+ return errorsmod.Wrapf(errors.ErrDuplicate, "address: %s", proposer)
+}
+
+ _, err := k.accKeeper.AddressCodec().StringToBytes(proposer)
+ if err != nil {
+ return errorsmod.Wrapf(err, "proposer address %s", proposer)
+}
+
+index[proposer] = struct{
+}{
+}
+
+}
+
+return nil
+}
+
+/ validateMembers checks that all member addresses are valid.
+/ Additionally, it verifies that there is no duplicate address
+/ and that the member weight is non-negative.
+/ Note: in state, a member's weight MUST be positive. However, in some Msgs,
+/ it's possible to set a zero member weight, for example in
+/ MsgUpdateGroupMembers to denote that we're removing a member.
+/ It returns an error if any of the above conditions is not met.
+func (k Keeper) + +validateMembers(members []group.MemberRequest) + +error { + index := make(map[string]struct{ +}, len(members)) + for _, member := range members { + if _, exists := index[member.Address]; exists { + return errorsmod.Wrapf(errors.ErrDuplicate, "address: %s", member.Address) +} + + _, err := k.accKeeper.AddressCodec().StringToBytes(member.Address) + if err != nil { + return errorsmod.Wrapf(err, "member address %s", member.Address) +} + if _, err := math.NewNonNegativeDecFromString(member.Weight); err != nil { + return errorsmod.Wrap(err, "weight must be non negative") +} + +index[member.Address] = struct{ +}{ +} + +} + +return nil +} + +/ isProposer checks that an address is a proposer of a given proposal. +func isProposer(proposal group.Proposal, address string) + +bool { + for _, proposer := range proposal.Proposers { + if proposer == address { + return true +} + +} + +return false +} + +func validateMsgs(msgs []sdk.Msg) + +error { + for i, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return errorsmod.Wrapf(err, "msg %d", i) +} + +} + +return nil +} ``` -ctx.EventManager().EmitEvent( sdk.NewEvent(eventType, sdk.NewAttribute(attributeKey, attributeValue)),) + +**Legacy events:** + +```go +ctx.EventManager().EmitEvent( + sdk.NewEvent(eventType, sdk.NewAttribute(attributeKey, attributeValue)), +) ``` -Where the `EventManager` is accessed via the [`Context`](/v0.50/learn/advanced/context). +Where the `EventManager` is accessed via the [`Context`](/docs/sdk/v0.50/learn/advanced/context). -See the [`Msg` services](/v0.50/build/building-modules/msg-services) concept doc for a more detailed view on how to typically implement Events and use the `EventManager` in modules. +See the [`Msg` services](/docs/sdk/v0.50/documentation/module-system/msg-services) concept doc for a more detailed +view on how to typically implement Events and use the `EventManager` in modules. 
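To make the `NewEvent`/`NewAttribute` shape above concrete, here is a dependency-free sketch. The `Event` and `Attribute` types below are illustrative stand-ins for the SDK's actual event types, not the SDK API itself; `AttrPairs` shows how an indexed attribute is addressed as `eventType.attributeKey='value'`.

```go
package main

import "fmt"

// Attribute and Event are simplified stand-ins for the SDK's event types,
// shown only to illustrate the eventType/attribute structure.
type Attribute struct {
	Key, Value string
}

type Event struct {
	Type       string
	Attributes []Attribute
}

// NewEvent mimics the shape of sdk.NewEvent(eventType, attributes...).
func NewEvent(eventType string, attrs ...Attribute) Event {
	return Event{Type: eventType, Attributes: attrs}
}

// AttrPairs renders each attribute as "type.key='value'", the form in which
// indexed event attributes are addressed when filtering events on a node.
func (e Event) AttrPairs() []string {
	pairs := make([]string, 0, len(e.Attributes))
	for _, a := range e.Attributes {
		pairs = append(pairs, fmt.Sprintf("%s.%s='%s'", e.Type, a.Key, a.Value))
	}
	return pairs
}

func main() {
	ev := NewEvent("transfer", Attribute{Key: "recipient", Value: "addr1"})
	fmt.Println(ev.AttrPairs()[0]) // transfer.recipient='addr1'
}
```

The same `type.key='value'` form is what event filters expect, which is why attribute keys should be stable, machine-readable identifiers.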
-## Subscribing to Events[​](#subscribing-to-events "Direct link to Subscribing to Events") +## Subscribing to Events You can use CometBFT's [Websocket](https://docs.cometbft.com/v0.37/core/subscription) to subscribe to Events by calling the `subscribe` RPC method: -``` -{ "jsonrpc": "2.0", "method": "subscribe", "id": "0", "params": { "query": "tm.event='eventCategory' AND eventType.eventAttribute='attributeValue'" }} +```json +{ + "jsonrpc": "2.0", + "method": "subscribe", + "id": "0", + "params": { + "query": "tm.event='eventCategory' AND eventType.eventAttribute='attributeValue'" + } +} ``` The main `eventCategory` you can subscribe to are: -* `NewBlock`: Contains Events triggered during `BeginBlock` and `EndBlock`. -* `Tx`: Contains Events triggered during `DeliverTx` (i.e. transaction processing). -* `ValidatorSetUpdates`: Contains validator set updates for the block. +- `NewBlock`: Contains Events triggered during `BeginBlock` and `EndBlock`. +- `Tx`: Contains Events triggered during `DeliverTx` (i.e. transaction processing). +- `ValidatorSetUpdates`: Contains validator set updates for the block. -These Events are triggered from the `state` package after a block is committed. You can get the full list of Event categories [on the CometBFT Go documentation](https://pkg.go.dev/github.com/cometbft/cometbft/types#pkg-constants). +These Events are triggered from the `state` package after a block is committed. You can get the +full list of Event categories [on the CometBFT Go documentation](https://pkg.go.dev/github.com/cometbft/cometbft/types#pkg-constants). The `type` and `attribute` value of the `query` allow you to filter the specific Event you are looking for. For example, a `Mint` transaction triggers an Event of type `EventMint` and has an `Id` and an `Owner` as `attributes` (as defined in the [`events.proto` file of the `NFT` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/nft/v1beta1/event.proto#L21-L31)). 
Subscribing to this Event would be done like so: -``` -{ "jsonrpc": "2.0", "method": "subscribe", "id": "0", "params": { "query": "tm.event='Tx' AND mint.owner='ownerAddress'" }} +```json +{ + "jsonrpc": "2.0", + "method": "subscribe", + "id": "0", + "params": { + "query": "tm.event='Tx' AND mint.owner='ownerAddress'" + } +} ``` -where `ownerAddress` is an address following the [`AccAddress`](/v0.50/learn/beginner/accounts#addresses) format. +where `ownerAddress` is an address following the [`AccAddress`](/docs/sdk/v0.50/learn/beginner/accounts#addresses) format. The same way can be used to subscribe to [legacy events](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/types/events.go). -## Default Events[​](#default-events "Direct link to Default Events") +## Default Events There are a few events that are automatically emitted for all messages, directly from `baseapp`. -* `message.action`: The name of the message type. -* `message.sender`: The address of the message signer. -* `message.module`: The name of the module that emitted the message. +- `message.action`: The name of the message type. +- `message.sender`: The address of the message signer. +- `message.module`: The name of the module that emitted the message. - - The module name is assumed by `baseapp` to be the second element of the message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`. In case a module does not follow the standard message path, (e.g. IBC), it is advised to keep emitting the module name event. `Baseapp` only emits that event if the module have not already done so. - + + The module name is assumed by `baseapp` to be the second element of the + message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`. In case a module + does not follow the standard message path, (e.g. IBC), it is advised to keep + emitting the module name event. `Baseapp` only emits that event if the module + have not already done so. 
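The "second element of the message route" convention described above can be sketched in a few lines of plain Go. This is an illustrative helper, not the actual `baseapp` implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// moduleNameFromMsgRoute mirrors the convention described above: the module
// name is taken as the second dot-separated element of the message's type
// name, e.g. "cosmos.bank.v1beta1.MsgSend" -> "bank".
func moduleNameFromMsgRoute(route string) string {
	parts := strings.Split(route, ".")
	if len(parts) < 2 {
		// Non-standard route (no dots): fall back to the full name.
		return route
	}
	return parts[1]
}

func main() {
	fmt.Println(moduleNameFromMsgRoute("cosmos.bank.v1beta1.MsgSend"))        // bank
	fmt.Println(moduleNameFromMsgRoute("cosmos.staking.v1beta1.MsgDelegate")) // staking
}
```

Modules with non-standard routes would hit the fallback branch, which is why such modules are advised to emit the `message.module` event themselves.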
+ diff --git a/docs/sdk/v0.50/learn/advanced/grpc_rest.mdx b/docs/sdk/v0.50/learn/advanced/grpc_rest.mdx index 1b72521d..21b3689a 100644 --- a/docs/sdk/v0.50/learn/advanced/grpc_rest.mdx +++ b/docs/sdk/v0.50/learn/advanced/grpc_rest.mdx @@ -1,113 +1,222 @@ --- title: "gRPC, REST, and CometBFT Endpoints" -description: "Version: v0.50" --- - - This document presents an overview of all the endpoints a node exposes: gRPC, REST as well as some other endpoints. - +## Synopsis -## An Overview of All Endpoints[​](#an-overview-of-all-endpoints "Direct link to An Overview of All Endpoints") +This document presents an overview of all the endpoints a node exposes: gRPC, REST as well as some other endpoints. + +## An Overview of All Endpoints Each node exposes the following endpoints for users to interact with a node, each endpoint is served on a different port. Details on how to configure each endpoint is provided in the endpoint's own section. -* the gRPC server (default port: `9090`), -* the REST server (default port: `1317`), -* the CometBFT RPC endpoint (default port: `26657`). +- the gRPC server (default port: `9090`), +- the REST server (default port: `1317`), +- the CometBFT RPC endpoint (default port: `26657`). - - The node also exposes some other endpoints, such as the CometBFT P2P endpoint, or the [Prometheus endpoint](https://docs.cometbft.com/v0.37/core/metrics), which are not directly related to the Cosmos SDK. Please refer to the [CometBFT documentation](https://docs.cometbft.com/v0.37/core/configuration) for more information about these endpoints. - + + The node also exposes some other endpoints, such as the CometBFT P2P endpoint, + or the [Prometheus endpoint](https://docs.cometbft.com/v0.37/core/metrics), + which are not directly related to the Cosmos SDK. Please refer to the + [CometBFT documentation](https://docs.cometbft.com/v0.37/core/configuration) + for more information about these endpoints. 
+ - All endpoints are defaulted to localhost and must be modified to be exposed to the public internet. + All endpoints default to localhost and must be modified to be exposed to + the public internet. -## gRPC Server[​](#grpc-server "Direct link to gRPC Server") - -In the Cosmos SDK, Protobuf is the main [encoding](/v0.50/learn/advanced/encoding) library. This brings a wide range of Protobuf-based tools that can be plugged into the Cosmos SDK. One such tool is [gRPC](https://grpc.io), a modern open-source high performance RPC framework that has decent client support in several languages. - -Each module exposes a [Protobuf `Query` service](/v0.50/build/building-modules/messages-and-queries#queries) that defines state queries. The `Query` services and a transaction service used to broadcast transactions are hooked up to the gRPC server via the following function inside the application: - -server/types/app.go +## gRPC Server + +In the Cosmos SDK, Protobuf is the main [encoding](/docs/sdk/v0.50/learn/advanced/encoding) library. This brings a wide range of Protobuf-based tools that can be plugged into the Cosmos SDK. One such tool is [gRPC](https://grpc.io), a modern open-source high performance RPC framework that has decent client support in several languages. + +Each module exposes a [Protobuf `Query` service](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#queries) that defines state queries.
The `Query` services and a transaction service used to broadcast transactions are hooked up to the gRPC server via the following function inside the application: + +```go expandable +package types + +import ( + + "encoding/json" + "io" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +interface{ +} + +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. + Application interface { + ABCI + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServer registers gRPC services directly with the gRPC + / server. + RegisterGRPCServer(grpc.Server) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). + RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for CometBFT queries. 
+ RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context, config.Config) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +storetypes.CommitMultiStore + + / Return the snapshot manager + SnapshotManager() *snapshots.Manager + + / Close is called in start cmd to gracefully cleanup resources. + / Must be safe to be called multiple times. + Close() + +error +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. + AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server/types/app.go#L46-L48) -Note: It is not possible to expose any [Protobuf `Msg` service](/v0.50/build/building-modules/messages-and-queries#messages) endpoints via gRPC. 
Transactions must be generated and signed using the CLI or programmatically before they can be broadcasted using gRPC. See [Generating, Signing, and Broadcasting Transactions](/v0.50/user/run-node/txs) for more information. +Note: It is not possible to expose any [Protobuf `Msg` service](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages) endpoints via gRPC. Transactions must be generated and signed using the CLI or programmatically before they can be broadcasted using gRPC. See [Generating, Signing, and Broadcasting Transactions](/docs/sdk/v0.50/user/run-node/txs) for more information. The `grpc.Server` is a concrete gRPC server, which spawns and serves all gRPC query requests and a broadcast transaction request. This server can be configured inside `~/.simapp/config/app.toml`: -* `grpc.enable = true|false` field defines if the gRPC server should be enabled. Defaults to `true`. -* `grpc.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `localhost:9090`. +- `grpc.enable = true|false` field defines if the gRPC server should be enabled. Defaults to `true`. +- `grpc.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `localhost:9090`. - - `~/.simapp` is the directory where the node's configuration and databases are stored. By default, it's set to `~/.{app_name}`. - + + `~/.simapp` is the directory where the node's configuration and databases are + stored. By default, it's set to `~/.{app_name}`. + -Once the gRPC server is started, you can send requests to it using a gRPC client. Some examples are given in our [Interact with the Node](/v0.50/user/run-node/interact-node#using-grpc) tutorial. +Once the gRPC server is started, you can send requests to it using a gRPC client. Some examples are given in our [Interact with the Node](/docs/sdk/v0.50/user/run-node/interact-node#using-grpc) tutorial. 
An overview of all available gRPC endpoints shipped with the Cosmos SDK is [Protobuf documentation](https://buf.build/cosmos/cosmos-sdk). -## REST Server[​](#rest-server "Direct link to REST Server") +## REST Server Cosmos SDK supports REST routes via gRPC-gateway. All routes are configured under the following fields in `~/.simapp/config/app.toml`: -* `api.enable = true|false` field defines if the REST server should be enabled. Defaults to `false`. -* `api.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `tcp://localhost:1317`. -* some additional API configuration options are defined in `~/.simapp/config/app.toml`, along with comments, please refer to that file directly. +- `api.enable = true|false` field defines if the REST server should be enabled. Defaults to `false`. +- `api.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `tcp://localhost:1317`. +- some additional API configuration options are defined in `~/.simapp/config/app.toml`, along with comments, please refer to that file directly. -### gRPC-gateway REST Routes[​](#grpc-gateway-rest-routes "Direct link to gRPC-gateway REST Routes") +### gRPC-gateway REST Routes If, for various reasons, you cannot use gRPC (for example, you are building a web application, and browsers don't support HTTP2 on which gRPC is built), then the Cosmos SDK offers REST routes via gRPC-gateway. [gRPC-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) is a tool to expose gRPC endpoints as REST endpoints. For each gRPC endpoint defined in a Protobuf `Query` service, the Cosmos SDK offers a REST equivalent. For instance, querying a balance could be done via the `/cosmos.bank.v1beta1.QueryAllBalances` gRPC endpoint, or alternatively via the gRPC-gateway `"/cosmos/bank/v1beta1/balances/{address}"` REST endpoint: both will return the same result. 
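To make the equivalence concrete, here is a small sketch (the helper name and the bech32 address are illustrative; the default REST listen port `1317` is assumed) that derives the concrete REST URL from a gRPC-gateway route template:

```go
package main

import (
	"fmt"
	"strings"
)

// restURL fills the {address} placeholder of a gRPC-gateway route template
// and prepends the REST server's assumed default listen address.
func restURL(route, address string) string {
	return "http://localhost:1317" + strings.ReplaceAll(route, "{address}", address)
}

func main() {
	// The gRPC endpoint /cosmos.bank.v1beta1.Query/AllBalances maps to this
	// REST route; "cosmos1abcd" is a placeholder address.
	fmt.Println(restURL("/cosmos/bank/v1beta1/balances/{address}", "cosmos1abcd"))
}
```

An HTTP GET on the printed URL returns the same balances as the corresponding gRPC call, JSON-encoded by gRPC-gateway.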
For each RPC method defined in a Protobuf `Query` service, the corresponding REST endpoint is defined as an option: -proto/cosmos/bank/v1beta1/query.proto - -``` -loading... +```protobuf + // AllBalances queries the balance of all coins for a single account. + // + // When called from another module, this query might consume a high amount of + // gas if the pagination field is incorrectly set. + rpc AllBalances(QueryAllBalancesRequest) returns (QueryAllBalancesResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/bank/v1beta1/balances/{address}"; + } ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/bank/v1beta1/query.proto#L23-L30) - For application developers, gRPC-gateway REST routes needs to be wired up to the REST server, this is done by calling the `RegisterGRPCGatewayRoutes` function on the ModuleManager. -### Swagger[​](#swagger "Direct link to Swagger") +### Swagger A [Swagger](https://swagger.io/) (or OpenAPIv2) specification file is exposed under the `/swagger` route on the API server. Swagger is an open specification describing the API endpoints a server serves, including description, input arguments, return types and much more about each endpoint. Enabling the `/swagger` endpoint is configurable inside `~/.simapp/config/app.toml` via the `api.swagger` field, which is set to false by default. -For application developers, you may want to generate your own Swagger definitions based on your custom modules. The Cosmos SDK's [Swagger generation script](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/scripts/protoc-swagger-gen.sh) is a good place to start. +For application developers, you may want to generate your own Swagger definitions based on your custom modules. +The Cosmos SDK's [Swagger generation script](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/scripts/protoc-swagger-gen.sh) is a good place to start. 
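As a minimal sketch, enabling the REST server together with the Swagger endpoint in `~/.simapp/config/app.toml` could look like the fragment below (only the fields discussed above are shown; other fields in the `[api]` table are omitted):

```toml
[api]
# Enable the REST server (disabled by default).
enable = true
# Serve the OpenAPIv2 specification under /swagger (disabled by default).
swagger = true
# Address the REST server binds to.
address = "tcp://localhost:1317"
```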
-## CometBFT RPC[​](#cometbft-rpc "Direct link to CometBFT RPC") +## CometBFT RPC Independently from the Cosmos SDK, CometBFT also exposes a RPC server. This RPC server can be configured by tuning parameters under the `rpc` table in the `~/.simapp/config/config.toml`, the default listening address is `tcp://localhost:26657`. An OpenAPI specification of all CometBFT RPC endpoints is available [here](https://docs.cometbft.com/main/rpc/). Some CometBFT RPC endpoints are directly related to the Cosmos SDK: -* `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings: - - * any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.Query/AllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf. - * `/app/simulate`: this will simulate a transaction, and return some information such as gas used. - * `/app/version`: this will return the application's version. - * `/store/{storeName}/key`: this will directly query the named store for data associated with the key represented in the `data` parameter. - * `/store/{storeName}/subspace`: this will directly query the named store for key/value pairs in which the key has the value of the `data` parameter as a prefix. - * `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port. - * `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID. - -* `/broadcast_tx_{sync,async,commit}`: these 3 endpoints will broadcast a transaction to other peers. CLI, gRPC and REST expose [a way to broadcast transactions](/v0.50/learn/advanced/transactions#broadcasting-the-transaction), but they all use these 3 CometBFT RPCs under the hood. 
- -## Comparison Table[​](#comparison-table "Direct link to Comparison Table") - -| Name | Advantages | Disadvantages | -| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | -| gRPC | - can use code-generated stubs in various languages - supports streaming and bidirectional communication (HTTP2) - small wire binary sizes, faster transmission | - based on HTTP2, not available in browsers - learning curve (mostly due to Protobuf) | -| REST | - ubiquitous - client libraries in all languages, faster implementation | - only supports unary request-response communication (HTTP1.1) - bigger over-the-wire message sizes (JSON) | -| CometBFT RPC | - easy to use | - bigger over-the-wire message sizes (JSON) | +- `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings: + - any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.Query/AllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf. + - `/app/simulate`: this will simulate a transaction, and return some information such as gas used. + - `/app/version`: this will return the application's version. + - `/store/{storeName}/key`: this will directly query the named store for data associated with the key represented in the `data` parameter. + - `/store/{storeName}/subspace`: this will directly query the named store for key/value pairs in which the key has the value of the `data` parameter as a prefix. + - `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port. + - `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID. 
+- `/broadcast_tx_{sync,async,commit}`: these 3 endpoints will broadcast a transaction to other peers. CLI, gRPC and REST expose [a way to broadcast transactions](/docs/sdk/v0.50/learn/advanced/transactions#broadcasting-the-transaction), but they all use these 3 CometBFT RPCs under the hood.
+
+## Comparison Table
+
+| Name | Advantages | Disadvantages |
+| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
+| gRPC | - can use code-generated stubs in various languages <br /> - supports streaming and bidirectional communication (HTTP2) <br /> - small wire binary sizes, faster transmission | - based on HTTP2, not available in browsers <br /> - learning curve (mostly due to Protobuf) |
+| REST | - ubiquitous <br /> - client libraries in all languages, faster implementation | - only supports unary request-response communication (HTTP1.1) <br /> - bigger over-the-wire message sizes (JSON) |
+| CometBFT RPC | - easy to use | - bigger over-the-wire message sizes (JSON) |
diff --git a/docs/sdk/v0.50/learn/advanced/node.mdx b/docs/sdk/v0.50/learn/advanced/node.mdx index 17c6d463..a2dbb1b0 100644 --- a/docs/sdk/v0.50/learn/advanced/node.mdx +++ b/docs/sdk/v0.50/learn/advanced/node.mdx @@ -1,61 +1,609 @@ --- -title: "Node Client (Daemon)" -description: "Version: v0.50" +title: Node Client (Daemon) --- - - The main endpoint of a Cosmos SDK application is the daemon client, otherwise known as the full-node client. The full-node runs the state-machine, starting from a genesis file. It connects to peers running the same client in order to receive and relay transactions, block proposals and signatures. The full-node is constituted of the application, defined with the Cosmos SDK, and of a consensus engine connected to the application via the ABCI. - +## Synopsis + +The main endpoint of a Cosmos SDK application is the daemon client, otherwise known as the full-node client. The full-node runs the state-machine, starting from a genesis file. It connects to peers running the same client in order to receive and relay transactions, block proposals and signatures. The full-node is constituted of the application, defined with the Cosmos SDK, and of a consensus engine connected to the application via the ABCI. - * [Anatomy of an SDK application](/v0.50/learn/beginner/app-anatomy) +**Pre-requisite Readings** + +- [Anatomy of an SDK application](/docs/sdk/v0.50/learn/beginner/app-anatomy) + -## `main` function[​](#main-function "Direct link to main-function") +## `main` function The full-node client of any Cosmos SDK application is built by running a `main` function. The client is generally named by appending the `-d` suffix to the application name (e.g. `appd` for an application named `app`), and the `main` function is defined in a `./appd/cmd/main.go` file.
Running this function creates an executable `appd` that comes with a set of commands. For an app named `app`, the main command is [`appd start`](#start-command), which starts the full-node. In general, developers will implement the `main.go` function with the following structure: -* First, an [`encodingCodec`](/v0.50/learn/advanced/encoding) is instantiated for the application. -* Then, the `config` is retrieved and config parameters are set. This mainly involves setting the Bech32 prefixes for [addresses](/v0.50/learn/beginner/accounts#addresses). +- First, an [`encodingCodec`](/docs/sdk/v0.50/learn/advanced/encoding) is instantiated for the application. +- Then, the `config` is retrieved and config parameters are set. This mainly involves setting the Bech32 prefixes for [addresses](/docs/sdk/v0.50/learn/beginner/accounts#addresses). -types/config.go +```go expandable +package types -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/config.go#L14-L29) + "context" + "fmt" + "sync" + "github.com/cosmos/cosmos-sdk/version" +) -* Using [cobra](https://github.com/spf13/cobra), the root command of the full-node client is created. After that, all the custom commands of the application are added using the `AddCommand()` method of `rootCmd`. -* Add default server commands to `rootCmd` using the `server.AddCommands()` method. These commands are separated from the ones added above since they are standard and defined at Cosmos SDK level. They should be shared by all Cosmos SDK-based applications. They include the most important command: the [`start` command](#start-command). -* Prepare and execute the `executor`. +/ DefaultKeyringServiceName defines a default service name for the keyring. +const DefaultKeyringServiceName = "cosmos" -libs/cli/setup.go +/ Config is the structure that holds the SDK configuration parameters. +/ This could be used to initialize certain configuration parameters for the SDK. 
+type Config struct { + fullFundraiserPath string + bech32AddressPrefix map[string]string + txEncoder TxEncoder + addressVerifier func([]byte) -``` -loading... -``` +error + mtx sync.RWMutex -[See full example on GitHub](https://github.com/cometbft/cometbft/blob/v0.37.0/libs/cli/setup.go#L74-L78) + / SLIP-44 related + purpose uint32 + coinType uint32 -See an example of `main` function from the `simapp` application, the Cosmos SDK's application for demo purposes: + sealed bool + sealedch chan struct{ +} +} + +/ cosmos-sdk wide global singleton +var ( + sdkConfig *Config + initConfig sync.Once +) + +/ New returns a new Config with default values. +func NewConfig() *Config { + return &Config{ + sealedch: make(chan struct{ +}), + bech32AddressPrefix: map[string]string{ + "account_addr": Bech32PrefixAccAddr, + "validator_addr": Bech32PrefixValAddr, + "consensus_addr": Bech32PrefixConsAddr, + "account_pub": Bech32PrefixAccPub, + "validator_pub": Bech32PrefixValPub, + "consensus_pub": Bech32PrefixConsPub, +}, + fullFundraiserPath: FullFundraiserPath, + + purpose: Purpose, + coinType: CoinType, + txEncoder: nil, +} +} + +/ GetConfig returns the config instance for the SDK. +func GetConfig() *Config { + initConfig.Do(func() { + sdkConfig = NewConfig() +}) + +return sdkConfig +} + +/ GetSealedConfig returns the config instance for the SDK if/once it is sealed. 
+func GetSealedConfig(ctx context.Context) (*Config, error) { + config := GetConfig() + +select { + case <-config.sealedch: + return config, nil + case <-ctx.Done(): + return nil, ctx.Err() +} +} + +func (config *Config) + +assertNotSealed() { + config.mtx.RLock() + +defer config.mtx.RUnlock() + if config.sealed { + panic("Config is sealed") +} +} + +/ SetBech32PrefixForAccount builds the Config with Bech32 addressPrefix and publKeyPrefix for accounts +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForAccount(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["account_addr"] = addressPrefix + config.bech32AddressPrefix["account_pub"] = pubKeyPrefix +} + +/ SetBech32PrefixForValidator builds the Config with Bech32 addressPrefix and publKeyPrefix for validators +/ +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForValidator(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["validator_addr"] = addressPrefix + config.bech32AddressPrefix["validator_pub"] = pubKeyPrefix +} + +/ SetBech32PrefixForConsensusNode builds the Config with Bech32 addressPrefix and publKeyPrefix for consensus nodes +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForConsensusNode(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["consensus_addr"] = addressPrefix + config.bech32AddressPrefix["consensus_pub"] = pubKeyPrefix +} + +/ SetTxEncoder builds the Config with TxEncoder used to marshal StdTx to bytes +func (config *Config) + +SetTxEncoder(encoder TxEncoder) { + config.assertNotSealed() + +config.txEncoder = encoder +} + +/ SetAddressVerifier builds the Config with the provided function for verifying that addresses +/ have the correct format +func (config *Config) + +SetAddressVerifier(addressVerifier func([]byte) + +error) { + config.assertNotSealed() + +config.addressVerifier = 
addressVerifier +} + +/ Set the FullFundraiserPath (BIP44Prefix) + +on the config. +/ +/ Deprecated: This method is supported for backward compatibility only and will be removed in a future release. Use SetPurpose and SetCoinType instead. +func (config *Config) + +SetFullFundraiserPath(fullFundraiserPath string) { + config.assertNotSealed() + +config.fullFundraiserPath = fullFundraiserPath +} + +/ Set the BIP-0044 Purpose code on the config +func (config *Config) + +SetPurpose(purpose uint32) { + config.assertNotSealed() + +config.purpose = purpose +} + +/ Set the BIP-0044 CoinType code on the config +func (config *Config) + +SetCoinType(coinType uint32) { + config.assertNotSealed() + +config.coinType = coinType +} + +/ Seal seals the config such that the config state could not be modified further +func (config *Config) + +Seal() *Config { + config.mtx.Lock() + if config.sealed { + config.mtx.Unlock() + +return config +} + + / signal sealed after state exposed/unlocked + config.sealed = true + config.mtx.Unlock() + +close(config.sealedch) + +return config +} + +/ GetBech32AccountAddrPrefix returns the Bech32 prefix for account address +func (config *Config) + +GetBech32AccountAddrPrefix() + +string { + return config.bech32AddressPrefix["account_addr"] +} + +/ GetBech32ValidatorAddrPrefix returns the Bech32 prefix for validator address +func (config *Config) + +GetBech32ValidatorAddrPrefix() + +string { + return config.bech32AddressPrefix["validator_addr"] +} + +/ GetBech32ConsensusAddrPrefix returns the Bech32 prefix for consensus node address +func (config *Config) + +GetBech32ConsensusAddrPrefix() + +string { + return config.bech32AddressPrefix["consensus_addr"] +} + +/ GetBech32AccountPubPrefix returns the Bech32 prefix for account public key +func (config *Config) + +GetBech32AccountPubPrefix() + +string { + return config.bech32AddressPrefix["account_pub"] +} -simapp/simd/main.go +/ GetBech32ValidatorPubPrefix returns the Bech32 prefix for validator public key 
+func (config *Config) +GetBech32ValidatorPubPrefix() + +string { + return config.bech32AddressPrefix["validator_pub"] +} + +/ GetBech32ConsensusPubPrefix returns the Bech32 prefix for consensus node public key +func (config *Config) + +GetBech32ConsensusPubPrefix() + +string { + return config.bech32AddressPrefix["consensus_pub"] +} + +/ GetTxEncoder return function to encode transactions +func (config *Config) + +GetTxEncoder() + +TxEncoder { + return config.txEncoder +} + +/ GetAddressVerifier returns the function to verify that addresses have the correct format +func (config *Config) + +GetAddressVerifier() + +func([]byte) + +error { + return config.addressVerifier +} + +/ GetPurpose returns the BIP-0044 Purpose code on the config. +func (config *Config) + +GetPurpose() + +uint32 { + return config.purpose +} + +/ GetCoinType returns the BIP-0044 CoinType code on the config. +func (config *Config) + +GetCoinType() + +uint32 { + return config.coinType +} + +/ GetFullFundraiserPath returns the BIP44Prefix. +/ +/ Deprecated: This method is supported for backward compatibility only and will be removed in a future release. Use GetFullBIP44Path instead. +func (config *Config) + +GetFullFundraiserPath() + +string { + return config.fullFundraiserPath +} + +/ GetFullBIP44Path returns the BIP44Prefix. +func (config *Config) + +GetFullBIP44Path() + +string { + return fmt.Sprintf("m/%d'/%d'/0'/0/0", config.purpose, config.coinType) +} + +func KeyringServiceName() + +string { + if len(version.Name) == 0 { + return DefaultKeyringServiceName +} + +return version.Name +} ``` -loading... + +- Using [cobra](https://github.com/spf13/cobra), the root command of the full-node client is created. After that, all the custom commands of the application are added using the `AddCommand()` method of `rootCmd`. +- Add default server commands to `rootCmd` using the `server.AddCommands()` method. 
These commands are separated from the ones added above since they are standard and defined at Cosmos SDK level. They should be shared by all Cosmos SDK-based applications. They include the most important command: the [`start` command](#start-command). +- Prepare and execute the `executor`. + +```go expandable +package cli + +import ( + + "fmt" + "os" + "path/filepath" + "runtime" + "strings" + "github.com/spf13/cobra" + "github.com/spf13/viper" +) + +const ( + HomeFlag = "home" + TraceFlag = "trace" + OutputFlag = "output" + EncodingFlag = "encoding" +) + +/ Executable is the minimal interface to *corba.Command, so we can +/ wrap if desired before the test +type Executable interface { + Execute() + +error +} + +/ PrepareBaseCmd is meant for CometBFT and other servers +func PrepareBaseCmd(cmd *cobra.Command, envPrefix, defaultHome string) + +Executor { + cobra.OnInitialize(func() { + initEnv(envPrefix) +}) + +cmd.PersistentFlags().StringP(HomeFlag, "", defaultHome, "directory for config and data") + +cmd.PersistentFlags().Bool(TraceFlag, false, "print out full stack trace on errors") + +cmd.PersistentPreRunE = concatCobraCmdFuncs(bindFlagsLoadViper, cmd.PersistentPreRunE) + +return Executor{ + cmd, os.Exit +} +} + +/ PrepareMainCmd is meant for client side libs that want some more flags +/ +/ This adds --encoding (hex, btc, base64) + +and --output (text, json) + +to +/ the command. These only really make sense in interactive commands. +func PrepareMainCmd(cmd *cobra.Command, envPrefix, defaultHome string) + +Executor { + cmd.PersistentFlags().StringP(EncodingFlag, "e", "hex", "Binary encoding (hex|b64|btc)") + +cmd.PersistentFlags().StringP(OutputFlag, "o", "text", "Output format (text|json)") + +cmd.PersistentPreRunE = concatCobraCmdFuncs(validateOutput, cmd.PersistentPreRunE) + +return PrepareBaseCmd(cmd, envPrefix, defaultHome) +} + +/ initEnv sets to use ENV variables if set. 
+func initEnv(prefix string) { + copyEnvVars(prefix) + + / env variables with TM prefix (eg. TM_ROOT) + +viper.SetEnvPrefix(prefix) + +viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_")) + +viper.AutomaticEnv() +} + +/ This copies all variables like TMROOT to TM_ROOT, +/ so we can support both formats for the user +func copyEnvVars(prefix string) { + prefix = strings.ToUpper(prefix) + ps := prefix + "_" + for _, e := range os.Environ() { + kv := strings.SplitN(e, "=", 2) + if len(kv) == 2 { + k, v := kv[0], kv[1] + if strings.HasPrefix(k, prefix) && !strings.HasPrefix(k, ps) { + k2 := strings.Replace(k, prefix, ps, 1) + +os.Setenv(k2, v) +} + +} + +} +} + +/ Executor wraps the cobra Command with a nicer Execute method +type Executor struct { + *cobra.Command + Exit func(int) / this is os.Exit by default, override in tests +} + +type ExitCoder interface { + ExitCode() + +int +} + +/ execute adds all child commands to the root command sets flags appropriately. +/ This is called by main.main(). It only needs to happen once to the rootCmd. +func (e Executor) + +Execute() + +error { + e.SilenceUsage = true + e.SilenceErrors = true + err := e.Command.Execute() + if err != nil { + if viper.GetBool(TraceFlag) { + const size = 64 << 10 + buf := make([]byte, size) + +buf = buf[:runtime.Stack(buf, false)] + fmt.Fprintf(os.Stderr, "ERROR: %v\n%s\n", err, buf) +} + +else { + fmt.Fprintf(os.Stderr, "ERROR: %v\n", err) +} + + / return error code 1 by default, can override it with a special error type + exitCode := 1 + if ec, ok := err.(ExitCoder); ok { + exitCode = ec.ExitCode() +} + +e.Exit(exitCode) +} + +return err +} + +type cobraCmdFunc func(cmd *cobra.Command, args []string) + +error + +/ Returns a single function that calls each argument function in sequence +/ RunE, PreRunE, PersistentPreRunE, etc. 
// all have this same signature
func concatCobraCmdFuncs(fs ...cobraCmdFunc) cobraCmdFunc {
	return func(cmd *cobra.Command, args []string) error {
		for _, f := range fs {
			if f != nil {
				if err := f(cmd, args); err != nil {
					return err
				}
			}
		}
		return nil
	}
}

// Bind all flags and read the config into viper
func bindFlagsLoadViper(cmd *cobra.Command, args []string) error {
	// cmd.Flags() includes flags from this command and all persistent flags from the parent
	if err := viper.BindPFlags(cmd.Flags()); err != nil {
		return err
	}
	homeDir := viper.GetString(HomeFlag)
	viper.Set(HomeFlag, homeDir)
	viper.SetConfigName("config")                         // name of config file (without extension)
	viper.AddConfigPath(homeDir)                          // search root directory
	viper.AddConfigPath(filepath.Join(homeDir, "config")) // search root directory /config

	// If a config file is found, read it in.
	if err := viper.ReadInConfig(); err == nil {
		// stderr, so if we redirect output to json file, this doesn't appear
		// fmt.Fprintln(os.Stderr, "Using config file:", viper.ConfigFileUsed())
	} else if _, ok := err.(viper.ConfigFileNotFoundError); !ok {
		// ignore not found error, return other errors
		return err
	}
	return nil
}

func validateOutput(cmd *cobra.Command, args []string) error {
	// validate output format
	output := viper.GetString(OutputFlag)
	switch output {
	case "text", "json":
	default:
		return fmt.Errorf("unsupported output format: %s", output)
	}
	return nil
}
```

See an example of a `main` function from the `simapp` application, the Cosmos SDK's application for demo purposes:

```go expandable
package main
-The `start` command is defined in the `/server` folder of the Cosmos SDK.
import (
	"os"

	"cosmossdk.io/log"
	"cosmossdk.io/simapp"
	"cosmossdk.io/simapp/simd/cmd"
	svrcmd "github.com/cosmos/cosmos-sdk/server/cmd"
)

func main() {
	rootCmd := cmd.NewRootCmd()
	if err := svrcmd.Execute(rootCmd, "", simapp.DefaultNodeHome); err != nil {
		log.NewLogger(rootCmd.OutOrStderr()).Error("failure when running app", "err", err)
		os.Exit(1)
	}
}
```

## `start` command

The `start` command is defined in the `/server` folder of the Cosmos SDK. It is added to the root command of the full-node client in the [`main` function](#main-function) and called by the end-user to start their node:

```bash
# For an example app named "app", the following command starts the full-node.
appd start

# Using the Cosmos SDK's own simapp, the following commands start the simapp node.
simd start
```

As a reminder, the full-node is composed of three conceptual layers: the networking layer, the consensus layer and the application layer. The first two are generally bundled together in an entity called the consensus engine (CometBFT by default), while the third is the state-machine defined with the help of the Cosmos SDK. Currently, the Cosmos SDK uses CometBFT as the default consensus engine, meaning the `start` command is implemented to boot up a CometBFT node.

@@ -64,58 +612,2592 @@ The flow of the `start` command is pretty straightforward. First, it retrieves t

With the `db`, the `start` command creates a new instance of the application using an `appCreator` function:

-server/start.go

```go expandable
package server
-```
-loading...
-``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server/start.go#L220) + "context" + "errors" + "fmt" + "io" + "net" + "os" + "runtime/pprof" -Note that an `appCreator` is a function that fulfills the `AppCreator` signature: + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/armon/go-metrics" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + "github.com/cometbft/cometbft/proxy" + "github.com/cometbft/cometbft/rpc/client/local" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) -server/types/app.go +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + 
FlagInvCheckPeriod = "inv-check-period" -``` -loading... -``` + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server/types/app.go#L68) + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" -In practice, the [constructor of the application](/v0.50/learn/beginner/app-anatomy#constructor-function) is passed as the `appCreator`. + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" -simapp/simd/cmd/root\_v2.go + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" -``` -loading... -``` + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/simd/cmd/root_v2.go#L294-L308) +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db options or support different db backends, + / default to the builtin db opener. 
+ DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) -Then, the instance of `app` is used to instantiate a new CometBFT node: +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) +} -server/start.go +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} -``` -loading... -``` +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server/start.go#L341-L378) +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. -The CometBFT node can be created with `app` because the latter satisfies the [`abci.Application` interface](https://github.com/cometbft/cometbft/blob/v0.37.0/abci/types/application.go#L9-L35) (given that `app` extends [`baseapp`](/v0.50/learn/advanced/baseapp)). As part of the `node.New` method, CometBFT makes sure that the height of the application (i.e. number of blocks since genesis) is equal to the height of the CometBFT node. 
The difference between these two heights should always be negative or null. If it is strictly negative, `node.New` will replay blocks until the height of the application reaches the height of the CometBFT node. Finally, if the height of the application is `0`, the CometBFT node will call [`InitChain`](/v0.50/learn/advanced/baseapp#initchain) on the application to initialize the state from the genesis file. +For '--pruning' the options are as follows: -Once the CometBFT node is instantiated and in sync with the application, the node can be started: +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) -server/start.go +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' -``` -loading... -``` +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + PreRunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. + if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +return wrapCPUProfile(serverCtx, func() + +error { + return start(serverCtx, clientCtx, appCreator, withCMT, opts) +}) +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 
0.01photino;0.0001stake)") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT 
maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, app, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx 
*Context, app types.Application, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %v", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. + <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return errors.Join(svr.Stop(), app.Close()) +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + home := cmtCfg.RootDir + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. 
+ clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +g, ctx := getCtx(svrCtx, true) + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, cmtCfg, svrCfg, clientCtx, svrCtx, app, home, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server/start.go#L350-L352) + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. +func startCmtNode( + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNode( + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() + _ = app.Close() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err 
+} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. +func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, port, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( + grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + 
grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", grpcAddress) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + cmtCfg *cmtcfg.Config, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + errCh := make(chan error) + +go func() { + errCh <- callbackFn() +}() + +return <-errCh +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + +app = 
appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} +``` + +Note that an `appCreator` is a function that fulfills the `AppCreator` signature: + +```go expandable +package types + +import ( + + "encoding/json" + "io" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +interface{ +} + +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. + Application interface { + ABCI + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServer registers gRPC services directly with the gRPC + / server. + RegisterGRPCServer(grpc.Server) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). 
+ RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for CometBFT queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context, config.Config) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +storetypes.CommitMultiStore + + / Return the snapshot manager + SnapshotManager() *snapshots.Manager + + / Close is called in start cmd to gracefully cleanup resources. + / Must be safe to be called multiple times. + Close() + +error +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. + AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) +``` + +In practice, the [constructor of the application](/docs/sdk/v0.50/learn/beginner/app-anatomy#constructor-function) is passed as the `appCreator`. 
+ +```go expandable +/go:build !app_v1 + +package cmd + +import ( + + "errors" + "io" + "os" + + cmtcfg "github.com/cometbft/cometbft/config" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/viper" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "cosmossdk.io/simapp" + confixcmd "cosmossdk.io/tools/confix/cmd" + rosettaCmd "cosmossdk.io/tools/rosetta/cmd" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/pruning" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/client/snapshot" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the main function. 
+func NewRootCmd() *cobra.Command { + var ( + interfaceRegistry codectypes.InterfaceRegistry + appCodec codec.Codec + txConfig client.TxConfig + legacyAmino *codec.LegacyAmino + autoCliOpts autocli.AppOptions + moduleBasicManager module.BasicManager + ) + if err := depinject.Inject(depinject.Configs(simapp.AppConfig, depinject.Supply(log.NewNopLogger())), + &interfaceRegistry, + &appCodec, + &txConfig, + &legacyAmino, + &autoCliOpts, + &moduleBasicManager, + ); err != nil { + panic(err) +} + initClientCtx := client.Context{ +}. + WithCodec(appCodec). + WithInterfaceRegistry(interfaceRegistry). + WithLegacyAmino(legacyAmino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(simapp.DefaultNodeHome). + WithViper("") / In simapp, we don't use any prefix for env variables. + rootCmd := &cobra.Command{ + Use: "simd", + Short: "simulation app", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + / set the default command outputs + cmd.SetOut(cmd.OutOrStdout()) + +cmd.SetErr(cmd.ErrOrStderr()) + +initClientCtx = initClientCtx.WithCmdContext(cmd.Context()) + +initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + + / This needs to go after ReadFromClientConfig, as that function + / sets the RPC client needed for SIGN_MODE_TEXTUAL. 
+ enabledSignModes := append(tx.DefaultSignModes, signing.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewGRPCCoinMetadataQueryFn(initClientCtx), +} + +txConfigWithTextual, err := tx.NewTxConfigWithOptions( + codec.NewProtoCodec(interfaceRegistry), + txConfigOpts, + ) + if err != nil { + return err +} + +initClientCtx = initClientCtx.WithTxConfig(txConfigWithTextual) + if err := client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customAppTemplate, customAppConfig := initAppConfig() + customCMTConfig := initCometBFTConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +}, +} + +initRootCmd(rootCmd, txConfig, interfaceRegistry, appCodec, moduleBasicManager) + if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { + panic(err) +} + +return rootCmd +} + +/ initCometBFTConfig helps to override default CometBFT Config values. +/ return cmtcfg.DefaultConfig if no custom configuration is required for the application. +func initCometBFTConfig() *cmtcfg.Config { + cfg := cmtcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +/ initAppConfig helps to override default appConfig template and configs. +/ return "", nil if no custom configuration is required for the application. +func initAppConfig() (string, interface{ +}) { + / The following code snippet is just for reference. + + / WASMConfig defines configuration for the wasm module. 
+ type WASMConfig struct { + / This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries + QueryGasLimit uint64 `mapstructure:"query_gas_limit"` + + / Address defines the gRPC-web server to listen on + LruSize uint64 `mapstructure:"lru_size"` +} + +type CustomAppConfig struct { + serverconfig.Config + + WASM WASMConfig `mapstructure:"wasm"` +} + + / Optionally allow the chain developer to overwrite the SDK's default + / server config. + srvCfg := serverconfig.DefaultConfig() + / The SDK's default minimum gas price is set to "" (empty value) + +inside + / app.toml. If left empty by validators, the node will halt on startup. + / However, the chain developer can set a default app.toml value for their + / validators here. + / + / In summary: + / - if you leave srvCfg.MinGasPrices = "", all validators MUST tweak their + / own app.toml config, + / - if you set srvCfg.MinGasPrices non-empty, validators CAN tweak their + / own app.toml to override, or use this default value. + / + / In simapp, we set the min gas prices to 0. 
+ srvCfg.MinGasPrices = "0stake" + / srvCfg.BaseConfig.IAVLDisableFastNode = true / disable fastnode by default + customAppConfig := CustomAppConfig{ + Config: *srvCfg, + WASM: WASMConfig{ + LruSize: 1, + QueryGasLimit: 300000, +}, +} + customAppTemplate := serverconfig.DefaultConfigTemplate + ` +[wasm] +# This is the maximum sdk gas (wasm and storage) + +that we allow for any x/wasm "smart" queries +query_gas_limit = 300000 +# This is the number of wasm vm instances we keep cached in memory for speed-up +# Warning: this is currently unstable and may lead to crashes, best to keep for 0 unless testing locally +lru_size = 0` + + return customAppTemplate, customAppConfig +} + +func initRootCmd( + rootCmd *cobra.Command, + txConfig client.TxConfig, + interfaceRegistry codectypes.InterfaceRegistry, + appCodec codec.Codec, + basicManager module.BasicManager, +) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(basicManager, simapp.DefaultNodeHome), + NewTestnetCmd(basicManager, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + confixcmd.ConfigCommand(), + pruning.Cmd(newApp), + snapshot.Cmd(newApp), + ) + +server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, genesis, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + genesisCommand(txConfig, basicManager), + queryCommand(), + txCommand(), + keys.Commands(simapp.DefaultNodeHome), + ) + + / add rosetta + rootCmd.AddCommand(rosettaCmd.RosettaCommand(interfaceRegistry, appCodec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +/ genesisCommand builds genesis-related `simd genesis` command. 
Users may provide application specific commands as a parameter +func genesisCommand(txConfig client.TxConfig, basicManager module.BasicManager, cmds ...*cobra.Command) *cobra.Command { + cmd := genutilcli.Commands(txConfig, basicManager, simapp.DefaultNodeHome) + for _, subCmd := range cmds { + cmd.AddCommand(subCmd) +} + +return cmd +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + rpc.ValidatorCommand(), + server.QueryBlockCmd(), + authcmd.QueryTxsByEventsCmd(), + server.QueryBlocksCmd(), + authcmd.QueryTxCmd(), + ) + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: false, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +return cmd +} + +/ newApp creates the application +func newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + baseappOptions := server.DefaultBaseappOptions(appOpts) + +return simapp.NewSimApp( + logger, db, traceStore, true, + appOpts, + baseappOptions..., + ) +} + +/ appExport creates a new simapp (optionally at a given height) + +and exports state. 
+func appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, + modulesToExport []string, +) (servertypes.ExportedApp, error) { + / this check is necessary as we use the flag in x/upgrade. + / we can exit more gracefully by checking the flag here. + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home not set") +} + +viperAppOpts, ok := appOpts.(*viper.Viper) + if !ok { + return servertypes.ExportedApp{ +}, errors.New("appOpts is not viper.Viper") +} + + / overwrite the FlagInvCheckPeriod + viperAppOpts.Set(server.FlagInvCheckPeriod, 1) + +appOpts = viperAppOpts + + var simApp *simapp.SimApp + if height != -1 { + simApp = simapp.NewSimApp(logger, db, traceStore, false, appOpts) + if err := simApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +else { + simApp = simapp.NewSimApp(logger, db, traceStore, true, appOpts) +} + +return simApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs, modulesToExport) +} +``` + +Then, the instance of `app` is used to instantiate a new CometBFT node: + +```go expandable +package server + +import ( + + "context" + "errors" + "fmt" + "io" + "net" + "os" + "runtime/pprof" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/armon/go-metrics" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + "github.com/cometbft/cometbft/proxy" + "github.com/cometbft/cometbft/rpc/client/local" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + 
"google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = 
"grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. + +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. 
archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. +`, + PreRunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. 
+ if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +return wrapCPUProfile(serverCtx, func() + +error { + return start(serverCtx, clientCtx, appCreator, withCMT, opts) +}) +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 
0.01photino;0.0001stake)") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT 
maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, app, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx 
*Context, app types.Application, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %v", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. + <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return errors.Join(svr.Stop(), app.Close()) +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + home := cmtCfg.RootDir + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. 
+ clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +g, ctx := getCtx(svrCtx, true) + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, cmtCfg, svrCfg, clientCtx, svrCtx, app, home, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. +func startCmtNode( + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNode( + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() + _ = app.Close() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns 
a function which returns the genesis doc from the genesis file. +func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, port, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( + grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + 
grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", grpcAddress) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + cmtCfg *cmtcfg.Config, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + errCh := make(chan error) + +go func() { + errCh <- callbackFn() +}() + +return <-errCh +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + +app = 
appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} +``` + +The CometBFT node can be created with `app` because the latter satisfies the [`abci.Application` interface](https://github.com/cometbft/cometbft/blob/v0.37.0/abci/types/application.go#L9-L35) (given that `app` extends [`baseapp`](/docs/sdk/v0.50/learn/advanced/baseapp)). As part of the `node.New` method, CometBFT makes sure that the height of the application (i.e. number of blocks since genesis) is equal to the height of the CometBFT node. The difference between these two heights should always be negative or null. If it is strictly negative, `node.New` will replay blocks until the height of the application reaches the height of the CometBFT node. Finally, if the height of the application is `0`, the CometBFT node will call [`InitChain`](/docs/sdk/v0.50/learn/advanced/baseapp#initchain) on the application to initialize the state from the genesis file. 
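The height-reconciliation rule described above can be sketched as follows. This is a hypothetical illustration of the invariant (app height must never exceed the CometBFT node height, and any shortfall is closed by replaying blocks), not CometBFT's actual handshake code:

```go
package main

import "fmt"

// syncHeights models the invariant enforced during node.New: the
// application height minus the CometBFT height must be negative or zero.
// A strictly negative difference is closed by replaying blocks; a
// positive one is an unrecoverable inconsistency.
func syncHeights(appHeight, cometHeight int64) (int64, error) {
	if appHeight > cometHeight {
		return appHeight, fmt.Errorf("app height %d ahead of comet height %d", appHeight, cometHeight)
	}
	// Replay missing blocks one at a time until the heights match
	// (stand-in for replaying each block against the application).
	for appHeight < cometHeight {
		appHeight++
	}
	return appHeight, nil
}

func main() {
	h, err := syncHeights(95, 100)
	fmt.Println(h, err) // prints: 100 <nil>
}
```

An application height of `0` is the special case where, instead of replaying, the node calls `InitChain` to build state from the genesis file.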
+ +Once the CometBFT node is instantiated and in sync with the application, the node can be started: + +```go expandable +package server + +import ( + + "context" + "errors" + "fmt" + "io" + "net" + "os" + "runtime/pprof" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/armon/go-metrics" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + "github.com/cometbft/cometbft/proxy" + "github.com/cometbft/cometbft/rpc/client/local" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + 
FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. 
+func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. + +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. 
In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. +`, + PreRunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. + if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +return wrapCPUProfile(serverCtx, func() + +error { + return start(serverCtx, clientCtx, appCreator, withCMT, opts) +}) +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 
0.01photino;0.0001stake)") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT 
maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, app, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx 
*Context, app types.Application, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %v", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. + <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return errors.Join(svr.Stop(), app.Close()) +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + home := cmtCfg.RootDir + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. 
+ clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +g, ctx := getCtx(svrCtx, true) + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, cmtCfg, svrCfg, clientCtx, svrCtx, app, home, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. +func startCmtNode( + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNode( + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() + _ = app.Close() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns 
a function which returns the genesis doc from the genesis file. +func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, port, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( + grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + 
grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", grpcAddress) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + cmtCfg *cmtcfg.Config, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + errCh := make(chan error) + +go func() { + errCh <- callbackFn() +}() + +return <-errCh +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + +app = 
appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} +``` Upon starting, the node will bootstrap its RPC and P2P server and start dialing peers. During handshake with its peers, if the node realizes they are ahead, it will query all the blocks sequentially in order to catch up. Then, it will wait for new block proposals and block signatures from validators in order to make progress. -## Other commands[​](#other-commands "Direct link to Other commands") +## Other commands -To discover how to concretely run a node and interact with it, please refer to our [Running a Node, API and CLI](/v0.50/user/run-node/run-node) guide. +To discover how to concretely run a node and interact with it, please refer to our [Running a Node, API and CLI](/docs/sdk/v0.50/user/run-node/run-node) guide. diff --git a/docs/sdk/v0.50/learn/advanced/ocap.mdx b/docs/sdk/v0.50/learn/advanced/ocap.mdx index b2786f20..1c8275b7 100644 --- a/docs/sdk/v0.50/learn/advanced/ocap.mdx +++ b/docs/sdk/v0.50/learn/advanced/ocap.mdx @@ -1,55 +1,1004 @@ --- -title: "Object-Capability Model" -description: "Version: v0.50" +title: Object-Capability Model +description: >- + When thinking about security, it is good to start with a specific threat + model. Our threat model is the following: --- -## Intro[​](#intro "Direct link to Intro") +## Intro When thinking about security, it is good to start with a specific threat model. Our threat model is the following: > We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules. -The Cosmos SDK is designed to address this threat by being the foundation of an object capability system. +The Cosmos SDK is designed to address this threat by being the +foundation of an object capability system. 
-> The structural properties of object capability systems favor modularity in code design and ensure reliable encapsulation in code implementation. +> The structural properties of object capability systems favor +> modularity in code design and ensure reliable encapsulation in +> code implementation. > -> These structural properties facilitate the analysis of some security properties of an object-capability program or operating system. Some of these — in particular, information flow properties — can be analyzed at the level of object references and connectivity, independent of any knowledge or analysis of the code that determines the behavior of the objects. +> These structural properties facilitate the analysis of some +> security properties of an object-capability program or operating +> system. Some of these — in particular, information flow properties +> — can be analyzed at the level of object references and +> connectivity, independent of any knowledge or analysis of the code +> that determines the behavior of the objects. > -> As a consequence, these security properties can be established and maintained in the presence of new objects that contain unknown and possibly malicious code. +> As a consequence, these security properties can be established +> and maintained in the presence of new objects that contain unknown +> and possibly malicious code. > -> These structural properties stem from the two rules governing access to existing objects: +> These structural properties stem from the two rules governing +> access to existing objects: > -> 1. An object A can send a message to B only if object A holds a reference to B. -> 2. An object A can obtain a reference to C only if object A receives a message containing a reference to C. As a consequence of these two rules, an object can obtain a reference to another object only through a preexisting chain of references. In short, "Only connectivity begets connectivity." +> 1. 
An object A can send a message to B only if object A holds a +> reference to B. +> 2. An object A can obtain a reference to C only +> if object A receives a message containing a reference to C. As a +> consequence of these two rules, an object can obtain a reference +> to another object only through a preexisting chain of references. +> In short, "Only connectivity begets connectivity." For an introduction to object-capabilities, see this [Wikipedia article](https://en.wikipedia.org/wiki/Object-capability_model). -## Ocaps in practice[​](#ocaps-in-practice "Direct link to Ocaps in practice") +## Ocaps in practice The idea is to only reveal what is necessary to get the work done. -For example, the following code snippet violates the object capabilities principle: +For example, the following code snippet violates the object capabilities +principle: -``` -type AppAccount struct {...}account := &AppAccount{ Address: pub.Address(), Coins: sdk.Coins{sdk.NewInt64Coin("ATM", 100)},}sumValue := externalModule.ComputeSumValue(account) +```go +type AppAccount struct {... +} + account := &AppAccount{ + Address: pub.Address(), + Coins: sdk.Coins{ + sdk.NewInt64Coin("ATM", 100) +}, +} + sumValue := externalModule.ComputeSumValue(account) ``` -The method `ComputeSumValue` implies a pure function, yet the implied capability of accepting a pointer value is the capability to modify that value. The preferred method signature should take a copy instead. +The method `ComputeSumValue` implies a pure function, yet the implied +capability of accepting a pointer value is the capability to modify that +value. The preferred method signature should take a copy instead. -``` +```go sumValue := externalModule.ComputeSumValue(*account) ``` In the Cosmos SDK, you can see the application of this principle in simapp. -simapp/app.go +```go expandable +/go:build app_v1 -``` -loading... 
-``` +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk 
"github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + 
groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. 
+type stdAccAddressCodec struct{ +} + +func (g stdAccAddressCodec) + +StringToBytes(text string) ([]byte, error) { + if text == "" { + return nil, nil +} + +return sdk.AccAddressFromBech32(text) +} + +func (g stdAccAddressCodec) + +BytesToString(bz []byte) (string, error) { + if bz == nil { + return "", nil +} + +return sdk.AccAddress(bz).String(), nil +} + +/ stdValAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. +type stdValAddressCodec struct{ +} + +func (g stdValAddressCodec) + +StringToBytes(text string) ([]byte, error) { + return sdk.ValAddressFromBech32(text) +} + +func (g stdValAddressCodec) + +BytesToString(bz []byte) (string, error) { + return sdk.ValAddress(bz).String(), nil +} + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app.go) +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do
+	// not need it by default, so feel free to comment the next line if you do
+	// not need tips.
+	// To read more about tips:
+	// https://docs.cosmos.network/main/core/tips.html
+	//
+	// Please note that changing any of the anteHandler or postHandler chain is
+	// likely to be a state-machine breaking change, which needs a coordinated
+	// upgrade.
+	app.setPostHandler()
+
+	// At startup, after all modules have been registered, check that all proto
+	// annotations are correct.
+	protoFiles, err := proto.MergedRegistry()
+	if err != nil {
+		panic(err)
+	}
+
+	err = msgservice.ValidateProtoAnnotations(protoFiles)
+	if err != nil {
+		// Once we switch to using protoreflect-based antehandlers, we might
+		// want to panic here instead of logging a warning.
+		fmt.Fprintln(os.Stderr, err.Error())
+	}
+
+	if loadLatest {
+		if err := app.LoadLatestVersion(); err != nil {
+			panic(fmt.Errorf("error loading last version: %w", err))
+		}
+	}
+
+	return app
+}
+
+func (app *SimApp) setAnteHandler(txConfig client.TxConfig) {
+	anteHandler, err := NewAnteHandler(
+		HandlerOptions{
+			ante.HandlerOptions{
+				AccountKeeper:   app.AccountKeeper,
+				BankKeeper:      app.BankKeeper,
+				SignModeHandler: txConfig.SignModeHandler(),
+				FeegrantKeeper:  app.FeeGrantKeeper,
+				SigGasConsumer:  ante.DefaultSigVerificationGasConsumer,
+			},
+			&app.CircuitKeeper,
+		},
+	)
+	if err != nil {
+		panic(err)
+	}
+
+	// Set the AnteHandler for the app
+	app.SetAnteHandler(anteHandler)
+}
+
+func (app *SimApp) setPostHandler() {
+	postHandler, err := posthandler.NewPostHandler(
+		posthandler.HandlerOptions{},
+	)
+	if err != nil {
+		panic(err)
+	}
+
+	app.SetPostHandler(postHandler)
+}
+
+// Name returns the name of the App
+func (app *SimApp) Name() string {
+	return app.BaseApp.Name()
+}
+
+// BeginBlocker application updates every begin block
+func (app *SimApp) BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) {
+	return app.ModuleManager.BeginBlock(ctx)
+}
+
+// EndBlocker application
updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() map[string]bool {
+	modAccAddrs := make(map[string]bool)
+	for acc := range GetMaccPerms() {
+		modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
+	}
+
+	// allow the following addresses to receive funds
+	delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())
+
+	return modAccAddrs
+}
+
+// initParamsKeeper initializes the params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) paramskeeper.Keeper {
+	paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey)
+
+	paramsKeeper.Subspace(authtypes.ModuleName)
+	paramsKeeper.Subspace(banktypes.ModuleName)
+	paramsKeeper.Subspace(stakingtypes.ModuleName)
+	paramsKeeper.Subspace(minttypes.ModuleName)
+	paramsKeeper.Subspace(distrtypes.ModuleName)
+	paramsKeeper.Subspace(slashingtypes.ModuleName)
+	paramsKeeper.Subspace(govtypes.ModuleName)
+	paramsKeeper.Subspace(crisistypes.ModuleName)
+
+	return paramsKeeper
+}
+```

The following diagram shows the current dependencies between keepers.
-![Keeper dependencies](/images/v0.50/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg)
+![Keeper dependencies](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg)

diff --git a/docs/sdk/v0.50/learn/advanced/proto-docs.mdx b/docs/sdk/v0.50/learn/advanced/proto-docs.mdx
index 71593226..0f3593eb 100644
--- a/docs/sdk/v0.50/learn/advanced/proto-docs.mdx
+++ b/docs/sdk/v0.50/learn/advanced/proto-docs.mdx
@@ -1,6 +1,6 @@
 ---
-title: "Protobuf Documentation"
-description: "Version: v0.50"
+title: Protobuf Documentation
+description: See Cosmos SDK Buf Proto-docs
 ---

 See [Cosmos SDK Buf Proto-docs](https://buf.build/cosmos/cosmos-sdk/docs/main)

diff --git a/docs/sdk/v0.50/learn/advanced/runtx_middleware.mdx b/docs/sdk/v0.50/learn/advanced/runtx_middleware.mdx
index 4b30dacc..8d462331 100644
--- a/docs/sdk/v0.50/learn/advanced/runtx_middleware.mdx
+++ b/docs/sdk/v0.50/learn/advanced/runtx_middleware.mdx
@@ -1,21 +1,126 @@
 ---
-title: "RunTx recovery middleware"
-description: "Version: v0.50"
+title: RunTx recovery middleware
 ---

-`BaseApp.runTx()` function handles Go panics that might occur during transactions execution, for example, keeper has faced an invalid state and paniced. Depending on the panic type different handler is used, for instance the default one prints an error log message. Recovery middleware is used to add custom panic recovery for Cosmos SDK application developers.
+The `BaseApp.runTx()` function handles Go panics that might occur during transaction execution, for example, when a keeper has hit an invalid state and panicked.
+Depending on the panic type, a different handler is used; for instance, the default one prints an error log message.
+Recovery middleware is used to add custom panic recovery for Cosmos SDK application developers.
-More context can found in the corresponding [ADR-022](/v0.50/build/architecture/adr-022-custom-panic-handling) and the implementation in [recovery.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/recovery.go).
+More context can be found in the corresponding [ADR-022](/docs/common/pages/adr-comprehensive#adr-022-custom-baseapp-panic-handling) and the implementation in [recovery.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/recovery.go).

-## Interface[​](#interface "Direct link to Interface")
+## Interface

-baseapp/recovery.go
-```
-loading...
-```
+```go expandable
+package baseapp
+
+import (
+	"fmt"
+	"runtime/debug"
+
+	errorsmod "cosmossdk.io/errors"
+	storetypes "cosmossdk.io/store/types"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+// RecoveryHandler handles a recover() object.
+// Return a non-nil error if recoveryObj was processed.
+// Return nil if recoveryObj was not processed.
+type RecoveryHandler func(recoveryObj interface{}) error
+
+// recoveryMiddleware is a wrapper for RecoveryHandler to create chained recovery handling.
+// Returns (recoveryMiddleware, nil) if recoveryObj was not processed and should be passed to the next middleware in the chain.
+// Returns (nil, error) if recoveryObj was processed and middleware chain processing should be stopped.
+type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)
+
+// processRecovery processes the recoveryMiddleware chain for a recover() object.
+// Chain processing stops on a non-nil error or when the chain is exhausted.
+func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
+	if middleware == nil {
+		return nil
+	}
+
+	next, err := middleware(recoveryObj)
+	if err != nil {
+		return err
+	}
+
+	return processRecovery(recoveryObj, next)
+}
+
+// newRecoveryMiddleware creates a RecoveryHandler middleware.
+func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/recovery.go#L14-L17)
+	return func(recoveryObj interface{}) (recoveryMiddleware, error) {
+		if err := handler(recoveryObj); err != nil {
+			return nil, err
+		}
+
+		return next, nil
+	}
+}
+
+// newOutOfGasRecoveryMiddleware creates a standard OutOfGas recovery middleware for the app.runTx method.
+func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) recoveryMiddleware {
+	handler := func(recoveryObj interface{}) error {
+		err, ok := recoveryObj.(storetypes.ErrorOutOfGas)
+		if !ok {
+			return nil
+		}
+
+		return errorsmod.Wrap(
+			sdkerrors.ErrOutOfGas, fmt.Sprintf(
+				"out of gas in location: %v; gasWanted: %d, gasUsed: %d",
+				err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(),
+			),
+		)
+	}
+
+	return newRecoveryMiddleware(handler, next)
+}
+
+// newDefaultRecoveryMiddleware creates a default (last in chain) recovery middleware for the app.runTx method.
+func newDefaultRecoveryMiddleware() recoveryMiddleware {
+	handler := func(recoveryObj interface{}) error {
+		return errorsmod.Wrap(
+			sdkerrors.ErrPanic, fmt.Sprintf(
+				"recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack()),
+			),
+		)
+	}
+
+	return newRecoveryMiddleware(handler, nil)
+}
+```

`recoveryObj` is the return value of the `recover()` function from the `builtin` Go package.

@@ -24,24 +129,51 @@ loading...
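The chaining semantics described in `recovery.go` can be sketched as a standalone program. This is a simplified illustration using only the standard library: the real `RecoveryHandler` and `recoveryMiddleware` types live unexported in `baseapp`, and `errorsmod.Wrap` / `storetypes.ErrorOutOfGas` are replaced here with stand-ins.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for baseapp's unexported types.
type RecoveryHandler func(recoveryObj interface{}) error

type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)

// processRecovery walks the chain until a handler claims the object or the chain ends.
func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
	if middleware == nil {
		return nil
	}
	next, err := middleware(recoveryObj)
	if err != nil {
		return err
	}
	return processRecovery(recoveryObj, next)
}

func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
	return func(recoveryObj interface{}) (recoveryMiddleware, error) {
		if err := handler(recoveryObj); err != nil {
			return nil, err
		}
		return next, nil
	}
}

// outOfGas is a stand-in for storetypes.ErrorOutOfGas.
type outOfGas struct{ Descriptor string }

// buildChain wires an out-of-gas handler in front of a catch-all default handler.
func buildChain() recoveryMiddleware {
	// Last-in-chain default handler: wraps anything it sees.
	defaultMW := newRecoveryMiddleware(func(obj interface{}) error {
		return fmt.Errorf("recovered: %v", obj)
	}, nil)

	// First handler only claims outOfGas objects; everything else falls through.
	return newRecoveryMiddleware(func(obj interface{}) error {
		if oog, ok := obj.(outOfGas); ok {
			return errors.New("out of gas in location: " + oog.Descriptor)
		}
		return nil
	}, defaultMW)
}

func main() {
	chain := buildChain()
	fmt.Println(processRecovery(outOfGas{Descriptor: "WritePerByte"}, chain))
	fmt.Println(processRecovery("some other panic", chain))
}
```

Because each handler returns `nil` for objects it does not recognize, the panic object falls through to the default middleware at the end of the chain, which matches how `runTx` installs `newDefaultRecoveryMiddleware` last.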
* RecoveryHandler returns `nil` if `recoveryObj` wasn't handled and should be passed to the next recovery middleware;
* RecoveryHandler returns a non-nil `error` if `recoveryObj` was handled;

-## Custom RecoveryHandler register[​](#custom-recoveryhandler-register "Direct link to Custom RecoveryHandler register")
+## Custom RecoveryHandler register

The `BaseApp.AddRunTxRecoveryHandler(handlers ...RecoveryHandler)` BaseApp method adds recovery middleware to the default recovery chain.

-## Example[​](#example "Direct link to Example")
+## Example

Let's assume we want to emit the "Consensus failure" chain state if some particular error occurred.

We have a module keeper that panics:

-```
-func (k FooKeeper) Do(obj interface{}) { if obj == nil { // that shouldn't happen, we need to crash the app err := errorsmod.Wrap(fooTypes.InternalError, "obj is nil") panic(err) }}
+```go
+func (k FooKeeper) Do(obj interface{}) {
+	if obj == nil {
+		// that shouldn't happen, we need to crash the app
+		err := errorsmod.Wrap(fooTypes.InternalError, "obj is nil")
+		panic(err)
+	}
+}
```

By default that panic would be recovered and an error message will be printed to log. To override that behaviour we should register a custom RecoveryHandler:

-```
-// Cosmos SDK application constructorcustomHandler := func(recoveryObj interface{}) error { err, ok := recoveryObj.(error) if !ok { return nil } if fooTypes.InternalError.Is(err) { panic(fmt.Errorf("FooKeeper did panic with error: %w", err)) } return nil}baseApp := baseapp.NewBaseApp(...)baseApp.AddRunTxRecoveryHandler(customHandler)
+```go expandable
+// Cosmos SDK application constructor
+customHandler := func(recoveryObj interface{}) error {
+	err, ok := recoveryObj.(error)
+	if !ok {
+		return nil
+	}
+	if fooTypes.InternalError.Is(err) {
+		panic(fmt.Errorf("FooKeeper did panic with error: %w", err))
+	}
+
+	return nil
+}
+
+baseApp := baseapp.NewBaseApp(...)
+baseApp.AddRunTxRecoveryHandler(customHandler)
```

diff --git a/docs/sdk/v0.50/learn/advanced/simulation.mdx b/docs/sdk/v0.50/learn/advanced/simulation.mdx
index 07986cff..0b0cc806 100644
--- a/docs/sdk/v0.50/learn/advanced/simulation.mdx
+++ b/docs/sdk/v0.50/learn/advanced/simulation.mdx
@@ -1,67 +1,102 @@
 ---
-title: "Cosmos Blockchain Simulator"
-description: "Version: v0.50"
+title: Cosmos Blockchain Simulator
+description: >-
+  The Cosmos SDK offers a full-fledged simulation framework to fuzz test every
+  message defined by a module.
 ---

-The Cosmos SDK offers a full fledged simulation framework to fuzz test every message defined by a module.
+The Cosmos SDK offers a full-fledged simulation framework to fuzz test every
+message defined by a module.

-On the Cosmos SDK, this functionality is provided by [`SimApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app_v2.go), which is a `Baseapp` application that is used for running the [`simulation`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/simulation) module. This module defines all the simulation logic as well as the operations for randomized parameters like accounts, balances etc.
+On the Cosmos SDK, this functionality is provided by [`SimApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app_v2.go), which is a
+`BaseApp` application that is used for running the [`simulation`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/simulation) module.
+This module defines all the simulation logic as well as the operations for
+randomized parameters like accounts, balances, etc.

-## Goals[​](#goals "Direct link to Goals")
+## Goals
The goal of this is to detect and debug failures that could halt a live chain, by providing logs and statistics about the operations run by the simulator as well as exporting the latest application state when a failure was found.
+The blockchain simulator tests how the blockchain application would behave under
+real life circumstances by generating and sending randomized messages.
+The goal of this is to detect and debug failures that could halt a live chain,
+by providing logs and statistics about the operations run by the simulator as
+well as exporting the latest application state when a failure was found.

-Its main difference with integration testing is that the simulator app allows you to pass parameters to customize the chain that's being simulated. This comes in handy when trying to reproduce bugs that were generated in the provided operations (randomized or not).
+Its main difference from integration testing is that the simulator app allows
+you to pass parameters to customize the chain that's being simulated.
+This comes in handy when trying to reproduce bugs that were generated in the
+provided operations (randomized or not).

-## Simulation commands[​](#simulation-commands "Direct link to Simulation commands")
+## Simulation commands

-The simulation app has different commands, each of which tests a different failure type:
+The simulation app has different commands, each of which tests a different
+failure type:

-* `AppImportExport`: The simulator exports the initial app state and then it creates a new app with the exported `genesis.json` as an input, checking for inconsistencies between the stores.
+* `AppImportExport`: The simulator exports the initial app state and then it
+  creates a new app with the exported `genesis.json` as an input, checking for
+  inconsistencies between the stores.
* `AppSimulationAfterImport`: Queues two simulations together. The first one provides the app state (*i.e.* genesis) to the second.
Useful to test software upgrades or hard-forks from a live chain.
* `AppStateDeterminism`: Checks that all the nodes return the same values, in the same order.
-* `BenchmarkInvariants`: Analysis of the performance of running all modules' invariants (*i.e* sequentially runs a [benchmark](https://pkg.go.dev/testing/#hdr-Benchmarks) test). An invariant checks for differences between the values that are on the store and the passive tracker. Eg: total coins held by accounts vs total supply tracker.
+* `BenchmarkInvariants`: Analysis of the performance of running all modules' invariants (*i.e.* sequentially runs a [benchmark](https://pkg.go.dev/testing/#hdr-Benchmarks) test). An invariant checks for
+  differences between the values that are on the store and the passive tracker. E.g.: total coins held by accounts vs the total supply tracker.
* `FullAppSimulation`: General simulation mode. Runs the chain and the specified operations for a given number of blocks. Tests that there are no `panics` on the simulation. It also runs invariant checks on every `Period`, but they are not benchmarked.

-Each simulation must receive a set of inputs (*i.e* flags) such as the number of blocks that the simulation is run, seed, block size, etc. Check the full list of flags [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/simulation/client/cli/flags.go#L35-L59).
+Each simulation must receive a set of inputs (*i.e.* flags) such as the number of
+blocks that the simulation is run for, seed, block size, etc.
+Check the full list of flags [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/simulation/client/cli/flags.go#L35-L59).

-## Simulator Modes[​](#simulator-modes "Direct link to Simulator Modes")
+## Simulator Modes

In addition to the various inputs and commands, the simulator runs in three modes:

-1. Completely random where the initial state, module parameters and simulation parameters are **pseudo-randomly generated**.
-2.
From a `genesis.json` file where the initial state and the module parameters are defined. This mode is helpful for running simulations on a known state such as a live network export where a new (mostly likely breaking) version of the application needs to be tested.
-3. From a `params.json` file where the initial state is pseudo-randomly generated but the module and simulation parameters can be provided manually. This allows for a more controlled and deterministic simulation setup while allowing the state space to still be pseudo-randomly simulated. The list of available parameters are listed [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/simulation/client/cli/flags.go#L59-L78).
-
-
-  These modes are not mutually exclusive. So you can for example run a randomly generated genesis state (`1`) with manually generated simulation params (`3`).
-
-
-## Usage[​](#usage "Direct link to Usage")
-
-This is a general example of how simulations are run. For more specific examples check the Cosmos SDK [Makefile](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/Makefile#L282-L318).
-
-```
- $ go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \ -run=TestApp \ ... -v -timeout 24h
+1. Completely random where the initial state, module parameters and simulation
+   parameters are **pseudo-randomly generated**.
+2. From a `genesis.json` file where the initial state and the module parameters are defined.
+   This mode is helpful for running simulations on a known state such as a live network export where a new (most likely breaking) version of the application needs to be tested.
+3. From a `params.json` file where the initial state is pseudo-randomly generated but the module and simulation parameters can be provided manually.
+   This allows for a more controlled and deterministic simulation setup while allowing the state space to still be pseudo-randomly simulated.
+   The list of available parameters is listed [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/simulation/client/cli/flags.go#L59-L78).
+
+
+These modes are not mutually exclusive. So you can for example run a randomly
+generated genesis state (`1`) with manually generated simulation params (`3`).
+
+
+## Usage
+
+This is a general example of how simulations are run. For more specific examples
+check the Cosmos SDK [Makefile](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/Makefile#L282-L318).
+
+```bash
+ $ go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \
+  -run=TestApp \
+  ...
+  -v -timeout 24h
```

-## Debugging Tips[​](#debugging-tips "Direct link to Debugging Tips")
+## Debugging Tips

Here are some suggestions when encountering a simulation failure:

-* Export the app state at the height where the failure was found. You can do this by passing the `-ExportStatePath` flag to the simulator.
-* Use `-Verbose` logs. They could give you a better hint on all the operations involved.
-* Reduce the simulation `-Period`. This will run the invariants checks more frequently.
+* Export the app state at the height where the failure was found. You can do this
+  by passing the `-ExportStatePath` flag to the simulator.
+* Use `-Verbose` logs. They could give you a better hint on all the operations
+  involved.
+* Reduce the simulation `-Period`. This will run the invariant checks more
+  frequently.
* Print all the failed invariants at once with `-PrintAllInvariants`.
-* Try using another `-Seed`. If it can reproduce the same error and if it fails sooner, you will spend less time running the simulations.
-* Reduce the `-NumBlocks` . How's the app state at the height previous to the failure?
-* Run invariants on every operation with `-SimulateEveryOperation`. *Note*: this will slow down your simulation **a lot**.
-* Try adding logs to operations that are not logged.
You will have to define a [Logger](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/keeper.go#L65-L68) on your `Keeper`.
+* Try using another `-Seed`. If it can reproduce the same error and if it fails
+  sooner, you will spend less time running the simulations.
+* Reduce `-NumBlocks`. How's the app state at the height prior to the
+  failure?
+* Run invariants on every operation with `-SimulateEveryOperation`. *Note*: this
+  will slow down your simulation **a lot**.
+* Try adding logs to operations that are not logged. You will have to define a
+  [Logger](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/keeper.go#L65-L68) on your `Keeper`.

-## Use simulation in your Cosmos SDK-based application[​](#use-simulation-in-your-cosmos-sdk-based-application "Direct link to Use simulation in your Cosmos SDK-based application")
+## Use simulation in your Cosmos SDK-based application

Learn how you can build the simulation into your Cosmos SDK-based application:

* Application Simulation Manager
-* [Building modules: Simulator](/v0.50/build/building-modules/simulator)
+* [Building modules: Simulator](/docs/sdk/v0.50/documentation/module-system/simulator)
* Simulator tests

diff --git a/docs/sdk/v0.50/learn/advanced/store.mdx b/docs/sdk/v0.50/learn/advanced/store.mdx
index e36fe686..74db1675 100644
--- a/docs/sdk/v0.50/learn/advanced/store.mdx
+++ b/docs/sdk/v0.50/learn/advanced/store.mdx
@@ -1,338 +1,11815 @@
 ---
-title: "Store"
-description: "Version: v0.50"
+title: Store
 ---

+## Synopsis
+
+A store is a data structure that holds the state of the application.
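The `BasicKVStore` contract that appears later on this page (`Get` returns `nil` for a missing key; `nil` keys panic) can be sketched with a toy in-memory store. This is not the SDK implementation, and the `memStore`/`newMemStore` names are invented for illustration:

```go
package main

import "fmt"

// memStore is a toy in-memory store following the BasicKVStore
// contract described later on this page: Get returns nil for
// missing keys, and nil keys (or nil values on Set) panic.
type memStore struct {
	data map[string][]byte
}

func newMemStore() *memStore {
	return &memStore{data: make(map[string][]byte)}
}

// Get returns nil if the key doesn't exist. Panics on nil key.
func (s *memStore) Get(key []byte) []byte {
	if key == nil {
		panic("nil key")
	}
	return s.data[string(key)]
}

// Set stores the value under key. Panics on nil key or value.
func (s *memStore) Set(key, value []byte) {
	if key == nil || value == nil {
		panic("nil key or value")
	}
	s.data[string(key)] = value
}

// Has reports whether the key exists.
func (s *memStore) Has(key []byte) bool {
	return s.Get(key) != nil
}

// Delete removes the key. Panics on nil key.
func (s *memStore) Delete(key []byte) {
	if key == nil {
		panic("nil key")
	}
	delete(s.data, string(key))
}

func main() {
	s := newMemStore()
	s.Set([]byte("balance/alice"), []byte("100"))
	fmt.Println(s.Has([]byte("balance/alice")))
	s.Delete([]byte("balance/alice"))
	fmt.Println(s.Get([]byte("balance/alice")) == nil)
}
```

In the real SDK, the same get/set/has/delete surface is what module keepers use against their own `KVStore`, obtained from the multistore via a capability key.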
+**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.50/learn/beginner/app-anatomy) + - - * [Anatomy of a Cosmos SDK application](/v0.50/learn/beginner/app-anatomy) - +## Introduction to Cosmos SDK Stores + +The Cosmos SDK comes with a large set of stores to persist the state of applications. By default, the main store of Cosmos SDK applications is a `multistore`, i.e. a store of stores. Developers can add any number of key-value stores to the multistore, depending on their application needs. The multistore exists to support the modularity of the Cosmos SDK, as it lets each module declare and manage their own subset of the state. Key-value stores in the multistore can only be accessed with a specific capability `key`, which is typically held in the [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper) of the module that declared the store. + +```text expandable ++-----------------------------------------------------+ +| | +| +--------------------------------------------+ | +| | | | +| | KVStore 1 - Manage by keeper of Module 1 | +| | | | +| +--------------------------------------------+ | +| | +| +--------------------------------------------+ | +| | | | +| | KVStore 2 - Manage by keeper of Module 2 | | +| | | | +| +--------------------------------------------+ | +| | +| +--------------------------------------------+ | +| | | | +| | KVStore 3 - Manage by keeper of Module 2 | | +| | | | +| +--------------------------------------------+ | +| | +| +--------------------------------------------+ | +| | | | +| | KVStore 4 - Manage by keeper of Module 3 | | +| | | | +| +--------------------------------------------+ | +| | +| +--------------------------------------------+ | +| | | | +| | KVStore 5 - Manage by keeper of Module 4 | | +| | | | +| +--------------------------------------------+ | +| | +| Main Multistore | +| | ++-----------------------------------------------------+ + + Application's State +``` + +### Store Interface + +At 
its very core, a Cosmos SDK `store` is an object that holds a `CacheWrapper` and has a `GetStoreType()` method:
+
+```go expandable
+package types
+
+import (
+	"fmt"
+	"io"
+
+	"github.com/cometbft/cometbft/proto/tendermint/crypto"
+	dbm "github.com/cosmos/cosmos-db"
+
+	"cosmossdk.io/store/metrics"
+	pruningtypes "cosmossdk.io/store/pruning/types"
+	snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+	GetStoreType() StoreType
+	CacheWrapper
+}
+
+// something that can persist to disk
+type Committer interface {
+	Commit() CommitID
+	LastCommitID() CommitID
+
+	// WorkingHash returns the hash of the KVStore's state before commit.
+	WorkingHash() []byte
+
+	SetPruning(pruningtypes.PruningOptions)
+	GetPruning() pruningtypes.PruningOptions
+}
+
+// Stores of MultiStore must implement CommitStore.
+type CommitStore interface {
+	Committer
+	Store
+}
+
+// Queryable allows a Store to expose internal state to the abci.Query
+// interface. Multistore can route requests to the proper Store.
+//
+// This is an optional, but useful extension to any CommitStore
+type Queryable interface {
+	Query(*RequestQuery) (*ResponseQuery, error)
+}
+
+type RequestQuery struct {
+	Data   []byte
+	Path   string
+	Height int64
+	Prove  bool
+}
+
+type ResponseQuery struct {
+	Code      uint32
+	Log       string
+	Info      string
+	Index     int64
+	Key       []byte
+	Value     []byte
+	ProofOps  *crypto.ProofOps
+	Height    int64
+	Codespace string
+}
+
+//----------------------------------------
+// MultiStore
+
+// StoreUpgrades defines a series of transformations to apply the multistore db upon load
+type StoreUpgrades struct {
+	Added   []string      `json:"added"`
+	Renamed []StoreRename `json:"renamed"`
+	Deleted []string      `json:"deleted"`
+}
+
+// StoreRename defines a name change of a sub-store.
+// All data previously under a PrefixStore with OldKey will be copied
+// to a PrefixStore with NewKey, then deleted from OldKey store.
+type StoreRename struct {
+	OldKey string `json:"old_key"`
+	NewKey string `json:"new_key"`
+}
+
+// IsAdded returns true if the given key should be added
+func (s *StoreUpgrades) IsAdded(key string) bool {
+	if s == nil {
+		return false
+	}
+	for _, added := range s.Added {
+		if key == added {
+			return true
+		}
+	}
+	return false
+}
+
+// IsDeleted returns true if the given key should be deleted
+func (s *StoreUpgrades) IsDeleted(key string) bool {
+	if s == nil {
+		return false
+	}
+	for _, d := range s.Deleted {
+		if d == key {
+			return true
+		}
+	}
+	return false
+}
+
+// RenamedFrom returns the oldKey if it was renamed
+// Returns "" if it was not renamed
+func (s *StoreUpgrades) RenamedFrom(key string) string {
+	if s == nil {
+		return ""
+	}
+	for _, re := range s.Renamed {
+		if re.NewKey == key {
+			return re.OldKey
+		}
+	}
+	return ""
+}
+
+type MultiStore interface {
+	Store
+
+	// Branches MultiStore into a cached storage object.
+	// NOTE: Caller should probably not call .Write() on each, but
+	// call CacheMultiStore.Write().
+	CacheMultiStore() CacheMultiStore
+
+	// CacheMultiStoreWithVersion branches the underlying MultiStore where
+	// each stored is loaded at a specific version (height).
+	CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)
+
+	// Convenience for fetching substores.
+	// If the store does not exist, panics.
+	GetStore(StoreKey) Store
+	GetKVStore(StoreKey) KVStore
+
+	// TracingEnabled returns if tracing is enabled for the MultiStore.
+	TracingEnabled() bool
+
+	// SetTracer sets the tracer for the MultiStore that the underlying
+	// stores will utilize to trace operations. The modified MultiStore is
+	// returned.
+	SetTracer(w io.Writer) MultiStore
+
+	// SetTracingContext sets the tracing context for a MultiStore. It is
+	// implied that the caller should update the context when necessary between
+	// tracing operations. The modified MultiStore is returned.
+	SetTracingContext(TraceContext) MultiStore
+
+	// LatestVersion returns the latest version in the store
+	LatestVersion() int64
+}
+
+// From MultiStore.CacheMultiStore()....
+type CacheMultiStore interface {
+	MultiStore
+	Write() // Writes operations to underlying KVStore
+}
+
+// CommitMultiStore is an interface for a MultiStore without cache capabilities.
+type CommitMultiStore interface {
+	Committer
+	MultiStore
+	snapshottypes.Snapshotter
+
+	// Mount a store of type using the given db.
+	// If db == nil, the new store will use the CommitMultiStore db.
+	MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)
+
+	// Panics on a nil key.
+	GetCommitStore(key StoreKey) CommitStore
+
+	// Panics on a nil key.
+	GetCommitKVStore(key StoreKey) CommitKVStore
+
+	// Load the latest persisted version. Called once after all calls to
+	// Mount*Store() are complete.
+	LoadLatestVersion() error
+
+	// LoadLatestVersionAndUpgrade will load the latest version, but also
+	// rename/delete/create sub-store keys, before registering all the keys
+	// in order to handle breaking formats in migrations
+	LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error
+
+	// LoadVersionAndUpgrade will load the named version, but also
+	// rename/delete/create sub-store keys, before registering all the keys
+	// in order to handle breaking formats in migrations
+	LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error
+
+	// Load a specific persisted version. When you load an old version, or when
+	// the last commit attempt didn't complete, the next commit after loading
+	// must be idempotent (return the same commit id). Otherwise the behavior is
+	// undefined.
+	LoadVersion(ver int64) error
+
+	// Set an inter-block (persistent) cache that maintains a mapping from
+	// StoreKeys to CommitKVStores.
+	SetInterBlockCache(MultiStorePersistentCache)
+
+	// SetInitialVersion sets the initial version of the IAVL tree.
It is used when
+	// starting a new chain at an arbitrary height.
+	SetInitialVersion(version int64) error
+
+	// SetIAVLCacheSize sets the cache size of the IAVL tree.
+	SetIAVLCacheSize(size int)
+
+	// SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
+	SetIAVLDisableFastNode(disable bool)
+
+	// RollbackToVersion rollback the db to specific version(height).
+	RollbackToVersion(version int64) error
+
+	// ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
+	ListeningEnabled(key StoreKey) bool
+
+	// AddListeners adds a listener for the KVStore belonging to the provided StoreKey
+	AddListeners(keys []StoreKey)
+
+	// PopStateCache returns the accumulated state change messages from the CommitMultiStore
+	PopStateCache() []*StoreKVPair
+
+	// SetMetrics sets the metrics for the KVStore
+	SetMetrics(metrics metrics.StoreMetrics)
+}
+
+//---------subsp-------------------------------
+// KVStore
+
+// BasicKVStore is a simple interface to get/set data
+type BasicKVStore interface {
+	// Get returns nil if key doesn't exist. Panics on nil key.
+	Get(key []byte) []byte
+
+	// Has checks if a key exists. Panics on nil key.
+	Has(key []byte) bool
+
+	// Set sets the key. Panics on nil key or value.
+	Set(key, value []byte)
+
+	// Delete deletes the key. Panics on nil key.
+	Delete(key []byte)
+}
+
+// KVStore additionally provides iteration and deletion
+type KVStore interface {
+	Store
+	BasicKVStore
+
+	// Iterator over a domain of keys in ascending order. End is exclusive.
+	// Start must be less than end, or the Iterator is invalid.
+	// Iterator must be closed by caller.
+	// To iterate over entire domain, use store.Iterator(nil, nil)
+	// CONTRACT: No writes may happen within a domain while an iterator exists over it.
+	// Exceptionally allowed for cachekv.Store, safe to write in the modules.
+	Iterator(start, end []byte) Iterator
+
+	// Iterator over a domain of keys in descending order. End is exclusive.
+	// Start must be less than end, or the Iterator is invalid.
+	// Iterator must be closed by caller.
+	// CONTRACT: No writes may happen within a domain while an iterator exists over it.
+	// Exceptionally allowed for cachekv.Store, safe to write in the modules.
+	ReverseIterator(start, end []byte) Iterator
+}
+
+// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator
+
+// CacheKVStore branches a KVStore and provides read cache functionality.
+// After calling .Write() on the CacheKVStore, all previously created
+// CacheKVStores on the object expire.
+type CacheKVStore interface {
+	KVStore
+
+	// Writes operations to underlying KVStore
+	Write()
+}
+
+// CommitKVStore is an interface for MultiStore.
+type CommitKVStore interface {
+	Committer
+	KVStore
+}
+
+//----------------------------------------
+// CacheWrap
+
+// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
+// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
+// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
+// HeapStore, SpaceStore, etc.
+type CacheWrap interface {
+	// Write syncs with the underlying store.
+	Write()
+
+	// CacheWrap recursively wraps again.
+	CacheWrap() CacheWrap
+
+	// CacheWrapWithTrace recursively wraps again with tracing enabled.
+	CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+type CacheWrapper interface {
+	// CacheWrap branches a store.
+	CacheWrap() CacheWrap
+
+	// CacheWrapWithTrace branches a store with tracing enabled.
+	CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+func (cid CommitID) IsZero() bool {
+	return cid.Version == 0 && len(cid.Hash) == 0
+}
+
+func (cid CommitID) String() string {
+	return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
+}
+
+//----------------------------------------
+// Store types
+
+// kind of store
+type StoreType int
+
+const (
+	StoreTypeMulti StoreType = iota
+	StoreTypeDB
+	StoreTypeIAVL
+	StoreTypeTransient
+	StoreTypeMemory
+	StoreTypeSMT
+	StoreTypePersistent
+)
+
+func (st StoreType) String() string {
+	switch st {
+	case StoreTypeMulti:
+		return "StoreTypeMulti"
+	case StoreTypeDB:
+		return "StoreTypeDB"
+	case StoreTypeIAVL:
+		return "StoreTypeIAVL"
+	case StoreTypeTransient:
+		return "StoreTypeTransient"
+	case StoreTypeMemory:
+		return "StoreTypeMemory"
+	case StoreTypeSMT:
+		return "StoreTypeSMT"
+	case StoreTypePersistent:
+		return "StoreTypePersistent"
+	}
+
+	return "unknown store type"
+}
+
+//----------------------------------------
+// Keys for accessing substores
+
+// StoreKey is a key used to index stores in a MultiStore.
+type StoreKey interface {
+	Name() string
+	String() string
+}
+
+// CapabilityKey represent the Cosmos SDK keys for object-capability
+// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
+type CapabilityKey StoreKey
+
+// KVStoreKey is used for accessing substores.
+// Only the pointer value should ever be used - it functions as a capabilities key.
+type KVStoreKey struct {
+	name string
+}
+
+// NewKVStoreKey returns a new pointer to a KVStoreKey.
+// Use a pointer so keys don't collide.
+func NewKVStoreKey(name string) *KVStoreKey {
+	if name == "" {
+		panic("empty key name not allowed")
+	}
+	return &KVStoreKey{
+		name: name,
+	}
+}
+
+// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
+	assertNoCommonPrefix(names)
+	keys := make(map[string]*KVStoreKey, len(names))
+	for _, n := range names {
+		keys[n] = NewKVStoreKey(n)
+	}
+	return keys
+}
+
+func (key *KVStoreKey) Name() string {
+	return key.name
+}
+
+func (key *KVStoreKey) String() string {
+	return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
+}
+
+// TransientStoreKey is used for indexing transient stores in a MultiStore
+type TransientStoreKey struct {
+	name string
+}
+
+// Constructs new TransientStoreKey
+// Must return a pointer according to the ocap principle
+func NewTransientStoreKey(name string) *TransientStoreKey {
+	return &TransientStoreKey{
+		name: name,
+	}
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) Name() string {
+	return key.name
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) String() string {
+	return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
+}
+
+// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
+type MemoryStoreKey struct {
+	name string
+}
+
+func NewMemoryStoreKey(name string) *MemoryStoreKey {
+	return &MemoryStoreKey{
+		name: name,
+	}
+}
+
+// Name returns the name of the MemoryStoreKey.
+func (key *MemoryStoreKey) Name() string {
+	return key.name
+}
+
+// String returns a stringified representation of the MemoryStoreKey.
+func (key *MemoryStoreKey) String() string {
+	return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
+}
+
+//----------------------------------------
+
+// TraceContext contains TraceKVStore context data. It will be written with
+// every trace operation.
+type TraceContext map[string]interface{}
+
+// Clone clones tc into another instance of TraceContext.
+func (tc TraceContext) Clone() TraceContext {
+	ret := TraceContext{}
+	for k, v := range tc {
+		ret[k] = v
+	}
+	return ret
+}
+
+// Merge merges value of newTc into tc.
+func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
+	if tc == nil {
+		tc = TraceContext{}
+	}
+	for k, v := range newTc {
+		tc[k] = v
+	}
+	return tc
+}
+
+// MultiStorePersistentCache defines an interface which provides inter-block
+// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
+type MultiStorePersistentCache interface {
+	// Wrap and return the provided CommitKVStore with an inter-block (persistent)
+	// cache.
+	GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore
+
+	// Return the underlying CommitKVStore for a StoreKey.
+	Unwrap(key StoreKey) CommitKVStore
+
+	// Reset the entire set of internal caches.
+	Reset()
+}
+
+// StoreWithInitialVersion is a store that can have an arbitrary initial
+// version.
+type StoreWithInitialVersion interface {
+	// SetInitialVersion sets the initial version of the IAVL tree. It is used when
+	// starting a new chain at an arbitrary height.
+	SetInitialVersion(version int64)
+}
+
+// NewTransientStoreKeys constructs a new map of TransientStoreKey's
+// Must return pointers according to the ocap principle
+// The function will panic if there is a potential conflict in names
+// see `assertNoCommonPrefix` function for more details.
+func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
+	assertNoCommonPrefix(names)
+	keys := make(map[string]*TransientStoreKey)
+	for _, n := range names {
+		keys[n] = NewTransientStoreKey(n)
+	}
+	return keys
+}
+
+// NewMemoryStoreKeys constructs a new map matching store key names to their
+// respective MemoryStoreKey references.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+	assertNoCommonPrefix(names)
+	keys := make(map[string]*MemoryStoreKey)
+	for _, n := range names {
+		keys[n] = NewMemoryStoreKey(n)
+	}
+	return keys
+}
+```
+
+The `GetStoreType` is a simple method that returns the type of store, whereas a `CacheWrapper` is a simple interface that implements store read caching and write branching through the `Write` method:
+
+```go expandable
+package types
+
+import (
+	"fmt"
+	"io"
+
+	"github.com/cometbft/cometbft/proto/tendermint/crypto"
+	dbm "github.com/cosmos/cosmos-db"
+
+	"cosmossdk.io/store/metrics"
+	pruningtypes "cosmossdk.io/store/pruning/types"
+	snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+	GetStoreType() StoreType
+	CacheWrapper
+}
+
+// something that can persist to disk
+type Committer interface {
+	Commit() CommitID
+	LastCommitID() CommitID
+
+	// WorkingHash returns the hash of the KVStore's state before commit.
+	WorkingHash() []byte
+
+	SetPruning(pruningtypes.PruningOptions)
+	GetPruning() pruningtypes.PruningOptions
+}
+
+// Stores of MultiStore must implement CommitStore.
+type CommitStore interface {
+	Committer
+	Store
+}
+
+// Queryable allows a Store to expose internal state to the abci.Query
+// interface. Multistore can route requests to the proper Store.
+//
+// This is an optional, but useful extension to any CommitStore
+type Queryable interface {
+	Query(*RequestQuery) (*ResponseQuery, error)
+}
+
+type RequestQuery struct {
+	Data   []byte
+	Path   string
+	Height int64
+	Prove  bool
+}
+
+type ResponseQuery struct {
+	Code      uint32
+	Log       string
+	Info      string
+	Index     int64
+	Key       []byte
+	Value     []byte
+	ProofOps  *crypto.ProofOps
+	Height    int64
+	Codespace string
+}
+
+//----------------------------------------
+// MultiStore
+
+// StoreUpgrades defines a series of transformations to apply the multistore db upon load
+type StoreUpgrades struct {
+	Added   []string      `json:"added"`
+	Renamed []StoreRename `json:"renamed"`
+	Deleted []string      `json:"deleted"`
+}
+
+// StoreRename defines a name change of a sub-store.
+// All data previously under a PrefixStore with OldKey will be copied
+// to a PrefixStore with NewKey, then deleted from OldKey store.
+type StoreRename struct {
+	OldKey string `json:"old_key"`
+	NewKey string `json:"new_key"`
+}
+
+// IsAdded returns true if the given key should be added
+func (s *StoreUpgrades) IsAdded(key string) bool {
+	if s == nil {
+		return false
+	}
+	for _, added := range s.Added {
+		if key == added {
+			return true
+		}
+	}
+	return false
+}
+
+// IsDeleted returns true if the given key should be deleted
+func (s *StoreUpgrades) IsDeleted(key string) bool {
+	if s == nil {
+		return false
+	}
+	for _, d := range s.Deleted {
+		if d == key {
+			return true
+		}
+	}
+	return false
+}
+
+// RenamedFrom returns the oldKey if it was renamed
+// Returns "" if it was not renamed
+func (s *StoreUpgrades) RenamedFrom(key string) string {
+	if s == nil {
+		return ""
+	}
+	for _, re := range s.Renamed {
+		if re.NewKey == key {
+			return re.OldKey
+		}
+	}
+	return ""
+}
+
+type MultiStore interface {
+	Store
+
+	// Branches MultiStore into a cached storage object.
+	// NOTE: Caller should probably not call .Write() on each, but
+	// call CacheMultiStore.Write().
+	CacheMultiStore() CacheMultiStore
+
+	// CacheMultiStoreWithVersion branches the underlying MultiStore where
+	// each stored is loaded at a specific version (height).
+	CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)
+
+	// Convenience for fetching substores.
+	// If the store does not exist, panics.
+	GetStore(StoreKey) Store
+	GetKVStore(StoreKey) KVStore
+
+	// TracingEnabled returns if tracing is enabled for the MultiStore.
+	TracingEnabled() bool
+
+	// SetTracer sets the tracer for the MultiStore that the underlying
+	// stores will utilize to trace operations. The modified MultiStore is
+	// returned.
+	SetTracer(w io.Writer) MultiStore
+
+	// SetTracingContext sets the tracing context for a MultiStore. It is
+	// implied that the caller should update the context when necessary between
+	// tracing operations. The modified MultiStore is returned.
+	SetTracingContext(TraceContext) MultiStore
+
+	// LatestVersion returns the latest version in the store
+	LatestVersion() int64
+}
+
+// From MultiStore.CacheMultiStore()....
+type CacheMultiStore interface {
+	MultiStore
+	Write() // Writes operations to underlying KVStore
+}
+
+// CommitMultiStore is an interface for a MultiStore without cache capabilities.
+type CommitMultiStore interface {
+	Committer
+	MultiStore
+	snapshottypes.Snapshotter
+
+	// Mount a store of type using the given db.
+	// If db == nil, the new store will use the CommitMultiStore db.
+	MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)
+
+	// Panics on a nil key.
+	GetCommitStore(key StoreKey) CommitStore
+
+	// Panics on a nil key.
+	GetCommitKVStore(key StoreKey) CommitKVStore
+
+	// Load the latest persisted version. Called once after all calls to
+	// Mount*Store() are complete.
+	LoadLatestVersion() error
+
+	// LoadLatestVersionAndUpgrade will load the latest version, but also
+	// rename/delete/create sub-store keys, before registering all the keys
+	// in order to handle breaking formats in migrations
+	LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error
+
+	// LoadVersionAndUpgrade will load the named version, but also
+	// rename/delete/create sub-store keys, before registering all the keys
+	// in order to handle breaking formats in migrations
+	LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error
+
+	// Load a specific persisted version. When you load an old version, or when
+	// the last commit attempt didn't complete, the next commit after loading
+	// must be idempotent (return the same commit id). Otherwise the behavior is
+	// undefined.
+	LoadVersion(ver int64) error
+
+	// Set an inter-block (persistent) cache that maintains a mapping from
+	// StoreKeys to CommitKVStores.
+	SetInterBlockCache(MultiStorePersistentCache)
+
+	// SetInitialVersion sets the initial version of the IAVL tree. It is used when
+	// starting a new chain at an arbitrary height.
+	SetInitialVersion(version int64) error
+
+	// SetIAVLCacheSize sets the cache size of the IAVL tree.
+	SetIAVLCacheSize(size int)
+
+	// SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
+	SetIAVLDisableFastNode(disable bool)
+
+	// RollbackToVersion rollback the db to specific version(height).
+ RollbackToVersion(version int64) + +error + + / ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey + ListeningEnabled(key StoreKey) + +bool + + / AddListeners adds a listener for the KVStore belonging to the provided StoreKey + AddListeners(keys []StoreKey) + + / PopStateCache returns the accumulated state change messages from the CommitMultiStore + PopStateCache() []*StoreKVPair + + / SetMetrics sets the metrics for the KVStore + SetMetrics(metrics metrics.StoreMetrics) +} + +/---------subsp------------------------------- +/ KVStore + +/ BasicKVStore is a simple interface to get/set data +type BasicKVStore interface { + / Get returns nil if key doesn't exist. Panics on nil key. + Get(key []byte) []byte + + / Has checks if a key exists. Panics on nil key. + Has(key []byte) + +bool + + / Set sets the key. Panics on nil key or value. + Set(key, value []byte) + + / Delete deletes the key. Panics on nil key. + Delete(key []byte) +} + +/ KVStore additionally provides iteration and deletion +type KVStore interface { + Store + BasicKVStore + + / Iterator over a domain of keys in ascending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / To iterate over entire domain, use store.Iterator(nil, nil) + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + Iterator(start, end []byte) + +Iterator + + / Iterator over a domain of keys in descending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + ReverseIterator(start, end []byte) + +Iterator +} + +/ Iterator is an alias db's Iterator for convenience. 
+type Iterator = dbm.Iterator + +/ CacheKVStore branches a KVStore and provides read cache functionality. +/ After calling .Write() + +on the CacheKVStore, all previously created +/ CacheKVStores on the object expire. +type CacheKVStore interface { + KVStore + + / Writes operations to underlying KVStore + Write() +} + +/ CommitKVStore is an interface for MultiStore. +type CommitKVStore interface { + Committer + KVStore +} + +/---------------------------------------- +/ CacheWrap + +/ CacheWrap is the most appropriate interface for store ephemeral branching and cache. +/ For example, IAVLStore.CacheWrap() + +returns a CacheKVStore. CacheWrap should not return +/ a Committer, since Commit ephemeral store make no sense. It can return KVStore, +/ HeapStore, SpaceStore, etc. +type CacheWrap interface { + / Write syncs with the underlying store. + Write() + + / CacheWrap recursively wraps again. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace recursively wraps again with tracing enabled. + CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +type CacheWrapper interface { + / CacheWrap branches a store. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace branches a store with tracing enabled. 
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + for k, v := range tc { + ret[k] = v +} + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + for k, v := range newTc { + tc[k] = v +} + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
+func NewMemoryStoreKeys(names ...string) + +map[string]*MemoryStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*MemoryStoreKey) + for _, n := range names { + keys[n] = NewMemoryStoreKey(n) +} + +return keys +} +``` + +Branching and cache is used ubiquitously in the Cosmos SDK and required to be implemented on every store type. A storage branch creates an isolated, ephemeral branch of a store that can be passed around and updated without affecting the main underlying store. This is used to trigger temporary state-transitions that may be reverted later should an error occur. Read more about it in [context](/docs/sdk/v0.50/learn/advanced/context#Store-branching) + +### Commit Store + +A commit store is a store that has the ability to commit changes made to the underlying tree or db. The Cosmos SDK differentiates simple stores from commit stores by extending the basic store interfaces with a `Committer`: + +```go expandable +package types + +import ( + + "fmt" + "io" + "github.com/cometbft/cometbft/proto/tendermint/crypto" + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/store/metrics" + pruningtypes "cosmossdk.io/store/pruning/types" + snapshottypes "cosmossdk.io/store/snapshots/types" +) + +type Store interface { + GetStoreType() + +StoreType + CacheWrapper +} + +/ something that can persist to disk +type Committer interface { + Commit() + +CommitID + LastCommitID() + +CommitID + + / WorkingHash returns the hash of the KVStore's state before commit. + WorkingHash() []byte + + SetPruning(pruningtypes.PruningOptions) + +GetPruning() + +pruningtypes.PruningOptions +} + +/ Stores of MultiStore must implement CommitStore. +type CommitStore interface { + Committer + Store +} + +/ Queryable allows a Store to expose internal state to the abci.Query +/ interface. Multistore can route requests to the proper Store. 
+//
+// This is an optional, but useful extension to any CommitStore
+type Queryable interface {
+    Query(*RequestQuery) (*ResponseQuery, error)
+}
+
+type RequestQuery struct {
+    Data   []byte
+    Path   string
+    Height int64
+    Prove  bool
+}
+
+type ResponseQuery struct {
+    Code      uint32
+    Log       string
+    Info      string
+    Index     int64
+    Key       []byte
+    Value     []byte
+    ProofOps  *crypto.ProofOps
+    Height    int64
+    Codespace string
+}
+
+//----------------------------------------
+// MultiStore
+
+// StoreUpgrades defines a series of transformations to apply the multistore db upon load
+type StoreUpgrades struct {
+    Added   []string      `json:"added"`
+    Renamed []StoreRename `json:"renamed"`
+    Deleted []string      `json:"deleted"`
+}
+
+// StoreRename defines a name change of a sub-store.
+// All data previously under a PrefixStore with OldKey will be copied
+// to a PrefixStore with NewKey, then deleted from OldKey store.
+type StoreRename struct {
+    OldKey string `json:"old_key"`
+    NewKey string `json:"new_key"`
+}
+
+// IsAdded returns true if the given key should be added
+func (s *StoreUpgrades) IsAdded(key string) bool {
+    if s == nil {
+        return false
+    }
+    for _, added := range s.Added {
+        if key == added {
+            return true
+        }
+    }
+    return false
+}
+
+// IsDeleted returns true if the given key should be deleted
+func (s *StoreUpgrades) IsDeleted(key string) bool {
+    if s == nil {
+        return false
+    }
+    for _, d := range s.Deleted {
+        if d == key {
+            return true
+        }
+    }
+    return false
+}
+
+// RenamedFrom returns the oldKey if it was renamed
+// Returns "" if it was not renamed
+func (s *StoreUpgrades) RenamedFrom(key string) string {
+    if s == nil {
+        return ""
+    }
+    for _, re := range s.Renamed {
+        if re.NewKey == key {
+            return re.OldKey
+        }
+    }
+    return ""
+}
+
+type MultiStore interface {
+    Store
+
+    // Branches MultiStore into a cached storage object.
+    // NOTE: Caller should probably not call .Write() on each, but
+    // call CacheMultiStore.Write().
+    CacheMultiStore() CacheMultiStore
+
+    // CacheMultiStoreWithVersion branches the underlying MultiStore where
+    // each stored is loaded at a specific version (height).
+    CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)
+
+    // Convenience for fetching substores.
+    // If the store does not exist, panics.
+    GetStore(StoreKey) Store
+    GetKVStore(StoreKey) KVStore
+
+    // TracingEnabled returns if tracing is enabled for the MultiStore.
+    TracingEnabled() bool
+
+    // SetTracer sets the tracer for the MultiStore that the underlying
+    // stores will utilize to trace operations. The modified MultiStore is
+    // returned.
+    SetTracer(w io.Writer) MultiStore
+
+    // SetTracingContext sets the tracing context for a MultiStore. It is
+    // implied that the caller should update the context when necessary between
+    // tracing operations. The modified MultiStore is returned.
+    SetTracingContext(TraceContext) MultiStore
+
+    // LatestVersion returns the latest version in the store
+    LatestVersion() int64
+}
+
+// From MultiStore.CacheMultiStore()....
+type CacheMultiStore interface {
+    MultiStore
+    Write() // Writes operations to underlying KVStore
+}
+
+// CommitMultiStore is an interface for a MultiStore without cache capabilities.
+type CommitMultiStore interface {
+    Committer
+    MultiStore
+    snapshottypes.Snapshotter
+
+    // Mount a store of type using the given db.
+    // If db == nil, the new store will use the CommitMultiStore db.
+    MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)
+
+    // Panics on a nil key.
+    GetCommitStore(key StoreKey) CommitStore
+
+    // Panics on a nil key.
+    GetCommitKVStore(key StoreKey) CommitKVStore
+
+    // Load the latest persisted version. Called once after all calls to
+    // Mount*Store() are complete.
+    LoadLatestVersion() error
+
+    // LoadLatestVersionAndUpgrade will load the latest version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error
+
+    // LoadVersionAndUpgrade will load the named version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error
+
+    // Load a specific persisted version. When you load an old version, or when
+    // the last commit attempt didn't complete, the next commit after loading
+    // must be idempotent (return the same commit id). Otherwise the behavior is
+    // undefined.
+    LoadVersion(ver int64) error
+
+    // Set an inter-block (persistent) cache that maintains a mapping from
+    // StoreKeys to CommitKVStores.
+    SetInterBlockCache(MultiStorePersistentCache)
+
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64) error
+
+    // SetIAVLCacheSize sets the cache size of the IAVL tree.
+    SetIAVLCacheSize(size int)
+
+    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
+    SetIAVLDisableFastNode(disable bool)
+
+    // RollbackToVersion rollback the db to specific version(height).
+    RollbackToVersion(version int64) error
+
+    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
+    ListeningEnabled(key StoreKey) bool
+
+    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
+    AddListeners(keys []StoreKey)
+
+    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
+    PopStateCache() []*StoreKVPair
+
+    // SetMetrics sets the metrics for the KVStore
+    SetMetrics(metrics metrics.StoreMetrics)
+}
+
+//---------subsp-------------------------------
+// KVStore
+
+// BasicKVStore is a simple interface to get/set data
+type BasicKVStore interface {
+    // Get returns nil if key doesn't exist. Panics on nil key.
+    Get(key []byte) []byte
+
+    // Has checks if a key exists. Panics on nil key.
+    Has(key []byte) bool
+
+    // Set sets the key. Panics on nil key or value.
+    Set(key, value []byte)
+
+    // Delete deletes the key. Panics on nil key.
+    Delete(key []byte)
+}
+
+// KVStore additionally provides iteration and deletion
+type KVStore interface {
+    Store
+    BasicKVStore
+
+    // Iterator over a domain of keys in ascending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // To iterate over entire domain, use store.Iterator(nil, nil)
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    Iterator(start, end []byte) Iterator
+
+    // Iterator over a domain of keys in descending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    ReverseIterator(start, end []byte) Iterator
+}
+
+// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator
+
+// CacheKVStore branches a KVStore and provides read cache functionality.
+// After calling .Write() on the CacheKVStore, all previously created
+// CacheKVStores on the object expire.
+type CacheKVStore interface {
+    KVStore
+
+    // Writes operations to underlying KVStore
+    Write()
+}
+
+// CommitKVStore is an interface for MultiStore.
+type CommitKVStore interface {
+    Committer
+    KVStore
+}
+
+//----------------------------------------
+// CacheWrap
+
+// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
+// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
+// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
+// HeapStore, SpaceStore, etc.
+type CacheWrap interface {
+    // Write syncs with the underlying store.
+    Write()
+
+    // CacheWrap recursively wraps again.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace recursively wraps again with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+type CacheWrapper interface {
+    // CacheWrap branches a store.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace branches a store with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+func (cid CommitID) IsZero() bool {
+    return cid.Version == 0 && len(cid.Hash) == 0
+}
+
+func (cid CommitID) String() string {
+    return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
+}
+
+//----------------------------------------
+// Store types
+
+// kind of store
+type StoreType int
+
+const (
+    StoreTypeMulti StoreType = iota
+    StoreTypeDB
+    StoreTypeIAVL
+    StoreTypeTransient
+    StoreTypeMemory
+    StoreTypeSMT
+    StoreTypePersistent
+)
+
+func (st StoreType) String() string {
+    switch st {
+    case StoreTypeMulti:
+        return "StoreTypeMulti"
+    case StoreTypeDB:
+        return "StoreTypeDB"
+    case StoreTypeIAVL:
+        return "StoreTypeIAVL"
+    case StoreTypeTransient:
+        return "StoreTypeTransient"
+    case StoreTypeMemory:
+        return "StoreTypeMemory"
+    case StoreTypeSMT:
+        return "StoreTypeSMT"
+    case StoreTypePersistent:
+        return "StoreTypePersistent"
+    }
+    return "unknown store type"
+}
+
+//----------------------------------------
+// Keys for accessing substores
+
+// StoreKey is a key used to index stores in a MultiStore.
+type StoreKey interface {
+    Name() string
+    String() string
+}
+
+// CapabilityKey represent the Cosmos SDK keys for object-capability
+// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
+type CapabilityKey StoreKey
+
+// KVStoreKey is used for accessing substores.
+// Only the pointer value should ever be used - it functions as a capabilities key.
+type KVStoreKey struct {
+    name string
+}
+
+// NewKVStoreKey returns a new pointer to a KVStoreKey.
+// Use a pointer so keys don't collide.
+func NewKVStoreKey(name string) *KVStoreKey {
+    if name == "" {
+        panic("empty key name not allowed")
+    }
+    return &KVStoreKey{
+        name: name,
+    }
+}
+
+// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*KVStoreKey, len(names))
+    for _, n := range names {
+        keys[n] = NewKVStoreKey(n)
+    }
+    return keys
+}
+
+func (key *KVStoreKey) Name() string {
+    return key.name
+}
+
+func (key *KVStoreKey) String() string {
+    return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
+}
+
+// TransientStoreKey is used for indexing transient stores in a MultiStore
+type TransientStoreKey struct {
+    name string
+}
+
+// Constructs new TransientStoreKey
+// Must return a pointer according to the ocap principle
+func NewTransientStoreKey(name string) *TransientStoreKey {
+    return &TransientStoreKey{
+        name: name,
+    }
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) Name() string {
+    return key.name
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) String() string {
+    return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
+}
+
+// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
+type MemoryStoreKey struct {
+    name string
+}
+
+func NewMemoryStoreKey(name string) *MemoryStoreKey {
+    return &MemoryStoreKey{
+        name: name,
+    }
+}
+
+// Name returns the name of the MemoryStoreKey.
+func (key *MemoryStoreKey) Name() string {
+    return key.name
+}
+
+// String returns a stringified representation of the MemoryStoreKey.
+func (key *MemoryStoreKey) String() string {
+    return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
+}
+
+//----------------------------------------
+
+// TraceContext contains TraceKVStore context data. It will be written with
+// every trace operation.
+type TraceContext map[string]interface{}
+
+// Clone clones tc into another instance of TraceContext.
+func (tc TraceContext) Clone() TraceContext {
+    ret := TraceContext{}
+    for k, v := range tc {
+        ret[k] = v
+    }
+    return ret
+}
+
+// Merge merges value of newTc into tc.
+func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
+    if tc == nil {
+        tc = TraceContext{}
+    }
+    for k, v := range newTc {
+        tc[k] = v
+    }
+    return tc
+}
+
+// MultiStorePersistentCache defines an interface which provides inter-block
+// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
+type MultiStorePersistentCache interface {
+    // Wrap and return the provided CommitKVStore with an inter-block (persistent)
+    // cache.
+    GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore
+
+    // Return the underlying CommitKVStore for a StoreKey.
+    Unwrap(key StoreKey) CommitKVStore
+
+    // Reset the entire set of internal caches.
+    Reset()
+}
+
+// StoreWithInitialVersion is a store that can have an arbitrary initial
+// version.
+type StoreWithInitialVersion interface {
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64)
+}
+
+// NewTransientStoreKeys constructs a new map of TransientStoreKey's
+// Must return pointers according to the ocap principle
+// The function will panic if there is a potential conflict in names
+// see `assertNoCommonPrefix` function for more details.
+func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*TransientStoreKey)
+    for _, n := range names {
+        keys[n] = NewTransientStoreKey(n)
+    }
+    return keys
+}
+
+// NewMemoryStoreKeys constructs a new map matching store key names to their
+// respective MemoryStoreKey references.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*MemoryStoreKey)
+    for _, n := range names {
+        keys[n] = NewMemoryStoreKey(n)
+    }
+    return keys
+}
+```
+
+The `Committer` is an interface that defines methods to persist changes to disk:
+
+```go expandable
+package types
+
+import (
+    "fmt"
+    "io"
+
+    "github.com/cometbft/cometbft/proto/tendermint/crypto"
+    dbm "github.com/cosmos/cosmos-db"
+
+    "cosmossdk.io/store/metrics"
+    pruningtypes "cosmossdk.io/store/pruning/types"
+    snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+    GetStoreType() StoreType
+    CacheWrapper
+}
+
+// something that can persist to disk
+type Committer interface {
+    Commit() CommitID
+    LastCommitID() CommitID
+
+    // WorkingHash returns the hash of the KVStore's state before commit.
+    WorkingHash() []byte
+
+    SetPruning(pruningtypes.PruningOptions)
+    GetPruning() pruningtypes.PruningOptions
+}
+
+// Stores of MultiStore must implement CommitStore.
+type CommitStore interface {
+    Committer
+    Store
+}
+
+// Queryable allows a Store to expose internal state to the abci.Query
+// interface. Multistore can route requests to the proper Store.
+//
+// This is an optional, but useful extension to any CommitStore
+type Queryable interface {
+    Query(*RequestQuery) (*ResponseQuery, error)
+}
+
+type RequestQuery struct {
+    Data   []byte
+    Path   string
+    Height int64
+    Prove  bool
+}
+
+type ResponseQuery struct {
+    Code      uint32
+    Log       string
+    Info      string
+    Index     int64
+    Key       []byte
+    Value     []byte
+    ProofOps  *crypto.ProofOps
+    Height    int64
+    Codespace string
+}
+
+//----------------------------------------
+// MultiStore
+
+// StoreUpgrades defines a series of transformations to apply the multistore db upon load
+type StoreUpgrades struct {
+    Added   []string      `json:"added"`
+    Renamed []StoreRename `json:"renamed"`
+    Deleted []string      `json:"deleted"`
+}
+
+// StoreRename defines a name change of a sub-store.
+// All data previously under a PrefixStore with OldKey will be copied
+// to a PrefixStore with NewKey, then deleted from OldKey store.
+type StoreRename struct {
+    OldKey string `json:"old_key"`
+    NewKey string `json:"new_key"`
+}
+
+// IsAdded returns true if the given key should be added
+func (s *StoreUpgrades) IsAdded(key string) bool {
+    if s == nil {
+        return false
+    }
+    for _, added := range s.Added {
+        if key == added {
+            return true
+        }
+    }
+    return false
+}
+
+// IsDeleted returns true if the given key should be deleted
+func (s *StoreUpgrades) IsDeleted(key string) bool {
+    if s == nil {
+        return false
+    }
+    for _, d := range s.Deleted {
+        if d == key {
+            return true
+        }
+    }
+    return false
+}
+
+// RenamedFrom returns the oldKey if it was renamed
+// Returns "" if it was not renamed
+func (s *StoreUpgrades) RenamedFrom(key string) string {
+    if s == nil {
+        return ""
+    }
+    for _, re := range s.Renamed {
+        if re.NewKey == key {
+            return re.OldKey
+        }
+    }
+    return ""
+}
+
+type MultiStore interface {
+    Store
+
+    // Branches MultiStore into a cached storage object.
+    // NOTE: Caller should probably not call .Write() on each, but
+    // call CacheMultiStore.Write().
+    CacheMultiStore() CacheMultiStore
+
+    // CacheMultiStoreWithVersion branches the underlying MultiStore where
+    // each stored is loaded at a specific version (height).
+    CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)
+
+    // Convenience for fetching substores.
+    // If the store does not exist, panics.
+    GetStore(StoreKey) Store
+    GetKVStore(StoreKey) KVStore
+
+    // TracingEnabled returns if tracing is enabled for the MultiStore.
+    TracingEnabled() bool
+
+    // SetTracer sets the tracer for the MultiStore that the underlying
+    // stores will utilize to trace operations. The modified MultiStore is
+    // returned.
+    SetTracer(w io.Writer) MultiStore
+
+    // SetTracingContext sets the tracing context for a MultiStore. It is
+    // implied that the caller should update the context when necessary between
+    // tracing operations. The modified MultiStore is returned.
+    SetTracingContext(TraceContext) MultiStore
+
+    // LatestVersion returns the latest version in the store
+    LatestVersion() int64
+}
+
+// From MultiStore.CacheMultiStore()....
+type CacheMultiStore interface {
+    MultiStore
+    Write() // Writes operations to underlying KVStore
+}
+
+// CommitMultiStore is an interface for a MultiStore without cache capabilities.
+type CommitMultiStore interface {
+    Committer
+    MultiStore
+    snapshottypes.Snapshotter
+
+    // Mount a store of type using the given db.
+    // If db == nil, the new store will use the CommitMultiStore db.
+    MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)
+
+    // Panics on a nil key.
+    GetCommitStore(key StoreKey) CommitStore
+
+    // Panics on a nil key.
+    GetCommitKVStore(key StoreKey) CommitKVStore
+
+    // Load the latest persisted version. Called once after all calls to
+    // Mount*Store() are complete.
+    LoadLatestVersion() error
+
+    // LoadLatestVersionAndUpgrade will load the latest version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error
+
+    // LoadVersionAndUpgrade will load the named version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error
+
+    // Load a specific persisted version. When you load an old version, or when
+    // the last commit attempt didn't complete, the next commit after loading
+    // must be idempotent (return the same commit id). Otherwise the behavior is
+    // undefined.
+    LoadVersion(ver int64) error
+
+    // Set an inter-block (persistent) cache that maintains a mapping from
+    // StoreKeys to CommitKVStores.
+    SetInterBlockCache(MultiStorePersistentCache)
+
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64) error
+
+    // SetIAVLCacheSize sets the cache size of the IAVL tree.
+    SetIAVLCacheSize(size int)
+
+    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
+    SetIAVLDisableFastNode(disable bool)
+
+    // RollbackToVersion rollback the db to specific version(height).
+    RollbackToVersion(version int64) error
+
+    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
+    ListeningEnabled(key StoreKey) bool
+
+    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
+    AddListeners(keys []StoreKey)
+
+    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
+    PopStateCache() []*StoreKVPair
+
+    // SetMetrics sets the metrics for the KVStore
+    SetMetrics(metrics metrics.StoreMetrics)
+}
+
+//---------subsp-------------------------------
+// KVStore
+
+// BasicKVStore is a simple interface to get/set data
+type BasicKVStore interface {
+    // Get returns nil if key doesn't exist. Panics on nil key.
+    Get(key []byte) []byte
+
+    // Has checks if a key exists. Panics on nil key.
+    Has(key []byte) bool
+
+    // Set sets the key. Panics on nil key or value.
+    Set(key, value []byte)
+
+    // Delete deletes the key. Panics on nil key.
+    Delete(key []byte)
+}
+
+// KVStore additionally provides iteration and deletion
+type KVStore interface {
+    Store
+    BasicKVStore
+
+    // Iterator over a domain of keys in ascending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // To iterate over entire domain, use store.Iterator(nil, nil)
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    Iterator(start, end []byte) Iterator
+
+    // Iterator over a domain of keys in descending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    ReverseIterator(start, end []byte) Iterator
+}
+
+// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator
+
+// CacheKVStore branches a KVStore and provides read cache functionality.
+// After calling .Write() on the CacheKVStore, all previously created
+// CacheKVStores on the object expire.
+type CacheKVStore interface {
+    KVStore
+
+    // Writes operations to underlying KVStore
+    Write()
+}
+
+// CommitKVStore is an interface for MultiStore.
+type CommitKVStore interface {
+    Committer
+    KVStore
+}
+
+//----------------------------------------
+// CacheWrap
+
+// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
+// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
+// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
+// HeapStore, SpaceStore, etc.
+type CacheWrap interface {
+    // Write syncs with the underlying store.
+    Write()
+
+    // CacheWrap recursively wraps again.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace recursively wraps again with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+type CacheWrapper interface {
+    // CacheWrap branches a store.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace branches a store with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+func (cid CommitID) IsZero() bool {
+    return cid.Version == 0 && len(cid.Hash) == 0
+}
+
+func (cid CommitID) String() string {
+    return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
+}
+
+//----------------------------------------
+// Store types
+
+// kind of store
+type StoreType int
+
+const (
+    StoreTypeMulti StoreType = iota
+    StoreTypeDB
+    StoreTypeIAVL
+    StoreTypeTransient
+    StoreTypeMemory
+    StoreTypeSMT
+    StoreTypePersistent
+)
+
+func (st StoreType) String() string {
+    switch st {
+    case StoreTypeMulti:
+        return "StoreTypeMulti"
+    case StoreTypeDB:
+        return "StoreTypeDB"
+    case StoreTypeIAVL:
+        return "StoreTypeIAVL"
+    case StoreTypeTransient:
+        return "StoreTypeTransient"
+    case StoreTypeMemory:
+        return "StoreTypeMemory"
+    case StoreTypeSMT:
+        return "StoreTypeSMT"
+    case StoreTypePersistent:
+        return "StoreTypePersistent"
+    }
+    return "unknown store type"
+}
+
+//----------------------------------------
+// Keys for accessing substores
+
+// StoreKey is a key used to index stores in a MultiStore.
+type StoreKey interface {
+    Name() string
+    String() string
+}
+
+// CapabilityKey represent the Cosmos SDK keys for object-capability
+// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
+type CapabilityKey StoreKey
+
+// KVStoreKey is used for accessing substores.
+// Only the pointer value should ever be used - it functions as a capabilities key.
+type KVStoreKey struct {
+    name string
+}
+
+// NewKVStoreKey returns a new pointer to a KVStoreKey.
+// Use a pointer so keys don't collide.
+func NewKVStoreKey(name string) *KVStoreKey {
+    if name == "" {
+        panic("empty key name not allowed")
+    }
+    return &KVStoreKey{
+        name: name,
+    }
+}
+
+// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + for k, v := range tc { + ret[k] = v +} + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + for k, v := range newTc { + tc[k] = v +} + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
  assertNoCommonPrefix(names)
  keys := make(map[string]*MemoryStoreKey)
  for _, n := range names {
    keys[n] = NewMemoryStoreKey(n)
  }
  return keys
}
```

The `CommitID` is a deterministic commit of the state tree. Its hash is returned to the underlying consensus engine and stored in the block header. Commit store interfaces exist for various purposes; one of them is to ensure that not every object can commit the store. As part of the [object-capabilities model](/docs/sdk/v0.50/learn/advanced/ocap) of the Cosmos SDK, only `baseapp` should have the ability to commit stores. For example, this is why the `ctx.KVStore()` method, through which modules typically access stores, returns a `KVStore` rather than a `CommitKVStore`.

The Cosmos SDK comes with many types of stores, the most used being [`CommitMultiStore`](#multistore), [`KVStore`](#kvstore), and the [`GasKv` store](#gaskv-store). [Other types of stores](#other-stores) include `Transient` and `TraceKV` stores.

## Multistore

### Multistore Interface

Each Cosmos SDK application holds a multistore at its root to persist its state. The multistore is a store of `KVStores` that follows the `Multistore` interface:

```go expandable
package types

import (
  "fmt"
  "io"

  "github.com/cometbft/cometbft/proto/tendermint/crypto"
  dbm "github.com/cosmos/cosmos-db"

  "cosmossdk.io/store/metrics"
  pruningtypes "cosmossdk.io/store/pruning/types"
  snapshottypes "cosmossdk.io/store/snapshots/types"
)

type Store interface {
  GetStoreType() StoreType
  CacheWrapper
}

// something that can persist to disk
type Committer interface {
  Commit() CommitID
  LastCommitID() CommitID

  // WorkingHash returns the hash of the KVStore's state before commit.
  WorkingHash() []byte

  SetPruning(pruningtypes.PruningOptions)
  GetPruning() pruningtypes.PruningOptions
}

// Stores of MultiStore must implement CommitStore.
type CommitStore interface {
  Committer
  Store
}

// Queryable allows a Store to expose internal state to the abci.Query
// interface. Multistore can route requests to the proper Store.
//
// This is an optional, but useful extension to any CommitStore
type Queryable interface {
  Query(*RequestQuery) (*ResponseQuery, error)
}

type RequestQuery struct {
  Data   []byte
  Path   string
  Height int64
  Prove  bool
}

type ResponseQuery struct {
  Code      uint32
  Log       string
  Info      string
  Index     int64
  Key       []byte
  Value     []byte
  ProofOps  *crypto.ProofOps
  Height    int64
  Codespace string
}

//----------------------------------------
// MultiStore

// StoreUpgrades defines a series of transformations to apply the multistore db upon load
type StoreUpgrades struct {
  Added   []string      `json:"added"`
  Renamed []StoreRename `json:"renamed"`
  Deleted []string      `json:"deleted"`
}

// StoreRename defines a name change of a sub-store.
// All data previously under a PrefixStore with OldKey will be copied
// to a PrefixStore with NewKey, then deleted from OldKey store.
type StoreRename struct {
  OldKey string `json:"old_key"`
  NewKey string `json:"new_key"`
}

// IsAdded returns true if the given key should be added
func (s *StoreUpgrades) IsAdded(key string) bool {
  if s == nil {
    return false
  }
  for _, added := range s.Added {
    if key == added {
      return true
    }
  }
  return false
}

// IsDeleted returns true if the given key should be deleted
func (s *StoreUpgrades) IsDeleted(key string) bool {
  if s == nil {
    return false
  }
  for _, d := range s.Deleted {
    if d == key {
      return true
    }
  }
  return false
}

// RenamedFrom returns the oldKey if it was renamed
// Returns "" if it was not renamed
func (s *StoreUpgrades) RenamedFrom(key string) string {
  if s == nil {
    return ""
  }
  for _, re := range s.Renamed {
    if re.NewKey == key {
      return re.OldKey
    }
  }
  return ""
}

type MultiStore interface {
  Store

  // Branches MultiStore into a cached storage object.
  // NOTE: Caller should probably not call .Write() on each, but
  // call CacheMultiStore.Write().
  CacheMultiStore() CacheMultiStore

  // CacheMultiStoreWithVersion branches the underlying MultiStore where
  // each stored is loaded at a specific version (height).
  CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)

  // Convenience for fetching substores.
  // If the store does not exist, panics.
  GetStore(StoreKey) Store
  GetKVStore(StoreKey) KVStore

  // TracingEnabled returns if tracing is enabled for the MultiStore.
  TracingEnabled() bool

  // SetTracer sets the tracer for the MultiStore that the underlying
  // stores will utilize to trace operations. The modified MultiStore is
  // returned.
  SetTracer(w io.Writer) MultiStore

  // SetTracingContext sets the tracing context for a MultiStore. It is
  // implied that the caller should update the context when necessary between
  // tracing operations. The modified MultiStore is returned.
  SetTracingContext(TraceContext) MultiStore

  // LatestVersion returns the latest version in the store
  LatestVersion() int64
}

// From MultiStore.CacheMultiStore()....
type CacheMultiStore interface {
  MultiStore
  Write() // Writes operations to underlying KVStore
}

// CommitMultiStore is an interface for a MultiStore without cache capabilities.
type CommitMultiStore interface {
  Committer
  MultiStore
  snapshottypes.Snapshotter

  // Mount a store of type using the given db.
  // If db == nil, the new store will use the CommitMultiStore db.
  MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)

  // Panics on a nil key.
  GetCommitStore(key StoreKey) CommitStore

  // Panics on a nil key.
  GetCommitKVStore(key StoreKey) CommitKVStore

  // Load the latest persisted version. Called once after all calls to
  // Mount*Store() are complete.
  LoadLatestVersion() error

  // LoadLatestVersionAndUpgrade will load the latest version, but also
  // rename/delete/create sub-store keys, before registering all the keys
  // in order to handle breaking formats in migrations
  LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error

  // LoadVersionAndUpgrade will load the named version, but also
  // rename/delete/create sub-store keys, before registering all the keys
  // in order to handle breaking formats in migrations
  LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error

  // Load a specific persisted version. When you load an old version, or when
  // the last commit attempt didn't complete, the next commit after loading
  // must be idempotent (return the same commit id). Otherwise the behavior is
  // undefined.
  LoadVersion(ver int64) error

  // Set an inter-block (persistent) cache that maintains a mapping from
  // StoreKeys to CommitKVStores.
  SetInterBlockCache(MultiStorePersistentCache)

  // SetInitialVersion sets the initial version of the IAVL tree. It is used when
  // starting a new chain at an arbitrary height.
  SetInitialVersion(version int64) error

  // SetIAVLCacheSize sets the cache size of the IAVL tree.
  SetIAVLCacheSize(size int)

  // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
  SetIAVLDisableFastNode(disable bool)

  // RollbackToVersion rollback the db to specific version(height).
  RollbackToVersion(version int64) error

  // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
  ListeningEnabled(key StoreKey) bool

  // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
  AddListeners(keys []StoreKey)

  // PopStateCache returns the accumulated state change messages from the CommitMultiStore
  PopStateCache() []*StoreKVPair

  // SetMetrics sets the metrics for the KVStore
  SetMetrics(metrics metrics.StoreMetrics)
}

//---------subsp-------------------------------
// KVStore

// BasicKVStore is a simple interface to get/set data
type BasicKVStore interface {
  // Get returns nil if key doesn't exist. Panics on nil key.
  Get(key []byte) []byte

  // Has checks if a key exists. Panics on nil key.
  Has(key []byte) bool

  // Set sets the key. Panics on nil key or value.
  Set(key, value []byte)

  // Delete deletes the key. Panics on nil key.
  Delete(key []byte)
}

// KVStore additionally provides iteration and deletion
type KVStore interface {
  Store
  BasicKVStore

  // Iterator over a domain of keys in ascending order. End is exclusive.
  // Start must be less than end, or the Iterator is invalid.
  // Iterator must be closed by caller.
  // To iterate over entire domain, use store.Iterator(nil, nil)
  // CONTRACT: No writes may happen within a domain while an iterator exists over it.
  // Exceptionally allowed for cachekv.Store, safe to write in the modules.
  Iterator(start, end []byte) Iterator

  // Iterator over a domain of keys in descending order. End is exclusive.
  // Start must be less than end, or the Iterator is invalid.
  // Iterator must be closed by caller.
  // CONTRACT: No writes may happen within a domain while an iterator exists over it.
  // Exceptionally allowed for cachekv.Store, safe to write in the modules.
  ReverseIterator(start, end []byte) Iterator
}

// Iterator is an alias db's Iterator for convenience.
type Iterator = dbm.Iterator

// CacheKVStore branches a KVStore and provides read cache functionality.
// After calling .Write() on the CacheKVStore, all previously created
// CacheKVStores on the object expire.
type CacheKVStore interface {
  KVStore

  // Writes operations to underlying KVStore
  Write()
}

// CommitKVStore is an interface for MultiStore.
type CommitKVStore interface {
  Committer
  KVStore
}

//----------------------------------------
// CacheWrap

// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
// HeapStore, SpaceStore, etc.
type CacheWrap interface {
  // Write syncs with the underlying store.
  Write()

  // CacheWrap recursively wraps again.
  CacheWrap() CacheWrap

  // CacheWrapWithTrace recursively wraps again with tracing enabled.
  CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

type CacheWrapper interface {
  // CacheWrap branches a store.
  CacheWrap() CacheWrap

  // CacheWrapWithTrace branches a store with tracing enabled.
  CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

func (cid CommitID) IsZero() bool {
  return cid.Version == 0 && len(cid.Hash) == 0
}

func (cid CommitID) String() string {
  return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
}

//----------------------------------------
// Store types

// kind of store
type StoreType int

const (
  StoreTypeMulti StoreType = iota
  StoreTypeDB
  StoreTypeIAVL
  StoreTypeTransient
  StoreTypeMemory
  StoreTypeSMT
  StoreTypePersistent
)

func (st StoreType) String() string {
  switch st {
  case StoreTypeMulti:
    return "StoreTypeMulti"
  case StoreTypeDB:
    return "StoreTypeDB"
  case StoreTypeIAVL:
    return "StoreTypeIAVL"
  case StoreTypeTransient:
    return "StoreTypeTransient"
  case StoreTypeMemory:
    return "StoreTypeMemory"
  case StoreTypeSMT:
    return "StoreTypeSMT"
  case StoreTypePersistent:
    return "StoreTypePersistent"
  }

  return "unknown store type"
}

//----------------------------------------
// Keys for accessing substores

// StoreKey is a key used to index stores in a MultiStore.
type StoreKey interface {
  Name() string
  String() string
}

// CapabilityKey represent the Cosmos SDK keys for object-capability
// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
type CapabilityKey StoreKey

// KVStoreKey is used for accessing substores.
// Only the pointer value should ever be used - it functions as a capabilities key.
type KVStoreKey struct {
  name string
}

// NewKVStoreKey returns a new pointer to a KVStoreKey.
// Use a pointer so keys don't collide.
func NewKVStoreKey(name string) *KVStoreKey {
  if name == "" {
    panic("empty key name not allowed")
  }
  return &KVStoreKey{
    name: name,
  }
}

// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
  assertNoCommonPrefix(names)
  keys := make(map[string]*KVStoreKey, len(names))
  for _, n := range names {
    keys[n] = NewKVStoreKey(n)
  }
  return keys
}

func (key *KVStoreKey) Name() string {
  return key.name
}

func (key *KVStoreKey) String() string {
  return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
}

// TransientStoreKey is used for indexing transient stores in a MultiStore
type TransientStoreKey struct {
  name string
}

// Constructs new TransientStoreKey
// Must return a pointer according to the ocap principle
func NewTransientStoreKey(name string) *TransientStoreKey {
  return &TransientStoreKey{
    name: name,
  }
}

// Implements StoreKey
func (key *TransientStoreKey) Name() string {
  return key.name
}

// Implements StoreKey
func (key *TransientStoreKey) String() string {
  return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
}

// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
type MemoryStoreKey struct {
  name string
}

func NewMemoryStoreKey(name string) *MemoryStoreKey {
  return &MemoryStoreKey{
    name: name,
  }
}

// Name returns the name of the MemoryStoreKey.
func (key *MemoryStoreKey) Name() string {
  return key.name
}

// String returns a stringified representation of the MemoryStoreKey.
func (key *MemoryStoreKey) String() string {
  return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
}

//----------------------------------------

// TraceContext contains TraceKVStore context data. It will be written with
// every trace operation.
type TraceContext map[string]interface{}

// Clone clones tc into another instance of TraceContext.
func (tc TraceContext) Clone() TraceContext {
  ret := TraceContext{}
  for k, v := range tc {
    ret[k] = v
  }
  return ret
}

// Merge merges value of newTc into tc.
func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
  if tc == nil {
    tc = TraceContext{}
  }
  for k, v := range newTc {
    tc[k] = v
  }
  return tc
}

// MultiStorePersistentCache defines an interface which provides inter-block
// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
type MultiStorePersistentCache interface {
  // Wrap and return the provided CommitKVStore with an inter-block (persistent)
  // cache.
  GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore

  // Return the underlying CommitKVStore for a StoreKey.
  Unwrap(key StoreKey) CommitKVStore

  // Reset the entire set of internal caches.
  Reset()
}

// StoreWithInitialVersion is a store that can have an arbitrary initial
// version.
type StoreWithInitialVersion interface {
  // SetInitialVersion sets the initial version of the IAVL tree. It is used when
  // starting a new chain at an arbitrary height.
  SetInitialVersion(version int64)
}

// NewTransientStoreKeys constructs a new map of TransientStoreKey's
// Must return pointers according to the ocap principle
// The function will panic if there is a potential conflict in names
// see `assertNoCommonPrefix` function for more details.
func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
  assertNoCommonPrefix(names)
  keys := make(map[string]*TransientStoreKey)
  for _, n := range names {
    keys[n] = NewTransientStoreKey(n)
  }
  return keys
}

// NewMemoryStoreKeys constructs a new map matching store key names to their
// respective MemoryStoreKey references.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
  assertNoCommonPrefix(names)
  keys := make(map[string]*MemoryStoreKey)
  for _, n := range names {
    keys[n] = NewMemoryStoreKey(n)
  }
  return keys
}
```

If tracing is enabled, branching the multistore first wraps all the underlying `KVStore`s in [`TraceKv.Store`](#tracekv-store).

### CommitMultiStore

The main type of `Multistore` used in the Cosmos SDK is `CommitMultiStore`, which is an extension of the `Multistore` interface:

```go expandable
package types

import (
  "fmt"
  "io"

  "github.com/cometbft/cometbft/proto/tendermint/crypto"
  dbm "github.com/cosmos/cosmos-db"

  "cosmossdk.io/store/metrics"
  pruningtypes "cosmossdk.io/store/pruning/types"
  snapshottypes "cosmossdk.io/store/snapshots/types"
)

type Store interface {
  GetStoreType() StoreType
  CacheWrapper
}

// something that can persist to disk
type Committer interface {
  Commit() CommitID
  LastCommitID() CommitID

  // WorkingHash returns the hash of the KVStore's state before commit.
  WorkingHash() []byte

  SetPruning(pruningtypes.PruningOptions)
  GetPruning() pruningtypes.PruningOptions
}

// Stores of MultiStore must implement CommitStore.
type CommitStore interface {
  Committer
  Store
}

// Queryable allows a Store to expose internal state to the abci.Query
// interface. Multistore can route requests to the proper Store.
+/ +/ This is an optional, but useful extension to any CommitStore +type Queryable interface { + Query(*RequestQuery) (*ResponseQuery, error) +} + +type RequestQuery struct { + Data []byte + Path string + Height int64 + Prove bool +} + +type ResponseQuery struct { + Code uint32 + Log string + Info string + Index int64 + Key []byte + Value []byte + ProofOps *crypto.ProofOps + Height int64 + Codespace string +} + +/---------------------------------------- +/ MultiStore + +/ StoreUpgrades defines a series of transformations to apply the multistore db upon load +type StoreUpgrades struct { + Added []string `json:"added"` + Renamed []StoreRename `json:"renamed"` + Deleted []string `json:"deleted"` +} + +/ StoreRename defines a name change of a sub-store. +/ All data previously under a PrefixStore with OldKey will be copied +/ to a PrefixStore with NewKey, then deleted from OldKey store. +type StoreRename struct { + OldKey string `json:"old_key"` + NewKey string `json:"new_key"` +} + +/ IsAdded returns true if the given key should be added +func (s *StoreUpgrades) + +IsAdded(key string) + +bool { + if s == nil { + return false +} + for _, added := range s.Added { + if key == added { + return true +} + +} + +return false +} + +/ IsDeleted returns true if the given key should be deleted +func (s *StoreUpgrades) + +IsDeleted(key string) + +bool { + if s == nil { + return false +} + for _, d := range s.Deleted { + if d == key { + return true +} + +} + +return false +} + +/ RenamedFrom returns the oldKey if it was renamed +/ Returns "" if it was not renamed +func (s *StoreUpgrades) + +RenamedFrom(key string) + +string { + if s == nil { + return "" +} + for _, re := range s.Renamed { + if re.NewKey == key { + return re.OldKey +} + +} + +return "" +} + +type MultiStore interface { + Store + + / Branches MultiStore into a cached storage object. + / NOTE: Caller should probably not call .Write() + +on each, but + / call CacheMultiStore.Write(). 
+ CacheMultiStore() + +CacheMultiStore + + / CacheMultiStoreWithVersion branches the underlying MultiStore where + / each stored is loaded at a specific version (height). + CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error) + + / Convenience for fetching substores. + / If the store does not exist, panics. + GetStore(StoreKey) + +Store + GetKVStore(StoreKey) + +KVStore + + / TracingEnabled returns if tracing is enabled for the MultiStore. + TracingEnabled() + +bool + + / SetTracer sets the tracer for the MultiStore that the underlying + / stores will utilize to trace operations. The modified MultiStore is + / returned. + SetTracer(w io.Writer) + +MultiStore + + / SetTracingContext sets the tracing context for a MultiStore. It is + / implied that the caller should update the context when necessary between + / tracing operations. The modified MultiStore is returned. + SetTracingContext(TraceContext) + +MultiStore + + / LatestVersion returns the latest version in the store + LatestVersion() + +int64 +} + +/ From MultiStore.CacheMultiStore().... +type CacheMultiStore interface { + MultiStore + Write() / Writes operations to underlying KVStore +} + +/ CommitMultiStore is an interface for a MultiStore without cache capabilities. +type CommitMultiStore interface { + Committer + MultiStore + snapshottypes.Snapshotter + + / Mount a store of type using the given db. + / If db == nil, the new store will use the CommitMultiStore db. + MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB) + + / Panics on a nil key. + GetCommitStore(key StoreKey) + +CommitStore + + / Panics on a nil key. + GetCommitKVStore(key StoreKey) + +CommitKVStore + + / Load the latest persisted version. Called once after all calls to + / Mount*Store() + +are complete. 
+ LoadLatestVersion() + +error + + / LoadLatestVersionAndUpgrade will load the latest version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) + +error + + / LoadVersionAndUpgrade will load the named version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) + +error + + / Load a specific persisted version. When you load an old version, or when + / the last commit attempt didn't complete, the next commit after loading + / must be idempotent (return the same commit id). Otherwise the behavior is + / undefined. + LoadVersion(ver int64) + +error + + / Set an inter-block (persistent) + +cache that maintains a mapping from + / StoreKeys to CommitKVStores. + SetInterBlockCache(MultiStorePersistentCache) + + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) + +error + + / SetIAVLCacheSize sets the cache size of the IAVL tree. + SetIAVLCacheSize(size int) + + / SetIAVLDisableFastNode enables/disables fastnode feature on iavl. + SetIAVLDisableFastNode(disable bool) + + / RollbackToVersion rollback the db to specific version(height). 
+ RollbackToVersion(version int64) + +error + + / ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey + ListeningEnabled(key StoreKey) + +bool + + / AddListeners adds a listener for the KVStore belonging to the provided StoreKey + AddListeners(keys []StoreKey) + + / PopStateCache returns the accumulated state change messages from the CommitMultiStore + PopStateCache() []*StoreKVPair + + / SetMetrics sets the metrics for the KVStore + SetMetrics(metrics metrics.StoreMetrics) +} + +/---------subsp------------------------------- +/ KVStore + +/ BasicKVStore is a simple interface to get/set data +type BasicKVStore interface { + / Get returns nil if key doesn't exist. Panics on nil key. + Get(key []byte) []byte + + / Has checks if a key exists. Panics on nil key. + Has(key []byte) + +bool + + / Set sets the key. Panics on nil key or value. + Set(key, value []byte) + + / Delete deletes the key. Panics on nil key. + Delete(key []byte) +} + +/ KVStore additionally provides iteration and deletion +type KVStore interface { + Store + BasicKVStore + + / Iterator over a domain of keys in ascending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / To iterate over entire domain, use store.Iterator(nil, nil) + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + Iterator(start, end []byte) + +Iterator + + / Iterator over a domain of keys in descending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + ReverseIterator(start, end []byte) + +Iterator +} + +/ Iterator is an alias db's Iterator for convenience. 
+type Iterator = dbm.Iterator + +/ CacheKVStore branches a KVStore and provides read cache functionality. +/ After calling .Write() + +on the CacheKVStore, all previously created +/ CacheKVStores on the object expire. +type CacheKVStore interface { + KVStore + + / Writes operations to underlying KVStore + Write() +} + +/ CommitKVStore is an interface for MultiStore. +type CommitKVStore interface { + Committer + KVStore +} + +/---------------------------------------- +/ CacheWrap + +/ CacheWrap is the most appropriate interface for store ephemeral branching and cache. +/ For example, IAVLStore.CacheWrap() + +returns a CacheKVStore. CacheWrap should not return +/ a Committer, since Commit ephemeral store make no sense. It can return KVStore, +/ HeapStore, SpaceStore, etc. +type CacheWrap interface { + / Write syncs with the underlying store. + Write() + + / CacheWrap recursively wraps again. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace recursively wraps again with tracing enabled. + CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +type CacheWrapper interface { + / CacheWrap branches a store. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace branches a store with tracing enabled. 
	CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

func (cid CommitID) IsZero() bool {
	return cid.Version == 0 && len(cid.Hash) == 0
}

func (cid CommitID) String() string {
	return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
}

//----------------------------------------
// Store types

// kind of store
type StoreType int

const (
	StoreTypeMulti StoreType = iota
	StoreTypeDB
	StoreTypeIAVL
	StoreTypeTransient
	StoreTypeMemory
	StoreTypeSMT
	StoreTypePersistent
)

func (st StoreType) String() string {
	switch st {
	case StoreTypeMulti:
		return "StoreTypeMulti"
	case StoreTypeDB:
		return "StoreTypeDB"
	case StoreTypeIAVL:
		return "StoreTypeIAVL"
	case StoreTypeTransient:
		return "StoreTypeTransient"
	case StoreTypeMemory:
		return "StoreTypeMemory"
	case StoreTypeSMT:
		return "StoreTypeSMT"
	case StoreTypePersistent:
		return "StoreTypePersistent"
	}

	return "unknown store type"
}

//----------------------------------------
// Keys for accessing substores

// StoreKey is a key used to index stores in a MultiStore.
type StoreKey interface {
	Name() string
	String() string
}

// CapabilityKey represent the Cosmos SDK keys for object-capability
// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
type CapabilityKey StoreKey

// KVStoreKey is used for accessing substores.
// Only the pointer value should ever be used - it functions as a capabilities key.
type KVStoreKey struct {
	name string
}

// NewKVStoreKey returns a new pointer to a KVStoreKey.
// Use a pointer so keys don't collide.
func NewKVStoreKey(name string) *KVStoreKey {
	if name == "" {
		panic("empty key name not allowed")
	}

	return &KVStoreKey{
		name: name,
	}
}

// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
	assertNoCommonPrefix(names)
	keys := make(map[string]*KVStoreKey, len(names))
	for _, n := range names {
		keys[n] = NewKVStoreKey(n)
	}

	return keys
}

func (key *KVStoreKey) Name() string {
	return key.name
}

func (key *KVStoreKey) String() string {
	return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
}

// TransientStoreKey is used for indexing transient stores in a MultiStore
type TransientStoreKey struct {
	name string
}

// Constructs new TransientStoreKey
// Must return a pointer according to the ocap principle
func NewTransientStoreKey(name string) *TransientStoreKey {
	return &TransientStoreKey{
		name: name,
	}
}

// Implements StoreKey
func (key *TransientStoreKey) Name() string {
	return key.name
}

// Implements StoreKey
func (key *TransientStoreKey) String() string {
	return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
}

// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
type MemoryStoreKey struct {
	name string
}

func NewMemoryStoreKey(name string) *MemoryStoreKey {
	return &MemoryStoreKey{
		name: name,
	}
}

// Name returns the name of the MemoryStoreKey.
func (key *MemoryStoreKey) Name() string {
	return key.name
}

// String returns a stringified representation of the MemoryStoreKey.
func (key *MemoryStoreKey) String() string {
	return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
}

//----------------------------------------

// TraceContext contains TraceKVStore context data. It will be written with
// every trace operation.
type TraceContext map[string]interface{}

// Clone clones tc into another instance of TraceContext.
func (tc TraceContext) Clone() TraceContext {
	ret := TraceContext{}
	for k, v := range tc {
		ret[k] = v
	}

	return ret
}

// Merge merges value of newTc into tc.
func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
	if tc == nil {
		tc = TraceContext{}
	}
	for k, v := range newTc {
		tc[k] = v
	}

	return tc
}

// MultiStorePersistentCache defines an interface which provides inter-block
// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
type MultiStorePersistentCache interface {
	// Wrap and return the provided CommitKVStore with an inter-block (persistent)
	// cache.
	GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore

	// Return the underlying CommitKVStore for a StoreKey.
	Unwrap(key StoreKey) CommitKVStore

	// Reset the entire set of internal caches.
	Reset()
}

// StoreWithInitialVersion is a store that can have an arbitrary initial
// version.
type StoreWithInitialVersion interface {
	// SetInitialVersion sets the initial version of the IAVL tree. It is used when
	// starting a new chain at an arbitrary height.
	SetInitialVersion(version int64)
}

// NewTransientStoreKeys constructs a new map of TransientStoreKey's
// Must return pointers according to the ocap principle
// The function will panic if there is a potential conflict in names
// see `assertNoCommonPrefix` function for more details.
func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
	assertNoCommonPrefix(names)
	keys := make(map[string]*TransientStoreKey)
	for _, n := range names {
		keys[n] = NewTransientStoreKey(n)
	}

	return keys
}

// NewMemoryStoreKeys constructs a new map matching store key names to their
// respective MemoryStoreKey references.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
	assertNoCommonPrefix(names)
	keys := make(map[string]*MemoryStoreKey)
	for _, n := range names {
		keys[n] = NewMemoryStoreKey(n)
	}

	return keys
}
```

As for concrete implementation, `rootMulti.Store` is the go-to implementation of the `CommitMultiStore` interface.

```go expandable
package rootmulti

import (
	"errors"
	"fmt"
	"io"
	"math"
	"sort"
	"strings"
	"sync"

	"cosmossdk.io/log"
	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
	dbm "github.com/cosmos/cosmos-db"
	protoio "github.com/cosmos/gogoproto/io"
	gogotypes "github.com/cosmos/gogoproto/types"
	iavltree "github.com/cosmos/iavl"

	errorsmod "cosmossdk.io/errors"
	"cosmossdk.io/store/cachemulti"
	"cosmossdk.io/store/dbadapter"
	"cosmossdk.io/store/iavl"
	"cosmossdk.io/store/listenkv"
	"cosmossdk.io/store/mem"
	"cosmossdk.io/store/metrics"
	"cosmossdk.io/store/pruning"
	pruningtypes "cosmossdk.io/store/pruning/types"
	snapshottypes "cosmossdk.io/store/snapshots/types"
	"cosmossdk.io/store/tracekv"
	"cosmossdk.io/store/transient"
	"cosmossdk.io/store/types"
)

const (
	latestVersionKey = "s/latest"
	commitInfoKeyFmt = "s/%d" // s/<version>
)

const iavlDisablefastNodeDefault = false

// keysFromStoreKeyMap returns a slice of keys for the provided map lexically sorted by StoreKey.Name()
func keysFromStoreKeyMap[V any](m map[types.StoreKey]V) []types.StoreKey {
	keys := make([]types.StoreKey, 0, len(m))
	for key := range m {
		keys = append(keys, key)
	}

	sort.Slice(keys, func(i, j int) bool {
		ki, kj := keys[i], keys[j]
		return ki.Name() < kj.Name()
	})

	return keys
}

// Store is composed of many CommitStores. Name contrasts with
// cacheMultiStore which is used for branching other MultiStores. It implements
// the CommitMultiStore interface.
type Store struct {
	db                  dbm.DB
	logger              log.Logger
	lastCommitInfo      *types.CommitInfo
	pruningManager      *pruning.Manager
	iavlCacheSize       int
	iavlDisableFastNode bool
	storesParams        map[types.StoreKey]storeParams
	stores              map[types.StoreKey]types.CommitKVStore
	keysByName          map[string]types.StoreKey
	initialVersion      int64
	removalMap          map[types.StoreKey]bool
	traceWriter         io.Writer
	traceContext        types.TraceContext
	traceContextMutex   sync.Mutex
	interBlockCache     types.MultiStorePersistentCache
	listeners           map[types.StoreKey]*types.MemoryListener
	metrics             metrics.StoreMetrics
	commitHeader        cmtproto.Header
}

var (
	_ types.CommitMultiStore = (*Store)(nil)
	_ types.Queryable        = (*Store)(nil)
)

// NewStore returns a reference to a new Store object with the provided DB. The
// store will be created with a PruneNothing pruning strategy by default. After
// a store is created, KVStores must be mounted and finally LoadLatestVersion or
// LoadVersion must be called.
func NewStore(db dbm.DB, logger log.Logger, metricGatherer metrics.StoreMetrics) *Store {
	return &Store{
		db:                  db,
		logger:              logger,
		iavlCacheSize:       iavl.DefaultIAVLCacheSize,
		iavlDisableFastNode: iavlDisablefastNodeDefault,
		storesParams:        make(map[types.StoreKey]storeParams),
		stores:              make(map[types.StoreKey]types.CommitKVStore),
		keysByName:          make(map[string]types.StoreKey),
		listeners:           make(map[types.StoreKey]*types.MemoryListener),
		removalMap:          make(map[types.StoreKey]bool),
		pruningManager:      pruning.NewManager(db, logger),
		metrics:             metricGatherer,
	}
}

// GetPruning fetches the pruning strategy from the root store.
func (rs *Store) GetPruning() pruningtypes.PruningOptions {
	return rs.pruningManager.GetOptions()
}

// SetPruning sets the pruning strategy on the root store and all the sub-stores.
// Note, calling SetPruning on the root store prior to LoadVersion or
// LoadLatestVersion performs a no-op as the stores aren't mounted yet.
func (rs *Store) SetPruning(pruningOpts pruningtypes.PruningOptions) {
	rs.pruningManager.SetOptions(pruningOpts)
}

// SetMetrics sets the metrics gatherer for the store package
func (rs *Store) SetMetrics(metrics metrics.StoreMetrics) {
	rs.metrics = metrics
}

// SetSnapshotInterval sets the interval at which the snapshots are taken.
// It is used by the store to determine which heights to retain until after the snapshot is complete.
func (rs *Store) SetSnapshotInterval(snapshotInterval uint64) {
	rs.pruningManager.SetSnapshotInterval(snapshotInterval)
}

func (rs *Store) SetIAVLCacheSize(cacheSize int) {
	rs.iavlCacheSize = cacheSize
}

func (rs *Store) SetIAVLDisableFastNode(disableFastNode bool) {
	rs.iavlDisableFastNode = disableFastNode
}

// GetStoreType implements Store.
func (rs *Store) GetStoreType() types.StoreType {
	return types.StoreTypeMulti
}

// MountStoreWithDB implements CommitMultiStore.
func (rs *Store) MountStoreWithDB(key types.StoreKey, typ types.StoreType, db dbm.DB) {
	if key == nil {
		panic("MountIAVLStore() key cannot be nil")
	}
	if _, ok := rs.storesParams[key]; ok {
		panic(fmt.Sprintf("store duplicate store key %v", key))
	}
	if _, ok := rs.keysByName[key.Name()]; ok {
		panic(fmt.Sprintf("store duplicate store key name %v", key))
	}

	rs.storesParams[key] = newStoreParams(key, db, typ, 0)
	rs.keysByName[key.Name()] = key
}

// GetCommitStore returns a mounted CommitStore for a given StoreKey. If the
// store is wrapped in an inter-block cache, it will be unwrapped before returning.
func (rs *Store) GetCommitStore(key types.StoreKey) types.CommitStore {
	return rs.GetCommitKVStore(key)
}

// GetCommitKVStore returns a mounted CommitKVStore for a given StoreKey. If the
// store is wrapped in an inter-block cache, it will be unwrapped before returning.
func (rs *Store) GetCommitKVStore(key types.StoreKey) types.CommitKVStore {
	// If the Store has an inter-block cache, first attempt to lookup and unwrap
	// the underlying CommitKVStore by StoreKey. If it does not exist, fallback to
	// the main mapping of CommitKVStores.
	if rs.interBlockCache != nil {
		if store := rs.interBlockCache.Unwrap(key); store != nil {
			return store
		}
	}

	return rs.stores[key]
}

// StoreKeysByName returns mapping storeNames -> StoreKeys
func (rs *Store) StoreKeysByName() map[string]types.StoreKey {
	return rs.keysByName
}

// LoadLatestVersionAndUpgrade implements CommitMultiStore
func (rs *Store) LoadLatestVersionAndUpgrade(upgrades *types.StoreUpgrades) error {
	ver := GetLatestVersion(rs.db)
	return rs.loadVersion(ver, upgrades)
}

// LoadVersionAndUpgrade allows us to rename substores while loading an older version
func (rs *Store) LoadVersionAndUpgrade(ver int64, upgrades *types.StoreUpgrades) error {
	return rs.loadVersion(ver, upgrades)
}

// LoadLatestVersion implements CommitMultiStore.
func (rs *Store) LoadLatestVersion() error {
	ver := GetLatestVersion(rs.db)
	return rs.loadVersion(ver, nil)
}

// LoadVersion implements CommitMultiStore.
func (rs *Store) LoadVersion(ver int64) error {
	return rs.loadVersion(ver, nil)
}

func (rs *Store) loadVersion(ver int64, upgrades *types.StoreUpgrades) error {
	infos := make(map[string]types.StoreInfo)

	rs.logger.Debug("loadVersion", "ver", ver)
	cInfo := &types.CommitInfo{}

	// load old data if we are not version 0
	if ver != 0 {
		var err error
		cInfo, err = rs.GetCommitInfo(ver)
		if err != nil {
			return err
		}

		// convert StoreInfos slice to map
		for _, storeInfo := range cInfo.StoreInfos {
			infos[storeInfo.Name] = storeInfo
		}
	}

	// load each Store (note this doesn't panic on unmounted keys now)
	newStores := make(map[types.StoreKey]types.CommitKVStore)

	storesKeys := make([]types.StoreKey, 0, len(rs.storesParams))
	for key := range rs.storesParams {
		storesKeys = append(storesKeys, key)
	}

	if upgrades != nil {
		// deterministic iteration order for upgrades
		// (as the underlying store may change and
		// upgrades make store changes where the execution order may matter)
		sort.Slice(storesKeys, func(i, j int) bool {
			return storesKeys[i].Name() < storesKeys[j].Name()
		})
	}

	for _, key := range storesKeys {
		storeParams := rs.storesParams[key]
		commitID := rs.getCommitID(infos, key.Name())
		rs.logger.Debug("loadVersion commitID", "key", key, "ver", ver, "hash", fmt.Sprintf("%x", commitID.Hash))

		// If it has been added, set the initial version
		if upgrades.IsAdded(key.Name()) || upgrades.RenamedFrom(key.Name()) != "" {
			storeParams.initialVersion = uint64(ver) + 1
		} else if commitID.Version != ver && storeParams.typ == types.StoreTypeIAVL {
			return fmt.Errorf("version of store %s mismatch root store's version; expected %d got %d; new stores should be added using StoreUpgrades", key.Name(), ver, commitID.Version)
		}

		store, err := rs.loadCommitStoreFromParams(key, commitID, storeParams)
		if err != nil {
			return errorsmod.Wrap(err, "failed to load store")
		}

		newStores[key] = store

		// If it was deleted, remove all data
		if upgrades.IsDeleted(key.Name()) {
			if err := deleteKVStore(store.(types.KVStore)); err != nil {
				return errorsmod.Wrapf(err, "failed to delete store %s", key.Name())
			}

			rs.removalMap[key] = true
		} else if oldName := upgrades.RenamedFrom(key.Name()); oldName != "" {
			// handle renames specially
			// make an unregistered key to satisfy loadCommitStore params
			oldKey := types.NewKVStoreKey(oldName)
			oldParams := newStoreParams(oldKey, storeParams.db, storeParams.typ, 0)

			// load from the old name
			oldStore, err := rs.loadCommitStoreFromParams(oldKey, rs.getCommitID(infos, oldName), oldParams)
			if err != nil {
				return errorsmod.Wrapf(err, "failed to load old store %s", oldName)
			}

			// move all data
			if err := moveKVStoreData(oldStore.(types.KVStore), store.(types.KVStore)); err != nil {
				return errorsmod.Wrapf(err, "failed to move store %s -> %s", oldName, key.Name())
			}

			// add the old key so its deletion is committed
			newStores[oldKey] = oldStore
			// this will ensure it's not perpetually stored in commitInfo
			rs.removalMap[oldKey] = true
		}
	}

	rs.lastCommitInfo = cInfo
	rs.stores = newStores

	// load any snapshot heights we missed from disk to be pruned on the next run
	if err := rs.pruningManager.LoadSnapshotHeights(rs.db); err != nil {
		return err
	}

	return nil
}

func (rs *Store) getCommitID(infos map[string]types.StoreInfo, name string) types.CommitID {
	info, ok := infos[name]
	if !ok {
		return types.CommitID{}
	}

	return info.CommitId
}

func deleteKVStore(kv types.KVStore) error {
	// Note that we cannot write while iterating, so load all keys here, delete below
	var keys [][]byte
	itr := kv.Iterator(nil, nil)
	for itr.Valid() {
		keys = append(keys, itr.Key())
		itr.Next()
	}
	itr.Close()

	for _, k := range keys {
		kv.Delete(k)
	}

	return nil
}

// we simulate move by a copy and delete
func moveKVStoreData(oldDB, newDB types.KVStore) error {
	// we read from one and write to another
	itr := oldDB.Iterator(nil, nil)
	for itr.Valid() {
		newDB.Set(itr.Key(), itr.Value())
		itr.Next()
	}
	itr.Close()

	// then delete the old store
	return deleteKVStore(oldDB)
}

// PruneSnapshotHeight prunes the given height according to the prune strategy.
// If PruneNothing, this is a no-op.
// If other strategy, this height is persisted until the snapshot is operated.
func (rs *Store) PruneSnapshotHeight(height int64) {
	rs.pruningManager.HandleHeightSnapshot(height)
}

// SetInterBlockCache sets the Store's internal inter-block (persistent) cache.
// When this is defined, all CommitKVStores will be wrapped with their respective
// inter-block cache.
func (rs *Store) SetInterBlockCache(c types.MultiStorePersistentCache) {
	rs.interBlockCache = c
}

// SetTracer sets the tracer for the MultiStore that the underlying
// stores will utilize to trace operations. A MultiStore is returned.
func (rs *Store) SetTracer(w io.Writer) types.MultiStore {
	rs.traceWriter = w
	return rs
}

// SetTracingContext updates the tracing context for the MultiStore by merging
// the given context with the existing context by key. Any existing keys will
// be overwritten. It is implied that the caller should update the context when
// necessary between tracing operations. It returns a modified MultiStore.
func (rs *Store) SetTracingContext(tc types.TraceContext) types.MultiStore {
	rs.traceContextMutex.Lock()
	defer rs.traceContextMutex.Unlock()
	rs.traceContext = rs.traceContext.Merge(tc)

	return rs
}

func (rs *Store) getTracingContext() types.TraceContext {
	rs.traceContextMutex.Lock()
	defer rs.traceContextMutex.Unlock()

	if rs.traceContext == nil {
		return nil
	}

	ctx := types.TraceContext{}
	for k, v := range rs.traceContext {
		ctx[k] = v
	}

	return ctx
}

// TracingEnabled returns if tracing is enabled for the MultiStore.
func (rs *Store) TracingEnabled() bool {
	return rs.traceWriter != nil
}

// AddListeners adds a listener for the KVStore belonging to the provided StoreKey
func (rs *Store) AddListeners(keys []types.StoreKey) {
	for i := range keys {
		listener := rs.listeners[keys[i]]
		if listener == nil {
			rs.listeners[keys[i]] = types.NewMemoryListener()
		}
	}
}

// ListeningEnabled returns if listening is enabled for a specific KVStore
func (rs *Store) ListeningEnabled(key types.StoreKey) bool {
	if ls, ok := rs.listeners[key]; ok {
		return ls != nil
	}

	return false
}

// PopStateCache returns the accumulated state change messages from the CommitMultiStore
// Calling PopStateCache destroys only the currently accumulated state in each listener
// not the state in the store itself. This is a mutating and destructive operation.
// This method has been synchronized.
func (rs *Store) PopStateCache() []*types.StoreKVPair {
	var cache []*types.StoreKVPair
	for key := range rs.listeners {
		ls := rs.listeners[key]
		if ls != nil {
			cache = append(cache, ls.PopStateCache()...)
		}
	}

	sort.SliceStable(cache, func(i, j int) bool {
		return cache[i].StoreKey < cache[j].StoreKey
	})

	return cache
}

// LatestVersion returns the latest version in the store
func (rs *Store) LatestVersion() int64 {
	return rs.LastCommitID().Version
}

// LastCommitID implements Committer/CommitStore.
func (rs *Store) LastCommitID() types.CommitID {
	if rs.lastCommitInfo == nil {
		return types.CommitID{
			Version: GetLatestVersion(rs.db),
		}
	}

	return rs.lastCommitInfo.CommitID()
}

// Commit implements Committer/CommitStore.
func (rs *Store) Commit() types.CommitID {
	var previousHeight, version int64
	if rs.lastCommitInfo.GetVersion() == 0 && rs.initialVersion > 1 {
		// This case means that no commit has been made in the store, we
		// start from initialVersion.
		version = rs.initialVersion
	} else {
		// This case can means two things:
		// - either there was already a previous commit in the store, in which
		//   case we increment the version from there,
		// - or there was no previous commit, and initial version was not set,
		//   in which case we start at version 1.
		previousHeight = rs.lastCommitInfo.GetVersion()
		version = previousHeight + 1
	}

	if rs.commitHeader.Height != version {
		rs.logger.Debug("commit header and version mismatch", "header_height", rs.commitHeader.Height, "version", version)
	}

	rs.lastCommitInfo = commitStores(version, rs.stores, rs.removalMap)
	rs.lastCommitInfo.Timestamp = rs.commitHeader.Time
	defer rs.flushMetadata(rs.db, version, rs.lastCommitInfo)

	// remove remnants of removed stores
	for sk := range rs.removalMap {
		if _, ok := rs.stores[sk]; ok {
			delete(rs.stores, sk)
			delete(rs.storesParams, sk)
			delete(rs.keysByName, sk.Name())
		}
	}

	// reset the removalMap
	rs.removalMap = make(map[types.StoreKey]bool)

	if err := rs.handlePruning(version); err != nil {
		panic(err)
	}

	return types.CommitID{
		Version: version,
		Hash:    rs.lastCommitInfo.Hash(),
	}
}

// WorkingHash returns the current hash of the store.
// it will be used to get the current app hash before commit.
func (rs *Store) WorkingHash() []byte {
	storeInfos := make([]types.StoreInfo, 0, len(rs.stores))
	storeKeys := keysFromStoreKeyMap(rs.stores)

	for _, key := range storeKeys {
		store := rs.stores[key]

		if store.GetStoreType() != types.StoreTypeIAVL {
			continue
		}

		if !rs.removalMap[key] {
			si := types.StoreInfo{
				Name: key.Name(),
				CommitId: types.CommitID{
					Hash: store.WorkingHash(),
				},
			}
			storeInfos = append(storeInfos, si)
		}
	}

	sort.SliceStable(storeInfos, func(i, j int) bool {
		return storeInfos[i].Name < storeInfos[j].Name
	})

	return types.CommitInfo{StoreInfos: storeInfos}.Hash()
}

// CacheWrap implements CacheWrapper/Store/CommitStore.
func (rs *Store) CacheWrap() types.CacheWrap {
	return rs.CacheMultiStore().(types.CacheWrap)
}

// CacheWrapWithTrace implements the CacheWrapper interface.
func (rs *Store) CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) types.CacheWrap {
	return rs.CacheWrap()
}

// CacheMultiStore creates ephemeral branch of the multi-store and returns a CacheMultiStore.
// It implements the MultiStore interface.
func (rs *Store) CacheMultiStore() types.CacheMultiStore {
	stores := make(map[types.StoreKey]types.CacheWrapper)
	for k, v := range rs.stores {
		store := types.KVStore(v)
		// Wire the listenkv.Store to allow listeners to observe the writes from the cache store,
		// set same listeners on cache store will observe duplicated writes.
		if rs.ListeningEnabled(k) {
			store = listenkv.NewStore(store, k, rs.listeners[k])
		}
		stores[k] = store
	}

	return cachemulti.NewStore(rs.db, stores, rs.keysByName, rs.traceWriter, rs.getTracingContext())
}

// CacheMultiStoreWithVersion is analogous to CacheMultiStore except that it
// attempts to load stores at a given version (height). An error is returned if
// any store cannot be loaded. This should only be used for querying and
// iterating at past heights.
func (rs *Store) CacheMultiStoreWithVersion(version int64) (types.CacheMultiStore, error) {
	cachedStores := make(map[types.StoreKey]types.CacheWrapper)
	var commitInfo *types.CommitInfo
	storeInfos := map[string]bool{}

	for key, store := range rs.stores {
		var cacheStore types.KVStore
		switch store.GetStoreType() {
		case types.StoreTypeIAVL:
			// If the store is wrapped with an inter-block cache, we must first unwrap
			// it to get the underlying IAVL store.
			store = rs.GetCommitKVStore(key)

			// Attempt to lazy-load an already saved IAVL store version. If the
			// version does not exist or is pruned, an error should be returned.
			var err error
			cacheStore, err = store.(*iavl.Store).GetImmutable(version)
			// if we got error from loading a module store
			// we fetch commit info of this version
			// we use commit info to check if the store existed at this version or not
			if err != nil {
				if commitInfo == nil {
					var errCommitInfo error
					commitInfo, errCommitInfo = rs.GetCommitInfo(version)
					if errCommitInfo != nil {
						return nil, errCommitInfo
					}

					for _, storeInfo := range commitInfo.StoreInfos {
						storeInfos[storeInfo.Name] = true
					}
				}

				// If the store existed at this version, it means there's actually an error
				// getting the root store at this version.
				if storeInfos[key.Name()] {
					return nil, err
				}
			}

		default:
			cacheStore = store
		}

		// Wire the listenkv.Store to allow listeners to observe the writes from the cache store,
		// set same listeners on cache store will observe duplicated writes.
		if rs.ListeningEnabled(key) {
			cacheStore = listenkv.NewStore(cacheStore, key, rs.listeners[key])
		}

		cachedStores[key] = cacheStore
	}

	return cachemulti.NewStore(rs.db, cachedStores, rs.keysByName, rs.traceWriter, rs.getTracingContext()), nil
}

// GetStore returns a mounted Store for a given StoreKey. If the StoreKey does
// not exist, it will panic. If the Store is wrapped in an inter-block cache, it
// will be unwrapped prior to being returned.
//
// TODO: This isn't used directly upstream. Consider returning the Store as-is
// instead of unwrapping.
func (rs *Store) GetStore(key types.StoreKey) types.Store {
	store := rs.GetCommitKVStore(key)
	if store == nil {
		panic(fmt.Sprintf("store does not exist for key: %s", key.Name()))
	}

	return store
}

// GetKVStore returns a mounted KVStore for a given StoreKey. If tracing is
// enabled on the KVStore, a wrapped TraceKVStore will be returned with the root
// store's tracer, otherwise, the original KVStore will be returned.
//
// NOTE: The returned KVStore may be wrapped in an inter-block cache if it is
// set on the root store.
func (rs *Store) GetKVStore(key types.StoreKey) types.KVStore {
	s := rs.stores[key]
	if s == nil {
		panic(fmt.Sprintf("store does not exist for key: %s", key.Name()))
	}

	store := types.KVStore(s)

	if rs.TracingEnabled() {
		store = tracekv.NewStore(store, rs.traceWriter, rs.getTracingContext())
	}
	if rs.ListeningEnabled(key) {
		store = listenkv.NewStore(store, key, rs.listeners[key])
	}

	return store
}

func (rs *Store) handlePruning(version int64) error {
	pruneHeight := rs.pruningManager.GetPruningHeight(version)
	rs.logger.Info("prune start", "height", version)
	defer rs.logger.Info("prune end", "height", version)
	return rs.PruneStores(pruneHeight)
}

// PruneStores prunes all history upto the specific height of the multi store.
func (rs *Store) PruneStores(pruningHeight int64) (err error) {
	if pruningHeight <= 0 {
		rs.logger.Debug("pruning skipped, height is smaller than 0")
		return nil
	}

	rs.logger.Debug("pruning store", "heights", pruningHeight)

	for key, store := range rs.stores {
		rs.logger.Debug("pruning store", "key", key) // Also log store.name (a private variable)?

		// If the store is wrapped with an inter-block cache, we must first unwrap
		// it to get the underlying IAVL store.
		if store.GetStoreType() != types.StoreTypeIAVL {
			continue
		}

		store = rs.GetCommitKVStore(key)

		err := store.(*iavl.Store).DeleteVersionsTo(pruningHeight)
		if err == nil {
			continue
		}

		if errors.Is(err, iavltree.ErrVersionDoesNotExist) && err != nil {
			return err
		}
	}

	return nil
}

// getStoreByName performs a lookup of a StoreKey given a store name typically
// provided in a path. The StoreKey is then used to perform a lookup and return
// a Store. If the Store is wrapped in an inter-block cache, it will be unwrapped
// prior to being returned. If the StoreKey does not exist, nil is returned.
func (rs *Store) GetStoreByName(name string) types.Store {
	key := rs.keysByName[name]
	if key == nil {
		return nil
	}

	return rs.GetCommitKVStore(key)
}

// Query calls substore.Query with the same `req` where `req.Path` is
// modified to remove the substore prefix.
// Ie. `req.Path` here is `/<storeName>/<subpath>`, and trimmed to `/<subpath>` for the substore.
// TODO: add proof for `multistore -> substore`.
func (rs *Store) Query(req *types.RequestQuery) (*types.ResponseQuery, error) {
	path := req.Path
	storeName, subpath, err := parsePath(path)
	if err != nil {
		return &types.ResponseQuery{}, err
	}

	store := rs.GetStoreByName(storeName)
	if store == nil {
		return &types.ResponseQuery{}, errorsmod.Wrapf(types.ErrUnknownRequest, "no such store: %s", storeName)
	}

	queryable, ok := store.(types.Queryable)
	if !ok {
		return &types.ResponseQuery{}, errorsmod.Wrapf(types.ErrUnknownRequest, "store %s (type %T) doesn't support queries", storeName, store)
	}

	// trim the path and make the query
	req.Path = subpath
	res, err := queryable.Query(req)

	if !req.Prove || !RequireProof(subpath) {
		return res, err
	}

	if res.ProofOps == nil || len(res.ProofOps.Ops) == 0 {
		return &types.ResponseQuery{}, errorsmod.Wrap(types.ErrInvalidRequest, "proof is unexpectedly empty; ensure height has not been pruned")
	}

	// If the request's height is the latest height we've committed, then utilize
	// the store's lastCommitInfo as this commit info may not be flushed to disk.
	// Otherwise, we query for the commit info from disk.
	var commitInfo *types.CommitInfo

	if res.Height == rs.lastCommitInfo.Version {
		commitInfo = rs.lastCommitInfo
	} else {
		commitInfo, err = rs.GetCommitInfo(res.Height)
		if err != nil {
			return &types.ResponseQuery{}, err
		}
	}

	// Restore origin path and append proof op.
	res.ProofOps.Ops = append(res.ProofOps.Ops, commitInfo.ProofOp(storeName))

	return res, nil
}

// SetInitialVersion sets the initial version of the IAVL tree. It is used when
// starting a new chain at an arbitrary height.
func (rs *Store) SetInitialVersion(version int64) error {
	rs.initialVersion = version

	// Loop through all the stores, if it's an IAVL store, then set initial
	// version on it.
	for key, store := range rs.stores {
		if store.GetStoreType() == types.StoreTypeIAVL {
			// If the store is wrapped with an inter-block cache, we must first unwrap
			// it to get the underlying IAVL store.
			store = rs.GetCommitKVStore(key)
			store.(types.StoreWithInitialVersion).SetInitialVersion(version)
		}
	}

	return nil
}

// parsePath expects a format like /<storeName>[/<subpath>]
// Must start with /, subpath may be empty
// Returns error if it doesn't start with /
func parsePath(path string) (storeName, subpath string, err error) {
	if !strings.HasPrefix(path, "/") {
		return storeName, subpath, errorsmod.Wrapf(types.ErrUnknownRequest, "invalid path: %s", path)
	}

	paths := strings.SplitN(path[1:], "/", 2)
	storeName = paths[0]

	if len(paths) == 2 {
		subpath = "/" + paths[1]
	}

	return storeName, subpath, nil
}

//---------------------- Snapshotting ------------------

// Snapshot implements snapshottypes.Snapshotter. The snapshot output for a given format must be
// identical across nodes such that chunks from different sources fit together. If the output for a
// given format changes (at the byte level), the snapshot format must be bumped - see
// TestMultistoreSnapshot_Checksum test.
func (rs *Store) Snapshot(height uint64, protoWriter protoio.Writer) error {
	if height == 0 {
		return errorsmod.Wrap(types.ErrLogic, "cannot snapshot height 0")
	}
	if height > uint64(GetLatestVersion(rs.db)) {
		return errorsmod.Wrapf(types.ErrLogic, "cannot snapshot future height %v", height)
	}

	// Collect stores to snapshot (only IAVL stores are supported)
	type namedStore struct {
		*iavl.Store
		name string
	}

	stores := []namedStore{}
	keys := keysFromStoreKeyMap(rs.stores)
	for _, key := range keys {
		switch store := rs.GetCommitKVStore(key).(type) {
		case *iavl.Store:
			stores = append(stores, namedStore{name: key.Name(), Store: store})

		case *transient.Store, *mem.Store:
			// Non-persisted stores shouldn't be snapshotted
			continue

		default:
			return errorsmod.Wrapf(types.ErrLogic,
				"don't know how to snapshot store %q of type %T", key.Name(), store)
		}
	}

	sort.Slice(stores, func(i, j int) bool {
		return strings.Compare(stores[i].name, stores[j].name) == -1
	})

	// Export each IAVL store. Stores are serialized as a stream of SnapshotItem Protobuf
	// messages. The first item contains a SnapshotStore with store metadata (i.e. name),
	// and the following messages contain a SnapshotNode (i.e. an ExportNode). Store changes
	// are demarcated by new SnapshotStore items.
+ for _, store := range stores { + rs.logger.Debug("starting snapshot", "store", store.name, "height", height) + +exporter, err := store.Export(int64(height)) + if err != nil { + rs.logger.Error("snapshot failed; exporter error", "store", store.name, "err", err) + +return err +} + +err = func() + +error { + defer exporter.Close() + err := protoWriter.WriteMsg(&snapshottypes.SnapshotItem{ + Item: &snapshottypes.SnapshotItem_Store{ + Store: &snapshottypes.SnapshotStoreItem{ + Name: store.name, +}, +}, +}) + if err != nil { + rs.logger.Error("snapshot failed; item store write failed", "store", store.name, "err", err) + +return err +} + nodeCount := 0 + for { + node, err := exporter.Next() + if err == iavltree.ErrorExportDone { + rs.logger.Debug("snapshot Done", "store", store.name, "nodeCount", nodeCount) + +break +} + +else if err != nil { + return err +} + +err = protoWriter.WriteMsg(&snapshottypes.SnapshotItem{ + Item: &snapshottypes.SnapshotItem_IAVL{ + IAVL: &snapshottypes.SnapshotIAVLItem{ + Key: node.Key, + Value: node.Value, + Height: int32(node.Height), + Version: node.Version, +}, +}, +}) + if err != nil { + return err +} + +nodeCount++ +} + +return nil +}() + if err != nil { + return err +} + +} + +return nil +} + +/ Restore implements snapshottypes.Snapshotter. +/ returns next snapshot item and error. +func (rs *Store) + +Restore( + height uint64, format uint32, protoReader protoio.Reader, +) (snapshottypes.SnapshotItem, error) { + / Import nodes into stores. The first item is expected to be a SnapshotItem containing + / a SnapshotStoreItem, telling us which store to import into. The following items will contain + / SnapshotNodeItem (i.e. ExportNode) + +until we reach the next SnapshotStoreItem or EOF. 
+ var importer *iavltree.Importer + var snapshotItem snapshottypes.SnapshotItem +loop: + for { + snapshotItem = snapshottypes.SnapshotItem{ +} + err := protoReader.ReadMsg(&snapshotItem) + if err == io.EOF { + break +} + +else if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "invalid protobuf message") +} + switch item := snapshotItem.Item.(type) { + case *snapshottypes.SnapshotItem_Store: + if importer != nil { + err = importer.Commit() + if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "IAVL commit failed") +} + +importer.Close() +} + +store, ok := rs.GetStoreByName(item.Store.Name).(*iavl.Store) + if !ok || store == nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrapf(types.ErrLogic, "cannot import into non-IAVL store %q", item.Store.Name) +} + +importer, err = store.Import(int64(height)) + if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "import failed") +} + +defer importer.Close() + / Importer height must reflect the node height (which usually matches the block height, but not always) + +rs.logger.Debug("restoring snapshot", "store", item.Store.Name) + case *snapshottypes.SnapshotItem_IAVL: + if importer == nil { + rs.logger.Error("failed to restore; received IAVL node item before store item") + +return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(types.ErrLogic, "received IAVL node item before store item") +} + if item.IAVL.Height > math.MaxInt8 { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrapf(types.ErrLogic, "node height %v cannot exceed %v", + item.IAVL.Height, math.MaxInt8) +} + node := &iavltree.ExportNode{ + Key: item.IAVL.Key, + Value: item.IAVL.Value, + Height: int8(item.IAVL.Height), + Version: item.IAVL.Version, +} + / Protobuf does not differentiate between []byte{ +} + +as nil, but fortunately IAVL does + / not allow nil keys nor nil values for leaf nodes, so we can always set them to empty. 
+ if node.Key == nil { + node.Key = []byte{ +} + +} + if node.Height == 0 && node.Value == nil { + node.Value = []byte{ +} + +} + err := importer.Add(node) + if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "IAVL node import failed") +} + +default: + break loop +} + +} + if importer != nil { + err := importer.Commit() + if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "IAVL commit failed") +} + +importer.Close() +} + +rs.flushMetadata(rs.db, int64(height), rs.buildCommitInfo(int64(height))) + +return snapshotItem, rs.LoadLatestVersion() +} + +func (rs *Store) + +loadCommitStoreFromParams(key types.StoreKey, id types.CommitID, params storeParams) (types.CommitKVStore, error) { + var db dbm.DB + if params.db != nil { + db = dbm.NewPrefixDB(params.db, []byte("s/_/")) +} + +else { + prefix := "s/k:" + params.key.Name() + "/" + db = dbm.NewPrefixDB(rs.db, []byte(prefix)) +} + switch params.typ { + case types.StoreTypeMulti: + panic("recursive MultiStores not yet supported") + case types.StoreTypeIAVL: + var store types.CommitKVStore + var err error + if params.initialVersion == 0 { + store, err = iavl.LoadStore(db, rs.logger, key, id, rs.iavlCacheSize, rs.iavlDisableFastNode, rs.metrics) +} + +else { + store, err = iavl.LoadStoreWithInitialVersion(db, rs.logger, key, id, params.initialVersion, rs.iavlCacheSize, rs.iavlDisableFastNode, rs.metrics) +} + if err != nil { + return nil, err +} + if rs.interBlockCache != nil { + / Wrap and get a CommitKVStore with inter-block caching. Note, this should + / only wrap the primary CommitKVStore, not any store that is already + / branched as that will create unexpected behavior. 
+ store = rs.interBlockCache.GetStoreCache(key, store) +} + +return store, err + case types.StoreTypeDB: + return commitDBStoreAdapter{ + Store: dbadapter.Store{ + DB: db +}}, nil + case types.StoreTypeTransient: + _, ok := key.(*types.TransientStoreKey) + if !ok { + return nil, fmt.Errorf("invalid StoreKey for StoreTypeTransient: %s", key.String()) +} + +return transient.NewStore(), nil + case types.StoreTypeMemory: + if _, ok := key.(*types.MemoryStoreKey); !ok { + return nil, fmt.Errorf("unexpected key type for a MemoryStoreKey; got: %s", key.String()) +} + +return mem.NewStore(), nil + + default: + panic(fmt.Sprintf("unrecognized store type %v", params.typ)) +} +} + +func (rs *Store) + +buildCommitInfo(version int64) *types.CommitInfo { + keys := keysFromStoreKeyMap(rs.stores) + storeInfos := []types.StoreInfo{ +} + for _, key := range keys { + store := rs.stores[key] + storeType := store.GetStoreType() + if storeType == types.StoreTypeTransient || storeType == types.StoreTypeMemory { + continue +} + +storeInfos = append(storeInfos, types.StoreInfo{ + Name: key.Name(), + CommitId: store.LastCommitID(), +}) +} + +return &types.CommitInfo{ + Version: version, + StoreInfos: storeInfos, +} +} + +/ RollbackToVersion delete the versions after `target` and update the latest version. +func (rs *Store) + +RollbackToVersion(target int64) + +error { + if target <= 0 { + return fmt.Errorf("invalid rollback height target: %d", target) +} + for key, store := range rs.stores { + if store.GetStoreType() == types.StoreTypeIAVL { + / If the store is wrapped with an inter-block cache, we must first unwrap + / it to get the underlying IAVL store. + store = rs.GetCommitKVStore(key) + err := store.(*iavl.Store).LoadVersionForOverwriting(target) + if err != nil { + return err +} + +} + +} + +rs.flushMetadata(rs.db, target, rs.buildCommitInfo(target)) + +return rs.LoadLatestVersion() +} + +/ SetCommitHeader sets the commit block header of the store. 
+func (rs *Store) + +SetCommitHeader(h cmtproto.Header) { + rs.commitHeader = h +} + +/ GetCommitInfo attempts to retrieve CommitInfo for a given version/height. It +/ will return an error if no CommitInfo exists, we fail to unmarshal the record +/ or if we cannot retrieve the object from the DB. +func (rs *Store) + +GetCommitInfo(ver int64) (*types.CommitInfo, error) { + cInfoKey := fmt.Sprintf(commitInfoKeyFmt, ver) + +bz, err := rs.db.Get([]byte(cInfoKey)) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to get commit info") +} + +else if bz == nil { + return nil, errors.New("no commit info found") +} + cInfo := &types.CommitInfo{ +} + if err = cInfo.Unmarshal(bz); err != nil { + return nil, errorsmod.Wrap(err, "failed unmarshal commit info") +} + +return cInfo, nil +} + +func (rs *Store) + +flushMetadata(db dbm.DB, version int64, cInfo *types.CommitInfo) { + rs.logger.Debug("flushing metadata", "height", version) + batch := db.NewBatch() + +defer batch.Close() + if cInfo != nil { + flushCommitInfo(batch, version, cInfo) +} + +else { + rs.logger.Debug("commitInfo is nil, not flushed", "height", version) +} + +flushLatestVersion(batch, version) + if err := batch.WriteSync(); err != nil { + panic(fmt.Errorf("error on batch write %w", err)) +} + +rs.logger.Debug("flushing metadata finished", "height", version) +} + +type storeParams struct { + key types.StoreKey + db dbm.DB + typ types.StoreType + initialVersion uint64 +} + +func newStoreParams(key types.StoreKey, db dbm.DB, typ types.StoreType, initialVersion uint64) + +storeParams { + return storeParams{ + key: key, + db: db, + typ: typ, + initialVersion: initialVersion, +} +} + +func GetLatestVersion(db dbm.DB) + +int64 { + bz, err := db.Get([]byte(latestVersionKey)) + if err != nil { + panic(err) +} + +else if bz == nil { + return 0 +} + +var latestVersion int64 + if err := gogotypes.StdInt64Unmarshal(&latestVersion, bz); err != nil { + panic(err) +} + +return latestVersion +} + +/ Commits each store 
and returns a new commitInfo.
+func commitStores(version int64, storeMap map[types.StoreKey]types.CommitKVStore, removalMap map[types.StoreKey]bool) *types.CommitInfo {
+	storeInfos := make([]types.StoreInfo, 0, len(storeMap))
+	storeKeys := keysFromStoreKeyMap(storeMap)
+	for _, key := range storeKeys {
+		store := storeMap[key]
+		last := store.LastCommitID()
+
+		/ If a commit event execution is interrupted, a new iavl store's version
+		/ will be larger than the RMS's metadata, when the block is replayed, we
+		/ should avoid committing that iavl store again.
+		var commitID types.CommitID
+		if last.Version >= version {
+			last.Version = version
+			commitID = last
+		} else {
+			commitID = store.Commit()
+		}
+		storeType := store.GetStoreType()
+		if storeType == types.StoreTypeTransient || storeType == types.StoreTypeMemory {
+			continue
+		}
+		if !removalMap[key] {
+			si := types.StoreInfo{}
+			si.Name = key.Name()
+			si.CommitId = commitID
+			storeInfos = append(storeInfos, si)
+		}
+	}
+
+	sort.SliceStable(storeInfos, func(i, j int) bool {
+		return strings.Compare(storeInfos[i].Name, storeInfos[j].Name) < 0
+	})
+
+	return &types.CommitInfo{
+		Version:    version,
+		StoreInfos: storeInfos,
+	}
+}
+
+func flushCommitInfo(batch dbm.Batch, version int64, cInfo *types.CommitInfo) {
+	bz, err := cInfo.Marshal()
+	if err != nil {
+		panic(err)
+	}
+	cInfoKey := fmt.Sprintf(commitInfoKeyFmt, version)
+	batch.Set([]byte(cInfoKey), bz)
+}
+
+func flushLatestVersion(batch dbm.Batch, version int64) {
+	bz, err := gogotypes.StdInt64Marshal(version)
+	if err != nil {
+		panic(err)
+	}
+	batch.Set([]byte(latestVersionKey), bz)
+}
+```
+
+The `rootMulti.Store` is a base-layer multistore built around a `db` on top of which multiple `KVStores` can be mounted, and is the default multistore used in [`baseapp`](/docs/sdk/v0.50/learn/advanced/baseapp).
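
The commit flow above can be modelled in plain Go. The sketch below is not the SDK implementation — it is a stdlib-only toy, with invented names (`toyStore`, `toyMultiStore`, `storeInfo`), that mirrors two properties of `commitStores`: each substore contributes its own commit hash, and the store infos are sorted by name so the resulting commit info is deterministic across nodes.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// toyStore stands in for a single CommitKVStore: a flat key/value map
// whose "commit hash" is a digest over its sorted entries.
type toyStore struct {
	data map[string]string
}

func (s *toyStore) hash() [32]byte {
	keys := make([]string, 0, len(s.data))
	for k := range s.data {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for determinism
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(s.data[k]))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// storeInfo pairs a substore name with its commit hash, like types.StoreInfo.
type storeInfo struct {
	Name string
	Hash [32]byte
}

// toyMultiStore stands in for rootmulti.Store: named substores committed
// together into a sorted list of store infos.
type toyMultiStore struct {
	stores map[string]*toyStore
}

func (ms *toyMultiStore) Commit(version int64) (int64, []storeInfo) {
	infos := make([]storeInfo, 0, len(ms.stores))
	for name, s := range ms.stores {
		infos = append(infos, storeInfo{Name: name, Hash: s.hash()})
	}
	// Sort by store name so every node produces the same commit info.
	sort.Slice(infos, func(i, j int) bool { return infos[i].Name < infos[j].Name })
	return version, infos
}

func main() {
	ms := &toyMultiStore{stores: map[string]*toyStore{
		"staking": {data: map[string]string{"validator/v1": "bonded"}},
		"bank":    {data: map[string]string{"balance/alice": "10"}},
		"acc":     {data: map[string]string{"account/alice": "1"}},
	}}
	v, infos := ms.Commit(1)
	for _, si := range infos {
		fmt.Printf("version %d store %s hash %x\n", v, si.Name, si.Hash[:4])
	}
}
```

Despite the simplification, the ordering step is the essential part: the real `commitStores` sorts its `StoreInfo` slice by name for exactly this reason.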
+ +### CacheMultiStore + +Whenever the `rootMulti.Store` needs to be branched, a [`cachemulti.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/cachemulti/store.go) is used. + +```go expandable +package cachemulti + +import ( + + "fmt" + "io" + + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/store/cachekv" + "cosmossdk.io/store/dbadapter" + "cosmossdk.io/store/tracekv" + "cosmossdk.io/store/types" +) + +/ storeNameCtxKey is the TraceContext metadata key that identifies +/ the store which emitted a given trace. +const storeNameCtxKey = "store_name" + +/---------------------------------------- +/ Store + +/ Store holds many branched stores. +/ Implements MultiStore. +/ NOTE: a Store (and MultiStores in general) + +should never expose the +/ keys for the substores. +type Store struct { + db types.CacheKVStore + stores map[types.StoreKey]types.CacheWrap + keys map[string]types.StoreKey + + traceWriter io.Writer + traceContext types.TraceContext +} + +var _ types.CacheMultiStore = Store{ +} + +/ NewFromKVStore creates a new Store object from a mapping of store keys to +/ CacheWrapper objects and a KVStore as the database. Each CacheWrapper store +/ is a branched store. 
+func NewFromKVStore( + store types.KVStore, stores map[types.StoreKey]types.CacheWrapper, + keys map[string]types.StoreKey, traceWriter io.Writer, traceContext types.TraceContext, +) + +Store { + cms := Store{ + db: cachekv.NewStore(store), + stores: make(map[types.StoreKey]types.CacheWrap, len(stores)), + keys: keys, + traceWriter: traceWriter, + traceContext: traceContext, +} + for key, store := range stores { + if cms.TracingEnabled() { + tctx := cms.traceContext.Clone().Merge(types.TraceContext{ + storeNameCtxKey: key.Name(), +}) + +store = tracekv.NewStore(store.(types.KVStore), cms.traceWriter, tctx) +} + +cms.stores[key] = cachekv.NewStore(store.(types.KVStore)) +} + +return cms +} + +/ NewStore creates a new Store object from a mapping of store keys to +/ CacheWrapper objects. Each CacheWrapper store is a branched store. +func NewStore( + db dbm.DB, stores map[types.StoreKey]types.CacheWrapper, keys map[string]types.StoreKey, + traceWriter io.Writer, traceContext types.TraceContext, +) + +Store { + return NewFromKVStore(dbadapter.Store{ + DB: db +}, stores, keys, traceWriter, traceContext) +} + +func newCacheMultiStoreFromCMS(cms Store) + +Store { + stores := make(map[types.StoreKey]types.CacheWrapper) + for k, v := range cms.stores { + stores[k] = v +} + +return NewFromKVStore(cms.db, stores, nil, cms.traceWriter, cms.traceContext) +} + +/ SetTracer sets the tracer for the MultiStore that the underlying +/ stores will utilize to trace operations. A MultiStore is returned. +func (cms Store) + +SetTracer(w io.Writer) + +types.MultiStore { + cms.traceWriter = w + return cms +} + +/ SetTracingContext updates the tracing context for the MultiStore by merging +/ the given context with the existing context by key. Any existing keys will +/ be overwritten. It is implied that the caller should update the context when +/ necessary between tracing operations. It returns a modified MultiStore. 
+func (cms Store) + +SetTracingContext(tc types.TraceContext) + +types.MultiStore { + if cms.traceContext != nil { + for k, v := range tc { + cms.traceContext[k] = v +} + +} + +else { + cms.traceContext = tc +} + +return cms +} + +/ TracingEnabled returns if tracing is enabled for the MultiStore. +func (cms Store) + +TracingEnabled() + +bool { + return cms.traceWriter != nil +} + +/ LatestVersion returns the branch version of the store +func (cms Store) + +LatestVersion() + +int64 { + panic("cannot get latest version from branch cached multi-store") +} + +/ GetStoreType returns the type of the store. +func (cms Store) + +GetStoreType() + +types.StoreType { + return types.StoreTypeMulti +} + +/ Write calls Write on each underlying store. +func (cms Store) + +Write() { + cms.db.Write() + for _, store := range cms.stores { + store.Write() +} +} + +/ Implements CacheWrapper. +func (cms Store) + +CacheWrap() + +types.CacheWrap { + return cms.CacheMultiStore().(types.CacheWrap) +} + +/ CacheWrapWithTrace implements the CacheWrapper interface. +func (cms Store) + +CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) + +types.CacheWrap { + return cms.CacheWrap() +} + +/ Implements MultiStore. +func (cms Store) + +CacheMultiStore() + +types.CacheMultiStore { + return newCacheMultiStoreFromCMS(cms) +} + +/ CacheMultiStoreWithVersion implements the MultiStore interface. It will panic +/ as an already cached multi-store cannot load previous versions. +/ +/ TODO: The store implementation can possibly be modified to support this as it +/ seems safe to load previous versions (heights). +func (cms Store) + +CacheMultiStoreWithVersion(_ int64) (types.CacheMultiStore, error) { + panic("cannot branch cached multi-store with a version") +} + +/ GetStore returns an underlying Store by key. 
+func (cms Store)
+
+GetStore(key types.StoreKey)
+
+types.Store {
+	s := cms.stores[key]
+	if key == nil || s == nil {
+		panic(fmt.Sprintf("kv store with key %v has not been registered in stores", key))
+	}
+
+	return s.(types.Store)
+}
+
+/ GetKVStore returns an underlying KVStore by key.
+func (cms Store)
+
+GetKVStore(key types.StoreKey)
+
+types.KVStore {
+	store := cms.stores[key]
+	if key == nil || store == nil {
+		panic(fmt.Sprintf("kv store with key %v has not been registered in stores", key))
+	}
+
+	return store.(types.KVStore)
+}
+```
+
+`cachemulti.Store` branches all substores (creates a virtual store for each substore) in its constructor and holds them in `Store.stores`. Moreover, it caches all read queries. `Store.GetKVStore()` returns the store from `Store.stores`, and `Store.Write()` recursively calls `CacheWrap.Write()` on all the substores.
+
+## Base-layer KVStores
+
+### `KVStore` and `CommitKVStore` Interfaces
+
+A `KVStore` is a simple key-value store used to store and retrieve data. A `CommitKVStore` is a `KVStore` that also implements a `Committer`. By default, stores mounted in `baseapp`'s main `CommitMultiStore` are `CommitKVStore`s. The `KVStore` interface is primarily used to restrict modules from accessing the committer.
+
+Individual `KVStore`s are used by modules to manage a subset of the global state. `KVStores` can be accessed by objects that hold a specific key. This `key` should only be exposed to the [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper) of the module that defines the store.
+
+`CommitKVStore`s are declared by proxy of their respective `key` and mounted on the application's [multistore](#multistore) in the [main application file](/docs/sdk/v0.50/learn/beginner/app-anatomy#core-application-file). In the same file, the `key` is also passed to the module's `keeper` that is responsible for managing the store.
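
The object-capability pattern behind store `key`s can be sketched with a toy multistore — again stdlib-only, with invented names (`storeKey`, `multiStore`), not the SDK's actual types. The point it illustrates: substores are indexed by the key's pointer identity rather than its name, so only the holder of the original pointer (in practice, the module's keeper) can reach the substore, and constructing a second key with the same name grants nothing.

```go
package main

import "fmt"

// storeKey mimics types.KVStoreKey: only the pointer identity matters,
// so holding the pointer acts as the capability to access the substore.
type storeKey struct{ name string }

func newStoreKey(name string) *storeKey { return &storeKey{name: name} }

// multiStore maps key pointers (not names) to substores.
type multiStore struct {
	stores map[*storeKey]map[string][]byte
}

func (ms *multiStore) mount(key *storeKey) {
	ms.stores[key] = make(map[string][]byte)
}

func (ms *multiStore) getKVStore(key *storeKey) map[string][]byte {
	s, ok := ms.stores[key]
	if !ok {
		panic(fmt.Sprintf("store with key %v has not been registered", key))
	}
	return s
}

func main() {
	ms := &multiStore{stores: make(map[*storeKey]map[string][]byte)}
	bankKey := newStoreKey("bank")
	ms.mount(bankKey)

	// The holder of bankKey can read and write the bank substore.
	ms.getKVStore(bankKey)["balance/alice"] = []byte("10")

	// A different pointer with the same name is a different capability:
	// it was never mounted, so it does not resolve to any substore.
	forgedKey := newStoreKey("bank")
	_, mounted := ms.stores[forgedKey]
	fmt.Println("forged key mounted:", mounted)
}
```

This is why the SDK's key constructors return pointers and warn that "only the pointer value should ever be used".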
+ +```go expandable +package types + +import ( + + "fmt" + "io" + "github.com/cometbft/cometbft/proto/tendermint/crypto" + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/store/metrics" + pruningtypes "cosmossdk.io/store/pruning/types" + snapshottypes "cosmossdk.io/store/snapshots/types" +) + +type Store interface { + GetStoreType() + +StoreType + CacheWrapper +} + +/ something that can persist to disk +type Committer interface { + Commit() + +CommitID + LastCommitID() + +CommitID + + / WorkingHash returns the hash of the KVStore's state before commit. + WorkingHash() []byte + + SetPruning(pruningtypes.PruningOptions) + +GetPruning() + +pruningtypes.PruningOptions +} + +/ Stores of MultiStore must implement CommitStore. +type CommitStore interface { + Committer + Store +} + +/ Queryable allows a Store to expose internal state to the abci.Query +/ interface. Multistore can route requests to the proper Store. +/ +/ This is an optional, but useful extension to any CommitStore +type Queryable interface { + Query(*RequestQuery) (*ResponseQuery, error) +} + +type RequestQuery struct { + Data []byte + Path string + Height int64 + Prove bool +} + +type ResponseQuery struct { + Code uint32 + Log string + Info string + Index int64 + Key []byte + Value []byte + ProofOps *crypto.ProofOps + Height int64 + Codespace string +} + +/---------------------------------------- +/ MultiStore + +/ StoreUpgrades defines a series of transformations to apply the multistore db upon load +type StoreUpgrades struct { + Added []string `json:"added"` + Renamed []StoreRename `json:"renamed"` + Deleted []string `json:"deleted"` +} + +/ StoreRename defines a name change of a sub-store. +/ All data previously under a PrefixStore with OldKey will be copied +/ to a PrefixStore with NewKey, then deleted from OldKey store. 
+type StoreRename struct { + OldKey string `json:"old_key"` + NewKey string `json:"new_key"` +} + +/ IsAdded returns true if the given key should be added +func (s *StoreUpgrades) + +IsAdded(key string) + +bool { + if s == nil { + return false +} + for _, added := range s.Added { + if key == added { + return true +} + +} + +return false +} + +/ IsDeleted returns true if the given key should be deleted +func (s *StoreUpgrades) + +IsDeleted(key string) + +bool { + if s == nil { + return false +} + for _, d := range s.Deleted { + if d == key { + return true +} + +} + +return false +} + +/ RenamedFrom returns the oldKey if it was renamed +/ Returns "" if it was not renamed +func (s *StoreUpgrades) + +RenamedFrom(key string) + +string { + if s == nil { + return "" +} + for _, re := range s.Renamed { + if re.NewKey == key { + return re.OldKey +} + +} + +return "" +} + +type MultiStore interface { + Store + + / Branches MultiStore into a cached storage object. + / NOTE: Caller should probably not call .Write() + +on each, but + / call CacheMultiStore.Write(). + CacheMultiStore() + +CacheMultiStore + + / CacheMultiStoreWithVersion branches the underlying MultiStore where + / each stored is loaded at a specific version (height). + CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error) + + / Convenience for fetching substores. + / If the store does not exist, panics. + GetStore(StoreKey) + +Store + GetKVStore(StoreKey) + +KVStore + + / TracingEnabled returns if tracing is enabled for the MultiStore. + TracingEnabled() + +bool + + / SetTracer sets the tracer for the MultiStore that the underlying + / stores will utilize to trace operations. The modified MultiStore is + / returned. + SetTracer(w io.Writer) + +MultiStore + + / SetTracingContext sets the tracing context for a MultiStore. It is + / implied that the caller should update the context when necessary between + / tracing operations. The modified MultiStore is returned. 
+ SetTracingContext(TraceContext) + +MultiStore + + / LatestVersion returns the latest version in the store + LatestVersion() + +int64 +} + +/ From MultiStore.CacheMultiStore().... +type CacheMultiStore interface { + MultiStore + Write() / Writes operations to underlying KVStore +} + +/ CommitMultiStore is an interface for a MultiStore without cache capabilities. +type CommitMultiStore interface { + Committer + MultiStore + snapshottypes.Snapshotter + + / Mount a store of type using the given db. + / If db == nil, the new store will use the CommitMultiStore db. + MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB) + + / Panics on a nil key. + GetCommitStore(key StoreKey) + +CommitStore + + / Panics on a nil key. + GetCommitKVStore(key StoreKey) + +CommitKVStore + + / Load the latest persisted version. Called once after all calls to + / Mount*Store() + +are complete. + LoadLatestVersion() + +error + + / LoadLatestVersionAndUpgrade will load the latest version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) + +error + + / LoadVersionAndUpgrade will load the named version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) + +error + + / Load a specific persisted version. When you load an old version, or when + / the last commit attempt didn't complete, the next commit after loading + / must be idempotent (return the same commit id). Otherwise the behavior is + / undefined. + LoadVersion(ver int64) + +error + + / Set an inter-block (persistent) + +cache that maintains a mapping from + / StoreKeys to CommitKVStores. + SetInterBlockCache(MultiStorePersistentCache) + + / SetInitialVersion sets the initial version of the IAVL tree. 
It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) + +error + + / SetIAVLCacheSize sets the cache size of the IAVL tree. + SetIAVLCacheSize(size int) + + / SetIAVLDisableFastNode enables/disables fastnode feature on iavl. + SetIAVLDisableFastNode(disable bool) + + / RollbackToVersion rollback the db to specific version(height). + RollbackToVersion(version int64) + +error + + / ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey + ListeningEnabled(key StoreKey) + +bool + + / AddListeners adds a listener for the KVStore belonging to the provided StoreKey + AddListeners(keys []StoreKey) + + / PopStateCache returns the accumulated state change messages from the CommitMultiStore + PopStateCache() []*StoreKVPair + + / SetMetrics sets the metrics for the KVStore + SetMetrics(metrics metrics.StoreMetrics) +} + +/---------subsp------------------------------- +/ KVStore + +/ BasicKVStore is a simple interface to get/set data +type BasicKVStore interface { + / Get returns nil if key doesn't exist. Panics on nil key. + Get(key []byte) []byte + + / Has checks if a key exists. Panics on nil key. + Has(key []byte) + +bool + + / Set sets the key. Panics on nil key or value. + Set(key, value []byte) + + / Delete deletes the key. Panics on nil key. + Delete(key []byte) +} + +/ KVStore additionally provides iteration and deletion +type KVStore interface { + Store + BasicKVStore + + / Iterator over a domain of keys in ascending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / To iterate over entire domain, use store.Iterator(nil, nil) + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + Iterator(start, end []byte) + +Iterator + + / Iterator over a domain of keys in descending order. End is exclusive. 
+ / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + ReverseIterator(start, end []byte) + +Iterator +} + +/ Iterator is an alias db's Iterator for convenience. +type Iterator = dbm.Iterator + +/ CacheKVStore branches a KVStore and provides read cache functionality. +/ After calling .Write() + +on the CacheKVStore, all previously created +/ CacheKVStores on the object expire. +type CacheKVStore interface { + KVStore + + / Writes operations to underlying KVStore + Write() +} + +/ CommitKVStore is an interface for MultiStore. +type CommitKVStore interface { + Committer + KVStore +} + +/---------------------------------------- +/ CacheWrap + +/ CacheWrap is the most appropriate interface for store ephemeral branching and cache. +/ For example, IAVLStore.CacheWrap() + +returns a CacheKVStore. CacheWrap should not return +/ a Committer, since Commit ephemeral store make no sense. It can return KVStore, +/ HeapStore, SpaceStore, etc. +type CacheWrap interface { + / Write syncs with the underlying store. + Write() + + / CacheWrap recursively wraps again. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace recursively wraps again with tracing enabled. + CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +type CacheWrapper interface { + / CacheWrap branches a store. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace branches a store with tracing enabled. 
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + for k, v := range tc { + ret[k] = v +} + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + for k, v := range newTc { + tc[k] = v +} + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
+func NewMemoryStoreKeys(names ...string)
+
+map[string]*MemoryStoreKey {
+ assertNoCommonPrefix(names)
+ keys := make(map[string]*MemoryStoreKey)
+ for _, n := range names {
+ keys[n] = NewMemoryStoreKey(n)
+}
+
+return keys
+}
+```
+
+Apart from the traditional `Get` and `Set` methods that a `KVStore` must implement via the `BasicKVStore` interface, a `KVStore` must provide an `Iterator(start, end)` method which returns an `Iterator` object. It is used to iterate over a range of keys, typically keys that share a common prefix. Below is an example from the bank module's keeper, used to iterate over all account balances:
+
+```go expandable
+package keeper
+
+import (
+
+ "context"
+ "fmt"
+ "cosmossdk.io/collections/indexes"
+ "cosmossdk.io/core/store"
+ "cosmossdk.io/log"
+ "github.com/cockroachdb/errors"
+ "cosmossdk.io/collections"
+ "cosmossdk.io/math"
+
+ errorsmod "cosmossdk.io/errors"
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+ "github.com/cosmos/cosmos-sdk/x/bank/types"
+)
+
+var _ ViewKeeper = (*BaseViewKeeper)(nil)
+
+/ ViewKeeper defines a module interface that facilitates read only access to
+/ account balances.
+type ViewKeeper interface { + ValidateBalance(ctx context.Context, addr sdk.AccAddress) + +error + HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) + +bool + + GetAllBalances(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + GetAccountsBalances(ctx context.Context) []types.Balance + GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + LockedCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + + IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool)) + +IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool)) +} + +func newBalancesIndexes(sb *collections.SchemaBuilder) + +BalancesIndexes { + return BalancesIndexes{ + Denom: indexes.NewReversePair[math.Int](/docs/sdk/v0.50/learn/advanced/ + sb, types.DenomAddressPrefix, "address_by_denom_index", + collections.PairKeyCodec(sdk.AddressKeyAsIndexKey(sdk.AccAddressKey), collections.StringKey), / nolint:staticcheck / Note: refer to the AddressKeyAsIndexKey docs to understand why we do this. + ), +} +} + +type BalancesIndexes struct { + Denom *indexes.ReversePair[sdk.AccAddress, string, math.Int] +} + +func (b BalancesIndexes) + +IndexesList() []collections.Index[collections.Pair[sdk.AccAddress, string], math.Int] { + return []collections.Index[collections.Pair[sdk.AccAddress, string], math.Int]{ + b.Denom +} +} + +/ BaseViewKeeper implements a read only keeper implementation of ViewKeeper. 
+type BaseViewKeeper struct { + cdc codec.BinaryCodec + storeService store.KVStoreService + ak types.AccountKeeper + logger log.Logger + + Schema collections.Schema + Supply collections.Map[string, math.Int] + DenomMetadata collections.Map[string, types.Metadata] + SendEnabled collections.Map[string, bool] + Balances *collections.IndexedMap[collections.Pair[sdk.AccAddress, string], math.Int, BalancesIndexes] + Params collections.Item[types.Params] +} + +/ NewBaseViewKeeper returns a new BaseViewKeeper. +func NewBaseViewKeeper(cdc codec.BinaryCodec, storeService store.KVStoreService, ak types.AccountKeeper, logger log.Logger) + +BaseViewKeeper { + sb := collections.NewSchemaBuilder(storeService) + k := BaseViewKeeper{ + cdc: cdc, + storeService: storeService, + ak: ak, + logger: logger, + Supply: collections.NewMap(sb, types.SupplyKey, "supply", collections.StringKey, sdk.IntValue), + DenomMetadata: collections.NewMap(sb, types.DenomMetadataPrefix, "denom_metadata", collections.StringKey, codec.CollValue[types.Metadata](/docs/sdk/v0.50/learn/advanced/cdc)), + SendEnabled: collections.NewMap(sb, types.SendEnabledPrefix, "send_enabled", collections.StringKey, codec.BoolValue), / NOTE: we use a bool value which uses protobuf to retain state backwards compat + Balances: collections.NewIndexedMap(sb, types.BalancesPrefix, "balances", collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), types.NewBalanceCompatValueCodec(), newBalancesIndexes(sb)), + Params: collections.NewItem(sb, types.ParamsKey, "params", codec.CollValue[types.Params](/docs/sdk/v0.50/learn/advanced/cdc)), +} + +schema, err := sb.Build() + if err != nil { + panic(err) +} + +k.Schema = schema + return k +} + +/ HasBalance returns whether or not an account has at least amt balance. +func (k BaseViewKeeper) + +HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) + +bool { + return k.GetBalance(ctx, addr, amt.Denom).IsGTE(amt) +} + +/ Logger returns a module-specific logger. 
+func (k BaseViewKeeper) + +Logger() + +log.Logger { + return k.logger +} + +/ GetAllBalances returns all the account balances for the given account address. +func (k BaseViewKeeper) + +GetAllBalances(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + balances := sdk.NewCoins() + +k.IterateAccountBalances(ctx, addr, func(balance sdk.Coin) + +bool { + balances = balances.Add(balance) + +return false +}) + +return balances.Sort() +} + +/ GetAccountsBalances returns all the accounts balances from the store. +func (k BaseViewKeeper) + +GetAccountsBalances(ctx context.Context) []types.Balance { + balances := make([]types.Balance, 0) + mapAddressToBalancesIdx := make(map[string]int) + +k.IterateAllBalances(ctx, func(addr sdk.AccAddress, balance sdk.Coin) + +bool { + idx, ok := mapAddressToBalancesIdx[addr.String()] + if ok { + / address is already on the set of accounts balances + balances[idx].Coins = balances[idx].Coins.Add(balance) + +balances[idx].Coins.Sort() + +return false +} + accountBalance := types.Balance{ + Address: addr.String(), + Coins: sdk.NewCoins(balance), +} + +balances = append(balances, accountBalance) + +mapAddressToBalancesIdx[addr.String()] = len(balances) - 1 + return false +}) + +return balances +} + +/ GetBalance returns the balance of a specific denomination for a given account +/ by address. +func (k BaseViewKeeper) + +GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin { + amt, err := k.Balances.Get(ctx, collections.Join(addr, denom)) + if err != nil { + return sdk.NewCoin(denom, sdk.ZeroInt()) +} + +return sdk.NewCoin(denom, amt) +} + +/ IterateAccountBalances iterates over the balances of a single account and +/ provides the token balance to a callback. If true is returned from the +/ callback, iteration is halted. 
+func (k BaseViewKeeper) + +IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(sdk.Coin) + +bool) { + err := k.Balances.Walk(ctx, collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/v0.50/learn/advanced/addr), func(key collections.Pair[sdk.AccAddress, string], value math.Int) (stop bool, err error) { + return cb(sdk.NewCoin(key.K2(), value)), nil +}) + if err != nil && !errors.Is(err, collections.ErrInvalidIterator) { + panic(err) +} +} + +/ IterateAllBalances iterates over all the balances of all accounts and +/ denominations that are provided to a callback. If true is returned from the +/ callback, iteration is halted. +func (k BaseViewKeeper) + +IterateAllBalances(ctx context.Context, cb func(sdk.AccAddress, sdk.Coin) + +bool) { + err := k.Balances.Walk(ctx, nil, func(key collections.Pair[sdk.AccAddress, string], value math.Int) (stop bool, err error) { + return cb(key.K1(), sdk.NewCoin(key.K2(), value)), nil +}) + if err != nil && !errors.Is(err, collections.ErrInvalidIterator) { + panic(err) +} +} + +/ LockedCoins returns all the coins that are not spendable (i.e. locked) + for an +/ account by address. For standard accounts, the result will always be no coins. +/ For vesting accounts, LockedCoins is delegated to the concrete vesting account +/ type. +func (k BaseViewKeeper) + +LockedCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + acc := k.ak.GetAccount(ctx, addr) + if acc != nil { + vacc, ok := acc.(types.VestingAccount) + if ok { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +return vacc.LockedCoins(sdkCtx.BlockTime()) +} + +} + +return sdk.NewCoins() +} + +/ SpendableCoins returns the total balances of spendable coins for an account +/ by address. If the account has no spendable coins, an empty Coins slice is +/ returned. 
+func (k BaseViewKeeper) + +SpendableCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + spendable, _ := k.spendableCoins(ctx, addr) + +return spendable +} + +/ SpendableCoin returns the balance of specific denomination of spendable coins +/ for an account by address. If the account has no spendable coin, a zero Coin +/ is returned. +func (k BaseViewKeeper) + +SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin { + balance := k.GetBalance(ctx, addr, denom) + locked := k.LockedCoins(ctx, addr) + +return balance.SubAmount(locked.AmountOf(denom)) +} + +/ spendableCoins returns the coins the given address can spend alongside the total amount of coins it holds. +/ It exists for gas efficiency, in order to avoid to have to get balance multiple times. +func (k BaseViewKeeper) + +spendableCoins(ctx context.Context, addr sdk.AccAddress) (spendable, total sdk.Coins) { + total = k.GetAllBalances(ctx, addr) + locked := k.LockedCoins(ctx, addr) + +spendable, hasNeg := total.SafeSub(locked...) + if hasNeg { + spendable = sdk.NewCoins() + +return +} + +return +} + +/ ValidateBalance validates all balances for a given account address returning +/ an error if any balance is invalid. It will check for vesting account types +/ and validate the balances against the original vesting balances. +/ +/ CONTRACT: ValidateBalance should only be called upon genesis state. In the +/ case of vesting accounts, balances may change in a valid manner that would +/ otherwise yield an error from this call. 
+func (k BaseViewKeeper) + +ValidateBalance(ctx context.Context, addr sdk.AccAddress) + +error { + acc := k.ak.GetAccount(ctx, addr) + if acc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr) +} + balances := k.GetAllBalances(ctx, addr) + if !balances.IsValid() { + return fmt.Errorf("account balance of %s is invalid", balances) +} + +vacc, ok := acc.(types.VestingAccount) + if ok { + ogv := vacc.GetOriginalVesting() + if ogv.IsAnyGT(balances) { + return fmt.Errorf("vesting amount %s cannot be greater than total amount %s", ogv, balances) +} + +} + +return nil +} +``` + +### `IAVL` Store + +The default implementation of `KVStore` and `CommitKVStore` used in `baseapp` is the `iavl.Store`. + +```go expandable +package iavl + +import ( + + "errors" + "fmt" + "io" + + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/iavl" + ics23 "github.com/cosmos/ics23/go" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store/cachekv" + "cosmossdk.io/store/internal/kv" + "cosmossdk.io/store/metrics" + pruningtypes "cosmossdk.io/store/pruning/types" + "cosmossdk.io/store/tracekv" + "cosmossdk.io/store/types" +) + +const ( + DefaultIAVLCacheSize = 500000 +) + +var ( + _ types.KVStore = (*Store)(nil) + _ types.CommitStore = (*Store)(nil) + _ types.CommitKVStore = (*Store)(nil) + _ types.Queryable = (*Store)(nil) + _ types.StoreWithInitialVersion = (*Store)(nil) +) + +/ Store Implements types.KVStore and CommitKVStore. +type Store struct { + tree Tree + logger log.Logger + metrics metrics.StoreMetrics +} + +/ LoadStore returns an IAVL Store as a CommitKVStore. Internally, it will load the +/ store's version (id) + +from the provided DB. An error is returned if the version +/ fails to load, or if called with a positive version on an empty tree. 
+func LoadStore(db dbm.DB, logger log.Logger, key types.StoreKey, id types.CommitID, cacheSize int, disableFastNode bool, metrics metrics.StoreMetrics) (types.CommitKVStore, error) { + return LoadStoreWithInitialVersion(db, logger, key, id, 0, cacheSize, disableFastNode, metrics) +} + +/ LoadStoreWithInitialVersion returns an IAVL Store as a CommitKVStore setting its initialVersion +/ to the one given. Internally, it will load the store's version (id) + +from the +/ provided DB. An error is returned if the version fails to load, or if called with a positive +/ version on an empty tree. +func LoadStoreWithInitialVersion(db dbm.DB, logger log.Logger, key types.StoreKey, id types.CommitID, initialVersion uint64, cacheSize int, disableFastNode bool, metrics metrics.StoreMetrics) (types.CommitKVStore, error) { + tree := iavl.NewMutableTreeWithOpts(db, cacheSize, &iavl.Options{ + InitialVersion: initialVersion +}, disableFastNode, logger) + +isUpgradeable, err := tree.IsUpgradeable() + if err != nil { + return nil, err +} + if isUpgradeable && logger != nil { + logger.Info( + "Upgrading IAVL storage for faster queries + execution on live state. This may take a while", + "store_key", key.String(), + "version", initialVersion, + "commit", fmt.Sprintf("%X", id), + ) +} + + _, err = tree.LoadVersion(id.Version) + if err != nil { + return nil, err +} + if logger != nil { + logger.Debug("Finished loading IAVL tree") +} + +return &Store{ + tree: tree, + logger: logger, + metrics: metrics, +}, nil +} + +/ UnsafeNewStore returns a reference to a new IAVL Store with a given mutable +/ IAVL tree reference. It should only be used for testing purposes. +/ +/ CONTRACT: The IAVL tree should be fully loaded. 
+/ CONTRACT: PruningOptions passed in as argument must be the same as pruning options +/ passed into iavl.MutableTree +func UnsafeNewStore(tree *iavl.MutableTree) *Store { + return &Store{ + tree: tree, + metrics: metrics.NewNoOpMetrics(), +} +} + +/ GetImmutable returns a reference to a new store backed by an immutable IAVL +/ tree at a specific version (height) + +without any pruning options. This should +/ be used for querying and iteration only. If the version does not exist or has +/ been pruned, an empty immutable IAVL tree will be used. +/ Any mutable operations executed will result in a panic. +func (st *Store) + +GetImmutable(version int64) (*Store, error) { + if !st.VersionExists(version) { + return nil, errors.New("version mismatch on immutable IAVL tree; version does not exist. Version has either been pruned, or is for a future block height") +} + +iTree, err := st.tree.GetImmutable(version) + if err != nil { + return nil, err +} + +return &Store{ + tree: &immutableTree{ + iTree +}, + metrics: st.metrics, +}, nil +} + +/ Commit commits the current store state and returns a CommitID with the new +/ version and hash. +func (st *Store) + +Commit() + +types.CommitID { + defer st.metrics.MeasureSince("store", "iavl", "commit") + +hash, version, err := st.tree.SaveVersion() + if err != nil { + panic(err) +} + +return types.CommitID{ + Version: version, + Hash: hash, +} +} + +/ WorkingHash returns the hash of the current working tree. +func (st *Store) + +WorkingHash() []byte { + return st.tree.WorkingHash() +} + +/ LastCommitID implements Committer. +func (st *Store) + +LastCommitID() + +types.CommitID { + return types.CommitID{ + Version: st.tree.Version(), + Hash: st.tree.Hash(), +} +} + +/ SetPruning panics as pruning options should be provided at initialization +/ since IAVl accepts pruning options directly. 
+func (st *Store) + +SetPruning(_ pruningtypes.PruningOptions) { + panic("cannot set pruning options on an initialized IAVL store") +} + +/ SetPruning panics as pruning options should be provided at initialization +/ since IAVl accepts pruning options directly. +func (st *Store) + +GetPruning() + +pruningtypes.PruningOptions { + panic("cannot get pruning options on an initialized IAVL store") +} + +/ VersionExists returns whether or not a given version is stored. +func (st *Store) + +VersionExists(version int64) + +bool { + return st.tree.VersionExists(version) +} + +/ GetAllVersions returns all versions in the iavl tree +func (st *Store) + +GetAllVersions() []int { + return st.tree.AvailableVersions() +} + +/ Implements Store. +func (st *Store) + +GetStoreType() + +types.StoreType { + return types.StoreTypeIAVL +} + +/ Implements Store. +func (st *Store) + +CacheWrap() + +types.CacheWrap { + return cachekv.NewStore(st) +} + +/ CacheWrapWithTrace implements the Store interface. +func (st *Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return cachekv.NewStore(tracekv.NewStore(st, w, tc)) +} + +/ Implements types.KVStore. +func (st *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + _, err := st.tree.Set(key, value) + if err != nil && st.logger != nil { + st.logger.Error("iavl set error", "error", err.Error()) +} +} + +/ Implements types.KVStore. +func (st *Store) + +Get(key []byte) []byte { + defer st.metrics.MeasureSince("store", "iavl", "get") + +value, err := st.tree.Get(key) + if err != nil { + panic(err) +} + +return value +} + +/ Implements types.KVStore. +func (st *Store) + +Has(key []byte) (exists bool) { + defer st.metrics.MeasureSince("store", "iavl", "has") + +has, err := st.tree.Has(key) + if err != nil { + panic(err) +} + +return has +} + +/ Implements types.KVStore. 
+func (st *Store) + +Delete(key []byte) { + defer st.metrics.MeasureSince("store", "iavl", "delete") + +st.tree.Remove(key) +} + +/ DeleteVersionsTo deletes versions upto the given version from the MutableTree. An error +/ is returned if any single version is invalid or the delete fails. All writes +/ happen in a single batch with a single commit. +func (st *Store) + +DeleteVersionsTo(version int64) + +error { + return st.tree.DeleteVersionsTo(version) +} + +/ LoadVersionForOverwriting attempts to load a tree at a previously committed +/ version. Any versions greater than targetVersion will be deleted. +func (st *Store) + +LoadVersionForOverwriting(targetVersion int64) + +error { + return st.tree.LoadVersionForOverwriting(targetVersion) +} + +/ Implements types.KVStore. +func (st *Store) + +Iterator(start, end []byte) + +types.Iterator { + iterator, err := st.tree.Iterator(start, end, true) + if err != nil { + panic(err) +} + +return iterator +} + +/ Implements types.KVStore. +func (st *Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + iterator, err := st.tree.Iterator(start, end, false) + if err != nil { + panic(err) +} + +return iterator +} + +/ SetInitialVersion sets the initial version of the IAVL tree. It is used when +/ starting a new chain at an arbitrary height. +func (st *Store) + +SetInitialVersion(version int64) { + st.tree.SetInitialVersion(uint64(version)) +} + +/ Exports the IAVL store at the given version, returning an iavl.Exporter for the tree. 
+func (st *Store) + +Export(version int64) (*iavl.Exporter, error) { + istore, err := st.GetImmutable(version) + if err != nil { + return nil, errorsmod.Wrapf(err, "iavl export failed for version %v", version) +} + +tree, ok := istore.tree.(*immutableTree) + if !ok || tree == nil { + return nil, fmt.Errorf("iavl export failed: unable to fetch tree for version %v", version) +} + +return tree.Export() +} + +/ Import imports an IAVL tree at the given version, returning an iavl.Importer for importing. +func (st *Store) + +Import(version int64) (*iavl.Importer, error) { + tree, ok := st.tree.(*iavl.MutableTree) + if !ok { + return nil, errors.New("iavl import failed: unable to find mutable tree") +} + +return tree.Import(version) +} + +/ Handle gatest the latest height, if height is 0 +func getHeight(tree Tree, req *types.RequestQuery) + +int64 { + height := req.Height + if height == 0 { + latest := tree.Version() + if tree.VersionExists(latest - 1) { + height = latest - 1 +} + +else { + height = latest +} + +} + +return height +} + +/ Query implements ABCI interface, allows queries +/ +/ by default we will return from (latest height -1), +/ as we will have merkle proofs immediately (header height = data height + 1) +/ If latest-1 is not present, use latest (which must be present) +/ if you care to have the latest data to see a tx results, you must +/ explicitly set the height you want to see +func (st *Store) + +Query(req *types.RequestQuery) (res *types.ResponseQuery, err error) { + defer st.metrics.MeasureSince("store", "iavl", "query") + if len(req.Data) == 0 { + return &types.ResponseQuery{ +}, errorsmod.Wrap(types.ErrTxDecode, "query cannot be zero length") +} + tree := st.tree + + / store the height we chose in the response, with 0 being changed to the + / latest height + res = &types.ResponseQuery{ + Height: getHeight(tree, req), +} + switch req.Path { + case "/key": / get by key + key := req.Data / data holds the key bytes + + res.Key = key + if 
!st.VersionExists(res.Height) { + res.Log = iavl.ErrVersionDoesNotExist.Error() + +break +} + +value, err := tree.GetVersioned(key, res.Height) + if err != nil { + panic(err) +} + +res.Value = value + if !req.Prove { + break +} + + / Continue to prove existence/absence of value + / Must convert store.Tree to iavl.MutableTree with given version to use in CreateProof + iTree, err := tree.GetImmutable(res.Height) + if err != nil { + / sanity check: If value for given version was retrieved, immutable tree must also be retrievable + panic(fmt.Sprintf("version exists in store but could not retrieve corresponding versioned tree in store, %s", err.Error())) +} + mtree := &iavl.MutableTree{ + ImmutableTree: iTree, +} + + / get proof from tree and convert to merkle.Proof before adding to result + res.ProofOps = getProofFromTree(mtree, req.Data, res.Value != nil) + case "/subspace": + pairs := kv.Pairs{ + Pairs: make([]kv.Pair, 0), +} + subspace := req.Data + res.Key = subspace + iterator := types.KVStorePrefixIterator(st, subspace) + for ; iterator.Valid(); iterator.Next() { + pairs.Pairs = append(pairs.Pairs, kv.Pair{ + Key: iterator.Key(), + Value: iterator.Value() +}) +} + +iterator.Close() + +bz, err := pairs.Marshal() + if err != nil { + panic(fmt.Errorf("failed to marshal KV pairs: %w", err)) +} + +res.Value = bz + + default: + return &types.ResponseQuery{ +}, errorsmod.Wrapf(types.ErrUnknownRequest, "unexpected query path: %v", req.Path) +} + +return res, err +} + +/ TraverseStateChanges traverses the state changes between two versions and calls the given function. +func (st *Store) + +TraverseStateChanges(startVersion, endVersion int64, fn func(version int64, changeSet *iavl.ChangeSet) + +error) + +error { + return st.tree.TraverseStateChanges(startVersion, endVersion, fn) +} + +/ Takes a MutableTree, a key, and a flag for creating existence or absence proof and returns the +/ appropriate merkle.Proof. 
Since this must be called after querying for the value, this function should never error
+/ Thus, it will panic on error rather than returning it
+func getProofFromTree(tree *iavl.MutableTree, key []byte, exists bool) *cmtprotocrypto.ProofOps {
+ var (
+ commitmentProof *ics23.CommitmentProof
+ err error
+ )
+ if exists {
+ / value was found
+ commitmentProof, err = tree.GetMembershipProof(key)
+ if err != nil {
+ / sanity check: If value was found, membership proof must be creatable
+ panic(fmt.Sprintf("unexpected value for empty proof: %s", err.Error()))
+}
+
+}
+
+else {
+ / value wasn't found
+ commitmentProof, err = tree.GetNonMembershipProof(key)
+ if err != nil {
+ / sanity check: If value wasn't found, nonmembership proof must be creatable
+ panic(fmt.Sprintf("unexpected error for nonexistence proof: %s", err.Error()))
+}
+
+}
+ op := types.NewIavlCommitmentOp(key, commitmentProof)
+
+return &cmtprotocrypto.ProofOps{
+ Ops: []cmtprotocrypto.ProofOp{
+ op.ProofOp()
+}}
+}
+```
+
+`iavl` stores are based around an [IAVL Tree](https://github.com/cosmos/iavl), a self-balancing binary tree which guarantees that:
+
+- `Get` and `Set` operations are O(log n), where n is the number of elements in the tree.
+- Iteration efficiently returns the sorted elements within the range.
+- Each tree version is immutable and can be retrieved even after a commit (depending on the pruning settings).
+
+The documentation on the IAVL Tree is located [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md).
+
+### `DbAdapter` Store
+
+`dbadapter.Store` is an adapter for `dbm.DB` that makes it fulfill the `KVStore` interface.
+
+```go expandable
+package dbadapter
+
+import (
+
+ "io"
+
+ dbm "github.com/cosmos/cosmos-db"
+ "cosmossdk.io/store/cachekv"
+ "cosmossdk.io/store/tracekv"
+ "cosmossdk.io/store/types"
+)
+
+/ Wrapper type for dbm.Db with implementation of KVStore
+type Store struct {
+ dbm.DB
+}
+
+/ Get wraps the underlying DB's Get method panicing on error.
+func (dsa Store) + +Get(key []byte) []byte { + v, err := dsa.DB.Get(key) + if err != nil { + panic(err) +} + +return v +} + +/ Has wraps the underlying DB's Has method panicing on error. +func (dsa Store) + +Has(key []byte) + +bool { + ok, err := dsa.DB.Has(key) + if err != nil { + panic(err) +} + +return ok +} + +/ Set wraps the underlying DB's Set method panicing on error. +func (dsa Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + if err := dsa.DB.Set(key, value); err != nil { + panic(err) +} +} + +/ Delete wraps the underlying DB's Delete method panicing on error. +func (dsa Store) + +Delete(key []byte) { + if err := dsa.DB.Delete(key); err != nil { + panic(err) +} +} + +/ Iterator wraps the underlying DB's Iterator method panicing on error. +func (dsa Store) + +Iterator(start, end []byte) + +types.Iterator { + iter, err := dsa.DB.Iterator(start, end) + if err != nil { + panic(err) +} + +return iter +} + +/ ReverseIterator wraps the underlying DB's ReverseIterator method panicing on error. +func (dsa Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + iter, err := dsa.DB.ReverseIterator(start, end) + if err != nil { + panic(err) +} + +return iter +} + +/ GetStoreType returns the type of the store. +func (Store) + +GetStoreType() + +types.StoreType { + return types.StoreTypeDB +} + +/ CacheWrap branches the underlying store. +func (dsa Store) + +CacheWrap() + +types.CacheWrap { + return cachekv.NewStore(dsa) +} + +/ CacheWrapWithTrace implements KVStore. +func (dsa Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return cachekv.NewStore(tracekv.NewStore(dsa, w, tc)) +} + +/ dbm.DB implements KVStore so we can CacheKVStore it. +var _ types.KVStore = Store{ +} +``` + +`dbadapter.Store` embeds `dbm.DB`, meaning most of the `KVStore` interface functions are implemented. The other functions (mostly miscellaneous) are manually implemented. 
This store is primarily used within [Transient Stores](#transient-store).
+
+### `Transient` Store
+
+`Transient.Store` is a base-layer `KVStore` which is automatically discarded at the end of the block.
+
+```go expandable
+package transient
+
+import (
+
+ dbm "github.com/cosmos/cosmos-db"
+ "cosmossdk.io/store/dbadapter"
+ pruningtypes "cosmossdk.io/store/pruning/types"
+ "cosmossdk.io/store/types"
+)
+
+var (
+ _ types.Committer = (*Store)(nil)
+ _ types.KVStore = (*Store)(nil)
+)
+
+/ Store is a wrapper for a MemDB with Commiter implementation
+type Store struct {
+ dbadapter.Store
+}
+
+/ Constructs new MemDB adapter
+func NewStore() *Store {
+ return &Store{
+ Store: dbadapter.Store{
+ DB: dbm.NewMemDB()
+}}
+}
+
+/ Implements CommitStore
+/ Commit cleans up Store.
+func (ts *Store)
+
+Commit() (id types.CommitID) {
+ ts.Store = dbadapter.Store{
+ DB: dbm.NewMemDB()
+}
+
+return
+}
+
+func (ts *Store)
+
+SetPruning(_ pruningtypes.PruningOptions) {
+}
+
+/ GetPruning is a no-op as pruning options cannot be directly set on this store.
+/ They must be set on the root commit multi-store.
+func (ts *Store)
+
+GetPruning()
+
+pruningtypes.PruningOptions {
+ return pruningtypes.NewPruningOptions(pruningtypes.PruningUndefined)
+}
+
+/ Implements CommitStore
+func (ts *Store)
+
+LastCommitID()
+
+types.CommitID {
+ return types.CommitID{
+}
+}
+
+func (ts *Store)
+
+WorkingHash() []byte {
+ return []byte{
+}
+}
+
+/ Implements Store.
+func (ts *Store)
+
+GetStoreType()
+
+types.StoreType {
+ return types.StoreTypeTransient
+}
+```
+
+`Transient.Store` is a `dbadapter.Store` with a `dbm.NewMemDB()`. All `KVStore` methods are reused. When `Store.Commit()` is called, a new `dbadapter.Store` is assigned, discarding the previous reference so it can be garbage collected.
+
+This type of store is useful for persisting information that is only relevant per block. One example would be to store parameter changes (i.e. a bool set to `true` if a parameter changed in a block).
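+The commit-and-discard lifecycle described above can be sketched with a minimal, dependency-free Go program. The `transientStore` type and its methods below are illustrative stand-ins for this sketch only, not the SDK's actual `transient.Store` API:
+
+```go
+package main
+
+import "fmt"
+
+// transientStore mimics transient.Store: writes go to an in-memory map
+// that is replaced wholesale on Commit, so no state outlives a block.
+type transientStore struct {
+	store map[string][]byte
+}
+
+func newTransientStore() *transientStore {
+	return &transientStore{store: map[string][]byte{}}
+}
+
+func (ts *transientStore) Set(key string, value []byte) {
+	ts.store[key] = value
+}
+
+func (ts *transientStore) Has(key string) bool {
+	_, ok := ts.store[key]
+	return ok
+}
+
+// Commit discards the current map and starts fresh, analogous to
+// transient.Store assigning a new dbadapter.Store over a new MemDB.
+func (ts *transientStore) Commit() {
+	ts.store = map[string][]byte{}
+}
+
+func main() {
+	ts := newTransientStore()
+	ts.Set("params/changed", []byte{1})   // flag a parameter change this block
+	fmt.Println(ts.Has("params/changed")) // true while the block is open
+	ts.Commit()                           // end of block
+	fmt.Println(ts.Has("params/changed")) // false: state was discarded
+}
+```
+
+Calling `Commit` here plays the role the root multistore plays for every registered transient store at the end of each block.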
+ +```go expandable +package types + +import ( + + "fmt" + "reflect" + "cosmossdk.io/store/prefix" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +const ( + / StoreKey is the string store key for the param store + StoreKey = "params" + + / TStoreKey is the string store key for the param transient store + TStoreKey = "transient_params" +) + +/ Individual parameter store for each keeper +/ Transient store persists for a block, so we use it for +/ recording whether the parameter has been changed or not +type Subspace struct { + cdc codec.BinaryCodec + legacyAmino *codec.LegacyAmino + key storetypes.StoreKey / []byte -> []byte, stores parameter + tkey storetypes.StoreKey / []byte -> bool, stores parameter change + name []byte + table KeyTable +} + +/ NewSubspace constructs a store with namestore +func NewSubspace(cdc codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey, name string) + +Subspace { + return Subspace{ + cdc: cdc, + legacyAmino: legacyAmino, + key: key, + tkey: tkey, + name: []byte(name), + table: NewKeyTable(), +} +} + +/ HasKeyTable returns if the Subspace has a KeyTable registered. 
+func (s Subspace) + +HasKeyTable() + +bool { + return len(s.table.m) > 0 +} + +/ WithKeyTable initializes KeyTable and returns modified Subspace +func (s Subspace) + +WithKeyTable(table KeyTable) + +Subspace { + if table.m == nil { + panic("WithKeyTable() + +called with nil KeyTable") +} + if len(s.table.m) != 0 { + panic("WithKeyTable() + +called on already initialized Subspace") +} + for k, v := range table.m { + s.table.m[k] = v +} + + / Allocate additional capacity for Subspace.name + / So we don't have to allocate extra space each time appending to the key + name := s.name + s.name = make([]byte, len(name), len(name)+table.maxKeyLength()) + +copy(s.name, name) + +return s +} + +/ Returns a KVStore identical with ctx.KVStore(s.key).Prefix() + +func (s Subspace) + +kvStore(ctx sdk.Context) + +storetypes.KVStore { + / append here is safe, appends within a function won't cause + / weird side effects when its singlethreaded + return prefix.NewStore(ctx.KVStore(s.key), append(s.name, '/')) +} + +/ Returns a transient store for modification +func (s Subspace) + +transientStore(ctx sdk.Context) + +storetypes.KVStore { + / append here is safe, appends within a function won't cause + / weird side effects when its singlethreaded + return prefix.NewStore(ctx.TransientStore(s.tkey), append(s.name, '/')) +} + +/ Validate attempts to validate a parameter value by its key. If the key is not +/ registered or if the validation of the value fails, an error is returned. +func (s Subspace) + +Validate(ctx sdk.Context, key []byte, value interface{ +}) + +error { + attr, ok := s.table.m[string(key)] + if !ok { + return fmt.Errorf("parameter %s not registered", key) +} + if err := attr.vfn(value); err != nil { + return fmt.Errorf("invalid parameter value: %s", err) +} + +return nil +} + +/ Get queries for a parameter by key from the Subspace's KVStore and sets the +/ value to the provided pointer. If the value does not exist, it will panic. 
+func (s Subspace) + +Get(ctx sdk.Context, key []byte, ptr interface{ +}) { + s.checkType(key, ptr) + store := s.kvStore(ctx) + bz := store.Get(key) + if err := s.legacyAmino.UnmarshalJSON(bz, ptr); err != nil { + panic(err) +} +} + +/ GetIfExists queries for a parameter by key from the Subspace's KVStore and +/ sets the value to the provided pointer. If the value does not exist, it will +/ perform a no-op. +func (s Subspace) + +GetIfExists(ctx sdk.Context, key []byte, ptr interface{ +}) { + store := s.kvStore(ctx) + bz := store.Get(key) + if bz == nil { + return +} + +s.checkType(key, ptr) + if err := s.legacyAmino.UnmarshalJSON(bz, ptr); err != nil { + panic(err) +} +} + +/ IterateKeys iterates over all the keys in the subspace and executes the +/ provided callback. If the callback returns true for a given key, iteration +/ will halt. +func (s Subspace) + +IterateKeys(ctx sdk.Context, cb func(key []byte) + +bool) { + store := s.kvStore(ctx) + iter := storetypes.KVStorePrefixIterator(store, nil) + +defer iter.Close() + for ; iter.Valid(); iter.Next() { + if cb(iter.Key()) { + break +} + +} +} + +/ GetRaw queries for the raw values bytes for a parameter by key. +func (s Subspace) + +GetRaw(ctx sdk.Context, key []byte) []byte { + store := s.kvStore(ctx) + +return store.Get(key) +} + +/ Has returns if a parameter key exists or not in the Subspace's KVStore. +func (s Subspace) + +Has(ctx sdk.Context, key []byte) + +bool { + store := s.kvStore(ctx) + +return store.Has(key) +} + +/ Modified returns true if the parameter key is set in the Subspace's transient +/ KVStore. +func (s Subspace) + +Modified(ctx sdk.Context, key []byte) + +bool { + tstore := s.transientStore(ctx) + +return tstore.Has(key) +} + +/ checkType verifies that the provided key and value are comptable and registered. 
+func (s Subspace) + +checkType(key []byte, value interface{ +}) { + attr, ok := s.table.m[string(key)] + if !ok { + panic(fmt.Sprintf("parameter %s not registered", key)) +} + ty := attr.ty + pty := reflect.TypeOf(value) + if pty.Kind() == reflect.Ptr { + pty = pty.Elem() +} + if pty != ty { + panic("type mismatch with registered table") +} +} + +/ Set stores a value for given a parameter key assuming the parameter type has +/ been registered. It will panic if the parameter type has not been registered +/ or if the value cannot be encoded. A change record is also set in the Subspace's +/ transient KVStore to mark the parameter as modified. +func (s Subspace) + +Set(ctx sdk.Context, key []byte, value interface{ +}) { + s.checkType(key, value) + store := s.kvStore(ctx) + +bz, err := s.legacyAmino.MarshalJSON(value) + if err != nil { + panic(err) +} + +store.Set(key, bz) + tstore := s.transientStore(ctx) + +tstore.Set(key, []byte{ +}) +} + +/ Update stores an updated raw value for a given parameter key assuming the +/ parameter type has been registered. It will panic if the parameter type has +/ not been registered or if the value cannot be encoded. An error is returned +/ if the raw value is not compatible with the registered type for the parameter +/ key or if the new value is invalid as determined by the registered type's +/ validation function. +func (s Subspace) + +Update(ctx sdk.Context, key, value []byte) + +error { + attr, ok := s.table.m[string(key)] + if !ok { + panic(fmt.Sprintf("parameter %s not registered", key)) +} + ty := attr.ty + dest := reflect.New(ty).Interface() + +s.GetIfExists(ctx, key, dest) + if err := s.legacyAmino.UnmarshalJSON(value, dest); err != nil { + return err +} + + / destValue contains the dereferenced value of dest so validation function do + / not have to operate on pointers. 
+ destValue := reflect.Indirect(reflect.ValueOf(dest)).Interface() + if err := s.Validate(ctx, key, destValue); err != nil { + return err +} + +s.Set(ctx, key, dest) + +return nil +} + +/ GetParamSet iterates through each ParamSetPair where for each pair, it will +/ retrieve the value and set it to the corresponding value pointer provided +/ in the ParamSetPair by calling Subspace#Get. +func (s Subspace) + +GetParamSet(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + s.Get(ctx, pair.Key, pair.Value) +} +} + +/ GetParamSetIfExists iterates through each ParamSetPair where for each pair, it will +/ retrieve the value and set it to the corresponding value pointer provided +/ in the ParamSetPair by calling Subspace#GetIfExists. +func (s Subspace) + +GetParamSetIfExists(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + s.GetIfExists(ctx, pair.Key, pair.Value) +} +} + +/ SetParamSet iterates through each ParamSetPair and sets the value with the +/ corresponding parameter key in the Subspace's KVStore. +func (s Subspace) + +SetParamSet(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + / pair.Field is a pointer to the field, so indirecting the ptr. + / go-amino automatically handles it but just for sure, + / since SetStruct is meant to be used in InitGenesis + / so this method will not be called frequently + v := reflect.Indirect(reflect.ValueOf(pair.Value)).Interface() + if err := pair.ValidatorFn(v); err != nil { + panic(fmt.Sprintf("value from ParamSetPair is invalid: %s", err)) +} + +s.Set(ctx, pair.Key, v) +} +} + +/ Name returns the name of the Subspace. +func (s Subspace) + +Name() + +string { + return string(s.name) +} + +/ Wrapper of Subspace, provides immutable functions only +type ReadOnlySubspace struct { + s Subspace +} + +/ Get delegates a read-only Get call to the Subspace. 
+func (ros ReadOnlySubspace)
+
+Get(ctx sdk.Context, key []byte, ptr interface{
+}) {
+ ros.s.Get(ctx, key, ptr)
+}
+
+/ GetRaw delegates a read-only GetRaw call to the Subspace.
+func (ros ReadOnlySubspace)
+
+GetRaw(ctx sdk.Context, key []byte) []byte {
+ return ros.s.GetRaw(ctx, key)
+}
+
+/ Has delegates a read-only Has call to the Subspace.
+func (ros ReadOnlySubspace)
+
+Has(ctx sdk.Context, key []byte)
+
+bool {
+ return ros.s.Has(ctx, key)
+}
+
+/ Modified delegates a read-only Modified call to the Subspace.
+func (ros ReadOnlySubspace)
+
+Modified(ctx sdk.Context, key []byte)
+
+bool {
+ return ros.s.Modified(ctx, key)
+}
+
+/ Name delegates a read-only Name call to the Subspace.
+func (ros ReadOnlySubspace)
+
+Name()
+
+string {
+ return ros.s.Name()
+}
+```
+
+Transient stores are typically accessed from the [`context`](/docs/sdk/v0.50/learn/advanced/context) via the `TransientStore()` method:
+
+```go expandable
+package types
+
+import (
+
+ "context"
+ "time"
+ "cosmossdk.io/log"
+ abci "github.com/cometbft/cometbft/abci/types"
+ cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+ "github.com/cosmos/gogoproto/proto"
+ "cosmossdk.io/store/gaskv"
+ storetypes "cosmossdk.io/store/types"
+ "cosmossdk.io/core/comet"
+ "cosmossdk.io/core/header"
+)
+
+/ ExecMode defines the execution mode which can be set on a Context.
+type ExecMode uint8
+
+/ All possible execution modes.
+const (
+ ExecModeCheck ExecMode = iota
+ ExecModeReCheck
+ ExecModeSimulate
+ ExecModePrepareProposal
+ ExecModeProcessProposal
+ ExecModeVoteExtension
+ ExecModeFinalize
+)
+
+/*
+Context is an immutable object contains all information needed to
+process a request.
+
+It contains a context.Context object inside if you want to use that,
+but please do not over-use it.
We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + +BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + 
+IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ clone the header before returning +func (c Context) + +BlockHeader() + +cmtproto.Header { + msg := proto.Clone(&c.header).(*cmtproto.Header) + +return *msg +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: 
NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. +func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. +func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) + +c.headerHash = temp + return c +} + +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. +func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. 
+func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. +func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. +func (c Context) + +WithGasMeter(meter storetypes.GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter storetypes.GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. 
+func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} + +/ WithExecMode returns a Context with an updated ExecMode. +func (c Context) + +WithExecMode(m ExecMode) + +Context { + c.execMode = m + return c +} + +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) + +WithMinGasPrices(gasPrices DecCoins) + +Context { + c.minGasPrice = gasPrices + return c +} + +/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) + +WithConsensusParams(params cmtproto.ConsensusParams) + +Context { + c.consParams = params + return c +} + +/ WithEventManager returns a Context with an updated event manager +func (c Context) + +WithEventManager(em EventManagerI) + +Context { + c.eventManager = em + return c +} + +/ WithPriority returns a Context with an updated tx priority +func (c Context) + +WithPriority(p int64) + +Context { + c.priority = p + return c +} + +/ WithStreamingManager returns a Context with an updated streaming manager +func (c Context) + +WithStreamingManager(sm storetypes.StreamingManager) + +Context { + c.streamingManager = sm + return c +} + +/ WithCometInfo returns a Context with an updated comet info +func (c Context) + +WithCometInfo(cometInfo comet.BlockInfo) + +Context { + c.cometInfo = cometInfo + return c +} + +/ WithHeaderInfo returns a Context with an updated header info +func (c Context) + +WithHeaderInfo(headerInfo header.Info) + +Context { + / Settime to UTC + headerInfo.Time = headerInfo.Time.UTC() + +c.headerInfo = headerInfo + return c +} + +/ TODO: remove??? 
+func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value interface{ +}) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key interface{ +}) + +interface{ +} { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. 
It is useful for passing an sdk.Context through methods that take a +/ stdlib context.Context parameter such as generated gRPC methods. To get the original +/ sdk.Context back, call UnwrapSDKContext. +/ +/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context. +func WrapSDKContext(ctx Context) + +context.Context { + return ctx +} + +/ UnwrapSDKContext retrieves a Context from a context.Context instance +/ attached with WrapSDKContext. It panics if a Context was not properly +/ attached +func UnwrapSDKContext(ctx context.Context) + +Context { + if sdkCtx, ok := ctx.(Context); ok { + return sdkCtx +} + +return ctx.Value(SdkContextKey).(Context) +} +``` + +## KVStore Wrappers + +### CacheKVStore + +`cachekv.Store` is a wrapper `KVStore` which provides buffered writing / cached reading functionalities over the underlying `KVStore`. + +```go expandable +package cachekv + +import ( + + "bytes" + "io" + "sort" + "sync" + "cosmossdk.io/math" + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/store/cachekv/internal" + "cosmossdk.io/store/internal/conv" + "cosmossdk.io/store/internal/kv" + "cosmossdk.io/store/tracekv" + "cosmossdk.io/store/types" +) + +/ cValue represents a cached value. +/ If dirty is true, it indicates the cached value is different from the underlying value. +type cValue struct { + value []byte + dirty bool +} + +/ Store wraps an in-memory cache around an underlying types.KVStore. +type Store struct { + mtx sync.Mutex + cache map[string]*cValue + unsortedCache map[string]struct{ +} + +sortedCache internal.BTree / always ascending sorted + parent types.KVStore +} + +var _ types.CacheKVStore = (*Store)(nil) + +/ NewStore creates a new Store object +func NewStore(parent types.KVStore) *Store { + return &Store{ + cache: make(map[string]*cValue), + unsortedCache: make(map[string]struct{ +}), + sortedCache: internal.NewBTree(), + parent: parent, +} +} + +/ GetStoreType implements Store. 
+func (store *Store) + +GetStoreType() + +types.StoreType { + return store.parent.GetStoreType() +} + +/ Get implements types.KVStore. +func (store *Store) + +Get(key []byte) (value []byte) { + store.mtx.Lock() + +defer store.mtx.Unlock() + +types.AssertValidKey(key) + +cacheValue, ok := store.cache[conv.UnsafeBytesToStr(key)] + if !ok { + value = store.parent.Get(key) + +store.setCacheValue(key, value, false) +} + +else { + value = cacheValue.value +} + +return value +} + +/ Set implements types.KVStore. +func (store *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + +store.mtx.Lock() + +defer store.mtx.Unlock() + +store.setCacheValue(key, value, true) +} + +/ Has implements types.KVStore. +func (store *Store) + +Has(key []byte) + +bool { + value := store.Get(key) + +return value != nil +} + +/ Delete implements types.KVStore. +func (store *Store) + +Delete(key []byte) { + types.AssertValidKey(key) + +store.mtx.Lock() + +defer store.mtx.Unlock() + +store.setCacheValue(key, nil, true) +} + +func (store *Store) + +resetCaches() { + if len(store.cache) > 100_000 { + / Cache is too large. We likely did something linear time + / (e.g. Epoch block, Genesis block, etc). Free the old caches from memory, and let them get re-allocated. + / TODO: In a future CacheKV redesign, such linear workloads should get into a different cache instantiation. + / 100_000 is arbitrarily chosen as it solved Osmosis' InitGenesis RAM problem. + store.cache = make(map[string]*cValue) + +store.unsortedCache = make(map[string]struct{ +}) +} + +else { + / Clear the cache using the map clearing idiom + / and not allocating fresh objects. + / Please see https://bencher.orijtech.com/perfclinic/mapclearing/ + for key := range store.cache { + delete(store.cache, key) +} + for key := range store.unsortedCache { + delete(store.unsortedCache, key) +} + +} + +store.sortedCache = internal.NewBTree() +} + +/ Implements Cachetypes.KVStore. 
+func (store *Store) + +Write() { + store.mtx.Lock() + +defer store.mtx.Unlock() + if len(store.cache) == 0 && len(store.unsortedCache) == 0 { + store.sortedCache = internal.NewBTree() + +return +} + +type cEntry struct { + key string + val *cValue +} + + / We need a copy of all of the keys. + / Not the best. To reduce RAM pressure, we copy the values as well + / and clear out the old caches right after the copy. + sortedCache := make([]cEntry, 0, len(store.cache)) + for key, dbValue := range store.cache { + if dbValue.dirty { + sortedCache = append(sortedCache, cEntry{ + key, dbValue +}) +} + +} + +store.resetCaches() + +sort.Slice(sortedCache, func(i, j int) + +bool { + return sortedCache[i].key < sortedCache[j].key +}) + + / TODO: Consider allowing usage of Batch, which would allow the write to + / at least happen atomically. + for _, obj := range sortedCache { + / We use []byte(key) + +instead of conv.UnsafeStrToBytes because we cannot + / be sure if the underlying store might do a save with the byteslice or + / not. Once we get confirmation that .Delete is guaranteed not to + / save the byteslice, then we can assume only a read-only copy is sufficient. + if obj.val.value != nil { + / It already exists in the parent, hence update it. + store.parent.Set([]byte(obj.key), obj.val.value) +} + +else { + store.parent.Delete([]byte(obj.key)) +} + +} +} + +/ CacheWrap implements CacheWrapper. +func (store *Store) + +CacheWrap() + +types.CacheWrap { + return NewStore(store) +} + +/ CacheWrapWithTrace implements the CacheWrapper interface. +func (store *Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return NewStore(tracekv.NewStore(store, w, tc)) +} + +/---------------------------------------- +/ Iteration + +/ Iterator implements types.KVStore. +func (store *Store) + +Iterator(start, end []byte) + +types.Iterator { + return store.iterator(start, end, true) +} + +/ ReverseIterator implements types.KVStore. 
+func (store *Store)
+
+ReverseIterator(start, end []byte)
+
+types.Iterator {
+ return store.iterator(start, end, false)
+}
+
+func (store *Store)
+
+iterator(start, end []byte, ascending bool)
+
+types.Iterator {
+ store.mtx.Lock()
+
+defer store.mtx.Unlock()
+
+store.dirtyItems(start, end)
+ isoSortedCache := store.sortedCache.Copy()
+
+var (
+ err error
+ parent, cache types.Iterator
+ )
+ if ascending {
+ parent = store.parent.Iterator(start, end)
+
+cache, err = isoSortedCache.Iterator(start, end)
+}
+
+else {
+ parent = store.parent.ReverseIterator(start, end)
+
+cache, err = isoSortedCache.ReverseIterator(start, end)
+}
+ if err != nil {
+ panic(err)
+}
+
+return internal.NewCacheMergeIterator(parent, cache, ascending)
+}
+
+func findStartIndex(strL []string, startQ string)
+
+int {
+ / Modified binary search to find the very first element >= startQ.
+ if len(strL) == 0 {
+ return -1
+}
+
+var left, right, mid int
+ right = len(strL) - 1
+ for left <= right {
+ mid = (left + right) >> 1
+ midStr := strL[mid]
+ if midStr == startQ {
+ / Handle condition where there might be multiple values equal to startQ.
+ / We are looking for the very first value < midStr, so that i+1 will be the first
+ / element >= midStr.
+ for i := mid - 1; i >= 0; i-- {
+ if strL[i] != midStr {
+ return i + 1
+}
+
+}
+
+return 0
+}
+ if midStr < startQ {
+ left = mid + 1
+}
+
+else { / midStr > startQ
+ right = mid - 1
+}
+
+}
+ if left >= 0 && left < len(strL) && strL[left] >= startQ {
+ return left
+}
+
+return -1
+}
+
+func findEndIndex(strL []string, endQ string)
+
+int {
+ if len(strL) == 0 {
+ return -1
+}
+
+ / Modified binary search to find the very first element > endQ.
+ var left, right, mid int
+ right = len(strL) - 1
+ for left <= right {
+ mid = (left + right) >> 1
+ midStr := strL[mid]
+ if midStr == endQ {
+ / Handle condition where there might be multiple values equal to endQ.
+ / We are looking for the very first value < midStr, so that i+1 will be the first
+ / element >= midStr.
+ for i := mid - 1; i >= 0; i-- { + if strL[i] < midStr { + return i + 1 +} + +} + +return 0 +} + if midStr < endQ { + left = mid + 1 +} + +else { / midStrL > startQ + right = mid - 1 +} + +} + + / Binary search failed, now let's find a value less than endQ. + for i := right; i >= 0; i-- { + if strL[i] < endQ { + return i +} + +} + +return -1 +} + +type sortState int + +const ( + stateUnsorted sortState = iota + stateAlreadySorted +) + +const minSortSize = 1024 + +/ Constructs a slice of dirty items, to use w/ memIterator. +func (store *Store) + +dirtyItems(start, end []byte) { + startStr, endStr := conv.UnsafeBytesToStr(start), conv.UnsafeBytesToStr(end) + if end != nil && startStr > endStr { + / Nothing to do here. + return +} + n := len(store.unsortedCache) + unsorted := make([]*kv.Pair, 0) + / If the unsortedCache is too big, its costs too much to determine + / whats in the subset we are concerned about. + / If you are interleaving iterator calls with writes, this can easily become an + / O(N^2) + +overhead. + / Even without that, too many range checks eventually becomes more expensive + / than just not having the cache. + if n < minSortSize { + for key := range store.unsortedCache { + / dbm.IsKeyInDomain is nil safe and returns true iff key is greater than start + if dbm.IsKeyInDomain(conv.UnsafeStrToBytes(key), start, end) { + cacheValue := store.cache[key] + unsorted = append(unsorted, &kv.Pair{ + Key: []byte(key), + Value: cacheValue.value +}) +} + +} + +store.clearUnsortedCacheSubset(unsorted, stateUnsorted) + +return +} + + / Otherwise it is large so perform a modified binary search to find + / the target ranges for the keys that we should be looking for. 
+ strL := make([]string, 0, n) + for key := range store.unsortedCache { + strL = append(strL, key) +} + +sort.Strings(strL) + + / Now find the values within the domain + / [start, end) + startIndex := findStartIndex(strL, startStr) + if startIndex < 0 { + startIndex = 0 +} + +var endIndex int + if end == nil { + endIndex = len(strL) - 1 +} + +else { + endIndex = findEndIndex(strL, endStr) +} + if endIndex < 0 { + endIndex = len(strL) - 1 +} + + / Since we spent cycles to sort the values, we should process and remove a reasonable amount + / ensure start to end is at least minSortSize in size + / if below minSortSize, expand it to cover additional values + / this amortizes the cost of processing elements across multiple calls + if endIndex-startIndex < minSortSize { + endIndex = math.Min(startIndex+minSortSize, len(strL)-1) + if endIndex-startIndex < minSortSize { + startIndex = math.Max(endIndex-minSortSize, 0) +} + +} + kvL := make([]*kv.Pair, 0, 1+endIndex-startIndex) + for i := startIndex; i <= endIndex; i++ { + key := strL[i] + cacheValue := store.cache[key] + kvL = append(kvL, &kv.Pair{ + Key: []byte(key), + Value: cacheValue.value +}) +} + + / kvL was already sorted so pass it in as is. + store.clearUnsortedCacheSubset(kvL, stateAlreadySorted) +} + +func (store *Store) + +clearUnsortedCacheSubset(unsorted []*kv.Pair, sortState sortState) { + n := len(store.unsortedCache) + if len(unsorted) == n { / This pattern allows the Go compiler to emit the map clearing idiom for the entire map. + for key := range store.unsortedCache { + delete(store.unsortedCache, key) +} + +} + +else { / Otherwise, normally delete the unsorted keys from the map. 
+ for _, kv := range unsorted {
+ delete(store.unsortedCache, conv.UnsafeBytesToStr(kv.Key))
+}
+
+}
+ if sortState == stateUnsorted {
+ sort.Slice(unsorted, func(i, j int)
+
+bool {
+ return bytes.Compare(unsorted[i].Key, unsorted[j].Key) < 0
+})
+}
+ for _, item := range unsorted {
+ / sortedCache is able to store `nil` value to represent deleted items.
+ store.sortedCache.Set(item.Key, item.Value)
+}
+}
+
+/----------------------------------------
+/ etc
+
+/ Only entrypoint to mutate store.cache.
+/ A `nil` value means a deletion.
+func (store *Store)
+
+setCacheValue(key, value []byte, dirty bool) {
+ keyStr := conv.UnsafeBytesToStr(key)
+
+store.cache[keyStr] = &cValue{
+ value: value,
+ dirty: dirty,
+}
+ if dirty {
+ store.unsortedCache[keyStr] = struct{
+}{
+}
+
+}
+}
+```
+
+This is the type used whenever an IAVL Store needs to be branched to create an isolated store (typically when we need to mutate state that might be reverted later).
+
+#### `Get`
+
+`Store.Get()` first checks whether `Store.cache` holds a value for the key. If it does, the function returns it; otherwise it calls `Store.parent.Get()`, caches the result in `Store.cache`, and returns it.
+
+#### `Set`
+
+`Store.Set()` writes the key-value pair to `Store.cache`. `cValue` has a `dirty` bool field indicating whether the cached value differs from the underlying value. When `Store.Set()` caches a new pair, `cValue.dirty` is set to `true`, so that the pair can be written to the underlying store when `Store.Write()` is called.
+
+#### `Iterator`
+
+`Store.Iterator()` has to traverse both the cached items and the original items. In `Store.iterator()`, an iterator is created for each of them and the two are merged. `memIterator` is essentially a slice of `KVPair`s, used for the cached items. `mergeIterator` combines the two iterators, traversing both in order.
+
+### `GasKv` Store
+
+Cosmos SDK applications use [`gas`](/docs/sdk/v0.50/learn/beginner/gas-fees) to track resource usage and prevent spam. [`GasKv.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/gaskv/store.go) is a `KVStore` wrapper that enables automatic gas consumption each time a read or write to the store is made. It is the solution of choice for tracking storage usage in Cosmos SDK applications.
+
+```go expandable
+package gaskv
+
+import (
+	"io"
+
+	"cosmossdk.io/store/types"
+)
+
+var _ types.KVStore = &Store{}
+
+// Store applies gas tracking to an underlying KVStore. It implements the
+// KVStore interface.
+type Store struct {
+	gasMeter  types.GasMeter
+	gasConfig types.GasConfig
+	parent    types.KVStore
+}
+
+// NewStore returns a reference to a new GasKVStore.
+func NewStore(parent types.KVStore, gasMeter types.GasMeter, gasConfig types.GasConfig) *Store {
+	kvs := &Store{
+		gasMeter:  gasMeter,
+		gasConfig: gasConfig,
+		parent:    parent,
+	}
+
+	return kvs
+}
+
+// Implements Store.
+func (gs *Store) GetStoreType() types.StoreType {
+	return gs.parent.GetStoreType()
+}
+
+// Implements KVStore.
+func (gs *Store) Get(key []byte) (value []byte) {
+	gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostFlat, types.GasReadCostFlatDesc)
+	value = gs.parent.Get(key)
+
+	// TODO overflow-safe math?
+	gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostPerByte*types.Gas(len(key)), types.GasReadPerByteDesc)
+	gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostPerByte*types.Gas(len(value)), types.GasReadPerByteDesc)
+
+	return value
+}
+
+// Implements KVStore.
+func (gs *Store) Set(key, value []byte) {
+	types.AssertValidKey(key)
+	types.AssertValidValue(value)
+	gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostFlat, types.GasWriteCostFlatDesc)
+	// TODO overflow-safe math?
+	gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostPerByte*types.Gas(len(key)), types.GasWritePerByteDesc)
+	gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostPerByte*types.Gas(len(value)), types.GasWritePerByteDesc)
+	gs.parent.Set(key, value)
+}
+
+// Implements KVStore.
+func (gs *Store) Has(key []byte) bool {
+	gs.gasMeter.ConsumeGas(gs.gasConfig.HasCost, types.GasHasDesc)
+	return gs.parent.Has(key)
+}
+
+// Implements KVStore.
+func (gs *Store) Delete(key []byte) {
+	// charge gas to prevent certain attack vectors even though space is being freed
+	gs.gasMeter.ConsumeGas(gs.gasConfig.DeleteCost, types.GasDeleteDesc)
+	gs.parent.Delete(key)
+}
+
+// Iterator implements the KVStore interface. It returns an iterator which
+// incurs a flat gas cost for seeking to the first key/value pair and a variable
+// gas cost based on the current value's length if the iterator is valid.
+func (gs *Store) Iterator(start, end []byte) types.Iterator {
+	return gs.iterator(start, end, true)
+}
+
+// ReverseIterator implements the KVStore interface. It returns a reverse
+// iterator which incurs a flat gas cost for seeking to the first key/value pair
+// and a variable gas cost based on the current value's length if the iterator
+// is valid.
+func (gs *Store) ReverseIterator(start, end []byte) types.Iterator {
+	return gs.iterator(start, end, false)
+}
+
+// Implements KVStore.
+func (gs *Store) CacheWrap() types.CacheWrap {
+	panic("cannot CacheWrap a GasKVStore")
+}
+
+// CacheWrapWithTrace implements the KVStore interface.
+func (gs *Store) CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) types.CacheWrap {
+	panic("cannot CacheWrapWithTrace a GasKVStore")
+}
+
+func (gs *Store) iterator(start, end []byte, ascending bool) types.Iterator {
+	var parent types.Iterator
+	if ascending {
+		parent = gs.parent.Iterator(start, end)
+	} else {
+		parent = gs.parent.ReverseIterator(start, end)
+	}
+
+	gi := newGasIterator(gs.gasMeter, gs.gasConfig, parent)
+	gi.(*gasIterator).consumeSeekGas()
+
+	return gi
+}
+
+type gasIterator struct {
+	gasMeter  types.GasMeter
+	gasConfig types.GasConfig
+	parent    types.Iterator
+}
+
+func newGasIterator(gasMeter types.GasMeter, gasConfig types.GasConfig, parent types.Iterator) types.Iterator {
+	return &gasIterator{
+		gasMeter:  gasMeter,
+		gasConfig: gasConfig,
+		parent:    parent,
+	}
+}
+
+// Implements Iterator.
+func (gi *gasIterator) Domain() (start, end []byte) {
+	return gi.parent.Domain()
+}
+
+// Implements Iterator.
+func (gi *gasIterator) Valid() bool {
+	return gi.parent.Valid()
+}
+
+// Next implements the Iterator interface. It seeks to the next key/value pair
+// in the iterator. It incurs a flat gas cost for seeking and a variable gas
+// cost based on the current value's length if the iterator is valid.
+func (gi *gasIterator) Next() {
+	gi.consumeSeekGas()
+	gi.parent.Next()
+}
+
+// Key implements the Iterator interface. It returns the current key and it does
+// not incur any gas cost.
+func (gi *gasIterator) Key() (key []byte) {
+	key = gi.parent.Key()
+	return key
+}
+
+// Value implements the Iterator interface. It returns the current value and it
+// does not incur any gas cost.
+func (gi *gasIterator) Value() (value []byte) {
+	value = gi.parent.Value()
+	return value
+}
+
+// Implements Iterator.
+func (gi *gasIterator) Close() error {
+	return gi.parent.Close()
+}
+
+// Error delegates the Error call to the parent iterator.
+func (gi *gasIterator) Error() error {
+	return gi.parent.Error()
+}
+
+// consumeSeekGas consumes on each iteration step a flat gas cost and a variable gas cost
+// based on the current value's length.
+func (gi *gasIterator) consumeSeekGas() {
+	if gi.Valid() {
+		key := gi.Key()
+		value := gi.Value()
+
+		gi.gasMeter.ConsumeGas(gi.gasConfig.ReadCostPerByte*types.Gas(len(key)), types.GasValuePerByteDesc)
+		gi.gasMeter.ConsumeGas(gi.gasConfig.ReadCostPerByte*types.Gas(len(value)), types.GasValuePerByteDesc)
+	}
+
+	gi.gasMeter.ConsumeGas(gi.gasConfig.IterNextCostFlat, types.GasIterNextCostFlatDesc)
+}
+```
+
+When methods of the parent `KVStore` are called, `GasKv.Store` automatically consumes the appropriate amount of gas depending on the `Store.gasConfig`:
+
+```go expandable
+package types
+
+import (
+
+ "fmt"
+ "math"
+)
+
+/ Gas consumption descriptors.
+const (
+ GasIterNextCostFlatDesc = "IterNextFlat"
+ GasValuePerByteDesc = "ValuePerByte"
+ GasWritePerByteDesc = "WritePerByte"
+ GasReadPerByteDesc = "ReadPerByte"
+ GasWriteCostFlatDesc = "WriteFlat"
+ GasReadCostFlatDesc = "ReadFlat"
+ GasHasDesc = "Has"
+ GasDeleteDesc = "Delete"
+)
+
+/ Gas measured by the SDK
+type Gas = uint64
+
+/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a
+/ negative gas consumed amount.
+type ErrorNegativeGasConsumed struct {
+ Descriptor string
+}
+
+/ ErrorOutOfGas defines an error thrown when an action results in out of gas.
+type ErrorOutOfGas struct {
+ Descriptor string
+}
+
+/ ErrorGasOverflow defines an error thrown when an action results gas consumption
+/ unsigned integer overflow.
+type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the trasaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. 
+func (g *infiniteGasMeter) + +String() + +string { + return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed) +} + +/ GasConfig defines gas cost for each operation on KVStores +type GasConfig struct { + HasCost Gas + DeleteCost Gas + ReadCostFlat Gas + ReadCostPerByte Gas + WriteCostFlat Gas + WriteCostPerByte Gas + IterNextCostFlat Gas +} + +/ KVGasConfig returns a default gas config for KVStores. +func KVGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 1000, + DeleteCost: 1000, + ReadCostFlat: 1000, + ReadCostPerByte: 3, + WriteCostFlat: 2000, + WriteCostPerByte: 30, + IterNextCostFlat: 30, +} +} + +/ TransientGasConfig returns a default gas config for TransientStores. +func TransientGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 100, + DeleteCost: 100, + ReadCostFlat: 100, + ReadCostPerByte: 0, + WriteCostFlat: 200, + WriteCostPerByte: 3, + IterNextCostFlat: 3, +} +} +``` + +By default, all `KVStores` are wrapped in `GasKv.Stores` when retrieved. This is done in the `KVStore()` method of the [`context`](/docs/sdk/v0.50/learn/advanced/context): + +```go expandable +package types + +import ( + + "context" + "time" + "cosmossdk.io/log" + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cosmos/gogoproto/proto" + "cosmossdk.io/store/gaskv" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/header" +) + +/ ExecMode defines the execution mode which can be set on a Context. +type ExecMode uint8 + +/ All possible execution modes. +const ( + ExecModeCheck ExecMode = iota + ExecModeReCheck + ExecModeSimulate + ExecModePrepareProposal + ExecModeProcessProposal + ExecModeVoteExtension + ExecModeFinalize +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. 
We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + +BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + 
+IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ clone the header before returning +func (c Context) + +BlockHeader() + +cmtproto.Header { + msg := proto.Clone(&c.header).(*cmtproto.Header) + +return *msg +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: 
NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. +func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. +func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) + +c.headerHash = temp + return c +} + +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. +func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. 
+func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. +func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. +func (c Context) + +WithGasMeter(meter storetypes.GasMeter) -## Introduction to Cosmos SDK Stores[​](#introduction-to-cosmos-sdk-stores "Direct link to Introduction to Cosmos SDK Stores") +Context { + c.gasMeter = meter + return c +} -The Cosmos SDK comes with a large set of stores to persist the state of applications. By default, the main store of Cosmos SDK applications is a `multistore`, i.e. a store of stores. Developers can add any number of key-value stores to the multistore, depending on their application needs. The multistore exists to support the modularity of the Cosmos SDK, as it lets each module declare and manage their own subset of the state. Key-value stores in the multistore can only be accessed with a specific capability `key`, which is typically held in the [`keeper`](/v0.50/build/building-modules/keeper) of the module that declared the store. 
+/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) -``` -+-----------------------------------------------------+| || +--------------------------------------------+ || | | || | KVStore 1 - Manage by keeper of Module 1 || | | || +--------------------------------------------+ || || +--------------------------------------------+ || | | || | KVStore 2 - Manage by keeper of Module 2 | || | | || +--------------------------------------------+ || || +--------------------------------------------+ || | | || | KVStore 3 - Manage by keeper of Module 2 | || | | || +--------------------------------------------+ || || +--------------------------------------------+ || | | || | KVStore 4 - Manage by keeper of Module 3 | || | | || +--------------------------------------------+ || || +--------------------------------------------+ || | | || | KVStore 5 - Manage by keeper of Module 4 | || | | || +--------------------------------------------+ || || Main Multistore || |+-----------------------------------------------------+ Application's State -``` +WithBlockGasMeter(meter storetypes.GasMeter) -### Store Interface[​](#store-interface "Direct link to Store Interface") +Context { + c.blockGasMeter = meter + return c +} -At its very core, a Cosmos SDK `store` is an object that holds a `CacheWrapper` and has a `GetStoreType()` method: +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) -store/types/store.go +WithKVGasConfig(gasConfig storetypes.GasConfig) -``` -loading... 
-``` +Context { + c.kvGasConfig = gasConfig + return c +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/store.go#L15-L18) +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) -The `GetStoreType` is a simple method that returns the type of store, whereas a `CacheWrapper` is a simple interface that implements store read caching and write branching through `Write` method: +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) -store/types/store.go +Context { + c.transientKVGasConfig = gasConfig + return c +} -``` -loading... -``` +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/store.go#L287-L320) +WithIsCheckTx(isCheckTx bool) -Branching and cache is used ubiquitously in the Cosmos SDK and required to be implemented on every store type. A storage branch creates an isolated, ephemeral branch of a store that can be passed around and updated without affecting the main underlying store. This is used to trigger temporary state-transitions that may be reverted later should an error occur. Read more about it in [context](/v0.50/learn/advanced/context#Store-branching) +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} -### Commit Store[​](#commit-store "Direct link to Commit Store") +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. +func (c Context) -A commit store is a store that has the ability to commit changes made to the underlying tree or db. 
The Cosmos SDK differentiates simple stores from commit stores by extending the basic store interfaces with a `Committer`: +WithIsReCheckTx(isRecheckTx bool) -store/types/store.go +Context { + if isRecheckTx { + c.checkTx = true +} -``` -loading... -``` +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/store.go#L32-L37) +/ WithExecMode returns a Context with an updated ExecMode. +func (c Context) -The `Committer` is an interface that defines methods to persist changes to disk: +WithExecMode(m ExecMode) -store/types/store.go +Context { + c.execMode = m + return c +} -``` -loading... -``` +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/store.go#L20-L30) +WithMinGasPrices(gasPrices DecCoins) -The `CommitID` is a deterministic commit of the state tree. Its hash is returned to the underlying consensus engine and stored in the block header. Note that commit store interfaces exist for various purposes, one of which is to make sure not every object can commit the store. As part of the [object-capabilities model](/v0.50/learn/advanced/ocap) of the Cosmos SDK, only `baseapp` should have the ability to commit stores. For example, this is the reason why the `ctx.KVStore()` method by which modules typically access stores returns a `KVStore` and not a `CommitKVStore`. +Context { + c.minGasPrice = gasPrices + return c +} -The Cosmos SDK comes with many types of stores, the most used being [`CommitMultiStore`](#multistore), [`KVStore`](#kvstore) and [`GasKv` store](#gaskv-store). [Other types of stores](#other-stores) include `Transient` and `TraceKV` stores. 
+/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) -## Multistore[​](#multistore "Direct link to Multistore") +WithConsensusParams(params cmtproto.ConsensusParams) -### Multistore Interface[​](#multistore-interface "Direct link to Multistore Interface") +Context { + c.consParams = params + return c +} -Each Cosmos SDK application holds a multistore at its root to persist its state. The multistore is a store of `KVStores` that follows the `Multistore` interface: +/ WithEventManager returns a Context with an updated event manager +func (c Context) -store/types/store.go +WithEventManager(em EventManagerI) -``` -loading... -``` +Context { + c.eventManager = em + return c +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/store.go#L123-L155) +/ WithPriority returns a Context with an updated tx priority +func (c Context) -If tracing is enabled, then branching the multistore will firstly wrap all the underlying `KVStore` in [`TraceKv.Store`](#tracekv-store). +WithPriority(p int64) -### CommitMultiStore[​](#commitmultistore "Direct link to CommitMultiStore") +Context { + c.priority = p + return c +} -The main type of `Multistore` used in the Cosmos SDK is `CommitMultiStore`, which is an extension of the `Multistore` interface: +/ WithStreamingManager returns a Context with an updated streaming manager +func (c Context) -store/types/store.go +WithStreamingManager(sm storetypes.StreamingManager) -``` -loading... -``` +Context { + c.streamingManager = sm + return c +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/store.go#L164-L227) +/ WithCometInfo returns a Context with an updated comet info +func (c Context) -As for concrete implementation, the \[`rootMulti.Store`] is the go-to implementation of the `CommitMultiStore` interface. 
+WithCometInfo(cometInfo comet.BlockInfo) -store/rootmulti/store.go +Context { + c.cometInfo = cometInfo + return c +} -``` -loading... -``` +/ WithHeaderInfo returns a Context with an updated header info +func (c Context) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/rootmulti/store.go#L53-L77) +WithHeaderInfo(headerInfo header.Info) -The `rootMulti.Store` is a base-layer multistore built around a `db` on top of which multiple `KVStores` can be mounted, and is the default multistore store used in [`baseapp`](/v0.50/learn/advanced/baseapp). +Context { + / Settime to UTC + headerInfo.Time = headerInfo.Time.UTC() -### CacheMultiStore[​](#cachemultistore "Direct link to CacheMultiStore") +c.headerInfo = headerInfo + return c +} -Whenever the `rootMulti.Store` needs to be branched, a [`cachemulti.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/cachemulti/store.go) is used. +/ TODO: remove??? +func (c Context) -store/cachemulti/store.go +IsZero() -``` -loading... -``` +bool { + return c.ms == nil +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/cachemulti/store.go#L19-L33) +func (c Context) -`cachemulti.Store` branches all substores (creates a virtual store for each substore) in its constructor and hold them in `Store.stores`. Moreover caches all read queries. `Store.GetKVStore()` returns the store from `Store.stores`, and `Store.Write()` recursively calls `CacheWrap.Write()` on all the substores. +WithValue(key, value interface{ +}) -## Base-layer KVStores[​](#base-layer-kvstores "Direct link to Base-layer KVStores") +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) -### `KVStore` and `CommitKVStore` Interfaces[​](#kvstore-and-commitkvstore-interfaces "Direct link to kvstore-and-commitkvstore-interfaces") +return c +} -A `KVStore` is a simple key-value store used to store and retrieve data. 
A `CommitKVStore` is a `KVStore` that also implements a `Committer`. By default, stores mounted in `baseapp`'s main `CommitMultiStore` are `CommitKVStore`s. The `KVStore` interface is primarily used to restrict modules from accessing the committer. +func (c Context) -Individual `KVStore`s are used by modules to manage a subset of the global state. `KVStores` can be accessed by objects that hold a specific key. This `key` should only be exposed to the [`keeper`](/v0.50/build/building-modules/keeper) of the module that defines the store. +Value(key interface{ +}) -`CommitKVStore`s are declared by proxy of their respective `key` and mounted on the application's [multistore](#multistore) in the [main application file](/v0.50/learn/beginner/app-anatomy#core-application-file). In the same file, the `key` is also passed to the module's `keeper` that is responsible for managing the store. +interface{ +} { + if key == SdkContextKey { + return c +} -store/types/store.go +return c.baseCtx.Value(key) +} -``` -loading... -``` +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/store.go#L229-L266) +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) -Apart from the traditional `Get` and `Set` methods, that a `KVStore` must implement via the `BasicKVStore` interface; a `KVStore` must provide an `Iterator(start, end)` method which returns an `Iterator` object. It is used to iterate over a range of keys, typically keys that share a common prefix. Below is an example from the bank's module keeper, used to iterate over all account balances: +KVStore(key storetypes.StoreKey) -x/bank/keeper/view\.go +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} -``` -loading... 
-``` +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/keeper/view.go#L125-L140) +TransientStore(key storetypes.StoreKey) -### `IAVL` Store[​](#iavl-store "Direct link to iavl-store") +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} -The default implementation of `KVStore` and `CommitKVStore` used in `baseapp` is the `iavl.Store`. +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) -store/iavl/store.go +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() -``` -loading... -``` +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/iavl/store.go#L35-L40) +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) -`iavl` stores are based around an [IAVL Tree](https://github.com/cosmos/iavl), a self-balancing binary tree which guarantees that: +cms.Write() +} -* `Get` and `Set` operations are O(log n), where n is the number of elements in the tree. -* Iteration efficiently returns the sorted elements within the range. -* Each tree version is immutable and can be retrieved even after a commit (depending on the pruning settings). +return cc, writeCache +} -The documentation on the IAVL Tree is located [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md). +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) -### `DbAdapter` Store[​](#dbadapter-store "Direct link to dbadapter-store") +/ ContextKey defines a type alias for a stdlib Context key. 
+type ContextKey string -`dbadapter.Store` is an adapter for `dbm.DB` making it fulfilling the `KVStore` interface. +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" -store/dbadapter/store.go +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. It is useful for passing an sdk.Context through methods that take a +/ stdlib context.Context parameter such as generated gRPC methods. To get the original +/ sdk.Context back, call UnwrapSDKContext. +/ +/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context. +func WrapSDKContext(ctx Context) -``` -loading... +context.Context { + return ctx +} + +/ UnwrapSDKContext retrieves a Context from a context.Context instance +/ attached with WrapSDKContext. It panics if a Context was not properly +/ attached +func UnwrapSDKContext(ctx context.Context) + +Context { + if sdkCtx, ok := ctx.(Context); ok { + return sdkCtx +} + +return ctx.Value(SdkContextKey).(Context) +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/dbadapter/store.go#L13-L16) +In this case, the gas configuration set in the `context` is used. The gas configuration can be set using the `WithKVGasConfig` method of the `context`. +Otherwise it uses the following default: -`dbadapter.Store` embeds `dbm.DB`, meaning most of the `KVStore` interface functions are implemented. The other functions (mostly miscellaneous) are manually implemented. This store is primarily used within [Transient Stores](#transient-store) +```go expandable +package types -### `Transient` Store[​](#transient-store "Direct link to transient-store") +import ( -`Transient.Store` is a base-layer `KVStore` which is automatically discarded at the end of the block. + "fmt" + "math" +) -store/transient/store.go +/ Gas consumption descriptors. 
+const ( + GasIterNextCostFlatDesc = "IterNextFlat" + GasValuePerByteDesc = "ValuePerByte" + GasWritePerByteDesc = "WritePerByte" + GasReadPerByteDesc = "ReadPerByte" + GasWriteCostFlatDesc = "WriteFlat" + GasReadCostFlatDesc = "ReadFlat" + GasHasDesc = "Has" + GasDeleteDesc = "Delete" +) -``` -loading... -``` +/ Gas measured by the SDK +type Gas = uint64 -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/transient/store.go#L16-L19) +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} -`Transient.Store` is a `dbadapter.Store` with a `dbm.NewMemDB()`. All `KVStore` methods are reused. When `Store.Commit()` is called, a new `dbadapter.Store` is assigned, discarding previous reference and making it garbage collected. +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. +type ErrorOutOfGas struct { + Descriptor string +} -This type of store is useful to persist information that is only relevant per-block. One example would be to store parameter changes (i.e. a bool set to `true` if a parameter changed in a block). +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. +type ErrorGasOverflow struct { + Descriptor string +} -x/params/types/subspace.go +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() -``` -loading... -``` +Gas + GasConsumedToLimit() -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/params/types/subspace.go#L21-L31) +Gas + GasRemaining() -Transient stores are typically accessed via the [`context`](/v0.50/learn/advanced/context) via the `TransientStore()` method: +Gas + Limit() -types/context.go +Gas + ConsumeGas(amount Gas, descriptor string) -``` -loading... 
-``` +RefundGas(amount Gas, descriptor string) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/context.go#L340-L343) +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} -## KVStore Wrappers[​](#kvstore-wrappers "Direct link to KVStore Wrappers") +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} -### CacheKVStore[​](#cachekvstore "Direct link to CacheKVStore") +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) -`cachekv.Store` is a wrapper `KVStore` which provides buffered writing / cached reading functionalities over the underlying `KVStore`. +GasConsumedToLimit() -store/cachekv/store.go +Gas { + if g.IsPastLimit() { + return g.limit +} -``` -loading... +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit.
+func (g *infiniteGasMeter)
+
+GasConsumedToLimit()
+
+Gas {
+	return g.consumed
+}
+
+/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter.
+func (g *infiniteGasMeter)
+
+GasRemaining()
+
+Gas {
+	return math.MaxUint64
+}
+
+/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter.
+func (g *infiniteGasMeter)
+
+Limit()
+
+Gas {
+	return math.MaxUint64
+}
+
+/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit.
+func (g *infiniteGasMeter)
+
+ConsumeGas(amount Gas, descriptor string) {
+	var overflow bool
+	/ TODO: Should we set the consumed field after overflow checking?
+	g.consumed, overflow = addUint64Overflow(g.consumed, amount)
+	if overflow {
+		panic(ErrorGasOverflow{
+	descriptor
+})
+}
+}
+
+/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the
+/ gas consumed, the function will panic.
+/
+/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that
+/ EVM-compatible chains can fully support the go-ethereum StateDb interface.
+/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference.
+func (g *infiniteGasMeter)
+
+RefundGas(amount Gas, descriptor string) {
+	if g.consumed < amount {
+		panic(ErrorNegativeGasConsumed{
+	Descriptor: descriptor
+})
+}
+
+g.consumed -= amount
+}
+
+/ IsPastLimit returns false since the gas limit is not confined.
+func (g *infiniteGasMeter)
+
+IsPastLimit()
+
+bool {
+	return false
+}
+
+/ IsOutOfGas returns false since the gas limit is not confined.
+func (g *infiniteGasMeter)
+
+IsOutOfGas()
+
+bool {
+	return false
+}
+
+/ String returns the InfiniteGasMeter's gas consumed.
+func (g *infiniteGasMeter) + +String() + +string { + return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed) +} + +/ GasConfig defines gas cost for each operation on KVStores +type GasConfig struct { + HasCost Gas + DeleteCost Gas + ReadCostFlat Gas + ReadCostPerByte Gas + WriteCostFlat Gas + WriteCostPerByte Gas + IterNextCostFlat Gas +} + +/ KVGasConfig returns a default gas config for KVStores. +func KVGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 1000, + DeleteCost: 1000, + ReadCostFlat: 1000, + ReadCostPerByte: 3, + WriteCostFlat: 2000, + WriteCostPerByte: 30, + IterNextCostFlat: 30, +} +} + +/ TransientGasConfig returns a default gas config for TransientStores. +func TransientGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 100, + DeleteCost: 100, + ReadCostFlat: 100, + ReadCostPerByte: 0, + WriteCostFlat: 200, + WriteCostPerByte: 3, + IterNextCostFlat: 3, +} +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/cachekv/store.go#L26-L36) +### `TraceKv` Store -This is the type used whenever an IAVL Store needs to be branched to create an isolated store (typically when we need to mutate a state that might be reverted later). +`tracekv.Store` is a wrapper `KVStore` which provides operation tracing functionalities over the underlying `KVStore`. It is applied automatically by the Cosmos SDK on all `KVStore` if tracing is enabled on the parent `MultiStore`. -#### `Get`[​](#get "Direct link to get") +```go expandable +package tracekv + +import ( + + "encoding/base64" + "encoding/json" + "io" + "cosmossdk.io/errors" + "cosmossdk.io/store/types" +) + +const ( + writeOp operation = "write" + readOp operation = "read" + deleteOp operation = "delete" + iterKeyOp operation = "iterKey" + iterValueOp operation = "iterValue" +) + +type ( + / Store implements the KVStore interface with tracing enabled. + / Operations are traced on each core KVStore call and written to the + / underlying io.writer. 
+ / + / TODO: Should we use a buffered writer and implement Commit on + / Store? + Store struct { + parent types.KVStore + writer io.Writer + context types.TraceContext +} + + / operation represents an IO operation + operation string + + / traceOperation implements a traced KVStore operation + traceOperation struct { + Operation operation `json:"operation"` + Key string `json:"key"` + Value string `json:"value"` + Metadata map[string]interface{ +} `json:"metadata"` +} +) -`Store.Get()` firstly checks if `Store.cache` has an associated value with the key. If the value exists, the function returns it. If not, the function calls `Store.parent.Get()`, caches the result in `Store.cache`, and returns it. +/ NewStore returns a reference to a new traceKVStore given a parent +/ KVStore implementation and a buffered writer. +func NewStore(parent types.KVStore, writer io.Writer, tc types.TraceContext) *Store { + return &Store{ + parent: parent, writer: writer, context: tc +} +} -#### `Set`[​](#set "Direct link to set") +/ Get implements the KVStore interface. It traces a read operation and +/ delegates a Get call to the parent KVStore. +func (tkv *Store) -`Store.Set()` sets the key-value pair to the `Store.cache`. `cValue` has the field dirty bool which indicates whether the cached value is different from the underlying value. When `Store.Set()` caches a new pair, the `cValue.dirty` is set `true` so when `Store.Write()` is called it can be written to the underlying store. +Get(key []byte) []byte { + value := tkv.parent.Get(key) -#### `Iterator`[​](#iterator "Direct link to iterator") +writeOperation(tkv.writer, readOp, tkv.context, key, value) -`Store.Iterator()` have to traverse on both cached items and the original items. In `Store.iterator()`, two iterators are generated for each of them, and merged. `memIterator` is essentially a slice of the `KVPairs`, used for cached items. 
`mergeIterator` is a combination of two iterators, where traverse happens ordered on both iterators. +return value +} -### `GasKv` Store[​](#gaskv-store "Direct link to gaskv-store") +/ Set implements the KVStore interface. It traces a write operation and +/ delegates the Set call to the parent KVStore. +func (tkv *Store) -Cosmos SDK applications use [`gas`](/v0.50/learn/beginner/gas-fees) to track resources usage and prevent spam. [`GasKv.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/gaskv/store.go) is a `KVStore` wrapper that enables automatic gas consumption each time a read or write to the store is made. It is the solution of choice to track storage usage in Cosmos SDK applications. +Set(key, value []byte) { + types.AssertValidKey(key) -store/gaskv/store.go +writeOperation(tkv.writer, writeOp, tkv.context, key, value) -``` -loading... -``` +tkv.parent.Set(key, value) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/gaskv/store.go#L11-L17) +/ Delete implements the KVStore interface. It traces a write operation and +/ delegates the Delete call to the parent KVStore. +func (tkv *Store) -When methods of the parent `KVStore` are called, `GasKv.Store` automatically consumes appropriate amount of gas depending on the `Store.gasConfig`: +Delete(key []byte) { + writeOperation(tkv.writer, deleteOp, tkv.context, key, nil) -store/types/gas.go +tkv.parent.Delete(key) +} -``` -loading... -``` +/ Has implements the KVStore interface. It delegates the Has call to the +/ parent KVStore. +func (tkv *Store) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/gas.go#L219-L228) +Has(key []byte) -By default, all `KVStores` are wrapped in `GasKv.Stores` when retrieved. This is done in the `KVStore()` method of the [`context`](/v0.50/learn/advanced/context): +bool { + return tkv.parent.Has(key) +} -types/context.go +/ Iterator implements the KVStore interface. 
It delegates the Iterator call
+/ to the parent KVStore.
+func (tkv *Store)

-```
-loading...
-```

+Iterator(start, end []byte)

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/context.go#L335-L338)

+types.Iterator {
+	return tkv.iterator(start, end, true)
+}

-In this case, the gas configuration set in the `context` is used. The gas configuration can be set using the `WithKVGasConfig` method of the `context`. Otherwise it uses the following default:

+/ ReverseIterator implements the KVStore interface. It delegates the
+/ ReverseIterator call to the parent KVStore.
+func (tkv *Store)

-store/types/gas.go

+ReverseIterator(start, end []byte)

-```
-loading...
-```

+types.Iterator {
+	return tkv.iterator(start, end, false)
+}

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/gas.go#L230-L241)

+/ iterator facilitates iteration over a KVStore. It delegates the necessary
+/ calls to its parent KVStore.
+func (tkv *Store)

-### `TraceKv` Store[​](#tracekv-store "Direct link to tracekv-store")

+iterator(start, end []byte, ascending bool)

-`tracekv.Store` is a wrapper `KVStore` which provides operation tracing functionalities over the underlying `KVStore`. It is applied automatically by the Cosmos SDK on all `KVStore` if tracing is enabled on the parent `MultiStore`.

+types.Iterator {
+	var parent types.Iterator
+	if ascending {
+		parent = tkv.parent.Iterator(start, end)
+}

-store/tracekv/store.go

+else {
+		parent = tkv.parent.ReverseIterator(start, end)
+}

-```
-loading...
-```

+return newTraceIterator(tkv.writer, parent, tkv.context)
+}
+
+type traceIterator struct {
+	parent types.Iterator
+	writer io.Writer
+	context types.TraceContext
+}
+
+func newTraceIterator(w io.Writer, parent types.Iterator, tc types.TraceContext)
+
+types.Iterator {
+	return &traceIterator{
+	writer: w, parent: parent, context: tc
+}
+}
+
+/ Domain implements the Iterator interface.
+func (ti *traceIterator) + +Domain() (start, end []byte) { + return ti.parent.Domain() +} + +/ Valid implements the Iterator interface. +func (ti *traceIterator) + +Valid() + +bool { + return ti.parent.Valid() +} + +/ Next implements the Iterator interface. +func (ti *traceIterator) + +Next() { + ti.parent.Next() +} + +/ Key implements the Iterator interface. +func (ti *traceIterator) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/tracekv/store.go#L20-L43) +Key() []byte { + key := ti.parent.Key() + +writeOperation(ti.writer, iterKeyOp, ti.context, key, nil) + +return key +} + +/ Value implements the Iterator interface. +func (ti *traceIterator) + +Value() []byte { + value := ti.parent.Value() + +writeOperation(ti.writer, iterValueOp, ti.context, nil, value) + +return value +} + +/ Close implements the Iterator interface. +func (ti *traceIterator) + +Close() + +error { + return ti.parent.Close() +} + +/ Error delegates the Error call to the parent iterator. +func (ti *traceIterator) + +Error() + +error { + return ti.parent.Error() +} + +/ GetStoreType implements the KVStore interface. It returns the underlying +/ KVStore type. +func (tkv *Store) + +GetStoreType() + +types.StoreType { + return tkv.parent.GetStoreType() +} + +/ CacheWrap implements the KVStore interface. It panics because a Store +/ cannot be branched. +func (tkv *Store) + +CacheWrap() + +types.CacheWrap { + panic("cannot CacheWrap a TraceKVStore") +} + +/ CacheWrapWithTrace implements the KVStore interface. It panics as a +/ Store cannot be branched. +func (tkv *Store) + +CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) + +types.CacheWrap { + panic("cannot CacheWrapWithTrace a TraceKVStore") +} + +/ writeOperation writes a KVStore operation to the underlying io.Writer as +/ JSON-encoded data where the key/value pair is base64 encoded. 
+func writeOperation(w io.Writer, op operation, tc types.TraceContext, key, value []byte) { + traceOp := traceOperation{ + Operation: op, + Key: base64.StdEncoding.EncodeToString(key), + Value: base64.StdEncoding.EncodeToString(value), +} + if tc != nil { + traceOp.Metadata = tc +} + +raw, err := json.Marshal(traceOp) + if err != nil { + panic(errors.Wrap(err, "failed to serialize trace operation")) +} + if _, err := w.Write(raw); err != nil { + panic(errors.Wrap(err, "failed to write trace operation")) +} + +io.WriteString(w, "\n") +} +``` When each `KVStore` methods are called, `tracekv.Store` automatically logs `traceOperation` to the `Store.writer`. `traceOperation.Metadata` is filled with `Store.context` when it is not nil. `TraceContext` is a `map[string]interface{}`. -### `Prefix` Store[​](#prefix-store "Direct link to prefix-store") +### `Prefix` Store `prefix.Store` is a wrapper `KVStore` which provides automatic key-prefixing functionalities over the underlying `KVStore`. -store/prefix/store.go +```go expandable +package prefix -``` -loading... 
-```

+import (
+
+	"bytes"
+	"errors"
+	"io"
+	"cosmossdk.io/store/cachekv"
+	"cosmossdk.io/store/tracekv"
+	"cosmossdk.io/store/types"
+)
+
+var _ types.KVStore = Store{
+}
+
+/ Store is similar to cometbft/cometbft/libs/db/prefix_db
+/ both give access only to the limited subset of the store
+/ for convenience or safety
+type Store struct {
+	parent types.KVStore
+	prefix []byte
+}
+
+func NewStore(parent types.KVStore, prefix []byte)
+
+Store {
+	return Store{
+	parent: parent,
+	prefix: prefix,
+}
+}

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/prefix/store.go#L15-L21)

+func cloneAppend(bz, tail []byte) (res []byte) {
+	res = make([]byte, len(bz)+len(tail))
+
+copy(res, bz)
+
+copy(res[len(bz):], tail)
+
+return
+}
+
+func (s Store)
+
+key(key []byte) (res []byte) {
+	if key == nil {
+		panic("nil key on Store")
+}
+
+res = cloneAppend(s.prefix, key)
+
+return
+}
+
+/ Implements Store
+func (s Store)
+
+GetStoreType()
+
+types.StoreType {
+	return s.parent.GetStoreType()
+}
+
+/ Implements CacheWrap
+func (s Store)
+
+CacheWrap()
+
+types.CacheWrap {
+	return cachekv.NewStore(s)
+}
+
+/ CacheWrapWithTrace implements the KVStore interface.
+func (s Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return cachekv.NewStore(tracekv.NewStore(s, w, tc)) +} + +/ Implements KVStore +func (s Store) + +Get(key []byte) []byte { + res := s.parent.Get(s.key(key)) + +return res +} + +/ Implements KVStore +func (s Store) + +Has(key []byte) + +bool { + return s.parent.Has(s.key(key)) +} + +/ Implements KVStore +func (s Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + +s.parent.Set(s.key(key), value) +} + +/ Implements KVStore +func (s Store) + +Delete(key []byte) { + s.parent.Delete(s.key(key)) +} + +/ Implements KVStore +/ Check https://github.com/cometbft/cometbft/blob/master/libs/db/prefix_db.go#L106 +func (s Store) + +Iterator(start, end []byte) + +types.Iterator { + newstart := cloneAppend(s.prefix, start) + +var newend []byte + if end == nil { + newend = cpIncr(s.prefix) +} + +else { + newend = cloneAppend(s.prefix, end) +} + iter := s.parent.Iterator(newstart, newend) + +return newPrefixIterator(s.prefix, start, end, iter) +} + +/ ReverseIterator implements KVStore +/ Check https://github.com/cometbft/cometbft/blob/master/libs/db/prefix_db.go#L129 +func (s Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + newstart := cloneAppend(s.prefix, start) + +var newend []byte + if end == nil { + newend = cpIncr(s.prefix) +} + +else { + newend = cloneAppend(s.prefix, end) +} + iter := s.parent.ReverseIterator(newstart, newend) + +return newPrefixIterator(s.prefix, start, end, iter) +} + +var _ types.Iterator = (*prefixIterator)(nil) + +type prefixIterator struct { + prefix []byte + start []byte + end []byte + iter types.Iterator + valid bool +} + +func newPrefixIterator(prefix, start, end []byte, parent types.Iterator) *prefixIterator { + return &prefixIterator{ + prefix: prefix, + start: start, + end: end, + iter: parent, + valid: parent.Valid() && bytes.HasPrefix(parent.Key(), prefix), +} +} + +/ Implements 
Iterator +func (pi *prefixIterator) + +Domain() ([]byte, []byte) { + return pi.start, pi.end +} + +/ Implements Iterator +func (pi *prefixIterator) + +Valid() + +bool { + return pi.valid && pi.iter.Valid() +} + +/ Implements Iterator +func (pi *prefixIterator) + +Next() { + if !pi.valid { + panic("prefixIterator invalid, cannot call Next()") +} + if pi.iter.Next(); !pi.iter.Valid() || !bytes.HasPrefix(pi.iter.Key(), pi.prefix) { + / TODO: shouldn't pi be set to nil instead? + pi.valid = false +} +} + +/ Implements Iterator +func (pi *prefixIterator) + +Key() (key []byte) { + if !pi.valid { + panic("prefixIterator invalid, cannot call Key()") +} + +key = pi.iter.Key() + +key = stripPrefix(key, pi.prefix) + +return +} + +/ Implements Iterator +func (pi *prefixIterator) + +Value() []byte { + if !pi.valid { + panic("prefixIterator invalid, cannot call Value()") +} + +return pi.iter.Value() +} + +/ Implements Iterator +func (pi *prefixIterator) + +Close() + +error { + return pi.iter.Close() +} + +/ Error returns an error if the prefixIterator is invalid defined by the Valid +/ method. +func (pi *prefixIterator) + +Error() + +error { + if !pi.Valid() { + return errors.New("invalid prefixIterator") +} + +return nil +} + +/ copied from github.com/cometbft/cometbft/libs/db/prefix_db.go +func stripPrefix(key, prefix []byte) []byte { + if len(key) < len(prefix) || !bytes.Equal(key[:len(prefix)], prefix) { + panic("should not happen") +} + +return key[len(prefix):] +} + +/ wrapping types.PrefixEndBytes +func cpIncr(bz []byte) []byte { + return types.PrefixEndBytes(bz) +} +``` When `Store.{Get, Set}()` is called, the store forwards the call to its parent, with the key prefixed with the `Store.prefix`. When `Store.Iterator()` is called, it does not simply prefix the `Store.prefix`, since it does not work as intended. In that case, some of the elements are traversed even if they are not starting with the prefix. 
-### `ListenKv` Store[​](#listenkv-store "Direct link to listenkv-store") +### `ListenKv` Store -`listenkv.Store` is a wrapper `KVStore` which provides state listening capabilities over the underlying `KVStore`. It is applied automatically by the Cosmos SDK on any `KVStore` whose `StoreKey` is specified during state streaming configuration. Additional information about state streaming configuration can be found in the [store/streaming/README.md](https://github.com/cosmos/cosmos-sdk/tree/v0.50.0-alpha.0/store/streaming). +`listenkv.Store` is a wrapper `KVStore` which provides state listening capabilities over the underlying `KVStore`. +It is applied automatically by the Cosmos SDK on any `KVStore` whose `StoreKey` is specified during state streaming configuration. +Additional information about state streaming configuration can be found in the [store/streaming/README.md](https://github.com/cosmos/cosmos-sdk/tree/v0.50.0-alpha.0/store/streaming). -store/listenkv/store.go +```go expandable +package listenkv -``` -loading... -``` +import ( + + "io" + "cosmossdk.io/store/types" +) + +var _ types.KVStore = &Store{ +} + +/ Store implements the KVStore interface with listening enabled. +/ Operations are traced on each core KVStore call and written to any of the +/ underlying listeners with the proper key and operation permissions +type Store struct { + parent types.KVStore + listener *types.MemoryListener + parentStoreKey types.StoreKey +} + +/ NewStore returns a reference to a new traceKVStore given a parent +/ KVStore implementation and a buffered writer. +func NewStore(parent types.KVStore, parentStoreKey types.StoreKey, listener *types.MemoryListener) *Store { + return &Store{ + parent: parent, listener: listener, parentStoreKey: parentStoreKey +} +} + +/ Get implements the KVStore interface. It traces a read operation and +/ delegates a Get call to the parent KVStore. 
+func (s *Store)
+
+Get(key []byte) []byte {
+	value := s.parent.Get(key)
+
+return value
+}
+
+/ Set implements the KVStore interface. It traces a write operation and
+/ delegates the Set call to the parent KVStore.
+func (s *Store)
+
+Set(key, value []byte) {
+	types.AssertValidKey(key)
+
+s.parent.Set(key, value)
+
+s.listener.OnWrite(s.parentStoreKey, key, value, false)
+}
+
+/ Delete implements the KVStore interface. It traces a write operation and
+/ delegates the Delete call to the parent KVStore.
+func (s *Store)
+
+Delete(key []byte) {
+	s.parent.Delete(key)
+
+s.listener.OnWrite(s.parentStoreKey, key, nil, true)
+}
+
+/ Has implements the KVStore interface. It delegates the Has call to the
+/ parent KVStore.
+func (s *Store)
+
+Has(key []byte)
+
+bool {
+	return s.parent.Has(key)
+}
+
+/ Iterator implements the KVStore interface. It delegates the Iterator call
+/ to the parent KVStore.
+func (s *Store)
+
+Iterator(start, end []byte)
+
+types.Iterator {
+	return s.iterator(start, end, true)
+}
+
+/ ReverseIterator implements the KVStore interface. It delegates the
+/ ReverseIterator call to the parent KVStore.
+func (s *Store)
+
+ReverseIterator(start, end []byte)
+
+types.Iterator {
+	return s.iterator(start, end, false)
+}
+
+/ iterator facilitates iteration over a KVStore. It delegates the necessary
+/ calls to its parent KVStore.
+func (s *Store) + +iterator(start, end []byte, ascending bool) + +types.Iterator { + var parent types.Iterator + if ascending { + parent = s.parent.Iterator(start, end) +} + +else { + parent = s.parent.ReverseIterator(start, end) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/listenkv/store.go#L11-L18) +return newTraceIterator(parent, s.listener) +} + +type listenIterator struct { + parent types.Iterator + listener *types.MemoryListener +} + +func newTraceIterator(parent types.Iterator, listener *types.MemoryListener) + +types.Iterator { + return &listenIterator{ + parent: parent, listener: listener +} +} + +/ Domain implements the Iterator interface. +func (li *listenIterator) + +Domain() (start, end []byte) { + return li.parent.Domain() +} + +/ Valid implements the Iterator interface. +func (li *listenIterator) + +Valid() + +bool { + return li.parent.Valid() +} + +/ Next implements the Iterator interface. +func (li *listenIterator) + +Next() { + li.parent.Next() +} + +/ Key implements the Iterator interface. +func (li *listenIterator) + +Key() []byte { + key := li.parent.Key() + +return key +} + +/ Value implements the Iterator interface. +func (li *listenIterator) + +Value() []byte { + value := li.parent.Value() + +return value +} + +/ Close implements the Iterator interface. +func (li *listenIterator) + +Close() + +error { + return li.parent.Close() +} + +/ Error delegates the Error call to the parent iterator. +func (li *listenIterator) + +Error() + +error { + return li.parent.Error() +} + +/ GetStoreType implements the KVStore interface. It returns the underlying +/ KVStore type. +func (s *Store) + +GetStoreType() + +types.StoreType { + return s.parent.GetStoreType() +} + +/ CacheWrap implements the KVStore interface. It panics as a Store +/ cannot be cache wrapped. 
+func (s *Store) + +CacheWrap() + +types.CacheWrap { + panic("cannot CacheWrap a ListenKVStore") +} + +/ CacheWrapWithTrace implements the KVStore interface. It panics as a +/ Store cannot be cache wrapped. +func (s *Store) + +CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) + +types.CacheWrap { + panic("cannot CacheWrapWithTrace a ListenKVStore") +} +``` When `KVStore.Set` or `KVStore.Delete` methods are called, `listenkv.Store` automatically writes the operations to the set of `Store.listeners`. -## `BasicKVStore` interface[​](#basickvstore-interface "Direct link to basickvstore-interface") +## `BasicKVStore` interface An interface providing only the basic CRUD functionality (`Get`, `Set`, `Has`, and `Delete` methods), without iteration or caching. This is used to partially expose components of a larger store. diff --git a/docs/sdk/v0.50/learn/advanced/telemetry.mdx b/docs/sdk/v0.50/learn/advanced/telemetry.mdx index 5172b59b..003319ca 100644 --- a/docs/sdk/v0.50/learn/advanced/telemetry.mdx +++ b/docs/sdk/v0.50/learn/advanced/telemetry.mdx @@ -1,62 +1,81 @@ --- -title: "Telemetry" -description: "Version: v0.50" +title: Telemetry --- - - Gather relevant insights about your application and modules with custom metrics and telemetry. - +## Synopsis -The Cosmos SDK enables operators and developers to gain insight into the performance and behavior of their application through the use of the `telemetry` package. To enable telemetrics, set `telemetry.enabled = true` in the app.toml config file. +Gather relevant insights about your application and modules with custom metrics and telemetry. + +The Cosmos SDK enables operators and developers to gain insight into the performance and behavior of +their application through the use of the `telemetry` package. To enable telemetrics, set `telemetry.enabled = true` in the app.toml config file. The Cosmos SDK currently supports enabling in-memory and prometheus as telemetry sinks. 
In-memory sink is always attached (when the telemetry is enabled) with 10 second interval and 1 minute retention. This means that metrics will be aggregated over 10 seconds, and metrics will be kept alive for 1 minute. To query active metrics (see retention note above) you have to enable API server (`api.enabled = true` in the app.toml). Single API endpoint is exposed: `http://localhost:1317/metrics?format={text|prometheus}`, the default being `text`. -## Emitting metrics[​](#emitting-metrics "Direct link to Emitting metrics") +## Emitting metrics -If telemetry is enabled via configuration, a single global metrics collector is registered via the [go-metrics](https://github.com/hashicorp/go-metrics) library. This allows emitting and collecting metrics through simple [API](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/telemetry/wrapper.go). Example: +If telemetry is enabled via configuration, a single global metrics collector is registered via the +[go-metrics](https://github.com/hashicorp/go-metrics) library. This allows emitting and collecting +metrics through simple [API](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/telemetry/wrapper.go). Example: -``` -func EndBlocker(ctx sdk.Context, k keeper.Keeper) { defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) // ...} +```go +func EndBlocker(ctx sdk.Context, k keeper.Keeper) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + + / ... +} ``` -Developers may use the `telemetry` package directly, which provides wrappers around metric APIs that include adding useful labels, or they must use the `go-metrics` library directly. It is preferable to add as much context and adequate dimensionality to metrics as possible, so the `telemetry` package is advised. 
Regardless of the package or method used, the Cosmos SDK supports the following metrics types:

+Developers may use the `telemetry` package directly, which provides wrappers around metric APIs
+that include adding useful labels, or they may use the `go-metrics` library directly. It is preferable
+to add as much context and adequate dimensionality to metrics as possible, so the `telemetry` package
+is advised. Regardless of the package or method used, the Cosmos SDK supports the following metrics
+types:

-* gauges
-* summaries
-* counters

+- gauges
+- summaries
+- counters

-## Labels[​](#labels "Direct link to Labels")
+## Labels

-Certain components of modules will have their name automatically added as a label (e.g. `BeginBlock`). Operators may also supply the application with a global set of labels that will be applied to all metrics emitted using the `telemetry` package (e.g. chain-id). Global labels are supplied as a list of \[name, value] tuples.
+Certain components of modules will have their name automatically added as a label (e.g. `BeginBlock`).
+Operators may also supply the application with a global set of labels that will be applied to all
+metrics emitted using the `telemetry` package (e.g. chain-id). Global labels are supplied as a list
+of \[name, value] tuples.

Example:

-```
-global-labels = [ ["chain_id", "chain-OfXo4V"],]
+```toml
+global-labels = [
+  ["chain_id", "chain-OfXo4V"],
+]
```

-## Cardinality[​](#cardinality "Direct link to Cardinality")
+## Cardinality

-Cardinality is key, specifically label and key cardinality. Cardinality is how many unique values of something there are. So there is naturally a tradeoff between granularity and how much stress is put on the telemetry sink in terms of indexing, scrape, and query performance.
+Cardinality is key, specifically label and key cardinality. Cardinality is how many unique values of
+something there are.
So there is naturally a tradeoff between granularity and how much stress is put +on the telemetry sink in terms of indexing, scrape, and query performance. -Developers should take care to support metrics with enough dimensionality and granularity to be useful, but not increase the cardinality beyond the sink's limits. A general rule of thumb is to not exceed a cardinality of 10. +Developers should take care to support metrics with enough dimensionality and granularity to be +useful, but not increase the cardinality beyond the sink's limits. A general rule of thumb is to not +exceed a cardinality of 10. Consider the following examples with enough granularity and adequate cardinality: -* begin/end blocker time -* tx gas used -* block gas used -* amount of tokens minted -* amount of accounts created +- begin/end blocker time +- tx gas used +- block gas used +- amount of tokens minted +- amount of accounts created The following examples expose too much cardinality and may not even prove to be useful: -* transfers between accounts with amount -* voting/deposit amount from unique addresses +- transfers between accounts with amount +- voting/deposit amount from unique addresses -## Supported Metrics[​](#supported-metrics "Direct link to Supported Metrics") +## Supported Metrics | Metric | Description | Unit | Type | | :------------------------------ | :---------------------------------------------------------------------------------------- | :-------------- | :------ | diff --git a/docs/sdk/v0.50/learn/advanced/transactions.mdx b/docs/sdk/v0.50/learn/advanced/transactions.mdx index 6e58c06a..0c1728fd 100644 --- a/docs/sdk/v0.50/learn/advanced/transactions.mdx +++ b/docs/sdk/v0.50/learn/advanced/transactions.mdx @@ -1,231 +1,1380 @@ --- -title: "Transactions" -description: "Version: v0.50" +title: Transactions --- - - `Transactions` are objects created by end-users to trigger state changes in the application. 
- +## Synopsis + +`Transactions` are objects created by end-users to trigger state changes in the application. - * [Anatomy of a Cosmos SDK Application](/v0.50/learn/beginner/app-anatomy) +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.50/learn/beginner/app-anatomy) + -## Transactions[​](#transactions-1 "Direct link to Transactions") +## Transactions -Transactions are comprised of metadata held in [contexts](/v0.50/learn/advanced/context) and [`sdk.Msg`s](/v0.50/build/building-modules/messages-and-queries) that trigger state changes within a module through the module's Protobuf [`Msg` service](/v0.50/build/building-modules/msg-services). +Transactions are comprised of metadata held in [contexts](/docs/sdk/v0.50/learn/advanced/context) and [`sdk.Msg`s](/docs/sdk/v0.50/documentation/module-system/messages-and-queries) that trigger state changes within a module through the module's Protobuf [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services). -When users want to interact with an application and make state changes (e.g. sending coins), they create transactions. Each of a transaction's `sdk.Msg` must be signed using the private key associated with the appropriate account(s), before the transaction is broadcasted to the network. A transaction must then be included in a block, validated, and approved by the network through the consensus process. To read more about the lifecycle of a transaction, click [here](/v0.50/learn/beginner/tx-lifecycle). +When users want to interact with an application and make state changes (e.g. sending coins), they create transactions. Each of a transaction's `sdk.Msg` must be signed using the private key associated with the appropriate account(s), before the transaction is broadcasted to the network. A transaction must then be included in a block, validated, and approved by the network through the consensus process. 
To read more about the lifecycle of a transaction, click [here](/docs/sdk/v0.50/learn/beginner/tx-lifecycle). -## Type Definition[​](#type-definition "Direct link to Type Definition") +## Type Definition Transaction objects are Cosmos SDK types that implement the `Tx` interface -types/tx\_msg.go +```go expandable +package types + +import ( + + "encoding/json" + fmt "fmt" + strings "strings" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "github.com/cosmos/cosmos-sdk/codec" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) -``` -loading... -``` +type ( + / Msg defines the interface a transaction message needed to fulfill. + Msg = proto.Message -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/tx_msg.go#L51-L56) + / LegacyMsg defines the interface a transaction message needed to fulfill up through + / v0.47. + LegacyMsg interface { + Msg + + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / HasMsgs defines an interface a transaction must fulfill. + HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. 
+ GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() + +string +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / HasValidateBasic defines a type that has a ValidateBasic method. + / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. +func MsgTypeURL(msg proto.Message) + +string { + if m, ok := msg.(protov2.Message); ok { + return "/" + string(m.ProtoReflect().Descriptor().FullName()) +} + +return "/" + proto.MessageName(msg) +} + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. 
"cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} +``` It contains the following methods: -* **GetMsgs:** unwraps the transaction and returns a list of contained `sdk.Msg`s - one transaction may have one or multiple messages, which are defined by module developers. -* **ValidateBasic:** lightweight, [*stateless*](/v0.50/learn/beginner/tx-lifecycle#types-of-checks) checks used by ABCI messages [`CheckTx`](/v0.50/learn/advanced/baseapp#checktx) and [`DeliverTx`](/v0.50/learn/advanced/baseapp#delivertx) to make sure transactions are not invalid. For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth) module's `ValidateBasic` function checks that its transactions are signed by the correct number of signers and that the fees do not exceed what the user's maximum. When [`runTx`](/v0.50/learn/advanced/baseapp#runtx) is checking a transaction created from the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth/spec) module, it first runs `ValidateBasic` on each message, then runs the `auth` module AnteHandler which calls `ValidateBasic` for the transaction itself. +- **GetMsgs:** unwraps the transaction and returns a list of contained `sdk.Msg`s - one transaction may have one or multiple messages, which are defined by module developers. +- **ValidateBasic:** lightweight, [_stateless_](/docs/sdk/v0.50/learn/beginner/tx-lifecycle#types-of-checks) checks used by ABCI messages [`CheckTx`](/docs/sdk/v0.50/learn/advanced/baseapp#checktx) and [`DeliverTx`](/docs/sdk/v0.50/learn/advanced/baseapp#delivertx) to make sure transactions are not invalid. 
For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth) module's `ValidateBasic` function checks that its transactions are signed by the correct number of signers and that the fees do not exceed what the user's maximum. When [`runTx`](/docs/sdk/v0.50/learn/advanced/baseapp#runtx) is checking a transaction created from the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth/spec) module, it first runs `ValidateBasic` on each message, then runs the `auth` module AnteHandler which calls `ValidateBasic` for the transaction itself. - This function is different from the deprecated `sdk.Msg` [`ValidateBasic`](/v0.50/learn/beginner/tx-lifecycle#ValidateBasic) methods, which was performing basic validity checks on messages only. + This function is different from the deprecated `sdk.Msg` + [`ValidateBasic`](/docs/sdk/v0.50/learn/beginner/tx-lifecycle#ValidateBasic) + methods, which was performing basic validity checks on messages only. As a developer, you should rarely manipulate `Tx` directly, as `Tx` is really an intermediate type used for transaction generation. Instead, developers should prefer the `TxBuilder` interface, which you can learn more about [below](#transaction-generation). -### Signing Transactions[​](#signing-transactions "Direct link to Signing Transactions") +### Signing Transactions Every message in a transaction must be signed by the addresses specified by its `GetSigners`. The Cosmos SDK currently allows signing transactions in two different ways. -#### `SIGN_MODE_DIRECT` (preferred)[​](#sign_mode_direct-preferred "Direct link to sign_mode_direct-preferred") +#### `SIGN_MODE_DIRECT` (preferred) The most used implementation of the `Tx` interface is the Protobuf `Tx` message, which is used in `SIGN_MODE_DIRECT`: -proto/cosmos/tx/v1beta1/tx.proto +```protobuf -``` -loading... +// Tx is the standard type used for broadcasting transactions. 
+message Tx {
+  // body is the processable content of the transaction
+  TxBody body = 1;
+
+  // auth_info is the authorization related content of the transaction,
+  // specifically signers, signer modes and fee
+  AuthInfo auth_info = 2;
+
+  // signatures is a list of signatures that matches the length and order of
+  // AuthInfo's signer_infos to allow connecting signature meta information like
+  // public key and signing mode by position.
+  repeated bytes signatures = 3;
+}
```

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/tx/v1beta1/tx.proto#L13-L26)

+Because Protobuf serialization is not deterministic, the Cosmos SDK uses an additional `TxRaw` type to denote the pinned bytes over which a transaction is signed. Any user can generate a valid `body` and `auth_info` for a transaction, and serialize these two messages using Protobuf. `TxRaw` then pins the user's exact binary representation of `body` and `auth_info`, called respectively `body_bytes` and `auth_info_bytes`. The document that is signed by all signers of the transaction is `SignDoc` (deterministically serialized using [ADR-027](/docs/common/pages/adr-comprehensive#adr-027-deterministic-protobuf-serialization)):

-Because Protobuf serialization is not deterministic, the Cosmos SDK uses an additional `TxRaw` type to denote the pinned bytes over which a transaction is signed. Any user can generate a valid `body` and `auth_info` for a transaction, and serialize these two messages using Protobuf. `TxRaw` then pins the user's exact binary representation of `body` and `auth_info`, called respectively `body_bytes` and `auth_info_bytes`. The document that is signed by all signers of the transaction is `SignDoc` (deterministically serialized using [ADR-027](/v0.50/build/architecture/adr-027-deterministic-protobuf-serialization)):

+```protobuf

-proto/cosmos/tx/v1beta1/tx.proto

+// SignDoc is the type used for generating sign bytes for SIGN_MODE_DIRECT.
+message SignDoc {
+  // body_bytes is protobuf serialization of a TxBody that matches the
+  // representation in TxRaw.
+  bytes body_bytes = 1;

-```
-loading...
-```

+  // auth_info_bytes is a protobuf serialization of an AuthInfo that matches the
+  // representation in TxRaw.
+  bytes auth_info_bytes = 2;

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/tx/v1beta1/tx.proto#L48-L65)

+  // chain_id is the unique identifier of the chain this transaction targets.
+  // It prevents signed transactions from being used on another chain by an
+  // attacker
+  string chain_id = 3;
+
+  // account_number is the account number of the account in state
+  uint64 account_number = 4;
+}
+```

Once signed by all signers, the `body_bytes`, `auth_info_bytes` and `signatures` are gathered into `TxRaw`, whose serialized bytes are broadcasted over the network.

-#### `SIGN_MODE_LEGACY_AMINO_JSON`[​](#sign_mode_legacy_amino_json "Direct link to sign_mode_legacy_amino_json")
+#### `SIGN_MODE_LEGACY_AMINO_JSON`

The legacy implementation of the `Tx` interface is the `StdTx` struct from `x/auth`:

-x/auth/migrations/legacytx/stdtx.go

+```go expandable
+package legacytx
+
+import (
+
+	errorsmod "cosmossdk.io/errors"
+	"cosmossdk.io/math"
+	"github.com/cosmos/cosmos-sdk/codec/legacy"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	"github.com/cosmos/cosmos-sdk/types/tx"
+	"github.com/cosmos/cosmos-sdk/types/tx/signing"
+)
+
+/ Interface implementation checks
+var (
+	_ codectypes.UnpackInterfacesMessage = (*StdTx)(nil)
+
+	_ codectypes.UnpackInterfacesMessage = (*StdSignature)(nil)
+)
+
+/ StdFee includes the amount of coins paid in fees and the maximum
+/ gas to be used by the transaction. The ratio yields an effective "gasprice",
+/ which must be above some minimum to be accepted into the mempool.
+/ [Deprecated] +type StdFee struct { + Amount sdk.Coins `json:"amount" yaml:"amount"` + Gas uint64 `json:"gas" yaml:"gas"` + Payer string `json:"payer,omitempty" yaml:"payer"` + Granter string `json:"granter,omitempty" yaml:"granter"` +} + +/ Deprecated: NewStdFee returns a new instance of StdFee +func NewStdFee(gas uint64, amount sdk.Coins) + +StdFee { + return StdFee{ + Amount: amount, + Gas: gas, +} +} + +/ GetGas returns the fee's (wanted) + +gas. +func (fee StdFee) + +GetGas() + +uint64 { + return fee.Gas +} + +/ GetAmount returns the fee's amount. +func (fee StdFee) + +GetAmount() + +sdk.Coins { + return fee.Amount +} + +/ Bytes returns the encoded bytes of a StdFee. +func (fee StdFee) + +Bytes() []byte { + if len(fee.Amount) == 0 { + fee.Amount = sdk.NewCoins() +} + +bz, err := legacy.Cdc.MarshalJSON(fee) + if err != nil { + panic(err) +} + +return bz +} + +/ GasPrices returns the gas prices for a StdFee. +/ +/ NOTE: The gas prices returned are not the true gas prices that were +/ originally part of the submitted transaction because the fee is computed +/ as fee = ceil(gasWanted * gasPrices). +func (fee StdFee) + +GasPrices() + +sdk.DecCoins { + return sdk.NewDecCoinsFromCoins(fee.Amount...).QuoDec(math.LegacyNewDec(int64(fee.Gas))) +} + +/ StdTip is the tips used in a tipped transaction. +type StdTip struct { + Amount sdk.Coins `json:"amount" yaml:"amount"` + Tipper string `json:"tipper" yaml:"tipper"` +} + +/ StdTx is the legacy transaction format for wrapping a Msg with Fee and Signatures. +/ It only works with Amino, please prefer the new protobuf Tx in types/tx. +/ NOTE: the first signature is the fee payer (Signatures must not be nil). 
+/ Deprecated +type StdTx struct { + Msgs []sdk.Msg `json:"msg" yaml:"msg"` + Fee StdFee `json:"fee" yaml:"fee"` + Signatures []StdSignature `json:"signatures" yaml:"signatures"` + Memo string `json:"memo" yaml:"memo"` + TimeoutHeight uint64 `json:"timeout_height" yaml:"timeout_height"` +} + +/ Deprecated +func NewStdTx(msgs []sdk.Msg, fee StdFee, sigs []StdSignature, memo string) + +StdTx { + return StdTx{ + Msgs: msgs, + Fee: fee, + Signatures: sigs, + Memo: memo, +} +} + +/ GetMsgs returns the all the transaction's messages. +func (tx StdTx) + +GetMsgs() []sdk.Msg { + return tx.Msgs +} + +/ Deprecated: AsAny implements intoAny. It doesn't work for protobuf serialization, +/ so it can't be saved into protobuf configured storage. We are using it only for API +/ compatibility. +func (tx *StdTx) -``` -loading... -``` +AsAny() *codectypes.Any { + return codectypes.UnsafePackAny(tx) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/migrations/legacytx/stdtx.go#L83-L90) +/ GetMemo returns the memo +func (tx StdTx) -The document signed by all signers is `StdSignDoc`: +GetMemo() -x/auth/migrations/legacytx/stdsign.go +string { + return tx.Memo +} -``` -loading... +/ GetTimeoutHeight returns the transaction's timeout height (if set). +func (tx StdTx) + +GetTimeoutHeight() + +uint64 { + return tx.TimeoutHeight +} + +/ GetSignatures returns the signature of signers who signed the Msg. +/ CONTRACT: Length returned is same as length of +/ pubkeys returned from MsgKeySigners, and the order +/ matches. +/ CONTRACT: If the signature is missing (ie the Msg is +/ invalid), then the corresponding signature is +/ .Empty(). 
+func (tx StdTx) + +GetSignatures() [][]byte { + sigs := make([][]byte, len(tx.Signatures)) + for i, stdSig := range tx.Signatures { + sigs[i] = stdSig.Signature +} + +return sigs +} + +/ GetSignaturesV2 implements SigVerifiableTx.GetSignaturesV2 +func (tx StdTx) + +GetSignaturesV2() ([]signing.SignatureV2, error) { + res := make([]signing.SignatureV2, len(tx.Signatures)) + for i, sig := range tx.Signatures { + var err error + res[i], err = StdSignatureToSignatureV2(legacy.Cdc, sig) + if err != nil { + return nil, errorsmod.Wrapf(err, "Unable to convert signature %v to V2", sig) +} + +} + +return res, nil +} + +/ GetPubkeys returns the pubkeys of signers if the pubkey is included in the signature +/ If pubkey is not included in the signature, then nil is in the slice instead +func (tx StdTx) + +GetPubKeys() ([]cryptotypes.PubKey, error) { + pks := make([]cryptotypes.PubKey, len(tx.Signatures)) + for i, stdSig := range tx.Signatures { + pks[i] = stdSig.GetPubKey() +} + +return pks, nil +} + +/ GetGas returns the Gas in StdFee +func (tx StdTx) + +GetGas() + +uint64 { + return tx.Fee.Gas +} + +/ GetFee returns the FeeAmount in StdFee +func (tx StdTx) + +GetFee() + +sdk.Coins { + return tx.Fee.Amount +} + +/ FeeGranter always returns nil for StdTx +func (tx StdTx) + +FeeGranter() + +sdk.AccAddress { + return nil +} + +/ GetTip always returns nil for StdTx +func (tx StdTx) + +GetTip() *tx.Tip { + return nil +} + +func (tx StdTx) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + for _, m := range tx.Msgs { + err := codectypes.UnpackInterfaces(m, unpacker) + if err != nil { + return err +} + +} + + / Signatures contain PubKeys, which need to be unpacked. 
+ for _, s := range tx.Signatures { + err := s.UnpackInterfaces(unpacker) + if err != nil { + return err +} + +} + +return nil +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/migrations/legacytx/stdsign.go#L31-L45) +The document signed by all signers is `StdSignDoc`: + +```go expandable +package legacytx + +import ( + + "encoding/json" + "fmt" + "sigs.k8s.io/yaml" + "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/legacy" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/crypto/types/multisig" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ LegacyMsg defines the old interface a message must fulfill, +/ containing Amino signing method. +/ Deprecated: Please use `Msg` instead. +type LegacyMsg interface { + sdk.Msg + + / Get the canonical byte representation of the Msg. + GetSignBytes() []byte +} + +/ StdSignDoc is replay-prevention structure. +/ It includes the result of msg.GetSignBytes(), +/ as well as the ChainID (prevent cross chain replay) +/ and the Sequence numbers for each signature (prevent +/ inchain replay and enforce tx ordering per account). +type StdSignDoc struct { + AccountNumber uint64 `json:"account_number" yaml:"account_number"` + Sequence uint64 `json:"sequence" yaml:"sequence"` + TimeoutHeight uint64 `json:"timeout_height,omitempty" yaml:"timeout_height"` + ChainID string `json:"chain_id" yaml:"chain_id"` + Memo string `json:"memo" yaml:"memo"` + Fee json.RawMessage `json:"fee" yaml:"fee"` + Msgs []json.RawMessage `json:"msgs" yaml:"msgs"` + Tip *StdTip `json:"tip,omitempty" yaml:"tip"` +} + +var RegressionTestingAminoCodec *codec.LegacyAmino + +/ StdSignBytes returns the bytes to sign for a transaction. 
+/ Deprecated: Please use x/tx/signing/aminojson instead. +func StdSignBytes(chainID string, accnum, sequence, timeout uint64, fee StdFee, msgs []sdk.Msg, memo string, tip *tx.Tip) []byte { + if RegressionTestingAminoCodec == nil { + panic(fmt.Errorf("must set RegressionTestingAminoCodec before calling StdSignBytes")) +} + msgsBytes := make([]json.RawMessage, 0, len(msgs)) + for _, msg := range msgs { + bz := RegressionTestingAminoCodec.MustMarshalJSON(msg) + +msgsBytes = append(msgsBytes, sdk.MustSortJSON(bz)) +} + +var stdTip *StdTip + if tip != nil { + if tip.Tipper == "" { + panic(fmt.Errorf("tipper cannot be empty")) +} + +stdTip = &StdTip{ + Amount: tip.Amount, + Tipper: tip.Tipper +} + +} + +bz, err := legacy.Cdc.MarshalJSON(StdSignDoc{ + AccountNumber: accnum, + ChainID: chainID, + Fee: json.RawMessage(fee.Bytes()), + Memo: memo, + Msgs: msgsBytes, + Sequence: sequence, + TimeoutHeight: timeout, + Tip: stdTip, +}) + if err != nil { + panic(err) +} + +return sdk.MustSortJSON(bz) +} + +/ Deprecated: StdSignature represents a sig +type StdSignature struct { + cryptotypes.PubKey `json:"pub_key" yaml:"pub_key"` / optional + Signature []byte `json:"signature" yaml:"signature"` +} + +/ Deprecated +func NewStdSignature(pk cryptotypes.PubKey, sig []byte) + +StdSignature { + return StdSignature{ + PubKey: pk, + Signature: sig +} +} + +/ GetSignature returns the raw signature bytes. +func (ss StdSignature) + +GetSignature() []byte { + return ss.Signature +} + +/ GetPubKey returns the public key of a signature as a cryptotypes.PubKey using the +/ Amino codec. +func (ss StdSignature) + +GetPubKey() + +cryptotypes.PubKey { + return ss.PubKey +} + +/ MarshalYAML returns the YAML representation of the signature. 
+func (ss StdSignature) + +MarshalYAML() (interface{ +}, error) { + pk := "" + if ss.PubKey != nil { + pk = ss.PubKey.String() +} + +bz, err := yaml.Marshal(struct { + PubKey string `json:"pub_key"` + Signature string `json:"signature"` +}{ + pk, + fmt.Sprintf("%X", ss.Signature), +}) + if err != nil { + return nil, err +} + +return string(bz), err +} + +func (ss StdSignature) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + return codectypes.UnpackInterfaces(ss.PubKey, unpacker) +} + +/ StdSignatureToSignatureV2 converts a StdSignature to a SignatureV2 +func StdSignatureToSignatureV2(cdc *codec.LegacyAmino, sig StdSignature) (signing.SignatureV2, error) { + pk := sig.GetPubKey() + +data, err := pubKeySigToSigData(cdc, pk, sig.Signature) + if err != nil { + return signing.SignatureV2{ +}, err +} + +return signing.SignatureV2{ + PubKey: pk, + Data: data, +}, nil +} + +func pubKeySigToSigData(cdc *codec.LegacyAmino, key cryptotypes.PubKey, sig []byte) (signing.SignatureData, error) { + multiPK, ok := key.(multisig.PubKey) + if !ok { + return &signing.SingleSignatureData{ + SignMode: signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON, + Signature: sig, +}, nil +} + +var multiSig multisig.AminoMultisignature + err := cdc.Unmarshal(sig, &multiSig) + if err != nil { + return nil, err +} + sigs := multiSig.Sigs + sigDatas := make([]signing.SignatureData, len(sigs)) + pubKeys := multiPK.GetPubKeys() + bitArray := multiSig.BitArray + n := multiSig.BitArray.Count() + signatures := multisig.NewMultisig(n) + sigIdx := 0 + for i := 0; i < n; i++ { + if bitArray.GetIndex(i) { + data, err := pubKeySigToSigData(cdc, pubKeys[i], multiSig.Sigs[sigIdx]) + if err != nil { + return nil, errors.Wrapf(err, "Unable to convert Signature to SigData %d", sigIdx) +} + +sigDatas[sigIdx] = data + multisig.AddSignature(signatures, data, sigIdx) + +sigIdx++ +} + +} + +return signatures, nil +} +``` which is encoded into bytes using Amino JSON. 
Once all signatures are gathered into `StdTx`, `StdTx` is serialized using Amino JSON, and these bytes are broadcasted over the network.

-#### Other Sign Modes[​](#other-sign-modes "Direct link to Other Sign Modes")
+#### Other Sign Modes

The Cosmos SDK also provides a couple of other sign modes for particular use cases.

-#### `SIGN_MODE_DIRECT_AUX`[​](#sign_mode_direct_aux "Direct link to sign_mode_direct_aux")
+#### `SIGN_MODE_DIRECT_AUX`

-`SIGN_MODE_DIRECT_AUX` is a sign mode released in the Cosmos SDK v0.46 which targets transactions with multiple signers. Whereas `SIGN_MODE_DIRECT` expects each signer to sign over both `TxBody` and `AuthInfo` (which includes all other signers' signer infos, i.e. their account sequence, public key and mode info), `SIGN_MODE_DIRECT_AUX` allows N-1 signers to only sign over `TxBody` and *their own* signer info. Morever, each auxiliary signer (i.e. a signer using `SIGN_MODE_DIRECT_AUX`) doesn't need to sign over the fees:
+`SIGN_MODE_DIRECT_AUX` is a sign mode released in the Cosmos SDK v0.46 which targets transactions with multiple signers. Whereas `SIGN_MODE_DIRECT` expects each signer to sign over both `TxBody` and `AuthInfo` (which includes all other signers' signer infos, i.e. their account sequence, public key and mode info), `SIGN_MODE_DIRECT_AUX` allows N-1 signers to only sign over `TxBody` and _their own_ signer info. Moreover, each auxiliary signer (i.e. a signer using `SIGN_MODE_DIRECT_AUX`) doesn't
+need to sign over the fees:

-proto/cosmos/tx/v1beta1/tx.proto

+```protobuf

-```
-loading...
-```

+// SignDocDirectAux is the type used for generating sign bytes for
+// SIGN_MODE_DIRECT_AUX.
+//
+// Since: cosmos-sdk 0.46
+message SignDocDirectAux {
+  // body_bytes is protobuf serialization of a TxBody that matches the
+  // representation in TxRaw.
+  bytes body_bytes = 1;
+
+  // public_key is the public key of the signing account.
+ google.protobuf.Any public_key = 2; + + // chain_id is the identifier of the chain this transaction targets. + // It prevents signed transactions from being used on another chain by an + // attacker. + string chain_id = 3; + + // account_number is the account number of the account in state. + uint64 account_number = 4; -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/tx/v1beta1/tx.proto#L67-L98) + // sequence is the sequence number of the signing account. + uint64 sequence = 5; -The use case is a multi-signer transaction, where one of the signers is appointed to gather all signatures, broadcast the signature and pay for fees, and the others only care about the transaction body. This generally allows for a better multi-signing UX. If Alice, Bob and Charlie are part of a 3-signer transaction, then Alice and Bob can both use `SIGN_MODE_DIRECT_AUX` to sign over the `TxBody` and their own signer info (no need an additional step to gather other signers' ones, like in `SIGN_MODE_DIRECT`), without specifying a fee in their SignDoc. Charlie can then gather both signatures from Alice and Bob, and create the final transaction by appending a fee. Note that the fee payer of the transaction (in our case Charlie) must sign over the fees, so must use `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. + // Tip is the optional tip used for transactions fees paid in another denom. + // It should be left empty if the signer is not the tipper for this + // transaction. + // + // This field is ignored if the chain didn't enable tips, i.e. didn't add the + // `TipDecorator` in its posthandler. + Tip tip = 6; +} +``` + +The use case is a multi-signer transaction, where one of the signers is appointed to gather all signatures, broadcast the signature and pay for fees, and the others only care about the transaction body. This generally allows for a better multi-signing UX. 
If Alice, Bob and Charlie are part of a 3-signer transaction, then Alice and Bob can both use `SIGN_MODE_DIRECT_AUX` to sign over the `TxBody` and their own signer info (no need for an additional step to gather other signers' ones, like in `SIGN_MODE_DIRECT`), without specifying a fee in their SignDoc. Charlie can then gather both signatures from Alice and Bob, and
+create the final transaction by appending a fee. Note that the fee payer of the transaction (in our case Charlie) must sign over the fees, so must use `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`.

-#### `SIGN_MODE_TEXTUAL`[​](#sign_mode_textual "Direct link to sign_mode_textual")
+#### `SIGN_MODE_TEXTUAL`

`SIGN_MODE_TEXTUAL` is a new sign mode for delivering a better signing experience on hardware wallets and it is included in the v0.50 release. In this mode, the signer signs over the human-readable string representation of the transaction (CBOR) and makes all data being displayed easier to read. The data is formatted as screens, and each screen is meant to be displayed in its entirety even on small devices like the Ledger Nano.

-There are also *expert* screens, which will only be displayed if the user has chosen that option in its hardware device. These screens contain things like account number, account sequence and the sign data hash.
+There are also _expert_ screens, which will only be displayed if the user has chosen that option on their hardware device. These screens contain things like account number, account sequence and the sign data hash.

Data is formatted using a set of `ValueRenderer` which the SDK provides defaults for all the known messages and value types. Chain developers can also opt to implement their own `ValueRenderer` for a type/message if they'd like to display information differently.

-If you wish to learn more, please refer to [ADR-050](/v0.50/build/architecture/adr-050-sign-mode-textual).
+If you wish to learn more, please refer to [ADR-050](/docs/common/pages/adr-comprehensive#adr-050-sign_mode_textual). -#### Custom Sign modes[​](#custom-sign-modes "Direct link to Custom Sign modes") +#### Custom Sign modes There is the opportunity to add your own custom sign mode to the Cosmos-SDK. While we can not accept the implementation of the sign mode to the repository, we can accept a pull request to add the custom signmode to the SignMode enum located [here](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/tx/signing/v1beta1/signing.proto#L17) -## Transaction Process[​](#transaction-process "Direct link to Transaction Process") +## Transaction Process The process of an end-user sending a transaction is: -* decide on the messages to put into the transaction, -* generate the transaction using the Cosmos SDK's `TxBuilder`, -* broadcast the transaction using one of the available interfaces. +- decide on the messages to put into the transaction, +- generate the transaction using the Cosmos SDK's `TxBuilder`, +- broadcast the transaction using one of the available interfaces. The next paragraphs will describe each of these components, in this order. -### Messages[​](#messages "Direct link to Messages") +### Messages - - Module `sdk.Msg`s are not to be confused with [ABCI Messages](https://docs.cometbft.com/v0.37/spec/abci/) which define interactions between the CometBFT and application layers. - + + Module `sdk.Msg`s are not to be confused with [ABCI + Messages](https://docs.cometbft.com/v0.37/spec/abci/) which define + interactions between the CometBFT and application layers. + -**Messages** (or `sdk.Msg`s) are module-specific objects that trigger state transitions within the scope of the module they belong to. Module developers define the messages for their module by adding methods to the Protobuf [`Msg` service](/v0.50/build/building-modules/msg-services), and also implement the corresponding `MsgServer`. 
+**Messages** (or `sdk.Msg`s) are module-specific objects that trigger state transitions within the scope of the module they belong to. Module developers define the messages for their module by adding methods to the Protobuf [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services), and also implement the corresponding `MsgServer`.

-Each `sdk.Msg`s is related to exactly one Protobuf [`Msg` service](/v0.50/build/building-modules/msg-services) RPC, defined inside each module's `tx.proto` file. A SDK app router automatically maps every `sdk.Msg` to a corresponding RPC. Protobuf generates a `MsgServer` interface for each module `Msg` service, and the module developer needs to implement this interface. This design puts more responsibility on module developers, allowing application developers to reuse common functionalities without having to implement state transition logic repetitively.
+Each `sdk.Msg` is related to exactly one Protobuf [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services) RPC, defined inside each module's `tx.proto` file. An SDK app router automatically maps every `sdk.Msg` to a corresponding RPC. Protobuf generates a `MsgServer` interface for each module `Msg` service, and the module developer needs to implement this interface.
+This design puts more responsibility on module developers, allowing application developers to reuse common functionalities without having to implement state transition logic repetitively.

-To learn more about Protobuf `Msg` services and how to implement `MsgServer`, click [here](/v0.50/build/building-modules/msg-services).
+To learn more about Protobuf `Msg` services and how to implement `MsgServer`, click [here](/docs/sdk/v0.50/documentation/module-system/msg-services).

While messages contain the information for state transition logic, a transaction's other metadata and relevant information are stored in the `TxBuilder` and `Context`.
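The mapping from each `sdk.Msg` to its `Msg` service RPC can be sketched as a plain routing table. This is a toy illustration only — the real SDK routes protobuf messages by their type URL to generated `MsgServer` methods, not strings to closures — but it shows the shape of the type-URL-to-handler lookup:

```go
package main

import "fmt"

// Handler stands in for a module's MsgServer method.
type Handler func(msg any) (string, error)

// Router is a toy model of the app-side message router: it maps a
// message's type URL to the handler registered for it.
type Router struct {
	routes map[string]Handler
}

func NewRouter() *Router {
	return &Router{routes: make(map[string]Handler)}
}

func (r *Router) Register(typeURL string, h Handler) {
	r.routes[typeURL] = h
}

// Dispatch looks up the handler for a type URL, mirroring how each
// sdk.Msg is mapped to exactly one Msg service RPC.
func (r *Router) Dispatch(typeURL string, msg any) (string, error) {
	h, ok := r.routes[typeURL]
	if !ok {
		return "", fmt.Errorf("unroutable message: %s", typeURL)
	}
	return h(msg)
}

func main() {
	r := NewRouter()
	r.Register("/cosmos.bank.v1beta1.MsgSend", func(msg any) (string, error) {
		return "bank MsgServer handled MsgSend", nil
	})

	res, err := r.Dispatch("/cosmos.bank.v1beta1.MsgSend", nil)
	fmt.Println(res, err)
}
```

In the real SDK this registration happens once per module via the generated `RegisterMsgServiceServer` helpers, so application developers never wire handlers by hand.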
-### Transaction Generation[​](#transaction-generation "Direct link to Transaction Generation") +### Transaction Generation The `TxBuilder` interface contains data closely related with the generation of transactions, which an end-user can freely set to generate the desired transaction: -client/tx\_config.go +```go expandable +package client -``` -loading... -``` +import ( + + txsigning "cosmossdk.io/x/tx/signing" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/tx_config.go#L40-L53) +sdk.TxDecoder + TxJSONEncoder() -* `Msg`s, the array of [messages](#messages) included in the transaction. -* `GasLimit`, option chosen by the users for how to calculate how much gas they will need to pay. -* `Memo`, a note or comment to send with the transaction. -* `FeeAmount`, the maximum amount the user is willing to pay in fees. -* `TimeoutHeight`, block height until which the transaction is valid. -* `Signatures`, the array of signatures from all signers of the transaction. +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. 
+ TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() *txsigning.HandlerMap + SigningContext() *txsigning.Context +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTip(tip *tx.Tip) + +SetTimeoutHeight(height uint64) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} + + / ExtendedTxBuilder extends the TxBuilder interface, + / which is used to set extension options to be included in a transaction. + ExtendedTxBuilder interface { + SetExtensionOptions(extOpts ...*codectypes.Any) +} +) +``` + +- `Msg`s, the array of [messages](#messages) included in the transaction. +- `GasLimit`, option chosen by the users for how to calculate how much gas they will need to pay. +- `Memo`, a note or comment to send with the transaction. +- `FeeAmount`, the maximum amount the user is willing to pay in fees. +- `TimeoutHeight`, block height until which the transaction is valid. +- `Signatures`, the array of signatures from all signers of the transaction. 
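To make the role of each field concrete, here is a minimal toy value type holding the same fields in plain Go. This is an illustration only — the SDK's `TxBuilder` is the interface shown above, populated through its setters over protobuf types — and the `validate` helper is hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// toyTx collects the fields listed above in plain Go form.
type toyTx struct {
	Msgs          []string // type URLs of the messages
	GasLimit      uint64
	Memo          string
	FeeAmount     string // e.g. "5000stake"
	TimeoutHeight uint64
	Signatures    [][]byte
}

// validate shows why these fields matter before broadcast: a tx needs
// at least one message and at least one signature.
func (t toyTx) validate() error {
	if len(t.Msgs) == 0 {
		return errors.New("transaction has no messages")
	}
	if len(t.Signatures) == 0 {
		return errors.New("transaction is unsigned")
	}
	return nil
}

func main() {
	tx := toyTx{
		Msgs:          []string{"/cosmos.bank.v1beta1.MsgSend"},
		GasLimit:      200000,
		Memo:          "example transfer",
		FeeAmount:     "5000stake",
		TimeoutHeight: 1_000_000,
		Signatures:    [][]byte{{0x01}},
	}
	fmt.Println(tx.validate())
}
```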
As there are currently two sign modes for signing transactions, there are also two implementations of `TxBuilder`: -* [wrapper](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/tx/builder.go#L26-L43) for creating transactions for `SIGN_MODE_DIRECT`, -* [StdTxBuilder](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/migrations/legacytx/stdtx_builder.go#L14-L17) for `SIGN_MODE_LEGACY_AMINO_JSON`. +- [wrapper](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/tx/builder.go#L26-L43) for creating transactions for `SIGN_MODE_DIRECT`, +- [StdTxBuilder](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/migrations/legacytx/stdtx_builder.go#L14-L17) for `SIGN_MODE_LEGACY_AMINO_JSON`. However, the two implementations of `TxBuilder` should be hidden away from end-users, as they should prefer using the overarching `TxConfig` interface: -client/tx\_config.go +```go expandable +package client -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/tx_config.go#L24-L34) + txsigning "cosmossdk.io/x/tx/signing" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) -`TxConfig` is an app-wide configuration for managing transactions. Most importantly, it holds the information about whether to sign each transaction with `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. By calling `txBuilder := txConfig.NewTxBuilder()`, a new `TxBuilder` will be created with the appropriate sign mode. 
+type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() -Once `TxBuilder` is correctly populated with the setters exposed above, `TxConfig` will also take care of correctly encoding the bytes (again, either using `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`). Here's a pseudo-code snippet of how to generate and encode a transaction, using the `TxEncoder()` method: +sdk.TxEncoder + TxDecoder() -``` -txBuilder := txConfig.NewTxBuilder()txBuilder.SetMsgs(...) // and other setters on txBuilderbz, err := txConfig.TxEncoder()(txBuilder.GetTx())// bz are bytes to be broadcasted over the network -``` +sdk.TxDecoder + TxJSONEncoder() -### Broadcasting the Transaction[​](#broadcasting-the-transaction "Direct link to Broadcasting the Transaction") +sdk.TxEncoder + TxJSONDecoder() -Once the transaction bytes are generated, there are currently three ways of broadcasting it. +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) -#### CLI[​](#cli "Direct link to CLI") +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} -Application developers create entry points to the application by creating a [command-line interface](/v0.50/learn/advanced/cli), [gRPC and/or REST interface](/v0.50/learn/advanced/grpc_rest), typically found in the application's `./cmd` folder. These interfaces allow users to interact with the application through command-line. + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. + TxConfig interface { + TxEncodingConfig -For the [command-line interface](/v0.50/build/building-modules/module-interfaces#cli), module developers create subcommands to add as children to the application top-level transaction command `TxCmd`. 
CLI commands actually bundle all the steps of transaction processing into one simple command: creating messages, generating transactions and broadcasting. For concrete examples, see the [Interacting with a Node](/v0.50/user/run-node/interact-node) section. An example transaction made using CLI looks like: + NewTxBuilder() -``` -simd tx send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() *txsigning.HandlerMap + SigningContext() *txsigning.Context +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTip(tip *tx.Tip) + +SetTimeoutHeight(height uint64) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} + + / ExtendedTxBuilder extends the TxBuilder interface, + / which is used to set extension options to be included in a transaction. + ExtendedTxBuilder interface { + SetExtensionOptions(extOpts ...*codectypes.Any) +} +) ``` -#### gRPC[​](#grpc "Direct link to gRPC") +`TxConfig` is an app-wide configuration for managing transactions. Most importantly, it holds the information about whether to sign each transaction with `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. By calling `txBuilder := txConfig.NewTxBuilder()`, a new `TxBuilder` will be created with the appropriate sign mode. -[gRPC](https://grpc.io) is the main component for the Cosmos SDK's RPC layer. 
Its principal usage is in the context of modules' [`Query` services](/v0.50/build/building-modules/query-services). However, the Cosmos SDK also exposes a few other module-agnostic gRPC services, one of them being the `Tx` service: +Once `TxBuilder` is correctly populated with the setters exposed above, `TxConfig` will also take care of correctly encoding the bytes (again, either using `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`). Here's a pseudo-code snippet of how to generate and encode a transaction, using the `TxEncoder()` method: -proto/cosmos/tx/v1beta1/service.proto +```go +txBuilder := txConfig.NewTxBuilder() +txBuilder.SetMsgs(...) / and other setters on txBuilder + +bz, err := txConfig.TxEncoder()(txBuilder.GetTx()) +/ bz are bytes to be broadcasted over the network ``` -loading... + +### Broadcasting the Transaction + +Once the transaction bytes are generated, there are currently three ways of broadcasting it. + +#### CLI + +Application developers create entry points to the application by creating a [command-line interface](/docs/sdk/v0.50/learn/advanced/cli), [gRPC and/or REST interface](/docs/sdk/v0.50/learn/advanced/grpc_rest), typically found in the application's `./cmd` folder. These interfaces allow users to interact with the application through command-line. + +For the [command-line interface](/docs/sdk/v0.50/documentation/module-system/module-interfaces#cli), module developers create subcommands to add as children to the application top-level transaction command `TxCmd`. CLI commands actually bundle all the steps of transaction processing into one simple command: creating messages, generating transactions and broadcasting. For concrete examples, see the [Interacting with a Node](/docs/sdk/v0.50/user/run-node/interact-node) section. 
An example transaction made using CLI looks like: + +```bash +simd tx send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/tx/v1beta1/service.proto) +#### gRPC + +[gRPC](https://grpc.io) is the main component for the Cosmos SDK's RPC layer. Its principal usage is in the context of modules' [`Query` services](/docs/sdk/v0.50/documentation/module-system/query-services). However, the Cosmos SDK also exposes a few other module-agnostic gRPC services, one of them being the `Tx` service: + +```go expandable +syntax = "proto3"; +package cosmos.tx.v1beta1; + +import "google/api/annotations.proto"; +import "cosmos/base/abci/v1beta1/abci.proto"; +import "cosmos/tx/v1beta1/tx.proto"; +import "cosmos/base/query/v1beta1/pagination.proto"; +import "tendermint/types/block.proto"; +import "tendermint/types/types.proto"; + +option go_package = "github.com/cosmos/cosmos-sdk/types/tx"; + +/ Service defines a gRPC service for interacting with transactions. +service Service { + / Simulate simulates executing a transaction for estimating gas usage. + rpc Simulate(SimulateRequest) + +returns (SimulateResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/simulate" + body: "*" +}; +} + / GetTx fetches a tx by hash. + rpc GetTx(GetTxRequest) + +returns (GetTxResponse) { + option (google.api.http).get = "/cosmos/tx/v1beta1/txs/{ + hash +}"; +} + / BroadcastTx broadcast transaction. + rpc BroadcastTx(BroadcastTxRequest) + +returns (BroadcastTxResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/txs" + body: "*" +}; +} + / GetTxsEvent fetches txs by event. + rpc GetTxsEvent(GetTxsEventRequest) + +returns (GetTxsEventResponse) { + option (google.api.http).get = "/cosmos/tx/v1beta1/txs"; +} + / GetBlockWithTxs fetches a block with decoded txs. 
+ / + / Since: cosmos-sdk 0.45.2 + rpc GetBlockWithTxs(GetBlockWithTxsRequest) + +returns (GetBlockWithTxsResponse) { + option (google.api.http).get = "/cosmos/tx/v1beta1/txs/block/{ + height +}"; +} + / TxDecode decodes the transaction. + / + / Since: cosmos-sdk 0.47 + rpc TxDecode(TxDecodeRequest) + +returns (TxDecodeResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/decode" + body: "*" +}; +} + / TxEncode encodes the transaction. + / + / Since: cosmos-sdk 0.47 + rpc TxEncode(TxEncodeRequest) + +returns (TxEncodeResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/encode" + body: "*" +}; +} + / TxEncodeAmino encodes an Amino transaction from JSON to encoded bytes. + / + / Since: cosmos-sdk 0.47 + rpc TxEncodeAmino(TxEncodeAminoRequest) + +returns (TxEncodeAminoResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/encode/amino" + body: "*" +}; +} + / TxDecodeAmino decodes an Amino transaction from encoded bytes to JSON. + / + / Since: cosmos-sdk 0.47 + rpc TxDecodeAmino(TxDecodeAminoRequest) + +returns (TxDecodeAminoResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/decode/amino" + body: "*" +}; +} +} + +/ GetTxsEventRequest is the request type for the Service.TxsByEvents +/ RPC method. +message GetTxsEventRequest { + / events is the list of transaction event type. + / Deprecated post v0.47.x: use query instead, which should contain a valid + / events query. + repeated string events = 1 [deprecated = true]; + + / pagination defines a pagination for the request. + / Deprecated post v0.46.x: use page and limit instead. + cosmos.base.query.v1beta1.PageRequest pagination = 2 [deprecated = true]; + + OrderBy order_by = 3; + + / page is the page number to query, starts at 1. If not provided, will + / default to first page. + uint64 page = 4; + + / limit is the total number of results to be returned in the result page. + / If left empty it will default to a value to be set by each app. 
+ uint64 limit = 5; + + / query defines the transaction event query that is proxied to Tendermint's + / TxSearch RPC method. The query must be valid. + / + / Since cosmos-sdk 0.50 + string query = 6; +} + +/ OrderBy defines the sorting order +enum OrderBy { + / ORDER_BY_UNSPECIFIED specifies an unknown sorting order. OrderBy defaults + / to ASC in this case. + ORDER_BY_UNSPECIFIED = 0; + / ORDER_BY_ASC defines ascending order + ORDER_BY_ASC = 1; + / ORDER_BY_DESC defines descending order + ORDER_BY_DESC = 2; +} + +/ GetTxsEventResponse is the response type for the Service.TxsByEvents +/ RPC method. +message GetTxsEventResponse { + / txs is the list of queried transactions. + repeated cosmos.tx.v1beta1.Tx txs = 1; + / tx_responses is the list of queried TxResponses. + repeated cosmos.base.abci.v1beta1.TxResponse tx_responses = 2; + / pagination defines a pagination for the response. + / Deprecated post v0.46.x: use total instead. + cosmos.base.query.v1beta1.PageResponse pagination = 3 [deprecated = true]; + / total is total number of results available + uint64 total = 4; +} + +/ BroadcastTxRequest is the request type for the Service.BroadcastTxRequest +/ RPC method. +message BroadcastTxRequest { + / tx_bytes is the raw transaction. + bytes tx_bytes = 1; + BroadcastMode mode = 2; +} + +/ BroadcastMode specifies the broadcast mode for the TxService.Broadcast RPC +/ method. +enum BroadcastMode { + / zero-value for mode ordering + BROADCAST_MODE_UNSPECIFIED = 0; + / DEPRECATED: use BROADCAST_MODE_SYNC instead, + / BROADCAST_MODE_BLOCK is not supported by the SDK from v0.47.x onwards. + BROADCAST_MODE_BLOCK = 1 [deprecated = true]; + / BROADCAST_MODE_SYNC defines a tx broadcasting mode where the client waits + / for a CheckTx execution response only. + BROADCAST_MODE_SYNC = 2; + / BROADCAST_MODE_ASYNC defines a tx broadcasting mode where the client + / returns immediately. 
+ BROADCAST_MODE_ASYNC = 3; +} + +/ BroadcastTxResponse is the response type for the +/ Service.BroadcastTx method. +message BroadcastTxResponse { + / tx_response is the queried TxResponses. + cosmos.base.abci.v1beta1.TxResponse tx_response = 1; +} + +/ SimulateRequest is the request type for the Service.Simulate +/ RPC method. +message SimulateRequest { + / tx is the transaction to simulate. + / Deprecated. Send raw tx bytes instead. + cosmos.tx.v1beta1.Tx tx = 1 [deprecated = true]; + / tx_bytes is the raw transaction. + / + / Since: cosmos-sdk 0.43 + bytes tx_bytes = 2; +} + +/ SimulateResponse is the response type for the +/ Service.SimulateRPC method. +message SimulateResponse { + / gas_info is the information about gas used in the simulation. + cosmos.base.abci.v1beta1.GasInfo gas_info = 1; + / result is the result of the simulation. + cosmos.base.abci.v1beta1.Result result = 2; +} + +/ GetTxRequest is the request type for the Service.GetTx +/ RPC method. +message GetTxRequest { + / hash is the tx hash to query, encoded as a hex string. + string hash = 1; +} + +/ GetTxResponse is the response type for the Service.GetTx method. +message GetTxResponse { + / tx is the queried transaction. + cosmos.tx.v1beta1.Tx tx = 1; + / tx_response is the queried TxResponses. + cosmos.base.abci.v1beta1.TxResponse tx_response = 2; +} + +/ GetBlockWithTxsRequest is the request type for the Service.GetBlockWithTxs +/ RPC method. +/ +/ Since: cosmos-sdk 0.45.2 +message GetBlockWithTxsRequest { + / height is the height of the block to query. + int64 height = 1; + / pagination defines a pagination for the request. + cosmos.base.query.v1beta1.PageRequest pagination = 2; +} + +/ GetBlockWithTxsResponse is the response type for the Service.GetBlockWithTxs +/ method. +/ +/ Since: cosmos-sdk 0.45.2 +message GetBlockWithTxsResponse { + / txs are the transactions in the block. 
+ repeated cosmos.tx.v1beta1.Tx txs = 1; + .tendermint.types.BlockID block_id = 2; + .tendermint.types.Block block = 3; + / pagination defines a pagination for the response. + cosmos.base.query.v1beta1.PageResponse pagination = 4; +} + +/ TxDecodeRequest is the request type for the Service.TxDecode +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxDecodeRequest { + / tx_bytes is the raw transaction. + bytes tx_bytes = 1; +} + +/ TxDecodeResponse is the response type for the +/ Service.TxDecode method. +/ +/ Since: cosmos-sdk 0.47 +message TxDecodeResponse { + / tx is the decoded transaction. + cosmos.tx.v1beta1.Tx tx = 1; +} + +/ TxEncodeRequest is the request type for the Service.TxEncode +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxEncodeRequest { + / tx is the transaction to encode. + cosmos.tx.v1beta1.Tx tx = 1; +} + +/ TxEncodeResponse is the response type for the +/ Service.TxEncode method. +/ +/ Since: cosmos-sdk 0.47 +message TxEncodeResponse { + / tx_bytes is the encoded transaction bytes. + bytes tx_bytes = 1; +} + +/ TxEncodeAminoRequest is the request type for the Service.TxEncodeAmino +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxEncodeAminoRequest { + string amino_json = 1; +} + +/ TxEncodeAminoResponse is the response type for the Service.TxEncodeAmino +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxEncodeAminoResponse { + bytes amino_binary = 1; +} + +/ TxDecodeAminoRequest is the request type for the Service.TxDecodeAmino +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxDecodeAminoRequest { + bytes amino_binary = 1; +} + +/ TxDecodeAminoResponse is the response type for the Service.TxDecodeAmino +/ RPC method. +/ +/ Since: cosmos-sdk 0.47 +message TxDecodeAminoResponse { + string amino_json = 1; +} +``` The `Tx` service exposes a handful of utility functions, such as simulating a transaction or querying a transaction, and also one method to broadcast transactions. 
-Examples of broadcasting and simulating a transaction are shown [here](/v0.50/user/run-node/txs#programmatically-with-go). +Examples of broadcasting and simulating a transaction are shown [here](/docs/sdk/v0.50/user/run-node/txs#programmatically-with-go). -#### REST[​](#rest "Direct link to REST") +#### REST Each gRPC method has its corresponding REST endpoint, generated using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). Therefore, instead of using gRPC, you can also use HTTP to broadcast the same transaction, on the `POST /cosmos/tx/v1beta1/txs` endpoint. -An example can be seen [here](/v0.50/user/run-node/txs#using-rest) +An example can be seen [here](/docs/sdk/v0.50/user/run-node/txs#using-rest) -#### CometBFT RPC[​](#cometbft-rpc "Direct link to CometBFT RPC") +#### CometBFT RPC The three methods presented above are actually higher abstractions over the CometBFT RPC `/broadcast_tx_{async,sync,commit}` endpoints, documented [here](https://docs.cometbft.com/v0.37/core/rpc). This means that you can use the CometBFT RPC endpoints directly to broadcast the transaction, if you wish so. diff --git a/docs/sdk/v0.50/learn/advanced/upgrade.mdx b/docs/sdk/v0.50/learn/advanced/upgrade.mdx index 5cdf68d2..f4852277 100644 --- a/docs/sdk/v0.50/learn/advanced/upgrade.mdx +++ b/docs/sdk/v0.50/learn/advanced/upgrade.mdx @@ -1,112 +1,167 @@ --- -title: "In-Place Store Migrations" -description: "Version: v0.50" +title: In-Place Store Migrations --- - Read and understand all the in-place store migration documentation before you run a migration on a live chain. + Read and understand all the in-place store migration documentation before you + run a migration on a live chain. - - Upgrade your app modules smoothly with custom in-place store migration logic. - +## Synopsis + +Upgrade your app modules smoothly with custom in-place store migration logic. 
The Cosmos SDK uses two methods to perform upgrades: -* Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file. +- Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file. -* Perform upgrades in place, which significantly decrease the upgrade time for chains with a larger state. Use the [Module Upgrade Guide](/v0.50/build/building-modules/upgrade) to set up your application modules to take advantage of in-place upgrades. +- Perform upgrades in place, which significantly decrease the upgrade time for chains with a larger state. Use the [Module Upgrade Guide](/docs/sdk/v0.50/documentation/application-framework/app-upgrade) to set up your application modules to take advantage of in-place upgrades. This document provides steps to use the In-Place Store Migrations upgrade method. -## Tracking Module Versions[​](#tracking-module-versions "Direct link to Tracking Module Versions") +## Tracking Module Versions Each module gets assigned a consensus version by the module developer. The consensus version serves as the breaking change version of the module. The Cosmos SDK keeps track of all module consensus versions in the x/upgrade `VersionMap` store. During an upgrade, the difference between the old `VersionMap` stored in state and the new `VersionMap` is calculated by the Cosmos SDK. For each identified difference, the module-specific migrations are run and the respective consensus version of each upgraded module is incremented. -### Consensus Version[​](#consensus-version "Direct link to Consensus Version") +### Consensus Version The consensus version is defined on each app module by the module developer and serves as the breaking change version of the module. 
The consensus version informs the Cosmos SDK on which modules need to be upgraded. For example, if the bank module was version 2 and an upgrade introduces bank module 3, the Cosmos SDK upgrades the bank module and runs the "version 2 to 3" migration script. -### Version Map[​](#version-map "Direct link to Version Map") +### Version Map The version map is a mapping of module names to consensus versions. The map is persisted to x/upgrade's state for use during in-place migrations. When migrations finish, the updated version map is persisted in the state. -## Upgrade Handlers[​](#upgrade-handlers "Direct link to Upgrade Handlers") +## Upgrade Handlers Upgrades use an `UpgradeHandler` to facilitate migrations. The `UpgradeHandler` functions implemented by the app developer must conform to the following function signature. These functions retrieve the `VersionMap` from x/upgrade's state and return the new `VersionMap` to be stored in x/upgrade after the upgrade. The diff between the two `VersionMap`s determines which modules need upgrading. -``` +```go type UpgradeHandler func(ctx sdk.Context, plan Plan, fromVM VersionMap) (VersionMap, error) ``` Inside these functions, you must perform any upgrade logic to include in the provided `plan`. All upgrade handler functions must end with the following line of code: -``` - return app.mm.RunMigrations(ctx, cfg, fromVM) +```go +return app.mm.RunMigrations(ctx, cfg, fromVM) ``` -## Running Migrations[​](#running-migrations "Direct link to Running Migrations") +## Running Migrations Migrations are run inside of an `UpgradeHandler` using `app.mm.RunMigrations(ctx, cfg, vm)`. The `UpgradeHandler` functions describe the functionality to occur during an upgrade. The `RunMigration` function loops through the `VersionMap` argument and runs the migration scripts for all versions that are less than the versions of the new binary app module. 
After the migrations are finished, a new `VersionMap` is returned to persist the upgraded module versions to state. -``` -cfg := module.NewConfigurator(...)app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // ... // additional upgrade logic // ... // returns a VersionMap with the updated module ConsensusVersions return app.mm.RunMigrations(ctx, fromVM)}) +```go +cfg := module.NewConfigurator(...) + +app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + + / ... + / additional upgrade logic + / ... + + / returns a VersionMap with the updated module ConsensusVersions + return app.mm.RunMigrations(ctx, fromVM) +}) ``` -To learn more about configuring migration scripts for your modules, see the [Module Upgrade Guide](/v0.50/build/building-modules/upgrade). +To learn more about configuring migration scripts for your modules, see the [Module Upgrade Guide](/docs/sdk/v0.50/documentation/application-framework/app-upgrade). -### Order Of Migrations[​](#order-of-migrations "Direct link to Order Of Migrations") +### Order Of Migrations By default, all migrations are run in module name alphabetical ascending order, except `x/auth` which is run last. The reason is state dependencies between x/auth and other modules (you can read more in [issue #10606](https://github.com/cosmos/cosmos-sdk/issues/10606)). If you want to change the order of migration, then you should call `app.mm.SetOrderMigrations(module1, module2, ...)` in your app.go file. The function will panic if you forget to include a module in the argument list. -## Adding New Modules During Upgrades[​](#adding-new-modules-during-upgrades "Direct link to Adding New Modules During Upgrades") +## Adding New Modules During Upgrades You can introduce entirely new modules to the application during an upgrade. 
New modules are recognized because they have not yet been registered in `x/upgrade`'s `VersionMap` store. In this case, `RunMigrations` calls the `InitGenesis` function from the corresponding module to set up its initial state. -### Add StoreUpgrades for New Modules[​](#add-storeupgrades-for-new-modules "Direct link to Add StoreUpgrades for New Modules") +### Add StoreUpgrades for New Modules All chains preparing to run in-place store migrations will need to manually add store upgrades for new modules and then configure the store loader to apply those upgrades. This ensures that the new module's stores are added to the multistore before the migrations begin. -``` -upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()if err != nil { panic(err)}if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { storeUpgrades := storetypes.StoreUpgrades{ // add store upgrades for new modules // Example: // Added: []string{"foo", "bar"}, // ... } // configure store loader that checks if version == upgradeHeight and applies store upgrades app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))} +```go expandable +upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk() + if err != nil { + panic(err) +} + if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { + storeUpgrades := storetypes.StoreUpgrades{ + / add store upgrades for new modules + / Example: + / Added: []string{"foo", "bar" +}, + / ... +} + + / configure store loader that checks if version == upgradeHeight and applies store upgrades + app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) +} ``` -## Genesis State[​](#genesis-state "Direct link to Genesis State") +## Genesis State When starting a new chain, the consensus version of each module MUST be saved to state during the application's genesis. 
To save the consensus version, add the following line to the `InitChainer` method in `app.go`: -``` -func (app *MyApp) InitChainer(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain { ...+ app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap()) ...} +```diff +func (app *MyApp) InitChainer(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain { + ... ++ app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap()) + ... +} ``` This information is used by the Cosmos SDK to detect when modules with newer versions are introduced to the app. For a new module `foo`, `InitGenesis` is called by `RunMigrations` only when `foo` is registered in the module manager but it's not set in the `fromVM`. Therefore, if you want to skip `InitGenesis` when a new module is added to the app, then you should set its module version in `fromVM` to the module consensus version: -``` -app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // ... // Set foo's version to the latest ConsensusVersion in the VersionMap. // This will skip running InitGenesis on Foo fromVM[foo.ModuleName] = foo.AppModule{}.ConsensusVersion() return app.mm.RunMigrations(ctx, fromVM)}) +```go +app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + / ... + + / Set foo's version to the latest ConsensusVersion in the VersionMap. + / This will skip running InitGenesis on Foo + fromVM[foo.ModuleName] = foo.AppModule{ +}.ConsensusVersion() + +return app.mm.RunMigrations(ctx, fromVM) +}) ``` -### Overwriting Genesis Functions[​](#overwriting-genesis-functions "Direct link to Overwriting Genesis Functions") +### Overwriting Genesis Functions The Cosmos SDK offers modules that the application developer can import in their app. These modules often have an `InitGenesis` function already defined.
You can write your own `InitGenesis` function for an imported module. To do this, manually trigger your custom genesis function in the upgrade handler. - You MUST manually set the consensus version in the version map passed to the `UpgradeHandler` function. Without this, the SDK will run the Module's existing `InitGenesis` code even if you triggered your custom function in the `UpgradeHandler`. + You MUST manually set the consensus version in the version map passed to the + `UpgradeHandler` function. Without this, the SDK will run the Module's + existing `InitGenesis` code even if you triggered your custom function in the + `UpgradeHandler`. -``` -import foo "github.com/my/module/foo"app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // Register the consensus version in the version map // to avoid the SDK from triggering the default // InitGenesis function. fromVM["foo"] = foo.AppModule{}.ConsensusVersion() // Run custom InitGenesis for foo app.mm["foo"].InitGenesis(ctx, app.appCodec, myCustomGenesisState) return app.mm.RunMigrations(ctx, cfg, fromVM)}) +```go expandable +import foo "github.com/my/module/foo" + +app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + + / Register the consensus version in the version map + / to avoid the SDK from triggering the default + / InitGenesis function. 
+ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() + + / Run custom InitGenesis for foo + app.mm["foo"].InitGenesis(ctx, app.appCodec, myCustomGenesisState) + +return app.mm.RunMigrations(ctx, cfg, fromVM) +}) ``` -## Syncing a Full Node to an Upgraded Blockchain[​](#syncing-a-full-node-to-an-upgraded-blockchain "Direct link to Syncing a Full Node to an Upgraded Blockchain") +## Syncing a Full Node to an Upgraded Blockchain You can sync a full node to an existing blockchain which has been upgraded using Cosmovisor To successfully sync, you must start with the initial binary that the blockchain started with at genesis. If all Software Upgrade Plans contain binary instruction, then you can run Cosmovisor with auto-download option to automatically handle downloading and switching to the binaries associated with each sequential upgrade. Otherwise, you need to manually provide all binaries to Cosmovisor. -To learn more about Cosmovisor, see the [Cosmovisor Quick Start](/v0.50/build/tooling/cosmovisor). +To learn more about Cosmovisor, see the [Cosmovisor Quick Start](/docs/sdk/v0.50/documentation/operations/tooling/cosmovisor). diff --git a/docs/sdk/v0.50/learn/beginner/accounts.mdx b/docs/sdk/v0.50/learn/beginner/accounts.mdx index 3eb867ee..9a9b3503 100644 --- a/docs/sdk/v0.50/learn/beginner/accounts.mdx +++ b/docs/sdk/v0.50/learn/beginner/accounts.mdx @@ -1,29 +1,67 @@ --- -title: "Accounts" -description: "Version: v0.50" +title: Accounts --- - - This document describes the in-built account and public key system of the Cosmos SDK. - +## Synopsis + +This document describes the in-built account and public key system of the Cosmos SDK. 
- * [Anatomy of a Cosmos SDK Application](/v0.50/learn/beginner/app-anatomy) +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.50/learn/beginner/app-anatomy) + -## Account Definition[​](#account-definition "Direct link to Account Definition") +## Account Definition -In the Cosmos SDK, an *account* designates a pair of *public key* `PubKey` and *private key* `PrivKey`. The `PubKey` can be derived to generate various `Addresses`, which are used to identify users (among other parties) in the application. `Addresses` are also associated with [`message`s](/v0.50/build/building-modules/messages-and-queries#messages) to identify the sender of the `message`. The `PrivKey` is used to generate [digital signatures](#signatures) to prove that an `Address` associated with the `PrivKey` approved of a given `message`. +In the Cosmos SDK, an _account_ designates a pair of _public key_ `PubKey` and _private key_ `PrivKey`. The `PubKey` can be derived to generate various `Addresses`, which are used to identify users (among other parties) in the application. `Addresses` are also associated with [`message`s](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages) to identify the sender of the `message`. The `PrivKey` is used to generate [digital signatures](#signatures) to prove that an `Address` associated with the `PrivKey` approved of a given `message`. For HD key derivation the Cosmos SDK uses a standard called [BIP32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki). The BIP32 allows users to create an HD wallet (as specified in [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)) - a set of accounts derived from an initial secret seed. A seed is usually created from a 12- or 24-word mnemonic. A single seed can derive any number of `PrivKey`s using a one-way cryptographic function. Then, a `PubKey` can be derived from the `PrivKey`. 
Naturally, the mnemonic is the most sensitive information, as private keys can always be re-generated if the mnemonic is preserved. -``` - Account 0 Account 1 Account 2+------------------+ +------------------+ +------------------+| | | | | || Address 0 | | Address 1 | | Address 2 || ^ | | ^ | | ^ || | | | | | | | || | | | | | | | || | | | | | | | || + | | + | | + || Public key 0 | | Public key 1 | | Public key 2 || ^ | | ^ | | ^ || | | | | | | | || | | | | | | | || | | | | | | | || + | | + | | + || Private key 0 | | Private key 1 | | Private key 2 || ^ | | ^ | | ^ |+------------------+ +------------------+ +------------------+ | | | | | | | | | +--------------------------------------------------------------------+ | | +---------+---------+ | | | Master PrivKey | | | +-------------------+ | | +---------+---------+ | | | Mnemonic (Seed) | | | +-------------------+ +```text expandable + Account 0 Account 1 Account 2 + ++------------------+ +------------------+ +------------------+ +| | | | | | +| Address 0 | | Address 1 | | Address 2 | +| ^ | | ^ | | ^ | +| | | | | | | | | +| | | | | | | | | +| | | | | | | | | +| + | | + | | + | +| Public key 0 | | Public key 1 | | Public key 2 | +| ^ | | ^ | | ^ | +| | | | | | | | | +| | | | | | | | | +| | | | | | | | | +| + | | + | | + | +| Private key 0 | | Private key 1 | | Private key 2 | +| ^ | | ^ | | ^ | ++------------------+ +------------------+ +------------------+ + | | | + | | | + | | | + +--------------------------------------------------------------------+ + | + | + +---------+---------+ + | | + | Master PrivKey | + | | + +-------------------+ + | + | + +---------+---------+ + | | + | Mnemonic (Seed) | + | | + +-------------------+ ``` In the Cosmos SDK, keys are stored and managed by using an object called a [`Keyring`](#keyring). 
-## Keys, accounts, addresses, and signatures[​](#keys-accounts-addresses-and-signatures "Direct link to Keys, accounts, addresses, and signatures") +## Keys, accounts, addresses, and signatures The principal way of authenticating a user is done using [digital signatures](https://en.wikipedia.org/wiki/Digital_signature). Users sign transactions using their own private key. Signature verification is done with the associated public key. For on-chain signature verification purposes, we store the public key in an `Account` object (alongside other data required for a proper transaction validation). @@ -31,39 +69,899 @@ In the node, all data is stored using Protocol Buffers serialization. The Cosmos SDK supports the following digital key schemes for creating digital signatures: -* `secp256k1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256k1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keys/secp256k1/secp256k1.go). -* `secp256r1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keys/secp256r1/pubkey.go), -* `tm-ed25519`, as implemented in the [Cosmos SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keys/ed25519/ed25519.go). This scheme is supported only for the consensus validation. +- `secp256k1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256k1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keys/secp256k1/secp256k1.go). +- `secp256r1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keys/secp256r1/pubkey.go), +- `tm-ed25519`, as implemented in the [Cosmos SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keys/ed25519/ed25519.go). This scheme is supported only for the consensus validation. 
| | Address length in bytes | Public key length in bytes | Used for transaction authentication | Used for consensus (cometbft) | | :----------: | :---------------------: | :------------------------: | :---------------------------------: | :---------------------------: | -| `secp256k1` | 20 | 33 | yes | no | -| `secp256r1` | 32 | 33 | yes | no | -| `tm-ed25519` | -- not used -- | 32 | no | yes | +| `secp256k1` | 20 | 33 | yes | no | +| `secp256r1` | 32 | 33 | yes | no | +| `tm-ed25519` | -- not used -- | 32 | no | yes | -## Addresses[​](#addresses "Direct link to Addresses") +## Addresses `Addresses` and `PubKey`s are both public information that identifies actors in the application. `Account` is used to store authentication information. The basic account implementation is provided by a `BaseAccount` object. Each account is identified using `Address` which is a sequence of bytes derived from a public key. In the Cosmos SDK, we define 3 types of addresses that specify a context where an account is used: -* `AccAddress` identifies users (the sender of a `message`). -* `ValAddress` identifies validator operators. -* `ConsAddress` identifies validator nodes that are participating in consensus. Validator nodes are derived using the **`ed25519`** curve. +- `AccAddress` identifies users (the sender of a `message`). +- `ValAddress` identifies validator operators. +- `ConsAddress` identifies validator nodes that are participating in consensus. Validator nodes are derived using the **`ed25519`** curve. 
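Each of the three address types carries a distinct default Bech32 prefix, assembled from the constants in `types/address.go` (`Bech32MainPrefix` plus the `val`/`oper`/`cons` parts). A small sketch (the helper function here is illustrative, not an SDK API):

```go
package main

import "fmt"

// Default prefix parts, matching the constants in types/address.go.
const (
	bech32MainPrefix = "cosmos"
	prefixValidator  = "val"
	prefixConsensus  = "cons"
	prefixOperator   = "oper"
)

// prefixFor returns the default Bech32 human-readable part for each
// address type, assembled the same way as Bech32PrefixAccAddr,
// Bech32PrefixValAddr, and Bech32PrefixConsAddr.
func prefixFor(addrType string) string {
	switch addrType {
	case "AccAddress":
		return bech32MainPrefix
	case "ValAddress":
		return bech32MainPrefix + prefixValidator + prefixOperator
	case "ConsAddress":
		return bech32MainPrefix + prefixValidator + prefixConsensus
	}
	return ""
}

func main() {
	fmt.Println(prefixFor("AccAddress"))  // cosmos
	fmt.Println(prefixFor("ValAddress"))  // cosmosvaloper
	fmt.Println(prefixFor("ConsAddress")) // cosmosvalcons
}
```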
These types implement the `Address` interface: -types/address.go +```go expandable +package types + +import ( + + "bytes" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "strings" + "sync" + "sync/atomic" + "github.com/hashicorp/golang-lru/simplelru" + "sigs.k8s.io/yaml" + + errorsmod "cosmossdk.io/errors" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/internal/conv" + "github.com/cosmos/cosmos-sdk/types/address" + "github.com/cosmos/cosmos-sdk/types/bech32" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +const ( + / Constants defined here are the defaults value for address. + / You can use the specific values for your project. + / Add the follow lines to the `main()` of your server. + / + / config := sdk.GetConfig() + / config.SetBech32PrefixForAccount(yourBech32PrefixAccAddr, yourBech32PrefixAccPub) + / config.SetBech32PrefixForValidator(yourBech32PrefixValAddr, yourBech32PrefixValPub) + / config.SetBech32PrefixForConsensusNode(yourBech32PrefixConsAddr, yourBech32PrefixConsPub) + / config.SetPurpose(yourPurpose) + / config.SetCoinType(yourCoinType) + / config.Seal() + + / Bech32MainPrefix defines the main SDK Bech32 prefix of an account's address + Bech32MainPrefix = "cosmos" + + / Purpose is the ATOM purpose as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +Purpose = 44 + + / CoinType is the ATOM coin type as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +CoinType = 118 + + / FullFundraiserPath is the parts of the BIP44 HD path that are fixed by + / what we used during the ATOM fundraiser. 
+ FullFundraiserPath = "m/44'/118'/0'/0/0" + + / PrefixAccount is the prefix for account keys + PrefixAccount = "acc" + / PrefixValidator is the prefix for validator keys + PrefixValidator = "val" + / PrefixConsensus is the prefix for consensus keys + PrefixConsensus = "cons" + / PrefixPublic is the prefix for public keys + PrefixPublic = "pub" + / PrefixOperator is the prefix for operator keys + PrefixOperator = "oper" + + / PrefixAddress is the prefix for addresses + PrefixAddress = "addr" + + / Bech32PrefixAccAddr defines the Bech32 prefix of an account's address + Bech32PrefixAccAddr = Bech32MainPrefix + / Bech32PrefixAccPub defines the Bech32 prefix of an account's public key + Bech32PrefixAccPub = Bech32MainPrefix + PrefixPublic + / Bech32PrefixValAddr defines the Bech32 prefix of a validator's operator address + Bech32PrefixValAddr = Bech32MainPrefix + PrefixValidator + PrefixOperator + / Bech32PrefixValPub defines the Bech32 prefix of a validator's operator public key + Bech32PrefixValPub = Bech32MainPrefix + PrefixValidator + PrefixOperator + PrefixPublic + / Bech32PrefixConsAddr defines the Bech32 prefix of a consensus node address + Bech32PrefixConsAddr = Bech32MainPrefix + PrefixValidator + PrefixConsensus + / Bech32PrefixConsPub defines the Bech32 prefix of a consensus node public key + Bech32PrefixConsPub = Bech32MainPrefix + PrefixValidator + PrefixConsensus + PrefixPublic +) + +/ cache variables +var ( + / AccAddress.String() + +is expensive and if unoptimized dominantly showed up in profiles, + / yet has no mechanisms to trivially cache the result given that AccAddress is a []byte type. 
+ accAddrMu sync.Mutex + accAddrCache *simplelru.LRU + consAddrMu sync.Mutex + consAddrCache *simplelru.LRU + valAddrMu sync.Mutex + valAddrCache *simplelru.LRU + + isCachingEnabled atomic.Bool +) + +/ sentinel errors +var ( + ErrEmptyHexAddress = errors.New("decoding address from hex string failed: empty address") +) + +func init() { + var err error + SetAddrCacheEnabled(true) + + / in total the cache size is 61k entries. Key is 32 bytes and value is around 50-70 bytes. + / That will make around 92 * 61k * 2 (LRU) + +bytes ~ 11 MB + if accAddrCache, err = simplelru.NewLRU(60000, nil); err != nil { + panic(err) +} + if consAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} + if valAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} +} + +/ SetAddrCacheEnabled enables or disables accAddrCache, consAddrCache, and valAddrCache. By default, caches are enabled. +func SetAddrCacheEnabled(enabled bool) { + isCachingEnabled.Store(enabled) +} + +/ IsAddrCacheEnabled returns if the address caches are enabled. +func IsAddrCacheEnabled() + +bool { + return isCachingEnabled.Load() +} + +/ Address is a common interface for different types of addresses used by the SDK +type Address interface { + Equals(Address) + +bool + Empty() + +bool + Marshal() ([]byte, error) + +MarshalJSON() ([]byte, error) + +Bytes() []byte + String() + +string + Format(s fmt.State, verb rune) +} + +/ Ensure that different address types implement the interface +var ( + _ Address = AccAddress{ +} + _ Address = ValAddress{ +} + _ Address = ConsAddress{ +} +) + +/ ---------------------------------------------------------------------------- +/ account +/ ---------------------------------------------------------------------------- + +/ AccAddress a wrapper around bytes meant to represent an account address. +/ When marshaled to a string or JSON, it uses Bech32. +type AccAddress []byte + +/ AccAddressFromHexUnsafe creates an AccAddress from a HEX-encoded string. 
+/ +/ Note, this function is considered unsafe as it may produce an AccAddress from +/ otherwise invalid input, such as a transaction hash. Please use +/ AccAddressFromBech32. +func AccAddressFromHexUnsafe(address string) (addr AccAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return AccAddress(bz), err +} + +/ VerifyAddressFormat verifies that the provided bytes form a valid address +/ according to the default address rules or a custom address verifier set by +/ GetConfig().SetAddressVerifier(). +/ TODO make an issue to get rid of global Config +/ ref: https://github.com/cosmos/cosmos-sdk/issues/9690 +func VerifyAddressFormat(bz []byte) + +error { + verifier := GetConfig().GetAddressVerifier() + if verifier != nil { + return verifier(bz) +} + if len(bz) == 0 { + return errorsmod.Wrap(sdkerrors.ErrUnknownAddress, "addresses cannot be empty") +} + if len(bz) > address.MaxAddrLen { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "address max length is %d, got %d", address.MaxAddrLen, len(bz)) +} + +return nil +} + +/ MustAccAddressFromBech32 calls AccAddressFromBech32 and panics on error. +func MustAccAddressFromBech32(address string) + +AccAddress { + addr, err := AccAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ AccAddressFromBech32 creates an AccAddress from a Bech32 string. 
+func AccAddressFromBech32(address string) (addr AccAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return AccAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixAccAddr := GetConfig().GetBech32AccountAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixAccAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return AccAddress(bz), nil +} + +/ Returns boolean for whether two AccAddresses are Equal +func (aa AccAddress) + +Equals(aa2 Address) + +bool { + if aa.Empty() && aa2.Empty() { + return true +} + +return bytes.Equal(aa.Bytes(), aa2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (aa AccAddress) + +Empty() + +bool { + return len(aa) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (aa AccAddress) + +Marshal() ([]byte, error) { + return aa, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (aa *AccAddress) + +Unmarshal(data []byte) + +error { + *aa = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (aa AccAddress) -``` -loading... -``` +MarshalJSON() ([]byte, error) { + return json.Marshal(aa.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (aa AccAddress) + +MarshalYAML() (interface{ +}, error) { + return aa.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ UnmarshalYAML unmarshals from JSON assuming Bech32 encoding. 
+func (aa *AccAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ Bytes returns the raw address bytes. +func (aa AccAddress) + +Bytes() []byte { + return aa +} + +/ String implements the Stringer interface. +func (aa AccAddress) + +String() + +string { + if aa.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(aa) + if IsAddrCacheEnabled() { + accAddrMu.Lock() + +defer accAddrMu.Unlock() + +addr, ok := accAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32AccountAddrPrefix(), aa, accAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. + +func (aa AccAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(aa.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", aa))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(aa)))) +} +} + +/ ---------------------------------------------------------------------------- +/ validator operator +/ ---------------------------------------------------------------------------- + +/ ValAddress defines a wrapper around bytes meant to present a validator's +/ operator. When marshaled to a string or JSON, it uses Bech32. +type ValAddress []byte + +/ ValAddressFromHex creates a ValAddress from a hex string. +func ValAddressFromHex(address string) (addr ValAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ValAddress(bz), err +} + +/ ValAddressFromBech32 creates a ValAddress from a Bech32 string. 
+func ValAddressFromBech32(address string) (addr ValAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ValAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixValAddr := GetConfig().GetBech32ValidatorAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixValAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ValAddress(bz), nil +} + +/ Returns boolean for whether two ValAddresses are Equal +func (va ValAddress) + +Equals(va2 Address) + +bool { + if va.Empty() && va2.Empty() { + return true +} + +return bytes.Equal(va.Bytes(), va2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (va ValAddress) + +Empty() + +bool { + return len(va) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (va ValAddress) + +Marshal() ([]byte, error) { + return va, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (va *ValAddress) + +Unmarshal(data []byte) + +error { + *va = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (va ValAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(va.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (va ValAddress) + +MarshalYAML() (interface{ +}, error) { + return va.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. 
+func (va *ValAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ Bytes returns the raw address bytes. +func (va ValAddress) + +Bytes() []byte { + return va +} + +/ String implements the Stringer interface. +func (va ValAddress) + +String() + +string { + if va.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(va) + if IsAddrCacheEnabled() { + valAddrMu.Lock() + +defer valAddrMu.Unlock() + +addr, ok := valAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ValidatorAddrPrefix(), va, valAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. + +func (va ValAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(va.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", va))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(va)))) +} +} + +/ ---------------------------------------------------------------------------- +/ consensus node +/ ---------------------------------------------------------------------------- + +/ ConsAddress defines a wrapper around bytes meant to present a consensus node. +/ When marshaled to a string or JSON, it uses Bech32. +type ConsAddress []byte -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/address.go#L126-L134) +/ ConsAddressFromHex creates a ConsAddress from a hex string. +func ConsAddressFromHex(address string) (addr ConsAddress, err error) { + bz, err := addressBytesFromHexString(address) -Address construction algorithm is defined in [ADR-28](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md). 
Here is the standard way to obtain an account address from a `pub` public key: +return ConsAddress(bz), err +} +/ ConsAddressFromBech32 creates a ConsAddress from a Bech32 string. +func ConsAddressFromBech32(address string) (addr ConsAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ConsAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixConsAddr := GetConfig().GetBech32ConsensusAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixConsAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ConsAddress(bz), nil +} + +/ get ConsAddress from pubkey +func GetConsAddress(pubkey cryptotypes.PubKey) + +ConsAddress { + return ConsAddress(pubkey.Address()) +} + +/ Returns boolean for whether two ConsAddress are Equal +func (ca ConsAddress) + +Equals(ca2 Address) + +bool { + if ca.Empty() && ca2.Empty() { + return true +} + +return bytes.Equal(ca.Bytes(), ca2.Bytes()) +} + +/ Returns boolean for whether an ConsAddress is empty +func (ca ConsAddress) + +Empty() + +bool { + return len(ca) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (ca ConsAddress) + +Marshal() ([]byte, error) { + return ca, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (ca *ConsAddress) + +Unmarshal(data []byte) + +error { + *ca = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (ca ConsAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(ca.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (ca ConsAddress) + +MarshalYAML() (interface{ +}, error) { + return ca.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. 
+func (ca *ConsAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ Bytes returns the raw address bytes. +func (ca ConsAddress) + +Bytes() []byte { + return ca +} + +/ String implements the Stringer interface. +func (ca ConsAddress) + +String() + +string { + if ca.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(ca) + if IsAddrCacheEnabled() { + consAddrMu.Lock() + +defer consAddrMu.Unlock() + +addr, ok := consAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ConsensusAddrPrefix(), ca, consAddrCache, key) +} + +/ Bech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. Returns an error if the bech32 conversion +/ fails or the prefix is empty. +func Bech32ifyAddressBytes(prefix string, bs []byte) (string, error) { + if len(bs) == 0 { + return "", nil +} + if len(prefix) == 0 { + return "", errors.New("prefix cannot be empty") +} + +return bech32.ConvertAndEncode(prefix, bs) +} + +/ MustBech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. It panics if the bech32 conversion +/ fails or the prefix is empty. 
+func MustBech32ifyAddressBytes(prefix string, bs []byte) + +string { + s, err := Bech32ifyAddressBytes(prefix, bs) + if err != nil { + panic(err) +} + +return s +} + +/ Format implements the fmt.Formatter interface. + +func (ca ConsAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(ca.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", ca))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(ca)))) +} +} + +/ ---------------------------------------------------------------------------- +/ auxiliary +/ ---------------------------------------------------------------------------- + +var errBech32EmptyAddress = errors.New("decoding Bech32 address failed: must provide a non empty address") + +/ GetFromBech32 decodes a bytestring from a Bech32 encoded string. +func GetFromBech32(bech32str, prefix string) ([]byte, error) { + if len(bech32str) == 0 { + return nil, errBech32EmptyAddress +} + +hrp, bz, err := bech32.DecodeAndConvert(bech32str) + if err != nil { + return nil, err +} + if hrp != prefix { + return nil, fmt.Errorf("invalid Bech32 prefix; expected %s, got %s", prefix, hrp) +} + +return bz, nil +} + +func addressBytesFromHexString(address string) ([]byte, error) { + if len(address) == 0 { + return nil, ErrEmptyHexAddress +} + +return hex.DecodeString(address) +} + +/ cacheBech32Addr is not concurrency safe. Concurrent access to cache causes race condition. +func cacheBech32Addr(prefix string, addr []byte, cache *simplelru.LRU, cacheKey string) + +string { + bech32Addr, err := bech32.ConvertAndEncode(prefix, addr) + if err != nil { + panic(err) +} + if IsAddrCacheEnabled() { + cache.Add(cacheKey, bech32Addr) +} + +return bech32Addr +} ``` + +Address construction algorithm is defined in [ADR-28](docs/sdk/next/documentation/legacy/adr-comprehensive). 
+Here is the standard way to obtain an account address from a `pub` public key: + +```go sdk.AccAddress(pub.Address().Bytes()) ``` @@ -71,13 +969,872 @@ Of note, the `Marshal()` and `Bytes()` method both return the same raw `[]byte` For user interaction, addresses are formatted using [Bech32](https://en.bitcoin.it/wiki/Bech32) and implemented by the `String` method. The Bech32 method is the only supported format to use when interacting with a blockchain. The Bech32 human-readable part (Bech32 prefix) is used to denote an address type. Example: -types/address.go +```go expandable +package types + +import ( + + "bytes" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "strings" + "sync" + "sync/atomic" + "github.com/hashicorp/golang-lru/simplelru" + "sigs.k8s.io/yaml" + + errorsmod "cosmossdk.io/errors" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/internal/conv" + "github.com/cosmos/cosmos-sdk/types/address" + "github.com/cosmos/cosmos-sdk/types/bech32" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +const ( + / Constants defined here are the defaults value for address. + / You can use the specific values for your project. + / Add the follow lines to the `main()` of your server. 
+ / + / config := sdk.GetConfig() + / config.SetBech32PrefixForAccount(yourBech32PrefixAccAddr, yourBech32PrefixAccPub) + / config.SetBech32PrefixForValidator(yourBech32PrefixValAddr, yourBech32PrefixValPub) + / config.SetBech32PrefixForConsensusNode(yourBech32PrefixConsAddr, yourBech32PrefixConsPub) + / config.SetPurpose(yourPurpose) + / config.SetCoinType(yourCoinType) + / config.Seal() + + / Bech32MainPrefix defines the main SDK Bech32 prefix of an account's address + Bech32MainPrefix = "cosmos" + + / Purpose is the ATOM purpose as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +Purpose = 44 + + / CoinType is the ATOM coin type as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +CoinType = 118 + + / FullFundraiserPath is the parts of the BIP44 HD path that are fixed by + / what we used during the ATOM fundraiser. + FullFundraiserPath = "m/44'/118'/0'/0/0" + + / PrefixAccount is the prefix for account keys + PrefixAccount = "acc" + / PrefixValidator is the prefix for validator keys + PrefixValidator = "val" + / PrefixConsensus is the prefix for consensus keys + PrefixConsensus = "cons" + / PrefixPublic is the prefix for public keys + PrefixPublic = "pub" + / PrefixOperator is the prefix for operator keys + PrefixOperator = "oper" + + / PrefixAddress is the prefix for addresses + PrefixAddress = "addr" + + / Bech32PrefixAccAddr defines the Bech32 prefix of an account's address + Bech32PrefixAccAddr = Bech32MainPrefix + / Bech32PrefixAccPub defines the Bech32 prefix of an account's public key + Bech32PrefixAccPub = Bech32MainPrefix + PrefixPublic + / Bech32PrefixValAddr defines the Bech32 prefix of a validator's operator address + Bech32PrefixValAddr = Bech32MainPrefix + PrefixValidator + PrefixOperator + / Bech32PrefixValPub defines the Bech32 prefix of a validator's operator public key + Bech32PrefixValPub = Bech32MainPrefix + PrefixValidator + PrefixOperator + PrefixPublic + / 
Bech32PrefixConsAddr defines the Bech32 prefix of a consensus node address + Bech32PrefixConsAddr = Bech32MainPrefix + PrefixValidator + PrefixConsensus + / Bech32PrefixConsPub defines the Bech32 prefix of a consensus node public key + Bech32PrefixConsPub = Bech32MainPrefix + PrefixValidator + PrefixConsensus + PrefixPublic +) + +/ cache variables +var ( + / AccAddress.String() + +is expensive and if unoptimized dominantly showed up in profiles, + / yet has no mechanisms to trivially cache the result given that AccAddress is a []byte type. + accAddrMu sync.Mutex + accAddrCache *simplelru.LRU + consAddrMu sync.Mutex + consAddrCache *simplelru.LRU + valAddrMu sync.Mutex + valAddrCache *simplelru.LRU + + isCachingEnabled atomic.Bool +) + +/ sentinel errors +var ( + ErrEmptyHexAddress = errors.New("decoding address from hex string failed: empty address") +) + +func init() { + var err error + SetAddrCacheEnabled(true) + + / in total the cache size is 61k entries. Key is 32 bytes and value is around 50-70 bytes. + / That will make around 92 * 61k * 2 (LRU) + +bytes ~ 11 MB + if accAddrCache, err = simplelru.NewLRU(60000, nil); err != nil { + panic(err) +} + if consAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} + if valAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} +} + +/ SetAddrCacheEnabled enables or disables accAddrCache, consAddrCache, and valAddrCache. By default, caches are enabled. +func SetAddrCacheEnabled(enabled bool) { + isCachingEnabled.Store(enabled) +} + +/ IsAddrCacheEnabled returns if the address caches are enabled. 
+func IsAddrCacheEnabled() + +bool { + return isCachingEnabled.Load() +} + +/ Address is a common interface for different types of addresses used by the SDK +type Address interface { + Equals(Address) + +bool + Empty() + +bool + Marshal() ([]byte, error) + +MarshalJSON() ([]byte, error) + +Bytes() []byte + String() + +string + Format(s fmt.State, verb rune) +} + +/ Ensure that different address types implement the interface +var ( + _ Address = AccAddress{ +} + _ Address = ValAddress{ +} + _ Address = ConsAddress{ +} +) + +/ ---------------------------------------------------------------------------- +/ account +/ ---------------------------------------------------------------------------- + +/ AccAddress a wrapper around bytes meant to represent an account address. +/ When marshaled to a string or JSON, it uses Bech32. +type AccAddress []byte + +/ AccAddressFromHexUnsafe creates an AccAddress from a HEX-encoded string. +/ +/ Note, this function is considered unsafe as it may produce an AccAddress from +/ otherwise invalid input, such as a transaction hash. Please use +/ AccAddressFromBech32. +func AccAddressFromHexUnsafe(address string) (addr AccAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return AccAddress(bz), err +} + +/ VerifyAddressFormat verifies that the provided bytes form a valid address +/ according to the default address rules or a custom address verifier set by +/ GetConfig().SetAddressVerifier(). 
+/ TODO make an issue to get rid of global Config +/ ref: https://github.com/cosmos/cosmos-sdk/issues/9690 +func VerifyAddressFormat(bz []byte) + +error { + verifier := GetConfig().GetAddressVerifier() + if verifier != nil { + return verifier(bz) +} + if len(bz) == 0 { + return errorsmod.Wrap(sdkerrors.ErrUnknownAddress, "addresses cannot be empty") +} + if len(bz) > address.MaxAddrLen { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "address max length is %d, got %d", address.MaxAddrLen, len(bz)) +} + +return nil +} + +/ MustAccAddressFromBech32 calls AccAddressFromBech32 and panics on error. +func MustAccAddressFromBech32(address string) + +AccAddress { + addr, err := AccAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ AccAddressFromBech32 creates an AccAddress from a Bech32 string. +func AccAddressFromBech32(address string) (addr AccAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return AccAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixAccAddr := GetConfig().GetBech32AccountAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixAccAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return AccAddress(bz), nil +} + +/ Returns boolean for whether two AccAddresses are Equal +func (aa AccAddress) + +Equals(aa2 Address) + +bool { + if aa.Empty() && aa2.Empty() { + return true +} + +return bytes.Equal(aa.Bytes(), aa2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (aa AccAddress) + +Empty() + +bool { + return len(aa) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (aa AccAddress) + +Marshal() ([]byte, error) { + return aa, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. 
+func (aa *AccAddress) + +Unmarshal(data []byte) + +error { + *aa = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (aa AccAddress) -``` -loading... -``` +MarshalJSON() ([]byte, error) { + return json.Marshal(aa.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (aa AccAddress) + +MarshalYAML() (interface{ +}, error) { + return aa.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ UnmarshalYAML unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ Bytes returns the raw address bytes. +func (aa AccAddress) + +Bytes() []byte { + return aa +} + +/ String implements the Stringer interface. +func (aa AccAddress) + +String() + +string { + if aa.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(aa) + if IsAddrCacheEnabled() { + accAddrMu.Lock() + +defer accAddrMu.Unlock() + +addr, ok := accAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32AccountAddrPrefix(), aa, accAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. 
+ +func (aa AccAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(aa.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", aa))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(aa)))) +} +} + +/ ---------------------------------------------------------------------------- +/ validator operator +/ ---------------------------------------------------------------------------- + +/ ValAddress defines a wrapper around bytes meant to present a validator's +/ operator. When marshaled to a string or JSON, it uses Bech32. +type ValAddress []byte + +/ ValAddressFromHex creates a ValAddress from a hex string. +func ValAddressFromHex(address string) (addr ValAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ValAddress(bz), err +} + +/ ValAddressFromBech32 creates a ValAddress from a Bech32 string. +func ValAddressFromBech32(address string) (addr ValAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ValAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixValAddr := GetConfig().GetBech32ValidatorAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixValAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ValAddress(bz), nil +} + +/ Returns boolean for whether two ValAddresses are Equal +func (va ValAddress) + +Equals(va2 Address) + +bool { + if va.Empty() && va2.Empty() { + return true +} + +return bytes.Equal(va.Bytes(), va2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (va ValAddress) + +Empty() + +bool { + return len(va) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (va ValAddress) + +Marshal() ([]byte, error) { + return va, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. 
+func (va *ValAddress) + +Unmarshal(data []byte) + +error { + *va = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (va ValAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(va.String()) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/address.go#L299-L316) +/ MarshalYAML marshals to YAML using Bech32. +func (va ValAddress) + +MarshalYAML() (interface{ +}, error) { + return va.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ Bytes returns the raw address bytes. +func (va ValAddress) + +Bytes() []byte { + return va +} + +/ String implements the Stringer interface. +func (va ValAddress) + +String() + +string { + if va.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(va) + if IsAddrCacheEnabled() { + valAddrMu.Lock() + +defer valAddrMu.Unlock() + +addr, ok := valAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ValidatorAddrPrefix(), va, valAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. 
+ +func (va ValAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(va.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", va))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(va)))) +} +} + +/ ---------------------------------------------------------------------------- +/ consensus node +/ ---------------------------------------------------------------------------- + +/ ConsAddress defines a wrapper around bytes meant to present a consensus node. +/ When marshaled to a string or JSON, it uses Bech32. +type ConsAddress []byte + +/ ConsAddressFromHex creates a ConsAddress from a hex string. +func ConsAddressFromHex(address string) (addr ConsAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ConsAddress(bz), err +} + +/ ConsAddressFromBech32 creates a ConsAddress from a Bech32 string. +func ConsAddressFromBech32(address string) (addr ConsAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ConsAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixConsAddr := GetConfig().GetBech32ConsensusAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixConsAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ConsAddress(bz), nil +} + +/ get ConsAddress from pubkey +func GetConsAddress(pubkey cryptotypes.PubKey) + +ConsAddress { + return ConsAddress(pubkey.Address()) +} + +/ Returns boolean for whether two ConsAddress are Equal +func (ca ConsAddress) + +Equals(ca2 Address) + +bool { + if ca.Empty() && ca2.Empty() { + return true +} + +return bytes.Equal(ca.Bytes(), ca2.Bytes()) +} + +/ Returns boolean for whether an ConsAddress is empty +func (ca ConsAddress) + +Empty() + +bool { + return len(ca) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. 
+func (ca ConsAddress) + +Marshal() ([]byte, error) { + return ca, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (ca *ConsAddress) + +Unmarshal(data []byte) + +error { + *ca = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (ca ConsAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(ca.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (ca ConsAddress) + +MarshalYAML() (interface{ +}, error) { + return ca.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ Bytes returns the raw address bytes. +func (ca ConsAddress) + +Bytes() []byte { + return ca +} + +/ String implements the Stringer interface. +func (ca ConsAddress) + +String() + +string { + if ca.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(ca) + if IsAddrCacheEnabled() { + consAddrMu.Lock() + +defer consAddrMu.Unlock() + +addr, ok := consAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ConsensusAddrPrefix(), ca, consAddrCache, key) +} + +/ Bech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. 
Returns an error if the bech32 conversion +/ fails or the prefix is empty. +func Bech32ifyAddressBytes(prefix string, bs []byte) (string, error) { + if len(bs) == 0 { + return "", nil +} + if len(prefix) == 0 { + return "", errors.New("prefix cannot be empty") +} + +return bech32.ConvertAndEncode(prefix, bs) +} + +/ MustBech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. It panics if the bech32 conversion +/ fails or the prefix is empty. +func MustBech32ifyAddressBytes(prefix string, bs []byte) + +string { + s, err := Bech32ifyAddressBytes(prefix, bs) + if err != nil { + panic(err) +} + +return s +} + +/ Format implements the fmt.Formatter interface. + +func (ca ConsAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + s.Write([]byte(ca.String())) + case 'p': + s.Write([]byte(fmt.Sprintf("%p", ca))) + +default: + s.Write([]byte(fmt.Sprintf("%X", []byte(ca)))) +} +} + +/ ---------------------------------------------------------------------------- +/ auxiliary +/ ---------------------------------------------------------------------------- + +var errBech32EmptyAddress = errors.New("decoding Bech32 address failed: must provide a non empty address") + +/ GetFromBech32 decodes a bytestring from a Bech32 encoded string. +func GetFromBech32(bech32str, prefix string) ([]byte, error) { + if len(bech32str) == 0 { + return nil, errBech32EmptyAddress +} + +hrp, bz, err := bech32.DecodeAndConvert(bech32str) + if err != nil { + return nil, err +} + if hrp != prefix { + return nil, fmt.Errorf("invalid Bech32 prefix; expected %s, got %s", prefix, hrp) +} + +return bz, nil +} + +func addressBytesFromHexString(address string) ([]byte, error) { + if len(address) == 0 { + return nil, ErrEmptyHexAddress +} + +return hex.DecodeString(address) +} + +/ cacheBech32Addr is not concurrency safe. Concurrent access to cache causes race condition. 
+func cacheBech32Addr(prefix string, addr []byte, cache *simplelru.LRU, cacheKey string) + +string { + bech32Addr, err := bech32.ConvertAndEncode(prefix, addr) + if err != nil { + panic(err) +} + if IsAddrCacheEnabled() { + cache.Add(cacheKey, bech32Addr) +} + +return bech32Addr +} +``` | | Address Bech32 Prefix | | ------------------ | --------------------- | @@ -85,115 +1842,1679 @@ loading... | Validator Operator | cosmosvaloper | | Consensus Nodes | cosmosvalcons | -### Public Keys[​](#public-keys "Direct link to Public Keys") +### Public Keys Public keys in Cosmos SDK are defined by `cryptotypes.PubKey` interface. Since public keys are saved in a store, `cryptotypes.PubKey` extends the `proto.Message` interface: -crypto/types/types.go - +```go expandable +package types + +import ( + + cmtcrypto "github.com/cometbft/cometbft/crypto" + proto "github.com/cosmos/gogoproto/proto" +) + +/ PubKey defines a public key and extends proto.Message. +type PubKey interface { + proto.Message + + Address() + +Address + Bytes() []byte + VerifySignature(msg, sig []byte) + +bool + Equals(PubKey) + +bool + Type() + +string +} + +/ LedgerPrivKey defines a private key that is not a proto message. For now, +/ LedgerSecp256k1 keys are not converted to proto.Message yet, this is why +/ they use LedgerPrivKey instead of PrivKey. All other keys must use PrivKey +/ instead of LedgerPrivKey. +/ TODO https://github.com/cosmos/cosmos-sdk/issues/7357. +type LedgerPrivKey interface { + Bytes() []byte + Sign(msg []byte) ([]byte, error) + +PubKey() + +PubKey + Equals(LedgerPrivKey) + +bool + Type() + +string +} + +/ LedgerPrivKeyAminoJSON is a Ledger PrivKey type that supports signing with +/ SIGN_MODE_LEGACY_AMINO_JSON. It is added as a non-breaking change, instead of directly +/ on the LedgerPrivKey interface (whose Sign method will sign with TEXTUAL), +/ and will be deprecated/removed once LEGACY_AMINO_JSON is removed. 
+type LedgerPrivKeyAminoJSON interface { + LedgerPrivKey + / SignLedgerAminoJSON signs a messages on the Ledger device using + / SIGN_MODE_LEGACY_AMINO_JSON. + SignLedgerAminoJSON(msg []byte) ([]byte, error) +} + +/ PrivKey defines a private key and extends proto.Message. For now, it extends +/ LedgerPrivKey (see godoc for LedgerPrivKey). Ultimately, we should remove +/ LedgerPrivKey and add its methods here directly. +/ TODO https://github.com/cosmos/cosmos-sdk/issues/7357. +type PrivKey interface { + proto.Message + LedgerPrivKey +} + +type ( + Address = cmtcrypto.Address +) ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/types/types.go#L8-L17) A compressed format is used for `secp256k1` and `secp256r1` serialization. -* The first byte is a `0x02` byte if the `y`-coordinate is the lexicographically largest of the two associated with the `x`-coordinate. -* Otherwise the first byte is a `0x03`. +- The first byte is a `0x02` byte if the `y`-coordinate is the lexicographically largest of the two associated with the `x`-coordinate. +- Otherwise the first byte is a `0x03`. This prefix is followed by the `x`-coordinate. -Public Keys are not used to reference accounts (or users) and in general are not used when composing transaction messages (with few exceptions: `MsgCreateValidator`, `Validator` and `Multisig` messages). For user interactions, `PubKey` is formatted using Protobufs JSON ([ProtoMarshalJSON](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/codec/json.go#L14-L34) function). Example: +Public Keys are not used to reference accounts (or users) and in general are not used when composing transaction messages (with few exceptions: `MsgCreateValidator`, `Validator` and `Multisig` messages). +For user interactions, `PubKey` is formatted using Protobufs JSON ([ProtoMarshalJSON](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/codec/json.go#L14-L34) function). 
Example: + +```go expandable +package keys + +import ( + + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ Use protobuf interface marshaler rather then generic JSON + +/ KeyOutput defines a structure wrapping around an Info object used for output +/ functionality. +type KeyOutput struct { + Name string `json:"name" yaml:"name"` + Type string `json:"type" yaml:"type"` + Address string `json:"address" yaml:"address"` + PubKey string `json:"pubkey" yaml:"pubkey"` + Mnemonic string `json:"mnemonic,omitempty" yaml:"mnemonic"` +} + +/ NewKeyOutput creates a default KeyOutput instance without Mnemonic, Threshold and PubKeys +func NewKeyOutput(name string, keyType keyring.KeyType, a sdk.Address, pk cryptotypes.PubKey) (KeyOutput, error) { + apk, err := codectypes.NewAnyWithValue(pk) + if err != nil { + return KeyOutput{ +}, err +} + +bz, err := codec.ProtoMarshalJSON(apk, nil) + if err != nil { + return KeyOutput{ +}, err +} + +return KeyOutput{ + Name: name, + Type: keyType.String(), + Address: a.String(), + PubKey: string(bz), +}, nil +} + +/ MkConsKeyOutput create a KeyOutput in with "cons" Bech32 prefixes. +func MkConsKeyOutput(k *keyring.Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.ConsAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkValKeyOutput create a KeyOutput in with "val" Bech32 prefixes. +func MkValKeyOutput(k *keyring.Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.ValAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkAccKeyOutput create a KeyOutput in with "acc" Bech32 prefixes. 
If the +/ public key is a multisig public key, then the threshold and constituent +/ public keys will be added. +func MkAccKeyOutput(k *keyring.Record) (KeyOutput, error) { + pk, err := k.GetPubKey() + if err != nil { + return KeyOutput{ +}, err +} + addr := sdk.AccAddress(pk.Address()) + +return NewKeyOutput(k.Name, k.GetType(), addr, pk) +} + +/ MkAccKeysOutput returns a slice of KeyOutput objects, each with the "acc" +/ Bech32 prefixes, given a slice of Record objects. It returns an error if any +/ call to MkKeyOutput fails. +func MkAccKeysOutput(records []*keyring.Record) ([]KeyOutput, error) { + kos := make([]KeyOutput, len(records)) + +var err error + for i, r := range records { + kos[i], err = MkAccKeyOutput(r) + if err != nil { + return nil, err +} + +} + +return kos, nil +} +``` -client/keys/output.go +## Keyring -``` -loading... -``` +A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface: -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/keys/output.go#L23-L39) +```go expandable +package keyring + +import ( + + "bufio" + "encoding/hex" + "fmt" + "io" + "os" + "path/filepath" + "sort" + "strings" + "github.com/99designs/keyring" + "github.com/cockroachdb/errors" + "github.com/cosmos/go-bip39" -## Keyring[​](#keyring "Direct link to Keyring") + errorsmod "cosmossdk.io/errors" + "golang.org/x/crypto/bcrypt" + "github.com/cosmos/cosmos-sdk/client/input" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/crypto" + "github.com/cosmos/cosmos-sdk/crypto/hd" + "github.com/cosmos/cosmos-sdk/crypto/ledger" + "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ Backend options for Keyring +const ( + BackendFile = "file" + BackendOS = "os" + BackendKWallet = "kwallet" + 
BackendPass = "pass" + BackendTest = "test" + BackendMemory = "memory" +) -A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface: +const ( + keyringFileDirName = "keyring-file" + keyringTestDirName = "keyring-test" + passKeyringPrefix = "keyring-%s" -crypto/keyring/keyring.go + / temporary pass phrase for exporting a key during a key rename + passPhrase = "temp" +) -``` -loading... -``` +var ( + _ Keyring = &keystore{ +} + +maxPassphraseEntryAttempts = 3 +) + +/ Keyring exposes operations over a backend supported by github.com/99designs/keyring. +type Keyring interface { + / Get the backend type used in the keyring config: "file", "os", "kwallet", "pass", "test", "memory". + Backend() + +string + / List all keys. + List() ([]*Record, error) + + / Supported signing algorithms for Keyring and Ledger respectively. + SupportedAlgorithms() (SigningAlgoList, SigningAlgoList) + + / Key and KeyByAddress return keys by uid and address respectively. + Key(uid string) (*Record, error) + +KeyByAddress(address sdk.Address) (*Record, error) + + / Delete and DeleteByAddress remove keys from the keyring. + Delete(uid string) + +error + DeleteByAddress(address sdk.Address) + +error + + / Rename an existing key from the Keyring + Rename(from, to string) + +error + + / NewMnemonic generates a new mnemonic, derives a hierarchical deterministic key from it, and + / persists the key to storage. Returns the generated mnemonic and the key Info. + / It returns an error if it fails to generate a key for the given algo type, or if + / another key is already stored under the same name or address. + / + / A passphrase set to the empty string will set the passphrase to the DefaultBIP39Passphrase value. + NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (*Record, string, error) + + / NewAccount converts a mnemonic to a private key and BIP-39 HD Path and persists it. 
+ / It fails if there is an existing key Info with the same address. + NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error) + + / SaveLedgerKey retrieves a public key reference from a Ledger device and persists it. + SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (*Record, error) + + / SaveOfflineKey stores a public key and returns the persisted Info structure. + SaveOfflineKey(uid string, pubkey types.PubKey) (*Record, error) + + / SaveMultisig stores and returns a new multsig (offline) + +key reference. + SaveMultisig(uid string, pubkey types.PubKey) (*Record, error) + +Signer + + Importer + Exporter + + Migrator +} + +/ Signer is implemented by key stores that want to provide signing capabilities. +type Signer interface { + / Sign sign byte messages with a user key. + Sign(uid string, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) + + / SignByAddress sign byte messages with a user key providing the address. + SignByAddress(address sdk.Address, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) +} + +/ Importer is implemented by key stores that support import of public and private keys. +type Importer interface { + / ImportPrivKey imports ASCII armored passphrase-encrypted private keys. + ImportPrivKey(uid, armor, passphrase string) + +error + + / ImportPubKey imports ASCII armored public keys. + ImportPubKey(uid, armor string) + +error +} + +/ Migrator is implemented by key stores and enables migration of keys from amino to proto +type Migrator interface { + MigrateAll() ([]*Record, error) +} + +/ Exporter is implemented by key stores that support export of public and private keys. +type Exporter interface { + / Export public key + ExportPubKeyArmor(uid string) (string, error) + +ExportPubKeyArmorByAddress(address sdk.Address) (string, error) + + / ExportPrivKeyArmor returns a private key in ASCII armored format. 
+ / It returns an error if the key does not exist or a wrong encryption passphrase is supplied. + ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error) + +ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error) +} + +/ Option overrides keyring configuration options. +type Option func(options *Options) + +/ Options define the options of the Keyring. +type Options struct { + / supported signing algorithms for keyring + SupportedAlgos SigningAlgoList + / supported signing algorithms for Ledger + SupportedAlgosLedger SigningAlgoList + / define Ledger Derivation function + LedgerDerivation func() (ledger.SECP256K1, error) + / define Ledger key generation function + LedgerCreateKey func([]byte) + +types.PubKey + / define Ledger app name + LedgerAppName string + / indicate whether Ledger should skip DER Conversion on signature, + / depending on which format (DER or BER) + +the Ledger app returns signatures + LedgerSigSkipDERConv bool +} + +/ NewInMemory creates a transient keyring useful for testing +/ purposes and on-the-fly key generation. +/ Keybase options can be applied when generating this new Keybase. +func NewInMemory(cdc codec.Codec, opts ...Option) + +Keyring { + return NewInMemoryWithKeyring(keyring.NewArrayKeyring(nil), cdc, opts...) +} + +/ NewInMemoryWithKeyring returns an in memory keyring using the specified keyring.Keyring +/ as the backing keyring. +func NewInMemoryWithKeyring(kr keyring.Keyring, cdc codec.Codec, opts ...Option) + +Keyring { + return newKeystore(kr, cdc, BackendMemory, opts...) +} + +/ New creates a new instance of a keyring. +/ Keyring options can be applied when generating the new instance. +/ Available backends are "os", "file", "kwallet", "memory", "pass", "test". 
+func New( + appName, backend, rootDir string, userInput io.Reader, cdc codec.Codec, opts ...Option, +) (Keyring, error) { + var ( + db keyring.Keyring + err error + ) + switch backend { + case BackendMemory: + return NewInMemory(cdc, opts...), err + case BackendTest: + db, err = keyring.Open(newTestBackendKeyringConfig(appName, rootDir)) + case BackendFile: + db, err = keyring.Open(newFileBackendKeyringConfig(appName, rootDir, userInput)) + case BackendOS: + db, err = keyring.Open(newOSBackendKeyringConfig(appName, rootDir, userInput)) + case BackendKWallet: + db, err = keyring.Open(newKWalletBackendKeyringConfig(appName, rootDir, userInput)) + case BackendPass: + db, err = keyring.Open(newPassBackendKeyringConfig(appName, rootDir, userInput)) + +default: + return nil, errorsmod.Wrap(ErrUnknownBacked, backend) +} + if err != nil { + return nil, err +} + +return newKeystore(db, cdc, backend, opts...), nil +} + +type keystore struct { + db keyring.Keyring + cdc codec.Codec + backend string + options Options +} + +func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) + +keystore { + / Default options for keybase, these can be overwritten using the + / Option function + options := Options{ + SupportedAlgos: SigningAlgoList{ + hd.Secp256k1 +}, + SupportedAlgosLedger: SigningAlgoList{ + hd.Secp256k1 +}, +} + for _, optionFn := range opts { + optionFn(&options) +} + if options.LedgerDerivation != nil { + ledger.SetDiscoverLedger(options.LedgerDerivation) +} + if options.LedgerCreateKey != nil { + ledger.SetCreatePubkey(options.LedgerCreateKey) +} + if options.LedgerAppName != "" { + ledger.SetAppName(options.LedgerAppName) +} + if options.LedgerSigSkipDERConv { + ledger.SetSkipDERConversion() +} + +return keystore{ + db: kr, + cdc: cdc, + backend: backend, + options: options, +} +} + +/ Backend returns the keyring backend option used in the config +func (ks keystore) + +Backend() + +string { + return ks.backend +} + +func (ks keystore) + 
+ExportPubKeyArmor(uid string) (string, error) { + k, err := ks.Key(uid) + if err != nil { + return "", err +} + +key, err := k.GetPubKey() + if err != nil { + return "", err +} + +bz, err := ks.cdc.MarshalInterface(key) + if err != nil { + return "", err +} + +return crypto.ArmorPubKeyBytes(bz, key.Type()), nil +} + +func (ks keystore) + +ExportPubKeyArmorByAddress(address sdk.Address) (string, error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return "", err +} + +return ks.ExportPubKeyArmor(k.Name) +} + +/ ExportPrivKeyArmor exports encrypted privKey +func (ks keystore) + +ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error) { + priv, err := ks.ExportPrivateKeyObject(uid) + if err != nil { + return "", err +} + +return crypto.EncryptArmorPrivKey(priv, encryptPassphrase, priv.Type()), nil +} + +/ ExportPrivateKeyObject exports an armored private key object. +func (ks keystore) + +ExportPrivateKeyObject(uid string) (types.PrivKey, error) { + k, err := ks.Key(uid) + if err != nil { + return nil, err +} + +priv, err := extractPrivKeyFromRecord(k) + if err != nil { + return nil, err +} + +return priv, err +} + +func (ks keystore) + +ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return "", err +} + +return ks.ExportPrivKeyArmor(k.Name, encryptPassphrase) +} + +func (ks keystore) + +ImportPrivKey(uid, armor, passphrase string) + +error { + if k, err := ks.Key(uid); err == nil { + if uid == k.Name { + return errorsmod.Wrap(ErrOverwriteKey, uid) +} + +} + +privKey, _, err := crypto.UnarmorDecryptPrivKey(armor, passphrase) + if err != nil { + return errorsmod.Wrap(err, "failed to decrypt private key") +} + + _, err = ks.writeLocalKey(uid, privKey) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +ImportPubKey(uid, armor string) + +error { + if _, err := ks.Key(uid); err == nil { + return 
errorsmod.Wrap(ErrOverwriteKey, uid) +} + +pubBytes, _, err := crypto.UnarmorPubKeyBytes(armor) + if err != nil { + return err +} + +var pubKey types.PubKey + if err := ks.cdc.UnmarshalInterface(pubBytes, &pubKey); err != nil { + return err +} + + _, err = ks.writeOfflineKey(uid, pubKey) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keyring/keyring.go#L57-L105) +Sign(uid string, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) { + k, err := ks.Key(uid) + if err != nil { + return nil, nil, err +} + switch { + case k.GetLocal() != nil: + priv, err := extractPrivKeyFromLocal(k.GetLocal()) + if err != nil { + return nil, nil, err +} + +sig, err := priv.Sign(msg) + if err != nil { + return nil, nil, err +} + +return sig, priv.PubKey(), nil + case k.GetLedger() != nil: + return SignWithLedger(k, msg, signMode) + + / multi or offline record + default: + pub, err := k.GetPubKey() + if err != nil { + return nil, nil, err +} + +return nil, pub, ErrOfflineSign +} +} + +func (ks keystore) + +SignByAddress(address sdk.Address, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return nil, nil, err +} + +return ks.Sign(k.Name, msg, signMode) +} + +func (ks keystore) + +SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (*Record, error) { + if !ks.options.SupportedAlgosLedger.Contains(algo) { + return nil, errorsmod.Wrap(ErrUnsupportedSigningAlgo, fmt.Sprintf("signature algo %s is not defined in the keyring options", algo.Name())) +} + hdPath := hd.NewFundraiserParams(account, coinType, index) + +priv, _, err := ledger.NewPrivKeySecp256k1(*hdPath, hrp) + if err != nil { + return nil, errors.CombineErrors(ErrLedgerGenerateKey, err) +} + +return ks.writeLedgerKey(uid, priv.PubKey(), hdPath) +} + +func (ks keystore) + 
+writeLedgerKey(name string, pk types.PubKey, path *hd.BIP44Params) (*Record, error) { + k, err := NewLedgerRecord(name, pk, path) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +func (ks keystore) + +SaveMultisig(uid string, pubkey types.PubKey) (*Record, error) { + return ks.writeMultisigKey(uid, pubkey) +} + +func (ks keystore) + +SaveOfflineKey(uid string, pubkey types.PubKey) (*Record, error) { + return ks.writeOfflineKey(uid, pubkey) +} + +func (ks keystore) + +DeleteByAddress(address sdk.Address) + +error { + k, err := ks.KeyByAddress(address) + if err != nil { + return err +} + +err = ks.Delete(k.Name) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +Rename(oldName, newName string) + +error { + _, err := ks.Key(newName) + if err == nil { + return errorsmod.Wrap(ErrKeyAlreadyExists, fmt.Sprintf("rename failed, %s", newName)) +} + +armor, err := ks.ExportPrivKeyArmor(oldName, passPhrase) + if err != nil { + return err +} + if err := ks.Delete(oldName); err != nil { + return err +} + if err := ks.ImportPrivKey(newName, armor, passPhrase); err != nil { + return err +} + +return nil +} + +/ Delete deletes a key in the keyring. `uid` represents the key name, without +/ the `.info` suffix. 
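+/ Note: writeRecord stores two items per key (the `<uid>.info` record and a
+/ hex-address index entry), and Delete removes both of them, so that lookups
+/ by name and by address stay consistent.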
+func (ks keystore) + +Delete(uid string) + +error { + k, err := ks.Key(uid) + if err != nil { + return err +} + +addr, err := k.GetAddress() + if err != nil { + return err +} + +err = ks.db.Remove(addrHexKeyAsString(addr)) + if err != nil { + return err +} + +err = ks.db.Remove(infoKey(uid)) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +KeyByAddress(address sdk.Address) (*Record, error) { + ik, err := ks.db.Get(addrHexKeyAsString(address)) + if err != nil { + return nil, wrapKeyNotFound(err, fmt.Sprintf("key with address %s not found", address.String())) +} + if len(ik.Data) == 0 { + return nil, wrapKeyNotFound(err, fmt.Sprintf("key with address %s not found", address.String())) +} + +return ks.Key(string(ik.Data)) +} + +func wrapKeyNotFound(err error, msg string) + +error { + if err == keyring.ErrKeyNotFound { + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, msg) +} + +return err +} + +func (ks keystore) + +List() ([]*Record, error) { + return ks.MigrateAll() +} + +func (ks keystore) + +NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (*Record, string, error) { + if language != English { + return nil, "", ErrUnsupportedLanguage +} + if !ks.isSupportedSigningAlgo(algo) { + return nil, "", ErrUnsupportedSigningAlgo +} + + / Default number of words (24): This generates a mnemonic directly from the + / number of words by reading system entropy. 
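+ / (defaultEntropySize is 256 bits of entropy, which BIP39 maps to a
+ / 24-word mnemonic.)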
+ entropy, err := bip39.NewEntropy(defaultEntropySize) + if err != nil { + return nil, "", err +} + +mnemonic, err := bip39.NewMnemonic(entropy) + if err != nil { + return nil, "", err +} + if bip39Passphrase == "" { + bip39Passphrase = DefaultBIP39Passphrase +} + +k, err := ks.NewAccount(uid, mnemonic, bip39Passphrase, hdPath, algo) + if err != nil { + return nil, "", err +} + +return k, mnemonic, nil +} + +func (ks keystore) + +NewAccount(name, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error) { + if !ks.isSupportedSigningAlgo(algo) { + return nil, ErrUnsupportedSigningAlgo +} + + / create master key and derive first key for keyring + derivedPriv, err := algo.Derive()(mnemonic, bip39Passphrase, hdPath) + if err != nil { + return nil, err +} + privKey := algo.Generate()(derivedPriv) + + / check if the key already exists with the same address and return an error + / if found + address := sdk.AccAddress(privKey.PubKey().Address()) + if _, err := ks.KeyByAddress(address); err == nil { + return nil, ErrDuplicatedAddress +} + +return ks.writeLocalKey(name, privKey) +} + +func (ks keystore) + +isSupportedSigningAlgo(algo SignatureAlgo) + +bool { + return ks.options.SupportedAlgos.Contains(algo) +} + +func (ks keystore) + +Key(uid string) (*Record, error) { + k, err := ks.migrate(uid) + if err != nil { + return nil, err +} + +return k, nil +} + +/ SupportedAlgorithms returns the keystore Options' supported signing algorithm. +/ for the keyring and Ledger. +func (ks keystore) + +SupportedAlgorithms() (SigningAlgoList, SigningAlgoList) { + return ks.options.SupportedAlgos, ks.options.SupportedAlgosLedger +} + +/ SignWithLedger signs a binary message with the ledger device referenced by an Info object +/ and returns the signed bytes and the public key. It returns an error if the device could +/ not be queried or it returned an error. 
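+/ Only SIGN_MODE_TEXTUAL and SIGN_MODE_LEGACY_AMINO_JSON are supported for
+/ Ledger signing; any other sign mode is rejected with ErrInvalidSignMode,
+/ and the produced signature is verified against the device's public key
+/ before being returned.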
+func SignWithLedger(k *Record, msg []byte, signMode signing.SignMode) (sig []byte, pub types.PubKey, err error) { + ledgerInfo := k.GetLedger() + if ledgerInfo == nil { + return nil, nil, ErrNotLedgerObj +} + path := ledgerInfo.GetPath() + +priv, err := ledger.NewPrivKeySecp256k1Unsafe(*path) + if err != nil { + return +} + switch signMode { + case signing.SignMode_SIGN_MODE_TEXTUAL: + sig, err = priv.Sign(msg) + if err != nil { + return nil, nil, err +} + case signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON: + sig, err = priv.SignLedgerAminoJSON(msg) + if err != nil { + return nil, nil, err +} + +default: + return nil, nil, errorsmod.Wrap(ErrInvalidSignMode, fmt.Sprintf("%v", signMode)) +} + if !priv.PubKey().VerifySignature(msg, sig) { + return nil, nil, ErrLedgerInvalidSignature +} + +return sig, priv.PubKey(), nil +} + +func newOSBackendKeyringConfig(appName, dir string, buf io.Reader) + +keyring.Config { + return keyring.Config{ + ServiceName: appName, + FileDir: dir, + KeychainTrustApplication: true, + FilePasswordFunc: newRealPrompt(dir, buf), +} +} + +func newTestBackendKeyringConfig(appName, dir string) + +keyring.Config { + return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.FileBackend +}, + ServiceName: appName, + FileDir: filepath.Join(dir, keyringTestDirName), + FilePasswordFunc: func(_ string) (string, error) { + return "test", nil +}, +} +} + +func newKWalletBackendKeyringConfig(appName, _ string, _ io.Reader) + +keyring.Config { + return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.KWalletBackend +}, + ServiceName: "kdewallet", + KWalletAppID: appName, + KWalletFolder: "", +} +} + +func newPassBackendKeyringConfig(appName, _ string, _ io.Reader) + +keyring.Config { + prefix := fmt.Sprintf(passKeyringPrefix, appName) + +return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.PassBackend +}, + ServiceName: appName, + PassPrefix: prefix, +} +} + +func newFileBackendKeyringConfig(name, dir 
string, buf io.Reader) + +keyring.Config { + fileDir := filepath.Join(dir, keyringFileDirName) + +return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.FileBackend +}, + ServiceName: name, + FileDir: fileDir, + FilePasswordFunc: newRealPrompt(fileDir, buf), +} +} + +func newRealPrompt(dir string, buf io.Reader) + +func(string) (string, error) { + return func(prompt string) (string, error) { + keyhashStored := false + keyhashFilePath := filepath.Join(dir, "keyhash") + +var keyhash []byte + + _, err := os.Stat(keyhashFilePath) + switch { + case err == nil: + keyhash, err = os.ReadFile(keyhashFilePath) + if err != nil { + return "", errorsmod.Wrap(err, fmt.Sprintf("failed to read %s", keyhashFilePath)) +} + +keyhashStored = true + case os.IsNotExist(err): + keyhashStored = false + + default: + return "", errorsmod.Wrap(err, fmt.Sprintf("failed to open %s", keyhashFilePath)) +} + failureCounter := 0 + for { + failureCounter++ + if failureCounter > maxPassphraseEntryAttempts { + return "", ErrMaxPassPhraseAttempts +} + buf := bufio.NewReader(buf) + +pass, err := input.GetPassword(fmt.Sprintf("Enter keyring passphrase (attempt %d/%d):", failureCounter, maxPassphraseEntryAttempts), buf) + if err != nil { + / NOTE: LGTM.io reports a false positive alert that states we are printing the password, + / but we only log the error. + / + / lgtm [go/clear-text-logging] + fmt.Fprintln(os.Stderr, err) + +continue +} + if keyhashStored { + if err := bcrypt.CompareHashAndPassword(keyhash, []byte(pass)); err != nil { + fmt.Fprintln(os.Stderr, "incorrect passphrase") + +continue +} + +return pass, nil +} + +reEnteredPass, err := input.GetPassword("Re-enter keyring passphrase:", buf) + if err != nil { + / NOTE: LGTM.io reports a false positive alert that states we are printing the password, + / but we only log the error. 
+ / + / lgtm [go/clear-text-logging] + fmt.Fprintln(os.Stderr, err) + +continue +} + if pass != reEnteredPass { + fmt.Fprintln(os.Stderr, "passphrase do not match") + +continue +} + +passwordHash, err := bcrypt.GenerateFromPassword([]byte(pass), 2) + if err != nil { + fmt.Fprintln(os.Stderr, err) + +continue +} + if err := os.WriteFile(keyhashFilePath, passwordHash, 0o600); err != nil { + return "", err +} + +return pass, nil +} + +} +} + +func (ks keystore) + +writeLocalKey(name string, privKey types.PrivKey) (*Record, error) { + k, err := NewLocalRecord(name, privKey, privKey.PubKey()) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +/ writeRecord persists a keyring item in keystore if it does not exist there. +/ For each key record, we actually write 2 items: +/ - one with key `.info`, with Data = the serialized protobuf key +/ - another with key `.address`, with Data = the uid (i.e. the key name) +/ This is to be able to query keys both by name and by address. +func (ks keystore) + +writeRecord(k *Record) + +error { + addr, err := k.GetAddress() + if err != nil { + return err +} + key := infoKey(k.Name) + +exists, err := ks.existsInDb(addr, key) + if err != nil { + return err +} + if exists { + return errorsmod.Wrap(ErrKeyAlreadyExists, key) +} + +serializedRecord, err := ks.cdc.Marshal(k) + if err != nil { + return errors.CombineErrors(ErrUnableToSerialize, err) +} + item := keyring.Item{ + Key: key, + Data: serializedRecord, +} + if err := ks.SetItem(item); err != nil { + return err +} + +item = keyring.Item{ + Key: addrHexKeyAsString(addr), + Data: []byte(key), +} + if err := ks.SetItem(item); err != nil { + return err +} + +return nil +} + +/ existsInDb returns (true, nil) + if either addr or name exist is in keystore DB. +/ On the other hand, it returns (false, error) + if Get method returns error different from keyring.ErrKeyNotFound +/ In case of inconsistent keyring, it recovers it automatically. 
+func (ks keystore) + +existsInDb(addr sdk.Address, name string) (bool, error) { + _, errAddr := ks.db.Get(addrHexKeyAsString(addr)) + if errAddr != nil && !errors.Is(errAddr, keyring.ErrKeyNotFound) { + return false, errAddr +} + + _, errInfo := ks.db.Get(infoKey(name)) + if errInfo == nil { + return true, nil / uid lookup succeeds - info exists +} + +else if !errors.Is(errInfo, keyring.ErrKeyNotFound) { + return false, errInfo / received unexpected error - returns +} + + / looking for an issue, record with meta (getByAddress) + +exists, but record with public key itself does not + if errAddr == nil && errors.Is(errInfo, keyring.ErrKeyNotFound) { + fmt.Fprintf(os.Stderr, "address \"%s\" exists but pubkey itself does not\n", hex.EncodeToString(addr.Bytes())) + +fmt.Fprintln(os.Stderr, "recreating pubkey record") + err := ks.db.Remove(addrHexKeyAsString(addr)) + if err != nil { + return true, err +} + +return false, nil +} + + / both lookups failed, info does not exist + return false, nil +} + +func (ks keystore) + +writeOfflineKey(name string, pk types.PubKey) (*Record, error) { + k, err := NewOfflineRecord(name, pk) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +/ writeMultisigKey investigate where thisf function is called maybe remove it +func (ks keystore) + +writeMultisigKey(name string, pk types.PubKey) (*Record, error) { + k, err := NewMultiRecord(name, pk) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +func (ks keystore) + +MigrateAll() ([]*Record, error) { + keys, err := ks.db.Keys() + if err != nil { + return nil, err +} + if len(keys) == 0 { + return nil, nil +} + +sort.Strings(keys) + +var recs []*Record + for _, key := range keys { + / The keyring items only with `.info` consists the key info. 
+ if !strings.HasSuffix(key, infoSuffix) { + continue +} + +rec, err := ks.migrate(key) + if err != nil { + fmt.Printf("migrate err for key %s: %q\n", key, err) + +continue +} + +recs = append(recs, rec) +} + +return recs, nil +} + +/ migrate converts keyring.Item from amino to proto serialization format. +/ the `key` argument can be a key uid (e.g. "alice") + +or with the '.info' +/ suffix (e.g. "alice.info"). +/ +/ It operates as follows: +/ 1. retrieve any key +/ 2. try to decode it using protobuf +/ 3. if ok, then return the key, do nothing else +/ 4. if it fails, then try to decode it using amino +/ 5. convert from the amino struct to the protobuf struct +/ 6. write the proto-encoded key back to the keyring +func (ks keystore) + +migrate(key string) (*Record, error) { + if !strings.HasSuffix(key, infoSuffix) { + key = infoKey(key) +} + + / 1. get the key. + item, err := ks.db.Get(key) + if err != nil { + return nil, wrapKeyNotFound(err, key) +} + if len(item.Data) == 0 { + return nil, errorsmod.Wrap(sdkerrors.ErrKeyNotFound, key) +} + + / 2. Try to deserialize using proto + k, err := ks.protoUnmarshalRecord(item.Data) + / 3. If ok then return the key + if err == nil { + return k, nil +} + + / 4. Try to decode with amino + legacyInfo, err := unMarshalLegacyInfo(item.Data) + if err != nil { + return nil, errorsmod.Wrap(err, "unable to unmarshal item.Data") +} + + / 5. Convert and serialize info using proto + k, err = ks.convertFromLegacyInfo(legacyInfo) + if err != nil { + return nil, errorsmod.Wrap(err, "convertFromLegacyInfo") +} + +serializedRecord, err := ks.cdc.Marshal(k) + if err != nil { + return nil, errors.CombineErrors(ErrUnableToSerialize, err) +} + +item = keyring.Item{ + Key: key, + Data: serializedRecord, +} + + / 6. Overwrite the keyring entry with the new proto-encoded key. 
+ if err := ks.SetItem(item); err != nil { + return nil, errorsmod.Wrap(err, "unable to set keyring.Item") +} + +fmt.Printf("Successfully migrated key %s.\n", key) + +return k, nil +} + +func (ks keystore) + +protoUnmarshalRecord(bz []byte) (*Record, error) { + k := new(Record) + if err := ks.cdc.Unmarshal(bz, k); err != nil { + return nil, err +} + +return k, nil +} + +func (ks keystore) + +SetItem(item keyring.Item) + +error { + return ks.db.Set(item) +} + +func (ks keystore) + +convertFromLegacyInfo(info LegacyInfo) (*Record, error) { + if info == nil { + return nil, errorsmod.Wrap(ErrLegacyToRecord, "info is nil") +} + name := info.GetName() + pk := info.GetPubKey() + switch info.GetType() { + case TypeLocal: + priv, err := privKeyFromLegacyInfo(info) + if err != nil { + return nil, err +} + +return NewLocalRecord(name, priv, pk) + case TypeOffline: + return NewOfflineRecord(name, pk) + case TypeMulti: + return NewMultiRecord(name, pk) + case TypeLedger: + path, err := info.GetPath() + if err != nil { + return nil, err +} + +return NewLedgerRecord(name, pk, path) + +default: + return nil, ErrUnknownLegacyType +} +} + +func addrHexKeyAsString(address sdk.Address) + +string { + return fmt.Sprintf("%s.%s", hex.EncodeToString(address.Bytes()), addressSuffix) +} +``` The default implementation of `Keyring` comes from the third-party [`99designs/keyring`](https://github.com/99designs/keyring) library. A few notes on the `Keyring` methods: -* `Sign(uid string, msg []byte) ([]byte, types.PubKey, error)` strictly deals with the signature of the `msg` bytes. You must prepare and encode the transaction into a canonical `[]byte` form. Because protobuf is not deterministic, it has been decided in [ADR-020](/v0.50/build/architecture/adr-020-protobuf-transaction-encoding) that the canonical `payload` to sign is the `SignDoc` struct, deterministically encoded using [ADR-027](/v0.50/build/architecture/adr-027-deterministic-protobuf-serialization). 
Note that signature verification is not implemented in the Cosmos SDK by default, it is deferred to the [`anteHandler`](/v0.50/learn/advanced/baseapp#antehandler). +- `Sign(uid string, msg []byte) ([]byte, types.PubKey, error)` strictly deals with the signature of the `msg` bytes. You must prepare and encode the transaction into a canonical `[]byte` form. Because protobuf is not deterministic, it has been decided in [ADR-020](/docs/common/pages/adr-comprehensive#adr-020-protocol-buffer-transaction-encoding) that the canonical `payload` to sign is the `SignDoc` struct, deterministically encoded using [ADR-027](/docs/common/pages/adr-comprehensive#adr-027-deterministic-protobuf-serialization). Note that signature verification is not implemented in the Cosmos SDK by default, it is deferred to the [`anteHandler`](/docs/sdk/v0.50/learn/advanced/baseapp#antehandler). -proto/cosmos/tx/v1beta1/tx.proto +```protobuf +message SignDoc { + // body_bytes is protobuf serialization of a TxBody that matches the + // representation in TxRaw. + bytes body_bytes = 1; -``` -loading... -``` + // auth_info_bytes is a protobuf serialization of an AuthInfo that matches the + // representation in TxRaw. + bytes auth_info_bytes = 2; + + // chain_id is the unique identifier of the chain this transaction targets. + // It prevents signed transactions from being used on another chain by an + // attacker + string chain_id = 3; -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/tx/v1beta1/tx.proto#L50-L66) + // account_number is the account number of the account in state + uint64 account_number = 4; +} +``` -* `NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error)` creates a new account based on the [`bip44 path`](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) and persists it on disk. 
The `PrivKey` is **never stored unencrypted**, instead it is [encrypted with a passphrase](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/armor.go) before being persisted. In the context of this method, the key type and sequence number refer to the segment of the BIP44 derivation path (for example, `0`, `1`, `2`, ...) that is used to derive a private and a public key from the mnemonic. Using the same mnemonic and derivation path, the same `PrivKey`, `PubKey` and `Address` is generated. The following keys are supported by the keyring: +- `NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error)` creates a new account based on the [`bip44 path`](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) and persists it on disk. The `PrivKey` is **never stored unencrypted**, instead it is [encrypted with a passphrase](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/armor.go) before being persisted. In the context of this method, the key type and sequence number refer to the segment of the BIP44 derivation path (for example, `0`, `1`, `2`, ...) that is used to derive a private and a public key from the mnemonic. Using the same mnemonic and derivation path, the same `PrivKey`, `PubKey` and `Address` is generated. The following keys are supported by the keyring: -* `secp256k1` +- `secp256k1` -* `ed25519` +- `ed25519` -* `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function. +- `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. 
You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function.

-### Create New Key Type[​](#create-new-key-type "Direct link to Create New Key Type")
+### Create New Key Type

To create a new key type for use in the keyring, the `keyring.SignatureAlgo` interface must be fulfilled.

-crypto/keyring/signing\_algorithms.go
+```go expandable
+package keyring

-```
-loading...
-```
+import (

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keyring/signing_algorithms.go#L10-L15)

+ "strings"
+ "github.com/cockroachdb/errors"
+ "github.com/cosmos/cosmos-sdk/crypto/hd"
+)

-The interface consists in three methods where `Name()` returns the name of the algorithm as a `hd.PubKeyType` and `Derive()` and `Generate()` must return the following functions respectively:

+/ SignatureAlgo defines the interface for a keyring supported algorithm.
+type SignatureAlgo interface {
+ Name()

-crypto/hd/algo.go

+hd.PubKeyType
+ Derive()
+hd.DeriveFn
+ Generate()
+
+hd.GenerateFn
+}
+
+/ NewSigningAlgoFromString creates a supported SignatureAlgo.
+func NewSigningAlgoFromString(str string, algoList SigningAlgoList) (SignatureAlgo, error) {
+ for _, algo := range algoList {
+ if str == string(algo.Name()) {
+ return algo, nil
+}
+
+}
+
+return nil, errors.Wrap(ErrUnsupportedSigningAlgo, str)
+}
+
+/ SigningAlgoList is a slice of signature algorithms
+type SigningAlgoList []SignatureAlgo
+
+/ Contains returns true if the SigningAlgoList contains the given SignatureAlgo.
+func (sal SigningAlgoList)
+
+Contains(algo SignatureAlgo)
+
+bool {
+ for _, cAlgo := range sal {
+ if cAlgo.Name() == algo.Name() {
+ return true
+}
+
+}
+
+return false
+}
+
+/ String returns a comma separated string of the signature algorithm names in the list.
+func (sal SigningAlgoList)
+
+String()
+
+string {
+ names := make([]string, len(sal))
+ for i := range sal {
+ names[i] = string(sal[i].Name())
+}
+
+return strings.Join(names, ",")
+}
```

-loading...
-```

-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/hd/algo.go#L28-L31)

+The interface consists of three methods where `Name()` returns the name of the algorithm as a `hd.PubKeyType` and `Derive()` and `Generate()` must return the following functions respectively:
+
+```go expandable
+package hd
+
+import (
+
+ "github.com/cosmos/go-bip39"
+ "github.com/cosmos/cosmos-sdk/crypto/keys/secp256k1"
+ "github.com/cosmos/cosmos-sdk/crypto/types"
+)
+
+/ PubKeyType defines an algorithm to derive key-pairs which can be used for cryptographic signing.
+type PubKeyType string
+
+const (
+ / MultiType implies that a pubkey is a multisignature
+ MultiType = PubKeyType("multi")
+ / Secp256k1Type uses the Bitcoin secp256k1 ECDSA parameters.
+ Secp256k1Type = PubKeyType("secp256k1")
+ / Ed25519Type represents the Ed25519Type signature system.
+ / It is currently not supported for end-user keys (wallets/ledgers).
+ Ed25519Type = PubKeyType("ed25519")
+ / Sr25519Type represents the Sr25519Type signature system.
+ Sr25519Type = PubKeyType("sr25519")
+)
+
+/ Secp256k1 uses the Bitcoin secp256k1 ECDSA parameters.
+var Secp256k1 = secp256k1Algo{
+}
+
+type (
+ DeriveFn func(mnemonic, bip39Passphrase, hdPath string) ([]byte, error)
+
+GenerateFn func(bz []byte)
+
+types.PrivKey
+)
+
+type WalletGenerator interface {
+ Derive(mnemonic, bip39Passphrase, hdPath string) ([]byte, error)
+
+Generate(bz []byte)
+
+types.PrivKey
+}
+
+type secp256k1Algo struct{
+}
+
+func (s secp256k1Algo)
+
+Name()
+
+PubKeyType {
+ return Secp256k1Type
+}
+
+/ Derive derives and returns the secp256k1 private key for the given seed and HD path.
+func (s secp256k1Algo) + +Derive() + +DeriveFn { + return func(mnemonic, bip39Passphrase, hdPath string) ([]byte, error) { + seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase) + if err != nil { + return nil, err +} + +masterPriv, ch := ComputeMastersFromSeed(seed) + if len(hdPath) == 0 { + return masterPriv[:], nil +} + +derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath) + +return derivedKey, err +} +} + +/ Generate generates a secp256k1 private key from the given bytes. +func (s secp256k1Algo) + +Generate() + +GenerateFn { + return func(bz []byte) + +types.PrivKey { + bzArr := make([]byte, secp256k1.PrivKeySize) + +copy(bzArr, bz) + +return &secp256k1.PrivKey{ + Key: bzArr +} + +} +} +``` Once the `keyring.SignatureAlgo` has been implemented it must be added to the [list of supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keyring/keyring.go#L217) of the keyring. -For simplicity the implementation of a new key type should be done inside the `crypto/hd` package. There is an example of a working `secp256k1` implementation in [algo.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/hd/algo.go#L38). +For simplicity the implementation of a new key type should be done inside the `crypto/hd` package. +There is an example of a working `secp256k1` implementation in [algo.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/hd/algo.go#L38). -#### Implementing secp256r1 algo[​](#implementing-secp256r1-algo "Direct link to Implementing secp256r1 algo") +#### Implementing secp256r1 algo Here is an example of how secp256r1 could be implemented. First a new function to create a private key from a secret number is needed in the secp256r1 package. This function could look like this: -``` -// cosmos-sdk/crypto/keys/secp256r1/privkey.go// NewPrivKeyFromSecret creates a private key derived for the secret number// represented in big-endian. 
The `secret` must be a valid ECDSA field element.func NewPrivKeyFromSecret(secret []byte) (*PrivKey, error) { var d = new(big.Int).SetBytes(secret) if d.Cmp(secp256r1.Params().N) >= 1 { return nil, errorsmod.Wrap(errors.ErrInvalidRequest, "secret not in the curve base field") } sk := new(ecdsa.PrivKey) return &PrivKey{&ecdsaSK{*sk}}, nil} +```go expandable +/ cosmos-sdk/crypto/keys/secp256r1/privkey.go + +/ NewPrivKeyFromSecret creates a private key derived for the secret number +/ represented in big-endian. The `secret` must be a valid ECDSA field element. +func NewPrivKeyFromSecret(secret []byte) (*PrivKey, error) { + var d = new(big.Int).SetBytes(secret) + if d.Cmp(secp256r1.Params().N) >= 1 { + return nil, errorsmod.Wrap(errors.ErrInvalidRequest, "secret not in the curve base field") +} + sk := new(ecdsa.PrivKey) + +return &PrivKey{&ecdsaSK{*sk +}}, nil +} ``` After that `secp256r1Algo` can be implemented. -``` -// cosmos-sdk/crypto/hd/secp256r1Algo.gopackage hdimport ( "github.com/cosmos/go-bip39" "github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1" "github.com/cosmos/cosmos-sdk/crypto/types")// Secp256r1Type uses the secp256r1 ECDSA parameters.const Secp256r1Type = PubKeyType("secp256r1")var Secp256r1 = secp256r1Algo{}type secp256r1Algo struct{}func (s secp256r1Algo) Name() PubKeyType { return Secp256r1Type}// Derive derives and returns the secp256r1 private key for the given seed and HD path.func (s secp256r1Algo) Derive() DeriveFn { return func(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) { seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase) if err != nil { return nil, err } masterPriv, ch := ComputeMastersFromSeed(seed) if len(hdPath) == 0 { return masterPriv[:], nil } derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath) return derivedKey, err }}// Generate generates a secp256r1 private key from the given bytes.func (s secp256r1Algo) Generate() GenerateFn { return func(bz []byte) types.PrivKey { key, err 
:= secp256r1.NewPrivKeyFromSecret(bz) if err != nil { panic(err) } return key }} +```go expandable +/ cosmos-sdk/crypto/hd/secp256r1Algo.go + +package hd + +import ( + + "github.com/cosmos/go-bip39" + "github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1" + "github.com/cosmos/cosmos-sdk/crypto/types" +) + +/ Secp256r1Type uses the secp256r1 ECDSA parameters. +const Secp256r1Type = PubKeyType("secp256r1") + +var Secp256r1 = secp256r1Algo{ +} + +type secp256r1Algo struct{ +} + +func (s secp256r1Algo) + +Name() + +PubKeyType { + return Secp256r1Type +} + +/ Derive derives and returns the secp256r1 private key for the given seed and HD path. +func (s secp256r1Algo) + +Derive() + +DeriveFn { + return func(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) { + seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase) + if err != nil { + return nil, err +} + +masterPriv, ch := ComputeMastersFromSeed(seed) + if len(hdPath) == 0 { + return masterPriv[:], nil +} + +derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath) + +return derivedKey, err +} +} + +/ Generate generates a secp256r1 private key from the given bytes. +func (s secp256r1Algo) + +Generate() + +GenerateFn { + return func(bz []byte) + +types.PrivKey { + key, err := secp256r1.NewPrivKeyFromSecret(bz) + if err != nil { + panic(err) +} + +return key +} +} ``` Finally, the algo must be added to the list of [supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/crypto/keyring/keyring.go#L217) by the keyring. -``` -// cosmos-sdk/crypto/keyring/keyring.gofunc newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) keystore { // Default options for keybase, these can be overwritten using the // Option function options := Options{ SupportedAlgos: SigningAlgoList{hd.Secp256k1, hd.Secp256r1}, // added here SupportedAlgosLedger: SigningAlgoList{hd.Secp256k1}, }... 
+```go
+// cosmos-sdk/crypto/keyring/keyring.go
+
+func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) keystore {
+	// Default options for keybase, these can be overwritten using the
+	// Option function
+	options := Options{
+		SupportedAlgos:       SigningAlgoList{hd.Secp256k1, hd.Secp256r1}, // added here
+		SupportedAlgosLedger: SigningAlgoList{hd.Secp256k1},
+	}
+...
```

Hereafter to create new keys using your algo, you must specify it with the flag `--algo` :

diff --git a/docs/sdk/v0.50/learn/beginner/app-anatomy.mdx b/docs/sdk/v0.50/learn/beginner/app-anatomy.mdx
index ab7b624f..33b10713 100644
--- a/docs/sdk/v0.50/learn/beginner/app-anatomy.mdx
+++ b/docs/sdk/v0.50/learn/beginner/app-anatomy.mdx
@@ -1,232 +1,4061 @@
 ---
-title: "Anatomy of a Cosmos SDK Application"
-description: "Version: v0.50"
+title: Anatomy of a Cosmos SDK Application
 ---
-
- This document describes the core parts of a Cosmos SDK application, represented throughout the document as a placeholder application named `app`.
-
-
-## Node Client[​](#node-client "Direct link to Node Client")
-
-The Daemon, or [Full-Node Client](/v0.50/learn/advanced/node), is the core process of a Cosmos SDK-based blockchain. Participants in the network run this process to initialize their state-machine, connect with other full-nodes, and update their state-machine as new blocks come in.
-
-```
- ^ +-------------------------------+ ^ | | | | | | State-machine = Application | | | | | | Built with Cosmos SDK | | ^ + | | | +----------- | ABCI | ----------+ v | | + v | ^ | | | |Blockchain Node | | Consensus | | | | | | | +-------------------------------+ | CometBFT | | | | | | Networking | | | | | | v +-------------------------------+ v
+## Synopsis
+
+This document describes the core parts of a Cosmos SDK application, represented throughout the document as a placeholder application named `app`.
+ +## Node Client + +The Daemon, or [Full-Node Client](/docs/sdk/v0.50/learn/advanced/node), is the core process of a Cosmos SDK-based blockchain. Participants in the network run this process to initialize their state-machine, connect with other full-nodes, and update their state-machine as new blocks come in. + +```text expandable + ^ +-------------------------------+ ^ + | | | | + | | State-machine = Application | | + | | | | Built with Cosmos SDK + | | ^ + | | + | +----------- | ABCI | ----------+ v + | | + v | ^ + | | | | +Blockchain Node | | Consensus | | + | | | | + | +-------------------------------+ | CometBFT + | | | | + | | Networking | | + | | | | + v +-------------------------------+ v ``` -The blockchain full-node presents itself as a binary, generally suffixed by `-d` for "daemon" (e.g. `appd` for `app` or `gaiad` for `gaia`). This binary is built by running a simple [`main.go`](/v0.50/learn/advanced/node#main-function) function placed in `./cmd/appd/`. This operation usually happens through the [Makefile](#dependencies-and-makefile). +The blockchain full-node presents itself as a binary, generally suffixed by `-d` for "daemon" (e.g. `appd` for `app` or `gaiad` for `gaia`). This binary is built by running a simple [`main.go`](/docs/sdk/v0.50/learn/advanced/node#main-function) function placed in `./cmd/appd/`. This operation usually happens through the [Makefile](#dependencies-and-makefile). -Once the main binary is built, the node can be started by running the [`start` command](/v0.50/learn/advanced/node#start-command). This command function primarily does three things: +Once the main binary is built, the node can be started by running the [`start` command](/docs/sdk/v0.50/learn/advanced/node#start-command). This command function primarily does three things: 1. Create an instance of the state-machine defined in [`app.go`](#core-application-file). 2. 
Initialize the state-machine with the latest known state, extracted from the `db` stored in the `~/.app/data` folder. At this point, the state-machine is at height `appBlockHeight`.
3. Create and start a new CometBFT instance. Among other things, the node performs a handshake with its peers. It gets the latest `blockHeight` from them and replays blocks to sync to this height if it is greater than the local `appBlockHeight`. If `appBlockHeight` is `0`, the node starts from genesis and CometBFT sends an `InitChain` message via the ABCI to the `app`, which triggers the [`InitChainer`](#initchainer).

- When starting a CometBFT instance, the genesis file is the `0` height and the state within the genesis file is committed at block height `1`. When querying the state of the node, querying block height 0 will return an error.
+ When starting a CometBFT instance, the genesis file is the `0` height and the
+ state within the genesis file is committed at block height `1`. When querying
+ the state of the node, querying block height 0 will return an error.

-## Core Application File[​](#core-application-file "Direct link to Core Application File")
+## Core Application File

 In general, the core of the state-machine is defined in a file called `app.go`. This file mainly contains the **type definition of the application** and functions to **create and initialize it**.

-### Type Definition of the Application[​](#type-definition-of-the-application "Direct link to Type Definition of the Application")
+### Type Definition of the Application

 The first thing defined in `app.go` is the `type` of the application. It is generally comprised of the following parts:

-* **A reference to [`baseapp`](/v0.50/learn/advanced/baseapp).** The custom application defined in `app.go` is an extension of `baseapp`. When a transaction is relayed by CometBFT to the application, `app` uses `baseapp`'s methods to route them to the appropriate module.
`baseapp` implements most of the core logic for the application, including all the [ABCI methods](https://docs.cometbft.com/v0.37/spec/abci/) and the [routing logic](/v0.50/learn/advanced/baseapp#routing). -* **A list of store keys**. The [store](/v0.50/learn/advanced/store), which contains the entire state, is implemented as a [`multistore`](/v0.50/learn/advanced/store#multistore) (i.e. a store of stores) in the Cosmos SDK. Each module uses one or multiple stores in the multistore to persist their part of the state. These stores can be accessed with specific keys that are declared in the `app` type. These keys, along with the `keepers`, are at the heart of the [object-capabilities model](/v0.50/learn/advanced/ocap) of the Cosmos SDK. -* **A list of module's `keeper`s.** Each module defines an abstraction called [`keeper`](/v0.50/build/building-modules/keeper), which handles reads and writes for this module's store(s). The `keeper`'s methods of one module can be called from other modules (if authorized), which is why they are declared in the application's type and exported as interfaces to other modules so that the latter can only access the authorized functions. -* **A reference to an [`appCodec`](/v0.50/learn/advanced/encoding).** The application's `appCodec` is used to serialize and deserialize data structures in order to store them, as stores can only persist `[]bytes`. The default codec is [Protocol Buffers](/v0.50/learn/advanced/encoding). -* **A reference to a [`legacyAmino`](/v0.50/learn/advanced/encoding) codec.** Some parts of the Cosmos SDK have not been migrated to use the `appCodec` above, and are still hardcoded to use Amino. Other parts explicitly use Amino for backwards compatibility. For these reasons, the application still holds a reference to the legacy Amino codec. Please note that the Amino codec will be removed from the SDK in the upcoming releases. 
-* **A reference to a [module manager](/v0.50/build/building-modules/module-manager#manager)** and a [basic module manager](/v0.50/build/building-modules/module-manager#basicmanager). The module manager is an object that contains a list of the application's modules. It facilitates operations related to these modules, like registering their [`Msg` service](/v0.50/learn/advanced/baseapp#msg-services) and [gRPC `Query` service](/v0.50/learn/advanced/baseapp#grpc-query-services), or setting the order of execution between modules for various functions like [`InitChainer`](#initchainer), [`PreBlocker`](#preblocker) and [`BeginBlocker` and `EndBlocker`](#beginblocker-and-endblocker). +- **A reference to [`baseapp`](/docs/sdk/v0.50/learn/advanced/baseapp).** The custom application defined in `app.go` is an extension of `baseapp`. When a transaction is relayed by CometBFT to the application, `app` uses `baseapp`'s methods to route them to the appropriate module. `baseapp` implements most of the core logic for the application, including all the [ABCI methods](https://docs.cometbft.com/v0.37/spec/abci/) and the [routing logic](/docs/sdk/v0.50/learn/advanced/baseapp#routing). +- **A list of store keys**. The [store](/docs/sdk/v0.50/learn/advanced/store), which contains the entire state, is implemented as a [`multistore`](/docs/sdk/v0.50/learn/advanced/store#multistore) (i.e. a store of stores) in the Cosmos SDK. Each module uses one or multiple stores in the multistore to persist their part of the state. These stores can be accessed with specific keys that are declared in the `app` type. These keys, along with the `keepers`, are at the heart of the [object-capabilities model](/docs/sdk/v0.50/learn/advanced/ocap) of the Cosmos SDK. +- **A list of module's `keeper`s.** Each module defines an abstraction called [`keeper`](/docs/sdk/v0.50/documentation/module-system/keeper), which handles reads and writes for this module's store(s). 
The `keeper`'s methods of one module can be called from other modules (if authorized), which is why they are declared in the application's type and exported as interfaces to other modules so that the latter can only access the authorized functions. +- **A reference to an [`appCodec`](/docs/sdk/v0.50/learn/advanced/encoding).** The application's `appCodec` is used to serialize and deserialize data structures in order to store them, as stores can only persist `[]bytes`. The default codec is [Protocol Buffers](/docs/sdk/v0.50/learn/advanced/encoding). +- **A reference to a [`legacyAmino`](/docs/sdk/v0.50/learn/advanced/encoding) codec.** Some parts of the Cosmos SDK have not been migrated to use the `appCodec` above, and are still hardcoded to use Amino. Other parts explicitly use Amino for backwards compatibility. For these reasons, the application still holds a reference to the legacy Amino codec. Please note that the Amino codec will be removed from the SDK in the upcoming releases. +- **A reference to a [module manager](/docs/sdk/v0.50/documentation/module-system/module-manager#manager)** and a [basic module manager](/docs/sdk/v0.50/documentation/module-system/module-manager#basicmanager). The module manager is an object that contains a list of the application's modules. It facilitates operations related to these modules, like registering their [`Msg` service](/docs/sdk/v0.50/learn/advanced/baseapp#msg-services) and [gRPC `Query` service](/docs/sdk/v0.50/learn/advanced/baseapp#grpc-query-services), or setting the order of execution between modules for various functions like [`InitChainer`](#initchainer), [`PreBlocker`](#preblocker) and [`BeginBlocker` and `EndBlocker`](#beginblocker-and-endblocker). 
See an example of application type definition from `simapp`, the Cosmos SDK's own app used for demo and testing purposes: -simapp/app.go +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + 
servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + 
govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. 
+type stdAccAddressCodec struct{}
+
+func (g stdAccAddressCodec) StringToBytes(text string) ([]byte, error) {
+	if text == "" {
+		return nil, nil
+	}
+	return sdk.AccAddressFromBech32(text)
+}
+
+func (g stdAccAddressCodec) BytesToString(bz []byte) (string, error) {
+	if bz == nil {
+		return "", nil
+	}
+	return sdk.AccAddress(bz).String(), nil
+}
+
+// stdValAddressCodec is a temporary address codec that we will use until we
+// can populate it with the correct bech32 prefixes without depending on the global.
+type stdValAddressCodec struct{}
+
+func (g stdValAddressCodec) StringToBytes(text string) ([]byte, error) {
+	return sdk.ValAddressFromBech32(text)
+}
+
+func (g stdValAddressCodec) BytesToString(bz []byte) (string, error) {
+	return sdk.ValAddress(bz).String(), nil
+}
+
+// SimApp extends an ABCI application, but with most of its parameters exported.
+// They are exported for convenience in creating helper functions, as object
+// capabilities aren't needed for testing.
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + + / At startup, after all modules have been registered, check that all prot + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} -``` -loading... 
-``` +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app.go#L173-L212) +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) -### Constructor Function[​](#constructor-function "Direct link to Constructor Function") +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} -Also defined in `app.go` is the constructor function, which constructs a new application of the type defined in the preceding section. The function must fulfill the `AppCreator` signature in order to be used in the [`start` command](/v0.50/learn/advanced/node#start-command) of the application's daemon command. +/ LoadHeight loads a particular height +func (app *SimApp) -server/types/app.go +LoadHeight(height int64) -``` -loading... -``` +error { + return app.LoadVersion(height) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/server/types/app.go#L66-L68) +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. 
+func (app *SimApp) -Here are the main actions performed by this function: +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} -* Instantiate a new [`codec`](/v0.50/learn/advanced/encoding) and initialize the `codec` of each of the application's modules using the [basic manager](/v0.50/build/building-modules/module-manager#basicmanager). +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) -* Instantiate a new application with a reference to a `baseapp` instance, a codec, and all the appropriate store keys. +AppCodec() -* Instantiate all the [`keeper`](#keeper) objects defined in the application's `type` using the `NewKeeper` function of each of the application's modules. Note that keepers must be instantiated in the correct order, as the `NewKeeper` of one module might require a reference to another module's `keeper`. +codec.Codec { + return app.appCodec +} -* Instantiate the application's [module manager](/v0.50/build/building-modules/module-manager#manager) with the [`AppModule`](#application-module-interface) object of each of the application's modules. +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) -* With the module manager, initialize the application's [`Msg` services](/v0.50/learn/advanced/baseapp#msg-services), [gRPC `Query` services](/v0.50/learn/advanced/baseapp#grpc-query-services), [legacy `Msg` routes](/v0.50/learn/advanced/baseapp#routing), and [legacy query routes](/v0.50/learn/advanced/baseapp#query-routing). When a transaction is relayed to the application by CometBFT via the ABCI, it is routed to the appropriate module's [`Msg` service](#msg-services) using the routes defined here. 
Likewise, when a gRPC query request is received by the application, it is routed to the appropriate module's [`gRPC query service`](#grpc-query-services) using the gRPC routes defined here. The Cosmos SDK still supports legacy `Msg`s and legacy CometBFT queries, which are routed using the legacy `Msg` routes and the legacy query routes, respectively. +InterfaceRegistry() -* With the module manager, register the [application's modules' invariants](/v0.50/build/building-modules/invariants). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](/v0.50/build/building-modules/invariants#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different than the predicted one, special logic defined in the invariant registry is triggered (usually the chain is halted). This is useful to make sure that no critical bug goes unnoticed, producing long-lasting effects that are hard to fix. +types.InterfaceRegistry { + return app.interfaceRegistry +} -* With the module manager, set the order of execution between the `InitGenesis`, `PreBlocker`, `BeginBlocker`, and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions. +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) -* Set the remaining application parameters: +TxConfig() - * [`InitChainer`](#initchainer): used to initialize the application when it is first started. - * [`PreBlocker`](#preblocker): called before BeginBlock. - * [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endblocker): called at the beginning and at the end of every block. - * [`anteHandler`](/v0.50/learn/advanced/baseapp#antehandler): used to handle fees and signature verification. +client.TxConfig { + return app.txConfig +} -* Mount the stores. 
+/ AutoCliOpts returns the autocli options for the app. +func (app *SimApp) -* Return the application. +AutoCliOpts() -Note that the constructor function only creates an instance of the app, while the actual state is either carried over from the `~/.app/data` folder if the node is restarted, or generated from the genesis file if the node is started for the first time. +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} -See an example of application constructor from `simapp`: +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) -simapp/app.go +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} +/ RegisterTendermintService implements the Application.RegisterTendermintService method. 
+func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} ``` -loading... 
+ +### Constructor Function + +Also defined in `app.go` is the constructor function, which constructs a new application of the type defined in the preceding section. The function must fulfill the `AppCreator` signature in order to be used in the [`start` command](/docs/sdk/v0.50/learn/advanced/node#start-command) of the application's daemon command. + +```go expandable +package types + +import ( + + "encoding/json" + "io" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +interface{ +} + +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. + Application interface { + ABCI + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServer registers gRPC services directly with the gRPC + / server. 
+ RegisterGRPCServer(grpc.Server) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). + RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for CometBFT queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context, config.Config) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +storetypes.CommitMultiStore + + / Return the snapshot manager + SnapshotManager() *snapshots.Manager + + / Close is called in start cmd to gracefully cleanup resources. + / Must be safe to be called multiple times. + Close() + +error +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. 
+ AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app.go#L223-L575) +Here are the main actions performed by this function: + +- Instantiate a new [`codec`](/docs/sdk/v0.50/learn/advanced/encoding) and initialize the `codec` of each of the application's modules using the [basic manager](/docs/sdk/v0.50/documentation/module-system/module-manager#basicmanager). +- Instantiate a new application with a reference to a `baseapp` instance, a codec, and all the appropriate store keys. +- Instantiate all the [`keeper`](#keeper) objects defined in the application's `type` using the `NewKeeper` function of each of the application's modules. Note that keepers must be instantiated in the correct order, as the `NewKeeper` of one module might require a reference to another module's `keeper`. +- Instantiate the application's [module manager](/docs/sdk/v0.50/documentation/module-system/module-manager#manager) with the [`AppModule`](#application-module-interface) object of each of the application's modules. +- With the module manager, initialize the application's [`Msg` services](/docs/sdk/v0.50/learn/advanced/baseapp#msg-services), [gRPC `Query` services](/docs/sdk/v0.50/learn/advanced/baseapp#grpc-query-services), [legacy `Msg` routes](/docs/sdk/v0.50/learn/advanced/baseapp#routing), and [legacy query routes](/docs/sdk/v0.50/learn/advanced/baseapp#query-routing). When a transaction is relayed to the application by CometBFT via the ABCI, it is routed to the appropriate module's [`Msg` service](#msg-services) using the routes defined here. 
Likewise, when a gRPC query request is received by the application, it is routed to the appropriate module's [`gRPC query service`](#grpc-query-services) using the gRPC routes defined here. The Cosmos SDK still supports legacy `Msg`s and legacy CometBFT queries, which are routed using the legacy `Msg` routes and the legacy query routes, respectively. +- With the module manager, register the [application's modules' invariants](/docs/sdk/v0.50/documentation/module-system/invariants). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](/docs/sdk/v0.50/documentation/module-system/invariants#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different than the predicted one, special logic defined in the invariant registry is triggered (usually the chain is halted). This is useful to make sure that no critical bug goes unnoticed, producing long-lasting effects that are hard to fix. +- With the module manager, set the order of execution between the `InitGenesis`, `PreBlocker`, `BeginBlocker`, and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions. +- Set the remaining application parameters: + - [`InitChainer`](#initchainer): used to initialize the application when it is first started. + - [`PreBlocker`](#preblocker): called before BeginBlock. + - [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endblocker): called at the beginning and at the end of every block. + - [`anteHandler`](/docs/sdk/v0.50/learn/advanced/baseapp#antehandler): used to handle fees and signature verification. +- Mount the stores. +- Return the application. 
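The steps above all happen inside a single function that the `start` command can call lazily, which is what the `AppCreator` signature shown earlier captures. As a rough sketch of that contract — note that `Logger`, `DB`, `AppOptions`, and `Application` below are simplified stand-ins for the real SDK interfaces, and `newSimApp`/`mapOpts` are hypothetical names, not SDK APIs:

```go
package main

import (
	"fmt"
	"io"
)

// Simplified stand-ins for the SDK types referenced by the AppCreator
// signature (log.Logger, dbm.DB, servertypes.AppOptions, servertypes.Application).
type Logger interface{ Info(msg string) }
type DB interface{ Get(key []byte) ([]byte, error) }

// AppOptions mirrors the servertypes.AppOptions contract: an opaque
// key/value lookup, typically backed by Viper in the real server package.
type AppOptions interface{ Get(string) interface{} }

// Application stands in for the servertypes.Application interface.
type Application interface{ Name() string }

// AppCreator mirrors the shape the start command expects: given a logger,
// a database handle, a trace writer, and options, build the app.
type AppCreator func(Logger, DB, io.Writer, AppOptions) Application

// simApp is a toy application standing in for SimApp.
type simApp struct{ name string }

func (a *simApp) Name() string { return a.name }

// newSimApp satisfies AppCreator: the constructor receives everything it
// needs up front and returns a fully wired Application. A real constructor
// would build the codec, baseapp, keepers, and module manager here.
func newSimApp(logger Logger, db DB, traceStore io.Writer, opts AppOptions) Application {
	// Real constructors read runtime flags from opts at this point,
	// e.g. the crisis module's invariant-check period.
	_ = opts.Get("inv-check-period")
	return &simApp{name: "SimApp"}
}

// mapOpts is a trivial AppOptions implementation for the example.
type mapOpts map[string]interface{}

func (m mapOpts) Get(k string) interface{} { return m[k] }

func main() {
	// The daemon stores the constructor as a value and only invokes it
	// when the node actually starts.
	var creator AppCreator = newSimApp
	app := creator(nil, nil, nil, mapOpts{"inv-check-period": uint(5)})
	fmt.Println(app.Name()) // prints "SimApp"
}
```

The key design point is that the constructor is a plain function value: the CLI layer can register it without constructing the app, so state is only loaded (from `~/.app/data` or the genesis file) when the node actually starts.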
+ +Note that the constructor function only creates an instance of the app, while the actual state is either carried over from the `~/.app/data` folder if the node is restarted, or generated from the genesis file if the node is started for the first time. + +See an example of application constructor from `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices 
"github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + 
"github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes 
without depending on the global. +type stdAccAddressCodec struct{ +} + +func (g stdAccAddressCodec) + +StringToBytes(text string) ([]byte, error) { + if text == "" { + return nil, nil +} + +return sdk.AccAddressFromBech32(text) +} + +func (g stdAccAddressCodec) + +BytesToString(bz []byte) (string, error) { + if bz == nil { + return "", nil +} + +return sdk.AccAddress(bz).String(), nil +} + +/ stdValAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. +type stdValAddressCodec struct{ +} + +func (g stdValAddressCodec) + +StringToBytes(text string) ([]byte, error) { + return sdk.ValAddressFromBech32(text) +} + +func (g stdValAddressCodec) + +BytesToString(bz []byte) (string, error) { + return sdk.ValAddress(bz).String(), nil +} + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + + / At startup, after all modules have been registered, check that all prot + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application 
updates every end block
+func (app *SimApp)
+
+EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) {
+ return app.ModuleManager.EndBlock(ctx)
+}
+
+func (a *SimApp)
+
+Configurator()
+
+module.Configurator {
+ return a.configurator
+}
+
+/ InitChainer application update at chain initialization
+func (app *SimApp)
+
+InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) {
+ var genesisState GenesisState
+ if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil {
+ panic(err)
+}
+
+app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap())
+
+return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState)
+}
+
+/ LoadHeight loads a particular height
+func (app *SimApp)
+
+LoadHeight(height int64)
+error {
+ return app.LoadVersion(height)
+}
+
+/ LegacyAmino returns SimApp's amino codec.
+/
+/ NOTE: This is solely to be used for testing purposes as it may be desirable
+/ for modules to register their own custom testing types.
+func (app *SimApp)
+
+LegacyAmino() *codec.LegacyAmino {
+ return app.legacyAmino
+}
+
+/ AppCodec returns SimApp's app codec.
+/
+/ NOTE: This is solely to be used for testing purposes as it may be desirable
+/ for modules to register their own custom testing types.
+func (app *SimApp)
+
+AppCodec()
+
+codec.Codec {
+ return app.appCodec
+}
+
+/ InterfaceRegistry returns SimApp's InterfaceRegistry
+func (app *SimApp)
+
+InterfaceRegistry()
+
+types.InterfaceRegistry {
+ return app.interfaceRegistry
+}
+
+/ TxConfig returns SimApp's TxConfig
+func (app *SimApp)
+
+TxConfig()
+
+client.TxConfig {
+ return app.txConfig
+}
+
+/ AutoCliOpts returns the autocli options for the app.
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses()
+
+map[string]bool {
+ modAccAddrs := make(map[string]bool)
+ for acc := range GetMaccPerms() {
+ modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
+}
+
+ / allow the following addresses to receive funds
+ delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())
+
+return modAccAddrs
+}
+
+/ initParamsKeeper init params keeper and its subspaces
+func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey)
+
+paramskeeper.Keeper {
+ paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey)
+
+paramsKeeper.Subspace(authtypes.ModuleName)
+
+paramsKeeper.Subspace(banktypes.ModuleName)
+
+paramsKeeper.Subspace(stakingtypes.ModuleName)
+
+paramsKeeper.Subspace(minttypes.ModuleName)
+
+paramsKeeper.Subspace(distrtypes.ModuleName)
+
+paramsKeeper.Subspace(slashingtypes.ModuleName)
+
+paramsKeeper.Subspace(govtypes.ModuleName)
+
+paramsKeeper.Subspace(crisistypes.ModuleName)
+
+return paramsKeeper
+}
+```
+
+### InitChainer
+
+The `InitChainer` is a function that initializes the state of the application from a genesis file (i.e. token balances of genesis accounts). It is called when the application receives the `InitChain` message from the CometBFT engine, which happens when the node is started at `appBlockHeight == 0` (i.e. on genesis). The application must set the `InitChainer` in its [constructor](#constructor-function) via the [`SetInitChainer`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetInitChainer) method.
+
+In general, the `InitChainer` is mostly composed of the [`InitGenesis`](/docs/sdk/v0.50/documentation/module-system/genesis#initgenesis) function of each of the application's modules. This is done by calling the `InitGenesis` function of the module manager, which in turn calls the `InitGenesis` function of each of the modules it contains. Note that the order in which the modules' `InitGenesis` functions must be called has to be set in the module manager using the [module manager's](/docs/sdk/v0.50/documentation/module-system/module-manager) `SetOrderInitGenesis` method. This is done in the [application's constructor](#application-constructor), and the `SetOrderInitGenesis` has to be called before the `SetInitChainer`.
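The ordering contract can be sketched with a toy manager. This is illustrative only — `toyManager` and `initModule` are invented names, not Cosmos SDK types — but it shows the essential behavior: modules run strictly in the order handed to `SetOrderInitGenesis`.

```go
package main

import "fmt"

// initModule is a stand-in for an SDK module with an InitGenesis hook.
type initModule struct{ name string }

// InitGenesis records that this module's genesis logic ran.
func (m initModule) InitGenesis(ran *[]string) { *ran = append(*ran, m.name) }

// toyManager mimics the module manager's ordered InitGenesis dispatch.
type toyManager struct {
	modules map[string]initModule
	order   []string
}

// SetOrderInitGenesis fixes the order in which InitGenesis will run.
func (mm *toyManager) SetOrderInitGenesis(names ...string) { mm.order = names }

// InitGenesis runs every module in the configured order and returns
// the order in which the modules actually executed.
func (mm *toyManager) InitGenesis() []string {
	var ran []string
	for _, name := range mm.order {
		mm.modules[name].InitGenesis(&ran)
	}
	return ran
}

func main() {
	mm := &toyManager{modules: map[string]initModule{
		"auth":    {"auth"},
		"staking": {"staking"},
		"genutil": {"genutil"},
	}}
	// genutil must run after auth and staking, so that accounts and
	// pools exist before genesis transactions are delivered.
	mm.SetOrderInitGenesis("auth", "staking", "genutil")
	fmt.Println(mm.InitGenesis()) // [auth staking genutil]
}
```

Reordering the arguments to `SetOrderInitGenesis` changes the execution order directly, which is why simapp configures it before `SetInitChainer` is called.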
See an example of an `InitChainer` from `simapp`: -simapp/app.go +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + 
"github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 
"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. 
+type stdAccAddressCodec struct{ +} + +func (g stdAccAddressCodec) + +StringToBytes(text string) ([]byte, error) { + if text == "" { + return nil, nil +} + +return sdk.AccAddressFromBech32(text) +} + +func (g stdAccAddressCodec) + +BytesToString(bz []byte) (string, error) { + if bz == nil { + return "", nil +} + +return sdk.AccAddress(bz).String(), nil +} + +/ stdValAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. +type stdValAddressCodec struct{ +} + +func (g stdValAddressCodec) + +StringToBytes(text string) ([]byte, error) { + return sdk.ValAddressFromBech32(text) +} + +func (g stdValAddressCodec) + +BytesToString(bz []byte) (string, error) { + return sdk.ValAddress(bz).String(), nil +} + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic, + / non-dependent module elements, such as codec registration and genesis verification. + / By default it is composed of all the modules from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth.
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required for apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises only + / one decorator: the Transaction Tips decorator.
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + + / At startup, after all modules have been registered, check that all proto + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} -``` -loading...
-``` +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app.go#L626-L634) +error { + return app.LoadVersion(height) +} -### PreBlocker[​](#preblocker "Direct link to PreBlocker") +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. 
+func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. +func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. + authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. 
+func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} +``` + +### PreBlocker There are two semantics around the new lifecycle method: -* It runs before the `BeginBlocker` of all modules -* It can modify consensus parameters in 
storage, and signal the caller through the return value. +- It runs before the `BeginBlocker` of all modules +- It can modify consensus parameters in storage, and signal the caller through the return value. When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameter in the finalize context: -``` +```go app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams()) ``` The new ctx must be passed to all the other lifecycle methods. -### BeginBlocker and EndBlocker[​](#beginblocker-and-endblocker "Direct link to BeginBlocker and EndBlocker") +### BeginBlocker and EndBlocker The Cosmos SDK offers developers the possibility to implement automatic execution of code as part of their application. This is implemented through two functions called `BeginBlocker` and `EndBlocker`. They are called when the application receives the `FinalizeBlock` messages from the CometBFT consensus engine, which happens respectively at the beginning and at the end of each block. The application must set the `BeginBlocker` and `EndBlocker` in its [constructor](#constructor-function) via the [`SetBeginBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetBeginBlocker) and [`SetEndBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetEndBlocker) methods. -In general, the `BeginBlocker` and `EndBlocker` functions are mostly composed of the [`BeginBlock` and `EndBlock`](/v0.50/build/building-modules/beginblock-endblock) functions of each of the application's modules. This is done by calling the `BeginBlock` and `EndBlock` functions of the module manager, which in turn calls the `BeginBlock` and `EndBlock` functions of each of the modules it contains. Note that the order in which the modules' `BeginBlock` and `EndBlock` functions must be called has to be set in the module manager using the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods, respectively. 
This is done via the [module manager](/v0.50/build/building-modules/module-manager) in the [application's constructor](#application-constructor), and the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods have to be called before the `SetBeginBlocker` and `SetEndBlocker` functions. +In general, the `BeginBlocker` and `EndBlocker` functions are mostly composed of the [`BeginBlock` and `EndBlock`](/docs/sdk/v0.50/documentation/module-system/beginblock-endblock) functions of each of the application's modules. This is done by calling the `BeginBlock` and `EndBlock` functions of the module manager, which in turn calls the `BeginBlock` and `EndBlock` functions of each of the modules it contains. Note that the order in which the modules' `BeginBlock` and `EndBlock` functions must be called has to be set in the module manager using the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods, respectively. This is done via the [module manager](/docs/sdk/v0.50/documentation/module-system/module-manager) in the [application's constructor](#application-constructor), and the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods have to be called before the `SetBeginBlocker` and `SetEndBlocker` functions. -As a sidenote, it is important to remember that application-specific blockchains are deterministic. Developers must be careful not to introduce non-determinism in `BeginBlocker` or `EndBlocker`, and must also be careful not to make them too computationally expensive, as [gas](/v0.50/learn/beginner/gas-fees) does not constrain the cost of `BeginBlocker` and `EndBlocker` execution. +As a sidenote, it is important to remember that application-specific blockchains are deterministic. 
Developers must be careful not to introduce non-determinism in `BeginBlocker` or `EndBlocker`, and must also be careful not to make them too computationally expensive, as [gas](/docs/sdk/v0.50/learn/beginner/gas-fees) does not constrain the cost of `BeginBlocker` and `EndBlocker` execution. See an example of `BeginBlocker` and `EndBlocker` functions from `simapp` -simapp/app.go +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" 
+ "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes 
"github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can 
populate it with the correct bech32 prefixes without depending on the global. +type stdAccAddressCodec struct{ +} + +func (g stdAccAddressCodec) + +StringToBytes(text string) ([]byte, error) { + if text == "" { + return nil, nil +} + +return sdk.AccAddressFromBech32(text) +} + +func (g stdAccAddressCodec) + +BytesToString(bz []byte) (string, error) { + if bz == nil { + return "", nil +} + +return sdk.AccAddress(bz).String(), nil +} + +/ stdValAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. +type stdValAddressCodec struct{ +} + +func (g stdValAddressCodec) + +StringToBytes(text string) ([]byte, error) { + return sdk.ValAddressFromBech32(text) +} + +func (g stdValAddressCodec) + +BytesToString(bz []byte) (string, error) { + return sdk.ValAddress(bz).String(), nil +} + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic, + / non-dependent module elements, such as codec registration and genesis verification. + / By default it is composed of all the modules from the module manager. + / Additionally, app module basics can be overwritten by passing them as an argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth.
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required for apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises only + / one decorator: the Transaction Tips decorator.
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + + / At startup, after all modules have been registered, check that all proto + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} -``` -loading...
-``` +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app.go#L613-L620) +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) -### Register Codec[​](#register-codec "Direct link to Register Codec") +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} -The `EncodingConfig` structure is the last important part of the `app.go` file. The goal of this structure is to define the codecs that will be used throughout the app. +/ LoadHeight loads a particular height +func (app *SimApp) -simapp/params/encoding.go +LoadHeight(height int64) -``` -loading... -``` +error { + return app.LoadVersion(height) +} -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/params/encoding.go#L9-L16) +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. 
+func (app *SimApp) -Here are descriptions of what each of the four fields means: +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) -* `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). `Any` could be thought as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. Each application module implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations. +AppCodec() - * You can read more about `Any` in [ADR-019](/v0.50/build/architecture/adr-019-protobuf-state-encoding). - * To go more into details, the Cosmos SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/cosmos/gogoproto). By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](/v0.50/build/architecture/adr-019-protobuf-state-encoding). 
+codec.Codec { + return app.appCodec +} -* `Codec`: The default codec used throughout the Cosmos SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to the users (for example, in the [CLI](#cli)). By default, the SDK uses Protobuf as `Codec`. +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) -* `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](/v0.50/learn/advanced/transactions). +InterfaceRegistry() -* `Amino`: Some legacy parts of the Cosmos SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAmino` method to register the module's specific types within Amino. This `Amino` codec should not be used by app developers anymore, and will be removed in future releases. +types.InterfaceRegistry { + return app.interfaceRegistry +} -An application should create its own encoding config. See an example of a `simappparams.EncodingConfig` from `simapp`: +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) -simapp/params/encoding.go +TxConfig() +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway.
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} ``` -loading... + +### Register Codec + +The `EncodingConfig` structure is the last important part of the `app.go` file. The goal of this structure is to define the codecs that will be used throughout the app. + +```go expandable +package params + +import ( + + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" +) + +/ EncodingConfig specifies the concrete encoding types to use for a given app. +/ This is provided for compatibility between protobuf and amino implementations. 
+type EncodingConfig struct { + InterfaceRegistry types.InterfaceRegistry + Codec codec.Codec + TxConfig client.TxConfig + Amino *codec.LegacyAmino +} ```

Here are descriptions of what each of the four fields means:

- `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). `Any` could be thought of as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. Each application module implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations.
  - You can read more about `Any` in [ADR-019](/docs/common/pages/adr-comprehensive#adr-019-protocol-buffer-state-encoding).
  - To go into more detail, the Cosmos SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/cosmos/gogoproto). By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](/docs/common/pages/adr-comprehensive#adr-019-protocol-buffer-state-encoding).
- `Codec`: The default codec used throughout the Cosmos SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to the users (for example, in the [CLI](#cli)). By default, the SDK uses Protobuf as `Codec`.
- `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](/docs/sdk/v0.50/learn/advanced/transactions).
- `Amino`: Some legacy parts of the Cosmos SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAmino` method to register the module's specific types within Amino. This `Amino` codec should not be used by app developers anymore, and will be removed in future releases.

An application should create its own encoding config. See an example of a `simappparams.EncodingConfig` from `simapp`:

```go expandable
package params

import (
	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
)

/ EncodingConfig specifies the concrete encoding types to use for a given app.
/ This is provided for compatibility between protobuf and amino implementations.
type EncodingConfig struct {
	InterfaceRegistry types.InterfaceRegistry
	Codec             codec.Codec
	TxConfig          client.TxConfig
	Amino             *codec.LegacyAmino
}
```

## Modules

[Modules](/docs/sdk/v0.50/documentation/module-system/intro) are the heart and soul of Cosmos SDK applications. They can be considered as state-machines nested within the state-machine. When a transaction is relayed from the underlying CometBFT engine via the ABCI to the application, it is routed by [`baseapp`](/docs/sdk/v0.50/learn/advanced/baseapp) to the appropriate module in order to be processed. This paradigm enables developers to easily build complex state-machines, as most of the modules they need often already exist. **For developers, most of the work involved in building a Cosmos SDK application revolves around building custom modules required by their application that do not exist yet, and integrating them with modules that do already exist into one coherent application**. In the application directory, the standard practice is to store modules in the `x/` folder (not to be confused with the Cosmos SDK's `x/` folder, which contains already-built modules).
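The "state-machines nested within the state-machine" idea can be sketched in plain Go: each module implements a small interface, and the application fans lifecycle calls out to every registered module. All types below are illustrative stand-ins, not actual Cosmos SDK APIs.

```go
package main

import "fmt"

// AppModule is a toy stand-in for the SDK's module interface:
// each module is a small state machine with its own lifecycle hooks.
type AppModule interface {
	Name() string
	BeginBlock(height int64)
}

// mintModule mints a fixed amount at the start of every block.
type mintModule struct{ supply int64 }

func (m *mintModule) Name() string            { return "mint" }
func (m *mintModule) BeginBlock(height int64) { m.supply += 5 }

// manager mirrors the role of the SDK's module manager: it keeps an
// ordered list of modules and dispatches each lifecycle call to all of them.
type manager struct{ modules []AppModule }

func (mgr *manager) BeginBlock(height int64) {
	for _, mod := range mgr.modules {
		mod.BeginBlock(height)
	}
}

func main() {
	mint := &mintModule{}
	mgr := &manager{modules: []AppModule{mint}}
	for h := int64(1); h <= 3; h++ {
		mgr.BeginBlock(h)
	}
	fmt.Println(mint.supply) // 15 after three blocks
}
```

The real module manager works the same way for `InitGenesis`, `BeginBlock`, `EndBlock`, and so on, with the call order configured explicitly by the application.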
### Application Module Interface

Modules must implement [interfaces](/docs/sdk/v0.50/documentation/module-system/module-manager#application-module-interfaces) defined in the Cosmos SDK, [`AppModuleBasic`](/docs/sdk/v0.50/documentation/module-system/module-manager#appmodulebasic) and [`AppModule`](/docs/sdk/v0.50/documentation/module-system/module-manager#appmodule). The former implements basic non-dependent elements of the module, such as the `codec`, while the latter handles the bulk of the module methods (including methods that require references to other modules' `keeper`s). Both the `AppModule` and `AppModuleBasic` types are, by convention, defined in a file called `module.go`.

`AppModule` exposes a collection of useful methods on the module that facilitates the composition of modules into a coherent application. These methods are called from the [`module manager`](/docs/sdk/v0.50/documentation/module-system/module-manager#manager), which manages the application's collection of modules.

### `Msg` Services

Each application module defines two [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods. Each Protobuf `Msg` service method is 1:1 related to a Protobuf request type, which must implement the `sdk.Msg` interface. Note that `sdk.Msg`s are bundled in [transactions](/docs/sdk/v0.50/learn/advanced/transactions), and each transaction contains one or multiple messages.

When a valid block of transactions is received by the full-node, CometBFT relays each one to the application via [`DeliverTx`](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#specifics-of-responsedelivertx). Then, the application handles the transaction:

1. Upon receiving the transaction, the application first unmarshalls it from `[]byte`.
2. Then, it verifies a few things about the transaction like [fee payment and signatures](/docs/sdk/v0.50/learn/beginner/gas-fees#antehandler) before extracting the `Msg`(s) contained in the transaction.
3. `sdk.Msg`s are encoded using Protobuf [`Any`s](#register-codec). By analyzing each `Any`'s `type_url`, baseapp's `msgServiceRouter` routes the `sdk.Msg` to the corresponding module's `Msg` service.
4. If the message is successfully processed, the state is updated.

For more details, see [transaction lifecycle](/docs/sdk/v0.50/learn/beginner/tx-lifecycle).

Module developers create custom `Msg` services when they build their own module. The general practice is to define the `Msg` Protobuf service in a `tx.proto` file. For example, the `x/bank` module defines a service with two methods to transfer tokens:

```protobuf
// Msg defines the bank Msg service.
service Msg {
  option (cosmos.msg.v1.service) = true;

  // Send defines a method for sending coins from one account to another account.
  rpc Send(MsgSend) returns (MsgSendResponse);

  // MultiSend defines a method for sending coins from some accounts to other accounts.
  rpc MultiSend(MsgMultiSend) returns (MsgMultiSendResponse);

  // UpdateParams defines a governance operation for updating the x/bank module parameters.
  // The authority is defined in the keeper.
  //
  // Since: cosmos-sdk 0.47
  rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse);

  // SetSendEnabled is a governance operation for setting the SendEnabled flag
  // on any number of Denoms. Only the entries to add or update should be
  // included. Entries that already exist in the store, but that aren't
  // included in this message, will be left unchanged.
  //
  // Since: cosmos-sdk 0.47
  rpc SetSendEnabled(MsgSetSendEnabled) returns (MsgSetSendEnabledResponse);
}
```

Service methods use `keeper` in order to update the module state. Each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterMsgServer` function provided by the generated Protobuf code.

### gRPC `Query` Services

gRPC `Query` services allow users to query the state using [gRPC](https://grpc.io). They are enabled by default, and can be configured under the `grpc.enable` and `grpc.address` fields inside [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml).

gRPC `Query` services are defined in the module's Protobuf definition files, specifically inside `query.proto`. The `query.proto` definition file exposes a single `Query` [Protobuf service](https://developers.google.com/protocol-buffers/docs/proto#services). Each gRPC query endpoint corresponds to a service method, starting with the `rpc` keyword, inside the `Query` service.
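As described earlier, baseapp's `msgServiceRouter` dispatches each `sdk.Msg` by the `type_url` of its `Any` wrapper. That dispatch pattern can be sketched with a plain map from type URL to handler; the names below are illustrative, not SDK APIs.

```go
package main

import "fmt"

// handler processes the encoded value of one message type.
type handler func(value []byte) error

// msgServiceRouter is a toy version of type_url based routing.
type msgServiceRouter struct {
	routes map[string]handler
}

func newRouter() *msgServiceRouter {
	return &msgServiceRouter{routes: map[string]handler{}}
}

// register is, conceptually, what a module's RegisterServices call ends up doing.
func (r *msgServiceRouter) register(typeURL string, h handler) {
	r.routes[typeURL] = h
}

// dispatch routes an Any-like (type_url, value) pair to the owning module.
func (r *msgServiceRouter) dispatch(typeURL string, value []byte) error {
	h, ok := r.routes[typeURL]
	if !ok {
		return fmt.Errorf("unroutable message: %s", typeURL)
	}
	return h(value)
}

func main() {
	r := newRouter()
	r.register("/cosmos.bank.v1beta1.MsgSend", func(value []byte) error {
		fmt.Printf("bank module handled %d bytes\n", len(value))
		return nil
	})
	if err := r.dispatch("/cosmos.bank.v1beta1.MsgSend", []byte{0x0a, 0x02}); err != nil {
		panic(err)
	}
}
```

An unknown `type_url` yields an error rather than silently dropping the message, which is also why safe `Any` unpacking goes through the `InterfaceRegistry` rather than a global registry.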
Finally, each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterQueryServer` function provided by the generated Protobuf code.

### Keeper

[`Keepers`](/docs/sdk/v0.50/documentation/module-system/keeper) are the gatekeepers of their module's store(s). To read or write in a module's store, it is mandatory to go through one of its `keeper`'s methods. This is ensured by the [object-capabilities](/docs/sdk/v0.50/learn/advanced/ocap) model of the Cosmos SDK. Only objects that hold the key to a store can access it, and only the module's `keeper` should hold the key(s) to the module's store(s).

`Keepers` are generally defined in a file called `keeper.go`. It contains the `keeper`'s type definition and methods. The `keeper` type definition generally consists of the following:

- **Key(s)** to the module's store(s) in the multistore.
- Reference to **other modules' `keepers`**. Only needed if the `keeper` needs to access other modules' store(s) (either to read or write from them).
- A reference to the application's **codec**. The `keeper` needs it to marshal structs before storing them, or to unmarshal them when it retrieves them, because stores only accept `[]bytes` as value.

Along with the type definition, the next important component of the `keeper.go` file is the `keeper`'s constructor function, `NewKeeper`. This function instantiates a new `keeper` of the type defined above with a `codec`, store `keys` and potentially references to other modules' `keeper`s as parameters. The `NewKeeper` function is called from the [application's constructor](#constructor-function). The rest of the file defines the `keeper`'s methods, which are primarily getters and setters.

### Command-Line, gRPC Services and REST Interfaces

Each module defines command-line commands, gRPC services, and REST routes to be exposed to the end-user via the [application's interfaces](#application-interfaces). This enables end-users to create messages of the types defined in the module, or to query the subset of the state managed by the module.

#### CLI

Generally, the [commands related to a module](/docs/sdk/v0.50/documentation/module-system/module-interfaces#cli) are defined in a folder called `client/cli` in the module's folder. The CLI divides commands into two categories, transactions and queries, defined in `client/cli/tx.go` and `client/cli/query.go`, respectively. Both commands are built on top of the [Cobra Library](https://github.com/spf13/cobra):

- Transactions commands let users generate new transactions so that they can be included in a block and eventually update the state. One command should be created for each [message type](#message-types) defined in the module. The command calls the constructor of the message with the parameters provided by the end-user, and wraps it into a transaction. The Cosmos SDK handles signing and the addition of other transaction metadata.
- Queries let users query the subset of the state defined by the module. Query commands forward queries to the [application's query router](/docs/sdk/v0.50/learn/advanced/baseapp#query-routing), which routes them to the appropriate [querier](#querier) using the `queryRoute` parameter supplied.

#### gRPC

[gRPC](https://grpc.io) is a modern open-source high performance RPC framework that has support in multiple languages. It is the recommended way for external clients (such as wallets, browsers and other backend services) to interact with a node.

Each module can expose gRPC endpoints called [service methods](https://grpc.io/docs/what-is-grpc/core-concepts/#service-definition), which are defined in the [module's Protobuf `query.proto` file](#grpc-query-services). A service method is defined by its name, input arguments, and output response. The module then needs to perform the following actions:

- Define a `RegisterGRPCGatewayRoutes` method on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module.
- For each service method, define a corresponding handler. The handler implements the core logic necessary to serve the gRPC request, and is located in the `keeper/grpc_query.go` file.

#### gRPC-gateway REST Endpoints

Some external clients may not wish to use gRPC. In this case, the Cosmos SDK provides a gRPC gateway service, which exposes each gRPC service as a corresponding REST endpoint. Please refer to the [grpc-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) documentation to learn more.

The REST endpoints are defined in the Protobuf files, along with the gRPC services, using Protobuf annotations. Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods. By default, all REST endpoints defined in the SDK have a URL starting with the `/cosmos/` prefix.

The Cosmos SDK also provides a development endpoint to generate [Swagger](https://swagger.io/) definition files for these REST endpoints. This endpoint can be enabled inside the [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) config file, under the `api.swagger` key.

## Application Interface

[Interfaces](#command-line-grpc-services-and-rest-interfaces) let end-users interact with full-node clients. This means querying data from the full-node or creating and sending new transactions to be relayed by the full-node and eventually included in a block.

The main interface is the [Command-Line Interface](/docs/sdk/v0.50/learn/advanced/cli). The CLI of a Cosmos SDK application is built by aggregating [CLI commands](#cli) defined in each of the modules used by the application. The CLI of an application is the same as the daemon (e.g. `appd`), and is defined in a file called `appd/main.go`. The file contains the following:

- **A `main()` function**, which is executed to build the `appd` interface client. This function prepares each command and adds them to the `rootCmd` before building them. At the root of `appd`, the function adds generic commands like `status`, `keys`, and `config`, query commands, tx commands, and `rest-server`.
- **Query commands**, which are added by calling the `queryCmd` function. This function returns a Cobra command that contains the query commands defined in each of the application's modules (passed as an array of `sdk.ModuleClients` from the `main()` function), as well as some other lower level query commands such as block or validator queries. Query commands are called by using the command `appd query [query]` of the CLI.
- **Transaction commands**, which are added by calling the `txCmd` function. Similar to `queryCmd`, the function returns a Cobra command that contains the tx commands defined in each of the application's modules, as well as lower level tx commands like transaction signing or broadcasting. Tx commands are called by using the command `appd tx [tx]` of the CLI.
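The aggregation described above is just a command tree: a root command with `query` and `tx` children, under which each module hangs its own subcommands. A minimal stdlib-only sketch in the spirit of Cobra (all names illustrative, not the Cobra API):

```go
package main

import "fmt"

// Command is a toy analogue of a Cobra command node.
type Command struct {
	Use      string
	Run      func() string
	children map[string]*Command
}

func NewCommand(use string) *Command {
	return &Command{Use: use, children: map[string]*Command{}}
}

func (c *Command) AddCommand(sub *Command) { c.children[sub.Use] = sub }

// Execute walks a path such as {"query", "bank", "balances"} down the tree
// and runs the leaf command, mirroring `appd query bank balances`.
func (c *Command) Execute(path ...string) (string, error) {
	cur := c
	for _, p := range path {
		next, ok := cur.children[p]
		if !ok {
			return "", fmt.Errorf("unknown command: %s", p)
		}
		cur = next
	}
	if cur.Run == nil {
		return "", fmt.Errorf("%s is not runnable", cur.Use)
	}
	return cur.Run(), nil
}

func main() {
	root := NewCommand("appd")
	query := NewCommand("query")
	tx := NewCommand("tx")
	root.AddCommand(query)
	root.AddCommand(tx)

	// each module contributes its own subcommands, as ModuleBasics does
	balances := NewCommand("balances")
	balances.Run = func() string { return "0stake" }
	bank := NewCommand("bank")
	bank.AddCommand(balances)
	query.AddCommand(bank)

	out, err := root.Execute("query", "bank", "balances")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

In the real CLI, `gaia.ModuleBasics.AddQueryCommands(cmd)` and `AddTxCommands(cmd)` perform exactly this kind of per-module attachment.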
See an example of an application's main command-line file from the [Cosmos Hub](https://github.com/cosmos/gaia).

```go expandable
package cmd

import (
	"errors"
	"io"
	"os"
	"path/filepath"

	"github.com/cosmos/cosmos-sdk/baseapp"
	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/client/config"
	"github.com/cosmos/cosmos-sdk/client/debug"
	"github.com/cosmos/cosmos-sdk/client/flags"
	"github.com/cosmos/cosmos-sdk/client/keys"
	"github.com/cosmos/cosmos-sdk/client/rpc"
	"github.com/cosmos/cosmos-sdk/server"
	serverconfig "github.com/cosmos/cosmos-sdk/server/config"
	servertypes "github.com/cosmos/cosmos-sdk/server/types"
	"github.com/cosmos/cosmos-sdk/snapshots"
	snapshottypes "github.com/cosmos/cosmos-sdk/snapshots/types"
	"github.com/cosmos/cosmos-sdk/store"
	sdk "github.com/cosmos/cosmos-sdk/types"
	authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli"
	"github.com/cosmos/cosmos-sdk/x/auth/types"
	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
	"github.com/cosmos/cosmos-sdk/x/crisis"
	genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli"
	"github.com/spf13/cast"
	"github.com/spf13/cobra"
	tmcfg "github.com/tendermint/tendermint/config"
	tmcli "github.com/tendermint/tendermint/libs/cli"
	"github.com/tendermint/tendermint/libs/log"
	dbm "github.com/tendermint/tm-db"

	gaia "github.com/cosmos/gaia/v8/app"
	"github.com/cosmos/gaia/v8/app/params"
)

/ NewRootCmd creates a new root command for simd. It is called once in the
/ main function.
func NewRootCmd() (*cobra.Command, params.EncodingConfig) {
	encodingConfig := gaia.MakeTestEncodingConfig()
	initClientCtx := client.Context{}.
		WithCodec(encodingConfig.Codec).
		WithInterfaceRegistry(encodingConfig.InterfaceRegistry).
		WithTxConfig(encodingConfig.TxConfig).
		WithLegacyAmino(encodingConfig.Amino).
		WithInput(os.Stdin).
		WithAccountRetriever(types.AccountRetriever{}).
		WithHomeDir(gaia.DefaultNodeHome).
		WithViper("")

	rootCmd := &cobra.Command{
		Use:   "gaiad",
		Short: "Stargate Cosmos Hub App",
		PersistentPreRunE: func(cmd *cobra.Command, _ []string) error {
			initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags())
			if err != nil {
				return err
			}

			initClientCtx, err = config.ReadFromClientConfig(initClientCtx)
			if err != nil {
				return err
			}
			if err = client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil {
				return err
			}

			customTemplate, customGaiaConfig := initAppConfig()
			customTMConfig := initTendermintConfig()

			return server.InterceptConfigsPreRunHandler(cmd, customTemplate, customGaiaConfig, customTMConfig)
		},
	}

	initRootCmd(rootCmd, encodingConfig)

	return rootCmd, encodingConfig
}

/ initTendermintConfig helps to override default Tendermint Config values.
/ return tmcfg.DefaultConfig if no custom configuration is required for the application.
func initTendermintConfig() *tmcfg.Config {
	cfg := tmcfg.DefaultConfig()

	/ these values put a higher strain on node memory
	/ cfg.P2P.MaxNumInboundPeers = 100
	/ cfg.P2P.MaxNumOutboundPeers = 40

	return cfg
}

func initAppConfig() (string, interface{}) {
	srvCfg := serverconfig.DefaultConfig()
	srvCfg.StateSync.SnapshotInterval = 1000
	srvCfg.StateSync.SnapshotKeepRecent = 10

	return params.CustomConfigTemplate(), params.CustomAppConfig{
		Config:               *srvCfg,
		BypassMinFeeMsgTypes: gaia.GetDefaultBypassFeeMessages(),
	}
}

func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) {
	cfg := sdk.GetConfig()
	cfg.Seal()

	rootCmd.AddCommand(
		genutilcli.InitCmd(gaia.ModuleBasics, gaia.DefaultNodeHome),
		genutilcli.CollectGenTxsCmd(banktypes.GenesisBalancesIterator{}, gaia.DefaultNodeHome),
		genutilcli.GenTxCmd(gaia.ModuleBasics, encodingConfig.TxConfig, banktypes.GenesisBalancesIterator{}, gaia.DefaultNodeHome),
		genutilcli.ValidateGenesisCmd(gaia.ModuleBasics),
		AddGenesisAccountCmd(gaia.DefaultNodeHome),
		tmcli.NewCompletionCmd(rootCmd, true),
		testnetCmd(gaia.ModuleBasics, banktypes.GenesisBalancesIterator{}),
		debug.Cmd(),
		config.Cmd(),
	)

	ac := appCreator{
		encCfg: encodingConfig,
	}
	server.AddCommands(rootCmd, gaia.DefaultNodeHome, ac.newApp, ac.appExport, addModuleInitFlags)

	/ add keybase, auxiliary RPC, query, and tx child commands
	rootCmd.AddCommand(
		rpc.StatusCommand(),
		queryCommand(),
		txCommand(),
		keys.Commands(gaia.DefaultNodeHome),
	)

	rootCmd.AddCommand(server.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec))
}

func addModuleInitFlags(startCmd *cobra.Command) {
	crisis.AddModuleInitFlags(startCmd)
}

func queryCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:                        "query",
		Aliases:                    []string{"q"},
		Short:                      "Querying subcommands",
		DisableFlagParsing:         true,
		SuggestionsMinimumDistance: 2,
		RunE:                       client.ValidateCmd,
	}

	cmd.AddCommand(
		authcmd.GetAccountCmd(),
		rpc.ValidatorCommand(),
		rpc.BlockCommand(),
		authcmd.QueryTxsByEventsCmd(),
		authcmd.QueryTxCmd(),
	)

	gaia.ModuleBasics.AddQueryCommands(cmd)
	cmd.PersistentFlags().String(flags.FlagChainID, "", "The network chain ID")

	return cmd
}

func txCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:                        "tx",
		Short:                      "Transactions subcommands",
		DisableFlagParsing:         true,
		SuggestionsMinimumDistance: 2,
		RunE:                       client.ValidateCmd,
	}

	cmd.AddCommand(
		authcmd.GetSignCommand(),
		authcmd.GetSignBatchCommand(),
		authcmd.GetMultiSignCommand(),
		authcmd.GetMultiSignBatchCmd(),
		authcmd.GetValidateSignaturesCommand(),
		flags.LineBreak,
		authcmd.GetBroadcastCommand(),
		authcmd.GetEncodeCommand(),
		authcmd.GetDecodeCommand(),
		authcmd.GetAuxToFeeCommand(),
	)

	gaia.ModuleBasics.AddTxCommands(cmd)
	cmd.PersistentFlags().String(flags.FlagChainID, "", "The network chain ID")

	return cmd
}

type appCreator struct {
	encCfg params.EncodingConfig
}

func (ac appCreator) newApp(
	logger log.Logger,
	db dbm.DB,
	traceStore io.Writer,
	appOpts servertypes.AppOptions,
) servertypes.Application {
	var cache sdk.MultiStorePersistentCache
	if cast.ToBool(appOpts.Get(server.FlagInterBlockCache)) {
		cache = store.NewCommitKVStoreCacheManager()
	}

	skipUpgradeHeights := make(map[int64]bool)
	for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) {
		skipUpgradeHeights[int64(h)] = true
	}

	pruningOpts, err := server.GetPruningOptionsFromFlags(appOpts)
	if err != nil {
		panic(err)
	}

	snapshotDir := filepath.Join(cast.ToString(appOpts.Get(flags.FlagHome)), "data", "snapshots")
	snapshotDB, err := dbm.NewDB("metadata", server.GetAppDBBackend(appOpts), snapshotDir)
	if err != nil {
		panic(err)
	}
	snapshotStore, err := snapshots.NewStore(snapshotDB, snapshotDir)
	if err != nil {
		panic(err)
	}
	snapshotOptions := snapshottypes.NewSnapshotOptions(
		cast.ToUint64(appOpts.Get(server.FlagStateSyncSnapshotInterval)),
		cast.ToUint32(appOpts.Get(server.FlagStateSyncSnapshotKeepRecent)),
	)

	return gaia.NewGaiaApp(
		logger, db, traceStore, true, skipUpgradeHeights,
		cast.ToString(appOpts.Get(flags.FlagHome)),
		cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)),
		ac.encCfg,
		appOpts,
		baseapp.SetPruning(pruningOpts),
		baseapp.SetMinGasPrices(cast.ToString(appOpts.Get(server.FlagMinGasPrices))),
		baseapp.SetHaltHeight(cast.ToUint64(appOpts.Get(server.FlagHaltHeight))),
		baseapp.SetHaltTime(cast.ToUint64(appOpts.Get(server.FlagHaltTime))),
		baseapp.SetMinRetainBlocks(cast.ToUint64(appOpts.Get(server.FlagMinRetainBlocks))),
		baseapp.SetInterBlockCache(cache),
		baseapp.SetTrace(cast.ToBool(appOpts.Get(server.FlagTrace))),
		baseapp.SetIndexEvents(cast.ToStringSlice(appOpts.Get(server.FlagIndexEvents))),
		baseapp.SetSnapshot(snapshotStore, snapshotOptions),
	)
}

func (ac appCreator) appExport(
	logger log.Logger,
	db dbm.DB,
	traceStore io.Writer,
	height int64,
	forZeroHeight bool,
	jailAllowedAddrs []string,
	appOpts servertypes.AppOptions,
) (servertypes.ExportedApp, error) {
	homePath, ok := appOpts.Get(flags.FlagHome).(string)
	if !ok || homePath == "" {
		return servertypes.ExportedApp{}, errors.New("application home is not set")
	}

	var loadLatest bool
	if height == -1 {
		loadLatest = true
	}

	gaiaApp := gaia.NewGaiaApp(
		logger,
		db,
		traceStore,
		loadLatest,
		map[int64]bool{},
		homePath,
		cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)),
		ac.encCfg,
		appOpts,
	)

	if height != -1 {
		if err := gaiaApp.LoadHeight(height); err != nil {
			return servertypes.ExportedApp{}, err
		}
	}

	return gaiaApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs)
}
```

## Dependencies and Makefile

This section is optional, as developers are free to choose their dependency manager and project building method. That said, the current most used framework for versioning control is [`go.mod`](https://github.com/golang/go/wiki/Modules). It ensures each of the libraries used throughout the application are imported with the correct version.

The following is the `go.mod` of the [Cosmos Hub](https://github.com/cosmos/gaia), provided as an example.
-go.mod - +```go expandable +module github.com/cosmos/gaia/v8 + +go 1.18 + +require ( + cosmossdk.io/math v1.0.0-beta.3 + github.com/cosmos/cosmos-sdk v0.46.2 + github.com/cosmos/go-bip39 v1.0.0 / indirect + github.com/cosmos/ibc-go/v5 v5.0.0 + github.com/gogo/protobuf v1.3.3 + github.com/golang/protobuf v1.5.2 + github.com/golangci/golangci-lint v1.50.0 + github.com/gorilla/mux v1.8.0 + github.com/gravity-devs/liquidity/v2 v2.0.0 + github.com/grpc-ecosystem/grpc-gateway v1.16.0 + github.com/pkg/errors v0.9.1 + github.com/rakyll/statik v0.1.7 + github.com/spf13/cast v1.5.0 + github.com/spf13/cobra v1.6.0 + github.com/spf13/pflag v1.0.5 + github.com/spf13/viper v1.13.0 + github.com/strangelove-ventures/packet-forward-middleware/v2 v2.1.4-0.20220802012200-5a62a55a7f1d + github.com/stretchr/testify v1.8.0 + github.com/tendermint/tendermint v0.34.21 + github.com/tendermint/tm-db v0.6.7 + google.golang.org/genproto v0.0.0-20220815135757-37a418bb8959 + google.golang.org/grpc v1.50.0 +) + +require ( + 4d63.com/gochecknoglobals v0.1.0 / indirect + cloud.google.com/go v0.102.1 / indirect + cloud.google.com/go/compute v1.7.0 / indirect + cloud.google.com/go/iam v0.4.0 / indirect + cloud.google.com/go/storage v1.22.1 / indirect + cosmossdk.io/errors v1.0.0-beta.7 / indirect + filippo.io/edwards25519 v1.0.0-rc.1 / indirect + github.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4 / indirect + github.com/99designs/keyring v1.2.1 / indirect + github.com/Abirdcfly/dupword v0.0.7 / indirect + github.com/Antonboom/errname v0.1.7 / indirect + github.com/Antonboom/nilnil v0.1.1 / indirect + github.com/BurntSushi/toml v1.2.0 / indirect + github.com/ChainSafe/go-schnorrkel v0.0.0-20200405005733-88cbf1b4c40d / indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 / indirect + github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0 / indirect + github.com/Masterminds/semver v1.5.0 / indirect + github.com/OpenPeeDeeP/depguard v1.1.1 / indirect + 
github.com/Workiva/go-datastructures v1.0.53 / indirect + github.com/alexkohler/prealloc v1.0.0 / indirect + github.com/alingse/asasalint v0.0.11 / indirect + github.com/armon/go-metrics v0.4.0 / indirect + github.com/ashanbrown/forbidigo v1.3.0 / indirect + github.com/ashanbrown/makezero v1.1.1 / indirect + github.com/aws/aws-sdk-go v1.40.45 / indirect + github.com/beorn7/perks v1.0.1 / indirect + github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d / indirect + github.com/bgentry/speakeasy v0.1.0 / indirect + github.com/bkielbasa/cyclop v1.2.0 / indirect + github.com/blizzy78/varnamelen v0.8.0 / indirect + github.com/bombsimon/wsl/v3 v3.3.0 / indirect + github.com/breml/bidichk v0.2.3 / indirect + github.com/breml/errchkjson v0.3.0 / indirect + github.com/btcsuite/btcd v0.22.1 / indirect + github.com/butuzov/ireturn v0.1.1 / indirect + github.com/cenkalti/backoff/v4 v4.1.3 / indirect + github.com/cespare/xxhash v1.1.0 / indirect + github.com/cespare/xxhash/v2 v2.1.2 / indirect + github.com/charithe/durationcheck v0.0.9 / indirect + github.com/chavacava/garif v0.0.0-20220630083739-93517212f375 / indirect + github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e / indirect + github.com/cockroachdb/apd/v2 v2.0.2 / indirect + github.com/coinbase/rosetta-sdk-go v0.7.9 / indirect + github.com/confio/ics23/go v0.7.0 / indirect + github.com/cosmos/btcutil v1.0.4 / indirect + github.com/cosmos/cosmos-proto v1.0.0-alpha7 / indirect + github.com/cosmos/gorocksdb v1.2.0 / indirect + github.com/cosmos/iavl v0.19.2-0.20220916140702-9b6be3095313 / indirect + github.com/cosmos/ledger-cosmos-go v0.11.1 / indirect + github.com/cosmos/ledger-go v0.9.3 / indirect + github.com/creachadair/taskgroup v0.3.2 / indirect + github.com/curioswitch/go-reassign v0.2.0 / indirect + github.com/daixiang0/gci v0.8.0 / indirect + github.com/danieljoos/wincred v1.1.2 / indirect + github.com/davecgh/go-spew v1.1.1 / indirect + github.com/denis-tingaikin/go-header v0.4.3 / indirect + 
github.com/desertbit/timer v0.0.0-20180107155436-c41aec40b27f / indirect + github.com/dgraph-io/badger/v2 v2.2007.4 / indirect + github.com/dgraph-io/ristretto v0.1.0 / indirect + github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 / indirect + github.com/dustin/go-humanize v1.0.0 / indirect + github.com/dvsekhvalnov/jose2go v1.5.0 / indirect + github.com/esimonov/ifshort v1.0.4 / indirect + github.com/ettle/strcase v0.1.1 / indirect + github.com/fatih/color v1.13.0 / indirect + github.com/fatih/structtag v1.2.0 / indirect + github.com/felixge/httpsnoop v1.0.1 / indirect + github.com/firefart/nonamedreturns v1.0.4 / indirect + github.com/fsnotify/fsnotify v1.5.4 / indirect + github.com/fzipp/gocyclo v0.6.0 / indirect + github.com/go-critic/go-critic v0.6.5 / indirect + github.com/go-kit/kit v0.12.0 / indirect + github.com/go-kit/log v0.2.1 / indirect + github.com/go-logfmt/logfmt v0.5.1 / indirect + github.com/go-playground/validator/v10 v10.4.1 / indirect + github.com/go-toolsmith/astcast v1.0.0 / indirect + github.com/go-toolsmith/astcopy v1.0.2 / indirect + github.com/go-toolsmith/astequal v1.0.3 / indirect + github.com/go-toolsmith/astfmt v1.0.0 / indirect + github.com/go-toolsmith/astp v1.0.0 / indirect + github.com/go-toolsmith/strparse v1.0.0 / indirect + github.com/go-toolsmith/typep v1.0.2 / indirect + github.com/go-xmlfmt/xmlfmt v0.0.0-20191208150333-d5b6f63a941b / indirect + github.com/gobwas/glob v0.2.3 / indirect + github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 / indirect + github.com/gofrs/flock v0.8.1 / indirect + github.com/gogo/gateway v1.1.0 / indirect + github.com/golang/glog v1.0.0 / indirect + github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da / indirect + github.com/golang/snappy v0.0.4 / indirect + github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 / indirect + github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a / indirect + github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe / 
indirect + github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2 / indirect + github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 / indirect + github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca / indirect + github.com/golangci/misspell v0.3.5 / indirect + github.com/golangci/revgrep v0.0.0-20220804021717-745bb2f7c2e6 / indirect + github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 / indirect + github.com/google/btree v1.0.1 / indirect + github.com/google/go-cmp v0.5.9 / indirect + github.com/google/orderedcode v0.0.1 / indirect + github.com/google/uuid v1.3.0 / indirect + github.com/googleapis/enterprise-certificate-proxy v0.1.0 / indirect + github.com/googleapis/gax-go/v2 v2.4.0 / indirect + github.com/googleapis/go-type-adapters v1.0.0 / indirect + github.com/gordonklaus/ineffassign v0.0.0-20210914165742-4cc7213b9bc8 / indirect + github.com/gorilla/handlers v1.5.1 / indirect + github.com/gorilla/websocket v1.5.0 / indirect + github.com/gostaticanalysis/analysisutil v0.7.1 / indirect + github.com/gostaticanalysis/comment v1.4.2 / indirect + github.com/gostaticanalysis/forcetypeassert v0.1.0 / indirect + github.com/gostaticanalysis/nilerr v0.1.1 / indirect + github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 / indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.0.1 / indirect + github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c / indirect + github.com/gtank/merlin v0.1.1 / indirect + github.com/gtank/ristretto255 v0.1.2 / indirect + github.com/hashicorp/errwrap v1.1.0 / indirect + github.com/hashicorp/go-cleanhttp v0.5.2 / indirect + github.com/hashicorp/go-getter v1.6.1 / indirect + github.com/hashicorp/go-immutable-radix v1.3.1 / indirect + github.com/hashicorp/go-multierror v1.1.1 / indirect + github.com/hashicorp/go-safetemp v1.0.0 / indirect + github.com/hashicorp/go-version v1.6.0 / indirect + github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d / indirect + 
github.com/hashicorp/hcl v1.0.0 / indirect + github.com/hdevalence/ed25519consensus v0.0.0-20220222234857-c00d1f31bab3 / indirect + github.com/hexops/gotextdiff v1.0.3 / indirect + github.com/improbable-eng/grpc-web v0.15.0 / indirect + github.com/inconshreveable/mousetrap v1.0.1 / indirect + github.com/jgautheron/goconst v1.5.1 / indirect + github.com/jingyugao/rowserrcheck v1.1.1 / indirect + github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af / indirect + github.com/jmespath/go-jmespath v0.4.0 / indirect + github.com/jmhodges/levigo v1.0.0 / indirect + github.com/julz/importas v0.1.0 / indirect + github.com/kisielk/errcheck v1.6.2 / indirect + github.com/kisielk/gotool v1.0.0 / indirect + github.com/kkHAIKE/contextcheck v1.1.2 / indirect + github.com/klauspost/compress v1.15.9 / indirect + github.com/kulti/thelper v0.6.3 / indirect + github.com/kunwardeep/paralleltest v1.0.6 / indirect + github.com/kyoh86/exportloopref v0.1.8 / indirect + github.com/ldez/gomoddirectives v0.2.3 / indirect + github.com/ldez/tagliatelle v0.3.1 / indirect + github.com/leonklingele/grouper v1.1.0 / indirect + github.com/lib/pq v1.10.6 / indirect + github.com/libp2p/go-buffer-pool v0.1.0 / indirect + github.com/lufeee/execinquery v1.2.1 / indirect + github.com/magiconair/properties v1.8.6 / indirect + github.com/manifoldco/promptui v0.9.0 / indirect + github.com/maratori/testableexamples v1.0.0 / indirect + github.com/maratori/testpackage v1.1.0 / indirect + github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 / indirect + github.com/mattn/go-colorable v0.1.13 / indirect + github.com/mattn/go-isatty v0.0.16 / indirect + github.com/mattn/go-runewidth v0.0.9 / indirect + github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 / indirect + github.com/mbilski/exhaustivestruct v1.2.0 / indirect + github.com/mgechev/revive v1.2.4 / indirect + github.com/mimoo/StrobeGo v0.0.0-20181016162300-f8f6d4d2b643 / indirect + 
github.com/minio/highwayhash v1.0.2 / indirect + github.com/mitchellh/go-homedir v1.1.0 / indirect + github.com/mitchellh/go-testing-interface v1.0.0 / indirect + github.com/mitchellh/mapstructure v1.5.0 / indirect + github.com/moricho/tparallel v0.2.1 / indirect + github.com/mtibben/percent v0.2.1 / indirect + github.com/nakabonne/nestif v0.3.1 / indirect + github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 / indirect + github.com/nishanths/exhaustive v0.8.3 / indirect + github.com/nishanths/predeclared v0.2.2 / indirect + github.com/olekukonko/tablewriter v0.0.5 / indirect + github.com/pelletier/go-toml v1.9.5 / indirect + github.com/pelletier/go-toml/v2 v2.0.5 / indirect + github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5 / indirect + github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d / indirect + github.com/pmezard/go-difflib v1.0.0 / indirect + github.com/polyfloyd/go-errorlint v1.0.5 / indirect + github.com/prometheus/client_golang v1.12.2 / indirect + github.com/prometheus/client_model v0.2.0 / indirect + github.com/prometheus/common v0.34.0 / indirect + github.com/prometheus/procfs v0.7.3 / indirect + github.com/quasilyte/go-ruleguard v0.3.18 / indirect + github.com/quasilyte/gogrep v0.0.0-20220828223005-86e4605de09f / indirect + github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95 / indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 / indirect + github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 / indirect + github.com/regen-network/cosmos-proto v0.3.1 / indirect + github.com/rs/cors v1.8.2 / indirect + github.com/rs/zerolog v1.27.0 / indirect + github.com/ryancurrah/gomodguard v1.2.4 / indirect + github.com/ryanrolds/sqlclosecheck v0.3.0 / indirect + github.com/sanposhiho/wastedassign/v2 v2.0.6 / indirect + github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa / indirect + github.com/sashamelentyev/interfacebloat v1.1.0 / indirect + 
github.com/sashamelentyev/usestdlibvars v1.20.0 / indirect + github.com/securego/gosec/v2 v2.13.1 / indirect + github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c / indirect + github.com/sirupsen/logrus v1.9.0 / indirect + github.com/sivchari/containedctx v1.0.2 / indirect + github.com/sivchari/nosnakecase v1.7.0 / indirect + github.com/sivchari/tenv v1.7.0 / indirect + github.com/sonatard/noctx v0.0.1 / indirect + github.com/sourcegraph/go-diff v0.6.1 / indirect + github.com/spf13/afero v1.8.2 / indirect + github.com/spf13/jwalterweatherman v1.1.0 / indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 / indirect + github.com/stbenjam/no-sprintf-host-port v0.1.1 / indirect + github.com/stretchr/objx v0.4.0 / indirect + github.com/subosito/gotenv v1.4.1 / indirect + github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 / indirect + github.com/tdakkota/asciicheck v0.1.1 / indirect + github.com/tendermint/btcd v0.1.1 / indirect + github.com/tendermint/crypto v0.0.0-20191022145703-50d29ede1e15 / indirect + github.com/tendermint/go-amino v0.16.0 / indirect + github.com/tetafro/godot v1.4.11 / indirect + github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144 / indirect + github.com/timonwong/loggercheck v0.9.3 / indirect + github.com/tomarrell/wrapcheck/v2 v2.6.2 / indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.0 / indirect + github.com/ulikunitz/xz v0.5.8 / indirect + github.com/ultraware/funlen v0.0.3 / indirect + github.com/ultraware/whitespace v0.0.5 / indirect + github.com/uudashr/gocognit v1.0.6 / indirect + github.com/yagipy/maintidx v1.0.0 / indirect + github.com/yeya24/promlinter v0.2.0 / indirect + github.com/zondax/hid v0.9.1-0.20220302062450-5552068d2266 / indirect + gitlab.com/bosi/decorder v0.2.3 / indirect + go.etcd.io/bbolt v1.3.6 / indirect + go.opencensus.io v0.23.0 / indirect + go.uber.org/atomic v1.9.0 / indirect + go.uber.org/multierr v1.8.0 / indirect + go.uber.org/zap v1.21.0 / indirect + golang.org/x/crypto 
v0.0.0-20220722155217-630584e8d5aa / indirect + golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e / indirect + golang.org/x/exp/typeparams v0.0.0-20220827204233-334a2380cb91 / indirect + golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 / indirect + golang.org/x/net v0.0.0-20220726230323-06994584191e / indirect + golang.org/x/oauth2 v0.0.0-20220622183110-fd043fe589d2 / indirect + golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde / indirect + golang.org/x/sys v0.0.0-20220915200043-7b5979e65e41 / indirect + golang.org/x/term v0.0.0-20220722155259-a9ba230a4035 / indirect + golang.org/x/text v0.3.7 / indirect + golang.org/x/tools v0.1.12 / indirect + golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f / indirect + google.golang.org/api v0.93.0 / indirect + google.golang.org/appengine v1.6.7 / indirect + google.golang.org/protobuf v1.28.1 / indirect + gopkg.in/ini.v1 v1.67.0 / indirect + gopkg.in/yaml.v2 v2.4.0 / indirect + gopkg.in/yaml.v3 v3.0.1 / indirect + honnef.co/go/tools v0.3.3 / indirect + mvdan.cc/gofumpt v0.4.0 / indirect + mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed / indirect + mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b / indirect + mvdan.cc/unparam v0.0.0-20220706161116-678bad134442 / indirect + nhooyr.io/websocket v1.8.6 / indirect + sigs.k8s.io/yaml v1.3.0 / indirect +) + +replace ( + github.com/gogo/protobuf => github.com/regen-network/protobuf v1.3.3-alpha.regen.1 + github.com/zondax/hid => github.com/zondax/hid v0.9.0 +) ``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/gaia/blob/26ae7c2/go.mod#L1-L28) For building the application, a [Makefile](https://en.wikipedia.org/wiki/Makefile) is generally used. The Makefile primarily ensures that the `go.mod` is run before building the two entrypoints to the application, [`Node Client`](#node-client) and [`Application Interface`](#application-interface). 
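To make the role of the Makefile concrete, here is a minimal sketch of such a build file. It is illustrative only, not the Cosmos Hub's actual Makefile: the `appd` binary name and target layout are placeholders, but the structure shows the key point above, namely that module verification runs before either entrypoint is built.

```makefile
# Illustrative sketch only; `appd` is a placeholder binary name.
all: install

# Verify module dependencies before any build step.
verify:
	go mod verify

# Build the node client binary into ./build.
build: verify
	go build -o build/appd ./cmd/appd

# Install the node client binary into $GOPATH/bin.
install: verify
	go install ./cmd/appd

.PHONY: all verify build install
```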
diff --git a/docs/sdk/v0.50/learn/beginner/gas-fees.mdx b/docs/sdk/v0.50/learn/beginner/gas-fees.mdx index 74109c2f..7e3a9c2e 100644 --- a/docs/sdk/v0.50/learn/beginner/gas-fees.mdx +++ b/docs/sdk/v0.50/learn/beginner/gas-fees.mdx @@ -1,67 +1,419 @@ --- -title: "Gas and Fees" -description: "Version: v0.50" +title: Gas and Fees --- - - This document describes the default strategies to handle gas and fees within a Cosmos SDK application. - +## Synopsis + +This document describes the default strategies to handle gas and fees within a Cosmos SDK application. - * [Anatomy of a Cosmos SDK Application](/v0.50/learn/beginner/app-anatomy) +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.50/learn/beginner/app-anatomy) + -## Introduction to `Gas` and `Fees`[​](#introduction-to-gas-and-fees "Direct link to introduction-to-gas-and-fees") +## Introduction to `Gas` and `Fees` In the Cosmos SDK, `gas` is a special unit that is used to track the consumption of resources during execution. `gas` is typically consumed whenever read and writes are made to the store, but it can also be consumed if expensive computation needs to be done. It serves two main purposes: -* Make sure blocks are not consuming too many resources and are finalized. This is implemented by default in the Cosmos SDK via the [block gas meter](#block-gas-meter). -* Prevent spam and abuse from end-user. To this end, `gas` consumed during [`message`](/v0.50/build/building-modules/messages-and-queries#messages) execution is typically priced, resulting in a `fee` (`fees = gas * gas-prices`). `fees` generally have to be paid by the sender of the `message`. Note that the Cosmos SDK does not enforce `gas` pricing by default, as there may be other ways to prevent spam (e.g. bandwidth schemes). Still, most applications implement `fee` mechanisms to prevent spam by using the [`AnteHandler`](#antehandler). +- Make sure blocks are not consuming too many resources and are finalized. 
This is implemented by default in the Cosmos SDK via the [block gas meter](#block-gas-meter). +- Prevent spam and abuse from end-user. To this end, `gas` consumed during [`message`](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages) execution is typically priced, resulting in a `fee` (`fees = gas * gas-prices`). `fees` generally have to be paid by the sender of the `message`. Note that the Cosmos SDK does not enforce `gas` pricing by default, as there may be other ways to prevent spam (e.g. bandwidth schemes). Still, most applications implement `fee` mechanisms to prevent spam by using the [`AnteHandler`](#antehandler). -## Gas Meter[​](#gas-meter "Direct link to Gas Meter") +## Gas Meter -In the Cosmos SDK, `gas` is a simple alias for `uint64`, and is managed by an object called a *gas meter*. Gas meters implement the `GasMeter` interface +In the Cosmos SDK, `gas` is a simple alias for `uint64`, and is managed by an object called a _gas meter_. Gas meters implement the `GasMeter` interface -store/types/gas.go +```go expandable +package types -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/store/types/gas.go#L40-L51) + "fmt" + "math" +) -where: +/ Gas consumption descriptors. +const ( + GasIterNextCostFlatDesc = "IterNextFlat" + GasValuePerByteDesc = "ValuePerByte" + GasWritePerByteDesc = "WritePerByte" + GasReadPerByteDesc = "ReadPerByte" + GasWriteCostFlatDesc = "WriteFlat" + GasReadCostFlatDesc = "ReadFlat" + GasHasDesc = "Has" + GasDeleteDesc = "Delete" +) + +/ Gas measured by the SDK +type Gas = uint64 + +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} + +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. 
+type ErrorOutOfGas struct { + Descriptor string +} + +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. +type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} -* `GasConsumed()` returns the amount of gas that was consumed by the gas meter instance. -* `GasConsumedToLimit()` returns the amount of gas that was consumed by gas meter instance, or the limit if it is reached. -* `GasRemaining()` returns the gas left in the GasMeter. -* `Limit()` returns the limit of the gas meter instance. `0` if the gas meter is infinite. -* `ConsumeGas(amount Gas, descriptor string)` consumes the amount of `gas` provided. If the `gas` overflows, it panics with the `descriptor` message. If the gas meter is not infinite, it panics if `gas` consumed goes above the limit. -* `RefundGas()` deducts the given amount from the gas consumed. 
This functionality enables refunding gas to the transaction or block gas pools so that EVM-compatible chains can fully support the go-ethereum StateDB interface. -* `IsPastLimit()` returns `true` if the amount of gas consumed by the gas meter instance is strictly above the limit, `false` otherwise. -* `IsOutOfGas()` returns `true` if the amount of gas consumed by the gas meter instance is above or equal to the limit, `false` otherwise. +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) -The gas meter is generally held in [`ctx`](/v0.50/learn/advanced/context), and consuming gas is done with the following pattern: +GasConsumedToLimit() +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. +func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. 
+/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. +/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. 
+func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. +func (g *infiniteGasMeter) + +String() + +string { + return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed) +} + +/ GasConfig defines gas cost for each operation on KVStores +type GasConfig struct { + HasCost Gas + DeleteCost Gas + ReadCostFlat Gas + ReadCostPerByte Gas + WriteCostFlat Gas + WriteCostPerByte Gas + IterNextCostFlat Gas +} + +/ KVGasConfig returns a default gas config for KVStores. +func KVGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 1000, + DeleteCost: 1000, + ReadCostFlat: 1000, + ReadCostPerByte: 3, + WriteCostFlat: 2000, + WriteCostPerByte: 30, + IterNextCostFlat: 30, +} +} + +/ TransientGasConfig returns a default gas config for TransientStores.
+func TransientGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 100, + DeleteCost: 100, + ReadCostFlat: 100, + ReadCostPerByte: 0, + WriteCostFlat: 200, + WriteCostPerByte: 3, + IterNextCostFlat: 3, +} +} ``` + +where: + +- `GasConsumed()` returns the amount of gas that was consumed by the gas meter instance. +- `GasConsumedToLimit()` returns the amount of gas that was consumed by gas meter instance, or the limit if it is reached. +- `GasRemaining()` returns the gas left in the GasMeter. +- `Limit()` returns the limit of the gas meter instance. `0` if the gas meter is infinite. +- `ConsumeGas(amount Gas, descriptor string)` consumes the amount of `gas` provided. If the `gas` overflows, it panics with the `descriptor` message. If the gas meter is not infinite, it panics if `gas` consumed goes above the limit. +- `RefundGas()` deducts the given amount from the gas consumed. This functionality enables refunding gas to the transaction or block gas pools so that EVM-compatible chains can fully support the go-ethereum StateDB interface. +- `IsPastLimit()` returns `true` if the amount of gas consumed by the gas meter instance is strictly above the limit, `false` otherwise. +- `IsOutOfGas()` returns `true` if the amount of gas consumed by the gas meter instance is above or equal to the limit, `false` otherwise. + +The gas meter is generally held in [`ctx`](/docs/sdk/v0.50/learn/advanced/context), and consuming gas is done with the following pattern: + +```go ctx.GasMeter().ConsumeGas(amount, "description") ``` By default, the Cosmos SDK makes use of two different gas meters, the [main gas meter](#main-gas-metter) and the [block gas meter](#block-gas-meter). -### Main Gas Meter[​](#main-gas-meter "Direct link to Main Gas Meter") +### Main Gas Meter -`ctx.GasMeter()` is the main gas meter of the application. 
The main gas meter is initialized in `FinalizeBlock` via `setFinalizeBlockState`, and then tracks gas consumption during execution sequences that lead to state-transitions, i.e. those originally triggered by [`FinalizeBlock`](/v0.50/learn/advanced/baseapp#finalizeblock). At the beginning of each transaction execution, the main gas meter **must be set to 0** in the [`AnteHandler`](#antehandler), so that it can track gas consumption per-transaction. +`ctx.GasMeter()` is the main gas meter of the application. The main gas meter is initialized in `FinalizeBlock` via `setFinalizeBlockState`, and then tracks gas consumption during execution sequences that lead to state-transitions, i.e. those originally triggered by [`FinalizeBlock`](/docs/sdk/v0.50/learn/advanced/baseapp#finalizeblock). At the beginning of each transaction execution, the main gas meter **must be set to 0** in the [`AnteHandler`](#antehandler), so that it can track gas consumption per-transaction. -Gas consumption can be done manually, generally by the module developer in the [`BeginBlocker`, `EndBlocker`](/v0.50/build/building-modules/beginblock-endblock) or [`Msg` service](/v0.50/build/building-modules/msg-services), but most of the time it is done automatically whenever there is a read or write to the store. This automatic gas consumption logic is implemented in a special store called [`GasKv`](/v0.50/learn/advanced/store#gaskv-store). +Gas consumption can be done manually, generally by the module developer in the [`BeginBlocker`, `EndBlocker`](/docs/sdk/v0.50/documentation/module-system/beginblock-endblock) or [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services), but most of the time it is done automatically whenever there is a read or write to the store. This automatic gas consumption logic is implemented in a special store called [`GasKv`](/docs/sdk/v0.50/learn/advanced/store#gaskv-store). 
-### Block Gas Meter[​](#block-gas-meter "Direct link to Block Gas Meter") +### Block Gas Meter `ctx.BlockGasMeter()` is the gas meter used to track gas consumption per block and make sure it does not go above a certain limit. During the genesis phase, gas consumption is unlimited to accommodate initialisation transactions. -``` +```go app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter())) ``` @@ -69,41 +421,204 @@ Following the genesis block, the block gas meter is set to a finite value by the Modules within the Cosmos SDK can consume block gas at any point during their execution by utilising the `ctx`. This gas consumption primarily occurs during state read/write operations and transaction processing. The block gas meter, accessible via `ctx.BlockGasMeter()`, monitors the total gas usage within a block, enforcing the gas limit to prevent excessive computation. This ensures that gas limits are adhered to on a per-block basis, starting from the first block post-genesis. -``` -gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context())app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) +```go +gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) ``` This above shows the general mechanism for setting the block gas meter with a finite limit based on the block's consensus parameters. -## AnteHandler[​](#antehandler "Direct link to AnteHandler") +## AnteHandler The `AnteHandler` is run for every transaction during `CheckTx` and `FinalizeBlock`, before a Protobuf `Msg` service method for each `sdk.Msg` in the transaction. The anteHandler is not implemented in the core Cosmos SDK but in a module. That said, most applications today use the default implementation defined in the [`auth` module](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth). 
Here is what the `anteHandler` is intended to do in a normal Cosmos SDK application: -* Verify that the transactions are of the correct type. Transaction types are defined in the module that implements the `anteHandler`, and they follow the transaction interface: +- Verify that the transactions are of the correct type. Transaction types are defined in the module that implements the `anteHandler`, and they follow the transaction interface: -types/tx\_msg.go +```go expandable +package types -``` -loading... -``` +import ( + + "encoding/json" + fmt "fmt" + strings "strings" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "github.com/cosmos/cosmos-sdk/codec" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/tx_msg.go#L51-L56) +type ( + / Msg defines the interface a transaction message needed to fulfill. + Msg = proto.Message -This enables developers to play with various types for the transaction of their application. In the default `auth` module, the default transaction type is `Tx`: + / LegacyMsg defines the interface a transaction message needed to fulfill up through + / v0.47. + LegacyMsg interface { + Msg -proto/cosmos/tx/v1beta1/tx.proto + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} -``` -loading... + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. 
+ Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / HasMsgs defines an interface a transaction must fulfill. + HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. + GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() + +string +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / HasValidateBasic defines a type that has a ValidateBasic method. + / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. 
+func MsgTypeURL(msg proto.Message) + +string { + if m, ok := msg.(protov2.Message); ok { + return "/" + string(m.ProtoReflect().Descriptor().FullName()) +} + +return "/" + proto.MessageName(msg) +} + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/tx/v1beta1/tx.proto#L14-L27) +This enables developers to play with various types for the transaction of their application. In the default `auth` module, the default transaction type is `Tx`: + +```protobuf +// Tx is the standard type used for broadcasting transactions. +message Tx { + // body is the processable content of the transaction + TxBody body = 1; + + // auth_info is the authorization related content of the transaction, + // specifically signers, signer modes and fee + AuthInfo auth_info = 2; + + // signatures is a list of signatures that matches the length and order of + // AuthInfo's signer_infos to allow connecting signature meta information like + // public key and signing mode by position. 
+ repeated bytes signatures = 3; +} +``` -* Verify signatures for each [`message`](/v0.50/build/building-modules/messages-and-queries#messages) contained in the transaction. Each `message` should be signed by one or multiple sender(s), and these signatures must be verified in the `anteHandler`. -* During `CheckTx`, verify that the gas prices provided with the transaction is greater than the local `min-gas-prices` (as a reminder, gas-prices can be deducted from the following equation: `fees = gas * gas-prices`). `min-gas-prices` is a parameter local to each full-node and used during `CheckTx` to discard transactions that do not provide a minimum amount of fees. This ensures that the mempool cannot be spammed with garbage transactions. -* Verify that the sender of the transaction has enough funds to cover for the `fees`. When the end-user generates a transaction, they must indicate 2 of the 3 following parameters (the third one being implicit): `fees`, `gas` and `gas-prices`. This signals how much they are willing to pay for nodes to execute their transaction. The provided `gas` value is stored in a parameter called `GasWanted` for later use. -* Set `newCtx.GasMeter` to 0, with a limit of `GasWanted`. **This step is crucial**, as it not only makes sure the transaction cannot consume infinite gas, but also that `ctx.GasMeter` is reset in-between each transaction (`ctx` is set to `newCtx` after `anteHandler` is run, and the `anteHandler` is run each time a transactions executes). +- Verify signatures for each [`message`](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#messages) contained in the transaction. Each `message` should be signed by one or multiple sender(s), and these signatures must be verified in the `anteHandler`. +- During `CheckTx`, verify that the gas prices provided with the transaction are greater than the local `min-gas-prices` (as a reminder, gas-prices can be deduced from the following equation: `fees = gas * gas-prices`).
`min-gas-prices` is a parameter local to each full-node and used during `CheckTx` to discard transactions that do not provide a minimum amount of fees. This ensures that the mempool cannot be spammed with garbage transactions. +- Verify that the sender of the transaction has enough funds to cover the `fees`. When the end-user generates a transaction, they must indicate 2 of the 3 following parameters (the third one being implicit): `fees`, `gas` and `gas-prices`. This signals how much they are willing to pay for nodes to execute their transaction. The provided `gas` value is stored in a parameter called `GasWanted` for later use. +- Set `newCtx.GasMeter` to 0, with a limit of `GasWanted`. **This step is crucial**, as it not only makes sure the transaction cannot consume infinite gas, but also that `ctx.GasMeter` is reset in-between each transaction (`ctx` is set to `newCtx` after `anteHandler` is run, and the `anteHandler` is run each time a transaction executes). -As explained above, the `anteHandler` returns a maximum limit of `gas` the transaction can consume during execution called `GasWanted`. The actual amount consumed in the end is denominated `GasUsed`, and we must therefore have `GasUsed =< GasWanted`. Both `GasWanted` and `GasUsed` are relayed to the underlying consensus engine when [`FinalizeBlock`](/v0.50/learn/advanced/baseapp#finalizeblock) returns. +As explained above, the `anteHandler` returns a maximum limit of `gas` the transaction can consume during execution called `GasWanted`. The actual amount consumed in the end is denominated `GasUsed`, and we must therefore have `GasUsed <= GasWanted`. Both `GasWanted` and `GasUsed` are relayed to the underlying consensus engine when [`FinalizeBlock`](/docs/sdk/v0.50/learn/advanced/baseapp#finalizeblock) returns.
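The fee checks described above reduce to simple arithmetic. Here is a standalone Go sketch of that bookkeeping (not SDK code; the variable names and numeric values are made up for illustration):

```go
package main

import "fmt"

func main() {
	// The user supplies 2 of the 3 parameters; here gas and gas-prices,
	// so the third is implied: fees = gas * gas-prices.
	gasWanted := uint64(200000) // gas limit requested by the user
	gasPrice := 0.025           // price per unit of gas, in the fee denomination

	fees := float64(gasWanted) * gasPrice
	fmt.Printf("fees: %.0f\n", fees)

	// After execution, the invariant GasUsed <= GasWanted must hold.
	gasUsed := uint64(150000)
	fmt.Println("within limit:", gasUsed <= gasWanted)
}
```

With these inputs, `fees` works out to 5000 fee units, and the post-execution check passes because `GasUsed` stays below `GasWanted`.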
diff --git a/docs/sdk/v0.50/learn/beginner/query-lifecycle.mdx b/docs/sdk/v0.50/learn/beginner/query-lifecycle.mdx index caec069b..46307add 100644 --- a/docs/sdk/v0.50/learn/beginner/query-lifecycle.mdx +++ b/docs/sdk/v0.50/learn/beginner/query-lifecycle.mdx @@ -1,159 +1,3069 @@ --- -title: "Query Lifecycle" -description: "Version: v0.50" +title: Query Lifecycle --- - - This document describes the lifecycle of a query in a Cosmos SDK application, from the user interface to application stores and back. The query is referred to as `MyQuery`. - +## Synopsis + +This document describes the lifecycle of a query in a Cosmos SDK application, from the user interface to application stores and back. The query is referred to as `MyQuery`. - * [Transaction Lifecycle](/v0.50/learn/beginner/tx-lifecycle) +**Pre-requisite Readings** + +- [Transaction Lifecycle](/docs/sdk/v0.50/learn/beginner/tx-lifecycle) + -## Query Creation[​](#query-creation "Direct link to Query Creation") +## Query Creation -A [**query**](/v0.50/build/building-modules/messages-and-queries#queries) is a request for information made by end-users of applications through an interface and processed by a full-node. Users can query information about the network, the application itself, and application state directly from the application's stores or modules. Note that queries are different from [transactions](/v0.50/learn/advanced/transactions) (view the lifecycle [here](/v0.50/learn/beginner/tx-lifecycle)), particularly in that they do not require consensus to be processed (as they do not trigger state-transitions); they can be fully handled by one full-node. +A [**query**](/docs/sdk/v0.50/documentation/module-system/messages-and-queries#queries) is a request for information made by end-users of applications through an interface and processed by a full-node. Users can query information about the network, the application itself, and application state directly from the application's stores or modules. 
Note that queries are different from [transactions](/docs/sdk/v0.50/learn/advanced/transactions) (view the lifecycle [here](/docs/sdk/v0.50/learn/beginner/tx-lifecycle)), particularly in that they do not require consensus to be processed (as they do not trigger state-transitions); they can be fully handled by one full-node. -For the purpose of explaining the query lifecycle, let's say the query, `MyQuery`, is requesting a list of delegations made by a certain delegator address in the application called `simapp`. As is to be expected, the [`staking`](/v0.50/build/modules/staking) module handles this query. But first, there are a few ways `MyQuery` can be created by users. +For the purpose of explaining the query lifecycle, let's say the query, `MyQuery`, is requesting a list of delegations made by a certain delegator address in the application called `simapp`. As is to be expected, the [`staking`](/docs/sdk/v0.50/documentation/module-system/modules/staking/README) module handles this query. But first, there are a few ways `MyQuery` can be created by users. -### CLI[​](#cli "Direct link to CLI") +### CLI The main interface for an application is the command-line interface. Users connect to a full-node and run the CLI directly from their machines - the CLI interacts directly with the full-node. To create `MyQuery` from their terminal, users type the following command: -``` +```bash simd query staking delegations ``` -This query command was defined by the [`staking`](/v0.50/build/modules/staking) module developer and added to the list of subcommands by the application developer when creating the CLI. +This query command was defined by the [`staking`](/docs/sdk/v0.50/documentation/module-system/modules/staking/README) module developer and added to the list of subcommands by the application developer when creating the CLI.
Note that the general format is as follows: -``` +```bash simd query [moduleName] [command] --flag ``` -To provide values such as `--node` (the full-node the CLI connects to), the user can use the [`app.toml`](/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) config file to set them or provide them as flags. +To provide values such as `--node` (the full-node the CLI connects to), the user can use the [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) config file to set them or provide them as flags. -The CLI understands a specific set of commands, defined in a hierarchical structure by the application developer: from the [root command](/v0.50/learn/advanced/cli#root-command) (`simd`), the type of command (`Myquery`), the module that contains the command (`staking`), and command itself (`delegations`). Thus, the CLI knows exactly which module handles this command and directly passes the call there. +The CLI understands a specific set of commands, defined in a hierarchical structure by the application developer: from the [root command](/docs/sdk/v0.50/learn/advanced/cli#root-command) (`simd`), the type of command (`query`), the module that contains the command (`staking`), and the command itself (`delegations`). Thus, the CLI knows exactly which module handles this command and directly passes the call there. -### gRPC[​](#grpc "Direct link to gRPC") +### gRPC -Another interface through which users can make queries is [gRPC](https://grpc.io) requests to a [gRPC server](/v0.50/learn/advanced/grpc_rest#grpc-server). The endpoints are defined as [Protocol Buffers](https://developers.google.com/protocol-buffers) service methods inside `.proto` files, written in Protobuf's own language-agnostic interface definition language (IDL). The Protobuf ecosystem developed tools for code-generation from `*.proto` files into various languages. These tools allow to build gRPC clients easily.
+Another interface through which users can make queries is [gRPC](https://grpc.io) requests to a [gRPC server](/docs/sdk/v0.50/learn/advanced/grpc_rest#grpc-server). The endpoints are defined as [Protocol Buffers](https://developers.google.com/protocol-buffers) service methods inside `.proto` files, written in Protobuf's own language-agnostic interface definition language (IDL). The Protobuf ecosystem developed tools for code-generation from `*.proto` files into various languages. These tools allow building gRPC clients easily. One such tool is [grpcurl](https://github.com/fullstorydev/grpcurl), and a gRPC request for `MyQuery` using this client looks like: -``` -grpcurl \ -plaintext # We want results in plain test -import-path ./proto \ # Import these .proto files -proto ./proto/cosmos/staking/v1beta1/query.proto \ # Look into this .proto file for the Query protobuf service -d '{"address":"$MY_DELEGATOR"}' \ # Query arguments localhost:9090 \ # gRPC server endpoint cosmos.staking.v1beta1.Query/Delegations # Fully-qualified service method name +```bash +grpcurl \ + -plaintext # We want results in plain text + -import-path ./proto \ # Import these .proto files + -proto ./proto/cosmos/staking/v1beta1/query.proto \ # Look into this .proto file for the Query protobuf service + -d '{"address":"$MY_DELEGATOR"}' \ # Query arguments + localhost:9090 \ # gRPC server endpoint + cosmos.staking.v1beta1.Query/Delegations # Fully-qualified service method name ``` -### REST[​](#rest "Direct link to REST") +### REST -Another interface through which users can make queries is through HTTP Requests to a [REST server](/v0.50/learn/advanced/grpc_rest#rest-server). The REST server is fully auto-generated from Protobuf services, using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). +Another interface through which users can make queries is HTTP requests to a [REST server](/docs/sdk/v0.50/learn/advanced/grpc_rest#rest-server).
The REST server is fully auto-generated from Protobuf services, using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). An example HTTP request for `MyQuery` looks like: -``` +```bash GET http://localhost:1317/cosmos/staking/v1beta1/delegators/{delegatorAddr}/delegations ``` -## How Queries are Handled by the CLI[​](#how-queries-are-handled-by-the-cli "Direct link to How Queries are Handled by the CLI") +## How Queries are Handled by the CLI The preceding examples show how an external user can interact with a node by querying its state. To understand in more detail the exact lifecycle of a query, let's dig into how the CLI prepares the query, and how the node handles it. The interactions from the users' perspective are a bit different, but the underlying functions are almost identical because they are implementations of the same command defined by the module developer. This step of processing happens within the CLI, gRPC, or REST server, and heavily involves a `client.Context`. -### Context[​](#context "Direct link to Context") +### Context The first thing that is created in the execution of a CLI command is a `client.Context`. A `client.Context` is an object that stores all the data needed to process a request on the user side. In particular, a `client.Context` stores the following: -* **Codec**: The [encoder/decoder](/v0.50/learn/advanced/encoding) used by the application, used to marshal the parameters and query before making the CometBFT RPC request and unmarshal the returned response into a JSON object. The default codec used by the CLI is Protobuf. -* **Account Decoder**: The account decoder from the [`auth`](/v0.50/build/modules/auth) module, which translates `[]byte`s into accounts. -* **RPC Client**: The CometBFT RPC Client, or node, to which requests are relayed. -* **Keyring**: A \[Key Manager]../beginner/03-accounts.md#keyring) used to sign transactions and handle other operations with keys. 
-* **Output Writer**: A [Writer](https://pkg.go.dev/io/#Writer) used to output the response. -* **Configurations**: The flags configured by the user for this command, including `--height`, specifying the height of the blockchain to query, and `--indent`, which indicates to add an indent to the JSON response. +- **Codec**: The [encoder/decoder](/docs/sdk/v0.50/learn/advanced/encoding) used by the application, used to marshal the parameters and query before making the CometBFT RPC request and unmarshal the returned response into a JSON object. The default codec used by the CLI is Protobuf. +- **Account Decoder**: The account decoder from the [`auth`](/docs/sdk/v0.50/documentation/module-system/modules/auth) module, which translates `[]byte`s into accounts. +- **RPC Client**: The CometBFT RPC Client, or node, to which requests are relayed. +- **Keyring**: A [Key Manager](/docs/sdk/v0.50/learn/beginner/accounts#keyring) used to sign transactions and handle other operations with keys. +- **Output Writer**: A [Writer](https://pkg.go.dev/io/#Writer) used to output the response. +- **Configurations**: The flags configured by the user for this command, including `--height`, specifying the height of the blockchain to query, and `--indent`, which indicates to add an indent to the JSON response. The `client.Context` also contains various functions such as `Query()`, which retrieves the RPC Client and makes an ABCI call to relay a query to a full-node.
-client/context.go +```go expandable +package client + +import ( + + "bufio" + "context" + "encoding/json" + "fmt" + "io" + "os" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/viper" + "google.golang.org/grpc" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ PreprocessTxFn defines a hook by which chains can preprocess transactions before broadcasting +type PreprocessTxFn func(chainID string, key keyring.KeyType, tx TxBuilder) + +error + +/ Context implements a typical context created in SDK modules for transaction +/ handling and queries. +type Context struct { + FromAddress sdk.AccAddress + Client CometRPC + GRPCClient *grpc.ClientConn + ChainID string + Codec codec.Codec + InterfaceRegistry codectypes.InterfaceRegistry + Input io.Reader + Keyring keyring.Keyring + KeyringOptions []keyring.Option + Output io.Writer + OutputFormat string + Height int64 + HomeDir string + KeyringDir string + From string + BroadcastMode string + FromName string + SignModeStr string + UseLedger bool + Simulate bool + GenerateOnly bool + Offline bool + SkipConfirm bool + TxConfig TxConfig + AccountRetriever AccountRetriever + NodeURI string + FeePayer sdk.AccAddress + FeeGranter sdk.AccAddress + Viper *viper.Viper + LedgerHasProtobuf bool + PreprocessTxHook PreprocessTxFn + + / IsAux is true when the signer is an auxiliary signer (e.g. the tipper). + IsAux bool + + / TODO: Deprecated (remove). + LegacyAmino *codec.LegacyAmino + + / CmdContext is the context.Context from the Cobra command. + CmdContext context.Context +} + +/ WithCmdContext returns a copy of the context with an updated context.Context, +/ usually set to the cobra cmd context. 
+func (ctx Context) + +WithCmdContext(c context.Context) + +Context { + ctx.CmdContext = c + return ctx +} + +/ WithKeyring returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyring(k keyring.Keyring) + +Context { + ctx.Keyring = k + return ctx +} + +/ WithKeyringOptions returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyringOptions(opts ...keyring.Option) + +Context { + ctx.KeyringOptions = opts + return ctx +} + +/ WithInput returns a copy of the context with an updated input. +func (ctx Context) + +WithInput(r io.Reader) + +Context { + / convert to a bufio.Reader to have a shared buffer between the keyring and the + / the Commands, ensuring a read from one advance the read pointer for the other. + / see https://github.com/cosmos/cosmos-sdk/issues/9566. + ctx.Input = bufio.NewReader(r) + +return ctx +} + +/ WithCodec returns a copy of the Context with an updated Codec. +func (ctx Context) -``` -loading... -``` +WithCodec(m codec.Codec) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/context.go#L25-L68) +Context { + ctx.Codec = m + return ctx +} -The `client.Context`'s primary role is to store data used during interactions with the end-user and provide methods to interact with this data - it is used before and after the query is processed by the full-node. Specifically, in handling `MyQuery`, the `client.Context` is utilized to encode the query parameters, retrieve the full-node, and write the output. Prior to being relayed to a full-node, the query needs to be encoded into a `[]byte` form, as full-nodes are application-agnostic and do not understand specific types. The full-node (RPC Client) itself is retrieved using the `client.Context`, which knows which node the user CLI is connected to. The query is relayed to this full-node to be processed. Finally, the `client.Context` contains a `Writer` to write output when the response is returned. 
These steps are further described in later sections. +/ WithLegacyAmino returns a copy of the context with an updated LegacyAmino codec. +/ TODO: Deprecated (remove). +func (ctx Context) -### Arguments and Route Creation[​](#arguments-and-route-creation "Direct link to Arguments and Route Creation") +WithLegacyAmino(cdc *codec.LegacyAmino) -At this point in the lifecycle, the user has created a CLI command with all of the data they wish to include in their query. A `client.Context` exists to assist in the rest of the `MyQuery`'s journey. Now, the next step is to parse the command or request, extract the arguments, and encode everything. These steps all happen on the user side within the interface they are interacting with. +Context { + ctx.LegacyAmino = cdc + return ctx +} -#### Encoding[​](#encoding "Direct link to Encoding") +/ WithOutput returns a copy of the context with an updated output writer (e.g. stdout). +func (ctx Context) -In our case (querying an address's delegations), `MyQuery` contains an [address](/v0.50/learn/beginner/accounts#addresses) `delegatorAddress` as its only argument. However, the request can only contain `[]byte`s, as it is ultimately relayed to a consensus engine (e.g. CometBFT) of a full-node that has no inherent knowledge of the application types. Thus, the `codec` of `client.Context` is used to marshal the address. +WithOutput(w io.Writer) -Here is what the code looks like for the CLI command: +Context { + ctx.Output = w + return ctx +} -x/staking/client/cli/query.go +/ WithFrom returns a copy of the context with an updated from address or name. +func (ctx Context) -``` -loading... 
-``` +WithFrom(from string) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/client/cli/query.go#L315-L318) +Context { + ctx.From = from + return ctx +} -#### gRPC Query Client Creation[​](#grpc-query-client-creation "Direct link to gRPC Query Client Creation") +/ WithOutputFormat returns a copy of the context with an updated OutputFormat field. +func (ctx Context) -The Cosmos SDK leverages code generated from Protobuf services to make queries. The `staking` module's `MyQuery` service generates a `queryClient`, which the CLI uses to make queries. Here is the relevant code: +WithOutputFormat(format string) -x/staking/client/cli/query.go +Context { + ctx.OutputFormat = format + return ctx +} -``` -loading... +/ WithNodeURI returns a copy of the context with an updated node URI. +func (ctx Context) + +WithNodeURI(nodeURI string) + +Context { + ctx.NodeURI = nodeURI + return ctx +} + +/ WithHeight returns a copy of the context with an updated height. +func (ctx Context) + +WithHeight(height int64) + +Context { + ctx.Height = height + return ctx +} + +/ WithClient returns a copy of the context with an updated RPC client +/ instance. +func (ctx Context) + +WithClient(client CometRPC) + +Context { + ctx.Client = client + return ctx +} + +/ WithGRPCClient returns a copy of the context with an updated GRPC client +/ instance. +func (ctx Context) + +WithGRPCClient(grpcClient *grpc.ClientConn) + +Context { + ctx.GRPCClient = grpcClient + return ctx +} + +/ WithUseLedger returns a copy of the context with an updated UseLedger flag. +func (ctx Context) + +WithUseLedger(useLedger bool) + +Context { + ctx.UseLedger = useLedger + return ctx +} + +/ WithChainID returns a copy of the context with an updated chain ID. +func (ctx Context) + +WithChainID(chainID string) + +Context { + ctx.ChainID = chainID + return ctx +} + +/ WithHomeDir returns a copy of the Context with HomeDir set. 
+func (ctx Context) + +WithHomeDir(dir string) + +Context { + if dir != "" { + ctx.HomeDir = dir +} + +return ctx +} + +/ WithKeyringDir returns a copy of the Context with KeyringDir set. +func (ctx Context) + +WithKeyringDir(dir string) + +Context { + ctx.KeyringDir = dir + return ctx +} + +/ WithGenerateOnly returns a copy of the context with updated GenerateOnly value +func (ctx Context) + +WithGenerateOnly(generateOnly bool) + +Context { + ctx.GenerateOnly = generateOnly + return ctx +} + +/ WithSimulation returns a copy of the context with updated Simulate value +func (ctx Context) + +WithSimulation(simulate bool) + +Context { + ctx.Simulate = simulate + return ctx +} + +/ WithOffline returns a copy of the context with updated Offline value. +func (ctx Context) + +WithOffline(offline bool) + +Context { + ctx.Offline = offline + return ctx +} + +/ WithFromName returns a copy of the context with an updated from account name. +func (ctx Context) + +WithFromName(name string) + +Context { + ctx.FromName = name + return ctx +} + +/ WithFromAddress returns a copy of the context with an updated from account +/ address. +func (ctx Context) + +WithFromAddress(addr sdk.AccAddress) + +Context { + ctx.FromAddress = addr + return ctx +} + +/ WithFeePayerAddress returns a copy of the context with an updated fee payer account +/ address. +func (ctx Context) + +WithFeePayerAddress(addr sdk.AccAddress) + +Context { + ctx.FeePayer = addr + return ctx +} + +/ WithFeeGranterAddress returns a copy of the context with an updated fee granter account +/ address. +func (ctx Context) + +WithFeeGranterAddress(addr sdk.AccAddress) + +Context { + ctx.FeeGranter = addr + return ctx +} + +/ WithBroadcastMode returns a copy of the context with an updated broadcast +/ mode. +func (ctx Context) + +WithBroadcastMode(mode string) + +Context { + ctx.BroadcastMode = mode + return ctx +} + +/ WithSignModeStr returns a copy of the context with an updated SignMode +/ value. 
+func (ctx Context) + +WithSignModeStr(signModeStr string) + +Context { + ctx.SignModeStr = signModeStr + return ctx +} + +/ WithSkipConfirmation returns a copy of the context with an updated SkipConfirm +/ value. +func (ctx Context) + +WithSkipConfirmation(skip bool) + +Context { + ctx.SkipConfirm = skip + return ctx +} + +/ WithTxConfig returns the context with an updated TxConfig +func (ctx Context) + +WithTxConfig(generator TxConfig) + +Context { + ctx.TxConfig = generator + return ctx +} + +/ WithAccountRetriever returns the context with an updated AccountRetriever +func (ctx Context) + +WithAccountRetriever(retriever AccountRetriever) + +Context { + ctx.AccountRetriever = retriever + return ctx +} + +/ WithInterfaceRegistry returns the context with an updated InterfaceRegistry +func (ctx Context) + +WithInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) + +Context { + ctx.InterfaceRegistry = interfaceRegistry + return ctx +} + +/ WithViper returns the context with Viper field. This Viper instance is used to read +/ client-side config from the config file. +func (ctx Context) + +WithViper(prefix string) + +Context { + v := viper.New() + +v.SetEnvPrefix(prefix) + +v.AutomaticEnv() + +ctx.Viper = v + return ctx +} + +/ WithAux returns a copy of the context with an updated IsAux value. +func (ctx Context) + +WithAux(isAux bool) + +Context { + ctx.IsAux = isAux + return ctx +} + +/ WithLedgerHasProto returns the context with the provided boolean value, indicating +/ whether the target Ledger application can support Protobuf payloads. +func (ctx Context) + +WithLedgerHasProtobuf(val bool) + +Context { + ctx.LedgerHasProtobuf = val + return ctx +} + +/ WithPreprocessTxHook returns the context with the provided preprocessing hook, which +/ enables chains to preprocess the transaction using the builder. 
+func (ctx Context) + +WithPreprocessTxHook(preprocessFn PreprocessTxFn) + +Context { + ctx.PreprocessTxHook = preprocessFn + return ctx +} + +/ PrintString prints the raw string to ctx.Output if it's defined, otherwise to os.Stdout +func (ctx Context) + +PrintString(str string) + +error { + return ctx.PrintBytes([]byte(str)) +} + +/ PrintBytes prints the raw bytes to ctx.Output if it's defined, otherwise to os.Stdout. +/ NOTE: for printing a complex state object, you should use ctx.PrintOutput +func (ctx Context) + +PrintBytes(o []byte) + +error { + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err := writer.Write(o) + +return err +} + +/ PrintProto outputs toPrint to the ctx.Output based on ctx.OutputFormat which is +/ either text or json. If text, toPrint will be YAML encoded. Otherwise, toPrint +/ will be JSON encoded using ctx.Codec. An error is returned upon failure. +func (ctx Context) + +PrintProto(toPrint proto.Message) + +error { + / always serialize JSON initially because proto json can't be directly YAML encoded + out, err := ctx.Codec.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintObjectLegacy is a variant of PrintProto that doesn't require a proto.Message type +/ and uses amino JSON encoding. +/ Deprecated: It will be removed in the near future! +func (ctx Context) + +PrintObjectLegacy(toPrint interface{ +}) + +error { + out, err := ctx.LegacyAmino.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintRaw is a variant of PrintProto that doesn't require a proto.Message type +/ and uses a raw JSON message. No marshaling is performed. 
func (ctx Context) PrintRaw(toPrint json.RawMessage) error {
	return ctx.printOutput(toPrint)
}

func (ctx Context) printOutput(out []byte) error {
	var err error
	if ctx.OutputFormat == "text" {
		out, err = yaml.JSONToYAML(out)
		if err != nil {
			return err
		}
	}

	writer := ctx.Output
	if writer == nil {
		writer = os.Stdout
	}

	_, err = writer.Write(out)
	if err != nil {
		return err
	}

	if ctx.OutputFormat != "text" {
		// append new-line for formats besides YAML
		_, err = writer.Write([]byte("\n"))
		if err != nil {
			return err
		}
	}

	return nil
}

// GetFromFields returns a from account address, account name and keyring type, given either an address or key name.
// If clientCtx.Simulate is true the keystore is not accessed and a valid address must be provided
// If clientCtx.GenerateOnly is true the keystore is only accessed if a key name is provided
func GetFromFields(clientCtx Context, kr keyring.Keyring, from string) (sdk.AccAddress, string, keyring.KeyType, error) {
	if from == "" {
		return nil, "", 0, nil
	}

	addr, err := sdk.AccAddressFromBech32(from)
	switch {
	case clientCtx.Simulate:
		if err != nil {
			return nil, "", 0, fmt.Errorf("a valid bech32 address must be provided in simulation mode: %w", err)
		}

		return addr, "", 0, nil
	case clientCtx.GenerateOnly:
		if err == nil {
			return addr, "", 0, nil
		}
	}

	var k *keyring.Record
	if err == nil {
		k, err = kr.KeyByAddress(addr)
		if err != nil {
			return nil, "", 0, err
		}
	} else {
		k, err = kr.Key(from)
		if err != nil {
			return nil, "", 0, err
		}
	}

	addr, err = k.GetAddress()
	if err != nil {
		return nil, "", 0, err
	}

	return addr, k.Name, k.GetType(), nil
}

// NewKeyringFromBackend gets a Keyring object from a backend
func NewKeyringFromBackend(ctx Context, backend string) (keyring.Keyring, error) {
	if ctx.Simulate {
		backend = keyring.BackendMemory
	}

	return keyring.New(sdk.KeyringServiceName(), backend,
		ctx.KeyringDir, ctx.Input, ctx.Codec, ctx.KeyringOptions...)
}
```

The `client.Context`'s primary role is to store data used during interactions with the end-user and provide methods to interact with this data - it is used before and after the query is processed by the full-node. Specifically, in handling `MyQuery`, the `client.Context` is utilized to encode the query parameters, retrieve the full-node, and write the output. Prior to being relayed to a full-node, the query needs to be encoded into a `[]byte` form, as full-nodes are application-agnostic and do not understand specific types. The full-node (RPC Client) itself is retrieved using the `client.Context`, which knows which node the user CLI is connected to. The query is relayed to this full-node to be processed. Finally, the `client.Context` contains a `Writer` to write output when the response is returned. These steps are further described in later sections.

### Arguments and Route Creation

At this point in the lifecycle, the user has created a CLI command with all of the data they wish to include in their query. A `client.Context` exists to assist in the rest of the `MyQuery`'s journey.
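The `With*` methods shown above all follow the same copy-on-write pattern: the receiver is a plain value, so assigning to a field inside the method only changes a copy, and the caller chains the returned contexts. Here is a minimal, self-contained sketch of that pattern; the `Context` struct and field names below are illustrative stand-ins, not the SDK's actual `client.Context`:

```go
package main

import "fmt"

// Context is a simplified stand-in for the SDK's client.Context: a value
// type whose With* methods return an updated copy instead of mutating
// the receiver.
type Context struct {
	ChainID      string
	OutputFormat string
}

// WithChainID returns a copy of the context with ChainID set.
func (ctx Context) WithChainID(id string) Context {
	ctx.ChainID = id
	return ctx
}

// WithOutputFormat returns a copy of the context with OutputFormat set.
func (ctx Context) WithOutputFormat(format string) Context {
	ctx.OutputFormat = format
	return ctx
}

func main() {
	base := Context{}
	// Calls chain naturally because each method returns the updated copy.
	cfg := base.WithChainID("my-chain").WithOutputFormat("json")
	fmt.Println(cfg.ChainID, cfg.OutputFormat) // my-chain json
	fmt.Println(base.ChainID == "")            // true: base is unchanged
}
```

Because the receiver is passed by value, partially configured contexts can be reused safely: deriving `cfg` from `base` never affects `base`.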
Now, the next step is to parse the command or request, extract the arguments, and encode everything. These steps all happen on the user side within the interface they are interacting with.

#### Encoding

In our case (querying an address's delegations), `MyQuery` contains an [address](/docs/sdk/v0.50/learn/beginner/accounts#addresses) `delegatorAddress` as its only argument. However, the request can only contain `[]byte`s, as it is ultimately relayed to a consensus engine (e.g. CometBFT) of a full-node that has no inherent knowledge of the application types. Thus, the `codec` of `client.Context` is used to marshal the address.

Here is what the code looks like for the CLI command:

```go expandable
package cli

import (
	"fmt"
	"strconv"
	"strings"

	"cosmossdk.io/core/address"
	"github.com/spf13/cobra"

	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/client/flags"
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/version"
	"github.com/cosmos/cosmos-sdk/x/staking/types"
)

// GetQueryCmd returns the cli query commands for this module
func GetQueryCmd(ac address.Codec) *cobra.Command {
	stakingQueryCmd := &cobra.Command{
		Use:                        types.ModuleName,
		Short:                      "Querying commands for the staking module",
		DisableFlagParsing:         true,
		SuggestionsMinimumDistance: 2,
		RunE:                       client.ValidateCmd,
	}

	stakingQueryCmd.AddCommand(
		GetCmdQueryDelegation(ac),
		GetCmdQueryDelegations(ac),
		GetCmdQueryUnbondingDelegation(ac),
		GetCmdQueryUnbondingDelegations(ac),
		GetCmdQueryRedelegation(ac),
		GetCmdQueryRedelegations(ac),
		GetCmdQueryValidator(),
		GetCmdQueryValidators(),
		GetCmdQueryValidatorDelegations(),
		GetCmdQueryValidatorUnbondingDelegations(),
		GetCmdQueryValidatorRedelegations(),
		GetCmdQueryHistoricalInfo(),
		GetCmdQueryParams(),
		GetCmdQueryPool(),
	)

	return stakingQueryCmd
}

// GetCmdQueryValidator implements the validator query command.
func GetCmdQueryValidator() *cobra.Command {
	bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix()

	cmd := &cobra.Command{
		Use:   "validator [validator-addr]",
		Short: "Query a validator",
		Long: strings.TrimSpace(
			fmt.Sprintf(`Query details about an individual validator.

Example:
$ %s query staking validator %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
`,
				version.AppName, bech32PrefixValAddr,
			),
		),
		Args: cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			clientCtx, err := client.GetClientQueryContext(cmd)
			if err != nil {
				return err
			}
			queryClient := types.NewQueryClient(clientCtx)

			addr, err := sdk.ValAddressFromBech32(args[0])
			if err != nil {
				return err
			}

			params := &types.QueryValidatorRequest{ValidatorAddr: addr.String()}

			res, err := queryClient.Validator(cmd.Context(), params)
			if err != nil {
				return err
			}

			return clientCtx.PrintProto(&res.Validator)
		},
	}

	flags.AddQueryFlagsToCmd(cmd)

	return cmd
}

// GetCmdQueryValidators implements the query all validators command.
func GetCmdQueryValidators() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "validators",
		Short: "Query for all validators",
		Args:  cobra.NoArgs,
		Long: strings.TrimSpace(
			fmt.Sprintf(`Query details about all validators on a network.

Example:
$ %s query staking validators
`,
				version.AppName,
			),
		),
		RunE: func(cmd *cobra.Command, args []string) error {
			clientCtx, err := client.GetClientQueryContext(cmd)
			if err != nil {
				return err
			}
			queryClient := types.NewQueryClient(clientCtx)

			pageReq, err := client.ReadPageRequest(cmd.Flags())
			if err != nil {
				return err
			}

			result, err := queryClient.Validators(cmd.Context(), &types.QueryValidatorsRequest{
				// Leaving status empty on purpose to query all validators.
+ Pagination: pageReq, +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(result) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validators") + +return cmd +} + +/ GetCmdQueryValidatorUnbondingDelegations implements the query all unbonding delegatations from a validator command. +func GetCmdQueryValidatorUnbondingDelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegations-from [validator-addr]", + Short: "Query all unbonding delegatations from a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations that are unbonding _from_ a validator. + +Example: +$ %s query staking unbonding-delegations-from %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryValidatorUnbondingDelegationsRequest{ + ValidatorAddr: valAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.ValidatorUnbondingDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "unbonding delegations") + +return cmd +} + +/ GetCmdQueryValidatorRedelegations implements the query all redelegatations +/ from a validator command. 
+func GetCmdQueryValidatorRedelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegations-from [validator-addr]", + Short: "Query all outgoing redelegatations from a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations that are redelegating _from_ a validator. + +Example: +$ %s query staking redelegations-from %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valSrcAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + SrcValidatorAddr: valSrcAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validator redelegations") + +return cmd +} + +/ GetCmdQueryDelegation the query delegation command. 
+func GetCmdQueryDelegation(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "delegation [delegator-addr] [validator-addr]", + Short: "Query a delegation based on address and validator address", + Example: fmt.Sprintf(`%s query staking delegation [delegator-address] [validator-address]`, + version.AppName), + Long: "Query delegations for an individual delegator on an individual validator", + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +valAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + params := &types.QueryDelegationRequest{ + DelegatorAddr: args[0], + ValidatorAddr: valAddr.String(), +} + +res, err := queryClient.Delegation(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res.DelegationResponse) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryDelegations implements the command to query all the delegations +/ made from one delegator. +func GetCmdQueryDelegations(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "delegations [delegator-addr]", + Short: "Query all delegations made by one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations for an individual delegator on all validators. 
+ +Example: +$ %s query staking delegations %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryDelegatorDelegationsRequest{ + DelegatorAddr: args[0], + Pagination: pageReq, +} + +res, err := queryClient.DelegatorDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "delegations") + +return cmd +} + +/ GetCmdQueryValidatorDelegations implements the command to query all the +/ delegations to a specific validator. +func GetCmdQueryValidatorDelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "delegations-to [validator-addr]", + Short: "Query all delegations made to one validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations on an individual validator. 
+ +Example: +$ %s query staking delegations-to %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryValidatorDelegationsRequest{ + ValidatorAddr: valAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.ValidatorDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validator delegations") + +return cmd +} + +/ GetCmdQueryUnbondingDelegation implements the command to query a single +/ unbonding-delegation record. +func GetCmdQueryUnbondingDelegation(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegation [delegator-addr] [validator-addr]", + Short: "Query an unbonding-delegation record based on delegator and validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query unbonding delegations for an individual delegator on an individual validator. 
+ +Example: +$ %s query staking unbonding-delegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + params := &types.QueryUnbondingDelegationRequest{ + DelegatorAddr: args[0], + ValidatorAddr: valAddr.String(), +} + +res, err := queryClient.UnbondingDelegation(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Unbond) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryUnbondingDelegations implements the command to query all the +/ unbonding-delegation records for a delegator. +func GetCmdQueryUnbondingDelegations(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegations [delegator-addr]", + Short: "Query all unbonding-delegations records for one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query unbonding delegations for an individual delegator. 
+ +Example: +$ %s query staking unbonding-delegations %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryDelegatorUnbondingDelegationsRequest{ + DelegatorAddr: args[0], + Pagination: pageReq, +} + +res, err := queryClient.DelegatorUnbondingDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "unbonding delegations") + +return cmd +} + +/ GetCmdQueryRedelegation implements the command to query a single +/ redelegation record. +func GetCmdQueryRedelegation(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr]", + Short: "Query a redelegation record based on delegator and a source and destination validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query a redelegation record for an individual delegator between a source and destination validator. 
+ +Example: +$ %s query staking redelegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +valSrcAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + +valDstAddr, err := sdk.ValAddressFromBech32(args[2]) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + DelegatorAddr: args[0], + DstValidatorAddr: valDstAddr.String(), + SrcValidatorAddr: valSrcAddr.String(), +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryRedelegations implements the command to query all the +/ redelegation records for a delegator. +func GetCmdQueryRedelegations(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegations [delegator-addr]", + Args: cobra.ExactArgs(1), + Short: "Query all redelegations records for one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query all redelegation records for an individual delegator. 
+ +Example: +$ %s query staking redelegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + DelegatorAddr: args[0], + Pagination: pageReq, +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "delegator redelegations") + +return cmd +} + +/ GetCmdQueryHistoricalInfo implements the historical info query command +func GetCmdQueryHistoricalInfo() *cobra.Command { + cmd := &cobra.Command{ + Use: "historical-info [height]", + Args: cobra.ExactArgs(1), + Short: "Query historical info at given height", + Long: strings.TrimSpace( + fmt.Sprintf(`Query historical info at given height. + +Example: +$ %s query staking historical-info 5 +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +height, err := strconv.ParseInt(args[0], 10, 64) + if err != nil || height < 0 { + return fmt.Errorf("height argument provided must be a non-negative-integer: %v", err) +} + params := &types.QueryHistoricalInfoRequest{ + Height: height +} + +res, err := queryClient.HistoricalInfo(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res.Hist) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryPool implements the pool query command. 
func GetCmdQueryPool() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "pool",
		Args:  cobra.NoArgs,
		Short: "Query the current staking pool values",
		Long: strings.TrimSpace(
			fmt.Sprintf(`Query values for amounts stored in the staking pool.

Example:
$ %s query staking pool
`,
				version.AppName,
			),
		),
		RunE: func(cmd *cobra.Command, args []string) error {
			clientCtx, err := client.GetClientQueryContext(cmd)
			if err != nil {
				return err
			}
			queryClient := types.NewQueryClient(clientCtx)

			res, err := queryClient.Pool(cmd.Context(), &types.QueryPoolRequest{})
			if err != nil {
				return err
			}

			return clientCtx.PrintProto(&res.Pool)
		},
	}

	flags.AddQueryFlagsToCmd(cmd)

	return cmd
}

// GetCmdQueryParams implements the params query command.
func GetCmdQueryParams() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "params",
		Args:  cobra.NoArgs,
		Short: "Query the current staking parameters information",
		Long: strings.TrimSpace(
			fmt.Sprintf(`Query values set as staking parameters.

Example:
$ %s query staking params
`,
				version.AppName,
			),
		),
		RunE: func(cmd *cobra.Command, args []string) error {
			clientCtx, err := client.GetClientQueryContext(cmd)
			if err != nil {
				return err
			}
			queryClient := types.NewQueryClient(clientCtx)

			res, err := queryClient.Params(cmd.Context(), &types.QueryParamsRequest{})
			if err != nil {
				return err
			}

			return clientCtx.PrintProto(&res.Params)
		},
	}

	flags.AddQueryFlagsToCmd(cmd)

	return cmd
}
```

#### gRPC Query Client Creation

The Cosmos SDK leverages code generated from Protobuf services to make queries. The `staking` module's `MyQuery` service generates a `queryClient`, which the CLI uses to make queries.
Here is the relevant code: + +```go expandable +package cli + +import ( + + "fmt" + "strconv" + "strings" + "cosmossdk.io/core/address" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ GetQueryCmd returns the cli query commands for this module +func GetQueryCmd(ac address.Codec) *cobra.Command { + stakingQueryCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Querying commands for the staking module", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +stakingQueryCmd.AddCommand( + GetCmdQueryDelegation(ac), + GetCmdQueryDelegations(ac), + GetCmdQueryUnbondingDelegation(ac), + GetCmdQueryUnbondingDelegations(ac), + GetCmdQueryRedelegation(ac), + GetCmdQueryRedelegations(ac), + GetCmdQueryValidator(), + GetCmdQueryValidators(), + GetCmdQueryValidatorDelegations(), + GetCmdQueryValidatorUnbondingDelegations(), + GetCmdQueryValidatorRedelegations(), + GetCmdQueryHistoricalInfo(), + GetCmdQueryParams(), + GetCmdQueryPool(), + ) + +return stakingQueryCmd +} + +/ GetCmdQueryValidator implements the validator query command. +func GetCmdQueryValidator() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "validator [validator-addr]", + Short: "Query a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query details about an individual validator. 
+ +Example: +$ %s query staking validator %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +addr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + params := &types.QueryValidatorRequest{ + ValidatorAddr: addr.String() +} + +res, err := queryClient.Validator(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Validator) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryValidators implements the query all validators command. +func GetCmdQueryValidators() *cobra.Command { + cmd := &cobra.Command{ + Use: "validators", + Short: "Query for all validators", + Args: cobra.NoArgs, + Long: strings.TrimSpace( + fmt.Sprintf(`Query details about all validators on a network. + +Example: +$ %s query staking validators +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + +result, err := queryClient.Validators(cmd.Context(), &types.QueryValidatorsRequest{ + / Leaving status empty on purpose to query all validators. + Pagination: pageReq, +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(result) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validators") + +return cmd +} + +/ GetCmdQueryValidatorUnbondingDelegations implements the query all unbonding delegatations from a validator command. 
+func GetCmdQueryValidatorUnbondingDelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegations-from [validator-addr]", + Short: "Query all unbonding delegatations from a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations that are unbonding _from_ a validator. + +Example: +$ %s query staking unbonding-delegations-from %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryValidatorUnbondingDelegationsRequest{ + ValidatorAddr: valAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.ValidatorUnbondingDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "unbonding delegations") + +return cmd +} + +/ GetCmdQueryValidatorRedelegations implements the query all redelegatations +/ from a validator command. +func GetCmdQueryValidatorRedelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegations-from [validator-addr]", + Short: "Query all outgoing redelegatations from a validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations that are redelegating _from_ a validator. 
+ +Example: +$ %s query staking redelegations-from %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valSrcAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + SrcValidatorAddr: valSrcAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validator redelegations") + +return cmd +} + +/ GetCmdQueryDelegation the query delegation command. +func GetCmdQueryDelegation(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "delegation [delegator-addr] [validator-addr]", + Short: "Query a delegation based on address and validator address", + Example: fmt.Sprintf(`%s query staking delegation [delegator-address] [validator-address]`, + version.AppName), + Long: "Query delegations for an individual delegator on an individual validator", + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +valAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + params := &types.QueryDelegationRequest{ + DelegatorAddr: args[0], + ValidatorAddr: valAddr.String(), +} + +res, err := queryClient.Delegation(cmd.Context(), params) + if err != nil { + return err +} + +return 
clientCtx.PrintProto(res.DelegationResponse) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryDelegations implements the command to query all the delegations +/ made from one delegator. +func GetCmdQueryDelegations(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "delegations [delegator-addr]", + Short: "Query all delegations made by one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations for an individual delegator on all validators. + +Example: +$ %s query staking delegations %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryDelegatorDelegationsRequest{ + DelegatorAddr: args[0], + Pagination: pageReq, +} + +res, err := queryClient.DelegatorDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "delegations") + +return cmd +} + +/ GetCmdQueryValidatorDelegations implements the command to query all the +/ delegations to a specific validator. +func GetCmdQueryValidatorDelegations() *cobra.Command { + bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "delegations-to [validator-addr]", + Short: "Query all delegations made to one validator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query delegations on an individual validator. 
+ +Example: +$ %s query staking delegations-to %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryValidatorDelegationsRequest{ + ValidatorAddr: valAddr.String(), + Pagination: pageReq, +} + +res, err := queryClient.ValidatorDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "validator delegations") + +return cmd +} + +/ GetCmdQueryUnbondingDelegation implements the command to query a single +/ unbonding-delegation record. +func GetCmdQueryUnbondingDelegation(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegation [delegator-addr] [validator-addr]", + Short: "Query an unbonding-delegation record based on delegator and validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query unbonding delegations for an individual delegator on an individual validator. 
+ +Example: +$ %s query staking unbonding-delegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +valAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + params := &types.QueryUnbondingDelegationRequest{ + DelegatorAddr: args[0], + ValidatorAddr: valAddr.String(), +} + +res, err := queryClient.UnbondingDelegation(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Unbond) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryUnbondingDelegations implements the command to query all the +/ unbonding-delegation records for a delegator. +func GetCmdQueryUnbondingDelegations(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "unbonding-delegations [delegator-addr]", + Short: "Query all unbonding-delegations records for one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query unbonding delegations for an individual delegator. 
+ +Example: +$ %s query staking unbonding-delegations %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + Args: cobra.ExactArgs(1), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryDelegatorUnbondingDelegationsRequest{ + DelegatorAddr: args[0], + Pagination: pageReq, +} + +res, err := queryClient.DelegatorUnbondingDelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "unbonding delegations") + +return cmd +} + +/ GetCmdQueryRedelegation implements the command to query a single +/ redelegation record. +func GetCmdQueryRedelegation(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + +bech32PrefixValAddr := sdk.GetConfig().GetBech32ValidatorAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr]", + Short: "Query a redelegation record based on delegator and a source and destination validator address", + Long: strings.TrimSpace( + fmt.Sprintf(`Query a redelegation record for an individual delegator between a source and destination validator. 
+ +Example: +$ %s query staking redelegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p %s1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm %s1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +`, + version.AppName, bech32PrefixAccAddr, bech32PrefixValAddr, bech32PrefixValAddr, + ), + ), + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +valSrcAddr, err := sdk.ValAddressFromBech32(args[1]) + if err != nil { + return err +} + +valDstAddr, err := sdk.ValAddressFromBech32(args[2]) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + DelegatorAddr: args[0], + DstValidatorAddr: valDstAddr.String(), + SrcValidatorAddr: valSrcAddr.String(), +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryRedelegations implements the command to query all the +/ redelegation records for a delegator. +func GetCmdQueryRedelegations(ac address.Codec) *cobra.Command { + bech32PrefixAccAddr := sdk.GetConfig().GetBech32AccountAddrPrefix() + cmd := &cobra.Command{ + Use: "redelegations [delegator-addr]", + Args: cobra.ExactArgs(1), + Short: "Query all redelegations records for one delegator", + Long: strings.TrimSpace( + fmt.Sprintf(`Query all redelegation records for an individual delegator. 
+ +Example: +$ %s query staking redelegation %s1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +`, + version.AppName, bech32PrefixAccAddr, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + + _, err = ac.StringToBytes(args[0]) + if err != nil { + return err +} + +pageReq, err := client.ReadPageRequest(cmd.Flags()) + if err != nil { + return err +} + params := &types.QueryRedelegationsRequest{ + DelegatorAddr: args[0], + Pagination: pageReq, +} + +res, err := queryClient.Redelegations(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +flags.AddPaginationFlagsToCmd(cmd, "delegator redelegations") + +return cmd +} + +/ GetCmdQueryHistoricalInfo implements the historical info query command +func GetCmdQueryHistoricalInfo() *cobra.Command { + cmd := &cobra.Command{ + Use: "historical-info [height]", + Args: cobra.ExactArgs(1), + Short: "Query historical info at given height", + Long: strings.TrimSpace( + fmt.Sprintf(`Query historical info at given height. + +Example: +$ %s query staking historical-info 5 +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +height, err := strconv.ParseInt(args[0], 10, 64) + if err != nil || height < 0 { + return fmt.Errorf("height argument provided must be a non-negative-integer: %v", err) +} + params := &types.QueryHistoricalInfoRequest{ + Height: height +} + +res, err := queryClient.HistoricalInfo(cmd.Context(), params) + if err != nil { + return err +} + +return clientCtx.PrintProto(res.Hist) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryPool implements the pool query command. 
+func GetCmdQueryPool() *cobra.Command { + cmd := &cobra.Command{ + Use: "pool", + Args: cobra.NoArgs, + Short: "Query the current staking pool values", + Long: strings.TrimSpace( + fmt.Sprintf(`Query values for amounts stored in the staking pool. + +Example: +$ %s query staking pool +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Pool(cmd.Context(), &types.QueryPoolRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Pool) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} + +/ GetCmdQueryParams implements the params query command. +func GetCmdQueryParams() *cobra.Command { + cmd := &cobra.Command{ + Use: "params", + Args: cobra.NoArgs, + Short: "Query the current staking parameters information", + Long: strings.TrimSpace( + fmt.Sprintf(`Query values set as staking parameters. + +Example: +$ %s query staking params +`, + version.AppName, + ), + ), + RunE: func(cmd *cobra.Command, args []string) + +error { + clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + queryClient := types.NewQueryClient(clientCtx) + +res, err := queryClient.Params(cmd.Context(), &types.QueryParamsRequest{ +}) + if err != nil { + return err +} + +return clientCtx.PrintProto(&res.Params) +}, +} + +flags.AddQueryFlagsToCmd(cmd) + +return cmd +} ``` -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/query.go#L79-L113) +Under the hood, the `client.Context` has a `Query()` function used to retrieve the pre-configured node and relay a query to it; the function takes the query fully-qualified service method name as path (in our case: `/cosmos.staking.v1beta1.Query/Delegations`), and arguments as parameters. 
It first retrieves the RPC Client (called the [**node**](/docs/sdk/v0.50/learn/advanced/node)) configured by the user to relay this query to, and creates the `ABCIQueryOptions` (parameters formatted for the ABCI call). The node is then used to make the ABCI call, `ABCIQueryWithOptions()`. + +Here is what the code looks like: + +```go expandable +package client + +import ( + + "context" + "fmt" + "strings" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + rpcclient "github.com/cometbft/cometbft/rpc/client" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + "cosmossdk.io/store/rootmulti" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) -## RPC[​](#rpc "Direct link to RPC") +/ GetNode returns an RPC client. If the context's client is not defined, an +/ error is returned. +func (ctx Context) -With a call to `ABCIQueryWithOptions()`, `MyQuery` is received by a [full-node](/v0.50/learn/advanced/encoding) which then processes the request. Note that, while the RPC is made to the consensus engine (e.g. CometBFT) of a full-node, queries are not part of consensus and so are not broadcasted to the rest of the network, as they do not require anything the network needs to agree upon. +GetNode() (CometRPC, error) { + if ctx.Client == nil { + return nil, errors.New("no RPC client is defined in offline mode") +} + +return ctx.Client, nil +} + +/ Query performs a query to a CometBFT node with the provided path. +/ It returns the result and height of the query upon success or an error if +/ the query fails. +func (ctx Context) + +Query(path string) ([]byte, int64, error) { + return ctx.query(path, nil) +} + +/ QueryWithData performs a query to a CometBFT node with the provided path +/ and a data payload. It returns the result and height of the query upon success +/ or an error if the query fails. 
+func (ctx Context) + +QueryWithData(path string, data []byte) ([]byte, int64, error) { + return ctx.query(path, data) +} + +/ QueryStore performs a query to a CometBFT node with the provided key and +/ store name. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +QueryStore(key []byte, storeName string) ([]byte, int64, error) { + return ctx.queryStore(key, storeName, "key") +} + +/ QueryABCI performs a query to a CometBFT node with the provide RequestQuery. +/ It returns the ResultQuery obtained from the query. The height used to perform +/ the query is the RequestQuery Height if it is non-zero, otherwise the context +/ height is used. +func (ctx Context) + +QueryABCI(req abci.RequestQuery) (abci.ResponseQuery, error) { + return ctx.queryABCI(req) +} + +/ GetFromAddress returns the from address from the context's name. +func (ctx Context) + +GetFromAddress() + +sdk.AccAddress { + return ctx.FromAddress +} + +/ GetFeePayerAddress returns the fee granter address from the context +func (ctx Context) + +GetFeePayerAddress() + +sdk.AccAddress { + return ctx.FeePayer +} + +/ GetFeeGranterAddress returns the fee granter address from the context +func (ctx Context) + +GetFeeGranterAddress() + +sdk.AccAddress { + return ctx.FeeGranter +} + +/ GetFromName returns the key name for the current context. 
+func (ctx Context) + +GetFromName() + +string { + return ctx.FromName +} + +func (ctx Context) + +queryABCI(req abci.RequestQuery) (abci.ResponseQuery, error) { + node, err := ctx.GetNode() + if err != nil { + return abci.ResponseQuery{ +}, err +} + +var queryHeight int64 + if req.Height != 0 { + queryHeight = req.Height +} + +else { + / fallback on the context height + queryHeight = ctx.Height +} + opts := rpcclient.ABCIQueryOptions{ + Height: queryHeight, + Prove: req.Prove, +} + +result, err := node.ABCIQueryWithOptions(context.Background(), req.Path, req.Data, opts) + if err != nil { + return abci.ResponseQuery{ +}, err +} + if !result.Response.IsOK() { + return abci.ResponseQuery{ +}, sdkErrorToGRPCError(result.Response) +} + + / data from trusted node or subspace query doesn't need verification + if !opts.Prove || !isQueryStoreWithProof(req.Path) { + return result.Response, nil +} + +return result.Response, nil +} + +func sdkErrorToGRPCError(resp abci.ResponseQuery) + +error { + switch resp.Code { + case sdkerrors.ErrInvalidRequest.ABCICode(): + return status.Error(codes.InvalidArgument, resp.Log) + case sdkerrors.ErrUnauthorized.ABCICode(): + return status.Error(codes.Unauthenticated, resp.Log) + case sdkerrors.ErrKeyNotFound.ABCICode(): + return status.Error(codes.NotFound, resp.Log) + +default: + return status.Error(codes.Unknown, resp.Log) +} +} + +/ query performs a query to a CometBFT node with the provided store name +/ and path. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +query(path string, key []byte) ([]byte, int64, error) { + resp, err := ctx.queryABCI(abci.RequestQuery{ + Path: path, + Data: key, + Height: ctx.Height, +}) + if err != nil { + return nil, 0, err +} + +return resp.Value, resp.Height, nil +} + +/ queryStore performs a query to a CometBFT node with the provided a store +/ name and path. 
It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +queryStore(key []byte, storeName, endPath string) ([]byte, int64, error) { + path := fmt.Sprintf("/store/%s/%s", storeName, endPath) + +return ctx.query(path, key) +} + +/ isQueryStoreWithProof expects a format like /// +/ queryType must be "store" and subpath must be "key" to require a proof. +func isQueryStoreWithProof(path string) + +bool { + if !strings.HasPrefix(path, "/") { + return false +} + paths := strings.SplitN(path[1:], "/", 3) + switch { + case len(paths) != 3: + return false + case paths[0] != "store": + return false + case rootmulti.RequireProof("/" + paths[2]): + return true +} + +return false +} +``` + +## RPC + +With a call to `ABCIQueryWithOptions()`, `MyQuery` is received by a [full-node](/docs/sdk/v0.50/learn/advanced/encoding) which then processes the request. Note that, while the RPC is made to the consensus engine (e.g. CometBFT) of a full-node, queries are not part of consensus and so are not broadcasted to the rest of the network, as they do not require anything the network needs to agree upon. Read more about ABCI Clients and CometBFT RPC in the [CometBFT documentation](https://docs.cometbft.com/v0.37/spec/rpc/). -## Application Query Handling[​](#application-query-handling "Direct link to Application Query Handling") +## Application Query Handling -When a query is received by the full-node after it has been relayed from the underlying consensus engine, it is at that point being handled within an environment that understands application-specific types and has a copy of the state. [`baseapp`](/v0.50/learn/advanced/baseapp) implements the ABCI [`Query()`](/v0.50/learn/advanced/baseapp#query) function and handles gRPC queries. 
The query route is parsed, and it matches the fully-qualified service method name of an existing service method (most likely in one of the modules), then `baseapp` relays the request to the relevant module. +When a query is received by the full-node after it has been relayed from the underlying consensus engine, it is at that point being handled within an environment that understands application-specific types and has a copy of the state. [`baseapp`](/docs/sdk/v0.50/learn/advanced/baseapp) implements the ABCI [`Query()`](/docs/sdk/v0.50/learn/advanced/baseapp#query) function and handles gRPC queries. The query route is parsed, and it matches the fully-qualified service method name of an existing service method (most likely in one of the modules), then `baseapp` relays the request to the relevant module. -Since `MyQuery` has a Protobuf fully-qualified service method name from the `staking` module (recall `/cosmos.staking.v1beta1.Query/Delegations`), `baseapp` first parses the path, then uses its own internal `GRPCQueryRouter` to retrieve the corresponding gRPC handler, and routes the query to the module. The gRPC handler is responsible for recognizing this query, retrieving the appropriate values from the application's stores, and returning a response. Read more about query services [here](/v0.50/build/building-modules/query-services). +Since `MyQuery` has a Protobuf fully-qualified service method name from the `staking` module (recall `/cosmos.staking.v1beta1.Query/Delegations`), `baseapp` first parses the path, then uses its own internal `GRPCQueryRouter` to retrieve the corresponding gRPC handler, and routes the query to the module. The gRPC handler is responsible for recognizing this query, retrieving the appropriate values from the application's stores, and returning a response. Read more about query services [here](/docs/sdk/v0.50/documentation/module-system/query-services). 
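The routing described above keys entirely off the path string. As a self-contained illustration (a sketch with a hypothetical helper name, not the actual `baseapp` or `GRPCQueryRouter` implementation), a fully-qualified method path such as `/cosmos.staking.v1beta1.Query/Delegations` can be split into its service and method parts like this:

```go
package main

import (
	"fmt"
	"strings"
)

// splitMethodPath splits a fully-qualified gRPC method path, e.g.
// "/cosmos.staking.v1beta1.Query/Delegations", into its service and
// method components. Illustrative helper only; the name is hypothetical.
func splitMethodPath(path string) (service, method string, ok bool) {
	trimmed := strings.TrimPrefix(path, "/")
	parts := strings.SplitN(trimmed, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	svc, method, ok := splitMethodPath("/cosmos.staking.v1beta1.Query/Delegations")
	fmt.Println(svc, method, ok) // cosmos.staking.v1beta1.Query Delegations true
}
```

In `baseapp`, the analogous lookup resolves this same path format against the gRPC handlers that each module registered with the `GRPCQueryRouter`.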
 Once a result is received from the querier, `baseapp` begins the process of returning a response to the user.

-## Response[​](#response "Direct link to Response")
+## Response

Since `Query()` is an ABCI function, `baseapp` returns the response as an [`abci.ResponseQuery`](https://docs.cometbft.com/master/spec/abci/abci.html#query-2) type. The `client.Context` `Query()` routine receives the response and handles it according to the interface in use.

-### CLI Response[​](#cli-response "Direct link to CLI Response")
+### CLI Response
+
+The application [`codec`](/docs/sdk/v0.50/learn/advanced/encoding) is used to unmarshal the response into JSON, and the `client.Context` prints the output to the command line, applying any configurations such as the output type (text, JSON or YAML).
+
+```go expandable
+package client
+
+import (
+
+ "bufio"
+ "context"
+ "encoding/json"
+ "fmt"
+ "io"
+ "os"
+ "github.com/cosmos/gogoproto/proto"
+ "github.com/spf13/viper"
+ "google.golang.org/grpc"
+ "sigs.k8s.io/yaml"
+ "github.com/cosmos/cosmos-sdk/codec"
+ codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+ "github.com/cosmos/cosmos-sdk/crypto/keyring"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ PreprocessTxFn defines a hook by which chains can preprocess transactions before broadcasting
+type PreprocessTxFn func(chainID string, key keyring.KeyType, tx TxBuilder)
+
+error
+
+/ Context implements a typical context created in SDK modules for transaction
+/ handling and queries.
+type Context struct { + FromAddress sdk.AccAddress + Client CometRPC + GRPCClient *grpc.ClientConn + ChainID string + Codec codec.Codec + InterfaceRegistry codectypes.InterfaceRegistry + Input io.Reader + Keyring keyring.Keyring + KeyringOptions []keyring.Option + Output io.Writer + OutputFormat string + Height int64 + HomeDir string + KeyringDir string + From string + BroadcastMode string + FromName string + SignModeStr string + UseLedger bool + Simulate bool + GenerateOnly bool + Offline bool + SkipConfirm bool + TxConfig TxConfig + AccountRetriever AccountRetriever + NodeURI string + FeePayer sdk.AccAddress + FeeGranter sdk.AccAddress + Viper *viper.Viper + LedgerHasProtobuf bool + PreprocessTxHook PreprocessTxFn + + / IsAux is true when the signer is an auxiliary signer (e.g. the tipper). + IsAux bool + + / TODO: Deprecated (remove). + LegacyAmino *codec.LegacyAmino + + / CmdContext is the context.Context from the Cobra command. + CmdContext context.Context +} + +/ WithCmdContext returns a copy of the context with an updated context.Context, +/ usually set to the cobra cmd context. +func (ctx Context) + +WithCmdContext(c context.Context) + +Context { + ctx.CmdContext = c + return ctx +} + +/ WithKeyring returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyring(k keyring.Keyring) + +Context { + ctx.Keyring = k + return ctx +} + +/ WithKeyringOptions returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyringOptions(opts ...keyring.Option) + +Context { + ctx.KeyringOptions = opts + return ctx +} + +/ WithInput returns a copy of the context with an updated input. +func (ctx Context) + +WithInput(r io.Reader) + +Context { + / convert to a bufio.Reader to have a shared buffer between the keyring and the + / the Commands, ensuring a read from one advance the read pointer for the other. + / see https://github.com/cosmos/cosmos-sdk/issues/9566. 
+ ctx.Input = bufio.NewReader(r) + +return ctx +} + +/ WithCodec returns a copy of the Context with an updated Codec. +func (ctx Context) -The application [`codec`](/v0.50/learn/advanced/encoding) is used to unmarshal the response to a JSON and the `client.Context` prints the output to the command line, applying any configurations such as the output type (text, JSON or YAML). +WithCodec(m codec.Codec) -client/context.go +Context { + ctx.Codec = m + return ctx +} -``` -loading... -``` +/ WithLegacyAmino returns a copy of the context with an updated LegacyAmino codec. +/ TODO: Deprecated (remove). +func (ctx Context) + +WithLegacyAmino(cdc *codec.LegacyAmino) + +Context { + ctx.LegacyAmino = cdc + return ctx +} + +/ WithOutput returns a copy of the context with an updated output writer (e.g. stdout). +func (ctx Context) + +WithOutput(w io.Writer) + +Context { + ctx.Output = w + return ctx +} + +/ WithFrom returns a copy of the context with an updated from address or name. +func (ctx Context) + +WithFrom(from string) + +Context { + ctx.From = from + return ctx +} + +/ WithOutputFormat returns a copy of the context with an updated OutputFormat field. +func (ctx Context) + +WithOutputFormat(format string) + +Context { + ctx.OutputFormat = format + return ctx +} + +/ WithNodeURI returns a copy of the context with an updated node URI. +func (ctx Context) + +WithNodeURI(nodeURI string) + +Context { + ctx.NodeURI = nodeURI + return ctx +} + +/ WithHeight returns a copy of the context with an updated height. +func (ctx Context) + +WithHeight(height int64) + +Context { + ctx.Height = height + return ctx +} + +/ WithClient returns a copy of the context with an updated RPC client +/ instance. +func (ctx Context) + +WithClient(client CometRPC) + +Context { + ctx.Client = client + return ctx +} + +/ WithGRPCClient returns a copy of the context with an updated GRPC client +/ instance. 
+func (ctx Context) + +WithGRPCClient(grpcClient *grpc.ClientConn) + +Context { + ctx.GRPCClient = grpcClient + return ctx +} + +/ WithUseLedger returns a copy of the context with an updated UseLedger flag. +func (ctx Context) + +WithUseLedger(useLedger bool) + +Context { + ctx.UseLedger = useLedger + return ctx +} + +/ WithChainID returns a copy of the context with an updated chain ID. +func (ctx Context) + +WithChainID(chainID string) + +Context { + ctx.ChainID = chainID + return ctx +} + +/ WithHomeDir returns a copy of the Context with HomeDir set. +func (ctx Context) + +WithHomeDir(dir string) + +Context { + if dir != "" { + ctx.HomeDir = dir +} + +return ctx +} + +/ WithKeyringDir returns a copy of the Context with KeyringDir set. +func (ctx Context) + +WithKeyringDir(dir string) + +Context { + ctx.KeyringDir = dir + return ctx +} + +/ WithGenerateOnly returns a copy of the context with updated GenerateOnly value +func (ctx Context) + +WithGenerateOnly(generateOnly bool) + +Context { + ctx.GenerateOnly = generateOnly + return ctx +} + +/ WithSimulation returns a copy of the context with updated Simulate value +func (ctx Context) + +WithSimulation(simulate bool) + +Context { + ctx.Simulate = simulate + return ctx +} + +/ WithOffline returns a copy of the context with updated Offline value. +func (ctx Context) + +WithOffline(offline bool) + +Context { + ctx.Offline = offline + return ctx +} + +/ WithFromName returns a copy of the context with an updated from account name. +func (ctx Context) + +WithFromName(name string) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/context.go#L341-L349) +Context { + ctx.FromName = name + return ctx +} + +/ WithFromAddress returns a copy of the context with an updated from account +/ address. 
+func (ctx Context) + +WithFromAddress(addr sdk.AccAddress) + +Context { + ctx.FromAddress = addr + return ctx +} + +/ WithFeePayerAddress returns a copy of the context with an updated fee payer account +/ address. +func (ctx Context) + +WithFeePayerAddress(addr sdk.AccAddress) + +Context { + ctx.FeePayer = addr + return ctx +} + +/ WithFeeGranterAddress returns a copy of the context with an updated fee granter account +/ address. +func (ctx Context) + +WithFeeGranterAddress(addr sdk.AccAddress) + +Context { + ctx.FeeGranter = addr + return ctx +} + +/ WithBroadcastMode returns a copy of the context with an updated broadcast +/ mode. +func (ctx Context) + +WithBroadcastMode(mode string) + +Context { + ctx.BroadcastMode = mode + return ctx +} + +/ WithSignModeStr returns a copy of the context with an updated SignMode +/ value. +func (ctx Context) + +WithSignModeStr(signModeStr string) + +Context { + ctx.SignModeStr = signModeStr + return ctx +} + +/ WithSkipConfirmation returns a copy of the context with an updated SkipConfirm +/ value. +func (ctx Context) + +WithSkipConfirmation(skip bool) + +Context { + ctx.SkipConfirm = skip + return ctx +} + +/ WithTxConfig returns the context with an updated TxConfig +func (ctx Context) + +WithTxConfig(generator TxConfig) + +Context { + ctx.TxConfig = generator + return ctx +} + +/ WithAccountRetriever returns the context with an updated AccountRetriever +func (ctx Context) + +WithAccountRetriever(retriever AccountRetriever) + +Context { + ctx.AccountRetriever = retriever + return ctx +} + +/ WithInterfaceRegistry returns the context with an updated InterfaceRegistry +func (ctx Context) + +WithInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) + +Context { + ctx.InterfaceRegistry = interfaceRegistry + return ctx +} + +/ WithViper returns the context with Viper field. This Viper instance is used to read +/ client-side config from the config file. 
+func (ctx Context) + +WithViper(prefix string) + +Context { + v := viper.New() + +v.SetEnvPrefix(prefix) + +v.AutomaticEnv() + +ctx.Viper = v + return ctx +} + +/ WithAux returns a copy of the context with an updated IsAux value. +func (ctx Context) + +WithAux(isAux bool) + +Context { + ctx.IsAux = isAux + return ctx +} + +/ WithLedgerHasProto returns the context with the provided boolean value, indicating +/ whether the target Ledger application can support Protobuf payloads. +func (ctx Context) + +WithLedgerHasProtobuf(val bool) + +Context { + ctx.LedgerHasProtobuf = val + return ctx +} + +/ WithPreprocessTxHook returns the context with the provided preprocessing hook, which +/ enables chains to preprocess the transaction using the builder. +func (ctx Context) + +WithPreprocessTxHook(preprocessFn PreprocessTxFn) + +Context { + ctx.PreprocessTxHook = preprocessFn + return ctx +} + +/ PrintString prints the raw string to ctx.Output if it's defined, otherwise to os.Stdout +func (ctx Context) + +PrintString(str string) + +error { + return ctx.PrintBytes([]byte(str)) +} + +/ PrintBytes prints the raw bytes to ctx.Output if it's defined, otherwise to os.Stdout. +/ NOTE: for printing a complex state object, you should use ctx.PrintOutput +func (ctx Context) + +PrintBytes(o []byte) + +error { + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err := writer.Write(o) + +return err +} + +/ PrintProto outputs toPrint to the ctx.Output based on ctx.OutputFormat which is +/ either text or json. If text, toPrint will be YAML encoded. Otherwise, toPrint +/ will be JSON encoded using ctx.Codec. An error is returned upon failure. 
+func (ctx Context) + +PrintProto(toPrint proto.Message) + +error { + / always serialize JSON initially because proto json can't be directly YAML encoded + out, err := ctx.Codec.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintObjectLegacy is a variant of PrintProto that doesn't require a proto.Message type +/ and uses amino JSON encoding. +/ Deprecated: It will be removed in the near future! +func (ctx Context) + +PrintObjectLegacy(toPrint interface{ +}) + +error { + out, err := ctx.LegacyAmino.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintRaw is a variant of PrintProto that doesn't require a proto.Message type +/ and uses a raw JSON message. No marshaling is performed. +func (ctx Context) + +PrintRaw(toPrint json.RawMessage) + +error { + return ctx.printOutput(toPrint) +} + +func (ctx Context) + +printOutput(out []byte) + +error { + var err error + if ctx.OutputFormat == "text" { + out, err = yaml.JSONToYAML(out) + if err != nil { + return err +} + +} + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err = writer.Write(out) + if err != nil { + return err +} + if ctx.OutputFormat != "text" { + / append new-line for formats besides YAML + _, err = writer.Write([]byte("\n")) + if err != nil { + return err +} + +} + +return nil +} + +/ GetFromFields returns a from account address, account name and keyring type, given either an address or key name. 
+/ If clientCtx.Simulate is true the keystore is not accessed and a valid address must be provided +/ If clientCtx.GenerateOnly is true the keystore is only accessed if a key name is provided +func GetFromFields(clientCtx Context, kr keyring.Keyring, from string) (sdk.AccAddress, string, keyring.KeyType, error) { + if from == "" { + return nil, "", 0, nil +} + +addr, err := sdk.AccAddressFromBech32(from) + switch { + case clientCtx.Simulate: + if err != nil { + return nil, "", 0, fmt.Errorf("a valid bech32 address must be provided in simulation mode: %w", err) +} + +return addr, "", 0, nil + case clientCtx.GenerateOnly: + if err == nil { + return addr, "", 0, nil +} + +} + +var k *keyring.Record + if err == nil { + k, err = kr.KeyByAddress(addr) + if err != nil { + return nil, "", 0, err +} + +} + +else { + k, err = kr.Key(from) + if err != nil { + return nil, "", 0, err +} + +} + +addr, err = k.GetAddress() + if err != nil { + return nil, "", 0, err +} + +return addr, k.Name, k.GetType(), nil +} + +/ NewKeyringFromBackend gets a Keyring object from a backend +func NewKeyringFromBackend(ctx Context, backend string) (keyring.Keyring, error) { + if ctx.Simulate { + backend = keyring.BackendMemory +} + +return keyring.New(sdk.KeyringServiceName(), backend, ctx.KeyringDir, ctx.Input, ctx.Codec, ctx.KeyringOptions...) +} +``` And that's a wrap! The result of the query is outputted to the console by the CLI. diff --git a/docs/sdk/v0.50/learn/beginner/tx-lifecycle.mdx b/docs/sdk/v0.50/learn/beginner/tx-lifecycle.mdx index 60baf44d..ef889d21 100644 --- a/docs/sdk/v0.50/learn/beginner/tx-lifecycle.mdx +++ b/docs/sdk/v0.50/learn/beginner/tx-lifecycle.mdx @@ -1,93 +1,112 @@ --- -title: "Transaction Lifecycle" -description: "Version: v0.50" +title: Transaction Lifecycle --- - - This document describes the lifecycle of a transaction from creation to committed state changes. Transaction definition is described in a [different doc](/v0.50/learn/advanced/transactions). 
The transaction is referred to as `Tx`. - +## Synopsis + +This document describes the lifecycle of a transaction from creation to committed state changes. Transaction definition is described in a [different doc](/docs/sdk/v0.50/learn/advanced/transactions). The transaction is referred to as `Tx`. - * [Anatomy of a Cosmos SDK Application](/v0.50/learn/beginner/app-anatomy) +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.50/learn/beginner/app-anatomy) + -## Creation[​](#creation "Direct link to Creation") +## Creation -### Transaction Creation[​](#transaction-creation "Direct link to Transaction Creation") +### Transaction Creation -One of the main application interfaces is the command-line interface. The transaction `Tx` can be created by the user inputting a command in the following format from the [command-line](/v0.50/learn/advanced/cli), providing the type of transaction in `[command]`, arguments in `[args]`, and configurations such as gas prices in `[flags]`: +One of the main application interfaces is the command-line interface. The transaction `Tx` can be created by the user inputting a command in the following format from the [command-line](/docs/sdk/v0.50/learn/advanced/cli), providing the type of transaction in `[command]`, arguments in `[args]`, and configurations such as gas prices in `[flags]`: -``` +```bash [appname] tx [command] [args] [flags] ``` This command automatically **creates** the transaction, **signs** it using the account's private key, and **broadcasts** it to the specified peer node. -There are several required and optional flags for transaction creation. The `--from` flag specifies which [account](/v0.50/learn/beginner/accounts) the transaction is originating from. For example, if the transaction is sending coins, the funds are drawn from the specified `from` address. +There are several required and optional flags for transaction creation. 
The `--from` flag specifies which [account](/docs/sdk/v0.50/learn/beginner/accounts) the transaction is originating from. For example, if the transaction is sending coins, the funds are drawn from the specified `from` address. -#### Gas and Fees[​](#gas-and-fees "Direct link to Gas and Fees") +#### Gas and Fees -Additionally, there are several [flags](/v0.50/learn/advanced/cli) users can use to indicate how much they are willing to pay in [fees](/v0.50/learn/beginner/gas-fees): +Additionally, there are several [flags](/docs/sdk/v0.50/learn/advanced/cli) users can use to indicate how much they are willing to pay in [fees](/docs/sdk/v0.50/learn/beginner/gas-fees): -* `--gas` refers to how much [gas](/v0.50/learn/beginner/gas-fees), which represents computational resources, `Tx` consumes. Gas is dependent on the transaction and is not precisely calculated until execution, but can be estimated by providing `auto` as the value for `--gas`. -* `--gas-adjustment` (optional) can be used to scale `gas` up in order to avoid underestimating. For example, users can specify their gas adjustment as 1.5 to use 1.5 times the estimated gas. -* `--gas-prices` specifies how much the user is willing to pay per unit of gas, which can be one or multiple denominations of tokens. For example, `--gas-prices=0.025uatom, 0.025upho` means the user is willing to pay 0.025uatom AND 0.025upho per unit of gas. -* `--fees` specifies how much in fees the user is willing to pay in total. -* `--timeout-height` specifies a block timeout height to prevent the tx from being committed past a certain height. +- `--gas` refers to how much [gas](/docs/sdk/v0.50/learn/beginner/gas-fees), which represents computational resources, `Tx` consumes. Gas is dependent on the transaction and is not precisely calculated until execution, but can be estimated by providing `auto` as the value for `--gas`. +- `--gas-adjustment` (optional) can be used to scale `gas` up in order to avoid underestimating. 
For example, users can specify their gas adjustment as 1.5 to use 1.5 times the estimated gas. +- `--gas-prices` specifies how much the user is willing to pay per unit of gas, which can be one or multiple denominations of tokens. For example, `--gas-prices=0.025uatom,0.025upho` means the user is willing to pay 0.025uatom AND 0.025upho per unit of gas. +- `--fees` specifies how much in fees the user is willing to pay in total. +- `--timeout-height` specifies a block timeout height to prevent the tx from being committed past a certain height. The ultimate value of the fees paid is equal to the gas multiplied by the gas prices. In other words, `fees = ceil(gas * gasPrices)`. Thus, since fees can be calculated using gas prices and vice versa, the users specify only one of the two. Later, validators decide whether or not to include the transaction in their block by comparing the given or calculated `gas-prices` to their local `min-gas-prices`. `Tx` is rejected if its `gas-prices` is not high enough, so users are incentivized to pay more. -#### CLI Example[​](#cli-example "Direct link to CLI Example") +#### CLI Example Users of the application `app` can enter the following command into their CLI to generate a transaction to send 1000uatom from a `senderAddress` to a `recipientAddress`. The command specifies how much gas they are willing to pay: an automatic estimate scaled up by 1.5 times, with a gas price of 0.025uatom per unit gas. -``` +```bash appd tx send <recipientAddress> 1000uatom --from <senderAddress> --gas auto --gas-adjustment 1.5 --gas-prices 0.025uatom ``` -#### Other Transaction Creation Methods[​](#other-transaction-creation-methods "Direct link to Other Transaction Creation Methods") +#### Other Transaction Creation Methods -The command-line is an easy way to interact with an application, but `Tx` can also be created using a [gRPC or REST interface](/v0.50/learn/advanced/grpc_rest) or some other entry point defined by the application developer.
From the user's perspective, the interaction depends on the web interface or wallet they are using (e.g. creating `Tx` using [Lunie.io](https://lunie.io/#/) and signing it with a Ledger Nano S). +The command-line is an easy way to interact with an application, but `Tx` can also be created using a [gRPC or REST interface](/docs/sdk/v0.50/learn/advanced/grpc_rest) or some other entry point defined by the application developer. From the user's perspective, the interaction depends on the web interface or wallet they are using (e.g. creating `Tx` using [Lunie.io](https://lunie.io/#/) and signing it with a Ledger Nano S). -## Addition to Mempool[​](#addition-to-mempool "Direct link to Addition to Mempool") +## Addition to Mempool -Each full-node (running CometBFT) that receives a `Tx` sends an [ABCI message](https://docs.cometbft.com/v0.37/spec/p2p/messages/), `CheckTx`, to the application layer to check for validity, and receives an `abci.ResponseCheckTx`. If the `Tx` passes the checks, it is held in the node's [**Mempool**](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool/), an in-memory pool of transactions unique to each node, pending inclusion in a block - honest nodes discard a `Tx` if it is found to be invalid. Prior to consensus, nodes continuously check incoming transactions and gossip them to their peers. +Each full-node (running CometBFT) that receives a `Tx` sends an [ABCI message](https://docs.cometbft.com/v0.37/spec/p2p/messages/), +`CheckTx`, to the application layer to check for validity, and receives an `abci.ResponseCheckTx`. If the `Tx` passes the checks, it is held in the node's +[**Mempool**](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool/), an in-memory pool of transactions unique to each node, pending inclusion in a block - honest nodes discard a `Tx` if it is found to be invalid. Prior to consensus, nodes continuously check incoming transactions and gossip them to their peers. 
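The fee arithmetic and mempool admission rule described above (`fees = ceil(gas * gasPrices)`, with each node comparing the offered gas price against its local `min-gas-prices`) can be sketched as follows. This is an illustrative sketch only, not SDK code: `requiredFee` and `admit` are hypothetical helpers, and real SDK fees use decimal coin types rather than a float.

```go
package main

import (
	"fmt"
	"math"
)

// requiredFee computes the fee a node demands for a transaction:
// fees = ceil(gas * gasPrice), per the formula in the text above.
func requiredFee(gasWanted uint64, gasPrice float64) uint64 {
	return uint64(math.Ceil(float64(gasWanted) * gasPrice))
}

// admit mirrors the mempool admission rule: a node accepts the Tx only if
// the fee offered covers gasWanted at its locally configured min-gas-prices.
func admit(gasWanted, feeOffered uint64, minGasPrice float64) bool {
	return feeOffered >= requiredFee(gasWanted, minGasPrice)
}

func main() {
	// 200000 gas at 0.025uatom per unit -> 5000uatom required.
	fmt.Println(requiredFee(200000, 0.025)) // 5000
	fmt.Println(admit(200000, 5000, 0.025)) // true
	fmt.Println(admit(200000, 4000, 0.025)) // false: below min-gas-prices
}
```

Because each validator sets its own `min-gas-prices`, the same transaction can be admitted by one node's mempool and rejected by another's.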
-### Types of Checks[​](#types-of-checks "Direct link to Types of Checks") +### Types of Checks -The full-nodes perform stateless, then stateful checks on `Tx` during `CheckTx`, with the goal to identify and reject an invalid transaction as early on as possible to avoid wasted computation. +The full-nodes perform stateless, then stateful checks on `Tx` during `CheckTx`, with the goal to +identify and reject an invalid transaction as early on as possible to avoid wasted computation. -***Stateless*** checks do not require nodes to access state - light clients or offline nodes can do them - and are thus less computationally expensive. Stateless checks include making sure addresses are not empty, enforcing nonnegative numbers, and other logic specified in the definitions. +**_Stateless_** checks do not require nodes to access state - light clients or offline nodes can do +them - and are thus less computationally expensive. Stateless checks include making sure addresses +are not empty, enforcing nonnegative numbers, and other logic specified in the definitions. -***Stateful*** checks validate transactions and messages based on a committed state. Examples include checking that the relevant values exist and can be transacted with, the address has sufficient funds, and the sender is authorized or has the correct ownership to transact. At any given moment, full-nodes typically have [multiple versions](/v0.50/learn/advanced/baseapp#state-updates) of the application's internal state for different purposes. For example, nodes execute state changes while in the process of verifying transactions, but still need a copy of the last committed state in order to answer queries - they should not respond using state with uncommitted changes. +**_Stateful_** checks validate transactions and messages based on a committed state. 
Examples +include checking that the relevant values exist and can be transacted with, the address +has sufficient funds, and the sender is authorized or has the correct ownership to transact. +At any given moment, full-nodes typically have [multiple versions](/docs/sdk/v0.50/learn/advanced/baseapp#state-updates) +of the application's internal state for different purposes. For example, nodes execute state +changes while in the process of verifying transactions, but still need a copy of the last committed +state in order to answer queries - they should not respond using state with uncommitted changes. -In order to verify a `Tx`, full-nodes call `CheckTx`, which includes both *stateless* and *stateful* checks. Further validation happens later in the [`DeliverTx`](#delivertx) stage. `CheckTx` goes through several steps, beginning with decoding `Tx`. +In order to verify a `Tx`, full-nodes call `CheckTx`, which includes both _stateless_ and _stateful_ +checks. Further validation happens later in the [`DeliverTx`](#delivertx) stage. `CheckTx` goes +through several steps, beginning with decoding `Tx`. -### Decoding[​](#decoding "Direct link to Decoding") +### Decoding -When `Tx` is received by the application from the underlying consensus engine (e.g. CometBFT ), it is still in its [encoded](/v0.50/learn/advanced/encoding) `[]byte` form and needs to be unmarshaled in order to be processed. Then, the [`runTx`](/v0.50/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function is called to run in `runTxModeCheck` mode, meaning the function runs all checks but exits before executing messages and writing state changes. +When `Tx` is received by the application from the underlying consensus engine (e.g. CometBFT ), it is still in its [encoded](/docs/sdk/v0.50/learn/advanced/encoding) `[]byte` form and needs to be unmarshaled in order to be processed. 
Then, the [`runTx`](/docs/sdk/v0.50/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function is called to run in `runTxModeCheck` mode, meaning the function runs all checks but exits before executing messages and writing state changes. -### ValidateBasic (deprecated)[​](#validatebasic-deprecated "Direct link to ValidateBasic (deprecated)") +### ValidateBasic (deprecated) -Messages ([`sdk.Msg`](/v0.50/learn/advanced/transactions#messages)) are extracted from transactions (`Tx`). The `ValidateBasic` method of the `sdk.Msg` interface implemented by the module developer is run for each transaction. To discard obviously invalid messages, the `BaseApp` type calls the `ValidateBasic` method very early in the processing of the message in the [`CheckTx`](/v0.50/learn/advanced/baseapp#checktx) and [`DeliverTx`](/v0.50/learn/advanced/baseapp#delivertx) transactions. `ValidateBasic` can include only **stateless** checks (the checks that do not require access to the state). +Messages ([`sdk.Msg`](/docs/sdk/v0.50/learn/advanced/transactions#messages)) are extracted from transactions (`Tx`). The `ValidateBasic` method of the `sdk.Msg` interface implemented by the module developer is run for each transaction. +To discard obviously invalid messages, the `BaseApp` type calls the `ValidateBasic` method very early in the processing of the message in the [`CheckTx`](/docs/sdk/v0.50/learn/advanced/baseapp#checktx) and [`DeliverTx`](/docs/sdk/v0.50/learn/advanced/baseapp#delivertx) transactions. +`ValidateBasic` can include only **stateless** checks (the checks that do not require access to the state). - The `ValidateBasic` method on messages has been deprecated in favor of validating messages directly in their respective [`Msg` services](/v0.50/build/building-modules/msg-services#Validation). 
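The stateless nature of `ValidateBasic` can be illustrated with a sketch. `MsgSend` here is a hypothetical message type, not the SDK's actual bank message; the point is that every check below is answerable without touching committed state.

```go
package main

import (
	"errors"
	"fmt"
)

// MsgSend is a hypothetical message standing in for any sdk.Msg whose
// ValidateBasic performs only stateless checks.
type MsgSend struct {
	FromAddress string
	ToAddress   string
	Amount      int64
}

// ValidateBasic checks only what can be known without state access:
// non-empty addresses and a positive amount. Whether the sender actually
// holds the funds is a *stateful* check performed later during CheckTx
// and execution.
func (m MsgSend) ValidateBasic() error {
	if m.FromAddress == "" || m.ToAddress == "" {
		return errors.New("addresses must not be empty")
	}
	if m.Amount <= 0 {
		return errors.New("amount must be positive")
	}
	return nil
}

func main() {
	fmt.Println(MsgSend{"cosmos1sender", "cosmos1recipient", 1000}.ValidateBasic()) // <nil>
	fmt.Println(MsgSend{"", "cosmos1recipient", 1000}.ValidateBasic())              // addresses must not be empty
}
```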
+The `ValidateBasic` method on messages has been deprecated in favor of validating messages directly in their respective [`Msg` services](/docs/sdk/v0.50/documentation/module-system/msg-services#Validation). + +Read [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) for more details. - Read [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) for more details. - `BaseApp` still calls `ValidateBasic` on messages that implements that method for backwards compatibility. + `BaseApp` still calls `ValidateBasic` on messages that implement that method + for backwards compatibility. -#### Guideline[​](#guideline "Direct link to Guideline") +#### Guideline -`ValidateBasic` should not be used anymore. Message validation should be performed in the `Msg` service when [handling a message](/v0.50/build/building-modules/msg-services#Validation) in a module Msg Server. +`ValidateBasic` should not be used anymore. Message validation should be performed in the `Msg` service when [handling a message](/docs/sdk/v0.50/documentation/module-system/msg-services#Validation) in a module Msg Server. -### AnteHandler[​](#antehandler "Direct link to AnteHandler") +### AnteHandler `AnteHandler`s, even though optional, are in practice very often used to perform signature verification, gas calculation, fee deduction, and other core operations related to blockchain transactions. @@ -96,61 +115,162 @@ A copy of the cached context is provided to the `AnteHandler`, which performs li For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth/spec) module `AnteHandler` checks and increments sequence numbers, checks signatures and account numbers, and deducts fees from the first signer of the transaction - all state changes are made using the `checkState`. - Ante handlers only run on a transaction.
If a transaction embed multiple messages (like some x/authz, x/gov transactions for instance), the ante handlers only have awareness of the outer message. Inner messages are mostly directly routed to the [message router](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router) and will skip the chain of ante handlers. Keep that in mind when designing your own ante handler. + Ante handlers only run on a transaction. If a transaction embeds multiple + messages (like some x/authz, x/gov transactions for instance), the ante + handlers only have awareness of the outer message. Inner messages are mostly + directly routed to the [message + router](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router) + and will skip the chain of ante handlers. Keep that in mind when designing + your own ante handler. -### Gas[​](#gas "Direct link to Gas") - -The [`Context`](/v0.50/learn/advanced/context), which keeps a `GasMeter` that tracks how much gas is used during the execution of `Tx`, is initialized. The user-provided amount of gas for `Tx` is known as `GasWanted`. If `GasConsumed`, the amount of gas consumed during execution, ever exceeds `GasWanted`, the execution stops and the changes made to the cached copy of the state are not committed. Otherwise, `CheckTx` sets `GasUsed` equal to `GasConsumed` and returns it in the result. After calculating the gas and fee values, validator-nodes check that the user-specified `gas-prices` is greater than their locally defined `min-gas-prices`. - -### Discard or Addition to Mempool[​](#discard-or-addition-to-mempool "Direct link to Discard or Addition to Mempool") - -If at any point during `CheckTx` the `Tx` fails, it is discarded and the transaction lifecycle ends there. Otherwise, if it passes `CheckTx` successfully, the default protocol is to relay it to peer nodes and add it to the Mempool so that the `Tx` becomes a candidate to be included in the next block.
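The `GasMeter` behavior described above, where execution stops once `GasConsumed` would exceed `GasWanted` and cached state changes are discarded, can be sketched minimally. This is an illustrative stand-in, not the SDK's actual `GasMeter` interface; the field and method names are hypothetical.

```go
package main

import "fmt"

// gasMeter is an illustrative stand-in for the SDK's gas meter: it tracks
// consumption against the user-provided limit (GasWanted) and reports when
// the limit would be exceeded, at which point execution must stop and the
// cached state changes are not committed.
type gasMeter struct {
	limit    uint64 // GasWanted
	consumed uint64 // GasConsumed
}

// consume adds gas and returns false once the limit would be exceeded,
// leaving the meter unchanged in that case.
func (g *gasMeter) consume(amount uint64) bool {
	if g.consumed+amount > g.limit {
		return false // out of gas: abort, discard cached writes
	}
	g.consumed += amount
	return true
}

func main() {
	g := &gasMeter{limit: 100}
	fmt.Println(g.consume(60), g.consumed) // true 60
	fmt.Println(g.consume(30), g.consumed) // true 90
	fmt.Println(g.consume(20), g.consumed) // false 90 (would exceed GasWanted)
}
```

The real SDK meter panics with an out-of-gas error rather than returning a boolean, and a deferred recovery in `runTx` converts that panic into a failed transaction result.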
- -The **mempool** serves the purpose of keeping track of transactions seen by all full-nodes. Full-nodes keep a **mempool cache** of the last `mempool.cache_size` transactions they have seen, as a first line of defense to prevent replay attacks. Ideally, `mempool.cache_size` is large enough to encompass all of the transactions in the full mempool. If the mempool cache is too small to keep track of all the transactions, `CheckTx` is responsible for identifying and rejecting replayed transactions. - -Currently existing preventative measures include fees and a `sequence` (nonce) counter to distinguish replayed transactions from identical but valid ones. If an attacker tries to spam nodes with many copies of a `Tx`, full-nodes keeping a mempool cache reject all identical copies instead of running `CheckTx` on them. Even if the copies have incremented `sequence` numbers, attackers are disincentivized by the need to pay fees. - -Validator nodes keep a mempool to prevent replay attacks, just as full-nodes do, but also use it as a pool of unconfirmed transactions in preparation of block inclusion. Note that even if a `Tx` passes all checks at this stage, it is still possible to be found invalid later on, because `CheckTx` does not fully validate the transaction (that is, it does not actually execute the messages). - -## Inclusion in a Block[​](#inclusion-in-a-block "Direct link to Inclusion in a Block") - -Consensus, the process through which validator nodes come to agreement on which transactions to accept, happens in **rounds**. Each round begins with a proposer creating a block of the most recent transactions and ends with **validators**, special full-nodes with voting power responsible for consensus, agreeing to accept the block or go with a `nil` block instead. 
Validator nodes execute the consensus algorithm, such as [CometBFT](https://docs.cometbft.com/v0.37/spec/consensus/), confirming the transactions using ABCI requests to the application, in order to come to this agreement. - -The first step of consensus is the **block proposal**. One proposer amongst the validators is chosen by the consensus algorithm to create and propose a block - in order for a `Tx` to be included, it must be in this proposer's mempool. - -## State Changes[​](#state-changes "Direct link to State Changes") - -The next step of consensus is to execute the transactions to fully validate them. All full-nodes that receive a block proposal from the correct proposer execute the transactions by calling the ABCI function `FinalizeBlock`. As mentioned throughout the documentation `BeginBlock`, `ExecuteTx` and `EndBlock` are called within FinalizeBlock. Although every full-node operates individually and locally, the outcome is always consistent and unequivocal. This is because the state changes brought about by the messages are predictable, and the transactions are specifically sequenced in the proposed block. - +### Gas + +The [`Context`](/docs/sdk/v0.50/learn/advanced/context), which keeps a `GasMeter` that tracks how much gas is used during the execution of `Tx`, is initialized. The user-provided amount of gas for `Tx` is known as `GasWanted`. If `GasConsumed`, the amount of gas consumed during execution, ever exceeds `GasWanted`, the execution stops and the changes made to the cached copy of the state are not committed. Otherwise, `CheckTx` sets `GasUsed` equal to `GasConsumed` and returns it in the result. After calculating the gas and fee values, validator-nodes check that the user-specified `gas-prices` is greater than their locally defined `min-gas-prices`. + +### Discard or Addition to Mempool + +If at any point during `CheckTx` the `Tx` fails, it is discarded and the transaction lifecycle ends +there. 
Otherwise, if it passes `CheckTx` successfully, the default protocol is to relay it to peer +nodes and add it to the Mempool so that the `Tx` becomes a candidate to be included in the next block. + +The **mempool** serves the purpose of keeping track of transactions seen by all full-nodes. +Full-nodes keep a **mempool cache** of the last `mempool.cache_size` transactions they have seen, as a first line of +defense to prevent replay attacks. Ideally, `mempool.cache_size` is large enough to encompass all +of the transactions in the full mempool. If the mempool cache is too small to keep track of all +the transactions, `CheckTx` is responsible for identifying and rejecting replayed transactions. + +Currently existing preventative measures include fees and a `sequence` (nonce) counter to distinguish +replayed transactions from identical but valid ones. If an attacker tries to spam nodes with many +copies of a `Tx`, full-nodes keeping a mempool cache reject all identical copies instead of running +`CheckTx` on them. Even if the copies have incremented `sequence` numbers, attackers are +disincentivized by the need to pay fees. + +Validator nodes keep a mempool to prevent replay attacks, just as full-nodes do, but also use it as +a pool of unconfirmed transactions in preparation of block inclusion. Note that even if a `Tx` +passes all checks at this stage, it is still possible to be found invalid later on, because +`CheckTx` does not fully validate the transaction (that is, it does not actually execute the messages). + +## Inclusion in a Block + +Consensus, the process through which validator nodes come to agreement on which transactions to +accept, happens in **rounds**. Each round begins with a proposer creating a block of the most +recent transactions and ends with **validators**, special full-nodes with voting power responsible +for consensus, agreeing to accept the block or go with a `nil` block instead. 
Validator nodes +execute the consensus algorithm, such as [CometBFT](https://docs.cometbft.com/v0.37/spec/consensus/), +confirming the transactions using ABCI requests to the application, in order to come to this agreement. + +The first step of consensus is the **block proposal**. One proposer amongst the validators is chosen +by the consensus algorithm to create and propose a block - in order for a `Tx` to be included, it +must be in this proposer's mempool. + +## State Changes + +The next step of consensus is to execute the transactions to fully validate them. All full-nodes +that receive a block proposal from the correct proposer execute the transactions by calling the ABCI function `FinalizeBlock`. +As mentioned throughout the documentation `BeginBlock`, `ExecuteTx` and `EndBlock` are called within FinalizeBlock. +Although every full-node operates individually and locally, the outcome is always consistent and unequivocal. This is because the state changes brought about by the messages are predictable, and the transactions are specifically sequenced in the proposed block. + +```text expandable + -------------------------- + | Receive Block Proposal | + -------------------------- + | + v + ------------------------- + | FinalizeBlock | + ------------------------- + | + v + ------------------- + | BeginBlock | + ------------------- + | + v + -------------------- + | ExecuteTx(tx0) | + | ExecuteTx(tx1) | + | ExecuteTx(tx2) | + | ExecuteTx(tx3) | + | . | + | . | + | . 
| + ------------------- + | + v + -------------------- + | EndBlock | + -------------------- + | + v + ------------------------- + | Consensus | + ------------------------- + | + v + ------------------------- + | Commit | + ------------------------- ``` - -------------------------- | Receive Block Proposal | -------------------------- | v ------------------------- | FinalizeBlock | ------------------------- | v ------------------- | BeginBlock | ------------------- | v -------------------- | ExecuteTx(tx0) | | ExecuteTx(tx1) | | ExecuteTx(tx2) | | ExecuteTx(tx3) | | . | | . | | . | ------------------- | v -------------------- | EndBlock | -------------------- | v ------------------------- | Consensus | ------------------------- | v ------------------------- | Commit | ------------------------- -``` - -### Transaction Execution[​](#transaction-execution "Direct link to Transaction Execution") - -The `FinalizeBlock` ABCI function defined in [`BaseApp`](/v0.50/learn/advanced/baseapp) does the bulk of the state transitions: it is run for each transaction in the block in sequential order as committed to during consensus. Under the hood, transaction execution is almost identical to `CheckTx` but calls the [`runTx`](/v0.50/learn/advanced/baseapp#runtx) function in deliver mode instead of check mode. Instead of using their `checkState`, full-nodes use `finalizeblock`: - -* **Decoding:** Since `FinalizeBlock` is an ABCI call, `Tx` is received in the encoded `[]byte` form. Nodes first unmarshal the transaction, using the [`TxConfig`](/v0.50/learn/beginner/app-anatomy#register-codec) defined in the app, then call `runTx` in `execModeFinalize`, which is very similar to `CheckTx` but also executes and writes state changes. - -* **Checks and `AnteHandler`:** Full-nodes call `validateBasicMsgs` and `AnteHandler` again. 
This second check happens because they may not have seen the same transactions during the addition to Mempool stage and a malicious proposer may have included invalid ones. One difference here is that the `AnteHandler` does not compare `gas-prices` to the node's `min-gas-prices` since that value is local to each node - differing values across nodes yield nondeterministic results. - -* **`MsgServiceRouter`:** After `CheckTx` exits, `FinalizeBlock` continues to run [`runMsgs`](/v0.50/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) to fully execute each `Msg` within the transaction. Since the transaction may have messages from different modules, `BaseApp` needs to know which module to find the appropriate handler. This is achieved using `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's Protobuf [`Msg` service](/v0.50/build/building-modules/msg-services). For `LegacyMsg` routing, the `Route` function is called via the [module manager](/v0.50/build/building-modules/module-manager) to retrieve the route name and find the legacy [`Handler`](/v0.50/build/building-modules/msg-services#handler-type) within the module. - -* **`Msg` service:** Protobuf `Msg` service is responsible for executing each message in the `Tx` and causes state transitions to persist in `finalizeBlockState`. - -* **PostHandlers:** [`PostHandler`](/v0.50/learn/advanced/baseapp#posthandler)s run after the execution of the message. If they fail, the state change of `runMsgs`, as well of `PostHandlers`, are both reverted. - -* **Gas:** While a `Tx` is being delivered, a `GasMeter` is used to keep track of how much gas is being used; if execution completes, `GasUsed` is set and returned in the `abci.ExecTxResult`. If execution halts because `BlockGasMeter` or `GasMeter` has run out or something else goes wrong, a deferred function at the end appropriately errors or panics. 
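The `MsgServiceRouter` dispatch described in the bullets above can be sketched as a lookup from a message's type URL to its module handler. This is a conceptual sketch under assumed names (`msgServiceRouter`, `register`, `route` are hypothetical); the real router maps Protobuf `Msg` service methods, but the dispatch idea is the same.

```go
package main

import (
	"errors"
	"fmt"
)

// handler stands in for a module's Protobuf Msg service method.
type handler func(msg string) error

// msgServiceRouter is a hypothetical router keyed by message type URL.
type msgServiceRouter struct {
	routes map[string]handler
}

func (r *msgServiceRouter) register(typeURL string, h handler) {
	r.routes[typeURL] = h
}

// route finds the module handler for a message, as FinalizeBlock does for
// each Msg in a transaction; unrecognized messages are rejected.
func (r *msgServiceRouter) route(typeURL, msg string) error {
	h, ok := r.routes[typeURL]
	if !ok {
		return errors.New("unrecognized message route: " + typeURL)
	}
	return h(msg)
}

func main() {
	r := &msgServiceRouter{routes: map[string]handler{}}
	r.register("/cosmos.bank.v1beta1.MsgSend", func(msg string) error {
		fmt.Println("bank handler executed:", msg)
		return nil
	})
	fmt.Println(r.route("/cosmos.bank.v1beta1.MsgSend", "send 1000uatom"))
	fmt.Println(r.route("/unknown.Msg", "")) // unrecognized message route: /unknown.Msg
}
```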
- -If there are any failed state changes resulting from a `Tx` being invalid or `GasMeter` running out, the transaction processing terminates and any state changes are reverted. Invalid transactions in a block proposal cause validator nodes to reject the block and vote for a `nil` block instead. - -### Commit[​](#commit "Direct link to Commit") - -The final step is for nodes to commit the block and state changes. Validator nodes perform the previous step of executing state transitions in order to validate the transactions, then sign the block to confirm it. Full nodes that are not validators do not participate in consensus - i.e. they cannot vote - but listen for votes to understand whether or not they should commit the state changes. - -When they receive enough validator votes (2/3+ *precommits* weighted by voting power), full nodes commit to a new block to be added to the blockchain and finalize the state transitions in the application layer. A new state root is generated to serve as a merkle proof for the state transitions. Applications use the [`Commit`](/v0.50/learn/advanced/baseapp#commit) ABCI method inherited from [Baseapp](/v0.50/learn/advanced/baseapp); it syncs all the state transitions by writing the `deliverState` into the application's internal state. As soon as the state changes are committed, `checkState` starts afresh from the most recently committed state and `deliverState` resets to `nil` in order to be consistent and reflect the changes. - -Note that not all blocks have the same number of transactions and it is possible for consensus to result in a `nil` block or one with none at all. In a public blockchain network, it is also possible for validators to be **byzantine**, or malicious, which may prevent a `Tx` from being committed in the blockchain. Possible malicious behaviors include the proposer deciding to censor a `Tx` by excluding it from the block or a validator voting against the block. 
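The commit threshold described above, 2/3+ of *precommits* weighted by voting power, is a strict inequality: exactly two thirds is not enough. A minimal sketch (the function name `hasQuorum` is hypothetical, and integer arithmetic avoids floating-point comparison):

```go
package main

import "fmt"

// hasQuorum reports whether precommit voting power strictly exceeds two
// thirds of the total, the threshold the text above gives for committing
// a block. 3*power > 2*total is the integer form of power/total > 2/3.
func hasQuorum(precommitPower, totalPower int64) bool {
	return 3*precommitPower > 2*totalPower
}

func main() {
	fmt.Println(hasQuorum(67, 100)) // true: strictly more than 2/3
	fmt.Println(hasQuorum(66, 100)) // false: exactly 2/3 is not enough
}
```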
-At this point, the transaction lifecycle of a `Tx` is over: nodes have verified its validity, delivered it by executing its state changes, and committed those changes. The `Tx` itself, in `[]byte` form, is stored in a block and appended to the blockchain. +### Transaction Execution + +The `FinalizeBlock` ABCI function defined in [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp) does the bulk of the +state transitions: it is run for each transaction in the block in sequential order as committed +to during consensus. Under the hood, transaction execution is almost identical to `CheckTx` but calls the +[`runTx`](/docs/sdk/v0.50/learn/advanced/baseapp#runtx) function in deliver mode instead of check mode. +Instead of using their `checkState`, full-nodes use `finalizeblock`: + +- **Decoding:** Since `FinalizeBlock` is an ABCI call, `Tx` is received in the encoded `[]byte` form. + Nodes first unmarshal the transaction, using the [`TxConfig`](/docs/sdk/v0.50/learn/beginner/app-anatomy#register-codec) defined in the app, then call `runTx` in `execModeFinalize`, which is very similar to `CheckTx` but also executes and writes state changes. + +- **Checks and `AnteHandler`:** Full-nodes call `validateBasicMsgs` and `AnteHandler` again. This second check + happens because they may not have seen the same transactions during the addition to Mempool stage + and a malicious proposer may have included invalid ones. One difference here is that the + `AnteHandler` does not compare `gas-prices` to the node's `min-gas-prices` since that value is local + to each node - differing values across nodes yield nondeterministic results. + +- **`MsgServiceRouter`:** After `CheckTx` exits, `FinalizeBlock` continues to run + [`runMsgs`](/docs/sdk/v0.50/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) to fully execute each `Msg` within the transaction. 
+ Since the transaction may have messages from different modules, `BaseApp` needs to know in which module + to find the appropriate handler. This is achieved using `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's Protobuf [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services). + For `LegacyMsg` routing, the `Route` function is called via the [module manager](/docs/sdk/v0.50/documentation/module-system/module-manager) to retrieve the route name and find the legacy [`Handler`](/docs/sdk/v0.50/documentation/module-system/msg-services#handler-type) within the module. + +- **`Msg` service:** Protobuf `Msg` service is responsible for executing each message in the `Tx` and causes state transitions to persist in `finalizeBlockState`. + +- **PostHandlers:** [`PostHandler`](/docs/sdk/v0.50/learn/advanced/baseapp#posthandler)s run after the execution of the message. If they fail, the state changes of `runMsgs`, as well as of `PostHandlers`, are both reverted. + +- **Gas:** While a `Tx` is being delivered, a `GasMeter` is used to keep track of how much + gas is being used; if execution completes, `GasUsed` is set and returned in the + `abci.ExecTxResult`. If execution halts because `BlockGasMeter` or `GasMeter` has run out or something else goes + wrong, a deferred function at the end appropriately errors or panics. + +If there are any failed state changes resulting from a `Tx` being invalid or `GasMeter` running out, +the transaction processing terminates and any state changes are reverted. Invalid transactions in a +block proposal cause validator nodes to reject the block and vote for a `nil` block instead. + +### Commit + +The final step is for nodes to commit the block and state changes. Validator nodes +perform the previous step of executing state transitions in order to validate the transactions, +then sign the block to confirm it. Full nodes that are not validators do not +participate in consensus - i.e.
they cannot vote - but listen for votes to understand whether or +not they should commit the state changes. + +When they receive enough validator votes (2/3+ _precommits_ weighted by voting power), full nodes commit to a new block to be added to the blockchain and +finalize the state transitions in the application layer. A new state root is generated to serve as +a Merkle proof for the state transitions. Applications use the [`Commit`](/docs/sdk/v0.50/learn/advanced/baseapp#commit) +ABCI method inherited from [`BaseApp`](/docs/sdk/v0.50/learn/advanced/baseapp); it syncs all the state transitions by +writing the `deliverState` into the application's internal state. As soon as the state changes are +committed, `checkState` starts afresh from the most recently committed state and `deliverState` +resets to `nil` in order to be consistent and reflect the changes. + +Note that not all blocks have the same number of transactions and it is possible for consensus to +result in a `nil` block or one with no transactions at all. In a public blockchain network, it is also possible +for validators to be **byzantine**, or malicious, which may prevent a `Tx` from being committed in +the blockchain. Possible malicious behaviors include the proposer deciding to censor a `Tx` by +excluding it from the block or a validator voting against the block. + +At this point, the transaction lifecycle of a `Tx` is over: nodes have verified its validity, +delivered it by executing its state changes, and committed those changes. The `Tx` itself, +in `[]byte` form, is stored in a block and appended to the blockchain.
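The per-transaction flow described above (decode, rerun the `AnteHandler`, execute each `Msg` while metering gas) can be sketched in Go. This is a simplified, hypothetical model: `Tx`, `execTx`, and the handler signatures below are illustrative only and are not the actual `BaseApp` API.

```go
package main

import (
	"errors"
	"fmt"
)

// Tx is a hypothetical decoded transaction: a list of messages plus a gas limit.
type Tx struct {
	Msgs     []string
	GasLimit uint64
}

// execTx mirrors the steps run for each transaction during FinalizeBlock:
// decode the raw bytes, run the AnteHandler checks again, then execute
// each Msg via its handler while metering gas.
func execTx(
	decode func([]byte) (Tx, error),
	ante func(Tx) error,
	runMsg func(string) (uint64, error),
	raw []byte,
) (gasUsed uint64, err error) {
	tx, err := decode(raw) // FinalizeBlock receives txs in []byte form
	if err != nil {
		return 0, err
	}
	if err := ante(tx); err != nil { // fee/signature checks rerun here
		return 0, err
	}
	for _, m := range tx.Msgs { // routed via MsgServiceRouter in the real SDK
		g, err := runMsg(m)
		if err != nil {
			return gasUsed, err // state writes would be reverted
		}
		gasUsed += g
		if gasUsed > tx.GasLimit {
			return gasUsed, errors.New("out of gas")
		}
	}
	return gasUsed, nil
}

func main() {
	decode := func([]byte) (Tx, error) {
		return Tx{Msgs: []string{"send", "delegate"}, GasLimit: 100}, nil
	}
	ante := func(Tx) error { return nil }
	runMsg := func(string) (uint64, error) { return 40, nil }

	gas, err := execTx(decode, ante, runMsg, []byte{0x01})
	fmt.Println(gas, err) // both messages execute within the gas limit
}
```

A failure at any step terminates processing with the gas consumed so far, matching the point above that an invalid `Tx` or an exhausted `GasMeter` ends execution and reverts its state changes.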
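The commit threshold above (2/3+ precommits weighted by voting power) can also be illustrated with a small sketch. The names here are hypothetical; real CometBFT additionally tracks rounds, prevotes, and `nil` votes.

```go
package main

import "fmt"

// committed reports whether the validators that signed a precommit carry
// strictly more than two thirds of the total voting power.
func committed(precommits map[string]bool, power map[string]int64) bool {
	var total, signed int64
	for _, p := range power {
		total += p
	}
	for val, ok := range precommits {
		if ok {
			signed += power[val]
		}
	}
	// integer arithmetic avoids floating-point rounding: signed/total > 2/3
	return signed*3 > total*2
}

func main() {
	power := map[string]int64{"alice": 10, "bob": 10, "carol": 10}

	// exactly two thirds is NOT enough: the rule is strictly more than 2/3
	fmt.Println(committed(map[string]bool{"alice": true, "bob": true}, power))

	// all three validators precommit: the block is committed
	fmt.Println(committed(map[string]bool{"alice": true, "bob": true, "carol": true}, power))
}
```

Note that the threshold is weighted by voting power, not by validator count: a single validator holding most of the power could commit a block alone, which is why voting-power distribution matters for security.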
diff --git a/docs/sdk/v0.50/learn/intro/overview.mdx b/docs/sdk/v0.50/learn/intro/overview.mdx index 0c0fd915..55db23c6 100644 --- a/docs/sdk/v0.50/learn/intro/overview.mdx +++ b/docs/sdk/v0.50/learn/intro/overview.mdx @@ -1,41 +1,41 @@ --- -title: "What is the Cosmos SDK" -description: "Version: v0.50" +title: What is the Cosmos SDK --- -The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is an open-source toolkit for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**. +The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is an open-source toolkit for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**. -The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. We further this modular approach by allowing developers to plug and play with different consensus engines this can range from the [CometBFT](https://github.com/cometbft/cometbft) or [Rollkit](https://rollkit.dev/). +The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. +We further this modular approach by allowing developers to plug and play with different consensus engines; options range from [CometBFT](https://github.com/cometbft/cometbft) to [Rollkit](https://rollkit.dev/). SDK-based blockchains have the choice to use the predefined modules or to build their own modules.
What this means is that developers can build a blockchain that is tailored to their specific use case, without having to worry about the low-level details of building a blockchain from scratch. Predefined modules include staking, governance, and token issuance, among others. -What's more, the Cosmos SDK is a capabilities-based system that allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [Object-Capability Model](/v0.50/learn/advanced/ocap). +What's more, the Cosmos SDK is a capabilities-based system that allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [Object-Capability Model](/docs/sdk/v0.50/learn/advanced/ocap). One way to look at this is to imagine that the SDK is like a Lego kit. You can choose to build the basic house from the instructions, or you can choose to modify your house and add more floors, more doors, more windows. The choice is yours. -## What are Application-Specific Blockchains[​](#what-are-application-specific-blockchains "Direct link to What are Application-Specific Blockchains") +## What are Application-Specific Blockchains One development paradigm in the blockchain world today is that of virtual-machine blockchains like Ethereum, where development generally revolves around building decentralized applications on top of an existing blockchain as a set of smart contracts. While smart contracts can be very good for some use cases like single-use applications (e.g. ICOs), they often fall short for building complex decentralized platforms. More generally, smart contracts can be limiting in terms of flexibility, sovereignty and performance. Application-specific blockchains offer a radically different development paradigm than virtual-machine blockchains.
An application-specific blockchain is a blockchain customized to operate a single application: developers have all the freedom to make the design decisions required for the application to run optimally. They can also provide better sovereignty, security and performance. -Learn more about [application-specific blockchains](/v0.50/learn/intro/why-app-specific). +Learn more about [application-specific blockchains](/docs/sdk/v0.50/learn/intro/why-app-specific). -## What is Modularity[​](#what-is-modularity "Direct link to What is Modularity") +## What is Modularity Today there is a lot of talk around modularity, and debate between monolithic and modular designs. Originally the Cosmos SDK was built with a vision of modularity in mind. Modularity is derived from splitting a blockchain into customizable layers of execution, consensus, settlement and data availability, which is what the Cosmos SDK enables. This means that developers can plug and play, making their blockchain customisable by using different software for different layers. For example, you can choose to build a vanilla chain and use the Cosmos SDK with CometBFT. CometBFT would be your consensus layer and the chain itself the settlement and execution layer. Another route could be to use the SDK with Rollkit and Celestia as your consensus and data availability layer. The benefit of modularity is that you can customize your chain to your specific use case.
* Previously the default consensus engine available within the Cosmos SDK was [CometBFT](https://github.com/cometbft/cometbft). CometBFT is the most (and only) mature BFT consensus engine in existence. It is widely used across the industry and is considered the gold standard consensus engine for building Proof-of-Stake systems. -* The Cosmos SDK is open-source and designed to make it easy to build blockchains out of composable [modules](/v0.50/build/modules). As the ecosystem of open-source Cosmos SDK modules grows, it will become increasingly easier to build complex decentralized platforms with it. +* The Cosmos SDK is open-source and designed to make it easy to build blockchains out of composable [modules](/docs/sdk/v0.50/documentation/module-system/modules). As the ecosystem of open-source Cosmos SDK modules grows, it will become increasingly easier to build complex decentralized platforms with it. * The Cosmos SDK is inspired by capabilities-based security, and informed by years of wrestling with blockchain state-machines. This makes the Cosmos SDK a very secure environment to build blockchains. * Most importantly, the Cosmos SDK has already been used to build many application-specific blockchains that are already in production. Among others, we can cite [Cosmos Hub](https://hub.cosmos.network), [IRIS Hub](https://irisnet.org), [Binance Chain](https://docs.binance.org/), [Terra](https://terra.money/) or [Kava](https://www.kava.io/). [Many more](https://cosmos.network/ecosystem) are building on the Cosmos SDK.
-## Getting started with the Cosmos SDK[​](#getting-started-with-the-cosmos-sdk "Direct link to Getting started with the Cosmos SDK") +## Getting started with the Cosmos SDK -* Learn more about the [architecture of a Cosmos SDK application](/v0.50/learn/intro/sdk-app-architecture) +* Learn more about the [architecture of a Cosmos SDK application](/docs/sdk/v0.50/learn/intro/sdk-app-architecture) * Learn how to build an application-specific blockchain from scratch with the [Cosmos SDK Tutorial](https://cosmos.network/docs/tutorial) diff --git a/docs/sdk/v0.50/learn/intro/sdk-app-architecture.mdx b/docs/sdk/v0.50/learn/intro/sdk-app-architecture.mdx index d8694495..65ff5244 100644 --- a/docs/sdk/v0.50/learn/intro/sdk-app-architecture.mdx +++ b/docs/sdk/v0.50/learn/intro/sdk-app-architecture.mdx @@ -1,9 +1,9 @@ --- -title: "Blockchain Architecture" -description: "Version: v0.50" +title: Blockchain Architecture +description: 'At its core, a blockchain is a replicated deterministic state machine.' --- -## State machine[​](#state-machine "Direct link to State machine") +## State machine At its core, a blockchain is a [replicated deterministic state machine](https://en.wikipedia.org/wiki/State_machine_replication). @@ -11,48 +11,82 @@ A state machine is a computer science concept whereby a machine can have multipl Given a state S and a transaction T, the state machine will return a new state S'. -``` -+--------+ +--------+| | | || S +---------------->+ S' || | apply(T) | |+--------+ +--------+ +```text ++--------+ +--------+ +| | | | +| S +---------------->+ S' | +| | apply(T) | | ++--------+ +--------+ ``` In practice, the transactions are bundled in blocks to make the process more efficient. Given a state S and a block of transactions B, the state machine will return a new state S'. 
-``` -+--------+ +--------+| | | || S +----------------------------> | S' || | For each T in B: apply(T) | |+--------+ +--------+ +```text ++--------+ +--------+ +| | | | +| S +----------------------------> | S' | +| | For each T in B: apply(T) | | ++--------+ +--------+ ``` In a blockchain context, the state machine is deterministic. This means that if a node is started at a given state and replays the same sequence of transactions, it will always end up with the same final state. The Cosmos SDK gives developers maximum flexibility to define the state of their application, transaction types and state transition functions. The process of building state-machines with the Cosmos SDK will be described more in depth in the following sections. But first, let us see how the state-machine is replicated using **CometBFT**. -## CometBFT[​](#cometbft "Direct link to CometBFT") +## CometBFT Thanks to the Cosmos SDK, developers just have to define the state machine, and [*CometBFT*](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) will handle replication over the network for them. -``` - ^ +-------------------------------+ ^ | | | | Built with Cosmos SDK | | State-machine = Application | | | | | v | +-------------------------------+ | | | ^Blockchain node | | Consensus | | | | | | | +-------------------------------+ | CometBFT | | | | | | Networking | | | | | | v +-------------------------------+ v +```text expandable + ^ +-------------------------------+ ^ + | | | | Built with Cosmos SDK + | | State-machine = Application | | + | | | v + | +-------------------------------+ + | | | ^ +Blockchain node | | Consensus | | + | | | | + | +-------------------------------+ | CometBFT + | | | | + | | Networking | | + | | | | + v +-------------------------------+ v ``` [CometBFT](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) is an application-agnostic engine that is responsible for handling the *networking* and *consensus* layers of a blockchain. 
In practice, this means that CometBFT is responsible for propagating and ordering transaction bytes. CometBFT relies on an eponymous Byzantine-Fault-Tolerant (BFT) algorithm to reach consensus on the order of transactions. The CometBFT [consensus algorithm](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft#consensus-overview) works with a set of special nodes called *Validators*. Validators are responsible for adding blocks of transactions to the blockchain. At any given block, there is a validator set V. A validator in V is chosen by the algorithm to be the proposer of the next block. This block is considered valid if more than two thirds of V signed a `prevote` and a `precommit` on it, and if all the transactions that it contains are valid. The validator set can be changed by rules written in the state-machine. -## ABCI[​](#abci "Direct link to ABCI") +## ABCI CometBFT passes transactions to the application through an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/), which the application must implement. -``` - +---------------------+ | | | Application | | | +--------+---+--------+ ^ | | | ABCI | v +--------+---+--------+ | | | | | CometBFT | | | | | +---------------------+ +```text expandable + +---------------------+ + | | + | Application | + | | + +--------+---+--------+ + ^ | + | | ABCI + | v + +--------+---+--------+ + | | + | | + | CometBFT | + | | + | | + +---------------------+ ``` Note that **CometBFT only handles transaction bytes**. It has no knowledge of what these bytes mean. All CometBFT does is order these transaction bytes deterministically. CometBFT passes the bytes to the application via the ABCI, and expects a return code to inform it if the messages contained in the transactions were successfully processed or not. Here are the most important messages of the ABCI: -* `CheckTx`: When a transaction is received by CometBFT, it is passed to the application to check if a few basic requirements are met. 
`CheckTx` is used to protect the mempool of full-nodes against spam transactions. . A special handler called the [`AnteHandler`](/v0.50/learn/beginner/gas-fees#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet. -* `DeliverTx`: When a [valid block](https://docs.cometbft.com/v0.37/spec/core/data_structures#block) is received by CometBFT, each transaction in the block is passed to the application via `DeliverTx` in order to be processed. It is during this stage that the state transitions occur. The `AnteHandler` executes again, along with the actual [`Msg` service](/v0.50/build/building-modules/msg-services) RPC for each message in the transaction. +* `CheckTx`: When a transaction is received by CometBFT, it is passed to the application to check if a few basic requirements are met. `CheckTx` is used to protect the mempool of full-nodes against spam transactions. A special handler called the [`AnteHandler`](/docs/sdk/v0.50/learn/beginner/gas-fees#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet. +* `DeliverTx`: When a [valid block](https://docs.cometbft.com/v0.37/spec/core/data_structures#block) is received by CometBFT, each transaction in the block is passed to the application via `DeliverTx` in order to be processed.
It is during this stage that the state transitions occur. The `AnteHandler` executes again, along with the actual [`Msg` service](/docs/sdk/v0.50/documentation/module-system/msg-services) RPC for each message in the transaction. * `BeginBlock`/`EndBlock`: These messages are executed at the beginning and the end of each block, whether the block contains transactions or not. It is useful to trigger automatic execution of logic. Proceed with caution though, as computationally expensive loops could slow down your blockchain, or even freeze it if the loop is infinite. Find a more detailed view of the ABCI methods from the [CometBFT docs](https://docs.cometbft.com/v0.37/spec/abci/). -Any application built on CometBFT needs to implement the ABCI interface in order to communicate with the underlying local CometBFT engine. Fortunately, you do not have to implement the ABCI interface. The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](/v0.50/learn/intro/sdk-design#baseapp). +Any application built on CometBFT needs to implement the ABCI interface in order to communicate with the underlying local CometBFT engine. Fortunately, you do not have to implement the ABCI interface. The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](/docs/sdk/v0.50/learn/intro/sdk-design#baseapp). diff --git a/docs/sdk/v0.50/learn/intro/sdk-design.mdx b/docs/sdk/v0.50/learn/intro/sdk-design.mdx index 8fd8d427..a49f524f 100644 --- a/docs/sdk/v0.50/learn/intro/sdk-design.mdx +++ b/docs/sdk/v0.50/learn/intro/sdk-design.mdx @@ -1,9 +1,8 @@ --- -title: "Main Components of the Cosmos SDK" -description: "Version: v0.50" +title: Main Components of the Cosmos SDK --- -The Cosmos SDK is a framework that facilitates the development of secure state-machines on top of CometBFT. At its core, the Cosmos SDK is a boilerplate implementation of the [ABCI](/v0.50/learn/intro/sdk-app-architecture#abci) in Golang. 
It comes with a [`multistore`](/v0.50/learn/advanced/store#multistore) to persist data and a [`router`](/v0.50/learn/advanced/baseapp#routing) to handle transactions. +The Cosmos SDK is a framework that facilitates the development of secure state-machines on top of CometBFT. At its core, the Cosmos SDK is a boilerplate implementation of the [ABCI](/docs/sdk/v0.50/learn/intro/sdk-app-architecture#abci) in Golang. It comes with a [`multistore`](/docs/sdk/v0.50/learn/advanced/store#multistore) to persist data and a [`router`](/docs/sdk/v0.50/learn/advanced/baseapp#routing) to handle transactions. Here is a simplified view of how transactions are handled by an application built on top of the Cosmos SDK when transferred from CometBFT via `DeliverTx`: @@ -12,41 +11,1002 @@ Here is a simplified view of how transactions are handled by an application buil 3. Route each message to the appropriate module so that it can be processed. 4. Commit state changes. -## `baseapp`[​](#baseapp "Direct link to baseapp") +## `baseapp` -`baseapp` is the boilerplate implementation of a Cosmos SDK application. It comes with an implementation of the ABCI to handle the connection with the underlying consensus engine. Typically, a Cosmos SDK application extends `baseapp` by embedding it in [`app.go`](/v0.50/learn/beginner/app-anatomy#core-application-file). +`baseapp` is the boilerplate implementation of a Cosmos SDK application. It comes with an implementation of the ABCI to handle the connection with the underlying consensus engine. Typically, a Cosmos SDK application extends `baseapp` by embedding it in [`app.go`](/docs/sdk/v0.50/learn/beginner/app-anatomy#core-application-file). Here is an example of this from `simapp`, the Cosmos SDK demonstration app: -simapp/app.go +```go expandable +/go:build app_v1 -``` -loading... 
-``` +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk 
"github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + 
groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. 
+type stdAccAddressCodec struct{ +} + +func (g stdAccAddressCodec) + +StringToBytes(text string) ([]byte, error) { + if text == "" { + return nil, nil +} + +return sdk.AccAddressFromBech32(text) +} + +func (g stdAccAddressCodec) + +BytesToString(bz []byte) (string, error) { + if bz == nil { + return "", nil +} + +return sdk.AccAddress(bz).String(), nil +} + +/ stdValAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. +type stdValAddressCodec struct{ +} + +func (g stdValAddressCodec) + +StringToBytes(text string) ([]byte, error) { + return sdk.ValAddressFromBech32(text) +} + +func (g stdValAddressCodec) + +BytesToString(bz []byte) (string, error) { + return sdk.ValAddress(bz).String(), nil +} + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + +app.MountTransientStores(tkeys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app.go#L170-L212) +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain, which comprises of only + / one decorator: the Transaction Tips decorator. 
However, some chains do + / not need it by default, so feel free to comment the next line if you do + / not need tips. + / To read more about tips: + / https://docs.cosmos.network/main/core/tips.html + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. + app.setPostHandler() + + / At startup, after all modules have been registered, check that all prot + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application 
updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ GetSubspace returns a param subspace for a given module name. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetSubspace(moduleName string) + +paramstypes.Subspace { + subspace, _ := app.ParamsKeeper.GetSubspace(moduleName) + +return subspace +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} +``` The goal of `baseapp` is to provide a secure interface between the store and the extensible state machine while defining as little about the state machine as possible (staying true to the ABCI). -For more on `baseapp`, please click [here](/v0.50/learn/advanced/baseapp). +For more on `baseapp`, please click [here](/docs/sdk/v0.50/learn/advanced/baseapp). -## Multistore[​](#multistore "Direct link to Multistore") +## Multistore -The Cosmos SDK provides a [`multistore`](/v0.50/learn/advanced/store#multistore) for persisting state. The multistore allows developers to declare any number of [`KVStores`](/v0.50/learn/advanced/store#base-layer-kvstores). These `KVStores` only accept the `[]byte` type as value and therefore any custom structure needs to be marshalled using [a codec](/v0.50/learn/advanced/encoding) before being stored. +The Cosmos SDK provides a [`multistore`](/docs/sdk/v0.50/learn/advanced/store#multistore) for persisting state. 
The multistore allows developers to declare any number of [`KVStores`](/docs/sdk/v0.50/learn/advanced/store#base-layer-kvstores). These `KVStores` only accept the `[]byte` type as value and therefore any custom structure needs to be marshalled using [a codec](/docs/sdk/v0.50/learn/advanced/encoding) before being stored. -The multistore abstraction is used to divide the state in distinct compartments, each managed by its own module. For more on the multistore, click [here](/v0.50/learn/advanced/store#multistore) +The multistore abstraction is used to divide the state in distinct compartments, each managed by its own module. For more on the multistore, click [here](/docs/sdk/v0.50/learn/advanced/store#multistore) -## Modules[​](#modules "Direct link to Modules") +## Modules The power of the Cosmos SDK lies in its modularity. Cosmos SDK applications are built by aggregating a collection of interoperable modules. Each module defines a subset of the state and contains its own message/transaction processor, while the Cosmos SDK is responsible for routing each message to its respective module. 
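The marshal-before-store constraint described above can be sketched in plain Go. This is a conceptual illustration only — the `Account` type, the `kvStore` map, and the use of JSON are stand-ins (real SDK state uses protobuf-generated types and the SDK's own store and codec APIs) — but it shows why any custom structure must pass through a codec before it reaches a `KVStore` that only accepts `[]byte`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Account is a hypothetical application type. Real SDK types are
// protobuf-generated; JSON keeps this sketch self-contained.
type Account struct {
	Address string `json:"address"`
	Balance uint64 `json:"balance"`
}

// kvStore mimics the []byte-in, []byte-out contract of a KVStore.
type kvStore map[string][]byte

// saveAccount marshals the account and writes the raw bytes under an
// address-based key; the store never sees a Go value, only bytes.
func saveAccount(store kvStore, acc Account) error {
	bz, err := json.Marshal(acc)
	if err != nil {
		return err
	}
	store["account/"+acc.Address] = bz
	return nil
}

// loadAccount reads the raw bytes back and unmarshals them with the
// same codec that produced them.
func loadAccount(store kvStore, address string) (Account, error) {
	var acc Account
	err := json.Unmarshal(store["account/"+address], &acc)
	return acc, err
}

func main() {
	store := kvStore{}
	if err := saveAccount(store, Account{Address: "cosmos1example", Balance: 42}); err != nil {
		panic(err)
	}
	acc, err := loadAccount(store, "cosmos1example")
	if err != nil {
		panic(err)
	}
	fmt.Println(acc.Address, acc.Balance)
}
```

The key point is the round trip: the codec is the only bridge between typed application state and the byte-oriented store, which is why codec choice (amino vs. protobuf) matters for state compatibility.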
Here is a simplified view of how a transaction is processed by the application of each full-node when it is received in a valid block: -``` - + | | Transaction relayed from the full-node's | CometBFT engine to the node's application | via DeliverTx | | +---------------------v--------------------------+ | APPLICATION | | | | Using baseapp's methods: Decode the Tx, | | extract and route the message(s) | | | +---------------------+--------------------------+ | | | +---------------------------+ | | | Message routed to | the correct module | to be processed | |+----------------+ +---------------+ +----------------+ +------v----------+| | | | | | | || AUTH MODULE | | BANK MODULE | | STAKING MODULE | | GOV MODULE || | | | | | | || | | | | | | Handles message,|| | | | | | | Updates state || | | | | | | |+----------------+ +---------------+ +----------------+ +------+----------+ | | | | +--------------------------+ | | Return result to CometBFT | (0=Ok, 1=Err) v +```text expandable + + + | + | Transaction relayed from the full-node's + | CometBFT engine to the node's application + | via DeliverTx + | + | + +---------------------v--------------------------+ + | APPLICATION | + | | + | Using baseapp's methods: Decode the Tx, | + | extract and route the message(s) | + | | + +---------------------+--------------------------+ + | + | + | + +---------------------------+ + | + | + | Message routed to + | the correct module + | to be processed + | + | ++----------------+ +---------------+ +----------------+ +------v----------+ +| | | | | | | | +| AUTH MODULE | | BANK MODULE | | STAKING MODULE | | GOV MODULE | +| | | | | | | | +| | | | | | | Handles message,| +| | | | | | | Updates state | +| | | | | | | | ++----------------+ +---------------+ +----------------+ +------+----------+ + | + | + | + | + +--------------------------+ + | + | Return result to CometBFT + | (0=Ok, 1=Err) + v ``` -Each module can be seen as a little state-machine. 
Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](/v0.50/learn/advanced/ocap). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities. +Each module can be seen as a little state-machine. Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](/docs/sdk/v0.50/learn/advanced/ocap). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities. Cosmos SDK modules are defined in the `x/` folder of the Cosmos SDK. 
Some core modules include: diff --git a/docs/sdk/v0.50/learn/intro/why-app-specific.mdx b/docs/sdk/v0.50/learn/intro/why-app-specific.mdx index 81eb8fda..f9c0e6e9 100644 --- a/docs/sdk/v0.50/learn/intro/why-app-specific.mdx +++ b/docs/sdk/v0.50/learn/intro/why-app-specific.mdx @@ -1,69 +1,80 @@ --- -title: "Application-Specific Blockchains" -description: "Version: v0.50" +title: Application-Specific Blockchains --- - - This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts. - +## Synopsis -## What are application-specific blockchains[​](#what-are-application-specific-blockchains "Direct link to What are application-specific blockchains") +This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts. + +## What are application-specific blockchains Application-specific blockchains are blockchains customized to operate a single application. Instead of building a decentralized application on top of an underlying blockchain like Ethereum, developers build their own blockchain from the ground up. This means building a full-node client, a light-client, and all the necessary interfaces (CLI, REST, ...) to interact with the nodes. 
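The state-machine/consensus split can be illustrated with a toy application. The sketch below is not the real ABCI interface — `CounterApp` and its transaction format are invented for illustration, and the method names only loosely mirror ABCI's `CheckTx`/`DeliverTx` — but it shows the division of responsibilities: the application owns state and transaction semantics, while a consensus engine (CometBFT, in a real chain) would only drive these methods:

```go
package main

import (
	"fmt"
	"strings"
)

// CounterApp is a toy state machine. A consensus engine would call into
// it through an ABCI-like interface; it knows nothing about networking
// or block agreement, only about its own state transitions.
type CounterApp struct {
	state map[string]int // application state, owned entirely by the app
}

func NewCounterApp() *CounterApp {
	return &CounterApp{state: map[string]int{}}
}

// CheckTx is the cheap validity check run when a tx enters the mempool.
func (app *CounterApp) CheckTx(tx string) error {
	if !strings.HasPrefix(tx, "incr:") {
		return fmt.Errorf("unknown tx: %q", tx)
	}
	return nil
}

// DeliverTx applies a transaction from a committed block to the state.
func (app *CounterApp) DeliverTx(tx string) error {
	if err := app.CheckTx(tx); err != nil {
		return err
	}
	app.state[strings.TrimPrefix(tx, "incr:")]++
	return nil
}

func main() {
	app := NewCounterApp()
	// In a real chain these txs would arrive from CometBFT, block by block.
	for _, tx := range []string{"incr:alice", "incr:alice", "incr:bob"} {
		if err := app.DeliverTx(tx); err != nil {
			panic(err)
		}
	}
	fmt.Println(app.state["alice"], app.state["bob"])
}
```

Because the boundary is just this narrow method interface, the same application logic could in principle sit under any consensus engine that speaks the protocol — which is exactly the flexibility argument made below.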
-``` - ^ +-------------------------------+ ^ | | | | Built with Cosmos SDK | | State-machine = Application | | | | | v | +-------------------------------+ | | | ^Blockchain node | | Consensus | | | | | | | +-------------------------------+ | CometBFT | | | | | | Networking | | | | | | v +-------------------------------+ v +```text expandable + ^ +-------------------------------+ ^ + | | | | Built with Cosmos SDK + | | State-machine = Application | | + | | | v + | +-------------------------------+ + | | | ^ +Blockchain node | | Consensus | | + | | | | + | +-------------------------------+ | CometBFT + | | | | + | | Networking | | + | | | | + v +-------------------------------+ v ``` -## What are the shortcomings of Smart Contracts[​](#what-are-the-shortcomings-of-smart-contracts "Direct link to What are the shortcomings of Smart Contracts") +## What are the shortcomings of Smart Contracts Virtual-machine blockchains like Ethereum addressed the demand for more programmability back in 2014. At the time, the options available for building decentralized applications were quite limited. Most developers would build on top of the complex and limited Bitcoin scripting language, or fork the Bitcoin codebase which was hard to work with and customize. Virtual-machine blockchains came in with a new value proposition. Their state-machine incorporates a virtual-machine that is able to interpret turing-complete programs called Smart Contracts. These Smart Contracts are very good for use cases like one-time events (e.g. ICOs), but they can fall short for building complex decentralized platforms. Here is why: -* Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. 
Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails. -* Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely restrain **performance**. And even if the state-machine were to be split in multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of 10x in performance when the virtual-machine is removed). -* Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralized application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it. +- Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails. 
+- Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely restrain **performance**. And even if the state-machine were to be split in multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of 10x in performance when the virtual-machine is removed). +- Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralized application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it. Application-Specific Blockchains are designed to address these shortcomings. -## Application-Specific Blockchains Benefits[​](#application-specific-blockchains-benefits "Direct link to Application-Specific Blockchains Benefits") +## Application-Specific Blockchains Benefits -### Flexibility[​](#flexibility "Direct link to Flexibility") +### Flexibility Application-specific blockchains give maximum flexibility to developers: -* In Cosmos blockchains, the state-machine is typically connected to the underlying consensus engine via an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/). This interface can be wrapped in any programming language, meaning developers can build their state-machine in the programming language of their choice. +- In Cosmos blockchains, the state-machine is typically connected to the underlying consensus engine via an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/). 
This interface can be wrapped in any programming language, meaning developers can build their state-machine in the programming language of their choice. -* Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...). Typically the choice will be made based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in Javascript, ...). +- Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...). Typically the choice will be made based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in Javascript, ...). -* The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. Today, only CometBFT is production-ready, but in the future other consensus engines are expected to emerge. +- The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. Today, only CometBFT is production-ready, but in the future other consensus engines are expected to emerge. -* Even when they settle for a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements in their pristine forms. +- Even when they settle for a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements in their pristine forms. -* Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...). 
+- Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...). -* Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains. +- Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains. The list above contains a few examples that show how much flexibility application-specific blockchains give to developers. The goal of Cosmos and the Cosmos SDK is to make developer tooling as generic and composable as possible, so that each part of the stack can be forked, tweaked and improved without losing compatibility. As the community grows, more alternatives for each of the core building blocks will emerge, giving more options to developers. -### Performance[​](#performance "Direct link to Performance") +### Performance Decentralized applications built with Smart Contracts are inherently capped in performance by the underlying environment. For a decentralized application to optimise performance, it needs to be built as an application-specific blockchain. Next are some of the benefits an application-specific blockchain brings in terms of performance: -* Developers of application-specific blockchains can choose to operate with a novel consensus engine such as CometBFT BFT. 
Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput. -* An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage. -* Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them. +- Developers of application-specific blockchains can choose to operate with a novel consensus engine such as CometBFT. Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput. +- An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage. +- Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them. -### Security[​](#security "Direct link to Security") +### Security Security is hard to quantify, and greatly varies from platform to platform.
That said, here are some important benefits an application-specific blockchain can bring in terms of security: -* Developers can choose proven programming languages like Go when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature. -* Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. They can use their own custom cryptography, and rely on well-audited crypto libraries. -* Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application. +- Developers can choose proven programming languages like Go when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature. +- Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. They can use their own custom cryptography, and rely on well-audited crypto libraries. +- Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application. -### Sovereignty[​](#sovereignty "Direct link to Sovereignty") +### Sovereignty One of the major benefits of application-specific blockchains is sovereignty. A decentralized application is an ecosystem that involves many actors: users, developers, third-party services, and more. When developers build on a virtual-machine blockchain where many decentralized applications coexist, the community of the application is different than the community of the underlying blockchain, and the latter supersedes the former in the governance process. If there is a bug or if a new feature is needed, stakeholders of the application have very little leeway to upgrade the code.
If the community of the underlying blockchain refuses to act, nothing can happen. diff --git a/docs/sdk/v0.50/learn/learn.mdx b/docs/sdk/v0.50/learn/learn.mdx new file mode 100644 index 00000000..7e0e22aa --- /dev/null +++ b/docs/sdk/v0.50/learn/learn.mdx @@ -0,0 +1,10 @@ +--- +title: Learn +--- + +* [Introduction](/docs/sdk/v0.50/learn/intro/overview) - Dive into the fundamentals of Cosmos SDK with an insightful introduction, + laying the groundwork for understanding blockchain development. In this section we provide a High-Level Overview of the SDK, then dive deeper into Core concepts such as Application-Specific Blockchains, Blockchain Architecture, and finally we begin to explore the main components of the SDK. +* [Beginner](/docs/sdk/v0.50/learn/beginner/app-anatomy) - Start your journey with beginner-friendly resources in the Cosmos SDK's "Learn" + section, providing a gentle entry point for newcomers to blockchain development. Here we focus on a little more detail, covering the Anatomy of a Cosmos SDK Application, Transaction Lifecycles, Accounts and lastly, Gas and Fees. +* [Advanced](/docs/sdk/v0.50/learn/advanced/baseapp) - Level up your Cosmos SDK expertise with advanced topics, tailored for experienced + developers diving into intricate blockchain application development. We cover the Cosmos SDK on a lower level as we dive into the core of the SDK with BaseApp, Transactions, Context, Node Client (Daemon), Store, Encoding, gRPC, REST, and CometBFT Endpoints, CLI, Events, Telemetry, Object-Capability Model, RunTx recovery middleware, Cosmos Blockchain Simulator, Protobuf Documentation, In-Place Store Migrations, Configuration and AutoCLI.
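The "automatic execution of code" that the intro above credits to application-specific blockchains — logic triggered at the beginning and end of each block — can be sketched as a toy state machine. This is a hypothetical, self-contained illustration of the idea only, not the real ABCI or Cosmos SDK Go interface (the `App`, `BeginBlock`, and `EndBlock` names here are assumptions for the sketch):

```go
package main

import "fmt"

// App is a toy state machine with per-block hooks, the kind of
// "automatic execution" the text says the EVM lacks.
type App struct {
	height  int64
	rewards int64
}

// BeginBlock runs before any transaction in the block.
func (a *App) BeginBlock() {
	a.height++
}

// EndBlock runs after all transactions; here it accrues a fixed
// per-block reward without any transaction triggering it.
func (a *App) EndBlock() {
	a.rewards += 10
}

// ProcessBlock executes a block: begin hook, transactions, end hook.
func (a *App) ProcessBlock(txs []string) {
	a.BeginBlock()
	for _, tx := range txs {
		fmt.Println("executing", tx)
	}
	a.EndBlock()
}

func main() {
	app := &App{}
	app.ProcessBlock([]string{"send", "delegate"})
	app.ProcessBlock(nil) // end-of-block logic still fires on an empty block
	fmt.Println("height:", app.height, "accrued rewards:", app.rewards)
}
```

The point of the sketch is that the reward accrual in `EndBlock` happens on every block, even empty ones — something a contract on a purely transaction-driven virtual machine cannot do by itself.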
diff --git a/docs/sdk/v0.50/tutorials/transactions/building-a-transaction.mdx b/docs/sdk/v0.50/tutorials/transactions/building-a-transaction.mdx index 87e1cf26..8ecf4a2b 100644 --- a/docs/sdk/v0.50/tutorials/transactions/building-a-transaction.mdx +++ b/docs/sdk/v0.50/tutorials/transactions/building-a-transaction.mdx @@ -1,54 +1,193 @@ --- -title: "Building a Transaction" -description: "Version: v0.50" +title: Building a Transaction +description: >- + These are the steps to build, sign and broadcast a transaction using v2 + semantics. --- These are the steps to build, sign and broadcast a transaction using v2 semantics. 1. Correctly set up imports -``` -import ( "context" "fmt" "log" "google.golang.org/grpc" "google.golang.org/grpc/credentials/insecure" apisigning "cosmossdk.io/api/cosmos/tx/signing/v1beta1" "cosmossdk.io/client/v2/broadcast/comet" "cosmossdk.io/client/v2/tx" "cosmossdk.io/core/transaction" "cosmossdk.io/math" banktypes "cosmossdk.io/x/bank/types" codectypes "github.com/cosmos/cosmos-sdk/codec/types" cryptocodec "github.com/cosmos/cosmos-sdk/crypto/codec" "github.com/cosmos/cosmos-sdk/crypto/keyring" authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" "github.com/cosmos/cosmos-sdk/codec" addrcodec "github.com/cosmos/cosmos-sdk/codec/address" sdk "github.com/cosmos/cosmos-sdk/types") +```go expandable +import ( + + "context" + "fmt" + "log" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + apisigning "cosmossdk.io/api/cosmos/tx/signing/v1beta1" + "cosmossdk.io/client/v2/broadcast/comet" + "cosmossdk.io/client/v2/tx" + "cosmossdk.io/core/transaction" + "cosmossdk.io/math" + banktypes "cosmossdk.io/x/bank/types" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptocodec "github.com/cosmos/cosmos-sdk/crypto/codec" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/codec" + addrcodec "github.com/cosmos/cosmos-sdk/codec/address" 
+ sdk "github.com/cosmos/cosmos-sdk/types" +) ``` 2. Create a gRPC connection -``` -clientConn, err := grpc.NewClient("127.0.0.1:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))if err != nil { log.Fatal(err)} +```go +clientConn, err := grpc.NewClient("127.0.0.1:9090", grpc.WithTransportCredentials(insecure.NewCredentials())) + if err != nil { + log.Fatal(err) +} ``` 3. Setup codec and interface registry -``` - // Setup interface registry and register necessary interfaces interfaceRegistry := codectypes.NewInterfaceRegistry() banktypes.RegisterInterfaces(interfaceRegistry) authtypes.RegisterInterfaces(interfaceRegistry) cryptocodec.RegisterInterfaces(interfaceRegistry) // Create a ProtoCodec for encoding/decoding protoCodec := codec.NewProtoCodec(interfaceRegistry) +```go +// Setup interface registry and register necessary interfaces + interfaceRegistry := codectypes.NewInterfaceRegistry() + +banktypes.RegisterInterfaces(interfaceRegistry) + +authtypes.RegisterInterfaces(interfaceRegistry) + +cryptocodec.RegisterInterfaces(interfaceRegistry) + + // Create a ProtoCodec for encoding/decoding + protoCodec := codec.NewProtoCodec(interfaceRegistry) ``` 4. Initialize keyring -``` - ckr, err := keyring.New("autoclikeyring", "test", home, nil, protoCodec) if err != nil { log.Fatal("error creating keyring", err) } kr, err := keyring.NewAutoCLIKeyring(ckr, addrcodec.NewBech32Codec("cosmos")) if err != nil { log.Fatal("error creating auto cli keyring", err) } +```go expandable +ckr, err := keyring.New("autoclikeyring", "test", home, nil, protoCodec) + if err != nil { + log.Fatal("error creating keyring", err) +} + +kr, err := keyring.NewAutoCLIKeyring(ckr, addrcodec.NewBech32Codec("cosmos")) + if err != nil { + log.Fatal("error creating auto cli keyring", err) +} ``` 5.
Setup transaction parameters -``` - // Setup transaction parameters txParams := tx.TxParameters{ ChainID: "simapp-v2-chain", SignMode: apisigning.SignMode_SIGN_MODE_DIRECT, AccountConfig: tx.AccountConfig{ FromAddress: "cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", FromName: "alice", }, } // Configure gas settings gasConfig, err := tx.NewGasConfig(100, 100, "0stake") if err != nil { log.Fatal("error creating gas config: ", err) } txParams.GasConfig = gasConfig // Create auth query client authClient := authtypes.NewQueryClient(clientConn) // Retrieve account information for the sender fromAccount, err := getAccount("cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", authClient, protoCodec) if err != nil { log.Fatal("error getting from account: ", err) } // Update txParams with the correct account number and sequence txParams.AccountConfig.AccountNumber = fromAccount.GetAccountNumber() txParams.AccountConfig.Sequence = fromAccount.GetSequence() // Retrieve account information for the recipient toAccount, err := getAccount("cosmos1e2wanzh89mlwct7cs7eumxf7mrh5m3ykpsh66m", authClient, protoCodec) if err != nil { log.Fatal("error getting to account: ", err) } // Configure transaction settings txConf, _ := tx.NewTxConfig(tx.ConfigOptions{ AddressCodec: addrcodec.NewBech32Codec("cosmos"), Cdc: protoCodec, ValidatorAddressCodec: addrcodec.NewBech32Codec("cosmosval"), EnabledSignModes: []apisigning.SignMode{apisigning.SignMode_SIGN_MODE_DIRECT}, }) +```go expandable +// Setup transaction parameters + txParams := tx.TxParameters{ + ChainID: "simapp-v2-chain", + SignMode: apisigning.SignMode_SIGN_MODE_DIRECT, + AccountConfig: tx.AccountConfig{ + FromAddress: "cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", + FromName: "alice", +}, +} + + // Configure gas settings + gasConfig, err := tx.NewGasConfig(100, 100, "0stake") + if err != nil { + log.Fatal("error creating gas config: ", err) +} + +txParams.GasConfig = gasConfig + + // Create auth query client + authClient :=
authtypes.NewQueryClient(clientConn) + + // Retrieve account information for the sender + fromAccount, err := getAccount("cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", authClient, protoCodec) + if err != nil { + log.Fatal("error getting from account: ", err) +} + + // Update txParams with the correct account number and sequence + txParams.AccountConfig.AccountNumber = fromAccount.GetAccountNumber() + +txParams.AccountConfig.Sequence = fromAccount.GetSequence() + + // Retrieve account information for the recipient + toAccount, err := getAccount("cosmos1e2wanzh89mlwct7cs7eumxf7mrh5m3ykpsh66m", authClient, protoCodec) + if err != nil { + log.Fatal("error getting to account: ", err) +} + + // Configure transaction settings + txConf, _ := tx.NewTxConfig(tx.ConfigOptions{ + AddressCodec: addrcodec.NewBech32Codec("cosmos"), + Cdc: protoCodec, + ValidatorAddressCodec: addrcodec.NewBech32Codec("cosmosval"), + EnabledSignModes: []apisigning.SignMode{ + apisigning.SignMode_SIGN_MODE_DIRECT +}, +}) ``` 6. Build the transaction -``` -// Create a transaction factory f, err := tx.NewFactory(kr, codec.NewProtoCodec(codectypes.NewInterfaceRegistry()), nil, txConf, addrcodec.NewBech32Codec("cosmos"), clientConn, txParams) if err != nil { log.Fatal("error creating factory", err) } // Define the transaction message msgs := []transaction.Msg{ &banktypes.MsgSend{ FromAddress: fromAccount.GetAddress().String(), ToAddress: toAccount.GetAddress().String(), Amount: sdk.Coins{ sdk.NewCoin("stake", math.NewInt(1000000)), }, }, } // Build and sign the transaction tx, err := f.BuildsSignedTx(context.Background(), msgs...)
if err != nil { log.Fatal("error building signed tx", err) } +```go expandable +// Create a transaction factory + f, err := tx.NewFactory(kr, codec.NewProtoCodec(codectypes.NewInterfaceRegistry()), nil, txConf, addrcodec.NewBech32Codec("cosmos"), clientConn, txParams) + if err != nil { + log.Fatal("error creating factory", err) +} + + // Define the transaction message + msgs := []transaction.Msg{ + &banktypes.MsgSend{ + FromAddress: fromAccount.GetAddress().String(), + ToAddress: toAccount.GetAddress().String(), + Amount: sdk.Coins{ + sdk.NewCoin("stake", math.NewInt(1000000)), +}, +}, +} + + // Build and sign the transaction + tx, err := f.BuildsSignedTx(context.Background(), msgs...) + if err != nil { + log.Fatal("error building signed tx", err) +} ``` 7. Broadcast the transaction -``` -// Create a broadcaster for the transaction c, err := comet.NewCometBFTBroadcaster("http://127.0.0.1:26657", comet.BroadcastSync, protoCodec) if err != nil { log.Fatal("error creating comet broadcaster", err) } // Broadcast the transaction res, err := c.Broadcast(context.Background(), tx.Bytes()) if err != nil { log.Fatal("error broadcasting tx", err) } +```go expandable +// Create a broadcaster for the transaction + c, err := comet.NewCometBFTBroadcaster("http://127.0.0.1:26657", comet.BroadcastSync, protoCodec) + if err != nil { + log.Fatal("error creating comet broadcaster", err) +} + + // Broadcast the transaction + res, err := c.Broadcast(context.Background(), tx.Bytes()) + if err != nil { + log.Fatal("error broadcasting tx", err) +} ``` 8.
Helpers -``` -// getAccount retrieves account information using the provided addressfunc getAccount(address string, authClient authtypes.QueryClient, codec codec.Codec) (sdk.AccountI, error) { // Query account info accountQuery, err := authClient.Account(context.Background(), &authtypes.QueryAccountRequest{ Address: string(address), }) if err != nil { return nil, fmt.Errorf("error getting account: %w", err) } // Unpack the account information var account sdk.AccountI err = codec.InterfaceRegistry().UnpackAny(accountQuery.Account, &account) if err != nil { return nil, fmt.Errorf("error unpacking account: %w", err) } return account, nil} +```go expandable +// getAccount retrieves account information using the provided address +func getAccount(address string, authClient authtypes.QueryClient, codec codec.Codec) (sdk.AccountI, error) { + // Query account info + accountQuery, err := authClient.Account(context.Background(), &authtypes.QueryAccountRequest{ + Address: string(address), +}) + if err != nil { + return nil, fmt.Errorf("error getting account: %w", err) +} + + // Unpack the account information + var account sdk.AccountI + err = codec.InterfaceRegistry().UnpackAny(accountQuery.Account, &account) + if err != nil { + return nil, fmt.Errorf("error unpacking account: %w", err) +} + +return account, nil +} ``` diff --git a/docs/sdk/v0.50/tutorials.mdx b/docs/sdk/v0.50/tutorials/tutorials.mdx similarity index 88% rename from docs/sdk/v0.50/tutorials.mdx rename to docs/sdk/v0.50/tutorials/tutorials.mdx index 92695160..c896cc54 100644 --- a/docs/sdk/v0.50/tutorials.mdx +++ b/docs/sdk/v0.50/tutorials/tutorials.mdx @@ -1,9 +1,8 @@ --- -title: "Tutorials" -description: "Version: v0.50" +title: Tutorials --- -## Advanced Tutorials[​](#advanced-tutorials "Direct link to Advanced Tutorials") +## Advanced Tutorials This section provides a concise overview of tutorials focused on implementing vote extensions in the Cosmos SDK.
Vote extensions are a powerful feature for enhancing the security and fairness of blockchain applications, particularly in scenarios like implementing oracles and mitigating auction front-running. diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx index 0b6e0e7e..dad1cebf 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx @@ -1,6 +1,5 @@ --- -title: "Demo of Mitigating Front-Running with Vote Extensions" -description: "Version: v0.50" +title: Demo of Mitigating Front-Running with Vote Extensions --- The purpose of this demo is to test the implementation of the `VoteExtensionHandler` and `PrepareProposalHandler` that we have just added to the codebase. These handlers are designed to mitigate front-running by ensuring that all validators have a consistent view of the mempool when preparing proposals. @@ -9,57 +8,99 @@ In this demo, we are using a 3 validator network. The Beacon validator is specia 1. Bootstrap the validator network: This sets up a network with 3 validators. The script `./scripts/configure.sh` is used to configure the network and the validators. -``` -cd scriptsconfigure.sh +```shell +cd scripts +configure.sh ``` If this doesn't work, please ensure you have run `make build` in the `tutorials/nameservice/base` directory. -2. Have alice attempt to reserve `bob.cosmos`: This is a normal transaction that alice wants to execute. The script \``./scripts/reserve.sh "bob.cosmos"` is used to send this transaction. +{/* nolint:all */} +2\. Have alice attempt to reserve `bob.cosmos`: This is a normal transaction that alice wants to execute. The script `./scripts/reserve.sh "bob.cosmos"` is used to send this transaction.
-``` +```shell reserve.sh "bob.cosmos" ``` -3. Query to verify the name has been reserved: This is to check the result of the transaction. The script `./scripts/whois.sh "bob.cosmos"` is used to query the state of the blockchain. +{/* /nolint:all */} +3\. Query to verify the name has been reserved: This is to check the result of the transaction. The script `./scripts/whois.sh "bob.cosmos"` is used to query the state of the blockchain. -``` +```shell whois.sh "bob.cosmos" ``` It should return: -``` - "name": { "name": "bob.cosmos", "owner": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w", "resolve_address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht", "amount": [ { "denom": "uatom", "amount": "1000" } ] }} +```json expandable +{ + "name": { + "name": "bob.cosmos", + "owner": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w", + "resolve_address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht", + "amount": [ + { + "denom": "uatom", + "amount": "1000" + } + ] + } +} ``` To detect front-running attempts by the beacon, scrutinise the logs during the `ProcessProposal` stage. Open the logs for each validator, including the beacon, `val1`, and `val2`, to observe the following behavior. Open the log file of the validator node. The location of this file can vary depending on your setup, but it's typically located in a directory like `$HOME/cosmos/nodes/#{validator}/logs`. The directory in this case will be under the validator, so `beacon`, `val1`, or `val2`.
Run the following to tail the logs of the validator or beacon: -``` +```shell tail -f $HOME/cosmos/nodes/#{validator}/logs ``` -``` -2:47PM ERR ❌️:: Detected invalid proposal bid :: name:"bob.cosmos" resolveAddress:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" owner:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" amount: module=server2:47PM ERR ❌️:: Unable to validate bids in Process Proposal :: module=server2:47PM ERR prevote step: state machine rejected a proposed block; this should not happen:the proposer may be misbehaving; prevoting nil err=null height=142 module=consensus round=0 +```shell +2:47PM ERR :: Detected invalid proposal bid :: name:"bob.cosmos" resolveAddress:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" owner:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" amount: module=server +2:47PM ERR :: Unable to validate bids in Process Proposal :: module=server +2:47PM ERR prevote step: state machine rejected a proposed block; this should not happen:the proposer may be misbehaving; prevoting nil err=null height=142 module=consensus round=0 ``` -4. List the Beacon's keys: This is to verify the addresses of the validators. The script `./scripts/list-beacon-keys.sh` is used to list the keys. +{/* /nolint:all */} +4\. List the Beacon's keys: This is to verify the addresses of the validators. The script `./scripts/list-beacon-keys.sh` is used to list the keys. 
-``` +```shell list-beacon-keys.sh ``` We should receive something similar to the following: -``` -[ { "name": "alice", "type": "local", "address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht", "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A32cvBUkNJz+h2vld4A5BxvU5Rd+HyqpR3aGtvEhlm4C\"}" }, { "name": "barbara", "type": "local", "address": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w", "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"Ag9PFsNyTQPoJdbyCWia5rZH9CrvSrjMsk7Oz4L3rXQ5\"}" }, { "name": "beacon-key", "type": "local", "address": "cosmos1ez9a6x7lz4gvn27zr368muw8jeyas7sv84lfup", "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"AlzJZMWyN7lass710TnAhyuFKAFIaANJyw5ad5P2kpcH\"}" }, { "name": "cindy", "type": "local", "address": "cosmos1m5j6za9w4qc2c5ljzxmm2v7a63mhjeag34pa3g", "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A6F1/3yot5OpyXoSkBbkyl+3rqBkxzRVSJfvSpm/AvW5\"}" }] +```shell expandable +[ + { + "name": "alice", + "type": "local", + "address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht", + "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A32cvBUkNJz+h2vld4A5BxvU5Rd+HyqpR3aGtvEhlm4C\"}" + }, + { + "name": "barbara", + "type": "local", + "address": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w", + "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"Ag9PFsNyTQPoJdbyCWia5rZH9CrvSrjMsk7Oz4L3rXQ5\"}" + }, + { + "name": "beacon-key", + "type": "local", + "address": "cosmos1ez9a6x7lz4gvn27zr368muw8jeyas7sv84lfup", + "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"AlzJZMWyN7lass710TnAhyuFKAFIaANJyw5ad5P2kpcH\"}" + }, + { + "name": "cindy", + "type": "local", + "address": "cosmos1m5j6za9w4qc2c5ljzxmm2v7a63mhjeag34pa3g", + "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A6F1/3yot5OpyXoSkBbkyl+3rqBkxzRVSJfvSpm/AvW5\"}" + } +] ``` This allows us to match up the addresses and see that the bid was not front 
run by the beacon, as the resolve address is Alice's address and not the beacon's address. By running this demo, we can verify that the `VoteExtensionHandler` and `PrepareProposalHandler` are working as expected and that they are able to prevent front-running. -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion In this tutorial, we've tackled front-running and MEV, focusing on nameservice auctions' vulnerability to these issues. We've explored vote extensions, a key feature of ABCI 2.0, and integrated them into a Cosmos SDK application. diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx index 8ba26a67..c6e5abfb 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx @@ -1,18 +1,20 @@ --- -title: "Getting Started" -description: "Version: v0.50" +title: Getting Started +description: >- + - Getting Started - Understanding Front-Running - Mitigating Front-running + with Vote Extensions - Demo of Mitigating Front-Running --- -## Table of Contents[​](#table-of-contents "Direct link to Table of Contents") +## Table of Contents * [Getting Started](#overview-of-the-project) -* [Understanding Front-Running](/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) -* [Mitigating Front-running with Vote Extensions](/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions) -* [Demo of Mitigating Front-Running](/v0.50/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running) +* [Understanding Front-Running](/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) +* [Mitigating Front-running with Vote
Extensions](/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions) +* [Demo of Mitigating Front-Running](/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running) -## Getting Started[​](#getting-started-1 "Direct link to Getting Started") +## Getting Started -### Overview of the Project[​](#overview-of-the-project "Direct link to Overview of the Project") +### Overview of the Project This tutorial outlines the development of a module designed to mitigate front-running in nameservice auctions. The following functions are central to this module: @@ -20,22 +22,22 @@ This tutorial outlines the development of a module designed to mitigate front-ru * `PrepareProposal`: Processes the vote extensions from the previous block, creating a special transaction that encapsulates bids to be included in the current proposal. * `ProcessProposal`: Validates that the first transaction in the proposal is the special transaction containing the vote extensions and ensures the integrity of the bids. -In this advanced tutorial, we will be working with an example application that facilitates the auctioning of nameservices. To see what frontrunning and nameservices are [here](/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) This application provides a practical use case to explore the prevention of auction front-running, also known as "bid sniping", where a validator takes advantage of seeing a bid in the mempool to place their own higher bid before the original bid is processed. +In this advanced tutorial, we will be working with an example application that facilitates the auctioning of nameservices. 
To learn what front-running and nameservices are, see [here](/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning). This application provides a practical use case to explore the prevention of auction front-running, also known as "bid sniping", where a validator takes advantage of seeing a bid in the mempool to place their own higher bid before the original bid is processed. The tutorial will guide you through using the Cosmos SDK to mitigate front-running using vote extensions. The module will be built on top of the base blockchain provided in the `tutorials/base` directory and will use the `auction` module as a foundation. By the end of this tutorial, you will have a better understanding of how to prevent front-running in blockchain auctions, specifically in the context of nameservice auctioning. -## What are Vote extensions?[​](#what-are-vote-extensions "Direct link to What are Vote extensions?") +## What are Vote extensions? Vote extensions are arbitrary information which can be inserted into a block. This feature is part of ABCI 2.0, which is available for use in the SDK 0.50 release and part of the 0.38 CometBFT release. More information about vote extensions can be seen [here](https://docs.cosmos.network/main/build/abci/vote-extensions).
-## Requirements and Setup[​](#requirements-and-setup "Direct link to Requirements and Setup") +## Requirements and Setup Before diving into the advanced tutorial on auction front-running simulation, ensure you meet the following requirements: * [Golang >1.21.5](https://golang.org/doc/install) installed -* Familiarity with the concepts of front-running and MEV, as detailed in [Understanding Front-Running](/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) +* Familiarity with the concepts of front-running and MEV, as detailed in [Understanding Front-Running](/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) * Understanding of Vote Extensions as described [here](https://docs.cosmos.network/main/build/abci/vote-extensions) You will also need a foundational blockchain to build upon coupled with your own module. The `tutorials/base` directory has the necessary blockchain code to start your custom project with the Cosmos SDK. For the module, you can use the `auction` module provided in the `tutorials/auction/x/auction` directory as a reference but please be aware that all of the code needed to implement vote extensions is already implemented in this module. 
diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx index 9d0f910b..406d510c 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx @@ -1,61 +1,100 @@ --- -title: "Mitigating Front-running with Vote Extensions" -description: "Version: v0.50" +title: Mitigating Front-running with Vote Extensions +description: >-
+  Mitigate front-running in nameservice auctions using ABCI 2.0 vote extensions.
--- -## Table of Contents[​](#table-of-contents "Direct link to Table of Contents") +## Table of Contents * [Prerequisites](#prerequisites) * [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions) * [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers) -## Prerequisites[​](#prerequisites "Direct link to Prerequisites") +## Prerequisites Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance. In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions.
-### Implementing Structs for Vote Extensions[​](#implementing-structs-for-vote-extensions "Direct link to Implementing Structs for Vote Extensions") +### Implementing Structs for Vote Extensions First, copy the following structs into `abci/types.go`. Each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions: -``` -package abciimport ( //import the necessary files)type PrepareProposalHandler struct { logger log.Logger txConfig client.TxConfig cdc codec.Codec mempool *mempool.ThresholdMempool txProvider provider.TxProvider keyname string runProvider bool} +```go expandable
+package abci
+
+import (
+	// import the necessary files
+)
+
+type PrepareProposalHandler struct {
+	logger      log.Logger
+	txConfig    client.TxConfig
+	cdc         codec.Codec
+	mempool     *mempool.ThresholdMempool
+	txProvider  provider.TxProvider
+	keyname     string
+	runProvider bool
+}
``` The `PrepareProposalHandler` struct is used to handle the preparation of a proposal in the consensus process. It contains several fields: `logger` for logging information and errors, `txConfig` for transaction configuration, `cdc` (Codec) for encoding and decoding transactions, `mempool` for referencing the set of unconfirmed transactions, `txProvider` for building the proposal with transactions, `keyname` for the name of the key used for signing transactions, and `runProvider`, a boolean flag indicating whether the provider should be run to build the proposal. -``` -type ProcessProposalHandler struct { TxConfig client.TxConfig Codec codec.Codec Logger log.Logger} +```go
+type ProcessProposalHandler struct {
+	TxConfig client.TxConfig
+	Codec    codec.Codec
+	Logger   log.Logger
+}
``` After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions.
The `ProcessProposalHandler` allows you to access the transaction configuration and codec, which are necessary for processing the vote extensions. -``` -type VoteExtHandler struct { logger log.Logger currentBlock int64 mempool *mempool.ThresholdMempool cdc codec.Codec} +```go +type VoteExtHandler struct { + logger log.Logger + currentBlock int64 + mempool *mempool.ThresholdMempool + cdc codec.Codec +} ``` This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding. Vote extensions are a key part of the process to mitigate front-running, as they allow for additional information to be included with each vote. -``` -type InjectedVoteExt struct { VoteExtSigner []byte Bids [][]byte}type InjectedVotes struct { Votes []InjectedVoteExt} +```go +type InjectedVoteExt struct { + VoteExtSigner []byte + Bids [][]byte +} + +type InjectedVotes struct { + Votes []InjectedVoteExt +} ``` These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with the vote extension. Each byte array in Bids is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format. -``` -type AppVoteExtension struct { Height int64 Bids [][]byte} +```go +type AppVoteExtension struct { + Height int64 + Bids [][]byte +} ``` This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. 
Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data to a vote that is specific to the application. -``` -type SpecialTransaction struct { Height int Bids [][]byte} +```go +type SpecialTransaction struct { + Height int + Bids [][]byte +} ``` This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running. -### Implementing Handlers and Configuring Handlers[​](#implementing-handlers-and-configuring-handlers "Direct link to Implementing Handlers and Configuring Handlers") +### Implementing Handlers and Configuring Handlers To establish the `VoteExtensionHandler`, follow these steps: @@ -63,54 +102,277 @@ To establish the `VoteExtensionHandler`, follow these steps: 2. Implement the `NewVoteExtensionHandler` function. This function is a constructor for the `VoteExtHandler` struct. It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`. -``` -func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler { return &VoteExtHandler{ logger: lg, mempool: mp, cdc: cdc, } } +```go +func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler { + return &VoteExtHandler{ + logger: lg, + mempool: mp, + cdc: cdc, +} +} ``` 3. Implement the `ExtendVoteHandler()` method. This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in the abci.`RequestPrepareProposal` during the ensuing block. 
-``` -func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler { return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height)) voteExtBids := [][]byte{} // Get mempool txs itr := h.mempool.SelectPending(context.Background(), nil) for itr != nil { tmptx := itr.Tx() sdkMsgs := tmptx.GetMsgs() // Iterate through msgs, check for any bids for _, msg := range sdkMsgs { switch msg := msg.(type) { case *nstypes.MsgBid: // Marshal sdk bids to []byte bz, err := h.cdc.Marshal(msg) if err != nil { h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err)) break } voteExtBids = append(voteExtBids, bz) default: } } // Move tx to ready pool err := h.mempool.Update(context.Background(), tmptx) // Remove tx from app side mempool if err != nil { h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err)) } itr = itr.Next() } // Create vote extension voteExt := AppVoteExtension{ Height: req.Height, Bids: voteExtBids, } // Encode Vote Extension bz, err := json.Marshal(voteExt) if err != nil { return nil, fmt.Errorf("Error marshalling VE: %w", err) } return &abci.ResponseExtendVote{VoteExtension: bz}, nil} +```go expandable
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+	return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+		h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))
+
+		voteExtBids := [][]byte{}
+
+		// Get mempool txs
+		itr := h.mempool.SelectPending(context.Background(), nil)
+		for itr != nil {
+			tmptx := itr.Tx()
+			sdkMsgs := tmptx.GetMsgs()
+
+			// Iterate through msgs, check for any bids
+			for _, msg := range sdkMsgs {
+				switch msg := msg.(type) {
+				case *nstypes.MsgBid:
+					// Marshal sdk bids to []byte
+					bz, err := h.cdc.Marshal(msg)
+					if err != nil {
+						h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
+						break
+					}
+					voteExtBids = append(voteExtBids, bz)
+				default:
+				}
+			}
+
+			// Move tx to ready pool
+			err := h.mempool.Update(context.Background(), tmptx)
+			// Remove tx from app side mempool
+			if err != nil {
+				h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
+			}
+			itr = itr.Next()
+		}
+
+		// Create vote extension
+		voteExt := AppVoteExtension{
+			Height: req.Height,
+			Bids:   voteExtBids,
+		}
+
+		// Encode Vote Extension
+		bz, err := json.Marshal(voteExt)
+		if err != nil {
+			return nil, fmt.Errorf("Error marshalling VE: %w", err)
+		}
+
+		return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+	}
+}
``` 4. Configure the handler in `app/app.go` as shown below: -``` -bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler()) +```go
+bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
+bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
``` To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.
To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`: -``` -if req.Height > 2 { voteExt := req.GetLocalLastCommit() h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt)) } +```go
+if req.Height > 2 {
+	voteExt := req.GetLocalLastCommit()
+	h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt))
+}
``` This is how the whole function should look: -``` -func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler { return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { h.logger.Info(fmt.Sprintf("🛠️ :: Prepare Proposal")) var proposalTxs [][]byte var txs []sdk.Tx // Get Vote Extensions if req.Height > 2 { voteExt := req.GetLocalLastCommit() h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt)) } itr := h.mempool.Select(context.Background(), nil) for itr != nil { tmptx := itr.Tx() txs = append(txs, tmptx) itr = itr.Next() } h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions available from mempool: %v", len(txs))) if h.runProvider { tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs) if err != nil { h.logger.Error(fmt.Sprintf("❌️ :: Error Building Custom Proposal: %v", err)) } txs = tmpMsgs } for _, sdkTxs := range txs { txBytes, err := h.txConfig.TxEncoder()(sdkTxs) if err != nil { h.logger.Info(fmt.Sprintf("❌~Error encoding transaction: %v", err.Error())) } proposalTxs = append(proposalTxs, txBytes) } h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions in proposal: %v", len(proposalTxs))) return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil }} +```go expandable
+func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler {
+	return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+		h.logger.Info(fmt.Sprintf("🛠️ :: Prepare Proposal"))
+
+		var proposalTxs [][]byte
+		var txs []sdk.Tx
+
+		// Get Vote Extensions
+		if req.Height > 2 {
+			voteExt := req.GetLocalLastCommit()
+			h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt))
+		}
+
+		itr := h.mempool.Select(context.Background(), nil)
+		for itr != nil {
+			tmptx := itr.Tx()
+			txs = append(txs, tmptx)
+			itr = itr.Next()
+		}
+		h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions available from mempool: %v", len(txs)))
+
+		if h.runProvider {
+			tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs)
+			if err != nil {
+				h.logger.Error(fmt.Sprintf("❌️ :: Error Building Custom Proposal: %v", err))
+			}
+			txs = tmpMsgs
+		}
+
+		for _, sdkTxs := range txs {
+			txBytes, err := h.txConfig.TxEncoder()(sdkTxs)
+			if err != nil {
+				h.logger.Info(fmt.Sprintf("❌~Error encoding transaction: %v", err.Error()))
+			}
+			proposalTxs = append(proposalTxs, txBytes)
+		}
+		h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions in proposal: %v", len(proposalTxs)))
+
+		return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil
+	}
+}
``` -As mentioned above, we check if vote extensions have been propagated, you can do this by checking the logs for any relevant messages such as `🛠️ :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again. +As mentioned above, we check whether vote extensions have been propagated. You can do this by checking the logs for any relevant messages, such as `🛠️ :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again. 5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.
-``` -func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler { return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { h.Logger.Info(fmt.Sprintf("⚙️ :: Process Proposal")) // The first transaction will always be the Special Transaction numTxs := len(req.Txs) h.Logger.Info(fmt.Sprintf("⚙️:: Number of transactions :: %v", numTxs)) if numTxs >= 1 { var st SpecialTransaction err = json.Unmarshal(req.Txs[0], &st) if err != nil { h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling special Tx in Process Proposal :: %v", err)) } if len(st.Bids) > 0 { h.Logger.Info(fmt.Sprintf("⚙️:: There are bids in the Special Transaction")) var bids []nstypes.MsgBid for i, b := range st.Bids { var bid nstypes.MsgBid h.Codec.Unmarshal(b, &bid) h.Logger.Info(fmt.Sprintf("⚙️:: Special Transaction Bid No %v :: %v", i, bid)) bids = append(bids, bid) } // Validate Bids in Tx txs := req.Txs[1:] ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger) if err != nil { h.Logger.Error(fmt.Sprintf("❌️:: Error validating bids in Process Proposal :: %v", err)) return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } if !ok { h.Logger.Error(fmt.Sprintf("❌️:: Unable to validate bids in Process Proposal :: %v", err)) return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } h.Logger.Info("⚙️:: Successfully validated bids in Process Proposal") } } return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil }} +```go expandable
+func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+	return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) {
+		h.Logger.Info(fmt.Sprintf("⚙️ :: Process Proposal"))
+
+		// The first transaction will always be the Special Transaction
+		numTxs := len(req.Txs)
+		h.Logger.Info(fmt.Sprintf("⚙️:: Number of transactions :: %v", numTxs))
+
+		if numTxs >= 1 {
+			var st SpecialTransaction
+			err = json.Unmarshal(req.Txs[0], &st)
+			if err != nil {
+				h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling special Tx in Process Proposal :: %v", err))
+			}
+			if len(st.Bids) > 0 {
+				h.Logger.Info(fmt.Sprintf("⚙️:: There are bids in the Special Transaction"))
+				var bids []nstypes.MsgBid
+				for i, b := range st.Bids {
+					var bid nstypes.MsgBid
+					h.Codec.Unmarshal(b, &bid)
+					h.Logger.Info(fmt.Sprintf("⚙️:: Special Transaction Bid No %v :: %v", i, bid))
+					bids = append(bids, bid)
+				}
+
+				// Validate Bids in Tx
+				txs := req.Txs[1:]
+				ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger)
+				if err != nil {
+					h.Logger.Error(fmt.Sprintf("❌️:: Error validating bids in Process Proposal :: %v", err))
+					return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+				}
+				if !ok {
+					h.Logger.Error(fmt.Sprintf("❌️:: Unable to validate bids in Process Proposal :: %v", err))
+					return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+				}
+				h.Logger.Info("⚙️:: Successfully validated bids in Process Proposal")
+			}
+		}
+
+		return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+	}
+}
``` 6. Implement the `processVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids.
-``` -func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) { log.Info(fmt.Sprintf("🛠️ :: Process Vote Extensions")) // Create empty response st := SpecialTransaction{ 0, [][]byte{}, } // Get Vote Ext for H-1 from Req voteExt := req.GetLocalLastCommit() votes := voteExt.Votes // Iterate through votes var ve AppVoteExtension for _, vote := range votes { // Unmarshal to AppExt err := json.Unmarshal(vote.VoteExtension, &ve) if err != nil { log.Error(fmt.Sprintf("❌ :: Error unmarshalling Vote Extension")) } st.Height = int(ve.Height) // If Bids in VE, append to Special Transaction if len(ve.Bids) > 0 { log.Info("🛠️ :: Bids in VE") for _, b := range ve.Bids { st.Bids = append(st.Bids, b) } } } return st, nil} +```go expandable
+func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) {
+	log.Info(fmt.Sprintf("🛠️ :: Process Vote Extensions"))
+
+	// Create empty response
+	st := SpecialTransaction{
+		0,
+		[][]byte{},
+	}
+
+	// Get Vote Ext for H-1 from Req
+	voteExt := req.GetLocalLastCommit()
+	votes := voteExt.Votes
+
+	// Iterate through votes
+	var ve AppVoteExtension
+	for _, vote := range votes {
+		// Unmarshal to AppExt
+		err := json.Unmarshal(vote.VoteExtension, &ve)
+		if err != nil {
+			log.Error(fmt.Sprintf("❌ :: Error unmarshalling Vote Extension"))
+		}
+		st.Height = int(ve.Height)
+
+		// If Bids in VE, append to Special Transaction
+		if len(ve.Bids) > 0 {
+			log.Info("🛠️ :: Bids in VE")
+			for _, b := range ve.Bids {
+				st.Bids = append(st.Bids, b)
+			}
+		}
+	}
+	return st, nil
+}
``` 7.
Configure the `ProcessProposalHandler()` in `app/app.go`: -``` -processPropHandler := abci2.ProcessProposalHandler{app.txConfig, appCodec, logger}bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler()) +```go
+processPropHandler := abci2.ProcessProposalHandler{app.txConfig, appCodec, logger}
+bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler())
``` This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called. diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions.mdx index 9d0f910b..6452ee08 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions.mdx @@ -1,61 +1,100 @@ --- -title: "Mitigating Front-running with Vote Extensions" -description: "Version: v0.50" +title: Mitigating Front-running with Vote Extensions +description: >-
+  Mitigate front-running in nameservice auctions using ABCI 2.0 vote extensions.
--- -## Table of Contents[​](#table-of-contents "Direct link to Table of Contents") +## Table of Contents * [Prerequisites](#prerequisites) * [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions) * [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers) -## Prerequisites[​](#prerequisites "Direct link to Prerequisites") +## Prerequisites Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance.
In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions. -### Implementing Structs for Vote Extensions[​](#implementing-structs-for-vote-extensions "Direct link to Implementing Structs for Vote Extensions") +### Implementing Structs for Vote Extensions First, copy the following structs into `abci/types.go`. Each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions: -``` -package abciimport ( //import the necessary files)type PrepareProposalHandler struct { logger log.Logger txConfig client.TxConfig cdc codec.Codec mempool *mempool.ThresholdMempool txProvider provider.TxProvider keyname string runProvider bool} +```go expandable
+package abci
+
+import (
+	// import the necessary files
+)
+
+type PrepareProposalHandler struct {
+	logger      log.Logger
+	txConfig    client.TxConfig
+	cdc         codec.Codec
+	mempool     *mempool.ThresholdMempool
+	txProvider  provider.TxProvider
+	keyname     string
+	runProvider bool
+}
``` The `PrepareProposalHandler` struct is used to handle the preparation of a proposal in the consensus process. It contains several fields: `logger` for logging information and errors, `txConfig` for transaction configuration, `cdc` (Codec) for encoding and decoding transactions, `mempool` for referencing the set of unconfirmed transactions, `txProvider` for building the proposal with transactions, `keyname` for the name of the key used for signing transactions, and `runProvider`, a boolean flag indicating whether the provider should be run to build the proposal.
-``` -type ProcessProposalHandler struct { TxConfig client.TxConfig Codec codec.Codec Logger log.Logger} +```go +type ProcessProposalHandler struct { + TxConfig client.TxConfig + Codec codec.Codec + Logger log.Logger +} ``` After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions. The `ProcessProposalHandler` allows you to access the transaction configuration and codec, which are necessary for processing the vote extensions. -``` -type VoteExtHandler struct { logger log.Logger currentBlock int64 mempool *mempool.ThresholdMempool cdc codec.Codec} +```go +type VoteExtHandler struct { + logger log.Logger + currentBlock int64 + mempool *mempool.ThresholdMempool + cdc codec.Codec +} ``` This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding. Vote extensions are a key part of the process to mitigate front-running, as they allow for additional information to be included with each vote. -``` -type InjectedVoteExt struct { VoteExtSigner []byte Bids [][]byte}type InjectedVotes struct { Votes []InjectedVoteExt} +```go +type InjectedVoteExt struct { + VoteExtSigner []byte + Bids [][]byte +} + +type InjectedVotes struct { + Votes []InjectedVoteExt +} ``` These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with the vote extension. Each byte array in Bids is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format. 
-``` -type AppVoteExtension struct { Height int64 Bids [][]byte} +```go +type AppVoteExtension struct { + Height int64 + Bids [][]byte +} ``` This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data to a vote that is specific to the application. -``` -type SpecialTransaction struct { Height int Bids [][]byte} +```go +type SpecialTransaction struct { + Height int + Bids [][]byte +} ``` This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running. -### Implementing Handlers and Configuring Handlers[​](#implementing-handlers-and-configuring-handlers "Direct link to Implementing Handlers and Configuring Handlers") +### Implementing Handlers and Configuring Handlers To establish the `VoteExtensionHandler`, follow these steps: @@ -63,54 +102,277 @@ To establish the `VoteExtensionHandler`, follow these steps: 2. Implement the `NewVoteExtensionHandler` function. This function is a constructor for the `VoteExtHandler` struct. It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`. -``` -func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler { return &VoteExtHandler{ logger: lg, mempool: mp, cdc: cdc, } } +```go +func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler { + return &VoteExtHandler{ + logger: lg, + mempool: mp, + cdc: cdc, +} +} ``` 3. Implement the `ExtendVoteHandler()` method. 
This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in the `abci.RequestPrepareProposal` during the ensuing block. -``` -func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler { return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height)) voteExtBids := [][]byte{} // Get mempool txs itr := h.mempool.SelectPending(context.Background(), nil) for itr != nil { tmptx := itr.Tx() sdkMsgs := tmptx.GetMsgs() // Iterate through msgs, check for any bids for _, msg := range sdkMsgs { switch msg := msg.(type) { case *nstypes.MsgBid: // Marshal sdk bids to []byte bz, err := h.cdc.Marshal(msg) if err != nil { h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err)) break } voteExtBids = append(voteExtBids, bz) default: } } // Move tx to ready pool err := h.mempool.Update(context.Background(), tmptx) // Remove tx from app side mempool if err != nil { h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err)) } itr = itr.Next() } // Create vote extension voteExt := AppVoteExtension{ Height: req.Height, Bids: voteExtBids, } // Encode Vote Extension bz, err := json.Marshal(voteExt) if err != nil { return nil, fmt.Errorf("Error marshalling VE: %w", err) } return &abci.ResponseExtendVote{VoteExtension: bz}, nil} +```go expandable
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+	return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+		h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))
+
+		voteExtBids := [][]byte{}
+
+		// Get mempool txs
+		itr := h.mempool.SelectPending(context.Background(), nil)
+		for itr != nil {
+			tmptx := itr.Tx()
+			sdkMsgs := tmptx.GetMsgs()
+
+			// Iterate through msgs, check for any bids
+			for _, msg := range sdkMsgs {
+				switch msg := msg.(type) {
+				case *nstypes.MsgBid:
+					// Marshal sdk bids to []byte
+					bz, err := h.cdc.Marshal(msg)
+					if err != nil {
+						h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
+						break
+					}
+					voteExtBids = append(voteExtBids, bz)
+				default:
+				}
+			}
+
+			// Move tx to ready pool
+			err := h.mempool.Update(context.Background(), tmptx)
+			// Remove tx from app side mempool
+			if err != nil {
+				h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
+			}
+			itr = itr.Next()
+		}
+
+		// Create vote extension
+		voteExt := AppVoteExtension{
+			Height: req.Height,
+			Bids:   voteExtBids,
+		}
+
+		// Encode Vote Extension
+		bz, err := json.Marshal(voteExt)
+		if err != nil {
+			return nil, fmt.Errorf("Error marshalling VE: %w", err)
+		}
+
+		return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+	}
+}
``` 4. Configure the handler in `app/app.go` as shown below: -``` -bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler()) +```go
+bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
+bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
``` To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.
To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`: -``` -if req.Height > 2 { voteExt := req.GetLocalLastCommit() h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt)) } +```go +if req.Height > 2 { + voteExt := req.GetLocalLastCommit() + +h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt)) +} ``` This is how the whole function should look: -``` -func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler { return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { h.logger.Info(fmt.Sprintf("🛠️ :: Prepare Proposal")) var proposalTxs [][]byte var txs []sdk.Tx // Get Vote Extensions if req.Height > 2 { voteExt := req.GetLocalLastCommit() h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt)) } itr := h.mempool.Select(context.Background(), nil) for itr != nil { tmptx := itr.Tx() txs = append(txs, tmptx) itr = itr.Next() } h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions available from mempool: %v", len(txs))) if h.runProvider { tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs) if err != nil { h.logger.Error(fmt.Sprintf("❌️ :: Error Building Custom Proposal: %v", err)) } txs = tmpMsgs } for _, sdkTxs := range txs { txBytes, err := h.txConfig.TxEncoder()(sdkTxs) if err != nil { h.logger.Info(fmt.Sprintf("❌~Error encoding transaction: %v", err.Error())) } proposalTxs = append(proposalTxs, txBytes) } h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions in proposal: %v", len(proposalTxs))) return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil }} +```go expandable +func (h *PrepareProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + h.logger.Info(fmt.Sprintf(" :: Prepare Proposal")) + +var proposalTxs [][]byte + + var txs []sdk.Tx + + / Get Vote Extensions + if 
req.Height > 2 { + voteExt := req.GetLocalLastCommit() + +h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt)) +} + itr := h.mempool.Select(context.Background(), nil) + for itr != nil { + tmptx := itr.Tx() + +txs = append(txs, tmptx) + +itr = itr.Next() +} + +h.logger.Info(fmt.Sprintf(" :: Number of Transactions available from mempool: %v", len(txs))) + if h.runProvider { + tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs) + if err != nil { + h.logger.Error(fmt.Sprintf(" :: Error Building Custom Proposal: %v", err)) +} + +txs = tmpMsgs +} + for _, sdkTxs := range txs { + txBytes, err := h.txConfig.TxEncoder()(sdkTxs) + if err != nil { + h.logger.Info(fmt.Sprintf("~Error encoding transaction: %v", err.Error())) +} + +proposalTxs = append(proposalTxs, txBytes) +} + +h.logger.Info(fmt.Sprintf(" :: Number of Transactions in proposal: %v", len(proposalTxs))) + +return &abci.ResponsePrepareProposal{ + Txs: proposalTxs +}, nil +} +} ``` -As mentioned above, we check if vote extensions have been propagated, you can do this by checking the logs for any relevant messages such as `🛠️ :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again. +As mentioned above, we check whether vote extensions have been propagated; you can do this by checking the logs for any relevant messages such as ` :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again. 5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.
-``` -func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler { return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { h.Logger.Info(fmt.Sprintf("⚙️ :: Process Proposal")) // The first transaction will always be the Special Transaction numTxs := len(req.Txs) h.Logger.Info(fmt.Sprintf("⚙️:: Number of transactions :: %v", numTxs)) if numTxs >= 1 { var st SpecialTransaction err = json.Unmarshal(req.Txs[0], &st) if err != nil { h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling special Tx in Process Proposal :: %v", err)) } if len(st.Bids) > 0 { h.Logger.Info(fmt.Sprintf("⚙️:: There are bids in the Special Transaction")) var bids []nstypes.MsgBid for i, b := range st.Bids { var bid nstypes.MsgBid h.Codec.Unmarshal(b, &bid) h.Logger.Info(fmt.Sprintf("⚙️:: Special Transaction Bid No %v :: %v", i, bid)) bids = append(bids, bid) } // Validate Bids in Tx txs := req.Txs[1:] ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger) if err != nil { h.Logger.Error(fmt.Sprintf("❌️:: Error validating bids in Process Proposal :: %v", err)) return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } if !ok { h.Logger.Error(fmt.Sprintf("❌️:: Unable to validate bids in Process Proposal :: %v", err)) return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } h.Logger.Info("⚙️:: Successfully validated bids in Process Proposal") } } return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil }} +```go expandable +func (h *ProcessProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { + h.Logger.Info(fmt.Sprintf(" :: Process Proposal")) + + / The first transaction will always be the Special Transaction + numTxs := len(req.Txs) + +h.Logger.Info(fmt.Sprintf(":: Number of transactions 
:: %v", numTxs)) + if numTxs >= 1 { + var st SpecialTransaction + err = json.Unmarshal(req.Txs[0], &st) + if err != nil { + h.Logger.Error(fmt.Sprintf(":: Error unmarshalling special Tx in Process Proposal :: %v", err)) +} + if len(st.Bids) > 0 { + h.Logger.Info(fmt.Sprintf(":: There are bids in the Special Transaction")) + +var bids []nstypes.MsgBid + for i, b := range st.Bids { + var bid nstypes.MsgBid + h.Codec.Unmarshal(b, &bid) + +h.Logger.Info(fmt.Sprintf(":: Special Transaction Bid No %v :: %v", i, bid)) + +bids = append(bids, bid) +} + / Validate Bids in Tx + txs := req.Txs[1:] + ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger) + if err != nil { + h.Logger.Error(fmt.Sprintf(":: Error validating bids in Process Proposal :: %v", err)) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if !ok { + h.Logger.Error(fmt.Sprintf(":: Unable to validate bids in Process Proposal :: %v", err)) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +h.Logger.Info(":: Successfully validated bids in Process Proposal") +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} ``` 6. Implement the `ProcessVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids. 
-``` -func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) { log.Info(fmt.Sprintf("🛠️ :: Process Vote Extensions")) // Create empty response st := SpecialTransaction{ 0, [][]byte{}, } // Get Vote Ext for H-1 from Req voteExt := req.GetLocalLastCommit() votes := voteExt.Votes // Iterate through votes var ve AppVoteExtension for _, vote := range votes { // Unmarshal to AppExt err := json.Unmarshal(vote.VoteExtension, &ve) if err != nil { log.Error(fmt.Sprintf("❌ :: Error unmarshalling Vote Extension")) } st.Height = int(ve.Height) // If Bids in VE, append to Special Transaction if len(ve.Bids) > 0 { log.Info("🛠️ :: Bids in VE") for _, b := range ve.Bids { st.Bids = append(st.Bids, b) } } } return st, nil} +```go expandable +func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) { + log.Info(fmt.Sprintf(" :: Process Vote Extensions")) + + / Create empty response + st := SpecialTransaction{ + 0, + [][]byte{ +}, +} + + / Get Vote Ext for H-1 from Req + voteExt := req.GetLocalLastCommit() + votes := voteExt.Votes + + / Iterate through votes + var ve AppVoteExtension + for _, vote := range votes { + / Unmarshal to AppExt + err := json.Unmarshal(vote.VoteExtension, &ve) + if err != nil { + log.Error(fmt.Sprintf(" :: Error unmarshalling Vote Extension")) +} + +st.Height = int(ve.Height) + + / If Bids in VE, append to Special Transaction + if len(ve.Bids) > 0 { + log.Info(" :: Bids in VE") + for _, b := range ve.Bids { + st.Bids = append(st.Bids, b) +} + +} + +} + +return st, nil +} ``` 7. 
Configure the `ProcessProposalHandler()` in app/app.go: -``` -processPropHandler := abci2.ProcessProposalHandler{app.txConfig, appCodec, logger}bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler()) +```go +processPropHandler := abci2.ProcessProposalHandler{ + app.txConfig, appCodec, logger +} + +bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler()) ``` This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called. diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx index c4f53e42..252d4184 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx @@ -1,27 +1,31 @@ --- -title: "Understanding Front-Running and more" -description: "Version: v0.50" +title: Understanding Front-Running and more +description: >- + Blockchain technology is vulnerable to practices that can affect the fairness + and security of the network. Two such practices are front-running and Maximal + Extractable Value (MEV), which are important for blockchain participants to + understand. --- -## Introduction[​](#introduction "Direct link to Introduction") +## Introduction Blockchain technology is vulnerable to practices that can affect the fairness and security of the network. Two such practices are front-running and Maximal Extractable Value (MEV), which are important for blockchain participants to understand. -## What is Front-Running?[​](#what-is-front-running "Direct link to What is Front-Running?") +## What is Front-Running? 
Front-running is when someone, such as a validator, uses their ability to see pending transactions to execute their own transactions first, benefiting from the knowledge of upcoming transactions. In nameservice auctions, a front-runner might place a higher bid before the original bid is confirmed, unfairly winning the auction. -## Nameservices and Nameservice Auctions[​](#nameservices-and-nameservice-auctions "Direct link to Nameservices and Nameservice Auctions") +## Nameservices and Nameservice Auctions Nameservices are human-readable identifiers on a blockchain, akin to internet domain names, that correspond to specific addresses or resources. They simplify interactions with typically long and complex blockchain addresses, allowing users to have a memorable and unique identifier for their blockchain address or smart contract. Nameservice auctions are the process by which these identifiers are bid on and acquired. To combat front-running—where someone might use knowledge of pending bids to place a higher bid first—mechanisms such as commit-reveal schemes, auction extensions, and fair sequencing are implemented. These strategies ensure a transparent and fair bidding process, reducing the potential for Maximal Extractable Value (MEV) exploitation. -## What is Maximal Extractable Value (MEV)?[​](#what-is-maximal-extractable-value-mev "Direct link to What is Maximal Extractable Value (MEV)?") +## What is Maximal Extractable Value (MEV)? MEV is the highest value that can be extracted by manipulating the order of transactions within a block, beyond the standard block rewards and fees. This has become more prominent with the growth of decentralised finance (DeFi), where transaction order can greatly affect profits. 
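The commit-reveal schemes mentioned above are easy to illustrate: a bidder first publishes only a hash of the bid plus a secret salt, then reveals both once bidding closes, so a front-runner watching pending transactions learns nothing about the amount. A minimal stdlib-only sketch, with hypothetical helper names that are not part of any tutorial module:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// commit hashes the bid amount together with a secret salt, so observers
// of the commitment cannot learn the amount before the reveal phase.
func commit(bid uint64, salt []byte) [32]byte {
	h := sha256.New()
	fmt.Fprintf(h, "%d", bid)
	h.Write(salt)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// reveal checks that a disclosed (bid, salt) pair matches the earlier commitment.
func reveal(c [32]byte, bid uint64, salt []byte) bool {
	return commit(bid, salt) == c
}

func main() {
	salt := []byte("random-salt")
	c := commit(100, salt)
	fmt.Println(reveal(c, 100, salt)) // true: honest reveal
	fmt.Println(reveal(c, 200, salt)) // false: a front-runner cannot forge it
}
```

The front-runner sees only the commitment hash on-chain during the bidding window, so there is no bid amount to outbid until the reveal, at which point the auction is already closed to new commitments.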
-## Implications of MEV[​](#implications-of-mev "Direct link to Implications of MEV") +## Implications of MEV MEV can lead to: @@ -29,7 +33,7 @@ MEV can lead to: * **Market Fairness**: An uneven playing field where only a few can gain at the expense of the majority. * **User Experience**: Higher fees and network congestion due to the competition for MEV. -## Mitigating MEV and Front-Running[​](#mitigating-mev-and-front-running "Direct link to Mitigating MEV and Front-Running") +## Mitigating MEV and Front-Running Some solutions are being developed to mitigate MEV and front-running, including: @@ -39,6 +43,6 @@ Some solutions being developed to mitigate MEV and front-running, including: For this tutorial, we will be exploring the last solution, fair sequencing services, in the context of nameservice auctions. -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion MEV and front-running are challenges to blockchain integrity and fairness. Ongoing innovation and implementation of mitigation strategies are crucial for the ecosystem's health and success. diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/oracle/getting-started.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/oracle/getting-started.mdx index 199b67e6..f2eb7b22 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/oracle/getting-started.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/oracle/getting-started.mdx @@ -1,30 +1,30 @@ --- -title: "Getting Started" -description: "Version: v0.50" +title: Getting Started +description: What is an Oracle?
Implementing Vote Extensions Testing the Oracle Module --- -## Table of Contents[​](#table-of-contents "Direct link to Table of Contents") +## Table of Contents -* [What is an Oracle?](/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle) -* [Implementing Vote Extensions](/v0.50/tutorials/vote-extensions/oracle/implementing-vote-extensions) -* [Testing the Oracle Module](/v0.50/tutorials/vote-extensions/oracle/testing-oracle) +* [What is an Oracle?](/docs/sdk/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle) +* [Implementing Vote Extensions](/docs/sdk/v0.50/tutorials/vote-extensions/oracle/implementing-vote-extensions) +* [Testing the Oracle Module](/docs/sdk/v0.50/tutorials/vote-extensions/oracle/testing-oracle) -## Prerequisites[​](#prerequisites "Direct link to Prerequisites") +## Prerequisites Before you start with this tutorial, make sure you have: * A working chain project. This tutorial won't cover the steps of creating a new chain/module. * Familiarity with the Cosmos SDK. If you're not familiar with it, we suggest you start with [Cosmos SDK Tutorials](https://tutorials.cosmos.network), as ABCI++ is considered an advanced topic. -* Read and understood [What is an Oracle?](/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle). This provides necessary background information for understanding the Oracle module. +* Read and understood [What is an Oracle?](/docs/sdk/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle). This provides necessary background information for understanding the Oracle module. * A basic understanding of the Go programming language. -## What are Vote extensions?[​](#what-are-vote-extensions "Direct link to What are Vote extensions?") +## What are Vote extensions? Vote extensions are arbitrary information which can be inserted into a block. This feature is part of ABCI 2.0, which is available for use in the SDK 0.50 release and part of the 0.38 CometBFT release.
More information about vote extensions can be seen [here](https://docs.cosmos.network/main/build/abci/vote-extensions). -## Overview of the project[​](#overview-of-the-project "Direct link to Overview of the project") +## Overview of the project We’ll go through the creation of a simple price oracle module focusing on the vote extensions implementation, ignoring the details inside the price oracle itself. diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx index 0578a98c..1c2df539 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx @@ -1,28 +1,66 @@ --- -title: "Implementing Vote Extensions" -description: "Version: v0.50" +title: Implementing Vote Extensions +description: >- + First we’ll create the OracleVoteExtension struct, this is the object that + will be marshaled as bytes and signed by the validator. --- -## Implement ExtendVote[​](#implement-extendvote "Direct link to Implement ExtendVote") +## Implement ExtendVote First we’ll create the `OracleVoteExtension` struct, this is the object that will be marshaled as bytes and signed by the validator. In our example we’ll use JSON to marshal the vote extension for simplicity but we recommend to find an encoding that produces a smaller output, given that large vote extensions could impact CometBFT’s performance. Custom encodings and compressed bytes can be used out of the box. -``` -// OracleVoteExtension defines the canonical vote extension structure.type OracleVoteExtension struct { Height int64 Prices map[string]math.LegacyDec} +```go +/ OracleVoteExtension defines the canonical vote extension structure. 
+type OracleVoteExtension struct { + Height int64 + Prices map[string]math.LegacyDec +} ``` Then we’ll create a `VoteExtHandler` struct that contains everything we need to query for prices. -``` -type VoteExtHandler struct { logger log.Logger currentBlock int64 // current block height lastPriceSyncTS time.Time // last time we synced prices providerTimeout time.Duration // timeout for fetching prices from providers providers map[string]Provider // mapping of provider name to provider (e.g. Binance -> BinanceProvider) providerPairs map[string][]keeper.CurrencyPair // mapping of provider name to supported pairs (e.g. Binance -> [ATOM/USD]) Keeper keeper.Keeper // keeper of our oracle module} +```go +type VoteExtHandler struct { + logger log.Logger + currentBlock int64 / current block height + lastPriceSyncTS time.Time / last time we synced prices + providerTimeout time.Duration / timeout for fetching prices from providers + providers map[string]Provider / mapping of provider name to provider (e.g. Binance -> BinanceProvider) + +providerPairs map[string][]keeper.CurrencyPair / mapping of provider name to supported pairs (e.g. Binance -> [ATOM/USD]) + +Keeper keeper.Keeper / keeper of our oracle module +} ``` Finally, a function that returns `sdk.ExtendVoteHandler` is needed too, and this is where our vote extension logic will live.
-``` -func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler { return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { // here we'd have a helper function that gets all the prices and does a weighted average using the volume of each market prices := h.getAllVolumeWeightedPrices() voteExt := OracleVoteExtension{ Height: req.Height, Prices: prices, } bz, err := json.Marshal(voteExt) if err != nil { return nil, fmt.Errorf("failed to marshal vote extension: %w", err) } return &abci.ResponseExtendVote{VoteExtension: bz}, nil }} +```go expandable +func (h *VoteExtHandler) + +ExtendVoteHandler() + +sdk.ExtendVoteHandler { + return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + / here we'd have a helper function that gets all the prices and does a weighted average using the volume of each market + prices := h.getAllVolumeWeightedPrices() + voteExt := OracleVoteExtension{ + Height: req.Height, + Prices: prices, +} + +bz, err := json.Marshal(voteExt) + if err != nil { + return nil, fmt.Errorf("failed to marshal vote extension: %w", err) +} + +return &abci.ResponseExtendVote{ + VoteExtension: bz +}, nil +} +} ``` As you can see above, the creation of a vote extension is pretty simple and we just have to return bytes. CometBFT will handle the signing of these bytes for us. We ignored the process of getting the prices, but you can see a more complete example [here](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle/abci/vote_extensions.go). @@ -33,64 +71,184 @@ Here we’ll do some simple checks like: * Is the vote extension for the right height? * Some other validation; for example, do the prices from this extension deviate too much from my own prices? Or maybe checks that can detect malicious behavior.
-``` -func (h *VoteExtHandler) VerifyVoteExtensionHandler() sdk.VerifyVoteExtensionHandler { return func(ctx sdk.Context, req *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { var voteExt OracleVoteExtension err := json.Unmarshal(req.VoteExtension, &voteExt) if err != nil { return nil, fmt.Errorf("failed to unmarshal vote extension: %w", err) } if voteExt.Height != req.Height { return nil, fmt.Errorf("vote extension height does not match request height; expected: %d, got: %d", req.Height, voteExt.Height) } // Verify incoming prices from a validator are valid. Note, verification during // VerifyVoteExtensionHandler MUST be deterministic. For brevity and demo // purposes, we omit implementation. if err := h.verifyOraclePrices(ctx, voteExt.Prices); err != nil { return nil, fmt.Errorf("failed to verify oracle prices from validator %X: %w", req.ValidatorAddress, err) } return &abci.ResponseVerifyVoteExtension{Status: abci.ResponseVerifyVoteExtension_ACCEPT}, nil }} -``` +```go expandable +func (h *VoteExtHandler) + +VerifyVoteExtensionHandler() -## Implement PrepareProposal[​](#implement-prepareproposal "Direct link to Implement PrepareProposal") +sdk.VerifyVoteExtensionHandler { + return func(ctx sdk.Context, req *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + var voteExt OracleVoteExtension + err := json.Unmarshal(req.VoteExtension, &voteExt) + if err != nil { + return nil, fmt.Errorf("failed to unmarshal vote extension: %w", err) +} + if voteExt.Height != req.Height { + return nil, fmt.Errorf("vote extension height does not match request height; expected: %d, got: %d", req.Height, voteExt.Height) +} + / Verify incoming prices from a validator are valid. Note, verification during + / VerifyVoteExtensionHandler MUST be deterministic. For brevity and demo + / purposes, we omit implementation. 
+ if err := h.verifyOraclePrices(ctx, voteExt.Prices); err != nil { + return nil, fmt.Errorf("failed to verify oracle prices from validator %X: %w", req.ValidatorAddress, err) +} + +return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} ``` -type ProposalHandler struct { logger log.Logger keeper keeper.Keeper // our oracle module keeper valStore baseapp.ValidatorStore // to get the current validators' pubkeys} + +## Implement PrepareProposal + +```go +type ProposalHandler struct { + logger log.Logger + keeper keeper.Keeper / our oracle module keeper + valStore baseapp.ValidatorStore / to get the current validators' pubkeys +} ``` Next we create the struct for our “special tx”, which will contain the prices and the votes so validators can later re-check in `ProcessProposal` that they get the same result as the block’s proposer. With this we could also check whether all the votes have been used by comparing the votes received in `ProcessProposal`. -``` -type StakeWeightedPrices struct { StakeWeightedPrices map[string]math.LegacyDec ExtendedCommitInfo abci.ExtendedCommitInfo} +```go +type StakeWeightedPrices struct { + StakeWeightedPrices map[string]math.LegacyDec + ExtendedCommitInfo abci.ExtendedCommitInfo +} ``` Now we create the `PrepareProposalHandler`. In this step we’ll first check if the vote extensions’ signatures are correct using a helper function called `ValidateVoteExtensions` from the `baseapp` package.
+```go +func (h *ProposalHandler) + +PrepareProposal() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), req.LocalLastCommit) + if err != nil { + return nil, err +} +... ``` Then we proceed to make the calculations only if the current height is higher than the height at which vote extensions have been enabled. Remember that vote extensions are made available to the block proposer on the next block at which they are produced/enabled. -``` -... proposalTxs := req.Txs if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, req.LocalLastCommit) if err != nil { return nil, errors.New("failed to compute stake-weighted oracle prices") } injectedVoteExtTx := StakeWeightedPrices{ StakeWeightedPrices: stakeWeightedPrices, ExtendedCommitInfo: req.LocalLastCommit, }... +```go expandable +... + proposalTxs := req.Txs + if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { + stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, req.LocalLastCommit) + if err != nil { + return nil, errors.New("failed to compute stake-weighted oracle prices") +} + injectedVoteExtTx := StakeWeightedPrices{ + StakeWeightedPrices: stakeWeightedPrices, + ExtendedCommitInfo: req.LocalLastCommit, +} +... ``` Finally, we inject the result as a transaction at a specific location, usually at the beginning of the block, so that validators can later find it at `req.Txs[0]` in `ProcessProposal`. -## Implement ProcessProposal +## Implement ProcessProposal Now we can implement the method that all validators will execute to ensure the proposer is doing their work correctly.
Here, if vote extensions are enabled, we’ll check if the tx at index 0 is an injected vote extension -``` -func (h *ProposalHandler) ProcessProposal() sdk.ProcessProposalHandler { return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { var injectedVoteExtTx StakeWeightedPrices if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { h.logger.Error("failed to decode injected vote extension tx", "err", err) return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil }... -``` +```go +func (h *ProposalHandler) + +ProcessProposal() + +sdk.ProcessProposalHandler { + return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { + var injectedVoteExtTx StakeWeightedPrices + if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { + h.logger.Error("failed to decode injected vote extension tx", "err", err) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} +... +``` + +Then we re-validate the vote extensions signatures using +baseapp.ValidateVoteExtensions, re-calculate the results (just like in PrepareProposal) and compare them with the results we got from the injected tx. + +```go expandable +err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), injectedVoteExtTx.ExtendedCommitInfo) + if err != nil { + return nil, err +} + + / Verify the proposer's stake-weighted oracle prices by computing the same + / calculation and comparing the results. We omit verification for brevity + / and demo purposes. 
+ stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, injectedVoteExtTx.ExtendedCommitInfo) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if err := compareOraclePrices(injectedVoteExtTx.StakeWeightedPrices, stakeWeightedPrices); err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} +``` + +Important: In this example we avoided using the mempool and other basics, please refer to the DefaultProposalHandler for a complete implementation: [Link](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci_utils.go) + +## Implement PreBlocker -Then we re-validate the vote extensions signatures using baseapp.ValidateVoteExtensions, re-calculate the results (just like in PrepareProposal) and compare them with the results we got from the injected tx. +Now validators are extending their vote, verifying other votes and including the result in the block. But how do we actually make use of this result? This is done in the PreBlocker which is code that is run before any other code during FinalizeBlock so we make sure we make this information available to the chain and its modules during the entire block execution (from BeginBlock). -``` - err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), injectedVoteExtTx.ExtendedCommitInfo) if err != nil { return nil, err } // Verify the proposer's stake-weighted oracle prices by computing the same // calculation and comparing the results. We omit verification for brevity // and demo purposes. 
stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, injectedVoteExtTx.ExtendedCommitInfo) if err != nil { return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } if err := compareOraclePrices(injectedVoteExtTx.StakeWeightedPrices, stakeWeightedPrices); err != nil { return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } } return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil }} -``` +At this step we know that the injected tx is well-formatted and has been verified by the validators participating in consensus, so making use of it is straightforward. Just check if vote extensions are enabled, pick up the first transaction and use a method in your module’s keeper to set the result. -Important: In this example we avoided using the mempool and other basics, please refer to the DefaultProposalHandler for a complete implementation: [https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci\_utils.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci_utils.go) +```go expandable +func (h *ProposalHandler) -## Implement PreBlocker[​](#implement-preblocker "Direct link to Implement PreBlocker") +PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + res := &sdk.ResponsePreBlock{ +} + if len(req.Txs) == 0 { + return res, nil +} + if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { + var injectedVoteExtTx StakeWeightedPrices + if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { + h.logger.Error("failed to decode injected vote extension tx", "err", err) -Now validators are extending their vote, verifying other votes and including the result in the block. But how do we actually make use of this result? 
This is done in the PreBlocker which is code that is run before any other code during FinalizeBlock so we make sure we make this information available to the chain and its modules during the entire block execution (from BeginBlock). +return nil, err +} -At this step we know that the injected tx is well-formatted and has been verified by the validators participating in consensus, so making use of it is straightforward. Just check if vote extensions are enabled, pick up the first transaction and use a method in your module’s keeper to set the result. + / set oracle prices using the passed in context, which will make these prices available in the current block + if err := h.keeper.SetOraclePrices(ctx, injectedVoteExtTx.StakeWeightedPrices); err != nil { + return nil, err +} + +} -``` -func (h *ProposalHandler) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { res := &sdk.ResponsePreBlock{} if len(req.Txs) == 0 { return res, nil } if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { var injectedVoteExtTx StakeWeightedPrices if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { h.logger.Error("failed to decode injected vote extension tx", "err", err) return nil, err } // set oracle prices using the passed in context, which will make these prices available in the current block if err := h.keeper.SetOraclePrices(ctx, injectedVoteExtTx.StakeWeightedPrices); err != nil { return nil, err } } return res, nil} +return res, nil +} ``` -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion In this tutorial, we've created a simple price oracle module that incorporates vote extensions. We've seen how to implement `ExtendVote`, `VerifyVoteExtension`, `PrepareProposal`, `ProcessProposal`, and `PreBlocker` to handle the voting and verification process of vote extensions, as well as how to make use of the results during the block execution. 
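Throughout the tutorial, `computeStakeWeightedOraclePrices` is referenced but its implementation is omitted. Conceptually it is just a stake-weighted average of the prices reported in the collected vote extensions. The sketch below illustrates that calculation in a simplified, stdlib-only form, using plain `float64` instead of `math.LegacyDec` and hypothetical types:

```go
package main

import "fmt"

// vote pairs a validator's stake with the price it reported in its
// vote extension (hypothetical type, for illustration only).
type vote struct {
	stake uint64
	price float64
}

// stakeWeightedPrice averages the reported prices weighted by validator
// stake, mirroring what computeStakeWeightedOraclePrices does conceptually.
func stakeWeightedPrice(votes []vote) float64 {
	var totalStake uint64
	var weighted float64
	for _, v := range votes {
		totalStake += v.stake
		weighted += float64(v.stake) * v.price
	}
	if totalStake == 0 {
		return 0
	}
	return weighted / float64(totalStake)
}

func main() {
	votes := []vote{{stake: 100, price: 10.0}, {stake: 300, price: 12.0}}
	fmt.Println(stakeWeightedPrice(votes)) // (100*10 + 300*12) / 400 = 11.5
}
```

Weighting by stake matters because it makes the aggregate price as hard to skew as consensus itself: a validator's influence on the oracle result is proportional to its voting power.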
diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/oracle/testing-oracle.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/oracle/testing-oracle.mdx index a13af96e..c60c9ea2 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/oracle/testing-oracle.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/oracle/testing-oracle.mdx @@ -1,57 +1,61 @@ --- -title: "Testing the Oracle Module" -description: "Version: v0.50" +title: Testing the Oracle Module +description: >- + We will guide you through the process of testing the Oracle module in your + application. The Oracle module uses vote extensions to provide current price + data. If you would like to see the complete working oracle module please see + here. --- We will guide you through the process of testing the Oracle module in your application. The Oracle module uses vote extensions to provide current price data. If you would like to see the complete working oracle module please see [here](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle). -## Step 1: Compile and Install the Application[​](#step-1-compile-and-install-the-application "Direct link to Step 1: Compile and Install the Application") +## Step 1: Compile and Install the Application First, we need to compile and install the application. Please ensure you are in the `tutorials/oracle/base` directory. Run the following command in your terminal: -``` +```shell make install ``` This command compiles the application and moves the resulting binary to a location in your system's PATH. -## Step 2: Initialise the Application[​](#step-2-initialise-the-application "Direct link to Step 2: Initialise the Application") +## Step 2: Initialise the Application Next, we need to initialise the application. Run the following command in your terminal: -``` +```shell make init ``` This command runs the script `tutorials/oracle/base/scripts/init.sh`, which sets up the necessary configuration for your application to run. 
This includes creating the `app.toml` configuration file and initialising the blockchain with a genesis block. -## Step 3: Start the Application[​](#step-3-start-the-application "Direct link to Step 3: Start the Application") +## Step 3: Start the Application Now, we can start the application. Run the following command in your terminal: -``` +```shell exampled start ``` This command starts your application, begins the blockchain node, and starts processing transactions. -## Step 4: Query the Oracle Prices[​](#step-4-query-the-oracle-prices "Direct link to Step 4: Query the Oracle Prices") +## Step 4: Query the Oracle Prices Finally, we can query the current prices from the Oracle module. Run the following command in your terminal: -``` +```shell exampled q oracle prices ``` This command queries the current prices from the Oracle module. The expected output shows that the vote extensions were successfully included in the block and the Oracle module was able to retrieve the price data. -## Understanding Vote Extensions in Oracle[​](#understanding-vote-extensions-in-oracle "Direct link to Understanding Vote Extensions in Oracle") +## Understanding Vote Extensions in Oracle In the Oracle module, the `ExtendVoteHandler` function is responsible for creating the vote extensions. This function fetches the current prices from the provider, creates an `OracleVoteExtension` struct with these prices, and then marshals this struct into bytes. These bytes are then set as the vote extension. In the context of testing, the Oracle module uses a mock provider to simulate the behavior of a real price provider. This mock provider is defined in the mockprovider package and is used to return predefined prices for specific currency pairs. -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion In this tutorial, we've delved into the concept of oracles in blockchain technology, focusing on their role in providing external data to a blockchain network.
We've explored vote extensions, a powerful feature of ABCI++, and integrated them into a Cosmos SDK application to create a price oracle module. diff --git a/docs/sdk/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx b/docs/sdk/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx index cfca8c4d..9e273990 100644 --- a/docs/sdk/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx +++ b/docs/sdk/v0.50/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx @@ -1,16 +1,15 @@ --- -title: "What is an Oracle?" -description: "Version: v0.50" +title: What is an Oracle? --- An oracle in blockchain technology is a system that provides external data to a blockchain network. It acts as a source of information that is not natively accessible within the blockchain's closed environment. This can range from financial market prices to real-world events, making it crucial for decentralised applications. -## Oracle in the Cosmos SDK[​](#oracle-in-the-cosmos-sdk "Direct link to Oracle in the Cosmos SDK") +## Oracle in the Cosmos SDK In the Cosmos SDK, an oracle module can be implemented to provide external data to the blockchain. This module can use features like vote extensions to submit additional data during the consensus process, which can then be used by the blockchain to update its state with information from the outside world. For instance, a price oracle module in the Cosmos SDK could supply timely and accurate asset price information, which is vital for various financial operations within the blockchain ecosystem. -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion Oracles are essential for blockchains to interact with external data, enabling them to respond to real-world information and events. Their implementation is key to the reliability and robustness of blockchain networks.
diff --git a/docs/sdk/v0.50/user.mdx b/docs/sdk/v0.50/user.mdx deleted file mode 100644 index 34af2f8b..00000000 --- a/docs/sdk/v0.50/user.mdx +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "User Guides" -description: "Version: v0.50" ---- - -This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features. - -* [Setting up keys](/v0.50/user/run-node/keyring) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application. -* [Running a node](/v0.50/user/run-node/run-node) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps. -* [CLI](/v0.50/user/run-node/interact-node) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively. diff --git a/docs/sdk/v0.50/user/run-node/interact-node.mdx b/docs/sdk/v0.50/user/run-node/interact-node.mdx index 1f540707..73ab4eb3 100644 --- a/docs/sdk/v0.50/user/run-node/interact-node.mdx +++ b/docs/sdk/v0.50/user/run-node/interact-node.mdx @@ -1,62 +1,73 @@ --- -title: "Interacting with the Node" -description: "Version: v0.50" +title: Interacting with the Node --- - - There are multiple ways to interact with a node: using the CLI, using gRPC or using the REST endpoints. - +## Synopsis + +There are multiple ways to interact with a node: using the CLI, using gRPC or using the REST endpoints. 
- * [gRPC, REST and CometBFT Endpoints](/v0.50/learn/advanced/grpc_rest) - * [Running a Node](/v0.50/user/run-node/run-node) +**Pre-requisite Readings** + +- [gRPC, REST and CometBFT Endpoints](/docs/sdk/v0.50/learn/advanced/grpc_rest) +- [Running a Node](/docs/sdk/v0.50/user/run-node/run-node) + -## Using the CLI[​](#using-the-cli "Direct link to Using the CLI") +## Using the CLI Now that your chain is running, it is time to try sending tokens from the first account you created to a second account. In a new terminal window, start by running the following query command: -``` +```bash simd query bank balances $MY_VALIDATOR_ADDRESS ``` You should see the current balance of the account you created, equal to the original balance of `stake` you granted it minus the amount you delegated via the `gentx`. Now, create a second account: -``` -simd keys add recipient --keyring-backend test# Put the generated address in a variable for later use.RECIPIENT=$(simd keys show recipient -a --keyring-backend test) +```bash +simd keys add recipient --keyring-backend test + +# Put the generated address in a variable for later use. +RECIPIENT=$(simd keys show recipient -a --keyring-backend test) ``` The command above creates a local key-pair that is not yet registered on the chain. An account is created the first time it receives tokens from another account. Now, run the following command to send tokens to the `recipient` account: -``` -simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test# Check that the recipient account did receive the tokens.simd query bank balances $RECIPIENT +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test + +# Check that the recipient account did receive the tokens. 
+simd query bank balances $RECIPIENT ``` Finally, delegate some of the stake tokens sent to the `recipient` account to the validator: -``` -simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test# Query the total delegations to `validator`.simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test) +```bash +simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test + +# Query the total delegations to `validator`. +simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test) ``` You should see two delegations, the first one made from the `gentx`, and the second one you just performed from the `recipient` account. -## Using gRPC[​](#using-grpc "Direct link to Using gRPC") +## Using gRPC -The Protobuf ecosystem developed tools for different use cases, including code-generation from `*.proto` files into various languages. These tools allow the building of clients easily. Often, the client connection (i.e. the transport) can be plugged and replaced very easily. Let's explore one of the most popular transport: [gRPC](/v0.50/learn/advanced/grpc_rest). +The Protobuf ecosystem developed tools for different use cases, including code-generation from `*.proto` files into various languages. These tools make it easy to build clients. Often, the client connection (i.e. the transport) can be plugged and replaced very easily. Let's explore one of the most popular transports: [gRPC](/docs/sdk/v0.50/learn/advanced/grpc_rest). Since the code generation library largely depends on your own tech stack, we will only present three alternatives: -* `grpcurl` for generic debugging and testing, -* programmatically via Go, -* CosmJS for JavaScript/TypeScript developers.
+- `grpcurl` for generic debugging and testing, +- programmatically via Go, +- CosmJS for JavaScript/TypeScript developers. -### grpcurl[​](#grpcurl "Direct link to grpcurl") +### grpcurl [grpcurl](https://github.com/fullstorydev/grpcurl) is like `curl` but for gRPC. It is also available as a Go library, but we will use it only as a CLI command for debugging and testing purposes. Follow the instructions in the previous link to install it. -Assuming you have a local node running (either a localnet, or connected a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9000` by the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml)): +Assuming you have a local node running (either a localnet, or connected to a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9090` with the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml)): -``` +```bash grpcurl -plaintext localhost:9090 list ``` @@ -64,86 +75,224 @@ You should see a list of gRPC services, like `cosmos.bank.v1beta1.Query`.
This i In order to get a description of the service you can run the following command: -``` -grpcurl -plaintext \ localhost:9090 \ describe cosmos.bank.v1beta1.Query # Service we want to inspect +```bash +grpcurl -plaintext \ + localhost:9090 \ + describe cosmos.bank.v1beta1.Query # Service we want to inspect ``` It's also possible to execute an RPC call to query the node for information: -``` -grpcurl \ -plaintext \ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances +```bash +grpcurl \ + -plaintext \ + -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/AllBalances ``` The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786). -#### Query for historical state using grpcurl[​](#query-for-historical-state-using-grpcurl "Direct link to Query for historical state using grpcurl") +#### Query for historical state using grpcurl You may also query for historical data by passing some [gRPC metadata](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) to the query: the `x-cosmos-block-height` metadata should contain the block to query. Using grpcurl as above, the command looks like: -``` -grpcurl \ -plaintext \ -H "x-cosmos-block-height: 123" \ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances +```bash +grpcurl \ + -plaintext \ + -H "x-cosmos-block-height: 123" \ + -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/AllBalances ``` Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response. -### Programmatically via Go[​](#programmatically-via-go "Direct link to Programmatically via Go") +### Programmatically via Go The following snippet shows how to query the state using gRPC inside a Go program. 
The idea is to create a gRPC connection, and use the Protobuf-generated client code to query the gRPC server. -#### Install Cosmos SDK[​](#install-cosmos-sdk "Direct link to Install Cosmos SDK") +#### Install Cosmos SDK -``` +```bash go get github.com/cosmos/cosmos-sdk@main ``` -``` -package mainimport ( "context" "fmt" "google.golang.org/grpc" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types")func queryState() error { myAddress, err := sdk.AccAddressFromBech32("cosmos1...") // the my_validator or recipient address. if err != nil { return err } // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry // if the request/response types contain interface instead of 'nil' you should pass the application specific codec. grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), ) if err != nil { return err } defer grpcConn.Close() // This creates a gRPC client to query the x/bank service. bankClient := banktypes.NewQueryClient(grpcConn) bankRes, err := bankClient.Balance( context.Background(), &banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"}, ) if err != nil { return err } fmt.Println(bankRes.GetBalance()) // Prints the account balance return nil}func main() { if err := queryState(); err != nil { panic(err) }} +```go expandable +package main + +import ( + + "context" + "fmt" + "google.golang.org/grpc" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +func queryState() + +error { + myAddress, err := sdk.AccAddressFromBech32("cosmos1...") / the my_validator or recipient address. 
+ if err != nil { + return err +} + + / Create a connection to the gRPC server. + grpcConn, err := grpc.Dial( + "127.0.0.1:9090", / your gRPC server address. + grpc.WithInsecure(), / The Cosmos SDK doesn't support any transport security mechanism. + / This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry + / if the request/response types contain interface instead of 'nil' you should pass the application specific codec. + grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), + ) + if err != nil { + return err +} + +defer grpcConn.Close() + + / This creates a gRPC client to query the x/bank service. + bankClient := banktypes.NewQueryClient(grpcConn) + +bankRes, err := bankClient.Balance( + context.Background(), + &banktypes.QueryBalanceRequest{ + Address: myAddress.String(), + Denom: "stake" +}, + ) + if err != nil { + return err +} + +fmt.Println(bankRes.GetBalance()) / Prints the account balance + + return nil +} + +func main() { + if err := queryState(); err != nil { + panic(err) +} +} ``` You can replace the query client (here we are using `x/bank`'s) with one generated from any other Protobuf service. The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786). -#### Query for historical state using Go[​](#query-for-historical-state-using-go "Direct link to Query for historical state using Go") +#### Query for historical state using Go Querying for historical blocks is done by adding the block height metadata in the gRPC request. 
-``` -package mainimport ( "context" "fmt" "google.golang.org/grpc" "google.golang.org/grpc/metadata" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" grpctypes "github.com/cosmos/cosmos-sdk/types/grpc" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types")func queryState() error { myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") // the my_validator or recipient address. if err != nil { return err } // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry // if the request/response types contain interface instead of 'nil' you should pass the application specific codec. grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), ) if err != nil { return err } defer grpcConn.Close() // This creates a gRPC client to query the x/bank service. 
bankClient := banktypes.NewQueryClient(grpcConn) var header metadata.MD _, err = bankClient.Balance( metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), // Add metadata to request &banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"}, grpc.Header(&header), // Retrieve header from response ) if err != nil { return err } blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader) fmt.Println(blockHeight) // Prints the block height (12) return nil}func main() { if err := queryState(); err != nil { panic(err) }} +```go expandable +package main + +import ( + + "context" + "fmt" + "google.golang.org/grpc" + "google.golang.org/grpc/metadata" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + grpctypes "github.com/cosmos/cosmos-sdk/types/grpc" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +func queryState() + +error { + myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") / the my_validator or recipient address. + if err != nil { + return err +} + + / Create a connection to the gRPC server. + grpcConn, err := grpc.Dial( + "127.0.0.1:9090", / your gRPC server address. + grpc.WithInsecure(), / The Cosmos SDK doesn't support any transport security mechanism. + / This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry + / if the request/response types contain interface instead of 'nil' you should pass the application specific codec. + grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), + ) + if err != nil { + return err +} + +defer grpcConn.Close() + + / This creates a gRPC client to query the x/bank service. 
+ bankClient := banktypes.NewQueryClient(grpcConn) + +var header metadata.MD + _, err = bankClient.Balance( + metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), / Add metadata to request + &banktypes.QueryBalanceRequest{ + Address: myAddress.String(), + Denom: "stake" +}, + grpc.Header(&header), / Retrieve header from response + ) + if err != nil { + return err +} + blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader) + +fmt.Println(blockHeight) / Prints the block height (12) + +return nil +} + +func main() { + if err := queryState(); err != nil { + panic(err) +} +} ``` -### CosmJS[​](#cosmjs "Direct link to CosmJS") +### CosmJS -CosmJS documentation can be found at [https://cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs). As of January 2021, CosmJS documentation is still work in progress. +CosmJS documentation can be found at [Link](https://cosmos.github.io/cosmjs). As of January 2021, CosmJS documentation is still work in progress. -## Using the REST Endpoints[​](#using-the-rest-endpoints "Direct link to Using the REST Endpoints") +## Using the REST Endpoints -As described in the [gRPC guide](/v0.50/learn/advanced/grpc_rest), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway. The format of the URL path is based on the Protobuf service method's full-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters. +As described in the [gRPC guide](/docs/sdk/v0.50/learn/advanced/grpc_rest), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway. 
The format of the URL path is based on the Protobuf service method's full-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters. Note that the REST endpoints are not enabled by default. To enable them, edit the `api` section of your `~/.simapp/config/app.toml` file: -``` -# Enable defines if the API server should be enabled.enable = true +```toml +# Enable defines if the API server should be enabled. +enable = true ``` As a concrete example, the `curl` command to make balances request is: -``` -curl \ -X GET \ -H "Content-Type: application/json" \ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS +```bash +curl \ + -X GET \ + -H "Content-Type: application/json" \ + http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS ``` Make sure to replace `localhost:1317` with the REST endpoint of your node, configured under the `api.address` field. -The list of all available REST endpoints is available as a Swagger specification file, it can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to true in your [`app.toml`](/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) file. +The list of all available REST endpoints is available as a Swagger specification file, it can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to true in your [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) file. -### Query for historical state using REST[​](#query-for-historical-state-using-rest "Direct link to Query for historical state using REST") +### Query for historical state using REST Querying for historical state is done using the HTTP header `x-cosmos-block-height`. 
For example, a curl command would look like: -``` -curl \ -X GET \ -H "Content-Type: application/json" \ -H "x-cosmos-block-height: 123" \ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS +```bash +curl \ + -X GET \ + -H "Content-Type: application/json" \ + -H "x-cosmos-block-height: 123" \ + http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS ``` Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response. -### Cross-Origin Resource Sharing (CORS)[​](#cross-origin-resource-sharing-cors "Direct link to Cross-Origin Resource Sharing (CORS)") +### Cross-Origin Resource Sharing (CORS) -[CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default to help with security. If you would like to use the rest-server in a public environment we recommend you provide a reverse proxy, this can be done with [nginx](https://www.nginx.com/). For testing and development purposes there is an `enabled-unsafe-cors` field inside [`app.toml`](/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml). +[CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default to help with security. If you would like to use the rest-server in a public environment we recommend you provide a reverse proxy, this can be done with [nginx](https://www.nginx.com/). For testing and development purposes there is an `enabled-unsafe-cors` field inside [`app.toml`](/docs/sdk/v0.50/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml). 
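The CORS guidance above recommends fronting a public REST server with a reverse proxy rather than enabling `enabled-unsafe-cors`. As a minimal nginx sketch only (the hostname, upstream address, and allowed origin below are hypothetical, not from the docs):

```nginx
server {
    listen 80;
    server_name api.example.com;  # hypothetical public hostname

    location / {
        # Forward requests to the node's REST endpoint (api.address).
        proxy_pass http://127.0.0.1:1317;

        # Allow only a known frontend origin instead of turning on
        # enabled-unsafe-cors on the node itself.
        add_header Access-Control-Allow-Origin "https://app.example.com" always;
        add_header Access-Control-Allow-Headers "Content-Type, x-cosmos-block-height" always;
    }
}
```

Restricting the allowed origin at the proxy keeps the node's configuration unchanged while still serving browser clients from a trusted domain.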
diff --git a/docs/sdk/v0.50/user/run-node/keyring.mdx b/docs/sdk/v0.50/user/run-node/keyring.mdx index 3add0561..b3cd1c72 100644 --- a/docs/sdk/v0.50/user/run-node/keyring.mdx +++ b/docs/sdk/v0.50/user/run-node/keyring.mdx @@ -1,96 +1,131 @@ --- -title: "Setting up the keyring" -description: "Version: v0.50" +title: Setting up the keyring --- - - This document describes how to configure and use the keyring and its various backends for an [**application**](/v0.50/learn/beginner/app-anatomy). - +## Synopsis -The keyring holds the private/public keypairs used to interact with a node. For instance, a validator key needs to be set up before running the blockchain node, so that blocks can be correctly signed. The private key can be stored in different locations, called "backends", such as a file or the operating system's own key storage. - -## Available backends for the keyring[​](#available-backends-for-the-keyring "Direct link to Available backends for the keyring") +This document describes how to configure and use the keyring and its various backends for an [**application**](/docs/sdk/v0.50/learn/beginner/app-anatomy). -Starting with the v0.38.0 release, Cosmos SDK comes with a new keyring implementation that provides a set of commands to manage cryptographic keys in a secure fashion. The new keyring supports multiple storage backends, some of which may not be available on all operating systems. - -### The `os` backend[​](#the-os-backend "Direct link to the-os-backend") +The keyring holds the private/public keypairs used to interact with a node. For instance, a validator key needs to be set up before running the blockchain node, so that blocks can be correctly signed. The private key can be stored in different locations, called "backends", such as a file or the operating system's own key storage. -The `os` backend relies on operating system-specific defaults to handle key storage securely. 
Typically, an operating system's credential sub-system handles password prompts, private keys storage, and user sessions according to the user's password policies. Here is a list of the most popular operating systems and their respective passwords manager: +## Available backends for the keyring -* macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac) +Starting with the v0.38.0 release, Cosmos SDK comes with a new keyring implementation +that provides a set of commands to manage cryptographic keys in a secure fashion. The +new keyring supports multiple storage backends, some of which may not be available on +all operating systems. -* Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management) +### The `os` backend -* GNU/Linux: +The `os` backend relies on operating system-specific defaults to handle key storage +securely. Typically, an operating system's credential sub-system handles password prompts, +private keys storage, and user sessions according to the user's password policies. Here +is a list of the most popular operating systems and their respective passwords manager: - * [libsecret](https://gitlab.gnome.org/GNOME/libsecret) - * [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html) - * [keyctl](https://www.kernel.org/doc/html/latest/security/keys/core.html) +- macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac) +- Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management) +- GNU/Linux: + - [libsecret](https://gitlab.gnome.org/GNOME/libsecret) + - [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html) + - [keyctl](https://www.kernel.org/doc/html/latest/security/keys/core.html) -GNU/Linux distributions that use GNOME as default desktop environment typically come with [Seahorse](https://wiki.gnome.org/Apps/Seahorse). 
Users of KDE based distributions are commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager). Whilst the former is in fact a `libsecret` convenient frontend, the latter is a `kwallet` client. `keyctl` is a secure backend leverages the Linux's kernel security key management system to store cryptographic keys securely in memory. +GNU/Linux distributions that use GNOME as default desktop environment typically come with +[Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE based distributions are +commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager). +Whilst the former is in fact a `libsecret` convenient frontend, the latter is a `kwallet` +client. `keyctl` is a secure backend leverages the Linux's kernel security key management system +to store cryptographic keys securely in memory. -`os` is the default option since operating system's default credentials managers are designed to meet users' most common needs and provide them with a comfortable experience without compromising on security. +`os` is the default option since operating system's default credentials managers are +designed to meet users' most common needs and provide them with a comfortable +experience without compromising on security. The recommended backends for headless environments are `file` and `pass`. -### The `file` backend[​](#the-file-backend "Direct link to the-file-backend") +### The `file` backend -The `file` backend more closely resembles the keybase implementation used prior to v0.38.1. It stores the keyring encrypted within the app's configuration directory. This keyring will request a password each time it is accessed, which may occur multiple times in a single command resulting in repeated password prompts. 
If using bash scripts to execute commands using the `file` option you may want to utilize the following format for multiple prompts: +The `file` backend more closely resembles the keybase implementation used prior to +v0.38.1. It stores the keyring encrypted within the app's configuration directory. This +keyring will request a password each time it is accessed, which may occur multiple +times in a single command resulting in repeated password prompts. If using bash scripts +to execute commands using the `file` option you may want to utilize the following format +for multiple prompts: -``` -# assuming that KEYPASSWD is set in the environment$ gaiacli config keyring-backend file # use file backend$ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts$ echo $KEYPASSWD | gaiacli keys show me # single prompt +```shell +# assuming that KEYPASSWD is set in the environment +$ gaiacli config keyring-backend file # use file backend +$ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts +$ echo $KEYPASSWD | gaiacli keys show me # single prompt ``` - - The first time you add a key to an empty keyring, you will be prompted to type the password twice. - + + The first time you add a key to an empty keyring, you will be prompted to type + the password twice. + -### The `pass` backend[​](#the-pass-backend "Direct link to the-pass-backend") +### The `pass` backend -The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk encryption of keys' sensitive data and metadata. Keys are stored inside `gpg` encrypted files within app-specific directories. `pass` is available for the most popular UNIX operating systems as well as GNU/Linux distributions. Please refer to its manual page for information on how to download and install it. +The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk +encryption of keys' sensitive data and metadata. 
Keys are stored inside `gpg` encrypted files
+within app-specific directories. `pass` is available for the most popular UNIX
+operating systems as well as GNU/Linux distributions. Please refer to its manual page for
+information on how to download and install it.
-
- **pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically invokes the `gpg-agent` daemon upon execution, which handles the caching of GnuPG credentials. Please refer to `gpg-agent` man page for more information on how to configure cache parameters such as credentials TTL and passphrase expiration.
-
+
+ **pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically
+ invokes the `gpg-agent` daemon upon execution, which handles the caching of
+ GnuPG credentials. Please refer to the `gpg-agent` man page for more information
+ on how to configure cache parameters such as credentials TTL and passphrase
+ expiration.
+
The password store must be set up prior to first use:
-```
+```shell
pass init
```
-Replace `` with your GPG key ID. You can use your personal GPG key or an alternative one you may want to use specifically to encrypt the password store.
+Replace `` with your GPG key ID. You can use your personal GPG key or an alternative
+one you may want to use specifically to encrypt the password store.
-### The `kwallet` backend[​](#the-kwallet-backend "Direct link to the-kwallet-backend")
+### The `kwallet` backend
-The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on the GNU/Linux distributions that ships KDE as default desktop environment. Please refer to [KWallet Handbook](https://docs.kde.org/stable5/en/kdeutils/kwallet5/index.html) for more information.
+The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on the
+GNU/Linux distributions that ship KDE as the default desktop environment. Please refer to
+[KWallet Handbook](https://docs.kde.org/stable5/en/kdeutils/kwallet5/index.html) for more
+information.
-### The `keyctl` backend[​](#the-keyctl-backend "Direct link to the-keyctl-backend")
+### The `keyctl` backend
-The *Kernel Key Retention Service* is a security facility that has been added to the Linux kernel relatively recently. It allows sensitive cryptographic data such as passwords, private key, authentication tokens, etc to be stored securely in memory.
+The _Kernel Key Retention Service_ is a security facility that
+has been added to the Linux kernel relatively recently. It allows sensitive
+cryptographic data such as passwords, private keys, authentication tokens, etc.
+to be stored securely in memory.
The `keyctl` backend is available on Linux platforms only.
-### The `test` backend[​](#the-test-backend "Direct link to the-test-backend")
+### The `test` backend
-The `test` backend is a password-less variation of the `file` backend. Keys are stored unencrypted on disk.
+The `test` backend is a password-less variation of the `file` backend. Keys are stored
+unencrypted on disk.
**Provided for testing purposes only. The `test` backend is not recommended for use in production environments**.
-### The `memory` backend[​](#the-memory-backend "Direct link to the-memory-backend")
+### The `memory` backend
The `memory` backend stores keys in memory. The keys are immediately deleted after the program has exited.
**Provided for testing purposes only. The `memory` backend is not recommended for use in production environments**.
-### Setting backend using the env variable[​](#setting-backend-using-the-env-variable "Direct link to Setting backend using the env variable")
+### Setting backend using the env variable
You can set the keyring-backend using the env variable: `BINNAME_KEYRING_BACKEND`.
For example, if your binary name is `gaia-v5` then set: `export GAIA_V5_KEYRING_BACKEND=pass` -## Adding keys to the keyring[​](#adding-keys-to-the-keyring "Direct link to Adding keys to the keyring") +## Adding keys to the keyring - Make sure you can build your own binary, and replace `simd` with the name of your binary in the snippets. + Make sure you can build your own binary, and replace `simd` with the name of + your binary in the snippets. Applications developed using the Cosmos SDK come with the `keys` subcommand. For the purpose of this tutorial, we're running the `simd` CLI, which is an application built using the Cosmos SDK for testing and educational purposes. For more information, see [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). @@ -99,8 +134,11 @@ You can use `simd keys` for help about the keys command and `simd keys [command] To create a new key in the keyring, run the `add` subcommand with a `` argument. For the purpose of this tutorial, we will solely use the `test` backend, and call our new key `my_validator`. This key will be used in the next section. -``` -$ simd keys add my_validator --keyring-backend test# Put the generated address in a variable for later use.MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test) +```bash +$ simd keys add my_validator --keyring-backend test + +# Put the generated address in a variable for later use. +MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test) ``` This command generates a new 24-word mnemonic phrase, persists it to the relevant backend, and outputs information about the keypair. If this keypair will be used to hold value-bearing tokens, be sure to write down the mnemonic phrase somewhere safe! 
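The backend env variable described earlier is derived mechanically from the binary name. As an illustration only (the convention shown above is uppercase with dashes turned into underscores; verify the exact rule for your application), the mapping can be sketched in shell:

```shell
# Sketch: derive the keyring-backend env var name from a binary name.
# Assumes the uppercase/underscore convention described above.
binary_name="gaia-v5"
env_var="$(printf '%s' "$binary_name" | tr 'a-z-' 'A-Z_')_KEYRING_BACKEND"
echo "$env_var"   # GAIA_V5_KEYRING_BACKEND
export "$env_var=pass"
```

The `export "$env_var=pass"` line then selects the `pass` backend for that binary without hard-coding the variable name.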
diff --git a/docs/sdk/v0.50/user/run-node/rosetta.mdx b/docs/sdk/v0.50/user/run-node/rosetta.mdx
index 23a8e6a6..5f4e2fc4 100644
--- a/docs/sdk/v0.50/user/run-node/rosetta.mdx
+++ b/docs/sdk/v0.50/user/run-node/rosetta.mdx
@@ -1,48 +1,53 @@
---
-title: "Rosetta"
-description: "Version: v0.50"
+title: Rosetta
+description: >-
+  The rosetta project implements Coinbase's Rosetta API. This document provides
+  instructions on how to use the Rosetta API integration. For information about
+  the motivation and design choices, refer to ADR 035.
---
-The `rosetta` project implements Coinbase's [Rosetta API](https://www.rosetta-api.org). This document provides instructions on how to use the Rosetta API integration. For information about the motivation and design choices, refer to [ADR 035](https://docs.cosmos.network/main/architecture/adr-035-rosetta-api-support).
+The `rosetta` project implements Coinbase's [Rosetta API](https://www.rosetta-api.org). This document provides instructions on how to use the Rosetta API integration. For information about the motivation and design choices, refer to [ADR 035](/docs/common/pages/adr-comprehensive#adr-035-rosetta-api-support).
-## Installing Rosetta[​](#installing-rosetta "Direct link to Installing Rosetta")
+## Installing Rosetta
The Rosetta API server is a stand-alone server that connects to a node of a chain developed with Cosmos SDK. Rosetta can be added to any cosmos chain node, standalone or natively.
-### Standalone[​](#standalone "Direct link to Standalone")
+### Standalone
Rosetta can be executed as a standalone service; it connects to the node endpoints and exposes the required endpoints. Install Rosetta standalone server with the following command:
-```
+```bash
go install github.com/cosmos/rosetta
```
Alternatively, for building from source, simply run `make build`. The binary will be located in the root folder.
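A note on the install step above: `go install` drops the binary in `$GOBIN`, which defaults to `$(go env GOPATH)/bin` (usually `$HOME/go/bin`). That directory has to be on `PATH` before `rosetta` resolves; a small sketch, assuming a standard Go toolchain layout:

```shell
# Add the default Go install directory to PATH if it is not already there,
# then check whether the rosetta binary resolves.
go_bin="${GOBIN:-$HOME/go/bin}"
case ":$PATH:" in
  *":$go_bin:"*) ;;                       # already on PATH
  *) PATH="$PATH:$go_bin"; export PATH ;; # append it
esac
if command -v rosetta >/dev/null 2>&1; then
  echo "rosetta found at $(command -v rosetta)"
else
  echo "rosetta not on PATH yet; run 'go install github.com/cosmos/rosetta' first"
fi
```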
-### Native - As a node command[​](#native---as-a-node-command "Direct link to Native - As a node command")
+### Native - As a node command
To enable Native Rosetta API support, it's required to add the `RosettaCommand` to your application's root command file (e.g. `simd/cmd/root.go`).
Import the `rosettaCmd` package:
-```
+```go
import "github.com/cosmos/rosetta/cmd"
```
Find the following line:
-```
+```go
initRootCmd(rootCmd, encodingConfig)
```
After that line, add the following:
-```
-rootCmd.AddCommand( rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec))
+```go
+rootCmd.AddCommand(
+  rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec),
+)
```
The `RosettaCommand` function builds the `rosetta` root command and is defined in the `rosettaCmd` package (`github.com/cosmos/rosetta/cmd`).
@@ -51,23 +56,29 @@ Since we’ve updated the Cosmos SDK to work with the Rosetta API, updating the
An implementation example can be found in `simapp` package.
-## Use Rosetta Command[​](#use-rosetta-command "Direct link to Use Rosetta Command")
+## Use Rosetta Command
To run Rosetta in your application CLI, use the following command:
> **Note:** if using the native approach, add your node name before any rosetta command.
-```
+```shell
rosetta --help
```
To test and run Rosetta API endpoints for applications that are running and exposed, use the following command:
-```
-rosetta --blockchain "your application name (ex: gaia)" --network "your chain identifier (ex: testnet-1)" --tendermint "tendermint endpoint (ex: localhost:26657)" --grpc "gRPC endpoint (ex: localhost:9090)" --addr "rosetta binding address (ex: :8080)" --grpc-types-server (optional) "gRPC endpoint for message descriptor types"
+```shell
+rosetta \
+  --blockchain "your application name (ex: gaia)" \
+  --network "your chain identifier (ex: testnet-1)" \
+  --tendermint "tendermint endpoint (ex: localhost:26657)" \
+  --grpc "gRPC endpoint (ex: localhost:9090)" \
+  --addr "rosetta binding address (ex: :8080)" \
+  --grpc-types-server (optional) "gRPC endpoint for message descriptor types"
```
-## Plugins - Multi chain connections[​](#plugins---multi-chain-connections "Direct link to Plugins - Multi chain connections")
+## Plugins - Multi chain connections
Rosetta will try to resolve the node types through reflection over the node gRPC endpoints; there may be cases where this approach is not enough. It is possible to extend or implement the required types easily through plugins.
@@ -86,34 +97,59 @@ In order to add a new plugin:
The plugin folder is selected through the cli `--plugin` flag and loaded into the Rosetta server.
-## Extensions[​](#extensions "Direct link to Extensions")
+## Extensions
There are two ways in which you can customize and extend the implementation with your custom settings.
-### Message extension[​](#message-extension "Direct link to Message extension")
+### Message extension
In order to make an `sdk.Msg` understandable by rosetta, the only thing required is adding the methods to your messages that satisfy the `rosetta.Msg` interface. Examples on how to do so can be found in the staking types such as `MsgDelegate`, or in bank types such as `MsgSend`.
-### Client interface override[​](#client-interface-override "Direct link to Client interface override")
+### Client interface override
In case more customization is required, it's possible to embed the Client type and override the methods which require customizations.
Example:
-```
-package custom_clientimport ("context""github.com/coinbase/rosetta-sdk-go/types""github.com/cosmos/rosetta/lib")// CustomClient embeds the standard cosmos client// which means that it implements the cosmos-rosetta-gateway Client// interface while at the same time allowing to customize certain methodstype CustomClient struct { *rosetta.Client}func (c *CustomClient) ConstructionPayload(_ context.Context, request *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error) { // provide custom signature bytes panic("implement me")}
+```go expandable
+package custom_client
+
+import (
+    "context"
+
+    "github.com/coinbase/rosetta-sdk-go/types"
+    "github.com/cosmos/rosetta/lib"
+)
+
+// CustomClient embeds the standard cosmos client
+// which means that it implements the cosmos-rosetta-gateway Client
+// interface while at the same time allowing to customize certain methods
+type CustomClient struct {
+    *rosetta.Client
+}
+
+func (c *CustomClient) ConstructionPayload(_ context.Context, request *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error) {
+    // provide custom signature bytes
+    panic("implement me")
+}
```
NOTE: when using a customized client, the command cannot be used as the constructors required **may** differ, so it's required to create a new one. We intend to provide a way to init a customized client without writing extra code in the future.
-### Error extension[​](#error-extension "Direct link to Error extension")
+### Error extension
Rosetta requires 'returned' errors to be provided to network options. In order to declare a new rosetta error, we use the `errors` package in cosmos-rosetta-gateway.
Example:
-```
-package custom_errorsimport crgerrs "github.com/cosmos/rosetta/lib/errors"var customErrRetriable = truevar CustomError = crgerrs.RegisterError(100, "custom message", customErrRetriable, "description")
+```go
+package custom_errors
+
+import crgerrs "github.com/cosmos/rosetta/lib/errors"
+
+var customErrRetriable = true
+var CustomError = crgerrs.RegisterError(100, "custom message", customErrRetriable, "description")
```
Note: errors must be registered before cosmos-rosetta-gateway's `Server.Start` method is called. Otherwise the registration will be ignored. Errors with the same code will be ignored too.
diff --git a/docs/sdk/v0.50/user/run-node/run-node.mdx b/docs/sdk/v0.50/user/run-node/run-node.mdx
index 0020856b..824f5b70 100644
--- a/docs/sdk/v0.50/user/run-node/run-node.mdx
+++ b/docs/sdk/v0.50/user/run-node/run-node.mdx
@@ -1,73 +1,99 @@
---
-title: "Running a Node"
-description: "Version: v0.50"
+title: Running a Node
---
-
- Now that the application is ready and the keyring populated, it's time to see how to run the blockchain node. In this section, the application we are running is called [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`.
-
+## Synopsis
+
+Now that the application is ready and the keyring populated, it's time to see how to run the blockchain node. In this section, the application we are running is called [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`.
- * [Anatomy of a Cosmos SDK Application](/v0.50/learn/beginner/app-anatomy)
- * [Setting up the keyring](/v0.50/user/run-node/keyring)
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.50/learn/beginner/app-anatomy)
+- [Setting up the keyring](/docs/sdk/v0.50/user/run-node/keyring)
+
-## Initialize the Chain[​](#initialize-the-chain "Direct link to Initialize the Chain")
+## Initialize the Chain
- Make sure you can build your own binary, and replace `simd` with the name of your binary in the snippets.
+ Make sure you can build your own binary, and replace `simd` with the name of
+ your binary in the snippets.
Before actually running the node, we need to initialize the chain, and most importantly its genesis file. This is done with the `init` subcommand:
-```
-# The argument is the custom username of your node, it should be human-readable.simd init --chain-id my-test-chain
+```bash
+# The argument is the custom username of your node; it should be human-readable.
+simd init --chain-id my-test-chain
```
The command above creates all the configuration files needed for your node to run, as well as a default genesis file, which defines the initial state of the network.
-
- All these configuration files are in `~/.simapp` by default, but you can overwrite the location of this folder by passing the `--home` flag to each commands, or set an `$APPD_HOME` environment variable (where `APPD` is the name of the binary).
-
+
+ All these configuration files are in `~/.simapp` by default, but you can
+ overwrite the location of this folder by passing the `--home` flag to each
+ command, or set an `$APPD_HOME` environment variable (where `APPD` is the
+ name of the binary).
+
The `~/.simapp` folder has the following structure:
-```
-. # ~/.simapp |- data # Contains the databases used by the node. |- config/ |- app.toml # Application-related configuration file. |- config.toml # CometBFT-related configuration file.
|- genesis.json # The genesis file. |- node_key.json # Private key to use for node authentication in the p2p protocol. |- priv_validator_key.json # Private key to use as a validator in the consensus protocol.
+```bash
+. # ~/.simapp
+ |- data # Contains the databases used by the node.
+ |- config/
+ |- app.toml # Application-related configuration file.
+ |- config.toml # CometBFT-related configuration file.
+ |- genesis.json # The genesis file.
+ |- node_key.json # Private key to use for node authentication in the p2p protocol.
+ |- priv_validator_key.json # Private key to use as a validator in the consensus protocol.
```
-## Updating Some Default Settings[​](#updating-some-default-settings "Direct link to Updating Some Default Settings")
+## Updating Some Default Settings
If you want to change any field values in configuration files (for ex: genesis.json) you can use `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) & `sed` commands to do that. A few examples are listed here.
-```
-# to change the chain-idjq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json# to enable the api serversed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml# to change the voting_periodjq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json# to change the inflationjq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json
+```bash expandable
+# to change the chain-id
+jq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json
+
+# to enable the api server
+sed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml
+
+# to change the voting_period
+jq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json
+
+# to change the inflation
+jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json
```
-### Client Interaction[​](#client-interaction "Direct link to Client Interaction")
+### Client Interaction
When instantiating a node, GRPC and REST are defaulted to localhost to avoid unknown exposure of your node to the public. It is recommended not to expose these endpoints until a proxy that can handle load balancing or authentication is set up between your node and the public.
-
- A commonly used tool for this is [nginx](https://nginx.org).
-
+A commonly used tool for this is [nginx](https://nginx.org).
-## Adding Genesis Accounts[​](#adding-genesis-accounts "Direct link to Adding Genesis Accounts")
+## Adding Genesis Accounts
-Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](/v0.50/user/run-node/keyring#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend).
+Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](/docs/sdk/v0.50/user/run-node/keyring#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend). Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file. Doing so will also make sure your chain is aware of this account's existence: -``` +```bash simd genesis add-genesis-account $MY_VALIDATOR_ADDRESS 100000000000stake ``` -Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](/v0.50/user/run-node/keyring#adding-keys-to-the-keyring). Also note that the tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is an 18-digit-precision decimal number, and `denom` is the unique token identifier with its denomination key (e.g. `atom` or `uatom`). Here, we are granting `stake` tokens, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead. +Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](/docs/sdk/v0.50/user/run-node/keyring#adding-keys-to-the-keyring). Also note that the tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is an 18-digit-precision decimal number, and `denom` is the unique token identifier with its denomination key (e.g. `atom` or `uatom`). Here, we are granting `stake` tokens, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead. -Now that your account has some tokens, you need to add a validator to your chain. 
Validators are special full-nodes that participate in the consensus process (implemented in the [underlying consensus engine](/v0.50/learn/intro/sdk-app-architecture#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, you will add your local node (created via the `init` command above) as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`: +Now that your account has some tokens, you need to add a validator to your chain. Validators are special full-nodes that participate in the consensus process (implemented in the [underlying consensus engine](/docs/sdk/v0.50/learn/intro/sdk-app-architecture#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, you will add your local node (created via the `init` command above) as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`: -``` -# Create a gentx.simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test# Add the gentx to the genesis file.simd genesis collect-gentxs +```bash +# Create a gentx. +simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test + +# Add the gentx to the genesis file. 
+simd genesis collect-gentxs
```
A `gentx` does three things:
@@ -78,38 +104,49 @@ A `gentx` does three things:
For more information on `gentx`, use the following command:
-```
+```bash
simd genesis gentx --help
```
-## Configuring the Node Using `app.toml` and `config.toml`[​](#configuring-the-node-using-apptoml-and-configtoml "Direct link to configuring-the-node-using-apptoml-and-configtoml")
+## Configuring the Node Using `app.toml` and `config.toml`
The Cosmos SDK automatically generates two configuration files inside `~/.simapp/config`:
-* `config.toml`: used to configure the CometBFT, learn more on [CometBFT's documentation](https://docs.cometbft.com/v0.37/core/configuration),
-* `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST servers configuration, state sync...
+- `config.toml`: used to configure CometBFT; learn more in [CometBFT's documentation](https://docs.cometbft.com/v0.37/core/configuration),
+- `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST servers configuration, state sync...
Both files are heavily commented; please refer to them directly to tweak your node.
One example config to tweak is the `minimum-gas-prices` field inside `app.toml`, which defines the minimum gas prices the validator node is willing to accept for processing a transaction. Depending on the chain, it might be an empty string or not. If it's empty, make sure to edit the field with some value, for example `10token`, or else the node will halt on startup. For the purpose of this tutorial, let's set the minimum gas price to 0:
+```toml
+ # The minimum gas prices a validator is willing to accept for processing a
+ # transaction. A transaction's fees must meet the minimum of any denomination
+ # specified in this config (e.g. 0.25token1;0.0001token2).
+ minimum-gas-prices = "0stake"
```
-
- # The minimum gas prices a validator is willing to accept for processing a # transaction. A transaction's fees must meet the minimum of any denomination # specified in this config (e.g. 0.25token1;0.0001token2). minimum-gas-prices = "0stake"
-```
-
- When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`.
+
+When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`.
+
+```toml
+[mempool]
+# Setting max-txs to 0 will allow for an unbounded amount of transactions in the mempool.
+# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool.
+# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount.
+#
+# Note, this configuration only applies to SDK built-in app-side mempool
+# implementations.
+max-txs = "-1"
+```
-
- ```
- [mempool]# Setting max-txs to 0 will allow for a unbounded amount of transactions in the mempool.# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool.# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount.## Note, this configuration only applies to SDK built-in app-side mempool# implementations.max-txs = "-1"
- ```
-
+
-## Run a Localnet[​](#run-a-localnet "Direct link to Run a Localnet")
+## Run a Localnet
Now that everything is set up, you can finally start your node:
-```
+```bash
simd start
```
@@ -119,11 +156,14 @@ The previous command allows you to run a single node. This is enough for the next
The naive way would be to run the same commands again in separate terminal windows. This is possible; however, in the Cosmos SDK, we leverage the power of [Docker Compose](https://docs.docker.com/compose/) to run a localnet.
If you need inspiration on how to set up your own localnet with Docker Compose, you can have a look at the Cosmos SDK's [`docker-compose.yml`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/docker-compose.yml).
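For a quick-and-dirty alternative to Docker Compose, each extra node only needs its own home directory and non-conflicting ports. A sketch of the idea follows; the `--p2p.laddr`/`--rpc.laddr` flags mirror CometBFT's listen-address options, `echo` is left in so the commands are only printed, and the exact flags for your binary should be checked with `simd start --help`:

```shell
# Print the start commands for a 3-node local network, with one home dir and
# one port range per node. Drop the `echo` to actually launch the processes.
for i in 0 1 2; do
  home_dir="./localnet/node$i"
  mkdir -p "$home_dir"
  echo simd start --home "$home_dir" \
    --p2p.laddr "tcp://0.0.0.0:$((26656 + i * 10))" \
    --rpc.laddr "tcp://127.0.0.1:$((26657 + i * 10))"
done
```

Each home directory still needs its own `init`/`gentx` setup as described above; this loop only illustrates the port and directory layout.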
If you need inspiration on how to set up your own localnet with Docker Compose, you can have a look at the Cosmos SDK's [`docker-compose.yml`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/docker-compose.yml). -### Standalone App/CometBFT[​](#standalone-appcometbft "Direct link to Standalone App/CometBFT") +### Standalone App/CometBFT -By default, the Cosmos SDK runs CometBFT in-process with the application If you want to run the application and CometBFT in separate processes, start the application with the `--with-comet=false` flag and set `rpc.laddr` in `config.toml` to the CometBFT node's RPC address. +By default, the Cosmos SDK runs CometBFT in-process with the application +If you want to run the application and CometBFT in separate processes, +start the application with the `--with-comet=false` flag +and set `rpc.laddr` in `config.toml` to the CometBFT node's RPC address. -## Logging[​](#logging "Direct link to Logging") +## Logging Logging provides a way to see what is going on with a node. By default the info level is set. This is a global level and all info logs will be outputted to the terminal. If you would like to filter specific logs to the terminal instead of all, then setting `module:log_level` is how this can work. @@ -131,37 +171,48 @@ Example: In config.toml: -``` +```toml log_level: "state:info,p2p:info,consensus:info,x/staking:info,x/ibc:info,*error" ``` -## State Sync[​](#state-sync "Direct link to State Sync") +## State Sync State sync is the act in which a node syncs the latest or close to the latest state of a blockchain. This is useful for users who don't want to sync all the blocks in history. Read more in [CometBFT documentation](https://docs.cometbft.com/v0.37/core/state-sync). State sync works thanks to snapshots. Read how the SDK handles snapshots [here](https://github.com/cosmos/cosmos-sdk/blob/825245d/store/snapshots/README.md). 
-### Local State Sync[​](#local-state-sync "Direct link to Local State Sync")
+### Local State Sync
Local state sync works similarly to normal state sync except that it works off a local snapshot of state instead of one provided via the p2p network. The steps to start local state sync are similar to normal state sync with a few different designs.
-1. As mentioned in [https://docs.cometbft.com/v0.37/core/state-sync](https://docs.cometbft.com/v0.37/core/state-sync), one must set a height and hash in the config.toml along with a few rpc servers (the afromentioned link has instructions on how to do this).
-2. Run ` ` to restore a local snapshot (note: first load it from a file with the *load* command).
+1. As mentioned in [Link](https://docs.cometbft.com/v0.37/core/state-sync), one must set a height and hash in the config.toml along with a few rpc servers (the aforementioned link has instructions on how to do this).
+2. Run ` ` to restore a local snapshot (note: first load it from a file with the _load_ command).
3. Bootstrapping Comet state in order to start the node after the snapshot has been ingested. This can be done with the bootstrap command ` comet bootstrap-state`
-### Snapshots Commands[​](#snapshots-commands "Direct link to Snapshots Commands")
+### Snapshots Commands
-The Cosmos SDK provides commands for managing snapshots. These commands can be added in an app with the following snippet in `cmd//root.go`:
+The Cosmos SDK provides commands for managing snapshots.
+These commands can be added in an app with the following snippet in `cmd//root.go`:
-```
-import ( "github.com/cosmos/cosmos-sdk/client/snapshot")func initRootCmd(/* ... */) { // ... rootCmd.AddCommand( snapshot.Cmd(appCreator), )}
+```go
+import (
+    "github.com/cosmos/cosmos-sdk/client/snapshot"
+)
+
+func initRootCmd(/* ... */) {
+    // ...
+ rootCmd.AddCommand( + snapshot.Cmd(appCreator), + ) +} ``` Then following commands are available at ` snapshots [command]`: -* **list**: list local snapshots -* **load**: Load a snapshot archive file into snapshot store -* **restore**: Restore app state from local snapshot -* **export**: Export app state to snapshot store -* **dump**: Dump the snapshot as portable archive format -* **delete**: Delete a local snapshot +- **list**: list local snapshots +- **load**: Load a snapshot archive file into snapshot store +- **restore**: Restore app state from local snapshot +- **export**: Export app state to snapshot store +- **dump**: Dump the snapshot as portable archive format +- **delete**: Delete a local snapshot diff --git a/docs/sdk/v0.50/user/run-node/run-production.mdx b/docs/sdk/v0.50/user/run-node/run-production.mdx index 730a8c8e..9e5a6946 100644 --- a/docs/sdk/v0.50/user/run-node/run-production.mdx +++ b/docs/sdk/v0.50/user/run-node/run-production.mdx @@ -1,53 +1,56 @@ --- -title: "Running in Production" -description: "Version: v0.50" +title: Running in Production --- - - This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains. - +## Synopsis + +This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains. When operating a node, full node or validator, in production it is important to set your server up securely. - There are many different ways to secure a server and your node, the described steps here is one way. To see another way of setting up a server see the [run in production tutorial](https://tutorials.cosmos.network/hands-on-exercise/5-run-in-prod/1-overview.html). + There are many different ways to secure a server and your node, the described + steps here is one way. 
To see another way of setting up a server see the [run
+ in production
+ tutorial](https://tutorials.cosmos.network/hands-on-exercise/5-run-in-prod/1-overview.html).

- 
- This walkthrough assumes the underlying operating system is Ubuntu.
- 

+This walkthrough assumes the underlying operating system is Ubuntu.

-## Sever Setup[​](#sever-setup "Direct link to Sever Setup")
+## Server Setup

-### User[​](#user "Direct link to User")
+### User

When creating a server most times it is created as user `root`. This user has heightened privileges on the server. When operating a node, it is recommended to not run your node as the root user.

1. Create a new user

-```
+```bash
sudo adduser change_me
```

2. We want to allow this user to perform sudo tasks

-```
+```bash
sudo usermod -aG sudo change_me
```

Now when logging into the server, the non `root` user can be used.

-### Go[​](#go "Direct link to Go")
+### Go

1. Install the [Go](https://go.dev/doc/install) version preconized by the application.

- In the past, validators [have had issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using different versions of Go. It is recommended that the whole validator set uses the version of Go that is preconized by the application.
+ In the past, validators [have had
+ issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using
+ different versions of Go. It is recommended that the whole validator set uses
+ the version of Go that is preconized by the application.

-### Firewall[​](#firewall "Direct link to Firewall")
+### Firewall

-Nodes should not have all ports open to the public, this is a simple way to get DDOS'd. Secondly it is recommended by [CometBFT](/v0.50/user/run-node/github.com/cometbft/cometbft) to never expose ports that are not required to operate a node.
+Nodes should not have all ports open to the public; this is a simple way to get DDOS'd.
Secondly it is recommended by [CometBFT](https://github.com/cometbft/cometbft) to never expose ports that are not required to operate a node. When setting up a firewall there are a few ports that can be open when operating a Cosmos SDK node. There is the CometBFT json-RPC, prometheus, p2p, remote signer and Cosmos SDK GRPC and REST. If the node is being operated as a node that does not offer endpoints to be used for submission or querying then a max of three endpoints are needed. @@ -55,19 +58,20 @@ Most, if not all servers come equipped with [ufw](https://help.ubuntu.com/commun 1. Reset UFW to disallow all incoming connections and allow outgoing -``` -sudo ufw default deny incomingsudo ufw default allow outgoing +```bash +sudo ufw default deny incoming +sudo ufw default allow outgoing ``` 2. Lets make sure that port 22 (ssh) stays open. -``` +```bash sudo ufw allow ssh ``` or -``` +```bash sudo ufw allow 22 ``` @@ -75,81 +79,81 @@ Both of the above commands are the same. 3. Allow Port 26656 (cometbft p2p port). If the node has a modified p2p port then that port must be used here. -``` +```bash sudo ufw allow 26656/tcp ``` 4. Allow port 26660 (cometbft [prometheus](https://prometheus.io)). This acts as the applications monitoring port as well. -``` +```bash sudo ufw allow 26660/tcp ``` 5. IF the node which is being setup would like to expose CometBFTs jsonRPC and Cosmos SDK GRPC and REST then follow this step. (Optional) -##### CometBFT JsonRPC[​](#cometbft-jsonrpc "Direct link to CometBFT JsonRPC") +##### CometBFT JsonRPC -``` +```bash sudo ufw allow 26657/tcp ``` -##### Cosmos SDK GRPC[​](#cosmos-sdk-grpc "Direct link to Cosmos SDK GRPC") +##### Cosmos SDK GRPC -``` +```bash sudo ufw allow 9090/tcp ``` -##### Cosmos SDK REST[​](#cosmos-sdk-rest "Direct link to Cosmos SDK REST") +##### Cosmos SDK REST -``` +```bash sudo ufw allow 1317/tcp ``` 6. 
Lastly, enable ufw -``` +```bash sudo ufw enable ``` -### Signing[​](#signing "Direct link to Signing") +### Signing If the node that is being started is a validator there are multiple ways a validator could sign blocks. -#### File[​](#file "Direct link to File") +#### File File based signing is the simplest and default approach. This approach works by storing the consensus key, generated on initialization, to sign blocks. This approach is only as safe as your server setup as if the server is compromised so is your key. This key is located in the `config/priv_val_key.json` directory generated on initialization. A second file exists that user must be aware of, the file is located in the data directory `data/priv_val_state.json`. This file protects your node from double signing. It keeps track of the consensus keys last sign height, round and latest signature. If the node crashes and needs to be recovered this file must be kept in order to ensure that the consensus key will not be used for signing a block that was previously signed. -#### Remote Signer[​](#remote-signer "Direct link to Remote Signer") +#### Remote Signer A remote signer is a secondary server that is separate from the running node that signs blocks with the consensus key. This means that the consensus key does not live on the node itself. This increases security because your full node which is connected to the remote signer can be swapped without missing blocks. The two most used remote signers are [tmkms](https://github.com/iqlusioninc/tmkms) from [Iqlusion](https://www.iqlusion.io) and [horcrux](https://github.com/strangelove-ventures/horcrux) from [Strangelove](https://strange.love). -##### TMKMS[​](#tmkms "Direct link to TMKMS") +##### TMKMS -###### Dependencies[​](#dependencies "Direct link to Dependencies") +###### Dependencies 1. Update server dependencies and install extras needed. -``` +```sh sudo apt update -y && sudo apt install build-essential curl jq -y ``` 2. 
Install Rust: -``` +```sh curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` 3. Install Libusb: -``` +```sh sudo apt install libusb-1.0-0-dev ``` -###### Setup[​](#setup "Direct link to Setup") +###### Setup There are two ways to install tmkms, from source or `cargo install`. In the examples we will cover downloading or building from source and using softsign. Softsign stands for software signing, but you could use a [yubihsm](https://www.yubico.com/products/hardware-security-module/) as your signing key if you wish. @@ -157,16 +161,23 @@ There are two ways to install tmkms, from source or `cargo install`. In the exam From source: -``` -cd $HOMEgit clone https://github.com/iqlusioninc/tmkms.gitcd $HOME/tmkmscargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./config/secrets/secret_connection_key +```bash +cd $HOME +git clone https://github.com/iqlusioninc/tmkms.git +cd $HOME/tmkms +cargo install tmkms --features=softsign +tmkms init config +tmkms softsign keygen ./config/secrets/secret_connection_key ``` or Cargo install: -``` -cargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./config/secrets/secret_connection_key +```bash +cargo install tmkms --features=softsign +tmkms init config +tmkms softsign keygen ./config/secrets/secret_connection_key ``` @@ -175,13 +186,13 @@ cargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./ 2. Migrate the validator key from the full node to the new tmkms instance. -``` +```bash scp user@123.456.32.123:~/.simd/config/priv_validator_key.json ~/tmkms/config/secrets ``` 3. Import the validator key into tmkms. -``` +```bash tmkms softsign import $HOME/tmkms/config/secrets/priv_validator_key.json $HOME/tmkms/config/secrets/priv_validator_key ``` @@ -189,40 +200,73 @@ At this point, it is necessary to delete the `priv_validator_key.json` from the 4. Modifiy the `tmkms.toml`. 
-```
+```bash
vim $HOME/tmkms/config/tmkms.toml
```

-This example shows a configuration that could be used for soft signing. The example has an IP of `123.456.12.345` with a port of `26659` a chain\_id of `test-chain-waSDSe`. These are items that most be modified for the usecase of tmkms and the network.
+This example shows a configuration that could be used for soft signing. The example has an IP of `123.456.12.345`, a port of `26659`, and a chain_id of `test-chain-waSDSe`. These are the items that must be modified for the use case of tmkms and the network.

-```
-# CometBFT KMS configuration file## Chain Configuration[[chain]]id = "osmosis-1"key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" }state_file = "/root/tmkms/config/state/priv_validator_state.json"## Signing Provider Configuration### Software-based Signer Configuration[[providers.softsign]]chain_ids = ["test-chain-waSDSe"]key_type = "consensus"path = "/root/tmkms/config/secrets/priv_validator_key"## Validator Configuration[[validator]]chain_id = "test-chain-waSDSe"addr = "tcp://123.456.12.345:26659"secret_key = "/root/tmkms/config/secrets/secret_connection_key"protocol_version = "v0.34"reconnect = true
+```toml expandable
+# CometBFT KMS configuration file
+
+## Chain Configuration
+
+[[chain]]
+id = "osmosis-1"
+key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" }
+state_file = "/root/tmkms/config/state/priv_validator_state.json"
+
+## Signing Provider Configuration
+
+### Software-based Signer Configuration
+
+[[providers.softsign]]
+chain_ids = ["test-chain-waSDSe"]
+key_type = "consensus"
+path = "/root/tmkms/config/secrets/priv_validator_key"
+
+## Validator Configuration
+
+[[validator]]
+chain_id = "test-chain-waSDSe"
+addr = "tcp://123.456.12.345:26659"
+secret_key = "/root/tmkms/config/secrets/secret_connection_key"
+protocol_version = "v0.34"
+reconnect = true
```

5.
Set the address of the tmkms instance.

-```
-vim $HOME/.simd/config/config.tomlpriv_validator_laddr = "tcp://0.0.0.0:26659"
+```bash
+vim $HOME/.simd/config/config.toml
+
+priv_validator_laddr = "tcp://0.0.0.0:26659"
```

- 
- The above address it set to `0.0.0.0` but it is recommended to set the tmkms server to secure the startup
- 

+ 
+ The above address is set to `0.0.0.0`, but it is recommended to restrict it
+ to the tmkms server's address to secure the startup
+ 

- 
- It is recommended to comment or delete the lines that specify the path of the validator key and validator:

- ```
- # Path to the JSON file containing the private key to use as a validator in the consensus protocol# priv_validator_key_file = "config/priv_validator_key.json"# Path to the JSON file containing the last sign state of a validator# priv_validator_state_file = "data/priv_validator_state.json"
- ```
- 

+ 
+It is recommended to comment or delete the lines that specify the path of the validator key and validator:

+```toml
+# Path to the JSON file containing the private key to use as a validator in the consensus protocol
+# priv_validator_key_file = "config/priv_validator_key.json"
+
+# Path to the JSON file containing the last sign state of a validator
+# priv_validator_state_file = "data/priv_validator_state.json"
+```
+
+ 

6. Start the two processes.

-```
+```bash
tmkms start -c $HOME/tmkms/config/tmkms.toml
```

-```
+```bash
simd start
```

diff --git a/docs/sdk/v0.50/user/run-node/run-testnet.mdx b/docs/sdk/v0.50/user/run-node/run-testnet.mdx
index 84e10b2c..f2a15bd6 100644
--- a/docs/sdk/v0.50/user/run-node/run-testnet.mdx
+++ b/docs/sdk/v0.50/user/run-node/run-testnet.mdx
@@ -1,15 +1,14 @@
---
-title: "Running a Testnet"
-description: "Version: v0.50"
+title: Running a Testnet
---

- 
- The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes.
- +## Synopsis -In addition to the commands for [running a node](/v0.50/user/run-node/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. +The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes. -## Initialize Files[​](#initialize-files "Direct link to Initialize Files") +In addition to the commands for [running a node](/docs/sdk/v0.50/user/run-node/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. + +## Initialize Files First, let's take a look at the `init-files` subcommand. @@ -19,27 +18,27 @@ The `init-files` subcommand initializes the necessary files to run a test networ In order to initialize the files for a test network, run the following command: -``` +```bash simd testnet init-files ``` You should see the following output in your terminal: -``` +```bash Successfully initialized 4 node directories ``` The default output directory is a relative `.testnets` directory. Let's take a look at the files created within the `.testnets` directory. -### gentxs[​](#gentxs "Direct link to gentxs") +### gentxs The `gentxs` directory includes a genesis transaction for each validator node. Each file includes a JSON encoded genesis transaction used to register a validator node at the time of genesis. The genesis transactions are added to the `genesis.json` file within each node directory during the initilization process. -### nodes[​](#nodes "Direct link to nodes") +### nodes A node directory is created for each validator node. Within each node directory is a `simd` directory. The `simd` directory is the home directory for each node, which includes the configuration and data files for that node (i.e. 
the same files included in the default `~/.simapp` directory when running a single node). -## Start Testnet[​](#start-testnet "Direct link to Start Testnet") +## Start Testnet Now, let's take a look at the `start` subcommand. @@ -47,38 +46,52 @@ The `start` subcommand both initializes and starts an in-process test network. T You can start the local test network by running the following command: -``` +```bash simd testnet start ``` You should see something similar to the following: -``` -acquiring test network lockpreparing test network with chain-id "chain-mtoD9v"+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++++ DO NOT USE IN PRODUCTION ++++ ++++ sustain know debris minute gate hybrid stereo custom ++++ divorce cross spoon machine latin vibrant term oblige ++++ moment beauty laundry repeat grab game bronze truly +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++starting test network...started test networkpress the Enter Key to terminate +```bash expandable +acquiring test network lock +preparing test network with chain-id "chain-mtoD9v" + ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++ +++ DO NOT USE IN PRODUCTION ++ +++ ++ +++ sustain know debris minute gate hybrid stereo custom ++ +++ divorce cross spoon machine latin vibrant term oblige ++ +++ moment beauty laundry repeat grab game bronze truly ++ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +starting test network... +started test network +press the Enter Key to terminate ``` The first validator node is now running in-process, which means the test network will terminate once you either close the terminal window or you press the Enter key. In the output, the mnemonic phrase for the first validator node is provided for testing purposes. 
The validator node is using the same default addresses being used when initializing and starting a single node (no need to provide a `--node` flag). Check the status of the first validator node: -``` +```shell simd status ``` Import the key from the provided mnemonic: -``` +```shell simd keys add test --recover --keyring-backend test ``` Check the balance of the account address: -``` +```shell simd q bank balances [address] ``` Use this test account to manually test against the test network. -## Testnet Options[​](#testnet-options "Direct link to Testnet Options") +## Testnet Options You can customize the configuration of the test network with flags. In order to see all flag options, append the `--help` flag to each command. diff --git a/docs/sdk/v0.50/user/run-node/txs.mdx b/docs/sdk/v0.50/user/run-node/txs.mdx index faf3c06a..b7e2b7c0 100644 --- a/docs/sdk/v0.50/user/run-node/txs.mdx +++ b/docs/sdk/v0.50/user/run-node/txs.mdx @@ -1,45 +1,44 @@ --- title: "Generating, Signing and Broadcasting Transactions" -description: "Version: v0.50" --- - - This document describes how to generate an (unsigned) transaction, signing it (with one or multiple keys), and broadcasting it to the network. - +## Synopsis -## Using the CLI[​](#using-the-cli "Direct link to Using the CLI") +This document describes how to generate an (unsigned) transaction, signing it (with one or multiple keys), and broadcasting it to the network. -The easiest way to send transactions is using the CLI, as we have seen in the previous page when [interacting with a node](/v0.50/user/run-node/interact-node#using-the-cli). For example, running the following command +## Using the CLI -``` +The easiest way to send transactions is using the CLI, as we have seen in the previous page when [interacting with a node](/docs/sdk/v0.50/user/run-node/interact-node#using-the-cli). 
For example, running the following command + +```bash simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --keyring-backend test ``` will run the following steps: -* generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console. -* ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account. -* fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because we have [set up the CLI's keyring](/v0.50/user/run-node/keyring) in a previous step. -* sign the generated transaction with the keyring's account. -* broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint. +- generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console. +- ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account. +- fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because we have [set up the CLI's keyring](/docs/sdk/v0.50/user/run-node/keyring) in a previous step. +- sign the generated transaction with the keyring's account. +- broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint. The CLI bundles all the necessary steps into a simple-to-use user experience. However, it's possible to run all the steps individually too. -### Generating a Transaction[​](#generating-a-transaction "Direct link to Generating a Transaction") +### Generating a Transaction Generating a transaction can simply be done by appending the `--generate-only` flag on any `tx` command, e.g.: -``` +```bash simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --generate-only ``` This will output the unsigned transaction as JSON in the console. 
We can also save the unsigned transaction to a file (to be passed around between signers more easily) by appending `> unsigned_tx.json` to the above command. -### Signing a Transaction[​](#signing-a-transaction "Direct link to Signing a Transaction") +### Signing a Transaction Signing a transaction using the CLI requires the unsigned transaction to be saved in a file. Let's assume the unsigned transaction is in a file called `unsigned_tx.json` in the current directory (see previous paragraph on how to do that). Then, simply run the following command: -``` +```bash simd tx sign unsigned_tx.json --chain-id my-test-chain --keyring-backend test --from $MY_VALIDATOR_ADDRESS ``` @@ -47,153 +46,457 @@ This command will decode the unsigned transaction and sign it with `SIGN_MODE_DI Some useful flags to consider in the `tx sign` command: -* `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`, -* `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for offline signing, i.e. signing in a secure environment which doesn't have access to the internet. +- `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`, +- `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for offline signing, i.e. signing in a secure environment which doesn't have access to the internet. 
-#### Signing with Multiple Signers[​](#signing-with-multiple-signers "Direct link to Signing with Multiple Signers") +#### Signing with Multiple Signers - Please note that signing a transaction with multiple signers or with a multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not yet possible. You may follow [this Github issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info. + Please note that signing a transaction with multiple signers or with a + multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not + yet possible. You may follow [this Github + issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info. Signing with multiple signers is done with the `tx multisign` command. This command assumes that all signers use `SIGN_MODE_LEGACY_AMINO_JSON`. The flow is similar to the `tx sign` command flow, but instead of signing an unsigned transaction file, each signer signs the file signed by previous signer(s). The `tx multisign` command will append signatures to the existing transactions. It is important that signers sign the transaction **in the same order** as given by the transaction, which is retrievable using the `GetSigners()` method. For example, starting with the `unsigned_tx.json`, and assuming the transaction has 4 signers, we would run: -``` -# Let signer1 sign the unsigned tx.simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json# Now signer1 will send the partial_tx_1.json to the signer2.# Signer2 appends their signature:simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json# Signer2 sends the partial_tx_2.json file to signer3, and signer3 can append his signature:simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json +```bash +# Let signer1 sign the unsigned tx. 
+simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json +# Now signer1 will send the partial_tx_1.json to the signer2. +# Signer2 appends their signature: +simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json +# Signer2 sends the partial_tx_2.json file to signer3, and signer3 can append his signature: +simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json ``` -### Broadcasting a Transaction[​](#broadcasting-a-transaction "Direct link to Broadcasting a Transaction") +### Broadcasting a Transaction Broadcasting a transaction is done using the following command: -``` +```bash simd tx broadcast tx_signed.json ``` You may optionally pass the `--broadcast-mode` flag to specify which response to receive from the node: -* `sync`: the CLI waits for a CheckTx execution response only. -* `async`: the CLI returns immediately (transaction might fail). +- `sync`: the CLI waits for a CheckTx execution response only. +- `async`: the CLI returns immediately (transaction might fail). -### Encoding a Transaction[​](#encoding-a-transaction "Direct link to Encoding a Transaction") +### Encoding a Transaction In order to broadcast a transaction using the gRPC or REST endpoints, the transaction will need to be encoded first. This can be done using the CLI. Encoding a transaction is done using the following command: -``` +```bash simd tx encode tx_signed.json ``` This will read the transaction from the file, serialize it using Protobuf, and output the transaction bytes as base64 in the console. -### Decoding a Transaction[​](#decoding-a-transaction "Direct link to Decoding a Transaction") +### Decoding a Transaction The CLI can also be used to decode transaction bytes. 
Decoding a transaction is done using the following command: -``` +```bash simd tx decode [protobuf-byte-string] ``` This will decode the transaction bytes and output the transaction as JSON in the console. You can also save the transaction to a file by appending `> tx.json` to the above command. -## Programmatically with Go[​](#programmatically-with-go "Direct link to Programmatically with Go") +## Programmatically with Go It is possible to manipulate transactions programmatically via Go using the Cosmos SDK's `TxBuilder` interface. -### Generating a Transaction[​](#generating-a-transaction-1 "Direct link to Generating a Transaction") +### Generating a Transaction Before generating a transaction, a new instance of a `TxBuilder` needs to be created. Since the Cosmos SDK supports both Amino and Protobuf transactions, the first step would be to decide which encoding scheme to use. All the subsequent steps remain unchanged, whether you're using Amino or Protobuf, as `TxBuilder` abstracts the encoding mechanisms. In the following snippet, we will use Protobuf. -``` -import ( "github.com/cosmos/cosmos-sdk/simapp")func sendTx() error { // Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function. app := simapp.NewSimApp(...) // Create a new TxBuilder. txBuilder := app.TxConfig().NewTxBuilder() // --snip--} +```go expandable +import ( + + "github.com/cosmos/cosmos-sdk/simapp" +) + +func sendTx() + +error { + / Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function. + app := simapp.NewSimApp(...) + + / Create a new TxBuilder. + txBuilder := app.TxConfig().NewTxBuilder() + + / --snip-- +} ``` We can also set up some keys and addresses that will send and receive the transactions. Here, for the purpose of the tutorial, we will be using some dummy data to create keys. 
-``` -import ( "github.com/cosmos/cosmos-sdk/testutil/testdata")priv1, _, addr1 := testdata.KeyTestPubAddr()priv2, _, addr2 := testdata.KeyTestPubAddr()priv3, _, addr3 := testdata.KeyTestPubAddr() +```go +import ( + + "github.com/cosmos/cosmos-sdk/testutil/testdata" +) + +priv1, _, addr1 := testdata.KeyTestPubAddr() + +priv2, _, addr2 := testdata.KeyTestPubAddr() + +priv3, _, addr3 := testdata.KeyTestPubAddr() ``` Populating the `TxBuilder` can be done via its methods: -client/tx\_config.go +```go expandable +package client -``` -loading... -``` +import ( -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/client/tx_config.go#L33-L50) + txsigning "cosmossdk.io/x/tx/signing" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) -``` -import ( banktypes "github.com/cosmos/cosmos-sdk/x/bank/types")func sendTx() error { // --snip-- // Define two x/bank MsgSend messages: // - from addr1 to addr3, // - from addr2 to addr3. // This means that the transactions needs two signers: addr1 and addr2. msg1 := banktypes.NewMsgSend(addr1, addr3, types.NewCoins(types.NewInt64Coin("atom", 12))) msg2 := banktypes.NewMsgSend(addr2, addr3, types.NewCoins(types.NewInt64Coin("atom", 34))) err := txBuilder.SetMsgs(msg1, msg2) if err != nil { return err } txBuilder.SetGasLimit(...) txBuilder.SetFeeAmount(...) txBuilder.SetMemo(...) 
txBuilder.SetTimeoutHeight(...)} +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. + TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() *txsigning.HandlerMap + SigningContext() *txsigning.Context +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTip(tip *tx.Tip) + +SetTimeoutHeight(height uint64) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} + + / ExtendedTxBuilder extends the TxBuilder interface, + / which is used to set extension options to be included in a transaction. + ExtendedTxBuilder interface { + SetExtensionOptions(extOpts ...*codectypes.Any) +} +) ``` -At this point, `TxBuilder`'s underlying transaction is ready to be signed. 
+```go expandable +import ( + + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +func sendTx() -### Signing a Transaction[​](#signing-a-transaction-1 "Direct link to Signing a Transaction") +error { + / --snip-- -We set encoding config to use Protobuf, which will use `SIGN_MODE_DIRECT` by default. As per [ADR-020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md), each signer needs to sign the `SignerInfo`s of all other signers. This means that we need to perform two steps sequentially: + / Define two x/bank MsgSend messages: + / - from addr1 to addr3, + / - from addr2 to addr3. + / This means that the transactions needs two signers: addr1 and addr2. + msg1 := banktypes.NewMsgSend(addr1, addr3, types.NewCoins(types.NewInt64Coin("atom", 12))) -* for each signer, populate the signer's `SignerInfo` inside `TxBuilder`, -* once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed). +msg2 := banktypes.NewMsgSend(addr2, addr3, types.NewCoins(types.NewInt64Coin("atom", 34))) + err := txBuilder.SetMsgs(msg1, msg2) + if err != nil { + return err +} -In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`. The current API requires us to first perform a round of `SetSignatures()` *with empty signatures*, only to populate `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload. +txBuilder.SetGasLimit(...) +txBuilder.SetFeeAmount(...) + +txBuilder.SetMemo(...) + +txBuilder.SetTimeoutHeight(...) 
+} ``` -import ( cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" "github.com/cosmos/cosmos-sdk/types/tx/signing" xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing")func sendTx() error { // --snip-- privs := []cryptotypes.PrivKey{priv1, priv2} accNums:= []uint64{..., ...} // The accounts' account numbers accSeqs:= []uint64{..., ...} // The accounts' sequence numbers // First round: we gather all the signer infos. We use the "set empty // signature" hack to do that. var sigsV2 []signing.SignatureV2 for i, priv := range privs { sigV2 := signing.SignatureV2{ PubKey: priv.PubKey(), Data: &signing.SingleSignatureData{ SignMode: encCfg.TxConfig.SignModeHandler().DefaultMode(), Signature: nil, }, Sequence: accSeqs[i], } sigsV2 = append(sigsV2, sigV2) } err := txBuilder.SetSignatures(sigsV2...) if err != nil { return err } // Second round: all signer infos are set, so each signer can sign. sigsV2 = []signing.SignatureV2{} for i, priv := range privs { signerData := xauthsigning.SignerData{ ChainID: chainID, AccountNumber: accNums[i], Sequence: accSeqs[i], } sigV2, err := tx.SignWithPrivKey( encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData, txBuilder, priv, encCfg.TxConfig, accSeqs[i]) if err != nil { return nil, err } sigsV2 = append(sigsV2, sigV2) } err = txBuilder.SetSignatures(sigsV2...) if err != nil { return err }} + +At this point, `TxBuilder`'s underlying transaction is ready to be signed. + +### Signing a Transaction + +We set encoding config to use Protobuf, which will use `SIGN_MODE_DIRECT` by default. As per [ADR-020](docs/sdk/next/documentation/legacy/adr-comprehensive), each signer needs to sign the `SignerInfo`s of all other signers. This means that we need to perform two steps sequentially: + +- for each signer, populate the signer's `SignerInfo` inside `TxBuilder`, +- once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed). 
+ +In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`. The current API requires us to first perform a round of `SetSignatures()` _with empty signatures_, only to populate `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload. + +```go expandable +import ( + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +func sendTx() + +error { + / --snip-- + privs := []cryptotypes.PrivKey{ + priv1, priv2 +} + accNums:= []uint64{..., ... +} / The accounts' account numbers + accSeqs:= []uint64{..., ... +} / The accounts' sequence numbers + + / First round: we gather all the signer infos. We use the "set empty + / signature" hack to do that. + var sigsV2 []signing.SignatureV2 + for i, priv := range privs { + sigV2 := signing.SignatureV2{ + PubKey: priv.PubKey(), + Data: &signing.SingleSignatureData{ + SignMode: encCfg.TxConfig.SignModeHandler().DefaultMode(), + Signature: nil, +}, + Sequence: accSeqs[i], +} + +sigsV2 = append(sigsV2, sigV2) +} + err := txBuilder.SetSignatures(sigsV2...) + if err != nil { + return err +} + + / Second round: all signer infos are set, so each signer can sign. + sigsV2 = []signing.SignatureV2{ +} + for i, priv := range privs { + signerData := xauthsigning.SignerData{ + ChainID: chainID, + AccountNumber: accNums[i], + Sequence: accSeqs[i], +} + +sigV2, err := tx.SignWithPrivKey( + encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData, + txBuilder, priv, encCfg.TxConfig, accSeqs[i]) + if err != nil { + return nil, err +} + +sigsV2 = append(sigsV2, sigV2) +} + +err = txBuilder.SetSignatures(sigsV2...) + if err != nil { + return err +} +} ``` The `TxBuilder` is now correctly populated. 
To print it, you can use the `TxConfig` interface from the initial encoding config `encCfg`: -``` -func sendTx() error { // --snip-- // Generated Protobuf-encoded bytes. txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx()) if err != nil { return err } // Generate a JSON string. txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx()) if err != nil { return err } txJSON := string(txJSONBytes)} -``` +```go expandable +func sendTx() -### Broadcasting a Transaction[​](#broadcasting-a-transaction-1 "Direct link to Broadcasting a Transaction") +error { + / --snip-- -The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also posible. An overview of the differences between these methods is exposed [here](/v0.50/learn/advanced/grpc_rest). For this tutorial, we will only describe the gRPC method. + / Generated Protobuf-encoded bytes. + txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx()) + if err != nil { + return err +} + / Generate a JSON string. + txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx()) + if err != nil { + return err +} + txJSON := string(txJSONBytes) +} ``` -import ( "context" "fmt" "google.golang.org/grpc" "github.com/cosmos/cosmos-sdk/types/tx")func sendTx(ctx context.Context) error { // --snip-- // Create a connection to the gRPC server. grpcConn := grpc.Dial( "127.0.0.1:9090", // Or your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. ) defer grpcConn.Close() // Broadcast the tx via gRPC. We create a new client for the Protobuf Tx // service. txClient := tx.NewServiceClient(grpcConn) // We then call the BroadcastTx method on this client. grpcRes, err := txClient.BroadcastTx( ctx, &tx.BroadcastTxRequest{ Mode: tx.BroadcastMode_BROADCAST_MODE_SYNC, TxBytes: txBytes, // Proto-binary of the signed transaction, see previous step. 
}, ) if err != nil { return err } fmt.Println(grpcRes.TxResponse.Code) // Should be `0` if the tx is successful return nil} + +### Broadcasting a Transaction + +The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also possible. An overview of the differences between these methods is given [here](/docs/sdk/v0.50/learn/advanced/grpc_rest). For this tutorial, we will only describe the gRPC method. + +```go expandable
+import (
+
+	"context"
+	"fmt"
+	"google.golang.org/grpc"
+	"github.com/cosmos/cosmos-sdk/types/tx"
+)
+
+func sendTx(ctx context.Context)
+
+error {
+	/ --snip--
+
+	/ Create a connection to the gRPC server.
+	grpcConn := grpc.Dial(
+		"127.0.0.1:9090", / Or your gRPC server address.
+		grpc.WithInsecure(), / The Cosmos SDK doesn't support any transport security mechanism.
+	)
+
+defer grpcConn.Close()
+
+	/ Broadcast the tx via gRPC. We create a new client for the Protobuf Tx
+	/ service.
+	txClient := tx.NewServiceClient(grpcConn)
+	/ We then call the BroadcastTx method on this client.
+	grpcRes, err := txClient.BroadcastTx(
+		ctx,
+		&tx.BroadcastTxRequest{
+			Mode:    tx.BroadcastMode_BROADCAST_MODE_SYNC,
+			TxBytes: txBytes, / Proto-binary of the signed transaction, see previous step.
+},
+	)
+	if err != nil {
+		return err
+}
+
+fmt.Println(grpcRes.TxResponse.Code) / Should be `0` if the tx is successful
+
+	return nil
+} ``` -#### Simulating a Transaction[​](#simulating-a-transaction "Direct link to Simulating a Transaction") +#### Simulating a Transaction Before broadcasting a transaction, we may sometimes want to dry-run the transaction to estimate some information about it without actually committing it.
This is called simulating a transaction, and can be done as follows: -``` -import ( "context" "fmt" "testing" "github.com/cosmos/cosmos-sdk/client" "github.com/cosmos/cosmos-sdk/types/tx" authtx "github.com/cosmos/cosmos-sdk/x/auth/tx")func simulateTx() error { // --snip-- // Simulate the tx via gRPC. We create a new client for the Protobuf Tx // service. txClient := tx.NewServiceClient(grpcConn) txBytes := /* Fill in with your signed transaction bytes. */ // We then call the Simulate method on this client. grpcRes, err := txClient.Simulate( context.Background(), &tx.SimulateRequest{ TxBytes: txBytes, }, ) if err != nil { return err } fmt.Println(grpcRes.GasInfo) // Prints estimated gas used. return nil} +```go expandable +import ( + + "context" + "fmt" + "testing" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/types/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" +) + +func simulateTx() + +error { + / --snip-- + + / Simulate the tx via gRPC. We create a new client for the Protobuf Tx + / service. + txClient := tx.NewServiceClient(grpcConn) + txBytes := /* Fill in with your signed transaction bytes. */ + + / We then call the Simulate method on this client. + grpcRes, err := txClient.Simulate( + context.Background(), + &tx.SimulateRequest{ + TxBytes: txBytes, +}, + ) + if err != nil { + return err +} + +fmt.Println(grpcRes.GasInfo) / Prints estimated gas used. + + return nil +} ``` -## Using gRPC[​](#using-grpc "Direct link to Using gRPC") +## Using gRPC It is not possible to generate or sign a transaction using gRPC, only to broadcast one. In order to broadcast a transaction using gRPC, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. 
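Whether the signed transaction is submitted through `grpcurl` or the REST endpoint, the raw protobuf bytes must be supplied base64-encoded inside the JSON request body (protobuf `bytes` fields map to base64 strings in JSON). A minimal sketch of building such a body — the helper name `broadcastBody` and the sample bytes are illustrative, not an SDK API:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// broadcastBody builds the JSON body expected by the Tx service's
// BroadcastTx endpoint: the protobuf bytes field is base64-encoded.
func broadcastBody(txBytes []byte, mode string) (string, error) {
	body := map[string]string{
		"tx_bytes": base64.StdEncoding.EncodeToString(txBytes),
		"mode":     mode,
	}
	b, err := json.Marshal(body)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// Placeholder bytes; in practice they come from TxConfig.TxEncoder().
	body, _ := broadcastBody([]byte{0x0a, 0x01}, "BROADCAST_MODE_SYNC")
	fmt.Println(body)
}
```

The resulting string can be passed directly as the `-d` payload in the `grpcurl` and `curl` examples that follow.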
-### Broadcasting a Transaction[​](#broadcasting-a-transaction-2 "Direct link to Broadcasting a Transaction") +### Broadcasting a Transaction Broadcasting a transaction using the gRPC endpoint can be done by sending a `BroadcastTx` request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: -``` -grpcurl -plaintext \ -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ localhost:9090 \ cosmos.tx.v1beta1.Service/BroadcastTx +```bash +grpcurl -plaintext \ + -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/BroadcastTx ``` -## Using REST[​](#using-rest "Direct link to Using REST") +## Using REST It is not possible to generate or sign a transaction using REST, only to broadcast one. In order to broadcast a transaction using REST, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. -### Broadcasting a Transaction[​](#broadcasting-a-transaction-3 "Direct link to Broadcasting a Transaction") +### Broadcasting a Transaction Broadcasting a transaction using the REST endpoint (served by `gRPC-gateway`) can be done by sending a POST request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: -``` -curl -X POST \ -H "Content-Type: application/json" \ -d'{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ localhost:1317/cosmos/tx/v1beta1/txs +```bash +curl -X POST \ + -H "Content-Type: application/json" \ + -d'{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ + localhost:1317/cosmos/tx/v1beta1/txs ``` -## Using CosmJS (JavaScript & TypeScript)[​](#using-cosmjs-javascript--typescript "Direct link to Using CosmJS (JavaScript & TypeScript)") +## Using CosmJS (JavaScript & TypeScript) -CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. 
Please see [https://cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs) for more information. As of January 2021, CosmJS documentation is still work in progress. +CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. Please see [cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs) for more information. As of January 2021, CosmJS documentation is still a work in progress. diff --git a/docs/sdk/v0.50/user/user.mdx b/docs/sdk/v0.50/user/user.mdx new file mode 100644 index 00000000..db26dcf5 --- /dev/null +++ b/docs/sdk/v0.50/user/user.mdx @@ -0,0 +1,13 @@ +--- +title: User Guides +description: >- + This section is designed for developers who are using the Cosmos SDK to build + applications. It provides essential guides and references to effectively use + the SDK's features. +--- + +This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features. + +* [Setting up keys](/docs/sdk/v0.50/user/run-node/keyring) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application. +* [Running a node](/docs/sdk/v0.50/user/run-node/run-node) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps. +* [CLI](/docs/sdk/v0.50/user/run-node/interact-node) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively.
diff --git a/docs/sdk/v0.53/api-reference/client-tools/README.mdx b/docs/sdk/v0.53/api-reference/client-tools/README.mdx new file mode 100644 index 00000000..4d5eed95 --- /dev/null +++ b/docs/sdk/v0.53/api-reference/client-tools/README.mdx @@ -0,0 +1,19 @@ +--- +title: Tools +description: >- + This section provides documentation on various tooling maintained by the SDK + team. This includes tools for development, operating a node, and ease of use + of a Cosmos SDK chain. +--- + +This section provides documentation on various tooling maintained by the SDK team. +This includes tools for development, operating a node, and ease of use of a Cosmos SDK chain. + +## CLI Tools + +* [Cosmovisor](/docs/sdk/v0.53/documentation/operations/cosmovisor) +* [Confix](/docs/sdk/v0.53/documentation/operations/confix) + +## Other Tools + +* [Protocol Buffers](/docs/sdk/v0.53/documentation/protocol-development/protobuf) diff --git a/docs/sdk/v0.53/api-reference/client-tools/autocli.mdx b/docs/sdk/v0.53/api-reference/client-tools/autocli.mdx new file mode 100644 index 00000000..768c1cca --- /dev/null +++ b/docs/sdk/v0.53/api-reference/client-tools/autocli.mdx @@ -0,0 +1,731 @@ +--- +title: AutoCLI +--- + +## Synopsis + +This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. + + +**Pre-requisite Readings** + +- [CLI](https://docs.cosmos.network/main/core/cli) + + + +The `autocli` (also known as `client/v2`) package is a [Go library](https://pkg.go.dev/cosmossdk.io/client/v2/autocli) for generating CLI (command line interface) interfaces for Cosmos SDK-based applications. It provides a simple way to add CLI commands to your application by generating them automatically based on your gRPC service definitions. Autocli generates CLI commands and flags directly from your protobuf messages, including options, input parameters, and output parameters. 
This means that you can easily add a CLI interface to your application without having to manually create and manage commands. + +## Overview + +`autocli` generates CLI commands and flags for each method defined in your gRPC service. By default, it generates commands for each gRPC service. The commands are named based on the name of the service method. + +For example, given the following protobuf definition for a service: + +```protobuf
+service MyService {
+  rpc MyMethod(MyRequest) returns (MyResponse) {}
+}
+``` + +Here, `autocli` would generate a command named `my-method` for the `MyMethod` method. The command will have flags for each field in the `MyRequest` message. + +It is possible to customize the generation of transactions and queries by defining options for each service. + +## Application Wiring + +Here are the steps to use AutoCLI: + +1. Ensure your app's modules implement the `appmodule.AppModule` interface. +2. (optional) Configure how `autocli` command generation behaves by implementing the `func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions` method on the module. +3. Use the `autocli.AppOptions` struct to specify the modules you defined. If you are using `depinject`, it can automatically create an instance of `autocli.AppOptions` based on your app's configuration. +4. Use the `EnhanceRootCommand()` method provided by `autocli` to add the CLI commands for the specified modules to your root command. + + + AutoCLI is additive only, meaning *enhancing* the root command will only add + subcommands that are not already registered. This means that you can use + AutoCLI alongside other custom commands within your app.
+
+
+Here's an example of how to use `autocli` in your app:
+
+```go expandable
+/ Define your app's modules
+	testModules := map[string]appmodule.AppModule{
+		"testModule": &TestModule{
+},
+}
+
+/ Define the autocli AppOptions
+	autoCliOpts := autocli.AppOptions{
+		Modules: testModules,
+}
+
+/ Create the root command
+	rootCmd := &cobra.Command{
+		Use: "app",
+}
+	if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil {
+		return err
+}
+
+/ Run the root command
+	if err := rootCmd.Execute(); err != nil {
+		return err
+}
+```
+
+### Keyring
+
+`autocli` uses a keyring for resolving key names and signing transactions.
+
+
+AutoCLI provides a better UX than a normal CLI, as it allows resolving key names directly from the keyring in all transactions and commands.
+
+```sh
+ q bank balances alice
+ tx bank send alice bob 1000denom
+```
+
+
+
+The keyring used for resolving names and signing transactions is provided via the `client.Context`.
+The keyring is then converted to the `client/v2/autocli/keyring` interface.
+If no keyring is provided, the `autocli`-generated command will not be able to sign transactions, but will still be able to query the chain.
+
+
+The Cosmos SDK keyring and Hubl keyring both implement the `client/v2/autocli/keyring` interface, thanks to the following wrapper:
+
+```go
+keyring.NewAutoCLIKeyring(kb)
+```
+
+
+
+## Signing
+
+`autocli` supports signing transactions with the keyring.
+The [`cosmos.msg.v1.signer` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) defines the signer field of the message.
+This field is automatically filled when using the `--from` flag or defining the signer as a positional argument.
+
+AutoCLI currently supports only one signer per transaction.
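For reference, the `cosmos.msg.v1.signer` annotation mentioned above is declared as a message option in a module's protobuf definitions. The following fragment is abridged from `x/bank`'s `MsgSend` (field options omitted for brevity):

```protobuf
message MsgSend {
  // AutoCLI resolves the transaction signer from this field,
  // filling it from --from or from a positional argument.
  option (cosmos.msg.v1.signer) = "from_address";

  string from_address = 1;
  string to_address = 2;
  repeated cosmos.base.v1beta1.Coin amount = 3;
}
```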
+
+## Module wiring & Customization
+
+The `AutoCLIOptions()` method on your module allows you to specify custom commands, sub-commands, or flags for each service, as if it were a `cobra.Command` instance, within the `RpcCommandOptions` struct. Defining such options will customize the behavior of the `autocli` command generation, which by default generates a command for each method in your gRPC service.
+
+```go
+*autocliv1.RpcCommandOptions{
+  RpcMethod: "Params", / The name of the gRPC method
+  Use: "params", / Command usage that is displayed in the help
+  Short: "Query the parameters of the governance process", / Short description of the command
+  Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit)
+
+to filter results.", / Long description of the command
+  PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+    {
+      ProtoField: "params_type",
+      Optional: true
+}, / Transform a flag into a positional argument
+},
+}
+```
+
+
+  AutoCLI can create a gov proposal of any tx by simply setting the
+  `GovProposal` field to `true` in the `autocli.RpcCommandOptions` struct. Users
+  can, however, use the `--no-proposal` flag to disable the proposal creation
+  (which is useful if the authority isn't the gov module on a chain).
+
+
+### Specifying Subcommands
+
+By default, `autocli` generates a command for each method in your gRPC service. However, you can specify subcommands to group related commands together. To specify subcommands, use the `autocliv1.ServiceCommandDescriptor` struct.
+
+This example shows how to use the `autocliv1.ServiceCommandDescriptor` struct to group related commands together and specify subcommands in your gRPC service by defining an instance of `autocliv1.ModuleOptions` in your `autocli.go`.
+ +```go expandable +package gov + +import ( + + "fmt" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + govv1 "cosmossdk.io/api/cosmos/gov/v1" + govv1beta1 "cosmossdk.io/api/cosmos/gov/v1beta1" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "Params", + Use: "params", + Short: "Query the parameters of the governance process", + Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) + +to filter results.", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "params_type", + Optional: true +}, +}, +}, + { + RpcMethod: "Proposals", + Use: "proposals", + Short: "Query proposals with optional filters", + Example: fmt.Sprintf("%[1]s query gov proposals --depositor cosmos1...\n%[1]s query gov proposals --voter cosmos1...\n%[1]s query gov proposals --proposal-status (PROPOSAL_STATUS_DEPOSIT_PERIOD|PROPOSAL_STATUS_VOTING_PERIOD|PROPOSAL_STATUS_PASSED|PROPOSAL_STATUS_REJECTED|PROPOSAL_STATUS_FAILED)", version.AppName), +}, + { + RpcMethod: "Proposal", + Use: "proposal [proposal-id]", + Short: "Query details of a single proposal", + Example: fmt.Sprintf("%s query gov proposal 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Vote", + Use: "vote [proposal-id] [voter-addr]", + Short: "Query details of a single vote", + Example: fmt.Sprintf("%s query gov vote 1 cosmos1...", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "voter" +}, +}, +}, + { + RpcMethod: "Votes", + Use: "votes [proposal-id]", + Short: 
"Query votes of a single proposal", + Example: fmt.Sprintf("%s query gov votes 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Deposit", + Use: "deposit [proposal-id] [depositer-addr]", + Short: "Query details of a deposit", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, + { + ProtoField: "depositor" +}, +}, +}, + { + RpcMethod: "Deposits", + Use: "deposits [proposal-id]", + Short: "Query deposits on a proposal", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "TallyResult", + Use: "tally [proposal-id]", + Short: "Query the tally of a proposal vote", + Example: fmt.Sprintf("%s query gov tally 1", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "proposal_id" +}, +}, +}, + { + RpcMethod: "Constitution", + Use: "constitution", + Short: "Query the current chain constitution", +}, +}, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Query_ServiceDesc.ServiceName +}, +}, +}, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: govv1.Msg_ServiceDesc.ServiceName, + / map v1beta1 as a sub-command + SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{ + "v1beta1": { + Service: govv1beta1.Msg_ServiceDesc.ServiceName +}, +}, +}, +} +} +``` + +### Positional Arguments + +By default `autocli` generates a flag for each field in your protobuf message. However, you can choose to use positional arguments instead of flags for certain fields. + +To add positional arguments to a command, use the `autocliv1.PositionalArgDescriptor` struct, as seen in the example below. Specify the `ProtoField` parameter, which is the name of the protobuf field that should be used as the positional argument. 
In addition, if the parameter is a variable-length argument, you can specify the `Varargs` parameter as `true`. This can only be applied to the last positional parameter, and the `ProtoField` must be a repeated field. + +Here's an example of how to define a positional argument for the `Account` method of the `auth` service: + +```go expandable +package auth + +import ( + + "fmt" + + authv1beta1 "cosmossdk.io/api/cosmos/auth/v1beta1" + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + _ "cosmossdk.io/api/cosmos/crypto/secp256k1" / register to that it shows up in protoregistry.GlobalTypes + _ "cosmossdk.io/api/cosmos/crypto/secp256r1" / register to that it shows up in protoregistry.GlobalTypes + + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. +func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: authv1beta1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "Accounts", + Use: "accounts", + Short: "Query all the accounts", +}, + { + RpcMethod: "Account", + Use: "account [address]", + Short: "Query account by address", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address" +}}, +}, + { + RpcMethod: "AccountInfo", + Use: "account-info [address]", + Short: "Query account info which is common to all account types.", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address" +}}, +}, + { + RpcMethod: "AccountAddressByID", + Use: "address-by-acc-num [acc-num]", + Short: "Query account address by account number", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "id" +}}, +}, + { + RpcMethod: "ModuleAccounts", + Use: "module-accounts", + Short: "Query all module accounts", +}, + { + RpcMethod: "ModuleAccountByName", + Use: "module-account [module-name]", + Short: "Query module account info 
by module name", + Example: fmt.Sprintf("%s q auth module-account gov", version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "name" +}}, +}, + { + RpcMethod: "AddressBytesToString", + Use: "address-bytes-to-string [address-bytes]", + Short: "Transform an address bytes to string", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address_bytes" +}}, +}, + { + RpcMethod: "AddressStringToBytes", + Use: "address-string-to-bytes [address-string]", + Short: "Transform an address string to bytes", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "address_string" +}}, +}, + { + RpcMethod: "Bech32Prefix", + Use: "bech32-prefix", + Short: "Query the chain bech32 prefix (if applicable)", +}, + { + RpcMethod: "Params", + Use: "params", + Short: "Query the current auth parameters", +}, +}, +}, + / Tx is purposely left empty, as the only tx is MsgUpdateParams which is gov gated. +} +} +``` + +Then the command can be used as follows, instead of having to specify the `--address` flag: + +```bash + query auth account cosmos1abcd...xyz +``` + +#### Flattened Fields in Positional Arguments + +AutoCLI also supports flattening nested message fields as positional arguments. This means you can access nested fields +using dot notation in the `ProtoField` parameter. This is particularly useful when you want to directly set nested +message fields as positional arguments. 
+
+For example, if you have a nested message structure like this:
+
+```protobuf
+message Permissions {
+  string level = 1;
+  repeated string limit_type_urls = 2;
+}
+
+message MsgAuthorizeCircuitBreaker {
+  string grantee = 1;
+  Permissions permissions = 2;
+}
+```
+
+You can flatten the fields in your AutoCLI configuration:
+
+```go
+{
+  RpcMethod: "AuthorizeCircuitBreaker",
+  Use: "authorize ",
+  PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+    {
+      ProtoField: "grantee"
+},
+    {
+      ProtoField: "permissions.level"
+},
+    {
+      ProtoField: "permissions.limit_type_urls"
+},
+},
+}
+```
+
+This allows users to provide values for nested fields directly as positional arguments:
+
+```bash
+ tx circuit authorize cosmos1... super-admin "/cosmos.bank.v1beta1.MsgSend,/cosmos.bank.v1beta1.MsgMultiSend"
+```
+
+Instead of having to provide a complex JSON structure for nested fields, flattening makes the CLI more user-friendly by allowing direct access to nested fields.
+
+#### Customising Flag Names
+
+By default, `autocli` generates flag names based on the names of the fields in your protobuf message. However, you can customise the flag names by providing `FlagOptions`. This parameter allows you to specify custom names for flags based on the names of the message fields.
+
+For example, if you have a message with the fields `test` and `test1`, you can use the following naming options to customise the flags:
+
+```go
+autocliv1.RpcCommandOptions{
+  FlagOptions: map[string]*autocliv1.FlagOptions{
+    "test": {
+      Name: "custom_name",
+},
+    "test1": {
+      Name: "other_name",
+},
+},
+}
+```
+
+`FlagOptions` is defined like subcommands in the `AutoCLIOptions()` method on your module.
+
+### Combining AutoCLI with Other Commands Within A Module
+
+AutoCLI can be used alongside other commands within a module. For example, the `gov` module uses AutoCLI to generate commands for the `query` subcommand, but also defines custom commands for the `proposer` subcommands.
+
+In order to enable this behavior, set the `EnhanceCustomCommand` field to `true` in `AutoCLIOptions()` for the command type (queries and/or transactions) you want to enhance.
+
+```go expandable
+package gov
+
+import (
+
+	"fmt"
+
+	autocliv1 "cosmossdk.io/api/cosmos/autocli/v1"
+	govv1 "cosmossdk.io/api/cosmos/gov/v1"
+	govv1beta1 "cosmossdk.io/api/cosmos/gov/v1beta1"
+	"github.com/cosmos/cosmos-sdk/version"
+)
+
+/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface.
+func (am AppModule)
+
+AutoCLIOptions() *autocliv1.ModuleOptions {
+	return &autocliv1.ModuleOptions{
+		Query: &autocliv1.ServiceCommandDescriptor{
+			Service: govv1.Query_ServiceDesc.ServiceName,
+			RpcCommandOptions: []*autocliv1.RpcCommandOptions{
+				{
+					RpcMethod: "Params",
+					Use: "params",
+					Short: "Query the parameters of the governance process",
+					Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit)
+
+to filter results.",
+					PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+						{
+							ProtoField: "params_type",
+							Optional: true
+},
+},
+},
+				{
+					RpcMethod: "Proposals",
+					Use: "proposals",
+					Short: "Query proposals with optional filters",
+					Example: fmt.Sprintf("%[1]s query gov proposals --depositor cosmos1...\n%[1]s query gov proposals --voter cosmos1...\n%[1]s query gov proposals --proposal-status (PROPOSAL_STATUS_DEPOSIT_PERIOD|PROPOSAL_STATUS_VOTING_PERIOD|PROPOSAL_STATUS_PASSED|PROPOSAL_STATUS_REJECTED|PROPOSAL_STATUS_FAILED)", version.AppName),
+},
+				{
+					RpcMethod: "Proposal",
+					Use: "proposal [proposal-id]",
+					Short: "Query details of a single proposal",
+					Example: fmt.Sprintf("%s query gov proposal 1", version.AppName),
+					PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+						{
+							ProtoField: "proposal_id"
+},
+},
+},
+				{
+					RpcMethod: "Vote",
+					Use: "vote [proposal-id] [voter-addr]",
+					Short: "Query details of a single vote",
+					Example: fmt.Sprintf("%s query gov vote 1 cosmos1...", version.AppName),
+
PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+						{
+							ProtoField: "proposal_id"
+},
+						{
+							ProtoField: "voter"
+},
+},
+},
+				{
+					RpcMethod: "Votes",
+					Use: "votes [proposal-id]",
+					Short: "Query votes of a single proposal",
+					Example: fmt.Sprintf("%s query gov votes 1", version.AppName),
+					PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+						{
+							ProtoField: "proposal_id"
+},
+},
+},
+				{
+					RpcMethod: "Deposit",
+					Use: "deposit [proposal-id] [depositer-addr]",
+					Short: "Query details of a deposit",
+					PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+						{
+							ProtoField: "proposal_id"
+},
+						{
+							ProtoField: "depositor"
+},
+},
+},
+				{
+					RpcMethod: "Deposits",
+					Use: "deposits [proposal-id]",
+					Short: "Query deposits on a proposal",
+					PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+						{
+							ProtoField: "proposal_id"
+},
+},
+},
+				{
+					RpcMethod: "TallyResult",
+					Use: "tally [proposal-id]",
+					Short: "Query the tally of a proposal vote",
+					Example: fmt.Sprintf("%s query gov tally 1", version.AppName),
+					PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+						{
+							ProtoField: "proposal_id"
+},
+},
+},
+				{
+					RpcMethod: "Constitution",
+					Use: "constitution",
+					Short: "Query the current chain constitution",
+},
+},
+			/ map v1beta1 as a sub-command
+			SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{
+				"v1beta1": {
+					Service: govv1beta1.Query_ServiceDesc.ServiceName
+},
+},
+			EnhanceCustomCommand: true, / We still have manual commands in gov that we want to keep
+},
+		Tx: &autocliv1.ServiceCommandDescriptor{
+			Service: govv1.Msg_ServiceDesc.ServiceName,
+			/ map v1beta1 as a sub-command
+			SubCommands: map[string]*autocliv1.ServiceCommandDescriptor{
+				"v1beta1": {
+					Service: govv1beta1.Msg_ServiceDesc.ServiceName
+},
+},
+},
+}
+}
+```
+
+If not set to `true`, `AutoCLI` will not generate commands for the module if there are already commands registered for the module (when `GetTxCmd()` or `GetQueryCmd()` are defined).
+ +### Skip a command + +AutoCLI automatically skips unsupported commands when the [`cosmos_proto.method_added_in` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) is present. + +Additionally, a command can be manually skipped using the `autocliv1.RpcCommandOptions`: + +```go +&autocliv1.RpcCommandOptions{ + RpcMethod: "Params", / The name of the gRPC method to skip + Skip: true, +} +``` + +### Use AutoCLI for non-module commands + +It is possible to use `AutoCLI` for non-module commands. The trick is still to implement the `appmodule.Module` interface and append it to the `appOptions.ModuleOptions` map. + +For example, here is how the SDK does it for `cometbft` gRPC commands: + +```go expandable +package cmtservice + +import ( + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + cmtv1beta1 "cosmossdk.io/api/cosmos/base/tendermint/v1beta1" +) + +var CometBFTAutoCLIDescriptor = &autocliv1.ServiceCommandDescriptor{ + Service: cmtv1beta1.Service_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "GetNodeInfo", + Use: "node-info", + Short: "Query the current node info", +}, + { + RpcMethod: "GetSyncing", + Use: "syncing", + Short: "Query node syncing status", +}, + { + RpcMethod: "GetLatestBlock", + Use: "block-latest", + Short: "Query for the latest committed block", +}, + { + RpcMethod: "GetBlockByHeight", + Use: "block-by-height [height]", + Short: "Query for a committed block by height", + Long: "Query for a specific committed block using the CometBFT RPC `block_by_height` method", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "height" +}}, +}, + { + RpcMethod: "GetLatestValidatorSet", + Use: "validator-set", + Alias: []string{"validator-set-latest", "comet-validator-set", "cometbft-validator-set", "tendermint-validator-set" +}, + Short: "Query for the latest validator set", +}, + { + RpcMethod: "GetValidatorSetByHeight", + Use: "validator-set-by-height 
[height]", + Short: "Query for a validator set by height", + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "height" +}}, +}, + { + RpcMethod: "ABCIQuery", + Skip: true, +}, +}, +} + +/ NewCometBFTCommands is a fake `appmodule.Module` to be considered as a module +/ and be added in AutoCLI. +func NewCometBFTCommands() *cometModule { /nolint:revive / fake module and limiting import of core + return &cometModule{ +} +} + +type cometModule struct{ +} + +func (m cometModule) + +IsOnePerModuleType() { +} + +func (m cometModule) + +IsAppModule() { +} + +func (m cometModule) + +Name() + +string { + return "comet" +} + +func (m cometModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: CometBFTAutoCLIDescriptor, +} +} +``` + +## Summary + +`autocli` lets you generate a CLI for your Cosmos SDK-based applications without any Cobra boilerplate. It allows you to easily generate CLI commands and flags from your protobuf messages, and provides many options for customising the behavior of your CLI application. + +To further enhance your CLI experience with Cosmos SDK-based blockchains, you can use `hubl`. `hubl` is a tool that allows you to query any Cosmos SDK-based blockchain using the new AutoCLI feature of the Cosmos SDK. With `hubl`, you can easily configure a new chain and query modules with just a few simple commands. + +For more information on `hubl`, including how to configure a new chain and query a module, see the [Hubl documentation](https://docs.cosmos.network/main/tooling/hubl). 
diff --git a/docs/sdk/v0.53/api-reference/client-tools/cli.mdx b/docs/sdk/v0.53/api-reference/client-tools/cli.mdx new file mode 100644 index 00000000..1ab0653f --- /dev/null +++ b/docs/sdk/v0.53/api-reference/client-tools/cli.mdx @@ -0,0 +1,237 @@ +--- +title: Command-Line Interface +--- + +## Synopsis + +This document describes how command-line interface (CLI) works on a high-level, for an [**application**](/docs/sdk/v0.53/documentation/application-framework/app-anatomy). A separate document for implementing a CLI for a Cosmos SDK [**module**](/docs/sdk/v0.53/documentation/module-system/intro) can be found [here](/docs/sdk/v0.53/documentation/module-system/module-interfaces#cli). + +## Command-Line Interface + +### Example Command + +There is no set way to create a CLI, but Cosmos SDK modules typically use the [Cobra Library](https://github.com/spf13/cobra). Building a CLI with Cobra entails defining commands, arguments, and flags. [**Commands**](#root-command) understand the actions users wish to take, such as `tx` for creating a transaction and `query` for querying the application. Each command can also have nested subcommands, necessary for naming the specific transaction type. Users also supply **Arguments**, such as account numbers to send coins to, and [**Flags**](#flags) to modify various aspects of the commands, such as gas prices or which node to broadcast to. + +Here is an example of a command a user might enter to interact with the simapp CLI `simd` in order to send some tokens: + +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --gas auto --gas-prices +``` + +The first four strings specify the command: + +- The root command for the entire application `simd`. +- The subcommand `tx`, which contains all commands that let users create transactions. +- The subcommand `bank` to indicate which module to route the command to ([`x/bank`](/docs/sdk/v0.53/documentation/module-system/bank) module in this case). 
+- The type of transaction `send`. + +The next three strings are arguments: the `from_address` the user wishes to send from, the `to_address` of the recipient, and the `amount` they want to send. Finally, the last few strings of the command are optional flags to indicate how much the user is willing to pay in fees (calculated using the amount of gas used to execute the transaction and the gas prices provided by the user). + +The CLI interacts with a [node](/docs/sdk/v0.53/documentation/operations/node) to handle this command. The interface itself is defined in a `main.go` file. + +### Building the CLI + +The `main.go` file needs to have a `main()` function that creates a root command, to which all the application commands will be added as subcommands. The root command additionally handles: + +- **setting configurations** by reading in configuration files (e.g. the Cosmos SDK config file). +- **adding any flags** to it, such as `--chain-id`. +- **instantiating the `codec`** by injecting the application codecs. The [`codec`](/docs/sdk/v0.53/documentation/protocol-development/encoding) is used to encode and decode data structures for the application - stores can only persist `[]byte`s so the developer must define a serialization format for their data structures or use the default, Protobuf. +- **adding subcommands** for all the possible user interactions, including [transaction commands](#transaction-commands) and [query commands](#query-commands). + +The `main()` function finally creates an executor and [executes](https://pkg.go.dev/github.com/spf13/cobra#Command.Execute) the root command. 
See an example of the `main()` function from the `simapp` application: + +```go expandable +package main + +import ( + + "fmt" + "os" + + clientv2helpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/simapp" + "cosmossdk.io/simapp/simd/cmd" + + svrcmd "github.com/cosmos/cosmos-sdk/server/cmd" +) + +func main() { + rootCmd := cmd.NewRootCmd() + if err := svrcmd.Execute(rootCmd, clientv2helpers.EnvPrefix, simapp.DefaultNodeHome); err != nil { + fmt.Fprintln(rootCmd.OutOrStderr(), err) + +os.Exit(1) +} +} +``` + +The rest of the document will detail what needs to be implemented for each step and include smaller portions of code from the `simapp` CLI files. + +## Adding Commands to the CLI + +Every application CLI first constructs a root command, then adds functionality by aggregating subcommands (often with further nested subcommands) using `rootCmd.AddCommand()`. The bulk of an application's unique capabilities lies in its transaction and query commands, called `TxCmd` and `QueryCmd` respectively. + +### Root Command + +The root command (called `rootCmd`) is what the user first types into the command line to indicate which application they wish to interact with. The string used to invoke the command (the "Use" field) is typically the name of the application suffixed with `-d`, e.g. `simd` or `gaiad`. The root command typically includes the following commands to support basic functionality in the application. + +- **Status** command from the Cosmos SDK RPC client tools, which prints information about the status of the connected [`Node`](/docs/sdk/v0.53/documentation/operations/node). The Status of a node includes `NodeInfo`, `SyncInfo` and `ValidatorInfo`. 
+- **Keys** [commands](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys) from the Cosmos SDK client tools, which include a collection of subcommands for using the key functions in the Cosmos SDK crypto tools, including adding a new key and saving it to the keyring, listing all public keys stored in the keyring, and deleting a key. For example, users can type `simd keys add ` to add a new key and save an encrypted copy to the keyring, using the flag `--recover` to recover a private key from a seed phrase or the flag `--multisig` to group multiple keys together to create a multisig key. For full details on the `add` key command, see the code [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys/add.go). For more details about the usage of `--keyring-backend` for storing key credentials, see the [keyring docs](/docs/sdk/v0.53/documentation/operations/keyring). +- **Server** commands from the Cosmos SDK server package. These commands are responsible for providing the mechanisms necessary to start an ABCI CometBFT application and provide the CLI framework (based on [cobra](https://github.com/spf13/cobra)) necessary to fully bootstrap an application. The package exposes two core functions: `StartCmd` and `ExportCmd`, which create commands to start the application and export state, respectively. + Learn more [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server). +- [**Transaction**](#transaction-commands) commands. +- [**Query**](#query-commands) commands. + +Next is an example `rootCmd` function from the `simapp` application. It instantiates the root command, adds a [_persistent_ flag](#flags) and a `PreRun` function to be run before every execution, and adds all of the necessary subcommands. 
+ +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L47-L130 +``` + + + Use the `EnhanceRootCommand()` from the AutoCLI options to automatically add + auto-generated commands from the modules to the root command. Additionally, it + adds all manually defined module commands (`tx` and `query`) as well. Read + more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its + dedicated section. + + +`rootCmd` has a function called `initAppConfig()`, which is useful for setting the application's custom configs. +By default, the app uses the CometBFT app config template from the Cosmos SDK, which can be overwritten via `initAppConfig()`. +Here's example code to override the default `app.toml` template. + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L144-L199 +``` + +`initAppConfig()` also allows overriding the Cosmos SDK's default [server config](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/config/config.go#L231). One example is the `min-gas-prices` config, which defines the minimum gas prices a validator is willing to accept for processing a transaction. By default, the Cosmos SDK sets this parameter to `""` (empty string), which forces all validators to tweak their own `app.toml` and set a non-empty value, or else the node will halt on startup. This might not be the best UX for validators, so the chain developer can set a default `app.toml` value for validators inside this `initAppConfig()` function. + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L164-L180 +``` + +The root-level `status` and `keys` subcommands are common across most applications and do not interact with application state. The bulk of an application's functionality - what users can actually _do_ with it - is enabled by its `tx` and `query` commands. 
+ +### Transaction Commands + +[Transactions](/docs/sdk/v0.53/documentation/protocol-development/transactions) are objects wrapping [`Msg`s](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages) that trigger state changes. To enable the creation of transactions using the CLI interface, a function `txCommand` is generally added to the `rootCmd`: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L222-L229 +``` + +This `txCommand` function adds all the transaction commands available to end-users for the application. This typically includes: + +- **Sign command** from the [`auth`](/docs/sdk/v0.53/documentation/module-system/auth) module that signs messages in a transaction. To enable multisig, add the `auth` module's `MultiSign` command. Since every transaction requires some sort of signature in order to be valid, the signing command is necessary for every application. +- **Broadcast command** from the Cosmos SDK client tools, to broadcast transactions. +- **All [module transaction commands](/docs/sdk/v0.53/documentation/module-system/module-interfaces#transaction-commands)** the application is dependent on, retrieved by using the [basic module manager's](/docs/sdk/v0.53/documentation/module-system/module-manager#basic-manager) `AddTxCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli). + +Here is an example of a `txCommand` aggregating these subcommands from the `simapp` application: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L270-L292 +``` + + + When using AutoCLI to generate module transaction commands, + `EnhanceRootCommand()` automatically adds the module `tx` command to the root + command. Read more about + [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated + section. 
+ + +### Query Commands + +[**Queries**](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#queries) are objects that allow users to retrieve information about the application's state. To enable the creation of queries using the CLI interface, a function `queryCommand` is generally added to the `rootCmd`: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L222-L229 +``` + +This `queryCommand` function adds all the queries available to end-users for the application. This typically includes: + +- **QueryTx** and/or other transaction query commands from the `auth` module which allow the user to search for a transaction by inputting its hash, a list of tags, or a block height. These queries allow users to see if transactions have been included in a block. +- **Account command** from the `auth` module, which displays the state (e.g. account balance) of an account given an address. +- **Validator command** from the Cosmos SDK rpc client tools, which displays the validator set of a given height. +- **Block command** from the Cosmos SDK RPC client tools, which displays the block data for a given height. +- **All [module query commands](/docs/sdk/v0.53/documentation/module-system/module-interfaces#query-commands)** the application is dependent on, retrieved by using the [basic module manager's](/docs/sdk/v0.53/documentation/module-system/module-manager#basic-manager) `AddQueryCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli). + +Here is an example of a `queryCommand` aggregating subcommands from the `simapp` application: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L249-L268 +``` + + + When using AutoCLI to generate module query commands, `EnhanceRootCommand()` + automatically adds the module `query` command to the root command. 
Read more + about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its + dedicated section. + + +## Flags + +Flags are used to modify commands; developers can include them in a `flags.go` file with their CLI. Users can explicitly include them in commands or pre-configure them inside their [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml). Commonly pre-configured flags include the `--node` to connect to and `--chain-id` of the blockchain the user wishes to interact with. + +A _persistent_ flag (as opposed to a _local_ flag) added to a command transcends all of its children: subcommands will inherit the configured values for these flags. Additionally, all flags have default values when they are added to commands; some toggle an option off but others are empty values that the user needs to override to create valid commands. A flag can be explicitly marked as _required_ so that an error is automatically thrown if the user does not provide a value, but it is also acceptable to handle unexpected missing flags differently. + +Flags are added to commands directly (generally in the [module's CLI file](/docs/sdk/v0.53/documentation/module-system/module-interfaces#flags) where module commands are defined) and no flag except for the `rootCmd` persistent flags has to be added at application level. It is common to add a _persistent_ flag for `--chain-id`, the unique identifier of the blockchain the application pertains to, to the root command. Adding this flag can be done in the `main()` function. Adding this flag makes sense as the chain ID should not be changing across commands in this application CLI. + +## Environment variables + +Each flag is bound to its respective named environment variable. The name of the environment variable consists of two parts: the capitalized `basename` followed by the name of the flag. `-` must be substituted with `_`. 
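The flag-to-environment-variable naming rule can be sketched in Go; the `envVarName` helper below is purely illustrative (it is not an SDK function - the SDK derives this binding internally):

```go
package main

import (
	"fmt"
	"strings"
)

// envVarName derives the environment variable bound to a flag, following
// the convention described above: the capitalized basename, an underscore,
// then the flag name uppercased with "-" replaced by "_".
// Illustrative only; not part of the Cosmos SDK API.
func envVarName(basename, flag string) string {
	name := strings.TrimPrefix(flag, "--")
	name = strings.ReplaceAll(name, "-", "_")
	return strings.ToUpper(basename) + "_" + strings.ToUpper(name)
}

func main() {
	fmt.Println(envVarName("gaia", "--node"))            // GAIA_NODE
	fmt.Println(envVarName("gaia", "--keyring-backend")) // GAIA_KEYRING_BACKEND
}
```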
For example, the flag `--node` for an application with basename `GAIA` is bound to `GAIA_NODE`. This reduces the number of flags typed for routine operations. For example, instead of: + +```shell +gaia --home=./ --node= --chain-id="testchain-1" --keyring-backend=test tx ... --from= +``` + +this will be more convenient: + +```shell +# define env variables in .env, .envrc etc +GAIA_HOME= +GAIA_NODE= +GAIA_CHAIN_ID="testchain-1" +GAIA_KEYRING_BACKEND="test" + +# and later just use +gaia tx ... --from= +``` + +## Configurations + +It is vital that the root command of an application uses the `PersistentPreRun()` cobra command property for executing the command, so all child commands have access to the server and client contexts. These contexts are set as their default values initially and may be modified, scoped to the command, in their respective `PersistentPreRun()` functions. Note that the `client.Context` is typically pre-populated with "default" values that may be useful for all commands to inherit and override if necessary. + +Here is an example of a `PersistentPreRun()` function from `simapp`: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L81-L120 +``` + +The `SetCmdClientContextHandler` call reads persistent flags via `ReadPersistentCommandFlags`, which creates a `client.Context` and sets that on the root command's `Context`. + +The `InterceptConfigsPreRunHandler` call creates a viper literal, default `server.Context`, and a logger and sets that on the root command's `Context`. The `server.Context` will be modified and saved to disk. The internal `interceptConfigs` call reads or creates a CometBFT configuration based on the home path provided. In addition, `interceptConfigs` also reads and loads the application configuration, `app.toml`, and binds that to the `server.Context` viper literal. 
This is vital so the application can get access to not only the CLI flags, but also to the application configuration values provided by this file. + + +To configure which logger is used, do not use `InterceptConfigsPreRunHandler`, which sets the default SDK logger, but instead use `InterceptConfigsAndCreateContext` and set the server context and the logger manually: + +```diff expandable +-return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) + ++serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig) ++if err != nil { ++ return err ++} + ++/ overwrite default server logger ++logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout()) ++if err != nil { ++ return err ++} ++serverCtx.Logger = logger.With(log.ModuleKey, "server") + ++/ set server context ++return server.SetCmdServerContext(cmd, serverCtx) +``` + + diff --git a/docs/sdk/v0.53/api-reference/client-tools/hubl.mdx b/docs/sdk/v0.53/api-reference/client-tools/hubl.mdx new file mode 100644 index 00000000..f7b193ca --- /dev/null +++ b/docs/sdk/v0.53/api-reference/client-tools/hubl.mdx @@ -0,0 +1,71 @@ +--- +title: Hubl +--- + +`Hubl` is a tool that allows you to query any Cosmos SDK-based blockchain. +It takes advantage of the new [AutoCLI](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/client/v2@v2.0.0-20220916140313-c5245716b516/cli) feature {/* TODO replace with AutoCLI docs */} of the Cosmos SDK. + +## Installation + +Hubl can be installed using `go install`: + +```shell +go install cosmossdk.io/tools/hubl/cmd/hubl@latest +``` + +Or build from source: + +```shell +git clone --depth=1 https://github.com/cosmos/cosmos-sdk +make hubl +``` + +The binary will be located in `tools/hubl`. 
+ +## Usage + +```shell +hubl --help +``` + +### Add chain + +To configure a new chain, run the `init` command with the name of the chain as it is listed in the chain registry ([Link](https://github.com/cosmos/chain-registry)). + +If the chain is not listed in the chain registry, you can use any unique name. + +```shell +hubl init [chain-name] +hubl init regen +``` + +The chain configuration is stored in `~/.hubl/config.toml`. + + + +When using an insecure gRPC endpoint, change the `insecure` field to `true` in the config file. + +```toml +[chains] +[chains.regen] +[[chains.regen.trusted-grpc-endpoints]] +endpoint = 'localhost:9090' +insecure = true +``` + +Or use the `--insecure` flag: + +```shell +hubl init regen --insecure +``` + + + +### Query + +To query a chain, you can use the `query` command. +Then specify which module you want to query and the query itself. + +```shell +hubl regen query auth module-accounts +``` diff --git a/docs/sdk/v0.53/api-reference/events-streaming/events.mdx b/docs/sdk/v0.53/api-reference/events-streaming/events.mdx new file mode 100644 index 00000000..45bdcac0 --- /dev/null +++ b/docs/sdk/v0.53/api-reference/events-streaming/events.mdx @@ -0,0 +1,2347 @@ +--- +title: Events +--- + +## Synopsis + +`Event`s are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallets to track the execution of various messages and index transactions. + + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy) +- [CometBFT Documentation on Events](https://docs.cometbft.com/v0.37/spec/abci/abci++_basic_concepts#events) + + + +## Events + +Events are implemented in the Cosmos SDK as an alias of the ABCI `Event` type and +take the form of: `{eventType}.{attributeKey}={attributeValue}`. 
+ +```protobuf +// Event allows application developers to attach additional information to +// ResponseBeginBlock, ResponseEndBlock, ResponseCheckTx and ResponseDeliverTx. +// Later, transactions may be queried using these events. +message Event { + string type = 1; + repeated EventAttribute attributes = 2 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "attributes,omitempty" + ]; +} +``` + +An Event contains: + +- A `type` to categorize the Event at a high-level; for example, the Cosmos SDK uses the `"message"` type to filter Events by `Msg`s. +- A list of `attributes`, which are key-value pairs that give more information about the categorized Event. For example, for the `"message"` type, we can filter Events by key-value pairs using `message.action={some_action}`, `message.module={some_module}` or `message.sender={some_sender}`. +- A `msg_index` to identify which messages relate to the same transaction. + + + To parse the attribute values as strings, make sure to add `'` (single quotes) + around each attribute value. + + +_Typed Events_ are Protobuf-defined [messages](docs/sdk/next/documentation/legacy/adr-comprehensive) used by the Cosmos SDK +for emitting and querying Events. They are defined in an `event.proto` file, on a **per-module basis** and are read as `proto.Message`. +_Legacy Events_ are defined on a **per-module basis** in the module's `/types/events.go` file. +They are triggered from the module's Protobuf [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services) +by using the [`EventManager`](#eventmanager). + +In addition, each module documents its events in the `Events` section of its spec (x/`{moduleName}`/`README.md`). 
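To make the `{eventType}.{attributeKey}={attributeValue}` form concrete, here is a minimal Go sketch that renders an event's attributes into query clauses. The `Event`/`EventAttribute` structs are stand-ins for the ABCI types and `queryClauses` is a hypothetical helper, not SDK API; note the single quotes around each attribute value, as mentioned above.

```go
package main

import "fmt"

// Stand-ins for the ABCI Event types, for illustration only.
type EventAttribute struct{ Key, Value string }

type Event struct {
	Type       string
	Attributes []EventAttribute
}

// queryClauses renders each attribute of an event into the
// {eventType}.{attributeKey}={attributeValue} form, single-quoting
// the value so it is parsed as a string.
func queryClauses(e Event) []string {
	out := make([]string, 0, len(e.Attributes))
	for _, a := range e.Attributes {
		out = append(out, fmt.Sprintf("%s.%s='%s'", e.Type, a.Key, a.Value))
	}
	return out
}

func main() {
	ev := Event{
		Type: "message",
		Attributes: []EventAttribute{
			{Key: "module", Value: "bank"},
			{Key: "sender", Value: "cosmos1..."},
		},
	}
	for _, c := range queryClauses(ev) {
		fmt.Println(c) // e.g. message.module='bank'
	}
}
```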
+ +Lastly, Events are returned to the underlying consensus engine in the response of the following ABCI messages: + +- [`BeginBlock`](/docs/sdk/v0.53/documentation/application-framework/baseapp#beginblock) +- [`EndBlock`](/docs/sdk/v0.53/documentation/application-framework/baseapp#endblock) +- [`CheckTx`](/docs/sdk/v0.53/documentation/application-framework/baseapp#checktx) +- [`Transaction Execution`](/docs/sdk/v0.53/documentation/application-framework/baseapp#transactionexecution) + +### Examples + +The following examples show how to query Events using the Cosmos SDK. + +| Event | Description | +| ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `tx.height=23` | Query all transactions at height 23 | +| `message.action='/cosmos.bank.v1beta1.Msg/Send'` | Query all transactions containing a x/bank `Send` [Service `Msg`](/docs/sdk/v0.53/documentation/module-system/msg-services). Note the `'`s around the value. | +| `message.module='bank'` | Query all transactions containing messages from the x/bank module. Note the `'`s around the value. | +| `create_validator.validator='cosmosval1...'` | x/staking-specific Event, see [x/staking SPEC](/docs/sdk/v0.53/documentation/module-system/staking). | + +## EventManager + +In Cosmos SDK applications, Events are managed by an abstraction called the `EventManager`. +Internally, the `EventManager` tracks a list of Events for the entire execution flow of `FinalizeBlock` +(i.e. transaction execution, `BeginBlock`, `EndBlock`). 
+ +```go expandable +package types + +import ( + + "encoding/json" + "fmt" + "maps" + "reflect" + "slices" + "strings" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/gogoproto/jsonpb" + proto "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/codec" +) + +type EventManagerI interface { + Events() + +Events + ABCIEvents() []abci.Event + EmitTypedEvent(tev proto.Message) + +error + EmitTypedEvents(tevs ...proto.Message) + +error + EmitEvent(event Event) + +EmitEvents(events Events) +} + +/ ---------------------------------------------------------------------------- +/ Event Manager +/ ---------------------------------------------------------------------------- + +var _ EventManagerI = (*EventManager)(nil) + +/ EventManager implements a simple wrapper around a slice of Event objects that +/ can be emitted from. +type EventManager struct { + events Events +} + +func NewEventManager() *EventManager { + return &EventManager{ + EmptyEvents() +} +} + +func (em *EventManager) + +Events() + +Events { + return em.events +} + +/ EmitEvent stores a single Event object. +/ Deprecated: Use EmitTypedEvent +func (em *EventManager) + +EmitEvent(event Event) { + em.events = em.events.AppendEvent(event) +} + +/ EmitEvents stores a series of Event objects. +/ Deprecated: Use EmitTypedEvents +func (em *EventManager) + +EmitEvents(events Events) { + em.events = em.events.AppendEvents(events) +} + +/ ABCIEvents returns all stored Event objects as abci.Event objects. 
+func (em EventManager) + +ABCIEvents() []abci.Event { + return em.events.ToABCIEvents() +} + +/ EmitTypedEvent takes typed event and emits converting it into Event +func (em *EventManager) + +EmitTypedEvent(tev proto.Message) + +error { + event, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +em.EmitEvent(event) + +return nil +} + +/ EmitTypedEvents takes series of typed events and emit +func (em *EventManager) + +EmitTypedEvents(tevs ...proto.Message) + +error { + events := make(Events, len(tevs)) + for i, tev := range tevs { + res, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +events[i] = res +} + +em.EmitEvents(events) + +return nil +} + +/ TypedEventToEvent takes typed event and converts to Event object +func TypedEventToEvent(tev proto.Message) (Event, error) { + evtType := proto.MessageName(tev) + +evtJSON, err := codec.ProtoMarshalJSON(tev, nil) + if err != nil { + return Event{ +}, err +} + +var attrMap map[string]json.RawMessage + err = json.Unmarshal(evtJSON, &attrMap) + if err != nil { + return Event{ +}, err +} + + / sort the keys to ensure the order is always the same + keys := slices.Sorted(maps.Keys(attrMap)) + attrs := make([]abci.EventAttribute, 0, len(attrMap)) + for _, k := range keys { + v := attrMap[k] + attrs = append(attrs, abci.EventAttribute{ + Key: k, + Value: string(v), +}) +} + +return Event{ + Type: evtType, + Attributes: attrs, +}, nil +} + +/ ParseTypedEvent converts abci.Event back to a typed event. 
+func ParseTypedEvent(event abci.Event) (proto.Message, error) { + concreteGoType := proto.MessageType(event.Type) + if concreteGoType == nil { + return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type) +} + +var value reflect.Value + if concreteGoType.Kind() == reflect.Ptr { + value = reflect.New(concreteGoType.Elem()) +} + +else { + value = reflect.Zero(concreteGoType) +} + +protoMsg, ok := value.Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("%q does not implement proto.Message", event.Type) +} + attrMap := make(map[string]json.RawMessage) + for _, attr := range event.Attributes { + attrMap[attr.Key] = json.RawMessage(attr.Value) +} + +attrBytes, err := json.Marshal(attrMap) + if err != nil { + return nil, err +} + unmarshaler := jsonpb.Unmarshaler{ + AllowUnknownFields: true +} + if err := unmarshaler.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg); err != nil { + return nil, err +} + +return protoMsg, nil +} + +/ ---------------------------------------------------------------------------- +/ Events +/ ---------------------------------------------------------------------------- + +type ( + / Event is a type alias for an ABCI Event + Event abci.Event + + / Events defines a slice of Event objects + Events []Event +) + +/ NewEvent creates a new Event object with a given type and slice of one or more +/ attributes. +func NewEvent(ty string, attrs ...Attribute) + +Event { + e := Event{ + Type: ty +} + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ NewAttribute returns a new key/value Attribute object. +func NewAttribute(k, v string) + +Attribute { + return Attribute{ + k, v +} +} + +/ EmptyEvents returns an empty slice of events. +func EmptyEvents() + +Events { + return make(Events, 0) +} + +func (a Attribute) + +String() + +string { + return fmt.Sprintf("%s: %s", a.Key, a.Value) +} + +/ ToKVPair converts an Attribute object into a CometBFT key/value pair. 
+func (a Attribute) + +ToKVPair() + +abci.EventAttribute { + return abci.EventAttribute{ + Key: a.Key, + Value: a.Value +} +} + +/ AppendAttributes adds one or more attributes to an Event. +func (e Event) + +AppendAttributes(attrs ...Attribute) + +Event { + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ GetAttribute returns an attribute for a given key present in an event. +/ If the key is not found, the boolean value will be false. +func (e Event) + +GetAttribute(key string) (Attribute, bool) { + for _, attr := range e.Attributes { + if attr.Key == key { + return Attribute{ + Key: attr.Key, + Value: attr.Value +}, true +} + +} + +return Attribute{ +}, false +} + +/ AppendEvent adds an Event to a slice of events. +func (e Events) + +AppendEvent(event Event) + +Events { + return append(e, event) +} + +/ AppendEvents adds a slice of Event objects to an exist slice of Event objects. +func (e Events) + +AppendEvents(events Events) + +Events { + return append(e, events...) +} + +/ ToABCIEvents converts a slice of Event objects to a slice of abci.Event +/ objects. +func (e Events) + +ToABCIEvents() []abci.Event { + res := make([]abci.Event, len(e)) + for i, ev := range e { + res[i] = abci.Event{ + Type: ev.Type, + Attributes: ev.Attributes +} + +} + +return res +} + +/ GetAttributes returns all attributes matching a given key present in events. +/ If the key is not found, the boolean value will be false. 
+func (e Events) + +GetAttributes(key string) ([]Attribute, bool) { + attrs := make([]Attribute, 0) + for _, event := range e { + if attr, found := event.GetAttribute(key); found { + attrs = append(attrs, attr) +} + +} + +return attrs, len(attrs) > 0 +} + +/ Common event types and attribute keys +const ( + EventTypeTx = "tx" + + AttributeKeyAccountSequence = "acc_seq" + AttributeKeySignature = "signature" + AttributeKeyFee = "fee" + AttributeKeyFeePayer = "fee_payer" + + EventTypeMessage = "message" + + AttributeKeyAction = "action" + AttributeKeyModule = "module" + AttributeKeySender = "sender" + AttributeKeyAmount = "amount" +) + +type ( + / StringAttributes defines a slice of StringEvents objects. + StringEvents []StringEvent +) + +func (se StringEvents) + +String() + +string { + var sb strings.Builder + for _, e := range se { + fmt.Fprintf(&sb, "\t\t- %s\n", e.Type) + for _, attr := range e.Attributes { + fmt.Fprintf(&sb, "\t\t\t- %s\n", attr) +} + +} + +return strings.TrimRight(sb.String(), "\n") +} + +/ StringifyEvent converts an Event object to a StringEvent object. +func StringifyEvent(e abci.Event) + +StringEvent { + res := StringEvent{ + Type: e.Type +} + for _, attr := range e.Attributes { + res.Attributes = append( + res.Attributes, + Attribute{ + Key: attr.Key, + Value: attr.Value +}, + ) +} + +return res +} + +/ StringifyEvents converts a slice of Event objects into a slice of StringEvent +/ objects. +func StringifyEvents(events []abci.Event) + +StringEvents { + res := make(StringEvents, 0, len(events)) + for _, e := range events { + res = append(res, StringifyEvent(e)) +} + +return res +} + +/ MarkEventsToIndex returns the set of ABCI events, where each event's attribute +/ has it's index value marked based on the provided set of events to index. 
+func MarkEventsToIndex(events []abci.Event, indexSet map[string]struct{
+}) []abci.Event {
+	indexAll := len(indexSet) == 0
+	updatedEvents := make([]abci.Event, len(events))
+	for i, e := range events {
+		updatedEvent := abci.Event{
+			Type: e.Type,
+			Attributes: make([]abci.EventAttribute, len(e.Attributes)),
+}
+		for j, attr := range e.Attributes {
+			_, index := indexSet[fmt.Sprintf("%s.%s", e.Type, attr.Key)]
+			updatedAttr := abci.EventAttribute{
+				Key: attr.Key,
+				Value: attr.Value,
+				Index: index || indexAll,
+}
+
+updatedEvent.Attributes[j] = updatedAttr
+}
+
+updatedEvents[i] = updatedEvent
+}
+
+return updatedEvents
+}
+```
+
+The `EventManager` comes with a set of useful methods for managing Events. The methods
+used most by module and application developers are `EmitTypedEvent` and `EmitEvent`, which track
+an Event in the `EventManager`.
+
+```go expandable
+package types
+
+import (
+
+	"encoding/json"
+	"fmt"
+	"maps"
+	"reflect"
+	"slices"
+	"strings"
+
+	abci "github.com/cometbft/cometbft/abci/types"
+	"github.com/cosmos/gogoproto/jsonpb"
+	proto "github.com/cosmos/gogoproto/proto"
+	"github.com/cosmos/cosmos-sdk/codec"
+)
+
+type EventManagerI interface {
+	Events()
+
+Events
+	ABCIEvents() []abci.Event
+	EmitTypedEvent(tev proto.Message)
+
+error
+	EmitTypedEvents(tevs ...proto.Message)
+
+error
+	EmitEvent(event Event)
+
+EmitEvents(events Events)
+}
+
+/ ----------------------------------------------------------------------------
+/ Event Manager
+/ ----------------------------------------------------------------------------
+
+var _ EventManagerI = (*EventManager)(nil)
+
+/ EventManager implements a simple wrapper around a slice of Event objects that
+/ can be emitted from.
+type EventManager struct {
+	events Events
+}
+
+func NewEventManager() *EventManager {
+	return &EventManager{
+		EmptyEvents()
+}
+}
+
+func (em *EventManager)
+
+Events()
+
+Events {
+	return em.events
+}
+
+/ EmitEvent stores a single Event object.
+/ Deprecated: Use EmitTypedEvent +func (em *EventManager) + +EmitEvent(event Event) { + em.events = em.events.AppendEvent(event) +} + +/ EmitEvents stores a series of Event objects. +/ Deprecated: Use EmitTypedEvents +func (em *EventManager) + +EmitEvents(events Events) { + em.events = em.events.AppendEvents(events) +} + +/ ABCIEvents returns all stored Event objects as abci.Event objects. +func (em EventManager) + +ABCIEvents() []abci.Event { + return em.events.ToABCIEvents() +} + +/ EmitTypedEvent takes typed event and emits converting it into Event +func (em *EventManager) + +EmitTypedEvent(tev proto.Message) + +error { + event, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +em.EmitEvent(event) + +return nil +} + +/ EmitTypedEvents takes series of typed events and emit +func (em *EventManager) + +EmitTypedEvents(tevs ...proto.Message) + +error { + events := make(Events, len(tevs)) + for i, tev := range tevs { + res, err := TypedEventToEvent(tev) + if err != nil { + return err +} + +events[i] = res +} + +em.EmitEvents(events) + +return nil +} + +/ TypedEventToEvent takes typed event and converts to Event object +func TypedEventToEvent(tev proto.Message) (Event, error) { + evtType := proto.MessageName(tev) + +evtJSON, err := codec.ProtoMarshalJSON(tev, nil) + if err != nil { + return Event{ +}, err +} + +var attrMap map[string]json.RawMessage + err = json.Unmarshal(evtJSON, &attrMap) + if err != nil { + return Event{ +}, err +} + + / sort the keys to ensure the order is always the same + keys := slices.Sorted(maps.Keys(attrMap)) + attrs := make([]abci.EventAttribute, 0, len(attrMap)) + for _, k := range keys { + v := attrMap[k] + attrs = append(attrs, abci.EventAttribute{ + Key: k, + Value: string(v), +}) +} + +return Event{ + Type: evtType, + Attributes: attrs, +}, nil +} + +/ ParseTypedEvent converts abci.Event back to a typed event. 
+func ParseTypedEvent(event abci.Event) (proto.Message, error) { + concreteGoType := proto.MessageType(event.Type) + if concreteGoType == nil { + return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type) +} + +var value reflect.Value + if concreteGoType.Kind() == reflect.Ptr { + value = reflect.New(concreteGoType.Elem()) +} + +else { + value = reflect.Zero(concreteGoType) +} + +protoMsg, ok := value.Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("%q does not implement proto.Message", event.Type) +} + attrMap := make(map[string]json.RawMessage) + for _, attr := range event.Attributes { + attrMap[attr.Key] = json.RawMessage(attr.Value) +} + +attrBytes, err := json.Marshal(attrMap) + if err != nil { + return nil, err +} + unmarshaler := jsonpb.Unmarshaler{ + AllowUnknownFields: true +} + if err := unmarshaler.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg); err != nil { + return nil, err +} + +return protoMsg, nil +} + +/ ---------------------------------------------------------------------------- +/ Events +/ ---------------------------------------------------------------------------- + +type ( + / Event is a type alias for an ABCI Event + Event abci.Event + + / Events defines a slice of Event objects + Events []Event +) + +/ NewEvent creates a new Event object with a given type and slice of one or more +/ attributes. +func NewEvent(ty string, attrs ...Attribute) + +Event { + e := Event{ + Type: ty +} + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ NewAttribute returns a new key/value Attribute object. +func NewAttribute(k, v string) + +Attribute { + return Attribute{ + k, v +} +} + +/ EmptyEvents returns an empty slice of events. +func EmptyEvents() + +Events { + return make(Events, 0) +} + +func (a Attribute) + +String() + +string { + return fmt.Sprintf("%s: %s", a.Key, a.Value) +} + +/ ToKVPair converts an Attribute object into a CometBFT key/value pair. 
+func (a Attribute) + +ToKVPair() + +abci.EventAttribute { + return abci.EventAttribute{ + Key: a.Key, + Value: a.Value +} +} + +/ AppendAttributes adds one or more attributes to an Event. +func (e Event) + +AppendAttributes(attrs ...Attribute) + +Event { + for _, attr := range attrs { + e.Attributes = append(e.Attributes, attr.ToKVPair()) +} + +return e +} + +/ GetAttribute returns an attribute for a given key present in an event. +/ If the key is not found, the boolean value will be false. +func (e Event) + +GetAttribute(key string) (Attribute, bool) { + for _, attr := range e.Attributes { + if attr.Key == key { + return Attribute{ + Key: attr.Key, + Value: attr.Value +}, true +} + +} + +return Attribute{ +}, false +} + +/ AppendEvent adds an Event to a slice of events. +func (e Events) + +AppendEvent(event Event) + +Events { + return append(e, event) +} + +/ AppendEvents adds a slice of Event objects to an exist slice of Event objects. +func (e Events) + +AppendEvents(events Events) + +Events { + return append(e, events...) +} + +/ ToABCIEvents converts a slice of Event objects to a slice of abci.Event +/ objects. +func (e Events) + +ToABCIEvents() []abci.Event { + res := make([]abci.Event, len(e)) + for i, ev := range e { + res[i] = abci.Event{ + Type: ev.Type, + Attributes: ev.Attributes +} + +} + +return res +} + +/ GetAttributes returns all attributes matching a given key present in events. +/ If the key is not found, the boolean value will be false. 
+func (e Events) + +GetAttributes(key string) ([]Attribute, bool) { + attrs := make([]Attribute, 0) + for _, event := range e { + if attr, found := event.GetAttribute(key); found { + attrs = append(attrs, attr) +} + +} + +return attrs, len(attrs) > 0 +} + +/ Common event types and attribute keys +const ( + EventTypeTx = "tx" + + AttributeKeyAccountSequence = "acc_seq" + AttributeKeySignature = "signature" + AttributeKeyFee = "fee" + AttributeKeyFeePayer = "fee_payer" + + EventTypeMessage = "message" + + AttributeKeyAction = "action" + AttributeKeyModule = "module" + AttributeKeySender = "sender" + AttributeKeyAmount = "amount" +) + +type ( + / StringAttributes defines a slice of StringEvents objects. + StringEvents []StringEvent +) + +func (se StringEvents) + +String() + +string { + var sb strings.Builder + for _, e := range se { + fmt.Fprintf(&sb, "\t\t- %s\n", e.Type) + for _, attr := range e.Attributes { + fmt.Fprintf(&sb, "\t\t\t- %s\n", attr) +} + +} + +return strings.TrimRight(sb.String(), "\n") +} + +/ StringifyEvent converts an Event object to a StringEvent object. +func StringifyEvent(e abci.Event) + +StringEvent { + res := StringEvent{ + Type: e.Type +} + for _, attr := range e.Attributes { + res.Attributes = append( + res.Attributes, + Attribute{ + Key: attr.Key, + Value: attr.Value +}, + ) +} + +return res +} + +/ StringifyEvents converts a slice of Event objects into a slice of StringEvent +/ objects. +func StringifyEvents(events []abci.Event) + +StringEvents { + res := make(StringEvents, 0, len(events)) + for _, e := range events { + res = append(res, StringifyEvent(e)) +} + +return res +} + +/ MarkEventsToIndex returns the set of ABCI events, where each event's attribute +/ has it's index value marked based on the provided set of events to index. 
+func MarkEventsToIndex(events []abci.Event, indexSet map[string]struct{
+}) []abci.Event {
+	indexAll := len(indexSet) == 0
+	updatedEvents := make([]abci.Event, len(events))
+	for i, e := range events {
+		updatedEvent := abci.Event{
+			Type: e.Type,
+			Attributes: make([]abci.EventAttribute, len(e.Attributes)),
+}
+		for j, attr := range e.Attributes {
+			_, index := indexSet[fmt.Sprintf("%s.%s", e.Type, attr.Key)]
+			updatedAttr := abci.EventAttribute{
+				Key: attr.Key,
+				Value: attr.Value,
+				Index: index || indexAll,
+}
+
+updatedEvent.Attributes[j] = updatedAttr
+}
+
+updatedEvents[i] = updatedEvent
+}
+
+return updatedEvents
+}
+```
+
+Module developers should emit Events via `EventManager#EmitTypedEvent` or `EventManager#EmitEvent` in each message
+`Handler` and in each `BeginBlock`/`EndBlock` handler. The `EventManager` is accessed via
+the [`Context`](/docs/sdk/v0.53/documentation/application-framework/context), where Events should already be registered, and are emitted like this:
+
+**Typed events:**
+
+```go expandable
+package keeper
+
+import (
+
+	"bytes"
+	"context"
+	"encoding/binary"
+	"encoding/json"
+	"fmt"
+	"slices"
+	"strings"
+
+	errorsmod "cosmossdk.io/errors"
+
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+	govtypes "github.com/cosmos/cosmos-sdk/x/gov/types"
+	"github.com/cosmos/cosmos-sdk/x/group"
+	"github.com/cosmos/cosmos-sdk/x/group/errors"
+	"github.com/cosmos/cosmos-sdk/x/group/internal/math"
+	"github.com/cosmos/cosmos-sdk/x/group/internal/orm"
+)
+
+var _ group.MsgServer = Keeper{
+}
+
+/ TODO: Revisit this once we have proper gas fee framework.
+/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(20) + +func (k Keeper) + +CreateGroup(goCtx context.Context, msg *group.MsgCreateGroup) (*group.MsgCreateGroupResponse, error) { + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid admin address: %s", msg.Admin) +} + if err := k.validateMembers(msg.Members); err != nil { + return nil, errorsmod.Wrap(err, "members") +} + if err := k.assertMetadataLength(msg.Metadata, "group metadata"); err != nil { + return nil, err +} + totalWeight := math.NewDecFromInt64(0) + for _, m := range msg.Members { + if err := k.assertMetadataLength(m.Metadata, "member metadata"); err != nil { + return nil, err +} + + / Members of a group must have a positive weight. + / NOTE: group member with zero weight are only allowed when updating group members. + / If the member has a zero weight, it will be removed from the group. + weight, err := math.NewPositiveDecFromString(m.Weight) + if err != nil { + return nil, err +} + + / Adding up members weights to compute group total weight. + totalWeight, err = totalWeight.Add(weight) + if err != nil { + return nil, err +} + +} + + / Create a new group in the groupTable. + ctx := sdk.UnwrapSDKContext(goCtx) + groupInfo := &group.GroupInfo{ + Id: k.groupTable.Sequence().PeekNextVal(ctx.KVStore(k.key)), + Admin: msg.Admin, + Metadata: msg.Metadata, + Version: 1, + TotalWeight: totalWeight.String(), + CreatedAt: ctx.BlockTime(), +} + +groupID, err := k.groupTable.Create(ctx.KVStore(k.key), groupInfo) + if err != nil { + return nil, errorsmod.Wrap(err, "could not create group") +} + + / Create new group members in the groupMemberTable. 
+ for i, m := range msg.Members { + err := k.groupMemberTable.Create(ctx.KVStore(k.key), &group.GroupMember{ + GroupId: groupID, + Member: &group.Member{ + Address: m.Address, + Weight: m.Weight, + Metadata: m.Metadata, + AddedAt: ctx.BlockTime(), +}, +}) + if err != nil { + return nil, errorsmod.Wrapf(err, "could not store member %d", i) +} + +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventCreateGroup{ + GroupId: groupID +}); err != nil { + return nil, err +} + +return &group.MsgCreateGroupResponse{ + GroupId: groupID +}, nil +} + +func (k Keeper) + +UpdateGroupMembers(goCtx context.Context, msg *group.MsgUpdateGroupMembers) (*group.MsgUpdateGroupMembersResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if len(msg.MemberUpdates) == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "member updates") +} + if err := k.validateMembers(msg.MemberUpdates); err != nil { + return nil, errorsmod.Wrap(err, "members") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + totalWeight, err := math.NewNonNegativeDecFromString(g.TotalWeight) + if err != nil { + return errorsmod.Wrap(err, "group total weight") +} + for _, member := range msg.MemberUpdates { + if err := k.assertMetadataLength(member.Metadata, "group member metadata"); err != nil { + return err +} + groupMember := group.GroupMember{ + GroupId: msg.GroupId, + Member: &group.Member{ + Address: member.Address, + Weight: member.Weight, + Metadata: member.Metadata, +}, +} + + / Checking if the group member is already part of the group + var found bool + var prevGroupMember group.GroupMember + switch err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), orm.PrimaryKey(&groupMember), &prevGroupMember); { + case err == nil: + found = true + case sdkerrors.ErrNotFound.Is(err): + found = false + default: + return errorsmod.Wrap(err, "get group member") +} + +newMemberWeight, err := 
math.NewNonNegativeDecFromString(groupMember.Member.Weight) + if err != nil { + return err +} + + / Handle delete for members with zero weight. + if newMemberWeight.IsZero() { + / We can't delete a group member that doesn't already exist. + if !found { + return errorsmod.Wrap(sdkerrors.ErrNotFound, "unknown member") +} + +previousMemberWeight, err := math.NewPositiveDecFromString(prevGroupMember.Member.Weight) + if err != nil { + return err +} + + / Subtract the weight of the group member to delete from the group total weight. + totalWeight, err = math.SubNonNegative(totalWeight, previousMemberWeight) + if err != nil { + return err +} + + / Delete group member in the groupMemberTable. + if err := k.groupMemberTable.Delete(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "delete member") +} + +continue +} + / If group member already exists, handle update + if found { + previousMemberWeight, err := math.NewPositiveDecFromString(prevGroupMember.Member.Weight) + if err != nil { + return err +} + / Subtract previous weight from the group total weight. + totalWeight, err = math.SubNonNegative(totalWeight, previousMemberWeight) + if err != nil { + return err +} + / Save updated group member in the groupMemberTable. + groupMember.Member.AddedAt = prevGroupMember.Member.AddedAt + if err := k.groupMemberTable.Update(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "add member") +} + +} + +else { / else handle create. + groupMember.Member.AddedAt = ctx.BlockTime() + if err := k.groupMemberTable.Create(ctx.KVStore(k.key), &groupMember); err != nil { + return errorsmod.Wrap(err, "add member") +} + +} + / In both cases (handle + update), we need to add the new member's weight to the group total weight. 
+ totalWeight, err = totalWeight.Add(newMemberWeight) + if err != nil { + return err +} + +} + / ensure that group has one or more members + if totalWeight.IsZero() { + return errorsmod.Wrap(errors.ErrInvalid, "group must not be empty") +} + / Update group in the groupTable. + g.TotalWeight = totalWeight.String() + +g.Version++ + if err := k.validateDecisionPolicies(ctx, *g); err != nil { + return err +} + +return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "members updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupMembersResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupAdmin(goCtx context.Context, msg *group.MsgUpdateGroupAdmin) (*group.MsgUpdateGroupAdminResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if strings.EqualFold(msg.Admin, msg.NewAdmin) { + return nil, errorsmod.Wrap(errors.ErrInvalid, "new and old admin are the same") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "admin address") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.NewAdmin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "new admin address") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + g.Admin = msg.NewAdmin + g.Version++ + + return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "admin updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupAdminResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupMetadata(goCtx context.Context, msg *group.MsgUpdateGroupMetadata) (*group.MsgUpdateGroupMetadataResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if err := k.assertMetadataLength(msg.Metadata, 
"group metadata"); err != nil { + return nil, err +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Admin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "admin address") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(g *group.GroupInfo) + +error { + g.Metadata = msg.Metadata + g.Version++ + return k.groupTable.Update(ctx.KVStore(k.key), g.Id, g) +} + if err := k.doUpdateGroup(ctx, msg.GetGroupID(), msg.GetAdmin(), action, "metadata updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupMetadataResponse{ +}, nil +} + +func (k Keeper) + +CreateGroupWithPolicy(ctx context.Context, msg *group.MsgCreateGroupWithPolicy) (*group.MsgCreateGroupWithPolicyResponse, error) { + / NOTE: admin, and group message validation is performed in the CreateGroup method + groupRes, err := k.CreateGroup(ctx, &group.MsgCreateGroup{ + Admin: msg.Admin, + Members: msg.Members, + Metadata: msg.GroupMetadata, +}) + if err != nil { + return nil, errorsmod.Wrap(err, "group response") +} + groupID := groupRes.GroupId + + / NOTE: group policy message validation is performed in the CreateGroupPolicy method + groupPolicyRes, err := k.CreateGroupPolicy(ctx, &group.MsgCreateGroupPolicy{ + Admin: msg.Admin, + GroupId: groupID, + Metadata: msg.GroupPolicyMetadata, + DecisionPolicy: msg.DecisionPolicy, +}) + if err != nil { + return nil, errorsmod.Wrap(err, "group policy response") +} + if msg.GroupPolicyAsAdmin { + updateAdminReq := &group.MsgUpdateGroupAdmin{ + GroupId: groupID, + Admin: msg.Admin, + NewAdmin: groupPolicyRes.Address, +} + _, err = k.UpdateGroupAdmin(ctx, updateAdminReq) + if err != nil { + return nil, err +} + updatePolicyAddressReq := &group.MsgUpdateGroupPolicyAdmin{ + Admin: msg.Admin, + GroupPolicyAddress: groupPolicyRes.Address, + NewAdmin: groupPolicyRes.Address, +} + _, err = k.UpdateGroupPolicyAdmin(ctx, updatePolicyAddressReq) + if err != nil { + return nil, err +} + +} + +return 
&group.MsgCreateGroupWithPolicyResponse{ + GroupId: groupID, + GroupPolicyAddress: groupPolicyRes.Address +}, nil +} + +func (k Keeper) + +CreateGroupPolicy(goCtx context.Context, msg *group.MsgCreateGroupPolicy) (*group.MsgCreateGroupPolicyResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group id") +} + if err := k.assertMetadataLength(msg.GetMetadata(), "group policy metadata"); err != nil { + return nil, err +} + +policy, err := msg.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "request decision policy") +} + if err := policy.ValidateBasic(); err != nil { + return nil, errorsmod.Wrap(err, "decision policy") +} + +reqGroupAdmin, err := k.accKeeper.AddressCodec().StringToBytes(msg.GetAdmin()) + if err != nil { + return nil, errorsmod.Wrap(err, "request admin") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +groupInfo, err := k.getGroupInfo(ctx, msg.GetGroupID()) + if err != nil { + return nil, err +} + +groupAdmin, err := k.accKeeper.AddressCodec().StringToBytes(groupInfo.Admin) + if err != nil { + return nil, errorsmod.Wrap(err, "group admin") +} + + / Only current group admin is authorized to create a group policy for this + if !bytes.Equal(groupAdmin, reqGroupAdmin) { + return nil, errorsmod.Wrap(sdkerrors.ErrUnauthorized, "not group admin") +} + if err := policy.Validate(groupInfo, k.config); err != nil { + return nil, err +} + + / Generate account address of group policy. + var accountAddr sdk.AccAddress + / loop here in the rare case where a ADR-028-derived address creates a + / collision with an existing address. 
+ for { + nextAccVal := k.groupPolicySeq.NextVal(ctx.KVStore(k.key)) + derivationKey := make([]byte, 8) + +binary.BigEndian.PutUint64(derivationKey, nextAccVal) + +ac, err := authtypes.NewModuleCredential(group.ModuleName, []byte{ + GroupPolicyTablePrefix +}, derivationKey) + if err != nil { + return nil, err +} + +accountAddr = sdk.AccAddress(ac.Address()) + if k.accKeeper.GetAccount(ctx, accountAddr) != nil { + / handle a rare collision, in which case we just go on to the + / next sequence value and derive a new address. + continue +} + + / group policy accounts are unclaimable base accounts + account, err := authtypes.NewBaseAccountWithPubKey(ac) + if err != nil { + return nil, errorsmod.Wrap(err, "could not create group policy account") +} + acc := k.accKeeper.NewAccount(ctx, account) + +k.accKeeper.SetAccount(ctx, acc) + +break +} + +groupPolicy, err := group.NewGroupPolicyInfo( + accountAddr, + msg.GetGroupID(), + reqGroupAdmin, + msg.GetMetadata(), + 1, + policy, + ctx.BlockTime(), + ) + if err != nil { + return nil, err +} + if err := k.groupPolicyTable.Create(ctx.KVStore(k.key), &groupPolicy); err != nil { + return nil, errorsmod.Wrap(err, "could not create group policy") +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventCreateGroupPolicy{ + Address: accountAddr.String() +}); err != nil { + return nil, err +} + +return &group.MsgCreateGroupPolicyResponse{ + Address: accountAddr.String() +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyAdmin(goCtx context.Context, msg *group.MsgUpdateGroupPolicyAdmin) (*group.MsgUpdateGroupPolicyAdminResponse, error) { + if strings.EqualFold(msg.Admin, msg.NewAdmin) { + return nil, errorsmod.Wrap(errors.ErrInvalid, "new and old admin are same") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.NewAdmin); err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidAddress, "new admin address") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + 
groupPolicy.Admin = msg.NewAdmin + groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err := k.doUpdateGroupPolicy(ctx, msg.GroupPolicyAddress, msg.Admin, action, "group policy admin updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyAdminResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyDecisionPolicy(goCtx context.Context, msg *group.MsgUpdateGroupPolicyDecisionPolicy) (*group.MsgUpdateGroupPolicyDecisionPolicyResponse, error) { + policy, err := msg.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "decision policy") +} + if err := policy.ValidateBasic(); err != nil { + return nil, errorsmod.Wrap(err, "decision policy") +} + ctx := sdk.UnwrapSDKContext(goCtx) + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupInfo, err := k.getGroupInfo(ctx, groupPolicy.GroupId) + if err != nil { + return err +} + +err = policy.Validate(groupInfo, k.config) + if err != nil { + return err +} + +err = groupPolicy.SetDecisionPolicy(policy) + if err != nil { + return err +} + +groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err = k.doUpdateGroupPolicy(ctx, msg.GroupPolicyAddress, msg.Admin, action, "group policy's decision policy updated"); err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyDecisionPolicyResponse{ +}, nil +} + +func (k Keeper) + +UpdateGroupPolicyMetadata(goCtx context.Context, msg *group.MsgUpdateGroupPolicyMetadata) (*group.MsgUpdateGroupPolicyMetadataResponse, error) { + ctx := sdk.UnwrapSDKContext(goCtx) + metadata := msg.GetMetadata() + action := func(groupPolicy *group.GroupPolicyInfo) + +error { + groupPolicy.Metadata = metadata + groupPolicy.Version++ + return k.groupPolicyTable.Update(ctx.KVStore(k.key), groupPolicy) +} + if err := k.assertMetadataLength(metadata, "group policy metadata"); err != nil { + return nil, err +} + err := k.doUpdateGroupPolicy(ctx, 
msg.GroupPolicyAddress, msg.Admin, action, "group policy metadata updated") + if err != nil { + return nil, err +} + +return &group.MsgUpdateGroupPolicyMetadataResponse{ +}, nil +} + +func (k Keeper) + +SubmitProposal(goCtx context.Context, msg *group.MsgSubmitProposal) (*group.MsgSubmitProposalResponse, error) { + if len(msg.Proposers) == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposers") +} + if err := k.validateProposers(msg.Proposers); err != nil { + return nil, err +} + +groupPolicyAddr, err := k.accKeeper.AddressCodec().StringToBytes(msg.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrap(err, "request account address of group policy") +} + if err := k.assertMetadataLength(msg.Title, "proposal Title"); err != nil { + return nil, err +} + if err := k.assertSummaryLength(msg.Summary); err != nil { + return nil, err +} + if err := k.assertMetadataLength(msg.Metadata, "metadata"); err != nil { + return nil, err +} + + / verify that if present, the metadata title and summary equals the proposal title and summary + if len(msg.Metadata) != 0 { + proposalMetadata := govtypes.ProposalMetadata{ +} + if err := json.Unmarshal([]byte(msg.Metadata), &proposalMetadata); err == nil { + if proposalMetadata.Title != msg.Title { + return nil, fmt.Errorf("metadata title '%s' must equal proposal title '%s'", proposalMetadata.Title, msg.Title) +} + if proposalMetadata.Summary != msg.Summary { + return nil, fmt.Errorf("metadata summary '%s' must equal proposal summary '%s'", proposalMetadata.Summary, msg.Summary) +} + +} + + / if we can't unmarshal the metadata, this means the client didn't use the recommended metadata format + / nothing can be done here, and this is still a valid case, so we ignore the error +} + +msgs, err := msg.GetMsgs() + if err != nil { + return nil, errorsmod.Wrap(err, "request msgs") +} + if err := validateMsgs(msgs); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + +policyAcc, err := 
k.getGroupPolicyInfo(ctx, msg.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrapf(err, "load group policy: %s", msg.GroupPolicyAddress) +} + +groupInfo, err := k.getGroupInfo(ctx, policyAcc.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "get group by groupId of group policy") +} + + / Only members of the group can submit a new proposal. + for _, proposer := range msg.Proposers { + if !k.groupMemberTable.Has(ctx.KVStore(k.key), orm.PrimaryKey(&group.GroupMember{ + GroupId: groupInfo.Id, + Member: &group.Member{ + Address: proposer +}})) { + return nil, errorsmod.Wrapf(errors.ErrUnauthorized, "not in group: %s", proposer) +} + +} + + / Check that if the messages require signers, they are all equal to the given account address of group policy. + if err := ensureMsgAuthZ(msgs, groupPolicyAddr, k.cdc); err != nil { + return nil, err +} + +policy, err := policyAcc.GetDecisionPolicy() + if err != nil { + return nil, errorsmod.Wrap(err, "proposal group policy decision policy") +} + + / Prevent proposal that cannot succeed. + if err = policy.Validate(groupInfo, k.config); err != nil { + return nil, err +} + m := &group.Proposal{ + Id: k.proposalTable.Sequence().PeekNextVal(ctx.KVStore(k.key)), + GroupPolicyAddress: msg.GroupPolicyAddress, + Metadata: msg.Metadata, + Proposers: msg.Proposers, + SubmitTime: ctx.BlockTime(), + GroupVersion: groupInfo.Version, + GroupPolicyVersion: policyAcc.Version, + Status: group.PROPOSAL_STATUS_SUBMITTED, + ExecutorResult: group.PROPOSAL_EXECUTOR_RESULT_NOT_RUN, + VotingPeriodEnd: ctx.BlockTime().Add(policy.GetVotingPeriod()), / The voting window begins as soon as the proposal is submitted. 
+ FinalTallyResult: group.DefaultTallyResult(), + Title: msg.Title, + Summary: msg.Summary, +} + if err := m.SetMsgs(msgs); err != nil { + return nil, errorsmod.Wrap(err, "create proposal") +} + +id, err := k.proposalTable.Create(ctx.KVStore(k.key), m) + if err != nil { + return nil, errorsmod.Wrap(err, "create proposal") +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventSubmitProposal{ + ProposalId: id +}); err != nil { + return nil, err +} + + / Try to execute proposal immediately + if msg.Exec == group.Exec_EXEC_TRY { + / Consider proposers as Yes votes + for _, proposer := range msg.Proposers { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "vote on proposal") + _, err = k.Vote(ctx, &group.MsgVote{ + ProposalId: id, + Voter: proposer, + Option: group.VOTE_OPTION_YES, +}) + if err != nil { + return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, errorsmod.Wrapf(err, "the proposal was created but failed on vote for voter %s", proposer) +} + +} + + / Then try to execute the proposal + _, err = k.Exec(ctx, &group.MsgExec{ + ProposalId: id, + / We consider the first proposer as the MsgExecRequest signer + / but that could be revisited (eg using the group policy) + +Executor: msg.Proposers[0], +}) + if err != nil { + return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, errorsmod.Wrap(err, "the proposal was created but failed on exec") +} + +} + +return &group.MsgSubmitProposalResponse{ + ProposalId: id +}, nil +} + +func (k Keeper) + +WithdrawProposal(goCtx context.Context, msg *group.MsgWithdrawProposal) (*group.MsgWithdrawProposalResponse, error) { + if msg.ProposalId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id") +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Address); err != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid group policy admin / proposer address: %s", msg.Address) +} + ctx := sdk.UnwrapSDKContext(goCtx) + +proposal, err := k.getProposal(ctx, msg.ProposalId) + 
if err != nil { + return nil, err +} + + / Ensure the proposal can be withdrawn. + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED { + return nil, errorsmod.Wrapf(errors.ErrInvalid, "cannot withdraw a proposal with the status of %s", proposal.Status.String()) +} + +var policyInfo group.GroupPolicyInfo + if policyInfo, err = k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress); err != nil { + return nil, errorsmod.Wrap(err, "load group policy") +} + + / check address is the group policy admin he is in proposers list.. + if msg.Address != policyInfo.Admin && !isProposer(proposal, msg.Address) { + return nil, errorsmod.Wrapf(errors.ErrUnauthorized, "given address is neither group policy admin nor in proposers: %s", msg.Address) +} + +proposal.Status = group.PROPOSAL_STATUS_WITHDRAWN + if err := k.proposalTable.Update(ctx.KVStore(k.key), msg.ProposalId, &proposal); err != nil { + return nil, err +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventWithdrawProposal{ + ProposalId: msg.ProposalId +}); err != nil { + return nil, err +} + +return &group.MsgWithdrawProposalResponse{ +}, nil +} + +func (k Keeper) + +Vote(goCtx context.Context, msg *group.MsgVote) (*group.MsgVoteResponse, error) { + if msg.ProposalId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id") +} + + / verify vote options + if msg.Option == group.VOTE_OPTION_UNSPECIFIED { + return nil, errorsmod.Wrap(errors.ErrEmpty, "vote option") +} + if _, ok := group.VoteOption_name[int32(msg.Option)]; !ok { + return nil, errorsmod.Wrap(errors.ErrInvalid, "vote option") +} + if err := k.assertMetadataLength(msg.Metadata, "metadata"); err != nil { + return nil, err +} + if _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Voter); err != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidAddress, "invalid voter address: %s", msg.Voter) +} + ctx := sdk.UnwrapSDKContext(goCtx) + +proposal, err := k.getProposal(ctx, msg.ProposalId) + if err != nil { + return nil, err +} + + / 
Ensure that we can still accept votes for this proposal. + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED { + return nil, errorsmod.Wrap(errors.ErrInvalid, "proposal not open for voting") +} + if ctx.BlockTime().After(proposal.VotingPeriodEnd) { + return nil, errorsmod.Wrap(errors.ErrExpired, "voting period has ended already") +} + +policyInfo, err := k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrap(err, "load group policy") +} + +groupInfo, err := k.getGroupInfo(ctx, policyInfo.GroupId) + if err != nil { + return nil, err +} + + / Count and store votes. + voter := group.GroupMember{ + GroupId: groupInfo.Id, + Member: &group.Member{ + Address: msg.Voter +}} + if err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), orm.PrimaryKey(&voter), &voter); err != nil { + return nil, errorsmod.Wrapf(err, "voter address: %s", msg.Voter) +} + newVote := group.Vote{ + ProposalId: msg.ProposalId, + Voter: msg.Voter, + Option: msg.Option, + Metadata: msg.Metadata, + SubmitTime: ctx.BlockTime(), +} + + / The ORM will return an error if the vote already exists, + / making sure than a voter hasn't already voted. + if err := k.voteTable.Create(ctx.KVStore(k.key), &newVote); err != nil { + return nil, errorsmod.Wrap(err, "store vote") +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventVote{ + ProposalId: msg.ProposalId +}); err != nil { + return nil, err +} + + / Try to execute proposal immediately + if msg.Exec == group.Exec_EXEC_TRY { + _, err = k.Exec(ctx, &group.MsgExec{ + ProposalId: msg.ProposalId, + Executor: msg.Voter +}) + if err != nil { + return nil, err +} + +} + +return &group.MsgVoteResponse{ +}, nil +} + +/ doTallyAndUpdate performs a tally, and, if the tally result is final, then: +/ - updates the proposal's `Status` and `FinalTallyResult` fields, +/ - prune all the votes. 
+func (k Keeper) + +doTallyAndUpdate(ctx sdk.Context, proposal *group.Proposal, groupInfo group.GroupInfo, policyInfo group.GroupPolicyInfo) + +error { + policy, err := policyInfo.GetDecisionPolicy() + if err != nil { + return err +} + +var result group.DecisionPolicyResult + tallyResult, err := k.Tally(ctx, *proposal, policyInfo.GroupId) + if err == nil { + result, err = policy.Allow(tallyResult, groupInfo.TotalWeight) +} + if err != nil { + if err := k.pruneVotes(ctx, proposal.Id); err != nil { + return err +} + +proposal.Status = group.PROPOSAL_STATUS_REJECTED + return ctx.EventManager().EmitTypedEvents( + &group.EventTallyError{ + ProposalId: proposal.Id, + ErrorMessage: err.Error(), +}) +} + + / If the result was final (i.e. enough votes to pass) + +or if the voting + / period ended, then we consider the proposal as final. + if isFinal := result.Final || ctx.BlockTime().After(proposal.VotingPeriodEnd); isFinal { + if err := k.pruneVotes(ctx, proposal.Id); err != nil { + return err +} + +proposal.FinalTallyResult = tallyResult + if result.Allow { + proposal.Status = group.PROPOSAL_STATUS_ACCEPTED +} + +else { + proposal.Status = group.PROPOSAL_STATUS_REJECTED +} + + +} + +return nil +} + +/ Exec executes the messages from a proposal. 
+func (k Keeper) + +Exec(goCtx context.Context, msg *group.MsgExec) (*group.MsgExecResponse, error) { + if msg.ProposalId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "proposal id") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +proposal, err := k.getProposal(ctx, msg.ProposalId) + if err != nil { + return nil, err +} + if proposal.Status != group.PROPOSAL_STATUS_SUBMITTED && proposal.Status != group.PROPOSAL_STATUS_ACCEPTED { + return nil, errorsmod.Wrapf(errors.ErrInvalid, "not possible to exec with proposal status %s", proposal.Status.String()) +} + +policyInfo, err := k.getGroupPolicyInfo(ctx, proposal.GroupPolicyAddress) + if err != nil { + return nil, errorsmod.Wrap(err, "load group policy") +} + + / If proposal is still in SUBMITTED phase, it means that the voting period + / didn't end yet, and tallying hasn't been done. In this case, we need to + / tally first. + if proposal.Status == group.PROPOSAL_STATUS_SUBMITTED { + groupInfo, err := k.getGroupInfo(ctx, policyInfo.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "load group") +} + if err = k.doTallyAndUpdate(ctx, &proposal, groupInfo, policyInfo); err != nil { + return nil, err +} + +} + + / Execute proposal payload. + var logs string + if proposal.Status == group.PROPOSAL_STATUS_ACCEPTED && proposal.ExecutorResult != group.PROPOSAL_EXECUTOR_RESULT_SUCCESS { + / Caching context so that we don't update the store in case of failure. 
+ cacheCtx, flush := ctx.CacheContext() + +addr, err := k.accKeeper.AddressCodec().StringToBytes(policyInfo.Address) + if err != nil { + return nil, err +} + decisionPolicy := policyInfo.DecisionPolicy.GetCachedValue().(group.DecisionPolicy) + if results, err := k.doExecuteMsgs(cacheCtx, k.router, proposal, addr, decisionPolicy); err != nil { + proposal.ExecutorResult = group.PROPOSAL_EXECUTOR_RESULT_FAILURE + logs = fmt.Sprintf("proposal execution failed on proposal %d, because of error %s", proposal.Id, err.Error()) + +k.Logger(ctx).Info("proposal execution failed", "cause", err, "proposalID", proposal.Id) +} + +else { + proposal.ExecutorResult = group.PROPOSAL_EXECUTOR_RESULT_SUCCESS + flush() + for _, res := range results { + / NOTE: The sdk msg handler creates a new EventManager, so events must be correctly propagated back to the current context + ctx.EventManager().EmitEvents(res.GetEvents()) +} + +} + +} + + / Update proposal in proposalTable + / If proposal has successfully run, delete it from state. + if proposal.ExecutorResult == group.PROPOSAL_EXECUTOR_RESULT_SUCCESS { + if err := k.pruneProposal(ctx, proposal.Id); err != nil { + return nil, err +} + + / Emit event for proposal finalized with its result + if err := ctx.EventManager().EmitTypedEvent( + &group.EventProposalPruned{ + ProposalId: proposal.Id, + Status: proposal.Status, + TallyResult: &proposal.FinalTallyResult, +}); err != nil { + return nil, err +} + +} + +else { + store := ctx.KVStore(k.key) + if err := k.proposalTable.Update(store, proposal.Id, &proposal); err != nil { + return nil, err +} + +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventExec{ + ProposalId: proposal.Id, + Logs: logs, + Result: proposal.ExecutorResult, +}); err != nil { + return nil, err +} + +return &group.MsgExecResponse{ + Result: proposal.ExecutorResult, +}, nil +} + +/ LeaveGroup implements the MsgServer/LeaveGroup method. 
+func (k Keeper) + +LeaveGroup(goCtx context.Context, msg *group.MsgLeaveGroup) (*group.MsgLeaveGroupResponse, error) { + if msg.GroupId == 0 { + return nil, errorsmod.Wrap(errors.ErrEmpty, "group-id") +} + + _, err := k.accKeeper.AddressCodec().StringToBytes(msg.Address) + if err != nil { + return nil, errorsmod.Wrap(err, "group member") +} + ctx := sdk.UnwrapSDKContext(goCtx) + +groupInfo, err := k.getGroupInfo(ctx, msg.GroupId) + if err != nil { + return nil, errorsmod.Wrap(err, "group") +} + +groupWeight, err := math.NewNonNegativeDecFromString(groupInfo.TotalWeight) + if err != nil { + return nil, err +} + +gm, err := k.getGroupMember(ctx, &group.GroupMember{ + GroupId: msg.GroupId, + Member: &group.Member{ + Address: msg.Address +}, +}) + if err != nil { + return nil, err +} + +memberWeight, err := math.NewPositiveDecFromString(gm.Member.Weight) + if err != nil { + return nil, err +} + +updatedWeight, err := math.SubNonNegative(groupWeight, memberWeight) + if err != nil { + return nil, err +} + + / delete group member in the groupMemberTable. 
+ if err := k.groupMemberTable.Delete(ctx.KVStore(k.key), gm); err != nil { + return nil, errorsmod.Wrap(err, "group member") +} + + / update group weight + groupInfo.TotalWeight = updatedWeight.String() + +groupInfo.Version++ + if err := k.validateDecisionPolicies(ctx, groupInfo); err != nil { + return nil, err +} + if err := k.groupTable.Update(ctx.KVStore(k.key), groupInfo.Id, &groupInfo); err != nil { + return nil, err +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventLeaveGroup{ + GroupId: msg.GroupId, + Address: msg.Address, +}); err != nil { + return nil, err +} + +return &group.MsgLeaveGroupResponse{ +}, nil +} + +func (k Keeper) + +getGroupMember(ctx sdk.Context, member *group.GroupMember) (*group.GroupMember, error) { + var groupMember group.GroupMember + switch err := k.groupMemberTable.GetOne(ctx.KVStore(k.key), + orm.PrimaryKey(member), &groupMember); { + case err == nil: + break + case sdkerrors.ErrNotFound.Is(err): + return nil, sdkerrors.ErrNotFound.Wrapf("%s is not part of group %d", member.Member.Address, member.GroupId) + +default: + return nil, err +} + +return &groupMember, nil +} + +type ( + actionFn func(m *group.GroupInfo) + +error + groupPolicyActionFn func(m *group.GroupPolicyInfo) + +error +) + +/ doUpdateGroupPolicy first makes sure that the group policy admin initiated the group policy update, +/ before performing the group policy update and emitting an event. 
+func (k Keeper) + +doUpdateGroupPolicy(ctx sdk.Context, reqGroupPolicy, reqAdmin string, action groupPolicyActionFn, note string) + +error { + groupPolicyAddr, err := k.accKeeper.AddressCodec().StringToBytes(reqGroupPolicy) + if err != nil { + return errorsmod.Wrap(err, "group policy address") +} + + _, err = k.accKeeper.AddressCodec().StringToBytes(reqAdmin) + if err != nil { + return errorsmod.Wrap(err, "group policy admin") +} + +groupPolicyInfo, err := k.getGroupPolicyInfo(ctx, reqGroupPolicy) + if err != nil { + return errorsmod.Wrap(err, "load group policy") +} + + / Only current group policy admin is authorized to update a group policy. + if reqAdmin != groupPolicyInfo.Admin { + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, "not group policy admin") +} + if err := action(&groupPolicyInfo); err != nil { + return errorsmod.Wrap(err, note) +} + if err = k.abortProposals(ctx, groupPolicyAddr); err != nil { + return err +} + if err = ctx.EventManager().EmitTypedEvent(&group.EventUpdateGroupPolicy{ + Address: groupPolicyInfo.Address +}); err != nil { + return err +} + +return nil +} + +/ doUpdateGroup first makes sure that the group admin initiated the group update, +/ before performing the group update and emitting an event. +func (k Keeper) + +doUpdateGroup(ctx sdk.Context, groupID uint64, reqGroupAdmin string, action actionFn, errNote string) + +error { + groupInfo, err := k.getGroupInfo(ctx, groupID) + if err != nil { + return err +} + if !strings.EqualFold(groupInfo.Admin, reqGroupAdmin) { + return errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "not group admin; got %s, expected %s", reqGroupAdmin, groupInfo.Admin) +} + if err := action(&groupInfo); err != nil { + return errorsmod.Wrap(err, errNote) +} + if err := ctx.EventManager().EmitTypedEvent(&group.EventUpdateGroup{ + GroupId: groupID +}); err != nil { + return err +} + +return nil +} + +/ assertMetadataLength returns an error if given metadata length +/ is greater than a pre-defined maxMetadataLen. 
+func (k Keeper) + +assertMetadataLength(metadata, description string) + +error { + if metadata != "" && uint64(len(metadata)) > k.config.MaxMetadataLen { + return errorsmod.Wrapf(errors.ErrMaxLimit, description) +} + +return nil +} + +/ assertSummaryLength returns an error if given summary length +/ is greater than a pre-defined 40*MaxMetadataLen. +func (k Keeper) + +assertSummaryLength(summary string) + +error { + if summary != "" && uint64(len(summary)) > 40*k.config.MaxMetadataLen { + return errorsmod.Wrapf(errors.ErrMaxLimit, "proposal summary is too long") +} + +return nil +} + +/ validateDecisionPolicies loops through all decision policies from the group, +/ and calls each of their Validate() + +method. +func (k Keeper) + +validateDecisionPolicies(ctx sdk.Context, g group.GroupInfo) + +error { + it, err := k.groupPolicyByGroupIndex.Get(ctx.KVStore(k.key), g.Id) + if err != nil { + return err +} + +defer it.Close() + for { + var groupPolicy group.GroupPolicyInfo + _, err = it.LoadNext(&groupPolicy) + if errors.ErrORMIteratorDone.Is(err) { + break +} + if err != nil { + return err +} + +err = groupPolicy.DecisionPolicy.GetCachedValue().(group.DecisionPolicy).Validate(g, k.config) + if err != nil { + return err +} + +} + +return nil +} + +/ validateProposers checks that all proposers addresses are valid. +/ It as well verifies that there is no duplicate address. +func (k Keeper) + +validateProposers(proposers []string) + +error { + index := make(map[string]struct{ +}, len(proposers)) + for _, proposer := range proposers { + if _, exists := index[proposer]; exists { + return errorsmod.Wrapf(errors.ErrDuplicate, "address: %s", proposer) +} + + _, err := k.accKeeper.AddressCodec().StringToBytes(proposer) + if err != nil { + return errorsmod.Wrapf(err, "proposer address %s", proposer) +} + +index[proposer] = struct{ +}{ +} + +} + +return nil +} + +/ validateMembers checks that all members addresses are valid. 
+/ additionally it verifies that there is no duplicate address +/ and the member weight is non-negative. +/ Note: in state, a member's weight MUST be positive. However, in some Msgs, +/ it's possible to set a zero member weight, for example in +/ MsgUpdateGroupMembers to denote that we're removing a member. +/ It returns an error if any of the above conditions is not met. +func (k Keeper) + +validateMembers(members []group.MemberRequest) + +error { + index := make(map[string]struct{ +}, len(members)) + for _, member := range members { + if _, exists := index[member.Address]; exists { + return errorsmod.Wrapf(errors.ErrDuplicate, "address: %s", member.Address) +} + + _, err := k.accKeeper.AddressCodec().StringToBytes(member.Address) + if err != nil { + return errorsmod.Wrapf(err, "member address %s", member.Address) +} + if _, err := math.NewNonNegativeDecFromString(member.Weight); err != nil { + return errorsmod.Wrap(err, "weight must be non negative") +} + +index[member.Address] = struct{ +}{ +} + +} + +return nil +} + +/ isProposer checks that an address is a proposer of a given proposal. +func isProposer(proposal group.Proposal, address string) + +bool { + return slices.Contains(proposal.Proposers, address) +} + +func validateMsgs(msgs []sdk.Msg) + +error { + for i, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return errorsmod.Wrapf(err, "msg %d", i) +} + +} + +return nil +} +``` + +**Legacy events:** + +```go +ctx.EventManager().EmitEvent( + sdk.NewEvent(eventType, sdk.NewAttribute(attributeKey, attributeValue)), +) +``` + +Where the `EventManager` is accessed via the [`Context`](/docs/sdk/v0.53/documentation/application-framework/context). + +See the [`Msg` services](/docs/sdk/v0.53/documentation/module-system/msg-services) concept doc for a more detailed +view on how to typically implement Events and use the `EventManager` in modules. 
+
+## Subscribing to Events
+
+You can use CometBFT's [Websocket](https://docs.cometbft.com/v0.37/core/subscription) to subscribe to Events by calling the `subscribe` RPC method:
+
+```json
+{
+  "jsonrpc": "2.0",
+  "method": "subscribe",
+  "id": "0",
+  "params": {
+    "query": "tm.event='eventCategory' AND eventType.eventAttribute='attributeValue'"
+  }
+}
+```
+
+The main `eventCategory` values you can subscribe to are:
+
+- `NewBlock`: Contains Events triggered during `BeginBlock` and `EndBlock`.
+- `Tx`: Contains Events triggered during `DeliverTx` (i.e. transaction processing).
+- `ValidatorSetUpdates`: Contains validator set updates for the block.
+
+These Events are triggered from the `state` package after a block is committed. You can get the
+full list of Event categories [on the CometBFT Go documentation](https://pkg.go.dev/github.com/cometbft/cometbft/types#pkg-constants).
+
+The `type` and `attribute` value of the `query` allow you to filter the specific Event you are looking for. For example, a `Mint` transaction triggers an Event of type `EventMint` and has an `Id` and an `Owner` as `attributes` (as defined in the [`events.proto` file of the `NFT` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/nft/v1beta1/event.proto#L21-L31)).
+
+Subscribing to this Event would be done like so:
+
+```json
+{
+  "jsonrpc": "2.0",
+  "method": "subscribe",
+  "id": "0",
+  "params": {
+    "query": "tm.event='Tx' AND mint.owner='ownerAddress'"
+  }
+}
+```
+
+where `ownerAddress` is an address following the [`AccAddress`](/docs/sdk/v0.53/documentation/protocol-development/accounts#addresses) format.
+
+The same method can be used to subscribe to [legacy events](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/bank/types/events.go).
+
+## Default Events
+
+There are a few events that are automatically emitted for all messages, directly from `baseapp`.
+
+- `message.action`: The name of the message type.
+- `message.sender`: The address of the message signer. +- `message.module`: The name of the module that emitted the message. + + + The module name is assumed by `baseapp` to be the second element of the + message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`. In case a module + does not follow the standard message path, (e.g. IBC), it is advised to keep + emitting the module name event. `Baseapp` only emits that event if the module + have not already done so. + diff --git a/docs/sdk/v0.53/api-reference/service-apis/grpc_rest.mdx b/docs/sdk/v0.53/api-reference/service-apis/grpc_rest.mdx new file mode 100644 index 00000000..4047805e --- /dev/null +++ b/docs/sdk/v0.53/api-reference/service-apis/grpc_rest.mdx @@ -0,0 +1,223 @@ +--- +title: "gRPC, REST, and CometBFT Endpoints" +--- + +## Synopsis + +This document presents an overview of all the endpoints a node exposes: gRPC, REST as well as some other endpoints. + +## An Overview of All Endpoints + +Each node exposes the following endpoints for users to interact with a node, each endpoint is served on a different port. Details on how to configure each endpoint is provided in the endpoint's own section. + +- the gRPC server (default port: `9090`), +- the REST server (default port: `1317`), +- the CometBFT RPC endpoint (default port: `26657`). + + + The node also exposes some other endpoints, such as the CometBFT P2P endpoint, + or the [Prometheus endpoint](https://docs.cometbft.com/v0.37/core/metrics), + which are not directly related to the Cosmos SDK. Please refer to the + [CometBFT documentation](https://docs.cometbft.com/v0.37/core/configuration) + for more information about these endpoints. + + + + All endpoints are defaulted to localhost and must be modified to be exposed to + the public internet. + + +## gRPC Server + +In the Cosmos SDK, Protobuf is the main [encoding](/docs/sdk/v0.53/documentation/protocol-development/encoding) library. 
This brings a wide range of Protobuf-based tools that can be plugged into the Cosmos SDK. One such tool is [gRPC](https://grpc.io), a modern open-source high performance RPC framework that has decent client support in several languages. + +Each module exposes a [Protobuf `Query` service](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#queries) that defines state queries. The `Query` services and a transaction service used to broadcast transactions are hooked up to the gRPC server via the following function inside the application: + +```go expandable +package types + +import ( + + "encoding/json" + "io" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +interface{ +} + +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. 
+ Application interface { + ABCI + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServerWithSkipCheckHeader registers gRPC services directly with the gRPC + / server and bypass check header flag. + RegisterGRPCServerWithSkipCheckHeader(grpc.Server, bool) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). + RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for CometBFT queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context, config.Config) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +storetypes.CommitMultiStore + + / Return the snapshot manager + SnapshotManager() *snapshots.Manager + + / Close is called in start cmd to gracefully cleanup resources. + / Must be safe to be called multiple times. + Close() + +error +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. 
+ AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) +``` + +Note: It is not possible to expose any [Protobuf `Msg` service](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages) endpoints via gRPC. Transactions must be generated and signed using the CLI or programmatically before they can be broadcasted using gRPC. See [Generating, Signing, and Broadcasting Transactions](/docs/sdk/v0.53/documentation/operations/txs) for more information. + +The `grpc.Server` is a concrete gRPC server, which spawns and serves all gRPC query requests and a broadcast transaction request. This server can be configured inside `~/.simapp/config/app.toml`: + +- `grpc.enable = true|false` field defines if the gRPC server should be enabled. Defaults to `true`. +- `grpc.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `localhost:9090`. + + + `~/.simapp` is the directory where the node's configuration and databases are + stored. By default, it's set to `~/.{app_name}`. + + +Once the gRPC server is started, you can send requests to it using a gRPC client. Some examples are given in our [Interact with the Node](/docs/sdk/v0.53/documentation/operations/interact-node#using-grpc) tutorial. + +An overview of all available gRPC endpoints shipped with the Cosmos SDK is [Protobuf documentation](https://buf.build/cosmos/cosmos-sdk). + +## REST Server + +Cosmos SDK supports REST routes via gRPC-gateway. + +All routes are configured under the following fields in `~/.simapp/config/app.toml`: + +- `api.enable = true|false` field defines if the REST server should be enabled. Defaults to `false`. +- `api.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `tcp://localhost:1317`. 
+- some additional API configuration options are defined in `~/.simapp/config/app.toml`, along with comments; please refer to that file directly.
+
+### gRPC-gateway REST Routes
+
+If, for various reasons, you cannot use gRPC (for example, you are building a web application, and browsers don't support HTTP2 on which gRPC is built), then the Cosmos SDK offers REST routes via gRPC-gateway.
+
+[gRPC-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) is a tool to expose gRPC endpoints as REST endpoints. For each gRPC endpoint defined in a Protobuf `Query` service, the Cosmos SDK offers a REST equivalent. For instance, querying a balance could be done via the `/cosmos.bank.v1beta1.QueryAllBalances` gRPC endpoint, or alternatively via the gRPC-gateway `"/cosmos/bank/v1beta1/balances/{address}"` REST endpoint: both will return the same result. For each RPC method defined in a Protobuf `Query` service, the corresponding REST endpoint is defined as an option:
+
+```protobuf
+  // AllBalances queries the balance of all coins for a single account.
+  //
+  // When called from another module, this query might consume a high amount of
+  // gas if the pagination field is incorrectly set.
+  rpc AllBalances(QueryAllBalancesRequest) returns (QueryAllBalancesResponse) {
+    option (cosmos.query.v1.module_query_safe) = true;
+    option (google.api.http).get = "/cosmos/bank/v1beta1/balances/{address}";
+  }
+```
+
+For application developers, gRPC-gateway REST routes need to be wired up to the REST server. This is done by calling the `RegisterGRPCGatewayRoutes` function on the ModuleManager.
+
+### Swagger
+
+A [Swagger](https://swagger.io/) (or OpenAPIv2) specification file is exposed under the `/swagger` route on the API server. Swagger is an open specification describing the API endpoints a server serves, including description, input arguments, return types and much more about each endpoint.
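Putting the REST-related settings together, the relevant `app.toml` section might look like the following sketch (field names follow the options described in this document; values are illustrative, not recommended defaults):

```toml
# ~/.simapp/config/app.toml (excerpt; values are illustrative)
[api]
# Enable the REST / gRPC-gateway server.
enable = true
# Address the REST server binds to.
address = "tcp://localhost:1317"
# Serve the OpenAPIv2 (Swagger) spec under /swagger.
swagger = true
```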
+
+Enabling the `/swagger` endpoint is configurable inside `~/.simapp/config/app.toml` via the `api.swagger` field, which is set to false by default.
+
+For application developers, you may want to generate your own Swagger definitions based on your custom modules.
+The Cosmos SDK's [Swagger generation script](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/scripts/protoc-swagger-gen.sh) is a good place to start.
+
+## CometBFT RPC
+
+Independently from the Cosmos SDK, CometBFT also exposes an RPC server. This RPC server can be configured by tuning parameters under the `rpc` table in `~/.simapp/config/config.toml`; the default listening address is `tcp://localhost:26657`. An OpenAPI specification of all CometBFT RPC endpoints is available [here](https://docs.cometbft.com/main/rpc/).
+
+Some CometBFT RPC endpoints are directly related to the Cosmos SDK:
+
+- `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings:
+  - any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.Query/AllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf.
+  - `/app/simulate`: this will simulate a transaction, and return some information such as gas used.
+  - `/app/version`: this will return the application's version.
+  - `/store/{storeName}/key`: this will directly query the named store for data associated with the key represented in the `data` parameter.
+  - `/store/{storeName}/subspace`: this will directly query the named store for key/value pairs in which the key has the value of the `data` parameter as a prefix.
+  - `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port.
+  - `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID.
+- `/broadcast_tx_{sync,async,commit}`: these 3 endpoints will broadcast a transaction to other peers.
CLI, gRPC and REST expose [a way to broadcast transactions](/docs/sdk/v0.53/documentation/protocol-development/transactions#broadcasting-the-transaction), but they all use these 3 CometBFT RPCs under the hood. + +## Comparison Table + +| Name | Advantages | Disadvantages | +| ------------ | ---------- | ------------- | +| gRPC | - can use code-generated stubs in various languages<br/>- supports streaming and bidirectional communication (HTTP2)<br/>- small wire binary sizes, faster transmission | - based on HTTP2, not available in browsers<br/>- learning curve (mostly due to Protobuf) | +| REST | - ubiquitous<br/>- client libraries in all languages, faster implementation | - only supports unary request-response communication (HTTP1.1)<br/>- bigger over-the-wire message sizes (JSON) | +| CometBFT RPC | - easy to use | - bigger over-the-wire message sizes (JSON) | diff --git a/docs/sdk/v0.53/api-reference/service-apis/proto-docs.mdx b/docs/sdk/v0.53/api-reference/service-apis/proto-docs.mdx new file mode 100644 index 00000000..0f3593eb --- /dev/null +++ b/docs/sdk/v0.53/api-reference/service-apis/proto-docs.mdx @@ -0,0 +1,6 @@ +--- +title: Protobuf Documentation +description: See Cosmos SDK Buf Proto-docs +--- + +See [Cosmos SDK Buf Proto-docs](https://buf.build/cosmos/cosmos-sdk/docs/main) diff --git a/docs/sdk/v0.53/api-reference/service-apis/query-lifecycle.mdx b/docs/sdk/v0.53/api-reference/service-apis/query-lifecycle.mdx new file mode 100644 index 00000000..e0e556c8 --- /dev/null +++ b/docs/sdk/v0.53/api-reference/service-apis/query-lifecycle.mdx @@ -0,0 +1,1593 @@ +--- +title: Query Lifecycle +--- + +## Synopsis + +This document describes the lifecycle of a query in a Cosmos SDK application, from the user interface to application stores and back. The query is referred to as `MyQuery`. + + +**Pre-requisite Readings** + +- [Transaction Lifecycle](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle) + + + +## Query Creation + +A [**query**](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#queries) is a request for information made by end-users of applications through an interface and processed by a full-node. Users can query information about the network, the application itself, and application state directly from the application's stores or modules. Note that queries are different from [transactions](/docs/sdk/v0.53/documentation/protocol-development/transactions) (view the lifecycle [here](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle)), particularly in that they do not require consensus to be processed (as they do not trigger state-transitions); they can be fully handled by one full-node.
+ +For the purpose of explaining the query lifecycle, let's say the query, `MyQuery`, is requesting a list of delegations made by a certain delegator address in the application called `simapp`. As is to be expected, the [`staking`](/docs/sdk/v0.53/documentation/module-system/staking) module handles this query. But first, there are a few ways `MyQuery` can be created by users. + +### CLI + +The main interface for an application is the command-line interface. Users connect to a full-node and run the CLI directly from their machines - the CLI interacts directly with the full-node. To create `MyQuery` from their terminal, users type the following command:

```bash
simd query staking delegations <delegatorAddress>
```

This query command was defined by the [`staking`](/docs/sdk/v0.53/documentation/module-system/staking) module developer and added to the list of subcommands by the application developer when creating the CLI. + +Note that the general format is as follows:

```bash
simd query [moduleName] [command] <arguments> --flag <value>
```

To provide values such as `--node` (the full-node the CLI connects to), the user can use the [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml) config file to set them or provide them as flags. + +The CLI understands a specific set of commands, defined in a hierarchical structure by the application developer: from the [root command](/docs/sdk/v0.53/api-reference/client-tools/cli#root-command) (`simd`), the type of command (`query`), the module that contains the command (`staking`), and the command itself (`delegations`). Thus, the CLI knows exactly which module handles this command and directly passes the call there. + +### gRPC + +Another interface through which users can make queries is [gRPC](https://grpc.io) requests to a [gRPC server](/docs/sdk/v0.53/api-reference/service-apis/grpc_rest#grpc-server).
The endpoints are defined as [Protocol Buffers](https://developers.google.com/protocol-buffers) service methods inside `.proto` files, written in Protobuf's own language-agnostic interface definition language (IDL). The Protobuf ecosystem developed tools for code-generation from `*.proto` files into various languages. These tools make it easy to build gRPC clients. + +One such tool is [grpcurl](https://github.com/fullstorydev/grpcurl), and a gRPC request for `MyQuery` using this client looks like:

```bash
grpcurl \
    -plaintext \                                         # We want results in plain text
    -import-path ./proto \                               # Import these .proto files
    -proto ./proto/cosmos/staking/v1beta1/query.proto \  # Look into this .proto file for the Query protobuf service
    -d '{"address":"$MY_DELEGATOR"}' \                   # Query arguments
    localhost:9090 \                                     # gRPC server endpoint
    cosmos.staking.v1beta1.Query/Delegations             # Fully-qualified service method name
```

### REST + +Another interface through which users can make queries is HTTP requests to a [REST server](/docs/sdk/v0.53/api-reference/service-apis/grpc_rest#rest-server). The REST server is fully auto-generated from Protobuf services, using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). + +An example HTTP request for `MyQuery` looks like:

```bash
GET http://localhost:1317/cosmos/staking/v1beta1/delegators/{delegatorAddr}/delegations
```

## How Queries are Handled by the CLI + +The preceding examples show how an external user can interact with a node by querying its state. To understand in more detail the exact lifecycle of a query, let's dig into how the CLI prepares the query, and how the node handles it. The interactions from the users' perspective are a bit different, but the underlying functions are almost identical because they are implementations of the same command defined by the module developer. This step of processing happens within the CLI, gRPC, or REST server, and heavily involves a `client.Context`.
+ +### Context + +The first thing that is created in the execution of a CLI command is a `client.Context`. A `client.Context` is an object that stores all the data needed to process a request on the user side. In particular, a `client.Context` stores the following: + +- **Codec**: The [encoder/decoder](/docs/sdk/v0.53/documentation/protocol-development/encoding) used by the application, used to marshal the parameters and query before making the CometBFT RPC request and to unmarshal the returned response into a JSON object. The default codec used by the CLI is Protobuf. +- **Account Decoder**: The account decoder from the [`auth`](/docs/sdk/v0.53/documentation/module-system/auth) module, which translates `[]byte`s into accounts. +- **RPC Client**: The CometBFT RPC Client, or node, to which requests are relayed. +- **Keyring**: A [Key Manager](/docs/sdk/v0.53/documentation/protocol-development/accounts#keyring) used to sign transactions and handle other operations with keys. +- **Output Writer**: A [Writer](https://pkg.go.dev/io/#Writer) used to output the response. +- **Configurations**: The flags configured by the user for this command, including `--height`, specifying the height of the blockchain to query, and `--indent`, which adds indentation to the JSON response. + +The `client.Context` also contains various functions such as `Query()`, which retrieves the RPC Client and makes an ABCI call to relay a query to a full-node.
+ +```go expandable +package client + +import ( + + "bufio" + "context" + "encoding/json" + "fmt" + "io" + "os" + "path" + "strings" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/viper" + "google.golang.org/grpc" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ PreprocessTxFn defines a hook by which chains can preprocess transactions before broadcasting +type PreprocessTxFn func(chainID string, key keyring.KeyType, tx TxBuilder) + +error + +/ Context implements a typical context created in SDK modules for transaction +/ handling and queries. +type Context struct { + FromAddress sdk.AccAddress + Client CometRPC + GRPCClient *grpc.ClientConn + ChainID string + Codec codec.Codec + InterfaceRegistry codectypes.InterfaceRegistry + Input io.Reader + Keyring keyring.Keyring + KeyringOptions []keyring.Option + KeyringDir string + KeyringDefaultKeyName string + Output io.Writer + OutputFormat string + Height int64 + HomeDir string + From string + BroadcastMode string + FromName string + SignModeStr string + UseLedger bool + Simulate bool + GenerateOnly bool + Offline bool + SkipConfirm bool + TxConfig TxConfig + AccountRetriever AccountRetriever + NodeURI string + FeePayer sdk.AccAddress + FeeGranter sdk.AccAddress + Viper *viper.Viper + LedgerHasProtobuf bool + PreprocessTxHook PreprocessTxFn + + / IsAux is true when the signer is an auxiliary signer (e.g. the tipper). + IsAux bool + + / TODO: Deprecated (remove). + LegacyAmino *codec.LegacyAmino + + / CmdContext is the context.Context from the Cobra command. + CmdContext context.Context +} + +/ WithCmdContext returns a copy of the context with an updated context.Context, +/ usually set to the cobra cmd context. 
+func (ctx Context) + +WithCmdContext(c context.Context) + +Context { + ctx.CmdContext = c + return ctx +} + +/ WithKeyring returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyring(k keyring.Keyring) + +Context { + ctx.Keyring = k + return ctx +} + +/ WithKeyringOptions returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyringOptions(opts ...keyring.Option) + +Context { + ctx.KeyringOptions = opts + return ctx +} + +/ WithInput returns a copy of the context with an updated input. +func (ctx Context) + +WithInput(r io.Reader) + +Context { + / convert to a bufio.Reader to have a shared buffer between the keyring and the + / the Commands, ensuring a read from one advance the read pointer for the other. + / see https://github.com/cosmos/cosmos-sdk/issues/9566. + ctx.Input = bufio.NewReader(r) + +return ctx +} + +/ WithCodec returns a copy of the Context with an updated Codec. +func (ctx Context) + +WithCodec(m codec.Codec) + +Context { + ctx.Codec = m + return ctx +} + +/ WithLegacyAmino returns a copy of the context with an updated LegacyAmino codec. +/ TODO: Deprecated (remove). +func (ctx Context) + +WithLegacyAmino(cdc *codec.LegacyAmino) + +Context { + ctx.LegacyAmino = cdc + return ctx +} + +/ WithOutput returns a copy of the context with an updated output writer (e.g. stdout). +func (ctx Context) + +WithOutput(w io.Writer) + +Context { + ctx.Output = w + return ctx +} + +/ WithFrom returns a copy of the context with an updated from address or name. +func (ctx Context) + +WithFrom(from string) + +Context { + ctx.From = from + return ctx +} + +/ WithOutputFormat returns a copy of the context with an updated OutputFormat field. +func (ctx Context) + +WithOutputFormat(format string) + +Context { + ctx.OutputFormat = format + return ctx +} + +/ WithNodeURI returns a copy of the context with an updated node URI. 
+func (ctx Context) + +WithNodeURI(nodeURI string) + +Context { + ctx.NodeURI = nodeURI + return ctx +} + +/ WithHeight returns a copy of the context with an updated height. +func (ctx Context) + +WithHeight(height int64) + +Context { + ctx.Height = height + return ctx +} + +/ WithClient returns a copy of the context with an updated RPC client +/ instance. +func (ctx Context) + +WithClient(client CometRPC) + +Context { + ctx.Client = client + return ctx +} + +/ WithGRPCClient returns a copy of the context with an updated GRPC client +/ instance. +func (ctx Context) + +WithGRPCClient(grpcClient *grpc.ClientConn) + +Context { + ctx.GRPCClient = grpcClient + return ctx +} + +/ WithUseLedger returns a copy of the context with an updated UseLedger flag. +func (ctx Context) + +WithUseLedger(useLedger bool) + +Context { + ctx.UseLedger = useLedger + return ctx +} + +/ WithChainID returns a copy of the context with an updated chain ID. +func (ctx Context) + +WithChainID(chainID string) + +Context { + ctx.ChainID = chainID + return ctx +} + +/ WithHomeDir returns a copy of the Context with HomeDir set. +func (ctx Context) + +WithHomeDir(dir string) + +Context { + if dir != "" { + ctx.HomeDir = dir +} + +return ctx +} + +/ WithKeyringDir returns a copy of the Context with KeyringDir set. +func (ctx Context) + +WithKeyringDir(dir string) + +Context { + ctx.KeyringDir = dir + return ctx +} + +/ WithKeyringDefaultKeyName returns a copy of the Context with KeyringDefaultKeyName set. 
+func (ctx Context) + +WithKeyringDefaultKeyName(keyName string) + +Context { + ctx.KeyringDefaultKeyName = keyName + return ctx +} + +/ WithGenerateOnly returns a copy of the context with updated GenerateOnly value +func (ctx Context) + +WithGenerateOnly(generateOnly bool) + +Context { + ctx.GenerateOnly = generateOnly + return ctx +} + +/ WithSimulation returns a copy of the context with updated Simulate value +func (ctx Context) + +WithSimulation(simulate bool) + +Context { + ctx.Simulate = simulate + return ctx +} + +/ WithOffline returns a copy of the context with updated Offline value. +func (ctx Context) + +WithOffline(offline bool) + +Context { + ctx.Offline = offline + return ctx +} + +/ WithFromName returns a copy of the context with an updated from account name. +func (ctx Context) + +WithFromName(name string) + +Context { + ctx.FromName = name + return ctx +} + +/ WithFromAddress returns a copy of the context with an updated from account +/ address. +func (ctx Context) + +WithFromAddress(addr sdk.AccAddress) + +Context { + ctx.FromAddress = addr + return ctx +} + +/ WithFeePayerAddress returns a copy of the context with an updated fee payer account +/ address. +func (ctx Context) + +WithFeePayerAddress(addr sdk.AccAddress) + +Context { + ctx.FeePayer = addr + return ctx +} + +/ WithFeeGranterAddress returns a copy of the context with an updated fee granter account +/ address. +func (ctx Context) + +WithFeeGranterAddress(addr sdk.AccAddress) + +Context { + ctx.FeeGranter = addr + return ctx +} + +/ WithBroadcastMode returns a copy of the context with an updated broadcast +/ mode. +func (ctx Context) + +WithBroadcastMode(mode string) + +Context { + ctx.BroadcastMode = mode + return ctx +} + +/ WithSignModeStr returns a copy of the context with an updated SignMode +/ value. 
+func (ctx Context) + +WithSignModeStr(signModeStr string) + +Context { + ctx.SignModeStr = signModeStr + return ctx +} + +/ WithSkipConfirmation returns a copy of the context with an updated SkipConfirm +/ value. +func (ctx Context) + +WithSkipConfirmation(skip bool) + +Context { + ctx.SkipConfirm = skip + return ctx +} + +/ WithTxConfig returns the context with an updated TxConfig +func (ctx Context) + +WithTxConfig(generator TxConfig) + +Context { + ctx.TxConfig = generator + return ctx +} + +/ WithAccountRetriever returns the context with an updated AccountRetriever +func (ctx Context) + +WithAccountRetriever(retriever AccountRetriever) + +Context { + ctx.AccountRetriever = retriever + return ctx +} + +/ WithInterfaceRegistry returns the context with an updated InterfaceRegistry +func (ctx Context) + +WithInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) + +Context { + ctx.InterfaceRegistry = interfaceRegistry + return ctx +} + +/ WithViper returns the context with Viper field. This Viper instance is used to read +/ client-side config from the config file. +func (ctx Context) + +WithViper(prefix string) + +Context { + v := viper.New() + if prefix == "" { + executableName, _ := os.Executable() + +prefix = path.Base(executableName) +} + +v.SetEnvPrefix(prefix) + +v.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_")) + +v.AutomaticEnv() + +ctx.Viper = v + return ctx +} + +/ WithAux returns a copy of the context with an updated IsAux value. +func (ctx Context) + +WithAux(isAux bool) + +Context { + ctx.IsAux = isAux + return ctx +} + +/ WithLedgerHasProto returns the context with the provided boolean value, indicating +/ whether the target Ledger application can support Protobuf payloads. 
+func (ctx Context) + +WithLedgerHasProtobuf(val bool) + +Context { + ctx.LedgerHasProtobuf = val + return ctx +} + +/ WithPreprocessTxHook returns the context with the provided preprocessing hook, which +/ enables chains to preprocess the transaction using the builder. +func (ctx Context) + +WithPreprocessTxHook(preprocessFn PreprocessTxFn) + +Context { + ctx.PreprocessTxHook = preprocessFn + return ctx +} + +/ PrintString prints the raw string to ctx.Output if it's defined, otherwise to os.Stdout +func (ctx Context) + +PrintString(str string) + +error { + return ctx.PrintBytes([]byte(str)) +} + +/ PrintBytes prints the raw bytes to ctx.Output if it's defined, otherwise to os.Stdout. +/ NOTE: for printing a complex state object, you should use ctx.PrintOutput +func (ctx Context) + +PrintBytes(o []byte) + +error { + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err := writer.Write(o) + +return err +} + +/ PrintProto outputs toPrint to the ctx.Output based on ctx.OutputFormat which is +/ either text or json. If text, toPrint will be YAML encoded. Otherwise, toPrint +/ will be JSON encoded using ctx.Codec. An error is returned upon failure. +func (ctx Context) + +PrintProto(toPrint proto.Message) + +error { + / always serialize JSON initially because proto json can't be directly YAML encoded + out, err := ctx.Codec.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintObjectLegacy is a variant of PrintProto that doesn't require a proto.Message type +/ and uses amino JSON encoding. +/ Deprecated: It will be removed in the near future! +func (ctx Context) + +PrintObjectLegacy(toPrint any) + +error { + out, err := ctx.LegacyAmino.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintRaw is a variant of PrintProto that doesn't require a proto.Message type +/ and uses a raw JSON message. No marshaling is performed. 
+func (ctx Context) + +PrintRaw(toPrint json.RawMessage) + +error { + return ctx.printOutput(toPrint) +} + +func (ctx Context) + +printOutput(out []byte) + +error { + var err error + if ctx.OutputFormat == "text" { + out, err = yaml.JSONToYAML(out) + if err != nil { + return err +} + +} + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err = writer.Write(out) + if err != nil { + return err +} + if ctx.OutputFormat != "text" { + / append new-line for formats besides YAML + _, err = writer.Write([]byte("\n")) + if err != nil { + return err +} + +} + +return nil +} + +/ GetFromFields returns a from account address, account name and keyring type, given either an address or key name. +/ If clientCtx.Simulate is true the keystore is not accessed and a valid address must be provided +/ If clientCtx.GenerateOnly is true the keystore is only accessed if a key name is provided +/ If from is empty, the default key if specified in the context will be used +func GetFromFields(clientCtx Context, kr keyring.Keyring, from string) (sdk.AccAddress, string, keyring.KeyType, error) { + if from == "" && clientCtx.KeyringDefaultKeyName != "" { + from = clientCtx.KeyringDefaultKeyName + _ = clientCtx.PrintString(fmt.Sprintf("No key name or address provided; using the default key: %s\n", clientCtx.KeyringDefaultKeyName)) +} + if from == "" { + return nil, "", 0, nil +} + +addr, err := sdk.AccAddressFromBech32(from) + switch { + case clientCtx.Simulate: + if err != nil { + return nil, "", 0, fmt.Errorf("a valid bech32 address must be provided in simulation mode: %w", err) +} + +return addr, "", 0, nil + case clientCtx.GenerateOnly: + if err == nil { + return addr, "", 0, nil +} + +} + +var k *keyring.Record + if err == nil { + k, err = kr.KeyByAddress(addr) + if err != nil { + return nil, "", 0, err +} + +} + +else { + k, err = kr.Key(from) + if err != nil { + return nil, "", 0, err +} + +} + +addr, err = k.GetAddress() + if err != nil { + return nil, "", 0, err +} 
+ +return addr, k.Name, k.GetType(), nil +} + +/ NewKeyringFromBackend gets a Keyring object from a backend +func NewKeyringFromBackend(ctx Context, backend string) (keyring.Keyring, error) { + if ctx.Simulate { + backend = keyring.BackendMemory +} + +return keyring.New(sdk.KeyringServiceName(), backend, ctx.KeyringDir, ctx.Input, ctx.Codec, ctx.KeyringOptions...) +} +``` + +The `client.Context`'s primary role is to store data used during interactions with the end-user and provide methods to interact with this data - it is used before and after the query is processed by the full-node. Specifically, in handling `MyQuery`, the `client.Context` is utilized to encode the query parameters, retrieve the full-node, and write the output. Prior to being relayed to a full-node, the query needs to be encoded into a `[]byte` form, as full-nodes are application-agnostic and do not understand specific types. The full-node (RPC Client) itself is retrieved using the `client.Context`, which knows which node the user CLI is connected to. The query is relayed to this full-node to be processed. Finally, the `client.Context` contains a `Writer` to write output when the response is returned. These steps are further described in later sections. + +### Arguments and Route Creation + +At this point in the lifecycle, the user has created a CLI command with all of the data they wish to include in their query. A `client.Context` exists to assist in the rest of the `MyQuery`'s journey. Now, the next step is to parse the command or request, extract the arguments, and encode everything. These steps all happen on the user side within the interface they are interacting with. + +#### Encoding + +In our case (querying an address's delegations), `MyQuery` contains an [address](/docs/sdk/v0.53/documentation/protocol-development/accounts#addresses) `delegatorAddress` as its only argument. However, the request can only contain `[]byte`s, as it is ultimately relayed to a consensus engine (e.g. 
CometBFT) of a full-node that has no inherent knowledge of the application types. Thus, the `codec` of `client.Context` is used to marshal the address. + +Here is what the code looks like for the CLI command: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/staking/client/cli/query.go#L315-L318 +``` + +#### gRPC Query Client Creation + +The Cosmos SDK leverages code generated from Protobuf services to make queries. The `staking` module's `MyQuery` service generates a `queryClient`, which the CLI uses to make queries. Here is the relevant code: + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/staking/client/cli/query.go#L308-L343 +``` + +Under the hood, the `client.Context` has a `Query()` function used to retrieve the pre-configured node and relay a query to it; the function takes the query fully-qualified service method name as path (in our case: `/cosmos.staking.v1beta1.Query/Delegations`), and arguments as parameters. It first retrieves the RPC Client (called the [**node**](/docs/sdk/v0.53/documentation/operations/node)) configured by the user to relay this query to, and creates the `ABCIQueryOptions` (parameters formatted for the ABCI call). The node is then used to make the ABCI call, `ABCIQueryWithOptions()`. + +Here is what the code looks like: + +```go expandable +package client + +import ( + + "context" + "fmt" + "strings" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + rpcclient "github.com/cometbft/cometbft/rpc/client" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + "cosmossdk.io/store/rootmulti" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ GetNode returns an RPC client. If the context's client is not defined, an +/ error is returned. 
+func (ctx Context) + +GetNode() (CometRPC, error) { + if ctx.Client == nil { + return nil, errors.New("no RPC client is defined in offline mode") +} + +return ctx.Client, nil +} + +/ Query performs a query to a CometBFT node with the provided path. +/ It returns the result and height of the query upon success or an error if +/ the query fails. +func (ctx Context) + +Query(path string) ([]byte, int64, error) { + return ctx.query(path, nil) +} + +/ QueryWithData performs a query to a CometBFT node with the provided path +/ and a data payload. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +QueryWithData(path string, data []byte) ([]byte, int64, error) { + return ctx.query(path, data) +} + +/ QueryStore performs a query to a CometBFT node with the provided key and +/ store name. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +QueryStore(key []byte, storeName string) ([]byte, int64, error) { + return ctx.queryStore(key, storeName, "key") +} + +/ QueryABCI performs a query to a CometBFT node with the provide RequestQuery. +/ It returns the ResultQuery obtained from the query. The height used to perform +/ the query is the RequestQuery Height if it is non-zero, otherwise the context +/ height is used. +func (ctx Context) + +QueryABCI(req abci.RequestQuery) (abci.ResponseQuery, error) { + return ctx.queryABCI(req) +} + +/ GetFromAddress returns the from address from the context's name. 
+func (ctx Context) + +GetFromAddress() + +sdk.AccAddress { + return ctx.FromAddress +} + +/ GetFeePayerAddress returns the fee granter address from the context +func (ctx Context) + +GetFeePayerAddress() + +sdk.AccAddress { + return ctx.FeePayer +} + +/ GetFeeGranterAddress returns the fee granter address from the context +func (ctx Context) + +GetFeeGranterAddress() + +sdk.AccAddress { + return ctx.FeeGranter +} + +/ GetFromName returns the key name for the current context. +func (ctx Context) + +GetFromName() + +string { + return ctx.FromName +} + +func (ctx Context) + +queryABCI(req abci.RequestQuery) (abci.ResponseQuery, error) { + node, err := ctx.GetNode() + if err != nil { + return abci.ResponseQuery{ +}, err +} + +var queryHeight int64 + if req.Height != 0 { + queryHeight = req.Height +} + +else { + / fallback on the context height + queryHeight = ctx.Height +} + opts := rpcclient.ABCIQueryOptions{ + Height: queryHeight, + Prove: req.Prove, +} + +result, err := node.ABCIQueryWithOptions(context.Background(), req.Path, req.Data, opts) + if err != nil { + return abci.ResponseQuery{ +}, err +} + if !result.Response.IsOK() { + return abci.ResponseQuery{ +}, sdkErrorToGRPCError(result.Response) +} + + / data from trusted node or subspace query doesn't need verification + if !opts.Prove || !isQueryStoreWithProof(req.Path) { + return result.Response, nil +} + +return result.Response, nil +} + +func sdkErrorToGRPCError(resp abci.ResponseQuery) + +error { + switch resp.Code { + case sdkerrors.ErrInvalidRequest.ABCICode(): + return status.Error(codes.InvalidArgument, resp.Log) + case sdkerrors.ErrUnauthorized.ABCICode(): + return status.Error(codes.Unauthenticated, resp.Log) + case sdkerrors.ErrKeyNotFound.ABCICode(): + return status.Error(codes.NotFound, resp.Log) + +default: + return status.Error(codes.Unknown, resp.Log) +} +} + +/ query performs a query to a CometBFT node with the provided store name +/ and path. 
It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +query(path string, key []byte) ([]byte, int64, error) { + resp, err := ctx.queryABCI(abci.RequestQuery{ + Path: path, + Data: key, + Height: ctx.Height, +}) + if err != nil { + return nil, 0, err +} + +return resp.Value, resp.Height, nil +} + +/ queryStore performs a query to a CometBFT node with the provided a store +/ name and path. It returns the result and height of the query upon success +/ or an error if the query fails. +func (ctx Context) + +queryStore(key []byte, storeName, endPath string) ([]byte, int64, error) { + path := fmt.Sprintf("/store/%s/%s", storeName, endPath) + +return ctx.query(path, key) +} + +/ isQueryStoreWithProof expects a format like /// +/ queryType must be "store" and subpath must be "key" to require a proof. +func isQueryStoreWithProof(path string) + +bool { + if !strings.HasPrefix(path, "/") { + return false +} + paths := strings.SplitN(path[1:], "/", 3) + switch { + case len(paths) != 3: + return false + case paths[0] != "store": + return false + case rootmulti.RequireProof("/" + paths[2]): + return true +} + +return false +} +``` + +## RPC + +With a call to `ABCIQueryWithOptions()`, `MyQuery` is received by a [full-node](/docs/sdk/v0.53/documentation/protocol-development/encoding) which then processes the request. Note that, while the RPC is made to the consensus engine (e.g. CometBFT) of a full-node, queries are not part of consensus and so are not broadcasted to the rest of the network, as they do not require anything the network needs to agree upon. + +Read more about ABCI Clients and CometBFT RPC in the [CometBFT documentation](https://docs.cometbft.com/v0.37/spec/rpc/). 
+ +## Application Query Handling + +When a query is received by the full-node after it has been relayed from the underlying consensus engine, it is then handled within an environment that understands application-specific types and has a copy of the state. [`baseapp`](/docs/sdk/v0.53/documentation/application-framework/baseapp) implements the ABCI [`Query()`](/docs/sdk/v0.53/documentation/application-framework/baseapp#query) function and handles gRPC queries. The query route is parsed; when it matches the fully-qualified service method name of an existing service method (most likely in one of the modules), `baseapp` relays the request to the relevant module. + +Since `MyQuery` has a Protobuf fully-qualified service method name from the `staking` module (recall `/cosmos.staking.v1beta1.Query/Delegations`), `baseapp` first parses the path, then uses its own internal `GRPCQueryRouter` to retrieve the corresponding gRPC handler, and routes the query to the module. The gRPC handler is responsible for recognizing this query, retrieving the appropriate values from the application's stores, and returning a response. Read more about query services [here](/docs/sdk/v0.53/documentation/module-system/query-services). + +Once a result is received from the querier, `baseapp` begins the process of returning a response to the user. + +## Response + +Since `Query()` is an ABCI function, `baseapp` returns the response as an [`abci.ResponseQuery`](https://docs.cometbft.com/master/spec/abci/abci.html#query-2) type. The `client.Context` `Query()` routine receives the response and returns it to the interface from which the query originated. + +### CLI Response + +The application [`codec`](/docs/sdk/v0.53/documentation/protocol-development/encoding) is used to unmarshal the response to JSON, and the `client.Context` prints the output to the command line, applying any configurations such as the output type (text, JSON or YAML).
+ +```go expandable +package client + +import ( + + "bufio" + "context" + "encoding/json" + "fmt" + "io" + "os" + "path" + "strings" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/viper" + "google.golang.org/grpc" + "sigs.k8s.io/yaml" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ PreprocessTxFn defines a hook by which chains can preprocess transactions before broadcasting +type PreprocessTxFn func(chainID string, key keyring.KeyType, tx TxBuilder) + +error + +/ Context implements a typical context created in SDK modules for transaction +/ handling and queries. +type Context struct { + FromAddress sdk.AccAddress + Client CometRPC + GRPCClient *grpc.ClientConn + ChainID string + Codec codec.Codec + InterfaceRegistry codectypes.InterfaceRegistry + Input io.Reader + Keyring keyring.Keyring + KeyringOptions []keyring.Option + KeyringDir string + KeyringDefaultKeyName string + Output io.Writer + OutputFormat string + Height int64 + HomeDir string + From string + BroadcastMode string + FromName string + SignModeStr string + UseLedger bool + Simulate bool + GenerateOnly bool + Offline bool + SkipConfirm bool + TxConfig TxConfig + AccountRetriever AccountRetriever + NodeURI string + FeePayer sdk.AccAddress + FeeGranter sdk.AccAddress + Viper *viper.Viper + LedgerHasProtobuf bool + PreprocessTxHook PreprocessTxFn + + / IsAux is true when the signer is an auxiliary signer (e.g. the tipper). + IsAux bool + + / TODO: Deprecated (remove). + LegacyAmino *codec.LegacyAmino + + / CmdContext is the context.Context from the Cobra command. + CmdContext context.Context +} + +/ WithCmdContext returns a copy of the context with an updated context.Context, +/ usually set to the cobra cmd context. 
+func (ctx Context) + +WithCmdContext(c context.Context) + +Context { + ctx.CmdContext = c + return ctx +} + +/ WithKeyring returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyring(k keyring.Keyring) + +Context { + ctx.Keyring = k + return ctx +} + +/ WithKeyringOptions returns a copy of the context with an updated keyring. +func (ctx Context) + +WithKeyringOptions(opts ...keyring.Option) + +Context { + ctx.KeyringOptions = opts + return ctx +} + +/ WithInput returns a copy of the context with an updated input. +func (ctx Context) + +WithInput(r io.Reader) + +Context { + / convert to a bufio.Reader to have a shared buffer between the keyring and the + / the Commands, ensuring a read from one advance the read pointer for the other. + / see https://github.com/cosmos/cosmos-sdk/issues/9566. + ctx.Input = bufio.NewReader(r) + +return ctx +} + +/ WithCodec returns a copy of the Context with an updated Codec. +func (ctx Context) + +WithCodec(m codec.Codec) + +Context { + ctx.Codec = m + return ctx +} + +/ WithLegacyAmino returns a copy of the context with an updated LegacyAmino codec. +/ TODO: Deprecated (remove). +func (ctx Context) + +WithLegacyAmino(cdc *codec.LegacyAmino) + +Context { + ctx.LegacyAmino = cdc + return ctx +} + +/ WithOutput returns a copy of the context with an updated output writer (e.g. stdout). +func (ctx Context) + +WithOutput(w io.Writer) + +Context { + ctx.Output = w + return ctx +} + +/ WithFrom returns a copy of the context with an updated from address or name. +func (ctx Context) + +WithFrom(from string) + +Context { + ctx.From = from + return ctx +} + +/ WithOutputFormat returns a copy of the context with an updated OutputFormat field. +func (ctx Context) + +WithOutputFormat(format string) + +Context { + ctx.OutputFormat = format + return ctx +} + +/ WithNodeURI returns a copy of the context with an updated node URI. 
+func (ctx Context) + +WithNodeURI(nodeURI string) + +Context { + ctx.NodeURI = nodeURI + return ctx +} + +/ WithHeight returns a copy of the context with an updated height. +func (ctx Context) + +WithHeight(height int64) + +Context { + ctx.Height = height + return ctx +} + +/ WithClient returns a copy of the context with an updated RPC client +/ instance. +func (ctx Context) + +WithClient(client CometRPC) + +Context { + ctx.Client = client + return ctx +} + +/ WithGRPCClient returns a copy of the context with an updated GRPC client +/ instance. +func (ctx Context) + +WithGRPCClient(grpcClient *grpc.ClientConn) + +Context { + ctx.GRPCClient = grpcClient + return ctx +} + +/ WithUseLedger returns a copy of the context with an updated UseLedger flag. +func (ctx Context) + +WithUseLedger(useLedger bool) + +Context { + ctx.UseLedger = useLedger + return ctx +} + +/ WithChainID returns a copy of the context with an updated chain ID. +func (ctx Context) + +WithChainID(chainID string) + +Context { + ctx.ChainID = chainID + return ctx +} + +/ WithHomeDir returns a copy of the Context with HomeDir set. +func (ctx Context) + +WithHomeDir(dir string) + +Context { + if dir != "" { + ctx.HomeDir = dir +} + +return ctx +} + +/ WithKeyringDir returns a copy of the Context with KeyringDir set. +func (ctx Context) + +WithKeyringDir(dir string) + +Context { + ctx.KeyringDir = dir + return ctx +} + +/ WithKeyringDefaultKeyName returns a copy of the Context with KeyringDefaultKeyName set. 
+func (ctx Context) + +WithKeyringDefaultKeyName(keyName string) + +Context { + ctx.KeyringDefaultKeyName = keyName + return ctx +} + +/ WithGenerateOnly returns a copy of the context with updated GenerateOnly value +func (ctx Context) + +WithGenerateOnly(generateOnly bool) + +Context { + ctx.GenerateOnly = generateOnly + return ctx +} + +/ WithSimulation returns a copy of the context with updated Simulate value +func (ctx Context) + +WithSimulation(simulate bool) + +Context { + ctx.Simulate = simulate + return ctx +} + +/ WithOffline returns a copy of the context with updated Offline value. +func (ctx Context) + +WithOffline(offline bool) + +Context { + ctx.Offline = offline + return ctx +} + +/ WithFromName returns a copy of the context with an updated from account name. +func (ctx Context) + +WithFromName(name string) + +Context { + ctx.FromName = name + return ctx +} + +/ WithFromAddress returns a copy of the context with an updated from account +/ address. +func (ctx Context) + +WithFromAddress(addr sdk.AccAddress) + +Context { + ctx.FromAddress = addr + return ctx +} + +/ WithFeePayerAddress returns a copy of the context with an updated fee payer account +/ address. +func (ctx Context) + +WithFeePayerAddress(addr sdk.AccAddress) + +Context { + ctx.FeePayer = addr + return ctx +} + +/ WithFeeGranterAddress returns a copy of the context with an updated fee granter account +/ address. +func (ctx Context) + +WithFeeGranterAddress(addr sdk.AccAddress) + +Context { + ctx.FeeGranter = addr + return ctx +} + +/ WithBroadcastMode returns a copy of the context with an updated broadcast +/ mode. +func (ctx Context) + +WithBroadcastMode(mode string) + +Context { + ctx.BroadcastMode = mode + return ctx +} + +/ WithSignModeStr returns a copy of the context with an updated SignMode +/ value. 
+func (ctx Context) + +WithSignModeStr(signModeStr string) + +Context { + ctx.SignModeStr = signModeStr + return ctx +} + +/ WithSkipConfirmation returns a copy of the context with an updated SkipConfirm +/ value. +func (ctx Context) + +WithSkipConfirmation(skip bool) + +Context { + ctx.SkipConfirm = skip + return ctx +} + +/ WithTxConfig returns the context with an updated TxConfig +func (ctx Context) + +WithTxConfig(generator TxConfig) + +Context { + ctx.TxConfig = generator + return ctx +} + +/ WithAccountRetriever returns the context with an updated AccountRetriever +func (ctx Context) + +WithAccountRetriever(retriever AccountRetriever) + +Context { + ctx.AccountRetriever = retriever + return ctx +} + +/ WithInterfaceRegistry returns the context with an updated InterfaceRegistry +func (ctx Context) + +WithInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) + +Context { + ctx.InterfaceRegistry = interfaceRegistry + return ctx +} + +/ WithViper returns the context with Viper field. This Viper instance is used to read +/ client-side config from the config file. +func (ctx Context) + +WithViper(prefix string) + +Context { + v := viper.New() + if prefix == "" { + executableName, _ := os.Executable() + +prefix = path.Base(executableName) +} + +v.SetEnvPrefix(prefix) + +v.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_")) + +v.AutomaticEnv() + +ctx.Viper = v + return ctx +} + +/ WithAux returns a copy of the context with an updated IsAux value. +func (ctx Context) + +WithAux(isAux bool) + +Context { + ctx.IsAux = isAux + return ctx +} + +/ WithLedgerHasProto returns the context with the provided boolean value, indicating +/ whether the target Ledger application can support Protobuf payloads. 
+func (ctx Context) + +WithLedgerHasProtobuf(val bool) + +Context { + ctx.LedgerHasProtobuf = val + return ctx +} + +/ WithPreprocessTxHook returns the context with the provided preprocessing hook, which +/ enables chains to preprocess the transaction using the builder. +func (ctx Context) + +WithPreprocessTxHook(preprocessFn PreprocessTxFn) + +Context { + ctx.PreprocessTxHook = preprocessFn + return ctx +} + +/ PrintString prints the raw string to ctx.Output if it's defined, otherwise to os.Stdout +func (ctx Context) + +PrintString(str string) + +error { + return ctx.PrintBytes([]byte(str)) +} + +/ PrintBytes prints the raw bytes to ctx.Output if it's defined, otherwise to os.Stdout. +/ NOTE: for printing a complex state object, you should use ctx.PrintOutput +func (ctx Context) + +PrintBytes(o []byte) + +error { + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err := writer.Write(o) + +return err +} + +/ PrintProto outputs toPrint to the ctx.Output based on ctx.OutputFormat which is +/ either text or json. If text, toPrint will be YAML encoded. Otherwise, toPrint +/ will be JSON encoded using ctx.Codec. An error is returned upon failure. +func (ctx Context) + +PrintProto(toPrint proto.Message) + +error { + / always serialize JSON initially because proto json can't be directly YAML encoded + out, err := ctx.Codec.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintObjectLegacy is a variant of PrintProto that doesn't require a proto.Message type +/ and uses amino JSON encoding. +/ Deprecated: It will be removed in the near future! +func (ctx Context) + +PrintObjectLegacy(toPrint any) + +error { + out, err := ctx.LegacyAmino.MarshalJSON(toPrint) + if err != nil { + return err +} + +return ctx.printOutput(out) +} + +/ PrintRaw is a variant of PrintProto that doesn't require a proto.Message type +/ and uses a raw JSON message. No marshaling is performed. 
+func (ctx Context) + +PrintRaw(toPrint json.RawMessage) + +error { + return ctx.printOutput(toPrint) +} + +func (ctx Context) + +printOutput(out []byte) + +error { + var err error + if ctx.OutputFormat == "text" { + out, err = yaml.JSONToYAML(out) + if err != nil { + return err +} + +} + writer := ctx.Output + if writer == nil { + writer = os.Stdout +} + + _, err = writer.Write(out) + if err != nil { + return err +} + if ctx.OutputFormat != "text" { + / append new-line for formats besides YAML + _, err = writer.Write([]byte("\n")) + if err != nil { + return err +} + +} + +return nil +} + +/ GetFromFields returns a from account address, account name and keyring type, given either an address or key name. +/ If clientCtx.Simulate is true the keystore is not accessed and a valid address must be provided +/ If clientCtx.GenerateOnly is true the keystore is only accessed if a key name is provided +/ If from is empty, the default key if specified in the context will be used +func GetFromFields(clientCtx Context, kr keyring.Keyring, from string) (sdk.AccAddress, string, keyring.KeyType, error) { + if from == "" && clientCtx.KeyringDefaultKeyName != "" { + from = clientCtx.KeyringDefaultKeyName + _ = clientCtx.PrintString(fmt.Sprintf("No key name or address provided; using the default key: %s\n", clientCtx.KeyringDefaultKeyName)) +} + if from == "" { + return nil, "", 0, nil +} + +addr, err := sdk.AccAddressFromBech32(from) + switch { + case clientCtx.Simulate: + if err != nil { + return nil, "", 0, fmt.Errorf("a valid bech32 address must be provided in simulation mode: %w", err) +} + +return addr, "", 0, nil + case clientCtx.GenerateOnly: + if err == nil { + return addr, "", 0, nil +} + +} + +var k *keyring.Record + if err == nil { + k, err = kr.KeyByAddress(addr) + if err != nil { + return nil, "", 0, err +} + +} + +else { + k, err = kr.Key(from) + if err != nil { + return nil, "", 0, err +} + +} + +addr, err = k.GetAddress() + if err != nil { + return nil, "", 0, err +} 
+ +return addr, k.Name, k.GetType(), nil +} + +/ NewKeyringFromBackend gets a Keyring object from a backend +func NewKeyringFromBackend(ctx Context, backend string) (keyring.Keyring, error) { + if ctx.Simulate { + backend = keyring.BackendMemory +} + +return keyring.New(sdk.KeyringServiceName(), backend, ctx.KeyringDir, ctx.Input, ctx.Codec, ctx.KeyringOptions...) +} +``` + +And that's a wrap! The result of the query is outputted to the console by the CLI. diff --git a/docs/sdk/v0.53/learn/advanced/telemetry.mdx b/docs/sdk/v0.53/api-reference/telemetry-metrics/telemetry.mdx similarity index 78% rename from docs/sdk/v0.53/learn/advanced/telemetry.mdx rename to docs/sdk/v0.53/api-reference/telemetry-metrics/telemetry.mdx index 70b339ea..6e89e8ea 100644 --- a/docs/sdk/v0.53/learn/advanced/telemetry.mdx +++ b/docs/sdk/v0.53/api-reference/telemetry-metrics/telemetry.mdx @@ -1,62 +1,81 @@ --- -title: "Telemetry" -description: "Version: v0.53" +title: Telemetry --- - - Gather relevant insights about your application and modules with custom metrics and telemetry. - +## Synopsis -The Cosmos SDK enables operators and developers to gain insight into the performance and behavior of their application through the use of the `telemetry` package. To enable telemetrics, set `telemetry.enabled = true` in the app.toml config file. +Gather relevant insights about your application and modules with custom metrics and telemetry. + +The Cosmos SDK enables operators and developers to gain insight into the performance and behavior of +their application through the use of the `telemetry` package. To enable telemetrics, set `telemetry.enabled = true` in the app.toml config file. The Cosmos SDK currently supports enabling in-memory and prometheus as telemetry sinks. In-memory sink is always attached (when the telemetry is enabled) with 10 second interval and 1 minute retention. This means that metrics will be aggregated over 10 seconds, and metrics will be kept alive for 1 minute. 
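Concretely, wiring both sinks up might look like the following `app.toml` fragment. The field names follow common SDK server-config defaults and are an assumption — check the `app.toml` generated by your binary's version before copying:

```toml
[telemetry]
# Prefix attached to metric keys; typically the binary/chain name (assumed "simd" here).
service-name = "simd"
# Master switch; the in-memory sink is always attached when this is true.
enabled = true
# Seconds to retain Prometheus metrics; 0 leaves the Prometheus sink disabled.
prometheus-retention-time = 60
# Labels applied to every metric emitted through the telemetry package.
global-labels = [
  ["chain_id", "chain-OfXo4V"],
]

[api]
# The metrics endpoint is served by the API server (note: stock app.toml
# spells this key `enable`).
enable = true
```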
To query active metrics (see the retention note above), you must enable the API server (`api.enabled = true` in app.toml). A single API endpoint is exposed: `http://localhost:1317/metrics?format={text|prometheus}`, the default being `text`.

## Emitting metrics

If telemetry is enabled via configuration, a single global metrics collector is registered via the
[go-metrics](https://github.com/hashicorp/go-metrics) library. This allows emitting and collecting
metrics through a simple [API](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/telemetry/wrapper.go). Example:

```go
func EndBlocker(ctx sdk.Context, k keeper.Keeper) {
	defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker)

	// ...
}
```

Developers may use the `telemetry` package directly, which provides wrappers around metric APIs
that include adding useful labels, or they may use the `go-metrics` library directly. It is preferable
to add as much context and adequate dimensionality to metrics as possible, so the `telemetry` package
is advised. Regardless of the package or method used, the Cosmos SDK supports the following metric
types:

- gauges
- summaries
- counters

## Labels

Certain components of modules will have their name automatically added as a label (e.g. `BeginBlock`).
Operators may also supply the application with a global set of labels that will be applied to all
metrics emitted using the `telemetry` package (e.g. chain-id). Global labels are supplied as a list
of \[name, value] tuples.

Example:

```toml
global-labels = [
  ["chain_id", "chain-OfXo4V"],
]
```

## Cardinality

Cardinality is key, specifically label and key cardinality. Cardinality is how many unique values of
something there are. So there is naturally a tradeoff between granularity and how much stress is put
on the telemetry sink in terms of indexing, scrape, and query performance.

Developers should take care to support metrics with enough dimensionality and granularity to be
useful, but not increase the cardinality beyond the sink's limits. A general rule of thumb is to not
exceed a cardinality of 10.

Consider the following examples with enough granularity and adequate cardinality:

- begin/end blocker time
- tx gas used
- block gas used
- amount of tokens minted
- amount of accounts created

The following examples expose too much cardinality and may not even prove to be useful:

- transfers between accounts with amount
- voting/deposit amount from unique addresses

## Supported Metrics

| Metric | Description | Unit | Type |
| :------------------------------ | :---------------------------------------------------------------------------------------- | :-------------- | :------ |
diff --git a/docs/sdk/v0.53/build.mdx b/docs/sdk/v0.53/build.mdx
deleted file mode 100644
index c70fc97b..00000000
--- a/docs/sdk/v0.53/build.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: "Build"
-description: "Version: v0.53"
----
-
-* [Building Apps](./build/building-apps/app-go) - The documentation in this section will guide you through the process of developing your dApp using the Cosmos SDK framework.
-* [Modules](./build/modules) - Information about the various modules available in the Cosmos SDK: Auth, Authz, Bank, Circuit, Consensus, Distribution, Epochs, Evidence, Feegrant, Governance, Group, Mint, NFT, Protocolpool, Slashing, Staking, Upgrade, Genutil.
-* [Migrations](./build/migrations/intro) - See what has been updated in each release the process of the transition between versions.
-* [Packages](./build/packages) - Explore a curated collection of pre-built modules and functionalities, streamlining the development process. -* [Tooling](./build/tooling) - A suite of utilities designed to enhance the development workflow, optimizing the efficiency of Cosmos SDK-based projects. -* [ADR's](./build/architecture) - Provides a structured repository of key decisions made during the development process, which have been documented and offers rationale behind key decisions being made. -* [REST API](https://docs.cosmos.network/api) - A comprehensive reference for the application programming interfaces (APIs) provided by the SDK. diff --git a/docs/sdk/v0.53/changelog/UPGRADING.md b/docs/sdk/v0.53/changelog/UPGRADING.md new file mode 100644 index 00000000..21ba7b3b --- /dev/null +++ b/docs/sdk/v0.53/changelog/UPGRADING.md @@ -0,0 +1,83 @@ +# Upgrading to Cosmos SDK v0.53 + +This document outlines the changes required when upgrading to Cosmos SDK v0.53. + +## Migration to CometBFT + +The most significant change in v0.53 is the migration from Tendermint to CometBFT. + +### Breaking Changes + +- **Import paths**: All Tendermint imports must be updated to CometBFT +- **Binary names**: Replace `tendermint` with `cometbft` +- **API changes**: Some Tendermint API calls have been updated + +### Migration Steps + +1. **Update Go dependencies**: + ```bash + go mod edit -replace github.com/tendermint/tendermint=github.com/cometbft/cometbft@v0.37.2 + go mod tidy + ``` + +2. **Update import statements**: + ```diff + - import "github.com/tendermint/tendermint/abci/types" + + import "github.com/cometbft/cometbft/abci/types" + ``` + +3. 
**Update binary references** in Makefiles, scripts, and documentation + +## Module Updates + +### New Features + +- **Enhanced governance**: Support for multiple choice proposals +- **Improved staking**: Better delegation and undelegation handling +- **Circuit breaker**: New safety mechanism for modules + +### Deprecated Features + +- Legacy amino encoding for transactions (use protobuf) +- Old REST endpoints (migrate to gRPC) + +## API Changes + +### Keeper Changes + +- Several keeper constructors now require additional parameters +- Some keeper methods have updated signatures +- New dependency injection patterns + +### Message Changes + +- Some message types have been restructured +- New validation rules for certain transactions +- Enhanced error messages and codes + +## Configuration Changes + +### App Configuration + +- New configuration options for mempool management +- Updated consensus parameters +- Enhanced logging configuration + +### Client Configuration + +- Updated client configuration for CometBFT compatibility +- New gRPC configuration options + +## Testing Changes + +- Updated test utilities for CometBFT +- New simulation test features +- Enhanced integration test framework + +## Known Issues + +- Some third-party tools may need updates for CometBFT compatibility +- Custom ABCI applications require careful migration +- Performance characteristics may differ slightly from Tendermint + +For more detailed information, see the [Cosmos SDK v0.53 release notes](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.53.0). 
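The import rewrite in step 2 is mechanical and can be scripted across a whole tree. A minimal sketch — the scratch directory and sample file are hypothetical, and GNU `sed -i` syntax is assumed (BSD/macOS `sed` needs `-i ''`):

```bash
# Work in a throwaway directory with a sample Go file (demonstration only).
mkdir -p /tmp/cometbft-migrate && cd /tmp/cometbft-migrate
cat > main.go <<'EOF'
package main

import "github.com/tendermint/tendermint/abci/types"
EOF

# Rewrite every Tendermint import path to CometBFT across *.go files.
grep -rl 'github.com/tendermint/tendermint' --include='*.go' . \
  | xargs sed -i 's|github.com/tendermint/tendermint|github.com/cometbft/cometbft|g'

grep 'cometbft' main.go
```

Afterwards, run `go mod tidy` in your real module (with the `replace` directive from step 1 in place) so the module graph follows the rewritten imports.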
\ No newline at end of file diff --git a/docs/sdk/v0.53/changelog/release-notes.mdx b/docs/sdk/v0.53/changelog/release-notes.mdx new file mode 100644 index 00000000..6524b4cb --- /dev/null +++ b/docs/sdk/v0.53/changelog/release-notes.mdx @@ -0,0 +1,103 @@ +--- +title: "Release Notes" +description: "Release notes generated from the project `CHANGELOG.md`" +mode: "center" +--- + + + +This page tracks all releases and changes from the +[cosmos/cosmos-sdk](https://github.com/cosmos/cosmos-sdk) repository. For the latest +development updates, see the +[UNRELEASED](https://github.com/cosmos/cosmos-sdk/blob/main/CHANGELOG.md#unreleased) +section. + + + + + ### Features * (abci_utils) + [#25008](https://github.com/cosmos/cosmos-sdk/pull/24861) add the ability to + assign a custom signer extraction adapter in `DefaultProposalHandler`. + + + + ### Bug Fixes * + [GHSA-p22h-3m2v-cmgh](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-p22h-3m2v-cmgh) + Fix x/distribution can halt when historical rewards overflow. + + + + ### Bug Fixes * (x/epochs) + [#24770](https://github.com/cosmos/cosmos-sdk/pull/24770) Fix register of + epoch hooks in `InvokeSetHooks`. + + + + ### Features + * (simsx) [#24062](https://github.com/cosmos/cosmos-sdk/pull/24062) [#24145](https://github.com/cosmos/cosmos-sdk/pull/24145) Add new simsx framework on top of simulations for better module dev experience. + * (baseapp) [#24069](https://github.com/cosmos/cosmos-sdk/pull/24069) Create CheckTxHandler to allow extending the logic of CheckTx. + * (types) [#24093](https://github.com/cosmos/cosmos-sdk/pull/24093) Added a new method, `IsGT`, for `types.Coin`. This method is used to check if a `types.Coin` is greater than another `types.Coin`. + * (client/keys) [#24071](https://github.com/cosmos/cosmos-sdk/pull/24071) Add support for importing hex key using standard input. 
+ * (types) [#23780](https://github.com/cosmos/cosmos-sdk/pull/23780) Add a ValueCodec for the math.Uint type that can be used in collections maps. + * (perf)[#24045](https://github.com/cosmos/cosmos-sdk/pull/24045) Sims: Replace runsim command with Go stdlib testing. CLI: `Commit` default true, `Lean`, `SimulateEveryOperation`, `PrintAllInvariants`, `DBBackend` params removed + * (crypto/keyring) [#24040](https://github.com/cosmos/cosmos-sdk/pull/24040) Expose the db keyring used in the keystore. + * (types) [#23919](https://github.com/cosmos/cosmos-sdk/pull/23919) Add MustValAddressFromBech32 function. + * (all) [#23708](https://github.com/cosmos/cosmos-sdk/pull/23708) Add unordered transaction support. + * Adds a `--timeout-timestamp` flag that allows users to specify a block time at which the unordered transactions should expire from the mempool. + * (x/epochs) [#23815](https://github.com/cosmos/cosmos-sdk/pull/23815) Upstream `x/epochs` from Osmosis + * (client) [#23811](https://github.com/cosmos/cosmos-sdk/pull/23811) Add auto cli for node service. + * (genutil) [#24018](https://github.com/cosmos/cosmos-sdk/pull/24018) Allow manually setting the consensus key type in genesis + * (client) [#18557](https://github.com/cosmos/cosmos-sdk/pull/18557) Add `--qrcode` flag to `keys show` command to support displaying keys address QR code. + * (x/auth) [#24030](https://github.com/cosmos/cosmos-sdk/pull/24030) Allow usage of ed25519 keys for transaction signing. + * (baseapp) [#24163](https://github.com/cosmos/cosmos-sdk/pull/24163) Add `StreamingManager` to baseapp to extend the abci listeners. + * (x/protocolpool) [#23933](https://github.com/cosmos/cosmos-sdk/pull/23933) Add x/protocolpool module. + * x/distribution can now utilize an externally managed community pool. NOTE: this will make the message handlers for FundCommunityPool and CommunityPoolSpend error, as well as the query handler for CommunityPool. 
+ * (client) [#18101](https://github.com/cosmos/cosmos-sdk/pull/18101) Add a `keyring-default-keyname` in `client.toml` for specifying a default key name, and skip the need to use the `--from` flag when signing transactions. + * (x/gov) [#24355](https://github.com/cosmos/cosmos-sdk/pull/24355) Allow users to set a custom CalculateVoteResultsAndVotingPower function to be used in govkeeper.Tally. + * (x/mint) [#24436](https://github.com/cosmos/cosmos-sdk/pull/24436) Allow users to set a custom minting function used in the `x/mint` begin blocker. + * The `InflationCalculationFn` argument to `mint.NewAppModule()` is now ignored and must be nil. To set a custom `InflationCalculationFn` on the default minter, use `mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(customInflationFn))`. + * (api) [#24428](https://github.com/cosmos/cosmos-sdk/pull/24428) Add block height to response headers + ### Improvements + * (client) [#24561](https://github.com/cosmos/cosmos-sdk/pull/24561) TimeoutTimestamp flag has been changed to TimeoutDuration, which now sets the timeout timestamp of unordered transactions to the current time + duration passed. + * (telemetry) [#24541](https://github.com/cosmos/cosmos-sdk/pull/24541) Telemetry now includes a pre_blocker metric key. x/upgrade should migrate to this key in v0.54.0. + * (x/auth) [#24541](https://github.com/cosmos/cosmos-sdk/pull/24541) x/auth's PreBlocker now emits telemetry under the pre_blocker metric key. + * (x/bank) [#24431](https://github.com/cosmos/cosmos-sdk/pull/24431) Reduce the number of `ValidateDenom` calls in `bank.SendCoins` and `Coin`. + * The `AmountOf()` method on`sdk.Coins` no longer will `panic` if given an invalid denom and will instead return a zero value. 
+ * (x/staking) [#24391](https://github.com/cosmos/cosmos-sdk/pull/24391) Replace panics with error results; more verbose error messages + * (x/staking) [#24354](https://github.com/cosmos/cosmos-sdk/pull/24354) Optimize validator endblock by reducing bech32 conversions, resulting in significant performance improvement + * (client/keys) [#18950](https://github.com/cosmos/cosmos-sdk/pull/18950) Improve ` keys add`, ` keys import` and ` keys rename` by checking name validation. + * (client/keys) [#18703](https://github.com/cosmos/cosmos-sdk/pull/18703) Improve ` keys add` and ` keys show` by checking whether there are duplicate keys in the multisig case. + * (client/keys) [#18745](https://github.com/cosmos/cosmos-sdk/pull/18745) Improve ` keys export` and ` keys mnemonic` by adding --yes option to skip interactive confirmation. + * (x/bank) [#24106](https://github.com/cosmos/cosmos-sdk/pull/24106) `SendCoins` now checks for `SendRestrictions` before instead of after deducting coins using `subUnlockedCoins`. + * (crypto/ledger) [#24036](https://github.com/cosmos/cosmos-sdk/pull/24036) Improve error message when deriving paths using index > 100 + * (gRPC) [#23844](https://github.com/cosmos/cosmos-sdk/pull/23844) Add debug log prints for each gRPC request. + * (gRPC) [#24073](https://github.com/cosmos/cosmos-sdk/pull/24073) Adds error handling for out-of-gas panics in grpc query handlers. + * (server) [#24072](https://github.com/cosmos/cosmos-sdk/pull/24072) Return BlockHeader by shallow copy in server Context. + * (x/bank) [#24053](https://github.com/cosmos/cosmos-sdk/pull/24053) Resolve a foot-gun by swapping send restrictions check in `InputOutputCoins` before coin deduction. + * (codec/types) [#24336](https://github.com/cosmos/cosmos-sdk/pull/24336) Most types definitions were moved to `github.com/cosmos/gogoproto/types/any` with aliases to these left in `codec/types` so that there should be no breakage to existing code. 
This allows protobuf generated code to optionally reference the SDK's custom `Any` type without a direct dependency on the SDK. This can be done by changing the `protoc` `M` parameter for `any.proto` to `Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any`. + ### Bug Fixes + * (x/gov)[#24460](https://github.com/cosmos/cosmos-sdk/pull/24460) Do not call Remove during Walk in defaultCalculateVoteResultsAndVotingPower. + * (baseapp) [24261](https://github.com/cosmos/cosmos-sdk/pull/24261) Fix post handler error always results in code 1 + * (server) [#24068](https://github.com/cosmos/cosmos-sdk/pull/24068) Allow align block header with skip check header in grpc server. + * (x/gov) [#24044](https://github.com/cosmos/cosmos-sdk/pull/24044) Fix some places in which we call Remove inside a Walk (x/gov). + * (baseapp) [#24042](https://github.com/cosmos/cosmos-sdk/pull/24042) Fixed a data race inside BaseApp.getContext, found by end-to-end (e2e) tests. + * (client/server) [#24059](https://github.com/cosmos/cosmos-sdk/pull/24059) Consistently set viper prefix in client and server. It defaults for the binary name for both client and server. + * (client/keys) [#24041](https://github.com/cosmos/cosmos-sdk/pull/24041) `keys delete` won't terminate when a key is not found, but will log the error. + * (baseapp) [#24027](https://github.com/cosmos/cosmos-sdk/pull/24027) Ensure that `BaseApp.Init` checks that the commit multistore is set to protect against nil dereferences. + * (x/group) [GHSA-47ww-ff84-4jrg](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-47ww-ff84-4jrg) Fix x/group can halt when erroring in EndBlocker + * (x/distribution) [#23934](https://github.com/cosmos/cosmos-sdk/pull/23934) Fix vulnerability in `incrementReferenceCount` in distribution. + * (baseapp) [#23879](https://github.com/cosmos/cosmos-sdk/pull/23879) Ensure finalize block response is not empty in the defer check of FinalizeBlock to avoid panic by nil pointer. 
+ * (query) [#23883](https://github.com/cosmos/cosmos-sdk/pull/23883) Fix NPE in query pagination. + * (client) [#23860](https://github.com/cosmos/cosmos-sdk/pull/23860) Add missing `unordered` field for legacy amino signing of tx body. + * (x/bank) [#23836](https://github.com/cosmos/cosmos-sdk/pull/23836) Fix `DenomMetadata` rpc allow value with slashes. + * (query) [87d3a43](https://github.com/cosmos/cosmos-sdk/commit/87d3a432af95f4cf96aa02351ed5fcc51cca6e7b) Fix collection filtered pagination. + * (sims) [#23952](https://github.com/cosmos/cosmos-sdk/pull/23952) Use liveness matrix for validator sign status in sims + * (baseapp) [#24055](https://github.com/cosmos/cosmos-sdk/pull/24055) Align block header when query with latest height. + * (baseapp) [#24074](https://github.com/cosmos/cosmos-sdk/pull/24074) Use CometBFT's ComputeProtoSizeForTxs in defaultTxSelector.SelectTxForProposal for consistency. + * (cli) [#24090](https://github.com/cosmos/cosmos-sdk/pull/24090) Prune cmd should disable async pruning. + * (x/auth) [#19239](https://github.com/cosmos/cosmos-sdk/pull/19239) Sets from flag in multi-sign command to avoid no key name provided error. + * (x/auth) [#23741](https://github.com/cosmos/cosmos-sdk/pull/23741) Support legacy global AccountNumber for legacy compatibility. + * (baseapp) [#24526](https://github.com/cosmos/cosmos-sdk/pull/24526) Fix incorrect retention height when `commitHeight` equals `minRetainBlocks`. + * (x/protocolpool) [#24594](https://github.com/cosmos/cosmos-sdk/pull/24594) Fix NPE when initializing module via depinject. + * (x/epochs) [#24610](https://github.com/cosmos/cosmos-sdk/pull/24610) Fix semantics of `CurrentEpochStartHeight` being set before epoch has started. 
+ diff --git a/docs/sdk/v0.53/documentation/application-framework/README.mdx b/docs/sdk/v0.53/documentation/application-framework/README.mdx new file mode 100644 index 00000000..d044336e --- /dev/null +++ b/docs/sdk/v0.53/documentation/application-framework/README.mdx @@ -0,0 +1,40 @@ +--- +title: Packages +description: >- + The Cosmos SDK is a collection of Go modules. This section provides + documentation on various packages that can be used when developing a Cosmos SDK + chain. It lists all standalone Go modules that are part of the Cosmos SDK. +--- + +The Cosmos SDK is a collection of Go modules. This section provides documentation on various packages that can be used when developing a Cosmos SDK chain. +It lists all standalone Go modules that are part of the Cosmos SDK. + + +For more information on SDK modules, see the [SDK Modules](https://docs.cosmos.network/main/modules) section. +For more information on SDK tooling, see the [Tooling](https://docs.cosmos.network/main/tooling) section.
+ + +## Core + +* [Core](https://pkg.go.dev/cosmossdk.io/core) - Core library defining SDK interfaces ([ADR-063](https://docs.cosmos.network/main/architecture/adr-063-core-module-api)) +* [API](https://pkg.go.dev/cosmossdk.io/api) - API library containing generated SDK Pulsar API +* [Store](https://pkg.go.dev/cosmossdk.io/store) - Implementation of the Cosmos SDK store + +## State Management + +* [Collections](/docs/sdk/v0.53/documentation/state-storage/collections) - State management library + +## Automation + +* [Depinject](/docs/sdk/v0.53/documentation/module-system/depinject) - Dependency injection framework +* [Client/v2](https://pkg.go.dev/cosmossdk.io/client/v2) - Library powering [AutoCLI](https://docs.cosmos.network/main/core/autocli) + +## Utilities + +* [Log](https://pkg.go.dev/cosmossdk.io/log) - Logging library +* [Errors](https://pkg.go.dev/cosmossdk.io/errors) - Error handling library +* [Math](https://pkg.go.dev/cosmossdk.io/math) - Math library for SDK arithmetic operations + +## Example + +* [SimApp](https://pkg.go.dev/cosmossdk.io/simapp) - SimApp is **the** sample Cosmos SDK chain. This package should not be imported in your application. diff --git a/docs/sdk/v0.53/documentation/application-framework/app-anatomy.mdx b/docs/sdk/v0.53/documentation/application-framework/app-anatomy.mdx new file mode 100644 index 00000000..ce2f045c --- /dev/null +++ b/docs/sdk/v0.53/documentation/application-framework/app-anatomy.mdx @@ -0,0 +1,4509 @@ +--- +title: Anatomy of a Cosmos SDK Application +--- + +## Synopsis + +This document describes the core parts of a Cosmos SDK application, represented throughout the document as a placeholder application named `app`. + +## Node Client + +The Daemon, or [Full-Node Client](/docs/sdk/v0.53/documentation/operations/node), is the core process of a Cosmos SDK-based blockchain. 
Participants in the network run this process to initialize their state-machine, connect with other full-nodes, and update their state-machine as new blocks come in. + +```text expandable + ^ +-------------------------------+ ^ + | | | | + | | State-machine = Application | | + | | | | Built with Cosmos SDK + | | ^ + | | + | +----------- | ABCI | ----------+ v + | | + v | ^ + | | | | +Blockchain Node | | Consensus | | + | | | | + | +-------------------------------+ | CometBFT + | | | | + | | Networking | | + | | | | + v +-------------------------------+ v +``` + +The blockchain full-node presents itself as a binary, generally suffixed by `-d` for "daemon" (e.g. `appd` for `app` or `gaiad` for `gaia`). This binary is built by running a simple [`main.go`](/docs/sdk/v0.53/documentation/operations/node#main-function) function placed in `./cmd/appd/`. This operation usually happens through the [Makefile](#dependencies-and-makefile). + +Once the main binary is built, the node can be started by running the [`start` command](/docs/sdk/v0.53/documentation/operations/node#start-command). This command function primarily does three things: + +1. Create an instance of the state-machine defined in [`app.go`](#core-application-file). +2. Initialize the state-machine with the latest known state, extracted from the `db` stored in the `~/.app/data` folder. At this point, the state-machine is at height `appBlockHeight`. +3. Create and start a new CometBFT instance. Among other things, the node performs a handshake with its peers. It gets the latest `blockHeight` from them and replays blocks to sync to this height if it is greater than the local `appBlockHeight`. The node starts from genesis and CometBFT sends an `InitChain` message via the ABCI to the `app`, which triggers the [`InitChainer`](#initchainer). + + + When starting a CometBFT instance, the genesis file is the `0` height and the + state within the genesis file is committed at block height `1`. 
When querying + the state of the node, querying block height 0 will return an error. + + +## Core Application File + +In general, the core of the state-machine is defined in a file called `app.go`. This file mainly contains the **type definition of the application** and functions to **create and initialize it**. + +### Type Definition of the Application + +The first thing defined in `app.go` is the `type` of the application. It is generally comprised of the following parts: + +- **Embedding [runtime.App](/docs/sdk/v0.53/documentation/application-framework/runtime)** The runtime package manages the application's core components and modules through dependency injection. It provides declarative configuration for module management, state storage, and ABCI handling. + - `Runtime` wraps `BaseApp`, meaning when a transaction is relayed by CometBFT to the application, `app` uses `runtime`'s methods to route it to the appropriate module. `BaseApp` implements all the [ABCI methods](https://docs.cometbft.com/v0.38/spec/abci/) and the [routing logic](/docs/sdk/v0.53/documentation/application-framework/baseapp#service-routers). + - It automatically configures the **[module manager](/docs/sdk/v0.53/documentation/module-system/module-manager#manager)** based on the app wiring configuration. The module manager facilitates operations related to these modules, like registering their [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services) and [gRPC `Query` service](#grpc-query-services), or setting the order of execution between modules for various functions like [`InitChainer`](#initchainer), [`PreBlocker`](#preblocker) and [`BeginBlocker` and `EndBlocker`](#beginblocker-and-endblocker). +- [**An App Wiring configuration file**](/docs/sdk/v0.53/documentation/application-framework/runtime) The app wiring configuration file contains the list of the application's modules that `runtime` must instantiate. The instantiation of the modules is done using `depinject`.
It also contains the order in which all module's `InitGenesis` and `Pre/Begin/EndBlocker` methods should be executed. +- **A reference to an [`appCodec`](/docs/sdk/v0.53/documentation/protocol-development/encoding).** The application's `appCodec` is used to serialize and deserialize data structures in order to store them, as stores can only persist `[]bytes`. The default codec is [Protocol Buffers](/docs/sdk/v0.53/documentation/protocol-development/encoding). +- **A reference to a [`legacyAmino`](/docs/sdk/v0.53/documentation/protocol-development/encoding) codec.** Some parts of the Cosmos SDK have not been migrated to use the `appCodec` above, and are still hardcoded to use Amino. Other parts explicitly use Amino for backwards compatibility. For these reasons, the application still holds a reference to the legacy Amino codec. Please note that the Amino codec will be removed from the SDK in the upcoming releases. + +See an example of application type definition from `simapp`, the Cosmos SDK's own app used for demo and testing purposes: + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). 
+ / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom minting function that implements the mintkeeper.MintFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) 
+ / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) + + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+ app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +### Constructor Function + +Also defined in `app.go` is the constructor function, which constructs a new application of the type defined in the preceding section. 
The function must fulfill the `AppCreator` signature in order to be used in the [`start` command](/docs/sdk/v0.53/documentation/operations/node#start-command) of the application's daemon command. + +```go expandable +package types + +import ( + + "encoding/json" + "io" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. It is recommend + / to either use the cast package or perform manual conversion for safety. + AppOptions interface { + Get(string) + +any +} + + / Application defines an application interface that wraps abci.Application. + / The interface defines the necessary contracts to be implemented in order + / to fully bootstrap and start an application. + Application interface { + ABCI + + RegisterAPIRoutes(*api.Server, config.APIConfig) + + / RegisterGRPCServerWithSkipCheckHeader registers gRPC services directly with the gRPC + / server and bypass check header flag. + RegisterGRPCServerWithSkipCheckHeader(grpc.Server, bool) + + / RegisterTxService registers the gRPC Query service for tx (such as tx + / simulation, fetching txs by hash...). 
+ RegisterTxService(client.Context) + + / RegisterTendermintService registers the gRPC Query service for CometBFT queries. + RegisterTendermintService(client.Context) + + / RegisterNodeService registers the node gRPC Query service. + RegisterNodeService(client.Context, config.Config) + + / CommitMultiStore return the multistore instance + CommitMultiStore() + +storetypes.CommitMultiStore + + / Return the snapshot manager + SnapshotManager() *snapshots.Manager + + / Close is called in start cmd to gracefully cleanup resources. + / Must be safe to be called multiple times. + Close() + +error +} + + / AppCreator is a function that allows us to lazily initialize an + / application using various configurations. + AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions) + +Application + + / ModuleInitFlags takes a start command and adds modules specific init flags. + ModuleInitFlags func(startCmd *cobra.Command) + + / ExportedApp represents an exported app state, along with + / validators, consensus params and latest app height. + ExportedApp struct { + / AppState is the application state as JSON. + AppState json.RawMessage + / Validators is the exported validator set. + Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. 
+ AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) +``` + +Here are the main actions performed by this function: + +- Instantiate a new [`codec`](/docs/sdk/v0.53/documentation/protocol-development/encoding) and initialize the `codec` of each of the application's modules using the [basic manager](/docs/sdk/v0.53/documentation/module-system/module-manager#basicmanager). +- Instantiate a new application with a reference to a `baseapp` instance, a codec, and all the appropriate store keys. +- Instantiate all the [`keeper`](#keeper) objects defined in the application's `type` using the `NewKeeper` function of each of the application's modules. Note that keepers must be instantiated in the correct order, as the `NewKeeper` of one module might require a reference to another module's `keeper`. +- Instantiate the application's [module manager](/docs/sdk/v0.53/documentation/module-system/module-manager#manager) with the [`AppModule`](#application-module-interface) object of each of the application's modules. +- With the module manager, initialize the application's [`Msg` services](/docs/sdk/v0.53/documentation/application-framework/baseapp#msg-services), [gRPC `Query` services](/docs/sdk/v0.53/documentation/application-framework/baseapp#grpc-query-services), [legacy `Msg` routes](/docs/sdk/v0.53/documentation/application-framework/baseapp#routing), and [legacy query routes](/docs/sdk/v0.53/documentation/application-framework/baseapp#query-routing). When a transaction is relayed to the application by CometBFT via the ABCI, it is routed to the appropriate module's [`Msg` service](#msg-services) using the routes defined here. 
Likewise, when a gRPC query request is received by the application, it is routed to the appropriate module's [`gRPC query service`](#grpc-query-services) using the gRPC routes defined here. The Cosmos SDK still supports legacy `Msg`s and legacy CometBFT queries, which are routed using the legacy `Msg` routes and the legacy query routes, respectively. +- With the module manager, register the [application's modules' invariants](/docs/sdk/v0.53/documentation/module-system/invariants). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](/docs/sdk/v0.53/documentation/module-system/invariants#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different than the predicted one, special logic defined in the invariant registry is triggered (usually the chain is halted). This is useful to make sure that no critical bug goes unnoticed, producing long-lasting effects that are hard to fix. +- With the module manager, set the order of execution between the `InitGenesis`, `PreBlocker`, `BeginBlocker`, and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions. +- Set the remaining application parameters: + - [`InitChainer`](#initchainer): used to initialize the application when it is first started. + - [`PreBlocker`](#preblocker): called before BeginBlock. + - [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endblocker): called at the beginning and at the end of every block. + - [`anteHandler`](/docs/sdk/v0.53/documentation/application-framework/baseapp#antehandler): used to handle fees and signature verification. +- Mount the stores. +- Return the application. 
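The constructor steps above can be sketched with a small, stdlib-only Go program. All of the types below (`codec`, the keepers, `moduleManager`, `app`) are hypothetical stand-ins rather than real SDK types; the sketch only illustrates the ordering constraints: keepers are constructed in dependency order, module execution order is set explicitly on the manager, and the stores are mounted before the app is returned.

```go
package main

import "fmt"

// Hypothetical stand-ins for the SDK's codec and keeper types.
type codec struct{}

type bankKeeper struct{ cdc *codec }

// stakingKeeper holds a reference to bankKeeper, so bank must be built first.
type stakingKeeper struct{ bank *bankKeeper }

// moduleManager records the explicit execution order set by the constructor.
type moduleManager struct {
	initGenesisOrder []string
	beginBlockOrder  []string
}

type app struct {
	cdc     *codec
	bank    *bankKeeper
	staking *stakingKeeper
	mm      *moduleManager
	mounted bool
}

// newApp mirrors the constructor flow: instantiate the codec, build keepers
// in dependency order, configure the module manager's execution order,
// mount the stores, and return the application instance.
func newApp() *app {
	cdc := &codec{}
	bank := &bankKeeper{cdc: cdc}
	staking := &stakingKeeper{bank: bank} // requires bank's keeper first

	mm := &moduleManager{
		initGenesisOrder: []string{"auth", "bank", "staking"},
		beginBlockOrder:  []string{"staking", "bank"},
	}

	a := &app{cdc: cdc, bank: bank, staking: staking, mm: mm}
	a.mounted = true // stands in for mounting the stores
	return a
}

func main() {
	a := newApp()
	fmt.Println(a.mounted, a.mm.initGenesisOrder) // prints: true [auth bank staking]
}
```

As in the real constructor, the only state created here is the wiring itself; the actual chain state would come from the data directory or the genesis file at startup.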
+ +Note that the constructor function only creates an instance of the app, while the actual state is either carried over from the `~/.app/data` folder if the node is restarted, or generated from the genesis file if the node is started for the first time. + +See an example of application constructor from `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + 
"github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + 
"github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but 
with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+
+ app.ModuleManager = module.NewManager(
+        genutil.NewAppModule(
+            app.AccountKeeper, app.StakingKeeper, app,
+            txConfig,
+        ),
+        auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil),
+        vesting.NewAppModule(app.AccountKeeper, app.BankKeeper),
+        bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil),
+        feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry),
+        gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil),
+        mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil),
+        slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry),
+        distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil),
+        staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil),
+        upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()),
+        evidence.NewAppModule(app.EvidenceKeeper),
+        authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry),
+        groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry),
+        nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry),
+        consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper),
+        circuit.NewAppModule(appCodec, app.CircuitKeeper),
+        epochs.NewAppModule(app.EpochsKeeper),
+        protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper),
+    )
+
+    // BasicModuleManager is a module.BasicManager in charge of setting up basic,
+    // non-dependent module elements, such as codec registration and genesis verification.
+    // By default it is composed of all the modules from the module manager.
+
+    // Additionally, app module basics can be overwritten by passing them as arguments.
+    app.BasicModuleManager = module.NewBasicManagerFromManager(
+        app.ModuleManager,
+        map[string]module.AppModuleBasic{
+            genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+            govtypes.ModuleName: gov.NewAppModuleBasic(
+                []govclient.ProposalHandler{},
+            ),
+        })
+
+    app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino)
+    app.BasicModuleManager.RegisterInterfaces(interfaceRegistry)
+
+    // NOTE: upgrade module is required to be prioritized
+    app.ModuleManager.SetOrderPreBlockers(
+        upgradetypes.ModuleName,
+        authtypes.ModuleName,
+    )
+    // During begin block, slashing happens after distr.BeginBlocker so that
+    // there is nothing left over in the validator fee pool, so as to keep the
+    // CanWithdrawInvariant invariant.
+    // NOTE: staking module is required if HistoricalEntries param > 0
+    app.ModuleManager.SetOrderBeginBlockers(
+        minttypes.ModuleName,
+        distrtypes.ModuleName,
+        protocolpooltypes.ModuleName,
+        slashingtypes.ModuleName,
+        evidencetypes.ModuleName,
+        stakingtypes.ModuleName,
+        genutiltypes.ModuleName,
+        authz.ModuleName,
+        epochstypes.ModuleName,
+    )
+
+    app.ModuleManager.SetOrderEndBlockers(
+        govtypes.ModuleName,
+        stakingtypes.ModuleName,
+        genutiltypes.ModuleName,
+        feegrant.ModuleName,
+        group.ModuleName,
+        protocolpooltypes.ModuleName,
+    )
+
+    // NOTE: The genutils module must occur after staking so that pools are
+    // properly initialized with tokens from genesis accounts.
+    // NOTE: The genutils module must also occur after auth so that it can access the params from auth.
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() map[string]bool {
+    modAccAddrs := make(map[string]bool)
+    for acc := range GetMaccPerms() {
+        modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true
+    }
+
+    // allow the following addresses to receive funds
+    delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String())
+
+    return modAccAddrs
+}
+```
+
+### InitChainer
+
+The `InitChainer` is a function that initializes the state of the application from a genesis file (i.e. the token balances of genesis accounts). It is called when the application receives the `InitChain` message from the CometBFT engine, which happens when the node is started at `appBlockHeight == 0` (i.e. on genesis). The application must set the `InitChainer` in its [constructor](#constructor-function) via the [`SetInitChainer`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetInitChainer) method.
+
+In general, the `InitChainer` is mostly composed of the [`InitGenesis`](/docs/sdk/v0.53/documentation/module-system/genesis#initgenesis) functions of the application's modules: it calls the [module manager's](/docs/sdk/v0.53/documentation/module-system/module-manager) `InitGenesis` function, which in turn calls the `InitGenesis` function of each module it contains. The order in which the modules' `InitGenesis` functions are called must be set on the module manager with its `SetOrderInitGenesis` method. This happens in the [application's constructor](#application-constructor), and `SetOrderInitGenesis` must be called before `SetInitChainer`.
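To make the init-order contract concrete, the flow can be sketched with a self-contained toy module manager. Everything here (`Module`, `Manager`, `namedModule`, `initGenesisLog`) is an illustrative stand-in, not the SDK's actual API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Module is a toy stand-in for an SDK module that has genesis logic.
type Module interface {
	InitGenesis(raw json.RawMessage) string
}

type namedModule struct{ name string }

func (m namedModule) InitGenesis(raw json.RawMessage) string {
	return fmt.Sprintf("init %s with %s", m.name, raw)
}

// Manager mimics the module manager: InitGenesis invokes each module's
// InitGenesis in the order fixed by SetOrderInitGenesis.
type Manager struct {
	modules map[string]Module
	order   []string
}

func (m *Manager) SetOrderInitGenesis(names ...string) { m.order = names }

// InitGenesis replays the genesis state into every module, in order,
// and returns a log of what was initialized.
func (m *Manager) InitGenesis(genesis map[string]json.RawMessage) []string {
	var log []string
	for _, name := range m.order {
		log = append(log, m.modules[name].InitGenesis(genesis[name]))
	}
	return log
}

func initGenesisLog() []string {
	mgr := &Manager{modules: map[string]Module{
		"auth": namedModule{name: "auth"},
		"bank": namedModule{name: "bank"},
	}}
	// auth runs before bank, mirroring the ordering constraint in simapp's
	// genesisModuleOrder (accounts must exist before balances reference them).
	mgr.SetOrderInitGenesis("auth", "bank")
	return mgr.InitGenesis(map[string]json.RawMessage{
		"auth": json.RawMessage(`{"accounts":[]}`),
		"bank": json.RawMessage(`{"balances":[]}`),
	})
}

func main() {
	for _, line := range initGenesisLog() {
		fmt.Println(line)
	}
}
```

Running it prints one line per module, in exactly the order fixed by `SetOrderInitGenesis`; the map holding the modules has no ordering of its own, which is why the real module manager needs the explicit order to stay deterministic.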
+ +See an example of an `InitChainer` from `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" 
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +### PreBlocker + +The new lifecycle method has two semantics: + +- It runs before the `BeginBlocker` of all modules. +- It can modify consensus parameters in storage, and signal the caller through the return value. + +When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameters in the finalize context: + +```go +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams()) +``` + +The new ctx must be passed to all the other lifecycle methods. + +### BeginBlocker and EndBlocker + +The Cosmos SDK allows developers to execute code automatically as part of their application, through two functions called `BeginBlocker` and `EndBlocker`. They are called when the application receives the `FinalizeBlock` message from the CometBFT consensus engine, and run respectively at the beginning and at the end of each block. The application must set the `BeginBlocker` and `EndBlocker` in its [constructor](#constructor-function) via the [`SetBeginBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetBeginBlocker) and [`SetEndBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetEndBlocker) methods. + +In general, the `BeginBlocker` and `EndBlocker` functions are mostly composed of the [`BeginBlock` and `EndBlock`](/docs/sdk/v0.53/documentation/module-system/beginblock-endblock) functions of each of the application's modules.
This is done by calling the `BeginBlock` and `EndBlock` functions of the module manager, which in turn calls the `BeginBlock` and `EndBlock` functions of each of the modules it contains. The order in which the modules' `BeginBlock` and `EndBlock` functions are called must be set in the [module manager](/docs/sdk/v0.53/documentation/module-system/module-manager) using the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods, respectively. This happens in the [application's constructor](#application-constructor), and the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods must be called before the `SetBeginBlocker` and `SetEndBlocker` functions. + +As a sidenote, it is important to remember that application-specific blockchains are deterministic. Developers must be careful not to introduce non-determinism in `BeginBlocker` or `EndBlocker`, and must also be careful not to make them too computationally expensive, as [gas](/docs/sdk/v0.53/documentation/protocol-development/gas-fees) does not constrain the cost of `BeginBlocker` and `EndBlocker` execution.
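The ordering guarantee can be sketched with a short, self-contained toy. The `toyManager` type and its `register` helper below are hypothetical illustrations, not the SDK's real module manager API; only the `SetOrderBeginBlockers` name is taken from the text, and the point is that hooks run in the configured order, not in registration order:

```go
package main

import "fmt"

// toyManager is a hypothetical stand-in for the SDK module manager, reduced
// to the one behavior that matters here: BeginBlock hooks run in the order
// configured via SetOrderBeginBlockers, regardless of registration order.
type toyManager struct {
	hooks map[string]func() string
	order []string
}

func newToyManager() *toyManager {
	return &toyManager{hooks: map[string]func() string{}}
}

// register adds a module's BeginBlock hook (registration order is irrelevant).
func (m *toyManager) register(name string, hook func() string) {
	m.hooks[name] = hook
}

// SetOrderBeginBlockers mirrors the module manager method of the same name:
// it fixes the deterministic execution order for BeginBlock.
func (m *toyManager) SetOrderBeginBlockers(names ...string) {
	m.order = names
}

// BeginBlock invokes every registered hook in the configured order and
// returns what each hook reported.
func (m *toyManager) BeginBlock() []string {
	var out []string
	for _, name := range m.order {
		out = append(out, m.hooks[name]())
	}
	return out
}

func main() {
	m := newToyManager()
	// registration order deliberately differs from execution order
	m.register("slashing", func() string { return "slashing ran" })
	m.register("distribution", func() string { return "distribution ran" })

	// distribution runs before slashing, matching the comment in simapp about
	// emptying the validator fee pool before slashing happens
	m.SetOrderBeginBlockers("distribution", "slashing")
	fmt.Println(m.BeginBlock())
}
```

In the real SDK the same discipline applies: `SetOrderBeginBlockers` and `SetOrderEndBlockers` are called on the module manager before `SetBeginBlocker` and `SetEndBlocker` are wired on the `BaseApp`, as in the `NewSimApp` constructor above.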
+ +See an example of `BeginBlocker` and `EndBlocker` functions from `simapp` + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" 
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +### Register Codec + +The `EncodingConfig` structure is the last important part of the `app.go` file. The goal of this structure is to define the codecs that will be used throughout the app. + +```go expandable +package params + +import ( + + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" +) + +/ EncodingConfig specifies the concrete encoding types to use for a given app. +/ This is provided for compatibility between protobuf and amino implementations. +type EncodingConfig struct { + InterfaceRegistry types.InterfaceRegistry + Codec codec.Codec + TxConfig client.TxConfig + Amino *codec.LegacyAmino +} +``` + +Here are descriptions of what each of the four fields means: + +- `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). `Any` could be thought as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. Each application module implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations. + - You can read more about `Any` in [ADR-019](docs/sdk/next/documentation/legacy/adr-comprehensive). 
  - To go into more detail, the Cosmos SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/cosmos/gogoproto). By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](/docs/sdk/next/documentation/legacy/adr-comprehensive).
- `Codec`: The default codec used throughout the Cosmos SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to users (for example, in the [CLI](#cli)). By default, the SDK uses Protobuf as `Codec`.
- `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](/docs/sdk/v0.53/documentation/protocol-development/transactions).
- `Amino`: Some legacy parts of the Cosmos SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAminoCodec` method to register the module's specific types within Amino. This `Amino` codec should no longer be used by app developers, and will be removed in future releases.

An application should create its own encoding config.
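A hedged sketch of how an application might wire these four components together, using constructors that also appear in the `simapp` code above (the function name `makeEncodingConfig` is illustrative; exact constructors and sign-mode options vary by SDK version):

```go
package params

import (
	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
	"github.com/cosmos/cosmos-sdk/std"
	"github.com/cosmos/cosmos-sdk/x/auth/tx"
)

// makeEncodingConfig assembles an EncodingConfig from its four parts.
// Illustrative only; consult your SDK version for the exact constructors.
func makeEncodingConfig() EncodingConfig {
	// Interface registry backing the Protobuf codec's Any handling.
	interfaceRegistry := types.NewInterfaceRegistry()
	// Protobuf codec used for state encoding and JSON output.
	appCodec := codec.NewProtoCodec(interfaceRegistry)
	// TxConfig with the default sign modes (SIGN_MODE_DIRECT, etc.).
	txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes)
	// Legacy Amino codec, kept only for backwards-compatibility.
	amino := codec.NewLegacyAmino()

	// Register the SDK's standard types on both codecs.
	std.RegisterLegacyAminoCodec(amino)
	std.RegisterInterfaces(interfaceRegistry)

	return EncodingConfig{
		InterfaceRegistry: interfaceRegistry,
		Codec:             appCodec,
		TxConfig:          txConfig,
		Amino:             amino,
	}
}
```

Module-specific types are then registered on top of this base config via each module's `RegisterInterfaces` and `RegisterLegacyAminoCodec` methods, typically through the basic module manager.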
+See an example of a `simappparams.EncodingConfig` from `simapp`: + +```go expandable +package params + +import ( + + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" +) + +/ EncodingConfig specifies the concrete encoding types to use for a given app. +/ This is provided for compatibility between protobuf and amino implementations. +type EncodingConfig struct { + InterfaceRegistry types.InterfaceRegistry + Codec codec.Codec + TxConfig client.TxConfig + Amino *codec.LegacyAmino +} +``` + +## Modules + +[Modules](/docs/sdk/v0.53/documentation/module-system/intro) are the heart and soul of Cosmos SDK applications. They can be considered as state-machines nested within the state-machine. When a transaction is relayed from the underlying CometBFT engine via the ABCI to the application, it is routed by [`baseapp`](/docs/sdk/v0.53/documentation/application-framework/baseapp) to the appropriate module in order to be processed. This paradigm enables developers to easily build complex state-machines, as most of the modules they need often already exist. **For developers, most of the work involved in building a Cosmos SDK application revolves around building custom modules required by their application that do not exist yet, and integrating them with modules that do already exist into one coherent application**. In the application directory, the standard practice is to store modules in the `x/` folder (not to be confused with the Cosmos SDK's `x/` folder, which contains already-built modules). + +### Application Module Interface + +Modules must implement [interfaces](/docs/sdk/v0.53/documentation/module-system/module-manager#application-module-interfaces) defined in the Cosmos SDK, [`AppModuleBasic`](/docs/sdk/v0.53/documentation/module-system/module-manager#appmodulebasic) and [`AppModule`](/docs/sdk/v0.53/documentation/module-system/module-manager#appmodule). 
The former implements basic non-dependent elements of the module, such as the `codec`, while the latter handles the bulk of the module methods (including methods that require references to other modules' `keeper`s). Both the `AppModule` and `AppModuleBasic` types are, by convention, defined in a file called `module.go`. + +`AppModule` exposes a collection of useful methods on the module that facilitate the composition of modules into a coherent application. These methods are called from the [`module manager`](/docs/sdk/v0.53/documentation/module-system/module-manager#manager), which manages the application's collection of modules. + +### `Msg` Services + +Each application module defines two [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods. +Each Protobuf `Msg` service method is 1:1 related to a Protobuf request type, which must implement the `sdk.Msg` interface. +Note that `sdk.Msg`s are bundled in [transactions](/docs/sdk/v0.53/documentation/protocol-development/transactions), and each transaction contains one or multiple messages. + +When a valid block of transactions is received by the full-node, CometBFT relays each one to the application via [`DeliverTx`](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#specifics-of-responsedelivertx). Then, the application handles the transaction: + +1. Upon receiving the transaction, the application first unmarshals it from `[]byte`. +2. Then, it verifies a few things about the transaction like [fee payment and signatures](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#antehandler) before extracting the `Msg`(s) contained in the transaction. +3. `sdk.Msg`s are encoded using Protobuf [`Any`s](#register-codec). 
By analyzing each `Any`'s `type_url`, baseapp's `msgServiceRouter` routes the `sdk.Msg` to the corresponding module's `Msg` service. +4. If the message is successfully processed, the state is updated. + +For more details, see [transaction lifecycle](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle). + +Module developers create custom `Msg` services when they build their own module. The general practice is to define the `Msg` Protobuf service in a `tx.proto` file. For example, the `x/bank` module defines a service with two methods to transfer tokens: + +```protobuf +// Msg defines the bank Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + // Send defines a method for sending coins from one account to another account. + rpc Send(MsgSend) returns (MsgSendResponse); + + // MultiSend defines a method for sending coins from some accounts to other accounts. + rpc MultiSend(MsgMultiSend) returns (MsgMultiSendResponse); + + // UpdateParams defines a governance operation for updating the x/bank module parameters. + // The authority is defined in the keeper. + rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse) { + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; + }; + + // SetSendEnabled is a governance operation for setting the SendEnabled flag + // on any number of Denoms. Only the entries to add or update should be + // included. Entries that already exist in the store, but that aren't + // included in this message, will be left unchanged. + rpc SetSendEnabled(MsgSetSendEnabled) returns (MsgSetSendEnabledResponse) { + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; + }; +} +``` + +Service methods use `keeper` in order to update the module state. + +Each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterMsgServer` function provided by the generated Protobuf code. 
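The `type_url`-based routing described in this section can be illustrated with a small, self-contained Go sketch. This is a toy model only, not the actual `baseapp` code: the `Msg` interface, `MsgSend` struct, and handler signature below are simplified stand-ins for the real SDK types.

```go
package main

import (
	"errors"
	"fmt"
)

// Msg is a stand-in for sdk.Msg: every message knows its own type URL.
type Msg interface {
	TypeURL() string
}

// MsgSend is a toy message, loosely modeled on x/bank's MsgSend.
type MsgSend struct {
	From, To string
	Amount   int64
}

func (MsgSend) TypeURL() string { return "/cosmos.bank.v1beta1.MsgSend" }

// MsgServiceRouter maps a message's type_url to its handler, mimicking
// how baseapp routes each sdk.Msg to the owning module's Msg service.
type MsgServiceRouter struct {
	handlers map[string]func(Msg) error
}

func NewMsgServiceRouter() *MsgServiceRouter {
	return &MsgServiceRouter{handlers: make(map[string]func(Msg) error)}
}

// Register wires a handler for one message type; a module's
// RegisterServices plays this role in a real application.
func (r *MsgServiceRouter) Register(typeURL string, h func(Msg) error) {
	r.handlers[typeURL] = h
}

// Route dispatches a message by its type_url, or fails if no module
// registered a handler for it.
func (r *MsgServiceRouter) Route(m Msg) error {
	h, ok := r.handlers[m.TypeURL()]
	if !ok {
		return errors.New("unrecognized message type: " + m.TypeURL())
	}
	return h(m)
}

func main() {
	router := NewMsgServiceRouter()
	router.Register("/cosmos.bank.v1beta1.MsgSend", func(m Msg) error {
		send := m.(MsgSend)
		fmt.Printf("sending %d from %s to %s\n", send.Amount, send.From, send.To)
		return nil
	})

	if err := router.Route(MsgSend{From: "alice", To: "bob", Amount: 10}); err != nil {
		fmt.Println("error:", err)
	}
}
```

Routing a message whose `type_url` was never registered returns an "unrecognized message type" error, which mirrors how a transaction carrying an unknown message fails in `baseapp`.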
+ +### gRPC `Query` Services + +gRPC `Query` services allow users to query the state using [gRPC](https://grpc.io). They are enabled by default, and can be configured under the `grpc.enable` and `grpc.address` fields inside [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml). + +gRPC `Query` services are defined in the module's Protobuf definition files, specifically inside `query.proto`. The `query.proto` definition file exposes a single `Query` [Protobuf service](https://developers.google.com/protocol-buffers/docs/proto#services). Each gRPC query endpoint corresponds to a service method, starting with the `rpc` keyword, inside the `Query` service. + +Protobuf generates a `QueryServer` interface for each module, containing all the service methods. A module's [`keeper`](#keeper) then needs to implement this `QueryServer` interface, by providing the concrete implementation of each service method. This concrete implementation is the handler of the corresponding gRPC query endpoint. + +Finally, each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterQueryServer` function provided by the generated Protobuf code. + +### Keeper + +[`Keepers`](/docs/sdk/v0.53/documentation/module-system/keeper) are the gatekeepers of their module's store(s). To read or write in a module's store, it is mandatory to go through one of its `keeper`'s methods. This is ensured by the [object-capabilities](/docs/sdk/v0.53/documentation/core-concepts/ocap) model of the Cosmos SDK. Only objects that hold the key to a store can access it, and only the module's `keeper` should hold the key(s) to the module's store(s). + +`Keepers` are generally defined in a file called `keeper.go`. It contains the `keeper`'s type definition and methods. 
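To make the gatekeeper idea concrete, here is a deliberately simplified, self-contained sketch of a `keeper.go`. It is illustrative only: a Go map stands in for the module's store, a plain string for the store key, and JSON for the SDK's codec, but the core property holds, namely that all reads and writes of module state go through `Keeper` methods, and values are marshaled to `[]byte` before storage.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// KVStore is a toy stand-in for the SDK's store interface:
// stores only accept raw bytes as values.
type KVStore map[string][]byte

// Params is an example piece of module state.
type Params struct {
	SendEnabled bool `json:"send_enabled"`
}

// Keeper holds the (conceptual) store key and is the only object
// allowed to touch the module's store.
type Keeper struct {
	storeKey string
	store    KVStore
}

// NewKeeper mirrors the conventional keeper constructor.
func NewKeeper(storeKey string, store KVStore) Keeper {
	return Keeper{storeKey: storeKey, store: store}
}

// SetParams marshals state before writing, since stores only hold bytes.
func (k Keeper) SetParams(p Params) error {
	bz, err := json.Marshal(p)
	if err != nil {
		return err
	}
	k.store[k.storeKey+"/params"] = bz
	return nil
}

// GetParams unmarshals state read back from the store.
func (k Keeper) GetParams() (Params, error) {
	var p Params
	err := json.Unmarshal(k.store[k.storeKey+"/params"], &p)
	return p, err
}

func main() {
	k := NewKeeper("bank", KVStore{})
	if err := k.SetParams(Params{SendEnabled: true}); err != nil {
		panic(err)
	}
	p, err := k.GetParams()
	if err != nil {
		panic(err)
	}
	fmt.Println("send enabled:", p.SendEnabled)
}
```

In a real module the codec is the application's Protobuf codec and the store key is a capability object handed out by the multistore, which is what enforces the object-capabilities guarantee described above.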
+ +The `keeper` type definition generally consists of the following: + +- **Key(s)** to the module's store(s) in the multistore. +- References to **other modules' `keeper`s**. Only needed if the `keeper` needs to access other modules' store(s) (either to read or write from them). +- A reference to the application's **codec**. The `keeper` needs it to marshal structs before storing them, or to unmarshal them when it retrieves them, because stores only accept `[]byte` as value. + +Along with the type definition, the next important component of the `keeper.go` file is the `keeper`'s constructor function, `NewKeeper`. This function instantiates a new `keeper` of the type defined above with a `codec`, store `keys`, and potentially references to other modules' `keeper`s as parameters. The `NewKeeper` function is called from the [application's constructor](#constructor-function). The rest of the file defines the `keeper`'s methods, which are primarily getters and setters. + +### Command-Line, gRPC Services and REST Interfaces + +Each module defines command-line commands, gRPC services, and REST routes to be exposed to the end-user via the [application's interfaces](#application-interfaces). This enables end-users to create messages of the types defined in the module, or to query the subset of the state managed by the module. + +#### CLI + +Generally, the [commands related to a module](/docs/sdk/v0.53/documentation/module-system/module-interfaces#cli) are defined in a folder called `client/cli` in the module's folder. The CLI divides commands into two categories, transactions and queries, defined in `client/cli/tx.go` and `client/cli/query.go`, respectively. Both commands are built on top of the [Cobra Library](https://github.com/spf13/cobra): + +- Transaction commands let users generate new transactions so that they can be included in a block and eventually update the state. One command should be created for each [message type](#message-types) defined in the module. 
The command calls the constructor of the message with the parameters provided by the end-user, and wraps it into a transaction. The Cosmos SDK handles signing and the addition of other transaction metadata. +- Queries let users query the subset of the state defined by the module. Query commands forward queries to the [application's query router](/docs/sdk/v0.53/documentation/application-framework/baseapp#query-routing), which routes them to the appropriate [querier](#querier) using the `queryRoute` parameter supplied. + +#### gRPC + +[gRPC](https://grpc.io) is a modern open-source high performance RPC framework that has support in multiple languages. It is the recommended way for external clients (such as wallets, browsers and other backend services) to interact with a node. + +Each module can expose gRPC endpoints called [service methods](https://grpc.io/docs/what-is-grpc/core-concepts/#service-definition), which are defined in the [module's Protobuf `query.proto` file](#grpc-query-services). A service method is defined by its name, input arguments, and output response. The module then needs to perform the following actions: + +- Define a `RegisterGRPCGatewayRoutes` method on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module. +- For each service method, define a corresponding handler. The handler implements the core logic necessary to serve the gRPC request, and is located in the `keeper/grpc_query.go` file. + +#### gRPC-gateway REST Endpoints + +Some external clients may not wish to use gRPC. In this case, the Cosmos SDK provides a gRPC gateway service, which exposes each gRPC service as a corresponding REST endpoint. Please refer to the [grpc-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) documentation to learn more. + +The REST endpoints are defined in the Protobuf files, along with the gRPC services, using Protobuf annotations. 
Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods. By default, all REST endpoints defined in the SDK have a URL starting with the `/cosmos/` prefix. + +The Cosmos SDK also provides a development endpoint to generate [Swagger](https://swagger.io/) definition files for these REST endpoints. This endpoint can be enabled inside the [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml) config file, under the `api.swagger` key. + +## Application Interface + +[Interfaces](#command-line-grpc-services-and-rest-interfaces) let end-users interact with full-node clients. This means querying data from the full-node or creating and sending new transactions to be relayed by the full-node and eventually included in a block. + +The main interface is the [Command-Line Interface](/docs/sdk/v0.53/api-reference/client-tools/cli). The CLI of a Cosmos SDK application is built by aggregating [CLI commands](#cli) defined in each of the modules used by the application. The CLI of an application is the same as the daemon (e.g. `appd`), and is defined in a file called `appd/main.go`. The file contains the following: + +- **A `main()` function**, which is executed to build the `appd` interface client. This function prepares each command and adds them to the `rootCmd` before building them. At the root of `appd`, the function adds generic commands like `status`, `keys`, and `config`, query commands, tx commands, and `rest-server`. +- **Query commands**, which are added by calling the `queryCmd` function. This function returns a Cobra command that contains the query commands defined in each of the application's modules (passed as an array of `sdk.ModuleClients` from the `main()` function), as well as some other lower level query commands such as block or validator queries. Query commands are called by using the command `appd query [query]` of the CLI. 
+- **Transaction commands**, which are added by calling the `txCmd` function. Similar to `queryCmd`, the function returns a Cobra command that contains the tx commands defined in each of the application's modules, as well as lower level tx commands like transaction signing or broadcasting. Tx commands are called by using the command `appd tx [tx]` of the CLI. + +See an example of an application's main command-line file from the [Cosmos Hub](https://github.com/cosmos/gaia). + +```go expandable +package cmd + +import ( + + "errors" + "io" + "os" + "path/filepath" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/config" + "github.com/cosmos/cosmos-sdk/client/debug" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/keys" + "github.com/cosmos/cosmos-sdk/client/rpc" + "github.com/cosmos/cosmos-sdk/server" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/snapshots" + snapshottypes "github.com/cosmos/cosmos-sdk/snapshots/types" + "github.com/cosmos/cosmos-sdk/store" + sdk "github.com/cosmos/cosmos-sdk/types" + authcmd "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + genutilcli "github.com/cosmos/cosmos-sdk/x/genutil/client/cli" + "github.com/spf13/cast" + "github.com/spf13/cobra" + tmcfg "github.com/tendermint/tendermint/config" + tmcli "github.com/tendermint/tendermint/libs/cli" + "github.com/tendermint/tendermint/libs/log" + dbm "github.com/tendermint/tm-db" + + gaia "github.com/cosmos/gaia/v8/app" + "github.com/cosmos/gaia/v8/app/params" +) + +/ NewRootCmd creates a new root command for simd. It is called once in the +/ main function. 
+func NewRootCmd() (*cobra.Command, params.EncodingConfig) { + encodingConfig := gaia.MakeTestEncodingConfig() + initClientCtx := client.Context{ +}. + WithCodec(encodingConfig.Codec). + WithInterfaceRegistry(encodingConfig.InterfaceRegistry). + WithTxConfig(encodingConfig.TxConfig). + WithLegacyAmino(encodingConfig.Amino). + WithInput(os.Stdin). + WithAccountRetriever(types.AccountRetriever{ +}). + WithHomeDir(gaia.DefaultNodeHome). + WithViper("") + rootCmd := &cobra.Command{ + Use: "gaiad", + Short: "Stargate Cosmos Hub App", + PersistentPreRunE: func(cmd *cobra.Command, _ []string) + +error { + initClientCtx, err := client.ReadPersistentCommandFlags(initClientCtx, cmd.Flags()) + if err != nil { + return err +} + +initClientCtx, err = config.ReadFromClientConfig(initClientCtx) + if err != nil { + return err +} + if err = client.SetCmdClientContextHandler(initClientCtx, cmd); err != nil { + return err +} + +customTemplate, customGaiaConfig := initAppConfig() + customTMConfig := initTendermintConfig() + +return server.InterceptConfigsPreRunHandler(cmd, customTemplate, customGaiaConfig, customTMConfig) +}, +} + +initRootCmd(rootCmd, encodingConfig) + +return rootCmd, encodingConfig +} + +/ initTendermintConfig helps to override default Tendermint Config values. +/ return tmcfg.DefaultConfig if no custom configuration is required for the application. 
+func initTendermintConfig() *tmcfg.Config { + cfg := tmcfg.DefaultConfig() + + / these values put a higher strain on node memory + / cfg.P2P.MaxNumInboundPeers = 100 + / cfg.P2P.MaxNumOutboundPeers = 40 + + return cfg +} + +func initAppConfig() (string, interface{ +}) { + srvCfg := serverconfig.DefaultConfig() + +srvCfg.StateSync.SnapshotInterval = 1000 + srvCfg.StateSync.SnapshotKeepRecent = 10 + + return params.CustomConfigTemplate(), params.CustomAppConfig{ + Config: *srvCfg, + BypassMinFeeMsgTypes: gaia.GetDefaultBypassFeeMessages(), +} +} + +func initRootCmd(rootCmd *cobra.Command, encodingConfig params.EncodingConfig) { + cfg := sdk.GetConfig() + +cfg.Seal() + +rootCmd.AddCommand( + genutilcli.InitCmd(gaia.ModuleBasics, gaia.DefaultNodeHome), + genutilcli.CollectGenTxsCmd(banktypes.GenesisBalancesIterator{ +}, gaia.DefaultNodeHome), + genutilcli.GenTxCmd(gaia.ModuleBasics, encodingConfig.TxConfig, banktypes.GenesisBalancesIterator{ +}, gaia.DefaultNodeHome), + genutilcli.ValidateGenesisCmd(gaia.ModuleBasics), + AddGenesisAccountCmd(gaia.DefaultNodeHome), + tmcli.NewCompletionCmd(rootCmd, true), + testnetCmd(gaia.ModuleBasics, banktypes.GenesisBalancesIterator{ +}), + debug.Cmd(), + config.Cmd(), + ) + ac := appCreator{ + encCfg: encodingConfig, +} + +server.AddCommands(rootCmd, gaia.DefaultNodeHome, ac.newApp, ac.appExport, addModuleInitFlags) + + / add keybase, auxiliary RPC, query, and tx child commands + rootCmd.AddCommand( + rpc.StatusCommand(), + queryCommand(), + txCommand(), + keys.Commands(gaia.DefaultNodeHome), + ) + +rootCmd.AddCommand(server.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec)) +} + +func addModuleInitFlags(startCmd *cobra.Command) { + crisis.AddModuleInitFlags(startCmd) +} + +func queryCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "query", + Aliases: []string{"q" +}, + Short: "Querying subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + 
+cmd.AddCommand( + authcmd.GetAccountCmd(), + rpc.ValidatorCommand(), + rpc.BlockCommand(), + authcmd.QueryTxsByEventsCmd(), + authcmd.QueryTxCmd(), + ) + +gaia.ModuleBasics.AddQueryCommands(cmd) + +cmd.PersistentFlags().String(flags.FlagChainID, "", "The network chain ID") + +return cmd +} + +func txCommand() *cobra.Command { + cmd := &cobra.Command{ + Use: "tx", + Short: "Transactions subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +cmd.AddCommand( + authcmd.GetSignCommand(), + authcmd.GetSignBatchCommand(), + authcmd.GetMultiSignCommand(), + authcmd.GetMultiSignBatchCmd(), + authcmd.GetValidateSignaturesCommand(), + flags.LineBreak, + authcmd.GetBroadcastCommand(), + authcmd.GetEncodeCommand(), + authcmd.GetDecodeCommand(), + authcmd.GetAuxToFeeCommand(), + ) + +gaia.ModuleBasics.AddTxCommands(cmd) + +cmd.PersistentFlags().String(flags.FlagChainID, "", "The network chain ID") + +return cmd +} + +type appCreator struct { + encCfg params.EncodingConfig +} + +func (ac appCreator) + +newApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + appOpts servertypes.AppOptions, +) + +servertypes.Application { + var cache sdk.MultiStorePersistentCache + if cast.ToBool(appOpts.Get(server.FlagInterBlockCache)) { + cache = store.NewCommitKVStoreCacheManager() +} + skipUpgradeHeights := make(map[int64]bool) + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + +pruningOpts, err := server.GetPruningOptionsFromFlags(appOpts) + if err != nil { + panic(err) +} + snapshotDir := filepath.Join(cast.ToString(appOpts.Get(flags.FlagHome)), "data", "snapshots") + +snapshotDB, err := dbm.NewDB("metadata", server.GetAppDBBackend(appOpts), snapshotDir) + if err != nil { + panic(err) +} + +snapshotStore, err := snapshots.NewStore(snapshotDB, snapshotDir) + if err != nil { + panic(err) +} + snapshotOptions := snapshottypes.NewSnapshotOptions( + 
cast.ToUint64(appOpts.Get(server.FlagStateSyncSnapshotInterval)), + cast.ToUint32(appOpts.Get(server.FlagStateSyncSnapshotKeepRecent)), + ) + +return gaia.NewGaiaApp( + logger, db, traceStore, true, skipUpgradeHeights, + cast.ToString(appOpts.Get(flags.FlagHome)), + cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)), + ac.encCfg, + appOpts, + baseapp.SetPruning(pruningOpts), + baseapp.SetMinGasPrices(cast.ToString(appOpts.Get(server.FlagMinGasPrices))), + baseapp.SetHaltHeight(cast.ToUint64(appOpts.Get(server.FlagHaltHeight))), + baseapp.SetHaltTime(cast.ToUint64(appOpts.Get(server.FlagHaltTime))), + baseapp.SetMinRetainBlocks(cast.ToUint64(appOpts.Get(server.FlagMinRetainBlocks))), + baseapp.SetInterBlockCache(cache), + baseapp.SetTrace(cast.ToBool(appOpts.Get(server.FlagTrace))), + baseapp.SetIndexEvents(cast.ToStringSlice(appOpts.Get(server.FlagIndexEvents))), + baseapp.SetSnapshot(snapshotStore, snapshotOptions), + ) +} + +func (ac appCreator) + +appExport( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + appOpts servertypes.AppOptions, +) (servertypes.ExportedApp, error) { + homePath, ok := appOpts.Get(flags.FlagHome).(string) + if !ok || homePath == "" { + return servertypes.ExportedApp{ +}, errors.New("application home is not set") +} + +var loadLatest bool + if height == -1 { + loadLatest = true +} + gaiaApp := gaia.NewGaiaApp( + logger, + db, + traceStore, + loadLatest, + map[int64]bool{ +}, + homePath, + cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)), + ac.encCfg, + appOpts, + ) + if height != -1 { + if err := gaiaApp.LoadHeight(height); err != nil { + return servertypes.ExportedApp{ +}, err +} + +} + +return gaiaApp.ExportAppStateAndValidators(forZeroHeight, jailAllowedAddrs) +} +``` + +## Dependencies and Makefile + +This section is optional, as developers are free to choose their dependency manager and project building method. 
That said, the most widely used tool for dependency management is [`go.mod`](https://github.com/golang/go/wiki/Modules). It ensures that each of the libraries used throughout the application is imported with the correct version. + +The following is the `go.mod` of the [Cosmos Hub](https://github.com/cosmos/gaia), provided as an example. + +```go expandable +module github.com/cosmos/gaia/v8 + +go 1.18 + +require ( + cosmossdk.io/math v1.0.0-beta.3 + github.com/cosmos/cosmos-sdk v0.46.2 + github.com/cosmos/go-bip39 v1.0.0 / indirect + github.com/cosmos/ibc-go/v5 v5.0.0 + github.com/gogo/protobuf v1.3.3 + github.com/golang/protobuf v1.5.2 + github.com/golangci/golangci-lint v1.50.0 + github.com/gorilla/mux v1.8.0 + github.com/gravity-devs/liquidity/v2 v2.0.0 + github.com/grpc-ecosystem/grpc-gateway v1.16.0 + github.com/pkg/errors v0.9.1 + github.com/rakyll/statik v0.1.7 + github.com/spf13/cast v1.5.0 + github.com/spf13/cobra v1.6.0 + github.com/spf13/pflag v1.0.5 + github.com/spf13/viper v1.13.0 + github.com/strangelove-ventures/packet-forward-middleware/v2 v2.1.4-0.20220802012200-5a62a55a7f1d + github.com/stretchr/testify v1.8.0 + github.com/tendermint/tendermint v0.34.21 + github.com/tendermint/tm-db v0.6.7 + google.golang.org/genproto v0.0.0-20220815135757-37a418bb8959 + google.golang.org/grpc v1.50.0 +) + +require ( + 4d63.com/gochecknoglobals v0.1.0 / indirect + cloud.google.com/go v0.102.1 / indirect + cloud.google.com/go/compute v1.7.0 / indirect + cloud.google.com/go/iam v0.4.0 / indirect + cloud.google.com/go/storage v1.22.1 / indirect + cosmossdk.io/errors v1.0.0-beta.7 / indirect + filippo.io/edwards25519 v1.0.0-rc.1 / indirect + github.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4 / indirect + github.com/99designs/keyring v1.2.1 / indirect + github.com/Abirdcfly/dupword v0.0.7 / indirect + github.com/Antonboom/errname v0.1.7 / indirect + github.com/Antonboom/nilnil v0.1.1 / indirect + github.com/BurntSushi/toml v1.2.0 / indirect + 
github.com/ChainSafe/go-schnorrkel v0.0.0-20200405005733-88cbf1b4c40d / indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 / indirect + github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0 / indirect + github.com/Masterminds/semver v1.5.0 / indirect + github.com/OpenPeeDeeP/depguard v1.1.1 / indirect + github.com/Workiva/go-datastructures v1.0.53 / indirect + github.com/alexkohler/prealloc v1.0.0 / indirect + github.com/alingse/asasalint v0.0.11 / indirect + github.com/armon/go-metrics v0.4.0 / indirect + github.com/ashanbrown/forbidigo v1.3.0 / indirect + github.com/ashanbrown/makezero v1.1.1 / indirect + github.com/aws/aws-sdk-go v1.40.45 / indirect + github.com/beorn7/perks v1.0.1 / indirect + github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d / indirect + github.com/bgentry/speakeasy v0.1.0 / indirect + github.com/bkielbasa/cyclop v1.2.0 / indirect + github.com/blizzy78/varnamelen v0.8.0 / indirect + github.com/bombsimon/wsl/v3 v3.3.0 / indirect + github.com/breml/bidichk v0.2.3 / indirect + github.com/breml/errchkjson v0.3.0 / indirect + github.com/btcsuite/btcd v0.22.1 / indirect + github.com/butuzov/ireturn v0.1.1 / indirect + github.com/cenkalti/backoff/v4 v4.1.3 / indirect + github.com/cespare/xxhash v1.1.0 / indirect + github.com/cespare/xxhash/v2 v2.1.2 / indirect + github.com/charithe/durationcheck v0.0.9 / indirect + github.com/chavacava/garif v0.0.0-20220630083739-93517212f375 / indirect + github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e / indirect + github.com/cockroachdb/apd/v2 v2.0.2 / indirect + github.com/coinbase/rosetta-sdk-go v0.7.9 / indirect + github.com/confio/ics23/go v0.7.0 / indirect + github.com/cosmos/btcutil v1.0.4 / indirect + github.com/cosmos/cosmos-proto v1.0.0-alpha7 / indirect + github.com/cosmos/gorocksdb v1.2.0 / indirect + github.com/cosmos/iavl v0.19.2-0.20220916140702-9b6be3095313 / indirect + github.com/cosmos/ledger-cosmos-go v0.11.1 / indirect + 
github.com/cosmos/ledger-go v0.9.3 / indirect + github.com/creachadair/taskgroup v0.3.2 / indirect + github.com/curioswitch/go-reassign v0.2.0 / indirect + github.com/daixiang0/gci v0.8.0 / indirect + github.com/danieljoos/wincred v1.1.2 / indirect + github.com/davecgh/go-spew v1.1.1 / indirect + github.com/denis-tingaikin/go-header v0.4.3 / indirect + github.com/desertbit/timer v0.0.0-20180107155436-c41aec40b27f / indirect + github.com/dgraph-io/badger/v2 v2.2007.4 / indirect + github.com/dgraph-io/ristretto v0.1.0 / indirect + github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 / indirect + github.com/dustin/go-humanize v1.0.0 / indirect + github.com/dvsekhvalnov/jose2go v1.5.0 / indirect + github.com/esimonov/ifshort v1.0.4 / indirect + github.com/ettle/strcase v0.1.1 / indirect + github.com/fatih/color v1.13.0 / indirect + github.com/fatih/structtag v1.2.0 / indirect + github.com/felixge/httpsnoop v1.0.1 / indirect + github.com/firefart/nonamedreturns v1.0.4 / indirect + github.com/fsnotify/fsnotify v1.5.4 / indirect + github.com/fzipp/gocyclo v0.6.0 / indirect + github.com/go-critic/go-critic v0.6.5 / indirect + github.com/go-kit/kit v0.12.0 / indirect + github.com/go-kit/log v0.2.1 / indirect + github.com/go-logfmt/logfmt v0.5.1 / indirect + github.com/go-playground/validator/v10 v10.4.1 / indirect + github.com/go-toolsmith/astcast v1.0.0 / indirect + github.com/go-toolsmith/astcopy v1.0.2 / indirect + github.com/go-toolsmith/astequal v1.0.3 / indirect + github.com/go-toolsmith/astfmt v1.0.0 / indirect + github.com/go-toolsmith/astp v1.0.0 / indirect + github.com/go-toolsmith/strparse v1.0.0 / indirect + github.com/go-toolsmith/typep v1.0.2 / indirect + github.com/go-xmlfmt/xmlfmt v0.0.0-20191208150333-d5b6f63a941b / indirect + github.com/gobwas/glob v0.2.3 / indirect + github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 / indirect + github.com/gofrs/flock v0.8.1 / indirect + github.com/gogo/gateway v1.1.0 / indirect + github.com/golang/glog 
v1.0.0 / indirect + github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da / indirect + github.com/golang/snappy v0.0.4 / indirect + github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 / indirect + github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a / indirect + github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe / indirect + github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2 / indirect + github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 / indirect + github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca / indirect + github.com/golangci/misspell v0.3.5 / indirect + github.com/golangci/revgrep v0.0.0-20220804021717-745bb2f7c2e6 / indirect + github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 / indirect + github.com/google/btree v1.0.1 / indirect + github.com/google/go-cmp v0.5.9 / indirect + github.com/google/orderedcode v0.0.1 / indirect + github.com/google/uuid v1.3.0 / indirect + github.com/googleapis/enterprise-certificate-proxy v0.1.0 / indirect + github.com/googleapis/gax-go/v2 v2.4.0 / indirect + github.com/googleapis/go-type-adapters v1.0.0 / indirect + github.com/gordonklaus/ineffassign v0.0.0-20210914165742-4cc7213b9bc8 / indirect + github.com/gorilla/handlers v1.5.1 / indirect + github.com/gorilla/websocket v1.5.0 / indirect + github.com/gostaticanalysis/analysisutil v0.7.1 / indirect + github.com/gostaticanalysis/comment v1.4.2 / indirect + github.com/gostaticanalysis/forcetypeassert v0.1.0 / indirect + github.com/gostaticanalysis/nilerr v0.1.1 / indirect + github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 / indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.0.1 / indirect + github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c / indirect + github.com/gtank/merlin v0.1.1 / indirect + github.com/gtank/ristretto255 v0.1.2 / indirect + github.com/hashicorp/errwrap v1.1.0 / indirect + github.com/hashicorp/go-cleanhttp v0.5.2 / indirect + 
github.com/hashicorp/go-getter v1.6.1 / indirect + github.com/hashicorp/go-immutable-radix v1.3.1 / indirect + github.com/hashicorp/go-multierror v1.1.1 / indirect + github.com/hashicorp/go-safetemp v1.0.0 / indirect + github.com/hashicorp/go-version v1.6.0 / indirect + github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d / indirect + github.com/hashicorp/hcl v1.0.0 / indirect + github.com/hdevalence/ed25519consensus v0.0.0-20220222234857-c00d1f31bab3 / indirect + github.com/hexops/gotextdiff v1.0.3 / indirect + github.com/improbable-eng/grpc-web v0.15.0 / indirect + github.com/inconshreveable/mousetrap v1.0.1 / indirect + github.com/jgautheron/goconst v1.5.1 / indirect + github.com/jingyugao/rowserrcheck v1.1.1 / indirect + github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af / indirect + github.com/jmespath/go-jmespath v0.4.0 / indirect + github.com/jmhodges/levigo v1.0.0 / indirect + github.com/julz/importas v0.1.0 / indirect + github.com/kisielk/errcheck v1.6.2 / indirect + github.com/kisielk/gotool v1.0.0 / indirect + github.com/kkHAIKE/contextcheck v1.1.2 / indirect + github.com/klauspost/compress v1.15.9 / indirect + github.com/kulti/thelper v0.6.3 / indirect + github.com/kunwardeep/paralleltest v1.0.6 / indirect + github.com/kyoh86/exportloopref v0.1.8 / indirect + github.com/ldez/gomoddirectives v0.2.3 / indirect + github.com/ldez/tagliatelle v0.3.1 / indirect + github.com/leonklingele/grouper v1.1.0 / indirect + github.com/lib/pq v1.10.6 / indirect + github.com/libp2p/go-buffer-pool v0.1.0 / indirect + github.com/lufeee/execinquery v1.2.1 / indirect + github.com/magiconair/properties v1.8.6 / indirect + github.com/manifoldco/promptui v0.9.0 / indirect + github.com/maratori/testableexamples v1.0.0 / indirect + github.com/maratori/testpackage v1.1.0 / indirect + github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 / indirect + github.com/mattn/go-colorable v0.1.13 / indirect + github.com/mattn/go-isatty v0.0.16 / 
indirect + github.com/mattn/go-runewidth v0.0.9 / indirect + github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 / indirect + github.com/mbilski/exhaustivestruct v1.2.0 / indirect + github.com/mgechev/revive v1.2.4 / indirect + github.com/mimoo/StrobeGo v0.0.0-20181016162300-f8f6d4d2b643 / indirect + github.com/minio/highwayhash v1.0.2 / indirect + github.com/mitchellh/go-homedir v1.1.0 / indirect + github.com/mitchellh/go-testing-interface v1.0.0 / indirect + github.com/mitchellh/mapstructure v1.5.0 / indirect + github.com/moricho/tparallel v0.2.1 / indirect + github.com/mtibben/percent v0.2.1 / indirect + github.com/nakabonne/nestif v0.3.1 / indirect + github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 / indirect + github.com/nishanths/exhaustive v0.8.3 / indirect + github.com/nishanths/predeclared v0.2.2 / indirect + github.com/olekukonko/tablewriter v0.0.5 / indirect + github.com/pelletier/go-toml v1.9.5 / indirect + github.com/pelletier/go-toml/v2 v2.0.5 / indirect + github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5 / indirect + github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d / indirect + github.com/pmezard/go-difflib v1.0.0 / indirect + github.com/polyfloyd/go-errorlint v1.0.5 / indirect + github.com/prometheus/client_golang v1.12.2 / indirect + github.com/prometheus/client_model v0.2.0 / indirect + github.com/prometheus/common v0.34.0 / indirect + github.com/prometheus/procfs v0.7.3 / indirect + github.com/quasilyte/go-ruleguard v0.3.18 / indirect + github.com/quasilyte/gogrep v0.0.0-20220828223005-86e4605de09f / indirect + github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95 / indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 / indirect + github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 / indirect + github.com/regen-network/cosmos-proto v0.3.1 / indirect + github.com/rs/cors v1.8.2 / indirect + github.com/rs/zerolog v1.27.0 
/ indirect + github.com/ryancurrah/gomodguard v1.2.4 / indirect + github.com/ryanrolds/sqlclosecheck v0.3.0 / indirect + github.com/sanposhiho/wastedassign/v2 v2.0.6 / indirect + github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa / indirect + github.com/sashamelentyev/interfacebloat v1.1.0 / indirect + github.com/sashamelentyev/usestdlibvars v1.20.0 / indirect + github.com/securego/gosec/v2 v2.13.1 / indirect + github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c / indirect + github.com/sirupsen/logrus v1.9.0 / indirect + github.com/sivchari/containedctx v1.0.2 / indirect + github.com/sivchari/nosnakecase v1.7.0 / indirect + github.com/sivchari/tenv v1.7.0 / indirect + github.com/sonatard/noctx v0.0.1 / indirect + github.com/sourcegraph/go-diff v0.6.1 / indirect + github.com/spf13/afero v1.8.2 / indirect + github.com/spf13/jwalterweatherman v1.1.0 / indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 / indirect + github.com/stbenjam/no-sprintf-host-port v0.1.1 / indirect + github.com/stretchr/objx v0.4.0 / indirect + github.com/subosito/gotenv v1.4.1 / indirect + github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 / indirect + github.com/tdakkota/asciicheck v0.1.1 / indirect + github.com/tendermint/btcd v0.1.1 / indirect + github.com/tendermint/crypto v0.0.0-20191022145703-50d29ede1e15 / indirect + github.com/tendermint/go-amino v0.16.0 / indirect + github.com/tetafro/godot v1.4.11 / indirect + github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144 / indirect + github.com/timonwong/loggercheck v0.9.3 / indirect + github.com/tomarrell/wrapcheck/v2 v2.6.2 / indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.0 / indirect + github.com/ulikunitz/xz v0.5.8 / indirect + github.com/ultraware/funlen v0.0.3 / indirect + github.com/ultraware/whitespace v0.0.5 / indirect + github.com/uudashr/gocognit v1.0.6 / indirect + github.com/yagipy/maintidx v1.0.0 / indirect + github.com/yeya24/promlinter v0.2.0 / indirect + github.com/zondax/hid 
v0.9.1-0.20220302062450-5552068d2266 / indirect + gitlab.com/bosi/decorder v0.2.3 / indirect + go.etcd.io/bbolt v1.3.6 / indirect + go.opencensus.io v0.23.0 / indirect + go.uber.org/atomic v1.9.0 / indirect + go.uber.org/multierr v1.8.0 / indirect + go.uber.org/zap v1.21.0 / indirect + golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa / indirect + golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e / indirect + golang.org/x/exp/typeparams v0.0.0-20220827204233-334a2380cb91 / indirect + golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 / indirect + golang.org/x/net v0.0.0-20220726230323-06994584191e / indirect + golang.org/x/oauth2 v0.0.0-20220622183110-fd043fe589d2 / indirect + golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde / indirect + golang.org/x/sys v0.0.0-20220915200043-7b5979e65e41 / indirect + golang.org/x/term v0.0.0-20220722155259-a9ba230a4035 / indirect + golang.org/x/text v0.3.7 / indirect + golang.org/x/tools v0.1.12 / indirect + golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f / indirect + google.golang.org/api v0.93.0 / indirect + google.golang.org/appengine v1.6.7 / indirect + google.golang.org/protobuf v1.28.1 / indirect + gopkg.in/ini.v1 v1.67.0 / indirect + gopkg.in/yaml.v2 v2.4.0 / indirect + gopkg.in/yaml.v3 v3.0.1 / indirect + honnef.co/go/tools v0.3.3 / indirect + mvdan.cc/gofumpt v0.4.0 / indirect + mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed / indirect + mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b / indirect + mvdan.cc/unparam v0.0.0-20220706161116-678bad134442 / indirect + nhooyr.io/websocket v1.8.6 / indirect + sigs.k8s.io/yaml v1.3.0 / indirect +) + +replace ( + github.com/gogo/protobuf => github.com/regen-network/protobuf v1.3.3-alpha.regen.1 + github.com/zondax/hid => github.com/zondax/hid v0.9.0 +) +``` + +For building the application, a [Makefile](https://en.wikipedia.org/wiki/Makefile) is generally used. 
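As a rough sketch of what such a Makefile can look like, here is a minimal illustrative example; the binary name `appd`, the package path `./cmd/appd`, and the target layout are placeholders and are not taken from the actual Cosmos Hub Makefile:

```makefile
# Illustrative Makefile for a Cosmos SDK application (all names are placeholders).
BUILDDIR ?= build

# Ensure module dependencies are consistent before any build.
go.sum: go.mod
	go mod verify

# Build the application daemon (the node client entrypoint).
build: go.sum
	go build -mod=readonly -o $(BUILDDIR)/appd ./cmd/appd

# Install the daemon into $GOPATH/bin.
install: go.sum
	go install -mod=readonly ./cmd/appd

.PHONY: build install
```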
The Makefile primarily ensures that the dependencies declared in `go.mod` are resolved before building the two entrypoints to the application, [`Node Client`](#node-client) and [`Application Interface`](#application-interface). + +Here is an example of the [Cosmos Hub Makefile](https://github.com/cosmos/gaia/blob/main/Makefile). diff --git a/docs/sdk/v0.53/documentation/application-framework/app-go-di.mdx b/docs/sdk/v0.53/documentation/application-framework/app-go-di.mdx new file mode 100644 index 00000000..3c106788 --- /dev/null +++ b/docs/sdk/v0.53/documentation/application-framework/app-go-di.mdx @@ -0,0 +1,3320 @@ +--- +title: Overview of `app_di.go` +--- + +## Synopsis + +The Cosmos SDK allows much easier wiring of an `app.go` thanks to [runtime](/docs/sdk/v0.53/documentation/application-framework/runtime) and app wiring. +Learn more about the rationale of App Wiring in [ADR-057](/docs/sdk/next/documentation/legacy/adr-comprehensive). + + +**Pre-requisite Readings** + +- [What is `runtime`?](/docs/sdk/v0.53/documentation/application-framework/runtime) +- [Depinject documentation](/docs/sdk/v0.53/documentation/module-system/depinject) +- [Modules depinject-ready](/docs/sdk/v0.53/documentation/module-system/depinject) +- [ADR 057: App Wiring](/docs/sdk/next/documentation/legacy/adr-comprehensive) + + + +This section is intended to provide an overview of the `SimApp` `app_di.go` file with App Wiring. + +## `app_config.go` + +The `app_config.go` file is the single place to configure all module parameters. + +1.
Create the `AppConfig` variable: + + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes 
"cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account 
permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + { + Account: protocolpooltypes.ModuleName + }, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, + }, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + }, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + }, + / Uncomment if you want to set a 
custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default a module's authority is the governance module. This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + / NOTE: specifying a prefix is only necessary when using bech32 addresses + / If not specified, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default + Bech32PrefixValidator: "cosmosvaloper", + Bech32PrefixConsensus: "cosmosvalcons", + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + SkipAnteHandler: true, / Enable this to skip the default ante handlers and set custom ante handlers.
+ }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ + }), + }, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ + }), + }, + } + + / AppConfig is application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + }, + ), + }, + ), + ) + ) + ``` + + Where the `appConfig` combines the 
[runtime](/docs/sdk/v0.53/documentation/application-framework/runtime) configuration and the (extra) modules configuration. + + ```go expandable + /go:build !app_v1 + + package simapp + + import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper 
"github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + ) + + / DefaultNodeHome default home directories for the application daemon + var DefaultNodeHome string + + var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) + ) + + / SimApp extends an ABCI application, but with most of its parameters exported. + / They are exported for convenience in creating helper functions, as object + / capabilities aren't needed for testing. + type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager + } + + func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) + } + } + + / NewSimApp returns a reference to an initialized SimApp. 
+ func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) *SimApp { + var ( + app = &SimApp{ + } + + appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + + sdk.AccountI { + return authtypes.ProtoBaseAccount() + }, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + + address.Codec { + return <- custom address codec type -> + } + + / + / STAKING + / + / For providing a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well.
+ / + / func() + + runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> + } + / func() + + runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> + } + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) + } + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / + } + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + + voteExtHandler.SetHandlers(bApp) + } + + baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + + app.App = appBuilder.Build(db, traceStore, baseAppOptions...) + + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) + } + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ + }) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required for apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + } + + app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + + app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using an app-wiring-enabled module, this is not required. + / For instance, the upgrade module will automatically set the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. one that does not support app wiring), the module version map + / must be set manually as follows.
The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / + }) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) + } + + return app + } + + / setAnteHandler sets custom ante handlers. + / The "x/auth/tx" pre-defined ante handlers have been disabled in app_config. + func (app *SimApp) + + setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + }, + &app.CircuitKeeper, + }, + ) + if err != nil { + panic(err) + } + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) + } + + / LegacyAmino returns SimApp's amino codec. + / + / NOTE: This is solely to be used for testing purposes as it may be desirable + / for modules to register their own custom testing types. + func (app *SimApp) + + LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino + } + + / AppCodec returns SimApp's app codec. + / + / NOTE: This is solely to be used for testing purposes as it may be desirable + / for modules to register their own custom testing types. + func (app *SimApp) + + AppCodec() + + codec.Codec { + return app.appCodec + } + + / InterfaceRegistry returns SimApp's InterfaceRegistry.
+ func (app *SimApp) + + InterfaceRegistry() + + codectypes.InterfaceRegistry { + return app.interfaceRegistry + } + + / TxConfig returns SimApp's TxConfig + func (app *SimApp) + + TxConfig() + + client.TxConfig { + return app.txConfig + } + + / GetKey returns the KVStoreKey for the provided store key. + / + / NOTE: This is solely to be used for testing purposes. + func (app *SimApp) + + GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + + kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil + } + + return kvStoreKey + } + + func (app *SimApp) + + kvStoreKeys() + + map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv + } + + } + + return keys + } + + / SimulationManager implements the SimulationApp interface + func (app *SimApp) + + SimulationManager() *module.SimulationManager { + return app.sm + } + + / RegisterAPIRoutes registers all application module routes with the provided + / API server. + func (app *SimApp) + + RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) + } + } + + / GetMaccPerms returns a copy of the module account permissions + / + / NOTE: This is solely to be used for testing purposes. + func GetMaccPerms() + + map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions + } + + return dup + } + + / BlockedAddresses returns all the app's blocked account addresses. 
+ func BlockedAddresses() + + map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true + } + + } + + else { + for addr := range GetMaccPerms() { + result[addr] = true + } + + } + + return result + } + ``` + +2. Configure the `runtime` module: + + In this configuration, the order in which the modules are defined in PreBlockers, BeginBlockers, and EndBlockers is important. + They are listed in the order in which they should be executed by the module manager. + + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" +
"cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + 
minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + { + Account: protocolpooltypes.ModuleName + }, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, + }, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / 
CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + }, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + }, + / 
Uncomment if you want to set a custom migration order here.
+ / OrderMigrations: []string{},
+ }),
+ },
+ {
+ Name: authtypes.ModuleName,
+ Config: appconfig.WrapAny(&authmodulev1.Module{
+ Bech32Prefix: "cosmos",
+ ModuleAccountPermissions: moduleAccPerms,
+ / By default, a module's authority is the governance module. This is configurable with the following:
+ / Authority: "group", / A custom module authority can be set using a module name
+ / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address
+ }),
+ },
+ {
+ Name: vestingtypes.ModuleName,
+ Config: appconfig.WrapAny(&vestingmodulev1.Module{}),
+ },
+ {
+ Name: banktypes.ModuleName,
+ Config: appconfig.WrapAny(&bankmodulev1.Module{
+ BlockedModuleAccountsOverride: blockAccAddrs,
+ }),
+ },
+ {
+ Name: stakingtypes.ModuleName,
+ Config: appconfig.WrapAny(&stakingmodulev1.Module{
+ / NOTE: specifying a prefix is only necessary when using bech32 addresses
+ / If not specified, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default
+ Bech32PrefixValidator: "cosmosvaloper",
+ Bech32PrefixConsensus: "cosmosvalcons",
+ }),
+ },
+ {
+ Name: slashingtypes.ModuleName,
+ Config: appconfig.WrapAny(&slashingmodulev1.Module{}),
+ },
+ {
+ Name: "tx",
+ Config: appconfig.WrapAny(&txconfigv1.Config{
+ SkipAnteHandler: true, / Enable this to skip the default ante handlers and set custom ante handlers.
+ }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ + }), + }, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ + }), + }, + } + + / AppConfig is application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + }, + ), + }, + ), + ) + ) + ``` + +3. 
Wire the other modules:
+
+   Next to the `runtime` module, the other (depinject-enabled) modules are wired in the `AppConfig`. The configuration shown in the previous step already contains these entries: each module is declared as a `ModuleConfig` entry whose `Config` is wrapped with `appconfig.WrapAny`.
+
+   Note: the `tx` isn't a module, but a configuration. 
It should be wired in the `AppConfig` as well; see the `"tx"` entry with `txconfigv1.Config` in the configuration above.
+
+See the complete `app_config.go` file for `SimApp` 
[here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_config.go).
+
+### Alternative formats
+
+The example above shows how to create an `AppConfig` using Go. However, it is also possible to create an `AppConfig` using YAML or JSON.\
+The configuration can then be embedded with `go:embed` and read with [`appconfig.LoadYAML`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadYAML) or [`appconfig.LoadJSON`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadJSON) in `app_di.go`.
+
+```go
+//go:embed app_config.yaml
+var (
+	appConfigYaml []byte
+	appConfig     = appconfig.LoadYAML(appConfigYaml)
+)
+```
+
+A minimal `app_config.yaml` looks like this:
+
+```yaml expandable
+modules:
+  - name: runtime
+    config:
+      "@type": cosmos.app.runtime.v1alpha1.Module
+      app_name: SimApp
+      begin_blockers: [staking, auth, bank]
+      end_blockers: [bank, auth, staking]
+      init_genesis: [bank, auth, staking]
+  - name: auth
+    config:
+      "@type": cosmos.auth.module.v1.Module
+      bech32_prefix: cosmos
+  - name: bank
+    config:
+      "@type": cosmos.bank.module.v1.Module
+  - name: staking
+    config:
+      "@type": cosmos.staking.module.v1.Module
+  - name: tx
+    config:
+      "@type": cosmos.tx.config.v1.Config
+```
+
+A more complete example of `app.yaml` can be found [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/simapp/example_app.yaml).
+
+## `app_di.go`
+
+`app_di.go` is the place where `SimApp` is constructed. `depinject.Inject` automatically wires the app modules and keepers when provided with an application configuration (`AppConfig`). `SimApp` is constructed by calling the injected `*runtime.AppBuilder` with `appBuilder.Build(...)`.\
+In short, `depinject` and the [`runtime` package](/docs/sdk/v0.53/documentation/application-framework/runtime) abstract the wiring of the app, and the `AppBuilder` is the place where the app is constructed. [`runtime`](/docs/sdk/v0.53/documentation/application-framework/runtime) takes care of registering the codecs, KV stores, and subspaces, and of instantiating `baseapp`. 
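+The wiring just described can be sketched in a few lines before the full listing. This is a simplified, illustrative fragment of what a `NewSimApp`-style constructor does with `depinject.Inject` and `appBuilder.Build`; the names `logger`, `db`, `traceStore`, `appOpts`, and `baseAppOptions` stand in for the constructor's inputs, and the real code injects many more keepers:
+
+```go
+// Simplified sketch of depinject-based app construction (illustrative only).
+app := &SimApp{}
+
+var appBuilder *runtime.AppBuilder
+
+// depinject resolves the modules declared in AppConfig and fills in
+// the AppBuilder, the codecs, and every keeper requested here.
+if err := depinject.Inject(
+	depinject.Configs(AppConfig, depinject.Supply(logger, appOpts)),
+	&appBuilder,
+	&app.appCodec,
+	&app.txConfig,
+	&app.interfaceRegistry,
+	&app.AccountKeeper,
+	&app.BankKeeper,
+	// ... the remaining keepers
+); err != nil {
+	panic(err)
+}
+
+// The AppBuilder then instantiates baseapp, registers the stores and
+// codecs, and returns the runtime.App that SimApp embeds.
+app.App = appBuilder.Build(db, traceStore, baseAppOptions...)
+```
+
+The full `app_di.go` for `SimApp` follows.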
+ +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome 
default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. 
+ ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + + + When using `depinject.Inject`, the injected types must be pointers. 
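+The pointer requirement can be seen in isolation. Below is a minimal, self-contained sketch of the `depinject` pattern, using a hypothetical `Greeter` type and provider that are not part of `SimApp`: `depinject.Provide` registers the provider in a config, and `depinject.Inject` resolves each pointer output by calling the matching provider.
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"cosmossdk.io/depinject"
+)
+
+// Greeter is a hypothetical output type, used only for this illustration.
+type Greeter struct{ Prefix string }
+
+// ProvideGreeter is a provider function; depinject calls it whenever a
+// Greeter output is requested.
+func ProvideGreeter() Greeter { return Greeter{Prefix: "hello"} }
+
+func main() {
+	var g Greeter
+	// Outputs must be passed as pointers (&g), so depinject can write
+	// the resolved value into them.
+	if err := depinject.Inject(depinject.Provide(ProvideGreeter), &g); err != nil {
+		panic(err)
+	}
+	fmt.Println(g.Prefix)
+}
+```
+
+Passing `g` instead of `&g` would cause `depinject.Inject` to return an error, since it cannot write the resolved value into a non-pointer output.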
+ + +### Advanced Configuration + +In advanced cases, it is possible to inject extra (module) configuration in a way that is not (yet) supported by `AppConfig`.\ +In this case, use `depinject.Configs` for combining the extra configuration, and `AppConfig` and `depinject.Supply` for providing the extra configuration. +More information on how `depinject.Configs` and `depinject.Supply` function can be found in the [`depinject` documentation](https://pkg.go.dev/cosmossdk.io/depinject). + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper 
"github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. 
+ ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +### Registering non app wiring modules + +It is possible to combine app wiring / depinject enabled modules with non-app wiring modules. 
+To do so, use the `app.RegisterModules` method to register the modules on your app, as well as `app.RegisterStores` for registering the extra stores needed.
+
+```go expandable
+/ ....
+app.App = appBuilder.Build(db, traceStore, baseAppOptions...)
+
+/ register module manually
+app.RegisterStores(storetypes.NewKVStoreKey(example.ModuleName))
+
+app.ExampleKeeper = examplekeeper.NewKeeper(app.appCodec, app.AccountKeeper.AddressCodec(), runtime.NewKVStoreService(app.GetKey(example.ModuleName)), authtypes.NewModuleAddress(govtypes.ModuleName).String())
+ exampleAppModule := examplemodule.NewAppModule(app.ExampleKeeper)
+ if err := app.RegisterModules(&exampleAppModule); err != nil {
+ panic(err)
+}
+
+/ ....
+```
+
+
+ When using AutoCLI and combining app wiring and non-app wiring modules, the
+ AutoCLI options should be manually constructed instead of injected. Otherwise,
+ they will miss the non-depinject modules and not register their CLI commands.
+
+
+### Complete `app_di.go`
+
+
+ Note that in the complete `SimApp` `app_di.go` file, testing utilities are
+ also defined, but they could as well be defined in a separate file.
+ + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome 
default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom function that implements the minttypes.InflationCalculationFn + / interface. 
+ ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+
+	// register streaming services
+	if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil {
+		panic(err)
+	}
+
+	/**** Module Options ****/
+
+	// RegisterUpgradeHandlers is used for registering any on-chain upgrades.
+	app.RegisterUpgradeHandlers()
+
+	// add test gRPC service for testing gRPC queries in isolation
+	testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{})
+
+	// create the simulation manager and define the order of the modules for deterministic simulations
+	//
+	// NOTE: this is not required for apps that don't use the simulator for fuzz testing
+	// transactions
+	overrideModules := map[string]module.AppModuleSimulation{
+		authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil),
+	}
+	app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)
+	app.sm.RegisterStoreDecoders()
+
+	// A custom InitChainer can be set if extra pre-init-genesis logic is required.
+	// By default, when using app wiring enabled modules, this is not required.
+	// For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring.
+	// However, when registering a module manually (i.e. one that does not support app wiring), the module version map
+	// must be set manually as follows. The upgrade module will de-duplicate the module version map.
+	//
+	// app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) {
+	// 	app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap())
+	// 	return app.App.InitChainer(ctx, req)
+	// })
+
+	// set custom ante handler
+	app.setAnteHandler(app.txConfig)
+
+	if err := app.Load(loadLatest); err != nil {
+		panic(err)
+	}
+
+	return app
+}
+
+// setAnteHandler sets custom ante handlers.
+// "x/auth/tx" pre-defined ante handlers have been disabled in app_config.
+func (app *SimApp) setAnteHandler(txConfig client.TxConfig) {
+	anteHandler, err := NewAnteHandler(
+		HandlerOptions{
+			ante.HandlerOptions{
+				UnorderedNonceManager: app.AccountKeeper,
+				AccountKeeper:         app.AccountKeeper,
+				BankKeeper:            app.BankKeeper,
+				SignModeHandler:       txConfig.SignModeHandler(),
+				FeegrantKeeper:        app.FeeGrantKeeper,
+				SigGasConsumer:        ante.DefaultSigVerificationGasConsumer,
+			},
+			&app.CircuitKeeper,
+		},
+	)
+	if err != nil {
+		panic(err)
+	}
+
+	// Set the AnteHandler for the app
+	app.SetAnteHandler(anteHandler)
+}
+
+// LegacyAmino returns SimApp's amino codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) LegacyAmino() *codec.LegacyAmino {
+	return app.legacyAmino
+}
+
+// AppCodec returns SimApp's app codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) AppCodec() codec.Codec {
+	return app.appCodec
+}
+
+// InterfaceRegistry returns SimApp's InterfaceRegistry.
+func (app *SimApp) InterfaceRegistry() codectypes.InterfaceRegistry {
+	return app.interfaceRegistry
+}
+
+// TxConfig returns SimApp's TxConfig
+func (app *SimApp) TxConfig() client.TxConfig {
+	return app.txConfig
+}
+
+// GetKey returns the KVStoreKey for the provided store key.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetKey(storeKey string) *storetypes.KVStoreKey {
+	sk := app.UnsafeFindStoreKey(storeKey)
+	kvStoreKey, ok := sk.(*storetypes.KVStoreKey)
+	if !ok {
+		return nil
+	}
+	return kvStoreKey
+}
+
+func (app *SimApp) kvStoreKeys() map[string]*storetypes.KVStoreKey {
+	keys := make(map[string]*storetypes.KVStoreKey)
+	for _, k := range app.GetStoreKeys() {
+		if kv, ok := k.(*storetypes.KVStoreKey); ok {
+			keys[kv.Name()] = kv
+		}
+	}
+	return keys
+}
+
+// SimulationManager implements the SimulationApp interface
+func (app *SimApp) SimulationManager() *module.SimulationManager {
+	return app.sm
+}
+
+// RegisterAPIRoutes registers all application module routes with the provided
+// API server.
+func (app *SimApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+	app.App.RegisterAPIRoutes(apiSvr, apiConfig)
+	// register swagger API in app.go so that other applications can override easily
+	if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+		panic(err)
+	}
+}
+
+// GetMaccPerms returns a copy of the module account permissions
+//
+// NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms() map[string][]string {
+	dup := make(map[string][]string)
+	for _, perms := range moduleAccPerms {
+		dup[perms.Account] = perms.Permissions
+	}
+	return dup
+}
+
+// BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses() map[string]bool {
+	result := make(map[string]bool)
+	if len(blockAccAddrs) > 0 {
+		for _, addr := range blockAccAddrs {
+			result[addr] = true
+		}
+	} else {
+		for addr := range GetMaccPerms() {
+			result[addr] = true
+		}
+	}
+	return result
+}
+```
diff --git a/docs/sdk/v0.53/documentation/application-framework/app-go.mdx b/docs/sdk/v0.53/documentation/application-framework/app-go.mdx
new file mode 100644
index 00000000..f1691fa5
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/application-framework/app-go.mdx
@@ -0,0 +1,940 @@
+---
+title: Overview of `app.go`
+description: >-
+  This section is intended to provide an overview of the SimApp app.go file and
+  is still a work in progress. For now, please read the tutorials for a deep
+  dive on how to build a chain.
+---
+
+This section is intended to provide an overview of the `SimApp` `app.go` file and is still a work in progress.
+For now, please read the [tutorials](https://tutorials.cosmos.network) for a deep dive on how to build a chain.
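One idiom worth knowing before reading the full listing: the `baseAppOptions ...func(*baseapp.BaseApp)` parameter that `NewSimApp` forwards to the builder is the standard Go functional-options pattern. The sketch below illustrates that pattern in isolation, using stdlib-only stand-ins; `App`, `Option`, `WithMempool`, and `WithOptimisticExecution` are illustrative names, not SDK APIs:

```go
package main

import "fmt"

// App stands in for baseapp.BaseApp; Option mirrors func(*baseapp.BaseApp).
type App struct {
	mempool    string
	optimistic bool
}

type Option func(*App)

// WithMempool returns an option that swaps the mempool implementation.
func WithMempool(name string) Option {
	return func(a *App) { a.mempool = name }
}

// WithOptimisticExecution enables an optional feature flag.
func WithOptimisticExecution() Option {
	return func(a *App) { a.optimistic = true }
}

// NewApp applies the options in order, the way NewSimApp forwards
// baseAppOptions to the app builder.
func NewApp(opts ...Option) *App {
	app := &App{mempool: "default"}
	for _, o := range opts {
		o(app)
	}
	return app
}

func main() {
	app := NewApp(WithMempool("sender-nonce"), WithOptimisticExecution())
	fmt.Println(app.mempool, app.optimistic)
}
```

Because the options are just functions, callers (tests, the server command, forks of the chain) can override mempools or proposal handlers without the constructor's signature ever changing — which is exactly how the vote-extension and optimistic-execution options are attached in the listing below.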
+ +## Complete `app.go` + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "os" + "path/filepath" + "cosmossdk.io/log" + "cosmossdk.io/x/tx/signing" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + "cosmossdk.io/core/appmodule" + "github.com/cosmos/cosmos-sdk/codec/address" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + "github.com/cosmos/cosmos-sdk/x/crisis" + crisiskeeper "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + 
groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/params" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramskeeper "github.com/cosmos/cosmos-sdk/x/params/keeper" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + paramproposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ stdAccAddressCodec is a temporary address codec that we will use until we +/ can populate it with the correct bech32 prefixes without depending on the global. 
+type stdAccAddressCodec struct{}
+
+func (g stdAccAddressCodec) StringToBytes(text string) ([]byte, error) {
+	if text == "" {
+		return nil, nil
+	}
+	return sdk.AccAddressFromBech32(text)
+}
+
+func (g stdAccAddressCodec) BytesToString(bz []byte) (string, error) {
+	if bz == nil {
+		return "", nil
+	}
+	return sdk.AccAddress(bz).String(), nil
+}
+
+// stdValAddressCodec is a temporary address codec that we will use until we
+// can populate it with the correct bech32 prefixes without depending on the global.
+type stdValAddressCodec struct{}
+
+func (g stdValAddressCodec) StringToBytes(text string) ([]byte, error) {
+	return sdk.ValAddressFromBech32(text)
+}
+
+func (g stdValAddressCodec) BytesToString(bz []byte) (string, error) {
+	return sdk.ValAddress(bz).String(), nil
+}
+
+// SimApp extends an ABCI application, but with most of its parameters exported.
+// They are exported for convenience in creating helper functions, as object
+// capabilities aren't needed for testing.
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + tkeys map[string]*storetypes.TransientStoreKey + + / keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + CrisisKeeper *crisiskeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + ParamsKeeper paramskeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + NFTKeeper nftkeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + userHomeDir, err := os.UserHomeDir() + if err != nil { + panic(err) +} + +DefaultNodeHome = filepath.Join(userHomeDir, ".simapp") +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
+ +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, crisistypes.StoreKey, + minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, + govtypes.StoreKey, paramstypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey, feegrant.StoreKey, + evidencetypes.StoreKey, circuittypes.StoreKey, + authzkeeper.StoreKey, nftkeeper.StoreKey, group.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + tkeys := storetypes.NewTransientStoreKeys(paramstypes.TStoreKey) + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, + tkeys: tkeys, +} + +app.ParamsKeeper = initParamsKeeper(appCodec, legacyAmino, keys[paramstypes.StoreKey], tkeys[paramstypes.TStoreKey]) + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), runtime.EventService{ +}) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper(appCodec, runtime.NewKVStoreService(keys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, maccPerms, sdk.Bech32MainPrefix, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + +app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, keys[stakingtypes.StoreKey], app.AccountKeeper, 
app.BankKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.MintKeeper = mintkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.DistrKeeper = distrkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[distrtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, legacyAmino, runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), app.StakingKeeper, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + invCheckPeriod := cast.ToUint(appOpts.Get(server.FlagInvCheckPeriod)) + +app.CrisisKeeper = crisiskeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[crisistypes.StoreKey]), invCheckPeriod, + app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[feegrant.StoreKey]), app.AccountKeeper) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks(app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks()), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper(appCodec, runtime.NewKVStoreService(keys[circuittypes.StoreKey]), authtypes.NewModuleAddress(govtypes.ModuleName).String(), app.AccountKeeper.AddressCodec()) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper(runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), appCodec, app.MsgServiceRouter(), app.AccountKeeper) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + 
groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper(keys[group.StoreKey], appCodec, app.MsgServiceRouter(), app.AccountKeeper, groupConfig) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper(skipUpgradeHeights, runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), appCodec, homePath, app.BaseApp, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler). + AddRoute(paramproposal.RouterKey, params.NewParamChangeProposalHandler(app.ParamsKeeper)). 
+ AddRoute(upgradetypes.RouterKey, upgrade.NewSoftwareUpgradeProposalHandler(app.UpgradeKeeper)) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, + app.StakingKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper(runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), appCodec, app.AccountKeeper, app.BankKeeper) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), app.StakingKeeper, app.SlashingKeeper, app.AccountKeeper.AddressCodec(), runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + /**** Module Options ****/ + + / NOTE: we may consider parsing `appOpts` inside module constructors. For the moment + / we prefer to be more strict in what arguments the modules expect. + skipGenesisInvariants := cast.ToBool(appOpts.Get(crisis.FlagSkipGenesisInvariants)) + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, app.GetSubspace(banktypes.ModuleName)), + crisis.NewAppModule(app.CrisisKeeper, skipGenesisInvariants, app.GetSubspace(crisistypes.ModuleName)), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(govtypes.ModuleName)), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, app.GetSubspace(minttypes.ModuleName)), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(slashingtypes.ModuleName), app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, app.GetSubspace(distrtypes.ModuleName)), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, app.GetSubspace(stakingtypes.ModuleName)), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + params.NewAppModule(app.ParamsKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in 
charge of setting up basic,
+	// non-dependent module elements, such as codec registration and genesis verification.
+	// By default it is composed of all the modules from the module manager.
+	// Additionally, app module basics can be overwritten by passing them as argument.
+	app.BasicModuleManager = module.NewBasicManagerFromManager(
+		app.ModuleManager,
+		map[string]module.AppModuleBasic{
+			genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+			govtypes.ModuleName: gov.NewAppModuleBasic(
+				[]govclient.ProposalHandler{
+					paramsclient.ProposalHandler,
+				},
+			),
+		})
+	app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino)
+	app.BasicModuleManager.RegisterInterfaces(interfaceRegistry)
+
+	// During begin block slashing happens after distr.BeginBlocker so that
+	// there is nothing left over in the validator fee pool, so as to keep the
+	// CanWithdrawInvariant invariant.
+	// NOTE: staking module is required if HistoricalEntries param > 0
+	app.ModuleManager.SetOrderBeginBlockers(
+		upgradetypes.ModuleName,
+		minttypes.ModuleName,
+		distrtypes.ModuleName,
+		slashingtypes.ModuleName,
+		evidencetypes.ModuleName,
+		stakingtypes.ModuleName,
+		genutiltypes.ModuleName,
+		authz.ModuleName,
+	)
+	app.ModuleManager.SetOrderEndBlockers(
+		crisistypes.ModuleName,
+		govtypes.ModuleName,
+		stakingtypes.ModuleName,
+		genutiltypes.ModuleName,
+		feegrant.ModuleName,
+		group.ModuleName,
+	)
+
+	// NOTE: The genutils module must occur after staking so that pools are
+	// properly initialized with tokens from genesis accounts.
+	// NOTE: The genutils module must also occur after auth so that it can access the params from auth.
+ genesisModuleOrder := []string{ + authtypes.ModuleName, banktypes.ModuleName, + distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, + minttypes.ModuleName, crisistypes.ModuleName, genutiltypes.ModuleName, evidencetypes.ModuleName, authz.ModuleName, + feegrant.ModuleName, nft.ModuleName, group.ModuleName, paramstypes.ModuleName, upgradetypes.ModuleName, + vestingtypes.ModuleName, consensusparamtypes.ModuleName, circuittypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(genesisModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.ModuleManager.RegisterInvariants(app.CrisisKeeper) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + err := app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+	app.RegisterUpgradeHandlers()
+
+	autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules))
+
+	reflectionSvc, err := runtimeservices.NewReflectionService()
+	if err != nil {
+		panic(err)
+	}
+	reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc)
+
+	// add test gRPC service for testing gRPC queries in isolation
+	testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{})
+
+	// create the simulation manager and define the order of the modules for deterministic simulations
+	//
+	// NOTE: this is not required for apps that don't use the simulator for fuzz testing
+	// transactions
+	overrideModules := map[string]module.AppModuleSimulation{
+		authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, app.GetSubspace(authtypes.ModuleName)),
+	}
+	app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)
+	app.sm.RegisterStoreDecoders()
+
+	// initialize stores
+	app.MountKVStores(keys)
+	app.MountTransientStores(tkeys)
+
+	// initialize BaseApp
+	app.SetInitChainer(app.InitChainer)
+	app.SetBeginBlocker(app.BeginBlocker)
+	app.SetEndBlocker(app.EndBlocker)
+	app.setAnteHandler(txConfig)
+
+	// In v0.46, the SDK introduces _postHandlers_. PostHandlers are like
+	// antehandlers, but are run _after_ the `runMsgs` execution. They are also
+	// defined as a chain, and have the same signature as antehandlers.
+	//
+	// In baseapp, postHandlers are run in the same store branch as `runMsgs`,
+	// meaning that both `runMsgs` and `postHandler` state will be committed if
+	// both are successful, and both will be reverted if any of the two fails.
+	//
+	// The SDK exposes a default postHandlers chain, which comprises only
+	// one decorator: the Transaction Tips decorator. However, some chains do
+	// not need it by default, so feel free to comment the next line if you do
+	// not need tips.
+	// To read more about tips:
+	// https://docs.cosmos.network/main/core/tips.html
+	//
+	// Please note that changing any of the anteHandler or postHandler chain is
+	// likely to be a state-machine breaking change, which needs a coordinated
+	// upgrade.
+	app.setPostHandler()
+
+	// At startup, after all modules have been registered, check that all proto
+	// annotations are correct.
+	protoFiles, err := proto.MergedRegistry()
+	if err != nil {
+		panic(err)
+	}
+	err = msgservice.ValidateProtoAnnotations(protoFiles)
+	if err != nil {
+		// Once we switch to using protoreflect-based antehandlers, we might
+		// want to panic here instead of logging a warning.
+		fmt.Fprintln(os.Stderr, err.Error())
+	}
+
+	if loadLatest {
+		if err := app.LoadLatestVersion(); err != nil {
+			panic(fmt.Errorf("error loading last version: %w", err))
+		}
+	}
+
+	return app
+}
+
+func (app *SimApp) setAnteHandler(txConfig client.TxConfig) {
+	anteHandler, err := NewAnteHandler(
+		HandlerOptions{
+			ante.HandlerOptions{
+				AccountKeeper:   app.AccountKeeper,
+				BankKeeper:      app.BankKeeper,
+				SignModeHandler: txConfig.SignModeHandler(),
+				FeegrantKeeper:  app.FeeGrantKeeper,
+				SigGasConsumer:  ante.DefaultSigVerificationGasConsumer,
+			},
+			&app.CircuitKeeper,
+		},
+	)
+	if err != nil {
+		panic(err)
+	}
+
+	// Set the AnteHandler for the app
+	app.SetAnteHandler(anteHandler)
+}
+
+func (app *SimApp) setPostHandler() {
+	postHandler, err := posthandler.NewPostHandler(
+		posthandler.HandlerOptions{},
+	)
+	if err != nil {
+		panic(err)
+	}
+	app.SetPostHandler(postHandler)
+}
+
+// Name returns the name of the App
+func (app *SimApp) Name() string {
+	return app.BaseApp.Name()
+}
+
+// BeginBlocker application updates every begin block
+func (app *SimApp) BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) {
+	return app.ModuleManager.BeginBlock(ctx)
+}
+
+// EndBlocker application updates every end block
+func (app *SimApp) EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) {
+	return app.ModuleManager.EndBlock(ctx)
+}
+
+func (a *SimApp) Configurator() module.Configurator {
+	return a.configurator
+}
+
+// InitChainer application update at chain initialization
+func (app *SimApp) InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) {
+	var genesisState GenesisState
+	if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil {
+		panic(err)
+	}
+	app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap())
+	return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState)
+}
+
+// LoadHeight loads a particular height
+func (app *SimApp) LoadHeight(height int64) error {
+	return app.LoadVersion(height)
+}
+
+// LegacyAmino returns SimApp's amino codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) LegacyAmino() *codec.LegacyAmino {
+	return app.legacyAmino
+}
+
+// AppCodec returns SimApp's app codec.
+//
+// NOTE: This is solely to be used for testing purposes as it may be desirable
+// for modules to register their own custom testing types.
+func (app *SimApp) AppCodec() codec.Codec {
+	return app.appCodec
+}
+
+// InterfaceRegistry returns SimApp's InterfaceRegistry
+func (app *SimApp) InterfaceRegistry() types.InterfaceRegistry {
+	return app.interfaceRegistry
+}
+
+// TxConfig returns SimApp's TxConfig
+func (app *SimApp) TxConfig() client.TxConfig {
+	return app.txConfig
+}
+
+// AutoCliOpts returns the autocli options for the app.
+func (app *SimApp) AutoCliOpts() autocli.AppOptions {
+	modules := make(map[string]appmodule.AppModule)
+	for _, m := range app.ModuleManager.Modules {
+		if moduleWithName, ok := m.(module.HasName); ok {
+			moduleName := moduleWithName.Name()
+			if appModule, ok := moduleWithName.(appmodule.AppModule); ok {
+				modules[moduleName] = appModule
+			}
+		}
+	}
+
+	return autocli.AppOptions{
+		Modules:      modules,
+		AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()),
+	}
+}
+
+// DefaultGenesis returns a default genesis from the registered AppModuleBasic's.
+func (a *SimApp) DefaultGenesis() map[string]json.RawMessage {
+	return a.BasicModuleManager.DefaultGenesis(a.appCodec)
+}
+
+// GetKey returns the KVStoreKey for the provided store key.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetKey(storeKey string) *storetypes.KVStoreKey {
+	return app.keys[storeKey]
+}
+
+// GetStoreKeys returns all the stored store keys.
+func (app *SimApp) GetStoreKeys() []storetypes.StoreKey {
+	keys := make([]storetypes.StoreKey, 0, len(app.keys))
+	for _, key := range app.keys {
+		keys = append(keys, key)
+	}
+	return keys
+}
+
+// GetSubspace returns a param subspace for a given module name.
+//
+// NOTE: This is solely to be used for testing purposes.
+func (app *SimApp) GetSubspace(moduleName string) paramstypes.Subspace {
+	subspace, _ := app.ParamsKeeper.GetSubspace(moduleName)
+	return subspace
+}
+
+// SimulationManager implements the SimulationApp interface
+func (app *SimApp) SimulationManager() *module.SimulationManager {
+	return app.sm
+}
+
+// RegisterAPIRoutes registers all application module routes with the provided
+// API server.
+func (app *SimApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+	clientCtx := apiSvr.ClientCtx
+	// Register new tx routes from grpc-gateway.
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dupMaccPerms := make(map[string][]string) + for k, v := range maccPerms { + dupMaccPerms[k] = v +} + +return dupMaccPerms +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} + +/ initParamsKeeper init params keeper and its subspaces +func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) + +paramskeeper.Keeper { + paramsKeeper := paramskeeper.NewKeeper(appCodec, legacyAmino, key, tkey) + +paramsKeeper.Subspace(authtypes.ModuleName) + +paramsKeeper.Subspace(banktypes.ModuleName) + +paramsKeeper.Subspace(stakingtypes.ModuleName) + +paramsKeeper.Subspace(minttypes.ModuleName) + +paramsKeeper.Subspace(distrtypes.ModuleName) + +paramsKeeper.Subspace(slashingtypes.ModuleName) + +paramsKeeper.Subspace(govtypes.ModuleName) + +paramsKeeper.Subspace(crisistypes.ModuleName) + +return paramsKeeper +} +``` diff --git a/docs/sdk/v0.53/documentation/application-framework/app-mempool.mdx b/docs/sdk/v0.53/documentation/application-framework/app-mempool.mdx new file mode 100644 index 00000000..faf74c0a --- /dev/null +++ b/docs/sdk/v0.53/documentation/application-framework/app-mempool.mdx @@ -0,0 +1,94 @@ +--- +title: Application Mempool +--- + +## Synopsis + +This section describes how the app-side mempool can be used and replaced. + +Since `v0.47` the application has its own mempool to allow much more granular +block building than previous versions. This change was enabled by +[ABCI 1.0](https://github.com/cometbft/cometbft/blob/v0.37.0/spec/abci). +Notably it introduces the `PrepareProposal` and `ProcessProposal` steps of ABCI++. 
+ + +**Pre-requisite Readings** + +- [BaseApp](/docs/sdk/v0.53/documentation/application-framework/baseapp) +- [ABCI](/docs/sdk/v0.53/documentation/core-concepts/abci) + + + +## Mempool + +There are countless designs that an application developer can write for a mempool; the SDK opted to provide only simple mempool implementations. +Namely, the SDK provides the following mempools: + +- [No-op Mempool](#no-op-mempool) +- [Sender Nonce Mempool](#sender-nonce-mempool) +- [Priority Nonce Mempool](#priority-nonce-mempool) + +By default, the SDK uses the [No-op Mempool](#no-op-mempool), but it can be replaced by the application developer in [`app.go`](/docs/sdk/v0.53/documentation/application-framework/app-go-di): + +```go +nonceMempool := mempool.NewSenderNonceMempool() + mempoolOpt := baseapp.SetMempool(nonceMempool) + +baseAppOptions = append(baseAppOptions, mempoolOpt) +``` + +### No-op Mempool + +A no-op mempool is a mempool where transactions are completely discarded and ignored when BaseApp interacts with the mempool. +When this mempool is used, it is assumed that an application will rely on CometBFT's transaction ordering defined in `RequestPrepareProposal`, +which is FIFO-ordered by default. + +> Note: If a NoOp mempool is used, PrepareProposal and ProcessProposal both should be aware of this as +> PrepareProposal could include transactions that could fail verification in ProcessProposal. + +### Sender Nonce Mempool + +The sender nonce mempool keeps transactions from an account sorted by nonce in order to avoid nonce-ordering issues. +It works by storing transactions in a list sorted by transaction nonce. When the proposer asks for transactions to be included in a block, it randomly selects a sender and gets the first transaction in the list. It repeats this until the mempool is empty or the block is full.
+ +It is configurable with the following parameters: + +#### MaxTxs + +It is an integer value that sets the mempool in one of three modes: _bounded_, _unbounded_, or _disabled_. + +- **negative**: Disabled; the mempool does not insert new transactions and returns early. +- **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`. +- **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches the `maxTx` value. + +#### Seed + +Sets the seed for the random number generator used to select transactions from the mempool. + +### Priority Nonce Mempool + +The [priority nonce mempool](https://github.com/cosmos/cosmos-sdk/blob/main/types/mempool/priority_nonce_spec.md) is a mempool implementation that stores txs in a set partially ordered by two dimensions: + +- priority +- sender-nonce (sequence number) + +Internally it uses one priority-ordered [skip list](https://pkg.go.dev/github.com/huandu/skiplist) and one skip list per sender ordered by sender-nonce (sequence number). When there are multiple txs from the same sender, they are not always comparable by priority to other senders' txs and must be partially ordered by both sender-nonce and priority. + +It is configurable with the following parameters: + +#### MaxTxs + +It is an integer value that sets the mempool in one of three modes: _bounded_, _unbounded_, or _disabled_. + +- **negative**: Disabled; the mempool does not insert new transactions and returns early. +- **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`. +- **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches the `maxTx` value. + +#### Callback + +The priority nonce mempool provides mempool options allowing the application to set callbacks. + +- **OnRead**: Sets a callback to be called when a transaction is read from the mempool.
+- **TxReplacement**: Sets a callback to be called when a duplicate transaction nonce is detected during mempool insert. The application can define a transaction replacement rule based on tx priority or certain transaction fields. + +More information on the SDK mempool implementation can be found in the [godocs](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/mempool). diff --git a/docs/sdk/v0.53/documentation/application-framework/baseapp.mdx b/docs/sdk/v0.53/documentation/application-framework/baseapp.mdx new file mode 100644 index 00000000..a81406be --- /dev/null +++ b/docs/sdk/v0.53/documentation/application-framework/baseapp.mdx @@ -0,0 +1,11308 @@ +--- +title: BaseApp +--- + +## Synopsis + +This document describes `BaseApp`, the abstraction that implements the core functionalities of a Cosmos SDK application. + + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy) +- [Lifecycle of a Cosmos SDK transaction](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle) + + + +## Introduction + +`BaseApp` is a base type that implements the core of a Cosmos SDK application, namely: + +- The [Application Blockchain Interface](#main-abci-messages), for the state-machine to communicate with the underlying consensus engine (e.g. CometBFT). +- [Service Routers](#service-routers), to route messages and queries to the appropriate module. +- Different [states](#state-updates), as the state-machine can have different volatile states updated based on the ABCI message received. + +The goal of `BaseApp` is to provide the fundamental layer of a Cosmos SDK application +that developers can easily extend to build their own custom application.
Usually, +developers will create a custom type for their application, like so: + +```go +type App struct { + / reference to a BaseApp + *baseapp.BaseApp + + / list of application store keys + + / list of application keepers + + / module manager +} +``` + +Extending the application with `BaseApp` gives the former access to all of `BaseApp`'s methods. +This allows developers to compose their custom application with the modules they want, while not +having to concern themselves with the hard work of implementing the ABCI, the service routers and state +management logic. + +## Type Definition + +The `BaseApp` type holds many important parameters for any Cosmos SDK based application. + +```go expandable +package baseapp + +import ( + + "context" + "fmt" + "maps" + "math" + "slices" + "strconv" + "sync" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp/oe" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type ( + execMode uint8 + + / StoreLoader defines a customizable function to control how we load the + / CommitMultiStore from disk. 
This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. + StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. 
+ storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. 
+ / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. + interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. 
It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling.This is experimental and must be enabled + / by developers. + optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool +} + +/ NewBaseApp returns a reference to an initialized BaseApp. 
It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. +func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. 
+ app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) +} + +else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + logger.Warn("error validating merged proto registry annotations", "error", err) +} + +} + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. +func (app *BaseApp) + +GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. 
+ app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. +func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. 
+func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. +func (app *BaseApp) + +CommitMultiStore() + +storetypes.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ ChainID returns the chainID of the app. +func (app *BaseApp) + +ChainID() + +string { + return app.chainID +} + +/ AnteHandler returns the AnteHandler of the app. +func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Mempool returns the Mempool of the app. 
+func (app *BaseApp) + +Mempool() + +mempool.Mempool { + return app.mempool +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. +func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + if app.cms == nil { + return errors.New("commit multi-store must not be nil") +} + emptyHeader := cmtproto.Header{ + ChainID: app.chainID +} + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + +app.Seal() + +return app.cms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. 
a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. +func (app *BaseApp) + +setState(mode execMode, h cmtproto.Header) { + ms := app.cms.CacheMultiStore() + headerInfo := header.Info{ + Height: h.Height, + Time: h.Time, + ChainID: h.ChainID, + AppHash: h.AppHash, +} + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, h, false, app.logger). + WithStreamingManager(app.streamingManager). + WithHeaderInfo(headerInfo), +} + switch mode { + case execModeCheck: + baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices)) + +app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ SetCircuitBreaker sets the circuit breaker for the BaseApp. +/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. +func (app *BaseApp) + +SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") +} + +app.msgServiceRouter.SetCircuit(cb) +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. +func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) + +cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ +} + +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + / This could happen while migrating from v0.45/v0.46 to v0.50, we should + / allow it to happen so during preblock the upgrade plan can be executed + / and the consensus params set for the first time in the new format. 
+ app.logger.Error("failed to get consensus params", "err", err) + +return cmtproto.ConsensusParams{ +} + +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the BaseApp's param +/ store. +/ +/ NOTE: We're explicitly not storing the CometBFT app_version in the param store. +/ It's stored instead in the x/upgrade store, with its own bump logic. +func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + return errors.New("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. 
+ expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. + expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. +func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return err +} + +} + +return nil +} + +func (app *BaseApp) + +getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState +} +} + +func (app *BaseApp) + +getBlockGasMeter(ctx sdk.Context) + +storetypes.GasMeter { + if app.disableBlockGasMeter { + return noopGasMeter{ +} + +} + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) +} + +return storetypes.NewInfiniteGasMeter() +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode execMode, txBytes []byte) + +sdk.Context { + app.mu.Lock() + +defer app.mu.Unlock() + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.Context(). + WithTxBytes(txBytes). 
+ WithGasMeter(storetypes.NewInfiniteGasMeter()) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithIsSigverifyTx(app.sigverifyTx) + +ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() + +ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate)) +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. +func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]any{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(storetypes.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +func (app *BaseApp) + +preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) { + var events []abci.Event + if app.preBlocker != nil { + ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager()) + +rsp, err := app.preBlocker(ctx, req) + if err != nil { + return nil, err +} + / rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed + / write the consensus parameters in store to context + if rsp.ConsensusParamsChanged { + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + / GasMeter must be set after we get a context with updated consensus params. 
+ gasMeter := app.getBlockGasMeter(ctx) + +ctx = ctx.WithBlockGasMeter(gasMeter) + +app.finalizeBlockState.SetContext(ctx) +} + +events = ctx.EventManager().ABCIEvents() +} + +return events, nil +} + +func (app *BaseApp) + +beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.Context()) + if err != nil { + return resp, err +} + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" +}, + ) +} + +resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) +} + +return resp, nil +} + +func (app *BaseApp) + +deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + +return resp +} + +resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +} + +return resp +} + +/ endBlock is an application-defined function that is called after transactions +/ have been processed in FinalizeBlock. 
+func (app *BaseApp) + +endBlock(_ context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.Context()) + if err != nil { + return endblock, err +} + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" +}, + ) +} + +eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + +endblock = eb +} + +return endblock, nil +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +/ both txbytes and the decoded tx are passed to runTx to avoid the state machine encoding the tx and decoding the transaction twice +/ passing the decoded tx to runTX is optional, it will be decoded if the tx is nil +func (app *BaseApp) + +runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil + ctx.Logger().Error("panic recovered in runTx", "err", err) +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). 
+ if mode == execModeFinalize { + defer consumeBlockGas() +} + + / if the transaction is not decoded, decode it here + if tx == nil { + tx, err = app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + GasUsed: 0, + GasWanted: 0 +}, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error()) +} + +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + for _, msg := range msgs { + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return sdk.GasInfo{ +}, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. 
+ ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + if mode == execModeReCheck { + / if the ante handler fails on recheck, we want to remove the tx from the mempool + if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil { + return gInfo, nil, anteEvents, errors.Join(err, mempoolErr) +} + +} + +return gInfo, nil, nil, err +} + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + switch mode { + case execModeCheck: + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err +} + case execModeFinalize: + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) +} + + / Run optional postHandlers (should run regardless of the execution result). + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. 
+ postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if errPostHandler != nil { + if err == nil { + / when the msg was handled successfully, return the post handler error only + return gInfo, nil, anteEvents, errPostHandler +} + / otherwise append to the msg error so that we keep the original error code for better user experience + return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler) +} + + / we don't want runTx to panic if runMsgs has failed earlier + if result == nil { + result = &sdk.Result{ +} + +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if err == nil { + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i) +} + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) +} + +events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + + +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
+    return proto.Marshal(&sdk.TxMsgData{
+    MsgResponses: msgResponses
+})
+}
+
+func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) {
+    eventMsgName := sdk.MsgTypeURL(msg)
+    msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))
+
+    / we set the signer attribute as the sender
+    signers, err := cdc.GetMsgV2Signers(msgV2)
+    if err != nil {
+        return nil, err
+}
+    if len(signers) > 0 && signers[0] != nil {
+        addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
+        if err != nil {
+            return nil, err
+}
+
+msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
+}
+
+    / verify that events have no module attribute set
+    if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
+        if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
+            msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
+}
+
+}
+
+return sdk.Events{
+    msgEvent
+}.AppendEvents(events), nil
+}
+
+/ PrepareProposalVerifyTx performs transaction verification when a proposer is
+/ creating a block proposal during PrepareProposal. Any state committed to the
+/ PrepareProposal state internally will be discarded. An error will be
+/ returned if the transaction cannot be encoded. A nil error will be returned if
+/ the transaction is valid, otherwise an error will be returned.
+func (app *BaseApp)
+
+PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
+    bz, err := app.txEncoder(tx)
+    if err != nil {
+        return nil, err
+}
+
+    _, _, _, err = app.runTx(execModePrepareProposal, bz, tx)
+    if err != nil {
+        return nil, err
+}
+
+return bz, nil
+}
+
+/ ProcessProposalVerifyTx performs transaction verification when receiving a
+/ block proposal during ProcessProposal.
Any state committed to the
+/ ProcessProposal state internally will be discarded. An error will be
+/ returned if the transaction cannot be decoded. A nil error will be returned if
+/ the transaction is valid, otherwise an error will be returned.
+func (app *BaseApp)
+
+ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
+    tx, err := app.txDecoder(txBz)
+    if err != nil {
+        return nil, err
+}
+
+    _, _, _, err = app.runTx(execModeProcessProposal, txBz, tx)
+    if err != nil {
+        return nil, err
+}
+
+return tx, nil
+}
+
+func (app *BaseApp)
+
+TxDecode(txBytes []byte) (sdk.Tx, error) {
+    return app.txDecoder(txBytes)
+}
+
+func (app *BaseApp)
+
+TxEncode(tx sdk.Tx) ([]byte, error) {
+    return app.txEncoder(tx)
+}
+
+func (app *BaseApp)
+
+StreamingManager()
+
+storetypes.StreamingManager {
+    return app.streamingManager
+}
+
+/ Close is called in start cmd to gracefully cleanup resources.
+func (app *BaseApp)
+
+Close()
+
+error {
+    var errs []error
+
+    / Close app.db (opened by cosmos-sdk/server/start.go call to openDB)
+    if app.db != nil {
+        app.logger.Info("Closing application.db")
+        if err := app.db.Close(); err != nil {
+            errs = append(errs, err)
+}
+
+}
+
+    / Close app.snapshotManager
+    / - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate)
+    / - which calls cosmos-sdk/server/util.go/GetSnapshotStore
+    / - which is passed to baseapp/options.go/SetSnapshot
+    / - to set app.snapshotManager = snapshots.NewManager
+    if app.snapshotManager != nil {
+        app.logger.Info("Closing snapshots/metadata.db")
+        if err := app.snapshotManager.Close(); err != nil {
+            errs = append(errs, err)
+}
+
+}
+
+return errors.Join(errs...)
+}
+
+/ GetBaseApp returns the pointer to itself.
+func (app *BaseApp)
+
+GetBaseApp() *BaseApp {
+    return app
+}
+```
+
+Let us go through the most important components.
+
+> **Note**: Not all parameters are described, only the most important ones. Refer to the
+> type definition for the full list.
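The setter methods and `Seal` logic in the listing above follow Go's functional-options pattern: `NewBaseApp` accepts `...func(*BaseApp)` values and applies them in order, after which the app is sealed against further modification. Here is a minimal, self-contained sketch of that pattern; the `miniApp` type and its option names are illustrative stand-ins, not the SDK's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// miniApp is a toy stand-in for BaseApp, used only to illustrate the
// option/seal pattern. It is not the SDK's actual type.
type miniApp struct {
	minGasPrices string
	haltHeight   uint64
	sealed       bool
}

// Option mirrors the `func(*BaseApp)` options accepted by NewBaseApp.
type Option func(*miniApp)

func SetMinGasPrices(prices string) Option {
	return func(a *miniApp) { a.minGasPrices = prices }
}

func SetHaltHeight(height uint64) Option {
	return func(a *miniApp) { a.haltHeight = height }
}

// newMiniApp applies the options in order, like the BaseApp constructor.
func newMiniApp(opts ...Option) *miniApp {
	app := &miniApp{}
	for _, opt := range opts {
		opt(app)
	}
	return app
}

// Init seals the app. BaseApp.Init additionally validates the commit
// multi-store and panics (rather than erroring) if already sealed.
func (a *miniApp) Init() error {
	if a.sealed {
		return errors.New("already sealed")
	}
	a.sealed = true
	return nil
}

func main() {
	app := newMiniApp(SetMinGasPrices("0.025uatom"), SetHaltHeight(100))
	if err := app.Init(); err != nil {
		panic(err)
	}
	fmt.Printf("minGasPrices=%s haltHeight=%d sealed=%v\n",
		app.minGasPrices, app.haltHeight, app.sealed)
}
```

Sealing is what makes the option list safe: all configuration happens before `Init`, so a running app's parameters cannot be mutated mid-block.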
+
+First, the important parameters that are initialized during the bootstrapping of the application:
+
+- [`CommitMultiStore`](/docs/sdk/v0.53/documentation/state-storage/store#commitmultistore): This is the main store of the application,
+  which holds the canonical state that is committed at the [end of each block](#commit). This store
+  is **not** cached, meaning it is not used to update the application's volatile (un-committed) states.
+  The `CommitMultiStore` is a multi-store, meaning a store of stores. Each module of the application
+  uses one or multiple `KVStores` in the multi-store to persist its subset of the state.
+- Database: The `db` is used by the `CommitMultiStore` to handle data persistence.
+- [`Msg` Service Router](#msg-service-router): The `msgServiceRouter` facilitates the routing of `sdk.Msg` requests to the appropriate
+  module `Msg` service for processing. Here a `sdk.Msg` refers to the transaction component that needs to be
+  processed by a service in order to update the application state, and not to an ABCI message, which implements
+  the interface between the application and the underlying consensus engine.
+- [gRPC Query Router](#grpc-query-router): The `grpcQueryRouter` facilitates the routing of gRPC queries to the
+  appropriate module to be processed. These queries are not ABCI messages themselves, but they
+  are relayed to the relevant module's gRPC `Query` service.
+- [`TxDecoder`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types#TxDecoder): It is used to decode
+  raw transaction bytes relayed by the underlying CometBFT engine.
+- [`AnteHandler`](#antehandler): This handler is used to handle signature verification, fee payment,
+  and other pre-message execution checks when a transaction is received. It's executed during
+  [`CheckTx/RecheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock).
+- [`InitChainer`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#initchainer), [`PreBlocker`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#preblocker), [`BeginBlocker` and `EndBlocker`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#beginblocker-and-endblocker): These are
+  the functions executed when the application receives the `InitChain` and `FinalizeBlock`
+  ABCI messages from the underlying CometBFT engine.
+
+Then, parameters used to define [volatile states](#state-updates) (i.e. cached states):
+
+- `checkState`: This state is updated during [`CheckTx`](#checktx), and reset on [`Commit`](#commit).
+- `finalizeBlockState`: This state is updated during [`FinalizeBlock`](#finalizeblock), set to `nil` on
+  [`Commit`](#commit), and re-initialized on the next `FinalizeBlock`.
+- `processProposalState`: This state is updated during [`ProcessProposal`](#process-proposal).
+- `prepareProposalState`: This state is updated during [`PrepareProposal`](#prepare-proposal).
+
+Finally, a few more important parameters:
+
+- `voteInfos`: This parameter carries the list of validators whose precommit is missing, either
+  because they did not vote or because the proposer did not include their vote. This information is
+  carried by the [Context](/docs/sdk/v0.53/documentation/application-framework/context) and can be used by the application for various things like
+  punishing absent validators.
+- `minGasPrices`: This parameter defines the minimum gas prices accepted by the node. This is a
+  **local** parameter, meaning each full-node can set a different `minGasPrices`. It is used in the
+  `AnteHandler` during [`CheckTx`](#checktx), mainly as a spam protection mechanism. The transaction
+  enters the [mempool](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#mempool-methods)
+  only if the gas prices of the transaction are greater than one of the minimum gas prices in
+  `minGasPrices` (e.g.
if `minGasPrices == 1uatom,1photon`, the `gas-price` of the transaction must be
+  greater than `1uatom` OR `1photon`).
+- `appVersion`: Version of the application. It is set in the
+  [application's constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+
+## Constructor
+
+```go
+func NewBaseApp(
+  name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp),
+) *BaseApp {
+
+  / ...
+}
+```
+
+The `BaseApp` constructor function is pretty straightforward. The only thing worth noting is the
+possibility to provide additional [`options`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/options.go)
+to the `BaseApp`, which will execute them in order. The `options` are generally `setter` functions
+for important parameters, like `SetPruning()` to set pruning options or `SetMinGasPrices()` to set
+the node's `min-gas-prices`.
+
+Naturally, developers can add additional `options` based on their application's needs.
+
+## State Updates
+
+The `BaseApp` maintains four primary volatile states and a root or main state. The main state
+is the canonical state of the application, and the volatile states, `checkState`, `prepareProposalState`, `processProposalState` and `finalizeBlockState`,
+are used to handle state transitions in between updates of the main state, which are made during [`Commit`](#commit).
+
+Internally, there is only a single `CommitMultiStore`, which we refer to as the main or root state.
+From this root state, we derive four volatile states by using a mechanism called _store branching_ (performed by the `CacheWrap` function).
+The types can be illustrated as follows:
+
+![Types](/docs/sdk/images/learn/advanced/baseapp_state.png)
+
+### InitChain State Updates
+
+During `InitChain`, the four volatile states, `checkState`, `prepareProposalState`, `processProposalState`
+and `finalizeBlockState` are set by branching the root `CommitMultiStore`.
Any subsequent reads and writes happen
+on branched versions of the `CommitMultiStore`.
+To avoid unnecessary roundtrips to the main state, all reads to the branched store are cached.
+
+![InitChain](/docs/sdk/images/learn/advanced/baseapp_state-initchain.png)
+
+### CheckTx State Updates
+
+During `CheckTx`, the `checkState`, which is based off of the last committed state from the root
+store, is used for any reads and writes. Here we only execute the `AnteHandler` and verify that a service router
+exists for every message in the transaction. Note, when we execute the `AnteHandler`, we branch
+the already branched `checkState`.
+This has the side effect that if the `AnteHandler` fails, the state transitions won't be reflected in the `checkState`
+-- i.e. `checkState` is only updated on success.
+
+![CheckTx](/docs/sdk/images/learn/advanced/baseapp_state-checktx.png)
+
+### PrepareProposal State Updates
+
+During `PrepareProposal`, the `prepareProposalState` is set by branching the root `CommitMultiStore`.
+The `prepareProposalState` is used for any reads and writes that occur during the `PrepareProposal` phase.
+The function uses the `Select()` method of the mempool to iterate over the transactions. `runTx` is then called,
+which encodes and validates each transaction, and from there the `AnteHandler` is executed.
+If successful, valid transactions are returned, inclusive of the events, tags, and data generated
+during the execution of the proposal.
+The described behavior is that of the default handler; applications have the flexibility to define their own
+[custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers).
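The selection loop described above (iterate the mempool via `Select()`, verify each transaction, skip invalid ones, and stop once the proposal's byte budget is exhausted) can be sketched with toy types. The names below are illustrative, not the SDK's actual interfaces; real verification goes through `PrepareProposalVerifyTx`, which encodes the transaction and runs it, `AnteHandler` included, against `prepareProposalState`:

```go
package main

import "fmt"

// tx is a toy transaction: raw bytes plus a flag standing in for whether
// it would pass ante checks. Not the SDK's actual tx type.
type tx struct {
	bz    []byte
	valid bool
}

type mempool struct{ txs []tx }

// Select mimics mempool.Select, yielding txs in priority order.
func (m *mempool) Select() []tx { return m.txs }

// verifyTx stands in for PrepareProposalVerifyTx (encode + runTx + AnteHandler).
func verifyTx(t tx) ([]byte, error) {
	if !t.valid {
		return nil, fmt.Errorf("tx failed ante checks")
	}
	return t.bz, nil
}

// prepareProposal keeps verified txs until the byte budget is exhausted,
// leaving invalid txs out of the proposal, as the default handler does.
func prepareProposal(mp *mempool, maxBytes int) [][]byte {
	var proposal [][]byte
	total := 0
	for _, t := range mp.Select() {
		bz, err := verifyTx(t)
		if err != nil {
			continue // invalid txs are simply not included
		}
		if total+len(bz) > maxBytes {
			break
		}
		proposal = append(proposal, bz)
		total += len(bz)
	}
	return proposal
}

func main() {
	mp := &mempool{txs: []tx{
		{bz: []byte("tx1"), valid: true},
		{bz: []byte("bad"), valid: false},
		{bz: []byte("tx2"), valid: true},
	}}
	fmt.Println(len(prepareProposal(mp, 64)))
}
```

The byte-budget check mirrors the `MaxTxBytes` limit a proposer must respect: once the next transaction would overflow the proposal, iteration stops even if more valid transactions remain in the mempool.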
+
+![PrepareProposal](/docs/sdk/images/learn/advanced/baseapp_state-prepareproposal.png)
+
+### ProcessProposal State Updates
+
+During `ProcessProposal`, the `processProposalState` is set based off of the last committed state
+from the root store and is used to process a signed proposal received from a validator.
+In this state, `runTx` is called and the `AnteHandler` is executed, and the context used in this state is built with information
+from the header and the main state, including the minimum gas prices, which are also set.
+Again we want to highlight that the described behavior is that of the default handler and applications have the flexibility to define their own
+[custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers).
+
+![ProcessProposal](/docs/sdk/images/learn/advanced/baseapp_state-processproposal.png)
+
+### FinalizeBlock State Updates
+
+During `FinalizeBlock`, the `finalizeBlockState` is set for use during transaction execution and `EndBlock`. The
+`finalizeBlockState` is based off of the last committed state from the root store and is branched.
+Note, the `finalizeBlockState` is set to `nil` on [`Commit`](#commit).
+
+The state flow for transaction execution is nearly identical to `CheckTx`, except state transitions occur on
+the `finalizeBlockState` and messages in a transaction are executed. Similarly to `CheckTx`, state transitions
+occur on a doubly branched state -- `finalizeBlockState`. Successful message execution results in
+writes being committed to `finalizeBlockState`. Note, if message execution fails, state transitions from
+the AnteHandler are persisted.
+
+### Commit State Updates
+
+During `Commit`, all the state transitions that occurred in the `finalizeBlockState` are finally written to
+the root `CommitMultiStore`, which in turn is committed to disk and results in a new application
+root hash. These state transitions are now considered final.
Finally, the `checkState` is set to the +newly committed state and `finalizeBlockState` is set to `nil` to be reset on `FinalizeBlock`. + +![Commit](/docs/sdk/images/learn/advanced/baseapp_state-commit.png) + +## ParamStore + +During `InitChain`, the `RequestInitChain` provides `ConsensusParams` which contains parameters +related to block execution such as maximum gas and size in addition to evidence parameters. If these +parameters are non-nil, they are set in the BaseApp's `ParamStore`. Behind the scenes, the `ParamStore` +is managed by an `x/consensus_params` module. This allows the parameters to be tweaked via +on-chain governance. + +## Service Routers + +When messages and queries are received by the application, they must be routed to the appropriate module in order to be processed. Routing is done via `BaseApp`, which holds a `msgServiceRouter` for messages, and a `grpcQueryRouter` for queries. + +### `Msg` Service Router + +[`sdk.Msg`s](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages) need to be routed after they are extracted from transactions, which are sent from the underlying CometBFT engine via the [`CheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock) ABCI messages. To do so, `BaseApp` holds a `msgServiceRouter` which maps fully-qualified service methods (`string`, defined in each module's Protobuf `Msg` service) to the appropriate module's `MsgServer` implementation. + +The [default `msgServiceRouter` included in `BaseApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/msg_service_router.go) is stateless. However, some applications may want to make use of more stateful routing mechanisms such as allowing governance to disable certain routes or point them to new modules for upgrade purposes. For this reason, the `sdk.Context` is also passed into each [route handler inside `msgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/msg_service_router.go#L35-L36). 
A stateless router that does not need this can simply ignore the `ctx`.

The application's `msgServiceRouter` is initialized with all the routes using the application's [module manager](/docs/sdk/v0.53/documentation/module-system/module-manager#manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).

### gRPC Query Router

Similar to `sdk.Msg`s, [`queries`](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#queries) need to be routed to the appropriate module's [`Query` service](/docs/sdk/v0.53/documentation/module-system/query-services). To do so, `BaseApp` holds a `grpcQueryRouter`, which maps modules' fully-qualified service methods (`string`, defined in their Protobuf `Query` service) to their `QueryServer` implementation. The `grpcQueryRouter` is called during the initial stages of query processing, either when a gRPC query is sent directly to the gRPC endpoint, or via the [`Query` ABCI message](#query) on the CometBFT RPC endpoint.

Just like the `msgServiceRouter`, the `grpcQueryRouter` is initialized with all the query routes using the application's [module manager](/docs/sdk/v0.53/documentation/module-system/module-manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#app-constructor).

## Main ABCI 2.0 Messages

The [Application-Blockchain Interface](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md) (ABCI) is a generic interface that connects a state-machine with a consensus engine to form a functional full-node.
It can be wrapped in any language, and needs to be implemented by each application-specific blockchain built on top of an ABCI-compatible consensus engine like CometBFT.

The consensus engine handles two main tasks:

- The networking logic, which mainly consists of gossiping block parts, transactions, and consensus votes.
- The consensus logic, which results in the deterministic ordering of transactions in the form of blocks.

It is **not** the role of the consensus engine to define the state or the validity of transactions. Generally, transactions are handled by the consensus engine in the form of `[]byte`, and relayed to the application via the ABCI to be decoded and processed. At key moments in the networking and consensus processes (e.g. beginning of a block, commit of a block, reception of an unconfirmed transaction, ...), the consensus engine emits ABCI messages for the state-machine to act on.

Developers building on top of the Cosmos SDK need not implement the ABCI themselves, as `BaseApp` comes with a built-in implementation of the interface. Let us go through the main ABCI messages that `BaseApp` implements:

- [`Prepare Proposal`](#prepare-proposal)
- [`Process Proposal`](#process-proposal)
- [`CheckTx`](#checktx)
- [`FinalizeBlock`](#finalizeblock)
- [`ExtendVote`](#extendvote)
- [`VerifyVoteExtension`](#verifyvoteextension)

### Prepare Proposal

The `PrepareProposal` function is part of the new methods introduced in the Application Blockchain Interface (ABCI++) in CometBFT and is an important part of the application's overall governance system. In the Cosmos SDK, it allows the application to have more fine-grained control over the transactions that are processed, and ensures that only valid transactions are committed to the blockchain.

Here is how the `PrepareProposal` function can be implemented:

1. Extract the `sdk.Msg`s from the transaction.
2. Perform _stateful_ checks by calling `Validate()` on each of the `sdk.Msg`s.
This is done after _stateless_ checks as _stateful_ checks are more computationally expensive. If `Validate()` fails, `PrepareProposal` returns before running further checks, which saves resources.
3. Perform any additional checks that are specific to the application, such as checking account balances, or ensuring that certain conditions are met before a transaction is proposed.
4. Return the updated transactions to be processed by the consensus engine.

Note that, unlike `CheckTx()`, `PrepareProposal` processes `sdk.Msg`s, so it can directly update the state. However, unlike `FinalizeBlock()`, it does not commit the state updates. Exercise caution when using `PrepareProposal`, as incorrect coding could affect the overall liveness of the network.

`PrepareProposal` complements the `ProcessProposal` method, which is executed after it. The combination of these two methods makes it possible to guarantee that no invalid transactions are ever committed. Furthermore, such a setup can give rise to other interesting use cases such as oracles, threshold decryption, and more.

`PrepareProposal` returns a response to the underlying consensus engine of type [`abci.ResponsePrepareProposal`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#prepareproposal). The response contains:

- `Txs ([][]byte)`: The (potentially modified) list of transactions to include in the proposed block.

### Process Proposal

The `ProcessProposal` function is called by the BaseApp as part of the ABCI message flow, and is executed before validators prevote on a proposed block.
The purpose of this function is to give more control to the application for block validation, allowing it to check all transactions in a proposed block before the validator sends the prevote for the block. It allows a validator to perform application-dependent work on a proposed block, enabling features such as immediate block execution, and allows the application to reject invalid blocks.

The `ProcessProposal` function performs several key tasks, including:

1. Validating the proposed block by checking all transactions in it.
2. Checking the proposed block against the current state of the application, to ensure that it is valid and that it can be executed.
3. Updating the volatile `processProposalState` based on the proposal, if it is valid and passes all checks; this state is never committed.
4. Returning a response to CometBFT indicating the result of the proposal processing.

`ProcessProposal` is an important part of the application's overall governance system. It is used to manage the network's parameters and other key aspects of its operation. It also ensures that the coherence property is adhered to, i.e. all honest validators must accept a proposal by an honest proposer.

`ProcessProposal` complements the `PrepareProposal` method, which enables the application to have more fine-grained transaction control by allowing it to reorder, drop, delay, modify, and even add transactions as it sees fit. The combination of these two methods makes it possible to guarantee that no invalid transactions are ever committed. Furthermore, such a setup can give rise to other interesting use cases such as oracles, threshold decryption, and more.

CometBFT calls `ProcessProposal` when it receives a proposal and the CometBFT algorithm has not locked on a value. The application cannot modify the proposal at this point but can reject it if it is invalid. If that is the case, CometBFT will prevote `nil` on the proposal, which has strong liveness implications for CometBFT.
As a general rule, the application SHOULD accept a prepared proposal passed via `ProcessProposal`, even if a part of the proposal is invalid (e.g., an invalid transaction); the application can ignore the invalid part of the prepared proposal at block execution time.

However, developers must exercise greater caution when using these methods. Incorrectly coding these methods could affect liveness, as CometBFT would be unable to receive the 2/3 valid precommits needed to finalize a block.

`ProcessProposal` returns a response to the underlying consensus engine of type [`abci.ResponseProcessProposal`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#processproposal). The response contains:

- `Status (ProposalStatus)`: `ACCEPT` if the proposal should be accepted, `REJECT` otherwise.

### CheckTx

`CheckTx` is sent by the underlying consensus engine when a new unconfirmed (i.e. not yet included in a valid block)
transaction is received by a full-node. The role of `CheckTx` is to guard the full-node's mempool
(where unconfirmed transactions are stored until they are included in a block) from spam transactions.
Unconfirmed transactions are relayed to peers only if they pass `CheckTx`.

`CheckTx()` can perform both _stateful_ and _stateless_ checks, but developers should strive to
make the checks **lightweight**, because gas fees are not charged for the resources (CPU, data load, ...) used during `CheckTx`.

In the Cosmos SDK, after [decoding transactions](/docs/sdk/v0.53/documentation/protocol-development/encoding), `CheckTx()` is implemented
to do the following checks:

1. Extract the `sdk.Msg`s from the transaction.
2. **Optionally** perform _stateless_ checks by calling `ValidateBasic()` on each of the `sdk.Msg`s.
This is done
first, as _stateless_ checks are less computationally expensive than _stateful_ checks. If
`ValidateBasic()` fails, `CheckTx` returns before running _stateful_ checks, which saves resources.
This check is still performed for messages that have not yet migrated to the new message validation mechanism defined in [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) and still have a `ValidateBasic()` method.
3. Perform non-module related _stateful_ checks on the [account](/docs/sdk/v0.53/documentation/protocol-development/accounts). This step is mainly about checking
that the `sdk.Msg` signatures are valid, that enough fees are provided, and that the sending account
has enough funds to pay for said fees. Note that no precise [`gas`](/docs/sdk/v0.53/documentation/protocol-development/gas-fees) counting occurs here,
as `sdk.Msg`s are not processed. Usually, the [`AnteHandler`](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#antehandler) will check that the `gas` provided
with the transaction exceeds a minimum reference gas amount based on the raw transaction size,
in order to avoid spam with transactions that provide 0 gas.

`CheckTx` does **not** process `sdk.Msg`s - they only need to be processed when the canonical state needs to be updated, which happens during `FinalizeBlock`.

Steps 2. and 3. are performed by the [`AnteHandler`](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#antehandler) in the [`RunTx()`](#runtx-antehandler-and-runmsgs)
function, which `CheckTx()` calls with the `runTxModeCheck` mode. During each step of `CheckTx()`, a
special [volatile state](#state-updates) called `checkState` is updated. This state is used to keep
track of the temporary changes triggered by the `CheckTx()` calls of each transaction without modifying
the [main canonical state](#main-state).
For example, when a transaction goes through `CheckTx()`, the
transaction's fees are deducted from the sender's account in `checkState`. If a second transaction is
received from the same account before the first is processed, and the account has consumed all its
funds in `checkState` during the first transaction, the second transaction will fail `CheckTx()` and
be rejected. In any case, the sender's account will not actually pay the fees until the transaction
is included in a block, because `checkState` never gets committed to the main state. The
`checkState` is reset to the latest state of the main state each time a block gets [committed](#commit).

`CheckTx` returns a response to the underlying consensus engine of type [`abci.ResponseCheckTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#checktx).
The response contains:

- `Code (uint32)`: Response Code. `0` if successful.
- `Data ([]byte)`: Result bytes, if any.
- `Log (string)`: The output of the application's logger. May be non-deterministic.
- `Info (string)`: Additional information. May be non-deterministic.
- `GasWanted (int64)`: Amount of gas requested for the transaction. It is provided by users when they generate the transaction.
- `GasUsed (int64)`: Amount of gas consumed by the transaction. During `CheckTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction.
Next is an example:

```go expandable
package ante

import (
	"slices"
	"time"

	errorsmod "cosmossdk.io/errors"
	storetypes "cosmossdk.io/store/types"

	"github.com/cosmos/cosmos-sdk/codec/legacy"
	"github.com/cosmos/cosmos-sdk/crypto/keys/multisig"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
	"github.com/cosmos/cosmos-sdk/types/tx/signing"
	"github.com/cosmos/cosmos-sdk/x/auth/migrations/legacytx"
	authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"
)

// ValidateBasicDecorator will call tx.ValidateBasic and return any non-nil error.
// If ValidateBasic passes, decorator calls next AnteHandler in chain. Note,
// ValidateBasicDecorator decorator will not get executed on ReCheckTx since it
// is not dependent on application state.
type ValidateBasicDecorator struct{}

func NewValidateBasicDecorator() ValidateBasicDecorator {
	return ValidateBasicDecorator{}
}

func (vbd ValidateBasicDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) {
	// no need to validate basic on recheck tx, call next antehandler
	if ctx.IsReCheckTx() {
		return next(ctx, tx, simulate)
	}
	if validateBasic, ok := tx.(sdk.HasValidateBasic); ok {
		if err := validateBasic.ValidateBasic(); err != nil {
			return ctx, err
		}
	}

	return next(ctx, tx, simulate)
}

// ValidateMemoDecorator will validate memo given the parameters passed in.
// If memo is too large decorator returns with error, otherwise call next AnteHandler.
// CONTRACT: Tx must implement TxWithMemo interface
type ValidateMemoDecorator struct {
	ak AccountKeeper
}

func NewValidateMemoDecorator(ak AccountKeeper) ValidateMemoDecorator {
	return ValidateMemoDecorator{ak: ak}
}

func (vmd ValidateMemoDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) {
	memoTx, ok := tx.(sdk.TxWithMemo)
	if !ok {
		return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid transaction type")
	}
	memoLength := len(memoTx.GetMemo())
	if memoLength > 0 {
		params := vmd.ak.GetParams(ctx)
		if uint64(memoLength) > params.MaxMemoCharacters {
			return ctx, errorsmod.Wrapf(sdkerrors.ErrMemoTooLarge,
				"maximum number of characters is %d but received %d characters",
				params.MaxMemoCharacters, memoLength,
			)
		}
	}

	return next(ctx, tx, simulate)
}

// ConsumeTxSizeGasDecorator will take in parameters and consume gas proportional
// to the size of tx before calling next AnteHandler. Note, the gas costs will be
// slightly over estimated due to the fact that any given signing account may need
// to be retrieved from state.
//
// CONTRACT: If simulate=true, then signatures must either be completely filled
// in or empty.
// CONTRACT: To use this decorator, signatures of transaction must be represented
// as legacytx.StdSignature otherwise simulate mode will incorrectly estimate gas cost.
type ConsumeTxSizeGasDecorator struct {
	ak AccountKeeper
}

func NewConsumeGasForTxSizeDecorator(ak AccountKeeper) ConsumeTxSizeGasDecorator {
	return ConsumeTxSizeGasDecorator{ak: ak}
}

func (cgts ConsumeTxSizeGasDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) {
	sigTx, ok := tx.(authsigning.SigVerifiableTx)
	if !ok {
		return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid tx type")
	}
	params := cgts.ak.GetParams(ctx)

	ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*storetypes.Gas(len(ctx.TxBytes())), "txSize")

	// simulate gas cost for signatures in simulate mode
	if simulate {
		// in simulate mode, each element should be a nil signature
		sigs, err := sigTx.GetSignaturesV2()
		if err != nil {
			return ctx, err
		}
		n := len(sigs)

		signers, err := sigTx.GetSigners()
		if err != nil {
			return sdk.Context{}, err
		}
		for i, signer := range signers {
			// if signature is already filled in, no need to simulate gas cost
			if i < n && !isIncompleteSignature(sigs[i].Data) {
				continue
			}

			var pubkey cryptotypes.PubKey
			acc := cgts.ak.GetAccount(ctx, signer)

			// use placeholder simSecp256k1Pubkey if sig is nil
			if acc == nil || acc.GetPubKey() == nil {
				pubkey = simSecp256k1Pubkey
			} else {
				pubkey = acc.GetPubKey()
			}

			// use stdsignature to mock the size of a full signature
			simSig := legacytx.StdSignature{ //nolint:staticcheck // SA1019: legacytx.StdSignature is deprecated
				Signature: simSecp256k1Sig[:],
				PubKey:    pubkey,
			}
			sigBz := legacy.Cdc.MustMarshal(simSig)
			cost := storetypes.Gas(len(sigBz) + 6)

			// If the pubkey is a multi-signature pubkey, then we estimate for the maximum
			// number of signers.
			if _, ok := pubkey.(*multisig.LegacyAminoPubKey); ok {
				cost *= params.TxSigLimit
			}

			ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*cost, "txSize")
		}
	}

	return next(ctx, tx, simulate)
}

// isIncompleteSignature tests whether SignatureData is fully filled in for simulation purposes
func isIncompleteSignature(data signing.SignatureData) bool {
	if data == nil {
		return true
	}
	switch data := data.(type) {
	case *signing.SingleSignatureData:
		return len(data.Signature) == 0
	case *signing.MultiSignatureData:
		if len(data.Signatures) == 0 {
			return true
		}
		if slices.ContainsFunc(data.Signatures, isIncompleteSignature) {
			return true
		}
	}

	return false
}

type (
	// TxTimeoutHeightDecorator defines an AnteHandler decorator that checks for a
	// tx height timeout.
	TxTimeoutHeightDecorator struct{}

	// TxWithTimeoutHeight defines the interface a tx must implement in order for
	// TxHeightTimeoutDecorator to process the tx.
	TxWithTimeoutHeight interface {
		sdk.Tx

		GetTimeoutHeight() uint64
		GetTimeoutTimeStamp() time.Time
	}
)

// NewTxTimeoutHeightDecorator returns an AnteHandler decorator that checks for a
// tx height timeout.
func NewTxTimeoutHeightDecorator() TxTimeoutHeightDecorator {
	return TxTimeoutHeightDecorator{}
}

// AnteHandle implements an AnteHandler decorator for the TxHeightTimeoutDecorator
// type where the current block height is checked against the tx's height timeout.
// If a height timeout is provided (non-zero) and is less than the current block
// height, then an error is returned.
func (txh TxTimeoutHeightDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) {
	timeoutTx, ok := tx.(TxWithTimeoutHeight)
	if !ok {
		return ctx, errorsmod.Wrap(sdkerrors.ErrTxDecode, "expected tx to implement TxWithTimeoutHeight")
	}
	timeoutHeight := timeoutTx.GetTimeoutHeight()
	if timeoutHeight > 0 && uint64(ctx.BlockHeight()) > timeoutHeight {
		return ctx, errorsmod.Wrapf(
			sdkerrors.ErrTxTimeoutHeight, "block height: %d, timeout height: %d", ctx.BlockHeight(), timeoutHeight,
		)
	}
	timeoutTimestamp := timeoutTx.GetTimeoutTimeStamp()
	blockTime := ctx.BlockHeader().Time
	if !timeoutTimestamp.IsZero() && timeoutTimestamp.Unix() != 0 && timeoutTimestamp.Before(blockTime) {
		return ctx, errorsmod.Wrapf(
			sdkerrors.ErrTxTimeout, "block time: %s, timeout timestamp: %s", blockTime, timeoutTimestamp.String(),
		)
	}

	return next(ctx, tx, simulate)
}
```

- `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (e.g. by account). See [`event`s](/docs/sdk/v0.53/api-reference/events-streaming/events) for more.
- `Codespace (string)`: Namespace for the Code.

#### RecheckTx

After `Commit`, `CheckTx` is run again on all transactions that remain in the node's local mempool,
excluding the transactions that are included in the block. To prevent the mempool from rechecking all transactions
every time a block is committed, the configuration option `mempool.recheck=false` can be set. As of
Tendermint v0.32.1, an additional `Type` parameter is made available to the `CheckTx` function that
indicates whether an incoming transaction is new (`CheckTxType_New`), or a recheck (`CheckTxType_Recheck`).
This allows certain checks like signature verification to be skipped during `CheckTxType_Recheck`.
## RunTx, AnteHandler, RunMsgs, PostHandler

### RunTx

`RunTx` is called from `CheckTx`/`FinalizeBlock` to handle the transaction, with `execModeCheck` or `execModeFinalize` as parameter to differentiate between the two modes of execution. Note that when `RunTx` receives a transaction, it has already been decoded.

The first thing `RunTx` does upon being called is to retrieve the `context`'s `CacheMultiStore` by calling the `getContextForTx()` function with the appropriate mode (either `execModeCheck` or `execModeFinalize`). This `CacheMultiStore` is a branch of the main store, with cache functionality (for query requests), instantiated during `FinalizeBlock` for transaction execution and during the `Commit` of the previous block for `CheckTx`. After that, two `defer func()` calls are made for [`gas`](/docs/sdk/v0.53/documentation/protocol-development/gas-fees) management. They are executed when `runTx` returns, making sure `gas` is actually consumed and returning errors, if any.

After that, `RunTx()` calls `ValidateBasic()`, when available and for backward compatibility, on each `sdk.Msg` in the `Tx`, which runs preliminary _stateless_ validity checks. If any `sdk.Msg` fails `ValidateBasic()`, `RunTx()` returns with an error.

Then, the [`anteHandler`](#antehandler) of the application is run (if it exists). In preparation for this step, both the `checkState`/`finalizeBlockState`'s `context` and the `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function.
+ +```go expandable +package baseapp + +import ( + + "context" + "fmt" + "maps" + "math" + "slices" + "strconv" + "sync" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp/oe" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type ( + execMode uint8 + + / StoreLoader defines a customizable function to control how we load the + / CommitMultiStore from disk. This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. 
+ StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / 
ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. 
+ interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. 
If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling.This is experimental and must be enabled + / by developers. + optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. 
+func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) +} + +else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. 
+ logger.Warn("error validating merged proto registry annotations", "error", err) +} + +} + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. +func (app *BaseApp) + +GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. 
+func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. +func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. 
+func (app *BaseApp) + +CommitMultiStore() + +storetypes.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ applications use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ ChainID returns the chainID of the app. +func (app *BaseApp) + +ChainID() + +string { + return app.chainID +} + +/ AnteHandler returns the AnteHandler of the app. +func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Mempool returns the Mempool of the app. +func (app *BaseApp) + +Mempool() + +mempool.Mempool { + return app.mempool +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. 
+func (app *BaseApp) + +Init() + +error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") +} + if app.cms == nil { + return errors.New("commit multi-store must not be nil") +} + emptyHeader := cmtproto.Header{ + ChainID: app.chainID +} + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + +app.Seal() + +return app.cms.GetPruning().Validate() +} + +func (app *BaseApp) + +setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices +} + +func (app *BaseApp) + +setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight +} + +func (app *BaseApp) + +setHaltTime(haltTime uint64) { + app.haltTime = haltTime +} + +func (app *BaseApp) + +setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks +} + +func (app *BaseApp) + +setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache +} + +func (app *BaseApp) + +setTrace(trace bool) { + app.trace = trace +} + +func (app *BaseApp) + +setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ +}) + for _, e := range ie { + app.indexEvents[e] = struct{ +}{ +} + +} +} + +/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. +func (app *BaseApp) + +Seal() { + app.sealed = true +} + +/ IsSealed returns true if the BaseApp is sealed and false otherwise. +func (app *BaseApp) + +IsSealed() + +bool { + return app.sealed +} + +/ setState sets the BaseApp's state for the corresponding mode with a branched +/ multi-store (i.e. a CacheMultiStore) + +and a new Context with the same +/ multi-store branch, and provided header. +func (app *BaseApp) + +setState(mode execMode, h cmtproto.Header) { + ms := app.cms.CacheMultiStore() + headerInfo := header.Info{ + Height: h.Height, + Time: h.Time, + ChainID: h.ChainID, + AppHash: h.AppHash, +} + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, h, false, app.logger). 
+ WithStreamingManager(app.streamingManager). + WithHeaderInfo(headerInfo), +} + switch mode { + case execModeCheck: + baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices)) + +app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) +} +} + +/ SetCircuitBreaker sets the circuit breaker for the BaseApp. +/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. +func (app *BaseApp) + +SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") +} + +app.msgServiceRouter.SetCircuit(cb) +} + +/ GetConsensusParams returns the current consensus parameters from the BaseApp's +/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned. +func (app *BaseApp) + +GetConsensusParams(ctx sdk.Context) + +cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ +} + +} + +cp, err := app.paramStore.Get(ctx) + if err != nil { + / This could happen while migrating from v0.45/v0.46 to v0.50, we should + / allow it to happen so during preblock the upgrade plan can be executed + / and the consensus params set for the first time in the new format. + app.logger.Error("failed to get consensus params", "err", err) + +return cmtproto.ConsensusParams{ +} + +} + +return cp +} + +/ StoreConsensusParams sets the consensus parameters to the BaseApp's param +/ store. +/ +/ NOTE: We're explicitly not storing the CometBFT app_version in the param store. +/ It's stored instead in the x/upgrade store, with its own bump logic. 
+func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + return errors.New("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. 
+ expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. +func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return err +} + +} + +return nil +} + +func (app *BaseApp) + +getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState +} +} + +func (app *BaseApp) + +getBlockGasMeter(ctx sdk.Context) + +storetypes.GasMeter { + if app.disableBlockGasMeter { + return noopGasMeter{ +} + +} + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) +} + +return storetypes.NewInfiniteGasMeter() +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode execMode, txBytes []byte) + +sdk.Context { + app.mu.Lock() + +defer app.mu.Unlock() + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.Context(). + WithTxBytes(txBytes). 
+ WithGasMeter(storetypes.NewInfiniteGasMeter()) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithIsSigverifyTx(app.sigverifyTx) + +ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() + +ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate)) +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. +func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]any{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(storetypes.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +func (app *BaseApp) + +preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) { + var events []abci.Event + if app.preBlocker != nil { + ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager()) + +rsp, err := app.preBlocker(ctx, req) + if err != nil { + return nil, err +} + / rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed + / write the consensus parameters in store to context + if rsp.ConsensusParamsChanged { + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + / GasMeter must be set after we get a context with updated consensus params. 
+ gasMeter := app.getBlockGasMeter(ctx) + +ctx = ctx.WithBlockGasMeter(gasMeter) + +app.finalizeBlockState.SetContext(ctx) +} + +events = ctx.EventManager().ABCIEvents() +} + +return events, nil +} + +func (app *BaseApp) + +beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.Context()) + if err != nil { + return resp, err +} + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" +}, + ) +} + +resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) +} + +return resp, nil +} + +func (app *BaseApp) + +deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + +return resp +} + +resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +} + +return resp +} + +/ endBlock is an application-defined function that is called after transactions +/ have been processed in FinalizeBlock. 
+func (app *BaseApp) + +endBlock(_ context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.Context()) + if err != nil { + return endblock, err +} + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" +}, + ) +} + +eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + +endblock = eb +} + +return endblock, nil +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +/ Both txBytes and the decoded tx are passed to runTx to avoid the state machine encoding the tx and decoding the transaction twice. +/ Passing the decoded tx to runTx is optional; it will be decoded if the tx is nil. +func (app *BaseApp) + +runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil + ctx.Logger().Error("panic recovered in runTx", "err", err) +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, its execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). 
+ if mode == execModeFinalize { + defer consumeBlockGas() +} + + / if the transaction is not decoded, decode it here + if tx == nil { + tx, err = app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + GasUsed: 0, + GasWanted: 0 +}, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error()) +} + +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + for _, msg := range msgs { + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return sdk.GasInfo{ +}, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. 
+ ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + if mode == execModeReCheck { + / if the ante handler fails on recheck, we want to remove the tx from the mempool + if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil { + return gInfo, nil, anteEvents, errors.Join(err, mempoolErr) +} + +} + +return gInfo, nil, nil, err +} + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + switch mode { + case execModeCheck: + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err +} + case execModeFinalize: + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) +} + + / Run optional postHandlers (should run regardless of the execution result). + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. 
+ postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if errPostHandler != nil { + if err == nil { + / when the msg was handled successfully, return the post handler error only + return gInfo, nil, anteEvents, errPostHandler +} + / otherwise append to the msg error so that we keep the original error code for better user experience + return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler) +} + + / we don't want runTx to panic if runMsgs has failed earlier + if result == nil { + result = &sdk.Result{ +} + +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if err == nil { + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i) +} + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) +} + +events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + + +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses +}) +} + +func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + signers, err := cdc.GetMsgV2Signers(msgV2) + if err != nil { + return nil, err +} + if len(signers) > 0 && signers[0] != nil { + addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0]) + if err != nil { + return nil, err +} + +msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr)) +} + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName)) +} + +} + +return sdk.Events{ + msgEvent +}.AppendEvents(events), nil +} + +/ PrepareProposalVerifyTx performs transaction verification when a proposer is +/ creating a block proposal during PrepareProposal. Any state committed to the +/ PrepareProposal state internally will be discarded. <nil, err> will be +/ returned if the transaction cannot be encoded. <bz, nil> will be returned if +/ the transaction is valid, otherwise <nil, err> will be returned. +func (app *BaseApp) + +PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) { + bz, err := app.txEncoder(tx) + if err != nil { + return nil, err +} + + _, _, _, err = app.runTx(execModePrepareProposal, bz, tx) + if err != nil { + return nil, err +} + +return bz, nil +} + +/ ProcessProposalVerifyTx performs transaction verification when receiving a +/ block proposal during ProcessProposal. 
Any state committed to the +/ ProcessProposal state internally will be discarded. <nil, err> will be +/ returned if the transaction cannot be decoded. <Tx, nil> will be returned if +/ the transaction is valid, otherwise <nil, err> will be returned. +func (app *BaseApp) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) { + tx, err := app.txDecoder(txBz) + if err != nil { + return nil, err +} + + _, _, _, err = app.runTx(execModeProcessProposal, txBz, tx) + if err != nil { + return nil, err +} + +return tx, nil +} + +func (app *BaseApp) + +TxDecode(txBytes []byte) (sdk.Tx, error) { + return app.txDecoder(txBytes) +} + +func (app *BaseApp) + +TxEncode(tx sdk.Tx) ([]byte, error) { + return app.txEncoder(tx) +} + +func (app *BaseApp) + +StreamingManager() + +storetypes.StreamingManager { + return app.streamingManager +} + +/ Close is called in start cmd to gracefully clean up resources. +func (app *BaseApp) + +Close() + +error { + var errs []error + + / Close app.db (opened by cosmos-sdk/server/start.go call to openDB) + if app.db != nil { + app.logger.Info("Closing application.db") + if err := app.db.Close(); err != nil { + errs = append(errs, err) +} + +} + + / Close app.snapshotManager + / - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate) + / - which calls cosmos-sdk/server/util.go/GetSnapshotStore + / - which is passed to baseapp/options.go/SetSnapshot + / - to set app.snapshotManager = snapshots.NewManager + if app.snapshotManager != nil { + app.logger.Info("Closing snapshots/metadata.db") + if err := app.snapshotManager.Close(); err != nil { + errs = append(errs, err) +} + +} + +return errors.Join(errs...) +} + +/ GetBaseApp returns the pointer to itself. +func (app *BaseApp) + +GetBaseApp() *BaseApp { + return app +} +``` + +This allows `RunTx` not to commit the changes made to the state during the execution of `anteHandler` if it ends up failing. 
It also prevents the module implementing the `anteHandler` from writing to state, which is an important part of the [object-capabilities](/docs/sdk/v0.53/documentation/core-concepts/ocap) of the Cosmos SDK. + +Finally, the [`RunMsgs()`](#runmsgs) function is called to process the `sdk.Msg`s in the `Tx`. In preparation for this step, just like with the `anteHandler`, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function. + +### AnteHandler + +The `AnteHandler` is a special handler that implements the `AnteHandler` interface and is used to authenticate the transaction before the transaction's internal messages are processed. + +```go expandable +package types + +/ AnteHandler authenticates transactions, before their internal messages are handled. +/ If newCtx.IsZero(), ctx is used instead. +type AnteHandler func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) + +/ PostHandler like AnteHandler but it executes after RunMsgs. Runs on success +/ or failure and enables use cases like gas refunding. +type PostHandler func(ctx Context, tx Tx, simulate, success bool) (newCtx Context, err error) + +/ AnteDecorator wraps the next AnteHandler to perform custom pre-processing. +type AnteDecorator interface { + AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) +} + +/ PostDecorator wraps the next PostHandler to perform custom post-processing. +type PostDecorator interface { + PostHandle(ctx Context, tx Tx, simulate, success bool, next PostHandler) (newCtx Context, err error) +} + +/ ChainAnteDecorators ChainDecorator chains AnteDecorators together with each AnteDecorator +/ wrapping over the decorators further along chain and returns a single AnteHandler. +/ +/ NOTE: The first element is outermost decorator, while the last element is innermost +/ decorator. 
Decorator ordering is critical since some decorators will expect +/ certain checks and updates to be performed (e.g. the Context) + +before the decorator +/ is run. These expectations should be documented clearly in a CONTRACT docline +/ in the decorator's godoc. +/ +/ NOTE: Any application that uses GasMeter to limit transaction processing cost +/ MUST set GasMeter with the FIRST AnteDecorator. Failing to do so will cause +/ transactions to be processed with an infinite gasmeter and open a DOS attack vector. +/ Use `ante.SetUpContextDecorator` or a custom Decorator with similar functionality. +/ Returns nil when no AnteDecorator are supplied. +func ChainAnteDecorators(chain ...AnteDecorator) + +AnteHandler { + if len(chain) == 0 { + return nil +} + handlerChain := make([]AnteHandler, len(chain)+1) + / set the terminal AnteHandler decorator + handlerChain[len(chain)] = func(ctx Context, tx Tx, simulate bool) (Context, error) { + return ctx, nil +} + for i := range chain { + ii := i + handlerChain[ii] = func(ctx Context, tx Tx, simulate bool) (Context, error) { + return chain[ii].AnteHandle(ctx, tx, simulate, handlerChain[ii+1]) +} + +} + +return handlerChain[0] +} + +/ ChainPostDecorators chains PostDecorators together with each PostDecorator +/ wrapping over the decorators further along chain and returns a single PostHandler. +/ +/ NOTE: The first element is outermost decorator, while the last element is innermost +/ decorator. Decorator ordering is critical since some decorators will expect +/ certain checks and updates to be performed (e.g. the Context) + +before the decorator +/ is run. These expectations should be documented clearly in a CONTRACT docline +/ in the decorator's godoc. 
+func ChainPostDecorators(chain ...PostDecorator) + +PostHandler { + if len(chain) == 0 { + return nil +} + handlerChain := make([]PostHandler, len(chain)+1) + / set the terminal PostHandler decorator + handlerChain[len(chain)] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) { + return ctx, nil +} + for i := range chain { + ii := i + handlerChain[ii] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) { + return chain[ii].PostHandle(ctx, tx, simulate, success, handlerChain[ii+1]) +} + +} + +return handlerChain[0] +} + +/ Terminator AnteDecorator will get added to the chain to simplify decorator code +/ Don't need to check if next == nil further up the chain +/ +/ ______ +/ <((((((\\\ +/ / . +}\ +/ ;--..--._| +} +/ (\ '--/\--' ) +/ \\ | '-' :'| +/ \\ . -==- .-| +/ \\ \.__.' \--._ +/ [\\ __.--| / _/'--. +/ \ \\ .'-._ ('-----'/ __/ \ +/ \ \\ / __>| | '--. | +/ \ \\ | \ | / / / +/ \ '\ / \ | | _/ / +/ \ \ \ | | / / +/ snd \ \ \ / +/ +/ Deprecated: Terminator is retired (ref https://github.com/cosmos/cosmos-sdk/pull/16076). +type Terminator struct{ +} + +/ AnteHandle returns the provided Context and nil error +func (t Terminator) + +AnteHandle(ctx Context, _ Tx, _ bool, _ AnteHandler) (Context, error) { + return ctx, nil +} + +/ PostHandle returns the provided Context and nil error +func (t Terminator) + +PostHandle(ctx Context, _ Tx, _, _ bool, _ PostHandler) (Context, error) { + return ctx, nil +} +``` + +The `AnteHandler` is theoretically optional, but still a very important component of public blockchain networks. It serves 3 primary purposes: + +- Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](/docs/sdk/v0.53/documentation/protocol-development/transactions#transaction-generation) checking. 
+- Perform preliminary _stateful_ validity checks, like ensuring signatures are valid or that the sender has enough funds to pay for fees.
+- Play a role in the incentivisation of stakeholders via the collection of transaction fees.
+
+`BaseApp` holds an `anteHandler` as a parameter that is initialized in the [application's constructor](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/ante/ante.go).
+
+Click [here](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#antehandler) for more on the `anteHandler`.
+
+### RunMsgs
+
+`RunMsgs` is called from `RunTx` with `runTxModeCheck` as a parameter to check the existence of a route for each message in the transaction, and with `execModeFinalize` to actually process the `sdk.Msg`s.
+
+First, it retrieves the `sdk.Msg`'s fully-qualified type name by checking the `type_url` of the Protobuf `Any` representing the `sdk.Msg`. Then, using the application's [`msgServiceRouter`](#msg-service-router), it checks for the existence of a `Msg` service method related to that `type_url`. At this point, if `mode == runTxModeCheck`, `RunMsgs` returns. Otherwise, if `mode == execModeFinalize`, the [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services) RPC is executed, before `RunMsgs` returns.
+
+### PostHandler
+
+`PostHandler` is similar to `AnteHandler`, but, as the name suggests, it executes custom post-transaction processing logic after [`RunMsgs`](#runmsgs) is called. `PostHandler` receives the `Result` of `RunMsgs` in order to enable this customizable behavior.
+
+Like `AnteHandler`s, `PostHandler`s are theoretically optional.
+
+Other use cases, like refunding unused gas, can also be enabled by `PostHandler`s.
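The decorator-chaining pattern behind `ChainPostDecorators` can be exercised in isolation. The sketch below is self-contained for illustration only: `Context`, `Tx`, and `gasRefundDecorator` are simplified stand-ins, not the SDK's actual types, but the chaining logic mirrors the `ChainPostDecorators` implementation quoted earlier (first decorator outermost, terminal no-op handler at the end of the chain):

```go
package main

import "fmt"

// Minimal stand-ins for sdk.Context and sdk.Tx -- illustrative only.
type Context struct{ GasRefunded bool }
type Tx struct{}

type PostHandler func(ctx Context, tx Tx, simulate, success bool) (Context, error)

type PostDecorator interface {
	PostHandle(ctx Context, tx Tx, simulate, success bool, next PostHandler) (Context, error)
}

// chainPostDecorators mirrors the SDK's ChainPostDecorators: each decorator
// wraps the ones after it, and a terminal no-op handler ends the chain.
func chainPostDecorators(chain ...PostDecorator) PostHandler {
	if len(chain) == 0 {
		return nil
	}
	handlers := make([]PostHandler, len(chain)+1)
	// Terminal handler: returns the context unchanged.
	handlers[len(chain)] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) {
		return ctx, nil
	}
	for i := range chain {
		ii := i
		handlers[ii] = func(ctx Context, tx Tx, simulate, success bool) (Context, error) {
			return chain[ii].PostHandle(ctx, tx, simulate, success, handlers[ii+1])
		}
	}
	return handlers[0]
}

// gasRefundDecorator is a hypothetical decorator: on a successful tx it flags
// a refund, then delegates to the next handler in the chain.
type gasRefundDecorator struct{}

func (gasRefundDecorator) PostHandle(ctx Context, tx Tx, simulate, success bool, next PostHandler) (Context, error) {
	if success {
		ctx.GasRefunded = true
	}
	return next(ctx, tx, simulate, success)
}

func main() {
	post := chainPostDecorators(gasRefundDecorator{})
	ctx, err := post(Context{}, Tx{}, false, true)
	fmt.Println(ctx.GasRefunded, err) // true <nil>
}
```

Because `success` is threaded through every `PostHandle` call, a decorator like this runs on both successful and failed transactions, which is exactly what makes gas refunding expressible as a `PostHandler`.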
+
+```go expandable
+package posthandler
+
+import (
+    sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ HandlerOptions are the options required for constructing a default SDK PostHandler.
+type HandlerOptions struct{}
+
+/ NewPostHandler returns an empty PostHandler chain.
+func NewPostHandler(_ HandlerOptions) (sdk.PostHandler, error) {
+    postDecorators := []sdk.PostDecorator{}
+
+    return sdk.ChainPostDecorators(postDecorators...), nil
+}
+```
+
+Note that when `PostHandler`s fail, the state from `runMsgs` is also reverted, effectively making the transaction fail.
+
+## Other ABCI Messages
+
+### InitChain
+
+The [`InitChain` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when the chain is first started. It is mainly used to **initialize** parameters and state like:
+
+- [Consensus Parameters](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#consensus-parameters) via `setConsensusParams`.
+- [`checkState` and `finalizeBlockState`](#state-updates) via `setState`.
+- The [block gas meter](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#block-gas-meter), with infinite gas to process genesis transactions.
+
+Finally, the `InitChain(req abci.RequestInitChain)` method of `BaseApp` calls the [`initChainer()`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#initchainer) of the application in order to initialize the main state of the application from the `genesis file` and, if defined, call the [`InitGenesis`](/docs/sdk/v0.53/documentation/module-system/genesis#initgenesis) function of each of the application's modules.
+
+### FinalizeBlock
+
+The [`FinalizeBlock` ABCI message](https://github.com/cometbft/cometbft/blob/v0.38.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when a block proposal created by the correct proposer is received. 
The previous `BeginBlock`, `DeliverTx`, and `EndBlock` calls are private methods on the `BaseApp` struct.
+
+```go expandable
+package baseapp
+
+import (
+    "context"
+    "fmt"
+    "sort"
+    "strings"
+    "time"
+
+    "github.com/cockroachdb/errors"
+    abci "github.com/cometbft/cometbft/abci/types"
+    cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+    "github.com/cosmos/gogoproto/proto"
+    "google.golang.org/grpc/codes"
+    grpcstatus "google.golang.org/grpc/status"
+
+    coreheader "cosmossdk.io/core/header"
+    errorsmod "cosmossdk.io/errors"
+    "cosmossdk.io/store/rootmulti"
+    snapshottypes "cosmossdk.io/store/snapshots/types"
+    storetypes "cosmossdk.io/store/types"
+
+    "github.com/cosmos/cosmos-sdk/codec"
+    "github.com/cosmos/cosmos-sdk/telemetry"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+    sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+/ Supported ABCI Query prefixes and paths
+const (
+    QueryPathApp    = "app"
+    QueryPathCustom = "custom"
+    QueryPathP2P    = "p2p"
+    QueryPathStore  = "store"
+
+    QueryPathBroadcastTx = "/cosmos.tx.v1beta1.Service/BroadcastTx"
+)
+
+func (app *BaseApp) InitChain(req *abci.RequestInitChain) (*abci.ResponseInitChain, error) {
+    if req.ChainId != app.chainID {
+        return nil, fmt.Errorf("invalid chain-id on InitChain; expected: %s, got: %s", app.chainID, req.ChainId)
+    }
+
+    / On a new chain, we consider the init chain block height as 0, even though
+    / req.InitialHeight is 1 by default.
+    initHeader := cmtproto.Header{
+        ChainID: req.ChainId,
+        Time:    req.Time,
+    }
+
+    app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId)
+
+    / Set the initial height, which will be used to determine if we are proposing
+    / or processing the first block or not.
+ app.initialHeight = req.InitialHeight + if app.initialHeight == 0 { / If initial height is 0, set it to 1 + app.initialHeight = 1 +} + + / if req.InitialHeight is > 1, then we set the initial version on all stores + if req.InitialHeight > 1 { + initHeader.Height = req.InitialHeight + if err := app.cms.SetInitialVersion(req.InitialHeight); err != nil { + return nil, err +} + +} + + / initialize states with a correct header + app.setState(execModeFinalize, initHeader) + +app.setState(execModeCheck, initHeader) + + / Store the consensus params in the BaseApp's param store. Note, this must be + / done after the finalizeBlockState and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + err := app.StoreConsensusParams(app.finalizeBlockState.Context(), *req.ConsensusParams) + if err != nil { + return nil, err +} + +} + +defer func() { + / InitChain represents the state of the application BEFORE the first block, + / i.e. the genesis block. This means that when processing the app's InitChain + / handler, the block height is zero by default. However, after Commit is called + / the height needs to reflect the true block height. + initHeader.Height = req.InitialHeight + app.checkState.SetContext(app.checkState.Context().WithBlockHeader(initHeader). + WithHeaderInfo(coreheader.Info{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time, +})) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockHeader(initHeader). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time, +})) +}() + if app.initChainer == nil { + return &abci.ResponseInitChain{ +}, nil +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter())) + +res, err := app.initChainer(app.finalizeBlockState.Context(), req) + if err != nil { + return nil, err +} + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + return nil, fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ) +} + +sort.Sort(abci.ValidatorUpdates(req.Validators)) + +sort.Sort(abci.ValidatorUpdates(res.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + return nil, fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i) +} + +} + +} + + / NOTE: We don't commit, but FinalizeBlock for block InitialHeight starts from + / this FinalizeBlockState. + return &abci.ResponseInitChain{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: app.LastCommitID().Hash, +}, nil +} + +func (app *BaseApp) + +Info(_ *abci.RequestInfo) (*abci.ResponseInfo, error) { + lastCommitID := app.cms.LastCommitID() + +return &abci.ResponseInfo{ + Data: app.name, + Version: app.version, + AppVersion: app.appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +}, nil +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. 
+func (app *BaseApp) + +Query(_ context.Context, req *abci.RequestQuery) (resp *abci.ResponseQuery, err error) { + / add panic recovery for all queries + / + / Ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + resp = sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + +defer telemetry.MeasureSince(telemetry.Now(), req.Path) + if req.Path == QueryPathBroadcastTx { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "can't route a broadcast tx message"), app.trace), nil +} + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req), nil +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), app.trace), nil +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + resp = handleQueryApp(app, path, req) + case QueryPathStore: + resp = handleQueryStore(app, path, *req) + case QueryPathP2P: + resp = handleQueryP2P(app, path) + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +return resp, nil +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +ListSnapshots(req *abci.RequestListSnapshots) (*abci.ResponseListSnapshots, error) { + resp := &abci.ResponseListSnapshots{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp, nil +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return nil, err +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to convert ABCI snapshots", "err", err) + +return nil, err +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp, nil +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req *abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) { + if app.snapshotManager == nil { + return &abci.ResponseLoadSnapshotChunk{ +}, nil +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return nil, err +} + +return &abci.ResponseLoadSnapshotChunk{ + Chunk: chunk +}, nil +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req *abci.RequestOfferSnapshot) (*abci.ResponseOfferSnapshot, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT_FORMAT +}, nil + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil + + default: + / CometBFT errors are defined here: https://github.com/cometbft/cometbft/blob/main/statesync/syncer.go + / It may happen that in case of a CometBFT error, such as a timeout (which occurs after two minutes), + / the process is aborted. This is done intentionally because deleting the database programmatically + / can lead to more complicated situations. + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a + / different snapshot, so we ask CometBFT to abort all snapshot restoration. 
+ return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ApplySnapshotChunk(req *abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +}, nil + + default: + app.logger.Error("failed to restore snapshot", "err", err) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req *abci.RequestCheckTx) (*abci.ResponseCheckTx, error) { + var mode execMode + switch req.Type { + case abci.CheckTxType_New: + mode = execModeCheck + case abci.CheckTxType_Recheck: + mode = execModeReCheck + + default: + return nil, fmt.Errorf("unknown RequestCheckTx type: %s", req.Type) +} + if app.checkTxHandler == nil { + gInfo, result, anteEvents, err := app.runTx(mode, req.Tx, nil) + if err != nil { + return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace), nil +} + +return &abci.ResponseCheckTx{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +}, nil +} + + / Create wrapper to avoid users overriding the execution mode + runTx := func(txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + return app.runTx(mode, txBytes, tx) +} + +return app.checkTxHandler(runTx, req) +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. 
+/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req *abci.RequestPrepareProposal) (resp *abci.ResponsePrepareProposal, err error) { + if app.prepareProposal == nil { + return nil, errors.New("PrepareProposal handler not set") +} + + / Always reset state given that PrepareProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + +app.setState(execModePrepareProposal, header) + + / CometBFT must never call PrepareProposal with a height of 0. + / + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("PrepareProposal called with invalid height") +} + +app.prepareProposalState.SetContext(app.getContextForProposal(app.prepareProposalState.Context(), req.Height). + WithVoteInfos(toVoteInfo(req.LocalLastCommit.Votes)). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithProposer(req.ProposerAddress). + WithExecMode(sdk.ExecModePrepareProposal). + WithCometInfo(prepareProposalInfo{ + req +}). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, +})) + +app.prepareProposalState.SetContext(app.prepareProposalState.Context(). + WithConsensusParams(app.GetConsensusParams(app.prepareProposalState.Context())). 
+ WithBlockGasMeter(app.getBlockGasMeter(app.prepareProposalState.Context()))) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = &abci.ResponsePrepareProposal{ + Txs: req.Txs +} + +} + +}() + +resp, err = app.prepareProposal(app.prepareProposalState.Context(), req) + if err != nil { + app.logger.Error("failed to prepare proposal", "height", req.Height, "time", req.Time, "err", err) + +return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} + +return resp, nil +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ CometBFT that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { + if app.processProposal == nil { + return nil, errors.New("ProcessProposal handler not set") +} + + / CometBFT must never call ProcessProposal with a height of 0. 
+ / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("ProcessProposal called with invalid height") +} + + / Always reset state given that ProcessProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + +app.setState(execModeProcessProposal, header) + + / Since the application can get access to FinalizeBlock state and write to it, + / we must be sure to reset it in case ProcessProposal timeouts and is called + / again in a subsequent round. However, we only want to do this after we've + / processed the first block, as we want to avoid overwriting the finalizeState + / after state changes during InitChain. + if req.Height > app.initialHeight { + / abort any running OE + app.optimisticExec.Abort() + +app.setState(execModeFinalize, header) +} + +app.processProposalState.SetContext(app.getContextForProposal(app.processProposalState.Context(), req.Height). + WithVoteInfos(req.ProposedLastCommit.Votes). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithCometInfo(cometInfo{ + ProposerAddress: req.ProposerAddress, + ValidatorsHash: req.NextValidatorsHash, + Misbehavior: req.Misbehavior, + LastCommit: req.ProposedLastCommit +}). + WithExecMode(sdk.ExecModeProcessProposal). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, +})) + +app.processProposalState.SetContext(app.processProposalState.Context(). + WithConsensusParams(app.GetConsensusParams(app.processProposalState.Context())). 
+ WithBlockGasMeter(app.getBlockGasMeter(app.processProposalState.Context()))) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +}() + +resp, err = app.processProposal(app.processProposalState.Context(), req) + if err != nil { + app.logger.Error("failed to process proposal", "height", req.Height, "time", req.Time, "hash", fmt.Sprintf("%X", req.Hash), "err", err) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + + / Only execute optimistic execution if the proposal is accepted, OE is + / enabled and the block height is greater than the initial height. During + / the first block we'll be carrying state from InitChain, so it would be + / impossible for us to easily revert. + / After the first block has been processed, the next blocks will get executed + / optimistically, so that when the ABCI client calls `FinalizeBlock` the app + / can have a response ready. + if resp.Status == abci.ResponseProcessProposal_ACCEPT && + app.optimisticExec.Enabled() && + req.Height > app.initialHeight { + app.optimisticExec.Execute(req) +} + +return resp, nil +} + +/ ExtendVote implements the ExtendVote ABCI method and returns a ResponseExtendVote. +/ It calls the application's ExtendVote handler which is responsible for performing +/ application-specific business logic when sending a pre-commit for the NEXT +/ block height. The extensions response may be non-deterministic but must always +/ be returned, even if empty. +/ +/ Agreed upon vote extensions are made available to the proposer of the next +/ height and are committed in the subsequent height, i.e. H+2. An error is +/ returned if vote extensions are not enabled or if extendVote fails or panics. 
+func (app *BaseApp) + +ExtendVote(_ context.Context, req *abci.RequestExtendVote) (resp *abci.ResponseExtendVote, err error) { + / Always reset state given that ExtendVote and VerifyVoteExtension can timeout + / and be called again in a subsequent round. + var ctx sdk.Context + + / If we're extending the vote for the initial height, we need to use the + / finalizeBlockState context, otherwise we don't get the uncommitted data + / from InitChain. + if req.Height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() +} + +else { + emptyHeader := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height +} + ms := app.cms.CacheMultiStore() + +ctx = sdk.NewContext(ms, emptyHeader, false, app.logger).WithStreamingManager(app.streamingManager) +} + if app.extendVote == nil { + return nil, errors.New("application ExtendVote handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(ctx) + + / Note: In this case, we do want to extend vote if the height is equal or + / greater than VoteExtensionsEnableHeight. This defers from the check done + / in ValidateVoteExtensions and PrepareProposal in which we'll check for + / vote extensions on VoteExtensionsEnableHeight+1. + extsEnabled := cp.Abci != nil && req.Height >= cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + if !extsEnabled { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to ExtendVote at height %d", req.Height) +} + +ctx = ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVoteExtension). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Hash: req.Hash, +}) + + / add a deferred recover handler in case extendVote panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in ExtendVote", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +err = fmt.Errorf("recovered application panic in ExtendVote: %v", r) +} + +}() + +resp, err = app.extendVote(ctx, req) + if err != nil { + app.logger.Error("failed to extend vote", "height", req.Height, "hash", fmt.Sprintf("%X", req.Hash), "err", err) + +return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} + +return resp, err +} + +/ VerifyVoteExtension implements the VerifyVoteExtension ABCI method and returns +/ a ResponseVerifyVoteExtension. It calls the applications' VerifyVoteExtension +/ handler which is responsible for performing application-specific business +/ logic in verifying a vote extension from another validator during the pre-commit +/ phase. The response MUST be deterministic. An error is returned if vote +/ extensions are not enabled or if verifyVoteExt fails or panics. +func (app *BaseApp) + +VerifyVoteExtension(req *abci.RequestVerifyVoteExtension) (resp *abci.ResponseVerifyVoteExtension, err error) { + if app.verifyVoteExt == nil { + return nil, errors.New("application VerifyVoteExtension handler not set") +} + +var ctx sdk.Context + + / If we're verifying the vote for the initial height, we need to use the + / finalizeBlockState context, otherwise we don't get the uncommitted data + / from InitChain. 
+ if req.Height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() +} + +else { + emptyHeader := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height +} + ms := app.cms.CacheMultiStore() + +ctx = sdk.NewContext(ms, emptyHeader, false, app.logger).WithStreamingManager(app.streamingManager) +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(ctx) + + / Note: we verify votes extensions on VoteExtensionsEnableHeight+1. Check + / comment in ExtendVote and ValidateVoteExtensions for more details. + extsEnabled := cp.Abci != nil && req.Height >= cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + if !extsEnabled { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to VerifyVoteExtension at height %d", req.Height) +} + + / add a deferred recover handler in case verifyVoteExt panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in VerifyVoteExtension", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "validator", fmt.Sprintf("%X", req.ValidatorAddress), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in VerifyVoteExtension: %v", r) +} + +}() + +ctx = ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVerifyVoteExtension). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Hash: req.Hash, +}) + +resp, err = app.verifyVoteExt(ctx, req) + if err != nil { + app.logger.Error("failed to verify vote extension", "height", req.Height, "err", err) + +return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_REJECT +}, nil +} + +return resp, err +} + +/ internalFinalizeBlock executes the block, called by the Optimistic +/ Execution flow or by the FinalizeBlock ABCI method. The context received is +/ only used to handle early cancellation, for anything related to state app.finalizeBlockState.Context() +/ must be used. +func (app *BaseApp) + +internalFinalizeBlock(ctx context.Context, req *abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) { + var events []abci.Event + if err := app.checkHalt(req.Height, req.Time); err != nil { + return nil, err +} + if err := app.validateFinalizeBlockHeight(req); err != nil { + return nil, err +} + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(storetypes.TraceContext( + map[string]any{"blockHeight": req.Height +}, + )) +} + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + + / finalizeBlockState should be set on InitChain or ProcessProposal. If it is + / nil, it means we are replaying this block and we need to set the state here + / given that during block replay ProcessProposal is not executed by CometBFT. + if app.finalizeBlockState == nil { + app.setState(execModeFinalize, header) +} + + / Context is now updated with Header information. + app.finalizeBlockState.SetContext(app.finalizeBlockState.Context(). + WithBlockHeader(header). + WithHeaderHash(req.Hash). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + Hash: req.Hash, + AppHash: app.LastCommitID().Hash, +}). + WithConsensusParams(app.GetConsensusParams(app.finalizeBlockState.Context())). + WithVoteInfos(req.DecidedLastCommit.Votes). + WithExecMode(sdk.ExecModeFinalize). + WithCometInfo(cometInfo{ + Misbehavior: req.Misbehavior, + ValidatorsHash: req.NextValidatorsHash, + ProposerAddress: req.ProposerAddress, + LastCommit: req.DecidedLastCommit, +})) + + / GasMeter must be set after we get a context with updated consensus params. + gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) + if app.checkState != nil { + app.checkState.SetContext(app.checkState.Context(). + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash)) +} + +preblockEvents, err := app.preBlock(req) + if err != nil { + return nil, err +} + +events = append(events, preblockEvents...) + +beginBlock, err := app.beginBlock(req) + if err != nil { + return nil, err +} + + / First check for an abort signal after beginBlock, as it's the first place + / we spend any significant amount of time. + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +events = append(events, beginBlock.Events...) + + / Reset the gas meter so that the AnteHandlers aren't required to + gasMeter = app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) + + / Iterate over all raw transactions in the proposal and attempt to execute + / them, gathering the execution results. + / + / NOTE: Not all raw transactions may adhere to the sdk.Tx interface, e.g. + / vote extensions, so skip those. 
+ txResults := make([]*abci.ExecTxResult, 0, len(req.Txs)) + for _, rawTx := range req.Txs { + var response *abci.ExecTxResult + if _, err := app.txDecoder(rawTx); err == nil { + response = app.deliverTx(rawTx) +} + +else { + / In the case where a transaction included in a block proposal is malformed, + / we still want to return a default response to comet. This is because comet + / expects a response for each transaction included in a block proposal. + response = sdkerrors.ResponseExecTxResultWithEvents( + sdkerrors.ErrTxDecode, + 0, + 0, + nil, + false, + ) +} + + / check after every tx if we should abort + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +txResults = append(txResults, response) +} + if app.finalizeBlockState.ms.TracingEnabled() { + app.finalizeBlockState.ms = app.finalizeBlockState.ms.SetTracingContext(nil).(storetypes.CacheMultiStore) +} + +endBlock, err := app.endBlock(app.finalizeBlockState.Context()) + if err != nil { + return nil, err +} + + / check after endBlock if we should abort, to avoid propagating the result + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +events = append(events, endBlock.Events...) + cp := app.GetConsensusParams(app.finalizeBlockState.Context()) + +return &abci.ResponseFinalizeBlock{ + Events: events, + TxResults: txResults, + ValidatorUpdates: endBlock.ValidatorUpdates, + ConsensusParamUpdates: &cp, +}, nil +} + +/ FinalizeBlock will execute the block proposal provided by RequestFinalizeBlock. +/ Specifically, it will execute an application's BeginBlock (if defined), followed +/ by the transactions in the proposal, finally followed by the application's +/ EndBlock (if defined). +/ +/ For each raw transaction, i.e. a byte slice, BaseApp will only execute it if +/ it adheres to the sdk.Tx interface. Otherwise, the raw transaction will be +/ skipped. 
This is to support compatibility with proposers injecting vote +/ extensions into the proposal, which should not themselves be executed in cases +/ where they adhere to the sdk.Tx interface. +func (app *BaseApp) + +FinalizeBlock(req *abci.RequestFinalizeBlock) (res *abci.ResponseFinalizeBlock, err error) { + defer func() { + if res == nil { + return +} + / call the streaming service hooks with the FinalizeBlock messages + for _, streamingListener := range app.streamingManager.ABCIListeners { + if err := streamingListener.ListenFinalizeBlock(app.finalizeBlockState.Context(), *req, *res); err != nil { + app.logger.Error("ListenFinalizeBlock listening hook failed", "height", req.Height, "err", err) +} + +} + +}() + if app.optimisticExec.Initialized() { + / check if the hash we got is the same as the one we are executing + aborted := app.optimisticExec.AbortIfNeeded(req.Hash) + / Wait for the OE to finish, regardless of whether it was aborted or not + res, err = app.optimisticExec.WaitResult() + + / only return if we are not aborting + if !aborted { + if res != nil { + res.AppHash = app.workingHash() +} + +return res, err +} + + / if it was aborted, we need to reset the state + app.finalizeBlockState = nil + app.optimisticExec.Reset() +} + + / if no OE is running, just run the block (this is either a block replay or an OE that got aborted) + +res, err = app.internalFinalizeBlock(context.Background(), req) + if res != nil { + res.AppHash = app.workingHash() +} + +return res, err +} + +/ checkHalt checks if height or time exceeds halt-height or halt-time respectively. 
+func (app *BaseApp) + +checkHalt(height int64, time time.Time) + +error { + var halt bool + switch { + case app.haltHeight > 0 && uint64(height) >= app.haltHeight: + halt = true + case app.haltTime > 0 && time.Unix() >= int64(app.haltTime): + halt = true +} + if halt { + return fmt.Errorf("halt per configuration height %d time %d", app.haltHeight, app.haltTime) +} + +return nil +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. +func (app *BaseApp) + +Commit() (*abci.ResponseCommit, error) { + header := app.finalizeBlockState.Context().BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + if app.precommiter != nil { + app.precommiter(app.finalizeBlockState.Context()) +} + +rms, ok := app.cms.(*rootmulti.Store) + if ok { + rms.SetCommitHeader(header) +} + +app.cms.Commit() + resp := &abci.ResponseCommit{ + RetainHeight: retainHeight, +} + abciListeners := app.streamingManager.ABCIListeners + if len(abciListeners) > 0 { + ctx := app.finalizeBlockState.Context() + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + for _, abciListener := range abciListeners { + if err := abciListener.ListenCommit(ctx, *resp, changeSet); err != nil { + app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err) +} + +} + +} + + / Reset the CheckTx state to the latest committed. + / + / NOTE: This is safe because CometBFT holds a lock on the mempool for + / Commit. Use the header from this latest block. 
+ app.setState(execModeCheck, header) + +app.finalizeBlockState = nil + if app.prepareCheckStater != nil { + app.prepareCheckStater(app.checkState.Context()) +} + + / The SnapshotIfApplicable method will create the snapshot by starting the goroutine + app.snapshotManager.SnapshotIfApplicable(header.Height) + +return resp, nil +} + +/ workingHash gets the apphash that will be finalized in commit. +/ These writes will be persisted to the root multi-store (app.cms) + +and flushed to +/ disk in the Commit phase. This means when the ABCI client requests Commit(), the application +/ state transitions will be flushed to disk and, as a result, we already have +/ an application Merkle root. +func (app *BaseApp) + +workingHash() []byte { + / Write the FinalizeBlock state into branched storage and commit the MultiStore. + / The write to the FinalizeBlock state writes all state transitions to the root + / MultiStore (app.cms) + +so when Commit() + +is called it persists those values. + app.finalizeBlockState.ms.Write() + + / Get the hash of all writes in order to return the apphash to the comet in finalizeBlock. 
+ commitHash := app.cms.WorkingHash() + +app.logger.Debug("hash of all writes", "workingHash", fmt.Sprintf("%X", commitHash)) + +return commitHash +} + +func handleQueryApp(app *BaseApp, path []string, req *abci.RequestQuery) *abci.ResponseQuery { + if len(path) >= 2 { + switch path[1] { + case "simulate": + txBytes := req.Data + + gInfo, res, err := app.Simulate(txBytes) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to simulate tx"), app.trace) +} + simRes := &sdk.SimulationResponse{ + GasInfo: gInfo, + Result: res, +} + +bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to JSON encode simulation response"), app.trace) +} + +return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: bz, +} + case "version": + return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: []byte(app.version), +} + +default: + return sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace) +} + +} + +return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrUnknownRequest, + "expected second parameter to be either 'simulate' or 'version', neither was present", + ), app.trace) +} + +func handleQueryStore(app *BaseApp, path []string, req abci.RequestQuery) *abci.ResponseQuery { + / "/store" prefix for store queries + queryable, ok := app.cms.(storetypes.Queryable) + if !ok { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "multi-store does not support queries"), app.trace) +} + +req.Path = "/" + strings.Join(path[1:], "/") + if req.Height <= 1 && req.Prove { + return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ), app.trace) +} + sdkReq := storetypes.RequestQuery(req) + +resp, err := queryable.Query(&sdkReq) + 
if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + +resp.Height = req.Height + abciResp := abci.ResponseQuery(*resp) + +return &abciResp +} + +func handleQueryP2P(app *BaseApp, path []string) *abci.ResponseQuery { + / "/p2p" prefix for p2p queries + if len(path) < 4 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "path should be p2p filter "), app.trace) +} + +var resp *abci.ResponseQuery + + cmd, typ, arg := path[1], path[2], path[3] + switch cmd { + case "filter": + switch typ { + case "addr": + resp = app.FilterPeerByAddrPort(arg) + case "id": + resp = app.FilterPeerByID(arg) +} + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace) +} + +return resp +} + +/ SplitABCIQueryPath splits a string path using the delimiter '/'. +/ +/ e.g. "this/is/funny" becomes []string{"this", "is", "funny" +} + +func SplitABCIQueryPath(requestPath string) (path []string) { + path = strings.Split(requestPath, "/") + + / first element is empty string + if len(path) > 0 && path[0] == "" { + path = path[1:] +} + +return path +} + +/ FilterPeerByAddrPort filters peers by address/port. +func (app *BaseApp) + +FilterPeerByAddrPort(info string) *abci.ResponseQuery { + if app.addrPeerFilter != nil { + return app.addrPeerFilter(info) +} + +return &abci.ResponseQuery{ +} +} + +/ FilterPeerByID filters peers by node ID. +func (app *BaseApp) + +FilterPeerByID(info string) *abci.ResponseQuery { + if app.idPeerFilter != nil { + return app.idPeerFilter(info) +} + +return &abci.ResponseQuery{ +} +} + +/ getContextForProposal returns the correct Context for PrepareProposal and +/ ProcessProposal. We use finalizeBlockState on the first block to be able to +/ access any state changes made in InitChain. 
+func (app *BaseApp) + +getContextForProposal(ctx sdk.Context, height int64) + +sdk.Context { + if height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() + + / clear all context data set during InitChain to avoid inconsistent behavior + ctx = ctx.WithBlockHeader(cmtproto.Header{ +}).WithHeaderInfo(coreheader.Info{ +}) + +return ctx +} + +return ctx +} + +func (app *BaseApp) + +handleQueryGRPC(handler GRPCQueryHandler, req *abci.RequestQuery) *abci.ResponseQuery { + ctx, err := app.CreateQueryContext(req.Height, req.Prove) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + +resp, err := handler(ctx, req) + if err != nil { + resp = sdkerrors.QueryResult(gRPCErrorToSDKError(err), app.trace) + +resp.Height = req.Height + return resp +} + +return resp +} + +func gRPCErrorToSDKError(err error) + +error { + status, ok := grpcstatus.FromError(err) + if !ok { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) +} + switch status.Code() { + case codes.NotFound: + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, err.Error()) + case codes.InvalidArgument: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.FailedPrecondition: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.Unauthenticated: + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, err.Error()) + +default: + return errorsmod.Wrap(sdkerrors.ErrUnknownRequest, err.Error()) +} +} + +func checkNegativeHeight(height int64) + +error { + if height < 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "cannot query with height < 0; please provide a valid height") +} + +return nil +} + +/ CreateQueryContext creates a new sdk.Context for a query, taking as args +/ the block height and whether the query needs a proof or not. 
+func (app *BaseApp) + +CreateQueryContext(height int64, prove bool) (sdk.Context, error) { + return app.CreateQueryContextWithCheckHeader(height, prove, true) +} + +/ CreateQueryContextWithCheckHeader creates a new sdk.Context for a query, taking as args +/ the block height, whether the query needs a proof or not, and whether to check the header or not. +func (app *BaseApp) + +CreateQueryContextWithCheckHeader(height int64, prove, checkHeader bool) (sdk.Context, error) { + if err := checkNegativeHeight(height); err != nil { + return sdk.Context{ +}, err +} + + / use custom query multi-store if provided + qms := app.qms + if qms == nil { + qms = app.cms.(storetypes.MultiStore) +} + lastBlockHeight := qms.LatestVersion() + if lastBlockHeight == 0 { + return sdk.Context{ +}, errorsmod.Wrapf(sdkerrors.ErrInvalidHeight, "%s is not ready; please wait for first block", app.Name()) +} + if height > lastBlockHeight { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidHeight, + "cannot query with height in the future; please provide a valid height", + ) +} + if height == 1 && prove { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ) +} + +var header *cmtproto.Header + isLatest := height == 0 + for _, state := range []*state{ + app.checkState, + app.finalizeBlockState, +} { + if state != nil { + / branch the commit multi-store for safety + h := state.Context().BlockHeader() + if isLatest { + lastBlockHeight = qms.LatestVersion() +} + if !checkHeader || !isLatest || isLatest && h.Height == lastBlockHeight { + header = &h + break +} + +} + +} + if header == nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrInvalidHeight, + "context did not contain latest block height in either check state or finalize block state (%d)", lastBlockHeight, + ) +} + + / when a client did not provide a query height, manually inject the latest + if isLatest { + 
height = lastBlockHeight +} + +cacheMS, err := qms.CacheMultiStoreWithVersion(height) + if err != nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrNotFound, + "failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight, + ) +} + + / branch the commit multi-store for safety + ctx := sdk.NewContext(cacheMS, *header, true, app.logger). + WithMinGasPrices(app.minGasPrices). + WithGasMeter(storetypes.NewGasMeter(app.queryGasLimit)). + WithBlockHeader(*header). + WithBlockHeight(height) + if !isLatest { + rms, ok := app.cms.(*rootmulti.Store) + if ok { + cInfo, err := rms.GetCommitInfo(height) + if cInfo != nil && err == nil { + ctx = ctx.WithBlockHeight(height).WithBlockTime(cInfo.Timestamp) +} + +} + +} + +return ctx, nil +} + +/ GetBlockRetentionHeight returns the height for which all blocks below this height +/ are pruned from CometBFT. Given a commitment height and a non-zero local +/ minRetainBlocks configuration, the retentionHeight is the smallest height that +/ satisfies: +/ +/ - Unbonding (safety threshold) + +time: The block interval in which validators +/ can be economically punished for misbehavior. Blocks in this interval must be +/ auditable e.g. by the light client. +/ +/ - Logical store snapshot interval: The block interval at which the underlying +/ logical store database is persisted to disk, e.g. every 10000 heights. Blocks +/ since the last IAVL snapshot must be available for replay on application restart. +/ +/ - State sync snapshots: Blocks since the oldest available snapshot must be +/ available for state sync nodes to catch up (oldest because a node may be +/ restoring an old snapshot while a new snapshot was taken). +/ +/ - Local (minRetainBlocks) + +config: Archive nodes may want to retain more or +/ all blocks, e.g. via a local config option min-retain-blocks. There may also +/ be a need to vary retention for other nodes, e.g. sentry nodes which do not +/ need historical blocks. 
+func (app *BaseApp) + +GetBlockRetentionHeight(commitHeight int64) + +int64 { + / If minRetainBlocks is zero, pruning is disabled and we return 0 + / If commitHeight is less than or equal to minRetainBlocks, return 0 since there are not enough + / blocks to trigger pruning yet. This ensures we keep all blocks until we have at least minRetainBlocks. + retentionBlockWindow := commitHeight - int64(app.minRetainBlocks) + if app.minRetainBlocks == 0 || retentionBlockWindow <= 0 { + return 0 +} + minNonZero := func(x, y int64) + +int64 { + switch { + case x == 0: + return y + case y == 0: + return x + case x < y: + return x + + default: + return y +} + +} + + / Define retentionHeight as the minimum value that satisfies all non-zero + / constraints. All blocks below (commitHeight-retentionHeight) + +are pruned + / from CometBFT. + var retentionHeight int64 + + / Define the number of blocks needed to protect against misbehaving validators + / which allows light clients to operate safely. Note, we piggy back off the + / evidence parameters instead of computing an estimated number of blocks based + / on the unbonding period and block commitment time as the two should be + / equivalent. + cp := app.GetConsensusParams(app.finalizeBlockState.Context()) + if cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 { + retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks +} + if app.snapshotManager != nil { + snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights() + if snapshotRetentionHeights > 0 { + retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights) +} + +} + +retentionHeight = minNonZero(retentionHeight, retentionBlockWindow) + if retentionHeight <= 0 { + / prune nothing in the case of a non-positive height + return 0 +} + +return retentionHeight +} + +/ toVoteInfo converts the new ExtendedVoteInfo to VoteInfo. 
+func toVoteInfo(votes []abci.ExtendedVoteInfo) []abci.VoteInfo { + legacyVotes := make([]abci.VoteInfo, len(votes)) + for i, vote := range votes { + legacyVotes[i] = abci.VoteInfo{ + Validator: abci.Validator{ + Address: vote.Validator.Address, + Power: vote.Validator.Power, +}, + BlockIdFlag: vote.BlockIdFlag, +} + +} + +return legacyVotes +} +``` + +#### PreBlock + +- Run the application's [`preBlocker()`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#preblocker), which mainly runs the [`PreBlocker()`](/docs/sdk/v0.53/documentation/module-system/preblock#preblock) method of each of the modules. + +#### BeginBlock + +- Initialize [`finalizeBlockState`](#state-updates) with the latest header using the `req abci.RequestFinalizeBlock` passed as parameter via the `setState` function. + + ```go expandable + package baseapp + + import ( + + "context" + "fmt" + "maps" + "math" + "slices" + "strconv" + "sync" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cometbft/cometbft/crypto/tmhash" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store" + storemetrics "cosmossdk.io/store/metrics" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp/oe" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/types/msgservice" + ) + + type ( + execMode uint8 + + / StoreLoader defines a customizable function to 
control how we load the + / CommitMultiStore from disk. This is useful for state migration, when + / loading a datastore written with an older version of the software. In + / particular, if a module changed the substore key name (or removed a substore) + / between two versions of the software. + StoreLoader func(ms storetypes.CommitMultiStore) + + error + ) + + const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + + transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal + ) + + var _ servertypes.ABCI = (*BaseApp)(nil) + + / BaseApp reflects the ABCI application implementation. + type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + + state + qms storetypes.MultiStore / Optional alternative multistore for querying only. 
+ storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + + grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + + BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + + EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. 
dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. + interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. 
+ minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + + at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependent on this parameter in conjunction + / with the unbonding (safety threshold) + + period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType + }.{ + attributeKey + }, + / which informs CometBFT what to index. If empty, all events will be indexed. + indexEvents map[string]struct{ + } + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling. This is experimental and must be enabled + / by developers. 
+ optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool + } + + / NewBaseApp returns a reference to an initialized BaseApp. It accepts a + / variadic number of option functions, which act on the BaseApp to set + / configuration choices. + func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), + ) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, + } + for _, option := range options { + option(app) + } + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ + }) + } + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) + } + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) + } + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) + } + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) + } + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) + } + + app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil 
pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + + protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) + } + + else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + logger.Warn("error validating merged proto registry annotations", "error", err) + } + + } + + return app + } + + / Name returns the name of the BaseApp. + func (app *BaseApp) + + Name() + + string { + return app.name + } + + / AppVersion returns the application's protocol version. + func (app *BaseApp) + + AppVersion() + + uint64 { + return app.appVersion + } + + / Version returns the application's version string. + func (app *BaseApp) + + Version() + + string { + return app.version + } + + / Logger returns the logger of the BaseApp. + func (app *BaseApp) + + Logger() + + log.Logger { + return app.logger + } + + / Trace returns the boolean value for logging error stack traces. + func (app *BaseApp) + + Trace() + + bool { + return app.trace + } + + / MsgServiceRouter returns the MsgServiceRouter of a BaseApp. + func (app *BaseApp) + + MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter + } + + / GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. + func (app *BaseApp) + + GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter + } + + / MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp + / multistore. 
+ func (app *BaseApp) + + MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) + } + + else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) + } + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + + default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) + } + + } + } + + / MountKVStores mounts all IAVL or DB stores to the provided keys in the + / BaseApp multistore. + func (app *BaseApp) + + MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) + } + + else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) + } + + } + } + + / MountTransientStores mounts all transient stores to the provided keys in + / the BaseApp multistore. + func (app *BaseApp) + + MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) + } + } + + / MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal + / commit multi-store. + func (app *BaseApp) + + MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) + } + } + + / MountStore mounts a store to the provided key in the BaseApp multistore, + / using the default DB. 
+ func (app *BaseApp) + + MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) + } + + / LoadLatestVersion loads the latest application version. It will panic if + / called more than once on a running BaseApp. + func (app *BaseApp) + + LoadLatestVersion() + + error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) + } + + return app.Init() + } + + / DefaultStoreLoader will be used by default and loads the latest version + func DefaultStoreLoader(ms storetypes.CommitMultiStore) + + error { + return ms.LoadLatestVersion() + } + + / CommitMultiStore returns the root multi-store. + / App constructor can use this to access the `cms`. + / UNSAFE: must not be used during the abci life cycle. + func (app *BaseApp) + + CommitMultiStore() + + storetypes.CommitMultiStore { + return app.cms + } + + / SnapshotManager returns the snapshot manager. + / application use this to register extra extension snapshotters. + func (app *BaseApp) + + SnapshotManager() *snapshots.Manager { + return app.snapshotManager + } + + / LoadVersion loads the BaseApp application version. It will panic if called + / more than once on a running baseapp. + func (app *BaseApp) + + LoadVersion(version int64) + + error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) + } + + return app.Init() + } + + / LastCommitID returns the last CommitID of the multistore. + func (app *BaseApp) + + LastCommitID() + + storetypes.CommitID { + return app.cms.LastCommitID() + } + + / LastBlockHeight returns the last committed block height. + func (app *BaseApp) + + LastBlockHeight() + + int64 { + return app.cms.LastCommitID().Version + } + + / ChainID returns the chainID of the app. 
+ func (app *BaseApp) + + ChainID() + + string { + return app.chainID + } + + / AnteHandler returns the AnteHandler of the app. + func (app *BaseApp) + + AnteHandler() + + sdk.AnteHandler { + return app.anteHandler + } + + / Mempool returns the Mempool of the app. + func (app *BaseApp) + + Mempool() + + mempool.Mempool { + return app.mempool + } + + / Init initializes the app. It seals the app, preventing any + / further modifications. In addition, it validates the app against + / the earlier provided settings. Returns an error if validation fails. + / nil otherwise. Panics if the app is already sealed. + func (app *BaseApp) + + Init() + + error { + if app.sealed { + panic("cannot call initFromMainStore: baseapp already sealed") + } + if app.cms == nil { + return errors.New("commit multi-store must not be nil") + } + emptyHeader := cmtproto.Header{ + ChainID: app.chainID + } + + / needed for the export command which inits from store but never calls initchain + app.setState(execModeCheck, emptyHeader) + + app.Seal() + + return app.cms.GetPruning().Validate() + } + + func (app *BaseApp) + + setMinGasPrices(gasPrices sdk.DecCoins) { + app.minGasPrices = gasPrices + } + + func (app *BaseApp) + + setHaltHeight(haltHeight uint64) { + app.haltHeight = haltHeight + } + + func (app *BaseApp) + + setHaltTime(haltTime uint64) { + app.haltTime = haltTime + } + + func (app *BaseApp) + + setMinRetainBlocks(minRetainBlocks uint64) { + app.minRetainBlocks = minRetainBlocks + } + + func (app *BaseApp) + + setInterBlockCache(cache storetypes.MultiStorePersistentCache) { + app.interBlockCache = cache + } + + func (app *BaseApp) + + setTrace(trace bool) { + app.trace = trace + } + + func (app *BaseApp) + + setIndexEvents(ie []string) { + app.indexEvents = make(map[string]struct{ + }) + for _, e := range ie { + app.indexEvents[e] = struct{ + }{ + } + + } + } + + / Seal seals a BaseApp. It prohibits any further modifications to a BaseApp. 
+ func (app *BaseApp) + + Seal() { + app.sealed = true + } + + / IsSealed returns true if the BaseApp is sealed and false otherwise. + func (app *BaseApp) + + IsSealed() + + bool { + return app.sealed + } + + / setState sets the BaseApp's state for the corresponding mode with a branched + / multi-store (i.e. a CacheMultiStore) + + and a new Context with the same + / multi-store branch, and provided header. + func (app *BaseApp) + + setState(mode execMode, h cmtproto.Header) { + ms := app.cms.CacheMultiStore() + headerInfo := header.Info{ + Height: h.Height, + Time: h.Time, + ChainID: h.ChainID, + AppHash: h.AppHash, + } + baseState := &state{ + ms: ms, + ctx: sdk.NewContext(ms, h, false, app.logger). + WithStreamingManager(app.streamingManager). + WithHeaderInfo(headerInfo), + } + switch mode { + case execModeCheck: + baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices)) + + app.checkState = baseState + case execModePrepareProposal: + app.prepareProposalState = baseState + case execModeProcessProposal: + app.processProposalState = baseState + case execModeFinalize: + app.finalizeBlockState = baseState + + default: + panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode)) + } + } + + / SetCircuitBreaker sets the circuit breaker for the BaseApp. + / The circuit breaker is checked on every message execution to verify if a transaction should be executed or not. + func (app *BaseApp) + + SetCircuitBreaker(cb CircuitBreaker) { + if app.msgServiceRouter == nil { + panic("cannot set circuit breaker with no msg service router set") + } + + app.msgServiceRouter.SetCircuit(cb) + } + + / GetConsensusParams returns the current consensus parameters from the BaseApp's + / ParamStore. If the BaseApp has no ParamStore defined, nil is returned. 
+ func (app *BaseApp) + + GetConsensusParams(ctx sdk.Context) + + cmtproto.ConsensusParams { + if app.paramStore == nil { + return cmtproto.ConsensusParams{ + } + + } + + cp, err := app.paramStore.Get(ctx) + if err != nil { + / This could happen while migrating from v0.45/v0.46 to v0.50, we should + / allow it to happen so during preblock the upgrade plan can be executed + / and the consensus params set for the first time in the new format. + app.logger.Error("failed to get consensus params", "err", err) + + return cmtproto.ConsensusParams{ + } + + } + + return cp + } + + / StoreConsensusParams sets the consensus parameters to the BaseApp's param + / store. + / + / NOTE: We're explicitly not storing the CometBFT app_version in the param store. + / It's stored instead in the x/upgrade store, with its own bump logic. + func (app *BaseApp) + + StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + + error { + if app.paramStore == nil { + return errors.New("cannot store consensus params with no params store set") + } + + return app.paramStore.Set(ctx, cp) + } + + / AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. + func (app *BaseApp) + + AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) + } + } + + / GetMaximumBlockGas gets the maximum gas from the consensus params. It panics + / if maximum block gas is less than negative one and returns zero if negative + / one. 
+ func (app *BaseApp) + + GetMaximumBlockGas(ctx sdk.Context) + + uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 + } + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) + } + } + + func (app *BaseApp) + + validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + + error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) + } + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight + } + + else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. + expectedHeight = lastBlockHeight + 1 + } + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) + } + + return nil + } + + / validateBasicTxMsgs executes basic validator calls for messages. 
+ func validateBasicTxMsgs(msgs []sdk.Msg) + + error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") + } + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue + } + if err := m.ValidateBasic(); err != nil { + return err + } + + } + + return nil + } + + func (app *BaseApp) + + getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState + } + } + + func (app *BaseApp) + + getBlockGasMeter(ctx sdk.Context) + + storetypes.GasMeter { + if app.disableBlockGasMeter { + return noopGasMeter{ + } + + } + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) + } + + return storetypes.NewInfiniteGasMeter() + } + + / retrieve the context for the tx w/ txBytes and other memoized values. + func (app *BaseApp) + + getContextForTx(mode execMode, txBytes []byte) + + sdk.Context { + app.mu.Lock() + + defer app.mu.Unlock() + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) + } + ctx := modeState.Context(). + WithTxBytes(txBytes). + WithGasMeter(storetypes.NewInfiniteGasMeter()) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithIsSigverifyTx(app.sigverifyTx) + + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) + } + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() + + ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate)) + } + + return ctx + } + + / cacheTxContext returns a new context based off of the provided context with + / a branched multi-store. 
+ func (app *BaseApp) + + cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]any{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), + }, + ), + ).(storetypes.CacheMultiStore) + } + + return ctx.WithMultiStore(msCache), msCache + } + + func (app *BaseApp) + + preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) { + var events []abci.Event + if app.preBlocker != nil { + ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager()) + + rsp, err := app.preBlocker(ctx, req) + if err != nil { + return nil, err + } + / rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed + / write the consensus parameters in store to context + if rsp.ConsensusParamsChanged { + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + / GasMeter must be set after we get a context with updated consensus params. 
+ gasMeter := app.getBlockGasMeter(ctx) + + ctx = ctx.WithBlockGasMeter(gasMeter) + + app.finalizeBlockState.SetContext(ctx) + } + + events = ctx.EventManager().ABCIEvents() + } + + return events, nil + } + + func (app *BaseApp) + + beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.Context()) + if err != nil { + return resp, err + } + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" + }, + ) + } + + resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) + } + + return resp, nil + } + + func (app *BaseApp) + + deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ + } + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + + telemetry.IncrCounter(1, "tx", resultStr) + + telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + + telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") + }() + + gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + + return resp + } + + resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), + } + + return resp + } + + / endBlock is an application-defined function that is called after transactions + / have been processed in FinalizeBlock. 
+ func (app *BaseApp) + + endBlock(_ context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.Context()) + if err != nil { + return endblock, err + } + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" + }, + ) + } + + eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + + endblock = eb + } + + return endblock, nil + } + + / runTx processes a transaction within a given execution mode, encoded transaction + / bytes, and the decoded transaction itself. All state transitions occur through + / a cached Context depending on the mode provided. State only gets persisted + / if all messages get executed successfully and the execution mode is DeliverTx. + / Note, gas execution info is always returned. A reference to a Result is + / returned if the tx does not run out of gas and if all the messages are valid + / and execute successfully. An error is returned otherwise. + / both txbytes and the decoded tx are passed to runTx to avoid the state machine encoding the tx and decoding the transaction twice + / passing the decoded tx to runTX is optional, it will be decoded if the tx is nil + func (app *BaseApp) + + runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") + } + + defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + + err, result = processRecovery(r, recoveryMW), nil + ctx.Logger().Error("panic recovered in runTx", "err", err) + } + + gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() + } + + }() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) + } + + } + + / If BlockGasMeter() + + panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). 
+ if mode == execModeFinalize { + defer consumeBlockGas() + } + + / if the transaction is not decoded, decode it here + if tx == nil { + tx, err = app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + GasUsed: 0, + GasWanted: 0 + }, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error()) + } + + } + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ + }, nil, nil, err + } + for _, msg := range msgs { + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return sdk.GasInfo{ + }, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) + } + + } + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + + anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + + newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + + is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. 
+ ctx = newCtx.WithMultiStore(ms) + } + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + if mode == execModeReCheck { + / if the ante handler fails on recheck, we want to remove the tx from the mempool + if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil { + return gInfo, nil, anteEvents, errors.Join(err, mempoolErr) + } + + } + + return gInfo, nil, nil, err + } + + msCache.Write() + + anteEvents = events.ToABCIEvents() + } + switch mode { + case execModeCheck: + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err + } + case execModeFinalize: + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) + } + + } + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) + } + + / Run optional postHandlers (should run regardless of the execution result). + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. 
+ postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + + newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if errPostHandler != nil { + if err == nil { + / when the msg was handled successfully, return the post handler error only + return gInfo, nil, anteEvents, errPostHandler + } + / otherwise append to the msg error so that we keep the original error code for better user experience + return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler) + } + + / we don't want runTx to panic if runMsgs has failed earlier + if result == nil { + result = &sdk.Result{ + } + + } + + result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) + } + if err == nil { + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + + msCache.Write() + } + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) + } + + } + + return gInfo, result, anteEvents, err + } + + / runMsgs iterates through a list of messages and executes them with the provided + / Context and execution mode. Messages will only be executed during simulation + / and DeliverTx. An error is returned if any single message fails or if a + / Handler does not exist for a given message route. Otherwise, a reference to a + / Result is returned. The caller must not commit state if an error is returned. + func (app *BaseApp) + + runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + + var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break + } + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) + } + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) + } + + / create message events + msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i) + } + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) + } + + events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + + has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) + } + + msgResponses = append(msgResponses, msgResponse) + } + + + } + + data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") + } + + return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, + }, nil + } + + / makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+ func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) { + return proto.Marshal(&sdk.TxMsgData{ + MsgResponses: msgResponses + }) + } + + func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) { + eventMsgName := sdk.MsgTypeURL(msg) + msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName)) + + / we set the signer attribute as the sender + signers, err := cdc.GetMsgV2Signers(msgV2) + if err != nil { + return nil, err + } + if len(signers) > 0 && signers[0] != nil { + addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0]) + if err != nil { + return nil, err + } + + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr)) + } + + / verify that events have no module attribute set + if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found { + if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" { + msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName)) + } + + } + + return sdk.Events{ + msgEvent + }.AppendEvents(events), nil + } + + / PrepareProposalVerifyTx performs transaction verification when a proposer is + / creating a block proposal during PrepareProposal. Any state committed to the + / PrepareProposal state internally will be discarded. will be + / returned if the transaction cannot be encoded. will be returned if + / the transaction is valid, otherwise will be returned. + func (app *BaseApp) + + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) { + bz, err := app.txEncoder(tx) + if err != nil { + return nil, err + } + + _, _, _, err = app.runTx(execModePrepareProposal, bz, tx) + if err != nil { + return nil, err + } + + return bz, nil + } + + / ProcessProposalVerifyTx performs transaction verification when receiving a + / block proposal during ProcessProposal. 
Any state committed to the + / ProcessProposal state internally will be discarded. will be + / returned if the transaction cannot be decoded. will be returned if + / the transaction is valid, otherwise will be returned. + func (app *BaseApp) + + ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) { + tx, err := app.txDecoder(txBz) + if err != nil { + return nil, err + } + + _, _, _, err = app.runTx(execModeProcessProposal, txBz, tx) + if err != nil { + return nil, err + } + + return tx, nil + } + + func (app *BaseApp) + + TxDecode(txBytes []byte) (sdk.Tx, error) { + return app.txDecoder(txBytes) + } + + func (app *BaseApp) + + TxEncode(tx sdk.Tx) ([]byte, error) { + return app.txEncoder(tx) + } + + func (app *BaseApp) + + StreamingManager() + + storetypes.StreamingManager { + return app.streamingManager + } + + / Close is called in start cmd to gracefully cleanup resources. + func (app *BaseApp) + + Close() + + error { + var errs []error + + / Close app.db (opened by cosmos-sdk/server/start.go call to openDB) + if app.db != nil { + app.logger.Info("Closing application.db") + if err := app.db.Close(); err != nil { + errs = append(errs, err) + } + + } + + / Close app.snapshotManager + / - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate) + / - which calls cosmos-sdk/server/util.go/GetSnapshotStore + / - which is passed to baseapp/options.go/SetSnapshot + / - to set app.snapshotManager = snapshots.NewManager + if app.snapshotManager != nil { + app.logger.Info("Closing snapshots/metadata.db") + if err := app.snapshotManager.Close(); err != nil { + errs = append(errs, err) + } + + } + + return errors.Join(errs...) + } + + / GetBaseApp returns the pointer to itself. + func (app *BaseApp) + + GetBaseApp() *BaseApp { + return app + } + ``` + + This function also resets the [main gas meter](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#main-gas-meter). 
+
+- Initialize the [block gas meter](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#block-gas-meter) with the `maxGas` limit. The `gas` consumed within the block cannot go above `maxGas`. This parameter is defined in the application's consensus parameters.
+
+- Run the application's [`beginBlocker()`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#beginblocker-and-endblocker), which mainly runs the [`BeginBlocker()`](/docs/sdk/v0.53/documentation/module-system/beginblock-endblock#beginblock) method of each of the modules.
+
+- Set the [`VoteInfos`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#voteinfo) of the application, i.e. the list of validators whose _precommit_ for the previous block was included by the proposer of the current block. This information is carried into the [`Context`](/docs/sdk/v0.53/documentation/application-framework/context) so that it can be used during transaction execution and EndBlock.
+
+#### Transaction Execution
+
+When the underlying consensus engine receives a block proposal, each transaction in the block needs to be processed by the application. To that end, the consensus engine sends the transactions to the application in the `FinalizeBlock` message, in sequential order.
+
+Before the first transaction of a given block is processed, a [volatile state](#state-updates) called `finalizeBlockState` is initialized during `FinalizeBlock`. This state is updated each time a transaction is processed, and committed to the [main state](#main-state) when the block is [committed](#commit), after which it is set to `nil`.
```go expandable
package baseapp

import (
	"context"
	"fmt"
	"maps"
	"math"
	"slices"
	"strconv"
	"sync"

	"github.com/cockroachdb/errors"
	abci "github.com/cometbft/cometbft/abci/types"
	"github.com/cometbft/cometbft/crypto/tmhash"
	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
	dbm "github.com/cosmos/cosmos-db"
	"github.com/cosmos/gogoproto/proto"
	protov2 "google.golang.org/protobuf/proto"

	"cosmossdk.io/core/header"
	errorsmod "cosmossdk.io/errors"
	"cosmossdk.io/log"
	"cosmossdk.io/store"
	storemetrics "cosmossdk.io/store/metrics"
	"cosmossdk.io/store/snapshots"
	storetypes "cosmossdk.io/store/types"

	"github.com/cosmos/cosmos-sdk/baseapp/oe"
	"github.com/cosmos/cosmos-sdk/codec"
	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
	servertypes "github.com/cosmos/cosmos-sdk/server/types"
	"github.com/cosmos/cosmos-sdk/telemetry"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
	"github.com/cosmos/cosmos-sdk/types/mempool"
	"github.com/cosmos/cosmos-sdk/types/msgservice"
)

type (
	execMode uint8

	/ StoreLoader defines a customizable function to control how we load the
	/ CommitMultiStore from disk. This is useful for state migration, when
	/ loading a datastore written with an older version of the software. In
	/ particular, if a module changed the substore key name (or removed a substore)
	/ between two versions of the software.
	StoreLoader func(ms storetypes.CommitMultiStore) error
)

const (
	execModeCheck               execMode = iota / Check a transaction
	execModeReCheck                             / Recheck a (pending) transaction after a commit
	execModeSimulate                            / Simulate a transaction
	execModePrepareProposal                     / Prepare a block proposal
	execModeProcessProposal                     / Process a block proposal
	execModeVoteExtension                       / Extend or verify a pre-commit vote
	execModeVerifyVoteExtension                 / Verify a vote extension
	execModeFinalize                            / Finalize a block proposal
)

var _ servertypes.ABCI = (*BaseApp)(nil)

/ BaseApp reflects the ABCI application implementation.
type BaseApp struct {
	/ initialized on creation
	mu          sync.Mutex                  / mu protects the fields below.
	logger      log.Logger
	name        string                      / application name from abci.BlockInfo
	db          dbm.DB                      / common DB backend
	cms         storetypes.CommitMultiStore / Main (uncached) state
	qms         storetypes.MultiStore       / Optional alternative multistore for querying only.
	storeLoader StoreLoader                 / function to handle store loading, may be overridden with SetStoreLoader()

	grpcQueryRouter   *GRPCQueryRouter  / router for redirecting gRPC query calls
	msgServiceRouter  *MsgServiceRouter / router for redirecting Msg service messages
	interfaceRegistry codectypes.InterfaceRegistry
	txDecoder         sdk.TxDecoder / unmarshal []byte into sdk.Tx
	txEncoder         sdk.TxEncoder / marshal sdk.Tx into []byte

	mempool     mempool.Mempool / application side mempool
	anteHandler sdk.AnteHandler / ante handler for fee and auth
	postHandler sdk.PostHandler / post handler, optional

	checkTxHandler  sdk.CheckTxHandler             / ABCI CheckTx handler
	initChainer     sdk.InitChainer                / ABCI InitChain handler
	preBlocker      sdk.PreBlocker                 / logic to run before BeginBlocker
	beginBlocker    sdk.BeginBlocker               / (legacy ABCI) BeginBlock handler
	endBlocker      sdk.EndBlocker                 / (legacy ABCI) EndBlock handler
	processProposal sdk.ProcessProposalHandler     / ABCI ProcessProposal handler
	prepareProposal sdk.PrepareProposalHandler     / ABCI PrepareProposal
	extendVote      sdk.ExtendVoteHandler          / ABCI ExtendVote handler
	verifyVoteExt   sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler

	prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState
	precommiter        sdk.Precommiter        / logic to run during commit using the deliverState

	addrPeerFilter sdk.PeerFilter / filter peers by address and port
	idPeerFilter   sdk.PeerFilter / filter peers by node ID
	fauxMerkleMode bool           / if true, IAVL MountStores uses MountStoresDB for simulation speed.
	sigverifyTx    bool           / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify.

	/ manages snapshots, i.e. dumps of app state at certain intervals
	snapshotManager *snapshots.Manager

	/ volatile states:
	/
	/ - checkState is set on InitChain and reset on Commit
	/ - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil
	/   on Commit.
	/
	/ - checkState: Used for CheckTx, which is set based on the previous block's
	/   state. This state is never committed.
	/
	/ - prepareProposalState: Used for PrepareProposal, which is set based on the
	/   previous block's state. This state is never committed. In case of multiple
	/   consensus rounds, the state is always reset to the previous block's state.
	/
	/ - processProposalState: Used for ProcessProposal, which is set based on the
	/   previous block's state. This state is never committed. In case of
	/   multiple rounds, the state is always reset to the previous block's state.
	/
	/ - finalizeBlockState: Used for FinalizeBlock, which is set based on the
	/   previous block's state. This state is committed.
	checkState           *state
	prepareProposalState *state
	processProposalState *state
	finalizeBlockState   *state

	/ An inter-block write-through cache provided to the context during the ABCI
	/ FinalizeBlock call.
	interBlockCache storetypes.MultiStorePersistentCache

	/ paramStore is used to query for ABCI consensus parameters from an
	/ application parameter store.
	paramStore ParamStore

	/ queryGasLimit defines the maximum gas for queries; unbounded if 0.
	queryGasLimit uint64

	/ The minimum gas prices a validator is willing to accept for processing a
	/ transaction. This is mainly used for DoS and spam prevention.
	minGasPrices sdk.DecCoins

	/ initialHeight is the initial height at which we start the BaseApp
	initialHeight int64

	/ flag for sealing options and parameters to a BaseApp
	sealed bool

	/ block height at which to halt the chain and gracefully shutdown
	haltHeight uint64

	/ minimum block time (in Unix seconds) at which to halt the chain and
	/ gracefully shutdown
	haltTime uint64

	/ minRetainBlocks defines the minimum block height offset from the current
	/ block being committed, such that all blocks past this offset are pruned
	/ from CometBFT. It is used as part of the process of determining the
	/ ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates
	/ that no blocks should be pruned.
	/
	/ Note: CometBFT block pruning is dependent on this parameter in conjunction
	/ with the unbonding (safety threshold) period, state pruning and state sync
	/ snapshot parameters to determine the correct minimum value of
	/ ResponseCommit.RetainHeight.
	minRetainBlocks uint64

	/ application's version string
	version string

	/ application's protocol version that increments on every upgrade
	/ if BaseApp is passed to the upgrade keeper's NewKeeper method.
	appVersion uint64

	/ recovery handler for app.runTx method
	runTxRecoveryMiddleware recoveryMiddleware

	/ trace set will return full stack traces for errors in ABCI Log field
	trace bool

	/ indexEvents defines the set of events in the form {eventType}.{attributeKey},
	/ which informs CometBFT what to index. If empty, all events will be indexed.
	indexEvents map[string]struct{}

	/ streamingManager for managing instances and configuration of ABCIListener services
	streamingManager storetypes.StreamingManager

	chainID string

	cdc codec.Codec

	/ optimisticExec contains the context required for Optimistic Execution,
	/ including the goroutine handling. This is experimental and must be enabled
	/ by developers.
	optimisticExec *oe.OptimisticExecution

	/ disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support
	/ when executing transactions in parallel.
	/ when disabled, the block gas meter in context is a noop one.
	/
	/ SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler.
	disableBlockGasMeter bool
}

/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a
/ variadic number of option functions, which act on the BaseApp to set
/ configuration choices.
func NewBaseApp(
	name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp),
) *BaseApp {
	app := &BaseApp{
		logger:           logger.With(log.ModuleKey, "baseapp"),
		name:             name,
		db:               db,
		cms:              store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store
		storeLoader:      DefaultStoreLoader,
		grpcQueryRouter:  NewGRPCQueryRouter(),
		msgServiceRouter: NewMsgServiceRouter(),
		txDecoder:        txDecoder,
		fauxMerkleMode:   false,
		sigverifyTx:      true,
		queryGasLimit:    math.MaxUint64,
	}

	for _, option := range options {
		option(app)
	}

	if app.mempool == nil {
		app.SetMempool(mempool.NoOpMempool{})
	}

	abciProposalHandler := NewDefaultProposalHandler(app.mempool, app)
	if app.prepareProposal == nil {
		app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler())
	}

	if app.processProposal == nil {
		app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler())
	}

	if app.extendVote == nil {
		app.SetExtendVoteHandler(NoOpExtendVote())
	}

	if app.verifyVoteExt == nil {
		app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler())
	}

	if app.interBlockCache != nil {
		app.cms.SetInterBlockCache(app.interBlockCache)
	}

	app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware()

	/ Initialize with an empty interface registry to avoid nil pointer dereference.
	/ Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic.
	app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry())

	protoFiles, err := proto.MergedRegistry()
	if err != nil {
		logger.Warn("error creating merged proto registry", "error", err)
	} else {
		err = msgservice.ValidateProtoAnnotations(protoFiles)
		if err != nil {
			/ Once we switch to using protoreflect-based antehandlers, we might
			/ want to panic here instead of logging a warning.
			logger.Warn("error validating merged proto registry annotations", "error", err)
		}
	}

	return app
}

/ Name returns the name of the BaseApp.
func (app *BaseApp) Name() string {
	return app.name
}

/ AppVersion returns the application's protocol version.
func (app *BaseApp) AppVersion() uint64 {
	return app.appVersion
}

/ Version returns the application's version string.
func (app *BaseApp) Version() string {
	return app.version
}

/ Logger returns the logger of the BaseApp.
func (app *BaseApp) Logger() log.Logger {
	return app.logger
}

/ Trace returns the boolean value for logging error stack traces.
func (app *BaseApp) Trace() bool {
	return app.trace
}

/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp.
func (app *BaseApp) MsgServiceRouter() *MsgServiceRouter {
	return app.msgServiceRouter
}

/ GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp.
func (app *BaseApp) GRPCQueryRouter() *GRPCQueryRouter {
	return app.grpcQueryRouter
}

/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp
/ multistore.
func (app *BaseApp) MountStores(keys ...storetypes.StoreKey) {
	for _, key := range keys {
		switch key.(type) {
		case *storetypes.KVStoreKey:
			if !app.fauxMerkleMode {
				app.MountStore(key, storetypes.StoreTypeIAVL)
			} else {
				/ StoreTypeDB doesn't do anything upon commit, and it doesn't
				/ retain history, but it's useful for faster simulation.
				app.MountStore(key, storetypes.StoreTypeDB)
			}

		case *storetypes.TransientStoreKey:
			app.MountStore(key, storetypes.StoreTypeTransient)

		case *storetypes.MemoryStoreKey:
			app.MountStore(key, storetypes.StoreTypeMemory)

		default:
			panic(fmt.Sprintf("Unrecognized store key type :%T", key))
		}
	}
}

/ MountKVStores mounts all IAVL or DB stores to the provided keys in the
/ BaseApp multistore.
func (app *BaseApp) MountKVStores(keys map[string]*storetypes.KVStoreKey) {
	for _, key := range keys {
		if !app.fauxMerkleMode {
			app.MountStore(key, storetypes.StoreTypeIAVL)
		} else {
			/ StoreTypeDB doesn't do anything upon commit, and it doesn't
			/ retain history, but it's useful for faster simulation.
			app.MountStore(key, storetypes.StoreTypeDB)
		}
	}
}

/ MountTransientStores mounts all transient stores to the provided keys in
/ the BaseApp multistore.
func (app *BaseApp) MountTransientStores(keys map[string]*storetypes.TransientStoreKey) {
	for _, key := range keys {
		app.MountStore(key, storetypes.StoreTypeTransient)
	}
}

/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal
/ commit multi-store.
func (app *BaseApp) MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) {
	skeys := slices.Sorted(maps.Keys(keys))
	for _, key := range skeys {
		memKey := keys[key]
		app.MountStore(memKey, storetypes.StoreTypeMemory)
	}
}

/ MountStore mounts a store to the provided key in the BaseApp multistore,
/ using the default DB.
func (app *BaseApp) MountStore(key storetypes.StoreKey, typ storetypes.StoreType) {
	app.cms.MountStoreWithDB(key, typ, nil)
}

/ LoadLatestVersion loads the latest application version. It will panic if
/ called more than once on a running BaseApp.
func (app *BaseApp) LoadLatestVersion() error {
	err := app.storeLoader(app.cms)
	if err != nil {
		return fmt.Errorf("failed to load latest version: %w", err)
	}

	return app.Init()
}

/ DefaultStoreLoader will be used by default and loads the latest version
func DefaultStoreLoader(ms storetypes.CommitMultiStore) error {
	return ms.LoadLatestVersion()
}

/ CommitMultiStore returns the root multi-store.
/ App constructor can use this to access the `cms`.
/ UNSAFE: must not be used during the abci life cycle.
func (app *BaseApp) CommitMultiStore() storetypes.CommitMultiStore {
	return app.cms
}

/ SnapshotManager returns the snapshot manager.
/ Applications use this to register extra extension snapshotters.
func (app *BaseApp) SnapshotManager() *snapshots.Manager {
	return app.snapshotManager
}

/ LoadVersion loads the BaseApp application version. It will panic if called
/ more than once on a running baseapp.
func (app *BaseApp) LoadVersion(version int64) error {
	app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n")
	err := app.cms.LoadVersion(version)
	if err != nil {
		return fmt.Errorf("failed to load version %d: %w", version, err)
	}

	return app.Init()
}

/ LastCommitID returns the last CommitID of the multistore.
func (app *BaseApp) LastCommitID() storetypes.CommitID {
	return app.cms.LastCommitID()
}

/ LastBlockHeight returns the last committed block height.
func (app *BaseApp) LastBlockHeight() int64 {
	return app.cms.LastCommitID().Version
}

/ ChainID returns the chainID of the app.
func (app *BaseApp) ChainID() string {
	return app.chainID
}

/ AnteHandler returns the AnteHandler of the app.
func (app *BaseApp) AnteHandler() sdk.AnteHandler {
	return app.anteHandler
}

/ Mempool returns the Mempool of the app.
func (app *BaseApp) Mempool() mempool.Mempool {
	return app.mempool
}

/ Init initializes the app. It seals the app, preventing any
/ further modifications. In addition, it validates the app against
/ the earlier provided settings. Returns an error if validation fails,
/ nil otherwise. Panics if the app is already sealed.
func (app *BaseApp) Init() error {
	if app.sealed {
		panic("cannot call initFromMainStore: baseapp already sealed")
	}

	if app.cms == nil {
		return errors.New("commit multi-store must not be nil")
	}

	emptyHeader := cmtproto.Header{ChainID: app.chainID}

	/ needed for the export command which inits from store but never calls initchain
	app.setState(execModeCheck, emptyHeader)
	app.Seal()

	return app.cms.GetPruning().Validate()
}

func (app *BaseApp) setMinGasPrices(gasPrices sdk.DecCoins) {
	app.minGasPrices = gasPrices
}

func (app *BaseApp) setHaltHeight(haltHeight uint64) {
	app.haltHeight = haltHeight
}

func (app *BaseApp) setHaltTime(haltTime uint64) {
	app.haltTime = haltTime
}

func (app *BaseApp) setMinRetainBlocks(minRetainBlocks uint64) {
	app.minRetainBlocks = minRetainBlocks
}

func (app *BaseApp) setInterBlockCache(cache storetypes.MultiStorePersistentCache) {
	app.interBlockCache = cache
}

func (app *BaseApp) setTrace(trace bool) {
	app.trace = trace
}

func (app *BaseApp) setIndexEvents(ie []string) {
	app.indexEvents = make(map[string]struct{})
	for _, e := range ie {
		app.indexEvents[e] = struct{}{}
	}
}

/ Seal seals a BaseApp. It prohibits any further modifications to a BaseApp.
func (app *BaseApp) Seal() {
	app.sealed = true
}

/ IsSealed returns true if the BaseApp is sealed and false otherwise.
func (app *BaseApp) IsSealed() bool {
	return app.sealed
}

/ setState sets the BaseApp's state for the corresponding mode with a branched
/ multi-store (i.e. a CacheMultiStore) and a new Context with the same
/ multi-store branch, and provided header.
func (app *BaseApp) setState(mode execMode, h cmtproto.Header) {
	ms := app.cms.CacheMultiStore()
	headerInfo := header.Info{
		Height:  h.Height,
		Time:    h.Time,
		ChainID: h.ChainID,
		AppHash: h.AppHash,
	}

	baseState := &state{
		ms: ms,
		ctx: sdk.NewContext(ms, h, false, app.logger).
			WithStreamingManager(app.streamingManager).
			WithHeaderInfo(headerInfo),
	}

	switch mode {
	case execModeCheck:
		baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices))
		app.checkState = baseState

	case execModePrepareProposal:
		app.prepareProposalState = baseState

	case execModeProcessProposal:
		app.processProposalState = baseState

	case execModeFinalize:
		app.finalizeBlockState = baseState

	default:
		panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode))
	}
}

/ SetCircuitBreaker sets the circuit breaker for the BaseApp.
/ The circuit breaker is checked on every message execution to verify if a transaction should be executed or not.
func (app *BaseApp) SetCircuitBreaker(cb CircuitBreaker) {
	if app.msgServiceRouter == nil {
		panic("cannot set circuit breaker with no msg service router set")
	}

	app.msgServiceRouter.SetCircuit(cb)
}

/ GetConsensusParams returns the current consensus parameters from the BaseApp's
/ ParamStore. If the BaseApp has no ParamStore defined, nil is returned.
func (app *BaseApp) GetConsensusParams(ctx sdk.Context) cmtproto.ConsensusParams {
	if app.paramStore == nil {
		return cmtproto.ConsensusParams{}
	}

	cp, err := app.paramStore.Get(ctx)
	if err != nil {
		/ This could happen while migrating from v0.45/v0.46 to v0.50, we should
		/ allow it to happen so during preblock the upgrade plan can be executed
		/ and the consensus params set for the first time in the new format.
		app.logger.Error("failed to get consensus params", "err", err)
		return cmtproto.ConsensusParams{}
	}

	return cp
}

/ StoreConsensusParams sets the consensus parameters to the BaseApp's param
/ store.
/
/ NOTE: We're explicitly not storing the CometBFT app_version in the param store.
/ It's stored instead in the x/upgrade store, with its own bump logic.
+func (app *BaseApp) + +StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) + +error { + if app.paramStore == nil { + return errors.New("cannot store consensus params with no params store set") +} + +return app.paramStore.Set(ctx, cp) +} + +/ AddRunTxRecoveryHandler adds custom app.runTx method panic handlers. +func (app *BaseApp) + +AddRunTxRecoveryHandler(handlers ...RecoveryHandler) { + for _, h := range handlers { + app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware) +} +} + +/ GetMaximumBlockGas gets the maximum gas from the consensus params. It panics +/ if maximum block gas is less than negative one and returns zero if negative +/ one. +func (app *BaseApp) + +GetMaximumBlockGas(ctx sdk.Context) + +uint64 { + cp := app.GetConsensusParams(ctx) + if cp.Block == nil { + return 0 +} + maxGas := cp.Block.MaxGas + switch { + case maxGas < -1: + panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas)) + case maxGas == -1: + return 0 + + default: + return uint64(maxGas) +} +} + +func (app *BaseApp) + +validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) + +error { + if req.Height < 1 { + return fmt.Errorf("invalid height: %d", req.Height) +} + lastBlockHeight := app.LastBlockHeight() + + / expectedHeight holds the expected height to validate + var expectedHeight int64 + if lastBlockHeight == 0 && app.initialHeight > 1 { + / In this case, we're validating the first block of the chain, i.e no + / previous commit. The height we're expecting is the initial height. + expectedHeight = app.initialHeight +} + +else { + / This case can mean two things: + / + / - Either there was already a previous commit in the store, in which + / case we increment the version from there. + / - Or there was no previous commit, in which case we start at version 1. 
+ expectedHeight = lastBlockHeight + 1 +} + if req.Height != expectedHeight { + return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight) +} + +return nil +} + +/ validateBasicTxMsgs executes basic validator calls for messages. +func validateBasicTxMsgs(msgs []sdk.Msg) + +error { + if len(msgs) == 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") +} + for _, msg := range msgs { + m, ok := msg.(sdk.HasValidateBasic) + if !ok { + continue +} + if err := m.ValidateBasic(); err != nil { + return err +} + +} + +return nil +} + +func (app *BaseApp) + +getState(mode execMode) *state { + switch mode { + case execModeFinalize: + return app.finalizeBlockState + case execModePrepareProposal: + return app.prepareProposalState + case execModeProcessProposal: + return app.processProposalState + + default: + return app.checkState +} +} + +func (app *BaseApp) + +getBlockGasMeter(ctx sdk.Context) + +storetypes.GasMeter { + if app.disableBlockGasMeter { + return noopGasMeter{ +} + +} + if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 { + return storetypes.NewGasMeter(maxGas) +} + +return storetypes.NewInfiniteGasMeter() +} + +/ retrieve the context for the tx w/ txBytes and other memoized values. +func (app *BaseApp) + +getContextForTx(mode execMode, txBytes []byte) + +sdk.Context { + app.mu.Lock() + +defer app.mu.Unlock() + modeState := app.getState(mode) + if modeState == nil { + panic(fmt.Sprintf("state is nil for mode %v", mode)) +} + ctx := modeState.Context(). + WithTxBytes(txBytes). 
+ WithGasMeter(storetypes.NewInfiniteGasMeter()) + / WithVoteInfos(app.voteInfos) / TODO: identify if this is needed + + ctx = ctx.WithIsSigverifyTx(app.sigverifyTx) + +ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + if mode == execModeReCheck { + ctx = ctx.WithIsReCheckTx(true) +} + if mode == execModeSimulate { + ctx, _ = ctx.CacheContext() + +ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate)) +} + +return ctx +} + +/ cacheTxContext returns a new context based off of the provided context with +/ a branched multi-store. +func (app *BaseApp) + +cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) { + ms := ctx.MultiStore() + msCache := ms.CacheMultiStore() + if msCache.TracingEnabled() { + msCache = msCache.SetTracingContext( + storetypes.TraceContext( + map[string]any{ + "txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)), +}, + ), + ).(storetypes.CacheMultiStore) +} + +return ctx.WithMultiStore(msCache), msCache +} + +func (app *BaseApp) + +preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) { + var events []abci.Event + if app.preBlocker != nil { + ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager()) + +rsp, err := app.preBlocker(ctx, req) + if err != nil { + return nil, err +} + / rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed + / write the consensus parameters in store to context + if rsp.ConsensusParamsChanged { + ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx)) + / GasMeter must be set after we get a context with updated consensus params. 
+ gasMeter := app.getBlockGasMeter(ctx) + +ctx = ctx.WithBlockGasMeter(gasMeter) + +app.finalizeBlockState.SetContext(ctx) +} + +events = ctx.EventManager().ABCIEvents() +} + +return events, nil +} + +func (app *BaseApp) + +beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) { + var ( + resp sdk.BeginBlock + err error + ) + if app.beginBlocker != nil { + resp, err = app.beginBlocker(app.finalizeBlockState.Context()) + if err != nil { + return resp, err +} + + / append BeginBlock attributes to all events in the EndBlock response + for i, event := range resp.Events { + resp.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "BeginBlock" +}, + ) +} + +resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents) +} + +return resp, nil +} + +func (app *BaseApp) + +deliverTx(tx []byte) *abci.ExecTxResult { + gInfo := sdk.GasInfo{ +} + resultStr := "successful" + + var resp *abci.ExecTxResult + + defer func() { + telemetry.IncrCounter(1, "tx", "count") + +telemetry.IncrCounter(1, "tx", resultStr) + +telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used") + +telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted") +}() + +gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil) + if err != nil { + resultStr = "failed" + resp = sdkerrors.ResponseExecTxResultWithEvents( + err, + gInfo.GasWanted, + gInfo.GasUsed, + sdk.MarkEventsToIndex(anteEvents, app.indexEvents), + app.trace, + ) + +return resp +} + +resp = &abci.ExecTxResult{ + GasWanted: int64(gInfo.GasWanted), + GasUsed: int64(gInfo.GasUsed), + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +} + +return resp +} + +/ endBlock is an application-defined function that is called after transactions +/ have been processed in FinalizeBlock. 
+func (app *BaseApp) + +endBlock(_ context.Context) (sdk.EndBlock, error) { + var endblock sdk.EndBlock + if app.endBlocker != nil { + eb, err := app.endBlocker(app.finalizeBlockState.Context()) + if err != nil { + return endblock, err +} + + / append EndBlock attributes to all events in the EndBlock response + for i, event := range eb.Events { + eb.Events[i].Attributes = append( + event.Attributes, + abci.EventAttribute{ + Key: "mode", + Value: "EndBlock" +}, + ) +} + +eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents) + +endblock = eb +} + +return endblock, nil +} + +/ runTx processes a transaction within a given execution mode, encoded transaction +/ bytes, and the decoded transaction itself. All state transitions occur through +/ a cached Context depending on the mode provided. State only gets persisted +/ if all messages get executed successfully and the execution mode is DeliverTx. +/ Note, gas execution info is always returned. A reference to a Result is +/ returned if the tx does not run out of gas and if all the messages are valid +/ and execute successfully. An error is returned otherwise. +/ both txbytes and the decoded tx are passed to runTx to avoid the state machine encoding the tx and decoding the transaction twice +/ passing the decoded tx to runTX is optional, it will be decoded if the tx is nil +func (app *BaseApp) + +runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) { + / NOTE: GasWanted should be returned by the AnteHandler. GasUsed is + / determined by the GasMeter. We need access to the context to get the gas + / meter, so we initialize upfront. 
+ var gasWanted uint64 + ctx := app.getContextForTx(mode, txBytes) + ms := ctx.MultiStore() + + / only run the tx if there is block gas remaining + if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() { + return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx") +} + +defer func() { + if r := recover(); r != nil { + recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware) + +err, result = processRecovery(r, recoveryMW), nil + ctx.Logger().Error("panic recovered in runTx", "err", err) +} + +gInfo = sdk.GasInfo{ + GasWanted: gasWanted, + GasUsed: ctx.GasMeter().GasConsumed() +} + +}() + blockGasConsumed := false + + / consumeBlockGas makes sure block gas is consumed at most once. It must + / happen after tx processing, and must be executed even if tx processing + / fails. Hence, it's execution is deferred. + consumeBlockGas := func() { + if !blockGasConsumed { + blockGasConsumed = true + ctx.BlockGasMeter().ConsumeGas( + ctx.GasMeter().GasConsumedToLimit(), "block gas meter", + ) +} + +} + + / If BlockGasMeter() + +panics it will be caught by the above recover and will + / return an error - in any case BlockGasMeter will consume gas past the limit. + / + / NOTE: consumeBlockGas must exist in a separate defer function from the + / general deferred recovery function to recover from consumeBlockGas as it'll + / be executed first (deferred statements are executed as stack). 
+ if mode == execModeFinalize { + defer consumeBlockGas() +} + + / if the transaction is not decoded, decode it here + if tx == nil { + tx, err = app.txDecoder(txBytes) + if err != nil { + return sdk.GasInfo{ + GasUsed: 0, + GasWanted: 0 +}, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error()) +} + +} + msgs := tx.GetMsgs() + if err := validateBasicTxMsgs(msgs); err != nil { + return sdk.GasInfo{ +}, nil, nil, err +} + for _, msg := range msgs { + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return sdk.GasInfo{ +}, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + +} + if app.anteHandler != nil { + var ( + anteCtx sdk.Context + msCache storetypes.CacheMultiStore + ) + + / Branch context before AnteHandler call in case it aborts. + / This is required for both CheckTx and DeliverTx. + / Ref: https://github.com/cosmos/cosmos-sdk/issues/2772 + / + / NOTE: Alternatively, we could require that AnteHandler ensures that + / writes do not happen if aborted/failed. This may have some + / performance benefits, but it'll be more difficult to get right. + anteCtx, msCache = app.cacheTxContext(ctx, txBytes) + +anteCtx = anteCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate) + if !newCtx.IsZero() { + / At this point, newCtx.MultiStore() + +is a store branch, or something else + / replaced by the AnteHandler. We want the original multistore. + / + / Also, in the case of the tx aborting, we need to track gas consumed via + / the instantiated gas meter in the AnteHandler, so we update the context + / prior to returning. 
+ ctx = newCtx.WithMultiStore(ms) +} + events := ctx.EventManager().Events() + + / GasMeter expected to be set in AnteHandler + gasWanted = ctx.GasMeter().Limit() + if err != nil { + if mode == execModeReCheck { + / if the ante handler fails on recheck, we want to remove the tx from the mempool + if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil { + return gInfo, nil, anteEvents, errors.Join(err, mempoolErr) +} + +} + +return gInfo, nil, nil, err +} + +msCache.Write() + +anteEvents = events.ToABCIEvents() +} + switch mode { + case execModeCheck: + err = app.mempool.Insert(ctx, tx) + if err != nil { + return gInfo, nil, anteEvents, err +} + case execModeFinalize: + err = app.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return gInfo, nil, anteEvents, + fmt.Errorf("failed to remove tx from mempool: %w", err) +} + +} + + / Create a new Context based off of the existing Context with a MultiStore branch + / in case message processing fails. At this point, the MultiStore + / is a branch of a branch. + runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) + + / Attempt to execute all messages and only update state if all messages pass + / and we're in DeliverTx. Note, runMsgs will never return a reference to a + / Result if any single message fails or does not have a registered Handler. + msgsV2, err := tx.GetMsgsV2() + if err == nil { + result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode) +} + + / Run optional postHandlers (should run regardless of the execution result). + / + / Note: If the postHandler fails, we also revert the runMsgs state. + if app.postHandler != nil { + / The runMsgCtx context currently contains events emitted by the ante handler. + / We clear this to correctly order events without duplicates. + / Note that the state is still preserved. 
+ postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager()) + +newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil) + if errPostHandler != nil { + if err == nil { + / when the msg was handled successfully, return the post handler error only + return gInfo, nil, anteEvents, errPostHandler +} + / otherwise append to the msg error so that we keep the original error code for better user experience + return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler) +} + + / we don't want runTx to panic if runMsgs has failed earlier + if result == nil { + result = &sdk.Result{ +} + +} + +result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...) +} + if err == nil { + if mode == execModeFinalize { + / When block gas exceeds, it'll panic and won't commit the cached store. + consumeBlockGas() + +msCache.Write() +} + if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) { + / append the events in the order of occurrence + result.Events = append(anteEvents, result.Events...) +} + +} + +return gInfo, result, anteEvents, err +} + +/ runMsgs iterates through a list of messages and executes them with the provided +/ Context and execution mode. Messages will only be executed during simulation +/ and DeliverTx. An error is returned if any single message fails or if a +/ Handler does not exist for a given message route. Otherwise, a reference to a +/ Result is returned. The caller must not commit state if an error is returned. +func (app *BaseApp) + +runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) { + events := sdk.EmptyEvents() + +var msgResponses []*codectypes.Any + + / NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter. 
+ for i, msg := range msgs { + if mode != execModeFinalize && mode != execModeSimulate { + break +} + handler := app.msgServiceRouter.Handler(msg) + if handler == nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg) +} + + / ADR 031 request type routing + msgResult, err := handler(ctx, msg) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i) +} + + / create message events + msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i]) + if err != nil { + return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i) +} + + / append message events and data + / + / Note: Each message result's data must be length-prefixed in order to + / separate each result. + for j, event := range msgEvents { + / append message index to all events + msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i))) +} + +events = events.AppendEvents(msgEvents) + + / Each individual sdk.Result that went through the MsgServiceRouter + / (which should represent 99% of the Msgs now, since everyone should + / be using protobuf Msgs) + +has exactly one Msg response, set inside + / `WrapServiceResult`. We take that Msg response, and aggregate it + / into an array. + if len(msgResult.MsgResponses) > 0 { + msgResponse := msgResult.MsgResponses[0] + if msgResponse == nil { + return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg)) +} + +msgResponses = append(msgResponses, msgResponse) +} + + +} + +data, err := makeABCIData(msgResponses) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to marshal tx data") +} + +return &sdk.Result{ + Data: data, + Events: events.ToABCIEvents(), + MsgResponses: msgResponses, +}, nil +} + +/ makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx. 
+func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
+	return proto.Marshal(&sdk.TxMsgData{
+	MsgResponses: msgResponses
+})
+}
+
+func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) {
+	eventMsgName := sdk.MsgTypeURL(msg)
+	msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))
+
+	/ we set the signer attribute as the sender
+	signers, err := cdc.GetMsgV2Signers(msgV2)
+	if err != nil {
+		return nil, err
+}
+	if len(signers) > 0 && signers[0] != nil {
+		addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
+		if err != nil {
+			return nil, err
+}
+
+msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
+}
+
+	/ verify that events have no module attribute set
+	if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
+		if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
+			msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
+}
+
+}
+
+return sdk.Events{
+	msgEvent
+}.AppendEvents(events), nil
+}
+
+/ PrepareProposalVerifyTx performs transaction verification when a proposer is
+/ creating a block proposal during PrepareProposal. Any state committed to the
+/ PrepareProposal state internally will be discarded. An error will be
+/ returned if the transaction cannot be encoded. The encoded transaction bytes
+/ will be returned if the transaction is valid, otherwise an error will be returned.
+func (app *BaseApp)
+
+PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
+	bz, err := app.txEncoder(tx)
+	if err != nil {
+		return nil, err
+}
+
+	_, _, _, err = app.runTx(execModePrepareProposal, bz, tx)
+	if err != nil {
+		return nil, err
+}
+
+return bz, nil
+}
+
+/ ProcessProposalVerifyTx performs transaction verification when receiving a
+/ block proposal during ProcessProposal. 
Any state committed to the
+/ ProcessProposal state internally will be discarded. An error will be
+/ returned if the transaction cannot be decoded. The decoded transaction
+/ will be returned if the transaction is valid, otherwise an error will be returned.
+func (app *BaseApp)
+
+ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
+	tx, err := app.txDecoder(txBz)
+	if err != nil {
+		return nil, err
+}
+
+	_, _, _, err = app.runTx(execModeProcessProposal, txBz, tx)
+	if err != nil {
+		return nil, err
+}
+
+return tx, nil
+}
+
+func (app *BaseApp)
+
+TxDecode(txBytes []byte) (sdk.Tx, error) {
+	return app.txDecoder(txBytes)
+}
+
+func (app *BaseApp)
+
+TxEncode(tx sdk.Tx) ([]byte, error) {
+	return app.txEncoder(tx)
+}
+
+func (app *BaseApp)
+
+StreamingManager()
+
+storetypes.StreamingManager {
+	return app.streamingManager
+}
+
+/ Close is called in start cmd to gracefully cleanup resources.
+func (app *BaseApp)
+
+Close()
+
+error {
+	var errs []error
+
+	/ Close app.db (opened by cosmos-sdk/server/start.go call to openDB)
+	if app.db != nil {
+		app.logger.Info("Closing application.db")
+		if err := app.db.Close(); err != nil {
+			errs = append(errs, err)
+}
+
+}
+
+	/ Close app.snapshotManager
+	/ - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate)
+	/ - which calls cosmos-sdk/server/util.go/GetSnapshotStore
+	/ - which is passed to baseapp/options.go/SetSnapshot
+	/ - to set app.snapshotManager = snapshots.NewManager
+	if app.snapshotManager != nil {
+		app.logger.Info("Closing snapshots/metadata.db")
+		if err := app.snapshotManager.Close(); err != nil {
+			errs = append(errs, err)
+}
+
+}
+
+return errors.Join(errs...)
+}
+
+/ GetBaseApp returns the pointer to itself.
+func (app *BaseApp)
+
+GetBaseApp() *BaseApp {
+	return app
+}
+```
+
+Transaction execution within `FinalizeBlock` performs the **exact same steps as `CheckTx`**, with a little caveat at step 3 and the addition of a fifth step:
+
+1. 
The `AnteHandler` does **not** check that the transaction's `gas-prices` is sufficient. That is because the `min-gas-prices` value that `gas-prices` is checked against is local to the node, and therefore what is enough for one full-node might not be for another. This means that the proposer can potentially include transactions for free, although they are not incentivised to do so, as they earn a bonus on the total fee of the block they propose.
+2. For each `sdk.Msg` in the transaction, route to the appropriate module's Protobuf [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services). Additional _stateful_ checks are performed, and the branched multistore held in `finalizeBlockState`'s `context` is updated by the module's `keeper`. If the `Msg` service returns successfully, the branched multistore held in `context` is written to `finalizeBlockState` `CacheMultiStore`.
+
+During the additional fifth step outlined in (2), each read/write to the store increases the value of `GasConsumed`. You can find the default cost of each operation below:
+
+```go expandable
+package types
+
+import (
+
+	"fmt"
+	"math"
+)
+
+/ Gas consumption descriptors.
+const (
+	GasIterNextCostFlatDesc = "IterNextFlat"
+	GasValuePerByteDesc = "ValuePerByte"
+	GasWritePerByteDesc = "WritePerByte"
+	GasReadPerByteDesc = "ReadPerByte"
+	GasWriteCostFlatDesc = "WriteFlat"
+	GasReadCostFlatDesc = "ReadFlat"
+	GasHasDesc = "Has"
+	GasDeleteDesc = "Delete"
+)
+
+/ Gas measured by the SDK
+type Gas = uint64
+
+/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a
+/ negative gas consumed amount.
+type ErrorNegativeGasConsumed struct {
+	Descriptor string
+}
+
+/ ErrorOutOfGas defines an error thrown when an action results in out of gas.
+type ErrorOutOfGas struct {
+	Descriptor string
+}
+
+/ ErrorGasOverflow defines an error thrown when an action results in gas consumption
+/ unsigned integer overflow. 
+type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit.
+func (g *infiniteGasMeter)
+
+GasConsumedToLimit()
+
+Gas {
+	return g.consumed
+}
+
+/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter.
+func (g *infiniteGasMeter)
+
+GasRemaining()
+
+Gas {
+	return math.MaxUint64
+}
+
+/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter.
+func (g *infiniteGasMeter)
+
+Limit()
+
+Gas {
+	return math.MaxUint64
+}
+
+/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit.
+func (g *infiniteGasMeter)
+
+ConsumeGas(amount Gas, descriptor string) {
+	var overflow bool
+	/ TODO: Should we set the consumed field after overflow checking?
+	g.consumed, overflow = addUint64Overflow(g.consumed, amount)
+	if overflow {
+		panic(ErrorGasOverflow{
+	descriptor
+})
+}
+}
+
+/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the
+/ gas consumed, the function will panic.
+/
+/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that
+/ EVM-compatible chains can fully support the go-ethereum StateDb interface.
+/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference.
+func (g *infiniteGasMeter)
+
+RefundGas(amount Gas, descriptor string) {
+	if g.consumed < amount {
+		panic(ErrorNegativeGasConsumed{
+	Descriptor: descriptor
+})
+}
+
+g.consumed -= amount
+}
+
+/ IsPastLimit returns false since the gas limit is not confined.
+func (g *infiniteGasMeter)
+
+IsPastLimit()
+
+bool {
+	return false
+}
+
+/ IsOutOfGas returns false since the gas limit is not confined.
+func (g *infiniteGasMeter)
+
+IsOutOfGas()
+
+bool {
+	return false
+}
+
+/ String returns the InfiniteGasMeter's gas consumed. 
+func (g *infiniteGasMeter)
+
+String()
+
+string {
+	return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed)
+}
+
+/ GasConfig defines gas cost for each operation on KVStores
+type GasConfig struct {
+	HasCost Gas
+	DeleteCost Gas
+	ReadCostFlat Gas
+	ReadCostPerByte Gas
+	WriteCostFlat Gas
+	WriteCostPerByte Gas
+	IterNextCostFlat Gas
+}
+
+/ KVGasConfig returns a default gas config for KVStores.
+func KVGasConfig()
+
+GasConfig {
+	return GasConfig{
+	HasCost: 1000,
+	DeleteCost: 1000,
+	ReadCostFlat: 1000,
+	ReadCostPerByte: 3,
+	WriteCostFlat: 2000,
+	WriteCostPerByte: 30,
+	IterNextCostFlat: 30,
+}
+}
+
+/ TransientGasConfig returns a default gas config for TransientStores.
+func TransientGasConfig()
+
+GasConfig {
+	return GasConfig{
+	HasCost: 100,
+	DeleteCost: 100,
+	ReadCostFlat: 100,
+	ReadCostPerByte: 0,
+	WriteCostFlat: 200,
+	WriteCostPerByte: 3,
+	IterNextCostFlat: 3,
+}
+}
+```
+
+At any point, if `GasConsumed > GasWanted`, the function returns with `Code != 0` and the execution fails.
+
+Each transaction returns a response to the underlying consensus engine of type [`abci.ExecTxResult`](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci%2B%2B_methods.md#exectxresult). The response contains:
+
+- `Code (uint32)`: Response Code. `0` if successful.
+- `Data ([]byte)`: Result bytes, if any.
+- `Log (string)`: The output of the application's logger. May be non-deterministic.
+- `Info (string)`: Additional information. May be non-deterministic.
+- `GasWanted (int64)`: Amount of gas requested for the transaction. It is provided by users when they generate the transaction.
+- `GasUsed (int64)`: Amount of gas consumed by the transaction. During transaction execution, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction, and by adding gas each time a read/write to the store occurs.
+- `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (e.g. 
by account). See [`event`s](/docs/sdk/v0.53/api-reference/events-streaming/events) for more.
+- `Codespace (string)`: Namespace for the Code.
+
+#### EndBlock
+
+EndBlock is run after transaction execution completes. It allows developers to execute logic at the end of each block. In the Cosmos SDK, the bulk of the `EndBlock()` method runs the application's `EndBlocker()`, which mainly runs the `EndBlocker()` method of each of the application's modules.
+
+```go expandable
+package baseapp
+
+import (
+
+	"context"
+	"fmt"
+	"maps"
+	"math"
+	"slices"
+	"strconv"
+	"sync"
+	"github.com/cockroachdb/errors"
+	abci "github.com/cometbft/cometbft/abci/types"
+	"github.com/cometbft/cometbft/crypto/tmhash"
+	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+	dbm "github.com/cosmos/cosmos-db"
+	"github.com/cosmos/gogoproto/proto"
+	protov2 "google.golang.org/protobuf/proto"
+	"cosmossdk.io/core/header"
+	errorsmod "cosmossdk.io/errors"
+	"cosmossdk.io/log"
+	"cosmossdk.io/store"
+	storemetrics "cosmossdk.io/store/metrics"
+	"cosmossdk.io/store/snapshots"
+	storetypes "cosmossdk.io/store/types"
+	"github.com/cosmos/cosmos-sdk/baseapp/oe"
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	servertypes "github.com/cosmos/cosmos-sdk/server/types"
+	"github.com/cosmos/cosmos-sdk/telemetry"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	"github.com/cosmos/cosmos-sdk/types/mempool"
+	"github.com/cosmos/cosmos-sdk/types/msgservice"
+)
+
+type (
+	execMode uint8
+
+	/ StoreLoader defines a customizable function to control how we load the
+	/ CommitMultiStore from disk. This is useful for state migration, when
+	/ loading a datastore written with an older version of the software. In
+	/ particular, if a module changed the substore key name (or removed a substore)
+	/ between two versions of the software. 
+ StoreLoader func(ms storetypes.CommitMultiStore) + +error +) + +const ( + execModeCheck execMode = iota / Check a transaction + execModeReCheck / Recheck a (pending) + +transaction after a commit + execModeSimulate / Simulate a transaction + execModePrepareProposal / Prepare a block proposal + execModeProcessProposal / Process a block proposal + execModeVoteExtension / Extend or verify a pre-commit vote + execModeVerifyVoteExtension / Verify a vote extension + execModeFinalize / Finalize a block proposal +) + +var _ servertypes.ABCI = (*BaseApp)(nil) + +/ BaseApp reflects the ABCI application implementation. +type BaseApp struct { + / initialized on creation + mu sync.Mutex / mu protects the fields below. + logger log.Logger + name string / application name from abci.BlockInfo + db dbm.DB / common DB backend + cms storetypes.CommitMultiStore / Main (uncached) + +state + qms storetypes.MultiStore / Optional alternative multistore for querying only. + storeLoader StoreLoader / function to handle store loading, may be overridden with SetStoreLoader() + +grpcQueryRouter *GRPCQueryRouter / router for redirecting gRPC query calls + msgServiceRouter *MsgServiceRouter / router for redirecting Msg service messages + interfaceRegistry codectypes.InterfaceRegistry + txDecoder sdk.TxDecoder / unmarshal []byte into sdk.Tx + txEncoder sdk.TxEncoder / marshal sdk.Tx into []byte + + mempool mempool.Mempool / application side mempool + anteHandler sdk.AnteHandler / ante handler for fee and auth + postHandler sdk.PostHandler / post handler, optional + + checkTxHandler sdk.CheckTxHandler / ABCI CheckTx handler + initChainer sdk.InitChainer / ABCI InitChain handler + preBlocker sdk.PreBlocker / logic to run before BeginBlocker + beginBlocker sdk.BeginBlocker / (legacy ABCI) + +BeginBlock handler + endBlocker sdk.EndBlocker / (legacy ABCI) + +EndBlock handler + processProposal sdk.ProcessProposalHandler / ABCI ProcessProposal handler + prepareProposal sdk.PrepareProposalHandler / 
ABCI PrepareProposal + extendVote sdk.ExtendVoteHandler / ABCI ExtendVote handler + verifyVoteExt sdk.VerifyVoteExtensionHandler / ABCI VerifyVoteExtension handler + prepareCheckStater sdk.PrepareCheckStater / logic to run during commit using the checkState + precommiter sdk.Precommiter / logic to run during commit using the deliverState + + addrPeerFilter sdk.PeerFilter / filter peers by address and port + idPeerFilter sdk.PeerFilter / filter peers by node ID + fauxMerkleMode bool / if true, IAVL MountStores uses MountStoresDB for simulation speed. + sigverifyTx bool / in the simulation test, since the account does not have a private key, we have to ignore the tx sigverify. + + / manages snapshots, i.e. dumps of app state at certain intervals + snapshotManager *snapshots.Manager + + / volatile states: + / + / - checkState is set on InitChain and reset on Commit + / - finalizeBlockState is set on InitChain and FinalizeBlock and set to nil + / on Commit. + / + / - checkState: Used for CheckTx, which is set based on the previous block's + / state. This state is never committed. + / + / - prepareProposalState: Used for PrepareProposal, which is set based on the + / previous block's state. This state is never committed. In case of multiple + / consensus rounds, the state is always reset to the previous block's state. + / + / - processProposalState: Used for ProcessProposal, which is set based on the + / the previous block's state. This state is never committed. In case of + / multiple rounds, the state is always reset to the previous block's state. + / + / - finalizeBlockState: Used for FinalizeBlock, which is set based on the + / previous block's state. This state is committed. + checkState *state + prepareProposalState *state + processProposalState *state + finalizeBlockState *state + + / An inter-block write-through cache provided to the context during the ABCI + / FinalizeBlock call. 
+ interBlockCache storetypes.MultiStorePersistentCache + + / paramStore is used to query for ABCI consensus parameters from an + / application parameter store. + paramStore ParamStore + + / queryGasLimit defines the maximum gas for queries; unbounded if 0. + queryGasLimit uint64 + + / The minimum gas prices a validator is willing to accept for processing a + / transaction. This is mainly used for DoS and spam prevention. + minGasPrices sdk.DecCoins + + / initialHeight is the initial height at which we start the BaseApp + initialHeight int64 + + / flag for sealing options and parameters to a BaseApp + sealed bool + + / block height at which to halt the chain and gracefully shutdown + haltHeight uint64 + + / minimum block time (in Unix seconds) + +at which to halt the chain and gracefully shutdown + haltTime uint64 + + / minRetainBlocks defines the minimum block height offset from the current + / block being committed, such that all blocks past this offset are pruned + / from CometBFT. It is used as part of the process of determining the + / ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates + / that no blocks should be pruned. + / + / Note: CometBFT block pruning is dependant on this parameter in conjunction + / with the unbonding (safety threshold) + +period, state pruning and state sync + / snapshot parameters to determine the correct minimum value of + / ResponseCommit.RetainHeight. + minRetainBlocks uint64 + + / application's version string + version string + + / application's protocol version that increments on every upgrade + / if BaseApp is passed to the upgrade keeper's NewKeeper method. + appVersion uint64 + + / recovery handler for app.runTx method + runTxRecoveryMiddleware recoveryMiddleware + + / trace set will return full stack traces for errors in ABCI Log field + trace bool + + / indexEvents defines the set of events in the form { + eventType +}.{ + attributeKey +}, + / which informs CometBFT what to index. 
If empty, all events will be indexed. + indexEvents map[string]struct{ +} + + / streamingManager for managing instances and configuration of ABCIListener services + streamingManager storetypes.StreamingManager + + chainID string + + cdc codec.Codec + + / optimisticExec contains the context required for Optimistic Execution, + / including the goroutine handling.This is experimental and must be enabled + / by developers. + optimisticExec *oe.OptimisticExecution + + / disableBlockGasMeter will disable the block gas meter if true, block gas meter is tricky to support + / when executing transactions in parallel. + / when disabled, the block gas meter in context is a noop one. + / + / SAFETY: it's safe to do if validators validate the total gas wanted in the `ProcessProposal`, which is the case in the default handler. + disableBlockGasMeter bool +} + +/ NewBaseApp returns a reference to an initialized BaseApp. It accepts a +/ variadic number of option functions, which act on the BaseApp to set +/ configuration choices. 
+func NewBaseApp( + name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp), +) *BaseApp { + app := &BaseApp{ + logger: logger.With(log.ModuleKey, "baseapp"), + name: name, + db: db, + cms: store.NewCommitMultiStore(db, logger, storemetrics.NewNoOpMetrics()), / by default we use a no-op metric gather in store + storeLoader: DefaultStoreLoader, + grpcQueryRouter: NewGRPCQueryRouter(), + msgServiceRouter: NewMsgServiceRouter(), + txDecoder: txDecoder, + fauxMerkleMode: false, + sigverifyTx: true, + queryGasLimit: math.MaxUint64, +} + for _, option := range options { + option(app) +} + if app.mempool == nil { + app.SetMempool(mempool.NoOpMempool{ +}) +} + abciProposalHandler := NewDefaultProposalHandler(app.mempool, app) + if app.prepareProposal == nil { + app.SetPrepareProposal(abciProposalHandler.PrepareProposalHandler()) +} + if app.processProposal == nil { + app.SetProcessProposal(abciProposalHandler.ProcessProposalHandler()) +} + if app.extendVote == nil { + app.SetExtendVoteHandler(NoOpExtendVote()) +} + if app.verifyVoteExt == nil { + app.SetVerifyVoteExtensionHandler(NoOpVerifyVoteExtensionHandler()) +} + if app.interBlockCache != nil { + app.cms.SetInterBlockCache(app.interBlockCache) +} + +app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware() + + / Initialize with an empty interface registry to avoid nil pointer dereference. + / Unless SetInterfaceRegistry is called with an interface registry with proper address codecs baseapp will panic. + app.cdc = codec.NewProtoCodec(codectypes.NewInterfaceRegistry()) + +protoFiles, err := proto.MergedRegistry() + if err != nil { + logger.Warn("error creating merged proto registry", "error", err) +} + +else { + err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. 
+ logger.Warn("error validating merged proto registry annotations", "error", err) +} + +} + +return app +} + +/ Name returns the name of the BaseApp. +func (app *BaseApp) + +Name() + +string { + return app.name +} + +/ AppVersion returns the application's protocol version. +func (app *BaseApp) + +AppVersion() + +uint64 { + return app.appVersion +} + +/ Version returns the application's version string. +func (app *BaseApp) + +Version() + +string { + return app.version +} + +/ Logger returns the logger of the BaseApp. +func (app *BaseApp) + +Logger() + +log.Logger { + return app.logger +} + +/ Trace returns the boolean value for logging error stack traces. +func (app *BaseApp) + +Trace() + +bool { + return app.trace +} + +/ MsgServiceRouter returns the MsgServiceRouter of a BaseApp. +func (app *BaseApp) + +MsgServiceRouter() *MsgServiceRouter { + return app.msgServiceRouter +} + +/ GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp. +func (app *BaseApp) + +GRPCQueryRouter() *GRPCQueryRouter { + return app.grpcQueryRouter +} + +/ MountStores mounts all IAVL or DB stores to the provided keys in the BaseApp +/ multistore. +func (app *BaseApp) + +MountStores(keys ...storetypes.StoreKey) { + for _, key := range keys { + switch key.(type) { + case *storetypes.KVStoreKey: + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + case *storetypes.TransientStoreKey: + app.MountStore(key, storetypes.StoreTypeTransient) + case *storetypes.MemoryStoreKey: + app.MountStore(key, storetypes.StoreTypeMemory) + +default: + panic(fmt.Sprintf("Unrecognized store key type :%T", key)) +} + +} +} + +/ MountKVStores mounts all IAVL or DB stores to the provided keys in the +/ BaseApp multistore. 
+func (app *BaseApp) + +MountKVStores(keys map[string]*storetypes.KVStoreKey) { + for _, key := range keys { + if !app.fauxMerkleMode { + app.MountStore(key, storetypes.StoreTypeIAVL) +} + +else { + / StoreTypeDB doesn't do anything upon commit, and it doesn't + / retain history, but it's useful for faster simulation. + app.MountStore(key, storetypes.StoreTypeDB) +} + +} +} + +/ MountTransientStores mounts all transient stores to the provided keys in +/ the BaseApp multistore. +func (app *BaseApp) + +MountTransientStores(keys map[string]*storetypes.TransientStoreKey) { + for _, key := range keys { + app.MountStore(key, storetypes.StoreTypeTransient) +} +} + +/ MountMemoryStores mounts all in-memory KVStores with the BaseApp's internal +/ commit multi-store. +func (app *BaseApp) + +MountMemoryStores(keys map[string]*storetypes.MemoryStoreKey) { + skeys := slices.Sorted(maps.Keys(keys)) + for _, key := range skeys { + memKey := keys[key] + app.MountStore(memKey, storetypes.StoreTypeMemory) +} +} + +/ MountStore mounts a store to the provided key in the BaseApp multistore, +/ using the default DB. +func (app *BaseApp) + +MountStore(key storetypes.StoreKey, typ storetypes.StoreType) { + app.cms.MountStoreWithDB(key, typ, nil) +} + +/ LoadLatestVersion loads the latest application version. It will panic if +/ called more than once on a running BaseApp. +func (app *BaseApp) + +LoadLatestVersion() + +error { + err := app.storeLoader(app.cms) + if err != nil { + return fmt.Errorf("failed to load latest version: %w", err) +} + +return app.Init() +} + +/ DefaultStoreLoader will be used by default and loads the latest version +func DefaultStoreLoader(ms storetypes.CommitMultiStore) + +error { + return ms.LoadLatestVersion() +} + +/ CommitMultiStore returns the root multi-store. +/ App constructor can use this to access the `cms`. +/ UNSAFE: must not be used during the abci life cycle. 
+func (app *BaseApp) + +CommitMultiStore() + +storetypes.CommitMultiStore { + return app.cms +} + +/ SnapshotManager returns the snapshot manager. +/ application use this to register extra extension snapshotters. +func (app *BaseApp) + +SnapshotManager() *snapshots.Manager { + return app.snapshotManager +} + +/ LoadVersion loads the BaseApp application version. It will panic if called +/ more than once on a running baseapp. +func (app *BaseApp) + +LoadVersion(version int64) + +error { + app.logger.Info("NOTICE: this could take a long time to migrate IAVL store to fastnode if you enable Fast Node.\n") + err := app.cms.LoadVersion(version) + if err != nil { + return fmt.Errorf("failed to load version %d: %w", version, err) +} + +return app.Init() +} + +/ LastCommitID returns the last CommitID of the multistore. +func (app *BaseApp) + +LastCommitID() + +storetypes.CommitID { + return app.cms.LastCommitID() +} + +/ LastBlockHeight returns the last committed block height. +func (app *BaseApp) + +LastBlockHeight() + +int64 { + return app.cms.LastCommitID().Version +} + +/ ChainID returns the chainID of the app. +func (app *BaseApp) + +ChainID() + +string { + return app.chainID +} + +/ AnteHandler returns the AnteHandler of the app. +func (app *BaseApp) + +AnteHandler() + +sdk.AnteHandler { + return app.anteHandler +} + +/ Mempool returns the Mempool of the app. +func (app *BaseApp) + +Mempool() + +mempool.Mempool { + return app.mempool +} + +/ Init initializes the app. It seals the app, preventing any +/ further modifications. In addition, it validates the app against +/ the earlier provided settings. Returns an error if validation fails. +/ nil otherwise. Panics if the app is already sealed. 
func (app *BaseApp) Init() error {
	if app.sealed {
		panic("cannot call initFromMainStore: baseapp already sealed")
	}

	if app.cms == nil {
		return errors.New("commit multi-store must not be nil")
	}

	emptyHeader := cmtproto.Header{ChainID: app.chainID}

	// needed for the export command which inits from store but never calls initchain
	app.setState(execModeCheck, emptyHeader)
	app.Seal()

	return app.cms.GetPruning().Validate()
}

func (app *BaseApp) setMinGasPrices(gasPrices sdk.DecCoins) {
	app.minGasPrices = gasPrices
}

func (app *BaseApp) setHaltHeight(haltHeight uint64) {
	app.haltHeight = haltHeight
}

func (app *BaseApp) setHaltTime(haltTime uint64) {
	app.haltTime = haltTime
}

func (app *BaseApp) setMinRetainBlocks(minRetainBlocks uint64) {
	app.minRetainBlocks = minRetainBlocks
}

func (app *BaseApp) setInterBlockCache(cache storetypes.MultiStorePersistentCache) {
	app.interBlockCache = cache
}

func (app *BaseApp) setTrace(trace bool) {
	app.trace = trace
}

func (app *BaseApp) setIndexEvents(ie []string) {
	app.indexEvents = make(map[string]struct{})
	for _, e := range ie {
		app.indexEvents[e] = struct{}{}
	}
}

// Seal seals a BaseApp. It prohibits any further modifications to a BaseApp.
func (app *BaseApp) Seal() {
	app.sealed = true
}

// IsSealed returns true if the BaseApp is sealed and false otherwise.
func (app *BaseApp) IsSealed() bool {
	return app.sealed
}

// setState sets the BaseApp's state for the corresponding mode with a branched
// multi-store (i.e. a CacheMultiStore) and a new Context with the same
// multi-store branch, and provided header.
func (app *BaseApp) setState(mode execMode, h cmtproto.Header) {
	ms := app.cms.CacheMultiStore()
	headerInfo := header.Info{
		Height:  h.Height,
		Time:    h.Time,
		ChainID: h.ChainID,
		AppHash: h.AppHash,
	}
	baseState := &state{
		ms: ms,
		ctx: sdk.NewContext(ms, h, false, app.logger).
			WithStreamingManager(app.streamingManager).
			WithHeaderInfo(headerInfo),
	}

	switch mode {
	case execModeCheck:
		baseState.SetContext(baseState.Context().WithIsCheckTx(true).WithMinGasPrices(app.minGasPrices))
		app.checkState = baseState

	case execModePrepareProposal:
		app.prepareProposalState = baseState

	case execModeProcessProposal:
		app.processProposalState = baseState

	case execModeFinalize:
		app.finalizeBlockState = baseState

	default:
		panic(fmt.Sprintf("invalid runTxMode for setState: %d", mode))
	}
}

// SetCircuitBreaker sets the circuit breaker for the BaseApp.
// The circuit breaker is checked on every message execution to verify if a transaction should be executed or not.
func (app *BaseApp) SetCircuitBreaker(cb CircuitBreaker) {
	if app.msgServiceRouter == nil {
		panic("cannot set circuit breaker with no msg service router set")
	}

	app.msgServiceRouter.SetCircuit(cb)
}

// GetConsensusParams returns the current consensus parameters from the BaseApp's
// ParamStore. If the BaseApp has no ParamStore defined, nil is returned.
func (app *BaseApp) GetConsensusParams(ctx sdk.Context) cmtproto.ConsensusParams {
	if app.paramStore == nil {
		return cmtproto.ConsensusParams{}
	}

	cp, err := app.paramStore.Get(ctx)
	if err != nil {
		// This could happen while migrating from v0.45/v0.46 to v0.50, we should
		// allow it to happen so during preblock the upgrade plan can be executed
		// and the consensus params set for the first time in the new format.
		app.logger.Error("failed to get consensus params", "err", err)

		return cmtproto.ConsensusParams{}
	}

	return cp
}

// StoreConsensusParams sets the consensus parameters to the BaseApp's param
// store.
//
// NOTE: We're explicitly not storing the CometBFT app_version in the param store.
// It's stored instead in the x/upgrade store, with its own bump logic.
func (app *BaseApp) StoreConsensusParams(ctx sdk.Context, cp cmtproto.ConsensusParams) error {
	if app.paramStore == nil {
		return errors.New("cannot store consensus params with no params store set")
	}

	return app.paramStore.Set(ctx, cp)
}

// AddRunTxRecoveryHandler adds custom app.runTx method panic handlers.
func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {
	for _, h := range handlers {
		app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware)
	}
}

// GetMaximumBlockGas gets the maximum gas from the consensus params. It panics
// if maximum block gas is less than negative one and returns zero if negative
// one.
func (app *BaseApp) GetMaximumBlockGas(ctx sdk.Context) uint64 {
	cp := app.GetConsensusParams(ctx)
	if cp.Block == nil {
		return 0
	}

	maxGas := cp.Block.MaxGas

	switch {
	case maxGas < -1:
		panic(fmt.Sprintf("invalid maximum block gas: %d", maxGas))

	case maxGas == -1:
		return 0

	default:
		return uint64(maxGas)
	}
}

func (app *BaseApp) validateFinalizeBlockHeight(req *abci.RequestFinalizeBlock) error {
	if req.Height < 1 {
		return fmt.Errorf("invalid height: %d", req.Height)
	}

	lastBlockHeight := app.LastBlockHeight()

	// expectedHeight holds the expected height to validate
	var expectedHeight int64
	if lastBlockHeight == 0 && app.initialHeight > 1 {
		// In this case, we're validating the first block of the chain, i.e no
		// previous commit. The height we're expecting is the initial height.
		expectedHeight = app.initialHeight
	} else {
		// This case can mean two things:
		//
		// - Either there was already a previous commit in the store, in which
		//   case we increment the version from there.
		// - Or there was no previous commit, in which case we start at version 1.
		expectedHeight = lastBlockHeight + 1
	}

	if req.Height != expectedHeight {
		return fmt.Errorf("invalid height: %d; expected: %d", req.Height, expectedHeight)
	}

	return nil
}

// validateBasicTxMsgs executes basic validator calls for messages.
func validateBasicTxMsgs(msgs []sdk.Msg) error {
	if len(msgs) == 0 {
		return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message")
	}

	for _, msg := range msgs {
		m, ok := msg.(sdk.HasValidateBasic)
		if !ok {
			continue
		}

		if err := m.ValidateBasic(); err != nil {
			return err
		}
	}

	return nil
}

func (app *BaseApp) getState(mode execMode) *state {
	switch mode {
	case execModeFinalize:
		return app.finalizeBlockState

	case execModePrepareProposal:
		return app.prepareProposalState

	case execModeProcessProposal:
		return app.processProposalState

	default:
		return app.checkState
	}
}

func (app *BaseApp) getBlockGasMeter(ctx sdk.Context) storetypes.GasMeter {
	if app.disableBlockGasMeter {
		return noopGasMeter{}
	}

	if maxGas := app.GetMaximumBlockGas(ctx); maxGas > 0 {
		return storetypes.NewGasMeter(maxGas)
	}

	return storetypes.NewInfiniteGasMeter()
}

// retrieve the context for the tx w/ txBytes and other memoized values.
func (app *BaseApp) getContextForTx(mode execMode, txBytes []byte) sdk.Context {
	app.mu.Lock()
	defer app.mu.Unlock()

	modeState := app.getState(mode)
	if modeState == nil {
		panic(fmt.Sprintf("state is nil for mode %v", mode))
	}

	ctx := modeState.Context().
		WithTxBytes(txBytes).
		WithGasMeter(storetypes.NewInfiniteGasMeter())
	// WithVoteInfos(app.voteInfos) // TODO: identify if this is needed

	ctx = ctx.WithIsSigverifyTx(app.sigverifyTx)
	ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))

	if mode == execModeReCheck {
		ctx = ctx.WithIsReCheckTx(true)
	}

	if mode == execModeSimulate {
		ctx, _ = ctx.CacheContext()
		ctx = ctx.WithExecMode(sdk.ExecMode(execModeSimulate))
	}

	return ctx
}

// cacheTxContext returns a new context based off of the provided context with
// a branched multi-store.
func (app *BaseApp) cacheTxContext(ctx sdk.Context, txBytes []byte) (sdk.Context, storetypes.CacheMultiStore) {
	ms := ctx.MultiStore()
	msCache := ms.CacheMultiStore()
	if msCache.TracingEnabled() {
		msCache = msCache.SetTracingContext(
			storetypes.TraceContext(
				map[string]any{
					"txHash": fmt.Sprintf("%X", tmhash.Sum(txBytes)),
				},
			),
		).(storetypes.CacheMultiStore)
	}

	return ctx.WithMultiStore(msCache), msCache
}

func (app *BaseApp) preBlock(req *abci.RequestFinalizeBlock) ([]abci.Event, error) {
	var events []abci.Event
	if app.preBlocker != nil {
		ctx := app.finalizeBlockState.Context().WithEventManager(sdk.NewEventManager())
		rsp, err := app.preBlocker(ctx, req)
		if err != nil {
			return nil, err
		}
		// rsp.ConsensusParamsChanged is true from preBlocker means ConsensusParams in store get changed
		// write the consensus parameters in store to context
		if rsp.ConsensusParamsChanged {
			ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))
			// GasMeter must be set after we get a context with updated consensus params.
			gasMeter := app.getBlockGasMeter(ctx)
			ctx = ctx.WithBlockGasMeter(gasMeter)
			app.finalizeBlockState.SetContext(ctx)
		}

		events = ctx.EventManager().ABCIEvents()
	}

	return events, nil
}

func (app *BaseApp) beginBlock(_ *abci.RequestFinalizeBlock) (sdk.BeginBlock, error) {
	var (
		resp sdk.BeginBlock
		err  error
	)

	if app.beginBlocker != nil {
		resp, err = app.beginBlocker(app.finalizeBlockState.Context())
		if err != nil {
			return resp, err
		}

		// append BeginBlock attributes to all events in the EndBlock response
		for i, event := range resp.Events {
			resp.Events[i].Attributes = append(
				event.Attributes,
				abci.EventAttribute{Key: "mode", Value: "BeginBlock"},
			)
		}

		resp.Events = sdk.MarkEventsToIndex(resp.Events, app.indexEvents)
	}

	return resp, nil
}

func (app *BaseApp) deliverTx(tx []byte) *abci.ExecTxResult {
	gInfo := sdk.GasInfo{}
	resultStr := "successful"

	var resp *abci.ExecTxResult

	defer func() {
		telemetry.IncrCounter(1, "tx", "count")
		telemetry.IncrCounter(1, "tx", resultStr)
		telemetry.SetGauge(float32(gInfo.GasUsed), "tx", "gas", "used")
		telemetry.SetGauge(float32(gInfo.GasWanted), "tx", "gas", "wanted")
	}()

	gInfo, result, anteEvents, err := app.runTx(execModeFinalize, tx, nil)
	if err != nil {
		resultStr = "failed"
		resp = sdkerrors.ResponseExecTxResultWithEvents(
			err,
			gInfo.GasWanted,
			gInfo.GasUsed,
			sdk.MarkEventsToIndex(anteEvents, app.indexEvents),
			app.trace,
		)

		return resp
	}

	resp = &abci.ExecTxResult{
		GasWanted: int64(gInfo.GasWanted),
		GasUsed:   int64(gInfo.GasUsed),
		Log:       result.Log,
		Data:      result.Data,
		Events:    sdk.MarkEventsToIndex(result.Events, app.indexEvents),
	}

	return resp
}

// endBlock is an application-defined function that is called after transactions
// have been processed in FinalizeBlock.
func (app *BaseApp) endBlock(_ context.Context) (sdk.EndBlock, error) {
	var endblock sdk.EndBlock

	if app.endBlocker != nil {
		eb, err := app.endBlocker(app.finalizeBlockState.Context())
		if err != nil {
			return endblock, err
		}

		// append EndBlock attributes to all events in the EndBlock response
		for i, event := range eb.Events {
			eb.Events[i].Attributes = append(
				event.Attributes,
				abci.EventAttribute{Key: "mode", Value: "EndBlock"},
			)
		}

		eb.Events = sdk.MarkEventsToIndex(eb.Events, app.indexEvents)
		endblock = eb
	}

	return endblock, nil
}

// runTx processes a transaction within a given execution mode, encoded transaction
// bytes, and the decoded transaction itself. All state transitions occur through
// a cached Context depending on the mode provided. State only gets persisted
// if all messages get executed successfully and the execution mode is DeliverTx.
// Note, gas execution info is always returned. A reference to a Result is
// returned if the tx does not run out of gas and if all the messages are valid
// and execute successfully. An error is returned otherwise.
// Both txBytes and the decoded tx are passed to runTx to avoid the state machine
// encoding and decoding the transaction twice.
// Passing the decoded tx to runTx is optional; it will be decoded if the tx is nil.
func (app *BaseApp) runTx(mode execMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, anteEvents []abci.Event, err error) {
	// NOTE: GasWanted should be returned by the AnteHandler. GasUsed is
	// determined by the GasMeter. We need access to the context to get the gas
	// meter, so we initialize upfront.
	var gasWanted uint64

	ctx := app.getContextForTx(mode, txBytes)
	ms := ctx.MultiStore()

	// only run the tx if there is block gas remaining
	if mode == execModeFinalize && ctx.BlockGasMeter().IsOutOfGas() {
		return gInfo, nil, nil, errorsmod.Wrap(sdkerrors.ErrOutOfGas, "no block gas left to run tx")
	}

	defer func() {
		if r := recover(); r != nil {
			recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)
			err, result = processRecovery(r, recoveryMW), nil
			ctx.Logger().Error("panic recovered in runTx", "err", err)
		}

		gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}
	}()

	blockGasConsumed := false

	// consumeBlockGas makes sure block gas is consumed at most once. It must
	// happen after tx processing, and must be executed even if tx processing
	// fails. Hence, its execution is deferred.
	consumeBlockGas := func() {
		if !blockGasConsumed {
			blockGasConsumed = true
			ctx.BlockGasMeter().ConsumeGas(
				ctx.GasMeter().GasConsumedToLimit(), "block gas meter",
			)
		}
	}

	// If BlockGasMeter() panics it will be caught by the above recover and will
	// return an error - in any case BlockGasMeter will consume gas past the limit.
	//
	// NOTE: consumeBlockGas must exist in a separate defer function from the
	// general deferred recovery function to recover from consumeBlockGas as it'll
	// be executed first (deferred statements are executed as stack).
	if mode == execModeFinalize {
		defer consumeBlockGas()
	}

	// if the transaction is not decoded, decode it here
	if tx == nil {
		tx, err = app.txDecoder(txBytes)
		if err != nil {
			return sdk.GasInfo{GasUsed: 0, GasWanted: 0}, nil, nil, sdkerrors.ErrTxDecode.Wrap(err.Error())
		}
	}

	msgs := tx.GetMsgs()
	if err := validateBasicTxMsgs(msgs); err != nil {
		return sdk.GasInfo{}, nil, nil, err
	}

	for _, msg := range msgs {
		handler := app.msgServiceRouter.Handler(msg)
		if handler == nil {
			return sdk.GasInfo{}, nil, nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg)
		}
	}

	if app.anteHandler != nil {
		var (
			anteCtx sdk.Context
			msCache storetypes.CacheMultiStore
		)

		// Branch context before AnteHandler call in case it aborts.
		// This is required for both CheckTx and DeliverTx.
		// Ref: https://github.com/cosmos/cosmos-sdk/issues/2772
		//
		// NOTE: Alternatively, we could require that AnteHandler ensures that
		// writes do not happen if aborted/failed. This may have some
		// performance benefits, but it'll be more difficult to get right.
		anteCtx, msCache = app.cacheTxContext(ctx, txBytes)
		anteCtx = anteCtx.WithEventManager(sdk.NewEventManager())

		newCtx, err := app.anteHandler(anteCtx, tx, mode == execModeSimulate)
		if !newCtx.IsZero() {
			// At this point, newCtx.MultiStore() is a store branch, or something else
			// replaced by the AnteHandler. We want the original multistore.
			//
			// Also, in the case of the tx aborting, we need to track gas consumed via
			// the instantiated gas meter in the AnteHandler, so we update the context
			// prior to returning.
			ctx = newCtx.WithMultiStore(ms)
		}

		events := ctx.EventManager().Events()

		// GasMeter expected to be set in AnteHandler
		gasWanted = ctx.GasMeter().Limit()

		if err != nil {
			if mode == execModeReCheck {
				// if the ante handler fails on recheck, we want to remove the tx from the mempool
				if mempoolErr := app.mempool.Remove(tx); mempoolErr != nil {
					return gInfo, nil, anteEvents, errors.Join(err, mempoolErr)
				}
			}

			return gInfo, nil, nil, err
		}

		msCache.Write()
		anteEvents = events.ToABCIEvents()
	}

	switch mode {
	case execModeCheck:
		err = app.mempool.Insert(ctx, tx)
		if err != nil {
			return gInfo, nil, anteEvents, err
		}

	case execModeFinalize:
		err = app.mempool.Remove(tx)
		if err != nil && !errors.Is(err, mempool.ErrTxNotFound) {
			return gInfo, nil, anteEvents,
				fmt.Errorf("failed to remove tx from mempool: %w", err)
		}
	}

	// Create a new Context based off of the existing Context with a MultiStore branch
	// in case message processing fails. At this point, the MultiStore
	// is a branch of a branch.
	runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)

	// Attempt to execute all messages and only update state if all messages pass
	// and we're in DeliverTx. Note, runMsgs will never return a reference to a
	// Result if any single message fails or does not have a registered Handler.
	msgsV2, err := tx.GetMsgsV2()
	if err == nil {
		result, err = app.runMsgs(runMsgCtx, msgs, msgsV2, mode)
	}

	// Run optional postHandlers (should run regardless of the execution result).
	//
	// Note: If the postHandler fails, we also revert the runMsgs state.
	if app.postHandler != nil {
		// The runMsgCtx context currently contains events emitted by the ante handler.
		// We clear this to correctly order events without duplicates.
		// Note that the state is still preserved.
		postCtx := runMsgCtx.WithEventManager(sdk.NewEventManager())

		newCtx, errPostHandler := app.postHandler(postCtx, tx, mode == execModeSimulate, err == nil)
		if errPostHandler != nil {
			if err == nil {
				// when the msg was handled successfully, return the post handler error only
				return gInfo, nil, anteEvents, errPostHandler
			}
			// otherwise append to the msg error so that we keep the original error code for better user experience
			return gInfo, nil, anteEvents, errorsmod.Wrapf(err, "postHandler: %s", errPostHandler)
		}

		// we don't want runTx to panic if runMsgs has failed earlier
		if result == nil {
			result = &sdk.Result{}
		}

		result.Events = append(result.Events, newCtx.EventManager().ABCIEvents()...)
	}

	if err == nil {
		if mode == execModeFinalize {
			// When block gas exceeds, it'll panic and won't commit the cached store.
			consumeBlockGas()
			msCache.Write()
		}

		if len(anteEvents) > 0 && (mode == execModeFinalize || mode == execModeSimulate) {
			// append the events in the order of occurrence
			result.Events = append(anteEvents, result.Events...)
		}
	}

	return gInfo, result, anteEvents, err
}

// runMsgs iterates through a list of messages and executes them with the provided
// Context and execution mode. Messages will only be executed during simulation
// and DeliverTx. An error is returned if any single message fails or if a
// Handler does not exist for a given message route. Otherwise, a reference to a
// Result is returned. The caller must not commit state if an error is returned.
func (app *BaseApp) runMsgs(ctx sdk.Context, msgs []sdk.Msg, msgsV2 []protov2.Message, mode execMode) (*sdk.Result, error) {
	events := sdk.EmptyEvents()

	var msgResponses []*codectypes.Any

	// NOTE: GasWanted is determined by the AnteHandler and GasUsed by the GasMeter.
	for i, msg := range msgs {
		if mode != execModeFinalize && mode != execModeSimulate {
			break
		}

		handler := app.msgServiceRouter.Handler(msg)
		if handler == nil {
			return nil, errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "no message handler found for %T", msg)
		}

		// ADR 031 request type routing
		msgResult, err := handler(ctx, msg)
		if err != nil {
			return nil, errorsmod.Wrapf(err, "failed to execute message; message index: %d", i)
		}

		// create message events
		msgEvents, err := createEvents(app.cdc, msgResult.GetEvents(), msg, msgsV2[i])
		if err != nil {
			return nil, errorsmod.Wrapf(err, "failed to create message events; message index: %d", i)
		}

		// append message events and data
		//
		// Note: Each message result's data must be length-prefixed in order to
		// separate each result.
		for j, event := range msgEvents {
			// append message index to all events
			msgEvents[j] = event.AppendAttributes(sdk.NewAttribute("msg_index", strconv.Itoa(i)))
		}

		events = events.AppendEvents(msgEvents)

		// Each individual sdk.Result that went through the MsgServiceRouter
		// (which should represent 99% of the Msgs now, since everyone should
		// be using protobuf Msgs) has exactly one Msg response, set inside
		// `WrapServiceResult`. We take that Msg response, and aggregate it
		// into an array.
		if len(msgResult.MsgResponses) > 0 {
			msgResponse := msgResult.MsgResponses[0]
			if msgResponse == nil {
				return nil, sdkerrors.ErrLogic.Wrapf("got nil Msg response at index %d for msg %s", i, sdk.MsgTypeURL(msg))
			}

			msgResponses = append(msgResponses, msgResponse)
		}
	}

	data, err := makeABCIData(msgResponses)
	if err != nil {
		return nil, errorsmod.Wrap(err, "failed to marshal tx data")
	}

	return &sdk.Result{
		Data:         data,
		Events:       events.ToABCIEvents(),
		MsgResponses: msgResponses,
	}, nil
}

// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx.
func makeABCIData(msgResponses []*codectypes.Any) ([]byte, error) {
	return proto.Marshal(&sdk.TxMsgData{MsgResponses: msgResponses})
}

func createEvents(cdc codec.Codec, events sdk.Events, msg sdk.Msg, msgV2 protov2.Message) (sdk.Events, error) {
	eventMsgName := sdk.MsgTypeURL(msg)
	msgEvent := sdk.NewEvent(sdk.EventTypeMessage, sdk.NewAttribute(sdk.AttributeKeyAction, eventMsgName))

	// we set the signer attribute as the sender
	signers, err := cdc.GetMsgV2Signers(msgV2)
	if err != nil {
		return nil, err
	}

	if len(signers) > 0 && signers[0] != nil {
		addrStr, err := cdc.InterfaceRegistry().SigningContext().AddressCodec().BytesToString(signers[0])
		if err != nil {
			return nil, err
		}

		msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeySender, addrStr))
	}

	// verify that events have no module attribute set
	if _, found := events.GetAttributes(sdk.AttributeKeyModule); !found {
		if moduleName := sdk.GetModuleNameFromTypeURL(eventMsgName); moduleName != "" {
			msgEvent = msgEvent.AppendAttributes(sdk.NewAttribute(sdk.AttributeKeyModule, moduleName))
		}
	}

	return sdk.Events{msgEvent}.AppendEvents(events), nil
}

// PrepareProposalVerifyTx performs transaction verification when a proposer is
// creating a block proposal during PrepareProposal. Any state committed to the
// PrepareProposal state internally will be discarded. <nil, err> will be
// returned if the transaction cannot be encoded. <bz, nil> will be returned if
// the transaction is valid, otherwise <bz, err> will be returned.
func (app *BaseApp) PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) {
	bz, err := app.txEncoder(tx)
	if err != nil {
		return nil, err
	}

	_, _, _, err = app.runTx(execModePrepareProposal, bz, tx)
	if err != nil {
		return nil, err
	}

	return bz, nil
}

// ProcessProposalVerifyTx performs transaction verification when receiving a
// block proposal during ProcessProposal. Any state committed to the
// ProcessProposal state internally will be discarded. <nil, err> will be
// returned if the transaction cannot be decoded. <Tx, nil> will be returned if
// the transaction is valid, otherwise <Tx, err> will be returned.
func (app *BaseApp) ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) {
	tx, err := app.txDecoder(txBz)
	if err != nil {
		return nil, err
	}

	_, _, _, err = app.runTx(execModeProcessProposal, txBz, tx)
	if err != nil {
		return nil, err
	}

	return tx, nil
}

func (app *BaseApp) TxDecode(txBytes []byte) (sdk.Tx, error) {
	return app.txDecoder(txBytes)
}

func (app *BaseApp) TxEncode(tx sdk.Tx) ([]byte, error) {
	return app.txEncoder(tx)
}

func (app *BaseApp) StreamingManager() storetypes.StreamingManager {
	return app.streamingManager
}

// Close is called in start cmd to gracefully cleanup resources.
func (app *BaseApp) Close() error {
	var errs []error

	// Close app.db (opened by cosmos-sdk/server/start.go call to openDB)
	if app.db != nil {
		app.logger.Info("Closing application.db")
		if err := app.db.Close(); err != nil {
			errs = append(errs, err)
		}
	}

	// Close app.snapshotManager
	// - opened when app chains use cosmos-sdk/server/util.go/DefaultBaseappOptions (boilerplate)
	// - which calls cosmos-sdk/server/util.go/GetSnapshotStore
	// - which is passed to baseapp/options.go/SetSnapshot
	// - to set app.snapshotManager = snapshots.NewManager
	if app.snapshotManager != nil {
		app.logger.Info("Closing snapshots/metadata.db")
		if err := app.snapshotManager.Close(); err != nil {
			errs = append(errs, err)
		}
	}

	return errors.Join(errs...)
}

// GetBaseApp returns the pointer to itself.
func (app *BaseApp) GetBaseApp() *BaseApp {
	return app
}
```

### Commit

The [`Commit` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after the full-node has received _precommits_ from 2/3+ of validators (weighted by voting power). On the `BaseApp` end, the `Commit(res abci.ResponseCommit)` function is implemented to commit all the valid state transitions that occurred during `FinalizeBlock` and to reset state for the next block.

To commit state transitions, the `Commit` function calls the `Write()` function on `finalizeBlockState.ms`, where `finalizeBlockState.ms` is a branched multistore of the main store `app.cms`. Then, the `Commit` function resets `checkState` to the latest header (obtained from `finalizeBlockState.ctx.BlockHeader`) and sets `finalizeBlockState` to `nil`.

Finally, `Commit` returns the hash of the commitment of `app.cms` back to the underlying consensus engine. This hash is used as a reference in the header of the next block.

### Info

The [`Info` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is a simple query from the underlying consensus engine, notably used to sync the latter with the application during a handshake that happens on startup. When called, the `Info(res abci.ResponseInfo)` function from `BaseApp` will return the application's name, version, and the hash of the last commit of `app.cms`.

### Query

The [`Query` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is used to serve queries received from the underlying consensus engine, including queries received via RPC like CometBFT RPC.
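As a rough, self-contained sketch of the category routing described in this section: the query `path` is split on `/` and the first segment selects a handler. The handler names mirror `BaseApp`'s internal `handleQueryApp`, `handleQueryStore`, and `handleQueryP2P` methods, but the routing logic here is a simplified toy, not the SDK implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// dispatchQuery returns the name of the handler a BaseApp-style dispatcher
// would route a non-gRPC query path to, based on the first path segment.
func dispatchQuery(path string) string {
	split := strings.Split(strings.TrimPrefix(path, "/"), "/")
	switch split[0] {
	case "app":
		return "handleQueryApp"
	case "store":
		return "handleQueryStore"
	case "p2p":
		return "handleQueryP2P"
	default:
		// "custom" and unrecognized categories are handled by other
		// routers in the real implementation; this sketch stops here.
		return "unknown"
	}
}

func main() {
	fmt.Println(dispatchQuery("/app/version"))   // handleQueryApp
	fmt.Println(dispatchQuery("/store/bank/key")) // handleQueryStore
	fmt.Println(dispatchQuery("/p2p/filter/addr/1.2.3.4")) // handleQueryP2P
}
```

Note that gRPC fully-qualified service method paths never reach this dispatcher; they are intercepted earlier and handed to the `grpcQueryRouter`.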
It used to be the main entrypoint for building interfaces with the application, but with the introduction of [gRPC queries](/docs/sdk/v0.53/documentation/module-system/query-services) in Cosmos SDK v0.40, its usage is more limited. The application must respect a few rules when implementing the `Query` method, which are outlined [here](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#query).

Each CometBFT `query` comes with a `path`, a `string` denoting what to query. If the `path` matches a gRPC fully-qualified service method, `BaseApp` defers the query to the `grpcQueryRouter` and lets it handle it, as explained [above](#grpc-query-router). Otherwise, the `path` represents a query that is not (yet) handled by the gRPC router. `BaseApp` splits the `path` string on the `/` delimiter. By convention, the first element of the split string (`split[0]`) contains the category of the `query` (`app`, `p2p`, `store` or `custom`). The `BaseApp` implementation of the `Query(req abci.RequestQuery)` method is a simple dispatcher serving these main categories of queries:

- Application-related queries, such as querying the application's version, which are served via the `handleQueryApp` method.
- Direct queries to the multistore, which are served by the `handleQueryStore` method. These direct queries are different from custom queries that go through `app.queryRouter`, and are mainly used by third-party service providers like block explorers.
- P2P queries, which are served via the `handleQueryP2P` method. These queries return either `app.addrPeerFilter` or `app.ipPeerFilter`, which contain the list of peers filtered by address or IP respectively. These lists are first initialized via `options` in `BaseApp`'s [constructor](#constructor).

### ExtendVote

`ExtendVote` allows an application to extend a pre-commit vote with arbitrary data.
This process does NOT have to be deterministic, and the data returned can be unique to the validator process.

In the Cosmos SDK this is implemented as a no-op:

```go expandable
package baseapp

import (
	"bytes"
	"context"
	"fmt"
	"slices"

	"github.com/cockroachdb/errors"
	abci "github.com/cometbft/cometbft/abci/types"
	cryptoenc "github.com/cometbft/cometbft/crypto/encoding"
	cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto"
	cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
	cmttypes "github.com/cometbft/cometbft/types"
	protoio "github.com/cosmos/gogoproto/io"
	"github.com/cosmos/gogoproto/proto"

	"cosmossdk.io/core/comet"

	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/mempool"
)

type (
	// ValidatorStore defines the interface contract required for verifying vote
	// extension signatures. Typically, this will be implemented by the x/staking
	// module, which has knowledge of the CometBFT public key.
	ValidatorStore interface {
		GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error)
	}

	// GasTx defines the contract that a transaction with a gas limit must implement.
	GasTx interface {
		GetGas() uint64
	}
)

// ValidateVoteExtensions defines a helper function for verifying vote extension
// signatures that may be passed or manually injected into a block proposal from
// a proposer in PrepareProposal. It returns an error if any signature is invalid
// or if unexpected vote extensions and/or signatures are found or less than 2/3
// power is received.
// NOTE: From v0.50.5 `currentHeight` and `chainID` arguments are ignored for fixing an issue.
// They will be removed from the function in v0.51+.
func ValidateVoteExtensions(
	ctx sdk.Context,
	valStore ValidatorStore,
	_ int64,
	_ string,
	extCommit abci.ExtendedCommitInfo,
) error {
	// Get values from context
	cp := ctx.ConsensusParams()
	currentHeight := ctx.HeaderInfo().Height
	chainID := ctx.HeaderInfo().ChainID
	commitInfo := ctx.CometInfo().GetLastCommit()

	// Check that both extCommit + commit are ordered in accordance with vp/address.
	if err := validateExtendedCommitAgainstLastCommit(extCommit, commitInfo); err != nil {
		return err
	}

	// Start checking vote extensions only **after** the vote extensions enable
	// height, because when `currentHeight == VoteExtensionsEnableHeight`
	// PrepareProposal doesn't get any vote extensions in its request.
	extsEnabled := cp.Abci != nil && currentHeight > cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0

	marshalDelimitedFn := func(msg proto.Message) ([]byte, error) {
		var buf bytes.Buffer
		if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil {
			return nil, err
		}

		return buf.Bytes(), nil
	}

	var (
		// Total voting power of all vote extensions.
		totalVP int64
		// Total voting power of all validators that submitted valid vote extensions.
		sumVP int64
	)

	for _, vote := range extCommit.Votes {
		totalVP += vote.Validator.Power

		// Only check + include power if the vote is a commit vote. There must be super-majority, otherwise the
		// previous block (the block the vote is for) could not have been committed.
		if vote.BlockIdFlag != cmtproto.BlockIDFlagCommit {
			continue
		}

		if !extsEnabled {
			if len(vote.VoteExtension) > 0 {
				return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight)
			}
			if len(vote.ExtensionSignature) > 0 {
				return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight)
			}

			continue
		}

		if len(vote.ExtensionSignature) == 0 {
			return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight)
		}

		valConsAddr := sdk.ConsAddress(vote.Validator.Address)

		pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr)
		if err != nil {
			return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err)
		}

		cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto)
		if err != nil {
			return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err)
		}

		cve := cmtproto.CanonicalVoteExtension{
			Extension: vote.VoteExtension,
			Height:    currentHeight - 1, // the vote extension was signed in the previous height
			Round:     int64(extCommit.Round),
			ChainId:   chainID,
		}

		extSignBytes, err := marshalDelimitedFn(&cve)
		if err != nil {
			return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err)
		}

		if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) {
			return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr)
		}

		sumVP += vote.Validator.Power
	}

	// This check is probably unnecessary, but better safe than sorry.
	if totalVP <= 0 {
		return fmt.Errorf("total voting power must be positive, got: %d", totalVP)
	}

	// If the sum of the voting power has not reached (2/3 + 1) we need to error.
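	// Worked example of the threshold check below (illustrative, not part of
	// the original source): with totalVP = 10, requiredVP = ((10*2)/3)+1 = 7,
	// so integer division makes the check require strictly more than 2/3 of
	// the total voting power.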
+ if requiredVP := ((totalVP * 2) / 3) + 1; sumVP < requiredVP { + return fmt.Errorf( + "insufficient cumulative voting power received to verify vote extensions; got: %d, expected: >=%d", + sumVP, requiredVP, + ) +} + +return nil +} + +/ validateExtendedCommitAgainstLastCommit validates an ExtendedCommitInfo against a LastCommit. Specifically, +/ it checks that the ExtendedCommit + LastCommit (for the same height), are consistent with each other + that +/ they are ordered correctly (by voting power) + +in accordance with +/ [comet](https://github.com/cometbft/cometbft/blob/4ce0277b35f31985bbf2c25d3806a184a4510010/types/validator_set.go#L784). +func validateExtendedCommitAgainstLastCommit(ec abci.ExtendedCommitInfo, lc comet.CommitInfo) + +error { + / check that the rounds are the same + if ec.Round != lc.Round() { + return fmt.Errorf("extended commit round %d does not match last commit round %d", ec.Round, lc.Round()) +} + + / check that the # of votes are the same + if len(ec.Votes) != lc.Votes().Len() { + return fmt.Errorf("extended commit votes length %d does not match last commit votes length %d", len(ec.Votes), lc.Votes().Len()) +} + + / check sort order of extended commit votes + if !slices.IsSortedFunc(ec.Votes, func(vote1, vote2 abci.ExtendedVoteInfo) + +int { + if vote1.Validator.Power == vote2.Validator.Power { + return bytes.Compare(vote1.Validator.Address, vote2.Validator.Address) / addresses sorted in ascending order (used to break vp conflicts) +} + +return -int(vote1.Validator.Power - vote2.Validator.Power) / vp sorted in descending order +}) { + return fmt.Errorf("extended commit votes are not sorted by voting power") +} + addressCache := make(map[string]struct{ +}, len(ec.Votes)) + / check that consistency between LastCommit and ExtendedCommit + for i, vote := range ec.Votes { + / cache addresses to check for duplicates + if _, ok := addressCache[string(vote.Validator.Address)]; ok { + return fmt.Errorf("extended commit vote address %X is 
duplicated", vote.Validator.Address) +} + +addressCache[string(vote.Validator.Address)] = struct{ +}{ +} + if !bytes.Equal(vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) { + return fmt.Errorf("extended commit vote address %X does not match last commit vote address %X", vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) +} + if vote.Validator.Power != lc.Votes().Get(i).Validator().Power() { + return fmt.Errorf("extended commit vote power %d does not match last commit vote power %d", vote.Validator.Power, lc.Votes().Get(i).Validator().Power()) +} + +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) + +TxDecode(txBz []byte) (sdk.Tx, error) + +TxEncode(tx sdk.Tx) ([]byte, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier + txSelector TxSelector + signerExtAdapter mempool.SignerExtractionAdapter +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) *DefaultProposalHandler { + return &DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, + txSelector: NewDefaultTxSelector(), + signerExtAdapter: mempool.NewDefaultSignerExtractionAdapter(), +} +} + +/ SetTxSelector sets the TxSelector function on the DefaultProposalHandler. +func (h *DefaultProposalHandler) + +SetTxSelector(ts TxSelector) { + h.txSelector = ts +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. 
Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h *DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + var maxBlockGas uint64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = uint64(b.MaxGas) +} + +defer h.txSelector.Clear() + + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + / + / Note, we still need to ensure the transactions returned respect req.MaxTxBytes. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + for _, txBz := range req.Txs { + tx, err := h.txVerifier.TxDecode(txBz) + if err != nil { + return nil, err +} + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, tx, txBz) + if stop { + break +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} + selectedTxsSignersSeqs := make(map[string]uint64) + +var ( + resError error + selectedTxsNums int + invalidTxs []sdk.Tx / invalid txs to be removed out of the loop to avoid dead lock + ) + +mempool.SelectBy(ctx, h.mempool, req.Txs, func(memTx sdk.Tx) + +bool { + unorderedTx, ok := memTx.(sdk.TxWithUnordered) + isUnordered := ok && unorderedTx.GetUnordered() + txSignersSeqs := make(map[string]uint64) + + / if the tx is unordered, we don't need to check the sequence, we just add it + if !isUnordered { + signerData, err := h.signerExtAdapter.GetSigners(memTx) + if err != nil { + / propagate the error to the caller + resError = err + return false +} + + / If the signers aren't in selectedTxsSignersSeqs then we haven't seen them before + / so we add them and continue given that we don't need to check the sequence. + shouldAdd := true + for _, signer := range signerData { + seq, ok := selectedTxsSignersSeqs[signer.Signer.String()] + if !ok { + txSignersSeqs[signer.Signer.String()] = signer.Sequence + continue +} + + / If we have seen this signer before in this block, we must make + / sure that the current sequence is seq+1; otherwise is invalid + / and we skip it. + if seq+1 != signer.Sequence { + shouldAdd = false + break +} + +txSignersSeqs[signer.Signer.String()] = signer.Sequence +} + if !shouldAdd { + return true +} + +} + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. 
+ txBz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + invalidTxs = append(invalidTxs, memTx) +} + +else { + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, memTx, txBz) + if stop { + return false +} + txsLen := len(h.txSelector.SelectedTxs(ctx)) + / If the tx is unordered, we don't need to update the sender sequence. + if !isUnordered { + for sender, seq := range txSignersSeqs { + / If txsLen != selectedTxsNums is true, it means that we've + / added a new tx to the selected txs, so we need to update + / the sequence of the sender. + if txsLen != selectedTxsNums { + selectedTxsSignersSeqs[sender] = seq +} + +else if _, ok := selectedTxsSignersSeqs[sender]; !ok { + / The transaction hasn't been added but it passed the + / verification, so we know that the sequence is correct. + / So we set this sender's sequence to seq-1, in order + / to avoid unnecessary calls to PrepareProposalVerifyTx. + selectedTxsSignersSeqs[sender] = seq - 1 +} + +} + +} + +selectedTxsNums = txsLen +} + +return true +}) + if resError != nil { + return nil, resError +} + for _, tx := range invalidTxs { + err := h.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return nil, err +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. 
+func (h *DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + var totalTxGas uint64 + + var maxBlockGas int64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = b.MaxGas +} + for _, txBytes := range req.Txs { + tx, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if maxBlockGas > 0 { + gasTx, ok := tx.(GasTx) + if ok { + totalTxGas += gasTx.GetGas() +} + if totalTxGas > uint64(maxBlockGas) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. 
+func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. +func NoOpVerifyVoteExtensionHandler() + +sdk.VerifyVoteExtensionHandler { + return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} + +/ TxSelector defines a helper type that assists in selecting transactions during +/ mempool transaction selection in PrepareProposal. It keeps track of the total +/ number of bytes and total gas of the selected transactions. It also keeps +/ track of the selected transactions themselves. +type TxSelector interface { + / SelectedTxs should return a copy of the selected transactions. + SelectedTxs(ctx context.Context) [][]byte + + / Clear should clear the TxSelector, nulling out all relevant fields. + Clear() + + / SelectTxForProposal should attempt to select a transaction for inclusion in + / a proposal based on inclusion criteria defined by the TxSelector. It must + / return if the caller should halt the transaction selection loop + / (typically over a mempool) + +or otherwise. 
+ SelectTxForProposal(ctx context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool +} + +type defaultTxSelector struct { + totalTxBytes uint64 + totalTxGas uint64 + selectedTxs [][]byte +} + +func NewDefaultTxSelector() + +TxSelector { + return &defaultTxSelector{ +} +} + +func (ts *defaultTxSelector) + +SelectedTxs(_ context.Context) [][]byte { + txs := make([][]byte, len(ts.selectedTxs)) + +copy(txs, ts.selectedTxs) + +return txs +} + +func (ts *defaultTxSelector) + +Clear() { + ts.totalTxBytes = 0 + ts.totalTxGas = 0 + ts.selectedTxs = nil +} + +func (ts *defaultTxSelector) + +SelectTxForProposal(_ context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool { + txSize := uint64(cmttypes.ComputeProtoSizeForTxs([]cmttypes.Tx{ + txBz +})) + +var txGasLimit uint64 + if memTx != nil { + if gasTx, ok := memTx.(GasTx); ok { + txGasLimit = gasTx.GetGas() +} + +} + + / only add the transaction to the proposal if we have enough capacity + if (txSize + ts.totalTxBytes) <= maxTxBytes { + / If there is a max block gas limit, add the tx only if the limit has + / not been met. + if maxBlockGas > 0 { + if (txGasLimit + ts.totalTxGas) <= maxBlockGas { + ts.totalTxGas += txGasLimit + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + +else { + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + + / check if we've reached capacity; if so, we cannot select any more transactions + return ts.totalTxBytes >= maxTxBytes || (maxBlockGas > 0 && (ts.totalTxGas >= maxBlockGas)) +} +``` + +### VerifyVoteExtension + +`VerifyVoteExtension` allows an application to verify that the data returned by `ExtendVote` is valid. This process MUST be deterministic. Moreover, the value of ResponseVerifyVoteExtension.status MUST exclusively depend on the parameters passed in the call to RequestVerifyVoteExtension, and the last committed Application state. 
+ +In the Cosmos-SDK this is implemented as a NoOp: + +```go expandable +package baseapp + +import ( + + "bytes" + "context" + "fmt" + "slices" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + "cosmossdk.io/core/comet" + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +type ( + / ValidatorStore defines the interface contract required for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. + ValidatorStore interface { + GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error) +} + + / GasTx defines the contract that a transaction with a gas limit must implement. + GasTx interface { + GetGas() + +uint64 +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in PrepareProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +/ NOTE: From v0.50.5 `currentHeight` and `chainID` arguments are ignored for fixing an issue. +/ They will be removed from the function in v0.51+. 
+func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + _ int64, + _ string, + extCommit abci.ExtendedCommitInfo, +) + +error { + / Get values from context + cp := ctx.ConsensusParams() + currentHeight := ctx.HeaderInfo().Height + chainID := ctx.HeaderInfo().ChainID + commitInfo := ctx.CometInfo().GetLastCommit() + + / Check that both extCommit + commit are ordered in accordance with vp/address. + if err := validateExtendedCommitAgainstLastCommit(extCommit, commitInfo); err != nil { + return err +} + + / Start checking vote extensions only **after** the vote extensions enable + / height, because when `currentHeight == VoteExtensionsEnableHeight` + / PrepareProposal doesn't get any vote extensions in its request. + extsEnabled := cp.Abci != nil && currentHeight > cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var ( + / Total voting power of all vote extensions. + totalVP int64 + / Total voting power of all validators that submitted valid vote extensions. + sumVP int64 + ) + for _, vote := range extCommit.Votes { + totalVP += vote.Validator.Power + + / Only check + include power if the vote is a commit vote. There must be super-majority, otherwise the + / previous block (the block the vote is for) + +could not have been committed. 
+ if vote.BlockIdFlag != cmtproto.BlockIDFlagCommit { + continue +} + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := sdk.ConsAddress(vote.Validator.Address) + +pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP += vote.Validator.Power +} + + / This check is probably unnecessary, but better safe than sorry. + if totalVP <= 0 { + return fmt.Errorf("total voting power must be positive, got: %d", totalVP) +} + + / If the sum of the voting power has not reached (2/3 + 1) + +we need to error. 
+ if requiredVP := ((totalVP * 2) / 3) + 1; sumVP < requiredVP { + return fmt.Errorf( + "insufficient cumulative voting power received to verify vote extensions; got: %d, expected: >=%d", + sumVP, requiredVP, + ) +} + +return nil +} + +/ validateExtendedCommitAgainstLastCommit validates an ExtendedCommitInfo against a LastCommit. Specifically, +/ it checks that the ExtendedCommit + LastCommit (for the same height), are consistent with each other + that +/ they are ordered correctly (by voting power) + +in accordance with +/ [comet](https://github.com/cometbft/cometbft/blob/4ce0277b35f31985bbf2c25d3806a184a4510010/types/validator_set.go#L784). +func validateExtendedCommitAgainstLastCommit(ec abci.ExtendedCommitInfo, lc comet.CommitInfo) + +error { + / check that the rounds are the same + if ec.Round != lc.Round() { + return fmt.Errorf("extended commit round %d does not match last commit round %d", ec.Round, lc.Round()) +} + + / check that the # of votes are the same + if len(ec.Votes) != lc.Votes().Len() { + return fmt.Errorf("extended commit votes length %d does not match last commit votes length %d", len(ec.Votes), lc.Votes().Len()) +} + + / check sort order of extended commit votes + if !slices.IsSortedFunc(ec.Votes, func(vote1, vote2 abci.ExtendedVoteInfo) + +int { + if vote1.Validator.Power == vote2.Validator.Power { + return bytes.Compare(vote1.Validator.Address, vote2.Validator.Address) / addresses sorted in ascending order (used to break vp conflicts) +} + +return -int(vote1.Validator.Power - vote2.Validator.Power) / vp sorted in descending order +}) { + return fmt.Errorf("extended commit votes are not sorted by voting power") +} + addressCache := make(map[string]struct{ +}, len(ec.Votes)) + / check that consistency between LastCommit and ExtendedCommit + for i, vote := range ec.Votes { + / cache addresses to check for duplicates + if _, ok := addressCache[string(vote.Validator.Address)]; ok { + return fmt.Errorf("extended commit vote address %X is 
duplicated", vote.Validator.Address) +} + +addressCache[string(vote.Validator.Address)] = struct{ +}{ +} + if !bytes.Equal(vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) { + return fmt.Errorf("extended commit vote address %X does not match last commit vote address %X", vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) +} + if vote.Validator.Power != lc.Votes().Get(i).Validator().Power() { + return fmt.Errorf("extended commit vote power %d does not match last commit vote power %d", vote.Validator.Power, lc.Votes().Get(i).Validator().Power()) +} + +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) + +TxDecode(txBz []byte) (sdk.Tx, error) + +TxEncode(tx sdk.Tx) ([]byte, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier + txSelector TxSelector + signerExtAdapter mempool.SignerExtractionAdapter +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) *DefaultProposalHandler { + return &DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, + txSelector: NewDefaultTxSelector(), + signerExtAdapter: mempool.NewDefaultSignerExtractionAdapter(), +} +} + +/ SetTxSelector sets the TxSelector function on the DefaultProposalHandler. +func (h *DefaultProposalHandler) + +SetTxSelector(ts TxSelector) { + h.txSelector = ts +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. 
Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h *DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + var maxBlockGas uint64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = uint64(b.MaxGas) +} + +defer h.txSelector.Clear() + + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + / + / Note, we still need to ensure the transactions returned respect req.MaxTxBytes. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + for _, txBz := range req.Txs { + tx, err := h.txVerifier.TxDecode(txBz) + if err != nil { + return nil, err +} + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, tx, txBz) + if stop { + break +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} + selectedTxsSignersSeqs := make(map[string]uint64) + +var ( + resError error + selectedTxsNums int + invalidTxs []sdk.Tx / invalid txs to be removed out of the loop to avoid dead lock + ) + +mempool.SelectBy(ctx, h.mempool, req.Txs, func(memTx sdk.Tx) + +bool { + unorderedTx, ok := memTx.(sdk.TxWithUnordered) + isUnordered := ok && unorderedTx.GetUnordered() + txSignersSeqs := make(map[string]uint64) + + / if the tx is unordered, we don't need to check the sequence, we just add it + if !isUnordered { + signerData, err := h.signerExtAdapter.GetSigners(memTx) + if err != nil { + / propagate the error to the caller + resError = err + return false +} + + / If the signers aren't in selectedTxsSignersSeqs then we haven't seen them before + / so we add them and continue given that we don't need to check the sequence. + shouldAdd := true + for _, signer := range signerData { + seq, ok := selectedTxsSignersSeqs[signer.Signer.String()] + if !ok { + txSignersSeqs[signer.Signer.String()] = signer.Sequence + continue +} + + / If we have seen this signer before in this block, we must make + / sure that the current sequence is seq+1; otherwise is invalid + / and we skip it. + if seq+1 != signer.Sequence { + shouldAdd = false + break +} + +txSignersSeqs[signer.Signer.String()] = signer.Sequence +} + if !shouldAdd { + return true +} + +} + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. 
+ txBz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + invalidTxs = append(invalidTxs, memTx) +} + +else { + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, memTx, txBz) + if stop { + return false +} + txsLen := len(h.txSelector.SelectedTxs(ctx)) + / If the tx is unordered, we don't need to update the sender sequence. + if !isUnordered { + for sender, seq := range txSignersSeqs { + / If txsLen != selectedTxsNums is true, it means that we've + / added a new tx to the selected txs, so we need to update + / the sequence of the sender. + if txsLen != selectedTxsNums { + selectedTxsSignersSeqs[sender] = seq +} + +else if _, ok := selectedTxsSignersSeqs[sender]; !ok { + / The transaction hasn't been added but it passed the + / verification, so we know that the sequence is correct. + / So we set this sender's sequence to seq-1, in order + / to avoid unnecessary calls to PrepareProposalVerifyTx. + selectedTxsSignersSeqs[sender] = seq - 1 +} + +} + +} + +selectedTxsNums = txsLen +} + +return true +}) + if resError != nil { + return nil, resError +} + for _, tx := range invalidTxs { + err := h.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return nil, err +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. 
+func (h *DefaultProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+	// If the mempool is nil or NoOp we simply return ACCEPT,
+	// because PrepareProposal may have included txs that could fail verification.
+	_, isNoOp := h.mempool.(mempool.NoOpMempool)
+	if h.mempool == nil || isNoOp {
+		return NoOpProcessProposal()
+	}
+
+	return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) {
+		var totalTxGas uint64
+
+		var maxBlockGas int64
+		if b := ctx.ConsensusParams().Block; b != nil {
+			maxBlockGas = b.MaxGas
+		}
+
+		for _, txBytes := range req.Txs {
+			tx, err := h.txVerifier.ProcessProposalVerifyTx(txBytes)
+			if err != nil {
+				return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+			}
+
+			if maxBlockGas > 0 {
+				gasTx, ok := tx.(GasTx)
+				if ok {
+					totalTxGas += gasTx.GetGas()
+				}
+
+				if totalTxGas > uint64(maxBlockGas) {
+					return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+				}
+			}
+		}
+
+		return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+	}
+}
+
+// NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always
+// return the transactions sent by the client's request.
+func NoOpPrepareProposal() sdk.PrepareProposalHandler {
+	return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+		return &abci.ResponsePrepareProposal{Txs: req.Txs}, nil
+	}
+}
+
+// NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always
+// return ACCEPT.
+func NoOpProcessProposal() sdk.ProcessProposalHandler {
+	return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) {
+		return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+	}
+}
+
+// NoOpExtendVote defines a no-op ExtendVote handler. It will always return an
+// empty byte slice as the vote extension.
+func NoOpExtendVote() sdk.ExtendVoteHandler {
+	return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+		return &abci.ResponseExtendVote{VoteExtension: []byte{}}, nil
+	}
+}
+
+// NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It
+// will always return an ACCEPT status with no error.
+func NoOpVerifyVoteExtensionHandler() sdk.VerifyVoteExtensionHandler {
+	return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
+		return &abci.ResponseVerifyVoteExtension{Status: abci.ResponseVerifyVoteExtension_ACCEPT}, nil
+	}
+}
+
+// TxSelector defines a helper type that assists in selecting transactions during
+// mempool transaction selection in PrepareProposal. It keeps track of the total
+// number of bytes and total gas of the selected transactions. It also keeps
+// track of the selected transactions themselves.
+type TxSelector interface {
+	// SelectedTxs should return a copy of the selected transactions.
+	SelectedTxs(ctx context.Context) [][]byte
+
+	// Clear should clear the TxSelector, nulling out all relevant fields.
+	Clear()
+
+	// SelectTxForProposal should attempt to select a transaction for inclusion in
+	// a proposal based on inclusion criteria defined by the TxSelector. It must
+	// return if the caller should halt the transaction selection loop
+	// (typically over a mempool) or otherwise.
+	SelectTxForProposal(ctx context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) bool
+}
+
+type defaultTxSelector struct {
+	totalTxBytes uint64
+	totalTxGas   uint64
+	selectedTxs  [][]byte
+}
+
+func NewDefaultTxSelector() TxSelector {
+	return &defaultTxSelector{}
+}
+
+func (ts *defaultTxSelector) SelectedTxs(_ context.Context) [][]byte {
+	txs := make([][]byte, len(ts.selectedTxs))
+	copy(txs, ts.selectedTxs)
+	return txs
+}
+
+func (ts *defaultTxSelector) Clear() {
+	ts.totalTxBytes = 0
+	ts.totalTxGas = 0
+	ts.selectedTxs = nil
+}
+
+func (ts *defaultTxSelector) SelectTxForProposal(_ context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) bool {
+	txSize := uint64(cmttypes.ComputeProtoSizeForTxs([]cmttypes.Tx{txBz}))
+
+	var txGasLimit uint64
+	if memTx != nil {
+		if gasTx, ok := memTx.(GasTx); ok {
+			txGasLimit = gasTx.GetGas()
+		}
+	}
+
+	// only add the transaction to the proposal if we have enough capacity
+	if (txSize + ts.totalTxBytes) <= maxTxBytes {
+		// If there is a max block gas limit, add the tx only if the limit has
+		// not been met.
+		if maxBlockGas > 0 {
+			if (txGasLimit + ts.totalTxGas) <= maxBlockGas {
+				ts.totalTxGas += txGasLimit
+				ts.totalTxBytes += txSize
+				ts.selectedTxs = append(ts.selectedTxs, txBz)
+			}
+		} else {
+			ts.totalTxBytes += txSize
+			ts.selectedTxs = append(ts.selectedTxs, txBz)
+		}
+	}
+
+	// check if we've reached capacity; if so, we cannot select any more transactions
+	return ts.totalTxBytes >= maxTxBytes || (maxBlockGas > 0 && (ts.totalTxGas >= maxBlockGas))
+}
+```
diff --git a/docs/sdk/v0.53/documentation/application-framework/context.mdx b/docs/sdk/v0.53/documentation/application-framework/context.mdx
new file mode 100644
index 00000000..b8026a0a
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/application-framework/context.mdx
@@ -0,0 +1,821 @@
+---
+title: Context
+---
+
+## Synopsis
+
+The `context` is a data structure intended to be passed from function to function that carries information about the current state of the application. It provides access to a branched storage (a safe branch of the entire state) as well as useful objects and information like `gasMeter`, `block height`, `consensus parameters` and more.
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy)
+- [Lifecycle of a Transaction](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle)
+
+## Context Definition
+
+The Cosmos SDK `Context` is a custom data structure that contains Go's stdlib [`context`](https://pkg.go.dev/context) as its base, and has many additional types within its definition that are specific to the Cosmos SDK. The `Context` is integral to transaction processing in that it allows modules to easily access their respective [store](/docs/sdk/v0.53/documentation/state-storage/store#base-layer-kvstores) in the [`multistore`](/docs/sdk/v0.53/documentation/state-storage/store#multistore) and retrieve transactional context such as the block header and gas meter.
+ +```go expandable +package types + +import ( + + "context" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/header" + "cosmossdk.io/log" + "cosmossdk.io/store/gaskv" + storetypes "cosmossdk.io/store/types" +) + +/ ExecMode defines the execution mode which can be set on a Context. +type ExecMode uint8 + +/ All possible execution modes. +const ( + ExecModeCheck ExecMode = iota + ExecModeReCheck + ExecModeSimulate + ExecModePrepareProposal + ExecModeProcessProposal + ExecModeVoteExtension + ExecModeVerifyVoteExtension + ExecModeFinalize +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + sigverifyTx bool / when run simulation, because the private key corresponding to the account in the genesis.json randomly generated, we must skip the sigverify. 
+ execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + +BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + +IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +IsSigverifyTx() + +bool { + return c.sigverifyTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + 
return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ BlockHeader returns the header by value. +func (c Context) + +BlockHeader() + +cmtproto.Header { + return c.header +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + sigverifyTx: true, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. 
+func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. +func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) + +c.headerHash = temp + return c +} + +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. +func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. +func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. 
+func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. +func (c Context) + +WithGasMeter(meter storetypes.GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter storetypes.GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. +func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} + +/ WithIsSigverifyTx called with true will sigverify in auth module +func (c Context) + +WithIsSigverifyTx(isSigverifyTx bool) + +Context { + c.sigverifyTx = isSigverifyTx + return c +} + +/ WithExecMode returns a Context with an updated ExecMode. 
+func (c Context) + +WithExecMode(m ExecMode) + +Context { + c.execMode = m + return c +} + +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) + +WithMinGasPrices(gasPrices DecCoins) + +Context { + c.minGasPrice = gasPrices + return c +} + +/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) + +WithConsensusParams(params cmtproto.ConsensusParams) + +Context { + c.consParams = params + return c +} + +/ WithEventManager returns a Context with an updated event manager +func (c Context) + +WithEventManager(em EventManagerI) + +Context { + c.eventManager = em + return c +} + +/ WithPriority returns a Context with an updated tx priority +func (c Context) + +WithPriority(p int64) + +Context { + c.priority = p + return c +} + +/ WithStreamingManager returns a Context with an updated streaming manager +func (c Context) + +WithStreamingManager(sm storetypes.StreamingManager) + +Context { + c.streamingManager = sm + return c +} + +/ WithCometInfo returns a Context with an updated comet info +func (c Context) + +WithCometInfo(cometInfo comet.BlockInfo) + +Context { + c.cometInfo = cometInfo + return c +} + +/ WithHeaderInfo returns a Context with an updated header info +func (c Context) + +WithHeaderInfo(headerInfo header.Info) + +Context { + / Settime to UTC + headerInfo.Time = headerInfo.Time.UTC() + +c.headerInfo = headerInfo + return c +} + +/ TODO: remove??? 
+func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value interface{ +}) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key interface{ +}) + +interface{ +} { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. 
It is useful for passing an sdk.Context through methods that take a +/ stdlib context.Context parameter such as generated gRPC methods. To get the original +/ sdk.Context back, call UnwrapSDKContext. +/ +/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context. +func WrapSDKContext(ctx Context) + +context.Context { + return ctx +} + +/ UnwrapSDKContext retrieves a Context from a context.Context instance +/ attached with WrapSDKContext. It panics if a Context was not properly +/ attached +func UnwrapSDKContext(ctx context.Context) + +Context { + if sdkCtx, ok := ctx.(Context); ok { + return sdkCtx +} + +return ctx.Value(SdkContextKey).(Context) +} +``` + +- **Base Context:** The base type is a Go [Context](https://pkg.go.dev/context), which is explained further in the [Go Context Package](#go-context-package) section below. +- **Multistore:** Every application's `BaseApp` contains a [`CommitMultiStore`](/docs/sdk/v0.53/documentation/state-storage/store#multistore) which is provided when a `Context` is created. Calling the `KVStore()` and `TransientStore()` methods allows modules to fetch their respective [`KVStore`](/docs/sdk/v0.53/documentation/state-storage/store#base-layer-kvstores) using their unique `StoreKey`. +- **Header:** The [header](https://docs.cometbft.com/v0.37/spec/core/data_structures#header) is a Blockchain type. It carries important information about the state of the blockchain, such as block height and proposer of the current block. +- **Header Hash:** The current block header hash, obtained during `abci.FinalizeBlock`. +- **Chain ID:** The unique identification number of the blockchain a block pertains to. +- **Transaction Bytes:** The `[]byte` representation of a transaction being processed using the context. Every transaction is processed by various parts of the Cosmos SDK and consensus engine (e.g. 
CometBFT) throughout its [lifecycle](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle), some of which do not have any understanding of transaction types. Thus, transactions are marshaled into the generic `[]byte` type using some kind of [encoding format](/docs/sdk/v0.53/documentation/protocol-development/encoding) such as [Amino](/docs/sdk/v0.53/documentation/protocol-development/encoding). +- **Logger:** A `logger` from the CometBFT libraries. Learn more about logs [here](https://docs.cometbft.com/v0.37/core/configuration). Modules call this method to create their own unique module-specific logger. +- **VoteInfo:** A list of the ABCI type [`VoteInfo`](https://docs.cometbft.com/master/spec/abci/abci.html#voteinfo), which includes the name of a validator and a boolean indicating whether they have signed the block. +- **Gas Meters:** Specifically, a [`gasMeter`](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#main-gas-meter) for the transaction currently being processed using the context and a [`blockGasMeter`](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#block-gas-meter) for the entire block it belongs to. Users specify how much in fees they wish to pay for the execution of their transaction; these gas meters keep track of how much [gas](/docs/sdk/v0.53/documentation/protocol-development/gas-fees) has been used in the transaction or block so far. If the gas meter runs out, execution halts. +- **CheckTx Mode:** A boolean value indicating whether a transaction should be processed in `CheckTx` or `DeliverTx` mode. +- **Min Gas Price:** The minimum [gas](/docs/sdk/v0.53/documentation/protocol-development/gas-fees) price a node is willing to take in order to include a transaction in its block. This price is a local value configured by each node individually, and should therefore **not be used in any functions used in sequences leading to state-transitions**. 
+- **Consensus Params:** The ABCI type [Consensus Parameters](https://docs.cometbft.com/master/spec/abci/apps.html#consensus-parameters), which specify certain limits for the blockchain, such as maximum gas for a block.
+- **Event Manager:** The event manager allows any caller with access to a `Context` to emit [`Events`](/docs/sdk/v0.53/api-reference/events-streaming/events). Modules may define module-specific `Events` by defining various `Types` and `Attributes` or use the common definitions found in `types/`. Clients can subscribe or query for these `Events`. These `Events` are collected throughout `FinalizeBlock` and are returned to CometBFT for indexing.
+- **Priority:** The transaction priority, only relevant in `CheckTx`.
+- **KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the `KVStore`.
+- **Transient KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the transient `KVStore`.
+- **StreamingManager:** The `streamingManager` field provides access to the streaming manager, which allows modules to subscribe to state changes emitted by the blockchain. The streaming manager is used by the state listening API, which is described in [ADR 038](https://docs.cosmos.network/main/architecture/adr-038-state-listening).
+- **CometInfo:** A lightweight field that contains information about the current block, such as the block height, time, and hash. This information can be used for validating evidence, providing historical data, and enhancing the user experience. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/comet/service.go#L14).
+- **HeaderInfo:** The `headerInfo` field contains information about the current block header, such as the chain ID, gas limit, and timestamp. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/header/service.go#L14).
+
+## Go Context Package
+
+A basic `Context` is defined in the [Golang Context Package](https://pkg.go.dev/context).
A `Context` is an immutable data structure that carries request-scoped data across APIs and processes. Contexts are also designed to enable concurrency and to be used in goroutines.
+
+Contexts are intended to be **immutable**; they should never be edited. Instead, the convention is to create a child context from its parent using a `With` function. For example:
+
+```go
+childCtx = parentCtx.WithBlockHeader(header)
+```
+
+The [Golang Context Package](https://pkg.go.dev/context) documentation instructs developers to explicitly pass a context `ctx` as the first argument of a process.
+
+## Store branching
+
+The `Context` contains a `MultiStore`, which allows for branching and caching functionality using `CacheMultiStore` (queries in `CacheMultiStore` are cached to avoid future round trips). Each `KVStore` is branched into safe, isolated ephemeral storage. Processes are free to write changes to the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can be committed to the underlying store at the end of the sequence, or discarded if something goes wrong. The pattern of usage for a Context is as follows:
+
+1. A process receives a Context `ctx` from its parent process, which provides information needed to perform the process.
+2. The `ctx.ms` is a **branched store**, i.e. a branch of the [multistore](/docs/sdk/v0.53/documentation/state-storage/store#multistore) is made so that the process can make changes to the state as it executes, without changing the original `ctx.ms`. This is useful to protect the underlying multistore in case the changes need to be reverted at some point in the execution.
+3. The process may read and write from `ctx` as it is executing. It may call a subprocess and pass `ctx` to it as needed.
+4. When a subprocess returns, it checks if the result is a success or failure. If a failure, nothing needs to be done - the branch `ctx` is simply discarded.
If successful, the changes made to the `CacheMultiStore` can be committed to the original `ctx.ms` via `Write()`.
+
+For example, here is a snippet from the [`runTx`](/docs/sdk/v0.53/documentation/application-framework/baseapp#runtx-antehandler-runmsgs-posthandler) function in [`baseapp`](/docs/sdk/v0.53/documentation/application-framework/baseapp):
+
+```go
+runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)
+result = app.runMsgs(runMsgCtx, msgs, mode)
+result.GasWanted = gasWanted
+
+if mode != runTxModeDeliver {
+	return result
+}
+
+if result.IsOK() {
+	msCache.Write()
+}
+```
+
+Here is the process:
+
+1. Prior to calling `runMsgs` on the message(s) in the transaction, `runTx` uses `app.cacheTxContext()` to branch and cache the context and multistore.
+2. `runMsgCtx`, the context with the branched store, is used in `runMsgs` to return a result.
+3. If the process is running in [`checkTxMode`](/docs/sdk/v0.53/documentation/application-framework/baseapp#checktx), there is no need to write the changes - the result is returned immediately.
+4. If the process is running in [`deliverTxMode`](/docs/sdk/v0.53/documentation/application-framework/baseapp#delivertx) and the result indicates a successful run over all the messages, the branched multistore is written back to the original.
diff --git a/docs/sdk/v0.53/documentation/application-framework/depinject.mdx b/docs/sdk/v0.53/documentation/application-framework/depinject.mdx
new file mode 100644
index 00000000..d9b94cd9
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/application-framework/depinject.mdx
@@ -0,0 +1,678 @@
+---
+title: Depinject
+---
+
+> **DISCLAIMER**: This is a **beta** package. The SDK team is actively working on this feature and we are looking for feedback from the community. Please try it out and let us know what you think.
+
+## Overview
+
+`depinject` is a dependency injection (DI) framework for the Cosmos SDK, designed to streamline the process of building and configuring blockchain applications. It works in conjunction with the `core/appconfig` module to replace the majority of boilerplate code in `app.go` with a configuration file in Go, YAML, or JSON format.
+
+`depinject` is particularly useful for developing blockchain applications:
+
+* With multiple interdependent components, modules, or services, as it helps manage their dependencies effectively.
+* That require decoupling of these components, making it easier to test, modify, or replace individual parts without affecting the entire system.
+* That want to simplify the setup and initialisation of modules and their dependencies by reducing boilerplate code and automating dependency management.
+
+By using `depinject`, developers can achieve:
+
+* Cleaner and more organised code.
+
+* Improved modularity and maintainability.
+
+* A more maintainable and modular structure for their blockchain applications, ultimately enhancing development velocity and code quality.
+
+* [Go Doc](https://pkg.go.dev/cosmossdk.io/depinject)
+
+## Usage
+
+The `depinject` framework, based on dependency injection concepts, streamlines the management of dependencies within your blockchain application using its Configuration API. This API offers a set of functions and methods to create easy-to-use configurations, making it simple to define, modify, and access dependencies and their relationships.
+
+A core component of the [Configuration API](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject#Config) is the `Provide` function, which allows you to register provider functions that supply dependencies. Inspired by constructor injection, these provider functions form the basis of the dependency tree, enabling the management and resolution of dependencies in a structured and maintainable manner.
Additionally, `depinject` supports interface types as inputs to provider functions, offering flexibility and decoupling between components, similar to interface injection concepts. + +By leveraging `depinject` and its Configuration API, you can efficiently handle dependencies in your blockchain application, ensuring a clean, modular, and well-organised codebase. + +Example: + +```go expandable +package main + +import ( + + "fmt" + "cosmossdk.io/depinject" +) + +type AnotherInt int + +func GetInt() + +int { + return 1 +} + +func GetAnotherInt() + +AnotherInt { + return 2 +} + +func main() { + var ( + x int + y AnotherInt + ) + +fmt.Printf("Before (%v, %v)\n", x, y) + +depinject.Inject( + depinject.Provide( + GetInt, + GetAnotherInt, + ), + &x, + &y, + ) + +fmt.Printf("After (%v, %v)\n", x, y) +} +``` + +In this example, `depinject.Provide` registers two provider functions that return `int` and `AnotherInt` values. The `depinject.Inject` function is then used to inject these values into the variables `x` and `y`. + +Provider functions serve as the basis for the dependency tree. They are analysed to identify their inputs as dependencies and their outputs as dependents. These dependents can either be used by another provider function or be stored outside the DI container (e.g., `&x` and `&y` in the example above). Provider functions must be exported. + +### Interface type resolution + +`depinject` supports the use of interface types as inputs to provider functions, which helps decouple dependencies between modules. This approach is particularly useful for managing complex systems with multiple modules, such as the Cosmos SDK, where dependencies need to be flexible and maintainable. + +For example, `x/bank` expects an [AccountKeeper](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/x/bank/types#AccountKeeper) interface as [input to ProvideModule](https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/bank/module.go#L208-L260). 
`SimApp` uses the implementation in `x/auth`, but the modular design allows for easy changes to the implementation if needed.
+
+Consider the following example:
+
+```go expandable
+package duck
+
+type Duck interface {
+	quack()
+}
+
+type AlsoDuck interface {
+	quack()
+}
+
+type Mallard struct{}
+
+type Canvasback struct{}
+
+func (duck Mallard) quack() {}
+
+func (duck Canvasback) quack() {}
+
+type Pond struct {
+	Duck AlsoDuck
+}
+```
+
+And the following provider functions:
+
+```go expandable
+func GetMallard() Mallard {
+	return Mallard{}
+}
+
+func GetPond(duck Duck) Pond {
+	return Pond{Duck: duck}
+}
+
+func GetCanvasback() Canvasback {
+	return Canvasback{}
+}
+```
+
+In this example, there's a `Pond` struct that has a `Duck` field of type `AlsoDuck`. The `depinject` framework can automatically resolve the appropriate implementation when there's only one available, as shown below:
+
+```go
+var pond Pond
+
+depinject.Inject(
+	depinject.Provide(
+		GetMallard,
+		GetPond,
+	),
+	&pond)
+```
+
+This code snippet results in the `Duck` field of `Pond` being implicitly bound to the `Mallard` implementation because it's the only implementation of the `Duck` interface in the container.
+
+However, if there are multiple implementations of the `Duck` interface, as in the following example, you'll encounter an error:
+
+```go
+var pond Pond
+
+depinject.Inject(
+	depinject.Provide(
+		GetMallard,
+		GetCanvasback,
+		GetPond,
+	),
+	&pond)
+```
+
+A specific binding preference for `Duck` is required.
+
+#### `BindInterface` API
+
+In the above situation, registering a binding for a given interface may look like:
+
+```go expandable
+depinject.Inject(
+	depinject.Configs(
+		depinject.BindInterface(
+			"duck/duck.Duck",
+			"duck/duck.Mallard",
+		),
+		depinject.Provide(
+			GetMallard,
+			GetCanvasback,
+			GetPond,
+		),
+	),
+	&pond)
```
+
+Now `depinject` has enough information to provide `Mallard` as an input to `Pond`.
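A binding preference can also be scoped to a single module rather than the whole container. The sketch below assumes `depinject`'s `BindInterfaceInModule` function (per the package's Go documentation, it takes a module name followed by the same interface and implementation type names as `BindInterface`) and a hypothetical module name `"moduleFoo"`:

```go
depinject.Inject(
	depinject.Configs(
		// Only providers resolved within "moduleFoo" see this preference;
		// other modules still need their own binding (or a global one).
		depinject.BindInterfaceInModule(
			"moduleFoo",
			"duck/duck.Duck",
			"duck/duck.Mallard",
		),
		depinject.Provide(
			GetMallard,
			GetCanvasback,
			GetPond,
		),
	),
	&pond)
```

Module-scoped bindings are useful when two modules legitimately want different implementations of the same interface.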
+ +### Full example in real app + + +When using `depinject.Inject`, the injected types must be pointers. + + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper 
"github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. +type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). + / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom minting function that implements the mintkeeper.MintFn + / interface. 
+ ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) 
+ + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) GetKey(storeKey string) *storetypes.KVStoreKey {
+    sk := app.UnsafeFindStoreKey(storeKey)
+    kvStoreKey, ok := sk.(*storetypes.KVStoreKey)
+    if !ok {
+        return nil
+    }
+    return kvStoreKey
+}
+
+func (app *SimApp) kvStoreKeys() map[string]*storetypes.KVStoreKey {
+    keys := make(map[string]*storetypes.KVStoreKey)
+    for _, k := range app.GetStoreKeys() {
+        if kv, ok := k.(*storetypes.KVStoreKey); ok {
+            keys[kv.Name()] = kv
+        }
+    }
+    return keys
+}
+
+// SimulationManager implements the SimulationApp interface
+func (app *SimApp) SimulationManager() *module.SimulationManager {
+    return app.sm
+}
+
+// RegisterAPIRoutes registers all application module routes with the provided
+// API server.
+func (app *SimApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
+    app.App.RegisterAPIRoutes(apiSvr, apiConfig)
+    // register swagger API in app.go so that other applications can override easily
+    if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil {
+        panic(err)
+    }
+}
+
+// GetMaccPerms returns a copy of the module account permissions
+//
+// NOTE: This is solely to be used for testing purposes.
+func GetMaccPerms() map[string][]string {
+    dup := make(map[string][]string)
+    for _, perms := range moduleAccPerms {
+        dup[perms.Account] = perms.Permissions
+    }
+    return dup
+}
+
+// BlockedAddresses returns all the app's blocked account addresses.
+func BlockedAddresses() map[string]bool {
+    result := make(map[string]bool)
+    if len(blockAccAddrs) > 0 {
+        for _, addr := range blockAccAddrs {
+            result[addr] = true
+        }
+    } else {
+        for addr := range GetMaccPerms() {
+            result[addr] = true
+        }
+    }
+    return result
+}
+```
+
+## Debugging
+
+Issues with resolving dependencies in the container can be debugged using logs and [Graphviz](https://graphviz.org) renderings of the container tree.
+By default, whenever there is an error, logs will be printed to stderr and a rendering of the dependency graph in Graphviz DOT format will be saved to `debug_container.dot`.
+
+Here is an example Graphviz rendering of a successful build of a dependency graph:
+![Graphviz Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example.svg)
+
+Rectangles represent functions, ovals represent types, rounded rectangles represent modules, and the single hexagon
+represents the function which called `Build`. Black-colored shapes mark functions and types that were called/resolved
+without an error. Gray-colored nodes mark functions and types that could have been called/resolved in the container but
+were left unused.
+
+Here is an example Graphviz rendering of a dependency graph build which failed:
+![Graphviz Error Example](https://raw.githubusercontent.com/cosmos/cosmos-sdk/ff39d243d421442b400befcd959ec3ccd2525154/depinject/testdata/example_error.svg)
+
+Graphviz DOT files can be converted into SVGs for viewing in a web browser using the `dot` command-line tool, for example:
+
+```txt
+dot -Tsvg debug_container.dot > debug_container.svg
+```
+
+Many other tools, including some IDEs, support working with DOT files.
diff --git a/docs/sdk/v0.53/documentation/application-framework/runtime.mdx b/docs/sdk/v0.53/documentation/application-framework/runtime.mdx
new file mode 100644
index 00000000..1111b22f
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/application-framework/runtime.mdx
@@ -0,0 +1,1877 @@
+---
+title: What is `runtime`?
+description: >-
+  The runtime package in the Cosmos SDK provides a flexible framework for
+  configuring and managing blockchain applications. It serves as the foundation
+  for creating modular blockchain applications using a declarative configuration
+  approach.
+---
+
+The `runtime` package in the Cosmos SDK provides a flexible framework for configuring and managing blockchain applications.
It serves as the foundation for creating modular blockchain applications using a declarative configuration approach. + +## Overview + +The runtime package acts as a wrapper around the `BaseApp` and `ModuleManager`, offering a hybrid approach where applications can be configured both declaratively through configuration files and programmatically through traditional methods. +It is a layer of abstraction between `baseapp` and the application modules that simplifies the process of building a Cosmos SDK application. + +## Core Components + +### App Structure + +The runtime App struct contains several key components: + +```go +type App struct { + *baseapp.BaseApp + ModuleManager *module.Manager + configurator module.Configurator + config *runtimev1alpha1.Module + storeKeys []storetypes.StoreKey + / ... other fields +} +``` + +Cosmos SDK applications should embed the `*runtime.App` struct to leverage the runtime module. + +```go expandable +/go:build !app_v1 + +package simapp + +import ( + + "io" + + dbm "github.com/cosmos/cosmos-db" + + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + nftkeeper "cosmossdk.io/x/nft/keeper" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/x/auth" + 
"github.com/cosmos/cosmos-sdk/x/auth/ante" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + consensuskeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +/ DefaultNodeHome default home directories for the application daemon +var DefaultNodeHome string + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *runtime.App + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry codectypes.InterfaceRegistry + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper *govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensuskeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / simulation manager + sm *module.SimulationManager +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. +func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + var ( + app = &SimApp{ +} + +appBuilder *runtime.AppBuilder + + / merge the AppConfig and other configuration in one config + appConfig = depinject.Configs( + AppConfig, + depinject.Supply( + / supply the application options + appOpts, + / supply the logger + logger, + + / ADVANCED CONFIGURATION + + / + / AUTH + / + / For providing a custom function required in auth to generate custom account types + / add it below. By default the auth module uses simulation.RandomGenesisAccounts. + / + / authtypes.RandomGenesisAccountsFn(simulation.RandomGenesisAccounts), + / + / For providing a custom a base account type add it below. + / By default the auth module uses authtypes.ProtoBaseAccount(). 
+ / + / func() + +sdk.AccountI { + return authtypes.ProtoBaseAccount() +}, + / + / For providing a different address codec, add it below. + / By default the auth module uses a Bech32 address codec, + / with the prefix defined in the auth module configuration. + / + / func() + +address.Codec { + return <- custom address codec type -> +} + / + / STAKING + / + / For provinding a different validator and consensus address codec, add it below. + / By default the staking module uses the bech32 prefix provided in the auth config, + / and appends "valoper" and "valcons" for validator and consensus addresses respectively. + / When providing a custom address codec in auth, custom address codecs must be provided here as well. + / + / func() + +runtime.ValidatorAddressCodec { + return <- custom validator address codec type -> +} + / func() + +runtime.ConsensusAddressCodec { + return <- custom consensus address codec type -> +} + + / + / MINT + / + + / For providing a custom inflation function for x/mint add here your + / custom minting function that implements the mintkeeper.MintFn + / interface. + ), + ) + ) + if err := depinject.Inject(appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + &app.AccountKeeper, + &app.BankKeeper, + &app.StakingKeeper, + &app.SlashingKeeper, + &app.MintKeeper, + &app.DistrKeeper, + &app.GovKeeper, + &app.UpgradeKeeper, + &app.AuthzKeeper, + &app.EvidenceKeeper, + &app.FeeGrantKeeper, + &app.GroupKeeper, + &app.NFTKeeper, + &app.ConsensusParamsKeeper, + &app.CircuitKeeper, + &app.EpochsKeeper, + &app.ProtocolPoolKeeper, + ); err != nil { + panic(err) +} + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / app.App = appBuilder.Build(...) 
+ / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, app.App.BaseApp) + / + / app.App.BaseApp.SetMempool(nonceMempool) + / app.App.BaseApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / app.App.BaseApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to the appBuilder. + / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + +app.App = appBuilder.Build(db, traceStore, baseAppOptions...) + + / register streaming services + if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil { + panic(err) +} + + /**** Module Options ****/ + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. 
+ app.RegisterUpgradeHandlers() + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / A custom InitChainer can be set if extra pre-init-genesis logic is required. + / By default, when using app wiring enabled module, this is not required. + / For instance, the upgrade module will set automatically the module version map in its init genesis thanks to app wiring. + / However, when registering a module manually (i.e. that does not support app wiring), the module version map + / must be set manually as follow. The upgrade module will de-duplicate the module version map. + / + / app.SetInitChainer(func(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + / app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + / return app.App.InitChainer(ctx, req) + / +}) + + / set custom ante handler + app.setAnteHandler(app.txConfig) + if err := app.Load(loadLatest); err != nil { + panic(err) +} + +return app +} + +/ setAnteHandler sets custom ante handlers. +/ "x/auth/tx" pre-defined ante handler have been disabled in app_config. 
+func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry. +func (app *SimApp) + +InterfaceRegistry() + +codectypes.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. 
+func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + sk := app.UnsafeFindStoreKey(storeKey) + +kvStoreKey, ok := sk.(*storetypes.KVStoreKey) + if !ok { + return nil +} + +return kvStoreKey +} + +func (app *SimApp) + +kvStoreKeys() + +map[string]*storetypes.KVStoreKey { + keys := make(map[string]*storetypes.KVStoreKey) + for _, k := range app.GetStoreKeys() { + if kv, ok := k.(*storetypes.KVStoreKey); ok { + keys[kv.Name()] = kv +} + +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + app.App.RegisterAPIRoutes(apiSvr, apiConfig) + / register swagger API in app.go so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + dup := make(map[string][]string) + for _, perms := range moduleAccPerms { + dup[perms.Account] = perms.Permissions +} + +return dup +} + +/ BlockedAddresses returns all the app's blocked account addresses. +func BlockedAddresses() + +map[string]bool { + result := make(map[string]bool) + if len(blockAccAddrs) > 0 { + for _, addr := range blockAccAddrs { + result[addr] = true +} + +} + +else { + for addr := range GetMaccPerms() { + result[addr] = true +} + +} + +return result +} +``` + +### Configuration + +The runtime module is configured using App Wiring. 
The main configuration object is the [`Module` message](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/app/runtime/v1alpha1/module.proto), which supports the following key settings: + +* `app_name`: The name of the application +* `begin_blockers`: List of module names to call during BeginBlock +* `end_blockers`: List of module names to call during EndBlock +* `init_genesis`: Order of module initialization during genesis +* `export_genesis`: Order for exporting module genesis data +* `pre_blockers`: Modules to execute before block processing + +Learn more about wiring `runtime` in the [next section](/docs/sdk/v0.53/documentation/application-framework/app-go-di). + +#### Store Configuration + +By default, the runtime module uses the module name as the store key. +However it provides a flexible store key configuration through: + +* `override_store_keys`: Allows customizing module store keys +* `skip_store_keys`: Specifies store keys to skip during keeper construction + +Example configuration: + +```go expandable +package simapp + +import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 
"cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes 
"github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName +}, + { + Account: distrtypes.ModuleName +}, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter +}}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner +}}, + { + Account: nft.ModuleName +}, + { + Account: protocolpooltypes.ModuleName +}, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount +}, +} + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / 
govtypes.ModuleName +} + +ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, +}, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, +}, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, +}, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", +}, +}, + SkipStoreKeys: []string{ + "tx", +}, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +}, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +}, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ +}, +}), +}, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + EnableUnorderedTransactions: true, +}), +}, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ +}), +}, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, +}), +}, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + / NOTE: specifying a prefix is only necessary when using bech32 addresses + / If not specfied, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default + Bech32PrefixValidator: "cosmosvaloper", + Bech32PrefixConsensus: "cosmosvalcons", +}), +}, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ +}), +}, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + SkipAnteHandler: true, / Enable this to skip the default antehandlers and set custom ante handlers. 
+}), +}, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ +}), +}, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ +}), +}, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ +}), +}, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ +}), +}, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ +}), +}, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ +}), +}, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, +}), +}, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ +}), +}, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ +}), +}, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ +}), +}, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ +}), +}, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ +}), +}, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ +}), +}, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ +}), +}, +} + + / AppConfig is application configuration (used by depinject) + +AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, +}), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}, + ), + ) +) +``` + +## Key Features + +### 1. 
BaseApp and other Core SDK components integration + +The runtime module integrates with the `BaseApp` and other core SDK components to provide a seamless experience for developers. + +The developer only needs to embed the `runtime.App` struct in their application to leverage the runtime module. +The configuration of the module manager and other core components is handled internally via the [`AppBuilder`](#4-application-building). + +### 2. Module Registration + +Runtime has built-in support for [`depinject`-enabled modules](/docs/sdk/v0.53/documentation/module-system/depinject). +Such modules can be registered through the configuration file (often named `app_config.go`), with no additional code required. + +```go expandable +package simapp + +import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 
"cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/core/appconfig" + "cosmossdk.io/depinject" + _ "cosmossdk.io/x/circuit" / import for side-effects + circuittypes "cosmossdk.io/x/circuit/types" + _ "cosmossdk.io/x/evidence" / import for side-effects + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + "cosmossdk.io/x/nft" + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + 
"github.com/cosmos/cosmos-sdk/x/group" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +var ( + / module account permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName +}, + { + Account: distrtypes.ModuleName +}, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter +}}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName +}}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner +}}, + { + Account: nft.ModuleName +}, + { + Account: protocolpooltypes.ModuleName +}, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount +}, +} + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName +} + +ModuleConfig = []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / NOTE: upgrade module is required to be prioritized + PreBlockers: []string{ + upgradetypes.ModuleName, + authtypes.ModuleName, 
+}, + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, +}, + EndBlockers: []string{ + govtypes.ModuleName, + stakingtypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, +}, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", +}, +}, + SkipStoreKeys: []string{ + "tx", +}, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +}, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + ExportGenesis: []string{ + consensustypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +}, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ +}, +}), +}, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + EnableUnorderedTransactions: true, +}), +}, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ +}), +}, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, +}), +}, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + / NOTE: specifying a prefix is only necessary when using bech32 addresses + / If not specfied, the auth Bech32Prefix appended with "valoper" and "valcons" is used by default + Bech32PrefixValidator: "cosmosvaloper", + Bech32PrefixConsensus: "cosmosvalcons", +}), +}, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ +}), +}, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + SkipAnteHandler: true, / Enable this to skip the default antehandlers and set custom ante handlers. 
+}), +}, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ +}), +}, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ +}), +}, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ +}), +}, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ +}), +}, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ +}), +}, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ +}), +}, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, +}), +}, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ +}), +}, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ +}), +}, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ +}), +}, + { + Name: consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ +}), +}, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ +}), +}, + { + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ +}), +}, + { + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ +}), +}, +} + + / AppConfig is application configuration (used by depinject) + +AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: ModuleConfig, +}), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}, + ), + ) +) +``` + +Additionally, the runtime package facilitates manual module registration through the 
`RegisterModules` method. This is the primary integration point for modules not registered via configuration. + + +Even when using manual registration, the module should still be configured in the `Module` message in AppConfig. + + +```go +func (a *App) RegisterModules(modules ...module.AppModule) error +``` + +The SDK recommends using the declarative approach with `depinject` for module registration whenever possible. + +### 3. Service Registration + +Runtime registers all [core services](https://pkg.go.dev/cosmossdk.io/core) required by modules. +These services include `store`, `event manager`, `context`, and `logger`. +Runtime ensures that services are scoped to their respective modules during the wiring process. + +```go expandable +package runtime + +import ( + + "fmt" + "os" + "slices" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoregistry" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/event" + "cosmossdk.io/core/genesis" + "cosmossdk.io/core/header" + "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/tx/signing" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type appModule struct { + app *App +} + +func (m appModule) + +RegisterServices(configurator module.Configurator) { + err :=
m.app.registerRuntimeServices(configurator) + if err != nil { + panic(err) +} +} + +func (m appModule) + +IsOnePerModuleType() { +} + +func (m appModule) + +IsAppModule() { +} + +var ( + _ appmodule.AppModule = appModule{ +} + _ module.HasServices = appModule{ +} +) + +/ BaseAppOption is a depinject.AutoGroupType which can be used to pass +/ BaseApp options into the depinject. It should be used carefully. +type BaseAppOption func(*baseapp.BaseApp) + +/ IsManyPerContainerType indicates that this is a depinject.ManyPerContainerType. +func (b BaseAppOption) + +IsManyPerContainerType() { +} + +func init() { + appmodule.Register(&runtimev1alpha1.Module{ +}, + appmodule.Provide( + ProvideApp, + ProvideInterfaceRegistry, + ProvideKVStoreKey, + ProvideTransientStoreKey, + ProvideMemoryStoreKey, + ProvideGenesisTxHandler, + ProvideKVStoreService, + ProvideMemoryStoreService, + ProvideTransientStoreService, + ProvideEventService, + ProvideHeaderInfoService, + ProvideCometInfoService, + ProvideBasicManager, + ProvideAddressCodec, + ), + appmodule.Invoke(SetupAppBuilder), + ) +} + +func ProvideApp(interfaceRegistry codectypes.InterfaceRegistry) ( + codec.Codec, + *codec.LegacyAmino, + *AppBuilder, + *baseapp.MsgServiceRouter, + *baseapp.GRPCQueryRouter, + appmodule.AppModule, + protodesc.Resolver, + protoregistry.MessageTypeResolver, + error, +) { + protoFiles := proto.HybridResolver + protoTypes := protoregistry.GlobalTypes + + / At startup, check that all proto annotations are correct. + if err := msgservice.ValidateProtoAnnotations(protoFiles); err != nil { + / Once we switch to using protoreflect-based ante handlers, we might + / want to panic here instead of logging a warning. 
+ _, _ = fmt.Fprintln(os.Stderr, err.Error()) +} + amino := codec.NewLegacyAmino() + +std.RegisterInterfaces(interfaceRegistry) + +std.RegisterLegacyAminoCodec(amino) + cdc := codec.NewProtoCodec(interfaceRegistry) + msgServiceRouter := baseapp.NewMsgServiceRouter() + grpcQueryRouter := baseapp.NewGRPCQueryRouter() + app := &App{ + storeKeys: nil, + interfaceRegistry: interfaceRegistry, + cdc: cdc, + amino: amino, + basicManager: module.BasicManager{ +}, + msgServiceRouter: msgServiceRouter, + grpcQueryRouter: grpcQueryRouter, +} + appBuilder := &AppBuilder{ + app +} + +return cdc, amino, appBuilder, msgServiceRouter, grpcQueryRouter, appModule{ + app +}, protoFiles, protoTypes, nil +} + +type AppInputs struct { + depinject.In + + AppConfig *appv1alpha1.Config `optional:"true"` + Config *runtimev1alpha1.Module + AppBuilder *AppBuilder + Modules map[string]appmodule.AppModule + CustomModuleBasics map[string]module.AppModuleBasic `optional:"true"` + BaseAppOptions []BaseAppOption + InterfaceRegistry codectypes.InterfaceRegistry + LegacyAmino *codec.LegacyAmino + Logger log.Logger +} + +func SetupAppBuilder(inputs AppInputs) { + app := inputs.AppBuilder.app + app.baseAppOptions = inputs.BaseAppOptions + app.config = inputs.Config + app.appConfig = inputs.AppConfig + app.logger = inputs.Logger + app.ModuleManager = module.NewManagerFromMap(inputs.Modules) + for name, mod := range inputs.Modules { + if customBasicMod, ok := inputs.CustomModuleBasics[name]; ok { + app.basicManager[name] = customBasicMod + customBasicMod.RegisterInterfaces(inputs.InterfaceRegistry) + +customBasicMod.RegisterLegacyAminoCodec(inputs.LegacyAmino) + +continue +} + coreAppModuleBasic := module.CoreAppModuleBasicAdaptor(name, mod) + +app.basicManager[name] = coreAppModuleBasic + coreAppModuleBasic.RegisterInterfaces(inputs.InterfaceRegistry) + +coreAppModuleBasic.RegisterLegacyAminoCodec(inputs.LegacyAmino) +} +} + +func ProvideInterfaceRegistry(addressCodec address.Codec, validatorAddressCodec 
ValidatorAddressCodec, customGetSigners []signing.CustomGetSigner) (codectypes.InterfaceRegistry, error) { + signingOptions := signing.Options{ + AddressCodec: addressCodec, + ValidatorAddressCodec: validatorAddressCodec, +} + for _, signer := range customGetSigners { + signingOptions.DefineCustomGetSigners(signer.MsgType, signer.Fn) +} + +interfaceRegistry, err := codectypes.NewInterfaceRegistryWithOptions(codectypes.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signingOptions, +}) + if err != nil { + return nil, err +} + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + return nil, err +} + +return interfaceRegistry, nil +} + +func registerStoreKey(wrapper *AppBuilder, key storetypes.StoreKey) { + wrapper.app.storeKeys = append(wrapper.app.storeKeys, key) +} + +func storeKeyOverride(config *runtimev1alpha1.Module, moduleName string) *runtimev1alpha1.StoreKeyConfig { + for _, cfg := range config.OverrideStoreKeys { + if cfg.ModuleName == moduleName { + return cfg +} + +} + +return nil +} + +func ProvideKVStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.KVStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + override := storeKeyOverride(config, key.Name()) + +var storeKeyName string + if override != nil { + storeKeyName = override.KvStoreKey +} + +else { + storeKeyName = key.Name() +} + storeKey := storetypes.NewKVStoreKey(storeKeyName) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideTransientStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.TransientStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewTransientStoreKey(fmt.Sprintf("transient:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideMemoryStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app 
*AppBuilder) *storetypes.MemoryStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewMemoryStoreKey(fmt.Sprintf("memory:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideGenesisTxHandler(appBuilder *AppBuilder) + +genesis.TxHandler { + return appBuilder.app +} + +func ProvideKVStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.KVStoreService { + storeKey := ProvideKVStoreKey(config, key, app) + +return kvStoreService{ + key: storeKey +} +} + +func ProvideMemoryStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.MemoryStoreService { + storeKey := ProvideMemoryStoreKey(config, key, app) + +return memStoreService{ + key: storeKey +} +} + +func ProvideTransientStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.TransientStoreService { + storeKey := ProvideTransientStoreKey(config, key, app) + +return transientStoreService{ + key: storeKey +} +} + +func ProvideEventService() + +event.Service { + return EventService{ +} +} + +func ProvideCometInfoService() + +comet.BlockInfoService { + return cometInfoService{ +} +} + +func ProvideHeaderInfoService(app *AppBuilder) + +header.Service { + return headerInfoService{ +} +} + +func ProvideBasicManager(app *AppBuilder) + +module.BasicManager { + return app.app.basicManager +} + +type ( + / ValidatorAddressCodec is an alias for address.Codec for validator addresses. + ValidatorAddressCodec address.Codec + + / ConsensusAddressCodec is an alias for address.Codec for validator consensus addresses. 
+ ConsensusAddressCodec address.Codec +) + +type AddressCodecInputs struct { + depinject.In + + AuthConfig *authmodulev1.Module `optional:"true"` + StakingConfig *stakingmodulev1.Module `optional:"true"` + + AddressCodecFactory func() + +address.Codec `optional:"true"` + ValidatorAddressCodecFactory func() + +ValidatorAddressCodec `optional:"true"` + ConsensusAddressCodecFactory func() + +ConsensusAddressCodec `optional:"true"` +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. +func ProvideAddressCodec(in AddressCodecInputs) (address.Codec, ValidatorAddressCodec, ConsensusAddressCodec) { + if in.AddressCodecFactory != nil && in.ValidatorAddressCodecFactory != nil && in.ConsensusAddressCodecFactory != nil { + return in.AddressCodecFactory(), in.ValidatorAddressCodecFactory(), in.ConsensusAddressCodecFactory() +} + if in.AuthConfig == nil || in.AuthConfig.Bech32Prefix == "" { + panic("auth config bech32 prefix cannot be empty if no custom address codec is provided") +} + if in.StakingConfig == nil { + in.StakingConfig = &stakingmodulev1.Module{ +} + +} + if in.StakingConfig.Bech32PrefixValidator == "" { + in.StakingConfig.Bech32PrefixValidator = fmt.Sprintf("%svaloper", in.AuthConfig.Bech32Prefix) +} + if in.StakingConfig.Bech32PrefixConsensus == "" { + in.StakingConfig.Bech32PrefixConsensus = fmt.Sprintf("%svalcons", in.AuthConfig.Bech32Prefix) +} + +return addresscodec.NewBech32Codec(in.AuthConfig.Bech32Prefix), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixValidator), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixConsensus) +} +``` + +Additionally, runtime automatically registers other essential services (e.g., gRPC routes) that become available to the App: + +* AutoCLI Query Service +* Reflection Service +* Custom module services + +```go expandable +package runtime + +import ( + + "encoding/json" + "io" + + dbm "github.com/cosmos/cosmos-db" +
"github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AppBuilder is a type that is injected into a container by the runtime module +/ (as *AppBuilder) + +which can be used to create an app which is compatible with +/ the existing app.go initialization conventions. +type AppBuilder struct { + app *App +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *AppBuilder) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.app.DefaultGenesis() +} + +/ Build builds an *App instance. +func (a *AppBuilder) + +Build(db dbm.DB, traceStore io.Writer, baseAppOptions ...func(*baseapp.BaseApp)) *App { + for _, option := range a.app.baseAppOptions { + baseAppOptions = append(baseAppOptions, option) +} + + / set routers first in case they get modified by other options + baseAppOptions = append( + []func(*baseapp.BaseApp) { + func(bApp *baseapp.BaseApp) { + bApp.SetMsgServiceRouter(a.app.msgServiceRouter) + +bApp.SetGRPCQueryRouter(a.app.grpcQueryRouter) +}, +}, + baseAppOptions..., + ) + bApp := baseapp.NewBaseApp(a.app.config.AppName, a.app.logger, db, nil, baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(a.app.interfaceRegistry) + +bApp.MountStores(a.app.storeKeys...) + +a.app.BaseApp = bApp + a.app.configurator = module.NewConfigurator(a.app.cdc, a.app.MsgServiceRouter(), a.app.GRPCQueryRouter()) + if err := a.app.ModuleManager.RegisterServices(a.app.configurator); err != nil { + panic(err) +} + +return a.app +} +``` + +### 4. 
Application Building + +The `AppBuilder` type provides a structured way to build applications: + +```go expandable +package runtime + +import ( + + "encoding/json" + "io" + + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AppBuilder is a type that is injected into a container by the runtime module +/ (as *AppBuilder) + +which can be used to create an app which is compatible with +/ the existing app.go initialization conventions. +type AppBuilder struct { + app *App +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *AppBuilder) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.app.DefaultGenesis() +} + +/ Build builds an *App instance. +func (a *AppBuilder) + +Build(db dbm.DB, traceStore io.Writer, baseAppOptions ...func(*baseapp.BaseApp)) *App { + for _, option := range a.app.baseAppOptions { + baseAppOptions = append(baseAppOptions, option) +} + + / set routers first in case they get modified by other options + baseAppOptions = append( + []func(*baseapp.BaseApp) { + func(bApp *baseapp.BaseApp) { + bApp.SetMsgServiceRouter(a.app.msgServiceRouter) + +bApp.SetGRPCQueryRouter(a.app.grpcQueryRouter) +}, +}, + baseAppOptions..., + ) + bApp := baseapp.NewBaseApp(a.app.config.AppName, a.app.logger, db, nil, baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(a.app.interfaceRegistry) + +bApp.MountStores(a.app.storeKeys...) + +a.app.BaseApp = bApp + a.app.configurator = module.NewConfigurator(a.app.cdc, a.app.MsgServiceRouter(), a.app.GRPCQueryRouter()) + if err := a.app.ModuleManager.RegisterServices(a.app.configurator); err != nil { + panic(err) +} + +return a.app +} +``` + +Key building steps: + +1. Configuration loading +2. Module registration +3. Service setup +4. Store mounting +5. 
Router configuration + +An application only needs to call `AppBuilder.Build` to create a fully configured application (`runtime.App`). + +```go expandable +package runtime + +import ( + + "encoding/json" + "io" + + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AppBuilder is a type that is injected into a container by the runtime module +/ (as *AppBuilder) + +which can be used to create an app which is compatible with +/ the existing app.go initialization conventions. +type AppBuilder struct { + app *App +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *AppBuilder) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.app.DefaultGenesis() +} + +/ Build builds an *App instance. +func (a *AppBuilder) + +Build(db dbm.DB, traceStore io.Writer, baseAppOptions ...func(*baseapp.BaseApp)) *App { + for _, option := range a.app.baseAppOptions { + baseAppOptions = append(baseAppOptions, option) +} + + / set routers first in case they get modified by other options + baseAppOptions = append( + []func(*baseapp.BaseApp) { + func(bApp *baseapp.BaseApp) { + bApp.SetMsgServiceRouter(a.app.msgServiceRouter) + +bApp.SetGRPCQueryRouter(a.app.grpcQueryRouter) +}, +}, + baseAppOptions..., + ) + bApp := baseapp.NewBaseApp(a.app.config.AppName, a.app.logger, db, nil, baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(a.app.interfaceRegistry) + +bApp.MountStores(a.app.storeKeys...) 
+ +a.app.BaseApp = bApp + a.app.configurator = module.NewConfigurator(a.app.cdc, a.app.MsgServiceRouter(), a.app.GRPCQueryRouter()) + if err := a.app.ModuleManager.RegisterServices(a.app.configurator); err != nil { + panic(err) +} + +return a.app +} +``` + +More information on building applications can be found in the [next section](/docs/sdk/v0.53/02-app-building). + +## Best Practices + +1. **Module Order**: Carefully consider the order of modules in begin\_blockers, end\_blockers, and pre\_blockers. +2. **Store Keys**: Use override\_store\_keys only when necessary to maintain clarity. +3. **Genesis Order**: Maintain correct initialization order in init\_genesis. +4. **Migration Management**: Use order\_migrations to control upgrade paths. + +### Migration Considerations + +When upgrading between versions: + +1. Review the migration order specified in `order_migrations` +2. Ensure all required modules are included in the configuration +3. Validate store key configurations +4. Test the upgrade path thoroughly diff --git a/docs/sdk/v0.53/documentation/application-framework/runtx_middleware.mdx b/docs/sdk/v0.53/documentation/application-framework/runtx_middleware.mdx new file mode 100644 index 00000000..0d27b1ba --- /dev/null +++ b/docs/sdk/v0.53/documentation/application-framework/runtx_middleware.mdx @@ -0,0 +1,179 @@ +--- +title: RunTx recovery middleware +--- + +The `BaseApp.runTx()` function handles Go panics that might occur during transaction execution, for example, when a keeper encounters an invalid state and panics. +Depending on the panic type, a different handler is used; for instance, the default one prints an error log message. +Recovery middleware is used to add custom panic recovery for Cosmos SDK application developers. + +More context can be found in the corresponding [ADR-022](docs/sdk/next/documentation/legacy/adr-comprehensive) and the implementation in [recovery.go](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/baseapp/recovery.go).
+ +## Interface + +```go expandable +package baseapp + +import ( + + "fmt" + "runtime/debug" + + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ RecoveryHandler handles recovery() + +object. +/ Return a non-nil error if recoveryObj was processed. +/ Return nil if recoveryObj was not processed. +type RecoveryHandler func(recoveryObj interface{ +}) + +error + +/ recoveryMiddleware is wrapper for RecoveryHandler to create chained recovery handling. +/ returns (recoveryMiddleware, nil) + if recoveryObj was not processed and should be passed to the next middleware in chain. +/ returns (nil, error) + if recoveryObj was processed and middleware chain processing should be stopped. +type recoveryMiddleware func(recoveryObj interface{ +}) (recoveryMiddleware, error) + +/ processRecovery processes recoveryMiddleware chain for recovery() + +object. +/ Chain processing stops on non-nil error or when chain is processed. +func processRecovery(recoveryObj interface{ +}, middleware recoveryMiddleware) + +error { + if middleware == nil { + return nil +} + +next, err := middleware(recoveryObj) + if err != nil { + return err +} + +return processRecovery(recoveryObj, next) +} + +/ newRecoveryMiddleware creates a RecoveryHandler middleware. +func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) + +recoveryMiddleware { + return func(recoveryObj interface{ +}) (recoveryMiddleware, error) { + if err := handler(recoveryObj); err != nil { + return nil, err +} + +return next, nil +} +} + +/ newOutOfGasRecoveryMiddleware creates a standard OutOfGas recovery middleware for app.runTx method. 
+func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) + +recoveryMiddleware { + handler := func(recoveryObj interface{ +}) + +error { + err, ok := recoveryObj.(storetypes.ErrorOutOfGas) + if !ok { + return nil +} + +return errorsmod.Wrap( + sdkerrors.ErrOutOfGas, fmt.Sprintf( + "out of gas in location: %v; gasWanted: %d, gasUsed: %d", + err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(), + ), + ) +} + +return newRecoveryMiddleware(handler, next) +} + +/ newDefaultRecoveryMiddleware creates a default (last in chain) + +recovery middleware for app.runTx method. +func newDefaultRecoveryMiddleware() + +recoveryMiddleware { + handler := func(recoveryObj interface{ +}) + +error { + return errorsmod.Wrap( + sdkerrors.ErrPanic, fmt.Sprintf( + "recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack()), + ), + ) +} + +return newRecoveryMiddleware(handler, nil) +} +``` + +`recoveryObj` is the return value of the `recover()` function from the `builtin` Go package. + +**Contract:** + +* RecoveryHandler returns `nil` if `recoveryObj` wasn't handled and should be passed to the next recovery middleware; +* RecoveryHandler returns a non-nil `error` if `recoveryObj` was handled. + +## Custom RecoveryHandler registration + +`BaseApp.AddRunTxRecoveryHandler(handlers ...RecoveryHandler)` + +This BaseApp method adds recovery middleware to the default recovery chain. + +## Example + +Let's assume we want to trigger the "Consensus failure" chain state if some particular error occurs. + +We have a module keeper that panics: + +```go +func (k FooKeeper) + +Do(obj interface{ +}) { + if obj == nil { + / that shouldn't happen, we need to crash the app + err := errorsmod.Wrap(fooTypes.InternalError, "obj is nil") + +panic(err) +} +} +``` + +By default, that panic would be recovered and an error message printed to the log.
To override that behaviour we should register a custom RecoveryHandler: + +```go expandable +/ Cosmos SDK application constructor + customHandler := func(recoveryObj interface{ +}) + +error { + err, ok := recoveryObj.(error) + if !ok { + return nil +} + if fooTypes.InternalError.Is(err) { + panic(fmt.Errorf("FooKeeper did panic with error: %w", err)) +} + +return nil +} + baseApp := baseapp.NewBaseApp(...) + +baseApp.AddRunTxRecoveryHandler(customHandler) +``` diff --git a/docs/sdk/v0.53/documentation/application-framework/vote-extensions.mdx b/docs/sdk/v0.53/documentation/application-framework/vote-extensions.mdx new file mode 100644 index 00000000..27cfe7d8 --- /dev/null +++ b/docs/sdk/v0.53/documentation/application-framework/vote-extensions.mdx @@ -0,0 +1,185 @@ +--- +title: Vote Extensions +--- + +## Synopsis + +This section describes how the application can define and use vote extensions +defined in ABCI++. + +## Extend Vote + +ABCI++ allows an application to extend a pre-commit vote with arbitrary data. This +process does NOT have to be deterministic, and the data returned can be unique to the +validator process. The Cosmos SDK defines `baseapp.ExtendVoteHandler`: + +```go +type ExtendVoteHandler func(Context, *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) +``` + +An application can set this handler in `app.go` via the `baseapp.SetExtendVoteHandler` +`BaseApp` option function. The `sdk.ExtendVoteHandler`, if defined, is called during +the `ExtendVote` ABCI method. Note, if an application decides to implement +`baseapp.ExtendVoteHandler`, it MUST return a non-nil `VoteExtension`. However, the vote +extension can be empty. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#extendvote) +for more details. + +There are many decentralized censorship-resistant use cases for vote extensions. 
+For example, a validator may want to submit prices for a price oracle or encryption +shares for an encrypted transaction mempool. Note, an application should be careful +to consider the size of the vote extensions as they could increase latency in block +production. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/docs/qa/CometBFT-QA-38.md#vote-extensions-testbed) +for more details. + +## Verify Vote Extension + +Similar to extending a vote, an application can also verify vote extensions from +other validators when validating their pre-commits. For a given vote extension, +this process MUST be deterministic. The Cosmos SDK defines `sdk.VerifyVoteExtensionHandler`: + +```go expandable +package types + +import ( + + abci "github.com/cometbft/cometbft/abci/types" +) + +/ InitChainer initializes application state at genesis +type InitChainer func(ctx Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) + +/ PrepareCheckStater runs code during commit after the block has been committed, and the `checkState` +/ has been branched for the new block. +type PrepareCheckStater func(ctx Context) + +/ Precommiter runs code during commit immediately before the `deliverState` is written to the `rootMultiStore`. +type Precommiter func(ctx Context) + +/ PeerFilter responds to p2p filtering queries from Tendermint +type PeerFilter func(info string) *abci.ResponseQuery + +/ ProcessProposalHandler defines a function type alias for processing a proposer +type ProcessProposalHandler func(Context, *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) + +/ PrepareProposalHandler defines a function type alias for preparing a proposal +type PrepareProposalHandler func(Context, *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) + +/ ExtendVoteHandler defines a function type alias for extending a pre-commit vote. 
+type ExtendVoteHandler func(Context, *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) + +/ VerifyVoteExtensionHandler defines a function type alias for verifying a +/ pre-commit vote extension. +type VerifyVoteExtensionHandler func(Context, *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) + +/ BeginBlocker defines a function type alias for executing application +/ business logic before transactions are executed. +/ +/ Note: The BeginBlock ABCI method no longer exists in the ABCI specification +/ as of CometBFT v0.38.0. This function type alias is provided for backwards +/ compatibility with applications that still use the BeginBlock ABCI method +/ and allows for existing BeginBlock functionality within applications. +type BeginBlocker func(Context) (BeginBlock, error) + +/ EndBlocker defines a function type alias for executing application +/ business logic after transactions are executed but before committing. +/ +/ Note: The EndBlock ABCI method no longer exists in the ABCI specification +/ as of CometBFT v0.38.0. This function type alias is provided for backwards +/ compatibility with applications that still use the EndBlock ABCI method +/ and allows for existing EndBlock functionality within applications. +type EndBlocker func(Context) (EndBlock, error) + +/ EndBlock defines a type which contains endblock events and validator set updates +type EndBlock struct { + ValidatorUpdates []abci.ValidatorUpdate + Events []abci.Event +} + +/ BeginBlock defines a type which contains beginBlock events +type BeginBlock struct { + Events []abci.Event +} +``` + +An application can set this handler in `app.go` via the `baseapp.SetVerifyVoteExtensionHandler` +`BaseApp` option function. The `sdk.VerifyVoteExtensionHandler`, if defined, is called +during the `VerifyVoteExtension` ABCI method. If an application defines a vote +extension handler, it should also define a verification handler. 
Note, not all +validators will share the same view of what vote extensions they verify depending +on how votes are propagated. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#verifyvoteextension) +for more details. + +## Vote Extension Propagation + +The agreed upon vote extensions at height `H` are provided to the proposing validator +at height `H+1` during `PrepareProposal`. As a result, the vote extensions are +not natively provided or exposed to the remaining validators during `ProcessProposal`. +Therefore, if an application requires that the agreed upon vote extensions from +height `H` are available to all validators at `H+1`, the application must propagate +these vote extensions manually in the block proposal itself. This can be done by +"injecting" them into the block proposal, since the `Txs` field in `PrepareProposal` +is just a slice of byte slices. + +`FinalizeBlock` will ignore any byte slice that doesn't implement an `sdk.Tx`, so +any injected vote extensions will safely be ignored in `FinalizeBlock`. For more +details on propagation, see the [ABCI++ 2.0 ADR](docs/sdk/next/documentation/legacy/adr-comprehensive#vote-extension-propagation--verification). + +### Recovery of injected Vote Extensions + +As stated before, vote extensions can be injected into a block proposal (along with +other transactions in the `Txs` field). The Cosmos SDK provides a pre-FinalizeBlock +hook to allow applications to recover vote extensions, perform any necessary +computation on them, and then store the results in the cached store. These results +will be available to the application during the subsequent `FinalizeBlock` call.
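Recovering injected vote extensions requires telling them apart from ordinary transactions in `Txs`. One minimal way to do that (purely illustrative; `veMarker`, `injectVE`, and `parseVoteExtension` are made-up names, not the SDK's or CometBFT's actual encoding) is to prefix injected entries with a marker:

```go
package main

import (
	"bytes"
	"fmt"
)

// veMarker is an illustrative magic prefix: entries injected during
// PrepareProposal start with it, real transactions do not.
var veMarker = []byte("VEXT")

// injectVE wraps a vote-extension payload so it can sit in the Txs slice.
func injectVE(payload []byte) []byte {
	return append(append([]byte{}, veMarker...), payload...)
}

// parseVoteExtension reports whether raw is an injected vote extension
// and, if so, returns its payload.
func parseVoteExtension(raw []byte) ([]byte, bool) {
	if !bytes.HasPrefix(raw, veMarker) {
		return nil, false
	}
	return raw[len(veMarker):], true
}

func main() {
	txs := [][]byte{
		injectVE([]byte(`{"price":"42.7"}`)), // injected by the proposer
		[]byte("regular tx bytes"),           // an ordinary transaction
	}
	for _, tx := range txs {
		if payload, ok := parseVoteExtension(tx); ok {
			fmt.Printf("vote extension: %s\n", payload)
		}
	}
}
```

In practice an application would use a proper codec (e.g. protobuf) rather than a string prefix, but any scheme works as long as a marked entry can never be mistaken for a valid `sdk.Tx`.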
+ +An example of what a pre-FinalizeBlock hook could look like is shown below: + +```go expandable +app.SetPreBlocker(func(ctx sdk.Context, req *abci.RequestFinalizeBlock) + +error { + allVEs := []VE{ +} / store all parsed vote extensions here + for _, tx := range req.Txs { + / define a custom function that tries to parse the tx as a vote extension + ve, ok := parseVoteExtension(tx) + if !ok { + continue +} + +allVEs = append(allVEs, ve) +} + + / perform any necessary computation on the vote extensions and store the result + / in the cached store + result := compute(allVEs) + err := storeVEResult(ctx, result) + if err != nil { + return err +} + +return nil +}) +``` + +Then, in an app's module, the application can retrieve the result of the computation +of vote extensions from the cached store: + +```go expandable +func (k Keeper) + +BeginBlocker(ctx context.Context) + +error { + / retrieve the result of the computation of vote extensions from the cached store + result, err := k.GetVEResult(ctx) + if err != nil { + return err +} + + / use the result of the computation of vote extensions + k.setSomething(result) + +return nil +} +``` diff --git a/docs/sdk/v0.53/documentation/consensus-block-production/checktx.mdx b/docs/sdk/v0.53/documentation/consensus-block-production/checktx.mdx new file mode 100644 index 00000000..37908656 --- /dev/null +++ b/docs/sdk/v0.53/documentation/consensus-block-production/checktx.mdx @@ -0,0 +1,1577 @@ +--- +title: CheckTx +description: >- + CheckTx is called by the BaseApp when CometBFT receives a transaction from a + client, over the p2p network or RPC. The CheckTx method is responsible for + validating the transaction and returning an error if the transaction is + invalid. +--- + +CheckTx is called by the `BaseApp` when CometBFT receives a transaction from a client, over the p2p network or RPC. The CheckTx method is responsible for validating the transaction and returning an error if the transaction is invalid.
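The gatekeeping role described above can be illustrated with a deliberately tiny sketch. The `tx` type, `minFee`, and `checkTx` below are invented for illustration; in the SDK this validation is performed by the AnteHandler chain over real `sdk.Tx` values:

```go
package main

import (
	"errors"
	"fmt"
)

// tx is a toy stand-in for a decoded transaction; the real sdk.Tx carries
// signatures, fees, and messages in protobuf form.
type tx struct {
	Signed bool
	Fee    int64
	Msgs   []string
}

// minFee stands in for the node's minimum-fee configuration.
const minFee = int64(10)

// checkTx validates the transaction without executing its messages,
// mirroring how CheckTx only runs the AnteHandler.
func checkTx(t tx) error {
	if !t.Signed {
		return errors.New("unauthorized: missing signature")
	}
	if t.Fee < minFee {
		return fmt.Errorf("insufficient fee: got %d, need %d", t.Fee, minFee)
	}
	// Accepted into the mempool; messages execute later, in FinalizeBlock.
	return nil
}

func main() {
	fmt.Println(checkTx(tx{Signed: true, Fee: 25, Msgs: []string{"bank/send"}})) // accepted
	fmt.Println(checkTx(tx{Signed: true, Fee: 1}))                               // rejected: fee too low
}
```

The key property to notice is that `checkTx` never touches `Msgs`: message execution is deferred until the transaction is included in a block.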
+ +```mermaid +graph TD + subgraph SDK[Cosmos SDK] + B[Baseapp] + A[AnteHandlers] + B <-->|Validate TX| A + end + C[CometBFT] <-->|CheckTx|SDK + U((User)) -->|Submit TX| C + N[P2P] -->|Receive TX| C +``` + +```go expandable +package baseapp + +import ( + + "context" + "errors" + "fmt" + "sort" + "strings" + "time" + + abcitypes "github.com/cometbft/cometbft/abci/types" + abci "github.com/cometbft/cometbft/api/cometbft/abci/v1" + cmtproto "github.com/cometbft/cometbft/api/cometbft/types/v1" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc/codes" + grpcstatus "google.golang.org/grpc/status" + + corecomet "cosmossdk.io/core/comet" + coreheader "cosmossdk.io/core/header" + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/store/rootmulti" + snapshottypes "cosmossdk.io/store/snapshots/types" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ Supported ABCI Query prefixes and paths +const ( + QueryPathApp = "app" + QueryPathCustom = "custom" + QueryPathP2P = "p2p" + QueryPathStore = "store" + + QueryPathBroadcastTx = "/cosmos.tx.v1beta1.Service/BroadcastTx" +) + +/ InitChain implements the ABCI interface. It initializes the application's state +/ and sets up the initial validator set. +func (app *BaseApp) + +InitChain(req *abci.InitChainRequest) (*abci.InitChainResponse, error) { + if req.ChainId != app.chainID { + return nil, fmt.Errorf("invalid chain-id on InitChain; expected: %s, got: %s", app.chainID, req.ChainId) +} + + / On a new chain, we consider the init chain block height as 0, even though + / req.InitialHeight is 1 by default. 
+ initHeader := cmtproto.Header{ + ChainID: req.ChainId, + Time: req.Time +} + +app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId) + + / Set the initial height, which will be used to determine if we are proposing + / or processing the first block or not. + app.initialHeight = req.InitialHeight + if app.initialHeight == 0 { / If initial height is 0, set it to 1 + app.initialHeight = 1 +} + + / if req.InitialHeight is > 1, then we set the initial version on all stores + if req.InitialHeight > 1 { + initHeader.Height = req.InitialHeight + if err := app.cms.SetInitialVersion(req.InitialHeight); err != nil { + return nil, err +} + +} + + / initialize states with a correct header + app.setState(execModeFinalize, initHeader) + +app.setState(execModeCheck, initHeader) + + / Store the consensus params in the BaseApp's param store. Note, this must be + / done after the finalizeBlockState and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + err := app.StoreConsensusParams(app.finalizeBlockState.Context(), *req.ConsensusParams) + if err != nil { + return nil, err +} + +} + +defer func() { + / InitChain represents the state of the application BEFORE the first block, + / i.e. the genesis block. This means that when processing the app's InitChain + / handler, the block height is zero by default. However, after Commit is called + / the height needs to reflect the true block height. + initHeader.Height = req.InitialHeight + app.checkState.SetContext(app.checkState.Context().WithBlockHeader(initHeader). + WithHeaderInfo(coreheader.Info{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time, +})) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockHeader(initHeader). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: req.ChainId, + Height: req.InitialHeight, + Time: req.Time, +})) +}() + if app.initChainer == nil { + return &abci.InitChainResponse{ +}, nil +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter())) + +res, err := app.initChainer(app.finalizeBlockState.Context(), req) + if err != nil { + return nil, err +} + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + return nil, fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ) +} + +sort.Sort(abcitypes.ValidatorUpdates(req.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + return nil, fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i) +} + +} + +} + + / NOTE: We don't commit, but FinalizeBlock for block InitialHeight starts from + / this FinalizeBlockState. + return &abci.InitChainResponse{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: app.LastCommitID().Hash, +}, nil +} + +/ Info implements the ABCI interface. It returns information about the application. 
+func (app *BaseApp) + +Info(_ *abci.InfoRequest) (*abci.InfoResponse, error) { + lastCommitID := app.cms.LastCommitID() + appVersion := InitialAppVersion + if lastCommitID.Version > 0 { + ctx, err := app.CreateQueryContext(lastCommitID.Version, false) + if err != nil { + return nil, fmt.Errorf("failed creating query context: %w", err) +} + +appVersion, err = app.AppVersion(ctx) + if err != nil { + return nil, fmt.Errorf("failed getting app version: %w", err) +} + +} + +return &abci.InfoResponse{ + Data: app.name, + Version: app.version, + AppVersion: appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +}, nil +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. +func (app *BaseApp) + +Query(_ context.Context, req *abci.QueryRequest) (resp *abci.QueryResponse, err error) { + / add panic recovery for all queries + / + / Ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + resp = queryResult(errorsmod.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + start := telemetry.Now() + +defer telemetry.MeasureSince(start, req.Path) + if req.Path == QueryPathBroadcastTx { + return queryResult(errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "can't route a broadcast tx message"), app.trace), nil +} + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req), nil +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), 
app.trace), nil +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + resp = handleQueryApp(app, path, req) + case QueryPathStore: + resp = handleQueryStore(app, path, *req) + case QueryPathP2P: + resp = handleQueryP2P(app, path) + +default: + resp = queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +return resp, nil +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ListSnapshots(req *abci.ListSnapshotsRequest) (*abci.ListSnapshotsResponse, error) { + resp := &abci.ListSnapshotsResponse{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp, nil +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return nil, err +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to convert ABCI snapshots", "err", err) + +return nil, err +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp, nil +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req *abci.LoadSnapshotChunkRequest) (*abci.LoadSnapshotChunkResponse, error) { + if app.snapshotManager == nil { + return &abci.LoadSnapshotChunkResponse{ +}, nil +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return nil, err +} + +return &abci.LoadSnapshotChunkResponse{ + Chunk: chunk +}, nil +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req *abci.OfferSnapshotRequest) (*abci.OfferSnapshotResponse, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_ABORT +}, nil +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_REJECT +}, nil +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_REJECT +}, nil +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_REJECT_FORMAT +}, nil + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_REJECT +}, nil + + default: + / CometBFT errors are defined here: https://github.com/cometbft/cometbft/blob/main/statesync/syncer.go + / It may happen that in case of a CometBFT error, such as a timeout (which occurs after two minutes), + / the process is aborted. This is done intentionally because deleting the database programmatically + / can lead to more complicated situations. + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a + / different snapshot, so we ask CometBFT to abort all snapshot restoration. 
+ return &abci.OfferSnapshotResponse{ + Result: abci.OFFER_SNAPSHOT_RESULT_ABORT +}, nil +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ApplySnapshotChunk(req *abci.ApplySnapshotChunkRequest) (*abci.ApplySnapshotChunkResponse, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ApplySnapshotChunkResponse{ + Result: abci.APPLY_SNAPSHOT_CHUNK_RESULT_ABORT +}, nil +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return &abci.ApplySnapshotChunkResponse{ + Result: abci.APPLY_SNAPSHOT_CHUNK_RESULT_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return &abci.ApplySnapshotChunkResponse{ + Result: abci.APPLY_SNAPSHOT_CHUNK_RESULT_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +}, nil + + default: + app.logger.Error("failed to restore snapshot", "err", err) + +return &abci.ApplySnapshotChunkResponse{ + Result: abci.APPLY_SNAPSHOT_CHUNK_RESULT_ABORT +}, nil +} +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req *abci.CheckTxRequest) (*abci.CheckTxResponse, error) { + var mode execMode + switch { + case req.Type == abci.CHECK_TX_TYPE_CHECK: + mode = execModeCheck + case req.Type == abci.CHECK_TX_TYPE_RECHECK: + mode = execModeReCheck + + default: + return nil, fmt.Errorf("unknown RequestCheckTx type: %s", req.Type) +} + if app.checkTxHandler == nil { + gInfo, result, anteEvents, err := app.runTx(mode, req.Tx, nil) + if err != nil { + return responseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace), nil +} + +return &abci.CheckTxResponse{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +}, nil +} + +return app.checkTxHandler(app.runTx, req) +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. 
+/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req *abci.PrepareProposalRequest) (resp *abci.PrepareProposalResponse, err error) { + if app.prepareProposal == nil { + return nil, errors.New("PrepareProposal handler not set") +} + + / Always reset state given that PrepareProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + +app.setState(execModePrepareProposal, header) + + / CometBFT must never call PrepareProposal with a height of 0. + / + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("PrepareProposal called with invalid height") +} + +app.prepareProposalState.SetContext(app.getContextForProposal(app.prepareProposalState.Context(), req.Height). + WithVoteInfos(toVoteInfo(req.LocalLastCommit.Votes)). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithProposer(req.ProposerAddress). + WithExecMode(sdk.ExecModePrepareProposal). + WithCometInfo(corecomet.Info{ + Evidence: sdk.ToSDKEvidence(req.Misbehavior), + ValidatorsHash: req.NextValidatorsHash, + ProposerAddress: req.ProposerAddress, + LastCommit: sdk.ToSDKExtendedCommitInfo(req.LocalLastCommit), +}). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, +})) + +app.prepareProposalState.SetContext(app.prepareProposalState.Context(). + WithConsensusParams(app.GetConsensusParams(app.prepareProposalState.Context())). 
+ WithBlockGasMeter(app.getBlockGasMeter(app.prepareProposalState.Context()))) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = &abci.PrepareProposalResponse{ + Txs: req.Txs +} + +} + +}() + +resp, err = app.prepareProposal(app.prepareProposalState.Context(), req) + if err != nil { + app.logger.Error("failed to prepare proposal", "height", req.Height, "time", req.Time, "err", err) + +return &abci.PrepareProposalResponse{ + Txs: req.Txs +}, nil +} + +return resp, nil +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ CometBFT that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req *abci.ProcessProposalRequest) (resp *abci.ProcessProposalResponse, err error) { + if app.processProposal == nil { + return nil, errors.New("ProcessProposal handler not set") +} + + / CometBFT must never call ProcessProposal with a height of 0. 
+ / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("ProcessProposal called with invalid height") +} + + / Always reset state given that ProcessProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + +app.setState(execModeProcessProposal, header) + + / Since the application can get access to FinalizeBlock state and write to it, + / we must be sure to reset it in case ProcessProposal timeouts and is called + / again in a subsequent round. However, we only want to do this after we've + / processed the first block, as we want to avoid overwriting the finalizeState + / after state changes during InitChain. + if req.Height > app.initialHeight { + / abort any running OE + app.optimisticExec.Abort() + +app.setState(execModeFinalize, header) +} + +app.processProposalState.SetContext(app.getContextForProposal(app.processProposalState.Context(), req.Height). + WithVoteInfos(req.ProposedLastCommit.Votes). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithCometInfo(corecomet.Info{ + ProposerAddress: req.ProposerAddress, + ValidatorsHash: req.NextValidatorsHash, + Evidence: sdk.ToSDKEvidence(req.Misbehavior), + LastCommit: sdk.ToSDKCommitInfo(req.ProposedLastCommit), +}, + ). + WithExecMode(sdk.ExecModeProcessProposal). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, +})) + +app.processProposalState.SetContext(app.processProposalState.Context(). + WithConsensusParams(app.GetConsensusParams(app.processProposalState.Context())). 
+ WithBlockGasMeter(app.getBlockGasMeter(app.processProposalState.Context()))) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = &abci.ProcessProposalResponse{ + Status: abci.PROCESS_PROPOSAL_STATUS_REJECT +} + +} + +}() + +resp, err = app.processProposal(app.processProposalState.Context(), req) + if err != nil { + app.logger.Error("failed to process proposal", "height", req.Height, "time", req.Time, "hash", fmt.Sprintf("%X", req.Hash), "err", err) + +return &abci.ProcessProposalResponse{ + Status: abci.PROCESS_PROPOSAL_STATUS_REJECT +}, nil +} + + / Only execute optimistic execution if the proposal is accepted, OE is + / enabled and the block height is greater than the initial height. During + / the first block we'll be carrying state from InitChain, so it would be + / impossible for us to easily revert. + / After the first block has been processed, the next blocks will get executed + / optimistically, so that when the ABCI client calls `FinalizeBlock` the app + / can have a response ready. + if resp.Status == abci.PROCESS_PROPOSAL_STATUS_ACCEPT && + app.optimisticExec.Enabled() && + req.Height > app.initialHeight { + app.optimisticExec.Execute(req) +} + +return resp, nil +} + +/ ExtendVote implements the ExtendVote ABCI method and returns a ResponseExtendVote. +/ It calls the application's ExtendVote handler which is responsible for performing +/ application-specific business logic when sending a pre-commit for the NEXT +/ block height. The extensions response may be non-deterministic but must always +/ be returned, even if empty. +/ +/ Agreed upon vote extensions are made available to the proposer of the next +/ height and are committed in the subsequent height, i.e. H+2. An error is +/ returned if vote extensions are not enabled or if extendVote fails or panics. 
+func (app *BaseApp) + +ExtendVote(_ context.Context, req *abci.ExtendVoteRequest) (resp *abci.ExtendVoteResponse, err error) { + / Always reset state given that ExtendVote and VerifyVoteExtension can timeout + / and be called again in a subsequent round. + var ctx sdk.Context + + / If we're extending the vote for the initial height, we need to use the + / finalizeBlockState context, otherwise we don't get the uncommitted data + / from InitChain. + if req.Height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() +} + +else { + ms := app.cms.CacheMultiStore() + +ctx = sdk.NewContext(ms, false, app.logger).WithStreamingManager(app.streamingManager).WithChainID(app.chainID).WithBlockHeight(req.Height) +} + if app.extendVote == nil { + return nil, errors.New("application ExtendVote handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(ctx) + + / Note: In this case, we do want to extend vote if the height is equal or + / greater than VoteExtensionsEnableHeight. This defers from the check done + / in ValidateVoteExtensions and PrepareProposal in which we'll check for + / vote extensions on VoteExtensionsEnableHeight+1. + extsEnabled := cp.Feature != nil && req.Height >= cp.Feature.VoteExtensionsEnableHeight.Value && cp.Feature.VoteExtensionsEnableHeight.Value != 0 + if !extsEnabled { + / check abci params + extsEnabled = cp.Abci != nil && req.Height >= cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + if !extsEnabled { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to ExtendVote at height %d", req.Height) +} + +} + +ctx = ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVoteExtension). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Hash: req.Hash, +}) + + / add a deferred recover handler in case extendVote panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in ExtendVote", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +err = fmt.Errorf("recovered application panic in ExtendVote: %v", r) +} + +}() + +resp, err = app.extendVote(ctx, req) + if err != nil { + app.logger.Error("failed to extend vote", "height", req.Height, "hash", fmt.Sprintf("%X", req.Hash), "err", err) + +return &abci.ExtendVoteResponse{ + VoteExtension: []byte{ +}}, nil +} + +return resp, err +} + +/ VerifyVoteExtension implements the VerifyVoteExtension ABCI method and returns +/ a ResponseVerifyVoteExtension. It calls the applications' VerifyVoteExtension +/ handler which is responsible for performing application-specific business +/ logic in verifying a vote extension from another validator during the pre-commit +/ phase. The response MUST be deterministic. An error is returned if vote +/ extensions are not enabled or if verifyVoteExt fails or panics. +/ We highly recommend a size validation due to performance degradation, +/ see more here https://docs.cometbft.com/v1.0/references/qa/cometbft-qa-38#vote-extensions-testbed +func (app *BaseApp) + +VerifyVoteExtension(req *abci.VerifyVoteExtensionRequest) (resp *abci.VerifyVoteExtensionResponse, err error) { + if app.verifyVoteExt == nil { + return nil, errors.New("application VerifyVoteExtension handler not set") +} + +var ctx sdk.Context + + / If we're verifying the vote for the initial height, we need to use the + / finalizeBlockState context, otherwise we don't get the uncommitted data + / from InitChain. 
+ if req.Height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() +} + +else { + ms := app.cms.CacheMultiStore() + +ctx = sdk.NewContext(ms, false, app.logger).WithStreamingManager(app.streamingManager).WithChainID(app.chainID).WithBlockHeight(req.Height) +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(ctx) + + / Note: we verify votes extensions on VoteExtensionsEnableHeight+1. Check + / comment in ExtendVote and ValidateVoteExtensions for more details. + extsEnabled := cp.Feature.VoteExtensionsEnableHeight != nil && req.Height >= cp.Feature.VoteExtensionsEnableHeight.Value && cp.Feature.VoteExtensionsEnableHeight.Value != 0 + if !extsEnabled { + / check abci params + extsEnabled = cp.Abci != nil && req.Height >= cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + if !extsEnabled { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to VerifyVoteExtension at height %d", req.Height) +} + +} + + / add a deferred recover handler in case verifyVoteExt panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in VerifyVoteExtension", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "validator", fmt.Sprintf("%X", req.ValidatorAddress), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in VerifyVoteExtension: %v", r) +} + +}() + +ctx = ctx. + WithConsensusParams(cp). + WithBlockGasMeter(storetypes.NewInfiniteGasMeter()). + WithBlockHeight(req.Height). + WithHeaderHash(req.Hash). + WithExecMode(sdk.ExecModeVerifyVoteExtension). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Hash: req.Hash, +}) + +resp, err = app.verifyVoteExt(ctx, req) + if err != nil { + app.logger.Error("failed to verify vote extension", "height", req.Height, "err", err) + +return &abci.VerifyVoteExtensionResponse{ + Status: abci.VERIFY_VOTE_EXTENSION_STATUS_REJECT +}, nil +} + +return resp, err +} + +/ internalFinalizeBlock executes the block, called by the Optimistic +/ Execution flow or by the FinalizeBlock ABCI method. The context received is +/ only used to handle early cancellation, for anything related to state app.finalizeBlockState.Context() +/ must be used. +func (app *BaseApp) + +internalFinalizeBlock(ctx context.Context, req *abci.FinalizeBlockRequest) (*abci.FinalizeBlockResponse, error) { + var events []abci.Event + if err := app.checkHalt(req.Height, req.Time); err != nil { + return nil, err +} + if err := app.validateFinalizeBlockHeight(req); err != nil { + return nil, err +} + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(storetypes.TraceContext( + map[string]any{"blockHeight": req.Height +}, + )) +} + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, + AppHash: app.LastCommitID().Hash, +} + + / finalizeBlockState should be set on InitChain or ProcessProposal. If it is + / nil, it means we are replaying this block and we need to set the state here + / given that during block replay ProcessProposal is not executed by CometBFT. + if app.finalizeBlockState == nil { + app.setState(execModeFinalize, header) +} + + / Context is now updated with Header information. + app.finalizeBlockState.SetContext(app.finalizeBlockState.Context(). + WithBlockHeader(header). + WithHeaderHash(req.Hash). 
+ WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + Hash: req.Hash, + AppHash: app.LastCommitID().Hash, +}). + WithConsensusParams(app.GetConsensusParams(app.finalizeBlockState.Context())). + WithVoteInfos(req.DecidedLastCommit.Votes). + WithExecMode(sdk.ExecModeFinalize). + WithCometInfo(corecomet.Info{ + Evidence: sdk.ToSDKEvidence(req.Misbehavior), + ValidatorsHash: req.NextValidatorsHash, + ProposerAddress: req.ProposerAddress, + LastCommit: sdk.ToSDKCommitInfo(req.DecidedLastCommit), +})) + + / GasMeter must be set after we get a context with updated consensus params. + gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) + if app.checkState != nil { + app.checkState.SetContext(app.checkState.Context(). + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash)) +} + +preblockEvents, err := app.preBlock(req) + if err != nil { + return nil, err +} + +events = append(events, preblockEvents...) + +beginBlock, err := app.beginBlock(req) + if err != nil { + return nil, err +} + + / First check for an abort signal after beginBlock, as it's the first place + / we spend any significant amount of time. + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +events = append(events, beginBlock.Events...) + + / Reset the gas meter so that the AnteHandlers aren't required to + gasMeter = app.getBlockGasMeter(app.finalizeBlockState.Context()) + +app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) + + / Iterate over all raw transactions in the proposal and attempt to execute + / them, gathering the execution results. + / + / NOTE: Not all raw transactions may adhere to the sdk.Tx interface, e.g. + / vote extensions, so skip those. 
+ txResults := make([]*abci.ExecTxResult, 0, len(req.Txs)) + for _, rawTx := range req.Txs { + response := app.deliverTx(rawTx) + + / check after every tx if we should abort + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +txResults = append(txResults, response) +} + if app.finalizeBlockState.ms.TracingEnabled() { + app.finalizeBlockState.ms = app.finalizeBlockState.ms.SetTracingContext(nil).(storetypes.CacheMultiStore) +} + +endBlock, err := app.endBlock(app.finalizeBlockState.Context()) + if err != nil { + return nil, err +} + + / check after endBlock if we should abort, to avoid propagating the result + select { + case <-ctx.Done(): + return nil, ctx.Err() + +default: + / continue +} + +events = append(events, endBlock.Events...) + cp := app.GetConsensusParams(app.finalizeBlockState.Context()) + +return &abci.FinalizeBlockResponse{ + Events: events, + TxResults: txResults, + ValidatorUpdates: endBlock.ValidatorUpdates, + ConsensusParamUpdates: &cp, +}, nil +} + +/ FinalizeBlock will execute the block proposal provided by RequestFinalizeBlock. +/ Specifically, it will execute an application's BeginBlock (if defined), followed +/ by the transactions in the proposal, finally followed by the application's +/ EndBlock (if defined). +/ +/ For each raw transaction, i.e. a byte slice, BaseApp will only execute it if +/ it adheres to the sdk.Tx interface. Otherwise, the raw transaction will be +/ skipped. This is to support compatibility with proposers injecting vote +/ extensions into the proposal, which should not themselves be executed in cases +/ where they adhere to the sdk.Tx interface. 
+func (app *BaseApp) + +FinalizeBlock(req *abci.FinalizeBlockRequest) (res *abci.FinalizeBlockResponse, err error) { + defer func() { + / call the streaming service hooks with the FinalizeBlock messages + for _, streamingListener := range app.streamingManager.ABCIListeners { + if err := streamingListener.ListenFinalizeBlock(app.finalizeBlockState.Context(), *req, *res); err != nil { + app.logger.Error("ListenFinalizeBlock listening hook failed", "height", req.Height, "err", err) +} + +} + +}() + if app.optimisticExec.Initialized() { + / check if the hash we got is the same as the one we are executing + aborted := app.optimisticExec.AbortIfNeeded(req.Hash) + / Wait for the OE to finish, regardless of whether it was aborted or not + res, err = app.optimisticExec.WaitResult() + + / only return if we are not aborting + if !aborted { + if res != nil { + res.AppHash = app.workingHash() +} + +return res, err +} + + / if it was aborted, we need to reset the state + app.finalizeBlockState = nil + app.optimisticExec.Reset() +} + + / if no OE is running, just run the block (this is either a block replay or a OE that got aborted) + +res, err = app.internalFinalizeBlock(context.Background(), req) + if res != nil { + res.AppHash = app.workingHash() +} + +return res, err +} + +/ checkHalt checks if height or time exceeds halt-height or halt-time respectively. +func (app *BaseApp) + +checkHalt(height int64, time time.Time) + +error { + var halt bool + switch { + case app.haltHeight > 0 && uint64(height) >= app.haltHeight: + halt = true + case app.haltTime > 0 && time.Unix() >= int64(app.haltTime): + halt = true +} + if halt { + return fmt.Errorf("halt per configuration height %d time %d", app.haltHeight, app.haltTime) +} + +return nil +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. 
Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. +func (app *BaseApp) + +Commit() (*abci.CommitResponse, error) { + header := app.finalizeBlockState.Context().BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + if app.precommiter != nil { + app.precommiter(app.finalizeBlockState.Context()) +} + +rms, ok := app.cms.(*rootmulti.Store) + if ok { + rms.SetCommitHeader(header) +} + +app.cms.Commit() + resp := &abci.CommitResponse{ + RetainHeight: retainHeight, +} + abciListeners := app.streamingManager.ABCIListeners + if len(abciListeners) > 0 { + ctx := app.finalizeBlockState.Context() + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + for _, abciListener := range abciListeners { + if err := abciListener.ListenCommit(ctx, *resp, changeSet); err != nil { + app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err) +} + +} + +} + + / Reset the CheckTx state to the latest committed. + / + / NOTE: This is safe because CometBFT holds a lock on the mempool for + / Commit. Use the header from this latest block. + app.setState(execModeCheck, header) + +app.finalizeBlockState = nil + if app.prepareCheckStater != nil { + app.prepareCheckStater(app.checkState.Context()) +} + + / The SnapshotIfApplicable method will create the snapshot by starting the goroutine + app.snapshotManager.SnapshotIfApplicable(header.Height) + +return resp, nil +} + +/ workingHash gets the apphash that will be finalized in commit. +/ These writes will be persisted to the root multi-store (app.cms) + +and flushed to +/ disk in the Commit phase. 
This means when the ABCI client requests Commit(), the application +/ state transitions will be flushed to disk and as a result, but we already have +/ an application Merkle root. +func (app *BaseApp) + +workingHash() []byte { + / Write the FinalizeBlock state into branched storage and commit the MultiStore. + / The write to the FinalizeBlock state writes all state transitions to the root + / MultiStore (app.cms) + +so when Commit() + +is called it persists those values. + app.finalizeBlockState.ms.Write() + + / Get the hash of all writes in order to return the apphash to the comet in finalizeBlock. + commitHash := app.cms.WorkingHash() + +app.logger.Debug("hash of all writes", "workingHash", fmt.Sprintf("%X", commitHash)) + +return commitHash +} + +func handleQueryApp(app *BaseApp, path []string, req *abci.QueryRequest) *abci.QueryResponse { + if len(path) >= 2 { + switch path[1] { + case "simulate": + txBytes := req.Data + + gInfo, res, err := app.Simulate(txBytes) + if err != nil { + return queryResult(errorsmod.Wrap(err, "failed to simulate tx"), app.trace) +} + simRes := &sdk.SimulationResponse{ + GasInfo: gInfo, + Result: res, +} + +bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry) + if err != nil { + return queryResult(errorsmod.Wrap(err, "failed to JSON encode simulation response"), app.trace) +} + +return &abci.QueryResponse{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: bz, +} + case "version": + return &abci.QueryResponse{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: []byte(app.version), +} + +default: + return queryResult(errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace) +} + +} + +return queryResult( + errorsmod.Wrap( + sdkerrors.ErrUnknownRequest, + "expected second parameter to be either 'simulate' or 'version', neither was present", + ), app.trace) +} + +func handleQueryStore(app *BaseApp, path []string, req abci.QueryRequest) *abci.QueryResponse { + / 
"/store" prefix for store queries + queryable, ok := app.cms.(storetypes.Queryable) + if !ok { + return queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "multi-store does not support queries"), app.trace) +} + +req.Path = "/" + strings.Join(path[1:], "/") + if req.Height <= 1 && req.Prove { + return queryResult( + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ), app.trace) +} + sdkReq := storetypes.RequestQuery(req) + +resp, err := queryable.Query(&sdkReq) + if err != nil { + return queryResult(err, app.trace) +} + +resp.Height = req.Height + abciResp := abci.QueryResponse(*resp) + +return &abciResp +} + +func handleQueryP2P(app *BaseApp, path []string) *abci.QueryResponse { + / "/p2p" prefix for p2p queries + if len(path) < 4 { + return queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "path should be p2p filter "), app.trace) +} + +var resp *abci.QueryResponse + + cmd, typ, arg := path[1], path[2], path[3] + switch cmd { + case "filter": + switch typ { + case "addr": + resp = app.FilterPeerByAddrPort(arg) + case "id": + resp = app.FilterPeerByID(arg) +} + +default: + resp = queryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace) +} + +return resp +} + +/ SplitABCIQueryPath splits a string path using the delimiter '/'. +/ +/ e.g. "this/is/funny" becomes []string{"this", "is", "funny" +} + +func SplitABCIQueryPath(requestPath string) (path []string) { + path = strings.Split(requestPath, "/") + + / first element is empty string + if len(path) > 0 && path[0] == "" { + path = path[1:] +} + +return path +} + +/ FilterPeerByAddrPort filters peers by address/port. +func (app *BaseApp) + +FilterPeerByAddrPort(info string) *abci.QueryResponse { + if app.addrPeerFilter != nil { + return app.addrPeerFilter(info) +} + +return &abci.QueryResponse{ +} +} + +/ FilterPeerByID filters peers by node ID. 
+func (app *BaseApp) + +FilterPeerByID(info string) *abci.QueryResponse { + if app.idPeerFilter != nil { + return app.idPeerFilter(info) +} + +return &abci.QueryResponse{ +} +} + +/ getContextForProposal returns the correct Context for PrepareProposal and +/ ProcessProposal. We use finalizeBlockState on the first block to be able to +/ access any state changes made in InitChain. +func (app *BaseApp) + +getContextForProposal(ctx sdk.Context, height int64) + +sdk.Context { + if height == app.initialHeight { + ctx, _ = app.finalizeBlockState.Context().CacheContext() + + / clear all context data set during InitChain to avoid inconsistent behavior + ctx = ctx.WithHeaderInfo(coreheader.Info{ +}).WithBlockHeader(cmtproto.Header{ +}) + +return ctx +} + +return ctx +} + +func (app *BaseApp) + +handleQueryGRPC(handler GRPCQueryHandler, req *abci.QueryRequest) *abci.QueryResponse { + ctx, err := app.CreateQueryContext(req.Height, req.Prove) + if err != nil { + return queryResult(err, app.trace) +} + +resp, err := handler(ctx, req) + if err != nil { + resp = queryResult(gRPCErrorToSDKError(err), app.trace) + +resp.Height = req.Height + return resp +} + +return resp +} + +func gRPCErrorToSDKError(err error) + +error { + status, ok := grpcstatus.FromError(err) + if !ok { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) +} + switch status.Code() { + case codes.NotFound: + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, err.Error()) + case codes.InvalidArgument: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.FailedPrecondition: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.Unauthenticated: + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, err.Error()) + +default: + return errorsmod.Wrap(sdkerrors.ErrUnknownRequest, err.Error()) +} +} + +func checkNegativeHeight(height int64) + +error { + if height < 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "cannot query with height < 0; please 
provide a valid height") +} + +return nil +} + +/ CreateQueryContext creates a new sdk.Context for a query, taking as args +/ the block height and whether the query needs a proof or not. +func (app *BaseApp) + +CreateQueryContext(height int64, prove bool) (sdk.Context, error) { + if err := checkNegativeHeight(height); err != nil { + return sdk.Context{ +}, err +} + + / use custom query multi-store if provided + qms := app.qms + if qms == nil { + qms = app.cms.(storetypes.MultiStore) +} + lastBlockHeight := qms.LatestVersion() + if lastBlockHeight == 0 { + return sdk.Context{ +}, errorsmod.Wrapf(sdkerrors.ErrInvalidHeight, "%s is not ready; please wait for first block", app.Name()) +} + if height > lastBlockHeight { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidHeight, + "cannot query with height in the future; please provide a valid height", + ) +} + + / when a client did not provide a query height, manually inject the latest + if height == 0 { + height = lastBlockHeight +} + if height <= 1 && prove { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ) +} + +cacheMS, err := qms.CacheMultiStoreWithVersion(height) + if err != nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrNotFound, + "failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight, + ) +} + + / branch the commit multi-store for safety + ctx := sdk.NewContext(cacheMS, true, app.logger). + WithMinGasPrices(app.minGasPrices). + WithGasMeter(storetypes.NewGasMeter(app.queryGasLimit)). + WithHeaderInfo(coreheader.Info{ + ChainID: app.chainID, + Height: height, +}). + WithBlockHeader(app.checkState.Context().BlockHeader()). 
+ WithBlockHeight(height) + if height != lastBlockHeight { + rms, ok := app.cms.(*rootmulti.Store) + if ok { + cInfo, err := rms.GetCommitInfo(height) + if cInfo != nil && err == nil { + ctx = ctx.WithHeaderInfo(coreheader.Info{ + Height: height, + Time: cInfo.Timestamp +}) +} + +} + +} + +return ctx, nil +} + +/ GetBlockRetentionHeight returns the height for which all blocks below this height +/ are pruned from CometBFT. Given a commitment height and a non-zero local +/ minRetainBlocks configuration, the retentionHeight is the smallest height that +/ satisfies: +/ +/ - Unbonding (safety threshold) + +time: The block interval in which validators +/ can be economically punished for misbehavior. Blocks in this interval must be +/ auditable e.g. by the light client. +/ +/ - Logical store snapshot interval: The block interval at which the underlying +/ logical store database is persisted to disk, e.g. every 10000 heights. Blocks +/ since the last IAVL snapshot must be available for replay on application restart. +/ +/ - State sync snapshots: Blocks since the oldest available snapshot must be +/ available for state sync nodes to catch up (oldest because a node may be +/ restoring an old snapshot while a new snapshot was taken). +/ +/ - Local (minRetainBlocks) + +config: Archive nodes may want to retain more or +/ all blocks, e.g. via a local config option min-retain-blocks. There may also +/ be a need to vary retention for other nodes, e.g. sentry nodes which do not +/ need historical blocks. +func (app *BaseApp) + +GetBlockRetentionHeight(commitHeight int64) + +int64 { + / pruning is disabled if minRetainBlocks is zero + if app.minRetainBlocks == 0 { + return 0 +} + minNonZero := func(x, y int64) + +int64 { + switch { + case x == 0: + return y + case y == 0: + return x + case x < y: + return x + + default: + return y +} + +} + + / Define retentionHeight as the minimum value that satisfies all non-zero + / constraints. 
All blocks below (commitHeight-retentionHeight) + +are pruned + / from CometBFT. + var retentionHeight int64 + + / Define the number of blocks needed to protect against misbehaving validators + / which allows light clients to operate safely. Note, we piggy back of the + / evidence parameters instead of computing an estimated number of blocks based + / on the unbonding period and block commitment time as the two should be + / equivalent. + cp := app.GetConsensusParams(app.finalizeBlockState.Context()) + if cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 { + retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks +} + if app.snapshotManager != nil { + snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights() + if snapshotRetentionHeights > 0 { + retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights) +} + +} + v := commitHeight - int64(app.minRetainBlocks) + +retentionHeight = minNonZero(retentionHeight, v) + if retentionHeight <= 0 { + / prune nothing in the case of a non-positive height + return 0 +} + +return retentionHeight +} + +/ toVoteInfo converts the new ExtendedVoteInfo to VoteInfo. +func toVoteInfo(votes []abci.ExtendedVoteInfo) []abci.VoteInfo { + legacyVotes := make([]abci.VoteInfo, len(votes)) + for i, vote := range votes { + legacyVotes[i] = abci.VoteInfo{ + Validator: abci.Validator{ + Address: vote.Validator.Address, + Power: vote.Validator.Power, +}, + BlockIdFlag: vote.BlockIdFlag, +} + +} + +return legacyVotes +} +``` + +## CheckTx Handler + +`CheckTxHandler` allows users to extend the logic of `CheckTx`. `CheckTxHandler` is called by passing context and the transaction bytes received through ABCI. It is required that the handler returns deterministic results given the same transaction bytes. + + +we return the raw decoded transaction here to avoid decoding it twice. 
+ + +```go +type CheckTxHandler func(ctx sdk.Context, tx []byte) (Tx, error) +``` + +Setting a custom `CheckTxHandler` is optional. It can be done from your app.go file: + +```go expandable +func NewSimApp( + logger log.Logger, + db corestore.KVStoreWithBatch, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + ... + / Create CheckTxHandler + checktxHandler := abci.NewCustomCheckTxHandler(...) + +app.SetCheckTxHandler(checktxHandler) + ... +} +``` diff --git a/docs/sdk/v0.53/documentation/consensus-block-production/introduction.mdx b/docs/sdk/v0.53/documentation/consensus-block-production/introduction.mdx new file mode 100644 index 00000000..23e54449 --- /dev/null +++ b/docs/sdk/v0.53/documentation/consensus-block-production/introduction.mdx @@ -0,0 +1,56 @@ +--- +title: Introduction +description: >- + ABCI, the Application Blockchain Interface, is the interface between CometBFT + and the application. More information about ABCI can be found here. CometBFT + version 0.38 included a new version of ABCI (called ABCI 2.0), which added + several new methods. +--- + +## What is ABCI? + +ABCI, the Application Blockchain Interface, is the interface between CometBFT and the application. More information about ABCI can be found [here](https://docs.cometbft.com/v0.38/spec/abci/). CometBFT version 0.38 included a new version of ABCI (called ABCI 2.0), which added several new methods. + +The five methods introduced in ABCI 2.0 are: + +* `PrepareProposal` +* `ProcessProposal` +* `ExtendVote` +* `VerifyVoteExtension` +* `FinalizeBlock` + +## The Flow + +## PrepareProposal + +Based on validator voting power, CometBFT chooses a block proposer and calls `PrepareProposal` on the block proposer's application (Cosmos SDK). The selected block proposer is responsible for collecting outstanding transactions from the mempool, adhering to the application's specifications.
The application can enforce custom transaction ordering and incorporate additional transactions, potentially generated from vote extensions in the previous block. + +To perform this manipulation on the application side, a custom handler must be implemented. By default, the Cosmos SDK provides `PrepareProposalHandler`, used in conjunction with an application-specific mempool. A custom handler can be written by the application developer; if a no-op handler is provided, all transactions are considered valid. + +Please note that vote extensions only become available at the height after the one at which vote extensions are enabled. More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions). + +After creating the proposal, the proposer returns it to CometBFT. + +PrepareProposal CAN be non-deterministic. + +## ProcessProposal + +This method allows validators to perform application-specific checks on the block proposal and is called on all validators. This is an important step in the consensus process, as it ensures that the block is valid and meets the requirements of the application. For example, validators could check that the block contains all the required transactions or that the block does not create any invalid state transitions. + +The implementation of `ProcessProposal` MUST be deterministic. + +## ExtendVote and VerifyVoteExtensions + +These methods allow applications to extend the voting process by requiring validators to perform additional actions beyond simply validating blocks. + +If vote extensions are enabled, `ExtendVote` will be called on every validator and each one will return its vote extension, which in practice is simply a slice of bytes. As mentioned above, this data (the vote extension) can only be retrieved in the next block height during `PrepareProposal`. Additionally, this data can be arbitrary, but in the provided tutorials, it serves as an oracle or proof of transactions in the mempool.
Essentially, vote extensions are processed and injected as transactions. Examples of use cases for vote extensions include prices for a price oracle or encryption shares for an encrypted transaction mempool. `ExtendVote` CAN be non-deterministic. + +`VerifyVoteExtensions` is performed on every validator multiple times in order to verify other validators' vote extensions. This check is performed to validate the integrity and validity of the vote extensions, preventing malicious or invalid vote extensions. + +Additionally, applications must keep the vote extension data concise, as it can degrade the performance of their chain; see the testing results [here](https://docs.cometbft.com/v0.38/qa/cometbft-qa-38#vote-extensions-testbed). + +`VerifyVoteExtensions` MUST be deterministic. + +## FinalizeBlock + +`FinalizeBlock` is then called and is responsible for updating the state of the blockchain and making the block available to users. diff --git a/docs/sdk/v0.53/documentation/consensus-block-production/prepare-proposal.mdx b/docs/sdk/v0.53/documentation/consensus-block-production/prepare-proposal.mdx new file mode 100644 index 00000000..d38faa2f --- /dev/null +++ b/docs/sdk/v0.53/documentation/consensus-block-production/prepare-proposal.mdx @@ -0,0 +1,667 @@ +--- +title: Prepare Proposal +--- + +`PrepareProposal` handles construction of the block, meaning that when a proposer +is preparing to propose a block, it requests the application to evaluate a +`RequestPrepareProposal`, which contains a series of transactions from CometBFT's +mempool. At this point, the application has complete control over the proposal. +It can modify, delete, and inject transactions from its own app-side mempool into +the proposal or even ignore all the transactions altogether. What the application +does with the transactions provided to it by `RequestPrepareProposal` has no +effect on CometBFT's mempool.
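As a rough illustration of the byte-budget constraint a proposer must respect, here is a self-contained sketch in plain Go (the `selectTxs` helper is hypothetical and counts raw transaction lengths, rather than the proto-encoded sizes the SDK actually uses). It greedily selects transactions in mempool order until adding the next one would exceed the budget:

```go
package main

import "fmt"

// selectTxs greedily picks raw transactions in mempool (FIFO) order,
// stopping as soon as adding the next transaction would push the
// running total past the byte budget. This mirrors the capacity
// accounting in the SDK's default TxSelector, minus gas tracking.
func selectTxs(mempoolTxs [][]byte, maxTxBytes int) [][]byte {
	var (
		selected   [][]byte
		totalBytes int
	)
	for _, tx := range mempoolTxs {
		if totalBytes+len(tx) > maxTxBytes {
			break // budget exhausted; stop selecting
		}
		totalBytes += len(tx)
		selected = append(selected, tx)
	}
	return selected
}

func main() {
	txs := [][]byte{make([]byte, 400), make([]byte, 300), make([]byte, 500)}
	proposal := selectTxs(txs, 800)
	fmt.Println(len(proposal)) // prints 2: the 500-byte tx does not fit after 400+300
}
```

The SDK's default `TxSelector`, shown later on this page, performs the same kind of accounting while additionally tracking each transaction's gas against the block gas limit.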
+ +Note that the application defines the semantics of `PrepareProposal`; it +MAY be non-deterministic and is only executed by the current block proposer. + +Now, reading mempool twice in the previous paragraph may be confusing, so let's break it down. +CometBFT has a mempool that handles gossiping transactions to other nodes +in the network. The order of these transactions is determined by CometBFT's mempool, +using FIFO as the sole ordering mechanism. It's worth noting that the priority mempool +in CometBFT was deprecated and has since been removed. +However, since the application is able to fully inspect +all transactions, it can provide greater control over transaction ordering. +Allowing the application to handle ordering enables the application to define how +it would like the block constructed. + +The Cosmos SDK defines the `DefaultProposalHandler` type, which provides applications with +`PrepareProposal` and `ProcessProposal` handlers. If you decide to implement your +own `PrepareProposal` handler, you must ensure that the transactions +selected DO NOT exceed the maximum block gas (if set) and the maximum bytes provided +by `req.MaxBytes`. + +```go expandable +package baseapp + +import ( + + "bytes" + "context" + "fmt" + "slices" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + "cosmossdk.io/core/comet" + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +type ( + / ValidatorStore defines the interface contract required for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key.
+ ValidatorStore interface { + GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error) +} + + / GasTx defines the contract that a transaction with a gas limit must implement. + GasTx interface { + GetGas() + +uint64 +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in PrepareProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +/ NOTE: From v0.50.5 `currentHeight` and `chainID` arguments are ignored for fixing an issue. +/ They will be removed from the function in v0.51+. +func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + _ int64, + _ string, + extCommit abci.ExtendedCommitInfo, +) + +error { + / Get values from context + cp := ctx.ConsensusParams() + currentHeight := ctx.HeaderInfo().Height + chainID := ctx.HeaderInfo().ChainID + commitInfo := ctx.CometInfo().GetLastCommit() + + / Check that both extCommit + commit are ordered in accordance with vp/address. + if err := validateExtendedCommitAgainstLastCommit(extCommit, commitInfo); err != nil { + return err +} + + / Start checking vote extensions only **after** the vote extensions enable + / height, because when `currentHeight == VoteExtensionsEnableHeight` + / PrepareProposal doesn't get any vote extensions in its request. + extsEnabled := cp.Abci != nil && currentHeight > cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var ( + / Total voting power of all vote extensions. + totalVP int64 + / Total voting power of all validators that submitted valid vote extensions. 
+ sumVP int64 + ) + for _, vote := range extCommit.Votes { + totalVP += vote.Validator.Power + + / Only check + include power if the vote is a commit vote. There must be super-majority, otherwise the + / previous block (the block the vote is for) + +could not have been committed. + if vote.BlockIdFlag != cmtproto.BlockIDFlagCommit { + continue +} + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := sdk.ConsAddress(vote.Validator.Address) + +pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP += vote.Validator.Power +} + + / This check is probably unnecessary, but better safe than sorry. 
+ if totalVP <= 0 { + return fmt.Errorf("total voting power must be positive, got: %d", totalVP) +} + + / If the sum of the voting power has not reached (2/3 + 1) + +we need to error. + if requiredVP := ((totalVP * 2) / 3) + 1; sumVP < requiredVP { + return fmt.Errorf( + "insufficient cumulative voting power received to verify vote extensions; got: %d, expected: >=%d", + sumVP, requiredVP, + ) +} + +return nil +} + +/ validateExtendedCommitAgainstLastCommit validates an ExtendedCommitInfo against a LastCommit. Specifically, +/ it checks that the ExtendedCommit + LastCommit (for the same height), are consistent with each other + that +/ they are ordered correctly (by voting power) + +in accordance with +/ [comet](https://github.com/cometbft/cometbft/blob/4ce0277b35f31985bbf2c25d3806a184a4510010/types/validator_set.go#L784). +func validateExtendedCommitAgainstLastCommit(ec abci.ExtendedCommitInfo, lc comet.CommitInfo) + +error { + / check that the rounds are the same + if ec.Round != lc.Round() { + return fmt.Errorf("extended commit round %d does not match last commit round %d", ec.Round, lc.Round()) +} + + / check that the # of votes are the same + if len(ec.Votes) != lc.Votes().Len() { + return fmt.Errorf("extended commit votes length %d does not match last commit votes length %d", len(ec.Votes), lc.Votes().Len()) +} + + / check sort order of extended commit votes + if !slices.IsSortedFunc(ec.Votes, func(vote1, vote2 abci.ExtendedVoteInfo) + +int { + if vote1.Validator.Power == vote2.Validator.Power { + return bytes.Compare(vote1.Validator.Address, vote2.Validator.Address) / addresses sorted in ascending order (used to break vp conflicts) +} + +return -int(vote1.Validator.Power - vote2.Validator.Power) / vp sorted in descending order +}) { + return fmt.Errorf("extended commit votes are not sorted by voting power") +} + addressCache := make(map[string]struct{ +}, len(ec.Votes)) + / check that consistency between LastCommit and ExtendedCommit + for i, vote := range 
ec.Votes { + / cache addresses to check for duplicates + if _, ok := addressCache[string(vote.Validator.Address)]; ok { + return fmt.Errorf("extended commit vote address %X is duplicated", vote.Validator.Address) +} + +addressCache[string(vote.Validator.Address)] = struct{ +}{ +} + if !bytes.Equal(vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) { + return fmt.Errorf("extended commit vote address %X does not match last commit vote address %X", vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) +} + if vote.Validator.Power != lc.Votes().Get(i).Validator().Power() { + return fmt.Errorf("extended commit vote power %d does not match last commit vote power %d", vote.Validator.Power, lc.Votes().Get(i).Validator().Power()) +} + +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) + +TxDecode(txBz []byte) (sdk.Tx, error) + +TxEncode(tx sdk.Tx) ([]byte, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier + txSelector TxSelector + signerExtAdapter mempool.SignerExtractionAdapter +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) *DefaultProposalHandler { + return &DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, + txSelector: NewDefaultTxSelector(), + signerExtAdapter: mempool.NewDefaultSignerExtractionAdapter(), +} +} + +/ SetTxSelector sets the TxSelector function on the DefaultProposalHandler. 
+func (h *DefaultProposalHandler) + +SetTxSelector(ts TxSelector) { + h.txSelector = ts +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h *DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + var maxBlockGas uint64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = uint64(b.MaxGas) +} + +defer h.txSelector.Clear() + + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + / + / Note, we still need to ensure the transactions returned respect req.MaxTxBytes. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + for _, txBz := range req.Txs { + tx, err := h.txVerifier.TxDecode(txBz) + if err != nil { + return nil, err +} + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, tx, txBz) + if stop { + break +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} + selectedTxsSignersSeqs := make(map[string]uint64) + +var ( + resError error + selectedTxsNums int + invalidTxs []sdk.Tx / invalid txs to be removed out of the loop to avoid dead lock + ) + +mempool.SelectBy(ctx, h.mempool, req.Txs, func(memTx sdk.Tx) + +bool { + unorderedTx, ok := memTx.(sdk.TxWithUnordered) + isUnordered := ok && unorderedTx.GetUnordered() + txSignersSeqs := make(map[string]uint64) + + / if the tx is unordered, we don't need to check the sequence, we just add it + if !isUnordered { + signerData, err := h.signerExtAdapter.GetSigners(memTx) + if err != nil { + / propagate the error to the caller + resError = err + return false +} + + / If the signers aren't in selectedTxsSignersSeqs then we haven't seen them before + / so we add them and continue given that we don't need to check the sequence. + shouldAdd := true + for _, signer := range signerData { + seq, ok := selectedTxsSignersSeqs[signer.Signer.String()] + if !ok { + txSignersSeqs[signer.Signer.String()] = signer.Sequence + continue +} + + / If we have seen this signer before in this block, we must make + / sure that the current sequence is seq+1; otherwise is invalid + / and we skip it. + if seq+1 != signer.Sequence { + shouldAdd = false + break +} + +txSignersSeqs[signer.Signer.String()] = signer.Sequence +} + if !shouldAdd { + return true +} + +} + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. 
+ txBz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + invalidTxs = append(invalidTxs, memTx) +} + +else { + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, memTx, txBz) + if stop { + return false +} + txsLen := len(h.txSelector.SelectedTxs(ctx)) + / If the tx is unordered, we don't need to update the sender sequence. + if !isUnordered { + for sender, seq := range txSignersSeqs { + / If txsLen != selectedTxsNums is true, it means that we've + / added a new tx to the selected txs, so we need to update + / the sequence of the sender. + if txsLen != selectedTxsNums { + selectedTxsSignersSeqs[sender] = seq +} + +else if _, ok := selectedTxsSignersSeqs[sender]; !ok { + / The transaction hasn't been added but it passed the + / verification, so we know that the sequence is correct. + / So we set this sender's sequence to seq-1, in order + / to avoid unnecessary calls to PrepareProposalVerifyTx. + selectedTxsSignersSeqs[sender] = seq - 1 +} + +} + +} + +selectedTxsNums = txsLen +} + +return true +}) + if resError != nil { + return nil, resError +} + for _, tx := range invalidTxs { + err := h.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return nil, err +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. 
+func (h *DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + var totalTxGas uint64 + + var maxBlockGas int64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = b.MaxGas +} + for _, txBytes := range req.Txs { + tx, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if maxBlockGas > 0 { + gasTx, ok := tx.(GasTx) + if ok { + totalTxGas += gasTx.GetGas() +} + if totalTxGas > uint64(maxBlockGas) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. 
+func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. +func NoOpVerifyVoteExtensionHandler() + +sdk.VerifyVoteExtensionHandler { + return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} + +/ TxSelector defines a helper type that assists in selecting transactions during +/ mempool transaction selection in PrepareProposal. It keeps track of the total +/ number of bytes and total gas of the selected transactions. It also keeps +/ track of the selected transactions themselves. +type TxSelector interface { + / SelectedTxs should return a copy of the selected transactions. + SelectedTxs(ctx context.Context) [][]byte + + / Clear should clear the TxSelector, nulling out all relevant fields. + Clear() + + / SelectTxForProposal should attempt to select a transaction for inclusion in + / a proposal based on inclusion criteria defined by the TxSelector. It must + / return if the caller should halt the transaction selection loop + / (typically over a mempool) + +or otherwise. 
+ SelectTxForProposal(ctx context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool +} + +type defaultTxSelector struct { + totalTxBytes uint64 + totalTxGas uint64 + selectedTxs [][]byte +} + +func NewDefaultTxSelector() + +TxSelector { + return &defaultTxSelector{ +} +} + +func (ts *defaultTxSelector) + +SelectedTxs(_ context.Context) [][]byte { + txs := make([][]byte, len(ts.selectedTxs)) + +copy(txs, ts.selectedTxs) + +return txs +} + +func (ts *defaultTxSelector) + +Clear() { + ts.totalTxBytes = 0 + ts.totalTxGas = 0 + ts.selectedTxs = nil +} + +func (ts *defaultTxSelector) + +SelectTxForProposal(_ context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool { + txSize := uint64(cmttypes.ComputeProtoSizeForTxs([]cmttypes.Tx{ + txBz +})) + +var txGasLimit uint64 + if memTx != nil { + if gasTx, ok := memTx.(GasTx); ok { + txGasLimit = gasTx.GetGas() +} + +} + + / only add the transaction to the proposal if we have enough capacity + if (txSize + ts.totalTxBytes) <= maxTxBytes { + / If there is a max block gas limit, add the tx only if the limit has + / not been met. 
+ if maxBlockGas > 0 { + if (txGasLimit + ts.totalTxGas) <= maxBlockGas { + ts.totalTxGas += txGasLimit + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + +else { + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + + / check if we've reached capacity; if so, we cannot select any more transactions + return ts.totalTxBytes >= maxTxBytes || (maxBlockGas > 0 && (ts.totalTxGas >= maxBlockGas)) +} +``` + +This default implementation can be overridden by the application developer in +favor of a custom implementation in [`app_di.go`](/docs/sdk/v0.53/documentation/application-framework/app-go-di): + +```go +prepareOpt := func(app *baseapp.BaseApp) { + abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app) + +app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) +} + +baseAppOptions = append(baseAppOptions, prepareOpt) +``` diff --git a/docs/sdk/v0.53/documentation/consensus-block-production/process-proposal.mdx b/docs/sdk/v0.53/documentation/consensus-block-production/process-proposal.mdx new file mode 100644 index 00000000..5b954e60 --- /dev/null +++ b/docs/sdk/v0.53/documentation/consensus-block-production/process-proposal.mdx @@ -0,0 +1,654 @@ +--- +title: Process Proposal +--- + +`ProcessProposal` handles the validation of a proposal from `PrepareProposal`, +which also includes a block header. After a block has been proposed, +the other validators have the right to accept or reject that block. In the +default implementation of `ProcessProposal`, each validator runs basic validity checks on every +transaction. + +Note, `ProcessProposal` MUST be deterministic. Non-deterministic behaviors will cause apphash mismatches. +This means if `ProcessProposal` panics or fails and we reject, all honest validator +processes should reject (i.e., prevote nil).
If so, CometBFT will start a new round with a new block proposal and the same cycle will happen with `PrepareProposal` +and `ProcessProposal` for the new proposal. + +Here is the default implementation: + +```go expandable +package baseapp + +import ( + + "bytes" + "context" + "fmt" + "slices" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cryptoenc "github.com/cometbft/cometbft/crypto/encoding" + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + protoio "github.com/cosmos/gogoproto/io" + "github.com/cosmos/gogoproto/proto" + "cosmossdk.io/core/comet" + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/mempool" +) + +type ( + / ValidatorStore defines the interface contract required for verifying vote + / extension signatures. Typically, this will be implemented by the x/staking + / module, which has knowledge of the CometBFT public key. + ValidatorStore interface { + GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error) +} + + / GasTx defines the contract that a transaction with a gas limit must implement. + GasTx interface { + GetGas() + +uint64 +} +) + +/ ValidateVoteExtensions defines a helper function for verifying vote extension +/ signatures that may be passed or manually injected into a block proposal from +/ a proposer in PrepareProposal. It returns an error if any signature is invalid +/ or if unexpected vote extensions and/or signatures are found or less than 2/3 +/ power is received. +/ NOTE: From v0.50.5 `currentHeight` and `chainID` arguments are ignored for fixing an issue. +/ They will be removed from the function in v0.51+.
+func ValidateVoteExtensions( + ctx sdk.Context, + valStore ValidatorStore, + _ int64, + _ string, + extCommit abci.ExtendedCommitInfo, +) + +error { + / Get values from context + cp := ctx.ConsensusParams() + currentHeight := ctx.HeaderInfo().Height + chainID := ctx.HeaderInfo().ChainID + commitInfo := ctx.CometInfo().GetLastCommit() + + / Check that both extCommit + commit are ordered in accordance with vp/address. + if err := validateExtendedCommitAgainstLastCommit(extCommit, commitInfo); err != nil { + return err +} + + / Start checking vote extensions only **after** the vote extensions enable + / height, because when `currentHeight == VoteExtensionsEnableHeight` + / PrepareProposal doesn't get any vote extensions in its request. + extsEnabled := cp.Abci != nil && currentHeight > cp.Abci.VoteExtensionsEnableHeight && cp.Abci.VoteExtensionsEnableHeight != 0 + marshalDelimitedFn := func(msg proto.Message) ([]byte, error) { + var buf bytes.Buffer + if err := protoio.NewDelimitedWriter(&buf).WriteMsg(msg); err != nil { + return nil, err +} + +return buf.Bytes(), nil +} + +var ( + / Total voting power of all vote extensions. + totalVP int64 + / Total voting power of all validators that submitted valid vote extensions. + sumVP int64 + ) + for _, vote := range extCommit.Votes { + totalVP += vote.Validator.Power + + / Only check + include power if the vote is a commit vote. There must be super-majority, otherwise the + / previous block (the block the vote is for) + +could not have been committed. 
+ if vote.BlockIdFlag != cmtproto.BlockIDFlagCommit { + continue +} + if !extsEnabled { + if len(vote.VoteExtension) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension at height %d", currentHeight) +} + if len(vote.ExtensionSignature) > 0 { + return fmt.Errorf("vote extensions disabled; received non-empty vote extension signature at height %d", currentHeight) +} + +continue +} + if len(vote.ExtensionSignature) == 0 { + return fmt.Errorf("vote extensions enabled; received empty vote extension signature at height %d", currentHeight) +} + valConsAddr := sdk.ConsAddress(vote.Validator.Address) + +pubKeyProto, err := valStore.GetPubKeyByConsAddr(ctx, valConsAddr) + if err != nil { + return fmt.Errorf("failed to get validator %X public key: %w", valConsAddr, err) +} + +cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto) + if err != nil { + return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err) +} + cve := cmtproto.CanonicalVoteExtension{ + Extension: vote.VoteExtension, + Height: currentHeight - 1, / the vote extension was signed in the previous height + Round: int64(extCommit.Round), + ChainId: chainID, +} + +extSignBytes, err := marshalDelimitedFn(&cve) + if err != nil { + return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err) +} + if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) { + return fmt.Errorf("failed to verify validator %X vote extension signature", valConsAddr) +} + +sumVP += vote.Validator.Power +} + + / This check is probably unnecessary, but better safe than sorry. + if totalVP <= 0 { + return fmt.Errorf("total voting power must be positive, got: %d", totalVP) +} + + / If the sum of the voting power has not reached (2/3 + 1) + +we need to error. 
+ if requiredVP := ((totalVP * 2) / 3) + 1; sumVP < requiredVP { + return fmt.Errorf( + "insufficient cumulative voting power received to verify vote extensions; got: %d, expected: >=%d", + sumVP, requiredVP, + ) +} + +return nil +} + +/ validateExtendedCommitAgainstLastCommit validates an ExtendedCommitInfo against a LastCommit. Specifically, +/ it checks that the ExtendedCommit + LastCommit (for the same height), are consistent with each other + that +/ they are ordered correctly (by voting power) + +in accordance with +/ [comet](https://github.com/cometbft/cometbft/blob/4ce0277b35f31985bbf2c25d3806a184a4510010/types/validator_set.go#L784). +func validateExtendedCommitAgainstLastCommit(ec abci.ExtendedCommitInfo, lc comet.CommitInfo) + +error { + / check that the rounds are the same + if ec.Round != lc.Round() { + return fmt.Errorf("extended commit round %d does not match last commit round %d", ec.Round, lc.Round()) +} + + / check that the # of votes are the same + if len(ec.Votes) != lc.Votes().Len() { + return fmt.Errorf("extended commit votes length %d does not match last commit votes length %d", len(ec.Votes), lc.Votes().Len()) +} + + / check sort order of extended commit votes + if !slices.IsSortedFunc(ec.Votes, func(vote1, vote2 abci.ExtendedVoteInfo) + +int { + if vote1.Validator.Power == vote2.Validator.Power { + return bytes.Compare(vote1.Validator.Address, vote2.Validator.Address) / addresses sorted in ascending order (used to break vp conflicts) +} + +return -int(vote1.Validator.Power - vote2.Validator.Power) / vp sorted in descending order +}) { + return fmt.Errorf("extended commit votes are not sorted by voting power") +} + addressCache := make(map[string]struct{ +}, len(ec.Votes)) + / check that consistency between LastCommit and ExtendedCommit + for i, vote := range ec.Votes { + / cache addresses to check for duplicates + if _, ok := addressCache[string(vote.Validator.Address)]; ok { + return fmt.Errorf("extended commit vote address %X is 
duplicated", vote.Validator.Address) +} + +addressCache[string(vote.Validator.Address)] = struct{ +}{ +} + if !bytes.Equal(vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) { + return fmt.Errorf("extended commit vote address %X does not match last commit vote address %X", vote.Validator.Address, lc.Votes().Get(i).Validator().Address()) +} + if vote.Validator.Power != lc.Votes().Get(i).Validator().Power() { + return fmt.Errorf("extended commit vote power %d does not match last commit vote power %d", vote.Validator.Power, lc.Votes().Get(i).Validator().Power()) +} + +} + +return nil +} + +type ( + / ProposalTxVerifier defines the interface that is implemented by BaseApp, + / that any custom ABCI PrepareProposal and ProcessProposal handler can use + / to verify a transaction. + ProposalTxVerifier interface { + PrepareProposalVerifyTx(tx sdk.Tx) ([]byte, error) + +ProcessProposalVerifyTx(txBz []byte) (sdk.Tx, error) + +TxDecode(txBz []byte) (sdk.Tx, error) + +TxEncode(tx sdk.Tx) ([]byte, error) +} + + / DefaultProposalHandler defines the default ABCI PrepareProposal and + / ProcessProposal handlers. + DefaultProposalHandler struct { + mempool mempool.Mempool + txVerifier ProposalTxVerifier + txSelector TxSelector + signerExtAdapter mempool.SignerExtractionAdapter +} +) + +func NewDefaultProposalHandler(mp mempool.Mempool, txVerifier ProposalTxVerifier) *DefaultProposalHandler { + return &DefaultProposalHandler{ + mempool: mp, + txVerifier: txVerifier, + txSelector: NewDefaultTxSelector(), + signerExtAdapter: mempool.NewDefaultSignerExtractionAdapter(), +} +} + +/ SetTxSelector sets the TxSelector function on the DefaultProposalHandler. +func (h *DefaultProposalHandler) + +SetTxSelector(ts TxSelector) { + h.txSelector = ts +} + +/ PrepareProposalHandler returns the default implementation for processing an +/ ABCI proposal. The application's mempool is enumerated and all valid +/ transactions are added to the proposal. 
Transactions are valid if they: +/ +/ 1) + +Successfully encode to bytes. +/ 2) + +Are valid (i.e. pass runTx, AnteHandler only). +/ +/ Enumeration is halted once RequestPrepareProposal.MaxBytes of transactions is +/ reached or the mempool is exhausted. +/ +/ Note: +/ +/ - Step (2) + +is identical to the validation step performed in +/ DefaultProcessProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. +/ +/ - If no mempool is set or if the mempool is a no-op mempool, the transactions +/ requested from CometBFT will simply be returned, which, by default, are in +/ FIFO order. +func (h *DefaultProposalHandler) + +PrepareProposalHandler() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + var maxBlockGas uint64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = uint64(b.MaxGas) +} + +defer h.txSelector.Clear() + + / If the mempool is nil or NoOp we simply return the transactions + / requested from CometBFT, which, by default, should be in FIFO order. + / + / Note, we still need to ensure the transactions returned respect req.MaxTxBytes. 
+ _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + for _, txBz := range req.Txs { + tx, err := h.txVerifier.TxDecode(txBz) + if err != nil { + return nil, err +} + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, tx, txBz) + if stop { + break +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} + selectedTxsSignersSeqs := make(map[string]uint64) + +var ( + resError error + selectedTxsNums int + invalidTxs []sdk.Tx / invalid txs to be removed out of the loop to avoid dead lock + ) + +mempool.SelectBy(ctx, h.mempool, req.Txs, func(memTx sdk.Tx) + +bool { + unorderedTx, ok := memTx.(sdk.TxWithUnordered) + isUnordered := ok && unorderedTx.GetUnordered() + txSignersSeqs := make(map[string]uint64) + + / if the tx is unordered, we don't need to check the sequence, we just add it + if !isUnordered { + signerData, err := h.signerExtAdapter.GetSigners(memTx) + if err != nil { + / propagate the error to the caller + resError = err + return false +} + + / If the signers aren't in selectedTxsSignersSeqs then we haven't seen them before + / so we add them and continue given that we don't need to check the sequence. + shouldAdd := true + for _, signer := range signerData { + seq, ok := selectedTxsSignersSeqs[signer.Signer.String()] + if !ok { + txSignersSeqs[signer.Signer.String()] = signer.Sequence + continue +} + + / If we have seen this signer before in this block, we must make + / sure that the current sequence is seq+1; otherwise is invalid + / and we skip it. + if seq+1 != signer.Sequence { + shouldAdd = false + break +} + +txSignersSeqs[signer.Signer.String()] = signer.Sequence +} + if !shouldAdd { + return true +} + +} + + / NOTE: Since transaction verification was already executed in CheckTx, + / which calls mempool.Insert, in theory everything in the pool should be + / valid. But some mempool implementations may insert invalid txs, so we + / check again. 
+ txBz, err := h.txVerifier.PrepareProposalVerifyTx(memTx) + if err != nil { + invalidTxs = append(invalidTxs, memTx) +} + +else { + stop := h.txSelector.SelectTxForProposal(ctx, uint64(req.MaxTxBytes), maxBlockGas, memTx, txBz) + if stop { + return false +} + txsLen := len(h.txSelector.SelectedTxs(ctx)) + / If the tx is unordered, we don't need to update the sender sequence. + if !isUnordered { + for sender, seq := range txSignersSeqs { + / If txsLen != selectedTxsNums is true, it means that we've + / added a new tx to the selected txs, so we need to update + / the sequence of the sender. + if txsLen != selectedTxsNums { + selectedTxsSignersSeqs[sender] = seq +} + +else if _, ok := selectedTxsSignersSeqs[sender]; !ok { + / The transaction hasn't been added but it passed the + / verification, so we know that the sequence is correct. + / So we set this sender's sequence to seq-1, in order + / to avoid unnecessary calls to PrepareProposalVerifyTx. + selectedTxsSignersSeqs[sender] = seq - 1 +} + +} + +} + +selectedTxsNums = txsLen +} + +return true +}) + if resError != nil { + return nil, resError +} + for _, tx := range invalidTxs { + err := h.mempool.Remove(tx) + if err != nil && !errors.Is(err, mempool.ErrTxNotFound) { + return nil, err +} + +} + +return &abci.ResponsePrepareProposal{ + Txs: h.txSelector.SelectedTxs(ctx) +}, nil +} +} + +/ ProcessProposalHandler returns the default implementation for processing an +/ ABCI proposal. Every transaction in the proposal must pass 2 conditions: +/ +/ 1. The transaction bytes must decode to a valid transaction. +/ 2. The transaction must be valid (i.e. pass runTx, AnteHandler only) +/ +/ If any transaction fails to pass either condition, the proposal is rejected. +/ Note that step (2) + +is identical to the validation step performed in +/ DefaultPrepareProposal. It is very important that the same validation logic +/ is used in both steps, and applications must ensure that this is the case in +/ non-default handlers. 
+func (h *DefaultProposalHandler) + +ProcessProposalHandler() + +sdk.ProcessProposalHandler { + / If the mempool is nil or NoOp we simply return ACCEPT, + / because PrepareProposal may have included txs that could fail verification. + _, isNoOp := h.mempool.(mempool.NoOpMempool) + if h.mempool == nil || isNoOp { + return NoOpProcessProposal() +} + +return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + var totalTxGas uint64 + + var maxBlockGas int64 + if b := ctx.ConsensusParams().Block; b != nil { + maxBlockGas = b.MaxGas +} + for _, txBytes := range req.Txs { + tx, err := h.txVerifier.ProcessProposalVerifyTx(txBytes) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if maxBlockGas > 0 { + gasTx, ok := tx.(GasTx) + if ok { + totalTxGas += gasTx.GetGas() +} + if totalTxGas > uint64(maxBlockGas) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpPrepareProposal defines a no-op PrepareProposal handler. It will always +/ return the transactions sent by the client's request. +func NoOpPrepareProposal() + +sdk.PrepareProposalHandler { + return func(_ sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + return &abci.ResponsePrepareProposal{ + Txs: req.Txs +}, nil +} +} + +/ NoOpProcessProposal defines a no-op ProcessProposal Handler. It will always +/ return ACCEPT. +func NoOpProcessProposal() + +sdk.ProcessProposalHandler { + return func(_ sdk.Context, _ *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} + +/ NoOpExtendVote defines a no-op ExtendVote handler. It will always return an +/ empty byte slice as the vote extension. 
+func NoOpExtendVote() + +sdk.ExtendVoteHandler { + return func(_ sdk.Context, _ *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + return &abci.ResponseExtendVote{ + VoteExtension: []byte{ +}}, nil +} +} + +/ NoOpVerifyVoteExtensionHandler defines a no-op VerifyVoteExtension handler. It +/ will always return an ACCEPT status with no error. +func NoOpVerifyVoteExtensionHandler() + +sdk.VerifyVoteExtensionHandler { + return func(_ sdk.Context, _ *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} + +/ TxSelector defines a helper type that assists in selecting transactions during +/ mempool transaction selection in PrepareProposal. It keeps track of the total +/ number of bytes and total gas of the selected transactions. It also keeps +/ track of the selected transactions themselves. +type TxSelector interface { + / SelectedTxs should return a copy of the selected transactions. + SelectedTxs(ctx context.Context) [][]byte + + / Clear should clear the TxSelector, nulling out all relevant fields. + Clear() + + / SelectTxForProposal should attempt to select a transaction for inclusion in + / a proposal based on inclusion criteria defined by the TxSelector. It must + / return if the caller should halt the transaction selection loop + / (typically over a mempool) + +or otherwise. 
+ SelectTxForProposal(ctx context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool +} + +type defaultTxSelector struct { + totalTxBytes uint64 + totalTxGas uint64 + selectedTxs [][]byte +} + +func NewDefaultTxSelector() + +TxSelector { + return &defaultTxSelector{ +} +} + +func (ts *defaultTxSelector) + +SelectedTxs(_ context.Context) [][]byte { + txs := make([][]byte, len(ts.selectedTxs)) + +copy(txs, ts.selectedTxs) + +return txs +} + +func (ts *defaultTxSelector) + +Clear() { + ts.totalTxBytes = 0 + ts.totalTxGas = 0 + ts.selectedTxs = nil +} + +func (ts *defaultTxSelector) + +SelectTxForProposal(_ context.Context, maxTxBytes, maxBlockGas uint64, memTx sdk.Tx, txBz []byte) + +bool { + txSize := uint64(cmttypes.ComputeProtoSizeForTxs([]cmttypes.Tx{ + txBz +})) + +var txGasLimit uint64 + if memTx != nil { + if gasTx, ok := memTx.(GasTx); ok { + txGasLimit = gasTx.GetGas() +} + +} + + / only add the transaction to the proposal if we have enough capacity + if (txSize + ts.totalTxBytes) <= maxTxBytes { + / If there is a max block gas limit, add the tx only if the limit has + / not been met. + if maxBlockGas > 0 { + if (txGasLimit + ts.totalTxGas) <= maxBlockGas { + ts.totalTxGas += txGasLimit + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + +else { + ts.totalTxBytes += txSize + ts.selectedTxs = append(ts.selectedTxs, txBz) +} + +} + + / check if we've reached capacity; if so, we cannot select any more transactions + return ts.totalTxBytes >= maxTxBytes || (maxBlockGas > 0 && (ts.totalTxGas >= maxBlockGas)) +} +``` + +Like `PrepareProposal`, this implementation is the default and can be modified by +the application developer in [`app_di.go`](/docs/sdk/v0.53/documentation/application-framework/app-go-di). If you decide to implement +your own `ProcessProposal` handler, you must ensure that the transactions +provided in the proposal DO NOT exceed the maximum block gas and `maxtxbytes` (if set). 
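As a hypothetical illustration of those limits, the accounting a custom handler must perform can be sketched in isolation. The names `fakeTx` and `exceedsLimits` below are inventions for this sketch, not SDK APIs; a real handler works with `sdk.Tx`, `GasTx.GetGas()`, and the consensus parameters from the context:

```go
package main

import "fmt"

// fakeTx is a hypothetical stand-in for a decoded transaction, carrying only
// the two quantities the proposal checks care about.
type fakeTx struct {
	gas  uint64
	size uint64
}

// exceedsLimits reports whether a proposal's transactions blow the block's
// gas or byte budget, mirroring the cumulative checks a custom
// ProcessProposal handler must perform. A maxBlockGas of 0 means "no limit",
// matching the convention in the default handler above.
func exceedsLimits(txs []fakeTx, maxBlockGas, maxTxBytes uint64) bool {
	var totalGas, totalBytes uint64
	for _, tx := range txs {
		totalGas += tx.gas
		totalBytes += tx.size
		if (maxBlockGas > 0 && totalGas > maxBlockGas) || totalBytes > maxTxBytes {
			return true // the proposal must be rejected
		}
	}
	return false
}

func main() {
	txs := []fakeTx{{gas: 60, size: 10}, {gas: 50, size: 10}}
	fmt.Println(exceedsLimits(txs, 100, 1000)) // true: 110 cumulative gas > 100
	fmt.Println(exceedsLimits(txs, 0, 1000))   // false: no gas limit configured
}
```

The key point is that the check is cumulative across the whole proposal, not per transaction.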
+
+```go
+processOpt := func(app *baseapp.BaseApp) {
+    abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+    app.SetProcessProposal(abciPropHandler.ProcessProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, processOpt)
+```
diff --git a/docs/sdk/v0.53/documentation/consensus-block-production/vote-extensions.mdx b/docs/sdk/v0.53/documentation/consensus-block-production/vote-extensions.mdx
new file mode 100644
index 00000000..057050cf
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/consensus-block-production/vote-extensions.mdx
@@ -0,0 +1,128 @@
+---
+title: Vote Extensions
+---
+
+## Synopsis
+
+This section describes how the application can define and use vote extensions
+defined in ABCI++.
+
+## Extend Vote
+
+ABCI 2.0 (colloquially called ABCI++) allows an application to extend a pre-commit vote with arbitrary data. This process does NOT have to be deterministic, and the data returned can be unique to the
+validator process. The Cosmos SDK defines [`baseapp.ExtendVoteHandler`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/abci.go#L32):
+
+```go
+type ExtendVoteHandler func(Context, *abci.RequestExtendVote) (*abci.ResponseExtendVote, error)
+```
+
+An application can set this handler in `app.go` via the `baseapp.SetExtendVoteHandler`
+`BaseApp` option function. The `sdk.ExtendVoteHandler`, if defined, is called during
+the `ExtendVote` ABCI method. Note, if an application decides to implement
+`baseapp.ExtendVoteHandler`, it MUST return a non-nil `VoteExtension`. However, the vote
+extension can be empty. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#extendvote)
+for more details.
+
+There are many decentralized censorship-resistant use cases for vote extensions.
+For example, a validator may want to submit prices for a price oracle or encryption
+shares for an encrypted transaction mempool. Note, an application should be careful
+to consider the size of the vote extensions as they could increase latency in block
+production. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/docs/qa/CometBFT-QA-38.md#vote-extensions-testbed)
+for more details.
+
+Click [here](https://docs.cosmos.network/main/build/abci/vote-extensions) if you would like a walkthrough of how to implement vote extensions.
+
+## Verify Vote Extension
+
+Similar to extending a vote, an application can also verify vote extensions from
+other validators when validating their pre-commits. For a given vote extension,
+this process MUST be deterministic. The Cosmos SDK defines [`sdk.VerifyVoteExtensionHandler`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/types/abci.go#L29-L31):
+
+```go
+type VerifyVoteExtensionHandler func(Context, *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error)
+```
+
+An application can set this handler in `app.go` via the `baseapp.SetVerifyVoteExtensionHandler`
+`BaseApp` option function. The `sdk.VerifyVoteExtensionHandler`, if defined, is called
+during the `VerifyVoteExtension` ABCI method. If an application defines a vote
+extension handler, it should also define a verification handler. Note, not all
+validators will share the same view of which vote extensions they verify, depending
+on how votes are propagated. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#verifyvoteextension)
+for more details.
+
+Additionally, keep in mind that performance can degrade if vote extensions are too
+large ([link](https://docs.cometbft.com/v0.38/qa/cometbft-qa-38#vote-extensions-testbed)), so we highly recommend validating their size in `VerifyVoteExtension`.
+
+## Vote Extension Propagation
+
+The agreed-upon vote extensions at height `H` are provided to the proposing validator
+at height `H+1` during `PrepareProposal`.
As a result, the vote extensions are
+not natively provided or exposed to the remaining validators during `ProcessProposal`.
+Therefore, if an application requires that the agreed-upon vote extensions from
+height `H` are available to all validators at `H+1`, the application must propagate
+these vote extensions manually in the block proposal itself. This can be done by
+"injecting" them into the block proposal, since the `Txs` field in `PrepareProposal`
+is just a slice of byte slices.
+
+`FinalizeBlock` will ignore any byte slice that doesn't decode to a valid `sdk.Tx`, so
+any injected vote extensions will safely be ignored in `FinalizeBlock`. For more
+details on propagation, see the [ABCI++ 2.0 ADR](/docs/sdk/next/documentation/legacy/adr-comprehensive#vote-extension-propagation--verification).
+
+### Recovery of injected Vote Extensions
+
+As stated before, vote extensions can be injected into a block proposal (along with
+other transactions in the `Txs` field). The Cosmos SDK provides a pre-FinalizeBlock
+hook to allow applications to recover vote extensions, perform any necessary
+computation on them, and then store the results in the cached store. These results
+will be available to the application during the subsequent `FinalizeBlock` call.
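The injection half of this flow can be sketched in isolation. The `VE` payload type and the helper names below are hypothetical, and JSON is chosen purely for illustration; a real application would use a deterministic encoding and verify the injected extensions against the validator set:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// VE is a hypothetical vote-extension payload for this sketch.
type VE struct {
	Price int64 `json:"price"`
}

// injectVEs prepends the encoded vote extensions as a pseudo-"transaction" at
// position 0 of the proposal; since Txs is just [][]byte, any encoding works.
func injectVEs(ves []VE, txs [][]byte) ([][]byte, error) {
	bz, err := json.Marshal(ves)
	if err != nil {
		return nil, err
	}
	return append([][]byte{bz}, txs...), nil
}

// parseVoteExtensions is the receiving-side counterpart: it recovers the
// injected payload from position 0 and returns the remaining real txs.
func parseVoteExtensions(txs [][]byte) ([]VE, [][]byte, error) {
	if len(txs) == 0 {
		return nil, txs, fmt.Errorf("empty proposal: no injected payload")
	}
	var ves []VE
	if err := json.Unmarshal(txs[0], &ves); err != nil {
		return nil, txs, err
	}
	return ves, txs[1:], nil
}

func main() {
	txs, _ := injectVEs([]VE{{Price: 42}}, [][]byte{[]byte("realtx")})
	ves, rest, _ := parseVoteExtensions(txs)
	fmt.Println(ves[0].Price, len(rest)) // 42 1
}
```

A parsing helper of this shape is what the recovery hook calls for each element of `req.Txs`, skipping anything that fails to parse.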
+
+An example of what a pre-FinalizeBlock hook might look like is shown below:
+
+```go expandable
+app.SetPreBlocker(func(ctx sdk.Context, req *abci.RequestFinalizeBlock) error {
+    allVEs := []VE{} // store all parsed vote extensions here
+    for _, tx := range req.Txs {
+        // define a custom function that tries to parse the tx as a vote extension
+        ve, ok := parseVoteExtension(tx)
+        if !ok {
+            continue
+        }
+
+        allVEs = append(allVEs, ve)
+    }
+
+    // perform any necessary computation on the vote extensions and store the
+    // result in the cached store
+    result := compute(allVEs)
+    err := storeVEResult(ctx, result)
+    if err != nil {
+        return err
+    }
+
+    return nil
+})
+```
+
+Then, in an app's module, the application can retrieve the result of the computation
+of vote extensions from the cached store:
+
+```go expandable
+func (k Keeper) BeginBlocker(ctx context.Context) error {
+    // retrieve the result of the computation of vote extensions from the cached store
+    result, err := k.GetVEResult(ctx)
+    if err != nil {
+        return err
+    }
+
+    // use the result of the computation of vote extensions
+    k.setSomething(result)
+
+    return nil
+}
+```
diff --git a/docs/sdk/v0.53/documentation/core-concepts/build.mdx b/docs/sdk/v0.53/documentation/core-concepts/build.mdx
new file mode 100644
index 00000000..0e2720ed
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/core-concepts/build.mdx
@@ -0,0 +1,8 @@
+---
+title: Build
+---
+
+* [Building Apps](/docs/sdk/v0.53/documentation/application-framework/app-go) - The documentation in this section will guide you through the process of developing your dApp using the Cosmos SDK framework.
+* [Modules](/docs/sdk/v0.53/documentation/module-system/README) - Information about the various modules available in the Cosmos SDK: Auth, Authz, Bank, Circuit, Consensus, Distribution, Epochs, Evidence, Feegrant, Governance, Group, Mint, NFT, Protocolpool, Slashing, Staking, Upgrade, Genutil.
+* [REST API](https://docs.cosmos.network/api) - A comprehensive reference for the application programming interfaces (APIs) provided by the SDK.
diff --git a/docs/sdk/v0.53/documentation/core-concepts/learn.mdx b/docs/sdk/v0.53/documentation/core-concepts/learn.mdx
new file mode 100644
index 00000000..7c4c1d99
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/core-concepts/learn.mdx
@@ -0,0 +1,10 @@
+---
+title: Learn
+---
+
+* [Introduction](/docs/sdk/v0.53/documentation/core-concepts/overview) - Dive into the fundamentals of the Cosmos SDK with an insightful introduction, laying the groundwork for understanding blockchain development. In this section we provide a high-level overview of the SDK, then dive deeper into core concepts such as application-specific blockchains and blockchain architecture, and finally begin to explore the main components of the SDK.
+* [Beginner](/docs/sdk/v0.53/documentation/application-framework/app-anatomy) - Start your journey with beginner-friendly resources in the Cosmos SDK's "Learn" section, providing a gentle entry point for newcomers to blockchain development. Here we go into a little more detail, covering the anatomy of a Cosmos SDK application, transaction lifecycles, accounts, and lastly gas and fees.
+* [Advanced](/docs/sdk/v0.53/documentation/application-framework/baseapp) - Level up your Cosmos SDK expertise with advanced topics, tailored for experienced developers diving into intricate blockchain application development.
We cover the Cosmos SDK on a lower level as we dive into the core of the SDK with BaseApp, Transactions, Context, Node Client (Daemon), Store, Encoding, gRPC, REST, and CometBFT Endpoints, CLI, Events, Telemetry, Object-Capability Model, RunTx recovery middleware, Cosmos Blockchain Simulator, Protobuf Documentation, In-Place Store Migrations, Configuration and AutoCLI. diff --git a/docs/sdk/v0.53/documentation/core-concepts/ocap.mdx b/docs/sdk/v0.53/documentation/core-concepts/ocap.mdx new file mode 100644 index 00000000..72dfaa2c --- /dev/null +++ b/docs/sdk/v0.53/documentation/core-concepts/ocap.mdx @@ -0,0 +1,1098 @@ +--- +title: Object-Capability Model +description: >- + When thinking about security, it is good to start with a specific threat + model. Our threat model is the following: +--- + +## Intro + +When thinking about security, it is good to start with a specific threat model. Our threat model is the following: + +> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules. + +The Cosmos SDK is designed to address this threat by being the +foundation of an object capability system. + +> The structural properties of object capability systems favor +> modularity in code design and ensure reliable encapsulation in +> code implementation. +> +> These structural properties facilitate the analysis of some +> security properties of an object-capability program or operating +> system. Some of these — in particular, information flow properties +> — can be analyzed at the level of object references and +> connectivity, independent of any knowledge or analysis of the code +> that determines the behavior of the objects. +> +> As a consequence, these security properties can be established +> and maintained in the presence of new objects that contain unknown +> and possibly malicious code. 
+>
+> These structural properties stem from the two rules governing
+> access to existing objects:
+>
+> 1. An object A can send a message to B only if object A holds a
+>    reference to B.
+> 2. An object A can obtain a reference to C only if object A receives
+>    a message containing a reference to C.
+>
+> As a consequence of these two rules, an object can obtain a reference
+> to another object only through a preexisting chain of references.
+> In short, "Only connectivity begets connectivity."
+
+For an introduction to object-capabilities, see this [Wikipedia article](https://en.wikipedia.org/wiki/Object-capability_model).
+
+## Ocaps in practice
+
+The idea is to only reveal what is necessary to get the work done.
+
+For example, the following code snippet violates the object capabilities
+principle:
+
+```go
+type AppAccount struct { ... }
+
+account := &AppAccount{
+    Address: pub.Address(),
+    Coins:   sdk.Coins{sdk.NewInt64Coin("ATM", 100)},
+}
+
+sumValue := externalModule.ComputeSumValue(account)
+```
+
+The method `ComputeSumValue` implies a pure function, yet the implied
+capability of accepting a pointer value is the capability to modify that
+value. The preferred method signature should take a copy instead.
+
+```go
+sumValue := externalModule.ComputeSumValue(*account)
+```
+
+In the Cosmos SDK, you can see the application of this principle in simapp.
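The copy-versus-pointer distinction above can be demonstrated in isolation. The `AppAccount` type and both functions below are hypothetical stand-ins for this sketch, not SDK APIs:

```go
package main

import "fmt"

// AppAccount is a hypothetical account type for illustration only.
type AppAccount struct {
	Coins int64
}

// ComputeSumValue receives a copy: even if the callee misbehaves and
// mutates its argument, only the local copy changes.
func ComputeSumValue(acc AppAccount) int64 {
	sum := acc.Coins
	acc.Coins = 0 // mutation stays local to the callee
	return sum
}

// ComputeSumValuePtr receives a pointer: the capability to read the value
// is now also the capability to modify the caller's state.
func ComputeSumValuePtr(acc *AppAccount) int64 {
	sum := acc.Coins
	acc.Coins = 0 // the same misbehavior now clobbers the caller's state
	return sum
}

func main() {
	a := AppAccount{Coins: 100}
	ComputeSumValue(a)
	fmt.Println(a.Coins) // 100: the caller's account is untouched

	ComputeSumValuePtr(&a)
	fmt.Println(a.Coins) // 0: the callee zeroed the caller's balance
}
```

Passing a copy, as in the preferred `ComputeSumValue(*account)` call above, is what limits what an external module can do with the value it receives.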
+ +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + "os" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk 
"github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes 
"github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + 
authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + 
authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(appCodec, app.EpochsKeeper), + protocolpool.NewAppModule(appCodec, app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + + / At startup, after all modules have been registered, check that all prot + / annotations are correct. + protoFiles, err := proto.MergedRegistry() + if err != nil { + panic(err) +} + +err = msgservice.ValidateProtoAnnotations(protoFiles) + if err != nil { + / Once we switch to using protoreflect-based antehandlers, we might + / want to panic here instead of logging a warning. + fmt.Fprintln(os.Stderr, err.Error()) +} + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + UnorderedNonceManager: app.AccountKeeper, + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return 
app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +The following diagram shows the current dependencies between keepers. + +![Keeper dependencies](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg) diff --git a/docs/sdk/v0.53/learn/intro/overview.mdx b/docs/sdk/v0.53/documentation/core-concepts/overview.mdx similarity index 76% rename from docs/sdk/v0.53/learn/intro/overview.mdx rename to docs/sdk/v0.53/documentation/core-concepts/overview.mdx index 445b5482..e029bd5f 100644 --- a/docs/sdk/v0.53/learn/intro/overview.mdx +++ b/docs/sdk/v0.53/documentation/core-concepts/overview.mdx @@ -1,41 +1,41 @@ --- -title: "What is the Cosmos SDK" -description: "Version: v0.53" +title: What is the Cosmos SDK --- -The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is an open-source toolkit for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**. +The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is an open-source toolkit for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**. -The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. 
We further this modular approach by allowing developers to plug and play with different consensus engines this can range from the [CometBFT](https://github.com/cometbft/cometbft) or [Rollkit](https://rollkit.dev/). +The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. +We further this modular approach by allowing developers to plug and play with different consensus engines, ranging from [CometBFT](https://github.com/cometbft/cometbft) to [Rollkit](https://rollkit.dev/). SDK-based blockchains have the choice to use the predefined modules or to build their own modules. This means that developers can build a blockchain tailored to their specific use case, without having to worry about the low-level details of building a blockchain from scratch. Predefined modules include staking, governance, and token issuance, among others. -What's more, the Cosmos SDK is a capabilities-based system that allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [Object-Capability Model](/v0.53/learn/advanced/ocap). +What's more, the Cosmos SDK is a capabilities-based system that allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [Object-Capability Model](/docs/sdk/v0.53/documentation/core-concepts/ocap). Think of the SDK as a Lego kit: you can build the basic house from the instructions, or you can modify your house and add more floors, more doors, more windows. The choice is yours. 
-## What are Application-Specific Blockchains[​](#what-are-application-specific-blockchains "Direct link to What are Application-Specific Blockchains") +## What are Application-Specific Blockchains One development paradigm in the blockchain world today is that of virtual-machine blockchains like Ethereum, where development generally revolves around building decentralized applications on top of an existing blockchain as a set of smart contracts. While smart contracts can be very good for some use cases like single-use applications (e.g. ICOs), they often fall short for building complex decentralized platforms. More generally, smart contracts can be limiting in terms of flexibility, sovereignty and performance. Application-specific blockchains offer a radically different development paradigm than virtual-machine blockchains. An application-specific blockchain is a blockchain customized to operate a single application: developers have all the freedom to make the design decisions required for the application to run optimally. They can also provide better sovereignty, security and performance. -Learn more about [application-specific blockchains](/v0.53/learn/intro/why-app-specific). +Learn more about [application-specific blockchains](/docs/sdk/v0.53/documentation/core-concepts/why-app-specific). -## What is Modularity[​](#what-is-modularity "Direct link to What is Modularity") +## What is Modularity Today there is a lot of talk around modularity, and debate between monolithic and modular designs. Originally the Cosmos SDK was built with a vision of modularity in mind. Modularity is derived from splitting a blockchain into customizable layers of execution, consensus, settlement and data availability, which is what the Cosmos SDK enables. This means that developers can plug and play, making their blockchain customisable by using different software for different layers. For example, you can choose to build a vanilla chain and use the Cosmos SDK with CometBFT. 
CometBFT will be your consensus layer and the chain itself will be the settlement and execution layer. Another route could be to use the SDK with Rollkit and Celestia as your consensus and data availability layer. The benefit of modularity is that you can customize your chain to your specific use case. -## Why the Cosmos SDK[​](#why-the-cosmos-sdk "Direct link to Why the Cosmos SDK") +## Why the Cosmos SDK The Cosmos SDK is the most advanced framework for building custom modular application-specific blockchains today. Here are a few reasons why you might want to consider building your decentralized application with the Cosmos SDK: * It allows you to plug and play and customize your consensus layer. As noted above, you can use Rollkit and Celestia as your consensus and data availability layer. This offers a lot of flexibility and customisation. * Previously, the default consensus engine available within the Cosmos SDK was [CometBFT](https://github.com/cometbft/cometbft). CometBFT is the most (and only) mature BFT consensus engine in existence. It is widely used across the industry and is considered the gold standard consensus engine for building Proof-of-Stake systems. -* The Cosmos SDK is open-source and designed to make it easy to build blockchains out of composable [modules](/v0.53/build/modules). As the ecosystem of open-source Cosmos SDK modules grows, it will become increasingly easier to build complex decentralized platforms with it. +* The Cosmos SDK is open-source and designed to make it easy to build blockchains out of composable [modules](/docs/sdk/v0.50/build/modules). As the ecosystem of open-source Cosmos SDK modules grows, it will become increasingly easy to build complex decentralized platforms with it. * The Cosmos SDK is inspired by capabilities-based security, and informed by years of wrestling with blockchain state-machines. This makes the Cosmos SDK a very secure environment to build blockchains. 
* Most importantly, the Cosmos SDK has already been used to build many application-specific blockchains that are already in production. Among others, we can cite [Cosmos Hub](https://hub.cosmos.network), [IRIS Hub](https://irisnet.org), [Binance Chain](https://docs.binance.org/), [Terra](https://terra.money/) or [Kava](https://www.kava.io/). [Many more](https://cosmos.network/ecosystem) are building on the Cosmos SDK. -## Getting started with the Cosmos SDK[​](#getting-started-with-the-cosmos-sdk "Direct link to Getting started with the Cosmos SDK") +## Getting started with the Cosmos SDK -* Learn more about the [architecture of a Cosmos SDK application](/v0.53/learn/intro/sdk-app-architecture) +* Learn more about the [architecture of a Cosmos SDK application](/docs/sdk/v0.53/documentation/core-concepts/sdk-app-architecture) * Learn how to build an application-specific blockchain from scratch with the [Cosmos SDK Tutorial](https://cosmos.network/docs/tutorial) diff --git a/docs/sdk/v0.53/learn/intro/sdk-app-architecture.mdx b/docs/sdk/v0.53/documentation/core-concepts/sdk-app-architecture.mdx similarity index 62% rename from docs/sdk/v0.53/learn/intro/sdk-app-architecture.mdx rename to docs/sdk/v0.53/documentation/core-concepts/sdk-app-architecture.mdx index c889eae4..35ef53dc 100644 --- a/docs/sdk/v0.53/learn/intro/sdk-app-architecture.mdx +++ b/docs/sdk/v0.53/documentation/core-concepts/sdk-app-architecture.mdx @@ -1,9 +1,9 @@ --- -title: "Blockchain Architecture" -description: "Version: v0.53" +title: Blockchain Architecture +description: 'At its core, a blockchain is a replicated deterministic state machine.' --- -## State machine[​](#state-machine "Direct link to State machine") +## State machine At its core, a blockchain is a [replicated deterministic state machine](https://en.wikipedia.org/wiki/State_machine_replication). 
@@ -11,48 +11,80 @@ A state machine is a computer science concept whereby a machine can have multipl Given a state S and a transaction T, the state machine will return a new state S'. -``` -+--------+ +--------+| | | || S +---------------->+ S' || | apply(T) | |+--------+ +--------+ +```mermaid expandable +flowchart LR + S["State S"] -->|"apply(T)"| S'["State S'"] + style S fill:#f9f,stroke:#333,stroke-width:2px + style S' fill:#9ff,stroke:#333,stroke-width:2px ``` In practice, the transactions are bundled in blocks to make the process more efficient. Given a state S and a block of transactions B, the state machine will return a new state S'. -``` -+--------+ +--------+| | | || S +----------------------------> | S' || | For each T in B: apply(T) | |+--------+ +--------+ +```mermaid expandable +flowchart LR + S["State S"] -->|"For each T in Block B: apply(T)"| S'["State S'"] + style S fill:#f9f,stroke:#333,stroke-width:2px + style S' fill:#9ff,stroke:#333,stroke-width:2px ``` In a blockchain context, the state machine is deterministic. This means that if a node is started at a given state and replays the same sequence of transactions, it will always end up with the same final state. The Cosmos SDK gives developers maximum flexibility to define the state of their application, transaction types and state transition functions. The process of building state-machines with the Cosmos SDK will be described more in depth in the following sections. But first, let us see how the state-machine is replicated using **CometBFT**. -## CometBFT[​](#cometbft "Direct link to CometBFT") +## CometBFT Thanks to the Cosmos SDK, developers just have to define the state machine, and [*CometBFT*](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) will handle replication over the network for them. 
-``` - ^ +-------------------------------+ ^ | | | | Built with Cosmos SDK | | State-machine = Application | | | | | v | +-------------------------------+ | | | ^Blockchain node | | Consensus | | | | | | | +-------------------------------+ | CometBFT | | | | | | Networking | | | | | | v +-------------------------------+ v +```mermaid expandable +flowchart TB + subgraph Node["Blockchain Node"] + subgraph SDK["Built with Cosmos SDK"] + App["State-machine = Application"] + end + subgraph CBT["CometBFT"] + Consensus["Consensus"] + Network["Networking"] + end + end + + App -.->|"ABCI"| Consensus + Consensus --> Network + + style App fill:#e1f5e1,stroke:#4caf50,stroke-width:2px + style Consensus fill:#e3f2fd,stroke:#2196f3,stroke-width:2px + style Network fill:#e3f2fd,stroke:#2196f3,stroke-width:2px + style SDK fill:#f5f5f5,stroke:#333,stroke-width:1px + style CBT fill:#f5f5f5,stroke:#333,stroke-width:1px + style Node fill:#fafafa,stroke:#333,stroke-width:2px ``` [CometBFT](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) is an application-agnostic engine that is responsible for handling the *networking* and *consensus* layers of a blockchain. In practice, this means that CometBFT is responsible for propagating and ordering transaction bytes. CometBFT relies on an eponymous Byzantine-Fault-Tolerant (BFT) algorithm to reach consensus on the order of transactions. The CometBFT [consensus algorithm](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft#consensus-overview) works with a set of special nodes called *Validators*. Validators are responsible for adding blocks of transactions to the blockchain. At any given block, there is a validator set V. A validator in V is chosen by the algorithm to be the proposer of the next block. This block is considered valid if more than two thirds of V signed a `prevote` and a `precommit` on it, and if all the transactions that it contains are valid. 
The validator set can be changed by rules written in the state-machine. -## ABCI[​](#abci "Direct link to ABCI") +## ABCI CometBFT passes transactions to the application through an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/), which the application must implement. -``` - +---------------------+ | | | Application | | | +--------+---+--------+ ^ | | | ABCI | v +--------+---+--------+ | | | | | CometBFT | | | | | +---------------------+ +```mermaid expandable +flowchart TB + App["Application
(State Machine)"] + CBT["CometBFT
(Consensus & Networking)"] + + App <-->|"ABCI
Interface"| CBT + + style App fill:#e1f5e1,stroke:#4caf50,stroke-width:2px + style CBT fill:#e3f2fd,stroke:#2196f3,stroke-width:2px ``` Note that **CometBFT only handles transaction bytes**. It has no knowledge of what these bytes mean. All CometBFT does is order these transaction bytes deterministically. CometBFT passes the bytes to the application via the ABCI, and expects a return code to inform it if the messages contained in the transactions were successfully processed or not. Here are the most important messages of the ABCI: -* `CheckTx`: When a transaction is received by CometBFT, it is passed to the application to check if a few basic requirements are met. `CheckTx` is used to protect the mempool of full-nodes against spam transactions. . A special handler called the [`AnteHandler`](/v0.53/learn/beginner/gas-fees#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.cometbft.com/v0.37/spec/p2p/legacy-docs/messages/mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet. -* `DeliverTx`: When a [valid block](https://docs.cometbft.com/v0.37/spec/core/data_structures#block) is received by CometBFT, each transaction in the block is passed to the application via `DeliverTx` in order to be processed. It is during this stage that the state transitions occur. The `AnteHandler` executes again, along with the actual [`Msg` service](/v0.53/build/building-modules/msg-services) RPC for each message in the transaction. +* `CheckTx`: When a transaction is received by CometBFT, it is passed to the application to check if a few basic requirements are met. `CheckTx` is used to protect the mempool of full-nodes against spam transactions. . 
A special handler called the [`AnteHandler`](/docs/sdk/v0.53/documentation/protocol-development/gas-fees#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.cometbft.com/v0.37/spec/p2p/legacy-docs/messages/mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet. +* `DeliverTx`: When a [valid block](https://docs.cometbft.com/v0.37/spec/core/data_structures#block) is received by CometBFT, each transaction in the block is passed to the application via `DeliverTx` in order to be processed. It is during this stage that the state transitions occur. The `AnteHandler` executes again, along with the actual [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services) RPC for each message in the transaction. * `BeginBlock`/`EndBlock`: These messages are executed at the beginning and the end of each block, whether the block contains transactions or not. It is useful to trigger automatic execution of logic. Proceed with caution though, as computationally expensive loops could slow down your blockchain, or even freeze it if the loop is infinite. Find a more detailed view of the ABCI methods from the [CometBFT docs](https://docs.cometbft.com/v0.37/spec/abci/). -Any application built on CometBFT needs to implement the ABCI interface in order to communicate with the underlying local CometBFT engine. Fortunately, you do not have to implement the ABCI interface. The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](/v0.53/learn/intro/sdk-design#baseapp). +Any application built on CometBFT needs to implement the ABCI interface in order to communicate with the underlying local CometBFT engine. Fortunately, you do not have to implement the ABCI interface. 
The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](/docs/sdk/v0.53/documentation/core-concepts/sdk-design#baseapp). diff --git a/docs/sdk/v0.53/documentation/core-concepts/sdk-design.mdx b/docs/sdk/v0.53/documentation/core-concepts/sdk-design.mdx new file mode 100644 index 00000000..68f11af6 --- /dev/null +++ b/docs/sdk/v0.53/documentation/core-concepts/sdk-design.mdx @@ -0,0 +1,1071 @@ +--- +title: Main Components of the Cosmos SDK +--- + +The Cosmos SDK is a framework that facilitates the development of secure state-machines on top of CometBFT. At its core, the Cosmos SDK is a boilerplate implementation of the [ABCI](/docs/sdk/v0.53/documentation/core-concepts/sdk-app-architecture#abci) in Golang. It comes with a [`multistore`](/docs/sdk/v0.53/documentation/state-storage/store#multistore) to persist data and a [`router`](/docs/sdk/v0.53/documentation/application-framework/baseapp#routing) to handle transactions. + +Here is a simplified view of how transactions are handled by an application built on top of the Cosmos SDK when transferred from CometBFT via `DeliverTx`: + +1. Decode `transactions` received from the CometBFT consensus engine (remember that CometBFT only deals with `[]bytes`). +2. Extract `messages` from `transactions` and do basic sanity checks. +3. Route each message to the appropriate module so that it can be processed. +4. Commit state changes. + +## `baseapp` + +`baseapp` is the boilerplate implementation of a Cosmos SDK application. It comes with an implementation of the ABCI to handle the connection with the underlying consensus engine. Typically, a Cosmos SDK application extends `baseapp` by embedding it in [`app.go`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#core-application-file). 
+ +Here is an example of this from `simapp`, the Cosmos SDK demonstration app: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" 
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +The goal of `baseapp` is to provide a secure interface between the store and the extensible state machine while defining as little about the state machine as possible (staying true to the ABCI). + +For more on `baseapp`, please click [here](/docs/sdk/v0.53/documentation/application-framework/baseapp). + +## Multistore + +The Cosmos SDK provides a [`multistore`](/docs/sdk/v0.53/documentation/state-storage/store#multistore) for persisting state. The multistore allows developers to declare any number of [`KVStores`](/docs/sdk/v0.53/documentation/state-storage/store#base-layer-kvstores). These `KVStores` only accept the `[]byte` type as value and therefore any custom structure needs to be marshalled using [a codec](/docs/sdk/v0.53/documentation/protocol-development/encoding) before being stored. + +The multistore abstraction is used to divide the state in distinct compartments, each managed by its own module. For more on the multistore, click [here](/docs/sdk/v0.53/documentation/state-storage/store#multistore) + +## Modules + +The power of the Cosmos SDK lies in its modularity. Cosmos SDK applications are built by aggregating a collection of interoperable modules. Each module defines a subset of the state and contains its own message/transaction processor, while the Cosmos SDK is responsible for routing each message to its respective module. 
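
As the Multistore section above notes, `KVStores` only accept `[]byte` values, so every custom structure must round-trip through a codec before it is persisted. A toy Go sketch of that constraint follows — the `kvStore` map and the JSON encoding are stand-ins chosen for illustration; the SDK itself uses `storetypes.KVStore` and a protobuf `codec.Codec`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// kvStore is a toy stand-in for an SDK KVStore: keys and values are raw bytes.
type kvStore map[string][]byte

// Balance is a hypothetical custom structure; it must be marshalled
// to []byte before it can be stored.
type Balance struct {
	Denom  string `json:"denom"`
	Amount int64  `json:"amount"`
}

// saveBalance marshals a Balance with the codec and writes the bytes under key.
func saveBalance(store kvStore, key string, b Balance) error {
	bz, err := json.Marshal(b)
	if err != nil {
		return err
	}
	store[key] = bz
	return nil
}

// loadBalance reads the raw bytes back and unmarshals them with the same codec.
func loadBalance(store kvStore, key string) (Balance, error) {
	var b Balance
	err := json.Unmarshal(store[key], &b)
	return b, err
}

func main() {
	store := kvStore{}
	if err := saveBalance(store, "balances/alice", Balance{Denom: "atom", Amount: 1000}); err != nil {
		panic(err)
	}
	b, err := loadBalance(store, "balances/alice")
	if err != nil {
		panic(err)
	}
	fmt.Println(b.Denom, b.Amount) // prints "atom 1000"
}
```

The point is that the store never sees the `Balance` type — only the bytes the codec produced, which is why every stored structure needs a deterministic encoding.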
+ +Here is a simplified view of how a transaction is processed by the application of each full-node when it is received in a valid block: + +```mermaid expandable + flowchart TD + A[Transaction relayed from the full-node's CometBFT engine to the node's application via DeliverTx] --> B[APPLICATION] + B -->|"Using baseapp's methods: Decode the Tx, extract and route the message(s)"| C[Message routed to the correct module to be processed] + C --> D1[AUTH MODULE] + C --> D2[BANK MODULE] + C --> D3[STAKING MODULE] + C --> D4[GOV MODULE] + D1 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"] + D2 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"] + D3 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"] + D4 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"] +``` + +Each module can be seen as a little state-machine. Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](/docs/sdk/v0.53/documentation/core-concepts/ocap). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities. + +Cosmos SDK modules are defined in the `x/` folder of the Cosmos SDK. 
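
The keeper pattern described above can be sketched in a few lines of Go — all types here are hypothetical illustrations, not SDK types. The idea is that a module exposes a narrow interface over its state instead of handing out raw store access:

```go
package main

import "fmt"

// BankKeeper is the full keeper: it owns its module's state.
type BankKeeper struct {
	balances map[string]int64
}

func (k *BankKeeper) SendCoins(from, to string, amt int64) {
	k.balances[from] -= amt
	k.balances[to] += amt
}

func (k *BankKeeper) Balance(addr string) int64 { return k.balances[addr] }

// SendKeeper is the restricted capability handed to other modules:
// they can move coins but cannot mutate balances arbitrarily.
type SendKeeper interface {
	SendCoins(from, to string, amt int64)
}

// StakingModule receives only the narrow interface, not the full keeper.
type StakingModule struct {
	bank SendKeeper
}

func (m StakingModule) Delegate(addr string, amt int64) {
	m.bank.SendCoins(addr, "staking_pool", amt)
}

func main() {
	bank := &BankKeeper{balances: map[string]int64{"alice": 100, "staking_pool": 0}}
	staking := StakingModule{bank: bank}
	staking.Delegate("alice", 40)
	fmt.Println(bank.Balance("alice"), bank.Balance("staking_pool")) // prints "60 40"
}
```

Because `StakingModule` only holds a `SendKeeper`, it has exactly the capability it was granted and nothing more — the essence of the object-capabilities approach.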
Some core modules include: + +* `x/auth`: Used to manage accounts and signatures. +* `x/bank`: Used to enable tokens and token transfers. +* `x/staking` + `x/slashing`: Used to build Proof-of-Stake blockchains. + +In addition to the already existing modules in `x/`, which anyone can use in their app, the Cosmos SDK lets you build your own custom modules. You can check an [example of that in the tutorial](https://tutorials.cosmos.network/). diff --git a/docs/sdk/v0.53/learn/intro/why-app-specific.mdx b/docs/sdk/v0.53/documentation/core-concepts/why-app-specific.mdx similarity index 73% rename from docs/sdk/v0.53/learn/intro/why-app-specific.mdx rename to docs/sdk/v0.53/documentation/core-concepts/why-app-specific.mdx index 734b8f29..b27934b3 100644 --- a/docs/sdk/v0.53/learn/intro/why-app-specific.mdx +++ b/docs/sdk/v0.53/documentation/core-concepts/why-app-specific.mdx @@ -1,69 +1,87 @@ --- -title: "Application-Specific Blockchains" -description: "Version: v0.53" +title: Application-Specific Blockchains --- - - This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts. - +## Synopsis -## What are application-specific blockchains[​](#what-are-application-specific-blockchains "Direct link to What are application-specific blockchains") +This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts. + +## What are application-specific blockchains Application-specific blockchains are blockchains customized to operate a single application. Instead of building a decentralized application on top of an underlying blockchain like Ethereum, developers build their own blockchain from the ground up. This means building a full-node client, a light-client, and all the necessary interfaces (CLI, REST, ...) to interact with the nodes. 
-``` - ^ +-------------------------------+ ^ | | | | Built with Cosmos SDK | | State-machine = Application | | | | | v | +-------------------------------+ | | | ^Blockchain node | | Consensus | | | | | | | +-------------------------------+ | CometBFT | | | | | | Networking | | | | | | v +-------------------------------+ v +```mermaid expandable +flowchart TB + subgraph Node["Blockchain Node"] + subgraph SDK["Built with Cosmos SDK"] + App["State-machine = Application"] + end + subgraph CBT["CometBFT"] + Consensus["Consensus"] + Network["Networking"] + end + end + + App -.->|"ABCI"| Consensus + Consensus --> Network + + style App fill:#e1f5e1,stroke:#4caf50,stroke-width:2px + style Consensus fill:#e3f2fd,stroke:#2196f3,stroke-width:2px + style Network fill:#e3f2fd,stroke:#2196f3,stroke-width:2px + style SDK fill:#f5f5f5,stroke:#333,stroke-width:1px + style CBT fill:#f5f5f5,stroke:#333,stroke-width:1px + style Node fill:#fafafa,stroke:#333,stroke-width:2px ``` -## What are the shortcomings of Smart Contracts[​](#what-are-the-shortcomings-of-smart-contracts "Direct link to What are the shortcomings of Smart Contracts") +## What are the shortcomings of Smart Contracts Virtual-machine blockchains like Ethereum addressed the demand for more programmability back in 2014. At the time, the options available for building decentralized applications were quite limited. Most developers would build on top of the complex and limited Bitcoin scripting language, or fork the Bitcoin codebase which was hard to work with and customize. Virtual-machine blockchains came in with a new value proposition. Their state-machine incorporates a virtual-machine that is able to interpret turing-complete programs called Smart Contracts. These Smart Contracts are very good for use cases like one-time events (e.g. ICOs), but they can fall short for building complex decentralized platforms. 
Here is why: -* Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails. -* Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely restrain **performance**. And even if the state-machine were to be split in multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of 10x in performance when the virtual-machine is removed). -* Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralized application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it. +- Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. 
For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails. +- Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely restrain **performance**. And even if the state-machine were to be split in multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of 10x in performance when the virtual-machine is removed). +- Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralized application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it. Application-Specific Blockchains are designed to address these shortcomings. 
-## Application-Specific Blockchains Benefits[​](#application-specific-blockchains-benefits "Direct link to Application-Specific Blockchains Benefits") +## Application-Specific Blockchains Benefits -### Flexibility[​](#flexibility "Direct link to Flexibility") +### Flexibility Application-specific blockchains give maximum flexibility to developers: -* In Cosmos blockchains, the state-machine is typically connected to the underlying consensus engine via an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/). This interface can be wrapped in any programming language, meaning developers can build their state-machine in the programming language of their choice. +- In Cosmos blockchains, the state-machine is typically connected to the underlying consensus engine via an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/). This interface can be wrapped in any programming language, meaning developers can build their state-machine in the programming language of their choice. -* Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...). Typically the choice will be made based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in Javascript, ...). +- Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...). Typically the choice will be made based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in Javascript, ...). -* The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. 
Today, only CometBFT is production-ready, but in the future other consensus engines are expected to emerge. +- The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. Today, only CometBFT is production-ready, but in the future other consensus engines are expected to emerge. -* Even when they settle for a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements in their pristine forms. +- Even when they settle for a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements in their pristine forms. -* Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...). +- Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...). -* Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains. +- Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains. 
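
The automatic-execution point above — logic triggered at the beginning and end of each block — can be sketched with a toy block loop in Go. The `Module` interface and `MintModule` here are hypothetical illustrations; in the Cosmos SDK this wiring happens through `SetBeginBlocker`/`SetEndBlocker` and the module manager:

```go
package main

import "fmt"

// Module is a hypothetical interface for per-block hooks; the SDK's module
// manager calls analogous BeginBlock/EndBlock methods on each module.
type Module interface {
	BeginBlock(height int64)
	EndBlock(height int64)
}

// MintModule accrues inflation rewards at the start of every block —
// no user transaction triggers it.
type MintModule struct{ Minted int64 }

func (m *MintModule) BeginBlock(height int64) { m.Minted += 5 }
func (m *MintModule) EndBlock(height int64)   {}

// runBlock is a toy block loop: the hooks fire on every block, automatically.
func runBlock(height int64, mods []Module) {
	for _, mod := range mods {
		mod.BeginBlock(height)
	}
	// ...transactions would execute here...
	for _, mod := range mods {
		mod.EndBlock(height)
	}
}

func main() {
	mint := &MintModule{}
	for h := int64(1); h <= 3; h++ {
		runBlock(h, []Module{mint})
	}
	fmt.Println(mint.Minted) // prints "15"
}
```

Here `MintModule` updates state every block with no transaction involved — something a smart contract on a virtual-machine blockchain cannot do natively.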
The list above contains a few examples that show how much flexibility application-specific blockchains give to developers. The goal of Cosmos and the Cosmos SDK is to make developer tooling as generic and composable as possible, so that each part of the stack can be forked, tweaked and improved without losing compatibility. As the community grows, more alternatives for each of the core building blocks will emerge, giving more options to developers. -### Performance[​](#performance "Direct link to Performance") +### Performance Decentralized applications built with Smart Contracts are inherently capped in performance by the underlying environment. For a decentralized application to optimise performance, it needs to be built as an application-specific blockchain. Next are some of the benefits an application-specific blockchain brings in terms of performance: -* Developers of application-specific blockchains can choose to operate with a novel consensus engine such as CometBFT BFT. Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput. -* An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage. -* Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them. +- Developers of application-specific blockchains can choose to operate with a novel consensus engine such as CometBFT.
Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput. +- An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage. +- Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them. -### Security[​](#security "Direct link to Security") +### Security Security is hard to quantify, and greatly varies from platform to platform. That said here are some important benefits an application-specific blockchain can bring in terms of security: -* Developers can choose proven programming languages like Go when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature. -* Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. They can use their own custom cryptography, and rely on well-audited crypto libraries. -* Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application. +- Developers can choose proven programming languages like Go when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature. +- Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. 
They can use their own custom cryptography, and rely on well-audited crypto libraries. +- Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application. -### Sovereignty[​](#sovereignty "Direct link to Sovereignty") +### Sovereignty One of the major benefits of application-specific blockchains is sovereignty. A decentralized application is an ecosystem that involves many actors: users, developers, third-party services, and more. When developers build on virtual-machine blockchain where many decentralized applications coexist, the community of the application is different than the community of the underlying blockchain, and the latter supersedes the former in the governance process. If there is a bug or if a new feature is needed, stakeholders of the application have very little leeway to upgrade the code. If the community of the underlying blockchain refuses to act, nothing can happen. diff --git a/docs/sdk/v0.53/documentation/legacy/adr-overview.mdx b/docs/sdk/v0.53/documentation/legacy/adr-overview.mdx new file mode 100644 index 00000000..f4ff3394 --- /dev/null +++ b/docs/sdk/v0.53/documentation/legacy/adr-overview.mdx @@ -0,0 +1,107 @@ +--- +title: Architecture Decision Records (ADR) +description: Historical overview of the Cosmos SDK's architecture decision process +--- + +## Overview + +Architecture Decision Records (ADRs) were the primary mechanism used by the Cosmos SDK team to document significant architectural decisions from 2019 to 2023. While the ADR process is no longer actively used for new decisions, these historical documents remain valuable references for understanding the design rationale behind many core SDK features. + +An ADR captured a single architectural decision, addressing functional or non-functional requirements that were architecturally significant. 
These records formed the project's decision log and served as a form of Architectural Knowledge Management. + +## What Were ADRs? + +ADRs documented implementation and design decisions that had already been discussed and agreed upon by the team. Unlike RFCs (Request for Comments) which facilitated discussion, ADRs recorded decisions that had already reached consensus through either: + +- Prior RFC discussions +- Synchronous team meetings +- Working group sessions + +Each ADR provided: + +- **Context** on the relevant goals and current state +- **Proposed changes** to achieve those goals +- **Analysis** of pros and cons +- **References** to related work +- **Implementation status** and changelog + +## The ADR Lifecycle + +The ADR process followed a defined lifecycle: + +1. **Consensus Building**: Every ADR started with either an RFC or discussion where consensus was reached +2. **Documentation**: A pull request was created using the ADR template +3. **Review**: Project stakeholders reviewed the proposed architecture +4. **Acceptance**: ADRs were merged regardless of outcome (accepted, rejected, or abandoned) to maintain historical record +5. **Supersession**: ADRs could be superseded by newer decisions as the project evolved + +### Status Classifications + +ADRs used a two-component status system: + +- **Consensus Status**: Draft → Proposed → Last Call → Accepted/Rejected → Superseded +- **Implementation Status**: Implemented or Not Implemented + +## Historical ADR Index + +The complete collection of ADRs can be found in the [Cosmos SDK repository](https://github.com/cosmos/cosmos-sdk/tree/main/docs/architecture). 
Below are some of the most significant ADRs that shaped the current SDK architecture: + +### Core Architecture + +- [ADR 002: SDK Documentation Structure](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Established documentation organization +- [ADR 057: App Wiring](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Introduced dependency injection system +- [ADR 063: Core Module API](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Defined module interface standards + +### State Management + +- [ADR 004: Split Denomination Keys](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Optimized denomination storage +- [ADR 062: Collections State Layer](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Simplified state management +- [ADR 065: Store V2](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Next generation storage layer + +### Protocol & Encoding + +- [ADR 019: Protocol Buffer State Encoding](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Protobuf adoption for state +- [ADR 020: Protocol Buffer Transaction Encoding](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Protobuf for transactions +- [ADR 027: Deterministic Protobuf Serialization](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Ensuring determinism + +### Module System + +- [ADR 030: Authorization Module](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Authorization framework +- [ADR 029: Fee Grant Module](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Fee abstraction +- [ADR 031: Protobuf Msg Services](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Message service pattern + +### Consensus & ABCI + +- [ADR 060: ABCI 1.0](/docs/sdk/next/documentation/legacy/adr-comprehensive) - ABCI 1.0 integration +- [ADR 064: ABCI 2.0](/docs/sdk/next/documentation/legacy/adr-comprehensive) - ABCI 2.0 planning + +### Developer Experience + +- [ADR 058: Auto-Generated CLI](/docs/sdk/next/documentation/legacy/adr-comprehensive) - AutoCLI
system +- [ADR 055: ORM](/docs/sdk/next/documentation/legacy/adr-comprehensive) - Object-relational mapping + +## Why ADRs Matter + +Although the formal ADR process is no longer active, these documents remain valuable because they: + +1. **Preserve Design Rationale**: They explain not just what was built, but why specific design choices were made +2. **Document Trade-offs**: They capture the pros and cons considered for each decision +3. **Show Evolution**: They demonstrate how the SDK architecture evolved over time +4. **Provide Context**: They help developers understand the historical context of current features + +## Current Decision Process + +The Cosmos SDK team has transitioned to more agile decision-making processes, utilizing: + +- GitHub Discussions for community input +- Pull requests for implementation proposals +- Working groups for synchronous collaboration +- Direct implementation with iterative refinement + +For developers interested in understanding specific SDK components, the historical ADRs remain an excellent resource for diving deep into the architectural reasoning behind the current implementation.
+ +## Additional Resources + +- [Complete ADR Archive](https://github.com/cosmos/cosmos-sdk/tree/main/docs/architecture) - Full collection of all historical ADRs +- [ADR Template](/docs/common/pages/adr-comprehensive#adr-template-adr-template) - The template that was used for creating ADRs +- [ADR Process Documentation](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/PROCESS.md) - Detailed process that was followed diff --git a/docs/sdk/v0.53/documentation/module-system/README.mdx b/docs/sdk/v0.53/documentation/module-system/README.mdx new file mode 100644 index 00000000..ccd1f9ff --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/README.mdx @@ -0,0 +1,64 @@ +--- +title: List of Modules +description: >- + Here are some production-grade modules that can be used in Cosmos SDK + applications, along with their respective documentation: +--- + +Here are some production-grade modules that can be used in Cosmos SDK applications, along with their respective documentation: + +## Essential Modules + +Essential modules include functionality that *must* be included in your Cosmos SDK blockchain. +These modules provide the core behaviors that are needed for users and operators such as balance tracking, +proof-of-stake capabilities and governance. + +* [Auth](/docs/sdk/v0.53/documentation/module-system/auth) - Authentication of accounts and transactions for Cosmos SDK applications. +* [Bank](/docs/sdk/v0.53/documentation/module-system/bank) - Token transfer functionalities. +* [Circuit](/docs/sdk/v0.53/documentation/module-system/circuit) - Circuit breaker module for pausing messages. +* [Consensus](/docs/sdk/v0.53/documentation/module-system/consensus) - Consensus module for modifying CometBFT's ABCI consensus params. +* [Distribution](/docs/sdk/v0.53/documentation/module-system/distribution) - Fee distribution, and staking token provision distribution. 
+* [Evidence](/docs/sdk/v0.53/documentation/module-system/evidence) - Evidence handling for double signing, misbehaviour, etc. +* [Governance](/docs/sdk/v0.53/documentation/module-system/gov) - On-chain proposals and voting. +* [Genutil](/docs/sdk/v0.53/documentation/module-system/genutil) - Genesis utilities for the Cosmos SDK. +* [Mint](/docs/sdk/v0.53/documentation/module-system/mint) - Creation of new units of the staking token. +* [Slashing](/docs/sdk/v0.53/documentation/module-system/slashing) - Validator punishment mechanisms. +* [Staking](/docs/sdk/v0.53/documentation/module-system/staking) - Proof-of-Stake layer for public blockchains. +* [Upgrade](/docs/sdk/v0.53/documentation/module-system/upgrade) - Software upgrade handling and coordination. + +## Supplementary Modules + +Supplementary modules are maintained in the Cosmos SDK but are not necessary for +the core functionality of your blockchain. They can be thought of as ways to extend the +capabilities of your blockchain or further specialize it. + +* [Authz](/docs/sdk/v0.53/documentation/module-system/authz) - Authorization for accounts to perform actions on behalf of other accounts. +* [Epochs](/docs/sdk/v0.53/documentation/module-system/epochs) - Lets SDK modules register logic to be executed on timed tickers. +* [Feegrant](/docs/sdk/v0.53/documentation/module-system/feegrant) - Grant fee allowances for executing transactions. +* [Group](/docs/sdk/v0.53/documentation/module-system/group) - Allows for the creation and management of on-chain multisig accounts. +* [NFT](/docs/sdk/v0.53/documentation/module-system/nft) - NFT module implemented based on [ADR43](https://docs.cosmos.network/main/architecture/adr-043-nft-module.html). +* [ProtocolPool](/docs/sdk/v0.53/documentation/module-system/protocolpool) - Extended management of community pool functionality. + +## Deprecated Modules + +The following modules are deprecated.
They will no longer be maintained and will eventually be removed +in an upcoming release of the Cosmos SDK, per our [release process](https://github.com/cosmos/cosmos-sdk/blob/main/RELEASE_PROCESS.md). + +* [Crisis](/docs/sdk/v0.53/documentation/module-system/crisis) - *Deprecated.* Halting the blockchain under certain circumstances (e.g. if an invariant is broken). +* [Params](/docs/sdk/v0.53/documentation/module-system/params) - *Deprecated.* Globally available parameter store. + +To learn more about the process of building modules, visit the [building modules reference documentation](https://docs.cosmos.network/main/building-modules/intro). + +## IBC + +The IBC module for the SDK is maintained by the IBC Go team in its [own repository](https://github.com/cosmos/ibc-go). + +Additionally, since v0.50 the [capability module](https://github.com/cosmos/ibc-go/tree/fdd664698d79864f1e00e147f9879e58497b5ef1/modules/capability) is also maintained by the IBC Go team in its [own repository](https://github.com/cosmos/ibc-go/tree/fdd664698d79864f1e00e147f9879e58497b5ef1/modules/capability). + +## CosmWasm + +The CosmWasm module enables smart contracts; learn more at its [documentation site](https://book.cosmwasm.com/) or visit [the repository](https://github.com/CosmWasm/cosmwasm). + +## EVM + +Read more about writing smart contracts with Solidity at the official [`evm` documentation page](https://evm.cosmos.network/). diff --git a/docs/sdk/v0.53/documentation/module-system/auth.mdx b/docs/sdk/v0.53/documentation/module-system/auth.mdx new file mode 100644 index 00000000..486f6d54 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/auth.mdx @@ -0,0 +1,738 @@ +--- +title: '`x/auth`' +description: This document specifies the auth module of the Cosmos SDK. +--- + +## Abstract + +This document specifies the auth module of the Cosmos SDK.
+ +The auth module is responsible for specifying the base transaction and account types +for an application, since the SDK itself is agnostic to these particulars. It contains +the middlewares, where all basic transaction validity checks (signatures, nonces, auxiliary fields) +are performed, and exposes the account keeper, which allows other modules to read, write, and modify accounts. + +This module is used in the Cosmos Hub. + +## Contents + +* [Concepts](#concepts) + * [Gas & Fees](#gas--fees) +* [State](#state) + * [Accounts](#accounts) +* [AnteHandlers](#antehandlers) +* [Keepers](#keepers) + * [Account Keeper](#account-keeper) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +**Note:** The auth module is different from the [authz module](/docs/sdk/v0.53/documentation/module-system/authz). + +The differences are: + +* `auth` - authentication of accounts and transactions for Cosmos SDK applications and is responsible for specifying the base transaction and account types. +* `authz` - authorization for accounts to perform actions on behalf of other accounts and enables a granter to grant authorizations to a grantee that allows the grantee to execute messages on behalf of the granter. + +### Gas & Fees + +Fees serve two purposes for an operator of the network. + +Fees limit the growth of the state stored by every full node and allow for +general purpose censorship of transactions of little economic value. Fees +are best suited as an anti-spam mechanism where validators are disinterested in +the use of the network and identities of users. + +Fees are determined by the gas limits and gas prices transactions provide, where +`fees = ceil(gasLimit * gasPrices)`. Txs incur gas costs for all state reads/writes, +signature verification, as well as costs proportional to the tx size. Operators +should set minimum gas prices when starting their nodes.
They must set the unit +costs of gas in each token denomination they wish to support: + +`simd start ... --minimum-gas-prices=0.00001stake,0.05photinos` + +When adding transactions to the mempool or gossiping transactions, validators check +if the transaction's gas prices, which are determined by the provided fees, meet +any of the validator's minimum gas prices. In other words, a transaction must +provide a fee of at least one denomination that matches a validator's minimum +gas price. + +CometBFT does not currently provide fee-based mempool prioritization, and fee-based +mempool filtering is local to each node and not part of consensus. But with +minimum gas prices set, such a mechanism could be implemented by node operators. + +Because the market value of tokens will fluctuate, validators are expected to +dynamically adjust their minimum gas prices to a level that would encourage the +use of the network. + +## State + +### Accounts + +Accounts contain authentication information for a uniquely identified external user of an SDK blockchain, +including public key, address, and account number / sequence number for replay protection. For efficiency, +since account balances must also be fetched to pay fees, account structs also store the balance of a user +as `sdk.Coins`. + +Accounts are exposed externally as an interface, and stored internally as +either a base account or vesting account. Module clients wishing to add more +account types may do so. + +* `0x01 | Address -> ProtocolBuffer(account)` + +#### Account Interface + +The account interface exposes methods to read and write standard account information. +Note that all of these methods operate on an account struct conforming to the +interface - in order to write the account to the store, the account keeper will +need to be used. + +```go expandable +/ AccountI is an interface used to store coins at a given address within state.
+/ It presumes a notion of sequence numbers for replay protection, +/ a notion of account numbers for replay protection for previously pruned accounts, +/ and a pubkey for authentication purposes. +/ +/ Many complex conditions can be used in the concrete struct which implements AccountI. +type AccountI interface { + proto.Message + + GetAddress() + +sdk.AccAddress + SetAddress(sdk.AccAddress) + +error / errors if already set. + + GetPubKey() + +crypto.PubKey / can return nil. + SetPubKey(crypto.PubKey) + +error + + GetAccountNumber() + +uint64 + SetAccountNumber(uint64) + +error + + GetSequence() + +uint64 + SetSequence(uint64) + +error + + / Ensure that account implements stringer + String() + +string +} +``` + +##### Base Account + +A base account is the simplest and most common account type, which just stores all requisite +fields directly in a struct. + +```protobuf +/ BaseAccount defines a base account type. It contains all the necessary fields +/ for basic account functionality. Any custom account type should extend this +/ type for additional functionality (e.g. vesting). +message BaseAccount { + string address = 1; + google.protobuf.Any pub_key = 2; + uint64 account_number = 3; + uint64 sequence = 4; +} +``` + +### Vesting Account + +See [Vesting](https://docs.cosmos.network/main/modules/auth/vesting/). + +## AnteHandlers + +The `x/auth` module presently has no transaction handlers of its own, but does expose the special `AnteHandler`, used for performing basic validity checks on a transaction, such that it could be thrown out of the mempool. +The `AnteHandler` can be seen as a set of decorators that check transactions within the current context, per [ADR 010](docs/sdk/next/documentation/legacy/adr-comprehensive). + +Note that the `AnteHandler` is called on both `CheckTx` and `DeliverTx`, as CometBFT proposers presently have the ability to include in their proposed block transactions which fail `CheckTx`. 
+ +### Decorators + +The auth module provides `AnteDecorator`s that are recursively chained together into a single `AnteHandler` in the following order: + +* `SetUpContextDecorator`: Sets the `GasMeter` in the `Context` and wraps the next `AnteHandler` with a defer clause to recover from any downstream `OutOfGas` panics in the `AnteHandler` chain to return an error with information on gas provided and gas used. + +* `RejectExtensionOptionsDecorator`: Rejects all extension options which can optionally be included in protobuf transactions. + +* `MempoolFeeDecorator`: Checks if the `tx` fee is above the local mempool `minFee` parameter during `CheckTx`. + +* `ValidateBasicDecorator`: Calls `tx.ValidateBasic` and returns any non-nil error. + +* `TxTimeoutHeightDecorator`: Checks for a `tx` height timeout. + +* `ValidateMemoDecorator`: Validates the `tx` memo with application parameters and returns any non-nil error. + +* `ConsumeGasTxSizeDecorator`: Consumes gas proportional to the `tx` size based on application parameters. + +* `DeductFeeDecorator`: Deducts the `FeeAmount` from the first signer of the `tx`. If the `x/feegrant` module is enabled and a fee granter is set, it deducts fees from the fee granter account. + +* `SetPubKeyDecorator`: Sets the pubkey for any of the `tx`'s signers that does not already have its corresponding pubkey saved in the state machine and in the current context. + +* `ValidateSigCountDecorator`: Validates the number of signatures in the `tx` based on application parameters. + +* `SigGasConsumeDecorator`: Consumes a parameter-defined amount of gas for each signature. This requires pubkeys to be set in the context for all signers as part of `SetPubKeyDecorator`. + +* `SigVerificationDecorator`: Verifies all signatures are valid. This requires pubkeys to be set in the context for all signers as part of `SetPubKeyDecorator`. + +* `IncrementSequenceDecorator`: Increments the account sequence for each signer to prevent replay attacks.
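The recursive chaining described above can be sketched in miniature. This is a reduced illustration, not the SDK's actual API: the real `AnteDecorator` interface takes a `Context`, `Tx`, a `simulate` flag, and the `next` handler, and the chaining is performed by `sdk.ChainAnteDecorators`; the two decorators below only loosely mimic `ValidateMemoDecorator` and `ValidateSigCountDecorator`.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the SDK types used by the ante chain.
type Tx struct {
	Memo string
	Sigs int
}

type AnteHandler func(tx Tx) error

// AnteDecorator wraps the next handler in the chain.
type AnteDecorator interface {
	AnteHandle(tx Tx, next AnteHandler) error
}

// ChainAnteDecorators composes decorators so the first listed runs first,
// in the spirit of sdk.ChainAnteDecorators.
func ChainAnteDecorators(decs ...AnteDecorator) AnteHandler {
	handler := func(Tx) error { return nil } // chain terminator
	for i := len(decs) - 1; i >= 0; i-- {
		dec, next := decs[i], handler
		handler = func(tx Tx) error { return dec.AnteHandle(tx, next) }
	}
	return handler
}

// validateMemoDecorator rejects memos above a maximum length.
type validateMemoDecorator struct{ maxMemoChars int }

func (d validateMemoDecorator) AnteHandle(tx Tx, next AnteHandler) error {
	if len(tx.Memo) > d.maxMemoChars {
		return errors.New("memo too long")
	}
	return next(tx)
}

// validateSigCountDecorator bounds the number of signatures.
type validateSigCountDecorator struct{ txSigLimit int }

func (d validateSigCountDecorator) AnteHandle(tx Tx, next AnteHandler) error {
	if tx.Sigs > d.txSigLimit {
		return errors.New("too many signatures")
	}
	return next(tx)
}

func main() {
	ante := ChainAnteDecorators(
		validateMemoDecorator{maxMemoChars: 256},
		validateSigCountDecorator{txSigLimit: 7},
	)
	fmt.Println(ante(Tx{Memo: "ok", Sigs: 1})) // <nil>
	fmt.Println(ante(Tx{Memo: "ok", Sigs: 9})) // too many signatures
}
```

Each decorator either fails fast or calls `next`, which is what lets `SetUpContextDecorator` wrap everything downstream in a single gas-recovery defer.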
+ +## Keepers + +The auth module only exposes one keeper, the account keeper, which can be used to read and write accounts. + +### Account Keeper + +Presently only one fully-permissioned account keeper is exposed, which has the ability to both read and write +all fields of all accounts, and to iterate over all stored accounts. + +```go expandable +/ AccountKeeperI is the interface contract that x/auth's keeper implements. +type AccountKeeperI interface { + / Return a new account with the next account number and the specified address. Does not save the new account to the store. + NewAccountWithAddress(sdk.Context, sdk.AccAddress) + +types.AccountI + + / Return a new account with the next account number. Does not save the new account to the store. + NewAccount(sdk.Context, types.AccountI) + +types.AccountI + + / Check if an account exists in the store. + HasAccount(sdk.Context, sdk.AccAddress) + +bool + + / Retrieve an account from the store. + GetAccount(sdk.Context, sdk.AccAddress) + +types.AccountI + + / Set an account in the store. + SetAccount(sdk.Context, types.AccountI) + + / Remove an account from the store. + RemoveAccount(sdk.Context, types.AccountI) + + / Iterate over all accounts, calling the provided function. Stop iteration when it returns true. + IterateAccounts(sdk.Context, func(types.AccountI) + +bool) + + / Fetch the public key of an account at a specified address + GetPubKey(sdk.Context, sdk.AccAddress) (crypto.PubKey, error) + + / Fetch the sequence of an account at a specified address. + GetSequence(sdk.Context, sdk.AccAddress) (uint64, error) + + / Fetch the next account number, and increment the internal counter. 
+ NextAccountNumber(sdk.Context) + +uint64 +} +``` + +## Parameters + +The auth module contains the following parameters: + +| Key | Type | Example | +| ---------------------- | ------ | ------- | +| MaxMemoCharacters | uint64 | 256 | +| TxSigLimit | uint64 | 7 | +| TxSizeCostPerByte | uint64 | 10 | +| SigVerifyCostED25519 | uint64 | 590 | +| SigVerifyCostSecp256k1 | uint64 | 1000 | + +## Client + +### CLI + +A user can query and interact with the `auth` module using the CLI. + +### Query + +The `query` commands allow users to query `auth` state. + +```bash +simd query auth --help +``` + +#### account + +The `account` command allows users to query for an account by its address. + +```bash +simd query auth account [address] [flags] +``` + +Example: + +```bash +simd query auth account cosmos1... +``` + +Example Output: + +```bash +'@type': /cosmos.auth.v1beta1.BaseAccount +account_number: "0" +address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 +pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD +sequence: "1" +``` + +#### accounts + +The `accounts` command allows users to query all the available accounts.
+ +```bash +simd query auth accounts [flags] +``` + +Example: + +```bash +simd query auth accounts +``` + +Example Output: + +```bash expandable +accounts: +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "0" + address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 + pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD + sequence: "1" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "8" + address: cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr + pub_key: null + sequence: "0" + name: transfer + permissions: + - minter + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "4" + address: cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh + pub_key: null + sequence: "0" + name: bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "5" + address: cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r + pub_key: null + sequence: "0" + name: not_bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "6" + address: cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn + pub_key: null + sequence: "0" + name: gov + permissions: + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "3" + address: cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl + pub_key: null + sequence: "0" + name: distribution + permissions: [] +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "1" + address: cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j + pub_key: null + sequence: "0" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "7" + address: cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q + pub_key: null + sequence: "0" + name: mint + permissions: + - minter +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "2" + 
address: cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta + pub_key: null + sequence: "0" + name: fee_collector + permissions: [] +pagination: + next_key: null + total: "0" +``` + +#### params + +The `params` command allows users to query the current auth parameters. + +```bash +simd query auth params [flags] +``` + +Example: + +```bash +simd query auth params +``` + +Example Output: + +```bash +max_memo_characters: "256" +sig_verify_cost_ed25519: "590" +sig_verify_cost_secp256k1: "1000" +tx_sig_limit: "7" +tx_size_cost_per_byte: "10" +``` + +### Transactions + +The `auth` module supports transaction commands to help you with signing and more. Unlike other modules, you access the `auth` module's transaction commands directly through the top-level `tx` command. + +Use the `--help` flag to get more information about the `tx` command. + +```bash +simd tx --help +``` + +#### `sign` + +The `sign` command allows users to sign transactions that were generated offline. + +```bash +simd tx sign tx.json --from $ALICE > tx.signed.json +``` + +The result is a signed transaction that can be broadcast to the network with the broadcast command. + +More information about the `sign` command can be found running `simd tx sign --help`. + +#### `sign-batch` + +The `sign-batch` command allows users to sign multiple offline-generated transactions. +The transactions can be in one file, with one tx per line, or in multiple files. + +```bash +simd tx sign-batch txs.json --from $ALICE > tx.signed.json +``` + +or + +```bash +simd tx sign-batch tx1.json tx2.json tx3.json --from $ALICE > tx.signed.json +``` + +The result is multiple signed transactions. To combine the signed transactions into one transaction, use the `--append` flag. + +More information about the `sign-batch` command can be found running `simd tx sign-batch --help`. + +#### `multi-sign` + +The `multi-sign` command allows users to sign transactions that were generated offline by a multisig account.
+ +```bash +simd tx multisign transaction.json k1k2k3 k1sig.json k2sig.json k3sig.json +``` + +Where `k1k2k3` is the multisig account address, `k1sig.json` is the signature of the first signer, `k2sig.json` is the signature of the second signer, and `k3sig.json` is the signature of the third signer. + +##### Nested multisig transactions + +To allow transactions to be signed by nested multisigs, meaning that a participant of a multisig account can be another multisig account, the `--skip-signature-verification` flag must be used. + +```bash +# First aggregate signatures of the multisig participant +simd tx multi-sign transaction.json ms1 ms1p1sig.json ms1p2sig.json --signature-only --skip-signature-verification > ms1sig.json + +# Then use the aggregated signatures and the other signatures to sign the final transaction +simd tx multi-sign transaction.json k1ms1 k1sig.json ms1sig.json --skip-signature-verification +``` + +Where `ms1` is the nested multisig account address, `ms1p1sig.json` is the signature of the first participant of the nested multisig account, `ms1p2sig.json` is the signature of the second participant of the nested multisig account, and `ms1sig.json` is the aggregated signature of the nested multisig account. + +`k1ms1` is a multisig account comprised of an individual signer and another nested multisig account (`ms1`). `k1sig.json` is the signature of the individual member. + +More information about the `multi-sign` command can be found running `simd tx multi-sign --help`. + +#### `multisign-batch` + +The `multisign-batch` command works the same way as `sign-batch`, but for multisig accounts, with the difference that the `multisign-batch` command requires all transactions to be in one file and the `--append` flag does not exist. + +More information about the `multisign-batch` command can be found running `simd tx multisign-batch --help`.
+ +#### `validate-signatures` + +The `validate-signatures` command allows users to validate the signatures of a signed transaction. + +```bash +$ simd tx validate-signatures tx.signed.json +Signers: + 0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275 + +Signatures: + 0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275 [OK] +``` + +More information about the `validate-signatures` command can be found running `simd tx validate-signatures --help`. + +#### `broadcast` + +The `broadcast` command allows users to broadcast a signed transaction to the network. + +```bash +simd tx broadcast tx.signed.json +``` + +More information about the `broadcast` command can be found running `simd tx broadcast --help`. + +### gRPC + +A user can query the `auth` module using gRPC endpoints. + +#### Account + +The `account` endpoint allows users to query for an account by its address. + +```bash +cosmos.auth.v1beta1.Query/Account +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.auth.v1beta1.Query/Account +``` + +Example Output: + +```bash expandable +{ + "account":{ + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2", + "pubKey":{ + "@type":"/cosmos.crypto.secp256k1.PubKey", + "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD" + }, + "sequence":"1" + } +} +``` + +#### Accounts + +The `accounts` endpoint allows users to query all the available accounts.
+ +```bash +cosmos.auth.v1beta1.Query/Accounts +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.auth.v1beta1.Query/Accounts +``` + +Example Output: + +```bash expandable +{ + "accounts":[ + { + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2", + "pubKey":{ + "@type":"/cosmos.crypto.secp256k1.PubKey", + "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD" + }, + "sequence":"1" + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr", + "accountNumber":"8" + }, + "name":"transfer", + "permissions":[ + "minter", + "burner" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh", + "accountNumber":"4" + }, + "name":"bonded_tokens_pool", + "permissions":[ + "burner", + "staking" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r", + "accountNumber":"5" + }, + "name":"not_bonded_tokens_pool", + "permissions":[ + "burner", + "staking" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn", + "accountNumber":"6" + }, + "name":"gov", + "permissions":[ + "burner" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl", + "accountNumber":"3" + }, + "name":"distribution" + }, + { + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "accountNumber":"1", + "address":"cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j" + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q", + "accountNumber":"7" + }, + "name":"mint", + "permissions":[ + "minter" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + 
"baseAccount":{ + "address":"cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta", + "accountNumber":"2" + }, + "name":"fee_collector" + } + ], + "pagination":{ + "total":"9" + } +} +``` + +#### Params + +The `params` endpoint allow users to query the current auth parameters. + +```bash +cosmos.auth.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.auth.v1beta1.Query/Params +``` + +Example Output: + +```bash +{ + "params": { + "maxMemoCharacters": "256", + "txSigLimit": "7", + "txSizeCostPerByte": "10", + "sigVerifyCostEd25519": "590", + "sigVerifyCostSecp256k1": "1000" + } +} +``` + +### REST + +A user can query the `auth` module using REST endpoints. + +#### Account + +The `account` endpoint allow users to query for an account by it's address. + +```bash +/cosmos/auth/v1beta1/account?address={address} +``` + +#### Accounts + +The `accounts` endpoint allow users to query all the available accounts. + +```bash +/cosmos/auth/v1beta1/accounts +``` + +#### Params + +The `params` endpoint allow users to query the current auth parameters. + +```bash +/cosmos/auth/v1beta1/params +``` diff --git a/docs/sdk/v0.53/documentation/module-system/authz.mdx b/docs/sdk/v0.53/documentation/module-system/authz.mdx new file mode 100644 index 00000000..49418c51 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/authz.mdx @@ -0,0 +1,1430 @@ +--- +title: '`x/authz`' +--- + +## Abstract + +`x/authz` is an implementation of a Cosmos SDK module, per [ADR 30](docs/sdk/next/documentation/legacy/adr-comprehensive), that allows +granting arbitrary privileges from one account (the granter) to another account (the grantee). Authorizations must be granted for a particular Msg service method one by one using an implementation of the `Authorization` interface. 
+ +## Contents + +* [Concepts](#concepts) + * [Authorization and Grant](#authorization-and-grant) + * [Built-in Authorizations](#built-in-authorizations) + * [Gas](#gas) +* [State](#state) + * [Grant](#grant) + * [GrantQueue](#grantqueue) +* [Messages](#messages) + * [MsgGrant](#msggrant) + * [MsgRevoke](#msgrevoke) + * [MsgExec](#msgexec) +* [Events](#events) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +### Authorization and Grant + +The `x/authz` module defines interfaces and messages to grant authorizations to perform actions +on behalf of one account to other accounts. The design is defined in [ADR 030](docs/sdk/next/documentation/legacy/adr-comprehensive). + +A *grant* is an allowance to execute a Msg by the grantee on behalf of the granter. +Authorization is an interface that must be implemented by a concrete authorization logic to validate and execute grants. Authorizations are extensible and can be defined for any Msg service method even outside of the module where the Msg method is defined. See the `SendAuthorization` example in the next section for more details. + +**Note:** The authz module is different from the [auth (authentication)](/docs/sdk/v0.53/documentation/module-system/auth) module that is responsible for specifying the base transaction and account types. + +```go expandable +package authz + +import ( + + "github.com/cosmos/gogoproto/proto" + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ Authorization represents the interface of various Authorization types implemented +/ by other modules. +type Authorization interface { + proto.Message + + / MsgTypeURL returns the fully-qualified Msg service method URL (as described in ADR 031), + / which will process and accept or reject a request. + MsgTypeURL() + +string + + / Accept determines whether this grant permits the provided sdk.Msg to be performed, + / and if so provides an upgraded authorization instance.
+ Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) + + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} + +/ AcceptResponse instruments the controller of an authz message if the request is accepted +/ and if it should be updated or deleted. +type AcceptResponse struct { + / If Accept=true, the controller can accept and authorization and handle the update. + Accept bool + / If Delete=true, the controller must delete the authorization object and release + / storage resources. + Delete bool + / Controller, who is calling Authorization.Accept must check if `Updated != nil`. If yes, + / it must use the updated version and handle the update on the storage level. + Updated Authorization +} +``` + +### Built-in Authorizations + +The Cosmos SDK `x/authz` module comes with following authorization types: + +#### GenericAuthorization + +`GenericAuthorization` implements the `Authorization` interface that gives unrestricted permission to execute the provided Msg on behalf of granter's account. + +```protobuf +// GenericAuthorization gives the grantee unrestricted permissions to execute +// the provided method on behalf of the granter's account. +message GenericAuthorization { + option (amino.name) = "cosmos-sdk/GenericAuthorization"; + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + + // Msg, identified by it's type URL, to grant unrestricted permissions to execute + string msg = 1; +} +``` + +```go expandable +package authz + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +var _ Authorization = &GenericAuthorization{ +} + +/ NewGenericAuthorization creates a new GenericAuthorization object. +func NewGenericAuthorization(msgTypeURL string) *GenericAuthorization { + return &GenericAuthorization{ + Msg: msgTypeURL, +} +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. 
+func (a GenericAuthorization) + +MsgTypeURL() + +string { + return a.Msg +} + +/ Accept implements Authorization.Accept. +func (a GenericAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) { + return AcceptResponse{ + Accept: true +}, nil +} + +/ ValidateBasic implements Authorization.ValidateBasic. +func (a GenericAuthorization) + +ValidateBasic() + +error { + return nil +} +``` + +* `msg` stores Msg type URL. + +#### SendAuthorization + +`SendAuthorization` implements the `Authorization` interface for the `cosmos.bank.v1beta1.MsgSend` Msg. + +* It takes a (positive) `SpendLimit` that specifies the maximum amount of tokens the grantee can spend. The `SpendLimit` is updated as the tokens are spent. +* It takes an (optional) `AllowList` that specifies to which addresses a grantee can send token. + +```protobuf +// SendAuthorization allows the grantee to spend up to spend_limit coins from +// the granter's account. +// +// Since: cosmos-sdk 0.43 +message SendAuthorization { + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + option (amino.name) = "cosmos-sdk/SendAuthorization"; + + repeated cosmos.base.v1beta1.Coin spend_limit = 1 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + + // allow_list specifies an optional list of addresses to whom the grantee can send tokens on behalf of the + // granter. If omitted, any recipient is allowed. + // + // Since: cosmos-sdk 0.47 + repeated string allow_list = 2; +} +``` + +```go expandable +package types + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have proper gas fee framework. 
+/ Ref: https://github.com/cosmos/cosmos-sdk/issues/9054 +/ Ref: https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(10) + +var _ authz.Authorization = &SendAuthorization{ +} + +/ NewSendAuthorization creates a new SendAuthorization object. +func NewSendAuthorization(spendLimit sdk.Coins, allowed []sdk.AccAddress) *SendAuthorization { + return &SendAuthorization{ + AllowList: toBech32Addresses(allowed), + SpendLimit: spendLimit, +} +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a SendAuthorization) + +MsgTypeURL() + +string { + return sdk.MsgTypeURL(&MsgSend{ +}) +} + +/ Accept implements Authorization.Accept. +func (a SendAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { + mSend, ok := msg.(*MsgSend) + if !ok { + return authz.AcceptResponse{ +}, sdkerrors.ErrInvalidType.Wrap("type mismatch") +} + toAddr := mSend.ToAddress + + limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount...) + if isNegative { + return authz.AcceptResponse{ +}, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit") +} + if limitLeft.IsZero() { + return authz.AcceptResponse{ + Accept: true, + Delete: true +}, nil +} + isAddrExists := false + allowedList := a.GetAllowList() + for _, addr := range allowedList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "send authorization") + if addr == toAddr { + isAddrExists = true + break +} + +} + if len(allowedList) > 0 && !isAddrExists { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot send to %s address", toAddr) +} + +return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &SendAuthorization{ + SpendLimit: limitLeft, + AllowList: allowedList +}}, nil +} + +/ ValidateBasic implements Authorization.ValidateBasic. 
+func (a SendAuthorization) + +ValidateBasic() + +error { + if a.SpendLimit == nil { + return sdkerrors.ErrInvalidCoins.Wrap("spend limit cannot be nil") +} + if !a.SpendLimit.IsAllPositive() { + return sdkerrors.ErrInvalidCoins.Wrapf("spend limit must be positive") +} + found := make(map[string]bool, 0) + for i := 0; i < len(a.AllowList); i++ { + if found[a.AllowList[i]] { + return ErrDuplicateEntry +} + +found[a.AllowList[i]] = true +} + +return nil +} + +func toBech32Addresses(allowed []sdk.AccAddress) []string { + if len(allowed) == 0 { + return nil +} + allowedAddrs := make([]string, len(allowed)) + for i, addr := range allowed { + allowedAddrs[i] = addr.String() +} + +return allowedAddrs +} +``` + +* `spend_limit` keeps track of how many coins are left in the authorization. +* `allow_list` specifies an optional list of addresses to whom the grantee can send tokens on behalf of the granter. + +#### StakeAuthorization + +`StakeAuthorization` implements the `Authorization` interface for messages in the [staking module](https://docs.cosmos.network/v0.53/build/modules/staking). It takes an `AuthorizationType` to specify whether you want to authorise delegating, undelegating or redelegating (i.e. these have to be authorised separately). It also takes an optional `MaxTokens` that keeps track of a limit to the amount of tokens that can be delegated/undelegated/redelegated. If left empty, the amount is unlimited. Additionally, this Msg takes an `AllowList` or a `DenyList`, which allows you to select which validators you allow or deny grantees to stake with. + +```protobuf +// StakeAuthorization defines authorization for delegate/undelegate/redelegate. +// +// Since: cosmos-sdk 0.43 +message StakeAuthorization { + option (cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"; + option (amino.name) = "cosmos-sdk/StakeAuthorization"; + + // max_tokens specifies the maximum amount of tokens can be delegate to a validator. 
If it is + // empty, there is no spend limit and any amount of coins can be delegated. + cosmos.base.v1beta1.Coin max_tokens = 1 [(gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coin"]; + // validators is the oneof that represents either allow_list or deny_list + oneof validators { + // allow_list specifies list of validator addresses to whom grantee can delegate tokens on behalf of granter's + // account. + Validators allow_list = 2; + // deny_list specifies list of validator addresses to whom grantee can not delegate tokens. + Validators deny_list = 3; + } + // Validators defines list of validator addresses. + message Validators { + repeated string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + } + // authorization_type defines one of AuthorizationType. + AuthorizationType authorization_type = 4; +} +``` + +```go expandable +package types + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have propoer gas fee framework. +/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(10) + +var _ authz.Authorization = &StakeAuthorization{ +} + +/ NewStakeAuthorization creates a new StakeAuthorization object. 
+func NewStakeAuthorization(allowed []sdk.ValAddress, denied []sdk.ValAddress, authzType AuthorizationType, amount *sdk.Coin) (*StakeAuthorization, error) { + allowedValidators, deniedValidators, err := validateAllowAndDenyValidators(allowed, denied) + if err != nil { + return nil, err +} + a := StakeAuthorization{ +} + if allowedValidators != nil { + a.Validators = &StakeAuthorization_AllowList{ + AllowList: &StakeAuthorization_Validators{ + Address: allowedValidators +}} + +} + +else { + a.Validators = &StakeAuthorization_DenyList{ + DenyList: &StakeAuthorization_Validators{ + Address: deniedValidators +}} + +} + if amount != nil { + a.MaxTokens = amount +} + +a.AuthorizationType = authzType + + return &a, nil +} + +/ MsgTypeURL implements Authorization.MsgTypeURL. +func (a StakeAuthorization) + +MsgTypeURL() + +string { + authzType, err := normalizeAuthzType(a.AuthorizationType) + if err != nil { + panic(err) +} + +return authzType +} + +func (a StakeAuthorization) + +ValidateBasic() + +error { + if a.MaxTokens != nil && a.MaxTokens.IsNegative() { + return sdkerrors.Wrapf(authz.ErrNegativeMaxTokens, "negative coin amount: %v", a.MaxTokens) +} + if a.AuthorizationType == AuthorizationType_AUTHORIZATION_TYPE_UNSPECIFIED { + return authz.ErrUnknownAuthorizationType +} + +return nil +} + +/ Accept implements Authorization.Accept. 
+func (a StakeAuthorization) + +Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { + var validatorAddress string + var amount sdk.Coin + switch msg := msg.(type) { + case *MsgDelegate: + validatorAddress = msg.ValidatorAddress + amount = msg.Amount + case *MsgUndelegate: + validatorAddress = msg.ValidatorAddress + amount = msg.Amount + case *MsgBeginRedelegate: + validatorAddress = msg.ValidatorDstAddress + amount = msg.Amount + default: + return authz.AcceptResponse{ +}, sdkerrors.ErrInvalidRequest.Wrap("unknown msg type") +} + isValidatorExists := false + allowedList := a.GetAllowList().GetAddress() + for _, validator := range allowedList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization") + if validator == validatorAddress { + isValidatorExists = true + break +} + +} + denyList := a.GetDenyList().GetAddress() + for _, validator := range denyList { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization") + if validator == validatorAddress { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validator) +} + +} + if len(allowedList) > 0 && !isValidatorExists { + return authz.AcceptResponse{ +}, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validatorAddress) +} + if a.MaxTokens == nil { + return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &StakeAuthorization{ + Validators: a.GetValidators(), + AuthorizationType: a.GetAuthorizationType() +}, +}, nil +} + +limitLeft, err := a.MaxTokens.SafeSub(amount) + if err != nil { + return authz.AcceptResponse{ +}, err +} + if limitLeft.IsZero() { + return authz.AcceptResponse{ + Accept: true, + Delete: true +}, nil +} + +return authz.AcceptResponse{ + Accept: true, + Delete: false, + Updated: &StakeAuthorization{ + Validators: a.GetValidators(), + AuthorizationType: a.GetAuthorizationType(), + MaxTokens: &limitLeft +}, +}, nil +} + +func 
validateAllowAndDenyValidators(allowed []sdk.ValAddress, denied []sdk.ValAddress) ([]string, []string, error) {
+	if len(allowed) == 0 && len(denied) == 0 {
+		return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("both allowed & deny list cannot be empty")
+}
+	if len(allowed) > 0 && len(denied) > 0 {
+		return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("cannot set both allowed & deny list")
+}
+	allowedValidators := make([]string, len(allowed))
+	if len(allowed) > 0 {
+		for i, validator := range allowed {
+			allowedValidators[i] = validator.String()
+}
+
+return allowedValidators, nil, nil
+}
+	deniedValidators := make([]string, len(denied))
+	for i, validator := range denied {
+		deniedValidators[i] = validator.String()
+}
+
+return nil, deniedValidators, nil
+}
+
+/ Normalized Msg type URLs
+func normalizeAuthzType(authzType AuthorizationType) (string, error) {
+	switch authzType {
+	case AuthorizationType_AUTHORIZATION_TYPE_DELEGATE:
+		return sdk.MsgTypeURL(&MsgDelegate{
+}), nil
+	case AuthorizationType_AUTHORIZATION_TYPE_UNDELEGATE:
+		return sdk.MsgTypeURL(&MsgUndelegate{
+}), nil
+	case AuthorizationType_AUTHORIZATION_TYPE_REDELEGATE:
+		return sdk.MsgTypeURL(&MsgBeginRedelegate{
+}), nil
+	default:
+		return "", sdkerrors.Wrapf(authz.ErrUnknownAuthorizationType, "cannot normalize authz type with %T", authzType)
+}
+}
+```
+
+### Gas
+
+In order to prevent DoS attacks, granting `StakeAuthorization`s with `x/authz` incurs gas. `StakeAuthorization` allows you to authorize another account to delegate, undelegate, or redelegate to validators. The authorizer can define a list of validators they allow or deny delegations to. The Cosmos SDK iterates over these lists and charges 10 gas for each validator in both lists.
+
+Since the state maintains a single list per (granter, grantee) pair with the same expiration, revoking a particular `msgType` requires iterating over that list to remove the grant, and 20 gas is charged per iteration.
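As a back-of-the-envelope illustration of the charges described above, the cost of checking a `StakeAuthorization`'s validator lists grows linearly with their combined length. The sketch below is hypothetical: the constants mirror the `gasCostPerIteration` values from the snippets above, but `stakeAuthGas` itself is not an SDK function.

```go
package main

import "fmt"

const (
	// Gas charged per validator while checking a StakeAuthorization's
	// allow/deny lists (mirrors gasCostPerIteration in x/staking types).
	stakeAuthGasPerValidator = uint64(10)
	// Gas charged per entry while pruning a msg type URL from the
	// grant queue (mirrors gasCostPerIteration in the authz keeper).
	grantQueueGasPerEntry = uint64(20)
)

// stakeAuthGas is a hypothetical helper estimating the gas consumed by
// iterating a StakeAuthorization's validator lists: both lists are
// walked, so the charge grows with their combined size.
func stakeAuthGas(allowLen, denyLen int) uint64 {
	return stakeAuthGasPerValidator * uint64(allowLen+denyLen)
}

func main() {
	fmt.Println(stakeAuthGas(3, 2))        // 5 validators * 10 gas = 50
	fmt.Println(grantQueueGasPerEntry * 4) // pruning 4 queue entries = 80
}
```

Note the asymmetry: checking validator lists costs 10 gas per entry, while removing a msg type URL from the grant queue on revocation costs 20 gas per entry.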
+
+## State
+
+### Grant
+
+Grants are identified by combining granter address (the address bytes of the granter), grantee address (the address bytes of the grantee) and Authorization type (its type URL). Hence we only allow one grant for the (granter, grantee, Authorization) triple.
+
+* Grant: `0x01 | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes | msgType_bytes -> ProtocolBuffer(AuthorizationGrant)`
+
+The grant object encapsulates an `Authorization` type and an expiration timestamp:
+
+```protobuf
+// Grant gives permissions to execute
+// the provided method with expiration time.
+message Grant {
+  google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
+  // time when the grant will expire and will be pruned. If null, then the grant
+  // doesn't have a time expiration (other conditions in `authorization`
+  // may apply to invalidate the grant)
+  google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = true];
+}
+```
+
+### GrantQueue
+
+The module maintains a queue for pruning expired authz grants. Whenever a grant is created, an item is added to the `GrantQueue` under a key composed of the expiration, granter, and grantee.
+
+In `EndBlock` (which runs for every block), expired grants are pruned: a prefix key is formed from the current block time, every `GrantQueue` record whose stored expiration has passed is iterated, and the matching entries are deleted from both the `GrantQueue` and the `Grant` store.
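The `Grant` key layout above can be sketched with plain byte appends. This is a simplified, hypothetical illustration of the documented format; the keeper code below uses the SDK's length-prefix helpers (`address.MustLengthPrefix`, `sdk.AppendLengthPrefixedBytes`) rather than this function.

```go
package main

import "fmt"

// buildGrantKey illustrates the documented Grant store key layout:
// 0x01 | len(granter) (1 byte) | granter | len(grantee) (1 byte) | grantee | msgTypeURL.
// It is a sketch, not the SDK implementation.
func buildGrantKey(granter, grantee []byte, msgTypeURL string) []byte {
	key := []byte{0x01}               // Grant store prefix
	key = append(key, byte(len(granter)))
	key = append(key, granter...)
	key = append(key, byte(len(grantee)))
	key = append(key, grantee...)
	key = append(key, msgTypeURL...)  // msg type URL is the unprefixed tail
	return key
}

func main() {
	k := buildGrantKey([]byte{0xAA, 0xBB}, []byte{0xCC}, "/cosmos.bank.v1beta1.MsgSend")
	fmt.Printf("%x\n", k[:6]) // prints 0102aabb01cc: prefix, lengths, address bytes
}
```

Because the msg type URL is the final, unprefixed segment, parsing the key back (as `parseGrantStoreKey` does below) reads both length-prefixed addresses first and treats the remainder as the URL.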
+ +```go expandable +package keeper + +import ( + + "fmt" + "strconv" + "time" + "github.com/cosmos/gogoproto/proto" + abci "github.com/tendermint/tendermint/abci/types" + "github.com/tendermint/tendermint/libs/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ TODO: Revisit this once we have propoer gas fee framework. +/ Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, +/ https://github.com/cosmos/cosmos-sdk/discussions/9072 +const gasCostPerIteration = uint64(20) + +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + router *baseapp.MsgServiceRouter + authKeeper authz.AccountKeeper +} + +/ NewKeeper constructs a message authorization Keeper +func NewKeeper(storeKey storetypes.StoreKey, cdc codec.BinaryCodec, router *baseapp.MsgServiceRouter, ak authz.AccountKeeper) + +Keeper { + return Keeper{ + storeKey: storeKey, + cdc: cdc, + router: router, + authKeeper: ak, +} +} + +/ Logger returns a module-specific logger. +func (k Keeper) + +Logger(ctx sdk.Context) + +log.Logger { + return ctx.Logger().With("module", fmt.Sprintf("x/%s", authz.ModuleName)) +} + +/ getGrant returns grant stored at skey. 
+func (k Keeper) + +getGrant(ctx sdk.Context, skey []byte) (grant authz.Grant, found bool) { + store := ctx.KVStore(k.storeKey) + bz := store.Get(skey) + if bz == nil { + return grant, false +} + +k.cdc.MustUnmarshal(bz, &grant) + +return grant, true +} + +func (k Keeper) + +update(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, updated authz.Authorization) + +error { + skey := grantStoreKey(grantee, granter, updated.MsgTypeURL()) + +grant, found := k.getGrant(ctx, skey) + if !found { + return authz.ErrNoAuthorizationFound +} + +msg, ok := updated.(proto.Message) + if !ok { + return sdkerrors.ErrPackAny.Wrapf("cannot proto marshal %T", updated) +} + +any, err := codectypes.NewAnyWithValue(msg) + if err != nil { + return err +} + +grant.Authorization = any + store := ctx.KVStore(k.storeKey) + +store.Set(skey, k.cdc.MustMarshal(&grant)) + +return nil +} + +/ DispatchActions attempts to execute the provided messages via authorization +/ grants from the message signer to the grantee. +func (k Keeper) + +DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) ([][]byte, error) { + results := make([][]byte, len(msgs)) + now := ctx.BlockTime() + for i, msg := range msgs { + signers := msg.GetSigners() + if len(signers) != 1 { + return nil, authz.ErrAuthorizationNumOfSigners +} + granter := signers[0] + + / If granter != grantee then check authorization.Accept, otherwise we + / implicitly accept. 
+ if !granter.Equals(grantee) { + skey := grantStoreKey(grantee, granter, sdk.MsgTypeURL(msg)) + +grant, found := k.getGrant(ctx, skey) + if !found { + return nil, sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to update grant with key %s", string(skey)) +} + if grant.Expiration != nil && grant.Expiration.Before(now) { + return nil, authz.ErrAuthorizationExpired +} + +authorization, err := grant.GetAuthorization() + if err != nil { + return nil, err +} + +resp, err := authorization.Accept(ctx, msg) + if err != nil { + return nil, err +} + if resp.Delete { + err = k.DeleteGrant(ctx, grantee, granter, sdk.MsgTypeURL(msg)) +} + +else if resp.Updated != nil { + err = k.update(ctx, grantee, granter, resp.Updated) +} + if err != nil { + return nil, err +} + if !resp.Accept { + return nil, sdkerrors.ErrUnauthorized +} + +} + handler := k.router.Handler(msg) + if handler == nil { + return nil, sdkerrors.ErrUnknownRequest.Wrapf("unrecognized message route: %s", sdk.MsgTypeURL(msg)) +} + +msgResp, err := handler(ctx, msg) + if err != nil { + return nil, sdkerrors.Wrapf(err, "failed to execute message; message %v", msg) +} + +results[i] = msgResp.Data + + / emit the events from the dispatched actions + events := msgResp.Events + sdkEvents := make([]sdk.Event, 0, len(events)) + for _, event := range events { + e := event + e.Attributes = append(e.Attributes, abci.EventAttribute{ + Key: "authz_msg_index", + Value: strconv.Itoa(i) +}) + +sdkEvents = append(sdkEvents, sdk.Event(e)) +} + +ctx.EventManager().EmitEvents(sdkEvents) +} + +return results, nil +} + +/ SaveGrant method grants the provided authorization to the grantee on the granter's account +/ with the provided expiration time and insert authorization key into the grants queue. If there is an existing authorization grant for the +/ same `sdk.Msg` type, this grant overwrites that. 
+func (k Keeper) + +SaveGrant(ctx sdk.Context, grantee, granter sdk.AccAddress, authorization authz.Authorization, expiration *time.Time) + +error { + store := ctx.KVStore(k.storeKey) + msgType := authorization.MsgTypeURL() + skey := grantStoreKey(grantee, granter, msgType) + +grant, err := authz.NewGrant(ctx.BlockTime(), authorization, expiration) + if err != nil { + return err +} + +var oldExp *time.Time + if oldGrant, found := k.getGrant(ctx, skey); found { + oldExp = oldGrant.Expiration +} + if oldExp != nil && (expiration == nil || !oldExp.Equal(*expiration)) { + if err = k.removeFromGrantQueue(ctx, skey, granter, grantee, *oldExp); err != nil { + return err +} + +} + + / If the expiration didn't change, then we don't remove it and we should not insert again + if expiration != nil && (oldExp == nil || !oldExp.Equal(*expiration)) { + if err = k.insertIntoGrantQueue(ctx, granter, grantee, msgType, *expiration); err != nil { + return err +} + +} + bz := k.cdc.MustMarshal(&grant) + +store.Set(skey, bz) + +return ctx.EventManager().EmitTypedEvent(&authz.EventGrant{ + MsgTypeUrl: authorization.MsgTypeURL(), + Granter: granter.String(), + Grantee: grantee.String(), +}) +} + +/ DeleteGrant revokes any authorization for the provided message type granted to the grantee +/ by the granter. 
+func (k Keeper) + +DeleteGrant(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) + +error { + store := ctx.KVStore(k.storeKey) + skey := grantStoreKey(grantee, granter, msgType) + +grant, found := k.getGrant(ctx, skey) + if !found { + return sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to delete grant with key %s", string(skey)) +} + if grant.Expiration != nil { + err := k.removeFromGrantQueue(ctx, skey, granter, grantee, *grant.Expiration) + if err != nil { + return err +} + +} + +store.Delete(skey) + +return ctx.EventManager().EmitTypedEvent(&authz.EventRevoke{ + MsgTypeUrl: msgType, + Granter: granter.String(), + Grantee: grantee.String(), +}) +} + +/ GetAuthorizations Returns list of `Authorizations` granted to the grantee by the granter. +func (k Keeper) + +GetAuthorizations(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress) ([]authz.Authorization, error) { + store := ctx.KVStore(k.storeKey) + key := grantStoreKey(grantee, granter, "") + iter := sdk.KVStorePrefixIterator(store, key) + +defer iter.Close() + +var authorization authz.Grant + var authorizations []authz.Authorization + for ; iter.Valid(); iter.Next() { + if err := k.cdc.Unmarshal(iter.Value(), &authorization); err != nil { + return nil, err +} + +a, err := authorization.GetAuthorization() + if err != nil { + return nil, err +} + +authorizations = append(authorizations, a) +} + +return authorizations, nil +} + +/ GetAuthorization returns an Authorization and it's expiration time. +/ A nil Authorization is returned under the following circumstances: +/ - No grant is found. +/ - A grant is found, but it is expired. +/ - There was an error getting the authorization from the grant. 
+func (k Keeper) + +GetAuthorization(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) (authz.Authorization, *time.Time) { + grant, found := k.getGrant(ctx, grantStoreKey(grantee, granter, msgType)) + if !found || (grant.Expiration != nil && grant.Expiration.Before(ctx.BlockHeader().Time)) { + return nil, nil +} + +auth, err := grant.GetAuthorization() + if err != nil { + return nil, nil +} + +return auth, grant.Expiration +} + +/ IterateGrants iterates over all authorization grants +/ This function should be used with caution because it can involve significant IO operations. +/ It should not be used in query or msg services without charging additional gas. +/ The iteration stops when the handler function returns true or the iterator exhaust. +func (k Keeper) + +IterateGrants(ctx sdk.Context, + handler func(granterAddr sdk.AccAddress, granteeAddr sdk.AccAddress, grant authz.Grant) + +bool, +) { + store := ctx.KVStore(k.storeKey) + iter := sdk.KVStorePrefixIterator(store, GrantKey) + +defer iter.Close() + for ; iter.Valid(); iter.Next() { + var grant authz.Grant + granterAddr, granteeAddr, _ := parseGrantStoreKey(iter.Key()) + +k.cdc.MustUnmarshal(iter.Value(), &grant) + if handler(granterAddr, granteeAddr, grant) { + break +} + +} +} + +func (k Keeper) + +getGrantQueueItem(ctx sdk.Context, expiration time.Time, granter, grantee sdk.AccAddress) (*authz.GrantQueueItem, error) { + store := ctx.KVStore(k.storeKey) + bz := store.Get(GrantQueueKey(expiration, granter, grantee)) + if bz == nil { + return &authz.GrantQueueItem{ +}, nil +} + +var queueItems authz.GrantQueueItem + if err := k.cdc.Unmarshal(bz, &queueItems); err != nil { + return nil, err +} + +return &queueItems, nil +} + +func (k Keeper) + +setGrantQueueItem(ctx sdk.Context, expiration time.Time, + granter sdk.AccAddress, grantee sdk.AccAddress, queueItems *authz.GrantQueueItem, +) + +error { + store := ctx.KVStore(k.storeKey) + +bz, err := k.cdc.Marshal(queueItems) + if err 
!= nil { + return err +} + +store.Set(GrantQueueKey(expiration, granter, grantee), bz) + +return nil +} + +/ insertIntoGrantQueue inserts a grant key into the grant queue +func (k Keeper) + +insertIntoGrantQueue(ctx sdk.Context, granter, grantee sdk.AccAddress, msgType string, expiration time.Time) + +error { + queueItems, err := k.getGrantQueueItem(ctx, expiration, granter, grantee) + if err != nil { + return err +} + if len(queueItems.MsgTypeUrls) == 0 { + k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{ + MsgTypeUrls: []string{ + msgType +}, +}) +} + +else { + queueItems.MsgTypeUrls = append(queueItems.MsgTypeUrls, msgType) + +k.setGrantQueueItem(ctx, expiration, granter, grantee, queueItems) +} + +return nil +} + +/ removeFromGrantQueue removes a grant key from the grant queue +func (k Keeper) + +removeFromGrantQueue(ctx sdk.Context, grantKey []byte, granter, grantee sdk.AccAddress, expiration time.Time) + +error { + store := ctx.KVStore(k.storeKey) + key := GrantQueueKey(expiration, granter, grantee) + bz := store.Get(key) + if bz == nil { + return sdkerrors.Wrap(authz.ErrNoGrantKeyFound, "can't remove grant from the expire queue, grant key not found") +} + +var queueItem authz.GrantQueueItem + if err := k.cdc.Unmarshal(bz, &queueItem); err != nil { + return err +} + + _, _, msgType := parseGrantStoreKey(grantKey) + queueItems := queueItem.MsgTypeUrls + for index, typeURL := range queueItems { + ctx.GasMeter().ConsumeGas(gasCostPerIteration, "grant queue") + if typeURL == msgType { + end := len(queueItem.MsgTypeUrls) - 1 + queueItems[index] = queueItems[end] + queueItems = queueItems[:end] + if err := k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{ + MsgTypeUrls: queueItems, +}); err != nil { + return err +} + +break +} + +} + +return nil +} + +/ DequeueAndDeleteExpiredGrants deletes expired grants from the state and grant queue. 
+func (k Keeper) + +DequeueAndDeleteExpiredGrants(ctx sdk.Context) + +error { + store := ctx.KVStore(k.storeKey) + iterator := store.Iterator(GrantQueuePrefix, sdk.InclusiveEndBytes(GrantQueueTimePrefix(ctx.BlockTime()))) + +defer iterator.Close() + for ; iterator.Valid(); iterator.Next() { + var queueItem authz.GrantQueueItem + if err := k.cdc.Unmarshal(iterator.Value(), &queueItem); err != nil { + return err +} + + _, granter, grantee, err := parseGrantQueueKey(iterator.Key()) + if err != nil { + return err +} + +store.Delete(iterator.Key()) + for _, typeURL := range queueItem.MsgTypeUrls { + store.Delete(grantStoreKey(grantee, granter, typeURL)) +} + +} + +return nil +} +``` + +* GrantQueue: `0x02 | expiration_bytes | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes -> ProtocalBuffer(GrantQueueItem)` + +The `expiration_bytes` are the expiration date in UTC with the format `"2006-01-02T15:04:05.000000000"`. + +```go expandable +package keeper + +import ( + + "time" + "github.com/cosmos/cosmos-sdk/internal/conv" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/address" + "github.com/cosmos/cosmos-sdk/types/kv" + "github.com/cosmos/cosmos-sdk/x/authz" +) + +/ Keys for store prefixes +/ Items are stored with the following key: values +/ +/ - 0x01: Grant +/ - 0x02: GrantQueueItem +var ( + GrantKey = []byte{0x01 +} / prefix for each key + GrantQueuePrefix = []byte{0x02 +} +) + +var lenTime = len(sdk.FormatTimeBytes(time.Now())) + +/ StoreKey is the store key string for authz +const StoreKey = authz.ModuleName + +/ grantStoreKey - return authorization store key +/ Items are stored with the following key: values +/ +/ - 0x01: Grant +func grantStoreKey(grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) []byte { + m := conv.UnsafeStrToBytes(msgType) + +granter = address.MustLengthPrefix(granter) + +grantee = address.MustLengthPrefix(grantee) + key := 
sdk.AppendLengthPrefixedBytes(GrantKey, granter, grantee, m) + +return key +} + +/ parseGrantStoreKey - split granter, grantee address and msg type from the authorization key +func parseGrantStoreKey(key []byte) (granterAddr, granteeAddr sdk.AccAddress, msgType string) { + / key is of format: + / 0x01 + + granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, 1) / ignore key[0] since it is a prefix key + granterAddr, granterAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0])) + +granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrEndIndex+1, 1) + +granteeAddr, granteeAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0])) + +kv.AssertKeyAtLeastLength(key, granteeAddrEndIndex+1) + +return granterAddr, granteeAddr, conv.UnsafeBytesToStr(key[(granteeAddrEndIndex + 1):]) +} + +/ parseGrantQueueKey split expiration time, granter and grantee from the grant queue key +func parseGrantQueueKey(key []byte) (time.Time, sdk.AccAddress, sdk.AccAddress, error) { + / key is of format: + / 0x02 + + expBytes, expEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, lenTime) + +exp, err := sdk.ParseTimeBytes(expBytes) + if err != nil { + return exp, nil, nil, err +} + +granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, expEndIndex+1, 1) + +granter, granterEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0])) + +granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterEndIndex+1, 1) + +grantee, _ := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0])) + +return exp, granter, grantee, nil +} + +/ GrantQueueKey - return grant queue store key. If a given grant doesn't have a defined +/ expiration, then it should not be used in the pruning queue. 
+/ Key format is: +/ +/ 0x02: GrantQueueItem +func GrantQueueKey(expiration time.Time, granter sdk.AccAddress, grantee sdk.AccAddress) []byte { + exp := sdk.FormatTimeBytes(expiration) + +granter = address.MustLengthPrefix(granter) + +grantee = address.MustLengthPrefix(grantee) + +return sdk.AppendLengthPrefixedBytes(GrantQueuePrefix, exp, granter, grantee) +} + +/ GrantQueueTimePrefix - return grant queue time prefix +func GrantQueueTimePrefix(expiration time.Time) []byte { + return append(GrantQueuePrefix, sdk.FormatTimeBytes(expiration)...) +} + +/ firstAddressFromGrantStoreKey parses the first address only +func firstAddressFromGrantStoreKey(key []byte) + +sdk.AccAddress { + addrLen := key[0] + return sdk.AccAddress(key[1 : 1+addrLen]) +} +``` + +The `GrantQueueItem` object contains the list of type urls between granter and grantee that expire at the time indicated in the key. + +## Messages + +In this section we describe the processing of messages for the authz module. + +### MsgGrant + +An authorization grant is created using the `MsgGrant` message. +If there is already a grant for the `(granter, grantee, Authorization)` triple, then the new grant overwrites the previous one. To update or extend an existing grant, a new grant with the same `(granter, grantee, Authorization)` triple should be created. + +```protobuf +// MsgGrant is a request type for Grant method. It declares authorization to the grantee +// on behalf of the granter with the provided expiration time. +message MsgGrant { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgGrant"; + + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + cosmos.authz.v1beta1.Grant grant = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message handling should fail if: + +* both granter and grantee have the same address. 
+* provided `Expiration` time is less than current unix timestamp (but a grant will be created if no `expiration` time is provided since `expiration` is optional).
+* provided `Grant.Authorization` is not implemented.
+* `Authorization.MsgTypeURL()` is not defined in the router (there is no defined handler in the app router to handle that Msg type).
+
+### MsgRevoke
+
+A grant can be removed with the `MsgRevoke` message.
+
+```protobuf
+// MsgRevoke revokes any authorization with the provided sdk.Msg type on the
+// granter's account that has been granted to the grantee.
+message MsgRevoke {
+  option (cosmos.msg.v1.signer) = "granter";
+  option (amino.name) = "cosmos-sdk/MsgRevoke";
+
+  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string msg_type_url = 3;
+}
+```
+
+The message handling should fail if:
+
+* both granter and grantee have the same address.
+* provided `MsgTypeUrl` is empty.
+
+NOTE: The `MsgExec` message removes a grant if the grant has expired.
+
+### MsgExec
+
+When a grantee wants to execute a transaction on behalf of a granter, they must send `MsgExec`.
+
+```protobuf
+// MsgExec attempts to execute the provided messages using
+// authorizations granted to the grantee. Each message should have only
+// one signer corresponding to the granter of the authorization.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "grantee";
+  option (amino.name) = "cosmos-sdk/MsgExec";
+
+  string grantee = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // Execute Msg.
+  // The x/authz will try to find a grant matching (msg.signers[0], grantee, MsgTypeURL(msg))
+  // triple and validate it.
+  repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = "cosmos.base.v1beta1.Msg"];
+}
+```
+
+The message handling should fail if:
+
+* provided `Authorization` is not implemented.
+* grantee doesn't have permission to run the transaction.
+* the granted authorization is expired.
+
+## Events
+
+The authz module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main/cosmos.authz.v1beta1#cosmos.authz.v1beta1.EventGrant).
+
+## Client
+
+### CLI
+
+A user can query and interact with the `authz` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `authz` state.
+
+```bash
+simd query authz --help
+```
+
+##### grants
+
+The `grants` command allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type.
+
+```bash
+simd query authz grants [granter-addr] [grantee-addr] [msg-type-url]? [flags]
+```
+
+Example:
+
+```bash
+simd query authz grants cosmos1.. cosmos1.. /cosmos.bank.v1beta1.MsgSend
+```
+
+Example Output:
+
+```bash
+grants:
+- authorization:
+    '@type': /cosmos.bank.v1beta1.SendAuthorization
+    spend_limit:
+    - amount: "100"
+      denom: stake
+  expiration: "2022-01-01T00:00:00Z"
+pagination: null
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `authz` module.
+
+```bash
+simd tx authz --help
+```
+
+##### exec
+
+The `exec` command allows a grantee to execute a transaction on behalf of the granter.
+
+```bash
+simd tx authz exec [tx-json-file] --from [grantee] [flags]
+```
+
+Example:
+
+```bash
+simd tx authz exec tx.json --from=cosmos1..
+```
+
+##### grant
+
+The `grant` command allows a granter to grant an authorization to a grantee.
+
+```bash
+simd tx authz grant [grantee] [authorization-type] --from=[granter] [flags]
+```
+
+* The `send` authorization\_type refers to the built-in `SendAuthorization` type. The custom flags available are `spend-limit` (required) and `allow-list` (optional), documented [here](#SendAuthorization).
+
+Example:
+
+```bash
+simd tx authz grant cosmos1.. send --spend-limit=100stake --allow-list=cosmos1...,cosmos2... --from=cosmos1..
+```
+
+* The `generic` authorization\_type refers to the built-in `GenericAuthorization` type.
The custom flag available is `msg-type` ( required) documented [here](#GenericAuthorization). + +> Note: `msg-type` is any valid Cosmos SDK `Msg` type url. + +Example: + +```bash + simd tx authz grant cosmos1.. generic --msg-type=/cosmos.bank.v1beta1.MsgSend --from=cosmos1.. +``` + +* The `delegate`,`unbond`,`redelegate` authorization\_types refer to the built-in `StakeAuthorization` type. The custom flags available are `spend-limit` (optional), `allowed-validators` (optional) and `deny-validators` (optional) documented [here](#StakeAuthorization). + +> Note: `allowed-validators` and `deny-validators` cannot both be empty. `spend-limit` represents the `MaxTokens` + +Example: + +```bash +simd tx authz grant cosmos1.. delegate --spend-limit=100stake --allowed-validators=cosmos...,cosmos... --deny-validators=cosmos... --from=cosmos1.. +``` + +##### revoke + +The `revoke` command allows a granter to revoke an authorization from a grantee. + +```bash +simd tx authz revoke [grantee] [msg-type-url] --from=[granter] [flags] +``` + +Example: + +```bash +simd tx authz revoke cosmos1.. /cosmos.bank.v1beta1.MsgSend --from=cosmos1.. +``` + +### gRPC + +A user can query the `authz` module using gRPC endpoints. + +#### Grants + +The `Grants` endpoint allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type. + +```bash +cosmos.authz.v1beta1.Query/Grants +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"granter":"cosmos1..","grantee":"cosmos1..","msg_type_url":"/cosmos.bank.v1beta1.MsgSend"}' \ + localhost:9090 \ + cosmos.authz.v1beta1.Query/Grants +``` + +Example Output: + +```bash expandable +{ + "grants": [ + { + "authorization": { + "@type": "/cosmos.bank.v1beta1.SendAuthorization", + "spendLimit": [ + { + "denom":"stake", + "amount":"100" + } + ] + }, + "expiration": "2022-01-01T00:00:00Z" + } + ] +} +``` + +### REST + +A user can query the `authz` module using REST endpoints. 
+ +```bash +/cosmos/authz/v1beta1/grants +``` + +Example: + +```bash +curl "localhost:1317/cosmos/authz/v1beta1/grants?granter=cosmos1..&grantee=cosmos1..&msg_type_url=/cosmos.bank.v1beta1.MsgSend" +``` + +Example Output: + +```bash expandable +{ + "grants": [ + { + "authorization": { + "@type": "/cosmos.bank.v1beta1.SendAuthorization", + "spend_limit": [ + { + "denom": "stake", + "amount": "100" + } + ] + }, + "expiration": "2022-01-01T00:00:00Z" + } + ], + "pagination": null +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/bank.mdx b/docs/sdk/v0.53/documentation/module-system/bank.mdx new file mode 100644 index 00000000..7bb31dab --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/bank.mdx @@ -0,0 +1,1209 @@ +--- +title: '`x/bank`' +description: This document specifies the bank module of the Cosmos SDK. +--- + +## Abstract + +This document specifies the bank module of the Cosmos SDK. + +The bank module is responsible for handling multi-asset coin transfers between +accounts and tracking special-case pseudo-transfers which must work differently +with particular kinds of accounts (notably delegating/undelegating for vesting +accounts). It exposes several interfaces with varying capabilities for secure +interaction with other modules which must alter user balances. + +In addition, the bank module tracks and provides query support for the total +supply of all assets used in the application. + +This module is used in the Cosmos Hub. 
+ +## Contents + +* [Supply](#supply) + * [Total Supply](#total-supply) +* [Module Accounts](#module-accounts) + * [Permissions](#permissions) +* [State](#state) +* [Params](#params) +* [Keepers](#keepers) +* [Messages](#messages) +* [Events](#events) + * [Message Events](#message-events) + * [Keeper Events](#keeper-events) +* [Parameters](#parameters) + * [SendEnabled](#sendenabled) + * [DefaultSendEnabled](#defaultsendenabled) +* [Client](#client) + * [CLI](#cli) + * [Query](#query) + * [Transactions](#transactions) +* [gRPC](#grpc) + +## Supply + +The `supply` functionality: + +* passively tracks the total supply of coins within a chain, +* provides a pattern for modules to hold/interact with `Coins`, and +* introduces the invariant check to verify a chain's total supply. + +### Total Supply + +The total `Supply` of the network is equal to the sum of all coins from the +account. The total supply is updated every time a `Coin` is minted (eg: as part +of the inflation mechanism) or burned (eg: due to slashing or if a governance +proposal is vetoed). + +## Module Accounts + +The supply functionality introduces a new type of `auth.Account` which can be used by +modules to allocate tokens and in special cases mint or burn tokens. At a base +level these module accounts are capable of sending/receiving tokens to and from +`auth.Account`s and other module accounts. This design replaces previous +alternative designs where, to hold tokens, modules would burn the incoming +tokens from the sender account, and then track those tokens internally. Later, +in order to send tokens, the module would need to effectively mint tokens +within a destination account. The new design removes duplicate logic between +modules to perform this accounting. 
+
+The `ModuleAccount` interface is defined as follows:
+
+```go
+type ModuleAccount interface {
+  auth.Account               / same methods as the Account interface
+  GetName() string           / name of the module; used to obtain the address
+  GetPermissions() []string  / permissions of module account
+  HasPermission(string) bool
+}
+```
+
+> **WARNING!**
+> Any module or message handler that allows either direct or indirect sending of funds must explicitly guarantee those funds cannot be sent to module accounts (unless allowed).
+
+The supply `Keeper` also introduces new wrapper functions for the auth `Keeper`
+and the bank `Keeper` that are related to `ModuleAccount`s in order to be able
+to:
+
+* Get and set `ModuleAccount`s by providing the `Name`.
+* Send coins from and to other `ModuleAccount`s or standard `Account`s
+  (`BaseAccount` or `VestingAccount`) by passing only the `Name`.
+* `Mint` or `Burn` coins for a `ModuleAccount` (restricted to its permissions).
+
+### Permissions
+
+Each `ModuleAccount` has a different set of permissions that provide different
+object capabilities to perform certain actions. Permissions need to be
+registered upon the creation of the supply `Keeper` so that every time a
+`ModuleAccount` calls the allowed functions, the `Keeper` can look up the
+permissions for that specific account and allow or deny the action.
+
+The available permissions are:
+
+* `Minter`: allows for a module to mint a specific amount of coins.
+* `Burner`: allows for a module to burn a specific amount of coins.
+* `Staking`: allows for a module to delegate and undelegate a specific amount of coins.
+
+## State
+
+The `x/bank` module keeps state of the following primary objects:
+
+1. Account balances
+2. Denomination metadata
+3. The total supply of all balances
+4. Information on which denominations are allowed to be sent.
+
+In addition, the `x/bank` module keeps the following indexes to manage the
+aforementioned state:
+
+* Supply Index: `0x0 | byte(denom) -> byte(amount)`
+* Denom Metadata Index: `0x1 | byte(denom) -> ProtocolBuffer(Metadata)`
+* Balances Index: `0x2 | byte(address length) | []byte(address) | []byte(balance.Denom) -> ProtocolBuffer(balance)`
+* Reverse Denomination to Address Index: `0x03 | byte(denom) | 0x00 | []byte(address) -> 0`
+
+## Params
+
+The bank module stores its params in state under the prefix `0x05`. They can be
+updated via governance or by the address with authority.
+
+* Params: `0x05 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params defines the parameters for the bank module.
+message Params {
+  option (amino.name) = "cosmos-sdk/x/bank/Params";
+  option (gogoproto.goproto_stringer) = false;
+  // Deprecated: Use of SendEnabled in params is deprecated.
+  // For genesis, use the newly added send_enabled field in the genesis object.
+  // Storage, lookup, and manipulation of this information is now in the keeper.
+  //
+  // As of cosmos-sdk 0.47, this only exists for backwards compatibility of genesis files.
+  repeated SendEnabled send_enabled = 1 [deprecated = true];
+  bool default_send_enabled = 2;
+}
+```
+
+## Keepers
+
+The bank module provides these exported keeper interfaces that can be
+passed to other modules that read or update account balances. Modules
+should use the least-permissive interface that provides the functionality they
+require.
+
+Best practices dictate careful review of `bank` module code to ensure that
+permissions are limited in the way that you expect.
+
+### Denied Addresses
+
+The `x/bank` module accepts a map of addresses that are considered blocklisted
+from directly and explicitly receiving funds through means such as `MsgSend` and
+`MsgMultiSend` and direct API calls like `SendCoinsFromModuleToAccount`.
+
+Typically, these addresses are module accounts.
If these addresses receive funds
+outside the expected rules of the state machine, invariants are likely to be
+broken and could result in a halted network.
+
+When the `x/bank` module is provided with a blocklisted set of addresses, any attempt by a user or client to directly or indirectly send funds to a blocklisted account (for example, via [IBC](https://ibc.cosmos.network)) results in an error.
+
+### Common Types
+
+#### Input
+
+An input of a multiparty transfer.
+
+```protobuf
+/ Input models transaction input.
+message Input {
+  string address = 1;
+  repeated cosmos.base.v1beta1.Coin coins = 2;
+}
+```
+
+#### Output
+
+An output of a multiparty transfer.
+
+```protobuf
+/ Output models transaction outputs.
+message Output {
+  string address = 1;
+  repeated cosmos.base.v1beta1.Coin coins = 2;
+}
+```
+
+### BaseKeeper
+
+The base keeper provides full-permission access: the ability to arbitrarily modify any account's balance and to mint or burn coins.
+
+Per-module mint restrictions can be achieved by wrapping the base keeper with `WithMintCoinsRestriction`, which constrains what a module may mint (e.g. only a certain denom).
+
+```go expandable
+/ Keeper defines a module interface that facilitates the transfer of coins
+/ between accounts.
+type Keeper interface {
+  SendKeeper
+  WithMintCoinsRestriction(MintingRestrictionFn) BaseKeeper
+
+  InitGenesis(context.Context, *types.GenesisState)
+  ExportGenesis(context.Context) *types.GenesisState
+
+  GetSupply(ctx context.Context, denom string) sdk.Coin
+  HasSupply(ctx context.Context, denom string) bool
+  GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error)
+  IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) bool)
+  GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool)
+  HasDenomMetaData(ctx context.Context, denom string) bool
+  SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata)
+  IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) bool)
+
+  SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) error
+  SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) error
+  SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error
+  DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error
+  UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) error
+  MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) error
+  BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) error
+
+  DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) error
+  UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) error
+
+  / GetAuthority gets the address capable of executing governance proposal messages. Usually the gov module account.
+  GetAuthority() string
+
+  types.QueryServer
+}
+```
+
+### SendKeeper
+
+The send keeper provides access to account balances and the ability to transfer coins between
+accounts. The send keeper does not alter the total supply (mint or burn coins).
+
+```go expandable
+/ SendKeeper defines a module interface that facilitates the transfer of coins
+/ between accounts without the possibility of creating coins.
+type SendKeeper interface {
+  ViewKeeper
+
+  AppendSendRestriction(restriction SendRestrictionFn)
+  PrependSendRestriction(restriction SendRestrictionFn)
+  ClearSendRestriction()
+
+  InputOutputCoins(ctx context.Context, input types.Input, outputs []types.Output) error
+  SendCoins(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) error
+
+  GetParams(ctx context.Context) types.Params
+  SetParams(ctx context.Context, params types.Params) error
+
+  IsSendEnabledDenom(ctx context.Context, denom string) bool
+  SetSendEnabled(ctx context.Context, denom string, value bool)
+  SetAllSendEnabled(ctx context.Context, sendEnableds []*types.SendEnabled)
+  DeleteSendEnabled(ctx context.Context, denom string)
+  IterateSendEnabledEntries(ctx context.Context, cb func(denom string, sendEnabled bool) (stop bool))
+  GetAllSendEnabledEntries(ctx context.Context) []types.SendEnabled
+
+  IsSendEnabledCoin(ctx context.Context, coin sdk.Coin) bool
+  IsSendEnabledCoins(ctx context.Context, coins ...sdk.Coin) error
+
+  BlockedAddr(addr sdk.AccAddress) bool
+}
+```
+
+#### Send Restrictions
+
+The `SendKeeper` applies a `SendRestrictionFn` before each transfer of funds.
+
+```golang
+/ A SendRestrictionFn can restrict sends and/or provide a new receiver address.
+type SendRestrictionFn func(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (newToAddr sdk.AccAddress, err error)
+```
+
+After the `SendKeeper` (or `BaseKeeper`) has been created, send restrictions can be added to it using the `AppendSendRestriction` or `PrependSendRestriction` functions.
+Both functions compose the provided restriction with any previously provided restrictions.
+`AppendSendRestriction` adds the provided restriction to be run after any previously provided send restrictions.
+`PrependSendRestriction` adds the restriction to be run before any previously provided send restrictions.
+The composition short-circuits when an error is encountered, i.e. if the first restriction returns an error, the second is not run.
+
+During `SendCoins`, the send restriction is applied before the coins are removed from the from address and added to the to address.
+During `InputOutputCoins`, the send restriction is applied after the input coins are removed and once for each output before the funds are added.
+
+A send restriction function should make use of a custom value in the context to allow bypassing that specific restriction.
+
+Send restrictions are not placed on `ModuleToAccount` or `ModuleToModule` transfers. This is a deliberate design decision: modules need to move funds to user accounts and to other module accounts, and the state machine should be able to do so without restriction.
+
+Secondly, such a limitation would restrict the state machine's use of its own functionality: users could not receive rewards, and funds could not move between module accounts. If a user sends funds to the community pool and a governance proposal later returns those tokens to the user's account, how that is handled falls under the discretion of the app chain developer; we cannot make strong assumptions here.
+
+Thirdly, this could lead to a chain halt if a token is disabled and then moved in `BeginBlock`/`EndBlock`. This is the final reason we consider such restrictions more damaging than beneficial for users.
+
+For example, in your module's keeper package, you'd define the send restriction function:
+
+```golang expandable
+var _ banktypes.SendRestrictionFn = Keeper{}.SendRestrictionFn
+
+func (k Keeper) SendRestrictionFn(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (sdk.AccAddress, error) {
+  / Bypass if the context says to.
+  if mymodule.HasBypass(ctx) {
+    return toAddr, nil
+  }
+
+  / Your custom send restriction logic goes here.
+  return nil, errors.New("not implemented")
+}
+```
+
+The bank keeper should be provided to your keeper's constructor so the send restriction can be added to it:
+
+```golang
+func NewKeeper(cdc codec.BinaryCodec, storeKey storetypes.StoreKey, bankKeeper mymodule.BankKeeper) Keeper {
+  rv := Keeper{/*...*/}
+  bankKeeper.AppendSendRestriction(rv.SendRestrictionFn)
+  return rv
+}
+```
+
+Then, in the `mymodule` package, define the context helpers:
+
+```golang expandable
+const bypassKey = "bypass-mymodule-restriction"
+
+/ WithBypass returns a new context that will cause the mymodule bank send restriction to be skipped.
+func WithBypass(ctx context.Context) context.Context {
+  return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, true)
+}
+
+/ WithoutBypass returns a new context that will cause the mymodule bank send restriction to not be skipped.
+func WithoutBypass(ctx context.Context) context.Context {
+  return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, false)
+}
+
+/ HasBypass checks the context to see if the mymodule bank send restriction should be skipped.
+func HasBypass(ctx context.Context) bool {
+  bypassValue := ctx.Value(bypassKey)
+  if bypassValue == nil {
+    return false
+  }
+  bypass, isBool := bypassValue.(bool)
+  return isBool && bypass
+}
+```
+
+Now, anywhere you want to use `SendCoins` or `InputOutputCoins` without your send restriction being applied:
+
+```golang
+func (k Keeper) DoThing(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) error {
+  return k.bankKeeper.SendCoins(mymodule.WithBypass(ctx), fromAddr, toAddr, amt)
+}
+```
+
+### ViewKeeper
+
+The view keeper provides read-only access to account balances. The view keeper does not have balance alteration functionality. All balance lookups are `O(1)`.
+
+```go expandable
+/ ViewKeeper defines a module interface that facilitates read only access to
+/ account balances.
+type ViewKeeper interface {
+  ValidateBalance(ctx context.Context, addr sdk.AccAddress) error
+  HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) bool
+
+  GetAllBalances(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+  GetAccountsBalances(ctx context.Context) []types.Balance
+  GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin
+  LockedCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+  SpendableCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+  SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin
+
+  IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool))
+  IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool))
+}
+```
+
+## Messages
+
+### MsgSend
+
+Send coins from one address to another.
+
+```protobuf
+// MsgSend represents a message to send coins from one account to another.
+message MsgSend { + option (cosmos.msg.v1.signer) = "from_address"; + option (amino.name) = "cosmos-sdk/MsgSend"; + + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + string from_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string to_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + repeated cosmos.base.v1beta1.Coin amount = 3 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; +} +``` + +The message will fail under the following conditions: + +* The coins do not have sending enabled +* The `to` address is restricted + +### MsgMultiSend + +Send coins from one sender and to a series of different address. If any of the receiving addresses do not correspond to an existing account, a new account is created. + +```protobuf +// MsgMultiSend represents an arbitrary multi-in, multi-out send message. +message MsgMultiSend { + option (cosmos.msg.v1.signer) = "inputs"; + option (amino.name) = "cosmos-sdk/MsgMultiSend"; + + option (gogoproto.equal) = false; + + // Inputs, despite being `repeated`, only allows one sender input. This is + // checked in MsgMultiSend's ValidateBasic. + repeated Input inputs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated Output outputs = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message will fail under the following conditions: + +* Any of the coins do not have sending enabled +* Any of the `to` addresses are restricted +* Any of the coins are locked +* The inputs and outputs do not correctly correspond to one another + +### MsgUpdateParams + +The `bank` module params can be updated through `MsgUpdateParams`, which can be done using governance proposal. The signer will always be the `gov` module account address. + +```protobuf +// MsgUpdateParams is the Msg/UpdateParams request type. 
+// +// Since: cosmos-sdk 0.47 +message MsgUpdateParams { + option (cosmos.msg.v1.signer) = "authority"; + + // authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + option (amino.name) = "cosmos-sdk/x/bank/MsgUpdateParams"; + + // params defines the x/bank parameters to update. + // + // NOTE: All parameters must be supplied. + Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +The message handling can fail if: + +* signer is not the gov module account address. + +### MsgSetSendEnabled + +Used with the x/gov module to set create/edit SendEnabled entries. + +```protobuf +// MsgSetSendEnabled is the Msg/SetSendEnabled request type. +// +// Only entries to add/update/delete need to be included. +// Existing SendEnabled entries that are not included in this +// message are left unchanged. +// +// Since: cosmos-sdk 0.47 +message MsgSetSendEnabled { + option (cosmos.msg.v1.signer) = "authority"; + option (amino.name) = "cosmos-sdk/MsgSetSendEnabled"; + + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // send_enabled is the list of entries to add or update. + repeated SendEnabled send_enabled = 2; + + // use_default_for is a list of denoms that should use the params.default_send_enabled value. + // Denoms listed here will have their SendEnabled entries deleted. + // If a denom is included that doesn't have a SendEnabled entry, + // it will be ignored. + repeated string use_default_for = 3; +} +``` + +The message will fail under the following conditions: + +* The authority is not a bech32 address. +* The authority is not x/gov module's address. +* There are multiple SendEnabled entries with the same Denom. +* One or more SendEnabled entries has an invalid Denom. 
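In practice, `MsgSetSendEnabled` is submitted through an `x/gov` proposal whose embedded message is signed by the gov module account. As a sketch (the `cosmos1..` authority is a placeholder for the chain's gov module address, and the denoms and deposit are illustrative), a proposal file might look like:

```json
{
  "title": "Disable sends for foocoin",
  "summary": "Example proposal toggling SendEnabled entries",
  "messages": [
    {
      "@type": "/cosmos.bank.v1beta1.MsgSetSendEnabled",
      "authority": "cosmos1..",
      "send_enabled": [
        { "denom": "foocoin", "enabled": false }
      ],
      "use_default_for": ["barcoin"]
    }
  ],
  "deposit": "10000000stake"
}
```

Such a file could then be submitted with something like `simd tx gov submit-proposal proposal.json --from mykey`.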
+ +## Events + +The bank module emits the following events: + +### Message Events + +#### MsgSend + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| transfer | recipient | `{recipientAddress}` | +| transfer | amount | `{amount}` | +| message | module | bank | +| message | action | send | +| message | sender | `{senderAddress}` | + +#### MsgMultiSend + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| transfer | recipient | `{recipientAddress}` | +| transfer | amount | `{amount}` | +| message | module | bank | +| message | action | multisend | +| message | sender | `{senderAddress}` | + +### Keeper Events + +In addition to message events, the bank keeper will produce events when the following methods are called (or any method which ends up calling them) + +#### MintCoins + +```json expandable +{ + "type": "coinbase", + "attributes": [ + { + "key": "minter", + "value": "{{sdk.AccAddress of the module minting coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being minted}}", + "index": true + } + ] +} +``` + +```json expandable +{ + "type": "coin_received", + "attributes": [ + { + "key": "receiver", + "value": "{{sdk.AccAddress of the module minting coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being received}}", + "index": true + } + ] +} +``` + +#### BurnCoins + +```json expandable +{ + "type": "burn", + "attributes": [ + { + "key": "burner", + "value": "{{sdk.AccAddress of the module burning coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being burned}}", + "index": true + } + ] +} +``` + +```json expandable +{ + "type": "coin_spent", + "attributes": [ + { + "key": "spender", + "value": "{{sdk.AccAddress of the module burning coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being burned}}", + "index": true + } + ] +} +``` + +#### addCoins + +```json 
expandable +{ + "type": "coin_received", + "attributes": [ + { + "key": "receiver", + "value": "{{sdk.AccAddress of the address beneficiary of the coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being received}}", + "index": true + } + ] +} +``` + +#### subUnlockedCoins/DelegateCoins + +```json expandable +{ + "type": "coin_spent", + "attributes": [ + { + "key": "spender", + "value": "{{sdk.AccAddress of the address which is spending coins}}", + "index": true + }, + { + "key": "amount", + "value": "{{sdk.Coins being spent}}", + "index": true + } + ] +} +``` + +## Parameters + +The bank module contains the following parameters + +### SendEnabled + +The SendEnabled parameter is now deprecated and not to be use. It is replaced +with state store records. + +### DefaultSendEnabled + +The default send enabled value controls send transfer capability for all +coin denominations unless specifically included in the array of `SendEnabled` +parameters. + +## Client + +### CLI + +A user can query and interact with the `bank` module using the CLI. + +#### Query + +The `query` commands allow users to query `bank` state. + +```shell +simd query bank --help +``` + +##### balances + +The `balances` command allows users to query account balances by address. + +```shell +simd query bank balances [address] [flags] +``` + +Example: + +```shell +simd query bank balances cosmos1.. +``` + +Example Output: + +```yml +balances: +- amount: "1000000000" + denom: stake +pagination: + next_key: null + total: "0" +``` + +##### denom-metadata + +The `denom-metadata` command allows users to query metadata for coin denominations. A user can query metadata for a single denomination using the `--denom` flag or all denominations without it. 
+ +```shell +simd query bank denom-metadata [flags] +``` + +Example: + +```shell +simd query bank denom-metadata --denom stake +``` + +Example Output: + +```yml +metadata: + base: stake + denom_units: + - aliases: + - STAKE + denom: stake + description: native staking token of simulation app + display: stake + name: SimApp Token + symbol: STK +``` + +##### total + +The `total` command allows users to query the total supply of coins. A user can query the total supply for a single coin using the `--denom` flag or all coins without it. + +```shell +simd query bank total [flags] +``` + +Example: + +```shell +simd query bank total --denom stake +``` + +Example Output: + +```yml +amount: "10000000000" +denom: stake +``` + +##### send-enabled + +The `send-enabled` command allows users to query for all or some SendEnabled entries. + +```shell +simd query bank send-enabled [denom1 ...] [flags] +``` + +Example: + +```shell +simd query bank send-enabled +``` + +Example output: + +```yml +send_enabled: +- denom: foocoin + enabled: true +- denom: barcoin +pagination: + next-key: null + total: 2 +``` + +#### Transactions + +The `tx` commands allow users to interact with the `bank` module. + +```shell +simd tx bank --help +``` + +##### send + +The `send` command allows users to send funds from one account to another. + +```shell +simd tx bank send [from_key_or_address] [to_address] [amount] [flags] +``` + +Example: + +```shell +simd tx bank send cosmos1.. cosmos1.. 100stake +``` + +## gRPC + +A user can query the `bank` module using gRPC endpoints. + +### Balance + +The `Balance` endpoint allows users to query account balance by address for a given denomination. 
+ +```shell +cosmos.bank.v1beta1.Query/Balance +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1..","denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/Balance +``` + +Example Output: + +```json +{ + "balance": { + "denom": "stake", + "amount": "1000000000" + } +} +``` + +### AllBalances + +The `AllBalances` endpoint allows users to query account balance by address for all denominations. + +```shell +cosmos.bank.v1beta1.Query/AllBalances +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/AllBalances +``` + +Example Output: + +```json expandable +{ + "balances": [ + { + "denom": "stake", + "amount": "1000000000" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### DenomMetadata + +The `DenomMetadata` endpoint allows users to query metadata for a single coin denomination. + +```shell +cosmos.bank.v1beta1.Query/DenomMetadata +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomMetadata +``` + +Example Output: + +```json expandable +{ + "metadata": { + "description": "native staking token of simulation app", + "denomUnits": [ + { + "denom": "stake", + "aliases": [ + "STAKE" + ] + } + ], + "base": "stake", + "display": "stake", + "name": "SimApp Token", + "symbol": "STK" + } +} +``` + +### DenomsMetadata + +The `DenomsMetadata` endpoint allows users to query metadata for all coin denominations. 
+ +```shell +cosmos.bank.v1beta1.Query/DenomsMetadata +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomsMetadata +``` + +Example Output: + +```json expandable +{ + "metadatas": [ + { + "description": "native staking token of simulation app", + "denomUnits": [ + { + "denom": "stake", + "aliases": [ + "STAKE" + ] + } + ], + "base": "stake", + "display": "stake", + "name": "SimApp Token", + "symbol": "STK" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### DenomOwners + +The `DenomOwners` endpoint allows users to query metadata for a single coin denomination. + +```shell +cosmos.bank.v1beta1.Query/DenomOwners +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"denom":"stake"}' \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/DenomOwners +``` + +Example Output: + +```json expandable +{ + "denomOwners": [ + { + "address": "cosmos1..", + "balance": { + "denom": "stake", + "amount": "5000000000" + } + +}, + { + "address": "cosmos1..", + "balance": { + "denom": "stake", + "amount": "5000000000" + } + +}, + ], + "pagination": { + "total": "2" + } +} +``` + +### TotalSupply + +The `TotalSupply` endpoint allows users to query the total supply of all coins. + +```shell +cosmos.bank.v1beta1.Query/TotalSupply +``` + +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.bank.v1beta1.Query/TotalSupply +``` + +Example Output: + +```json expandable +{ + "supply": [ + { + "denom": "stake", + "amount": "10000000000" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### SupplyOf + +The `SupplyOf` endpoint allows users to query the total supply of a single coin. 
+
+```shell
+cosmos.bank.v1beta1.Query/SupplyOf
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"denom":"stake"}' \
+  localhost:9090 \
+  cosmos.bank.v1beta1.Query/SupplyOf
+```
+
+Example Output:
+
+```json
+{
+  "amount": {
+    "denom": "stake",
+    "amount": "10000000000"
+  }
+}
+```
+
+### Params
+
+The `Params` endpoint allows users to query the parameters of the `bank` module.
+
+```shell
+cosmos.bank.v1beta1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  cosmos.bank.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "defaultSendEnabled": true
+  }
+}
+```
+
+### SendEnabled
+
+The `SendEnabled` endpoint allows users to query the SendEnabled entries of the `bank` module.
+
+Any denominations NOT returned use the `Params.DefaultSendEnabled` value.
+
+```shell
+cosmos.bank.v1beta1.Query/SendEnabled
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  cosmos.bank.v1beta1.Query/SendEnabled
+```
+
+Example Output:
+
+```json expandable
+{
+  "send_enabled": [
+    {
+      "denom": "foocoin",
+      "enabled": true
+    },
+    {
+      "denom": "barcoin"
+    }
+  ],
+  "pagination": {
+    "next-key": null,
+    "total": 2
+  }
+}
+```
diff --git a/docs/sdk/v0.53/documentation/module-system/beginblock-endblock.mdx b/docs/sdk/v0.53/documentation/module-system/beginblock-endblock.mdx
new file mode 100644
index 00000000..a9cb7c3e
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/beginblock-endblock.mdx
@@ -0,0 +1,112 @@
+---
+title: BeginBlocker and EndBlocker
+---
+
+## Synopsis
+
+`BeginBlocker` and `EndBlocker` are optional methods module developers can implement in their module.
They will be triggered at the beginning and at the end of each block respectively, when the [`BeginBlock`](/docs/sdk/v0.53/documentation/application-framework/baseapp#beginblock) and [`EndBlock`](/docs/sdk/v0.53/documentation/application-framework/baseapp#endblock) ABCI messages are received from the underlying consensus engine.
+
+
+**Pre-requisite Readings**
+
+- [Module Manager](/docs/sdk/v0.53/documentation/module-system/module-manager)
+
+
+
+## BeginBlocker and EndBlocker
+
+`BeginBlocker` and `EndBlocker` are a way for module developers to add automatic execution of logic to their module. This is a powerful tool that should be used carefully, as complex automatic functions can slow down or even halt the chain.
+
+In 0.47.0, `PrepareProposal` and `ProcessProposal` were added, allowing app developers to do arbitrary work at those phases, but they do not influence the work that will be done in `BeginBlock`. If an application requires work to be executed before `BeginBlock`, this is not possible today (0.50.0).
+
+When needed, `BeginBlocker` and `EndBlocker` are implemented as part of the [`HasBeginBlocker` and `HasABCIEndBlocker` interfaces](/docs/sdk/v0.53/documentation/module-system/module-manager#appmodule). This means either can be left out if not required. The `BeginBlock` and `EndBlock` methods of the interface implemented in `module.go` generally defer to `BeginBlocker` and `EndBlocker` methods respectively, which are usually implemented in `abci.go`.
+
+The actual implementations of `BeginBlocker` and `EndBlocker` in `abci.go` are very similar to that of a [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services):
+
+- They generally use the [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper) and [`ctx`](/docs/sdk/v0.53/documentation/application-framework/context) to retrieve information about the latest state.
+- If needed, they use the `keeper` and `ctx` to trigger state-transitions.
+- If needed, they can emit [`events`](/docs/sdk/v0.53/api-reference/events-streaming/events) via the `ctx`'s `EventManager`.
+
+A specific type of `EndBlocker` is available to return validator updates to the underlying consensus engine in the form of an [`[]abci.ValidatorUpdate`](https://docs.cometbft.com/v0.37/spec/abci/abci++_methods#endblock). This is the preferred way to implement custom validator changes.
+
+It is possible for developers to define the order of execution between the `BeginBlocker`/`EndBlocker` functions of each of their application's modules via the module manager's `SetOrderBeginBlocker`/`SetOrderEndBlocker` methods. For more on the module manager, see [here](/docs/sdk/v0.53/documentation/module-system/module-manager#manager).
+
+See an example implementation of `BeginBlocker` from the `distribution` module:
+
+```go expandable
+package distribution
+
+import (
+
+ "time"
+ "github.com/cosmos/cosmos-sdk/telemetry"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/x/distribution/keeper"
+ "github.com/cosmos/cosmos-sdk/x/distribution/types"
+)
+
+/ BeginBlocker sets the proposer for determining distribution during endblock
+/ and distribute rewards for the previous block.
+func BeginBlocker(ctx sdk.Context, k keeper.Keeper) + +error { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker) + + / determine the total power signing the block + var previousTotalPower int64 + for _, voteInfo := range ctx.VoteInfos() { + previousTotalPower += voteInfo.Validator.Power +} + + / TODO this is Tendermint-dependent + / ref https://github.com/cosmos/cosmos-sdk/issues/3095 + if ctx.BlockHeight() > 1 { + k.AllocateTokens(ctx, previousTotalPower, ctx.VoteInfos()) +} + + / record the proposer for when we payout on the next block + consAddr := sdk.ConsAddress(ctx.BlockHeader().ProposerAddress) + +k.SetPreviousProposerConsAddr(ctx, consAddr) + +return nil +} +``` + +and an example implementation of `EndBlocker` from the `staking` module: + +```go expandable +package keeper + +import ( + + "context" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ BeginBlocker will persist the current header and validator set as a historical entry +/ and prune the oldest entry based on the HistoricalEntries parameter +func (k *Keeper) + +BeginBlocker(ctx sdk.Context) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker) + +k.TrackHistoricalInfo(ctx) +} + +/ Called every block, update validator set +func (k *Keeper) + +EndBlocker(ctx context.Context) ([]abci.ValidatorUpdate, error) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + +return k.BlockValidatorUpdates(sdk.UnwrapSDKContext(ctx)), nil +} +``` + +{/* TODO: leaving this here to update docs with core api changes */} diff --git a/docs/sdk/v0.53/documentation/module-system/circuit.mdx b/docs/sdk/v0.53/documentation/module-system/circuit.mdx new file mode 100644 index 00000000..513894f2 --- /dev/null +++ 
b/docs/sdk/v0.53/documentation/module-system/circuit.mdx @@ -0,0 +1,599 @@
+---
+title: "`x/circuit`"
+---
+
+## Concepts
+
+Circuit Breaker is a module that is meant to avoid a chain needing to halt or shut down in the presence of a vulnerability; instead, the module allows specific messages, or all messages, to be disabled. When operating a chain, if it is app-specific then a halt of the chain is less detrimental, but if there are applications built on top of the chain then halting is expensive due to the disruption to those applications.
+
+Circuit Breaker works on the premise that an address or set of addresses has the right to block messages from being executed and/or included in the mempool. Any address with a permission is able to reset the circuit breaker for the message.
+
+Transactions are checked and can be rejected at two points:
+
+- In the `CircuitBreakerDecorator` [ante handler](https://docs.cosmos.network/main/learn/advanced/baseapp#antehandler):
+
+```go expandable
+package ante
+
+import (
+
+ "context"
+ "github.com/cockroachdb/errors"
+
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+/ CircuitBreaker is an interface that defines the methods for a circuit breaker.
+type CircuitBreaker interface { + IsAllowed(ctx context.Context, typeURL string) (bool, error) +} + +/ CircuitBreakerDecorator is an AnteDecorator that checks if the transaction type is allowed to enter the mempool or be executed +type CircuitBreakerDecorator struct { + circuitKeeper CircuitBreaker +} + +func NewCircuitBreakerDecorator(ck CircuitBreaker) + +CircuitBreakerDecorator { + return CircuitBreakerDecorator{ + circuitKeeper: ck, +} +} + +func (cbd CircuitBreakerDecorator) + +AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { + / loop through all the messages and check if the message type is allowed + for _, msg := range tx.GetMsgs() { + isAllowed, err := cbd.circuitKeeper.IsAllowed(ctx, sdk.MsgTypeURL(msg)) + if err != nil { + return ctx, err +} + if !isAllowed { + return ctx, errors.New("tx type not allowed") +} + +} + +return next(ctx, tx, simulate) +} +``` + +- With a [message router check](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router): + +```go expandable +package baseapp + +import ( + + "context" + "fmt" + + gogogrpc "github.com/cosmos/gogoproto/grpc" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc" + "google.golang.org/protobuf/runtime/protoiface" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/baseapp/internal/protocompat" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ MessageRouter ADR 031 request type routing +/ docs/sdk/next/documentation/legacy/adr-comprehensive +type MessageRouter interface { + Handler(msg sdk.Msg) + +MsgServiceHandler + HandlerByTypeURL(typeURL string) + +MsgServiceHandler +} + +/ MsgServiceRouter routes fully-qualified Msg service methods to their handler. 
+type MsgServiceRouter struct { + interfaceRegistry codectypes.InterfaceRegistry + routes map[string]MsgServiceHandler + hybridHandlers map[string]func(ctx context.Context, req, resp protoiface.MessageV1) + +error + circuitBreaker CircuitBreaker +} + +var _ gogogrpc.Server = &MsgServiceRouter{ +} + +/ NewMsgServiceRouter creates a new MsgServiceRouter. +func NewMsgServiceRouter() *MsgServiceRouter { + return &MsgServiceRouter{ + routes: map[string]MsgServiceHandler{ +}, + hybridHandlers: map[string]func(ctx context.Context, req, resp protoiface.MessageV1) + +error{ +}, +} +} + +func (msr *MsgServiceRouter) + +SetCircuit(cb CircuitBreaker) { + msr.circuitBreaker = cb +} + +/ MsgServiceHandler defines a function type which handles Msg service message. +type MsgServiceHandler = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) + +/ Handler returns the MsgServiceHandler for a given msg or nil if not found. +func (msr *MsgServiceRouter) + +Handler(msg sdk.Msg) + +MsgServiceHandler { + return msr.routes[sdk.MsgTypeURL(msg)] +} + +/ HandlerByTypeURL returns the MsgServiceHandler for a given query route path or nil +/ if not found. +func (msr *MsgServiceRouter) + +HandlerByTypeURL(typeURL string) + +MsgServiceHandler { + return msr.routes[typeURL] +} + +/ RegisterService implements the gRPC Server.RegisterService method. sd is a gRPC +/ service description, handler is an object which implements that gRPC service. +/ +/ This function PANICs: +/ - if it is called before the service `Msg`s have been registered using +/ RegisterInterfaces, +/ - or if a service is being registered twice. +func (msr *MsgServiceRouter) + +RegisterService(sd *grpc.ServiceDesc, handler interface{ +}) { + / Adds a top-level query handler based on the gRPC service name. 
+ for _, method := range sd.Methods { + err := msr.registerMsgServiceHandler(sd, method, handler) + if err != nil { + panic(err) +} + +err = msr.registerHybridHandler(sd, method, handler) + if err != nil { + panic(err) +} + +} +} + +func (msr *MsgServiceRouter) + +HybridHandlerByMsgName(msgName string) + +func(ctx context.Context, req, resp protoiface.MessageV1) + +error { + return msr.hybridHandlers[msgName] +} + +func (msr *MsgServiceRouter) + +registerHybridHandler(sd *grpc.ServiceDesc, method grpc.MethodDesc, handler interface{ +}) + +error { + inputName, err := protocompat.RequestFullNameFromMethodDesc(sd, method) + if err != nil { + return err +} + cdc := codec.NewProtoCodec(msr.interfaceRegistry) + +hybridHandler, err := protocompat.MakeHybridHandler(cdc, sd, method, handler) + if err != nil { + return err +} + / if circuit breaker is not nil, then we decorate the hybrid handler with the circuit breaker + if msr.circuitBreaker == nil { + msr.hybridHandlers[string(inputName)] = hybridHandler + return nil +} + / decorate the hybrid handler with the circuit breaker + circuitBreakerHybridHandler := func(ctx context.Context, req, resp protoiface.MessageV1) + +error { + messageName := codectypes.MsgTypeURL(req) + +allowed, err := msr.circuitBreaker.IsAllowed(ctx, messageName) + if err != nil { + return err +} + if !allowed { + return fmt.Errorf("circuit breaker disallows execution of message %s", messageName) +} + +return hybridHandler(ctx, req, resp) +} + +msr.hybridHandlers[string(inputName)] = circuitBreakerHybridHandler + return nil +} + +func (msr *MsgServiceRouter) + +registerMsgServiceHandler(sd *grpc.ServiceDesc, method grpc.MethodDesc, handler interface{ +}) + +error { + fqMethod := fmt.Sprintf("/%s/%s", sd.ServiceName, method.MethodName) + methodHandler := method.Handler + + var requestTypeName string + + / NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. 
+ / This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. + / We use a no-op interceptor to avoid actually calling into the handler itself. + _, _ = methodHandler(nil, context.Background(), func(i interface{ +}) + +error { + msg, ok := i.(sdk.Msg) + if !ok { + / We panic here because there is no other alternative and the app cannot be initialized correctly + / this should only happen if there is a problem with code generation in which case the app won't + / work correctly anyway. + panic(fmt.Errorf("unable to register service method %s: %T does not implement sdk.Msg", fqMethod, i)) +} + +requestTypeName = sdk.MsgTypeURL(msg) + +return nil +}, noopInterceptor) + + / Check that the service Msg fully-qualified method name has already + / been registered (via RegisterInterfaces). If the user registers a + / service without registering according service Msg type, there might be + / some unexpected behavior down the road. Since we can't return an error + / (`Server.RegisterService` interface restriction) + +we panic (at startup). + reqType, err := msr.interfaceRegistry.Resolve(requestTypeName) + if err != nil || reqType == nil { + return fmt.Errorf( + "type_url %s has not been registered yet. "+ + "Before calling RegisterService, you must register all interfaces by calling the `RegisterInterfaces` "+ + "method on module.BasicManager. Each module should call `msgservice.RegisterMsgServiceDesc` inside its "+ + "`RegisterInterfaces` method with the `_Msg_serviceDesc` generated by proto-gen", + requestTypeName, + ) +} + + / Check that each service is only registered once. If a service is + / registered more than once, then we should error. Since we can't + / return an error (`Server.RegisterService` interface restriction) + +we + / panic (at startup). + _, found := msr.routes[requestTypeName] + if found { + return fmt.Errorf( + "msg service %s has already been registered. Please make sure to only register each service once. 
"+ + "This usually means that there are conflicting modules registering the same msg service", + fqMethod, + ) +} + +msr.routes[requestTypeName] = func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + interceptor := func(goCtx context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{ +}, error) { + goCtx = context.WithValue(goCtx, sdk.SdkContextKey, ctx) + +return handler(goCtx, msg) +} + if m, ok := msg.(sdk.HasValidateBasic); ok { + if err := m.ValidateBasic(); err != nil { + return nil, err +} + +} + if msr.circuitBreaker != nil { + msgURL := sdk.MsgTypeURL(msg) + +isAllowed, err := msr.circuitBreaker.IsAllowed(ctx, msgURL) + if err != nil { + return nil, err +} + if !isAllowed { + return nil, fmt.Errorf("circuit breaker disables execution of this message: %s", msgURL) +} + +} + + / Call the method handler from the service description with the handler object. + / We don't do any decoding here because the decoding was already done. + res, err := methodHandler(handler, ctx, noopDecoder, interceptor) + if err != nil { + return nil, err +} + +resMsg, ok := res.(proto.Message) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "Expecting proto.Message, got %T", resMsg) +} + +return sdk.WrapServiceResult(ctx, resMsg, err) +} + +return nil +} + +/ SetInterfaceRegistry sets the interface registry for the router. 
+func (msr *MsgServiceRouter)
+
+SetInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) {
+ msr.interfaceRegistry = interfaceRegistry
+}
+
+func noopDecoder(_ interface{
+})
+
+error {
+ return nil
+}
+
+func noopInterceptor(_ context.Context, _ interface{
+}, _ *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{
+}, error) {
+ return nil, nil
+}
+```
+
+
+  The `CircuitBreakerDecorator` works for most use cases, but [does not check
+  the inner messages of a
+  transaction](https://docs.cosmos.network/main/learn/beginner/tx-lifecycle#antehandler).
+  This means some transactions (such as `x/authz` transactions or some `x/gov`
+  transactions) may pass the ante handler. **This does not affect the circuit
+  breaker**, as the message router check will still fail the transaction. This
+  tradeoff is made to avoid introducing more dependencies in the `x/circuit` module.
+  Chains can re-define the `CircuitBreakerDecorator` to check for inner messages
+  if they wish to do so.
+
+
+## State
+
+### Accounts
+
+- AccountPermissions `0x1 | account_address -> ProtocolBuffer(CircuitBreakerPermissions)`
+
+```go expandable
+type level int32
+
+const (
+ / LEVEL_NONE_UNSPECIFIED indicates that the account will have no circuit
+ / breaker permissions.
+ LEVEL_NONE_UNSPECIFIED = iota
+ / LEVEL_SOME_MSGS indicates that the account will have permission to
+ / trip or reset the circuit breaker for some Msg type URLs. If this level
+ / is chosen, a non-empty list of Msg type URLs must be provided in
+ / limit_type_urls.
+ LEVEL_SOME_MSGS
+ / LEVEL_ALL_MSGS indicates that the account can trip or reset the circuit
+ / breaker for Msg's of all type URLs.
+ LEVEL_ALL_MSGS
+ / LEVEL_SUPER_ADMIN indicates that the account can take all circuit breaker
+ / actions and can grant permissions to other accounts.
+ LEVEL_SUPER_ADMIN
+)
+
+type Access struct {
+ level int32
+ msgs []string / if full permission, msgs can be empty
+}
+```
+
+### Disable List
+
+List of type URLs that are disabled.
+
+- DisableList `0x2 | msg_type_url -> []byte{}` {/* - should this be stored in json to skip encoding and decoding each block, does it matter? */}
+
+## State Transitions
+
+### Authorize
+
+Authorize is called by the module authority (by default, the governance module account) or any account with `LEVEL_SUPER_ADMIN` to grant another account permission to disable/enable messages. There are three levels of permission that can be granted: `LEVEL_SOME_MSGS` limits which messages can be disabled, `LEVEL_ALL_MSGS` permits all messages to be disabled, and `LEVEL_SUPER_ADMIN` allows an account to take all circuit breaker actions, including authorizing and deauthorizing other accounts.
+
+```protobuf
+ / AuthorizeCircuitBreaker allows a super-admin to grant (or revoke) another
+ / account's circuit breaker permissions.
+ rpc AuthorizeCircuitBreaker(MsgAuthorizeCircuitBreaker) returns (MsgAuthorizeCircuitBreakerResponse);
+```
+
+### Trip
+
+Trip is called by an authorized account to disable message execution for a specific msgURL. If empty, all messages will be disabled.
+
+```protobuf
+ / TripCircuitBreaker pauses processing of Msg's in the state machine.
+ rpc TripCircuitBreaker(MsgTripCircuitBreaker) returns (MsgTripCircuitBreakerResponse);
+```
+
+### Reset
+
+Reset is called by an authorized account to re-enable execution for a specific msgURL of a previously disabled message. If empty, all disabled messages will be enabled.
+
+```protobuf
+ / ResetCircuitBreaker resumes processing of Msg's in the state machine that
+ / have been paused using TripCircuitBreaker.
+ rpc ResetCircuitBreaker(MsgResetCircuitBreaker) returns (MsgResetCircuitBreakerResponse);
+```
+
+## Messages
+
+### MsgAuthorizeCircuitBreaker
+
+```protobuf
+/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/contrib/proto/cosmos/circuit/v1/tx.proto#L25-L75
+```
+
+This message is expected to fail if:
+
+- the granter is not an account with permission level `LEVEL_SUPER_ADMIN` or the module authority
+
+### MsgTripCircuitBreaker
+
+```protobuf
+/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/contrib/proto/cosmos/circuit/v1/tx.proto#L77-L93
+```
+
+This message is expected to fail if:
+
+- the signer does not have a permission level with the ability to disable the specified type URL message
+
+### MsgResetCircuitBreaker
+
+```protobuf
+/ Reference: https://github.com/cosmos/cosmos-sdk/blob/main/contrib/proto/cosmos/circuit/v1/tx.proto#L95-L109
+```
+
+This message is expected to fail if:
+
+- the type URL is not disabled
+
+## Events
+
+The circuit module emits the following events:
+
+### Message Events
+
+#### MsgAuthorizeCircuitBreaker
+
+| Type    | Attribute Key | Attribute Value           |
+| ------- | ------------- | ------------------------- |
+| string  | granter       | `{granterAddress}`        |
+| string  | grantee       | `{granteeAddress}`        |
+| string  | permission    | `{granteePermissions}`    |
+| message | module        | circuit                   |
+| message | action        | authorize_circuit_breaker |
+
+#### MsgTripCircuitBreaker
+
+| Type      | Attribute Key | Attribute Value        |
+| --------- | ------------- | ---------------------- |
+| string    | authority     | `{authorityAddress}`   |
+| \[]string | msg_urls      | \[]string`{msg\_urls}` |
+| message   | module        | circuit                |
+| message   | action        | trip_circuit_breaker   |
+
+#### MsgResetCircuitBreaker
+
+| Type      | Attribute Key | Attribute Value        |
+| --------- | ------------- | ---------------------- |
+| string    | authority     | `{authorityAddress}`   |
+| \[]string | msg_urls      | \[]string`{msg\_urls}` |
+| message   | module        | circuit                |
+| message   | action        | reset_circuit_breaker  |
+
+## Keys
+
+The circuit module uses the following key prefixes:
+
+- `AccountPermissionPrefix` - `0x01`
+- `DisableListPrefix` - `0x02`
+
+## Client
+
+## Examples: Using Circuit Breaker CLI Commands
+
+This section provides practical examples for using the Circuit Breaker module through the command-line interface (CLI). These examples demonstrate how to authorize accounts, disable (trip) specific message types, and re-enable (reset) them when needed.
+
+### Querying Circuit Breaker Permissions
+
+Check an account's current circuit breaker permissions:
+
+```bash
+# Query permissions for a specific account
+ query circuit account-permissions 
+
+# Example:
+simd query circuit account-permissions cosmos1...
+```
+
+Check which message types are currently disabled:
+
+```bash
+# Query all disabled message types
+ query circuit disabled-list
+
+# Example:
+simd query circuit disabled-list
+```
+
+### Authorizing an Account as Circuit Breaker
+
+Only a super-admin or the module authority (typically the governance module account) can grant circuit breaker permissions to other accounts:
+
+```bash
+# Grant LEVEL_ALL_MSGS permission (can disable any message type)
+ tx circuit authorize --level=ALL_MSGS --from= --gas=auto --gas-adjustment=1.5
+
+# Grant LEVEL_SOME_MSGS permission (can only disable specific message types)
+ tx circuit authorize --level=SOME_MSGS --limit-type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from= --gas=auto --gas-adjustment=1.5
+
+# Grant LEVEL_SUPER_ADMIN permission (can disable messages and authorize other accounts)
+ tx circuit authorize --level=SUPER_ADMIN --from= --gas=auto --gas-adjustment=1.5
+```
+
+### Disabling Message Processing (Trip)
+
+Disable specific message types to prevent their execution (requires authorization):
+
+```bash
+# Disable a single message type
+ tx circuit trip --type-urls="/cosmos.bank.v1beta1.MsgSend" --from= --gas=auto --gas-adjustment=1.5
+
+# Disable multiple message types
+ tx circuit trip --type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from= --gas=auto --gas-adjustment=1.5
+
+# Disable all message types (emergency measure)
+ tx circuit trip --from= --gas=auto --gas-adjustment=1.5
+```
+
+### Re-enabling Message Processing (Reset)
+
+Re-enable previously disabled message types (requires authorization):
+
+```bash
+# Re-enable a single message type
+ tx circuit reset --type-urls="/cosmos.bank.v1beta1.MsgSend" --from= --gas=auto --gas-adjustment=1.5
+
+# Re-enable multiple message types
+ tx circuit reset --type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from= --gas=auto --gas-adjustment=1.5
+
+# Re-enable all disabled message types
+ tx circuit reset --from= --gas=auto --gas-adjustment=1.5
+```
+
+### Usage in Emergency Scenarios
+
+In case of a critical vulnerability in a specific message type:
+
+1. Quickly disable the vulnerable message type:
+
+   ```bash
+    tx circuit trip --type-urls="/cosmos.vulnerable.v1beta1.MsgVulnerable" --from= --gas=auto --gas-adjustment=1.5
+   ```
+
+2. After a fix is deployed, re-enable the message type:
+   ```bash
+    tx circuit reset --type-urls="/cosmos.vulnerable.v1beta1.MsgVulnerable" --from= --gas=auto --gas-adjustment=1.5
+   ```
+
+This allows chains to surgically disable problematic functionality without halting the entire chain, providing time for developers to implement and deploy fixes.
diff --git a/docs/sdk/v0.53/documentation/module-system/consensus.mdx b/docs/sdk/v0.53/documentation/module-system/consensus.mdx
new file mode 100644
index 00000000..0a377fe4
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/consensus.mdx
@@ -0,0 +1,6 @@
+---
+title: '`x/consensus`'
+description: Functionality to modify CometBFT's ABCI consensus params.
+---
+
+Functionality to modify CometBFT's ABCI consensus params.
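To make the one-line description above more concrete, here is a small, self-contained Go sketch of the kind of validation that applies when consensus block parameters are updated. It deliberately does not use the real SDK or CometBFT types: the `BlockParams` struct, its fields, and the `validate` rules below are simplified assumptions for illustration only — in practice, updates flow through the consensus module's governance-gated update message and CometBFT's own parameter validation.

```go
package main

import (
	"errors"
	"fmt"
)

// BlockParams loosely mirrors the shape of CometBFT's block-level consensus
// parameters. Illustrative only; not the real CometBFT type.
type BlockParams struct {
	MaxBytes int64 // maximum size of a block, in bytes
	MaxGas   int64 // maximum gas per block; -1 means unlimited
}

// validate applies basic sanity checks before new params would be written to
// state. The exact rules here are assumptions for illustration.
func (p BlockParams) validate() error {
	if p.MaxBytes <= 0 {
		return errors.New("block max bytes must be positive")
	}
	if p.MaxGas < -1 {
		return errors.New("block max gas must be -1 (unlimited) or non-negative")
	}
	return nil
}

func main() {
	// A parameter-update proposal would be validated like this before the
	// module's keeper persists the new values.
	updated := BlockParams{MaxBytes: 22020096, MaxGas: 10_000_000}
	if err := updated.validate(); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("params accepted")
}
```

The point of the sketch is only that parameter changes are validated and applied through module state rather than by editing node configuration files.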
diff --git a/docs/sdk/v0.53/documentation/module-system/crisis.mdx b/docs/sdk/v0.53/documentation/module-system/crisis.mdx
new file mode 100644
index 00000000..7dfa0b59
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/crisis.mdx
@@ -0,0 +1,129 @@
+---
+title: '`x/crisis`'
+description: >-
+  NOTE: x/crisis is deprecated as of Cosmos SDK v0.53 and will be removed in the
+  next release.
+---
+
+NOTE: `x/crisis` is deprecated as of Cosmos SDK v0.53 and will be removed in the next release.
+
+## Overview
+
+The crisis module halts the blockchain under the circumstance that a blockchain
+invariant is broken. Invariants can be registered with the application during the
+application initialization process.
+
+## Contents
+
+* [State](#state)
+* [Messages](#messages)
+* [Events](#events)
+* [Parameters](#parameters)
+* [Client](#client)
+  * [CLI](#cli)
+
+## State
+
+### ConstantFee
+
+Due to the anticipated large gas cost requirement to verify an invariant (and
+potential to exceed the maximum allowable block gas limit) a constant fee is
+used instead of the standard gas consumption method. The constant fee is
+intended to be larger than the anticipated gas cost of running the invariant
+with the standard gas consumption method.
+
+The ConstantFee param is stored in the module params state with the prefix of `0x01`;
+it can be updated via governance or by the address with authority.
+
+* Params: `crisis/params -> legacy_amino(sdk.Coin)`
+
+## Messages
+
+In this section we describe the processing of the crisis messages and the
+corresponding updates to the state.
+
+### MsgVerifyInvariant
+
+Blockchain invariants can be checked using the `MsgVerifyInvariant` message.
+
+```protobuf
+// MsgVerifyInvariant represents a message to verify a particular invariance.
+message MsgVerifyInvariant {
+  option (cosmos.msg.v1.signer) = "sender";
+  option (amino.name) = "cosmos-sdk/MsgVerifyInvariant";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  // sender is the account address of private key to send coins to fee collector account.
+  string sender = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // name of the invariant module.
+  string invariant_module_name = 2;
+
+  // invariant_route is the msg's invariant route.
+  string invariant_route = 3;
+}
+```
+
+This message is expected to fail if:
+
+* the sender does not have enough coins for the constant fee
+* the invariant route is not registered
+
+This message checks the invariant provided, and if the invariant is broken it
+panics, halting the blockchain. If the invariant is broken, the constant fee is
+never deducted as the transaction is never committed to a block (equivalent to
+being refunded). However, if the invariant is not broken, the constant fee will
+not be refunded.
+
+## Events
+
+The crisis module emits the following events:
+
+### Handlers
+
+#### MsgVerifyInvariant
+
+| Type      | Attribute Key | Attribute Value    |
+| --------- | ------------- | ------------------ |
+| invariant | route         | `{invariantRoute}` |
+| message   | module        | crisis             |
+| message   | action        | verify\_invariant  |
+| message   | sender        | `{senderAddress}`  |
+
+## Parameters
+
+The crisis module contains the following parameters:
+
+| Key         | Type          | Example                             |
+| ----------- | ------------- | ----------------------------------- |
+| ConstantFee | object (coin) | `{"denom":"uatom","amount":"1000"}` |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `crisis` module using the CLI.
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `crisis` module.
+
+```bash
+simd tx crisis --help
+```
+
+##### invariant-broken
+
+The `invariant-broken` command submits proof that an invariant was broken, in order to halt the chain.
+
+```bash
+simd tx crisis invariant-broken [module-name] [invariant-route] [flags]
+```
+
+Example:
+
+```bash
+simd tx crisis invariant-broken bank total-supply --from=[keyname or address]
+```
diff --git a/docs/sdk/v0.53/documentation/module-system/depinject.mdx b/docs/sdk/v0.53/documentation/module-system/depinject.mdx
new file mode 100644
index 00000000..c43931d2
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/depinject.mdx
@@ -0,0 +1,3519 @@
+---
+title: Modules depinject-ready
+---
+
+
+**Pre-requisite Readings**
+
+* [Depinject Documentation](/docs/sdk/v0.53/documentation/module-system/depinject)
+
+
+
+[`depinject`](/docs/sdk/v0.53/documentation/module-system/depinject) is used to wire any module in `app.go`.
+All core modules are already configured to support dependency injection.
+
+To work with `depinject`, a module must define its configuration and requirements so that `depinject` can provide the right dependencies.
+
+In brief, as a module developer, the following steps are required:
+
+1. Define the module configuration using Protobuf
+2. Define the module dependencies in `x/{moduleName}/module.go`
+
+A chain developer can then use the module by following these two steps:
+
+1. Configure the module in `app_config.go` or `app.yaml`
+2. Inject the module in `app.go`
+
+## Module Configuration
+
+A module's available configuration is defined in a Protobuf file, located at `{moduleName}/module/v1/module.proto`.
+
+```protobuf
+syntax = "proto3";
+
+package cosmos.group.module.v1;
+
+import "cosmos/app/v1alpha1/module.proto";
+import "gogoproto/gogo.proto";
+import "google/protobuf/duration.proto";
+import "amino/amino.proto";
+
+// Module is the config object of the group module.
+message Module {
+  option (cosmos.app.v1alpha1.module) = {
+    go_import: "github.com/cosmos/cosmos-sdk/x/group"
+  };
+
+  // max_execution_period defines the max duration after a proposal's voting period ends that members can send a MsgExec
+  // to execute the proposal.
+  google.protobuf.Duration max_execution_period = 1
+      [(gogoproto.stdduration) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // max_metadata_len defines the max length of the metadata bytes field for various entities within the group module.
+  // Defaults to 255 if not explicitly set.
+  uint64 max_metadata_len = 2;
+}
+
+```
+
+* `go_import` must point to the Go package of the custom module.
+* Message fields define the module configuration.
+  That configuration can be set in the `app_config.go` / `app.yaml` file, allowing a chain developer to configure the module.
+  Taking `group` as an example, a chain developer can decide, thanks to `uint64 max_metadata_len`, what the maximum metadata length allowed for a group proposal is.
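For instance, a module enforcing such a limit performs a length check against the configured value. The following standalone sketch illustrates the idea; `assertMetadataLength` is a hypothetical helper written for this example, not the SDK's actual keeper code:

```go
package main

import "fmt"

// assertMetadataLength mirrors the kind of check a module performs against
// the configured max_metadata_len (hypothetical helper for illustration).
func assertMetadataLength(metadata string, maxMetadataLen uint64) error {
	if uint64(len(metadata)) > maxMetadataLen {
		return fmt.Errorf("metadata length %d exceeds configured max_metadata_len %d",
			len(metadata), maxMetadataLen)
	}
	return nil
}

func main() {
	// With the default max_metadata_len of 255, short metadata passes...
	fmt.Println(assertMetadataLength("ipfs://Qm...", 255))
	// ...while oversized metadata is rejected with an error.
	fmt.Println(assertMetadataLength(string(make([]byte, 300)), 255))
}
```

Because the limit is read from module configuration rather than hard-coded, each chain can tune it in `app_config.go` / `app.yaml` without forking the module.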
+ + ```go expandable + package simapp + + import ( + + "time" + "google.golang.org/protobuf/types/known/durationpb" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + authzmodulev1 "cosmossdk.io/api/cosmos/authz/module/v1" + bankmodulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + circuitmodulev1 "cosmossdk.io/api/cosmos/circuit/module/v1" + consensusmodulev1 "cosmossdk.io/api/cosmos/consensus/module/v1" + crisismodulev1 "cosmossdk.io/api/cosmos/crisis/module/v1" + distrmodulev1 "cosmossdk.io/api/cosmos/distribution/module/v1" + evidencemodulev1 "cosmossdk.io/api/cosmos/evidence/module/v1" + feegrantmodulev1 "cosmossdk.io/api/cosmos/feegrant/module/v1" + genutilmodulev1 "cosmossdk.io/api/cosmos/genutil/module/v1" + govmodulev1 "cosmossdk.io/api/cosmos/gov/module/v1" + groupmodulev1 "cosmossdk.io/api/cosmos/group/module/v1" + mintmodulev1 "cosmossdk.io/api/cosmos/mint/module/v1" + nftmodulev1 "cosmossdk.io/api/cosmos/nft/module/v1" + paramsmodulev1 "cosmossdk.io/api/cosmos/params/module/v1" + slashingmodulev1 "cosmossdk.io/api/cosmos/slashing/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + txconfigv1 "cosmossdk.io/api/cosmos/tx/config/v1" + upgrademodulev1 "cosmossdk.io/api/cosmos/upgrade/module/v1" + vestingmodulev1 "cosmossdk.io/api/cosmos/vesting/module/v1" + "cosmossdk.io/depinject" + + _ "cosmossdk.io/x/circuit" / import for side-effects + _ "cosmossdk.io/x/evidence" / import for side-effects + _ "cosmossdk.io/x/feegrant/module" / import for side-effects + _ "cosmossdk.io/x/nft/module" / import for side-effects + _ "cosmossdk.io/x/upgrade" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/auth/vesting" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/authz/module" / import for side-effects + _ 
"github.com/cosmos/cosmos-sdk/x/bank" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/consensus" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/crisis" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/distribution" / import for side-effects + "github.com/cosmos/cosmos-sdk/x/genutil" + "github.com/cosmos/cosmos-sdk/x/gov" + _ "github.com/cosmos/cosmos-sdk/x/group/module" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/mint" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/params" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/slashing" / import for side-effects + _ "github.com/cosmos/cosmos-sdk/x/staking" / import for side-effects + + "cosmossdk.io/core/appconfig" + circuittypes "cosmossdk.io/x/circuit/types" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + "cosmossdk.io/x/nft" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/types/module" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensustypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + crisistypes "github.com/cosmos/cosmos-sdk/x/crisis/types" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + "github.com/cosmos/cosmos-sdk/x/group" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + paramsclient "github.com/cosmos/cosmos-sdk/x/params/client" + paramstypes "github.com/cosmos/cosmos-sdk/x/params/types" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + ) + + var ( + / module account 
permissions + moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + { + Account: authtypes.FeeCollectorName + }, + { + Account: distrtypes.ModuleName + }, + { + Account: minttypes.ModuleName, + Permissions: []string{ + authtypes.Minter + }}, + { + Account: stakingtypes.BondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: stakingtypes.NotBondedPoolName, + Permissions: []string{ + authtypes.Burner, stakingtypes.ModuleName + }}, + { + Account: govtypes.ModuleName, + Permissions: []string{ + authtypes.Burner + }}, + { + Account: nft.ModuleName + }, + } + + / blocked account addresses + blockAccAddrs = []string{ + authtypes.FeeCollectorName, + distrtypes.ModuleName, + minttypes.ModuleName, + stakingtypes.BondedPoolName, + stakingtypes.NotBondedPoolName, + nft.ModuleName, + / We allow the following module accounts to receive funds: + / govtypes.ModuleName + } + + / application configuration (used by depinject) + + AppConfig = depinject.Configs(appconfig.Compose(&appv1alpha1.Config{ + Modules: []*appv1alpha1.ModuleConfig{ + { + Name: runtime.ModuleName, + Config: appconfig.WrapAny(&runtimev1alpha1.Module{ + AppName: "SimApp", + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. 
+ / NOTE: staking module is required if HistoricalEntries param > 0 + BeginBlockers: []string{ + upgradetypes.ModuleName, + minttypes.ModuleName, + distrtypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + }, + EndBlockers: []string{ + crisistypes.ModuleName, + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + }, + OverrideStoreKeys: []*runtimev1alpha1.StoreKeyConfig{ + { + ModuleName: authtypes.ModuleName, + KvStoreKey: "acc", + }, + }, + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. + InitGenesis: []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + crisistypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + paramstypes.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensustypes.ModuleName, + circuittypes.ModuleName, + }, + / When ExportGenesis is not specified, the export genesis module order + / is equal to the init genesis order + / ExportGenesis: []string{ + }, + / Uncomment if you want to set a custom migration order here. + / OrderMigrations: []string{ + }, + }), + }, + { + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + / By default modules authority is the governance module. 
This is configurable with the following: + / Authority: "group", / A custom module authority can be set using a module name + / Authority: "cosmos1cwwv22j5ca08ggdv9c2uky355k908694z577tv", / or a specific address + }), + }, + { + Name: vestingtypes.ModuleName, + Config: appconfig.WrapAny(&vestingmodulev1.Module{ + }), + }, + { + Name: banktypes.ModuleName, + Config: appconfig.WrapAny(&bankmodulev1.Module{ + BlockedModuleAccountsOverride: blockAccAddrs, + }), + }, + { + Name: stakingtypes.ModuleName, + Config: appconfig.WrapAny(&stakingmodulev1.Module{ + }), + }, + { + Name: slashingtypes.ModuleName, + Config: appconfig.WrapAny(&slashingmodulev1.Module{ + }), + }, + { + Name: paramstypes.ModuleName, + Config: appconfig.WrapAny(¶msmodulev1.Module{ + }), + }, + { + Name: "tx", + Config: appconfig.WrapAny(&txconfigv1.Config{ + }), + }, + { + Name: genutiltypes.ModuleName, + Config: appconfig.WrapAny(&genutilmodulev1.Module{ + }), + }, + { + Name: authz.ModuleName, + Config: appconfig.WrapAny(&authzmodulev1.Module{ + }), + }, + { + Name: upgradetypes.ModuleName, + Config: appconfig.WrapAny(&upgrademodulev1.Module{ + }), + }, + { + Name: distrtypes.ModuleName, + Config: appconfig.WrapAny(&distrmodulev1.Module{ + }), + }, + { + Name: evidencetypes.ModuleName, + Config: appconfig.WrapAny(&evidencemodulev1.Module{ + }), + }, + { + Name: minttypes.ModuleName, + Config: appconfig.WrapAny(&mintmodulev1.Module{ + }), + }, + { + Name: group.ModuleName, + Config: appconfig.WrapAny(&groupmodulev1.Module{ + MaxExecutionPeriod: durationpb.New(time.Second * 1209600), + MaxMetadataLen: 255, + }), + }, + { + Name: nft.ModuleName, + Config: appconfig.WrapAny(&nftmodulev1.Module{ + }), + }, + { + Name: feegrant.ModuleName, + Config: appconfig.WrapAny(&feegrantmodulev1.Module{ + }), + }, + { + Name: govtypes.ModuleName, + Config: appconfig.WrapAny(&govmodulev1.Module{ + }), + }, + { + Name: crisistypes.ModuleName, + Config: appconfig.WrapAny(&crisismodulev1.Module{ + }), + }, + { + Name: 
consensustypes.ModuleName, + Config: appconfig.WrapAny(&consensusmodulev1.Module{ + }), + }, + { + Name: circuittypes.ModuleName, + Config: appconfig.WrapAny(&circuitmodulev1.Module{ + }), + }, + }, + }), + depinject.Supply( + / supply custom module basics + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ + paramsclient.ProposalHandler, + }, + ), + }, + )) + ) + ``` + +That message is generated using [`pulsar`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/scripts/protocgen-pulsar.sh) (by running `make proto-gen`). +In the case of the `group` module, this file is generated here: [Link](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/api/cosmos/group/module/v1/module.pulsar.go). + +The part that is relevant for the module configuration is: + +```go expandable +/ Code generated by protoc-gen-go-pulsar. DO NOT EDIT. 
+package modulev1 + +import ( + + _ "cosmossdk.io/api/amino" + _ "cosmossdk.io/api/cosmos/app/v1alpha1" + fmt "fmt" + runtime "github.com/cosmos/cosmos-proto/runtime" + _ "github.com/cosmos/gogoproto/gogoproto" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoiface "google.golang.org/protobuf/runtime/protoiface" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + durationpb "google.golang.org/protobuf/types/known/durationpb" + io "io" + reflect "reflect" + sync "sync" +) + +var ( + md_Module protoreflect.MessageDescriptor + fd_Module_max_execution_period protoreflect.FieldDescriptor + fd_Module_max_metadata_len protoreflect.FieldDescriptor +) + +func init() { + file_cosmos_group_module_v1_module_proto_init() + +md_Module = File_cosmos_group_module_v1_module_proto.Messages().ByName("Module") + +fd_Module_max_execution_period = md_Module.Fields().ByName("max_execution_period") + +fd_Module_max_metadata_len = md_Module.Fields().ByName("max_metadata_len") +} + +var _ protoreflect.Message = (*fastReflection_Module)(nil) + +type fastReflection_Module Module + +func (x *Module) + +ProtoReflect() + +protoreflect.Message { + return (*fastReflection_Module)(x) +} + +func (x *Module) + +slowProtoReflect() + +protoreflect.Message { + mi := &file_cosmos_group_module_v1_module_proto_msgTypes[0] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) +} + +return ms +} + +return mi.MessageOf(x) +} + +var _fastReflection_Module_messageType fastReflection_Module_messageType +var _ protoreflect.MessageType = fastReflection_Module_messageType{ +} + +type fastReflection_Module_messageType struct{ +} + +func (x fastReflection_Module_messageType) + +Zero() + +protoreflect.Message { + return (*fastReflection_Module)(nil) +} + +func (x fastReflection_Module_messageType) + +New() + +protoreflect.Message { + return new(fastReflection_Module) +} + +func (x 
fastReflection_Module_messageType) + +Descriptor() + +protoreflect.MessageDescriptor { + return md_Module +} + +/ Descriptor returns message descriptor, which contains only the protobuf +/ type information for the message. +func (x *fastReflection_Module) + +Descriptor() + +protoreflect.MessageDescriptor { + return md_Module +} + +/ Type returns the message type, which encapsulates both Go and protobuf +/ type information. If the Go type information is not needed, +/ it is recommended that the message descriptor be used instead. +func (x *fastReflection_Module) + +Type() + +protoreflect.MessageType { + return _fastReflection_Module_messageType +} + +/ New returns a newly allocated and mutable empty message. +func (x *fastReflection_Module) + +New() + +protoreflect.Message { + return new(fastReflection_Module) +} + +/ Interface unwraps the message reflection interface and +/ returns the underlying ProtoMessage interface. +func (x *fastReflection_Module) + +Interface() + +protoreflect.ProtoMessage { + return (*Module)(x) +} + +/ Range iterates over every populated field in an undefined order, +/ calling f for each field descriptor and value encountered. +/ Range returns immediately if f returns false. +/ While iterating, mutating operations may only be performed +/ on the current field descriptor. +func (x *fastReflection_Module) + +Range(f func(protoreflect.FieldDescriptor, protoreflect.Value) + +bool) { + if x.MaxExecutionPeriod != nil { + value := protoreflect.ValueOfMessage(x.MaxExecutionPeriod.ProtoReflect()) + if !f(fd_Module_max_execution_period, value) { + return +} + +} + if x.MaxMetadataLen != uint64(0) { + value := protoreflect.ValueOfUint64(x.MaxMetadataLen) + if !f(fd_Module_max_metadata_len, value) { + return +} + +} +} + +/ Has reports whether a field is populated. 
+/ +/ Some fields have the property of nullability where it is possible to +/ distinguish between the default value of a field and whether the field +/ was explicitly populated with the default value. Singular message fields, +/ member fields of a oneof, and proto2 scalar fields are nullable. Such +/ fields are populated only if explicitly set. +/ +/ In other cases (aside from the nullable cases above), +/ a proto3 scalar field is populated if it contains a non-zero value, and +/ a repeated field is populated if it is non-empty. +func (x *fastReflection_Module) + +Has(fd protoreflect.FieldDescriptor) + +bool { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + return x.MaxExecutionPeriod != nil + case "cosmos.group.module.v1.Module.max_metadata_len": + return x.MaxMetadataLen != uint64(0) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Clear clears the field such that a subsequent Has call reports false. +/ +/ Clearing an extension field clears both the extension type and value +/ associated with the given field number. +/ +/ Clear is a mutating operation and unsafe for concurrent use. +func (x *fastReflection_Module) + +Clear(fd protoreflect.FieldDescriptor) { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + x.MaxExecutionPeriod = nil + case "cosmos.group.module.v1.Module.max_metadata_len": + x.MaxMetadataLen = uint64(0) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Get retrieves the value for a field. 
+/ +/ For unpopulated scalars, it returns the default value, where +/ the default value of a bytes scalar is guaranteed to be a copy. +/ For unpopulated composite types, it returns an empty, read-only view +/ of the value; to obtain a mutable reference, use Mutable. +func (x *fastReflection_Module) + +Get(descriptor protoreflect.FieldDescriptor) + +protoreflect.Value { + switch descriptor.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + value := x.MaxExecutionPeriod + return protoreflect.ValueOfMessage(value.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + value := x.MaxMetadataLen + return protoreflect.ValueOfUint64(value) + +default: + if descriptor.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", descriptor.FullName())) +} +} + +/ Set stores the value for a field. +/ +/ For a field belonging to a oneof, it implicitly clears any other field +/ that may be currently set within the same oneof. +/ For extension fields, it implicitly stores the provided ExtensionType. +/ When setting a composite type, it is unspecified whether the stored value +/ aliases the source's memory in any way. If the composite value is an +/ empty, read-only value, then it panics. +/ +/ Set is a mutating operation and unsafe for concurrent use. 
+func (x *fastReflection_Module) + +Set(fd protoreflect.FieldDescriptor, value protoreflect.Value) { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + x.MaxExecutionPeriod = value.Message().Interface().(*durationpb.Duration) + case "cosmos.group.module.v1.Module.max_metadata_len": + x.MaxMetadataLen = value.Uint() + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ Mutable returns a mutable reference to a composite type. +/ +/ If the field is unpopulated, it may allocate a composite value. +/ For a field belonging to a oneof, it implicitly clears any other field +/ that may be currently set within the same oneof. +/ For extension fields, it implicitly stores the provided ExtensionType +/ if not already stored. +/ It panics if the field does not contain a composite type. +/ +/ Mutable is a mutating operation and unsafe for concurrent use. +func (x *fastReflection_Module) + +Mutable(fd protoreflect.FieldDescriptor) + +protoreflect.Value { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + if x.MaxExecutionPeriod == nil { + x.MaxExecutionPeriod = new(durationpb.Duration) +} + +return protoreflect.ValueOfMessage(x.MaxExecutionPeriod.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + panic(fmt.Errorf("field max_metadata_len of message cosmos.group.module.v1.Module is not mutable")) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ NewField returns a new value that is assignable to the field +/ for the given descriptor. For scalars, this returns the default value. 
+/ For lists, maps, and messages, this returns a new, empty, mutable value. +func (x *fastReflection_Module) + +NewField(fd protoreflect.FieldDescriptor) + +protoreflect.Value { + switch fd.FullName() { + case "cosmos.group.module.v1.Module.max_execution_period": + m := new(durationpb.Duration) + +return protoreflect.ValueOfMessage(m.ProtoReflect()) + case "cosmos.group.module.v1.Module.max_metadata_len": + return protoreflect.ValueOfUint64(uint64(0)) + +default: + if fd.IsExtension() { + panic(fmt.Errorf("proto3 declared messages do not support extensions: cosmos.group.module.v1.Module")) +} + +panic(fmt.Errorf("message cosmos.group.module.v1.Module does not contain field %s", fd.FullName())) +} +} + +/ WhichOneof reports which field within the oneof is populated, +/ returning nil if none are populated. +/ It panics if the oneof descriptor does not belong to this message. +func (x *fastReflection_Module) + +WhichOneof(d protoreflect.OneofDescriptor) + +protoreflect.FieldDescriptor { + switch d.FullName() { + default: + panic(fmt.Errorf("%s is not a oneof field in cosmos.group.module.v1.Module", d.FullName())) +} + +panic("unreachable") +} + +/ GetUnknown retrieves the entire list of unknown fields. +/ The caller may only mutate the contents of the RawFields +/ if the mutated bytes are stored back into the message with SetUnknown. +func (x *fastReflection_Module) + +GetUnknown() + +protoreflect.RawFields { + return x.unknownFields +} + +/ SetUnknown stores an entire list of unknown fields. +/ The raw fields must be syntactically valid according to the wire format. +/ An implementation may panic if this is not the case. +/ Once stored, the caller must not mutate the content of the RawFields. +/ An empty RawFields may be passed to clear the fields. +/ +/ SetUnknown is a mutating operation and unsafe for concurrent use. 
+func (x *fastReflection_Module) + +SetUnknown(fields protoreflect.RawFields) { + x.unknownFields = fields +} + +/ IsValid reports whether the message is valid. +/ +/ An invalid message is an empty, read-only value. +/ +/ An invalid message often corresponds to a nil pointer of the concrete +/ message type, but the details are implementation dependent. +/ Validity is not part of the protobuf data model, and may not +/ be preserved in marshaling or other operations. +func (x *fastReflection_Module) + +IsValid() + +bool { + return x != nil +} + +/ ProtoMethods returns optional fastReflectionFeature-path implementations of various operations. +/ This method may return nil. +/ +/ The returned methods type is identical to +/ "google.golang.org/protobuf/runtime/protoiface".Methods. +/ Consult the protoiface package documentation for details. +func (x *fastReflection_Module) + +ProtoMethods() *protoiface.Methods { + size := func(input protoiface.SizeInput) + +protoiface.SizeOutput { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.SizeOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Size: 0, +} + +} + options := runtime.SizeInputToOptions(input) + _ = options + var n int + var l int + _ = l + if x.MaxExecutionPeriod != nil { + l = options.Size(x.MaxExecutionPeriod) + +n += 1 + l + runtime.Sov(uint64(l)) +} + if x.MaxMetadataLen != 0 { + n += 1 + runtime.Sov(uint64(x.MaxMetadataLen)) +} + if x.unknownFields != nil { + n += len(x.unknownFields) +} + +return protoiface.SizeOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Size: n, +} + +} + marshal := func(input protoiface.MarshalInput) (protoiface.MarshalOutput, error) { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, nil +} + options := runtime.MarshalInputToOptions(input) + _ = options + size := options.Size(x) + dAtA := make([]byte, size) + i := len(dAtA) + _ = i + 
var l int + _ = l + if x.unknownFields != nil { + i -= len(x.unknownFields) + +copy(dAtA[i:], x.unknownFields) +} + if x.MaxMetadataLen != 0 { + i = runtime.EncodeVarint(dAtA, i, uint64(x.MaxMetadataLen)) + +i-- + dAtA[i] = 0x10 +} + if x.MaxExecutionPeriod != nil { + encoded, err := options.Marshal(x.MaxExecutionPeriod) + if err != nil { + return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, err +} + +i -= len(encoded) + +copy(dAtA[i:], encoded) + +i = runtime.EncodeVarint(dAtA, i, uint64(len(encoded))) + +i-- + dAtA[i] = 0xa +} + if input.Buf != nil { + input.Buf = append(input.Buf, dAtA...) +} + +else { + input.Buf = dAtA +} + +return protoiface.MarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Buf: input.Buf, +}, nil +} + unmarshal := func(input protoiface.UnmarshalInput) (protoiface.UnmarshalOutput, error) { + x := input.Message.Interface().(*Module) + if x == nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags, +}, nil +} + options := runtime.UnmarshalInputToOptions(input) + _ = options + dAtA := input.Buf + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: Module: wiretype end group for non-group") +} + if fieldNum <= 0 { + return protoiface.UnmarshalOutput{ + 
NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: Module: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: wrong wireType = %d for field MaxExecutionPeriod", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + if postIndex > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + if x.MaxExecutionPeriod == nil { + x.MaxExecutionPeriod = &durationpb.Duration{ +} + +} + if err := options.Unmarshal(dAtA[iNdEx:postIndex], x.MaxExecutionPeriod); err != nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, err +} + +iNdEx = postIndex + case 2: + if wireType != 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, fmt.Errorf("proto: wrong wireType = %d for field MaxMetadataLen", wireType) +} + +x.MaxMetadataLen = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return protoiface.UnmarshalOutput{ + 
NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrIntOverflow +} + if iNdEx >= l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + x.MaxMetadataLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + +default: + iNdEx = preIndex + skippy, err := runtime.Skip(dAtA[iNdEx:]) + if err != nil { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, runtime.ErrInvalidLength +} + if (iNdEx + skippy) > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + if !options.DiscardUnknown { + x.unknownFields = append(x.unknownFields, dAtA[iNdEx:iNdEx+skippy]...) +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, io.ErrUnexpectedEOF +} + +return protoiface.UnmarshalOutput{ + NoUnkeyedLiterals: input.NoUnkeyedLiterals, + Flags: input.Flags +}, nil +} + +return &protoiface.Methods{ + NoUnkeyedLiterals: struct{ +}{ +}, + Flags: protoiface.SupportMarshalDeterministic | protoiface.SupportUnmarshalDiscardUnknown, + Size: size, + Marshal: marshal, + Unmarshal: unmarshal, + Merge: nil, + CheckInitialized: nil, +} +} + +/ Code generated by protoc-gen-go. DO NOT EDIT. +/ versions: +/ protoc-gen-go v1.27.0 +/ protoc (unknown) +/ source: cosmos/group/module/v1/module.proto + +const ( + / Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + / Verify that runtime/protoimpl is sufficiently up-to-date. 
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +/ Module is the config object of the group module. +type Module struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + / max_execution_period defines the max duration after a proposal's voting period ends that members can send a MsgExec + / to execute the proposal. + MaxExecutionPeriod *durationpb.Duration `protobuf:"bytes,1,opt,name=max_execution_period,json=maxExecutionPeriod,proto3" json:"max_execution_period,omitempty"` + / max_metadata_len defines the max length of the metadata bytes field for various entities within the group module. + / Defaults to 255 if not explicitly set. + MaxMetadataLen uint64 `protobuf:"varint,2,opt,name=max_metadata_len,json=maxMetadataLen,proto3" json:"max_metadata_len,omitempty"` +} + +func (x *Module) + +Reset() { + *x = Module{ +} + if protoimpl.UnsafeEnabled { + mi := &file_cosmos_group_module_v1_module_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + +ms.StoreMessageInfo(mi) +} +} + +func (x *Module) + +String() + +string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Module) + +ProtoMessage() { +} + +/ Deprecated: Use Module.ProtoReflect.Descriptor instead. 
+func (*Module) + +Descriptor() ([]byte, []int) { + return file_cosmos_group_module_v1_module_proto_rawDescGZIP(), []int{0 +} +} + +func (x *Module) + +GetMaxExecutionPeriod() *durationpb.Duration { + if x != nil { + return x.MaxExecutionPeriod +} + +return nil +} + +func (x *Module) + +GetMaxMetadataLen() + +uint64 { + if x != nil { + return x.MaxMetadataLen +} + +return 0 +} + +var File_cosmos_group_module_v1_module_proto protoreflect.FileDescriptor + +var file_cosmos_group_module_v1_module_proto_rawDesc = []byte{ + 0x0a, 0x23, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x2f, 0x6d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x16, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x67, 0x72, + 0x6f, 0x75, 0x70, 0x2e, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x1a, 0x20, 0x63, + 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x61, 0x70, 0x70, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, + 0x61, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, + 0x14, 0x67, 0x6f, 0x67, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x67, 0x6f, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x11, 0x61, 0x6d, 0x69, 0x6e, 0x6f, 0x2f, 0x61, 0x6d, 0x69, + 0x6e, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xbc, 0x01, 0x0a, 0x06, 0x4d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x12, 0x5a, 0x0a, 0x14, 0x6d, 0x61, 0x78, 0x5f, 0x65, 0x78, 0x65, 0x63, 0x75, + 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x0d, 0xc8, 0xde, + 0x1f, 
0x00, 0x98, 0xdf, 0x1f, 0x01, 0xa8, 0xe7, 0xb0, 0x2a, 0x01, 0x52, 0x12, 0x6d, 0x61, 0x78, + 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x12, + 0x28, 0x0a, 0x10, 0x6d, 0x61, 0x78, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x5f, + 0x6c, 0x65, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0e, 0x6d, 0x61, 0x78, 0x4d, 0x65, + 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x4c, 0x65, 0x6e, 0x3a, 0x2c, 0xba, 0xc0, 0x96, 0xda, 0x01, + 0x26, 0x0a, 0x24, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, + 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2d, 0x73, 0x64, 0x6b, 0x2f, + 0x78, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x42, 0xd6, 0x01, 0x0a, 0x1a, 0x63, 0x6f, 0x6d, 0x2e, + 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x2e, 0x6d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x42, 0x0b, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x50, 0x72, + 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x30, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x73, 0x64, 0x6b, + 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x67, + 0x72, 0x6f, 0x75, 0x70, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x3b, 0x6d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x76, 0x31, 0xa2, 0x02, 0x03, 0x43, 0x47, 0x4d, 0xaa, 0x02, 0x16, + 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2e, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x2e, 0x4d, 0x6f, 0x64, + 0x75, 0x6c, 0x65, 0x2e, 0x56, 0x31, 0xca, 0x02, 0x16, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x5c, + 0x47, 0x72, 0x6f, 0x75, 0x70, 0x5c, 0x4d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5c, 0x56, 0x31, 0xe2, + 0x02, 0x22, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x5c, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x5c, 0x4d, + 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5c, 0x56, 0x31, 0x5c, 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, + 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x19, 0x43, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x3a, 0x3a, 0x47, + 0x72, 0x6f, 0x75, 0x70, 0x3a, 0x3a, 0x4d, 
0x6f, 0x64, 0x75, 0x6c, 0x65, 0x3a, 0x3a, 0x56, 0x31, + 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, +} + +var ( + file_cosmos_group_module_v1_module_proto_rawDescOnce sync.Once + file_cosmos_group_module_v1_module_proto_rawDescData = file_cosmos_group_module_v1_module_proto_rawDesc +) + +func file_cosmos_group_module_v1_module_proto_rawDescGZIP() []byte { + file_cosmos_group_module_v1_module_proto_rawDescOnce.Do(func() { + file_cosmos_group_module_v1_module_proto_rawDescData = protoimpl.X.CompressGZIP(file_cosmos_group_module_v1_module_proto_rawDescData) +}) + +return file_cosmos_group_module_v1_module_proto_rawDescData +} + +var file_cosmos_group_module_v1_module_proto_msgTypes = make([]protoimpl.MessageInfo, 1) + +var file_cosmos_group_module_v1_module_proto_goTypes = []interface{ +}{ + (*Module)(nil), / 0: cosmos.group.module.v1.Module + (*durationpb.Duration)(nil), / 1: google.protobuf.Duration +} + +var file_cosmos_group_module_v1_module_proto_depIdxs = []int32{ + 1, / 0: cosmos.group.module.v1.Module.max_execution_period:type_name -> google.protobuf.Duration + 1, / [1:1] is the sub-list for method output_type + 1, / [1:1] is the sub-list for method input_type + 1, / [1:1] is the sub-list for extension type_name + 1, / [1:1] is the sub-list for extension extendee + 0, / [0:1] is the sub-list for field type_name +} + +func init() { + file_cosmos_group_module_v1_module_proto_init() +} + +func file_cosmos_group_module_v1_module_proto_init() { + if File_cosmos_group_module_v1_module_proto != nil { + return +} + if !protoimpl.UnsafeEnabled { + file_cosmos_group_module_v1_module_proto_msgTypes[0].Exporter = func(v interface{ +}, i int) + +interface{ +} { + switch v := v.(*Module); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil +} + +} + +} + +type x struct{ +} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{ +}).PkgPath(), + 
RawDescriptor: file_cosmos_group_module_v1_module_proto_rawDesc, + NumEnums: 0, + NumMessages: 1, + NumExtensions: 0, + NumServices: 0, +}, + GoTypes: file_cosmos_group_module_v1_module_proto_goTypes, + DependencyIndexes: file_cosmos_group_module_v1_module_proto_depIdxs, + MessageInfos: file_cosmos_group_module_v1_module_proto_msgTypes, +}.Build() + +File_cosmos_group_module_v1_module_proto = out.File + file_cosmos_group_module_v1_module_proto_rawDesc = nil + file_cosmos_group_module_v1_module_proto_goTypes = nil + file_cosmos_group_module_v1_module_proto_depIdxs = nil +} +``` + + +Pulsar is optional. The official [`protoc-gen-go`](https://developers.google.com/protocol-buffers/docs/reference/go-generated) can be used as well. + + +## Dependency Definition + +Once the configuration proto is defined, the module's `module.go` must define what dependencies are required by the module. +The boilerplate is similar for all modules. + + +All methods, structs and their fields must be public for `depinject`. + + +1. 
Import the module configuration generated package: + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, 
in.AccountKeeper, in.BankKeeper, in.Registry)
+
+ return GroupOutputs{
+ GroupKeeper: k,
+ Module: m
+ }
+ }
+ ```
+
+ Define an `init()` function that declares the `providers` of the module configuration:\
+ This registers the module configuration message and the wiring of the module.
+
+ ```go expandable
+ package module
+
+ import (
+
+ "context"
+ "encoding/json"
+ "fmt"
+
+ abci "github.com/cometbft/cometbft/abci/types"
+ gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime"
+ "github.com/spf13/cobra"
+
+ modulev1 "cosmossdk.io/api/cosmos/group/module/v1"
+ "cosmossdk.io/core/address"
+ "cosmossdk.io/core/appmodule"
+ "cosmossdk.io/depinject"
+
+ store "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/baseapp"
+ sdkclient "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/codec"
+ cdctypes "github.com/cosmos/cosmos-sdk/codec/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/module"
+ simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
+ "github.com/cosmos/cosmos-sdk/x/group"
+ "github.com/cosmos/cosmos-sdk/x/group/client/cli"
+ "github.com/cosmos/cosmos-sdk/x/group/keeper"
+ "github.com/cosmos/cosmos-sdk/x/group/simulation"
+ )
+
+ / ConsensusVersion defines the current x/group module consensus version.
+ const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. + func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. 
+ func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. + func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. 
+ func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. + func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. 
+ func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +2. 
Ensure that the module implements the `appmodule.AppModule` interface: + + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + store "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.EndBlockAppModule = AppModule{ + } + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var _ appmodule.AppModule = AppModule{ + } + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name()) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + func (am AppModule) + + NewHandler() + + sdk.Handler { + return nil + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { + EndBlocker(ctx, am.keeper) + + return []abci.ValidatorUpdate{ + } + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr sdk.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter *baseapp.MsgServiceRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, 
k, in.AccountKeeper, in.BankKeeper, in.Registry)
+
+ return GroupOutputs{
+ GroupKeeper: k,
+ Module: m
+ }
+ }
+ ```
+
+3. Define a struct that inherits `depinject.In` and define the module inputs (i.e. module dependencies):
+
+ * `depinject` provides the right dependencies to the module.
+ * `depinject` also checks that all dependencies are provided.
+
+ :::tip
+ To make a dependency optional, add the `optional:"true"` struct tag.
+ :::
+
+ ```go expandable
+ package module
+
+ import (
+
+ "context"
+ "encoding/json"
+ "fmt"
+
+ abci "github.com/cometbft/cometbft/abci/types"
+ gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime"
+ "github.com/spf13/cobra"
+
+ modulev1 "cosmossdk.io/api/cosmos/group/module/v1"
+ "cosmossdk.io/core/address"
+ "cosmossdk.io/core/appmodule"
+ "cosmossdk.io/depinject"
+
+ store "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/baseapp"
+ sdkclient "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/codec"
+ cdctypes "github.com/cosmos/cosmos-sdk/codec/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/module"
+ simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
+ "github.com/cosmos/cosmos-sdk/x/group"
+ "github.com/cosmos/cosmos-sdk/x/group/client/cli"
+ "github.com/cosmos/cosmos-sdk/x/group/keeper"
+ "github.com/cosmos/cosmos-sdk/x/group/simulation"
+ )
+
+ / ConsensusVersion defines the current x/group module consensus version.
+ const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. + func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. 
+ func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. + func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. 
+ func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. + func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. 
+ func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +4. Define the module outputs with a public struct that inherits `depinject.Out`: + The module outputs are the dependencies that the module provides to other modules. It is usually the module itself and its keeper. 
+ + ```go expandable + package module + + import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" + ) + + / ConsensusVersion defines the current x/group module consensus version. + const ConsensusVersion = 2 + + var ( + _ module.AppModuleBasic = AppModuleBasic{ + } + _ module.AppModuleSimulation = AppModule{ + } + ) + + type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry + } + + / NewAppModule creates a new AppModule object + func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + + AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() + }, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, + } + } + + var ( + _ appmodule.AppModule = AppModule{ + } + _ appmodule.HasEndBlocker = AppModule{ + } + ) + + / IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+ func (am AppModule) + + IsOnePerModuleType() { + } + + / IsAppModule implements the appmodule.AppModule interface. + func (am AppModule) + + IsAppModule() { + } + + type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec + } + + / Name returns the group module's name. + func (AppModuleBasic) + + Name() + + string { + return group.ModuleName + } + + / DefaultGenesis returns default genesis state as raw bytes for the group + / module. + func (AppModuleBasic) + + DefaultGenesis(cdc codec.JSONCodec) + + json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) + } + + / ValidateGenesis performs genesis state validation for the group module. + func (AppModuleBasic) + + ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + + error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) + } + + return data.Validate() + } + + / GetQueryCmd returns the cli query commands for the group module + func (a AppModuleBasic) + + GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) + } + + / GetTxCmd returns the transaction commands for the group module + func (a AppModuleBasic) + + GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) + } + + / RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. + func (a AppModuleBasic) + + RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) + } + } + + / RegisterInterfaces registers the group module's interface types + func (AppModuleBasic) + + RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) + } + + / RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+ func (AppModuleBasic) + + RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) + } + + / Name returns the group module's name. + func (AppModule) + + Name() + + string { + return group.ModuleName + } + + / RegisterInvariants does nothing, there are no invariants to enforce + func (am AppModule) + + RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) + } + + / InitGenesis performs genesis initialization for the group module. It returns + / no validator updates. + func (am AppModule) + + InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + + return []abci.ValidatorUpdate{ + } + } + + / ExportGenesis returns the exported genesis state as raw bytes for the group + / module. + func (am AppModule) + + ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + + json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + + return cdc.MustMarshalJSON(gs) + } + + / RegisterServices registers a gRPC query service to respond to the + / module-specific gRPC queries. + func (am AppModule) + + RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + + group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) + } + } + + / ConsensusVersion implements AppModule/ConsensusVersion. + func (AppModule) + + ConsensusVersion() + + uint64 { + return ConsensusVersion + } + + / EndBlock implements the group module's EndBlock. 
+ func (am AppModule) + + EndBlock(ctx context.Context) + + error { + c := sdk.UnwrapSDKContext(ctx) + + return EndBlocker(c, am.keeper) + } + + / ____________________________________________________________________________ + + / AppModuleSimulation functions + + / GenerateGenesisState creates a randomized GenState of the group module. + func (AppModule) + + GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) + } + + / RegisterStoreDecoder registers a decoder for group module's types + func (am AppModule) + + RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) + } + + / WeightedOperations returns the all the gov module operations with their respective weights. + func (am AppModule) + + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) + } + + / + / App Wiring Setup + / + + func init() { + appmodule.Register( + &modulev1.Module{ + }, + appmodule.Provide(ProvideModule), + ) + } + + type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter + } + + type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule + } + + func ProvideModule(in GroupInputs) + + GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen + }) + m := NewAppModule(in.Cdc, k, 
in.AccountKeeper, in.BankKeeper, in.Registry) + + return GroupOutputs{ + GroupKeeper: k, + Module: m + } + } + ``` + +5. Create a function named `ProvideModule` (as called in 1.) and use the inputs for instantiating the module outputs. + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. 
+const ConsensusVersion = 2 + +var ( + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var ( + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasEndBlocker = AppModule{ +} +) + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. 
+func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. +func (am AppModule) + +EndBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return EndBlocker(c, am.keeper) +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, in.Registry) + +return GroupOutputs{ + GroupKeeper: k, + Module: m +} +} +``` + +The `ProvideModule` function should return an instance of `cosmossdk.io/core/appmodule.AppModule` which implements +one or more app module extension interfaces for initializing the module. 
+ +Following is the complete app wiring configuration for `group`: + +```go expandable +package module + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/group/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + + store "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + sdkclient "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/group" + "github.com/cosmos/cosmos-sdk/x/group/client/cli" + "github.com/cosmos/cosmos-sdk/x/group/keeper" + "github.com/cosmos/cosmos-sdk/x/group/simulation" +) + +/ ConsensusVersion defines the current x/group module consensus version. +const ConsensusVersion = 2 + +var ( + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +type AppModule struct { + AppModuleBasic + keeper keeper.Keeper + bankKeeper group.BankKeeper + accKeeper group.AccountKeeper + registry cdctypes.InterfaceRegistry +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, ak group.AccountKeeper, bk group.BankKeeper, registry cdctypes.InterfaceRegistry) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: ak.AddressCodec() +}, + keeper: keeper, + bankKeeper: bk, + accKeeper: ak, + registry: registry, +} +} + +var ( + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasEndBlocker = AppModule{ +} +) + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. 
+func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the group module's name. +func (AppModuleBasic) + +Name() + +string { + return group.ModuleName +} + +/ DefaultGenesis returns default genesis state as raw bytes for the group +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(group.NewGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the group module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data group.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", group.ModuleName, err) +} + +return data.Validate() +} + +/ GetQueryCmd returns the cli query commands for the group module +func (a AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.QueryCmd(a.Name()) +} + +/ GetTxCmd returns the transaction commands for the group module +func (a AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.TxCmd(a.Name(), a.ac) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the group module. +func (a AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := group.RegisterQueryHandlerClient(context.Background(), mux, group.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ RegisterInterfaces registers the group module's interface types +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + group.RegisterInterfaces(registry) +} + +/ RegisterLegacyAminoCodec registers the group module's types for the given codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + group.RegisterLegacyAminoCodec(cdc) +} + +/ Name returns the group module's name. +func (AppModule) + +Name() + +string { + return group.ModuleName +} + +/ RegisterInvariants does nothing, there are no invariants to enforce +func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ InitGenesis performs genesis initialization for the group module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + am.keeper.InitGenesis(ctx, cdc, data) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the group +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx, cdc) + +return cdc.MustMarshalJSON(gs) +} + +/ RegisterServices registers a gRPC query service to respond to the +/ module-specific gRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + group.RegisterMsgServer(cfg.MsgServer(), am.keeper) + +group.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper) + if err := cfg.RegisterMigration(group.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", group.ModuleName, err)) +} +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ EndBlock implements the group module's EndBlock. 
+func (am AppModule) + +EndBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return EndBlocker(c, am.keeper) +} + +/ ____________________________________________________________________________ + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the group module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ RegisterStoreDecoder registers a decoder for group module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[group.StoreKey] = simulation.NewDecodeStore(am.cdc) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + am.registry, + simState.AppParams, simState.Cdc, simState.TxConfig, + am.accKeeper, am.bankKeeper, am.keeper, am.cdc, + ) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type GroupInputs struct { + depinject.In + + Config *modulev1.Module + Key *store.KVStoreKey + Cdc codec.Codec + AccountKeeper group.AccountKeeper + BankKeeper group.BankKeeper + Registry cdctypes.InterfaceRegistry + MsgServiceRouter baseapp.MessageRouter +} + +type GroupOutputs struct { + depinject.Out + + GroupKeeper keeper.Keeper + Module appmodule.AppModule +} + +func ProvideModule(in GroupInputs) + +GroupOutputs { + /* + Example of setting group params: + in.Config.MaxMetadataLen = 1000 + in.Config.MaxExecutionPeriod = "1209600s" + */ + k := keeper.NewKeeper(in.Key, in.Cdc, in.MsgServiceRouter, in.AccountKeeper, group.Config{ + MaxExecutionPeriod: in.Config.MaxExecutionPeriod.AsDuration(), + MaxMetadataLen: in.Config.MaxMetadataLen +}) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, in.BankKeeper, 
in.Registry)
+
+return GroupOutputs{
+    GroupKeeper: k,
+    Module: m
+}
+}
+```
+
+The module is now ready to be used with `depinject` by a chain developer.
+
+## Integrate in an application
+
+The App Wiring is done in `app_config.go` / `app.yaml` and `app_di.go` and is explained in detail in the [overview of `app_di.go`](/docs/sdk/v0.53/documentation/application-framework/app-go-di).
diff --git a/docs/sdk/v0.53/documentation/module-system/distribution.mdx b/docs/sdk/v0.53/documentation/module-system/distribution.mdx
new file mode 100644
index 00000000..353dab47
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/distribution.mdx
@@ -0,0 +1,1222 @@
+---
+title: '`x/distribution`'
+---
+
+## Overview
+
+This *simple* distribution mechanism describes a functional way to passively
+distribute rewards between validators and delegators. Note that this mechanism does
+not distribute funds as precisely as active reward distribution mechanisms and
+will therefore be upgraded in the future.
+
+The mechanism operates as follows. Collected rewards are pooled globally and
+divided out passively to validators and delegators. Each validator has the
+opportunity to charge commission to the delegators on the rewards collected on
+behalf of the delegators. Fees are collected directly into a global reward pool
+and validator proposer-reward pool. Due to the nature of passive accounting,
+whenever changes to parameters which affect the rate of reward distribution
+occur, withdrawal of rewards must also occur.
+
+* Whenever withdrawing, one must withdraw the maximum amount they are entitled
+  to, leaving nothing in the pool.
+* Whenever bonding, unbonding, or re-delegating tokens to an existing account, a
+  full withdrawal of the rewards must occur (as the rules for lazy accounting
+  change).
+* Whenever a validator chooses to change the commission on rewards, all accumulated
+  commission rewards must be simultaneously withdrawn.
+
+The above scenarios are covered in `hooks.md`.
+
+The distribution mechanism outlined herein is used to lazily distribute the
+following rewards between validators and associated delegators:
+
+* multi-token fees to be socially distributed
+* inflated staked asset provisions
+* validator commission on all rewards earned by their delegators' stake
+
+Fees are pooled within a global pool. The mechanisms used allow for validators
+and delegators to independently and lazily withdraw their rewards.
+
+## Shortcomings
+
+As a part of the lazy computations, each delegator holds an accumulation term
+specific to each validator, which is used to estimate the approximate fair
+portion of the tokens held in the global fee pool that is owed to them.
+
+```text
+entitlement = delegator-accumulation / all-delegators-accumulation
+```
+
+If there were a constant and equal flow of incoming reward tokens every block,
+this distribution mechanism would be equivalent to the active distribution
+(distributing individually to all delegators each block). However, this is
+unrealistic, so deviations from the active distribution will occur based on
+fluctuations of incoming reward tokens as well as the timing of reward
+withdrawals by other delegators.
+
+If you happen to know that incoming rewards are about to significantly increase,
+you are incentivized not to withdraw until after this event, increasing the
+worth of your existing *accum*. See [#2764](https://github.com/cosmos/cosmos-sdk/issues/2764)
+for further details.
+
+## Effect on Staking
+
+Charging commission on Atom provisions while also allowing Atom provisions
+to be auto-bonded (distributed directly to the validator's bonded stake) is
+problematic within BPoS. Fundamentally, these two mechanisms are mutually
+exclusive.
If both commission and auto-bonding mechanisms are simultaneously
+applied to the staking token, then the distribution of staking tokens between
+any validator and its delegators will change with each block. This then
+necessitates a calculation for each delegation record for each block -
+which is considered computationally expensive.
+
+In conclusion, we can only have Atom commission with unbonded Atom
+provisions, or bonded Atom provisions with no Atom commission, and we elect to
+implement the former. Stakeholders wishing to rebond their provisions may elect
+to set up a script to periodically withdraw and rebond rewards.
+
+## Contents
+
+* [Concepts](#concepts)
+* [State](#state)
+  * [FeePool](#feepool)
+  * [Validator Distribution](#validator-distribution)
+  * [Delegation Distribution](#delegation-distribution)
+  * [Params](#params)
+* [Begin Block](#begin-block)
+* [Messages](#messages)
+* [Hooks](#hooks)
+* [Events](#events)
+* [Parameters](#parameters)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+
+## Concepts
+
+In Proof of Stake (PoS) blockchains, rewards gained from transaction fees are paid to validators. The fee distribution module fairly distributes the rewards to the validators' constituent delegators.
+
+Rewards are calculated per period. The period is updated each time a validator's delegation changes, for example, when the validator receives a new delegation.
+The rewards for a single delegation can then be calculated by subtracting the cumulative rewards recorded at the period in which the delegation started from the current cumulative rewards.
+To learn more, see the [F1 Fee Distribution paper](https://github.com/cosmos/cosmos-sdk/tree/main/docs/spec/fee_distribution/f1_fee_distr.pdf).
+
+The commission to the validator is paid when the validator is removed or when the validator requests a withdrawal.
+The commission is calculated and incremented at every `BeginBlock` operation to update accumulated fee amounts.
+
+The rewards to a delegator are distributed when the delegation is changed or removed, or a withdrawal is requested.
+Before rewards are distributed, all slashes to the validator that occurred during the current delegation are applied.
+
+### Reference Counting in F1 Fee Distribution
+
+In F1 fee distribution, the rewards a delegator receives are calculated when their delegation is withdrawn. This calculation reads the summed rewards-per-token terms for two periods: the period that ended when the delegation started, and the final period created for the withdrawal.
+
+Additionally, as slashes change the amount of tokens a delegation will have (but we calculate this lazily,
+only when a delegator un-delegates), we must calculate rewards in separate periods before / after any slashes
+that occurred between when a delegator delegated and when they withdrew their rewards. Thus slashes, like
+delegations, reference the period that was ended by the slash event.
+
+All stored historical rewards records for periods which are no longer referenced by any delegations
+or any slashes can thus be safely removed, as they will never be read (future delegations and future
+slashes will always reference future periods). This is implemented by tracking a `ReferenceCount`
+along with each historical reward storage entry. Each time a new object (delegation or slash)
+is created which might need to reference the historical record, the reference count is incremented.
+Each time one object which previously needed to reference the historical record is deleted, the reference
+count is decremented. If the reference count hits zero, the historical record is deleted.
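The reference-counting scheme can be sketched in isolation. This is a toy in-memory map with invented names; the real module keys records by validator address and period and stores the cumulative reward ratio alongside the count:

```go
package main

import "fmt"

// historicalRewards is a toy stand-in for the historical reward records:
// period -> reference count.
type historicalRewards map[uint64]int

// reference is called when a new delegation or slash needs to read the
// record for a period.
func (h historicalRewards) reference(period uint64) {
	h[period]++
}

// dereference is called when such an object is deleted; once nothing
// references the period, the record can be pruned.
func (h historicalRewards) dereference(period uint64) {
	h[period]--
	if h[period] <= 0 {
		delete(h, period) // safe: no future object will ever read it
	}
}

func main() {
	h := historicalRewards{}
	h.reference(7) // a delegation starts at period 7
	h.reference(7) // a slash also references period 7
	h.dereference(7)
	fmt.Println(len(h)) // record kept: still referenced by the slash
	h.dereference(7)
	fmt.Println(len(h)) // record pruned
}
```

The invariant is that a record exists exactly as long as some live delegation or slash might still read it.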
+
+### External Community Pool Keepers
+
+An external community pool keeper is defined as:
+
+```go expandable
+/ ExternalCommunityPoolKeeper is the interface that an external community pool module keeper must fulfill
+/ for x/distribution to properly accept it as a community pool fund destination.
+type ExternalCommunityPoolKeeper interface {
+	/ GetCommunityPoolModule gets the module name that funds should be sent to for the community pool.
+	/ This is the address that x/distribution will send funds to for external management.
+	GetCommunityPoolModule() string
+	/ FundCommunityPool allows an account to directly fund the community fund pool.
+	FundCommunityPool(ctx sdk.Context, amount sdk.Coins, senderAddr sdk.AccAddress) error
+	/ DistributeFromCommunityPool distributes funds from the community pool module account to
+	/ a receiver address.
+	DistributeFromCommunityPool(ctx sdk.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error
+}
+```
+
+By default, the distribution module uses an internal community pool implementation. An external community
+pool can be provided to the module, in which case funds are diverted to it instead of the internal
+implementation. The reference external community pool maintained by the Cosmos SDK is
+[`x/protocolpool`](/docs/sdk/v0.53/documentation/module-system/protocolpool).
+
+## State
+
+### FeePool
+
+All globally tracked parameters for distribution are stored within
+`FeePool`. Rewards are collected and added to the reward pool and
+distributed to validators/delegators from here.
+
+Note that the reward pool holds decimal coins (`DecCoins`) to allow
+for fractions of coins to be received from operations like inflation.
+When coins are distributed from the pool they are truncated back to
+`sdk.Coins`, which are non-decimal.
+
+* FeePool: `0x00 -> ProtocolBuffer(FeePool)`
+
+```go
+/ coins with decimal
+type DecCoins []DecCoin
+
+type DecCoin struct {
+	Amount math.LegacyDec
+	Denom  string
+}
+```
+
+```protobuf
+// FeePool is the global fee pool for distribution.
+message FeePool {
+  repeated cosmos.base.v1beta1.DecCoin community_pool = 1 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.DecCoins"
+  ];
+}
+```
+
+### Validator Distribution
+
+Validator distribution information for the relevant validator is updated each time:
+
+1. delegation amount to a validator is updated,
+2. any delegator withdraws from a validator, or
+3. the validator withdraws its commission.
+
+* ValidatorDistInfo: `0x02 | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(validatorDistribution)`
+
+```go
+type ValidatorDistInfo struct {
+	OperatorAddress     sdk.AccAddress
+	SelfBondRewards     sdkmath.DecCoins
+	ValidatorCommission types.ValidatorAccumulatedCommission
+}
+```
+
+### Delegation Distribution
+
+Each delegation distribution only needs to record the height at which it last
+withdrew fees. Because a delegation must withdraw fees each time its
+properties change (e.g., bonded tokens), its properties remain constant
+between withdrawals, and the delegator's *accumulation* factor can be
+calculated passively knowing only the height of the last withdrawal and its
+current properties.
+
+* DelegationDistInfo: `0x02 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(delegatorDist)`
+
+```go
+type DelegationDistInfo struct {
+	WithdrawalHeight int64 / last time this delegation withdrew rewards
+}
+```
+
+### Params
+
+The distribution module stores its params in state with the prefix `0x09`;
+they can be updated via governance or by the authority address.
+ +* Params: `0x09 | ProtocolBuffer(Params)` + +```protobuf +// Params defines the set of params for the distribution module. +message Params { + option (amino.name) = "cosmos-sdk/x/distribution/Params"; + option (gogoproto.goproto_stringer) = false; + + string community_tax = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + + // Deprecated: The base_proposer_reward field is deprecated and is no longer used + // in the x/distribution module's reward mechanism. + string base_proposer_reward = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + deprecated = true + ]; + + // Deprecated: The bonus_proposer_reward field is deprecated and is no longer used + // in the x/distribution module's reward mechanism. + string bonus_proposer_reward = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + deprecated = true + ]; + + bool withdraw_addr_enabled = 4; +} +``` + +## Begin Block + +At each `BeginBlock`, all fees received in the previous block are transferred to +the distribution `ModuleAccount` account. When a delegator or validator +withdraws their rewards, they are taken out of the `ModuleAccount`. During begin +block, the different claims on the fees collected are updated as follows: + +* The reserve community tax is charged. +* The remainder is distributed proportionally by voting power to all bonded validators + +### The Distribution Scheme + +See [params](#params) for description of parameters. + +Let `fees` be the total fees collected in the previous block, including +inflationary rewards to the stake. All fees are collected in a specific module +account during the block. During `BeginBlock`, they are sent to the +`"distribution"` `ModuleAccount`. 
No other sending of tokens occurs. Instead, the
+rewards each account is entitled to are stored, and withdrawals can be triggered
+through the messages `FundCommunityPool`, `WithdrawValidatorCommission` and
+`WithdrawDelegatorReward`.
+
+#### Reward to the Community Pool
+
+The community pool gets `community_tax * fees`, plus any remaining dust.
+Validator rewards are always rounded down to the nearest integer value, and
+the remainder goes to the community pool.
+
+#### Using an External Community Pool
+
+Starting with Cosmos SDK v0.53.0, an external community pool, such as `x/protocolpool`, can be used in place of the `x/distribution` managed community pool.
+
+Please review the warning in the next section before deciding to use an external community pool.
+
+```go expandable
+/ ExternalCommunityPoolKeeper is the interface that an external community pool module keeper must fulfill
+/ for x/distribution to properly accept it as a community pool fund destination.
+type ExternalCommunityPoolKeeper interface {
+	/ GetCommunityPoolModule gets the module name that funds should be sent to for the community pool.
+	/ This is the address that x/distribution will send funds to for external management.
+	GetCommunityPoolModule() string
+	/ FundCommunityPool allows an account to directly fund the community fund pool.
+	FundCommunityPool(ctx sdk.Context, amount sdk.Coins, senderAddr sdk.AccAddress) error
+	/ DistributeFromCommunityPool distributes funds from the community pool module account to
+	/ a receiver address.
+	DistributeFromCommunityPool(ctx sdk.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error
+}
+```
+
+```go
+app.DistrKeeper = distrkeeper.NewKeeper(
+	appCodec,
+	runtime.NewKVStoreService(keys[distrtypes.StoreKey]),
+	app.AccountKeeper,
+	app.BankKeeper,
+	app.StakingKeeper,
+	authtypes.FeeCollectorName,
+	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+	distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), / New option. 
+) +``` + +#### External Community Pool Usage Warning + +When using an external community pool with `x/distribution`, the following handlers will return an error: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents. + +#### Reward To the Validators + +The proposer receives no extra rewards. All fees are distributed among all the +bonded validators, including the proposer, in proportion to their consensus power. + +```text +powFrac = validator power / total bonded validator power +voteMul = 1 - community_tax +``` + +All validators receive `fees * voteMul * powFrac`. + +#### Rewards to Delegators + +Each validator's rewards are distributed to its delegators. The validator also +has a self-delegation that is treated like a regular delegation in +distribution calculations. + +The validator sets a commission rate. The commission rate is flexible, but each +validator sets a maximum rate and a maximum daily increase. These maximums cannot be exceeded and protect delegators from sudden increases of validator commission rates to prevent validators from taking all of the rewards. + +The outstanding rewards that the operator is entitled to are stored in +`ValidatorAccumulatedCommission`, while the rewards the delegators are entitled +to are stored in `ValidatorCurrentRewards`. The [F1 fee distribution scheme](#concepts) is used to calculate the rewards per delegator as they +withdraw or update their delegation, and is thus not handled in `BeginBlock`. + +#### Example Distribution + +For this example distribution, the underlying consensus engine selects block proposers in +proportion to their power relative to the entire bonded power. + +All validators are equally performant at including pre-commits in their proposed +blocks. 
Then hold `(pre_commits included) / (total bonded validator power)` +constant so that the amortized block reward for the validator is `( validator power / total bonded power) * (1 - community tax rate)` of +the total rewards. Consequently, the reward for a single delegator is: + +```text +(delegator proportion of the validator power / validator power) * (validator power / total bonded power) + * (1 - community tax rate) * (1 - validator commission rate) += (delegator proportion of the validator power / total bonded power) * (1 - +community tax rate) * (1 - validator commission rate) +``` + +## Messages + +### MsgSetWithdrawAddress + +By default, the withdraw address is the delegator address. To change its withdraw address, a delegator must send a `MsgSetWithdrawAddress` message. +Changing the withdraw address is possible only if the parameter `WithdrawAddrEnabled` is set to `true`. + +The withdraw address cannot be any of the module accounts. These accounts are blocked from being withdraw addresses by being added to the distribution keeper's `blockedAddrs` array at initialization. + +Response: + +```protobuf +// MsgSetWithdrawAddress sets the withdraw address for +// a delegator (or validator self-delegation). 
+message MsgSetWithdrawAddress {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgModifyWithdrawAddress";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string withdraw_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+```go
+func (k Keeper) SetWithdrawAddr(ctx context.Context, delegatorAddr sdk.AccAddress, withdrawAddr sdk.AccAddress) error
+	if k.blockedAddrs[withdrawAddr.String()] {
+		fail with "`{withdrawAddr}` is not allowed to receive external funds"
+	}
+
+	if !k.GetWithdrawAddrEnabled(ctx) {
+		fail with `ErrSetWithdrawAddrDisabled`
+	}
+
+	k.SetDelegatorWithdrawAddr(ctx, delegatorAddr, withdrawAddr)
+```
+
+### MsgWithdrawDelegatorReward
+
+A delegator can withdraw its rewards.
+Internally in the distribution module, this transaction simultaneously removes the previous delegation with associated rewards, the same as if the delegator simply started a new delegation of the same value.
+The rewards are sent immediately from the distribution `ModuleAccount` to the withdraw address.
+Any remainder (truncated decimals) is sent to the community pool.
+The starting height of the delegation is set to the current validator period, and the reference count for the previous period is decremented.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+
+In the F1 distribution, the total rewards are calculated per validator period, and a delegator receives a piece of those rewards in proportion to their stake in the validator.
+In basic F1, the total rewards that all the delegators are entitled to between two periods is calculated as follows.
+Let `R(X)` be the total accumulated rewards up to period `X` divided by the tokens staked at that time. The delegator allocation is `R(X) * delegator_stake`. 
+Then the rewards for all the delegators for staking between periods `A` and `B` are `(R(B) - R(A)) * total stake`.
+However, these calculated rewards don't account for slashing.
+
+Taking the slashes into account requires iteration.
+Let `F(X)` be the fraction a validator is to be slashed for a slashing event that happened at period `X`.
+If the validator was slashed at periods `P1, ..., PN`, where `A < P1` and `PN < B`, the distribution module calculates the individual delegator's rewards, `T(A, B)`, as follows:
+
+```go
+stake := initial stake
+rewards := 0
+previous := A
+for P in P1, ..., PN:
+	rewards = rewards + (R(P) - R(previous)) * stake
+	stake = stake * (1 - F(P))
+	previous = P
+rewards = rewards + (R(B) - R(PN)) * stake
+```
+
+The historical rewards are calculated retroactively by playing back all the slashes and attenuating the delegator's stake at each step.
+The final calculated stake is equivalent to the actual staked coins in the delegation, up to a margin of error due to rounding.
+
+Response:
+
+```protobuf
+// MsgWithdrawDelegatorReward represents delegation withdrawal to a delegator
+// from a single validator.
+message MsgWithdrawDelegatorReward {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgWithdrawDelegationReward";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+### WithdrawValidatorCommission
+
+The validator can send the WithdrawValidatorCommission message to withdraw their accumulated commission.
+The commission is calculated in every block during `BeginBlock`, so no iteration is required to withdraw.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+Only integer amounts can be sent.
If the accumulated rewards have decimals, the amount is truncated before the withdrawal is sent, and the remainder is left to be withdrawn later.
+
+### FundCommunityPool
+
+This handler will return an error if an `ExternalCommunityPool` is used.
+
+This message sends coins directly from the sender to the community pool.
+
+The transaction fails if the amount cannot be transferred from the sender to the distribution module account.
+
+```go expandable
+func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error {
+	if err := k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount); err != nil {
+		return err
+	}
+
+	feePool, err := k.FeePool.Get(ctx)
+	if err != nil {
+		return err
+	}
+
+	feePool.CommunityPool = feePool.CommunityPool.Add(sdk.NewDecCoinsFromCoins(amount...)...)
+	if err := k.FeePool.Set(ctx, feePool); err != nil {
+		return err
+	}
+
+	return nil
+}
+```
+
+### Common distribution operations
+
+These operations take place during many different messages.
+
+#### Initialize delegation
+
+Each time a delegation is changed, the rewards are withdrawn and the delegation is reinitialized.
+Initializing a delegation increments the validator period and keeps track of the starting period of the delegation.
+
+```go expandable
+/ initialize starting info for a new delegation
+func (k Keeper) initializeDelegation(ctx context.Context, val sdk.ValAddress, del sdk.AccAddress) {
+	/ period has already been incremented - we want to store the period ended by this delegation action
+	previousPeriod := k.GetValidatorCurrentRewards(ctx, val).Period - 1
+
+	/ increment reference count for the period we're going to track
+	k.incrementReferenceCount(ctx, val, previousPeriod)
+	validator := k.stakingKeeper.Validator(ctx, val)
+	delegation := k.stakingKeeper.Delegation(ctx, del, val)
+
+	/ calculate delegation stake in tokens
+	/ we don't store directly, so multiply delegation shares * (tokens per share)
+	/ note: necessary to truncate so we don't allow withdrawing more rewards than owed
+	stake := validator.TokensFromSharesTruncated(delegation.GetShares())
+	k.SetDelegatorStartingInfo(ctx, val, del, types.NewDelegatorStartingInfo(previousPeriod, stake, uint64(ctx.BlockHeight())))
+}
+```
+
+### MsgUpdateParams
+
+Distribution module params can be updated through `MsgUpdateParams`, which is done via a governance proposal; the signer is always the gov module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/distribution/MsgUpdateParams";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // params defines the x/distribution parameters to update.
+  //
+  // NOTE: All parameters must be supplied.
+  Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+The message handling can fail if:
+
+* the signer is not the gov module account address.
+
+## Hooks
+
+Available hooks that can be called by and from this module.
+
+### Create or modify delegation distribution
+
+* triggered-by: `staking.MsgDelegate`, `staking.MsgBeginRedelegate`, `staking.MsgUndelegate`
+
+#### Before
+
+* The delegation rewards are withdrawn to the withdraw address of the delegator.
+  The rewards include the current period and exclude the starting period.
+* The validator period is incremented.
+  The validator period is incremented because the validator's power and share distribution might have changed.
+* The reference count for the delegator's starting period is decremented.
+
+#### After
+
+The starting height of the delegation is set to the previous period.
+Because of the `Before`-hook, this period is the last period for which the delegator was rewarded.
+
+### Validator created
+
+* triggered-by: `staking.MsgCreateValidator`
+
+When a validator is created, the following validator variables are initialized:
+
+* Historical rewards
+* Current accumulated rewards
+* Accumulated commission
+* Total outstanding rewards
+* Period
+
+By default, all values are set to `0`, except period, which is set to `1`.
+
+### Validator removed
+
+* triggered-by: `staking.RemoveValidator`
+
+Outstanding commission is sent to the validator's self-delegation withdrawal address.
+Remaining delegator rewards get sent to the community fee pool.
+
+Note: The validator gets removed only when it has no remaining delegations.
+At that time, all outstanding delegator rewards will have been withdrawn.
+Any remaining rewards are dust amounts.
+
+### Validator is slashed
+
+* triggered-by: `staking.Slash`
+* The current validator period reference count is incremented.
+  The reference count is incremented because the slash event has created a reference to it.
+* The validator period is incremented.
+* The slash event is stored for later use.
+  The slash event will be referenced when calculating delegator rewards.
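The way these stored slash events later enter the reward calculation (the iteration described under `MsgWithdrawDelegatorReward`) can be sketched as follows. Names and `float64` arithmetic are invented for illustration; here `fraction` is the portion of stake removed by the slash, so the remaining stake is multiplied by `1 - fraction`:

```go
package main

import "fmt"

// slashEvent is a toy record of a slash that ended a period.
type slashEvent struct {
	period   uint64  // period ended by the slash
	fraction float64 // fraction of stake removed by the slash
}

// R maps a period to a hypothetical cumulative rewards-per-token ratio.
var R = map[uint64]float64{0: 0, 5: 2.0, 10: 3.0}

// rewardsBetween replays the slashes between delegation (period a) and
// withdrawal (period b), attenuating the stake at each slash.
func rewardsBetween(a, b uint64, stake float64, slashes []slashEvent) float64 {
	rewards := 0.0
	previous := a
	for _, s := range slashes {
		rewards += (R[s.period] - R[previous]) * stake
		stake *= 1 - s.fraction
		previous = s.period
	}
	rewards += (R[b] - R[previous]) * stake
	return rewards
}

func main() {
	// 100 tokens delegated at period 0, slashed 50% at period 5,
	// withdrawn at period 10:
	// (2.0 - 0) * 100 + (3.0 - 2.0) * 50 = 200 + 50 = 250.
	fmt.Println(rewardsBetween(0, 10, 100, []slashEvent{{period: 5, fraction: 0.5}}))
}
```

Each stored slash event contributes exactly one segment to this walk, which is why the events must be retained while any delegation still spans them.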
+ +## Events + +The distribution module emits the following events: + +### BeginBlocker + +| Type | Attribute Key | Attribute Value | +| ---------------- | ------------- | ------------------ | +| proposer\_reward | validator | `{validatorAddress}` | +| proposer\_reward | reward | `{proposerReward}` | +| commission | amount | `{commissionAmount}` | +| commission | validator | `{validatorAddress}` | +| rewards | amount | `{rewardAmount}` | +| rewards | validator | `{validatorAddress}` | + +### Handlers + +#### MsgSetWithdrawAddress + +| Type | Attribute Key | Attribute Value | +| ---------------------- | ----------------- | ---------------------- | +| set\_withdraw\_address | withdraw\_address | `{withdrawAddress}` | +| message | module | distribution | +| message | action | set\_withdraw\_address | +| message | sender | `{senderAddress}` | + +#### MsgWithdrawDelegatorReward + +| Type | Attribute Key | Attribute Value | +| ----------------- | ------------- | --------------------------- | +| withdraw\_rewards | amount | `{rewardAmount}` | +| withdraw\_rewards | validator | `{validatorAddress}` | +| message | module | distribution | +| message | action | withdraw\_delegator\_reward | +| message | sender | `{senderAddress}` | + +#### MsgWithdrawValidatorCommission + +| Type | Attribute Key | Attribute Value | +| -------------------- | ------------- | ------------------------------- | +| withdraw\_commission | amount | `{commissionAmount}` | +| message | module | distribution | +| message | action | withdraw\_validator\_commission | +| message | sender | `{senderAddress}` | + +## Parameters + +The distribution module contains the following parameters: + +| Key | Type | Example | +| ------------------- | ------------ | --------------------------- | +| communitytax | string (dec) | "0.020000000000000000" \[0] | +| withdrawaddrenabled | bool | true | + +* \[0] `communitytax` must be positive and cannot exceed 1.00. 
+* `baseproposerreward` and `bonusproposerreward` were deprecated in v0.47 and are no longer used.
+
+The reserve pool is the pool of collected funds for use by governance, taken via the `CommunityTax`.
+Currently with the Cosmos SDK, tokens collected by the `CommunityTax` are accounted for but unspendable.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `distribution` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `distribution` state.
+
+```shell
+simd query distribution --help
+```
+
+##### commission
+
+The `commission` command allows users to query validator commission rewards by address.
+
+```shell
+simd query distribution commission [address] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution commission cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### community-pool
+
+The `community-pool` command allows users to query all coin balances within the community pool.
+
+```shell
+simd query distribution community-pool [flags]
+```
+
+Example:
+
+```shell
+simd query distribution community-pool
+```
+
+Example Output:
+
+```yml
+pool:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### params
+
+The `params` command allows users to query the parameters of the `distribution` module.
+
+```shell
+simd query distribution params [flags]
+```
+
+Example:
+
+```shell
+simd query distribution params
+```
+
+Example Output:
+
+```yml
+base_proposer_reward: "0.000000000000000000"
+bonus_proposer_reward: "0.000000000000000000"
+community_tax: "0.020000000000000000"
+withdraw_addr_enabled: true
+```
+
+##### rewards
+
+The `rewards` command allows users to query delegator rewards. Users can optionally include the validator address to query rewards earned from a specific validator.
+
+```shell
+simd query distribution rewards [delegator-addr] [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution rewards cosmos1...
+```
+
+Example Output:
+
+```yml
+rewards:
+- reward:
+  - amount: "1000000.000000000000000000"
+    denom: stake
+  validator_address: cosmosvaloper1..
+total:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### slashes
+
+The `slashes` command allows users to query all slashes for a given block range.
+
+```shell
+simd query distribution slashes [validator] [start-height] [end-height] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution slashes cosmosvaloper1... 1 1000
+```
+
+Example Output:
+
+```yml
+pagination:
+  next_key: null
+  total: "0"
+slashes:
+- validator_period: 20
+  fraction: "0.009999999999999999"
+```
+
+##### validator-outstanding-rewards
+
+The `validator-outstanding-rewards` command allows users to query all outstanding (un-withdrawn) rewards for a validator and all their delegations.
+
+```shell
+simd query distribution validator-outstanding-rewards [validator] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution validator-outstanding-rewards cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+rewards:
+- amount: "1000000.000000000000000000"
+  denom: stake
+```
+
+##### validator-distribution-info
+
+The `validator-distribution-info` command allows users to query validator commission and self-delegation rewards for a validator.
+
+```shell
+simd query distribution validator-distribution-info cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "100000.000000000000000000"
+  denom: stake
+operator_address: cosmosvaloper1...
+self_bond_rewards:
+- amount: "100000.000000000000000000"
+  denom: stake
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `distribution` module.
+
+```shell
+simd tx distribution --help
+```
+
+##### fund-community-pool
+
+The `fund-community-pool` command allows users to send funds to the community pool.
+
+```shell
+simd tx distribution fund-community-pool [amount] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution fund-community-pool 100stake --from cosmos1...
+```
+
+##### set-withdraw-addr
+
+The `set-withdraw-addr` command allows users to set the withdraw address for rewards associated with a delegator address.
+
+```shell
+simd tx distribution set-withdraw-addr [withdraw-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution set-withdraw-addr cosmos1... --from cosmos1...
+```
+
+##### withdraw-all-rewards
+
+The `withdraw-all-rewards` command allows users to withdraw all rewards for a delegator.
+
+```shell
+simd tx distribution withdraw-all-rewards [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-all-rewards --from cosmos1...
+```
+
+##### withdraw-rewards
+
+The `withdraw-rewards` command allows users to withdraw all rewards from a given delegation address,
+and optionally withdraw validator commission if the delegation address given is a validator operator and the user provides the `--commission` flag.
+
+```shell
+simd tx distribution withdraw-rewards [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-rewards cosmosvaloper1... --from cosmos1... --commission
+```
+
+### gRPC
+
+A user can query the `distribution` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query parameters of the `distribution` module.
+ +Example: + +```shell +grpcurl -plaintext \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/Params +``` + +Example Output: + +```json +{ + "params": { + "communityTax": "20000000000000000", + "baseProposerReward": "00000000000000000", + "bonusProposerReward": "00000000000000000", + "withdrawAddrEnabled": true + } +} +``` + +#### ValidatorDistributionInfo + +The `ValidatorDistributionInfo` queries validator commission and self-delegation rewards for validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorDistributionInfo +``` + +Example Output: + +```json +{ + "commission": { + "commission": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + }, + "self_bond_rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ], + "validator_address": "cosmosvalop1..." +} +``` + +#### ValidatorOutstandingRewards + +The `ValidatorOutstandingRewards` endpoint allows users to query rewards of a validator address. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorOutstandingRewards +``` + +Example Output: + +```json +{ + "rewards": { + "rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } +} +``` + +#### ValidatorCommission + +The `ValidatorCommission` endpoint allows users to query accumulated commission for a validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorCommission +``` + +Example Output: + +```json +{ + "commission": { + "commission": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } +} +``` + +#### ValidatorSlashes + +The `ValidatorSlashes` endpoint allows users to query slash events of a validator. 
+ +Example: + +```shell +grpcurl -plaintext \ + -d '{"validator_address":"cosmosvalop1.."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/ValidatorSlashes +``` + +Example Output: + +```json +{ + "slashes": [ + { + "validator_period": "20", + "fraction": "0.009999999999999999" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### DelegationRewards + +The `DelegationRewards` endpoint allows users to query the total rewards accrued by a delegation. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1...","validator_address":"cosmosvalop1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegationRewards +``` + +Example Output: + +```json +{ + "rewards": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] +} +``` + +#### DelegationTotalRewards + +The `DelegationTotalRewards` endpoint allows users to query the total rewards accrued by each validator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegationTotalRewards +``` + +Example Output: + +```json +{ + "rewards": [ + { + "validatorAddress": "cosmosvaloper1...", + "reward": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] + } + ], + "total": [ + { + "denom": "stake", + "amount": "1000000000000000" + } + ] +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` endpoint allows users to query all validators for given delegator. + +Example: + +```shell +grpcurl -plaintext \ + -d '{"delegator_address":"cosmos1..."}' \ + localhost:9090 \ + cosmos.distribution.v1beta1.Query/DelegatorValidators +``` + +Example Output: + +```json +{ + "validators": ["cosmosvaloper1..."] +} +``` + +#### DelegatorWithdrawAddress + +The `DelegatorWithdrawAddress` endpoint allows users to query the withdraw address of a delegator. 
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  -d '{"delegator_address":"cosmos1..."}' \
+  localhost:9090 \
+  cosmos.distribution.v1beta1.Query/DelegatorWithdrawAddress
+```
+
+Example Output:
+
+```json
+{
+  "withdrawAddress": "cosmos1..."
+}
+```
+
+#### CommunityPool
+
+The `CommunityPool` endpoint allows users to query the community pool coins.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+  localhost:9090 \
+  cosmos.distribution.v1beta1.Query/CommunityPool
+```
+
+Example Output:
+
+```json
+{
+  "pool": [
+    {
+      "denom": "stake",
+      "amount": "1000000000000000000"
+    }
+  ]
+}
+```
+````
diff --git a/docs/sdk/v0.53/documentation/module-system/epochs.mdx b/docs/sdk/v0.53/documentation/module-system/epochs.mdx
new file mode 100644
index 00000000..900dac7d
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/epochs.mdx
@@ -0,0 +1,179 @@
+---
+title: '`x/epochs`'
+---
+
+## Abstract
+
+Often in the SDK, we would like to run certain code every so often. The
+purpose of the `epochs` module is to allow other modules to register to
+be signaled once every period. So another module can specify it wants to
+execute code once a week, starting at UTC-time = x. `epochs` provides a
+generalized epoch interface to other modules so that they can easily be
+signaled upon such events.
+
+## Contents
+
+1. **[Concept](#concepts)**
+2. **[State](#state)**
+3. **[Events](#events)**
+4. **[Keeper](#keepers)**
+5. **[Hooks](#hooks)**
+6. **[Queries](#queries)**
+
+## Concepts
+
+The epochs module defines on-chain timers that execute at fixed time intervals.
+Other SDK modules can then register logic to be executed at the timer ticks.
+We refer to the period in between two timer ticks as an "epoch".
+
+Every timer has a unique identifier.
+Every epoch will have a start time and an end time, where `end time = start time + timer interval`.
+On mainnet, we only utilize one identifier, with a time interval of `one day`.
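As a rough illustration of this bookkeeping, the sketch below uses hypothetical types (not the module's actual `EpochInfo` implementation) to derive the end time from `end time = start time + timer interval` and to advance the timer at most once per block:

```go
package main

import (
	"fmt"
	"time"
)

// epochInfo is a simplified, hypothetical stand-in for the module's per-identifier state.
type epochInfo struct {
	Identifier   string
	Duration     time.Duration
	CurrentEpoch int64
	StartTime    time.Time // start of the current epoch
}

// endTime applies the relation: end time = start time + timer interval.
func (e epochInfo) endTime() time.Time {
	return e.StartTime.Add(e.Duration)
}

// tickOnce advances the timer by at most one epoch, as would happen once per
// block: if the block time is past the end time, the new start becomes the
// prior end time (not the block time).
func (e *epochInfo) tickOnce(blockTime time.Time) bool {
	end := e.endTime()
	if !blockTime.After(end) {
		return false
	}
	e.StartTime = end
	e.CurrentEpoch++
	return true
}

// simulate runs a few consecutive blocks after ~3 days of downtime and
// returns the resulting epoch number: the timer catches up one epoch per block.
func simulate() int64 {
	start := time.Date(2021, 6, 18, 17, 0, 0, 0, time.UTC)
	day := epochInfo{Identifier: "day", Duration: 24 * time.Hour, CurrentEpoch: 1, StartTime: start}
	blockTime := start.Add(72*time.Hour + time.Minute)
	for i := 0; i < 5; i++ { // five consecutive blocks, ~5s apart
		day.tickOnce(blockTime)
		blockTime = blockTime.Add(5 * time.Second)
	}
	return day.CurrentEpoch
}

func main() {
	fmt.Println(simulate()) // catches up from epoch 1 to epoch 4 over three blocks
}
```

Note how the new start is taken from the prior end time, so a chain that was halted for several epochs catches up one epoch per block rather than jumping straight to the block time.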
+
+The timer will tick at the first block whose block time is greater than the timer end time,
+and set the start as the prior timer end time. (Notably, it's not set to the block time!)
+This means that if the chain has been down for a while, you will get one timer tick per block,
+until the timer has caught up.
+
+## State
+
+The Epochs module keeps a single `EpochInfo` per identifier.
+This contains the current state of the timer with the corresponding identifier.
+Its fields are modified at every timer tick.
+EpochInfos are initialized as part of genesis initialization or upgrade logic,
+and are only modified on begin blockers.
+
+## Events
+
+The `epochs` module emits the following events:
+
+### BeginBlocker
+
+| Type         | Attribute Key | Attribute Value   |
+| ------------ | ------------- | ----------------- |
+| epoch\_start | epoch\_number | `{epoch\_number}` |
+| epoch\_start | start\_time   | `{start\_time}`   |
+
+### EndBlocker
+
+| Type       | Attribute Key | Attribute Value   |
+| ---------- | ------------- | ----------------- |
+| epoch\_end | epoch\_number | `{epoch\_number}` |
+
+## Keepers
+
+### Keeper functions
+
+The epochs keeper provides utility functions to manage epochs.
+
+## Hooks
+
+```go
+// AfterEpochEnd is called at the first block whose timestamp is after the
+// epoch's end time.
+AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64)
+// BeforeEpochStart is called in the block following the epoch end block.
+BeforeEpochStart(ctx sdk.Context, epochIdentifier string, epochNumber int64)
+```
+
+### How modules receive hooks
+
+In its hook receiver functions, a module should filter on `epochIdentifier`
+and execute its logic only for the specific identifiers it cares about.
+The `epochIdentifier` to filter on can be stored in the module's `Params`
+so that it can be modified by governance.
+
+The standard pattern for this looks as follows:
+
+```golang
+func (k MyModuleKeeper) AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64) {
+    params := k.GetParams(ctx)
+    if epochIdentifier == params.DistrEpochIdentifier {
+        // my logic
+    }
+}
+```
+
+### Panic isolation
+
+If a given epoch hook panics, its state update is reverted, but we keep
+proceeding through the remaining hooks. This allows more advanced epoch
+logic to be used, without concern over state machine halting, or halting
+subsequent modules.
+
+This does mean that if there is behavior you expect from a prior epoch
+hook, and that epoch hook reverted, your hook may also have an issue. So
+do keep in mind "what if a prior hook didn't get executed" in the safety
+checks you consider for a new epoch hook.
+
+## Queries
+
+The Epochs module provides the following queries to check the module's state.
+
+```protobuf
+service Query {
+  // EpochInfos provides the running epochInfos
+  rpc EpochInfos(QueryEpochsInfoRequest) returns (QueryEpochsInfoResponse) {}
+  // CurrentEpoch provides the current epoch of the specified identifier
+  rpc CurrentEpoch(QueryCurrentEpochRequest) returns (QueryCurrentEpochResponse) {}
+}
+```
+
+### Epoch Infos
+
+Query the currently running epochInfos:
+
+```sh
+ query epochs epoch-infos
+```
+
+
+**Example**
+
+An example output:
+
+```sh expandable
+epochs:
+- current_epoch: "183"
+  current_epoch_start_height: "2438409"
+  current_epoch_start_time: "2021-12-18T17:16:09.898160996Z"
+  duration: 86400s
+  epoch_counting_started: true
+  identifier: day
+  start_time: "2021-06-18T17:00:00Z"
+- current_epoch: "26"
+  current_epoch_start_height: "2424854"
+  current_epoch_start_time: "2021-12-17T17:02:07.229632445Z"
+  duration: 604800s
+  epoch_counting_started: true
+  identifier: week
+  start_time: "2021-06-18T17:00:00Z"
+```
+
+
+
+### Current Epoch
+
+Query the current epoch by the specified identifier:
+
+```sh
+ query epochs current-epoch [identifier]
+```
+
+
+**Example**
+
+Query the 
current `day` epoch: + +```sh + query epochs current-epoch day +``` + +Which in this example outputs: + +```sh +current_epoch: "183" +``` + + diff --git a/docs/sdk/v0.53/documentation/module-system/evidence.mdx b/docs/sdk/v0.53/documentation/module-system/evidence.mdx new file mode 100644 index 00000000..4d93ad77 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/evidence.mdx @@ -0,0 +1,594 @@ +--- +title: '`x/evidence`' +description: Concepts State Messages Events Parameters BeginBlock Client CLI REST gRPC +--- + +* [Concepts](#concepts) +* [State](#state) +* [Messages](#messages) +* [Events](#events) +* [Parameters](#parameters) +* [BeginBlock](#beginblock) +* [Client](#client) + * [CLI](#cli) + * [REST](#rest) + * [gRPC](#grpc) + +## Abstract + +`x/evidence` is an implementation of a Cosmos SDK module, per [ADR 009](docs/sdk/next/documentation/legacy/adr-comprehensive), +that allows for the submission and handling of arbitrary evidence of misbehavior such +as equivocation and counterfactual signing. + +The evidence module differs from standard evidence handling which typically expects the +underlying consensus engine, e.g. CometBFT, to automatically submit evidence when +it is discovered by allowing clients and foreign chains to submit more complex evidence +directly. + +All concrete evidence types must implement the `Evidence` interface contract. Submitted +`Evidence` is first routed through the evidence module's `Router` in which it attempts +to find a corresponding registered `Handler` for that specific `Evidence` type. +Each `Evidence` type must have a `Handler` registered with the evidence module's +keeper in order for it to be successfully routed and executed. + +Each corresponding handler must also fulfill the `Handler` interface contract. The +`Handler` for a given `Evidence` type can perform any arbitrary state transitions +such as slashing, jailing, and tombstoning. 
+
+## Concepts
+
+### Evidence
+
+Any concrete type of evidence submitted to the `x/evidence` module must fulfill the
+`Evidence` contract outlined below. Not all concrete types of evidence will fulfill
+this contract in the same way and some data may be entirely irrelevant to certain
+types of evidence. An additional `ValidatorEvidence`, which extends `Evidence`,
+has also been created to define a contract for evidence against malicious validators.
+
+```go expandable
+// Evidence defines the contract which concrete evidence types of misbehavior
+// must implement.
+type Evidence interface {
+    proto.Message
+
+    Route() string
+    String() string
+    Hash() []byte
+    ValidateBasic() error
+
+    // Height at which the infraction occurred
+    GetHeight() int64
+}
+
+// ValidatorEvidence extends Evidence interface to define contract
+// for evidence against malicious validators
+type ValidatorEvidence interface {
+    Evidence
+
+    // The consensus address of the malicious validator at time of infraction
+    GetConsensusAddress() sdk.ConsAddress
+
+    // The total power of the malicious validator at time of infraction
+    GetValidatorPower() int64
+
+    // The total validator set power at time of infraction
+    GetTotalPower() int64
+}
+```
+
+### Registration & Handling
+
+The `x/evidence` module must first know about all types of evidence it is expected
+to handle. This is accomplished by registering the `Route` method in the `Evidence`
+contract with what is known as a `Router` (defined below). The `Router` accepts
+`Evidence` and attempts to find the corresponding `Handler` for the `Evidence`
+via the `Route` method.
+
+```go
+type Router interface {
+    AddRoute(r string, h Handler) Router
+    HasRoute(r string) bool
+    GetRoute(path string) Handler
+    Seal()
+    Sealed() bool
+}
+```
+
+The `Handler` (defined below) is responsible for executing the entirety of the
+business logic for handling `Evidence`. 
This typically includes validating the
+evidence, both stateless checks via `ValidateBasic` and stateful checks via any
+keepers provided to the `Handler`. In addition, the `Handler` may also perform
+actions such as slashing and jailing a validator. All `Evidence` handled
+by the `Handler` should be persisted.
+
+```go
+// Handler defines an agnostic Evidence handler. The handler is responsible
+// for executing all corresponding business logic necessary for verifying the
+// evidence as valid. In addition, the Handler may execute any necessary
+// slashing and potential jailing.
+type Handler func(context.Context, Evidence) error
+```
+
+## State
+
+Currently the `x/evidence` module only stores valid submitted `Evidence` in state.
+The evidence state is also stored and exported in the `x/evidence` module's `GenesisState`.
+
+```protobuf
+// GenesisState defines the evidence module's genesis state.
+message GenesisState {
+  // evidence defines all the evidence at genesis.
+  repeated google.protobuf.Any evidence = 1;
+}
+```
+
+All `Evidence` is retrieved and stored via a prefix `KVStore` using prefix `0x00` (`KeyPrefixEvidence`).
+
+## Messages
+
+### MsgSubmitEvidence
+
+Evidence is submitted through a `MsgSubmitEvidence` message:
+
+```protobuf
+// MsgSubmitEvidence represents a message that supports submitting arbitrary
+// Evidence of misbehavior such as equivocation or counterfactual signing.
+message MsgSubmitEvidence {
+  string submitter = 1;
+  google.protobuf.Any evidence = 2;
+}
+```
+
+Note, the `Evidence` of a `MsgSubmitEvidence` message must have a corresponding
+`Handler` registered with the `x/evidence` module's `Router` in order to be processed
+and routed correctly.
+
+Given the `Evidence` is registered with a corresponding `Handler`, it is processed
+as follows:
+
+```go expandable
+func SubmitEvidence(ctx Context, evidence Evidence) error {
+    if _, err := GetEvidence(ctx, evidence.Hash()); err == nil {
+        return errorsmod.Wrap(types.ErrEvidenceExists, strings.ToUpper(hex.EncodeToString(evidence.Hash())))
+    }
+    if !router.HasRoute(evidence.Route()) {
+        return errorsmod.Wrap(types.ErrNoEvidenceHandlerExists, evidence.Route())
+    }
+
+    handler := router.GetRoute(evidence.Route())
+    if err := handler(ctx, evidence); err != nil {
+        return errorsmod.Wrap(types.ErrInvalidEvidence, err.Error())
+    }
+
+    ctx.EventManager().EmitEvent(
+        sdk.NewEvent(
+            types.EventTypeSubmitEvidence,
+            sdk.NewAttribute(types.AttributeKeyEvidenceHash, strings.ToUpper(hex.EncodeToString(evidence.Hash()))),
+        ),
+    )
+
+    SetEvidence(ctx, evidence)
+
+    return nil
+}
+```
+
+First, there must not already exist valid submitted `Evidence` of the exact same
+type. Secondly, the `Evidence` is routed to the `Handler` and executed. Finally,
+if there is no error in handling the `Evidence`, an event is emitted and it is persisted to state.
+
+## Events
+
+The `x/evidence` module emits the following events:
+
+### Handlers
+
+#### MsgSubmitEvidence
+
+| Type             | Attribute Key  | Attribute Value   |
+| ---------------- | -------------- | ----------------- |
+| submit\_evidence | evidence\_hash | `{evidenceHash}`  |
+| message          | module         | evidence          |
+| message          | sender         | `{senderAddress}` |
+| message          | action         | submit\_evidence  |
+
+## Parameters
+
+The evidence module does not contain any parameters.
+
+## BeginBlock
+
+### Evidence Handling
+
+CometBFT blocks can include
+[Evidence](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md#evidence) that indicates if a validator committed malicious behavior. 
The relevant information is forwarded to the application as ABCI Evidence in `abci.RequestBeginBlock` so that the validator can be punished accordingly.
+
+#### Equivocation
+
+The Cosmos SDK handles two types of evidence inside the ABCI `BeginBlock`:
+
+* `DuplicateVoteEvidence`,
+* `LightClientAttackEvidence`.
+
+The evidence module handles these two evidence types the same way. First, the Cosmos SDK converts the CometBFT concrete evidence type to an SDK `Evidence` interface using `Equivocation` as the concrete type.
+
+```protobuf
+// Equivocation implements the Evidence interface and defines evidence of double
+// signing misbehavior.
+message Equivocation {
+  option (amino.name) = "cosmos-sdk/Equivocation";
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.equal) = false;
+
+  // height is the equivocation height.
+  int64 height = 1;
+
+  // time is the equivocation time.
+  google.protobuf.Timestamp time = 2
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+
+  // power is the equivocation validator power.
+  int64 power = 3;
+
+  // consensus_address is the equivocation validator consensus address.
+  string consensus_address = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+For some `Equivocation` submitted in `block` to be valid, it must satisfy:
+
+`Evidence.Timestamp >= block.Timestamp - MaxEvidenceAge`
+
+Where:
+
+* `Evidence.Timestamp` is the timestamp in the block at height `Evidence.Height`
+* `block.Timestamp` is the current block timestamp.
+
+If valid `Equivocation` evidence is included in a block, the validator's stake is
+reduced (slashed) by `SlashFractionDoubleSign`, as defined by the `x/slashing` module,
+of what their stake was when the infraction occurred, rather than when the evidence
+was discovered. We want to "follow the stake", i.e., the stake that contributed to
+the infraction should be slashed, even if it has since been redelegated or started
+unbonding.
+ +In addition, the validator is permanently jailed and tombstoned to make it impossible for that +validator to ever re-enter the validator set. + +The `Equivocation` evidence is handled as follows: + +```go +// in the case of a lunatic attack. +func (k Keeper) handleEquivocationEvidence(ctx context.Context, evidence *types.Equivocation) error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + logger := k.Logger(ctx) + consAddr := evidence.GetConsensusAddress(k.stakingKeeper.ConsensusAddressCodec()) + + validator, err := k.stakingKeeper.ValidatorByConsAddr(ctx, consAddr) + if err != nil { + return err + } + if validator == nil || validator.IsUnbonded() { + // Defensive: Simulation doesn't take unbonding periods into account, and + // CometBFT might break this assumption at some point. + return nil + } + + if len(validator.GetOperator()) != 0 { + if _, err := k.slashingKeeper.GetPubkey(ctx, consAddr.Bytes()); err != nil { + // Ignore evidence that cannot be handled. + // + // NOTE: We used to panic with: + // `panic(fmt.Sprintf("Validator consensus-address %v not found", consAddr))`, + // but this couples the expectations of the app to both CometBFT and + // the simulator. Both are expected to provide the full range of + // allowable but none of the disallowed evidence types. Instead of + // getting this coordination right, it is easier to relax the + // constraints and ignore evidence that cannot be handled. + logger.Error(fmt.Sprintf("ignore evidence; expected public key for validator %s not found", consAddr)) + return nil + } + } + + // calculate the age of the evidence + infractionHeight := evidence.GetHeight() + infractionTime := evidence.GetTime() + ageDuration := sdkCtx.BlockHeader().Time.Sub(infractionTime) + ageBlocks := sdkCtx.BlockHeader().Height - infractionHeight + + // Reject evidence if the double-sign is too old. Evidence is considered stale + // if the difference in time and number of blocks is greater than the allowed + // parameters defined. 
+ cp := sdkCtx.ConsensusParams() + if cp.Evidence != nil { + if ageDuration > cp.Evidence.MaxAgeDuration && ageBlocks > cp.Evidence.MaxAgeNumBlocks { + logger.Info( + "ignored equivocation; evidence too old", + "validator", consAddr, + "infraction_height", infractionHeight, + "max_age_num_blocks", cp.Evidence.MaxAgeNumBlocks, + "infraction_time", infractionTime, + "max_age_duration", cp.Evidence.MaxAgeDuration, + ) + return nil + } + } + + if ok := k.slashingKeeper.HasValidatorSigningInfo(ctx, consAddr); !ok { + panic(fmt.Sprintf("expected signing info for validator %s but not found", consAddr)) + } + + // ignore if the validator is already tombstoned + if k.slashingKeeper.IsTombstoned(ctx, consAddr) { + logger.Info( + "ignored equivocation; validator already tombstoned", + "validator", consAddr, + "infraction_height", infractionHeight, + "infraction_time", infractionTime, + ) + return nil + } + + logger.Info( + "confirmed equivocation", + "validator", consAddr, + "infraction_height", infractionHeight, + "infraction_time", infractionTime, + ) + + // We need to retrieve the stake distribution which signed the block, so we + // subtract ValidatorUpdateDelay from the evidence height. + // Note, that this *can* result in a negative "distributionHeight", up to + // -ValidatorUpdateDelay, i.e. at the end of the + // pre-genesis block (none) = at the beginning of the genesis block. + // That's fine since this is just used to filter unbonding delegations & redelegations. + distributionHeight := infractionHeight - sdk.ValidatorUpdateDelay + + // Slash validator. The `power` is the int64 power of the validator as provided + // to/by CometBFT. This value is validator.Tokens as sent to CometBFT via + // ABCI, and now received as evidence. The fraction is passed in to separately + // to slash unbonding and rebonding delegations. 
+ slashFractionDoubleSign, err := k.slashingKeeper.SlashFractionDoubleSign(ctx) + if err != nil { + return err + } + + err = k.slashingKeeper.SlashWithInfractionReason( + ctx, + consAddr, + slashFractionDoubleSign, + evidence.GetValidatorPower(), distributionHeight, + stakingtypes.Infraction_INFRACTION_DOUBLE_SIGN, + ) + if err != nil { + return err + } + + // Jail the validator if not already jailed. This will begin unbonding the + // validator if not already unbonding (tombstoned). + if !validator.IsJailed() { + err = k.slashingKeeper.Jail(ctx, consAddr) + if err != nil { + return err + } + } + +``` + +**Note:** The slashing, jailing, and tombstoning calls are delegated through the `x/slashing` module +that emits informative events and finally delegates calls to the `x/staking` module. See documentation +on slashing and jailing in [State Transitions](/docs/sdk/v0.53/documentation/module-system/staking#state-transitions). + +## Client + +### CLI + +A user can query and interact with the `evidence` module using the CLI. + +#### Query + +The `query` commands allows users to query `evidence` state. + +```bash +simd query evidence --help +``` + +#### evidence + +The `evidence` command allows users to list all evidence or evidence by hash. 
+ +Usage: + +```bash +simd query evidence evidence [flags] +``` + +To query evidence by hash + +Example: + +```bash +simd query evidence evidence "DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660" +``` + +Example Output: + +```bash +evidence: + consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h + height: 11 + power: 100 + time: "2021-10-20T16:08:38.194017624Z" +``` + +To get all evidence + +Example: + +```bash +simd query evidence list +``` + +Example Output: + +```bash +evidence: + consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h + height: 11 + power: 100 + time: "2021-10-20T16:08:38.194017624Z" +pagination: + next_key: null + total: "1" +``` + +### REST + +A user can query the `evidence` module using REST endpoints. + +#### Evidence + +Get evidence by hash + +```bash +/cosmos/evidence/v1beta1/evidence/{hash} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence/DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660" +``` + +Example Output: + +```bash +{ + "evidence": { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } +} +``` + +#### All evidence + +Get all evidence + +```bash +/cosmos/evidence/v1beta1/evidence +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence" +``` + +Example Output: + +```bash expandable +{ + "evidence": [ + { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### gRPC + +A user can query the `evidence` module using gRPC endpoints. 
+ +#### Evidence + +Get evidence by hash + +```bash +cosmos.evidence.v1beta1.Query/Evidence +``` + +Example: + +```bash +grpcurl -plaintext -d '{"evidence_hash":"DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"}' localhost:9090 cosmos.evidence.v1beta1.Query/Evidence +``` + +Example Output: + +```bash +{ + "evidence": { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } +} +``` + +#### All evidence + +Get all evidence + +```bash +cosmos.evidence.v1beta1.Query/AllEvidence +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.evidence.v1beta1.Query/AllEvidence +``` + +Example Output: + +```bash expandable +{ + "evidence": [ + { + "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h", + "height": "11", + "power": "100", + "time": "2021-10-20T16:08:38.194017624Z" + } + ], + "pagination": { + "total": "1" + } +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/feegrant.mdx b/docs/sdk/v0.53/documentation/module-system/feegrant.mdx new file mode 100644 index 00000000..0d80b995 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/feegrant.mdx @@ -0,0 +1,3771 @@ +--- +title: '`x/feegrant`' +description: >- + This document specifies the fee grant module. For the full ADR, please see Fee + Grant ADR-029. +--- + +## Abstract + +This document specifies the fee grant module. For the full ADR, please see [Fee Grant ADR-029](docs/sdk/next/documentation/legacy/adr-comprehensive). + +This module allows accounts to grant fee allowances and to use fees from their accounts. Grantees can execute any transaction without the need to maintain sufficient fees. 
+
+## Contents
+
+* [Concepts](#concepts)
+* [State](#state)
+  * [FeeAllowance](#feeallowance)
+  * [FeeAllowanceQueue](#feeallowancequeue)
+* [Messages](#messages)
+  * [Msg/GrantAllowance](#msggrantallowance)
+  * [Msg/RevokeAllowance](#msgrevokeallowance)
+* [Events](#events)
+* [Msg Server](#msg-server)
+  * [MsgGrantAllowance](#msggrantallowance-1)
+  * [MsgRevokeAllowance](#msgrevokeallowance-1)
+  * [Exec fee allowance](#exec-fee-allowance)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+
+## Concepts
+
+### Grant
+
+`Grant` is stored in the KVStore to record a grant with full context. Every grant contains `granter`, `grantee` and what kind of `allowance` is granted. `granter` is an account address who is giving permission to `grantee` (the beneficiary account address) to pay for some or all of `grantee`'s transaction fees. `allowance` defines what kind of fee allowance (`BasicAllowance` or `PeriodicAllowance`, see below) is granted to `grantee`. `allowance` accepts an interface which implements `FeeAllowanceI`, encoded as `Any` type. There can be only one existing fee grant for a given `grantee` and `granter` pair; self-grants are not allowed.
+
+```protobuf
+// Grant is stored in the KVStore to record a grant with full context
+message Grant {
+  // granter is the address of the user granting an allowance of their funds.
+  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // grantee is the address of the user being granted an allowance of another user's funds.
+  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // allowance can be any of basic, periodic, allowed fee allowance. 
+  google.protobuf.Any allowance = 3 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"];
+}
+```
+
+`FeeAllowanceI` looks like:
+
+```go expandable
+package feegrant
+
+import (
+    "time"
+
+    sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+// FeeAllowance implementations are tied to a given fee delegator and delegatee,
+// and are used to enforce fee grant limits.
+type FeeAllowanceI interface {
+    // Accept can use fee payment requested as well as timestamp of the current block
+    // to determine whether or not to process this. This is checked in
+    // Keeper.UseGrantedFees and the return values should match how it is handled there.
+    //
+    // If it returns an error, the fee payment is rejected, otherwise it is accepted.
+    // The FeeAllowance implementation is expected to update its internal state
+    // and will be saved again after an acceptance.
+    //
+    // If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
+    // (eg. when it is used up). (See call to RevokeAllowance in Keeper.UseGrantedFees)
+    Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)
+
+    // ValidateBasic should evaluate this FeeAllowance for internal consistency.
+    // Don't allow negative amounts, or negative periods for example.
+    ValidateBasic() error
+
+    // ExpiresAt returns the expiry time of the allowance.
+    ExpiresAt() (*time.Time, error)
+}
+```
+
+### Fee Allowance types
+
+There are three types of fee allowances present at the moment:
+
+* `BasicAllowance`
+* `PeriodicAllowance`
+* `AllowedMsgAllowance`
+
+### BasicAllowance
+
+`BasicAllowance` is permission for `grantee` to use fees from a `granter`'s account. If any of the `spend_limit` or `expiration` reaches its limit, the grant will be removed from the state.
+
+```protobuf
+// BasicAllowance implements Allowance with a one-time grant of coins
+// that optionally expires. The grantee can use up to SpendLimit to cover fees. 
+message BasicAllowance {
+  option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI";
+  option (amino.name) = "cosmos-sdk/BasicAllowance";
+
+  // spend_limit specifies the maximum amount of coins that can be spent
+  // by this allowance and will be updated as coins are spent. If it is
+  // empty, there is no spend limit and any amount of coins can be spent.
+  repeated cosmos.base.v1beta1.Coin spend_limit = 1 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+```
+
+* `spend_limit` is the limit of coins that are allowed to be used from the `granter` account. If it is empty, there is no spend limit and the `grantee` can use any number of available coins from the `granter` account before the expiration.
+
+* `expiration` specifies an optional time when this allowance expires. If the value is left empty, there is no expiry for the grant.
+
+* When a grant is created with empty values for `spend_limit` and `expiration`, it is still a valid grant. It won't restrict the `grantee` from using any number of coins from the `granter`, and it won't have any expiration. The only way to restrict the `grantee` is by revoking the grant.
+
+### PeriodicAllowance
+
+`PeriodicAllowance` is a repeating fee allowance for a specified period. We can specify when the grant expires as well as when the period resets, and we can define the maximum number of coins that can be spent within a given period.
+
+```protobuf
+// PeriodicAllowance extends Allowance to allow for both a maximum cap,
+// as well as a limit per time period. 
+message PeriodicAllowance { + option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"; + option (amino.name) = "cosmos-sdk/PeriodicAllowance"; + + // basic specifies a struct of `BasicAllowance` + BasicAllowance basic = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // period specifies the time duration in which period_spend_limit coins can + // be spent before that allowance is reset + google.protobuf.Duration period = 2 + [(gogoproto.stdduration) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // period_spend_limit specifies the maximum number of coins that can be spent + // in the period + repeated cosmos.base.v1beta1.Coin period_spend_limit = 3 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + + // period_can_spend is the number of coins left to be spent before the period_reset time + repeated cosmos.base.v1beta1.Coin period_can_spend = 4 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" + ]; + + // period_reset is the time at which this period resets and a new one begins, + // it is calculated from the start time of the first transaction after the + // last period ended + google.protobuf.Timestamp period_reset = 5 + [(gogoproto.stdtime) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +* `basic` is the instance of `BasicAllowance` which is optional for periodic fee allowance. If empty, the grant will have no `expiration` and no `spend_limit`. + +* `period` is the specific period of time, after each period passes, `period_can_spend` will be reset. + +* `period_spend_limit` specifies the maximum number of coins that can be spent in the period. + +* `period_can_spend` is the number of coins left to be spent before the period\_reset time. 
+ +* `period_reset` keeps track of when a next period reset should happen. + +### AllowedMsgAllowance + +`AllowedMsgAllowance` is a fee allowance, it can be any of `BasicFeeAllowance`, `PeriodicAllowance` but restricted only to the allowed messages mentioned by the granter. + +```protobuf +// AllowedMsgAllowance creates allowance only for specified message types. +message AllowedMsgAllowance { + option (gogoproto.goproto_getters) = false; + option (cosmos_proto.implements_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"; + option (amino.name) = "cosmos-sdk/AllowedMsgAllowance"; + + // allowance can be any of basic and periodic fee allowance. + google.protobuf.Any allowance = 1 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"]; + + // allowed_messages are the messages for which the grantee has the access. + repeated string allowed_messages = 2; +} +``` + +* `allowance` is either `BasicAllowance` or `PeriodicAllowance`. + +* `allowed_messages` is array of messages allowed to execute the given allowance. + +### FeeGranter flag + +`feegrant` module introduces a `FeeGranter` flag for CLI for the sake of executing transactions with fee granter. When this flag is set, `clientCtx` will append the granter account address for transactions generated through CLI. + +```go expandable +package client + +import ( + + "crypto/tls" + "fmt" + "strings" + "github.com/pkg/errors" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "github.com/tendermint/tendermint/libs/cli" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials" + "google.golang.org/grpc/credentials/insecure" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ ClientContextKey defines the context key used to retrieve a client.Context from +/ a command's Context. 
+const ClientContextKey = sdk.ContextKey("client.context") + +/ SetCmdClientContextHandler is to be used in a command pre-hook execution to +/ read flags that populate a Context and sets that to the command's Context. +func SetCmdClientContextHandler(clientCtx Context, cmd *cobra.Command) (err error) { + clientCtx, err = ReadPersistentCommandFlags(clientCtx, cmd.Flags()) + if err != nil { + return err +} + +return SetCmdClientContext(cmd, clientCtx) +} + +/ ValidateCmd returns unknown command error or Help display if help flag set +func ValidateCmd(cmd *cobra.Command, args []string) + +error { + var unknownCmd string + var skipNext bool + for _, arg := range args { + / search for help flag + if arg == "--help" || arg == "-h" { + return cmd.Help() +} + + / check if the current arg is a flag + switch { + case len(arg) > 0 && (arg[0] == '-'): + / the next arg should be skipped if the current arg is a + / flag and does not use "=" to assign the flag's value + if !strings.Contains(arg, "=") { + skipNext = true +} + +else { + skipNext = false +} + case skipNext: + / skip current arg + skipNext = false + case unknownCmd == "": + / unknown command found + / continue searching for help flag + unknownCmd = arg +} + +} + + / return the help screen if no unknown command is found + if unknownCmd != "" { + err := fmt.Sprintf("unknown command \"%s\" for \"%s\"", unknownCmd, cmd.CalledAs()) + + / build suggestions for unknown argument + if suggestions := cmd.SuggestionsFor(unknownCmd); len(suggestions) > 0 { + err += "\n\nDid you mean this?\n" + for _, s := range suggestions { + err += fmt.Sprintf("\t%v\n", s) +} + +} + +return errors.New(err) +} + +return cmd.Help() +} + +/ ReadPersistentCommandFlags returns a Context with fields set for "persistent" +/ or common flags that do not necessarily change with context. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func ReadPersistentCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + if clientCtx.OutputFormat == "" || flagSet.Changed(cli.OutputFlag) { + output, _ := flagSet.GetString(cli.OutputFlag) + +clientCtx = clientCtx.WithOutputFormat(output) +} + if clientCtx.HomeDir == "" || flagSet.Changed(flags.FlagHome) { + homeDir, _ := flagSet.GetString(flags.FlagHome) + +clientCtx = clientCtx.WithHomeDir(homeDir) +} + if !clientCtx.Simulate || flagSet.Changed(flags.FlagDryRun) { + dryRun, _ := flagSet.GetBool(flags.FlagDryRun) + +clientCtx = clientCtx.WithSimulation(dryRun) +} + if clientCtx.KeyringDir == "" || flagSet.Changed(flags.FlagKeyringDir) { + keyringDir, _ := flagSet.GetString(flags.FlagKeyringDir) + + / The keyring directory is optional and falls back to the home directory + / if omitted. 
+ if keyringDir == "" { + keyringDir = clientCtx.HomeDir +} + +clientCtx = clientCtx.WithKeyringDir(keyringDir) +} + if clientCtx.ChainID == "" || flagSet.Changed(flags.FlagChainID) { + chainID, _ := flagSet.GetString(flags.FlagChainID) + +clientCtx = clientCtx.WithChainID(chainID) +} + if clientCtx.Keyring == nil || flagSet.Changed(flags.FlagKeyringBackend) { + keyringBackend, _ := flagSet.GetString(flags.FlagKeyringBackend) + if keyringBackend != "" { + kr, err := NewKeyringFromBackend(clientCtx, keyringBackend) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithKeyring(kr) +} + +} + if clientCtx.Client == nil || flagSet.Changed(flags.FlagNode) { + rpcURI, _ := flagSet.GetString(flags.FlagNode) + if rpcURI != "" { + clientCtx = clientCtx.WithNodeURI(rpcURI) + +client, err := NewClientFromNode(rpcURI) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithClient(client) +} + +} + if clientCtx.GRPCClient == nil || flagSet.Changed(flags.FlagGRPC) { + grpcURI, _ := flagSet.GetString(flags.FlagGRPC) + if grpcURI != "" { + var dialOpts []grpc.DialOption + + useInsecure, _ := flagSet.GetBool(flags.FlagGRPCInsecure) + if useInsecure { + dialOpts = append(dialOpts, grpc.WithTransportCredentials(insecure.NewCredentials())) +} + +else { + dialOpts = append(dialOpts, grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{ + MinVersion: tls.VersionTLS12, +}))) +} + +grpcClient, err := grpc.Dial(grpcURI, dialOpts...) + if err != nil { + return Context{ +}, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) +} + +} + +return clientCtx, nil +} + +/ readQueryCommandFlags returns an updated Context with fields set based on flags +/ defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func readQueryCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + if clientCtx.Height == 0 || flagSet.Changed(flags.FlagHeight) { + height, _ := flagSet.GetInt64(flags.FlagHeight) + +clientCtx = clientCtx.WithHeight(height) +} + if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { + useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) + +clientCtx = clientCtx.WithUseLedger(useLedger) +} + +return ReadPersistentCommandFlags(clientCtx, flagSet) +} + +/ readTxCommandFlags returns an updated Context with fields set based on flags +/ defined in AddTxFlagsToCmd. An error is returned if any flag query fails. +/ +/ Note, the provided clientCtx may have field pre-populated. 
The following order +/ of precedence occurs: +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func readTxCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { + clientCtx, err := ReadPersistentCommandFlags(clientCtx, flagSet) + if err != nil { + return clientCtx, err +} + if !clientCtx.GenerateOnly || flagSet.Changed(flags.FlagGenerateOnly) { + genOnly, _ := flagSet.GetBool(flags.FlagGenerateOnly) + +clientCtx = clientCtx.WithGenerateOnly(genOnly) +} + if !clientCtx.Offline || flagSet.Changed(flags.FlagOffline) { + offline, _ := flagSet.GetBool(flags.FlagOffline) + +clientCtx = clientCtx.WithOffline(offline) +} + if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { + useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) + +clientCtx = clientCtx.WithUseLedger(useLedger) +} + if clientCtx.BroadcastMode == "" || flagSet.Changed(flags.FlagBroadcastMode) { + bMode, _ := flagSet.GetString(flags.FlagBroadcastMode) + +clientCtx = clientCtx.WithBroadcastMode(bMode) +} + if !clientCtx.SkipConfirm || flagSet.Changed(flags.FlagSkipConfirmation) { + skipConfirm, _ := flagSet.GetBool(flags.FlagSkipConfirmation) + +clientCtx = clientCtx.WithSkipConfirmation(skipConfirm) +} + if clientCtx.SignModeStr == "" || flagSet.Changed(flags.FlagSignMode) { + signModeStr, _ := flagSet.GetString(flags.FlagSignMode) + +clientCtx = clientCtx.WithSignModeStr(signModeStr) +} + if clientCtx.FeePayer == nil || flagSet.Changed(flags.FlagFeePayer) { + payer, _ := flagSet.GetString(flags.FlagFeePayer) + if payer != "" { + payerAcc, err := sdk.AccAddressFromBech32(payer) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFeePayerAddress(payerAcc) +} + +} + if clientCtx.FeeGranter == 
nil || flagSet.Changed(flags.FlagFeeGranter) { + granter, _ := flagSet.GetString(flags.FlagFeeGranter) + if granter != "" { + granterAcc, err := sdk.AccAddressFromBech32(granter) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFeeGranterAddress(granterAcc) +} + +} + if clientCtx.From == "" || flagSet.Changed(flags.FlagFrom) { + from, _ := flagSet.GetString(flags.FlagFrom) + +fromAddr, fromName, keyType, err := GetFromFields(clientCtx, clientCtx.Keyring, from) + if err != nil { + return clientCtx, err +} + +clientCtx = clientCtx.WithFrom(from).WithFromAddress(fromAddr).WithFromName(fromName) + + / If the `from` signer account is a ledger key, we need to use + / SIGN_MODE_AMINO_JSON, because ledger doesn't support proto yet. + / ref: https://github.com/cosmos/cosmos-sdk/issues/8109 + if keyType == keyring.TypeLedger && clientCtx.SignModeStr != flags.SignModeLegacyAminoJSON && !clientCtx.LedgerHasProtobuf { + fmt.Println("Default sign-mode 'direct' not supported by Ledger, using sign-mode 'amino-json'.") + +clientCtx = clientCtx.WithSignModeStr(flags.SignModeLegacyAminoJSON) +} + +} + if !clientCtx.IsAux || flagSet.Changed(flags.FlagAux) { + isAux, _ := flagSet.GetBool(flags.FlagAux) + +clientCtx = clientCtx.WithAux(isAux) + if isAux { + / If the user didn't explicitly set an --output flag, use JSON by + / default. + if clientCtx.OutputFormat == "" || !flagSet.Changed(cli.OutputFlag) { + clientCtx = clientCtx.WithOutputFormat("json") +} + + / If the user didn't explicitly set a --sign-mode flag, use + / DIRECT_AUX by default. + if clientCtx.SignModeStr == "" || !flagSet.Changed(flags.FlagSignMode) { + clientCtx = clientCtx.WithSignModeStr(flags.SignModeDirectAux) +} + +} + +} + +return clientCtx, nil +} + +/ GetClientQueryContext returns a Context from a command with fields set based on flags +/ defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. 
+/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func GetClientQueryContext(cmd *cobra.Command) (Context, error) { + ctx := GetClientContextFromCmd(cmd) + +return readQueryCommandFlags(ctx, cmd.Flags()) +} + +/ GetClientTxContext returns a Context from a command with fields set based on flags +/ defined in AddTxFlagsToCmd. An error is returned if any flag query fails. +/ +/ - client.Context field not pre-populated & flag not set: uses default flag value +/ - client.Context field not pre-populated & flag set: uses set flag value +/ - client.Context field pre-populated & flag not set: uses pre-populated value +/ - client.Context field pre-populated & flag set: uses set flag value +func GetClientTxContext(cmd *cobra.Command) (Context, error) { + ctx := GetClientContextFromCmd(cmd) + +return readTxCommandFlags(ctx, cmd.Flags()) +} + +/ GetClientContextFromCmd returns a Context from a command or an empty Context +/ if it has not been set. +func GetClientContextFromCmd(cmd *cobra.Command) + +Context { + if v := cmd.Context().Value(ClientContextKey); v != nil { + clientCtxPtr := v.(*Context) + +return *clientCtxPtr +} + +return Context{ +} +} + +/ SetCmdClientContext sets a command's Context value to the provided argument. 
+func SetCmdClientContext(cmd *cobra.Command, clientCtx Context)
+
+error {
+ v := cmd.Context().Value(ClientContextKey)
+ if v == nil {
+ return errors.New("client context not set")
+}
+ clientCtxPtr := v.(*Context)
+ *clientCtxPtr = clientCtx
+
+ return nil
+}
+```
+
+```go expandable
+package tx
+
+import (
+
+	"bufio"
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"os"
+
+	gogogrpc "github.com/cosmos/gogoproto/grpc"
+	"github.com/spf13/pflag"
+	"github.com/cosmos/cosmos-sdk/client"
+	"github.com/cosmos/cosmos-sdk/client/input"
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	"github.com/cosmos/cosmos-sdk/types/tx"
+	"github.com/cosmos/cosmos-sdk/types/tx/signing"
+	authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"
+)
+
+/ GenerateOrBroadcastTxCLI will either generate and print an unsigned transaction
+/ or sign it and broadcast it, returning an error upon failure.
+func GenerateOrBroadcastTxCLI(clientCtx client.Context, flagSet *pflag.FlagSet, msgs ...sdk.Msg)
+
+error {
+ txf := NewFactoryCLI(clientCtx, flagSet)
+
+return GenerateOrBroadcastTxWithFactory(clientCtx, txf, msgs...)
+}
+
+/ GenerateOrBroadcastTxWithFactory will either generate and print an unsigned transaction
+/ or sign it and broadcast it, returning an error upon failure.
+func GenerateOrBroadcastTxWithFactory(clientCtx client.Context, txf Factory, msgs ...sdk.Msg)
+
+error {
+ / Validate all msgs before generating or broadcasting the tx.
+ / We were calling ValidateBasic separately in each CLI handler before.
+ / Right now, we're factorizing that call inside this function.
+ / ref: https://github.com/cosmos/cosmos-sdk/pull/9236#discussion_r623803504
+ for _, msg := range msgs {
+ if err := msg.ValidateBasic(); err != nil {
+ return err
+}
+
+}
+
+ / If the --aux flag is set, we simply generate and print the AuxSignerData.
+ if clientCtx.IsAux { + auxSignerData, err := makeAuxSignerData(clientCtx, txf, msgs...) + if err != nil { + return err +} + +return clientCtx.PrintProto(&auxSignerData) +} + if clientCtx.GenerateOnly { + return txf.PrintUnsignedTx(clientCtx, msgs...) +} + +return BroadcastTx(clientCtx, txf, msgs...) +} + +/ BroadcastTx attempts to generate, sign and broadcast a transaction with the +/ given set of messages. It will also simulate gas requirements if necessary. +/ It will return an error upon failure. +func BroadcastTx(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) + +error { + txf, err := txf.Prepare(clientCtx) + if err != nil { + return err +} + if txf.SimulateAndExecute() || clientCtx.Simulate { + _, adjusted, err := CalculateGas(clientCtx, txf, msgs...) + if err != nil { + return err +} + +txf = txf.WithGas(adjusted) + _, _ = fmt.Fprintf(os.Stderr, "%s\n", GasEstimateResponse{ + GasEstimate: txf.Gas() +}) +} + if clientCtx.Simulate { + return nil +} + +tx, err := txf.BuildUnsignedTx(msgs...) 
+ if err != nil { + return err +} + if !clientCtx.SkipConfirm { + txBytes, err := clientCtx.TxConfig.TxJSONEncoder()(tx.GetTx()) + if err != nil { + return err +} + if err := clientCtx.PrintRaw(json.RawMessage(txBytes)); err != nil { + _, _ = fmt.Fprintf(os.Stderr, "%s\n", txBytes) +} + buf := bufio.NewReader(os.Stdin) + +ok, err := input.GetConfirmation("confirm transaction before signing and broadcasting", buf, os.Stderr) + if err != nil || !ok { + _, _ = fmt.Fprintf(os.Stderr, "%s\n", "cancelled transaction") + +return err +} + +} + +err = Sign(txf, clientCtx.GetFromName(), tx, true) + if err != nil { + return err +} + +txBytes, err := clientCtx.TxConfig.TxEncoder()(tx.GetTx()) + if err != nil { + return err +} + + / broadcast to a Tendermint node + res, err := clientCtx.BroadcastTx(txBytes) + if err != nil { + return err +} + +return clientCtx.PrintProto(res) +} + +/ CalculateGas simulates the execution of a transaction and returns the +/ simulation response obtained by the query and the adjusted gas amount. +func CalculateGas( + clientCtx gogogrpc.ClientConn, txf Factory, msgs ...sdk.Msg, +) (*tx.SimulateResponse, uint64, error) { + txBytes, err := txf.BuildSimTx(msgs...) + if err != nil { + return nil, 0, err +} + txSvcClient := tx.NewServiceClient(clientCtx) + +simRes, err := txSvcClient.Simulate(context.Background(), &tx.SimulateRequest{ + TxBytes: txBytes, +}) + if err != nil { + return nil, 0, err +} + +return simRes, uint64(txf.GasAdjustment() * float64(simRes.GasInfo.GasUsed)), nil +} + +/ SignWithPrivKey signs a given tx with the given private key, and returns the +/ corresponding SignatureV2 if the signing is successful. +func SignWithPrivKey( + signMode signing.SignMode, signerData authsigning.SignerData, + txBuilder client.TxBuilder, priv cryptotypes.PrivKey, txConfig client.TxConfig, + accSeq uint64, +) (signing.SignatureV2, error) { + var sigV2 signing.SignatureV2 + + / Generate the bytes to be signed. 
+ signBytes, err := txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx())
+ if err != nil {
+ return sigV2, err
+}
+
+ / Sign those bytes
+ signature, err := priv.Sign(signBytes)
+ if err != nil {
+ return sigV2, err
+}
+
+ / Construct the SignatureV2 struct
+ sigData := signing.SingleSignatureData{
+ SignMode: signMode,
+ Signature: signature,
+}
+
+sigV2 = signing.SignatureV2{
+ PubKey: priv.PubKey(),
+ Data: &sigData,
+ Sequence: accSeq,
+}
+
+return sigV2, nil
+}
+
+/ countDirectSigners counts the number of DIRECT signers in a signature data.
+func countDirectSigners(data signing.SignatureData)
+
+int {
+ switch data := data.(type) {
+ case *signing.SingleSignatureData:
+ if data.SignMode == signing.SignMode_SIGN_MODE_DIRECT {
+ return 1
+}
+
+return 0
+ case *signing.MultiSignatureData:
+ directSigners := 0
+ for _, d := range data.Signatures {
+ directSigners += countDirectSigners(d)
+}
+
+return directSigners
+ default:
+ panic("unreachable case")
+}
+}
+
+/ checkMultipleSigners checks that there can be maximum one DIRECT signer in
+/ a tx.
+func checkMultipleSigners(tx authsigning.Tx)
+
+error {
+ directSigners := 0
+ sigsV2, err := tx.GetSignaturesV2()
+ if err != nil {
+ return err
+}
+ for _, sig := range sigsV2 {
+ directSigners += countDirectSigners(sig.Data)
+ if directSigners > 1 {
+ return sdkerrors.ErrNotSupported.Wrap("txs signed with CLI can have maximum 1 DIRECT signer")
+}
+
+}
+
+return nil
+}
+
+/ Sign signs a given tx with a named key. The bytes signed over are canonical.
+/ The resulting signature will be added to the transaction builder overwriting the previous
+/ ones if overwrite=true (otherwise, the signature will be appended).
+/ Signing a transaction with multiple signers in the DIRECT mode is not supported and will
+/ return an error.
+/ An error is returned upon failure.
+func Sign(txf Factory, name string, txBuilder client.TxBuilder, overwriteSig bool) + +error { + if txf.keybase == nil { + return errors.New("keybase must be set prior to signing a transaction") +} + signMode := txf.signMode + if signMode == signing.SignMode_SIGN_MODE_UNSPECIFIED { + / use the SignModeHandler's default mode if unspecified + signMode = txf.txConfig.SignModeHandler().DefaultMode() +} + +k, err := txf.keybase.Key(name) + if err != nil { + return err +} + +pubKey, err := k.GetPubKey() + if err != nil { + return err +} + signerData := authsigning.SignerData{ + ChainID: txf.chainID, + AccountNumber: txf.accountNumber, + Sequence: txf.sequence, + PubKey: pubKey, + Address: sdk.AccAddress(pubKey.Address()).String(), +} + + / For SIGN_MODE_DIRECT, calling SetSignatures calls setSignerInfos on + / TxBuilder under the hood, and SignerInfos is needed to generated the + / sign bytes. This is the reason for setting SetSignatures here, with a + / nil signature. + / + / Note: this line is not needed for SIGN_MODE_LEGACY_AMINO, but putting it + / also doesn't affect its generated sign bytes, so for code's simplicity + / sake, we put it here. + sigData := signing.SingleSignatureData{ + SignMode: signMode, + Signature: nil, +} + sig := signing.SignatureV2{ + PubKey: pubKey, + Data: &sigData, + Sequence: txf.Sequence(), +} + +var prevSignatures []signing.SignatureV2 + if !overwriteSig { + prevSignatures, err = txBuilder.GetTx().GetSignaturesV2() + if err != nil { + return err +} + +} + / Overwrite or append signer infos. + var sigs []signing.SignatureV2 + if overwriteSig { + sigs = []signing.SignatureV2{ + sig +} + +} + +else { + sigs = append(sigs, prevSignatures...) + +sigs = append(sigs, sig) +} + if err := txBuilder.SetSignatures(sigs...); err != nil { + return err +} + if err := checkMultipleSigners(txBuilder.GetTx()); err != nil { + return err +} + + / Generate the bytes to be signed. 
+ bytesToSign, err := txf.txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) + if err != nil { + return err +} + + / Sign those bytes + sigBytes, _, err := txf.keybase.Sign(name, bytesToSign) + if err != nil { + return err +} + + / Construct the SignatureV2 struct + sigData = signing.SingleSignatureData{ + SignMode: signMode, + Signature: sigBytes, +} + +sig = signing.SignatureV2{ + PubKey: pubKey, + Data: &sigData, + Sequence: txf.Sequence(), +} + if overwriteSig { + err = txBuilder.SetSignatures(sig) +} + +else { + prevSignatures = append(prevSignatures, sig) + +err = txBuilder.SetSignatures(prevSignatures...) +} + if err != nil { + return fmt.Errorf("unable to set signatures on payload: %w", err) +} + + / Run optional preprocessing if specified. By default, this is unset + / and will return nil. + return txf.PreprocessTx(name, txBuilder) +} + +/ GasEstimateResponse defines a response definition for tx gas estimation. +type GasEstimateResponse struct { + GasEstimate uint64 `json:"gas_estimate" yaml:"gas_estimate"` +} + +func (gr GasEstimateResponse) + +String() + +string { + return fmt.Sprintf("gas estimate: %d", gr.GasEstimate) +} + +/ makeAuxSignerData generates an AuxSignerData from the client inputs. +func makeAuxSignerData(clientCtx client.Context, f Factory, msgs ...sdk.Msg) (tx.AuxSignerData, error) { + b := NewAuxTxBuilder() + +fromAddress, name, _, err := client.GetFromFields(clientCtx, clientCtx.Keyring, clientCtx.From) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetAddress(fromAddress.String()) + if clientCtx.Offline { + b.SetAccountNumber(f.accountNumber) + +b.SetSequence(f.sequence) +} + +else { + accNum, seq, err := clientCtx.AccountRetriever.GetAccountNumberSequence(clientCtx, fromAddress) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetAccountNumber(accNum) + +b.SetSequence(seq) +} + +err = b.SetMsgs(msgs...) 
+ if err != nil { + return tx.AuxSignerData{ +}, err +} + if f.tip != nil { + if _, err := sdk.AccAddressFromBech32(f.tip.Tipper); err != nil { + return tx.AuxSignerData{ +}, sdkerrors.ErrInvalidAddress.Wrap("tipper must be a bech32 address") +} + +b.SetTip(f.tip) +} + +err = b.SetSignMode(f.SignMode()) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +key, err := clientCtx.Keyring.Key(name) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +pub, err := key.GetPubKey() + if err != nil { + return tx.AuxSignerData{ +}, err +} + +err = b.SetPubKey(pub) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetChainID(clientCtx.ChainID) + +signBz, err := b.GetSignBytes() + if err != nil { + return tx.AuxSignerData{ +}, err +} + +sig, _, err := clientCtx.Keyring.Sign(name, signBz) + if err != nil { + return tx.AuxSignerData{ +}, err +} + +b.SetSignature(sig) + +return b.GetAuxSignerData() +} +``` + +```go expandable +package tx + +import ( + + "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +/ wrapper is a wrapper around the tx.Tx proto.Message which retain the raw +/ body and auth_info bytes. +type wrapper struct { + cdc codec.Codec + + tx *tx.Tx + + / bodyBz represents the protobuf encoding of TxBody. This should be encoding + / from the client using TxRaw if the tx was decoded from the wire + bodyBz []byte + + / authInfoBz represents the protobuf encoding of TxBody. 
This should be encoding + / from the client using TxRaw if the tx was decoded from the wire + authInfoBz []byte + + txBodyHasUnknownNonCriticals bool +} + +var ( + _ authsigning.Tx = &wrapper{ +} + _ client.TxBuilder = &wrapper{ +} + _ tx.TipTx = &wrapper{ +} + _ ante.HasExtensionOptionsTx = &wrapper{ +} + _ ExtensionOptionsTxBuilder = &wrapper{ +} + _ tx.TipTx = &wrapper{ +} +) + +/ ExtensionOptionsTxBuilder defines a TxBuilder that can also set extensions. +type ExtensionOptionsTxBuilder interface { + client.TxBuilder + + SetExtensionOptions(...*codectypes.Any) + +SetNonCriticalExtensionOptions(...*codectypes.Any) +} + +func newBuilder(cdc codec.Codec) *wrapper { + return &wrapper{ + cdc: cdc, + tx: &tx.Tx{ + Body: &tx.TxBody{ +}, + AuthInfo: &tx.AuthInfo{ + Fee: &tx.Fee{ +}, +}, +}, +} +} + +func (w *wrapper) + +GetMsgs() []sdk.Msg { + return w.tx.GetMsgs() +} + +func (w *wrapper) + +ValidateBasic() + +error { + return w.tx.ValidateBasic() +} + +func (w *wrapper) + +getBodyBytes() []byte { + if len(w.bodyBz) == 0 { + / if bodyBz is empty, then marshal the body. bodyBz will generally + / be set to nil whenever SetBody is called so the result of calling + / this method should always return the correct bytes. Note that after + / decoding bodyBz is derived from TxRaw so that it matches what was + / transmitted over the wire + var err error + w.bodyBz, err = proto.Marshal(w.tx.Body) + if err != nil { + panic(err) +} + +} + +return w.bodyBz +} + +func (w *wrapper) + +getAuthInfoBytes() []byte { + if len(w.authInfoBz) == 0 { + / if authInfoBz is empty, then marshal the body. authInfoBz will generally + / be set to nil whenever SetAuthInfo is called so the result of calling + / this method should always return the correct bytes. 
Note that after + / decoding authInfoBz is derived from TxRaw so that it matches what was + / transmitted over the wire + var err error + w.authInfoBz, err = proto.Marshal(w.tx.AuthInfo) + if err != nil { + panic(err) +} + +} + +return w.authInfoBz +} + +func (w *wrapper) + +GetSigners() []sdk.AccAddress { + return w.tx.GetSigners() +} + +func (w *wrapper) + +GetPubKeys() ([]cryptotypes.PubKey, error) { + signerInfos := w.tx.AuthInfo.SignerInfos + pks := make([]cryptotypes.PubKey, len(signerInfos)) + for i, si := range signerInfos { + / NOTE: it is okay to leave this nil if there is no PubKey in the SignerInfo. + / PubKey's can be left unset in SignerInfo. + if si.PublicKey == nil { + continue +} + pkAny := si.PublicKey.GetCachedValue() + +pk, ok := pkAny.(cryptotypes.PubKey) + if ok { + pks[i] = pk +} + +else { + return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "Expecting PubKey, got: %T", pkAny) +} + +} + +return pks, nil +} + +func (w *wrapper) + +GetGas() + +uint64 { + return w.tx.AuthInfo.Fee.GasLimit +} + +func (w *wrapper) + +GetFee() + +sdk.Coins { + return w.tx.AuthInfo.Fee.Amount +} + +func (w *wrapper) + +FeePayer() + +sdk.AccAddress { + feePayer := w.tx.AuthInfo.Fee.Payer + if feePayer != "" { + return sdk.MustAccAddressFromBech32(feePayer) +} + / use first signer as default if no payer specified + return w.GetSigners()[0] +} + +func (w *wrapper) + +FeeGranter() + +sdk.AccAddress { + feePayer := w.tx.AuthInfo.Fee.Granter + if feePayer != "" { + return sdk.MustAccAddressFromBech32(feePayer) +} + +return nil +} + +func (w *wrapper) + +GetTip() *tx.Tip { + return w.tx.AuthInfo.Tip +} + +func (w *wrapper) + +GetMemo() + +string { + return w.tx.Body.Memo +} + +/ GetTimeoutHeight returns the transaction's timeout height (if set). 
+func (w *wrapper) + +GetTimeoutHeight() + +uint64 { + return w.tx.Body.TimeoutHeight +} + +func (w *wrapper) + +GetSignaturesV2() ([]signing.SignatureV2, error) { + signerInfos := w.tx.AuthInfo.SignerInfos + sigs := w.tx.Signatures + pubKeys, err := w.GetPubKeys() + if err != nil { + return nil, err +} + n := len(signerInfos) + res := make([]signing.SignatureV2, n) + for i, si := range signerInfos { + / handle nil signatures (in case of simulation) + if si.ModeInfo == nil { + res[i] = signing.SignatureV2{ + PubKey: pubKeys[i], +} + +} + +else { + var err error + sigData, err := ModeInfoAndSigToSignatureData(si.ModeInfo, sigs[i]) + if err != nil { + return nil, err +} + / sequence number is functionally a transaction nonce and referred to as such in the SDK + nonce := si.GetSequence() + +res[i] = signing.SignatureV2{ + PubKey: pubKeys[i], + Data: sigData, + Sequence: nonce, +} + + +} + +} + +return res, nil +} + +func (w *wrapper) + +SetMsgs(msgs ...sdk.Msg) + +error { + anys, err := tx.SetMsgs(msgs) + if err != nil { + return err +} + +w.tx.Body.Messages = anys + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil + + return nil +} + +/ SetTimeoutHeight sets the transaction's height timeout. 
+func (w *wrapper) + +SetTimeoutHeight(height uint64) { + w.tx.Body.TimeoutHeight = height + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil +} + +func (w *wrapper) + +SetMemo(memo string) { + w.tx.Body.Memo = memo + + / set bodyBz to nil because the cached bodyBz no longer matches tx.Body + w.bodyBz = nil +} + +func (w *wrapper) + +SetGasLimit(limit uint64) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.GasLimit = limit + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeeAmount(coins sdk.Coins) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Amount = coins + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetTip(tip *tx.Tip) { + w.tx.AuthInfo.Tip = tip + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeePayer(feePayer sdk.AccAddress) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Payer = feePayer.String() + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetFeeGranter(feeGranter sdk.AccAddress) { + if w.tx.AuthInfo.Fee == nil { + w.tx.AuthInfo.Fee = &tx.Fee{ +} + +} + +w.tx.AuthInfo.Fee.Granter = feeGranter.String() + + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +SetSignatures(signatures ...signing.SignatureV2) + +error { + n := len(signatures) + signerInfos := make([]*tx.SignerInfo, n) + rawSigs := make([][]byte, n) + for i, sig := range signatures { + var modeInfo *tx.ModeInfo + modeInfo, rawSigs[i] = SignatureDataToModeInfoAndSig(sig.Data) + +any, 
err := codectypes.NewAnyWithValue(sig.PubKey) + if err != nil { + return err +} + +signerInfos[i] = &tx.SignerInfo{ + PublicKey: any, + ModeInfo: modeInfo, + Sequence: sig.Sequence, +} + +} + +w.setSignerInfos(signerInfos) + +w.setSignatures(rawSigs) + +return nil +} + +func (w *wrapper) + +setSignerInfos(infos []*tx.SignerInfo) { + w.tx.AuthInfo.SignerInfos = infos + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +setSignerInfoAtIndex(index int, info *tx.SignerInfo) { + if w.tx.AuthInfo.SignerInfos == nil { + w.tx.AuthInfo.SignerInfos = make([]*tx.SignerInfo, len(w.GetSigners())) +} + +w.tx.AuthInfo.SignerInfos[index] = info + / set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo + w.authInfoBz = nil +} + +func (w *wrapper) + +setSignatures(sigs [][]byte) { + w.tx.Signatures = sigs +} + +func (w *wrapper) + +setSignatureAtIndex(index int, sig []byte) { + if w.tx.Signatures == nil { + w.tx.Signatures = make([][]byte, len(w.GetSigners())) +} + +w.tx.Signatures[index] = sig +} + +func (w *wrapper) + +GetTx() + +authsigning.Tx { + return w +} + +func (w *wrapper) + +GetProtoTx() *tx.Tx { + return w.tx +} + +/ Deprecated: AsAny extracts proto Tx and wraps it into Any. +/ NOTE: You should probably use `GetProtoTx` if you want to serialize the transaction. +func (w *wrapper) + +AsAny() *codectypes.Any { + return codectypes.UnsafePackAny(w.tx) +} + +/ WrapTx creates a TxBuilder wrapper around a tx.Tx proto message. 
+func WrapTx(protoTx *tx.Tx) + +client.TxBuilder { + return &wrapper{ + tx: protoTx, +} +} + +func (w *wrapper) + +GetExtensionOptions() []*codectypes.Any { + return w.tx.Body.ExtensionOptions +} + +func (w *wrapper) + +GetNonCriticalExtensionOptions() []*codectypes.Any { + return w.tx.Body.NonCriticalExtensionOptions +} + +func (w *wrapper) + +SetExtensionOptions(extOpts ...*codectypes.Any) { + w.tx.Body.ExtensionOptions = extOpts + w.bodyBz = nil +} + +func (w *wrapper) + +SetNonCriticalExtensionOptions(extOpts ...*codectypes.Any) { + w.tx.Body.NonCriticalExtensionOptions = extOpts + w.bodyBz = nil +} + +func (w *wrapper) + +AddAuxSignerData(data tx.AuxSignerData) + +error { + err := data.ValidateBasic() + if err != nil { + return err +} + +w.bodyBz = data.SignDoc.BodyBytes + + var body tx.TxBody + err = w.cdc.Unmarshal(w.bodyBz, &body) + if err != nil { + return err +} + if w.tx.Body.Memo != "" && w.tx.Body.Memo != body.Memo { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has memo %s, got %s in AuxSignerData", w.tx.Body.Memo, body.Memo) +} + if w.tx.Body.TimeoutHeight != 0 && w.tx.Body.TimeoutHeight != body.TimeoutHeight { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has timeout height %d, got %d in AuxSignerData", w.tx.Body.TimeoutHeight, body.TimeoutHeight) +} + if len(w.tx.Body.ExtensionOptions) != 0 { + if len(w.tx.Body.ExtensionOptions) != len(body.ExtensionOptions) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d extension options, got %d in AuxSignerData", len(w.tx.Body.ExtensionOptions), len(body.ExtensionOptions)) +} + for i, o := range w.tx.Body.ExtensionOptions { + if !o.Equal(body.ExtensionOptions[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.ExtensionOptions[i]) +} + +} + +} + if len(w.tx.Body.NonCriticalExtensionOptions) != 0 { + if len(w.tx.Body.NonCriticalExtensionOptions) != len(body.NonCriticalExtensionOptions) { + return 
sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d non-critical extension options, got %d in AuxSignerData", len(w.tx.Body.NonCriticalExtensionOptions), len(body.NonCriticalExtensionOptions)) +} + for i, o := range w.tx.Body.NonCriticalExtensionOptions { + if !o.Equal(body.NonCriticalExtensionOptions[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has non-critical extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.NonCriticalExtensionOptions[i]) +} + +} + +} + if len(w.tx.Body.Messages) != 0 { + if len(w.tx.Body.Messages) != len(body.Messages) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d Msgs, got %d in AuxSignerData", len(w.tx.Body.Messages), len(body.Messages)) +} + for i, o := range w.tx.Body.Messages { + if !o.Equal(body.Messages[i]) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has Msg %+v at index %d, got %+v in AuxSignerData", o, i, body.Messages[i]) +} + +} + +} + if w.tx.AuthInfo.Tip != nil && data.SignDoc.Tip != nil { + if !w.tx.AuthInfo.Tip.Amount.IsEqual(data.SignDoc.Tip.Amount) { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tip %+v, got %+v in AuxSignerData", w.tx.AuthInfo.Tip.Amount, data.SignDoc.Tip.Amount) +} + if w.tx.AuthInfo.Tip.Tipper != data.SignDoc.Tip.Tipper { + return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tipper %s, got %s in AuxSignerData", w.tx.AuthInfo.Tip.Tipper, data.SignDoc.Tip.Tipper) +} + +} + +w.SetMemo(body.Memo) + +w.SetTimeoutHeight(body.TimeoutHeight) + +w.SetExtensionOptions(body.ExtensionOptions...) + +w.SetNonCriticalExtensionOptions(body.NonCriticalExtensionOptions...) + msgs := make([]sdk.Msg, len(body.Messages)) + for i, msgAny := range body.Messages { + msgs[i] = msgAny.GetCachedValue().(sdk.Msg) +} + +w.SetMsgs(msgs...) + +w.SetTip(data.GetSignDoc().GetTip()) + + / Get the aux signer's index in GetSigners. 
+ signerIndex := -1 + for i, signer := range w.GetSigners() { + if signer.String() == data.Address { + signerIndex = i +} + +} + if signerIndex < 0 { + return sdkerrors.ErrLogic.Wrapf("address %s is not a signer", data.Address) +} + +w.setSignerInfoAtIndex(signerIndex, &tx.SignerInfo{ + PublicKey: data.SignDoc.PublicKey, + ModeInfo: &tx.ModeInfo{ + Sum: &tx.ModeInfo_Single_{ + Single: &tx.ModeInfo_Single{ + Mode: data.Mode +}}}, + Sequence: data.SignDoc.Sequence, +}) + +w.setSignatureAtIndex(signerIndex, data.Sig) + +return nil +} +``` + +```protobuf +// Fee includes the amount of coins paid in fees and the maximum +// gas to be used by the transaction. The ratio yields an effective "gasprice", +// which must be above some miminum to be accepted into the mempool. +message Fee { + // amount is the amount of coins to be paid as a fee + repeated cosmos.base.v1beta1.Coin amount = 1 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; + + // gas_limit is the maximum gas that can be used in transaction processing + // before an out of gas error occurs + uint64 gas_limit = 2; + + // if unset, the first signer is responsible for paying the fees. If set, the specified account must pay the fees. + // the payer must be a tx signer (and thus have signed this field in AuthInfo). + // setting this field does *not* change the ordering of required signers for the transaction. + string payer = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // if set, the fee payer (either the first signer or the value of the payer field) requests that a fee grant be used + // to pay fees instead of the fee payer's own balance. 
If an appropriate fee grant does not exist or the chain does
+  // not support fee grants, this will fail
+  string granter = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+Example command:
+
+```bash
+./simd tx gov submit-proposal --title="Test Proposal" --description="My awesome proposal" --type="Text" --from validator-key --fee-granter=cosmos1xh44hxt7spr67hqaa7nyx5gnutrz5fraw6grxn --chain-id=testnet --fees="10stake"
+```
+
+### Granted Fee Deductions
+
+Fees are deducted from grants in the `x/auth` ante handler. To learn more about how ante handlers work, read the [Auth Module AnteHandlers Guide](/docs/sdk/v0.53/documentation/module-system/auth#antehandlers).
+
+### Gas
+
+To prevent DoS attacks, using a filtered `x/feegrant` allowance incurs gas. The SDK must ensure that all of the `grantee`'s transactions conform to the filter set by the `granter`. It does this by iterating over the allowed messages in the filter, charging 10 gas per filtered message, and then iterating over the messages being sent by the `grantee` to verify that they adhere to the filter, also charging 10 gas per message. The SDK stops iterating and fails the transaction as soon as it finds a message that does not conform to the filter.
+
+**WARNING**: The gas is charged against the granted allowance. Ensure your messages conform to the filter, if any, before sending transactions using your allowance.
+
+### Pruning
+
+A queue is maintained in state, keyed by the expiration time of each grant. At every `EndBlock`, the queue is checked against the current block time and expired grants are pruned.
+
+## State
+
+### FeeAllowance
+
+Fee allowances are identified by combining the `Grantee` (the account address of the fee allowance grantee) with the `Granter` (the account address of the fee allowance granter).
+ +Fee allowance grants are stored in the state as follows: + +* Grant: `0x00 | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> ProtocolBuffer(Grant)` + +```go expandable +/ Code generated by protoc-gen-gogo. DO NOT EDIT. +/ source: cosmos/feegrant/v1beta1/feegrant.proto + +package feegrant + +import ( + + fmt "fmt" + _ "github.com/cosmos/cosmos-proto" + types1 "github.com/cosmos/cosmos-sdk/codec/types" + github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types" + types "github.com/cosmos/cosmos-sdk/types" + _ "github.com/cosmos/cosmos-sdk/types/tx/amino" + _ "github.com/cosmos/gogoproto/gogoproto" + proto "github.com/cosmos/gogoproto/proto" + github_com_cosmos_gogoproto_types "github.com/cosmos/gogoproto/types" + _ "google.golang.org/protobuf/types/known/durationpb" + _ "google.golang.org/protobuf/types/known/timestamppb" + io "io" + math "math" + math_bits "math/bits" + time "time" +) + +/ Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf +var _ = time.Kitchen + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the proto package it is being compiled against. +/ A compilation error at this line likely means your copy of the +/ proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 / please upgrade the proto package + +/ BasicAllowance implements Allowance with a one-time grant of coins +/ that optionally expires. The grantee can use up to SpendLimit to cover fees. +type BasicAllowance struct { + / spend_limit specifies the maximum amount of coins that can be spent + / by this allowance and will be updated as coins are spent. If it is + / empty, there is no spend limit and any amount of coins can be spent. 
+ SpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,1,rep,name=spend_limit,json=spendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"spend_limit"` + / expiration specifies an optional time when this allowance expires + Expiration *time.Time `protobuf:"bytes,2,opt,name=expiration,proto3,stdtime" json:"expiration,omitempty"` +} + +func (m *BasicAllowance) + +Reset() { *m = BasicAllowance{ +} +} + +func (m *BasicAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*BasicAllowance) + +ProtoMessage() { +} + +func (*BasicAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{0 +} +} + +func (m *BasicAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *BasicAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_BasicAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *BasicAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_BasicAllowance.Merge(m, src) +} + +func (m *BasicAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *BasicAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_BasicAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_BasicAllowance proto.InternalMessageInfo + +func (m *BasicAllowance) + +GetSpendLimit() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.SpendLimit +} + +return nil +} + +func (m *BasicAllowance) + +GetExpiration() *time.Time { + if m != nil { + return m.Expiration +} + +return nil +} + +/ PeriodicAllowance extends Allowance to allow for both a maximum cap, +/ as well as a limit per time period. 
+type PeriodicAllowance struct { + / basic specifies a struct of `BasicAllowance` + Basic BasicAllowance `protobuf:"bytes,1,opt,name=basic,proto3" json:"basic"` + / period specifies the time duration in which period_spend_limit coins can + / be spent before that allowance is reset + Period time.Duration `protobuf:"bytes,2,opt,name=period,proto3,stdduration" json:"period"` + / period_spend_limit specifies the maximum number of coins that can be spent + / in the period + PeriodSpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=period_spend_limit,json=periodSpendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_spend_limit"` + / period_can_spend is the number of coins left to be spent before the period_reset time + PeriodCanSpend github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,4,rep,name=period_can_spend,json=periodCanSpend,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_can_spend"` + / period_reset is the time at which this period resets and a new one begins, + / it is calculated from the start time of the first transaction after the + / last period ended + PeriodReset time.Time `protobuf:"bytes,5,opt,name=period_reset,json=periodReset,proto3,stdtime" json:"period_reset"` +} + +func (m *PeriodicAllowance) + +Reset() { *m = PeriodicAllowance{ +} +} + +func (m *PeriodicAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*PeriodicAllowance) + +ProtoMessage() { +} + +func (*PeriodicAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{1 +} +} + +func (m *PeriodicAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *PeriodicAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_PeriodicAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + 
return nil, err +} + +return b[:n], nil +} +} + +func (m *PeriodicAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_PeriodicAllowance.Merge(m, src) +} + +func (m *PeriodicAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *PeriodicAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_PeriodicAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_PeriodicAllowance proto.InternalMessageInfo + +func (m *PeriodicAllowance) + +GetBasic() + +BasicAllowance { + if m != nil { + return m.Basic +} + +return BasicAllowance{ +} +} + +func (m *PeriodicAllowance) + +GetPeriod() + +time.Duration { + if m != nil { + return m.Period +} + +return 0 +} + +func (m *PeriodicAllowance) + +GetPeriodSpendLimit() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.PeriodSpendLimit +} + +return nil +} + +func (m *PeriodicAllowance) + +GetPeriodCanSpend() + +github_com_cosmos_cosmos_sdk_types.Coins { + if m != nil { + return m.PeriodCanSpend +} + +return nil +} + +func (m *PeriodicAllowance) + +GetPeriodReset() + +time.Time { + if m != nil { + return m.PeriodReset +} + +return time.Time{ +} +} + +/ AllowedMsgAllowance creates allowance only for specified message types. +type AllowedMsgAllowance struct { + / allowance can be any of basic and periodic fee allowance. + Allowance *types1.Any `protobuf:"bytes,1,opt,name=allowance,proto3" json:"allowance,omitempty"` + / allowed_messages are the messages for which the grantee has the access. 
+ AllowedMessages []string `protobuf:"bytes,2,rep,name=allowed_messages,json=allowedMessages,proto3" json:"allowed_messages,omitempty"` +} + +func (m *AllowedMsgAllowance) + +Reset() { *m = AllowedMsgAllowance{ +} +} + +func (m *AllowedMsgAllowance) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*AllowedMsgAllowance) + +ProtoMessage() { +} + +func (*AllowedMsgAllowance) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{2 +} +} + +func (m *AllowedMsgAllowance) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *AllowedMsgAllowance) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AllowedMsgAllowance.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *AllowedMsgAllowance) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_AllowedMsgAllowance.Merge(m, src) +} + +func (m *AllowedMsgAllowance) + +XXX_Size() + +int { + return m.Size() +} + +func (m *AllowedMsgAllowance) + +XXX_DiscardUnknown() { + xxx_messageInfo_AllowedMsgAllowance.DiscardUnknown(m) +} + +var xxx_messageInfo_AllowedMsgAllowance proto.InternalMessageInfo + +/ Grant is stored in the KVStore to record a grant with full context +type Grant struct { + / granter is the address of the user granting an allowance of their funds. + Granter string `protobuf:"bytes,1,opt,name=granter,proto3" json:"granter,omitempty"` + / grantee is the address of the user being granted an allowance of another user's funds. + Grantee string `protobuf:"bytes,2,opt,name=grantee,proto3" json:"grantee,omitempty"` + / allowance can be any of basic, periodic, allowed fee allowance. 
+ Allowance *types1.Any `protobuf:"bytes,3,opt,name=allowance,proto3" json:"allowance,omitempty"` +} + +func (m *Grant) + +Reset() { *m = Grant{ +} +} + +func (m *Grant) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*Grant) + +ProtoMessage() { +} + +func (*Grant) + +Descriptor() ([]byte, []int) { + return fileDescriptor_7279582900c30aea, []int{3 +} +} + +func (m *Grant) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *Grant) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_Grant.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *Grant) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_Grant.Merge(m, src) +} + +func (m *Grant) + +XXX_Size() + +int { + return m.Size() +} + +func (m *Grant) + +XXX_DiscardUnknown() { + xxx_messageInfo_Grant.DiscardUnknown(m) +} + +var xxx_messageInfo_Grant proto.InternalMessageInfo + +func (m *Grant) + +GetGranter() + +string { + if m != nil { + return m.Granter +} + +return "" +} + +func (m *Grant) + +GetGrantee() + +string { + if m != nil { + return m.Grantee +} + +return "" +} + +func (m *Grant) + +GetAllowance() *types1.Any { + if m != nil { + return m.Allowance +} + +return nil +} + +func init() { + proto.RegisterType((*BasicAllowance)(nil), "cosmos.feegrant.v1beta1.BasicAllowance") + +proto.RegisterType((*PeriodicAllowance)(nil), "cosmos.feegrant.v1beta1.PeriodicAllowance") + +proto.RegisterType((*AllowedMsgAllowance)(nil), "cosmos.feegrant.v1beta1.AllowedMsgAllowance") + +proto.RegisterType((*Grant)(nil), "cosmos.feegrant.v1beta1.Grant") +} + +func init() { + proto.RegisterFile("cosmos/feegrant/v1beta1/feegrant.proto", fileDescriptor_7279582900c30aea) +} + +var fileDescriptor_7279582900c30aea = []byte{ + / 639 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x02, 0xff, 0xb4, 0x55, 0x3f, 0x6f, 0xd3, 0x40, + 0x14, 0x8f, 0x9b, 0xb6, 0x28, 0x17, 0x28, 0xad, 0xa9, 0x84, 0x53, 0x21, 0xbb, 0x8a, 0x04, 0x4d, + 0x2b, 0xd5, 0x56, 0x8b, 0x58, 0x3a, 0x35, 0x2e, 0xa2, 0x80, 0x5a, 0xa9, 0x72, 0x99, 0x90, 0x50, + 0x74, 0xb6, 0xaf, 0xe6, 0x44, 0xec, 0x33, 0x3e, 0x17, 0x1a, 0x06, 0x66, 0xc4, 0x80, 0x32, 0x32, + 0x32, 0x22, 0xa6, 0x0e, 0xe5, 0x3b, 0x54, 0x0c, 0xa8, 0x62, 0x62, 0x22, 0x28, 0x19, 0x3a, 0xf3, + 0x0d, 0x90, 0xef, 0xce, 0x8e, 0x9b, 0x50, 0x68, 0x25, 0xba, 0x24, 0x77, 0xef, 0xde, 0xfb, 0xfd, + 0x79, 0xef, 0x45, 0x01, 0xb7, 0x1c, 0x42, 0x7d, 0x42, 0x8d, 0x1d, 0x84, 0xbc, 0x08, 0x06, 0xb1, + 0xf1, 0x62, 0xc9, 0x46, 0x31, 0x5c, 0xca, 0x02, 0x7a, 0x18, 0x91, 0x98, 0xc8, 0xd7, 0x79, 0x9e, + 0x9e, 0x85, 0x45, 0xde, 0xcc, 0xb4, 0x47, 0x3c, 0xc2, 0x72, 0x8c, 0xe4, 0xc4, 0xd3, 0x67, 0x2a, + 0x1e, 0x21, 0x5e, 0x13, 0x19, 0xec, 0x66, 0xef, 0xee, 0x18, 0x30, 0x68, 0xa5, 0x4f, 0x1c, 0xa9, + 0xc1, 0x6b, 0x04, 0x2c, 0x7f, 0x52, 0x85, 0x18, 0x1b, 0x52, 0x94, 0x09, 0x71, 0x08, 0x0e, 0xc4, + 0xfb, 0x14, 0xf4, 0x71, 0x40, 0x0c, 0xf6, 0x29, 0x42, 0xda, 0x20, 0x51, 0x8c, 0x7d, 0x44, 0x63, + 0xe8, 0x87, 0x29, 0xe6, 0x60, 0x82, 0xbb, 0x1b, 0xc1, 0x18, 0x13, 0x81, 0x59, 0x7d, 0x37, 0x02, + 0x26, 0x4c, 0x48, 0xb1, 0x53, 0x6f, 0x36, 0xc9, 0x4b, 0x18, 0x38, 0x48, 0x7e, 0x0e, 0xca, 0x34, + 0x44, 0x81, 0xdb, 0x68, 0x62, 0x1f, 0xc7, 0x8a, 0x34, 0x5b, 0xac, 0x95, 0x97, 0x2b, 0xba, 0x90, + 0x9a, 0x88, 0x4b, 0xdd, 0xeb, 0x6b, 0x04, 0x07, 0xe6, 0x9d, 0xc3, 0x1f, 0x5a, 0xe1, 0x53, 0x47, + 0xab, 0x79, 0x38, 0x7e, 0xba, 0x6b, 0xeb, 0x0e, 0xf1, 0x85, 0x2f, 0xf1, 0xb5, 0x48, 0xdd, 0x67, + 0x46, 0xdc, 0x0a, 0x11, 0x65, 0x05, 0xf4, 0xe3, 0xf1, 0xfe, 0x82, 0x64, 0x01, 0x46, 0xb2, 0x91, + 0x70, 0xc8, 0xab, 0x00, 0xa0, 0xbd, 0x10, 0x73, 0x65, 0xca, 0xc8, 0xac, 0x54, 0x2b, 0x2f, 0xcf, + 0xe8, 0x5c, 0xba, 0x9e, 0x4a, 0xd7, 0x1f, 0xa5, 0xde, 0xcc, 0xd1, 0x76, 0x47, 0x93, 0xac, 0x5c, + 0xcd, 0xca, 0xfa, 0x97, 0x83, 0xc5, 0x9b, 0xa7, 0x0c, 0x49, 0xbf, 0x87, 0x50, 
0x66, 0xef, 0xc1, + 0xdb, 0xe3, 0xfd, 0x85, 0x4a, 0x4e, 0xd8, 0x49, 0xf7, 0xd5, 0xcf, 0xa3, 0x60, 0x6a, 0x0b, 0x45, + 0x98, 0xb8, 0xf9, 0x9e, 0xdc, 0x07, 0x63, 0x76, 0x92, 0xa7, 0x48, 0x4c, 0xdb, 0x9c, 0x7e, 0x1a, + 0xd5, 0x49, 0x34, 0xb3, 0x94, 0xf4, 0x86, 0xfb, 0xe5, 0x00, 0xf2, 0x2a, 0x18, 0x0f, 0x19, 0xbc, + 0xb0, 0x59, 0x19, 0xb2, 0x79, 0x57, 0x4c, 0xc8, 0xbc, 0x92, 0x14, 0xbf, 0xef, 0x68, 0x12, 0x07, + 0x10, 0x75, 0xf2, 0x6b, 0x20, 0xf3, 0x53, 0x23, 0x3f, 0xa6, 0xe2, 0x05, 0x8d, 0x69, 0x92, 0x73, + 0x6d, 0xf7, 0x87, 0xf5, 0x0a, 0x88, 0x58, 0xc3, 0x81, 0x01, 0xd7, 0xa0, 0x8c, 0x5e, 0x10, 0xfb, + 0x04, 0x67, 0x5a, 0x83, 0x01, 0x13, 0x20, 0x6f, 0x80, 0xcb, 0x82, 0x3b, 0x42, 0x14, 0xc5, 0xca, + 0xd8, 0x3f, 0x57, 0x85, 0x35, 0xb1, 0x9d, 0x35, 0xb1, 0xcc, 0xcb, 0xad, 0xa4, 0x7a, 0xe5, 0xe1, + 0xb9, 0x96, 0xe6, 0x46, 0x4e, 0xe8, 0xd0, 0x86, 0x54, 0x7f, 0x49, 0xe0, 0x1a, 0xbb, 0x21, 0x77, + 0x93, 0x7a, 0xfd, 0xcd, 0x79, 0x02, 0x4a, 0x30, 0xbd, 0x88, 0xed, 0x99, 0x1e, 0x92, 0x5b, 0x0f, + 0x5a, 0xe6, 0xfc, 0x99, 0xc5, 0x58, 0x7d, 0x44, 0x79, 0x1e, 0x4c, 0x42, 0xce, 0xda, 0xf0, 0x11, + 0xa5, 0xd0, 0x43, 0x54, 0x19, 0x99, 0x2d, 0xd6, 0x4a, 0xd6, 0x55, 0x11, 0xdf, 0x14, 0xe1, 0x95, + 0xad, 0x37, 0x1f, 0xb4, 0xc2, 0xb9, 0x1c, 0xab, 0x39, 0xc7, 0x7f, 0xf0, 0x56, 0xfd, 0x2a, 0x81, + 0xb1, 0xf5, 0x04, 0x42, 0x5e, 0x06, 0x97, 0x18, 0x16, 0x8a, 0x98, 0xc7, 0x92, 0xa9, 0x7c, 0x3b, + 0x58, 0x9c, 0x16, 0x44, 0x75, 0xd7, 0x8d, 0x10, 0xa5, 0xdb, 0x71, 0x84, 0x03, 0xcf, 0x4a, 0x13, + 0xfb, 0x35, 0x88, 0xfd, 0x14, 0xce, 0x50, 0x33, 0xd0, 0xcd, 0xe2, 0xff, 0xee, 0xa6, 0x59, 0x3f, + 0xec, 0xaa, 0xd2, 0x51, 0x57, 0x95, 0x7e, 0x76, 0x55, 0xa9, 0xdd, 0x53, 0x0b, 0x47, 0x3d, 0xb5, + 0xf0, 0xbd, 0xa7, 0x16, 0x1e, 0xcf, 0xfd, 0x75, 0x6f, 0xf7, 0xb2, 0xff, 0x0b, 0x7b, 0x9c, 0xc9, + 0xb8, 0xfd, 0x3b, 0x00, 0x00, 0xff, 0xff, 0xe4, 0x3d, 0x09, 0x1d, 0x5a, 0x06, 0x00, 0x00, +} + +func (m *BasicAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, 
err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *BasicAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *BasicAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Expiration != nil { + n1, err1 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(*m.Expiration, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration):]) + if err1 != nil { + return 0, err1 +} + +i -= n1 + i = encodeVarintFeegrant(dAtA, i, uint64(n1)) + +i-- + dAtA[i] = 0x12 +} + if len(m.SpendLimit) > 0 { + for iNdEx := len(m.SpendLimit) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.SpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +} + +return len(dAtA) - i, nil +} + +func (m *PeriodicAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *PeriodicAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PeriodicAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + n2, err2 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(m.PeriodReset, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset):]) + if err2 != nil { + return 0, err2 +} + +i -= n2 + i = encodeVarintFeegrant(dAtA, i, uint64(n2)) + +i-- + dAtA[i] = 0x2a + if len(m.PeriodCanSpend) > 0 { + for iNdEx := len(m.PeriodCanSpend) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PeriodCanSpend[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size 
+ i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x22 +} + +} + if len(m.PeriodSpendLimit) > 0 { + for iNdEx := len(m.PeriodSpendLimit) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PeriodSpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + +} + +n3, err3 := github_com_cosmos_gogoproto_types.StdDurationMarshalTo(m.Period, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period):]) + if err3 != nil { + return 0, err3 +} + +i -= n3 + i = encodeVarintFeegrant(dAtA, i, uint64(n3)) + +i-- + dAtA[i] = 0x12 + { + size, err := m.Basic.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *AllowedMsgAllowance) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *AllowedMsgAllowance) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AllowedMsgAllowance) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.AllowedMessages) > 0 { + for iNdEx := len(m.AllowedMessages) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.AllowedMessages[iNdEx]) + +copy(dAtA[i:], m.AllowedMessages[iNdEx]) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.AllowedMessages[iNdEx]))) + +i-- + dAtA[i] = 0x12 +} + +} + if m.Allowance != nil { + { + size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *Grant) + +Marshal() (dAtA []byte, err error) { + size := 
m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *Grant) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Grant) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Allowance != nil { + { + size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintFeegrant(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + if len(m.Grantee) > 0 { + i -= len(m.Grantee) + +copy(dAtA[i:], m.Grantee) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Grantee))) + +i-- + dAtA[i] = 0x12 +} + if len(m.Granter) > 0 { + i -= len(m.Granter) + +copy(dAtA[i:], m.Granter) + +i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Granter))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func encodeVarintFeegrant(dAtA []byte, offset int, v uint64) + +int { + offset -= sovFeegrant(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + +v >>= 7 + offset++ +} + +dAtA[offset] = uint8(v) + +return base +} + +func (m *BasicAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if len(m.SpendLimit) > 0 { + for _, e := range m.SpendLimit { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + if m.Expiration != nil { + l = github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration) + +n += 1 + l + sovFeegrant(uint64(l)) +} + +return n +} + +func (m *PeriodicAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = m.Basic.Size() + +n += 1 + l + sovFeegrant(uint64(l)) + +l = github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period) + +n += 1 + l + sovFeegrant(uint64(l)) + if len(m.PeriodSpendLimit) > 0 { + for _, e := range m.PeriodSpendLimit { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} 
+ +} + if len(m.PeriodCanSpend) > 0 { + for _, e := range m.PeriodCanSpend { + l = e.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + +l = github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset) + +n += 1 + l + sovFeegrant(uint64(l)) + +return n +} + +func (m *AllowedMsgAllowance) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if m.Allowance != nil { + l = m.Allowance.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + if len(m.AllowedMessages) > 0 { + for _, s := range m.AllowedMessages { + l = len(s) + +n += 1 + l + sovFeegrant(uint64(l)) +} + +} + +return n +} + +func (m *Grant) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Granter) + if l > 0 { + n += 1 + l + sovFeegrant(uint64(l)) +} + +l = len(m.Grantee) + if l > 0 { + n += 1 + l + sovFeegrant(uint64(l)) +} + if m.Allowance != nil { + l = m.Allowance.Size() + +n += 1 + l + sovFeegrant(uint64(l)) +} + +return n +} + +func sovFeegrant(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} + +func sozFeegrant(x uint64) (n int) { + return sovFeegrant(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +func (m *BasicAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BasicAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: BasicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SpendLimit", wireType) +} + +var msglen int + for shift := uint(0); ; 
shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.SpendLimit = append(m.SpendLimit, types.Coin{ +}) + if err := m.SpendLimit[len(m.SpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Expiration", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Expiration == nil { + m.Expiration = new(time.Time) +} + if err := github_com_cosmos_gogoproto_types.StdTimeUnmarshal(m.Expiration, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *PeriodicAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 
io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PeriodicAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: PeriodicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Basic", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := m.Basic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Period", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := github_com_cosmos_gogoproto_types.StdDurationUnmarshal(&m.Period, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodSpendLimit", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { 
+ return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.PeriodSpendLimit = append(m.PeriodSpendLimit, types.Coin{ +}) + if err := m.PeriodSpendLimit[len(m.PeriodSpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodCanSpend", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.PeriodCanSpend = append(m.PeriodCanSpend, types.Coin{ +}) + if err := m.PeriodCanSpend[len(m.PeriodCanSpend)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PeriodReset", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := 
github_com_cosmos_gogoproto_types.StdTimeUnmarshal(&m.PeriodReset, dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *AllowedMsgAllowance) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AllowedMsgAllowance: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: AllowedMsgAllowance: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Allowance == nil { + m.Allowance = &types1.Any{ +} + +} + if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field AllowedMessages", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.AllowedMessages = append(m.AllowedMessages, string(dAtA[iNdEx:postIndex])) + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *Grant) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Grant: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: Grant: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Granter", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift 
+ if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Granter = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Grantee", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Grantee = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowFeegrant +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthFeegrant +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthFeegrant +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if m.Allowance == nil { + m.Allowance = &types1.Any{ +} + +} + if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipFeegrant(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthFeegrant +} + if (iNdEx + skippy) > l { + return 
io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func skipFeegrant(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + +iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break +} + +} + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowFeegrant +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + if length < 0 { + return 0, ErrInvalidLengthFeegrant +} + +iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupFeegrant +} + +depth-- + case 5: + iNdEx += 4 + default: + return 0, fmt.Errorf("proto: illegal wireType %d", wireType) +} + if iNdEx < 0 { + return 0, ErrInvalidLengthFeegrant +} + if depth == 0 { + return iNdEx, nil +} + +} + +return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthFeegrant = fmt.Errorf("proto: negative length found during unmarshaling") + +ErrIntOverflowFeegrant = fmt.Errorf("proto: integer overflow") + +ErrUnexpectedEndOfGroupFeegrant = fmt.Errorf("proto: unexpected end of group") +) +``` + +### FeeAllowanceQueue + +Fee Allowances queue items are identified by combining the `FeeAllowancePrefixQueue` (i.e., 0x01), `expiration`, `grantee` (the account address of fee allowance grantee), `granter` (the account address of fee allowance granter). 
Endblocker checks `FeeAllowanceQueue` state for the expired grants and prunes them from `FeeAllowance` if there are any found. + +Fee allowance queue keys are stored in the state as follows: + +* Grant: `0x01 | expiration_bytes | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> EmptyBytes` + +## Messages + +### Msg/GrantAllowance + +A fee allowance grant will be created with the `MsgGrantAllowance` message. + +```protobuf +// MsgGrantAllowance adds permission for Grantee to spend up to Allowance +// of fees from the account of Granter. +message MsgGrantAllowance { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgGrantAllowance"; + + // granter is the address of the user granting an allowance of their funds. + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // grantee is the address of the user being granted an allowance of another user's funds. + string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // allowance can be any of basic, periodic, allowed fee allowance. + google.protobuf.Any allowance = 3 [(cosmos_proto.accepts_interface) = "cosmos.feegrant.v1beta1.FeeAllowanceI"]; +} +``` + +### Msg/RevokeAllowance + +An allowed grant fee allowance can be removed with the `MsgRevokeAllowance` message. + +```protobuf +// MsgGrantAllowanceResponse defines the Msg/GrantAllowanceResponse response type. +message MsgGrantAllowanceResponse {} + +// MsgRevokeAllowance removes any existing Allowance from Granter to Grantee. +message MsgRevokeAllowance { + option (cosmos.msg.v1.signer) = "granter"; + option (amino.name) = "cosmos-sdk/MsgRevokeAllowance"; + + // granter is the address of the user granting an allowance of their funds. + string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // grantee is the address of the user being granted an allowance of another user's funds. 
+ string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +## Events + +The feegrant module emits the following events: + +## Msg Server + +### MsgGrantAllowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | set\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### MsgRevokeAllowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | revoke\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### Exec fee allowance + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | ---------------- | +| message | action | use\_feegrant | +| message | granter | `{granterAddress}` | +| message | grantee | `{granteeAddress}` | + +### Prune fee allowances + +| Type | Attribute Key | Attribute Value | +| ------- | ------------- | --------------- | +| message | action | prune\_feegrant | +| message | pruner | `{prunerAddress}` | + +## Client + +### CLI + +A user can query and interact with the `feegrant` module using the CLI. + +#### Query + +The `query` commands allow users to query `feegrant` state. + +```shell +simd query feegrant --help +``` + +##### grant + +The `grant` command allows users to query a grant for a given granter-grantee pair. + +```shell +simd query feegrant grant [granter] [grantee] [flags] +``` + +Example: + +```shell +simd query feegrant grant cosmos1.. cosmos1.. +``` + +Example Output: + +```yml +allowance: + '@type': /cosmos.feegrant.v1beta1.BasicAllowance + expiration: null + spend_limit: + - amount: "100" + denom: stake +grantee: cosmos1.. +granter: cosmos1.. +``` + +##### grants + +The `grants` command allows users to query all grants for a given grantee. + +```shell +simd query feegrant grants [grantee] [flags] +``` + +Example: + +```shell +simd query feegrant grants cosmos1.. 
+``` + +Example Output: + +```yml expandable +allowances: +- allowance: + '@type': /cosmos.feegrant.v1beta1.BasicAllowance + expiration: null + spend_limit: + - amount: "100" + denom: stake + grantee: cosmos1.. + granter: cosmos1.. +pagination: + next_key: null + total: "0" +``` + +#### Transactions + +The `tx` commands allow users to interact with the `feegrant` module. + +```shell +simd tx feegrant --help +``` + +##### grant + +The `grant` command allows users to grant fee allowances to another account. The fee allowance can have an expiration date, a total spend limit, and/or a periodic spend limit. + +```shell +simd tx feegrant grant [granter] [grantee] [flags] +``` + +Example (one-time spend limit): + +```shell +simd tx feegrant grant cosmos1.. cosmos1.. --spend-limit 100stake +``` + +Example (periodic spend limit): + +```shell +simd tx feegrant grant cosmos1.. cosmos1.. --period 3600 --period-limit 10stake +``` + +##### revoke + +The `revoke` command allows users to revoke a granted fee allowance. + +```shell +simd tx feegrant revoke [granter] [grantee] [flags] +``` + +Example: + +```shell +simd tx feegrant revoke cosmos1.. cosmos1.. +``` + +### gRPC + +A user can query the `feegrant` module using gRPC endpoints. + +#### Allowance + +The `Allowance` endpoint allows users to query a granted fee allowance. + +```shell +cosmos.feegrant.v1beta1.Query/Allowance +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"grantee":"cosmos1..","granter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.feegrant.v1beta1.Query/Allowance +``` + +Example Output: + +```json +{ + "allowance": { + "granter": "cosmos1..", + "grantee": "cosmos1..", + "allowance": { + "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", + "spendLimit": [ + { + "denom": "stake", + "amount": "100" + } + ] + } + } +} +``` + +#### Allowances + +The `Allowances` endpoint allows users to query all granted fee allowances for a given grantee. 
+ +```shell +cosmos.feegrant.v1beta1.Query/Allowances +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.feegrant.v1beta1.Query/Allowances +``` + +Example Output: + +```json expandable +{ + "allowances": [ + { + "granter": "cosmos1..", + "grantee": "cosmos1..", + "allowance": { + "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", + "spendLimit": [ + { + "denom": "stake", + "amount": "100" + } + ] + } + } + ], + "pagination": { + "total": "1" + } +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/genesis.mdx b/docs/sdk/v0.53/documentation/module-system/genesis.mdx new file mode 100644 index 00000000..8988cd24 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/genesis.mdx @@ -0,0 +1,784 @@ +--- +title: Module Genesis +--- + +## Synopsis + +Modules generally handle a subset of the state and, as such, they need to define the related subset of the genesis file as well as methods to initialize, verify and export it. + + +**Pre-requisite Readings** + +- [Module Manager](/docs/sdk/v0.53/documentation/module-system/module-manager) +- [Keepers](/docs/sdk/v0.53/documentation/module-system/keeper) + + + +## Type Definition + +The subset of the genesis state defined from a given module is generally defined in a `genesis.proto` file ([more info](/docs/sdk/v0.53/documentation/protocol-development/encoding#gogoproto) on how to define protobuf messages). The struct defining the module's subset of the genesis state is usually called `GenesisState` and contains all the module-related values that need to be initialized during the genesis process. 
+ +See an example of `GenesisState` protobuf message definition from the `auth` module: + +```protobuf +syntax = "proto3"; +package cosmos.auth.v1beta1; + +import "google/protobuf/any.proto"; +import "gogoproto/gogo.proto"; +import "cosmos/auth/v1beta1/auth.proto"; +import "amino/amino.proto"; + +option go_package = "github.com/cosmos/cosmos-sdk/x/auth/types"; + +// GenesisState defines the auth module's genesis state. +message GenesisState { + // params defines all the parameters of the module. + Params params = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + // accounts are the accounts present at genesis. + repeated google.protobuf.Any accounts = 2; +} + +``` + +Next we present the main genesis-related methods that need to be implemented by module developers in order for their module to be used in Cosmos SDK applications. + +### `DefaultGenesis` + +The `DefaultGenesis()` method is a simple method that calls the constructor function for `GenesisState` with the default value for each parameter. 
See an example from the `auth` module: + +```go expandable +package auth + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/depinject" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + + modulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/exported" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ ConsensusVersion defines the current x/auth module consensus version. +const ( + ConsensusVersion = 5 + GovModuleName = "gov" +) + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the auth module. +type AppModuleBasic struct { + ac address.Codec +} + +/ Name returns the auth module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the auth module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the auth +/ module. 
+func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the auth module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the auth module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the auth module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd returns the root query command for the auth module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the auth module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the auth module. +type AppModule struct { + AppModuleBasic + + accountKeeper keeper.AccountKeeper + randGenAccountsFn types.RandomGenesisAccountsFn + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (am AppModule) + +IsAppModule() { +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, accountKeeper keeper.AccountKeeper, randGenAccountsFn types.RandomGenesisAccountsFn, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + ac: accountKeeper.AddressCodec() +}, + accountKeeper: accountKeeper, + randGenAccountsFn: randGenAccountsFn, + legacySubspace: ss, +} +} + +/ Name returns the auth module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterServices registers a GRPC query service to respond to the +/ module-specific GRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.accountKeeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQueryServer(am.accountKeeper)) + m := keeper.NewMigrator(am.accountKeeper, cfg.QueryServer(), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 3 to 4: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 4, m.Migrate4To5); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 4 to 5", types.ModuleName)) +} +} + +/ InitGenesis performs genesis initialization for the auth module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.accountKeeper.InitGenesis(ctx, genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the auth +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.accountKeeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the auth module +func (am AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState, am.randGenAccountsFn) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for auth module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.accountKeeper.Schema) +} + +/ WeightedOperations doesn't return any auth module operation. +func (AppModule) + +WeightedOperations(_ module.SimulationState) []simtypes.WeightedOperation { + return nil +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideAddressCodec), + appmodule.Provide(ProvideModule), + ) +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. 
+func ProvideAddressCodec(config *modulev1.Module) + +address.Codec { + return authcodec.NewBech32Codec(config.Bech32Prefix) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + StoreService store.KVStoreService + Cdc codec.Codec + + RandomGenesisAccountsFn types.RandomGenesisAccountsFn `optional:"true"` + AccountI func() + +sdk.AccountI `optional:"true"` + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + AccountKeeper keeper.AccountKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + maccPerms := map[string][]string{ +} + for _, permission := range in.Config.ModuleAccountPermissions { + maccPerms[permission.Account] = permission.Permissions +} + + / default to governance authority if not provided + authority := types.NewModuleAddress(GovModuleName) + if in.Config.Authority != "" { + authority = types.NewModuleAddressOrBech32Address(in.Config.Authority) +} + if in.RandomGenesisAccountsFn == nil { + in.RandomGenesisAccountsFn = simulation.RandomGenesisAccounts +} + if in.AccountI == nil { + in.AccountI = types.ProtoBaseAccount +} + k := keeper.NewAccountKeeper(in.Cdc, in.StoreService, in.AccountI, maccPerms, in.Config.Bech32Prefix, authority.String()) + m := NewAppModule(in.Cdc, k, in.RandomGenesisAccountsFn, in.LegacySubspace) + +return ModuleOutputs{ + AccountKeeper: k, + Module: m +} +} +``` + +### `ValidateGenesis` + +The `ValidateGenesis(data GenesisState)` method is called to verify that the provided `genesisState` is correct. It should perform validity checks on each of the parameters listed in `GenesisState`. 
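Before the full `auth` example, the validation pattern distills to a small function. The sketch below is illustrative only — `GenesisState` here is a hypothetical, simplified struct (one parameter plus an account list), not a generated protobuf type — but it shows the shape of the checks: validate each parameter, then reject duplicate accounts, mirroring what `ValidateGenAccounts` does in `x/auth`:

```go
package main

import "fmt"

// Hypothetical, simplified genesis types for illustration only;
// real modules validate their generated protobuf GenesisState.
type GenesisState struct {
	MaxGrants uint64   // a module parameter
	Accounts  []string // addresses present at genesis
}

// ValidateGenesis checks every field of the module's genesis subset:
// parameters first, then per-entry checks such as duplicate detection.
func ValidateGenesis(data GenesisState) error {
	if data.MaxGrants == 0 {
		return fmt.Errorf("max grants must be positive")
	}
	seen := make(map[string]bool, len(data.Accounts))
	for _, addr := range data.Accounts {
		if seen[addr] {
			return fmt.Errorf("duplicate account found in genesis state; address: %s", addr)
		}
		seen[addr] = true
	}
	return nil
}

func main() {
	ok := GenesisState{MaxGrants: 10, Accounts: []string{"cosmos1aaa", "cosmos1bbb"}}
	fmt.Println(ValidateGenesis(ok)) // <nil>

	dup := GenesisState{MaxGrants: 10, Accounts: []string{"cosmos1aaa", "cosmos1aaa"}}
	fmt.Println(ValidateGenesis(dup) != nil) // true
}
```

The same principle applies regardless of the module: `ValidateGenesis` is pure stateless checking, so it can run before any store exists.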
See an example from the `auth` module: + +```go expandable +package types + +import ( + + "encoding/json" + "fmt" + "sort" + + proto "github.com/cosmos/gogoproto/proto" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" +) + +var _ types.UnpackInterfacesMessage = GenesisState{ +} + +/ RandomGenesisAccountsFn defines the function required to generate custom account types +type RandomGenesisAccountsFn func(simState *module.SimulationState) + +GenesisAccounts + +/ NewGenesisState - Create a new genesis state +func NewGenesisState(params Params, accounts GenesisAccounts) *GenesisState { + genAccounts, err := PackAccounts(accounts) + if err != nil { + panic(err) +} + +return &GenesisState{ + Params: params, + Accounts: genAccounts, +} +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (g GenesisState) + +UnpackInterfaces(unpacker types.AnyUnpacker) + +error { + for _, any := range g.Accounts { + var account GenesisAccount + err := unpacker.UnpackAny(any, &account) + if err != nil { + return err +} + +} + +return nil +} + +/ DefaultGenesisState - Return a default genesis state +func DefaultGenesisState() *GenesisState { + return NewGenesisState(DefaultParams(), GenesisAccounts{ +}) +} + +/ GetGenesisStateFromAppState returns x/auth GenesisState given raw application +/ genesis state. +func GetGenesisStateFromAppState(cdc codec.Codec, appState map[string]json.RawMessage) + +GenesisState { + var genesisState GenesisState + if appState[ModuleName] != nil { + cdc.MustUnmarshalJSON(appState[ModuleName], &genesisState) +} + +return genesisState +} + +/ ValidateGenesis performs basic validation of auth genesis data returning an +/ error for any failed validation criteria. 
+func ValidateGenesis(data GenesisState) + +error { + if err := data.Params.Validate(); err != nil { + return err +} + +genAccs, err := UnpackAccounts(data.Accounts) + if err != nil { + return err +} + +return ValidateGenAccounts(genAccs) +} + +/ SanitizeGenesisAccounts sorts accounts and coin sets. +func SanitizeGenesisAccounts(genAccs GenesisAccounts) + +GenesisAccounts { + / Make sure there aren't any duplicated account numbers by fixing the duplicates with the lowest unused values. + / seenAccNum = easy lookup for used account numbers. + seenAccNum := map[uint64]bool{ +} + / dupAccNum = a map of account number to accounts with duplicate account numbers (excluding the 1st one seen). + dupAccNum := map[uint64]GenesisAccounts{ +} + for _, acc := range genAccs { + num := acc.GetAccountNumber() + if !seenAccNum[num] { + seenAccNum[num] = true +} + +else { + dupAccNum[num] = append(dupAccNum[num], acc) +} + +} + + / dupAccNums a sorted list of the account numbers with duplicates. + var dupAccNums []uint64 + for num := range dupAccNum { + dupAccNums = append(dupAccNums, num) +} + +sort.Slice(dupAccNums, func(i, j int) + +bool { + return dupAccNums[i] < dupAccNums[j] +}) + + / Change the account number of the duplicated ones to the first unused value. + globalNum := uint64(0) + for _, dupNum := range dupAccNums { + accs := dupAccNum[dupNum] + for _, acc := range accs { + for seenAccNum[globalNum] { + globalNum++ +} + if err := acc.SetAccountNumber(globalNum); err != nil { + panic(err) +} + +seenAccNum[globalNum] = true +} + +} + + / Then sort them all by account number. 
+ sort.Slice(genAccs, func(i, j int) + +bool { + return genAccs[i].GetAccountNumber() < genAccs[j].GetAccountNumber() +}) + +return genAccs +} + +/ ValidateGenAccounts validates an array of GenesisAccounts and checks for duplicates +func ValidateGenAccounts(accounts GenesisAccounts) + +error { + addrMap := make(map[string]bool, len(accounts)) + for _, acc := range accounts { + / check for duplicated accounts + addrStr := acc.GetAddress().String() + if _, ok := addrMap[addrStr]; ok { + return fmt.Errorf("duplicate account found in genesis state; address: %s", addrStr) +} + +addrMap[addrStr] = true + + / check account specific validation + if err := acc.Validate(); err != nil { + return fmt.Errorf("invalid account found in genesis state; address: %s, error: %s", addrStr, err.Error()) +} + +} + +return nil +} + +/ GenesisAccountIterator implements genesis account iteration. +type GenesisAccountIterator struct{ +} + +/ IterateGenesisAccounts iterates over all the genesis accounts found in +/ appGenesis and invokes a callback on each genesis account. If any call +/ returns true, iteration stops. 
+func (GenesisAccountIterator) + +IterateGenesisAccounts( + cdc codec.Codec, appGenesis map[string]json.RawMessage, cb func(sdk.AccountI) (stop bool), +) { + for _, genAcc := range GetGenesisStateFromAppState(cdc, appGenesis).Accounts { + acc, ok := genAcc.GetCachedValue().(sdk.AccountI) + if !ok { + panic("expected account") +} + if cb(acc) { + break +} + +} +} + +/ PackAccounts converts GenesisAccounts to Any slice +func PackAccounts(accounts GenesisAccounts) ([]*types.Any, error) { + accountsAny := make([]*types.Any, len(accounts)) + for i, acc := range accounts { + msg, ok := acc.(proto.Message) + if !ok { + return nil, fmt.Errorf("cannot proto marshal %T", acc) +} + +any, err := types.NewAnyWithValue(msg) + if err != nil { + return nil, err +} + +accountsAny[i] = any +} + +return accountsAny, nil +} + +/ UnpackAccounts converts Any slice to GenesisAccounts +func UnpackAccounts(accountsAny []*types.Any) (GenesisAccounts, error) { + accounts := make(GenesisAccounts, len(accountsAny)) + for i, any := range accountsAny { + acc, ok := any.GetCachedValue().(GenesisAccount) + if !ok { + return nil, fmt.Errorf("expected genesis account") +} + +accounts[i] = acc +} + +return accounts, nil +} +``` + +## Other Genesis Methods + +Other than the methods related directly to `GenesisState`, module developers are expected to implement two other methods as part of the [`AppModuleGenesis` interface](/docs/sdk/v0.53/documentation/module-system/module-manager#appmodulegenesis) (only if the module needs to initialize a subset of state in genesis). These methods are [`InitGenesis`](#initgenesis) and [`ExportGenesis`](#exportgenesis). + +### `InitGenesis` + +The `InitGenesis` method is executed during [`InitChain`](/docs/sdk/v0.53/documentation/application-framework/baseapp#initchain) when the application is first started. 
Given a `GenesisState`, it initializes the subset of the state managed by the module by using the module's [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper) setter function on each parameter within the `GenesisState`. + +The [module manager](/docs/sdk/v0.53/documentation/module-system/module-manager#manager) of the application is responsible for calling the `InitGenesis` method of each of the application's modules in order. This order is set by the application developer via the manager's `SetOrderInitGenesis` method, which is called in the [application's constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function). + +See an example of `InitGenesis` from the `auth` module: + +```go expandable +package keeper + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ InitGenesis - Init store state from genesis data +/ +/ CONTRACT: old coins from the FeeCollectionKeeper need to be transferred through +/ a genesis port script to the new fee collector account +func (ak AccountKeeper) + +InitGenesis(ctx sdk.Context, data types.GenesisState) { + if err := ak.Params.Set(ctx, data.Params); err != nil { + panic(err) +} + +accounts, err := types.UnpackAccounts(data.Accounts) + if err != nil { + panic(err) +} + +accounts = types.SanitizeGenesisAccounts(accounts) + + / Set the accounts and make sure the global account number matches the largest account number (even if zero).
+ var lastAccNum *uint64 + for _, acc := range accounts { + accNum := acc.GetAccountNumber() + for lastAccNum == nil || *lastAccNum < accNum { + n := ak.NextAccountNumber(ctx) + +lastAccNum = &n +} + +ak.SetAccount(ctx, acc) +} + +ak.GetModuleAccount(ctx, types.FeeCollectorName) +} + +/ ExportGenesis returns a GenesisState for a given context and keeper +func (ak AccountKeeper) + +ExportGenesis(ctx sdk.Context) *types.GenesisState { + params := ak.GetParams(ctx) + +var genAccounts types.GenesisAccounts + ak.IterateAccounts(ctx, func(account sdk.AccountI) + +bool { + genAccount := account.(types.GenesisAccount) + +genAccounts = append(genAccounts, genAccount) + +return false +}) + +return types.NewGenesisState(params, genAccounts) +} +``` + +### `ExportGenesis` + +The `ExportGenesis` method is executed whenever an export of the state is made. It takes the latest known version of the subset of the state managed by the module and creates a new `GenesisState` out of it. This is mainly used when the chain needs to be upgraded via a hard fork. + +See an example of `ExportGenesis` from the `auth` module. + +```go expandable +package keeper + +import ( + + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ InitGenesis - Init store state from genesis data +/ +/ CONTRACT: old coins from the FeeCollectionKeeper need to be transferred through +/ a genesis port script to the new fee collector account +func (ak AccountKeeper) + +InitGenesis(ctx sdk.Context, data types.GenesisState) { + if err := ak.Params.Set(ctx, data.Params); err != nil { + panic(err) +} + +accounts, err := types.UnpackAccounts(data.Accounts) + if err != nil { + panic(err) +} + +accounts = types.SanitizeGenesisAccounts(accounts) + + / Set the accounts and make sure the global account number matches the largest account number (even if zero). 
+ var lastAccNum *uint64 + for _, acc := range accounts { + accNum := acc.GetAccountNumber() + for lastAccNum == nil || *lastAccNum < accNum { + n := ak.NextAccountNumber(ctx) + +lastAccNum = &n +} + +ak.SetAccount(ctx, acc) +} + +ak.GetModuleAccount(ctx, types.FeeCollectorName) +} + +/ ExportGenesis returns a GenesisState for a given context and keeper +func (ak AccountKeeper) + +ExportGenesis(ctx sdk.Context) *types.GenesisState { + params := ak.GetParams(ctx) + +var genAccounts types.GenesisAccounts + ak.IterateAccounts(ctx, func(account sdk.AccountI) + +bool { + genAccount := account.(types.GenesisAccount) + +genAccounts = append(genAccounts, genAccount) + +return false +}) + +return types.NewGenesisState(params, genAccounts) +} +``` + +### GenesisTxHandler + +`GenesisTxHandler` is a way for modules to submit state transitions prior to the first block. This is used by `x/genutil` to submit the genesis transactions for the validators to be added to staking. + +```go +package genesis + +// TxHandler is an interface that modules can implement to provide genesis state transitions +type TxHandler interface { + ExecuteGenesisTx([]byte) error +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/genutil.mdx b/docs/sdk/v0.53/documentation/module-system/genutil.mdx new file mode 100644 index 00000000..556d5cd2 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/genutil.mdx @@ -0,0 +1,1251 @@ +--- +title: '`x/genutil`' +description: >- + The genutil package contains a variety of genesis utility functionalities for + usage within a blockchain application. +--- + +## Concepts + +The `genutil` package contains a variety of genesis utility functionalities for usage within a blockchain application.
Namely: + +* Genesis transaction (gentx) handling +* Commands for collection and creation of gentxs +* `InitChain` processing of gentxs +* Genesis file creation +* Genesis file validation +* Genesis file migration +* CometBFT-related initialization + * Translation of an app genesis to a CometBFT genesis + +## Genesis + +Genutil contains the data structure that defines an application genesis. +An application genesis consists of a consensus genesis (e.g. a CometBFT genesis) and application-related genesis data. + +```go expandable +package types + +import ( + + "bytes" + "encoding/json" + "errors" + "fmt" + "os" + "time" + + cmtjson "github.com/cometbft/cometbft/libs/json" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + cmttime "github.com/cometbft/cometbft/types/time" + "github.com/cosmos/cosmos-sdk/version" +) + +const ( + / MaxChainIDLen is the maximum length of a chain ID. + MaxChainIDLen = cmttypes.MaxChainIDLen +) + +/ AppGenesis defines the app's genesis. +type AppGenesis struct { + AppName string `json:"app_name"` + AppVersion string `json:"app_version"` + GenesisTime time.Time `json:"genesis_time"` + ChainID string `json:"chain_id"` + InitialHeight int64 `json:"initial_height"` + AppHash []byte `json:"app_hash"` + AppState json.RawMessage `json:"app_state,omitempty"` + Consensus *ConsensusGenesis `json:"consensus,omitempty"` +} + +/ NewAppGenesisWithVersion returns a new AppGenesis with the app name and app version already. +func NewAppGenesisWithVersion(chainID string, appState json.RawMessage) *AppGenesis { + return &AppGenesis{ + AppName: version.AppName, + AppVersion: version.Version, + ChainID: chainID, + AppState: appState, + Consensus: &ConsensusGenesis{ + Validators: nil, +}, +} +} + +/ ValidateAndComplete performs validation and completes the AppGenesis.
+func (ag *AppGenesis) + +ValidateAndComplete() + +error { + if ag.ChainID == "" { + return errors.New("genesis doc must include non-empty chain_id") +} + if len(ag.ChainID) > MaxChainIDLen { + return fmt.Errorf("chain_id in genesis doc is too long (max: %d)", MaxChainIDLen) +} + if ag.InitialHeight < 0 { + return fmt.Errorf("initial_height cannot be negative (got %v)", ag.InitialHeight) +} + if ag.InitialHeight == 0 { + ag.InitialHeight = 1 +} + if ag.GenesisTime.IsZero() { + ag.GenesisTime = cmttime.Now() +} + if err := ag.Consensus.ValidateAndComplete(); err != nil { + return err +} + +return nil +} + +/ SaveAs is a utility method for saving AppGenesis as a JSON file. +func (ag *AppGenesis) + +SaveAs(file string) + +error { + appGenesisBytes, err := json.MarshalIndent(ag, "", " + ") + if err != nil { + return err +} + +return os.WriteFile(file, appGenesisBytes, 0o600) +} + +/ AppGenesisFromFile reads the AppGenesis from the provided file. +func AppGenesisFromFile(genFile string) (*AppGenesis, error) { + jsonBlob, err := os.ReadFile(genFile) + if err != nil { + return nil, fmt.Errorf("couldn't read AppGenesis file (%s): %w", genFile, err) +} + +var appGenesis AppGenesis + if err := json.Unmarshal(jsonBlob, &appGenesis); err != nil { + / fallback to CometBFT genesis + var ctmGenesis cmttypes.GenesisDoc + if err2 := cmtjson.Unmarshal(jsonBlob, &ctmGenesis); err2 != nil { + return nil, fmt.Errorf("error unmarshalling AppGenesis at %s: %w\n failed fallback to CometBFT GenDoc: %w", genFile, err, err2) +} + +appGenesis = AppGenesis{ + AppName: version.AppName, + / AppVersion is not filled as we do not know it from a CometBFT genesis + GenesisTime: ctmGenesis.GenesisTime, + ChainID: ctmGenesis.ChainID, + InitialHeight: ctmGenesis.InitialHeight, + AppHash: ctmGenesis.AppHash, + AppState: ctmGenesis.AppState, + Consensus: &ConsensusGenesis{ + Validators: ctmGenesis.Validators, + Params: ctmGenesis.ConsensusParams, +}, +} + +} + +return &appGenesis, nil +} + +/ 
-------------------------- +/ CometBFT Genesis Handling +/ -------------------------- + +/ ToGenesisDoc converts the AppGenesis to a CometBFT GenesisDoc. +func (ag *AppGenesis) + +ToGenesisDoc() (*cmttypes.GenesisDoc, error) { + return &cmttypes.GenesisDoc{ + GenesisTime: ag.GenesisTime, + ChainID: ag.ChainID, + InitialHeight: ag.InitialHeight, + AppHash: ag.AppHash, + AppState: ag.AppState, + Validators: ag.Consensus.Validators, + ConsensusParams: ag.Consensus.Params, +}, nil +} + +/ ConsensusGenesis defines the consensus layer's genesis. +/ TODO(@julienrbrt) + +eventually abstract from CometBFT types +type ConsensusGenesis struct { + Validators []cmttypes.GenesisValidator `json:"validators,omitempty"` + Params *cmttypes.ConsensusParams `json:"params,omitempty"` +} + +/ NewConsensusGenesis returns a ConsensusGenesis with given values. +/ It takes a proto consensus params so it can called from server export command. +func NewConsensusGenesis(params cmtproto.ConsensusParams, validators []cmttypes.GenesisValidator) *ConsensusGenesis { + return &ConsensusGenesis{ + Params: &cmttypes.ConsensusParams{ + Block: cmttypes.BlockParams{ + MaxBytes: params.Block.MaxBytes, + MaxGas: params.Block.MaxGas, +}, + Evidence: cmttypes.EvidenceParams{ + MaxAgeNumBlocks: params.Evidence.MaxAgeNumBlocks, + MaxAgeDuration: params.Evidence.MaxAgeDuration, + MaxBytes: params.Evidence.MaxBytes, +}, + Validator: cmttypes.ValidatorParams{ + PubKeyTypes: params.Validator.PubKeyTypes, +}, +}, + Validators: validators, +} +} + +func (cs *ConsensusGenesis) + +MarshalJSON() ([]byte, error) { + type Alias ConsensusGenesis + return cmtjson.Marshal(&Alias{ + Validators: cs.Validators, + Params: cs.Params, +}) +} + +func (cs *ConsensusGenesis) + +UnmarshalJSON(b []byte) + +error { + type Alias ConsensusGenesis + result := Alias{ +} + if err := cmtjson.Unmarshal(b, &result); err != nil { + return err +} + +cs.Params = result.Params + cs.Validators = result.Validators + + return nil +} + +func (cs 
*ConsensusGenesis) + +ValidateAndComplete() + +error { + if cs == nil { + return fmt.Errorf("consensus genesis cannot be nil") +} + if cs.Params == nil { + cs.Params = cmttypes.DefaultConsensusParams() +} + +else if err := cs.Params.ValidateBasic(); err != nil { + return err +} + for i, v := range cs.Validators { + if v.Power == 0 { + return fmt.Errorf("the genesis file cannot contain validators with no voting power: %v", v) +} + if len(v.Address) > 0 && !bytes.Equal(v.PubKey.Address(), v.Address) { + return fmt.Errorf("incorrect address for validator %v in the genesis file, should be %v", v, v.PubKey.Address()) +} + if len(v.Address) == 0 { + cs.Validators[i].Address = v.PubKey.Address() +} + +} + +return nil +} +``` + +The application genesis can then be translated into the format expected by the consensus engine: + +```go expandable +package types + +import ( + + "bytes" + "encoding/json" + "errors" + "fmt" + "os" + "time" + + cmtjson "github.com/cometbft/cometbft/libs/json" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + cmttime "github.com/cometbft/cometbft/types/time" + "github.com/cosmos/cosmos-sdk/version" +) + +const ( + / MaxChainIDLen is the maximum length of a chain ID. + MaxChainIDLen = cmttypes.MaxChainIDLen +) + +/ AppGenesis defines the app's genesis. +type AppGenesis struct { + AppName string `json:"app_name"` + AppVersion string `json:"app_version"` + GenesisTime time.Time `json:"genesis_time"` + ChainID string `json:"chain_id"` + InitialHeight int64 `json:"initial_height"` + AppHash []byte `json:"app_hash"` + AppState json.RawMessage `json:"app_state,omitempty"` + Consensus *ConsensusGenesis `json:"consensus,omitempty"` +} + +/ NewAppGenesisWithVersion returns a new AppGenesis with the app name and app version already.
+func NewAppGenesisWithVersion(chainID string, appState json.RawMessage) *AppGenesis { + return &AppGenesis{ + AppName: version.AppName, + AppVersion: version.Version, + ChainID: chainID, + AppState: appState, + Consensus: &ConsensusGenesis{ + Validators: nil, +}, +} +} + +/ ValidateAndComplete performs validation and completes the AppGenesis. +func (ag *AppGenesis) + +ValidateAndComplete() + +error { + if ag.ChainID == "" { + return errors.New("genesis doc must include non-empty chain_id") +} + if len(ag.ChainID) > MaxChainIDLen { + return fmt.Errorf("chain_id in genesis doc is too long (max: %d)", MaxChainIDLen) +} + if ag.InitialHeight < 0 { + return fmt.Errorf("initial_height cannot be negative (got %v)", ag.InitialHeight) +} + if ag.InitialHeight == 0 { + ag.InitialHeight = 1 +} + if ag.GenesisTime.IsZero() { + ag.GenesisTime = cmttime.Now() +} + if err := ag.Consensus.ValidateAndComplete(); err != nil { + return err +} + +return nil +} + +/ SaveAs is a utility method for saving AppGenesis as a JSON file. +func (ag *AppGenesis) + +SaveAs(file string) + +error { + appGenesisBytes, err := json.MarshalIndent(ag, "", " + ") + if err != nil { + return err +} + +return os.WriteFile(file, appGenesisBytes, 0o600) +} + +/ AppGenesisFromFile reads the AppGenesis from the provided file. 
+func AppGenesisFromFile(genFile string) (*AppGenesis, error) { + jsonBlob, err := os.ReadFile(genFile) + if err != nil { + return nil, fmt.Errorf("couldn't read AppGenesis file (%s): %w", genFile, err) +} + +var appGenesis AppGenesis + if err := json.Unmarshal(jsonBlob, &appGenesis); err != nil { + / fallback to CometBFT genesis + var ctmGenesis cmttypes.GenesisDoc + if err2 := cmtjson.Unmarshal(jsonBlob, &ctmGenesis); err2 != nil { + return nil, fmt.Errorf("error unmarshalling AppGenesis at %s: %w\n failed fallback to CometBFT GenDoc: %w", genFile, err, err2) +} + +appGenesis = AppGenesis{ + AppName: version.AppName, + / AppVersion is not filled as we do not know it from a CometBFT genesis + GenesisTime: ctmGenesis.GenesisTime, + ChainID: ctmGenesis.ChainID, + InitialHeight: ctmGenesis.InitialHeight, + AppHash: ctmGenesis.AppHash, + AppState: ctmGenesis.AppState, + Consensus: &ConsensusGenesis{ + Validators: ctmGenesis.Validators, + Params: ctmGenesis.ConsensusParams, +}, +} + +} + +return &appGenesis, nil +} + +/ -------------------------- +/ CometBFT Genesis Handling +/ -------------------------- + +/ ToGenesisDoc converts the AppGenesis to a CometBFT GenesisDoc. +func (ag *AppGenesis) + +ToGenesisDoc() (*cmttypes.GenesisDoc, error) { + return &cmttypes.GenesisDoc{ + GenesisTime: ag.GenesisTime, + ChainID: ag.ChainID, + InitialHeight: ag.InitialHeight, + AppHash: ag.AppHash, + AppState: ag.AppState, + Validators: ag.Consensus.Validators, + ConsensusParams: ag.Consensus.Params, +}, nil +} + +/ ConsensusGenesis defines the consensus layer's genesis. +/ TODO(@julienrbrt) + +eventually abstract from CometBFT types +type ConsensusGenesis struct { + Validators []cmttypes.GenesisValidator `json:"validators,omitempty"` + Params *cmttypes.ConsensusParams `json:"params,omitempty"` +} + +/ NewConsensusGenesis returns a ConsensusGenesis with given values. +/ It takes a proto consensus params so it can called from server export command. 
+func NewConsensusGenesis(params cmtproto.ConsensusParams, validators []cmttypes.GenesisValidator) *ConsensusGenesis { + return &ConsensusGenesis{ + Params: &cmttypes.ConsensusParams{ + Block: cmttypes.BlockParams{ + MaxBytes: params.Block.MaxBytes, + MaxGas: params.Block.MaxGas, +}, + Evidence: cmttypes.EvidenceParams{ + MaxAgeNumBlocks: params.Evidence.MaxAgeNumBlocks, + MaxAgeDuration: params.Evidence.MaxAgeDuration, + MaxBytes: params.Evidence.MaxBytes, +}, + Validator: cmttypes.ValidatorParams{ + PubKeyTypes: params.Validator.PubKeyTypes, +}, +}, + Validators: validators, +} +} + +func (cs *ConsensusGenesis) + +MarshalJSON() ([]byte, error) { + type Alias ConsensusGenesis + return cmtjson.Marshal(&Alias{ + Validators: cs.Validators, + Params: cs.Params, +}) +} + +func (cs *ConsensusGenesis) + +UnmarshalJSON(b []byte) + +error { + type Alias ConsensusGenesis + result := Alias{ +} + if err := cmtjson.Unmarshal(b, &result); err != nil { + return err +} + +cs.Params = result.Params + cs.Validators = result.Validators + + return nil +} + +func (cs *ConsensusGenesis) + +ValidateAndComplete() + +error { + if cs == nil { + return fmt.Errorf("consensus genesis cannot be nil") +} + if cs.Params == nil { + cs.Params = cmttypes.DefaultConsensusParams() +} + +else if err := cs.Params.ValidateBasic(); err != nil { + return err +} + for i, v := range cs.Validators { + if v.Power == 0 { + return fmt.Errorf("the genesis file cannot contain validators with no voting power: %v", v) +} + if len(v.Address) > 0 && !bytes.Equal(v.PubKey.Address(), v.Address) { + return fmt.Errorf("incorrect address for validator %v in the genesis file, should be %v", v, v.PubKey.Address()) +} + if len(v.Address) == 0 { + cs.Validators[i].Address = v.PubKey.Address() +} + +} + +return nil +} +``` + +```go expandable +package server + +import ( + + "context" + "errors" + "fmt" + "io" + "net" + "os" + "runtime/pprof" + "github.com/cometbft/cometbft/abci/server" + cmtcmd 
"github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + "github.com/cometbft/cometbft/proxy" + "github.com/cometbft/cometbft/rpc/client/local" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = 
"iavl-disable-fastnode" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. 
+func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. + +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + PreRunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + / Bind flags to the Context's Viper so the app construction can set + / options accordingly. + if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { + return err +} + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + +return err +}, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +return wrapCPUProfile(serverCtx, func() + +error { + return start(serverCtx, clientCtx, appCreator, withCMT, opts) +}) +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. 
Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the 
CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, app, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func 
startStandAlone(svrCtx *Context, app types.Application, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %v", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. + <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return errors.Join(svr.Stop(), app.Close()) +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + home := cmtCfg.RootDir + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. 
+ if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, cmtCfg, svrCfg, clientCtx, svrCtx, app, home, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. +func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() + _ = app.Close() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := 
serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. +func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, port, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( 
+ grpcAddress, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", grpcAddress) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + cmtCfg *cmtcfg.Config, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + if !cfg.Telemetry.Enabled { + return nil, nil +} + +return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + +app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) + +cleanupFn = 
func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} +``` + +## Client + +### CLI + +The genutil commands are available under the `genesis` subcommand. + +#### add-genesis-account + +Add a genesis account to `genesis.json`. Learn more [here](https://docs.cosmos.network/main/run-node/run-node#adding-genesis-accounts). + +#### collect-gentxs + +Collect genesis txs and output a `genesis.json` file. + +```shell +simd genesis collect-gentxs +``` + +This will create a new `genesis.json` file that includes data from all the validators (we sometimes call it the "super genesis file" to distinguish it from single-validator genesis files). + +#### gentx + +Generate a genesis tx carrying a self delegation. + +```shell +simd genesis gentx [key_name] [amount] --chain-id [chain-id] +``` + +This will create the genesis transaction for your new chain. Here `amount` should be at least `1000000000stake`. +If you provide too much or too little, you will encounter an error when starting a node. + +#### migrate + +Migrate genesis to a specified target (SDK) version. + +```shell +simd genesis migrate [target-version] +``` + + +The `migrate` command is extensible and takes a `MigrationMap`. This map is a mapping of target versions to genesis migration functions. +When not using the default `MigrationMap`, it is recommended to still call the default `MigrationMap` corresponding to the SDK version of the chain and prepend/append your own genesis migrations. + + +#### validate-genesis + +Validates the genesis file at the default location or at the location passed as an argument. + +```shell +simd genesis validate-genesis +``` + + +`validate-genesis` only checks that the genesis file is valid for the **current application binary**. To validate a genesis file from a previous version of the application, first use the `migrate` command to migrate the genesis to the current version.
+ diff --git a/docs/sdk/v0.53/documentation/module-system/gov.mdx b/docs/sdk/v0.53/documentation/module-system/gov.mdx new file mode 100644 index 00000000..0320bb37 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/gov.mdx @@ -0,0 +1,2991 @@ +--- +title: '`x/gov`' +description: >- + This paper specifies the Governance module of the Cosmos SDK, which was first + described in the Cosmos Whitepaper in June 2016. +--- + +## Abstract + +This paper specifies the Governance module of the Cosmos SDK, which was first +described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in +June 2016. + +The module enables a Cosmos SDK based blockchain to support an on-chain governance +system. In this system, holders of the native staking token of the chain can vote +on proposals on a 1 token 1 vote basis. Below is a list of features the module +currently supports: + +* **Proposal submission:** Users can submit proposals with a deposit. Once the + minimum deposit is reached, the proposal enters the voting period. The minimum deposit can be reached by collecting deposits from different users (including the proposer) within the deposit period. +* **Vote:** Participants can vote on proposals that reached MinDeposit and entered the voting period. +* **Inheritance and penalties:** Delegators inherit their validator's vote if + they don't vote themselves. +* **Claiming deposit:** Users that deposited on proposals can recover their + deposits if the proposal was accepted or rejected. If the proposal was vetoed, or never entered the voting period (minimum deposit not reached within the deposit period), the deposit is burned. + +This module is in use on the Cosmos Hub (a.k.a [gaia](https://github.com/cosmos/gaia)). +Features that may be added in the future are described in [Future Improvements](#future-improvements). + +## Contents + +The following specification uses *ATOM* as the native staking token.
The module +can be adapted to any Proof-Of-Stake blockchain by replacing *ATOM* with the native +staking token of the chain. + +* [Concepts](#concepts) + * [Proposal submission](#proposal-submission) + * [Deposit](#deposit) + * [Vote](#vote) + * [Software Upgrade](#software-upgrade) +* [State](#state) + * [Proposals](#proposals) + * [Parameters and base types](#parameters-and-base-types) + * [Deposit](#deposit-1) + * [ValidatorGovInfo](#validatorgovinfo) + * [Stores](#stores) + * [Proposal Processing Queue](#proposal-processing-queue) + * [Legacy Proposal](#legacy-proposal) +* [Messages](#messages) + * [Proposal Submission](#proposal-submission-1) + * [Deposit](#deposit-2) + * [Vote](#vote-1) +* [Events](#events) + * [EndBlocker](#endblocker) + * [Handlers](#handlers) +* [Parameters](#parameters) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) +* [Metadata](#metadata) + * [Proposal](#proposal-3) + * [Vote](#vote-5) +* [Future Improvements](#future-improvements) + +## Concepts + +*Disclaimer: This is work in progress. Mechanisms are susceptible to change.* + +The governance process is divided into a few steps that are outlined below: + +* **Proposal submission:** Proposal is submitted to the blockchain with a + deposit. +* **Vote:** Once deposit reaches a certain value (`MinDeposit`), proposal is + confirmed and vote opens. Bonded Atom holders can then send `TxGovVote` + transactions to vote on the proposal. +* **Execution:** After a period of time, the votes are tallied and, depending + on the result, the messages in the proposal will be executed. + +### Proposal submission + +#### Right to submit a proposal + +Every account can submit proposals by sending a `MsgSubmitProposal` transaction. +Once a proposal is submitted, it is identified by its unique `proposalID`. + +#### Proposal Messages + +A proposal includes an array of `sdk.Msg`s which are executed automatically if the +proposal passes.
The messages are executed by the governance `ModuleAccount` itself. Modules +such as `x/upgrade` that want to allow certain messages to be executed only by governance +should add a whitelist within the respective msg server, granting the governance +module the right to execute the message once a quorum has been reached. The governance +module uses the `MsgServiceRouter` to check that these messages are correctly constructed +and have a respective path to execute on, but it does not perform a full validity check. + +### Deposit + +To prevent spam, proposals must be submitted with a deposit in the coins defined by +the `MinDeposit` param. + +When a proposal is submitted, it has to be accompanied by a deposit that must be +strictly positive, but can be lower than `MinDeposit`. The submitter doesn't need +to pay for the entire deposit on their own. The newly created proposal is stored in +an *inactive proposal queue* and stays there until its deposit passes the `MinDeposit`. +Other token holders can increase the proposal's deposit by sending a `Deposit` +transaction. If a proposal doesn't pass the `MinDeposit` before the deposit end time +(the time when deposits are no longer accepted), the proposal will be destroyed: the +proposal will be removed from state and the deposit will be burned (see x/gov `EndBlocker`). +When a proposal deposit passes the `MinDeposit` threshold (even during the proposal +submission) before the deposit end time, the proposal will be moved into the +*active proposal queue* and the voting period will begin. + +The deposit is kept in escrow and held by the governance `ModuleAccount` until the +proposal is finalized (passed or rejected).
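The deposit flow described above can be sketched as a small state transition. This is a hypothetical, simplified model for illustration only: the real x/gov keeper works with `sdk.Coins` and persistent stores, not plain integers, and the type and function names below are invented.

```go
package main

import "fmt"

// proposal is a hypothetical, stripped-down model of a governance proposal,
// tracking only the accumulated deposit and whether voting has started.
type proposal struct {
	totalDeposit   int64
	inVotingPeriod bool
}

// addDeposit accumulates a depositor's contribution and moves the proposal
// from the inactive queue into the voting period once MinDeposit is reached,
// mirroring the behavior described in the text above.
func (p *proposal) addDeposit(amount, minDeposit int64) {
	p.totalDeposit += amount
	if !p.inVotingPeriod && p.totalDeposit >= minDeposit {
		p.inVotingPeriod = true
	}
}

func main() {
	p := &proposal{}
	p.addDeposit(400, 1000) // below MinDeposit: proposal stays inactive
	fmt.Println(p.inVotingPeriod)
	p.addDeposit(600, 1000) // a second depositor completes MinDeposit
	fmt.Println(p.inVotingPeriod)
}
```

Note that, as in the spec, the submitter does not have to fund the whole deposit: any account can top it up before the deposit end time.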
+ +#### Deposit refund and burn + +When a proposal is finalized, the coins from the deposit are either refunded or burned +according to the final tally of the proposal: + +* If the proposal is approved or rejected but *not* vetoed, each deposit will be + automatically refunded to its respective depositor (transferred from the governance + `ModuleAccount`). +* If the proposal is vetoed with more than 1/3 of the voting power voting `NoWithVeto`, deposits will be burned from the + governance `ModuleAccount`, and the proposal information along with its deposit + information will be removed from state. +* All refunded or burned deposits are removed from the state. Events are issued when + burning or refunding a deposit. + +### Vote + +#### Participants + +*Participants* are users that have the right to vote on proposals. On the +Cosmos Hub, participants are bonded Atom holders. Unbonded Atom holders and +other users do not get the right to participate in governance. However, they +can submit and deposit on proposals. + +Note that when *participants* have both bonded and unbonded Atoms, their voting power is calculated from their bonded Atom holdings only. + +#### Voting period + +Once a proposal reaches `MinDeposit`, it immediately enters `Voting period`. We +define `Voting period` as the interval between the moment the vote opens and +the moment the vote closes. The initial value of `Voting period` is 2 weeks. + +#### Option set + +The option set of a proposal refers to the set of choices a participant can +choose from when casting their vote. + +The initial option set includes the following options: + +* `Yes` +* `No` +* `NoWithVeto` +* `Abstain` + +`NoWithVeto` counts as `No` but also adds a `Veto` vote. The `Abstain` option +allows voters to signal that they do not intend to vote in favor of or against the +proposal but accept the result of the vote.
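The option set above can be modeled as a small enum. The sketch below also checks the validity rule that x/gov applies to split ("weighted") ballots over this option set: no duplicate options, and weights summing to exactly 1. It is a hypothetical illustration only; the SDK represents weights as decimal strings, whereas basis points are used here to keep the example self-contained.

```go
package main

import "fmt"

// VoteOption models the four options of the initial option set.
type VoteOption int

const (
	OptionYes VoteOption = iota
	OptionAbstain
	OptionNo
	OptionNoWithVeto
)

// weightedOption pairs an option with a weight in basis points (10000 = 1.0).
// Hypothetical representation: the SDK itself uses decimal strings.
type weightedOption struct {
	Option VoteOption
	Weight int64
}

// validBallot rejects duplicate options and non-positive weights, and
// requires the weights to sum to exactly 1 (10000 basis points).
func validBallot(opts []weightedOption) bool {
	seen := map[VoteOption]bool{}
	var total int64
	for _, o := range opts {
		if seen[o.Option] || o.Weight <= 0 {
			return false
		}
		seen[o.Option] = true
		total += o.Weight
	}
	return total == 10000
}

func main() {
	// 70% Yes / 30% No: a valid split ballot.
	fmt.Println(validBallot([]weightedOption{{OptionYes, 7000}, {OptionNo, 3000}}))
	// Duplicate option: invalid.
	fmt.Println(validBallot([]weightedOption{{OptionYes, 7000}, {OptionYes, 3000}}))
}
```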
+ +*Note: from the UI, for urgent proposals we should maybe add a ‘Not Urgent’ option that casts a `NoWithVeto` vote.* + +#### Weighted Votes + +[ADR-037](docs/sdk/next/documentation/legacy/adr-comprehensive) introduces the weighted vote feature, which allows a staker to split their votes into several voting options. For example, a staker could use 70% of their voting power to vote Yes and 30% of their voting power to vote No. + +Oftentimes the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, and so it makes sense to allow them to split their voting power. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll. + +To represent a weighted vote on chain, we use the following Protobuf message. + +```protobuf +// WeightedVoteOption defines a unit of vote for vote split. +// +// Since: cosmos-sdk 0.43 +message WeightedVoteOption { + // option defines the valid vote options, it must not contain duplicate vote options. + VoteOption option = 1; + + // weight is the vote weight associated with the vote option. + string weight = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} +``` + +```protobuf +// Vote defines a vote on a governance proposal. +// A Vote consists of a proposal ID, the voter, and the vote option. +message Vote { + option (gogoproto.goproto_stringer) = false; + option (gogoproto.equal) = false; + + // proposal_id defines the unique id of the proposal. + uint64 proposal_id = 1 [(gogoproto.jsontag) = "id", (amino.field_name) = "id", (amino.dont_omitempty) = true]; + + // voter is the voter address of the proposal.
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // Deprecated: Prefer to use `options` instead. This field is set in queries + // if and only if `len(options) == 1` and that option has weight 1. In all + // other cases, this field will default to VOTE_OPTION_UNSPECIFIED. + VoteOption option = 3 [deprecated = true]; + + // options is the weighted vote options. + // + // Since: cosmos-sdk 0.43 + repeated WeightedVoteOption options = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} +``` + +For a weighted vote to be valid, the `options` field must not contain duplicate vote options, and the sum of weights of all options must be equal to 1. + +#### Custom Vote Calculation + +Cosmos SDK v0.53.0 introduced an option for developers to define a custom vote result and voting power calculation function. + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "cosmossdk.io/collections" + "cosmossdk.io/math" + + sdk "github.com/cosmos/cosmos-sdk/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ CalculateVoteResultsAndVotingPowerFn is a function signature for calculating vote results and voting power +/ It can be overridden to customize the voting power calculation for proposals +/ It gets the proposal tallied and the validators governance infos (bonded tokens, voting power, etc.) 
+/ It must return the total voting power and the results of the vote +type CalculateVoteResultsAndVotingPowerFn func( + ctx context.Context, + k Keeper, + proposal v1.Proposal, + validators map[string]v1.ValidatorGovInfo, +) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) + +func defaultCalculateVoteResultsAndVotingPower( + ctx context.Context, + k Keeper, + proposal v1.Proposal, + validators map[string]v1.ValidatorGovInfo, +) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) { + totalVotingPower := math.LegacyZeroDec() + +results = make(map[v1.VoteOption]math.LegacyDec) + +results[v1.OptionYes] = math.LegacyZeroDec() + +results[v1.OptionAbstain] = math.LegacyZeroDec() + +results[v1.OptionNo] = math.LegacyZeroDec() + +results[v1.OptionNoWithVeto] = math.LegacyZeroDec() + rng := collections.NewPrefixedPairRange[uint64, sdk.AccAddress](/docs/sdk/v0.53/documentation/module-system/proposal.Id) + votesToRemove := []collections.Pair[uint64, sdk.AccAddress]{ +} + +err = k.Votes.Walk(ctx, rng, func(key collections.Pair[uint64, sdk.AccAddress], vote v1.Vote) (bool, error) { + / if validator, just record it in the map + voter, err := k.authKeeper.AddressCodec().StringToBytes(vote.Voter) + if err != nil { + return false, err +} + +valAddrStr, err := k.sk.ValidatorAddressCodec().BytesToString(voter) + if err != nil { + return false, err +} + if val, ok := validators[valAddrStr]; ok { + val.Vote = vote.Options + validators[valAddrStr] = val +} + + / iterate over all delegations from voter, deduct from any delegated-to validators + err = k.sk.IterateDelegations(ctx, voter, func(index int64, delegation stakingtypes.DelegationI) (stop bool) { + valAddrStr := delegation.GetValidatorAddr() + if val, ok := validators[valAddrStr]; ok { + / There is no need to handle the special case that validator address equal to voter address. 
+ / Because voter's voting power will tally again even if there will be deduction of voter's voting power from validator. + val.DelegatorDeductions = val.DelegatorDeductions.Add(delegation.GetShares()) + +validators[valAddrStr] = val + + / delegation shares * bonded / total shares + votingPower := delegation.GetShares().MulInt(val.BondedTokens).Quo(val.DelegatorShares) + for _, option := range vote.Options { + weight, _ := math.LegacyNewDecFromStr(option.Weight) + subPower := votingPower.Mul(weight) + +results[option.Option] = results[option.Option].Add(subPower) +} + +totalVotingPower = totalVotingPower.Add(votingPower) +} + +return false +}) + if err != nil { + return false, err +} + +votesToRemove = append(votesToRemove, key) + +return false, nil +}) + if err != nil { + return math.LegacyZeroDec(), nil, fmt.Errorf("error while iterating delegations: %w", err) +} + + / remove all votes from store + for _, key := range votesToRemove { + if err := k.Votes.Remove(ctx, key); err != nil { + return math.LegacyDec{ +}, nil, fmt.Errorf("error while removing vote (%d/%s): %w", key.K1(), key.K2(), err) +} + +} + + / iterate over the validators again to tally their voting power + for _, val := range validators { + if len(val.Vote) == 0 { + continue +} + sharesAfterDeductions := val.DelegatorShares.Sub(val.DelegatorDeductions) + votingPower := sharesAfterDeductions.MulInt(val.BondedTokens).Quo(val.DelegatorShares) + for _, option := range val.Vote { + weight, _ := math.LegacyNewDecFromStr(option.Weight) + subPower := votingPower.Mul(weight) + +results[option.Option] = results[option.Option].Add(subPower) +} + +totalVotingPower = totalVotingPower.Add(votingPower) +} + +return totalVotingPower, results, nil +} + +/ getCurrentValidators fetches all the bonded validators, insert them into currValidators +func (k Keeper) + +getCurrentValidators(ctx context.Context) (map[string]v1.ValidatorGovInfo, error) { + currValidators := make(map[string]v1.ValidatorGovInfo) + if err := 
k.sk.IterateBondedValidatorsByPower(ctx, func(index int64, validator stakingtypes.ValidatorI) (stop bool) { + valBz, err := k.sk.ValidatorAddressCodec().StringToBytes(validator.GetOperator()) + if err != nil { + return false +} + +currValidators[validator.GetOperator()] = v1.NewValidatorGovInfo( + valBz, + validator.GetBondedTokens(), + validator.GetDelegatorShares(), + math.LegacyZeroDec(), + v1.WeightedVoteOptions{ +}, + ) + +return false +}); err != nil { + return nil, err +} + +return currValidators, nil +} + +/ Tally iterates over the votes and updates the tally of a proposal based on the voting power of the +/ voters +func (k Keeper) + +Tally(ctx context.Context, proposal v1.Proposal) (passes, burnDeposits bool, tallyResults v1.TallyResult, err error) { + currValidators, err := k.getCurrentValidators(ctx) + if err != nil { + return false, false, tallyResults, fmt.Errorf("error while getting current validators: %w", err) +} + tallyFn := k.calculateVoteResultsAndVotingPowerFn + totalVotingPower, results, err := tallyFn(ctx, k, proposal, currValidators) + if err != nil { + return false, false, tallyResults, fmt.Errorf("error while calculating tally results: %w", err) +} + +tallyResults = v1.NewTallyResultFromMap(results) + + / TODO: Upgrade the spec to cover all of these cases & remove pseudocode. 
+ / If there is no staked coins, the proposal fails + totalBonded, err := k.sk.TotalBondedTokens(ctx) + if err != nil { + return false, false, tallyResults, err +} + if totalBonded.IsZero() { + return false, false, tallyResults, nil +} + +params, err := k.Params.Get(ctx) + if err != nil { + return false, false, tallyResults, fmt.Errorf("error while getting params: %w", err) +} + + / If there is not enough quorum of votes, the proposal fails + percentVoting := totalVotingPower.Quo(math.LegacyNewDecFromInt(totalBonded)) + +quorum, _ := math.LegacyNewDecFromStr(params.Quorum) + if percentVoting.LT(quorum) { + return false, params.BurnVoteQuorum, tallyResults, nil +} + + / If no one votes (everyone abstains), proposal fails + if totalVotingPower.Sub(results[v1.OptionAbstain]).Equal(math.LegacyZeroDec()) { + return false, false, tallyResults, nil +} + + / If more than 1/3 of voters veto, proposal fails + vetoThreshold, _ := math.LegacyNewDecFromStr(params.VetoThreshold) + if results[v1.OptionNoWithVeto].Quo(totalVotingPower).GT(vetoThreshold) { + return false, params.BurnVoteVeto, tallyResults, nil +} + + / If more than 1/2 of non-abstaining voters vote Yes, proposal passes + / For expedited 2/3 + var thresholdStr string + if proposal.Expedited { + thresholdStr = params.GetExpeditedThreshold() +} + +else { + thresholdStr = params.GetThreshold() +} + +threshold, _ := math.LegacyNewDecFromStr(thresholdStr) + if results[v1.OptionYes].Quo(totalVotingPower.Sub(results[v1.OptionAbstain])).GT(threshold) { + return true, false, tallyResults, nil +} + + / If more than 1/2 of non-abstaining voters vote No, proposal fails + return false, false, tallyResults, nil +} +``` + +This gives developers a more expressive way to handle governance on their appchains. 
+Developers can now build systems with: + +* Quadratic voting +* Time-weighted voting +* Reputation-based voting + +##### Example + +```go expandable +func myCustomVotingFunction( + ctx context.Context, + k Keeper, + proposal v1.Proposal, + validators map[string]v1.ValidatorGovInfo, +) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) { + / ... tally logic +} + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(myCustomVotingFunction), +) +``` + +### Quorum + +Quorum is defined as the minimum percentage of voting power that needs to be +cast on a proposal for the result to be valid. + +### Expedited Proposals + +A proposal can be expedited, making the proposal use a shorter voting duration and a higher tally threshold by default. If an expedited proposal fails to meet the threshold within the shorter voting duration, it is converted to a regular proposal and restarts voting under regular voting conditions. + +#### Threshold + +Threshold is defined as the minimum proportion of `Yes` votes (excluding +`Abstain` votes) for the proposal to be accepted. + +Initially, the threshold is set at 50% of `Yes` votes, excluding `Abstain` +votes. A possibility to veto exists if more than 1/3 of all votes are +`NoWithVeto` votes. Note, both of these values are derived from the `TallyParams` +on-chain parameter, which is modifiable by governance. +This means that proposals are accepted iff: + +* There exist bonded tokens. +* Quorum has been achieved. +* The proportion of `Abstain` votes is less than 1/1 (i.e., not every vote is `Abstain`). +* The proportion of `NoWithVeto` votes is less than 1/3, including + `Abstain` votes.
+* The proportion of `Yes` votes, excluding `Abstain` votes, at the end of + the voting period is greater than 1/2. + +For expedited proposals, by default, the threshold is higher than with a *normal proposal*, namely, 66.7%. + +#### Inheritance + +If a delegator does not vote, it will inherit its validator's vote. + +* If the delegator votes before its validator, it will not inherit from the + validator's vote. +* If the delegator votes after its validator, it will override its validator's + vote with its own. If the proposal is urgent, it is possible + that the vote will close before delegators have a chance to react and + override their validator's vote. This is not a problem, as proposals require more than 2/3 of the total voting power to pass, when tallied at the end of the voting period. Because as little as 1/3 + 1 validation power could collude to censor transactions, non-collusion is already assumed for ranges exceeding this threshold. + +#### Validator’s punishment for non-voting + +At present, validators are not punished for failing to vote. + +#### Governance address + +Later, we may add permissioned keys that could only sign txs from certain modules. For the MVP, the `Governance address` will be the main validator address generated at account creation. This address corresponds to a different PrivKey than the CometBFT PrivKey, which is responsible for signing consensus messages. Validators thus do not have to sign governance transactions with the sensitive CometBFT PrivKey. + +#### Burnable Params + +There are three parameters that define whether the deposit of a proposal should be burned or returned to the depositors. + +* `BurnVoteVeto` burns the proposal deposit if the proposal gets vetoed. +* `BurnVoteQuorum` burns the proposal deposit if the vote does not reach quorum. +* `BurnProposalDepositPrevote` burns the proposal deposit if it does not enter the voting phase. + +> Note: These parameters are modifiable via governance.
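The interaction of these three burn parameters can be sketched as a single decision function. This is a hypothetical illustration only: the real decision is made inside the x/gov `EndBlocker`, and the function name and boolean encoding below are invented for the example.

```go
package main

import "fmt"

// depositBurned illustrates which of the three burn parameters decides the
// fate of a proposal deposit, checked in lifecycle order: the proposal first
// has to reach the voting period, then quorum, and only then can it be vetoed.
func depositBurned(enteredVoting, quorumReached, vetoed, burnPrevote, burnQuorum, burnVeto bool) bool {
	switch {
	case !enteredVoting: // MinDeposit never reached within the deposit period
		return burnPrevote
	case !quorumReached: // voting ended without reaching quorum
		return burnQuorum
	case vetoed: // more than 1/3 NoWithVeto
		return burnVeto
	default: // passed, or rejected without a veto: deposit is refunded
		return false
	}
}

func main() {
	// With only BurnVoteVeto set, a vetoed proposal burns the deposit...
	fmt.Println(depositBurned(true, true, true, false, false, true))
	// ...but a quorum miss refunds it, because BurnVoteQuorum is false.
	fmt.Println(depositBurned(true, false, true, false, false, true))
}
```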
+
+## State
+
+### Constitution
+
+`Constitution` is found in the genesis state. It is a string field intended to be used to describe the purpose of a particular blockchain, and its expected norms. A few examples of how the constitution field can be used:
+
+* define the purpose of the chain, laying a foundation for its future development
+* set expectations for delegators
+* set expectations for validators
+* define the chain's relationship to "meatspace" entities, like a foundation or corporation
+
+Since this is more of a social feature than a technical feature, we'll now get into some items that may have been useful to have in a genesis constitution:
+
+* What limitations on governance exist, if any?
+  * is it okay for the community to slash the wallet of a whale that they no longer feel that they want around? (viz: Juno Proposal 4 and 16)
+  * can governance "socially slash" a validator who is using unapproved MEV? (viz: commonwealth.im/osmosis)
+  * In the event of an economic emergency, what should validators do?
+    * Terra crash of May, 2022, saw validators choose to run a new binary with code that had not been approved by governance, because the governance token had been inflated to nothing.
+* What is the purpose of the chain, specifically?
+  * best example of this is the Cosmos hub, where different founding groups have different interpretations of the purpose of the network.
+
+This genesis entry, "constitution", hasn't been designed for existing chains, which should likely just ratify a constitution using their governance system. Instead, this is for new chains. It will allow validators to have a much clearer idea of purpose and the expectations placed on them while operating their nodes. Likewise, for community members, the constitution will give them some idea of what to expect from both the "chain team" and the validators, respectively.
+
+This constitution is designed to be immutable, and placed only in genesis, though that could change over time by a pull request to the cosmos-sdk that allows for the constitution to be changed by governance. Communities wishing to make amendments to their original constitution should use the governance mechanism and a "signaling proposal" to do exactly that.
+
+**Ideal use scenario for a cosmos chain constitution**
+
+As a chain developer, you decide that you'd like to provide clarity to your key user groups:
+
+* validators
+* token holders
+* developers (yourself)
+
+You use the constitution to immutably store some Markdown in genesis, so that when difficult questions come up, the constitution can provide guidance to the community.
+
+### Proposals
+
+`Proposal` objects are used to tally votes and generally track the proposal's state.
+They contain an array of arbitrary `sdk.Msg`'s which the governance module will attempt
+to resolve and then execute if the proposal passes. `Proposal`'s are identified by a
+unique id and contain a series of timestamps: `submit_time`, `deposit_end_time`,
+`voting_start_time`, and `voting_end_time`, which track the lifecycle of a proposal.
+
+```protobuf
+// Proposal defines the core field members of a governance proposal.
+message Proposal {
+  // id defines the unique id of the proposal.
+  uint64 id = 1;
+
+  // messages are the arbitrary messages to be executed if the proposal passes.
+  repeated google.protobuf.Any messages = 2;
+
+  // status defines the proposal status.
+  ProposalStatus status = 3;
+
+  // final_tally_result is the final tally result of the proposal. When
+  // querying a proposal via gRPC, this field is not populated until the
+  // proposal's voting period has ended.
+  TallyResult final_tally_result = 4;
+
+  // submit_time is the time of proposal submission.
+  google.protobuf.Timestamp submit_time = 5 [(gogoproto.stdtime) = true];
+
+  // deposit_end_time is the end time for deposition.
+  google.protobuf.Timestamp deposit_end_time = 6 [(gogoproto.stdtime) = true];
+
+  // total_deposit is the total deposit on the proposal.
+  repeated cosmos.base.v1beta1.Coin total_deposit = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // voting_start_time is the starting time to vote on a proposal.
+  google.protobuf.Timestamp voting_start_time = 8 [(gogoproto.stdtime) = true];
+
+  // voting_end_time is the end time of voting on a proposal.
+  google.protobuf.Timestamp voting_end_time = 9 [(gogoproto.stdtime) = true];
+
+  // metadata is any arbitrary metadata attached to the proposal.
+  string metadata = 10;
+
+  // title is the title of the proposal
+  //
+  // Since: cosmos-sdk 0.47
+  string title = 11;
+
+  // summary is a short summary of the proposal
+  //
+  // Since: cosmos-sdk 0.47
+  string summary = 12;
+
+  // Proposer is the address of the proposal submitter
+  //
+  // Since: cosmos-sdk 0.47
+  string proposer = 13 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+A proposal will generally require more than just a set of messages to explain its
+purpose; it needs some greater justification and a means for interested participants
+to discuss and debate the proposal.
+In most cases, **it is encouraged to have an off-chain system that supports the on-chain governance process**.
+To accommodate this, a proposal contains a special **`metadata`** field, a string,
+which can be used to add context to the proposal. The `metadata` field allows custom use for networks,
+however, it is expected that the field contains a URL or some form of CID using a system such as
+[IPFS](https://docs.ipfs.io/concepts/content-addressing/). To support the case of
+interoperability across networks, the SDK recommends that the `metadata` represents
+the following `JSON` template:
+
+```json
+{
+  "title": "...",
+  "description": "...",
+  "forum": "...", // a link to the discussion platform (i.e. Discord)
+  "other": "..." 
// any extra data that doesn't correspond to the other fields
+}
+```
+
+This makes it far easier for clients to support multiple networks.
+
+The metadata has a maximum length that is chosen by the app developer, and
+passed into the gov keeper as a config. The default maximum length in the SDK is 255 characters.
+
+#### Writing a module that uses governance
+
+There are many aspects of a chain, or of its individual modules, that you may want to
+control through governance, such as changing various parameters. This is very simple
+to do. First, write out your message types and `MsgServer` implementation. Add an
+`authority` field to the keeper which will be populated in the constructor with the
+governance module account: `govKeeper.GetGovernanceAccount().GetAddress()`. Then for
+the methods in the `msg_server.go`, perform a check on the message that the signer
+matches `authority`. This will prevent any account other than the governance module account from executing that message.
+
+### Parameters and base types
+
+`Parameters` define the rules according to which votes are run. There can only
+be one active parameter set at any given time. If governance wants to change a
+parameter set, either to modify a value or add/remove a parameter field, a new
+parameter set has to be created and the previous one rendered inactive.
+
+#### DepositParams
+
+```protobuf
+// DepositParams defines the params for deposits on governance proposals.
+message DepositParams {
+  // Minimum deposit for a proposal to enter voting period.
+  repeated cosmos.base.v1beta1.Coin min_deposit = 1
+      [(gogoproto.nullable) = false, (gogoproto.jsontag) = "min_deposit,omitempty"];
+
+  // Maximum period for Atom holders to deposit on a proposal. Initial value: 2
+  // months.
+  google.protobuf.Duration max_deposit_period = 2
+      [(gogoproto.stdduration) = true, (gogoproto.jsontag) = "max_deposit_period,omitempty"];
+}
+```
+
+#### VotingParams
+
+```protobuf
+// VotingParams defines the params for voting on governance proposals.
+message VotingParams {
+  // Duration of the voting period.
+  google.protobuf.Duration voting_period = 1 [(gogoproto.stdduration) = true];
+}
+```
+
+#### TallyParams
+
+```protobuf
+// TallyParams defines the params for tallying votes on governance proposals.
+message TallyParams {
+  // Minimum percentage of total stake needed to vote for a result to be
+  // considered valid.
+  string quorum = 1 [(cosmos_proto.scalar) = "cosmos.Dec"];
+
+  // Minimum proportion of Yes votes for proposal to pass. Default value: 0.5.
+  string threshold = 2 [(cosmos_proto.scalar) = "cosmos.Dec"];
+
+  // Minimum value of Veto votes to Total votes ratio for proposal to be
+  // vetoed. Default value: 1/3.
+  string veto_threshold = 3 [(cosmos_proto.scalar) = "cosmos.Dec"];
+}
+```
+
+Parameters are stored in a global `GlobalParams` KVStore.
+
+Additionally, we introduce some basic types:
+
+```go expandable
+type Vote byte
+
+const (
+	VoteYes        = 0x1
+	VoteNo         = 0x2
+	VoteNoWithVeto = 0x3
+	VoteAbstain    = 0x4
+)
+
+type ProposalType string
+
+const (
+	ProposalTypePlainText       = "Text"
+	ProposalTypeSoftwareUpgrade = "SoftwareUpgrade"
+)
+
+type ProposalStatus byte
+
+const (
+	StatusNil           ProposalStatus = 0x00
+	StatusDepositPeriod ProposalStatus = 0x01 // Proposal is submitted. Participants can deposit on it but not vote
+	StatusVotingPeriod  ProposalStatus = 0x02 // MinDeposit is reached, participants can vote
+	StatusPassed        ProposalStatus = 0x03 // Proposal passed and successfully executed
+	StatusRejected      ProposalStatus = 0x04 // Proposal has been rejected
+	StatusFailed        ProposalStatus = 0x05 // Proposal passed but failed execution
+)
+```
+
+### Deposit
+
+```protobuf
+// Deposit defines an amount deposited by an account address to an active
+// proposal.
+message Deposit {
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1;
+
+  // depositor defines the deposit addresses from the proposals.
+  string depositor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // amount to be deposited by depositor.
+  repeated cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+### ValidatorGovInfo
+
+This type is used in a temp map when tallying.
+
+```go
+type ValidatorGovInfo struct {
+	Minus sdk.Dec
+	Vote  Vote
+}
+```
+
+## Stores
+
+
+Stores are KVStores in the multi-store. The key to find the store is the first parameter in the list.
+
+
+We will use one KVStore `Governance` to store four mappings:
+
+* A mapping from `proposalID|'proposal'` to `Proposal`.
+* A mapping from `proposalID|'addresses'|address` to `Vote`. This mapping allows
+  us to query all addresses that voted on the proposal along with their vote by
+  doing a range query on `proposalID:addresses`.
+* A mapping from `ParamsKey|'Params'` to `Params`. This map allows querying all
+  x/gov params.
+* A mapping from `VotingPeriodProposalKeyPrefix|proposalID` to a single byte. This allows
+  us to know if a proposal is in the voting period or not with very low gas cost.
+
+For pseudocode purposes, here are the two functions we will use to read or write in stores:
+
+* `load(StoreKey, Key)`: Retrieve item stored at key `Key` in store found at key `StoreKey` in the multistore
+* `store(StoreKey, Key, Value)`: Write value `Value` at key `Key` in store found at key `StoreKey` in the multistore
+
+### Proposal Processing Queue
+
+**Store:**
+
+* `ProposalProcessingQueue`: A queue `queue[proposalID]` containing all the
+  `ProposalIDs` of proposals that reached `MinDeposit`. During each `EndBlock`,
+  all the proposals that have reached the end of their voting period are processed.
+  To process a finished proposal, the application tallies the votes, computes the
+  votes of each validator and checks if every validator in the validator set has
+  voted. If the proposal is accepted, deposits are refunded. 
Finally, the proposal
+  content `Handler` is executed.
+
+And the pseudocode for the `ProposalProcessingQueue`:
+
+```go expandable
+in EndBlock do
+
+  for finishedProposalID in GetAllFinishedProposalIDs(block.Time)
+    proposal = load(Governance, <proposalID|'proposal'>) // proposal is a const key
+
+    validators = Keeper.getAllValidators()
+    tmpValMap := map(sdk.AccAddress)ValidatorGovInfo
+
+    // Initiate mapping at 0. This is the amount of shares of the validator's vote that will be overridden by their delegator's votes
+    for each validator in validators
+      tmpValMap(validator.OperatorAddr).Minus = 0
+
+    // Tally
+    voterIterator = rangeQuery(Governance, <proposalID|'addresses'>) // return all the addresses that voted on the proposal
+    for each (voterAddress, vote) in voterIterator
+      delegations = stakingKeeper.getDelegations(voterAddress) // get all delegations for current voter
+
+      for each delegation in delegations
+        // make sure delegation.Shares does NOT include shares being unbonded
+        tmpValMap(delegation.ValidatorAddr).Minus += delegation.Shares
+        proposal.updateTally(vote, delegation.Shares)
+
+      _, isVal = stakingKeeper.getValidator(voterAddress)
+      if (isVal)
+        tmpValMap(voterAddress).Vote = vote
+
+    tallyingParam = load(GlobalParams, 'TallyingParam')
+
+    // Update tally if validator voted
+    for each validator in validators
+      if tmpValMap(validator).HasVoted
+        proposal.updateTally(tmpValMap(validator).Vote, (validator.TotalShares - tmpValMap(validator).Minus))
+
+    // Check if proposal is accepted or rejected
+    totalNonAbstain := proposal.YesVotes + proposal.NoVotes + proposal.NoWithVetoVotes
+    if (proposal.Votes.YesVotes/totalNonAbstain > tallyingParam.Threshold AND proposal.Votes.NoWithVetoVotes/totalNonAbstain < tallyingParam.Veto)
+      // proposal was accepted at the end of the voting period
+      // refund deposits (non-voters already punished)
+      for each (amount, depositor) in proposal.Deposits
+        depositor.AtomBalance += amount
+
+      stateWriter, err := proposal.Handler()
+      if err != nil
+        // proposal passed but failed during state execution
+        proposal.CurrentStatus = ProposalStatusFailed
+      else
+        // proposal passed and state is persisted
+        proposal.CurrentStatus = ProposalStatusAccepted
+        stateWriter.save()
+    else
+      // proposal was rejected
+      proposal.CurrentStatus = ProposalStatusRejected
+
+    store(Governance, <proposalID|'proposal'>, proposal)
+```
+
+### Legacy Proposal
+
+
+Legacy proposals are deprecated. Use the new proposal flow by granting the governance module the right to execute the message.
+
+
+A legacy proposal is the old implementation of governance proposal.
+Unlike proposals, which can contain any messages, a legacy proposal allows the submission of a set of pre-defined proposals.
+These proposals are defined by their types and handled by handlers that are registered in the gov v1beta1 router.
+
+More information on how to submit proposals is available in the [client section](#client).
+
+## Messages
+
+### Proposal Submission
+
+Proposals can be submitted by any account via a `MsgSubmitProposal` transaction.
+
+```protobuf
+// MsgSubmitProposal defines an sdk.Msg type that supports submitting arbitrary
+// proposal Content.
+message MsgSubmitProposal {
+  option (cosmos.msg.v1.signer) = "proposer";
+  option (amino.name) = "cosmos-sdk/v1/MsgSubmitProposal";
+
+  // messages are the arbitrary messages to be executed if proposal passes.
+  repeated google.protobuf.Any messages = 1;
+
+  // initial_deposit is the deposit value that must be paid at proposal submission.
+  repeated cosmos.base.v1beta1.Coin initial_deposit = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // proposer is the account address of the proposer.
+  string proposer = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // metadata is any arbitrary metadata attached to the proposal.
+  string metadata = 4;
+
+  // title is the title of the proposal.
+  //
+  // Since: cosmos-sdk 0.47
+  string title = 5;
+
+  // summary is the summary of the proposal
+  //
+  // Since: cosmos-sdk 0.47
+  string summary = 6;
+}
+```
+
+All `sdk.Msgs` passed into the `messages` field of a `MsgSubmitProposal` message
+must be registered in the app's `MsgServiceRouter`. Each of these messages must
+have exactly one signer, namely the gov module account. Finally, the metadata length
+must not be larger than the `maxMetadataLen` config passed into the gov keeper.
+The `initialDeposit` must be strictly positive and conform to the accepted denom of the `MinDeposit` param.
+
+**State modifications:**
+
+* Generate new `proposalID`
+* Create new `Proposal`
+* Initialise `Proposal`'s attributes
+* Decrease balance of sender by `InitialDeposit`
+* If `MinDeposit` is reached:
+  * Push `proposalID` in `ProposalProcessingQueue`
+* Transfer `InitialDeposit` from the `Proposer` to the governance `ModuleAccount`
+
+### Deposit
+
+Once a proposal is submitted, if `Proposal.TotalDeposit < ActiveParam.MinDeposit`, Atom holders can send
+`MsgDeposit` transactions to increase the proposal's deposit.
+
+A deposit is accepted iff:
+
+* The proposal exists
+* The proposal is not in the voting period
+* The deposited coins conform to the accepted denom from the `MinDeposit` param
+
+```protobuf
+// MsgDeposit defines a message to submit a deposit to an existing proposal.
+message MsgDeposit {
+  option (cosmos.msg.v1.signer) = "depositor";
+  option (amino.name) = "cosmos-sdk/v1/MsgDeposit";
+
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1 [(gogoproto.jsontag) = "proposal_id", (amino.dont_omitempty) = true];
+
+  // depositor defines the deposit addresses from the proposals.
+  string depositor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // amount to be deposited by depositor.
+  repeated cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+**State modifications:**
+
+* Decrease balance of sender by `deposit`
+* Add `deposit` of sender in `proposal.Deposits`
+* Increase `proposal.TotalDeposit` by sender's `deposit`
+* If `MinDeposit` is reached:
+  * Push `proposalID` in `ProposalProcessingQueue`
+* Transfer `Deposit` from the `depositor` to the governance `ModuleAccount`
+
+### Vote
+
+Once `ActiveParam.MinDeposit` is reached, voting period starts. From there,
+bonded Atom holders are able to send `MsgVote` transactions to cast their
+vote on the proposal.
+
+```protobuf
+// MsgVote defines a message to cast a vote.
+message MsgVote {
+  option (cosmos.msg.v1.signer) = "voter";
+  option (amino.name) = "cosmos-sdk/v1/MsgVote";
+
+  // proposal_id defines the unique id of the proposal.
+  uint64 proposal_id = 1 [(gogoproto.jsontag) = "proposal_id", (amino.dont_omitempty) = true];
+
+  // voter is the voter address for the proposal.
+  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // option defines the vote option.
+  VoteOption option = 3;
+
+  // metadata is any arbitrary metadata attached to the Vote.
+  string metadata = 4;
+}
+```
+
+**State modifications:**
+
+* Record `Vote` of sender
+
+
+Gas cost for this message has to take into account the future tallying of the vote in EndBlocker.
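The deposit acceptance rules listed above can be sketched as a self-contained Go check. The types and the `minDeposit` value here are hypothetical stand-ins for illustration, not the actual keeper code:

```go
package main

import "fmt"

// Minimal stand-ins for the x/gov types; illustrative only.
type Coin struct {
	Denom  string
	Amount int64
}

type Proposal struct {
	ID             uint64
	InVotingPeriod bool
}

// Hypothetical MinDeposit parameter value.
var minDeposit = Coin{Denom: "stake", Amount: 10_000_000}

// acceptDeposit mirrors the rules above: the proposal must exist,
// must not already be in its voting period, and the deposited coin
// must use an accepted denom with a positive amount.
func acceptDeposit(p *Proposal, deposit Coin) error {
	if p == nil {
		return fmt.Errorf("unknown proposal")
	}
	if p.InVotingPeriod {
		return fmt.Errorf("proposal %d is already in voting period", p.ID)
	}
	if deposit.Denom != minDeposit.Denom || deposit.Amount <= 0 {
		return fmt.Errorf("invalid deposit denom %q", deposit.Denom)
	}
	return nil
}

func main() {
	p := &Proposal{ID: 1}
	fmt.Println(acceptDeposit(p, Coin{Denom: "stake", Amount: 100})) // accepted
	fmt.Println(acceptDeposit(p, Coin{Denom: "atom", Amount: 100}))  // wrong denom
}
```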
+ + +## Events + +The governance module emits the following events: + +### EndBlocker + +| Type | Attribute Key | Attribute Value | +| ------------------ | ---------------- | ---------------- | +| inactive\_proposal | proposal\_id | `{proposalID}` | +| inactive\_proposal | proposal\_result | `{proposalResult}` | +| active\_proposal | proposal\_id | `{proposalID}` | +| active\_proposal | proposal\_result | `{proposalResult}` | + +### Handlers + +#### MsgSubmitProposal + +| Type | Attribute Key | Attribute Value | +| --------------------- | --------------------- | ---------------- | +| submit\_proposal | proposal\_id | `{proposalID}` | +| submit\_proposal \[0] | voting\_period\_start | `{proposalID}` | +| proposal\_deposit | amount | `{depositAmount}` | +| proposal\_deposit | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | submit\_proposal | +| message | sender | `{senderAddress}` | + +* \[0] Event only emitted if the voting period starts during the submission. 
+ +#### MsgVote + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------- | +| proposal\_vote | option | `{voteOption}` | +| proposal\_vote | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | vote | +| message | sender | `{senderAddress}` | + +#### MsgVoteWeighted + +| Type | Attribute Key | Attribute Value | +| -------------- | ------------- | --------------------- | +| proposal\_vote | option | `{weightedVoteOptions}` | +| proposal\_vote | proposal\_id | `{proposalID}` | +| message | module | governance | +| message | action | vote | +| message | sender | `{senderAddress}` | + +#### MsgDeposit + +| Type | Attribute Key | Attribute Value | +| ---------------------- | --------------------- | --------------- | +| proposal\_deposit | amount | `{depositAmount}` | +| proposal\_deposit | proposal\_id | `{proposalID}` | +| proposal\_deposit \[0] | voting\_period\_start | `{proposalID}` | +| message | module | governance | +| message | action | deposit | +| message | sender | `{senderAddress}` | + +* \[0] Event only emitted if the voting period starts during the submission. 
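The `MsgVoteWeighted` events above carry a set of weighted options whose weights must sum to 1. A hedged, self-contained sketch of parsing the CLI-style `yes=0.5,no=0.5` form (a hypothetical helper, not the SDK's actual parser):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseWeightedOptions parses a weighted vote string such as
// "yes=0.5,no=0.5" into an option->weight map and checks that the
// weights sum to 1. Illustrative only; the SDK tracks weights as
// decimal strings, not float64.
func parseWeightedOptions(s string) (map[string]float64, error) {
	opts := make(map[string]float64)
	total := 0.0
	for _, part := range strings.Split(s, ",") {
		kv := strings.SplitN(part, "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("malformed option %q", part)
		}
		w, err := strconv.ParseFloat(kv[1], 64)
		if err != nil {
			return nil, fmt.Errorf("bad weight in %q: %w", part, err)
		}
		opts[kv[0]] = w
		total += w
	}
	if total != 1.0 {
		return nil, fmt.Errorf("weights sum to %v, expected 1", total)
	}
	return opts, nil
}

func main() {
	opts, err := parseWeightedOptions("yes=0.5,no=0.5")
	fmt.Println(opts, err)
}
```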
+
+## Parameters
+
+The governance module contains the following parameters:
+
+| Key                              | Type             | Example                                    |
+| -------------------------------- | ---------------- | ------------------------------------------ |
+| min\_deposit                     | array (coins)    | \[`{"denom":"uatom","amount":"10000000"}`] |
+| max\_deposit\_period             | string (time ns) | "172800000000000" (172800s)                |
+| voting\_period                   | string (time ns) | "172800000000000" (172800s)                |
+| quorum                           | string (dec)     | "0.334000000000000000"                     |
+| threshold                        | string (dec)     | "0.500000000000000000"                     |
+| veto                             | string (dec)     | "0.334000000000000000"                     |
+| expedited\_threshold             | string (dec)     | "0.667000000000000000"                     |
+| expedited\_voting\_period        | string (time ns) | "86400000000000" (86400s)                  |
+| expedited\_min\_deposit          | array (coins)    | \[`{"denom":"uatom","amount":"50000000"}`] |
+| burn\_proposal\_deposit\_prevote | bool             | false                                      |
+| burn\_vote\_quorum               | bool             | false                                      |
+| burn\_vote\_veto                 | bool             | true                                       |
+| min\_initial\_deposit\_ratio     | string           | "0.1"                                      |
+
+**NOTE**: The governance module contains parameters that are objects, unlike other
+modules. If only a subset of parameters needs to be changed, only those need
+to be included, not the entire parameter object structure.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `gov` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `gov` state.
+
+```bash
+simd query gov --help
+```
+
+##### deposit
+
+The `deposit` command allows users to query a deposit for a given proposal from a given depositor.
+
+```bash
+simd query gov deposit [proposal-id] [depositor-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query gov deposit 1 cosmos1..
+```
+
+Example Output:
+
+```bash
+amount:
+- amount: "100"
+  denom: stake
+depositor: cosmos1..
+proposal_id: "1"
+```
+
+##### deposits
+
+The `deposits` command allows users to query all deposits for a given proposal.
+ +```bash +simd query gov deposits [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov deposits 1 +``` + +Example Output: + +```bash +deposits: +- amount: + - amount: "100" + denom: stake + depositor: cosmos1.. + proposal_id: "1" +pagination: + next_key: null + total: "0" +``` + +##### param + +The `param` command allows users to query a given parameter for the `gov` module. + +```bash +simd query gov param [param-type] [flags] +``` + +Example: + +```bash +simd query gov param voting +``` + +Example Output: + +```bash +voting_period: "172800000000000" +``` + +##### params + +The `params` command allows users to query all parameters for the `gov` module. + +```bash +simd query gov params [flags] +``` + +Example: + +```bash +simd query gov params +``` + +Example Output: + +```bash expandable +deposit_params: + max_deposit_period: 172800s + min_deposit: + - amount: "10000000" + denom: stake +params: + expedited_min_deposit: + - amount: "50000000" + denom: stake + expedited_threshold: "0.670000000000000000" + expedited_voting_period: 86400s + max_deposit_period: 172800s + min_deposit: + - amount: "10000000" + denom: stake + min_initial_deposit_ratio: "0.000000000000000000" + proposal_cancel_burn_rate: "0.500000000000000000" + quorum: "0.334000000000000000" + threshold: "0.500000000000000000" + veto_threshold: "0.334000000000000000" + voting_period: 172800s +tally_params: + quorum: "0.334000000000000000" + threshold: "0.500000000000000000" + veto_threshold: "0.334000000000000000" +voting_params: + voting_period: 172800s +``` + +##### proposal + +The `proposal` command allows users to query a given proposal. 
+ +```bash +simd query gov proposal [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposal 1 +``` + +Example Output: + +```bash expandable +deposit_end_time: "2022-03-30T11:50:20.819676256Z" +final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" +id: "1" +messages: +- '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. +metadata: AQ== +status: PROPOSAL_STATUS_DEPOSIT_PERIOD +submit_time: "2022-03-28T11:50:20.819676256Z" +total_deposit: +- amount: "10" + denom: stake +voting_end_time: null +voting_start_time: null +``` + +##### proposals + +The `proposals` command allows users to query all proposals with optional filters. + +```bash +simd query gov proposals [flags] +``` + +Example: + +```bash +simd query gov proposals +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +proposals: +- deposit_end_time: "2022-03-30T11:50:20.819676256Z" + final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" + id: "1" + messages: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + metadata: AQ== + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2022-03-28T11:50:20.819676256Z" + total_deposit: + - amount: "10" + denom: stake + voting_end_time: null + voting_start_time: null +- deposit_end_time: "2022-03-30T14:02:41.165025015Z" + final_tally_result: + abstain_count: "0" + no_count: "0" + no_with_veto_count: "0" + yes_count: "0" + id: "2" + messages: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "10" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. 
+ metadata: AQ== + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2022-03-28T14:02:41.165025015Z" + total_deposit: + - amount: "10" + denom: stake + voting_end_time: null + voting_start_time: null +``` + +##### proposer + +The `proposer` command allows users to query the proposer for a given proposal. + +```bash +simd query gov proposer [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposer 1 +``` + +Example Output: + +```bash +proposal_id: "1" +proposer: cosmos1.. +``` + +##### tally + +The `tally` command allows users to query the tally of a given proposal vote. + +```bash +simd query gov tally [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov tally 1 +``` + +Example Output: + +```bash +abstain: "0" +"no": "0" +no_with_veto: "0" +"yes": "1" +``` + +##### vote + +The `vote` command allows users to query a vote for a given proposal. + +```bash +simd query gov vote [proposal-id] [voter-addr] [flags] +``` + +Example: + +```bash +simd query gov vote 1 cosmos1.. +``` + +Example Output: + +```bash +option: VOTE_OPTION_YES +options: +- option: VOTE_OPTION_YES + weight: "1.000000000000000000" +proposal_id: "1" +voter: cosmos1.. +``` + +##### votes + +The `votes` command allows users to query all votes for a given proposal. + +```bash +simd query gov votes [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov votes 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "0" +votes: +- option: VOTE_OPTION_YES + options: + - option: VOTE_OPTION_YES + weight: "1.000000000000000000" + proposal_id: "1" + voter: cosmos1.. +``` + +#### Transactions + +The `tx` commands allow users to interact with the `gov` module. + +```bash +simd tx gov --help +``` + +##### deposit + +The `deposit` command allows users to deposit tokens for a given proposal. + +```bash +simd tx gov deposit [proposal-id] [deposit] [flags] +``` + +Example: + +```bash +simd tx gov deposit 1 10000000stake --from cosmos1.. 
+```
+
+##### draft-proposal
+
+The `draft-proposal` command allows users to draft any type of proposal.
+The command returns a `draft_proposal.json`, to be used by `submit-proposal` after being completed.
+The `draft_metadata.json` is meant to be uploaded to [IPFS](#metadata).
+
+```bash
+simd tx gov draft-proposal
+```
+
+##### submit-proposal
+
+The `submit-proposal` command allows users to submit a governance proposal along with some messages and metadata.
+Messages, metadata and deposit are defined in a JSON file.
+
+```bash
+simd tx gov submit-proposal [path-to-proposal-json] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-proposal /path/to/proposal.json --from cosmos1..
+```
+
+where `proposal.json` contains:
+
+```json expandable
+{
+  "messages": [
+    {
+      "@type": "/cosmos.bank.v1beta1.MsgSend",
+      "from_address": "cosmos1...", // The gov module address
+      "to_address": "cosmos1...",
+      "amount":[{
+        "denom": "stake",
+        "amount": "10"}]
+    }
+  ],
+  "metadata": "AQ==",
+  "deposit": "10stake",
+  "title": "Proposal Title",
+  "summary": "Proposal Summary"
+}
+```
+
+
+By default the metadata, summary and title are each limited to 255 characters; this can be overridden by the application developer.
+
+
+
+When metadata is not specified, the title is limited to 255 characters and the summary to 40x the title length.
+
+
+##### submit-legacy-proposal
+
+The `submit-legacy-proposal` command allows users to submit a governance legacy proposal along with an initial deposit.
+
+```bash
+simd tx gov submit-legacy-proposal [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-legacy-proposal --title="Test Proposal" --description="testing" --type="Text" --deposit="100000000stake" --from cosmos1..
+```
+
+Example (`param-change`):
+
+```bash
+simd tx gov submit-legacy-proposal param-change proposal.json --from cosmos1..
+```
+
+```json expandable
+{
+  "title": "Test Proposal",
+  "description": "testing, testing, 1, 2, 3",
+  "changes": [
+    {
+      "subspace": "staking",
+      "key": "MaxValidators",
+      "value": 100
+    }
+  ],
+  "deposit": "10000000stake"
+}
+```
+
+##### cancel-proposal
+
+Once a proposal is canceled, `deposits * proposal_cancel_ratio` of the proposal's deposits will be burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, that portion is burned. The remaining deposits are returned to the depositors.
+
+```bash
+simd tx gov cancel-proposal [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov cancel-proposal 1 --from cosmos1...
+```
+
+##### vote
+
+The `vote` command allows users to submit a vote for a given governance proposal.
+
+```bash
+simd tx gov vote [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov vote 1 yes --from cosmos1..
+```
+
+##### weighted-vote
+
+The `weighted-vote` command allows users to submit a weighted vote for a given governance proposal.
+
+```bash
+simd tx gov weighted-vote [proposal-id] [weighted-options] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov weighted-vote 1 yes=0.5,no=0.5 --from cosmos1..
+```
+
+### gRPC
+
+A user can query the `gov` module using gRPC endpoints.
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query a given proposal.
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposalId": "1", + "content": {"@type":"/cosmos.gov.v1beta1.TextProposal","description":"testing, testing, 1, 2, 3","title":"Test Proposal"}, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2021-09-16T19:40:08.712440474Z", + "depositEndTime": "2021-09-18T19:40:08.712440474Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "votingStartTime": "2021-09-16T19:40:08.712440474Z", + "votingEndTime": "2021-09-18T19:40:08.712440474Z", + "title": "Test Proposal", + "summary": "testing, testing, 1, 2, 3" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "id": "1", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Test Proposal", + "summary": "testing, testing, 1, 2, 3" + } +} +``` + +#### Proposals + +The `Proposals` endpoint allows users to query all proposals with optional filters. 
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Proposals +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposalId": "1", + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": "2022-03-30T14:25:26.644857113Z" + }, + { + "proposalId": "2", + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "finalTallyResult": { + "yes": "0", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + }, + "submitTime": "2022-03-28T14:02:41.165025015Z", + "depositEndTime": "2022-03-30T14:02:41.165025015Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "votingStartTime": "0001-01-01T00:00:00Z", + "votingEndTime": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "total": "2" + } +} + +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Proposals +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.gov.v1.Query/Proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T11:50:20.819676256Z", + "depositEndTime": "2022-03-30T11:50:20.819676256Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "votingStartTime": "2022-03-28T14:25:26.644857113Z", + "votingEndTime": 
"2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + }, + { + "id": "2", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "finalTallyResult": { + "yesCount": "0", + "abstainCount": "0", + "noCount": "0", + "noWithVetoCount": "0" + }, + "submitTime": "2022-03-28T14:02:41.165025015Z", + "depositEndTime": "2022-03-30T14:02:41.165025015Z", + "totalDeposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### Vote + +The `Vote` endpoint allows users to query a vote for a given proposal. + +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Vote +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Vote +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1000000000000000000" + } + ] + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Vote +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Vote +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +#### Votes + +The `Votes` endpoint allows users to query all votes for a given proposal. 
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Votes +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1000000000000000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Votes +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### Params + +The `Params` endpoint allows users to query all parameters for the `gov` module. + +{/* TODO: #10197 Querying governance params outputs nil values */} + +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"params_type":"voting"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Params +``` + +Example Output: + +```bash expandable +{ + "votingParams": { + "votingPeriod": "172800s" + }, + "depositParams": { + "maxDepositPeriod": "0s" + }, + "tallyParams": { + "quorum": "MA==", + "threshold": "MA==", + "vetoThreshold": "MA==" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"params_type":"voting"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/Params +``` + +Example Output: + +```bash +{ + "votingParams": { + "votingPeriod": "172800s" + } +} +``` + +#### Deposit + +The `Deposit` endpoint allows users to query a deposit for a given proposal from a given depositor. 
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+    localhost:9090 \
+    cosmos.gov.v1.Query/Deposit
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+#### Deposits
+
+The `Deposits` endpoint allows users to query all deposits for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1"}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposits": [
+    {
+      "proposalId": "1",
+      "depositor": "cosmos1..",
+      "amount": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1"}' \
+    localhost:9090 \
+    cosmos.gov.v1.Query/Deposits
+```
+
+Example Output:
+
+```bash expandable
+{
+  "deposits": [
+    {
+      "proposalId": "1",
+      "depositor": "cosmos1..",
+      "amount": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+#### TallyResult
+
+The `TallyResult` endpoint allows users to query the tally of a given proposal.
+ +Using legacy v1beta1: + +```bash +cosmos.gov.v1beta1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +Using v1: + +```bash +cosmos.gov.v1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +### REST + +A user can query the `gov` module using REST endpoints. + +#### proposal + +The `proposals` endpoint allows users to query a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposal_id": "1", + "content": null, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "id": "1", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": 
"PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } +} +``` + +#### proposals + +The `proposals` endpoint also allows users to query all proposals with optional filters. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposal_id": "1", + "content": null, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z" + }, + { + "proposal_id": "2", + "content": null, + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2022-03-28T14:02:41.165025015Z", + "deposit_end_time": "2022-03-30T14:02:41.165025015Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "voting_start_time": "0001-01-01T00:00:00Z", + "voting_end_time": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals +``` + +Example: + +```bash +curl 
localhost:1317/cosmos/gov/v1/proposals +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T11:50:20.819676256Z", + "deposit_end_time": "2022-03-30T11:50:20.819676256Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000010" + } + ], + "voting_start_time": "2022-03-28T14:25:26.644857113Z", + "voting_end_time": "2022-03-30T14:25:26.644857113Z", + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + }, + { + "id": "2", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10" + } + ] + } + ], + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes_count": "0", + "abstain_count": "0", + "no_count": "0", + "no_with_veto_count": "0" + }, + "submit_time": "2022-03-28T14:02:41.165025015Z", + "deposit_end_time": "2022-03-30T14:02:41.165025015Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10" + } + ], + "voting_start_time": null, + "voting_end_time": null, + "metadata": "AQ==", + "title": "Proposal Title", + "summary": "Proposal Summary" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### voter vote + +The `votes` endpoint allows users to query a vote for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/votes/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ], + "metadata": "" + } +} +``` + +#### votes + +The `votes` endpoint allows users to query all votes for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/votes +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ], + "metadata": "" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### params + +The `params` endpoint allows users to query all parameters for the `gov` module. 
+ +{/* TODO: #10197 Querying governance params outputs nil values */} + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/params/voting +``` + +Example Output: + +```bash expandable +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/params/voting +``` + +Example Output: + +```bash expandable +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +#### deposits + +The `deposits` endpoint allows users to query a deposit for a given proposal from a given depositor. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/deposits/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +#### proposal deposits + +The `deposits` endpoint allows users to query all deposits for a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits +``` + +Example Output: + +```bash expandable +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/deposits +``` + +Example Output: + +```bash expandable +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### tally + +The `tally` endpoint allows users to query the tally of a given proposal. + +Using legacy v1beta1: + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` + +Using v1: + +```bash +/cosmos/gov/v1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` + +## Metadata + +The gov module has two locations for metadata where users can provide further context about the on-chain actions they are taking. 
By default, all metadata fields are limited to 255 characters; metadata can be stored in JSON format, either on-chain or off-chain, depending on the amount of data required. Here we provide a recommendation for the JSON structure and where the data should be stored. There are two important factors in making these recommendations. First, the gov and group modules should be consistent with one another; note that the number of proposals made by all groups may be quite large. Second, client applications such as block explorers and governance interfaces should have confidence in the consistency of metadata structure across chains.
+
+### Proposal
+
+Location: off-chain as json object stored on IPFS (mirrors [group proposal](/docs/sdk/v0.53/documentation/module-system/group#metadata))
+
+```json
+{
+  "title": "",
+  "authors": [""],
+  "summary": "",
+  "details": "",
+  "proposal_forum_url": "",
+  "vote_option_context": ""
+}
+```
+
+
+The `authors` field is an array of strings; this allows multiple authors to be listed in the metadata.
+In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility.
+
+
+### Vote
+
+Location: on-chain as json within 255 character limit (mirrors [group vote](/docs/sdk/v0.53/documentation/module-system/group#metadata))
+
+```json
+{
+  "justification": ""
+}
+```
+
+## Future Improvements
+
+The current documentation only describes the minimum viable product for the
+governance module. Future improvements may include:
+
+* **`BountyProposals`:** If accepted, a `BountyProposal` creates an open
+  bounty. The `BountyProposal` specifies how many Atoms will be given upon
+  completion. These Atoms will be taken from the `reserve pool`. After a
+  `BountyProposal` is accepted by governance, anybody can submit a
+  `SoftwareUpgradeProposal` with the code to claim the bounty.
Note that once a
+  `BountyProposal` is accepted, the corresponding funds in the `reserve pool`
+  are locked so that payment can always be honored. In order to link a
+  `SoftwareUpgradeProposal` to an open bounty, the submitter of the
+  `SoftwareUpgradeProposal` will use the `Proposal.LinkedProposal` attribute.
+  If a `SoftwareUpgradeProposal` linked to an open bounty is accepted by
+  governance, the funds that were reserved are automatically transferred to the
+  submitter.
+* **Complex delegation:** Delegators could choose representatives other than
+  their validators. Ultimately, the chain of representatives would always end
+  up at a validator, but delegators could inherit the vote of their chosen
+  representative before they inherit the vote of their validator. In other
+  words, they would only inherit the vote of their validator if their other
+  appointed representative did not vote.
+* **Better process for proposal review:** There would be two parts to
+  `proposal.Deposit`, one for anti-spam (same as in MVP) and another one to
+  reward third-party auditors.
diff --git a/docs/sdk/v0.53/documentation/module-system/group.mdx b/docs/sdk/v0.53/documentation/module-system/group.mdx
new file mode 100644
index 00000000..76197ea8
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/group.mdx
@@ -0,0 +1,2402 @@
+---
+title: '`x/group`'
+description: The following documents specify the group module.
+---
+
+## Abstract
+
+The following documents specify the group module.
+
+This module allows the creation and management of on-chain multisig accounts and enables voting for message execution based on configurable decision policies.
+ +## Contents + +* [Concepts](#concepts) + * [Group](#group) + * [Group Policy](#group-policy) + * [Decision Policy](#decision-policy) + * [Proposal](#proposal) + * [Pruning](#pruning) +* [State](#state) + * [Group Table](#group-table) + * [Group Member Table](#group-member-table) + * [Group Policy Table](#group-policy-table) + * [Proposal Table](#proposal-table) + * [Vote Table](#vote-table) +* [Msg Service](#msg-service) + * [Msg/CreateGroup](#msgcreategroup) + * [Msg/UpdateGroupMembers](#msgupdategroupmembers) + * [Msg/UpdateGroupAdmin](#msgupdategroupadmin) + * [Msg/UpdateGroupMetadata](#msgupdategroupmetadata) + * [Msg/CreateGroupPolicy](#msgcreategrouppolicy) + * [Msg/CreateGroupWithPolicy](#msgcreategroupwithpolicy) + * [Msg/UpdateGroupPolicyAdmin](#msgupdategrouppolicyadmin) + * [Msg/UpdateGroupPolicyDecisionPolicy](#msgupdategrouppolicydecisionpolicy) + * [Msg/UpdateGroupPolicyMetadata](#msgupdategrouppolicymetadata) + * [Msg/SubmitProposal](#msgsubmitproposal) + * [Msg/WithdrawProposal](#msgwithdrawproposal) + * [Msg/Vote](#msgvote) + * [Msg/Exec](#msgexec) + * [Msg/LeaveGroup](#msgleavegroup) +* [Events](#events) + * [EventCreateGroup](#eventcreategroup) + * [EventUpdateGroup](#eventupdategroup) + * [EventCreateGroupPolicy](#eventcreategrouppolicy) + * [EventUpdateGroupPolicy](#eventupdategrouppolicy) + * [EventCreateProposal](#eventcreateproposal) + * [EventWithdrawProposal](#eventwithdrawproposal) + * [EventVote](#eventvote) + * [EventExec](#eventexec) + * [EventLeaveGroup](#eventleavegroup) + * [EventProposalPruned](#eventproposalpruned) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + * [REST](#rest) +* [Metadata](#metadata) + +## Concepts + +### Group + +A group is simply an aggregation of accounts with associated weights. It is not +an account and doesn't have a balance. It doesn't in and of itself have any +sort of voting or decision weight. 
It does have an "administrator" which has
+the ability to add, remove and update members in the group. Note that a
+group policy account could be an administrator of a group, and that the
+administrator doesn't necessarily have to be a member of the group.
+
+### Group Policy
+
+A group policy is an account associated with a group and a decision policy.
+Group policies are abstracted from groups because a single group may have
+multiple decision policies for different types of actions. Managing group
+membership separately from decision policies results in the least overhead
+and keeps membership consistent across different policies. The recommended
+pattern is to have a single master group policy for a given group, and then
+to create separate group policies with different decision policies and
+delegate the desired permissions from the master account to
+those "sub-accounts" using the `x/authz` module.
+
+### Decision Policy
+
+A decision policy is the mechanism by which members of a group can vote on
+proposals, as well as the rules that dictate whether a proposal should pass
+or not based on its tally outcome.
+
+All decision policies generally have a minimum execution period and a
+maximum voting window. The minimum execution period is the minimum amount of time
+that must pass after submission in order for a proposal to potentially be executed, and it may
+be set to 0. The maximum voting window is the maximum time after submission that a proposal may
+be voted on before it is tallied.
+
+The chain developer also defines an app-wide maximum execution period, which is
+the maximum amount of time after a proposal's voting period end where users are
+allowed to execute a proposal.
+
+The current group module comes shipped with two decision policies: threshold
+and percentage.
Any chain developer can extend upon these two by creating
+custom decision policies, as long as they adhere to the `DecisionPolicy`
+interface:
+
+```go
+// DecisionPolicyResult is the result of applying a decision policy to a
+// proposal's tally.
+type DecisionPolicyResult struct {
+	// Allow determines if the proposal is allowed to pass.
+	Allow bool
+	// Final determines if the tally result is final; if true, no further
+	// votes can change the outcome.
+	Final bool
+}
+
+// DecisionPolicy is the persistent set of rules to determine the result of election on a proposal.
+type DecisionPolicy interface {
+	proto.Message
+
+	// GetVotingPeriod returns the duration after proposal submission where
+	// votes are accepted.
+	GetVotingPeriod() time.Duration
+	// GetMinExecutionPeriod returns the minimum duration after submission
+	// where we can execute a proposal. It can be set to 0 or to a value
+	// less than VotingPeriod to allow TRY_EXEC.
+	GetMinExecutionPeriod() time.Duration
+	// Allow defines policy-specific logic to allow a proposal to pass or not,
+	// based on its tally result, the group's total power and the time since
+	// the proposal was submitted.
+	Allow(tallyResult TallyResult, totalPower string) (DecisionPolicyResult, error)
+}
+```
+
+#### Threshold decision policy
+
+A threshold decision policy defines a threshold of yes votes (based on a tally
+of voter weights) that must be achieved in order for a proposal to pass. For
+this decision policy, abstain and veto are simply treated as no's.
+
+This decision policy also has a VotingPeriod window and a MinExecutionPeriod
+window. The former defines the duration after proposal submission during which members
+are allowed to vote, after which tallying is performed. The latter specifies
+the minimum duration after proposal submission before the proposal can be
+executed. If set to 0, the proposal is allowed to be executed immediately
+on submission (using the `TRY_EXEC` option). Naturally, MinExecutionPeriod
+cannot be greater than VotingPeriod+MaxExecutionPeriod (where MaxExecutionPeriod is
+the app-defined duration that specifies the window after voting ends during which a
+proposal can be executed).
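To make the pass/fail rule concrete, here is a minimal, hypothetical sketch of threshold-style tallying in plain Go. It is a simplification, not the SDK's implementation: the `allowThreshold` helper is invented for illustration, weights are whole-number strings (the SDK supports decimal weights), and veto handling, finality, and timing checks are omitted.

```go
package main

import (
	"fmt"
	"math/big"
)

// allowThreshold reports whether a proposal passes under a threshold
// policy: it passes once the summed weight of yes votes reaches the
// threshold. Abstain and veto simply don't contribute to yesWeight,
// which is how they end up counting as "no".
func allowThreshold(yesWeight, threshold string) (bool, error) {
	yes, ok := new(big.Int).SetString(yesWeight, 10)
	if !ok {
		return false, fmt.Errorf("invalid yes weight: %q", yesWeight)
	}
	thr, ok := new(big.Int).SetString(threshold, 10)
	if !ok {
		return false, fmt.Errorf("invalid threshold: %q", threshold)
	}
	return yes.Cmp(thr) >= 0, nil
}

func main() {
	pass, _ := allowThreshold("3", "2") // yes-weight 3 vs threshold 2
	fmt.Println(pass)                   // true
	pass, _ = allowThreshold("1", "2") // yes-weight 1 vs threshold 2
	fmt.Println(pass)                  // false
}
```

Weights are kept as arbitrary-precision integers rather than native ints because group member weights are unbounded strings in the state.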
+
+#### Percentage decision policy
+
+A percentage decision policy is similar to a threshold decision policy, except
+that the threshold is not defined as a constant weight, but as a percentage.
+It's more suited for groups where the group members' weights can be updated, as
+the percentage threshold stays the same, and doesn't depend on how those member
+weights get updated.
+
+Like the threshold decision policy, the percentage decision policy has the
+same two parameters, VotingPeriod and MinExecutionPeriod.
+
+### Proposal
+
+Any member(s) of a group can submit a proposal for a group policy account to decide upon.
+A proposal consists of a set of messages that will be executed if the proposal
+passes, as well as any metadata associated with the proposal.
+
+#### Voting
+
+There are four voting options: yes, no, abstain, and veto. Not
+all decision policies take all four options into account. Votes can contain optional metadata.
+In the current implementation, the voting window begins as soon as a proposal
+is submitted, and the end is defined by the group policy's decision policy.
+
+#### Withdrawing Proposals
+
+Proposals can be withdrawn at any time before the voting period ends, either by the
+admin of the group policy or by one of the proposers. Once withdrawn, a proposal is
+marked as `PROPOSAL_STATUS_WITHDRAWN`, and no more voting or execution is
+allowed on it.
+
+#### Aborted Proposals
+
+If the group policy is updated during the voting period of the proposal, then
+the proposal is marked as `PROPOSAL_STATUS_ABORTED`, and no more voting or
+execution is allowed on it. This is because the group policy defines the rules
+of proposal voting and execution, so if those rules change during the lifecycle
+of a proposal, then the proposal should be marked as stale.
+
+#### Tallying
+
+Tallying is the counting of all votes on a proposal.
It happens only once in
+the lifecycle of a proposal, but can be triggered by two factors, whichever
+happens first:
+
+* either someone tries to execute the proposal (see next section), which can
+  happen on a `Msg/Exec` transaction, or a `Msg/{SubmitProposal,Vote}`
+  transaction with the `Exec` field set. When a proposal execution is attempted,
+  a tally is done first to make sure the proposal passes.
+* or on `EndBlock` when the proposal's voting period has just ended.
+
+If the tally result passes the decision policy's rules, then the proposal is
+marked as `PROPOSAL_STATUS_ACCEPTED`, or else it is marked as
+`PROPOSAL_STATUS_REJECTED`. In either case, no further voting is allowed, and the tally
+result is persisted to state in the proposal's `FinalTallyResult`.
+
+#### Executing Proposals
+
+Proposals are executed only when the tallying is done, and the group account's
+decision policy allows the proposal to pass based on the tally outcome. They
+are marked by the status `PROPOSAL_STATUS_ACCEPTED`. Execution must happen
+before a duration of `MaxExecutionPeriod` (set by the chain developer) after
+each proposal's voting period end.
+
+Proposals will not be automatically executed by the chain in this current design,
+but rather a user must submit a `Msg/Exec` transaction to attempt to execute the
+proposal based on the current votes and decision policy. Any user (not only the
+group members) can execute proposals that have been accepted, and execution fees are
+paid by the proposal executor.
+It's also possible to try to execute a proposal immediately on creation or on
+new votes using the `Exec` field of `Msg/SubmitProposal` and `Msg/Vote` requests.
+In the former case, proposers' signatures are considered as yes votes.
+In these cases, if the proposal can't be executed (i.e. it didn't pass the
+decision policy's rules), it remains open for new votes and
+could be tallied and executed later on.
+
+A successful proposal execution will have its `ExecutorResult` marked as
+`PROPOSAL_EXECUTOR_RESULT_SUCCESS`. The proposal will be automatically pruned
+after execution. On the other hand, a failed proposal execution will be marked
+as `PROPOSAL_EXECUTOR_RESULT_FAILURE`. Such a proposal can be re-executed
+multiple times, until it expires, `MaxExecutionPeriod` after the voting period
+end.
+
+### Pruning
+
+Proposals and votes are automatically pruned to avoid state bloat.
+
+Votes are pruned:
+
+* either after a successful tally, i.e. a tally whose result passes the decision
+  policy's rules, which can be triggered by a `Msg/Exec` or a
+  `Msg/{SubmitProposal,Vote}` with the `Exec` field set,
+* or on `EndBlock` right after the proposal's voting period end. This applies to proposals with status `aborted` or `withdrawn` too.
+
+whichever happens first.
+
+Proposals are pruned:
+
+* on `EndBlock`, if the proposal's status is `withdrawn` or `aborted` when its voting period ends, before tallying,
+* and either after a successful proposal execution,
+* or on `EndBlock` right after the proposal's `voting_period_end` +
+  `max_execution_period` (defined as an app-wide configuration) is passed,
+
+whichever happens first.
+
+## State
+
+The `group` module uses the `orm` package, which provides table storage with support for
+primary keys and secondary indexes. `orm` also defines `Sequence`, which is a persistent unique key generator based on a counter that can be used along with `Table`s.
+
+Here's the list of tables and associated sequences and indexes stored as part of the `group` module.
+
+### Group Table
+
+The `groupTable` stores `GroupInfo`: `0x0 | BigEndian(GroupId) -> ProtocolBuffer(GroupInfo)`.
+
+#### groupSeq
+
+The value of `groupSeq` is incremented when creating a new group and corresponds to the new `GroupId`: `0x1 | 0x1 -> BigEndian`.
+
+The second `0x1` corresponds to the ORM `sequenceStorageKey`.
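+
+The `0x0 | BigEndian(GroupId)` layout above can be illustrated with a short Go sketch. The `groupKey` helper is hypothetical (the real keys are produced by the ORM package); the point is that big-endian encoding makes the store's lexicographic key ordering match numeric group-id ordering:
+
+```go
+package main
+
+import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+)
+
+// groupKey builds a key shaped like the groupTable primary key described
+// above: 0x0 | BigEndian(GroupId).
+func groupKey(groupID uint64) []byte {
+	key := make([]byte, 9)
+	key[0] = 0x0 // groupTable prefix
+	binary.BigEndian.PutUint64(key[1:], groupID)
+	return key
+}
+
+func main() {
+	fmt.Printf("%x\n", groupKey(1)) // 000000000000000001
+	// Lexicographic order of the keys matches numeric order of the ids.
+	fmt.Println(bytes.Compare(groupKey(2), groupKey(10)) < 0) // true
+}
+```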
+
+#### groupByAdminIndex
+
+`groupByAdminIndex` allows retrieving groups by admin address:
+`0x2 | len([]byte(group.Admin)) | []byte(group.Admin) | BigEndian(GroupId) -> []byte()`.
+
+### Group Member Table
+
+The `groupMemberTable` stores `GroupMember`s: `0x10 | BigEndian(GroupId) | []byte(member.Address) -> ProtocolBuffer(GroupMember)`.
+
+The `groupMemberTable` is a primary key table and its `PrimaryKey` is given by
+`BigEndian(GroupId) | []byte(member.Address)`, which is used by the following indexes.
+
+#### groupMemberByGroupIndex
+
+`groupMemberByGroupIndex` allows retrieving group members by group id:
+`0x11 | BigEndian(GroupId) | PrimaryKey -> []byte()`.
+
+#### groupMemberByMemberIndex
+
+`groupMemberByMemberIndex` allows retrieving group members by member address:
+`0x12 | len([]byte(member.Address)) | []byte(member.Address) | PrimaryKey -> []byte()`.
+
+### Group Policy Table
+
+The `groupPolicyTable` stores `GroupPolicyInfo`: `0x20 | len([]byte(Address)) | []byte(Address) -> ProtocolBuffer(GroupPolicyInfo)`.
+
+The `groupPolicyTable` is a primary key table and its `PrimaryKey` is given by
+`len([]byte(Address)) | []byte(Address)`, which is used by the following indexes.
+
+#### groupPolicySeq
+
+The value of `groupPolicySeq` is incremented when creating a new group policy and is used to generate the new group policy account `Address`:
+`0x21 | 0x1 -> BigEndian`.
+
+The second `0x1` corresponds to the ORM `sequenceStorageKey`.
+
+#### groupPolicyByGroupIndex
+
+`groupPolicyByGroupIndex` allows retrieving group policies by group id:
+`0x22 | BigEndian(GroupId) | PrimaryKey -> []byte()`.
+
+#### groupPolicyByAdminIndex
+
+`groupPolicyByAdminIndex` allows retrieving group policies by admin address:
+`0x23 | len([]byte(Address)) | []byte(Address) | PrimaryKey -> []byte()`.
+
+### Proposal Table
+
+The `proposalTable` stores `Proposal`s: `0x30 | BigEndian(ProposalId) -> ProtocolBuffer(Proposal)`.
+
+#### proposalSeq
+
+The value of `proposalSeq` is incremented when creating a new proposal and corresponds to the new `ProposalId`: `0x31 | 0x1 -> BigEndian`.
+
+The second `0x1` corresponds to the ORM `sequenceStorageKey`.
+
+#### proposalByGroupPolicyIndex
+
+`proposalByGroupPolicyIndex` allows retrieving proposals by group policy account address:
+`0x32 | len([]byte(account.Address)) | []byte(account.Address) | BigEndian(ProposalId) -> []byte()`.
+
+#### proposalsByVotingPeriodEndIndex
+
+`proposalsByVotingPeriodEndIndex` allows retrieving proposals sorted by chronological `voting_period_end`:
+`0x33 | sdk.FormatTimeBytes(proposal.VotingPeriodEnd) | BigEndian(ProposalId) -> []byte()`.
+
+This index is used when tallying the proposal votes at the end of the voting period, and for pruning proposals at `VotingPeriodEnd + MaxExecutionPeriod`.
+
+### Vote Table
+
+The `voteTable` stores `Vote`s: `0x40 | BigEndian(ProposalId) | []byte(voter.Address) -> ProtocolBuffer(Vote)`.
+
+The `voteTable` is a primary key table and its `PrimaryKey` is given by
+`BigEndian(ProposalId) | []byte(voter.Address)`, which is used by the following indexes.
+
+#### voteByProposalIndex
+
+`voteByProposalIndex` allows retrieving votes by proposal id:
+`0x41 | BigEndian(ProposalId) | PrimaryKey -> []byte()`.
+
+#### voteByVoterIndex
+
+`voteByVoterIndex` allows retrieving votes by voter address:
+`0x42 | len([]byte(voter.Address)) | []byte(voter.Address) | PrimaryKey -> []byte()`.
+
+## Msg Service
+
+### Msg/CreateGroup
+
+A new group can be created with the `MsgCreateGroup`, which has an admin address, a list of members and some optional metadata.
+
+The metadata has a maximum length that is chosen by the app developer, and
+passed into the group keeper as a config.
+
+```go
+// MsgCreateGroup is the Msg/CreateGroup request type.
+message MsgCreateGroup {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgCreateGroup";
+
+  // admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // members defines the group members.
+  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // metadata is any arbitrary metadata attached to the group.
+  string metadata = 3;
+}
+```
+
+It's expected to fail if:
+
+* metadata length is greater than `MaxMetadataLen` config.
+* members are not correctly set (e.g. wrong address format, duplicates, or with 0 weight).
+
+### Msg/UpdateGroupMembers
+
+Group members can be updated with the `UpdateGroupMembers`.
+
+```go
+// MsgUpdateGroupMembers is the Msg/UpdateGroupMembers request type.
+message MsgUpdateGroupMembers {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgUpdateGroupMembers";
+
+  // admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // group_id is the unique ID of the group.
+  uint64 group_id = 2;
+
+  // member_updates is the list of members to update,
+  // set weight to 0 to remove a member.
+  repeated MemberRequest member_updates = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+In the list of `MemberUpdates`, an existing member can be removed by setting its weight to 0.
+
+It's expected to fail if:
+
+* the signer is not the admin of the group.
+* for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group.
+
+### Msg/UpdateGroupAdmin
+
+The `UpdateGroupAdmin` can be used to update a group admin.
+
+```go
+// MsgUpdateGroupAdmin is the Msg/UpdateGroupAdmin request type.
+message MsgUpdateGroupAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupAdmin"; + + // admin is the current account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_id is the unique ID of the group. + uint64 group_id = 2; + + // new_admin is the group new admin account address. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +It's expected to fail if the signer is not the admin of the group. + +### Msg/UpdateGroupMetadata + +The `UpdateGroupMetadata` can be used to update a group metadata. + +```go +// MsgUpdateGroupMetadata is the Msg/UpdateGroupMetadata request type. +message MsgUpdateGroupMetadata { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupMetadata"; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_id is the unique ID of the group. + uint64 group_id = 2; + + // metadata is the updated group's metadata. + string metadata = 3; +} +``` + +It's expected to fail if: + +* new metadata length is greater than `MaxMetadataLen` config. +* the signer is not the admin of the group. + +### Msg/CreateGroupPolicy + +A new group policy can be created with the `MsgCreateGroupPolicy`, which has an admin address, a group id, a decision policy and some optional metadata. + +```go +// MsgCreateGroupPolicy is the Msg/CreateGroupPolicy request type. +message MsgCreateGroupPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgCreateGroupPolicy"; + + option (gogoproto.goproto_getters) = false; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_id is the unique ID of the group. + uint64 group_id = 2; + + // metadata is any arbitrary metadata attached to the group policy. 
+  string metadata = 3;
+
+  // decision_policy specifies the group policy's decision policy.
+  google.protobuf.Any decision_policy = 4 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
+}
+```
+
+It's expected to fail if:
+
+* the signer is not the admin of the group.
+* metadata length is greater than `MaxMetadataLen` config.
+* the decision policy's `Validate()` method doesn't pass against the group.
+
+### Msg/CreateGroupWithPolicy
+
+A new group with policy can be created with the `MsgCreateGroupWithPolicy`, which takes an admin address, a list of members, a decision policy, some optional metadata for the group and the group policy, and a `group_policy_as_admin` field which, if set, makes the group policy account address the admin of both the group and the group policy.
+
+```go
+// MsgCreateGroupWithPolicy is the Msg/CreateGroupWithPolicy request type.
+message MsgCreateGroupWithPolicy {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgCreateGroupWithPolicy";
+  option (gogoproto.goproto_getters) = false;
+
+  // admin is the account address of the group and group policy admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // members defines the group members.
+  repeated MemberRequest members = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+
+  // group_metadata is any arbitrary metadata attached to the group.
+  string group_metadata = 3;
+
+  // group_policy_metadata is any arbitrary metadata attached to the group policy.
+  string group_policy_metadata = 4;
+
+  // group_policy_as_admin is a boolean field, if set to true, the group policy account address will be used as group
+  // and group policy admin.
+  bool group_policy_as_admin = 5;
+
+  // decision_policy specifies the group policy's decision policy.
+ google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"]; +} +``` + +It's expected to fail for the same reasons as `Msg/CreateGroup` and `Msg/CreateGroupPolicy`. + +### Msg/UpdateGroupPolicyAdmin + +The `UpdateGroupPolicyAdmin` can be used to update a group policy admin. + +```go +// MsgUpdateGroupPolicyAdmin is the Msg/UpdateGroupPolicyAdmin request type. +message MsgUpdateGroupPolicyAdmin { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyAdmin"; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_policy_address is the account address of the group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // new_admin is the new group policy admin. + string new_admin = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +It's expected to fail if the signer is not the admin of the group policy. + +### Msg/UpdateGroupPolicyDecisionPolicy + +The `UpdateGroupPolicyDecisionPolicy` can be used to update a decision policy. + +```go +// MsgUpdateGroupPolicyDecisionPolicy is the Msg/UpdateGroupPolicyDecisionPolicy request type. +message MsgUpdateGroupPolicyDecisionPolicy { + option (cosmos.msg.v1.signer) = "admin"; + option (amino.name) = "cosmos-sdk/MsgUpdateGroupDecisionPolicy"; + + option (gogoproto.goproto_getters) = false; + + // admin is the account address of the group admin. + string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_policy_address is the account address of group policy. + string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // decision_policy is the updated group policy's decision policy. 
+  google.protobuf.Any decision_policy = 3 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
+}
+```
+
+It's expected to fail if:
+
+* the signer is not the admin of the group policy.
+* the new decision policy's `Validate()` method doesn't pass against the group.
+
+### Msg/UpdateGroupPolicyMetadata
+
+The `UpdateGroupPolicyMetadata` can be used to update a group policy metadata.
+
+```go
+// MsgUpdateGroupPolicyMetadata is the Msg/UpdateGroupPolicyMetadata request type.
+message MsgUpdateGroupPolicyMetadata {
+  option (cosmos.msg.v1.signer) = "admin";
+  option (amino.name) = "cosmos-sdk/MsgUpdateGroupPolicyMetadata";
+
+  // admin is the account address of the group admin.
+  string admin = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // group_policy_address is the account address of group policy.
+  string group_policy_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // metadata is the group policy metadata to be updated.
+  string metadata = 3;
+}
+```
+
+It's expected to fail if:
+
+* new metadata length is greater than `MaxMetadataLen` config.
+* the signer is not the admin of the group policy.
+
+### Msg/SubmitProposal
+
+A new proposal can be created with the `MsgSubmitProposal`, which has a group policy account address, a list of proposer addresses, a list of messages to execute if the proposal is accepted, and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after proposal creation. Proposers' signatures are considered as yes votes in this case.
+
+```go
+// MsgSubmitProposal is the Msg/SubmitProposal request type.
+message MsgSubmitProposal {
+  option (cosmos.msg.v1.signer) = "proposers";
+  option (amino.name) = "cosmos-sdk/group/MsgSubmitProposal";
+
+  option (gogoproto.goproto_getters) = false;
+
+  // group_policy_address is the account address of group policy.
+  string group_policy_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // proposers are the account addresses of the proposers.
+  // Proposers signatures will be counted as yes votes.
+  repeated string proposers = 2;
+
+  // metadata is any arbitrary metadata attached to the proposal.
+  string metadata = 3;
+
+  // messages is a list of `sdk.Msg`s that will be executed if the proposal passes.
+  repeated google.protobuf.Any messages = 4;
+
+  // exec defines the mode of execution of the proposal,
+  // whether it should be executed immediately on creation or not.
+  // If so, proposers signatures are considered as Yes votes.
+  Exec exec = 5;
+
+  // title is the title of the proposal.
+  //
+  // Since: cosmos-sdk 0.47
+  string title = 6;
+
+  // summary is the summary of the proposal.
+  //
+  // Since: cosmos-sdk 0.47
+  string summary = 7;
+}
+```
+
+It's expected to fail if:
+
+* metadata, title, or summary length is greater than `MaxMetadataLen` config.
+* any of the proposers is not a group member.
+
+### Msg/WithdrawProposal
+
+A proposal can be withdrawn using `MsgWithdrawProposal`, which has an `address` (either a proposer or the group policy admin) and the `proposal_id` of the proposal to withdraw.
+
+```go
+// MsgWithdrawProposal is the Msg/WithdrawProposal request type.
+message MsgWithdrawProposal {
+  option (cosmos.msg.v1.signer) = "address";
+  option (amino.name) = "cosmos-sdk/group/MsgWithdrawProposal";
+
+  // proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  // address is the admin of the group policy or one of the proposers of the proposal.
+  string address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+It's expected to fail if:
+
+* the signer is neither the group policy admin nor a proposer of the proposal.
+* the proposal is already closed or aborted.
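+
+The withdrawal authorization rule above can be sketched in Go. The function and variable names here are illustrative, not the SDK's: a withdrawal is allowed only for the group policy admin or one of the proposal's proposers.
+
+```go
+package main
+
+import "fmt"
+
+// canWithdraw reports whether signer may withdraw a proposal: it must be
+// the group policy admin or one of the proposal's proposers.
+func canWithdraw(signer, policyAdmin string, proposers []string) bool {
+	if signer == policyAdmin {
+		return true
+	}
+	for _, p := range proposers {
+		if p == signer {
+			return true
+		}
+	}
+	return false
+}
+
+func main() {
+	proposers := []string{"cosmos1aaa", "cosmos1bbb"}
+	fmt.Println(canWithdraw("cosmos1bbb", "cosmos1admin", proposers)) // true: a proposer
+	fmt.Println(canWithdraw("cosmos1ccc", "cosmos1admin", proposers)) // false: neither admin nor proposer
+}
+```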
+
+### Msg/Vote
+
+A new vote can be created with the `MsgVote`, given a proposal id, a voter address, a choice (yes, no, abstain, or veto), and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after voting.
+
+```go
+// MsgVote is the Msg/Vote request type.
+message MsgVote {
+  option (cosmos.msg.v1.signer) = "voter";
+  option (amino.name) = "cosmos-sdk/group/MsgVote";
+
+  // proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  // voter is the voter account address.
+  string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // option is the voter's choice on the proposal.
+  VoteOption option = 3;
+
+  // metadata is any arbitrary metadata attached to the vote.
+  string metadata = 4;
+
+  // exec defines whether the proposal should be executed
+  // immediately after voting or not.
+  Exec exec = 5;
+}
+```
+
+It's expected to fail if:
+
+* metadata length is greater than `MaxMetadataLen` config.
+* the proposal is no longer in its voting period.
+
+### Msg/Exec
+
+A proposal can be executed with the `MsgExec`.
+
+```go
+// MsgExec is the Msg/Exec request type.
+message MsgExec {
+  option (cosmos.msg.v1.signer) = "executor";
+  option (amino.name) = "cosmos-sdk/group/MsgExec";
+
+  // proposal is the unique ID of the proposal.
+  uint64 proposal_id = 1;
+
+  // executor is the account address used to execute the proposal.
+  string executor = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+The messages that are part of this proposal won't be executed if:
+
+* the proposal has not been accepted by the group policy.
+* the proposal has already been successfully executed.
+
+### Msg/LeaveGroup
+
+The `MsgLeaveGroup` allows a group member to leave a group.
+
+```go
+// MsgLeaveGroup is the Msg/LeaveGroup request type.
+message MsgLeaveGroup { + option (cosmos.msg.v1.signer) = "address"; + option (amino.name) = "cosmos-sdk/group/MsgLeaveGroup"; + + // address is the account address of the group member. + string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // group_id is the unique ID of the group. + uint64 group_id = 2; +} +``` + +It's expected to fail if: + +* the group member is not part of the group. +* for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group. + +## Events + +The group module emits the following events: + +### EventCreateGroup + +| Type | Attribute Key | Attribute Value | +| -------------------------------- | ------------- | -------------------------------- | +| message | action | /cosmos.group.v1.Msg/CreateGroup | +| cosmos.group.v1.EventCreateGroup | group\_id | `{groupId}` | + +### EventUpdateGroup + +| Type | Attribute Key | Attribute Value | +| -------------------------------- | ------------- | ---------------------------------------------------------- | +| message | action | `/cosmos.group.v1.Msg/UpdateGroup{Admin\|Metadata\|Members}` | +| cosmos.group.v1.EventUpdateGroup | group\_id | `{groupId}` | + +### EventCreateGroupPolicy + +| Type | Attribute Key | Attribute Value | +| -------------------------------------- | ------------- | -------------------------------------- | +| message | action | /cosmos.group.v1.Msg/CreateGroupPolicy | +| cosmos.group.v1.EventCreateGroupPolicy | address | `{groupPolicyAddress}` | + +### EventUpdateGroupPolicy + +| Type | Attribute Key | Attribute Value | +| -------------------------------------- | ------------- | ----------------------------------------------------------------------- | +| message | action | `/cosmos.group.v1.Msg/UpdateGroupPolicy{Admin\|Metadata\|DecisionPolicy}` | +| cosmos.group.v1.EventUpdateGroupPolicy | address | `{groupPolicyAddress}` | + +### EventCreateProposal + +| Type | Attribute Key | Attribute Value | +| 
----------------------------------- | ------------- | ----------------------------------- |
+| message                             | action        | /cosmos.group.v1.Msg/CreateProposal |
+| cosmos.group.v1.EventCreateProposal | proposal\_id  | `{proposalId}`                      |
+
+### EventWithdrawProposal
+
+| Type                                  | Attribute Key | Attribute Value                       |
+| ------------------------------------- | ------------- | ------------------------------------- |
+| message                               | action        | /cosmos.group.v1.Msg/WithdrawProposal |
+| cosmos.group.v1.EventWithdrawProposal | proposal\_id  | `{proposalId}`                        |
+
+### EventVote
+
+| Type                      | Attribute Key | Attribute Value           |
+| ------------------------- | ------------- | ------------------------- |
+| message                   | action        | /cosmos.group.v1.Msg/Vote |
+| cosmos.group.v1.EventVote | proposal\_id  | `{proposalId}`            |
+
+### EventExec
+
+| Type                      | Attribute Key | Attribute Value           |
+| ------------------------- | ------------- | ------------------------- |
+| message                   | action        | /cosmos.group.v1.Msg/Exec |
+| cosmos.group.v1.EventExec | proposal\_id  | `{proposalId}`            |
+| cosmos.group.v1.EventExec | logs          | `{logs\_string}`          |
+
+### EventLeaveGroup
+
+| Type                            | Attribute Key | Attribute Value                 |
+| ------------------------------- | ------------- | ------------------------------- |
+| message                         | action        | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventLeaveGroup | group\_id     | `{groupId}`                     |
+| cosmos.group.v1.EventLeaveGroup | address       | `{address}`                     |
+
+### EventProposalPruned
+
+| Type                                | Attribute Key | Attribute Value                 |
+| ----------------------------------- | ------------- | ------------------------------- |
+| message                             | action        | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventProposalPruned | proposal\_id  | `{proposalId}`                  |
+| cosmos.group.v1.EventProposalPruned | status        | `{ProposalStatus}`              |
+| cosmos.group.v1.EventProposalPruned | tally\_result | `{TallyResult}`                 |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `group` module using the CLI.
+ +#### Query + +The `query` commands allow users to query `group` state. + +```bash +simd query group --help +``` + +##### group-info + +The `group-info` command allows users to query for group info by given group id. + +```bash +simd query group group-info [id] [flags] +``` + +Example: + +```bash +simd query group group-info 1 +``` + +Example Output: + +```bash +admin: cosmos1.. +group_id: "1" +metadata: AQ== +total_weight: "3" +version: "1" +``` + +##### group-policy-info + +The `group-policy-info` command allows users to query for group policy info by account address of group policy . + +```bash +simd query group group-policy-info [group-policy-account] [flags] +``` + +Example: + +```bash +simd query group group-policy-info cosmos1.. +``` + +Example Output: + +```bash expandable +address: cosmos1.. +admin: cosmos1.. +decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s +group_id: "1" +metadata: AQ== +version: "1" +``` + +##### group-members + +The `group-members` command allows users to query for group members by group id with pagination flags. + +```bash +simd query group group-members [id] [flags] +``` + +Example: + +```bash +simd query group group-members 1 +``` + +Example Output: + +```bash expandable +members: +- group_id: "1" + member: + address: cosmos1.. + metadata: AQ== + weight: "2" +- group_id: "1" + member: + address: cosmos1.. + metadata: AQ== + weight: "1" +pagination: + next_key: null + total: "2" +``` + +##### groups-by-admin + +The `groups-by-admin` command allows users to query for groups by admin account address with pagination flags. + +```bash +simd query group groups-by-admin [admin] [flags] +``` + +Example: + +```bash +simd query group groups-by-admin cosmos1.. +``` + +Example Output: + +```bash expandable +groups: +- admin: cosmos1.. + group_id: "1" + metadata: AQ== + total_weight: "3" + version: "1" +- admin: cosmos1.. 
+ group_id: "2" + metadata: AQ== + total_weight: "3" + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### group-policies-by-group + +The `group-policies-by-group` command allows users to query for group policies by group id with pagination flags. + +```bash +simd query group group-policies-by-group [group-id] [flags] +``` + +Example: + +```bash +simd query group group-policies-by-group 1 +``` + +Example Output: + +```bash expandable +group_policies: +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### group-policies-by-admin + +The `group-policies-by-admin` command allows users to query for group policies by admin account address with pagination flags. + +```bash +simd query group group-policies-by-admin [admin] [flags] +``` + +Example: + +```bash +simd query group group-policies-by-admin cosmos1.. +``` + +Example Output: + +```bash expandable +group_policies: +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +- address: cosmos1.. + admin: cosmos1.. + decision_policy: + '@type': /cosmos.group.v1.ThresholdDecisionPolicy + threshold: "1" + windows: + min_execution_period: 0s + voting_period: 432000s + group_id: "1" + metadata: AQ== + version: "1" +pagination: + next_key: null + total: "2" +``` + +##### proposal + +The `proposal` command allows users to query for proposal by id. 
+ +```bash +simd query group proposal [id] [flags] +``` + +Example: + +```bash +simd query group proposal 1 +``` + +Example Output: + +```bash expandable +proposal: + address: cosmos1.. + executor_result: EXECUTOR_RESULT_NOT_RUN + group_policy_version: "1" + group_version: "1" + metadata: AQ== + msgs: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "100000000" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + proposal_id: "1" + proposers: + - cosmos1.. + result: RESULT_UNFINALIZED + status: STATUS_SUBMITTED + submitted_at: "2021-12-17T07:06:26.310638964Z" + windows: + min_execution_period: 0s + voting_period: 432000s + vote_state: + abstain_count: "0" + no_count: "0" + veto_count: "0" + yes_count: "0" + summary: "Summary" + title: "Title" +``` + +##### proposals-by-group-policy + +The `proposals-by-group-policy` command allows users to query for proposals by account address of group policy with pagination flags. + +```bash +simd query group proposals-by-group-policy [group-policy-account] [flags] +``` + +Example: + +```bash +simd query group proposals-by-group-policy cosmos1.. +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "1" +proposals: +- address: cosmos1.. + executor_result: EXECUTOR_RESULT_NOT_RUN + group_policy_version: "1" + group_version: "1" + metadata: AQ== + msgs: + - '@type': /cosmos.bank.v1beta1.MsgSend + amount: + - amount: "100000000" + denom: stake + from_address: cosmos1.. + to_address: cosmos1.. + proposal_id: "1" + proposers: + - cosmos1.. + result: RESULT_UNFINALIZED + status: STATUS_SUBMITTED + submitted_at: "2021-12-17T07:06:26.310638964Z" + windows: + min_execution_period: 0s + voting_period: 432000s + vote_state: + abstain_count: "0" + no_count: "0" + veto_count: "0" + yes_count: "0" + summary: "Summary" + title: "Title" +``` + +##### vote + +The `vote` command allows users to query for vote by proposal id and voter account address. 
+ +```bash +simd query group vote [proposal-id] [voter] [flags] +``` + +Example: + +```bash +simd query group vote 1 cosmos1.. +``` + +Example Output: + +```bash +vote: + choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +##### votes-by-proposal + +The `votes-by-proposal` command allows users to query for votes by proposal id with pagination flags. + +```bash +simd query group votes-by-proposal [proposal-id] [flags] +``` + +Example: + +```bash +simd query group votes-by-proposal 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +votes: +- choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +##### votes-by-voter + +The `votes-by-voter` command allows users to query for votes by voter account address with pagination flags. + +```bash +simd query group votes-by-voter [voter] [flags] +``` + +Example: + +```bash +simd query group votes-by-voter cosmos1.. +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +votes: +- choice: CHOICE_YES + metadata: AQ== + proposal_id: "1" + submitted_at: "2021-12-17T08:05:02.490164009Z" + voter: cosmos1.. +``` + +### Transactions + +The `tx` commands allow users to interact with the `group` module. + +```bash +simd tx group --help +``` + +#### create-group + +The `create-group` command allows users to create a group which is an aggregation of member accounts with associated weights and +an administrator account. + +```bash +simd tx group create-group [admin] [metadata] [members-json-file] +``` + +Example: + +```bash +simd tx group create-group cosmos1.. "AQ==" members.json +``` + +#### update-group-admin + +The `update-group-admin` command allows users to update a group's admin. 
+ +```bash +simd tx group update-group-admin [admin] [group-id] [new-admin] [flags] +``` + +Example: + +```bash +simd tx group update-group-admin cosmos1.. 1 cosmos1.. +``` + +#### update-group-members + +The `update-group-members` command allows users to update a group's members. + +```bash +simd tx group update-group-members [admin] [group-id] [members-json-file] [flags] +``` + +Example: + +```bash +simd tx group update-group-members cosmos1.. 1 members.json +``` + +#### update-group-metadata + +The `update-group-metadata` command allows users to update a group's metadata. + +```bash +simd tx group update-group-metadata [admin] [group-id] [metadata] [flags] +``` + +Example: + +```bash +simd tx group update-group-metadata cosmos1.. 1 "AQ==" +``` + +#### create-group-policy + +The `create-group-policy` command allows users to create a group policy which is an account associated with a group and a decision policy. + +```bash +simd tx group create-group-policy [admin] [group-id] [metadata] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group create-group-policy cosmos1.. 1 "AQ==" '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### create-group-with-policy + +The `create-group-with-policy` command allows users to create a group which is an aggregation of member accounts with associated weights and an administrator account with decision policy. If the `--group-policy-as-admin` flag is set to `true`, the group policy address becomes the group and group policy admin. + +```bash +simd tx group create-group-with-policy [admin] [group-metadata] [group-policy-metadata] [members-json-file] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group create-group-with-policy cosmos1.. 
"AQ==" "AQ==" members.json '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### update-group-policy-admin + +The `update-group-policy-admin` command allows users to update a group policy admin. + +```bash +simd tx group update-group-policy-admin [admin] [group-policy-account] [new-admin] [flags] +``` + +Example: + +```bash +simd tx group update-group-policy-admin cosmos1.. cosmos1.. cosmos1.. +``` + +#### update-group-policy-metadata + +The `update-group-policy-metadata` command allows users to update a group policy metadata. + +```bash +simd tx group update-group-policy-metadata [admin] [group-policy-account] [new-metadata] [flags] +``` + +Example: + +```bash +simd tx group update-group-policy-metadata cosmos1.. cosmos1.. "AQ==" +``` + +#### update-group-policy-decision-policy + +The `update-group-policy-decision-policy` command allows users to update a group policy's decision policy. + +```bash +simd tx group update-group-policy-decision-policy [admin] [group-policy-account] [decision-policy] [flags] +``` + +Example: + +```bash +simd tx group update-group-policy-decision-policy cosmos1.. cosmos1.. '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"2", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' +``` + +#### submit-proposal + +The `submit-proposal` command allows users to submit a new proposal. + +```bash +simd tx group submit-proposal [group-policy-account] [proposer[,proposer]*] [msg_tx_json_file] [metadata] [flags] +``` + +Example: + +```bash +simd tx group submit-proposal cosmos1.. cosmos1.. msg_tx.json "AQ==" +``` + +#### withdraw-proposal + +The `withdraw-proposal` command allows users to withdraw a proposal. + +```bash +simd tx group withdraw-proposal [proposal-id] [group-policy-admin-or-proposer] +``` + +Example: + +```bash +simd tx group withdraw-proposal 1 cosmos1.. 
+ +``` + +#### vote + +The `vote` command allows users to vote on a proposal. + +```bash +simd tx group vote [proposal-id] [voter] [choice] [metadata] [flags] +``` + +Example: + +```bash +simd tx group vote 1 cosmos1.. CHOICE_YES "AQ==" +``` + +#### exec + +The `exec` command allows users to execute a proposal. + +```bash +simd tx group exec [proposal-id] [flags] +``` + +Example: + +```bash +simd tx group exec 1 +``` + +#### leave-group + +The `leave-group` command allows a group member to leave the group. + +```bash +simd tx group leave-group [member-address] [group-id] +``` + +Example: + +```bash +simd tx group leave-group cosmos1.. 1 +``` + +### gRPC + +A user can query the `group` module using gRPC endpoints. + +#### GroupInfo + +The `GroupInfo` endpoint allows users to query for group info by given group id. + +```bash +cosmos.group.v1.Query/GroupInfo +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":1}' localhost:9090 cosmos.group.v1.Query/GroupInfo +``` + +Example Output: + +```bash +{ + "info": { + "groupId": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + } +} +``` + +#### GroupPolicyInfo + +The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy. + +```bash +cosmos.group.v1.Query/GroupPolicyInfo +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPolicyInfo +``` + +Example Output: + +```bash +{ + "info": { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows": {"voting_period": "120h", "min_execution_period": "0s"}} + } +} +``` + +#### GroupMembers + +The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags.
+ +```bash +cosmos.group.v1.Query/GroupMembers +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupMembers +``` + +Example Output: + +```bash expandable +{ + "members": [ + { + "groupId": "1", + "member": { + "address": "cosmos1..", + "weight": "1" + } + }, + { + "groupId": "1", + "member": { + "address": "cosmos1..", + "weight": "2" + } + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupsByAdmin + +The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags. + +```bash +cosmos.group.v1.Query/GroupsByAdmin +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupsByAdmin +``` + +Example Output: + +```bash expandable +{ + "groups": [ + { + "groupId": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + }, + { + "groupId": "2", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "totalWeight": "3" + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupPoliciesByGroup + +The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags. 
+ +```bash +cosmos.group.v1.Query/GroupPoliciesByGroup +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByGroup +``` + +Example Output: + +```bash expandable +{ + "GroupPolicies": [ + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}} + }, + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}} + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### GroupPoliciesByAdmin + +The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags. + +```bash +cosmos.group.v1.Query/GroupPoliciesByAdmin +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByAdmin +``` + +Example Output: + +```bash expandable +{ + "GroupPolicies": [ + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}} + }, + { + "address": "cosmos1..", + "groupId": "1", + "admin": "cosmos1..", + "version": "1", + "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}} + } + ], + "pagination": { + "total": "2" + } +} +``` + +#### Proposal + +The `Proposal` endpoint allows users to query for proposal by id.
+ +```bash +cosmos.group.v1.Query/Proposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/Proposal +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposalId": "1", + "address": "cosmos1..", + "proposers": [ + "cosmos1.." + ], + "submittedAt": "2021-12-17T07:06:26.310638964Z", + "groupVersion": "1", + "GroupPolicyVersion": "1", + "status": "STATUS_SUBMITTED", + "result": "RESULT_UNFINALIZED", + "voteState": { + "yesCount": "0", + "noCount": "0", + "abstainCount": "0", + "vetoCount": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executorResult": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "title": "Title", + "summary": "Summary", + } +} +``` + +#### ProposalsByGroupPolicy + +The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags. + +```bash +cosmos.group.v1.Query/ProposalsByGroupPolicy +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/ProposalsByGroupPolicy +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "proposalId": "1", + "address": "cosmos1..", + "proposers": [ + "cosmos1.." 
+ ], + "submittedAt": "2021-12-17T08:03:27.099649352Z", + "groupVersion": "1", + "GroupPolicyVersion": "1", + "status": "STATUS_CLOSED", + "result": "RESULT_ACCEPTED", + "voteState": { + "yesCount": "1", + "noCount": "0", + "abstainCount": "0", + "vetoCount": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executorResult": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} + ], + "title": "Title", + "summary": "Summary", + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### VoteByProposalVoter + +The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address. + +```bash +cosmos.group.v1.Query/VoteByProposalVoter +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1","voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VoteByProposalVoter +``` + +Example Output: + +```bash +{ + "vote": { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } +} +``` + +#### VotesByProposal + +The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags. + +```bash +cosmos.group.v1.Query/VotesByProposal +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/VotesByProposal +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### VotesByVoter + +The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags. 
+ +```bash +cosmos.group.v1.Query/VotesByVoter +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VotesByVoter +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposalId": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "submittedAt": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "total": "1" + } +} +``` + +### REST + +A user can query the `group` module using REST endpoints. + +#### GroupInfo + +The `GroupInfo` endpoint allows users to query for group info by given group id. + +```bash +/cosmos/group/v1/group_info/{group_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_info/1 +``` + +Example Output: + +```bash +{ + "info": { + "id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "total_weight": "3" + } +} +``` + +#### GroupPolicyInfo + +The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy. + +```bash +/cosmos/group/v1/group_policy_info/{address} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policy_info/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "info": { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + } + } +} +``` + +#### GroupMembers + +The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags.
+ +```bash +/cosmos/group/v1/group_members/{group_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_members/1 +``` + +Example Output: + +```bash expandable +{ + "members": [ + { + "group_id": "1", + "member": { + "address": "cosmos1..", + "weight": "1", + "metadata": "AQ==" + } + }, + { + "group_id": "1", + "member": { + "address": "cosmos1..", + "weight": "2", + "metadata": "AQ==" + } + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### GroupsByAdmin + +The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags. + +```bash +/cosmos/group/v1/groups_by_admin/{admin} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/groups_by_admin/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "groups": [ + { + "id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "total_weight": "3" + }, + { + "id": "2", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "total_weight": "3" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### GroupPoliciesByGroup + +The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags.
+ +```bash +/cosmos/group/v1/group_policies_by_group/{group_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policies_by_group/1 +``` + +Example Output: + +```bash expandable +{ + "group_policies": [ + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + }, + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + }, + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### GroupPoliciesByAdmin + +The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags. + +```bash +/cosmos/group/v1/group_policies_by_admin/{admin} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/group_policies_by_admin/cosmos1.. 
+``` + +Example Output: + +```bash expandable +{ + "group_policies": [ + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + } + }, + { + "address": "cosmos1..", + "group_id": "1", + "admin": "cosmos1..", + "metadata": "AQ==", + "version": "1", + "decision_policy": { + "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", + "threshold": "1", + "windows": { + "voting_period": "120h", + "min_execution_period": "0s" + } + } + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### Proposal + +The `Proposal` endpoint allows users to query for proposal by id. + +```bash +/cosmos/group/v1/proposal/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/proposal/1 +``` + +Example Output: + +```bash expandable +{ + "proposal": { + "proposal_id": "1", + "address": "cosmos1..", + "metadata": "AQ==", + "proposers": [ + "cosmos1.." + ], + "submitted_at": "2021-12-17T07:06:26.310638964Z", + "group_version": "1", + "group_policy_version": "1", + "status": "STATUS_SUBMITTED", + "result": "RESULT_UNFINALIZED", + "vote_state": { + "yes_count": "0", + "no_count": "0", + "abstain_count": "0", + "veto_count": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executor_result": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "100000000" + } + ] + } + ], + "title": "Title", + "summary": "Summary" + } +} +``` + +#### ProposalsByGroupPolicy + +The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags.
+ +```bash +/cosmos/group/v1/proposals_by_group_policy/{address} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/proposals_by_group_policy/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "proposals": [ + { + "id": "1", + "group_policy_address": "cosmos1..", + "metadata": "AQ==", + "proposers": [ + "cosmos1.." + ], + "submit_time": "2021-12-17T08:03:27.099649352Z", + "group_version": "1", + "group_policy_version": "1", + "status": "STATUS_CLOSED", + "result": "RESULT_ACCEPTED", + "vote_state": { + "yes_count": "1", + "no_count": "0", + "abstain_count": "0", + "veto_count": "0" + }, + "windows": { + "min_execution_period": "0s", + "voting_period": "432000s" + }, + "executor_result": "EXECUTOR_RESULT_NOT_RUN", + "messages": [ + { + "@type": "/cosmos.bank.v1beta1.MsgSend", + "from_address": "cosmos1..", + "to_address": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "100000000" + } + ] + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### VoteByProposalVoter + +The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address. + +```bash +/cosmos/group/v1/vote_by_proposal_voter/{proposal_id}/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/vote_by_proposal_voter/1/cosmos1.. +``` + +Example Output: + +```bash +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "metadata": "AQ==", + "submitted_at": "2021-12-17T08:05:02.490164009Z" + } +} +``` + +#### VotesByProposal + +The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags.
+ +```bash +/cosmos/group/v1/votes_by_proposal/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/votes_by_proposal/1 +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "CHOICE_YES", + "metadata": "AQ==", + "submit_time": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### VotesByVoter + +The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags. + +```bash +/cosmos/group/v1/votes_by_voter/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/group/v1/votes_by_voter/cosmos1.. +``` + +Example Output: + +```bash expandable +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "choice": "CHOICE_YES", + "metadata": "AQ==", + "submitted_at": "2021-12-17T08:05:02.490164009Z" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +## Metadata + +The group module has four locations for metadata where users can provide further context about the on-chain actions they are taking. By default all metadata fields have a 255 character length field where metadata can be stored in json format, either on-chain or off-chain depending on the amount of data required. Here we provide a recommendation for the json structure and where the data should be stored. There are two important factors in making these recommendations. First, that the group and gov modules are consistent with one another, note the number of proposals made by all groups may be quite large. Second, that client applications such as block explorers and governance interfaces have confidence in the consistency of metadata structure across chains. 
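Since on-chain metadata fields default to a 255-character limit, clients can validate serialized metadata before broadcasting a transaction. A minimal Go sketch, assuming the default limit and a vote-justification shape; the `voteMetadata` struct and `checkMetadata` helper are illustrative, not SDK APIs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// defaultMaxMetadataLen mirrors the group module's default 255-character
// metadata limit described above (the actual limit is a chain parameter).
const defaultMaxMetadataLen = 255

// voteMetadata follows the recommended on-chain vote metadata structure.
type voteMetadata struct {
	Justification string `json:"justification"`
}

// checkMetadata serializes the metadata and reports whether it fits
// within the default on-chain limit.
func checkMetadata(m voteMetadata) (string, bool, error) {
	bz, err := json.Marshal(m)
	if err != nil {
		return "", false, err
	}
	return string(bz), len(bz) <= defaultMaxMetadataLen, nil
}

func main() {
	md, ok, err := checkMetadata(voteMetadata{
		Justification: "Spend is within the approved budget.",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("metadata=%s fits=%t\n", md, ok)
}
```

A client could run such a check before submitting `simd tx group vote`; metadata over the configured limit would be rejected by the module's length validation.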
+ +### Proposal + +Location: off-chain as json object stored on IPFS (mirrors [gov proposal](/docs/sdk/v0.53/documentation/module-system/gov#metadata)) + +```json +{ + "title": "", + "authors": [""], + "summary": "", + "details": "", + "proposal_forum_url": "", + "vote_option_context": "" +} +``` + + +The `authors` field is an array of strings; this allows multiple authors to be listed in the metadata. +In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility. + + +### Vote + +Location: on-chain as json within 255 character limit (mirrors [gov vote](/docs/sdk/v0.53/documentation/module-system/gov#metadata)) + +```json +{ + "justification": "" +} +``` + +### Group + +Location: off-chain as json object stored on IPFS + +```json +{ + "name": "", + "description": "", + "group_website_url": "", + "group_forum_url": "" +} +``` + +### Decision policy + +Location: on-chain as json within 255 character limit + +```json +{ + "name": "", + "description": "" +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/intro.mdx b/docs/sdk/v0.53/documentation/module-system/intro.mdx new file mode 100644 index 00000000..61c3e8dd --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/intro.mdx @@ -0,0 +1,303 @@ +--- +title: Introduction to Cosmos SDK Modules +--- + +## Synopsis + +Modules define most of the logic of Cosmos SDK applications. Developers compose modules together using the Cosmos SDK to build their custom application-specific blockchains. This document outlines the basic concepts behind SDK modules and how to approach module management.
+ + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy) +- [Lifecycle of a Cosmos SDK transaction](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle) + + + +## Role of Modules in a Cosmos SDK Application + +The Cosmos SDK can be thought of as the Ruby-on-Rails of blockchain development. It comes with a core that provides the basic functionalities every blockchain application needs, like a [boilerplate implementation of the ABCI](/docs/sdk/v0.53/documentation/application-framework/baseapp) to communicate with the underlying consensus engine, a [`multistore`](/docs/sdk/v0.53/documentation/state-storage/store#multistore) to persist state, a [server](/docs/sdk/v0.53/documentation/operations/node) to form a full-node and [interfaces](/docs/sdk/v0.53/documentation/module-system/module-interfaces) to handle queries. + +On top of this core, the Cosmos SDK enables developers to build modules that implement the business logic of their application. In other words, SDK modules implement the bulk of the logic of applications, while the core does the wiring and enables modules to be composed together. The end goal is to build a robust ecosystem of open-source Cosmos SDK modules, making it increasingly easier to build complex blockchain applications. + +Cosmos SDK modules can be seen as little state-machines within the state-machine. They generally define a subset of the state using one or more `KVStore`s in the [main multistore](/docs/sdk/v0.53/documentation/state-storage/store), as well as a subset of [message types](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages). These messages are routed by one of the main components of Cosmos SDK core, [`BaseApp`](/docs/sdk/v0.53/documentation/application-framework/baseapp), to a module Protobuf [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services) that defines them. 
+ +```mermaid expandable +flowchart TD + A[Transaction relayed from the full-node's consensus engine to the node's application via DeliverTx] + A --> B[APPLICATION] + B --> C["Using baseapp's methods: Decode the Tx, extract and route the message(s)"] + C --> D[Message routed to the correct module to be processed] + D --> E[AUTH MODULE] + D --> F[BANK MODULE] + D --> G[STAKING MODULE] + D --> H[GOV MODULE] + H --> I[Handles message, Updates state] + E --> I + F --> I + G --> I + I --> J["Return result to the underlying consensus engine (e.g. CometBFT) (0=Ok, 1=Err)"] +``` + +As a result of this architecture, building a Cosmos SDK application usually revolves around writing modules to implement the specialized logic of the application and composing them with existing modules to complete the application. Developers will generally work on modules that implement logic needed for their specific use case that does not exist yet, and will use existing modules for more generic functionalities like staking, accounts, or token management. + +### Modules as super-users + +Modules have the ability to perform actions that are not available to regular users. This is because modules are given sudo permissions by the state machine. Modules can reject another module's request to execute a function, but this logic must be explicit. Examples of this can be seen when modules create functions to modify parameters: + +```go expandable +package keeper + +import ( + + "context" + "github.com/hashicorp/go-metrics" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/x/bank/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +type msgServer struct { + Keeper +} + +var _ types.MsgServer = msgServer{ +} + +/ NewMsgServerImpl returns an implementation of the bank MsgServer interface +/ for the provided Keeper.
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + var ( + from, to []byte + err error + ) + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !msg.Amount.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if !msg.Amount.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(ctx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + if len(msg.Inputs) == 0 { + return nil, types.ErrNoInputs +} + if len(msg.Inputs) != 1 { + return nil, types.ErrMultipleSenders +} + if len(msg.Outputs) == 0 { + return nil, types.ErrNoOutputs +} + if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err != nil { + return nil, err +} + + / 
NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + if base, ok := k.Keeper.(BaseKeeper); ok { + accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) + if err != nil { + return nil, err +} + if k.BlockedAddr(accAddr) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(ctx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, errorsmod.Wrapf(types.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) +} + if err := req.Params.Validate(); err != nil { + return nil, err +} + if err := k.SetParams(ctx, req.Params); err != nil { + return nil, err +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} + +func (k msgServer) + +SetSendEnabled(ctx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { + if k.GetAuthority() != msg.Authority { + return nil, errorsmod.Wrapf(types.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) +} + seen := map[string]bool{ +} + for _, se := range msg.SendEnabled { + if _, alreadySeen := seen[se.Denom]; alreadySeen { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) +} + +seen[se.Denom] = true + if err := se.Validate(); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err) +} + +} + for _, denom := range 
msg.UseDefaultFor { + if err := sdk.ValidateDenom(denom); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err) +} + +} + if len(msg.SendEnabled) > 0 { + k.SetAllSendEnabled(ctx, msg.SendEnabled) +} + if len(msg.UseDefaultFor) > 0 { + k.DeleteSendEnabled(ctx, msg.UseDefaultFor...) +} + +return &types.MsgSetSendEnabledResponse{ +}, nil +} + +func (k msgServer) + +Burn(goCtx context.Context, msg *types.MsgBurn) (*types.MsgBurnResponse, error) { + var ( + from []byte + err error + ) + +var coins sdk.Coins + for _, coin := range msg.Amount { + coins = coins.Add(sdk.NewCoin(coin.Denom, coin.Amount)) +} + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !coins.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, coins.String()) +} + if !coins.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, coins.String()) +} + +err = k.BurnCoins(goCtx, from, coins) + if err != nil { + return nil, err +} + +return &types.MsgBurnResponse{ +}, nil +} +``` + +## How to Approach Building Modules as a Developer + +While there are no definitive guidelines for writing modules, here are some important design principles developers should keep in mind when building them: + +- **Composability**: Cosmos SDK applications are almost always composed of multiple modules. This means developers need to carefully consider the integration of their module not only with the core of the Cosmos SDK, but also with other modules. 
The former is achieved by following standard design patterns outlined [here](#main-components-of-cosmos-sdk-modules), while the latter is achieved by properly exposing the store(s) of the module via the [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper). +- **Specialization**: A direct consequence of the **composability** feature is that modules should be **specialized**. Developers should carefully establish the scope of their module and not batch multiple functionalities into the same module. This separation of concerns enables modules to be re-used in other projects and improves the upgradability of the application. **Specialization** also plays an important role in the [object-capabilities model](/docs/sdk/v0.53/documentation/core-concepts/ocap) of the Cosmos SDK. +- **Capabilities**: Most modules need to read and/or write to the store(s) of other modules. However, in an open-source environment, it is possible for some modules to be malicious. That is why module developers need to carefully think not only about how their module interacts with other modules, but also about how to give access to the module's store(s). The Cosmos SDK takes a capabilities-oriented approach to inter-module security. This means that each store defined by a module is accessed by a `key`, which is held by the module's [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper). This `keeper` defines how to access the store(s) and under what conditions. Access to the module's store(s) is done by passing a reference to the module's `keeper`. + +## Main Components of Cosmos SDK Modules + +Modules are by convention defined in the `./x/` subfolder (e.g. the `bank` module will be defined in the `./x/bank` folder). They generally share the same core components: + +- A [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper), used to access the module's store(s) and update the state. 
+- A [`Msg` service](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages), used to process messages when they are routed to the module by [`BaseApp`](/docs/sdk/v0.53/documentation/application-framework/baseapp#message-routing) and trigger state-transitions. +- A [query service](/docs/sdk/v0.53/documentation/module-system/query-services), used to process user queries when they are routed to the module by [`BaseApp`](/docs/sdk/v0.53/documentation/application-framework/baseapp#query-routing). +- Interfaces, for end users to query the subset of the state defined by the module and create `message`s of the custom types defined in the module. + +In addition to these components, modules implement the `AppModule` interface in order to be managed by the [`module manager`](/docs/sdk/v0.53/documentation/module-system/module-manager). + +Please refer to the [structure document](/docs/sdk/v0.53/documentation/module-system/structure) to learn about the recommended structure of a module's directory. diff --git a/docs/sdk/v0.53/documentation/module-system/invariants.mdx b/docs/sdk/v0.53/documentation/module-system/invariants.mdx new file mode 100644 index 00000000..1ac27ffc --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/invariants.mdx @@ -0,0 +1,528 @@ +--- +title: Invariants +--- + +## Synopsis + +An invariant is a property of the application that should always be true. In the context of the Cosmos SDK, an `Invariant` is a function that checks for a particular invariant. These functions are useful to detect bugs early on and act upon them to limit their potential consequences (e.g. by halting the chain). They are also useful in the development process of the application to detect bugs via simulations. + + +**Pre-requisite Readings** + +- [Keepers](/docs/sdk/v0.53/documentation/module-system/keeper) + + + +## Implementing `Invariant`s + +An `Invariant` is a function that checks for a particular invariant within a module. 
Module `Invariant`s must follow the `Invariant` type: + +```go expandable +package types + +import "fmt" + +/ An Invariant is a function which tests a particular invariant. +/ The invariant returns a descriptive message about what happened +/ and a boolean indicating whether the invariant has been broken. +/ The simulator will then halt and print the logs. +type Invariant func(ctx Context) (string, bool) + +/ Invariants defines a group of invariants +type Invariants []Invariant + +/ expected interface for registering invariants +type InvariantRegistry interface { + RegisterRoute(moduleName, route string, invar Invariant) +} + +/ FormatInvariant returns a standardized invariant message. +func FormatInvariant(module, name, msg string) + +string { + return fmt.Sprintf("%s: %s invariant\n%s\n", module, name, msg) +} +``` + +The `string` return value is the invariant message, which can be used when printing logs, and the `bool` return value is the actual result of the invariant check. + +In practice, each module implements `Invariant`s in a `keeper/invariants.go` file within the module's folder. The standard is to implement one `Invariant` function per logical grouping of invariants with the following model: + +```go +/ Example for an Invariant that checks balance-related invariants + +func BalanceInvariants(k Keeper) + +sdk.Invariant { + return func(ctx context.Context) (string, bool) { + / Implement checks for balance-related invariants +} +} +``` + +Additionally, module developers should generally implement an `AllInvariants` function that runs all the `Invariant`s functions of the module: + +```go expandable +/ AllInvariants runs all invariants of the module. 
+/ In this example, the module implements two Invariants: BalanceInvariants and DepositsInvariants + +func AllInvariants(k Keeper) + +sdk.Invariant { + return func(ctx context.Context) (string, bool) { + res, stop := BalanceInvariants(k)(ctx) + if stop { + return res, stop +} + +return DepositsInvariant(k)(ctx) +} +} +``` + +Finally, module developers need to implement the `RegisterInvariants` method as part of the [`AppModule` interface](/docs/sdk/v0.53/documentation/module-system/module-manager#appmodule). Indeed, the `RegisterInvariants` method of the module, implemented in the `module/module.go` file, typically only defers the call to a `RegisterInvariants` method implemented in the `keeper/invariants.go` file. The `RegisterInvariants` method registers a route for each `Invariant` function in the [`InvariantRegistry`](#invariant-registry): + +```go expandable +package keeper + +import ( + + "bytes" + "fmt" + "cosmossdk.io/math" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ RegisterInvariants registers all staking invariants +func RegisterInvariants(ir sdk.InvariantRegistry, k *Keeper) { + ir.RegisterRoute(types.ModuleName, "module-accounts", + ModuleAccountInvariants(k)) + +ir.RegisterRoute(types.ModuleName, "nonnegative-power", + NonNegativePowerInvariant(k)) + +ir.RegisterRoute(types.ModuleName, "positive-delegation", + PositiveDelegationInvariant(k)) + +ir.RegisterRoute(types.ModuleName, "delegator-shares", + DelegatorSharesInvariant(k)) +} + +/ AllInvariants runs all invariants of the staking module. 
+func AllInvariants(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + res, stop := ModuleAccountInvariants(k)(ctx) + if stop { + return res, stop +} + +res, stop = NonNegativePowerInvariant(k)(ctx) + if stop { + return res, stop +} + +res, stop = PositiveDelegationInvariant(k)(ctx) + if stop { + return res, stop +} + +return DelegatorSharesInvariant(k)(ctx) +} +} + +/ ModuleAccountInvariants checks that the bonded and notBonded ModuleAccounts pools +/ reflects the tokens actively bonded and not bonded +func ModuleAccountInvariants(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + bonded := math.ZeroInt() + notBonded := math.ZeroInt() + bondedPool := k.GetBondedPool(ctx) + notBondedPool := k.GetNotBondedPool(ctx) + bondDenom := k.BondDenom(ctx) + +k.IterateValidators(ctx, func(_ int64, validator types.ValidatorI) + +bool { + switch validator.GetStatus() { + case types.Bonded: + bonded = bonded.Add(validator.GetTokens()) + case types.Unbonding, types.Unbonded: + notBonded = notBonded.Add(validator.GetTokens()) + +default: + panic("invalid validator status") +} + +return false +}) + +k.IterateUnbondingDelegations(ctx, func(_ int64, ubd types.UnbondingDelegation) + +bool { + for _, entry := range ubd.Entries { + notBonded = notBonded.Add(entry.Balance) +} + +return false +}) + poolBonded := k.bankKeeper.GetBalance(ctx, bondedPool.GetAddress(), bondDenom) + poolNotBonded := k.bankKeeper.GetBalance(ctx, notBondedPool.GetAddress(), bondDenom) + broken := !poolBonded.Amount.Equal(bonded) || !poolNotBonded.Amount.Equal(notBonded) + + / Bonded tokens should equal sum of tokens with bonded validators + / Not-bonded tokens should equal unbonding delegations plus tokens on unbonded validators + return sdk.FormatInvariant(types.ModuleName, "bonded and not bonded module account coins", fmt.Sprintf( + "\tPool's bonded tokens: %v\n"+ + "\tsum of bonded tokens: %v\n"+ + "not bonded token invariance:\n"+ + "\tPool's not 
bonded tokens: %v\n"+ + "\tsum of not bonded tokens: %v\n"+ + "module accounts total (bonded + not bonded):\n"+ + "\tModule Accounts' tokens: %v\n"+ + "\tsum tokens: %v\n", + poolBonded, bonded, poolNotBonded, notBonded, poolBonded.Add(poolNotBonded), bonded.Add(notBonded))), broken +} +} + +/ NonNegativePowerInvariant checks that all stored validators have >= 0 power. +func NonNegativePowerInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + broken bool + ) + iterator := k.ValidatorsPowerStoreIterator(ctx) + for ; iterator.Valid(); iterator.Next() { + validator, found := k.GetValidator(ctx, iterator.Value()) + if !found { + panic(fmt.Sprintf("validator record not found for address: %X\n", iterator.Value())) +} + powerKey := types.GetValidatorsByPowerIndexKey(validator, k.PowerReduction(ctx)) + if !bytes.Equal(iterator.Key(), powerKey) { + broken = true + msg += fmt.Sprintf("power store invariance:\n\tvalidator.Power: %v"+ + "\n\tkey should be: %v\n\tkey in store: %v\n", + validator.GetConsensusPower(k.PowerReduction(ctx)), powerKey, iterator.Key()) +} + if validator.Tokens.IsNegative() { + broken = true + msg += fmt.Sprintf("\tnegative tokens for validator: %v\n", validator) +} + +} + +iterator.Close() + +return sdk.FormatInvariant(types.ModuleName, "nonnegative power", fmt.Sprintf("found invalid validator powers\n%s", msg)), broken +} +} + +/ PositiveDelegationInvariant checks that all stored delegations have > 0 shares. 
+func PositiveDelegationInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + count int + ) + delegations := k.GetAllDelegations(ctx) + for _, delegation := range delegations { + if delegation.Shares.IsNegative() { + count++ + msg += fmt.Sprintf("\tdelegation with negative shares: %+v\n", delegation) +} + if delegation.Shares.IsZero() { + count++ + msg += fmt.Sprintf("\tdelegation with zero shares: %+v\n", delegation) +} + +} + broken := count != 0 + + return sdk.FormatInvariant(types.ModuleName, "positive delegations", fmt.Sprintf( + "%d invalid delegations found\n%s", count, msg)), broken +} +} + +/ DelegatorSharesInvariant checks whether all the delegator shares which persist +/ in the delegator object add up to the correct total delegator shares +/ amount stored in each validator. +func DelegatorSharesInvariant(k *Keeper) + +sdk.Invariant { + return func(ctx sdk.Context) (string, bool) { + var ( + msg string + broken bool + ) + validators := k.GetAllValidators(ctx) + validatorsDelegationShares := map[string]math.LegacyDec{ +} + + / initialize a map: validator -> its delegation shares + for _, validator := range validators { + validatorsDelegationShares[validator.GetOperator().String()] = math.LegacyZeroDec() +} + + / iterate through all the delegations to calculate the total delegation shares for each validator + delegations := k.GetAllDelegations(ctx) + for _, delegation := range delegations { + delegationValidatorAddr := delegation.GetValidatorAddr().String() + validatorDelegationShares := validatorsDelegationShares[delegationValidatorAddr] + validatorsDelegationShares[delegationValidatorAddr] = validatorDelegationShares.Add(delegation.Shares) +} + + / for each validator, check if its total delegation shares calculated from the step above equals to its expected delegation shares + for _, validator := range validators { + expValTotalDelShares := validator.GetDelegatorShares() + 
calculatedValTotalDelShares := validatorsDelegationShares[validator.GetOperator().String()] + if !calculatedValTotalDelShares.Equal(expValTotalDelShares) { + broken = true + msg += fmt.Sprintf("broken delegator shares invariance:\n"+ + "\tvalidator.DelegatorShares: %v\n"+ + "\tsum of Delegator.Shares: %v\n", expValTotalDelShares, calculatedValTotalDelShares) +} + +} + +return sdk.FormatInvariant(types.ModuleName, "delegator shares", msg), broken +} +} +``` + +For more, see an example of [`Invariant`s implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/invariants.go). + +## Invariant Registry + +The `InvariantRegistry` is a registry where the `Invariant`s of all the modules of an application are registered. There is only one `InvariantRegistry` per **application**, meaning module developers need not implement their own `InvariantRegistry` when building a module. **All module developers need to do is to register their modules' invariants in the `InvariantRegistry`, as explained in the section above**. The rest of this section gives more information on the `InvariantRegistry` itself, and does not contain anything directly relevant to module developers. + +At its core, the `InvariantRegistry` is defined in the Cosmos SDK as an interface: + +```go expandable +package types + +import "fmt" + +/ An Invariant is a function which tests a particular invariant. +/ The invariant returns a descriptive message about what happened +/ and a boolean indicating whether the invariant has been broken. +/ The simulator will then halt and print the logs. +type Invariant func(ctx Context) (string, bool) + +/ Invariants defines a group of invariants +type Invariants []Invariant + +/ expected interface for registering invariants +type InvariantRegistry interface { + RegisterRoute(moduleName, route string, invar Invariant) +} + +/ FormatInvariant returns a standardized invariant message. 
+func FormatInvariant(module, name, msg string) + +string { + return fmt.Sprintf("%s: %s invariant\n%s\n", module, name, msg) +} +``` + +Typically, this interface is implemented in the `keeper` of a specific module. The most used implementation of an `InvariantRegistry` can be found in the `crisis` module: + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "time" + "cosmossdk.io/collections" + "cosmossdk.io/core/address" + "cosmossdk.io/log" + + storetypes "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/crisis/types" +) + +/ Keeper - crisis keeper +type Keeper struct { + routes []types.InvarRoute + invCheckPeriod uint + storeService storetypes.KVStoreService + cdc codec.BinaryCodec + + / the address capable of executing a MsgUpdateParams message. Typically, this + / should be the x/gov module account. + authority string + + supplyKeeper types.SupplyKeeper + + feeCollectorName string / name of the FeeCollector ModuleAccount + + addressCodec address.Codec + + Schema collections.Schema + ConstantFee collections.Item[sdk.Coin] +} + +/ NewKeeper creates a new Keeper object +func NewKeeper( + cdc codec.BinaryCodec, storeService storetypes.KVStoreService, invCheckPeriod uint, + supplyKeeper types.SupplyKeeper, feeCollectorName, authority string, ac address.Codec, +) *Keeper { + sb := collections.NewSchemaBuilder(storeService) + k := &Keeper{ + storeService: storeService, + cdc: cdc, + routes: make([]types.InvarRoute, 0), + invCheckPeriod: invCheckPeriod, + supplyKeeper: supplyKeeper, + feeCollectorName: feeCollectorName, + authority: authority, + addressCodec: ac, + ConstantFee: collections.NewItem(sb, types.ConstantFeeKey, "constant_fee", codec.CollValue[sdk.Coin](/docs/sdk/v0.53/documentation/module-system/cdc)), +} + +schema, err := sb.Build() + if err != nil { + panic(err) +} + +k.Schema = schema + return k +} + +/ GetAuthority returns the x/crisis 
module's authority. +func (k *Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ Logger returns a module-specific logger. +func (k *Keeper) + +Logger(ctx context.Context) + +log.Logger { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +return sdkCtx.Logger().With("module", "x/"+types.ModuleName) +} + +/ RegisterRoute register the routes for each of the invariants +func (k *Keeper) + +RegisterRoute(moduleName, route string, invar sdk.Invariant) { + invarRoute := types.NewInvarRoute(moduleName, route, invar) + +k.routes = append(k.routes, invarRoute) +} + +/ Routes - return the keeper's invariant routes +func (k *Keeper) + +Routes() []types.InvarRoute { + return k.routes +} + +/ Invariants returns a copy of all registered Crisis keeper invariants. +func (k *Keeper) + +Invariants() []sdk.Invariant { + invars := make([]sdk.Invariant, len(k.routes)) + for i, route := range k.routes { + invars[i] = route.Invar +} + +return invars +} + +/ AssertInvariants asserts all registered invariants. If any invariant fails, +/ the method panics. +func (k *Keeper) + +AssertInvariants(ctx sdk.Context) { + logger := k.Logger(ctx) + start := time.Now() + invarRoutes := k.Routes() + n := len(invarRoutes) + for i, ir := range invarRoutes { + logger.Info("asserting crisis invariants", "inv", fmt.Sprint(i+1, "/", n), "name", ir.FullRoute()) + +invCtx, _ := ctx.CacheContext() + if res, stop := ir.Invar(invCtx); stop { + / TODO: Include app name as part of context to allow for this to be + / variable. + panic(fmt.Errorf("invariant broken: %s\n"+ + "\tCRITICAL please submit the following transaction:\n"+ + "\t\t tx crisis invariant-broken %s %s", res, ir.ModuleName, ir.Route)) +} + +} + diff := time.Since(start) + +logger.Info("asserted all invariants", "duration", diff, "height", ctx.BlockHeight()) +} + +/ InvCheckPeriod returns the invariant checks period. 
+func (k *Keeper) + +InvCheckPeriod() + +uint { + return k.invCheckPeriod +} + +/ SendCoinsFromAccountToFeeCollector transfers amt to the fee collector account. +func (k *Keeper) + +SendCoinsFromAccountToFeeCollector(ctx context.Context, senderAddr sdk.AccAddress, amt sdk.Coins) + +error { + return k.supplyKeeper.SendCoinsFromAccountToModule(ctx, senderAddr, k.feeCollectorName, amt) +} +``` + +The `InvariantRegistry` is therefore typically instantiated by instantiating the `keeper` of the `crisis` module in the [application's constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function). + +`Invariant`s can be checked manually via [`message`s](/docs/sdk/v0.53/documentation/module-system/messages-and-queries), but most often they are checked automatically at the end of each block. Here is an example from the `crisis` module: + +```go expandable +package crisis + +import ( + + "context" + "time" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/crisis/keeper" + "github.com/cosmos/cosmos-sdk/x/crisis/types" +) + +/ check all registered invariants +func EndBlocker(ctx context.Context, k keeper.Keeper) { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker) + sdkCtx := sdk.UnwrapSDKContext(ctx) + if k.InvCheckPeriod() == 0 || sdkCtx.BlockHeight()%int64(k.InvCheckPeriod()) != 0 { + / skip running the invariant check + return +} + +k.AssertInvariants(sdkCtx) +} +``` + +In both cases, if one of the `Invariant`s returns false, the `InvariantRegistry` can trigger special logic (e.g. have the application panic and print the `Invariant`s message in the log). 
diff --git a/docs/sdk/v0.53/documentation/module-system/keeper.mdx b/docs/sdk/v0.53/documentation/module-system/keeper.mdx new file mode 100644 index 00000000..b918cf6c --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/keeper.mdx @@ -0,0 +1,370 @@ +--- +title: Keepers +--- + +## Synopsis + +`Keeper`s refer to a Cosmos SDK abstraction whose role is to manage access to the subset of the state defined by various modules. `Keeper`s are module-specific, i.e. the subset of state defined by a module can only be accessed by a `keeper` defined in said module. If a module needs to access the subset of state defined by another module, a reference to the second module's internal `keeper` needs to be passed to the first one. This is done in `app.go` during the instantiation of module keepers. + + +**Pre-requisite Readings** + +- [Introduction to Cosmos SDK Modules](/docs/sdk/v0.53/documentation/module-system/intro) + + + +## Motivation + +The Cosmos SDK is a framework that makes it easy for developers to build complex decentralized applications from scratch, mainly by composing modules together. As the ecosystem of open-source modules for the Cosmos SDK expands, it will become increasingly likely that some of these modules contain vulnerabilities, as a result of the negligence or malice of their developer. + +The Cosmos SDK adopts an [object-capabilities-based approach](/docs/sdk/v0.53/documentation/core-concepts/ocap) to help developers better protect their application from unwanted inter-module interactions, and `keeper`s are at the core of this approach. A `keeper` can be considered quite literally to be the gatekeeper of a module's store(s). Each store (typically an [`IAVL` Store](/docs/sdk/v0.50/learn/advanced/store#iavl-store)) defined within a module comes with a `storeKey`, which grants unlimited access to it. 
The module's `keeper` holds this `storeKey` (which should otherwise remain unexposed), and defines [methods](#implementing-methods) for reading and writing to the store(s). + +The core idea behind the object-capabilities approach is to only reveal what is necessary to get the work done. In practice, this means that instead of handling permissions of modules through access-control lists, module `keeper`s are passed a reference to the specific instance of the other modules' `keeper`s that they need to access (this is done in the [application's constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function)). As a consequence, a module can only interact with the subset of state defined in another module via the methods exposed by the instance of the other module's `keeper`. This is a great way for developers to control the interactions that their own module can have with modules developed by external developers. + +## Type Definition + +`keeper`s are generally implemented in a `/keeper/keeper.go` file located in the module's folder. 
By convention, the type `keeper` of a module is simply named `Keeper` and usually follows the following structure: + +```go +type Keeper struct { + / External keepers, if any + + / Store key(s) + + / codec + + / authority +} +``` + +For example, here is the type definition of the `keeper` from the `staking` module: + +```go expandable +package keeper + +import ( + + "fmt" + "cosmossdk.io/log" + "cosmossdk.io/math" + abci "github.com/cometbft/cometbft/abci/types" + + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +/ Implements ValidatorSet interface +var _ types.ValidatorSet = Keeper{ +} + +/ Implements DelegationSet interface +var _ types.DelegationSet = Keeper{ +} + +/ Keeper of the x/staking store +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + authKeeper types.AccountKeeper + bankKeeper types.BankKeeper + hooks types.StakingHooks + authority string +} + +/ NewKeeper creates a new staking Keeper instance +func NewKeeper( + cdc codec.BinaryCodec, + key storetypes.StoreKey, + ak types.AccountKeeper, + bk types.BankKeeper, + authority string, +) *Keeper { + / ensure bonded and not bonded module accounts are set + if addr := ak.GetModuleAddress(types.BondedPoolName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.BondedPoolName)) +} + if addr := ak.GetModuleAddress(types.NotBondedPoolName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.NotBondedPoolName)) +} + + / ensure that authority is a valid AccAddress + if _, err := ak.AddressCodec().StringToBytes(authority); err != nil { + panic("authority is not a valid acc address") +} + +return &Keeper{ + storeKey: key, + cdc: cdc, + authKeeper: ak, + bankKeeper: bk, + hooks: nil, + authority: authority, +} +} + +/ Logger returns a module-specific logger. 
+func (k Keeper) + +Logger(ctx sdk.Context) + +log.Logger { + return ctx.Logger().With("module", "x/"+types.ModuleName) +} + +/ Hooks gets the hooks for staking *Keeper { + func (k *Keeper) + +Hooks() + +types.StakingHooks { + if k.hooks == nil { + / return a no-op implementation if no hooks are set + return types.MultiStakingHooks{ +} + +} + +return k.hooks +} + +/ SetHooks Set the validator hooks. In contrast to other receivers, this method must take a pointer due to nature +/ of the hooks interface and SDK start up sequence. +func (k *Keeper) + +SetHooks(sh types.StakingHooks) { + if k.hooks != nil { + panic("cannot set validator hooks twice") +} + +k.hooks = sh +} + +/ GetLastTotalPower Load the last total validator power. +func (k Keeper) + +GetLastTotalPower(ctx sdk.Context) + +math.Int { + store := ctx.KVStore(k.storeKey) + bz := store.Get(types.LastTotalPowerKey) + if bz == nil { + return math.ZeroInt() +} + ip := sdk.IntProto{ +} + +k.cdc.MustUnmarshal(bz, &ip) + +return ip.Int +} + +/ SetLastTotalPower Set the last total validator power. +func (k Keeper) + +SetLastTotalPower(ctx sdk.Context, power math.Int) { + store := ctx.KVStore(k.storeKey) + bz := k.cdc.MustMarshal(&sdk.IntProto{ + Int: power +}) + +store.Set(types.LastTotalPowerKey, bz) +} + +/ GetAuthority returns the x/staking module's authority. +func (k Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ SetValidatorUpdates sets the ABCI validator power updates for the current block. +func (k Keeper) + +SetValidatorUpdates(ctx sdk.Context, valUpdates []abci.ValidatorUpdate) { + store := ctx.KVStore(k.storeKey) + bz := k.cdc.MustMarshal(&types.ValidatorUpdates{ + Updates: valUpdates +}) + +store.Set(types.ValidatorUpdatesKey, bz) +} + +/ GetValidatorUpdates returns the ABCI validator power updates within the current block. 
+func (k Keeper) + +GetValidatorUpdates(ctx sdk.Context) []abci.ValidatorUpdate { + store := ctx.KVStore(k.storeKey) + bz := store.Get(types.ValidatorUpdatesKey) + +var valUpdates types.ValidatorUpdates + k.cdc.MustUnmarshal(bz, &valUpdates) + +return valUpdates.Updates +} +``` + +Let us go through the different parameters: + +- An expected `keeper` is a `keeper` external to a module that is required by the internal `keeper` of said module. External `keeper`s are listed in the internal `keeper`'s type definition as interfaces. These interfaces are themselves defined in an `expected_keepers.go` file in the root of the module's folder. In this context, interfaces are used to reduce the number of dependencies, as well as to facilitate the maintenance of the module itself. +- `storeKey`s grant access to the store(s) of the [multistore](/docs/sdk/v0.50/learn/advanced/store) managed by the module. They should always remain unexposed to external modules. +- `cdc` is the [codec](/docs/sdk/v0.53/documentation/protocol-development/encoding) used to marshal and unmarshal structs to/from `[]byte`. The `cdc` can be any of `codec.BinaryCodec`, `codec.JSONCodec` or `codec.Codec` based on your requirements. It can be either a proto or amino codec as long as it implements these interfaces. +- The authority listed is a module account or user account that has the right to change module-level parameters. Previously this was handled by the param module, which has been deprecated. + +Of course, it is possible to define different types of internal `keeper`s for the same module (e.g. a read-only `keeper`). Each type of `keeper` comes with its own constructor function, which is called from the [application's constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy). This is where `keeper`s are instantiated, and where developers make sure to pass correct instances of modules' `keeper`s to other modules that require them.
+ +## Implementing Methods + +`Keeper`s primarily expose getter and setter methods for the store(s) managed by their module. These methods should remain as simple as possible and strictly be limited to getting or setting the requested value, as validity checks should have already been performed by the [`Msg` server](/docs/sdk/v0.53/documentation/module-system/msg-services) when `keeper`s' methods are called. + +Typically, a _getter_ method will have the following signature: + +```go +func (k Keeper) Get(ctx context.Context, key string) returnType +``` + +and the method will go through the following steps: + +1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. Then it's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety. +2. If it exists, get the `[]byte` value stored at location `[]byte(key)` using the `Get(key []byte)` method of the store. +3. Unmarshal the retrieved value from `[]byte` to `returnType` using the codec `cdc`. Return the value. + +Similarly, a _setter_ method will have the following signature: + +```go +func (k Keeper) Set(ctx context.Context, key string, value valueType) +``` + +and the method will go through the following steps: + +1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. It's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety. +2. Marshal `value` to `[]byte` using the codec `cdc`. +3. Set the encoded value in the store at location `key` using the `Set(key []byte, value []byte)` method of the store. + +For more, see an example of `keeper`'s [methods implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/keeper.go).
+ +The [module `KVStore`](/docs/sdk/v0.50/learn/advanced/store#kvstore-and-commitkvstore-interfaces) also provides an `Iterator()` method which returns an `Iterator` object to iterate over a domain of keys. + +This is an example from the `auth` module to iterate accounts: + +```go expandable +package keeper + +import ( + + "context" + "errors" + "cosmossdk.io/collections" + + sdk "github.com/cosmos/cosmos-sdk/types" +) + +/ NewAccountWithAddress implements AccountKeeperI. +func (ak AccountKeeper) + +NewAccountWithAddress(ctx context.Context, addr sdk.AccAddress) + +sdk.AccountI { + acc := ak.proto() + err := acc.SetAddress(addr) + if err != nil { + panic(err) +} + +return ak.NewAccount(ctx, acc) +} + +/ NewAccount sets the next account number to a given account interface +func (ak AccountKeeper) + +NewAccount(ctx context.Context, acc sdk.AccountI) + +sdk.AccountI { + if err := acc.SetAccountNumber(ak.NextAccountNumber(ctx)); err != nil { + panic(err) +} + +return acc +} + +/ HasAccount implements AccountKeeperI. +func (ak AccountKeeper) + +HasAccount(ctx context.Context, addr sdk.AccAddress) + +bool { + has, _ := ak.Accounts.Has(ctx, addr) + +return has +} + +/ GetAccount implements AccountKeeperI. +func (ak AccountKeeper) + +GetAccount(ctx context.Context, addr sdk.AccAddress) + +sdk.AccountI { + acc, err := ak.Accounts.Get(ctx, addr) + if err != nil && !errors.Is(err, collections.ErrNotFound) { + panic(err) +} + +return acc +} + +/ GetAllAccounts returns all accounts in the accountKeeper. +func (ak AccountKeeper) + +GetAllAccounts(ctx context.Context) (accounts []sdk.AccountI) { + ak.IterateAccounts(ctx, func(acc sdk.AccountI) (stop bool) { + accounts = append(accounts, acc) + +return false +}) + +return accounts +} + +/ SetAccount implements AccountKeeperI. 
+func (ak AccountKeeper) + +SetAccount(ctx context.Context, acc sdk.AccountI) { + err := ak.Accounts.Set(ctx, acc.GetAddress(), acc) + if err != nil { + panic(err) +} +} + +/ RemoveAccount removes an account for the account mapper store. +/ NOTE: this will cause supply invariant violation if called +func (ak AccountKeeper) + +RemoveAccount(ctx context.Context, acc sdk.AccountI) { + err := ak.Accounts.Remove(ctx, acc.GetAddress()) + if err != nil { + panic(err) +} +} + +/ IterateAccounts iterates over all the stored accounts and performs a callback function. +/ Stops iteration when callback returns true. +func (ak AccountKeeper) + +IterateAccounts(ctx context.Context, cb func(account sdk.AccountI) (stop bool)) { + err := ak.Accounts.Walk(ctx, nil, func(_ sdk.AccAddress, value sdk.AccountI) (bool, error) { + return cb(value), nil +}) + if err != nil { + panic(err) +} +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/messages-and-queries.mdx b/docs/sdk/v0.53/documentation/module-system/messages-and-queries.mdx new file mode 100644 index 00000000..0ea4ddbe --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/messages-and-queries.mdx @@ -0,0 +1,1703 @@ +--- +title: Messages and Queries +--- + +## Synopsis + +`Msg`s and `Queries` are the two primary objects handled by modules. Most of the core components defined in a module, like `Msg` services, `keeper`s and `Query` services, exist to process `message`s and `queries`. + + +**Pre-requisite Readings** + +- [Introduction to Cosmos SDK Modules](/docs/sdk/v0.53/documentation/module-system/intro) + + + +## Messages + +`Msg`s are objects whose end-goal is to trigger state-transitions. They are wrapped in [transactions](/docs/sdk/v0.53/documentation/protocol-development/transactions), which may contain one or more of them. 
+ +When a transaction is relayed from the underlying consensus engine to the Cosmos SDK application, it is first decoded by [`BaseApp`](/docs/sdk/v0.53/documentation/application-framework/baseapp). Then, each message contained in the transaction is extracted and routed to the appropriate module via `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services). For a more detailed explanation of the lifecycle of a transaction, click [here](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle). + +### `Msg` Services + +Defining Protobuf `Msg` services is the recommended way to handle messages. A Protobuf `Msg` service should be created for each module, typically in `tx.proto` (see more info about [conventions and naming](/docs/sdk/v0.53/documentation/protocol-development/encoding#faq)). It must have an RPC service method defined for each message in the module. + +Each `Msg` service method must have exactly one argument, which must implement the `sdk.Msg` interface, and a Protobuf response. The naming convention is to call the RPC argument `Msg` and the RPC response `MsgResponse`. For example: + +```protobuf + rpc Send(MsgSend) returns (MsgSendResponse); +``` + +See an example of a `Msg` service definition from `x/bank` module: + +```protobuf +// Msg defines the bank Msg service. +service Msg { + option (cosmos.msg.v1.service) = true; + + // Send defines a method for sending coins from one account to another account. + rpc Send(MsgSend) returns (MsgSendResponse); + + // MultiSend defines a method for sending coins from some accounts to other accounts. + rpc MultiSend(MsgMultiSend) returns (MsgMultiSendResponse); + + // UpdateParams defines a governance operation for updating the x/bank module parameters. + // The authority is defined in the keeper. 
+  //
+  // Since: cosmos-sdk 0.47
+  rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse);
+
+  // SetSendEnabled is a governance operation for setting the SendEnabled flag
+  // on any number of Denoms. Only the entries to add or update should be
+  // included. Entries that already exist in the store, but that aren't
+  // included in this message, will be left unchanged.
+  //
+  // Since: cosmos-sdk 0.47
+  rpc SetSendEnabled(MsgSetSendEnabled) returns (MsgSetSendEnabledResponse);
+}
+```
+
+### `sdk.Msg` Interface
+
+`sdk.Msg` is an alias of `proto.Message`.
+
+To attach a `ValidateBasic()` method to a message, implement the `HasValidateBasic` interface on the message type.
+
+```go expandable
+package types
+
+import (
+
+	"encoding/json"
+	fmt "fmt"
+	strings "strings"
+	"github.com/cosmos/gogoproto/proto"
+	protov2 "google.golang.org/protobuf/proto"
+	"github.com/cosmos/cosmos-sdk/codec"
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+)
+
+type (
+	/ Msg defines the interface a transaction message needed to fulfill.
+	Msg = proto.Message
+
+	/ LegacyMsg defines the interface a transaction message needed to fulfill up through
+	/ v0.47.
+	LegacyMsg interface {
+		Msg
+
+		/ GetSigners returns the addrs of signers that must sign.
+		/ CONTRACT: All signatures must be present to be valid.
+		/ CONTRACT: Returns addrs in some deterministic order.
+		GetSigners() []AccAddress
+}
+
+	/ Fee defines an interface for an application application-defined concrete
+	/ transaction type to be able to set and return the transaction fee.
+	Fee interface {
+		GetGas()
+
+uint64
+		GetAmount()
+
+Coins
+}
+
+	/ Signature defines an interface for an application application-defined
+	/ concrete transaction type to be able to set and return transaction signatures.
+	Signature interface {
+		GetPubKey()
+
+cryptotypes.PubKey
+		GetSignature() []byte
+}
+
+	/ HasMsgs defines an interface a transaction must fulfill.
+ HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. + GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() []byte +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / HasValidateBasic defines a type that has a ValidateBasic method. + / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. 
+func MsgTypeURL(msg proto.Message)
+
+string {
+	if m, ok := msg.(protov2.Message); ok {
+		return "/" + string(m.ProtoReflect().Descriptor().FullName())
+}
+
+return "/" + proto.MessageName(msg)
+}
+
+/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL
+func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) {
+	var msg Msg
+	bz, err := json.Marshal(struct {
+		Type string `json:"@type"`
+}{
+	Type: input,
+})
+	if err != nil {
+		return nil, err
+}
+	if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil {
+		return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err)
+}
+
+return msg, nil
+}
+
+/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL
+/ e.g. "cosmos.bank.v1beta1.MsgSend" => "bank"
+/ It returns an empty string if the input is not a valid type URL
+func GetModuleNameFromTypeURL(input string)
+
+string {
+	moduleName := strings.Split(input, ".")
+	if len(moduleName) > 1 {
+		return moduleName[1]
+}
+
+return ""
+}
+```
+
+Since v0.50, the signers that were previously returned by `GetSigners()` are derived automatically from a protobuf annotation.
+
+Read more about the signer field [here](/docs/sdk/v0.53/documentation/protocol-development/protobuf-annotations).
+
+```protobuf
+  option (cosmos.msg.v1.signer) = "from_address";
+```
+
+If custom signer derivation is needed, there is an alternative path: define a function that returns a `signing.CustomGetSigner` for a specific message. In the sketch below, `extractSigner` is a placeholder for message-specific logic:
+
+```go expandable
+func ProvideBankSendTransactionGetSigners() signing.CustomGetSigner {
+	return signing.CustomGetSigner{
+		/ The message type this custom signer applies to.
+		MsgType: "cosmos.bank.v1beta1.MsgSend",
+		Fn: func(msg proto.Message) ([][]byte, error) {
+			/ Extract the signer address from the message
+			/ (extractSigner is a placeholder for message-specific logic).
+			signer, err := extractSigner(msg)
+			if err != nil {
+				return nil, err
+			}
+
+			/ Return the signer in the required format.
+			return [][]byte{signer.Bytes()}, nil
+		},
+	}
+}
+```
+
+When using dependency injection (depinject), this function can be provided to the application via the provide method.
+
+```go
+depinject.Provide(banktypes.ProvideBankSendTransactionGetSigners)
+```
+
+The Cosmos SDK uses Protobuf definitions to generate client and server code:
+
+- `MsgServer` interface defines the server API for the `Msg` service and its implementation is described as part of the [`Msg` services](/docs/sdk/v0.53/documentation/module-system/msg-services) documentation.
+- Structures are generated for all RPC request and response types.
+
+A `RegisterMsgServer` method is also generated and should be used to register the module's `MsgServer` implementation in the `RegisterServices` method from the [`AppModule` interface](/docs/sdk/v0.53/documentation/module-system/module-manager#appmodule).
+
+In order for clients (CLI and grpc-gateway) to have these URLs registered, the Cosmos SDK provides the function `RegisterMsgServiceDesc(registry codectypes.InterfaceRegistry, sd *grpc.ServiceDesc)` that should be called inside the module's [`RegisterInterfaces`](/docs/sdk/v0.53/documentation/module-system/module-manager#appmodulebasic) method, using the proto-generated `&_Msg_serviceDesc` as the `*grpc.ServiceDesc` argument.
+
+## Queries
+
+A `query` is a request for information made by end-users of applications through an interface and processed by a full-node. A `query` is received by a full-node through its consensus engine and relayed to the application via the ABCI. It is then routed to the appropriate module via `BaseApp`'s `QueryRouter` so that it can be processed by the module's [query service](/docs/sdk/v0.53/documentation/module-system/query-services). For a deeper look at the lifecycle of a `query`, click [here](/docs/sdk/v0.53/api-reference/service-apis/query-lifecycle).
+
+### gRPC Queries
+
+Queries should be defined using [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services). A `Query` service should be created per module in `query.proto`. This service lists endpoints starting with `rpc`.
+ +Here's an example of such a `Query` service definition: + +```protobuf +// Query defines the gRPC querier service. +service Query { + // Accounts returns all the existing accounts. + // + // When called from another module, this query might consume a high amount of + // gas if the pagination field is incorrectly set. + // + // Since: cosmos-sdk 0.43 + rpc Accounts(QueryAccountsRequest) returns (QueryAccountsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/accounts"; + } + + // Account returns account details based on address. + rpc Account(QueryAccountRequest) returns (QueryAccountResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/accounts/{address}"; + } + + // AccountAddressByID returns account address based on account number. + // + // Since: cosmos-sdk 0.46.2 + rpc AccountAddressByID(QueryAccountAddressByIDRequest) returns (QueryAccountAddressByIDResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/address_by_id/{id}"; + } + + // Params queries all parameters. + rpc Params(QueryParamsRequest) returns (QueryParamsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/params"; + } + + // ModuleAccounts returns all the existing module accounts. 
+  //
+  // Since: cosmos-sdk 0.46
+  rpc ModuleAccounts(QueryModuleAccountsRequest) returns (QueryModuleAccountsResponse) {
+    option (cosmos.query.v1.module_query_safe) = true;
+    option (google.api.http).get = "/cosmos/auth/v1beta1/module_accounts";
+  }
+
+  // ModuleAccountByName returns the module account info by module name
+  rpc ModuleAccountByName(QueryModuleAccountByNameRequest) returns (QueryModuleAccountByNameResponse) {
+    option (cosmos.query.v1.module_query_safe) = true;
+    option (google.api.http).get = "/cosmos/auth/v1beta1/module_accounts/{name}";
+  }
+
+  // Bech32Prefix queries bech32Prefix
+  //
+  // Since: cosmos-sdk 0.46
+  rpc Bech32Prefix(Bech32PrefixRequest) returns (Bech32PrefixResponse) {
+    option (google.api.http).get = "/cosmos/auth/v1beta1/bech32";
+  }
+
+  // AddressBytesToString converts Account Address bytes to string
+  //
+  // Since: cosmos-sdk 0.46
+  rpc AddressBytesToString(AddressBytesToStringRequest) returns (AddressBytesToStringResponse) {
+    option (google.api.http).get = "/cosmos/auth/v1beta1/bech32/{address_bytes}";
+  }
+
+  // AddressStringToBytes converts Address string to bytes
+  //
+  // Since: cosmos-sdk 0.46
+  rpc AddressStringToBytes(AddressStringToBytesRequest) returns (AddressStringToBytesResponse) {
+    option (google.api.http).get = "/cosmos/auth/v1beta1/bech32/{address_string}";
+  }
+
+  // AccountInfo queries account info which is common to all account types.
+  //
+  // Since: cosmos-sdk 0.47
+  rpc AccountInfo(QueryAccountInfoRequest) returns (QueryAccountInfoResponse) {
+    option (cosmos.query.v1.module_query_safe) = true;
+    option (google.api.http).get = "/cosmos/auth/v1beta1/account_info/{address}";
+  }
+}
+```
+
+As `proto.Message`s, generated `Response` types implement the `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer) by default.
+
+A `RegisterQueryServer` method is also generated and should be used to register the module's query server in the `RegisterServices` method from the [`AppModule` interface](/docs/sdk/v0.53/documentation/module-system/module-manager#appmodule).
+
+### Legacy Queries
+
+Before the introduction of Protobuf and gRPC in the Cosmos SDK, there was usually no specific `query` object defined by module developers, unlike `message`s. Instead, the Cosmos SDK took the simpler approach of using a `path` to define each `query`. The `path` contains the `query` type and all the arguments needed to process it. For most module queries, the `path` should look like the following:
+
+```text
+queryCategory/queryRoute/queryType/arg1/arg2/...
+```
+
+where:
+
+- `queryCategory` is the category of the `query`, typically `custom` for module queries. It is used to differentiate between different kinds of queries within `BaseApp`'s [`Query` method](/docs/sdk/v0.53/documentation/application-framework/baseapp#query).
+- `queryRoute` is used by `BaseApp`'s [`queryRouter`](/docs/sdk/v0.53/documentation/application-framework/baseapp#query-routing) to map the `query` to its module. Usually, `queryRoute` should be the name of the module.
+- `queryType` is used by the module's [`querier`](/docs/sdk/v0.53/documentation/module-system/query-services#legacy-queriers) to map the `query` to the appropriate `querier function` within the module.
+- `args` are the actual arguments needed to process the `query`. They are filled out by the end-user. Note that for bigger queries, you might prefer passing arguments in the `Data` field of the request `req` instead of the `path`.
+
+The `path` for each `query` must be defined by the module developer in the module's [command-line interface file](/docs/sdk/v0.53/documentation/module-system/module-interfaces#query-commands). Overall, there are three main components module developers need to implement to make the subset of state defined by their module queryable:
+
+- A [`querier`](/docs/sdk/v0.53/documentation/module-system/query-services#legacy-queriers), to process the `query` once it has been [routed to the module](/docs/sdk/v0.53/documentation/application-framework/baseapp#query-routing).
+- [Query commands](/docs/sdk/v0.53/documentation/module-system/module-interfaces#query-commands) in the module's CLI file, where the `path` for each `query` is specified.
+- `query` return types. Typically defined in a file `types/querier.go`, they specify the result type of each of the module's `queries`. These custom types must implement the `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer).
+
+### Store Queries
+
+Store queries query store keys directly. They use `clientCtx.QueryABCI(req abci.RequestQuery)` to return the full `abci.ResponseQuery` with Merkle inclusion proofs.
+ +See following examples: + +```go expandable +package baseapp + +import ( + + "context" + "crypto/sha256" + "fmt" + "os" + "sort" + "strings" + "syscall" + "time" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/store/rootmulti" + snapshottypes "cosmossdk.io/store/snapshots/types" + storetypes "cosmossdk.io/store/types" + "github.com/cockroachdb/errors" + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc/codes" + grpcstatus "google.golang.org/grpc/status" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ Supported ABCI Query prefixes and paths +const ( + QueryPathApp = "app" + QueryPathCustom = "custom" + QueryPathP2P = "p2p" + QueryPathStore = "store" + + QueryPathBroadcastTx = "/cosmos.tx.v1beta1.Service/BroadcastTx" +) + +func (app *BaseApp) + +InitChain(req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { + if req.ChainId != app.chainID { + return nil, fmt.Errorf("invalid chain-id on InitChain; expected: %s, got: %s", app.chainID, req.ChainId) +} + + / On a new chain, we consider the init chain block height as 0, even though + / req.InitialHeight is 1 by default. + initHeader := cmtproto.Header{ + ChainID: req.ChainId, + Time: req.Time +} + +app.initialHeight = req.InitialHeight + + app.logger.Info("InitChain", "initialHeight", req.InitialHeight, "chainID", req.ChainId) + + / Set the initial height, which will be used to determine if we are proposing + / or processing the first block or not. 
+ app.initialHeight = req.InitialHeight + + / if req.InitialHeight is > 1, then we set the initial version on all stores + if req.InitialHeight > 1 { + initHeader.Height = req.InitialHeight + if err := app.cms.SetInitialVersion(req.InitialHeight); err != nil { + return nil, err +} + +} + + / initialize states with a correct header + app.setState(execModeFinalize, initHeader) + +app.setState(execModeCheck, initHeader) + + / Store the consensus params in the BaseApp's param store. Note, this must be + / done after the finalizeBlockState and context have been set as it's persisted + / to state. + if req.ConsensusParams != nil { + err := app.StoreConsensusParams(app.finalizeBlockState.ctx, *req.ConsensusParams) + if err != nil { + return nil, err +} + +} + +defer func() { + / InitChain represents the state of the application BEFORE the first block, + / i.e. the genesis block. This means that when processing the app's InitChain + / handler, the block height is zero by default. However, after Commit is called + / the height needs to reflect the true block height. 
+ initHeader.Height = req.InitialHeight + app.checkState.ctx = app.checkState.ctx.WithBlockHeader(initHeader) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithBlockHeader(initHeader) +}() + if app.initChainer == nil { + return &abci.ResponseInitChain{ +}, nil +} + + / add block gas meter for any genesis transactions (allow infinite gas) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithBlockGasMeter(storetypes.NewInfiniteGasMeter()) + +res, err := app.initChainer(app.finalizeBlockState.ctx, req) + if err != nil { + return nil, err +} + if len(req.Validators) > 0 { + if len(req.Validators) != len(res.Validators) { + return nil, fmt.Errorf( + "len(RequestInitChain.Validators) != len(GenesisValidators) (%d != %d)", + len(req.Validators), len(res.Validators), + ) +} + +sort.Sort(abci.ValidatorUpdates(req.Validators)) + +sort.Sort(abci.ValidatorUpdates(res.Validators)) + for i := range res.Validators { + if !proto.Equal(&res.Validators[i], &req.Validators[i]) { + return nil, fmt.Errorf("genesisValidators[%d] != req.Validators[%d] ", i, i) +} + +} + +} + + / In the case of a new chain, AppHash will be the hash of an empty string. + / During an upgrade, it'll be the hash of the last committed block. + var appHash []byte + if !app.LastCommitID().IsZero() { + appHash = app.LastCommitID().Hash +} + +else { + / $ echo -n '' | sha256sum + / e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 + emptyHash := sha256.Sum256([]byte{ +}) + +appHash = emptyHash[:] +} + + / NOTE: We don't commit, but FinalizeBlock for block InitialHeight starts from + / this FinalizeBlockState. 
+ return &abci.ResponseInitChain{ + ConsensusParams: res.ConsensusParams, + Validators: res.Validators, + AppHash: appHash, +}, nil +} + +func (app *BaseApp) + +Info(req *abci.RequestInfo) (*abci.ResponseInfo, error) { + lastCommitID := app.cms.LastCommitID() + +return &abci.ResponseInfo{ + Data: app.name, + Version: app.version, + AppVersion: app.appVersion, + LastBlockHeight: lastCommitID.Version, + LastBlockAppHash: lastCommitID.Hash, +}, nil +} + +/ Query implements the ABCI interface. It delegates to CommitMultiStore if it +/ implements Queryable. +func (app *BaseApp) + +Query(_ context.Context, req *abci.RequestQuery) (resp *abci.ResponseQuery, err error) { + / add panic recovery for all queries + / + / Ref: https://github.com/cosmos/cosmos-sdk/pull/8039 + defer func() { + if r := recover(); r != nil { + resp = sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrPanic, "%v", r), app.trace) +} + +}() + + / when a client did not provide a query height, manually inject the latest + if req.Height == 0 { + req.Height = app.LastBlockHeight() +} + +telemetry.IncrCounter(1, "query", "count") + +telemetry.IncrCounter(1, "query", req.Path) + +defer telemetry.MeasureSince(time.Now(), req.Path) + if req.Path == QueryPathBroadcastTx { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "can't route a broadcast tx message"), app.trace), nil +} + + / handle gRPC routes first rather than calling splitPath because '/' characters + / are used as part of gRPC paths + if grpcHandler := app.grpcQueryRouter.Route(req.Path); grpcHandler != nil { + return app.handleQueryGRPC(grpcHandler, req), nil +} + path := SplitABCIQueryPath(req.Path) + if len(path) == 0 { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "no query path provided"), app.trace), nil +} + switch path[0] { + case QueryPathApp: + / "/app" prefix for special application queries + resp = handleQueryApp(app, path, req) + case QueryPathStore: + resp = handleQueryStore(app, 
path, *req) + case QueryPathP2P: + resp = handleQueryP2P(app, path) + +default: + resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "unknown query path"), app.trace) +} + +return resp, nil +} + +/ ListSnapshots implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +ListSnapshots(req *abci.RequestListSnapshots) (*abci.ResponseListSnapshots, error) { + resp := &abci.ResponseListSnapshots{ + Snapshots: []*abci.Snapshot{ +}} + if app.snapshotManager == nil { + return resp, nil +} + +snapshots, err := app.snapshotManager.List() + if err != nil { + app.logger.Error("failed to list snapshots", "err", err) + +return nil, err +} + for _, snapshot := range snapshots { + abciSnapshot, err := snapshot.ToABCI() + if err != nil { + app.logger.Error("failed to convert ABCI snapshots", "err", err) + +return nil, err +} + +resp.Snapshots = append(resp.Snapshots, &abciSnapshot) +} + +return resp, nil +} + +/ LoadSnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. +func (app *BaseApp) + +LoadSnapshotChunk(req *abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) { + if app.snapshotManager == nil { + return &abci.ResponseLoadSnapshotChunk{ +}, nil +} + +chunk, err := app.snapshotManager.LoadChunk(req.Height, req.Format, req.Chunk) + if err != nil { + app.logger.Error( + "failed to load snapshot chunk", + "height", req.Height, + "format", req.Format, + "chunk", req.Chunk, + "err", err, + ) + +return nil, err +} + +return &abci.ResponseLoadSnapshotChunk{ + Chunk: chunk +}, nil +} + +/ OfferSnapshot implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +OfferSnapshot(req *abci.RequestOfferSnapshot) (*abci.ResponseOfferSnapshot, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} + if req.Snapshot == nil { + app.logger.Error("received nil snapshot") + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +snapshot, err := snapshottypes.SnapshotFromABCI(req.Snapshot) + if err != nil { + app.logger.Error("failed to decode snapshot metadata", "err", err) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil +} + +err = app.snapshotManager.Restore(snapshot) + switch { + case err == nil: + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrUnknownFormat): + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT_FORMAT +}, nil + case errors.Is(err, snapshottypes.ErrInvalidMetadata): + app.logger.Error( + "rejecting invalid snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + +return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_REJECT +}, nil + + default: + app.logger.Error( + "failed to restore snapshot", + "height", req.Snapshot.Height, + "format", req.Snapshot.Format, + "err", err, + ) + + / We currently don't support resetting the IAVL stores and retrying a + / different snapshot, so we ask CometBFT to abort all snapshot restoration. + return &abci.ResponseOfferSnapshot{ + Result: abci.ResponseOfferSnapshot_ABORT +}, nil +} +} + +/ ApplySnapshotChunk implements the ABCI interface. It delegates to app.snapshotManager if set. 
+func (app *BaseApp) + +ApplySnapshotChunk(req *abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) { + if app.snapshotManager == nil { + app.logger.Error("snapshot manager not configured") + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} + + _, err := app.snapshotManager.RestoreChunk(req.Chunk) + switch { + case err == nil: + return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ACCEPT +}, nil + case errors.Is(err, snapshottypes.ErrChunkHashMismatch): + app.logger.Error( + "chunk checksum mismatch; rejecting sender and requesting refetch", + "chunk", req.Index, + "sender", req.Sender, + "err", err, + ) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_RETRY, + RefetchChunks: []uint32{ + req.Index +}, + RejectSenders: []string{ + req.Sender +}, +}, nil + + default: + app.logger.Error("failed to restore snapshot", "err", err) + +return &abci.ResponseApplySnapshotChunk{ + Result: abci.ResponseApplySnapshotChunk_ABORT +}, nil +} +} + +/ CheckTx implements the ABCI interface and executes a tx in CheckTx mode. In +/ CheckTx mode, messages are not executed. This means messages are only validated +/ and only the AnteHandler is executed. State is persisted to the BaseApp's +/ internal CheckTx state if the AnteHandler passes. Otherwise, the ResponseCheckTx +/ will contain relevant error information. Regardless of tx execution outcome, +/ the ResponseCheckTx will contain relevant gas execution context. 
+func (app *BaseApp) + +CheckTx(req *abci.RequestCheckTx) (*abci.ResponseCheckTx, error) { + var mode execMode + switch { + case req.Type == abci.CheckTxType_New: + mode = execModeCheck + case req.Type == abci.CheckTxType_Recheck: + mode = execModeReCheck + + default: + return nil, fmt.Errorf("unknown RequestCheckTx type: %s", req.Type) +} + +gInfo, result, anteEvents, err := app.runTx(mode, req.Tx) + if err != nil { + return sdkerrors.ResponseCheckTxWithEvents(err, gInfo.GasWanted, gInfo.GasUsed, anteEvents, app.trace), nil +} + +return &abci.ResponseCheckTx{ + GasWanted: int64(gInfo.GasWanted), / TODO: Should type accept unsigned ints? + GasUsed: int64(gInfo.GasUsed), / TODO: Should type accept unsigned ints? + Log: result.Log, + Data: result.Data, + Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents), +}, nil +} + +/ PrepareProposal implements the PrepareProposal ABCI method and returns a +/ ResponsePrepareProposal object to the client. The PrepareProposal method is +/ responsible for allowing the block proposer to perform application-dependent +/ work in a block before proposing it. +/ +/ Transactions can be modified, removed, or added by the application. Since the +/ application maintains its own local mempool, it will ignore the transactions +/ provided to it in RequestPrepareProposal. Instead, it will determine which +/ transactions to return based on the mempool's semantics and the MaxTxBytes +/ provided by the client's request. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +PrepareProposal(req *abci.RequestPrepareProposal) (resp *abci.ResponsePrepareProposal, err error) { + if app.prepareProposal == nil { + return nil, errors.New("PrepareProposal handler not set") +} + + / Always reset state given that PrepareProposal can timeout and be called + / again in a subsequent round. 
+ header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + +app.setState(execModePrepareProposal, header) + + / CometBFT must never call PrepareProposal with a height of 0. + / + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("PrepareProposal called with invalid height") +} + +app.prepareProposalState.ctx = app.getContextForProposal(app.prepareProposalState.ctx, req.Height). + WithVoteInfos(toVoteInfo(req.LocalLastCommit.Votes)). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithProposer(req.ProposerAddress). + WithExecMode(sdk.ExecModePrepareProposal). + WithCometInfo(prepareProposalInfo{ + req +}) + +app.prepareProposalState.ctx = app.prepareProposalState.ctx. + WithConsensusParams(app.GetConsensusParams(app.prepareProposalState.ctx)). + WithBlockGasMeter(app.getBlockGasMeter(app.prepareProposalState.ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in PrepareProposal", + "height", req.Height, + "time", req.Time, + "panic", err, + ) + +resp = &abci.ResponsePrepareProposal{ +} + +} + +}() + +resp, err = app.prepareProposal(app.prepareProposalState.ctx, req) + if err != nil { + app.logger.Error("failed to prepare proposal", "height", req.Height, "error", err) + +return &abci.ResponsePrepareProposal{ +}, nil +} + +return resp, nil +} + +/ ProcessProposal implements the ProcessProposal ABCI method and returns a +/ ResponseProcessProposal object to the client. The ProcessProposal method is +/ responsible for allowing execution of application-dependent work in a proposed +/ block. Note, the application defines the exact implementation details of +/ ProcessProposal. 
In general, the application must at the very least ensure +/ that all transactions are valid. If all transactions are valid, then we inform +/ CometBFT that the Status is ACCEPT. However, the application is also able +/ to implement optimizations such as executing the entire proposed block +/ immediately. +/ +/ If a panic is detected during execution of an application's ProcessProposal +/ handler, it will be recovered and we will reject the proposal. +/ +/ Ref: docs/sdk/next/documentation/legacy/adr-comprehensive +/ Ref: https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md +func (app *BaseApp) + +ProcessProposal(req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { + if app.processProposal == nil { + return nil, errors.New("ProcessProposal handler not set") +} + + / CometBFT must never call ProcessProposal with a height of 0. + / Ref: https://github.com/cometbft/cometbft/blob/059798a4f5b0c9f52aa8655fa619054a0154088c/spec/core/state.md?plain=1#L37-L38 + if req.Height < 1 { + return nil, errors.New("ProcessProposal called with invalid height") +} + + / Always reset state given that ProcessProposal can timeout and be called + / again in a subsequent round. + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + +app.setState(execModeProcessProposal, header) + + / Since the application can get access to FinalizeBlock state and write to it, + / we must be sure to reset it in case ProcessProposal timeouts and is called + / again in a subsequent round. However, we only want to do this after we've + / processed the first block, as we want to avoid overwriting the finalizeState + / after state changes during InitChain. 
+ if req.Height > app.initialHeight { + app.setState(execModeFinalize, header) +} + +app.processProposalState.ctx = app.getContextForProposal(app.processProposalState.ctx, req.Height). + WithVoteInfos(req.ProposedLastCommit.Votes). / this is a set of votes that are not finalized yet, wait for commit + WithBlockHeight(req.Height). + WithBlockTime(req.Time). + WithHeaderHash(req.Hash). + WithProposer(req.ProposerAddress). + WithCometInfo(cometInfo{ + ProposerAddress: req.ProposerAddress, + ValidatorsHash: req.NextValidatorsHash, + Misbehavior: req.Misbehavior, + LastCommit: req.ProposedLastCommit +}). + WithExecMode(sdk.ExecModeProcessProposal) + +app.processProposalState.ctx = app.processProposalState.ctx. + WithConsensusParams(app.GetConsensusParams(app.processProposalState.ctx)). + WithBlockGasMeter(app.getBlockGasMeter(app.processProposalState.ctx)) + +defer func() { + if err := recover(); err != nil { + app.logger.Error( + "panic recovered in ProcessProposal", + "height", req.Height, + "time", req.Time, + "hash", fmt.Sprintf("%X", req.Hash), + "panic", err, + ) + +resp = &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +} + +} + +}() + +resp, err = app.processProposal(app.processProposalState.ctx, req) + if err != nil { + app.logger.Error("failed to process proposal", "height", req.Height, "error", err) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +return resp, nil +} + +/ ExtendVote implements the ExtendVote ABCI method and returns a ResponseExtendVote. +/ It calls the application's ExtendVote handler which is responsible for performing +/ application-specific business logic when sending a pre-commit for the NEXT +/ block height. The extensions response may be non-deterministic but must always +/ be returned, even if empty. +/ +/ Agreed upon vote extensions are made available to the proposer of the next +/ height and are committed in the subsequent height, i.e. H+2. 
An error is
+/ returned if vote extensions are not enabled or if extendVote fails or panics.
+func (app *BaseApp)
+
+ExtendVote(_ context.Context, req *abci.RequestExtendVote) (resp *abci.ResponseExtendVote, err error) {
+ / Always reset state given that ExtendVote and VerifyVoteExtension can timeout
+ / and be called again in a subsequent round.
+ emptyHeader := cmtproto.Header{
+ ChainID: app.chainID,
+ Height: req.Height
+}
+
+app.setState(execModeVoteExtension, emptyHeader)
+ if app.extendVote == nil {
+ return nil, errors.New("application ExtendVote handler not set")
+}
+
+ / If vote extensions are not enabled, as a safety precaution, we return an
+ / error.
+ cp := app.GetConsensusParams(app.voteExtensionState.ctx)
+ if cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight <= 0 {
+ return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to ExtendVote at height %d", req.Height)
+}
+
+app.voteExtensionState.ctx = app.voteExtensionState.ctx.
+ WithConsensusParams(cp).
+ WithBlockGasMeter(storetypes.NewInfiniteGasMeter()).
+ WithBlockHeight(req.Height).
+ WithHeaderHash(req.Hash).
+ WithExecMode(sdk.ExecModeVoteExtension)
+
+ / add a deferred recover handler in case extendVote panics
+ defer func() {
+ if r := recover(); r != nil {
+ app.logger.Error(
+ "panic recovered in ExtendVote",
+ "height", req.Height,
+ "hash", fmt.Sprintf("%X", req.Hash),
+ "panic", r,
+ )
+
+err = fmt.Errorf("recovered application panic in ExtendVote: %v", r)
+}
+
+}()
+
+resp, err = app.extendVote(app.voteExtensionState.ctx, req)
+ if err != nil {
+ app.logger.Error("failed to extend vote", "height", req.Height, "error", err)
+
+return &abci.ResponseExtendVote{
+ VoteExtension: []byte{
+}}, nil
+}
+
+return resp, err
+}
+
+/ VerifyVoteExtension implements the VerifyVoteExtension ABCI method and returns
+/ a ResponseVerifyVoteExtension. 
It calls the applications' VerifyVoteExtension +/ handler which is responsible for performing application-specific business +/ logic in verifying a vote extension from another validator during the pre-commit +/ phase. The response MUST be deterministic. An error is returned if vote +/ extensions are not enabled or if verifyVoteExt fails or panics. +func (app *BaseApp) + +VerifyVoteExtension(req *abci.RequestVerifyVoteExtension) (resp *abci.ResponseVerifyVoteExtension, err error) { + if app.verifyVoteExt == nil { + return nil, errors.New("application VerifyVoteExtension handler not set") +} + + / If vote extensions are not enabled, as a safety precaution, we return an + / error. + cp := app.GetConsensusParams(app.voteExtensionState.ctx) + if cp.Abci != nil && cp.Abci.VoteExtensionsEnableHeight <= 0 { + return nil, fmt.Errorf("vote extensions are not enabled; unexpected call to VerifyVoteExtension at height %d", req.Height) +} + + / add a deferred recover handler in case verifyVoteExt panics + defer func() { + if r := recover(); r != nil { + app.logger.Error( + "panic recovered in VerifyVoteExtension", + "height", req.Height, + "hash", fmt.Sprintf("%X", req.Hash), + "validator", fmt.Sprintf("%X", req.ValidatorAddress), + "panic", r, + ) + +err = fmt.Errorf("recovered application panic in VerifyVoteExtension: %v", r) +} + +}() + +resp, err = app.verifyVoteExt(app.voteExtensionState.ctx, req) + if err != nil { + app.logger.Error("failed to verify vote extension", "height", req.Height, "error", err) + +return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_REJECT +}, nil +} + +return resp, err +} + +/ FinalizeBlock will execute the block proposal provided by RequestFinalizeBlock. +/ Specifically, it will execute an application's BeginBlock (if defined), followed +/ by the transactions in the proposal, finally followed by the application's +/ EndBlock (if defined). +/ +/ For each raw transaction, i.e. 
a byte slice, BaseApp will only execute it if +/ it adheres to the sdk.Tx interface. Otherwise, the raw transaction will be +/ skipped. This is to support compatibility with proposers injecting vote +/ extensions into the proposal, which should not themselves be executed in cases +/ where they adhere to the sdk.Tx interface. +func (app *BaseApp) + +FinalizeBlock(req *abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) { + var events []abci.Event + if err := app.validateFinalizeBlockHeight(req); err != nil { + return nil, err +} + if app.cms.TracingEnabled() { + app.cms.SetTracingContext(storetypes.TraceContext( + map[string]any{"blockHeight": req.Height +}, + )) +} + header := cmtproto.Header{ + ChainID: app.chainID, + Height: req.Height, + Time: req.Time, + ProposerAddress: req.ProposerAddress, + NextValidatorsHash: req.NextValidatorsHash, +} + + / Initialize the FinalizeBlock state. If this is the first block, it should + / already be initialized in InitChain. Otherwise app.finalizeBlockState will be + / nil, since it is reset on Commit. + if app.finalizeBlockState == nil { + app.setState(execModeFinalize, header) +} + +else { + / In the first block, app.finalizeBlockState.ctx will already be initialized + / by InitChain. Context is now updated with Header information. + app.finalizeBlockState.ctx = app.finalizeBlockState.ctx. + WithBlockHeader(header). + WithBlockHeight(req.Height) +} + gasMeter := app.getBlockGasMeter(app.finalizeBlockState.ctx) + +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash). + WithConsensusParams(app.GetConsensusParams(app.finalizeBlockState.ctx)). + WithVoteInfos(req.DecidedLastCommit.Votes). + WithExecMode(sdk.ExecModeFinalize) + if app.checkState != nil { + app.checkState.ctx = app.checkState.ctx. + WithBlockGasMeter(gasMeter). + WithHeaderHash(req.Hash) +} + beginBlock := app.beginBlock(req) + +events = append(events, beginBlock.Events...) 
+ + / Iterate over all raw transactions in the proposal and attempt to execute + / them, gathering the execution results. + / + / NOTE: Not all raw transactions may adhere to the sdk.Tx interface, e.g. + / vote extensions, so skip those. + txResults := make([]*abci.ExecTxResult, 0, len(req.Txs)) + for _, rawTx := range req.Txs { + if _, err := app.txDecoder(rawTx); err == nil { + txResults = append(txResults, app.deliverTx(rawTx)) +} + +} + if app.finalizeBlockState.ms.TracingEnabled() { + app.finalizeBlockState.ms = app.finalizeBlockState.ms.SetTracingContext(nil).(storetypes.CacheMultiStore) +} + +endBlock, err := app.endBlock(app.finalizeBlockState.ctx) + if err != nil { + return nil, err +} + +events = append(events, endBlock.Events...) + cp := app.GetConsensusParams(app.finalizeBlockState.ctx) + +return &abci.ResponseFinalizeBlock{ + Events: events, + TxResults: txResults, + ValidatorUpdates: endBlock.ValidatorUpdates, + ConsensusParamUpdates: &cp, + AppHash: app.workingHash(), +}, nil +} + +/ Commit implements the ABCI interface. It will commit all state that exists in +/ the deliver state's multi-store and includes the resulting commit ID in the +/ returned abci.ResponseCommit. Commit will set the check state based on the +/ latest header and reset the deliver state. Also, if a non-zero halt height is +/ defined in config, Commit will execute a deferred function call to check +/ against that height and gracefully halt if it matches the latest committed +/ height. 
+func (app *BaseApp) + +Commit() (*abci.ResponseCommit, error) { + header := app.finalizeBlockState.ctx.BlockHeader() + retainHeight := app.GetBlockRetentionHeight(header.Height) + if app.precommiter != nil { + app.precommiter(app.finalizeBlockState.ctx) +} + +rms, ok := app.cms.(*rootmulti.Store) + if ok { + rms.SetCommitHeader(header) +} + +app.cms.Commit() + resp := &abci.ResponseCommit{ + RetainHeight: retainHeight, +} + abciListeners := app.streamingManager.ABCIListeners + if len(abciListeners) > 0 { + ctx := app.finalizeBlockState.ctx + blockHeight := ctx.BlockHeight() + changeSet := app.cms.PopStateCache() + for _, abciListener := range abciListeners { + if err := abciListener.ListenCommit(ctx, *resp, changeSet); err != nil { + app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err) +} + +} + +} + + / Reset the CheckTx state to the latest committed. + / + / NOTE: This is safe because CometBFT holds a lock on the mempool for + / Commit. Use the header from this latest block. + app.setState(execModeCheck, header) + +app.finalizeBlockState = nil + if app.prepareCheckStater != nil { + app.prepareCheckStater(app.checkState.ctx) +} + +var halt bool + switch { + case app.haltHeight > 0 && uint64(header.Height) >= app.haltHeight: + halt = true + case app.haltTime > 0 && header.Time.Unix() >= int64(app.haltTime): + halt = true +} + if halt { + / Halt the binary and allow CometBFT to receive the ResponseCommit + / response with the commit ID hash. This will allow the node to successfully + / restart and process blocks assuming the halt configuration has been + / reset or moved to a more distant value. + app.halt() +} + +go app.snapshotManager.SnapshotIfApplicable(header.Height) + +return resp, nil +} + +/ workingHash gets the apphash that will be finalized in commit. +/ These writes will be persisted to the root multi-store (app.cms) + +and flushed to +/ disk in the Commit phase. 
This means that by the time the ABCI client requests Commit(), the application
+/ state transitions will already have been flushed to disk, and as a result we
+/ already have an application Merkle root available.
+func (app *BaseApp)
+
+workingHash() []byte {
+ / Write the FinalizeBlock state into branched storage and commit the MultiStore.
+ / The write to the FinalizeBlock state writes all state transitions to the root
+ / MultiStore (app.cms)
+
+so when Commit()
+
+is called it persists those values.
+ app.finalizeBlockState.ms.Write()
+
+ / Get the hash of all writes in order to return the apphash to the comet in finalizeBlock.
+ commitHash := app.cms.WorkingHash()
+
+app.logger.Debug("hash of all writes", "workingHash", fmt.Sprintf("%X", commitHash))
+
+return commitHash
+}
+
+/ halt attempts to gracefully shut down the node via SIGINT and SIGTERM, falling
+/ back on os.Exit if both fail.
+func (app *BaseApp)
+
+halt() {
+ app.logger.Info("halting node per configuration", "height", app.haltHeight, "time", app.haltTime)
+
+p, err := os.FindProcess(os.Getpid())
+ if err == nil {
+ / attempt cascading signals in case SIGINT fails (os dependent)
+ sigIntErr := p.Signal(syscall.SIGINT)
+ sigTermErr := p.Signal(syscall.SIGTERM)
+ if sigIntErr == nil || sigTermErr == nil {
+ return
+}
+
+}
+
+ / Resort to exiting immediately if the process could not be found or killed
+ / via SIGINT/SIGTERM signals. 
+ app.logger.Info("failed to send SIGINT/SIGTERM; exiting...") + +os.Exit(0) +} + +func handleQueryApp(app *BaseApp, path []string, req *abci.RequestQuery) *abci.ResponseQuery { + if len(path) >= 2 { + switch path[1] { + case "simulate": + txBytes := req.Data + + gInfo, res, err := app.Simulate(txBytes) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to simulate tx"), app.trace) +} + simRes := &sdk.SimulationResponse{ + GasInfo: gInfo, + Result: res, +} + +bz, err := codec.ProtoMarshalJSON(simRes, app.interfaceRegistry) + if err != nil { + return sdkerrors.QueryResult(errorsmod.Wrap(err, "failed to JSON encode simulation response"), app.trace) +} + +return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: bz, +} + case "version": + return &abci.ResponseQuery{ + Codespace: sdkerrors.RootCodespace, + Height: req.Height, + Value: []byte(app.version), +} + +default: + return sdkerrors.QueryResult(errorsmod.Wrapf(sdkerrors.ErrUnknownRequest, "unknown query: %s", path), app.trace) +} + +} + +return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrUnknownRequest, + "expected second parameter to be either 'simulate' or 'version', neither was present", + ), app.trace) +} + +func handleQueryStore(app *BaseApp, path []string, req abci.RequestQuery) *abci.ResponseQuery { + / "/store" prefix for store queries + queryable, ok := app.cms.(storetypes.Queryable) + if !ok { + return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "multi-store does not support queries"), app.trace) +} + +req.Path = "/" + strings.Join(path[1:], "/") + if req.Height <= 1 && req.Prove { + return sdkerrors.QueryResult( + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ), app.trace) +} + sdkReq := storetypes.RequestQuery(req) + +resp, err := queryable.Query(&sdkReq) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + 
+resp.Height = req.Height
+ abciResp := abci.ResponseQuery(*resp)
+
+return &abciResp
+}
+
+func handleQueryP2P(app *BaseApp, path []string) *abci.ResponseQuery {
+ / "/p2p" prefix for p2p queries
+ if len(path) < 4 {
+ return sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "path should be p2p filter <addr|id> <parameter>"), app.trace)
+}
+
+var resp *abci.ResponseQuery
+
+ cmd, typ, arg := path[1], path[2], path[3]
+ switch cmd {
+ case "filter":
+ switch typ {
+ case "addr":
+ resp = app.FilterPeerByAddrPort(arg)
+ case "id":
+ resp = app.FilterPeerByID(arg)
+}
+
+default:
+ resp = sdkerrors.QueryResult(errorsmod.Wrap(sdkerrors.ErrUnknownRequest, "expected second parameter to be 'filter'"), app.trace)
+}
+
+return resp
+}
+
+/ SplitABCIQueryPath splits a string path using the delimiter '/'.
+/
+/ e.g. "this/is/funny" becomes []string{"this", "is", "funny"
+}
+
+func SplitABCIQueryPath(requestPath string) (path []string) {
+ path = strings.Split(requestPath, "/")
+
+ / first element is empty string
+ if len(path) > 0 && path[0] == "" {
+ path = path[1:]
+}
+
+return path
+}
+
+/ FilterPeerByAddrPort filters peers by address/port.
+func (app *BaseApp)
+
+FilterPeerByAddrPort(info string) *abci.ResponseQuery {
+ if app.addrPeerFilter != nil {
+ return app.addrPeerFilter(info)
+}
+
+return &abci.ResponseQuery{
+}
+}
+
+/ FilterPeerByID filters peers by node ID.
+func (app *BaseApp)
+
+FilterPeerByID(info string) *abci.ResponseQuery {
+ if app.idPeerFilter != nil {
+ return app.idPeerFilter(info)
+}
+
+return &abci.ResponseQuery{
+}
+}
+
+/ getContextForProposal returns the correct Context for PrepareProposal and
+/ ProcessProposal. We use finalizeBlockState on the first block to be able to
+/ access any state changes made in InitChain. 
+func (app *BaseApp) + +getContextForProposal(ctx sdk.Context, height int64) + +sdk.Context { + if height == app.initialHeight { + ctx, _ = app.finalizeBlockState.ctx.CacheContext() + + / clear all context data set during InitChain to avoid inconsistent behavior + ctx = ctx.WithBlockHeader(cmtproto.Header{ +}) + +return ctx +} + +return ctx +} + +func (app *BaseApp) + +handleQueryGRPC(handler GRPCQueryHandler, req *abci.RequestQuery) *abci.ResponseQuery { + ctx, err := app.CreateQueryContext(req.Height, req.Prove) + if err != nil { + return sdkerrors.QueryResult(err, app.trace) +} + +resp, err := handler(ctx, req) + if err != nil { + resp = sdkerrors.QueryResult(gRPCErrorToSDKError(err), app.trace) + +resp.Height = req.Height + return resp +} + +return resp +} + +func gRPCErrorToSDKError(err error) + +error { + status, ok := grpcstatus.FromError(err) + if !ok { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) +} + switch status.Code() { + case codes.NotFound: + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, err.Error()) + case codes.InvalidArgument: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.FailedPrecondition: + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, err.Error()) + case codes.Unauthenticated: + return errorsmod.Wrap(sdkerrors.ErrUnauthorized, err.Error()) + +default: + return errorsmod.Wrap(sdkerrors.ErrUnknownRequest, err.Error()) +} +} + +func checkNegativeHeight(height int64) + +error { + if height < 0 { + return errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "cannot query with height < 0; please provide a valid height") +} + +return nil +} + +/ createQueryContext creates a new sdk.Context for a query, taking as args +/ the block height and whether the query needs a proof or not. 
+func (app *BaseApp) + +CreateQueryContext(height int64, prove bool) (sdk.Context, error) { + if err := checkNegativeHeight(height); err != nil { + return sdk.Context{ +}, err +} + + / use custom query multi-store if provided + qms := app.qms + if qms == nil { + qms = app.cms.(storetypes.MultiStore) +} + lastBlockHeight := qms.LatestVersion() + if lastBlockHeight == 0 { + return sdk.Context{ +}, errorsmod.Wrapf(sdkerrors.ErrInvalidHeight, "%s is not ready; please wait for first block", app.Name()) +} + if height > lastBlockHeight { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidHeight, + "cannot query with height in the future; please provide a valid height", + ) +} + + / when a client did not provide a query height, manually inject the latest + if height == 0 { + height = lastBlockHeight +} + if height <= 1 && prove { + return sdk.Context{ +}, + errorsmod.Wrap( + sdkerrors.ErrInvalidRequest, + "cannot query with proof when height <= 1; please provide a valid height", + ) +} + +cacheMS, err := qms.CacheMultiStoreWithVersion(height) + if err != nil { + return sdk.Context{ +}, + errorsmod.Wrapf( + sdkerrors.ErrInvalidRequest, + "failed to load state at height %d; %s (latest height: %d)", height, err, lastBlockHeight, + ) +} + + / branch the commit multi-store for safety + ctx := sdk.NewContext(cacheMS, app.checkState.ctx.BlockHeader(), true, app.logger). + WithMinGasPrices(app.minGasPrices). + WithBlockHeight(height) + if height != lastBlockHeight { + rms, ok := app.cms.(*rootmulti.Store) + if ok { + cInfo, err := rms.GetCommitInfo(height) + if cInfo != nil && err == nil { + ctx = ctx.WithBlockTime(cInfo.Timestamp) +} + +} + +} + +return ctx, nil +} + +/ GetBlockRetentionHeight returns the height for which all blocks below this height +/ are pruned from CometBFT. 
Given a commitment height and a non-zero local +/ minRetainBlocks configuration, the retentionHeight is the smallest height that +/ satisfies: +/ +/ - Unbonding (safety threshold) + +time: The block interval in which validators +/ can be economically punished for misbehavior. Blocks in this interval must be +/ auditable e.g. by the light client. +/ +/ - Logical store snapshot interval: The block interval at which the underlying +/ logical store database is persisted to disk, e.g. every 10000 heights. Blocks +/ since the last IAVL snapshot must be available for replay on application restart. +/ +/ - State sync snapshots: Blocks since the oldest available snapshot must be +/ available for state sync nodes to catch up (oldest because a node may be +/ restoring an old snapshot while a new snapshot was taken). +/ +/ - Local (minRetainBlocks) + +config: Archive nodes may want to retain more or +/ all blocks, e.g. via a local config option min-retain-blocks. There may also +/ be a need to vary retention for other nodes, e.g. sentry nodes which do not +/ need historical blocks. +func (app *BaseApp) + +GetBlockRetentionHeight(commitHeight int64) + +int64 { + / pruning is disabled if minRetainBlocks is zero + if app.minRetainBlocks == 0 { + return 0 +} + minNonZero := func(x, y int64) + +int64 { + switch { + case x == 0: + return y + case y == 0: + return x + case x < y: + return x + + default: + return y +} + +} + + / Define retentionHeight as the minimum value that satisfies all non-zero + / constraints. All blocks below (commitHeight-retentionHeight) + +are pruned + / from CometBFT. + var retentionHeight int64 + + / Define the number of blocks needed to protect against misbehaving validators + / which allows light clients to operate safely. Note, we piggy back of the + / evidence parameters instead of computing an estimated number of blocks based + / on the unbonding period and block commitment time as the two should be + / equivalent. 
+ cp := app.GetConsensusParams(app.finalizeBlockState.ctx) + if cp.Evidence != nil && cp.Evidence.MaxAgeNumBlocks > 0 { + retentionHeight = commitHeight - cp.Evidence.MaxAgeNumBlocks +} + if app.snapshotManager != nil { + snapshotRetentionHeights := app.snapshotManager.GetSnapshotBlockRetentionHeights() + if snapshotRetentionHeights > 0 { + retentionHeight = minNonZero(retentionHeight, commitHeight-snapshotRetentionHeights) +} + +} + v := commitHeight - int64(app.minRetainBlocks) + +retentionHeight = minNonZero(retentionHeight, v) + if retentionHeight <= 0 { + / prune nothing in the case of a non-positive height + return 0 +} + +return retentionHeight +} + +/ toVoteInfo converts the new ExtendedVoteInfo to VoteInfo. +func toVoteInfo(votes []abci.ExtendedVoteInfo) []abci.VoteInfo { + legacyVotes := make([]abci.VoteInfo, len(votes)) + for i, vote := range votes { + legacyVotes[i] = abci.VoteInfo{ + Validator: abci.Validator{ + Address: vote.Validator.Address, + Power: vote.Validator.Power, +}, + BlockIdFlag: vote.BlockIdFlag, +} + +} + +return legacyVotes +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/mint.mdx b/docs/sdk/v0.53/documentation/module-system/mint.mdx new file mode 100644 index 00000000..c1f5f922 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/mint.mdx @@ -0,0 +1,518 @@ +--- +title: '`x/mint`' +description: >- + The x/mint module handles the regular minting of new tokens in a configurable + manner. +--- + +The `x/mint` module handles the regular minting of new tokens in a configurable manner. 
+
+## Contents
+
+* [Concepts](#concepts)
+  * [The Minting Mechanism](#the-minting-mechanism)
+  * [Custom Minters](#custom-minters)
+* [State](#state)
+  * [Minter](#minter)
+  * [Params](#params)
+* [Begin-Block](#begin-block)
+  * [NextInflationRate](#nextinflationrate)
+  * [NextAnnualProvisions](#nextannualprovisions)
+  * [BlockProvision](#blockprovision)
+* [Parameters](#parameters)
+* [Events](#events)
+  * [BeginBlocker](#beginblocker)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+  * [REST](#rest)
+
+## Concepts
+
+### The Minting Mechanism
+
+The default minting mechanism was designed to:
+
+* allow for a flexible inflation rate determined by market demand targeting a particular bonded-stake ratio
+* effect a balance between market liquidity and staked supply
+
+In order to best determine the appropriate market rate for inflation rewards, a
+moving change rate is used. The moving change rate mechanism ensures that if
+the % bonded is either over or under the goal %-bonded, the inflation rate will
+adjust to further incentivize or disincentivize being bonded, respectively. Setting the goal
+%-bonded at less than 100% encourages the network to maintain some non-staked tokens,
+which should help provide some liquidity.
+
+It can be broken down as follows:
+
+* If the actual percentage of bonded tokens is below the goal %-bonded, the inflation rate will
+  increase until a maximum value is reached
+* If the goal %-bonded (67% on the Cosmos Hub) is maintained, then the inflation
+  rate will stay constant
+* If the actual percentage of bonded tokens is above the goal %-bonded, the inflation rate will
+  decrease until a minimum value is reached
+
+### Custom Minters
+
+As of Cosmos SDK v0.53.0, developers can set a custom `MintFn` for the module for specialized token minting logic.
+
+The function signature that a `MintFn` must implement is as follows:
+
+```go
+// MintFn defines the function that needs to be implemented in order to customize the minting process. 
+type MintFn func(ctx sdk.Context, k *Keeper) error
+```
+
+This can be passed to the `Keeper` upon creation with an additional `Option`:
+
+```go
+app.MintKeeper = mintkeeper.NewKeeper(
+    appCodec,
+    runtime.NewKVStoreService(keys[minttypes.StoreKey]),
+    app.StakingKeeper,
+    app.AccountKeeper,
+    app.BankKeeper,
+    authtypes.FeeCollectorName,
+    authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+    // mintkeeper.WithMintFn(CUSTOM_MINT_FN), // custom mintFn can be added here
+)
+```
+
+#### Custom Minter DI Example
+
+Below is a simple approach to creating a custom mint function with extra dependencies in DI configurations.
+For this basic example, we will make the minter simply double the supply of `foo` coin.
+
+First, we will define a function that takes our required dependencies and returns a `MintFn`.
+
+```go expandable
+// MyCustomMintFunction is a custom mint function that doubles the supply of `foo` coin.
+func MyCustomMintFunction(bank bankkeeper.BaseKeeper) mintkeeper.MintFn {
+    return func(ctx sdk.Context, k *mintkeeper.Keeper) error {
+        supply := bank.GetSupply(ctx, "foo")
+        // minting an amount equal to the current supply doubles it
+        return k.MintCoins(ctx, sdk.NewCoins(supply))
+    }
+}
+```
+
+Then, pass the function defined above into the `depinject.Supply` function with the required dependencies.
+
+```go expandable
+// NewSimApp returns a reference to an initialized SimApp.
+func NewSimApp(
+    logger log.Logger,
+    db dbm.DB,
+    traceStore io.Writer,
+    loadLatest bool,
+    appOpts servertypes.AppOptions,
+    baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+    var (
+        app        = &SimApp{}
+        appBuilder *runtime.AppBuilder
+        appConfig  = depinject.Configs(
+            AppConfig,
+            depinject.Supply(
+                appOpts,
+                logger,
+                // our custom mint function with the necessary dependency passed in.
+                MyCustomMintFunction(app.BankKeeper),
+            ),
+        )
+    )
+    // ...
+}
+```
+
+## State
+
+### Minter
+
+The minter is a space for holding current inflation information. 
+ +* Minter: `0x00 -> ProtocolBuffer(minter)` + +```protobuf +// Minter represents the minting state. +message Minter { + // current annual inflation rate + string inflation = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // current annual expected provisions + string annual_provisions = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} +``` + +### Params + +The mint module stores its params in state with the prefix of `0x01`, +it can be updated with governance or the address with authority. + +* Params: `mint/params -> legacy_amino(params)` + +```protobuf +// Params defines the parameters for the x/mint module. +message Params { + option (gogoproto.goproto_stringer) = false; + option (amino.name) = "cosmos-sdk/x/mint/Params"; + + // type of coin to mint + string mint_denom = 1; + // maximum annual change in inflation rate + string inflation_rate_change = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // maximum inflation rate + string inflation_max = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // minimum inflation rate + string inflation_min = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // goal of percent bonded atoms + string goal_bonded = 5 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // expected blocks per year + uint64 blocks_per_year = 6; +} +``` + +## Begin-Block + +Minting parameters are recalculated and inflation paid at the beginning of 
each block.
+
+### Inflation rate calculation
+
+Inflation rate is calculated using an "inflation calculation function" that's
+passed to the `NewAppModule` function. If no function is passed, then the SDK's
+default inflation function will be used (`NextInflationRate`). In case custom
+inflation calculation logic is needed, this can be achieved by defining and
+passing a function that matches `InflationCalculationFn`'s signature.
+
+```go
+type InflationCalculationFn func(ctx sdk.Context, minter Minter, params Params, bondedRatio math.LegacyDec) math.LegacyDec
+```
+
+#### NextInflationRate
+
+The target annual inflation rate is recalculated each block.
+The inflation is also subject to a rate change (positive or negative)
+depending on the distance from the desired ratio (67%). The maximum rate change
+possible is defined to be 13% per year; however, the annual inflation is capped
+between 7% and 20%.
+
+```go expandable
+NextInflationRate(params Params, bondedRatio math.LegacyDec) (inflation math.LegacyDec) {
+    inflationRateChangePerYear = (1 - bondedRatio/params.GoalBonded) * params.InflationRateChange
+    inflationRateChange = inflationRateChangePerYear / blocksPerYr
+
+    // increase the new annual inflation for this next block
+    inflation += inflationRateChange
+    if inflation > params.InflationMax {
+        inflation = params.InflationMax
+    }
+    if inflation < params.InflationMin {
+        inflation = params.InflationMin
+    }
+
+    return inflation
+}
+```
+
+### NextAnnualProvisions
+
+Calculate the annual provisions based on current total supply and inflation
+rate. This parameter is calculated once per block.
+
+```go
+NextAnnualProvisions(params Params, totalSupply math.LegacyDec) (provisions math.LegacyDec) {
+    return Inflation * totalSupply
+}
+```
+
+### BlockProvision
+
+Calculate the provisions generated for each block based on current annual provisions. 
The provisions are then minted by the `mint` module's `ModuleMinterAccount` and then transferred to the `auth` module's `FeeCollector` `ModuleAccount`.
+
+```go
+BlockProvision(params Params) sdk.Coin {
+    provisionAmt = AnnualProvisions / params.BlocksPerYear
+    return sdk.NewCoin(params.MintDenom, provisionAmt.Truncate())
+}
+```
+
+## Parameters
+
+The minting module contains the following parameters:
+
+| Key                 | Type            | Example                |
+| ------------------- | --------------- | ---------------------- |
+| MintDenom           | string          | "uatom"                |
+| InflationRateChange | string (dec)    | "0.130000000000000000" |
+| InflationMax        | string (dec)    | "0.200000000000000000" |
+| InflationMin        | string (dec)    | "0.070000000000000000" |
+| GoalBonded          | string (dec)    | "0.670000000000000000" |
+| BlocksPerYear       | string (uint64) | "6311520"              |
+
+## Events
+
+The minting module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key      | Attribute Value      |
+| ---- | ------------------ | -------------------- |
+| mint | bonded\_ratio      | `{bondedRatio}`      |
+| mint | inflation          | `{inflation}`        |
+| mint | annual\_provisions | `{annualProvisions}` |
+| mint | amount             | `{amount}`           |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `mint` module using the CLI.
+
+#### Query
+
+The `query` command allows users to query `mint` state. 

```shell
simd query mint --help
```

##### annual-provisions

The `annual-provisions` command allows users to query the current minting annual provisions value.

```shell
simd query mint annual-provisions [flags]
```

Example:

```shell
simd query mint annual-provisions
```

Example Output:

```shell
22268504368893.612100895088410693
```

##### inflation

The `inflation` command allows users to query the current minting inflation value.

```shell
simd query mint inflation [flags]
```

Example:

```shell
simd query mint inflation
```

Example Output:

```shell
0.199200302563256955
```

##### params

The `params` command allows users to query the current minting parameters.

```shell
simd query mint params [flags]
```

Example:

```shell
simd query mint params
```

Example Output:

```yml
blocks_per_year: "4360000"
goal_bonded: "0.670000000000000000"
inflation_max: "0.200000000000000000"
inflation_min: "0.070000000000000000"
inflation_rate_change: "0.130000000000000000"
mint_denom: stake
```

### gRPC

A user can query the `mint` module using gRPC endpoints.

#### AnnualProvisions

The `AnnualProvisions` endpoint allows users to query the current minting annual provisions value.

```shell
/cosmos.mint.v1beta1.Query/AnnualProvisions
```

Example:

```shell
grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/AnnualProvisions
```

Example Output:

```json
{
  "annualProvisions": "1432452520532626265712995618"
}
```

#### Inflation

The `Inflation` endpoint allows users to query the current minting inflation value.

```shell
/cosmos.mint.v1beta1.Query/Inflation
```

Example:

```shell
grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Inflation
```

Example Output:

```json
{
  "inflation": "130197115720711261"
}
```

#### Params

The `Params` endpoint allows users to query the current minting parameters.

```shell
/cosmos.mint.v1beta1.Query/Params
```

Example:

```shell
grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Params
```

Example Output:

```json
{
  "params": {
    "mintDenom": "stake",
    "inflationRateChange": "130000000000000000",
    "inflationMax": "200000000000000000",
    "inflationMin": "70000000000000000",
    "goalBonded": "670000000000000000",
    "blocksPerYear": "6311520"
  }
}
```

### REST

A user can query the `mint` module using REST endpoints.

#### annual-provisions

```shell
/cosmos/mint/v1beta1/annual_provisions
```

Example:

```shell
curl "localhost:1317/cosmos/mint/v1beta1/annual_provisions"
```

Example Output:

```json
{
  "annualProvisions": "1432452520532626265712995618"
}
```

#### inflation

```shell
/cosmos/mint/v1beta1/inflation
```

Example:

```shell
curl "localhost:1317/cosmos/mint/v1beta1/inflation"
```

Example Output:

```json
{
  "inflation": "130197115720711261"
}
```

#### params

```shell
/cosmos/mint/v1beta1/params
```

Example:

```shell
curl "localhost:1317/cosmos/mint/v1beta1/params"
```

Example Output:

```json
{
  "params": {
    "mintDenom": "stake",
    "inflationRateChange": "130000000000000000",
    "inflationMax": "200000000000000000",
    "inflationMin": "70000000000000000",
    "goalBonded": "670000000000000000",
    "blocksPerYear": "6311520"
  }
}
```
diff --git a/docs/sdk/v0.53/documentation/module-system/module-development-guide.mdx b/docs/sdk/v0.53/documentation/module-system/module-development-guide.mdx
new file mode 100644
index 00000000..29271ecd
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/module-development-guide.mdx
@@ -0,0 +1,527 @@
---
title: Module Development Guide
description: Technical reference for Cosmos SDK module development
---

# Module Development Guide

## Overview

Cosmos SDK modules are self-contained units of functionality that extend the capabilities of a blockchain application. To understand their flexibility, consider the range of modules in production: from the `x/bank` module that handles token transfers, to `x/ibc` that enables communication between independent blockchains, to the EVM module that runs Ethereum smart contracts on Cosmos chains.

## Module Types and Capabilities

The Cosmos SDK doesn't prescribe a single pattern for modules.
Let's examine how different production modules leverage this flexibility: + +### The Bank Module: State Management Foundation + +The `x/bank` module manages all token balances and transfers. It's the foundation that other modules build upon: + +```go +// From x/bank/keeper/keeper.go +type BaseKeeper struct { + ak types.AccountKeeper + cdc codec.BinaryCodec + storeService store.KVStoreService + mintCoinsRestrictionFn MintingRestrictionFn + + // State management using Collections + Schema collections.Schema + Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] + Supply collections.Map[string, math.Int] + DenomMetadata collections.Map[string, types.Metadata] + SendEnabled collections.Map[string, bool] +} +``` + +The bank module demonstrates how modules manage state using the Collections framework. Every account balance is stored as a key-value pair where the key is the combination of address and denomination. + +### IBC: Protocol Implementation + +The IBC (Inter-Blockchain Communication) module shows how modules can implement complex protocols. IBC isn't just a simple state manager—it's a complete protocol for trustless communication between chains: + +```go +// From ibc-go/modules/core/keeper/keeper.go +type Keeper struct { + cdc codec.BinaryCodec + + ClientKeeper clientkeeper.Keeper + ConnectionKeeper connectionkeeper.Keeper + ChannelKeeper channelkeeper.Keeper + PortKeeper portkeeper.Keeper + + Router *porttypes.Router + + authority string +} +``` + +IBC is composed of multiple sub-keepers, each managing different aspects of the protocol: clients track other chains, connections establish communication paths, channels handle packet flow, and ports manage module bindings. 
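This layered-keeper composition can be sketched in isolation. The following is an illustrative stand-alone example, not the actual ibc-go API: `clientKeeper`, `channelKeeper`, `coreKeeper`, and their methods are simplified stand-ins for the real components.

```go
package main

import "fmt"

// clientKeeper is a toy stand-in for the layer that tracks other chains.
type clientKeeper struct{ counterpartyChains map[string]bool }

func (k clientKeeper) VerifyChain(chainID string) error {
	if !k.counterpartyChains[chainID] {
		return fmt.Errorf("no client for chain %s", chainID)
	}
	return nil
}

// channelKeeper is a toy stand-in for the layer that handles packet flow.
type channelKeeper struct{ nextSequence map[string]uint64 }

func (k channelKeeper) NextSequence(channelID string) uint64 {
	seq := k.nextSequence[channelID]
	k.nextSequence[channelID] = seq + 1
	return seq
}

// coreKeeper composes the sub-keepers, mirroring how IBC's core Keeper
// delegates each protocol concern to a dedicated component.
type coreKeeper struct {
	clients  clientKeeper
	channels channelKeeper
}

// SendPacket sketches the delegation flow: verify the counterparty via the
// client layer, then take the next sequence number from the channel layer.
func (k coreKeeper) SendPacket(chainID, channelID string) (uint64, error) {
	if err := k.clients.VerifyChain(chainID); err != nil {
		return 0, err
	}
	return k.channels.NextSequence(channelID), nil
}

func main() {
	k := coreKeeper{
		clients:  clientKeeper{counterpartyChains: map[string]bool{"chain-b": true}},
		channels: channelKeeper{nextSequence: map[string]uint64{"channel-0": 1}},
	}
	seq, err := k.SendPacket("chain-b", "channel-0")
	fmt.Println(seq, err)
}
```

The design point is that each sub-keeper owns one protocol layer, so the core keeper stays a thin coordinator rather than a monolith.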
+ +### The Upgrade Module: Coordinating Chain Evolution + +The `x/upgrade` module demonstrates a different pattern—it doesn't manage user funds or complex protocols, but coordinates the entire chain's upgrade process: + +```go +// From x/upgrade/keeper/keeper.go +func (k Keeper) ScheduleUpgrade(ctx context.Context, plan types.Plan) error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + + // The upgrade module has the special ability to halt the chain + if err := plan.ValidateBasic(); err != nil { + return err + } + + if !plan.Time.IsZero() { + return errors.New("upgrade by time is disabled") + } + + if plan.Height <= sdkCtx.HeaderInfo().Height { + return errors.New("upgrade cannot be scheduled in the past") + } + + // Store the plan - when the chain reaches this height, it will halt + if err := k.SetUpgradePlan(ctx, plan); err != nil { + return err + } + + return nil +} +``` + +This demonstrates how modules can have special privileges—the upgrade module can halt the entire chain, something no regular transaction could do. + +## Core Requirements + +Every module must implement `AppModuleBasic`, but what they do beyond that varies widely. Let's look at how different modules approach this: + +### The Minimal Interface: x/auth + +The auth module manages accounts and transaction authentication. 
Its basic interface is straightforward: + +```go +// From x/auth/module.go +func (AppModuleBasic) Name() string { + return types.ModuleName +} + +func (AppModuleBasic) RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +func (AppModuleBasic) RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} +``` + +### Extended Capabilities: x/staking + +The staking module needs to participate in block processing to handle validator updates: + +```go +// From x/staking/module.go +type AppModule struct { + AppModuleBasic + keeper *keeper.Keeper + accountKeeper types.AccountKeeper + bankKeeper types.BankKeeper +} + +// BeginBlock updates validator set and handles slashing +func (am AppModule) BeginBlock(ctx context.Context) error { + return am.keeper.BeginBlocker(ctx) +} + +// EndBlock processes validator updates for CometBFT +func (am AppModule) EndBlock(ctx context.Context) error { + return am.keeper.EndBlocker(ctx) +} +``` + +The staking module's `BeginBlock` and `EndBlock` methods aren't just arbitrary hooks—they're essential for maintaining consensus by updating the validator set that CometBFT uses. 
+ +## State Management Patterns + +### Collections Framework: x/bank's Approach + +The bank module uses Collections for type-safe state management: + +```go +// From x/bank/keeper/keeper.go - Setting up collections +sb := collections.NewSchemaBuilder(storeService) + +k := BaseKeeper{ + Balances: collections.NewMap( + sb, + types.BalancesPrefix, + "balances", + collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), + sdk.IntValue, + ), + Supply: collections.NewMap( + sb, + types.SupplyKey, + "supply", + collections.StringKey, + sdk.IntValue, + ), +} + +// From x/bank/keeper/send.go - Using collections +func (k BaseSendKeeper) GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin { + amt, err := k.Balances.Get(ctx, collections.Join(addr, denom)) + if err != nil && !errors.Is(err, collections.ErrNotFound) { + panic(err) + } + return sdk.NewCoin(denom, amt) +} +``` + +### Collections in Governance: x/gov Module + +The governance module also uses Collections for managing proposals and votes: + +```go +// From x/gov/keeper/keeper.go +type Keeper struct { + // ... other fields ... 
+ + // Collections for state management + Proposals collections.Map[uint64, v1.Proposal] + Votes collections.Map[collections.Pair[uint64, sdk.AccAddress], v1.Vote] + ActiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] + InactiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] + VotingPeriodProposals collections.Map[uint64, []byte] +} + +// From x/gov/keeper/keeper.go - Setup +k := Keeper{ + Proposals: collections.NewMap( + sb, types.ProposalsKeyPrefix, "proposals", + collections.Uint64Key, codec.CollValue[v1.Proposal](cdc), + ), + ActiveProposalsQueue: collections.NewMap( + sb, types.ActiveProposalQueuePrefix, "active_proposals_queue", + collections.PairKeyCodec(sdk.TimeKey, collections.Uint64Key), + collections.Uint64Value, + ), +} +``` + +## Service Registration + +### Transaction Processing: x/bank's Message Server + +The bank module's message server shows how modules handle transactions: + +```go +// From x/bank/keeper/msg_server.go +type msgServer struct { + BaseKeeper +} + +func (k msgServer) Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + from, err := k.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, err + } + + to, err := k.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, err + } + + if k.BlockedAddr(to) { + return nil, errors.Wrapf(types.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) + } + + err = k.SendCoins(goCtx, from, to, msg.Amount) + if err != nil { + return nil, err + } + + return &types.MsgSendResponse{}, nil +} +``` + +This isn't just moving numbers in a database—the bank module enforces critical invariants like preventing sends to module accounts that shouldn't receive funds. 
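The invariant enforced above can be isolated in a small sketch. This is a toy model for illustration, not the bank module's actual keeper: addresses are plain strings and balances plain integers.

```go
package main

import (
	"errors"
	"fmt"
)

// sendKeeper models the bank msgServer's checks: it refuses transfers to
// blocked addresses (e.g. module accounts that must not receive external
// funds) before moving any balance.
type sendKeeper struct {
	blocked  map[string]bool
	balances map[string]int64
}

func (k *sendKeeper) Send(from, to string, amount int64) error {
	if k.blocked[to] {
		return fmt.Errorf("%s is not allowed to receive funds", to)
	}
	if k.balances[from] < amount {
		return errors.New("insufficient funds")
	}
	k.balances[from] -= amount
	k.balances[to] += amount
	return nil
}

func main() {
	k := &sendKeeper{
		blocked:  map[string]bool{"feecollector": true},
		balances: map[string]int64{"alice": 100},
	}
	fmt.Println(k.Send("alice", "feecollector", 10)) // rejected by the blocklist
	fmt.Println(k.Send("alice", "bob", 10), k.balances["bob"])
}
```

Note that the blocklist check runs before any state is touched, so a rejected transfer leaves balances unchanged.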
+ +### Query Services: x/staking's Information Provider + +The staking module provides extensive query services for validator information: + +```go +// From x/staking/keeper/grpc_query.go +func (k Querier) Validators(ctx context.Context, req *types.QueryValidatorsRequest) (*types.QueryValidatorsResponse, error) { + if req == nil { + return nil, errors.New("empty request") + } + + // This query is critical for UIs and other chains to understand the validator set + validators, pageRes, err := query.CollectionPaginate( + ctx, + k.Keeper.Validators, + req.Pagination, + func(key []byte, val types.Validator) (types.Validator, error) { + if req.Status != "" && !strings.EqualFold(val.Status.String(), req.Status) { + return types.Validator{}, errors.Wrapf(collection.ErrSkipIteration, "validator %s does not match status %s", val.OperatorAddress, req.Status) + } + return val, nil + }, + ) + + if err != nil { + return nil, errors.New("failed to query validators") + } + + return &types.QueryValidatorsResponse{Validators: validators, Pagination: pageRes}, nil +} +``` + +## Module Lifecycle Integration + +### The Distribution Module: Rewards at Block Boundaries + +The distribution module calculates and distributes staking rewards during block processing: + +```go +// From x/distribution/abci.go +func (am AppModule) BeginBlock(ctx context.Context) error { + defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyBeginBlocker) + + // Distribute rewards for the previous block + // This must happen in BeginBlock to ensure rewards are distributed before any transactions + return am.keeper.AllocateTokens(ctx) +} + +// From x/distribution/keeper/allocation.go +func (k Keeper) AllocateTokens(ctx context.Context) error { + // Get the total power of all validators + totalPower := k.stakingKeeper.TotalBondedTokens(ctx) + + // Get the fees collected in the last block + feeCollector := k.authKeeper.GetModuleAccount(ctx, k.feeCollectorName) + feesCollected := 
k.bankKeeper.GetAllBalances(ctx, feeCollector.GetAddress()) + + // Distribute to validators based on their voting power + // This complex calculation ensures fair reward distribution + // ... +} +``` + +### The Evidence Module: Handling Misbehavior + +The evidence module processes evidence of validator misbehavior: + +```go +// From x/evidence/keeper/keeper.go +type Keeper struct { + cdc codec.BinaryCodec + storeService corestore.KVStoreService + router types.Router + stakingKeeper types.StakingKeeper + slashingKeeper types.SlashingKeeper + + Schema collections.Schema + Evidences collections.Map[[]byte, exported.Evidence] +} + +// From x/evidence/keeper/infraction.go +// handleEquivocationEvidence processes evidence of double signing +func (k Keeper) handleEquivocationEvidence(ctx context.Context, evidence *types.Equivocation) error { + consAddr := evidence.GetConsensusAddress(k.stakingKeeper.ConsensusAddressCodec()) + + validator, err := k.stakingKeeper.ValidatorByConsAddr(ctx, consAddr) + if err != nil { + return err + } + if validator == nil || validator.IsUnbonded() { + return nil // Cannot slash unbonded validators + } + + // Check if validator is already tombstoned + if k.slashingKeeper.IsTombstoned(ctx, consAddr) { + return nil // Already slashed for equivocation + } + + // Slash the validator for double signing + // This protects the network from malicious validators + k.slashingKeeper.SlashWithInfractionReason( + ctx, consAddr, infractionHeight, power, + k.slashingKeeper.SlashFractionDoubleSign(ctx), + types.Infraction_INFRACTION_DOUBLE_SIGN, + ) +} +``` + +## Module Communication Patterns + +### Direct Keeper Interaction: x/gov and x/staking + +The governance module needs to execute proposals that can affect other modules. Here's how it interacts with the staking module: + +```go +// From x/gov/keeper/keeper.go +type Keeper struct { + // Governance needs access to staking to handle validator-related proposals + sk types.StakingKeeper + // ... 
other keepers +} + +// From x/staking/types/expected_keepers.go +type StakingKeeper interface { + // Gov module can jail validators through proposals + Jail(context.Context, sdk.ConsAddress) error + Unjail(context.Context, sdk.ConsAddress) error + + // Gov can update staking parameters + SetParams(context.Context, types.Params) error +} +``` + +### Event-Based Communication: x/slashing Notifications + +The slashing module emits events that other modules and external systems can respond to: + +```go +// From x/slashing/keeper/infractions.go +func (k Keeper) HandleValidatorSignature(ctx context.Context, addr cryptotypes.Address, power int64, signed comet.BlockIDFlag) error { + // ... slashing logic ... + + // Emit an event that other modules or external monitors can observe + sdkCtx.EventManager().EmitEvent( + sdk.NewEvent( + types.EventTypeSlash, + sdk.NewAttribute(types.AttributeKeyAddress, consAddr.String()), + sdk.NewAttribute(types.AttributeKeyPower, fmt.Sprintf("%d", power)), + sdk.NewAttribute(types.AttributeKeyReason, types.AttributeValueMissingSignature), + sdk.NewAttribute(types.AttributeKeyJailed, fmt.Sprintf("%v", !validator.IsJailed())), + ), + ) +} +``` + +### The Feegrant Module: Extending Capabilities + +The feegrant module shows how modules can extend the capabilities of others without modifying them: + +```go +// From x/feegrant/keeper/keeper.go +func (k Keeper) UseGrantedFees(ctx context.Context, granter, grantee sdk.AccAddress, fee sdk.Coins, msgs []sdk.Msg) error { + grant, err := k.GetGrant(ctx, granter, grantee) + if err != nil { + return err + } + + // The feegrant module allows one account to pay fees for another + // This extends the auth module's fee payment system without modifying it + allow, err := grant.GetGrant().Accept(ctx, fee, msgs) + if err != nil { + return err + } + + // Deduct fees from the granter, not the grantee + err = k.bankKeeper.SendCoinsFromAccountToModule(ctx, granter, types.FeeCollectorName, fee) + if err != nil { + 
return err + } + + // Update or remove the grant based on its remaining allowance + if allow != nil { + grant.Grant = allow + k.SetGrant(ctx, grant) + } else { + k.RemoveGrant(ctx, granter, grantee) + } + + return nil +} +``` + +## Advanced Module Capabilities + +### The AuthZ Module: Delegated Permissions + +The authz module enables sophisticated permission delegation: + +```go +// From x/authz/keeper/keeper.go +func (k Keeper) DispatchActions(ctx context.Context, grantee sdk.AccAddress, msgs []sdk.Msg) ([][]byte, error) { + results := make([][]byte, len(msgs)) + + for i, msg := range msgs { + signers := msg.GetSigners() + if len(signers) != 1 { + return nil, errors.New("authz can only dispatch messages with exactly one signer") + } + + granter := signers[0] + + // Check if grantee has authorization from granter + authorization, expiration, err := k.GetAuthorization(ctx, grantee, granter, sdk.MsgTypeURL(msg)) + if err != nil { + return nil, err + } + + // Execute the message on behalf of the granter + resp, err := authorization.Accept(ctx, msg) + if err != nil { + return nil, err + } + + if resp.Delete { + k.DeleteGrant(ctx, grantee, granter, sdk.MsgTypeURL(msg)) + } else if resp.Updated != nil { + k.SaveGrant(ctx, grantee, granter, resp.Updated, expiration) + } + + results[i] = sdk.MsgTypeURL(msg) + } + + return results, nil +} +``` + +## Testing Strategies + +### The Bank Module's Comprehensive Testing + +The bank module's tests demonstrate thorough testing practices: + +```go +// From x/bank/keeper/keeper_test.go +func (suite *KeeperTestSuite) TestSendCoinsFromModuleToAccount() { + ctx := suite.ctx + + // Setup: Create a module account with funds + moduleName := "test" + moduleAcc := authtypes.NewModuleAccount( + authtypes.NewBaseAccountWithAddress(sdk.AccAddress([]byte(moduleName))), + moduleName, + authtypes.Minter, + ) + + suite.authKeeper.SetModuleAccount(ctx, moduleAcc) + suite.Require().NoError(suite.bankKeeper.MintCoins(ctx, moduleName, initCoins)) + + 
// Test: Send from module to account + addr := sdk.AccAddress([]byte("addr")) + suite.Require().NoError(suite.bankKeeper.SendCoinsFromModuleToAccount( + ctx, moduleName, addr, initCoins, + )) + + // Verify: Check balances + balance := suite.bankKeeper.GetAllBalances(ctx, addr) + suite.Require().Equal(initCoins, balance) + + moduleBalance := suite.bankKeeper.GetAllBalances(ctx, moduleAcc.GetAddress()) + suite.Require().True(moduleBalance.Empty()) +} +``` + +## References + +- [Cosmos SDK Modules Source Code](https://github.com/cosmos/cosmos-sdk/tree/main/x) - Production module implementations +- [IBC-Go Modules](https://github.com/cosmos/ibc-go) - Inter-blockchain communication protocol +- [CosmWasm](https://github.com/CosmWasm/cosmwasm) - Smart contract platform module +- [Ethermint EVM](https://github.com/evmos/ethermint) - Ethereum Virtual Machine implementation \ No newline at end of file diff --git a/docs/sdk/v0.53/documentation/module-system/module-interfaces.mdx b/docs/sdk/v0.53/documentation/module-system/module-interfaces.mdx new file mode 100644 index 00000000..729be109 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/module-interfaces.mdx @@ -0,0 +1,1157 @@ +--- +title: Module Interfaces +--- + +## Synopsis + +This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. + + +**Pre-requisite Readings** + +- [Building Modules Intro](docs/sdk/v0.50/learn/intro/overview) + + + +## CLI + +One of the main interfaces for an application is the [command-line interface](/docs/sdk/v0.53/api-reference/client-tools/cli). This entrypoint adds commands from the application's modules enabling end-users to create [**messages**](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages) wrapped in transactions and [**queries**](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#queries). The CLI files are typically found in the module's `./client/cli` folder. 
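CLI commands spend much of their code parsing user input such as coin expressions like `100stake`. A stand-alone sketch of that kind of parsing follows; it is a deliberately simplified illustration, not the SDK's `sdk.ParseCoinsNormalized`, which also handles coin lists, decimal amounts, sorting, and denom validation.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// coinRe matches an integer amount followed by a lowercase denomination.
var coinRe = regexp.MustCompile(`^([0-9]+)([a-z][a-z0-9/]{2,})$`)

// parseCoin splits a string like "100stake" into an integer amount and a
// denomination. The real SDK parser is stricter and richer; this sketch
// only captures the basic shape of the input.
func parseCoin(s string) (int64, string, error) {
	m := coinRe.FindStringSubmatch(s)
	if m == nil {
		return 0, "", fmt.Errorf("invalid coin expression: %q", s)
	}
	amount, err := strconv.ParseInt(m[1], 10, 64)
	if err != nil {
		return 0, "", err
	}
	return amount, m[2], nil
}

func main() {
	amount, denom, err := parseCoin("1000stake")
	fmt.Println(amount, denom, err)
}
```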
+ +### Transaction Commands + +In order to create messages that trigger state changes, end-users must create [transactions](/docs/sdk/v0.53/documentation/protocol-development/transactions) that wrap and deliver the messages. A transaction command creates a transaction that includes one or more messages. + +Transaction commands typically have their own `tx.go` file that lives within the module's `./client/cli` folder. The commands are specified in getter functions and the name of the function should include the name of the command. + +Here is an example from the `x/bank` module: + +```go expandable +package cli + +import ( + + "fmt" + "cosmossdk.io/core/address" + sdkmath "cosmossdk.io/math" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/tx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var FlagSplit = "split" + +/ NewTxCmd returns a root CLI command handler for all x/bank transaction commands. +func NewTxCmd(ac address.Codec) *cobra.Command { + txCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Bank transaction subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +txCmd.AddCommand( + NewSendTxCmd(ac), + NewMultiSendTxCmd(ac), + ) + +return txCmd +} + +/ NewSendTxCmd returns a CLI command handler for creating a MsgSend transaction. +func NewSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "send [from_key_or_address] [to_address] [amount]", + Short: "Send funds from one account to another.", + Long: `Send funds from one account to another. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +toAddr, err := ac.StringToBytes(args[1]) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[2]) + if err != nil { + return err +} + if len(coins) == 0 { + return fmt.Errorf("invalid coins") +} + msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, coins) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} + +/ NewMultiSendTxCmd returns a CLI command handler for creating a MsgMultiSend transaction. +/ For a better UX this command is limited to send funds from one account to two or more accounts. +func NewMultiSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "multi-send [from_key_or_address] [to_address_1, to_address_2, ...] [amount]", + Short: "Send funds from one account to two or more accounts.", + Long: `Send funds from one account to two or more accounts. +By default, sends the [amount] to each address of the list. +Using the '--split' flag, the [amount] is split equally between the addresses. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.MinimumNArgs(4), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[len(args)-1]) + if err != nil { + return err +} + if coins.IsZero() { + return fmt.Errorf("must send positive amount") +} + +split, err := cmd.Flags().GetBool(FlagSplit) + if err != nil { + return err +} + totalAddrs := sdkmath.NewInt(int64(len(args) - 2)) + / coins to be received by the addresses + sendCoins := coins + if split { + sendCoins = coins.QuoInt(totalAddrs) +} + +var output []types.Output + for _, arg := range args[1 : len(args)-1] { + toAddr, err := ac.StringToBytes(arg) + if err != nil { + return err +} + +output = append(output, types.NewOutput(toAddr, sendCoins)) +} + + / amount to be send from the from address + var amount sdk.Coins + if split { + / user input: 1000stake to send to 3 addresses + / actual: 333stake to each address (=> 999stake actually sent) + +amount = sendCoins.MulInt(totalAddrs) +} + +else { + amount = coins.MulInt(totalAddrs) +} + msg := types.NewMsgMultiSend(types.NewInput(clientCtx.FromAddress, amount), output) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +cmd.Flags().Bool(FlagSplit, false, "Send the equally split token amount to each address") + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} +``` + +In the example, `NewSendTxCmd()` creates and returns the transaction command for a transaction that wraps and delivers `MsgSend`. `MsgSend` is the message used to send tokens from one account to another. + +In general, the getter function does the following: + +- **Constructs the command:** Read the [Cobra Documentation](https://pkg.go.dev/github.com/spf13/cobra) for more detailed information on how to create commands. + - **Use:** Specifies the format of the user input required to invoke the command. 
In the example above, `send` is the name of the transaction command and `[from_key_or_address]`, `[to_address]`, and `[amount]` are the arguments.
  - **Args:** The number of arguments the user provides. In this case, there are exactly three: `[from_key_or_address]`, `[to_address]`, and `[amount]`.
  - **Short and Long:** Descriptions for the command. A `Short` description is expected. A `Long` description can be used to provide additional information that is displayed when a user adds the `--help` flag.
  - **RunE:** Defines a function that can return an error. This is the function that is called when the command is executed. This function encapsulates all of the logic to create a new transaction.
    - The function typically starts by getting the `clientCtx`, which can be done with `client.GetClientTxContext(cmd)`. The `clientCtx` contains information relevant to transaction handling, including information about the user. In this example, the `clientCtx` is used to retrieve the address of the sender by calling `clientCtx.GetFromAddress()`.
    - If applicable, the command's arguments are parsed. In this example, the arguments `[to_address]` and `[amount]` are both parsed.
    - A [message](/docs/sdk/v0.53/documentation/module-system/messages-and-queries) is created using the parsed arguments and information from the `clientCtx`. The constructor function of the message type is called directly. In this case, `types.NewMsgSend(fromAddr, toAddr, amount)`. It's good practice to call, if possible, the necessary [message validation methods](/docs/sdk/v0.53/documentation/module-system/msg-services#Validation) before broadcasting the message.
    - Depending on what the user wants, the transaction is either generated offline or signed and broadcast to the preconfigured node using `tx.GenerateOrBroadcastTxCLI(clientCtx, flags, msg)`.
- **Adds transaction flags:** All transaction commands must add a set of transaction [flags](#flags).
The transaction flags are used to collect additional information from the user (e.g. the amount of fees the user is willing to pay). The transaction flags are added to the constructed command using `AddTxFlagsToCmd(cmd)`. +- **Returns the command:** Finally, the transaction command is returned. + +Each module can implement `NewTxCmd()`, which aggregates all of the transaction commands of the module. Here is an example from the `x/bank` module: + +```go expandable +package cli + +import ( + + "fmt" + "cosmossdk.io/core/address" + sdkmath "cosmossdk.io/math" + "github.com/spf13/cobra" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/tx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var FlagSplit = "split" + +/ NewTxCmd returns a root CLI command handler for all x/bank transaction commands. +func NewTxCmd(ac address.Codec) *cobra.Command { + txCmd := &cobra.Command{ + Use: types.ModuleName, + Short: "Bank transaction subcommands", + DisableFlagParsing: true, + SuggestionsMinimumDistance: 2, + RunE: client.ValidateCmd, +} + +txCmd.AddCommand( + NewSendTxCmd(ac), + NewMultiSendTxCmd(ac), + ) + +return txCmd +} + +/ NewSendTxCmd returns a CLI command handler for creating a MsgSend transaction. +func NewSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "send [from_key_or_address] [to_address] [amount]", + Short: "Send funds from one account to another.", + Long: `Send funds from one account to another. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.ExactArgs(3), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +toAddr, err := ac.StringToBytes(args[1]) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[2]) + if err != nil { + return err +} + if len(coins) == 0 { + return fmt.Errorf("invalid coins") +} + msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, coins) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} + +/ NewMultiSendTxCmd returns a CLI command handler for creating a MsgMultiSend transaction. +/ For a better UX this command is limited to send funds from one account to two or more accounts. +func NewMultiSendTxCmd(ac address.Codec) *cobra.Command { + cmd := &cobra.Command{ + Use: "multi-send [from_key_or_address] [to_address_1, to_address_2, ...] [amount]", + Short: "Send funds from one account to two or more accounts.", + Long: `Send funds from one account to two or more accounts. +By default, sends the [amount] to each address of the list. +Using the '--split' flag, the [amount] is split equally between the addresses. +Note, the '--from' flag is ignored as it is implied from [from_key_or_address]. +When using '--dry-run' a key name cannot be used, only a bech32 address. 
+`, + Args: cobra.MinimumNArgs(4), + RunE: func(cmd *cobra.Command, args []string) + +error { + cmd.Flags().Set(flags.FlagFrom, args[0]) + +clientCtx, err := client.GetClientTxContext(cmd) + if err != nil { + return err +} + +coins, err := sdk.ParseCoinsNormalized(args[len(args)-1]) + if err != nil { + return err +} + if coins.IsZero() { + return fmt.Errorf("must send positive amount") +} + +split, err := cmd.Flags().GetBool(FlagSplit) + if err != nil { + return err +} + totalAddrs := sdkmath.NewInt(int64(len(args) - 2)) + / coins to be received by the addresses + sendCoins := coins + if split { + sendCoins = coins.QuoInt(totalAddrs) +} + +var output []types.Output + for _, arg := range args[1 : len(args)-1] { + toAddr, err := ac.StringToBytes(arg) + if err != nil { + return err +} + +output = append(output, types.NewOutput(toAddr, sendCoins)) +} + + / amount to be send from the from address + var amount sdk.Coins + if split { + / user input: 1000stake to send to 3 addresses + / actual: 333stake to each address (=> 999stake actually sent) + +amount = sendCoins.MulInt(totalAddrs) +} + +else { + amount = coins.MulInt(totalAddrs) +} + msg := types.NewMsgMultiSend(types.NewInput(clientCtx.FromAddress, amount), output) + +return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) +}, +} + +cmd.Flags().Bool(FlagSplit, false, "Send the equally split token amount to each address") + +flags.AddTxFlagsToCmd(cmd) + +return cmd +} +``` + +Each module then can also implement a `GetTxCmd()` method that simply returns `NewTxCmd()`. This allows the root command to easily aggregate all of the transaction commands for each module. 
Here is an example: + +```go expandable +package bank + +import ( + + "context" + "encoding/json" + "fmt" + "time" + + modulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + corestore "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/client/cli" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + v1bank "github.com/cosmos/cosmos-sdk/x/bank/migrations/v1" + "github.com/cosmos/cosmos-sdk/x/bank/simulation" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +/ ConsensusVersion defines the current x/bank module consensus version. +const ConsensusVersion = 4 + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the bank module. +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the bank module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the bank module's types on the LegacyAmino codec. 
+func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the bank +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the bank module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return data.Validate() +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the bank module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the bank module. +func (ab AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd(ab.ac) +} + +/ GetQueryCmd returns no root query command for the bank module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the bank module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) + + / Register legacy interfaces for migration scripts. + v1bank.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the bank module. 
+type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper.(keeper.BaseKeeper), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 1 to 2: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 2 to 3: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 3 to 4: %v", err)) +} +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: accountKeeper.AddressCodec() +}, + keeper: keeper, + accountKeeper: accountKeeper, + legacySubspace: ss, +} +} + +/ Name returns the bank module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterInvariants registers the bank module invariants. 
+func (am AppModule) + +RegisterInvariants(ir sdk.InvariantRegistry) { + keeper.RegisterInvariants(ir, am.keeper) +} + +/ QuerierRoute returns the bank module's querier route name. +func (AppModule) + +QuerierRoute() + +string { + return types.RouterKey +} + +/ InitGenesis performs genesis initialization for the bank module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + start := time.Now() + +var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +telemetry.MeasureSince(start, "InitGenesis", "crisis", "unmarshal") + +am.keeper.InitGenesis(ctx, &genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the bank +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the bank module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for supply module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.keeper.(keeper.BaseKeeper).Schema) +} + +/ WeightedOperations returns the all the gov module operations with their respective weights. 
+func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, simState.TxConfig, am.accountKeeper, am.keeper, + ) +} + +/ App Wiring Setup + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + Cdc codec.Codec + StoreService corestore.KVStoreService + Logger log.Logger + + AccountKeeper types.AccountKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + BankKeeper keeper.BaseKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + / Configure blocked module accounts. + / + / Default behavior for blockedAddresses is to regard any module mentioned in + / AccountKeeper's module account permissions as blocked. 
+ blockedAddresses := make(map[string]bool) + if len(in.Config.BlockedModuleAccountsOverride) > 0 { + for _, moduleName := range in.Config.BlockedModuleAccountsOverride { + blockedAddresses[authtypes.NewModuleAddress(moduleName).String()] = true +} + +} + +else { + for _, permission := range in.AccountKeeper.GetModulePermissions() { + blockedAddresses[permission.GetAddress().String()] = true +} + +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + bankKeeper := keeper.NewBaseKeeper( + in.Cdc, + in.StoreService, + in.AccountKeeper, + blockedAddresses, + authority.String(), + in.Logger, + ) + m := NewAppModule(in.Cdc, bankKeeper, in.AccountKeeper, in.LegacySubspace) + +return ModuleOutputs{ + BankKeeper: bankKeeper, + Module: m +} +} +``` + +### Query Commands + + + This section is being rewritten. Refer to + [AutoCLI](https://docs.cosmos.network/main/core/autocli) while this section is + being updated. + + +## gRPC + +[gRPC](https://grpc.io/) is a Remote Procedure Call (RPC) framework. RPC is the preferred way for external clients like wallets and exchanges to interact with a blockchain. + +In addition to providing an ABCI query pathway, the Cosmos SDK provides a gRPC proxy server that routes gRPC query requests to ABCI query requests. + +In order to do that, modules must implement `RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux)` on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module. 
+ +Here's an example from the `x/auth` module: + +```go expandable +package auth + +import ( + + "context" + "encoding/json" + "fmt" + + abci "github.com/cometbft/cometbft/abci/types" + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/depinject" + + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + + modulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + "cosmossdk.io/core/store" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/client/cli" + "github.com/cosmos/cosmos-sdk/x/auth/exported" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +/ ConsensusVersion defines the current x/auth module consensus version. +const ( + ConsensusVersion = 5 + GovModuleName = "gov" +) + +var ( + _ module.AppModule = AppModule{ +} + _ module.AppModuleBasic = AppModuleBasic{ +} + _ module.AppModuleSimulation = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the auth module. +type AppModuleBasic struct { + ac address.Codec +} + +/ Name returns the auth module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the auth module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the auth +/ module. 
+func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the auth module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, config client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the auth module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the auth module. +func (AppModuleBasic) + +GetTxCmd() *cobra.Command { + return nil +} + +/ GetQueryCmd returns the root query command for the auth module. +func (ab AppModuleBasic) + +GetQueryCmd() *cobra.Command { + return cli.GetQueryCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the auth module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the auth module. +type AppModule struct { + AppModuleBasic + + accountKeeper keeper.AccountKeeper + randGenAccountsFn types.RandomGenesisAccountsFn + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +var _ appmodule.AppModule = AppModule{ +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. 
+func (am AppModule) + +IsAppModule() { +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, accountKeeper keeper.AccountKeeper, randGenAccountsFn types.RandomGenesisAccountsFn, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + ac: accountKeeper.AddressCodec() +}, + accountKeeper: accountKeeper, + randGenAccountsFn: randGenAccountsFn, + legacySubspace: ss, +} +} + +/ Name returns the auth module's name. +func (AppModule) + +Name() + +string { + return types.ModuleName +} + +/ RegisterServices registers a GRPC query service to respond to the +/ module-specific GRPC queries. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.accountKeeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQueryServer(am.accountKeeper)) + m := keeper.NewMigrator(am.accountKeeper, cfg.QueryServer(), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 3 to 4: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 4, m.Migrate4To5); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 4 to 5", types.ModuleName)) +} +} + +/ InitGenesis performs genesis initialization for the auth module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) []abci.ValidatorUpdate { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.accountKeeper.InitGenesis(ctx, genesisState) + +return []abci.ValidatorUpdate{ +} +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the auth +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.accountKeeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the auth module +func (am AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState, am.randGenAccountsFn) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for auth module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.accountKeeper.Schema) +} + +/ WeightedOperations doesn't return any auth module operation. +func (AppModule) + +WeightedOperations(_ module.SimulationState) []simtypes.WeightedOperation { + return nil +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideAddressCodec), + appmodule.Provide(ProvideModule), + ) +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion. 
+func ProvideAddressCodec(config *modulev1.Module) + +address.Codec { + return authcodec.NewBech32Codec(config.Bech32Prefix) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + StoreService store.KVStoreService + Cdc codec.Codec + + RandomGenesisAccountsFn types.RandomGenesisAccountsFn `optional:"true"` + AccountI func() + +sdk.AccountI `optional:"true"` + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + AccountKeeper keeper.AccountKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + maccPerms := map[string][]string{ +} + for _, permission := range in.Config.ModuleAccountPermissions { + maccPerms[permission.Account] = permission.Permissions +} + + / default to governance authority if not provided + authority := types.NewModuleAddress(GovModuleName) + if in.Config.Authority != "" { + authority = types.NewModuleAddressOrBech32Address(in.Config.Authority) +} + if in.RandomGenesisAccountsFn == nil { + in.RandomGenesisAccountsFn = simulation.RandomGenesisAccounts +} + if in.AccountI == nil { + in.AccountI = types.ProtoBaseAccount +} + k := keeper.NewAccountKeeper(in.Cdc, in.StoreService, in.AccountI, maccPerms, in.Config.Bech32Prefix, authority.String()) + m := NewAppModule(in.Cdc, k, in.RandomGenesisAccountsFn, in.LegacySubspace) + +return ModuleOutputs{ + AccountKeeper: k, + Module: m +} +} +``` + +## gRPC-gateway REST + +Applications need to support web services that use HTTP requests (e.g. a web wallet like [Keplr](https://keplr.app)). [grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) translates REST calls into gRPC calls, which might be useful for clients that do not use gRPC. 
+ +Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods, such as in the example below from the `x/auth` module: + +```protobuf +// Query defines the gRPC querier service. +service Query { + // Accounts returns all the existing accounts. + // + // When called from another module, this query might consume a high amount of + // gas if the pagination field is incorrectly set. + // + // Since: cosmos-sdk 0.43 + rpc Accounts(QueryAccountsRequest) returns (QueryAccountsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/accounts"; + } + + // Account returns account details based on address. + rpc Account(QueryAccountRequest) returns (QueryAccountResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/accounts/{address}"; + } + + // AccountAddressByID returns account address based on account number. + // + // Since: cosmos-sdk 0.46.2 + rpc AccountAddressByID(QueryAccountAddressByIDRequest) returns (QueryAccountAddressByIDResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/address_by_id/{id}"; + } + + // Params queries all parameters. + rpc Params(QueryParamsRequest) returns (QueryParamsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/params"; + } + + // ModuleAccounts returns all the existing module accounts. 
+ // + // Since: cosmos-sdk 0.46 + rpc ModuleAccounts(QueryModuleAccountsRequest) returns (QueryModuleAccountsResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/module_accounts"; + } + + // ModuleAccountByName returns the module account info by module name + rpc ModuleAccountByName(QueryModuleAccountByNameRequest) returns (QueryModuleAccountByNameResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/module_accounts/{name}"; + } + + // Bech32Prefix queries bech32Prefix + // + // Since: cosmos-sdk 0.46 + rpc Bech32Prefix(Bech32PrefixRequest) returns (Bech32PrefixResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32"; + } + + // AddressBytesToString converts Account Address bytes to string + // + // Since: cosmos-sdk 0.46 + rpc AddressBytesToString(AddressBytesToStringRequest) returns (AddressBytesToStringResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32/{address_bytes}"; + } + + // AddressStringToBytes converts Address string to bytes + // + // Since: cosmos-sdk 0.46 + rpc AddressStringToBytes(AddressStringToBytesRequest) returns (AddressStringToBytesResponse) { + option (google.api.http).get = "/cosmos/auth/v1beta1/bech32/{address_string}"; + } + + // AccountInfo queries account info which is common to all account types. + // + // Since: cosmos-sdk 0.47 + rpc AccountInfo(QueryAccountInfoRequest) returns (QueryAccountInfoResponse) { + option (cosmos.query.v1.module_query_safe) = true; + option (google.api.http).get = "/cosmos/auth/v1beta1/account_info/{address}"; + } +} +``` + +gRPC gateway is started in-process along with the application and CometBFT. It can be enabled or disabled by setting gRPC Configuration `enable` in [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml). 
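As a rough sketch (exact keys can vary by SDK version; consult the generated config for your release), the relevant `app.toml` sections look like:

```toml
[grpc]
# Enable the gRPC server.
enable = true
address = "localhost:9090"

[api]
# Enable the REST (gRPC-gateway) server.
enable = true
# Register the Swagger documentation endpoint.
swagger = true
```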
+
+The Cosmos SDK provides a command for generating [Swagger](https://swagger.io/) documentation (`protoc-gen-swagger`). Setting `swagger` in [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml) defines whether the swagger documentation should be automatically registered.
diff --git a/docs/sdk/v0.53/documentation/module-system/module-manager.mdx b/docs/sdk/v0.53/documentation/module-system/module-manager.mdx
new file mode 100644
index 00000000..acdb3cb4
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/module-manager.mdx
@@ -0,0 +1,16227 @@
+---
+title: Module Manager
+---
+
+## Synopsis
+
+Cosmos SDK modules need to implement the [`AppModule` interfaces](#application-module-interfaces) in order to be managed by the application's [module manager](#module-manager). The module manager plays an important role in [`message` and `query` routing](/docs/sdk/v0.53/documentation/application-framework/baseapp#routing), and allows application developers to set the order of execution of a variety of functions like [`PreBlocker`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#preblocker) and [`BeginBlocker` and `EndBlocker`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#begingblocker-and-endblocker).
+
+
+**Pre-requisite Readings**
+
+- [Introduction to Cosmos SDK Modules](/docs/sdk/v0.53/documentation/module-system/intro)
+
+
+
+## Application Module Interfaces
+
+Application module interfaces exist to facilitate the composition of modules together to form a functional Cosmos SDK application.
+
+
+
+It is recommended to implement interfaces from the [Core API](https://docs.cosmos.network/main/architecture/adr-063-core-module-api) `appmodule` package. This makes modules less dependent on the SDK.
+For legacy reasons, modules can still implement interfaces from the SDK `module` package.
+
+
+
+There are two main application module interfaces:
+
+- [`appmodule.AppModule` / `module.AppModule`](#appmodule) for inter-dependent module functionalities (except genesis-related functionalities).
+- (legacy) [`module.AppModuleBasic`](#appmodulebasic) for independent module functionalities. New modules can use `module.CoreAppModuleBasicAdaptor` instead.
+
+The above interfaces mostly embed smaller extension interfaces that define specific functionalities:
+
+- (legacy) `module.HasName`: Allows the module to provide its own name for legacy purposes.
+- (legacy) [`module.HasGenesisBasics`](#modulehasgenesisbasics): The legacy interface for stateless genesis methods.
+- [`module.HasGenesis`](#modulehasgenesis) for inter-dependent genesis-related module functionalities.
+- [`module.HasABCIGenesis`](#modulehasabcigenesis) for inter-dependent genesis-related module functionalities.
+- [`appmodule.HasGenesis` / `module.HasGenesis`](#appmodulehasgenesis): The extension interface for stateful genesis methods.
+- [`appmodule.HasPreBlocker`](#haspreblocker): The extension interface that contains information about the `AppModule` and `PreBlock`.
+- [`appmodule.HasBeginBlocker`](#hasbeginblocker): The extension interface that contains information about the `AppModule` and `BeginBlock`.
+- [`appmodule.HasEndBlocker`](#hasendblocker): The extension interface that contains information about the `AppModule` and `EndBlock`.
+- [`appmodule.HasPrecommit`](#hasprecommit): The extension interface that contains information about the `AppModule` and `Precommit`.
+- [`appmodule.HasPrepareCheckState`](#haspreparecheckstate): The extension interface that contains information about the `AppModule` and `PrepareCheckState`.
+- [`appmodule.HasService` / `module.HasServices`](#hasservices): The extension interface for modules to register services.
+- [`module.HasABCIEndBlock`](#hasabciendblock): The extension interface that contains information about the `AppModule` and `EndBlock`, and returns an updated validator set.
+- (legacy) [`module.HasInvariants`](#hasinvariants): The extension interface for registering invariants.
+- (legacy) [`module.HasConsensusVersion`](#hasconsensusversion): The extension interface for declaring a module consensus version.
+
+The `AppModuleBasic` interface exists to define the independent methods of a module, i.e. those that do not depend on other modules in the application. This allows for the construction of the basic application structure early in the application definition, generally in the `init()` function of the [main application file](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#core-application-file).
+
+The `AppModule` interface exists to define inter-dependent module methods. Many modules need to interact with other modules, typically through [`keeper`s](/docs/sdk/v0.53/documentation/module-system/keeper), which means there is a need for an interface where modules list their `keeper`s and other methods that require a reference to another module's object. `AppModule` interface extensions, such as `HasBeginBlocker` and `HasEndBlocker`, also enable the module manager to set the order of execution of module methods like `BeginBlock` and `EndBlock`, which is important in cases where the order of execution between modules matters in the context of the application.
+
+The usage of extension interfaces allows modules to define only the functionalities they need. For example, a module that does not need an `EndBlock` does not need to define the `HasEndBlocker` interface and thus the `EndBlock` method. `AppModule` and `AppModuleGenesis` are deliberately small interfaces that can take advantage of the `Module` patterns without having to define many placeholder functions.
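The extension-interface pattern can be sketched in plain Go: the manager type-asserts each module against the optional interfaces and invokes only the hooks a module actually implements. The `Module`, `HasBeginBlocker`, and `HasEndBlocker` names below are simplified stand-ins for the SDK interfaces (the real ones live in `cosmossdk.io/core/appmodule` and take a context), not their actual signatures.

```go
package main

import "fmt"

// Module is the minimal interface every module implements.
type Module interface {
	Name() string
}

// HasBeginBlocker and HasEndBlocker mirror the SDK's extension
// interfaces in simplified form: a module opts in to a lifecycle hook
// simply by implementing the corresponding method.
type HasBeginBlocker interface {
	BeginBlock() error
}

type HasEndBlocker interface {
	EndBlock() error
}

// minter only needs BeginBlock, so it implements HasBeginBlocker and
// never has to define an EndBlock placeholder.
type minter struct{}

func (minter) Name() string { return "mint" }
func (minter) BeginBlock() error {
	fmt.Println("mint: BeginBlock")
	return nil
}

// gov only needs EndBlock.
type gov struct{}

func (gov) Name() string { return "gov" }
func (gov) EndBlock() error {
	fmt.Println("gov: EndBlock")
	return nil
}

// runBeginBlock shows what a module manager conceptually does: it
// type-asserts each module against the extension interface and calls
// the hook only on modules that implement it.
func runBeginBlock(mods []Module) error {
	for _, m := range mods {
		if bb, ok := m.(HasBeginBlocker); ok {
			if err := bb.BeginBlock(); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// Only minter runs during BeginBlock; gov is skipped.
	_ = runBeginBlock([]Module{minter{}, gov{}})
}
```

Because the checks are plain type assertions, adding a new hook to a module is purely additive: implementing the method is all it takes for the manager to start calling it.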
+ +### `AppModuleBasic` + + + Use `module.CoreAppModuleBasicAdaptor` instead for creating an + `AppModuleBasic` from an `appmodule.AppModule`. + + +The `AppModuleBasic` interface defines the independent methods modules need to implement. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module's genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object.
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `RegisterLegacyAminoCodec(*codec.LegacyAmino)`: Registers the `amino` codec for the module, which is used to marshal and unmarshal structs to/from `[]byte` in order to persist them in the module's `KVStore`. 
+- `RegisterInterfaces(codectypes.InterfaceRegistry)`: Registers a module's interface types and their concrete implementations as `proto.Message`. +- `RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)`: Registers gRPC routes for the module. + +All the `AppModuleBasic` of an application are managed by the [`BasicManager`](#basicmanager). + +### `HasName` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `HasName` is an interface that has a method `Name()`. This method returns the name of the module as a `string`. + +### Genesis + + + For easily creating an `AppModule` that only has genesis functionalities, use + `module.GenesisOnlyAppModule`. 
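
The genesis hooks above rely on Go's "extension interface" pattern: a module implements only the base interface, and the manager discovers optional capabilities such as genesis handling at runtime with type assertions (`if mod, ok := b.(HasGenesisBasics); ok { ... }`). A minimal self-contained sketch of that pattern, with illustrative types rather than actual SDK interfaces:

```go
package main

import "fmt"

// module is the base capability every module provides.
type module interface {
	Name() string
}

// hasGenesis is an optional extension interface; a module opts in
// simply by implementing the extra method.
type hasGenesis interface {
	module
	DefaultGenesis() string
}

// bankLike opts into genesis handling.
type bankLike struct{}

func (bankLike) Name() string           { return "bank" }
func (bankLike) DefaultGenesis() string { return `{"balances":[]}` }

// telemetryLike implements only the base interface.
type telemetryLike struct{}

func (telemetryLike) Name() string { return "telemetry" }

func main() {
	mods := []module{bankLike{}, telemetryLike{}}
	for _, m := range mods {
		// The type assertion is the capability check: only modules
		// implementing hasGenesis contribute genesis state.
		if g, ok := m.(hasGenesis); ok {
			fmt.Printf("%s: %s\n", m.Name(), g.DefaultGenesis())
		} else {
			fmt.Printf("%s: no genesis state\n", m.Name())
		}
	}
}
```

This is the same assertion-per-capability loop the real `BasicManager` and `Manager` apply for `HasGenesisBasics`, `HasGenesis`, `HasABCIGenesis`, and the block-lifecycle interfaces.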
+ + +#### `module.HasGenesisBasics` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface {
+ AppModule
+ EndBlock(context.Context) ([]abci.ValidatorUpdate, error)
+}
+
+var (
+ _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil)
+ _ AppModuleBasic = (*GenesisOnlyAppModule)(nil)
+)
+
+/ AppModuleGenesis is the standard form for application module genesis functions
+type AppModuleGenesis interface {
+ AppModuleBasic
+ HasABCIGenesis
+}
+
+/ GenesisOnlyAppModule is an AppModule that only has import/export functionality
+type GenesisOnlyAppModule struct {
+ AppModuleGenesis
+}
+
+/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object
+func NewGenesisOnlyAppModule(amg AppModuleGenesis)
+
+GenesisOnlyAppModule {
+ return GenesisOnlyAppModule{
+ AppModuleGenesis: amg,
+}
+}
+
+/ IsOnePerModuleType implements the depinject.OnePerModuleType interface.
+func (GenesisOnlyAppModule)
+
+IsOnePerModuleType() {
+}
+
+/ IsAppModule implements the appmodule.AppModule interface.
+func (GenesisOnlyAppModule)
+
+IsAppModule() {
+}
+
+/ RegisterInvariants is a placeholder function that registers no invariants
+func (GenesisOnlyAppModule)
+
+RegisterInvariants(_ sdk.InvariantRegistry) {
+}
+
+/ ConsensusVersion implements AppModule/ConsensusVersion.
+func (gam GenesisOnlyAppModule)
+
+ConsensusVersion()
+
+uint64 {
+ return 1
+}
+
+/ Manager defines a module manager that provides the high level utility for managing and executing
+/ operations for a group of modules
+type Manager struct {
+ Modules map[string]any / interface{
+}
+
+is used now to support the legacy AppModule as well as new core appmodule.AppModule.
+ OrderInitGenesis []string
+ OrderExportGenesis []string
+ OrderPreBlockers []string
+ OrderBeginBlockers []string
+ OrderEndBlockers []string
+ OrderPrepareCheckStaters []string
+ OrderPrecommiters []string
+ OrderMigrations []string
+}
+
+/ NewManager creates a new Manager object.
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) defined by
+/ the `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function
+/ with foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +Let us go through the methods: + +- `DefaultGenesis(codec.JSONCodec)`: Returns a default [`GenesisState`](/docs/sdk/v0.53/documentation/module-system/genesis#genesisstate) for the module, marshalled to `json.RawMessage`. 
The default `GenesisState` needs to be defined by the module developer and is primarily used for testing.
+- `ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`: Used to validate the `GenesisState` defined by a module, given in its `json.RawMessage` form. It will usually unmarshal the `json` before running a custom [`ValidateGenesis`](/docs/sdk/v0.53/documentation/module-system/genesis#validategenesis) function defined by the module developer.
+
+#### `module.HasGenesis`
+
+`HasGenesis` is an extension interface that allows modules to implement genesis functionality.
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+ - independent module functionality (AppModuleBasic)
+ - inter-dependent module simulation functionality (AppModuleSimulation)
+ - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+
+Independent module functions are separated to allow for the construction of the
+basic application structures required early on in the application definition
+and used to enable the definition of full module functionality later in the
+process. This separation is necessary, however we still want to allow for a
+high level pattern for modules to follow - for instance, such that we don't
+have to manually register all of the codecs for all the modules. This basic
+procedure as well as other basic patterns are handled through the use of
+BasicManager.
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. 
+ ConsensusVersion()
+
+uint64
+}
+
+type HasABCIEndblock interface {
+ AppModule
+ EndBlock(context.Context) ([]abci.ValidatorUpdate, error)
+}
+
+var (
+ _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil)
+ _ AppModuleBasic = (*GenesisOnlyAppModule)(nil)
+)
+
+/ genesisOnlyModule is an interface needed to return the GenesisOnlyAppModule struct in order to wrap two interfaces
+type genesisOnlyModule interface {
+ AppModuleBasic
+ HasABCIGenesis
+}
+
+/ GenesisOnlyAppModule is an AppModule that only has import/export functionality
+type GenesisOnlyAppModule struct {
+ genesisOnlyModule
+}
+
+/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object
+func NewGenesisOnlyAppModule(amg genesisOnlyModule)
+
+GenesisOnlyAppModule {
+ return GenesisOnlyAppModule{
+ genesisOnlyModule: amg,
+}
+}
+
+/ IsOnePerModuleType implements the depinject.OnePerModuleType interface.
+func (GenesisOnlyAppModule)
+
+IsOnePerModuleType() {
+}
+
+/ IsAppModule implements the appmodule.AppModule interface.
+func (GenesisOnlyAppModule)
+
+IsAppModule() {
+}
+
+/ RegisterInvariants is a placeholder function that registers no invariants
+func (GenesisOnlyAppModule)
+
+RegisterInvariants(_ sdk.InvariantRegistry) {
+}
+
+/ ConsensusVersion implements AppModule/ConsensusVersion.
+func (gam GenesisOnlyAppModule)
+
+ConsensusVersion()
+
+uint64 {
+ return 1
+}
+
+/ Manager defines a module manager that provides the high level utility for managing and executing
+/ operations for a group of modules
+type Manager struct {
+ Modules map[string]interface{
+} / interface{
+}
+
+is used now to support the legacy AppModule as well as new core appmodule.AppModule.
+ OrderInitGenesis []string
+ OrderExportGenesis []string
+ OrderBeginBlockers []string
+ OrderEndBlockers []string
+ OrderPrepareCheckStaters []string
+ OrderPrecommiters []string
+ OrderMigrations []string
+}
+
+/ NewManager creates a new Manager object.
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", 
moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndblock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of module with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)

+defined by
+/ `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function with
+/ foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist
+/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
+/ / run InitGenesis on foo.
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+/ / consensus version:
+/ fromVM["foo"] = foo.AppModule{
+}.ConsensusVersion()
+/
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
+func (m Manager)

+RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
    c, ok := cfg.(*configurator)
    if !ok {
        return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{
}, cfg)
}
 modules := m.OrderMigrations
    if modules == nil {
        modules = DefaultMigrationsOrder(m.ModuleNames())
}
 sdkCtx := sdk.UnwrapSDKContext(ctx)
    updatedVM := VersionMap{
}
 for _, moduleName := range modules {
        module := m.Modules[moduleName]
        fromVersion, exists := fromVM[moduleName]
        toVersion := uint64(0)
        if module, ok := module.(HasConsensusVersion); ok {
            toVersion = module.ConsensusVersion()
}

        / We run migration if the module is specified in `fromVM`.
        / Otherwise we run InitGenesis.
        /
        / The module won't exist in the fromVM in two cases:
        / 1. A new module is added. In this case we run InitGenesis with an
        / empty genesis state.
        / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
        / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
 if exists {
            err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion)
            if err != nil {
                return nil, err
}

}

else {
            sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))

module1, ok := m.Modules[moduleName].(HasGenesis)
            if ok {
                module1.InitGenesis(sdkCtx, c.cdc, module1.DefaultGenesis(c.cdc))
}
 if module2, ok := m.Modules[moduleName].(HasABCIGenesis); ok {
                moduleValUpdates := module2.InitGenesis(sdkCtx, c.cdc, module2.DefaultGenesis(c.cdc))
                / The module manager assumes only one module will update the
                / validator set, and it can't be a new module.
                if len(moduleValUpdates) > 0 {
                    return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
}

}

}

updatedVM[moduleName] = toVersion
}

return updatedVM, nil
}

/ RunMigrationBeginBlock performs begin block functionality for the upgrade module.
/ It takes the current context as a parameter and returns a boolean value
/ indicating whether the migration was executed, and an error if it fails.
func (m *Manager)

RunMigrationBeginBlock(ctx sdk.Context) (bool, error) {
    for _, moduleName := range m.OrderBeginBlockers {
        if mod, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok {
            if _, ok := mod.(appmodule.UpgradeModule); ok {
                err := mod.BeginBlock(ctx)

return err == nil, err
}

}

}

return false, nil
}

/ BeginBlock performs begin block functionality for non-upgrade modules. It creates a
/ child context with an event manager to aggregate events emitted from non-upgrade
/ modules.
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := module.(appmodule.UpgradeModule); !ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager)

+Precommit(ctx sdk.Context)

+error {
    for _, moduleName := range m.OrderPrecommiters {
        module, ok := m.Modules[moduleName].(appmodule.HasPrecommit)
        if !ok {
            continue
}
 if err := module.Precommit(ctx); err != nil {
            return err
}

}

return nil
}

/ PrepareCheckState performs functionality for preparing the check state for all modules.
func (m *Manager)

PrepareCheckState(ctx sdk.Context)

error {
    for _, moduleName := range m.OrderPrepareCheckStaters {
        module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState)
        if !ok {
            continue
}
 if err := module.PrepareCheckState(ctx); err != nil {
            return err
}

}

return nil
}

/ GetVersionMap gets the consensus version from all modules
func (m *Manager)

GetVersionMap()

VersionMap {
    vermap := make(VersionMap)
    for name, v := range m.Modules {
        version := uint64(0)
        if v, ok := v.(HasConsensusVersion); ok {
            version = v.ConsensusVersion()
}
 name := name
        vermap[name] = version
}

return vermap
}

/ ModuleNames returns the list of all module names, without any particular order.
func (m *Manager)

ModuleNames() []string {
    return maps.Keys(m.Modules)
}

/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
/ except x/auth which will run last, see:
/ https://github.com/cosmos/cosmos-sdk/issues/10591
func DefaultMigrationsOrder(modules []string) []string {
    const authName = "auth"
    out := make([]string, 0, len(modules))
    hasAuth := false
    for _, m := range modules {
        if m == authName {
            hasAuth = true
}

else {
            out = append(out, m)
}

}

sort.Strings(out)
    if hasAuth {
        out = append(out, authName)
}

return out
}
```

#### `module.HasABCIGenesis`

`HasABCIGenesis` is an extension interface that allows modules to implement genesis import/export functionality and return validator set updates.
+ +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "golang.org/x/exp/maps" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +type AppModule interface { + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. 
+ ConsensusVersion() + +uint64 +} + +type HasABCIEndblock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ genesisOnlyModule is an interface need to return GenesisOnlyAppModule struct in order to wrap two interfaces +type genesisOnlyModule interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + genesisOnlyModule +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg genesisOnlyModule) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + genesisOnlyModule: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]interface{ +} / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(modules)) + for _, module := range modules { + moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]interface{ +}) + modulesStr := make([]string, 0, len(simpleModuleMap)) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", 
moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndblock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +func (m *Manager) + +RegisterInvariants(ir sdk.InvariantRegistry) { + for _, module := range m.Modules { + if module, ok := module.(HasInvariants); ok { + module.RegisterInvariants(ir) +} + +} +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, res.err +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + m := m + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + +module1, ok := m.Modules[moduleName].(HasGenesis) + if ok { + module1.InitGenesis(sdkCtx, c.cdc, module1.DefaultGenesis(c.cdc)) +} + if module2, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module2.InitGenesis(sdkCtx, c.cdc, module2.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ RunMigrationBeginBlock performs begin block functionality for the upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was executed, and an error if it fails. +func (m *Manager) + +RunMigrationBeginBlock(ctx sdk.Context) (bool, error) { + for _, moduleName := range m.OrderBeginBlockers { + if mod, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := mod.(appmodule.UpgradeModule); ok { + err := mod.BeginBlock(ctx) + +return err == nil, err +} + +} + +} + +return false, nil +} + +/ BeginBlock performs begin block functionality for non-upgrade modules. It creates a +/ child context with an event manager to aggregate events emitted from non-upgrade +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if _, ok := module.(appmodule.UpgradeModule); !ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndblock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + name := name + vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return maps.Keys(m.Modules) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +#### `appmodule.HasGenesis` + + + `appmodule.HasGenesis` is experimental and should be considered unstable; it + is recommended not to use this interface at this time. 
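+ +The extension-interface pattern behind `appmodule.HasGenesis` (and the other `Has*` interfaces) can be sketched in isolation: the module manager discovers a module's optional capabilities through type assertions, so a module opts in simply by implementing the extra methods. The sketch below uses simplified, hypothetical stand-ins (a one-method `HasGenesis`, toy `bankMod`/`authMod` modules), not the real `cosmossdk.io/core/appmodule` types:

```go
package main

import (
	"fmt"
	"sort"
)

// Simplified stand-ins for the SDK's module and extension interfaces.
// Illustrative only; not the real cosmossdk.io/core/appmodule types.
type AppModule interface{ Name() string }

type HasGenesis interface {
	AppModule
	InitGenesis() error
}

// bankMod opts in to genesis handling by implementing HasGenesis.
type bankMod struct{}

func (bankMod) Name() string       { return "bank" }
func (bankMod) InitGenesis() error { return nil }

// authMod implements only the base AppModule interface.
type authMod struct{}

func (authMod) Name() string { return "auth" }

// initGenesis mirrors the manager's dispatch: it type-asserts each module
// against the extension interface and calls InitGenesis only on modules
// that implement it, returning the names it initialized.
func initGenesis(modules map[string]AppModule) ([]string, error) {
	names := make([]string, 0, len(modules))
	for n := range modules {
		names = append(names, n)
	}
	sort.Strings(names) // map order is random; sort like NewManagerFromMap does

	var ran []string
	for _, n := range names {
		if m, ok := modules[n].(HasGenesis); ok {
			if err := m.InitGenesis(); err != nil {
				return nil, err
			}
			ran = append(ran, n)
		}
	}
	return ran, nil
}

func main() {
	ran, _ := initGenesis(map[string]AppModule{"bank": bankMod{}, "auth": authMod{}})
	fmt.Println(ran) // only bank implements the extension interface
}
```

Here only `bankMod` satisfies the extension interface, so the dispatch loop skips `authMod` without any explicit registration — the same mechanism the real `Manager.InitGenesis` uses when checking for `appmodule.HasGenesis`, `HasGenesis`, and `HasABCIGenesis`.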
+ + +```go expandable +package appmodule + +import ( + + "context" + "io" +) + +/ HasGenesis is the extension interface that modules should implement to handle +/ genesis data and state initialization. +/ WARNING: This interface is experimental and may change at any time. +type HasGenesis interface { + AppModule + + / DefaultGenesis writes the default genesis for this module to the target. + DefaultGenesis(GenesisTarget) + +error + + / ValidateGenesis validates the genesis data read from the source. + ValidateGenesis(GenesisSource) + +error + + / InitGenesis initializes module state from the genesis source. + InitGenesis(context.Context, GenesisSource) + +error + + / ExportGenesis exports module state to the genesis target. + ExportGenesis(context.Context, GenesisTarget) + +error +} + +/ GenesisSource is a source for genesis data in JSON format. It may abstract over a +/ single JSON object or separate files for each field in a JSON object that can +/ be streamed over. Modules should open a separate io.ReadCloser for each field that +/ is required. When fields represent arrays they can efficiently be streamed +/ over. If there is no data for a field, this function should return nil, nil. It is +/ important that the caller closes the reader when done with it. +type GenesisSource = func(field string) (io.ReadCloser, error) + +/ GenesisTarget is a target for writing genesis data in JSON format. It may +/ abstract over a single JSON object or JSON in separate files that can be +/ streamed over. Modules should open a separate io.WriteCloser for each field +/ and should prefer writing fields as arrays when possible to support efficient +/ iteration. It is important that the caller closes the writer AND checks the error +/ when done with it. It is expected that a stream of JSON data is written +/ to the writer. +type GenesisTarget = func(field string) (io.WriteCloser, error) +``` + +### `AppModule` + +The `AppModule` interface defines a module. 
Modules can declare their functionalities by implementing extension interfaces. +`AppModule`s are managed by the [module manager](#manager), which checks which extension interfaces are implemented by the module. + +#### `appmodule.AppModule` + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. 
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} +``` + +#### `module.AppModule` + + + Previously the `module.AppModule` interface contained all the methods + defined in the extension interfaces. This led to significant + boilerplate for modules that did not need all of this functionality. + + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. 
This separation is necessary; however, we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +### `HasInvariants` + +This interface defines one method, `RegisterInvariants(sdk.InvariantRegistry)`, which allows a module to register its invariants. Note that `HasInvariants` is deprecated and will be removed in the next Cosmos SDK release. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. 
+The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `RegisterInvariants(sdk.InvariantRegistry)`: Registers the [`invariants`](/docs/sdk/v0.53/documentation/module-system/invariants) of the module. 
If an invariant deviates from its predicted value, the [`InvariantRegistry`](/docs/sdk/v0.53/documentation/module-system/invariants#registry) triggers appropriate logic (most often the chain will be halted). + +### `HasServices` + +This interface defines one method, `RegisterServices`, which allows a module to register its services (gRPC query services and transaction message services). + +#### `appmodule.HasServices` + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState.
+type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. +type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} +``` + +#### `module.HasServices` + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. 
This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `RegisterServices(Configurator)`: Allows a module to register services. + +### `HasConsensusVersion` + +This interface defines one method for checking a module consensus version. 
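A module declares its consensus version by implementing this single method. The sketch below is a minimal, hypothetical example: the interface is redeclared locally so it compiles without importing the Cosmos SDK, and `fooModule` is an invented module name. It shows a module reporting version 2 after one state-breaking change:

```go
package main

import "fmt"

// HasConsensusVersion mirrors the interface shown above; it is redeclared
// here so this sketch compiles without the Cosmos SDK as a dependency.
type HasConsensusVersion interface {
	ConsensusVersion() uint64
}

// fooModule is a hypothetical module. The initial version should be 1,
// and the returned value is bumped on every consensus-breaking change.
type fooModule struct{}

func (fooModule) ConsensusVersion() uint64 { return 2 }

func main() {
	var m HasConsensusVersion = fooModule{}
	fmt.Println("foo consensus version:", m.ConsensusVersion())
}
```

`Manager.GetVersionMap` collects these values into a `VersionMap`, which `RunMigrations` later diffs against the versions persisted by x/upgrade to decide which in-place store migrations to run.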
+ +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly, the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependent elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods.
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function that registers no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object.
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by the +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `ConsensusVersion() uint64`: Returns the consensus version of the module. + +### `HasPreBlocker` + +The `HasPreBlocker` is an extension interface from `appmodule.AppModule`. All modules that have an `PreBlock` method implement this interface. 
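Throughout the `Manager` code above, each lifecycle hook follows the same pattern: iterate the ordered module names and run the hook only on modules that satisfy the extension interface via a type assertion. The sketch below illustrates that pattern for `PreBlock`; the interfaces are trimmed, hypothetical stand-ins for `appmodule.AppModule` and `appmodule.HasPreBlocker` (in particular, `PreBlock` here returns a plain `bool` instead of a `ResponsePreBlock`), not the real SDK types.

```go
package main

import "fmt"

// AppModule is a trimmed stand-in for the SDK's tag interface.
type AppModule interface{ IsAppModule() }

// HasPreBlocker is a simplified stand-in for appmodule.HasPreBlocker.
type HasPreBlocker interface {
	AppModule
	PreBlock() (paramsChanged bool, err error)
}

type upgradeModule struct{}

func (upgradeModule) IsAppModule() {}
func (upgradeModule) PreBlock() (bool, error) {
	// Pretend an upgrade plan changed the consensus params.
	return true, nil
}

type bankModule struct{}

func (bankModule) IsAppModule() {}

// preBlock mirrors Manager.PreBlock: walk modules in order, run PreBlock
// only on those implementing the extension interface, and OR the
// "consensus params changed" signals together.
func preBlock(order []string, mods map[string]AppModule) (bool, error) {
	changed := false
	for _, name := range order {
		if m, ok := mods[name].(HasPreBlocker); ok {
			c, err := m.PreBlock()
			if err != nil {
				return false, err
			}
			if c {
				changed = true
			}
		}
	}
	return changed, nil
}

func main() {
	mods := map[string]AppModule{"upgrade": upgradeModule{}, "bank": bankModule{}}
	changed, err := preBlock([]string{"upgrade", "bank"}, mods)
	fmt.Println(changed, err) // bankModule is silently skipped
}
```

Because the check is a plain type assertion, a module opts into a hook simply by declaring the method; no registration call is needed.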
+ +### `HasBeginBlocker` + +The `HasBeginBlocker` is an extension interface from `appmodule.AppModule`. All modules that have an `BeginBlock` method implement this interface. + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. 
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ ResponsePreBlock represents the response from the PreBlock method. +/ It can modify consensus parameters in storage and signal the caller through the return value. +/ When it returns ConsensusParamsChanged=true, the caller must refresh the consensus parameter in the finalize context. +/ The new context (ctx) + +must be passed to all the other lifecycle methods. +type ResponsePreBlock interface { + IsConsensusParamsChanged() + +bool +} + +/ HasPreBlocker is the extension interface that modules should implement to run +/ custom logic before BeginBlock. +type HasPreBlocker interface { + AppModule + / PreBlock is method that will be run before BeginBlock. + PreBlock(context.Context) (ResponsePreBlock, error) +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} + +/ UpgradeModule is the extension interface that upgrade module should implement to differentiate +/ it from other modules, migration handler need ensure the upgrade module's migration is executed +/ before the rest of the modules. +type UpgradeModule interface { + IsUpgradeModule() +} +``` + +- `BeginBlock(context.Context) error`: This method gives module developers the option to implement logic that is automatically triggered at the beginning of each block. 
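The `SetOrder*` setters shown in the `Manager` code above all guard against silently dropping a module: `assertNoForgottenModules` panics unless every registered module either appears in the supplied order or is exempted by the `pass` closure (e.g. because it lacks the relevant hook). A minimal, self-contained sketch of that check, with illustrative module names:

```go
package main

import (
	"fmt"
	"sort"
)

// missingModules is a simplified version of the Manager's
// assertNoForgottenModules check: every registered module must appear in
// the SetOrder* list unless the optional pass closure exempts it.
func missingModules(registered, ordered []string, pass func(string) bool) []string {
	seen := make(map[string]bool, len(ordered))
	for _, name := range ordered {
		seen[name] = true
	}
	var missing []string
	for _, name := range registered {
		if pass != nil && pass(name) {
			continue // exempt, e.g. the module has no BeginBlock method
		}
		if !seen[name] {
			missing = append(missing, name)
		}
	}
	sort.Strings(missing) // deterministic error output, as in the SDK
	return missing
}

func main() {
	registered := []string{"staking", "bank", "gov", "genutil"}
	ordered := []string{"staking", "bank"}
	// Exempt genutil, pretending it lacks a begin blocker.
	pass := func(name string) bool { return name == "genutil" }
	fmt.Println(missingModules(registered, ordered, pass)) // gov was forgotten
}
```

In the real `Manager`, a non-empty result is fatal: the setter panics with "all modules must be defined when setting …", which surfaces ordering mistakes at startup rather than at block time.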
+ +### `HasEndBlocker` + +The `HasEndBlocker` is an extension interface from `appmodule.AppModule`. All modules that have an `EndBlock` method implement this interface. If a module needs to return validator set updates (staking), it can use `HasABCIEndBlock` instead. + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. 
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ ResponsePreBlock represents the response from the PreBlock method. +/ It can modify consensus parameters in storage and signal the caller through the return value. +/ When it returns ConsensusParamsChanged=true, the caller must refresh the consensus parameter in the finalize context. +/ The new context (ctx) + +must be passed to all the other lifecycle methods. +type ResponsePreBlock interface { + IsConsensusParamsChanged() + +bool +} + +/ HasPreBlocker is the extension interface that modules should implement to run +/ custom logic before BeginBlock. +type HasPreBlocker interface { + AppModule + / PreBlock is method that will be run before BeginBlock. + PreBlock(context.Context) (ResponsePreBlock, error) +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} + +/ UpgradeModule is the extension interface that upgrade module should implement to differentiate +/ it from other modules, migration handler need ensure the upgrade module's migration is executed +/ before the rest of the modules. +type UpgradeModule interface { + IsUpgradeModule() +} +``` + +- `EndBlock(context.Context) error`: This method gives module developers the option to implement logic that is automatically triggered at the end of each block. 
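As the `Manager.EndBlock` code above shows, the module manager assumes exactly one module per block may return a non-empty validator set update; a second one is rejected with an error. A minimal, self-contained sketch of that rule, using a hypothetical `ValUpdate` struct in place of `abci.ValidatorUpdate` and illustrative module names:

```go
package main

import (
	"errors"
	"fmt"
)

// ValUpdate is a stand-in for abci.ValidatorUpdate.
type ValUpdate struct {
	PubKey string
	Power  int64
}

// collectValidatorUpdates mirrors the rule Manager.EndBlock enforces: at
// most one module may return a non-empty validator set update per block.
func collectValidatorUpdates(perModule map[string][]ValUpdate, order []string) ([]ValUpdate, error) {
	var updates []ValUpdate
	for _, name := range order {
		mu := perModule[name]
		if len(mu) == 0 {
			continue // this module made no validator changes
		}
		if len(updates) > 0 {
			return nil, errors.New("validator EndBlock updates already set by a previous module")
		}
		updates = append(updates, mu...)
	}
	return updates, nil
}

func main() {
	order := []string{"gov", "staking"}

	// Typical case: only staking updates the validator set.
	ok := map[string][]ValUpdate{"staking": {{PubKey: "valA", Power: 10}}}
	updates, err := collectValidatorUpdates(ok, order)
	fmt.Println(len(updates), err)

	// Two modules both returning updates is an error.
	bad := map[string][]ValUpdate{
		"gov":     {{PubKey: "valB", Power: 1}},
		"staking": {{PubKey: "valA", Power: 10}},
	}
	_, err = collectValidatorUpdates(bad, order)
	fmt.Println(err)
}
```

The same single-writer rule applies at genesis: `InitGenesis` in the `Manager` code above errors out if two modules both return validator updates.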
+ +### `HasABCIEndBlock` + +The `HasABCIEndBlock` is an extension interface from `module.AppModule`. All modules that have an `EndBlock` which return validator set updates implement this interface. + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called inside an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion +/ - make a diff of `fromVM` and `updatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ the `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function with +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `updatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +- `EndBlock(context.Context) ([]abci.ValidatorUpdate, error)`: This method gives module developers the option to inform the underlying consensus engine of validator set changes (e.g. the `staking` module). 
+ +### `HasPrecommit` + +`HasPrecommit` is an extension interface from `appmodule.AppModule`. All modules that have a `Precommit` method implement this interface. + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options. You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. 
+type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ ResponsePreBlock represents the response from the PreBlock method. +/ It can modify consensus parameters in storage and signal the caller through the return value. +/ When it returns ConsensusParamsChanged=true, the caller must refresh the consensus parameter in the finalize context. +/ The new context (ctx) + +must be passed to all the other lifecycle methods. +type ResponsePreBlock interface { + IsConsensusParamsChanged() + +bool +} + +/ HasPreBlocker is the extension interface that modules should implement to run +/ custom logic before BeginBlock. +type HasPreBlocker interface { + AppModule + / PreBlock is method that will be run before BeginBlock. + PreBlock(context.Context) (ResponsePreBlock, error) +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. + EndBlock(context.Context) + +error +} + +/ UpgradeModule is the extension interface that upgrade module should implement to differentiate +/ it from other modules, migration handler need ensure the upgrade module's migration is executed +/ before the rest of the modules. 
+type UpgradeModule interface { + IsUpgradeModule() +} +``` + +- `Precommit(context.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](/docs/sdk/v0.53/learn/advanced/00-baseapp#commit) of each block, using the [`finalizeblockstate`](/docs/sdk/v0.53/learn/advanced/00-baseapp#state-updates) of the block to be committed. Leave the implementation empty if no logic needs to run during `Commit` for this module. + +### `HasPrepareCheckState` + +`HasPrepareCheckState` is an extension interface from `appmodule.AppModule`. All modules that have a `PrepareCheckState` method implement this interface. + +```go expandable +package appmodule + +import ( + + "context" + "google.golang.org/grpc" + "cosmossdk.io/depinject" +) + +/ AppModule is a tag interface for app module implementations to use as a basis +/ for extension interfaces. It provides no functionality itself, but is the +/ type that all valid app modules should provide so that they can be identified +/ by other modules (usually via depinject) + +as app modules. +type AppModule interface { + depinject.OnePerModuleType + + / IsAppModule is a dummy method to tag a struct as implementing an AppModule. + IsAppModule() +} + +/ HasServices is the extension interface that modules should implement to register +/ implementations of services defined in .proto files. +type HasServices interface { + AppModule + + / RegisterServices registers the module's services with the app's service + / registrar. + / + / Two types of services are currently supported: + / - read-only gRPC query services, which are the default. + / - transaction message services, which must have the protobuf service + / option "cosmos.msg.v1.service" (defined in "cosmos/msg/v1/service.proto") + / set to true. + / + / The service registrar will figure out which type of service you are + / implementing based on the presence (or absence) + +of protobuf options.
You + / do not need to specify this in golang code. + RegisterServices(grpc.ServiceRegistrar) + +error +} + +/ HasPrepareCheckState is an extension interface that contains information about the AppModule +/ and PrepareCheckState. +type HasPrepareCheckState interface { + AppModule + PrepareCheckState(context.Context) + +error +} + +/ HasPrecommit is an extension interface that contains information about the AppModule and Precommit. +type HasPrecommit interface { + AppModule + Precommit(context.Context) + +error +} + +/ ResponsePreBlock represents the response from the PreBlock method. +/ It can modify consensus parameters in storage and signal the caller through the return value. +/ When it returns ConsensusParamsChanged=true, the caller must refresh the consensus parameter in the finalize context. +/ The new context (ctx) + +must be passed to all the other lifecycle methods. +type ResponsePreBlock interface { + IsConsensusParamsChanged() + +bool +} + +/ HasPreBlocker is the extension interface that modules should implement to run +/ custom logic before BeginBlock. +type HasPreBlocker interface { + AppModule + / PreBlock is method that will be run before BeginBlock. + PreBlock(context.Context) (ResponsePreBlock, error) +} + +/ HasBeginBlocker is the extension interface that modules should implement to run +/ custom logic before transaction processing in a block. +type HasBeginBlocker interface { + AppModule + + / BeginBlock is a method that will be run before transactions are processed in + / a block. + BeginBlock(context.Context) + +error +} + +/ HasEndBlocker is the extension interface that modules should implement to run +/ custom logic after transaction processing in a block. +type HasEndBlocker interface { + AppModule + + / EndBlock is a method that will be run after transactions are processed in + / a block. 
+ EndBlock(context.Context) + +error +} + +/ UpgradeModule is the extension interface that upgrade module should implement to differentiate +/ it from other modules, migration handler need ensure the upgrade module's migration is executed +/ before the rest of the modules. +type UpgradeModule interface { + IsUpgradeModule() +} +``` + +- `PrepareCheckState(context.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](/docs/sdk/v0.53/learn/advanced/00-baseapp#commit) of each block using the [`checkState`](/docs/sdk/v0.53/learn/advanced/00-baseapp#state-updates) of the next block. Leave the implementation empty if no logic needs to be triggered during `Commit` of each block for this module. + +### Implementing the Application Module Interfaces + +Typically, the various application module interfaces are implemented in a file called `module.go`, located in the module's folder (e.g. `./x/module/module.go`). + +Almost every module needs to implement the `AppModuleBasic` and `AppModule` interfaces. If the module is only used for genesis, it will implement `AppModuleGenesis` instead of `AppModule`. The concrete type that implements the interface can add parameters that are required for the implementation of the various methods of the interface. For example, the `Route()` function often calls a `NewMsgServerImpl(k keeper)` function defined in `keeper/msg_server.go` and therefore needs to pass the module's [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper) as a parameter. + +```go +/ example +type AppModule struct { + AppModuleBasic + keeper Keeper +} +``` + +In the example above, you can see that the `AppModule` concrete type references an `AppModuleBasic`, and not an `AppModuleGenesis`. That is because `AppModuleGenesis` only needs to be implemented in modules that focus on genesis-related functionalities.
In most modules, the concrete `AppModule` type will have a reference to an `AppModuleBasic` and implement the two added methods of `AppModuleGenesis` directly in the `AppModule` type. + +If no parameter is required (which is often the case for `AppModuleBasic`), just declare an empty concrete type like so: + +```go +type AppModuleBasic struct{ +} +``` + +## Module Managers + +Module managers are used to manage collections of `AppModuleBasic` and `AppModule`. + +### `BasicManager` + +The `BasicManager` is a structure that lists all the `AppModuleBasic` of an application: + +```go expandable +/* +Package module contains application module patterns and associated "manager" functionality. +The module pattern has been broken down by: + - independent module functionality (AppModuleBasic) + - inter-dependent module simulation functionality (AppModuleSimulation) + - inter-dependent module full functionality (AppModule) + +inter-dependent module functionality is module functionality which somehow +depends on other modules, typically through the module keeper. Many of the +module keepers are dependent on each other, thus in order to access the full +set of module functionality we need to define all the keepers/params-store/keys +etc. This full set of advanced functionality is defined by the AppModule interface. + +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. 
+ +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. +type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. 
+type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, 
ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate + ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ AppModule is the form for an application module. Most of +/ its functionality has been moved to extension interfaces. +/ Deprecated: use appmodule.AppModule with a combination of extension interfaes interfaces instead. +type AppModule interface { + appmodule.AppModule + + AppModuleBasic +} + +/ HasInvariants is the interface for registering invariants. +/ +/ Deprecated: this will be removed in the next Cosmos SDK release. +type HasInvariants interface { + / RegisterInvariants registers module invariants. + RegisterInvariants(sdk.InvariantRegistry) +} + +/ HasServices is the interface for modules to register services. +type HasServices interface { + / RegisterServices allows a module to register services. + RegisterServices(Configurator) +} + +/ HasConsensusVersion is the interface for declaring a module consensus version. +type HasConsensusVersion interface { + / ConsensusVersion is a sequence number for state-breaking change of the + / module. It should be incremented on each consensus-breaking change + / introduced by the module. To avoid wrong/empty versions, the initial version + / should be set to 1. + ConsensusVersion() + +uint64 +} + +/ HasABCIEndblock is a released typo of HasABCIEndBlock. +/ Deprecated: use HasABCIEndBlock instead. +type HasABCIEndblock HasABCIEndBlock + +/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block. 
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This +/ function MUST be called insde an x/upgrade UpgradeHandler. +/ +/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from +/ x/upgrade's store, and the function needs to return the target VersionMap +/ that will in turn be persisted to the x/upgrade's store. In general, +/ returning RunMigrations should be enough: +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Internally, RunMigrations will perform the following steps: +/ - create an `updatedVM` VersionMap of module with their latest ConsensusVersion +/ - make a diff of `fromVM` and `udpatedVM`, and for each module: +/ - if the module's `fromVM` version is less than its `updatedVM` version, +/ then run in-place store migrations for that module between those versions. +/ - if the module does not exist in the `fromVM` (which means that it's a new module, +/ because it was not in the previous x/upgrade's store), then run +/ `InitGenesis` on that module. +/ +/ - return the `updatedVM` to be persisted in the x/upgrade's store. +/ +/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set) + +defined by +/ `DefaultMigrationsOrder` function. +/ +/ As an app developer, if you wish to skip running InitGenesis for your new +/ module "foo", you need to manually pass a `fromVM` argument to this function +/ foo's module version set to its latest ConsensusVersion. That way, the diff +/ between the function's `fromVM` and `udpatedVM` will be empty, hence not +/ running anything for foo. +/ +/ Example: +/ +/ cfg := module.NewConfigurator(...) +/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { +/ / Assume "foo" is a new module. 
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist +/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigration will by default +/ / run InitGenesis on foo. +/ / To skip running foo's InitGenesis, you need set `fromVM`'s foo to its latest +/ / consensus version: +/ fromVM["foo"] = foo.AppModule{ +}.ConsensusVersion() +/ +/ return app.mm.RunMigrations(ctx, cfg, fromVM) +/ +}) +/ +/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information. +func (m Manager) + +RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) { + c, ok := cfg.(*configurator) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{ +}, cfg) +} + modules := m.OrderMigrations + if modules == nil { + modules = DefaultMigrationsOrder(m.ModuleNames()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + updatedVM := VersionMap{ +} + for _, moduleName := range modules { + module := m.Modules[moduleName] + fromVersion, exists := fromVM[moduleName] + toVersion := uint64(0) + if module, ok := module.(HasConsensusVersion); ok { + toVersion = module.ConsensusVersion() +} + + / We run migration if the module is specified in `fromVM`. + / Otherwise we run InitGenesis. + / + / The module won't exist in the fromVM in two cases: + / 1. A new module is added. In this case we run InitGenesis with an + / empty genesis state. + / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time. + / In this case, all modules have yet to be added to x/upgrade's VersionMap store. 
+ if exists { + err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion) + if err != nil { + return nil, err +} + +} + +else { + sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName)) + if module, ok := m.Modules[moduleName].(HasGenesis); ok { + module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) +} + if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok { + moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc)) + / The module manager assumes only one module will update the + / validator set, and it can't be a new module. + if len(moduleValUpdates) > 0 { + return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module") +} + +} + +} + +updatedVM[moduleName] = toVersion +} + +return updatedVM, nil +} + +/ PreBlock performs begin block functionality for upgrade module. +/ It takes the current context as a parameter and returns a boolean value +/ indicating whether the migration was successfully executed or not. +func (m *Manager) + +PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) { + paramsChanged := false + for _, moduleName := range m.OrderPreBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok { + rsp, err := module.PreBlock(ctx) + if err != nil { + return nil, err +} + if rsp.IsConsensusParamsChanged() { + paramsChanged = true +} + +} + +} + +return &sdk.ResponsePreBlock{ + ConsensusParamsChanged: paramsChanged, +}, nil +} + +/ BeginBlock performs begin block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. 
+func (m *Manager) + +BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + for _, moduleName := range m.OrderBeginBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok { + if err := module.BeginBlock(ctx); err != nil { + return sdk.BeginBlock{ +}, err +} + +} + +} + +return sdk.BeginBlock{ + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ EndBlock performs end block functionality for all modules. It creates a +/ child context with an event manager to aggregate events emitted from all +/ modules. +func (m *Manager) + +EndBlock(ctx sdk.Context) (sdk.EndBlock, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + validatorUpdates := []abci.ValidatorUpdate{ +} + for _, moduleName := range m.OrderEndBlockers { + if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok { + err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + +} + +else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok { + moduleValUpdates, err := module.EndBlock(ctx) + if err != nil { + return sdk.EndBlock{ +}, err +} + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return sdk.EndBlock{ +}, errors.New("validator EndBlock updates already set by a previous module") +} + for _, updates := range moduleValUpdates { + validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{ + PubKey: updates.PubKey, + Power: updates.Power +}) +} + +} + +} + +else { + continue +} + +} + +return sdk.EndBlock{ + ValidatorUpdates: validatorUpdates, + Events: ctx.EventManager().ABCIEvents(), +}, nil +} + +/ Precommit performs precommit functionality for all modules. 
+func (m *Manager) + +Precommit(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrecommiters { + module, ok := m.Modules[moduleName].(appmodule.HasPrecommit) + if !ok { + continue +} + if err := module.Precommit(ctx); err != nil { + return err +} + +} + +return nil +} + +/ PrepareCheckState performs functionality for preparing the check state for all modules. +func (m *Manager) + +PrepareCheckState(ctx sdk.Context) + +error { + for _, moduleName := range m.OrderPrepareCheckStaters { + module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState) + if !ok { + continue +} + if err := module.PrepareCheckState(ctx); err != nil { + return err +} + +} + +return nil +} + +/ GetVersionMap gets consensus version from all modules +func (m *Manager) + +GetVersionMap() + +VersionMap { + vermap := make(VersionMap) + for name, v := range m.Modules { + version := uint64(0) + if v, ok := v.(HasConsensusVersion); ok { + version = v.ConsensusVersion() +} + +vermap[name] = version +} + +return vermap +} + +/ ModuleNames returns list of all module names, without any particular order. +func (m *Manager) + +ModuleNames() []string { + return slices.Collect(maps.Keys(m.Modules)) +} + +/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name, +/ except x/auth which will run last, see: +/ https://github.com/cosmos/cosmos-sdk/issues/10591 +func DefaultMigrationsOrder(modules []string) []string { + const authName = "auth" + out := make([]string, 0, len(modules)) + hasAuth := false + for _, m := range modules { + if m == authName { + hasAuth = true +} + +else { + out = append(out, m) +} + +} + +sort.Strings(out) + if hasAuth { + out = append(out, authName) +} + +return out +} +``` + +It implements the following methods: + +- `NewBasicManager(modules ...AppModuleBasic)`: Constructor function. It takes a list of the application's `AppModuleBasic` and builds a new `BasicManager`. 
This function is generally called in the `init()` function of [`app.go`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#core-application-file) to quickly initialize the independent elements of the application's modules (click [here](https://github.com/cosmos/gaia/blob/main/app/app.go#L59-L74) to see an example).
+- `NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic)`: Constructor function. It creates a new `BasicManager` from a `Manager`. The `BasicManager` will contain all `AppModuleBasic` from the `AppModule` manager using `CoreAppModuleBasicAdaptor` whenever possible. A module's `AppModuleBasic` can be overridden by passing a custom `AppModuleBasic` map.
+- `RegisterLegacyAminoCodec(cdc *codec.LegacyAmino)`: Registers the [`codec.LegacyAmino`s](/docs/sdk/v0.53/documentation/protocol-development/encoding#amino) of each of the application's `AppModuleBasic`. This function is usually called early on in the [application's construction](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor).
+- `RegisterInterfaces(registry codectypes.InterfaceRegistry)`: Registers interface types and implementations of each of the application's `AppModuleBasic`.
+- `DefaultGenesis(cdc codec.JSONCodec)`: Provides default genesis information for modules in the application by calling the [`DefaultGenesis(cdc codec.JSONCodec)`](/docs/sdk/v0.53/documentation/module-system/genesis#defaultgenesis) function of each module. It only calls modules that implement the `HasGenesisBasics` interface.
+- `ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage)`: Validates the genesis information of modules by calling the [`ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`](/docs/sdk/v0.53/documentation/module-system/genesis#validategenesis) function of modules implementing the `HasGenesisBasics` interface.
+- `RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux)`: Registers gRPC routes for modules.
+- `AddTxCommands(rootTxCmd *cobra.Command)`: Adds modules' transaction commands (defined as `GetTxCmd() *cobra.Command`) to the application's [`rootTxCommand`](/docs/sdk/v0.53/api-reference/client-tools/cli#transaction-commands). This function is usually called from the `main.go` file of the [application's command-line interface](/docs/sdk/v0.53/api-reference/client-tools/cli).
+- `AddQueryCommands(rootQueryCmd *cobra.Command)`: Adds modules' query commands (defined as `GetQueryCmd() *cobra.Command`) to the application's [`rootQueryCommand`](/docs/sdk/v0.53/api-reference/client-tools/cli#query-commands). This function is usually called from the `main.go` file of the [application's command-line interface](/docs/sdk/v0.53/api-reference/client-tools/cli).
+
+### `Manager`
+
+The `Manager` is a structure that holds all the `AppModule` of an application, and defines the order of execution between several key components of these modules:
+
+```go expandable
+/*
+Package module contains application module patterns and associated "manager" functionality.
+The module pattern has been broken down by:
+  - independent module functionality (AppModuleBasic)
+  - inter-dependent module simulation functionality (AppModuleSimulation)
+  - inter-dependent module full functionality (AppModule)
+
+inter-dependent module functionality is module functionality which somehow
+depends on other modules, typically through the module keeper. Many of the
+module keepers are dependent on each other, thus in order to access the full
+set of module functionality we need to define all the keepers/params-store/keys
+etc. This full set of advanced functionality is defined by the AppModule interface.
+ +Independent module functions are separated to allow for the construction of the +basic application structures required early on in the application definition +and used to enable the definition of full module functionality later in the +process. This separation is necessary, however we still want to allow for a +high level pattern for modules to follow - for instance, such that we don't +have to manually register all of the codecs for all the modules. This basic +procedure as well as other basic patterns are handled through the use of +BasicManager. + +Lastly the interface for genesis functionality (HasGenesis & HasABCIGenesis) + +has been +separated out from full module functionality (AppModule) + +so that modules which +are only used for genesis can take advantage of the Module patterns without +needlessly defining many placeholder functions +*/ +package module + +import ( + + "context" + "encoding/json" + "errors" + "fmt" + "maps" + "slices" + "sort" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/genesis" + errorsmod "cosmossdk.io/errors" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ AppModuleBasic is the standard form for basic non-dependant elements of an application module. +type AppModuleBasic interface { + HasName + RegisterLegacyAminoCodec(*codec.LegacyAmino) + +RegisterInterfaces(types.InterfaceRegistry) + +RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux) +} + +/ HasName allows the module to provide its own name for legacy purposes. +/ Newer apps should specify the name for their modules using a map +/ using NewManagerFromMap. 
+type HasName interface { + Name() + +string +} + +/ HasGenesisBasics is the legacy interface for stateless genesis methods. +type HasGenesisBasics interface { + DefaultGenesis(codec.JSONCodec) + +json.RawMessage + ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage) + +error +} + +/ BasicManager is a collection of AppModuleBasic +type BasicManager map[string]AppModuleBasic + +/ NewBasicManager creates a new BasicManager object +func NewBasicManager(modules ...AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for _, module := range modules { + moduleMap[module.Name()] = module +} + +return moduleMap +} + +/ NewBasicManagerFromManager creates a new BasicManager from a Manager +/ The BasicManager will contain all AppModuleBasic from the AppModule Manager +/ Module's AppModuleBasic can be overridden by passing a custom AppModuleBasic map +func NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic) + +BasicManager { + moduleMap := make(map[string]AppModuleBasic) + for name, module := range manager.Modules { + if customBasicMod, ok := customModuleBasics[name]; ok { + moduleMap[name] = customBasicMod + continue +} + if appModule, ok := module.(appmodule.AppModule); ok { + moduleMap[name] = CoreAppModuleBasicAdaptor(name, appModule) + +continue +} + if basicMod, ok := module.(AppModuleBasic); ok { + moduleMap[name] = basicMod +} + +} + +return moduleMap +} + +/ RegisterLegacyAminoCodec registers all module codecs +func (bm BasicManager) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + for _, b := range bm { + b.RegisterLegacyAminoCodec(cdc) +} +} + +/ RegisterInterfaces registers all module interface types +func (bm BasicManager) + +RegisterInterfaces(registry types.InterfaceRegistry) { + for _, m := range bm { + m.RegisterInterfaces(registry) +} +} + +/ DefaultGenesis provides default genesis information for all modules +func (bm BasicManager) + +DefaultGenesis(cdc 
codec.JSONCodec) + +map[string]json.RawMessage { + genesisData := make(map[string]json.RawMessage) + for _, b := range bm { + if mod, ok := b.(HasGenesisBasics); ok { + genesisData[b.Name()] = mod.DefaultGenesis(cdc) +} + +} + +return genesisData +} + +/ ValidateGenesis performs genesis state validation for all modules +func (bm BasicManager) + +ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesisData map[string]json.RawMessage) + +error { + for _, b := range bm { + / first check if the module is an adapted Core API Module + if mod, ok := b.(HasGenesisBasics); ok { + if err := mod.ValidateGenesis(cdc, txEncCfg, genesisData[b.Name()]); err != nil { + return err +} + +} + +} + +return nil +} + +/ RegisterGRPCGatewayRoutes registers all module rest routes +func (bm BasicManager) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux) { + for _, b := range bm { + b.RegisterGRPCGatewayRoutes(clientCtx, rtr) +} +} + +/ AddTxCommands adds all tx commands to the rootTxCmd. +func (bm BasicManager) + +AddTxCommands(rootTxCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetTxCmd() *cobra.Command +}); ok { + if cmd := mod.GetTxCmd(); cmd != nil { + rootTxCmd.AddCommand(cmd) +} + +} + +} +} + +/ AddQueryCommands adds all query commands to the rootQueryCmd. +func (bm BasicManager) + +AddQueryCommands(rootQueryCmd *cobra.Command) { + for _, b := range bm { + if mod, ok := b.(interface { + GetQueryCmd() *cobra.Command +}); ok { + if cmd := mod.GetQueryCmd(); cmd != nil { + rootQueryCmd.AddCommand(cmd) +} + +} + +} +} + +/ HasGenesis is the extension interface for stateful genesis methods. +type HasGenesis interface { + HasGenesisBasics + InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) + +ExportGenesis(sdk.Context, codec.JSONCodec) + +json.RawMessage +} + +/ HasABCIGenesis is the extension interface for stateful genesis methods which returns validator updates. 
+type HasABCIGenesis interface {
+ HasGenesisBasics
+ InitGenesis(sdk.Context, codec.JSONCodec, json.RawMessage) []abci.ValidatorUpdate
+ ExportGenesis(sdk.Context, codec.JSONCodec)
+
+json.RawMessage
+}
+
+/ AppModule is the form for an application module. Most of
+/ its functionality has been moved to extension interfaces.
+/ Deprecated: use appmodule.AppModule with a combination of extension interfaces instead.
+type AppModule interface {
+ appmodule.AppModule
+
+ AppModuleBasic
+}
+
+/ HasInvariants is the interface for registering invariants.
+/
+/ Deprecated: this will be removed in the next Cosmos SDK release.
+type HasInvariants interface {
+ / RegisterInvariants registers module invariants.
+ RegisterInvariants(sdk.InvariantRegistry)
+}
+
+/ HasServices is the interface for modules to register services.
+type HasServices interface {
+ / RegisterServices allows a module to register services.
+ RegisterServices(Configurator)
+}
+
+/ HasConsensusVersion is the interface for declaring a module consensus version.
+type HasConsensusVersion interface {
+ / ConsensusVersion is a sequence number for state-breaking change of the
+ / module. It should be incremented on each consensus-breaking change
+ / introduced by the module. To avoid wrong/empty versions, the initial version
+ / should be set to 1.
+ ConsensusVersion()
+
+uint64
+}
+
+/ HasABCIEndblock is a released typo of HasABCIEndBlock.
+/ Deprecated: use HasABCIEndBlock instead.
+type HasABCIEndblock HasABCIEndBlock
+
+/ HasABCIEndBlock is the interface for modules that need to run code at the end of the block.
+type HasABCIEndBlock interface { + AppModule + EndBlock(context.Context) ([]abci.ValidatorUpdate, error) +} + +var ( + _ appmodule.AppModule = (*GenesisOnlyAppModule)(nil) + _ AppModuleBasic = (*GenesisOnlyAppModule)(nil) +) + +/ AppModuleGenesis is the standard form for an application module genesis functions +type AppModuleGenesis interface { + AppModuleBasic + HasABCIGenesis +} + +/ GenesisOnlyAppModule is an AppModule that only has import/export functionality +type GenesisOnlyAppModule struct { + AppModuleGenesis +} + +/ NewGenesisOnlyAppModule creates a new GenesisOnlyAppModule object +func NewGenesisOnlyAppModule(amg AppModuleGenesis) + +GenesisOnlyAppModule { + return GenesisOnlyAppModule{ + AppModuleGenesis: amg, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (GenesisOnlyAppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (GenesisOnlyAppModule) + +IsAppModule() { +} + +/ RegisterInvariants is a placeholder function register no invariants +func (GenesisOnlyAppModule) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (gam GenesisOnlyAppModule) + +ConsensusVersion() + +uint64 { + return 1 +} + +/ Manager defines a module manager that provides the high level utility for managing and executing +/ operations for a group of modules +type Manager struct { + Modules map[string]any / interface{ +} + +is used now to support the legacy AppModule as well as new core appmodule.AppModule. + OrderInitGenesis []string + OrderExportGenesis []string + OrderPreBlockers []string + OrderBeginBlockers []string + OrderEndBlockers []string + OrderPrepareCheckStaters []string + OrderPrecommiters []string + OrderMigrations []string +} + +/ NewManager creates a new Manager object. 
+func NewManager(modules ...AppModule) *Manager { + moduleMap := make(map[string]any) + modulesStr := make([]string, 0, len(modules)) + preBlockModulesStr := make([]string, 0) + for _, module := range modules { + if _, ok := module.(appmodule.AppModule); !ok { + panic(fmt.Sprintf("module %s does not implement appmodule.AppModule", module.Name())) +} + +moduleMap[module.Name()] = module + modulesStr = append(modulesStr, module.Name()) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, module.Name()) +} + +} + +return &Manager{ + Modules: moduleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderPrepareCheckStaters: modulesStr, + OrderPrecommiters: modulesStr, + OrderEndBlockers: modulesStr, +} +} + +/ NewManagerFromMap creates a new Manager object from a map of module names to module implementations. +/ This method should be used for apps and modules which have migrated to the cosmossdk.io/core.appmodule.AppModule API. +func NewManagerFromMap(moduleMap map[string]appmodule.AppModule) *Manager { + simpleModuleMap := make(map[string]any) + modulesStr := make([]string, 0, len(simpleModuleMap)) + preBlockModulesStr := make([]string, 0) + for name, module := range moduleMap { + simpleModuleMap[name] = module + modulesStr = append(modulesStr, name) + if _, ok := module.(appmodule.HasPreBlocker); ok { + preBlockModulesStr = append(preBlockModulesStr, name) +} + +} + + / Sort the modules by name. Given that we are using a map above we can't guarantee the order. 
+ sort.Strings(modulesStr) + +return &Manager{ + Modules: simpleModuleMap, + OrderInitGenesis: modulesStr, + OrderExportGenesis: modulesStr, + OrderPreBlockers: preBlockModulesStr, + OrderBeginBlockers: modulesStr, + OrderEndBlockers: modulesStr, + OrderPrecommiters: modulesStr, + OrderPrepareCheckStaters: modulesStr, +} +} + +/ SetOrderInitGenesis sets the order of init genesis calls +func (m *Manager) + +SetOrderInitGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderInitGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderInitGenesis = moduleNames +} + +/ SetOrderExportGenesis sets the order of export genesis calls +func (m *Manager) + +SetOrderExportGenesis(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderExportGenesis", moduleNames, func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasGenesis := module.(appmodule.HasGenesis); hasGenesis { + return !hasGenesis +} + if _, hasABCIGenesis := module.(HasABCIGenesis); hasABCIGenesis { + return !hasABCIGenesis +} + + _, hasGenesis := module.(HasGenesis) + +return !hasGenesis +}) + +m.OrderExportGenesis = moduleNames +} + +/ SetOrderPreBlockers sets the order of set pre-blocker calls +func (m *Manager) + +SetOrderPreBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPreBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBlock := module.(appmodule.HasPreBlocker) + +return !hasBlock +}) + +m.OrderPreBlockers = moduleNames +} + +/ SetOrderBeginBlockers sets the order of set begin-blocker calls +func (m *Manager) + +SetOrderBeginBlockers(moduleNames ...string) { + 
m.assertNoForgottenModules("SetOrderBeginBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasBeginBlock := module.(appmodule.HasBeginBlocker) + +return !hasBeginBlock +}) + +m.OrderBeginBlockers = moduleNames +} + +/ SetOrderEndBlockers sets the order of set end-blocker calls +func (m *Manager) + +SetOrderEndBlockers(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderEndBlockers", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + if _, hasEndBlock := module.(appmodule.HasEndBlocker); hasEndBlock { + return !hasEndBlock +} + + _, hasABCIEndBlock := module.(HasABCIEndBlock) + +return !hasABCIEndBlock +}) + +m.OrderEndBlockers = moduleNames +} + +/ SetOrderPrepareCheckStaters sets the order of set prepare-check-stater calls +func (m *Manager) + +SetOrderPrepareCheckStaters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrepareCheckStaters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrepareCheckState := module.(appmodule.HasPrepareCheckState) + +return !hasPrepareCheckState +}) + +m.OrderPrepareCheckStaters = moduleNames +} + +/ SetOrderPrecommiters sets the order of set precommiter calls +func (m *Manager) + +SetOrderPrecommiters(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderPrecommiters", moduleNames, + func(moduleName string) + +bool { + module := m.Modules[moduleName] + _, hasPrecommit := module.(appmodule.HasPrecommit) + +return !hasPrecommit +}) + +m.OrderPrecommiters = moduleNames +} + +/ SetOrderMigrations sets the order of migrations to be run. If not set +/ then migrations will be run with an order defined in `DefaultMigrationsOrder`. 
+func (m *Manager) + +SetOrderMigrations(moduleNames ...string) { + m.assertNoForgottenModules("SetOrderMigrations", moduleNames, nil) + +m.OrderMigrations = moduleNames +} + +/ RegisterInvariants registers all module invariants +/ +/ Deprecated: this function is a no-op and will be removed in the next release of the Cosmos SDK. +func (m *Manager) + +RegisterInvariants(_ sdk.InvariantRegistry) { +} + +/ RegisterServices registers all module services +func (m *Manager) + +RegisterServices(cfg Configurator) + +error { + for _, module := range m.Modules { + if module, ok := module.(HasServices); ok { + module.RegisterServices(cfg) +} + if module, ok := module.(appmodule.HasServices); ok { + err := module.RegisterServices(cfg) + if err != nil { + return err +} + +} + if cfg.Error() != nil { + return cfg.Error() +} + +} + +return nil +} + +/ InitGenesis performs init genesis functionality for modules. Exactly one +/ module must return a non-empty validator set update to correctly initialize +/ the chain. 
+func (m *Manager) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage) (*abci.ResponseInitChain, error) { + var validatorUpdates []abci.ValidatorUpdate + ctx.Logger().Info("initializing blockchain state from genesis.json") + for _, moduleName := range m.OrderInitGenesis { + if genesisData[moduleName] == nil { + continue +} + mod := m.Modules[moduleName] + / we might get an adapted module, a native core API module or a legacy module + if module, ok := mod.(appmodule.HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + / core API genesis + source, err := genesis.SourceFromRawJSON(genesisData[moduleName]) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +err = module.InitGenesis(ctx, source) + if err != nil { + return &abci.ResponseInitChain{ +}, err +} + +} + +else if module, ok := mod.(HasGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + +module.InitGenesis(ctx, cdc, genesisData[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + ctx.Logger().Debug("running initialization for module", "module", moduleName) + moduleValUpdates := module.InitGenesis(ctx, cdc, genesisData[moduleName]) + + / use these validator updates if provided, the module manager assumes + / only one module will update the validator set + if len(moduleValUpdates) > 0 { + if len(validatorUpdates) > 0 { + return &abci.ResponseInitChain{ +}, errors.New("validator InitGenesis updates already set by a previous module") +} + +validatorUpdates = moduleValUpdates +} + +} + +} + + / a chain must initialize with a non-empty validator set + if len(validatorUpdates) == 0 { + return &abci.ResponseInitChain{ +}, fmt.Errorf("validator set is empty after InitGenesis, please ensure at least one validator is initialized with a delegation greater than or equal to the DefaultPowerReduction (%d)", sdk.DefaultPowerReduction) +} + +return 
&abci.ResponseInitChain{ + Validators: validatorUpdates, +}, nil +} + +/ ExportGenesis performs export genesis functionality for modules +func (m *Manager) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) (map[string]json.RawMessage, error) { + return m.ExportGenesisForModules(ctx, cdc, []string{ +}) +} + +/ ExportGenesisForModules performs export genesis functionality for modules +func (m *Manager) + +ExportGenesisForModules(ctx sdk.Context, cdc codec.JSONCodec, modulesToExport []string) (map[string]json.RawMessage, error) { + if len(modulesToExport) == 0 { + modulesToExport = m.OrderExportGenesis +} + / verify modules exists in app, so that we don't panic in the middle of an export + if err := m.checkModulesExists(modulesToExport); err != nil { + return nil, err +} + +type genesisResult struct { + bz json.RawMessage + err error +} + channels := make(map[string]chan genesisResult) + for _, moduleName := range modulesToExport { + mod := m.Modules[moduleName] + if module, ok := mod.(appmodule.HasGenesis); ok { + / core API genesis + channels[moduleName] = make(chan genesisResult) + +go func(module appmodule.HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + target := genesis.RawJSONTarget{ +} + err := module.ExportGenesis(ctx, target.Target()) + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +rawJSON, err := target.JSON() + if err != nil { + ch <- genesisResult{ + nil, err +} + +return +} + +ch <- genesisResult{ + rawJSON, nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasGenesis); ok { + channels[moduleName] = make(chan genesisResult) + +go func(module HasGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +else if module, ok := mod.(HasABCIGenesis); ok { + 
channels[moduleName] = make(chan genesisResult) + +go func(module HasABCIGenesis, ch chan genesisResult) { + ctx := ctx.WithGasMeter(storetypes.NewInfiniteGasMeter()) / avoid race conditions + ch <- genesisResult{ + module.ExportGenesis(ctx, cdc), nil +} + +}(module, channels[moduleName]) +} + +} + genesisData := make(map[string]json.RawMessage) + for moduleName := range channels { + res := <-channels[moduleName] + if res.err != nil { + return nil, fmt.Errorf("genesis export error in %s: %w", moduleName, res.err) +} + +genesisData[moduleName] = res.bz +} + +return genesisData, nil +} + +/ checkModulesExists verifies that all modules in the list exist in the app +func (m *Manager) + +checkModulesExists(moduleName []string) + +error { + for _, name := range moduleName { + if _, ok := m.Modules[name]; !ok { + return fmt.Errorf("module %s does not exist", name) +} + +} + +return nil +} + +/ assertNoForgottenModules checks that we didn't forget any modules in the SetOrder* functions. +/ `pass` is a closure which allows one to omit modules from `moduleNames`. +/ If you provide non-nil `pass` and it returns true, the module would not be subject of the assertion. +func (m *Manager) + +assertNoForgottenModules(setOrderFnName string, moduleNames []string, pass func(moduleName string) + +bool) { + ms := make(map[string]bool) + for _, m := range moduleNames { + ms[m] = true +} + +var missing []string + for m := range m.Modules { + if pass != nil && pass(m) { + continue +} + if !ms[m] { + missing = append(missing, m) +} + +} + if len(missing) != 0 { + sort.Strings(missing) + +panic(fmt.Sprintf( + "all modules must be defined when setting %s, missing: %v", setOrderFnName, missing)) +} +} + +/ MigrationHandler is the migration function that each module registers. +type MigrationHandler func(sdk.Context) + +error + +/ VersionMap is a map of moduleName -> version +type VersionMap map[string]uint64 + +/ RunMigrations performs in-place store migrations for all modules. 
This
+/ function MUST be called inside an x/upgrade UpgradeHandler.
+/
+/ Recall that in an upgrade handler, the `fromVM` VersionMap is retrieved from
+/ x/upgrade's store, and the function needs to return the target VersionMap
+/ that will in turn be persisted to the x/upgrade's store. In general,
+/ returning RunMigrations should be enough:
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Internally, RunMigrations will perform the following steps:
+/ - create an `updatedVM` VersionMap of modules with their latest ConsensusVersion
+/ - make a diff of `fromVM` and `updatedVM`, and for each module:
+/ - if the module's `fromVM` version is less than its `updatedVM` version,
+/ then run in-place store migrations for that module between those versions.
+/ - if the module does not exist in the `fromVM` (which means that it's a new module,
+/ because it was not in the previous x/upgrade's store), then run
+/ `InitGenesis` on that module.
+/
+/ - return the `updatedVM` to be persisted in the x/upgrade's store.
+/
+/ Migrations are run in an order defined by `Manager.OrderMigrations` or (if not set)
+
+defined by
+/ `DefaultMigrationsOrder` function.
+/
+/ As an app developer, if you wish to skip running InitGenesis for your new
+/ module "foo", you need to manually pass a `fromVM` argument to this function
+/ with foo's module version set to its latest ConsensusVersion. That way, the diff
+/ between the function's `fromVM` and `updatedVM` will be empty, hence not
+/ running anything for foo.
+/
+/ Example:
+/
+/ cfg := module.NewConfigurator(...)
+/ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+/ / Assume "foo" is a new module.
+/ / `fromVM` is fetched from existing x/upgrade store. Since foo didn't exist
+/ / before this upgrade, `v, exists := fromVM["foo"]; exists == false`, and RunMigrations will by default
+/ / run InitGenesis on foo.
+/ / To skip running foo's InitGenesis, you need to set `fromVM`'s foo to its latest
+/ / consensus version:
+/ fromVM["foo"] = foo.AppModule{
+}.ConsensusVersion()
+/
+/ return app.mm.RunMigrations(ctx, cfg, fromVM)
+/
+})
+/
+/ Please also refer to https://docs.cosmos.network/main/core/upgrade for more information.
+func (m Manager)
+
+RunMigrations(ctx context.Context, cfg Configurator, fromVM VersionMap) (VersionMap, error) {
+ c, ok := cfg.(*configurator)
+ if !ok {
+ return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", &configurator{
+}, cfg)
+}
+ modules := m.OrderMigrations
+ if modules == nil {
+ modules = DefaultMigrationsOrder(m.ModuleNames())
+}
+ sdkCtx := sdk.UnwrapSDKContext(ctx)
+ updatedVM := VersionMap{
+}
+ for _, moduleName := range modules {
+ module := m.Modules[moduleName]
+ fromVersion, exists := fromVM[moduleName]
+ toVersion := uint64(0)
+ if module, ok := module.(HasConsensusVersion); ok {
+ toVersion = module.ConsensusVersion()
+}
+
+ / We run migration if the module is specified in `fromVM`.
+ / Otherwise we run InitGenesis.
+ /
+ / The module won't exist in the fromVM in two cases:
+ / 1. A new module is added. In this case we run InitGenesis with an
+ / empty genesis state.
+ / 2. An existing chain is upgrading from version < 0.43 to v0.43+ for the first time.
+ / In this case, all modules have yet to be added to x/upgrade's VersionMap store.
+        if exists {
+            err := c.runModuleMigrations(sdkCtx, moduleName, fromVersion, toVersion)
+            if err != nil {
+                return nil, err
+            }
+        } else {
+            sdkCtx.Logger().Info(fmt.Sprintf("adding a new module: %s", moduleName))
+            if module, ok := m.Modules[moduleName].(HasGenesis); ok {
+                module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc))
+            }
+            if module, ok := m.Modules[moduleName].(HasABCIGenesis); ok {
+                moduleValUpdates := module.InitGenesis(sdkCtx, c.cdc, module.DefaultGenesis(c.cdc))
+                / The module manager assumes only one module will update the
+                / validator set, and it can't be a new module.
+                if len(moduleValUpdates) > 0 {
+                    return nil, errorsmod.Wrapf(sdkerrors.ErrLogic, "validator InitGenesis update is already set by another module")
+                }
+            }
+        }
+
+        updatedVM[moduleName] = toVersion
+    }
+
+    return updatedVM, nil
+}
+
+/ PreBlock performs begin block functionality for upgrade module.
+/ It takes the current context as a parameter and returns a boolean value
+/ indicating whether the migration was successfully executed or not.
+func (m *Manager) PreBlock(ctx sdk.Context) (*sdk.ResponsePreBlock, error) {
+    paramsChanged := false
+    for _, moduleName := range m.OrderPreBlockers {
+        if module, ok := m.Modules[moduleName].(appmodule.HasPreBlocker); ok {
+            rsp, err := module.PreBlock(ctx)
+            if err != nil {
+                return nil, err
+            }
+            if rsp.IsConsensusParamsChanged() {
+                paramsChanged = true
+            }
+        }
+    }
+
+    return &sdk.ResponsePreBlock{
+        ConsensusParamsChanged: paramsChanged,
+    }, nil
+}
+
+/ BeginBlock performs begin block functionality for all modules. It creates a
+/ child context with an event manager to aggregate events emitted from all
+/ modules.
+func (m *Manager) BeginBlock(ctx sdk.Context) (sdk.BeginBlock, error) {
+    ctx = ctx.WithEventManager(sdk.NewEventManager())
+    for _, moduleName := range m.OrderBeginBlockers {
+        if module, ok := m.Modules[moduleName].(appmodule.HasBeginBlocker); ok {
+            if err := module.BeginBlock(ctx); err != nil {
+                return sdk.BeginBlock{}, err
+            }
+        }
+    }
+
+    return sdk.BeginBlock{
+        Events: ctx.EventManager().ABCIEvents(),
+    }, nil
+}
+
+/ EndBlock performs end block functionality for all modules. It creates a
+/ child context with an event manager to aggregate events emitted from all
+/ modules.
+func (m *Manager) EndBlock(ctx sdk.Context) (sdk.EndBlock, error) {
+    ctx = ctx.WithEventManager(sdk.NewEventManager())
+    validatorUpdates := []abci.ValidatorUpdate{}
+    for _, moduleName := range m.OrderEndBlockers {
+        if module, ok := m.Modules[moduleName].(appmodule.HasEndBlocker); ok {
+            err := module.EndBlock(ctx)
+            if err != nil {
+                return sdk.EndBlock{}, err
+            }
+        } else if module, ok := m.Modules[moduleName].(HasABCIEndBlock); ok {
+            moduleValUpdates, err := module.EndBlock(ctx)
+            if err != nil {
+                return sdk.EndBlock{}, err
+            }
+            / use these validator updates if provided, the module manager assumes
+            / only one module will update the validator set
+            if len(moduleValUpdates) > 0 {
+                if len(validatorUpdates) > 0 {
+                    return sdk.EndBlock{}, errors.New("validator EndBlock updates already set by a previous module")
+                }
+                for _, updates := range moduleValUpdates {
+                    validatorUpdates = append(validatorUpdates, abci.ValidatorUpdate{
+                        PubKey: updates.PubKey,
+                        Power:  updates.Power,
+                    })
+                }
+            }
+        } else {
+            continue
+        }
+    }
+
+    return sdk.EndBlock{
+        ValidatorUpdates: validatorUpdates,
+        Events:           ctx.EventManager().ABCIEvents(),
+    }, nil
+}
+
+/ Precommit performs precommit functionality for all modules.
+func (m *Manager) Precommit(ctx sdk.Context) error {
+    for _, moduleName := range m.OrderPrecommiters {
+        module, ok := m.Modules[moduleName].(appmodule.HasPrecommit)
+        if !ok {
+            continue
+        }
+        if err := module.Precommit(ctx); err != nil {
+            return err
+        }
+    }
+
+    return nil
+}
+
+/ PrepareCheckState performs functionality for preparing the check state for all modules.
+func (m *Manager) PrepareCheckState(ctx sdk.Context) error {
+    for _, moduleName := range m.OrderPrepareCheckStaters {
+        module, ok := m.Modules[moduleName].(appmodule.HasPrepareCheckState)
+        if !ok {
+            continue
+        }
+        if err := module.PrepareCheckState(ctx); err != nil {
+            return err
+        }
+    }
+
+    return nil
+}
+
+/ GetVersionMap gets consensus version from all modules
+func (m *Manager) GetVersionMap() VersionMap {
+    vermap := make(VersionMap)
+    for name, v := range m.Modules {
+        version := uint64(0)
+        if v, ok := v.(HasConsensusVersion); ok {
+            version = v.ConsensusVersion()
+        }
+        vermap[name] = version
+    }
+
+    return vermap
+}
+
+/ ModuleNames returns list of all module names, without any particular order.
+func (m *Manager) ModuleNames() []string {
+    return slices.Collect(maps.Keys(m.Modules))
+}
+
+/ DefaultMigrationsOrder returns a default migrations order: ascending alphabetical by module name,
+/ except x/auth which will run last, see:
+/ https://github.com/cosmos/cosmos-sdk/issues/10591
+func DefaultMigrationsOrder(modules []string) []string {
+    const authName = "auth"
+    out := make([]string, 0, len(modules))
+    hasAuth := false
+    for _, m := range modules {
+        if m == authName {
+            hasAuth = true
+        } else {
+            out = append(out, m)
+        }
+    }
+
+    sort.Strings(out)
+    if hasAuth {
+        out = append(out, authName)
+    }
+
+    return out
+}
+```
+
+The module manager is used throughout the application whenever an action on a collection of modules is required. It implements the following methods:
+
+- `NewManager(modules ...AppModule)`: Constructor function.
It takes a list of the application's `AppModule`s and builds a new `Manager`. It is generally called from the application's main [constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderInitGenesis(moduleNames ...string)`: Sets the order in which the [`InitGenesis`](/docs/sdk/v0.53/documentation/module-system/genesis#initgenesis) function of each module will be called when the application is first started. This function is generally called from the application's main [constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+  To initialize modules successfully, module dependencies should be considered. For example, the `genutil` module must occur after the `staking` module, so that the pools are properly initialized with tokens from genesis accounts; `genutil` must also occur after `auth`, so that it can access the params from `auth`; and IBC's `capability` module should be initialized before all other modules, so that it can initialize any capabilities.
+- `SetOrderExportGenesis(moduleNames ...string)`: Sets the order in which the [`ExportGenesis`](/docs/sdk/v0.53/documentation/module-system/genesis#exportgenesis) function of each module will be called in case of an export. This function is generally called from the application's main [constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderPreBlockers(moduleNames ...string)`: Sets the order in which the `PreBlock()` function of each module will be called before `BeginBlock()` of all modules. This function is generally called from the application's main [constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderBeginBlockers(moduleNames ...string)`: Sets the order in which the `BeginBlock()` function of each module will be called at the beginning of each block.
This function is generally called from the application's main [constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderEndBlockers(moduleNames ...string)`: Sets the order in which the `EndBlock()` function of each module will be called at the end of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderPrecommiters(moduleNames ...string)`: Sets the order in which the `Precommit()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderPrepareCheckStaters(moduleNames ...string)`: Sets the order in which the `PrepareCheckState()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function).
+- `SetOrderMigrations(moduleNames ...string)`: Sets the order of migrations to be run. If not set, migrations are run in the order defined by `DefaultMigrationsOrder`.
+- `RegisterInvariants(ir sdk.InvariantRegistry)`: Registers the [invariants](/docs/sdk/v0.53/documentation/module-system/invariants) of modules implementing the `HasInvariants` interface.
+- `RegisterServices(cfg Configurator)`: Registers the services of modules implementing the `HasServices` interface.
+- `InitGenesis(ctx context.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage)`: Calls the [`InitGenesis`](/docs/sdk/v0.53/documentation/module-system/genesis#initgenesis) function of each module when the application is first started, in the order defined in `OrderInitGenesis`.
Returns an `abci.ResponseInitChain` to the underlying consensus engine, which can contain validator updates.
+- `ExportGenesis(ctx context.Context, cdc codec.JSONCodec)`: Calls the [`ExportGenesis`](/docs/sdk/v0.53/documentation/module-system/genesis#exportgenesis) function of each module, in the order defined in `OrderExportGenesis`. The export constructs a genesis file from a previously existing state, and is mainly used when a hard-fork upgrade of the chain is required.
+- `ExportGenesisForModules(ctx context.Context, cdc codec.JSONCodec, modulesToExport []string)`: Behaves the same as `ExportGenesis`, except it takes a list of modules to export.
+- `BeginBlock(ctx context.Context) error`: At the beginning of each block, this function is called from [`BaseApp`](/docs/sdk/v0.53/documentation/application-framework/baseapp#beginblock) and, in turn, calls the [`BeginBlock`](/docs/sdk/v0.53/documentation/module-system/beginblock-endblock) function of each module implementing the `appmodule.HasBeginBlocker` interface, in the order defined in `OrderBeginBlockers`. It creates a child [context](/docs/sdk/v0.53/documentation/application-framework/context) with an event manager to aggregate [events](/docs/sdk/v0.53/api-reference/events-streaming/events) emitted from each module.
+- `EndBlock(ctx context.Context) error`: At the end of each block, this function is called from [`BaseApp`](/docs/sdk/v0.53/documentation/application-framework/baseapp#endblock) and, in turn, calls the [`EndBlock`](/docs/sdk/v0.53/documentation/module-system/beginblock-endblock) function of each module implementing the `appmodule.HasEndBlocker` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](/docs/sdk/v0.53/documentation/application-framework/context) with an event manager to aggregate [events](/docs/sdk/v0.53/api-reference/events-streaming/events) emitted from all modules.
The function returns an `sdk.EndBlock` which contains the aforementioned events, as well as validator set updates (if any).
+- `EndBlock(context.Context) ([]abci.ValidatorUpdate, error)`: At the end of each block, this function is called from [`BaseApp`](/docs/sdk/v0.53/documentation/application-framework/baseapp#endblock) and, in turn, calls the [`EndBlock`](/docs/sdk/v0.53/documentation/module-system/beginblock-endblock) function of each module implementing the `module.HasABCIEndBlock` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](/docs/sdk/v0.53/documentation/application-framework/context) with an event manager to aggregate [events](/docs/sdk/v0.53/api-reference/events-streaming/events) emitted from all modules. The function returns an `sdk.EndBlock` which contains the aforementioned events, as well as validator set updates (if any).
+- `Precommit(ctx context.Context)`: During [`Commit`](/docs/sdk/v0.53/documentation/application-framework/baseapp#commit), this function is called from `BaseApp` immediately before the [`deliverState`](/docs/sdk/v0.53/documentation/application-framework/baseapp#state-updates) is written to the underlying [`rootMultiStore`](/docs/sdk/v0.53/documentation/state-storage/store#commitmultistore) and, in turn, calls the `Precommit` function of each module implementing the `HasPrecommit` interface, in the order defined in `OrderPrecommiters`. It creates a child [context](/docs/sdk/v0.53/documentation/application-framework/context) where the underlying `CacheMultiStore` is that of the newly committed block's [`finalizeblockstate`](/docs/sdk/v0.53/documentation/application-framework/baseapp#state-updates).
+- `PrepareCheckState(ctx context.Context)`: During [`Commit`](/docs/sdk/v0.53/documentation/application-framework/baseapp#commit), this function is called from `BaseApp` immediately after the [`deliverState`](/docs/sdk/v0.53/documentation/application-framework/baseapp#state-updates) is written to the underlying [`rootMultiStore`](/docs/sdk/v0.53/documentation/state-storage/store#commitmultistore) and, in turn, calls the `PrepareCheckState` function of each module implementing the `HasPrepareCheckState` interface, in the order defined in `OrderPrepareCheckStaters`. It creates a child [context](/docs/sdk/v0.53/documentation/application-framework/context) where the underlying `CacheMultiStore` is that of the next block's [`checkState`](/docs/sdk/v0.53/documentation/application-framework/baseapp#state-updates). Writes to this state will be present in the [`checkState`](/docs/sdk/v0.53/documentation/application-framework/baseapp#state-updates) of the next block, so this method can be used to prepare the `checkState` for the next block.
+ +Here's an example of a concrete integration within an `simapp`: + +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/circuit" + circuitkeeper "cosmossdk.io/x/circuit/keeper" + circuittypes "cosmossdk.io/x/circuit/types" + "cosmossdk.io/x/evidence" + evidencekeeper "cosmossdk.io/x/evidence/keeper" + evidencetypes "cosmossdk.io/x/evidence/types" + "cosmossdk.io/x/feegrant" + feegrantkeeper "cosmossdk.io/x/feegrant/keeper" + feegrantmodule "cosmossdk.io/x/feegrant/module" + "cosmossdk.io/x/nft" + nftkeeper "cosmossdk.io/x/nft/keeper" + nftmodule "cosmossdk.io/x/nft/module" + "cosmossdk.io/x/tx/signing" + "cosmossdk.io/x/upgrade" + upgradekeeper "cosmossdk.io/x/upgrade/keeper" + upgradetypes "cosmossdk.io/x/upgrade/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar 
"github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" 
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/group" + groupkeeper "github.com/cosmos/cosmos-sdk/x/group/keeper" + groupmodule "github.com/cosmos/cosmos-sdk/x/group/module" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + nft.ModuleName: nil, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + CircuitKeeper circuitkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + GroupKeeper groupkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + NFTKeeper nftkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + circuittypes.StoreKey, + authzkeeper.StoreKey, + nftkeeper.StoreKey, + group.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + 
runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = 
distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.CircuitKeeper = circuitkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[circuittypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + app.AccountKeeper.AddressCodec(), + ) + +app.BaseApp.SetCircuitBreaker(&app.CircuitKeeper) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + groupConfig := group.DefaultConfig() + /* + Example of setting group params: + groupConfig.MaxMetadataLen = 1000 + */ + app.GroupKeeper = groupkeeper.NewKeeper( + keys[group.StoreKey], + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + groupConfig, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper 
= upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. + / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. 
+ ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + +app.NFTKeeper = nftkeeper.NewKeeper( + runtime.NewKVStoreService(keys[nftkeeper.StoreKey]), + appCodec, + app.AccountKeeper, + app.BankKeeper, + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + groupmodule.NewAppModule(appCodec, app.GroupKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + nftmodule.NewAppModule(appCodec, app.NFTKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + circuit.NewAppModule(appCodec, app.CircuitKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependant module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. 
+ / Additionally, app module basics can be overwritten by passing them as argument. + app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + group.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + nft.ModuleName, + group.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + circuittypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := NewAnteHandler( + HandlerOptions{ + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + &app.CircuitKeeper, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req 
*abci.RequestInitChain) (*abci.ResponseInitChain, error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +This is the same example from `runtime` (the package that powers app di): + +```go expandable +package runtime + +import ( + + "fmt" + "os" + "slices" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoregistry" + + runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1" + appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1" + authmodulev1 "cosmossdk.io/api/cosmos/auth/module/v1" + stakingmodulev1 "cosmossdk.io/api/cosmos/staking/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/event" + "cosmossdk.io/core/genesis" + "cosmossdk.io/core/header" + "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/tx/signing" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/std" + "github.com/cosmos/cosmos-sdk/types/module" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +type appModule struct { + app *App +} + +func (m appModule) + +RegisterServices(configurator module.Configurator) { + err := m.app.registerRuntimeServices(configurator) + if err != nil { + panic(err) +} +} + +func (m appModule) + +IsOnePerModuleType() { +} + +func (m appModule) + +IsAppModule() { +} + +var ( + _ appmodule.AppModule = appModule{ +} + _ module.HasServices = appModule{ +} +) + +/ BaseAppOption is a depinject.AutoGroupType which can be used 
to pass +/ BaseApp options into the depinject. It should be used carefully. +type BaseAppOption func(*baseapp.BaseApp) + +/ IsManyPerContainerType indicates that this is a depinject.ManyPerContainerType. +func (b BaseAppOption) + +IsManyPerContainerType() { +} + +func init() { + appmodule.Register(&runtimev1alpha1.Module{ +}, + appmodule.Provide( + ProvideApp, + ProvideInterfaceRegistry, + ProvideKVStoreKey, + ProvideTransientStoreKey, + ProvideMemoryStoreKey, + ProvideGenesisTxHandler, + ProvideKVStoreService, + ProvideMemoryStoreService, + ProvideTransientStoreService, + ProvideEventService, + ProvideHeaderInfoService, + ProvideCometInfoService, + ProvideBasicManager, + ProvideAddressCodec, + ), + appmodule.Invoke(SetupAppBuilder), + ) +} + +func ProvideApp(interfaceRegistry codectypes.InterfaceRegistry) ( + codec.Codec, + *codec.LegacyAmino, + *AppBuilder, + *baseapp.MsgServiceRouter, + *baseapp.GRPCQueryRouter, + appmodule.AppModule, + protodesc.Resolver, + protoregistry.MessageTypeResolver, + error, +) { + protoFiles := proto.HybridResolver + protoTypes := protoregistry.GlobalTypes + + / At startup, check that all proto annotations are correct. + if err := msgservice.ValidateProtoAnnotations(protoFiles); err != nil { + / Once we switch to using protoreflect-based ante handlers, we might + / want to panic here instead of logging a warning. 
+ _, _ = fmt.Fprintln(os.Stderr, err.Error()) +} + amino := codec.NewLegacyAmino() + +std.RegisterInterfaces(interfaceRegistry) + +std.RegisterLegacyAminoCodec(amino) + cdc := codec.NewProtoCodec(interfaceRegistry) + msgServiceRouter := baseapp.NewMsgServiceRouter() + grpcQueryRouter := baseapp.NewGRPCQueryRouter() + app := &App{ + storeKeys: nil, + interfaceRegistry: interfaceRegistry, + cdc: cdc, + amino: amino, + basicManager: module.BasicManager{ +}, + msgServiceRouter: msgServiceRouter, + grpcQueryRouter: grpcQueryRouter, +} + appBuilder := &AppBuilder{ + app +} + +return cdc, amino, appBuilder, msgServiceRouter, grpcQueryRouter, appModule{ + app +}, protoFiles, protoTypes, nil +} + +type AppInputs struct { + depinject.In + + AppConfig *appv1alpha1.Config `optional:"true"` + Config *runtimev1alpha1.Module + AppBuilder *AppBuilder + Modules map[string]appmodule.AppModule + CustomModuleBasics map[string]module.AppModuleBasic `optional:"true"` + BaseAppOptions []BaseAppOption + InterfaceRegistry codectypes.InterfaceRegistry + LegacyAmino *codec.LegacyAmino + Logger log.Logger +} + +func SetupAppBuilder(inputs AppInputs) { + app := inputs.AppBuilder.app + app.baseAppOptions = inputs.BaseAppOptions + app.config = inputs.Config + app.appConfig = inputs.AppConfig + app.logger = inputs.Logger + app.ModuleManager = module.NewManagerFromMap(inputs.Modules) + for name, mod := range inputs.Modules { + if customBasicMod, ok := inputs.CustomModuleBasics[name]; ok { + app.basicManager[name] = customBasicMod + customBasicMod.RegisterInterfaces(inputs.InterfaceRegistry) + +customBasicMod.RegisterLegacyAminoCodec(inputs.LegacyAmino) + +continue +} + coreAppModuleBasic := module.CoreAppModuleBasicAdaptor(name, mod) + +app.basicManager[name] = coreAppModuleBasic + coreAppModuleBasic.RegisterInterfaces(inputs.InterfaceRegistry) + +coreAppModuleBasic.RegisterLegacyAminoCodec(inputs.LegacyAmino) +} +} + +func ProvideInterfaceRegistry(addressCodec address.Codec, validatorAddressCodec 
ValidatorAddressCodec, customGetSigners []signing.CustomGetSigner) (codectypes.InterfaceRegistry, error) { + signingOptions := signing.Options{ + AddressCodec: addressCodec, + ValidatorAddressCodec: validatorAddressCodec, +} + for _, signer := range customGetSigners { + signingOptions.DefineCustomGetSigners(signer.MsgType, signer.Fn) +} + +interfaceRegistry, err := codectypes.NewInterfaceRegistryWithOptions(codectypes.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signingOptions, +}) + if err != nil { + return nil, err +} + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + return nil, err +} + +return interfaceRegistry, nil +} + +func registerStoreKey(wrapper *AppBuilder, key storetypes.StoreKey) { + wrapper.app.storeKeys = append(wrapper.app.storeKeys, key) +} + +func storeKeyOverride(config *runtimev1alpha1.Module, moduleName string) *runtimev1alpha1.StoreKeyConfig { + for _, cfg := range config.OverrideStoreKeys { + if cfg.ModuleName == moduleName { + return cfg +} + +} + +return nil +} + +func ProvideKVStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.KVStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + override := storeKeyOverride(config, key.Name()) + +var storeKeyName string + if override != nil { + storeKeyName = override.KvStoreKey +} + +else { + storeKeyName = key.Name() +} + storeKey := storetypes.NewKVStoreKey(storeKeyName) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideTransientStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) *storetypes.TransientStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewTransientStoreKey(fmt.Sprintf("transient:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideMemoryStoreKey(config *runtimev1alpha1.Module, key depinject.ModuleKey, app 
*AppBuilder) *storetypes.MemoryStoreKey { + if slices.Contains(config.SkipStoreKeys, key.Name()) { + return nil +} + storeKey := storetypes.NewMemoryStoreKey(fmt.Sprintf("memory:%s", key.Name())) + +registerStoreKey(app, storeKey) + +return storeKey +} + +func ProvideGenesisTxHandler(appBuilder *AppBuilder) + +genesis.TxHandler { + return appBuilder.app +} + +func ProvideKVStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.KVStoreService { + storeKey := ProvideKVStoreKey(config, key, app) + +return kvStoreService{ + key: storeKey +} +} + +func ProvideMemoryStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.MemoryStoreService { + storeKey := ProvideMemoryStoreKey(config, key, app) + +return memStoreService{ + key: storeKey +} +} + +func ProvideTransientStoreService(config *runtimev1alpha1.Module, key depinject.ModuleKey, app *AppBuilder) + +store.TransientStoreService { + storeKey := ProvideTransientStoreKey(config, key, app) + +return transientStoreService{ + key: storeKey +} +} + +func ProvideEventService() + +event.Service { + return EventService{ +} +} + +func ProvideCometInfoService() + +comet.BlockInfoService { + return cometInfoService{ +} +} + +func ProvideHeaderInfoService(app *AppBuilder) + +header.Service { + return headerInfoService{ +} +} + +func ProvideBasicManager(app *AppBuilder) + +module.BasicManager { + return app.app.basicManager +} + +type ( + / ValidatorAddressCodec is an alias for address.Codec for validator addresses. + ValidatorAddressCodec address.Codec + + / ConsensusAddressCodec is an alias for address.Codec for validator consensus addresses. 
+ ConsensusAddressCodec address.Codec +) + +type AddressCodecInputs struct { + depinject.In + + AuthConfig *authmodulev1.Module `optional:"true"` + StakingConfig *stakingmodulev1.Module `optional:"true"` + + AddressCodecFactory func() + +address.Codec `optional:"true"` + ValidatorAddressCodecFactory func() + +ValidatorAddressCodec `optional:"true"` + ConsensusAddressCodecFactory func() + +ConsensusAddressCodec `optional:"true"` +} + +/ ProvideAddressCodec provides an address.Codec to the container for any +/ modules that want to do address string <> bytes conversion.
+func ProvideAddressCodec(in AddressCodecInputs) (address.Codec, ValidatorAddressCodec, ConsensusAddressCodec) { + if in.AddressCodecFactory != nil && in.ValidatorAddressCodecFactory != nil && in.ConsensusAddressCodecFactory != nil { + return in.AddressCodecFactory(), in.ValidatorAddressCodecFactory(), in.ConsensusAddressCodecFactory() +} + if in.AuthConfig == nil || in.AuthConfig.Bech32Prefix == "" { + panic("auth config bech32 prefix cannot be empty if no custom address codec is provided") +} + if in.StakingConfig == nil { + in.StakingConfig = &stakingmodulev1.Module{ +} + +} + if in.StakingConfig.Bech32PrefixValidator == "" { + in.StakingConfig.Bech32PrefixValidator = fmt.Sprintf("%svaloper", in.AuthConfig.Bech32Prefix) +} + if in.StakingConfig.Bech32PrefixConsensus == "" { + in.StakingConfig.Bech32PrefixConsensus = fmt.Sprintf("%svalcons", in.AuthConfig.Bech32Prefix) +} + +return addresscodec.NewBech32Codec(in.AuthConfig.Bech32Prefix), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixValidator), + addresscodec.NewBech32Codec(in.StakingConfig.Bech32PrefixConsensus) +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/msg-services.mdx b/docs/sdk/v0.53/documentation/module-system/msg-services.mdx new file mode 100644 index 00000000..099a8c02 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/msg-services.mdx @@ -0,0 +1,3619 @@ +--- +title: "`Msg` Services" +--- + +## 
Synopsis + +A Protobuf `Msg` service processes [messages](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages). Protobuf `Msg` services are specific to the module in which they are defined, and only process messages defined within the said module. They are called from `BaseApp` during [`DeliverTx`](/docs/sdk/v0.53/documentation/application-framework/baseapp#delivertx). + + +**Pre-requisite Readings** + +- [Module Manager](/docs/sdk/v0.53/documentation/module-system/module-manager) +- [Messages and Queries](/docs/sdk/v0.53/documentation/module-system/messages-and-queries) + + + +## Implementation of a module `Msg` service + +Each module should define a Protobuf `Msg` service, which will be responsible for processing requests (implementing `sdk.Msg`) and returning responses. + +As further described in [ADR 031](docs/sdk/next/documentation/legacy/adr-comprehensive), this approach has the advantage of clearly specifying return types and generating server and client code. + +Protobuf generates a `MsgServer` interface based on a definition of `Msg` service. It is the role of the module developer to implement this interface, by implementing the state transition logic that should happen upon receival of each `sdk.Msg`. As an example, here is the generated `MsgServer` interface for `x/bank`, which exposes two `sdk.Msg`s: + +```go expandable +/ Code generated by protoc-gen-gogo. DO NOT EDIT. 
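+/
+/ The module developer implements the MsgServer interface generated in this
+/ file inside the module's keeper package. For x/bank, the implementation
+/ looks roughly like the following sketch (see x/bank/keeper/msg_server.go
+/ for the actual code; the msgServer receiver name is illustrative):
+/
+/   type msgServer struct {
+/       Keeper
+/   }
+/
+/   / NewMsgServerImpl returns an implementation of the bank MsgServer
+/   / interface for the provided Keeper.
+/   func NewMsgServerImpl(keeper Keeper) types.MsgServer {
+/       return &msgServer{Keeper: keeper}
+/   }
+/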
+/ source: cosmos/bank/v1beta1/tx.proto + +package types + +import ( + + context "context" + fmt "fmt" + _ "github.com/cosmos/cosmos-proto" + github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types" + types "github.com/cosmos/cosmos-sdk/types" + _ "github.com/cosmos/cosmos-sdk/types/msgservice" + _ "github.com/cosmos/cosmos-sdk/types/tx/amino" + _ "github.com/cosmos/gogoproto/gogoproto" + grpc1 "github.com/cosmos/gogoproto/grpc" + proto "github.com/cosmos/gogoproto/proto" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" + io "io" + math "math" + math_bits "math/bits" +) + +/ Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the proto package it is being compiled against. +/ A compilation error at this line likely means your copy of the +/ proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 / please upgrade the proto package + +/ MsgSend represents a message to send coins from one account to another. 
+type MsgSend struct { + FromAddress string `protobuf:"bytes,1,opt,name=from_address,json=fromAddress,proto3" json:"from_address,omitempty"` + ToAddress string `protobuf:"bytes,2,opt,name=to_address,json=toAddress,proto3" json:"to_address,omitempty"` + Amount github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=amount,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"amount"` +} + +func (m *MsgSend) + +Reset() { *m = MsgSend{ +} +} + +func (m *MsgSend) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSend) + +ProtoMessage() { +} + +func (*MsgSend) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{0 +} +} + +func (m *MsgSend) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSend) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSend.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSend) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSend.Merge(m, src) +} + +func (m *MsgSend) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSend) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSend.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSend proto.InternalMessageInfo + +/ MsgSendResponse defines the Msg/Send response type. 
+type MsgSendResponse struct { +} + +func (m *MsgSendResponse) + +Reset() { *m = MsgSendResponse{ +} +} + +func (m *MsgSendResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSendResponse) + +ProtoMessage() { +} + +func (*MsgSendResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{1 +} +} + +func (m *MsgSendResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSendResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSendResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSendResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSendResponse.Merge(m, src) +} + +func (m *MsgSendResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSendResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSendResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSendResponse proto.InternalMessageInfo + +/ MsgMultiSend represents an arbitrary multi-in, multi-out send message. +type MsgMultiSend struct { + / Inputs, despite being `repeated`, only allows one sender input. This is + / checked in MsgMultiSend's ValidateBasic. 
+ Inputs []Input `protobuf:"bytes,1,rep,name=inputs,proto3" json:"inputs"` + Outputs []Output `protobuf:"bytes,2,rep,name=outputs,proto3" json:"outputs"` +} + +func (m *MsgMultiSend) + +Reset() { *m = MsgMultiSend{ +} +} + +func (m *MsgMultiSend) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgMultiSend) + +ProtoMessage() { +} + +func (*MsgMultiSend) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{2 +} +} + +func (m *MsgMultiSend) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgMultiSend) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgMultiSend.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgMultiSend) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgMultiSend.Merge(m, src) +} + +func (m *MsgMultiSend) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgMultiSend) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgMultiSend.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgMultiSend proto.InternalMessageInfo + +func (m *MsgMultiSend) + +GetInputs() []Input { + if m != nil { + return m.Inputs +} + +return nil +} + +func (m *MsgMultiSend) + +GetOutputs() []Output { + if m != nil { + return m.Outputs +} + +return nil +} + +/ MsgMultiSendResponse defines the Msg/MultiSend response type. 
+type MsgMultiSendResponse struct { +} + +func (m *MsgMultiSendResponse) + +Reset() { *m = MsgMultiSendResponse{ +} +} + +func (m *MsgMultiSendResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgMultiSendResponse) + +ProtoMessage() { +} + +func (*MsgMultiSendResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{3 +} +} + +func (m *MsgMultiSendResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgMultiSendResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgMultiSendResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgMultiSendResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgMultiSendResponse.Merge(m, src) +} + +func (m *MsgMultiSendResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgMultiSendResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgMultiSendResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgMultiSendResponse proto.InternalMessageInfo + +/ MsgUpdateParams is the Msg/UpdateParams request type. +/ +/ Since: cosmos-sdk 0.47 +type MsgUpdateParams struct { + / authority is the address that controls the module (defaults to x/gov unless overwritten). + Authority string `protobuf:"bytes,1,opt,name=authority,proto3" json:"authority,omitempty"` + / params defines the x/bank parameters to update. + / + / NOTE: All parameters must be supplied. 
+ Params Params `protobuf:"bytes,2,opt,name=params,proto3" json:"params"` +} + +func (m *MsgUpdateParams) + +Reset() { *m = MsgUpdateParams{ +} +} + +func (m *MsgUpdateParams) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgUpdateParams) + +ProtoMessage() { +} + +func (*MsgUpdateParams) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{4 +} +} + +func (m *MsgUpdateParams) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgUpdateParams) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgUpdateParams.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgUpdateParams) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgUpdateParams.Merge(m, src) +} + +func (m *MsgUpdateParams) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgUpdateParams) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgUpdateParams.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgUpdateParams proto.InternalMessageInfo + +func (m *MsgUpdateParams) + +GetAuthority() + +string { + if m != nil { + return m.Authority +} + +return "" +} + +func (m *MsgUpdateParams) + +GetParams() + +Params { + if m != nil { + return m.Params +} + +return Params{ +} +} + +/ MsgUpdateParamsResponse defines the response structure for executing a +/ MsgUpdateParams message. 
+/ +/ Since: cosmos-sdk 0.47 +type MsgUpdateParamsResponse struct { +} + +func (m *MsgUpdateParamsResponse) + +Reset() { *m = MsgUpdateParamsResponse{ +} +} + +func (m *MsgUpdateParamsResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgUpdateParamsResponse) + +ProtoMessage() { +} + +func (*MsgUpdateParamsResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{5 +} +} + +func (m *MsgUpdateParamsResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgUpdateParamsResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgUpdateParamsResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgUpdateParamsResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgUpdateParamsResponse.Merge(m, src) +} + +func (m *MsgUpdateParamsResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgUpdateParamsResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgUpdateParamsResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgUpdateParamsResponse proto.InternalMessageInfo + +/ MsgSetSendEnabled is the Msg/SetSendEnabled request type. +/ +/ Only entries to add/update/delete need to be included. +/ Existing SendEnabled entries that are not included in this +/ message are left unchanged. +/ +/ Since: cosmos-sdk 0.47 +type MsgSetSendEnabled struct { + Authority string `protobuf:"bytes,1,opt,name=authority,proto3" json:"authority,omitempty"` + / send_enabled is the list of entries to add or update. + SendEnabled []*SendEnabled `protobuf:"bytes,2,rep,name=send_enabled,json=sendEnabled,proto3" json:"send_enabled,omitempty"` + / use_default_for is a list of denoms that should use the params.default_send_enabled value. 
+ / Denoms listed here will have their SendEnabled entries deleted. + / If a denom is included that doesn't have a SendEnabled entry, + / it will be ignored. + UseDefaultFor []string `protobuf:"bytes,3,rep,name=use_default_for,json=useDefaultFor,proto3" json:"use_default_for,omitempty"` +} + +func (m *MsgSetSendEnabled) + +Reset() { *m = MsgSetSendEnabled{ +} +} + +func (m *MsgSetSendEnabled) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSetSendEnabled) + +ProtoMessage() { +} + +func (*MsgSetSendEnabled) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{6 +} +} + +func (m *MsgSetSendEnabled) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSetSendEnabled) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSetSendEnabled.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSetSendEnabled) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSetSendEnabled.Merge(m, src) +} + +func (m *MsgSetSendEnabled) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSetSendEnabled) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSetSendEnabled.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSetSendEnabled proto.InternalMessageInfo + +func (m *MsgSetSendEnabled) + +GetAuthority() + +string { + if m != nil { + return m.Authority +} + +return "" +} + +func (m *MsgSetSendEnabled) + +GetSendEnabled() []*SendEnabled { + if m != nil { + return m.SendEnabled +} + +return nil +} + +func (m *MsgSetSendEnabled) + +GetUseDefaultFor() []string { + if m != nil { + return m.UseDefaultFor +} + +return nil +} + +/ MsgSetSendEnabledResponse defines the Msg/SetSendEnabled response type. 
+/ +/ Since: cosmos-sdk 0.47 +type MsgSetSendEnabledResponse struct { +} + +func (m *MsgSetSendEnabledResponse) + +Reset() { *m = MsgSetSendEnabledResponse{ +} +} + +func (m *MsgSetSendEnabledResponse) + +String() + +string { + return proto.CompactTextString(m) +} + +func (*MsgSetSendEnabledResponse) + +ProtoMessage() { +} + +func (*MsgSetSendEnabledResponse) + +Descriptor() ([]byte, []int) { + return fileDescriptor_1d8cb1613481f5b7, []int{7 +} +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Unmarshal(b []byte) + +error { + return m.Unmarshal(b) +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MsgSetSendEnabledResponse.Marshal(b, m, deterministic) +} + +else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err +} + +return b[:n], nil +} +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Merge(src proto.Message) { + xxx_messageInfo_MsgSetSendEnabledResponse.Merge(m, src) +} + +func (m *MsgSetSendEnabledResponse) + +XXX_Size() + +int { + return m.Size() +} + +func (m *MsgSetSendEnabledResponse) + +XXX_DiscardUnknown() { + xxx_messageInfo_MsgSetSendEnabledResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_MsgSetSendEnabledResponse proto.InternalMessageInfo + +func init() { + proto.RegisterType((*MsgSend)(nil), "cosmos.bank.v1beta1.MsgSend") + +proto.RegisterType((*MsgSendResponse)(nil), "cosmos.bank.v1beta1.MsgSendResponse") + +proto.RegisterType((*MsgMultiSend)(nil), "cosmos.bank.v1beta1.MsgMultiSend") + +proto.RegisterType((*MsgMultiSendResponse)(nil), "cosmos.bank.v1beta1.MsgMultiSendResponse") + +proto.RegisterType((*MsgUpdateParams)(nil), "cosmos.bank.v1beta1.MsgUpdateParams") + +proto.RegisterType((*MsgUpdateParamsResponse)(nil), "cosmos.bank.v1beta1.MsgUpdateParamsResponse") + +proto.RegisterType((*MsgSetSendEnabled)(nil), "cosmos.bank.v1beta1.MsgSetSendEnabled") + +proto.RegisterType((*MsgSetSendEnabledResponse)(nil), 
"cosmos.bank.v1beta1.MsgSetSendEnabledResponse") +} + +func init() { + proto.RegisterFile("cosmos/bank/v1beta1/tx.proto", fileDescriptor_1d8cb1613481f5b7) +} + +var fileDescriptor_1d8cb1613481f5b7 = []byte{ + / 700 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x54, 0xcf, 0x4f, 0xd3, 0x50, + 0x1c, 0x5f, 0x99, 0x8e, 0xec, 0x31, 0x25, 0x54, 0x22, 0xac, 0x90, 0x0e, 0x16, 0x43, 0x00, 0xa5, + 0x15, 0x34, 0x9a, 0xcc, 0x68, 0x74, 0x28, 0x89, 0x26, 0x8b, 0x66, 0xc4, 0x83, 0x5e, 0x96, 0xd7, + 0xf5, 0x51, 0x1a, 0xd6, 0xbe, 0xa6, 0xef, 0x95, 0xb0, 0x9b, 0x7a, 0x32, 0x9e, 0x3c, 0x7b, 0xe2, + 0x68, 0x8c, 0x07, 0x0e, 0x1e, 0x4d, 0xbc, 0x72, 0x24, 0x9e, 0x3c, 0xa9, 0x81, 0x03, 0xfa, 0x5f, + 0x98, 0xf7, 0xa3, 0xa5, 0x8c, 0x8d, 0x11, 0x2f, 0x6b, 0xf7, 0x3e, 0x3f, 0xbe, 0xef, 0xf3, 0xed, + 0xf7, 0x3d, 0x30, 0xd9, 0xc4, 0xc4, 0xc3, 0xc4, 0xb4, 0xa0, 0xbf, 0x61, 0x6e, 0x2e, 0x5a, 0x88, + 0xc2, 0x45, 0x93, 0x6e, 0x19, 0x41, 0x88, 0x29, 0x56, 0x2f, 0x09, 0xd4, 0x60, 0xa8, 0x21, 0x51, + 0x6d, 0xd4, 0xc1, 0x0e, 0xe6, 0xb8, 0xc9, 0xde, 0x04, 0x55, 0xd3, 0x13, 0x23, 0x82, 0x12, 0xa3, + 0x26, 0x76, 0xfd, 0x13, 0x78, 0xaa, 0x10, 0xf7, 0x15, 0x78, 0x51, 0xe0, 0x0d, 0x61, 0x2c, 0xeb, + 0x0a, 0x68, 0x4c, 0x4a, 0x3d, 0xe2, 0x98, 0x9b, 0x8b, 0xec, 0x21, 0x81, 0x11, 0xe8, 0xb9, 0x3e, + 0x36, 0xf9, 0xaf, 0x58, 0x2a, 0x7f, 0x1e, 0x00, 0x83, 0x35, 0xe2, 0xac, 0x22, 0xdf, 0x56, 0xef, + 0x80, 0xc2, 0x5a, 0x88, 0xbd, 0x06, 0xb4, 0xed, 0x10, 0x11, 0x32, 0xae, 0x4c, 0x29, 0xb3, 0xf9, + 0xea, 0xf8, 0xf7, 0x2f, 0x0b, 0xa3, 0xd2, 0xff, 0x81, 0x40, 0x56, 0x69, 0xe8, 0xfa, 0x4e, 0x7d, + 0x88, 0xb1, 0xe5, 0x92, 0x7a, 0x1b, 0x00, 0x8a, 0x13, 0xe9, 0x40, 0x1f, 0x69, 0x9e, 0xe2, 0x58, + 0xd8, 0x06, 0x39, 0xe8, 0xe1, 0xc8, 0xa7, 0xe3, 0xd9, 0xa9, 0xec, 0xec, 0xd0, 0x52, 0xd1, 0x48, + 0x9a, 0x48, 0x50, 0xdc, 0x44, 0x63, 0x19, 0xbb, 0x7e, 0x75, 0x65, 0xf7, 0x67, 0x29, 0xf3, 0xe9, + 0x57, 0x69, 0xd6, 0x71, 0xe9, 0x7a, 0x64, 0x19, 0x4d, 0xec, 0xc9, 0xe4, 0xf2, 
0xb1, 0x40, 0xec, + 0x0d, 0x93, 0xb6, 0x03, 0x44, 0xb8, 0x80, 0x7c, 0x38, 0xdc, 0x99, 0x2f, 0xb4, 0x90, 0x03, 0x9b, + 0xed, 0x06, 0xeb, 0x2d, 0xf9, 0x78, 0xb8, 0x33, 0xaf, 0xd4, 0x65, 0xc1, 0xca, 0xf5, 0xb7, 0xdb, + 0xa5, 0xcc, 0x9f, 0xed, 0x52, 0xe6, 0x0d, 0xe3, 0xa5, 0xb3, 0xbf, 0x3b, 0xdc, 0x99, 0x57, 0x53, + 0x9e, 0xb2, 0x45, 0xe5, 0x11, 0x30, 0x2c, 0x5f, 0xeb, 0x88, 0x04, 0xd8, 0x27, 0xa8, 0xfc, 0x55, + 0x01, 0x85, 0x1a, 0x71, 0x6a, 0x51, 0x8b, 0xba, 0xbc, 0x8d, 0x77, 0x41, 0xce, 0xf5, 0x83, 0x88, + 0xb2, 0x06, 0xb2, 0x40, 0x9a, 0xd1, 0x65, 0x2a, 0x8c, 0xc7, 0x8c, 0x52, 0xcd, 0xb3, 0x44, 0x72, + 0x53, 0x42, 0xa4, 0xde, 0x07, 0x83, 0x38, 0xa2, 0x5c, 0x3f, 0xc0, 0xf5, 0x13, 0x5d, 0xf5, 0x4f, + 0x39, 0x27, 0x6d, 0x10, 0xcb, 0x2a, 0x57, 0xe3, 0x48, 0xd2, 0x92, 0x85, 0x19, 0x3b, 0x1e, 0x26, + 0xd9, 0x6d, 0xf9, 0x32, 0x18, 0x4d, 0xff, 0x4f, 0x62, 0x7d, 0x53, 0x78, 0xd4, 0xe7, 0x81, 0x0d, + 0x29, 0x7a, 0x06, 0x43, 0xe8, 0x11, 0xf5, 0x16, 0xc8, 0xc3, 0x88, 0xae, 0xe3, 0xd0, 0xa5, 0xed, + 0xbe, 0xd3, 0x71, 0x44, 0x55, 0xef, 0x81, 0x5c, 0xc0, 0x1d, 0xf8, 0x5c, 0xf4, 0x4a, 0x24, 0x8a, + 0x1c, 0x6b, 0x89, 0x50, 0x55, 0x6e, 0xb2, 0x30, 0x47, 0x7e, 0x2c, 0xcf, 0x74, 0x2a, 0xcf, 0x96, + 0x38, 0x24, 0x1d, 0xbb, 0x2d, 0x17, 0xc1, 0x58, 0xc7, 0x52, 0x12, 0xee, 0xaf, 0x02, 0x46, 0xf8, + 0x77, 0xa4, 0x2c, 0xf3, 0x23, 0x1f, 0x5a, 0x2d, 0x64, 0xff, 0x77, 0xbc, 0x65, 0x50, 0x20, 0xc8, + 0xb7, 0x1b, 0x48, 0xf8, 0xc8, 0xcf, 0x36, 0xd5, 0x35, 0x64, 0xaa, 0x5e, 0x7d, 0x88, 0xa4, 0x8a, + 0xcf, 0x80, 0xe1, 0x88, 0xa0, 0x86, 0x8d, 0xd6, 0x60, 0xd4, 0xa2, 0x8d, 0x35, 0x1c, 0xf2, 0xf3, + 0x90, 0xaf, 0x5f, 0x88, 0x08, 0x7a, 0x28, 0x56, 0x57, 0x70, 0x58, 0x31, 0x4f, 0xf6, 0x62, 0xb2, + 0x73, 0x50, 0xd3, 0xa9, 0xca, 0x13, 0xa0, 0x78, 0x62, 0x31, 0x6e, 0xc4, 0xd2, 0xeb, 0x2c, 0xc8, + 0xd6, 0x88, 0xa3, 0x3e, 0x01, 0xe7, 0xf8, 0xec, 0x4e, 0x76, 0xdd, 0xb4, 0x1c, 0x79, 0xed, 0xca, + 0x69, 0x68, 0xec, 0xa9, 0xbe, 0x00, 0xf9, 0xa3, 0xc3, 0x30, 0xdd, 0x4b, 0x92, 0x50, 0xb4, 0xb9, + 0xbe, 0x94, 0xc4, 
0xda, 0x02, 0x85, 0x63, 0x03, 0xd9, 0x73, 0x43, 0x69, 0x96, 0x76, 0xed, 0x2c, + 0xac, 0xa4, 0xc6, 0x3a, 0xb8, 0xd8, 0x31, 0x17, 0x33, 0xbd, 0x63, 0xa7, 0x79, 0x9a, 0x71, 0x36, + 0x5e, 0x5c, 0x49, 0x3b, 0xff, 0x8a, 0x4d, 0x79, 0x75, 0x79, 0x77, 0x5f, 0x57, 0xf6, 0xf6, 0x75, + 0xe5, 0xf7, 0xbe, 0xae, 0xbc, 0x3f, 0xd0, 0x33, 0x7b, 0x07, 0x7a, 0xe6, 0xc7, 0x81, 0x9e, 0x79, + 0x39, 0x77, 0xea, 0x3d, 0x27, 0xc7, 0x9e, 0x5f, 0x77, 0x56, 0x8e, 0x5f, 0xe7, 0x37, 0xfe, 0x05, + 0x00, 0x00, 0xff, 0xff, 0x5b, 0x5b, 0x43, 0xa9, 0xa0, 0x06, 0x00, 0x00, +} + +/ Reference imports to suppress errors if they are not otherwise used. +var _ context.Context +var _ grpc.ClientConn + +/ This is a compile-time assertion to ensure that this generated file +/ is compatible with the grpc package it is being compiled against. +const _ = grpc.SupportPackageIsVersion4 + +/ MsgClient is the client API for Msg service. +/ +/ For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. +type MsgClient interface { + / Send defines a method for sending coins from one account to another account. + Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error) + / MultiSend defines a method for sending coins from some accounts to other accounts. + MultiSend(ctx context.Context, in *MsgMultiSend, opts ...grpc.CallOption) (*MsgMultiSendResponse, error) + / UpdateParams defines a governance operation for updating the x/bank module parameters. + / The authority is defined in the keeper. + / + / Since: cosmos-sdk 0.47 + UpdateParams(ctx context.Context, in *MsgUpdateParams, opts ...grpc.CallOption) (*MsgUpdateParamsResponse, error) + / SetSendEnabled is a governance operation for setting the SendEnabled flag + / on any number of Denoms. Only the entries to add or update should be + / included. 
Entries that already exist in the store, but that aren't + / included in this message, will be left unchanged. + / + / Since: cosmos-sdk 0.47 + SetSendEnabled(ctx context.Context, in *MsgSetSendEnabled, opts ...grpc.CallOption) (*MsgSetSendEnabledResponse, error) +} + +type msgClient struct { + cc grpc1.ClientConn +} + +func NewMsgClient(cc grpc1.ClientConn) + +MsgClient { + return &msgClient{ + cc +} +} + +func (c *msgClient) + +Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error) { + out := new(MsgSendResponse) + err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/Send", in, out, opts...) + if err != nil { + return nil, err +} + +return out, nil +} + +func (c *msgClient) + +MultiSend(ctx context.Context, in *MsgMultiSend, opts ...grpc.CallOption) (*MsgMultiSendResponse, error) { + out := new(MsgMultiSendResponse) + err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/MultiSend", in, out, opts...) + if err != nil { + return nil, err +} + +return out, nil +} + +func (c *msgClient) + +UpdateParams(ctx context.Context, in *MsgUpdateParams, opts ...grpc.CallOption) (*MsgUpdateParamsResponse, error) { + out := new(MsgUpdateParamsResponse) + err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/UpdateParams", in, out, opts...) + if err != nil { + return nil, err +} + +return out, nil +} + +func (c *msgClient) + +SetSendEnabled(ctx context.Context, in *MsgSetSendEnabled, opts ...grpc.CallOption) (*MsgSetSendEnabledResponse, error) { + out := new(MsgSetSendEnabledResponse) + err := c.cc.Invoke(ctx, "/cosmos.bank.v1beta1.Msg/SetSendEnabled", in, out, opts...) + if err != nil { + return nil, err +} + +return out, nil +} + +/ MsgServer is the server API for Msg service. +type MsgServer interface { + / Send defines a method for sending coins from one account to another account. + Send(context.Context, *MsgSend) (*MsgSendResponse, error) + / MultiSend defines a method for sending coins from some accounts to other accounts. 
+ MultiSend(context.Context, *MsgMultiSend) (*MsgMultiSendResponse, error) + / UpdateParams defines a governance operation for updating the x/bank module parameters. + / The authority is defined in the keeper. + / + / Since: cosmos-sdk 0.47 + UpdateParams(context.Context, *MsgUpdateParams) (*MsgUpdateParamsResponse, error) + / SetSendEnabled is a governance operation for setting the SendEnabled flag + / on any number of Denoms. Only the entries to add or update should be + / included. Entries that already exist in the store, but that aren't + / included in this message, will be left unchanged. + / + / Since: cosmos-sdk 0.47 + SetSendEnabled(context.Context, *MsgSetSendEnabled) (*MsgSetSendEnabledResponse, error) +} + +/ UnimplementedMsgServer can be embedded to have forward compatible implementations. +type UnimplementedMsgServer struct { +} + +func (*UnimplementedMsgServer) + +Send(ctx context.Context, req *MsgSend) (*MsgSendResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method Send not implemented") +} + +func (*UnimplementedMsgServer) + +MultiSend(ctx context.Context, req *MsgMultiSend) (*MsgMultiSendResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method MultiSend not implemented") +} + +func (*UnimplementedMsgServer) + +UpdateParams(ctx context.Context, req *MsgUpdateParams) (*MsgUpdateParamsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpdateParams not implemented") +} + +func (*UnimplementedMsgServer) + +SetSendEnabled(ctx context.Context, req *MsgSetSendEnabled) (*MsgSetSendEnabledResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method SetSendEnabled not implemented") +} + +func RegisterMsgServer(s grpc1.Server, srv MsgServer) { + s.RegisterService(&_Msg_serviceDesc, srv) +} + +func _Msg_Send_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgSend) + if err 
:= dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).Send(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/Send", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).Send(ctx, req.(*MsgSend)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_MultiSend_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgMultiSend) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).MultiSend(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/MultiSend", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).MultiSend(ctx, req.(*MsgMultiSend)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_UpdateParams_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgUpdateParams) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).UpdateParams(ctx, in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/UpdateParams", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).UpdateParams(ctx, req.(*MsgUpdateParams)) +} + +return interceptor(ctx, in, info, handler) +} + +func _Msg_SetSendEnabled_Handler(srv interface{ +}, ctx context.Context, dec func(interface{ +}) + +error, interceptor grpc.UnaryServerInterceptor) (interface{ +}, error) { + in := new(MsgSetSendEnabled) + if err := dec(in); err != nil { + return nil, err +} + if interceptor == nil { + return srv.(MsgServer).SetSendEnabled(ctx, 
in) +} + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/cosmos.bank.v1beta1.Msg/SetSendEnabled", +} + handler := func(ctx context.Context, req interface{ +}) (interface{ +}, error) { + return srv.(MsgServer).SetSendEnabled(ctx, req.(*MsgSetSendEnabled)) +} + +return interceptor(ctx, in, info, handler) +} + +var _Msg_serviceDesc = grpc.ServiceDesc{ + ServiceName: "cosmos.bank.v1beta1.Msg", + HandlerType: (*MsgServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "Send", + Handler: _Msg_Send_Handler, +}, + { + MethodName: "MultiSend", + Handler: _Msg_MultiSend_Handler, +}, + { + MethodName: "UpdateParams", + Handler: _Msg_UpdateParams_Handler, +}, + { + MethodName: "SetSendEnabled", + Handler: _Msg_SetSendEnabled_Handler, +}, +}, + Streams: []grpc.StreamDesc{ +}, + Metadata: "cosmos/bank/v1beta1/tx.proto", +} + +func (m *MsgSend) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSend) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSend) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Amount) > 0 { + for iNdEx := len(m.Amount) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Amount[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x1a +} + +} + if len(m.ToAddress) > 0 { + i -= len(m.ToAddress) + +copy(dAtA[i:], m.ToAddress) + +i = encodeVarintTx(dAtA, i, uint64(len(m.ToAddress))) + +i-- + dAtA[i] = 0x12 +} + if len(m.FromAddress) > 0 { + i -= len(m.FromAddress) + +copy(dAtA[i:], m.FromAddress) + +i = encodeVarintTx(dAtA, i, uint64(len(m.FromAddress))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgSendResponse) + 
+Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSendResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSendResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgMultiSend) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgMultiSend) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgMultiSend) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Outputs) > 0 { + for iNdEx := len(m.Outputs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Outputs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 +} + +} + if len(m.Inputs) > 0 { + for iNdEx := len(m.Inputs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Inputs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0xa +} + +} + +return len(dAtA) - i, nil +} + +func (m *MsgMultiSendResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgMultiSendResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgMultiSendResponse) + 
+MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgUpdateParams) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgUpdateParams) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgUpdateParams) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + { + size, err := m.Params.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 + if len(m.Authority) > 0 { + i -= len(m.Authority) + +copy(dAtA[i:], m.Authority) + +i = encodeVarintTx(dAtA, i, uint64(len(m.Authority))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgUpdateParamsResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgUpdateParamsResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgUpdateParamsResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *MsgSetSendEnabled) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSetSendEnabled) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSetSendEnabled) + +MarshalToSizedBuffer(dAtA []byte) (int, 
error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.UseDefaultFor) > 0 { + for iNdEx := len(m.UseDefaultFor) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.UseDefaultFor[iNdEx]) + +copy(dAtA[i:], m.UseDefaultFor[iNdEx]) + +i = encodeVarintTx(dAtA, i, uint64(len(m.UseDefaultFor[iNdEx]))) + +i-- + dAtA[i] = 0x1a +} + +} + if len(m.SendEnabled) > 0 { + for iNdEx := len(m.SendEnabled) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.SendEnabled[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err +} + +i -= size + i = encodeVarintTx(dAtA, i, uint64(size)) +} + +i-- + dAtA[i] = 0x12 +} + +} + if len(m.Authority) > 0 { + i -= len(m.Authority) + +copy(dAtA[i:], m.Authority) + +i = encodeVarintTx(dAtA, i, uint64(len(m.Authority))) + +i-- + dAtA[i] = 0xa +} + +return len(dAtA) - i, nil +} + +func (m *MsgSetSendEnabledResponse) + +Marshal() (dAtA []byte, err error) { + size := m.Size() + +dAtA = make([]byte, size) + +n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err +} + +return dAtA[:n], nil +} + +func (m *MsgSetSendEnabledResponse) + +MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + +return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MsgSetSendEnabledResponse) + +MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func encodeVarintTx(dAtA []byte, offset int, v uint64) + +int { + offset -= sovTx(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + +v >>= 7 + offset++ +} + +dAtA[offset] = uint8(v) + +return base +} + +func (m *MsgSend) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.FromAddress) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + +l = len(m.ToAddress) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + if len(m.Amount) > 0 { + for _, e := range m.Amount { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgSendResponse) + 
+Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgMultiSend) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + if len(m.Inputs) > 0 { + for _, e := range m.Inputs { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + if len(m.Outputs) > 0 { + for _, e := range m.Outputs { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgMultiSendResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgUpdateParams) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Authority) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + +l = m.Params.Size() + +n += 1 + l + sovTx(uint64(l)) + +return n +} + +func (m *MsgUpdateParamsResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func (m *MsgSetSendEnabled) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + l = len(m.Authority) + if l > 0 { + n += 1 + l + sovTx(uint64(l)) +} + if len(m.SendEnabled) > 0 { + for _, e := range m.SendEnabled { + l = e.Size() + +n += 1 + l + sovTx(uint64(l)) +} + +} + if len(m.UseDefaultFor) > 0 { + for _, s := range m.UseDefaultFor { + l = len(s) + +n += 1 + l + sovTx(uint64(l)) +} + +} + +return n +} + +func (m *MsgSetSendEnabledResponse) + +Size() (n int) { + if m == nil { + return 0 +} + +var l int + _ = l + return n +} + +func sovTx(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} + +func sozTx(x uint64) (n int) { + return sovTx(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +func (m *MsgSend) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + 
break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSend: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSend: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FromAddress", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.FromAddress = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ToAddress", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.ToAddress = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Amount", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break 
+} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Amount = append(m.Amount, types.Coin{ +}) + if err := m.Amount[len(m.Amount)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSendResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSendResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSendResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgMultiSend) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := 
dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgMultiSend: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgMultiSend: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Inputs", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Inputs = append(m.Inputs, Input{ +}) + if err := m.Inputs[len(m.Inputs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Outputs", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Outputs = append(m.Outputs, Output{ +}) + if err := m.Outputs[len(m.Outputs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + 
skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgMultiSendResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgMultiSendResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgMultiSendResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgUpdateParams) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgUpdateParams: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgUpdateParams: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Authority", wireType) +} + +var stringLen uint64 + for 
shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Authority = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Params", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + if err := m.Params.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgUpdateParamsResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return 
fmt.Errorf("proto: MsgUpdateParamsResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgUpdateParamsResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSetSendEnabled) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSetSendEnabled: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSetSendEnabled: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Authority", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.Authority = string(dAtA[iNdEx:postIndex]) + +iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field SendEnabled", wireType) +} + +var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break +} + +} + if msglen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.SendEnabled = append(m.SendEnabled, &SendEnabled{ +}) + if err := m.SendEnabled[len(m.SendEnabled)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err +} + +iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UseDefaultFor", wireType) +} + +var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTx +} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTx +} + if postIndex > l { + return io.ErrUnexpectedEOF +} + +m.UseDefaultFor = append(m.UseDefaultFor, string(dAtA[iNdEx:postIndex])) + +iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func (m *MsgSetSendEnabledResponse) + +Unmarshal(dAtA []byte) + +error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTx +} + if iNdEx >= l { + return io.ErrUnexpectedEOF +} + b := 
dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break +} + +} + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MsgSetSendEnabledResponse: wiretype end group for non-group") +} + if fieldNum <= 0 { + return fmt.Errorf("proto: MsgSetSendEnabledResponse: illegal tag %d (wire type %d)", fieldNum, wire) +} + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipTx(dAtA[iNdEx:]) + if err != nil { + return err +} + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTx +} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF +} + +iNdEx += skippy +} + +} + if iNdEx > l { + return io.ErrUnexpectedEOF +} + +return nil +} + +func skipTx(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + +iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break +} + +} + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowTx +} + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF +} + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break +} + +} + if length < 0 { + return 0, ErrInvalidLengthTx +} + +iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupTx +} + +depth-- + case 5: + iNdEx += 4 + default: + return 0, fmt.Errorf("proto: illegal wireType %d", wireType) +} + if iNdEx < 0 { + return 0, ErrInvalidLengthTx +} + if depth == 0 { + 
return iNdEx, nil +} + +} + +return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthTx = fmt.Errorf("proto: negative length found during unmarshaling") + +ErrIntOverflowTx = fmt.Errorf("proto: integer overflow") + +ErrUnexpectedEndOfGroupTx = fmt.Errorf("proto: unexpected end of group") +) +``` + +When possible, the existing module's [`Keeper`](/docs/sdk/v0.53/documentation/module-system/keeper) should implement `MsgServer`, otherwise a `msgServer` struct that embeds the `Keeper` can be created, typically in `./keeper/msg_server.go`: + +```go expandable +package keeper + +import ( + + "context" + "github.com/armon/go-metrics" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +type msgServer struct { + Keeper +} + +var _ types.MsgServer = msgServer{ +} + +/ NewMsgServerImpl returns an implementation of the bank MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &msgServer{ + Keeper: keeper +} +} + +func (k msgServer) + +Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + var ( + from, to []byte + err error + ) + if base, ok := k.Keeper.(BaseKeeper); ok { + from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) +} + +to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + if !msg.Amount.IsValid() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + if !msg.Amount.IsAllPositive() { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if k.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + +err = k.SendCoins(ctx, from, to, msg.Amount) + if err != nil { + return nil, err +} + +defer func() { + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "send" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + +return &types.MsgSendResponse{ +}, nil +} + +func (k msgServer) + +MultiSend(goCtx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { + if len(msg.Inputs) == 0 { + return nil, types.ErrNoInputs +} + if len(msg.Inputs) != 1 { + return nil, types.ErrMultipleSenders +} + if len(msg.Outputs) == 0 { + return nil, types.ErrNoOutputs +} + if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err 
!= nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + + / NOTE: totalIn == totalOut should already have been checked + for _, in := range msg.Inputs { + if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { + return nil, err +} + +} + for _, out := range msg.Outputs { + if base, ok := k.Keeper.(BaseKeeper); ok { + accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) + if err != nil { + return nil, err +} + if k.BlockedAddr(accAddr) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) +} + +} + +else { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) +} + +} + err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) + if err != nil { + return nil, err +} + +return &types.MsgMultiSendResponse{ +}, nil +} + +func (k msgServer) + +UpdateParams(goCtx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + if k.GetAuthority() != req.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) +} + if err := req.Params.Validate(); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := k.SetParams(ctx, req.Params); err != nil { + return nil, err +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} + +func (k msgServer) + +SetSendEnabled(goCtx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { + if k.GetAuthority() != msg.Authority { + return nil, errorsmod.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) +} + seen := map[string]bool{ +} + for _, se := range msg.SendEnabled { + if _, alreadySeen := seen[se.Denom]; alreadySeen { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) +} + +seen[se.Denom] = true + if err := se.Validate(); err != nil { + return 
nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err) +} + +} + for _, denom := range msg.UseDefaultFor { + if err := sdk.ValidateDenom(denom); err != nil { + return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err) +} + +} + ctx := sdk.UnwrapSDKContext(goCtx) + if len(msg.SendEnabled) > 0 { + k.SetAllSendEnabled(ctx, msg.SendEnabled) +} + if len(msg.UseDefaultFor) > 0 { + k.DeleteSendEnabled(ctx, msg.UseDefaultFor...) +} + +return &types.MsgSetSendEnabledResponse{ +}, nil +} +``` + +`msgServer` methods can retrieve the `sdk.Context` from the `context.Context` parameter using `sdk.UnwrapSDKContext`: + +```go +func (k msgServer) Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + / ... validation ... + ctx := sdk.UnwrapSDKContext(goCtx) + / ... state transition using ctx ... +} +``` + +`sdk.Msg` processing usually follows these three steps: + +### Validation + +The message server must perform all required validation (both _stateful_ and _stateless_) to ensure the `message` is valid. The `signer` is charged for the gas cost of this validation. + +For example, a `msgServer` method for a `transfer` message should check that the sending account has enough funds to actually perform the transfer. + +It is recommended to implement all validation checks in a separate function that receives state values as arguments; this simplifies testing. As expected, expensive validation functions charge additional gas. Example: + +```go +func ValidateMsgA(msg MsgA, now Time, gm GasMeter) error { + if now.Before(msg.Expire) { + return sdkerrors.ErrInvalidRequest.Wrap("msg expired") +} + +gm.ConsumeGas(1000, "signature verification") + +return signatureVerification(msg.Prover, msg.Data) +} +``` + + + Previously, the `ValidateBasic` method was used to perform simple and + stateless validation checks. This way of validating is deprecated, which means + the `msgServer` must perform all validation checks. + + +### State Transition + +After validation succeeds, the `msgServer` method uses the [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper) functions to access the state and perform a state transition.
+ +### Events + +Before returning, `msgServer` methods generally emit one or more [events](/docs/sdk/v0.53/api-reference/events-streaming/events) by using the `EventManager` held in the `ctx`. Use the new `EmitTypedEvent` function that uses protobuf-based event types: + +```go +ctx.EventManager().EmitTypedEvent( + &group.EventABC{ + Key1: Value1, Key2: Value2 +}) +``` + +or the older `EmitEvent` function: + +```go +ctx.EventManager().EmitEvent( + sdk.NewEvent( + eventType, / e.g. sdk.EventTypeMessage for a message, types.CustomEventType for a custom event defined in the module + sdk.NewAttribute(key1, value1), + sdk.NewAttribute(key2, value2), + ), +) +``` + +These events are relayed back to the underlying consensus engine and can be used by service providers to implement services around the application. Click [here](/docs/sdk/v0.53/api-reference/events-streaming/events) to learn more about events. + +The invoked `msgServer` method returns a `proto.Message` response and an `error`. These return values are then wrapped into an `*sdk.Result` or an `error` using `sdk.WrapServiceResult(ctx context.Context, res proto.Message, err error)`: + +```go expandable +package baseapp + +import ( + + "context" + "fmt" + + gogogrpc "github.com/cosmos/gogoproto/grpc" + "github.com/cosmos/gogoproto/proto" + "google.golang.org/grpc" + + errorsmod "cosmossdk.io/errors" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +/ MessageRouter ADR 031 request type routing +/ docs/sdk/next/documentation/legacy/adr-comprehensive +type MessageRouter interface { + Handler(msg sdk.Msg) + +MsgServiceHandler + HandlerByTypeURL(typeURL string) + +MsgServiceHandler +} + +/ MsgServiceRouter routes fully-qualified Msg service methods to their handler.
+type MsgServiceRouter struct { + interfaceRegistry codectypes.InterfaceRegistry + routes map[string]MsgServiceHandler + circuitBreaker CircuitBreaker +} + +var _ gogogrpc.Server = &MsgServiceRouter{ +} + +/ NewMsgServiceRouter creates a new MsgServiceRouter. +func NewMsgServiceRouter() *MsgServiceRouter { + return &MsgServiceRouter{ + routes: map[string]MsgServiceHandler{ +}, +} +} + +func (msr *MsgServiceRouter) + +SetCircuit(cb CircuitBreaker) { + msr.circuitBreaker = cb +} + +/ MsgServiceHandler defines a function type which handles Msg service message. +type MsgServiceHandler = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) + +/ Handler returns the MsgServiceHandler for a given msg or nil if not found. +func (msr *MsgServiceRouter) + +Handler(msg sdk.Msg) + +MsgServiceHandler { + return msr.routes[sdk.MsgTypeURL(msg)] +} + +/ HandlerByTypeURL returns the MsgServiceHandler for a given query route path or nil +/ if not found. +func (msr *MsgServiceRouter) + +HandlerByTypeURL(typeURL string) + +MsgServiceHandler { + return msr.routes[typeURL] +} + +/ RegisterService implements the gRPC Server.RegisterService method. sd is a gRPC +/ service description, handler is an object which implements that gRPC service. +/ +/ This function PANICs: +/ - if it is called before the service `Msg`s have been registered using +/ RegisterInterfaces, +/ - or if a service is being registered twice. +func (msr *MsgServiceRouter) + +RegisterService(sd *grpc.ServiceDesc, handler interface{ +}) { + / Adds a top-level query handler based on the gRPC service name. + for _, method := range sd.Methods { + fqMethod := fmt.Sprintf("/%s/%s", sd.ServiceName, method.MethodName) + methodHandler := method.Handler + + var requestTypeName string + + / NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. + / This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. 
+ / We use a no-op interceptor to avoid actually calling into the handler itself. + _, _ = methodHandler(nil, context.Background(), func(i interface{ +}) + +error { + msg, ok := i.(sdk.Msg) + if !ok { + / We panic here because there is no other alternative and the app cannot be initialized correctly + / this should only happen if there is a problem with code generation in which case the app won't + / work correctly anyway. + panic(fmt.Errorf("unable to register service method %s: %T does not implement sdk.Msg", fqMethod, i)) +} + +requestTypeName = sdk.MsgTypeURL(msg) + +return nil +}, noopInterceptor) + + / Check that the service Msg fully-qualified method name has already + / been registered (via RegisterInterfaces). If the user registers a + / service without registering according service Msg type, there might be + / some unexpected behavior down the road. Since we can't return an error + / (`Server.RegisterService` interface restriction) + +we panic (at startup). + reqType, err := msr.interfaceRegistry.Resolve(requestTypeName) + if err != nil || reqType == nil { + panic( + fmt.Errorf( + "type_url %s has not been registered yet. "+ + "Before calling RegisterService, you must register all interfaces by calling the `RegisterInterfaces` "+ + "method on module.BasicManager. Each module should call `msgservice.RegisterMsgServiceDesc` inside its "+ + "`RegisterInterfaces` method with the `_Msg_serviceDesc` generated by proto-gen", + requestTypeName, + ), + ) +} + + / Check that each service is only registered once. If a service is + / registered more than once, then we should error. Since we can't + / return an error (`Server.RegisterService` interface restriction) + +we + / panic (at startup). + _, found := msr.routes[requestTypeName] + if found { + panic( + fmt.Errorf( + "msg service %s has already been registered. Please make sure to only register each service once. 
"+ + "This usually means that there are conflicting modules registering the same msg service", + fqMethod, + ), + ) +} + +msr.routes[requestTypeName] = func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { + ctx = ctx.WithEventManager(sdk.NewEventManager()) + interceptor := func(goCtx context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{ +}, error) { + goCtx = context.WithValue(goCtx, sdk.SdkContextKey, ctx) + +return handler(goCtx, msg) +} + if m, ok := msg.(sdk.HasValidateBasic); ok { + if err := m.ValidateBasic(); err != nil { + return nil, err +} + +} + if msr.circuitBreaker != nil { + msgURL := sdk.MsgTypeURL(msg) + +isAllowed, err := msr.circuitBreaker.IsAllowed(ctx, msgURL) + if err != nil { + return nil, err +} + if !isAllowed { + return nil, fmt.Errorf("circuit breaker disables execution of this message: %s", msgURL) +} + +} + + / Call the method handler from the service description with the handler object. + / We don't do any decoding here because the decoding was already done. + res, err := methodHandler(handler, ctx, noopDecoder, interceptor) + if err != nil { + return nil, err +} + +resMsg, ok := res.(proto.Message) + if !ok { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "Expecting proto.Message, got %T", resMsg) +} + +return sdk.WrapServiceResult(ctx, resMsg, err) +} + +} +} + +/ SetInterfaceRegistry sets the interface registry for the router. +func (msr *MsgServiceRouter) + +SetInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) { + msr.interfaceRegistry = interfaceRegistry +} + +func noopDecoder(_ interface{ +}) + +error { + return nil +} + +func noopInterceptor(_ context.Context, _ interface{ +}, _ *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{ +}, error) { + return nil, nil +} +``` + +This method takes care of marshaling the `res` parameter to protobuf and attaching any events on the `ctx.EventManager()` to the `sdk.Result`. 
+ +```protobuf +// Result is the union of ResponseFormat and ResponseCheckTx. +message Result { + option (gogoproto.goproto_getters) = false; + + // Data is any data returned from message or handler execution. It MUST be + // length prefixed in order to separate data from multiple message executions. + // Deprecated. This field is still populated, but prefer msg_response instead + // because it also contains the Msg response typeURL. + bytes data = 1 [deprecated = true]; + + // Log contains the log information from message or handler execution. + string log = 2; + + // Events contains a slice of Event objects that were emitted during message + // or handler execution. + repeated tendermint.abci.Event events = 3 [(gogoproto.nullable) = false]; + + // msg_responses contains the Msg handler responses type packed in Anys. + // + // Since: cosmos-sdk 0.46 + repeated google.protobuf.Any msg_responses = 4; +``` + +This diagram shows a typical structure of a Protobuf `Msg` service, and how the message propagates through the module. + +![Transaction flow](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/transaction_flow.svg) + +## Telemetry + +New [telemetry metrics](/docs/sdk/v0.53/api-reference/telemetry-metrics/telemetry) can be created from `msgServer` methods when handling messages. 
+ +This is an example from the `x/auth/vesting` module: + +```go expandable +package vesting + +import ( + + "context" + "github.com/armon/go-metrics" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/telemetry" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" +) + +type msgServer struct { + keeper.AccountKeeper + types.BankKeeper +} + +/ NewMsgServerImpl returns an implementation of the vesting MsgServer interface, +/ wrapping the corresponding AccountKeeper and BankKeeper. +func NewMsgServerImpl(k keeper.AccountKeeper, bk types.BankKeeper) + +types.MsgServer { + return &msgServer{ + AccountKeeper: k, + BankKeeper: bk +} +} + +var _ types.MsgServer = msgServer{ +} + +func (s msgServer) + +CreateVestingAccount(goCtx context.Context, msg *types.MsgCreateVestingAccount) (*types.MsgCreateVestingAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + if msg.EndTime <= 0 { + return nil, errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "invalid end time") +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := s.BankKeeper.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if s.BankKeeper.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + if acc := s.AccountKeeper.GetAccount(ctx, to); acc != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, 
"account %s already exists", msg.ToAddress) +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + baseVestingAccount := types.NewBaseVestingAccount(baseAccount, msg.Amount.Sort(), msg.EndTime) + +var vestingAccount sdk.AccountI + if msg.Delayed { + vestingAccount = types.NewDelayedVestingAccountRaw(baseVestingAccount) +} + +else { + vestingAccount = types.NewContinuousVestingAccountRaw(baseVestingAccount, ctx.BlockTime().Unix()) +} + +s.AccountKeeper.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_vesting_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + if err = s.BankKeeper.SendCoins(ctx, from, to, msg.Amount); err != nil { + return nil, err +} + +return &types.MsgCreateVestingAccountResponse{ +}, nil +} + +func (s msgServer) + +CreatePermanentLockedAccount(goCtx context.Context, msg *types.MsgCreatePermanentLockedAccount) (*types.MsgCreatePermanentLockedAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + ctx := sdk.UnwrapSDKContext(goCtx) + if err := s.BankKeeper.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { + return nil, err +} + if s.BankKeeper.BlockedAddr(to) { + return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) +} + if acc := s.AccountKeeper.GetAccount(ctx, to); 
acc != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + vestingAccount := types.NewPermanentLockedAccount(baseAccount, msg.Amount) + +s.AccountKeeper.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range msg.Amount { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_permanent_locked_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + if err = s.BankKeeper.SendCoins(ctx, from, to, msg.Amount); err != nil { + return nil, err +} + +return &types.MsgCreatePermanentLockedAccountResponse{ +}, nil +} + +func (s msgServer) + +CreatePeriodicVestingAccount(goCtx context.Context, msg *types.MsgCreatePeriodicVestingAccount) (*types.MsgCreatePeriodicVestingAccountResponse, error) { + from, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.FromAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'from' address: %s", err) +} + +to, err := s.AccountKeeper.AddressCodec().StringToBytes(msg.ToAddress) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid 'to' address: %s", err) +} + if msg.StartTime < 1 { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "invalid start time of %d, length must be greater than 0", msg.StartTime) +} + +var totalCoins sdk.Coins + for i, period := range msg.VestingPeriods { + if period.Length < 1 { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "invalid period length of %d in period %d, length must be greater than 0", period.Length, i) +} + +totalCoins = totalCoins.Add(period.Amount...) 
+} + ctx := sdk.UnwrapSDKContext(goCtx) + if acc := s.AccountKeeper.GetAccount(ctx, to); acc != nil { + return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidRequest, "account %s already exists", msg.ToAddress) +} + if err := s.BankKeeper.IsSendEnabledCoins(ctx, totalCoins...); err != nil { + return nil, err +} + baseAccount := authtypes.NewBaseAccountWithAddress(to) + +baseAccount = s.AccountKeeper.NewAccount(ctx, baseAccount).(*authtypes.BaseAccount) + vestingAccount := types.NewPeriodicVestingAccount(baseAccount, totalCoins.Sort(), msg.StartTime, msg.VestingPeriods) + +s.AccountKeeper.SetAccount(ctx, vestingAccount) + +defer func() { + telemetry.IncrCounter(1, "new", "account") + for _, a := range totalCoins { + if a.Amount.IsInt64() { + telemetry.SetGaugeWithLabels( + []string{"tx", "msg", "create_periodic_vesting_account" +}, + float32(a.Amount.Int64()), + []metrics.Label{ + telemetry.NewLabel("denom", a.Denom) +}, + ) +} + +} + +}() + if err = s.BankKeeper.SendCoins(ctx, from, to, totalCoins); err != nil { + return nil, err +} + +return &types.MsgCreatePeriodicVestingAccountResponse{ +}, nil +} + +func validateAmount(amount sdk.Coins) + +error { + if !amount.IsValid() { + return sdkerrors.ErrInvalidCoins.Wrap(amount.String()) +} + if !amount.IsAllPositive() { + return sdkerrors.ErrInvalidCoins.Wrap(amount.String()) +} + +return nil +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/nft.mdx b/docs/sdk/v0.53/documentation/module-system/nft.mdx new file mode 100644 index 00000000..d55e953b --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/nft.mdx @@ -0,0 +1,88 @@ +--- +title: '`x/nft`' +description: '## Abstract' +--- + +## Contents + +## Abstract + +`x/nft` is an implementation of a Cosmos SDK module, per [ADR 43](docs/sdk/next/documentation/legacy/adr-comprehensive), that allows you to create nft classification, create nft, transfer nft, update nft, and support various queries by integrating the module. 
It is fully compatible with the ERC721 specification. + +* [Concepts](#concepts) + * [Class](#class) + * [NFT](#nft) +* [State](#state) + * [Class](#class-1) + * [NFT](#nft-1) + * [NFTOfClassByOwner](#nftofclassbyowner) + * [Owner](#owner) + * [TotalSupply](#totalsupply) +* [Messages](#messages) + * [MsgSend](#msgsend) +* [Events](#events) + +## Concepts + +### Class + +The `x/nft` module defines a `Class` struct to describe the common characteristics of a class of nfts. Under a class you can create a variety of nfts; a class is analogous to an ERC721 contract on Ethereum. The design is defined in [ADR 043](docs/sdk/next/documentation/legacy/adr-comprehensive). + +### NFT + +NFT stands for Non-Fungible Token. Because NFTs are non-fungible, they can be used to represent unique things. The nfts implemented by this module are fully compatible with the Ethereum ERC721 standard. + +## State + +### Class + +A Class is mainly composed of `id`, `name`, `symbol`, `description`, `uri`, `uri_hash` and `data`, where `id` is the unique identifier of the class, similar to an Ethereum ERC721 contract address; the other fields are optional. + +* Class: `0x01 | classID | -> ProtocolBuffer(Class)` + +### NFT + +An NFT is mainly composed of `class_id`, `id`, `uri`, `uri_hash` and `data`. The pair (`class_id`, `id`) uniquely identifies an nft. `uri` and `uri_hash` are optional and identify the off-chain storage location of the nft. `data` is an `Any` type; chains that use the `x/nft` module can customize their nfts by extending this field. + +### NFTOfClassByOwner + +NFTOfClassByOwner is an index that supports querying all nfts of a given class and owner, and nothing more.
+ +* NFTOfClassByOwner: `0x03 | owner | 0x00 | classID | 0x00 | nftID |-> 0x01` + +### Owner + +Since the NFT struct has no field indicating the owner, an additional key-value pair is used to store nft ownership. When an nft is transferred, the key-value pair is updated accordingly. + +* OwnerKey: `0x04 | classID | 0x00 | nftID |-> owner` + +### TotalSupply + +TotalSupply tracks the number of nfts under a class. When a mint operation is performed under a class, the supply increases by one; when a burn is performed, the supply decreases by one. + +* TotalSupply: `0x05 | classID |-> totalSupply` + +## Messages + +In this section we describe the processing of messages for the NFT module. + + +The validation of `ClassID` and `NftID` is left to the app developer.\ +The SDK does not provide any validation for these fields. + + +### MsgSend + +You can use the `MsgSend` message to transfer the ownership of an nft. This is a function provided by the `x/nft` module. Of course, you can use the `Transfer` method to implement your own transfer logic, but you then need to pay extra attention to transfer permissions. + +The message handling should fail if: + +* provided `ClassID` does not exist. +* provided `Id` does not exist. +* provided `Sender` is not the owner of the nft. + +## Events + +The nft module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.nft.v1beta1). diff --git a/docs/sdk/v0.53/documentation/module-system/params.mdx b/docs/sdk/v0.53/documentation/module-system/params.mdx new file mode 100644 index 00000000..7822389b --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/params.mdx @@ -0,0 +1,82 @@ +--- +title: '`x/params`' +description: >- + NOTE: x/params is deprecated as of Cosmos SDK v0.53 and will be removed in the + next release. +--- + +NOTE: `x/params` is deprecated as of Cosmos SDK v0.53 and will be removed in the next release.
+ +## Abstract + +Package params provides a globally available parameter store. + +There are two main types, Keeper and Subspace. Subspace is an isolated namespace for a +paramstore, where keys are prefixed by a preconfigured spacename. Keeper has +permission to access all existing spaces. + +Subspace can be used by individual keepers, which need a private parameter store +that other keepers cannot modify. The params Keeper can be used to add a route to the `x/gov` router in order to modify any parameter in case a proposal passes. + +The following sections explain how to use the params module for master and user modules. + +## Contents + +* [Keeper](#keeper) +* [Subspace](#subspace) + * [Key](#key) + * [KeyTable](#keytable) + * [ParamSet](#paramset) + +## Keeper + +In the app initialization stage, [subspaces](#subspace) can be allocated for other modules' keepers using `Keeper.Subspace` and are stored in `Keeper.spaces`. Then, those modules can have a reference to their specific parameter store through `Keeper.GetSubspace`. + +Example: + +```go +type ExampleKeeper struct { + paramSpace paramtypes.Subspace +} + +func (k ExampleKeeper) SetParams(ctx sdk.Context, params types.Params) { + k.paramSpace.SetParamSet(ctx, &params) +} +``` + +## Subspace + +`Subspace` is a prefixed subspace of the parameter store. Each module that uses the +parameter store takes a `Subspace` to isolate access permissions. + +### Key + +Parameter keys are human readable alphanumeric strings. A parameter for the key +`"ExampleParameter"` is stored under `[]byte("SubspaceName" + "/" + "ExampleParameter")`, +where `"SubspaceName"` is the name of the subspace. + +Subkeys are secondary parameter keys that are used along with a primary parameter key. +Subkeys can be used for grouping or dynamic parameter key generation during runtime. + +### KeyTable + +All of the parameter keys that will be used should be registered at the compile
`KeyTable` is essentially a `map[string]attribute`, where the `string` is a parameter key. + +Currently, `attribute` consists of a `reflect.Type`, which indicates the parameter +type to check that provided key and value are compatible and registered, as well as a function `ValueValidatorFn` to validate values. + +Only primary keys have to be registered on the `KeyTable`. Subkeys inherit the +attribute of the primary key. + +### ParamSet + +Modules often define parameters as a proto message. The generated struct can implement +`ParamSet` interface to be used with the following methods: + +* `KeyTable.RegisterParamSet()`: registers all parameters in the struct +* `Subspace.{Get, Set}ParamSet()`: Get to & Set from the struct + +The implementor should be a pointer in order to use `GetParamSet()`. diff --git a/docs/sdk/v0.53/documentation/module-system/preblock.mdx b/docs/sdk/v0.53/documentation/module-system/preblock.mdx new file mode 100644 index 00000000..51dca5f1 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/preblock.mdx @@ -0,0 +1,31 @@ +--- +title: PreBlocker +--- + +## Synopsis + +`PreBlocker` is optional method module developers can implement in their module. They will be triggered before [`BeginBlock`](/docs/sdk/v0.53/documentation/application-framework/baseapp#beginblock). + + +**Pre-requisite Readings** + +- [Module Manager](/docs/sdk/v0.53/documentation/module-system/module-manager) + + + +## PreBlocker + +There are two semantics around the new lifecycle method: + +- It runs before the `BeginBlocker` of all modules +- It can modify consensus parameters in storage, and signal the caller through the return value. + +When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameter in the deliver context: + +``` +app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams()) +``` + +The new ctx must be passed to all the other lifecycle methods. 
+ +{/* TODO: leaving this here to update docs with core api changes */} diff --git a/docs/sdk/v0.53/documentation/module-system/protocolpool.mdx b/docs/sdk/v0.53/documentation/module-system/protocolpool.mdx new file mode 100644 index 00000000..26851081 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/protocolpool.mdx @@ -0,0 +1,694 @@ +--- +title: '`x/protocolpool`' +--- + +## Concepts + +`x/protocolpool` is a supplemental Cosmos SDK module that handles functionality for community pool funds. The module provides a separate module account for the community pool, making it easier to track the pool assets. Starting with v0.53 of the Cosmos SDK, community funds can be tracked using this module instead of the `x/distribution` module. Funds are migrated from the `x/distribution` module's community pool to `x/protocolpool`'s module account automatically. + +This module is `supplemental`; it is not required to run a Cosmos SDK chain. `x/protocolpool` enhances the community pool functionality provided by `x/distribution` and enables custom modules to further extend the community pool. + +Note: *as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool.* + +## Usage Limitations + +The following `x/distribution` handlers will now return an error when the `protocolpool` module is used with `x/distribution`: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents. + +## State Transitions + +### FundCommunityPool + +FundCommunityPool can be called by any valid account to send funds to the `x/protocolpool` module account. + +```protobuf + / FundCommunityPool defines a method to allow an account to directly + / fund the community pool.
+ rpc FundCommunityPool(MsgFundCommunityPool) returns (MsgFundCommunityPoolResponse); +``` + +### CommunityPoolSpend + +CommunityPoolSpend can be called by the module authority (default governance module account) or any account with authorization to spend funds from the `x/protocolpool` module account to a receiver address. + +```protobuf + / CommunityPoolSpend defines a governance operation for sending tokens from + / the community pool in the x/protocolpool module to another account, which + / could be the governance module itself. The authority is defined in the + / keeper. + rpc CommunityPoolSpend(MsgCommunityPoolSpend) returns (MsgCommunityPoolSpendResponse); +``` + +### CreateContinuousFund + +CreateContinuousFund is a message used to initiate a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only on withdraw request for the recipient. The fund distribution continues until expiry time is reached or continuous fund request is canceled. +NOTE: This feature is designed to work with the SDK's default bond denom. + +```protobuf + / CreateContinuousFund defines a method to distribute a percentage of funds to an address continuously. + / This ContinuousFund can be indefinite or run until a given expiry time. + / Funds come from validator block rewards from x/distribution, but may also come from + / any user who funds the ProtocolPoolEscrow module account directly through x/bank. + rpc CreateContinuousFund(MsgCreateContinuousFund) returns (MsgCreateContinuousFundResponse); +``` + +### CancelContinuousFund + +CancelContinuousFund is a message used to cancel an existing continuous fund proposal for a specific recipient. Cancelling a continuous fund stops further distribution of funds, and the state object is removed from storage. + +```protobuf + / CancelContinuousFund defines a method for cancelling continuous fund. 
+ rpc CancelContinuousFund(MsgCancelContinuousFund) returns (MsgCancelContinuousFundResponse); +``` + +## Messages + +### MsgFundCommunityPool + +This message sends coins directly from the sender to the community pool. + + +If you know the `x/protocolpool` module account address, you can directly use bank `send` transaction instead. + + +```protobuf +message MsgFundCommunityPool { + option (cosmos.msg.v1.signer) = "depositor"; + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + string depositor = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + repeated cosmos.base.v1beta1.Coin amount = 2 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; +} + +``` + +* The msg will fail if the amount cannot be transferred from the sender to the `x/protocolpool` module account. + +```go +func (k Keeper) + +FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) + +error { + return k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount) +} +``` + +### MsgCommunityPoolSpend + +This message distributes funds from the `x/protocolpool` module account to the recipient using `DistributeFromCommunityPool` keeper method. + +```protobuf +// pool to another account. This message is typically executed via a governance +// proposal with the governance module being the executing authority. +message MsgCommunityPoolSpend { + option (cosmos.msg.v1.signer) = "authority"; + + // Authority is the address that controls the module (defaults to x/gov unless overwritten). 
+ string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string recipient = 2; + repeated cosmos.base.v1beta1.Coin amount = 3 + [(gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"]; +} + +``` + +The message will fail under the following conditions: + +* The amount cannot be transferred to the recipient from the `x/protocolpool` module account. +* The `recipient` address is restricted + +```go +func (k Keeper) + +DistributeFromCommunityPool(ctx context.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) + +error { + return k.bankKeeper.SendCoinsFromModuleToAccount(ctx, types.ModuleName, receiveAddr, amount) +} +``` + +### MsgCreateContinuousFund + +This message is used to create a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only on withdraw request for the recipient. This fund distribution continues until expiry time is reached or continuous fund request is canceled. + +```protobuf + string recipient = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +// MsgUpdateParams is the Msg/UpdateParams request type. +message MsgUpdateParams { + option (cosmos.msg.v1.signer) = "authority"; + + // authority is the address that controls the module (defaults to x/gov unless overwritten). + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // params defines the x/protocolpool parameters to update. + // + // NOTE: All parameters must be supplied. + Params params = 2 [(gogoproto.nullable) = false]; +} + +// MsgUpdateParamsResponse defines the response structure for executing a +``` + +The message will fail under the following conditions: + +* The recipient address is empty or restricted. +* The percentage is zero/negative/greater than one. +* The Expiry time is less than the current block time. 
+ + +If two continuous fund proposals to the same address are created, the previous ContinuousFund will be updated with the new ContinuousFund. + + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "cosmossdk.io/math" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) + +type MsgServer struct { + Keeper +} + +var _ types.MsgServer = MsgServer{ +} + +/ NewMsgServerImpl returns an implementation of the protocolpool MsgServer interface +/ for the provided Keeper. +func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &MsgServer{ + Keeper: keeper +} +} + +func (k MsgServer) + +FundCommunityPool(ctx context.Context, msg *types.MsgFundCommunityPool) (*types.MsgFundCommunityPoolResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +depositor, err := k.authKeeper.AddressCodec().StringToBytes(msg.Depositor) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + + / send funds to community pool module account + if err := k.Keeper.FundCommunityPool(sdkCtx, msg.Amount, depositor); err != nil { + return nil, err +} + +return &types.MsgFundCommunityPoolResponse{ +}, nil +} + +func (k MsgServer) + +CommunityPoolSpend(ctx context.Context, msg *types.MsgCommunityPoolSpend) (*types.MsgCommunityPoolSpendResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + +recipient, err := k.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / distribute funds from community pool module account + if err := k.DistributeFromCommunityPool(sdkCtx, msg.Amount, recipient); err != nil { + return nil, err +} + 
+sdkCtx.Logger().Debug("transferred from the community pool", "amount", msg.Amount.String(), "recipient", msg.Recipient) + +return &types.MsgCommunityPoolSpendResponse{ +}, nil +} + +func (k MsgServer) + +CreateContinuousFund(ctx context.Context, msg *types.MsgCreateContinuousFund) (*types.MsgCreateContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / deny creation if we know this address is blocked from receiving funds + if k.bankKeeper.BlockedAddr(recipient) { + return nil, fmt.Errorf("recipient is blocked in the bank keeper: %s", msg.Recipient) +} + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, err +} + if has { + return nil, fmt.Errorf("continuous fund already exists for recipient %s", msg.Recipient) +} + + / Validate the message fields + err = validateContinuousFund(sdkCtx, *msg) + if err != nil { + return nil, err +} + + / Check if total funds percentage exceeds 100% + / If exceeds, we should not setup continuous fund proposal. 
+ totalStreamFundsPercentage := math.LegacyZeroDec() + +err = k.ContinuousFunds.Walk(sdkCtx, nil, func(key sdk.AccAddress, value types.ContinuousFund) (stop bool, err error) { + totalStreamFundsPercentage = totalStreamFundsPercentage.Add(value.Percentage) + +return false, nil +}) + if err != nil { + return nil, err +} + +totalStreamFundsPercentage = totalStreamFundsPercentage.Add(msg.Percentage) + if totalStreamFundsPercentage.GT(math.LegacyOneDec()) { + return nil, fmt.Errorf("cannot set continuous fund proposal\ntotal funds percentage exceeds 100\ncurrent total percentage: %s", totalStreamFundsPercentage.Sub(msg.Percentage).MulInt64(100).TruncateInt().String()) +} + + / Create continuous fund proposal + cf := types.ContinuousFund{ + Recipient: msg.Recipient, + Percentage: msg.Percentage, + Expiry: msg.Expiry, +} + + / Set continuous fund to the state + err = k.ContinuousFunds.Set(sdkCtx, recipient, cf) + if err != nil { + return nil, err +} + +return &types.MsgCreateContinuousFundResponse{ +}, nil +} + +func (k MsgServer) + +CancelContinuousFund(ctx context.Context, msg *types.MsgCancelContinuousFund) (*types.MsgCancelContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + canceledHeight := sdkCtx.BlockHeight() + canceledTime := sdkCtx.BlockTime() + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, fmt.Errorf("cannot get continuous fund for recipient %w", err) +} + if !has { + return nil, fmt.Errorf("cannot cancel continuous fund for recipient %s - does not exist", msg.Recipient) +} + if err := k.ContinuousFunds.Remove(sdkCtx, recipient); err != nil { + return nil, fmt.Errorf("failed to remove continuous fund for recipient %s: %w", msg.Recipient, err) +} + +return &types.MsgCancelContinuousFundResponse{ + 
CanceledTime: canceledTime, + CanceledHeight: uint64(canceledHeight), + Recipient: msg.Recipient, +}, nil +} + +func (k MsgServer) + +UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.GetAuthority()); err != nil { + return nil, err +} + if err := msg.Params.Validate(); err != nil { + return nil, fmt.Errorf("invalid params: %w", err) +} + if err := k.Params.Set(sdkCtx, msg.Params); err != nil { + return nil, fmt.Errorf("failed to set params: %w", err) +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} +``` + +### MsgCancelContinuousFund + +This message is used to cancel an existing continuous fund proposal for a specific recipient. Once canceled, the continuous fund will no longer distribute funds at each begin block, and the state object will be removed. + +```protobuf +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/x/protocolpool/proto/cosmos/protocolpool/v1/tx.proto#L136-L161 +``` + +The message will fail under the following conditions: + +* The recipient address is empty or restricted. +* The ContinuousFund for the recipient does not exist. + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "cosmossdk.io/math" + + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) + +type MsgServer struct { + Keeper +} + +var _ types.MsgServer = MsgServer{ +} + +/ NewMsgServerImpl returns an implementation of the protocolpool MsgServer interface +/ for the provided Keeper. 
+func NewMsgServerImpl(keeper Keeper) + +types.MsgServer { + return &MsgServer{ + Keeper: keeper +} +} + +func (k MsgServer) + +FundCommunityPool(ctx context.Context, msg *types.MsgFundCommunityPool) (*types.MsgFundCommunityPoolResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +depositor, err := k.authKeeper.AddressCodec().StringToBytes(msg.Depositor) + if err != nil { + return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + + / send funds to community pool module account + if err := k.Keeper.FundCommunityPool(sdkCtx, msg.Amount, depositor); err != nil { + return nil, err +} + +return &types.MsgFundCommunityPoolResponse{ +}, nil +} + +func (k MsgServer) + +CommunityPoolSpend(ctx context.Context, msg *types.MsgCommunityPoolSpend) (*types.MsgCommunityPoolSpendResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + if err := validateAmount(msg.Amount); err != nil { + return nil, err +} + +recipient, err := k.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / distribute funds from community pool module account + if err := k.DistributeFromCommunityPool(sdkCtx, msg.Amount, recipient); err != nil { + return nil, err +} + +sdkCtx.Logger().Debug("transferred from the community pool", "amount", msg.Amount.String(), "recipient", msg.Recipient) + +return &types.MsgCommunityPoolSpendResponse{ +}, nil +} + +func (k MsgServer) + +CreateContinuousFund(ctx context.Context, msg *types.MsgCreateContinuousFund) (*types.MsgCreateContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + return nil, err +} + + / deny creation if we know this 
address is blocked from receiving funds + if k.bankKeeper.BlockedAddr(recipient) { + return nil, fmt.Errorf("recipient is blocked in the bank keeper: %s", msg.Recipient) +} + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, err +} + if has { + return nil, fmt.Errorf("continuous fund already exists for recipient %s", msg.Recipient) +} + + / Validate the message fields + err = validateContinuousFund(sdkCtx, *msg) + if err != nil { + return nil, err +} + + / Check if total funds percentage exceeds 100% + / If exceeds, we should not setup continuous fund proposal. + totalStreamFundsPercentage := math.LegacyZeroDec() + +err = k.ContinuousFunds.Walk(sdkCtx, nil, func(key sdk.AccAddress, value types.ContinuousFund) (stop bool, err error) { + totalStreamFundsPercentage = totalStreamFundsPercentage.Add(value.Percentage) + +return false, nil +}) + if err != nil { + return nil, err +} + +totalStreamFundsPercentage = totalStreamFundsPercentage.Add(msg.Percentage) + if totalStreamFundsPercentage.GT(math.LegacyOneDec()) { + return nil, fmt.Errorf("cannot set continuous fund proposal\ntotal funds percentage exceeds 100\ncurrent total percentage: %s", totalStreamFundsPercentage.Sub(msg.Percentage).MulInt64(100).TruncateInt().String()) +} + + / Create continuous fund proposal + cf := types.ContinuousFund{ + Recipient: msg.Recipient, + Percentage: msg.Percentage, + Expiry: msg.Expiry, +} + + / Set continuous fund to the state + err = k.ContinuousFunds.Set(sdkCtx, recipient, cf) + if err != nil { + return nil, err +} + +return &types.MsgCreateContinuousFundResponse{ +}, nil +} + +func (k MsgServer) + +CancelContinuousFund(ctx context.Context, msg *types.MsgCancelContinuousFund) (*types.MsgCancelContinuousFundResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.Authority); err != nil { + return nil, err +} + +recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) + if err != nil { + 
return nil, err +} + canceledHeight := sdkCtx.BlockHeight() + canceledTime := sdkCtx.BlockTime() + +has, err := k.ContinuousFunds.Has(sdkCtx, recipient) + if err != nil { + return nil, fmt.Errorf("cannot get continuous fund for recipient %w", err) +} + if !has { + return nil, fmt.Errorf("cannot cancel continuous fund for recipient %s - does not exist", msg.Recipient) +} + if err := k.ContinuousFunds.Remove(sdkCtx, recipient); err != nil { + return nil, fmt.Errorf("failed to remove continuous fund for recipient %s: %w", msg.Recipient, err) +} + +return &types.MsgCancelContinuousFundResponse{ + CanceledTime: canceledTime, + CanceledHeight: uint64(canceledHeight), + Recipient: msg.Recipient, +}, nil +} + +func (k MsgServer) + +UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + if err := k.validateAuthority(msg.GetAuthority()); err != nil { + return nil, err +} + if err := msg.Params.Validate(); err != nil { + return nil, fmt.Errorf("invalid params: %w", err) +} + if err := k.Params.Set(sdkCtx, msg.Params); err != nil { + return nil, fmt.Errorf("failed to set params: %w", err) +} + +return &types.MsgUpdateParamsResponse{ +}, nil +} +``` + +## Client + +The module takes advantage of `AutoCLI` to generate its CLI commands: + +```go expandable +package protocolpool + +import ( + + "fmt" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + poolv1 "cosmossdk.io/api/cosmos/protocolpool/v1" + "github.com/cosmos/cosmos-sdk/version" +) + +/ AutoCLIOptions implements the autocli.HasAutoCLIConfig interface.
+func (am AppModule) + +AutoCLIOptions() *autocliv1.ModuleOptions { + return &autocliv1.ModuleOptions{ + Query: &autocliv1.ServiceCommandDescriptor{ + Service: poolv1.Query_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "CommunityPool", + Use: "community-pool", + Short: "Query the amount of coins in the community pool", + Example: fmt.Sprintf(`%s query protocolpool community-pool`, version.AppName), +}, + { + RpcMethod: "ContinuousFunds", + Use: "continuous-funds", + Short: "Query all continuous funds", + Example: fmt.Sprintf(`%s query protocolpool continuous-funds`, version.AppName), +}, + { + RpcMethod: "ContinuousFund", + Use: "continuous-fund ", + Short: "Query a continuous fund by its recipient address", + Example: fmt.Sprintf(`%s query protocolpool continuous-fund cosmos1...`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "recipient" +}}, +}, +}, +}, + Tx: &autocliv1.ServiceCommandDescriptor{ + Service: poolv1.Msg_ServiceDesc.ServiceName, + RpcCommandOptions: []*autocliv1.RpcCommandOptions{ + { + RpcMethod: "FundCommunityPool", + Use: "fund-community-pool ", + Short: "Funds the community pool with the specified amount", + Example: fmt.Sprintf(`%s tx protocolpool fund-community-pool 100uatom --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "amount" +}}, +}, + { + RpcMethod: "CreateContinuousFund", + Use: "create-continuous-fund ", + Short: "Create continuous fund for a recipient with optional expiry", + Example: fmt.Sprintf(`%s tx protocolpool create-continuous-fund cosmos1... 
0.2 2023-11-31T12:34:56.789Z --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "recipient" +}, + { + ProtoField: "percentage" +}, + { + ProtoField: "expiry", + Optional: true +}, +}, + GovProposal: true, +}, + { + RpcMethod: "CancelContinuousFund", + Use: "cancel-continuous-fund ", + Short: "Cancel continuous fund for a specific recipient", + Example: fmt.Sprintf(`%s tx protocolpool cancel-continuous-fund cosmos1... --from mykey`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{ + { + ProtoField: "recipient" +}, +}, + GovProposal: true, +}, + { + RpcMethod: "UpdateParams", + Use: "update-params-proposal ", + Short: "Submit a proposal to update protocolpool module params. Note: the entire params must be provided.", + Example: fmt.Sprintf(`%s tx protocolpool update-params-proposal '{ "enabled_distribution_denoms": ["stake", "foo"] +}'`, version.AppName), + PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ + ProtoField: "params" +}}, + GovProposal: true, +}, +}, +}, +} +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/query-services.mdx b/docs/sdk/v0.53/documentation/module-system/query-services.mdx new file mode 100644 index 00000000..b28b4aee --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/query-services.mdx @@ -0,0 +1,390 @@ +--- +title: Query Services +--- + +## Synopsis + +A Protobuf Query service processes [`queries`](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#queries). Query services are specific to the module in which they are defined, and only process `queries` defined within said module. They are called from `BaseApp`'s [`Query` method](/docs/sdk/v0.53/documentation/application-framework/baseapp#query). 
+ + +**Pre-requisite Readings** + +- [Module Manager](/docs/sdk/v0.53/documentation/module-system/module-manager) +- [Messages and Queries](/docs/sdk/v0.53/documentation/module-system/messages-and-queries) + + + +## Implementation of a module query service + +### gRPC Service + +When defining a Protobuf `Query` service, a `QueryServer` interface is generated for each module with all the service methods: + +```go +type QueryServer interface { + QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error) + +QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error) +} +``` + +These custom query methods should be implemented by the module's keeper, typically in `./keeper/grpc_query.go`. The first parameter of these methods is a generic `context.Context`. The Cosmos SDK provides the function `sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided +`context.Context`. + +Here's an example implementation for the bank module: + +```go expandable +package keeper + +import ( + + "context" + "cosmossdk.io/collections" + "cosmossdk.io/math" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + "cosmossdk.io/store/prefix" + "github.com/cosmos/cosmos-sdk/runtime" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/query" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +type Querier struct { + BaseKeeper +} + +var _ types.QueryServer = BaseKeeper{ +} + +func NewQuerier(keeper *BaseKeeper) + +Querier { + return Querier{ + BaseKeeper: *keeper +} +} + +/ Balance implements the Query/Balance gRPC method +func (k BaseKeeper) + +Balance(ctx context.Context, req *types.QueryBalanceRequest) (*types.QueryBalanceResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + sdkCtx := 
sdk.UnwrapSDKContext(ctx) + +address, err := k.ak.AddressCodec().StringToBytes(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + balance := k.GetBalance(sdkCtx, address, req.Denom) + +return &types.QueryBalanceResponse{ + Balance: &balance +}, nil +} + +/ AllBalances implements the Query/AllBalances gRPC method +func (k BaseKeeper) + +AllBalances(ctx context.Context, req *types.QueryAllBalancesRequest) (*types.QueryAllBalancesResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + +addr, err := k.ak.AddressCodec().StringToBytes(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + balances := sdk.NewCoins() + + _, pageRes, err := query.CollectionFilteredPaginate(ctx, k.Balances, req.Pagination, func(key collections.Pair[sdk.AccAddress, string], value math.Int) (include bool, err error) { + denom := key.K2() + if req.ResolveDenom { + if metadata, ok := k.GetDenomMetaData(sdkCtx, denom); ok { + denom = metadata.Display +} + +} + +balances = append(balances, sdk.NewCoin(denom, value)) + +return false, nil / we don't include results because we're appending them here. +}, query.WithCollectionPaginationPairPrefix[sdk.AccAddress, string](/docs/sdk/v0.53/documentation/module-system/addr)) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "paginate: %v", err) +} + +return &types.QueryAllBalancesResponse{ + Balances: balances, + Pagination: pageRes +}, nil +} + +/ SpendableBalances implements a gRPC query handler for retrieving an account's +/ spendable balances. 
+func (k BaseKeeper) + +SpendableBalances(ctx context.Context, req *types.QuerySpendableBalancesRequest) (*types.QuerySpendableBalancesResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + +addr, err := k.ak.AddressCodec().StringToBytes(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + balances := sdk.NewCoins() + zeroAmt := math.ZeroInt() + + _, pageRes, err := query.CollectionFilteredPaginate(ctx, k.Balances, req.Pagination, func(key collections.Pair[sdk.AccAddress, string], _ math.Int) (include bool, err error) { + balances = append(balances, sdk.NewCoin(key.K2(), zeroAmt)) + +return false, nil / not including results as they're appended here +}, query.WithCollectionPaginationPairPrefix[sdk.AccAddress, string](/docs/sdk/v0.53/documentation/module-system/addr)) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "paginate: %v", err) +} + result := sdk.NewCoins() + spendable := k.SpendableCoins(sdkCtx, addr) + for _, c := range balances { + result = append(result, sdk.NewCoin(c.Denom, spendable.AmountOf(c.Denom))) +} + +return &types.QuerySpendableBalancesResponse{ + Balances: result, + Pagination: pageRes +}, nil +} + +/ SpendableBalanceByDenom implements a gRPC query handler for retrieving an account's +/ spendable balance for a specific denom. 
+func (k BaseKeeper) + +SpendableBalanceByDenom(ctx context.Context, req *types.QuerySpendableBalanceByDenomRequest) (*types.QuerySpendableBalanceByDenomResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + +addr, err := k.ak.AddressCodec().StringToBytes(req.Address) + if err != nil { + return nil, status.Errorf(codes.InvalidArgument, "invalid address: %s", err.Error()) +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + spendable := k.SpendableCoin(sdkCtx, addr, req.Denom) + +return &types.QuerySpendableBalanceByDenomResponse{ + Balance: &spendable +}, nil +} + +/ TotalSupply implements the Query/TotalSupply gRPC method +func (k BaseKeeper) + +TotalSupply(ctx context.Context, req *types.QueryTotalSupplyRequest) (*types.QueryTotalSupplyResponse, error) { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +totalSupply, pageRes, err := k.GetPaginatedTotalSupply(sdkCtx, req.Pagination) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + +return &types.QueryTotalSupplyResponse{ + Supply: totalSupply, + Pagination: pageRes +}, nil +} + +/ SupplyOf implements the Query/SupplyOf gRPC method +func (k BaseKeeper) + +SupplyOf(c context.Context, req *types.QuerySupplyOfRequest) (*types.QuerySupplyOfResponse, error) { + if req == nil { + return nil, status.Error(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + ctx := sdk.UnwrapSDKContext(c) + supply := k.GetSupply(ctx, req.Denom) + +return &types.QuerySupplyOfResponse{ + Amount: sdk.NewCoin(req.Denom, supply.Amount) +}, nil +} + +/ Params implements the gRPC service handler for querying x/bank parameters. 
+func (k BaseKeeper) + +Params(ctx context.Context, req *types.QueryParamsRequest) (*types.QueryParamsResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + sdkCtx := sdk.UnwrapSDKContext(ctx) + params := k.GetParams(sdkCtx) + +return &types.QueryParamsResponse{ + Params: params +}, nil +} + +/ DenomsMetadata implements Query/DenomsMetadata gRPC method. +func (k BaseKeeper) + +DenomsMetadata(c context.Context, req *types.QueryDenomsMetadataRequest) (*types.QueryDenomsMetadataResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + kvStore := runtime.KVStoreAdapter(k.storeService.OpenKVStore(c)) + store := prefix.NewStore(kvStore, types.DenomMetadataPrefix) + metadatas := []types.Metadata{ +} + +pageRes, err := query.Paginate(store, req.Pagination, func(_, value []byte) + +error { + var metadata types.Metadata + k.cdc.MustUnmarshal(value, &metadata) + +metadatas = append(metadatas, metadata) + +return nil +}) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + +return &types.QueryDenomsMetadataResponse{ + Metadatas: metadatas, + Pagination: pageRes, +}, nil +} + +/ DenomMetadata implements Query/DenomMetadata gRPC method. 
+func (k BaseKeeper) + +DenomMetadata(c context.Context, req *types.QueryDenomMetadataRequest) (*types.QueryDenomMetadataResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + ctx := sdk.UnwrapSDKContext(c) + +metadata, found := k.GetDenomMetaData(ctx, req.Denom) + if !found { + return nil, status.Errorf(codes.NotFound, "client metadata for denom %s", req.Denom) +} + +return &types.QueryDenomMetadataResponse{ + Metadata: metadata, +}, nil +} + +func (k BaseKeeper) + +DenomOwners( + goCtx context.Context, + req *types.QueryDenomOwnersRequest, +) (*types.QueryDenomOwnersResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") +} + if err := sdk.ValidateDenom(req.Denom); err != nil { + return nil, status.Error(codes.InvalidArgument, err.Error()) +} + +var denomOwners []*types.DenomOwner + + _, pageRes, err := query.CollectionFilteredPaginate(goCtx, k.Balances.Indexes.Denom, req.Pagination, + func(key collections.Pair[string, sdk.AccAddress], value collections.NoValue) (include bool, err error) { + amt, err := k.Balances.Get(goCtx, collections.Join(key.K2(), req.Denom)) + if err != nil { + return false, err +} + +denomOwners = append(denomOwners, &types.DenomOwner{ + Address: key.K2().String(), + Balance: sdk.NewCoin(req.Denom, amt), +}) + +return false, nil +}, + query.WithCollectionPaginationPairPrefix[string, sdk.AccAddress](/docs/sdk/v0.53/documentation/module-system/req.Denom), + ) + if err != nil { + return nil, err +} + +return &types.QueryDenomOwnersResponse{ + DenomOwners: denomOwners, + Pagination: pageRes +}, nil +} + +func (k BaseKeeper) + +SendEnabled(goCtx context.Context, req *types.QuerySendEnabledRequest) (*types.QuerySendEnabledResponse, error) { + if req == nil { + return nil, status.Errorf(codes.InvalidArgument, "empty request") 
+} + ctx := sdk.UnwrapSDKContext(goCtx) + resp := &types.QuerySendEnabledResponse{ +} + if len(req.Denoms) > 0 { + for _, denom := range req.Denoms { + if se, ok := k.getSendEnabled(ctx, denom); ok { + resp.SendEnabled = append(resp.SendEnabled, types.NewSendEnabled(denom, se)) +} + +} + +} + +else { + results, pageResp, err := query.CollectionPaginate[string, bool](ctx, k.BaseViewKeeper.SendEnabled, req.Pagination) + if err != nil { + return nil, status.Error(codes.Internal, err.Error()) +} + for _, r := range results { + resp.SendEnabled = append(resp.SendEnabled, &types.SendEnabled{ + Denom: r.Key, + Enabled: r.Value, +}) +} + +resp.Pagination = pageResp +} + +return resp, nil +} +``` + +### Calling queries from the State Machine + +The Cosmos SDK v0.47 introduces a new `cosmos.query.v1.module_query_safe` Protobuf annotation which is used to state that a query is safe to be called from within the state machine, for example: + +- a Keeper's query function can be called from another module's Keeper, +- ADR-033 intermodule query calls, +- CosmWasm contracts can also directly interact with these queries. + +If the `module_query_safe` annotation is set to `true`, it means: + +- The query is deterministic: given a block height it will return the same response upon multiple calls, and doesn't introduce any state-machine breaking changes across SDK patch versions. +- Gas consumption never fluctuates across calls and across patch versions. + +If you are a module developer and want to use the `module_query_safe` annotation for your own query, you have to ensure the following: + +- the query is deterministic and won't introduce state-machine-breaking changes without coordinated upgrades. +- it has its gas tracked, to avoid the attack vector where no gas is accounted for + on potentially high-computation queries.
diff --git a/docs/sdk/v0.53/documentation/module-system/slashing.mdx b/docs/sdk/v0.53/documentation/module-system/slashing.mdx new file mode 100644 index 00000000..71093a3c --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/slashing.mdx @@ -0,0 +1,858 @@ +--- +title: '`x/slashing`' +description: >- + This section specifies the slashing module of the Cosmos SDK, which implements + functionality first outlined in the Cosmos Whitepaper in June 2016. +--- + +## Abstract + +This section specifies the slashing module of the Cosmos SDK, which implements functionality +first outlined in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in June 2016. + +The slashing module enables Cosmos SDK-based blockchains to disincentivize any attributable action +by a protocol-recognized actor with value at stake by penalizing them ("slashing"). + +Penalties may include, but are not limited to: + +* Burning some amount of their stake +* Removing their ability to vote on future blocks for a period of time. + +This module will be used by the Cosmos Hub, the first hub in the Cosmos ecosystem. + +## Contents + +* [Concepts](#concepts) + * [States](#states) + * [Tombstone Caps](#tombstone-caps) + * [Infraction Timelines](#infraction-timelines) +* [State](#state) + * [Signing Info (Liveness)](#signing-info-liveness) + * [Params](#params) +* [Messages](#messages) + * [Unjail](#unjail) +* [BeginBlock](#beginblock) + * [Liveness Tracking](#liveness-tracking) +* [Hooks](#hooks) +* [Events](#events) +* [Staking Tombstone](#staking-tombstone) +* [Parameters](#parameters) +* [CLI](#cli) + * [Query](#query) + * [Transactions](#transactions) + * [gRPC](#grpc) + * [REST](#rest) + +## Concepts + +### States + +At any given time, there are any number of validators registered in the state +machine. Each block, the top `MaxValidators` (defined by `x/staking`) validators +who are not jailed become *bonded*, meaning that they may propose and vote on +blocks. 
blocks. Validators who are *bonded* are *at stake*, meaning that part or all of +their stake and their delegators' stake is at risk if they commit a protocol fault. + +For each of these validators we keep a `ValidatorSigningInfo` record that contains +information pertaining to the validator's liveness and other infraction-related +attributes. + +### Tombstone Caps + +In order to mitigate the impact of initially likely categories of non-malicious +protocol faults, the Cosmos Hub implements for each validator +a *tombstone* cap, which only allows a validator to be slashed once for a double +sign fault. For example, if you misconfigure your HSM and double-sign a bunch of +old blocks, you'll only be punished for the first double-sign (and then immediately tombstoned). This will still be quite expensive and desirable to avoid, but tombstone caps +somewhat blunt the economic impact of unintentional misconfiguration. + +Liveness faults do not have caps, as they can't stack upon each other. Liveness bugs are "detected" as soon as the infraction occurs, and the validators are immediately put in jail, so it is not possible for them to commit multiple liveness faults without unjailing in between. + +### Infraction Timelines + +To illustrate how the `x/slashing` module handles submitted evidence through +CometBFT consensus, consider the following examples: + +**Definitions**: + +*\[* : timeline start\ +*]* : timeline end\ +*Cn* : infraction `n` committed\ +*Dn* : infraction `n` discovered\ +*Vb* : validator bonded\ +*Vu* : validator unbonded + +#### Single Double Sign Infraction + +\[----------C1----D1,Vu-----] + +A single infraction is committed then later discovered, at which point the +validator is unbonded and slashed at the full amount for the infraction. + +#### Multiple Double Sign Infractions + +\[----------C1--C2---C3---D1,D2,D3Vu-----] + +Multiple infractions are committed and then later discovered, at which point the +validator is jailed and slashed for only one infraction.
Because the validator +is also tombstoned, they cannot rejoin the validator set. + +## State + +### Signing Info (Liveness) + +Every block includes a set of precommits by the validators for the previous block, +known as the `LastCommitInfo` provided by CometBFT. A `LastCommitInfo` is valid so +long as it contains precommits from +2/3 of total voting power. + +Proposers are incentivized to include precommits from all validators in the CometBFT `LastCommitInfo` +by receiving additional fees proportional to the difference between the voting +power included in the `LastCommitInfo` and +2/3 (see [fee distribution](/docs/sdk/v0.53/documentation/module-system/distribution#begin-block)). + +```go +type LastCommitInfo struct { + Round int32 + Votes []VoteInfo +} +``` + +Validators are penalized for failing to be included in the `LastCommitInfo` for some +number of blocks by being automatically jailed, potentially slashed, and unbonded. + +Information about a validator's liveness activity is tracked through `ValidatorSigningInfo`. +It is indexed in the store as follows: + +* ValidatorSigningInfo: `0x01 | ConsAddrLen (1 byte) | ConsAddress -> ProtocolBuffer(ValSigningInfo)` +* MissedBlocksBitArray: `0x02 | ConsAddrLen (1 byte) | ConsAddress | LittleEndianUint64(signArrayIndex) -> VarInt(didMiss)` (varint is a number encoding format) + +The first mapping allows us to easily look up the recent signing info for a +validator based on the validator's consensus address. + +The second mapping (`MissedBlocksBitArray`) acts +as a bit-array of size `SignedBlocksWindow` that tells us if the validator missed +the block for a given index in the bit-array. The index in the bit-array is given +as a little-endian uint64. +The result is a `varint` that takes on `0` or `1`, where `0` indicates the +validator did not miss (did sign) the corresponding block, and `1` indicates +they missed the block (did not sign). + +Note that the `MissedBlocksBitArray` is not explicitly initialized up-front.
Keys
+are added as we progress through the first `SignedBlocksWindow` blocks for a newly
+bonded validator. The `SignedBlocksWindow` parameter defines the size
+(number of blocks) of the sliding window used to track validator liveness.
+
+The information stored for tracking validator liveness is as follows:
+
+```protobuf
+// ValidatorSigningInfo defines a validator's signing info for monitoring their
+// liveness activity.
+message ValidatorSigningInfo {
+ option (gogoproto.equal) = true;
+ option (gogoproto.goproto_stringer) = false;
+
+ string address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ // Height at which validator was first a candidate OR was unjailed
+ int64 start_height = 2;
+ // Index which is incremented each time the validator was bonded
+ // in a block and may have signed a precommit or not. This in conjunction with the
+ // `SignedBlocksWindow` param determines the index in the `MissedBlocksBitArray`.
+ int64 index_offset = 3;
+ // Timestamp until which the validator is jailed due to liveness downtime.
+ google.protobuf.Timestamp jailed_until = 4
+ [(gogoproto.stdtime) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+ // Whether or not a validator has been tombstoned (killed out of validator set). It is set
+ // once the validator commits an equivocation or for any other configured misbehavior.
+ bool tombstoned = 5;
+ // A counter kept to avoid unnecessary array reads.
+ // Note that `Sum(MissedBlocksBitArray)` always equals `MissedBlocksCounter`.
+ int64 missed_blocks_counter = 6;
+}
+```
+
+### Params
+
+The slashing module stores its params in state with the prefix of `0x00`;
+they can be updated with governance or by the address with authority.
+
+* Params: `0x00 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params represents the parameters used by the slashing module.
+message Params { + option (amino.name) = "cosmos-sdk/x/slashing/Params"; + + int64 signed_blocks_window = 1; + bytes min_signed_per_window = 2 [ + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true + ]; + google.protobuf.Duration downtime_jail_duration = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true]; + bytes slash_fraction_double_sign = 4 [ + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true + ]; + bytes slash_fraction_downtime = 5 [ + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true + ]; +} +``` + +## Messages + +In this section we describe the processing of messages for the `slashing` module. + +### Unjail + +If a validator was automatically unbonded due to downtime and wishes to come back online & +possibly rejoin the bonded set, it must send `MsgUnjail`: + +```protobuf +/ MsgUnjail is an sdk.Msg used for unjailing a jailed validator, thus returning +/ them into the bonded validator set, so they can begin receiving provisions +/ and rewards again. 
+message MsgUnjail {
+ string validator_addr = 1;
+}
+```
+
+Below is pseudocode for the `MsgSrv/Unjail` RPC:
+
+```go expandable
+unjail(tx MsgUnjail)
+
+validator = getValidator(tx.ValidatorAddr)
+ if validator == nil
+ fail with "No validator found"
+ if getSelfDelegation(validator) == 0
+ fail with "validator must self delegate before unjailing"
+ if !validator.Jailed
+ fail with "Validator not jailed, cannot unjail"
+
+ info = GetValidatorSigningInfo(operator)
+ if info.Tombstoned
+ fail with "Tombstoned validator cannot be unjailed"
+ if block time < info.JailedUntil
+ fail with "Validator still jailed, cannot unjail until period has expired"
+
+ validator.Jailed = false
+ setValidator(validator)
+
+return
+```
+
+If the validator has enough stake to be in the top `n = MaximumBondedValidators`, it will be automatically rebonded,
+and all delegators still delegated to the validator will be rebonded and begin to again collect
+provisions and rewards.
+
+## BeginBlock
+
+### Liveness Tracking
+
+At the beginning of each block, we update the `ValidatorSigningInfo` for each
+validator and check if they've crossed below the liveness threshold over a
+sliding window. This sliding window is defined by `SignedBlocksWindow` and the
+index in this window is determined by `IndexOffset` found in the validator's
+`ValidatorSigningInfo`. For each block processed, the `IndexOffset` is incremented
+regardless of whether the validator signed. Once the index is determined, the
+`MissedBlocksBitArray` and `MissedBlocksCounter` are updated accordingly.
+
+Finally, in order to determine if a validator crosses below the liveness threshold,
+we fetch the maximum number of blocks missed, `maxMissed`, which is
+`SignedBlocksWindow - (MinSignedPerWindow * SignedBlocksWindow)` and the minimum
+height at which we can determine liveness, `minHeight`.
If the current block is
+greater than `minHeight` and the validator's `MissedBlocksCounter` is greater than
+`maxMissed`, they will be slashed by `SlashFractionDowntime`, will be jailed
+for `DowntimeJailDuration`, and have the following values reset:
+`MissedBlocksBitArray`, `MissedBlocksCounter`, and `IndexOffset`.
+
+**Note**: Liveness slashes do **NOT** lead to tombstoning.
+
+```go expandable
+height := block.Height
+ for vote in block.LastCommitInfo.Votes {
+ signInfo := GetValidatorSigningInfo(vote.Validator.Address)
+
+ / This is a relative index, so we count blocks the validator SHOULD have
+ / signed. We use the 0-value default signing info if not present, except for
+ / start height.
+ index := signInfo.IndexOffset % SignedBlocksWindow()
+
+signInfo.IndexOffset++
+
+ / Update MissedBlocksBitArray and MissedBlocksCounter. The MissedBlocksCounter
+ / just tracks the sum of MissedBlocksBitArray. That way we avoid needing to
+ / read/write the whole array each time.
+ missedPrevious := GetValidatorMissedBlockBitArray(vote.Validator.Address, index)
+ missed := !vote.SignedLastBlock
+ switch {
+ case !missedPrevious && missed:
+ / array index has changed from not missed to missed, increment counter
+ SetValidatorMissedBlockBitArray(vote.Validator.Address, index, true)
+
+signInfo.MissedBlocksCounter++
+ case missedPrevious && !missed:
+ / array index has changed from missed to not missed, decrement counter
+ SetValidatorMissedBlockBitArray(vote.Validator.Address, index, false)
+
+signInfo.MissedBlocksCounter--
+
+ default:
+ / array index at this index has not changed; no need to update counter
+}
+ if missed {
+ / emit events...
+}
+ minHeight := signInfo.StartHeight + SignedBlocksWindow()
+ maxMissed := SignedBlocksWindow() - MinSignedPerWindow()
+
+ / If we are past the minimum height and the validator has missed too many
+ / jail and slash them.
+ if height > minHeight && signInfo.MissedBlocksCounter > maxMissed {
+ validator := ValidatorByConsAddr(vote.Validator.Address)
+
+ / emit events...
+
+ / We need to retrieve the stake distribution which signed the block, so we
+ / subtract ValidatorUpdateDelay from the block height, and subtract an
+ / additional 1 since this is the LastCommit.
+ /
+ / Note, that this CAN result in a negative "distributionHeight" up to
+ / -ValidatorUpdateDelay-1, i.e. at the end of the pre-genesis block (none) = at the beginning of the genesis block.
+ / That's fine since this is just used to filter unbonding delegations & redelegations.
+ distributionHeight := height - sdk.ValidatorUpdateDelay - 1
+
+ SlashWithInfractionReason(vote.Validator.Address, distributionHeight, vote.Validator.Power, SlashFractionDowntime(), stakingtypes.Downtime)
+
+Jail(vote.Validator.Address)
+
+signInfo.JailedUntil = block.Time.Add(DowntimeJailDuration())
+
+ / We need to reset the counter & array so that the validator won't be
+ / immediately slashed for downtime upon rebonding.
+ signInfo.MissedBlocksCounter = 0
+ signInfo.IndexOffset = 0
+ ClearValidatorMissedBlockBitArray(vote.Validator.Address)
+}
+
+SetValidatorSigningInfo(vote.Validator.Address, signInfo)
+}
+```
+
+## Hooks
+
+This section contains a description of the module's `hooks`. Hooks are operations that are executed automatically when events are raised.
+
+### Staking hooks
+
+The slashing module implements the `StakingHooks` defined in `x/staking`, which are used for record-keeping of validator information. During app initialization, these hooks should be registered in the staking module struct.
+
+The following hooks impact the slashing state:
+
+* `AfterValidatorBonded` creates a `ValidatorSigningInfo` instance as described in the following section.
+* `AfterValidatorCreated` stores a validator's consensus key.
+* `AfterValidatorRemoved` removes a validator's consensus key.
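To make the downtime thresholds used in the liveness-tracking logic above concrete, here is a small self-contained sketch. The function name and the start height of 1000 are illustrative only; the parameter values are the defaults listed in the Parameters table later on this page (`SignedBlocksWindow = 100`, `MinSignedPerWindow = 0.5`):

```go
package main

import "fmt"

// downtimeThresholds mirrors the arithmetic from the liveness-tracking
// pseudocode: maxMissed is how many blocks a validator may miss within
// the window before being slashed and jailed, and minHeight is the first
// height at which liveness can be judged after bonding.
func downtimeThresholds(signedBlocksWindow int64, minSignedPerWindow float64, startHeight int64) (maxMissed, minHeight int64) {
	// maxMissed = SignedBlocksWindow - MinSignedPerWindow * SignedBlocksWindow
	maxMissed = signedBlocksWindow - int64(minSignedPerWindow*float64(signedBlocksWindow))
	// A full window must elapse before liveness can be evaluated.
	minHeight = startHeight + signedBlocksWindow
	return maxMissed, minHeight
}

func main() {
	maxMissed, minHeight := downtimeThresholds(100, 0.5, 1000)
	fmt.Println(maxMissed, minHeight) // 50 1100
}
```

With the defaults, a validator bonded at height 1000 can miss up to 50 of the last 100 blocks; missing more than that after height 1100 triggers the downtime slash and jailing.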
+
+### Validator Bonded
+
+Upon successful first-time bonding of a new validator, we create a new `ValidatorSigningInfo` structure for the
+now-bonded validator, with `StartHeight` set to the height of the current block.
+
+If the validator was out of the validator set and gets bonded again, its new bonded height is set.
+
+```go expandable
+onValidatorBonded(address sdk.ValAddress)
+
+signingInfo, found = GetValidatorSigningInfo(address)
+ if !found {
+ signingInfo = ValidatorSigningInfo {
+ StartHeight : CurrentHeight,
+ IndexOffset : 0,
+ JailedUntil : time.Unix(0, 0),
+ Tombstoned : false,
+ MissedBlocksCounter : 0
+}
+} else {
+ signingInfo.StartHeight = CurrentHeight
+}
+
+setValidatorSigningInfo(signingInfo)
+
+return
+```
+
+## Events
+
+The slashing module emits the following events:
+
+### MsgServer
+
+#### MsgUnjail
+
+| Type | Attribute Key | Attribute Value |
+| ------- | ------------- | ------------------ |
+| message | module | slashing |
+| message | sender | `{validatorAddress}` |
+
+### Keeper
+
+#### BeginBlocker: HandleValidatorSignature
+
+| Type | Attribute Key | Attribute Value |
+| ----- | ------------- | --------------------------- |
+| slash | address | `{validatorConsensusAddress}` |
+| slash | power | `{validatorPower}` |
+| slash | reason | `{slashReason}` |
+| slash | jailed \[0] | `{validatorConsensusAddress}` |
+| slash | burned coins | `{math.Int}` |
+
+* \[0] Only included if the validator is jailed.
+
+| Type | Attribute Key | Attribute Value |
+| -------- | -------------- | --------------------------- |
+| liveness | address | `{validatorConsensusAddress}` |
+| liveness | missed\_blocks | `{missedBlocksCounter}` |
+| liveness | height | `{blockHeight}` |
+
+#### Slash
+
+* same as `"slash"` event from `HandleValidatorSignature`, but without the `jailed` attribute.
+
+#### Jail
+
+| Type | Attribute Key | Attribute Value |
+| ----- | ------------- | ------------------ |
+| slash | jailed | `{validatorAddress}` |
+
+## Staking Tombstone
+
+### Abstract
+
+In the current implementation of the `slashing` module, when the consensus engine
+informs the state machine of a validator's consensus fault, the validator is
+partially slashed and put into a "jail period", a period of time in which they
+are not allowed to rejoin the validator set. However, because of the nature of
+consensus faults and ABCI, there can be a delay between an infraction occurring
+and evidence of the infraction reaching the state machine (this is one of the
+primary reasons for the existence of the unbonding period).
+
+> Note: The tombstone concept only applies to faults that have a delay between
+> the infraction occurring and evidence reaching the state machine. For example,
+> evidence of a validator double signing may take a while to reach the state machine
+> due to unpredictable evidence gossip layer delays and the ability of validators to
+> selectively reveal double-signatures (e.g. to infrequently-online light clients).
+> Liveness slashing, on the other hand, is detected as soon as the
+> infraction occurs, and therefore no slashing period is needed. A validator is
+> immediately put into a jail period, and they cannot commit another liveness fault
+> until they unjail. In the future, there may be other types of byzantine faults
+> that have delays (for example, submitting evidence of an invalid proposal as a transaction).
+> When implemented, it will have to be decided whether these future types of
+> byzantine faults will result in a tombstoning (and if not, the slash amounts
+> will not be capped by a slashing period).
+
+In the current system design, once a validator is put in the jail for a consensus
+fault, after the `JailPeriod` they are allowed to send a transaction to `unjail`
+themselves, and thus rejoin the validator set.
+
+One of the "design desires" of the `slashing` module is that if multiple
+infractions occur before evidence is executed (and a validator is put in jail),
+they should only be punished for the single worst infraction, but not cumulatively.
+For example, if the sequence of events is:
+
+1. Validator A commits Infraction 1 (worth 30% slash)
+2. Validator A commits Infraction 2 (worth 40% slash)
+3. Validator A commits Infraction 3 (worth 35% slash)
+4. Evidence for Infraction 1 reaches state machine (and validator is put in jail)
+5. Evidence for Infraction 2 reaches state machine
+6. Evidence for Infraction 3 reaches state machine
+
+Only Infraction 2 should have its slash take effect, as it is the highest. This
+is done so that in the case of the compromise of a validator's consensus key,
+they will only be punished once, even if the hacker double-signs many blocks.
+Because the unjailing has to be done with the validator's operator key, they
+have a chance to re-secure their consensus key, and then signal that they are
+ready using their operator key. We call this period during which we track only
+the max infraction, the "slashing period".
+
+Once a validator rejoins by unjailing themselves, we begin a new slashing period;
+if they commit a new infraction after unjailing, it gets slashed cumulatively on
+top of the worst infraction from the previous slashing period.
+
+However, while infractions are grouped based on slashing periods, because
+evidence can be submitted up to an `unbondingPeriod` after the infraction, we
+still have to allow for evidence to be submitted for previous slashing periods.
+For example, if the sequence of events is:
+
+1. Validator A commits Infraction 1 (worth 30% slash)
+2. Validator A commits Infraction 2 (worth 40% slash)
+3. Evidence for Infraction 1 reaches state machine (and Validator A is put in jail)
+4.
Validator A unjails
+
+We are now in a new slashing period; however, we still have to keep the door open
+for the previous infraction, as the evidence for Infraction 2 may still come in.
+As the number of slashing periods increases, it creates more complexity as we have
+to keep track of the highest infraction amount for every single slashing period.
+
+> Note: Currently, according to the `slashing` module spec, a new slashing period
+> is created every time a validator is unbonded then rebonded. This should probably
+> be changed to jailed/unjailed. See issue [#3205](https://github.com/cosmos/cosmos-sdk/issues/3205)
+> for further details. For the remainder of this, I will assume that we only start
+> a new slashing period when a validator gets unjailed.
+
+The maximum number of slashing periods is `len(UnbondingPeriod) / len(JailPeriod)`.
+The current defaults in Gaia for the `UnbondingPeriod` and `JailPeriod` are 3 weeks
+and 2 days, respectively. This means there could potentially be up to 11 slashing
+periods concurrently being tracked per validator. If we set the `JailPeriod >= UnbondingPeriod`,
+we only have to track 1 slashing period (i.e., not have to track slashing periods).
+
+Currently, in the jail period implementation, once a validator unjails, all of
+their delegators who are delegated to them (haven't unbonded / redelegated away)
+stay with them. Given that consensus safety faults are so egregious
+(way more so than liveness faults), it is probably prudent to have delegators not
+"auto-rebond" to the validator.
+
+#### Proposal: infinite jail
+
+We propose setting the "jail time" for a
+validator who commits a consensus safety fault to `infinite` (i.e. a tombstone state).
+This essentially kicks the validator out of the validator set and does not allow
+them to re-enter the validator set. All of their delegators (including the operator themselves)
+have to either unbond or redelegate away.
The validator operator can create a new +validator if they would like, with a new operator key and consensus key, but they +have to "re-earn" their delegations back. + +Implementing the tombstone system and getting rid of the slashing period tracking +will make the `slashing` module way simpler, especially because we can remove all +of the hooks defined in the `slashing` module consumed by the `staking` module +(the `slashing` module still consumes hooks defined in `staking`). + +#### Single slashing amount + +Another optimization that can be made is that if we assume that all ABCI faults +for CometBFT consensus are slashed at the same level, we don't have to keep +track of "max slash". Once an ABCI fault happens, we don't have to worry about +comparing potential future ones to find the max. + +Currently the only CometBFT ABCI fault is: + +* Unjustified precommits (double signs) + +It is currently planned to include the following fault in the near future: + +* Signing a precommit when you're in unbonding phase (needed to make light client bisection safe) + +Given that these faults are both attributable byzantine faults, we will likely +want to slash them equally, and thus we can enact the above change. + +> Note: This change may make sense for current CometBFT consensus, but maybe +> not for a different consensus algorithm or future versions of CometBFT that +> may want to punish at different levels (for example, partial slashing). 
+
+## Parameters
+
+The slashing module contains the following parameters:
+
+| Key | Type | Example |
+| ----------------------- | -------------- | ---------------------- |
+| SignedBlocksWindow | string (int64) | "100" |
+| MinSignedPerWindow | string (dec) | "0.500000000000000000" |
+| DowntimeJailDuration | string (ns) | "600000000000" |
+| SlashFractionDoubleSign | string (dec) | "0.050000000000000000" |
+| SlashFractionDowntime | string (dec) | "0.010000000000000000" |
+
+## CLI
+
+A user can query and interact with the `slashing` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `slashing` state.
+
+```shell
+simd query slashing --help
+```
+
+#### params
+
+The `params` command allows users to query genesis parameters for the slashing module.
+
+```shell
+simd query slashing params [flags]
+```
+
+Example:
+
+```shell
+simd query slashing params
+```
+
+Example Output:
+
+```yml
+downtime_jail_duration: 600s
+min_signed_per_window: "0.500000000000000000"
+signed_blocks_window: "100"
+slash_fraction_double_sign: "0.050000000000000000"
+slash_fraction_downtime: "0.010000000000000000"
+```
+
+#### signing-info
+
+The `signing-info` command allows users to query the signing info of a validator using the consensus public key.
+
+```shell
+simd query slashing signing-info [validator-conspub] [flags]
+```
+
+Example:
+
+```shell
+simd query slashing signing-info '{"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys6jD5B6tPgC8="}'
+```
+
+Example Output:
+
+```yml
+address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+index_offset: "2068"
+jailed_until: "1970-01-01T00:00:00Z"
+missed_blocks_counter: "0"
+start_height: "0"
+tombstoned: false
+```
+
+#### signing-infos
+
+The `signing-infos` command allows users to query signing infos of all validators.
+
+```shell
+simd query slashing signing-infos [flags]
+```
+
+Example:
+
+```shell
+simd query slashing signing-infos
+```
+
+Example Output:
+
+```yml
+info:
+- address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+  index_offset: "2075"
+  jailed_until: "1970-01-01T00:00:00Z"
+  missed_blocks_counter: "0"
+  start_height: "0"
+  tombstoned: false
+pagination:
+  next_key: null
+  total: "0"
+```
+
+### Transactions
+
+The `tx` commands allow users to interact with the `slashing` module.
+
+```bash
+simd tx slashing --help
+```
+
+#### unjail
+
+The `unjail` command allows users to unjail a validator previously jailed for downtime.
+
+```bash
+simd tx slashing unjail --from mykey [flags]
+```
+
+Example:
+
+```bash
+simd tx slashing unjail --from mykey
+```
+
+### gRPC
+
+A user can query the `slashing` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query the parameters of the slashing module.
+
+```shell
+cosmos.slashing.v1beta1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "signedBlocksWindow": "100",
+    "minSignedPerWindow": "NTAwMDAwMDAwMDAwMDAwMDAw",
+    "downtimeJailDuration": "600s",
+    "slashFractionDoubleSign": "NTAwMDAwMDAwMDAwMDAwMDA=",
+    "slashFractionDowntime": "MTAwMDAwMDAwMDAwMDAwMDA="
+  }
+}
+```
+
+#### SigningInfo
+
+The SigningInfo queries the signing info of a given cons address.
+
+```shell
+cosmos.slashing.v1beta1.Query/SigningInfo
+```
+
+Example:
+
+```shell
+grpcurl -plaintext -d '{"cons_address":"cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c"}' localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfo
+```
+
+Example Output:
+
+```json
+{
+  "valSigningInfo": {
+    "address": "cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c",
+    "indexOffset": "3493",
+    "jailedUntil": "1970-01-01T00:00:00Z"
+  }
+}
+```
+
+#### SigningInfos
+
+The SigningInfos queries signing info of all validators.
+
+```shell
+cosmos.slashing.v1beta1.Query/SigningInfos
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfos
+```
+
+Example Output:
+
+```json expandable
+{
+  "info": [
+    {
+      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+      "indexOffset": "2467",
+      "jailedUntil": "1970-01-01T00:00:00Z"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+### REST
+
+A user can query the `slashing` module using REST endpoints.
+
+#### Params
+
+```shell
+/cosmos/slashing/v1beta1/params
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/params"
+```
+
+Example Output:
+
+```json
+{
+  "params": {
+    "signed_blocks_window": "100",
+    "min_signed_per_window": "0.500000000000000000",
+    "downtime_jail_duration": "600s",
+    "slash_fraction_double_sign": "0.050000000000000000",
+    "slash_fraction_downtime": "0.010000000000000000"
+  }
+}
+```
+
+#### signing\_info
+
+```shell
+/cosmos/slashing/v1beta1/signing_infos/%s
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos/cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c"
+```
+
+Example Output:
+
+```json
+{
+  "val_signing_info": {
+    "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+    "start_height": "0",
+    "index_offset": "4184",
+    "jailed_until": "1970-01-01T00:00:00Z",
+    "tombstoned": false,
+    "missed_blocks_counter": "0"
+  }
+}
+```
+
+#### signing\_infos
+
+```shell
+/cosmos/slashing/v1beta1/signing_infos
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos"
+```
+
+Example Output:
+
+```json expandable
+{
+  "info": [
+    {
+      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+      "start_height": "0",
+      "index_offset": "4169",
+      "jailed_until": "1970-01-01T00:00:00Z",
+      "tombstoned": false,
+      "missed_blocks_counter": "0"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
diff --git
a/docs/sdk/v0.53/documentation/module-system/staking.mdx b/docs/sdk/v0.53/documentation/module-system/staking.mdx
new file mode 100644
index 00000000..fd3ea486
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/staking.mdx
@@ -0,0 +1,3851 @@
+---
+title: '`x/staking`'
+description: >-
+  This paper specifies the Staking module of the Cosmos SDK that was first
+  described in the Cosmos Whitepaper in June 2016.
+---
+
+## Abstract
+
+This paper specifies the Staking module of the Cosmos SDK that was first
+described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper)
+in June 2016.
+
+The module enables Cosmos SDK-based blockchains to support an advanced
+Proof-of-Stake (PoS) system. In this system, holders of the native staking token of
+the chain can become validators and can delegate tokens to validators,
+ultimately determining the effective validator set for the system.
+
+This module is used in the Cosmos Hub, the first Hub in the Cosmos
+network.
+
+## Contents
+
+* [State](#state)
+  * [Pool](#pool)
+  * [LastTotalPower](#lasttotalpower)
+  * [ValidatorUpdates](#validatorupdates)
+  * [UnbondingID](#unbondingid)
+  * [Params](#params)
+  * [Validator](#validator)
+  * [Delegation](#delegation)
+  * [UnbondingDelegation](#unbondingdelegation)
+  * [Redelegation](#redelegation)
+  * [Queues](#queues)
+  * [HistoricalInfo](#historicalinfo)
+* [State Transitions](#state-transitions)
+  * [Validators](#validators)
+  * [Delegations](#delegations)
+  * [Slashing](#slashing)
+  * [How Shares are calculated](#how-shares-are-calculated)
+* [Messages](#messages)
+  * [MsgCreateValidator](#msgcreatevalidator)
+  * [MsgEditValidator](#msgeditvalidator)
+  * [MsgDelegate](#msgdelegate)
+  * [MsgUndelegate](#msgundelegate)
+  * [MsgCancelUnbondingDelegation](#msgcancelunbondingdelegation)
+  * [MsgBeginRedelegate](#msgbeginredelegate)
+  * [MsgUpdateParams](#msgupdateparams)
+* [Begin-Block](#begin-block)
+  * [Historical Info Tracking](#historical-info-tracking)
+* 
[End-Block](#end-block)
+  * [Validator Set Changes](#validator-set-changes)
+  * [Queues](#queues-1)
+* [Hooks](#hooks)
+* [Events](#events)
+  * [EndBlocker](#endblocker)
+  * [Msg's](#msgs)
+* [Parameters](#parameters)
+* [Client](#client)
+  * [CLI](#cli)
+  * [gRPC](#grpc)
+  * [REST](#rest)
+
+## State
+
+### Pool
+
+Pool is used for tracking bonded and not-bonded token supply of the bond denomination.
+
+### LastTotalPower
+
+LastTotalPower tracks the total amount of bonded tokens recorded during the previous end block.
+Store entries prefixed with "Last" must remain unchanged until EndBlock.
+
+* LastTotalPower: `0x12 -> ProtocolBuffer(math.Int)`
+
+### ValidatorUpdates
+
+ValidatorUpdates contains the validator updates returned to ABCI at the end of every block.
+The values are overwritten in every block.
+
+* ValidatorUpdates `0x61 -> []abci.ValidatorUpdate`
+
+### UnbondingID
+
+UnbondingID stores the ID of the latest unbonding operation. It enables creating unique IDs for unbonding operations, i.e., UnbondingID is incremented every time a new unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated.
+
+* UnbondingID: `0x37 -> uint64`
+
+### Params
+
+The staking module stores its params in state with the prefix of `0x51`;
+they can be updated with governance or by the address with authority.
+
+* Params: `0x51 | ProtocolBuffer(Params)`
+
+```protobuf
+// Params defines the parameters for the x/staking module.
+message Params {
+ option (amino.name) = "cosmos-sdk/x/staking/Params";
+ option (gogoproto.equal) = true;
+ option (gogoproto.goproto_stringer) = false;
+
+ // unbonding_time is the time duration of unbonding.
+ google.protobuf.Duration unbonding_time = 1
+ [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true];
+ // max_validators is the maximum number of validators.
+ uint32 max_validators = 2;
+ // max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio).
+ uint32 max_entries = 3;
+ // historical_entries is the number of historical entries to persist.
+ uint32 historical_entries = 4;
+ // bond_denom defines the bondable coin denomination.
+ string bond_denom = 5;
+ // min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators
+ string min_commission_rate = 6 [
+ (gogoproto.moretags) = "yaml:\"min_commission_rate\"",
+ (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+ (gogoproto.nullable) = false
+ ];
+}
+```
+
+### Validator
+
+Validators can have one of three statuses:
+
+* `Unbonded`: The validator is not in the active set. They cannot sign blocks and do not earn
+  rewards. They can receive delegations.
+* `Bonded`: Once the validator receives sufficient bonded tokens they automatically join the
+  active set during [`EndBlock`](#validator-set-changes) and their status is updated to `Bonded`.
+  They are signing blocks and receiving rewards. They can receive further delegations.
+  They can be slashed for misbehavior. Delegators to this validator who unbond their delegation
+  must wait the duration of the UnbondingTime, a chain-specific param, during which time
+  they are still slashable for offences of the source validator if those offences were committed
+  during the period of time that the tokens were bonded.
+* `Unbonding`: When a validator leaves the active set, either by choice or due to slashing, jailing or
+  tombstoning, an unbonding of all their delegations begins. All delegations must then wait the UnbondingTime
+  before their tokens are moved to their accounts from the `BondedPool`.
+
+
+Tombstoning is permanent; once tombstoned, a validator's consensus key cannot be reused within the chain where the tombstoning happened.
+
+
+Validator objects should be primarily stored and accessed by the
+`OperatorAddr`, an SDK validator address for the operator of the validator. Two
+additional indices are maintained per validator object in order to fulfill
+required lookups for slashing and validator-set updates. A third special index
+(`LastValidatorPower`) is also maintained which, however, remains constant
+throughout each block, unlike the first two indices which mirror the validator
+records within a block.
+
+* Validators: `0x21 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(validator)`
+* ValidatorsByConsAddr: `0x22 | ConsAddrLen (1 byte) | ConsAddr -> OperatorAddr`
+* ValidatorsByPower: `0x23 | BigEndian(ConsensusPower) | OperatorAddrLen (1 byte) | OperatorAddr -> OperatorAddr`
+* LastValidatorsPower: `0x11 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(ConsensusPower)`
+* ValidatorsByUnbondingID: `0x38 | UnbondingID -> 0x21 | OperatorAddrLen (1 byte) | OperatorAddr`
+
+`Validators` is the primary index - it ensures that each operator can have only one
+associated validator, where the public key of that validator can change in the
+future. Delegators can refer to the immutable operator of the validator, without
+concern for the changing public key.
+
+`ValidatorsByUnbondingID` is an additional index that enables lookups for
+validators by the unbonding IDs corresponding to their current unbonding.
+
+`ValidatorByConsAddr` is an additional index that enables lookups for slashing.
+When CometBFT reports evidence, it provides the validator address, so this
+map is needed to find the operator. Note that the `ConsAddr` corresponds to the
+address which can be derived from the validator's `ConsPubKey`.
+
+`ValidatorsByPower` is an additional index that provides a sorted list of
+potential validators to quickly determine the current active set. Here
+ConsensusPower is validator.Tokens/10^6 by default.
Note that all validators +where `Jailed` is true are not stored within this index. + +`LastValidatorsPower` is a special index that provides a historical list of the +last-block's bonded validators. This index remains constant during a block but +is updated during the validator set update process which takes place in [`EndBlock`](#end-block). + +Each validator's state is stored in a `Validator` struct: + +```protobuf +// Validator defines a validator, together with the total amount of the +// Validator's bond shares and their exchange rate to coins. Slashing results in +// a decrease in the exchange rate, allowing correct calculation of future +// undelegations without iterating over delegators. When coins are delegated to +// this validator, the validator is credited with a delegation whose number of +// bond shares is based on the amount of coins delegated divided by the current +// exchange rate. Voting power can be calculated as total bonded shares +// multiplied by exchange rate. +message Validator { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + + // operator_address defines the address of the validator's operator; bech encoded in JSON. + string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // consensus_pubkey is the consensus public key of the validator, as a Protobuf Any. + google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"]; + // jailed defined whether the validator has been jailed from bonded status or not. + bool jailed = 3; + // status is the validator status (bonded/unbonding/unbonded). + BondStatus status = 4; + // tokens define the delegated tokens (incl. self-delegation). 
+ string tokens = 5 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // delegator_shares defines total shares issued to a validator's delegators. + string delegator_shares = 6 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // description defines the description terms for the validator. + Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + // unbonding_height defines, if unbonding, the height at which this validator has begun unbonding. + int64 unbonding_height = 8; + // unbonding_time defines, if unbonding, the min time for the validator to complete unbonding. + google.protobuf.Timestamp unbonding_time = 9 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // commission defines the commission parameters. + Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + // min_self_delegation is the validator's self declared minimum self delegation. + // + // Since: cosmos-sdk 0.46 + string min_self_delegation = 11 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + + // strictly positive if this validator's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 12; + + // list of unbonding ids, each uniquely identifing an unbonding of this validator + repeated uint64 unbonding_ids = 13; +} +``` + +```protobuf +// CommissionRates defines the initial commission rates to be used for creating +// a validator. +message CommissionRates { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // rate is the commission rate charged to delegators, as a fraction. 
+ string rate = 1 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // max_rate defines the maximum commission rate which validator can ever charge, as a fraction. + string max_rate = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // max_change_rate defines the maximum daily increase of the validator commission, as a fraction. + string max_change_rate = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +// Commission defines commission parameters for a given validator. +message Commission { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // commission_rates defines the initial commission rates to be used for creating a validator. + CommissionRates commission_rates = 1 + [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + // update_time is the last time the commission rate was changed. + google.protobuf.Timestamp update_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; +} + +// Description defines a validator description. +message Description { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // moniker defines a human-readable name for the validator. + string moniker = 1; + // identity defines an optional identity signature (ex. UPort or Keybase). + string identity = 2; + // website defines an optional website link. + string website = 3; + // security_contact defines an optional email for security contact. + string security_contact = 4; + // details define other optional details. 
+  string details = 5;
+}
+```
+
+### Delegation
+
+Delegations are identified by combining `DelegatorAddr` (the address of the delegator)
+with the `ValidatorAddr`. Delegations are indexed in the store as follows:
+
+* Delegation: `0x31 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(delegation)`
+
+Stakeholders may delegate coins to validators; under this circumstance their
+funds are held in a `Delegation` data structure. It is owned by one
+delegator, and is associated with the shares for one validator. The sender of
+the transaction is the owner of the bond.
+
+```protobuf
+// Delegation represents the bond with tokens held by an account. It is
+// owned by one delegator, and is associated with the voting power of one
+// validator.
+message Delegation {
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+  option (gogoproto.goproto_stringer) = false;
+
+  // delegator_address is the bech32-encoded address of the delegator.
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // validator_address is the bech32-encoded address of the validator.
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // shares define the delegation shares received.
+  string shares = 3 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+}
+```
+
+#### Delegator Shares
+
+When one delegates tokens to a Validator, they are issued a number of delegator shares based on a
+dynamic exchange rate, calculated as follows from the total number of tokens delegated to the
+validator and the number of shares issued so far:
+
+`Shares per Token = validator.TotalShares() / validator.Tokens()`
+
+Only the number of shares received is stored on the `DelegationEntry`. 
When a delegator then
+Undelegates, the token amount they receive is calculated from the number of shares they currently
+hold and the inverse exchange rate:
+
+`Tokens per Share = validator.Tokens() / validator.TotalShares()`
+
+These `Shares` are simply an accounting mechanism. They are not a fungible asset. The reason for
+this mechanism is to simplify the accounting around slashing. Rather than iteratively slashing the
+tokens of every delegation entry, instead the Validator's total bonded tokens can be slashed,
+effectively reducing the value of each issued delegator share.
+
+### UnbondingDelegation
+
+Shares in a `Delegation` can be unbonded, but they must for some time exist as
+an `UnbondingDelegation`, where shares can be reduced if Byzantine behavior is
+detected.
+
+`UnbondingDelegation` objects are indexed in the store as:
+
+* UnbondingDelegation: `0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(unbondingDelegation)`
+* UnbondingDelegationsFromValidator: `0x33 | ValidatorAddrLen (1 byte) | ValidatorAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* UnbondingDelegationByUnbondingId: `0x38 | UnbondingId -> 0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr`
+
+`UnbondingDelegation` is used in queries, to look up all unbonding delegations for
+a given delegator.
+
+`UnbondingDelegationsFromValidator` is used in slashing, to look up all
+unbonding delegations associated with a given validator that need to be
+slashed.
+
+`UnbondingDelegationByUnbondingId` is an additional index that enables
+lookups for unbonding delegations by the unbonding IDs of the containing
+unbonding delegation entries.
+
+An `UnbondingDelegation` object is created every time an unbonding is initiated.
+
+```protobuf
+// UnbondingDelegation stores all of a single delegator's unbonding bonds
+// for a single validator in a time-ordered list. 
+message UnbondingDelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + // delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_address is the bech32-encoded address of the validator. + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // entries are the unbonding delegation entries. + repeated UnbondingDelegationEntry entries = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // unbonding delegation entries +} + +// UnbondingDelegationEntry defines an unbonding object with relevant metadata. +message UnbondingDelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // creation_height is the height which the unbonding took place. + int64 creation_height = 1; + // completion_time is the unix time for unbonding completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // initial_balance defines the tokens initially scheduled to receive at completion. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // balance defines the tokens to receive at completion. 
+  string balance = 4 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+  // Incrementing id that uniquely identifies this entry
+  uint64 unbonding_id = 5;
+
+  // Strictly positive if this entry's unbonding has been stopped by external modules
+  int64 unbonding_on_hold_ref_count = 6;
+}
+```
+
+### Redelegation
+
+The bonded tokens worth of a `Delegation` may be instantly redelegated from a
+source validator to a different validator (destination validator). However, when
+this occurs, they must be tracked in a `Redelegation` object, whereby their
+shares can be slashed if their tokens have contributed to a Byzantine fault
+committed by the source validator.
+
+`Redelegation` objects are indexed in the store as:
+
+* Redelegations: `0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr -> ProtocolBuffer(redelegation)`
+* RedelegationsBySrc: `0x35 | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationsByDst: `0x36 | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationByUnbondingId: `0x38 | UnbondingId -> 0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr`
+
+`Redelegations` is used for queries, to look up all redelegations for a given
+delegator.
+
+`RedelegationsBySrc` is used for slashing based on the `ValidatorSrcAddr`.
+
+`RedelegationsByDst` is used for slashing based on the `ValidatorDstAddr`.
+ +`RedelegationByUnbondingId` is an additional index that enables +lookups for redelegations by the unbonding IDs of the containing +redelegation entries. + +A redelegation object is created every time a redelegation occurs. To prevent +"redelegation hopping" redelegations may not occur under the situation that: + +* the (re)delegator already has another immature redelegation in progress + with a destination to a validator (let's call it `Validator X`) +* and, the (re)delegator is attempting to create a *new* redelegation + where the source validator for this new redelegation is `Validator X`. + +```protobuf +// RedelegationEntry defines a redelegation object with relevant metadata. +message RedelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + // creation_height defines the height which the redelegation took place. + int64 creation_height = 1; + // completion_time defines the unix time for redelegation completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + // initial_balance defines the initial balance when redelegation started. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + // shares_dst is the amount of destination-validator shares created by redelegation. 
+ string shares_dst = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + // Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + // Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +// Redelegation contains the list of a particular delegator's redelegating bonds +// from a particular source validator to a particular destination validator. +message Redelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + // delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_src_address is the validator redelegation source operator address. + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // validator_dst_address is the validator redelegation destination operator address. + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + // entries are the redelegation entries. + repeated RedelegationEntry entries = 4 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // redelegation entries +} +``` + +### Queues + +All queue objects are sorted by timestamp. The time used within any queue is +firstly converted to UTC, rounded to the nearest nanosecond then sorted. The sortable time format +used is a slight modification of the RFC3339Nano and uses the format string +`"2006-01-02T15:04:05.000000000"`. Notably this format: + +* right pads all zeros +* drops the time zone info (we already use UTC) + +In all cases, the stored timestamp represents the maturation time of the queue +element. 
+ +#### UnbondingDelegationQueue + +For the purpose of tracking progress of unbonding delegations the unbonding +delegations queue is kept. + +* UnbondingDelegation: `0x41 | format(time) -> []DVPair` + +```protobuf +// DVPair is struct that just has a delegator-validator pair with no other data. +// It is intended to be used as a marshalable pointer. For example, a DVPair can +// be used to construct the key to getting an UnbondingDelegation from state. +message DVPair { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +#### RedelegationQueue + +For the purpose of tracking progress of redelegations the redelegation queue is +kept. + +* RedelegationQueue: `0x42 | format(time) -> []DVVTriplet` + +```protobuf +// DVVTriplet is struct that just has a delegator-validator-validator triplet +// with no other data. It is intended to be used as a marshalable pointer. For +// example, a DVVTriplet can be used to construct the key to getting a +// Redelegation from state. +message DVVTriplet { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +#### ValidatorQueue + +For the purpose of tracking progress of unbonding validators the validator +queue is kept. + +* ValidatorQueueTime: `0x43 | format(time) -> []sdk.ValAddress` + +The stored object by each key is an array of validator operator addresses from +which the validator object can be accessed. 
Typically it is expected that only
+a single validator record will be associated with a given timestamp; however, it is possible
+that multiple validators exist in the queue at the same location.
+
+### HistoricalInfo
+
+HistoricalInfo objects are stored and pruned at each block such that the staking keeper persists
+the `n` most recent historical info entries, where `n` is defined by the staking module
+parameter `HistoricalEntries`.
+
+```protobuf expandable
+syntax = "proto3";
+package cosmos.staking.v1beta1;
+
+import "gogoproto/gogo.proto";
+import "google/protobuf/any.proto";
+import "google/protobuf/duration.proto";
+import "google/protobuf/timestamp.proto";
+
+import "cosmos_proto/cosmos.proto";
+import "cosmos/base/v1beta1/coin.proto";
+import "amino/amino.proto";
+import "tendermint/types/types.proto";
+import "tendermint/abci/types.proto";
+
+option go_package = "github.com/cosmos/cosmos-sdk/x/staking/types";
+
+/ HistoricalInfo contains header and validator information for a given block.
+/ It is stored as part of staking module's state, which persists the `n` most
+/ recent HistoricalInfo
+/ (`n` is set by the staking module's `historical_entries` parameter).
+message HistoricalInfo {
+  tendermint.types.Header header = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  repeated Validator valset = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+
+/ CommissionRates defines the initial commission rates to be used for creating
+/ a validator.
+message CommissionRates {
+  option (gogoproto.equal) = true;
+  option (gogoproto.goproto_stringer) = false;
+
+  / rate is the commission rate charged to delegators, as a fraction.
+  string rate = 1 [
+    (cosmos_proto.scalar) = "cosmos.Dec",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
+    (gogoproto.nullable) = false
+  ];
+  / max_rate defines the maximum commission rate which validator can ever charge, as a fraction. 
+ string max_rate = 2 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / max_change_rate defines the maximum daily increase of the validator commission, as a fraction. + string max_change_rate = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ Commission defines commission parameters for a given validator. +message Commission { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / commission_rates defines the initial commission rates to be used for creating a validator. + CommissionRates commission_rates = 1 + [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / update_time is the last time the commission rate was changed. + google.protobuf.Timestamp update_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; +} + +/ Description defines a validator description. +message Description { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / moniker defines a human-readable name for the validator. + string moniker = 1; + / identity defines an optional identity signature (ex. UPort or Keybase). + string identity = 2; + / website defines an optional website link. + string website = 3; + / security_contact defines an optional email for security contact. + string security_contact = 4; + / details define other optional details. + string details = 5; +} + +/ Validator defines a validator, together with the total amount of the +/ Validator's bond shares and their exchange rate to coins. Slashing results in +/ a decrease in the exchange rate, allowing correct calculation of future +/ undelegations without iterating over delegators. 
When coins are delegated to +/ this validator, the validator is credited with a delegation whose number of +/ bond shares is based on the amount of coins delegated divided by the current +/ exchange rate. Voting power can be calculated as total bonded shares +/ multiplied by exchange rate. +message Validator { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + option (gogoproto.goproto_getters) = false; + + / operator_address defines the address of the validator's operator; bech encoded in JSON. + string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / consensus_pubkey is the consensus public key of the validator, as a Protobuf Any. + google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"]; + / jailed defined whether the validator has been jailed from bonded status or not. + bool jailed = 3; + / status is the validator status (bonded/unbonding/unbonded). + BondStatus status = 4; + / tokens define the delegated tokens (incl. self-delegation). + string tokens = 5 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / delegator_shares defines total shares issued to a validator's delegators. + string delegator_shares = 6 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / description defines the description terms for the validator. + Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / unbonding_height defines, if unbonding, the height at which this validator has begun unbonding. + int64 unbonding_height = 8; + / unbonding_time defines, if unbonding, the min time for the validator to complete unbonding. 
+ google.protobuf.Timestamp unbonding_time = 9 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / commission defines the commission parameters. + Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + / min_self_delegation is the validator's self declared minimum self delegation. + / + / Since: cosmos-sdk 0.46 + string min_self_delegation = 11 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + + / strictly positive if this validator's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 12; + + / list of unbonding ids, each uniquely identifing an unbonding of this validator + repeated uint64 unbonding_ids = 13; +} + +/ BondStatus is the status of a validator. +enum BondStatus { + option (gogoproto.goproto_enum_prefix) = false; + + / UNSPECIFIED defines an invalid validator status. + BOND_STATUS_UNSPECIFIED = 0 [(gogoproto.enumvalue_customname) = "Unspecified"]; + / UNBONDED defines a validator that is not bonded. + BOND_STATUS_UNBONDED = 1 [(gogoproto.enumvalue_customname) = "Unbonded"]; + / UNBONDING defines a validator that is unbonding. + BOND_STATUS_UNBONDING = 2 [(gogoproto.enumvalue_customname) = "Unbonding"]; + / BONDED defines a validator that is bonded. + BOND_STATUS_BONDED = 3 [(gogoproto.enumvalue_customname) = "Bonded"]; +} + +/ ValAddresses defines a repeated set of validator addresses. +message ValAddresses { + option (gogoproto.goproto_stringer) = false; + option (gogoproto.stringer) = true; + + repeated string addresses = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVPair is struct that just has a delegator-validator pair with no other data. +/ It is intended to be used as a marshalable pointer. For example, a DVPair can +/ be used to construct the key to getting an UnbondingDelegation from state. 
+message DVPair { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVPairs defines an array of DVPair objects. +message DVPairs { + repeated DVPair pairs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ DVVTriplet is struct that just has a delegator-validator-validator triplet +/ with no other data. It is intended to be used as a marshalable pointer. For +/ example, a DVVTriplet can be used to construct the key to getting a +/ Redelegation from state. +message DVVTriplet { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} + +/ DVVTriplets defines an array of DVVTriplet objects. +message DVVTriplets { + repeated DVVTriplet triplets = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ Delegation represents the bond with tokens held by an account. It is +/ owned by one delegator, and is associated with the voting power of one +/ validator. +message Delegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_address is the bech32-encoded address of the validator. + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / shares define the delegation shares received. 
+ string shares = 3 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ UnbondingDelegation stores all of a single delegator's unbonding bonds +/ for a single validator in an time-ordered list. +message UnbondingDelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. + string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_address is the bech32-encoded address of the validator. + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / entries are the unbonding delegation entries. + repeated UnbondingDelegationEntry entries = 3 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; / unbonding delegation entries +} + +/ UnbondingDelegationEntry defines an unbonding object with relevant metadata. +message UnbondingDelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / creation_height is the height which the unbonding took place. + int64 creation_height = 1; + / completion_time is the unix time for unbonding completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / initial_balance defines the tokens initially scheduled to receive at completion. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / balance defines the tokens to receive at completion. 
+ string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + / Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +/ RedelegationEntry defines a redelegation object with relevant metadata. +message RedelegationEntry { + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / creation_height defines the height which the redelegation took place. + int64 creation_height = 1; + / completion_time defines the unix time for redelegation completion. + google.protobuf.Timestamp completion_time = 2 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true]; + / initial_balance defines the initial balance when redelegation started. + string initial_balance = 3 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; + / shares_dst is the amount of destination-validator shares created by redelegation. + string shares_dst = 4 [ + (cosmos_proto.scalar) = "cosmos.Dec", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; + / Incrementing id that uniquely identifies this entry + uint64 unbonding_id = 5; + + / Strictly positive if this entry's unbonding has been stopped by external modules + int64 unbonding_on_hold_ref_count = 6; +} + +/ Redelegation contains the list of a particular delegator's redelegating bonds +/ from a particular source validator to a particular destination validator. +message Redelegation { + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + option (gogoproto.goproto_stringer) = false; + + / delegator_address is the bech32-encoded address of the delegator. 
+ string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_src_address is the validator redelegation source operator address. + string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / validator_dst_address is the validator redelegation destination operator address. + string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + / entries are the redelegation entries. + repeated RedelegationEntry entries = 4 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; / redelegation entries +} + +/ Params defines the parameters for the x/staking module. +message Params { + option (amino.name) = "cosmos-sdk/x/staking/Params"; + option (gogoproto.equal) = true; + option (gogoproto.goproto_stringer) = false; + + / unbonding_time is the time duration of unbonding. + google.protobuf.Duration unbonding_time = 1 + [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true]; + / max_validators is the maximum number of validators. + uint32 max_validators = 2; + / max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio). + uint32 max_entries = 3; + / historical_entries is the number of historical entries to persist. + uint32 historical_entries = 4; + / bond_denom defines the bondable coin denomination. + string bond_denom = 5; + / min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators + string min_commission_rate = 6 [ + (gogoproto.moretags) = "yaml:\"min_commission_rate\"", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec", + (gogoproto.nullable) = false + ]; +} + +/ DelegationResponse is equivalent to Delegation except that it contains a +/ balance in addition to shares which is more suitable for client responses. 
+message DelegationResponse { + option (gogoproto.equal) = false; + option (gogoproto.goproto_stringer) = false; + + Delegation delegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + + cosmos.base.v1beta1.Coin balance = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ RedelegationEntryResponse is equivalent to a RedelegationEntry except that it +/ contains a balance in addition to shares which is more suitable for client +/ responses. +message RedelegationEntryResponse { + option (gogoproto.equal) = true; + + RedelegationEntry redelegation_entry = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + string balance = 4 [ + (cosmos_proto.scalar) = "cosmos.Int", + (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int", + (gogoproto.nullable) = false + ]; +} + +/ RedelegationResponse is equivalent to a Redelegation except that its entries +/ contain a balance in addition to shares which is more suitable for client +/ responses. +message RedelegationResponse { + option (gogoproto.equal) = false; + + Redelegation redelegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + repeated RedelegationEntryResponse entries = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +} + +/ Pool is used for tracking bonded and not-bonded token supply of the bond +/ denomination. 
+message Pool {
+  option (gogoproto.description) = true;
+  option (gogoproto.equal) = true;
+  string not_bonded_tokens = 1 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false,
+    (gogoproto.jsontag) = "not_bonded_tokens",
+    (amino.dont_omitempty) = true
+  ];
+  string bonded_tokens = 2 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false,
+    (gogoproto.jsontag) = "bonded_tokens",
+    (amino.dont_omitempty) = true
+  ];
+}
+
+// Infraction indicates the infraction a validator committed.
+enum Infraction {
+  // UNSPECIFIED defines an empty infraction.
+  INFRACTION_UNSPECIFIED = 0;
+  // DOUBLE_SIGN defines a validator that double-signs a block.
+  INFRACTION_DOUBLE_SIGN = 1;
+  // DOWNTIME defines a validator that missed signing too many blocks.
+  INFRACTION_DOWNTIME = 2;
+}
+
+// ValidatorUpdates defines an array of abci.ValidatorUpdate objects.
+// TODO: explore moving this to proto/cosmos/base to separate modules from tendermint dependence
+message ValidatorUpdates {
+  repeated tendermint.abci.ValidatorUpdate updates = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+At each BeginBlock, the staking keeper persists the current Header and the Validators that committed
+the current block in a `HistoricalInfo` object. The Validators are sorted by their address to ensure that
+they are in a deterministic order.
+The oldest HistoricalEntries are pruned so that only the parameter-defined number of
+historical entries remains.
+
+## State Transitions
+
+### Validators
+
+State transitions in validators are performed on every [`EndBlock`](#validator-set-changes)
+in order to check for changes in the active `ValidatorSet`.
+
+A validator can be `Unbonded`, `Unbonding` or `Bonded`. `Unbonded`
+and `Unbonding` are collectively called `Not Bonded`.
A validator can move
+directly between all the states, except from `Bonded` to `Unbonded`.
+
+#### Not bonded to Bonded
+
+The following transition occurs when a validator's ranking in the `ValidatorPowerIndex` surpasses
+that of the `LastValidator`.
+
+* set `validator.Status` to `Bonded`
+* send the `validator.Tokens` from the `NotBondedTokens` to the `BondedPool` `ModuleAccount`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* if it exists, delete any `ValidatorQueue` record for this validator
+
+#### Bonded to Unbonding
+
+When a validator begins the unbonding process the following operations occur:
+
+* send the `validator.Tokens` from the `BondedPool` to the `NotBondedTokens` `ModuleAccount`
+* set `validator.Status` to `Unbonding`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* insert a new record into the `ValidatorQueue` for this validator
+
+#### Unbonding to Unbonded
+
+A validator moves from unbonding to unbonded when the `ValidatorQueue` object
+moves from bonded to unbonded.
+
+* update the `Validator` object for this validator
+* set `validator.Status` to `Unbonded`
+
+#### Jail/Unjail
+
+When a validator is jailed it is effectively removed from the CometBFT set.
+This process may also be reversed.
The following operations occur:
+
+* set `Validator.Jailed` and update object
+* if jailed delete record from `ValidatorByPowerIndex`
+* if unjailed add record to `ValidatorByPowerIndex`
+
+Jailed validators are not present in any of the following stores:
+
+* the power store (from consensus power to address)
+
+### Delegations
+
+#### Delegate
+
+When a delegation occurs both the validator and the delegation objects are affected.
+
+* determine the delegator's shares based on tokens delegated and the validator's exchange rate
+* remove tokens from the sending account
+* add shares to the delegation object or add them to a created validator object
+* add new delegator shares and update the `Validator` object
+* transfer the `delegation.Amount` from the delegator's account to the `BondedPool` or the `NotBondedPool` `ModuleAccount` depending on whether the `validator.Status` is `Bonded` or not
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+
+#### Begin Unbonding
+
+As a part of the Undelegate and Complete Unbonding state transitions Unbond
+Delegation may be called.
+
+* subtract the unbonded shares from the delegator
+* add the unbonded tokens to an `UnbondingDelegationEntry`
+* update the delegation or remove the delegation if there are no more shares
+* if the delegation is the operator of the validator and no more shares exist then jail the validator
+* update the validator, removing the delegator shares and associated coins
+* if the validator state is `Bonded`, transfer the `Coins` worth of the unbonded
+  shares from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* remove the validator if it is unbonded and there are no more delegation shares.
+* get a unique `unbondingId` and map it to the `UnbondingDelegationEntry` in `UnbondingDelegationByUnbondingId`
+* call the `AfterUnbondingInitiated(unbondingId)` hook
+* add the unbonding delegation to the `UnbondingDelegationQueue` with the completion time set to `UnbondingTime`
+
+#### Cancel an `UnbondingDelegation` Entry
+
+When a `cancel unbond delegation` occurs, the `validator`, the `delegation` and the `UnbondingDelegationQueue` state are all updated.
+
+* if the cancel unbonding delegation amount equals the `UnbondingDelegation` entry `balance`, the `UnbondingDelegation` entry is deleted from the `UnbondingDelegationQueue`
+* if the cancel unbonding delegation amount is less than the `UnbondingDelegation` entry `balance`, the `UnbondingDelegation` entry is updated with the new balance in the `UnbondingDelegationQueue`
+* the cancel `amount` is [Delegated](#delegations) back to the original `validator`
+
+#### Complete Unbonding
+
+For undelegations which do not complete immediately, the following operations
+occur when the unbonding delegation queue element matures:
+
+* remove the entry from the `UnbondingDelegation` object
+* transfer the tokens from the `NotBondedPool` `ModuleAccount` to the delegator `Account`
+
+#### Begin Redelegation
+
+Redelegations affect the delegation, source and destination validators.
+
+* perform an `unbond` delegation from the source validator to retrieve the tokens worth of the unbonded shares
+* using the unbonded tokens, `Delegate` them to the destination validator
+* if the `sourceValidator.Status` is `Bonded`, and the `destinationValidator` is not,
+  transfer the newly delegated tokens from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* otherwise, if the `sourceValidator.Status` is not `Bonded`, and the `destinationValidator`
+  is `Bonded`, transfer the newly delegated tokens from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
+* record the token amount in a new entry in the relevant `Redelegation`
+
+From when a redelegation begins until it completes, the delegator is in a state of "pseudo-unbonding", and can still be
+slashed for infractions that occurred before the redelegation began.
+
+#### Complete Redelegation
+
+When a redelegation completes the following occurs:
+
+* remove the entry from the `Redelegation` object
+
+### Slashing
+
+#### Slash Validator
+
+When a Validator is slashed, the following occurs:
+
+* The total `slashAmount` is calculated as the `slashFactor` (a chain parameter) \* `TokensFromConsensusPower`,
+  the total number of tokens bonded to the validator at the time of the infraction.
+* Every unbonding delegation and pseudo-unbonding redelegation from the validator that began after the
+  infraction occurred is slashed by the `slashFactor` percentage of its `initialBalance`.
+* Each amount slashed from redelegations and unbonding delegations is subtracted from the
+  total slash amount.
+* The `remainingSlashAmount` is then slashed from the validator's tokens in the `BondedPool` or
+  the `NotBondedPool` depending on the validator's status. This reduces the total supply of tokens.
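The slashing arithmetic above can be sketched as follows. This is a simplified, illustrative model with integer percentage math and made-up names; the SDK's actual implementation works with `Dec` values and store iteration.

```go
package main

import "fmt"

// slash computes the amount burned from a validator's own bonded tokens:
// the total slash (tokens at infraction height times the slash factor)
// minus whatever was already taken from unbonding delegations and
// redelegations that began after the infraction, floored at zero.
func slash(powerTokens, slashFactorPct, slashedFromUnbonding, slashedFromRedelegations int64) int64 {
	totalSlash := powerTokens * slashFactorPct / 100
	remaining := totalSlash - slashedFromUnbonding - slashedFromRedelegations
	if remaining < 0 {
		remaining = 0 // capped: never slash more than the computed total
	}
	return remaining
}

func main() {
	// 1,000,000 tokens bonded at the infraction, 5% slash factor,
	// 20,000 already slashed from unbonding entries, 10,000 from redelegations.
	fmt.Println(slash(1_000_000, 5, 20_000, 10_000)) // prints 20000
}
```

Note how the cap mirrors the spec: amounts already slashed from unbonding delegations and redelegations only ever reduce what is taken from the validator's remaining tokens, never below zero.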
+
+In the case of a slash due to any infraction that requires evidence to be submitted (for example double-signing), the slash
+occurs at the block where the evidence is included, not at the block where the infraction occurred.
+Put otherwise, validators are not slashed retroactively, only when they are caught.
+
+#### Slash Unbonding Delegation
+
+When a validator is slashed, so are those unbonding delegations from the validator that began unbonding
+after the time of the infraction. Every entry in every unbonding delegation from the validator
+is slashed by `slashFactor`. The amount slashed is calculated from the `InitialBalance` of the
+delegation and is capped to prevent a resulting negative balance. Completed (or mature) unbondings are not slashed.
+
+#### Slash Redelegation
+
+When a validator is slashed, so are all redelegations from the validator that began after the
+infraction. Redelegations are slashed by `slashFactor`.
+Redelegations that began before the infraction are not slashed.
+The amount slashed is calculated from the `InitialBalance` of the delegation and is capped to
+prevent a resulting negative balance.
+Mature redelegations (that have completed pseudo-unbonding) are not slashed.
+
+### How Shares are calculated
+
+At any given point in time, each validator has a number of tokens, `T`, and a number of shares issued, `S`.
+Each delegator, `i`, holds a number of shares, `S_i`.
+The number of tokens is the sum of all tokens delegated to the validator, plus the rewards, minus the slashes.
+
+The delegator is entitled to a portion of the underlying tokens proportional to their proportion of shares.
+So delegator `i` is entitled to `T * S_i / S` of the validator's tokens.
+
+When a delegator delegates new tokens to the validator, they receive a number of shares proportional to their contribution.
+So when delegator `j` delegates `T_j` tokens, they receive `S_j = S * T_j / T` shares.
+The total number of tokens is now `T + T_j`, and the total number of shares is `S + S_j`.
+`j`'s proportion of the shares is the same as their proportion of the total tokens contributed: `(S + S_j) / S = (T + T_j) / T`.
+
+A special case is the initial delegation, when `T = 0` and `S = 0`, so `T_j / T` is undefined.
+For the initial delegation, delegator `j` who delegates `T_j` tokens receives `S_j = T_j` shares.
+So a validator that hasn't received any rewards and has not been slashed will have `T = S`.
+
+## Messages
+
+In this section we describe the processing of the staking messages and the corresponding updates to the state. All created/modified state objects specified by each message are defined within the [state](#state) section.
+
+### MsgCreateValidator
+
+A validator is created using the `MsgCreateValidator` message.
+The validator must be created with an initial delegation from the operator.
+
+```protobuf
+  // CreateValidator defines a method for creating a new validator.
+  rpc CreateValidator(MsgCreateValidator) returns (MsgCreateValidatorResponse);
+```
+
+```protobuf
+// MsgCreateValidator defines a SDK message for creating a new validator.
+message MsgCreateValidator {
+  // NOTE(fdymylja): this is a particular case in which
+  // if validator_address == delegator_address then only one
+  // is expected to sign, otherwise both are.
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (cosmos.msg.v1.signer) = "validator_address";
+  option (amino.name) = "cosmos-sdk/MsgCreateValidator";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  Description description = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  CommissionRates commission = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  string min_self_delegation = 3 [
+    (cosmos_proto.scalar) = "cosmos.Int",
+    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
+    (gogoproto.nullable) = false
+  ];
+  string delegator_address = 4 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 5 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  google.protobuf.Any pubkey = 6 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"];
+  cosmos.base.v1beta1.Coin value = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message is expected to fail if:
+
+* another validator with this operator address is already registered
+* another validator with this pubkey is already registered
+* the initial self-delegation tokens are of a denom not specified as the bonding denom
+* the commission parameters are faulty, namely:
+  * `MaxRate` is either > 1 or < 0
+  * the initial `Rate` is either negative or > `MaxRate`
+  * the initial `MaxChangeRate` is either negative or > `MaxRate`
+* the description fields are too large
+
+This message creates and stores the `Validator` object at appropriate indexes.
+Additionally, a self-delegation `Delegation` is made with the initial delegation tokens.
+The validator always starts as unbonded but may be bonded
+in the first end-block.
+
+### MsgEditValidator
+
+The `Description` and `CommissionRate` of a validator can be updated using the
+`MsgEditValidator` message.
+
+```protobuf
+  // EditValidator defines a method for editing an existing validator.
+ rpc EditValidator(MsgEditValidator) returns (MsgEditValidatorResponse); +``` + +```protobuf +// MsgEditValidator defines a SDK message for editing an existing validator. +message MsgEditValidator { + option (cosmos.msg.v1.signer) = "validator_address"; + option (amino.name) = "cosmos-sdk/MsgEditValidator"; + + option (gogoproto.equal) = false; + option (gogoproto.goproto_getters) = false; + + Description description = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; + string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + + // We pass a reference to the new commission rate and min self delegation as + // it's not mandatory to update. If not updated, the deserialized rate will be + // zero with no way to distinguish if an update was intended. + // REF: #2373 + string commission_rate = 3 + [(cosmos_proto.scalar) = "cosmos.Dec", (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec"]; + string min_self_delegation = 4 + [(cosmos_proto.scalar) = "cosmos.Int", (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int"]; +} +``` + +This message is expected to fail if: + +* the initial `CommissionRate` is either negative or > `MaxRate` +* the `CommissionRate` has already been updated within the previous 24 hours +* the `CommissionRate` is > `MaxChangeRate` +* the description fields are too large + +This message stores the updated `Validator` object. + +### MsgDelegate + +Within this message the delegator provides coins, and in return receives +some amount of their validator's (newly created) delegator-shares that are +assigned to `Delegation.Shares`. + +```protobuf + // Delegate defines a method for performing a delegation of coins + // from a delegator to a validator. + rpc Delegate(MsgDelegate) returns (MsgDelegateResponse); +``` + +```protobuf +// MsgDelegate defines a SDK message for performing a delegation of coins +// from a delegator to a validator. 
+message MsgDelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgDelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the validator does not exist
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+* the exchange rate is invalid, meaning the validator has no tokens (due to slashing) but there are outstanding shares
+* the amount delegated is less than the minimum allowed delegation
+
+If an existing `Delegation` object for the provided addresses does not already
+exist then it is created as part of this message, otherwise the existing
+`Delegation` is updated to include the newly received shares.
+
+The delegator receives newly minted shares at the current exchange rate.
+The exchange rate is the number of existing shares in the validator divided by
+the number of currently delegated tokens.
+
+The validator is updated in the `ValidatorByPower` index, and the delegation is
+tracked in the validator object in the `Validators` index.
+
+It is possible to delegate to a jailed validator, the only difference being that it
+will not be added to the power index until it is unjailed.
+
+![Delegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/delegation_sequence.svg)
+
+### MsgUndelegate
+
+The `MsgUndelegate` message allows delegators to undelegate their tokens from a
+validator.
+
+```protobuf
+  // Undelegate defines a method for performing an undelegation from a
+  // delegate and a validator.
+  rpc Undelegate(MsgUndelegate) returns (MsgUndelegateResponse);
+```
+
+```protobuf
+// MsgUndelegate defines a SDK message for performing an undelegation from a
+// delegate and a validator.
+message MsgUndelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgUndelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message returns a response containing the completion time of the undelegation:
+
+```protobuf
+// MsgUndelegateResponse defines the Msg/Undelegate response type.
+message MsgUndelegateResponse {
+  google.protobuf.Timestamp completion_time = 1
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the validator doesn't exist
+* the delegation has fewer shares than the ones worth of `Amount`
+* the existing `UnbondingDelegation` has the maximum number of entries as defined by `params.MaxEntries`
+* the `Amount` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares and remove that amount of tokens held within the validator
+* with those removed tokens, if the validator is:
+  * `Bonded` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. Update pool shares to reduce BondedTokens and increase NotBondedTokens by the token worth of the shares.
+  * `Unbonding` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+  * `Unbonded` - then send the coins to the message `DelegatorAddr`
+* if there are no more `Shares` in the delegation, then the delegation object is removed from the store
+  * under this situation if the delegation is the validator's self-delegation then also jail the validator.
+
+![Unbond sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/unbond_sequence.svg)
+
+### MsgCancelUnbondingDelegation
+
+The `MsgCancelUnbondingDelegation` message allows delegators to cancel an `unbondingDelegation` entry and delegate back to a previous validator.
+
+```protobuf
+  // CancelUnbondingDelegation defines a method for performing canceling the unbonding delegation
+  // and delegate back to previous validator.
+  //
+  // Since: cosmos-sdk 0.46
+  rpc CancelUnbondingDelegation(MsgCancelUnbondingDelegation) returns (MsgCancelUnbondingDelegationResponse);
+```
+
+```protobuf
+// MsgCancelUnbondingDelegation defines the SDK message for performing a cancel unbonding delegation for delegator
+//
+// Since: cosmos-sdk 0.46
+message MsgCancelUnbondingDelegation {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgCancelUnbondingDelegation";
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // amount is always less than or equal to unbonding delegation entry balance
+  cosmos.base.v1beta1.Coin amount = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+  // creation_height is the height which the unbonding took place.
+  int64 creation_height = 4;
+}
+```
+
+This message is expected to fail if:
+
+* the `unbondingDelegation` entry is already processed
+* the `cancel unbonding delegation` amount is greater than the `unbondingDelegation` entry balance
+* the `cancel unbonding delegation` height doesn't exist in the `unbondingDelegationQueue` of the delegator
+
+When this message is processed the following actions occur:
+
+* if the remaining `unbondingDelegation` entry balance is zero, the `unbondingDelegation` entry is removed from the `unbondingDelegationQueue`
+* otherwise the `unbondingDelegationQueue` is updated with the new `unbondingDelegation` entry balance and initial balance
+* the validator's `DelegatorShares` and the delegation's `Shares` are both increased by the message `Amount`
+
+### MsgBeginRedelegate
+
+The redelegation command allows delegators to instantly switch validators. Once
+the unbonding period has passed, the redelegation is automatically completed in
+the EndBlocker.
+
+```protobuf
+  // BeginRedelegate defines a method for performing a redelegation
+  // of coins from a delegator and source validator to a destination validator.
+  rpc BeginRedelegate(MsgBeginRedelegate) returns (MsgBeginRedelegateResponse);
+```
+
+```protobuf
+// MsgBeginRedelegate defines a SDK message for performing a redelegation
+// of coins from a delegator and source validator to a destination validator.
+message MsgBeginRedelegate {
+  option (cosmos.msg.v1.signer) = "delegator_address";
+  option (amino.name) = "cosmos-sdk/MsgBeginRedelegate";
+
+  option (gogoproto.equal) = false;
+  option (gogoproto.goproto_getters) = false;
+
+  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  cosmos.base.v1beta1.Coin amount = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+This message returns a response containing the completion time of the redelegation:
+
+```protobuf
+
+// MsgBeginRedelegateResponse defines the Msg/BeginRedelegate response type.
+message MsgBeginRedelegateResponse {
+  google.protobuf.Timestamp completion_time = 1
+      [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
+}
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the source or destination validators don't exist
+* the delegation has fewer shares than the ones worth of `Amount`
+* the source validator has a receiving redelegation which is not matured (aka. the redelegation may be transitive)
+* the existing `Redelegation` has the maximum number of entries as defined by `params.MaxEntries`
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the source validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares and remove that amount of tokens held within the source validator
+* if the source validator is:
+  * `Bonded` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with a completion time a full unbonding period from the current time.
Update pool shares to reduce BondedTokens and increase NotBondedTokens by the token worth of the shares (this may be effectively reversed in the next step however).
+  * `Unbonding` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+  * `Unbonded` - no action required in this step
+* Delegate the token worth to the destination validator, possibly moving tokens back to the bonded state.
+* if there are no more `Shares` in the source delegation, then the source delegation object is removed from the store
+  * under this situation if the delegation is the validator's self-delegation then also jail the validator.
+
+![Begin redelegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/begin_redelegation_sequence.svg)
+
+### MsgUpdateParams
+
+The `MsgUpdateParams` message updates the staking module parameters.
+The params are updated through a governance proposal where the signer is the gov module account address.
+
+```protobuf
+// MsgUpdateParams is the Msg/UpdateParams request type.
+//
+// Since: cosmos-sdk 0.47
+message MsgUpdateParams {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/x/staking/MsgUpdateParams";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+  // params defines the x/staking parameters to update.
+  //
+  // NOTE: All parameters must be supplied.
+  Params params = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+};
+```
+
+The message handling can fail if:
+
+* the signer is not the authority defined in the staking keeper (usually the gov module account)
+
+## Begin-Block
+
+At each ABCI begin-block call, the historical info is stored and pruned
+according to the `HistoricalEntries` parameter.
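The store-and-prune cycle can be sketched as follows. This is a toy model using a plain map in place of the module's KVStore; the function and key names are illustrative, not the SDK's actual API.

```go
package main

import "fmt"

// pruneHistorical stores the current block's historical info and keeps only
// the most recent `entries` heights, deleting anything at
// height <= current - entries. If entries is 0, the begin-blocker is a
// no-op and nothing is stored at all.
func pruneHistorical(store map[int64]string, current, entries int64) {
	if entries == 0 {
		return
	}
	store[current] = fmt.Sprintf("header@%d", current)
	for h := range store {
		if h <= current-entries {
			delete(store, h)
		}
	}
}

func main() {
	store := map[int64]string{}
	for h := int64(1); h <= 10; h++ {
		pruneHistorical(store, h, 3)
	}
	fmt.Println(len(store)) // prints 3 (heights 8, 9 and 10 are retained)
}
```

In the steady state exactly one entry matures per block, matching the "single entry pruned per block" behavior described below; lowering the parameter mid-run would make one call delete several entries at once.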
+
+### Historical Info Tracking
+
+If the `HistoricalEntries` parameter is 0, then `BeginBlock` performs a no-op.
+
+Otherwise, the latest historical info is stored under the key `historicalInfoKey|height`, while any entries older than `height - HistoricalEntries` are deleted.
+In most cases, this results in a single entry being pruned per block.
+However, if the parameter `HistoricalEntries` has changed to a lower value there will be multiple entries in the store that must be pruned.
+
+## End-Block
+
+At each ABCI end-block call, the operations to update queues and validator set
+changes are executed.
+
+### Validator Set Changes
+
+The staking validator set is updated during this process by state transitions
+that run at the end of every block. As a part of this process any updated
+validators are also returned back to CometBFT for inclusion in the CometBFT
+validator set which is responsible for validating CometBFT messages at the
+consensus layer. The operations are as follows:
+
+* the new validator set is taken as the top `params.MaxValidators` number of
+  validators retrieved from the `ValidatorsByPower` index
+* the previous validator set is compared with the new validator set:
+  * missing validators begin unbonding and their `Tokens` are transferred from the
+    `BondedPool` to the `NotBondedPool` `ModuleAccount`
+  * new validators are instantly bonded and their `Tokens` are transferred from the
+    `NotBondedPool` to the `BondedPool` `ModuleAccount`
+
+In all cases, any validators leaving or entering the bonded validator set or
+changing balances and staying within the bonded validator set incur an update
+message reporting their new consensus power which is passed back to CometBFT.
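A minimal sketch of this set comparison (illustrative only; the real module iterates power-indexed store keys rather than sorting slices, and works with bech32 operator addresses):

```go
package main

import (
	"fmt"
	"sort"
)

type val struct {
	addr  string
	power int64
}

// updates compares the previous bonded set with the top-N candidates by
// power and returns the diff to report to the consensus engine: entering
// validators and power changes carry their new power, and leaving
// validators are reported with power 0 (which signals removal).
func updates(prev map[string]int64, candidates []val, maxValidators int) map[string]int64 {
	sort.Slice(candidates, func(i, j int) bool { return candidates[i].power > candidates[j].power })
	if len(candidates) > maxValidators {
		candidates = candidates[:maxValidators]
	}
	next := map[string]int64{}
	for _, v := range candidates {
		next[v.addr] = v.power
	}
	diff := map[string]int64{}
	for addr, p := range next {
		if prev[addr] != p { // newly bonded, or power changed
			diff[addr] = p
		}
	}
	for addr := range prev {
		if _, ok := next[addr]; !ok {
			diff[addr] = 0 // dropped out of the top N: begins unbonding
		}
	}
	return diff
}

func main() {
	prev := map[string]int64{"valA": 10, "valB": 5}
	d := updates(prev, []val{{"valA", 10}, {"valC", 8}, {"valB", 5}}, 2)
	fmt.Println(d["valC"], d["valB"]) // prints 8 0
}
```

In the example, valC enters the top two and is reported with power 8, valB is pushed out and reported with power 0, and valA is unchanged so no update is emitted for it.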
+
+The `LastTotalPower` and `LastValidatorsPower` hold the state of the total power
+and validator power from the end of the last block, and are used to check for
+changes that have occurred in `ValidatorsByPower` and the total new power, which
+is calculated during `EndBlock`.
+
+### Queues
+
+Within staking, certain state-transitions are not instantaneous but take place
+over a duration of time (typically the unbonding period). When these
+transitions mature, certain operations must take place in order to complete
+the state operation. This is achieved through the use of queues which are
+checked/processed at the end of each block.
+
+#### Unbonding Validators
+
+When a validator is kicked out of the bonded validator set (either through
+being jailed or not having sufficient bonded tokens), it begins the unbonding
+process, and all of its delegations begin unbonding as well (while still being
+delegated to this validator). At this point the validator is said to be an
+"unbonding validator", whereby it will mature to become an "unbonded validator"
+after the unbonding period has passed.
+
+Each block, the validator queue is checked for mature unbonding validators
+(namely those with a completion time `<=` current time and completion height `<=` current
+block height). At this point any mature validators which do not have any
+delegations remaining are deleted from state. For all other mature unbonding
+validators that still have remaining delegations, the `validator.Status` is
+switched from `types.Unbonding` to
+`types.Unbonded`.
+
+Unbonding operations can be put on hold by external modules via the `PutUnbondingOnHold(unbondingId)` method.
+As a result, an unbonding operation (e.g., an unbonding delegation) that is on hold cannot complete
+even if it reaches maturity.
For an unbonding operation with `unbondingId` to eventually complete +(after it reaches maturity), every call to `PutUnbondingOnHold(unbondingId)` must be matched +by a call to `UnbondingCanComplete(unbondingId)`. + +#### Unbonding Delegations + +Complete the unbonding of all mature `UnbondingDelegations.Entries` within the +`UnbondingDelegations` queue with the following procedure: + +* transfer the balance coins to the delegator's wallet address +* remove the mature entry from `UnbondingDelegation.Entries` +* remove the `UnbondingDelegation` object from the store if there are no + remaining entries. + +#### Redelegations + +Complete the unbonding of all mature `Redelegation.Entries` within the +`Redelegations` queue with the following procedure: + +* remove the mature entry from `Redelegation.Entries` +* remove the `Redelegation` object from the store if there are no + remaining entries. + +## Hooks + +Other modules may register operations to execute when a certain event has +occurred within staking. These events can be registered to execute either +right `Before` or `After` the staking event (as per the hook name). 
The
+following hooks can be registered with staking:
+
+* `AfterValidatorCreated(Context, ValAddress) error`
+  * called when a validator is created
+* `BeforeValidatorModified(Context, ValAddress) error`
+  * called when a validator's state is changed
+* `AfterValidatorRemoved(Context, ConsAddress, ValAddress) error`
+  * called when a validator is deleted
+* `AfterValidatorBonded(Context, ConsAddress, ValAddress) error`
+  * called when a validator is bonded
+* `AfterValidatorBeginUnbonding(Context, ConsAddress, ValAddress) error`
+  * called when a validator begins unbonding
+* `BeforeDelegationCreated(Context, AccAddress, ValAddress) error`
+  * called when a delegation is created
+* `BeforeDelegationSharesModified(Context, AccAddress, ValAddress) error`
+  * called when a delegation's shares are modified
+* `AfterDelegationModified(Context, AccAddress, ValAddress) error`
+  * called when a delegation is created or modified
+* `BeforeDelegationRemoved(Context, AccAddress, ValAddress) error`
+  * called when a delegation is removed
+* `AfterUnbondingInitiated(Context, UnbondingID)`
+  * called when an unbonding operation (validator unbonding, unbonding delegation, redelegation) was initiated
+
+## Events
+
+The staking module emits the following events:
+
+### EndBlocker
+
+| Type                   | Attribute Key          | Attribute Value             |
+| ---------------------- | ---------------------- | --------------------------- |
+| complete\_unbonding    | amount                 | `{totalUnbondingAmount}`    |
+| complete\_unbonding    | validator              | `{validatorAddress}`        |
+| complete\_unbonding    | delegator              | `{delegatorAddress}`        |
+| complete\_redelegation | amount                 | `{totalRedelegationAmount}` |
+| complete\_redelegation | source\_validator      | `{srcValidatorAddress}`     |
+| complete\_redelegation | destination\_validator | `{dstValidatorAddress}`     |
+| complete\_redelegation | delegator              | `{delegatorAddress}`        |
+
+## Msg's
+
+### MsgCreateValidator
+
+| Type | Attribute Key |
+| ----------------- | ------------- |
------------------ | +| create\_validator | validator | `{validatorAddress}` | +| create\_validator | amount | `{delegationAmount}` | +| message | module | staking | +| message | action | create\_validator | +| message | sender | `{senderAddress}` | + +### MsgEditValidator + +| Type | Attribute Key | Attribute Value | +| --------------- | --------------------- | ------------------- | +| edit\_validator | commission\_rate | `{commissionRate}` | +| edit\_validator | min\_self\_delegation | `{minSelfDelegation}` | +| message | module | staking | +| message | action | edit\_validator | +| message | sender | `{senderAddress}` | + +### MsgDelegate + +| Type | Attribute Key | Attribute Value | +| -------- | ------------- | ------------------ | +| delegate | validator | `{validatorAddress}` | +| delegate | amount | `{delegationAmount}` | +| message | module | staking | +| message | action | delegate | +| message | sender | `{senderAddress}` | + +### MsgUndelegate + +| Type | Attribute Key | Attribute Value | +| ------- | --------------------- | ------------------ | +| unbond | validator | `{validatorAddress}` | +| unbond | amount | `{unbondAmount}` | +| unbond | completion\_time \[0] | `{completionTime}` | +| message | module | staking | +| message | action | begin\_unbonding | +| message | sender | `{senderAddress}` | + +* \[0] Time is formatted in the RFC3339 standard + +### MsgCancelUnbondingDelegation + +| Type | Attribute Key | Attribute Value | +| ----------------------------- | ---------------- | --------------------------------- | +| cancel\_unbonding\_delegation | validator | `{validatorAddress}` | +| cancel\_unbonding\_delegation | delegator | `{delegatorAddress}` | +| cancel\_unbonding\_delegation | amount | `{cancelUnbondingDelegationAmount}` | +| cancel\_unbonding\_delegation | creation\_height | `{unbondingCreationHeight}` | +| message | module | staking | +| message | action | cancel\_unbond | +| message | sender | `{senderAddress}` | + +### 
MsgBeginRedelegate + +| Type | Attribute Key | Attribute Value | +| ---------- | ---------------------- | --------------------- | +| redelegate | source\_validator | `{srcValidatorAddress}` | +| redelegate | destination\_validator | `{dstValidatorAddress}` | +| redelegate | amount | `{unbondAmount}` | +| redelegate | completion\_time \[0] | `{completionTime}` | +| message | module | staking | +| message | action | begin\_redelegate | +| message | sender | `{senderAddress}` | + +* \[0] Time is formatted in the RFC3339 standard + +## Parameters + +The staking module contains the following parameters: + +| Key | Type | Example | +| ----------------- | ---------------- | ---------------------- | +| UnbondingTime | string (time ns) | "259200000000000" | +| MaxValidators | uint16 | 100 | +| KeyMaxEntries | uint16 | 7 | +| HistoricalEntries | uint16 | 3 | +| BondDenom | string | "stake" | +| MinCommissionRate | string | "0.000000000000000000" | + +## Client + +### CLI + +A user can query and interact with the `staking` module using the CLI. + +#### Query + +The `query` commands allows users to query `staking` state. + +```bash +simd query staking --help +``` + +##### delegation + +The `delegation` command allows users to query delegations for an individual delegator on an individual validator. + +Usage: + +```bash +simd query staking delegation [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +balance: + amount: "10000000000" + denom: stake +delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### delegations + +The `delegations` command allows users to query delegations for an individual delegator on all validators. 
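A delegation is tracked in `shares` rather than tokens; the token value is the delegator's proportional claim on the validator's bonded tokens (`tokens = shares * validator.tokens / validator.delegator_shares`). A minimal sketch of that conversion — the helper name is our own, and the figures are taken from the example outputs in this page:

```python
from decimal import Decimal

def shares_to_tokens(shares: str, validator_tokens: str, delegator_shares: str) -> Decimal:
    """Delegator's token claim: shares * validator.tokens / validator.delegator_shares."""
    return Decimal(shares) * Decimal(validator_tokens) / Decimal(delegator_shares)

# With no slashing, shares map 1:1 to tokens, matching the example output
# where 10000000000 shares back a 10000000000stake balance:
print(shares_to_tokens("10000000000", "32948270000", "32948270000"))
```

After a slashing event the validator's `tokens` shrink while `delegator_shares` stay fixed, so the same shares redeem for fewer tokens.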
+ +Usage: + +```bash +simd query staking delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +delegation_responses: +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1x20lytyf6zkcrv5edpkfkn8sz578qg5sqfyqnp +pagination: + next_key: null + total: "0" +``` + +##### delegations-to + +The `delegations-to` command allows users to query delegations on an individual validator. + +Usage: + +```bash +simd query staking delegations-to [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations-to cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +- balance: + amount: "504000000" + denom: stake + delegation: + delegator_address: cosmos1q2qwwynhv8kh3lu5fkeex4awau9x8fwt45f5cp + shares: "504000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "78125000000" + denom: uixo + delegation: + delegator_address: cosmos1qvppl3479hw4clahe0kwdlfvf8uvjtcd99m2ca + shares: "78125000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +pagination: + next_key: null + total: "0" +``` + +##### historical-info + +The `historical-info` command allows users to query historical information at given height. 
+ +Usage: + +```bash +simd query staking historical-info [height] [flags] +``` + +Example: + +```bash +simd query staking historical-info 10 +``` + +Example Output: + +```bash expandable +header: + app_hash: Lbx8cXpI868wz8sgp4qPYVrlaKjevR5WP/IjUxwp3oo= + chain_id: testnet + consensus_hash: BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8= + data_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + evidence_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + height: "10" + last_block_id: + hash: RFbkpu6pWfSThXxKKl6EZVDnBSm16+U0l0xVjTX08Fk= + part_set_header: + hash: vpIvXD4rxD5GM4MXGz0Sad9I7/iVYLzZsEU4BVgWIU= + total: 1 + last_commit_hash: Ne4uXyx4QtNp4Zx89kf9UK7oG9QVbdB6e7ZwZkhy8K0= + last_results_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + next_validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + proposer_address: mMEP2c2IRPLr99LedSRtBg9eONM= + time: "2021-10-01T06:00:49.785790894Z" + validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + version: + app: "0" + block: "11" +valset: +- commission: + commission_rates: + max_change_rate: "0.010000000000000000" + max_rate: "0.200000000000000000" + rate: "0.100000000000000000" + update_time: "2021-10-01T05:52:50.380144238Z" + consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8= + delegator_shares: "10000000.000000000000000000" + description: + details: "" + identity: "" + moniker: myvalidator + security_contact: "" + website: "" + jailed: false + min_self_delegation: "1" + operator_address: cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc + status: BOND_STATUS_BONDED + tokens: "10000000" + unbonding_height: "0" + unbonding_time: "1970-01-01T00:00:00Z" +``` + +##### params + +The `params` command allows users to query values set as staking parameters. 
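Note that the genesis `UnbondingTime` parameter is a duration in nanoseconds, while the CLI prints it in seconds. A quick conversion sanity check (illustrative Python; the helper name is our own, the values come from the Parameters table and the example output on this page):

```python
# UnbondingTime is stored as nanoseconds; CLI output prints seconds.
NS_PER_DAY = 24 * 60 * 60 * 10**9

def ns_to_days(ns: int) -> float:
    """Convert a duration in nanoseconds to days."""
    return ns / NS_PER_DAY

# "259200000000000" from the Parameters table is 3 days:
print(ns_to_days(259_200_000_000_000))  # -> 3.0
# "1814400s" from the example output below is 21 days:
print(1_814_400 / (24 * 60 * 60))       # -> 21.0
```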
+ +Usage: + +```bash +simd query staking params [flags] +``` + +Example: + +```bash +simd query staking params +``` + +Example Output: + +```bash +bond_denom: stake +historical_entries: 10000 +max_entries: 7 +max_validators: 50 +unbonding_time: 1814400s +``` + +##### pool + +The `pool` command allows users to query values for amounts stored in the staking pool. + +Usage: + +```bash +simd q staking pool [flags] +``` + +Example: + +```bash +simd q staking pool +``` + +Example Output: + +```bash +bonded_tokens: "10000000" +not_bonded_tokens: "0" +``` + +##### redelegation + +The `redelegation` command allows users to query a redelegation record based on delegator and a source and destination validator address. + +Usage: + +```bash +simd query staking redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +pagination: null +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: null + validator_dst_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm + validator_src_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm +``` + +##### redelegations + +The `redelegations` command allows users to query all redelegation records for an individual delegator. 
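Some example outputs render `creation_height` as a float in scientific notation (e.g. `2.382847e+06`); the underlying value is simply the integer block height at which the entry was created. A quick check (illustrative):

```python
# The YAML printer shows creation_height in scientific notation;
# it corresponds to an integer block height.
height = int(2.382847e+06)
print(height)  # -> 2382847
```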
+
+Usage:
+
+```bash
+simd query staking redelegations [delegator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking redelegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+```
+
+Example Output:
+
+```bash expandable
+pagination:
+  next_key: null
+  total: "0"
+redelegation_responses:
+- entries:
+  - balance: "50000000"
+    redelegation_entry:
+      completion_time: "2021-10-24T20:33:21.960084845Z"
+      creation_height: 2.382847e+06
+      initial_balance: "50000000"
+      shares_dst: "50000000.000000000000000000"
+  - balance: "5000000000"
+    redelegation_entry:
+      completion_time: "2021-10-25T21:33:54.446846862Z"
+      creation_height: 2.397271e+06
+      initial_balance: "5000000000"
+      shares_dst: "5000000000.000000000000000000"
+  redelegation:
+    delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+    entries: null
+    validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm
+    validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp
+- entries:
+  - balance: "562770000000"
+    redelegation_entry:
+      completion_time: "2021-10-25T21:42:07.336911677Z"
+      creation_height: 2.39735e+06
+      initial_balance: "562770000000"
+      shares_dst: "562770000000.000000000000000000"
+  redelegation:
+    delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+    entries: null
+    validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm
+    validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp
+```
+
+##### redelegations-from
+
+The `redelegations-from` command allows users to query delegations that are redelegating *from* a validator.
+ +Usage: + +```bash +simd query staking redelegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegations-from cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1pm6e78p4pgn0da365plzl4t56pxy8hwtqp2mph + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +- entries: + - balance: "221000000" + redelegation_entry: + completion_time: "2021-10-05T21:05:45.669420544Z" + creation_height: 2.120693e+06 + initial_balance: "221000000" + shares_dst: "221000000.000000000000000000" + redelegation: + delegator_address: cosmos1zqv8qxy2zgn4c58fz8jt8jmhs3d0attcussrf6 + entries: null + validator_dst_address: cosmosvaloper10mseqwnwtjaqfrwwp2nyrruwmjp6u5jhah4c3y + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +##### unbonding-delegation + +The `unbonding-delegation` command allows users to query unbonding delegations for an individual delegator on an individual validator. 
+ +Usage: + +```bash +simd query staking unbonding-delegation [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +entries: +- balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" +validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### unbonding-delegations + +The `unbonding-delegations` command allows users to query all unbonding-delegations records for one delegator. + +Usage: + +```bash +simd query staking unbonding-delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: + - balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" + validator_address: cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa + +``` + +##### unbonding-delegations-from + +The `unbonding-delegations-from` command allows users to query delegations that are unbonding *from* a validator. 
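Each unbonding entry's `completion_time` is the block time at which unbonding began plus the chain's `UnbondingTime` parameter; the entry is only released once the current block time passes it. A rough sketch of that rule (illustrative Python; the constant assumes the `1814400s` value shown in the params example, and the helper names are our own):

```python
from datetime import datetime, timedelta, timezone

# 21 days, matching the "unbonding_time: 1814400s" params example on this page.
UNBONDING_TIME = timedelta(seconds=1_814_400)

def completion_time(created_at: datetime) -> datetime:
    """Block time at creation plus the UnbondingTime parameter."""
    return created_at + UNBONDING_TIME

def is_mature(entry_completion: datetime, now: datetime) -> bool:
    """An entry is released once block time reaches its completion_time."""
    return now >= entry_completion

created = datetime(2021, 10, 12, tzinfo=timezone.utc)
print(completion_time(created))  # -> 2021-11-02 00:00:00+00:00
```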
+ +Usage: + +```bash +simd query staking unbonding-delegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations-from cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1qqq9txnw4c77sdvzx0tkedsafl5s3vk7hn53fn + entries: + - balance: "150000000" + completion_time: "2021-11-01T21:41:13.098141574Z" + creation_height: "46823" + initial_balance: "150000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- delegator_address: cosmos1peteje73eklqau66mr7h7rmewmt2vt99y24f5z + entries: + - balance: "24000000" + completion_time: "2021-10-31T02:57:18.192280361Z" + creation_height: "21516" + initial_balance: "24000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +##### validator + +The `validator` command allows users to query details about an individual validator. + +Usage: + +```bash +simd query staking validator [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking validator cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash expandable +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. 
+ identity: 51468B615127273A + moniker: Witval + security_contact: "" + website: "" +jailed: false +min_self_delegation: "1" +operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +status: BOND_STATUS_BONDED +tokens: "32948270000" +unbonding_height: "0" +unbonding_time: "1970-01-01T00:00:00Z" +``` + +##### validators + +The `validators` command allows users to query details about all validators on a network. + +Usage: + +```bash +simd query staking validators [flags] +``` + +Example: + +```bash +simd query staking validators +``` + +Example Output: + +```bash expandable +pagination: + next_key: FPTi7TKAjN63QqZh+BaXn6gBmD5/ + total: "0" +validators: +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. 
+  identity: 51468B615127273A
+  moniker: Witval
+  security_contact: ""
+  website: ""
+jailed: false
+min_self_delegation: "1"
+operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+status: BOND_STATUS_BONDED
+tokens: "32948270000"
+unbonding_height: "0"
+unbonding_time: "1970-01-01T00:00:00Z"
+- commission:
+    commission_rates:
+      max_change_rate: "0.100000000000000000"
+      max_rate: "0.200000000000000000"
+      rate: "0.050000000000000000"
+    update_time: "2021-10-04T18:02:21.446645619Z"
+  consensus_pubkey:
+    '@type': /cosmos.crypto.ed25519.PubKey
+    key: GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=
+  delegator_shares: "559343421.000000000000000000"
+  description:
+    details: Noderunners is a professional validator in POS networks. We have a huge
+      node running experience, reliable soft and hardware. Our commissions are always
+      low, our support to delegators is always full. Stake with us and start receiving
+      your Cosmos rewards now!
+    identity: 812E82D12FEA3493
+    moniker: Noderunners
+    security_contact: info@noderunners.biz
+    website: http://noderunners.biz
+  jailed: false
+  min_self_delegation: "1"
+  operator_address: cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7
+  status: BOND_STATUS_BONDED
+  tokens: "559343421"
+  unbonding_height: "0"
+  unbonding_time: "1970-01-01T00:00:00Z"
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `staking` module.
+
+```bash
+simd tx staking --help
+```
+
+##### create-validator
+
+The command `create-validator` allows users to create a new validator, initialized with a self-delegation to it.
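The commission fields in the example `validator.json` below must satisfy the module's invariants: all rates are between 0 and 1, `commission-rate` cannot exceed `commission-max-rate`, and `commission-max-change-rate` cannot exceed `commission-max-rate`. A minimal pre-flight check (illustrative Python sketch of those invariants; the helper name is our own):

```python
from decimal import Decimal

def check_commission(rate: str, max_rate: str, max_change_rate: str) -> bool:
    """Mirror the commission invariants enforced when creating a validator:
    0 <= rate <= max_rate <= 1 and 0 <= max_change_rate <= max_rate."""
    r, mr, mc = Decimal(rate), Decimal(max_rate), Decimal(max_change_rate)
    return Decimal(0) <= r <= mr <= Decimal(1) and Decimal(0) <= mc <= mr

# Values from the example validator.json below:
print(check_commission("0.10", "0.20", "0.01"))  # -> True
```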
+
+Usage:
+
+```bash
+simd tx staking create-validator [path/to/validator.json] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking create-validator /path/to/validator.json \
+  --chain-id="name_of_chain_id" \
+  --gas="auto" \
+  --gas-adjustment="1.2" \
+  --gas-prices="0.025stake" \
+  --from=mykey
+```
+
+where `validator.json` contains:
+
+```json expandable
+{
+  "pubkey": {
+    "@type": "/cosmos.crypto.ed25519.PubKey",
+    "key": "BnbwFpeONLqvWqJb3qaUbL5aoIcW3fSuAp9nT3z5f20="
+  },
+  "amount": "1000000stake",
+  "moniker": "my-moniker",
+  "website": "https://myweb.site",
+  "security": "security-contact@gmail.com",
+  "details": "description of your validator",
+  "commission-rate": "0.10",
+  "commission-max-rate": "0.20",
+  "commission-max-change-rate": "0.01",
+  "min-self-delegation": "1"
+}
+```
+
+and the pubkey can be obtained using the `simd tendermint show-validator` command.
+
+##### delegate
+
+The command `delegate` allows users to delegate liquid tokens to a validator.
+
+Usage:
+
+```bash
+simd tx staking delegate [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking delegate cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 1000stake --from mykey
+```
+
+##### edit-validator
+
+The command `edit-validator` allows users to edit an existing validator account.
+
+Usage:
+
+```bash
+simd tx staking edit-validator [flags]
+```
+
+Example:
+
+```bash
+simd tx staking edit-validator --moniker "new_moniker_name" --website "new_website_url" --from mykey
+```
+
+##### redelegate
+
+The command `redelegate` allows users to redelegate illiquid tokens from one validator to another.
+
+Usage:
+
+```bash
+simd tx staking redelegate [src-validator-addr] [dst-validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking redelegate cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 100stake --from mykey
+```
+
+##### unbond
+
+The command `unbond` allows users to unbond shares from a validator.
+
+Usage:
+
+```bash
+simd tx staking unbond [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake --from mykey
+```
+
+##### cancel-unbond
+
+The command `cancel-unbond` allows users to cancel an unbonding delegation entry and delegate back to the original validator.
+
+Usage:
+
+```bash
+simd tx staking cancel-unbond [validator-addr] [amount] [creation-height]
+```
+
+Example:
+
+```bash
+simd tx staking cancel-unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake 123123 --from mykey
+```
+
+### gRPC
+
+A user can query the `staking` module using gRPC endpoints.
+
+#### Validators
+
+The `Validators` endpoint queries all validators that match the given status.
+ +```bash +cosmos.staking.v1beta1.Query/Validators +``` + +Example: + +```bash +grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Validators +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", + "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="}, + "status": "BOND_STATUS_BONDED", + "tokens": "10000000", + "delegatorShares": "10000000000000000000000000", + "description": { + "moniker": "myvalidator" + }, + "unbondingTime": "1970-01-01T00:00:00Z", + "commission": { + "commissionRates": { + "rate": "100000000000000000", + "maxRate": "200000000000000000", + "maxChangeRate": "10000000000000000" + }, + "updateTime": "2021-10-01T05:52:50.380144238Z" + }, + "minSelfDelegation": "1" + } + ], + "pagination": { + "total": "1" + } +} +``` + +#### Validator + +The `Validator` endpoint queries validator information for given validator address. 
+
+```bash
+cosmos.staking.v1beta1.Query/Validator
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Validator
+```
+
+Example Output:
+
+```bash expandable
+{
+  "validator": {
+    "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+    "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="},
+    "status": "BOND_STATUS_BONDED",
+    "tokens": "10000000",
+    "delegatorShares": "10000000000000000000000000",
+    "description": {
+      "moniker": "myvalidator"
+    },
+    "unbondingTime": "1970-01-01T00:00:00Z",
+    "commission": {
+      "commissionRates": {
+        "rate": "100000000000000000",
+        "maxRate": "200000000000000000",
+        "maxChangeRate": "10000000000000000"
+      },
+      "updateTime": "2021-10-01T05:52:50.380144238Z"
+    },
+    "minSelfDelegation": "1"
+  }
+}
+```
+
+#### ValidatorDelegations
+
+The `ValidatorDelegations` endpoint queries delegate information for given validator.
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorDelegations
+```
+
+Example Output:
+
+```bash expandable
+{
+  "delegationResponses": [
+    {
+      "delegation": {
+        "delegatorAddress": "cosmos1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgy3ua5t",
+        "validatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+        "shares": "10000000000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+#### ValidatorUnbondingDelegations
+
+The `ValidatorUnbondingDelegations` endpoint queries unbonding delegations for a given validator.
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example Output:
+
+```bash expandable
+{
+  "unbonding_responses": [
+    {
+      "delegator_address": "cosmos1z3pzzw84d6xn00pw9dy3yapqypfde7vg6965fy",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "entries": [
+        {
+          "creation_height": "25325",
+          "completion_time": "2021-10-31T09:24:36.797320636Z",
+          "initial_balance": "20000000",
+          "balance": "20000000"
+        }
+      ]
+    },
+    {
+      "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "entries": [
+        {
+          "creation_height": "13100",
+          "completion_time": "2021-10-30T12:53:02.272266791Z",
+          "initial_balance": "1000000",
+          "balance": "1000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "8"
+  }
+}
+```
+
+#### Delegation
+
+The `Delegation` endpoint queries delegate information for a given validator-delegator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example Output:
+
+```bash expandable
+{
+  "delegation_response": {
+    "delegation": {
+      "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "shares": "25083119936.000000000000000000"
+    },
+    "balance": {
+      "denom": "stake",
+      "amount": "25083119936"
+    }
+  }
+}
+```
+
+#### UnbondingDelegation
+
+The `UnbondingDelegation` endpoint queries unbonding information for a given validator-delegator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example Output:
+
+```bash expandable
+{
+  "unbond": {
+    "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+    "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+    "entries": [
+      {
+        "creation_height": "136984",
+        "completion_time": "2021-11-08T05:38:47.505593891Z",
+        "initial_balance": "400000000",
+        "balance": "400000000"
+      },
+      {
+        "creation_height": "137005",
+        "completion_time": "2021-11-08T05:40:53.526196312Z",
+        "initial_balance": "385000000",
+        "balance": "385000000"
+      }
+    ]
+  }
+}
+```
+
+#### DelegatorDelegations
+
+The `DelegatorDelegations` endpoint queries all delegations of a given delegator address.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example Output:
+
+```bash
+{
+  "delegation_responses": [
+    {"delegation":{"delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77","validator_address":"cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8","shares":"25083339023.000000000000000000"},"balance":{"denom":"stake","amount":"25083339023"}}
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+#### DelegatorUnbondingDelegations
+
+The `DelegatorUnbondingDelegations` endpoint queries all unbonding delegations of a given delegator address.
+ +```bash +cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", + "validator_address": "cosmosvaloper1sjllsnramtg3ewxqwwrwjxfgc4n4ef9uxyejze", + "entries": [ + { + "creation_height": "136984", + "completion_time": "2021-11-08T05:38:47.505593891Z", + "initial_balance": "400000000", + "balance": "400000000" + }, + { + "creation_height": "137005", + "completion_time": "2021-11-08T05:40:53.526196312Z", + "initial_balance": "385000000", + "balance": "385000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### Redelegations + +The `Redelegations` endpoint queries redelegations of given address. + +```bash +cosmos.staking.v1beta1.Query/Redelegations +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", "src_validator_addr" : "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", "dst_validator_addr" : "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/Redelegations +``` + +Example Output: + +```bash expandable +{ + "redelegation_responses": [ + { + "redelegation": { + "delegator_address": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", + "validator_src_address": "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", + "validator_dst_address": "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse", + "entries": null + }, + "entries": [ + { + "redelegation_entry": { + "creation_height": 135932, + "completion_time": "2021-11-08T03:52:55.299147901Z", + "initial_balance": "2900000", + "shares_dst": "2900000.000000000000000000" + }, + "balance": "2900000" + } + 
] + } + ], + "pagination": null +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` endpoint queries all validators information for given delegator. + +```bash +cosmos.staking.v1beta1.Query/DelegatorValidators +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidators +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "347260647559", + "delegator_shares": "347260647559.000000000000000000", + "description": { + "moniker": "BouBouNode", + "identity": "", + "website": "https://boubounode.com", + "security_contact": "", + "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.061000000000000000", + "max_rate": "0.300000000000000000", + "max_change_rate": "0.150000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidator + +The `DelegatorValidator` endpoint queries validator information for given delegator validator + +```bash +cosmos.staking.v1beta1.Query/DelegatorValidator +``` + +Example: + +```bash +grpcurl -plaintext \ +-d '{"delegator_addr": "cosmos1eh5mwu044gd5ntkkc2xgfg8247mgc56f3n8rr7", "validator_addr": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8"}' \ +localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidator +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "347262754841", + "delegator_shares": "347262754841.000000000000000000", + "description": { + "moniker": "BouBouNode", + "identity": "", + "website": "https://boubounode.com", + "security_contact": "", + "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.061000000000000000", + "max_rate": "0.300000000000000000", + "max_change_rate": "0.150000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### HistoricalInfo + +```bash +cosmos.staking.v1beta1.Query/HistoricalInfo +``` + +Example: + +```bash +grpcurl -plaintext -d '{"height" : 1}' localhost:9090 cosmos.staking.v1beta1.Query/HistoricalInfo +``` + +Example Output: + +```bash expandable +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "simd-1", + "height": "140142", + "time": "2021-10-11T10:56:29.720079569Z", + "last_block_id": { + "hash": "9gri/4LLJUBFqioQ3NzZIP9/7YHR9QqaM6B2aJNQA7o=", + "part_set_header": { + "total": 1, + "hash": "Hk1+C864uQkl9+I6Zn7IurBZBKUevqlVtU7VqaZl1tc=" + } + }, + "last_commit_hash": "VxrcS27GtvGruS3I9+AlpT7udxIT1F0OrRklrVFSSKc=", + "data_hash": "80BjOrqNYUOkTnmgWyz9AQ8n7SoEmPVi4QmAe8RbQBY=", + "validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", + "next_validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "ZZaxnSY3E6Ex5Bvkm+RigYCK82g8SSUL53NymPITeOE=", + "last_results_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "aH6dO428B+ItuoqPq70efFHrSMY=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1426045203613", + "delegator_shares": "1426045203613.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website": 
"https://sg-1.online",
+ "security_contact": "",
+ "details": "SG-1 - your favorite validator on Witval. We offer 100% Soft Slash protection."
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.037500000000000000",
+ "max_rate": "0.200000000000000000",
+ "max_change_rate": "0.030000000000000000"
+ },
+ "update_time": "2021-10-01T15:00:00Z"
+ },
+ "min_self_delegation": "1"
+ }
+ ]
+ }
+}
+
+```
+
+#### Pool
+
+The `Pool` endpoint queries the pool information.
+
+```bash
+cosmos.staking.v1beta1.Query/Pool
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Pool
+```
+
+Example Output:
+
+```bash
+{
+ "pool": {
+ "not_bonded_tokens": "369054400189",
+ "bonded_tokens": "15657192425623"
+ }
+}
+```
+
+#### Params
+
+The `Params` endpoint queries the staking module parameters.
+
+```bash
+cosmos.staking.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+ "params": {
+ "unbondingTime": "1814400s",
+ "maxValidators": 100,
+ "maxEntries": 7,
+ "historicalEntries": 10000,
+ "bondDenom": "stake"
+ }
+}
+```
+
+### REST
+
+A user can query the `staking` module using REST endpoints.
+
+#### DelegatorDelegations
+
+The `DelegatorDelegations` REST endpoint queries all delegations of a given delegator address.
+ +```bash +/cosmos/staking/v1beta1/delegations/{delegatorAddr} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/delegations/cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_responses": [ + { + "delegation": { + "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", + "validator_address": "cosmosvaloper1quqxfrxkycr0uzt4yk0d57tcq3zk7srm7sm6r8", + "shares": "256250000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "256250000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", + "validator_address": "cosmosvaloper194v8uwee2fvs2s8fa5k7j03ktwc87h5ym39jfv", + "shares": "255150000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "255150000" + } + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +#### Redelegations + +The `Redelegations` REST endpoint queries redelegations of given address. 
+ +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/redelegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e/redelegations?srcValidatorAddr=cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf&dstValidatorAddr=cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "redelegation_responses": [ + { + "redelegation": { + "delegator_address": "cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e", + "validator_src_address": "cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf", + "validator_dst_address": "cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4", + "entries": null + }, + "entries": [ + { + "redelegation_entry": { + "creation_height": 151523, + "completion_time": "2021-11-09T06:03:25.640682116Z", + "initial_balance": "200000000", + "shares_dst": "200000000.000000000000000000" + }, + "balance": "200000000" + } + ] + } + ], + "pagination": null +} +``` + +#### DelegatorUnbondingDelegations + +The `DelegatorUnbondingDelegations` REST endpoint queries all unbonding delegations of a given delegator address. 
+ +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/unbonding_delegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll/unbonding_delegations" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll", + "validator_address": "cosmosvaloper1e7mvqlz50ch6gw4yjfemsc069wfre4qwmw53kq", + "entries": [ + { + "creation_height": "2442278", + "completion_time": "2021-10-12T10:59:03.797335857Z", + "initial_balance": "50000000000", + "balance": "50000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidators + +The `DelegatorValidators` REST endpoint queries all validators information for given delegator address. + +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "21592843799", + "delegator_shares": "21592843799.000000000000000000", + "description": { + "moniker": "jabbey", + "identity": "", + "website": "https://twitter.com/JoeAbbey", + "security_contact": "", + "details": "just another dad in the cosmos" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.100000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + 
}, + "update_time": "2021-10-09T19:03:54.984821705Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +#### DelegatorValidator + +The `DelegatorValidator` REST endpoint queries validator information for given delegator validator pair. + +```bash +/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators/{validatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators/cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "21592843799", + "delegator_shares": "21592843799.000000000000000000", + "description": { + "moniker": "jabbey", + "identity": "", + "website": "https://twitter.com/JoeAbbey", + "security_contact": "", + "details": "just another dad in the cosmos" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.100000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-09T19:03:54.984821705Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### HistoricalInfo + +The `HistoricalInfo` REST endpoint queries the historical information for given height. 
+ +```bash +/cosmos/staking/v1beta1/historical_info/{height} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/historical_info/153332" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "cosmos-1", + "height": "153332", + "time": "2021-10-12T09:05:35.062230221Z", + "last_block_id": { + "hash": "NX8HevR5khb7H6NGKva+jVz7cyf0skF1CrcY9A0s+d8=", + "part_set_header": { + "total": 1, + "hash": "zLQ2FiKM5tooL3BInt+VVfgzjlBXfq0Hc8Iux/xrhdg=" + } + }, + "last_commit_hash": "P6IJrK8vSqU3dGEyRHnAFocoDGja0bn9euLuy09s350=", + "data_hash": "eUd+6acHWrNXYju8Js449RJ99lOYOs16KpqQl4SMrEM=", + "validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "next_validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "fuELArKRK+CptnZ8tu54h6xEleSWenHNmqC84W866fU=", + "last_results_hash": "p/BPexV4LxAzlVcPRvW+lomgXb6Yze8YLIQUo/4Kdgc=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "G0MeY8xQx7ooOsni8KE/3R/Ib3Q=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1416521659632", + "delegator_shares": "1416521659632.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website": "https://sg-1.online", + "security_contact": "", + "details": "SG-1 - your favorite validator on cosmos. We offer 100% Soft Slash protection." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.037500000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.030000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "uExZyjNLtr2+FFIhNDAMcQ8+yTrqE7ygYTsI7khkA5Y=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1348298958808", + "delegator_shares": "1348298958808.000000000000000000", + "description": { + "moniker": "Cosmostation", + "identity": "AE4C403A6E7AA1AC", + "website": "https://www.cosmostation.io", + "security_contact": "admin@stamper.network", + "details": "Cosmostation validator node. Delegate your tokens and Start Earning Staking Rewards" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "1.000000000000000000", + "max_change_rate": "0.200000000000000000" + }, + "update_time": "2021-10-01T15:06:38.821314287Z" + }, + "min_self_delegation": "1" + } + ] + } +} +``` + +#### Parameters + +The `Parameters` REST endpoint queries the staking parameters. + +```bash +/cosmos/staking/v1beta1/params +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/params" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "params": { + "unbonding_time": "2419200s", + "max_validators": 100, + "max_entries": 7, + "historical_entries": 10000, + "bond_denom": "stake" + } +} +``` + +#### Pool + +The `Pool` REST endpoint queries the pool information. 
+ +```bash +/cosmos/staking/v1beta1/pool +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/pool" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "pool": { + "not_bonded_tokens": "432805737458", + "bonded_tokens": "15783637712645" + } +} +``` + +#### Validators + +The `Validators` REST endpoint queries all validators that match the given status. + +```bash +/cosmos/staking/v1beta1/validators +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validators": [ + { + "operator_address": "cosmosvaloper1q3jsx9dpfhtyqqgetwpe5tmk8f0ms5qywje8tw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "N7BPyek2aKuNZ0N/8YsrqSDhGZmgVaYUBuddY8pwKaE=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "383301887799", + "delegator_shares": "383301887799.000000000000000000", + "description": { + "moniker": "SmartNodes", + "identity": "D372724899D1EDC8", + "website": "https://smartnodes.co", + "security_contact": "", + "details": "Earn Rewards with Crypto Staking & Node Deployment" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-01T15:51:31.596618510Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=" + }, + "jailed": false, + "status": "BOND_STATUS_UNBONDING", + "tokens": "1017819654", + "delegator_shares": "1017819654.000000000000000000", + "description": { + "moniker": "Noderunners", + "identity": "812E82D12FEA3493", + "website": 
"http://noderunners.biz", + "security_contact": "info@noderunners.biz", + "details": "Noderunners is a professional validator in POS networks. We have a huge node running experience, reliable soft and hardware. Our commissions are always low, our support to delegators is always full. Stake with us and start receiving your cosmos rewards now!" + }, + "unbonding_height": "147302", + "unbonding_time": "2021-11-08T22:58:53.718662452Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.100000000000000000" + }, + "update_time": "2021-10-04T18:02:21.446645619Z" + }, + "min_self_delegation": "1" + } + ], + "pagination": { + "next_key": "FONDBFkE4tEEf7yxWWKOD49jC2NK", + "total": "2" + } +} +``` + +#### Validator + +The `Validator` REST endpoint queries validator information for given validator address. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "validator": { + "operator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "33027900000", + "delegator_shares": "33027900000.000000000000000000", + "description": { + "moniker": "Witval", + "identity": "51468B615127273A", + "website": "", + "security_contact": "", + "details": "Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem." 
+ }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.020000000000000000" + }, + "update_time": "2021-10-01T19:24:52.663191049Z" + }, + "min_self_delegation": "1" + } +} +``` + +#### ValidatorDelegations + +The `ValidatorDelegations` REST endpoint queries delegate information for given validator. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_responses": [ + { + "delegation": { + "delegator_address": "cosmos190g5j8aszqhvtg7cprmev8xcxs6csra7xnk3n3", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "31000000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "31000000000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1ddle9tczl87gsvmeva3c48nenyng4n56qwq4ee", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "628470000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "628470000" + } + }, + { + "delegation": { + "delegator_address": "cosmos10fdvkczl76m040smd33lh9xn9j0cf26kk4s2nw", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "838120000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "838120000" + } + }, + { + "delegation": { + "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "500000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "500000000" + } + }, + { + "delegation": { + 
"delegator_address": "cosmos16msryt3fqlxtvsy8u5ay7wv2p8mglfg9hrek2e", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "61310000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "61310000" + } + } + ], + "pagination": { + "next_key": null, + "total": "5" + } +} +``` + +#### Delegation + +The `Delegation` REST endpoint queries delegate information for given validator delegator pair. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr} +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations/cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "delegation_response": { + "delegation": { + "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", + "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", + "shares": "500000000.000000000000000000" + }, + "balance": { + "denom": "stake", + "amount": "500000000" + } + } +} +``` + +#### UnbondingDelegation + +The `UnbondingDelegation` REST endpoint queries unbonding information for given validator delegator pair. 
+ +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}/unbonding_delegation +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/delegations/cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm/unbonding_delegation" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbond": { + "delegator_address": "cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "153687", + "completion_time": "2021-11-09T09:41:18.352401903Z", + "initial_balance": "525111", + "balance": "525111" + } + ] + } +} +``` + +#### ValidatorUnbondingDelegations + +The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator. + +```bash +/cosmos/staking/v1beta1/validators/{validatorAddr}/unbonding_delegations +``` + +Example: + +```bash +curl -X GET \ +"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/unbonding_delegations" \ +-H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "unbonding_responses": [ + { + "delegator_address": "cosmos1q9snn84jfrd9ge8t46kdcggpe58dua82vnj7uy", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "90998", + "completion_time": "2021-11-05T00:14:37.005841058Z", + "initial_balance": "24000000", + "balance": "24000000" + } + ] + }, + { + "delegator_address": "cosmos1qf36e6wmq9h4twhdvs6pyq9qcaeu7ye0s3dqq2", + "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", + "entries": [ + { + "creation_height": "47478", + "completion_time": "2021-11-01T22:47:26.714116854Z", + "initial_balance": "8000000", + "balance": "8000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} 
+``` diff --git a/docs/sdk/v0.53/documentation/module-system/structure.mdx b/docs/sdk/v0.53/documentation/module-system/structure.mdx new file mode 100644 index 00000000..da61fc14 --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/structure.mdx @@ -0,0 +1,93 @@ +--- +title: Recommended Folder Structure +--- + +## Synopsis + +This document outlines the recommended structure of Cosmos SDK modules. These ideas are meant to be applied as suggestions. Application developers are encouraged to improve upon and contribute to module structure and development design. + +## Structure + +A typical Cosmos SDK module can be structured as follows: + +```shell +proto +└── {project_name} +    └── {module_name} +    └── {proto_version} +       ├── {module_name}.proto +       ├── event.proto +       ├── genesis.proto +       ├── query.proto +       └── tx.proto +``` + +- `{module_name}.proto`: The module's common message type definitions. +- `event.proto`: The module's message type definitions related to events. +- `genesis.proto`: The module's message type definitions related to genesis state. +- `query.proto`: The module's Query service and related message type definitions. +- `tx.proto`: The module's Msg service and related message type definitions. 
+ +```shell expandable +x/{module_name} +├── client +│   ├── cli +│   │ ├── query.go +│   │   └── tx.go +│   └── testutil +│   ├── cli_test.go +│   └── suite.go +├── exported +│   └── exported.go +├── keeper +│   ├── genesis.go +│   ├── grpc_query.go +│   ├── hooks.go +│   ├── invariants.go +│   ├── keeper.go +│   ├── keys.go +│   ├── msg_server.go +│   └── querier.go +├── module +│   └── module.go +│   └── abci.go +│   └── autocli.go +├── simulation +│   ├── decoder.go +│   ├── genesis.go +│   ├── operations.go +│   └── params.go +├── {module_name}.pb.go +├── codec.go +├── errors.go +├── events.go +├── events.pb.go +├── expected_keepers.go +├── genesis.go +├── genesis.pb.go +├── keys.go +├── msgs.go +├── params.go +├── query.pb.go +├── tx.pb.go +└── README.md +``` + +- `client/`: The module's CLI client functionality implementation and the module's CLI testing suite. +- `exported/`: The module's exported types - typically interface types. If a module relies on keepers from another module, it is expected to receive the keepers as interface contracts through the `expected_keepers.go` file (see below) in order to avoid a direct dependency on the module implementing the keepers. However, these interface contracts can define methods that operate on and/or return types that are specific to the module that is implementing the keepers and this is where `exported/` comes into play. The interface types that are defined in `exported/` use canonical types, allowing for the module to receive the keepers as interface contracts through the `expected_keepers.go` file. This pattern allows for code to remain DRY and also alleviates import cycle chaos. +- `keeper/`: The module's `Keeper` and `MsgServer` implementation. +- `module/`: The module's `AppModule` and `AppModuleBasic` implementation. + - `abci.go`: The module's `BeginBlocker` and `EndBlocker` implementations (this file is only required if `BeginBlocker` and/or `EndBlocker` need to be defined). 
+ - `autocli.go`: The module's [autocli](https://docs.cosmos.network/main/core/autocli) options.
+- `simulation/`: The module's [simulation](/docs/sdk/v0.53/documentation/operations/simulator) package defines functions used by the blockchain simulator application (`simapp`).
+- `README.md`: The module's specification documents outlining important concepts, state storage structure, and message and event type definitions. Learn more about how to write module specs in the [spec guidelines](/docs/sdk/v0.53/documentation/protocol-development/SPEC_MODULE).
+- The root directory includes type definitions for messages, events, and genesis state, including the type definitions generated by Protocol Buffers.
+ - `codec.go`: The module's registry methods for interface types.
+ - `errors.go`: The module's sentinel errors.
+ - `events.go`: The module's event types and constructors.
+ - `expected_keepers.go`: The module's [expected keeper](/docs/sdk/v0.53/documentation/module-system/keeper#type-definition) interfaces.
+ - `genesis.go`: The module's genesis state methods and helper functions.
+ - `keys.go`: The module's store keys and associated helper functions.
+ - `msgs.go`: The module's message type definitions and associated methods.
+ - `params.go`: The module's parameter type definitions and associated methods.
+ - `*.pb.go`: The module's type definitions generated by Protocol Buffers (as defined in the respective `*.proto` files above).
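The layout above can be stamped out quickly with a small scaffolding script. The sketch below is a hypothetical convenience helper, not an SDK tool; `myproject` and the `checkers` module name are placeholders, and the generated `*.pb.go` files are omitted because `protoc` produces them:

```python
import os

# Files and directories from the recommended layout above.
# Generated *.pb.go files are omitted; protoc creates those.
PROTO_FILES = ["{module}.proto", "event.proto", "genesis.proto", "query.proto", "tx.proto"]
ROOT_FILES = ["codec.go", "errors.go", "events.go", "expected_keepers.go",
              "genesis.go", "keys.go", "msgs.go", "params.go", "README.md"]
SUB_DIRS = ["client/cli", "client/testutil", "exported", "keeper", "module", "simulation"]

def scaffold(base, project, module, proto_version="v1"):
    """Create empty proto/ and x/{module} skeletons under base."""
    proto_dir = os.path.join(base, "proto", project, module, proto_version)
    os.makedirs(proto_dir, exist_ok=True)
    for name in PROTO_FILES:
        # "{module}.proto" picks up the module name; the rest are fixed names.
        open(os.path.join(proto_dir, name.format(module=module)), "a").close()

    module_dir = os.path.join(base, "x", module)
    for sub in SUB_DIRS:
        os.makedirs(os.path.join(module_dir, sub), exist_ok=True)
    for name in ROOT_FILES:
        open(os.path.join(module_dir, name), "a").close()
    return module_dir

if __name__ == "__main__":
    # Creates e.g. demo/proto/myproject/checkers/v1/tx.proto and demo/x/checkers/keeper/
    scaffold("demo", "myproject", "checkers")
```

Running it once gives an empty skeleton matching the structure above, which can then be filled in module by module.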
diff --git a/docs/sdk/v0.53/tutorials.mdx.bak b/docs/sdk/v0.53/documentation/module-system/tutorials.mdx similarity index 88% rename from docs/sdk/v0.53/tutorials.mdx.bak rename to docs/sdk/v0.53/documentation/module-system/tutorials.mdx index 2474f604..c896cc54 100644 --- a/docs/sdk/v0.53/tutorials.mdx.bak +++ b/docs/sdk/v0.53/documentation/module-system/tutorials.mdx @@ -1,9 +1,8 @@ --- -title: "Tutorials" -description: "Version: v0.53" +title: Tutorials --- -## Advanced Tutorials[​](#advanced-tutorials "Direct link to Advanced Tutorials") +## Advanced Tutorials This section provides a concise overview of tutorials focused on implementing vote extensions in the Cosmos SDK. Vote extensions are a powerful feature for enhancing the security and fairness of blockchain applications, particularly in scenarios like implementing oracles and mitigating auction front-running. diff --git a/docs/sdk/v0.53/documentation/module-system/tx.mdx b/docs/sdk/v0.53/documentation/module-system/tx.mdx new file mode 100644 index 00000000..78d62a1a --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/tx.mdx @@ -0,0 +1,301 @@ +--- +title: '`x/auth/tx`' +--- + + +**Pre-requisite Readings** + +* [Transactions](https://docs.cosmos.network/main/core/transactions#transaction-generation) +* [Encoding](https://docs.cosmos.network/main/core/encoding#transaction-encoding) + + + +## Abstract + +This document specifies the `x/auth/tx` package of the Cosmos SDK. + +This package represents the Cosmos SDK implementation of the `client.TxConfig`, `client.TxBuilder`, `client.TxEncoder` and `client.TxDecoder` interfaces. + +## Contents + +* [Transactions](#transactions) + * [`TxConfig`](#txconfig) + * [`TxBuilder`](#txbuilder) + * [`TxEncoder`/ `TxDecoder`](#txencoder-txdecoder) +* [Client](#client) + * [CLI](#cli) + * [gRPC](#grpc) + +## Transactions + +### `TxConfig` + +`client.TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. 
+The interface defines a set of methods for creating a `client.TxBuilder`.
+
+```go
+ // TxConfig defines an interface a client can utilize to generate an
+ // application-defined concrete transaction type. The type returned must
+ // implement TxBuilder.
+ TxConfig interface {
+ TxEncodingConfig
+
+ NewTxBuilder() TxBuilder
+```
+
+The default implementation of `client.TxConfig` is instantiated by `NewTxConfig` in the `x/auth/tx` module.
+
+```go
+ encoder sdk.TxEncoder
+ jsonDecoder sdk.TxDecoder
+ jsonEncoder sdk.TxEncoder
+ protoCodec codec.Codec
+ signingContext *txsigning.Context
+}
+
+```
+
+### `TxBuilder`
+
+```go
+ SignModeHandler() *txsigning.HandlerMap
+ SigningContext() *txsigning.Context
+ }
+
+ // TxBuilder defines an interface which an application-defined concrete transaction
+ // type must implement. Namely, it must be able to set messages, generate
+ // signatures, and provide canonical bytes to sign over. The transaction must
+ // also know how to encode itself.
+ TxBuilder interface {
+ GetTx() signing.Tx
+
+ SetMsgs(msgs ...sdk.Msg) error
+ SetSignatures(signatures ...signingtypes.SignatureV2) error
+ SetMemo(memo string)
+ SetFeeAmount(amount sdk.Coins)
+ SetFeePayer(feePayer sdk.AccAddress)
+ SetGasLimit(limit uint64)
+ SetTimeoutHeight(height uint64)
+```
+
+The [`client.TxBuilder`](https://docs.cosmos.network/main/core/transactions#transaction-generation) interface is also implemented by `x/auth/tx`.
+A `client.TxBuilder` can be accessed with `TxConfig.NewTxBuilder()`.
+
+### `TxEncoder`/ `TxDecoder`
+
+More information about `TxEncoder` and `TxDecoder` can be found [here](https://docs.cosmos.network/main/core/encoding#transaction-encoding).
+
+## Client
+
+### CLI
+
+#### Query
+
+The `x/auth/tx` module provides a CLI command to query any transaction, given its hash, transaction sequence or signature.
+
+Without any argument, the command will query the transaction using the transaction hash.
+
+```shell
+simd query tx DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
+```
+
+When querying a transaction from an account given its sequence, use the `--type=acc_seq` flag:
+
+```shell
+simd query tx --type=acc_seq cosmos1u69uyr6v9qwe6zaaeaqly2h6wnedac0xpxq325/1
+```
+
+When querying a transaction given its signature, use the `--type=signature` flag:
+
+```shell
+simd query tx --type=signature Ofjvgrqi8twZfqVDmYIhqwRLQjZZ40XbxEamk/veH3gQpRF0hL2PH4ejRaDzAX+2WChnaWNQJQ41ekToIi5Wqw==
+```
+
+When querying a transaction given its events, use the `--type=events` flag:
+
+```shell
+simd query txs --events 'message.sender=cosmos...' --page 1 --limit 30
+```
+
+The `x/auth/block` module provides a CLI command to query any block, given its hash, height, or events.
+
+When querying a block by its hash, use the `--type=hash` flag:
+
+```shell
+simd query block --type=hash DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
+```
+
+When querying a block by its height, use the `--type=height` flag:
+
+```shell
+simd query block --type=height 1357
+```
+
+When querying a block by its events, use the `--query` flag:
+
+```shell
+simd query blocks --query 'message.sender=cosmos...' --page 1 --limit 30
+```
+
+#### Transactions
+
+The `x/auth/tx` module provides convenient CLI commands for encoding and decoding transactions.
+
+#### `encode`
+
+The `encode` command encodes a transaction created with the `--generate-only` flag or signed with the sign command.
+The transaction is serialized to Protobuf and returned as base64.
+
+```bash
+$ simd tx encode tx.json
+Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
+$ simd tx encode tx.signed.json
+```
+
+More information about the `encode` command can be found by running `simd tx encode --help`.
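Because `encode` returns nothing more than the base64 of the transaction's Protobuf wire encoding, the output can be sanity-checked offline. The following Python sketch (an illustration, not an SDK tool) decodes the base64 string from the example above and looks for the expected message type URL in the raw bytes:

```python
import base64

# Output of `simd tx encode tx.json` from the example above.
encoded = "Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="

raw = base64.b64decode(encoded)

# In the protobuf wire format, 0x0a marks field 1 (the TxBody) with wire
# type 2 (length-delimited); embedded strings such as the Any type URL of
# the first message are visible directly in the raw bytes.
assert raw[0] == 0x0A
assert b"/cosmos.bank.v1beta1.MsgSend" in raw
print(f"{len(raw)} bytes of protobuf-encoded transaction")
```

For a full decode back to JSON, use `simd tx decode` as shown below; this sketch only confirms the payload is what it claims to be.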
+
+#### `decode`
+
+The `decode` command decodes a transaction encoded with the `encode` command.
+
+```bash
+simd tx decode Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
+```
+
+More information about the `decode` command can be found running `simd tx decode --help`.
+
+### gRPC
+
+A user can query the `x/auth/tx` module using gRPC endpoints.
+
+#### `TxDecode`
+
+The `TxDecode` endpoint decodes a transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxDecode
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"tx_bytes":"Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="}' \
+    localhost:9090 \
+    cosmos.tx.v1beta1.Service/TxDecode
+```
+
+Example Output:
+
+```json expandable
+{
+  "tx": {
+    "body": {
+      "messages": [
+        {
+          "@type": "/cosmos.bank.v1beta1.MsgSend",
+          "amount": [
+            {
+              "denom": "stake",
+              "amount": "100"
+            }
+          ],
+          "fromAddress": "cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275",
+          "toAddress": "cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"
+        }
+      ]
+    },
+    "authInfo": {
+      "fee": {
+        "gasLimit": "200000"
+      }
+    }
+  }
+}
+```
+
+#### `TxEncode`
+
+The `TxEncode` endpoint encodes a transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxEncode
+```
+
+Example:
+
+```shell expandable
+grpcurl -plaintext \
+    -d '{"tx": {
+    "body": {
+      "messages": [
+        {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100"}],"fromAddress":"cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275","toAddress":"cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"}
+      ]
+    },
+    "authInfo": {
+      "fee": {
+        "gasLimit": "200000"
+      }
+    }
+  }}' \
+    localhost:9090 \
+    cosmos.tx.v1beta1.Service/TxEncode
+```
+
+Example Output:
+
+```json
+{
+  "txBytes": "Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="
+}
+```
+
+#### `TxDecodeAmino`
+
+The `TxDecodeAmino` endpoint decodes an Amino-encoded transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxDecodeAmino
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+    -d '{"amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy"}' \
+    localhost:9090 \
+    cosmos.tx.v1beta1.Service/TxDecodeAmino
+```
+
+Example Output:
+
+```json
+{
+  "aminoJson": "{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"
+}
+```
+
+#### `TxEncodeAmino`
+
+The `TxEncodeAmino` endpoint encodes an Amino JSON transaction.
+ +```shell +cosmos.tx.v1beta1.Service/TxEncodeAmino +``` + +Example: + +```shell +grpcurl -plaintext \ + -d '{"amino_json":"{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/TxEncodeAmino +``` + +Example Output: + +```json +{ + "amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy" +} +``` diff --git a/docs/sdk/v0.53/documentation/module-system/upgrade.mdx b/docs/sdk/v0.53/documentation/module-system/upgrade.mdx new file mode 100644 index 00000000..94dbda8e --- /dev/null +++ b/docs/sdk/v0.53/documentation/module-system/upgrade.mdx @@ -0,0 +1,630 @@ +--- +title: '`x/upgrade`' +--- + +## Abstract + +`x/upgrade` is an implementation of a Cosmos SDK module that facilitates smoothly +upgrading a live Cosmos chain to a new (breaking) software version. It accomplishes this by +providing a `PreBlocker` hook that prevents the blockchain state machine from +proceeding once a pre-defined upgrade block height has been reached. + +The module does not prescribe anything regarding how governance decides to do an +upgrade, but just the mechanism for coordinating the upgrade safely. Without software +support for upgrades, upgrading a live chain is risky because all of the validators +need to pause their state machines at exactly the same point in the process. If +this is not done correctly, there can be state inconsistencies which are hard to +recover from. 
+
+* [Concepts](#concepts)
+* [State](#state)
+* [Events](#events)
+* [Client](#client)
+  * [CLI](#cli)
+  * [REST](#rest)
+  * [gRPC](#grpc)
+* [Resources](#resources)
+
+## Concepts
+
+### Plan
+
+The `x/upgrade` module defines a `Plan` type that describes a scheduled live
+upgrade. A `Plan` can be scheduled at a specific block height.
+A `Plan` is created once a (frozen) release candidate along with an appropriate upgrade
+`Handler` (see below) is agreed upon, where the `Name` of a `Plan` corresponds to a
+specific `Handler`. Typically, a `Plan` is created through a governance proposal
+process; if the proposal is voted on and passes, the upgrade is scheduled. The `Info` of a `Plan`
+may contain various metadata about the upgrade, typically application specific
+upgrade info to be included on-chain such as a git commit that validators could
+automatically upgrade to.
+
+```go
+type Plan struct {
+  Name string
+  Height int64
+  Info string
+}
+```
+
+#### Sidecar Process
+
+If an operator running the application binary also runs a sidecar process to assist
+in the automatic download and upgrade of a binary, the `Info` allows this process to
+be seamless. This tool is [Cosmovisor](https://github.com/cosmos/cosmos-sdk/tree/main/tools/cosmovisor#readme).
+
+### Handler
+
+The `x/upgrade` module facilitates upgrading from major version X to major version Y. To
+accomplish this, node operators must first upgrade their current binary to a new
+binary that has a corresponding `Handler` for the new version Y. It is assumed that
+this version has been fully tested and approved by the community at large. This
+`Handler` defines what state migrations need to occur before the new binary Y
+can successfully run the chain. Naturally, this `Handler` is application specific
+and not defined on a per-module basis. Registering a `Handler` is done via
+`Keeper#SetUpgradeHandler` in the application.
+
+```go
+type UpgradeHandler func(Context, Plan, VersionMap) (VersionMap, error)
+```
+
+During `PreBlocker` execution, the `x/upgrade` module checks if there exists a
+`Plan` that should execute (is scheduled at that height). If so, the corresponding
+`Handler` is executed. If the `Plan` is expected to execute but no `Handler` is registered,
+or if the binary was upgraded too early, the node will gracefully panic and exit.
+
+### StoreLoader
+
+The `x/upgrade` module also facilitates store migrations as part of the upgrade. The
+`StoreLoader` sets the migrations that need to occur before the new binary can
+successfully run the chain. This `StoreLoader` is also application specific and
+not defined on a per-module basis. Registering this `StoreLoader` is done via
+`app#SetStoreLoader` in the application.
+
+```go
+func UpgradeStoreLoader(upgradeHeight int64, storeUpgrades *store.StoreUpgrades) baseapp.StoreLoader
+```
+
+If there is a planned upgrade and the upgrade height is reached, the old binary writes the `Plan` to disk before panicking.
+
+This information is critical to ensure the `StoreUpgrades` happen smoothly at the correct height for the
+expected upgrade. It eliminates the chance of the new binary executing the `StoreUpgrades` multiple
+times on every restart. Also, if there are multiple upgrades planned at the same height, the `Name`
+ensures the `StoreUpgrades` take place only during the planned upgrade handler.
+
+### Proposal
+
+Typically, a `Plan` is proposed and submitted through governance via a proposal
+containing a `MsgSoftwareUpgrade` message.
+This proposal follows the standard governance process. If the proposal passes, the
+`Plan`, which targets a specific `Handler`, is persisted and scheduled. The
+upgrade can be delayed or hastened by updating the `Plan.Height` in a new proposal.
+
+```protobuf
+// MsgSoftwareUpgrade is the Msg/SoftwareUpgrade request type.
+//
+// Since: cosmos-sdk 0.46
+message MsgSoftwareUpgrade {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/MsgSoftwareUpgrade";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+
+  // plan is the upgrade plan.
+  Plan plan = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
+}
+```
+
+#### Cancelling Upgrade Proposals
+
+Upgrade proposals can be cancelled. There exists a gov-enabled `MsgCancelUpgrade`
+message type, which can be embedded in a proposal, voted on and, if passed, will
+remove the scheduled upgrade `Plan`.
+Of course, this requires that the upgrade was known to be a bad idea well before the
+upgrade itself, to allow time for a vote.
+
+```protobuf
+// MsgCancelUpgrade is the Msg/CancelUpgrade request type.
+//
+// Since: cosmos-sdk 0.46
+message MsgCancelUpgrade {
+  option (cosmos.msg.v1.signer) = "authority";
+  option (amino.name) = "cosmos-sdk/MsgCancelUpgrade";
+
+  // authority is the address that controls the module (defaults to x/gov unless overwritten).
+  string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+If such a possibility is desired, the upgrade height is to be
+`2 * (VotingPeriod + DepositPeriod) + (SafetyDelta)` from the beginning of the
+upgrade proposal. The `SafetyDelta` is the time available between the success of an
+upgrade proposal and the realization that it was a bad idea (due to external social consensus).
+
+A `MsgCancelUpgrade` proposal can also be made while the original
+`MsgSoftwareUpgrade` proposal is still being voted upon, as long as the `VotingPeriod`
+ends after the `MsgSoftwareUpgrade` proposal.
+
+## State
+
+The internal state of the `x/upgrade` module is relatively minimal and simple. The
+state contains the currently active upgrade `Plan` (if one exists) under key
+`0x0`, and records whether a `Plan` is marked as "done" under key `0x1`.
The state also
+contains the consensus versions of all app modules in the application. The versions
+are stored as big endian `uint64`, and can be accessed with prefix `0x2` appended
+by the corresponding module name of type `string`. The state maintains a
+`Protocol Version` which can be accessed by key `0x3`.
+
+* Plan: `0x0 -> Plan`
+* Done: `0x1 | byte(plan name) -> BigEndian(Block Height)`
+* ConsensusVersion: `0x2 | byte(module name) -> BigEndian(Module Consensus Version)`
+* ProtocolVersion: `0x3 -> BigEndian(Protocol Version)`
+
+The `x/upgrade` module contains no genesis state.
+
+## Events
+
+The `x/upgrade` module does not emit any events by itself. Any and all proposal related
+events are emitted through the `x/gov` module.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `upgrade` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `upgrade` state.
+
+```bash
+simd query upgrade --help
+```
+
+##### applied
+
+The `applied` command allows users to query the block header for the height at which a completed upgrade was applied.
+
+```bash
+simd query upgrade applied [upgrade-name] [flags]
+```
+
+If upgrade-name was previously executed on the chain, this returns the header for the block at which it was applied.
+This helps a client determine which binary was valid over a given range of blocks, and provides more context for understanding past migrations.
+ +Example: + +```bash +simd query upgrade applied "test-upgrade" +``` + +Example Output: + +```bash expandable +"block_id": { + "hash": "A769136351786B9034A5F196DC53F7E50FCEB53B48FA0786E1BFC45A0BB646B5", + "parts": { + "total": 1, + "hash": "B13CBD23011C7480E6F11BE4594EE316548648E6A666B3575409F8F16EC6939E" + } + }, + "block_size": "7213", + "header": { + "version": { + "block": "11" + }, + "chain_id": "testnet-2", + "height": "455200", + "time": "2021-04-10T04:37:57.085493838Z", + "last_block_id": { + "hash": "0E8AD9309C2DC411DF98217AF59E044A0E1CCEAE7C0338417A70338DF50F4783", + "parts": { + "total": 1, + "hash": "8FE572A48CD10BC2CBB02653CA04CA247A0F6830FF19DC972F64D339A355E77D" + } + }, + "last_commit_hash": "DE890239416A19E6164C2076B837CC1D7F7822FC214F305616725F11D2533140", + "data_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", + "next_validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", + "consensus_hash": "048091BC7DDC283F77BFBF91D73C44DA58C3DF8A9CBC867405D8B7F3DAADA22F", + "app_hash": "28ECC486AFC332BA6CC976706DBDE87E7D32441375E3F10FD084CD4BAF0DA021", + "last_results_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "evidence_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", + "proposer_address": "2ABC4854B1A1C5AA8403C4EA853A81ACA901CC76" + }, + "num_txs": "0" +} +``` + +##### module versions + +The `module_versions` command gets a list of module names and their respective consensus versions. + +Following the command with a specific module name will return only +that module's information. 
+
+```bash
+simd query upgrade module_versions [optional module_name] [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions
+```
+
+Example Output:
+
+```bash expandable
+module_versions:
+- name: auth
+  version: "2"
+- name: authz
+  version: "1"
+- name: bank
+  version: "2"
+- name: distribution
+  version: "2"
+- name: evidence
+  version: "1"
+- name: feegrant
+  version: "1"
+- name: genutil
+  version: "1"
+- name: gov
+  version: "2"
+- name: ibc
+  version: "2"
+- name: mint
+  version: "1"
+- name: params
+  version: "1"
+- name: slashing
+  version: "2"
+- name: staking
+  version: "2"
+- name: transfer
+  version: "1"
+- name: upgrade
+  version: "1"
+- name: vesting
+  version: "1"
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions ibc
+```
+
+Example Output:
+
+```bash
+module_versions:
+- name: ibc
+  version: "2"
+```
+
+##### plan
+
+The `plan` command gets the currently scheduled upgrade plan, if one exists.
+
+```bash
+simd query upgrade plan [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade plan
+```
+
+Example Output:
+
+```bash
+height: "130"
+info: ""
+name: test-upgrade
+time: "0001-01-01T00:00:00Z"
+upgraded_client_state: null
+```
+
+#### Transactions
+
+The upgrade module supports the following transactions:
+
+* `software-upgrade` - submits an upgrade proposal:
+
+```bash
+simd tx upgrade software-upgrade v2 --title="Test Proposal" --summary="testing" --deposit="100000000stake" --upgrade-height 1000000 \
+--upgrade-info '{ "binaries": { "linux/amd64":"https://example.com/simd.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f" } }' --from cosmos1..
+```
+
+* `cancel-software-upgrade` - cancels a previously submitted upgrade proposal:
+
+```bash
+simd tx upgrade cancel-software-upgrade --title="Test Proposal" --summary="testing" --deposit="100000000stake" --from cosmos1..
+```
+
+### REST
+
+A user can query the `upgrade` module using REST endpoints.
+ +#### Applied Plan + +`AppliedPlan` queries a previously applied upgrade plan by its name. + +```bash +/cosmos/upgrade/v1beta1/applied_plan/{name} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/applied_plan/v2.0-upgrade" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "height": "30" +} +``` + +#### Current Plan + +`CurrentPlan` queries the current upgrade plan. + +```bash +/cosmos/upgrade/v1beta1/current_plan +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/current_plan" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "plan": "v2.1-upgrade" +} +``` + +#### Module versions + +`ModuleVersions` queries the list of module versions from state. + +```bash +/cosmos/upgrade/v1beta1/module_versions +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/module_versions" -H "accept: application/json" +``` + +Example Output: + +```bash expandable +{ + "module_versions": [ + { + "name": "auth", + "version": "2" + }, + { + "name": "authz", + "version": "1" + }, + { + "name": "bank", + "version": "2" + }, + { + "name": "distribution", + "version": "2" + }, + { + "name": "evidence", + "version": "1" + }, + { + "name": "feegrant", + "version": "1" + }, + { + "name": "genutil", + "version": "1" + }, + { + "name": "gov", + "version": "2" + }, + { + "name": "ibc", + "version": "2" + }, + { + "name": "mint", + "version": "1" + }, + { + "name": "params", + "version": "1" + }, + { + "name": "slashing", + "version": "2" + }, + { + "name": "staking", + "version": "2" + }, + { + "name": "transfer", + "version": "1" + }, + { + "name": "upgrade", + "version": "1" + }, + { + "name": "vesting", + "version": "1" + } + ] +} +``` + +### gRPC + +A user can query the `upgrade` module using gRPC endpoints. + +#### Applied Plan + +`AppliedPlan` queries a previously applied upgrade plan by its name. 
+
+```bash
+cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"name":"v2.0-upgrade"}' \
+    localhost:9090 \
+    cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example Output:
+
+```bash
+{
+  "height": "30"
+}
+```
+
+#### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example Output:
+
+```bash
+{
+  "plan": "v2.1-upgrade"
+}
+```
+
+#### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example Output:
+
+```bash expandable
+{
+  "module_versions": [
+    {
+      "name": "auth",
+      "version": "2"
+    },
+    {
+      "name": "authz",
+      "version": "1"
+    },
+    {
+      "name": "bank",
+      "version": "2"
+    },
+    {
+      "name": "distribution",
+      "version": "2"
+    },
+    {
+      "name": "evidence",
+      "version": "1"
+    },
+    {
+      "name": "feegrant",
+      "version": "1"
+    },
+    {
+      "name": "genutil",
+      "version": "1"
+    },
+    {
+      "name": "gov",
+      "version": "2"
+    },
+    {
+      "name": "ibc",
+      "version": "2"
+    },
+    {
+      "name": "mint",
+      "version": "1"
+    },
+    {
+      "name": "params",
+      "version": "1"
+    },
+    {
+      "name": "slashing",
+      "version": "2"
+    },
+    {
+      "name": "staking",
+      "version": "2"
+    },
+    {
+      "name": "transfer",
+      "version": "1"
+    },
+    {
+      "name": "upgrade",
+      "version": "1"
+    },
+    {
+      "name": "vesting",
+      "version": "1"
+    }
+  ]
+}
+```
+
+## Resources
+
+A list of (external) resources to learn more about the `x/upgrade` module.
+
+* [Cosmos Dev Series: Cosmos Blockchain Upgrade](https://medium.com/web3-surfers/cosmos-dev-series-cosmos-sdk-based-blockchain-upgrade-b5e99181554c) - The blog post that explains how software upgrades work in detail.
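To make the key layout from the State section above concrete, the store keys can be assembled with plain byte concatenation and big-endian integers. The helpers below are an illustrative, self-contained sketch; they are not the module's actual internal functions:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Prefix bytes as documented in the State section.
const (
	planKeyByte            byte = 0x0 // Plan
	doneKeyPrefix          byte = 0x1 // Done
	versionMapPrefix       byte = 0x2 // ConsensusVersion
	protocolVersionKeyByte byte = 0x3 // ProtocolVersion
)

// doneKey builds `0x1 | byte(plan name)`; the stored value is the
// big-endian block height at which the upgrade was applied.
func doneKey(planName string) []byte {
	return append([]byte{doneKeyPrefix}, planName...)
}

// consensusVersionKey builds `0x2 | byte(module name)`; the stored value
// is the big-endian module consensus version.
func consensusVersionKey(moduleName string) []byte {
	return append([]byte{versionMapPrefix}, moduleName...)
}

// bigEndianUint64 renders heights and versions the way the module stores them.
func bigEndianUint64(v uint64) []byte {
	bz := make([]byte, 8)
	binary.BigEndian.PutUint64(bz, v)
	return bz
}

func main() {
	fmt.Printf("done key: % x\n", doneKey("v2.0-upgrade"))
	fmt.Printf("value:    % x\n", bigEndianUint64(30))
	fmt.Printf("version:  % x\n", consensusVersionKey("bank"))
}
```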
diff --git a/docs/sdk/v0.53/documentation/module-system/vesting.mdx b/docs/sdk/v0.53/documentation/module-system/vesting.mdx
new file mode 100644
index 00000000..74fa238d
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/module-system/vesting.mdx
@@ -0,0 +1,752 @@
+---
+title: '`x/auth/vesting`'
+---
+
+* [Intro and Requirements](#intro-and-requirements)
+* [Note](#note)
+* [Vesting Account Types](#vesting-account-types)
+  * [BaseVestingAccount](#basevestingaccount)
+  * [ContinuousVestingAccount](#continuousvestingaccount)
+  * [DelayedVestingAccount](#delayedvestingaccount)
+  * [Period](#period)
+  * [PeriodicVestingAccount](#periodicvestingaccount)
+  * [PermanentLockedAccount](#permanentlockedaccount)
+* [Vesting Account Specification](#vesting-account-specification)
+  * [Determining Vesting & Vested Amounts](#determining-vesting--vested-amounts)
+  * [Periodic Vesting Accounts](#periodic-vesting-accounts)
+  * [Transferring/Sending](#transferringsending)
+  * [Delegating](#delegating)
+  * [Undelegating](#undelegating)
+* [Keepers & Handlers](#keepers--handlers)
+* [Genesis Initialization](#genesis-initialization)
+* [Examples](#examples)
+  * [Simple](#simple)
+  * [Slashing](#slashing)
+  * [Periodic Vesting](#periodic-vesting)
+* [Glossary](#glossary)
+
+## Intro and Requirements
+
+This specification defines the vesting account implementation that is used by the Cosmos Hub. The requirements for this vesting account are that it should be initialized during genesis with a starting balance `X` and a vesting end time `ET`. A vesting account may be initialized with a vesting start time `ST` and a number of vesting periods `P`. If a vesting start time is included, the vesting period does not begin until the start time is reached. If vesting periods are included, the vesting occurs over the specified number of periods.
+
+For all vesting accounts, the owner of the vesting account is able to delegate and undelegate from validators; however, they cannot transfer coins to another account until those coins are vested. This specification allows for four different kinds of vesting:
+
+* Delayed vesting, where all coins are vested once `ET` is reached.
+* Continuous vesting, where coins begin to vest at `ST` and vest linearly with respect to time until `ET` is reached.
+* Periodic vesting, where coins begin to vest at `ST` and vest periodically according to the number of periods and the vesting amount per period. The number of periods, length per period, and amount per period are configurable. A periodic vesting account is distinguished from a continuous vesting account in that coins can be released in staggered tranches. For example, a periodic vesting account could be used for vesting arrangements where coins are released quarterly, yearly, or on any other schedule.
+* Permanent locked vesting, where coins are locked forever. Coins in this account can still be used for delegating and for governance votes even while locked.
+
+## Note
+
+Vesting accounts can be initialized with some vesting and non-vesting coins. The non-vesting coins would be immediately transferable. DelayedVesting, ContinuousVesting, PeriodicVesting, and PermanentLocked accounts can be created with normal messages after genesis. Other types of vesting accounts must be created at genesis, or as part of a manual network upgrade. The current specification only allows for *unconditional* vesting (i.e. there is no possibility of reaching `ET` and
+having coins fail to vest).
+
+## Vesting Account Types
+
+```go expandable
+/ VestingAccount defines an interface that any vesting account type must
+/ implement.
+type VestingAccount interface {
+  Account
+
+  GetVestedCoins(Time) Coins
+  GetVestingCoins(Time) Coins
+
+  / TrackDelegation performs internal vesting accounting necessary when
+  / delegating from a vesting account. It accepts the current block time, the
+  / delegation amount and balance of all coins whose denomination exists in
+  / the account's original vesting balance.
+  TrackDelegation(Time, Coins, Coins)
+
+  / TrackUndelegation performs internal vesting accounting necessary when a
+  / vesting account performs an undelegation.
+  TrackUndelegation(Coins)
+
+  GetStartTime() int64
+  GetEndTime() int64
+}
+```
+
+### BaseVestingAccount
+
+```protobuf
+// BaseVestingAccount implements the VestingAccount interface. It contains all
+// the necessary fields needed for any vesting account implementation.
+message BaseVestingAccount {
+  option (amino.name) = "cosmos-sdk/BaseVestingAccount";
+  option (gogoproto.goproto_getters) = false;
+
+  cosmos.auth.v1beta1.BaseAccount base_account = 1 [(gogoproto.embed) = true];
+  repeated cosmos.base.v1beta1.Coin original_vesting = 2 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (amino.encoding) = "legacy_coins",
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+  repeated cosmos.base.v1beta1.Coin delegated_free = 3 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (amino.encoding) = "legacy_coins",
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+  repeated cosmos.base.v1beta1.Coin delegated_vesting = 4 [
+    (gogoproto.nullable) = false,
+    (amino.dont_omitempty) = true,
+    (amino.encoding) = "legacy_coins",
+    (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins"
+  ];
+  int64 end_time = 5;
+}
+```
+
+### ContinuousVestingAccount
+
+```protobuf
+// ContinuousVestingAccount implements the VestingAccount interface. It
+// continuously vests by unlocking coins linearly with respect to time.
+message ContinuousVestingAccount { + option (amino.name) = "cosmos-sdk/ContinuousVestingAccount"; + option (gogoproto.goproto_getters) = false; + + BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true]; +``` + +### DelayedVestingAccount + +```protobuf + int64 start_time = 2; +} + +// DelayedVestingAccount implements the VestingAccount interface. It vests all +// coins after a specific time, but non prior. In other words, it keeps them +// locked until a specified time. +message DelayedVestingAccount { + option (amino.name) = "cosmos-sdk/DelayedVestingAccount"; + option (gogoproto.goproto_getters) = false; + +``` + +### Period + +```protobuf +} + +// Period defines a length of time and amount of coins that will vest. +message Period { + // Period duration in seconds. + int64 length = 1; + repeated cosmos.base.v1beta1.Coin amount = 2 [ + (gogoproto.nullable) = false, + (amino.dont_omitempty) = true, + (amino.encoding) = "legacy_coins", + (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins" +``` + +```go +/ Stores all vesting periods passed as part of a PeriodicVestingAccount +type Periods []Period +``` + +### PeriodicVestingAccount + +```protobuf +} + +// PeriodicVestingAccount implements the VestingAccount interface. It +// periodically vests by unlocking coins during each specified period. +message PeriodicVestingAccount { + option (amino.name) = "cosmos-sdk/PeriodicVestingAccount"; + option (gogoproto.goproto_getters) = false; + + BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true]; + int64 start_time = 2; + repeated Period vesting_periods = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; +``` + +In order to facilitate less ad-hoc type checking and assertions and to support flexibility in account balance usage, the existing `x/bank` `ViewKeeper` interface is updated to contain the following: + +```go +type ViewKeeper interface { + / ... + + / Calculates the total locked account balance. 
+  LockedCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins
+
+  / Calculates the total spendable balance that can be sent to other accounts.
+  SpendableCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins
+}
+```
+
+### PermanentLockedAccount
+
+```protobuf
+
+// PermanentLockedAccount implements the VestingAccount interface. It does
+// not ever release coins, locking them indefinitely. Coins in this account can
+// still be used for delegating and for governance votes even while locked.
+//
+// Since: cosmos-sdk 0.43
+message PermanentLockedAccount {
+  option (amino.name) = "cosmos-sdk/PermanentLockedAccount";
+  option (gogoproto.goproto_getters) = false;
+
+  BaseVestingAccount base_vesting_account = 1 [(gogoproto.embed) = true];
+}
+```
+
+## Vesting Account Specification
+
+Given a vesting account, we define the following in the operations below:
+
+* `OV`: The original vesting coin amount. It is a constant value.
+* `V`: The number of `OV` coins that are still *vesting*. It is derived from
+  `OV`, `StartTime` and `EndTime`. This value is computed on demand and not on a per-block basis.
+* `V'`: The number of `OV` coins that are *vested* (unlocked). This value is computed on demand and not on a per-block basis.
+* `DV`: The number of delegated *vesting* coins. It is a variable value. It is stored and modified directly in the vesting account.
+* `DF`: The number of delegated *vested* (unlocked) coins. It is a variable value. It is stored and modified directly in the vesting account.
+* `BC`: The number of `OV` coins less any coins that are transferred
+  (which can be negative or delegated). It is considered to be the balance of the embedded base account. It is stored and modified directly in the vesting account.
+
+### Determining Vesting & Vested Amounts
+
+It is important to note that these values are computed on demand and not on a mandatory per-block basis (e.g. `BeginBlocker` or `EndBlocker`).
+
+#### Continuously Vesting Accounts
+
+To determine the amount of coins that are vested for a given block time `T`, the
+following is performed:
+
+1. Compute `X := T - StartTime`
+2. Compute `Y := EndTime - StartTime`
+3. Compute `V' := OV * (X / Y)`
+4. Compute `V := OV - V'`
+
+Thus, the total amount of *vested* coins is `V'` and the remaining amount, `V`,
+is *vesting*.
+
+```go expandable
+func (cva ContinuousVestingAccount) GetVestedCoins(t Time) Coins {
+  if t <= cva.StartTime {
+    / We must handle the case where the start time for a vesting account has
+    / been set into the future or when the start of the chain is not exactly
+    / known.
+    return ZeroCoins
+  } else if t >= cva.EndTime {
+    return cva.OriginalVesting
+  }
+
+  x := t - cva.StartTime
+  y := cva.EndTime - cva.StartTime
+
+  return cva.OriginalVesting * (x / y)
+}
+
+func (cva ContinuousVestingAccount) GetVestingCoins(t Time) Coins {
+  return cva.OriginalVesting - cva.GetVestedCoins(t)
+}
+```
+
+### Periodic Vesting Accounts
+
+Periodic vesting accounts require calculating the coins released during each period for a given block time `T`. Note that multiple periods could have passed when calling `GetVestedCoins`, so we must iterate over each period until the end of that period is after `T`.
+
+1. Set `CT := StartTime`
+2. Set `V' := 0`
+
+For each period `P`:
+
+1. Compute `X := T - CT`
+2. If `X >= P.Length`, compute `V' += P.Amount` and `CT += P.Length`; otherwise, break.
+
+Finally, compute `V := OV - V'`.
+
+```go expandable
+func (pva PeriodicVestingAccount) GetVestedCoins(t Time) Coins {
+  if t < pva.StartTime {
+    return ZeroCoins
+  }
+
+  ct := pva.StartTime / The start of the vesting schedule
+  vested := 0
+  periods := pva.GetPeriods()
+
+  for _, period := range periods {
+    if t - ct < period.Length {
+      break
+    }
+
+    vested += period.Amount
+    ct += period.Length / increment ct to the start of the next vesting period
+  }
+
+  return vested
+}
+
+func (pva PeriodicVestingAccount) GetVestingCoins(t Time) Coins {
+  return pva.OriginalVesting - pva.GetVestedCoins(t)
+}
+```
+
+#### Delayed/Discrete Vesting Accounts
+
+Delayed vesting accounts are easier to reason about as they only have the full amount vesting up until a certain time, then all the coins become vested (unlocked). This does not include any unlocked coins the account may have initially.
+
+```go expandable
+func (dva DelayedVestingAccount) GetVestedCoins(t Time) Coins {
+  if t >= dva.EndTime {
+    return dva.OriginalVesting
+  }
+
+  return ZeroCoins
+}
+
+func (dva DelayedVestingAccount) GetVestingCoins(t Time) Coins {
+  return dva.OriginalVesting - dva.GetVestedCoins(t)
+}
+```
+
+### Transferring/Sending
+
+At any given time, a vesting account may transfer: `min((BC + DV) - V, BC)`.
+
+In other words, a vesting account may transfer the minimum of the base account balance and the base account balance plus the number of currently delegated vesting coins less the number of coins still vesting.
+
+However, given that account balances are tracked via the `x/bank` module and that we want to avoid loading the entire account balance, we can instead determine the locked balance, which can be defined as `max(V - DV, 0)`, and infer the spendable balance from that.

```go
func (va VestingAccount) LockedCoins(t Time) Coins {
    return max(va.GetVestingCoins(t) - va.DelegatedVesting, 0)
}
```

The `x/bank` `ViewKeeper` can then provide APIs to determine locked and spendable coins for any account:

```go expandable
func (k Keeper) LockedCoins(ctx Context, addr AccAddress) Coins {
    acc := k.GetAccount(ctx, addr)
    if acc != nil {
        if acc.IsVesting() {
            return acc.LockedCoins(ctx.BlockTime())
        }
    }

    // non-vesting accounts do not have any locked coins
    return NewCoins()
}
```

#### Keepers/Handlers

The corresponding `x/bank` keeper should appropriately handle sending coins based on whether the account is a vesting account or not.

```go expandable
func (k Keeper) SendCoins(ctx Context, from Account, to Account, amount Coins) {
    bc := k.GetBalances(ctx, from)
    v := k.LockedCoins(ctx, from)

    spendable := bc - v
    newCoins := spendable - amount
    assert(newCoins >= 0)

    from.SetBalance(newCoins)
    to.AddBalance(amount)

    // save balances...
}
```

### Delegating

For a vesting account attempting to delegate `D` coins, the following is performed:

1. Verify `BC >= D > 0`
2. Compute `X := min(max(V - DV, 0), D)` (portion of `D` that is vesting)
3. Compute `Y := D - X` (portion of `D` that is free)
4. Set `DV += X`
5. Set `DF += Y`

```go
func (va VestingAccount) TrackDelegation(t Time, balance Coins, amount Coins) {
    assert(amount <= balance)

    x := min(max(va.GetVestingCoins(t) - va.DelegatedVesting, 0), amount)
    y := amount - x

    va.DelegatedVesting += x
    va.DelegatedFree += y
}
```

**Note** `TrackDelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by subtracting `amount`.

#### Keepers/Handlers

```go
func DelegateCoins(t Time, from Account, amount Coins) {
    if isVesting(from) {
        from.TrackDelegation(t, amount)
    } else {
        from.SetBalance(sc - amount)
    }

    // save account...
}
```

### Undelegating

For a vesting account attempting to undelegate `D` coins, the following is performed:

> NOTE: `DV < D` and `(DV + DF) < D` may be possible due to quirks in the rounding of delegation/undelegation logic.

1. Verify `D > 0`
2. Compute `X := min(DF, D)` (portion of `D` that should become free, prioritizing free coins)
3. Compute `Y := min(DV, D - X)` (portion of `D` that should remain vesting)
4. Set `DF -= X`
5. Set `DV -= Y`

```go
func (cva ContinuousVestingAccount) TrackUndelegation(amount Coins) {
    x := min(cva.DelegatedFree, amount)
    y := amount - x

    cva.DelegatedFree -= x
    cva.DelegatedVesting -= y
}
```

**Note** `TrackUndelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by adding `amount`.

**Note**: If a delegation is slashed, the continuous vesting account ends up with an excess `DV` amount, even after all its coins have vested. This is because undelegation prioritizes free coins.

**Note**: The undelegation (bond refund) amount may exceed the delegated vesting (bond) amount due to the way undelegation truncates the bond refund, which can increase the validator's exchange rate (tokens/shares) slightly if the undelegated tokens are non-integral.

#### Keepers/Handlers

```go expandable
func UndelegateCoins(to Account, amount Coins) {
    if isVesting(to) {
        if to.DelegatedFree + to.DelegatedVesting >= amount {
            to.TrackUndelegation(amount)
            // save account...
        }
    } else {
        AddBalance(to, amount)
        // save account...
    }
}
```

## Keepers & Handlers

The `VestingAccount` implementations reside in `x/auth`. However, any keeper in a module (e.g. staking in `x/staking`) wishing to potentially utilize any vesting coins must call explicit methods on the `x/bank` keeper (e.g. `DelegateCoins`) as opposed to `SendCoins` and `SubtractCoins`.

In addition, the vesting account should also be able to spend any coins it receives from other users. Thus, the bank module's `MsgSend` handler should error if a vesting account is trying to send an amount that exceeds its unlocked coin amount.

See the above specification for full implementation details.

## Genesis Initialization

To initialize both vesting and non-vesting accounts, the `GenesisAccount` struct includes new fields: `Vesting`, `StartTime`, and `EndTime`. Accounts meant to be of type `BaseAccount` or any non-vesting type have `Vesting = false`. The genesis initialization logic (e.g. `initFromGenesisState`) must parse and return the correct accounts accordingly based on these fields.

```go expandable
type GenesisAccount struct {
    // ...

    // vesting account fields
    OriginalVesting  sdk.Coins `json:"original_vesting"`
    DelegatedFree    sdk.Coins `json:"delegated_free"`
    DelegatedVesting sdk.Coins `json:"delegated_vesting"`
    StartTime        int64     `json:"start_time"`
    EndTime          int64     `json:"end_time"`
}

func ToAccount(gacc GenesisAccount) Account {
    bacc := NewBaseAccount(gacc)

    if gacc.OriginalVesting > 0 {
        if gacc.StartTime != 0 && gacc.EndTime != 0 {
            // return a continuous vesting account
        } else if gacc.EndTime != 0 {
            // return a delayed vesting account
        } else {
            // invalid genesis vesting account provided
            panic()
        }
    }

    return bacc
}
```

## Examples

### Simple

Given a continuous vesting account with 10 vesting coins:

```text
OV = 10
DF = 0
DV = 0
BC = 10
V = 10
V' = 0
```

1. Immediately receives 1 coin

   ```text
   BC = 11
   ```

2. Time passes, 2 coins vest

   ```text
   V = 8
   V' = 2
   ```

3. Delegates 4 coins to validator A

   ```text
   DV = 4
   BC = 7
   ```

4. Sends 3 coins

   ```text
   BC = 4
   ```

5. More time passes, 2 more coins vest

   ```text
   V = 6
   V' = 4
   ```

6. Sends 2 coins.
At this point the account cannot send any more until further
   coins vest or it receives additional coins. It can, however, still delegate.

   ```text
   BC = 2
   ```

### Slashing

Same initial starting conditions as the simple example.

1. Time passes, 5 coins vest

   ```text
   V = 5
   V' = 5
   ```

2. Delegate 5 coins to validator A

   ```text
   DV = 5
   BC = 5
   ```

3. Delegate 5 coins to validator B

   ```text
   DF = 5
   BC = 0
   ```

4. Validator A gets slashed by 50%, making the delegation to A now worth 2.5 coins

5. Undelegate from validator A (2.5 coins)

   ```text
   DF = 5 - 2.5 = 2.5
   BC = 0 + 2.5 = 2.5
   ```

6. Undelegate from validator B (5 coins). The account at this point can only
   send 2.5 coins unless it receives more coins or until more coins vest.
   It can, however, still delegate.

   ```text
   DV = 5 - 2.5 = 2.5
   DF = 2.5 - 2.5 = 0
   BC = 2.5 + 5 = 7.5
   ```

   Notice how we have an excess amount of `DV`.

### Periodic Vesting

A vesting account is created where 100 tokens will be released over 1 year, with
1/4 of the tokens vesting each quarter. The vesting schedule would be as follows:

```yaml
Periods:
- amount: 25stake, length: 7884000
- amount: 25stake, length: 7884000
- amount: 25stake, length: 7884000
- amount: 25stake, length: 7884000
```

```text
OV = 100
DF = 0
DV = 0
BC = 100
V = 100
V' = 0
```

1. Immediately receives 1 coin

   ```text
   BC = 101
   ```

2. Vesting period 1 passes, 25 coins vest

   ```text
   V = 75
   V' = 25
   ```

3. During vesting period 2, 5 coins are transferred and 5 coins are delegated

   ```text
   DV = 5
   BC = 91
   ```

4. Vesting period 2 passes, 25 coins vest

   ```text
   V = 50
   V' = 50
   ```

## Glossary

* OriginalVesting: The amount of coins (per denomination) that are initially
  part of a vesting account. These coins are set at genesis.
* StartTime: The BFT time at which a vesting account starts to vest.
* EndTime: The BFT time at which a vesting account is fully vested.
* DelegatedFree: The tracked amount of coins (per denomination) that are
  delegated from a vesting account and had fully vested at the time of delegation.
* DelegatedVesting: The tracked amount of coins (per denomination) that are
  delegated from a vesting account and were still vesting at the time of delegation.
* ContinuousVestingAccount: A vesting account implementation that vests coins
  linearly over time.
* DelayedVestingAccount: A vesting account implementation that only fully vests
  all coins at a given time.
* PeriodicVestingAccount: A vesting account implementation that vests coins
  according to a custom vesting schedule.
* PermanentLockedAccount: A vesting account implementation that never releases
  coins, locking them indefinitely. Coins in this account can still be used for
  delegating and for governance votes even while locked.

## CLI

A user can query and interact with the `vesting` module using the CLI.

### Transactions

The `tx` commands allow users to interact with the `vesting` module.

```bash
simd tx vesting --help
```

#### create-periodic-vesting-account

The `create-periodic-vesting-account` command creates a new vesting account funded with an allocation of tokens that is released according to a sequence of periods, each defined by an amount of coins and a period length in seconds. Periods are sequential, in that the duration of a period only starts at the end of the previous period. The duration of the first period starts upon account creation.

```bash
simd tx vesting create-periodic-vesting-account [to_address] [periods_json_file] [flags]
```

Example:

```bash
simd tx vesting create-periodic-vesting-account cosmos1.. periods.json
```

#### create-vesting-account

The `create-vesting-account` command creates a new vesting account funded with an allocation of tokens. The account can either be a delayed or continuous vesting account, which is determined by the '--delayed' flag.
All vesting accounts created will have their start time set to the committed block's time. The end\_time must be provided as a UNIX epoch timestamp.

```bash
simd tx vesting create-vesting-account [to_address] [amount] [end_time] [flags]
```

Example:

```bash
simd tx vesting create-vesting-account cosmos1.. 100stake 2592000
```
diff --git a/docs/sdk/v0.53/documentation/operations/app-testnet.mdx b/docs/sdk/v0.53/documentation/operations/app-testnet.mdx
new file mode 100644
index 00000000..ed9cb72f
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/app-testnet.mdx
@@ -0,0 +1,257 @@
---
title: Application Testnets
description: >-
  Building an application is complicated and requires a lot of testing. The
  Cosmos SDK provides a way to test your application in a real-world
  environment: a testnet.
---

Building an application is complicated and requires a lot of testing. The Cosmos SDK provides a way to test your application in a real-world environment: a testnet.

We allow developers to take the state from their mainnet and run tests against that state. This is useful for testing upgrade migrations, or for testing the application in a real-world environment.

## Testnet Setup

We will be breaking down the steps to create a testnet from mainnet state.

```go
// InitSimAppForTestnet is broken down into two sections:
// Required Changes: changes that, if not made, will cause the testnet to halt or panic
// Optional Changes: changes to customize the testnet to one's liking (lower vote times, fund accounts, etc)

func InitSimAppForTestnet(app *SimApp, newValAddr bytes.HexBytes, newValPubKey crypto.PubKey, newOperatorAddress, upgradeToTrigger string) *SimApp {
    ...
}
```

### Required Changes

#### Staking

When creating a testnet, the important part is migrating the validator set from many validators to one or a few. This allows developers to spin up the chain without needing to replace validator keys.
+ +```go expandable +ctx := app.BaseApp.NewUncachedContext(true, tmproto.Header{ +}) + pubkey := &ed25519.PubKey{ + Key: newValPubKey.Bytes() +} + +pubkeyAny, err := types.NewAnyWithValue(pubkey) + if err != nil { + tmos.Exit(err.Error()) +} + + / STAKING + / + + / Create Validator struct for our new validator. + _, bz, err := bech32.DecodeAndConvert(newOperatorAddress) + if err != nil { + tmos.Exit(err.Error()) +} + +bech32Addr, err := bech32.ConvertAndEncode("simvaloper", bz) + if err != nil { + tmos.Exit(err.Error()) +} + newVal := stakingtypes.Validator{ + OperatorAddress: bech32Addr, + ConsensusPubkey: pubkeyAny, + Jailed: false, + Status: stakingtypes.Bonded, + Tokens: sdk.NewInt(900000000000000), + DelegatorShares: sdk.MustNewDecFromStr("10000000"), + Description: stakingtypes.Description{ + Moniker: "Testnet Validator", +}, + Commission: stakingtypes.Commission{ + CommissionRates: stakingtypes.CommissionRates{ + Rate: sdk.MustNewDecFromStr("0.05"), + MaxRate: sdk.MustNewDecFromStr("0.1"), + MaxChangeRate: sdk.MustNewDecFromStr("0.05"), +}, +}, + MinSelfDelegation: sdk.OneInt(), +} + + / Remove all validators from power store + stakingKey := app.GetKey(stakingtypes.ModuleName) + stakingStore := ctx.KVStore(stakingKey) + iterator := app.StakingKeeper.ValidatorsPowerStoreIterator(ctx) + for ; iterator.Valid(); iterator.Next() { + stakingStore.Delete(iterator.Key()) +} + +iterator.Close() + + / Remove all validators from last validators store + iterator = app.StakingKeeper.LastValidatorsIterator(ctx) + for ; iterator.Valid(); iterator.Next() { + app.StakingKeeper.LastValidatorPower.Delete(iterator.Key()) +} + +iterator.Close() + + / Add our validator to power and last validators store + app.StakingKeeper.SetValidator(ctx, newVal) + +err = app.StakingKeeper.SetValidatorByConsAddr(ctx, newVal) + if err != nil { + panic(err) +} + +app.StakingKeeper.SetValidatorByPowerIndex(ctx, newVal) + +app.StakingKeeper.SetLastValidatorPower(ctx, newVal.GetOperator(), 0) + if 
err := app.StakingKeeper.Hooks().AfterValidatorCreated(ctx, newVal.GetOperator()); err != nil { + panic(err) +} +``` + +#### Distribution + +Since the validator set has changed, we need to update the distribution records for the new validator. + +```go +/ Initialize records for this validator across all distribution stores + app.DistrKeeper.ValidatorHistoricalRewards.Set(ctx, newVal.GetOperator(), 0, distrtypes.NewValidatorHistoricalRewards(sdk.DecCoins{ +}, 1)) + +app.DistrKeeper.ValidatorCurrentRewards.Set(ctx, newVal.GetOperator(), distrtypes.NewValidatorCurrentRewards(sdk.DecCoins{ +}, 1)) + +app.DistrKeeper.ValidatorAccumulatedCommission.Set(ctx, newVal.GetOperator(), distrtypes.InitialValidatorAccumulatedCommission()) + +app.DistrKeeper.ValidatorOutstandingRewards.Set(ctx, newVal.GetOperator(), distrtypes.ValidatorOutstandingRewards{ + Rewards: sdk.DecCoins{ +}}) +``` + +#### Slashing + +We also need to set the validator signing info for the new validator. + +```go expandable +/ SLASHING + / + + / Set validator signing info for our new validator. + newConsAddr := sdk.ConsAddress(newValAddr.Bytes()) + newValidatorSigningInfo := slashingtypes.ValidatorSigningInfo{ + Address: newConsAddr.String(), + StartHeight: app.LastBlockHeight() - 1, + Tombstoned: false, +} + +app.SlashingKeeper.ValidatorSigningInfo.Set(ctx, newConsAddr, newValidatorSigningInfo) +``` + +#### Bank + +It is useful to create new accounts for your testing purposes. This avoids the need to have the same key as you may have on mainnet. 
+ +```go expandable +/ BANK + / + defaultCoins := sdk.NewCoins(sdk.NewInt64Coin("ustake", 1000000000000)) + localSimAppAccounts := []sdk.AccAddress{ + sdk.MustAccAddressFromBech32("cosmos12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj"), + sdk.MustAccAddressFromBech32("cosmos1cyyzpxplxdzkeea7kwsydadg87357qnahakaks"), + sdk.MustAccAddressFromBech32("cosmos18s5lynnmx37hq4wlrw9gdn68sg2uxp5rgk26vv"), + sdk.MustAccAddressFromBech32("cosmos1qwexv7c6sm95lwhzn9027vyu2ccneaqad4w8ka"), + sdk.MustAccAddressFromBech32("cosmos14hcxlnwlqtq75ttaxf674vk6mafspg8xwgnn53"), + sdk.MustAccAddressFromBech32("cosmos12rr534cer5c0vj53eq4y32lcwguyy7nndt0u2t"), + sdk.MustAccAddressFromBech32("cosmos1nt33cjd5auzh36syym6azgc8tve0jlvklnq7jq"), + sdk.MustAccAddressFromBech32("cosmos10qfrpash5g2vk3hppvu45x0g860czur8ff5yx0"), + sdk.MustAccAddressFromBech32("cosmos1f4tvsdukfwh6s9swrc24gkuz23tp8pd3e9r5fa"), + sdk.MustAccAddressFromBech32("cosmos1myv43sqgnj5sm4zl98ftl45af9cfzk7nhjxjqh"), + sdk.MustAccAddressFromBech32("cosmos14gs9zqh8m49yy9kscjqu9h72exyf295afg6kgk"), + sdk.MustAccAddressFromBech32("cosmos1jllfytsz4dryxhz5tl7u73v29exsf80vz52ucc") +} + + / Fund localSimApp accounts + for _, account := range localSimAppAccounts { + err := app.BankKeeper.MintCoins(ctx, minttypes.ModuleName, defaultCoins) + if err != nil { + tmos.Exit(err.Error()) +} + +err = app.BankKeeper.SendCoinsFromModuleToAccount(ctx, minttypes.ModuleName, account, defaultCoins) + if err != nil { + tmos.Exit(err.Error()) +} + +} +``` + +#### Upgrade + +If you would like to schedule an upgrade the below can be used. 
+ +```go expandable +/ UPGRADE + / + if upgradeToTrigger != "" { + upgradePlan := upgradetypes.Plan{ + Name: upgradeToTrigger, + Height: app.LastBlockHeight(), +} + +err = app.UpgradeKeeper.ScheduleUpgrade(ctx, upgradePlan) + if err != nil { + panic(err) +} + +} +``` + +### Optional Changes + +If you have custom modules that rely on specific state from the above modules and/or you would like to test your custom module, you will need to update the state of your custom module to reflect your needs + +## Running the Testnet + +Before we can run the testnet we must plug everything together. + +in `root.go`, in the `initRootCmd` function we add: + +```diff + server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, createSimAppAndExport, addModuleInitFlags) + ++ server.AddTestnetCreatorCommand(rootCmd, simapp.DefaultNodeHome, newTestnetApp, addModuleInitFlags) +``` + +Next we will add a newTestnetApp helper function: + +```diff expandable +/ newTestnetApp starts by running the normal newApp method. From there, the app interface returned is modified in order +/ for a testnet to be created from the provided app. 
+func newTestnetApp(logger log.Logger, db cometbftdb.DB, traceStore io.Writer, appOpts servertypes.AppOptions) servertypes.Application { + / Create an app and type cast to an SimApp + app := newApp(logger, db, traceStore, appOpts) + simApp, ok := app.(*simapp.SimApp) + if !ok { + panic("app created from newApp is not of type simApp") + } + + newValAddr, ok := appOpts.Get(server.KeyNewValAddr).(bytes.HexBytes) + if !ok { + panic("newValAddr is not of type bytes.HexBytes") + } + newValPubKey, ok := appOpts.Get(server.KeyUserPubKey).(crypto.PubKey) + if !ok { + panic("newValPubKey is not of type crypto.PubKey") + } + newOperatorAddress, ok := appOpts.Get(server.KeyNewOpAddr).(string) + if !ok { + panic("newOperatorAddress is not of type string") + } + upgradeToTrigger, ok := appOpts.Get(server.KeyTriggerTestnetUpgrade).(string) + if !ok { + panic("upgradeToTrigger is not of type string") + } + + / Make modifications to the normal SimApp required to run the network locally + return simapp.InitSimAppForTestnet(simApp, newValAddr, newValPubKey, newOperatorAddress, upgradeToTrigger) +} +``` diff --git a/docs/sdk/v0.53/documentation/operations/app-upgrade.mdx b/docs/sdk/v0.53/documentation/operations/app-upgrade.mdx new file mode 100644 index 00000000..cb466893 --- /dev/null +++ b/docs/sdk/v0.53/documentation/operations/app-upgrade.mdx @@ -0,0 +1,219 @@ +--- +title: Application Upgrade +--- + + +This document describes how to upgrade your application. If you are looking specifically for the changes to perform between SDK versions, see the [SDK migrations documentation](https://docs.cosmos.network/main/migrations/intro). + + + +This section is currently incomplete. Track the progress of this document [here](https://github.com/cosmos/cosmos-sdk/issues/11504). 

**Pre-requisite Readings**

* [`x/upgrade` Documentation](https://docs.cosmos.network/main/modules/upgrade)

## General Workflow

Let's assume we are running v0.38.0 of our software in our testnet and want to upgrade to v0.40.0.
How would this look in practice? First, we want to finalize the v0.40.0 release candidate
and then install a specially named upgrade handler (e.g. "testnet-v2" or even "v0.40.0"). An upgrade
handler should be defined in the new version of the software to specify what migrations
to run to migrate from the older version of the software. Naturally, this is app-specific rather
than module-specific, and must be defined in `app.go`, even if it imports logic from various
modules to perform the actions. You can register them with `upgradeKeeper.SetUpgradeHandler`
during the app initialization (before starting the ABCI server), and they serve not only to
perform a migration, but also to identify if this is the old or new version (e.g. presence of
a handler registered for the named upgrade).

Once the release candidate along with an appropriate upgrade handler is frozen,
we can have a governance vote to approve this upgrade at some future block height (e.g. 200000).
This is known as an `upgrade.Plan`. The v0.38.0 code will not know of this handler, but will
continue to run until block 200000, when the plan kicks in at `BeginBlock`. It will check
for the existence of the handler, and finding it missing, know that it is running obsolete software,
and gracefully exit.

Generally the application binary will restart on exit, but then will execute this BeginBlocker
again and exit, causing a restart loop. Either the operator can manually install the new software,
or you can make use of an external watcher daemon to possibly download and then switch binaries,
also potentially doing a backup. The SDK tool for doing this is called [Cosmovisor](https://docs.cosmos.network/main/tooling/cosmovisor).

When the binary restarts with the upgraded version (here v0.40.0), it will detect that we have registered the
"testnet-v2" upgrade handler in the code, and realize it is the new version. It will then run the upgrade handler
and *migrate the database in-place*. Once finished, it marks the upgrade as done and continues processing
the rest of the block as normal. Once 2/3 of the voting power has upgraded, the blockchain will immediately
resume the consensus mechanism. If the majority of operators add a custom `do-upgrade` script, this should
be a matter of minutes and not even require them to be awake at that time.

## Integrating With An App


The following is not required for users using `depinject`, as this is abstracted for them.


In addition to basic module wiring, set up the upgrade keeper for the app and then define a `PreBlocker` that calls the upgrade
keeper's `PreBlocker` method:

```go
func (app *myApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
    // For demonstration's sake, the app PreBlocker only returns the upgrade module pre-blocker.
    // In a real app, the module manager should call all pre-blockers:
    // return app.ModuleManager.PreBlock(ctx, req)
    return app.upgradeKeeper.PreBlocker(ctx, req)
}
```

The app must then integrate the upgrade keeper with its governance module as appropriate. The governance module
should call `ScheduleUpgrade` to schedule an upgrade and `ClearUpgradePlan` to cancel a pending upgrade.

## Performing Upgrades

Upgrades can be scheduled at a predefined block height. Once this block height is reached, the
existing software will cease to process ABCI messages and a new version with code that handles the upgrade must be deployed.
All upgrades are coordinated by a unique upgrade name that cannot be reused on the same blockchain.
In order for the upgrade
module to know that the upgrade has been safely applied, a handler with the name of the upgrade must be installed.
Here is an example handler for an upgrade named "my-fancy-upgrade":

```go
app.upgradeKeeper.SetUpgradeHandler("my-fancy-upgrade", func(ctx context.Context, plan upgrade.Plan) {
    // Perform any migrations of the state store needed for this upgrade
})
```

This upgrade handler performs the dual function of alerting the upgrade module that the named upgrade has been applied,
as well as providing the opportunity for the upgraded software to perform any necessary state migrations. Both the halt
(with the old binary) and applying the migration (with the new binary) are enforced in the state machine. Actually
switching the binaries is an ops task and is not handled inside the SDK / ABCI app.

Here is sample code for setting store migrations with an upgrade:

```go expandable
// this configures a no-op upgrade handler for the "my-fancy-upgrade" upgrade
app.UpgradeKeeper.SetUpgradeHandler("my-fancy-upgrade", func(ctx context.Context, plan upgrade.Plan) {
    // upgrade changes here
})

upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
if err != nil {
    // handle error
}

if upgradeInfo.Name == "my-fancy-upgrade" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
    storeUpgrades := store.StoreUpgrades{
        Renamed: []store.StoreRename{{
            OldKey: "foo",
            NewKey: "bar",
        }},
        Deleted: []string{},
    }

    // configure store loader that checks if version == upgradeHeight and applies store upgrades
    app.SetStoreLoader(upgrade.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
}
```

## Halt Behavior

Before halting the ABCI state machine in the BeginBlocker method, the upgrade module will log an error
that looks like:

```text
UPGRADE "<Name>" NEEDED at height <Height>: <Info>
```

where `Name` and `Info` are the values of the respective fields on the upgrade Plan.

To perform the actual halt of the blockchain, the upgrade keeper simply panics, which prevents the ABCI state machine
from proceeding but doesn't actually exit the process. Exiting the process can cause issues for other nodes that start
to lose connectivity with the exiting nodes, so this module prefers to halt but not exit.

## Automation

Read more about [Cosmovisor](https://docs.cosmos.network/main/tooling/cosmovisor), the tool for automating upgrades.

## Canceling Upgrades

There are two ways to cancel a planned upgrade: with on-chain governance or off-chain social consensus.
For the first, there is a `CancelSoftwareUpgrade` governance proposal, which can be voted on and will
remove the scheduled upgrade plan. Of course this requires that the upgrade was known to be a bad idea
well before the upgrade itself, to allow time for a vote. If you want to allow such a possibility, you
should set the upgrade height to be `2 * (votingperiod + depositperiod) + (safety delta)` from the beginning of
the first upgrade proposal. The safety delta is the time available between the success of an upgrade proposal
and the realization that it was a bad idea (due to external testing). You can also start a `CancelSoftwareUpgrade`
proposal while the original `SoftwareUpgrade` proposal is still being voted upon, as long as the voting
period ends after the `SoftwareUpgrade` proposal.

However, let's assume that we don't realize the upgrade has a bug until shortly before it will occur
(or while we try it out, hitting some panic in the migration). It would seem the blockchain is stuck,
but we need to allow an escape for social consensus to overrule the planned upgrade. To do so, there's
a `--unsafe-skip-upgrades` flag for the start command, which will cause the node to mark the upgrade
as done upon hitting the planned upgrade height(s), without halting and without actually performing a migration.
If over two-thirds run their nodes with this flag on the old binary, it will allow the chain to continue through
the upgrade with a manual override. (This must be well-documented for anyone syncing from genesis later on.)

Example:

```shell
<appd> start --unsafe-skip-upgrades <height> ...
```

## Pre-Upgrade Handling

Cosmovisor supports custom pre-upgrade handling. Use pre-upgrade handling when you need to implement application config changes that are required in the newer version before you perform the upgrade.

Using Cosmovisor pre-upgrade handling is optional. If pre-upgrade handling is not implemented, the upgrade continues.

For example, make the required new-version changes to `app.toml` settings during the pre-upgrade handling. The pre-upgrade handling process means that the file does not have to be manually updated after the upgrade.

Before the application binary is upgraded, Cosmovisor calls a `pre-upgrade` command that can be implemented by the application.

The `pre-upgrade` command does not take in any command-line arguments and is expected to terminate with the following exit codes:

| Exit status code | How it is handled in Cosmovisor                                                                                |
| ---------------- | -------------------------------------------------------------------------------------------------------------- |
| `0`              | Assumes the `pre-upgrade` command executed successfully and continues the upgrade.                              |
| `1`              | Default exit code when the `pre-upgrade` command has not been implemented.                                      |
| `30`             | `pre-upgrade` command was executed but failed. This fails the entire upgrade.                                   |
| `31`             | `pre-upgrade` command was executed but failed. The command is retried until exit code `1` or `30` is returned.
|

## Sample

Here is a sample structure of the `pre-upgrade` command:

```go expandable
func preUpgradeCommand() *cobra.Command {
    cmd := &cobra.Command{
        Use:   "pre-upgrade",
        Short: "Pre-upgrade command",
        Long:  "Pre-upgrade command to implement custom pre-upgrade handling",
        Run: func(cmd *cobra.Command, args []string) {
            err := HandlePreUpgrade()
            if err != nil {
                os.Exit(30)
            }

            os.Exit(0)
        },
    }

    return cmd
}
```

Ensure that the pre-upgrade command has been registered in the application:

```go
rootCmd.AddCommand(
    // ..
    preUpgradeCommand(),
    // ..
)
```

When not using Cosmovisor, ensure you run `<appd> pre-upgrade` before starting the application binary.
diff --git a/docs/sdk/v0.53/documentation/operations/config.mdx b/docs/sdk/v0.53/documentation/operations/config.mdx
new file mode 100644
index 00000000..a9526711
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/config.mdx
@@ -0,0 +1,288 @@
---
title: Configuration
description: >-
  This documentation refers to the app.toml; if you'd like to read about the
  config.toml, please visit the CometBFT docs.
---

This documentation refers to the `app.toml`; if you'd like to read about the `config.toml`, please visit the [CometBFT docs](https://docs.cometbft.com/v0.37/).

{/* the following is not a python reference, however syntax coloring makes the file more readable in the docs */}

```python
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

###############################################################################
###                           Base Configuration                            ###
###############################################################################

# The minimum gas prices a validator is willing to accept for processing a
# transaction. A transaction's fees must meet the minimum of any denomination
# specified in this config (e.g. 0.25token1,0.0001token2).
+minimum-gas-prices = "0stake" + +# default: the last 362880 states are kept, pruning at 10 block intervals +# nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) +# everything: 2 latest states will be kept; pruning at 10 block intervals. +# custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' +pruning = "default" + +# These are applied if and only if the pruning strategy is custom. +pruning-keep-recent = "0" +pruning-interval = "0" + +# HaltHeight contains a non-zero block height at which a node will gracefully +# halt and shutdown that can be used to assist upgrades and testing. +# +# Note: Commitment of state will be attempted on the corresponding block. +halt-height = 0 + +# HaltTime contains a non-zero minimum block time (in Unix seconds) at which +# a node will gracefully halt and shutdown that can be used to assist upgrades +# and testing. +# +# Note: Commitment of state will be attempted on the corresponding block. +halt-time = 0 + +# MinRetainBlocks defines the minimum block height offset from the current +# block being committed, such that all blocks past this offset are pruned +# from Tendermint. It is used as part of the process of determining the +# ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates +# that no blocks should be pruned. +# +# This configuration value is only responsible for pruning Tendermint blocks. +# It has no bearing on application state pruning which is determined by the +# "pruning-*" configurations. +# +# Note: Tendermint block pruning is dependent on this parameter in conjunction +# with the unbonding (safety threshold) period, state pruning and state sync +# snapshot parameters to determine the correct minimum value of +# ResponseCommit.RetainHeight. +min-retain-blocks = 0 + +# InterBlockCache enables inter-block caching. 
+inter-block-cache = true + +# IndexEvents defines the set of events in the form {eventType}.{attributeKey}, +# which informs Tendermint what to index. If empty, all events will be indexed. +# +# Example: +# ["message.sender", "message.recipient"] +index-events = [] + +# IavlCacheSize set the size of the iavl tree cache (in number of nodes). +iavl-cache-size = 781250 + +# IAVLDisableFastNode enables or disables the fast node feature of IAVL. +# Default is false. +iavl-disable-fastnode = false + +# IAVLLazyLoading enable/disable the lazy loading of iavl store. +# Default is false. +iavl-lazy-loading = false + +# AppDBBackend defines the database backend type to use for the application and snapshots DBs. +# An empty string indicates that a fallback will be used. +# The fallback is the db_backend value set in Tendermint's config.toml. +app-db-backend = "" + +############################################################################### +### Telemetry Configuration ### +############################################################################### + +[telemetry] + +# Prefixed with keys to separate services. +service-name = "" + +# Enabled enables the application telemetry functionality. When enabled, +# an in-memory sink is also enabled by default. Operators may also enabled +# other sinks such as Prometheus. +enabled = false + +# Enable prefixing gauge values with hostname. +enable-hostname = false + +# Enable adding hostname to labels. +enable-hostname-label = false + +# Enable adding service to labels. +enable-service-label = false + +# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink. +prometheus-retention-time = 0 + +# GlobalLabels defines a global set of name/value label tuples applied to all +# metrics emitted using the wrapper functions defined in telemetry package. 
+# +# Example: +# [["chain_id", "cosmoshub-1"]] +global-labels = [] + +############################################################################### +### API Configuration ### +############################################################################### + +[api] + +# Enable defines if the API server should be enabled. +enable = false + +# Swagger defines if swagger documentation should automatically be registered. +swagger = false + +# Address defines the API server to listen on. +address = "tcp://localhost:1317" + +# MaxOpenConnections defines the number of maximum open connections. +max-open-connections = 1000 + +# RPCReadTimeout defines the Tendermint RPC read timeout (in seconds). +rpc-read-timeout = 10 + +# RPCWriteTimeout defines the Tendermint RPC write timeout (in seconds). +rpc-write-timeout = 0 + +# RPCMaxBodyBytes defines the Tendermint maximum request body (in bytes). +rpc-max-body-bytes = 1000000 + +# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk). +enabled-unsafe-cors = false + +############################################################################### +### Rosetta Configuration ### +############################################################################### + +[rosetta] + +# Enable defines if the Rosetta API server should be enabled. +enable = false + +# Address defines the Rosetta API server to listen on. +address = ":8080" + +# Network defines the name of the blockchain that will be returned by Rosetta. +blockchain = "app" + +# Network defines the name of the network that will be returned by Rosetta. +network = "network" + +# Retries defines the number of retries when connecting to the node before failing. +retries = 3 + +# Offline defines if Rosetta server should run in offline mode. +offline = false + +# EnableDefaultSuggestedFee defines if the server should suggest fee by default. 
+# If 'construction/medata' is called without gas limit and gas price, +# suggested fee based on gas-to-suggest and denom-to-suggest will be given. +enable-fee-suggestion = false + +# GasToSuggest defines gas limit when calculating the fee +gas-to-suggest = 200000 + +# DenomToSuggest defines the default denom for fee suggestion. +# Price must be in minimum-gas-prices. +denom-to-suggest = "uatom" + +############################################################################### +### gRPC Configuration ### +############################################################################### + +[grpc] + +# Enable defines if the gRPC server should be enabled. +enable = true + +# Address defines the gRPC server address to bind to. +address = "localhost:9090" + +# MaxRecvMsgSize defines the max message size in bytes the server can receive. +# The default value is 10MB. +max-recv-msg-size = "10485760" + +# MaxSendMsgSize defines the max message size in bytes the server can send. +# The default value is math.MaxInt32. +max-send-msg-size = "2147483647" + +############################################################################### +### gRPC Web Configuration ### +############################################################################### + +[grpc-web] + +# GRPCWebEnable defines if the gRPC-web should be enabled. +# NOTE: gRPC must also be enabled, otherwise, this configuration is a no-op. +enable = true + +# Address defines the gRPC-web server address to bind to. +address = "localhost:9091" + +# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk). 
+enable-unsafe-cors = false + +############################################################################### +### State Sync Configuration ### +############################################################################### + +# State sync snapshots allow other nodes to rapidly join the network without replaying historical +# blocks, instead downloading and applying a snapshot of the application state at a given height. +[state-sync] + +# snapshot-interval specifies the block interval at which local state sync snapshots are +# taken (0 to disable). +snapshot-interval = 0 + +# snapshot-keep-recent specifies the number of recent snapshots to keep and serve (0 to keep all). +snapshot-keep-recent = 2 + +############################################################################### +### Store / State Streaming ### +############################################################################### + +[store] +streamers = [] + +[streamers] +[streamers.file] +keys = ["*"] +write_dir = "" +prefix = "" + +# output-metadata specifies if output the metadata file which includes the abci request/responses +# during processing the block. +output-metadata = "true" + +# stop-node-on-error specifies if propagate the file streamer errors to consensus state machine. +stop-node-on-error = "true" + +# fsync specifies if call fsync after writing the files. +fsync = "false" + +############################################################################### +### Mempool ### +############################################################################### + +[mempool] +# Setting max-txs to 0 will allow for a unbounded amount of transactions in the mempool. +# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool. +# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount. +# +# Note, this configuration only applies to SDK built-in app-side mempool +# implementations. 
+max-txs = 5000
+
+```
+
+## inter-block-cache
+
+If enabled, this feature will consume more RAM than a normal node.
+
+## iavl-cache-size
+
+Increasing this cache size will increase RAM consumption.
+
+## iavl-lazy-loading
+
+This feature is intended for archive nodes, allowing them to have a faster start-up time.
diff --git a/docs/sdk/v0.53/documentation/operations/confix.mdx b/docs/sdk/v0.53/documentation/operations/confix.mdx
new file mode 100644
index 00000000..77031a64
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/confix.mdx
@@ -0,0 +1,157 @@
+---
+title: Confix
+description: >-
+  Confix is a configuration management tool that allows you to manage your
+  configuration via CLI.
+---
+
+`Confix` is a configuration management tool that allows you to manage your configuration via CLI.
+
+It is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md).
+
+## Installation
+
+### Add Config Command
+
+To add the confix tool, add the `ConfigCommand` to your application's root command file (e.g. `/cmd/root.go`).
+
+Import the `confixcmd` package:
+
+```go
+import "cosmossdk.io/tools/confix/cmd"
+```
+
+Find the following line:
+
+```go
+initRootCmd(rootCmd, moduleManager)
+```
+
+After that line, add the following:
+
+```go
+rootCmd.AddCommand(
+	confixcmd.ConfigCommand(),
+)
+```
+
+The `ConfigCommand` function builds the `config` root command and is defined in the `confixcmd` package (`cosmossdk.io/tools/confix/cmd`).
+An implementation example can be found in `simapp`.
+
+The command will be available as `simd config`.
+
+
+Using confix directly in the application may have fewer features than using it standalone.
+This is because confix is versioned with the SDK, while `latest` is the standalone version.
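For intuition about what the `config` get/set subcommands do under the hood, here is a deliberately naive sketch of a `get` over an `app.toml` excerpt. This is illustrative only — the real confix uses a proper TOML parser and handles sections, nested keys, and comments; `naiveGet` is a hypothetical helper, not confix API.

```go
package main

import (
	"fmt"
	"strings"
)

// naiveGet scans TOML text for a top-level `key = value` line and returns
// the raw value. A toy stand-in for what `config get` does; the real
// confix understands sections, nested keys, and comments.
func naiveGet(toml, key string) (string, bool) {
	for _, line := range strings.Split(toml, "\n") {
		line = strings.TrimSpace(line)
		rest, found := strings.CutPrefix(line, key)
		if !found {
			continue
		}
		rest = strings.TrimSpace(rest)
		if value, ok := strings.CutPrefix(rest, "="); ok {
			return strings.TrimSpace(value), true
		}
	}
	return "", false
}

func main() {
	appToml := `
minimum-gas-prices = "0stake"
pruning = "default"
`
	if v, ok := naiveGet(appToml, "pruning"); ok {
		fmt.Println(v) // prints: "default" (the raw value, quotes included)
	}
}
```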
+
+
+### Using Confix Standalone
+
+To use Confix standalone, without having to add it to your application, install it with the following command:
+
+```bash
+go install cosmossdk.io/tools/confix/cmd/confix@latest
+```
+
+Alternatively, for building from source, simply run `make confix`. The binary will be located in `tools/confix`.
+
+## Usage
+
+Use standalone:
+
+```shell
+confix --help
+```
+
+Use in simd:
+
+```shell
+simd config fix --help
+```
+
+### Get
+
+Get a configuration value, e.g.:
+
+```shell
+simd config get app pruning # gets the value pruning from app.toml
+simd config get client chain-id # gets the value chain-id from client.toml
+```
+
+```shell
+confix get ~/.simapp/config/app.toml pruning # gets the value pruning from app.toml
+confix get ~/.simapp/config/client.toml chain-id # gets the value chain-id from client.toml
+```
+
+### Set
+
+Set a configuration value, e.g.:
+
+```shell
+simd config set app pruning "enabled" # sets the value pruning in app.toml
+simd config set client chain-id "foo-1" # sets the value chain-id in client.toml
+```
+
+```shell
+confix set ~/.simapp/config/app.toml pruning "enabled" # sets the value pruning in app.toml
+confix set ~/.simapp/config/client.toml chain-id "foo-1" # sets the value chain-id in client.toml
+```
+
+### Migrate
+
+Migrate a configuration file to a new version. The config type defaults to `app.toml`; if you want to change it to `client.toml`, indicate it by adding the optional parameter, e.g.:
+
+```shell
+simd config migrate v0.50 # migrates defaultHome/config/app.toml to the latest v0.50 config
+simd config migrate v0.50 --client # migrates defaultHome/config/client.toml to the latest v0.50 config
+```
+
+```shell
+confix migrate v0.50 ~/.simapp/config/app.toml # migrate ~/.simapp/config/app.toml to the latest v0.50 config
+confix migrate v0.50 ~/.simapp/config/client.toml --client # migrate ~/.simapp/config/client.toml to the latest v0.50 config
+```
+
+### Diff
+
+Get the diff
between a given configuration file and the default configuration file, e.g.:
+
+```shell
+simd config diff v0.47 # gets the diff between defaultHome/config/app.toml and the latest v0.47 config
+simd config diff v0.47 --client # gets the diff between defaultHome/config/client.toml and the latest v0.47 config
+```
+
+```shell
+confix diff v0.47 ~/.simapp/config/app.toml # gets the diff between ~/.simapp/config/app.toml and the latest v0.47 config
+confix diff v0.47 ~/.simapp/config/client.toml --client # gets the diff between ~/.simapp/config/client.toml and the latest v0.47 config
+```
+
+### View
+
+View a configuration file, e.g.:
+
+```shell
+simd config view client # views the current app client config
+```
+
+```shell
+confix view ~/.simapp/config/client.toml # views the current app client config
+```
+
+### Maintainer
+
+Each time the SDK's default configuration changes, add the new default SDK config under `data/vXX-app.toml`.
+This allows users to use the tool standalone.
+
+### Compatibility
+
+The recommended standalone version is `latest`, which uses the latest development version of Confix.
+
+| SDK Version | Confix Version |
+| ----------- | -------------- |
+| v0.50       | v0.1.x         |
+| v0.52       | v0.2.x         |
+| v2          | v0.2.x         |
+
+## Credits
+
+This project is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md) and their own, never-released implementation of [confix](https://github.com/cometbft/cometbft/blob/v0.36.x/scripts/confix/confix.go).
diff --git a/docs/sdk/v0.53/documentation/operations/cosmovisor.mdx b/docs/sdk/v0.53/documentation/operations/cosmovisor.mdx
new file mode 100644
index 00000000..b18e76a7
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/cosmovisor.mdx
@@ -0,0 +1,409 @@
+---
+title: Cosmovisor
+---
+
+`cosmovisor` is a process manager for Cosmos SDK application binaries that automates the application binary switch at chain upgrades.
+It polls the `upgrade-info.json` file that is created by the x/upgrade module at upgrade height, and then can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary.
+
+* [Design](#design)
+* [Contributing](#contributing)
+* [Setup](#setup)
+  * [Installation](#installation)
+  * [Command Line Arguments And Environment Variables](#command-line-arguments-and-environment-variables)
+  * [Folder Layout](#folder-layout)
+* [Usage](#usage)
+  * [Initialization](#initialization)
+  * [Detecting Upgrades](#detecting-upgrades)
+  * [Adding Upgrade Binary](#adding-upgrade-binary)
+  * [Auto-Download](#auto-download)
+  * [Preparing for an Upgrade](#preparing-for-an-upgrade)
+* [Example: SimApp Upgrade](#example-simapp-upgrade)
+  * [Chain Setup](#chain-setup)
+  * [Prepare Cosmovisor and Start the Chain](#prepare-cosmovisor-and-start-the-chain)
+  * [Update App](#update-app)
+
+## Design
+
+Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app:
+
+* it will pass arguments to the associated app (configured by `DAEMON_NAME` env variable).
+  Running `cosmovisor run arg1 arg2 ...` will run `app arg1 arg2 ...`;
+* it will manage an app by restarting and upgrading if needed;
+* it is configured using environment variables, not positional arguments.
+
+*Note: If new versions of the application are not set up to run in-place store migrations, migrations will need to be run manually before restarting `cosmovisor` with the new binary. For this reason, we recommend applications adopt in-place store migrations.*
+
+
+Only the latest version of cosmovisor is actively developed/maintained.
+
+
+
+Versions prior to v1.0.0 have a vulnerability that could lead to a DoS. Please upgrade to the latest version.
+
+
+## Contributing
+
+Cosmovisor is part of the Cosmos SDK monorepo, but it's a separate module with its own release schedule.
+
+
+Release branches have the following format: `release/cosmovisor/vA.B.x`, where A and B are numbers (e.g. `release/cosmovisor/v1.3.x`). Releases are tagged using the following format: `cosmovisor/vA.B.C`.
+
+## Setup
+
+### Installation
+
+You can download Cosmovisor from the [GitHub releases](https://github.com/cosmos/cosmos-sdk/releases/tag/cosmovisor%2Fv1.5.0).
+
+To install the latest version of `cosmovisor`, run the following command:
+
+```shell
+go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@latest
+```
+
+To install a specific version, you can specify the version:
+
+```shell
+go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@v1.5.0
+```
+
+Run `cosmovisor version` to check the cosmovisor version.
+
+Alternatively, for building from source, simply run `make cosmovisor`. The binary will be located in `tools/cosmovisor`.
+
+
+Installing cosmovisor using `go install` will display the correct `cosmovisor` version.
+Building from source (`make cosmovisor`) or installing `cosmovisor` by other means won't display the correct version.
+
+
+### Command Line Arguments And Environment Variables
+
+The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are:
+
+* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration.
+* `run` - Run the configured binary using the rest of the provided arguments.
+* `version` - Output the `cosmovisor` version and also run the binary with the `version` argument.
+* `config` - Display the current `cosmovisor` configuration, i.e. the environment variable values that `cosmovisor` is using.
+* `add-upgrade` - Add an upgrade manually to `cosmovisor`. This command allows you to easily add the binary corresponding to an upgrade to cosmovisor.
+
+All arguments passed to `cosmovisor run` will be passed to the application binary (as a subprocess).
`cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor run` cannot accept any command-line arguments other than those available to the application binary.
+
+`cosmovisor` reads its configuration from environment variables, or its configuration file (use `--cosmovisor-config `):
+
+* `DAEMON_HOME` is the location where the `cosmovisor/` directory is kept that contains the genesis binary, the upgrade binaries, and any additional auxiliary files associated with each binary (e.g. `$HOME/.gaiad`, `$HOME/.regend`, `$HOME/.simd`, etc.).
+* `DAEMON_NAME` is the name of the binary itself (e.g. `gaiad`, `regend`, `simd`, etc.).
+* `DAEMON_ALLOW_DOWNLOAD_BINARIES` (*optional*), if set to `true`, will enable auto-downloading of new binaries (for security reasons, this is intended for full nodes rather than validators). By default, `cosmovisor` will not auto-download new binaries.
+* `DAEMON_DOWNLOAD_MUST_HAVE_CHECKSUM` (*optional*, default = `false`), if `true`, cosmovisor will require that a checksum is provided in the upgrade plan for the binary to be downloaded. If `false`, cosmovisor will not require a checksum to be provided, but will still check the checksum if one is provided.
+* `DAEMON_RESTART_AFTER_UPGRADE` (*optional*, default = `true`), if `true`, restarts the subprocess with the same command-line arguments and flags (but with the new binary) after a successful upgrade. Otherwise (`false`), `cosmovisor` stops running after an upgrade and requires the system administrator to manually restart it. Note that the restart happens only after an upgrade; `cosmovisor` does not auto-restart the subprocess after an error occurs.
+* `DAEMON_RESTART_DELAY` (*optional*, default none), allows a node operator to define a delay between the node halt (for upgrade) and the backup. The value must be a duration (e.g. `1s`).
+* `DAEMON_SHUTDOWN_GRACE` (*optional*, default none), if set, sends an interrupt to the binary and waits the specified time to allow for cleanup/cache flush to disk before sending the kill signal. The value must be a duration (e.g. `1s`).
+* `DAEMON_POLL_INTERVAL` (*optional*, default 300 milliseconds), is the interval length for polling the upgrade plan file. The value must be a duration (e.g. `1s`).
+* `DAEMON_DATA_BACKUP_DIR` (*optional*), sets a custom backup directory. If not set, `DAEMON_HOME` is used.
+* `UNSAFE_SKIP_BACKUP` (defaults to `false`), if set to `true`, upgrades directly without performing a backup. Otherwise (`false`, default) backs up the data before trying the upgrade. The default value of `false` is recommended in case of failures and when a backup is needed to roll back. We recommend using the default backup option `UNSAFE_SKIP_BACKUP=false`.
+* `DAEMON_PREUPGRADE_MAX_RETRIES` (defaults to `0`). The maximum number of times to call [`pre-upgrade`](https://docs.cosmos.network/main/build/building-apps/app-upgrade#pre-upgrade-handling) in the application after exit status of `31`. After the maximum number of retries, Cosmovisor fails the upgrade.
+* `COSMOVISOR_DISABLE_LOGS` (defaults to `false`). If set to true, this will disable Cosmovisor logs (but not the underlying process) completely. This may be useful, for example, when a Cosmovisor subcommand you are executing returns valid JSON that you are then parsing, as logs added by Cosmovisor would make the output invalid JSON.
+* `COSMOVISOR_COLOR_LOGS` (defaults to `true`). If set to true, this will colorise Cosmovisor logs (but not the underlying process).
+* `COSMOVISOR_TIMEFORMAT_LOGS` (defaults to `kitchen`). If set to a value (`layout|ansic|unixdate|rubydate|rfc822|rfc822z|rfc850|rfc1123|rfc1123z|rfc3339|rfc3339nano|kitchen`), this will add a timestamp prefix to Cosmovisor logs (but not the underlying process).
+* `COSMOVISOR_CUSTOM_PREUPGRADE` (defaults to empty).
If set, this will run `$DAEMON_HOME/cosmovisor/$COSMOVISOR_CUSTOM_PREUPGRADE` prior to the upgrade with the arguments `[upgrade.Name, upgrade.Height]`. Executes a custom script (separate from, and prior to, the chain daemon pre-upgrade command).
+* `COSMOVISOR_DISABLE_RECASE` (defaults to `false`). If set to true, the upgrade directory will be expected to match the upgrade plan name without any case changes.
+
+### Folder Layout
+
+`$DAEMON_HOME/cosmovisor` is expected to belong completely to `cosmovisor` and the subprocesses that are controlled by it. The folder content is organized as follows:
+
+```text expandable
+.
+├── current -> genesis or upgrades/
+├── genesis
+│   └── bin
+│       └── $DAEMON_NAME
+└── upgrades
+│   └── 
+│       ├── bin
+│       │   └── $DAEMON_NAME
+│       └── upgrade-info.json
+└── preupgrade.sh (optional)
+```
+
+The `cosmovisor/` directory includes a subdirectory for each version of the application (i.e. `genesis` or `upgrades/`). Within each subdirectory is the application binary (i.e. `bin/$DAEMON_NAME`) and any additional auxiliary files associated with each binary. `current` is a symbolic link to the currently active directory (i.e. `genesis` or `upgrades/`). The `name` variable in `upgrades/` is the lowercased URI-encoded name of the upgrade as specified in the upgrade module plan. Note that upgrade name paths are normalized to lowercase: for instance, `MyUpgrade` is normalized to `myupgrade`, and its path is `upgrades/myupgrade`.
+
+Please note that `$DAEMON_HOME/cosmovisor` only stores the *application binaries*. The `cosmovisor` binary itself can be stored in any typical location (e.g. `/usr/local/bin`). The application will continue to store its data in the default data directory (e.g. `$HOME/.simapp`) or the data directory specified with the `--home` flag.
If `$DAEMON_HOME` is set to the same directory as the data directory, you will end up with a configuration like the following:
+
+```text
+.simapp
+├── config
+├── data
+└── cosmovisor
+```
+
+## Usage
+
+The system administrator is responsible for:
+
+* installing the `cosmovisor` binary
+* configuring the host's init system (e.g. `systemd`, `launchd`, etc.)
+* appropriately setting the environment variables
+* creating the `/cosmovisor` directory
+* creating the `/cosmovisor/genesis/bin` folder
+* creating the `/cosmovisor/upgrades//bin` folders
+* placing the different versions of the `` executable in the appropriate `bin` folders.
+
+`cosmovisor` will set the `current` link to point to `genesis` at first start (i.e. when no `current` link exists) and then handle switching binaries at the correct points in time so that the system administrator can prepare days in advance and relax at upgrade time.
+
+In order to support downloadable binaries, a tarball for each upgrade binary will need to be packaged up and made available through a canonical URL. Additionally, a tarball that includes the genesis binary and all available upgrade binaries can be packaged up and made available so that all the necessary binaries required to sync a fullnode from start can be easily downloaded.
+
+The `DAEMON`-specific code and operations (e.g. CometBFT config, the application db, syncing blocks, etc.) all work as expected. The application binaries' directives such as command-line flags and environment variables also work as expected.
+
+### Initialization
+
+The `cosmovisor init ` command creates the folder structure required for using cosmovisor.
+
+
+It does the following:
+
+* creates the `/cosmovisor` folder if it doesn't yet exist
+* creates the `/cosmovisor/genesis/bin` folder if it doesn't yet exist
+* copies the provided executable file to `/cosmovisor/genesis/bin/`
+* creates the `current` link, pointing to the `genesis` folder
+
+It uses the `DAEMON_HOME` and `DAEMON_NAME` environment variables for folder location and executable name.
+
+The `cosmovisor init` command is specifically for initializing cosmovisor, and should not be confused with a chain's `init` command (e.g. `cosmovisor run init`).
+
+### Detecting Upgrades
+
+`cosmovisor` polls the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height.
+The following heuristic is applied to detect the upgrade:
+
+* When starting, `cosmovisor` doesn't know much about the currently running upgrade, except the binary which is `current/bin/`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name.
+* If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exist, then `cosmovisor` will wait for the `data/upgrade-info.json` file to trigger an upgrade.
+* If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` immediately tries to perform the upgrade according to the `name` attribute in `data/upgrade-info.json`.
+* Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger the upgrade mechanism.
+
+When the upgrade mechanism is triggered, `cosmovisor` will:
+
+1.
if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor//bin` (where `` is the `upgrade-info.json:name` attribute);
+2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`.
+
+### Adding Upgrade Binary
+
+`cosmovisor` has an `add-upgrade` command that allows you to easily link a binary to an upgrade. It creates a new folder in `cosmovisor/upgrades/` and copies the provided executable file to `cosmovisor/upgrades//bin/`.
+
+Using the `--upgrade-height` flag allows you to specify at which height the binary should be switched, without going through a governance proposal.
+This enables support for emergency coordinated upgrades where the binary must be switched at a specific height, but there is no time to go through a governance proposal.
+
+
+`--upgrade-height` creates an `upgrade-info.json` file. This means if a chain upgrade via governance proposal is executed before the specified height with `--upgrade-height`, the governance proposal will overwrite the `upgrade-info.json` plan created by `add-upgrade --upgrade-height `.
+Take this into consideration when using `--upgrade-height`.
+
+
+### Auto-Download
+
+Generally, `cosmovisor` requires that the system administrator place all relevant binaries on disk before the upgrade happens. However, for people who don't need such control and want an automated setup (maybe they are syncing a non-validating fullnode and want to do little maintenance), there is another option.
+
+**NOTE: we don't recommend using auto-download** because it doesn't verify in advance if a binary is available. If there is any issue downloading a binary, cosmovisor will stop and won't restart the app (which could lead to a chain halt).
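When auto-download kicks in, cosmovisor has to pick the download URL matching the host platform out of the upgrade plan's `binaries` map (the `info` formats are described in this section). A minimal sketch of that selection step, using a hypothetical URL — the real implementation also downloads and checksum-verifies the file via go-getter:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// planInfo mirrors the JSON shape of an upgrade plan's info field when it
// embeds a "binaries" map of "os/arch" keys to download URLs.
type planInfo struct {
	Binaries map[string]string `json:"binaries"`
}

// binaryURL returns the download URL for the given platform. In cosmovisor,
// the platform would be runtime.GOOS + "/" + runtime.GOARCH.
func binaryURL(info []byte, platform string) (string, error) {
	var p planInfo
	if err := json.Unmarshal(info, &p); err != nil {
		return "", err
	}
	url, ok := p.Binaries[platform]
	if !ok {
		return "", fmt.Errorf("no binary for platform %s", platform)
	}
	return url, nil
}

func main() {
	// Hypothetical plan info, following the format shown in this section.
	info := []byte(`{"binaries":{"linux/amd64":"https://example.com/app.zip?checksum=sha256:deadbeef"}}`)
	url, err := binaryURL(info, "linux/amd64")
	if err != nil {
		panic(err)
	}
	fmt.Println(url)
}
```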
+
+
+If `DAEMON_ALLOW_DOWNLOAD_BINARIES` is set to `true`, and no local binary can be found when an upgrade is triggered, `cosmovisor` will attempt to download and install the binary itself based on the instructions in the `info` attribute in the `data/upgrade-info.json` file. The file is constructed by the x/upgrade module and contains data from the upgrade `Plan` object. The `Plan` has an `info` field that is expected to have one of the following two valid formats to specify a download:
+
+1. Store an os/architecture -> binary URI map in the upgrade plan info field as JSON under the `"binaries"` key. For example:
+
+   ```json
+   {
+     "binaries": {
+       "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
+     }
+   }
+   ```
+
+   You can include multiple binaries at once to ensure more than one environment will receive the correct binaries:
+
+   ```json
+   {
+     "binaries": {
+       "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
+       "linux/arm64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
+       "darwin/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
+     }
+   }
+   ```
+
+   When submitting this as a proposal, ensure there are no spaces.
An example command using `gaiad` could look like: + + ```shell expandable + > gaiad tx upgrade software-upgrade Vega \ + --title Vega \ + --deposit 100uatom \ + --upgrade-height 7368420 \ + --upgrade-info '{"binaries":{"linux/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-amd64","linux/arm64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-arm64","darwin/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-darwin-amd64"}}' \ + --summary "upgrade to Vega" \ + --gas 400000 \ + --from user \ + --chain-id test \ + --home test/val2 \ + --node tcp://localhost:36657 \ + --yes + ``` + +2. Store a link to a file that contains all information in the above format (e.g. if you want to specify lots of binaries, changelog info, etc. without filling up the blockchain). For example: + + ```text + https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e + ``` + +When `cosmovisor` is triggered to download the new binary, `cosmovisor` will parse the `"binaries"` field, download the new binary with [go-getter](https://github.com/hashicorp/go-getter), and unpack the new binary in the `upgrades/` folder so that it can be run as if it was installed manually. + +Note that for this mechanism to provide strong security guarantees, all URLs should include a SHA 256/512 checksum. This ensures that no false binary is run, even if someone hacks the server or hijacks the DNS. `go-getter` will always ensure the downloaded file matches the checksum if it is provided. `go-getter` will also handle unpacking archives into directories (in this case the download link should point to a `zip` file of all data in the `bin` directory). + +To properly create a sha256 checksum on linux, you can use the `sha256sum` utility. 
For example: + +```shell +sha256sum ./testdata/repo/zip_directory/autod.zip +``` + +The result will look something like the following: `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`. + +You can also use `sha512sum` if you would prefer to use longer hashes, or `md5sum` if you would prefer to use broken hashes. Whichever you choose, make sure to set the hash algorithm properly in the checksum argument to the URL. + +### Preparing for an Upgrade + +To prepare for an upgrade, use the `prepare-upgrade` command: + +```shell +cosmovisor prepare-upgrade +``` + +This command performs the following actions: + +1. Retrieves upgrade information directly from the blockchain about the next scheduled upgrade. +2. Downloads the new binary specified in the upgrade plan. +3. Verifies the binary's checksum (if required by configuration). +4. Places the new binary in the appropriate directory for Cosmovisor to use during the upgrade. + +The `prepare-upgrade` command provides detailed logging throughout the process, including: + +* The name and height of the upcoming upgrade +* The URL from which the new binary is being downloaded +* Confirmation of successful download and verification +* The path where the new binary has been placed + +Example output: + +```bash +INFO Preparing for upgrade name=v1.0.0 height=1000000 +INFO Downloading upgrade binary url=https://example.com/binary/v1.0.0?checksum=sha256:339911508de5e20b573ce902c500ee670589073485216bee8b045e853f24bce8 +INFO Upgrade preparation complete name=v1.0.0 height=1000000 +``` + +*Note: The current way of downloading manually and placing the binary at the right place would still work.* + +## Example: SimApp Upgrade + +The following instructions provide a demonstration of `cosmovisor` using the simulation application (`simapp`) shipped with the Cosmos SDK's source code. The following commands are to be run from within the `cosmos-sdk` repository. 
+ +### Chain Setup + +Let's create a new chain using the `v0.47.4` version of simapp (the Cosmos SDK demo app): + +```shell +git checkout v0.47.4 +make build +``` + +Clean `~/.simapp` (never do this in a production environment): + +```shell +./build/simd tendermint unsafe-reset-all +``` + +Set up app config: + +```shell +./build/simd config chain-id test +./build/simd config keyring-backend test +./build/simd config broadcast-mode sync +``` + +Initialize the node and overwrite any previous genesis file (never do this in a production environment): + +```shell +./build/simd init test --chain-id test --overwrite +``` + +For the sake of this demonstration, amend `voting_period` in `genesis.json` to a reduced time of 20 seconds (`20s`): + +```shell +cat <<< $(jq '.app_state.gov.params.voting_period = "20s"' $HOME/.simapp/config/genesis.json) > $HOME/.simapp/config/genesis.json +``` + +Create a validator, and setup genesis transaction: + +```shell +./build/simd keys add validator +./build/simd genesis add-genesis-account validator 1000000000stake --keyring-backend test +./build/simd genesis gentx validator 1000000stake --chain-id test +./build/simd genesis collect-gentxs +``` + +#### Prepare Cosmovisor and Start the Chain + +Set the required environment variables: + +```shell +export DAEMON_NAME=simd +export DAEMON_HOME=$HOME/.simapp +``` + +Set the optional environment variable to trigger an automatic app restart: + +```shell +export DAEMON_RESTART_AFTER_UPGRADE=true +``` + +Initialize cosmovisor with the current binary: + +```shell +cosmovisor init ./build/simd +``` + +Now you can run cosmovisor with simapp v0.47.4: + +```shell +cosmovisor run start +``` + +### Update App + +Update app to the latest version (e.g. v0.50.0). + + + +Migration plans are defined using the `x/upgrade` module and described in [In-Place Store Migrations](https://github.com/cosmos/cosmos-sdk/blob/main/docs/learn/advanced/15-upgrade.md). Migrations can perform any deterministic state change. 
+ +The migration plan to upgrade the simapp from v0.47 to v0.50 is defined in `simapp/upgrade.go`. + + + +Build the new version `simd` binary: + +```shell +make build +``` + +Add the new `simd` binary and the upgrade name: + + + +The migration name must match the one defined in the migration plan. + + + +```shell +cosmovisor add-upgrade v047-to-v050 ./build/simd +``` + +Open a new terminal window and submit an upgrade proposal along with a deposit and a vote (these commands must be run within 20 seconds of each other): + +```shell +./build/simd tx upgrade software-upgrade v047-to-v050 --title upgrade --summary upgrade --upgrade-height 200 --upgrade-info "{}" --no-validate --from validator --yes +./build/simd tx gov deposit 1 10000000stake --from validator --yes +./build/simd tx gov vote 1 yes --from validator --yes +``` + +The upgrade will occur automatically at height 200. Note: you may need to change the upgrade height in the snippet above if your test play takes more time. diff --git a/docs/sdk/v0.53/documentation/operations/interact-node.mdx b/docs/sdk/v0.53/documentation/operations/interact-node.mdx new file mode 100644 index 00000000..8c0e3834 --- /dev/null +++ b/docs/sdk/v0.53/documentation/operations/interact-node.mdx @@ -0,0 +1,298 @@ +--- +title: Interacting with the Node +--- + +## Synopsis + +There are multiple ways to interact with a node: using the CLI, using gRPC or using the REST endpoints. + + +**Pre-requisite Readings** + +- [gRPC, REST and CometBFT Endpoints](/docs/sdk/v0.53/api-reference/service-apis/grpc_rest) +- [Running a Node](/docs/sdk/v0.53/documentation/operations/run-node) + + + +## Using the CLI + +Now that your chain is running, it is time to try sending tokens from the first account you created to a second account. 
In a new terminal window, start by running the following query command:

```bash
simd query bank balances $MY_VALIDATOR_ADDRESS
```

You should see the current balance of the account you created, equal to the original balance of `stake` you granted it minus the amount you delegated via the `gentx`. Now, create a second account:

```bash
simd keys add recipient --keyring-backend test

# Put the generated address in a variable for later use.
RECIPIENT=$(simd keys show recipient -a --keyring-backend test)
```

The command above creates a local key-pair that is not yet registered on the chain. An account is created the first time it receives tokens from another account. Now, run the following command to send tokens to the `recipient` account:

```bash
simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test

# Check that the recipient account did receive the tokens.
simd query bank balances $RECIPIENT
```

Finally, delegate some of the stake tokens sent to the `recipient` account to the validator:

```bash
simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test

# Query the total delegations to `validator`.
simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test)
```

You should see two delegations: the first one made from the `gentx`, and the second one you just performed from the `recipient` account.

## Using gRPC

The Protobuf ecosystem developed tools for different use cases, including code generation from `*.proto` files into various languages. These tools make it easy to build clients. Often, the client connection (i.e. the transport) can be plugged in and replaced very easily. Let's explore one of the most popular transports: [gRPC](/docs/sdk/v0.53/api-reference/service-apis/grpc_rest).
Since the code generation library largely depends on your own tech stack, we will only present three alternatives:

- `grpcurl` for generic debugging and testing,
- programmatically via Go,
- CosmJS for JavaScript/TypeScript developers.

### grpcurl

[grpcurl](https://github.com/fullstorydev/grpcurl) is like `curl` but for gRPC. It is also available as a Go library, but we will use it only as a CLI command for debugging and testing purposes. Follow the instructions in the previous link to install it.

Assuming you have a local node running (either a localnet or one connected to a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9090` with the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml)):

```bash
grpcurl -plaintext localhost:9090 list
```

You should see a list of gRPC services, like `cosmos.bank.v1beta1.Query`. This is called reflection, which is a Protobuf endpoint returning a description of all available endpoints. Each of these represents a different Protobuf service, and each service exposes multiple RPC methods you can query against.

To get a description of the service, you can run the following command:

```bash
grpcurl -plaintext \
    localhost:9090 \
    describe cosmos.bank.v1beta1.Query # Service we want to inspect
```

It's also possible to execute an RPC call to query the node for information:

```bash
grpcurl \
    -plaintext \
    -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/AllBalances
```

The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786).
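Hand-escaping the JSON passed to `-d` gets error-prone once the address comes from a variable. As a sketch, `jq` (already used earlier to tweak the genesis file) can build the payload safely; the address below is the example one used elsewhere on this page:

```shell
# Example address; substitute your own.
MY_VALIDATOR_ADDRESS="cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu"

# Build the request body for Query/AllBalances without manual quoting.
PAYLOAD=$(jq -n --arg addr "$MY_VALIDATOR_ADDRESS" '{address: $addr}')
echo "$PAYLOAD"

# Usage (assumes a node is running locally):
#   grpcurl -plaintext -d "$PAYLOAD" localhost:9090 cosmos.bank.v1beta1.Query/AllBalances
```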
#### Query for historical state using grpcurl

You may also query for historical data by passing some [gRPC metadata](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) to the query: the `x-cosmos-block-height` metadata should contain the block to query. Using grpcurl as above, the command looks like:

```bash
grpcurl \
    -plaintext \
    -H "x-cosmos-block-height: 123" \
    -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/AllBalances
```

Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response.

### Programmatically via Go

The following snippet shows how to query the state using gRPC inside a Go program. The idea is to create a gRPC connection, and use the Protobuf-generated client code to query the gRPC server.

#### Install Cosmos SDK

```bash
go get github.com/cosmos/cosmos-sdk@main
```

```go expandable
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"

	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
)

func queryState() error {
	myAddress, err := sdk.AccAddressFromBech32("cosmos1...") // the my_validator or recipient address.
	if err != nil {
		return err
	}

	// Create a connection to the gRPC server.
	grpcConn, err := grpc.Dial(
		"127.0.0.1:9090",    // your gRPC server address.
		grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism.
		// This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry;
		// if the request/response types contain interfaces, you should pass the application-specific codec instead of nil.
		grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())),
	)
	if err != nil {
		return err
	}
	defer grpcConn.Close()

	// This creates a gRPC client to query the x/bank service.
	bankClient := banktypes.NewQueryClient(grpcConn)

	bankRes, err := bankClient.Balance(
		context.Background(),
		&banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"},
	)
	if err != nil {
		return err
	}

	fmt.Println(bankRes.GetBalance()) // Prints the account balance

	return nil
}

func main() {
	if err := queryState(); err != nil {
		panic(err)
	}
}
```

You can replace the query client (here we are using `x/bank`'s) with one generated from any other Protobuf service. The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786).

#### Query for historical state using Go

Querying for historical blocks is done by adding the block height metadata in the gRPC request.

```go expandable
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"

	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	grpctypes "github.com/cosmos/cosmos-sdk/types/grpc"
	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
)

func queryState() error {
	myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") // the my_validator or recipient address.
	if err != nil {
		return err
	}

	// Create a connection to the gRPC server.
	grpcConn, err := grpc.Dial(
		"127.0.0.1:9090",    // your gRPC server address.
		grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism.
		// This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry;
		// if the request/response types contain interfaces, you should pass the application-specific codec instead of nil.
		grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())),
	)
	if err != nil {
		return err
	}
	defer grpcConn.Close()

	// This creates a gRPC client to query the x/bank service.
	bankClient := banktypes.NewQueryClient(grpcConn)

	var header metadata.MD
	_, err = bankClient.Balance(
		metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), // Add metadata to request
		&banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"},
		grpc.Header(&header), // Retrieve header from response
	)
	if err != nil {
		return err
	}
	blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader)

	fmt.Println(blockHeight) // Prints the block height (12)

	return nil
}

func main() {
	if err := queryState(); err != nil {
		panic(err)
	}
}
```

### CosmJS

CosmJS documentation can be found at [Link](https://cosmos.github.io/cosmjs). As of January 2021, CosmJS documentation is still a work in progress.

## Using the REST Endpoints

As described in the [gRPC guide](/docs/sdk/v0.53/api-reference/service-apis/grpc_rest), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway. The format of the URL path is based on the Protobuf service method's fully-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters.

Note that the REST endpoints are not enabled by default. To enable them, edit the `api` section of your `~/.simapp/config/app.toml` file:

```toml
# Enable defines if the API server should be enabled.
enable = true
```

As a concrete example, the `curl` command to make a balances request is:

```bash
curl \
    -X GET \
    -H "Content-Type: application/json" \
    http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS
```

Make sure to replace `localhost:1317` with the REST endpoint of your node, configured under the `api.address` field.

The list of all available REST endpoints is available as a Swagger specification file, which can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to true in your [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml) file.

### Query for historical state using REST

Querying for historical state is done using the HTTP header `x-cosmos-block-height`. For example, a curl command would look like:

```bash
curl \
    -X GET \
    -H "Content-Type: application/json" \
    -H "x-cosmos-block-height: 123" \
    http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS
```

Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response.

### Cross-Origin Resource Sharing (CORS)

[CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default, to help with security. If you would like to use the REST server in a public environment, we recommend putting it behind a reverse proxy such as [nginx](https://www.nginx.com/). For testing and development purposes, there is an `enabled-unsafe-cors` field inside [`app.toml`](/docs/sdk/v0.53/documentation/operations/run-node#configuring-the-node-using-apptoml-and-configtoml).
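To see what flipping these settings looks like in one place, here is a sketch that edits a scratch copy of the `api` section with `sed`. The file below is a stand-in for `~/.simapp/config/app.toml`, and `enabled-unsafe-cors` should stay off outside of testing and development:

```shell
# Create a scratch stand-in for ~/.simapp/config/app.toml.
cat > app.toml <<'EOF'
[api]
enable = false
swagger = false
enabled-unsafe-cors = false
EOF

# Enable the API server, the Swagger UI, and (local testing only) unsafe CORS.
sed -i.bak \
  -e 's/^enable = false/enable = true/' \
  -e 's/^swagger = false/swagger = true/' \
  -e 's/^enabled-unsafe-cors = false/enabled-unsafe-cors = true/' \
  app.toml

cat app.toml
```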
diff --git a/docs/sdk/v0.53/documentation/operations/intro.mdx b/docs/sdk/v0.53/documentation/operations/intro.mdx
new file mode 100644
index 00000000..727ee456
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/intro.mdx

---
title: SDK Migrations
---

To smooth the update to the latest stable release, the SDK includes a CLI command for hard-fork migrations (under the ` genesis migrate` subcommand).
Additionally, the SDK includes in-place migrations for its core modules. These in-place migrations are useful for migrating between major releases.

* Hard-fork migrations are supported from the last major release to the current one.
* [In-place module migrations](https://docs.cosmos.network/main/core/upgrade#overwriting-genesis-functions) are supported from the last two major releases to the current one.

Migration from a version older than the last two major releases is not supported.

When migrating from a previous version, refer to the [`UPGRADING.md`](/docs/sdk/next/documentation/operations/upgrading) and the `CHANGELOG.md` of the version you are migrating to.

diff --git a/docs/sdk/v0.53/documentation/operations/keyring.mdx b/docs/sdk/v0.53/documentation/operations/keyring.mdx
new file mode 100644
index 00000000..1a1a30bb
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/keyring.mdx

---
title: Setting up the keyring
---

## Synopsis

This document describes how to configure and use the keyring and its various backends for an [**application**](/docs/sdk/v0.53/documentation/application-framework/app-anatomy).

The keyring holds the private/public key pairs used to interact with a node. For instance, a validator key needs to be set up before running the blockchain node, so that blocks can be correctly signed. The private key can be stored in different locations, called "backends," such as a file or the operating system's own key storage.
## Available backends for the keyring

Starting with the v0.38.0 release, the Cosmos SDK comes with a new keyring implementation
that provides a set of commands to manage cryptographic keys in a secure fashion. The
new keyring supports multiple storage backends, some of which may not be available on
all operating systems.

### The `os` backend

The `os` backend relies on operating system-specific defaults to handle key storage
securely. Typically, an operating system's credential subsystem handles password prompts,
private key storage, and user sessions according to the user's password policies. Here
is a list of the most popular operating systems and their respective password managers:

- macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac)
- Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management)
- GNU/Linux:
  - [libsecret](https://gitlab.gnome.org/GNOME/libsecret)
  - [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html)
  - [keyctl](https://www.kernel.org/doc/html/latest/security/keys/core.html)

GNU/Linux distributions that use GNOME as the default desktop environment typically come with
[Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE-based distributions are
commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager).
While the former is in fact a convenient `libsecret` frontend, the latter is a `kwallet`
client. `keyctl` is a secure backend that leverages the Linux kernel's key management system
to store cryptographic keys securely in memory.

`os` is the default option, since operating systems' default credential managers are
designed to meet users' most common needs and provide them with a comfortable
experience without compromising on security.

The recommended backends for headless environments are `file` and `pass`.
+ +### The `file` backend + +The `file` backend more closely resembles the keybase implementation used prior to +v0.38.1. It stores the keyring encrypted within the app's configuration directory. This +keyring will request a password each time it is accessed, which may occur multiple +times in a single command resulting in repeated password prompts. If using bash scripts +to execute commands using the `file` option you may want to utilize the following format +for multiple prompts: + +```shell +# assuming that KEYPASSWD is set in the environment +$ gaiacli config keyring-backend file # use file backend +$ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts +$ echo $KEYPASSWD | gaiacli keys show me # single prompt +``` + + + The first time you add a key to an empty keyring, you will be prompted to type + the password twice. + + +### The `pass` backend + +The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk +encryption of keys' sensitive data and metadata. Keys are stored inside `gpg` encrypted files +within app-specific directories. `pass` is available for the most popular UNIX +operating systems as well as GNU/Linux distributions. Please refer to its manual page for +information on how to download and install it. + + + **pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically + invokes the `gpg-agent` daemon upon execution, which handles the caching of + GnuPG credentials. Please refer to `gpg-agent` man page for more information + on how to configure cache parameters such as credentials TTL and passphrase + expiration. + + +The password store must be set up prior to first use: + +```shell +pass init +``` + +Replace `` with your GPG key ID. You can use your personal GPG key or an alternative +one you may want to use specifically to encrypt the password store. 
### The `kwallet` backend

The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on
GNU/Linux distributions that ship KDE as the default desktop environment. Please refer to the
[KWallet Handbook](https://docs.kde.org/stable5/en/kdeutils/kwallet5/index.html) for more
information.

### The `keyctl` backend

The _Kernel Key Retention Service_ is a security facility that
has been added to the Linux kernel relatively recently. It allows sensitive
cryptographic data such as passwords, private keys, and authentication tokens
to be stored securely in memory.

The `keyctl` backend is available on Linux platforms only.

### The `test` backend

The `test` backend is a password-less variation of the `file` backend. Keys are stored
unencrypted on disk.

**Provided for testing purposes only. The `test` backend is not recommended for use in production environments**.

### The `memory` backend

The `memory` backend stores keys in memory. The keys are immediately deleted after the program has exited.

**Provided for testing purposes only. The `memory` backend is not recommended for use in production environments**.

### Setting backend using the env variable

You can set the keyring backend using the environment variable `BINNAME_KEYRING_BACKEND`. For example, if your binary name is `gaia-v5`, then set: `export GAIA_V5_KEYRING_BACKEND=pass`

## Adding keys to the keyring

Make sure you can build your own binary, and replace `simd` with the name of
your binary in the snippets.

Applications developed using the Cosmos SDK come with the `keys` subcommand. For the purpose of this tutorial, we're running the `simd` CLI, which is an application built using the Cosmos SDK for testing and educational purposes. For more information, see [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp).
You can use `simd keys` for help about the keys command and `simd keys [command] --help` for more information about a particular subcommand.

To create a new key in the keyring, run the `add` subcommand with a `` argument. For the purpose of this tutorial, we will solely use the `test` backend, and call our new key `my_validator`. This key will be used in the next section.

```bash
$ simd keys add my_validator --keyring-backend test

# Put the generated address in a variable for later use.
MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test)
```

This command generates a new 24-word mnemonic phrase, persists it to the relevant backend, and outputs information about the keypair. If this keypair will be used to hold value-bearing tokens, be sure to write down the mnemonic phrase somewhere safe!

By default, the keyring generates a `secp256k1` keypair. The keyring also supports `ed25519` keys, which may be created by passing the `--algo ed25519` flag. A keyring can of course hold both types of keys simultaneously, and the Cosmos SDK's `x/auth` module natively supports these two public key algorithms.

diff --git a/docs/sdk/v0.53/documentation/operations/node.mdx b/docs/sdk/v0.53/documentation/operations/node.mdx
new file mode 100644
index 00000000..56a99ccc
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/node.mdx

---
title: Node Client (Daemon)
---

## Synopsis

The main endpoint of a Cosmos SDK application is the daemon client, otherwise known as the full-node client. The full-node runs the state-machine, starting from a genesis file. It connects to peers running the same client in order to receive and relay transactions, block proposals and signatures. The full-node consists of the application, defined with the Cosmos SDK, and a consensus engine connected to the application via the ABCI.
+ + +**Pre-requisite Readings** + +- [Anatomy of an SDK application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy) + + + +## `main` function + +The full-node client of any Cosmos SDK application is built by running a `main` function. The client is generally named by appending the `-d` suffix to the application name (e.g. `appd` for an application named `app`), and the `main` function is defined in a `./appd/cmd/main.go` file. Running this function creates an executable `appd` that comes with a set of commands. For an app named `app`, the main command is [`appd start`](#start-command), which starts the full-node. + +In general, developers will implement the `main.go` function with the following structure: + +- First, an [`encodingCodec`](/docs/sdk/v0.53/documentation/protocol-development/encoding) is instantiated for the application. +- Then, the `config` is retrieved and config parameters are set. This mainly involves setting the Bech32 prefixes for [addresses](/docs/sdk/v0.53/documentation/protocol-development/accounts#addresses). + +```go expandable +package types + +import ( + + "context" + "fmt" + "sync" + "github.com/cosmos/cosmos-sdk/version" +) + +/ DefaultKeyringServiceName defines a default service name for the keyring. +const DefaultKeyringServiceName = "cosmos" + +/ Config is the structure that holds the SDK configuration parameters. +/ This could be used to initialize certain configuration parameters for the SDK. +type Config struct { + fullFundraiserPath string + bech32AddressPrefix map[string]string + txEncoder TxEncoder + addressVerifier func([]byte) + +error + mtx sync.RWMutex + + / SLIP-44 related + purpose uint32 + coinType uint32 + + sealed bool + sealedch chan struct{ +} +} + +/ cosmos-sdk wide global singleton +var ( + sdkConfig *Config + initConfig sync.Once +) + +/ New returns a new Config with default values. 
+func NewConfig() *Config { + return &Config{ + sealedch: make(chan struct{ +}), + bech32AddressPrefix: map[string]string{ + "account_addr": Bech32PrefixAccAddr, + "validator_addr": Bech32PrefixValAddr, + "consensus_addr": Bech32PrefixConsAddr, + "account_pub": Bech32PrefixAccPub, + "validator_pub": Bech32PrefixValPub, + "consensus_pub": Bech32PrefixConsPub, +}, + fullFundraiserPath: FullFundraiserPath, + + purpose: Purpose, + coinType: CoinType, + txEncoder: nil, +} +} + +/ GetConfig returns the config instance for the SDK. +func GetConfig() *Config { + initConfig.Do(func() { + sdkConfig = NewConfig() +}) + +return sdkConfig +} + +/ GetSealedConfig returns the config instance for the SDK if/once it is sealed. +func GetSealedConfig(ctx context.Context) (*Config, error) { + config := GetConfig() + +select { + case <-config.sealedch: + return config, nil + case <-ctx.Done(): + return nil, ctx.Err() +} +} + +func (config *Config) + +assertNotSealed() { + config.mtx.RLock() + +defer config.mtx.RUnlock() + if config.sealed { + panic("Config is sealed") +} +} + +/ SetBech32PrefixForAccount builds the Config with Bech32 addressPrefix and publKeyPrefix for accounts +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForAccount(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["account_addr"] = addressPrefix + config.bech32AddressPrefix["account_pub"] = pubKeyPrefix +} + +/ SetBech32PrefixForValidator builds the Config with Bech32 addressPrefix and publKeyPrefix for validators +/ +/ and returns the config instance +func (config *Config) + +SetBech32PrefixForValidator(addressPrefix, pubKeyPrefix string) { + config.assertNotSealed() + +config.bech32AddressPrefix["validator_addr"] = addressPrefix + config.bech32AddressPrefix["validator_pub"] = pubKeyPrefix +} + +/ SetBech32PrefixForConsensusNode builds the Config with Bech32 addressPrefix and publKeyPrefix for consensus nodes +/ and returns the config 
instance
func (config *Config) SetBech32PrefixForConsensusNode(addressPrefix, pubKeyPrefix string) {
  config.assertNotSealed()
  config.bech32AddressPrefix["consensus_addr"] = addressPrefix
  config.bech32AddressPrefix["consensus_pub"] = pubKeyPrefix
}

// SetTxEncoder builds the Config with TxEncoder used to marshal StdTx to bytes
func (config *Config) SetTxEncoder(encoder TxEncoder) {
  config.assertNotSealed()
  config.txEncoder = encoder
}

// SetAddressVerifier builds the Config with the provided function for verifying that addresses
// have the correct format
func (config *Config) SetAddressVerifier(addressVerifier func([]byte) error) {
  config.assertNotSealed()
  config.addressVerifier = addressVerifier
}

// SetFullFundraiserPath sets the FullFundraiserPath (BIP44Prefix) on the config.
//
// Deprecated: This method is supported for backward compatibility only and will be removed in a future release. Use SetPurpose and SetCoinType instead.
func (config *Config) SetFullFundraiserPath(fullFundraiserPath string) {
  config.assertNotSealed()
  config.fullFundraiserPath = fullFundraiserPath
}

// SetPurpose sets the BIP-0044 Purpose code on the config
func (config *Config) SetPurpose(purpose uint32) {
  config.assertNotSealed()
  config.purpose = purpose
}

// SetCoinType sets the BIP-0044 CoinType code on the config
func (config *Config) SetCoinType(coinType uint32) {
  config.assertNotSealed()
  config.coinType = coinType
}

// Seal seals the config such that the config state cannot be modified further
func (config *Config) Seal() *Config {
  config.mtx.Lock()
  if config.sealed {
    config.mtx.Unlock()
    return config
  }

  // signal sealed after state exposed/unlocked
  config.sealed = true
  config.mtx.Unlock()
  close(config.sealedch)

  return config
}

// GetBech32AccountAddrPrefix returns the Bech32 prefix for account address
func (config *Config) GetBech32AccountAddrPrefix() string {
  return config.bech32AddressPrefix["account_addr"]
}

// GetBech32ValidatorAddrPrefix returns the Bech32 prefix for validator address
func (config *Config) GetBech32ValidatorAddrPrefix() string {
  return config.bech32AddressPrefix["validator_addr"]
}

// GetBech32ConsensusAddrPrefix returns the Bech32 prefix for consensus node address
func (config *Config) GetBech32ConsensusAddrPrefix() string {
  return config.bech32AddressPrefix["consensus_addr"]
}

// GetBech32AccountPubPrefix returns the Bech32 prefix for account public key
func (config *Config) GetBech32AccountPubPrefix() string {
  return config.bech32AddressPrefix["account_pub"]
}

// GetBech32ValidatorPubPrefix returns the Bech32 prefix for validator public key
func (config *Config) GetBech32ValidatorPubPrefix() string {
  return config.bech32AddressPrefix["validator_pub"]
}

// GetBech32ConsensusPubPrefix returns the Bech32 prefix for consensus node public key
func (config *Config) GetBech32ConsensusPubPrefix() string {
  return config.bech32AddressPrefix["consensus_pub"]
}

// GetTxEncoder returns the function to encode transactions
func (config *Config) GetTxEncoder() TxEncoder {
  return config.txEncoder
}

// GetAddressVerifier returns the function to verify that addresses have the correct format
func (config *Config) GetAddressVerifier() func([]byte) error {
  return config.addressVerifier
}

// GetPurpose returns the BIP-0044 Purpose code on the config.
func (config *Config) GetPurpose() uint32 {
  return config.purpose
}

// GetCoinType returns the BIP-0044 CoinType code on the config.
func (config *Config) GetCoinType() uint32 {
  return config.coinType
}

// GetFullFundraiserPath returns the BIP44Prefix.
//
// Deprecated: This method is supported for backward compatibility only and will be removed in a future release. Use GetFullBIP44Path instead.
func (config *Config) GetFullFundraiserPath() string {
  return config.fullFundraiserPath
}

// GetFullBIP44Path returns the BIP44Prefix.
func (config *Config) GetFullBIP44Path() string {
  return fmt.Sprintf("m/%d'/%d'/0'/0/0", config.purpose, config.coinType)
}

// KeyringServiceName returns the name to use for the keyring service.
func KeyringServiceName() string {
  if len(version.Name) == 0 {
    return DefaultKeyringServiceName
  }
  return version.Name
}
```

- Using [cobra](https://github.com/spf13/cobra), the root command of the full-node client is created. After that, all the custom commands of the application are added using the `AddCommand()` method of `rootCmd`.
- Add default server commands to `rootCmd` using the `server.AddCommands()` method. These commands are kept separate from the ones added above because they are standard, defined at the Cosmos SDK level, and shared by all Cosmos SDK-based applications. They include the most important command: the [`start` command](#start-command).
- Prepare and execute the `executor`.

```go expandable
package cli

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"strings"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

const (
	HomeFlag     = "home"
	TraceFlag    = "trace"
	OutputFlag   = "output"
	EncodingFlag = "encoding"
)

// Executable is the minimal interface to *cobra.Command, so we can
// wrap if desired before the test
type Executable interface {
	Execute() error
}

// PrepareBaseCmd is meant for CometBFT and other servers
func PrepareBaseCmd(cmd *cobra.Command, envPrefix, defaultHome string) Executor {
	cobra.OnInitialize(func() { initEnv(envPrefix) })
	cmd.PersistentFlags().StringP(HomeFlag, "", defaultHome, "directory for config and data")
	cmd.PersistentFlags().Bool(TraceFlag, false, "print out full stack trace on errors")
	cmd.PersistentPreRunE = concatCobraCmdFuncs(bindFlagsLoadViper, cmd.PersistentPreRunE)
	return Executor{cmd, os.Exit}
}

// PrepareMainCmd is meant for client side libs that want some more flags
//
// This adds --encoding (hex, btc, base64) and --output (text, json) to
// the command. These only really make sense in interactive commands.
func PrepareMainCmd(cmd *cobra.Command, envPrefix, defaultHome string) Executor {
	cmd.PersistentFlags().StringP(EncodingFlag, "e", "hex", "Binary encoding (hex|b64|btc)")
	cmd.PersistentFlags().StringP(OutputFlag, "o", "text", "Output format (text|json)")
	cmd.PersistentPreRunE = concatCobraCmdFuncs(validateOutput, cmd.PersistentPreRunE)
	return PrepareBaseCmd(cmd, envPrefix, defaultHome)
}

// initEnv sets to use ENV variables if set.
func initEnv(prefix string) {
	copyEnvVars(prefix)

	// env variables with TM prefix (eg. TM_ROOT)
	viper.SetEnvPrefix(prefix)
	viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_"))
	viper.AutomaticEnv()
}

// This copies all variables like TMROOT to TM_ROOT,
// so we can support both formats for the user
func copyEnvVars(prefix string) {
	prefix = strings.ToUpper(prefix)
	ps := prefix + "_"
	for _, e := range os.Environ() {
		kv := strings.SplitN(e, "=", 2)
		if len(kv) == 2 {
			k, v := kv[0], kv[1]
			if strings.HasPrefix(k, prefix) && !strings.HasPrefix(k, ps) {
				k2 := strings.Replace(k, prefix, ps, 1)
				os.Setenv(k2, v)
			}
		}
	}
}

// Executor wraps the cobra Command with a nicer Execute method
type Executor struct {
	*cobra.Command
	Exit func(int) // this is os.Exit by default, override in tests
}

type ExitCoder interface {
	ExitCode() int
}

// Execute adds all child commands to the root command and sets flags appropriately.
// This is called by main.main(). It only needs to happen once to the rootCmd.
func (e Executor) Execute() error {
	e.SilenceUsage = true
	e.SilenceErrors = true
	err := e.Command.Execute()
	if err != nil {
		if viper.GetBool(TraceFlag) {
			const size = 64 << 10
			buf := make([]byte, size)
			buf = buf[:runtime.Stack(buf, false)]
			fmt.Fprintf(os.Stderr, "ERROR: %v\n%s\n", err, buf)
		} else {
			fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
		}

		// return error code 1 by default, can override it with a special error type
		exitCode := 1
		if ec, ok := err.(ExitCoder); ok {
			exitCode = ec.ExitCode()
		}
		e.Exit(exitCode)
	}
	return err
}

type cobraCmdFunc func(cmd *cobra.Command, args []string) error

// Returns a single function that calls each argument function in sequence
// RunE, PreRunE, PersistentPreRunE, etc. all have this same signature
func concatCobraCmdFuncs(fs ...cobraCmdFunc) cobraCmdFunc {
	return func(cmd *cobra.Command, args []string) error {
		for _, f := range fs {
			if f != nil {
				if err := f(cmd, args); err != nil {
					return err
				}
			}
		}
		return nil
	}
}

// Bind all flags and read the config into viper
func bindFlagsLoadViper(cmd *cobra.Command, args []string) error {
	// cmd.Flags() includes flags from this command and all persistent flags from the parent
	if err := viper.BindPFlags(cmd.Flags()); err != nil {
		return err
	}

	homeDir := viper.GetString(HomeFlag)
	viper.Set(HomeFlag, homeDir)
	viper.SetConfigName("config")                         // name of config file (without extension)
	viper.AddConfigPath(homeDir)                          // search root directory
	viper.AddConfigPath(filepath.Join(homeDir, "config")) // search root directory /config

	// If a config file is found, read it in.
	if err := viper.ReadInConfig(); err == nil {
		// stderr, so if we redirect output to json file, this doesn't appear
		// fmt.Fprintln(os.Stderr, "Using config file:", viper.ConfigFileUsed())
	} else if _, ok := err.(viper.ConfigFileNotFoundError); !ok {
		// ignore not found error, return other errors
		return err
	}
	return nil
}

func validateOutput(cmd *cobra.Command, args []string) error {
	// validate output format
	output := viper.GetString(OutputFlag)
	switch output {
	case "text", "json":
	default:
		return fmt.Errorf("unsupported output format: %s", output)
	}
	return nil
}
```

See this example of a `main` function from `simapp`, the Cosmos SDK's demo application:

```go expandable
package main

import (
	"fmt"
	"os"

	clientv2helpers "cosmossdk.io/client/v2/helpers"
	"cosmossdk.io/simapp"
	"cosmossdk.io/simapp/simd/cmd"

	svrcmd "github.com/cosmos/cosmos-sdk/server/cmd"
)

func main() {
	rootCmd := cmd.NewRootCmd()
	if err := svrcmd.Execute(rootCmd, clientv2helpers.EnvPrefix, simapp.DefaultNodeHome); err != nil {
		fmt.Fprintln(rootCmd.OutOrStderr(), err)
		os.Exit(1)
	}
}
```

## `start` command

The `start` command is defined in the `/server` folder of the Cosmos SDK. It is added to the root command of the full-node client in the [`main` function](#main-function) and called by the end-user to start their node:

```bash
# For an example app named "app", the following command starts the full-node.
appd start

# Using the Cosmos SDK's own simapp, the following command starts the simapp node.
simd start
```

As a reminder, the full-node is composed of three conceptual layers: the networking layer, the consensus layer, and the application layer. The first two are generally bundled together in an entity called the consensus engine (CometBFT by default), while the third is the state-machine defined with the help of the Cosmos SDK. Currently, the Cosmos SDK uses CometBFT as the default consensus engine, meaning the `start` command is implemented to boot up a CometBFT node.

The flow of the `start` command is pretty straightforward. First, it retrieves the `config` from the `context` in order to open the `db` (a [`leveldb`](https://github.com/syndtr/goleveldb) instance by default). This `db` contains the latest known state of the application (empty if the application is started for the first time).
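In simplified pseudocode, that flow amounts to: load the config, open the database, and build the application from it. The types and function names below are illustrative stand-ins (the SDK actually uses `serverconfig.Config`, `dbm.DB`, and `types.AppCreator`):

```go
package main

import "fmt"

// Config is a stand-in for the server config retrieved from the context.
type Config struct{ RootDir string }

// DB is a placeholder for the leveldb-backed store holding the latest state.
type DB map[string][]byte

// App is a stand-in for the running application instance.
type App struct {
	db DB
}

// AppCreator mirrors the shape of the SDK's appCreator indirection:
// given the opened db (latest known state), it returns the application.
type AppCreator func(db DB) *App

// startNode sketches the start command's flow: open the db under the
// node's home directory, then build the app through the creator.
func startNode(cfg Config, creator AppCreator) *App {
	db := DB{} // in the real flow: opened from cfg.RootDir, empty on first start
	fmt.Println("opening db under", cfg.RootDir)
	return creator(db)
}

func main() {
	app := startNode(Config{RootDir: "/tmp/appd"}, func(db DB) *App { return &App{db: db} })
	fmt.Println("app ready:", app != nil)
}
```

The creator callback is the key indirection: the server package can manage the db and lifecycle while remaining agnostic of the concrete application it boots.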
+ +With the `db`, the `start` command creates a new instance of the application using an `appCreator` function: + +```go expandable +package server + +import ( + + "bufio" + "context" + "fmt" + "io" + "net" + "os" + "path/filepath" + "runtime/pprof" + "strings" + "time" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + cmtjson "github.com/cometbft/cometbft/libs/json" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + cmtstate "github.com/cometbft/cometbft/proto/tendermint/state" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cometbft/cometbft/proxy" + rpchttp "github.com/cometbft/cometbft/rpc/client/http" + "github.com/cometbft/cometbft/rpc/client/local" + sm "github.com/cometbft/cometbft/state" + "github.com/cometbft/cometbft/store" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = 
"transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + FlagIAVLSyncPruning = "iavl-sync-pruning" + FlagShutdownGrace = "shutdown-grace" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + flagGRPCSkipCheckHeader = "grpc.skip-check-header" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" + + / testnet keys + KeyIsTestnet = "is-testnet" + KeyNewChainID = "new-chain-ID" + KeyNewOpAddr = "new-operator-addr" + KeyNewValAddr = "new-validator-addr" + KeyUserPubKey = "user-pub-key" + KeyTriggerTestnetUpgrade = "trigger-testnet-upgrade" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example 
customize db options or support different db backends, + / default to the builtin db opener. + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) + / PostSetup can be used to setup extra services under the same cancellable context, + / it's not called in stand-alone mode, only for in-process mode. + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / PostSetupStandalone can be used to setup extra services under the same cancellable context, + PostSetupStandalone func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) + +error + / AddFlags add custom flags to start cmd + AddFlags func(cmd *cobra.Command) + / StartCommandHanlder can be used to customize the start command handler + StartCommandHandler func(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, inProcessConsensus bool, opts StartCmdOptions) + +error +} + +/ StartCmd runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. 
+ +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, appCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +addStartNodeFlags(cmd, opts) + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, metrics *telemetry.Metrics, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, 
cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %w", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / create tendermint client + / assumes the rpc listen address is where tendermint has its rpc server + rpcclient, err := rpchttp.New(svrCtx.Config.RPC.ListenAddress, "/websocket") + if err != nil { + return err +} + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(rpcclient) + + / use the provided clientCtx to register the services + app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, svrCtx.Config.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetupStandalone != nil { + if err := opts.PostSetupStandalone(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. 
+ <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return svr.Stop() +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, cmtCfg.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. 
+func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. 
+func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, _, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( /nolint: staticcheck / ignore this line for this linter + config.Address, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + 
grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", config.Address) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + if isTestnet, ok := svrCtx.Viper.Get(KeyIsTestnet).(bool); ok && isTestnet { + app, 
err = testnetify(svrCtx, appCreator, db, traceWriter) + if err != nil { + return app, traceCleanupFn, err +} + +} + +else { + app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) +} + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} + +/ InPlaceTestnetCreator utilizes the provided chainID and operatorAddress as well as the local private validator key to +/ control the network represented in the data folder. This is useful to create testnets nearly identical to your +/ mainnet environment. +func InPlaceTestnetCreator(testnetAppCreator types.AppCreator) *cobra.Command { + opts := StartCmdOptions{ +} + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "in-place-testnet [newChainID] [newOperatorAddress]", + Short: "Create and start a testnet from current local state", + Long: `Create and start a testnet from current local state. +After utilizing this command the network will start. If the network is stopped, +the normal "start" command should be used. Re-using this command on state that +has already been modified by this command could result in unexpected behavior. + +Additionally, the first block may take up to one minute to be committed, depending +on how old the block is. For instance, if a snapshot was taken weeks ago and we want +to turn this into a testnet, it is possible lots of pending state needs to be committed +(expiring locks, etc.). It is recommended that you should wait for this block to be committed +before stopping the daemon. + +If the --trigger-testnet-upgrade flag is set, the upgrade handler specified by the flag will be run +on the first block of the testnet. 
+ +Regardless of whether the flag is set or not, if any new stores are introduced in the daemon being run, +those stores will be registered in order to prevent panics. Therefore, you only need to set the flag if +you want to test the upgrade handler itself. +`, + Example: "in-place-testnet localosmosis osmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj", + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + newChainID := args[0] + newOperatorAddress := args[1] + + skipConfirmation, _ := cmd.Flags().GetBool("skip-confirmation") + if !skipConfirmation { + / Confirmation prompt to prevent accidental modification of state. + reader := bufio.NewReader(os.Stdin) + +fmt.Println("This operation will modify state in your data folder and cannot be undone. Do you want to continue? (y/n)") + +text, _ := reader.ReadString('\n') + response := strings.TrimSpace(strings.ToLower(text)) + if response != "y" && response != "yes" { + fmt.Println("Operation canceled.") + +return nil +} + +} + + / Set testnet keys to be used by the application. + / This is done to prevent changes to existing start API. 
+ serverCtx.Viper.Set(KeyIsTestnet, true) + +serverCtx.Viper.Set(KeyNewChainID, newChainID) + +serverCtx.Viper.Set(KeyNewOpAddr, newOperatorAddress) + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, testnetAppCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +addStartNodeFlags(cmd, opts) + +cmd.Flags().String(KeyTriggerTestnetUpgrade, "", "If set (example: \"v21\"), triggers the v21 upgrade handler to run on the first block of the testnet") + +cmd.Flags().Bool("skip-confirmation", false, "Skip the confirmation prompt") + +return cmd +} + +/ testnetify modifies both state and blockStore, allowing the provided operator address and local validator key to control the network +/ that the state in the data folder represents. The chainID of the local genesis file is modified to match the provided chainID. +func testnetify(ctx *Context, testnetAppCreator types.AppCreator, db dbm.DB, traceWriter io.WriteCloser) (types.Application, error) { + config := ctx.Config + + newChainID, ok := ctx.Viper.Get(KeyNewChainID).(string) + if !ok { + return nil, fmt.Errorf("expected string for key %s", KeyNewChainID) +} + + / Modify app genesis chain ID and save to genesis file. + genFilePath := config.GenesisFile() + +appGen, err := genutiltypes.AppGenesisFromFile(genFilePath) + if err != nil { + return nil, err +} + +appGen.ChainID = newChainID + if err := appGen.ValidateAndComplete(); err != nil { + return nil, err +} + if err := appGen.SaveAs(genFilePath); err != nil { + return nil, err +} + + / Regenerate addrbook.json to prevent peers on old network from causing error logs. 
+ addrBookPath := filepath.Join(config.RootDir, "config", "addrbook.json") + if err := os.Remove(addrBookPath); err != nil && !os.IsNotExist(err) { + return nil, fmt.Errorf("failed to remove existing addrbook.json: %w", err) +} + emptyAddrBook := []byte("{ +}") + if err := os.WriteFile(addrBookPath, emptyAddrBook, 0o600); err != nil { + return nil, fmt.Errorf("failed to create empty addrbook.json: %w", err) +} + + / Load the comet genesis doc provider. + genDocProvider := node.DefaultGenesisDocProviderFunc(config) + + / Initialize blockStore and stateDB. + blockStoreDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "blockstore", + Config: config +}) + if err != nil { + return nil, err +} + blockStore := store.NewBlockStore(blockStoreDB) + +stateDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "state", + Config: config +}) + if err != nil { + return nil, err +} + +defer blockStore.Close() + +defer stateDB.Close() + privValidator := pvm.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile()) + +userPubKey, err := privValidator.GetPubKey() + if err != nil { + return nil, err +} + validatorAddress := userPubKey.Address() + stateStore := sm.NewStore(stateDB, sm.StoreOptions{ + DiscardABCIResponses: config.Storage.DiscardABCIResponses, +}) + +state, genDoc, err := node.LoadStateFromDBOrGenesisDocProvider(stateDB, genDocProvider) + if err != nil { + return nil, err +} + +ctx.Viper.Set(KeyNewValAddr, validatorAddress) + +ctx.Viper.Set(KeyUserPubKey, userPubKey) + testnetApp := testnetAppCreator(ctx.Logger, db, traceWriter, ctx.Viper) + + / We need to create a temporary proxyApp to get the initial state of the application. + / Depending on how the node was stopped, the application height can differ from the blockStore height. + / This height difference changes how we go about modifying the state. 
+ cmtApp := NewCometABCIWrapper(testnetApp) + _, context := getCtx(ctx, true) + clientCreator := proxy.NewLocalClientCreator(cmtApp) + metrics := node.DefaultMetricsProvider(cmtcfg.DefaultConfig().Instrumentation) + _, _, _, _, proxyMetrics, _, _ := metrics(genDoc.ChainID) + proxyApp := proxy.NewAppConns(clientCreator, proxyMetrics) + if err := proxyApp.Start(); err != nil { + return nil, fmt.Errorf("error starting proxy app connections: %w", err) +} + +res, err := proxyApp.Query().Info(context, proxy.RequestInfo) + if err != nil { + return nil, fmt.Errorf("error calling Info: %w", err) +} + +err = proxyApp.Stop() + if err != nil { + return nil, err +} + appHash := res.LastBlockAppHash + appHeight := res.LastBlockHeight + + var block *cmttypes.Block + switch { + case appHeight == blockStore.Height(): + block = blockStore.LoadBlock(blockStore.Height()) + / If the state's last blockstore height does not match the app and blockstore height, we likely stopped with the halt height flag. + if state.LastBlockHeight != appHeight { + state.LastBlockHeight = appHeight + block.AppHash = appHash + state.AppHash = appHash +} + +else { + / Node was likely stopped via SIGTERM, delete the next block's seen commit + err := blockStoreDB.Delete(fmt.Appendf(nil, "SC:%v", blockStore.Height()+1)) + if err != nil { + return nil, err +} + +} + case blockStore.Height() > state.LastBlockHeight: + / This state usually occurs when we gracefully stop the node. 
+ err = blockStore.DeleteLatestBlock()
+ if err != nil {
+ return nil, err
+}
+
+block = blockStore.LoadBlock(blockStore.Height())
+
+default:
+ / If there is any other state, we just load the block
+ block = blockStore.LoadBlock(blockStore.Height())
+}
+
+block.ChainID = newChainID
+ state.ChainID = newChainID
+
+ block.LastBlockID = state.LastBlockID
+ block.LastCommit.BlockID = state.LastBlockID
+
+ / Create a vote from our validator
+ vote := cmttypes.Vote{
+ Type: cmtproto.PrecommitType,
+ Height: state.LastBlockHeight,
+ Round: 0,
+ BlockID: state.LastBlockID,
+ Timestamp: time.Now(),
+ ValidatorAddress: validatorAddress,
+ ValidatorIndex: 0,
+ Signature: []byte{
+},
+}
+
+ / Sign the vote, and copy the proto changes from the act of signing to the vote itself
+ voteProto := vote.ToProto()
+
+err = privValidator.SignVote(newChainID, voteProto)
+ if err != nil {
+ return nil, err
+}
+
+vote.Signature = voteProto.Signature
+ vote.Timestamp = voteProto.Timestamp
+
+ / Modify the block's lastCommit to be signed only by our validator
+ block.LastCommit.Signatures[0].ValidatorAddress = validatorAddress
+ block.LastCommit.Signatures[0].Signature = vote.Signature
+ block.LastCommit.Signatures = []cmttypes.CommitSig{
+ block.LastCommit.Signatures[0]
+}
+
+ / Load the seenCommit of the lastBlockHeight and modify it to be signed from our validator
+ seenCommit := blockStore.LoadSeenCommit(state.LastBlockHeight)
+
+seenCommit.BlockID = state.LastBlockID
+ seenCommit.Round = vote.Round
+ seenCommit.Signatures[0].Signature = vote.Signature
+ seenCommit.Signatures[0].ValidatorAddress = validatorAddress
+ seenCommit.Signatures[0].Timestamp = vote.Timestamp
+ seenCommit.Signatures = []cmttypes.CommitSig{
+ seenCommit.Signatures[0]
+}
+
+err = blockStore.SaveSeenCommit(state.LastBlockHeight, seenCommit)
+ if err != nil {
+ return nil, err
+}
+
+ / Create ValidatorSet struct containing just our validator. 
+ newVal := &cmttypes.Validator{
+ Address: validatorAddress,
+ PubKey: userPubKey,
+ VotingPower: 900000000000000,
+}
+ newValSet := &cmttypes.ValidatorSet{
+ Validators: []*cmttypes.Validator{
+ newVal
+},
+ Proposer: newVal,
+}
+
+ / Replace all valSets in state to be the valSet with just our validator.
+ state.Validators = newValSet
+ state.LastValidators = newValSet
+ state.NextValidators = newValSet
+ state.LastHeightValidatorsChanged = blockStore.Height()
+
+err = stateStore.Save(state)
+ if err != nil {
+ return nil, err
+}
+
+ / Create a ValidatorsInfo struct to store in stateDB.
+ valSet, err := state.Validators.ToProto()
+ if err != nil {
+ return nil, err
+}
+ valInfo := &cmtstate.ValidatorsInfo{
+ ValidatorSet: valSet,
+ LastHeightChanged: state.LastBlockHeight,
+}
+
+buf, err := valInfo.Marshal()
+ if err != nil {
+ return nil, err
+}
+
+ / Modify Validators stateDB entry.
+ err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()), buf)
+ if err != nil {
+ return nil, err
+}
+
+ / Modify LastValidators stateDB entry.
+ err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()-1), buf)
+ if err != nil {
+ return nil, err
+}
+
+ / Modify NextValidators stateDB entry.
+ err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()+1), buf)
+ if err != nil {
+ return nil, err
+}
+
+ / Since we modified the chainID, we set the new genesisDoc in the stateDB.
+ b, err := cmtjson.Marshal(genDoc)
+ if err != nil {
+ return nil, err
+}
+ if err := stateDB.SetSync([]byte("genesisDoc"), b); err != nil {
+ return nil, err
+}
+
+return testnetApp, err
+}
+
+/ addStartNodeFlags should be added to any CLI commands that start the network. 
+func addStartNodeFlags(cmd *cobra.Command, opts StartCmdOptions) { + cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://127.0.0.1:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if 
the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. 
(Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + +cmd.Flags().Duration(FlagShutdownGrace, 0*time.Second, "On Shutdown, duration to wait for resource clean up") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} +} +``` + +Note that an `appCreator` is a function that fulfills the `AppCreator` signature: + +```go expandable +package types + +import ( + + "encoding/json" + "io" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/grpc" + "github.com/spf13/cobra" + "cosmossdk.io/log" + "cosmossdk.io/store/snapshots" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" +) + +type ( + / AppOptions defines an interface that is passed into an application + / constructor, typically used to set BaseApp options that are either supplied + / via config file or through CLI arguments/flags. The underlying implementation + / is defined by the server package and is typically implemented via a Viper + / literal defined on the server Context. Note, casting Get calls may not yield + / the expected types and could result in type assertion errors. 
It is recommended
+ / to either use the cast package or perform manual conversion for safety.
+ AppOptions interface {
+ Get(string)
+
+any
+}
+
+ / Application defines an application interface that wraps abci.Application.
+ / The interface defines the necessary contracts to be implemented in order
+ / to fully bootstrap and start an application.
+ Application interface {
+ ABCI
+
+ RegisterAPIRoutes(*api.Server, config.APIConfig)
+
+ / RegisterGRPCServerWithSkipCheckHeader registers gRPC services directly with the gRPC
+ / server and bypass check header flag.
+ RegisterGRPCServerWithSkipCheckHeader(grpc.Server, bool)
+
+ / RegisterTxService registers the gRPC Query service for tx (such as tx
+ / simulation, fetching txs by hash...).
+ RegisterTxService(client.Context)
+
+ / RegisterTendermintService registers the gRPC Query service for CometBFT queries.
+ RegisterTendermintService(client.Context)
+
+ / RegisterNodeService registers the node gRPC Query service.
+ RegisterNodeService(client.Context, config.Config)
+
+ / CommitMultiStore returns the multistore instance
+ CommitMultiStore()
+
+storetypes.CommitMultiStore
+
+ / Return the snapshot manager
+ SnapshotManager() *snapshots.Manager
+
+ / Close is called in start cmd to gracefully cleanup resources.
+ / Must be safe to be called multiple times.
+ Close()
+
+error
+}
+
+ / AppCreator is a function that allows us to lazily initialize an
+ / application using various configurations.
+ AppCreator func(log.Logger, dbm.DB, io.Writer, AppOptions)
+
+Application
+
+ / ModuleInitFlags takes a start command and adds module-specific init flags.
+ ModuleInitFlags func(startCmd *cobra.Command)
+
+ / ExportedApp represents an exported app state, along with
+ / validators, consensus params and latest app height.
+ ExportedApp struct {
+ / AppState is the application state as JSON.
+ AppState json.RawMessage
+ / Validators is the exported validator set. 
+ Validators []cmttypes.GenesisValidator + / Height is the app's latest block height. + Height int64 + / ConsensusParams are the exported consensus params for ABCI. + ConsensusParams cmtproto.ConsensusParams +} + + / AppExporter is a function that dumps all app state to + / JSON-serializable structure and returns the current validator set. + AppExporter func( + logger log.Logger, + db dbm.DB, + traceWriter io.Writer, + height int64, + forZeroHeight bool, + jailAllowedAddrs []string, + opts AppOptions, + modulesToExport []string, + ) (ExportedApp, error) +) +``` + +In practice, the [constructor of the application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#constructor-function) is passed as the `appCreator`. + +```go +/ Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L294-L308 +``` + +Then, the instance of `app` is used to instantiate a new CometBFT node: + +```go expandable +package server + +import ( + + "bufio" + "context" + "fmt" + "io" + "net" + "os" + "path/filepath" + "runtime/pprof" + "strings" + "time" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + cmtjson "github.com/cometbft/cometbft/libs/json" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + cmtstate "github.com/cometbft/cometbft/proto/tendermint/state" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cometbft/cometbft/proxy" + rpchttp "github.com/cometbft/cometbft/rpc/client/http" + "github.com/cometbft/cometbft/rpc/client/local" + sm "github.com/cometbft/cometbft/state" + "github.com/cometbft/cometbft/store" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + 
"google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + FlagIAVLSyncPruning = "iavl-sync-pruning" + FlagShutdownGrace = "shutdown-grace" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = 
"api.rpc-write-timeout"
+ FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes"
+ FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors"
+
+ / gRPC-related flags
+ flagGRPCOnly = "grpc-only"
+ flagGRPCEnable = "grpc.enable"
+ flagGRPCAddress = "grpc.address"
+ flagGRPCWebEnable = "grpc-web.enable"
+ flagGRPCSkipCheckHeader = "grpc.skip-check-header"
+
+ / mempool flags
+ FlagMempoolMaxTxs = "mempool.max-txs"
+
+ / testnet keys
+ KeyIsTestnet = "is-testnet"
+ KeyNewChainID = "new-chain-ID"
+ KeyNewOpAddr = "new-operator-addr"
+ KeyNewValAddr = "new-validator-addr"
+ KeyUserPubKey = "user-pub-key"
+ KeyTriggerTestnetUpgrade = "trigger-testnet-upgrade"
+)
+
+/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`.
+type StartCmdOptions struct {
+ / DBOpener can be used to customize db opening, for example customize db options or support different db backends,
+ / default to the builtin db opener.
+ DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error)
+ / PostSetup can be used to setup extra services under the same cancellable context,
+ / it's not called in stand-alone mode, only for in-process mode.
+ PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group)
+
+error
+ / PostSetupStandalone can be used to setup extra services under the same cancellable context,
+ PostSetupStandalone func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group)
+
+error
+ / AddFlags adds custom flags to the start cmd
+ AddFlags func(cmd *cobra.Command)
+ / StartCommandHandler can be used to customize the start command handler
+ StartCommandHandler func(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, inProcessConsensus bool, opts StartCmdOptions)
+
+error
+}
+
+/ StartCmd runs the service passed in, either stand-alone or in-process with
+/ CometBFT. 
+func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ +}) +} + +/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with +/ CometBFT. +func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "start", + Short: "Run the full node", + Long: `Run the full node application with CometBFT in or out of process. By +default, the application will run with CometBFT in process. + +Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and +'pruning-interval' together. + +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. 
+ +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. +`, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, appCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +addStartNodeFlags(cmd, opts) + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx *Context, svrCfg 
serverconfig.Config, clientCtx client.Context, app types.Application, metrics *telemetry.Metrics, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %w", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / create tendermint client + / assumes the rpc listen address is where tendermint has its rpc server + rpcclient, err := rpchttp.New(svrCtx.Config.RPC.ListenAddress, "/websocket") + if err != nil { + return err +} + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(rpcclient) + + / use the provided clientCtx to register the services + app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, svrCtx.Config.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetupStandalone != nil { + if err := opts.PostSetupStandalone(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so 
we can gracefully stop the ABCI server. + <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return svr.Stop() +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, cmtCfg.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. 
+func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. 
+func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, _, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( /nolint: staticcheck / ignore this line for this linter + config.Address, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + 
grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", config.Address) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + if isTestnet, ok := svrCtx.Viper.Get(KeyIsTestnet).(bool); ok && isTestnet { + app, 
err = testnetify(svrCtx, appCreator, db, traceWriter) + if err != nil { + return app, traceCleanupFn, err +} + +} + +else { + app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) +} + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} + +/ InPlaceTestnetCreator utilizes the provided chainID and operatorAddress as well as the local private validator key to +/ control the network represented in the data folder. This is useful to create testnets nearly identical to your +/ mainnet environment. +func InPlaceTestnetCreator(testnetAppCreator types.AppCreator) *cobra.Command { + opts := StartCmdOptions{ +} + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "in-place-testnet [newChainID] [newOperatorAddress]", + Short: "Create and start a testnet from current local state", + Long: `Create and start a testnet from current local state. +After utilizing this command the network will start. If the network is stopped, +the normal "start" command should be used. Re-using this command on state that +has already been modified by this command could result in unexpected behavior. + +Additionally, the first block may take up to one minute to be committed, depending +on how old the block is. For instance, if a snapshot was taken weeks ago and we want +to turn this into a testnet, it is possible lots of pending state needs to be committed +(expiring locks, etc.). It is recommended that you should wait for this block to be committed +before stopping the daemon. + +If the --trigger-testnet-upgrade flag is set, the upgrade handler specified by the flag will be run +on the first block of the testnet. 
+ +Regardless of whether the flag is set or not, if any new stores are introduced in the daemon being run, +those stores will be registered in order to prevent panics. Therefore, you only need to set the flag if +you want to test the upgrade handler itself. +`, + Example: "in-place-testnet localosmosis osmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj", + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + newChainID := args[0] + newOperatorAddress := args[1] + + skipConfirmation, _ := cmd.Flags().GetBool("skip-confirmation") + if !skipConfirmation { + / Confirmation prompt to prevent accidental modification of state. + reader := bufio.NewReader(os.Stdin) + +fmt.Println("This operation will modify state in your data folder and cannot be undone. Do you want to continue? (y/n)") + +text, _ := reader.ReadString('\n') + response := strings.TrimSpace(strings.ToLower(text)) + if response != "y" && response != "yes" { + fmt.Println("Operation canceled.") + +return nil +} + +} + + / Set testnet keys to be used by the application. + / This is done to prevent changes to existing start API. 
+ serverCtx.Viper.Set(KeyIsTestnet, true) + +serverCtx.Viper.Set(KeyNewChainID, newChainID) + +serverCtx.Viper.Set(KeyNewOpAddr, newOperatorAddress) + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, testnetAppCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +addStartNodeFlags(cmd, opts) + +cmd.Flags().String(KeyTriggerTestnetUpgrade, "", "If set (example: \"v21\"), triggers the v21 upgrade handler to run on the first block of the testnet") + +cmd.Flags().Bool("skip-confirmation", false, "Skip the confirmation prompt") + +return cmd +} + +/ testnetify modifies both state and blockStore, allowing the provided operator address and local validator key to control the network +/ that the state in the data folder represents. The chainID of the local genesis file is modified to match the provided chainID. +func testnetify(ctx *Context, testnetAppCreator types.AppCreator, db dbm.DB, traceWriter io.WriteCloser) (types.Application, error) { + config := ctx.Config + + newChainID, ok := ctx.Viper.Get(KeyNewChainID).(string) + if !ok { + return nil, fmt.Errorf("expected string for key %s", KeyNewChainID) +} + + / Modify app genesis chain ID and save to genesis file. + genFilePath := config.GenesisFile() + +appGen, err := genutiltypes.AppGenesisFromFile(genFilePath) + if err != nil { + return nil, err +} + +appGen.ChainID = newChainID + if err := appGen.ValidateAndComplete(); err != nil { + return nil, err +} + if err := appGen.SaveAs(genFilePath); err != nil { + return nil, err +} + + / Regenerate addrbook.json to prevent peers on old network from causing error logs. 
+ addrBookPath := filepath.Join(config.RootDir, "config", "addrbook.json") + if err := os.Remove(addrBookPath); err != nil && !os.IsNotExist(err) { + return nil, fmt.Errorf("failed to remove existing addrbook.json: %w", err) +} + emptyAddrBook := []byte("{ +}") + if err := os.WriteFile(addrBookPath, emptyAddrBook, 0o600); err != nil { + return nil, fmt.Errorf("failed to create empty addrbook.json: %w", err) +} + + / Load the comet genesis doc provider. + genDocProvider := node.DefaultGenesisDocProviderFunc(config) + + / Initialize blockStore and stateDB. + blockStoreDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "blockstore", + Config: config +}) + if err != nil { + return nil, err +} + blockStore := store.NewBlockStore(blockStoreDB) + +stateDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "state", + Config: config +}) + if err != nil { + return nil, err +} + +defer blockStore.Close() + +defer stateDB.Close() + privValidator := pvm.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile()) + +userPubKey, err := privValidator.GetPubKey() + if err != nil { + return nil, err +} + validatorAddress := userPubKey.Address() + stateStore := sm.NewStore(stateDB, sm.StoreOptions{ + DiscardABCIResponses: config.Storage.DiscardABCIResponses, +}) + +state, genDoc, err := node.LoadStateFromDBOrGenesisDocProvider(stateDB, genDocProvider) + if err != nil { + return nil, err +} + +ctx.Viper.Set(KeyNewValAddr, validatorAddress) + +ctx.Viper.Set(KeyUserPubKey, userPubKey) + testnetApp := testnetAppCreator(ctx.Logger, db, traceWriter, ctx.Viper) + + / We need to create a temporary proxyApp to get the initial state of the application. + / Depending on how the node was stopped, the application height can differ from the blockStore height. + / This height difference changes how we go about modifying the state. 
+ cmtApp := NewCometABCIWrapper(testnetApp) + _, context := getCtx(ctx, true) + clientCreator := proxy.NewLocalClientCreator(cmtApp) + metrics := node.DefaultMetricsProvider(cmtcfg.DefaultConfig().Instrumentation) + _, _, _, _, proxyMetrics, _, _ := metrics(genDoc.ChainID) + proxyApp := proxy.NewAppConns(clientCreator, proxyMetrics) + if err := proxyApp.Start(); err != nil { + return nil, fmt.Errorf("error starting proxy app connections: %w", err) +} + +res, err := proxyApp.Query().Info(context, proxy.RequestInfo) + if err != nil { + return nil, fmt.Errorf("error calling Info: %w", err) +} + +err = proxyApp.Stop() + if err != nil { + return nil, err +} + appHash := res.LastBlockAppHash + appHeight := res.LastBlockHeight + + var block *cmttypes.Block + switch { + case appHeight == blockStore.Height(): + block = blockStore.LoadBlock(blockStore.Height()) + / If the state's last blockstore height does not match the app and blockstore height, we likely stopped with the halt height flag. + if state.LastBlockHeight != appHeight { + state.LastBlockHeight = appHeight + block.AppHash = appHash + state.AppHash = appHash +} + +else { + / Node was likely stopped via SIGTERM, delete the next block's seen commit + err := blockStoreDB.Delete(fmt.Appendf(nil, "SC:%v", blockStore.Height()+1)) + if err != nil { + return nil, err +} + +} + case blockStore.Height() > state.LastBlockHeight: + / This state usually occurs when we gracefully stop the node. 
+ err = blockStore.DeleteLatestBlock() + if err != nil { + return nil, err +} + +block = blockStore.LoadBlock(blockStore.Height()) + +default: + / If there is any other state, we just load the block + block = blockStore.LoadBlock(blockStore.Height()) +} + +block.ChainID = newChainID + state.ChainID = newChainID + + block.LastBlockID = state.LastBlockID + block.LastCommit.BlockID = state.LastBlockID + + / Create a vote from our validator + vote := cmttypes.Vote{ + Type: cmtproto.PrecommitType, + Height: state.LastBlockHeight, + Round: 0, + BlockID: state.LastBlockID, + Timestamp: time.Now(), + ValidatorAddress: validatorAddress, + ValidatorIndex: 0, + Signature: []byte{ +}, +} + + / Sign the vote, and copy the proto changes from the act of signing to the vote itself + voteProto := vote.ToProto() + +err = privValidator.SignVote(newChainID, voteProto) + if err != nil { + return nil, err +} + +vote.Signature = voteProto.Signature + vote.Timestamp = voteProto.Timestamp + + / Modify the block's lastCommit to be signed only by our validator + block.LastCommit.Signatures[0].ValidatorAddress = validatorAddress + block.LastCommit.Signatures[0].Signature = vote.Signature + block.LastCommit.Signatures = []cmttypes.CommitSig{ + block.LastCommit.Signatures[0] +} + + / Load the seenCommit of the lastBlockHeight and modify it to be signed from our validator + seenCommit := blockStore.LoadSeenCommit(state.LastBlockHeight) + +seenCommit.BlockID = state.LastBlockID + seenCommit.Round = vote.Round + seenCommit.Signatures[0].Signature = vote.Signature + seenCommit.Signatures[0].ValidatorAddress = validatorAddress + seenCommit.Signatures[0].Timestamp = vote.Timestamp + seenCommit.Signatures = []cmttypes.CommitSig{ + seenCommit.Signatures[0] +} + +err = blockStore.SaveSeenCommit(state.LastBlockHeight, seenCommit) + if err != nil { + return nil, err +} + + / Create ValidatorSet struct containing just our valdiator. 
+ newVal := &cmttypes.Validator{
 + Address: validatorAddress,
 + PubKey: userPubKey,
 + VotingPower: 900000000000000,
+}
 + newValSet := &cmttypes.ValidatorSet{
 + Validators: []*cmttypes.Validator{
 + newVal
+},
 + Proposer: newVal,
+}
 +
 + / Replace all valSets in state to be the valSet with just our validator.
 + state.Validators = newValSet
 + state.LastValidators = newValSet
 + state.NextValidators = newValSet
 + state.LastHeightValidatorsChanged = blockStore.Height()
 +
+err = stateStore.Save(state)
 + if err != nil {
 + return nil, err
+}
 +
 + / Create a ValidatorsInfo struct to store in stateDB.
 + valSet, err := state.Validators.ToProto()
 + if err != nil {
 + return nil, err
+}
 + valInfo := &cmtstate.ValidatorsInfo{
 + ValidatorSet: valSet,
 + LastHeightChanged: state.LastBlockHeight,
+}
 +
+buf, err := valInfo.Marshal()
 + if err != nil {
 + return nil, err
+}
 +
 + / Modify Validators stateDB entry.
 + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()), buf)
 + if err != nil {
 + return nil, err
+}
 +
 + / Modify LastValidators stateDB entry.
 + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()-1), buf)
 + if err != nil {
 + return nil, err
+}
 +
 + / Modify NextValidators stateDB entry.
 + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()+1), buf)
 + if err != nil {
 + return nil, err
+}
 +
 + / Since we modified the chainID, we set the new genesisDoc in the stateDB.
 + b, err := cmtjson.Marshal(genDoc)
 + if err != nil {
 + return nil, err
+}
 + if err := stateDB.SetSync([]byte("genesisDoc"), b); err != nil {
 + return nil, err
+}
 +
+return testnetApp, err
+}
 +
+/ addStartNodeFlags should be added to any CLI commands that start the network.
+func addStartNodeFlags(cmd *cobra.Command, opts StartCmdOptions) { + cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://127.0.0.1:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if 
the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. 
(Note: gRPC must also be enabled)")
 +
+cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval")
 +
+cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep")
 +
+cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree")
 +
+cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool")
 +
+cmd.Flags().Duration(FlagShutdownGrace, 0*time.Second, "On Shutdown, duration to wait for resource clean up")
 +
 + / support old flag names for backwards compatibility
 + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string)
 +
+pflag.NormalizedName {
 + if name == "with-tendermint" {
 + name = flagWithComet
+}
 +
+return pflag.NormalizedName(name)
+})
 +
 + / add support for all CometBFT-specific command line options
 + cmtcmd.AddNodeFlags(cmd)
 + if opts.AddFlags != nil {
 + opts.AddFlags(cmd)
+}
+}
+```
 +
+The CometBFT node can be created with `app` because the latter satisfies the [`abci.Application` interface](https://github.com/cometbft/cometbft/blob/v0.37.0/abci/types/application.go#L9-L35) (given that `app` extends [`baseapp`](/docs/sdk/v0.53/documentation/application-framework/baseapp)). As part of the `node.New` method, CometBFT verifies that the height of the application (i.e. the number of blocks since genesis) matches the height of the CometBFT node. The application height should never exceed the node height, so the difference between the two is always zero or negative. If it is strictly negative, `node.New` replays blocks until the application catches up to the height of the CometBFT node. Finally, if the height of the application is `0`, the CometBFT node calls [`InitChain`](/docs/sdk/v0.53/documentation/application-framework/baseapp#initchain) on the application to initialize the state from the genesis file.
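+The height reconciliation described above can be sketched as a small decision function. This is an illustrative approximation of the rule only, not CometBFT's actual replay code; the function name and return values are hypothetical:
+
+```go
+package main
+
+import "fmt"
+
+// syncDecision is a hypothetical sketch of the rule node.New applies when
+// comparing the application height against the CometBFT block store height.
+// The application is never allowed to be ahead of the block store.
+func syncDecision(appHeight, storeHeight int64) (string, error) {
+	switch {
+	case appHeight > storeHeight:
+		// the invariant "app height <= node height" is broken
+		return "", fmt.Errorf("app height %d exceeds store height %d", appHeight, storeHeight)
+	case appHeight == 0:
+		// fresh chain: initialize state from the genesis file
+		return "InitChain", nil
+	case appHeight < storeHeight:
+		// strictly negative difference: replay blocks until the app catches up
+		return "ReplayBlocks", nil
+	default:
+		return "InSync", nil
+	}
+}
+
+func main() {
+	for _, hs := range [][2]int64{{0, 0}, {5, 10}, {10, 10}} {
+		decision, _ := syncDecision(hs[0], hs[1])
+		fmt.Println(decision) // InitChain, ReplayBlocks, InSync
+	}
+}
+```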
+ +Once the CometBFT node is instantiated and in sync with the application, the node can be started: + +```go expandable +package server + +import ( + + "bufio" + "context" + "fmt" + "io" + "net" + "os" + "path/filepath" + "runtime/pprof" + "strings" + "time" + "github.com/cometbft/cometbft/abci/server" + cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" + cmtcfg "github.com/cometbft/cometbft/config" + cmtjson "github.com/cometbft/cometbft/libs/json" + "github.com/cometbft/cometbft/node" + "github.com/cometbft/cometbft/p2p" + pvm "github.com/cometbft/cometbft/privval" + cmtstate "github.com/cometbft/cometbft/proto/tendermint/state" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/cometbft/cometbft/proxy" + rpchttp "github.com/cometbft/cometbft/rpc/client/http" + "github.com/cometbft/cometbft/rpc/client/local" + sm "github.com/cometbft/cometbft/state" + "github.com/cometbft/cometbft/store" + cmttypes "github.com/cometbft/cometbft/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/hashicorp/go-metrics" + "github.com/spf13/cobra" + "github.com/spf13/pflag" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + pruningtypes "cosmossdk.io/store/pruning/types" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/server/api" + serverconfig "github.com/cosmos/cosmos-sdk/server/config" + servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" + servercmtlog "github.com/cosmos/cosmos-sdk/server/log" + "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/telemetry" + "github.com/cosmos/cosmos-sdk/types/mempool" + "github.com/cosmos/cosmos-sdk/version" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" +) + +const ( + / CometBFT full-node start flags + flagWithComet = "with-comet" + flagAddress = "address" + flagTransport = "transport" + 
flagTraceStore = "trace-store" + flagCPUProfile = "cpu-profile" + FlagMinGasPrices = "minimum-gas-prices" + FlagQueryGasLimit = "query-gas-limit" + FlagHaltHeight = "halt-height" + FlagHaltTime = "halt-time" + FlagInterBlockCache = "inter-block-cache" + FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" + FlagTrace = "trace" + FlagInvCheckPeriod = "inv-check-period" + + FlagPruning = "pruning" + FlagPruningKeepRecent = "pruning-keep-recent" + FlagPruningInterval = "pruning-interval" + FlagIndexEvents = "index-events" + FlagMinRetainBlocks = "min-retain-blocks" + FlagIAVLCacheSize = "iavl-cache-size" + FlagDisableIAVLFastNode = "iavl-disable-fastnode" + FlagIAVLSyncPruning = "iavl-sync-pruning" + FlagShutdownGrace = "shutdown-grace" + + / state sync-related flags + FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" + FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" + + / api-related flags + FlagAPIEnable = "api.enable" + FlagAPISwagger = "api.swagger" + FlagAPIAddress = "api.address" + FlagAPIMaxOpenConnections = "api.max-open-connections" + FlagRPCReadTimeout = "api.rpc-read-timeout" + FlagRPCWriteTimeout = "api.rpc-write-timeout" + FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" + FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" + + / gRPC-related flags + flagGRPCOnly = "grpc-only" + flagGRPCEnable = "grpc.enable" + flagGRPCAddress = "grpc.address" + flagGRPCWebEnable = "grpc-web.enable" + flagGRPCSkipCheckHeader = "grpc.skip-check-header" + + / mempool flags + FlagMempoolMaxTxs = "mempool.max-txs" + + / testnet keys + KeyIsTestnet = "is-testnet" + KeyNewChainID = "new-chain-ID" + KeyNewOpAddr = "new-operator-addr" + KeyNewValAddr = "new-validator-addr" + KeyUserPubKey = "user-pub-key" + KeyTriggerTestnetUpgrade = "trigger-testnet-upgrade" +) + +/ StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, +type StartCmdOptions struct { + / DBOpener can be used to customize db opening, for example customize db 
options or support different db backends,
 + / default to the builtin db opener.
 + DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error)
 + / PostSetup can be used to setup extra services under the same cancellable context,
 + / it's not called in stand-alone mode, only for in-process mode.
 + PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group)
 +
+error
 + / PostSetupStandalone can be used to setup extra services under the same cancellable context,
 + PostSetupStandalone func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group)
 +
+error
 + / AddFlags adds custom flags to the start cmd
 + AddFlags func(cmd *cobra.Command)
 + / StartCommandHandler can be used to customize the start command handler
 + StartCommandHandler func(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, inProcessConsensus bool, opts StartCmdOptions)
 +
+error
+}
 +
+/ StartCmd runs the service passed in, either stand-alone or in-process with
+/ CometBFT.
+func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command {
 + return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{
+})
+}
 +
+/ StartCmdWithOptions runs the service passed in, either stand-alone or in-process with
+/ CometBFT.
+func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command {
 + if opts.DBOpener == nil {
 + opts.DBOpener = openDB
+}
 + if opts.StartCommandHandler == nil {
 + opts.StartCommandHandler = start
+}
 + cmd := &cobra.Command{
 + Use: "start",
 + Short: "Run the full node",
 + Long: `Run the full node application with CometBFT in or out of process. By
+default, the application will run with CometBFT in process.
+
+Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and
+'pruning-interval' together.
+ +For '--pruning' the options are as follows: + +default: the last 362880 states are kept, pruning at 10 block intervals +nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) + +everything: 2 latest states will be kept; pruning at 10 block intervals. +custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' + +Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During +the ABCI Commit phase, the node will check if the current block height is greater than or equal to +the halt-height or if the current block time is greater than or equal to the halt-time. If so, the +node will attempt to gracefully shutdown and the block will not be committed. In addition, the node +will not be able to commit subsequent blocks. + +For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag +which accepts a path for the resulting pprof file. + +The node may be started in a 'query only' mode where only the gRPC and JSON HTTP +API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is +bypassed and can be used when legacy queries are needed after an on-chain upgrade +is performed. Note, when enabled, gRPC will also be automatically enabled. 
+`, + RunE: func(cmd *cobra.Command, _ []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, appCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") + +addStartNodeFlags(cmd, opts) + +return cmd +} + +func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) + +error { + svrCfg, err := getAndValidateConfig(svrCtx) + if err != nil { + return err +} + +app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) + if err != nil { + return err +} + +defer appCleanupFn() + +metrics, err := startTelemetry(svrCfg) + if err != nil { + return err +} + +emitServerInfoMetrics() + if !withCmt { + return startStandAlone(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) +} + +func startStandAlone(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, metrics *telemetry.Metrics, opts StartCmdOptions) + +error { + addr := svrCtx.Viper.GetString(flagAddress) + transport := svrCtx.Viper.GetString(flagTransport) + cmtApp := NewCometABCIWrapper(app) + +svr, err := server.NewServer(addr, transport, 
cmtApp) + if err != nil { + return fmt.Errorf("error creating listener: %w", err) +} + +svr.SetLogger(servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger.With("module", "abci-server") +}) + +g, ctx := getCtx(svrCtx, false) + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / create tendermint client + / assumes the rpc listen address is where tendermint has its rpc server + rpcclient, err := rpchttp.New(svrCtx.Config.RPC.ListenAddress, "/websocket") + if err != nil { + return err +} + / re-assign for making the client available below + / do not use := to avoid shadowing clientCtx + clientCtx = clientCtx.WithClient(rpcclient) + + / use the provided clientCtx to register the services + app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, svrCtx.Config.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetupStandalone != nil { + if err := opts.PostSetupStandalone(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + +g.Go(func() + +error { + if err := svr.Start(); err != nil { + svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) + +return err +} + + / Wait for the calling process to be canceled or close the provided context, + / so we can gracefully stop the ABCI server. 
+ <-ctx.Done() + +svrCtx.Logger.Info("stopping the ABCI server...") + +return svr.Stop() +}) + +return g.Wait() +} + +func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, + metrics *telemetry.Metrics, opts StartCmdOptions, +) + +error { + cmtCfg := svrCtx.Config + gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) + +g, ctx := getCtx(svrCtx, true) + if gRPCOnly { + / TODO: Generalize logic so that gRPC only is really in startStandAlone + svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") + +svrCfg.GRPC.Enable = true +} + +else { + svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") + +tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) + if err != nil { + return err +} + +defer cleanupFn() + + / Add the tx service to the gRPC router. We only need to register this + / service if API or gRPC is enabled, and avoid doing so in the general + / case, because it spawns a new local CometBFT RPC client. + if svrCfg.API.Enable || svrCfg.GRPC.Enable { + / Re-assign for making the client available below do not use := to avoid + / shadowing the clientCtx variable. + clientCtx = clientCtx.WithClient(local.New(tmNode)) + +app.RegisterTxService(clientCtx) + +app.RegisterTendermintService(clientCtx) + +app.RegisterNodeService(clientCtx, svrCfg) +} + +} + +grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) + if err != nil { + return err +} + +err = startAPIServer(ctx, g, svrCfg, clientCtx, svrCtx, app, cmtCfg.RootDir, grpcSrv, metrics) + if err != nil { + return err +} + if opts.PostSetup != nil { + if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { + return err +} + +} + + / wait for signal capture and gracefully return + / we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. + return g.Wait() +} + +/ TODO: Move nodeKey into being created within the function. 
+func startCmtNode( + ctx context.Context, + cfg *cmtcfg.Config, + app types.Application, + svrCtx *Context, +) (tmNode *node.Node, cleanupFn func(), err error) { + nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) + if err != nil { + return nil, cleanupFn, err +} + cmtApp := NewCometABCIWrapper(app) + +tmNode, err = node.NewNodeWithContext( + ctx, + cfg, + pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), + nodeKey, + proxy.NewLocalClientCreator(cmtApp), + getGenDocProvider(cfg), + cmtcfg.DefaultDBProvider, + node.DefaultMetricsProvider(cfg.Instrumentation), + servercmtlog.CometLoggerWrapper{ + Logger: svrCtx.Logger +}, + ) + if err != nil { + return tmNode, cleanupFn, err +} + if err := tmNode.Start(); err != nil { + return tmNode, cleanupFn, err +} + +cleanupFn = func() { + if tmNode != nil && tmNode.IsRunning() { + _ = tmNode.Stop() +} + +} + +return tmNode, cleanupFn, nil +} + +func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { + config, err := serverconfig.GetConfig(svrCtx.Viper) + if err != nil { + return config, err +} + if err := config.ValidateBasic(); err != nil { + return config, err +} + +return config, nil +} + +/ returns a function which returns the genesis doc from the genesis file. 
+func getGenDocProvider(cfg *cmtcfg.Config) + +func() (*cmttypes.GenesisDoc, error) { + return func() (*cmttypes.GenesisDoc, error) { + appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) + if err != nil { + return nil, err +} + +return appGenesis.ToGenesisDoc() +} +} + +func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { + / clean up the traceWriter when the server is shutting down + cleanup = func() { +} + traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) + +traceWriter, err = openTraceWriter(traceWriterFile) + if err != nil { + return traceWriter, cleanup, err +} + + / if flagTraceStore is not used then traceWriter is nil + if traceWriter != nil { + cleanup = func() { + if err = traceWriter.Close(); err != nil { + svrCtx.Logger.Error("failed to close trace writer", "err", err) +} + +} + +} + +return traceWriter, cleanup, nil +} + +func startGrpcServer( + ctx context.Context, + g *errgroup.Group, + config serverconfig.GRPCConfig, + clientCtx client.Context, + svrCtx *Context, + app types.Application, +) (*grpc.Server, client.Context, error) { + if !config.Enable { + / return grpcServer as nil if gRPC is disabled + return nil, clientCtx, nil +} + _, _, err := net.SplitHostPort(config.Address) + if err != nil { + return nil, clientCtx, err +} + maxSendMsgSize := config.MaxSendMsgSize + if maxSendMsgSize == 0 { + maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize +} + maxRecvMsgSize := config.MaxRecvMsgSize + if maxRecvMsgSize == 0 { + maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize +} + + / if gRPC is enabled, configure gRPC client for gRPC gateway + grpcClient, err := grpc.Dial( /nolint: staticcheck / ignore this line for this linter + config.Address, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithDefaultCallOptions( + grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), + grpc.MaxCallRecvMsgSize(maxRecvMsgSize), + 
grpc.MaxCallSendMsgSize(maxSendMsgSize), + ), + ) + if err != nil { + return nil, clientCtx, err +} + +clientCtx = clientCtx.WithGRPCClient(grpcClient) + +svrCtx.Logger.Debug("gRPC client assigned to client context", "target", config.Address) + +grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) + if err != nil { + return nil, clientCtx, err +} + + / Start the gRPC server in a goroutine. Note, the provided ctx will ensure + / that the server is gracefully shut down. + g.Go(func() + +error { + return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) +}) + +return grpcSrv, clientCtx, nil +} + +func startAPIServer( + ctx context.Context, + g *errgroup.Group, + svrCfg serverconfig.Config, + clientCtx client.Context, + svrCtx *Context, + app types.Application, + home string, + grpcSrv *grpc.Server, + metrics *telemetry.Metrics, +) + +error { + if !svrCfg.API.Enable { + return nil +} + +clientCtx = clientCtx.WithHomeDir(home) + apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) + +app.RegisterAPIRoutes(apiSrv, svrCfg.API) + if svrCfg.Telemetry.Enabled { + apiSrv.SetTelemetry(metrics) +} + +g.Go(func() + +error { + return apiSrv.Start(ctx, svrCfg) +}) + +return nil +} + +func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { + return telemetry.New(cfg.Telemetry) +} + +/ wrapCPUProfile starts CPU profiling, if enabled, and executes the provided +/ callbackFn in a separate goroutine, then will wait for that callback to +/ return. +/ +/ NOTE: We expect the caller to handle graceful shutdown and signal handling. 
+func wrapCPUProfile(svrCtx *Context, callbackFn func() + +error) + +error { + if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { + f, err := os.Create(cpuProfile) + if err != nil { + return err +} + +svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) + if err := pprof.StartCPUProfile(f); err != nil { + return err +} + +defer func() { + svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) + +pprof.StopCPUProfile() + if err := f.Close(); err != nil { + svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) +} + +}() +} + +return callbackFn() +} + +/ emitServerInfoMetrics emits server info related metrics using application telemetry. +func emitServerInfoMetrics() { + var ls []metrics.Label + versionInfo := version.NewInfo() + if len(versionInfo.GoVersion) > 0 { + ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) +} + if len(versionInfo.CosmosSdkVersion) > 0 { + ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) +} + if len(ls) == 0 { + return +} + +telemetry.SetGaugeWithLabels([]string{"server", "info" +}, 1, ls) +} + +func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { + ctx, cancelFn := context.WithCancel(context.Background()) + +g, ctx := errgroup.WithContext(ctx) + / listen for quit signals so the calling parent process can gracefully exit + ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) + +return g, ctx +} + +func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { + traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) + if err != nil { + return app, traceCleanupFn, err +} + home := svrCtx.Config.RootDir + db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) + if err != nil { + return app, traceCleanupFn, err +} + if isTestnet, ok := svrCtx.Viper.Get(KeyIsTestnet).(bool); ok && isTestnet { + app, 
err = testnetify(svrCtx, appCreator, db, traceWriter) + if err != nil { + return app, traceCleanupFn, err +} + +} + +else { + app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) +} + +cleanupFn = func() { + traceCleanupFn() + if localErr := app.Close(); localErr != nil { + svrCtx.Logger.Error(localErr.Error()) +} + +} + +return app, cleanupFn, nil +} + +/ InPlaceTestnetCreator utilizes the provided chainID and operatorAddress as well as the local private validator key to +/ control the network represented in the data folder. This is useful to create testnets nearly identical to your +/ mainnet environment. +func InPlaceTestnetCreator(testnetAppCreator types.AppCreator) *cobra.Command { + opts := StartCmdOptions{ +} + if opts.DBOpener == nil { + opts.DBOpener = openDB +} + if opts.StartCommandHandler == nil { + opts.StartCommandHandler = start +} + cmd := &cobra.Command{ + Use: "in-place-testnet [newChainID] [newOperatorAddress]", + Short: "Create and start a testnet from current local state", + Long: `Create and start a testnet from current local state. +After utilizing this command the network will start. If the network is stopped, +the normal "start" command should be used. Re-using this command on state that +has already been modified by this command could result in unexpected behavior. + +Additionally, the first block may take up to one minute to be committed, depending +on how old the block is. For instance, if a snapshot was taken weeks ago and we want +to turn this into a testnet, it is possible lots of pending state needs to be committed +(expiring locks, etc.). It is recommended that you should wait for this block to be committed +before stopping the daemon. + +If the --trigger-testnet-upgrade flag is set, the upgrade handler specified by the flag will be run +on the first block of the testnet. 
+ +Regardless of whether the flag is set or not, if any new stores are introduced in the daemon being run, +those stores will be registered in order to prevent panics. Therefore, you only need to set the flag if +you want to test the upgrade handler itself. +`, + Example: "in-place-testnet localosmosis osmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj", + Args: cobra.ExactArgs(2), + RunE: func(cmd *cobra.Command, args []string) + +error { + serverCtx := GetServerContextFromCmd(cmd) + _, err := GetPruningOptionsFromFlags(serverCtx.Viper) + if err != nil { + return err +} + +clientCtx, err := client.GetClientQueryContext(cmd) + if err != nil { + return err +} + +withCMT, _ := cmd.Flags().GetBool(flagWithComet) + if !withCMT { + serverCtx.Logger.Info("starting ABCI without CometBFT") +} + newChainID := args[0] + newOperatorAddress := args[1] + + skipConfirmation, _ := cmd.Flags().GetBool("skip-confirmation") + if !skipConfirmation { + / Confirmation prompt to prevent accidental modification of state. + reader := bufio.NewReader(os.Stdin) + +fmt.Println("This operation will modify state in your data folder and cannot be undone. Do you want to continue? (y/n)") + +text, _ := reader.ReadString('\n') + response := strings.TrimSpace(strings.ToLower(text)) + if response != "y" && response != "yes" { + fmt.Println("Operation canceled.") + +return nil +} + +} + + / Set testnet keys to be used by the application. + / This is done to prevent changes to existing start API. 
+ serverCtx.Viper.Set(KeyIsTestnet, true) + +serverCtx.Viper.Set(KeyNewChainID, newChainID) + +serverCtx.Viper.Set(KeyNewOpAddr, newOperatorAddress) + +err = wrapCPUProfile(serverCtx, func() + +error { + return opts.StartCommandHandler(serverCtx, clientCtx, testnetAppCreator, withCMT, opts) +}) + +serverCtx.Logger.Debug("received quit signal") + +graceDuration, _ := cmd.Flags().GetDuration(FlagShutdownGrace) + if graceDuration > 0 { + serverCtx.Logger.Info("graceful shutdown start", FlagShutdownGrace, graceDuration) + <-time.After(graceDuration) + +serverCtx.Logger.Info("graceful shutdown complete") +} + +return err +}, +} + +addStartNodeFlags(cmd, opts) + +cmd.Flags().String(KeyTriggerTestnetUpgrade, "", "If set (example: \"v21\"), triggers the v21 upgrade handler to run on the first block of the testnet") + +cmd.Flags().Bool("skip-confirmation", false, "Skip the confirmation prompt") + +return cmd +} + +/ testnetify modifies both state and blockStore, allowing the provided operator address and local validator key to control the network +/ that the state in the data folder represents. The chainID of the local genesis file is modified to match the provided chainID. +func testnetify(ctx *Context, testnetAppCreator types.AppCreator, db dbm.DB, traceWriter io.WriteCloser) (types.Application, error) { + config := ctx.Config + + newChainID, ok := ctx.Viper.Get(KeyNewChainID).(string) + if !ok { + return nil, fmt.Errorf("expected string for key %s", KeyNewChainID) +} + + / Modify app genesis chain ID and save to genesis file. + genFilePath := config.GenesisFile() + +appGen, err := genutiltypes.AppGenesisFromFile(genFilePath) + if err != nil { + return nil, err +} + +appGen.ChainID = newChainID + if err := appGen.ValidateAndComplete(); err != nil { + return nil, err +} + if err := appGen.SaveAs(genFilePath); err != nil { + return nil, err +} + + / Regenerate addrbook.json to prevent peers on old network from causing error logs. 
+ addrBookPath := filepath.Join(config.RootDir, "config", "addrbook.json") + if err := os.Remove(addrBookPath); err != nil && !os.IsNotExist(err) { + return nil, fmt.Errorf("failed to remove existing addrbook.json: %w", err) +} + emptyAddrBook := []byte("{ +}") + if err := os.WriteFile(addrBookPath, emptyAddrBook, 0o600); err != nil { + return nil, fmt.Errorf("failed to create empty addrbook.json: %w", err) +} + + / Load the comet genesis doc provider. + genDocProvider := node.DefaultGenesisDocProviderFunc(config) + + / Initialize blockStore and stateDB. + blockStoreDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "blockstore", + Config: config +}) + if err != nil { + return nil, err +} + blockStore := store.NewBlockStore(blockStoreDB) + +stateDB, err := cmtcfg.DefaultDBProvider(&cmtcfg.DBContext{ + ID: "state", + Config: config +}) + if err != nil { + return nil, err +} + +defer blockStore.Close() + +defer stateDB.Close() + privValidator := pvm.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile()) + +userPubKey, err := privValidator.GetPubKey() + if err != nil { + return nil, err +} + validatorAddress := userPubKey.Address() + stateStore := sm.NewStore(stateDB, sm.StoreOptions{ + DiscardABCIResponses: config.Storage.DiscardABCIResponses, +}) + +state, genDoc, err := node.LoadStateFromDBOrGenesisDocProvider(stateDB, genDocProvider) + if err != nil { + return nil, err +} + +ctx.Viper.Set(KeyNewValAddr, validatorAddress) + +ctx.Viper.Set(KeyUserPubKey, userPubKey) + testnetApp := testnetAppCreator(ctx.Logger, db, traceWriter, ctx.Viper) + + / We need to create a temporary proxyApp to get the initial state of the application. + / Depending on how the node was stopped, the application height can differ from the blockStore height. + / This height difference changes how we go about modifying the state. 
+ cmtApp := NewCometABCIWrapper(testnetApp) + _, context := getCtx(ctx, true) + clientCreator := proxy.NewLocalClientCreator(cmtApp) + metrics := node.DefaultMetricsProvider(cmtcfg.DefaultConfig().Instrumentation) + _, _, _, _, proxyMetrics, _, _ := metrics(genDoc.ChainID) + proxyApp := proxy.NewAppConns(clientCreator, proxyMetrics) + if err := proxyApp.Start(); err != nil { + return nil, fmt.Errorf("error starting proxy app connections: %w", err) +} + +res, err := proxyApp.Query().Info(context, proxy.RequestInfo) + if err != nil { + return nil, fmt.Errorf("error calling Info: %w", err) +} + +err = proxyApp.Stop() + if err != nil { + return nil, err +} + appHash := res.LastBlockAppHash + appHeight := res.LastBlockHeight + + var block *cmttypes.Block + switch { + case appHeight == blockStore.Height(): + block = blockStore.LoadBlock(blockStore.Height()) + / If the state's last blockstore height does not match the app and blockstore height, we likely stopped with the halt height flag. + if state.LastBlockHeight != appHeight { + state.LastBlockHeight = appHeight + block.AppHash = appHash + state.AppHash = appHash +} + +else { + / Node was likely stopped via SIGTERM, delete the next block's seen commit + err := blockStoreDB.Delete(fmt.Appendf(nil, "SC:%v", blockStore.Height()+1)) + if err != nil { + return nil, err +} + +} + case blockStore.Height() > state.LastBlockHeight: + / This state usually occurs when we gracefully stop the node. 
+ err = blockStore.DeleteLatestBlock() + if err != nil { + return nil, err +} + +block = blockStore.LoadBlock(blockStore.Height()) + +default: + / If there is any other state, we just load the block + block = blockStore.LoadBlock(blockStore.Height()) +} + +block.ChainID = newChainID + state.ChainID = newChainID + + block.LastBlockID = state.LastBlockID + block.LastCommit.BlockID = state.LastBlockID + + / Create a vote from our validator + vote := cmttypes.Vote{ + Type: cmtproto.PrecommitType, + Height: state.LastBlockHeight, + Round: 0, + BlockID: state.LastBlockID, + Timestamp: time.Now(), + ValidatorAddress: validatorAddress, + ValidatorIndex: 0, + Signature: []byte{ +}, +} + + / Sign the vote, and copy the proto changes from the act of signing to the vote itself + voteProto := vote.ToProto() + +err = privValidator.SignVote(newChainID, voteProto) + if err != nil { + return nil, err +} + +vote.Signature = voteProto.Signature + vote.Timestamp = voteProto.Timestamp + + / Modify the block's lastCommit to be signed only by our validator + block.LastCommit.Signatures[0].ValidatorAddress = validatorAddress + block.LastCommit.Signatures[0].Signature = vote.Signature + block.LastCommit.Signatures = []cmttypes.CommitSig{ + block.LastCommit.Signatures[0] +} + + / Load the seenCommit of the lastBlockHeight and modify it to be signed from our validator + seenCommit := blockStore.LoadSeenCommit(state.LastBlockHeight) + +seenCommit.BlockID = state.LastBlockID + seenCommit.Round = vote.Round + seenCommit.Signatures[0].Signature = vote.Signature + seenCommit.Signatures[0].ValidatorAddress = validatorAddress + seenCommit.Signatures[0].Timestamp = vote.Timestamp + seenCommit.Signatures = []cmttypes.CommitSig{ + seenCommit.Signatures[0] +} + +err = blockStore.SaveSeenCommit(state.LastBlockHeight, seenCommit) + if err != nil { + return nil, err +} + + / Create ValidatorSet struct containing just our valdiator. 
+ newVal := &cmttypes.Validator{ + Address: validatorAddress, + PubKey: userPubKey, + VotingPower: 900000000000000, +} + newValSet := &cmttypes.ValidatorSet{ + Validators: []*cmttypes.Validator{ + newVal +}, + Proposer: newVal, +} + + / Replace all valSets in state to be the valSet with just our validator. + state.Validators = newValSet + state.LastValidators = newValSet + state.NextValidators = newValSet + state.LastHeightValidatorsChanged = blockStore.Height() + +err = stateStore.Save(state) + if err != nil { + return nil, err +} + + / Create a ValidatorsInfo struct to store in stateDB. + valSet, err := state.Validators.ToProto() + if err != nil { + return nil, err +} + valInfo := &cmtstate.ValidatorsInfo{ + ValidatorSet: valSet, + LastHeightChanged: state.LastBlockHeight, +} + +buf, err := valInfo.Marshal() + if err != nil { + return nil, err +} + + / Modfiy Validators stateDB entry. + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()), buf) + if err != nil { + return nil, err +} + + / Modify LastValidators stateDB entry. + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()-1), buf) + if err != nil { + return nil, err +} + + / Modify NextValidators stateDB entry. + err = stateDB.Set(fmt.Appendf(nil, "validatorsKey:%v", blockStore.Height()+1), buf) + if err != nil { + return nil, err +} + + / Since we modified the chainID, we set the new genesisDoc in the stateDB. + b, err := cmtjson.Marshal(genDoc) + if err != nil { + return nil, err +} + if err := stateDB.SetSync([]byte("genesisDoc"), b); err != nil { + return nil, err +} + +return testnetApp, err +} + +/ addStartNodeFlags should be added to any CLI commands that start the network. 
+func addStartNodeFlags(cmd *cobra.Command, opts StartCmdOptions) { + cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") + +cmd.Flags().String(flagAddress, "tcp://127.0.0.1:26658", "Listen address") + +cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") + +cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") + +cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") + +cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. Blank and 0 imply unbounded.") + +cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ +}, "Skip a set of upgrade heights to continue the old binary") + +cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) + +at which to gracefully halt the chain and shutdown the node") + +cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") + +cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") + +cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") + +cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") + +cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") + +cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") + +cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") + +cmd.Flags().Bool(FlagAPIEnable, false, "Define if 
the API server should be enabled") + +cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") + +cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") + +cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") + +cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") + +cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT maximum request body (in bytes)") + +cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") + +cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") + +cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") + +cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") + +cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. 
(Note: gRPC must also be enabled)") + +cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") + +cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") + +cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") + +cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") + +cmd.Flags().Duration(FlagShutdownGrace, 0*time.Second, "On Shutdown, duration to wait for resource clean up") + + / support old flags name for backwards compatibility + cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) + +pflag.NormalizedName { + if name == "with-tendermint" { + name = flagWithComet +} + +return pflag.NormalizedName(name) +}) + + / add support for all CometBFT-specific command line options + cmtcmd.AddNodeFlags(cmd) + if opts.AddFlags != nil { + opts.AddFlags(cmd) +} +} +``` + +Upon starting, the node will bootstrap its RPC and P2P server and start dialing peers. During handshake with its peers, if the node realizes they are ahead, it will query all the blocks sequentially in order to catch up. Then, it will wait for new block proposals and block signatures from validators in order to make progress. + +## Other commands + +To discover how to concretely run a node and interact with it, please refer to our [Running a Node, API and CLI](/docs/sdk/v0.53/documentation/operations/run-node) guide. diff --git a/docs/sdk/v0.53/documentation/operations/run-node.mdx b/docs/sdk/v0.53/documentation/operations/run-node.mdx new file mode 100644 index 00000000..0b5d654a --- /dev/null +++ b/docs/sdk/v0.53/documentation/operations/run-node.mdx @@ -0,0 +1,218 @@ +--- +title: Running a Node +--- + +## Synopsis + +Now that the application is ready and the keyring populated, it's time to see how to run the blockchain node. 
In this section, the application we are running is called [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`. + + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy) +- [Setting up the keyring](/docs/sdk/v0.53/documentation/operations/keyring) + + + +## Initialize the Chain + + + Make sure you can build your own binary, and replace `simd` with the name of + your binary in the snippets. + + +Before actually running the node, we need to initialize the chain and, most importantly, its genesis file. This is done with the `init` subcommand: + +```bash +# The argument <moniker> is the custom username of your node; it should be human-readable. +simd init <moniker> --chain-id my-test-chain +``` + +The command above creates all the configuration files needed for your node to run, as well as a default genesis file, which defines the initial state of the network. + + + All these configuration files are in `~/.simapp` by default, but you can + overwrite the location of this folder by passing the `--home` flag to each + command, or set an `$APPD_HOME` environment variable (where `APPD` is the name + of the binary). + + +The `~/.simapp` folder has the following structure: + +```bash +. # ~/.simapp + |- data # Contains the databases used by the node. + |- config/ + |- app.toml # Application-related configuration file. + |- config.toml # CometBFT-related configuration file. + |- genesis.json # The genesis file. + |- node_key.json # Private key to use for node authentication in the p2p protocol. + |- priv_validator_key.json # Private key to use as a validator in the consensus protocol.
+``` + +## Updating Some Default Settings + +If you want to change any field values in configuration files (for example, `genesis.json`), you can use `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) & `sed` commands to do that. A few examples are listed here. + +```bash expandable +# to change the chain-id
jq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json + +# to enable the api server +sed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml + +# to change the voting_period +jq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json + +# to change the inflation +jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json +``` + +### Client Interaction + +When instantiating a node, gRPC and REST default to localhost to avoid unknowingly exposing your node to the public. It is recommended not to expose these endpoints unless a proxy that can handle load balancing or authentication is set up between your node and the public. + +A commonly used tool for this is [nginx](https://nginx.org). + +## Adding Genesis Accounts + +Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](/docs/sdk/v0.53/documentation/operations/keyring#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend). + +Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file.
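If you have not created that account yet, the commands below are one way to do it. This is a sketch following the keyring guide linked above; the `my_validator` name and `test` backend match this tutorial, but in practice you would choose your own.

```shell
# Create the my_validator key under the test keyring backend
# (names match this tutorial; choose your own in practice).
simd keys add my_validator --keyring-backend test

# Capture the new account's address in the variable used by later commands.
MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test)
echo $MY_VALIDATOR_ADDRESS
```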
Doing so will also make sure your chain is aware of this account's existence: + +```bash +simd genesis add-genesis-account $MY_VALIDATOR_ADDRESS 100000000000stake +``` + +Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](/docs/sdk/v0.53/documentation/operations/keyring#adding-keys-to-the-keyring). Also note that the tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is an 18-digit-precision decimal number, and `denom` is the unique token identifier with its denomination key (e.g. `atom` or `uatom`). Here, we are granting `stake` tokens, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead. + +Now that your account has some tokens, you need to add a validator to your chain. Validators are special full-nodes that participate in the consensus process (implemented in the [underlying consensus engine](/docs/sdk/v0.53/documentation/core-concepts/sdk-app-architecture#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, you will add your local node (created via the `init` command above) as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`: + +```bash +# Create a gentx. +simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test + +# Add the gentx to the genesis file. +simd genesis collect-gentxs +``` + +A `gentx` does three things: + +1. 
Registers the `validator` account you created as a validator operator account (i.e., the account that controls the validator). +2. Self-delegates the provided `amount` of staking tokens. +3. Links the operator account with a CometBFT node pubkey that will be used for signing blocks. If no `--pubkey` flag is provided, it defaults to the local node pubkey created via the `simd init` command above. + +For more information on `gentx`, use the following command: + +```bash +simd genesis gentx --help +``` + +## Configuring the Node Using `app.toml` and `config.toml` + +The Cosmos SDK automatically generates two configuration files inside `~/.simapp/config`: + +- `config.toml`: used to configure CometBFT; learn more in [CometBFT's documentation](https://docs.cometbft.com/v0.37/core/configuration), +- `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST servers configuration, state sync... + +Both files are heavily commented; please refer to them directly to tweak your node. + +One example config to tweak is the `minimum-gas-prices` field inside `app.toml`, which defines the minimum gas prices the validator node is willing to accept for processing a transaction. Depending on the chain, it might be an empty string or not. If it's empty, make sure to edit the field with some value, for example `10token`, or else the node will halt on startup. For the purpose of this tutorial, let's set the minimum gas price to 0: + +```toml + # The minimum gas prices a validator is willing to accept for processing a + # transaction. A transaction's fees must meet the minimum of any denomination + # specified in this config (e.g. 0.25token1;0.0001token2). + minimum-gas-prices = "0stake" +``` + + +When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`.
+ +```toml +[mempool] +# Setting max-txs to 0 will allow for an unbounded amount of transactions in the mempool. +# Setting max-txs to negative 1 (-1) will disable transactions from being inserted into the mempool. +# Setting max-txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount. +# +# Note, this configuration only applies to SDK built-in app-side mempool +# implementations. +max-txs = -1 +``` + + + +## Run a Localnet + +Now that everything is set up, you can finally start your node: + +```bash +simd start +``` + +You should see blocks come in. + +The previous command allows you to run a single node. This is enough for the next section on interacting with this node, but you may wish to run multiple nodes at the same time, and see how consensus happens between them. + +The naive way would be to run the same commands again in separate terminal windows. This is possible; however, in the Cosmos SDK, we leverage the power of [Docker Compose](https://docs.docker.com/compose/) to run a localnet. If you need inspiration on how to set up your own localnet with Docker Compose, you can have a look at the Cosmos SDK's [`docker-compose.yml`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/docker-compose.yml). + +### Standalone App/CometBFT + +By default, the Cosmos SDK runs CometBFT in-process with the application. +If you want to run the application and CometBFT in separate processes, +start the application with the `--with-comet=false` flag +and set `rpc.laddr` in `config.toml` to the CometBFT node's RPC address. + +## Logging + +Logging provides a way to see what is going on with a node. The default logging level is info. This is a global level, and all info logs are output to the terminal. If you would like to filter which logs are output to the terminal instead of seeing all of them, you can set per-module levels using the `module:log_level` syntax.
+
+Example:
+
+In `config.toml`:
+
+```toml
+log_level: "state:info,p2p:info,consensus:info,x/staking:info,x/ibc:info,*error"
+```
+
+## State Sync
+
+State sync is the process by which a node syncs the latest or close-to-latest state of a blockchain. This is useful for users who don't want to sync all the blocks in history. Read more in the [CometBFT documentation](https://docs.cometbft.com/v0.37/core/state-sync).
+
+State sync works thanks to snapshots. Read how the SDK handles snapshots [here](https://github.com/cosmos/cosmos-sdk/blob/825245d/store/snapshots/README.md).
+
+### Local State Sync
+
+Local state sync works similarly to normal state sync, except that it works off a local snapshot of state instead of one provided via the p2p network. The steps to start local state sync are similar to normal state sync, with a few differences.
+
+1. As mentioned in [Link](https://docs.cometbft.com/v0.37/core/state-sync), one must set a height and hash in the `config.toml`, along with a few RPC servers (the aforementioned link has instructions on how to do this).
+2. Run ` ` to restore a local snapshot (note: first load it from a file with the _load_ command).
+3. Bootstrap the CometBFT state to start the node after the snapshot has been ingested. This can be done with the bootstrap command ` comet bootstrap-state`.
+
+### Snapshots Commands
+
+The Cosmos SDK provides commands for managing snapshots.
+These commands can be added in an app with the following snippet in `cmd//root.go`:
+
+```go
+import (
+
+	"github.com/cosmos/cosmos-sdk/client/snapshot"
+)
+
+func initRootCmd(/* ... */) {
+	// ...
+	rootCmd.AddCommand(
+		snapshot.Cmd(appCreator),
+	)
+}
+```
+
+Then the following commands are available at ` snapshots [command]`:
+
+- **list**: List local snapshots
+- **load**: Load a snapshot archive file into the snapshot store
+- **restore**: Restore app state from a local snapshot
+- **export**: Export app state to the snapshot store
+- **dump**: Dump a snapshot as a portable archive format
+- **delete**: Delete a local snapshot
diff --git a/docs/sdk/v0.53/user/run-node/run-production.mdx b/docs/sdk/v0.53/documentation/operations/run-production.mdx
similarity index 62%
rename from docs/sdk/v0.53/user/run-node/run-production.mdx
rename to docs/sdk/v0.53/documentation/operations/run-production.mdx
index dd66e35c..5fd57158 100644
--- a/docs/sdk/v0.53/user/run-node/run-production.mdx
+++ b/docs/sdk/v0.53/documentation/operations/run-production.mdx
@@ -1,51 +1,54 @@
---
-title: "Running in Production"
-description: "Version: v0.53"
+title: Running in Production
---
-
- This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains.
-
+## Synopsis
+
+This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains.

When operating a node, full node or validator, in production it is important to set your server up securely.

- There are many different ways to secure a server and your node, the described steps here is one way. To see another way of setting up a server see the [run in production tutorial](https://tutorials.cosmos.network/hands-on-exercise/4-run-in-prod).
+ There are many different ways to secure a server and your node; the
+ steps described here are one way. To see another way of setting up a server see the [run
+ in production
+ tutorial](https://tutorials.cosmos.network/hands-on-exercise/4-run-in-prod).

-
- This walkthrough assumes the underlying operating system is Ubuntu.
-
+This walkthrough assumes the underlying operating system is Ubuntu.

-## Sever Setup[​](#sever-setup "Direct link to Sever Setup")
+## Server Setup

-### User[​](#user "Direct link to User")
+### User

When creating a server, most times it is created as user `root`. This user has heightened privileges on the server. When operating a node, it is recommended not to run your node as the root user.

1. Create a new user

-```
+```bash
sudo adduser change_me
```

2. We want to allow this user to perform sudo tasks

-```
+```bash
sudo usermod -aG sudo change_me
```

Now when logging into the server, the non-`root` user can be used.

-### Go[​](#go "Direct link to Go")
+### Go

1. Install the [Go](https://go.dev/doc/install) version recommended by the application.

- In the past, validators [have had issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using different versions of Go. It is recommended that the whole validator set uses the version of Go that is preconized by the application.
+ In the past, validators [have had
+ issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using
+ different versions of Go. It is recommended that the whole validator set uses
+ the version of Go that is recommended by the application.

-### Firewall[​](#firewall "Direct link to Firewall")
+### Firewall

Nodes should not have all ports open to the public; this is an easy way to get DDoS'd. Second, it is recommended by [CometBFT](https://github.com/cometbft/cometbft) to never expose ports that are not required to operate a node.

@@ -55,19 +58,20 @@ Most, if not all servers come equipped with [ufw](https://help.ubuntu.com/commun

1. Reset UFW to disallow all incoming connections and allow outgoing

-```
-sudo ufw default deny incomingsudo ufw default allow outgoing
+```bash
+sudo ufw default deny incoming
+sudo ufw default allow outgoing
```

2. Let's make sure that port 22 (ssh) stays open.
-```
+```bash
sudo ufw allow ssh
```

or

-```
+```bash
sudo ufw allow 22
```

@@ -75,81 +79,81 @@ Both of the above commands are the same.

3. Allow Port 26656 (cometbft p2p port). If the node has a modified p2p port then that port must be used here.

-```
+```bash
sudo ufw allow 26656/tcp
```

4. Allow port 26660 (cometbft [prometheus](https://prometheus.io)). This acts as the application's monitoring port as well.

-```
+```bash
sudo ufw allow 26660/tcp
```

5. If the node being set up should expose CometBFT's JSON-RPC and the Cosmos SDK's gRPC and REST endpoints, follow this step. (Optional)

-##### CometBFT JsonRPC[​](#cometbft-jsonrpc "Direct link to CometBFT JsonRPC")
+##### CometBFT JsonRPC

-```
+```bash
sudo ufw allow 26657/tcp
```

-##### Cosmos SDK GRPC[​](#cosmos-sdk-grpc "Direct link to Cosmos SDK GRPC")
+##### Cosmos SDK GRPC

-```
+```bash
sudo ufw allow 9090/tcp
```

-##### Cosmos SDK REST[​](#cosmos-sdk-rest "Direct link to Cosmos SDK REST")
+##### Cosmos SDK REST

-```
+```bash
sudo ufw allow 1317/tcp
```

6. Lastly, enable ufw

-```
+```bash
sudo ufw enable
```

-### Signing[​](#signing "Direct link to Signing")
+### Signing

If the node that is being started is a validator, there are multiple ways a validator could sign blocks.

-#### File[​](#file "Direct link to File")
+#### File

File-based signing is the simplest and default approach. This approach stores the consensus key, generated on initialization, and uses it to sign blocks. This approach is only as safe as your server setup: if the server is compromised, so is your key. This key is located in the `config/priv_validator_key.json` file generated on initialization.

A second file exists that the user must be aware of; it is located in the data directory at `data/priv_validator_state.json`. This file protects your node from double signing. It keeps track of the consensus key's last sign height, round, and latest signature.
If the node crashes and needs to be recovered, this file must be kept in order to ensure that the consensus key will not be used for signing a block that was previously signed.

-#### Remote Signer[​](#remote-signer "Direct link to Remote Signer")
+#### Remote Signer

A remote signer is a secondary server, separate from the running node, that signs blocks with the consensus key. This means that the consensus key does not live on the node itself. This increases security because your full node, which is connected to the remote signer, can be swapped without missing blocks.

The two most used remote signers are [tmkms](https://github.com/iqlusioninc/tmkms) from [Iqlusion](https://www.iqlusion.io) and [horcrux](https://github.com/strangelove-ventures/horcrux) from [Strangelove](https://strange.love).

-##### TMKMS[​](#tmkms "Direct link to TMKMS")
+##### TMKMS

-###### Dependencies[​](#dependencies "Direct link to Dependencies")
+###### Dependencies

1. Update server dependencies and install extras needed.

-```
+```sh
sudo apt update -y && sudo apt install build-essential curl jq -y
```

2. Install Rust:

-```
+```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

3. Install Libusb:

-```
+```sh
sudo apt install libusb-1.0-0-dev
```

-###### Setup[​](#setup "Direct link to Setup")
+###### Setup

There are two ways to install tmkms: from source or `cargo install`. In the examples we will cover downloading or building from source and using softsign. Softsign stands for software signing, but you could use a [yubihsm](https://www.yubico.com/products/hardware-security-module/) as your signing key if you wish.

@@ -157,16 +161,23 @@ There are two ways to install tmkms.
In the exam

From source:

-```
-cd $HOMEgit clone https://github.com/iqlusioninc/tmkms.gitcd $HOME/tmkmscargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./config/secrets/secret_connection_key
+```bash
+cd $HOME
+git clone https://github.com/iqlusioninc/tmkms.git
+cd $HOME/tmkms
+cargo install tmkms --features=softsign
+tmkms init config
+tmkms softsign keygen ./config/secrets/secret_connection_key
```

or

Cargo install:

-```
-cargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./config/secrets/secret_connection_key
+```bash
+cargo install tmkms --features=softsign
+tmkms init config
+tmkms softsign keygen ./config/secrets/secret_connection_key
```

@@ -175,13 +186,13 @@ cargo install tmkms --features=softsigntmkms init configtmkms softsign keygen ./

2. Migrate the validator key from the full node to the new tmkms instance.

-```
+```bash
scp user@123.456.32.123:~/.simd/config/priv_validator_key.json ~/tmkms/config/secrets
```

3. Import the validator key into tmkms.

-```
+```bash
tmkms softsign import $HOME/tmkms/config/secrets/priv_validator_key.json $HOME/tmkms/config/secrets/priv_validator_key
```

At this point, it is necessary to delete the `priv_validator_key.json` from the

4. Modify the `tmkms.toml`.

-```
+```bash
vim $HOME/tmkms/config/tmkms.toml
```

-This example shows a configuration that could be used for soft signing. The example has an IP of `123.456.12.345` with a port of `26659` a chain\_id of `test-chain-waSDSe`. These are items that most be modified for the usecase of tmkms and the network.
+This example shows a configuration that could be used for soft signing. The example has an IP of `123.456.12.345`, a port of `26659`, and a chain_id of `test-chain-waSDSe`. These are items that must be modified for the use case of tmkms and the network.
-``` -# CometBFT KMS configuration file## Chain Configuration[[chain]]id = "osmosis-1"key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" }state_file = "/root/tmkms/config/state/priv_validator_state.json"## Signing Provider Configuration### Software-based Signer Configuration[[providers.softsign]]chain_ids = ["test-chain-waSDSe"]key_type = "consensus"path = "/root/tmkms/config/secrets/priv_validator_key"## Validator Configuration[[validator]]chain_id = "test-chain-waSDSe"addr = "tcp://123.456.12.345:26659"secret_key = "/root/tmkms/config/secrets/secret_connection_key"protocol_version = "v0.34"reconnect = true +```toml expandable +# CometBFT KMS configuration file + +## Chain Configuration + +[[chain]] +id = "osmosis-1" +key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" } +state_file = "/root/tmkms/config/state/priv_validator_state.json" + +## Signing Provider Configuration + +### Software-based Signer Configuration + +[[providers.softsign]] +chain_ids = ["test-chain-waSDSe"] +key_type = "consensus" +path = "/root/tmkms/config/secrets/priv_validator_key" + +## Validator Configuration + +[[validator]] +chain_id = "test-chain-waSDSe" +addr = "tcp://123.456.12.345:26659" +secret_key = "/root/tmkms/config/secrets/secret_connection_key" +protocol_version = "v0.34" +reconnect = true ``` 5. Set the address of the tmkms instance. 
-```
-vim $HOME/.simd/config/config.tomlpriv_validator_laddr = "tcp://0.0.0.0:26659"
+```bash
+vim $HOME/.simd/config/config.toml
+
+priv_validator_laddr = "tcp://0.0.0.0:26659"
```

-
- The above address it set to `0.0.0.0` but it is recommended to set the tmkms server to secure the startup
-
+
+ The above address is set to `0.0.0.0`, but it is recommended to restrict it to the
+ address of the tmkms server to secure the connection.
+

-
- It is recommended to comment or delete the lines that specify the path of the validator key and validator:
+
+It is recommended to comment out or delete the lines that specify the path of the validator key and validator state:

- ```
- # Path to the JSON file containing the private key to use as a validator in the consensus protocol# priv_validator_key_file = "config/priv_validator_key.json"# Path to the JSON file containing the last sign state of a validator# priv_validator_state_file = "data/priv_validator_state.json"
- ```
-
+```toml
+# Path to the JSON file containing the private key to use as a validator in the consensus protocol
+# priv_validator_key_file = "config/priv_validator_key.json"
+
+# Path to the JSON file containing the last sign state of a validator
+# priv_validator_state_file = "data/priv_validator_state.json"
+```
+
+

6. Start the two processes.

-```
+```bash
tmkms start -c $HOME/tmkms/config/tmkms.toml
```

-```
+```bash
simd start
```
diff --git a/docs/sdk/v0.53/documentation/operations/run-testnet.mdx b/docs/sdk/v0.53/documentation/operations/run-testnet.mdx
new file mode 100644
index 00000000..7580ab50
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/run-testnet.mdx
@@ -0,0 +1,97 @@
+---
+title: Running a Testnet
+---
+
+## Synopsis
+
+The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes.
+ +In addition to the commands for [running a node](/docs/sdk/v0.53/documentation/operations/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. + +## Initialize Files + +First, let's take a look at the `init-files` subcommand. + +This is similar to the `init` command when initializing a single node, but in this case we are initializing multiple nodes, generating the genesis transactions for each node, and then collecting those transactions. + +The `init-files` subcommand initializes the necessary files to run a test network in a separate process (i.e. using a Docker container). Running this command is not a prerequisite for the `start` subcommand ([see below](#start-testnet)). + +In order to initialize the files for a test network, run the following command: + +```bash +simd testnet init-files +``` + +You should see the following output in your terminal: + +```bash +Successfully initialized 4 node directories +``` + +The default output directory is a relative `.testnets` directory. Let's take a look at the files created within the `.testnets` directory. + +### gentxs + +The `gentxs` directory includes a genesis transaction for each validator node. Each file includes a JSON encoded genesis transaction used to register a validator node at the time of genesis. The genesis transactions are added to the `genesis.json` file within each node directory during the initialization process. + +### nodes + +A node directory is created for each validator node. Within each node directory is a `simd` directory. The `simd` directory is the home directory for each node, which includes the configuration and data files for that node (i.e. the same files included in the default `~/.simapp` directory when running a single node). + +## Start Testnet + +Now, let's take a look at the `start` subcommand. 
+ +The `start` subcommand both initializes and starts an in-process test network. This is the fastest way to spin up a local test network for testing purposes. + +You can start the local test network by running the following command: + +```bash +simd testnet start +``` + +You should see something similar to the following: + +```bash expandable +acquiring test network lock +preparing test network with chain-id "chain-mtoD9v" + ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++ +++ DO NOT USE IN PRODUCTION ++ +++ ++ +++ sustain know debris minute gate hybrid stereo custom ++ +++ divorce cross spoon machine latin vibrant term oblige ++ +++ moment beauty laundry repeat grab game bronze truly ++ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +starting test network... +started test network +press the Enter Key to terminate +``` + +The first validator node is now running in-process, which means the test network will terminate once you either close the terminal window or you press the Enter key. In the output, the mnemonic phrase for the first validator node is provided for testing purposes. The validator node is using the same default addresses being used when initializing and starting a single node (no need to provide a `--node` flag). + +Check the status of the first validator node: + +```shell +simd status +``` + +Import the key from the provided mnemonic: + +```shell +simd keys add test --recover --keyring-backend test +``` + +Check the balance of the account address: + +```shell +simd q bank balances [address] +``` + +Use this test account to manually test against the test network. + +## Testnet Options + +You can customize the configuration of the test network with flags. In order to see all flag options, append the `--help` flag to each command. 
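+
+For instance, a sketch of initializing files for a larger testnet in a custom directory (flag names such as `--v`, `--output-dir`, and `--chain-id` are taken from common SDK versions of `simd testnet init-files --help` and may differ in yours; verify with `--help` first):
+
+```bash
+# Initialize files for 4 validator nodes under ./.testnets,
+# with a custom chain-id (check flag names with --help first).
+simd testnet init-files --v 4 --output-dir ./.testnets --chain-id my-testnet
+```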
diff --git a/docs/sdk/v0.53/documentation/operations/simulation.mdx b/docs/sdk/v0.53/documentation/operations/simulation.mdx
new file mode 100644
index 00000000..6a5d3429
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/simulation.mdx
@@ -0,0 +1,95 @@
+---
+title: Cosmos Blockchain Simulator
+description: >-
+  The Cosmos SDK offers a full-fledged simulation framework to fuzz test every
+  message defined by a module.
+---
+
+The Cosmos SDK offers a full-fledged simulation framework to fuzz test every
+message defined by a module.
+
+On the Cosmos SDK, this functionality is provided by [`SimApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_di.go), which is a
+`BaseApp` application that is used for running the [`simulation`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation) module.
+This module defines all the simulation logic, as well as the operations for
+randomized parameters like accounts, balances, etc.
+
+## Goals
+
+The blockchain simulator tests how the blockchain application would behave under
+real-life circumstances by generating and sending randomized messages.
+The goal of this is to detect and debug failures that could halt a live chain,
+by providing logs and statistics about the operations run by the simulator, as
+well as exporting the latest application state when a failure is found.
+
+Its main difference from integration testing is that the simulator app allows
+you to pass parameters to customize the chain that's being simulated.
+This comes in handy when trying to reproduce bugs that were generated in the
+provided operations (randomized or not).
+
+## Simulation commands
+
+The simulation app has different commands, each of which tests a different
+failure type:
+
+* `AppImportExport`: The simulator exports the initial app state and then it
+  creates a new app with the exported `genesis.json` as an input, checking for
+  inconsistencies between the stores.
+* `AppSimulationAfterImport`: Queues two simulations together. The first one provides the app state (*i.e.* genesis) to the second. Useful to test software upgrades or hard-forks from a live chain.
+* `AppStateDeterminism`: Checks that all the nodes return the same values, in the same order.
+* `FullAppSimulation`: General simulation mode. Runs the chain and the specified operations for a given number of blocks. Tests that there are no `panics` in the simulation.
+
+Each simulation must receive a set of inputs (*i.e.* flags), such as the number of
+blocks to run the simulation for, the seed, the block size, etc.
+Check the full list of flags [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation/client/cli/flags.go#L43-L70).
+
+## Simulator Modes
+
+In addition to the various inputs and commands, the simulator runs in three modes:
+
+1. Completely random, where the initial state, module parameters and simulation
+   parameters are **pseudo-randomly generated**.
+2. From a `genesis.json` file, where the initial state and the module parameters are defined.
+   This mode is helpful for running simulations on a known state, such as a live network export, where a new (most likely breaking) version of the application needs to be tested.
+3. From a `params.json` file, where the initial state is pseudo-randomly generated but the module and simulation parameters can be provided manually.
+   This allows for a more controlled and deterministic simulation setup while still allowing the state space to be pseudo-randomly simulated.
+   The list of available parameters can be found [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation/client/cli/flags.go#L72-L90).
+
+
+These modes are not mutually exclusive: for example, you can run a randomly
+generated genesis state (`1`) with manually generated simulation params (`3`).
+
+
+## Usage
+
+This is a general example of how simulations are run.
For more specific examples,
+check the Cosmos SDK [Makefile](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/Makefile#L285-L320).
+
+```bash
+ $ go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \
+  -run=TestApp \
+  ...
+  -v -timeout 24h
+```
+
+## Debugging Tips
+
+Here are some suggestions when encountering a simulation failure:
+
+* Export the app state at the height where the failure was found. You can do this
+  by passing the `-ExportStatePath` flag to the simulator.
+* Use `-Verbose` logs. They could give you a better hint on all the operations
+  involved.
+* Try using another `-Seed`. If it reproduces the same error but fails
+  sooner, you will spend less time running the simulations.
+* Reduce `-NumBlocks`. What does the app state look like at the height just before the
+  failure?
+* Try adding logs to operations that are not logged. You will have to define a
+  [Logger](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/staking/keeper/keeper.go#L77-L81) on your `Keeper`.
+
+## Use simulation in your Cosmos SDK-based application
+
+Learn how you can build the simulation into your Cosmos SDK-based application:
+
+* Application Simulation Manager
+* [Building modules: Simulator](/docs/sdk/v0.53/documentation/operations/simulator)
+* Simulator tests
diff --git a/docs/sdk/v0.53/documentation/operations/simulator.mdx b/docs/sdk/v0.53/documentation/operations/simulator.mdx
new file mode 100644
index 00000000..c2d9ab3e
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/simulator.mdx
@@ -0,0 +1,4063 @@
+---
+title: Module Simulation
+---
+
+
+**Pre-requisite Readings**
+
+* [Cosmos Blockchain Simulator](/docs/sdk/v0.53/documentation/operations/simulation)
+
+
+
+## Synopsis
+
+This document guides developers on integrating their custom modules with the Cosmos SDK `Simulations`.
+Simulations are useful for testing edge cases in module implementations.
+ +* [Simulation Package](#simulation-package) +* [Simulation App Module](#simulation-app-module) +* [SimsX](#simsx) + * [Example Implementations](#example-implementations) +* [Store decoders](#store-decoders) +* [Randomized genesis](#randomized-genesis) +* [Random weighted operations](#random-weighted-operations) + * [Using Simsx](#using-simsx) +* [App Simulator manager](#app-simulator-manager) +* [Running Simulations](#running-simulations) + +## Simulation Package + +The Cosmos SDK suggests organizing your simulation related code in a `x//simulation` package. + +## Simulation App Module + +To integrate with the Cosmos SDK `SimulationManager`, app modules must implement the `AppModuleSimulation` interface. + +```go expandable +package module + +import ( + + "encoding/json" + "math/rand" + "sort" + "time" + + sdkmath "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/types/simulation" +) + +/ AppModuleSimulation defines the standard functions that every module should expose +/ for the SDK blockchain simulator +type AppModuleSimulation interface { + / randomized genesis states + GenerateGenesisState(input *SimulationState) + + / register a func to decode the each module's defined types from their corresponding store key + RegisterStoreDecoder(simulation.StoreDecoderRegistry) + + / simulation operations (i.e msgs) + +with their respective weight + WeightedOperations(simState SimulationState) []simulation.WeightedOperation +} + +/ HasProposalMsgs defines the messages that can be used to simulate governance (v1) + +proposals +type HasProposalMsgs interface { + / msg functions used to simulate governance proposals + ProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg +} + +/ HasProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) + +proposals +type HasProposalContents interface { + / content functions used to simulate governance 
proposals + ProposalContents(simState SimulationState) []simulation.WeightedProposalContent /nolint:staticcheck / legacy v1beta1 governance +} + +/ SimulationManager defines a simulation manager that provides the high level utility +/ for managing and executing simulation functionalities for a group of modules +type SimulationManager struct { + Modules []AppModuleSimulation / array of app modules; we use an array for deterministic simulation tests + StoreDecoders simulation.StoreDecoderRegistry / functions to decode the key-value pairs from each module's store +} + +/ NewSimulationManager creates a new SimulationManager object +/ +/ CONTRACT: All the modules provided must be also registered on the module Manager +func NewSimulationManager(modules ...AppModuleSimulation) *SimulationManager { + return &SimulationManager{ + Modules: modules, + StoreDecoders: make(simulation.StoreDecoderRegistry), +} +} + +/ NewSimulationManagerFromAppModules creates a new SimulationManager object. +/ +/ First it sets any SimulationModule provided by overrideModules, and ignores any AppModule +/ with the same moduleName. +/ Then it attempts to cast every provided AppModule into an AppModuleSimulation. +/ If the cast succeeds, its included, otherwise it is excluded. +func NewSimulationManagerFromAppModules(modules map[string]any, overrideModules map[string]AppModuleSimulation) *SimulationManager { + simModules := []AppModuleSimulation{ +} + appModuleNamesSorted := make([]string, 0, len(modules)) + for moduleName := range modules { + appModuleNamesSorted = append(appModuleNamesSorted, moduleName) +} + +sort.Strings(appModuleNamesSorted) + for _, moduleName := range appModuleNamesSorted { + / for every module, see if we override it. If so, use override. + / Else, if we can cast the app module into a simulation module add it. + / otherwise no simulation module. 
+ if simModule, ok := overrideModules[moduleName]; ok { + simModules = append(simModules, simModule) +} + +else { + appModule := modules[moduleName] + if simModule, ok := appModule.(AppModuleSimulation); ok { + simModules = append(simModules, simModule) +} + / cannot cast, so we continue +} + +} + +return NewSimulationManager(simModules...) +} + +/ Deprecated: Use GetProposalMsgs instead. +/ GetProposalContents returns each module's proposal content generator function +/ with their default operation weight and key. +func (sm *SimulationManager) + +GetProposalContents(simState SimulationState) []simulation.WeightedProposalContent { + wContents := make([]simulation.WeightedProposalContent, 0, len(sm.Modules)) + for _, module := range sm.Modules { + if module, ok := module.(HasProposalContents); ok { + wContents = append(wContents, module.ProposalContents(simState)...) +} + +} + +return wContents +} + +/ GetProposalMsgs returns each module's proposal msg generator function +/ with their default operation weight and key. +func (sm *SimulationManager) + +GetProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg { + wContents := make([]simulation.WeightedProposalMsg, 0, len(sm.Modules)) + for _, module := range sm.Modules { + if module, ok := module.(HasProposalMsgs); ok { + wContents = append(wContents, module.ProposalMsgs(simState)...) 
+} + +} + +return wContents +} + +/ RegisterStoreDecoders registers each of the modules' store decoders into a map +func (sm *SimulationManager) + +RegisterStoreDecoders() { + for _, module := range sm.Modules { + module.RegisterStoreDecoder(sm.StoreDecoders) +} +} + +/ GenerateGenesisStates generates a randomized GenesisState for each of the +/ registered modules +func (sm *SimulationManager) + +GenerateGenesisStates(simState *SimulationState) { + for _, module := range sm.Modules { + module.GenerateGenesisState(simState) +} +} + +/ WeightedOperations returns all the modules' weighted operations of an application +func (sm *SimulationManager) + +WeightedOperations(simState SimulationState) []simulation.WeightedOperation { + wOps := make([]simulation.WeightedOperation, 0, len(sm.Modules)) + for _, module := range sm.Modules { + wOps = append(wOps, module.WeightedOperations(simState)...) +} + +return wOps +} + +/ SimulationState is the input parameters used on each of the module's randomized +/ GenesisState generator function +type SimulationState struct { + AppParams simulation.AppParams + Cdc codec.JSONCodec / application codec + TxConfig client.TxConfig / Shared TxConfig; this is expensive to create and stateless, so create it once up front. 
+ Rand *rand.Rand / random number + GenState map[string]json.RawMessage / genesis state + Accounts []simulation.Account / simulation accounts + InitialStake sdkmath.Int / initial coins per account + NumBonded int64 / number of initially bonded accounts + BondDenom string / denom to be used as default + GenTimestamp time.Time / genesis timestamp + UnbondTime time.Duration / staking unbond time stored to use it as the slashing maximum evidence duration + LegacyParamChange []simulation.LegacyParamChange / simulated parameter changes from modules + /nolint:staticcheck / legacy used for testing + LegacyProposalContents []simulation.WeightedProposalContent / proposal content generator functions with their default weight and app sim key + ProposalMsgs []simulation.WeightedProposalMsg / proposal msg generator functions with their default weight and app sim key +} +``` + +See an example implementation of these methods from `x/distribution` [here](https://github.com/cosmos/cosmos-sdk/blob/b55b9e14fb792cc8075effb373be9d26327fddea/x/distribution/module.go#L170-L194). + +## SimsX + +Cosmos SDK v0.53.0 introduced a new package, `simsx`, providing improved DevX for writing simulation code. + +It exposes the following extension interfaces that modules may implement to integrate with the new `simsx` runner. 
+ +```go expandable +package simsx + +import ( + + "encoding/json" + "fmt" + "io" + "math" + "os" + "path/filepath" + "strings" + "testing" + + dbm "github.com/cosmos/cosmos-db" + "github.com/stretchr/testify/require" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/runtime" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation/client/cli" +) + +const SimAppChainID = "simulation-app" + +/ this list of seeds was imported from the original simulation runner: https://github.com/cosmos/tools/blob/v1.0.0/cmd/runsim/main.go#L32 +var defaultSeeds = []int64{ + 1, 2, 4, 7, + 32, 123, 124, 582, 1893, 2989, + 3012, 4728, 37827, 981928, 87821, 891823782, + 989182, 89182391, 11, 22, 44, 77, 99, 2020, + 3232, 123123, 124124, 582582, 18931893, + 29892989, 30123012, 47284728, 7601778, 8090485, + 977367484, 491163361, 424254581, 673398983, +} + +/ SimStateFactory is a factory type that provides a convenient way to create a simulation state for testing. 
+/ It contains the following fields: +/ - Codec: a codec used for serializing other objects +/ - AppStateFn: a function that returns the app state JSON bytes and the genesis accounts +/ - BlockedAddr: a map of blocked addresses +/ - AccountSource: an interface for retrieving accounts +/ - BalanceSource: an interface for retrieving balance-related information +type SimStateFactory struct { + Codec codec.Codec + AppStateFn simtypes.AppStateFn + BlockedAddr map[string]bool + AccountSource AccountSourceX + BalanceSource BalanceSource +} + +/ SimulationApp abstract app that is used by sims +type SimulationApp interface { + runtime.AppI + SetNotSigverifyTx() + +GetBaseApp() *baseapp.BaseApp + TxConfig() + +client.TxConfig + Close() + +error +} + +/ Run is a helper function that runs a simulation test with the given parameters. +/ It calls the RunWithSeeds function with the default seeds and parameters. +/ +/ This is the entrypoint to run simulation tests that used to run with the runsim binary. +func Run[T SimulationApp](/docs/sdk/v0.53/documentation/operations/ + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeeds(t, appFactory, setupStateFactory, defaultSeeds, nil, postRunActions...) +} + +/ RunWithSeeds is a helper function that runs a simulation test with the given parameters. +/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for each seed. +/ The execution is deterministic and can be used for fuzz tests as well. 
+/ +/ The system under test is isolated for each run but unlike the old runsim command, there is no Process separation. +/ This means, global caches may be reused for example. This implementation build upon the vanilla Go stdlib test framework. +func RunWithSeeds[T SimulationApp](/docs/sdk/v0.53/documentation/operations/ + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeedsAndRandAcc(t, appFactory, setupStateFactory, seeds, fuzzSeed, simtypes.RandomAccounts, postRunActions...) +} + +/ RunWithSeedsAndRandAcc calls RunWithSeeds with randAccFn +func RunWithSeedsAndRandAcc[T SimulationApp](/docs/sdk/v0.53/documentation/operations/ + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + if deprecatedParams := cli.GetDeprecatedFlagUsed(); len(deprecatedParams) != 0 { + fmt.Printf("Warning: Deprecated flag are used: %s", strings.Join(deprecatedParams, ",")) +} + cfg := cli.NewConfigFromFlags() + +cfg.ChainID = SimAppChainID + for i := range seeds { + seed := seeds[i] + t.Run(fmt.Sprintf("seed: %d", seed), func(t *testing.T) { + t.Parallel() + +RunWithSeed(t, cfg, appFactory, setupStateFactory, seed, fuzzSeed, postRunActions...) +}) +} +} + +/ RunWithSeed is a helper function that runs a simulation test with the given parameters. 
+/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for the seed. +/ The execution is deterministic and can be used for fuzz tests as well. +func RunWithSeed[T SimulationApp](/docs/sdk/v0.53/documentation/operations/ + tb testing.TB, + cfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seed int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + tb.Helper() + +RunWithSeedAndRandAcc(tb, cfg, appFactory, setupStateFactory, seed, fuzzSeed, simtypes.RandomAccounts, postRunActions...) +} + +/ RunWithSeedAndRandAcc calls RunWithSeed with randAccFn +func RunWithSeedAndRandAcc[T SimulationApp](/docs/sdk/v0.53/documentation/operations/ + tb testing.TB, + cfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seed int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + tb.Helper() + / setup environment + tCfg := cfg.With(tb, seed, fuzzSeed) + testInstance := NewSimulationAppInstance(tb, tCfg, appFactory) + +var runLogger log.Logger + if cli.FlagVerboseValue { + runLogger = log.NewTestLogger(tb) +} + +else { + runLogger = log.NewTestLoggerInfo(tb) +} + +runLogger = runLogger.With("seed", tCfg.Seed) + app := testInstance.App + stateFactory := setupStateFactory(app) + +ops, reporter := 
prepareWeightedOps(app.SimulationManager(), stateFactory, tCfg, testInstance.App.TxConfig(), runLogger) + +simParams, accs, err := simulation.SimulateFromSeedX( + tb, + runLogger, + WriteToDebugLog(runLogger), + app.GetBaseApp(), + stateFactory.AppStateFn, + randAccFn, + ops, + stateFactory.BlockedAddr, + tCfg, + stateFactory.Codec, + testInstance.ExecLogWriter, + ) + +require.NoError(tb, err) + +err = simtestutil.CheckExportSimulation(app, tCfg, simParams) + +require.NoError(tb, err) + if tCfg.Commit { + simtestutil.PrintStats(testInstance.DB) +} + / not using tb.Log to always print the summary + fmt.Printf("+++ DONE (seed: %d): \n%s\n", seed, reporter.Summary().String()) + for _, step := range postRunActions { + step(tb, testInstance, accs) +} + +require.NoError(tb, app.Close()) +} + +type ( + HasWeightedOperationsX interface { + WeightedOperationsX(weight WeightSource, reg Registry) +} + +HasWeightedOperationsXWithProposals interface { + WeightedOperationsX(weights WeightSource, reg Registry, proposals WeightedProposalMsgIter, + legacyProposals []simtypes.WeightedProposalContent) /nolint: staticcheck / used for legacy proposal types +} + +HasProposalMsgsX interface { + ProposalMsgsX(weights WeightSource, reg Registry) +} +) + +type ( + HasLegacyWeightedOperations interface { + / WeightedOperations simulation operations (i.e msgs) + +with their respective weight + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation +} + / HasLegacyProposalMsgs defines the messages that can be used to simulate governance (v1) + +proposals + / Deprecated replaced by HasProposalMsgsX + HasLegacyProposalMsgs interface { + / ProposalMsgs msg fu nctions used to simulate governance proposals + ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg +} + + / HasLegacyProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) + +proposals + / Deprecated replaced by HasProposalMsgsX + 
HasLegacyProposalContents interface { + / ProposalContents content functions used to simulate governance proposals + ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent /nolint:staticcheck / legacy v1beta1 governance +} +) + +/ TestInstance is a generic type that represents an instance of a SimulationApp used for testing simulations. +/ It contains the following fields: +/ - App: The instance of the SimulationApp under test. +/ - DB: The LevelDB database for the simulation app. +/ - WorkDir: The temporary working directory for the simulation app. +/ - Cfg: The configuration flags for the simulator. +/ - AppLogger: The logger used for logging in the app during the simulation, with seed value attached. +/ - ExecLogWriter: Captures block and operation data coming from the simulation +type TestInstance[T SimulationApp] struct { + App T + DB dbm.DB + WorkDir string + Cfg simtypes.Config + AppLogger log.Logger + ExecLogWriter simulation.LogWriter +} + +/ included to avoid cyclic dependency in testutils/sims +func prepareWeightedOps( + sm *module.SimulationManager, + stateFact SimStateFactory, + config simtypes.Config, + txConfig client.TxConfig, + logger log.Logger, +) (simulation.WeightedOperations, *BasicSimulationReporter) { + cdc := stateFact.Codec + simState := module.SimulationState{ + AppParams: make(simtypes.AppParams), + Cdc: cdc, + TxConfig: txConfig, + BondDenom: sdk.DefaultBondDenom, +} + if config.ParamsFile != "" { + bz, err := os.ReadFile(config.ParamsFile) + if err != nil { + panic(err) +} + +err = json.Unmarshal(bz, &simState.AppParams) + if err != nil { + panic(err) +} + +} + weights := ParamWeightSource(simState.AppParams) + reporter := NewBasicSimulationReporter() + pReg := make(UniqueTypeRegistry) + wContent := make([]simtypes.WeightedProposalContent, 0) /nolint:staticcheck / required for legacy type + legacyPReg := NewWeightedFactoryMethods() + / add gov proposals types + for _, m := range sm.Modules { + switch xm := 
m.(type) { + case HasProposalMsgsX: + xm.ProposalMsgsX(weights, pReg) + case HasLegacyProposalMsgs: + for _, p := range xm.ProposalMsgs(simState) { + weight := weights.Get(p.AppParamsKey(), safeUint(p.DefaultWeight())) + +legacyPReg.Add(weight, legacyToMsgFactoryAdapter(p.MsgSimulatorFn())) +} + case HasLegacyProposalContents: + wContent = append(wContent, xm.ProposalContents(simState)...) +} + +} + oReg := NewSimsMsgRegistryAdapter( + reporter, + stateFact.AccountSource, + stateFact.BalanceSource, + txConfig, + logger, + ) + wOps := make([]simtypes.WeightedOperation, 0, len(sm.Modules)) + for _, m := range sm.Modules { + / add operations + switch xm := m.(type) { + case HasWeightedOperationsX: + xm.WeightedOperationsX(weights, oReg) + case HasWeightedOperationsXWithProposals: + xm.WeightedOperationsX(weights, oReg, AppendIterators(legacyPReg.Iterator(), pReg.Iterator()), wContent) + case HasLegacyWeightedOperations: + wOps = append(wOps, xm.WeightedOperations(simState)...) +} + +} + +return append(wOps, Collect(oReg.items, func(a weightedOperation) + +simtypes.WeightedOperation { + return a +})...), reporter +} + +func safeUint(p int) + +uint32 { + if p < 0 || p > math.MaxUint32 { + panic(fmt.Sprintf("can not cast to uint32: %d", p)) +} + +return uint32(p) +} + +/ NewSimulationAppInstance initializes and returns a TestInstance of a SimulationApp. +/ The function takes a testing.T instance, a simtypes.Config instance, and an appFactory function as parameters. +/ It creates a temporary working directory and a LevelDB database for the simulation app. +/ The function then initializes a logger based on the verbosity flag and sets the logger's seed to the test configuration's seed. +/ The database is closed and cleaned up on test completion. 
+func NewSimulationAppInstance[T SimulationApp](/docs/sdk/v0.53/documentation/operations/ + tb testing.TB, + tCfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, +) + +TestInstance[T] { + tb.Helper() + workDir := tb.TempDir() + +require.NoError(tb, os.Mkdir(filepath.Join(workDir, "data"), 0o750)) + dbDir := filepath.Join(workDir, "leveldb-app-sim") + +var logger log.Logger + if cli.FlagVerboseValue { + logger = log.NewTestLogger(tb) +} + +else { + logger = log.NewTestLoggerError(tb) +} + +logger = logger.With("seed", tCfg.Seed) + +db, err := dbm.NewDB("Simulation", dbm.BackendType(tCfg.DBBackend), dbDir) + +require.NoError(tb, err) + +tb.Cleanup(func() { + _ = db.Close() / ensure db is closed +}) + appOptions := make(simtestutil.AppOptionsMap) + +appOptions[flags.FlagHome] = workDir + opts := []func(*baseapp.BaseApp) { + baseapp.SetChainID(tCfg.ChainID) +} + if tCfg.FauxMerkle { + opts = append(opts, FauxMerkleModeOpt) +} + app := appFactory(logger, db, nil, true, appOptions, opts...) + if !cli.FlagSigverifyTxValue { + app.SetNotSigverifyTx() +} + +return TestInstance[T]{ + App: app, + DB: db, + WorkDir: workDir, + Cfg: tCfg, + AppLogger: logger, + ExecLogWriter: &simulation.StandardLogWriter{ + Seed: tCfg.Seed +}, +} +} + +var _ io.Writer = writerFn(nil) + +type writerFn func(p []byte) (n int, err error) + +func (w writerFn) + +Write(p []byte) (n int, err error) { + return w(p) +} + +/ WriteToDebugLog is an adapter to io.Writer interface +func WriteToDebugLog(logger log.Logger) + +io.Writer { + return writerFn(func(p []byte) (n int, err error) { + logger.Debug(string(p)) + +return len(p), nil +}) +} + +/ FauxMerkleModeOpt returns a BaseApp option to use a dbStoreAdapter instead of +/ an IAVLStore for faster simulation speed. 
+func FauxMerkleModeOpt(bapp *baseapp.BaseApp) {
+  bapp.SetFauxMerkleMode()
+}
+```
+
+These methods allow constructing randomized messages and/or proposal messages.
+
+
+Note that modules should **not** implement both `HasWeightedOperationsX` and `HasWeightedOperationsXWithProposals`.
+See the runner code [here](https://github.com/cosmos/cosmos-sdk/blob/main/testutil/simsx/runner.go#L330-L339) for details.
+
+If the module does **not** have message handlers or governance proposal handlers, these interface methods do **not** need to be implemented.
+
+
+### Example Implementations
+
+* `HasWeightedOperationsXWithProposals`: [x/gov](https://github.com/cosmos/cosmos-sdk/blob/main/x/gov/module.go#L242-L261)
+* `HasWeightedOperationsX`: [x/bank](https://github.com/cosmos/cosmos-sdk/blob/main/x/bank/module.go#L199-L203)
+* `HasProposalMsgsX`: [x/bank](https://github.com/cosmos/cosmos-sdk/blob/main/x/bank/module.go#L194-L197)
+
+## Store decoders
+
+Registering the store decoders is required for the `AppImportExport` simulation. This allows
+the key-value pairs from the stores to be decoded to their corresponding types.
+In particular, it matches the key to a concrete type and then unmarshals the value from the `KVPair` to the type provided.
+ +Modules using [collections](https://github.com/cosmos/cosmos-sdk/blob/main/collections/README.md) can use the `NewStoreDecoderFuncFromCollectionsSchema` function that builds the decoder for you: + +```go expandable +package bank + +import ( + + "context" + "encoding/json" + "fmt" + "maps" + "slices" + "sort" + + gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime" + "github.com/spf13/cobra" + + modulev1 "cosmossdk.io/api/cosmos/bank/module/v1" + "cosmossdk.io/core/address" + "cosmossdk.io/core/appmodule" + corestore "cosmossdk.io/core/store" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/testutil/simsx" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/client/cli" + "github.com/cosmos/cosmos-sdk/x/bank/exported" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + v1bank "github.com/cosmos/cosmos-sdk/x/bank/migrations/v1" + "github.com/cosmos/cosmos-sdk/x/bank/simulation" + "github.com/cosmos/cosmos-sdk/x/bank/types" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" +) + +/ ConsensusVersion defines the current x/bank module consensus version. +const ConsensusVersion = 4 + +var ( + _ module.AppModuleBasic = AppModule{ +} + _ module.AppModuleSimulation = AppModule{ +} + _ module.HasGenesis = AppModule{ +} + _ module.HasServices = AppModule{ +} + + _ appmodule.AppModule = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the bank module. +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the bank module's name. 
+func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the bank module's types on the LegacyAmino codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the bank +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the bank module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ client.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return data.Validate() +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the bank module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the bank module. +func (ab AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd(ab.ac) +} + +/ RegisterInterfaces registers interfaces and implementations of the bank module. +func (AppModuleBasic) + +RegisterInterfaces(registry codectypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) + + / Register legacy interfaces for migration scripts. + v1bank.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the bank module. 
+type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. +func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), am.keeper) + m := keeper.NewMigrator(am.keeper.(keeper.BaseKeeper), am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 1 to 2: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 2 to 3: %v", err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 3, m.Migrate3to4); err != nil { + panic(fmt.Sprintf("failed to migrate x/bank from version 3 to 4: %v", err)) +} +} + +/ NewAppModule creates a new AppModule object +func NewAppModule(cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, ss exported.Subspace) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: accountKeeper.AddressCodec() +}, + keeper: keeper, + accountKeeper: accountKeeper, + legacySubspace: ss, +} +} + +/ QuerierRoute returns the bank module's querier route name. +func (AppModule) + +QuerierRoute() + +string { + return types.RouterKey +} + +/ InitGenesis performs genesis initialization for the bank module. It returns +/ no validator updates. 
+func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.keeper.InitGenesis(ctx, &genesisState) +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the bank +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the bank module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +/ migrate to ProposalMsgsX. This method is ignored when ProposalMsgsX exists and will be removed in the future. +func (AppModule) + +ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg { + return simulation.ProposalMsgs() +} + +/ RegisterStoreDecoder registers a decoder for supply module's types +func (am AppModule) + +RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) { + sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.keeper.(keeper.BaseKeeper).Schema) +} + +/ WeightedOperations returns the all the bank module operations with their respective weights. +/ migrate to WeightedOperationsX. 
This method is ignored when WeightedOperationsX exists and will be removed in the future +func (am AppModule) + +WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation { + return simulation.WeightedOperations( + simState.AppParams, simState.Cdc, simState.TxConfig, am.accountKeeper, am.keeper, + ) +} + +/ ProposalMsgsX registers governance proposal messages in the simulation registry. +func (AppModule) + +ProposalMsgsX(weights simsx.WeightSource, reg simsx.Registry) { + reg.Add(weights.Get("msg_update_params", 100), simulation.MsgUpdateParamsFactory()) +} + +/ WeightedOperationsX registers weighted bank module operations for simulation. +func (am AppModule) + +WeightedOperationsX(weights simsx.WeightSource, reg simsx.Registry) { + reg.Add(weights.Get("msg_send", 100), simulation.MsgSendFactory()) + +reg.Add(weights.Get("msg_multisend", 10), simulation.MsgMultiSendFactory()) +} + +/ App Wiring Setup + +func init() { + appmodule.Register( + &modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + appmodule.Invoke(InvokeSetSendRestrictions), + ) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + Cdc codec.Codec + StoreService corestore.KVStoreService + Logger log.Logger + + AccountKeeper types.AccountKeeper + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + BankKeeper keeper.BaseKeeper + Module appmodule.AppModule +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + / Configure blocked module accounts. + / + / Default behavior for blockedAddresses is to regard any module mentioned in + / AccountKeeper's module account permissions as blocked. 
+ blockedAddresses := make(map[string]bool) + if len(in.Config.BlockedModuleAccountsOverride) > 0 { + for _, moduleName := range in.Config.BlockedModuleAccountsOverride { + blockedAddresses[authtypes.NewModuleAddress(moduleName).String()] = true +} + +} + +else { + for _, permission := range in.AccountKeeper.GetModulePermissions() { + blockedAddresses[permission.GetAddress().String()] = true +} + +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + bankKeeper := keeper.NewBaseKeeper( + in.Cdc, + in.StoreService, + in.AccountKeeper, + blockedAddresses, + authority.String(), + in.Logger, + ) + m := NewAppModule(in.Cdc, bankKeeper, in.AccountKeeper, in.LegacySubspace) + +return ModuleOutputs{ + BankKeeper: bankKeeper, + Module: m +} +} + +func InvokeSetSendRestrictions( + config *modulev1.Module, + keeper keeper.BaseKeeper, + restrictions map[string]types.SendRestrictionFn, +) + +error { + if config == nil { + return nil +} + modules := slices.Collect(maps.Keys(restrictions)) + order := config.RestrictionsOrder + if len(order) == 0 { + order = modules + sort.Strings(order) +} + if len(order) != len(modules) { + return fmt.Errorf("len(restrictions order: %v) != len(restriction modules: %v)", order, modules) +} + if len(modules) == 0 { + return nil +} + for _, module := range order { + restriction, ok := restrictions[module] + if !ok { + return fmt.Errorf("can't find send restriction for module %s", module) +} + +keeper.AppendSendRestriction(restriction) +} + +return nil +} +``` + +Modules not using collections must manually build the store decoder. +See the implementation [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/distribution/simulation/decoder.go) from the distribution module for an example. 
+ +## Randomized genesis + +The simulator tests different scenarios and values for genesis parameters. +App modules must implement a `GenerateGenesisState` method to generate the initial random `GenesisState` from a given seed. + +```go expandable +package module + +import ( + + "encoding/json" + "math/rand" + "sort" + "time" + + sdkmath "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/types/simulation" +) + +/ AppModuleSimulation defines the standard functions that every module should expose +/ for the SDK blockchain simulator +type AppModuleSimulation interface { + / randomized genesis states + GenerateGenesisState(input *SimulationState) + + / register a func to decode the each module's defined types from their corresponding store key + RegisterStoreDecoder(simulation.StoreDecoderRegistry) + + / simulation operations (i.e msgs) + +with their respective weight + WeightedOperations(simState SimulationState) []simulation.WeightedOperation +} + +/ HasProposalMsgs defines the messages that can be used to simulate governance (v1) + +proposals +type HasProposalMsgs interface { + / msg functions used to simulate governance proposals + ProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg +} + +/ HasProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) + +proposals +type HasProposalContents interface { + / content functions used to simulate governance proposals + ProposalContents(simState SimulationState) []simulation.WeightedProposalContent /nolint:staticcheck / legacy v1beta1 governance +} + +/ SimulationManager defines a simulation manager that provides the high level utility +/ for managing and executing simulation functionalities for a group of modules +type SimulationManager struct { + Modules []AppModuleSimulation / array of app modules; we use an array for deterministic simulation tests + StoreDecoders 
simulation.StoreDecoderRegistry / functions to decode the key-value pairs from each module's store +} + +/ NewSimulationManager creates a new SimulationManager object +/ +/ CONTRACT: All the modules provided must be also registered on the module Manager +func NewSimulationManager(modules ...AppModuleSimulation) *SimulationManager { + return &SimulationManager{ + Modules: modules, + StoreDecoders: make(simulation.StoreDecoderRegistry), +} +} + +/ NewSimulationManagerFromAppModules creates a new SimulationManager object. +/ +/ First it sets any SimulationModule provided by overrideModules, and ignores any AppModule +/ with the same moduleName. +/ Then it attempts to cast every provided AppModule into an AppModuleSimulation. +/ If the cast succeeds, its included, otherwise it is excluded. +func NewSimulationManagerFromAppModules(modules map[string]any, overrideModules map[string]AppModuleSimulation) *SimulationManager { + simModules := []AppModuleSimulation{ +} + appModuleNamesSorted := make([]string, 0, len(modules)) + for moduleName := range modules { + appModuleNamesSorted = append(appModuleNamesSorted, moduleName) +} + +sort.Strings(appModuleNamesSorted) + for _, moduleName := range appModuleNamesSorted { + / for every module, see if we override it. If so, use override. + / Else, if we can cast the app module into a simulation module add it. + / otherwise no simulation module. + if simModule, ok := overrideModules[moduleName]; ok { + simModules = append(simModules, simModule) +} + +else { + appModule := modules[moduleName] + if simModule, ok := appModule.(AppModuleSimulation); ok { + simModules = append(simModules, simModule) +} + / cannot cast, so we continue +} + +} + +return NewSimulationManager(simModules...) +} + +/ Deprecated: Use GetProposalMsgs instead. +/ GetProposalContents returns each module's proposal content generator function +/ with their default operation weight and key. 
+func (sm *SimulationManager) + +GetProposalContents(simState SimulationState) []simulation.WeightedProposalContent { + wContents := make([]simulation.WeightedProposalContent, 0, len(sm.Modules)) + for _, module := range sm.Modules { + if module, ok := module.(HasProposalContents); ok { + wContents = append(wContents, module.ProposalContents(simState)...) +} + +} + +return wContents +} + +/ GetProposalMsgs returns each module's proposal msg generator function +/ with their default operation weight and key. +func (sm *SimulationManager) + +GetProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg { + wContents := make([]simulation.WeightedProposalMsg, 0, len(sm.Modules)) + for _, module := range sm.Modules { + if module, ok := module.(HasProposalMsgs); ok { + wContents = append(wContents, module.ProposalMsgs(simState)...) +} + +} + +return wContents +} + +/ RegisterStoreDecoders registers each of the modules' store decoders into a map +func (sm *SimulationManager) + +RegisterStoreDecoders() { + for _, module := range sm.Modules { + module.RegisterStoreDecoder(sm.StoreDecoders) +} +} + +/ GenerateGenesisStates generates a randomized GenesisState for each of the +/ registered modules +func (sm *SimulationManager) + +GenerateGenesisStates(simState *SimulationState) { + for _, module := range sm.Modules { + module.GenerateGenesisState(simState) +} +} + +/ WeightedOperations returns all the modules' weighted operations of an application +func (sm *SimulationManager) + +WeightedOperations(simState SimulationState) []simulation.WeightedOperation { + wOps := make([]simulation.WeightedOperation, 0, len(sm.Modules)) + for _, module := range sm.Modules { + wOps = append(wOps, module.WeightedOperations(simState)...) 
+} + +return wOps +} + +/ SimulationState is the input parameters used on each of the module's randomized +/ GenesisState generator function +type SimulationState struct { + AppParams simulation.AppParams + Cdc codec.JSONCodec / application codec + TxConfig client.TxConfig / Shared TxConfig; this is expensive to create and stateless, so create it once up front. + Rand *rand.Rand / random number + GenState map[string]json.RawMessage / genesis state + Accounts []simulation.Account / simulation accounts + InitialStake sdkmath.Int / initial coins per account + NumBonded int64 / number of initially bonded accounts + BondDenom string / denom to be used as default + GenTimestamp time.Time / genesis timestamp + UnbondTime time.Duration / staking unbond time stored to use it as the slashing maximum evidence duration + LegacyParamChange []simulation.LegacyParamChange / simulated parameter changes from modules + /nolint:staticcheck / legacy used for testing + LegacyProposalContents []simulation.WeightedProposalContent / proposal content generator functions with their default weight and app sim key + ProposalMsgs []simulation.WeightedProposalMsg / proposal msg generator functions with their default weight and app sim key +} +``` + +See an example from `x/auth` [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/auth/module.go#L169-L172). + +Once the module's genesis parameters are generated randomly (or with the key and +values defined in a `params` file), they are marshaled to JSON format and added +to the app genesis JSON for the simulation. + +## Random weighted operations + +Operations are one of the crucial parts of the Cosmos SDK simulation. They are the transactions +(`Msg`) that are simulated with random field values. The sender of the operation +is also assigned randomly. 
+
+Operations in the simulation are executed using the full [transaction cycle](/docs/sdk/v0.53/documentation/protocol-development/transactions) of an
+`ABCI` application that exposes the `BaseApp`.
+
+### Using Simsx
+
+Simsx introduces the ability to define a `MsgFactory` for each of a module's messages.
+
+These factories are registered in `WeightedOperationsX` and/or `ProposalMsgsX`.
+
+```go expandable
+package distribution
+
+import (
+
+ "context"
+ "encoding/json"
+ "fmt"
+
+ gwruntime "github.com/grpc-ecosystem/grpc-gateway/runtime"
+ "github.com/spf13/cobra"
+
+ modulev1 "cosmossdk.io/api/cosmos/distribution/module/v1"
+ "cosmossdk.io/core/address"
+ "cosmossdk.io/core/appmodule"
+ "cosmossdk.io/core/store"
+ "cosmossdk.io/depinject"
+
+ sdkclient "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/codec"
+ cdctypes "github.com/cosmos/cosmos-sdk/codec/types"
+ "github.com/cosmos/cosmos-sdk/testutil/simsx"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ "github.com/cosmos/cosmos-sdk/types/module"
+ simtypes "github.com/cosmos/cosmos-sdk/types/simulation"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+ "github.com/cosmos/cosmos-sdk/x/distribution/client/cli"
+ "github.com/cosmos/cosmos-sdk/x/distribution/exported"
+ "github.com/cosmos/cosmos-sdk/x/distribution/keeper"
+ "github.com/cosmos/cosmos-sdk/x/distribution/simulation"
+ "github.com/cosmos/cosmos-sdk/x/distribution/types"
+ govtypes "github.com/cosmos/cosmos-sdk/x/gov/types"
+ staking "github.com/cosmos/cosmos-sdk/x/staking/types"
+)
+
+/ ConsensusVersion defines the current x/distribution module consensus version.
+const ConsensusVersion = 3 + +var ( + _ module.AppModuleBasic = AppModule{ +} + _ module.AppModuleSimulation = AppModule{ +} + _ module.HasGenesis = AppModule{ +} + _ module.HasServices = AppModule{ +} + + _ appmodule.AppModule = AppModule{ +} + _ appmodule.HasBeginBlocker = AppModule{ +} +) + +/ AppModuleBasic defines the basic application module used by the distribution module. +type AppModuleBasic struct { + cdc codec.Codec + ac address.Codec +} + +/ Name returns the distribution module's name. +func (AppModuleBasic) + +Name() + +string { + return types.ModuleName +} + +/ RegisterLegacyAminoCodec registers the distribution module's types for the given codec. +func (AppModuleBasic) + +RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { + types.RegisterLegacyAminoCodec(cdc) +} + +/ DefaultGenesis returns default genesis state as raw bytes for the distribution +/ module. +func (AppModuleBasic) + +DefaultGenesis(cdc codec.JSONCodec) + +json.RawMessage { + return cdc.MustMarshalJSON(types.DefaultGenesisState()) +} + +/ ValidateGenesis performs genesis state validation for the distribution module. +func (AppModuleBasic) + +ValidateGenesis(cdc codec.JSONCodec, _ sdkclient.TxEncodingConfig, bz json.RawMessage) + +error { + var data types.GenesisState + if err := cdc.UnmarshalJSON(bz, &data); err != nil { + return fmt.Errorf("failed to unmarshal %s genesis state: %w", types.ModuleName, err) +} + +return types.ValidateGenesis(&data) +} + +/ RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the distribution module. +func (AppModuleBasic) + +RegisterGRPCGatewayRoutes(clientCtx sdkclient.Context, mux *gwruntime.ServeMux) { + if err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx)); err != nil { + panic(err) +} +} + +/ GetTxCmd returns the root tx command for the distribution module. 
+func (ab AppModuleBasic) + +GetTxCmd() *cobra.Command { + return cli.NewTxCmd(ab.cdc.InterfaceRegistry().SigningContext().ValidatorAddressCodec(), ab.cdc.InterfaceRegistry().SigningContext().AddressCodec()) +} + +/ RegisterInterfaces implements InterfaceModule +func (AppModuleBasic) + +RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + types.RegisterInterfaces(registry) +} + +/ AppModule implements an application module for the distribution module. +type AppModule struct { + AppModuleBasic + + keeper keeper.Keeper + accountKeeper types.AccountKeeper + bankKeeper types.BankKeeper + stakingKeeper types.StakingKeeper + + / legacySubspace is used solely for migration of x/params managed parameters + legacySubspace exported.Subspace +} + +/ NewAppModule creates a new AppModule object +func NewAppModule( + cdc codec.Codec, keeper keeper.Keeper, accountKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, stakingKeeper types.StakingKeeper, ss exported.Subspace, +) + +AppModule { + return AppModule{ + AppModuleBasic: AppModuleBasic{ + cdc: cdc, ac: accountKeeper.AddressCodec() +}, + keeper: keeper, + accountKeeper: accountKeeper, + bankKeeper: bankKeeper, + stakingKeeper: stakingKeeper, + legacySubspace: ss, +} +} + +/ IsOnePerModuleType implements the depinject.OnePerModuleType interface. +func (am AppModule) + +IsOnePerModuleType() { +} + +/ IsAppModule implements the appmodule.AppModule interface. +func (am AppModule) + +IsAppModule() { +} + +/ RegisterServices registers module services. 
+func (am AppModule) + +RegisterServices(cfg module.Configurator) { + types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) + +types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQuerier(am.keeper)) + m := keeper.NewMigrator(am.keeper, am.legacySubspace) + if err := cfg.RegisterMigration(types.ModuleName, 1, m.Migrate1to2); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 1 to 2: %v", types.ModuleName, err)) +} + if err := cfg.RegisterMigration(types.ModuleName, 2, m.Migrate2to3); err != nil { + panic(fmt.Sprintf("failed to migrate x/%s from version 2 to 3: %v", types.ModuleName, err)) +} +} + +/ InitGenesis performs genesis initialization for the distribution module. It returns +/ no validator updates. +func (am AppModule) + +InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, data json.RawMessage) { + var genesisState types.GenesisState + cdc.MustUnmarshalJSON(data, &genesisState) + +am.keeper.InitGenesis(ctx, genesisState) +} + +/ ExportGenesis returns the exported genesis state as raw bytes for the distribution +/ module. +func (am AppModule) + +ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) + +json.RawMessage { + gs := am.keeper.ExportGenesis(ctx) + +return cdc.MustMarshalJSON(gs) +} + +/ ConsensusVersion implements AppModule/ConsensusVersion. +func (AppModule) + +ConsensusVersion() + +uint64 { + return ConsensusVersion +} + +/ BeginBlock returns the begin blocker for the distribution module. +func (am AppModule) + +BeginBlock(ctx context.Context) + +error { + c := sdk.UnwrapSDKContext(ctx) + +return BeginBlocker(c, am.keeper) +} + +/ AppModuleSimulation functions + +/ GenerateGenesisState creates a randomized GenState of the distribution module. +func (AppModule) + +GenerateGenesisState(simState *module.SimulationState) { + simulation.RandomizedGenState(simState) +} + +/ ProposalMsgs returns msgs used for governance proposals for simulations. +/ migrate to ProposalMsgsX. 
This method is ignored when ProposalMsgsX exists and will be removed in the future.
+func (AppModule)
+
+ProposalMsgs(_ module.SimulationState) []simtypes.WeightedProposalMsg {
+ return simulation.ProposalMsgs()
+}
+
+/ RegisterStoreDecoder registers a decoder for distribution module's types
+func (am AppModule)
+
+RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) {
+ sdr[types.StoreKey] = simulation.NewDecodeStore(am.cdc)
+}
+
+/ WeightedOperations returns all the distribution module operations with their respective weights.
+/ migrate to WeightedOperationsX. This method is ignored when WeightedOperationsX exists and will be removed in the future.
+func (am AppModule)
+
+WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation {
+ return simulation.WeightedOperations(
+ simState.AppParams, simState.Cdc, simState.TxConfig,
+ am.accountKeeper, am.bankKeeper, am.keeper, am.stakingKeeper,
+ )
+}
+
+/ ProposalMsgsX registers governance proposal messages in the simulation registry.
+func (AppModule)
+
+ProposalMsgsX(weights simsx.WeightSource, reg simsx.Registry) {
+ reg.Add(weights.Get("msg_update_params", 100), simulation.MsgUpdateParamsFactory())
+}
+
+/ WeightedOperationsX registers weighted distribution module operations for simulation.
+func (am AppModule) + +WeightedOperationsX(weights simsx.WeightSource, reg simsx.Registry) { + reg.Add(weights.Get("msg_set_withdraw_address", 50), simulation.MsgSetWithdrawAddressFactory(am.keeper)) + +reg.Add(weights.Get("msg_withdraw_delegation_reward", 50), simulation.MsgWithdrawDelegatorRewardFactory(am.keeper, am.stakingKeeper)) + +reg.Add(weights.Get("msg_withdraw_validator_commission", 50), simulation.MsgWithdrawValidatorCommissionFactory(am.keeper, am.stakingKeeper)) +} + +/ +/ App Wiring Setup +/ + +func init() { + appmodule.Register(&modulev1.Module{ +}, + appmodule.Provide(ProvideModule), + ) +} + +type ModuleInputs struct { + depinject.In + + Config *modulev1.Module + StoreService store.KVStoreService + Cdc codec.Codec + + AccountKeeper types.AccountKeeper + BankKeeper types.BankKeeper + StakingKeeper types.StakingKeeper + ExternalPoolKeeper types.ExternalCommunityPoolKeeper `optional:"true"` + + / LegacySubspace is used solely for migration of x/params managed parameters + LegacySubspace exported.Subspace `optional:"true"` +} + +type ModuleOutputs struct { + depinject.Out + + DistrKeeper keeper.Keeper + Module appmodule.AppModule + Hooks staking.StakingHooksWrapper +} + +func ProvideModule(in ModuleInputs) + +ModuleOutputs { + feeCollectorName := in.Config.FeeCollectorName + if feeCollectorName == "" { + feeCollectorName = authtypes.FeeCollectorName +} + + / default to governance authority if not provided + authority := authtypes.NewModuleAddress(govtypes.ModuleName) + if in.Config.Authority != "" { + authority = authtypes.NewModuleAddressOrBech32Address(in.Config.Authority) +} + +var opts []keeper.InitOption + if in.ExternalPoolKeeper != nil { + opts = append(opts, keeper.WithExternalCommunityPool(in.ExternalPoolKeeper)) +} + k := keeper.NewKeeper( + in.Cdc, + in.StoreService, + in.AccountKeeper, + in.BankKeeper, + in.StakingKeeper, + feeCollectorName, + authority.String(), + opts..., + ) + m := NewAppModule(in.Cdc, k, in.AccountKeeper, 
in.BankKeeper, in.StakingKeeper, in.LegacySubspace)
+
+return ModuleOutputs{
+ DistrKeeper: k,
+ Module: m,
+ Hooks: staking.StakingHooksWrapper{
+ StakingHooks: k.Hooks()
+},
+}
+}
+```
+
+Note that the name passed to `weights.Get` must match the operation name set in `WeightedOperations`.
+
+For example, if the module contains an operation `op_weight_msg_set_withdraw_address`, the name passed to `weights.Get` should be `msg_set_withdraw_address`.
+
+See the `x/distribution` module for an example of message factory implementations [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/distribution/simulation/msg_factory.go).
+
+## App Simulator manager
+
+The next step is to set up the `SimulationManager` at the app level. It is
+required for the simulation test files in the next step.
+
+```go
+type CoolApp struct {
+...
+sm *module.SimulationManager
+}
+```
+
+Within the constructor of the application, construct the simulation manager using the modules from `ModuleManager` and call the `RegisterStoreDecoders` method.
+ +```go expandable +/go:build app_v1 + +package simapp + +import ( + + "encoding/json" + "fmt" + "io" + "maps" + + abci "github.com/cometbft/cometbft/abci/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/gogoproto/proto" + "github.com/spf13/cast" + + autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" + reflectionv1 "cosmossdk.io/api/cosmos/reflection/v1" + "cosmossdk.io/client/v2/autocli" + clienthelpers "cosmossdk.io/client/v2/helpers" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + "cosmossdk.io/x/tx/signing" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/client/grpc/cmtservice" + nodeservice "github.com/cosmos/cosmos-sdk/client/grpc/node" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/codec/types" + "github.com/cosmos/cosmos-sdk/runtime" + runtimeservices "github.com/cosmos/cosmos-sdk/runtime/services" + "github.com/cosmos/cosmos-sdk/server" + "github.com/cosmos/cosmos-sdk/server/api" + "github.com/cosmos/cosmos-sdk/server/config" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + "github.com/cosmos/cosmos-sdk/std" + testdata_pulsar "github.com/cosmos/cosmos-sdk/testutil/testdata/testpb" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + sigtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/version" + "github.com/cosmos/cosmos-sdk/x/auth" + "github.com/cosmos/cosmos-sdk/x/auth/ante" + authcodec "github.com/cosmos/cosmos-sdk/x/auth/codec" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + "github.com/cosmos/cosmos-sdk/x/auth/posthandler" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + "github.com/cosmos/cosmos-sdk/x/auth/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" + txmodule 
"github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/auth/vesting" + vestingtypes "github.com/cosmos/cosmos-sdk/x/auth/vesting/types" + "github.com/cosmos/cosmos-sdk/x/authz" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + authzmodule "github.com/cosmos/cosmos-sdk/x/authz/module" + "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + consensus "github.com/cosmos/cosmos-sdk/x/consensus" + consensusparamkeeper "github.com/cosmos/cosmos-sdk/x/consensus/keeper" + consensusparamtypes "github.com/cosmos/cosmos-sdk/x/consensus/types" + distr "github.com/cosmos/cosmos-sdk/x/distribution" + distrkeeper "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + distrtypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" + "github.com/cosmos/cosmos-sdk/x/evidence" + evidencekeeper "github.com/cosmos/cosmos-sdk/x/evidence/keeper" + evidencetypes "github.com/cosmos/cosmos-sdk/x/evidence/types" + "github.com/cosmos/cosmos-sdk/x/feegrant" + feegrantkeeper "github.com/cosmos/cosmos-sdk/x/feegrant/keeper" + feegrantmodule "github.com/cosmos/cosmos-sdk/x/feegrant/module" + "github.com/cosmos/cosmos-sdk/x/genutil" + genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" + "github.com/cosmos/cosmos-sdk/x/gov" + govclient "github.com/cosmos/cosmos-sdk/x/gov/client" + govkeeper "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtypes "github.com/cosmos/cosmos-sdk/x/gov/types" + govv1beta1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + 
"github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" + "github.com/cosmos/cosmos-sdk/x/slashing" + slashingkeeper "github.com/cosmos/cosmos-sdk/x/slashing/keeper" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + "github.com/cosmos/cosmos-sdk/x/upgrade" + upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper" + upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types" +) + +const appName = "SimApp" + +var ( + / DefaultNodeHome default home directories for the application daemon + DefaultNodeHome string + + / module account permissions + maccPerms = map[string][]string{ + authtypes.FeeCollectorName: nil, + distrtypes.ModuleName: nil, + minttypes.ModuleName: { + authtypes.Minter +}, + stakingtypes.BondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + stakingtypes.NotBondedPoolName: { + authtypes.Burner, authtypes.Staking +}, + govtypes.ModuleName: { + authtypes.Burner +}, + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil +} +) + +var ( + _ runtime.AppI = (*SimApp)(nil) + _ servertypes.Application = (*SimApp)(nil) +) + +/ SimApp extends an ABCI application, but with most of its parameters exported. +/ They are exported for convenience in creating helper functions, as object +/ capabilities aren't needed for testing. 
+type SimApp struct { + *baseapp.BaseApp + legacyAmino *codec.LegacyAmino + appCodec codec.Codec + txConfig client.TxConfig + interfaceRegistry types.InterfaceRegistry + + / keys to access the substores + keys map[string]*storetypes.KVStoreKey + + / essential keepers + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.BaseKeeper + StakingKeeper *stakingkeeper.Keeper + SlashingKeeper slashingkeeper.Keeper + MintKeeper mintkeeper.Keeper + DistrKeeper distrkeeper.Keeper + GovKeeper govkeeper.Keeper + UpgradeKeeper *upgradekeeper.Keeper + EvidenceKeeper evidencekeeper.Keeper + ConsensusParamsKeeper consensusparamkeeper.Keeper + + / supplementary keepers + FeeGrantKeeper feegrantkeeper.Keeper + AuthzKeeper authzkeeper.Keeper + EpochsKeeper epochskeeper.Keeper + ProtocolPoolKeeper protocolpoolkeeper.Keeper + + / the module manager + ModuleManager *module.Manager + BasicModuleManager module.BasicManager + + / simulation manager + sm *module.SimulationManager + + / module configurator + configurator module.Configurator +} + +func init() { + var err error + DefaultNodeHome, err = clienthelpers.GetNodeHomeDirectory(".simapp") + if err != nil { + panic(err) +} +} + +/ NewSimApp returns a reference to an initialized SimApp. 
+func NewSimApp( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), +) *SimApp { + interfaceRegistry, _ := types.NewInterfaceRegistryWithOptions(types.InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32AccountAddrPrefix(), +}, + ValidatorAddressCodec: address.Bech32Codec{ + Bech32Prefix: sdk.GetConfig().GetBech32ValidatorAddrPrefix(), +}, +}, +}) + appCodec := codec.NewProtoCodec(interfaceRegistry) + legacyAmino := codec.NewLegacyAmino() + txConfig := tx.NewTxConfig(appCodec, tx.DefaultSignModes) + if err := interfaceRegistry.SigningContext().Validate(); err != nil { + panic(err) +} + +std.RegisterLegacyAminoCodec(legacyAmino) + +std.RegisterInterfaces(interfaceRegistry) + + / Below we could construct and set an application specific mempool and + / ABCI 1.0 PrepareProposal and ProcessProposal handlers. These defaults are + / already set in the SDK's BaseApp, this shows an example of how to override + / them. + / + / Example: + / + / bApp := baseapp.NewBaseApp(...) + / nonceMempool := mempool.NewSenderNonceMempool() + / abciPropHandler := NewDefaultProposalHandler(nonceMempool, bApp) + / + / bApp.SetMempool(nonceMempool) + / bApp.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / bApp.SetProcessProposal(abciPropHandler.ProcessProposalHandler()) + / + / Alternatively, you can construct BaseApp options, append those to + / baseAppOptions and pass them to NewBaseApp. 
+ / + / Example: + / + / prepareOpt = func(app *baseapp.BaseApp) { + / abciPropHandler := baseapp.NewDefaultProposalHandler(nonceMempool, app) + / app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler()) + / +} + / baseAppOptions = append(baseAppOptions, prepareOpt) + + / create and set dummy vote extension handler + voteExtOp := func(bApp *baseapp.BaseApp) { + voteExtHandler := NewVoteExtensionHandler() + +voteExtHandler.SetHandlers(bApp) +} + +baseAppOptions = append(baseAppOptions, voteExtOp, baseapp.SetOptimisticExecution()) + bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) + +bApp.SetCommitMultiStoreTracer(traceStore) + +bApp.SetVersion(version.Version) + +bApp.SetInterfaceRegistry(interfaceRegistry) + +bApp.SetTxEncoder(txConfig.TxEncoder()) + keys := storetypes.NewKVStoreKeys( + authtypes.StoreKey, + banktypes.StoreKey, + stakingtypes.StoreKey, + minttypes.StoreKey, + distrtypes.StoreKey, + slashingtypes.StoreKey, + govtypes.StoreKey, + consensusparamtypes.StoreKey, + upgradetypes.StoreKey, + feegrant.StoreKey, + evidencetypes.StoreKey, + authzkeeper.StoreKey, + epochstypes.StoreKey, + protocolpooltypes.StoreKey, + ) + + / register streaming services + if err := bApp.RegisterStreamingServices(appOpts, keys); err != nil { + panic(err) +} + app := &SimApp{ + BaseApp: bApp, + legacyAmino: legacyAmino, + appCodec: appCodec, + txConfig: txConfig, + interfaceRegistry: interfaceRegistry, + keys: keys, +} + + / set the BaseApp's parameter store + app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + runtime.EventService{ +}, + ) + +bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) + + / add keepers + app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + 
authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), + ) + +app.BankKeeper = bankkeeper.NewBaseKeeper( + appCodec, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + app.AccountKeeper, + BlockedAddresses(), + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + logger, + ) + + / optional: enable sign mode textual by overwriting the default tx config (after setting the bank keeper) + enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL) + txConfigOpts := tx.ConfigOptions{ + EnabledSignModes: enabledSignModes, + TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper), +} + +txConfig, err := tx.NewTxConfigWithOptions( + appCodec, + txConfigOpts, + ) + if err != nil { + panic(err) +} + +app.txConfig = txConfig + + app.StakingKeeper = stakingkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[stakingtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authcodec.NewBech32Codec(sdk.Bech32PrefixValAddr), + authcodec.NewBech32Codec(sdk.Bech32PrefixConsAddr), + ) + +app.MintKeeper = mintkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[minttypes.StoreKey]), + app.StakingKeeper, + app.AccountKeeper, + app.BankKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(minttypes.DefaultInflationCalculationFn)), custom mintFn can be added here + ) + +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.DistrKeeper = distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + 
app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), + ) + +app.SlashingKeeper = slashingkeeper.NewKeeper( + appCodec, + legacyAmino, + runtime.NewKVStoreService(keys[slashingtypes.StoreKey]), + app.StakingKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + +app.FeeGrantKeeper = feegrantkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[feegrant.StoreKey]), + app.AccountKeeper, + ) + + / register the staking hooks + / NOTE: stakingKeeper above is passed by reference, so that it will contain these hooks + app.StakingKeeper.SetHooks( + stakingtypes.NewMultiStakingHooks( + app.DistrKeeper.Hooks(), + app.SlashingKeeper.Hooks(), + ), + ) + +app.AuthzKeeper = authzkeeper.NewKeeper( + runtime.NewKVStoreService(keys[authzkeeper.StoreKey]), + appCodec, + app.MsgServiceRouter(), + app.AccountKeeper, + ) + + / get skipUpgradeHeights from the app options + skipUpgradeHeights := map[int64]bool{ +} + for _, h := range cast.ToIntSlice(appOpts.Get(server.FlagUnsafeSkipUpgrades)) { + skipUpgradeHeights[int64(h)] = true +} + homePath := cast.ToString(appOpts.Get(flags.FlagHome)) + / set the governance module account as the authority for conducting upgrades + app.UpgradeKeeper = upgradekeeper.NewKeeper( + skipUpgradeHeights, + runtime.NewKVStoreService(keys[upgradetypes.StoreKey]), + appCodec, + homePath, + app.BaseApp, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + ) + + / Register the proposal types + / Deprecated: Avoid adding new handlers, instead use the new proposal flow + / by granting the governance module the right to execute the message. 
+ / See: https://docs.cosmos.network/main/modules/gov#proposal-messages + govRouter := govv1beta1.NewRouter() + +govRouter.AddRoute(govtypes.RouterKey, govv1beta1.ProposalHandler) + govConfig := govtypes.DefaultConfig() + /* + Example of setting gov params: + govConfig.MaxMetadataLen = 10000 + */ + govKeeper := govkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[govtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + app.DistrKeeper, + app.MsgServiceRouter(), + govConfig, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + / govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(...), / Add if you want to use a custom vote calculation function. + ) + + / Set legacy router for backwards compatibility with gov v1beta1 + govKeeper.SetLegacyRouter(govRouter) + +app.GovKeeper = *govKeeper.SetHooks( + govtypes.NewMultiGovHooks( + / register the governance hooks + ), + ) + + / create evidence keeper with router + evidenceKeeper := evidencekeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[evidencetypes.StoreKey]), + app.StakingKeeper, + app.SlashingKeeper, + app.AccountKeeper.AddressCodec(), + runtime.ProvideCometInfoService(), + ) + / If evidence needs to be handled for the app, set routes in router here and seal + app.EvidenceKeeper = *evidenceKeeper + + app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, + ) + +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + ), + ) + + /**** Module Options ****/ + + / NOTE: Any module instantiated in the module manager that is later modified + / must be passed by reference here. 
+ app.ModuleManager = module.NewManager( + genutil.NewAppModule( + app.AccountKeeper, app.StakingKeeper, app, + txConfig, + ), + auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), + vesting.NewAppModule(app.AccountKeeper, app.BankKeeper), + bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), + feegrantmodule.NewAppModule(appCodec, app.AccountKeeper, app.BankKeeper, app.FeeGrantKeeper, app.interfaceRegistry), + gov.NewAppModule(appCodec, &app.GovKeeper, app.AccountKeeper, app.BankKeeper, nil), + mint.NewAppModule(appCodec, app.MintKeeper, app.AccountKeeper, nil, nil), + slashing.NewAppModule(appCodec, app.SlashingKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil, app.interfaceRegistry), + distr.NewAppModule(appCodec, app.DistrKeeper, app.AccountKeeper, app.BankKeeper, app.StakingKeeper, nil), + staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper, nil), + upgrade.NewAppModule(app.UpgradeKeeper, app.AccountKeeper.AddressCodec()), + evidence.NewAppModule(app.EvidenceKeeper), + authzmodule.NewAppModule(appCodec, app.AuthzKeeper, app.AccountKeeper, app.BankKeeper, app.interfaceRegistry), + consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), + epochs.NewAppModule(app.EpochsKeeper), + protocolpool.NewAppModule(app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), + ) + + / BasicModuleManager defines the module BasicManager is in charge of setting up basic, + / non-dependent module elements, such as codec registration and genesis verification. + / By default it is composed of all the module from the module manager. + / Additionally, app module basics can be overwritten by passing them as argument. 
+ app.BasicModuleManager = module.NewBasicManagerFromManager( + app.ModuleManager, + map[string]module.AppModuleBasic{ + genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator), + govtypes.ModuleName: gov.NewAppModuleBasic( + []govclient.ProposalHandler{ +}, + ), +}) + +app.BasicModuleManager.RegisterLegacyAminoCodec(legacyAmino) + +app.BasicModuleManager.RegisterInterfaces(interfaceRegistry) + + / NOTE: upgrade module is required to be prioritized + app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, + ) + / During begin block slashing happens after distr.BeginBlocker so that + / there is nothing left over in the validator fee pool, so as to keep the + / CanWithdrawInvariant invariant. + / NOTE: staking module is required if HistoricalEntries param > 0 + app.ModuleManager.SetOrderBeginBlockers( + minttypes.ModuleName, + distrtypes.ModuleName, + protocolpooltypes.ModuleName, + slashingtypes.ModuleName, + evidencetypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + authz.ModuleName, + epochstypes.ModuleName, + ) + +app.ModuleManager.SetOrderEndBlockers( + govtypes.ModuleName, + stakingtypes.ModuleName, + genutiltypes.ModuleName, + feegrant.ModuleName, + protocolpooltypes.ModuleName, + ) + + / NOTE: The genutils module must occur after staking so that pools are + / properly initialized with tokens from genesis accounts. + / NOTE: The genutils module must also occur after auth so that it can access the params from auth. 
+ genesisModuleOrder := []string{ + authtypes.ModuleName, + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + consensusparamtypes.ModuleName, + epochstypes.ModuleName, + protocolpooltypes.ModuleName, +} + exportModuleOrder := []string{ + consensusparamtypes.ModuleName, + authtypes.ModuleName, + protocolpooltypes.ModuleName, / Must be exported before bank + banktypes.ModuleName, + distrtypes.ModuleName, + stakingtypes.ModuleName, + slashingtypes.ModuleName, + govtypes.ModuleName, + minttypes.ModuleName, + genutiltypes.ModuleName, + evidencetypes.ModuleName, + authz.ModuleName, + feegrant.ModuleName, + upgradetypes.ModuleName, + vestingtypes.ModuleName, + epochstypes.ModuleName, +} + +app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) + +app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) + + / Uncomment if you want to set a custom migration order here. + / app.ModuleManager.SetOrderMigrations(custom order) + +app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) + +err = app.ModuleManager.RegisterServices(app.configurator) + if err != nil { + panic(err) +} + + / RegisterUpgradeHandlers is used for registering any on-chain upgrades. + / Make sure it's called after `app.ModuleManager` and `app.configurator` are set. 
+ app.RegisterUpgradeHandlers() + +autocliv1.RegisterQueryServer(app.GRPCQueryRouter(), runtimeservices.NewAutoCLIQueryService(app.ModuleManager.Modules)) + +reflectionSvc, err := runtimeservices.NewReflectionService() + if err != nil { + panic(err) +} + +reflectionv1.RegisterReflectionServiceServer(app.GRPCQueryRouter(), reflectionSvc) + + / add test gRPC service for testing gRPC queries in isolation + testdata_pulsar.RegisterQueryServer(app.GRPCQueryRouter(), testdata_pulsar.QueryImpl{ +}) + + / create the simulation manager and define the order of the modules for deterministic simulations + / + / NOTE: this is not required apps that don't use the simulator for fuzz testing + / transactions + overrideModules := map[string]module.AppModuleSimulation{ + authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), +} + +app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) + +app.sm.RegisterStoreDecoders() + + / initialize stores + app.MountKVStores(keys) + + / initialize BaseApp + app.SetInitChainer(app.InitChainer) + +app.SetPreBlocker(app.PreBlocker) + +app.SetBeginBlocker(app.BeginBlocker) + +app.SetEndBlocker(app.EndBlocker) + +app.setAnteHandler(txConfig) + + / In v0.46, the SDK introduces _postHandlers_. PostHandlers are like + / antehandlers, but are run _after_ the `runMsgs` execution. They are also + / defined as a chain, and have the same signature as antehandlers. + / + / In baseapp, postHandlers are run in the same store branch as `runMsgs`, + / meaning that both `runMsgs` and `postHandler` state will be committed if + / both are successful, and both will be reverted if any of the two fails. + / + / The SDK exposes a default postHandlers chain + / + / Please note that changing any of the anteHandler or postHandler chain is + / likely to be a state-machine breaking change, which needs a coordinated + / upgrade. 
+ app.setPostHandler() + if loadLatest { + if err := app.LoadLatestVersion(); err != nil { + panic(fmt.Errorf("error loading last version: %w", err)) +} + +} + +return app +} + +func (app *SimApp) + +setAnteHandler(txConfig client.TxConfig) { + anteHandler, err := ante.NewAnteHandler( + ante.HandlerOptions{ + AccountKeeper: app.AccountKeeper, + BankKeeper: app.BankKeeper, + SignModeHandler: txConfig.SignModeHandler(), + FeegrantKeeper: app.FeeGrantKeeper, + SigGasConsumer: ante.DefaultSigVerificationGasConsumer, + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration), +}, +}, + ) + if err != nil { + panic(err) +} + + / Set the AnteHandler for the app + app.SetAnteHandler(anteHandler) +} + +func (app *SimApp) + +setPostHandler() { + postHandler, err := posthandler.NewPostHandler( + posthandler.HandlerOptions{ +}, + ) + if err != nil { + panic(err) +} + +app.SetPostHandler(postHandler) +} + +/ Name returns the name of the App +func (app *SimApp) + +Name() + +string { + return app.BaseApp.Name() +} + +/ PreBlocker application updates every pre block +func (app *SimApp) + +PreBlocker(ctx sdk.Context, _ *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + return app.ModuleManager.PreBlock(ctx) +} + +/ BeginBlocker application updates every begin block +func (app *SimApp) + +BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) { + return app.ModuleManager.BeginBlock(ctx) +} + +/ EndBlocker application updates every end block +func (app *SimApp) + +EndBlocker(ctx sdk.Context) (sdk.EndBlock, error) { + return app.ModuleManager.EndBlock(ctx) +} + +func (a *SimApp) + +Configurator() + +module.Configurator { + return a.configurator +} + +/ InitChainer application update at chain initialization +func (app *SimApp) + +InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, 
error) { + var genesisState GenesisState + if err := json.Unmarshal(req.AppStateBytes, &genesisState); err != nil { + panic(err) +} + +app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) + +return app.ModuleManager.InitGenesis(ctx, app.appCodec, genesisState) +} + +/ LoadHeight loads a particular height +func (app *SimApp) + +LoadHeight(height int64) + +error { + return app.LoadVersion(height) +} + +/ LegacyAmino returns SimApp's amino codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +LegacyAmino() *codec.LegacyAmino { + return app.legacyAmino +} + +/ AppCodec returns SimApp's app codec. +/ +/ NOTE: This is solely to be used for testing purposes as it may be desirable +/ for modules to register their own custom testing types. +func (app *SimApp) + +AppCodec() + +codec.Codec { + return app.appCodec +} + +/ InterfaceRegistry returns SimApp's InterfaceRegistry +func (app *SimApp) + +InterfaceRegistry() + +types.InterfaceRegistry { + return app.interfaceRegistry +} + +/ TxConfig returns SimApp's TxConfig +func (app *SimApp) + +TxConfig() + +client.TxConfig { + return app.txConfig +} + +/ AutoCliOpts returns the autocli options for the app. 
+func (app *SimApp) + +AutoCliOpts() + +autocli.AppOptions { + modules := make(map[string]appmodule.AppModule, 0) + for _, m := range app.ModuleManager.Modules { + if moduleWithName, ok := m.(module.HasName); ok { + moduleName := moduleWithName.Name() + if appModule, ok := moduleWithName.(appmodule.AppModule); ok { + modules[moduleName] = appModule +} + +} + +} + +return autocli.AppOptions{ + Modules: modules, + ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), + AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), + ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), + ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), +} +} + +/ DefaultGenesis returns a default genesis from the registered AppModuleBasic's. +func (a *SimApp) + +DefaultGenesis() + +map[string]json.RawMessage { + return a.BasicModuleManager.DefaultGenesis(a.appCodec) +} + +/ GetKey returns the KVStoreKey for the provided store key. +/ +/ NOTE: This is solely to be used for testing purposes. +func (app *SimApp) + +GetKey(storeKey string) *storetypes.KVStoreKey { + return app.keys[storeKey] +} + +/ GetStoreKeys returns all the stored store keys. +func (app *SimApp) + +GetStoreKeys() []storetypes.StoreKey { + keys := make([]storetypes.StoreKey, 0, len(app.keys)) + for _, key := range app.keys { + keys = append(keys, key) +} + +return keys +} + +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} + +/ RegisterAPIRoutes registers all application module routes with the provided +/ API server. +func (app *SimApp) + +RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) { + clientCtx := apiSvr.ClientCtx + / Register new tx routes from grpc-gateway. 
+ authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register new CometBFT queries routes from grpc-gateway. + cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register node gRPC service for grpc-gateway. + nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / Register grpc-gateway routes for all modules. + app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter) + + / register swagger API from root so that other applications can override easily + if err := server.RegisterSwaggerAPI(apiSvr.ClientCtx, apiSvr.Router, apiConfig.Swagger); err != nil { + panic(err) +} +} + +/ RegisterTxService implements the Application.RegisterTxService method. +func (app *SimApp) + +RegisterTxService(clientCtx client.Context) { + authtx.RegisterTxService(app.BaseApp.GRPCQueryRouter(), clientCtx, app.BaseApp.Simulate, app.interfaceRegistry) +} + +/ RegisterTendermintService implements the Application.RegisterTendermintService method. +func (app *SimApp) + +RegisterTendermintService(clientCtx client.Context) { + cmtApp := server.NewCometABCIWrapper(app) + +cmtservice.RegisterTendermintService( + clientCtx, + app.BaseApp.GRPCQueryRouter(), + app.interfaceRegistry, + cmtApp.Query, + ) +} + +func (app *SimApp) + +RegisterNodeService(clientCtx client.Context, cfg config.Config) { + nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg) +} + +/ GetMaccPerms returns a copy of the module account permissions +/ +/ NOTE: This is solely to be used for testing purposes. +func GetMaccPerms() + +map[string][]string { + return maps.Clone(maccPerms) +} + +/ BlockedAddresses returns all the app's blocked account addresses. 
+func BlockedAddresses() + +map[string]bool { + modAccAddrs := make(map[string]bool) + for acc := range GetMaccPerms() { + modAccAddrs[authtypes.NewModuleAddress(acc).String()] = true +} + + / allow the following addresses to receive funds + delete(modAccAddrs, authtypes.NewModuleAddress(govtypes.ModuleName).String()) + +return modAccAddrs +} +``` + +Note that you may override some modules. +This is useful if the existing module configuration in the `ModuleManager` should be different in the `SimulationManager`. + +Finally, the application should expose the `SimulationManager` via the following method defined in the `Runtime` interface: + +```go +/ SimulationManager implements the SimulationApp interface +func (app *SimApp) + +SimulationManager() *module.SimulationManager { + return app.sm +} +``` + +## Running Simulations + +To run the simulation, use the `simsx` runner. + +Call the following function from the `simsx` package to begin simulating with a default seed: + +```go expandable +package simsx + +import ( + + "encoding/json" + "fmt" + "io" + "math" + "os" + "path/filepath" + "strings" + "testing" + + dbm "github.com/cosmos/cosmos-db" + "github.com/stretchr/testify/require" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/runtime" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation/client/cli" +) + +const SimAppChainID = "simulation-app" + +/ this list of seeds was imported from the original simulation runner: https://github.com/cosmos/tools/blob/v1.0.0/cmd/runsim/main.go#L32 +var defaultSeeds 
= []int64{ + 1, 2, 4, 7, + 32, 123, 124, 582, 1893, 2989, + 3012, 4728, 37827, 981928, 87821, 891823782, + 989182, 89182391, 11, 22, 44, 77, 99, 2020, + 3232, 123123, 124124, 582582, 18931893, + 29892989, 30123012, 47284728, 7601778, 8090485, + 977367484, 491163361, 424254581, 673398983, +} + +/ SimStateFactory is a factory type that provides a convenient way to create a simulation state for testing. +/ It contains the following fields: +/ - Codec: a codec used for serializing other objects +/ - AppStateFn: a function that returns the app state JSON bytes and the genesis accounts +/ - BlockedAddr: a map of blocked addresses +/ - AccountSource: an interface for retrieving accounts +/ - BalanceSource: an interface for retrieving balance-related information +type SimStateFactory struct { + Codec codec.Codec + AppStateFn simtypes.AppStateFn + BlockedAddr map[string]bool + AccountSource AccountSourceX + BalanceSource BalanceSource +} + +/ SimulationApp abstract app that is used by sims +type SimulationApp interface { + runtime.AppI + SetNotSigverifyTx() + +GetBaseApp() *baseapp.BaseApp + TxConfig() + +client.TxConfig + Close() + +error +} + +/ Run is a helper function that runs a simulation test with the given parameters. +/ It calls the RunWithSeeds function with the default seeds and parameters. +/ +/ This is the entrypoint to run simulation tests that used to run with the runsim binary. +func Run[T SimulationApp]( + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeeds(t, appFactory, setupStateFactory, defaultSeeds, nil, postRunActions...)
+} + +/ RunWithSeeds is a helper function that runs a simulation test with the given parameters. +/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for each seed. +/ The execution is deterministic and can be used for fuzz tests as well. +/ +/ The system under test is isolated for each run but unlike the old runsim command, there is no Process separation. +/ This means global caches may be reused, for example. This implementation builds upon the vanilla Go stdlib test framework. +func RunWithSeeds[T SimulationApp]( + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeedsAndRandAcc(t, appFactory, setupStateFactory, seeds, fuzzSeed, simtypes.RandomAccounts, postRunActions...)
+} + +/ RunWithSeedsAndRandAcc calls RunWithSeeds with randAccFn +func RunWithSeedsAndRandAcc[T SimulationApp]( + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + if deprecatedParams := cli.GetDeprecatedFlagUsed(); len(deprecatedParams) != 0 { + fmt.Printf("Warning: Deprecated flag are used: %s", strings.Join(deprecatedParams, ",")) +} + cfg := cli.NewConfigFromFlags() + +cfg.ChainID = SimAppChainID + for i := range seeds { + seed := seeds[i] + t.Run(fmt.Sprintf("seed: %d", seed), func(t *testing.T) { + t.Parallel() + +RunWithSeed(t, cfg, appFactory, setupStateFactory, seed, fuzzSeed, postRunActions...) +}) +} +} + +/ RunWithSeed is a helper function that runs a simulation test with the given parameters. +/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for the seed. +/ The execution is deterministic and can be used for fuzz tests as well.
+func RunWithSeed[T SimulationApp]( + tb testing.TB, + cfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seed int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + tb.Helper() + +RunWithSeedAndRandAcc(tb, cfg, appFactory, setupStateFactory, seed, fuzzSeed, simtypes.RandomAccounts, postRunActions...) +} + +/ RunWithSeedAndRandAcc calls RunWithSeed with randAccFn +func RunWithSeedAndRandAcc[T SimulationApp]( + tb testing.TB, + cfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seed int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + tb.Helper() + / setup environment + tCfg := cfg.With(tb, seed, fuzzSeed) + testInstance := NewSimulationAppInstance(tb, tCfg, appFactory) + +var runLogger log.Logger + if cli.FlagVerboseValue { + runLogger = log.NewTestLogger(tb) +} + +else { + runLogger = log.NewTestLoggerInfo(tb) +} + +runLogger = runLogger.With("seed", tCfg.Seed) + app := testInstance.App + stateFactory := setupStateFactory(app) + +ops, reporter := prepareWeightedOps(app.SimulationManager(), stateFactory, tCfg, testInstance.App.TxConfig(), runLogger) + +simParams, accs, err := simulation.SimulateFromSeedX( + tb, + runLogger, + WriteToDebugLog(runLogger), + app.GetBaseApp(), + stateFactory.AppStateFn, + randAccFn, + ops, + stateFactory.BlockedAddr, + tCfg, + stateFactory.Codec, + testInstance.ExecLogWriter, + ) + +require.NoError(tb, err) + +err = 
simtestutil.CheckExportSimulation(app, tCfg, simParams) + +require.NoError(tb, err) + if tCfg.Commit { + simtestutil.PrintStats(testInstance.DB) +} + / not using tb.Log to always print the summary + fmt.Printf("+++ DONE (seed: %d): \n%s\n", seed, reporter.Summary().String()) + for _, step := range postRunActions { + step(tb, testInstance, accs) +} + +require.NoError(tb, app.Close()) +} + +type ( + HasWeightedOperationsX interface { + WeightedOperationsX(weight WeightSource, reg Registry) +} + +HasWeightedOperationsXWithProposals interface { + WeightedOperationsX(weights WeightSource, reg Registry, proposals WeightedProposalMsgIter, + legacyProposals []simtypes.WeightedProposalContent) /nolint: staticcheck / used for legacy proposal types +} + +HasProposalMsgsX interface { + ProposalMsgsX(weights WeightSource, reg Registry) +} +) + +type ( + HasLegacyWeightedOperations interface { + / WeightedOperations simulation operations (i.e msgs) + +with their respective weight + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation +} + / HasLegacyProposalMsgs defines the messages that can be used to simulate governance (v1) + +proposals + / Deprecated replaced by HasProposalMsgsX + HasLegacyProposalMsgs interface { + / ProposalMsgs msg functions used to simulate governance proposals + ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg +} + + / HasLegacyProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) + +proposals + / Deprecated replaced by HasProposalMsgsX + HasLegacyProposalContents interface { + / ProposalContents content functions used to simulate governance proposals + ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent /nolint:staticcheck / legacy v1beta1 governance +} +) + +/ TestInstance is a generic type that represents an instance of a SimulationApp used for testing simulations.
+/ It contains the following fields: +/ - App: The instance of the SimulationApp under test. +/ - DB: The LevelDB database for the simulation app. +/ - WorkDir: The temporary working directory for the simulation app. +/ - Cfg: The configuration flags for the simulator. +/ - AppLogger: The logger used for logging in the app during the simulation, with seed value attached. +/ - ExecLogWriter: Captures block and operation data coming from the simulation +type TestInstance[T SimulationApp] struct { + App T + DB dbm.DB + WorkDir string + Cfg simtypes.Config + AppLogger log.Logger + ExecLogWriter simulation.LogWriter +} + +/ included to avoid cyclic dependency in testutils/sims +func prepareWeightedOps( + sm *module.SimulationManager, + stateFact SimStateFactory, + config simtypes.Config, + txConfig client.TxConfig, + logger log.Logger, +) (simulation.WeightedOperations, *BasicSimulationReporter) { + cdc := stateFact.Codec + simState := module.SimulationState{ + AppParams: make(simtypes.AppParams), + Cdc: cdc, + TxConfig: txConfig, + BondDenom: sdk.DefaultBondDenom, +} + if config.ParamsFile != "" { + bz, err := os.ReadFile(config.ParamsFile) + if err != nil { + panic(err) +} + +err = json.Unmarshal(bz, &simState.AppParams) + if err != nil { + panic(err) +} + +} + weights := ParamWeightSource(simState.AppParams) + reporter := NewBasicSimulationReporter() + pReg := make(UniqueTypeRegistry) + wContent := make([]simtypes.WeightedProposalContent, 0) /nolint:staticcheck / required for legacy type + legacyPReg := NewWeightedFactoryMethods() + / add gov proposals types + for _, m := range sm.Modules { + switch xm := m.(type) { + case HasProposalMsgsX: + xm.ProposalMsgsX(weights, pReg) + case HasLegacyProposalMsgs: + for _, p := range xm.ProposalMsgs(simState) { + weight := weights.Get(p.AppParamsKey(), safeUint(p.DefaultWeight())) + +legacyPReg.Add(weight, legacyToMsgFactoryAdapter(p.MsgSimulatorFn())) +} + case HasLegacyProposalContents: + wContent = append(wContent, 
xm.ProposalContents(simState)...) +} + +} + oReg := NewSimsMsgRegistryAdapter( + reporter, + stateFact.AccountSource, + stateFact.BalanceSource, + txConfig, + logger, + ) + wOps := make([]simtypes.WeightedOperation, 0, len(sm.Modules)) + for _, m := range sm.Modules { + / add operations + switch xm := m.(type) { + case HasWeightedOperationsX: + xm.WeightedOperationsX(weights, oReg) + case HasWeightedOperationsXWithProposals: + xm.WeightedOperationsX(weights, oReg, AppendIterators(legacyPReg.Iterator(), pReg.Iterator()), wContent) + case HasLegacyWeightedOperations: + wOps = append(wOps, xm.WeightedOperations(simState)...) +} + +} + +return append(wOps, Collect(oReg.items, func(a weightedOperation) + +simtypes.WeightedOperation { + return a +})...), reporter +} + +func safeUint(p int) + +uint32 { + if p < 0 || p > math.MaxUint32 { + panic(fmt.Sprintf("can not cast to uint32: %d", p)) +} + +return uint32(p) +} + +/ NewSimulationAppInstance initializes and returns a TestInstance of a SimulationApp. +/ The function takes a testing.T instance, a simtypes.Config instance, and an appFactory function as parameters. +/ It creates a temporary working directory and a LevelDB database for the simulation app. +/ The function then initializes a logger based on the verbosity flag and sets the logger's seed to the test configuration's seed. +/ The database is closed and cleaned up on test completion. 
+func NewSimulationAppInstance[T SimulationApp]( + tb testing.TB, + tCfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, +) + +TestInstance[T] { + tb.Helper() + workDir := tb.TempDir() + +require.NoError(tb, os.Mkdir(filepath.Join(workDir, "data"), 0o750)) + dbDir := filepath.Join(workDir, "leveldb-app-sim") + +var logger log.Logger + if cli.FlagVerboseValue { + logger = log.NewTestLogger(tb) +} + +else { + logger = log.NewTestLoggerError(tb) +} + +logger = logger.With("seed", tCfg.Seed) + +db, err := dbm.NewDB("Simulation", dbm.BackendType(tCfg.DBBackend), dbDir) + +require.NoError(tb, err) + +tb.Cleanup(func() { + _ = db.Close() / ensure db is closed +}) + appOptions := make(simtestutil.AppOptionsMap) + +appOptions[flags.FlagHome] = workDir + opts := []func(*baseapp.BaseApp) { + baseapp.SetChainID(tCfg.ChainID) +} + if tCfg.FauxMerkle { + opts = append(opts, FauxMerkleModeOpt) +} + app := appFactory(logger, db, nil, true, appOptions, opts...) + if !cli.FlagSigverifyTxValue { + app.SetNotSigverifyTx() +} + +return TestInstance[T]{ + App: app, + DB: db, + WorkDir: workDir, + Cfg: tCfg, + AppLogger: logger, + ExecLogWriter: &simulation.StandardLogWriter{ + Seed: tCfg.Seed +}, +} +} + +var _ io.Writer = writerFn(nil) + +type writerFn func(p []byte) (n int, err error) + +func (w writerFn) + +Write(p []byte) (n int, err error) { + return w(p) +} + +/ WriteToDebugLog is an adapter to io.Writer interface +func WriteToDebugLog(logger log.Logger) + +io.Writer { + return writerFn(func(p []byte) (n int, err error) { + logger.Debug(string(p)) + +return len(p), nil +}) +} + +/ FauxMerkleModeOpt returns a BaseApp option to use a dbStoreAdapter instead of +/ an IAVLStore for faster simulation speed.
+func FauxMerkleModeOpt(bapp *baseapp.BaseApp) { + bapp.SetFauxMerkleMode() +} +``` + +If a custom seed is desired, tests should use `RunWithSeed`: + +```go expandable +package simsx + +import ( + + "encoding/json" + "fmt" + "io" + "math" + "os" + "path/filepath" + "strings" + "testing" + + dbm "github.com/cosmos/cosmos-db" + "github.com/stretchr/testify/require" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/client/flags" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/runtime" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/module" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation" + "github.com/cosmos/cosmos-sdk/x/simulation/client/cli" +) + +const SimAppChainID = "simulation-app" + +/ this list of seeds was imported from the original simulation runner: https://github.com/cosmos/tools/blob/v1.0.0/cmd/runsim/main.go#L32 +var defaultSeeds = []int64{ + 1, 2, 4, 7, + 32, 123, 124, 582, 1893, 2989, + 3012, 4728, 37827, 981928, 87821, 891823782, + 989182, 89182391, 11, 22, 44, 77, 99, 2020, + 3232, 123123, 124124, 582582, 18931893, + 29892989, 30123012, 47284728, 7601778, 8090485, + 977367484, 491163361, 424254581, 673398983, +} + +/ SimStateFactory is a factory type that provides a convenient way to create a simulation state for testing. 
+/ It contains the following fields: +/ - Codec: a codec used for serializing other objects +/ - AppStateFn: a function that returns the app state JSON bytes and the genesis accounts +/ - BlockedAddr: a map of blocked addresses +/ - AccountSource: an interface for retrieving accounts +/ - BalanceSource: an interface for retrieving balance-related information +type SimStateFactory struct { + Codec codec.Codec + AppStateFn simtypes.AppStateFn + BlockedAddr map[string]bool + AccountSource AccountSourceX + BalanceSource BalanceSource +} + +/ SimulationApp abstract app that is used by sims +type SimulationApp interface { + runtime.AppI + SetNotSigverifyTx() + +GetBaseApp() *baseapp.BaseApp + TxConfig() + +client.TxConfig + Close() + +error +} + +/ Run is a helper function that runs a simulation test with the given parameters. +/ It calls the RunWithSeeds function with the default seeds and parameters. +/ +/ This is the entrypoint to run simulation tests that used to run with the runsim binary. +func Run[T SimulationApp]( + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeeds(t, appFactory, setupStateFactory, defaultSeeds, nil, postRunActions...) +} + +/ RunWithSeeds is a helper function that runs a simulation test with the given parameters. +/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for each seed. +/ The execution is deterministic and can be used for fuzz tests as well.
+/ +/ The system under test is isolated for each run but unlike the old runsim command, there is no Process separation. +/ This means global caches may be reused, for example. This implementation builds upon the vanilla Go stdlib test framework. +func RunWithSeeds[T SimulationApp]( + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + +RunWithSeedsAndRandAcc(t, appFactory, setupStateFactory, seeds, fuzzSeed, simtypes.RandomAccounts, postRunActions...) +} + +/ RunWithSeedsAndRandAcc calls RunWithSeeds with randAccFn +func RunWithSeedsAndRandAcc[T SimulationApp]( + t *testing.T, + appFactory func( + logger log.Logger, + db dbm.DB, + traceStore io.Writer, + loadLatest bool, + appOpts servertypes.AppOptions, + baseAppOptions ...func(*baseapp.BaseApp), + ) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seeds []int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + t.Helper() + if deprecatedParams := cli.GetDeprecatedFlagUsed(); len(deprecatedParams) != 0 { + fmt.Printf("Warning: Deprecated flag are used: %s", strings.Join(deprecatedParams, ",")) +} + cfg := cli.NewConfigFromFlags() + +cfg.ChainID = SimAppChainID + for i := range seeds { + seed := seeds[i] + t.Run(fmt.Sprintf("seed: %d", seed), func(t *testing.T) { + t.Parallel() + +RunWithSeed(t, cfg, appFactory, setupStateFactory, seed, fuzzSeed, postRunActions...) +}) +} +} + +/ RunWithSeed is a helper function that runs a simulation test with the given parameters.
+/ It iterates over the provided seeds and runs the simulation test for each seed in parallel. +/ +/ It sets up the environment, creates an instance of the simulation app, +/ calls the simulation.SimulateFromSeed function to run the simulation, and performs post-run actions for the seed. +/ The execution is deterministic and can be used for fuzz tests as well. +func RunWithSeed[T SimulationApp]( + tb testing.TB, + cfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seed int64, + fuzzSeed []byte, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + tb.Helper() + +RunWithSeedAndRandAcc(tb, cfg, appFactory, setupStateFactory, seed, fuzzSeed, simtypes.RandomAccounts, postRunActions...) +} + +/ RunWithSeedAndRandAcc calls RunWithSeed with randAccFn +func RunWithSeedAndRandAcc[T SimulationApp]( + tb testing.TB, + cfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, + setupStateFactory func(app T) + +SimStateFactory, + seed int64, + fuzzSeed []byte, + randAccFn simtypes.RandomAccountFn, + postRunActions ...func(t testing.TB, app TestInstance[T], accs []simtypes.Account), +) { + tb.Helper() + / setup environment + tCfg := cfg.With(tb, seed, fuzzSeed) + testInstance := NewSimulationAppInstance(tb, tCfg, appFactory) + +var runLogger log.Logger + if cli.FlagVerboseValue { + runLogger = log.NewTestLogger(tb) +} + +else { + runLogger = log.NewTestLoggerInfo(tb) +} + +runLogger = runLogger.With("seed", tCfg.Seed) + app := testInstance.App + stateFactory := setupStateFactory(app) + +ops, reporter := 
prepareWeightedOps(app.SimulationManager(), stateFactory, tCfg, testInstance.App.TxConfig(), runLogger) + +simParams, accs, err := simulation.SimulateFromSeedX( + tb, + runLogger, + WriteToDebugLog(runLogger), + app.GetBaseApp(), + stateFactory.AppStateFn, + randAccFn, + ops, + stateFactory.BlockedAddr, + tCfg, + stateFactory.Codec, + testInstance.ExecLogWriter, + ) + +require.NoError(tb, err) + +err = simtestutil.CheckExportSimulation(app, tCfg, simParams) + +require.NoError(tb, err) + if tCfg.Commit { + simtestutil.PrintStats(testInstance.DB) +} + / not using tb.Log to always print the summary + fmt.Printf("+++ DONE (seed: %d): \n%s\n", seed, reporter.Summary().String()) + for _, step := range postRunActions { + step(tb, testInstance, accs) +} + +require.NoError(tb, app.Close()) +} + +type ( + HasWeightedOperationsX interface { + WeightedOperationsX(weight WeightSource, reg Registry) +} + +HasWeightedOperationsXWithProposals interface { + WeightedOperationsX(weights WeightSource, reg Registry, proposals WeightedProposalMsgIter, + legacyProposals []simtypes.WeightedProposalContent) /nolint: staticcheck / used for legacy proposal types +} + +HasProposalMsgsX interface { + ProposalMsgsX(weights WeightSource, reg Registry) +} +) + +type ( + HasLegacyWeightedOperations interface { + / WeightedOperations simulation operations (i.e msgs) + +with their respective weight + WeightedOperations(simState module.SimulationState) []simtypes.WeightedOperation +} + / HasLegacyProposalMsgs defines the messages that can be used to simulate governance (v1) + +proposals + / Deprecated replaced by HasProposalMsgsX + HasLegacyProposalMsgs interface { + / ProposalMsgs msg functions used to simulate governance proposals + ProposalMsgs(simState module.SimulationState) []simtypes.WeightedProposalMsg +} + + / HasLegacyProposalContents defines the contents that can be used to simulate legacy governance (v1beta1) + +proposals + / Deprecated replaced by HasProposalMsgsX + 
HasLegacyProposalContents interface { + / ProposalContents content functions used to simulate governance proposals + ProposalContents(simState module.SimulationState) []simtypes.WeightedProposalContent /nolint:staticcheck / legacy v1beta1 governance +} +) + +/ TestInstance is a generic type that represents an instance of a SimulationApp used for testing simulations. +/ It contains the following fields: +/ - App: The instance of the SimulationApp under test. +/ - DB: The LevelDB database for the simulation app. +/ - WorkDir: The temporary working directory for the simulation app. +/ - Cfg: The configuration flags for the simulator. +/ - AppLogger: The logger used for logging in the app during the simulation, with seed value attached. +/ - ExecLogWriter: Captures block and operation data coming from the simulation +type TestInstance[T SimulationApp] struct { + App T + DB dbm.DB + WorkDir string + Cfg simtypes.Config + AppLogger log.Logger + ExecLogWriter simulation.LogWriter +} + +/ included to avoid cyclic dependency in testutils/sims +func prepareWeightedOps( + sm *module.SimulationManager, + stateFact SimStateFactory, + config simtypes.Config, + txConfig client.TxConfig, + logger log.Logger, +) (simulation.WeightedOperations, *BasicSimulationReporter) { + cdc := stateFact.Codec + simState := module.SimulationState{ + AppParams: make(simtypes.AppParams), + Cdc: cdc, + TxConfig: txConfig, + BondDenom: sdk.DefaultBondDenom, +} + if config.ParamsFile != "" { + bz, err := os.ReadFile(config.ParamsFile) + if err != nil { + panic(err) +} + +err = json.Unmarshal(bz, &simState.AppParams) + if err != nil { + panic(err) +} + +} + weights := ParamWeightSource(simState.AppParams) + reporter := NewBasicSimulationReporter() + pReg := make(UniqueTypeRegistry) + wContent := make([]simtypes.WeightedProposalContent, 0) /nolint:staticcheck / required for legacy type + legacyPReg := NewWeightedFactoryMethods() + / add gov proposals types + for _, m := range sm.Modules { + switch xm := 
m.(type) { + case HasProposalMsgsX: + xm.ProposalMsgsX(weights, pReg) + case HasLegacyProposalMsgs: + for _, p := range xm.ProposalMsgs(simState) { + weight := weights.Get(p.AppParamsKey(), safeUint(p.DefaultWeight())) + +legacyPReg.Add(weight, legacyToMsgFactoryAdapter(p.MsgSimulatorFn())) +} + case HasLegacyProposalContents: + wContent = append(wContent, xm.ProposalContents(simState)...) +} + +} + oReg := NewSimsMsgRegistryAdapter( + reporter, + stateFact.AccountSource, + stateFact.BalanceSource, + txConfig, + logger, + ) + wOps := make([]simtypes.WeightedOperation, 0, len(sm.Modules)) + for _, m := range sm.Modules { + / add operations + switch xm := m.(type) { + case HasWeightedOperationsX: + xm.WeightedOperationsX(weights, oReg) + case HasWeightedOperationsXWithProposals: + xm.WeightedOperationsX(weights, oReg, AppendIterators(legacyPReg.Iterator(), pReg.Iterator()), wContent) + case HasLegacyWeightedOperations: + wOps = append(wOps, xm.WeightedOperations(simState)...) +} + +} + +return append(wOps, Collect(oReg.items, func(a weightedOperation) + +simtypes.WeightedOperation { + return a +})...), reporter +} + +func safeUint(p int) + +uint32 { + if p < 0 || p > math.MaxUint32 { + panic(fmt.Sprintf("can not cast to uint32: %d", p)) +} + +return uint32(p) +} + +/ NewSimulationAppInstance initializes and returns a TestInstance of a SimulationApp. +/ The function takes a testing.T instance, a simtypes.Config instance, and an appFactory function as parameters. +/ It creates a temporary working directory and a LevelDB database for the simulation app. +/ The function then initializes a logger based on the verbosity flag and sets the logger's seed to the test configuration's seed. +/ The database is closed and cleaned up on test completion. 
+func NewSimulationAppInstance[T SimulationApp]( + tb testing.TB, + tCfg simtypes.Config, + appFactory func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) + +T, +) + +TestInstance[T] { + tb.Helper() + workDir := tb.TempDir() + +require.NoError(tb, os.Mkdir(filepath.Join(workDir, "data"), 0o750)) + dbDir := filepath.Join(workDir, "leveldb-app-sim") + +var logger log.Logger + if cli.FlagVerboseValue { + logger = log.NewTestLogger(tb) +} + +else { + logger = log.NewTestLoggerError(tb) +} + +logger = logger.With("seed", tCfg.Seed) + +db, err := dbm.NewDB("Simulation", dbm.BackendType(tCfg.DBBackend), dbDir) + +require.NoError(tb, err) + +tb.Cleanup(func() { + _ = db.Close() / ensure db is closed +}) + appOptions := make(simtestutil.AppOptionsMap) + +appOptions[flags.FlagHome] = workDir + opts := []func(*baseapp.BaseApp) { + baseapp.SetChainID(tCfg.ChainID) +} + if tCfg.FauxMerkle { + opts = append(opts, FauxMerkleModeOpt) +} + app := appFactory(logger, db, nil, true, appOptions, opts...) + if !cli.FlagSigverifyTxValue { + app.SetNotSigverifyTx() +} + +return TestInstance[T]{ + App: app, + DB: db, + WorkDir: workDir, + Cfg: tCfg, + AppLogger: logger, + ExecLogWriter: &simulation.StandardLogWriter{ + Seed: tCfg.Seed +}, +} +} + +var _ io.Writer = writerFn(nil) + +type writerFn func(p []byte) (n int, err error) + +func (w writerFn) + +Write(p []byte) (n int, err error) { + return w(p) +} + +/ WriteToDebugLog is an adapter to io.Writer interface +func WriteToDebugLog(logger log.Logger) + +io.Writer { + return writerFn(func(p []byte) (n int, err error) { + logger.Debug(string(p)) + +return len(p), nil +}) +} + +/ FauxMerkleModeOpt returns a BaseApp option to use a dbStoreAdapter instead of +/ an IAVLStore for faster simulation speed.
+func FauxMerkleModeOpt(bapp *baseapp.BaseApp) { + bapp.SetFauxMerkleMode() +} +``` + +These functions should be called in tests (i.e., app\_test.go, app\_sim\_test.go, etc.) + +Example: + +```go expandable +/go:build sims + +package simapp + +import ( + + "encoding/binary" + "encoding/json" + "flag" + "io" + "math/rand" + "strings" + "sync" + "testing" + + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + dbm "github.com/cosmos/cosmos-db" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "cosmossdk.io/log" + "cosmossdk.io/store" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + servertypes "github.com/cosmos/cosmos-sdk/server/types" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sims "github.com/cosmos/cosmos-sdk/testutil/simsx" + sdk "github.com/cosmos/cosmos-sdk/types" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + authzkeeper "github.com/cosmos/cosmos-sdk/x/authz/keeper" + "github.com/cosmos/cosmos-sdk/x/feegrant" + "github.com/cosmos/cosmos-sdk/x/simulation" + simcli "github.com/cosmos/cosmos-sdk/x/simulation/client/cli" + slashingtypes "github.com/cosmos/cosmos-sdk/x/slashing/types" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" +) + +var FlagEnableStreamingValue bool + +/ Get flags every time the simulator is run +func init() { + simcli.GetSimulatorFlags() + +flag.BoolVar(&FlagEnableStreamingValue, "EnableStreaming", false, "Enable streaming service") +} + +/ interBlockCacheOpt returns a BaseApp option function that sets the persistent +/ inter-block write-through cache. 
+func interBlockCacheOpt() + +func(*baseapp.BaseApp) { + return baseapp.SetInterBlockCache(store.NewCommitKVStoreCacheManager()) +} + +func TestFullAppSimulation(t *testing.T) { + sims.Run(t, NewSimApp, setupStateFactory) +} + +func setupStateFactory(app *SimApp) + +sims.SimStateFactory { + return sims.SimStateFactory{ + Codec: app.AppCodec(), + AppStateFn: simtestutil.AppStateFn(app.AppCodec(), app.SimulationManager(), app.DefaultGenesis()), + BlockedAddr: BlockedAddresses(), + AccountSource: app.AccountKeeper, + BalanceSource: app.BankKeeper, +} +} + +var ( + exportAllModules = []string{ +} + +exportWithValidatorSet = []string{ +} +) + +func TestAppImportExport(t *testing.T) { + sims.Run(t, NewSimApp, setupStateFactory, func(tb testing.TB, ti sims.TestInstance[*SimApp], accs []simtypes.Account) { + tb.Helper() + app := ti.App + tb.Log("exporting genesis...\n") + +exported, err := app.ExportAppStateAndValidators(false, exportWithValidatorSet, exportAllModules) + +require.NoError(tb, err) + +tb.Log("importing genesis...\n") + newTestInstance := sims.NewSimulationAppInstance(tb, ti.Cfg, NewSimApp) + newApp := newTestInstance.App + var genesisState GenesisState + require.NoError(tb, json.Unmarshal(exported.AppState, &genesisState)) + ctxB := newApp.NewContextLegacy(true, cmtproto.Header{ + Height: app.LastBlockHeight() +}) + _, err = newApp.ModuleManager.InitGenesis(ctxB, newApp.appCodec, genesisState) + if IsEmptyValidatorSetErr(err) { + tb.Skip("Skipping simulation as all validators have been unbonded") + +return +} + +require.NoError(tb, err) + +err = newApp.StoreConsensusParams(ctxB, exported.ConsensusParams) + +require.NoError(tb, err) + +tb.Log("comparing stores...") + / skip certain prefixes + skipPrefixes := map[string][][]byte{ + stakingtypes.StoreKey: { + stakingtypes.UnbondingQueueKey, stakingtypes.RedelegationQueueKey, stakingtypes.ValidatorQueueKey, + stakingtypes.HistoricalInfoKey, stakingtypes.UnbondingIDKey, stakingtypes.UnbondingIndexKey, + 
stakingtypes.UnbondingTypeKey, + stakingtypes.ValidatorUpdatesKey, / todo (Alex): double check why there is a diff with test-sim-import-export +}, + authzkeeper.StoreKey: { + authzkeeper.GrantQueuePrefix +}, + feegrant.StoreKey: { + feegrant.FeeAllowanceQueueKeyPrefix +}, + slashingtypes.StoreKey: { + slashingtypes.ValidatorMissedBlockBitmapKeyPrefix +}, +} + +AssertEqualStores(tb, app, newApp, app.SimulationManager().StoreDecoders, skipPrefixes) +}) +} + +/ Scenario: +/ +/ Start a fresh node and run n blocks, export state +/ set up a new node instance, Init chain from exported genesis +/ run new instance for n blocks +func TestAppSimulationAfterImport(t *testing.T) { + sims.Run(t, NewSimApp, setupStateFactory, func(tb testing.TB, ti sims.TestInstance[*SimApp], accs []simtypes.Account) { + tb.Helper() + app := ti.App + tb.Log("exporting genesis...\n") + +exported, err := app.ExportAppStateAndValidators(false, exportWithValidatorSet, exportAllModules) + +require.NoError(tb, err) + +tb.Log("importing genesis...\n") + newTestInstance := sims.NewSimulationAppInstance(tb, ti.Cfg, NewSimApp) + newApp := newTestInstance.App + _, err = newApp.InitChain(&abci.RequestInitChain{ + AppStateBytes: exported.AppState, + ChainId: sims.SimAppChainID, +}) + if IsEmptyValidatorSetErr(err) { + tb.Skip("Skipping simulation as all validators have been unbonded") + +return +} + +require.NoError(tb, err) + newStateFactory := setupStateFactory(newApp) + _, _, err = simulation.SimulateFromSeedX( + tb, + newTestInstance.AppLogger, + sims.WriteToDebugLog(newTestInstance.AppLogger), + newApp.BaseApp, + newStateFactory.AppStateFn, + simtypes.RandomAccounts, + simtestutil.BuildSimulationOperations(newApp, newApp.AppCodec(), newTestInstance.Cfg, newApp.TxConfig()), + newStateFactory.BlockedAddr, + newTestInstance.Cfg, + newStateFactory.Codec, + ti.ExecLogWriter, + ) + +require.NoError(tb, err) +}) +} + +func IsEmptyValidatorSetErr(err error) + +bool { + return err != nil && 
strings.Contains(err.Error(), "validator set is empty after InitGenesis") +} + +func TestAppStateDeterminism(t *testing.T) { + const numTimesToRunPerSeed = 3 + var seeds []int64 + if s := simcli.NewConfigFromFlags().Seed; s != simcli.DefaultSeedValue { + / We will be overriding the random seed and just run a single simulation on the provided seed value + for j := 0; j < numTimesToRunPerSeed; j++ { / multiple rounds + seeds = append(seeds, s) +} + +} + +else { + / setup with 3 random seeds + for i := 0; i < 3; i++ { + seed := rand.Int63() + for j := 0; j < numTimesToRunPerSeed; j++ { / multiple rounds + seeds = append(seeds, seed) +} + +} + +} + / overwrite default app config + interBlockCachingAppFactory := func(logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp)) *SimApp { + if FlagEnableStreamingValue { + m := map[string]any{ + "streaming.abci.keys": []string{"*" +}, + "streaming.abci.plugin": "abci_v1", + "streaming.abci.stop-node-on-err": true, +} + others := appOpts + appOpts = appOptionsFn(func(k string) + +any { + if v, ok := m[k]; ok { + return v +} + +return others.Get(k) +}) +} + +return NewSimApp(logger, db, nil, true, appOpts, append(baseAppOptions, interBlockCacheOpt())...) 
+} + +var mx sync.Mutex + appHashResults := make(map[int64][][]byte) + appSimLogger := make(map[int64][]simulation.LogWriter) + captureAndCheckHash := func(tb testing.TB, ti sims.TestInstance[*SimApp], _ []simtypes.Account) { + tb.Helper() + +seed, appHash := ti.Cfg.Seed, ti.App.LastCommitID().Hash + mx.Lock() + +otherHashes, execWriters := appHashResults[seed], appSimLogger[seed] + if len(otherHashes) < numTimesToRunPerSeed-1 { + appHashResults[seed], appSimLogger[seed] = append(otherHashes, appHash), append(execWriters, ti.ExecLogWriter) +} + +else { / cleanup + delete(appHashResults, seed) + +delete(appSimLogger, seed) +} + +mx.Unlock() + +var failNow bool + / and check that all app hashes per seed are equal for each iteration + for i := 0; i < len(otherHashes); i++ { + if !assert.Equal(tb, otherHashes[i], appHash) { + execWriters[i].PrintLogs() + +failNow = true +} + +} + if failNow { + ti.ExecLogWriter.PrintLogs() + +tb.Fatalf("non-determinism in seed %d", seed) +} + +} + / run simulations + sims.RunWithSeeds(t, interBlockCachingAppFactory, setupStateFactory, seeds, []byte{ +}, captureAndCheckHash) +} + +type ComparableStoreApp interface { + LastBlockHeight() + +int64 + NewContextLegacy(isCheckTx bool, header cmtproto.Header) + +sdk.Context + GetKey(storeKey string) *storetypes.KVStoreKey + GetStoreKeys() []storetypes.StoreKey +} + +func AssertEqualStores( + tb testing.TB, + app, newApp ComparableStoreApp, + storeDecoders simtypes.StoreDecoderRegistry, + skipPrefixes map[string][][]byte, +) { + tb.Helper() + ctxA := app.NewContextLegacy(true, cmtproto.Header{ + Height: app.LastBlockHeight() +}) + ctxB := newApp.NewContextLegacy(true, cmtproto.Header{ + Height: app.LastBlockHeight() +}) + storeKeys := app.GetStoreKeys() + +require.NotEmpty(tb, storeKeys) + for _, appKeyA := range storeKeys { + / only compare kvstores + if _, ok := appKeyA.(*storetypes.KVStoreKey); !ok { + continue +} + keyName := appKeyA.Name() + appKeyB := newApp.GetKey(keyName) + storeA := 
ctxA.KVStore(appKeyA) + storeB := ctxB.KVStore(appKeyB) + +failedKVAs, failedKVBs := simtestutil.DiffKVStores(storeA, storeB, skipPrefixes[keyName]) + +require.Equal(tb, len(failedKVAs), len(failedKVBs), "unequal sets of key-values to compare %s, key stores %s and %s", keyName, appKeyA, appKeyB) + +tb.Logf("compared %d different key/value pairs between %s and %s\n", len(failedKVAs), appKeyA, appKeyB) + if !assert.Equal(tb, 0, len(failedKVAs), simtestutil.GetSimulationLog(keyName, storeDecoders, failedKVAs, failedKVBs)) { + for _, v := range failedKVAs { + tb.Logf("store mismatch: %q\n", v) +} + +tb.FailNow() +} + +} +} + +/ appOptionsFn is an adapter to the single method AppOptions interface +type appOptionsFn func(string) + +any + +func (f appOptionsFn) + +Get(k string) + +any { + return f(k) +} + +/ FauxMerkleModeOpt returns a BaseApp option to use a dbStoreAdapter instead of +/ an IAVLStore for faster simulation speed. +func FauxMerkleModeOpt(bapp *baseapp.BaseApp) { + bapp.SetFauxMerkleMode() +} + +func FuzzFullAppSimulation(f *testing.F) { + f.Fuzz(func(t *testing.T, rawSeed []byte) { + if len(rawSeed) < 8 { + t.Skip() + +return +} + +sims.RunWithSeeds( + t, + NewSimApp, + setupStateFactory, + []int64{ + int64(binary.BigEndian.Uint64(rawSeed)) +}, + rawSeed[8:], + ) +}) +} +``` diff --git a/docs/sdk/v0.53/documentation/operations/testing.mdx b/docs/sdk/v0.53/documentation/operations/testing.mdx new file mode 100644 index 00000000..5e5921f1 --- /dev/null +++ b/docs/sdk/v0.53/documentation/operations/testing.mdx @@ -0,0 +1,2922 @@ +--- +title: Testing +--- + +The Cosmos SDK contains different types of [tests](https://martinfowler.com/articles/practical-test-pyramid.html). +These tests have different goals and are used at different stages of the development cycle. +We advise, as a general rule, using tests at all stages of the development cycle. +As a chain developer, you are advised to test your application and modules in a similar way to the SDK.
+ +The rationale behind testing can be found in [ADR-59](https://docs.cosmos.network/main/build/architecture/adr-059-test-scopes). + +## Unit Tests + +Unit tests are the lowest test category of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html). +All packages and modules should have unit test coverage. Modules should have their dependencies mocked: this means mocking keepers. + +The SDK uses `mockgen` to generate mocks for keepers: + +```bash expandable +#!/usr/bin/env bash + +mockgen_cmd="mockgen" +$mockgen_cmd -source=baseapp/abci_utils.go -package mock -destination baseapp/testutil/mock/mocks.go +$mockgen_cmd -source=client/account_retriever.go -package mock -destination testutil/mock/account_retriever.go +$mockgen_cmd -package mock -destination store/mock/cosmos_cosmos_db_DB.go github.com/cosmos/cosmos-db DB +$mockgen_cmd -source=types/module/module.go -package mock -destination testutil/mock/types_module_module.go +$mockgen_cmd -source=types/module/mock_appmodule_test.go -package mock -destination testutil/mock/types_mock_appmodule.go +$mockgen_cmd -source=types/invariant.go -package mock -destination testutil/mock/types_invariant.go +$mockgen_cmd -package mock -destination testutil/mock/grpc_server.go github.com/cosmos/gogoproto/grpc Server +$mockgen_cmd -package mock -destination testutil/mock/logger.go cosmossdk.io/log Logger +$mockgen_cmd -source=x/nft/expected_keepers.go -package testutil -destination x/nft/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/feegrant/expected_keepers.go -package testutil -destination x/feegrant/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/mint/types/expected_keepers.go -package testutil -destination x/mint/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/params/proposal_handler_test.go -package testutil -destination x/params/testutil/staking_keeper_mock.go +$mockgen_cmd -source=x/auth/tx/config/expected_keepers.go -package testutil -destination
x/auth/tx/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/auth/types/expected_keepers.go -package testutil -destination x/auth/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/auth/ante/expected_keepers.go -package testutil -destination x/auth/ante/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/authz/expected_keepers.go -package testutil -destination x/authz/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/bank/types/expected_keepers.go -package testutil -destination x/bank/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/group/testutil/expected_keepers.go -package testutil -destination x/group/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/evidence/types/expected_keepers.go -package testutil -destination x/evidence/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/distribution/types/expected_keepers.go -package testutil -destination x/distribution/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/slashing/types/expected_keepers.go -package testutil -destination x/slashing/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/genutil/types/expected_keepers.go -package testutil -destination x/genutil/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/gov/testutil/expected_keepers.go -package testutil -destination x/gov/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/staking/types/expected_keepers.go -package testutil -destination x/staking/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/auth/vesting/types/expected_keepers.go -package testutil -destination x/auth/vesting/testutil/expected_keepers_mocks.go +$mockgen_cmd -source=x/protocolpool/types/expected_keepers.go -package testutil -destination x/protocolpool/testutil/expected_keepers_mocks.go +``` + +You can read more about mockgen [here](https://go.uber.org/mock). 
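
To see what this buys you conceptually, here is a minimal hand-rolled sketch of the pattern that `mockgen` automates: the module under test depends only on an expected-keeper interface, so a test double with canned return values can be injected in place of the real keeper. The `BankKeeper`, `GetBalance`, and `hasEnoughDeposit` names below are illustrative only, not the SDK's actual signatures.

```go
package main

import "fmt"

// BankKeeper is a simplified stand-in for a module's expected-keeper interface.
type BankKeeper interface {
	GetBalance(addr string) int64
}

// mockBankKeeper is a hand-rolled test double: canned balances plus a call counter,
// which is roughly what a generated gomock mock gives you via EXPECT().
type mockBankKeeper struct {
	balances map[string]int64
	calls    int
}

func (m *mockBankKeeper) GetBalance(addr string) int64 {
	m.calls++
	return m.balances[addr]
}

// hasEnoughDeposit is the "code under test": it only sees the interface,
// so the mock can be swapped in without touching any real keeper state.
func hasEnoughDeposit(bk BankKeeper, addr string, min int64) bool {
	return bk.GetBalance(addr) >= min
}

func main() {
	mock := &mockBankKeeper{balances: map[string]int64{"alice": 100}}
	fmt.Println(hasEnoughDeposit(mock, "alice", 50)) // true
	fmt.Println(hasEnoughDeposit(mock, "bob", 1))    // false: unknown address has zero balance
	fmt.Println(mock.calls)                          // 2
}
```

A generated mock additionally verifies that the expected calls actually happened, as shown with `EXPECT()` in the `x/gov` example below.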
+ +### Example + +As an example, we will walkthrough the [keeper tests](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/keeper/keeper_test.go) of the `x/gov` module. + +The `x/gov` module has a `Keeper` type, which requires a few external dependencies (ie. imports outside `x/gov` to work properly). + +```go expandable +package keeper + +import ( + + "context" + "fmt" + "time" + "cosmossdk.io/collections" + corestoretypes "cosmossdk.io/core/store" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" +) + +/ Keeper defines the governance module Keeper +type Keeper struct { + authKeeper types.AccountKeeper + bankKeeper types.BankKeeper + distrKeeper types.DistributionKeeper + + / The reference to the DelegationSet and ValidatorSet to get information about validators and delegators + sk types.StakingKeeper + + / GovHooks + hooks types.GovHooks + + / The (unexposed) + +keys used to access the stores from the Context. + storeService corestoretypes.KVStoreService + + / The codec for binary encoding/decoding. + cdc codec.Codec + + / Legacy Proposal router + legacyRouter v1beta1.Router + + / Msg server router + router baseapp.MessageRouter + + config types.Config + + calculateVoteResultsAndVotingPowerFn CalculateVoteResultsAndVotingPowerFn + + / the address capable of executing a MsgUpdateParams message. Typically, this + / should be the x/gov module account. 
+ authority string + + Schema collections.Schema + Constitution collections.Item[string] + Params collections.Item[v1.Params] + Deposits collections.Map[collections.Pair[uint64, sdk.AccAddress], v1.Deposit] + Votes collections.Map[collections.Pair[uint64, sdk.AccAddress], v1.Vote] + ProposalID collections.Sequence + Proposals collections.Map[uint64, v1.Proposal] + ActiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] / TODO(tip): this should be simplified and go into an index. + InactiveProposalsQueue collections.Map[collections.Pair[time.Time, uint64], uint64] / TODO(tip): this should be simplified and go into an index. + VotingPeriodProposals collections.Map[uint64, []byte] / TODO(tip): this could be a keyset or index. +} + +type InitOption func(*Keeper) + +/ WithCustomCalculateVoteResultsAndVotingPowerFn is an optional input to set a custom CalculateVoteResultsAndVotingPowerFn. +/ If this function is not provided, the default function is used. +func WithCustomCalculateVoteResultsAndVotingPowerFn(calculateVoteResultsAndVotingPowerFn CalculateVoteResultsAndVotingPowerFn) + +InitOption { + return func(k *Keeper) { + if calculateVoteResultsAndVotingPowerFn == nil { + panic("calculateVoteResultsAndVotingPowerFn cannot be nil") +} + +k.calculateVoteResultsAndVotingPowerFn = calculateVoteResultsAndVotingPowerFn +} +} + +/ GetAuthority returns the x/gov module's authority. +func (k Keeper) + +GetAuthority() + +string { + return k.authority +} + +/ NewKeeper returns a governance keeper. It handles: +/ - submitting governance proposals +/ - depositing funds into proposals, and activating upon sufficient funds being deposited +/ - users voting on proposals, with weight proportional to stake in the system +/ - and tallying the result of the vote. 
+/ +/ CONTRACT: the parameter Subspace must have the param key table already initialized +func NewKeeper( + cdc codec.Codec, storeService corestoretypes.KVStoreService, authKeeper types.AccountKeeper, + bankKeeper types.BankKeeper, sk types.StakingKeeper, distrKeeper types.DistributionKeeper, + router baseapp.MessageRouter, config types.Config, authority string, initOptions ...InitOption, +) *Keeper { + / ensure governance module account is set + if addr := authKeeper.GetModuleAddress(types.ModuleName); addr == nil { + panic(fmt.Sprintf("%s module account has not been set", types.ModuleName)) +} + if _, err := authKeeper.AddressCodec().StringToBytes(authority); err != nil { + panic(fmt.Sprintf("invalid authority address: %s", authority)) +} + + / If MaxMetadataLen not set by app developer, set to default value. + if config.MaxMetadataLen == 0 { + config.MaxMetadataLen = types.DefaultConfig().MaxMetadataLen +} + sb := collections.NewSchemaBuilder(storeService) + k := &Keeper{ + storeService: storeService, + authKeeper: authKeeper, + bankKeeper: bankKeeper, + distrKeeper: distrKeeper, + sk: sk, + cdc: cdc, + router: router, + config: config, + calculateVoteResultsAndVotingPowerFn: defaultCalculateVoteResultsAndVotingPower, + authority: authority, + Constitution: collections.NewItem(sb, types.ConstitutionKey, "constitution", collections.StringValue), + Params: collections.NewItem(sb, types.ParamsKey, "params", codec.CollValue[v1.Params](cdc)), + Deposits: collections.NewMap(sb, types.DepositsKeyPrefix, "deposits", collections.PairKeyCodec(collections.Uint64Key, sdk.LengthPrefixedAddressKey(sdk.AccAddressKey)), codec.CollValue[v1.Deposit](cdc)), / nolint: staticcheck / sdk.LengthPrefixedAddressKey is needed to retain state compatibility + Votes: collections.NewMap(sb, types.VotesKeyPrefix, "votes", collections.PairKeyCodec(collections.Uint64Key,
sdk.LengthPrefixedAddressKey(sdk.AccAddressKey)), codec.CollValue[v1.Vote](cdc)), / nolint: staticcheck / sdk.LengthPrefixedAddressKey is needed to retain state compatibility + ProposalID: collections.NewSequence(sb, types.ProposalIDKey, "proposal_id"), + Proposals: collections.NewMap(sb, types.ProposalsKeyPrefix, "proposals", collections.Uint64Key, codec.CollValue[v1.Proposal](cdc)), + ActiveProposalsQueue: collections.NewMap(sb, types.ActiveProposalQueuePrefix, "active_proposals_queue", collections.PairKeyCodec(sdk.TimeKey, collections.Uint64Key), collections.Uint64Value), / sdk.TimeKey is needed to retain state compatibility + InactiveProposalsQueue: collections.NewMap(sb, types.InactiveProposalQueuePrefix, "inactive_proposals_queue", collections.PairKeyCodec(sdk.TimeKey, collections.Uint64Key), collections.Uint64Value), / sdk.TimeKey is needed to retain state compatibility + VotingPeriodProposals: collections.NewMap(sb, types.VotingPeriodProposalKeyPrefix, "voting_period_proposals", collections.Uint64Key, collections.BytesValue), +} + for _, opt := range initOptions { + opt(k) +} + +schema, err := sb.Build() + if err != nil { + panic(err) +} + +k.Schema = schema + return k +} + +/ Hooks gets the hooks for governance *Keeper { + func (k *Keeper) + +Hooks() + +types.GovHooks { + if k.hooks == nil { + / return a no-op implementation if no hooks are set + return types.MultiGovHooks{ +} + +} + +return k.hooks +} + +/ SetHooks sets the hooks for governance +func (k *Keeper) + +SetHooks(gh types.GovHooks) *Keeper { + if k.hooks != nil { + panic("cannot set governance hooks twice") +} + +k.hooks = gh + + return k +} + +/ SetLegacyRouter sets the legacy router for governance +func (k *Keeper) + +SetLegacyRouter(router v1beta1.Router) { + / It is vital to seal the governance proposal router here as to not allow + / further handlers to be registered after the keeper is created since this + /
could create invalid or non-deterministic behavior. + router.Seal() + +k.legacyRouter = router +} + +/ Logger returns a module-specific logger. +func (k Keeper) + +Logger(ctx context.Context) + +log.Logger { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +return sdkCtx.Logger().With("module", "x/"+types.ModuleName) +} + +/ Router returns the gov keeper's router +func (k Keeper) + +Router() + +baseapp.MessageRouter { + return k.router +} + +/ LegacyRouter returns the gov keeper's legacy router +func (k Keeper) + +LegacyRouter() + +v1beta1.Router { + return k.legacyRouter +} + +/ GetGovernanceAccount returns the governance ModuleAccount +func (k Keeper) + +GetGovernanceAccount(ctx context.Context) + +sdk.ModuleAccountI { + return k.authKeeper.GetModuleAccount(ctx, types.ModuleName) +} + +/ ModuleAccountAddress returns gov module account address +func (k Keeper) + +ModuleAccountAddress() + +sdk.AccAddress { + return k.authKeeper.GetModuleAddress(types.ModuleName) +} + +/ assertMetadataLength returns an error if given metadata length +/ is greater than a pre-defined MaxMetadataLen. +func (k Keeper) + +assertMetadataLength(metadata string) + +error { + if metadata != "" && uint64(len(metadata)) > k.config.MaxMetadataLen { + return types.ErrMetadataTooLong.Wrapf("got metadata with length %d", len(metadata)) +} + +return nil +} + +/ assertSummaryLength returns an error if given summary length +/ is greater than a pre-defined 40*MaxMetadataLen. +func (k Keeper) + +assertSummaryLength(summary string) + +error { + if summary != "" && uint64(len(summary)) > 40*k.config.MaxMetadataLen { + return types.ErrSummaryTooLong.Wrapf("got summary with length %d", len(summary)) +} + +return nil +} +``` + +In order to only test `x/gov`, we mock the [expected keepers](https://docs.cosmos.network/v0.46/building-modules/keeper.html#type-definition) and instantiate the `Keeper` with the mocked dependencies. 
Note that we may need to configure the mocked dependencies to return the expected values: + +```go expandable +package keeper_test + +import ( + + "fmt" + "testing" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + cmttime "github.com/cometbft/cometbft/types/time" + "github.com/stretchr/testify/require" + "go.uber.org/mock/gomock" + "cosmossdk.io/math" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil" + "github.com/cosmos/cosmos-sdk/testutil/testdata" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + disttypes "github.com/cosmos/cosmos-sdk/x/distribution/types" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +var ( + _, _, addr = testdata.KeyTestPubAddr() + +govAcct = authtypes.NewModuleAddress(types.ModuleName) + +distAcct = authtypes.NewModuleAddress(disttypes.ModuleName) + +TestProposal = getTestProposal() +) + +/ getTestProposal creates and returns a test proposal message. +func getTestProposal() []sdk.Msg { + legacyProposalMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Title", "description"), authtypes.NewModuleAddress(types.ModuleName).String()) + if err != nil { + panic(err) +} + +return []sdk.Msg{ + banktypes.NewMsgSend(govAcct, addr, sdk.NewCoins(sdk.NewCoin("stake", math.NewInt(1000)))), + legacyProposalMsg, +} +} + +/ setupGovKeeper creates a govKeeper as well as all its dependencies. 
+func setupGovKeeper(t *testing.T) ( + *keeper.Keeper, + *govtestutil.MockAccountKeeper, + *govtestutil.MockBankKeeper, + *govtestutil.MockStakingKeeper, + *govtestutil.MockDistributionKeeper, + moduletestutil.TestEncodingConfig, + sdk.Context, +) { + t.Helper() + key := storetypes.NewKVStoreKey(types.StoreKey) + storeService := runtime.NewKVStoreService(key) + testCtx := testutil.DefaultContextWithDB(t, key, storetypes.NewTransientStoreKey("transient_test")) + ctx := testCtx.Ctx.WithBlockHeader(cmtproto.Header{ + Time: cmttime.Now() +}) + encCfg := moduletestutil.MakeTestEncodingConfig() + +v1.RegisterInterfaces(encCfg.InterfaceRegistry) + +v1beta1.RegisterInterfaces(encCfg.InterfaceRegistry) + +banktypes.RegisterInterfaces(encCfg.InterfaceRegistry) + + / Create MsgServiceRouter, but don't populate it before creating the gov + / keeper. + msr := baseapp.NewMsgServiceRouter() + + / gomock initializations + ctrl := gomock.NewController(t) + acctKeeper := govtestutil.NewMockAccountKeeper(ctrl) + bankKeeper := govtestutil.NewMockBankKeeper(ctrl) + stakingKeeper := govtestutil.NewMockStakingKeeper(ctrl) + distributionKeeper := govtestutil.NewMockDistributionKeeper(ctrl) + +acctKeeper.EXPECT().GetModuleAddress(types.ModuleName).Return(govAcct).AnyTimes() + +acctKeeper.EXPECT().GetModuleAddress(disttypes.ModuleName).Return(distAcct).AnyTimes() + +acctKeeper.EXPECT().GetModuleAccount(gomock.Any(), types.ModuleName).Return(authtypes.NewEmptyModuleAccount(types.ModuleName)).AnyTimes() + +acctKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes() + +trackMockBalances(bankKeeper, distributionKeeper) + +stakingKeeper.EXPECT().TokensFromConsensusPower(ctx, gomock.Any()).DoAndReturn(func(ctx sdk.Context, power int64) + +math.Int { + return sdk.TokensFromConsensusPower(power, math.NewIntFromUint64(1000000)) +}).AnyTimes() + +stakingKeeper.EXPECT().BondDenom(ctx).Return("stake", nil).AnyTimes() + 
stakingKeeper.EXPECT().IterateBondedValidatorsByPower(gomock.Any(), gomock.Any()).AnyTimes()
	stakingKeeper.EXPECT().IterateDelegations(gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes()
	stakingKeeper.EXPECT().TotalBondedTokens(gomock.Any()).Return(math.NewInt(10000000), nil).AnyTimes()
	distributionKeeper.EXPECT().FundCommunityPool(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).AnyTimes()

	/ Gov keeper initializations
	govKeeper := keeper.NewKeeper(encCfg.Codec, storeService, acctKeeper, bankKeeper, stakingKeeper, distributionKeeper, msr, types.DefaultConfig(), govAcct.String())
	require.NoError(t, govKeeper.ProposalID.Set(ctx, 1))
	govRouter := v1beta1.NewRouter() / Also register legacy gov handlers to test them too.
	govRouter.AddRoute(types.RouterKey, v1beta1.ProposalHandler)
	govKeeper.SetLegacyRouter(govRouter)
	err := govKeeper.Params.Set(ctx, v1.DefaultParams())
	require.NoError(t, err)
	err = govKeeper.Constitution.Set(ctx, "constitution")
	require.NoError(t, err)

	/ Register all handlers for the MsgServiceRouter.
	msr.SetInterfaceRegistry(encCfg.InterfaceRegistry)
	v1.RegisterMsgServer(msr, keeper.NewMsgServerImpl(govKeeper))
	banktypes.RegisterMsgServer(msr, nil) / Nil is fine here as long as we never execute the proposal's Msgs.

	return govKeeper, acctKeeper, bankKeeper, stakingKeeper, distributionKeeper, encCfg, ctx
}

/ trackMockBalances sets up expected calls on the mock BankKeeper, and also
/ locally tracks account balances (not module balances).
func trackMockBalances(bankKeeper *govtestutil.MockBankKeeper, distributionKeeper *govtestutil.MockDistributionKeeper) {
	balances := make(map[string]sdk.Coins)
	balances[distAcct.String()] = sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, math.NewInt(0)))

	/ We don't track module account balances.
+ bankKeeper.EXPECT().MintCoins(gomock.Any(), minttypes.ModuleName, gomock.Any()).AnyTimes() + +bankKeeper.EXPECT().BurnCoins(gomock.Any(), types.ModuleName, gomock.Any()).AnyTimes() + +bankKeeper.EXPECT().SendCoinsFromModuleToModule(gomock.Any(), minttypes.ModuleName, types.ModuleName, gomock.Any()).AnyTimes() + + / But we do track normal account balances. + bankKeeper.EXPECT().SendCoinsFromAccountToModule(gomock.Any(), gomock.Any(), types.ModuleName, gomock.Any()).DoAndReturn(func(_ sdk.Context, sender sdk.AccAddress, _ string, coins sdk.Coins) + +error { + newBalance, negative := balances[sender.String()].SafeSub(coins...) + if negative { + return fmt.Errorf("not enough balance") +} + +balances[sender.String()] = newBalance + return nil +}).AnyTimes() + +bankKeeper.EXPECT().SendCoinsFromModuleToAccount(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, module string, rcpt sdk.AccAddress, coins sdk.Coins) + +error { + balances[rcpt.String()] = balances[rcpt.String()].Add(coins...) + +return nil +}).AnyTimes() + +bankKeeper.EXPECT().GetAllBalances(gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, addr sdk.AccAddress) + +sdk.Coins { + return balances[addr.String()] +}).AnyTimes() + +bankKeeper.EXPECT().GetBalance(gomock.Any(), gomock.Any(), sdk.DefaultBondDenom).DoAndReturn(func(_ sdk.Context, addr sdk.AccAddress, _ string) + +sdk.Coin { + balances := balances[addr.String()] + for _, balance := range balances { + if balance.Denom == sdk.DefaultBondDenom { + return balance +} + +} + +return sdk.NewCoin(sdk.DefaultBondDenom, math.NewInt(0)) +}).AnyTimes() + +distributionKeeper.EXPECT().FundCommunityPool(gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ sdk.Context, coins sdk.Coins, sender sdk.AccAddress) + +error { + / sender balance + newBalance, negative := balances[sender.String()].SafeSub(coins...) 
+ if negative { + return fmt.Errorf("not enough balance") +} + +balances[sender.String()] = newBalance + / receiver balance + balances[distAcct.String()] = balances[distAcct.String()].Add(coins...) + +return nil +}).AnyTimes() +} +``` + +This allows us to test the `x/gov` module without having to import other modules. + +```go expandable +package keeper_test + +import ( + + "testing" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + "cosmossdk.io/collections" + sdkmath "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/baseapp" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/address" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + govtestutil "github.com/cosmos/cosmos-sdk/x/gov/testutil" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +var address1 = "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r" + +type KeeperTestSuite struct { + suite.Suite + + cdc codec.Codec + ctx sdk.Context + govKeeper *keeper.Keeper + acctKeeper *govtestutil.MockAccountKeeper + bankKeeper *govtestutil.MockBankKeeper + stakingKeeper *govtestutil.MockStakingKeeper + distKeeper *govtestutil.MockDistributionKeeper + queryClient v1.QueryClient + legacyQueryClient v1beta1.QueryClient + addrs []sdk.AccAddress + msgSrvr v1.MsgServer + legacyMsgSrvr v1beta1.MsgServer +} + +func (suite *KeeperTestSuite) + +SetupSuite() { + suite.reset() +} + +func (suite *KeeperTestSuite) + +reset() { + govKeeper, acctKeeper, bankKeeper, stakingKeeper, distKeeper, encCfg, ctx := setupGovKeeper(suite.T()) + + / Populate the gov account with some coins, as the TestProposal we have + / is a MsgSend from the gov account. 
+ coins := sdk.NewCoins(sdk.NewCoin("stake", sdkmath.NewInt(100000))) + err := bankKeeper.MintCoins(suite.ctx, minttypes.ModuleName, coins) + +suite.NoError(err) + +err = bankKeeper.SendCoinsFromModuleToModule(ctx, minttypes.ModuleName, types.ModuleName, coins) + +suite.NoError(err) + queryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) + +v1.RegisterQueryServer(queryHelper, keeper.NewQueryServer(govKeeper)) + legacyQueryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) + +v1beta1.RegisterQueryServer(legacyQueryHelper, keeper.NewLegacyQueryServer(govKeeper)) + queryClient := v1.NewQueryClient(queryHelper) + legacyQueryClient := v1beta1.NewQueryClient(legacyQueryHelper) + +suite.ctx = ctx + suite.govKeeper = govKeeper + suite.acctKeeper = acctKeeper + suite.bankKeeper = bankKeeper + suite.stakingKeeper = stakingKeeper + suite.distKeeper = distKeeper + suite.cdc = encCfg.Codec + suite.queryClient = queryClient + suite.legacyQueryClient = legacyQueryClient + suite.msgSrvr = keeper.NewMsgServerImpl(suite.govKeeper) + +suite.legacyMsgSrvr = keeper.NewLegacyMsgServerImpl(govAcct.String(), suite.msgSrvr) + +suite.addrs = simtestutil.AddTestAddrsIncremental(bankKeeper, stakingKeeper, ctx, 3, sdkmath.NewInt(30000000)) + +suite.acctKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes() +} + +func TestIncrementProposalNumber(t *testing.T) { + govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t) + +authKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes() + ac := address.NewBech32Codec("cosmos") + +addrBz, err := ac.StringToBytes(address1) + +require.NoError(t, err) + tp := TestProposal + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", 
"summary", addrBz, true) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, true) + +require.NoError(t, err) + _, err = govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + +proposal6, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + +require.Equal(t, uint64(6), proposal6.Id) +} + +func TestProposalQueues(t *testing.T) { + govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t) + ac := address.NewBech32Codec("cosmos") + +addrBz, err := ac.StringToBytes(address1) + +require.NoError(t, err) + +authKeeper.EXPECT().AddressCodec().Return(address.NewBech32Codec("cosmos")).AnyTimes() + + / create test proposals + tp := TestProposal + proposal, err := govKeeper.SubmitProposal(ctx, tp, "", "test", "summary", addrBz, false) + +require.NoError(t, err) + +has, err := govKeeper.InactiveProposalsQueue.Has(ctx, collections.Join(*proposal.DepositEndTime, proposal.Id)) + +require.NoError(t, err) + +require.True(t, has) + +require.NoError(t, govKeeper.ActivateVotingPeriod(ctx, proposal)) + +proposal, err = govKeeper.Proposals.Get(ctx, proposal.Id) + +require.Nil(t, err) + +has, err = govKeeper.ActiveProposalsQueue.Has(ctx, collections.Join(*proposal.VotingEndTime, proposal.Id)) + +require.NoError(t, err) + +require.True(t, has) +} + +func TestSetHooks(t *testing.T) { + govKeeper, _, _, _, _, _, _ := setupGovKeeper(t) + +require.Empty(t, govKeeper.Hooks()) + govHooksReceiver := MockGovHooksReceiver{ +} + +govKeeper.SetHooks(types.NewMultiGovHooks(&govHooksReceiver)) + +require.NotNil(t, govKeeper.Hooks()) + +require.Panics(t, func() { + govKeeper.SetHooks(&govHooksReceiver) +}) +} + +func TestGetGovGovernanceAndModuleAccountAddress(t *testing.T) { + govKeeper, authKeeper, _, _, _, _, ctx := setupGovKeeper(t) + mAcc := authKeeper.GetModuleAccount(ctx, "gov") + +require.Equal(t, mAcc, govKeeper.GetGovernanceAccount(ctx)) + 
mAddr := authKeeper.GetModuleAddress("gov")
	require.Equal(t, mAddr, govKeeper.ModuleAccountAddress())
}

func TestKeeperTestSuite(t *testing.T) {
	suite.Run(t, new(KeeperTestSuite))
}
```

We can then create unit tests using the newly created `Keeper` instance, as the test functions above demonstrate.

## Integration Tests

Integration tests are at the second level of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
In the SDK, we locate our integration tests under [`/tests/integration`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/integration).

The goal of these integration tests is to test how a component interacts with its dependencies. Compared to unit tests, integration tests do not mock dependencies; instead, they use the direct dependencies of the component. This also differs from end-to-end tests, which test the component with a full application.

Integration tests interact with the tested module via the defined `Msg` and `Query` services. The result of the test can be verified by checking the state of the application, the emitted events, or the response. It is advised to combine two of these methods to verify the result of the test.

The SDK provides small helpers for quickly setting up integration tests. These helpers can be found at [`testutil/integration`](https://github.com/cosmos/cosmos-sdk/blob/main/testutil/integration).
+ +### Example + +```go expandable +package integration_test + +import ( + + "fmt" + "io" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/google/go-cmp/cmp" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + storetypes "cosmossdk.io/store/types" + + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/integration" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/mint" + mintkeeper "github.com/cosmos/cosmos-sdk/x/mint/keeper" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" +) + +/ Example shows how to use the integration test framework to test the integration of SDK modules. +/ Panics are used in this example, but in a real test case, you should use the testing.T object and assertions. +func Example() { + / in this example we are testing the integration of the following modules: + / - mint, which directly depends on auth, bank and staking + encodingCfg := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}, mint.AppModuleBasic{ +}) + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey, minttypes.StoreKey) + authority := authtypes.NewModuleAddress("gov").String() + + / replace the logger by testing values in a real test case (e.g. 
log.NewTestLogger(t)) + logger := log.NewNopLogger() + cms := integration.CreateMultiStore(keys, logger) + newCtx := sdk.NewContext(cms, cmtproto.Header{ +}, true, logger) + accountKeeper := authkeeper.NewAccountKeeper( + encodingCfg.Codec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}}, + addresscodec.NewBech32Codec("cosmos"), + "cosmos", + authority, + ) + + / subspace is nil because we don't test params (which is legacy anyway) + authModule := auth.NewAppModule(encodingCfg.Codec, accountKeeper, authsims.RandomGenesisAccounts, nil) + + / here bankkeeper and staking keeper is nil because we are not testing them + / subspace is nil because we don't test params (which is legacy anyway) + mintKeeper := mintkeeper.NewKeeper(encodingCfg.Codec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), nil, accountKeeper, nil, authtypes.FeeCollectorName, authority) + mintModule := mint.NewAppModule(encodingCfg.Codec, mintKeeper, accountKeeper, nil, nil) + + / create the application and register all the modules from the previous step + integrationApp := integration.NewIntegrationApp( + newCtx, + logger, + keys, + encodingCfg.Codec, + map[string]appmodule.AppModule{ + authtypes.ModuleName: authModule, + minttypes.ModuleName: mintModule, +}, + ) + + / register the message and query servers + authtypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), authkeeper.NewMsgServerImpl(accountKeeper)) + +minttypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), mintkeeper.NewMsgServerImpl(mintKeeper)) + +minttypes.RegisterQueryServer(integrationApp.QueryHelper(), mintkeeper.NewQueryServerImpl(mintKeeper)) + params := minttypes.DefaultParams() + +params.BlocksPerYear = 10000 + + / now we can use the application to test a mint message + result, err := integrationApp.RunMsg(&minttypes.MsgUpdateParams{ + Authority: authority, + Params: params, +}) + if err != nil { + 
panic(err) +} + + / in this example the result is an empty response, a nil check is enough + / in other cases, it is recommended to check the result value. + if result == nil { + panic(fmt.Errorf("unexpected nil result")) +} + + / we now check the result + resp := minttypes.MsgUpdateParamsResponse{ +} + +err = encodingCfg.Codec.Unmarshal(result.Value, &resp) + if err != nil { + panic(err) +} + sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context()) + + / we should also check the state of the application + got, err := mintKeeper.Params.Get(sdkCtx) + if err != nil { + panic(err) +} + if diff := cmp.Diff(got, params); diff != "" { + panic(diff) +} + +fmt.Println(got.BlocksPerYear) + / Output: 10000 +} + +/ ExampleOneModule shows how to use the integration test framework to test the integration of a single module. +/ That module has no dependency on other modules. +func Example_oneModule() { + / in this example we are testing the integration of the auth module: + encodingCfg := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}) + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey) + authority := authtypes.NewModuleAddress("gov").String() + + / replace the logger by testing values in a real test case (e.g. 
log.NewTestLogger(t))
	logger := log.NewLogger(io.Discard)
	cms := integration.CreateMultiStore(keys, logger)
	newCtx := sdk.NewContext(cms, cmtproto.Header{}, true, logger)
	accountKeeper := authkeeper.NewAccountKeeper(
		encodingCfg.Codec,
		runtime.NewKVStoreService(keys[authtypes.StoreKey]),
		authtypes.ProtoBaseAccount,
		map[string][]string{
			minttypes.ModuleName: {authtypes.Minter},
		},
		addresscodec.NewBech32Codec("cosmos"),
		"cosmos",
		authority,
	)

	/ subspace is nil because we don't test params (which is legacy anyway)
	authModule := auth.NewAppModule(encodingCfg.Codec, accountKeeper, authsims.RandomGenesisAccounts, nil)

	/ create the application and register all the modules from the previous step
	integrationApp := integration.NewIntegrationApp(
		newCtx,
		logger,
		keys,
		encodingCfg.Codec,
		map[string]appmodule.AppModule{
			authtypes.ModuleName: authModule,
		},
	)

	/ register the message and query servers
	authtypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), authkeeper.NewMsgServerImpl(accountKeeper))
	params := authtypes.DefaultParams()
	params.MaxMemoCharacters = 1000

	/ now we can use the application to test an auth message
	result, err := integrationApp.RunMsg(&authtypes.MsgUpdateParams{
		Authority: authority,
		Params:    params,
	},
		/ this runs the begin and end blockers of the module before and after the message
		integration.WithAutomaticFinalizeBlock(),
		/ this commits the state after the message
		integration.WithAutomaticCommit(),
	)
	if err != nil {
		panic(err)
	}

	/ verify that the begin and end blockers were called
	/ NOTE: in this example, we are testing auth, which doesn't have any begin or end blocker
	/ so verifying the block height is enough
	if integrationApp.LastBlockHeight() != 2 {
		panic(fmt.Errorf("expected block height to be 2, got %d", integrationApp.LastBlockHeight()))
	}

	/ in this example the result is an empty response, a nil check is enough
	/ in
other cases, it is recommended to check the result value.
	if result == nil {
		panic(fmt.Errorf("unexpected nil result"))
	}

	/ we now check the result
	resp := authtypes.MsgUpdateParamsResponse{}
	err = encodingCfg.Codec.Unmarshal(result.Value, &resp)
	if err != nil {
		panic(err)
	}
	sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context())

	/ we should also check the state of the application
	got := accountKeeper.GetParams(sdkCtx)
	if diff := cmp.Diff(got, params); diff != "" {
		panic(diff)
	}

	fmt.Println(got.MaxMemoCharacters)
	/ Output: 1000
}
```

## Deterministic and Regression Tests

Tests are written for queries in the Cosmos SDK that carry the `module_query_safe` Protobuf annotation.

Each query is tested using two methods:

* Property-based testing with the [`rapid`](https://pkg.go.dev/pgregory.net/rapid@v0.5.3) library. The property tested is that the query response and gas consumption are the same over 1000 query calls.
* Regression tests written with hardcoded responses and gas values, verifying that they change neither over 1000 calls nor between SDK patch versions.
+ +Here's an example of regression tests: + +```go expandable +package keeper_test + +import ( + + "testing" + + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "github.com/stretchr/testify/require" + "gotest.tools/v3/assert" + "pgregory.net/rapid" + "cosmossdk.io/core/appmodule" + "cosmossdk.io/log" + "cosmossdk.io/math" + storetypes "cosmossdk.io/store/types" + + addresscodec "github.com/cosmos/cosmos-sdk/codec/address" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/integration" + "github.com/cosmos/cosmos-sdk/testutil/testdata" + sdk "github.com/cosmos/cosmos-sdk/types" + moduletestutil "github.com/cosmos/cosmos-sdk/types/module/testutil" + "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + authsims "github.com/cosmos/cosmos-sdk/x/auth/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank" + "github.com/cosmos/cosmos-sdk/x/bank/keeper" + banktestutil "github.com/cosmos/cosmos-sdk/x/bank/testutil" + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + minttypes "github.com/cosmos/cosmos-sdk/x/mint/types" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" +) + +var ( + denomRegex = sdk.DefaultCoinDenomRegex() + +addr1 = sdk.MustAccAddressFromBech32("cosmos139f7kncmglres2nf3h4hc4tade85ekfr8sulz5") + +coin1 = sdk.NewCoin("denom", math.NewInt(10)) + +metadataAtom = banktypes.Metadata{ + Description: "The native staking token of the Cosmos Hub.", + DenomUnits: []*banktypes.DenomUnit{ + { + Denom: "uatom", + Exponent: 0, + Aliases: []string{"microatom" +}, +}, + { + Denom: "atom", + Exponent: 6, + Aliases: []string{"ATOM" +}, +}, +}, + Base: "uatom", + Display: "atom", +} +) + +type deterministicFixture struct { + ctx sdk.Context + bankKeeper keeper.BaseKeeper + queryClient 
banktypes.QueryClient +} + +func initDeterministicFixture(t *testing.T) *deterministicFixture { + t.Helper() + keys := storetypes.NewKVStoreKeys(authtypes.StoreKey, banktypes.StoreKey) + cdc := moduletestutil.MakeTestEncodingConfig(auth.AppModuleBasic{ +}, bank.AppModuleBasic{ +}).Codec + logger := log.NewTestLogger(t) + cms := integration.CreateMultiStore(keys, logger) + newCtx := sdk.NewContext(cms, cmtproto.Header{ +}, true, logger) + authority := authtypes.NewModuleAddress("gov") + maccPerms := map[string][]string{ + minttypes.ModuleName: { + authtypes.Minter +}, +} + accountKeeper := authkeeper.NewAccountKeeper( + cdc, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + addresscodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authority.String(), + ) + blockedAddresses := map[string]bool{ + accountKeeper.GetAuthority(): false, +} + bankKeeper := keeper.NewBaseKeeper( + cdc, + runtime.NewKVStoreService(keys[banktypes.StoreKey]), + accountKeeper, + blockedAddresses, + authority.String(), + log.NewNopLogger(), + ) + authModule := auth.NewAppModule(cdc, accountKeeper, authsims.RandomGenesisAccounts, nil) + bankModule := bank.NewAppModule(cdc, bankKeeper, accountKeeper, nil) + integrationApp := integration.NewIntegrationApp(newCtx, logger, keys, cdc, map[string]appmodule.AppModule{ + authtypes.ModuleName: authModule, + banktypes.ModuleName: bankModule, +}) + sdkCtx := sdk.UnwrapSDKContext(integrationApp.Context()) + + / Register MsgServer and QueryServer + banktypes.RegisterMsgServer(integrationApp.MsgServiceRouter(), keeper.NewMsgServerImpl(bankKeeper)) + +banktypes.RegisterQueryServer(integrationApp.QueryHelper(), keeper.NewQuerier(&bankKeeper)) + qr := integrationApp.QueryHelper() + queryClient := banktypes.NewQueryClient(qr) + f := deterministicFixture{ + ctx: sdkCtx, + bankKeeper: bankKeeper, + queryClient: queryClient, +} + +return &f +} + +func fundAccount(f *deterministicFixture, addr 
sdk.AccAddress, coin ...sdk.Coin) { + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, sdk.NewCoins(coin...)) + +assert.NilError(&testing.T{ +}, err) +} + +func getCoin(rt *rapid.T) + +sdk.Coin { + return sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) +} + +func TestGRPCQueryBalance(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + coin := getCoin(rt) + +fundAccount(f, addr, coin) + req := banktypes.NewQueryBalanceRequest(addr, coin.GetDenom()) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Balance, 0, true) +}) + +fundAccount(f, addr1, coin1) + req := banktypes.NewQueryBalanceRequest(addr1, coin1.GetDenom()) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Balance, 1087, false) +} + +func TestGRPCQueryAllBalances(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + numCoins := rapid.IntRange(1, 10).Draw(rt, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := getCoin(rt) + + / NewCoins sorts the denoms + coins = sdk.NewCoins(append(coins, coin)...) +} + +fundAccount(f, addr, coins...) + req := banktypes.NewQueryAllBalancesRequest(addr, testdata.PaginationGenerator(rt, uint64(numCoins)).Draw(rt, "pagination"), false) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.AllBalances, 0, true) +}) + coins := sdk.NewCoins( + sdk.NewCoin("stake", math.NewInt(10)), + sdk.NewCoin("denom", math.NewInt(100)), + ) + +fundAccount(f, addr1, coins...) 
+ req := banktypes.NewQueryAllBalancesRequest(addr1, nil, false) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.AllBalances, 357, false) +} + +func TestGRPCQuerySpendableBalances(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + + / Denoms must be unique, otherwise sdk.NewCoins will panic. + denoms := rapid.SliceOfNDistinct(rapid.StringMatching(denomRegex), 1, 10, rapid.ID[string]).Draw(rt, "denoms") + coins := make(sdk.Coins, 0, len(denoms)) + for _, denom := range denoms { + coin := sdk.NewCoin( + denom, + math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + + / NewCoins sorts the denoms + coins = sdk.NewCoins(append(coins, coin)...) +} + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, coins) + +assert.NilError(t, err) + req := banktypes.NewQuerySpendableBalancesRequest(addr, testdata.PaginationGenerator(rt, uint64(len(denoms))).Draw(rt, "pagination")) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SpendableBalances, 0, true) +}) + coins := sdk.NewCoins( + sdk.NewCoin("stake", math.NewInt(10)), + sdk.NewCoin("denom", math.NewInt(100)), + ) + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr1, coins) + +assert.NilError(t, err) + req := banktypes.NewQuerySpendableBalancesRequest(addr1, nil) + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SpendableBalances, 2032, false) +} + +func TestGRPCQueryTotalSupply(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +res, err := f.queryClient.TotalSupply(f.ctx, &banktypes.QueryTotalSupplyRequest{ +}) + +assert.NilError(t, err) + initialSupply := res.GetSupply() + +rapid.Check(t, func(rt *rapid.T) { + numCoins := rapid.IntRange(1, 3).Draw(rt, "num-count") + coins := make(sdk.Coins, 0, numCoins) + for i := 0; i < numCoins; i++ { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + 
math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + +coins = coins.Add(coin) +} + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, coins)) + +initialSupply = initialSupply.Add(coins...) + req := &banktypes.QueryTotalSupplyRequest{ + Pagination: testdata.PaginationGenerator(rt, uint64(len(initialSupply))).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.TotalSupply, 0, true) +}) + +f = initDeterministicFixture(t) / reset + coins := sdk.NewCoins( + sdk.NewCoin("foo", math.NewInt(10)), + sdk.NewCoin("bar", math.NewInt(100)), + ) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, coins)) + req := &banktypes.QueryTotalSupplyRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.TotalSupply, 150, false) +} + +func TestGRPCQueryTotalSupplyOf(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + coin := sdk.NewCoin( + rapid.StringMatching(denomRegex).Draw(rt, "denom"), + math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, sdk.NewCoins(coin))) + req := &banktypes.QuerySupplyOfRequest{ + Denom: coin.GetDenom(), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SupplyOf, 0, true) +}) + coin := sdk.NewCoin("bar", math.NewInt(100)) + +assert.NilError(t, f.bankKeeper.MintCoins(f.ctx, minttypes.ModuleName, sdk.NewCoins(coin))) + req := &banktypes.QuerySupplyOfRequest{ + Denom: coin.GetDenom(), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SupplyOf, 1021, false) +} + +func TestGRPCQueryParams(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + enabledStatus := banktypes.SendEnabled{ + Denom: rapid.StringMatching(denomRegex).Draw(rt, "denom"), + Enabled: rapid.Bool().Draw(rt, "status"), +} + params := banktypes.Params{ + SendEnabled: []*banktypes.SendEnabled{&enabledStatus}, + DefaultSendEnabled: rapid.Bool().Draw(rt, "send"), +} + +require.NoError(t, f.bankKeeper.SetParams(f.ctx, params)) + req := &banktypes.QueryParamsRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Params, 0, true) +}) + enabledStatus := banktypes.SendEnabled{ + Denom: "denom", + Enabled: true, +} + params := banktypes.Params{ + SendEnabled: []*banktypes.SendEnabled{&enabledStatus}, + DefaultSendEnabled: false, +} + +require.NoError(t, f.bankKeeper.SetParams(f.ctx, params)) + req := &banktypes.QueryParamsRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.Params, 1003, false) +} + +func createAndReturnMetadatas(t *rapid.T, count int) []banktypes.Metadata { + denomsMetadata := make([]banktypes.Metadata, 0, count) + for i := 0; i < count; i++ { + denom := rapid.StringMatching(denomRegex).Draw(t, "denom") + aliases := rapid.SliceOf(rapid.String()).Draw(t, "aliases") + / In the GRPC server code, empty arrays are returned as nil + if len(aliases) == 0 { + aliases = nil +} + metadata := banktypes.Metadata{ + Description: rapid.StringN(1, 100, 100).Draw(t, "desc"), + DenomUnits: []*banktypes.DenomUnit{ + { + Denom: denom, + Exponent: rapid.Uint32().Draw(t, "exponent"), + Aliases: aliases, +}, +}, + Base: denom, + Display: denom, + Name: rapid.String().Draw(t, "name"), + Symbol: rapid.String().Draw(t, "symbol"), + URI: rapid.String().Draw(t, "uri"), + URIHash: rapid.String().Draw(t, "uri-hash"), +} + +denomsMetadata = append(denomsMetadata, metadata) +} + +return denomsMetadata +} + +func TestGRPCDenomsMetadata(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + count := rapid.IntRange(1, 3).Draw(rt, "count") + denomsMetadata := createAndReturnMetadatas(rt, count) + +assert.Assert(t, len(denomsMetadata) == count) + for i := 0; i < count; i++ { + f.bankKeeper.SetDenomMetaData(f.ctx, denomsMetadata[i]) +} + req := 
&banktypes.QueryDenomsMetadataRequest{ + Pagination: testdata.PaginationGenerator(rt, uint64(count)).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomsMetadata, 0, true) +}) + +f = initDeterministicFixture(t) / reset + + f.bankKeeper.SetDenomMetaData(f.ctx, metadataAtom) + req := &banktypes.QueryDenomsMetadataRequest{ +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomsMetadata, 660, false) +} + +func TestGRPCDenomMetadata(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + denomMetadata := createAndReturnMetadatas(rt, 1) + +assert.Assert(t, len(denomMetadata) == 1) + +f.bankKeeper.SetDenomMetaData(f.ctx, denomMetadata[0]) + req := &banktypes.QueryDenomMetadataRequest{ + Denom: denomMetadata[0].Base, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomMetadata, 0, true) +}) + +f.bankKeeper.SetDenomMetaData(f.ctx, metadataAtom) + req := &banktypes.QueryDenomMetadataRequest{ + Denom: metadataAtom.Base, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomMetadata, 1300, false) +} + +func TestGRPCSendEnabled(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + allDenoms := []string{ +} + +rapid.Check(t, func(rt *rapid.T) { + count := rapid.IntRange(0, 10).Draw(rt, "count") + denoms := make([]string, 0, count) + for i := 0; i < count; i++ { + coin := banktypes.SendEnabled{ + Denom: rapid.StringMatching(denomRegex).Draw(rt, "denom"), + Enabled: rapid.Bool().Draw(rt, "enabled-status"), +} + +f.bankKeeper.SetSendEnabled(f.ctx, coin.Denom, coin.Enabled) + +denoms = append(denoms, coin.Denom) +} + +allDenoms = append(allDenoms, denoms...) 
+ req := &banktypes.QuerySendEnabledRequest{ + Denoms: denoms, + / Pagination is only taken into account when `denoms` is an empty array + Pagination: testdata.PaginationGenerator(rt, uint64(len(allDenoms))).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SendEnabled, 0, true) +}) + +coin1 := banktypes.SendEnabled{ + Denom: "falsecoin", + Enabled: false, +} + +coin2 := banktypes.SendEnabled{ + Denom: "truecoin", + Enabled: true, +} + +f.bankKeeper.SetSendEnabled(f.ctx, coin1.Denom, false) + +f.bankKeeper.SetSendEnabled(f.ctx, coin2.Denom, true) + req := &banktypes.QuerySendEnabledRequest{ + Denoms: []string{ + coin1.GetDenom(), coin2.GetDenom() +}, +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.SendEnabled, 4063, false) +} + +func TestGRPCDenomOwners(t *testing.T) { + t.Parallel() + f := initDeterministicFixture(t) + +rapid.Check(t, func(rt *rapid.T) { + denom := rapid.StringMatching(denomRegex).Draw(rt, "denom") + numAddr := rapid.IntRange(1, 10).Draw(rt, "number-address") + for i := 0; i < numAddr; i++ { + addr := testdata.AddressGenerator(rt).Draw(rt, "address") + coin := sdk.NewCoin( + denom, + math.NewInt(rapid.Int64Min(1).Draw(rt, "amount")), + ) + err := banktestutil.FundAccount(f.ctx, f.bankKeeper, addr, sdk.NewCoins(coin)) + +assert.NilError(t, err) +} + req := &banktypes.QueryDenomOwnersRequest{ + Denom: denom, + Pagination: testdata.PaginationGenerator(rt, uint64(numAddr)).Draw(rt, "pagination"), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomOwners, 0, true) +}) + denomOwners := []*banktypes.DenomOwner{ + { + Address: "cosmos1qg65a9q6k2sqq7l3ycp428sqqpmqcucgzze299", + Balance: coin1, +}, + { + Address: "cosmos1qglnsqgpq48l7qqzgs8qdshr6fh3gqq9ej3qut", + Balance: coin1, +}, +} + for i := 0; i < len(denomOwners); i++ { + addr, err := sdk.AccAddressFromBech32(denomOwners[i].Address) + +assert.NilError(t, err) + +err = banktestutil.FundAccount(f.ctx, f.bankKeeper, 
addr, sdk.NewCoins(coin1)) + +assert.NilError(t, err) +} + req := &banktypes.QueryDenomOwnersRequest{ + Denom: coin1.GetDenom(), +} + +testdata.DeterministicIterations(f.ctx, t, req, f.queryClient.DenomOwners, 2516, false) +} +``` + +## Simulations + +Simulations also use a minimal application, built with [`depinject`](/docs/sdk/v0.53/documentation/module-system/depinject): + + +You can also use the `configurator` to create an `AppConfig` [inline](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/slashing/app_test.go#L54-L62). There is no functional difference between the two approaches; use whichever you prefer. + + +The following is an example for `x/gov/` simulations: + +```go expandable +package simulation_test + +import ( + + "fmt" + "math/rand" + "testing" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/gogoproto/proto" + "github.com/stretchr/testify/require" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/configurator" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + _ "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/bank/testutil" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + _ "github.com/cosmos/cosmos-sdk/x/distribution" + dk "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + _ "github.com/cosmos/cosmos-sdk/x/gov" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + "github.com/cosmos/cosmos-sdk/x/gov/simulation" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + 
"github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +var ( + _ simtypes.WeightedProposalMsg = MockWeightedProposals{} + /nolint:staticcheck / keeping around for legacy testing + _ simtypes.WeightedProposalContent = MockWeightedProposals{} +) + +type MockWeightedProposals struct { + n int +} + +func (m MockWeightedProposals) AppParamsKey() string { + return fmt.Sprintf("AppParamsKey-%d", m.n) +} + +func (m MockWeightedProposals) DefaultWeight() int { + return m.n +} + +func (m MockWeightedProposals) MsgSimulatorFn() simtypes.MsgSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) sdk.Msg { + return nil +} +} + +/nolint:staticcheck / retaining legacy content to maintain gov functionality +func (m MockWeightedProposals) ContentSimulatorFn() simtypes.ContentSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) simtypes.Content { + return v1beta1.NewTextProposal( + fmt.Sprintf("title-%d: %s", m.n, simtypes.RandStringOfLength(r, 100)), + fmt.Sprintf("description-%d: %s", m.n, simtypes.RandStringOfLength(r, 4000)), + ) +} +} + +func mockWeightedProposalMsg(n int) []simtypes.WeightedProposalMsg { + wpc := make([]simtypes.WeightedProposalMsg, n) + for i := range n { + wpc[i] = MockWeightedProposals{i} +} + +return wpc +} + +/ nolint / keeping this legacy proposal for testing +func mockWeightedLegacyProposalContent(n int) []simtypes.WeightedProposalContent { + wpc := make([]simtypes.WeightedProposalContent, n) + for i := range n { + wpc[i] = MockWeightedProposals{i} +} + +return wpc +} + +/ TestWeightedOperations tests the weights of the operations. 
+func TestWeightedOperations(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + ctx.WithChainID("test-chain") + appParams := make(simtypes.AppParams) + weightesOps := simulation.WeightedOperations(appParams, suite.TxConfig, suite.AccountKeeper, + suite.BankKeeper, suite.GovKeeper, mockWeightedProposalMsg(3), mockWeightedLegacyProposalContent(1), + ) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accs := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + expected := []struct { + weight int + opMsgRoute string + opMsgName string +}{ + { + simulation.DefaultWeightMsgDeposit, types.ModuleName, simulation.TypeMsgDeposit +}, + { + simulation.DefaultWeightMsgVote, types.ModuleName, simulation.TypeMsgVote +}, + { + simulation.DefaultWeightMsgVoteWeighted, types.ModuleName, simulation.TypeMsgVoteWeighted +}, + { + simulation.DefaultWeightMsgCancelProposal, types.ModuleName, simulation.TypeMsgCancelProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {1, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {2, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, +} + +require.Equal(t, len(weightesOps), len(expected), "number of operations should be the same") + for i, w := range weightesOps { + operationMsg, _, err := w.Op()(r, app.BaseApp, ctx, accs, ctx.ChainID()) + +require.NoError(t, err) + + / the following checks are very much dependent from the ordering of the output given + / by WeightedOperations. 
if the ordering in WeightedOperations changes some tests + / will fail + require.Equal(t, expected[i].weight, w.Weight(), "weight should be the same") + +require.Equal(t, expected[i].opMsgRoute, operationMsg.Route, "route should be the same") + +require.Equal(t, expected[i].opMsgName, operationMsg.Name, "operation Msg name should be the same") +} +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + _, err := app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgSubmitProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.MsgSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "47841094stake", msg.InitialDeposit[0].String()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. 
+func TestSimulateMsgSubmitLegacyProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + _, err := app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + / execute operation + op := simulation.SimulateMsgSubmitLegacyProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.ContentSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +var msgLegacyContent v1.MsgExecLegacyContent + err = proto.Unmarshal(msg.Messages[0].Value, &msgLegacyContent) + +require.NoError(t, err) + +var textProposal v1beta1.TextProposal + err = proto.Unmarshal(msgLegacyContent.Content.Value, &textProposal) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1p8wcgrjr4pjju90xg6u9cgq55dxwq8j7u4x9a0", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "25166256stake", msg.InitialDeposit[0].String()) + +require.Equal(t, "title-3: ZBSpYuLyYggwexjxusrBqDOTtGTOWeLrQKjLxzIivHSlcxgdXhhuTSkuxKGLwQvuyNhYFmBZHeAerqyNEUzXPFGkqEGqiQWIXnku", + textProposal.GetTitle()) + +require.Equal(t, "description-3: 
NJWzHdBNpAXKJPHWQdrGYcAHSctgVlqwqHoLfHsXUdStwfefwzqLuKEhmMyYLdbZrcPgYqjNHxPexsruwEGStAneKbWkQDDIlCWBLSiAASNhZqNFlPtfqPJoxKsgMdzjWqLWdqKQuJqWPMvwPQWZUtVMOTMYKJbfdlZsjdsomuScvDmbDkgRualsxDvRJuCAmPOXitIbcyWsKGSdrEunFAOdmXnsuyFVgJqEjbklvmwrUlsxjRSfKZxGcpayDdgoFcnVSutxjRgOSFzPwidAjubMncNweqpbxhXGchpZUxuFDOtpnhNUycJICRYqsPhPSCjPTWZFLkstHWJxvdPEAyEIxXgLwbNOjrgzmaujiBABBIXvcXpLrbcEWNNQsbjvgJFgJkflpRohHUutvnaUqoopuKjTDaemDeSdqbnOzcfJpcTuAQtZoiLZOoAIlboFDAeGmSNwkvObPRvRWQgWkGkxwtPauYgdkmypLjbqhlHJIQTntgWjXwZdOyYEdQRRLfMSdnxqppqUofqLbLQDUjwKVKfZJUJQPsWIPwIVaSTrmKskoAhvmZyJgeRpkaTfGgrJzAigcxtfshmiDCFkuiluqtMOkidknnTBtumyJYlIsWLnCQclqdVmikUoMOPdPWwYbJxXyqUVicNxFxyqJTenNblyyKSdlCbiXxUiYUiMwXZASYfvMDPFgxniSjWaZTjHkqlJvtBsXqwPpyVxnJVGFWhfSxgOcduoxkiopJvFjMmFabrGYeVtTXLhxVUEiGwYUvndjFGzDVntUvibiyZhfMQdMhgsiuysLMiePBNXifRLMsSmXPkwlPloUbJveCvUlaalhZHuvdkCnkSHbMbmOnrfEGPwQiACiPlnihiaOdbjPqPiTXaHDoJXjSlZmltGqNHHNrcKdlFSCdmVOuvDcBLdSklyGJmcLTbSFtALdGlPkqqecJrpLCXNPWefoTJNgEJlyMEPneVaxxduAAEqQpHWZodWyRkDAxzyMnFMcjSVqeRXLqsNyNtQBbuRvunZflWSbbvXXdkyLikYqutQhLPONXbvhcQZJPSWnOulqQaXmbfFxAkqfYeseSHOQidHwbcsOaMnSrrmGjjRmEMQNuknupMxJiIeVjmgZvbmjPIQTEhQFULQLBMPrxcFPvBinaOPYWGvYGRKxLZdwamfRQQFngcdSlvwjfaPbURasIsGJVHtcEAxnIIrhSriiXLOlbEBLXFElXJFGxHJczRBIxAuPKtBisjKBwfzZFagdNmjdwIRvwzLkFKWRTDPxJCmpzHUcrPiiXXHnOIlqNVoGSXZewdnCRhuxeYGPVTfrNTQNOxZmxInOazUYNTNDgzsxlgiVEHPKMfbesvPHUqpNkUqbzeuzfdrsuLDpKHMUbBMKczKKWOdYoIXoPYtEjfOnlQLoGnbQUCuERdEFaptwnsHzTJDsuZkKtzMpFaZobynZdzNydEeJJHDYaQcwUxcqvwfWwNUsCiLvkZQiSfzAHftYgAmVsXgtmcYgTqJIawstRYJrZdSxlfRiqTufgEQVambeZZmaAyRQbcmdjVUZZCgqDrSeltJGXPMgZnGDZqISrGDOClxXCxMjmKqEPwKHoOfOeyGmqWqihqjINXLqnyTesZePQRqaWDQNqpLgNrAUKulklmckTijUltQKuWQDwpLmDyxLppPVMwsmBIpOwQttYFMjgJQZLYFPmxWFLIeZihkRNnkzoypBICIxgEuYsVWGIGRbbxqVasYnstWomJnHwmtOhAFSpttRYYzBmyEtZXiCthvKvWszTXDbiJbGXMcrYpKAgvUVFtdKUfvdMfhAryctklUCEdjetjuGNfJjajZtvzdYaqInKtFPPLYmRaXPdQzxdSQfmZDEVHlHGEGNSPRFJuIfKLLfUmnHxHnRjmzQPNlqrXgifUdzAGKVabYqvcDeYoTYgPsBUqehrBhmQUgTvDnsdpuhUoxskDdppTsYMcnDIPSwKIqhXDCIxOuXrywahvV
avvHkPuaenjLmEbMgrkrQLHEAwrhHkPRNvonNQKqprqOFVZKAtpRSpvQUxMoXCMZLSSbnLEFsjVfANdQNQVwTmGxqVjVqRuxREAhuaDrFgEZpYKhwWPEKBevBfsOIcaZKyykQafzmGPLRAKDtTcJxJVgiiuUkmyMYuDUNEUhBEdoBLJnamtLmMJQgmLiUELIhLpiEvpOXOvXCPUeldLFqkKOwfacqIaRcnnZvERKRMCKUkMABbDHytQqQblrvoxOZkwzosQfDKGtIdfcXRJNqlBNwOCWoQBcEWyqrMlYZIAXYJmLfnjoJepgSFvrgajaBAIksoyeHqgqbGvpAstMIGmIhRYGGNPRIfOQKsGoKgxtsidhTaAePRCBFqZgPDWCIkqOJezGVkjfYUCZTlInbxBXwUAVRsxHTQtJFnnpmMvXDYCVlEmnZBKhmmxQOIQzxFWpJQkQoSAYzTEiDWEOsVLNrbfzeHFRyeYATakQQWmFDLPbVMCJcWjFGJjfqCoVzlbNNEsqxdSmNPjTjHYOkuEMFLkXYGaoJlraLqayMeCsTjWNRDPBywBJLAPVkGQqTwApVVwYAetlwSbzsdHWsTwSIcctkyKDuRWYDQikRqsKTMJchrliONJeaZIzwPQrNbTwxsGdwuduvibtYndRwpdsvyCktRHFalvUuEKMqXbItfGcNGWsGzubdPMYayOUOINjpcFBeESdwpdlTYmrPsLsVDhpTzoMegKrytNVZkfJRPuDCUXxSlSthOohmsuxmIZUedzxKmowKOdXTMcEtdpHaPWgIsIjrViKrQOCONlSuazmLuCUjLltOGXeNgJKedTVrrVCpWYWHyVrdXpKgNaMJVjbXxnVMSChdWKuZdqpisvrkBJPoURDYxWOtpjzZoOpWzyUuYNhCzRoHsMjmmWDcXzQiHIyjwdhPNwiPqFxeUfMVFQGImhykFgMIlQEoZCaRoqSBXTSWAeDumdbsOGtATwEdZlLfoBKiTvodQBGOEcuATWXfiinSjPmJKcWgQrTVYVrwlyMWhxqNbCMpIQNoSMGTiWfPTCezUjYcdWppnsYJihLQCqbNLRGgqrwHuIvsazapTpoPZIyZyeeSueJuTIhpHMEJfJpScshJubJGfkusuVBgfTWQoywSSliQQSfbvaHKiLnyjdSbpMkdBgXepoSsHnCQaYuHQqZsoEOmJCiuQUpJkmfyfbIShzlZpHFmLCsbknEAkKXKfRTRnuwdBeuOGgFbJLbDksHVapaRayWzwoYBEpmrlAxrUxYMUekKbpjPNfjUCjhbdMAnJmYQVZBQZkFVweHDAlaqJjRqoQPoOMLhyvYCzqEuQsAFoxWrzRnTVjStPadhsESlERnKhpEPsfDxNvxqcOyIulaCkmPdambLHvGhTZzysvqFauEgkFRItPfvisehFmoBhQqmkfbHVsgfHXDPJVyhwPllQpuYLRYvGodxKjkarnSNgsXoKEMlaSKxKdcVgvOkuLcfLFfdtXGTclqfPOfeoVLbqcjcXCUEBgAGplrkgsmIEhWRZLlGPGCwKWRaCKMkBHTAcypUrYjWwCLtOPVygMwMANGoQwFnCqFrUGMCRZUGJKTZIGPyldsifauoMnJPLTcDHmilcmahlqOELaAUYDBuzsVywnDQfwRLGIWozYaOAilMBcObErwgTDNGWnwQMUgFFSKtPDMEoEQCTKVREqrXZSGLqwTMcxHfWotDllNkIJPMbXzjDVjPOOjCFuIvTyhXKLyhUScOXvYthRXpPfKwMhptXaxIxgqBoUqzrWbaoLTVpQoottZyPFfNOoMioXHRuFwMRYUiKvcWPkrayyTLOCFJlAyslDameIuqVAuxErqFPEWIScKpBORIuZqoXlZuTvAjEdlEWDODFRregDTqGNoFBIHxvimmIZwLfFyKUfEWAnNBdtdzDmTPXtpHRGdIbuucfTjOygZsTxPjfweXhSUkMhPjMaxKlMIJMOXcnQfyzeOcbWwNbeH
", + textProposal.GetDescription()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgCancelProposal tests the normal scenario of a valid message of type TypeMsgCancelProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgCancelProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + / setup a proposal + proposer := accounts[0].Address + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "title", "summary", proposer, false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.SetProposal(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgCancelProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgCancelProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, proposer.String(), msg.Proposer) + +require.Equal(t, simulation.TypeMsgCancelProposal, 
sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgDeposit tests the normal scenario of a valid message of type TypeMsgDeposit. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgDeposit(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.SetProposal(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgDeposit(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgDeposit + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Depositor) + +require.NotEqual(t, len(msg.Amount), 0) + +require.Equal(t, "560969stake", msg.Amount[0].String()) + +require.Equal(t, 
simulation.TypeMsgDeposit, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgVote tests the normal scenario of a valid message of type TypeMsgVote. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVote(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.ActivateVotingPeriod(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgVote(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVote + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.Equal(t, v1.OptionYes, msg.Option) + +require.Equal(t, simulation.TypeMsgVote, sdk.MsgTypeURL(&msg)) +} + +/ 
TestSimulateMsgVoteWeighted tests the normal scenario of a valid message of type TypeMsgVoteWeighted. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVoteWeighted(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "test", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.ActivateVotingPeriod(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgVoteWeighted(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVoteWeighted + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.True(t, len(msg.Options) >= 1) + +require.Equal(t, simulation.TypeMsgVoteWeighted, sdk.MsgTypeURL(&msg)) +} + +type suite struct { + TxConfig 
client.TxConfig + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + GovKeeper *keeper.Keeper + StakingKeeper *stakingkeeper.Keeper + DistributionKeeper dk.Keeper + App *runtime.App +} + +/ returns context and an app with updated mint keeper +func createTestSuite(t *testing.T, isCheckTx bool) (suite, sdk.Context) { + t.Helper() + res := suite{ +} + +app, err := simtestutil.Setup( + depinject.Configs( + configurator.NewAppConfig( + configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.BankModule(), + configurator.StakingModule(), + configurator.ConsensusModule(), + configurator.DistributionModule(), + configurator.GovModule(), + ), + depinject.Supply(log.NewNopLogger()), + ), + &res.TxConfig, &res.AccountKeeper, &res.BankKeeper, &res.GovKeeper, &res.StakingKeeper, &res.DistributionKeeper) + +require.NoError(t, err) + ctx := app.NewContext(isCheckTx) + +res.App = app + return res, ctx +} + +func getTestingAccounts( + t *testing.T, + r *rand.Rand, + accountKeeper authkeeper.AccountKeeper, + bankKeeper bankkeeper.Keeper, + stakingKeeper *stakingkeeper.Keeper, + ctx sdk.Context, + n int, +) []simtypes.Account { + t.Helper() + accounts := simtypes.RandomAccounts(r, n) + initAmt := stakingKeeper.TokensFromConsensusPower(ctx, 200) + initCoins := sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, initAmt)) + + / add coins to the accounts + for _, account := range accounts { + acc := accountKeeper.NewAccountWithAddress(ctx, account.Address) + +accountKeeper.SetAccount(ctx, acc) + +require.NoError(t, testutil.FundAccount(ctx, bankKeeper, account.Address, initCoins)) +} + +return accounts +} +``` + +```go expandable +package simulation_test + +import ( + + "fmt" + "math/rand" + "testing" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + "github.com/cosmos/gogoproto/proto" + "github.com/stretchr/testify/require" + "cosmossdk.io/depinject" + "cosmossdk.io/log" + "github.com/cosmos/cosmos-sdk/client" + 
"github.com/cosmos/cosmos-sdk/runtime" + "github.com/cosmos/cosmos-sdk/testutil/configurator" + simtestutil "github.com/cosmos/cosmos-sdk/testutil/sims" + sdk "github.com/cosmos/cosmos-sdk/types" + simtypes "github.com/cosmos/cosmos-sdk/types/simulation" + _ "github.com/cosmos/cosmos-sdk/x/auth" + authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper" + _ "github.com/cosmos/cosmos-sdk/x/auth/tx/config" + _ "github.com/cosmos/cosmos-sdk/x/bank" + bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper" + "github.com/cosmos/cosmos-sdk/x/bank/testutil" + _ "github.com/cosmos/cosmos-sdk/x/consensus" + _ "github.com/cosmos/cosmos-sdk/x/distribution" + dk "github.com/cosmos/cosmos-sdk/x/distribution/keeper" + _ "github.com/cosmos/cosmos-sdk/x/gov" + "github.com/cosmos/cosmos-sdk/x/gov/keeper" + "github.com/cosmos/cosmos-sdk/x/gov/simulation" + "github.com/cosmos/cosmos-sdk/x/gov/types" + v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1" + "github.com/cosmos/cosmos-sdk/x/gov/types/v1beta1" + _ "github.com/cosmos/cosmos-sdk/x/params" + _ "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" +) + +var ( + _ simtypes.WeightedProposalMsg = MockWeightedProposals{} + /nolint:staticcheck / keeping around for legacy testing + _ simtypes.WeightedProposalContent = MockWeightedProposals{} +) + +type MockWeightedProposals struct { + n int +} + +func (m MockWeightedProposals) AppParamsKey() string { + return fmt.Sprintf("AppParamsKey-%d", m.n) +} + +func (m MockWeightedProposals) DefaultWeight() int { + return m.n +} + +func (m MockWeightedProposals) MsgSimulatorFn() simtypes.MsgSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) sdk.Msg { + return nil +} +} + +/nolint:staticcheck / retaining legacy content to maintain gov functionality +func (m MockWeightedProposals) ContentSimulatorFn() simtypes.ContentSimulatorFn { + return func(r *rand.Rand, _ sdk.Context, _ []simtypes.Account) simtypes.Content { + return v1beta1.NewTextProposal( + fmt.Sprintf("title-%d: %s", m.n, simtypes.RandStringOfLength(r, 100)), + fmt.Sprintf("description-%d: %s", m.n, simtypes.RandStringOfLength(r, 4000)), + ) +} +} + +func mockWeightedProposalMsg(n int) []simtypes.WeightedProposalMsg { + wpc := make([]simtypes.WeightedProposalMsg, n) + for i := range n { + wpc[i] = MockWeightedProposals{i} +} + +return wpc +} + +/ nolint / keeping this legacy proposal for testing +func mockWeightedLegacyProposalContent(n int) []simtypes.WeightedProposalContent { + wpc := make([]simtypes.WeightedProposalContent, n) + for i := range n { + wpc[i] = MockWeightedProposals{i} +} + +return wpc +} + +/ TestWeightedOperations tests the weights of the operations. +func TestWeightedOperations(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + ctx.WithChainID("test-chain") + appParams := make(simtypes.AppParams) + weightesOps := simulation.WeightedOperations(appParams, suite.TxConfig, suite.AccountKeeper, + suite.BankKeeper, suite.GovKeeper, mockWeightedProposalMsg(3), mockWeightedLegacyProposalContent(1), + ) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accs := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + expected := []struct { + weight int + opMsgRoute string + opMsgName string +}{ + { + simulation.DefaultWeightMsgDeposit, types.ModuleName, simulation.TypeMsgDeposit +}, + { + simulation.DefaultWeightMsgVote, types.ModuleName, simulation.TypeMsgVote +}, + { + simulation.DefaultWeightMsgVoteWeighted, types.ModuleName, simulation.TypeMsgVoteWeighted +}, + { + simulation.DefaultWeightMsgCancelProposal, types.ModuleName, simulation.TypeMsgCancelProposal +}, + {0, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {1, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {2, types.ModuleName, simulation.TypeMsgSubmitProposal +}, + {0, 
types.ModuleName, simulation.TypeMsgSubmitProposal +}, +} + +require.Equal(t, len(weightesOps), len(expected), "number of operations should be the same") + for i, w := range weightesOps { + operationMsg, _, err := w.Op()(r, app.BaseApp, ctx, accs, ctx.ChainID()) + +require.NoError(t, err) + + / the following checks are very much dependent from the ordering of the output given + / by WeightedOperations. if the ordering in WeightedOperations changes some tests + / will fail + require.Equal(t, expected[i].weight, w.Weight(), "weight should be the same") + +require.Equal(t, expected[i].opMsgRoute, operationMsg.Route, "route should be the same") + +require.Equal(t, expected[i].opMsgName, operationMsg.Name, "operation Msg name should be the same") +} +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + _, err := app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgSubmitProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.MsgSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "47841094stake", 
msg.InitialDeposit[0].String()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgSubmitProposal tests the normal scenario of a valid message of type TypeMsgSubmitProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgSubmitLegacyProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + _, err := app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + / execute operation + op := simulation.SimulateMsgSubmitLegacyProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper, MockWeightedProposals{3 +}.ContentSimulatorFn()) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgSubmitProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +var msgLegacyContent v1.MsgExecLegacyContent + err = proto.Unmarshal(msg.Messages[0].Value, &msgLegacyContent) + +require.NoError(t, err) + +var textProposal v1beta1.TextProposal + err = proto.Unmarshal(msgLegacyContent.Content.Value, &textProposal) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, "cosmos1p8wcgrjr4pjju90xg6u9cgq55dxwq8j7u4x9a0", msg.Proposer) + +require.NotEqual(t, len(msg.InitialDeposit), 0) + +require.Equal(t, "25166256stake", msg.InitialDeposit[0].String()) + +require.Equal(t, "title-3: ZBSpYuLyYggwexjxusrBqDOTtGTOWeLrQKjLxzIivHSlcxgdXhhuTSkuxKGLwQvuyNhYFmBZHeAerqyNEUzXPFGkqEGqiQWIXnku", + textProposal.GetTitle()) + +require.Equal(t, "description-3: 
NJWzHdBNpAXKJPHWQdrGYcAHSctgVlqwqHoLfHsXUdStwfefwzqLuKEhmMyYLdbZrcPgYqjNHxPexsruwEGStAneKbWkQDDIlCWBLSiAASNhZqNFlPtfqPJoxKsgMdzjWqLWdqKQuJqWPMvwPQWZUtVMOTMYKJbfdlZsjdsomuScvDmbDkgRualsxDvRJuCAmPOXitIbcyWsKGSdrEunFAOdmXnsuyFVgJqEjbklvmwrUlsxjRSfKZxGcpayDdgoFcnVSutxjRgOSFzPwidAjubMncNweqpbxhXGchpZUxuFDOtpnhNUycJICRYqsPhPSCjPTWZFLkstHWJxvdPEAyEIxXgLwbNOjrgzmaujiBABBIXvcXpLrbcEWNNQsbjvgJFgJkflpRohHUutvnaUqoopuKjTDaemDeSdqbnOzcfJpcTuAQtZoiLZOoAIlboFDAeGmSNwkvObPRvRWQgWkGkxwtPauYgdkmypLjbqhlHJIQTntgWjXwZdOyYEdQRRLfMSdnxqppqUofqLbLQDUjwKVKfZJUJQPsWIPwIVaSTrmKskoAhvmZyJgeRpkaTfGgrJzAigcxtfshmiDCFkuiluqtMOkidknnTBtumyJYlIsWLnCQclqdVmikUoMOPdPWwYbJxXyqUVicNxFxyqJTenNblyyKSdlCbiXxUiYUiMwXZASYfvMDPFgxniSjWaZTjHkqlJvtBsXqwPpyVxnJVGFWhfSxgOcduoxkiopJvFjMmFabrGYeVtTXLhxVUEiGwYUvndjFGzDVntUvibiyZhfMQdMhgsiuysLMiePBNXifRLMsSmXPkwlPloUbJveCvUlaalhZHuvdkCnkSHbMbmOnrfEGPwQiACiPlnihiaOdbjPqPiTXaHDoJXjSlZmltGqNHHNrcKdlFSCdmVOuvDcBLdSklyGJmcLTbSFtALdGlPkqqecJrpLCXNPWefoTJNgEJlyMEPneVaxxduAAEqQpHWZodWyRkDAxzyMnFMcjSVqeRXLqsNyNtQBbuRvunZflWSbbvXXdkyLikYqutQhLPONXbvhcQZJPSWnOulqQaXmbfFxAkqfYeseSHOQidHwbcsOaMnSrrmGjjRmEMQNuknupMxJiIeVjmgZvbmjPIQTEhQFULQLBMPrxcFPvBinaOPYWGvYGRKxLZdwamfRQQFngcdSlvwjfaPbURasIsGJVHtcEAxnIIrhSriiXLOlbEBLXFElXJFGxHJczRBIxAuPKtBisjKBwfzZFagdNmjdwIRvwzLkFKWRTDPxJCmpzHUcrPiiXXHnOIlqNVoGSXZewdnCRhuxeYGPVTfrNTQNOxZmxInOazUYNTNDgzsxlgiVEHPKMfbesvPHUqpNkUqbzeuzfdrsuLDpKHMUbBMKczKKWOdYoIXoPYtEjfOnlQLoGnbQUCuERdEFaptwnsHzTJDsuZkKtzMpFaZobynZdzNydEeJJHDYaQcwUxcqvwfWwNUsCiLvkZQiSfzAHftYgAmVsXgtmcYgTqJIawstRYJrZdSxlfRiqTufgEQVambeZZmaAyRQbcmdjVUZZCgqDrSeltJGXPMgZnGDZqISrGDOClxXCxMjmKqEPwKHoOfOeyGmqWqihqjINXLqnyTesZePQRqaWDQNqpLgNrAUKulklmckTijUltQKuWQDwpLmDyxLppPVMwsmBIpOwQttYFMjgJQZLYFPmxWFLIeZihkRNnkzoypBICIxgEuYsVWGIGRbbxqVasYnstWomJnHwmtOhAFSpttRYYzBmyEtZXiCthvKvWszTXDbiJbGXMcrYpKAgvUVFtdKUfvdMfhAryctklUCEdjetjuGNfJjajZtvzdYaqInKtFPPLYmRaXPdQzxdSQfmZDEVHlHGEGNSPRFJuIfKLLfUmnHxHnRjmzQPNlqrXgifUdzAGKVabYqvcDeYoTYgPsBUqehrBhmQUgTvDnsdpuhUoxskDdppTsYMcnDIPSwKIqhXDCIxOuXrywahvV
avvHkPuaenjLmEbMgrkrQLHEAwrhHkPRNvonNQKqprqOFVZKAtpRSpvQUxMoXCMZLSSbnLEFsjVfANdQNQVwTmGxqVjVqRuxREAhuaDrFgEZpYKhwWPEKBevBfsOIcaZKyykQafzmGPLRAKDtTcJxJVgiiuUkmyMYuDUNEUhBEdoBLJnamtLmMJQgmLiUELIhLpiEvpOXOvXCPUeldLFqkKOwfacqIaRcnnZvERKRMCKUkMABbDHytQqQblrvoxOZkwzosQfDKGtIdfcXRJNqlBNwOCWoQBcEWyqrMlYZIAXYJmLfnjoJepgSFvrgajaBAIksoyeHqgqbGvpAstMIGmIhRYGGNPRIfOQKsGoKgxtsidhTaAePRCBFqZgPDWCIkqOJezGVkjfYUCZTlInbxBXwUAVRsxHTQtJFnnpmMvXDYCVlEmnZBKhmmxQOIQzxFWpJQkQoSAYzTEiDWEOsVLNrbfzeHFRyeYATakQQWmFDLPbVMCJcWjFGJjfqCoVzlbNNEsqxdSmNPjTjHYOkuEMFLkXYGaoJlraLqayMeCsTjWNRDPBywBJLAPVkGQqTwApVVwYAetlwSbzsdHWsTwSIcctkyKDuRWYDQikRqsKTMJchrliONJeaZIzwPQrNbTwxsGdwuduvibtYndRwpdsvyCktRHFalvUuEKMqXbItfGcNGWsGzubdPMYayOUOINjpcFBeESdwpdlTYmrPsLsVDhpTzoMegKrytNVZkfJRPuDCUXxSlSthOohmsuxmIZUedzxKmowKOdXTMcEtdpHaPWgIsIjrViKrQOCONlSuazmLuCUjLltOGXeNgJKedTVrrVCpWYWHyVrdXpKgNaMJVjbXxnVMSChdWKuZdqpisvrkBJPoURDYxWOtpjzZoOpWzyUuYNhCzRoHsMjmmWDcXzQiHIyjwdhPNwiPqFxeUfMVFQGImhykFgMIlQEoZCaRoqSBXTSWAeDumdbsOGtATwEdZlLfoBKiTvodQBGOEcuATWXfiinSjPmJKcWgQrTVYVrwlyMWhxqNbCMpIQNoSMGTiWfPTCezUjYcdWppnsYJihLQCqbNLRGgqrwHuIvsazapTpoPZIyZyeeSueJuTIhpHMEJfJpScshJubJGfkusuVBgfTWQoywSSliQQSfbvaHKiLnyjdSbpMkdBgXepoSsHnCQaYuHQqZsoEOmJCiuQUpJkmfyfbIShzlZpHFmLCsbknEAkKXKfRTRnuwdBeuOGgFbJLbDksHVapaRayWzwoYBEpmrlAxrUxYMUekKbpjPNfjUCjhbdMAnJmYQVZBQZkFVweHDAlaqJjRqoQPoOMLhyvYCzqEuQsAFoxWrzRnTVjStPadhsESlERnKhpEPsfDxNvxqcOyIulaCkmPdambLHvGhTZzysvqFauEgkFRItPfvisehFmoBhQqmkfbHVsgfHXDPJVyhwPllQpuYLRYvGodxKjkarnSNgsXoKEMlaSKxKdcVgvOkuLcfLFfdtXGTclqfPOfeoVLbqcjcXCUEBgAGplrkgsmIEhWRZLlGPGCwKWRaCKMkBHTAcypUrYjWwCLtOPVygMwMANGoQwFnCqFrUGMCRZUGJKTZIGPyldsifauoMnJPLTcDHmilcmahlqOELaAUYDBuzsVywnDQfwRLGIWozYaOAilMBcObErwgTDNGWnwQMUgFFSKtPDMEoEQCTKVREqrXZSGLqwTMcxHfWotDllNkIJPMbXzjDVjPOOjCFuIvTyhXKLyhUScOXvYthRXpPfKwMhptXaxIxgqBoUqzrWbaoLTVpQoottZyPFfNOoMioXHRuFwMRYUiKvcWPkrayyTLOCFJlAyslDameIuqVAuxErqFPEWIScKpBORIuZqoXlZuTvAjEdlEWDODFRregDTqGNoFBIHxvimmIZwLfFyKUfEWAnNBdtdzDmTPXtpHRGdIbuucfTjOygZsTxPjfweXhSUkMhPjMaxKlMIJMOXcnQfyzeOcbWwNbeH
", + textProposal.GetDescription()) + +require.Equal(t, simulation.TypeMsgSubmitProposal, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgCancelProposal tests the normal scenario of a valid message of type TypeMsgCancelProposal. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgCancelProposal(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + / setup a proposal + proposer := accounts[0].Address + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "title", "summary", proposer, false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.SetProposal(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgCancelProposal(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgCancelProposal + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, proposer.String(), msg.Proposer) + +require.Equal(t, simulation.TypeMsgCancelProposal, 
sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgDeposit tests the normal scenario of a valid message of type TypeMsgDeposit. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgDeposit(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + content := v1beta1.NewTextProposal("Test", "description") + +contentMsg, err := v1.NewLegacyContent(content, suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String()) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.SetProposal(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgDeposit(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgDeposit + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Depositor) + +require.NotEqual(t, len(msg.Amount), 0) + +require.Equal(t, "560969stake", msg.Amount[0].String()) + +require.Equal(t, 
simulation.TypeMsgDeposit, sdk.MsgTypeURL(&msg)) +} + +/ TestSimulateMsgVote tests the normal scenario of a valid message of type TypeMsgVote. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVote(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "description", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.ActivateVotingPeriod(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgVote(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVote + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.Equal(t, v1.OptionYes, msg.Option) + +require.Equal(t, simulation.TypeMsgVote, sdk.MsgTypeURL(&msg)) +} + +/ 
TestSimulateMsgVoteWeighted tests the normal scenario of a valid message of type TypeMsgVoteWeighted. +/ Abnormal scenarios, where errors occur, are not tested here. +func TestSimulateMsgVoteWeighted(t *testing.T) { + suite, ctx := createTestSuite(t, false) + app := suite.App + blockTime := time.Now().UTC() + +ctx = ctx.WithBlockTime(blockTime) + + / setup 3 accounts + s := rand.NewSource(1) + r := rand.New(s) + accounts := getTestingAccounts(t, r, suite.AccountKeeper, suite.BankKeeper, suite.StakingKeeper, ctx, 3) + + / setup a proposal + govAcc := suite.GovKeeper.GetGovernanceAccount(ctx).GetAddress().String() + +contentMsg, err := v1.NewLegacyContent(v1beta1.NewTextProposal("Test", "description"), govAcc) + +require.NoError(t, err) + submitTime := ctx.BlockHeader().Time + params, _ := suite.GovKeeper.Params.Get(ctx) + depositPeriod := params.MaxDepositPeriod + + proposal, err := v1.NewProposal([]sdk.Msg{ + contentMsg +}, 1, submitTime, submitTime.Add(*depositPeriod), "", "text proposal", "test", sdk.AccAddress("cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r"), false) + +require.NoError(t, err) + +require.NoError(t, suite.GovKeeper.ActivateVotingPeriod(ctx, proposal)) + + _, err = app.FinalizeBlock(&abci.RequestFinalizeBlock{ + Height: app.LastBlockHeight() + 1, + Hash: app.LastCommitID().Hash, +}) + +require.NoError(t, err) + + / execute operation + op := simulation.SimulateMsgVoteWeighted(suite.TxConfig, suite.AccountKeeper, suite.BankKeeper, suite.GovKeeper) + +operationMsg, _, err := op(r, app.BaseApp, ctx, accounts, "") + +require.NoError(t, err) + +var msg v1.MsgVoteWeighted + err = proto.Unmarshal(operationMsg.Msg, &msg) + +require.NoError(t, err) + +require.True(t, operationMsg.OK) + +require.Equal(t, uint64(1), msg.ProposalId) + +require.Equal(t, "cosmos1ghekyjucln7y67ntx7cf27m9dpuxxemn4c8g4r", msg.Voter) + +require.True(t, len(msg.Options) >= 1) + +require.Equal(t, simulation.TypeMsgVoteWeighted, sdk.MsgTypeURL(&msg)) +} + +type suite struct { + TxConfig 
client.TxConfig + AccountKeeper authkeeper.AccountKeeper + BankKeeper bankkeeper.Keeper + GovKeeper *keeper.Keeper + StakingKeeper *stakingkeeper.Keeper + DistributionKeeper dk.Keeper + App *runtime.App +} + +/ returns context and an app with updated mint keeper +func createTestSuite(t *testing.T, isCheckTx bool) (suite, sdk.Context) { + t.Helper() + res := suite{ +} + +app, err := simtestutil.Setup( + depinject.Configs( + configurator.NewAppConfig( + configurator.AuthModule(), + configurator.TxModule(), + configurator.ParamsModule(), + configurator.BankModule(), + configurator.StakingModule(), + configurator.ConsensusModule(), + configurator.DistributionModule(), + configurator.GovModule(), + ), + depinject.Supply(log.NewNopLogger()), + ), + &res.TxConfig, &res.AccountKeeper, &res.BankKeeper, &res.GovKeeper, &res.StakingKeeper, &res.DistributionKeeper) + +require.NoError(t, err) + ctx := app.NewContext(isCheckTx) + +res.App = app + return res, ctx +} + +func getTestingAccounts( + t *testing.T, + r *rand.Rand, + accountKeeper authkeeper.AccountKeeper, + bankKeeper bankkeeper.Keeper, + stakingKeeper *stakingkeeper.Keeper, + ctx sdk.Context, + n int, +) []simtypes.Account { + t.Helper() + accounts := simtypes.RandomAccounts(r, n) + initAmt := stakingKeeper.TokensFromConsensusPower(ctx, 200) + initCoins := sdk.NewCoins(sdk.NewCoin(sdk.DefaultBondDenom, initAmt)) + + / add coins to the accounts + for _, account := range accounts { + acc := accountKeeper.NewAccountWithAddress(ctx, account.Address) + +accountKeeper.SetAccount(ctx, acc) + +require.NoError(t, testutil.FundAccount(ctx, bankKeeper, account.Address, initCoins)) +} + +return accounts +} +``` + +## End-to-end Tests + +End-to-end tests are at the top of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html). +They must test the whole application flow, from the user perspective (for instance, CLI tests). 
They are located under [`/tests/e2e`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e). + +{/* @julienrbrt: makes more sense to use an app wired app to have 0 simapp dependencies */} +For this, the SDK uses `simapp`, but you should use your own application (`appd`). +Here are some examples: + +* SDK E2E tests: [Link](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e). +* Cosmos Hub E2E tests: [Link](https://github.com/cosmos/gaia/tree/main/tests/e2e). +* Osmosis E2E tests: [Link](https://github.com/osmosis-labs/osmosis/tree/main/tests/e2e). + + +**warning** +The SDK is in the process of creating its E2E tests, as defined in [ADR-59](https://docs.cosmos.network/main/architecture/adr-059-test-scopes.html). This page will eventually be updated with better examples. + + +## Learn More + +Learn more about testing scope in [ADR-59](https://docs.cosmos.network/main/architecture/adr-059-test-scopes.html). diff --git a/docs/sdk/v0.53/documentation/operations/txs.mdx b/docs/sdk/v0.53/documentation/operations/txs.mdx new file mode 100644 index 00000000..6fe07300 --- /dev/null +++ b/docs/sdk/v0.53/documentation/operations/txs.mdx @@ -0,0 +1,556 @@ +--- +title: "Generating, Signing and Broadcasting Transactions" +--- + +## Synopsis + +This document describes how to generate an (unsigned) transaction, sign it (with one or multiple keys), and broadcast it to the network. + +## Using the CLI + +The easiest way to send transactions is using the CLI, as we have seen in the previous page when [interacting with a node](/docs/sdk/v0.53/documentation/operations/interact-node#using-the-cli). For example, running the following command + +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --keyring-backend test +``` + +will run the following steps: + +- generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console.
+- ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account. +- fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because we have [set up the CLI's keyring](/docs/sdk/v0.53/documentation/operations/keyring) in a previous step. +- sign the generated transaction with the keyring's account. +- broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint. + +The CLI bundles all the necessary steps into a streamlined user experience. However, it's possible to run all the steps individually too. + +### Generating a Transaction + +Generating a transaction can simply be done by appending the `--generate-only` flag to any `tx` command, e.g.: + +```bash +simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --generate-only +``` + +This will output the unsigned transaction as JSON in the console. We can also save the unsigned transaction to a file (to be passed around between signers more easily) by appending `> unsigned_tx.json` to the above command. + +### Signing a Transaction + +Signing a transaction using the CLI requires the unsigned transaction to be saved in a file. Let's assume the unsigned transaction is in a file called `unsigned_tx.json` in the current directory (see previous paragraph on how to do that). Then, simply run the following command: + +```bash +simd tx sign unsigned_tx.json --chain-id my-test-chain --keyring-backend test --from $MY_VALIDATOR_ADDRESS +``` + +This command will decode the unsigned transaction and sign it with `SIGN_MODE_DIRECT` using `$MY_VALIDATOR_ADDRESS`'s key, which we already set up in the keyring. The signed transaction will be output as JSON to the console, and, as above, we can save it to a file by appending `--output-document signed_tx.json`.
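For orientation, the unsigned transaction written by `--generate-only` has roughly the following shape (the addresses and amounts below are placeholders, and exact fields vary by SDK version). Signing then populates `auth_info.signer_infos` and `signatures`, which start out empty:

```json
{
  "body": {
    "messages": [
      {
        "@type": "/cosmos.bank.v1beta1.MsgSend",
        "from_address": "cosmos1...",
        "to_address": "cosmos1...",
        "amount": [{ "denom": "stake", "amount": "1000" }]
      }
    ],
    "memo": "",
    "timeout_height": "0",
    "extension_options": [],
    "non_critical_extension_options": []
  },
  "auth_info": {
    "signer_infos": [],
    "fee": { "amount": [], "gas_limit": "200000", "payer": "", "granter": "" }
  },
  "signatures": []
}
```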
+ +Some useful flags to consider in the `tx sign` command: + +- `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`. +- `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for offline signing, i.e. signing in a secure environment that doesn't have internet access. + +#### Signing with Multiple Signers + + + Please note that signing a transaction with multiple signers or with a + multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not + yet possible. You may follow [this GitHub + issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info. + + +Signing with multiple signers is done with the `tx multisign` command. This command assumes that all signers use `SIGN_MODE_LEGACY_AMINO_JSON`. The flow is similar to the `tx sign` command flow, but instead of signing an unsigned transaction file, each signer signs the file signed by previous signer(s). The `tx multisign` command will append signatures to the existing transaction. It is important that signers sign the transaction **in the same order** as given by the transaction, which is retrievable using the `GetSigners()` method. + +For example, starting with the `unsigned_tx.json`, and assuming the transaction has 4 signers, we would run: + +```bash +# Let signer1 sign the unsigned tx. +simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json +# Now signer1 will send partial_tx_1.json to signer2.
+# Signer2 appends their signature: +simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json +# Signer2 sends the partial_tx_2.json file to signer3, and signer3 can append their signature: +simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json +``` + +### Broadcasting a Transaction + +Broadcasting a transaction is done using the following command: + +```bash +simd tx broadcast signed_tx.json +``` + +You may optionally pass the `--broadcast-mode` flag to specify which response to receive from the node: + +- `sync`: the CLI waits for a CheckTx execution response only. +- `async`: the CLI returns immediately (transaction might fail). + +### Encoding a Transaction + +In order to broadcast a transaction using the gRPC or REST endpoints, the transaction will need to be encoded first. This can be done using the CLI. + +Encoding a transaction is done using the following command: + +```bash +simd tx encode signed_tx.json +``` + +This will read the transaction from the file, serialize it using Protobuf, and output the transaction bytes as base64 in the console. + +### Decoding a Transaction + +The CLI can also be used to decode transaction bytes. + +Decoding a transaction is done using the following command: + +```bash +simd tx decode [protobuf-byte-string] +``` + +This will decode the transaction bytes and output the transaction as JSON in the console. You can also save the transaction to a file by appending `> tx.json` to the above command. + +## Programmatically with Go + +It is possible to manipulate transactions programmatically via Go using the Cosmos SDK's `TxBuilder` interface. + +### Generating a Transaction + +Before generating a transaction, a new instance of a `TxBuilder` needs to be created. Since the Cosmos SDK supports both Amino and Protobuf transactions, the first step would be to decide which encoding scheme to use.
All the subsequent steps remain unchanged, whether you're using Amino or Protobuf, as `TxBuilder` abstracts the encoding mechanisms. In the following snippet, we will use Protobuf. + +```go expandable +import ( + + "github.com/cosmos/cosmos-sdk/simapp" +) + +func sendTx() + +error { + / Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function. + app := simapp.NewSimApp(...) + + / Create a new TxBuilder. + txBuilder := app.TxConfig().NewTxBuilder() + + / --snip-- +} +``` + +We can also set up some keys and addresses that will send and receive the transactions. Here, for the purpose of the tutorial, we will be using some dummy data to create keys. + +```go +import ( + + "github.com/cosmos/cosmos-sdk/testutil/testdata" +) + +priv1, _, addr1 := testdata.KeyTestPubAddr() + +priv2, _, addr2 := testdata.KeyTestPubAddr() + +priv3, _, addr3 := testdata.KeyTestPubAddr() +``` + +Populating the `TxBuilder` can be done via its methods: + +```go expandable +package client + +import ( + + "time" + + txsigning "cosmossdk.io/x/tx/signing" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. 
+ TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() *txsigning.HandlerMap + SigningContext() *txsigning.Context +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. + TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTimeoutHeight(height uint64) + +SetTimeoutTimestamp(timestamp time.Time) + +SetUnordered(v bool) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} + + / ExtendedTxBuilder extends the TxBuilder interface, + / which is used to set extension options to be included in a transaction. + ExtendedTxBuilder interface { + SetExtensionOptions(extOpts ...*codectypes.Any) +} +) +``` + +```go expandable +import ( + + banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +func sendTx() + +error { + / --snip-- + + / Define two x/bank MsgSend messages: + / - from addr1 to addr3, + / - from addr2 to addr3. + / This means that the transaction needs two signers: addr1 and addr2. + msg1 := banktypes.NewMsgSend(addr1, addr3, types.NewCoins(types.NewInt64Coin("atom", 12))) + +msg2 := banktypes.NewMsgSend(addr2, addr3, types.NewCoins(types.NewInt64Coin("atom", 34))) + err := txBuilder.SetMsgs(msg1, msg2) + if err != nil { + return err +} + +txBuilder.SetGasLimit(...) + +txBuilder.SetFeeAmount(...) + +txBuilder.SetMemo(...) + +txBuilder.SetTimeoutHeight(...) +} +``` + +At this point, `TxBuilder`'s underlying transaction is ready to be signed.
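A transaction's required signers are derived from its messages: one entry per unique signer address, in first-appearance order, which is why the two `MsgSend`s above imply exactly two signers (`addr1` and `addr2`) and why `GetSigners()` yields a stable order for multisigning. A minimal sketch of that derivation rule — plain strings stand in for addresses, and this mirrors rather than calls the SDK's own logic:

```go
package main

import "fmt"

// distinctSigners mimics how a transaction's signer list is derived from
// its messages: one entry per unique signer, in first-appearance order.
func distinctSigners(msgSigners [][]string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, signers := range msgSigners {
		for _, s := range signers {
			if !seen[s] {
				seen[s] = true
				out = append(out, s)
			}
		}
	}
	return out
}

func main() {
	// Two MsgSend messages: addr1 -> addr3 and addr2 -> addr3.
	// MsgSend's only signer is its from_address, so the tx needs addr1 and addr2.
	fmt.Println(distinctSigners([][]string{{"addr1"}, {"addr2"}})) // prints: [addr1 addr2]
}
```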
+ +#### Generating an Unordered Transaction + +Starting with Cosmos SDK v0.53.0, users may send unordered transactions to chains that have the feature enabled. + + + +Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value, +the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions. +Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. + + + +Using the example above, we can set the required fields to mark a transaction as unordered. +By default, unordered transactions charge an extra 2240 units of gas to offset the additional storage overhead that supports their functionality. +The extra units of gas are customizable and therefore vary by chain, so be sure to check the chain's ante handler for the gas value set, if any. + +```go +func sendTx() + +error { + / --snip-- + expiration := 5 * time.Minute + txBuilder.SetUnordered(true) + +txBuilder.SetTimeoutTimestamp(time.Now().Add(expiration + (1 * time.Nanosecond))) +} +``` + +Unordered transactions from the same account must use a unique timeout timestamp value. However, the difference between timeout timestamp values may be as small as a nanosecond. + +```go expandable +import ( + + "github.com/cosmos/cosmos-sdk/client" +) + +func sendMessages(txBuilders []client.TxBuilder) + +error { + / --snip-- + expiration := 5 * time.Minute + for _, txb := range txBuilders { + txb.SetUnordered(true) + +txb.SetTimeoutTimestamp(time.Now().Add(expiration + (1 * time.Nanosecond))) +} +} +``` + +### Signing a Transaction + +We set the encoding config to use Protobuf, which will use `SIGN_MODE_DIRECT` by default. As per [ADR-020](/docs/sdk/next/documentation/legacy/adr-comprehensive), each signer needs to sign the `SignerInfo`s of all other signers.
This means that we need to perform two steps sequentially: + +- for each signer, populate the signer's `SignerInfo` inside `TxBuilder`, +- once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed). + +In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`. The current API requires us to first perform a round of `SetSignatures()` _with empty signatures_, only to populate `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload. + +```go expandable +import ( + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +func sendTx() + +error { + / --snip-- + privs := []cryptotypes.PrivKey{ + priv1, priv2 +} + accNums:= []uint64{..., ... +} / The accounts' account numbers + accSeqs:= []uint64{..., ... +} / The accounts' sequence numbers + + / First round: we gather all the signer infos. We use the "set empty + / signature" hack to do that. + var sigsV2 []signing.SignatureV2 + for i, priv := range privs { + sigV2 := signing.SignatureV2{ + PubKey: priv.PubKey(), + Data: &signing.SingleSignatureData{ + SignMode: encCfg.TxConfig.SignModeHandler().DefaultMode(), + Signature: nil, +}, + Sequence: accSeqs[i], +} + +sigsV2 = append(sigsV2, sigV2) +} + err := txBuilder.SetSignatures(sigsV2...) + if err != nil { + return err +} + + / Second round: all signer infos are set, so each signer can sign. + sigsV2 = []signing.SignatureV2{ +} + for i, priv := range privs { + signerData := xauthsigning.SignerData{ + ChainID: chainID, + AccountNumber: accNums[i], + Sequence: accSeqs[i], +} + +sigV2, err := tx.SignWithPrivKey( + encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData, + txBuilder, priv, encCfg.TxConfig, accSeqs[i]) + if err != nil { + return nil, err +} + +sigsV2 = append(sigsV2, sigV2) +} + +err = txBuilder.SetSignatures(sigsV2...) 
+ if err != nil { + return err +} +} +``` + +The `TxBuilder` is now correctly populated. To print it, you can use the `TxConfig` interface from the initial encoding config `encCfg`: + +```go expandable +func sendTx() + +error { + / --snip-- + + / Generated Protobuf-encoded bytes. + txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx()) + if err != nil { + return err +} + + / Generate a JSON string. + txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx()) + if err != nil { + return err +} + txJSON := string(txJSONBytes) +} +``` + +### Broadcasting a Transaction + +The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also possible. An overview of the differences between these methods is exposed [here](/docs/sdk/v0.53/api-reference/service-apis/grpc_rest). For this tutorial, we will only describe the gRPC method. + +```go expandable +import ( + + "context" + "fmt" + "google.golang.org/grpc" + "github.com/cosmos/cosmos-sdk/types/tx" +) + +func sendTx(ctx context.Context) + +error { + / --snip-- + + / Create a connection to the gRPC server. + grpcConn := grpc.Dial( + "127.0.0.1:9090", / Or your gRPC server address. + grpc.WithInsecure(), / The Cosmos SDK doesn't support any transport security mechanism. + ) + +defer grpcConn.Close() + + / Broadcast the tx via gRPC. We create a new client for the Protobuf Tx + / service. + txClient := tx.NewServiceClient(grpcConn) + / We then call the BroadcastTx method on this client. + grpcRes, err := txClient.BroadcastTx( + ctx, + &tx.BroadcastTxRequest{ + Mode: tx.BroadcastMode_BROADCAST_MODE_SYNC, + TxBytes: txBytes, / Proto-binary of the signed transaction, see previous step. 
+}, + ) + if err != nil { + return err +} + +fmt.Println(grpcRes.TxResponse.Code) / Should be `0` if the tx is successful + + return nil +} +``` + +#### Simulating a Transaction + +Before broadcasting a transaction, we sometimes may want to dry-run the transaction, to estimate some information about the transaction without actually committing it. This is called simulating a transaction, and can be done as follows: + +```go expandable +import ( + + "context" + "fmt" + "testing" + "github.com/cosmos/cosmos-sdk/client" + "github.com/cosmos/cosmos-sdk/types/tx" + authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" +) + +func simulateTx() + +error { + / --snip-- + + / Simulate the tx via gRPC. We create a new client for the Protobuf Tx + / service. + txClient := tx.NewServiceClient(grpcConn) + txBytes := /* Fill in with your signed transaction bytes. */ + + / We then call the Simulate method on this client. + grpcRes, err := txClient.Simulate( + context.Background(), + &tx.SimulateRequest{ + TxBytes: txBytes, +}, + ) + if err != nil { + return err +} + +fmt.Println(grpcRes.GasInfo) / Prints estimated gas used. + + return nil +} +``` + +## Using gRPC + +It is not possible to generate or sign a transaction using gRPC, only to broadcast one. In order to broadcast a transaction using gRPC, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. + +### Broadcasting a Transaction + +Broadcasting a transaction using the gRPC endpoint can be done by sending a `BroadcastTx` request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: + +```bash +grpcurl -plaintext \ + -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ + localhost:9090 \ + cosmos.tx.v1beta1.Service/BroadcastTx +``` + +## Using REST + +It is not possible to generate or sign a transaction using REST, only to broadcast one. 
In order to broadcast a transaction using REST, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. + +### Broadcasting a Transaction + +Broadcasting a transaction using the REST endpoint (served by `gRPC-gateway`) can be done by sending a POST request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: + +```bash +curl -X POST \ + -H "Content-Type: application/json" \ + -d'{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ + localhost:1317/cosmos/tx/v1beta1/txs +``` + +## Using CosmJS (JavaScript & TypeScript) + +CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. Please see [Link](https://cosmos.github.io/cosmjs) for more information. As of January 2021, CosmJS documentation is still work in progress. diff --git a/docs/sdk/v0.53/documentation/operations/upgrade-guide.mdx b/docs/sdk/v0.53/documentation/operations/upgrade-guide.mdx new file mode 100644 index 00000000..b8fd918e --- /dev/null +++ b/docs/sdk/v0.53/documentation/operations/upgrade-guide.mdx @@ -0,0 +1,566 @@ +--- +title: Upgrade Guide +description: >- + This document provides a full guide for upgrading a Cosmos SDK chain from + v0.50.x to v0.53.x. +--- + +This document provides a full guide for upgrading a Cosmos SDK chain from `v0.50.x` to `v0.53.x`. + +This guide includes one **required** change and three **optional** features. + +After completing this guide, applications will have: + +* The `x/protocolpool` module +* The `x/epochs` module +* Unordered Transaction support + + + +**IMPORTANT**: If your chain uses IBC, you can choose between IBC-Go v8.x or v10.x when upgrading to SDK v0.53. Both are compatible. See the [IBC-Go Version Requirements section](#ibc-go-version-requirements) below for guidance on choosing. 
+ + + +## Table of Contents + +* [IBC-Go Version Requirements](#ibc-go-version-requirements) + * [Compatibility](#compatibility) + * [Choosing Your Version](#choosing-your-ibc-go-version) + * [Migration Considerations](#migration-considerations) +* [App Wiring Changes (REQUIRED)](#app-wiring-changes-required) +* [Adding ProtocolPool Module (OPTIONAL)](#adding-protocolpool-module-optional) + * [ProtocolPool Manual Wiring](#protocolpool-manual-wiring) + * [ProtocolPool DI Wiring](#protocolpool-di-wiring) +* [Adding Epochs Module (OPTIONAL)](#adding-epochs-module-optional) + * [Epochs Manual Wiring](#epochs-manual-wiring) + * [Epochs DI Wiring](#epochs-di-wiring) +* [Enable Unordered Transactions (OPTIONAL)](#enable-unordered-transactions-optional) +* [Upgrade Handler](#upgrade-handler) + +## IBC-Go Version Requirements + +### Compatibility + +Cosmos SDK v0.53 is compatible with both IBC-Go v8.x and v10.x. You can choose either version. + +| Cosmos SDK | IBC-Go v8.x | IBC-Go v10.x | +|------------|--------------|---------------| +| v0.53.x | ✅ Compatible | ✅ Compatible | +| v0.50.x | ✅ Compatible | ❌ Not Compatible | + +### Choosing Your IBC-Go Version + +#### Option 1: Stay on IBC-Go v8.x +- **Latest**: v8.7.0 +- **Support until**: May 10, 2025 +- **Migration**: No IBC changes required for SDK v0.53 upgrade +- **Middleware**: All existing middleware continues to work + +#### Option 2: Upgrade to IBC-Go v10.x +- **Latest**: v10.3.0 +- **Features**: IBC v2 protocol support +- **Migration**: Follow the [IBC-Go v8.1 to v10 migration guide](https://ibc.cosmos.network/main/migrations/v8_1-to-v10/) +- **Breaking changes**: Fee middleware removed, capability module removed + +### Migration Considerations + +#### If Staying on IBC-Go v8.x +No IBC-related changes are required. Your existing IBC configuration will continue to work with SDK v0.53. 
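If you stay on v8.x, the only version-pinning concern is your `go.mod` requirements. A sketch (versions are illustrative — pin to the latest patch releases you have verified):

```go
// go.mod fragment (illustrative; check the cosmos-sdk and ibc-go release
// pages for the exact latest patch versions before pinning):
require (
	github.com/cosmos/cosmos-sdk v0.53.0
	github.com/cosmos/ibc-go/v8 v8.7.0
)
```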
+ +#### If Upgrading to IBC-Go v10.x +Key breaking changes: +- Remove fee middleware (29-fee) from module configuration +- Remove capability module +- Update keeper constructors +- Reconfigure middleware stacks + +## App Wiring Changes **REQUIRED** + +The `x/auth` module now contains a `PreBlocker` that *must* be set in the module manager's `SetOrderPreBlockers` method. + +```go +app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, / NEW +) +``` + +## Adding ProtocolPool Module **OPTIONAL** + + + +Using an external community pool such as `x/protocolpool` will cause the following `x/distribution` handlers to return an error: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If your services depend on this functionality from `x/distribution`, please update them to use either `x/protocolpool` or your custom external community pool alternatives. + + + +### Manual Wiring + +Import the following: + +```go +import ( + + / ... + "github.com/cosmos/cosmos-sdk/x/protocolpool" + protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) +``` + +Set the module account permissions. + +```go +maccPerms = map[string][]string{ + / ... + protocolpooltypes.ModuleName: nil, + protocolpooltypes.ProtocolPoolEscrowAccount: nil, +} +``` + +Add the protocol pool keeper to your application struct. + +```go +ProtocolPoolKeeper protocolpoolkeeper.Keeper +``` + +Add the store key: + +```go +keys := storetypes.NewKVStoreKeys( + / ... + protocolpooltypes.StoreKey, +) +``` + +Instantiate the keeper. + +Make sure to do this before the distribution module instantiation, as you will pass the keeper there next. 
+ +```go +app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), +) +``` + +Pass the protocolpool keeper to the distribution keeper: + +```go +app.DistrKeeper = distrkeeper.NewKeeper( + appCodec, + runtime.NewKVStoreService(keys[distrtypes.StoreKey]), + app.AccountKeeper, + app.BankKeeper, + app.StakingKeeper, + authtypes.FeeCollectorName, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), / NEW +) +``` + +Add the protocolpool module to the module manager: + +```go +app.ModuleManager = module.NewManager( + / ... + protocolpool.NewAppModule(appCodec, app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper), +) +``` + +Add an entry for SetOrderBeginBlockers, SetOrderEndBlockers, SetOrderInitGenesis, and SetOrderExportGenesis. + +```go +app.ModuleManager.SetOrderBeginBlockers( + / must come AFTER distribution. + distrtypes.ModuleName, + protocolpooltypes.ModuleName, +) +``` + +```go +app.ModuleManager.SetOrderEndBlockers( + / order does not matter. + protocolpooltypes.ModuleName, +) +``` + +```go +app.ModuleManager.SetOrderInitGenesis( + / order does not matter. + protocolpooltypes.ModuleName, +) +``` + +```go +app.ModuleManager.SetOrderInitGenesis( + protocolpooltypes.ModuleName, / must be exported before bank. + banktypes.ModuleName, +) +``` + +### DI Wiring + +Note: *as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool.* + +First, set up the keeper for the application. 
+ +Import the protocolpool keeper: + +```go +protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper" +``` + +Add the keeper to your application struct: + +```go +ProtocolPoolKeeper protocolpoolkeeper.Keeper +``` + +Add the keeper to the depinject system: + +```go +depinject.Inject( + appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + / ... other modules + &app.ProtocolPoolKeeper, / NEW MODULE! +) +``` + +Next, set up configuration for the module. + +Import the following: + +```go +import ( + + protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1" + + _ "github.com/cosmos/cosmos-sdk/x/protocolpool" / import for side-effects + protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types" +) +``` + +The protocolpool module has module accounts that handle funds. Add them to the module account permission configuration: + +```go +moduleAccPerms = []*authmodulev1.ModuleAccountPermission{ + / ... + { + Account: protocolpooltypes.ModuleName +}, + { + Account: protocolpooltypes.ProtocolPoolEscrowAccount +}, +} +``` + +Next, add an entry for BeginBlockers, EndBlockers, InitGenesis, and ExportGenesis. + +```go +BeginBlockers: []string{ + / ... + / must be AFTER distribution. + distrtypes.ModuleName, + protocolpooltypes.ModuleName, +}, +``` + +```go +EndBlockers: []string{ + / ... + / order for protocolpool does not matter. + protocolpooltypes.ModuleName, +}, +``` + +```go +InitGenesis: []string{ + / ... must be AFTER distribution. + distrtypes.ModuleName, + protocolpooltypes.ModuleName, +}, +``` + +```go +ExportGenesis: []string{ + / ... + / Must be exported before x/bank. + protocolpooltypes.ModuleName, + banktypes.ModuleName, +}, +``` + +Lastly, add an entry for protocolpool in the ModuleConfig. 
+ +```go +{ + Name: protocolpooltypes.ModuleName, + Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{ +}), +}, +``` + +## Adding Epochs Module **OPTIONAL** + +### Manual Wiring + +Import the following: + +```go +import ( + + / ... + "github.com/cosmos/cosmos-sdk/x/epochs" + epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" +) +``` + +Add the epochs keeper to your application struct: + +```go +EpochsKeeper epochskeeper.Keeper +``` + +Add the store key: + +```go +keys := storetypes.NewKVStoreKeys( + / ... + epochstypes.StoreKey, +) +``` + +Instantiate the keeper: + +```go +app.EpochsKeeper = epochskeeper.NewKeeper( + runtime.NewKVStoreService(keys[epochstypes.StoreKey]), + appCodec, +) +``` + +Set up hooks for the epochs keeper: + +To learn how to write hooks for the epoch keeper, see the [x/epoch README](https://github.com/cosmos/cosmos-sdk/blob/main/x/epochs/README.md) + +```go +app.EpochsKeeper.SetHooks( + epochstypes.NewMultiEpochHooks( + / insert epoch hooks receivers here + app.SomeOtherModule + ), +) +``` + +Add the epochs module to the module manager: + +```go +app.ModuleManager = module.NewManager( + / ... + epochs.NewAppModule(appCodec, app.EpochsKeeper), +) +``` + +Add entries for SetOrderBeginBlockers and SetOrderInitGenesis: + +```go +app.ModuleManager.SetOrderBeginBlockers( + / ... + epochstypes.ModuleName, +) +``` + +```go +app.ModuleManager.SetOrderInitGenesis( + / ... + epochstypes.ModuleName, +) +``` + +### DI Wiring + +First, set up the keeper for the application. + +Import the epochs keeper: + +```go +epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper" +``` + +Add the keeper to your application struct: + +```go +EpochsKeeper epochskeeper.Keeper +``` + +Add the keeper to the depinject system: + +```go +depinject.Inject( + appConfig, + &appBuilder, + &app.appCodec, + &app.legacyAmino, + &app.txConfig, + &app.interfaceRegistry, + / ... 
other modules + &app.EpochsKeeper, / NEW MODULE! +) +``` + +Next, set up configuration for the module. + +Import the following: + +```go +import ( + + epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1" + + _ "github.com/cosmos/cosmos-sdk/x/epochs" / import for side-effects + epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types" +) +``` + +Add an entry for BeginBlockers and InitGenesis: + +```go +BeginBlockers: []string{ + / ... + epochstypes.ModuleName, +}, +``` + +```go +InitGenesis: []string{ + / ... + epochstypes.ModuleName, +}, +``` + +Lastly, add an entry for epochs in the ModuleConfig: + +```go +{ + Name: epochstypes.ModuleName, + Config: appconfig.WrapAny(&epochsmodulev1.Module{ +}), +}, +``` + +## Enable Unordered Transactions **OPTIONAL** + +To enable unordered transaction support on an application, the `x/auth` keeper must be supplied with the `WithUnorderedTransactions` option. + +Note that unordered transactions require sequence values to be zero, and will **FAIL** if a non-zero sequence value is set. +Please ensure no sequence value is set when submitting an unordered transaction. +Services that rely on prior assumptions about sequence values should be updated to handle unordered transactions. +Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. + +```go +app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), / new option! + ) +``` + +If using dependency injection, update the auth module config. 
 
```go
{
	Name: authtypes.ModuleName,
	Config: appconfig.WrapAny(&authmodulev1.Module{
		Bech32Prefix:                "cosmos",
		ModuleAccountPermissions:    moduleAccPerms,
		EnableUnorderedTransactions: true, // remove this line if you do not want unordered transactions.
	}),
},
```

By default, unordered transactions use a transaction timeout duration of 10 minutes and a default gas charge of 2240 gas units.
To modify these default values, pass the corresponding options to the new `SigVerifyOptions` field in `x/auth`'s `ante.HandlerOptions`.

```go
options := ante.HandlerOptions{
	SigVerifyOptions: []ante.SigVerificationDecoratorOption{
		// change below as needed.
		ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost),
		ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimoutDuration),
	},
}
```

```go
anteDecorators := []sdk.AnteDecorator{
	// ... other decorators ...
	ante.NewSigVerificationDecorator(options.AccountKeeper, options.SignModeHandler, options.SigVerifyOptions...), // supply new options
}
```

## Upgrade Handler

The upgrade handler only requires adding the store upgrades for the modules added above.
If your application is not adding `x/protocolpool` or `x/epochs`, you do not need to add the store upgrade.

```go expandable
// UpgradeName defines the on-chain upgrade name for the sample SimApp upgrade
// from v050 to v053.
//
// NOTE: This upgrade defines a reference implementation of what an upgrade
// could look like when an application is migrating from Cosmos SDK version
// v0.50.x to v0.53.x.
const UpgradeName = "v050-to-v053"

func (app SimApp) RegisterUpgradeHandlers() {
	app.UpgradeKeeper.SetUpgradeHandler(
		UpgradeName,
		func(ctx context.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
			return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM)
		},
	)

	upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
	if err != nil {
		panic(err)
	}

	if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
		storeUpgrades := storetypes.StoreUpgrades{
			Added: []string{
				epochstypes.ModuleName,       // if not adding x/epochs to your chain, remove this line.
				protocolpooltypes.ModuleName, // if not adding x/protocolpool to your chain, remove this line.
			},
		}

		// configure store loader that checks if version == upgradeHeight and applies store upgrades
		app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
	}
}
```
diff --git a/docs/sdk/v0.53/documentation/operations/upgrade-reference.mdx b/docs/sdk/v0.53/documentation/operations/upgrade-reference.mdx
new file mode 100644
index 00000000..ef78b402
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/operations/upgrade-reference.mdx
@@ -0,0 +1,234 @@
---
title: Upgrade Reference
description: >-
  This document provides a quick reference for the upgrades from v0.50.x to
  v0.53.x of Cosmos SDK.
---

This document provides a quick reference for the upgrades from `v0.50.x` to `v0.53.x` of the Cosmos SDK.

Note: always read the **App Wiring Changes** section for more information on application wiring updates.

Upgrading to v0.53.x will require a **coordinated** chain upgrade.

### TLDR;

Unordered transactions, `x/protocolpool`, and `x/epochs` are the major new features added in v0.53.x.

We also added the ability to add a `CheckTx` handler and enabled ed25519 signature verification.
+ +For a full list of changes, see the [Changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/CHANGELOG.md). + +### Unordered Transactions + +The Cosmos SDK now supports unordered transactions. *This is an opt-in feature*. + +Clients that use this feature may now submit their transactions in a fire-and-forget manner to chains that enabled unordered transactions. + +To submit an unordered transaction, clients must set the `unordered` flag to +`true` and ensure a reasonable `timeout_timestamp` is set. The `timeout_timestamp` is +used as a TTL for the transaction and provides replay protection. Each transaction's `timeout_timestamp` must be +unique to the account; however, the difference may be as small as a nanosecond. See [ADR-070](docs/sdk/next/documentation/legacy/adr-comprehensive) for more details. + +Note that unordered transactions require sequence values to be zero, and will **FAIL** if a non-zero sequence value is set. +Please ensure no sequence value is set when submitting an unordered transaction. +Services that rely on prior assumptions about sequence values should be updated to handle unordered transactions. +Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. + +#### Enabling Unordered Transactions + +To enable unordered transactions, supply the `WithUnorderedTransactions` option to the `x/auth` keeper: + +```go +app.AccountKeeper = authkeeper.NewAccountKeeper( + appCodec, + runtime.NewKVStoreService(keys[authtypes.StoreKey]), + authtypes.ProtoBaseAccount, + maccPerms, + authcodec.NewBech32Codec(sdk.Bech32MainPrefix), + sdk.Bech32MainPrefix, + authtypes.NewModuleAddress(govtypes.ModuleName).String(), + authkeeper.WithUnorderedTransactions(true), / new option! + ) +``` + +If using dependency injection, update the auth module config. 
+ +```go +{ + Name: authtypes.ModuleName, + Config: appconfig.WrapAny(&authmodulev1.Module{ + Bech32Prefix: "cosmos", + ModuleAccountPermissions: moduleAccPerms, + EnableUnorderedTransactions: true, / remove this line if you do not want unordered transactions. +}), +}, +``` + +By default, unordered transactions use a transaction timeout duration of 10 minutes and a default gas charge of 2240 gas units. +To modify these default values, pass in the corresponding options to the new `SigVerifyOptions` field in `x/auth's` `ante.HandlerOptions`. + +```go +options := ante.HandlerOptions{ + SigVerifyOptions: []ante.SigVerificationDecoratorOption{ + / change below as needed. + ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost), + ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimoutDuration), +}, +} +``` + +```go +anteDecorators := []sdk.AnteDecorator{ + / ... other decorators ... + ante.NewSigVerificationDecorator(options.AccountKeeper, options.SignModeHandler, options.SigVerifyOptions...), / supply new options +} +``` + +### App Wiring Changes + +In this section, we describe the required app wiring changes to run a v0.53.x Cosmos SDK application. + +**These changes are directly applicable to your application wiring.** + +The `x/auth` module now contains a `PreBlocker` that *must* be set in the module manager's `SetOrderPreBlockers` method. + +```go +app.ModuleManager.SetOrderPreBlockers( + upgradetypes.ModuleName, + authtypes.ModuleName, / NEW +) +``` + +That's it. + +### New Modules + +Below are some **optional** new modules you can include in your chain. +To see a full example of wiring these modules, please check out the [SimApp](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/simapp/app.go). + +#### Epochs + +Adding this module requires a `StoreUpgrade` + +The new, supplemental `x/epochs` module provides Cosmos SDK modules functionality to register and execute custom logic at fixed time-intervals. 
+ +Required wiring: + +* Keeper Instantiation +* StoreKey addition +* Hooks Registration +* App Module Registration +* entry in SetOrderBeginBlockers +* entry in SetGenesisModuleOrder +* entry in SetExportModuleOrder + +#### ProtocolPool + + + +Using `protocolpool` will cause the following `x/distribution` handlers to return an error: + +**QueryService** + +* `CommunityPool` + +**MsgService** + +* `CommunityPoolSpend` +* `FundCommunityPool` + +If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents. + + + +Adding this module requires a `StoreUpgrade` + +The new, supplemental `x/protocolpool` module provides extended functionality for managing and distributing block reward revenue. + +Required wiring: + +* Module Account Permissions + * protocolpooltypes.ModuleName (nil) + * protocolpooltypes.ProtocolPoolEscrowAccount (nil) +* Keeper Instantiation +* StoreKey addition +* Passing the keeper to the Distribution Keeper + * `distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper)` +* App Module Registration +* entry in SetOrderBeginBlockers +* entry in SetOrderEndBlockers +* entry in SetGenesisModuleOrder +* entry in SetExportModuleOrder **before `x/bank`** + +## Custom Minting Function in `x/mint` + +This release introduces the ability to configure a custom mint function in `x/mint`. The minting logic is now abstracted as a `MintFn` with a default implementation that can be overridden. + +### What’s New + +* **Configurable Mint Function:**\ + A new `MintFn` abstraction is introduced. By default, the module uses `DefaultMintFn`, but you can supply your own implementation. + +* **Deprecated InflationCalculationFn Parameter:**\ + The `InflationCalculationFn` argument previously provided to `mint.NewAppModule()` is now ignored and must be `nil`. 
To customize the default minter’s inflation behavior, wrap your custom function with `mintkeeper.DefaultMintFn` and pass it via the `WithMintFn` option:

```go
mintkeeper.WithMintFn(mintkeeper.DefaultMintFn(customInflationFn))
```

### How to Upgrade

1. **Using the Default Minting Function**

   No action is needed if you’re happy with the default behavior. Make sure your application wiring initializes the MintKeeper like this:

```go
mintKeeper := mintkeeper.NewKeeper(
	appCodec,
	storeService,
	stakingKeeper,
	accountKeeper,
	bankKeeper,
	authtypes.FeeCollectorName,
	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
)
```

2. **Using a Custom Minting Function**

   To use a custom minting function, define it as follows and pass it to your mint keeper when constructing it:

```go expandable
func myCustomMintFunc(ctx sdk.Context, k *mintkeeper.Keeper) {
	// do minting...
}

// ...
mintKeeper := mintkeeper.NewKeeper(
	appCodec,
	storeService,
	stakingKeeper,
	accountKeeper,
	bankKeeper,
	authtypes.FeeCollectorName,
	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
	mintkeeper.WithMintFn(myCustomMintFunc), // use the custom minting function
)
```

### Misc Changes

#### Testnet's init-files Command

Some changes were made to `testnet`'s `init-files` command to support our new testing framework, `Systemtest`.

##### Flag Changes

* The flag for validator count was changed from `--v` to `--validator-count` (shorthand: `-v`).

##### Flag Additions

* `--staking-denom` allows changing the default stake denom, `stake`.
* `--commit-timeout` enables changing the commit timeout of the chain.
* `--single-host` enables running a multi-node network on a single host. This bumps each subsequent node's network addresses by 1. For example, node1's gRPC address will be 9090, node2's 9091, and so on.
diff --git a/docs/sdk/v0.53/learn/advanced/upgrade.mdx b/docs/sdk/v0.53/documentation/operations/upgrade.mdx similarity index 60% rename from docs/sdk/v0.53/learn/advanced/upgrade.mdx rename to docs/sdk/v0.53/documentation/operations/upgrade.mdx index 131b6584..d941a118 100644 --- a/docs/sdk/v0.53/learn/advanced/upgrade.mdx +++ b/docs/sdk/v0.53/documentation/operations/upgrade.mdx @@ -1,112 +1,167 @@ --- -title: "In-Place Store Migrations" -description: "Version: v0.53" +title: In-Place Store Migrations --- - Read and understand all the in-place store migration documentation before you run a migration on a live chain. + Read and understand all the in-place store migration documentation before you + run a migration on a live chain. - - Upgrade your app modules smoothly with custom in-place store migration logic. - +## Synopsis + +Upgrade your app modules smoothly with custom in-place store migration logic. The Cosmos SDK uses two methods to perform upgrades: -* Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file. +- Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file. -* Perform upgrades in place, which significantly decrease the upgrade time for chains with a larger state. Use the [Module Upgrade Guide](/v0.53/build/building-modules/upgrade) to set up your application modules to take advantage of in-place upgrades. +- Perform upgrades in place, which significantly decrease the upgrade time for chains with a larger state. Use the [Module Upgrade Guide](docs/sdk/v0.53/documentation/application-framework/app-upgrade) to set up your application modules to take advantage of in-place upgrades. This document provides steps to use the In-Place Store Migrations upgrade method. 
-## Tracking Module Versions[​](#tracking-module-versions "Direct link to Tracking Module Versions") +## Tracking Module Versions Each module gets assigned a consensus version by the module developer. The consensus version serves as the breaking change version of the module. The Cosmos SDK keeps track of all module consensus versions in the x/upgrade `VersionMap` store. During an upgrade, the difference between the old `VersionMap` stored in state and the new `VersionMap` is calculated by the Cosmos SDK. For each identified difference, the module-specific migrations are run and the respective consensus version of each upgraded module is incremented. -### Consensus Version[​](#consensus-version "Direct link to Consensus Version") +### Consensus Version The consensus version is defined on each app module by the module developer and serves as the breaking change version of the module. The consensus version informs the Cosmos SDK on which modules need to be upgraded. For example, if the bank module was version 2 and an upgrade introduces bank module 3, the Cosmos SDK upgrades the bank module and runs the "version 2 to 3" migration script. -### Version Map[​](#version-map "Direct link to Version Map") +### Version Map The version map is a mapping of module names to consensus versions. The map is persisted to x/upgrade's state for use during in-place migrations. When migrations finish, the updated version map is persisted in the state. -## Upgrade Handlers[​](#upgrade-handlers "Direct link to Upgrade Handlers") +## Upgrade Handlers Upgrades use an `UpgradeHandler` to facilitate migrations. The `UpgradeHandler` functions implemented by the app developer must conform to the following function signature. These functions retrieve the `VersionMap` from x/upgrade's state and return the new `VersionMap` to be stored in x/upgrade after the upgrade. The diff between the two `VersionMap`s determines which modules need upgrading. 
-```
+```go
type UpgradeHandler func(ctx sdk.Context, plan Plan, fromVM VersionMap) (VersionMap, error)
```

Inside these functions, you must perform any upgrade logic to include in the provided `plan`. All upgrade handler functions must end with the following line of code:

-```
- return app.mm.RunMigrations(ctx, cfg, fromVM)
+```go
+return app.mm.RunMigrations(ctx, cfg, fromVM)
```

-## Running Migrations[​](#running-migrations "Direct link to Running Migrations")
+## Running Migrations

Migrations are run inside of an `UpgradeHandler` using `app.mm.RunMigrations(ctx, cfg, vm)`. The `UpgradeHandler` functions describe the functionality to occur during an upgrade. The `RunMigrations` function loops through the `VersionMap` argument and runs the migration scripts for every module whose stored version is less than the version in the new binary. After the migrations are finished, a new `VersionMap` is returned to persist the upgraded module versions to state.

-```
-cfg := module.NewConfigurator(...)app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // ... // additional upgrade logic // ... // returns a VersionMap with the updated module ConsensusVersions return app.mm.RunMigrations(ctx, fromVM)})
+```go
+cfg := module.NewConfigurator(...)
+
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+	// ...
+	// additional upgrade logic
+	// ...
+
+	// returns a VersionMap with the updated module ConsensusVersions
+	return app.mm.RunMigrations(ctx, cfg, fromVM)
+})
```

-To learn more about configuring migration scripts for your modules, see the [Module Upgrade Guide](/v0.53/build/building-modules/upgrade).
+To learn more about configuring migration scripts for your modules, see the [Module Upgrade Guide](/docs/sdk/v0.53/documentation/application-framework/app-upgrade).
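The diff-then-migrate behavior described above — compare the `VersionMap` stored in state against the binary's module versions, then run each module's per-version migration scripts in order — can be sketched as a self-contained program. All names below are illustrative stand-ins, not SDK APIs:

```go
package main

import (
	"fmt"
	"sort"
)

// VersionMap mirrors the shape of the SDK's module.VersionMap:
// module name -> consensus version.
type VersionMap map[string]uint64

// MigrationHandler stands in for a module migration script that
// upgrades state from version n to n+1.
type MigrationHandler func() error

// runMigrations sketches what app.mm.RunMigrations does: for every
// module in the new binary, compare the stored consensus version with
// the binary's version and run each intermediate migration step in
// order. It returns the updated VersionMap to be persisted to state.
func runMigrations(stored, binary VersionMap, scripts map[string]map[uint64]MigrationHandler) (VersionMap, error) {
	// deterministic iteration order, akin to the SDK's ordered migrations
	names := make([]string, 0, len(binary))
	for name := range binary {
		names = append(names, name)
	}
	sort.Strings(names)

	updated := make(VersionMap, len(binary))
	for _, name := range names {
		target := binary[name]
		from, known := stored[name]
		if !known {
			// A module missing from the stored map is new: the real SDK
			// would call its InitGenesis instead of running migrations.
			updated[name] = target
			continue
		}
		for v := from; v < target; v++ {
			h, ok := scripts[name][v]
			if !ok {
				return nil, fmt.Errorf("%s: no migration for version %d -> %d", name, v, v+1)
			}
			if err := h(); err != nil {
				return nil, err
			}
		}
		updated[name] = target
	}
	return updated, nil
}

func main() {
	stored := VersionMap{"bank": 2, "staking": 3}
	binary := VersionMap{"bank": 4, "staking": 3, "nft": 1}
	scripts := map[string]map[uint64]MigrationHandler{
		"bank": {
			2: func() error { fmt.Println("bank: v2 -> v3"); return nil },
			3: func() error { fmt.Println("bank: v3 -> v4"); return nil },
		},
	}
	updated, err := runMigrations(stored, binary, scripts)
	if err != nil {
		panic(err)
	}
	fmt.Println("new VersionMap:", updated)
}
```

Here `bank` runs two sequential migration steps, `staking` is untouched, and `nft` (absent from the stored map) is treated as a new module.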
-### Order Of Migrations[​](#order-of-migrations "Direct link to Order Of Migrations") +### Order Of Migrations By default, all migrations are run in module name alphabetical ascending order, except `x/auth` which is run last. The reason is state dependencies between x/auth and other modules (you can read more in [issue #10606](https://github.com/cosmos/cosmos-sdk/issues/10606)). If you want to change the order of migration, then you should call `app.mm.SetOrderMigrations(module1, module2, ...)` in your app.go file. The function will panic if you forget to include a module in the argument list. -## Adding New Modules During Upgrades[​](#adding-new-modules-during-upgrades "Direct link to Adding New Modules During Upgrades") +## Adding New Modules During Upgrades You can introduce entirely new modules to the application during an upgrade. New modules are recognized because they have not yet been registered in `x/upgrade`'s `VersionMap` store. In this case, `RunMigrations` calls the `InitGenesis` function from the corresponding module to set up its initial state. -### Add StoreUpgrades for New Modules[​](#add-storeupgrades-for-new-modules "Direct link to Add StoreUpgrades for New Modules") +### Add StoreUpgrades for New Modules All chains preparing to run in-place store migrations will need to manually add store upgrades for new modules and then configure the store loader to apply those upgrades. This ensures that the new module's stores are added to the multistore before the migrations begin. -``` -upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()if err != nil { panic(err)}if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { storeUpgrades := storetypes.StoreUpgrades{ // add store upgrades for new modules // Example: // Added: []string{"foo", "bar"}, // ... 
} // configure store loader that checks if version == upgradeHeight and applies store upgrades app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))}
+```go expandable
+upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
+if err != nil {
+	panic(err)
+}
+
+if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
+	storeUpgrades := storetypes.StoreUpgrades{
+		// add store upgrades for new modules
+		// Example:
+		// Added: []string{"foo", "bar"},
+		// ...
+	}
+
+	// configure store loader that checks if version == upgradeHeight and applies store upgrades
+	app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
+}
```

-## Genesis State[​](#genesis-state "Direct link to Genesis State")
+## Genesis State

When starting a new chain, the consensus version of each module MUST be saved to state during the application's genesis. To save the consensus version, add the following line to the `InitChainer` method in `app.go`:

-```
-func (app *MyApp) InitChainer(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain { ...+ app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap()) ...}
+```diff
+func (app *MyApp) InitChainer(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain {
+	...
++ app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap())
+	...
+}
```

This information is used by the Cosmos SDK to detect when modules with newer versions are introduced to the app. For a new module `foo`, `InitGenesis` is called by `RunMigrations` only when `foo` is registered in the module manager but not set in the `fromVM`.
Therefore, if you want to skip `InitGenesis` when a new module is added to the app, then you should set its module version in `fromVM` to the module consensus version:

-```
-app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // ... // Set foo's version to the latest ConsensusVersion in the VersionMap. // This will skip running InitGenesis on Foo fromVM[foo.ModuleName] = foo.AppModule{}.ConsensusVersion() return app.mm.RunMigrations(ctx, fromVM)})
+```go
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+	// ...
+
+	// Set foo's version to the latest ConsensusVersion in the VersionMap.
+	// This will skip running InitGenesis on foo.
+	fromVM[foo.ModuleName] = foo.AppModule{}.ConsensusVersion()
+
+	return app.mm.RunMigrations(ctx, cfg, fromVM)
+})
```

-### Overwriting Genesis Functions[​](#overwriting-genesis-functions "Direct link to Overwriting Genesis Functions")
+### Overwriting Genesis Functions

The Cosmos SDK offers modules that the application developer can import in their app. These modules often have an `InitGenesis` function already defined.

You can write your own `InitGenesis` function for an imported module. To do this, manually trigger your custom genesis function in the upgrade handler.

- You MUST manually set the consensus version in the version map passed to the `UpgradeHandler` function. Without this, the SDK will run the Module's existing `InitGenesis` code even if you triggered your custom function in the `UpgradeHandler`.
+ You MUST manually set the consensus version in the version map passed to the
+ `UpgradeHandler` function. Without this, the SDK will run the module's
+ existing `InitGenesis` code even if you triggered your custom function in the
+ `UpgradeHandler`.
-```
-import foo "github.com/my/module/foo"app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // Register the consensus version in the version map // to avoid the SDK from triggering the default // InitGenesis function. fromVM["foo"] = foo.AppModule{}.ConsensusVersion() // Run custom InitGenesis for foo app.mm["foo"].InitGenesis(ctx, app.appCodec, myCustomGenesisState) return app.mm.RunMigrations(ctx, cfg, fromVM)})
+```go expandable
+import foo "github.com/my/module/foo"
+
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+	// Register the consensus version in the version map
+	// to prevent the SDK from triggering the default
+	// InitGenesis function.
+	fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
+
+	// Run custom InitGenesis for foo
+	app.mm["foo"].InitGenesis(ctx, app.appCodec, myCustomGenesisState)
+
+	return app.mm.RunMigrations(ctx, cfg, fromVM)
+})
```

-## Syncing a Full Node to an Upgraded Blockchain[​](#syncing-a-full-node-to-an-upgraded-blockchain "Direct link to Syncing a Full Node to an Upgraded Blockchain")
+## Syncing a Full Node to an Upgraded Blockchain

You can sync a full node to an existing blockchain which has been upgraded using Cosmovisor. To successfully sync, you must start with the initial binary that the blockchain started with at genesis. If all Software Upgrade Plans contain binary instructions, then you can run Cosmovisor with the auto-download option to automatically handle downloading and switching to the binaries associated with each sequential upgrade. Otherwise, you need to manually provide all binaries to Cosmovisor.

-To learn more about Cosmovisor, see the [Cosmovisor Quick Start](/v0.53/build/tooling/cosmovisor).
+To learn more about Cosmovisor, see the [Cosmovisor Quick Start](/docs/sdk/v0.53/documentation/operations/cosmovisor).
diff --git a/docs/sdk/v0.53/documentation/operations/user.mdx b/docs/sdk/v0.53/documentation/operations/user.mdx new file mode 100644 index 00000000..efb2c11a --- /dev/null +++ b/docs/sdk/v0.53/documentation/operations/user.mdx @@ -0,0 +1,13 @@ +--- +title: User Guides +description: >- + This section is designed for developers who are using the Cosmos SDK to build + applications. It provides essential guides and references to effectively use + the SDK's features. +--- + +This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features. + +* [Setting up keys](/docs/sdk/v0.53/documentation/operations/keyring) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application. +* [Running a node](/docs/sdk/v0.53/documentation/operations/run-node) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps. +* [CLI](/docs/sdk/v0.53/documentation/operations/interact-node) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively. 
diff --git a/docs/sdk/v0.53/documentation/protocol-development/README.mdx b/docs/sdk/v0.53/documentation/protocol-development/README.mdx
new file mode 100644
index 00000000..984bf9e3
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/README.mdx
@@ -0,0 +1,5 @@
+---
+title: Addresses spec
+---
+
+* [Bech32](/docs/sdk/v0.53/documentation/protocol-development/bech32)
diff --git a/docs/sdk/v0.53/documentation/protocol-development/SPEC_MODULE.mdx b/docs/sdk/v0.53/documentation/protocol-development/SPEC_MODULE.mdx
new file mode 100644
index 00000000..926715c4
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/SPEC_MODULE.mdx
@@ -0,0 +1,65 @@
+---
+title: Specification of Modules
+description: >-
+  This file intends to outline the common structure for specifications within
+  this directory.
+---
+
+This file intends to outline the common structure for specifications within
+this directory.
+
+## Tense
+
+For consistency, specs should be written in passive present tense.
+
+## Pseudo-Code
+
+Generally, pseudo-code should be minimized throughout the spec. Often, simple
+bulleted lists which describe a function's operations are sufficient and should
+be considered preferable. In certain instances, due to the complex nature of
+the functionality being described, pseudo-code may be the most suitable form of
+specification. In these cases, use of pseudo-code is permissible, but should be
+presented in a concise manner, ideally restricted to only the complex
+element as a part of a larger description.
+
+## Common Layout
+
+The following generalized `README` structure should be used to break down
+specifications for modules. The following list is nonbinding and all sections are optional.
+ +* `# {Module Name}` - overview of the module +* `## Concepts` - describe specialized concepts and definitions used throughout the spec +* `## State` - specify and describe structures expected to be marshaled into the store, and their keys +* `## State Transitions` - standard state transition operations triggered by hooks, messages, etc. +* `## Messages` - specify message structure(s) and expected state machine behavior(s) +* `## Begin Block` - specify any begin-block operations +* `## End Block` - specify any end-block operations +* `## Hooks` - describe available hooks to be called by/from this module +* `## Events` - list and describe event tags used +* `## Client` - list and describe CLI commands and gRPC and REST endpoints +* `## Params` - list all module parameters, their types (in JSON) and examples +* `## Future Improvements` - describe future improvements of this module +* `## Tests` - acceptance tests +* `## Appendix` - supplementary details referenced elsewhere within the spec + +### Notation for key-value mapping + +Within `## State` the following notation `->` should be used to describe key to +value mapping: + +```text +key -> value +``` + +to represent byte concatenation the `|` may be used. In addition, encoding +type may be specified, for example: + +```text +0x00 | addressBytes | address2Bytes -> amino(value_object) +``` + +Additionally, index mappings may be specified by mapping to the `nil` value, for example: + +```text +0x01 | address2Bytes | addressBytes -> nil +``` diff --git a/docs/sdk/v0.53/documentation/protocol-development/SPEC_STANDARD.mdx b/docs/sdk/v0.53/documentation/protocol-development/SPEC_STANDARD.mdx new file mode 100644 index 00000000..6d302e03 --- /dev/null +++ b/docs/sdk/v0.53/documentation/protocol-development/SPEC_STANDARD.mdx @@ -0,0 +1,128 @@ +--- +title: What is an SDK standard? +--- + +An SDK standard is a design document describing a particular protocol, standard, or feature expected to be used by the Cosmos SDK. 
An SDK standard should list the desired properties of the standard, explain the design rationale, and provide a concise but comprehensive technical specification. The primary author is responsible for pushing the proposal through the standardization process, soliciting input and support from the community, and communicating with relevant stakeholders to ensure (social) consensus. + +## Sections + +An SDK standard consists of: + +* a synopsis, +* overview and basic concepts, +* technical specification, +* history log, and +* copyright notice. + +All top-level sections are required. References should be included inline as links, or tabulated at the bottom of the section if necessary. Included subsections should be listed in the order specified below. + +### Table Of Contents + +Provide a table of contents at the top of the file to help readers. + +### Synopsis + +The document should include a brief (\~200 word) synopsis providing a high-level description of and rationale for the specification. + +### Overview and basic concepts + +This section should include a motivation subsection and a definition subsection if required: + +* *Motivation* - A rationale for the existence of the proposed feature, or the proposed changes to an existing feature. +* *Definitions* - A list of new terms or concepts used in the document or required to understand it. + +### System model and properties + +This section should include an assumption subsection if any, the mandatory properties subsection, and a dependency subsection. Note that the first two subsections are tightly coupled: how to enforce a property will depend directly on the assumptions made. This subsection is important to capture the interactions of the specified feature with the "rest-of-the-world," i.e., with other features of the ecosystem. + +* *Assumptions* - A list of any assumptions made by the feature designer. 
It should capture which features are used by the feature under specification, and what we expect from them.
+* *Properties* - A list of the desired properties or characteristics of the feature specified, and expected effects or failures when the properties are violated. Where relevant, it can also include a list of properties that the feature does not guarantee.
+* *Dependencies* - A list of the features that use the feature under specification and how.
+
+### Technical specification
+
+This is the main section of the document, and should contain protocol documentation, design rationale, required references, and technical details where appropriate.
+The section may have any or all of the following subsections, as appropriate to the particular specification. The API subsection is especially encouraged when appropriate.
+
+* *API* - A detailed description of the feature's API.
+* *Technical Details* - All technical details including syntax, diagrams, semantics, protocols, data structures, algorithms, and pseudocode as appropriate. The technical specification should be detailed enough that separate correct implementations, written without knowledge of each other, are compatible.
+* *Backwards Compatibility* - A discussion of compatibility (or lack thereof) with previous feature or protocol versions.
+* *Known Issues* - A list of known issues. This subsection is especially important for specifications of already in-use features.
+* *Example Implementation* - A concrete example implementation or description of an expected implementation to serve as the primary reference for implementers.
+
+### History
+
+A specification should include a history section, listing any inspiring documents and a plaintext log of significant changes.
+
+See an example history section [below](#history-1).
+
+### Copyright
+
+A specification should include a copyright section waiving rights via [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+ +## Formatting + +### General + +Specifications must be written in GitHub-flavored Markdown. + +For a GitHub-flavored Markdown cheat sheet, see [here](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). For a local Markdown renderer, see [here](https://github.com/joeyespo/grip). + +### Language + +Specifications should be written in Simple English, avoiding obscure terminology and unnecessary jargon. For excellent examples of Simple English, please see the [Simple English Wikipedia](https://simple.wikipedia.org/wiki/Main_Page). + +The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in specifications are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119). + +### Pseudocode + +Pseudocode in specifications should be language-agnostic and formatted in a simple imperative standard, with line numbers, variables, simple conditional blocks, for loops, and +English fragments where necessary to explain further functionality such as scheduling timeouts. LaTeX images should be avoided because they are challenging to review in diff form. + +Pseudocode for structs can be written in a simple language like TypeScript or golang, as interfaces. + +Example Golang pseudocode struct: + +```go +type CacheKVStore interface { + cache: map[Key]Value + parent: KVStore + deleted: Key +} +``` + +Pseudocode for algorithms should be written in simple Golang, as functions. 
+
+Example pseudocode algorithm:
+
+```go expandable
+func get(store CacheKVStore, key Key) Value {
+	value = store.cache.get(key)
+	if value != null {
+		return value
+	} else {
+		value = store.parent.get(key)
+		store.cache.set(key, value)
+		return value
+	}
+}
+```
+
+## History
+
+This specification was significantly inspired by and derived from IBC's [ICS](https://github.com/cosmos/ibc/blob/main/spec/ics-001-ics-standard/README.md), which
+was in turn derived from Ethereum's [EIP 1](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md).
+
+Nov 24, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/docs/sdk/v0.53/documentation/protocol-development/accounts.mdx b/docs/sdk/v0.53/documentation/protocol-development/accounts.mdx
new file mode 100644
index 00000000..45572a72
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/accounts.mdx
@@ -0,0 +1,3590 @@
+---
+title: Accounts
+---
+
+## Synopsis
+
+This document describes the in-built account and public key system of the Cosmos SDK.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy)
+
+
+
+## Account Definition
+
+In the Cosmos SDK, an _account_ designates a pair of _public key_ `PubKey` and _private key_ `PrivKey`. The `PubKey` can be derived to generate various `Addresses`, which are used to identify users (among other parties) in the application. `Addresses` are also associated with [`message`s](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages) to identify the sender of the `message`. The `PrivKey` is used to generate [digital signatures](#signatures) to prove that an `Address` associated with the `PrivKey` approved of a given `message`.
+ +For HD key derivation the Cosmos SDK uses a standard called [BIP32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki). The BIP32 allows users to create an HD wallet (as specified in [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)) - a set of accounts derived from an initial secret seed. A seed is usually created from a 12- or 24-word mnemonic. A single seed can derive any number of `PrivKey`s using a one-way cryptographic function. Then, a `PubKey` can be derived from the `PrivKey`. Naturally, the mnemonic is the most sensitive information, as private keys can always be re-generated if the mnemonic is preserved. + +```text expandable + Account 0 Account 1 Account 2 + ++------------------+ +------------------+ +------------------+ +| | | | | | +| Address 0 | | Address 1 | | Address 2 | +| ^ | | ^ | | ^ | +| | | | | | | | | +| | | | | | | | | +| | | | | | | | | +| + | | + | | + | +| Public key 0 | | Public key 1 | | Public key 2 | +| ^ | | ^ | | ^ | +| | | | | | | | | +| | | | | | | | | +| | | | | | | | | +| + | | + | | + | +| Private key 0 | | Private key 1 | | Private key 2 | +| ^ | | ^ | | ^ | ++------------------+ +------------------+ +------------------+ + | | | + | | | + | | | + +--------------------------------------------------------------------+ + | + | + +---------+---------+ + | | + | Master PrivKey | + | | + +-------------------+ + | + | + +---------+---------+ + | | + | Mnemonic (Seed) | + | | + +-------------------+ +``` + +In the Cosmos SDK, keys are stored and managed by using an object called a [`Keyring`](#keyring). + +## Keys, accounts, addresses, and signatures + +The principal way of authenticating a user is done using [digital signatures](https://en.wikipedia.org/wiki/Digital_signature). Users sign transactions using their own private key. Signature verification is done with the associated public key. 
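The seed-to-keys derivation pictured above is a one-way function: any number of private keys can be produced from one seed, but the seed cannot be recovered from a key. As a rough, self-contained illustration of that property (this is deliberately not the actual BIP32 algorithm, which additionally chains a "chain code" and distinguishes hardened derivation):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha512"
	"encoding/binary"
	"fmt"
)

// deriveChildKey illustrates one-way HD derivation: child key material
// is an HMAC-SHA512 of the account index, keyed by the seed. Reversing
// it would require breaking the underlying hash function.
func deriveChildKey(seed []byte, index uint32) []byte {
	mac := hmac.New(sha512.New, seed)
	var idx [4]byte
	binary.BigEndian.PutUint32(idx[:], index)
	mac.Write(idx[:])
	return mac.Sum(nil)[:32] // 32 bytes of private key material
}

func main() {
	seed := []byte("seed material from a 24-word mnemonic")
	fmt.Printf("account 0: %x\n", deriveChildKey(seed, 0))
	fmt.Printf("account 1: %x\n", deriveChildKey(seed, 1))
}
```

The same seed and index always yield the same key, which is why preserving the mnemonic is sufficient to re-generate every account.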
For on-chain signature verification purposes, we store the public key in an `Account` object (alongside other data required for a proper transaction validation). + +In the node, all data is stored using Protocol Buffers serialization. + +The Cosmos SDK supports the following digital key schemes for creating digital signatures: + +- `secp256k1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256k1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/secp256k1/secp256k1.go). +- `secp256r1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/secp256r1/pubkey.go), +- `tm-ed25519`, as implemented in the [Cosmos SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/ed25519/ed25519.go). This scheme is supported only for the consensus validation. + +| | Address length in bytes | Public key length in bytes | Used for transaction authentication | Used for consensus (cometbft) | +| :----------: | :---------------------: | :------------------------: | :---------------------------------: | :---------------------------: | +| `secp256k1` | 20 | 33 | yes | no | +| `secp256r1` | 32 | 33 | yes | no | +| `tm-ed25519` | -- not used -- | 32 | no | yes | + +## Addresses + +`Addresses` and `PubKey`s are both public information that identifies actors in the application. `Account` is used to store authentication information. The basic account implementation is provided by a `BaseAccount` object. + +Each account is identified using `Address` which is a sequence of bytes derived from a public key. In the Cosmos SDK, we define 3 types of addresses that specify a context where an account is used: + +- `AccAddress` identifies users (the sender of a `message`). +- `ValAddress` identifies validator operators. +- `ConsAddress` identifies validator nodes that are participating in consensus. Validator nodes are derived using the **`ed25519`** curve. 
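As a rough sketch of how a fixed-length address falls out of a public key: the key is hashed and the digest (or a prefix of it) becomes the address bytes. For secp256k1 keys the SDK actually computes `RIPEMD160(SHA256(pubkey))`; RIPEMD-160 is outside Go's standard library, so this illustrative stand-in truncates a SHA-256 digest to the same 20-byte length shown in the table above:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// accAddrBytes sketches deriving fixed-length address bytes from a
// public key by hashing. NOTE: simplified stand-in, not the SDK's
// actual secp256k1 scheme (RIPEMD160 over SHA256).
func accAddrBytes(pubKey []byte) []byte {
	sum := sha256.Sum256(pubKey)
	return sum[:20] // 20-byte account address, as in the table above
}

func main() {
	pub := []byte{0x02, 0xa1, 0xb2} // placeholder compressed-pubkey bytes
	addr := accAddrBytes(pub)
	fmt.Printf("address bytes: %x (len %d)\n", addr, len(addr))
}
```

Because the address is a hash, it is deterministic for a given key but reveals nothing about the key itself.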
+ +These types implement the `Address` interface: + +```go expandable +package types + +import ( + + "bytes" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "strings" + "sync" + "sync/atomic" + "github.com/hashicorp/golang-lru/simplelru" + "sigs.k8s.io/yaml" + + errorsmod "cosmossdk.io/errors" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/internal/conv" + "github.com/cosmos/cosmos-sdk/types/address" + "github.com/cosmos/cosmos-sdk/types/bech32" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +const ( + / Constants defined here are the defaults value for address. + / You can use the specific values for your project. + / Add the follow lines to the `main()` of your server. + / + / config := sdk.GetConfig() + / config.SetBech32PrefixForAccount(yourBech32PrefixAccAddr, yourBech32PrefixAccPub) + / config.SetBech32PrefixForValidator(yourBech32PrefixValAddr, yourBech32PrefixValPub) + / config.SetBech32PrefixForConsensusNode(yourBech32PrefixConsAddr, yourBech32PrefixConsPub) + / config.SetPurpose(yourPurpose) + / config.SetCoinType(yourCoinType) + / config.Seal() + + / Bech32MainPrefix defines the main SDK Bech32 prefix of an account's address + Bech32MainPrefix = "cosmos" + + / Purpose is the ATOM purpose as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +Purpose = 44 + + / CoinType is the ATOM coin type as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +CoinType = 118 + + / FullFundraiserPath is the parts of the BIP44 HD path that are fixed by + / what we used during the ATOM fundraiser. 
+ FullFundraiserPath = "m/44'/118'/0'/0/0" + + / PrefixAccount is the prefix for account keys + PrefixAccount = "acc" + / PrefixValidator is the prefix for validator keys + PrefixValidator = "val" + / PrefixConsensus is the prefix for consensus keys + PrefixConsensus = "cons" + / PrefixPublic is the prefix for public keys + PrefixPublic = "pub" + / PrefixOperator is the prefix for operator keys + PrefixOperator = "oper" + + / PrefixAddress is the prefix for addresses + PrefixAddress = "addr" + + / Bech32PrefixAccAddr defines the Bech32 prefix of an account's address + Bech32PrefixAccAddr = Bech32MainPrefix + / Bech32PrefixAccPub defines the Bech32 prefix of an account's public key + Bech32PrefixAccPub = Bech32MainPrefix + PrefixPublic + / Bech32PrefixValAddr defines the Bech32 prefix of a validator's operator address + Bech32PrefixValAddr = Bech32MainPrefix + PrefixValidator + PrefixOperator + / Bech32PrefixValPub defines the Bech32 prefix of a validator's operator public key + Bech32PrefixValPub = Bech32MainPrefix + PrefixValidator + PrefixOperator + PrefixPublic + / Bech32PrefixConsAddr defines the Bech32 prefix of a consensus node address + Bech32PrefixConsAddr = Bech32MainPrefix + PrefixValidator + PrefixConsensus + / Bech32PrefixConsPub defines the Bech32 prefix of a consensus node public key + Bech32PrefixConsPub = Bech32MainPrefix + PrefixValidator + PrefixConsensus + PrefixPublic +) + +/ cache variables +var ( + / AccAddress.String() + +is expensive and if unoptimized dominantly showed up in profiles, + / yet has no mechanisms to trivially cache the result given that AccAddress is a []byte type. 
+ accAddrMu sync.Mutex + accAddrCache *simplelru.LRU + consAddrMu sync.Mutex + consAddrCache *simplelru.LRU + valAddrMu sync.Mutex + valAddrCache *simplelru.LRU + + isCachingEnabled atomic.Bool +) + +/ sentinel errors +var ( + ErrEmptyHexAddress = errors.New("decoding address from hex string failed: empty address") +) + +func init() { + var err error + SetAddrCacheEnabled(true) + + / in total the cache size is 61k entries. Key is 32 bytes and value is around 50-70 bytes. + / That will make around 92 * 61k * 2 (LRU) + +bytes ~ 11 MB + if accAddrCache, err = simplelru.NewLRU(60000, nil); err != nil { + panic(err) +} + if consAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} + if valAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} +} + +/ SetAddrCacheEnabled enables or disables accAddrCache, consAddrCache, and valAddrCache. By default, caches are enabled. +func SetAddrCacheEnabled(enabled bool) { + isCachingEnabled.Store(enabled) +} + +/ IsAddrCacheEnabled returns if the address caches are enabled. +func IsAddrCacheEnabled() + +bool { + return isCachingEnabled.Load() +} + +/ Address is a common interface for different types of addresses used by the SDK +type Address interface { + Equals(Address) + +bool + Empty() + +bool + Marshal() ([]byte, error) + +MarshalJSON() ([]byte, error) + +Bytes() []byte + String() + +string + Format(s fmt.State, verb rune) +} + +/ Ensure that different address types implement the interface +var ( + _ Address = AccAddress{ +} + _ Address = ValAddress{ +} + _ Address = ConsAddress{ +} +) + +/ ---------------------------------------------------------------------------- +/ account +/ ---------------------------------------------------------------------------- + +/ AccAddress a wrapper around bytes meant to represent an account address. +/ When marshaled to a string or JSON, it uses Bech32. +type AccAddress []byte + +/ AccAddressFromHexUnsafe creates an AccAddress from a HEX-encoded string. 
+/ +/ Note, this function is considered unsafe as it may produce an AccAddress from +/ otherwise invalid input, such as a transaction hash. Please use +/ AccAddressFromBech32. +func AccAddressFromHexUnsafe(address string) (addr AccAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return AccAddress(bz), err +} + +/ VerifyAddressFormat verifies that the provided bytes form a valid address +/ according to the default address rules or a custom address verifier set by +/ GetConfig().SetAddressVerifier(). +/ TODO make an issue to get rid of global Config +/ ref: https://github.com/cosmos/cosmos-sdk/issues/9690 +func VerifyAddressFormat(bz []byte) + +error { + verifier := GetConfig().GetAddressVerifier() + if verifier != nil { + return verifier(bz) +} + if len(bz) == 0 { + return errorsmod.Wrap(sdkerrors.ErrUnknownAddress, "addresses cannot be empty") +} + if len(bz) > address.MaxAddrLen { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "address max length is %d, got %d", address.MaxAddrLen, len(bz)) +} + +return nil +} + +/ MustAccAddressFromBech32 calls AccAddressFromBech32 and panics on error. +func MustAccAddressFromBech32(address string) + +AccAddress { + addr, err := AccAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ AccAddressFromBech32 creates an AccAddress from a Bech32 string. 
+func AccAddressFromBech32(address string) (addr AccAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return AccAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixAccAddr := GetConfig().GetBech32AccountAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixAccAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return AccAddress(bz), nil +} + +/ Returns boolean for whether two AccAddresses are Equal +func (aa AccAddress) + +Equals(aa2 Address) + +bool { + if aa.Empty() && aa2.Empty() { + return true +} + +return bytes.Equal(aa.Bytes(), aa2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (aa AccAddress) + +Empty() + +bool { + return len(aa) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (aa AccAddress) + +Marshal() ([]byte, error) { + return aa, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (aa *AccAddress) + +Unmarshal(data []byte) + +error { + *aa = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (aa AccAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(aa.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (aa AccAddress) + +MarshalYAML() (any, error) { + return aa.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ UnmarshalYAML unmarshals from JSON assuming Bech32 encoding. 
+func (aa *AccAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ Bytes returns the raw address bytes. +func (aa AccAddress) + +Bytes() []byte { + return aa +} + +/ String implements the Stringer interface. +func (aa AccAddress) + +String() + +string { + if aa.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(aa) + if IsAddrCacheEnabled() { + accAddrMu.Lock() + +defer accAddrMu.Unlock() + +addr, ok := accAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32AccountAddrPrefix(), aa, accAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. + +func (aa AccAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(aa.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", aa) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(aa)) +} +} + +/ ---------------------------------------------------------------------------- +/ validator operator +/ ---------------------------------------------------------------------------- + +/ ValAddress defines a wrapper around bytes meant to present a validator's +/ operator. When marshaled to a string or JSON, it uses Bech32. +type ValAddress []byte + +/ ValAddressFromHex creates a ValAddress from a hex string. +func ValAddressFromHex(address string) (addr ValAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ValAddress(bz), err +} + +/ ValAddressFromBech32 creates a ValAddress from a Bech32 string. 
+func ValAddressFromBech32(address string) (addr ValAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ValAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixValAddr := GetConfig().GetBech32ValidatorAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixValAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ValAddress(bz), nil +} + +/ MustValAddressFromBech32 calls ValAddressFromBech32 and panics on error. +func MustValAddressFromBech32(address string) + +ValAddress { + addr, err := ValAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ Returns boolean for whether two ValAddresses are Equal +func (va ValAddress) + +Equals(va2 Address) + +bool { + if va.Empty() && va2.Empty() { + return true +} + +return bytes.Equal(va.Bytes(), va2.Bytes()) +} + +/ Returns boolean for whether an ValAddress is empty +func (va ValAddress) + +Empty() + +bool { + return len(va) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (va ValAddress) + +Marshal() ([]byte, error) { + return va, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (va *ValAddress) + +Unmarshal(data []byte) + +error { + *va = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (va ValAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(va.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (va ValAddress) + +MarshalYAML() (any, error) { + return va.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. 
+func (va *ValAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ Bytes returns the raw address bytes. +func (va ValAddress) + +Bytes() []byte { + return va +} + +/ String implements the Stringer interface. +func (va ValAddress) + +String() + +string { + if va.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(va) + if IsAddrCacheEnabled() { + valAddrMu.Lock() + +defer valAddrMu.Unlock() + +addr, ok := valAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ValidatorAddrPrefix(), va, valAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. + +func (va ValAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(va.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", va) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(va)) +} +} + +/ ---------------------------------------------------------------------------- +/ consensus node +/ ---------------------------------------------------------------------------- + +/ ConsAddress defines a wrapper around bytes meant to present a consensus node. +/ When marshaled to a string or JSON, it uses Bech32. +type ConsAddress []byte + +/ ConsAddressFromHex creates a ConsAddress from a hex string. 
+/ Deprecated: use ConsensusAddressCodec from Staking keeper +func ConsAddressFromHex(address string) (addr ConsAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ConsAddress(bz), err +} + +/ ConsAddressFromBech32 creates a ConsAddress from a Bech32 string. +func ConsAddressFromBech32(address string) (addr ConsAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ConsAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixConsAddr := GetConfig().GetBech32ConsensusAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixConsAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ConsAddress(bz), nil +} + +/ get ConsAddress from pubkey +func GetConsAddress(pubkey cryptotypes.PubKey) + +ConsAddress { + return ConsAddress(pubkey.Address()) +} + +/ Returns boolean for whether two ConsAddress are Equal +func (ca ConsAddress) + +Equals(ca2 Address) + +bool { + if ca.Empty() && ca2.Empty() { + return true +} + +return bytes.Equal(ca.Bytes(), ca2.Bytes()) +} + +/ Returns boolean for whether an ConsAddress is empty +func (ca ConsAddress) + +Empty() + +bool { + return len(ca) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (ca ConsAddress) + +Marshal() ([]byte, error) { + return ca, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (ca *ConsAddress) + +Unmarshal(data []byte) + +error { + *ca = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (ca ConsAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(ca.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (ca ConsAddress) + +MarshalYAML() (any, error) { + return ca.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. 
+func (ca *ConsAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ Bytes returns the raw address bytes. +func (ca ConsAddress) + +Bytes() []byte { + return ca +} + +/ String implements the Stringer interface. +func (ca ConsAddress) + +String() + +string { + if ca.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(ca) + if IsAddrCacheEnabled() { + consAddrMu.Lock() + +defer consAddrMu.Unlock() + +addr, ok := consAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ConsensusAddrPrefix(), ca, consAddrCache, key) +} + +/ Bech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. Returns an error if the bech32 conversion +/ fails or the prefix is empty. +func Bech32ifyAddressBytes(prefix string, bs []byte) (string, error) { + if len(bs) == 0 { + return "", nil +} + if len(prefix) == 0 { + return "", errors.New("prefix cannot be empty") +} + +return bech32.ConvertAndEncode(prefix, bs) +} + +/ MustBech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. It panics if the bech32 conversion +/ fails or the prefix is empty. 
+func MustBech32ifyAddressBytes(prefix string, bs []byte) + +string { + s, err := Bech32ifyAddressBytes(prefix, bs) + if err != nil { + panic(err) +} + +return s +} + +/ Format implements the fmt.Formatter interface. + +func (ca ConsAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(ca.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", ca) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(ca)) +} +} + +/ ---------------------------------------------------------------------------- +/ auxiliary +/ ---------------------------------------------------------------------------- + +var errBech32EmptyAddress = errors.New("decoding Bech32 address failed: must provide a non empty address") + +/ GetFromBech32 decodes a bytestring from a Bech32 encoded string. +func GetFromBech32(bech32str, prefix string) ([]byte, error) { + if len(bech32str) == 0 { + return nil, errBech32EmptyAddress +} + +hrp, bz, err := bech32.DecodeAndConvert(bech32str) + if err != nil { + return nil, err +} + if hrp != prefix { + return nil, fmt.Errorf("invalid Bech32 prefix; expected %s, got %s", prefix, hrp) +} + +return bz, nil +} + +func addressBytesFromHexString(address string) ([]byte, error) { + if len(address) == 0 { + return nil, ErrEmptyHexAddress +} + +return hex.DecodeString(address) +} + +/ cacheBech32Addr is not concurrency safe. Concurrent access to cache causes race condition. +func cacheBech32Addr(prefix string, addr []byte, cache *simplelru.LRU, cacheKey string) + +string { + bech32Addr, err := bech32.ConvertAndEncode(prefix, addr) + if err != nil { + panic(err) +} + if IsAddrCacheEnabled() { + cache.Add(cacheKey, bech32Addr) +} + +return bech32Addr +} +``` + +Address construction algorithm is defined in [ADR-28](/docs/common/pages/adr-comprehensive#adr-028-public-key-addresses). 
+Here is the standard way to obtain an account address from a `pub` public key: + +```go +sdk.AccAddress(pub.Address().Bytes()) +``` + +Of note, the `Marshal()` and `Bytes()` methods both return the same raw `[]byte` form of the address; `Marshal()` is required for Protobuf compatibility. + +For user interaction, addresses are formatted using [Bech32](https://en.bitcoin.it/wiki/Bech32), implemented by the `String` method. Bech32 is the only supported format for interacting with a blockchain. The Bech32 human-readable part (Bech32 prefix) denotes the address type. Example: + +```go expandable +package types + +import ( + + "bytes" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "strings" + "sync" + "sync/atomic" + "github.com/hashicorp/golang-lru/simplelru" + "sigs.k8s.io/yaml" + + errorsmod "cosmossdk.io/errors" + + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/internal/conv" + "github.com/cosmos/cosmos-sdk/types/address" + "github.com/cosmos/cosmos-sdk/types/bech32" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" +) + +const ( + / Constants defined here are the defaults value for address. + / You can use the specific values for your project. + / Add the follow lines to the `main()` of your server.
+ / + / config := sdk.GetConfig() + / config.SetBech32PrefixForAccount(yourBech32PrefixAccAddr, yourBech32PrefixAccPub) + / config.SetBech32PrefixForValidator(yourBech32PrefixValAddr, yourBech32PrefixValPub) + / config.SetBech32PrefixForConsensusNode(yourBech32PrefixConsAddr, yourBech32PrefixConsPub) + / config.SetPurpose(yourPurpose) + / config.SetCoinType(yourCoinType) + / config.Seal() + + / Bech32MainPrefix defines the main SDK Bech32 prefix of an account's address + Bech32MainPrefix = "cosmos" + + / Purpose is the ATOM purpose as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +Purpose = 44 + + / CoinType is the ATOM coin type as defined in SLIP44 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) + +CoinType = 118 + + / FullFundraiserPath is the parts of the BIP44 HD path that are fixed by + / what we used during the ATOM fundraiser. + FullFundraiserPath = "m/44'/118'/0'/0/0" + + / PrefixAccount is the prefix for account keys + PrefixAccount = "acc" + / PrefixValidator is the prefix for validator keys + PrefixValidator = "val" + / PrefixConsensus is the prefix for consensus keys + PrefixConsensus = "cons" + / PrefixPublic is the prefix for public keys + PrefixPublic = "pub" + / PrefixOperator is the prefix for operator keys + PrefixOperator = "oper" + + / PrefixAddress is the prefix for addresses + PrefixAddress = "addr" + + / Bech32PrefixAccAddr defines the Bech32 prefix of an account's address + Bech32PrefixAccAddr = Bech32MainPrefix + / Bech32PrefixAccPub defines the Bech32 prefix of an account's public key + Bech32PrefixAccPub = Bech32MainPrefix + PrefixPublic + / Bech32PrefixValAddr defines the Bech32 prefix of a validator's operator address + Bech32PrefixValAddr = Bech32MainPrefix + PrefixValidator + PrefixOperator + / Bech32PrefixValPub defines the Bech32 prefix of a validator's operator public key + Bech32PrefixValPub = Bech32MainPrefix + PrefixValidator + PrefixOperator + PrefixPublic + / 
Bech32PrefixConsAddr defines the Bech32 prefix of a consensus node address + Bech32PrefixConsAddr = Bech32MainPrefix + PrefixValidator + PrefixConsensus + / Bech32PrefixConsPub defines the Bech32 prefix of a consensus node public key + Bech32PrefixConsPub = Bech32MainPrefix + PrefixValidator + PrefixConsensus + PrefixPublic +) + +/ cache variables +var ( + / AccAddress.String() + +is expensive and if unoptimized dominantly showed up in profiles, + / yet has no mechanisms to trivially cache the result given that AccAddress is a []byte type. + accAddrMu sync.Mutex + accAddrCache *simplelru.LRU + consAddrMu sync.Mutex + consAddrCache *simplelru.LRU + valAddrMu sync.Mutex + valAddrCache *simplelru.LRU + + isCachingEnabled atomic.Bool +) + +/ sentinel errors +var ( + ErrEmptyHexAddress = errors.New("decoding address from hex string failed: empty address") +) + +func init() { + var err error + SetAddrCacheEnabled(true) + + / in total the cache size is 61k entries. Key is 32 bytes and value is around 50-70 bytes. + / That will make around 92 * 61k * 2 (LRU) + +bytes ~ 11 MB + if accAddrCache, err = simplelru.NewLRU(60000, nil); err != nil { + panic(err) +} + if consAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} + if valAddrCache, err = simplelru.NewLRU(500, nil); err != nil { + panic(err) +} +} + +/ SetAddrCacheEnabled enables or disables accAddrCache, consAddrCache, and valAddrCache. By default, caches are enabled. +func SetAddrCacheEnabled(enabled bool) { + isCachingEnabled.Store(enabled) +} + +/ IsAddrCacheEnabled returns if the address caches are enabled. 
+func IsAddrCacheEnabled() + +bool { + return isCachingEnabled.Load() +} + +/ Address is a common interface for different types of addresses used by the SDK +type Address interface { + Equals(Address) + +bool + Empty() + +bool + Marshal() ([]byte, error) + +MarshalJSON() ([]byte, error) + +Bytes() []byte + String() + +string + Format(s fmt.State, verb rune) +} + +/ Ensure that different address types implement the interface +var ( + _ Address = AccAddress{ +} + _ Address = ValAddress{ +} + _ Address = ConsAddress{ +} +) + +/ ---------------------------------------------------------------------------- +/ account +/ ---------------------------------------------------------------------------- + +/ AccAddress a wrapper around bytes meant to represent an account address. +/ When marshaled to a string or JSON, it uses Bech32. +type AccAddress []byte + +/ AccAddressFromHexUnsafe creates an AccAddress from a HEX-encoded string. +/ +/ Note, this function is considered unsafe as it may produce an AccAddress from +/ otherwise invalid input, such as a transaction hash. Please use +/ AccAddressFromBech32. +func AccAddressFromHexUnsafe(address string) (addr AccAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return AccAddress(bz), err +} + +/ VerifyAddressFormat verifies that the provided bytes form a valid address +/ according to the default address rules or a custom address verifier set by +/ GetConfig().SetAddressVerifier(). 
+/ TODO make an issue to get rid of global Config +/ ref: https://github.com/cosmos/cosmos-sdk/issues/9690 +func VerifyAddressFormat(bz []byte) + +error { + verifier := GetConfig().GetAddressVerifier() + if verifier != nil { + return verifier(bz) +} + if len(bz) == 0 { + return errorsmod.Wrap(sdkerrors.ErrUnknownAddress, "addresses cannot be empty") +} + if len(bz) > address.MaxAddrLen { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "address max length is %d, got %d", address.MaxAddrLen, len(bz)) +} + +return nil +} + +/ MustAccAddressFromBech32 calls AccAddressFromBech32 and panics on error. +func MustAccAddressFromBech32(address string) + +AccAddress { + addr, err := AccAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ AccAddressFromBech32 creates an AccAddress from a Bech32 string. +func AccAddressFromBech32(address string) (addr AccAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return AccAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixAccAddr := GetConfig().GetBech32AccountAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixAccAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return AccAddress(bz), nil +} + +/ Returns boolean for whether two AccAddresses are Equal +func (aa AccAddress) + +Equals(aa2 Address) + +bool { + if aa.Empty() && aa2.Empty() { + return true +} + +return bytes.Equal(aa.Bytes(), aa2.Bytes()) +} + +/ Returns boolean for whether an AccAddress is empty +func (aa AccAddress) + +Empty() + +bool { + return len(aa) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. +func (aa AccAddress) + +Marshal() ([]byte, error) { + return aa, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. 
+func (aa *AccAddress) + +Unmarshal(data []byte) + +error { + *aa = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (aa AccAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(aa.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (aa AccAddress) + +MarshalYAML() (any, error) { + return aa.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ UnmarshalYAML unmarshals from JSON assuming Bech32 encoding. +func (aa *AccAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *aa = AccAddress{ +} + +return nil +} + +aa2, err := AccAddressFromBech32(s) + if err != nil { + return err +} + + *aa = aa2 + return nil +} + +/ Bytes returns the raw address bytes. +func (aa AccAddress) + +Bytes() []byte { + return aa +} + +/ String implements the Stringer interface. +func (aa AccAddress) + +String() + +string { + if aa.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(aa) + if IsAddrCacheEnabled() { + accAddrMu.Lock() + +defer accAddrMu.Unlock() + +addr, ok := accAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32AccountAddrPrefix(), aa, accAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. 
+ +func (aa AccAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(aa.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", aa) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(aa)) +} +} + +/ ---------------------------------------------------------------------------- +/ validator operator +/ ---------------------------------------------------------------------------- + +/ ValAddress defines a wrapper around bytes meant to present a validator's +/ operator. When marshaled to a string or JSON, it uses Bech32. +type ValAddress []byte + +/ ValAddressFromHex creates a ValAddress from a hex string. +func ValAddressFromHex(address string) (addr ValAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ValAddress(bz), err +} + +/ ValAddressFromBech32 creates a ValAddress from a Bech32 string. +func ValAddressFromBech32(address string) (addr ValAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ValAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixValAddr := GetConfig().GetBech32ValidatorAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixValAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ValAddress(bz), nil +} + +/ MustValAddressFromBech32 calls ValAddressFromBech32 and panics on error. +func MustValAddressFromBech32(address string) + +ValAddress { + addr, err := ValAddressFromBech32(address) + if err != nil { + panic(err) +} + +return addr +} + +/ Returns boolean for whether two ValAddresses are Equal +func (va ValAddress) + +Equals(va2 Address) + +bool { + if va.Empty() && va2.Empty() { + return true +} + +return bytes.Equal(va.Bytes(), va2.Bytes()) +} + +/ Returns boolean for whether an ValAddress is empty +func (va ValAddress) + +Empty() + +bool { + return len(va) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. 
+func (va ValAddress) + +Marshal() ([]byte, error) { + return va, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (va *ValAddress) + +Unmarshal(data []byte) + +error { + *va = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (va ValAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(va.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (va ValAddress) + +MarshalYAML() (any, error) { + return va.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (va *ValAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *va = ValAddress{ +} + +return nil +} + +va2, err := ValAddressFromBech32(s) + if err != nil { + return err +} + + *va = va2 + return nil +} + +/ Bytes returns the raw address bytes. +func (va ValAddress) + +Bytes() []byte { + return va +} + +/ String implements the Stringer interface. +func (va ValAddress) + +String() + +string { + if va.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(va) + if IsAddrCacheEnabled() { + valAddrMu.Lock() + +defer valAddrMu.Unlock() + +addr, ok := valAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ValidatorAddrPrefix(), va, valAddrCache, key) +} + +/ Format implements the fmt.Formatter interface. 
+ +func (va ValAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(va.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", va) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(va)) +} +} + +/ ---------------------------------------------------------------------------- +/ consensus node +/ ---------------------------------------------------------------------------- + +/ ConsAddress defines a wrapper around bytes meant to present a consensus node. +/ When marshaled to a string or JSON, it uses Bech32. +type ConsAddress []byte + +/ ConsAddressFromHex creates a ConsAddress from a hex string. +/ Deprecated: use ConsensusAddressCodec from Staking keeper +func ConsAddressFromHex(address string) (addr ConsAddress, err error) { + bz, err := addressBytesFromHexString(address) + +return ConsAddress(bz), err +} + +/ ConsAddressFromBech32 creates a ConsAddress from a Bech32 string. +func ConsAddressFromBech32(address string) (addr ConsAddress, err error) { + if len(strings.TrimSpace(address)) == 0 { + return ConsAddress{ +}, errors.New("empty address string is not allowed") +} + +bech32PrefixConsAddr := GetConfig().GetBech32ConsensusAddrPrefix() + +bz, err := GetFromBech32(address, bech32PrefixConsAddr) + if err != nil { + return nil, err +} + +err = VerifyAddressFormat(bz) + if err != nil { + return nil, err +} + +return ConsAddress(bz), nil +} + +/ get ConsAddress from pubkey +func GetConsAddress(pubkey cryptotypes.PubKey) + +ConsAddress { + return ConsAddress(pubkey.Address()) +} + +/ Returns boolean for whether two ConsAddress are Equal +func (ca ConsAddress) + +Equals(ca2 Address) + +bool { + if ca.Empty() && ca2.Empty() { + return true +} + +return bytes.Equal(ca.Bytes(), ca2.Bytes()) +} + +/ Returns boolean for whether an ConsAddress is empty +func (ca ConsAddress) + +Empty() + +bool { + return len(ca) == 0 +} + +/ Marshal returns the raw address bytes. It is needed for protobuf +/ compatibility. 
+func (ca ConsAddress) + +Marshal() ([]byte, error) { + return ca, nil +} + +/ Unmarshal sets the address to the given data. It is needed for protobuf +/ compatibility. +func (ca *ConsAddress) + +Unmarshal(data []byte) + +error { + *ca = data + return nil +} + +/ MarshalJSON marshals to JSON using Bech32. +func (ca ConsAddress) + +MarshalJSON() ([]byte, error) { + return json.Marshal(ca.String()) +} + +/ MarshalYAML marshals to YAML using Bech32. +func (ca ConsAddress) + +MarshalYAML() (any, error) { + return ca.String(), nil +} + +/ UnmarshalJSON unmarshals from JSON assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalJSON(data []byte) + +error { + var s string + err := json.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ UnmarshalYAML unmarshals from YAML assuming Bech32 encoding. +func (ca *ConsAddress) + +UnmarshalYAML(data []byte) + +error { + var s string + err := yaml.Unmarshal(data, &s) + if err != nil { + return err +} + if s == "" { + *ca = ConsAddress{ +} + +return nil +} + +ca2, err := ConsAddressFromBech32(s) + if err != nil { + return err +} + + *ca = ca2 + return nil +} + +/ Bytes returns the raw address bytes. +func (ca ConsAddress) + +Bytes() []byte { + return ca +} + +/ String implements the Stringer interface. +func (ca ConsAddress) + +String() + +string { + if ca.Empty() { + return "" +} + key := conv.UnsafeBytesToStr(ca) + if IsAddrCacheEnabled() { + consAddrMu.Lock() + +defer consAddrMu.Unlock() + +addr, ok := consAddrCache.Get(key) + if ok { + return addr.(string) +} + +} + +return cacheBech32Addr(GetConfig().GetBech32ConsensusAddrPrefix(), ca, consAddrCache, key) +} + +/ Bech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. 
Returns an error if the bech32 conversion +/ fails or the prefix is empty. +func Bech32ifyAddressBytes(prefix string, bs []byte) (string, error) { + if len(bs) == 0 { + return "", nil +} + if len(prefix) == 0 { + return "", errors.New("prefix cannot be empty") +} + +return bech32.ConvertAndEncode(prefix, bs) +} + +/ MustBech32ifyAddressBytes returns a bech32 representation of address bytes. +/ Returns an empty sting if the byte slice is 0-length. It panics if the bech32 conversion +/ fails or the prefix is empty. +func MustBech32ifyAddressBytes(prefix string, bs []byte) + +string { + s, err := Bech32ifyAddressBytes(prefix, bs) + if err != nil { + panic(err) +} + +return s +} + +/ Format implements the fmt.Formatter interface. + +func (ca ConsAddress) + +Format(s fmt.State, verb rune) { + switch verb { + case 's': + _, _ = s.Write([]byte(ca.String())) + case 'p': + _, _ = fmt.Fprintf(s, "%p", ca) + +default: + _, _ = fmt.Fprintf(s, "%X", []byte(ca)) +} +} + +/ ---------------------------------------------------------------------------- +/ auxiliary +/ ---------------------------------------------------------------------------- + +var errBech32EmptyAddress = errors.New("decoding Bech32 address failed: must provide a non empty address") + +/ GetFromBech32 decodes a bytestring from a Bech32 encoded string. +func GetFromBech32(bech32str, prefix string) ([]byte, error) { + if len(bech32str) == 0 { + return nil, errBech32EmptyAddress +} + +hrp, bz, err := bech32.DecodeAndConvert(bech32str) + if err != nil { + return nil, err +} + if hrp != prefix { + return nil, fmt.Errorf("invalid Bech32 prefix; expected %s, got %s", prefix, hrp) +} + +return bz, nil +} + +func addressBytesFromHexString(address string) ([]byte, error) { + if len(address) == 0 { + return nil, ErrEmptyHexAddress +} + +return hex.DecodeString(address) +} + +/ cacheBech32Addr is not concurrency safe. Concurrent access to cache causes race condition. 
func cacheBech32Addr(prefix string, addr []byte, cache *simplelru.LRU, cacheKey string) string {
    bech32Addr, err := bech32.ConvertAndEncode(prefix, addr)
    if err != nil {
        panic(err)
    }

    if IsAddrCacheEnabled() {
        cache.Add(cacheKey, bech32Addr)
    }

    return bech32Addr
}
```

| Address Type       | Address Bech32 Prefix |
| ------------------ | --------------------- |
| Accounts           | cosmos                |
| Validator Operator | cosmosvaloper         |
| Consensus Nodes    | cosmosvalcons         |

### Public Keys

Public keys in the Cosmos SDK are defined by the `cryptotypes.PubKey` interface. Since public keys are saved in a store, `cryptotypes.PubKey` extends the `proto.Message` interface:

```go expandable
package types

import (
    cmtcrypto "github.com/cometbft/cometbft/crypto"
    proto "github.com/cosmos/gogoproto/proto"
)

// PubKey defines a public key and extends proto.Message.
type PubKey interface {
    proto.Message

    Address() Address
    Bytes() []byte
    VerifySignature(msg, sig []byte) bool
    Equals(PubKey) bool
    Type() string
}

// LedgerPrivKey defines a private key that is not a proto message. For now,
// LedgerSecp256k1 keys are not converted to proto.Message yet, this is why
// they use LedgerPrivKey instead of PrivKey. All other keys must use PrivKey
// instead of LedgerPrivKey.
// TODO https://github.com/cosmos/cosmos-sdk/issues/7357.
type LedgerPrivKey interface {
    Bytes() []byte
    Sign(msg []byte) ([]byte, error)
    PubKey() PubKey
    Equals(LedgerPrivKey) bool
    Type() string
}

// LedgerPrivKeyAminoJSON is a Ledger PrivKey type that supports signing with
// SIGN_MODE_LEGACY_AMINO_JSON. It is added as a non-breaking change, instead of directly
// on the LedgerPrivKey interface (whose Sign method will sign with TEXTUAL),
// and will be deprecated/removed once LEGACY_AMINO_JSON is removed.
type LedgerPrivKeyAminoJSON interface {
    LedgerPrivKey
    // SignLedgerAminoJSON signs a message on the Ledger device using
    // SIGN_MODE_LEGACY_AMINO_JSON.
    SignLedgerAminoJSON(msg []byte) ([]byte, error)
}

// PrivKey defines a private key and extends proto.Message. For now, it extends
// LedgerPrivKey (see godoc for LedgerPrivKey). Ultimately, we should remove
// LedgerPrivKey and add its methods here directly.
// TODO https://github.com/cosmos/cosmos-sdk/issues/7357.
type PrivKey interface {
    proto.Message
    LedgerPrivKey
}

type (
    Address = cmtcrypto.Address
)
```

A compressed format is used for `secp256k1` and `secp256r1` serialization.

- The first byte is a `0x02` byte if the `y`-coordinate is the lexicographically largest of the two associated with the `x`-coordinate.
- Otherwise, the first byte is a `0x03`.

This prefix is followed by the `x`-coordinate.

Public keys are not used to reference accounts (or users) and, in general, are not used when composing transaction messages (with a few exceptions: `MsgCreateValidator`, `Validator` and `Multisig` messages).
For user interactions, `PubKey` is formatted using Protobuf JSON (the [ProtoMarshalJSON](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/codec/json.go#L14-L34) function). Example:

```go expandable
package keys

import (
    "github.com/cosmos/cosmos-sdk/codec"
    codectypes "github.com/cosmos/cosmos-sdk/codec/types"
    "github.com/cosmos/cosmos-sdk/crypto/keyring"
    cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
    sdk "github.com/cosmos/cosmos-sdk/types"
)

// Use the protobuf interface marshaler rather than generic JSON.

// KeyOutput defines a structure wrapping around an Info object used for output
// functionality.
type KeyOutput struct {
    Name     string `json:"name" yaml:"name"`
    Type     string `json:"type" yaml:"type"`
    Address  string `json:"address" yaml:"address"`
    PubKey   string `json:"pubkey" yaml:"pubkey"`
    Mnemonic string `json:"mnemonic,omitempty" yaml:"mnemonic"`
}

// NewKeyOutput creates a default KeyOutput instance without Mnemonic, Threshold and PubKeys
func NewKeyOutput(name string, keyType keyring.KeyType, a sdk.Address, pk cryptotypes.PubKey) (KeyOutput, error) {
    apk, err := codectypes.NewAnyWithValue(pk)
    if err != nil {
        return KeyOutput{}, err
    }

    bz, err := codec.ProtoMarshalJSON(apk, nil)
    if err != nil {
        return KeyOutput{}, err
    }

    return KeyOutput{
        Name:    name,
        Type:    keyType.String(),
        Address: a.String(),
        PubKey:  string(bz),
    }, nil
}

// MkConsKeyOutput creates a KeyOutput with the "cons" Bech32 prefixes.
func MkConsKeyOutput(k *keyring.Record) (KeyOutput, error) {
    pk, err := k.GetPubKey()
    if err != nil {
        return KeyOutput{}, err
    }

    addr := sdk.ConsAddress(pk.Address())
    return NewKeyOutput(k.Name, k.GetType(), addr, pk)
}

// MkValKeyOutput creates a KeyOutput with the "val" Bech32 prefixes.
func MkValKeyOutput(k *keyring.Record) (KeyOutput, error) {
    pk, err := k.GetPubKey()
    if err != nil {
        return KeyOutput{}, err
    }

    addr := sdk.ValAddress(pk.Address())
    return NewKeyOutput(k.Name, k.GetType(), addr, pk)
}

// MkAccKeyOutput creates a KeyOutput with the "acc" Bech32 prefixes. If the
// public key is a multisig public key, then the threshold and constituent
// public keys will be added.
func MkAccKeyOutput(k *keyring.Record) (KeyOutput, error) {
    pk, err := k.GetPubKey()
    if err != nil {
        return KeyOutput{}, err
    }

    addr := sdk.AccAddress(pk.Address())
    return NewKeyOutput(k.Name, k.GetType(), addr, pk)
}

// MkAccKeysOutput returns a slice of KeyOutput objects, each with the "acc"
// Bech32 prefixes, given a slice of Record objects. It returns an error if any
// call to MkKeyOutput fails.
func MkAccKeysOutput(records []*keyring.Record) ([]KeyOutput, error) {
    kos := make([]KeyOutput, len(records))

    var err error
    for i, r := range records {
        kos[i], err = MkAccKeyOutput(r)
        if err != nil {
            return nil, err
        }
    }

    return kos, nil
}
```

## Keyring

A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface:

```go expandable
package keyring

import (
    "bufio"
    "encoding/hex"
    "fmt"
    "io"
    "os"
    "path/filepath"
    "sort"
    "strings"

    "github.com/99designs/keyring"
    "github.com/cockroachdb/errors"
    "github.com/cosmos/go-bip39"
    "golang.org/x/crypto/bcrypt"

    errorsmod "cosmossdk.io/errors"

    "github.com/cosmos/cosmos-sdk/client/input"
    "github.com/cosmos/cosmos-sdk/codec"
    "github.com/cosmos/cosmos-sdk/crypto"
    "github.com/cosmos/cosmos-sdk/crypto/hd"
    "github.com/cosmos/cosmos-sdk/crypto/ledger"
    "github.com/cosmos/cosmos-sdk/crypto/types"
    sdk "github.com/cosmos/cosmos-sdk/types"
    sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
    "github.com/cosmos/cosmos-sdk/types/tx/signing"
)

// Backend options for Keyring
const (
    BackendFile    = "file"
    BackendOS      = "os"
    BackendKWallet = "kwallet"
    BackendPass    = "pass"
    BackendTest    = "test"
    BackendMemory  = "memory"
)

const (
    keyringFileDirName = "keyring-file"
    keyringTestDirName = "keyring-test"
    passKeyringPrefix  = "keyring-%s"

    // temporary pass phrase for exporting a key during a key rename
    passPhrase = "temp"
    // prefix for exported hex private keys
    hexPrefix = "0x"
)

var (
    _ Keyring       = &keystore{}
    _ KeyringWithDB = &keystore{}

    maxPassphraseEntryAttempts = 3
)

// Keyring exposes operations over a backend supported by github.com/99designs/keyring.
type Keyring interface {
    // Get the backend type used in the keyring config: "file", "os", "kwallet", "pass", "test", "memory".
    Backend() string
    // List all keys.
+ List() ([]*Record, error) + + / Supported signing algorithms for Keyring and Ledger respectively. + SupportedAlgorithms() (SigningAlgoList, SigningAlgoList) + + / Key and KeyByAddress return keys by uid and address respectively. + Key(uid string) (*Record, error) + +KeyByAddress(address sdk.Address) (*Record, error) + + / Delete and DeleteByAddress remove keys from the keyring. + Delete(uid string) + +error + DeleteByAddress(address sdk.Address) + +error + + / Rename an existing key from the Keyring + Rename(from, to string) + +error + + / NewMnemonic generates a new mnemonic, derives a hierarchical deterministic key from it, and + / persists the key to storage. Returns the generated mnemonic and the key Info. + / It returns an error if it fails to generate a key for the given algo type, or if + / another key is already stored under the same name or address. + / + / A passphrase set to the empty string will set the passphrase to the DefaultBIP39Passphrase value. + NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (*Record, string, error) + + / NewAccount converts a mnemonic to a private key and BIP-39 HD Path and persists it. + / It fails if there is an existing key Info with the same address. + NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error) + + / SaveLedgerKey retrieves a public key reference from a Ledger device and persists it. + SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (*Record, error) + + / SaveOfflineKey stores a public key and returns the persisted Info structure. + SaveOfflineKey(uid string, pubkey types.PubKey) (*Record, error) + + / SaveMultisig stores and returns a new multsig (offline) + +key reference. + SaveMultisig(uid string, pubkey types.PubKey) (*Record, error) + +Signer + + Importer + Exporter + + Migrator +} + +type KeyringWithDB interface { + Keyring + + / Get the db keyring used in the keystore. 
+ DB() + +keyring.Keyring +} + +/ Signer is implemented by key stores that want to provide signing capabilities. +type Signer interface { + / Sign sign byte messages with a user key. + Sign(uid string, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) + + / SignByAddress sign byte messages with a user key providing the address. + SignByAddress(address sdk.Address, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) +} + +/ Importer is implemented by key stores that support import of public and private keys. +type Importer interface { + / ImportPrivKey imports ASCII armored passphrase-encrypted private keys. + ImportPrivKey(uid, armor, passphrase string) + +error + / ImportPrivKeyHex imports hex encoded keys. + ImportPrivKeyHex(uid, privKey, algoStr string) + +error + / ImportPubKey imports ASCII armored public keys. + ImportPubKey(uid, armor string) + +error +} + +/ Migrator is implemented by key stores and enables migration of keys from amino to proto +type Migrator interface { + MigrateAll() ([]*Record, error) +} + +/ Exporter is implemented by key stores that support export of public and private keys. +type Exporter interface { + / Export public key + ExportPubKeyArmor(uid string) (string, error) + +ExportPubKeyArmorByAddress(address sdk.Address) (string, error) + + / ExportPrivKeyArmor returns a private key in ASCII armored format. + / It returns an error if the key does not exist or a wrong encryption passphrase is supplied. + ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error) + +ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error) +} + +/ Option overrides keyring configuration options. +type Option func(options *Options) + +/ NewInMemory creates a transient keyring useful for testing +/ purposes and on-the-fly key generation. +/ Keybase options can be applied when generating this new Keybase. 
+func NewInMemory(cdc codec.Codec, opts ...Option) + +Keyring { + return NewInMemoryWithKeyring(keyring.NewArrayKeyring(nil), cdc, opts...) +} + +/ NewInMemoryWithKeyring returns an in memory keyring using the specified keyring.Keyring +/ as the backing keyring. +func NewInMemoryWithKeyring(kr keyring.Keyring, cdc codec.Codec, opts ...Option) + +Keyring { + return newKeystore(kr, cdc, BackendMemory, opts...) +} + +/ New creates a new instance of a keyring. +/ Keyring options can be applied when generating the new instance. +/ Available backends are "os", "file", "kwallet", "memory", "pass", "test". +func newKeyringGeneric( + appName, backend, rootDir string, userInput io.Reader, cdc codec.Codec, opts ...Option, +) (Keyring, error) { + var ( + db keyring.Keyring + err error + ) + switch backend { + case BackendMemory: + return NewInMemory(cdc, opts...), err + case BackendTest: + db, err = keyring.Open(newTestBackendKeyringConfig(appName, rootDir)) + case BackendFile: + db, err = keyring.Open(newFileBackendKeyringConfig(appName, rootDir, userInput)) + case BackendOS: + db, err = keyring.Open(newOSBackendKeyringConfig(appName, rootDir, userInput)) + case BackendKWallet: + db, err = keyring.Open(newKWalletBackendKeyringConfig(appName, rootDir, userInput)) + case BackendPass: + db, err = keyring.Open(newPassBackendKeyringConfig(appName, rootDir, userInput)) + +default: + return nil, errorsmod.Wrap(ErrUnknownBacked, backend) +} + if err != nil { + return nil, err +} + +return newKeystore(db, cdc, backend, opts...), nil +} + +type keystore struct { + db keyring.Keyring + cdc codec.Codec + backend string + options Options +} + +func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) + +keystore { + / Default options for keybase, these can be overwritten using the + / Option function + options := Options{ + SupportedAlgos: SigningAlgoList{ + hd.Secp256k1 +}, + SupportedAlgosLedger: SigningAlgoList{ + hd.Secp256k1 +}, +} + for _, optionFn := 
range opts { + optionFn(&options) +} + if options.LedgerDerivation != nil { + ledger.SetDiscoverLedger(options.LedgerDerivation) +} + if options.LedgerCreateKey != nil { + ledger.SetCreatePubkey(options.LedgerCreateKey) +} + if options.LedgerAppName != "" { + ledger.SetAppName(options.LedgerAppName) +} + if options.LedgerSigSkipDERConv { + ledger.SetSkipDERConversion() +} + +return keystore{ + db: kr, + cdc: cdc, + backend: backend, + options: options, +} +} + +/ Backend returns the keyring backend option used in the config +func (ks keystore) + +Backend() + +string { + return ks.backend +} + +func (ks keystore) + +ExportPubKeyArmor(uid string) (string, error) { + k, err := ks.Key(uid) + if err != nil { + return "", err +} + +key, err := k.GetPubKey() + if err != nil { + return "", err +} + +bz, err := ks.cdc.MarshalInterface(key) + if err != nil { + return "", err +} + +return crypto.ArmorPubKeyBytes(bz, key.Type()), nil +} + +/ DB returns the db keyring used in the keystore +func (ks keystore) + +DB() + +keyring.Keyring { + return ks.db +} + +func (ks keystore) + +ExportPubKeyArmorByAddress(address sdk.Address) (string, error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return "", err +} + +return ks.ExportPubKeyArmor(k.Name) +} + +/ ExportPrivKeyArmor exports encrypted privKey +func (ks keystore) + +ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error) { + priv, err := ks.ExportPrivateKeyObject(uid) + if err != nil { + return "", err +} + +return crypto.EncryptArmorPrivKey(priv, encryptPassphrase, priv.Type()), nil +} + +/ ExportPrivateKeyObject exports an armored private key object. 
+func (ks keystore) + +ExportPrivateKeyObject(uid string) (types.PrivKey, error) { + k, err := ks.Key(uid) + if err != nil { + return nil, err +} + +priv, err := extractPrivKeyFromRecord(k) + if err != nil { + return nil, err +} + +return priv, err +} + +func (ks keystore) + +ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return "", err +} + +return ks.ExportPrivKeyArmor(k.Name, encryptPassphrase) +} + +func (ks keystore) + +ImportPrivKey(uid, armor, passphrase string) + +error { + if k, err := ks.Key(uid); err == nil { + if uid == k.Name { + return errorsmod.Wrap(ErrOverwriteKey, uid) +} + +} + +privKey, _, err := crypto.UnarmorDecryptPrivKey(armor, passphrase) + if err != nil { + return errorsmod.Wrap(err, "failed to decrypt private key") +} + + _, err = ks.writeLocalKey(uid, privKey) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +ImportPrivKeyHex(uid, privKey, algoStr string) + +error { + if _, err := ks.Key(uid); err == nil { + return errorsmod.Wrap(ErrOverwriteKey, uid) +} + if privKey[:2] == hexPrefix { + privKey = privKey[2:] +} + +decodedPriv, err := hex.DecodeString(privKey) + if err != nil { + return err +} + +algo, err := NewSigningAlgoFromString(algoStr, ks.options.SupportedAlgos) + if err != nil { + return err +} + priv := algo.Generate()(decodedPriv) + _, err = ks.writeLocalKey(uid, priv) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +ImportPubKey(uid, armor string) + +error { + if _, err := ks.Key(uid); err == nil { + return errorsmod.Wrap(ErrOverwriteKey, uid) +} + +pubBytes, _, err := crypto.UnarmorPubKeyBytes(armor) + if err != nil { + return err +} + +var pubKey types.PubKey + if err := ks.cdc.UnmarshalInterface(pubBytes, &pubKey); err != nil { + return err +} + + _, err = ks.writeOfflineKey(uid, pubKey) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + 
+Sign(uid string, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) { + k, err := ks.Key(uid) + if err != nil { + return nil, nil, err +} + switch { + case k.GetLocal() != nil: + priv, err := extractPrivKeyFromLocal(k.GetLocal()) + if err != nil { + return nil, nil, err +} + +sig, err := priv.Sign(msg) + if err != nil { + return nil, nil, err +} + +return sig, priv.PubKey(), nil + case k.GetLedger() != nil: + return SignWithLedger(k, msg, signMode) + + / multi or offline record + default: + pub, err := k.GetPubKey() + if err != nil { + return nil, nil, err +} + +return nil, pub, ErrOfflineSign +} +} + +func (ks keystore) + +SignByAddress(address sdk.Address, msg []byte, signMode signing.SignMode) ([]byte, types.PubKey, error) { + k, err := ks.KeyByAddress(address) + if err != nil { + return nil, nil, err +} + +return ks.Sign(k.Name, msg, signMode) +} + +func (ks keystore) + +SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (*Record, error) { + if !ks.options.SupportedAlgosLedger.Contains(algo) { + return nil, errorsmod.Wrap(ErrUnsupportedSigningAlgo, fmt.Sprintf("signature algo %s is not defined in the keyring options", algo.Name())) +} + hdPath := hd.NewFundraiserParams(account, coinType, index) + +priv, _, err := ledger.NewPrivKeySecp256k1(*hdPath, hrp) + if err != nil { + return nil, errors.CombineErrors(ErrLedgerGenerateKey, err) +} + +return ks.writeLedgerKey(uid, priv.PubKey(), hdPath) +} + +func (ks keystore) + +writeLedgerKey(name string, pk types.PubKey, path *hd.BIP44Params) (*Record, error) { + k, err := NewLedgerRecord(name, pk, path) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +func (ks keystore) + +SaveMultisig(uid string, pubkey types.PubKey) (*Record, error) { + return ks.writeMultisigKey(uid, pubkey) +} + +func (ks keystore) + +SaveOfflineKey(uid string, pubkey types.PubKey) (*Record, error) { + return ks.writeOfflineKey(uid, pubkey) +} + +func (ks keystore) 
+ +DeleteByAddress(address sdk.Address) + +error { + k, err := ks.KeyByAddress(address) + if err != nil { + return err +} + +err = ks.Delete(k.Name) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +Rename(oldName, newName string) + +error { + _, err := ks.Key(newName) + if err == nil { + return errorsmod.Wrap(ErrKeyAlreadyExists, fmt.Sprintf("rename failed, %s", newName)) +} + +armor, err := ks.ExportPrivKeyArmor(oldName, passPhrase) + if err != nil { + return err +} + if err := ks.Delete(oldName); err != nil { + return err +} + if err := ks.ImportPrivKey(newName, armor, passPhrase); err != nil { + return err +} + +return nil +} + +/ Delete deletes a key in the keyring. `uid` represents the key name, without +/ the `.info` suffix. +func (ks keystore) + +Delete(uid string) + +error { + k, err := ks.Key(uid) + if err != nil { + return err +} + +addr, err := k.GetAddress() + if err != nil { + return err +} + +err = ks.db.Remove(addrHexKeyAsString(addr)) + if err != nil { + return err +} + +err = ks.db.Remove(infoKey(uid)) + if err != nil { + return err +} + +return nil +} + +func (ks keystore) + +KeyByAddress(address sdk.Address) (*Record, error) { + ik, err := ks.db.Get(addrHexKeyAsString(address)) + if err != nil { + return nil, wrapKeyNotFound(err, fmt.Sprintf("key with address %s not found", address.String())) +} + if len(ik.Data) == 0 { + return nil, wrapKeyNotFound(err, fmt.Sprintf("key with address %s not found", address.String())) +} + +return ks.Key(string(ik.Data)) +} + +func wrapKeyNotFound(err error, msg string) + +error { + if errors.Is(err, keyring.ErrKeyNotFound) { + return errorsmod.Wrap(sdkerrors.ErrKeyNotFound, msg) +} + +return err +} + +func (ks keystore) + +List() ([]*Record, error) { + return ks.MigrateAll() +} + +func (ks keystore) + +NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (*Record, string, error) { + if language != English { + return nil, "", 
ErrUnsupportedLanguage +} + if !ks.isSupportedSigningAlgo(algo) { + return nil, "", ErrUnsupportedSigningAlgo +} + + / Default number of words (24): This generates a mnemonic directly from the + / number of words by reading system entropy. + entropy, err := bip39.NewEntropy(defaultEntropySize) + if err != nil { + return nil, "", err +} + +mnemonic, err := bip39.NewMnemonic(entropy) + if err != nil { + return nil, "", err +} + if bip39Passphrase == "" { + bip39Passphrase = DefaultBIP39Passphrase +} + +k, err := ks.NewAccount(uid, mnemonic, bip39Passphrase, hdPath, algo) + if err != nil { + return nil, "", err +} + +return k, mnemonic, nil +} + +func (ks keystore) + +NewAccount(name, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error) { + if !ks.isSupportedSigningAlgo(algo) { + return nil, ErrUnsupportedSigningAlgo +} + + / create master key and derive first key for keyring + derivedPriv, err := algo.Derive()(mnemonic, bip39Passphrase, hdPath) + if err != nil { + return nil, err +} + privKey := algo.Generate()(derivedPriv) + + / check if the key already exists with the same address and return an error + / if found + address := sdk.AccAddress(privKey.PubKey().Address()) + if _, err := ks.KeyByAddress(address); err == nil { + return nil, ErrDuplicatedAddress +} + +return ks.writeLocalKey(name, privKey) +} + +func (ks keystore) + +isSupportedSigningAlgo(algo SignatureAlgo) + +bool { + return ks.options.SupportedAlgos.Contains(algo) +} + +func (ks keystore) + +Key(uid string) (*Record, error) { + k, err := ks.migrate(uid) + if err != nil { + return nil, err +} + +return k, nil +} + +/ SupportedAlgorithms returns the keystore Options' supported signing algorithm. +/ for the keyring and Ledger. 
+func (ks keystore) + +SupportedAlgorithms() (SigningAlgoList, SigningAlgoList) { + return ks.options.SupportedAlgos, ks.options.SupportedAlgosLedger +} + +/ SignWithLedger signs a binary message with the ledger device referenced by an Info object +/ and returns the signed bytes and the public key. It returns an error if the device could +/ not be queried or it returned an error. +func SignWithLedger(k *Record, msg []byte, signMode signing.SignMode) (sig []byte, pub types.PubKey, err error) { + ledgerInfo := k.GetLedger() + if ledgerInfo == nil { + return nil, nil, ErrNotLedgerObj +} + path := ledgerInfo.GetPath() + +priv, err := ledger.NewPrivKeySecp256k1Unsafe(*path) + if err != nil { + return nil, nil, err +} + ledgerPubKey := priv.PubKey() + +pubKey, err := k.GetPubKey() + if err != nil { + return nil, nil, err +} + if !pubKey.Equals(ledgerPubKey) { + return nil, nil, fmt.Errorf("the public key that the user attempted to sign with does not match the public key on the ledger device. %v does not match %v", pubKey.String(), ledgerPubKey.String()) +} + switch signMode { + case signing.SignMode_SIGN_MODE_TEXTUAL: + sig, err = priv.Sign(msg) + if err != nil { + return nil, nil, err +} + case signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON: + sig, err = priv.SignLedgerAminoJSON(msg) + if err != nil { + return nil, nil, err +} + +default: + return nil, nil, errorsmod.Wrap(ErrInvalidSignMode, fmt.Sprintf("%v", signMode)) +} + if !priv.PubKey().VerifySignature(msg, sig) { + return nil, nil, ErrLedgerInvalidSignature +} + +return sig, priv.PubKey(), nil +} + +func newOSBackendKeyringConfig(appName, dir string, buf io.Reader) + +keyring.Config { + return keyring.Config{ + ServiceName: appName, + FileDir: dir, + KeychainTrustApplication: true, + FilePasswordFunc: newRealPrompt(dir, buf), +} +} + +func newTestBackendKeyringConfig(appName, dir string) + +keyring.Config { + return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.FileBackend +}, + ServiceName: 
appName, + FileDir: filepath.Join(dir, keyringTestDirName), + FilePasswordFunc: func(_ string) (string, error) { + return "test", nil +}, +} +} + +func newKWalletBackendKeyringConfig(appName, _ string, _ io.Reader) + +keyring.Config { + return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.KWalletBackend +}, + ServiceName: "kdewallet", + KWalletAppID: appName, + KWalletFolder: "", +} +} + +func newPassBackendKeyringConfig(appName, _ string, _ io.Reader) + +keyring.Config { + prefix := fmt.Sprintf(passKeyringPrefix, appName) + +return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.PassBackend +}, + ServiceName: appName, + PassPrefix: prefix, +} +} + +func newFileBackendKeyringConfig(name, dir string, buf io.Reader) + +keyring.Config { + fileDir := filepath.Join(dir, keyringFileDirName) + +return keyring.Config{ + AllowedBackends: []keyring.BackendType{ + keyring.FileBackend +}, + ServiceName: name, + FileDir: fileDir, + FilePasswordFunc: newRealPrompt(fileDir, buf), +} +} + +func newRealPrompt(dir string, buf io.Reader) + +func(string) (string, error) { + return func(prompt string) (string, error) { + keyhashStored := false + keyhashFilePath := filepath.Join(dir, "keyhash") + +var keyhash []byte + + _, err := os.Stat(keyhashFilePath) + switch { + case err == nil: + keyhash, err = os.ReadFile(keyhashFilePath) + if err != nil { + return "", errorsmod.Wrap(err, fmt.Sprintf("failed to read %s", keyhashFilePath)) +} + +keyhashStored = true + case os.IsNotExist(err): + keyhashStored = false + + default: + return "", errorsmod.Wrap(err, fmt.Sprintf("failed to open %s", keyhashFilePath)) +} + failureCounter := 0 + for { + failureCounter++ + if failureCounter > maxPassphraseEntryAttempts { + return "", ErrMaxPassPhraseAttempts +} + buf := bufio.NewReader(buf) + +pass, err := input.GetPassword(fmt.Sprintf("Enter keyring passphrase (attempt %d/%d):", failureCounter, maxPassphraseEntryAttempts), buf) + if err != nil { + / NOTE: LGTM.io 
reports a false positive alert that states we are printing the password, + / but we only log the error. + / + / lgtm [go/clear-text-logging] + fmt.Fprintln(os.Stderr, err) + +continue +} + if keyhashStored { + if err := bcrypt.CompareHashAndPassword(keyhash, []byte(pass)); err != nil { + fmt.Fprintln(os.Stderr, "incorrect passphrase") + +continue +} + +return pass, nil +} + +reEnteredPass, err := input.GetPassword("Re-enter keyring passphrase:", buf) + if err != nil { + / NOTE: LGTM.io reports a false positive alert that states we are printing the password, + / but we only log the error. + / + / lgtm [go/clear-text-logging] + fmt.Fprintln(os.Stderr, err) + +continue +} + if pass != reEnteredPass { + fmt.Fprintln(os.Stderr, "passphrase do not match") + +continue +} + +passwordHash, err := bcrypt.GenerateFromPassword([]byte(pass), 2) + if err != nil { + fmt.Fprintln(os.Stderr, err) + +continue +} + if err := os.WriteFile(keyhashFilePath, passwordHash, 0o600); err != nil { + return "", err +} + +return pass, nil +} + +} +} + +func (ks keystore) + +writeLocalKey(name string, privKey types.PrivKey) (*Record, error) { + k, err := NewLocalRecord(name, privKey, privKey.PubKey()) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +/ writeRecord persists a keyring item in keystore if it does not exist there. +/ For each key record, we actually write 2 items: +/ - one with key `.info`, with Data = the serialized protobuf key +/ - another with key `.address`, with Data = the uid (i.e. the key name) +/ This is to be able to query keys both by name and by address. 
+func (ks keystore) + +writeRecord(k *Record) + +error { + addr, err := k.GetAddress() + if err != nil { + return err +} + key := infoKey(k.Name) + +exists, err := ks.existsInDb(addr, key) + if err != nil { + return err +} + if exists { + return errorsmod.Wrap(ErrKeyAlreadyExists, key) +} + +serializedRecord, err := ks.cdc.Marshal(k) + if err != nil { + return errors.CombineErrors(ErrUnableToSerialize, err) +} + item := keyring.Item{ + Key: key, + Data: serializedRecord, +} + if err := ks.SetItem(item); err != nil { + return err +} + +item = keyring.Item{ + Key: addrHexKeyAsString(addr), + Data: []byte(key), +} + if err := ks.SetItem(item); err != nil { + return err +} + +return nil +} + +/ existsInDb returns (true, nil) + if either addr or name exist is in keystore DB. +/ On the other hand, it returns (false, error) + if Get method returns error different from keyring.ErrKeyNotFound +/ In case of inconsistent keyring, it recovers it automatically. +func (ks keystore) + +existsInDb(addr sdk.Address, name string) (bool, error) { + _, errAddr := ks.db.Get(addrHexKeyAsString(addr)) + if errAddr != nil && !errors.Is(errAddr, keyring.ErrKeyNotFound) { + return false, errAddr +} + + _, errInfo := ks.db.Get(infoKey(name)) + if errInfo == nil { + return true, nil / uid lookup succeeds - info exists +} + +else if !errors.Is(errInfo, keyring.ErrKeyNotFound) { + return false, errInfo / received unexpected error - returns +} + + / looking for an issue, record with meta (getByAddress) + +exists, but record with public key itself does not + if errAddr == nil && errors.Is(errInfo, keyring.ErrKeyNotFound) { + fmt.Fprintf(os.Stderr, "address \"%s\" exists but pubkey itself does not\n", hex.EncodeToString(addr.Bytes())) + +fmt.Fprintln(os.Stderr, "recreating pubkey record") + err := ks.db.Remove(addrHexKeyAsString(addr)) + if err != nil { + return true, err +} + +return false, nil +} + + / both lookups failed, info does not exist + return false, nil +} + +func (ks keystore) + 
+writeOfflineKey(name string, pk types.PubKey) (*Record, error) { + k, err := NewOfflineRecord(name, pk) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +/ writeMultisigKey investigate where thisf function is called maybe remove it +func (ks keystore) + +writeMultisigKey(name string, pk types.PubKey) (*Record, error) { + k, err := NewMultiRecord(name, pk) + if err != nil { + return nil, err +} + +return k, ks.writeRecord(k) +} + +func (ks keystore) + +MigrateAll() ([]*Record, error) { + keys, err := ks.db.Keys() + if err != nil { + return nil, err +} + if len(keys) == 0 { + return nil, nil +} + +sort.Strings(keys) + +var recs []*Record + for _, key := range keys { + / The keyring items only with `.info` consists the key info. + if !strings.HasSuffix(key, infoSuffix) { + continue +} + +rec, err := ks.migrate(key) + if err != nil { + fmt.Fprintf(os.Stderr, "migrate err for key %s: %q\n", key, err) + +continue +} + +recs = append(recs, rec) +} + +return recs, nil +} + +/ migrate converts keyring.Item from amino to proto serialization format. +/ the `key` argument can be a key uid (e.g. "alice") + +or with the '.info' +/ suffix (e.g. "alice.info"). +/ +/ It operates as follows: +/ 1. retrieve any key +/ 2. try to decode it using protobuf +/ 3. if ok, then return the key, do nothing else +/ 4. if it fails, then try to decode it using amino +/ 5. convert from the amino struct to the protobuf struct +/ 6. write the proto-encoded key back to the keyring +func (ks keystore) + +migrate(key string) (*Record, error) { + if !strings.HasSuffix(key, infoSuffix) { + key = infoKey(key) +} + + / 1. get the key. + item, err := ks.db.Get(key) + if err != nil { + if key == fmt.Sprintf(".%s", infoSuffix) { + return nil, errors.New("no key name or address provided; have you forgotten the --from flag?") +} + +return nil, wrapKeyNotFound(err, key) +} + if len(item.Data) == 0 { + return nil, errorsmod.Wrap(sdkerrors.ErrKeyNotFound, key) +} + + / 2. 
Try to deserialize using proto + k, err := ks.protoUnmarshalRecord(item.Data) + / 3. If ok then return the key + if err == nil { + return k, nil +} + + / 4. Try to decode with amino + legacyInfo, err := unMarshalLegacyInfo(item.Data) + if err != nil { + return nil, errorsmod.Wrap(err, "unable to unmarshal item.Data") +} + + / 5. Convert and serialize info using proto + k, err = ks.convertFromLegacyInfo(legacyInfo) + if err != nil { + return nil, errorsmod.Wrap(err, "convertFromLegacyInfo") +} + +serializedRecord, err := ks.cdc.Marshal(k) + if err != nil { + return nil, errors.CombineErrors(ErrUnableToSerialize, err) +} + +item = keyring.Item{ + Key: key, + Data: serializedRecord, +} + + / 6. Overwrite the keyring entry with the new proto-encoded key. + if err := ks.SetItem(item); err != nil { + return nil, errorsmod.Wrap(err, "unable to set keyring.Item") +} + +fmt.Fprintf(os.Stderr, "Successfully migrated key %s.\n", key) + +return k, nil +} + +func (ks keystore) + +protoUnmarshalRecord(bz []byte) (*Record, error) { + k := new(Record) + if err := ks.cdc.Unmarshal(bz, k); err != nil { + return nil, err +} + +return k, nil +} + +func (ks keystore) + +SetItem(item keyring.Item) + +error { + return ks.db.Set(item) +} + +func (ks keystore) + +convertFromLegacyInfo(info LegacyInfo) (*Record, error) { + if info == nil { + return nil, errorsmod.Wrap(ErrLegacyToRecord, "info is nil") +} + name := info.GetName() + pk := info.GetPubKey() + switch info.GetType() { + case TypeLocal: + priv, err := privKeyFromLegacyInfo(info) + if err != nil { + return nil, err +} + +return NewLocalRecord(name, priv, pk) + case TypeOffline: + return NewOfflineRecord(name, pk) + case TypeMulti: + return NewMultiRecord(name, pk) + case TypeLedger: + path, err := info.GetPath() + if err != nil { + return nil, err +} + +return NewLedgerRecord(name, pk, path) + +default: + return nil, ErrUnknownLegacyType +} +} + +func addrHexKeyAsString(address sdk.Address) + +string { + return fmt.Sprintf("%s.%s", 
hex.EncodeToString(address.Bytes()), addressSuffix) +} +``` + +The default implementation of `Keyring` comes from the third-party [`99designs/keyring`](https://github.com/99designs/keyring) library. + +A few notes on the `Keyring` methods: + +- `Sign(uid string, msg []byte) ([]byte, types.PubKey, error)` strictly deals with the signature of the `msg` bytes. You must prepare and encode the transaction into a canonical `[]byte` form. Because protobuf is not deterministic, it has been decided in [ADR-020](/docs/common/pages/adr-comprehensive#adr-020-protocol-buffer-transaction-encoding) that the canonical `payload` to sign is the `SignDoc` struct, deterministically encoded using [ADR-027](/docs/common/pages/adr-comprehensive#adr-027-deterministic-protobuf-serialization). Note that signature verification is not implemented in the Cosmos SDK by default, it is deferred to the [`anteHandler`](/docs/sdk/v0.53/documentation/application-framework/baseapp#antehandler). + +```protobuf +// SignDoc is the type used for generating sign bytes for SIGN_MODE_DIRECT. +message SignDoc { + // body_bytes is protobuf serialization of a TxBody that matches the + // representation in TxRaw. + bytes body_bytes = 1; + + // auth_info_bytes is a protobuf serialization of an AuthInfo that matches the + // representation in TxRaw. + bytes auth_info_bytes = 2; + + // chain_id is the unique identifier of the chain this transaction targets. + // It prevents signed transactions from being used on another chain by an + // attacker + string chain_id = 3; + + // account_number is the account number of the account in state + uint64 account_number = 4; +} +``` + +- `NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error)` creates a new account based on the [`bip44 path`](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) and persists it on disk. 
The `PrivKey` is **never stored unencrypted**; instead, it is [encrypted with a passphrase](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/armor.go) before being persisted. In the context of this method, the key type and sequence number refer to the segments of the BIP44 derivation path (for example, `0`, `1`, `2`, ...) that are used to derive a private and a public key from the mnemonic. Using the same mnemonic and derivation path, the same `PrivKey`, `PubKey` and `Address` are generated. The following keys are supported by the keyring:

- `secp256k1`
- `ed25519`

- `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function.

### Create New Key Type

To create a new key type for use in the keyring, the `keyring.SignatureAlgo` interface must be implemented.

```go expandable
package keyring

import (
	"strings"

	"github.com/cockroachdb/errors"

	"github.com/cosmos/cosmos-sdk/crypto/hd"
)

// SignatureAlgo defines the interface for a keyring supported algorithm.
type SignatureAlgo interface {
	Name() hd.PubKeyType
	Derive() hd.DeriveFn
	Generate() hd.GenerateFn
}

// NewSigningAlgoFromString creates a supported SignatureAlgo.
func NewSigningAlgoFromString(str string, algoList SigningAlgoList) (SignatureAlgo, error) {
	for _, algo := range algoList {
		if str == string(algo.Name()) {
			return algo, nil
		}
	}
	return nil, errors.Wrap(ErrUnsupportedSigningAlgo, str)
}

// SigningAlgoList is a slice of signature algorithms
type SigningAlgoList []SignatureAlgo

// Contains returns true if the SigningAlgoList contains the given SignatureAlgo.
func (sal SigningAlgoList) Contains(algo SignatureAlgo) bool {
	for _, cAlgo := range sal {
		if cAlgo.Name() == algo.Name() {
			return true
		}
	}
	return false
}

// String returns a comma separated string of the signature algorithm names in the list.
func (sal SigningAlgoList) String() string {
	names := make([]string, len(sal))
	for i := range sal {
		names[i] = string(sal[i].Name())
	}
	return strings.Join(names, ",")
}
```

The interface consists of three methods: `Name()` returns the name of the algorithm as a `hd.PubKeyType`, while `Derive()` and `Generate()` must return the following functions, respectively:

```go expandable
package hd

import (
	"github.com/cosmos/go-bip39"

	"github.com/cosmos/cosmos-sdk/crypto/keys/secp256k1"
	"github.com/cosmos/cosmos-sdk/crypto/types"
)

// PubKeyType defines an algorithm to derive key-pairs which can be used for cryptographic signing.
type PubKeyType string

const (
	// MultiType implies that a pubkey is a multisignature
	MultiType = PubKeyType("multi")
	// Secp256k1Type uses the Bitcoin secp256k1 ECDSA parameters.
	Secp256k1Type = PubKeyType("secp256k1")
	// Ed25519Type represents the Ed25519Type signature system.
	// It is currently not supported for end-user keys (wallets/ledgers).
	Ed25519Type = PubKeyType("ed25519")
	// Sr25519Type represents the Sr25519Type signature system.
	Sr25519Type = PubKeyType("sr25519")
)

// Secp256k1 uses the Bitcoin secp256k1 ECDSA parameters.
+var Secp256k1 = secp256k1Algo{ +} + +type ( + DeriveFn func(mnemonic, bip39Passphrase, hdPath string) ([]byte, error) + +GenerateFn func(bz []byte) + +types.PrivKey +) + +type WalletGenerator interface { + Derive(mnemonic, bip39Passphrase, hdPath string) ([]byte, error) + +Generate(bz []byte) + +types.PrivKey +} + +type secp256k1Algo struct{ +} + +func (s secp256k1Algo) + +Name() + +PubKeyType { + return Secp256k1Type +} + +/ Derive derives and returns the secp256k1 private key for the given seed and HD path. +func (s secp256k1Algo) + +Derive() + +DeriveFn { + return func(mnemonic, bip39Passphrase, hdPath string) ([]byte, error) { + seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase) + if err != nil { + return nil, err +} + +masterPriv, ch := ComputeMastersFromSeed(seed) + if len(hdPath) == 0 { + return masterPriv[:], nil +} + +derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath) + +return derivedKey, err +} +} + +/ Generate generates a secp256k1 private key from the given bytes. +func (s secp256k1Algo) + +Generate() + +GenerateFn { + return func(bz []byte) + +types.PrivKey { + bzArr := make([]byte, secp256k1.PrivKeySize) + +copy(bzArr, bz) + +return &secp256k1.PrivKey{ + Key: bzArr +} + +} +} +``` + +Once the `keyring.SignatureAlgo` has been implemented it must be added to the [list of supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L209) of the keyring. + +For simplicity the implementation of a new key type should be done inside the `crypto/hd` package. +There is an example of a working `secp256k1` implementation in [algo.go](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/hd/algo.go#L38). + +#### Implementing secp256r1 algo + +Here is an example of how secp256r1 could be implemented. + +First a new function to create a private key from a secret number is needed in the secp256r1 package. 
This function could look like this:

```go expandable
// cosmos-sdk/crypto/keys/secp256r1/privkey.go

// NewPrivKeyFromSecret creates a private key derived from the secret number
// represented in big-endian. The `secret` must be a valid ECDSA field element.
func NewPrivKeyFromSecret(secret []byte) (*PrivKey, error) {
	var d = new(big.Int).SetBytes(secret)
	if d.Cmp(secp256r1.Params().N) >= 0 {
		return nil, errorsmod.Wrap(errors.ErrInvalidRequest, "secret not in the curve base field")
	}
	sk := new(ecdsa.PrivKey)
	return &PrivKey{&ecdsaSK{*sk}}, nil
}
```

After that, `secp256r1Algo` can be implemented.

```go expandable
// cosmos-sdk/crypto/hd/secp256r1Algo.go

package hd

import (
	"github.com/cosmos/go-bip39"

	"github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1"
	"github.com/cosmos/cosmos-sdk/crypto/types"
)

// Secp256r1Type uses the secp256r1 ECDSA parameters.
const Secp256r1Type = PubKeyType("secp256r1")

var Secp256r1 = secp256r1Algo{}

type secp256r1Algo struct{}

func (s secp256r1Algo) Name() PubKeyType {
	return Secp256r1Type
}

// Derive derives and returns the secp256r1 private key for the given seed and HD path.
func (s secp256r1Algo) Derive() DeriveFn {
	return func(mnemonic, bip39Passphrase, hdPath string) ([]byte, error) {
		seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase)
		if err != nil {
			return nil, err
		}
		masterPriv, ch := ComputeMastersFromSeed(seed)
		if len(hdPath) == 0 {
			return masterPriv[:], nil
		}
		derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath)
		return derivedKey, err
	}
}

// Generate generates a secp256r1 private key from the given bytes.
func (s secp256r1Algo) Generate() GenerateFn {
	return func(bz []byte) types.PrivKey {
		key, err := secp256r1.NewPrivKeyFromSecret(bz)
		if err != nil {
			panic(err)
		}
		return key
	}
}
```

Finally, the algo must be added to the list of [supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L209) of the keyring.

```go
// cosmos-sdk/crypto/keyring/keyring.go

func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) keystore {
	// Default options for keybase, these can be overwritten using the
	// Option function
	options := Options{
		SupportedAlgos:       SigningAlgoList{hd.Secp256k1, hd.Secp256r1}, // added here
		SupportedAlgosLedger: SigningAlgoList{hd.Secp256k1},
	}
	...
```

From then on, to create new keys using your algo, you must specify it with the `--algo` flag:

`simd keys add myKey --algo secp256r1`
diff --git a/docs/sdk/v0.53/documentation/protocol-development/bech32.mdx b/docs/sdk/v0.53/documentation/protocol-development/bech32.mdx
new file mode 100644
index 00000000..ee3b4ad1
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/bech32.mdx
@@ -0,0 +1,23 @@
---
title: Bech32 on Cosmos
---

The Cosmos network prefers to use the Bech32 address format wherever users must handle binary data. Bech32 encoding provides robust integrity checks on data, and the human-readable part (HRP) provides contextual hints that can assist UI developers with providing informative error messages.

In the Cosmos network, keys and addresses may refer to a number of different roles in the network, like accounts, validators, etc.
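To make the role-specific prefixes concrete, here is a minimal, stdlib-only sketch of Bech32 encoding (checksummed per BIP-173). This is an illustration, not the SDK's implementation: in production the Cosmos SDK uses its own `bech32` package, and the all-zero 20-byte address below is a placeholder.

```go
package main

import "fmt"

const charset = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

// bech32Polymod computes the BIP-173 checksum state over 5-bit groups.
func bech32Polymod(values []int) int {
	gen := []int{0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3}
	chk := 1
	for _, v := range values {
		b := chk >> 25
		chk = (chk&0x1ffffff)<<5 ^ v
		for i := 0; i < 5; i++ {
			if (b>>uint(i))&1 == 1 {
				chk ^= gen[i]
			}
		}
	}
	return chk
}

// hrpExpand splits each HRP character into its high and low bits.
func hrpExpand(hrp string) []int {
	out := []int{}
	for _, c := range hrp {
		out = append(out, int(c)>>5)
	}
	out = append(out, 0)
	for _, c := range hrp {
		out = append(out, int(c)&31)
	}
	return out
}

// convertBits regroups 8-bit bytes into 5-bit groups, padding the tail.
func convertBits(data []byte) []int {
	acc, bits := 0, 0
	out := []int{}
	for _, b := range data {
		acc = acc<<8 | int(b)
		bits += 8
		for bits >= 5 {
			bits -= 5
			out = append(out, (acc>>uint(bits))&31)
		}
	}
	if bits > 0 {
		out = append(out, (acc<<uint(5-bits))&31)
	}
	return out
}

// bech32Encode produces hrp + "1" + data part + 6-character checksum.
func bech32Encode(hrp string, data []byte) string {
	values := convertBits(data)
	poly := bech32Polymod(append(append(hrpExpand(hrp), values...), 0, 0, 0, 0, 0, 0)) ^ 1
	s := hrp + "1"
	for _, v := range values {
		s += string(charset[v])
	}
	for i := 0; i < 6; i++ {
		s += string(charset[(poly>>uint(5*(5-i)))&31])
	}
	return s
}

func main() {
	addr := make([]byte, 20) // placeholder 20-byte account address
	fmt.Println(bech32Encode("cosmos", addr))
	fmt.Println(bech32Encode("cosmosvaloper", addr))
}
```

The same 20 bytes yield different, role-tagged strings under each HRP in the table below, and any single-character typo invalidates the checksum.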
## HRP table

| HRP | Definition |
| ------------- | ---------------------------------- |
| cosmos | Cosmos Account Address |
| cosmosvalcons | Cosmos Validator Consensus Address |
| cosmosvaloper | Cosmos Validator Operator Address |

## Encoding

While all user-facing interfaces to Cosmos software should expose Bech32 interfaces, many internal interfaces encode binary values in hex or base64 encoded form.

To convert between other binary representations of addresses and keys, it is important to first apply the Amino encoding process before Bech32 encoding.

A complete implementation of the Amino serialization format is unnecessary in most cases. Simply prepending bytes from this [table](https://github.com/cometbft/cometbft/blob/main/spec/blockchain/encoding.md) to the byte string payload before Bech32 encoding will be sufficient for a compatible representation.
diff --git a/docs/sdk/v0.53/documentation/protocol-development/encoding.mdx b/docs/sdk/v0.53/documentation/protocol-development/encoding.mdx
new file mode 100644
index 00000000..95271375
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/encoding.mdx
@@ -0,0 +1,1976 @@
---
title: Encoding
---

## Synopsis

While encoding in the Cosmos SDK used to be mainly handled by the `go-amino` codec, the Cosmos SDK is moving towards using `gogoprotobuf` for both state and client-side encoding.

**Pre-requisite Readings**

- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy)

## Encoding

The Cosmos SDK utilizes two binary wire encoding protocols, [Amino](https://github.com/tendermint/go-amino/), which is an object encoding specification, and [Protocol Buffers](https://developers.google.com/protocol-buffers), a subset of Proto3 with an extension for interface support.
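For a taste of the Proto3 wire format that both protocols target, the following stdlib-only sketch hand-encodes a single varint field — it reproduces the canonical example from the protobuf documentation (field 1 set to 150) and is not SDK code:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeVarintField hand-encodes one protobuf varint field: a tag byte
// (field number << 3 | wire type 0) followed by the base-128 varint
// encoding of the value.
func encodeVarintField(fieldNum int, value uint64) []byte {
	buf := make([]byte, binary.MaxVarintLen64+1)
	buf[0] = byte(fieldNum<<3 | 0) // wire type 0 = varint
	n := binary.PutUvarint(buf[1:], value)
	return buf[:1+n]
}

func main() {
	// Field 1 set to 150 encodes as 08 96 01.
	fmt.Printf("% x\n", encodeVarintField(1, 150))
}
```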
See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more information on Proto3, with which Amino is largely compatible (but not with Proto2).

Because Amino has significant performance drawbacks, is reflection-based, and lacks any meaningful cross-language/client support, Protocol Buffers, specifically [gogoprotobuf](https://github.com/cosmos/gogoproto/), is being used in place of Amino. Note that this migration from Amino to Protocol Buffers is still ongoing.

Binary wire encoding of types in the Cosmos SDK can be broken down into two main categories: client encoding and store encoding. Client encoding mainly revolves around transaction processing and signing, whereas store encoding revolves around types used in state-machine transitions and what is ultimately stored in the Merkle tree.

For store encoding, protobuf definitions can exist for any type and will typically have an Amino-based "intermediary" type. Specifically, the protobuf-based type definition is used for serialization and persistence, whereas the Amino-based type is used for business logic in the state machine, where the two may be converted back and forth. Note that the Amino-based types may slowly be phased out in the future, so developers should take care to use the protobuf message definitions where possible.

In the `codec` package, there exist two core interfaces, `BinaryCodec` and `JSONCodec`, where the former encapsulates the current Amino interface, except it operates on types implementing the latter instead of generic `interface{}` types.

The `ProtoCodec` handles both binary and JSON serialization via Protobuf. This means that modules may use Protobuf encoding, but the types must implement `ProtoMarshaler`.
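The two-interface split can be sketched with a toy stand-in. Everything here (`ToyCodec`, the use of gob/JSON instead of protobuf) is invented for illustration and is not SDK code:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"encoding/json"
	"fmt"
)

// BinaryCodec and JSONCodec mirror the split described above; the real
// SDK codecs operate on proto messages rather than arbitrary values.
type BinaryCodec interface {
	Marshal(v any) ([]byte, error)
}

type JSONCodec interface {
	MarshalJSON(v any) ([]byte, error)
}

// ToyCodec implements both interfaces, the way ProtoCodec serves both
// binary and JSON serialization from one codec value.
type ToyCodec struct{}

// Marshal stands in for proto binary marshaling, using gob here.
func (ToyCodec) Marshal(v any) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(v); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// MarshalJSON produces the human-readable encoding of the same value.
func (ToyCodec) MarshalJSON(v any) ([]byte, error) {
	return json.Marshal(v)
}

func main() {
	var c ToyCodec
	bz, _ := c.Marshal(map[string]int{"height": 1})
	js, _ := c.MarshalJSON(map[string]int{"height": 1})
	fmt.Printf("binary: %d bytes, json: %s\n", len(bz), js)
}
```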
If modules wish to avoid implementing this interface for their types, it can be autogenerated via [buf](https://buf.build/).

If modules use [Collections](/docs/sdk/v0.53/documentation/state-storage/collections), encoding and decoding are handled for you; marshal and unmarshal should not be handled manually, except for specific cases identified by the developer.

### Gogoproto

Modules are encouraged to utilize Protobuf encoding for their respective types. In the Cosmos SDK, we use the [Gogoproto](https://github.com/cosmos/gogoproto) specific implementation of the Protobuf spec that offers speed and DX improvements compared to the official [Google protobuf implementation](https://github.com/protocolbuffers/protobuf).

### Guidelines for protobuf message definitions

In addition to [following official Protocol Buffer guidelines](https://developers.google.com/protocol-buffers/docs/proto3#simple), we recommend using these annotations in .proto files when dealing with interfaces:

- use `cosmos_proto.accepts_interface` to annotate `Any` fields that accept interfaces
  - pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
  - example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (and not just `Content`)
- annotate interface implementations with `cosmos_proto.implements_interface`
  - pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
  - example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (and not just `Authorization`)

Code generators can then match the `accepts_interface` and `implements_interface` annotations to know whether some Protobuf messages are allowed to be packed in a given `Any` field or not.

### Transaction Encoding

Another important use of Protobuf is the encoding and decoding of [transactions](/docs/sdk/v0.53/documentation/protocol-development/transactions).
Transactions are defined by the application or +the Cosmos SDK but are then passed to the underlying consensus engine to be relayed to +other peers. Since the underlying consensus engine is agnostic to the application, +the consensus engine accepts only transactions in the form of raw bytes. + +- The `TxEncoder` object performs the encoding. +- The `TxDecoder` object performs the decoding. + +```go expandable +package types + +import ( + + "encoding/json" + fmt "fmt" + strings "strings" + "time" + "github.com/cosmos/gogoproto/proto" + protov2 "google.golang.org/protobuf/proto" + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" +) + +type ( + / Msg defines the interface a transaction message needed to fulfill. + Msg = proto.Message + + / LegacyMsg defines the interface a transaction message needed to fulfill up through + / v0.47. + LegacyMsg interface { + Msg + + / GetSigners returns the addrs of signers that must sign. + / CONTRACT: All signatures must be present to be valid. + / CONTRACT: Returns addrs in some deterministic order. + GetSigners() []AccAddress +} + + / Fee defines an interface for an application application-defined concrete + / transaction type to be able to set and return the transaction fee. + Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / HasMsgs defines an interface a transaction must fulfill. + HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. 
+ GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() []byte +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutTimeStamp extends the Tx interface by allowing a transaction to + / set a timeout timestamp. + TxWithTimeoutTimeStamp interface { + Tx + + GetTimeoutTimeStamp() + +time.Time +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / TxWithUnordered extends the Tx interface by allowing a transaction to set + / the unordered field, which implicitly relies on TxWithTimeoutTimeStamp. + TxWithUnordered interface { + TxWithTimeoutTimeStamp + + GetUnordered() + +bool +} + + / HasValidateBasic defines a type that has a ValidateBasic method. + / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. + ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. 
+var MsgTypeURL = codectypes.MsgTypeURL + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} +``` + +A standard implementation of both these objects can be found in the [`auth/tx` module](/docs/sdk/v0.53/documentation/module-system/tx): + +```go expandable +package tx + +import ( + + "fmt" + "google.golang.org/protobuf/encoding/protowire" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/unknownproto" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" +) + +/ DefaultTxDecoder returns a default protobuf TxDecoder using the provided Marshaler. +func DefaultTxDecoder(cdc codec.Codec) + +sdk.TxDecoder { + return func(txBytes []byte) (sdk.Tx, error) { + / Make sure txBytes follow ADR-027. 
+ err := rejectNonADR027TxRaw(txBytes) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +var raw tx.TxRaw + + / reject all unknown proto fields in the root TxRaw + err = unknownproto.RejectUnknownFieldsStrict(txBytes, &raw, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(txBytes, &raw) + if err != nil { + return nil, err +} + +var body tx.TxBody + + / allow non-critical unknown fields in TxBody + txBodyHasUnknownNonCriticals, err := unknownproto.RejectUnknownFields(raw.BodyBytes, &body, true, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(raw.BodyBytes, &body) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +var authInfo tx.AuthInfo + + / reject all unknown proto fields in AuthInfo + err = unknownproto.RejectUnknownFieldsStrict(raw.AuthInfoBytes, &authInfo, cdc.InterfaceRegistry()) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +err = cdc.Unmarshal(raw.AuthInfoBytes, &authInfo) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + theTx := &tx.Tx{ + Body: &body, + AuthInfo: &authInfo, + Signatures: raw.Signatures, +} + +return &wrapper{ + tx: theTx, + bodyBz: raw.BodyBytes, + authInfoBz: raw.AuthInfoBytes, + txBodyHasUnknownNonCriticals: txBodyHasUnknownNonCriticals, + cdc: cdc, +}, nil +} +} + +/ DefaultJSONTxDecoder returns a default protobuf JSON TxDecoder using the provided Marshaler. 
+func DefaultJSONTxDecoder(cdc codec.Codec) + +sdk.TxDecoder { + return func(txBytes []byte) (sdk.Tx, error) { + var theTx tx.Tx + err := cdc.UnmarshalJSON(txBytes, &theTx) + if err != nil { + return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, err.Error()) +} + +return &wrapper{ + tx: &theTx, + cdc: cdc, +}, nil +} +} + +/ rejectNonADR027TxRaw rejects txBytes that do not follow ADR-027. This is NOT +/ a generic ADR-027 checker, it only applies decoding TxRaw. Specifically, it +/ only checks that: +/ - field numbers are in ascending order (1, 2, and potentially multiple 3s), +/ - and varints are as short as possible. +/ All other ADR-027 edge cases (e.g. default values) + +are not applicable with +/ TxRaw. +func rejectNonADR027TxRaw(txBytes []byte) + +error { + / Make sure all fields are ordered in ascending order with this variable. + prevTagNum := protowire.Number(0) + for len(txBytes) > 0 { + tagNum, wireType, m := protowire.ConsumeTag(txBytes) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + / TxRaw only has bytes fields. + if wireType != protowire.BytesType { + return fmt.Errorf("expected %d wire type, got %d", protowire.BytesType, wireType) +} + / Make sure fields are ordered in ascending order. + if tagNum < prevTagNum { + return fmt.Errorf("txRaw must follow ADR-027, got tagNum %d after tagNum %d", tagNum, prevTagNum) +} + +prevTagNum = tagNum + + / All 3 fields of TxRaw have wireType == 2, so their next component + / is a varint, so we can safely call ConsumeVarint here. + / Byte structure: + / Inner fields are verified in `DefaultTxDecoder` + lengthPrefix, m := protowire.ConsumeVarint(txBytes[m:]) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + / We make sure that this varint is as short as possible. 
+ n := varintMinLength(lengthPrefix) + if n != m { + return fmt.Errorf("length prefix varint for tagNum %d is not as short as possible, read %d, only need %d", tagNum, m, n) +} + + / Skip over the bytes that store fieldNumber and wireType bytes. + _, _, m = protowire.ConsumeField(txBytes) + if m < 0 { + return fmt.Errorf("invalid length; %w", protowire.ParseError(m)) +} + +txBytes = txBytes[m:] +} + +return nil +} + +/ varintMinLength returns the minimum number of bytes necessary to encode an +/ uint using varint encoding. +func varintMinLength(n uint64) + +int { + switch { + / Note: 1< valz[j].ConsensusPower(r) +} + +func (valz ValidatorsByVotingPower) + +Swap(i, j int) { + valz[i], valz[j] = valz[j], valz[i] +} + +/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces +func (v Validators) + +UnpackInterfaces(c codectypes.AnyUnpacker) + +error { + for i := range v.Validators { + if err := v.Validators[i].UnpackInterfaces(c); err != nil { + return err +} + +} + +return nil +} + +/ return the redelegation +func MustMarshalValidator(cdc codec.BinaryCodec, validator *Validator) []byte { + return cdc.MustMarshal(validator) +} + +/ unmarshal a redelegation from a store value +func MustUnmarshalValidator(cdc codec.BinaryCodec, value []byte) + +Validator { + validator, err := UnmarshalValidator(cdc, value) + if err != nil { + panic(err) +} + +return validator +} + +/ unmarshal a redelegation from a store value +func UnmarshalValidator(cdc codec.BinaryCodec, value []byte) (v Validator, err error) { + err = cdc.Unmarshal(value, &v) + +return v, err +} + +/ IsBonded checks if the validator status equals Bonded +func (v Validator) + +IsBonded() + +bool { + return v.GetStatus() == Bonded +} + +/ IsUnbonded checks if the validator status equals Unbonded +func (v Validator) + +IsUnbonded() + +bool { + return v.GetStatus() == Unbonded +} + +/ IsUnbonding checks if the validator status equals Unbonding +func (v Validator) + +IsUnbonding() + +bool { + return 
v.GetStatus() == Unbonding +} + +/ constant used in flags to indicate that description field should not be updated +const DoNotModifyDesc = "[do-not-modify]" + +func NewDescription(moniker, identity, website, securityContact, details string) + +Description { + return Description{ + Moniker: moniker, + Identity: identity, + Website: website, + SecurityContact: securityContact, + Details: details, +} +} + +/ UpdateDescription updates the fields of a given description. An error is +/ returned if the resulting description contains an invalid length. +func (d Description) + +UpdateDescription(d2 Description) (Description, error) { + if d2.Moniker == DoNotModifyDesc { + d2.Moniker = d.Moniker +} + if d2.Identity == DoNotModifyDesc { + d2.Identity = d.Identity +} + if d2.Website == DoNotModifyDesc { + d2.Website = d.Website +} + if d2.SecurityContact == DoNotModifyDesc { + d2.SecurityContact = d.SecurityContact +} + if d2.Details == DoNotModifyDesc { + d2.Details = d.Details +} + +return NewDescription( + d2.Moniker, + d2.Identity, + d2.Website, + d2.SecurityContact, + d2.Details, + ).EnsureLength() +} + +/ EnsureLength ensures the length of a validator's description. 
+func (d Description) + +EnsureLength() (Description, error) { + if len(d.Moniker) > MaxMonikerLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid moniker length; got: %d, max: %d", len(d.Moniker), MaxMonikerLength) +} + if len(d.Identity) > MaxIdentityLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid identity length; got: %d, max: %d", len(d.Identity), MaxIdentityLength) +} + if len(d.Website) > MaxWebsiteLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid website length; got: %d, max: %d", len(d.Website), MaxWebsiteLength) +} + if len(d.SecurityContact) > MaxSecurityContactLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid security contact length; got: %d, max: %d", len(d.SecurityContact), MaxSecurityContactLength) +} + if len(d.Details) > MaxDetailsLength { + return d, errors.Wrapf(sdkerrors.ErrInvalidRequest, "invalid details length; got: %d, max: %d", len(d.Details), MaxDetailsLength) +} + +return d, nil +} + +/ ABCIValidatorUpdate returns an abci.ValidatorUpdate from a staking validator type +/ with the full validator power +func (v Validator) + +ABCIValidatorUpdate(r math.Int) + +abci.ValidatorUpdate { + tmProtoPk, err := v.TmConsPublicKey() + if err != nil { + panic(err) +} + +return abci.ValidatorUpdate{ + PubKey: tmProtoPk, + Power: v.ConsensusPower(r), +} +} + +/ ABCIValidatorUpdateZero returns an abci.ValidatorUpdate from a staking validator type +/ with zero power used for validator updates. +func (v Validator) + +ABCIValidatorUpdateZero() + +abci.ValidatorUpdate { + tmProtoPk, err := v.TmConsPublicKey() + if err != nil { + panic(err) +} + +return abci.ValidatorUpdate{ + PubKey: tmProtoPk, + Power: 0, +} +} + +/ SetInitialCommission attempts to set a validator's initial commission. An +/ error is returned if the commission is invalid. 
+func (v Validator) + +SetInitialCommission(commission Commission) (Validator, error) { + if err := commission.Validate(); err != nil { + return v, err +} + +v.Commission = commission + + return v, nil +} + +/ In some situations, the exchange rate becomes invalid, e.g. if +/ Validator loses all tokens due to slashing. In this case, +/ make all future delegations invalid. +func (v Validator) + +InvalidExRate() + +bool { + return v.Tokens.IsZero() && v.DelegatorShares.IsPositive() +} + +/ calculate the token worth of provided shares +func (v Validator) + +TokensFromShares(shares math.LegacyDec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).Quo(v.DelegatorShares) +} + +/ calculate the token worth of provided shares, truncated +func (v Validator) + +TokensFromSharesTruncated(shares math.LegacyDec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).QuoTruncate(v.DelegatorShares) +} + +/ TokensFromSharesRoundUp returns the token worth of provided shares, rounded +/ up. +func (v Validator) + +TokensFromSharesRoundUp(shares math.LegacyDec) + +math.LegacyDec { + return (shares.MulInt(v.Tokens)).QuoRoundUp(v.DelegatorShares) +} + +/ SharesFromTokens returns the shares of a delegation given a bond amount. It +/ returns an error if the validator has no tokens. +func (v Validator) + +SharesFromTokens(amt math.Int) (math.LegacyDec, error) { + if v.Tokens.IsZero() { + return math.LegacyZeroDec(), ErrInsufficientShares +} + +return v.GetDelegatorShares().MulInt(amt).QuoInt(v.GetTokens()), nil +} + +/ SharesFromTokensTruncated returns the truncated shares of a delegation given +/ a bond amount. It returns an error if the validator has no tokens. 
+func (v Validator)

+SharesFromTokensTruncated(amt math.Int) (math.LegacyDec, error) {
 if v.Tokens.IsZero() {
 return math.LegacyZeroDec(), ErrInsufficientShares
}

return v.GetDelegatorShares().MulInt(amt).QuoTruncate(math.LegacyNewDecFromInt(v.GetTokens())), nil
}

/ get the bonded tokens which the validator holds
func (v Validator)

BondedTokens()

math.Int {
 if v.IsBonded() {
 return v.Tokens
}

return math.ZeroInt()
}

/ ConsensusPower gets the consensus-engine power. A reduction of 10^6 from
/ validator tokens is applied
func (v Validator)

ConsensusPower(r math.Int)

int64 {
 if v.IsBonded() {
 return v.PotentialConsensusPower(r)
}

return 0
}

/ PotentialConsensusPower returns the potential consensus-engine power.
func (v Validator)

PotentialConsensusPower(r math.Int)

int64 {
 return sdk.TokensToConsensusPower(v.Tokens, r)
}

/ UpdateStatus updates the location of the shares within a validator
/ to reflect the new status
func (v Validator)

UpdateStatus(newStatus BondStatus)

Validator {
 v.Status = newStatus
 return v
}

/ AddTokensFromDel adds tokens to a validator
func (v Validator)

AddTokensFromDel(amount math.Int) (Validator, math.LegacyDec) {
 / calculate the shares to issue
 var issuedShares math.LegacyDec
 if v.DelegatorShares.IsZero() {
 / the first delegation to a validator sets the exchange rate to one
 issuedShares = math.LegacyNewDecFromInt(amount)
}

else {
 shares, err := v.SharesFromTokens(amount)
 if err != nil {
 panic(err)
}

issuedShares = shares
}

v.Tokens = v.Tokens.Add(amount)

v.DelegatorShares = v.DelegatorShares.Add(issuedShares)

return v, issuedShares
}

/ RemoveTokens removes tokens from a validator
func (v Validator)

RemoveTokens(tokens math.Int)

Validator {
 if tokens.IsNegative() {
 panic(fmt.Sprintf("should not happen: trying to remove negative tokens %v", tokens))
}
 if v.Tokens.LT(tokens) {
 panic(fmt.Sprintf("should not happen: only have %v tokens, trying to remove %v", v.Tokens, tokens))
}

v.Tokens = v.Tokens.Sub(tokens)

return v
}

/ RemoveDelShares removes delegator shares from a validator.
/ NOTE: because token fractions are left in the validator,
/
/	the exchange rate of future shares of this validator can increase.
func (v Validator)

RemoveDelShares(delShares math.LegacyDec) (Validator, math.Int) {
 remainingShares := v.DelegatorShares.Sub(delShares)

var issuedTokens math.Int
 if remainingShares.IsZero() {
 / last delegation share gets any trimmings
 issuedTokens = v.Tokens
 v.Tokens = math.ZeroInt()
}

else {
 / leave excess tokens in the validator
 / however fully use all the delegator shares
 issuedTokens = v.TokensFromShares(delShares).TruncateInt()

v.Tokens = v.Tokens.Sub(issuedTokens)
 if v.Tokens.IsNegative() {
 panic("attempting to remove more tokens than available in validator")
}

}

v.DelegatorShares = remainingShares

 return v, issuedTokens
}

/ MinEqual defines a more minimum set of equality conditions when comparing two
/ validators.
+func (v *Validator) + +MinEqual(other *Validator) + +bool { + return v.OperatorAddress == other.OperatorAddress && + v.Status == other.Status && + v.Tokens.Equal(other.Tokens) && + v.DelegatorShares.Equal(other.DelegatorShares) && + v.Description.Equal(other.Description) && + v.Commission.Equal(other.Commission) && + v.Jailed == other.Jailed && + v.MinSelfDelegation.Equal(other.MinSelfDelegation) && + v.ConsensusPubkey.Equal(other.ConsensusPubkey) +} + +/ Equal checks if the receiver equals the parameter +func (v *Validator) + +Equal(v2 *Validator) + +bool { + return v.MinEqual(v2) && + v.UnbondingHeight == v2.UnbondingHeight && + v.UnbondingTime.Equal(v2.UnbondingTime) +} + +func (v Validator) + +IsJailed() + +bool { + return v.Jailed +} + +func (v Validator) + +GetMoniker() + +string { + return v.Description.Moniker +} + +func (v Validator) + +GetStatus() + +BondStatus { + return v.Status +} + +func (v Validator) + +GetOperator() + +string { + return v.OperatorAddress +} + +/ ConsPubKey returns the validator PubKey as a cryptotypes.PubKey. +func (v Validator) + +ConsPubKey() (cryptotypes.PubKey, error) { + pk, ok := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey) + if !ok { + return nil, errors.Wrapf(sdkerrors.ErrInvalidType, "expecting cryptotypes.PubKey, got %T", pk) +} + +return pk, nil +} + +/ Deprecated: use CmtConsPublicKey instead +func (v Validator) + +TmConsPublicKey() (cmtprotocrypto.PublicKey, error) { + return v.CmtConsPublicKey() +} + +/ CmtConsPublicKey casts Validator.ConsensusPubkey to cmtprotocrypto.PubKey. 
+func (v Validator)

+CmtConsPublicKey() (cmtprotocrypto.PublicKey, error) {
 pk, err := v.ConsPubKey()
 if err != nil {
 return cmtprotocrypto.PublicKey{
}, err
}

tmPk, err := cryptocodec.ToCmtProtoPublicKey(pk)
 if err != nil {
 return cmtprotocrypto.PublicKey{
}, err
}

return tmPk, nil
}

/ GetConsAddr extracts the consensus key address
func (v Validator)

GetConsAddr() ([]byte, error) {
 pk, ok := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey)
 if !ok {
 return nil, errors.Wrapf(sdkerrors.ErrInvalidType, "expecting cryptotypes.PubKey, got %T", pk)
}

return pk.Address().Bytes(), nil
}

func (v Validator)

GetTokens()

math.Int {
 return v.Tokens
}

func (v Validator)

GetBondedTokens()

math.Int {
 return v.BondedTokens()
}

func (v Validator)

GetConsensusPower(r math.Int)

int64 {
 return v.ConsensusPower(r)
}

func (v Validator)

GetCommission()

math.LegacyDec {
 return v.Commission.Rate
}

func (v Validator)

GetMinSelfDelegation()

math.Int {
 return v.MinSelfDelegation
}

func (v Validator)

GetDelegatorShares()

math.LegacyDec {
 return v.DelegatorShares
}

/ UnpackInterfaces implements UnpackInterfacesMessage.UnpackInterfaces
func (v Validator)

UnpackInterfaces(unpacker codectypes.AnyUnpacker)

error {
 var pk cryptotypes.PubKey
 return unpacker.UnpackAny(v.ConsensusPubkey, &pk)
}
```

#### `Any`'s TypeURL

When packing a protobuf message inside an `Any`, the message's type is uniquely defined by its type URL, which is the message's fully qualified name prefixed by a `/` (slash) character. In some implementations of `Any`, like the gogoproto one, there's generally [a resolvable prefix, e.g. `type.googleapis.com`](https://github.com/gogo/protobuf/blob/b03c65ea87cdc3521ede29f62fe3ce239267c1bc/protobuf/google/protobuf/any.proto#L87-L91). However, in the Cosmos SDK, we decided not to include such a prefix, in order to keep type URLs shorter.
The Cosmos SDK's own `Any` implementation can be found in `github.com/cosmos/cosmos-sdk/codec/types`.

The Cosmos SDK is also switching away from gogoproto to the official `google.golang.org/protobuf` (known as the Protobuf API v2). Its default `Any` implementation also contains the [`type.googleapis.com`](https://github.com/protocolbuffers/protobuf-go/blob/v1.28.1/types/known/anypb/any.pb.go#L266) prefix. To maintain compatibility with the SDK, the following methods from `"google.golang.org/protobuf/types/known/anypb"` should not be used:

- `anypb.New`
- `anypb.MarshalFrom`
- `anypb.Any#MarshalFrom`

Instead, the Cosmos SDK provides helper functions in `"github.com/cosmos/cosmos-proto/anyutil"`, which create an official `anypb.Any` without inserting the prefix:

- `anyutil.New`
- `anyutil.MarshalFrom`

For example, to pack an `sdk.Msg` called `internalMsg`, use:

```diff
import (
- "google.golang.org/protobuf/types/known/anypb"
+ "github.com/cosmos/cosmos-proto/anyutil"
)

- anyMsg, err := anypb.New(internalMsg.Message().Interface())
+ anyMsg, err := anyutil.New(internalMsg.Message().Interface())

- fmt.Println(anyMsg.TypeUrl) / type.googleapis.com/cosmos.bank.v1beta1.MsgSend
+ fmt.Println(anyMsg.TypeUrl) / /cosmos.bank.v1beta1.MsgSend
```

## FAQ

### How to create modules using protobuf encoding

#### Defining module types

Protobuf types can be defined to encode:

- state
- [`Msg`s](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages)
- [Query services](/docs/sdk/v0.53/documentation/module-system/query-services)
- [genesis](/docs/sdk/v0.53/documentation/module-system/genesis)

#### Naming and conventions

We encourage developers to follow industry guidelines: the [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style)
and [Buf](https://buf.build/docs/style-guide); see more details in [ADR 023](/docs/common/pages/adr-comprehensive#adr-023-protocol-buffer-naming-and-versioning-conventions).

### How to update modules to protobuf encoding

If modules do not contain any interfaces (e.g. `Account` or `Content`), then they
may simply migrate any existing types that
are encoded and persisted via their concrete Amino codec to Protobuf (see 1. for further guidelines) and accept a `Marshaler` as the codec, which is implemented via the `ProtoCodec`,
without any further customization.

However, if a module type composes an interface, it must be wrapped in the `sdk.Any` type (from the `/types` package). To do that, a module-level .proto file must use [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto) for the respective interface-typed fields.

For example, the `x/evidence` module defines an `Evidence` interface, which is used by `MsgSubmitEvidence`. The structure definition must use `sdk.Any` to wrap the evidence. In the proto file we define it as follows:

```protobuf
/ proto/cosmos/evidence/v1beta1/tx.proto

message MsgSubmitEvidence {
 string submitter = 1;
 google.protobuf.Any evidence = 2 [(cosmos_proto.accepts_interface) = "cosmos.evidence.v1beta1.Evidence"];
}
```

The Cosmos SDK `codec.Codec` interface provides the support methods `MarshalInterface` and `UnmarshalInterface` to ease encoding of state to `Any`.
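The unprefixed type-URL convention described above is easy to mirror outside the SDK. Below is a small, self-contained sketch in plain Go with no Cosmos dependencies; `sdkTypeURL` and `normalize` are illustrative helpers, not SDK APIs:

```go
package main

import (
	"fmt"
	"strings"
)

// sdkTypeURL builds a Cosmos SDK style type URL: the fully qualified
// protobuf message name prefixed by a bare "/", with no resolvable
// host such as "type.googleapis.com".
func sdkTypeURL(fullName string) string {
	return "/" + fullName
}

// normalize strips any resolvable prefix so that a gogoproto-style
// URL and an SDK-style URL compare equal.
func normalize(typeURL string) string {
	if i := strings.LastIndex(typeURL, "/"); i >= 0 {
		return "/" + typeURL[i+1:]
	}
	return typeURL
}

func main() {
	name := "cosmos.bank.v1beta1.MsgSend"
	fmt.Println(sdkTypeURL(name))                         // /cosmos.bank.v1beta1.MsgSend
	fmt.Println(normalize("type.googleapis.com/" + name)) // /cosmos.bank.v1beta1.MsgSend
}
```

Both forms identify the same message type; the SDK simply stores and resolves the shorter one.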
+
+Modules should register interfaces using the `InterfaceRegistry`, which provides a mechanism for registering interfaces, `RegisterInterface(protoName string, iface interface{}, impls ...proto.Message)`, and implementations, `RegisterImplementations(iface interface{}, impls ...proto.Message)`, that can be safely unpacked from `Any`, similarly to type registration with Amino:

```go expandable
package types

import (

 "errors"
 "fmt"
 "reflect"
 "github.com/cosmos/gogoproto/jsonpb"
 "github.com/cosmos/gogoproto/proto"
 "google.golang.org/protobuf/reflect/protodesc"
 "google.golang.org/protobuf/reflect/protoreflect"
 "cosmossdk.io/x/tx/signing"
)

var (

 / MaxUnpackAnySubCalls extension point that defines the maximum number of sub-calls allowed during the unpacking
 / process of protobuf Any messages.
 MaxUnpackAnySubCalls = 100

 / MaxUnpackAnyRecursionDepth extension point that defines the maximum allowed recursion depth during protobuf Any
 / message unpacking.
 MaxUnpackAnyRecursionDepth = 10
)

/ AnyUnpacker is an interface which allows safely unpacking types packed
/ in Any's against a whitelist of registered types
type AnyUnpacker interface {
 / UnpackAny unpacks the value in any to the interface pointer passed in as
 / iface. Note that the type in any must have been registered in the
 / underlying whitelist registry as a concrete type for that interface
 / Ex:
 / var msg sdk.Msg
 / err := cdc.UnpackAny(any, &msg)
 / ...
 UnpackAny(any *Any, iface interface{
})

error
}

/ InterfaceRegistry provides a mechanism for registering interfaces and
/ implementations that can be safely unpacked from Any
type InterfaceRegistry interface {
 AnyUnpacker
 jsonpb.AnyResolver

 / RegisterInterface associates protoName as the public name for the
 / interface passed in as iface. This is to be used primarily to create
 / a public facing registry of interface implementations for clients.
+ / protoName should be a well-chosen public facing name that remains stable. + / RegisterInterface takes an optional list of impls to be registered + / as implementations of iface. + / + / Ex: + / registry.RegisterInterface("cosmos.base.v1beta1.Msg", (*sdk.Msg)(nil)) + +RegisterInterface(protoName string, iface interface{ +}, impls ...proto.Message) + + / RegisterImplementations registers impls as concrete implementations of + / the interface iface. + / + / Ex: + / registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{ +}, &MsgMultiSend{ +}) + +RegisterImplementations(iface interface{ +}, impls ...proto.Message) + + / ListAllInterfaces list the type URLs of all registered interfaces. + ListAllInterfaces() []string + + / ListImplementations lists the valid type URLs for the given interface name that can be used + / for the provided interface type URL. + ListImplementations(ifaceTypeURL string) []string + + / EnsureRegistered ensures there is a registered interface for the given concrete type. + EnsureRegistered(iface interface{ +}) + +error + + protodesc.Resolver + + / RangeFiles iterates over all registered files and calls f on each one. This + / implements the part of protoregistry.Files that is needed for reflecting over + / the entire FileDescriptorSet. + RangeFiles(f func(protoreflect.FileDescriptor) + +bool) + +SigningContext() *signing.Context + + / mustEmbedInterfaceRegistry requires that all implementations of InterfaceRegistry embed an official implementation + / from this package. This allows new methods to be added to the InterfaceRegistry interface without breaking + / backwards compatibility. 
+ mustEmbedInterfaceRegistry() +} + +/ UnpackInterfacesMessage is meant to extend protobuf types (which implement +/ proto.Message) + +to support a post-deserialization phase which unpacks +/ types packed within Any's using the whitelist provided by AnyUnpacker +type UnpackInterfacesMessage interface { + / UnpackInterfaces is implemented in order to unpack values packed within + / Any's using the AnyUnpacker. It should generally be implemented as + / follows: + / func (s *MyStruct) + +UnpackInterfaces(unpacker AnyUnpacker) + +error { + / var x AnyInterface + / / where X is an Any field on MyStruct + / err := unpacker.UnpackAny(s.X, &x) + / if err != nil { + / return nil + / +} + / / where Y is a field on MyStruct that implements UnpackInterfacesMessage itself + / err = s.Y.UnpackInterfaces(unpacker) + / if err != nil { + / return nil + / +} + / return nil + / +} + +UnpackInterfaces(unpacker AnyUnpacker) + +error +} + +type interfaceRegistry struct { + signing.ProtoFileResolver + interfaceNames map[string]reflect.Type + interfaceImpls map[reflect.Type]interfaceMap + implInterfaces map[reflect.Type]reflect.Type + typeURLMap map[string]reflect.Type + signingCtx *signing.Context +} + +type interfaceMap = map[string]reflect.Type + +/ NewInterfaceRegistry returns a new InterfaceRegistry +func NewInterfaceRegistry() + +InterfaceRegistry { + registry, err := NewInterfaceRegistryWithOptions(InterfaceRegistryOptions{ + ProtoFiles: proto.HybridResolver, + SigningOptions: signing.Options{ + AddressCodec: failingAddressCodec{ +}, + ValidatorAddressCodec: failingAddressCodec{ +}, +}, +}) + if err != nil { + panic(err) +} + +return registry +} + +/ InterfaceRegistryOptions are options for creating a new InterfaceRegistry. +type InterfaceRegistryOptions struct { + / ProtoFiles is the set of files to use for the registry. It is required. + ProtoFiles signing.ProtoFileResolver + + / SigningOptions are the signing options to use for the registry. 
+ SigningOptions signing.Options +} + +/ NewInterfaceRegistryWithOptions returns a new InterfaceRegistry with the given options. +func NewInterfaceRegistryWithOptions(options InterfaceRegistryOptions) (InterfaceRegistry, error) { + if options.ProtoFiles == nil { + return nil, fmt.Errorf("proto files must be provided") +} + +options.SigningOptions.FileResolver = options.ProtoFiles + signingCtx, err := signing.NewContext(options.SigningOptions) + if err != nil { + return nil, err +} + +return &interfaceRegistry{ + interfaceNames: map[string]reflect.Type{ +}, + interfaceImpls: map[reflect.Type]interfaceMap{ +}, + implInterfaces: map[reflect.Type]reflect.Type{ +}, + typeURLMap: map[string]reflect.Type{ +}, + ProtoFileResolver: options.ProtoFiles, + signingCtx: signingCtx, +}, nil +} + +func (registry *interfaceRegistry) + +RegisterInterface(protoName string, iface interface{ +}, impls ...proto.Message) { + typ := reflect.TypeOf(iface) + if typ.Elem().Kind() != reflect.Interface { + panic(fmt.Errorf("%T is not an interface type", iface)) +} + +registry.interfaceNames[protoName] = typ + registry.RegisterImplementations(iface, impls...) +} + +/ EnsureRegistered ensures there is a registered interface for the given concrete type. +/ +/ Returns an error if not, and nil if so. +func (registry *interfaceRegistry) + +EnsureRegistered(impl interface{ +}) + +error { + if reflect.ValueOf(impl).Kind() != reflect.Ptr { + return fmt.Errorf("%T is not a pointer", impl) +} + if _, found := registry.implInterfaces[reflect.TypeOf(impl)]; !found { + return fmt.Errorf("%T does not have a registered interface", impl) +} + +return nil +} + +/ RegisterImplementations registers a concrete proto Message which implements +/ the given interface. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. 
+func (registry *interfaceRegistry) + +RegisterImplementations(iface interface{ +}, impls ...proto.Message) { + for _, impl := range impls { + typeURL := MsgTypeURL(impl) + +registry.registerImpl(iface, typeURL, impl) +} +} + +/ RegisterCustomTypeURL registers a concrete type which implements the given +/ interface under `typeURL`. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. +func (registry *interfaceRegistry) + +RegisterCustomTypeURL(iface interface{ +}, typeURL string, impl proto.Message) { + registry.registerImpl(iface, typeURL, impl) +} + +/ registerImpl registers a concrete type which implements the given +/ interface under `typeURL`. +/ +/ This function PANICs if different concrete types are registered under the +/ same typeURL. +func (registry *interfaceRegistry) + +registerImpl(iface interface{ +}, typeURL string, impl proto.Message) { + ityp := reflect.TypeOf(iface).Elem() + +imap, found := registry.interfaceImpls[ityp] + if !found { + imap = map[string]reflect.Type{ +} + +} + implType := reflect.TypeOf(impl) + if !implType.AssignableTo(ityp) { + panic(fmt.Errorf("type %T doesn't actually implement interface %+v", impl, ityp)) +} + + / Check if we already registered something under the given typeURL. It's + / okay to register the same concrete type again, but if we are registering + / a new concrete type under the same typeURL, then we throw an error (here, + / we panic). + foundImplType, found := imap[typeURL] + if found && foundImplType != implType { + panic( + fmt.Errorf( + "concrete type %s has already been registered under typeURL %s, cannot register %s under same typeURL. 
"+ + "This usually means that there are conflicting modules registering different concrete types "+ + "for a same interface implementation", + foundImplType, + typeURL, + implType, + ), + ) +} + +imap[typeURL] = implType + registry.typeURLMap[typeURL] = implType + registry.implInterfaces[implType] = ityp + registry.interfaceImpls[ityp] = imap +} + +func (registry *interfaceRegistry) + +ListAllInterfaces() []string { + interfaceNames := registry.interfaceNames + keys := make([]string, 0, len(interfaceNames)) + for key := range interfaceNames { + keys = append(keys, key) +} + +return keys +} + +func (registry *interfaceRegistry) + +ListImplementations(ifaceName string) []string { + typ, ok := registry.interfaceNames[ifaceName] + if !ok { + return []string{ +} + +} + +impls, ok := registry.interfaceImpls[typ.Elem()] + if !ok { + return []string{ +} + +} + keys := make([]string, 0, len(impls)) + for key := range impls { + keys = append(keys, key) +} + +return keys +} + +func (registry *interfaceRegistry) + +UnpackAny(any *Any, iface interface{ +}) + +error { + unpacker := &statefulUnpacker{ + registry: registry, + maxDepth: MaxUnpackAnyRecursionDepth, + maxCalls: &sharedCounter{ + count: MaxUnpackAnySubCalls +}, +} + +return unpacker.UnpackAny(any, iface) +} + +/ sharedCounter is a type that encapsulates a counter value +type sharedCounter struct { + count int +} + +/ statefulUnpacker is a struct that helps in deserializing and unpacking +/ protobuf Any messages while maintaining certain stateful constraints. +type statefulUnpacker struct { + registry *interfaceRegistry + maxDepth int + maxCalls *sharedCounter +} + +/ cloneForRecursion returns a new statefulUnpacker instance with maxDepth reduced by one, preserving the registry and maxCalls. 
+func (r statefulUnpacker) + +cloneForRecursion() *statefulUnpacker { + return &statefulUnpacker{ + registry: r.registry, + maxDepth: r.maxDepth - 1, + maxCalls: r.maxCalls, +} +} + +/ UnpackAny deserializes a protobuf Any message into the provided interface, ensuring the interface is a pointer. +/ It applies stateful constraints such as max depth and call limits, and unpacks interfaces if required. +func (r *statefulUnpacker) + +UnpackAny(any *Any, iface interface{ +}) + +error { + if r.maxDepth == 0 { + return errors.New("max depth exceeded") +} + if r.maxCalls.count == 0 { + return errors.New("call limit exceeded") +} + / here we gracefully handle the case in which `any` itself is `nil`, which may occur in message decoding + if any == nil { + return nil +} + if any.TypeUrl == "" { + / if TypeUrl is empty return nil because without it we can't actually unpack anything + return nil +} + +r.maxCalls.count-- + rv := reflect.ValueOf(iface) + if rv.Kind() != reflect.Ptr { + return fmt.Errorf("UnpackAny expects a pointer") +} + rt := rv.Elem().Type() + cachedValue := any.cachedValue + if cachedValue != nil { + if reflect.TypeOf(cachedValue).AssignableTo(rt) { + rv.Elem().Set(reflect.ValueOf(cachedValue)) + +return nil +} + +} + +imap, found := r.registry.interfaceImpls[rt] + if !found { + return fmt.Errorf("no registered implementations of type %+v", rt) +} + +typ, found := imap[any.TypeUrl] + if !found { + return fmt.Errorf("no concrete type registered for type URL %s against interface %T", any.TypeUrl, iface) +} + +msg, ok := reflect.New(typ.Elem()).Interface().(proto.Message) + if !ok { + return fmt.Errorf("can't proto unmarshal %T", msg) +} + err := proto.Unmarshal(any.Value, msg) + if err != nil { + return err +} + +err = UnpackInterfaces(msg, r.cloneForRecursion()) + if err != nil { + return err +} + +rv.Elem().Set(reflect.ValueOf(msg)) + +any.cachedValue = msg + + return nil +} + +/ Resolve returns the proto message given its typeURL. 
It works with types +/ registered with RegisterInterface/RegisterImplementations, as well as those +/ registered with RegisterWithCustomTypeURL. +func (registry *interfaceRegistry) + +Resolve(typeURL string) (proto.Message, error) { + typ, found := registry.typeURLMap[typeURL] + if !found { + return nil, fmt.Errorf("unable to resolve type URL %s", typeURL) +} + +msg, ok := reflect.New(typ.Elem()).Interface().(proto.Message) + if !ok { + return nil, fmt.Errorf("can't resolve type URL %s", typeURL) +} + +return msg, nil +} + +func (registry *interfaceRegistry) + +SigningContext() *signing.Context { + return registry.signingCtx +} + +func (registry *interfaceRegistry) + +mustEmbedInterfaceRegistry() { +} + +/ UnpackInterfaces is a convenience function that calls UnpackInterfaces +/ on x if x implements UnpackInterfacesMessage +func UnpackInterfaces(x interface{ +}, unpacker AnyUnpacker) + +error { + if msg, ok := x.(UnpackInterfacesMessage); ok { + return msg.UnpackInterfaces(unpacker) +} + +return nil +} + +type failingAddressCodec struct{ +} + +func (f failingAddressCodec) + +StringToBytes(string) ([]byte, error) { + return nil, fmt.Errorf("InterfaceRegistry requires a proper address codec implementation to do address conversion") +} + +func (f failingAddressCodec) + +BytesToString([]byte) (string, error) { + return "", fmt.Errorf("InterfaceRegistry requires a proper address codec implementation to do address conversion") +} +``` + +In addition, an `UnpackInterfaces` phase should be introduced to deserialization to unpack interfaces before they're needed. 
Protobuf types that contain a protobuf `Any`, either directly or via one of their members, should implement the `UnpackInterfacesMessage` interface:

```go
type UnpackInterfacesMessage interface {
 UnpackInterfaces(unpacker AnyUnpacker) error
}
```
diff --git a/docs/sdk/v0.53/documentation/protocol-development/errors.mdx b/docs/sdk/v0.53/documentation/protocol-development/errors.mdx
new file mode 100644
index 00000000..b4f8dc1c
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/errors.mdx
@@ -0,0 +1,701 @@
+---
+title: Errors
+---
+
+## Synopsis
+
+This document outlines the recommended usage and APIs for error handling in Cosmos SDK modules.
+
+Modules are encouraged to define and register their own errors to provide better
+context on failed message or handler execution. Typically, these errors should be
+common or general errors which can be further wrapped to provide additional specific
+execution context.
+
+## Registration
+
+Modules should define and register their custom errors in `x/{module}/errors.go`.
+Registration of errors is handled via the [`errors` package](https://github.com/cosmos/cosmos-sdk/blob/main/errors/errors.go).
+ +Example: + +```go expandable +package types + +import "cosmossdk.io/errors" + +/ x/distribution module sentinel errors +var ( + ErrEmptyDelegatorAddr = errors.Register(ModuleName, 2, "delegator address is empty") + +ErrEmptyWithdrawAddr = errors.Register(ModuleName, 3, "withdraw address is empty") + +ErrEmptyValidatorAddr = errors.Register(ModuleName, 4, "validator address is empty") + +ErrEmptyDelegationDistInfo = errors.Register(ModuleName, 5, "no delegation distribution info") + +ErrNoValidatorDistInfo = errors.Register(ModuleName, 6, "no validator distribution info") + +ErrNoValidatorCommission = errors.Register(ModuleName, 7, "no validator commission to withdraw") + +ErrSetWithdrawAddrDisabled = errors.Register(ModuleName, 8, "set withdraw address disabled") + +ErrBadDistribution = errors.Register(ModuleName, 9, "community pool does not have sufficient coins to distribute") + +ErrInvalidProposalAmount = errors.Register(ModuleName, 10, "invalid community pool spend proposal amount") + +ErrEmptyProposalRecipient = errors.Register(ModuleName, 11, "invalid community pool spend proposal recipient") + +ErrNoValidatorExists = errors.Register(ModuleName, 12, "validator does not exist") + +ErrNoDelegationExists = errors.Register(ModuleName, 13, "delegation does not exist") +) +``` + +Each custom module error must provide the codespace, which is typically the module name +(e.g. "distribution") and is unique per module, and a uint32 code. Together, the codespace and code +provide a globally unique Cosmos SDK error. Typically, the code is monotonically increasing but does not +necessarily have to be. The only restrictions on error codes are the following: + +- Must be greater than one, as a code value of one is reserved for internal errors. +- Must be unique within the module. + +Note, the Cosmos SDK provides a core set of _common_ errors. These errors are defined in [`types/errors/errors.go`](https://github.com/cosmos/cosmos-sdk/blob/main/types/errors/errors.go). 
+ +## Wrapping + +The custom module errors can be returned as their concrete type as they already fulfill the `error` +interface. However, module errors can be wrapped to provide further context and meaning to failed +execution. + +Example: + +```go expandable +package keeper + +import ( + + "context" + "errors" + "fmt" + "cosmossdk.io/collections" + "cosmossdk.io/core/store" + "cosmossdk.io/log" + "cosmossdk.io/math" + + errorsmod "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/query" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + "github.com/cosmos/cosmos-sdk/x/bank/types" +) + +var _ Keeper = (*BaseKeeper)(nil) + +/ Keeper defines a module interface that facilitates the transfer of coins +/ between accounts. +type Keeper interface { + SendKeeper + WithMintCoinsRestriction(MintingRestrictionFn) + +BaseKeeper + + InitGenesis(context.Context, *types.GenesisState) + +ExportGenesis(context.Context) *types.GenesisState + + GetSupply(ctx context.Context, denom string) + +sdk.Coin + HasSupply(ctx context.Context, denom string) + +bool + GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) + +IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) + +bool) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) + +HasDenomMetaData(ctx context.Context, denom string) + +bool + SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) + +GetAllDenomMetaData(ctx context.Context) []types.Metadata + IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) + +SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) + +error + 
SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) + +error + UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) + +error + MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) + +error + + DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error + UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error + + types.QueryServer +} + +/ BaseKeeper manages transfers between accounts. It implements the Keeper interface. +type BaseKeeper struct { + BaseSendKeeper + + ak types.AccountKeeper + cdc codec.BinaryCodec + storeService store.KVStoreService + mintCoinsRestrictionFn MintingRestrictionFn + logger log.Logger +} + +type MintingRestrictionFn func(ctx context.Context, coins sdk.Coins) + +error + +/ GetPaginatedTotalSupply queries for the supply, ignoring 0 coins, with a given pagination +func (k BaseKeeper) + +GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) { + results, pageResp, err := query.CollectionPaginate[string, math.Int](/docs/sdk/v0.53/documentation/protocol-development/ctx, k.Supply, pagination) + if err != nil { + return nil, nil, err +} + coins := sdk.NewCoins() + for _, res := range results { + coins = coins.Add(sdk.NewCoin(res.Key, res.Value)) +} + +return coins, pageResp, nil +} + +/ NewBaseKeeper returns a new BaseKeeper object with a given codec, dedicated +/ store key, an AccountKeeper implementation, and a parameter Subspace used to +/ store and fetch module parameters. The BaseKeeper also accepts a +/ blocklist map. 
This blocklist describes the set of addresses that are not allowed +/ to receive funds through direct and explicit actions, for example, by using a MsgSend or +/ by using a SendCoinsFromModuleToAccount execution. +func NewBaseKeeper( + cdc codec.BinaryCodec, + storeService store.KVStoreService, + ak types.AccountKeeper, + blockedAddrs map[string]bool, + authority string, + logger log.Logger, +) + +BaseKeeper { + if _, err := ak.AddressCodec().StringToBytes(authority); err != nil { + panic(fmt.Errorf("invalid bank authority address: %w", err)) +} + + / add the module name to the logger + logger = logger.With(log.ModuleKey, "x/"+types.ModuleName) + +return BaseKeeper{ + BaseSendKeeper: NewBaseSendKeeper(cdc, storeService, ak, blockedAddrs, authority, logger), + ak: ak, + cdc: cdc, + storeService: storeService, + mintCoinsRestrictionFn: func(ctx context.Context, coins sdk.Coins) + +error { + return nil +}, + logger: logger, +} +} + +/ WithMintCoinsRestriction restricts the bank Keeper used within a specific module to +/ have restricted permissions on minting via function passed in parameter. +/ Previous restriction functions can be nested as such: +/ +/ bankKeeper.WithMintCoinsRestriction(restriction1).WithMintCoinsRestriction(restriction2) + +func (k BaseKeeper) + +WithMintCoinsRestriction(check MintingRestrictionFn) + +BaseKeeper { + oldRestrictionFn := k.mintCoinsRestrictionFn + k.mintCoinsRestrictionFn = func(ctx context.Context, coins sdk.Coins) + +error { + err := check(ctx, coins) + if err != nil { + return err +} + +err = oldRestrictionFn(ctx, coins) + if err != nil { + return err +} + +return nil +} + +return k +} + +/ DelegateCoins performs delegation by deducting amt coins from an account with +/ address addr. For vesting accounts, delegations amounts are tracked for both +/ vesting and vested coins. The coins are then transferred from the delegator +/ address to a ModuleAccount address. 
If any of the delegation amounts are negative, +/ an error is returned. +func (k BaseKeeper) + +DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) + +error { + moduleAcc := k.ak.GetAccount(ctx, moduleAccAddr) + if moduleAcc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleAccAddr) +} + if !amt.IsValid() { + return errorsmod.Wrap(sdkerrors.ErrInvalidCoins, amt.String()) +} + balances := sdk.NewCoins() + for _, coin := range amt { + balance := k.GetBalance(ctx, delegatorAddr, coin.GetDenom()) + if balance.IsLT(coin) { + return errorsmod.Wrapf( + sdkerrors.ErrInsufficientFunds, "failed to delegate; %s is smaller than %s", balance, amt, + ) +} + +balances = balances.Add(balance) + err := k.setBalance(ctx, delegatorAddr, balance.Sub(coin)) + if err != nil { + return err +} + +} + if err := k.trackDelegation(ctx, delegatorAddr, balances, amt); err != nil { + return errorsmod.Wrap(err, "failed to track delegation") +} + / emit coin spent event + sdkCtx := sdk.UnwrapSDKContext(ctx) + +sdkCtx.EventManager().EmitEvent( + types.NewCoinSpentEvent(delegatorAddr, amt), + ) + err := k.addCoins(ctx, moduleAccAddr, amt) + if err != nil { + return err +} + +return nil +} + +/ UndelegateCoins performs undelegation by crediting amt coins to an account with +/ address addr. For vesting accounts, undelegation amounts are tracked for both +/ vesting and vested coins. The coins are then transferred from a ModuleAccount +/ address to the delegator address. If any of the undelegation amounts are +/ negative, an error is returned. 
+func (k BaseKeeper) + +UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) + +error { + moduleAcc := k.ak.GetAccount(ctx, moduleAccAddr) + if moduleAcc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleAccAddr) +} + if !amt.IsValid() { + return errorsmod.Wrap(sdkerrors.ErrInvalidCoins, amt.String()) +} + err := k.subUnlockedCoins(ctx, moduleAccAddr, amt) + if err != nil { + return err +} + if err := k.trackUndelegation(ctx, delegatorAddr, amt); err != nil { + return errorsmod.Wrap(err, "failed to track undelegation") +} + +err = k.addCoins(ctx, delegatorAddr, amt) + if err != nil { + return err +} + +return nil +} + +/ GetSupply retrieves the Supply from store +func (k BaseKeeper) + +GetSupply(ctx context.Context, denom string) + +sdk.Coin { + amt, err := k.Supply.Get(ctx, denom) + if err != nil { + return sdk.NewCoin(denom, math.ZeroInt()) +} + +return sdk.NewCoin(denom, amt) +} + +/ HasSupply checks if the supply coin exists in store. +func (k BaseKeeper) + +HasSupply(ctx context.Context, denom string) + +bool { + has, err := k.Supply.Has(ctx, denom) + +return has && err == nil +} + +/ GetDenomMetaData retrieves the denomination metadata. returns the metadata and true if the denom exists, +/ false otherwise. +func (k BaseKeeper) + +GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) { + m, err := k.BaseViewKeeper.DenomMetadata.Get(ctx, denom) + +return m, err == nil +} + +/ HasDenomMetaData checks if the denomination metadata exists in store. 
+func (k BaseKeeper) + +HasDenomMetaData(ctx context.Context, denom string) + +bool { + has, err := k.BaseViewKeeper.DenomMetadata.Has(ctx, denom) + +return has && err == nil +} + +/ GetAllDenomMetaData retrieves all denominations metadata +func (k BaseKeeper) + +GetAllDenomMetaData(ctx context.Context) []types.Metadata { + denomMetaData := make([]types.Metadata, 0) + +k.IterateAllDenomMetaData(ctx, func(metadata types.Metadata) + +bool { + denomMetaData = append(denomMetaData, metadata) + +return false +}) + +return denomMetaData +} + +/ IterateAllDenomMetaData iterates over all the denominations metadata and +/ provides the metadata to a callback. If true is returned from the +/ callback, iteration is halted. +func (k BaseKeeper) + +IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) + +bool) { + err := k.BaseViewKeeper.DenomMetadata.Walk(ctx, nil, func(_ string, metadata types.Metadata) (stop bool, err error) { + return cb(metadata), nil +}) + if err != nil && !errors.Is(err, collections.ErrInvalidIterator) { + panic(err) +} +} + +/ SetDenomMetaData sets the denominations metadata +func (k BaseKeeper) + +SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) { + _ = k.BaseViewKeeper.DenomMetadata.Set(ctx, denomMetaData.Base, denomMetaData) +} + +/ SendCoinsFromModuleToAccount transfers coins from a ModuleAccount to an AccAddress. +/ It will panic if the module account does not exist. An error is returned if +/ the recipient address is black-listed or if sending the tokens fails. 
+func (k BaseKeeper) + +SendCoinsFromModuleToAccount( + ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins, +) + +error { + senderAddr := k.ak.GetModuleAddress(senderModule) + if senderAddr == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + if k.BlockedAddr(recipientAddr) { + return errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", recipientAddr) +} + +return k.SendCoins(ctx, senderAddr, recipientAddr, amt) +} + +/ SendCoinsFromModuleToModule transfers coins from a ModuleAccount to another. +/ It will panic if either module account does not exist. +func (k BaseKeeper) + +SendCoinsFromModuleToModule( + ctx context.Context, senderModule, recipientModule string, amt sdk.Coins, +) + +error { + senderAddr := k.ak.GetModuleAddress(senderModule) + if senderAddr == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + +return k.SendCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ SendCoinsFromAccountToModule transfers coins from an AccAddress to a ModuleAccount. +/ It will panic if the module account does not exist. +func (k BaseKeeper) + +SendCoinsFromAccountToModule( + ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins, +) + +error { + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + +return k.SendCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ DelegateCoinsFromAccountToModule delegates coins and transfers them from a +/ delegator account to a module account. 
It will panic if the module account +/ does not exist or is unauthorized. +func (k BaseKeeper) + +DelegateCoinsFromAccountToModule( + ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins, +) + +error { + recipientAcc := k.ak.GetModuleAccount(ctx, recipientModule) + if recipientAcc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", recipientModule)) +} + if !recipientAcc.HasPermission(authtypes.Staking) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to receive delegated coins", recipientModule)) +} + +return k.DelegateCoins(ctx, senderAddr, recipientAcc.GetAddress(), amt) +} + +/ UndelegateCoinsFromModuleToAccount undelegates the unbonding coins and transfers +/ them from a module account to the delegator account. It will panic if the +/ module account does not exist or is unauthorized. +func (k BaseKeeper) + +UndelegateCoinsFromModuleToAccount( + ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins, +) + +error { + acc := k.ak.GetModuleAccount(ctx, senderModule) + if acc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", senderModule)) +} + if !acc.HasPermission(authtypes.Staking) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to undelegate coins", senderModule)) +} + +return k.UndelegateCoins(ctx, acc.GetAddress(), recipientAddr, amt) +} + +/ MintCoins creates new coins from thin air and adds it to the module account. +/ It will panic if the module account does not exist or is unauthorized. 
+func (k BaseKeeper)
+
+MintCoins(ctx context.Context, moduleName string, amounts sdk.Coins)
+
+error {
+	sdkCtx := sdk.UnwrapSDKContext(ctx)
+	err := k.mintCoinsRestrictionFn(ctx, amounts)
+	if err != nil {
+		k.logger.Error(fmt.Sprintf("Module %q attempted to mint coins %s it doesn't have permission for, error %v", moduleName, amounts, err))
+
+return err
+}
+	acc := k.ak.GetModuleAccount(ctx, moduleName)
+	if acc == nil {
+		panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleName))
+}
+	if !acc.HasPermission(authtypes.Minter) {
+		panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to mint tokens", moduleName))
+}
+
+err = k.addCoins(ctx, acc.GetAddress(), amounts)
+	if err != nil {
+		return err
+}
+	for _, amount := range amounts {
+		supply := k.GetSupply(ctx, amount.GetDenom())
+
+supply = supply.Add(amount)
+
+k.setSupply(ctx, supply)
+}
+
+k.logger.Debug("minted coins from module account", "amount", amounts.String(), "from", moduleName)
+
+	/ emit mint event
+	sdkCtx.EventManager().EmitEvent(
+		types.NewCoinMintEvent(acc.GetAddress(), amounts),
+	)
+
+return nil
+}
+
+/ BurnCoins burns coins by deleting them from the balance of the module account.
+/ It will panic if the module account does not exist or is unauthorized.
+func (k BaseKeeper) + +BurnCoins(ctx context.Context, moduleName string, amounts sdk.Coins) + +error { + acc := k.ak.GetModuleAccount(ctx, moduleName) + if acc == nil { + panic(errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "module account %s does not exist", moduleName)) +} + if !acc.HasPermission(authtypes.Burner) { + panic(errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "module account %s does not have permissions to burn tokens", moduleName)) +} + err := k.subUnlockedCoins(ctx, acc.GetAddress(), amounts) + if err != nil { + return err +} + for _, amount := range amounts { + supply := k.GetSupply(ctx, amount.GetDenom()) + +supply = supply.Sub(amount) + +k.setSupply(ctx, supply) +} + +k.logger.Debug("burned tokens from module account", "amount", amounts.String(), "from", moduleName) + + / emit burn event + sdkCtx := sdk.UnwrapSDKContext(ctx) + +sdkCtx.EventManager().EmitEvent( + types.NewCoinBurnEvent(acc.GetAddress(), amounts), + ) + +return nil +} + +/ setSupply sets the supply for the given coin +func (k BaseKeeper) + +setSupply(ctx context.Context, coin sdk.Coin) { + / Bank invariants and IBC requires to remove zero coins. 
+	if coin.IsZero() {
+		_ = k.Supply.Remove(ctx, coin.Denom)
+}
+
+else {
+		_ = k.Supply.Set(ctx, coin.Denom, coin.Amount)
+}
+}
+
+/ trackDelegation tracks the delegation of the given account if it is a vesting account
+func (k BaseKeeper)
+
+trackDelegation(ctx context.Context, addr sdk.AccAddress, balance, amt sdk.Coins)
+
+error {
+	acc := k.ak.GetAccount(ctx, addr)
+	if acc == nil {
+		return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr)
+}
+
+vacc, ok := acc.(types.VestingAccount)
+	if ok {
+		/ TODO: return error on account.TrackDelegation
+		sdkCtx := sdk.UnwrapSDKContext(ctx)
+
+vacc.TrackDelegation(sdkCtx.BlockHeader().Time, balance, amt)
+
+k.ak.SetAccount(ctx, acc)
+}
+
+return nil
+}
+
+/ trackUndelegation tracks undelegation of the given account if it is a vesting account
+func (k BaseKeeper)
+
+trackUndelegation(ctx context.Context, addr sdk.AccAddress, amt sdk.Coins)
+
+error {
+	acc := k.ak.GetAccount(ctx, addr)
+	if acc == nil {
+		return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr)
+}
+
+vacc, ok := acc.(types.VestingAccount)
+	if ok {
+		/ TODO: return error on account.TrackUndelegation
+		vacc.TrackUndelegation(amt)
+
+k.ak.SetAccount(ctx, acc)
+}
+
+return nil
+}
+
+/ IterateTotalSupply iterates over the total supply calling the given cb (callback)
+
+function
+/ with the balance of each coin.
+/ The iteration stops if the callback returns true.
+func (k BaseViewKeeper)
+
+IterateTotalSupply(ctx context.Context, cb func(sdk.Coin)
+
+bool) {
+	err := k.Supply.Walk(ctx, nil, func(s string, m math.Int) (bool, error) {
+		return cb(sdk.NewCoin(s, m)), nil
+})
+	if err != nil && !errors.Is(err, collections.ErrInvalidIterator) {
+		panic(err)
+}
+}
+```
+
+Regardless of whether an error is wrapped, the Cosmos SDK's `errors` package provides a function to determine if
+an error is of a particular kind via `Is`.
+
+## ABCI
+
+If a module error is registered, the Cosmos SDK `errors` package allows ABCI information to be extracted
+through the `ABCIInfo` function. The package also provides `ResponseCheckTx` and `ResponseDeliverTx` as
+auxiliary functions to automatically get `CheckTx` and `DeliverTx` responses from an error.
diff --git a/docs/sdk/v0.53/documentation/protocol-development/gas-fees.mdx b/docs/sdk/v0.53/documentation/protocol-development/gas-fees.mdx
new file mode 100644
index 00000000..efb8c754
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/gas-fees.mdx
@@ -0,0 +1,636 @@
+---
+title: Gas and Fees
+---
+
+## Synopsis
+
+This document describes the default strategies to handle gas and fees within a Cosmos SDK application.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy)
+
+
+
+## Introduction to `Gas` and `Fees`
+
+In the Cosmos SDK, `gas` is a special unit that is used to track the consumption of resources during execution. `gas` is typically consumed whenever reads and writes are made to the store, but it can also be consumed if expensive computation needs to be done. It serves two main purposes:
+
+- Make sure blocks are not consuming too many resources and are finalized. This is implemented by default in the Cosmos SDK via the [block gas meter](#block-gas-meter).
+- Prevent spam and abuse from end-users. To this end, `gas` consumed during [`message`](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages) execution is typically priced, resulting in a `fee` (`fees = gas * gas-prices`). `fees` generally have to be paid by the sender of the `message`. Note that the Cosmos SDK does not enforce `gas` pricing by default, as there may be other ways to prevent spam (e.g. bandwidth schemes). Still, most applications implement `fee` mechanisms to prevent spam by using the [`AnteHandler`](#antehandler).
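The `fees = gas * gas-prices` relationship can be illustrated with a short sketch; the function name and the single-denomination integer arithmetic are assumptions for illustration, not the SDK's actual fee-handling API:

```go
package main

import "fmt"

// computeFee applies fees = gas * gas-prices for a single denomination,
// using plain integers instead of sdk.Coins for clarity.
func computeFee(gasLimit, gasPrice uint64) uint64 {
	return gasLimit * gasPrice
}

func main() {
	// A transaction requesting 200000 gas at 25uatom per gas unit
	// owes 5000000uatom in fees.
	fmt.Printf("%duatom\n", computeFee(200000, 25))
}
```

In practice the same arithmetic is performed per denomination over `sdk.Coins`, and the result is what the fee payer must provide.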
+ +## Gas Meter + +In the Cosmos SDK, `gas` is a simple alias for `uint64`, and is managed by an object called a _gas meter_. Gas meters implement the `GasMeter` interface + +```go expandable +package types + +import ( + + "fmt" + "math" +) + +/ Gas consumption descriptors. +const ( + GasIterNextCostFlatDesc = "IterNextFlat" + GasValuePerByteDesc = "ValuePerByte" + GasWritePerByteDesc = "WritePerByte" + GasReadPerByteDesc = "ReadPerByte" + GasWriteCostFlatDesc = "WriteFlat" + GasReadCostFlatDesc = "ReadFlat" + GasHasDesc = "Has" + GasDeleteDesc = "Delete" +) + +/ Gas measured by the SDK +type Gas = uint64 + +/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a +/ negative gas consumed amount. +type ErrorNegativeGasConsumed struct { + Descriptor string +} + +/ ErrorOutOfGas defines an error thrown when an action results in out of gas. +type ErrorOutOfGas struct { + Descriptor string +} + +/ ErrorGasOverflow defines an error thrown when an action results gas consumption +/ unsigned integer overflow. +type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. 
+func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. +func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. 
+func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. +/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. 
+/
+/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that
+/ EVM-compatible chains can fully support the go-ethereum StateDb interface.
+/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference.
+func (g *infiniteGasMeter)
+
+RefundGas(amount Gas, descriptor string) {
+	if g.consumed < amount {
+		panic(ErrorNegativeGasConsumed{
+	Descriptor: descriptor
+})
+}
+
+g.consumed -= amount
+}
+
+/ IsPastLimit returns false since the gas limit is not confined.
+func (g *infiniteGasMeter)
+
+IsPastLimit()
+
+bool {
+	return false
+}
+
+/ IsOutOfGas returns false since the gas limit is not confined.
+func (g *infiniteGasMeter)
+
+IsOutOfGas()
+
+bool {
+	return false
+}
+
+/ String returns the InfiniteGasMeter's gas consumed.
+func (g *infiniteGasMeter)
+
+String()
+
+string {
+	return fmt.Sprintf("InfiniteGasMeter:\n  consumed: %d", g.consumed)
+}
+
+/ GasConfig defines gas cost for each operation on KVStores
+type GasConfig struct {
+	HasCost          Gas
+	DeleteCost       Gas
+	ReadCostFlat     Gas
+	ReadCostPerByte  Gas
+	WriteCostFlat    Gas
+	WriteCostPerByte Gas
+	IterNextCostFlat Gas
+}
+
+/ KVGasConfig returns a default gas config for KVStores.
+func KVGasConfig()
+
+GasConfig {
+	return GasConfig{
+	HasCost:          1000,
+	DeleteCost:       1000,
+	ReadCostFlat:     1000,
+	ReadCostPerByte:  3,
+	WriteCostFlat:    2000,
+	WriteCostPerByte: 30,
+	IterNextCostFlat: 30,
+}
+}
+
+/ TransientGasConfig returns a default gas config for TransientStores.
+func TransientGasConfig()
+
+GasConfig {
+	return GasConfig{
+	HasCost:          100,
+	DeleteCost:       100,
+	ReadCostFlat:     100,
+	ReadCostPerByte:  0,
+	WriteCostFlat:    200,
+	WriteCostPerByte: 3,
+	IterNextCostFlat: 3,
+}
+}
+```
+
+where:
+
+- `GasConsumed()` returns the amount of gas that was consumed by the gas meter instance.
+- `GasConsumedToLimit()` returns the amount of gas that was consumed by the gas meter instance, or the limit if it is reached.
+- `GasRemaining()` returns the gas left in the GasMeter.
+- `Limit()` returns the limit of the gas meter instance. `0` if the gas meter is infinite.
+- `ConsumeGas(amount Gas, descriptor string)` consumes the amount of `gas` provided. If the `gas` overflows, it panics with the `descriptor` message. If the gas meter is not infinite, it panics if `gas` consumed goes above the limit.
+- `RefundGas()` deducts the given amount from the gas consumed. This functionality enables refunding gas to the transaction or block gas pools so that EVM-compatible chains can fully support the go-ethereum StateDB interface.
+- `IsPastLimit()` returns `true` if the amount of gas consumed by the gas meter instance is strictly above the limit, `false` otherwise.
+- `IsOutOfGas()` returns `true` if the amount of gas consumed by the gas meter instance is above or equal to the limit, `false` otherwise.
+
+The gas meter is generally held in [`ctx`](/docs/sdk/v0.53/documentation/application-framework/context), and consuming gas is done with the following pattern:
+
+```go
+ctx.GasMeter().ConsumeGas(amount, "description")
+```
+
+By default, the Cosmos SDK makes use of two different gas meters, the [main gas meter](#main-gas-meter) and the [block gas meter](#block-gas-meter).
+
+### Main Gas Meter
+
+`ctx.GasMeter()` is the main gas meter of the application. The main gas meter is initialized in `FinalizeBlock` via `setFinalizeBlockState`, and then tracks gas consumption during execution sequences that lead to state-transitions, i.e. those originally triggered by [`FinalizeBlock`](/docs/sdk/v0.53/documentation/application-framework/baseapp#finalizeblock). At the beginning of each transaction execution, the main gas meter **must be set to 0** in the [`AnteHandler`](#antehandler), so that it can track gas consumption per-transaction.
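The per-transaction metering described above can be mimicked with a stripped-down meter; this is an illustrative analogue, not the SDK's `basicGasMeter` (overflow checking and gas refunds are omitted):

```go
package main

import "fmt"

// meter is a simplified stand-in for the SDK's basicGasMeter,
// shown only to illustrate the ConsumeGas pattern.
type meter struct {
	limit    uint64
	consumed uint64
}

// ConsumeGas adds to the consumed total and panics once the limit is
// exceeded, mirroring the SDK's ErrorOutOfGas behavior.
func (m *meter) ConsumeGas(amount uint64, descriptor string) {
	m.consumed += amount
	if m.consumed > m.limit {
		panic(fmt.Sprintf("out of gas: %s", descriptor))
	}
}

// GasRemaining reports how much gas is left before the limit.
func (m *meter) GasRemaining() uint64 {
	if m.consumed > m.limit {
		return 0
	}
	return m.limit - m.consumed
}

func main() {
	m := &meter{limit: 100}
	m.ConsumeGas(30, "ReadFlat")
	m.ConsumeGas(30, "WriteFlat")
	fmt.Println(m.GasRemaining()) // 40

	// Exceeding the limit panics; recover as the SDK does when a
	// transaction runs out of gas.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	m.ConsumeGas(100, "WriteFlat")
}
```

Resetting such a meter per transaction is exactly why the `AnteHandler` must install a fresh gas meter before each execution.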
+
+Gas consumption can be done manually, generally by the module developer in the [`BeginBlocker`, `EndBlocker`](/docs/sdk/v0.53/documentation/module-system/beginblock-endblock) or [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services), but most of the time it is done automatically whenever there is a read or write to the store. This automatic gas consumption logic is implemented in a special store called [`GasKv`](/docs/sdk/v0.53/documentation/state-storage/store#gaskv-store).
+
+### Block Gas Meter
+
+`ctx.BlockGasMeter()` is the gas meter used to track gas consumption per block and make sure it does not go above a certain limit.
+
+During the genesis phase, gas consumption is unlimited to accommodate initialisation transactions.
+
+```go
+app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter()))
+```
+
+Following the genesis block, the block gas meter is set to a finite value by the SDK. This transition is facilitated by the consensus engine (e.g., CometBFT) calling the `RequestFinalizeBlock` function, which in turn triggers the SDK's `FinalizeBlock` method. Within `FinalizeBlock`, `internalFinalizeBlock` is executed, performing necessary state updates and function executions. The block gas meter, initialised with a finite limit for each block, is then incorporated into the context for transaction execution, ensuring gas consumption does not exceed the block's gas limit and is reset at the end of each block.
+
+Modules within the Cosmos SDK can consume block gas at any point during their execution by utilising the `ctx`. This gas consumption primarily occurs during state read/write operations and transaction processing. The block gas meter, accessible via `ctx.BlockGasMeter()`, monitors the total gas usage within a block, enforcing the gas limit to prevent excessive computation. This ensures that gas limits are adhered to on a per-block basis, starting from the first block post-genesis.
+
+```go
+gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context())
+
+app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter))
+```
+
+The above shows the general mechanism for setting the block gas meter with a finite limit based on the block's consensus parameters.
+
+## AnteHandler
+
+The `AnteHandler` is run for every transaction during `CheckTx` and `FinalizeBlock`, before a Protobuf `Msg` service method for each `sdk.Msg` in the transaction.
+
+The anteHandler is not implemented in the core Cosmos SDK but in a module. That said, most applications today use the default implementation defined in the [`auth` module](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth). Here is what the `anteHandler` is intended to do in a normal Cosmos SDK application:
+
+- Verify that the transactions are of the correct type. Transaction types are defined in the module that implements the `anteHandler`, and they follow the transaction interface:
+
+```go expandable
+package types
+
+import (
+
+	"encoding/json"
+	fmt "fmt"
+	strings "strings"
+	"time"
+	"github.com/cosmos/gogoproto/proto"
+	protov2 "google.golang.org/protobuf/proto"
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+)
+
+type (
+	/ Msg defines the interface a transaction message needed to fulfill.
+	Msg = proto.Message
+
+	/ LegacyMsg defines the interface a transaction message needed to fulfill up through
+	/ v0.47.
+	LegacyMsg interface {
+	Msg
+
+	/ GetSigners returns the addrs of signers that must sign.
+	/ CONTRACT: All signatures must be present to be valid.
+	/ CONTRACT: Returns addrs in some deterministic order.
+	GetSigners() []AccAddress
+}
+
+	/ Fee defines an interface for an application application-defined concrete
+	/ transaction type to be able to set and return the transaction fee.
+ Fee interface { + GetGas() + +uint64 + GetAmount() + +Coins +} + + / Signature defines an interface for an application application-defined + / concrete transaction type to be able to set and return transaction signatures. + Signature interface { + GetPubKey() + +cryptotypes.PubKey + GetSignature() []byte +} + + / HasMsgs defines an interface a transaction must fulfill. + HasMsgs interface { + / GetMsgs gets the all the transaction's messages. + GetMsgs() []Msg +} + + / Tx defines an interface a transaction must fulfill. + Tx interface { + HasMsgs + + / GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's. + GetMsgsV2() ([]protov2.Message, error) +} + + / FeeTx defines the interface to be implemented by Tx to use the FeeDecorators + FeeTx interface { + Tx + GetGas() + +uint64 + GetFee() + +Coins + FeePayer() []byte + FeeGranter() []byte +} + + / TxWithMemo must have GetMemo() + +method to use ValidateMemoDecorator + TxWithMemo interface { + Tx + GetMemo() + +string +} + + / TxWithTimeoutTimeStamp extends the Tx interface by allowing a transaction to + / set a timeout timestamp. + TxWithTimeoutTimeStamp interface { + Tx + + GetTimeoutTimeStamp() + +time.Time +} + + / TxWithTimeoutHeight extends the Tx interface by allowing a transaction to + / set a height timeout. + TxWithTimeoutHeight interface { + Tx + + GetTimeoutHeight() + +uint64 +} + + / TxWithUnordered extends the Tx interface by allowing a transaction to set + / the unordered field, which implicitly relies on TxWithTimeoutTimeStamp. + TxWithUnordered interface { + TxWithTimeoutTimeStamp + + GetUnordered() + +bool +} + + / HasValidateBasic defines a type that has a ValidateBasic method. + / ValidateBasic is deprecated and now facultative. + / Prefer validating messages directly in the msg server. + HasValidateBasic interface { + / ValidateBasic does a simple validation check that + / doesn't require access to any other information. 
+ ValidateBasic() + +error +} +) + +/ TxDecoder unmarshals transaction bytes +type TxDecoder func(txBytes []byte) (Tx, error) + +/ TxEncoder marshals transaction to bytes +type TxEncoder func(tx Tx) ([]byte, error) + +/ MsgTypeURL returns the TypeURL of a `sdk.Msg`. +var MsgTypeURL = codectypes.MsgTypeURL + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} +``` + +This enables developers to play with various types for the transaction of their application. In the default `auth` module, the default transaction type is `Tx`: + +```protobuf +// Tx is the standard type used for broadcasting transactions. +message Tx { + // body is the processable content of the transaction + TxBody body = 1; + + // auth_info is the authorization related content of the transaction, + // specifically signers, signer modes and fee + AuthInfo auth_info = 2; + + // signatures is a list of signatures that matches the length and order of + // AuthInfo's signer_infos to allow connecting signature meta information like + // public key and signing mode by position. 
  repeated bytes signatures = 3;
}
```

- Verify signatures for each [`message`](/docs/sdk/v0.53/documentation/module-system/messages-and-queries#messages) contained in the transaction. Each `message` should be signed by one or multiple sender(s), and these signatures must be verified in the `anteHandler`.
- During `CheckTx`, verify that the gas prices provided with the transaction are greater than the local `min-gas-prices` (as a reminder, gas-prices can be deduced from the following equation: `fees = gas * gas-prices`). `min-gas-prices` is a parameter local to each full-node, used during `CheckTx` to discard transactions that do not provide a minimum amount of fees. This ensures that the mempool cannot be spammed with garbage transactions.
- Verify that the sender of the transaction has enough funds to cover the `fees`. When the end-user generates a transaction, they must indicate 2 of the 3 following parameters (the third one being implicit): `fees`, `gas` and `gas-prices`. This signals how much they are willing to pay for nodes to execute their transaction. The provided `gas` value is stored in a parameter called `GasWanted` for later use.
- Set `newCtx.GasMeter` to 0, with a limit of `GasWanted`. **This step is crucial**, as it not only makes sure the transaction cannot consume infinite gas, but also that `ctx.GasMeter` is reset in-between each transaction (`ctx` is set to `newCtx` after `anteHandler` is run, and the `anteHandler` is run each time a transaction executes).

As explained above, the `anteHandler` returns the maximum limit of `gas` the transaction can consume during execution, called `GasWanted`. The actual amount consumed in the end is denoted `GasUsed`, and we must therefore have `GasUsed <= GasWanted`. Both `GasWanted` and `GasUsed` are relayed to the underlying consensus engine when [`FinalizeBlock`](/docs/sdk/v0.53/documentation/application-framework/baseapp#finalizeblock) returns.
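The `fees = gas * gas-prices` relation above can be sketched in a few lines of Go. This is a simplified illustration only: the SDK computes fees with arbitrary-precision decimals and `Coins`, not `float64`, and `requiredFee`/`passesCheckTx` are hypothetical helper names, not SDK APIs.

```go
package main

import (
	"fmt"
	"math"
)

// requiredFee computes the minimum fee a transaction must offer for a node
// whose local min-gas-prices is minGasPrice (smallest denom units per gas),
// mirroring the relation fees = gas * gas-prices.
func requiredFee(gasWanted uint64, minGasPrice float64) uint64 {
	return uint64(math.Ceil(float64(gasWanted) * minGasPrice))
}

// passesCheckTx reports whether the offered fee clears the node's local
// min-gas-prices filter applied during CheckTx.
func passesCheckTx(feeOffered, gasWanted uint64, minGasPrice float64) bool {
	return feeOffered >= requiredFee(gasWanted, minGasPrice)
}

func main() {
	// A transaction requesting 200000 gas against a 0.025 min-gas-price.
	fmt.Println(requiredFee(200000, 0.025))         // 5000
	fmt.Println(passesCheckTx(5000, 200000, 0.025)) // true
	fmt.Println(passesCheckTx(4000, 200000, 0.025)) // false
}
```

Because the check is local to each full-node, a transaction may pass `CheckTx` on one node and be rejected by another running with a higher `min-gas-prices`.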
diff --git a/docs/sdk/v0.53/documentation/protocol-development/ics-030-signed-messages.mdx b/docs/sdk/v0.53/documentation/protocol-development/ics-030-signed-messages.mdx
new file mode 100644
index 00000000..4787eb37
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/ics-030-signed-messages.mdx
@@ -0,0 +1,194 @@
---
title: 'ICS 030: Cosmos Signed Messages'
---

> TODO: Replace with valid ICS number and possibly move to new location.

* [Changelog](#changelog)
* [Abstract](#abstract)
* [Preliminary](#preliminary)
* [Specification](#specification)
* [Future Adaptations](#future-adaptations)
* [API](#api)
* [References](#references)

## Status

Proposed.

## Changelog

## Abstract

Having the ability to sign messages off-chain has proven to be a fundamental aspect
of nearly any blockchain. Signing messages off-chain has many added benefits, such
as saving on computational costs and reducing transaction overhead. Within the
context of Cosmos, some of the major applications of signing such data include, but
are not limited to, providing a cryptographically secure and verifiable means of
proving validator identity, possibly associating it with some other framework or
organization, and signing Cosmos messages with a Ledger or similar HSM device.

A standardized protocol for hashing, signing, and verifying messages that can be
implemented by the Cosmos SDK and other third-party organizations is needed.
Such a standardized protocol subscribes to the following:

* Contains a specification of human-readable and machine-verifiable typed structured data
* Contains a framework for deterministic and injective encoding of structured data
* Utilizes cryptographically secure hashing and signing algorithms
* Provides a framework for supporting extensions and domain separation
* Is invulnerable to chosen ciphertext attacks
* Has protection against potentially signing transactions a user did not intend to

This specification is only concerned with the rationale and the standardized
implementation of Cosmos signed messages. It does **not** concern itself with the
concept of replay attacks, as that is left up to the higher-level application
implementation. If you view signed messages as a means of authorizing some
action or data, then such an application would have to either treat this as
idempotent or have mechanisms in place to reject known signed messages.

## Preliminary

The Cosmos message signing protocol will be parameterized with a cryptographically
secure hashing algorithm `SHA-256` and a signing algorithm `S` that contains
the operations `sign` and `verify`, which provide a digital signature over a set
of bytes and verification of a signature, respectively.

Note, our goal here is not to provide context and reasoning about why these
particular algorithms were chosen, apart from the fact that they are the de facto
algorithms used in CometBFT and the Cosmos SDK, and that they satisfy our needs
for cryptographic algorithms: resistance to collision and second pre-image attacks,
as well as being [deterministic](https://en.wikipedia.org/wiki/Hash_function#Determinism) and [uniform](https://en.wikipedia.org/wiki/Hash_function#Uniformity).

## Specification

CometBFT has a well-established protocol for signing messages using a canonical
JSON representation, as defined [here](https://github.com/cometbft/cometbft/blob/master/types/canonical.go).
+ +An example of such a canonical JSON structure is CometBFT's vote structure: + +```go +type CanonicalJSONVote struct { + ChainID string `json:"@chain_id"` + Type string `json:"@type"` + BlockID CanonicalJSONBlockID `json:"block_id"` + Height int64 `json:"height"` + Round int `json:"round"` + Timestamp string `json:"timestamp"` + VoteType byte `json:"type"` +} +``` + +With such canonical JSON structures, the specification requires that they include +meta fields: `@chain_id` and `@type`. These meta fields are reserved and must be +included. They are both of type `string`. In addition, fields must be ordered +in lexicographically ascending order. + +For the purposes of signing Cosmos messages, the `@chain_id` field must correspond +to the Cosmos chain identifier. The user-agent should **refuse** signing if the +`@chain_id` field does not match the currently active chain! The `@type` field +must equal the constant `"message"`. The `@type` field corresponds to the type of +structure the user will be signing in an application. For now, a user is only +allowed to sign bytes of valid ASCII text ([see here](https://github.com/cometbft/cometbft/blob/v0.37.0/libs/strings/string.go#L35-L64)). +However, this will change and evolve to support additional application-specific +structures that are human-readable and machine-verifiable ([see Future Adaptations](#future-adaptations)). + +Thus, we can have a canonical JSON structure for signing Cosmos messages using +the [JSON schema](http://json-schema.org/) specification as such: + +```json expandable +{ + "$schema": "http://json-schema.org/draft-04/schema#", + "$id": "cosmos/signing/typeData/schema", + "title": "The Cosmos signed message typed data schema.", + "type": "object", + "properties": { + "@chain_id": { + "type": "string", + "description": "The corresponding Cosmos chain identifier.", + "minLength": 1 + }, + "@type": { + "type": "string", + "description": "The message type. 
It must be 'message'.",
      "enum": [
        "message"
      ]
    },
    "text": {
      "type": "string",
      "description": "The valid ASCII text to sign.",
      "pattern": "^[\\x20-\\x7E]+$",
      "minLength": 1
    }
  },
  "required": [
    "@chain_id",
    "@type",
    "text"
  ]
}
```

e.g.

```json
{
  "@chain_id": "1",
  "@type": "message",
  "text": "Hello, you can identify me as XYZ on keybase."
}
```

## Future Adaptations

As applications can vary greatly in domain, it will be vital to support both
domain separation and human-readable and machine-verifiable structures.

Domain separation will allow application developers to prevent collisions of
otherwise identical structures. It should be designed to be unique per application
use and should be used directly in the signature encoding itself.

Human-readable and machine-verifiable structures will allow end users to sign
more complex structures, apart from just string messages, and still know exactly
what they are signing (as opposed to signing a bunch of arbitrary bytes).

Thus, in the future, the Cosmos signing message specification will be expected
to expand upon its canonical JSON structure to include such functionality.

## API

Application developers and designers should formalize a standard set of APIs that
adhere to the following specification:

***

### **cosmosSignBytes**

Params:

* `data`: the Cosmos signed message canonical JSON structure
* `address`: the Bech32 Cosmos account address to sign data with

Returns:

* `signature`: the Cosmos signature derived using signing algorithm `S`

***

### Examples

Using `secp256k1` as the DSA, `S`:

```javascript
data = {
  "@chain_id": "1",
  "@type": "message",
  "text": "I hereby claim I am ABC on Keybase!"
}

cosmosSignBytes(data, "cosmos1pvsch6cddahhrn5e8ekw0us50dpnugwnlfngt3")
> "0x7fc4a495473045022100dec81a9820df0102381cdbf7e8b0f1e2cb64c58e0ecda1324543742e0388e41a02200df37905a6505c1b56a404e23b7473d2c0bc5bcda96771d2dda59df6ed2b98f8"
```

## References

diff --git a/docs/sdk/v0.53/documentation/protocol-development/ics-overview.mdx b/docs/sdk/v0.53/documentation/protocol-development/ics-overview.mdx
new file mode 100644
index 00000000..c90c5d2f
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/ics-overview.mdx
@@ -0,0 +1,6 @@
---
title: Cosmos ICS
description: ICS030 - Signed Messages
---

* [ICS030 - Signed Messages](/docs/sdk/v0.53/documentation/protocol-development/ics-030-signed-messages)

diff --git a/docs/sdk/v0.53/documentation/protocol-development/protobuf-annotations.mdx b/docs/sdk/v0.53/documentation/protocol-development/protobuf-annotations.mdx
new file mode 100644
index 00000000..6ab204c1
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/protobuf-annotations.mdx
@@ -0,0 +1,134 @@
---
title: ProtocolBuffer Annotations
description: >-
  This document explains the various protobuf scalars that have been added to
  make working with protobuf easier for Cosmos SDK application developers
---

This document explains the various protobuf scalars that have been added to make working with protobuf easier for Cosmos SDK application developers.

## Signer

Signer specifies which field should be used to determine the signer of a message for the Cosmos SDK. Clients can also use this field to infer the signer of a message.

Read more about the signer field [here](/docs/sdk/v0.53/documentation/module-system/messages-and-queries).

```protobuf
option (cosmos.msg.v1.signer) = "from_address";
```

## Scalar

The scalar type defines a way for clients to understand how to construct protobuf messages according to what is expected by the module and the SDK.

```proto
(cosmos_proto.scalar) = "cosmos.AddressString"
```

Example of an account address string scalar:

```proto
  string from_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
```

Example of a validator address string scalar:

```proto
  string validator_address = 1 [(cosmos_proto.scalar) = "cosmos.ValidatorAddressString"];
```

Example of a Dec scalar:

```proto
  (cosmos_proto.scalar) = "cosmos.Dec",
```

Example of an Int scalar:

```proto
  string yes_count = 1 [(cosmos_proto.scalar) = "cosmos.Int"];
```

There are a few options for what can be provided as a scalar: `cosmos.AddressString`, `cosmos.ValidatorAddressString`, `cosmos.ConsensusAddressString`, `cosmos.Int`, `cosmos.Dec`.

## Implements\_Interface

The implements-interface annotation provides information to client tooling like [telescope](https://github.com/cosmology-tech/telescope) on how to encode and decode protobuf messages.

```proto
option (cosmos_proto.implements_interface) = "cosmos.auth.v1beta1.AccountI";
```

## Method,Field,Message Added In

`method_added_in`, `field_added_in` and `message_added_in` are annotations that denote to clients that a method or field was added in a later version. This is useful when new methods or fields are added in later releases, so that clients know what they can call.

The annotation should be worded as follows:

```proto
option (cosmos_proto.method_added_in) = "cosmos-sdk v0.50.1";
option (cosmos_proto.method_added_in) = "x/epochs v1.0.0";
option (cosmos_proto.method_added_in) = "simapp v24.0.0";
```

## Amino

The amino codec was removed in `v0.50+`, which means there is no need to register a `legacyAminoCodec`. In its place, Amino protobuf annotations provide information to the amino codec on how to encode and decode protobuf messages.


Amino annotations are only used for backwards compatibility with amino. New modules are not required to use amino annotations.


The annotations below tell the amino codec how to encode and decode protobuf messages in a backwards-compatible manner.

### Name

Name specifies the amino name shown to users so they can see which message they are signing.

```proto
option (amino.name) = "cosmos-sdk/BaseAccount";
```

```proto
  option (amino.name) = "cosmos-sdk/MsgSend";
```

### Field\_Name

Field name specifies the amino name shown to users so they can see which field they are signing.

```proto
uint64 height = 1 [(amino.field_name) = "public_key"];
```

```proto
  [(gogoproto.jsontag) = "creation_height", (amino.field_name) = "creation_height", (amino.dont_omitempty) = true];
```

### Dont\_OmitEmpty

Dont omitempty specifies that the field should not be omitted when encoding to amino.

```proto
repeated cosmos.base.v1beta1.Coin amount = 3 [(amino.dont_omitempty) = true];
```

```proto
  (amino.dont_omitempty) = true,
```

### Encoding

Encoding instructs the amino JSON marshaler how to encode certain fields whose encoding may differ from the standard behaviour. The most common example is how `repeated cosmos.base.v1beta1.Coin` is encoded when using the amino JSON encoding format.
The `legacy_coins` option tells the JSON marshaler [how to encode a null slice](https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/x/tx/signing/aminojson/json_marshal.go#L65) of `cosmos.base.v1beta1.Coin`.

```proto
(amino.encoding) = "legacy_coins",
```

diff --git a/docs/sdk/v0.53/documentation/protocol-development/protobuf-complete-guide.mdx b/docs/sdk/v0.53/documentation/protocol-development/protobuf-complete-guide.mdx
new file mode 100644
index 00000000..a1e2edbd
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/protobuf-complete-guide.mdx
@@ -0,0 +1,420 @@
---
title: Protocol Buffers Implementation Guide
description: Technical reference for Protocol Buffers in Cosmos SDK
---

# Protocol Buffers in Cosmos SDK

## Overview

The Cosmos SDK uses Protocol Buffers for all serialization. This document provides a technical reference for implementing and working with protobuf in SDK modules.

## Architecture

Protocol Buffers serve four primary functions in the Cosmos SDK:

1. **State Encoding** (ADR-019): Serialization of blockchain state
2. **Transaction Encoding** (ADR-020): Transaction structure and signing
3. **Query System** (ADR-021): gRPC query services
4.
**Message Services** (ADR-031): Transaction message routing + +## Setup + +### Required Tools + +```bash +# Install buf +curl -sSL https://github.com/bufbuild/buf/releases/download/v1.28.1/buf-$(uname -s)-$(uname -m) \ + -o /usr/local/bin/buf && chmod +x /usr/local/bin/buf +``` + +### Directory Structure + +``` +proto/ +├── buf.yaml +├── buf.gen.gogo.yaml +└── [org]/ + └── [module]/ + └── v1/ + ├── types.proto + ├── tx.proto + ├── query.proto + ├── genesis.proto + └── events.proto +``` + +### Configuration Files + +`buf.yaml`: +```yaml +version: v1 +name: buf.build/[org]/[project] +deps: + - buf.build/cosmos/cosmos-sdk + - buf.build/cosmos/cosmos-proto + - buf.build/cosmos/gogo-proto + - buf.build/googleapis/googleapis +breaking: + use: + - FILE +lint: + use: + - STANDARD + - COMMENTS + - FILE_LOWER_SNAKE_CASE + except: + - UNARY_RPC + - COMMENT_FIELD +``` + +`buf.gen.gogo.yaml`: +```yaml +version: v1 +plugins: + - name: gocosmos + out: .. + opt: plugins=grpc,Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any + - name: grpc-gateway + out: .. 
+ opt: logtostderr=true,allow_colon_final_segments=true +``` + +## Message Definitions + +### Basic Message Structure + +```protobuf +syntax = "proto3"; +package example.module.v1; + +import "gogoproto/gogo.proto"; +import "cosmos_proto/cosmos.proto"; + +option go_package = "github.com/example/module/types"; + +message Params { + option (gogoproto.goproto_stringer) = false; + + bool enabled = 1; + uint32 max_items = 2; + string authority = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"]; +} +``` + +### Transaction Messages + +```protobuf +syntax = "proto3"; +package example.module.v1; + +import "cosmos/msg/v1/msg.proto"; +import "cosmos_proto/cosmos.proto"; +import "gogoproto/gogo.proto"; + +option go_package = "github.com/example/module/types"; + +service Msg { + option (cosmos.msg.v1.service) = true; + + rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse); + rpc Execute(MsgExecute) returns (MsgExecuteResponse); +} + +message MsgUpdateParams { + option (cosmos.msg.v1.signer) = "authority"; + + string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + Params params = 2 [(gogoproto.nullable) = false]; +} + +message MsgUpdateParamsResponse {} + +message MsgExecute { + option (cosmos.msg.v1.signer) = "sender"; + + string sender = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string data = 2; +} + +message MsgExecuteResponse { + string result = 1; +} +``` + +### Query Services + +```protobuf +syntax = "proto3"; +package example.module.v1; + +import "google/api/annotations.proto"; +import "cosmos/base/query/v1beta1/pagination.proto"; +import "gogoproto/gogo.proto"; + +option go_package = "github.com/example/module/types"; + +service Query { + rpc Params(QueryParamsRequest) returns (QueryParamsResponse) { + option (google.api.http).get = "/example/module/v1/params"; + } + + rpc State(QueryStateRequest) returns (QueryStateResponse) { + option (google.api.http).get = "/example/module/v1/state/{id}"; + } + + rpc 
States(QueryStatesRequest) returns (QueryStatesResponse) { + option (google.api.http).get = "/example/module/v1/states"; + } +} + +message QueryParamsRequest {} + +message QueryParamsResponse { + Params params = 1 [(gogoproto.nullable) = false]; +} + +message QueryStateRequest { + string id = 1; +} + +message QueryStateResponse { + State state = 1; +} + +message QueryStatesRequest { + cosmos.base.query.v1beta1.PageRequest pagination = 1; +} + +message QueryStatesResponse { + repeated State states = 1 [(gogoproto.nullable) = false]; + cosmos.base.query.v1beta1.PageResponse pagination = 2; +} +``` + +## Annotations + +### Cosmos Proto Annotations + +| Annotation | Usage | Description | +|------------|-------|-------------| +| `(cosmos_proto.scalar)` | Field-level | Type mapping for addresses and numbers | +| `(cosmos_proto.accepts_interface)` | Field-level | Any field interface constraint | +| `(cosmos_proto.implements_interface)` | Message-level | Interface implementation marker | +| `(cosmos.msg.v1.signer)` | Message-level | Transaction signer field | +| `(cosmos.msg.v1.service)` | Service-level | Msg service marker | + +### Scalar Types + +```protobuf +message Example { + string account = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; + string validator = 2 [(cosmos_proto.scalar) = "cosmos.ValidatorAddressString"]; + string consensus = 3 [(cosmos_proto.scalar) = "cosmos.ConsensusAddressString"]; + string amount = 4 [(cosmos_proto.scalar) = "cosmos.Int"]; + string rate = 5 [(cosmos_proto.scalar) = "cosmos.Dec"]; +} +``` + +### Gogoproto Annotations + +| Annotation | Effect | +|------------|--------| +| `(gogoproto.nullable) = false` | Non-pointer field in Go | +| `(gogoproto.goproto_getters) = false` | No getter methods | +| `(gogoproto.equal) = true` | Generate Equal() method | +| `(gogoproto.goproto_stringer) = false` | Custom String() method | +| `(gogoproto.castrepeated)` | Custom slice type | +| `(gogoproto.casttype)` | Custom Go type | + +## Interface 
Registry + +### Registration + +```go +package types + +import ( + "github.com/cosmos/cosmos-sdk/codec" + cdctypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/msgservice" +) + +func RegisterInterfaces(registry cdctypes.InterfaceRegistry) { + registry.RegisterImplementations((*sdk.Msg)(nil), + &MsgUpdateParams{}, + &MsgExecute{}, + ) + + msgservice.RegisterMsgServiceDesc(registry, &_Msg_serviceDesc) +} + +var ( + Amino = codec.NewLegacyAmino() + ModuleCdc = codec.NewProtoCodec(cdctypes.NewInterfaceRegistry()) +) +``` + +### Working with Any Types + +```go +// Pack interface to Any +func PackInterface(i proto.Message) (*codectypes.Any, error) { + return codectypes.NewAnyWithValue(i) +} + +// Unpack Any to interface +func UnpackInterface(any *codectypes.Any, iface interface{}) error { + return ModuleCdc.UnpackAny(any, iface) +} + +// Implement UnpackInterfaces for types containing Any fields +func (m *Message) UnpackInterfaces(unpacker codectypes.AnyUnpacker) error { + var iface InterfaceType + return unpacker.UnpackAny(m.AnyField, &iface) +} +``` + +## Code Generation + +```bash +# Generate code +cd proto +buf generate + +# Move generated files +cp -r github.com/[org]/[project]/* ../ +rm -rf github.com +``` + +## Deterministic Encoding + +The SDK requires deterministic encoding for: +- Transaction signing +- State commitment +- Consensus + +Implementation: +```go +func GetSignBytes(msg proto.Message) ([]byte, error) { + return msg.Marshal() // protobuf marshaling is deterministic +} +``` + +## Field Management + +### Field Numbers + +```protobuf +message Schema { + // Frequently accessed fields: 1-15 (single byte encoding) + string id = 1; + string name = 2; + + // Reserved for deleted fields + reserved 3, 4; + reserved "old_field", "removed_field"; + + // Deprecated field + string legacy_field = 5 [deprecated = true]; + + // Less frequent fields: 16+ + string metadata = 16; +} +``` + 
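The single-byte claim for field numbers 1-15 follows from the wire format: each field is prefixed with a varint key equal to `(field_number << 3) | wire_type`, so any key below 128 fits in one byte. A small sketch (the `tagBytes` helper is illustrative, not part of any SDK or protobuf package):

```go
package main

import "fmt"

// tagBytes returns how many bytes the protobuf field key occupies on the
// wire. The key is the varint encoding of (fieldNumber << 3 | wireType),
// which is why field numbers 1-15 fit in a single byte (key < 128) while
// field 16 and above need at least two.
func tagBytes(fieldNumber, wireType uint64) int {
	key := fieldNumber<<3 | wireType
	n := 1
	for key >= 0x80 { // each varint byte carries 7 bits of payload
		key >>= 7
		n++
	}
	return n
}

func main() {
	const lengthDelimited = 2 // wire type for strings, bytes, messages
	fmt.Println(tagBytes(15, lengthDelimited))   // 1
	fmt.Println(tagBytes(16, lengthDelimited))   // 2
	fmt.Println(tagBytes(2047, lengthDelimited)) // 2
}
```

This is why the schema above places frequently accessed fields in the 1-15 range and pushes less frequent fields to 16 and beyond.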
+### Versioning + +```protobuf +// Initial version +package example.module.v1beta1; + +// Stable version +package example.module.v1; + +// Breaking changes require new version +package example.module.v2; +``` + +### Version Annotations + +```protobuf +service Msg { + rpc OriginalMethod(Request) returns (Response); + + rpc NewMethod(NewRequest) returns (NewResponse) { + option (cosmos_proto.method_added_in) = "v0.47"; + } +} + +message Enhanced { + string existing = 1; + string new_field = 2 [(cosmos_proto.field_added_in) = "v0.47"]; +} +``` + +## Testing + +### Message Testing + +```go +func TestMessage(t *testing.T) { + msg := &types.MsgExecute{ + Sender: "cosmos1...", + Data: "test", + } + + // Test marshaling + bz, err := msg.Marshal() + require.NoError(t, err) + + // Test unmarshaling + var decoded types.MsgExecute + err = decoded.Unmarshal(bz) + require.NoError(t, err) + require.Equal(t, msg, &decoded) + + // Test determinism + bz2, err := msg.Marshal() + require.NoError(t, err) + require.Equal(t, bz, bz2) +} +``` + +### Interface Registration Testing + +```go +func TestRegistration(t *testing.T) { + registry := cdctypes.NewInterfaceRegistry() + types.RegisterInterfaces(registry) + + msg := &MsgExecute{Sender: "cosmos1...", Data: "test"} + + any, err := codectypes.NewAnyWithValue(msg) + require.NoError(t, err) + + var unpacked sdk.Msg + err = registry.UnpackAny(any, &unpacked) + require.NoError(t, err) + require.Equal(t, msg, unpacked) +} +``` + +## Troubleshooting + +| Error | Cause | Solution | +|-------|-------|----------| +| `Type URL not found` | Missing registration | Add to `RegisterInterfaces` | +| `Cannot unmarshal Any` | Interface mismatch | Verify interface implementation | +| `Breaking changes detected` | Field modification | Use deprecation or new version | +| `Import not found` | Missing dependency | Run `buf mod update` | +| `Field number collision` | Reused number | Use reserved statement | + +## References + +- [Protocol Buffers Language 
Guide](https://protobuf.dev/programming-guides/proto3/)
- [Buf Documentation](https://buf.build/docs/)
- [Gogoproto Extensions](https://github.com/cosmos/gogoproto)
- [Cosmos Proto](https://github.com/cosmos/cosmos-proto)
- [SDK Proto Definitions](https://buf.build/cosmos/cosmos-sdk)
\ No newline at end of file

diff --git a/docs/sdk/v0.53/documentation/protocol-development/protobuf.mdx b/docs/sdk/v0.53/documentation/protocol-development/protobuf.mdx
new file mode 100644
index 00000000..da1e18d2
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/protobuf.mdx
@@ -0,0 +1,1052 @@
---
title: Protocol Buffers
description: >-
  The Cosmos SDK uses protocol buffers extensively. This document provides a
  guide on how they are used in the cosmos-sdk.
---

The Cosmos SDK uses protocol buffers extensively. This document provides a guide on how they are used in the cosmos-sdk.

To generate the proto files, the Cosmos SDK uses a Docker image, which is available for anyone to use. The latest version is `ghcr.io/cosmos/proto-builder:0.12.x`.

Below is an example of the Cosmos SDK's commands for generating, linting, and formatting protobuf files, which can be reused in any application's Makefile.

```make expandable
#!/usr/bin/make -f

PACKAGES_NOSIMULATION=$(shell go list ./... | grep -v '/simulation')

PACKAGES_SIMTEST=$(shell go list ./...
| grep '/simulation') + +export VERSION := $(shell echo $(shell git describe --tags --always --match "v*") | sed 's/^v/') + +export CMTVERSION := $(shell go list -m github.com/cometbft/cometbft | sed 's:.* ::') + +export COMMIT := $(shell git log -1 --format='%H') + +LEDGER_ENABLED ?= true +BINDIR ?= $(GOPATH)/bin +BUILDDIR ?= $(CURDIR)/build +SIMAPP = ./simapp +MOCKS_DIR = $(CURDIR)/tests/mocks +HTTPS_GIT := https://github.com/cosmos/cosmos-sdk.git +DOCKER := $(shell which docker) + +PROJECT_NAME = $(shell git remote get-url origin | xargs basename -s .git) + +# process build tags +build_tags = netgo + ifeq ($(LEDGER_ENABLED),true) + ifeq ($(OS),Windows_NT) + +GCCEXE = $(shell where gcc.exe 2> NUL) + ifeq ($(GCCEXE),) + $(error gcc.exe not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + else + UNAME_S = $(shell uname -s) + ifeq ($(UNAME_S),OpenBSD) + $(warning OpenBSD detected, disabling ledger support (https://github.com/cosmos/cosmos-sdk/issues/1988)) + +else + GCC = $(shell command -v gcc 2> /dev/null) + ifeq ($(GCC),) + $(error gcc not installed for ledger support, please install or set LEDGER_ENABLED=false) + +else + build_tags += ledger + endif + endif + endif +endif + ifeq (secp,$(findstring secp,$(COSMOS_BUILD_OPTIONS))) + +build_tags += libsecp256k1_sdk +endif + ifeq (legacy,$(findstring legacy,$(COSMOS_BUILD_OPTIONS))) + +build_tags += app_v1 +endif + whitespace := +whitespace += $(whitespace) + comma := , +build_tags_comma_sep := $(subst $(whitespace),$(comma),$(build_tags)) + +# process linker flags + +ldflags = -X github.com/cosmos/cosmos-sdk/version.Name=sim \ + -X github.com/cosmos/cosmos-sdk/version.AppName=simd \ + -X github.com/cosmos/cosmos-sdk/version.Version=$(VERSION) \ + -X github.com/cosmos/cosmos-sdk/version.Commit=$(COMMIT) \ + -X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)" \ + -X github.com/cometbft/cometbft/version.TMCoreSemVer=$(CMTVERSION) 
+ +# DB backend selection + ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += gcc +endif + ifeq (badgerdb,$(findstring badgerdb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += badgerdb +endif +# handle rocksdb + ifeq (rocksdb,$(findstring rocksdb,$(COSMOS_BUILD_OPTIONS))) + +CGO_ENABLED=1 + build_tags += rocksdb +endif +# handle boltdb + ifeq (boltdb,$(findstring boltdb,$(COSMOS_BUILD_OPTIONS))) + +build_tags += boltdb +endif + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +ldflags += -w -s +endif +ldflags += $(LDFLAGS) + ldflags := $(strip $(ldflags)) + +build_tags += $(BUILD_TAGS) + +build_tags := $(strip $(build_tags)) + +BUILD_FLAGS := -tags "$(build_tags)" -ldflags '$(ldflags)' +# check for nostrip option + ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -trimpath +endif + +# Check for debug option + ifeq (debug,$(findstring debug,$(COSMOS_BUILD_OPTIONS))) + +BUILD_FLAGS += -gcflags "all=-N -l" +endif + +all: tools build lint test vulncheck + +# The below include contains the tools and runsim targets. +include contrib/devtools/Makefile + +############################################################################### +### Build ### +############################################################################### + +BUILD_TARGETS := build install + +build: BUILD_ARGS=-o $(BUILDDIR)/ + +build-linux-amd64: + GOOS=linux GOARCH=amd64 LEDGER_ENABLED=false $(MAKE) + +build + +build-linux-arm64: + GOOS=linux GOARCH=arm64 LEDGER_ENABLED=false $(MAKE) + +build + +$(BUILD_TARGETS): go.sum $(BUILDDIR)/ + cd ${ + CURRENT_DIR +}/simapp && go $@ -mod=readonly $(BUILD_FLAGS) $(BUILD_ARGS) ./... 
+ +$(BUILDDIR)/: + mkdir -p $(BUILDDIR)/ + +cosmovisor: + $(MAKE) -C tools/cosmovisor cosmovisor + +rosetta: + $(MAKE) -C tools/rosetta rosetta + +confix: + $(MAKE) -C tools/confix confix + +hubl: + $(MAKE) -C tools/hubl hubl + +.PHONY: build build-linux-amd64 build-linux-arm64 cosmovisor rosetta confix + +mocks: $(MOCKS_DIR) + @go install github.com/golang/mock/mockgen@v1.6.0 + sh ./scripts/mockgen.sh +.PHONY: mocks + +vulncheck: $(BUILDDIR)/ + GOBIN=$(BUILDDIR) + +go install golang.org/x/vuln/cmd/govulncheck@latest + $(BUILDDIR)/govulncheck ./... + +$(MOCKS_DIR): + mkdir -p $(MOCKS_DIR) + +distclean: clean tools-clean +clean: + rm -rf \ + $(BUILDDIR)/ \ + artifacts/ \ + tmp-swagger-gen/ \ + .testnets + +.PHONY: distclean clean + +############################################################################### +### Tools & Dependencies ### +############################################################################### + +go.sum: go.mod + echo "Ensure dependencies have not been modified ..." 
>&2 + go mod verify + go mod tidy + +############################################################################### +### Documentation ### +############################################################################### + +godocs: + @echo "--> Wait a few seconds and visit http://localhost:6060/pkg/github.com/cosmos/cosmos-sdk/types" + go install golang.org/x/tools/cmd/godoc@latest + godoc -http=:6060 + +build-docs: + @cd docs && DOCS_DOMAIN=docs.cosmos.network sh ./build-all.sh + +.PHONY: build-docs + +############################################################################### +### Tests & Simulation ### +############################################################################### + +# make init-simapp initializes a single local node network +# it is useful for testing and development +# Usage: make install && make init-simapp && simd start +# Warning: make init-simapp will remove all data in simapp home directory +init-simapp: + ./scripts/init-simapp.sh + +test: test-unit +test-e2e: + $(MAKE) -C tests test-e2e +test-e2e-cov: + $(MAKE) -C tests test-e2e-cov +test-integration: + $(MAKE) -C tests test-integration +test-integration-cov: + $(MAKE) -C tests test-integration-cov +test-all: test-unit test-e2e test-integration test-ledger-mock test-race + +TEST_PACKAGES=./... +TEST_TARGETS := test-unit test-unit-amino test-unit-proto test-ledger-mock test-race test-ledger test-race + +# Test runs-specific rules. To add a new test target, just add +# a new rule, customise ARGS or TEST_PACKAGES ad libitum, and +# append the new rule to the TEST_TARGETS list. 
+test-unit: test_tags += cgo ledger test_ledger_mock norace
+test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace
+test-ledger: test_tags += cgo ledger norace
+test-ledger-mock: test_tags += ledger test_ledger_mock norace
+test-race: test_tags += cgo ledger test_ledger_mock
+test-race: ARGS=-race
+test-race: TEST_PACKAGES=$(PACKAGES_NOSIMULATION)
+$(TEST_TARGETS): run-tests
+
+# check-* compiles and collects tests without running them
+# note: go test -c doesn't support multiple packages yet (https://github.com/golang/go/issues/15513)
+
+CHECK_TEST_TARGETS := check-test-unit check-test-unit-amino
+check-test-unit: test_tags += cgo ledger test_ledger_mock norace
+check-test-unit-amino: test_tags += ledger test_ledger_mock test_amino norace
+$(CHECK_TEST_TARGETS): EXTRA_ARGS=-run=none
+$(CHECK_TEST_TARGETS): run-tests
+
+ARGS += -tags "$(test_tags)"
+SUB_MODULES = $(shell find . -type f -name 'go.mod' -print0 | xargs -0 -n1 dirname | sort)
+
+CURRENT_DIR = $(shell pwd)
+
+run-tests:
+ifneq (,$(shell which tparse 2>/dev/null))
+	@echo "Starting unit tests"; \
+	finalec=0; \
+	for module in $(SUB_MODULES); do \
+		cd ${CURRENT_DIR}/$$module; \
+		echo "Running unit tests for $$(grep '^module' go.mod)"; \
+		go test -mod=readonly -json $(ARGS) $(TEST_PACKAGES) ./... | tparse; \
+		ec=$$?; \
+		if [ "$$ec" -ne '0' ]; then finalec=$$ec; fi; \
+	done; \
+	exit $$finalec
+else
+	@echo "Starting unit tests"; \
+	finalec=0; \
+	for module in $(SUB_MODULES); do \
+		cd ${CURRENT_DIR}/$$module; \
+		echo "Running unit tests for $$(grep '^module' go.mod)"; \
+		go test -mod=readonly $(ARGS) $(TEST_PACKAGES) ./... ; \
+		ec=$$?; \
+		if [ "$$ec" -ne '0' ]; then finalec=$$ec; fi; \
+	done; \
+	exit $$finalec
+endif
+
+.PHONY: run-tests test test-all $(TEST_TARGETS)
+
+test-sim-nondeterminism:
+	@echo "Running non-determinism test..."
+	@cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \
+		-NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h
+
+# Requires an exported plugin. See store/streaming/README.md for documentation.
+#
+# example:
+# export COSMOS_SDK_ABCI_V1=
+# make test-sim-nondeterminism-streaming
+#
+# Using the built-in examples:
+# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file
+# make test-sim-nondeterminism-streaming
+test-sim-nondeterminism-streaming:
+	@echo "Running non-determinism-streaming test..."
+	@cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run TestAppStateDeterminism -Enabled=true \
+		-NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h -EnableStreaming=true
+
+test-sim-custom-genesis-fast:
+	@echo "Running custom genesis simulation..."
+	@echo "By default, ${HOME}/.gaiad/config/genesis.json will be used."
+	@cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run TestFullAppSimulation -Genesis=${HOME}/.gaiad/config/genesis.json \
+		-Enabled=true -NumBlocks=100 -BlockSize=200 -Commit=true -Seed=99 -Period=5 -v -timeout 24h
+
+test-sim-import-export: runsim
+	@echo "Running application import/export simulation. This may take several minutes..."
+	@cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppImportExport
+
+test-sim-after-import: runsim
+	@echo "Running application simulation-after-import. This may take several minutes..."
+	@cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 5 TestAppSimulationAfterImport
+
+test-sim-custom-genesis-multi-seed: runsim
+	@echo "Running multi-seed custom genesis simulation..."
+	@echo "By default, ${HOME}/.gaiad/config/genesis.json will be used."
+	@cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Genesis=${HOME}/.gaiad/config/genesis.json -SimAppPkg=.
-ExitOnFail 400 5 TestFullAppSimulation
+
+test-sim-multi-seed-long: runsim
+	@echo "Running long multi-seed application simulation. This may take awhile!"
+	@cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 500 50 TestFullAppSimulation
+
+test-sim-multi-seed-short: runsim
+	@echo "Running short multi-seed application simulation. This may take awhile!"
+	@cd ${CURRENT_DIR}/simapp && $(BINDIR)/runsim -Jobs=4 -SimAppPkg=. -ExitOnFail 50 10 TestFullAppSimulation
+
+test-sim-benchmark-invariants:
+	@echo "Running simulation invariant benchmarks..."
+	cd ${CURRENT_DIR}/simapp && @go test -mod=readonly -benchmem -bench=BenchmarkInvariants -run=^$$ \
+		-Enabled=true -NumBlocks=1000 -BlockSize=200 \
+		-Period=1 -Commit=true -Seed=57 -v -timeout 24h
+
+.PHONY: \
+test-sim-nondeterminism \
+test-sim-nondeterminism-streaming \
+test-sim-custom-genesis-fast \
+test-sim-import-export \
+test-sim-after-import \
+test-sim-custom-genesis-multi-seed \
+test-sim-multi-seed-short \
+test-sim-multi-seed-long \
+test-sim-benchmark-invariants
+
+SIM_NUM_BLOCKS ?= 500
+SIM_BLOCK_SIZE ?= 200
+SIM_COMMIT ?= true
+
+test-sim-benchmark:
+	@echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+	@cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run=^$$ $(.) -bench ^BenchmarkFullAppSimulation$$ \
+		-Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h
+
+# Requires an exported plugin. See store/streaming/README.md for documentation.
+#
+# example:
+# export COSMOS_SDK_ABCI_V1=
+# make test-sim-benchmark-streaming
+#
+# Using the built-in examples:
+# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file
+# make test-sim-benchmark-streaming
+test-sim-benchmark-streaming:
+	@echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+	@cd ${CURRENT_DIR}/simapp && go test -mod=readonly -run=^$$ $(.) -bench ^BenchmarkFullAppSimulation$$ \
+		-Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -EnableStreaming=true
+
+test-sim-profile:
+	@echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+	@cd ${CURRENT_DIR}/simapp && go test -mod=readonly -benchmem -run=^$$ $(.) -bench ^BenchmarkFullAppSimulation$$ \
+		-Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out
+
+# Requires an exported plugin. See store/streaming/README.md for documentation.
+#
+# example:
+# export COSMOS_SDK_ABCI_V1=
+# make test-sim-profile-streaming
+#
+# Using the built-in examples:
+# export COSMOS_SDK_ABCI_V1=/store/streaming/abci/examples/file/file
+# make test-sim-profile-streaming
+test-sim-profile-streaming:
+	@echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
+	@cd ${CURRENT_DIR}/simapp && go test -mod=readonly -benchmem -run=^$$ $(.) -bench ^BenchmarkFullAppSimulation$$ \
+		-Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out -EnableStreaming=true
+
+.PHONY: test-sim-profile test-sim-benchmark
+
+test-rosetta:
+	docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile .
+	docker-compose -f contrib/rosetta/docker-compose.yaml up --abort-on-container-exit --exit-code-from test_rosetta --build
+.PHONY: test-rosetta
+
+benchmark:
+	@go test -mod=readonly -bench=.
$(PACKAGES_NOSIMULATION)
+.PHONY: benchmark
+
+###############################################################################
+###                                Linting                                  ###
+###############################################################################
+
+golangci_lint_cmd=golangci-lint
+golangci_version=v1.51.2
+
+lint:
+	@echo "--> Running linter"
+	@go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version)
+	@./scripts/go-lint-all.bash --timeout=15m
+
+lint-fix:
+	@echo "--> Running linter"
+	@go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(golangci_version)
+	@./scripts/go-lint-all.bash --fix
+
+.PHONY: lint lint-fix
+
+###############################################################################
+###                                Protobuf                                 ###
+###############################################################################
+
+protoVer=0.13.2
+protoImageName=ghcr.io/cosmos/proto-builder:$(protoVer)
+
+protoImage=$(DOCKER) run --rm -v $(CURDIR):/workspace --workdir /workspace $(protoImageName)
+
+proto-all: proto-format proto-lint proto-gen
+
+proto-gen:
+	@echo "Generating Protobuf files"
+	@$(protoImage) sh ./scripts/protocgen.sh
+
+proto-swagger-gen:
+	@echo "Generating Protobuf Swagger"
+	@$(protoImage) sh ./scripts/protoc-swagger-gen.sh
+
+proto-format:
+	@$(protoImage) find ./ -name "*.proto" -exec clang-format -i {} \;
+
+proto-lint:
+	@$(protoImage) buf lint --error-format=json
+
+proto-check-breaking:
+	@$(protoImage) buf breaking --against $(HTTPS_GIT)#branch=main
+
+CMT_URL = https://raw.githubusercontent.com/cometbft/cometbft/v0.38.0-alpha.2/proto/tendermint
+
+CMT_CRYPTO_TYPES = proto/tendermint/crypto
+CMT_ABCI_TYPES = proto/tendermint/abci
+CMT_TYPES = proto/tendermint/types
+CMT_VERSION = proto/tendermint/version
+CMT_LIBS = proto/tendermint/libs/bits
+CMT_P2P = proto/tendermint/p2p
+
+proto-update-deps:
+	@echo "Updating Protobuf dependencies"
+
+	@mkdir -p $(CMT_ABCI_TYPES)
+	@curl -sSL $(CMT_URL)/abci/types.proto >
$(CMT_ABCI_TYPES)/types.proto
+
+	@mkdir -p $(CMT_VERSION)
+	@curl -sSL $(CMT_URL)/version/types.proto > $(CMT_VERSION)/types.proto
+
+	@mkdir -p $(CMT_TYPES)
+	@curl -sSL $(CMT_URL)/types/types.proto > $(CMT_TYPES)/types.proto
+	@curl -sSL $(CMT_URL)/types/evidence.proto > $(CMT_TYPES)/evidence.proto
+	@curl -sSL $(CMT_URL)/types/params.proto > $(CMT_TYPES)/params.proto
+	@curl -sSL $(CMT_URL)/types/validator.proto > $(CMT_TYPES)/validator.proto
+	@curl -sSL $(CMT_URL)/types/block.proto > $(CMT_TYPES)/block.proto
+
+	@mkdir -p $(CMT_CRYPTO_TYPES)
+	@curl -sSL $(CMT_URL)/crypto/proof.proto > $(CMT_CRYPTO_TYPES)/proof.proto
+	@curl -sSL $(CMT_URL)/crypto/keys.proto > $(CMT_CRYPTO_TYPES)/keys.proto
+
+	@mkdir -p $(CMT_LIBS)
+	@curl -sSL $(CMT_URL)/libs/bits/types.proto > $(CMT_LIBS)/types.proto
+
+	@mkdir -p $(CMT_P2P)
+	@curl -sSL $(CMT_URL)/p2p/types.proto > $(CMT_P2P)/types.proto
+
+	$(DOCKER) run --rm -v $(CURDIR)/proto:/workspace --workdir /workspace $(protoImageName) buf mod update
+
+.PHONY: proto-all proto-gen proto-swagger-gen proto-format proto-lint proto-check-breaking proto-update-deps
+
+###############################################################################
+###                                Localnet                                 ###
+###############################################################################
+
+localnet-build-env:
+	$(MAKE) -C contrib/images simd-env
+localnet-build-dlv:
+	$(MAKE) -C contrib/images simd-dlv
+
+localnet-build-nodes:
+	$(DOCKER) run --rm -v $(CURDIR)/.testnets:/data cosmossdk/simd \
+		testnet init-files --v 4 -o /data --starting-ip-address 192.168.10.2 --keyring-backend=test
+	docker-compose up -d
+
+localnet-stop:
+	docker-compose down
+
+# localnet-start will run a 4-node testnet locally.
The nodes are +# based off the docker images in: ./contrib/images/simd-env +localnet-start: localnet-stop localnet-build-env localnet-build-nodes + +# localnet-debug will run a 4-node testnet locally in debug mode +# you can read more about the debug mode here: ./contrib/images/simd-dlv/README.md +localnet-debug: localnet-stop localnet-build-dlv localnet-build-nodes + +.PHONY: localnet-start localnet-stop localnet-debug localnet-build-env localnet-build-dlv localnet-build-nodes + +############################################################################### +### rosetta ### +############################################################################### +# builds rosetta test data dir +rosetta-data: + -docker container rm data_dir_build + docker build -t rosetta-ci:latest -f contrib/rosetta/rosetta-ci/Dockerfile . + docker run --name data_dir_build -t rosetta-ci:latest sh /rosetta/data.sh + docker cp data_dir_build:/tmp/data.tar.gz "$(CURDIR)/contrib/rosetta/rosetta-ci/data.tar.gz" + docker container rm data_dir_build +.PHONY: rosetta-data +``` + +The script used to generate the protobuf files can be found in the `scripts/` directory. + +```shell +#!/usr/bin/env bash + +# How to run manually: +# docker build --pull --rm -f "contrib/devtools/Dockerfile" -t cosmossdk-proto:latest "contrib/devtools" +# docker run --rm -v $(pwd):/workspace --workdir /workspace cosmossdk-proto sh ./scripts/protocgen.sh + +set -e + +echo "Generating gogo proto code" +cd proto +proto_dirs=$(find ./cosmos ./amino -path -prune -o -name '*.proto' -print0 | xargs -0 -n1 dirname | sort | uniq) +for dir in $proto_dirs; do + for file in $(find "${dir}" -maxdepth 1 -name '*.proto'); do + # this regex checks if a proto file has its go_package set to cosmossdk.io/api/... 
+    # gogo proto files SHOULD ONLY be generated if this is false
+    # we don't want gogo proto to run for proto files which are natively built for google.golang.org/protobuf
+    if grep -q "option go_package" "$file" && grep -H -o -c 'option go_package.*cosmossdk.io/api' "$file" | grep -q ':0$'; then
+      buf generate --template buf.gen.gogo.yaml $file
+    fi
+  done
+done
+
+cd ..
+
+# generate codec/testdata proto code
+(cd testutil/testdata; buf generate)
+
+# generate baseapp test messages
+(cd baseapp/testutil; buf generate)
+
+# move proto files to the right places
+cp -r github.com/cosmos/cosmos-sdk/* ./
+cp -r cosmossdk.io/** ./
+rm -rf github.com cosmossdk.io
+
+go mod tidy
+
+./scripts/protocgen-pulsar.sh
+
+```
+
+## Buf
+
+[Buf](https://buf.build) is a protobuf tool that abstracts away the need to use the complicated `protoc` toolchain and, among other things, ensures you are using protobuf in accordance with the majority of the ecosystem. Within the cosmos-sdk repository there are a few files that have a buf prefix. Let's start with the top level and then dive into the various directories.
+
+### Workspace
+
+At the root level directory a workspace is defined using [buf workspaces](https://docs.buf.build/configuration/v1/buf-work-yaml). This helps if there are one or more protobuf-containing directories in your project.
+
+Cosmos SDK example:
+
+```yaml
+version: v1
+directories:
+  - proto
+```
+
+### Proto Directory
+
+Next is the `proto/` directory where all of our protobuf files live. In here there are many different buf files defined, each serving a different purpose.
+
+```bash
+├── README.md
+├── buf.gen.gogo.yaml
+├── buf.gen.pulsar.yaml
+├── buf.gen.swagger.yaml
+├── buf.lock
+├── buf.md
+├── buf.yaml
+├── cosmos
+└── tendermint
+```
+
+The above diagram shows all the files and directories within the Cosmos SDK `proto/` directory.
+
+#### `buf.gen.gogo.yaml`
+
+`buf.gen.gogo.yaml` defines how the protobuf files should be generated for use within the module. This file uses [gogoproto](https://github.com/gogo/protobuf), a separate generator from the google go-proto generator that makes working with various objects more ergonomic, and it has more performant encode and decode steps.
+
+```yaml
+version: v1
+plugins:
+  - name: gocosmos
+    out: ..
+    opt: plugins=grpc,Mgoogle/protobuf/any.proto=github.com/cosmos/gogoproto/types/any
+  - name: grpc-gateway
+    out: ..
+    opt: logtostderr=true,allow_colon_final_segments=true
+```
+
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+
+#### `buf.gen.pulsar.yaml`
+
+`buf.gen.pulsar.yaml` defines how protobuf files should be generated using the [new golang apiv2 of protobuf](https://go.dev/blog/protobuf-apiv2). This generator is used instead of the google go-proto generator because it has some extra helpers for Cosmos SDK applications and will have more performant encode and decode than the google go-proto generator. You can follow the development of this generator [here](https://github.com/cosmos/cosmos-proto).
+
+```yaml expandable
+version: v1
+managed:
+  enabled: true
+  go_package_prefix:
+    default: cosmossdk.io/api
+    except:
+      - buf.build/googleapis/googleapis
+      - buf.build/cosmos/gogo-proto
+      - buf.build/cosmos/cosmos-proto
+    override:
+plugins:
+  - name: go-pulsar
+    out: ../api
+    opt: paths=source_relative
+  - name: go-grpc
+    out: ../api
+    opt: paths=source_relative
+```
+
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+
+#### `buf.gen.swagger.yaml`
+
+`buf.gen.swagger.yaml` generates the Swagger documentation for the queries and messages of the chain. It only defines the REST API endpoints that were defined in the query and msg servers. You can find examples of this [here](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/bank/v1beta1/query.proto#L19)
+
+```yaml
+version: v1
+plugins:
+  - name: swagger
+    out: ../tmp-swagger-gen
+    opt: logtostderr=true,fqn_for_swagger_name=true,simple_operation_ids=true
+```
+
+
+Example of how to define `gen` files can be found [here](https://docs.buf.build/tour/generate-go-code)
+
+
+#### `buf.lock`
+
+This is an autogenerated file based on the dependencies required by the `.gen` files.
There is no need to copy the current one. If you depend on cosmos-sdk proto definitions, a new entry for the Cosmos SDK will need to be provided. The dependency you will need to use is `buf.build/cosmos/cosmos-sdk`.
+
+```yaml expandable
+# Generated by buf. DO NOT EDIT.
+version: v1
+deps:
+  - remote: buf.build
+    owner: cosmos
+    repository: cosmos-proto
+    commit: 04467658e59e44bbb22fe568206e1f70
+    digest: shake256:73a640bd60e0c523b0f8237ff34eab67c45a38b64bbbde1d80224819d272dbf316ac183526bd245f994af6608b025f5130483d0133c5edd385531326b5990466
+  - remote: buf.build
+    owner: cosmos
+    repository: gogo-proto
+    commit: 88ef6483f90f478fb938c37dde52ece3
+    digest: shake256:89c45df2aa11e0cff97b0d695436713db3d993d76792e9f8dc1ae90e6ab9a9bec55503d48ceedd6b86069ab07d3041b32001b2bfe0227fa725dd515ff381e5ba
+  - remote: buf.build
+    owner: googleapis
+    repository: googleapis
+    commit: 751cbe31638d43a9bfb6162cd2352e67
+    digest: shake256:87f55470d9d124e2d1dedfe0231221f4ed7efbc55bc5268917c678e2d9b9c41573a7f9a557f6d8539044524d9fc5ca8fbb7db05eb81379d168285d76b57eb8a4
+  - remote: buf.build
+    owner: protocolbuffers
+    repository: wellknowntypes
+    commit: 3ddd61d1f53d485abd3d3a2b47a62b8e
+    digest: shake256:9e6799d56700d0470c3723a2fd027e8b4a41a07085a0c90c58e05f6c0038fac9b7a0170acd7692707a849983b1b8189aa33e7b73f91d68157f7136823115546b
+```
+
+#### `buf.yaml`
+
+`buf.yaml` defines the [name of your package](https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.yaml#L3), which [breakage checker](https://docs.buf.build/tour/detect-breaking-changes) to use and how to [lint your protobuf files](https://buf.build/docs/tutorials/getting-started-with-buf-cli#lint-your-api).
+
+```yaml expandable
+# This module represents buf.build/cosmos/cosmos-sdk
+version: v1
+name: buf.build/cosmos/cosmos-sdk
+deps:
+  - buf.build/cosmos/cosmos-proto
+  - buf.build/cosmos/gogo-proto
+  - buf.build/googleapis/googleapis
+  - buf.build/protocolbuffers/wellknowntypes
+breaking:
+  use:
+    - FILE
+  ignore:
+    - testpb
+lint:
+  use:
+    - STANDARD
+    - COMMENTS
+    - FILE_LOWER_SNAKE_CASE
+  except:
+    - UNARY_RPC
+    - COMMENT_FIELD
+    - SERVICE_SUFFIX
+    - PACKAGE_VERSION_SUFFIX
+    - RPC_REQUEST_STANDARD_NAME
+  ignore:
+    - tendermint
+```
+
+We use a variety of linters for the Cosmos SDK protobuf files. The repo also checks this in CI.
+
+A reference to the GitHub Actions workflow can be found [here](https://github.com/cosmos/cosmos-sdk/blob/main/.github/workflows/proto.yml#L1-L32)
+
+```yaml expandable
+name: Protobuf
+# Protobuf runs buf (https://buf.build/) lint and check-breakage
+# This workflow is only run when a .proto file has been changed
+on:
+  pull_request:
+    paths:
+      - "proto/**"
+
+permissions:
+  contents: read
+
+jobs:
+  lint:
+    runs-on: depot-ubuntu-22.04-4
+    timeout-minutes: 5
+    steps:
+      - uses: actions/checkout@v5
+      - uses: bufbuild/buf-setup-action@v1.50.0
+      - uses: bufbuild/buf-lint-action@v1
+        with:
+          input: "proto"
+
+  break-check:
+    runs-on: depot-ubuntu-22.04-4
+    steps:
+      - uses: actions/checkout@v5
+      - uses: bufbuild/buf-setup-action@v1.50.0
+      - uses: bufbuild/buf-breaking-action@v1
+        with:
+          input: "proto"
+          against: "https://github.com/${{ github.repository }}.git#branch=${{ github.event.pull_request.base.ref }},ref=HEAD~1,subdir=proto"
+```
diff --git a/docs/sdk/v0.53/documentation/protocol-development/spec-overview.mdx b/docs/sdk/v0.53/documentation/protocol-development/spec-overview.mdx
new file mode 100644
index 00000000..f0b17655
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/spec-overview.mdx
@@ -0,0 +1,26 @@
+---
+title: Specifications
+description: >-
+  This directory contains specifications for the modules of the
Cosmos SDK as
+  well as Interchain Standards (ICS) and other specifications.
+---
+
+This directory contains specifications for the modules of the Cosmos SDK as well as Interchain Standards (ICS) and other specifications.
+
+Cosmos SDK applications hold this state in a Merkle store. Updates to
+the store may be made during transactions and at the beginning and end of every
+block.
+
+## Cosmos SDK specifications
+
+* [Store](/docs/sdk/v0.53/documentation/state-storage/store) - The core Merkle store that holds the state.
+* [Bech32](/docs/sdk/v0.53/documentation/protocol-development/bech32) - Address format for Cosmos SDK applications.
+
+## Modules specifications
+
+Go to the [module directory](https://docs.cosmos.network/main/modules)
+
+## CometBFT
+
+For details on the underlying blockchain and p2p protocols, see
+the [CometBFT specification](https://github.com/cometbft/cometbft/tree/main/spec).
diff --git a/docs/sdk/v0.53/documentation/protocol-development/transactions.mdx b/docs/sdk/v0.53/documentation/protocol-development/transactions.mdx
new file mode 100644
index 00000000..4501377b
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/transactions.mdx
@@ -0,0 +1,1392 @@
+---
+title: Transactions
+---
+
+## Synopsis
+
+`Transactions` are objects created by end-users to trigger state changes in the application.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy)
+
+
+
+## Transactions
+
+Transactions are composed of metadata held in [contexts](/docs/sdk/v0.53/documentation/application-framework/context) and [`sdk.Msg`s](/docs/sdk/v0.53/documentation/module-system/messages-and-queries) that trigger state changes within a module through the module's Protobuf [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services).
+
+When users want to interact with an application and make state changes (e.g. sending coins), they create transactions.
Each of a transaction's `sdk.Msg`s must be signed using the private key associated with the appropriate account(s) before the transaction is broadcast to the network. A transaction must then be included in a block, validated, and approved by the network through the consensus process. To read more about the lifecycle of a transaction, click [here](/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle).
+
+## Type Definition
+
+Transaction objects are Cosmos SDK types that implement the `Tx` interface.
+
+```go expandable
+package types
+
+import (
+	"encoding/json"
+	fmt "fmt"
+	strings "strings"
+	"time"
+
+	"github.com/cosmos/gogoproto/proto"
+	protov2 "google.golang.org/protobuf/proto"
+
+	"github.com/cosmos/cosmos-sdk/codec"
+	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+)
+
+type (
+	// Msg defines the interface a transaction message needed to fulfill.
+	Msg = proto.Message
+
+	// LegacyMsg defines the interface a transaction message needed to fulfill up through
+	// v0.47.
+	LegacyMsg interface {
+		Msg
+
+		// GetSigners returns the addrs of signers that must sign.
+		// CONTRACT: All signatures must be present to be valid.
+		// CONTRACT: Returns addrs in some deterministic order.
+		GetSigners() []AccAddress
+	}
+
+	// Fee defines an interface for an application application-defined concrete
+	// transaction type to be able to set and return the transaction fee.
+	Fee interface {
+		GetGas() uint64
+		GetAmount() Coins
+	}
+
+	// Signature defines an interface for an application application-defined
+	// concrete transaction type to be able to set and return transaction signatures.
+	Signature interface {
+		GetPubKey() cryptotypes.PubKey
+		GetSignature() []byte
+	}
+
+	// HasMsgs defines an interface a transaction must fulfill.
+	HasMsgs interface {
+		// GetMsgs gets the all the transaction's messages.
+		GetMsgs() []Msg
+	}
+
+	// Tx defines an interface a transaction must fulfill.
+	Tx interface {
+		HasMsgs
+
+		// GetMsgsV2 gets the transaction's messages as google.golang.org/protobuf/proto.Message's.
+		GetMsgsV2() ([]protov2.Message, error)
+	}
+
+	// FeeTx defines the interface to be implemented by Tx to use the FeeDecorators
+	FeeTx interface {
+		Tx
+		GetGas() uint64
+		GetFee() Coins
+		FeePayer() []byte
+		FeeGranter() []byte
+	}
+
+	// TxWithMemo must have GetMemo() method to use ValidateMemoDecorator
+	TxWithMemo interface {
+		Tx
+		GetMemo() string
+	}
+
+	// TxWithTimeoutTimeStamp extends the Tx interface by allowing a transaction to
+	// set a timeout timestamp.
+	TxWithTimeoutTimeStamp interface {
+		Tx
+
+		// GetTimeoutTimeStamp gets the timeout timestamp for the tx.
+		// IMPORTANT: when the uint value is needed here, you MUST use UnixNano.
+		GetTimeoutTimeStamp() time.Time
+	}
+
+	// TxWithTimeoutHeight extends the Tx interface by allowing a transaction to
+	// set a height timeout.
+	TxWithTimeoutHeight interface {
+		Tx
+
+		GetTimeoutHeight() uint64
+	}
+
+	// TxWithUnordered extends the Tx interface by allowing a transaction to set
+	// the unordered field, which implicitly relies on TxWithTimeoutTimeStamp.
+	TxWithUnordered interface {
+		TxWithTimeoutTimeStamp
+
+		GetUnordered() bool
+	}
+
+	// HasValidateBasic defines a type that has a ValidateBasic method.
+	// ValidateBasic is deprecated and now facultative.
+	// Prefer validating messages directly in the msg server.
+	HasValidateBasic interface {
+		// ValidateBasic does a simple validation check that
+		// doesn't require access to any other information.
+		ValidateBasic() error
+	}
+)
+
+// TxDecoder unmarshals transaction bytes
+type TxDecoder func(txBytes []byte) (Tx, error)
+
+// TxEncoder marshals transaction to bytes
+type TxEncoder func(tx Tx) ([]byte, error)
+
+// MsgTypeURL returns the TypeURL of a `sdk.Msg`.
+var MsgTypeURL = codectypes.MsgTypeURL + +/ GetMsgFromTypeURL returns a `sdk.Msg` message type from a type URL +func GetMsgFromTypeURL(cdc codec.Codec, input string) (Msg, error) { + var msg Msg + bz, err := json.Marshal(struct { + Type string `json:"@type"` +}{ + Type: input, +}) + if err != nil { + return nil, err +} + if err := cdc.UnmarshalInterfaceJSON(bz, &msg); err != nil { + return nil, fmt.Errorf("failed to determine sdk.Msg for %s URL : %w", input, err) +} + +return msg, nil +} + +/ GetModuleNameFromTypeURL assumes that module name is the second element of the msg type URL +/ e.g. "cosmos.bank.v1beta1.MsgSend" => "bank" +/ It returns an empty string if the input is not a valid type URL +func GetModuleNameFromTypeURL(input string) + +string { + moduleName := strings.Split(input, ".") + if len(moduleName) > 1 { + return moduleName[1] +} + +return "" +} +``` + +It contains the following methods: + +- **GetMsgs:** unwraps the transaction and returns a list of contained `sdk.Msg`s - one transaction may have one or multiple messages, which are defined by module developers. + +As a developer, you should rarely manipulate `Tx` directly, as `Tx` is an intermediate type used for transaction generation. Instead, developers should prefer the `TxBuilder` interface, which you can learn more about [below](#transaction-generation). + +### Signing Transactions + +Every message in a transaction must be signed by the addresses specified by its `GetSigners`. The Cosmos SDK currently allows signing transactions in two different ways. + +#### `SIGN_MODE_DIRECT` (preferred) + +The most used implementation of the `Tx` interface is the Protobuf `Tx` message, which is used in `SIGN_MODE_DIRECT`: + +```protobuf +// Tx is the standard type used for broadcasting transactions. 
+message Tx { + // body is the processable content of the transaction + TxBody body = 1; + + // auth_info is the authorization related content of the transaction, + // specifically signers, signer modes and fee + AuthInfo auth_info = 2; + + // signatures is a list of signatures that matches the length and order of + // AuthInfo's signer_infos to allow connecting signature meta information like + // public key and signing mode by position. + repeated bytes signatures = 3; +} +``` + +Because Protobuf serialization is not deterministic, the Cosmos SDK uses an additional `TxRaw` type to denote the pinned bytes over which a transaction is signed. Any user can generate a valid `body` and `auth_info` for a transaction, and serialize these two messages using Protobuf. `TxRaw` then pins the user's exact binary representation of `body` and `auth_info`, called respectively `body_bytes` and `auth_info_bytes`. The document that is signed by all signers of the transaction is `SignDoc` (deterministically serialized using [ADR-027](/docs/common/pages/adr-comprehensive#adr-027-deterministic-protobuf-serialization)): + +```protobuf +// SignDoc is the type used for generating sign bytes for SIGN_MODE_DIRECT. +message SignDoc { + // body_bytes is protobuf serialization of a TxBody that matches the + // representation in TxRaw. + bytes body_bytes = 1; + + // auth_info_bytes is a protobuf serialization of an AuthInfo that matches the + // representation in TxRaw. + bytes auth_info_bytes = 2; + + // chain_id is the unique identifier of the chain this transaction targets. + // It prevents signed transactions from being used on another chain by an + // attacker + string chain_id = 3; + + // account_number is the account number of the account in state + uint64 account_number = 4; +} +``` + +Once signed by all signers, the `body_bytes`, `auth_info_bytes` and `signatures` are gathered into `TxRaw`, whose serialized bytes are broadcasted over the network. 
+ +#### `SIGN_MODE_LEGACY_AMINO_JSON` + +The legacy implementation of the `Tx` interface is the `StdTx` struct from `x/auth`: + +```go expandable +package legacytx + +import ( + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/math" + "github.com/cosmos/cosmos-sdk/codec/legacy" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ Interface implementation checks +var ( + _ codectypes.UnpackInterfacesMessage = (*StdTx)(nil) + + _ codectypes.UnpackInterfacesMessage = (*StdSignature)(nil) +) + +/ StdFee includes the amount of coins paid in fees and the maximum +/ gas to be used by the transaction. The ratio yields an effective "gasprice", +/ which must be above some miminum to be accepted into the mempool. +/ [Deprecated] +type StdFee struct { + Amount sdk.Coins `json:"amount" yaml:"amount"` + Gas uint64 `json:"gas" yaml:"gas"` + Payer string `json:"payer,omitempty" yaml:"payer"` + Granter string `json:"granter,omitempty" yaml:"granter"` +} + +/ Deprecated: NewStdFee returns a new instance of StdFee +func NewStdFee(gas uint64, amount sdk.Coins) + +StdFee { + return StdFee{ + Amount: amount, + Gas: gas, +} +} + +/ GetGas returns the fee's (wanted) + +gas. +func (fee StdFee) + +GetGas() + +uint64 { + return fee.Gas +} + +/ GetAmount returns the fee's amount. +func (fee StdFee) + +GetAmount() + +sdk.Coins { + return fee.Amount +} + +/ Bytes returns the encoded bytes of a StdFee. +func (fee StdFee) + +Bytes() []byte { + if len(fee.Amount) == 0 { + fee.Amount = sdk.NewCoins() +} + +bz, err := legacy.Cdc.MarshalJSON(fee) + if err != nil { + panic(err) +} + +return bz +} + +/ GasPrices returns the gas prices for a StdFee. +/ +/ NOTE: The gas prices returned are not the true gas prices that were +/ originally part of the submitted transaction because the fee is computed +/ as fee = ceil(gasWanted * gasPrices). 
+func (fee StdFee) + +GasPrices() + +sdk.DecCoins { + return sdk.NewDecCoinsFromCoins(fee.Amount...).QuoDec(math.LegacyNewDec(int64(fee.Gas))) +} + +/ StdTip is the tips used in a tipped transaction. +type StdTip struct { + Amount sdk.Coins `json:"amount" yaml:"amount"` + Tipper string `json:"tipper" yaml:"tipper"` +} + +/ StdTx is the legacy transaction format for wrapping a Msg with Fee and Signatures. +/ It only works with Amino, please prefer the new protobuf Tx in types/tx. +/ NOTE: the first signature is the fee payer (Signatures must not be nil). +/ Deprecated +type StdTx struct { + Msgs []sdk.Msg `json:"msg" yaml:"msg"` + Fee StdFee `json:"fee" yaml:"fee"` + Signatures []StdSignature `json:"signatures" yaml:"signatures"` + Memo string `json:"memo" yaml:"memo"` + TimeoutHeight uint64 `json:"timeout_height" yaml:"timeout_height"` +} + +/ Deprecated +func NewStdTx(msgs []sdk.Msg, fee StdFee, sigs []StdSignature, memo string) + +StdTx { + return StdTx{ + Msgs: msgs, + Fee: fee, + Signatures: sigs, + Memo: memo, +} +} + +/ GetMsgs returns the all the transaction's messages. +func (tx StdTx) + +GetMsgs() []sdk.Msg { + return tx.Msgs +} + +/ Deprecated: AsAny implements intoAny. It doesn't work for protobuf serialization, +/ so it can't be saved into protobuf configured storage. We are using it only for API +/ compatibility. +func (tx *StdTx) + +AsAny() *codectypes.Any { + return codectypes.UnsafePackAny(tx) +} + +/ GetMemo returns the memo +func (tx StdTx) + +GetMemo() + +string { + return tx.Memo +} + +/ GetTimeoutHeight returns the transaction's timeout height (if set). +func (tx StdTx) + +GetTimeoutHeight() + +uint64 { + return tx.TimeoutHeight +} + +/ GetSignatures returns the signature of signers who signed the Msg. +/ CONTRACT: Length returned is same as length of +/ pubkeys returned from MsgKeySigners, and the order +/ matches. +/ CONTRACT: If the signature is missing (ie the Msg is +/ invalid), then the corresponding signature is +/ .Empty(). 
+func (tx StdTx) + +GetSignatures() [][]byte { + sigs := make([][]byte, len(tx.Signatures)) + for i, stdSig := range tx.Signatures { + sigs[i] = stdSig.Signature +} + +return sigs +} + +/ GetSignaturesV2 implements SigVerifiableTx.GetSignaturesV2 +func (tx StdTx) + +GetSignaturesV2() ([]signing.SignatureV2, error) { + res := make([]signing.SignatureV2, len(tx.Signatures)) + for i, sig := range tx.Signatures { + var err error + res[i], err = StdSignatureToSignatureV2(legacy.Cdc, sig) + if err != nil { + return nil, errorsmod.Wrapf(err, "Unable to convert signature %v to V2", sig) +} + +} + +return res, nil +} + +/ GetPubkeys returns the pubkeys of signers if the pubkey is included in the signature +/ If pubkey is not included in the signature, then nil is in the slice instead +func (tx StdTx) + +GetPubKeys() ([]cryptotypes.PubKey, error) { + pks := make([]cryptotypes.PubKey, len(tx.Signatures)) + for i, stdSig := range tx.Signatures { + pks[i] = stdSig.GetPubKey() +} + +return pks, nil +} + +/ GetGas returns the Gas in StdFee +func (tx StdTx) + +GetGas() + +uint64 { + return tx.Fee.Gas +} + +/ GetFee returns the FeeAmount in StdFee +func (tx StdTx) + +GetFee() + +sdk.Coins { + return tx.Fee.Amount +} + +/ FeeGranter always returns nil for StdTx +func (tx StdTx) + +FeeGranter() + +sdk.AccAddress { + return nil +} + +func (tx StdTx) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + for _, m := range tx.Msgs { + err := codectypes.UnpackInterfaces(m, unpacker) + if err != nil { + return err +} + +} + + / Signatures contain PubKeys, which need to be unpacked. 
+ for _, s := range tx.Signatures { + err := s.UnpackInterfaces(unpacker) + if err != nil { + return err +} + +} + +return nil +} +``` + +The document signed by all signers is `StdSignDoc`: + +```go expandable +package legacytx + +import ( + + "encoding/json" + "fmt" + "sigs.k8s.io/yaml" + "cosmossdk.io/errors" + "github.com/cosmos/cosmos-sdk/codec" + "github.com/cosmos/cosmos-sdk/codec/legacy" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + "github.com/cosmos/cosmos-sdk/crypto/types/multisig" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx/signing" +) + +/ LegacyMsg defines the old interface a message must fulfill, +/ containing Amino signing method. +/ Deprecated: Please use `Msg` instead. +type LegacyMsg interface { + sdk.Msg + + / Get the canonical byte representation of the Msg. + GetSignBytes() []byte +} + +/ StdSignDoc is replay-prevention structure. +/ It includes the result of msg.GetSignBytes(), +/ as well as the ChainID (prevent cross chain replay) +/ and the Sequence numbers for each signature (prevent +/ inchain replay and enforce tx ordering per account). +type StdSignDoc struct { + AccountNumber uint64 `json:"account_number" yaml:"account_number"` + Sequence uint64 `json:"sequence" yaml:"sequence"` + TimeoutHeight uint64 `json:"timeout_height,omitempty" yaml:"timeout_height"` + ChainID string `json:"chain_id" yaml:"chain_id"` + Memo string `json:"memo" yaml:"memo"` + Fee json.RawMessage `json:"fee" yaml:"fee"` + Msgs []json.RawMessage `json:"msgs" yaml:"msgs"` +} + +var RegressionTestingAminoCodec *codec.LegacyAmino + +/ Deprecated: please delete this code eventually. +func mustSortJSON(bz []byte) []byte { + var c any + err := json.Unmarshal(bz, &c) + if err != nil { + panic(err) +} + +js, err := json.Marshal(c) + if err != nil { + panic(err) +} + +return js +} + +/ StdSignBytes returns the bytes to sign for a transaction. 
+/ Deprecated: Please use x/tx/signing/aminojson instead. +func StdSignBytes(chainID string, accnum, sequence, timeout uint64, fee StdFee, msgs []sdk.Msg, memo string) []byte { + if RegressionTestingAminoCodec == nil { + panic(fmt.Errorf("must set RegressionTestingAminoCodec before calling StdSignBytes")) +} + msgsBytes := make([]json.RawMessage, 0, len(msgs)) + for _, msg := range msgs { + bz := RegressionTestingAminoCodec.MustMarshalJSON(msg) + +msgsBytes = append(msgsBytes, mustSortJSON(bz)) +} + +bz, err := legacy.Cdc.MarshalJSON(StdSignDoc{ + AccountNumber: accnum, + ChainID: chainID, + Fee: json.RawMessage(fee.Bytes()), + Memo: memo, + Msgs: msgsBytes, + Sequence: sequence, + TimeoutHeight: timeout, +}) + if err != nil { + panic(err) +} + +return mustSortJSON(bz) +} + +/ Deprecated: StdSignature represents a sig +type StdSignature struct { + cryptotypes.PubKey `json:"pub_key" yaml:"pub_key"` / optional + Signature []byte `json:"signature" yaml:"signature"` +} + +/ Deprecated +func NewStdSignature(pk cryptotypes.PubKey, sig []byte) + +StdSignature { + return StdSignature{ + PubKey: pk, + Signature: sig +} +} + +/ GetSignature returns the raw signature bytes. +func (ss StdSignature) + +GetSignature() []byte { + return ss.Signature +} + +/ GetPubKey returns the public key of a signature as a cryptotypes.PubKey using the +/ Amino codec. +func (ss StdSignature) + +GetPubKey() + +cryptotypes.PubKey { + return ss.PubKey +} + +/ MarshalYAML returns the YAML representation of the signature. 
+func (ss StdSignature) + +MarshalYAML() (any, error) { + pk := "" + if ss.PubKey != nil { + pk = ss.String() +} + +bz, err := yaml.Marshal(struct { + PubKey string `json:"pub_key"` + Signature string `json:"signature"` +}{ + pk, + fmt.Sprintf("%X", ss.Signature), +}) + if err != nil { + return nil, err +} + +return string(bz), nil +} + +func (ss StdSignature) + +UnpackInterfaces(unpacker codectypes.AnyUnpacker) + +error { + return codectypes.UnpackInterfaces(ss.PubKey, unpacker) +} + +/ StdSignatureToSignatureV2 converts a StdSignature to a SignatureV2 +func StdSignatureToSignatureV2(cdc *codec.LegacyAmino, sig StdSignature) (signing.SignatureV2, error) { + pk := sig.GetPubKey() + +data, err := pubKeySigToSigData(cdc, pk, sig.Signature) + if err != nil { + return signing.SignatureV2{ +}, err +} + +return signing.SignatureV2{ + PubKey: pk, + Data: data, +}, nil +} + +func pubKeySigToSigData(cdc *codec.LegacyAmino, key cryptotypes.PubKey, sig []byte) (signing.SignatureData, error) { + multiPK, ok := key.(multisig.PubKey) + if !ok { + return &signing.SingleSignatureData{ + SignMode: signing.SignMode_SIGN_MODE_LEGACY_AMINO_JSON, + Signature: sig, +}, nil +} + +var multiSig multisig.AminoMultisignature + err := cdc.Unmarshal(sig, &multiSig) + if err != nil { + return nil, err +} + sigs := multiSig.Sigs + sigDatas := make([]signing.SignatureData, len(sigs)) + pubKeys := multiPK.GetPubKeys() + bitArray := multiSig.BitArray + n := multiSig.BitArray.Count() + signatures := multisig.NewMultisig(n) + sigIdx := 0 + for i := range n { + if bitArray.GetIndex(i) { + data, err := pubKeySigToSigData(cdc, pubKeys[i], multiSig.Sigs[sigIdx]) + if err != nil { + return nil, errors.Wrapf(err, "Unable to convert Signature to SigData %d", sigIdx) +} + +sigDatas[sigIdx] = data + multisig.AddSignature(signatures, data, sigIdx) + +sigIdx++ +} + +} + +return signatures, nil +} +``` + +which is encoded into bytes using Amino JSON. 
Once all signatures are gathered into `StdTx`, `StdTx` is serialized using Amino JSON, and these bytes are broadcast over the network.
+
+#### Other Sign Modes
+
+The Cosmos SDK also provides a couple of other sign modes for particular use cases.
+
+#### `SIGN_MODE_DIRECT_AUX`
+
+`SIGN_MODE_DIRECT_AUX` is a sign mode introduced in Cosmos SDK v0.46 that targets transactions with multiple signers. Whereas `SIGN_MODE_DIRECT` expects each signer to sign over both `TxBody` and `AuthInfo` (which includes all other signers' signer infos, i.e. their account sequence, public key and mode info), `SIGN_MODE_DIRECT_AUX` allows N-1 signers to sign over only `TxBody` and _their own_ signer info. Moreover, each auxiliary signer (i.e. a signer using `SIGN_MODE_DIRECT_AUX`) doesn't
+need to sign over the fees:
+
+```protobuf
+
+// SignDocDirectAux is the type used for generating sign bytes for
+// SIGN_MODE_DIRECT_AUX.
+message SignDocDirectAux {
+  option (cosmos_proto.message_added_in) = "cosmos-sdk 0.46";
+  // body_bytes is protobuf serialization of a TxBody that matches the
+  // representation in TxRaw.
+  bytes body_bytes = 1;
+
+  // public_key is the public key of the signing account.
+  google.protobuf.Any public_key = 2;
+
+  // chain_id is the identifier of the chain this transaction targets.
+  // It prevents signed transactions from being used on another chain by an
+  // attacker.
+  string chain_id = 3;
+
+  // account_number is the account number of the account in state.
+  uint64 account_number = 4;
+
+  // sequence is the sequence number of the signing account.
+  uint64 sequence = 5;
+
+  // tips have been depreacted and should not be used
+  Tip tip = 6 [deprecated = true];
+}
+```
+
+The use case is a multi-signer transaction, where one of the signers is appointed to gather all signatures, broadcast the transaction and pay the fees, while the others only care about the transaction body. This generally allows for a better multi-signing UX.
If Alice, Bob and Charlie are part of a 3-signer transaction, then Alice and Bob can both use `SIGN_MODE_DIRECT_AUX` to sign over the `TxBody` and their own signer info (with no additional step needed to gather the other signers' infos, as `SIGN_MODE_DIRECT` requires), without specifying a fee in their SignDoc. Charlie can then gather both signatures from Alice and Bob, and
+create the final transaction by appending a fee. Note that the fee payer of the transaction (in our case Charlie) must sign over the fees, and so must use `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`.
+
+#### `SIGN_MODE_TEXTUAL`
+
+`SIGN_MODE_TEXTUAL` is a sign mode, included in the v0.50 release, that delivers a better signing experience on hardware wallets. In this mode, the signer signs over a human-readable string representation of the transaction (encoded as CBOR), which makes the displayed data easier to read. The data is formatted as screens, and each screen is meant to be displayed in its entirety even on small devices like the Ledger Nano.
+
+There are also _expert_ screens, which are only displayed if the user has chosen that option on their hardware device. These screens contain things like the account number, account sequence and the sign data hash.
+
+Data is formatted using a set of `ValueRenderer`s; the SDK provides defaults for all known messages and value types. Chain developers can also implement their own `ValueRenderer` for a type or message if they'd like to display the information differently.
+
+If you wish to learn more, please refer to [ADR-050](/docs/common/pages/adr-comprehensive#adr-050-sign_mode_textual).
+
+#### Custom Sign Modes
+
+You can add your own custom sign mode to the Cosmos SDK.
While the sign mode's implementation itself cannot be accepted into the repository, we can accept a pull request that adds the custom sign mode to the `SignMode` enum located [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/signing/v1beta1/signing.proto#L17).
+
+## Transaction Process
+
+The process of an end-user sending a transaction is:
+
+- decide on the messages to put into the transaction,
+- generate the transaction using the Cosmos SDK's `TxBuilder`,
+- broadcast the transaction using one of the available interfaces.
+
+The following sections describe each of these components, in this order.
+
+### Messages
+
+
+  Module `sdk.Msg`s are not to be confused with [ABCI
+  Messages](https://docs.cometbft.com/v0.37/spec/abci/) which define
+  interactions between the CometBFT and application layers.
+
+
+**Messages** (or `sdk.Msg`s) are module-specific objects that trigger state transitions within the scope of the module they belong to. Module developers define the messages for their module by adding methods to the Protobuf [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services), and also implement the corresponding `MsgServer`.
+
+Each `sdk.Msg` is related to exactly one Protobuf [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services) RPC, defined inside each module's `tx.proto` file. An SDK app router automatically maps every `sdk.Msg` to a corresponding RPC. Protobuf generates a `MsgServer` interface for each module `Msg` service, and the module developer needs to implement this interface.
+This design puts more responsibility on module developers, allowing application developers to reuse common functionalities without having to implement state transition logic repetitively.
+
+To learn more about Protobuf `Msg` services and how to implement `MsgServer`, click [here](/docs/sdk/v0.53/documentation/module-system/msg-services).
+ +While messages contain the information for state transition logic, a transaction's other metadata and relevant information are stored in the `TxBuilder` and `Context`. + +### Transaction Generation + +The `TxBuilder` interface contains data closely related with the generation of transactions, which an end-user can set to generate the desired transaction: + +```go expandable +package client + +import ( + + "time" + + txsigning "cosmossdk.io/x/tx/signing" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. + TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() *txsigning.HandlerMap + SigningContext() *txsigning.Context +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. 
+	TxBuilder interface {
+	GetTx()
+
+signing.Tx
+
+	SetMsgs(msgs ...sdk.Msg)
+
+error
+	SetSignatures(signatures ...signingtypes.SignatureV2)
+
+error
+	SetMemo(memo string)
+
+SetFeeAmount(amount sdk.Coins)
+
+SetFeePayer(feePayer sdk.AccAddress)
+
+SetGasLimit(limit uint64)
+
+SetTimeoutHeight(height uint64)
+
+SetTimeoutTimestamp(timestamp time.Time)
+
+SetUnordered(v bool)
+
+SetFeeGranter(feeGranter sdk.AccAddress)
+
+AddAuxSignerData(tx.AuxSignerData)
+
+error
+}
+
+	/ ExtendedTxBuilder extends the TxBuilder interface,
+	/ which is used to set extension options to be included in a transaction.
+	ExtendedTxBuilder interface {
+	SetExtensionOptions(extOpts ...*codectypes.Any)
+}
+)
+```
+
+- `Msg`s, the array of [messages](#messages) included in the transaction.
+- `GasLimit`, the user-chosen maximum amount of gas the transaction is allowed to consume, which determines how much they will pay in fees.
+- `Memo`, a note or comment to send with the transaction.
+- `FeeAmount`, the maximum amount the user is willing to pay in fees.
+- `TimeoutHeight`, block height until which the transaction is valid.
+- `Unordered`, an option indicating this transaction may be executed in any order (requires the account `Sequence` to be unset).
+- `TimeoutTimestamp`, the timeout timestamp (unordered nonce) of the transaction (required when `Unordered` is set).
+- `Signatures`, the array of signatures from all signers of the transaction.
+
+As there are currently two sign modes for signing transactions, there are also two implementations of `TxBuilder`:
+
+- [wrapper](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/tx/builder.go#L27-L44) for creating transactions for `SIGN_MODE_DIRECT`,
+- [StdTxBuilder](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/migrations/legacytx/stdtx_builder.go#L14-L17) for `SIGN_MODE_LEGACY_AMINO_JSON`.
+ +However, the two implementations of `TxBuilder` should be hidden away from end-users, as they should prefer using the overarching `TxConfig` interface: + +```go expandable +package client + +import ( + + "time" + + txsigning "cosmossdk.io/x/tx/signing" + + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + sdk "github.com/cosmos/cosmos-sdk/types" + "github.com/cosmos/cosmos-sdk/types/tx" + signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/signing" +) + +type ( + / TxEncodingConfig defines an interface that contains transaction + / encoders and decoders + TxEncodingConfig interface { + TxEncoder() + +sdk.TxEncoder + TxDecoder() + +sdk.TxDecoder + TxJSONEncoder() + +sdk.TxEncoder + TxJSONDecoder() + +sdk.TxDecoder + MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) + +UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) +} + + / TxConfig defines an interface a client can utilize to generate an + / application-defined concrete transaction type. The type returned must + / implement TxBuilder. + TxConfig interface { + TxEncodingConfig + + NewTxBuilder() + +TxBuilder + WrapTxBuilder(sdk.Tx) (TxBuilder, error) + +SignModeHandler() *txsigning.HandlerMap + SigningContext() *txsigning.Context +} + + / TxBuilder defines an interface which an application-defined concrete transaction + / type must implement. Namely, it must be able to set messages, generate + / signatures, and provide canonical bytes to sign over. The transaction must + / also know how to encode itself. 
+ TxBuilder interface { + GetTx() + +signing.Tx + + SetMsgs(msgs ...sdk.Msg) + +error + SetSignatures(signatures ...signingtypes.SignatureV2) + +error + SetMemo(memo string) + +SetFeeAmount(amount sdk.Coins) + +SetFeePayer(feePayer sdk.AccAddress) + +SetGasLimit(limit uint64) + +SetTimeoutHeight(height uint64) + +SetTimeoutTimestamp(timestamp time.Time) + +SetUnordered(v bool) + +SetFeeGranter(feeGranter sdk.AccAddress) + +AddAuxSignerData(tx.AuxSignerData) + +error +} + + / ExtendedTxBuilder extends the TxBuilder interface, + / which is used to set extension options to be included in a transaction. + ExtendedTxBuilder interface { + SetExtensionOptions(extOpts ...*codectypes.Any) +} +) +``` + +`TxConfig` is an app-wide configuration for managing transactions. Most importantly, it holds the information about whether to sign each transaction with `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. By calling `txBuilder := txConfig.NewTxBuilder()`, a new `TxBuilder` will be created with the appropriate sign mode. + +Once `TxBuilder` is correctly populated with the setters exposed above, `TxConfig` will also take care of correctly encoding the bytes (again, either using `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`). Here's a pseudo-code snippet of how to generate and encode a transaction, using the `TxEncoder()` method: + +```go +txBuilder := txConfig.NewTxBuilder() + +txBuilder.SetMsgs(...) / and other setters on txBuilder + +bz, err := txConfig.TxEncoder()(txBuilder.GetTx()) +/ bz are bytes to be broadcasted over the network +``` + +### Broadcasting the Transaction + +Once the transaction bytes are generated, there are currently three ways of broadcasting it. + +#### CLI + +Application developers create entry points to the application by creating a [command-line interface](/docs/sdk/v0.53/api-reference/client-tools/cli), [gRPC and/or REST interface](/docs/sdk/v0.53/api-reference/service-apis/grpc_rest), typically found in the application's `./cmd` folder. 
These interfaces allow users to interact with the application through the command line.
+
+For the [command-line interface](/docs/sdk/v0.53/documentation/module-system/module-interfaces#cli), module developers create subcommands to add as children to the application top-level transaction command `TxCmd`. CLI commands actually bundle all the steps of transaction processing into one simple command: creating messages, generating transactions and broadcasting them. For concrete examples, see the [Interacting with a Node](/docs/sdk/v0.53/documentation/operations/interact-node) section. An example transaction made using the CLI looks like:
+
+```bash
+simd tx send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake
+```
+
+#### gRPC
+
+[gRPC](https://grpc.io) is the main component of the Cosmos SDK's RPC layer. Its principal usage is in the context of modules' [`Query` services](/docs/sdk/v0.53/documentation/module-system/query-services). However, the Cosmos SDK also exposes a few other module-agnostic gRPC services, one of them being the `Tx` service:
+
+```protobuf expandable
+syntax = "proto3";
+package cosmos.tx.v1beta1;
+
+import "google/api/annotations.proto";
+import "cosmos/base/abci/v1beta1/abci.proto";
+import "cosmos/tx/v1beta1/tx.proto";
+import "cosmos/base/query/v1beta1/pagination.proto";
+import "tendermint/types/block.proto";
+import "tendermint/types/types.proto";
+import "cosmos_proto/cosmos.proto";
+
+option go_package = "github.com/cosmos/cosmos-sdk/types/tx";
+
+/ Service defines a gRPC service for interacting with transactions.
+service Service {
+  / Simulate simulates executing a transaction for estimating gas usage.
+  rpc Simulate(SimulateRequest)
+
+returns (SimulateResponse) {
+    option (google.api.http) = {
+      post: "/cosmos/tx/v1beta1/simulate"
+      body: "*"
+};
+}
+  / GetTx fetches a tx by hash.
+  rpc GetTx(GetTxRequest)
+
+returns (GetTxResponse) {
+    option (google.api.http).get = "/cosmos/tx/v1beta1/txs/{
+    hash
+}";
+}
+  / BroadcastTx broadcast transaction.
+ rpc BroadcastTx(BroadcastTxRequest) + +returns (BroadcastTxResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/txs" + body: "*" +}; +} + / GetTxsEvent fetches txs by event. + rpc GetTxsEvent(GetTxsEventRequest) + +returns (GetTxsEventResponse) { + option (google.api.http).get = "/cosmos/tx/v1beta1/txs"; +} + / GetBlockWithTxs fetches a block with decoded txs. + rpc GetBlockWithTxs(GetBlockWithTxsRequest) + +returns (GetBlockWithTxsResponse) { + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.45.2"; + option (google.api.http).get = "/cosmos/tx/v1beta1/txs/block/{ + height +}"; +} + / TxDecode decodes the transaction. + rpc TxDecode(TxDecodeRequest) + +returns (TxDecodeResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/decode" + body: "*" +}; + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; +} + / TxEncode encodes the transaction. + rpc TxEncode(TxEncodeRequest) + +returns (TxEncodeResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/encode" + body: "*" +}; + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; +} + / TxEncodeAmino encodes an Amino transaction from JSON to encoded bytes. + rpc TxEncodeAmino(TxEncodeAminoRequest) + +returns (TxEncodeAminoResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/encode/amino" + body: "*" +}; + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; +} + / TxDecodeAmino decodes an Amino transaction from encoded bytes to JSON. + rpc TxDecodeAmino(TxDecodeAminoRequest) + +returns (TxDecodeAminoResponse) { + option (google.api.http) = { + post: "/cosmos/tx/v1beta1/decode/amino" + body: "*" +}; + option (cosmos_proto.method_added_in) = "cosmos-sdk 0.47"; +} +} + +/ GetTxsEventRequest is the request type for the Service.TxsByEvents +/ RPC method. +message GetTxsEventRequest { + / events is the list of transaction event type. + / Deprecated post v0.47.x: use query instead, which should contain a valid + / events query. 
+ repeated string events = 1 [deprecated = true]; + + / pagination defines a pagination for the request. + / Deprecated post v0.46.x: use page and limit instead. + cosmos.base.query.v1beta1.PageRequest pagination = 2 [deprecated = true]; + + OrderBy order_by = 3; + + / page is the page number to query, starts at 1. If not provided, will + / default to first page. + uint64 page = 4; + + / limit is the total number of results to be returned in the result page. + / If left empty it will default to a value to be set by each app. + uint64 limit = 5; + + / query defines the transaction event query that is proxied to Tendermint's + / TxSearch RPC method. The query must be valid. + string query = 6 [(cosmos_proto.field_added_in) = "cosmos-sdk 0.50"]; +} + +/ OrderBy defines the sorting order +enum OrderBy { + / ORDER_BY_UNSPECIFIED specifies an unknown sorting order. OrderBy defaults + / to ASC in this case. + ORDER_BY_UNSPECIFIED = 0; + / ORDER_BY_ASC defines ascending order + ORDER_BY_ASC = 1; + / ORDER_BY_DESC defines descending order + ORDER_BY_DESC = 2; +} + +/ GetTxsEventResponse is the response type for the Service.TxsByEvents +/ RPC method. +message GetTxsEventResponse { + / txs is the list of queried transactions. + repeated cosmos.tx.v1beta1.Tx txs = 1; + / tx_responses is the list of queried TxResponses. + repeated cosmos.base.abci.v1beta1.TxResponse tx_responses = 2; + / pagination defines a pagination for the response. + / Deprecated post v0.46.x: use total instead. + cosmos.base.query.v1beta1.PageResponse pagination = 3 [deprecated = true]; + / total is total number of results available + uint64 total = 4; +} + +/ BroadcastTxRequest is the request type for the Service.BroadcastTxRequest +/ RPC method. +message BroadcastTxRequest { + / tx_bytes is the raw transaction. + bytes tx_bytes = 1; + BroadcastMode mode = 2; +} + +/ BroadcastMode specifies the broadcast mode for the TxService.Broadcast RPC +/ method. 
+enum BroadcastMode { + / zero-value for mode ordering + BROADCAST_MODE_UNSPECIFIED = 0; + / DEPRECATED: use BROADCAST_MODE_SYNC instead, + / BROADCAST_MODE_BLOCK is not supported by the SDK from v0.47.x onwards. + BROADCAST_MODE_BLOCK = 1 [deprecated = true]; + / BROADCAST_MODE_SYNC defines a tx broadcasting mode where the client waits + / for a CheckTx execution response only. + BROADCAST_MODE_SYNC = 2; + / BROADCAST_MODE_ASYNC defines a tx broadcasting mode where the client + / returns immediately. + BROADCAST_MODE_ASYNC = 3; +} + +/ BroadcastTxResponse is the response type for the +/ Service.BroadcastTx method. +message BroadcastTxResponse { + / tx_response is the queried TxResponses. + cosmos.base.abci.v1beta1.TxResponse tx_response = 1; +} + +/ SimulateRequest is the request type for the Service.Simulate +/ RPC method. +message SimulateRequest { + / tx is the transaction to simulate. + / Deprecated. Send raw tx bytes instead. + cosmos.tx.v1beta1.Tx tx = 1 [deprecated = true]; + / tx_bytes is the raw transaction. + bytes tx_bytes = 2 [(cosmos_proto.field_added_in) = "cosmos-sdk 0.43"]; +} + +/ SimulateResponse is the response type for the +/ Service.SimulateRPC method. +message SimulateResponse { + / gas_info is the information about gas used in the simulation. + cosmos.base.abci.v1beta1.GasInfo gas_info = 1; + / result is the result of the simulation. + cosmos.base.abci.v1beta1.Result result = 2; +} + +/ GetTxRequest is the request type for the Service.GetTx +/ RPC method. +message GetTxRequest { + / hash is the tx hash to query, encoded as a hex string. + string hash = 1; +} + +/ GetTxResponse is the response type for the Service.GetTx method. +message GetTxResponse { + / tx is the queried transaction. + cosmos.tx.v1beta1.Tx tx = 1; + / tx_response is the queried TxResponses. + cosmos.base.abci.v1beta1.TxResponse tx_response = 2; +} + +/ GetBlockWithTxsRequest is the request type for the Service.GetBlockWithTxs +/ RPC method. 
+message GetBlockWithTxsRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.45.2"; + / height is the height of the block to query. + int64 height = 1; + / pagination defines a pagination for the request. + cosmos.base.query.v1beta1.PageRequest pagination = 2; +} + +/ GetBlockWithTxsResponse is the response type for the Service.GetBlockWithTxs +/ method. +message GetBlockWithTxsResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.45.2"; + / txs are the transactions in the block. + repeated cosmos.tx.v1beta1.Tx txs = 1; + .tendermint.types.BlockID block_id = 2; + .tendermint.types.Block block = 3; + / pagination defines a pagination for the response. + cosmos.base.query.v1beta1.PageResponse pagination = 4; +} + +/ TxDecodeRequest is the request type for the Service.TxDecode +/ RPC method. +message TxDecodeRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + / tx_bytes is the raw transaction. + bytes tx_bytes = 1; +} + +/ TxDecodeResponse is the response type for the +/ Service.TxDecode method. +message TxDecodeResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + / tx is the decoded transaction. + cosmos.tx.v1beta1.Tx tx = 1; +} + +/ TxEncodeRequest is the request type for the Service.TxEncode +/ RPC method. +message TxEncodeRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + / tx is the transaction to encode. + cosmos.tx.v1beta1.Tx tx = 1; +} + +/ TxEncodeResponse is the response type for the +/ Service.TxEncode method. +message TxEncodeResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + / tx_bytes is the encoded transaction bytes. + bytes tx_bytes = 1; +} + +/ TxEncodeAminoRequest is the request type for the Service.TxEncodeAmino +/ RPC method. 
+message TxEncodeAminoRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + string amino_json = 1; +} + +/ TxEncodeAminoResponse is the response type for the Service.TxEncodeAmino +/ RPC method. +message TxEncodeAminoResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + bytes amino_binary = 1; +} + +/ TxDecodeAminoRequest is the request type for the Service.TxDecodeAmino +/ RPC method. +message TxDecodeAminoRequest { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + bytes amino_binary = 1; +} + +/ TxDecodeAminoResponse is the response type for the Service.TxDecodeAmino +/ RPC method. +message TxDecodeAminoResponse { + option (cosmos_proto.message_added_in) = "cosmos-sdk 0.47"; + string amino_json = 1; +} +``` + +The `Tx` service exposes a handful of utility functions, such as simulating a transaction or querying a transaction, and also one method to broadcast transactions. + +Examples of broadcasting and simulating a transaction are shown [here](/docs/sdk/v0.53/documentation/operations/txs#programmatically-with-go). + +#### REST + +Each gRPC method has its corresponding REST endpoint, generated using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). Therefore, instead of using gRPC, you can also use HTTP to broadcast the same transaction, on the `POST /cosmos/tx/v1beta1/txs` endpoint. + +An example can be seen [here](/docs/sdk/v0.53/documentation/operations/txs#using-rest) + +#### CometBFT RPC + +The three methods presented above are actually higher abstractions over the CometBFT RPC `/broadcast_tx_{async,sync,commit}` endpoints, documented [here](https://docs.cometbft.com/v0.37/core/rpc). This means that you can use the CometBFT RPC endpoints directly to broadcast the transaction, if you wish so. + +### Unordered Transactions + + + +Looking to enable unordered transactions on your chain? 
+Check out the [v0.53.0 Upgrade Guide](https://docs.cosmos.network/v0.53/build/migrations/upgrade-guide#enable-unordered-transactions-optional)
+
+
+
+
+
+Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value,
+the transaction will be rejected. Services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions.
+Services should be aware that when the transaction is unordered, the transaction sequence will always be zero.
+
+
+
+Beginning with Cosmos SDK v0.53.0, chains may enable unordered transaction support.
+Unordered transactions work by using a timestamp as the transaction's nonce value. The sequence value must NOT be set in the signature(s) of the transaction.
+The timestamp must be greater than the current block time and must not exceed the chain's configured max unordered timeout timestamp duration.
+Senders must use a unique timestamp for each distinct transaction; the difference between timestamps may be as small as a nanosecond.
+
+These unique timestamps serve as a one-shot nonce, and their lifespan in state is short-lived.
+Upon transaction inclusion, an entry consisting of the timeout timestamp and account address is recorded to state.
+Once the block time passes the timeout timestamp value, the entry is removed. This ensures that unordered nonces do not fill up the chain's storage indefinitely.
diff --git a/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle.mdx b/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle.mdx
new file mode 100644
index 00000000..d193e89f
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/protocol-development/tx-lifecycle.mdx
@@ -0,0 +1,292 @@
+---
+title: Transaction Lifecycle
+---
+
+## Synopsis
+
+This document describes the lifecycle of a transaction from creation to committed state changes.
Transaction definition is described in a [different doc](/docs/sdk/v0.53/documentation/protocol-development/transactions). The transaction is referred to as `Tx`. + + +**Pre-requisite Readings** + +- [Anatomy of a Cosmos SDK Application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy) + + + +## Creation + +### Transaction Creation + +One of the main application interfaces is the command-line interface. The transaction `Tx` can be created by the user inputting a command in the following format from the [command-line](/docs/sdk/v0.53/api-reference/client-tools/cli), providing the type of transaction in `[command]`, arguments in `[args]`, and configurations such as gas prices in `[flags]`: + +```bash +[appname] tx [command] [args] [flags] +``` + +This command automatically **creates** the transaction, **signs** it using the account's private key, and **broadcasts** it to the specified peer node. + +There are several required and optional flags for transaction creation. The `--from` flag specifies which [account](/docs/sdk/v0.53/documentation/protocol-development/accounts) the transaction is originating from. For example, if the transaction is sending coins, the funds are drawn from the specified `from` address. + +#### Gas and Fees + +Additionally, there are several [flags](/docs/sdk/v0.53/api-reference/client-tools/cli) users can use to indicate how much they are willing to pay in [fees](/docs/sdk/v0.53/documentation/protocol-development/gas-fees): + +- `--gas` refers to how much [gas](/docs/sdk/v0.53/documentation/protocol-development/gas-fees), which represents computational resources, `Tx` consumes. Gas is dependent on the transaction and is not precisely calculated until execution, but can be estimated by providing `auto` as the value for `--gas`. +- `--gas-adjustment` (optional) can be used to scale `gas` up in order to avoid underestimating. For example, users can specify their gas adjustment as 1.5 to use 1.5 times the estimated gas. 
+- `--gas-prices` specifies how much the user is willing to pay per unit of gas, which can be one or multiple denominations of tokens. For example, `--gas-prices=0.025uatom,0.025upho` means the user is willing to pay 0.025uatom AND 0.025upho per unit of gas.
+- `--fees` specifies how much in fees the user is willing to pay in total.
+- `--timeout-height` specifies a block timeout height to prevent the tx from being committed past a certain height.
+
+The ultimate value of the fees paid is equal to the gas multiplied by the gas prices. In other words, `fees = ceil(gas * gasPrices)`. Thus, since fees can be calculated using gas prices and vice versa, users specify only one of the two.
+
+Later, validators decide whether to include the transaction in their block by comparing the given or calculated `gas-prices` to their local `min-gas-prices`. `Tx` is rejected if its `gas-prices` is not high enough, so users are incentivized to pay more.
+
+#### Unordered Transactions
+
+With Cosmos SDK v0.53.0, users may send unordered transactions to chains that have this feature enabled.
+The following flags allow a user to build an unordered transaction from the CLI.
+
+- `--unordered` specifies that this transaction should be unordered (the transaction sequence must be unset).
+- `--timeout-duration` specifies the amount of time the unordered transaction should be valid in the mempool. The transaction's unordered nonce will be set to the time of transaction creation + timeout duration.
+
+
+
+Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value,
+the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions.
+Services should be aware that when the transaction is unordered, the transaction sequence will always be zero.
+
+
+
+#### CLI Example
+
+Users of the application `app` can enter the following command into their CLI to generate a transaction to send 1000uatom from a `senderAddress` to a `recipientAddress`. The command specifies how much gas they are willing to pay: an automatic estimate scaled up by 1.5 times, with a gas price of 0.025uatom per unit gas.
+
+```bash
+appd tx send <recipientAddress> 1000uatom --from <senderAddress> --gas auto --gas-adjustment 1.5 --gas-prices 0.025uatom
+```
+
+#### Other Transaction Creation Methods
+
+The command-line is an easy way to interact with an application, but `Tx` can also be created using a [gRPC or REST interface](/docs/sdk/v0.53/api-reference/service-apis/grpc_rest) or some other entry point defined by the application developer. From the user's perspective, the interaction depends on the web interface or wallet they are using (e.g. creating `Tx` using [Lunie.io](https://lunie.io/#/) and signing it with a Ledger Nano S).
+
+## Addition to Mempool
+
+Each full-node (running CometBFT) that receives a `Tx` sends an [ABCI message](https://docs.cometbft.com/v0.37/spec/p2p/messages/),
+`CheckTx`, to the application layer to check for validity, and receives an `abci.ResponseCheckTx`. If the `Tx` passes the checks, it is held in the node's
+[**Mempool**](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool/), an in-memory pool of transactions unique to each node, pending inclusion in a block - honest nodes discard a `Tx` if it is found to be invalid. Prior to consensus, nodes continuously check incoming transactions and gossip them to their peers.
+
+### Types of Checks
+
+The full-nodes perform stateless, then stateful checks on `Tx` during `CheckTx`, with the goal of
+identifying and rejecting an invalid transaction as early as possible to avoid wasted computation.
+
+**_Stateless_** checks do not require nodes to access state - light clients or offline nodes can do
+them - and are thus less computationally expensive.
Stateless checks include making sure addresses
+are not empty, enforcing nonnegative numbers, and other logic specified in the definitions.
+
+**_Stateful_** checks validate transactions and messages based on a committed state. Examples
+include checking that the relevant values exist and can be transacted with, the address
+has sufficient funds, and the sender is authorized or has the correct ownership to transact.
+At any given moment, full-nodes typically have [multiple versions](/docs/sdk/v0.53/documentation/application-framework/baseapp#state-updates)
+of the application's internal state for different purposes. For example, nodes execute state
+changes while in the process of verifying transactions, but still need a copy of the last committed
+state in order to answer queries - they should not respond using state with uncommitted changes.
+
+In order to verify a `Tx`, full-nodes call `CheckTx`, which includes both _stateless_ and _stateful_
+checks. Further validation happens later in the [`DeliverTx`](#delivertx) stage. `CheckTx` goes
+through several steps, beginning with decoding `Tx`.
+
+### Decoding
+
+When `Tx` is received by the application from the underlying consensus engine (e.g. CometBFT), it is still in its [encoded](/docs/sdk/v0.53/documentation/protocol-development/encoding) `[]byte` form and needs to be unmarshaled in order to be processed. Then, the [`runTx`](/docs/sdk/v0.53/documentation/application-framework/baseapp#runtx-antehandler-runmsgs-posthandler) function is called to run in `runTxModeCheck` mode, meaning the function runs all checks but exits before executing messages and writing state changes.
+
+### ValidateBasic (deprecated)
+
+Messages ([`sdk.Msg`](/docs/sdk/v0.53/documentation/protocol-development/transactions#messages)) are extracted from transactions (`Tx`). The `ValidateBasic` method of the `sdk.Msg` interface implemented by the module developer is run for each transaction.
+To discard obviously invalid messages, the `BaseApp` type calls the `ValidateBasic` method very early in the processing of the message in the [`CheckTx`](/docs/sdk/v0.53/documentation/application-framework/baseapp#checktx) and [`DeliverTx`](/docs/sdk/v0.53/documentation/application-framework/baseapp#delivertx) transactions.
+`ValidateBasic` can include only **stateless** checks (the checks that do not require access to the state).
+
+
+The `ValidateBasic` method on messages has been deprecated in favor of validating messages directly in their respective [`Msg` services](/docs/sdk/v0.53/documentation/module-system/msg-services#Validation).
+
+Read [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) for more details.
+
+
+
+
+ `BaseApp` still calls `ValidateBasic` on messages that implement that method
+ for backwards compatibility.
+
+
+#### Guideline
+
+`ValidateBasic` should not be used anymore. Message validation should be performed in the `Msg` service when [handling a message](/docs/sdk/v0.53/documentation/module-system/msg-services#Validation) in a module Msg Server.
+
+### AnteHandler
+
+`AnteHandler`s, even though optional, are in practice very often used to perform signature verification, gas calculation, fee deduction, and other core operations related to blockchain transactions.
+
+A copy of the cached context is provided to the `AnteHandler`, which performs limited checks specified for the transaction type. Using a copy allows the `AnteHandler` to do stateful checks for `Tx` without modifying the last committed state, and revert back to the original if the execution fails.
+
+For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth/spec) module `AnteHandler` checks and increments sequence numbers, checks signatures and account numbers, and deducts fees from the first signer of the transaction - all state changes are made using the `checkState`.
+
+
+ Ante handlers only run on a transaction.
If a transaction embeds multiple
+ messages (like some x/authz, x/gov transactions for instance), the ante
+ handlers only have awareness of the outer message. Inner messages are mostly
+ directly routed to the [message
+ router](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router)
+ and will skip the chain of ante handlers. Keep that in mind when designing
+ your own ante handler.
+
+
+### Gas
+
+The [`Context`](/docs/sdk/v0.53/documentation/application-framework/context), which keeps a `GasMeter` that tracks how much gas is used during the execution of `Tx`, is initialized. The user-provided amount of gas for `Tx` is known as `GasWanted`. If `GasConsumed`, the amount of gas consumed during execution, ever exceeds `GasWanted`, the execution stops and the changes made to the cached copy of the state are not committed. Otherwise, `CheckTx` sets `GasUsed` equal to `GasConsumed` and returns it in the result. After calculating the gas and fee values, validator-nodes check that the user-specified `gas-prices` is greater than their locally defined `min-gas-prices`.
+
+### Discard or Addition to Mempool
+
+If at any point during `CheckTx` the `Tx` fails, it is discarded and the transaction lifecycle ends
+there. Otherwise, if it passes `CheckTx` successfully, the default protocol is to relay it to peer
+nodes and add it to the Mempool so that the `Tx` becomes a candidate to be included in the next block.
+
+The **mempool** serves the purpose of keeping track of transactions seen by all full-nodes.
+Full-nodes keep a **mempool cache** of the last `mempool.cache_size` transactions they have seen, as a first line of
+defense to prevent replay attacks. Ideally, `mempool.cache_size` is large enough to encompass all
+of the transactions in the full mempool. If the mempool cache is too small to keep track of all
+the transactions, `CheckTx` is responsible for identifying and rejecting replayed transactions.
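To make the cache-based replay defense concrete, here is a toy sketch in Go (all names hypothetical — this is not CometBFT's actual mempool cache): a bounded FIFO set that cheaply rejects identical copies while letting old entries age out of the window.

```go
package main

import "fmt"

// txCache is an illustrative, bounded FIFO cache of recently seen tx hashes.
type txCache struct {
	size  int
	order []string        // hashes in arrival order, oldest first
	seen  map[string]bool // fast membership check
}

func newTxCache(size int) *txCache {
	return &txCache{size: size, seen: make(map[string]bool)}
}

// Push reports whether the hash was newly added; a cached (replayed)
// hash is rejected without re-running CheckTx. At capacity, the oldest
// entry is evicted first.
func (c *txCache) Push(hash string) bool {
	if c.seen[hash] {
		return false
	}
	if len(c.order) == c.size {
		oldest := c.order[0]
		c.order = c.order[1:]
		delete(c.seen, oldest)
	}
	c.order = append(c.order, hash)
	c.seen[hash] = true
	return true
}

func main() {
	cache := newTxCache(2)
	fmt.Println(cache.Push("txA")) // true: first sighting
	fmt.Println(cache.Push("txA")) // false: identical copy rejected
	fmt.Println(cache.Push("txB")) // true
	fmt.Println(cache.Push("txC")) // true: evicts txA
	fmt.Println(cache.Push("txA")) // true: txA fell out of the window
}
```

The last call illustrates exactly the gap described above: once a transaction falls out of the cache window, `CheckTx` and the `sequence`/fee mechanics must catch the replay instead.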
+
+Currently existing preventative measures include fees and a `sequence` (nonce) counter to distinguish
+replayed transactions from identical but valid ones. If an attacker tries to spam nodes with many
+copies of a `Tx`, full-nodes keeping a mempool cache reject all identical copies instead of running
+`CheckTx` on them. Even if the copies have incremented `sequence` numbers, attackers are
+disincentivized by the need to pay fees.
+
+Validator nodes keep a mempool to prevent replay attacks, just as full-nodes do, but also use it as
+a pool of unconfirmed transactions in preparation for block inclusion. Note that even if a `Tx`
+passes all checks at this stage, it may still be found invalid later on, because
+`CheckTx` does not fully validate the transaction (that is, it does not actually execute the messages).
+
+## Inclusion in a Block
+
+Consensus, the process through which validator nodes come to agreement on which transactions to
+accept, happens in **rounds**. Each round begins with a proposer creating a block of the most
+recent transactions and ends with **validators**, special full-nodes with voting power responsible
+for consensus, agreeing to accept the block or go with a `nil` block instead. Validator nodes
+execute the consensus algorithm, such as [CometBFT](https://docs.cometbft.com/v0.37/spec/consensus/),
+confirming the transactions using ABCI requests to the application, in order to come to this agreement.
+
+The first step of consensus is the **block proposal**. One proposer amongst the validators is chosen
+by the consensus algorithm to create and propose a block - in order for a `Tx` to be included, it
+must be in this proposer's mempool.
+
+## State Changes
+
+The next step of consensus is to execute the transactions to fully validate them. All full-nodes
+that receive a block proposal from the correct proposer execute the transactions by calling the ABCI function `FinalizeBlock`.
+As mentioned throughout the documentation, `BeginBlock`, `ExecuteTx` and `EndBlock` are called within `FinalizeBlock`.
+Although every full-node operates individually and locally, the outcome is always consistent and unequivocal. This is because the state changes brought about by the messages are predictable, and the transactions are specifically sequenced in the proposed block.
+
+```text expandable
+ --------------------------
+ | Receive Block Proposal |
+ --------------------------
+             |
+             v
+ --------------------------
+ |     FinalizeBlock      |
+ --------------------------
+             |
+             v
+ --------------------------
+ |       BeginBlock       |
+ --------------------------
+             |
+             v
+ --------------------------
+ |     ExecuteTx(tx0)     |
+ |     ExecuteTx(tx1)     |
+ |     ExecuteTx(tx2)     |
+ |     ExecuteTx(tx3)     |
+ |           .            |
+ |           .            |
+ |           .            |
+ --------------------------
+             |
+             v
+ --------------------------
+ |        EndBlock        |
+ --------------------------
+             |
+             v
+ --------------------------
+ |       Consensus        |
+ --------------------------
+             |
+             v
+ --------------------------
+ |         Commit         |
+ --------------------------
+```
+
+### Transaction Execution
+
+The `FinalizeBlock` ABCI function defined in [`BaseApp`](/docs/sdk/v0.53/documentation/application-framework/baseapp) does the bulk of the
+state transitions: it is run for each transaction in the block in sequential order as committed
+to during consensus. Under the hood, transaction execution is almost identical to `CheckTx` but calls the
+[`runTx`](/docs/sdk/v0.53/documentation/application-framework/baseapp#runtx) function in deliver mode instead of check mode.
+Instead of using their `checkState`, full-nodes use `finalizeBlockState`:
+
+- **Decoding:** Since `FinalizeBlock` is an ABCI call, `Tx` is received in the encoded `[]byte` form.
+  Nodes first unmarshal the transaction, using the [`TxConfig`](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#register-codec) defined in the app, then call `runTx` in `execModeFinalize`, which is very similar to `CheckTx` but also executes and writes state changes.
+
+- **Checks and `AnteHandler`:** Full-nodes call `validateBasicMsgs` and `AnteHandler` again. This second check
+  happens because they may not have seen the same transactions during the addition to Mempool stage
+  and a malicious proposer may have included invalid ones. One difference here is that the
+  `AnteHandler` does not compare `gas-prices` to the node's `min-gas-prices` since that value is local
+  to each node - differing values across nodes yield nondeterministic results.
+
+- **`MsgServiceRouter`:** After `CheckTx` exits, `FinalizeBlock` continues to run
+  [`runMsgs`](/docs/sdk/v0.53/documentation/application-framework/baseapp#runtx-antehandler-runmsgs-posthandler) to fully execute each `Msg` within the transaction.
+  Since the transaction may have messages from different modules, `BaseApp` needs to know which module to route each message to in order to find the appropriate handler. This is achieved using `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's Protobuf [`Msg` service](/docs/sdk/v0.53/documentation/module-system/msg-services).
+  For `LegacyMsg` routing, the `Route` function is called via the [module manager](/docs/sdk/v0.53/documentation/module-system/module-manager) to retrieve the route name and find the legacy [`Handler`](/docs/sdk/v0.53/documentation/module-system/msg-services#handler-type) within the module.
+
+- **`Msg` service:** Protobuf `Msg` service is responsible for executing each message in the `Tx` and causes state transitions to persist in `finalizeBlockState`.
+
+- **PostHandlers:** [`PostHandler`](/docs/sdk/v0.53/documentation/application-framework/baseapp#posthandler)s run after the execution of the message. If they fail, the state changes of `runMsgs`, as well as those of the `PostHandler`s, are both reverted.
+
+- **Gas:** While a `Tx` is being delivered, a `GasMeter` is used to keep track of how much
+  gas is being used; if execution completes, `GasUsed` is set and returned in the
+  `abci.ExecTxResult`.
+ If execution halts because `BlockGasMeter` or `GasMeter` has run out or something else goes
+ wrong, a deferred function at the end appropriately errors or panics.
+
+If there are any failed state changes resulting from a `Tx` being invalid or `GasMeter` running out,
+the transaction processing terminates and any state changes are reverted. Invalid transactions in a
+block proposal cause validator nodes to reject the block and vote for a `nil` block instead.
+
+### Commit
+
+The final step is for nodes to commit the block and state changes. Validator nodes
+perform the previous step of executing state transitions in order to validate the transactions,
+then sign the block to confirm it. Full nodes that are not validators do not
+participate in consensus - i.e. they cannot vote - but listen for votes to understand whether or
+not they should commit the state changes.
+
+When they receive enough validator votes (2/3+ _precommits_ weighted by voting power), full nodes commit to a new block to be added to the blockchain and
+finalize the state transitions in the application layer. A new state root is generated to serve as
+a Merkle proof for the state transitions. Applications use the [`Commit`](/docs/sdk/v0.53/documentation/application-framework/baseapp#commit)
+ABCI method inherited from [BaseApp](/docs/sdk/v0.53/documentation/application-framework/baseapp); it syncs all the state transitions by
+writing the `deliverState` into the application's internal state. As soon as the state changes are
+committed, `checkState` starts afresh from the most recently committed state and `deliverState`
+resets to `nil` in order to be consistent and reflect the changes.
+
+Note that not all blocks have the same number of transactions and it is possible for consensus to
+result in a `nil` block or one with none at all.
In a public blockchain network, it is also possible
+for validators to be **byzantine**, or malicious, which may prevent a `Tx` from being committed in
+the blockchain. Possible malicious behaviors include the proposer deciding to censor a `Tx` by
+excluding it from the block or a validator voting against the block.
+
+At this point, the transaction lifecycle of a `Tx` is over: nodes have verified its validity,
+delivered it by executing its state changes, and committed those changes. The `Tx` itself,
+in `[]byte` form, is stored in a block and appended to the blockchain.
diff --git a/docs/sdk/v0.53/documentation/state-storage/README.mdx b/docs/sdk/v0.53/documentation/state-storage/README.mdx
new file mode 100644
index 00000000..8ae75dd5
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/state-storage/README.mdx
@@ -0,0 +1,242 @@
+---
+title: Store
+---
+
+The store package defines the interfaces, types and abstractions for Cosmos SDK
+modules to read and write to Merkleized state within a Cosmos SDK application.
+The store package provides many primitives for developers to use in order to
+work with both state storage and state commitment. Below we describe the various
+abstractions.
+
+## Types
+
+### `Store`
+
+The bulk of the store interfaces are defined [here](https://github.com/cosmos/cosmos-sdk/blob/main/store/types/store.go),
+where the base primitive interface, on which the other interfaces build, is
+the `Store` type. The `Store` interface defines the ability to tell the type of
+the implementing store and the ability to cache wrap via the `CacheWrapper` interface.
+
+### `CacheWrapper` & `CacheWrap`
+
+One of the most important features a store can provide is the
+ability to cache wrap. Cache wrapping is essentially the underlying store wrapping
+itself within another store type that performs caching for both reads and writes
+with the ability to flush writes via `Write()`.
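As a rough sketch of these cache-wrap semantics (simplified, hypothetical interfaces — the SDK's real store interfaces carry many more methods), the wrapper below buffers writes in memory and only flushes them to the parent store when `Write()` is called:

```go
package main

import (
	"fmt"
	"sort"
)

// KVStore is a pared-down stand-in for the SDK's store interface.
type KVStore interface {
	Get(key string) string
	Set(key, value string)
}

// memStore is a trivial in-memory parent store.
type memStore struct{ data map[string]string }

func (s *memStore) Get(k string) string { return s.data[k] }
func (s *memStore) Set(k, v string)     { s.data[k] = v }

// cacheWrap buffers writes; nothing reaches the parent until Write().
type cacheWrap struct {
	parent KVStore
	dirty  map[string]string
}

func wrap(parent KVStore) *cacheWrap {
	return &cacheWrap{parent: parent, dirty: make(map[string]string)}
}

// Get prefers pending writes, falling back to the parent store.
func (c *cacheWrap) Get(k string) string {
	if v, ok := c.dirty[k]; ok {
		return v
	}
	return c.parent.Get(k)
}

func (c *cacheWrap) Set(k, v string) { c.dirty[k] = v }

// Write flushes pending writes to the parent in sorted key order,
// which keeps the flush deterministic.
func (c *cacheWrap) Write() {
	keys := make([]string, 0, len(c.dirty))
	for k := range c.dirty {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		c.parent.Set(k, c.dirty[k])
	}
}

func main() {
	base := &memStore{data: map[string]string{}}
	cw := wrap(base)
	cw.Set("balance/alice", "100")
	fmt.Println(base.Get("balance/alice")) // "" — nothing flushed yet
	cw.Write()
	fmt.Println(base.Get("balance/alice")) // "100"
}
```

Discarding the wrapper instead of calling `Write()` is what lets failed executions revert cleanly without touching the committed state.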
+
+### `KVStore` & `CacheKVStore`
+
+One of the most important interfaces that both developers and modules interface
+with, which also provides the basis of most state storage and commitment operations,
+is the `KVStore`. The `KVStore` interface provides basic CRUD abilities and
+prefix-based iteration, including reverse iteration.
+
+Typically, each module has its own dedicated `KVStore` instance, which it can
+get access to via the `sdk.Context` and the use of a pointer-based named key --
+`KVStoreKey`. The `KVStoreKey` provides pseudo-OCAP. How exactly a `KVStoreKey`
+maps to a `KVStore` will be illustrated below through the `CommitMultiStore`.
+
+Note, a `KVStore` cannot directly commit state. Instead, a `KVStore` can be wrapped
+by a `CacheKVStore` which extends a `KVStore` and provides the ability for the
+caller to execute `Write()` which commits state to the underlying state storage.
+Note, this doesn't actually flush writes to disk, as writes are held in memory
+until `Commit()` is called on the `CommitMultiStore`.
+
+### `CommitMultiStore`
+
+The `CommitMultiStore` interface exposes the top-level interface that is used
+to manage state commitment and storage by an SDK application and abstracts the
+concept of multiple `KVStore`s which are used by multiple modules. Specifically,
+it supports the following high-level primitives:
+
+* Allows for a caller to retrieve a `KVStore` by providing a `KVStoreKey`.
+* Exposes pruning mechanisms to remove state pinned against a specific height/version
+  in the past.
+* Allows for loading state storage at a particular height/version in the past to
+  provide current head and historical queries.
+* Provides the ability to rollback state to a previous height/version.
+* Provides the ability to load state storage at a particular height/version
+  while also performing store upgrades, which are used during live hard-fork
+  application state migrations.
+* Provides the ability to commit all current accumulated state to disk and
+  perform Merkle commitment.
+
+## Implementation Details
+
+While there are many interfaces that the `store` package provides, there is
+typically a core implementation, defined in the Cosmos SDK, for each main interface
+that modules and developers interact with.
+
+### `iavl.Store`
+
+The `iavl.Store` provides the core implementation for state storage and commitment
+by implementing the following interfaces:
+
+* `KVStore`
+* `CommitStore`
+* `CommitKVStore`
+* `Queryable`
+* `StoreWithInitialVersion`
+
+It allows for all CRUD operations to be performed along with allowing current
+and historical state queries, prefix iteration, and state commitment along with
+Merkle proof operations. The `iavl.Store` also provides the ability to remove
+historical state from the state commitment layer.
+
+An overview of the IAVL implementation can be found [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md).
+It is important to note that the IAVL store provides both state commitment and
+logical storage operations, which comes with drawbacks as there are various
+performance impacts, some of which are very drastic, when it comes to the
+operations mentioned above.
+
+When dealing with state management in modules and clients, the Cosmos SDK provides
+various layers of abstractions or "store wrapping", where the `iavl.Store` is the
+bottom-most layer. When requesting a store to perform reads or writes in a module,
+the typical abstraction layer in order is defined as follows:
+
+```text
+iavl.Store <- cachekv.Store <- gaskv.Store <- cachemulti.Store <- rootmulti.Store
+```
+
+### Concurrent use of IAVL store
+
+The tree under `iavl.Store` is not safe for concurrent use. It is the
+responsibility of the caller to ensure that concurrent access to the store is
+not performed.
+
+The main issue with concurrent use is when data is written at the same time as
+it's being iterated over.
Doing so will cause an irrecoverable fatal error because
+of concurrent reads and writes to an internal map.
+
+Although it's not recommended, you can iterate through values while writing to
+the store by disabling "FastNode" **without guarantees that the values being written will
+be returned during the iteration** (if you need this, you might want to reconsider
+the design of your application). This is done by setting `iavl-disable-fastnode`
+to `true` in the config TOML file.
+
+### `cachekv.Store`
+
+The `cachekv.Store` wraps an underlying `KVStore`, typically an `iavl.Store`,
+and contains an in-memory cache for storing pending writes to the underlying `KVStore`.
+`Set` and `Delete` calls are executed on the in-memory cache, whereas `Has` calls
+are proxied to the underlying `KVStore`.
+
+One of the most important calls to a `cachekv.Store` is `Write()`, which ensures
+that key-value pairs are written to the underlying `KVStore` in a deterministic
+and ordered manner by sorting the keys first. The store keeps track of "dirty"
+keys and uses these to determine what keys to sort. In addition, it also keeps
+track of deleted keys and ensures these are also removed from the underlying
+`KVStore`.
+
+The `cachekv.Store` also provides the ability to perform iteration and reverse
+iteration. Iteration is performed through the `cacheMergeIterator` type and uses
+both the dirty cache and the underlying `KVStore` to iterate over key-value pairs.
+
+Note, all calls to CRUD and iteration operations on a `cachekv.Store` are thread-safe.
+
+### `gaskv.Store`
+
+The `gaskv.Store` provides a simple implementation of a `KVStore`.
+Specifically, it just wraps an existing `KVStore`, such as a cache-wrapped
+`iavl.Store`, incurs configurable gas costs for CRUD operations via
+`ConsumeGas()` calls defined on the `GasMeter`, which exists in an `sdk.Context`,
+and then proxies the CRUD call to the underlying store. Note, the
+`GasMeter` is reset on each block.
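+
+The wrap-charge-proxy flow described above can be sketched in plain Go. This is a toy model; `gasKV` and `gasMeter` are made-up names for illustration, not the SDK's `gaskv.Store` or `GasMeter`. It shows the order of operations: consume gas first, then proxy the call to the underlying store:
+
+```go
+package main
+
+import (
+    "errors"
+    "fmt"
+)
+
+/ gasMeter is a toy gas meter: it tracks consumption against a fixed limit.
+type gasMeter struct {
+    limit, consumed uint64
+}
+
+var errOutOfGas = errors.New("out of gas")
+
+func (g *gasMeter) ConsumeGas(amount uint64) error {
+    g.consumed += amount
+    if g.consumed > g.limit {
+        return errOutOfGas
+    }
+    return nil
+}
+
+/ gasKV wraps a parent store and charges gas before proxying each write.
+type gasKV struct {
+    parent  map[string][]byte
+    meter   *gasMeter
+    costSet uint64 / configurable flat cost per write
+}
+
+func (s gasKV) Set(key string, value []byte) error {
+    if err := s.meter.ConsumeGas(s.costSet); err != nil {
+        return err / charge gas first; the write never reaches the parent
+    }
+    s.parent[key] = value / then proxy the write to the underlying store
+    return nil
+}
+
+func main() {
+    store := gasKV{parent: map[string][]byte{}, meter: &gasMeter{limit: 25}, costSet: 10}
+    fmt.Println(store.Set("a", []byte("1"))) / succeeds
+    fmt.Println(store.Set("b", []byte("2"))) / succeeds
+    fmt.Println(store.Set("c", []byte("3"))) / out of gas
+}
+```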
+
+### `cachemulti.Store` & `rootmulti.Store`
+
+The `rootmulti.Store` acts as an abstraction around a series of stores. Namely,
+it implements the `CommitMultiStore` and `Queryable` interfaces. Through the
+`rootmulti.Store`, an SDK module can request access to a `KVStore` to perform
+state CRUD operations and queries by holding access to a unique `KVStoreKey`.
+
+The `rootmulti.Store` ensures these queries and state operations are performed
+through cache-wrapped instances of `cachekv.Store`, which is described above. The
+`rootmulti.Store` implementation is also responsible for committing all accumulated
+state from each `KVStore` to disk and returning an application state Merkle root.
+
+Queries can be performed to return state data along with associated state
+commitment proofs for both previous heights/versions and the current state root.
+Queries are routed based on store name, i.e. a module, along with other parameters
+which are defined in `abci.RequestQuery`.
+
+The `rootmulti.Store` also provides primitives for pruning data at a given
+height/version from state storage. When a height is committed, the `rootmulti.Store`
+will determine if other previous heights should be considered for removal based
+on the operator's pruning settings defined by `PruningOptions`, which defines
+how many recent versions to keep on disk and the interval at which to remove
+"staged" pruned heights from disk. During each interval, the staged heights are
+removed from each `KVStore`. Note, it is up to the underlying `KVStore`
+implementation to determine how pruning is actually performed. The `PruningOptions`
+are defined as follows:
+
+```go
+type PruningOptions struct {
+  / KeepRecent defines how many recent heights to keep on disk.
+  KeepRecent uint64
+
+  / Interval defines when the pruned heights are removed from disk.
+  Interval uint64
+
+  / Strategy defines the kind of pruning strategy. See below for more information on each.
+  Strategy PruningStrategy
+}
+```
+
+The Cosmos SDK defines a preset number of pruning "strategies": `default`, `everything`,
+`nothing`, and `custom`.
+
+It is important to note that the `rootmulti.Store` considers each `KVStore` as a
+separate logical store. In other words, they do not share a Merkle tree or
+comparable data structure. This means that when state is committed via
+`rootmulti.Store`, each store is committed in sequence and thus is not atomic.
+
+In terms of store construction and wiring, each Cosmos SDK application contains
+a `BaseApp` instance which internally has a reference to a `CommitMultiStore`
+that is implemented by a `rootmulti.Store`. The application then registers one or
+more `KVStoreKey` that pertain to a unique module and thus a `KVStore`. Through
+the use of an `sdk.Context` and a `KVStoreKey`, each module can get direct access
+to its respective `KVStore` instance.
+
+Example:
+
+```go expandable
+func NewApp(...)
+
+Application {
+    / ...
+    bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+
+bApp.SetCommitMultiStoreTracer(traceStore)
+
+bApp.SetVersion(version.Version)
+
+bApp.SetInterfaceRegistry(interfaceRegistry)
+
+    / ...
+    keys := sdk.NewKVStoreKeys(...)
+    transientKeys := sdk.NewTransientStoreKeys(...)
+    memKeys := sdk.NewMemoryStoreKeys(...)
+
+    / ...
+
+    / initialize stores
+    app.MountKVStores(keys)
+
+app.MountTransientStores(transientKeys)
+
+app.MountMemoryStores(memKeys)
+
+    / ...
+}
+```
+
+The `rootmulti.Store` itself can be cache-wrapped, which returns an instance of a
+`cachemulti.Store`. For each block, `BaseApp` ensures that the proper abstractions
+are created on the `CommitMultiStore`, i.e. ensuring that the `rootmulti.Store`
+is cache-wrapped and uses the resulting `cachemulti.Store` to be set on the
+`sdk.Context` which is then used for block and transaction execution.
As a result,
+all state mutations due to block and transaction execution are actually held
+ephemerally until `Commit()` is called by the ABCI client. This concept is further
+expanded upon when the AnteHandler is executed per transaction to ensure state
+is not committed for transactions that failed CheckTx.
diff --git a/docs/sdk/v0.53/documentation/state-storage/collections.mdx b/docs/sdk/v0.53/documentation/state-storage/collections.mdx
new file mode 100644
index 00000000..f4ff233b
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/state-storage/collections.mdx
@@ -0,0 +1,1374 @@
+---
+title: Collections
+description: >-
+  Collections is a library meant to simplify the experience with respect to
+  module state handling.
+---
+
+Collections is a library meant to simplify the experience with respect to module state handling.
+
+Cosmos SDK modules handle their state using the `KVStore` interface. The problem with working with
+`KVStore` is that it forces you to think of state as raw byte KV pairings, when in reality the majority of
+state consists of complex, concrete golang objects (strings, ints, structs, etc.).
+
+Collections allows you to work with state as if it were normal golang objects and removes the need
+for you to think of your state as raw bytes in your code.
+
+It also allows you to migrate your existing state without causing any state breakage that forces you into
+tedious and complex chain state migrations.
+
+## Installation
+
+To install collections in your cosmos-sdk chain project, run the following command:
+
+```shell
+go get cosmossdk.io/collections
+```
+
+## Core types
+
+Collections offers 5 different APIs to work with state, which will be explored in the next sections:
+
+* `Map`: to work with typed arbitrary KV pairings.
+* `KeySet`: to work with just typed keys.
+* `Item`: to work with just one typed value.
+* `Sequence`: which is a monotonically increasing number.
+* `IndexedMap`: which combines `Map` and `KeySet` to provide a `Map` with indexing capabilities.
+
+## Preliminary components
+
+Before exploring the different collection types and their capabilities, it is necessary to introduce
+the three components that every collection shares. In fact, when instantiating a collection type by doing, for example,
+`collections.NewMap/collections.NewItem/...` you will find yourself having to pass some common arguments.
+
+For example, in code:
+
+```go expandable
+package collections
+
+import (
+
+    "cosmossdk.io/collections"
+    storetypes "cosmossdk.io/store/types"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var AllowListPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+    Schema    collections.Schema
+    AllowList collections.KeySet[string]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey)
+
+Keeper {
+    sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+return Keeper{
+    AllowList: collections.NewKeySet(sb, AllowListPrefix, "allow_list", collections.StringKey),
+}
+}
+```
+
+Let's analyse the shared arguments, what they do, and why we need them.
+
+### SchemaBuilder
+
+The first argument passed is the `SchemaBuilder`.
+
+`SchemaBuilder` is a structure that keeps track of all the state of a module. It is not required by collections
+to deal with state, but it offers a dynamic and reflective way for clients to explore a module's state.
+
+We instantiate a `SchemaBuilder` by passing it a function that, given the module's store key, returns the module's specific store.
+
+We then need to pass the schema builder to every collection type we instantiate in our keeper, in our case the `AllowList`.
+
+### Prefix
+
+The second argument passed to our `KeySet` is a `collections.Prefix`. A prefix represents a partition of the module's `KVStore`
+where all the state of a specific collection will be saved.
+
+Since a module can have multiple collections, the following is expected:
+
+* module params will become a `collections.Item`
+* the `AllowList` is a `collections.KeySet`
+
+We don't want one collection to write over the state of another collection, so we pass each a prefix, which defines a storage
+partition owned by the collection.
+
+If you have already built modules, the prefix translates to the items you were creating in your `types/keys.go` file, example: [Link](https://github.com/cosmos/cosmos-sdk/blob/v0.52.0-rc.1/x/feegrant/key.go#L16~L22)
+
+your old:
+
+```go
+var (
+    / FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+    / - 0x00: allowance
+    FeeAllowanceKeyPrefix = []byte{0x00}
+
+    / FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+    / - 0x01:
+    FeeAllowanceQueueKeyPrefix = []byte{0x01}
+)
+```
+
+becomes:
+
+```go
+var (
+    / FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+    / - 0x00: allowance
+    FeeAllowanceKeyPrefix = collections.NewPrefix(0)
+
+    / FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+    / - 0x01:
+    FeeAllowanceQueueKeyPrefix = collections.NewPrefix(1)
+)
+```
+
+#### Rules
+
+`collections.NewPrefix` accepts either `uint8`, `string` or `[]byte`. It's good practice to use an always-increasing `uint8` for disk space efficiency.
+
+A collection **MUST NOT** share the same prefix as another collection in the same module, and a collection prefix **MUST NEVER** start with the same prefix as another. Examples:
+
+```go
+prefix1 := collections.NewPrefix("prefix")
+
+prefix2 := collections.NewPrefix("prefix") / THIS IS BAD!
+```
+
+```go
+prefix1 := collections.NewPrefix("a")
+
+prefix2 := collections.NewPrefix("aa") / prefix2 starts with the same as prefix1: BAD!!!
+```
+
+### Human-Readable Name
+
+The third parameter we pass to a collection is a string, which is a human-readable name.
+It is needed to make the role of a collection understandable to clients who have no clue about
+what a module is storing in state.
+
+#### Rules
+
+Each collection in a module **MUST** have a unique humanised name.
+
+## Key and Value Codecs
+
+A collection is generic over the type you can use as keys or values.
+This makes collections dumb, but it also means that, hypothetically, we can store any
+go type in a collection. We are not bound to any type of encoding (be it proto, json or whatever).
+
+So a collection needs to be given a way to understand how to convert your keys and values to bytes.
+This is achieved through `KeyCodec` and `ValueCodec`, which are arguments that you pass to your
+collections when you're instantiating them using the `collections.NewMap/collections.NewItem/...`
+instantiation functions.
+
+NOTE: Generally speaking you will never be required to implement your own `Key/ValueCodec`, as
+the SDK and collections libraries already come with default, safe and fast implementations of those.
+You might need to implement them only if you're migrating to collections and there are state layout incompatibilities.
+
+Let's explore an example:
+
+```go expandable
+package collections
+
+import (
+
+    "cosmossdk.io/collections"
+    storetypes "cosmossdk.io/store/types"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var IDsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+    Schema collections.Schema
+    IDs    collections.Map[string, uint64]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey)
+
+Keeper {
+    sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+return Keeper{
+    IDs: collections.NewMap(sb, IDsPrefix, "ids", collections.StringKey, collections.Uint64Value),
+}
+}
+```
+
+We're now instantiating a map where the key is a string and the value is a `uint64`.
+We already know the first three arguments of the `NewMap` function.
+
+The fourth parameter is our `KeyCodec`; we know that the `Map` has `string` as its key, so we pass it a `KeyCodec` that handles strings as keys.
+
+The fifth parameter is our `ValueCodec`; we know that the `Map` has a `uint64` as its value, so we pass it a `ValueCodec` that handles uint64.
+
+Collections already comes with all the required implementations for golang primitive types.
+
+Let's make another example, which falls closer to what we build using the Cosmos SDK. Let's say we want
+to create a `collections.Map` that maps account addresses to their base account. So we want to map an `sdk.AccAddress` to an `auth.BaseAccount` (which is a proto):
+
+```go expandable
+package collections
+
+import (
+
+    "cosmossdk.io/collections"
+    storetypes "cosmossdk.io/store/types"
+    "github.com/cosmos/cosmos-sdk/codec"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+    authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+    Schema   collections.Schema
+    Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec)
+
+Keeper {
+    sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+return Keeper{
+    Accounts: collections.NewMap(sb, AccountsPrefix, "accounts",
+        sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc)),
+}
+}
+```
+
+As we can see here, since our `collections.Map` maps `sdk.AccAddress` to `authtypes.BaseAccount`,
+we use the `sdk.AccAddressKey`, which is the `KeyCodec` implementation for `AccAddress`, and we use `codec.CollValue` to
+encode our proto type `BaseAccount`.
+
+Generally speaking you will always find the respective key and value codecs for types in the `go.mod` path you're using
+to import that type. If you want to encode proto values, refer to the `codec.CollValue` function, which allows you
+to encode any type implementing the `proto.Message` interface.
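+
+To make the codec idea concrete, here is a toy sketch of what a `ValueCodec` for `uint64` boils down to: a typed value in, bytes out, and back. The `uint64Value` type below is hypothetical; it mirrors the idea behind `collections.Uint64Value`, not its exact implementation:
+
+```go
+package main
+
+import (
+    "encoding/binary"
+    "fmt"
+)
+
+/ uint64Value is a toy value codec: it turns a typed Go value into bytes
+/ for the KVStore and decodes it back, validating the length on the way in.
+type uint64Value struct{}
+
+func (uint64Value) Encode(v uint64) []byte {
+    b := make([]byte, 8)
+    binary.BigEndian.PutUint64(b, v) / fixed-width, order-preserving encoding
+    return b
+}
+
+func (uint64Value) Decode(b []byte) (uint64, error) {
+    if len(b) != 8 {
+        return 0, fmt.Errorf("invalid value length: %d", len(b))
+    }
+    return binary.BigEndian.Uint64(b), nil
+}
+
+func main() {
+    var c uint64Value
+    raw := c.Encode(42)
+    v, err := c.Decode(raw)
+    fmt.Println(v, err, len(raw)) / 42 <nil> 8
+}
+```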
+
+## Map
+
+We analyse the first and most important collection type, the `collections.Map`.
+This is the type that everything else builds on top of.
+
+### Use case
+
+A `collections.Map` is used to map arbitrary keys with arbitrary values.
+
+### Example
+
+It's easier to explain a `collections.Map`'s capabilities through an example:
+
+```go expandable
+package collections
+
+import (
+
+    "cosmossdk.io/collections"
+    storetypes "cosmossdk.io/store/types"
+    "fmt"
+    "github.com/cosmos/cosmos-sdk/codec"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+    authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+    Schema   collections.Schema
+    Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec)
+
+Keeper {
+    sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+return Keeper{
+    Accounts: collections.NewMap(sb, AccountsPrefix, "accounts",
+        sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc)),
+}
+}
+
+func (k Keeper)
+
+CreateAccount(ctx sdk.Context, addr sdk.AccAddress, account authtypes.BaseAccount)
+
+error {
+    has, err := k.Accounts.Has(ctx, addr)
+    if err != nil {
+        return err
+}
+    if has {
+        return fmt.Errorf("account already exists: %s", addr)
+}
+
+err = k.Accounts.Set(ctx, addr, account)
+    if err != nil {
+        return err
+}
+
+return nil
+}
+
+func (k Keeper)
+
+GetAccount(ctx sdk.Context, addr sdk.AccAddress) (authtypes.BaseAccount, error) {
+    acc, err := k.Accounts.Get(ctx, addr)
+    if err != nil {
+        return authtypes.BaseAccount{
+}, err
+}
+
+return acc, nil
+}
+
+func (k Keeper)
+
+RemoveAccount(ctx sdk.Context, addr sdk.AccAddress)
+
+error {
+    err := k.Accounts.Remove(ctx, addr)
+    if err != nil {
+        return err
+}
+
+return nil
+}
+```
+
+#### Set method
+
+Set maps the provided `AccAddress` (the key) to the `auth.BaseAccount` (the value).
+
+Under the hood the `collections.Map` will convert the key and value to bytes using the [key and value codec](/docs/sdk/v0.53/documentation/state-storage/README#key-and-value-codecs).
+It will prepend the [prefix](/docs/sdk/v0.53/documentation/state-storage/README#prefix) to our key bytes and store the pair in the KVStore of the module.
+
+#### Has method
+
+The has method reports whether the provided key exists in the store.
+
+#### Get method
+
+The get method accepts the `AccAddress` and returns the associated `auth.BaseAccount` if it exists, otherwise it errors.
+
+#### Remove method
+
+The remove method accepts the `AccAddress` and removes it from the store. It won't report errors
+if the key does not exist; to check for existence before removal, use the `Has` method.
+
+#### Iteration
+
+Iteration has a separate section.
+
+## KeySet
+
+The second type of collection is `collections.KeySet`. As the name suggests, it maintains
+only a set of keys without values.
+
+#### Implementation curiosity
+
+A `collections.KeySet` is just a `collections.Map` with a key but no value.
+Internally the value is always the same and is represented as an empty byte slice, `[]byte{}`.
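+
+That curiosity can be sketched in plain Go: a set is just a map whose stored value is always the empty byte slice, so only key presence carries information. The `keySet` type below is a toy model for illustration, not the real `collections.KeySet`:
+
+```go
+package main
+
+import "fmt"
+
+/ keySet stores keys mapped to an always-empty value;
+/ membership is the only information the store carries.
+type keySet struct{ kv map[string][]byte }
+
+func newKeySet() keySet { return keySet{kv: map[string][]byte{}} }
+
+func (s keySet) Set(k string) { s.kv[k] = []byte{} } / value is always []byte{}
+
+func (s keySet) Has(k string) bool {
+    _, ok := s.kv[k]
+    return ok
+}
+
+func (s keySet) Remove(k string) { delete(s.kv, k) }
+
+func main() {
+    s := newKeySet()
+    s.Set("valoper1")
+    fmt.Println(s.Has("valoper1"), s.Has("valoper2")) / true false
+}
+```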
+
+### Example
+
+As always, we explore the collection type through an example:
+
+```go expandable
+package collections
+
+import (
+
+    "cosmossdk.io/collections"
+    storetypes "cosmossdk.io/store/types"
+    "fmt"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var ValidatorsSetPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+    Schema        collections.Schema
+    ValidatorsSet collections.KeySet[sdk.ValAddress]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey)
+
+Keeper {
+    sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+return Keeper{
+    ValidatorsSet: collections.NewKeySet(sb, ValidatorsSetPrefix, "validators_set", sdk.ValAddressKey),
+}
+}
+
+func (k Keeper)
+
+AddValidator(ctx sdk.Context, validator sdk.ValAddress)
+
+error {
+    has, err := k.ValidatorsSet.Has(ctx, validator)
+    if err != nil {
+        return err
+}
+    if has {
+        return fmt.Errorf("validator already in set: %s", validator)
+}
+
+err = k.ValidatorsSet.Set(ctx, validator)
+    if err != nil {
+        return err
+}
+
+return nil
+}
+
+func (k Keeper)
+
+RemoveValidator(ctx sdk.Context, validator sdk.ValAddress)
+
+error {
+    err := k.ValidatorsSet.Remove(ctx, validator)
+    if err != nil {
+        return err
+}
+
+return nil
+}
+```
+
+The first difference we notice is that `KeySet` needs us to specify only one type parameter: the key (`sdk.ValAddress` in this case).
+The second difference is that `KeySet`, in its `NewKeySet` function, does not require
+us to specify a `ValueCodec` but only a `KeyCodec`. This is because a `KeySet` only saves keys and not values.
+
+Let's explore the methods.
+
+#### Has method
+
+Has allows us to understand if a key is present in the `collections.KeySet` or not; it functions in the same way as `collections.Map.Has`.
+
+#### Set method
+
+Set inserts the provided key in the `KeySet`.
+
+#### Remove method
+
+Remove removes the provided key from the `KeySet`. It does not error if the key does not exist;
+if an existence check before removal is required, it needs to be coupled with the `Has` method.
+
+## Item
+
+The third type of collection is the `collections.Item`.
+It stores only one single item. It's useful, for example, for parameters: there is always only one
+instance of parameters in state.
+
+#### Implementation curiosity
+
+A `collections.Item` is just a `collections.Map` with no key, just a value.
+The key is the prefix of the collection!
+
+### Example
+
+```go expandable
+package collections
+
+import (
+
+    "cosmossdk.io/collections"
+    storetypes "cosmossdk.io/store/types"
+    "github.com/cosmos/cosmos-sdk/codec"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+    stakingtypes "cosmossdk.io/x/staking/types"
+)
+
+var ParamsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+    Schema collections.Schema
+    Params collections.Item[stakingtypes.Params]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec)
+
+Keeper {
+    sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+return Keeper{
+    Params: collections.NewItem(sb, ParamsPrefix, "params", codec.CollValue[stakingtypes.Params](cdc)),
+}
+}
+
+func (k Keeper)
+
+UpdateParams(ctx sdk.Context, params stakingtypes.Params)
+
+error {
+    err := k.Params.Set(ctx, params)
+    if err != nil {
+        return err
+}
+
+return nil
+}
+
+func (k Keeper)
+
+GetParams(ctx sdk.Context) (stakingtypes.Params, error) {
+    return k.Params.Get(ctx)
+}
+```
+
+The first key difference we notice is that we specify only one type parameter, which is the value we're storing.
+The second key difference is that we don't specify the `KeyCodec`: since we store only one item, the key is
+already known and constant.
+
+## Iteration
+
+One of the key features of the `KVStore` is iterating over keys.
+ +Collections which deal with keys (so `Map`, `KeySet` and `IndexedMap`) allow you to iterate +over keys in a safe and typed way. They all share the same API, the only difference being +that `KeySet` returns a different type of `Iterator` because `KeySet` only deals with keys. + + + +Every collection shares the same `Iterator` semantics. + + + +Let's have a look at the `Map.Iterate` method: + +```go +func (m Map[K, V]) + +Iterate(ctx context.Context, ranger Ranger[K]) (Iterator[K, V], error) +``` + +It accepts a `collections.Ranger[K]`, which is an API that instructs map on how to iterate over keys. +As always we don't need to implement anything here as `collections` already provides some generic `Ranger` implementers +that expose all you need to work with ranges. + +### Example + +We have a `collections.Map` that maps accounts using `uint64` IDs. + +```go expandable +package collections + +import ( + + "cosmossdk.io/collections" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts collections.Map[uint64, authtypes.BaseAccount] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", collections.Uint64Key, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.53/documentation/state-storage/cdc)), +} +} + +func (k Keeper) + +GetAllAccounts(ctx sdk.Context) ([]authtypes.BaseAccount, error) { + / passing a nil Ranger equals to: iterate over every possible key + iter, err := k.Accounts.Iterate(ctx, nil) + if err != nil { + return nil, err +} + +accounts, err := iter.Values() + if err != nil { + return nil, err +} + +return accounts, err +} + +func (k Keeper) + 
+IterateAccountsBetween(ctx sdk.Context, start, end uint64) ([]authtypes.BaseAccount, error) {
+    / The collections.Range API offers a lot of capabilities
+    / like defining where the iteration starts or ends.
+    rng := new(collections.Range[uint64]).
+        StartInclusive(start).
+        EndExclusive(end).
+        Descending()
+
+iter, err := k.Accounts.Iterate(ctx, rng)
+    if err != nil {
+        return nil, err
+}
+
+accounts, err := iter.Values()
+    if err != nil {
+        return nil, err
+}
+
+return accounts, nil
+}
+
+func (k Keeper)
+
+IterateAccounts(ctx sdk.Context, do func(id uint64, acc authtypes.BaseAccount) (stop bool))
+
+error {
+    iter, err := k.Accounts.Iterate(ctx, nil)
+    if err != nil {
+        return err
+}
+
+defer iter.Close()
+    for ; iter.Valid(); iter.Next() {
+        kv, err := iter.KeyValue()
+        if err != nil {
+            return err
+}
+        if do(kv.Key, kv.Value) {
+            break
+}
+
+}
+
+return nil
+}
+```
+
+Let's analyse each method in the example and how it makes use of the `Iterate` and the returned `Iterator` API.
+
+#### GetAllAccounts
+
+In `GetAllAccounts` we pass a nil `Ranger` to our `Iterate`. This means that the returned `Iterator` will include
+all the existing keys within the collection.
+
+Then we use the `Values` method from the returned `Iterator` API to collect all the values into a slice.
+
+`Iterator` offers other methods, such as `Keys()` to collect only the keys and not the values, and `KeyValues` to collect
+all the keys and values.
+
+#### IterateAccountsBetween
+
+Here we make use of the `collections.Range` helper to specialise our range.
+We make it start at one point through `StartInclusive` and end at another with `EndExclusive`, then
+we instruct it to report results in reverse order through `Descending`.
+
+Then we pass the range instruction to `Iterate` and get an `Iterator`, which will contain only the results
+we specified in the range.
+
+Then we again use the `Values` method of the `Iterator` to collect all the results.
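+
+The `StartInclusive`/`EndExclusive`/`Descending` semantics can be mimicked over a plain sorted slice. This is a toy model (the `rangeKeys` helper is made up for illustration), not how collections actually iterates over the KVStore:
+
+```go
+package main
+
+import (
+    "fmt"
+    "sort"
+)
+
+/ rangeKeys keeps the keys in [start, end) and optionally reverses the
+/ order, mirroring StartInclusive, EndExclusive and Descending.
+func rangeKeys(keys []uint64, start, end uint64, descending bool) []uint64 {
+    sorted := append([]uint64(nil), keys...)
+    sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
+
+    var out []uint64
+    for _, k := range sorted {
+        if k >= start && k < end { / StartInclusive, EndExclusive
+            out = append(out, k)
+        }
+    }
+    if descending {
+        for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
+            out[i], out[j] = out[j], out[i]
+        }
+    }
+    return out
+}
+
+func main() {
+    fmt.Println(rangeKeys([]uint64{5, 1, 3, 4, 2}, 2, 5, true)) / [4 3 2]
+}
+```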
+
+`collections.Range` also offers a `Prefix` API, which is not applicable to all key types;
+for example, `uint64` cannot be prefixed because it is of constant size, but a `string` key
+can be prefixed.
+
+#### IterateAccounts
+
+Here we showcase how to lazily collect values from an Iterator.
+
+
+
+`Keys/Values/KeyValues` fully consume and close the `Iterator`; here we need to explicitly do a `defer iterator.Close()` call.
+
+
+
+`Iterator` also exposes `Value` and `Key` methods to collect only the current value or key, if collecting both is not needed.
+
+
+
+For this `callback` pattern, collections expose a `Walk` API.
+
+
+
+## Composite keys
+
+So far we've worked only with simple keys, like `uint64`, the account address, etc.
+There are some more complex cases in which we need to deal with composite keys.
+
+A key is composite when it is composed of multiple keys. For example, bank balances are stored as the composite key
+`(AccAddress, string)`, where the first part is the address holding the coins and the second part is the denom.
+
+For example, let's say address `BOB` holds `10atom,15osmo`; this is how it is stored in state:
+
+```javascript
+(bob, atom) => 10
+(bob, osmo) => 15
+```
+
+This allows us to efficiently get a specific denom balance of an address by simply `getting` `(address, denom)`, or to get all the balances
+of an address by prefixing over `(address)`.
+
+Let's now see how we can work with composite keys using collections.
+
+### Example
+
+In our example we will showcase how we can use collections when we are dealing with balances; similar to bank,
+a balance is a mapping between `(address, denom) => math.Int`. The composite key in our case is `(address, denom)`.
+
+### Instantiation of a composite key collection
+
+```go expandable
+package collections
+
+import (
+
+    "cosmossdk.io/collections"
+    "cosmossdk.io/math"
+    storetypes "cosmossdk.io/store/types"
+    sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var BalancesPrefix = collections.NewPrefix(1)
+
+type Keeper struct {
+    Schema   collections.Schema
+    Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey)
+
+Keeper {
+    sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+return Keeper{
+    Balances: collections.NewMap(
+        sb, BalancesPrefix, "balances",
+        collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey),
+        sdk.IntValue,
+    ),
+}
+}
+```
+
+#### The Map Key definition
+
+First of all, we can see that in order to define a composite key of two elements we use the `collections.Pair` type:
+
+```go
+collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
+```
+
+`collections.Pair` defines a key composed of two other keys; in our case the first part is `sdk.AccAddress` and the second
+part is `string`.
+
+#### The Key Codec instantiation
+
+The arguments to instantiate it are always the same; the only thing that changes is how we instantiate
+the `KeyCodec`. Since this key is composed of two keys, we use `collections.PairKeyCodec`, which generates
+a `KeyCodec` composed of two key codecs. The first one will encode the first part of the key, the second one will
+encode the second part of the key.
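+
+To make the layout concrete, here is a toy sketch of how a two-part key can be flattened to bytes so that all entries of one address share a common byte prefix and can be iterated together. The `pairKey` and `addrPrefix` helpers are hypothetical and only mirror the idea behind a pair key codec, not the actual wire format of `collections.PairKeyCodec`:
+
+```go
+package main
+
+import "fmt"
+
+/ pairKey flattens an (address, denom) composite key: the non-terminal
+/ part is length-prefixed so its boundary is unambiguous on disk.
+func pairKey(addr []byte, denom string) []byte {
+    key := make([]byte, 0, 1+len(addr)+len(denom))
+    key = append(key, byte(len(addr))) / length prefix for the first part
+    key = append(key, addr...)
+    key = append(key, denom...)
+    return key
+}
+
+/ addrPrefix returns the byte prefix shared by every denom of one address.
+func addrPrefix(addr []byte) []byte {
+    return pairKey(addr, "")
+}
+
+func main() {
+    bob := []byte("bob")
+    atomKey := pairKey(bob, "atom")
+    / every (bob, denom) key starts with addrPrefix(bob), so a prefix
+    / iteration over it visits exactly bob's balances
+    fmt.Println(string(atomKey[len(addrPrefix(bob)):])) / atom
+}
+```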
+ +### Working with composite key collections + +Let's expand on the example we used before: + +```go expandable +var BalancesPrefix = collections.NewPrefix(1) + +type Keeper struct { + Schema collections.Schema + Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Balances: collections.NewMap( + sb, BalancesPrefix, "balances", + collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), + sdk.IntValue, + ), +} +} + +func (k Keeper) + +SetBalance(ctx sdk.Context, address sdk.AccAddress, denom string, amount math.Int) + +error { + key := collections.Join(address, denom) + +return k.Balances.Set(ctx, key, amount) +} + +func (k Keeper) + +GetBalance(ctx sdk.Context, address sdk.AccAddress, denom string) (math.Int, error) { + return k.Balances.Get(ctx, collections.Join(address, denom)) +} + +func (k Keeper) + +GetAllAddressBalances(ctx sdk.Context, address sdk.AccAddress) (sdk.Coins, error) { + balances := sdk.NewCoins() + rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/v0.53/documentation/state-storage/address) + +iter, err := k.Balances.Iterate(ctx, rng) + if err != nil { + return nil, err +} + +kvs, err := iter.KeyValues() + if err != nil { + return nil, err +} + for _, kv := range kvs { + balances = balances.Add(sdk.NewCoin(kv.Key.K2(), kv.Value)) +} + +return balances, nil +} + +func (k Keeper) + +GetAllAddressBalancesBetween(ctx sdk.Context, address sdk.AccAddress, startDenom, endDenom string) (sdk.Coins, error) { + rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/v0.53/documentation/state-storage/address). + StartInclusive(startDenom). + EndInclusive(endDenom) + +iter, err := k.Balances.Iterate(ctx, rng) + if err != nil { + return nil, err +} + ... 
+}
+```
+
+#### SetBalance
+
+As we can see here, we're setting the balance of an address for a specific denom.
+We use the `collections.Join` function to generate the composite key.
+`collections.Join` returns a `collections.Pair` (which is the key of our `collections.Map`).
+
+`collections.Pair` contains the two keys we have joined. It also exposes two methods: `K1` to fetch the 1st part of the
+key and `K2` to fetch the second part.
+
+As always, we use the `collections.Map.Set` method to map the composite key to our value (`math.Int` in this case).
+
+#### GetBalance
+
+To get a value in a composite key collection, we simply use `collections.Join` to compose the key.
+
+#### GetAllAddressBalances
+
+We use `collections.PrefixedPairRange` to iterate over all the keys starting with the provided address.
+Concretely, the iteration will report all the balances belonging to the provided address.
+
+The first part is that we instantiate a `PrefixedPairRange`, which is a `Ranger` implementer designed to help
+with `Pair` key iterations.
+
+```go
+rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address)
+```
+
+As we can see here, we're passing the type parameters of the `collections.Pair` because golang type inference
+with respect to generics is not as permissive as in other languages, so we need to explicitly say what the types of the pair key are.
+
+#### GetAllAddressesBalancesBetween
+
+This showcases how we can further specialise our range to limit the results, by specifying
+the range between the second part of the key (in our case the denoms, which are strings).
+
+## IndexedMap
+
+`collections.IndexedMap` is a collection that uses a `collections.Map` under the hood, together with a struct that contains the indexes we need to define.
+
+### Example
+
+Let's say we have an `auth.BaseAccount` struct which looks like the following:
+
+```go
+type BaseAccount struct {
+	AccountNumber uint64 `protobuf:"varint,3,opt,name=account_number,json=accountNumber,proto3" json:"account_number,omitempty"`
+	Sequence      uint64 `protobuf:"varint,4,opt,name=sequence,proto3" json:"sequence,omitempty"`
+}
+```
+
+First of all, when we save our accounts in state we map them using a primary key `sdk.AccAddress`.
+If it were a `collections.Map`, it would be `collections.Map[sdk.AccAddress, authtypes.BaseAccount]`.
+
+Then we also want to be able to get an account not only by its `sdk.AccAddress`, but also by its `AccountNumber`.
+
+So we can say we want to create an `Index` that maps our `BaseAccount` to its `AccountNumber`.
+
+We also know that this `Index` is unique. Unique means that there can only be one `BaseAccount` that maps to a specific
+`AccountNumber`.
+
+First of all, we start by defining the object that contains our index:
+
+```go expandable
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+	Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+	return AccountsIndexes{
+		Number: indexes.NewUnique(
+			sb, AccountsNumberIndexPrefix, "accounts_by_number",
+			collections.Uint64Key, sdk.AccAddressKey,
+			func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+				return v.AccountNumber, nil
+			},
+		),
+	}
+}
+```
+
+We create an `AccountsIndexes` struct which contains a field: `Number`. This field represents our `AccountNumber` index.
+`AccountNumber` is a field of `authtypes.BaseAccount` and it's a `uint64`.
+
+Then we can see in our `AccountsIndexes` struct that the `Number` field is defined as:
+
+```go
+*indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+```
+
+Where the first type parameter is `uint64`, which is the field type of our index.
+The second type parameter is the primary key `sdk.AccAddress`.
+And the third type parameter is the actual object we're storing, `authtypes.BaseAccount`.
+
+Then we create a `NewAccountIndexes` function that instantiates and returns the `AccountsIndexes` struct.
+
+The function takes a `SchemaBuilder`. Then we instantiate our `indexes.Unique`; let's analyse the arguments we pass to
+`indexes.NewUnique`.
+
+#### NOTE: indexes list
+
+The `AccountsIndexes` struct contains the indexes; the `NewIndexedMap` function will infer the indexes from that struct
+using reflection. This happens only at init and is not computationally expensive. In case you want to explicitly declare
+indexes, implement the `Indexes` interface in the `AccountsIndexes` struct:
+
+```go
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+	return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{
+		a.Number,
+	}
+}
+```
+
+#### Instantiating an `indexes.Unique`
+
+We already know the first three arguments: the `SchemaBuilder`, the `Prefix` which is our index prefix (the partition
+where the key relationships for the `Number` index will be maintained), and the human-readable name for the `Number` index.
+
+The fourth argument is `collections.Uint64Key`, a key codec to deal with `uint64` keys; we pass it because
+the key we're trying to index (the account number) is a `uint64`. The fifth argument is the primary key codec,
+which in our case is `sdk.AccAddressKey` (remember: we're mapping `sdk.AccAddress` => `BaseAccount`).
+
+The last argument is a function that, given a `BaseAccount`, returns its `AccountNumber`.
+
+After this we can proceed with instantiating our `IndexedMap`.
+
+```go expandable
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Accounts: collections.NewIndexedMap(
+			sb, AccountsPrefix, "accounts",
+			sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+			NewAccountIndexes(sb),
+		),
+	}
+}
+```
+
+As we can see, for now we do the same thing as we did for `collections.Map`:
+we pass it the `SchemaBuilder`, the `Prefix` where we plan to store the mapping between `sdk.AccAddress` and `authtypes.BaseAccount`,
+the human-readable name, and the respective `sdk.AccAddress` key codec and `authtypes.BaseAccount` value codec.
+
+Then we pass the instantiation of our `AccountsIndexes` through `NewAccountIndexes`.
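The bookkeeping that a unique index performs can be modeled with two plain maps. The following stdlib-only sketch is not the real `indexes.Unique` implementation — `toyAccount`, `toyIndexedMap`, and the other names are invented for illustration — but it shows the invariant the collection maintains for us: every write updates the primary map and the unique index together, and an index lookup resolves the indexed field back to the primary key, much like `MatchExact` does.

```go
package main

import (
	"errors"
	"fmt"
)

// toyAccount stands in for authtypes.BaseAccount (illustrative only).
type toyAccount struct {
	Address string
	Number  uint64
}

// toyIndexedMap keeps a primary map plus a unique secondary index,
// mirroring what collections.IndexedMap maintains automatically.
type toyIndexedMap struct {
	primary  map[string]toyAccount // address -> account
	byNumber map[uint64]string     // account number -> address (the unique index)
}

func newToyIndexedMap() *toyIndexedMap {
	return &toyIndexedMap{primary: map[string]toyAccount{}, byNumber: map[uint64]string{}}
}

// Set writes the value and updates the index in lockstep,
// rejecting writes that would break uniqueness.
func (m *toyIndexedMap) Set(addr string, acc toyAccount) error {
	if existing, ok := m.byNumber[acc.Number]; ok && existing != addr {
		return errors.New("unique index violation: account number already taken")
	}
	m.primary[addr] = acc
	m.byNumber[acc.Number] = addr
	return nil
}

// MatchExact resolves the indexed field back to the primary key.
func (m *toyIndexedMap) MatchExact(number uint64) (string, error) {
	addr, ok := m.byNumber[number]
	if !ok {
		return "", errors.New("not found")
	}
	return addr, nil
}

func main() {
	m := newToyIndexedMap()
	_ = m.Set("cosmos1alice", toyAccount{Address: "cosmos1alice", Number: 1})
	addr, _ := m.MatchExact(1)
	fmt.Println(addr) // prints "cosmos1alice"
}
```

The real `IndexedMap` does the same double write on `Set` and the same reverse lookup on `MatchExact`; the difference is that it persists both mappings to the KV store under their respective prefixes.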
+ +Full example: + +```go expandable +package docs + +import ( + + "cosmossdk.io/collections" + "cosmossdk.io/collections/indexes" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsNumberIndexPrefix = collections.NewPrefix(1) + +type AccountsIndexes struct { + Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount] +} + +func (a AccountsIndexes) + +IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] { + return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{ + a.Number +} +} + +func NewAccountIndexes(sb *collections.SchemaBuilder) + +AccountsIndexes { + return AccountsIndexes{ + Number: indexes.NewUnique( + sb, AccountsNumberIndexPrefix, "accounts_by_number", + collections.Uint64Key, sdk.AccAddressKey, + func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) { + return v.AccountNumber, nil +}, + ), +} +} + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewIndexedMap( + sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.53/documentation/state-storage/cdc), + NewAccountIndexes(sb), + ), +} +} +``` + +### Working with IndexedMaps + +Whilst instantiating `collections.IndexedMap` is tedious, working with them is extremely smooth. + +Let's take the full example, and expand it with some use-cases. 
+ +```go expandable +package docs + +import ( + + "cosmossdk.io/collections" + "cosmossdk.io/collections/indexes" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" +) + +var AccountsNumberIndexPrefix = collections.NewPrefix(1) + +type AccountsIndexes struct { + Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount] +} + +func (a AccountsIndexes) + +IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] { + return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{ + a.Number +} +} + +func NewAccountIndexes(sb *collections.SchemaBuilder) + +AccountsIndexes { + return AccountsIndexes{ + Number: indexes.NewUnique( + sb, AccountsNumberIndexPrefix, "accounts_by_number", + collections.Uint64Key, sdk.AccAddressKey, + func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) { + return v.AccountNumber, nil +}, + ), +} +} + +var AccountsPrefix = collections.NewPrefix(0) + +type Keeper struct { + Schema collections.Schema + Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes] +} + +func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) + +Keeper { + sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey)) + +return Keeper{ + Accounts: collections.NewIndexedMap( + sb, AccountsPrefix, "accounts", + sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](/docs/sdk/v0.53/documentation/state-storage/cdc), + NewAccountIndexes(sb), + ), +} +} + +func (k Keeper) + +CreateAccount(ctx sdk.Context, addr sdk.AccAddress) + +error { + nextAccountNumber := k.getNextAccountNumber() + newAcc := authtypes.BaseAccount{ + AccountNumber: nextAccountNumber, + Sequence: 0, +} + +return k.Accounts.Set(ctx, addr, newAcc) +} + +func (k Keeper) + +RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) + +error { + return k.Accounts.Remove(ctx, addr) +} + +func (k Keeper) + 
+GetAccountByNumber(ctx sdk.Context, accNumber uint64) (sdk.AccAddress, authtypes.BaseAccount, error) {
+	accAddress, err := k.Accounts.Indexes.Number.MatchExact(ctx, accNumber)
+	if err != nil {
+		return nil, authtypes.BaseAccount{}, err
+	}
+
+	acc, err := k.Accounts.Get(ctx, accAddress)
+	if err != nil {
+		return nil, authtypes.BaseAccount{}, err
+	}
+
+	return accAddress, acc, nil
+}
+
+func (k Keeper) GetAccountsByNumber(ctx sdk.Context, startAccNum, endAccNum uint64) ([]authtypes.BaseAccount, error) {
+	rng := new(collections.Range[uint64]).
+		StartInclusive(startAccNum).
+		EndInclusive(endAccNum)
+
+	iter, err := k.Accounts.Indexes.Number.Iterate(ctx, rng)
+	if err != nil {
+		return nil, err
+	}
+
+	return indexes.CollectValues(ctx, k.Accounts, iter)
+}
+
+func (k Keeper) getNextAccountNumber() uint64 {
+	return 0
+}
+```
+
+## Collections with interfaces as values
+
+Although the Cosmos SDK is shifting away from the use of the interface registry, there are still some places where it is used.
+In order to support old code, we have to support collections with interface values.
+
+The generic `codec.CollValue` is not able to handle interface values, so we need to use the special type `codec.CollInterfaceValue`.
+`codec.CollInterfaceValue` takes a `codec.BinaryCodec` as an argument, and uses it to marshal and unmarshal values as interfaces.
+It lives in the `codec` package, whose import path is `github.com/cosmos/cosmos-sdk/codec`.
+
+### Instantiating Collections with interface values
+
+In order to instantiate a collection with interface values, we need to use `codec.CollInterfaceValue` instead of `codec.CollValue`.
+
+```go expandable
+package example
+
+import (
+
+	"cosmossdk.io/collections"
+	storetypes "cosmossdk.io/store/types"
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+	Schema   collections.Schema
+	Accounts *collections.Map[sdk.AccAddress, sdk.AccountI]
+}
+
+func NewKeeper(cdc codec.BinaryCodec, storeKey *storetypes.KVStoreKey) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Accounts: collections.NewMap(
+			sb, AccountsPrefix, "accounts",
+			sdk.AccAddressKey, codec.CollInterfaceValue[sdk.AccountI](cdc),
+		),
+	}
+}
+
+func (k Keeper) SaveBaseAccount(ctx sdk.Context, account authtypes.BaseAccount) error {
+	return k.Accounts.Set(ctx, account.GetAddress(), account)
+}
+
+func (k Keeper) SaveModuleAccount(ctx sdk.Context, account authtypes.ModuleAccount) error {
+	return k.Accounts.Set(ctx, account.GetAddress(), account)
+}
+
+func (k Keeper) GetAccount(ctx sdk.Context, addr sdk.AccAddress) (sdk.AccountI, error) {
+	return k.Accounts.Get(ctx, addr)
+}
+```
+
+## Triple key
+
+The `collections.Triple` is a special key composed of three parts; it works just like `collections.Pair`, but with three keys instead of two.
+
+Let's see an example.
+
+```go expandable
+package example
+
+import (
+
+	"context"
+	"cosmossdk.io/collections"
+	storetypes "cosmossdk.io/store/types"
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+type AccAddress = string
+type ValAddress = string
+
+type Keeper struct {
+	/ let's simulate we have redelegations which are stored as a triple key composed of
+	/ the delegator, the source validator and the destination validator.
+	Redelegations collections.KeySet[collections.Triple[AccAddress, ValAddress, ValAddress]]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+	sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+	return Keeper{
+		Redelegations: collections.NewKeySet(sb, collections.NewPrefix(0), "redelegations", collections.TripleKeyCodec(collections.StringKey, collections.StringKey, collections.StringKey)),
+	}
+}
+
+/ RedelegationsByDelegator iterates over all the redelegations of a given delegator and calls onResult providing
+/ each redelegation from source validator towards the destination validator.
+func (k Keeper) RedelegationsByDelegator(ctx context.Context, delegator AccAddress, onResult func(src, dst ValAddress) (stop bool, err error)) error {
+	rng := collections.NewPrefixedTripleRange[AccAddress, ValAddress, ValAddress](delegator)
+	return k.Redelegations.Walk(ctx, rng, func(key collections.Triple[AccAddress, ValAddress, ValAddress]) (stop bool, err error) {
+		return onResult(key.K2(), key.K3())
+	})
+}
+
+/ RedelegationsByDelegatorAndValidator iterates over all the redelegations of a given delegator and its source validator and calls onResult for each
+/ destination validator.
+func (k Keeper) RedelegationsByDelegatorAndValidator(ctx context.Context, delegator AccAddress, validator ValAddress, onResult func(dst ValAddress) (stop bool, err error)) error {
+	rng := collections.NewSuperPrefixedTripleRange[AccAddress, ValAddress, ValAddress](delegator, validator)
+	return k.Redelegations.Walk(ctx, rng, func(key collections.Triple[AccAddress, ValAddress, ValAddress]) (stop bool, err error) {
+		return onResult(key.K3())
+	})
+}
+```
+
+## Advanced Usages
+
+### Alternative Value Codec
+
+The `codec.AltValueCodec` allows a collection to decode values using a different codec than the one used to encode them.
+In short, it enables decoding two different byte representations of the same concrete value.
+It can be used to lazily migrate values from one byte representation to another, as long as the new decoder is
+not able to decode the old representation.
+
+A concrete example can be found in `x/bank`, where the balance was initially stored as `Coin` and then migrated to `Int`.
+
+```go
+var BankBalanceValueCodec = codec.NewAltValueCodec(sdk.IntValue, func(b []byte) (sdk.Int, error) {
+	coin := sdk.Coin{}
+	err := coin.Unmarshal(b)
+	if err != nil {
+		return sdk.Int{}, err
+	}
+	return coin.Amount, nil
+})
+```
+
+The above example shows how to create an `AltValueCodec` that can decode both `sdk.Int` and `sdk.Coin` values. The provided
+decoder function is used as a fallback in case the default decoder fails. When the value is encoded back into state,
+the default encoder is used. This allows values to be lazily migrated to the new byte representation.
diff --git a/docs/sdk/v0.53/documentation/state-storage/interblock-cache.mdx b/docs/sdk/v0.53/documentation/state-storage/interblock-cache.mdx
new file mode 100644
index 00000000..fd22dd22
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/state-storage/interblock-cache.mdx
@@ -0,0 +1,294 @@
+---
+title: Inter-block Cache
+---
+
+## Synopsis
+
+The inter-block cache is an in-memory cache storing (in most cases) immutable state that modules need to read in between blocks. When enabled, all sub-stores of a multi store, e.g., `rootmulti`, are wrapped.
+
+## Overview and basic concepts
+
+### Motivation
+
+The goal of the inter-block cache is to allow SDK modules to have fast access to data that is typically queried during the execution of every block. This is data that does not change often, e.g. module parameters. The inter-block cache wraps each `CommitKVStore` of a multi store such as `rootmulti` with a fixed-size, write-through cache.
Caches are not cleared after a block is committed, unlike other caching layers such as `cachekv`.
+
+### Definitions
+
+- `Store key` uniquely identifies a store.
+- `KVCache` is a `CommitKVStore` wrapped with a cache.
+- `Cache manager` is a key component of the inter-block cache responsible for maintaining a map from `store keys` to `KVCaches`.
+
+## System model and properties
+
+### Assumptions
+
+This specification assumes that there exists a cache implementation accessible to the inter-block cache feature.
+
+> The implementation uses an adaptive replacement cache (ARC), an enhancement over the standard least-recently-used (LRU) cache in that it tracks both frequency and recency of use.
+
+The inter-block cache requires the cache implementation to provide methods to create a cache, add a key/value pair, remove a key/value pair, and retrieve the value associated with a key. In this specification, we assume that a `Cache` feature offers this functionality through the following methods:
+
+- `NewCache(size int)` creates a new cache with `size` capacity and returns it.
+- `Get(key string)` attempts to retrieve a key/value pair from `Cache`. It returns `(value []byte, success bool)`. If `Cache` contains the key, then `value` contains the associated value and `success=true`. Otherwise, `success=false` and `value` should be ignored.
+- `Add(key string, value []byte)` inserts a key/value pair into the `Cache`.
+- `Remove(key string)` removes the key/value pair identified by `key` from `Cache`.
+
+The specification also assumes that `CommitKVStore` offers the following API:
+
+- `Get(key string)` attempts to retrieve a key/value pair from `CommitKVStore`.
+- `Set(key string, value []byte)` inserts a key/value pair into the `CommitKVStore`.
+- `Delete(key string)` removes the key/value pair identified by `key` from `CommitKVStore`.
+
+> Ideally, both `Cache` and `CommitKVStore` should be specified in a different document and referenced here.
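To make the assumed `Cache` API concrete, here is a minimal stdlib-only sketch that satisfies the four methods listed above. It uses plain LRU eviction for simplicity — the actual implementation uses ARC, which additionally tracks access frequency — so treat it as an illustration of the interface, not of the eviction policy.

```go
package main

import (
	"container/list"
	"fmt"
)

// cache is a minimal stand-in for the assumed Cache: fixed capacity,
// simple LRU eviction (front of the list = most recently used).
type cache struct {
	size  int
	items map[string]*list.Element
	order *list.List
}

type entry struct {
	key   string
	value []byte
}

func NewCache(size int) *cache {
	return &cache{size: size, items: map[string]*list.Element{}, order: list.New()}
}

func (c *cache) Get(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el) // a hit refreshes recency
	return el.Value.(*entry).value, true
}

func (c *cache) Add(key string, value []byte) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.size {
		// evict the least recently used entry
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
}

func (c *cache) Remove(key string) {
	if el, ok := c.items[key]; ok {
		c.order.Remove(el)
		delete(c.items, key)
	}
}

func main() {
	c := NewCache(2)
	c.Add("a", []byte("1"))
	c.Add("b", []byte("2"))
	c.Get("a") // touch "a", so "b" becomes least recently used
	c.Add("c", []byte("3"))
	_, ok := c.Get("b")
	fmt.Println(ok) // prints "false": "b" was evicted
}
```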
+
+### Properties
+
+#### Thread safety
+
+Accessing the `cache manager` or a `KVCache` is not thread-safe: no method is guarded with a lock.
+Note that this is true even if the cache implementation is thread-safe.
+
+> For instance, assume that two `Set` operations are executed concurrently on the same key, each writing a different value. After both are executed, the cache and the underlying store may be inconsistent, each storing a different value under the same key.
+
+#### Crash recovery
+
+The inter-block cache transparently delegates `Commit()` to its aggregate `CommitKVStore`. If the
+aggregate `CommitKVStore` supports atomic writes and uses them to guarantee that the store is always in a consistent state on disk, the inter-block cache can be transparently moved to a consistent state when a failure occurs.
+
+> Note that this is the case for `IAVLStore`, the preferred `CommitKVStore`. On commit, it calls `SaveVersion()` on the underlying `MutableTree`. `SaveVersion` writes to disk are atomic via batching. This means that only consistent versions of the store (the tree) are written to disk. Thus, in case of a failure during a `SaveVersion` call, on recovery from disk, the version of the store will be consistent.
+
+#### Iteration
+
+Iteration over each wrapped store is supported via the embedded `CommitKVStore` interface.
+
+## Technical specification
+
+### General design
+
+The inter-block cache feature is composed of two components: `CommitKVStoreCacheManager` and `CommitKVStoreCache`.
+
+`CommitKVStoreCacheManager` implements the cache manager. It maintains a mapping from a store key to a `KVStore`:
+
+```go
+type CommitKVStoreCacheManager struct {
+	cacheSize uint
+	caches    map[string]CommitKVStore
+}
+```
+
+`CommitKVStoreCache` implements a `KVStore`: a write-through cache that wraps a `CommitKVStore`. This means that deletes and writes always happen to both the cache and the underlying `CommitKVStore`. Reads, on the other hand, first hit the internal cache.
During a cache miss, the read is delegated to the underlying `CommitKVStore` and its result is cached.
+
+```go
+type CommitKVStoreCache struct {
+	store CommitKVStore
+	cache Cache
+}
+```
+
+To enable the inter-block cache on `rootmulti`, one needs to instantiate a `CommitKVStoreCacheManager` and set it by calling `SetInterBlockCache()` before calling one of `LoadLatestVersion()`, `LoadLatestVersionAndUpgrade(...)`, `LoadVersionAndUpgrade(...)` or `LoadVersion(version)`.
+
+### API
+
+#### CommitKVStoreCacheManager
+
+The method `NewCommitKVStoreCacheManager` creates a new cache manager and returns it.
+
+| Name | Type    | Description                                                               |
+| ---- | ------- | ------------------------------------------------------------------------- |
+| size | integer | Determines the capacity of each of the KVCaches maintained by the manager |
+
+```go
+func NewCommitKVStoreCacheManager(size uint) CommitKVStoreCacheManager {
+	manager = CommitKVStoreCacheManager{
+		size, make(map[string]CommitKVStore)
+	}
+	return manager
+}
+```
+
+`GetStoreCache` returns a cache from the `CommitKVStoreCacheManager` for a given store key. If no cache exists for the store key, then one is created and set.
+
+| Name     | Type                        | Description                                                                          |
+| -------- | --------------------------- | ------------------------------------------------------------------------------------ |
+| manager  | `CommitKVStoreCacheManager` | The cache manager                                                                    |
+| storeKey | string                      | The store key of the store being retrieved                                           |
+| store    | `CommitKVStore`             | The store to be cached if the manager does not already have one in its map of caches |
+
+```go expandable
+func GetStoreCache(
+	manager CommitKVStoreCacheManager,
+	storeKey string,
+	store CommitKVStore) CommitKVStore {
+	if manager.caches.has(storeKey) {
+		return manager.caches.get(storeKey)
+	} else {
+		cache = NewCommitKVStoreCache(store, manager.cacheSize)
+		manager.set(storeKey, cache)
+		return cache
+	}
+}
+```
+
+`Unwrap` returns the underlying `CommitKVStore` for a given store key.
+
+| Name     | Type                        | Description                                |
+| -------- | --------------------------- | ------------------------------------------ |
+| manager  | `CommitKVStoreCacheManager` | The cache manager                          |
+| storeKey | string                      | The store key of the store being unwrapped |
+
+```go expandable
+func Unwrap(
+	manager CommitKVStoreCacheManager,
+	storeKey string) CommitKVStore {
+	if manager.caches.has(storeKey) {
+		cache = manager.caches.get(storeKey)
+		return cache.store
+	} else {
+		return nil
+	}
+}
+```
+
+`Reset` resets the manager's map of caches.
+
+| Name    | Type                        | Description       |
+| ------- | --------------------------- | ----------------- |
+| manager | `CommitKVStoreCacheManager` | The cache manager |
+
+```go
+func Reset(manager CommitKVStoreCacheManager) {
+	for storeKey := range manager.caches {
+		delete(manager.caches, storeKey)
+	}
+}
+```
+
+#### CommitKVStoreCache
+
+`NewCommitKVStoreCache` creates a new `CommitKVStoreCache` and returns it.
+
+| Name  | Type          | Description                                        |
+| ----- | ------------- | -------------------------------------------------- |
+| store | CommitKVStore | The store to be cached                             |
+| size  | integer       | Determines the capacity of the cache being created |
+
+```go
+func NewCommitKVStoreCache(
+	store CommitKVStore,
+	size uint) CommitKVStoreCache {
+	KVCache = CommitKVStoreCache{
+		store, NewCache(size)
+	}
+	return KVCache
+}
+```
+
+`Get` retrieves a value by key. It first looks in the cache. If the key is not in the cache, the query is delegated to the underlying `CommitKVStore`. In the latter case, the key/value pair is cached. The method returns the value.
+ +| Name | Type | Description | +| ------- | -------------------- | ------------------------------------------------------------------- | +| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is retrieved | +| key | string | Key of the key/value pair being retrieved | + +```go expandable +func Get( + KVCache CommitKVStoreCache, + key string) []byte { + valueCache, success := KVCache.cache.Get(key) + if success { + / cache hit + return valueCache +} + +else { + / cache miss + valueStore = KVCache.store.Get(key) + +KVCache.cache.Add(key, valueStore) + +return valueStore +} +} +``` + +`Set` inserts a key/value pair into both the write-through cache and the underlying `CommitKVStore`. + +| Name | Type | Description | +| ------- | -------------------- | ---------------------------------------------------------------- | +| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` to which the key/value pair is inserted | +| key | string | Key of the key/value pair being inserted | +| value | \[]byte | Value of the key/value pair being inserted | + +```go +func Set( + KVCache CommitKVStoreCache, + key string, + value []byte) { + KVCache.cache.Add(key, value) + +KVCache.store.Set(key, value) +} +``` + +`Delete` removes a key/value pair from both the write-through cache and the underlying `CommitKVStore`. + +| Name | Type | Description | +| ------- | -------------------- | ----------------------------------------------------------------- | +| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is deleted | +| key | string | Key of the key/value pair being deleted | + +```go +func Delete( + KVCache CommitKVStoreCache, + key string) { + KVCache.cache.Remove(key) + +KVCache.store.Delete(key) +} +``` + +`CacheWrap` wraps a `CommitKVStoreCache` with another caching layer (`CacheKV`). + +> It is unclear whether there is a use case for `CacheWrap`. 
+
+| Name    | Type                 | Description                            |
+| ------- | -------------------- | -------------------------------------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` being wrapped |
+
+```go
+func CacheWrap(
+	KVCache CommitKVStoreCache) {
+	return CacheKV.NewStore(KVCache)
+}
+```
+
+### Implementation details
+
+The inter-block cache implementation uses a fixed-size adaptive replacement cache (ARC) as its cache. [The ARC implementation](https://github.com/hashicorp/golang-lru/blob/main/arc/arc.go) is thread-safe. ARC is an enhancement over the standard LRU cache in that it tracks both frequency and recency of use. This avoids a burst in access to new entries from evicting the frequently used older entries. It adds some additional tracking overhead to a standard LRU cache; computationally it is roughly `2x` the cost, and the extra memory overhead is linear with the size of the cache. The default cache size is `1000`.
+
+## History
+
+Dec 20, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/docs/sdk/v0.53/documentation/state-storage/store.mdx b/docs/sdk/v0.53/documentation/state-storage/store.mdx
new file mode 100644
index 00000000..c7070ea5
--- /dev/null
+++ b/docs/sdk/v0.53/documentation/state-storage/store.mdx
@@ -0,0 +1,11855 @@
+---
+title: Store
+---
+
+## Synopsis
+
+A store is a data structure that holds the state of the application.
+
+
+**Pre-requisite Readings**
+
+- [Anatomy of a Cosmos SDK application](/docs/sdk/v0.53/documentation/application-framework/app-anatomy)
+
+
+
+## Introduction to Cosmos SDK Stores
+
+The Cosmos SDK comes with a large set of stores to persist the state of applications. By default, the main store of Cosmos SDK applications is a `multistore`, i.e. a store of stores. Developers can add any number of key-value stores to the multistore, depending on their application needs.
The multistore exists to support the modularity of the Cosmos SDK, as it lets each module declare and manage its own subset of the state. Key-value stores in the multistore can only be accessed with a specific capability `key`, which is typically held in the [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper) of the module that declared the store.
+
+```text expandable
++-----------------------------------------------------+
+|                                                     |
+|  +---------------------------------------------+    |
+|  |                                             |    |
+|  |  KVStore 1 - Managed by keeper of Module 1  |    |
+|  |                                             |    |
+|  +---------------------------------------------+    |
+|                                                     |
+|  +---------------------------------------------+    |
+|  |                                             |    |
+|  |  KVStore 2 - Managed by keeper of Module 2  |    |
+|  |                                             |    |
+|  +---------------------------------------------+    |
+|                                                     |
+|  +---------------------------------------------+    |
+|  |                                             |    |
+|  |  KVStore 3 - Managed by keeper of Module 2  |    |
+|  |                                             |    |
+|  +---------------------------------------------+    |
+|                                                     |
+|  +---------------------------------------------+    |
+|  |                                             |    |
+|  |  KVStore 4 - Managed by keeper of Module 3  |    |
+|  |                                             |    |
+|  +---------------------------------------------+    |
+|                                                     |
+|  +---------------------------------------------+    |
+|  |                                             |    |
+|  |  KVStore 5 - Managed by keeper of Module 4  |    |
+|  |                                             |    |
+|  +---------------------------------------------+    |
+|                                                     |
+|                  Main Multistore                    |
+|                                                     |
++-----------------------------------------------------+
+
+                 Application's State
+```
+
+### Store Interface
+
+At its very core, a Cosmos SDK `store` is an object that holds a `CacheWrapper` and has a `GetStoreType()` method:
+
+```go expandable
+package types
+
+import (
+
+	"fmt"
+	"io"
+	"maps"
+	"slices"
+	"github.com/cometbft/cometbft/proto/tendermint/crypto"
+	dbm "github.com/cosmos/cosmos-db"
+	"cosmossdk.io/store/metrics"
+	pruningtypes "cosmossdk.io/store/pruning/types"
+	snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+	GetStoreType()
+
+StoreType
+	CacheWrapper
+}
+
+/ something that can
persist to disk
type Committer interface {
    Commit() CommitID
    LastCommitID() CommitID

    // WorkingHash returns the hash of the KVStore's state before commit.
    WorkingHash() []byte

    SetPruning(pruningtypes.PruningOptions)
    GetPruning() pruningtypes.PruningOptions
}

// Stores of MultiStore must implement CommitStore.
type CommitStore interface {
    Committer
    Store
}

// Queryable allows a Store to expose internal state to the abci.Query
// interface. Multistore can route requests to the proper Store.
//
// This is an optional, but useful extension to any CommitStore
type Queryable interface {
    Query(*RequestQuery) (*ResponseQuery, error)
}

type RequestQuery struct {
    Data   []byte
    Path   string
    Height int64
    Prove  bool
}

type ResponseQuery struct {
    Code      uint32
    Log       string
    Info      string
    Index     int64
    Key       []byte
    Value     []byte
    ProofOps  *crypto.ProofOps
    Height    int64
    Codespace string
}

//----------------------------------------
// MultiStore

// StoreUpgrades defines a series of transformations to apply the multistore db upon load
type StoreUpgrades struct {
    Added   []string      `json:"added"`
    Renamed []StoreRename `json:"renamed"`
    Deleted []string      `json:"deleted"`
}

// StoreRename defines a name change of a sub-store.
// All data previously under a PrefixStore with OldKey will be copied
// to a PrefixStore with NewKey, then deleted from OldKey store.
type StoreRename struct {
    OldKey string `json:"old_key"`
    NewKey string `json:"new_key"`
}

// IsAdded returns true if the given key should be added
func (s *StoreUpgrades) IsAdded(key string) bool {
    if s == nil {
        return false
    }
    return slices.Contains(s.Added, key)
}

// IsDeleted returns true if the given key should be deleted
func (s *StoreUpgrades) IsDeleted(key string) bool {
    if s == nil {
        return false
    }
    return slices.Contains(s.Deleted, key)
}

// RenamedFrom returns the oldKey if it was renamed
// Returns "" if it was not renamed
func (s *StoreUpgrades) RenamedFrom(key string) string {
    if s == nil {
        return ""
    }
    for _, re := range s.Renamed {
        if re.NewKey == key {
            return re.OldKey
        }
    }
    return ""
}

type MultiStore interface {
    Store

    // Branches MultiStore into a cached storage object.
    // NOTE: Caller should probably not call .Write() on each, but
    // call CacheMultiStore.Write().
    CacheMultiStore() CacheMultiStore

    // CacheMultiStoreWithVersion branches the underlying MultiStore where
    // each stored is loaded at a specific version (height).
    CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)

    // Convenience for fetching substores.
    // If the store does not exist, panics.
    GetStore(StoreKey) Store
    GetKVStore(StoreKey) KVStore

    // TracingEnabled returns if tracing is enabled for the MultiStore.
    TracingEnabled() bool

    // SetTracer sets the tracer for the MultiStore that the underlying
    // stores will utilize to trace operations. The modified MultiStore is
    // returned.
    SetTracer(w io.Writer) MultiStore

    // SetTracingContext sets the tracing context for a MultiStore. It is
    // implied that the caller should update the context when necessary between
    // tracing operations. The modified MultiStore is returned.
    SetTracingContext(TraceContext) MultiStore

    // LatestVersion returns the latest version in the store
    LatestVersion() int64
}

// From MultiStore.CacheMultiStore()....
type CacheMultiStore interface {
    MultiStore
    Write() // Writes operations to underlying KVStore
}

// CommitMultiStore is an interface for a MultiStore without cache capabilities.
type CommitMultiStore interface {
    Committer
    MultiStore
    snapshottypes.Snapshotter

    // Mount a store of type using the given db.
    // If db == nil, the new store will use the CommitMultiStore db.
    MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)

    // Panics on a nil key.
    GetCommitStore(key StoreKey) CommitStore

    // Panics on a nil key.
    GetCommitKVStore(key StoreKey) CommitKVStore

    // Load the latest persisted version. Called once after all calls to
    // Mount*Store() are complete.
    LoadLatestVersion() error

    // LoadLatestVersionAndUpgrade will load the latest version, but also
    // rename/delete/create sub-store keys, before registering all the keys
    // in order to handle breaking formats in migrations
    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error

    // LoadVersionAndUpgrade will load the named version, but also
    // rename/delete/create sub-store keys, before registering all the keys
    // in order to handle breaking formats in migrations
    LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error

    // Load a specific persisted version. When you load an old version, or when
    // the last commit attempt didn't complete, the next commit after loading
    // must be idempotent (return the same commit id). Otherwise the behavior is
    // undefined.
    LoadVersion(ver int64) error

    // Set an inter-block (persistent) cache that maintains a mapping from
    // StoreKeys to CommitKVStores.
    SetInterBlockCache(MultiStorePersistentCache)

    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
    // starting a new chain at an arbitrary height.
    SetInitialVersion(version int64) error

    // SetIAVLCacheSize sets the cache size of the IAVL tree.
    SetIAVLCacheSize(size int)

    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
    SetIAVLDisableFastNode(disable bool)

    // SetIAVLSyncPruning set sync/async pruning on iavl.
    // It is not recommended to use this option.
    // It is here to enable the prune command to force this to true, allowing the command to wait
    // for the pruning to finish before returning.
    SetIAVLSyncPruning(sync bool)

    // RollbackToVersion rollback the db to specific version(height).
    RollbackToVersion(version int64) error

    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
    ListeningEnabled(key StoreKey) bool

    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
    AddListeners(keys []StoreKey)

    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
    PopStateCache() []*StoreKVPair

    // SetMetrics sets the metrics for the KVStore
    SetMetrics(metrics metrics.StoreMetrics)
}

//---------subsp-------------------------------
// KVStore

// BasicKVStore is a simple interface to get/set data
type BasicKVStore interface {
    // Get returns nil if key doesn't exist. Panics on nil key.
    Get(key []byte) []byte

    // Has checks if a key exists. Panics on nil key.
    Has(key []byte) bool

    // Set sets the key. Panics on nil key or value.
    Set(key, value []byte)

    // Delete deletes the key. Panics on nil key.
    Delete(key []byte)
}

// KVStore additionally provides iteration and deletion
type KVStore interface {
    Store
    BasicKVStore

    // Iterator over a domain of keys in ascending order. End is exclusive.
    // Start must be less than end, or the Iterator is invalid.
    // Iterator must be closed by caller.
    // To iterate over entire domain, use store.Iterator(nil, nil)
    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
    Iterator(start, end []byte) Iterator

    // Iterator over a domain of keys in descending order. End is exclusive.
    // Start must be less than end, or the Iterator is invalid.
    // Iterator must be closed by caller.
    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
    ReverseIterator(start, end []byte) Iterator
}

// Iterator is an alias db's Iterator for convenience.
type Iterator = dbm.Iterator

// CacheKVStore branches a KVStore and provides read cache functionality.
// After calling .Write() on the CacheKVStore, all previously created
// CacheKVStores on the object expire.
type CacheKVStore interface {
    KVStore

    // Writes operations to underlying KVStore
    Write()
}

// CommitKVStore is an interface for MultiStore.
type CommitKVStore interface {
    Committer
    KVStore
}

//----------------------------------------
// CacheWrap

// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
// HeapStore, SpaceStore, etc.
type CacheWrap interface {
    // Write syncs with the underlying store.
    Write()

    // CacheWrap recursively wraps again.
    CacheWrap() CacheWrap

    // CacheWrapWithTrace recursively wraps again with tracing enabled.
    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

type CacheWrapper interface {
    // CacheWrap branches a store.
    CacheWrap() CacheWrap

    // CacheWrapWithTrace branches a store with tracing enabled.
    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

func (cid CommitID) IsZero() bool {
    return cid.Version == 0 && len(cid.Hash) == 0
}

func (cid CommitID) String() string {
    return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
}

//----------------------------------------
// Store types

// kind of store
type StoreType int

const (
    StoreTypeMulti StoreType = iota
    StoreTypeDB
    StoreTypeIAVL
    StoreTypeTransient
    StoreTypeMemory
    StoreTypeSMT
    StoreTypePersistent
)

func (st StoreType) String() string {
    switch st {
    case StoreTypeMulti:
        return "StoreTypeMulti"
    case StoreTypeDB:
        return "StoreTypeDB"
    case StoreTypeIAVL:
        return "StoreTypeIAVL"
    case StoreTypeTransient:
        return "StoreTypeTransient"
    case StoreTypeMemory:
        return "StoreTypeMemory"
    case StoreTypeSMT:
        return "StoreTypeSMT"
    case StoreTypePersistent:
        return "StoreTypePersistent"
    }
    return "unknown store type"
}

//----------------------------------------
// Keys for accessing substores

// StoreKey is a key used to index stores in a MultiStore.
type StoreKey interface {
    Name() string
    String() string
}

// CapabilityKey represent the Cosmos SDK keys for object-capability
// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
type CapabilityKey StoreKey

// KVStoreKey is used for accessing substores.
// Only the pointer value should ever be used - it functions as a capabilities key.
type KVStoreKey struct {
    name string
}

// NewKVStoreKey returns a new pointer to a KVStoreKey.
// Use a pointer so keys don't collide.
func NewKVStoreKey(name string) *KVStoreKey {
    if name == "" {
        panic("empty key name not allowed")
    }
    return &KVStoreKey{
        name: name,
    }
}

// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
    assertNoCommonPrefix(names)
    keys := make(map[string]*KVStoreKey, len(names))
    for _, n := range names {
        keys[n] = NewKVStoreKey(n)
    }
    return keys
}

func (key *KVStoreKey) Name() string {
    return key.name
}

func (key *KVStoreKey) String() string {
    return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
}

// TransientStoreKey is used for indexing transient stores in a MultiStore
type TransientStoreKey struct {
    name string
}

// Constructs new TransientStoreKey
// Must return a pointer according to the ocap principle
func NewTransientStoreKey(name string) *TransientStoreKey {
    return &TransientStoreKey{
        name: name,
    }
}

// Implements StoreKey
func (key *TransientStoreKey) Name() string {
    return key.name
}

// Implements StoreKey
func (key *TransientStoreKey) String() string {
    return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
}

// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
type MemoryStoreKey struct {
    name string
}

func NewMemoryStoreKey(name string) *MemoryStoreKey {
    return &MemoryStoreKey{
        name: name,
    }
}

// Name returns the name of the MemoryStoreKey.
func (key *MemoryStoreKey) Name() string {
    return key.name
}

// String returns a stringified representation of the MemoryStoreKey.
func (key *MemoryStoreKey) String() string {
    return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
}

//----------------------------------------

// TraceContext contains TraceKVStore context data. It will be written with
// every trace operation.
type TraceContext map[string]interface{}

// Clone clones tc into another instance of TraceContext.
func (tc TraceContext) Clone() TraceContext {
    ret := TraceContext{}
    maps.Copy(ret, tc)
    return ret
}

// Merge merges value of newTc into tc.
func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
    if tc == nil {
        tc = TraceContext{}
    }
    maps.Copy(tc, newTc)
    return tc
}

// MultiStorePersistentCache defines an interface which provides inter-block
// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
type MultiStorePersistentCache interface {
    // Wrap and return the provided CommitKVStore with an inter-block (persistent)
    // cache.
    GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore

    // Return the underlying CommitKVStore for a StoreKey.
    Unwrap(key StoreKey) CommitKVStore

    // Reset the entire set of internal caches.
    Reset()
}

// StoreWithInitialVersion is a store that can have an arbitrary initial
// version.
type StoreWithInitialVersion interface {
    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
    // starting a new chain at an arbitrary height.
    SetInitialVersion(version int64)
}

// NewTransientStoreKeys constructs a new map of TransientStoreKey's
// Must return pointers according to the ocap principle
// The function will panic if there is a potential conflict in names
// see `assertNoCommonPrefix` function for more details.
func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
    assertNoCommonPrefix(names)
    keys := make(map[string]*TransientStoreKey)
    for _, n := range names {
        keys[n] = NewTransientStoreKey(n)
    }
    return keys
}

// NewMemoryStoreKeys constructs a new map matching store key names to their
// respective MemoryStoreKey references.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
    assertNoCommonPrefix(names)
    keys := make(map[string]*MemoryStoreKey)
    for _, n := range names {
        keys[n] = NewMemoryStoreKey(n)
    }
    return keys
}
```

`GetStoreType` is a simple method that returns the type of the store, whereas `CacheWrapper` is a simple interface that implements store read caching and write branching through the `Write` method:

```go expandable
package types

import (
    "fmt"
    "io"
    "maps"
    "slices"

    "github.com/cometbft/cometbft/proto/tendermint/crypto"
    dbm "github.com/cosmos/cosmos-db"

    "cosmossdk.io/store/metrics"
    pruningtypes "cosmossdk.io/store/pruning/types"
    snapshottypes "cosmossdk.io/store/snapshots/types"
)

type Store interface {
    GetStoreType() StoreType
    CacheWrapper
}

// something that can persist to disk
type Committer interface {
    Commit() CommitID
    LastCommitID() CommitID

    // WorkingHash returns the hash of the KVStore's state before commit.
    WorkingHash() []byte

    SetPruning(pruningtypes.PruningOptions)
    GetPruning() pruningtypes.PruningOptions
}

// Stores of MultiStore must implement CommitStore.
type CommitStore interface {
    Committer
    Store
}

// Queryable allows a Store to expose internal state to the abci.Query
// interface. Multistore can route requests to the proper Store.
//
// This is an optional, but useful extension to any CommitStore
type Queryable interface {
    Query(*RequestQuery) (*ResponseQuery, error)
}

type RequestQuery struct {
    Data   []byte
    Path   string
    Height int64
    Prove  bool
}

type ResponseQuery struct {
    Code      uint32
    Log       string
    Info      string
    Index     int64
    Key       []byte
    Value     []byte
    ProofOps  *crypto.ProofOps
    Height    int64
    Codespace string
}

//----------------------------------------
// MultiStore

// StoreUpgrades defines a series of transformations to apply the multistore db upon load
type StoreUpgrades struct {
    Added   []string      `json:"added"`
    Renamed []StoreRename `json:"renamed"`
    Deleted []string      `json:"deleted"`
}

// StoreRename defines a name change of a sub-store.
// All data previously under a PrefixStore with OldKey will be copied
// to a PrefixStore with NewKey, then deleted from OldKey store.
type StoreRename struct {
    OldKey string `json:"old_key"`
    NewKey string `json:"new_key"`
}

// IsAdded returns true if the given key should be added
func (s *StoreUpgrades) IsAdded(key string) bool {
    if s == nil {
        return false
    }
    return slices.Contains(s.Added, key)
}

// IsDeleted returns true if the given key should be deleted
func (s *StoreUpgrades) IsDeleted(key string) bool {
    if s == nil {
        return false
    }
    return slices.Contains(s.Deleted, key)
}

// RenamedFrom returns the oldKey if it was renamed
// Returns "" if it was not renamed
func (s *StoreUpgrades) RenamedFrom(key string) string {
    if s == nil {
        return ""
    }
    for _, re := range s.Renamed {
        if re.NewKey == key {
            return re.OldKey
        }
    }
    return ""
}

type MultiStore interface {
    Store

    // Branches MultiStore into a cached storage object.
    // NOTE: Caller should probably not call .Write() on each, but
    // call CacheMultiStore.Write().
    CacheMultiStore() CacheMultiStore

    // CacheMultiStoreWithVersion branches the underlying MultiStore where
    // each stored is loaded at a specific version (height).
    CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)

    // Convenience for fetching substores.
    // If the store does not exist, panics.
    GetStore(StoreKey) Store
    GetKVStore(StoreKey) KVStore

    // TracingEnabled returns if tracing is enabled for the MultiStore.
    TracingEnabled() bool

    // SetTracer sets the tracer for the MultiStore that the underlying
    // stores will utilize to trace operations. The modified MultiStore is
    // returned.
    SetTracer(w io.Writer) MultiStore

    // SetTracingContext sets the tracing context for a MultiStore. It is
    // implied that the caller should update the context when necessary between
    // tracing operations. The modified MultiStore is returned.
    SetTracingContext(TraceContext) MultiStore

    // LatestVersion returns the latest version in the store
    LatestVersion() int64
}

// From MultiStore.CacheMultiStore()....
type CacheMultiStore interface {
    MultiStore
    Write() // Writes operations to underlying KVStore
}

// CommitMultiStore is an interface for a MultiStore without cache capabilities.
type CommitMultiStore interface {
    Committer
    MultiStore
    snapshottypes.Snapshotter

    // Mount a store of type using the given db.
    // If db == nil, the new store will use the CommitMultiStore db.
    MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)

    // Panics on a nil key.
    GetCommitStore(key StoreKey) CommitStore

    // Panics on a nil key.
    GetCommitKVStore(key StoreKey) CommitKVStore

    // Load the latest persisted version. Called once after all calls to
    // Mount*Store() are complete.
    LoadLatestVersion() error

    // LoadLatestVersionAndUpgrade will load the latest version, but also
    // rename/delete/create sub-store keys, before registering all the keys
    // in order to handle breaking formats in migrations
    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error

    // LoadVersionAndUpgrade will load the named version, but also
    // rename/delete/create sub-store keys, before registering all the keys
    // in order to handle breaking formats in migrations
    LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error

    // Load a specific persisted version. When you load an old version, or when
    // the last commit attempt didn't complete, the next commit after loading
    // must be idempotent (return the same commit id). Otherwise the behavior is
    // undefined.
    LoadVersion(ver int64) error

    // Set an inter-block (persistent) cache that maintains a mapping from
    // StoreKeys to CommitKVStores.
    SetInterBlockCache(MultiStorePersistentCache)

    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
    // starting a new chain at an arbitrary height.
    SetInitialVersion(version int64) error

    // SetIAVLCacheSize sets the cache size of the IAVL tree.
    SetIAVLCacheSize(size int)

    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
    SetIAVLDisableFastNode(disable bool)

    // SetIAVLSyncPruning set sync/async pruning on iavl.
    // It is not recommended to use this option.
    // It is here to enable the prune command to force this to true, allowing the command to wait
    // for the pruning to finish before returning.
    SetIAVLSyncPruning(sync bool)

    // RollbackToVersion rollback the db to specific version(height).
    RollbackToVersion(version int64) error

    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
    ListeningEnabled(key StoreKey) bool

    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
    AddListeners(keys []StoreKey)

    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
    PopStateCache() []*StoreKVPair

    // SetMetrics sets the metrics for the KVStore
    SetMetrics(metrics metrics.StoreMetrics)
}

//---------subsp-------------------------------
// KVStore

// BasicKVStore is a simple interface to get/set data
type BasicKVStore interface {
    // Get returns nil if key doesn't exist. Panics on nil key.
    Get(key []byte) []byte

    // Has checks if a key exists. Panics on nil key.
    Has(key []byte) bool

    // Set sets the key. Panics on nil key or value.
    Set(key, value []byte)

    // Delete deletes the key. Panics on nil key.
    Delete(key []byte)
}

// KVStore additionally provides iteration and deletion
type KVStore interface {
    Store
    BasicKVStore

    // Iterator over a domain of keys in ascending order. End is exclusive.
    // Start must be less than end, or the Iterator is invalid.
    // Iterator must be closed by caller.
    // To iterate over entire domain, use store.Iterator(nil, nil)
    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
    Iterator(start, end []byte) Iterator

    // Iterator over a domain of keys in descending order. End is exclusive.
    // Start must be less than end, or the Iterator is invalid.
    // Iterator must be closed by caller.
    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
    ReverseIterator(start, end []byte) Iterator
}

// Iterator is an alias db's Iterator for convenience.
type Iterator = dbm.Iterator

// CacheKVStore branches a KVStore and provides read cache functionality.
// After calling .Write() on the CacheKVStore, all previously created
// CacheKVStores on the object expire.
type CacheKVStore interface {
    KVStore

    // Writes operations to underlying KVStore
    Write()
}

// CommitKVStore is an interface for MultiStore.
type CommitKVStore interface {
    Committer
    KVStore
}

//----------------------------------------
// CacheWrap

// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
// HeapStore, SpaceStore, etc.
type CacheWrap interface {
    // Write syncs with the underlying store.
    Write()

    // CacheWrap recursively wraps again.
    CacheWrap() CacheWrap

    // CacheWrapWithTrace recursively wraps again with tracing enabled.
    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

type CacheWrapper interface {
    // CacheWrap branches a store.
    CacheWrap() CacheWrap

    // CacheWrapWithTrace branches a store with tracing enabled.
    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
}

func (cid CommitID) IsZero() bool {
    return cid.Version == 0 && len(cid.Hash) == 0
}

func (cid CommitID) String() string {
    return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
}

//----------------------------------------
// Store types

// kind of store
type StoreType int

const (
    StoreTypeMulti StoreType = iota
    StoreTypeDB
    StoreTypeIAVL
    StoreTypeTransient
    StoreTypeMemory
    StoreTypeSMT
    StoreTypePersistent
)

func (st StoreType) String() string {
    switch st {
    case StoreTypeMulti:
        return "StoreTypeMulti"
    case StoreTypeDB:
        return "StoreTypeDB"
    case StoreTypeIAVL:
        return "StoreTypeIAVL"
    case StoreTypeTransient:
        return "StoreTypeTransient"
    case StoreTypeMemory:
        return "StoreTypeMemory"
    case StoreTypeSMT:
        return "StoreTypeSMT"
    case StoreTypePersistent:
        return "StoreTypePersistent"
    }
    return "unknown store type"
}

//----------------------------------------
// Keys for accessing substores

// StoreKey is a key used to index stores in a MultiStore.
type StoreKey interface {
    Name() string
    String() string
}

// CapabilityKey represent the Cosmos SDK keys for object-capability
// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
type CapabilityKey StoreKey

// KVStoreKey is used for accessing substores.
// Only the pointer value should ever be used - it functions as a capabilities key.
type KVStoreKey struct {
    name string
}

// NewKVStoreKey returns a new pointer to a KVStoreKey.
// Use a pointer so keys don't collide.
func NewKVStoreKey(name string) *KVStoreKey {
    if name == "" {
        panic("empty key name not allowed")
    }
    return &KVStoreKey{
        name: name,
    }
}

// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
    assertNoCommonPrefix(names)
    keys := make(map[string]*KVStoreKey, len(names))
    for _, n := range names {
        keys[n] = NewKVStoreKey(n)
    }
    return keys
}

func (key *KVStoreKey) Name() string {
    return key.name
}

func (key *KVStoreKey) String() string {
    return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
}

// TransientStoreKey is used for indexing transient stores in a MultiStore
type TransientStoreKey struct {
    name string
}

// Constructs new TransientStoreKey
// Must return a pointer according to the ocap principle
func NewTransientStoreKey(name string) *TransientStoreKey {
    return &TransientStoreKey{
        name: name,
    }
}

// Implements StoreKey
func (key *TransientStoreKey) Name() string {
    return key.name
}

// Implements StoreKey
func (key *TransientStoreKey) String() string {
    return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
}

// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
type MemoryStoreKey struct {
    name string
}

func NewMemoryStoreKey(name string) *MemoryStoreKey {
    return &MemoryStoreKey{
        name: name,
    }
}

// Name returns the name of the MemoryStoreKey.
func (key *MemoryStoreKey) Name() string {
    return key.name
}

// String returns a stringified representation of the MemoryStoreKey.
func (key *MemoryStoreKey) String() string {
    return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
}

//----------------------------------------

// TraceContext contains TraceKVStore context data. It will be written with
// every trace operation.
type TraceContext map[string]interface{}

// Clone clones tc into another instance of TraceContext.
func (tc TraceContext) Clone() TraceContext {
    ret := TraceContext{}
    maps.Copy(ret, tc)
    return ret
}

// Merge merges value of newTc into tc.
func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
    if tc == nil {
        tc = TraceContext{}
    }
    maps.Copy(tc, newTc)
    return tc
}

// MultiStorePersistentCache defines an interface which provides inter-block
// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
type MultiStorePersistentCache interface {
    // Wrap and return the provided CommitKVStore with an inter-block (persistent)
    // cache.
    GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore

    // Return the underlying CommitKVStore for a StoreKey.
    Unwrap(key StoreKey) CommitKVStore

    // Reset the entire set of internal caches.
    Reset()
}

// StoreWithInitialVersion is a store that can have an arbitrary initial
// version.
type StoreWithInitialVersion interface {
    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
    // starting a new chain at an arbitrary height.
    SetInitialVersion(version int64)
}

// NewTransientStoreKeys constructs a new map of TransientStoreKey's
// Must return pointers according to the ocap principle
// The function will panic if there is a potential conflict in names
// see `assertNoCommonPrefix` function for more details.
func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
    assertNoCommonPrefix(names)
    keys := make(map[string]*TransientStoreKey)
    for _, n := range names {
        keys[n] = NewTransientStoreKey(n)
    }
    return keys
}

// NewMemoryStoreKeys constructs a new map matching store key names to their
// respective MemoryStoreKey references.
// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
// function for more details).
func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
    assertNoCommonPrefix(names)
    keys := make(map[string]*MemoryStoreKey)
    for _, n := range names {
        keys[n] = NewMemoryStoreKey(n)
    }
    return keys
}
```

Branching and caching are used ubiquitously in the Cosmos SDK and are required to be implemented on every store type. A storage branch creates an isolated, ephemeral branch of a store that can be passed around and updated without affecting the main underlying store. This is used to trigger temporary state transitions that may be reverted later should an error occur. Read more about it in [context](/docs/sdk/v0.53/documentation/application-framework/context#Store-branching).

### Commit Store

A commit store is a store that has the ability to commit changes made to the underlying tree or db. The Cosmos SDK differentiates simple stores from commit stores by extending the basic store interfaces with a `Committer`:

```go expandable
package types

import (
    "fmt"
    "io"
    "maps"
    "slices"

    "github.com/cometbft/cometbft/proto/tendermint/crypto"
    dbm "github.com/cosmos/cosmos-db"

    "cosmossdk.io/store/metrics"
    pruningtypes "cosmossdk.io/store/pruning/types"
    snapshottypes "cosmossdk.io/store/snapshots/types"
)

type Store interface {
    GetStoreType() StoreType
    CacheWrapper
}

// something that can persist to disk
type Committer interface {
    Commit() CommitID
    LastCommitID() CommitID

    // WorkingHash returns the hash of the KVStore's state before commit.
    WorkingHash() []byte

    SetPruning(pruningtypes.PruningOptions)
    GetPruning() pruningtypes.PruningOptions
}

// Stores of MultiStore must implement CommitStore.
type CommitStore interface {
    Committer
    Store
}

// Queryable allows a Store to expose internal state to the abci.Query
// interface. Multistore can route requests to the proper Store.
//
// This is an optional, but useful extension to any CommitStore
type Queryable interface {
    Query(*RequestQuery) (*ResponseQuery, error)
}

type RequestQuery struct {
    Data   []byte
    Path   string
    Height int64
    Prove  bool
}

type ResponseQuery struct {
    Code      uint32
    Log       string
    Info      string
    Index     int64
    Key       []byte
    Value     []byte
    ProofOps  *crypto.ProofOps
    Height    int64
    Codespace string
}

//----------------------------------------
// MultiStore

// StoreUpgrades defines a series of transformations to apply the multistore db upon load
type StoreUpgrades struct {
    Added   []string      `json:"added"`
    Renamed []StoreRename `json:"renamed"`
    Deleted []string      `json:"deleted"`
}

// StoreRename defines a name change of a sub-store.
// All data previously under a PrefixStore with OldKey will be copied
// to a PrefixStore with NewKey, then deleted from OldKey store.
type StoreRename struct {
    OldKey string `json:"old_key"`
    NewKey string `json:"new_key"`
}

// IsAdded returns true if the given key should be added
func (s *StoreUpgrades) IsAdded(key string) bool {
    if s == nil {
        return false
    }
    return slices.Contains(s.Added, key)
}

// IsDeleted returns true if the given key should be deleted
func (s *StoreUpgrades) IsDeleted(key string) bool {
    if s == nil {
        return false
    }
    return slices.Contains(s.Deleted, key)
}

// RenamedFrom returns the oldKey if it was renamed
// Returns "" if it was not renamed
func (s *StoreUpgrades) RenamedFrom(key string) string {
    if s == nil {
        return ""
    }
    for _, re := range s.Renamed {
        if re.NewKey == key {
            return re.OldKey
        }
    }
    return ""
}

type MultiStore interface {
    Store

    // Branches MultiStore into a cached storage object.
    // NOTE: Caller should probably not call .Write() on each, but
    // call CacheMultiStore.Write().
    CacheMultiStore() CacheMultiStore

    // CacheMultiStoreWithVersion branches the underlying MultiStore where
    // each stored is loaded at a specific version (height).
    CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)

    // Convenience for fetching substores.
    // If the store does not exist, panics.
    GetStore(StoreKey) Store
    GetKVStore(StoreKey) KVStore

    // TracingEnabled returns if tracing is enabled for the MultiStore.
    TracingEnabled() bool

    // SetTracer sets the tracer for the MultiStore that the underlying
    // stores will utilize to trace operations. The modified MultiStore is
    // returned.
    SetTracer(w io.Writer) MultiStore

    // SetTracingContext sets the tracing context for a MultiStore. It is
    // implied that the caller should update the context when necessary between
    // tracing operations. The modified MultiStore is returned.
    SetTracingContext(TraceContext) MultiStore

    // LatestVersion returns the latest version in the store
    LatestVersion() int64
}

// From MultiStore.CacheMultiStore()....
type CacheMultiStore interface {
    MultiStore
    Write() // Writes operations to underlying KVStore
}

// CommitMultiStore is an interface for a MultiStore without cache capabilities.
type CommitMultiStore interface {
    Committer
    MultiStore
    snapshottypes.Snapshotter

    // Mount a store of type using the given db.
    // If db == nil, the new store will use the CommitMultiStore db.
    MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)

    // Panics on a nil key.
    GetCommitStore(key StoreKey) CommitStore

    // Panics on a nil key.
    GetCommitKVStore(key StoreKey) CommitKVStore

    // Load the latest persisted version. Called once after all calls to
    // Mount*Store() are complete.
    LoadLatestVersion() error

    // LoadLatestVersionAndUpgrade will load the latest version, but also
    // rename/delete/create sub-store keys, before registering all the keys
    // in order to handle breaking formats in migrations
    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error

    // LoadVersionAndUpgrade will load the named version, but also
    // rename/delete/create sub-store keys, before registering all the keys
    // in order to handle breaking formats in migrations
    LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error

    // Load a specific persisted version. When you load an old version, or when
    // the last commit attempt didn't complete, the next commit after loading
    // must be idempotent (return the same commit id). Otherwise the behavior is
    // undefined.
    LoadVersion(ver int64) error

    // Set an inter-block (persistent) cache that maintains a mapping from
    // StoreKeys to CommitKVStores.
    SetInterBlockCache(MultiStorePersistentCache)

    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
    // starting a new chain at an arbitrary height.
    SetInitialVersion(version int64) error

    // SetIAVLCacheSize sets the cache size of the IAVL tree.
    SetIAVLCacheSize(size int)

    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
    SetIAVLDisableFastNode(disable bool)

    // SetIAVLSyncPruning set sync/async pruning on iavl.
    // It is not recommended to use this option.
    // It is here to enable the prune command to force this to true, allowing the command to wait
    // for the pruning to finish before returning.
    SetIAVLSyncPruning(sync bool)

    // RollbackToVersion rollback the db to specific version(height).
    RollbackToVersion(version int64) error

    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
    ListeningEnabled(key StoreKey) bool

    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
    AddListeners(keys []StoreKey)

    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
    PopStateCache() []*StoreKVPair

    // SetMetrics sets the metrics for the KVStore
    SetMetrics(metrics metrics.StoreMetrics)
}

//---------subsp-------------------------------
// KVStore

// BasicKVStore is a simple interface to get/set data
type BasicKVStore interface {
    // Get returns nil if key doesn't exist. Panics on nil key.
    Get(key []byte) []byte

    // Has checks if a key exists. Panics on nil key.
    Has(key []byte) bool

    // Set sets the key. Panics on nil key or value.
    Set(key, value []byte)

    // Delete deletes the key. Panics on nil key.
    Delete(key []byte)
}

// KVStore additionally provides iteration and deletion
type KVStore interface {
    Store
    BasicKVStore

    // Iterator over a domain of keys in ascending order. End is exclusive.
    // Start must be less than end, or the Iterator is invalid.
    // Iterator must be closed by caller.
    // To iterate over entire domain, use store.Iterator(nil, nil)
    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
    Iterator(start, end []byte) Iterator

    // Iterator over a domain of keys in descending order. End is exclusive.
    // Start must be less than end, or the Iterator is invalid.
    // Iterator must be closed by caller.
    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
    ReverseIterator(start, end []byte) Iterator
}

// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator + +/ CacheKVStore branches a KVStore and provides read cache functionality. +/ After calling .Write() + +on the CacheKVStore, all previously created +/ CacheKVStores on the object expire. +type CacheKVStore interface { + KVStore + + / Writes operations to underlying KVStore + Write() +} + +/ CommitKVStore is an interface for MultiStore. +type CommitKVStore interface { + Committer + KVStore +} + +/---------------------------------------- +/ CacheWrap + +/ CacheWrap is the most appropriate interface for store ephemeral branching and cache. +/ For example, IAVLStore.CacheWrap() + +returns a CacheKVStore. CacheWrap should not return +/ a Committer, since Commit ephemeral store make no sense. It can return KVStore, +/ HeapStore, SpaceStore, etc. +type CacheWrap interface { + / Write syncs with the underlying store. + Write() + + / CacheWrap recursively wraps again. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace recursively wraps again with tracing enabled. + CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +type CacheWrapper interface { + / CacheWrap branches a store. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace branches a store with tracing enabled. 
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + +maps.Copy(ret, tc) + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + +maps.Copy(tc, newTc) + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
+func NewMemoryStoreKeys(names ...string)
+
+map[string]*MemoryStoreKey {
+ assertNoCommonPrefix(names)
+ keys := make(map[string]*MemoryStoreKey)
+ for _, n := range names {
+ keys[n] = NewMemoryStoreKey(n)
+}
+
+return keys
+}
+```
+
+The `Committer` is an interface that defines methods to persist changes to disk:
+
+```go
+// something that can persist to disk
+type Committer interface {
+	Commit() CommitID
+	LastCommitID() CommitID
+
+	// WorkingHash returns the hash of the KVStore's state before commit.
+	WorkingHash() []byte
+
+	SetPruning(pruningtypes.PruningOptions)
+	GetPruning() pruningtypes.PruningOptions
+}
+
+// Stores of MultiStore must implement CommitStore.
+type CommitStore interface {
+	Committer
+	Store
+}
+```
+
+The `CommitID` is a deterministic commit of the state tree. Its hash is returned to the underlying consensus engine and stored in the block header. Note that commit store interfaces exist for various purposes, one of which is to ensure that not every object can commit the store. As part of the [object-capabilities model](/docs/sdk/v0.53/documentation/core-concepts/ocap) of the Cosmos SDK, only `baseapp` should have the ability to commit stores. For example, this is the reason why the `ctx.KVStore()` method by which modules typically access stores returns a `KVStore` and not a `CommitKVStore`.
+
+The Cosmos SDK comes with many types of stores, the most used being [`CommitMultiStore`](#multistore), [`KVStore`](#kvstore) and [`GasKv` store](#gaskv-store). [Other types of stores](#other-stores) include `Transient` and `TraceKV` stores.
+
+## Multistore
+
+### Multistore Interface
+
+Each Cosmos SDK application holds a multistore at its root to persist its state. The multistore is a store of `KVStores` that follows the `Multistore` interface:
+
+```go expandable
+package types
+
+import (
+
+ "fmt"
+ "io"
+ "maps"
+ "slices"
+ "github.com/cometbft/cometbft/proto/tendermint/crypto"
+ dbm "github.com/cosmos/cosmos-db"
+ "cosmossdk.io/store/metrics"
+ pruningtypes "cosmossdk.io/store/pruning/types"
+ snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+ GetStoreType()
+
+StoreType
+ CacheWrapper
+}
+
+/ something that can persist to disk
+type Committer interface {
+ Commit()
+
+CommitID
+ LastCommitID()
+
+CommitID
+
+ / WorkingHash returns the hash of the KVStore's state before commit. 
+ WorkingHash() []byte + + SetPruning(pruningtypes.PruningOptions) + +GetPruning() + +pruningtypes.PruningOptions +} + +/ Stores of MultiStore must implement CommitStore. +type CommitStore interface { + Committer + Store +} + +/ Queryable allows a Store to expose internal state to the abci.Query +/ interface. Multistore can route requests to the proper Store. +/ +/ This is an optional, but useful extension to any CommitStore +type Queryable interface { + Query(*RequestQuery) (*ResponseQuery, error) +} + +type RequestQuery struct { + Data []byte + Path string + Height int64 + Prove bool +} + +type ResponseQuery struct { + Code uint32 + Log string + Info string + Index int64 + Key []byte + Value []byte + ProofOps *crypto.ProofOps + Height int64 + Codespace string +} + +/---------------------------------------- +/ MultiStore + +/ StoreUpgrades defines a series of transformations to apply the multistore db upon load +type StoreUpgrades struct { + Added []string `json:"added"` + Renamed []StoreRename `json:"renamed"` + Deleted []string `json:"deleted"` +} + +/ StoreRename defines a name change of a sub-store. +/ All data previously under a PrefixStore with OldKey will be copied +/ to a PrefixStore with NewKey, then deleted from OldKey store. 
+type StoreRename struct { + OldKey string `json:"old_key"` + NewKey string `json:"new_key"` +} + +/ IsAdded returns true if the given key should be added +func (s *StoreUpgrades) + +IsAdded(key string) + +bool { + if s == nil { + return false +} + +return slices.Contains(s.Added, key) +} + +/ IsDeleted returns true if the given key should be deleted +func (s *StoreUpgrades) + +IsDeleted(key string) + +bool { + if s == nil { + return false +} + +return slices.Contains(s.Deleted, key) +} + +/ RenamedFrom returns the oldKey if it was renamed +/ Returns "" if it was not renamed +func (s *StoreUpgrades) + +RenamedFrom(key string) + +string { + if s == nil { + return "" +} + for _, re := range s.Renamed { + if re.NewKey == key { + return re.OldKey +} + +} + +return "" +} + +type MultiStore interface { + Store + + / Branches MultiStore into a cached storage object. + / NOTE: Caller should probably not call .Write() + +on each, but + / call CacheMultiStore.Write(). + CacheMultiStore() + +CacheMultiStore + + / CacheMultiStoreWithVersion branches the underlying MultiStore where + / each stored is loaded at a specific version (height). + CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error) + + / Convenience for fetching substores. + / If the store does not exist, panics. + GetStore(StoreKey) + +Store + GetKVStore(StoreKey) + +KVStore + + / TracingEnabled returns if tracing is enabled for the MultiStore. + TracingEnabled() + +bool + + / SetTracer sets the tracer for the MultiStore that the underlying + / stores will utilize to trace operations. The modified MultiStore is + / returned. + SetTracer(w io.Writer) + +MultiStore + + / SetTracingContext sets the tracing context for a MultiStore. It is + / implied that the caller should update the context when necessary between + / tracing operations. The modified MultiStore is returned. 
+ SetTracingContext(TraceContext) + +MultiStore + + / LatestVersion returns the latest version in the store + LatestVersion() + +int64 +} + +/ From MultiStore.CacheMultiStore().... +type CacheMultiStore interface { + MultiStore + Write() / Writes operations to underlying KVStore +} + +/ CommitMultiStore is an interface for a MultiStore without cache capabilities. +type CommitMultiStore interface { + Committer + MultiStore + snapshottypes.Snapshotter + + / Mount a store of type using the given db. + / If db == nil, the new store will use the CommitMultiStore db. + MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB) + + / Panics on a nil key. + GetCommitStore(key StoreKey) + +CommitStore + + / Panics on a nil key. + GetCommitKVStore(key StoreKey) + +CommitKVStore + + / Load the latest persisted version. Called once after all calls to + / Mount*Store() + +are complete. + LoadLatestVersion() + +error + + / LoadLatestVersionAndUpgrade will load the latest version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) + +error + + / LoadVersionAndUpgrade will load the named version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) + +error + + / Load a specific persisted version. When you load an old version, or when + / the last commit attempt didn't complete, the next commit after loading + / must be idempotent (return the same commit id). Otherwise the behavior is + / undefined. + LoadVersion(ver int64) + +error + + / Set an inter-block (persistent) + +cache that maintains a mapping from + / StoreKeys to CommitKVStores. + SetInterBlockCache(MultiStorePersistentCache) + + / SetInitialVersion sets the initial version of the IAVL tree. 
+    // It is used when starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64) error
+
+    // SetIAVLCacheSize sets the cache size of the IAVL tree.
+    SetIAVLCacheSize(size int)
+
+    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
+    SetIAVLDisableFastNode(disable bool)
+
+    // SetIAVLSyncPruning set sync/async pruning on iavl.
+    // It is not recommended to use this option.
+    // It is here to enable the prune command to force this to true, allowing the command to wait
+    // for the pruning to finish before returning.
+    SetIAVLSyncPruning(sync bool)
+
+    // RollbackToVersion rollback the db to specific version(height).
+    RollbackToVersion(version int64) error
+
+    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
+    ListeningEnabled(key StoreKey) bool
+
+    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
+    AddListeners(keys []StoreKey)
+
+    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
+    PopStateCache() []*StoreKVPair
+
+    // SetMetrics sets the metrics for the KVStore
+    SetMetrics(metrics metrics.StoreMetrics)
+}
+
+//----------------------------------------
+// KVStore
+
+// BasicKVStore is a simple interface to get/set data
+type BasicKVStore interface {
+    // Get returns nil if key doesn't exist. Panics on nil key.
+    Get(key []byte) []byte
+
+    // Has checks if a key exists. Panics on nil key.
+    Has(key []byte) bool
+
+    // Set sets the key. Panics on nil key or value.
+    Set(key, value []byte)
+
+    // Delete deletes the key. Panics on nil key.
+    Delete(key []byte)
+}
+
+// KVStore additionally provides iteration and deletion
+type KVStore interface {
+    Store
+    BasicKVStore
+
+    // Iterator over a domain of keys in ascending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // To iterate over entire domain, use store.Iterator(nil, nil)
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    Iterator(start, end []byte) Iterator
+
+    // Iterator over a domain of keys in descending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    ReverseIterator(start, end []byte) Iterator
+}
+
+// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator
+
+// CacheKVStore branches a KVStore and provides read cache functionality.
+// After calling .Write() on the CacheKVStore, all previously created
+// CacheKVStores on the object expire.
+type CacheKVStore interface {
+    KVStore
+
+    // Writes operations to underlying KVStore
+    Write()
+}
+
+// CommitKVStore is an interface for MultiStore.
+type CommitKVStore interface {
+    Committer
+    KVStore
+}
+
+//----------------------------------------
+// CacheWrap
+
+// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
+// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
+// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
+// HeapStore, SpaceStore, etc.
+type CacheWrap interface {
+    // Write syncs with the underlying store.
+    Write()
+
+    // CacheWrap recursively wraps again.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace recursively wraps again with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+type CacheWrapper interface {
+    // CacheWrap branches a store.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace branches a store with tracing enabled.
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*KVStoreKey, len(names))
+    for _, n := range names {
+        keys[n] = NewKVStoreKey(n)
+    }
+    return keys
+}
+
+func (key *KVStoreKey) Name() string {
+    return key.name
+}
+
+func (key *KVStoreKey) String() string {
+    return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
+}
+
+// TransientStoreKey is used for indexing transient stores in a MultiStore
+type TransientStoreKey struct {
+    name string
+}
+
+// Constructs new TransientStoreKey
+// Must return a pointer according to the ocap principle
+func NewTransientStoreKey(name string) *TransientStoreKey {
+    return &TransientStoreKey{
+        name: name,
+    }
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) Name() string {
+    return key.name
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) String() string {
+    return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
+}
+
+// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
+type MemoryStoreKey struct {
+    name string
+}
+
+func NewMemoryStoreKey(name string) *MemoryStoreKey {
+    return &MemoryStoreKey{
+        name: name,
+    }
+}
+
+// Name returns the name of the MemoryStoreKey.
+func (key *MemoryStoreKey) Name() string {
+    return key.name
+}
+
+// String returns a stringified representation of the MemoryStoreKey.
+func (key *MemoryStoreKey) String() string {
+    return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
+}
+
+//----------------------------------------
+
+// TraceContext contains TraceKVStore context data. It will be written with
+// every trace operation.
+type TraceContext map[string]interface{}
+
+// Clone clones tc into another instance of TraceContext.
+func (tc TraceContext) Clone() TraceContext {
+    ret := TraceContext{}
+    maps.Copy(ret, tc)
+    return ret
+}
+
+// Merge merges value of newTc into tc.
+func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
+    if tc == nil {
+        tc = TraceContext{}
+    }
+    maps.Copy(tc, newTc)
+    return tc
+}
+
+// MultiStorePersistentCache defines an interface which provides inter-block
+// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
+type MultiStorePersistentCache interface {
+    // Wrap and return the provided CommitKVStore with an inter-block (persistent)
+    // cache.
+    GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore
+
+    // Return the underlying CommitKVStore for a StoreKey.
+    Unwrap(key StoreKey) CommitKVStore
+
+    // Reset the entire set of internal caches.
+    Reset()
+}
+
+// StoreWithInitialVersion is a store that can have an arbitrary initial
+// version.
+type StoreWithInitialVersion interface {
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64)
+}
+
+// NewTransientStoreKeys constructs a new map of TransientStoreKey's
+// Must return pointers according to the ocap principle
+// The function will panic if there is a potential conflict in names
+// see `assertNoCommonPrefix` function for more details.
+func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*TransientStoreKey)
+    for _, n := range names {
+        keys[n] = NewTransientStoreKey(n)
+    }
+    return keys
+}
+
+// NewMemoryStoreKeys constructs a new map matching store key names to their
+// respective MemoryStoreKey references.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*MemoryStoreKey)
+    for _, n := range names {
+        keys[n] = NewMemoryStoreKey(n)
+    }
+    return keys
+}
+```
+
+If tracing is enabled, branching the multistore will first wrap all the underlying `KVStore`s in [`TraceKv.Store`](#tracekv-store).
+
+### CommitMultiStore
+
+The main type of `MultiStore` used in the Cosmos SDK is `CommitMultiStore`, which is an extension of the `MultiStore` interface:
+
+```go expandable
+package types
+
+import (
+    "fmt"
+    "io"
+    "maps"
+    "slices"
+
+    "github.com/cometbft/cometbft/proto/tendermint/crypto"
+    dbm "github.com/cosmos/cosmos-db"
+
+    "cosmossdk.io/store/metrics"
+    pruningtypes "cosmossdk.io/store/pruning/types"
+    snapshottypes "cosmossdk.io/store/snapshots/types"
+)
+
+type Store interface {
+    GetStoreType() StoreType
+    CacheWrapper
+}
+
+// something that can persist to disk
+type Committer interface {
+    Commit() CommitID
+    LastCommitID() CommitID
+
+    // WorkingHash returns the hash of the KVStore's state before commit.
+    WorkingHash() []byte
+
+    SetPruning(pruningtypes.PruningOptions)
+    GetPruning() pruningtypes.PruningOptions
+}
+
+// Stores of MultiStore must implement CommitStore.
+type CommitStore interface {
+    Committer
+    Store
+}
+
+// Queryable allows a Store to expose internal state to the abci.Query
+// interface. Multistore can route requests to the proper Store.
+//
+// This is an optional, but useful extension to any CommitStore
+type Queryable interface {
+    Query(*RequestQuery) (*ResponseQuery, error)
+}
+
+type RequestQuery struct {
+    Data   []byte
+    Path   string
+    Height int64
+    Prove  bool
+}
+
+type ResponseQuery struct {
+    Code      uint32
+    Log       string
+    Info      string
+    Index     int64
+    Key       []byte
+    Value     []byte
+    ProofOps  *crypto.ProofOps
+    Height    int64
+    Codespace string
+}
+
+//----------------------------------------
+// MultiStore
+
+// StoreUpgrades defines a series of transformations to apply the multistore db upon load
+type StoreUpgrades struct {
+    Added   []string      `json:"added"`
+    Renamed []StoreRename `json:"renamed"`
+    Deleted []string      `json:"deleted"`
+}
+
+// StoreRename defines a name change of a sub-store.
+// All data previously under a PrefixStore with OldKey will be copied
+// to a PrefixStore with NewKey, then deleted from OldKey store.
+type StoreRename struct {
+    OldKey string `json:"old_key"`
+    NewKey string `json:"new_key"`
+}
+
+// IsAdded returns true if the given key should be added
+func (s *StoreUpgrades) IsAdded(key string) bool {
+    if s == nil {
+        return false
+    }
+    return slices.Contains(s.Added, key)
+}
+
+// IsDeleted returns true if the given key should be deleted
+func (s *StoreUpgrades) IsDeleted(key string) bool {
+    if s == nil {
+        return false
+    }
+    return slices.Contains(s.Deleted, key)
+}
+
+// RenamedFrom returns the oldKey if it was renamed
+// Returns "" if it was not renamed
+func (s *StoreUpgrades) RenamedFrom(key string) string {
+    if s == nil {
+        return ""
+    }
+    for _, re := range s.Renamed {
+        if re.NewKey == key {
+            return re.OldKey
+        }
+    }
+    return ""
+}
+
+type MultiStore interface {
+    Store
+
+    // Branches MultiStore into a cached storage object.
+    // NOTE: Caller should probably not call .Write() on each, but
+    // call CacheMultiStore.Write().
+    CacheMultiStore() CacheMultiStore
+
+    // CacheMultiStoreWithVersion branches the underlying MultiStore where
+    // each stored is loaded at a specific version (height).
+    CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error)
+
+    // Convenience for fetching substores.
+    // If the store does not exist, panics.
+    GetStore(StoreKey) Store
+    GetKVStore(StoreKey) KVStore
+
+    // TracingEnabled returns if tracing is enabled for the MultiStore.
+    TracingEnabled() bool
+
+    // SetTracer sets the tracer for the MultiStore that the underlying
+    // stores will utilize to trace operations. The modified MultiStore is
+    // returned.
+    SetTracer(w io.Writer) MultiStore
+
+    // SetTracingContext sets the tracing context for a MultiStore. It is
+    // implied that the caller should update the context when necessary between
+    // tracing operations. The modified MultiStore is returned.
+    SetTracingContext(TraceContext) MultiStore
+
+    // LatestVersion returns the latest version in the store
+    LatestVersion() int64
+}
+
+// From MultiStore.CacheMultiStore()....
+type CacheMultiStore interface {
+    MultiStore
+    Write() // Writes operations to underlying KVStore
+}
+
+// CommitMultiStore is an interface for a MultiStore without cache capabilities.
+type CommitMultiStore interface {
+    Committer
+    MultiStore
+    snapshottypes.Snapshotter
+
+    // Mount a store of type using the given db.
+    // If db == nil, the new store will use the CommitMultiStore db.
+    MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB)
+
+    // Panics on a nil key.
+    GetCommitStore(key StoreKey) CommitStore
+
+    // Panics on a nil key.
+    GetCommitKVStore(key StoreKey) CommitKVStore
+
+    // Load the latest persisted version. Called once after all calls to
+    // Mount*Store() are complete.
+    LoadLatestVersion() error
+
+    // LoadLatestVersionAndUpgrade will load the latest version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error
+
+    // LoadVersionAndUpgrade will load the named version, but also
+    // rename/delete/create sub-store keys, before registering all the keys
+    // in order to handle breaking formats in migrations
+    LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) error
+
+    // Load a specific persisted version. When you load an old version, or when
+    // the last commit attempt didn't complete, the next commit after loading
+    // must be idempotent (return the same commit id). Otherwise the behavior is
+    // undefined.
+    LoadVersion(ver int64) error
+
+    // Set an inter-block (persistent) cache that maintains a mapping from
+    // StoreKeys to CommitKVStores.
+    SetInterBlockCache(MultiStorePersistentCache)
+
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64) error
+
+    // SetIAVLCacheSize sets the cache size of the IAVL tree.
+    SetIAVLCacheSize(size int)
+
+    // SetIAVLDisableFastNode enables/disables fastnode feature on iavl.
+    SetIAVLDisableFastNode(disable bool)
+
+    // SetIAVLSyncPruning set sync/async pruning on iavl.
+    // It is not recommended to use this option.
+    // It is here to enable the prune command to force this to true, allowing the command to wait
+    // for the pruning to finish before returning.
+    SetIAVLSyncPruning(sync bool)
+
+    // RollbackToVersion rollback the db to specific version(height).
+    RollbackToVersion(version int64) error
+
+    // ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey
+    ListeningEnabled(key StoreKey) bool
+
+    // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
+    AddListeners(keys []StoreKey)
+
+    // PopStateCache returns the accumulated state change messages from the CommitMultiStore
+    PopStateCache() []*StoreKVPair
+
+    // SetMetrics sets the metrics for the KVStore
+    SetMetrics(metrics metrics.StoreMetrics)
+}
+
+//----------------------------------------
+// KVStore
+
+// BasicKVStore is a simple interface to get/set data
+type BasicKVStore interface {
+    // Get returns nil if key doesn't exist. Panics on nil key.
+    Get(key []byte) []byte
+
+    // Has checks if a key exists. Panics on nil key.
+    Has(key []byte) bool
+
+    // Set sets the key. Panics on nil key or value.
+    Set(key, value []byte)
+
+    // Delete deletes the key. Panics on nil key.
+    Delete(key []byte)
+}
+
+// KVStore additionally provides iteration and deletion
+type KVStore interface {
+    Store
+    BasicKVStore
+
+    // Iterator over a domain of keys in ascending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // To iterate over entire domain, use store.Iterator(nil, nil)
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    Iterator(start, end []byte) Iterator
+
+    // Iterator over a domain of keys in descending order. End is exclusive.
+    // Start must be less than end, or the Iterator is invalid.
+    // Iterator must be closed by caller.
+    // CONTRACT: No writes may happen within a domain while an iterator exists over it.
+    // Exceptionally allowed for cachekv.Store, safe to write in the modules.
+    ReverseIterator(start, end []byte) Iterator
+}
+
+// Iterator is an alias db's Iterator for convenience.
+type Iterator = dbm.Iterator
+
+// CacheKVStore branches a KVStore and provides read cache functionality.
+// After calling .Write() on the CacheKVStore, all previously created
+// CacheKVStores on the object expire.
+type CacheKVStore interface {
+    KVStore
+
+    // Writes operations to underlying KVStore
+    Write()
+}
+
+// CommitKVStore is an interface for MultiStore.
+type CommitKVStore interface {
+    Committer
+    KVStore
+}
+
+//----------------------------------------
+// CacheWrap
+
+// CacheWrap is the most appropriate interface for store ephemeral branching and cache.
+// For example, IAVLStore.CacheWrap() returns a CacheKVStore. CacheWrap should not return
+// a Committer, since Commit ephemeral store make no sense. It can return KVStore,
+// HeapStore, SpaceStore, etc.
+type CacheWrap interface {
+    // Write syncs with the underlying store.
+    Write()
+
+    // CacheWrap recursively wraps again.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace recursively wraps again with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+type CacheWrapper interface {
+    // CacheWrap branches a store.
+    CacheWrap() CacheWrap
+
+    // CacheWrapWithTrace branches a store with tracing enabled.
+    CacheWrapWithTrace(w io.Writer, tc TraceContext) CacheWrap
+}
+
+func (cid CommitID) IsZero() bool {
+    return cid.Version == 0 && len(cid.Hash) == 0
+}
+
+func (cid CommitID) String() string {
+    return fmt.Sprintf("CommitID{%v:%X}", cid.Hash, cid.Version)
+}
+
+//----------------------------------------
+// Store types
+
+// kind of store
+type StoreType int
+
+const (
+    StoreTypeMulti StoreType = iota
+    StoreTypeDB
+    StoreTypeIAVL
+    StoreTypeTransient
+    StoreTypeMemory
+    StoreTypeSMT
+    StoreTypePersistent
+)
+
+func (st StoreType) String() string {
+    switch st {
+    case StoreTypeMulti:
+        return "StoreTypeMulti"
+    case StoreTypeDB:
+        return "StoreTypeDB"
+    case StoreTypeIAVL:
+        return "StoreTypeIAVL"
+    case StoreTypeTransient:
+        return "StoreTypeTransient"
+    case StoreTypeMemory:
+        return "StoreTypeMemory"
+    case StoreTypeSMT:
+        return "StoreTypeSMT"
+    case StoreTypePersistent:
+        return "StoreTypePersistent"
+    }
+    return "unknown store type"
+}
+
+//----------------------------------------
+// Keys for accessing substores
+
+// StoreKey is a key used to index stores in a MultiStore.
+type StoreKey interface {
+    Name() string
+    String() string
+}
+
+// CapabilityKey represent the Cosmos SDK keys for object-capability
+// generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures
+type CapabilityKey StoreKey
+
+// KVStoreKey is used for accessing substores.
+// Only the pointer value should ever be used - it functions as a capabilities key.
+type KVStoreKey struct {
+    name string
+}
+
+// NewKVStoreKey returns a new pointer to a KVStoreKey.
+// Use a pointer so keys don't collide.
+func NewKVStoreKey(name string) *KVStoreKey {
+    if name == "" {
+        panic("empty key name not allowed")
+    }
+    return &KVStoreKey{
+        name: name,
+    }
+}
+
+// NewKVStoreKeys returns a map of new pointers to KVStoreKey's.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewKVStoreKeys(names ...string) map[string]*KVStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*KVStoreKey, len(names))
+    for _, n := range names {
+        keys[n] = NewKVStoreKey(n)
+    }
+    return keys
+}
+
+func (key *KVStoreKey) Name() string {
+    return key.name
+}
+
+func (key *KVStoreKey) String() string {
+    return fmt.Sprintf("KVStoreKey{%p, %s}", key, key.name)
+}
+
+// TransientStoreKey is used for indexing transient stores in a MultiStore
+type TransientStoreKey struct {
+    name string
+}
+
+// Constructs new TransientStoreKey
+// Must return a pointer according to the ocap principle
+func NewTransientStoreKey(name string) *TransientStoreKey {
+    return &TransientStoreKey{
+        name: name,
+    }
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) Name() string {
+    return key.name
+}
+
+// Implements StoreKey
+func (key *TransientStoreKey) String() string {
+    return fmt.Sprintf("TransientStoreKey{%p, %s}", key, key.name)
+}
+
+// MemoryStoreKey defines a typed key to be used with an in-memory KVStore.
+type MemoryStoreKey struct {
+    name string
+}
+
+func NewMemoryStoreKey(name string) *MemoryStoreKey {
+    return &MemoryStoreKey{
+        name: name,
+    }
+}
+
+// Name returns the name of the MemoryStoreKey.
+func (key *MemoryStoreKey) Name() string {
+    return key.name
+}
+
+// String returns a stringified representation of the MemoryStoreKey.
+func (key *MemoryStoreKey) String() string {
+    return fmt.Sprintf("MemoryStoreKey{%p, %s}", key, key.name)
+}
+
+//----------------------------------------
+
+// TraceContext contains TraceKVStore context data. It will be written with
+// every trace operation.
+type TraceContext map[string]interface{}
+
+// Clone clones tc into another instance of TraceContext.
+func (tc TraceContext) Clone() TraceContext {
+    ret := TraceContext{}
+    maps.Copy(ret, tc)
+    return ret
+}
+
+// Merge merges value of newTc into tc.
+func (tc TraceContext) Merge(newTc TraceContext) TraceContext {
+    if tc == nil {
+        tc = TraceContext{}
+    }
+    maps.Copy(tc, newTc)
+    return tc
+}
+
+// MultiStorePersistentCache defines an interface which provides inter-block
+// (persistent) caching capabilities for multiple CommitKVStores based on StoreKeys.
+type MultiStorePersistentCache interface {
+    // Wrap and return the provided CommitKVStore with an inter-block (persistent)
+    // cache.
+    GetStoreCache(key StoreKey, store CommitKVStore) CommitKVStore
+
+    // Return the underlying CommitKVStore for a StoreKey.
+    Unwrap(key StoreKey) CommitKVStore
+
+    // Reset the entire set of internal caches.
+    Reset()
+}
+
+// StoreWithInitialVersion is a store that can have an arbitrary initial
+// version.
+type StoreWithInitialVersion interface {
+    // SetInitialVersion sets the initial version of the IAVL tree. It is used when
+    // starting a new chain at an arbitrary height.
+    SetInitialVersion(version int64)
+}
+
+// NewTransientStoreKeys constructs a new map of TransientStoreKey's
+// Must return pointers according to the ocap principle
+// The function will panic if there is a potential conflict in names
+// see `assertNoCommonPrefix` function for more details.
+func NewTransientStoreKeys(names ...string) map[string]*TransientStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*TransientStoreKey)
+    for _, n := range names {
+        keys[n] = NewTransientStoreKey(n)
+    }
+    return keys
+}
+
+// NewMemoryStoreKeys constructs a new map matching store key names to their
+// respective MemoryStoreKey references.
+// The function will panic if there is a potential conflict in names (see `assertNoPrefix`
+// function for more details).
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+    assertNoCommonPrefix(names)
+    keys := make(map[string]*MemoryStoreKey)
+    for _, n := range names {
+        keys[n] = NewMemoryStoreKey(n)
+    }
+    return keys
+}
+```
+
+As for concrete implementations, `rootMulti.Store` is the go-to implementation of the `CommitMultiStore` interface:
+
+```go expandable
+package rootmulti
+
+import (
+    "crypto/sha256"
+    "errors"
+    "fmt"
+    "io"
+    "maps"
+    "math"
+    "sort"
+    "strings"
+    "sync"
+
+    cmtproto "github.com/cometbft/cometbft/proto/tendermint/types"
+    dbm "github.com/cosmos/cosmos-db"
+    protoio "github.com/cosmos/gogoproto/io"
+    gogotypes "github.com/cosmos/gogoproto/types"
+    iavltree "github.com/cosmos/iavl"
+
+    errorsmod "cosmossdk.io/errors"
+    "cosmossdk.io/log"
+    "cosmossdk.io/store/cachemulti"
+    "cosmossdk.io/store/dbadapter"
+    "cosmossdk.io/store/iavl"
+    "cosmossdk.io/store/listenkv"
+    "cosmossdk.io/store/mem"
+    "cosmossdk.io/store/metrics"
+    "cosmossdk.io/store/pruning"
+    pruningtypes "cosmossdk.io/store/pruning/types"
+    snapshottypes "cosmossdk.io/store/snapshots/types"
+    "cosmossdk.io/store/tracekv"
+    "cosmossdk.io/store/transient"
+    "cosmossdk.io/store/types"
+)
+
+const (
+    latestVersionKey = "s/latest"
+    commitInfoKeyFmt = "s/%d" // s/<version>
+)
+
+const iavlDisablefastNodeDefault = false
+
+// keysFromStoreKeyMap returns a slice of keys for the provided map lexically sorted by StoreKey.Name()
+func keysFromStoreKeyMap[V any](m map[types.StoreKey]V) []types.StoreKey {
+    keys := make([]types.StoreKey, 0, len(m))
+    for key := range m {
+        keys = append(keys, key)
+    }
+    sort.Slice(keys, func(i, j int) bool {
+        ki, kj := keys[i], keys[j]
+        return ki.Name() < kj.Name()
+    })
+    return keys
+}
+
+// Store is composed of many CommitStores. Name contrasts with
+// cacheMultiStore which is used for branching other MultiStores. It implements
+// the CommitMultiStore interface.
+type Store struct {
+    db                  dbm.DB
+    logger              log.Logger
+    lastCommitInfo      *types.CommitInfo
+    pruningManager      *pruning.Manager
+    iavlCacheSize       int
+    iavlDisableFastNode bool
+    // iavlSyncPruning should rarely be set to true.
+    // The Prune command will automatically set this to true.
+    // This allows the prune command to wait for the pruning to finish before returning.
+    iavlSyncPruning   bool
+    storesParams      map[types.StoreKey]storeParams
+    stores            map[types.StoreKey]types.CommitKVStore
+    keysByName        map[string]types.StoreKey
+    initialVersion    int64
+    removalMap        map[types.StoreKey]bool
+    traceWriter       io.Writer
+    traceContext      types.TraceContext
+    traceContextMutex sync.Mutex
+    interBlockCache   types.MultiStorePersistentCache
+    listeners         map[types.StoreKey]*types.MemoryListener
+    metrics           metrics.StoreMetrics
+    commitHeader      cmtproto.Header
+}
+
+var (
+    _ types.CommitMultiStore = (*Store)(nil)
+    _ types.Queryable        = (*Store)(nil)
+)
+
+// NewStore returns a reference to a new Store object with the provided DB. The
+// store will be created with a PruneNothing pruning strategy by default. After
+// a store is created, KVStores must be mounted and finally LoadLatestVersion or
+// LoadVersion must be called.
+func NewStore(db dbm.DB, logger log.Logger, metricGatherer metrics.StoreMetrics) *Store {
+    return &Store{
+        db:                  db,
+        logger:              logger,
+        iavlCacheSize:       iavl.DefaultIAVLCacheSize,
+        iavlDisableFastNode: iavlDisablefastNodeDefault,
+        storesParams:        make(map[types.StoreKey]storeParams),
+        stores:              make(map[types.StoreKey]types.CommitKVStore),
+        keysByName:          make(map[string]types.StoreKey),
+        listeners:           make(map[types.StoreKey]*types.MemoryListener),
+        removalMap:          make(map[types.StoreKey]bool),
+        pruningManager:      pruning.NewManager(db, logger),
+        metrics:             metricGatherer,
+    }
+}
+
+// GetPruning fetches the pruning strategy from the root store.
+func (rs *Store) GetPruning() pruningtypes.PruningOptions {
+    return rs.pruningManager.GetOptions()
+}
+
+// SetPruning sets the pruning strategy on the root store and all the sub-stores.
+// Note, calling SetPruning on the root store prior to LoadVersion or
+// LoadLatestVersion performs a no-op as the stores aren't mounted yet.
+func (rs *Store) SetPruning(pruningOpts pruningtypes.PruningOptions) {
+    rs.pruningManager.SetOptions(pruningOpts)
+}
+
+// SetMetrics sets the metrics gatherer for the store package
+func (rs *Store) SetMetrics(metrics metrics.StoreMetrics) {
+    rs.metrics = metrics
+}
+
+// SetSnapshotInterval sets the interval at which the snapshots are taken.
+// It is used by the store to determine which heights to retain until after the snapshot is complete.
+func (rs *Store) SetSnapshotInterval(snapshotInterval uint64) {
+    rs.pruningManager.SetSnapshotInterval(snapshotInterval)
+}
+
+func (rs *Store) SetIAVLCacheSize(cacheSize int) {
+    rs.iavlCacheSize = cacheSize
+}
+
+func (rs *Store) SetIAVLDisableFastNode(disableFastNode bool) {
+    rs.iavlDisableFastNode = disableFastNode
+}
+
+func (rs *Store) SetIAVLSyncPruning(syncPruning bool) {
+    rs.iavlSyncPruning = syncPruning
+}
+
+// GetStoreType implements Store.
+func (rs *Store) GetStoreType() types.StoreType {
+    return types.StoreTypeMulti
+}
+
+// MountStoreWithDB implements CommitMultiStore.
+func (rs *Store) MountStoreWithDB(key types.StoreKey, typ types.StoreType, db dbm.DB) {
+    if key == nil {
+        panic("MountIAVLStore() key cannot be nil")
+    }
+    if _, ok := rs.storesParams[key]; ok {
+        panic(fmt.Sprintf("store duplicate store key %v", key))
+    }
+    if _, ok := rs.keysByName[key.Name()]; ok {
+        panic(fmt.Sprintf("store duplicate store key name %v", key))
+    }
+    rs.storesParams[key] = newStoreParams(key, db, typ, 0)
+    rs.keysByName[key.Name()] = key
+}
+
+// GetCommitStore returns a mounted CommitStore for a given StoreKey.
+// If the store is wrapped in an inter-block cache, it will be unwrapped before returning.
+func (rs *Store) GetCommitStore(key types.StoreKey) types.CommitStore {
+    return rs.GetCommitKVStore(key)
+}
+
+// GetCommitKVStore returns a mounted CommitKVStore for a given StoreKey. If the
+// store is wrapped in an inter-block cache, it will be unwrapped before returning.
+func (rs *Store) GetCommitKVStore(key types.StoreKey) types.CommitKVStore {
+    // If the Store has an inter-block cache, first attempt to lookup and unwrap
+    // the underlying CommitKVStore by StoreKey. If it does not exist, fallback to
+    // the main mapping of CommitKVStores.
+    if rs.interBlockCache != nil {
+        if store := rs.interBlockCache.Unwrap(key); store != nil {
+            return store
+        }
+    }
+    return rs.stores[key]
+}
+
+// StoreKeysByName returns mapping storeNames -> StoreKeys
+func (rs *Store) StoreKeysByName() map[string]types.StoreKey {
+    return rs.keysByName
+}
+
+// LoadLatestVersionAndUpgrade implements CommitMultiStore
+func (rs *Store) LoadLatestVersionAndUpgrade(upgrades *types.StoreUpgrades) error {
+    ver := GetLatestVersion(rs.db)
+    return rs.loadVersion(ver, upgrades)
+}
+
+// LoadVersionAndUpgrade allows us to rename substores while loading an older version
+func (rs *Store) LoadVersionAndUpgrade(ver int64, upgrades *types.StoreUpgrades) error {
+    return rs.loadVersion(ver, upgrades)
+}
+
+// LoadLatestVersion implements CommitMultiStore.
+func (rs *Store) LoadLatestVersion() error {
+    ver := GetLatestVersion(rs.db)
+    return rs.loadVersion(ver, nil)
+}
+
+// LoadVersion implements CommitMultiStore.
+func (rs *Store) + +LoadVersion(ver int64) + +error { + return rs.loadVersion(ver, nil) +} + +func (rs *Store) + +loadVersion(ver int64, upgrades *types.StoreUpgrades) + +error { + infos := make(map[string]types.StoreInfo) + +rs.logger.Debug("loadVersion", "ver", ver) + cInfo := &types.CommitInfo{ +} + + / load old data if we are not version 0 + if ver != 0 { + var err error + cInfo, err = rs.GetCommitInfo(ver) + if err != nil { + return err +} + + / convert StoreInfos slice to map + for _, storeInfo := range cInfo.StoreInfos { + infos[storeInfo.Name] = storeInfo +} + +} + + / load each Store (note this doesn't panic on unmounted keys now) + newStores := make(map[types.StoreKey]types.CommitKVStore) + storesKeys := make([]types.StoreKey, 0, len(rs.storesParams)) + for key := range rs.storesParams { + storesKeys = append(storesKeys, key) +} + if upgrades != nil { + / deterministic iteration order for upgrades + / (as the underlying store may change and + / upgrades make store changes where the execution order may matter) + +sort.Slice(storesKeys, func(i, j int) + +bool { + return storesKeys[i].Name() < storesKeys[j].Name() +}) +} + for _, key := range storesKeys { + storeParams := rs.storesParams[key] + commitID := rs.getCommitID(infos, key.Name()) + +rs.logger.Debug("loadVersion commitID", "key", key, "ver", ver, "hash", fmt.Sprintf("%x", commitID.Hash)) + + / If it has been added, set the initial version + if upgrades.IsAdded(key.Name()) || upgrades.RenamedFrom(key.Name()) != "" { + storeParams.initialVersion = uint64(ver) + 1 +} + +else if commitID.Version != ver && storeParams.typ == types.StoreTypeIAVL { + return fmt.Errorf("version of store %s mismatch root store's version; expected %d got %d; new stores should be added using StoreUpgrades", key.Name(), ver, commitID.Version) +} + +store, err := rs.loadCommitStoreFromParams(key, commitID, storeParams) + if err != nil { + return errorsmod.Wrap(err, "failed to load store") +} + +newStores[key] = store + + / If 
it was deleted, remove all data + if upgrades.IsDeleted(key.Name()) { + if err := deleteKVStore(store.(types.KVStore)); err != nil { + return errorsmod.Wrapf(err, "failed to delete store %s", key.Name()) +} + +rs.removalMap[key] = true +} + +else if oldName := upgrades.RenamedFrom(key.Name()); oldName != "" { + / handle renames specially + / make an unregistered key to satisfy loadCommitStore params + oldKey := types.NewKVStoreKey(oldName) + oldParams := newStoreParams(oldKey, storeParams.db, storeParams.typ, 0) + + / load from the old name + oldStore, err := rs.loadCommitStoreFromParams(oldKey, rs.getCommitID(infos, oldName), oldParams) + if err != nil { + return errorsmod.Wrapf(err, "failed to load old store %s", oldName) +} + + / move all data + if err := moveKVStoreData(oldStore.(types.KVStore), store.(types.KVStore)); err != nil { + return errorsmod.Wrapf(err, "failed to move store %s -> %s", oldName, key.Name()) +} + + / add the old key so its deletion is committed + newStores[oldKey] = oldStore + / this will ensure it's not perpetually stored in commitInfo + rs.removalMap[oldKey] = true +} + +} + +rs.lastCommitInfo = cInfo + rs.stores = newStores + + / load any snapshot heights we missed from disk to be pruned on the next run + if err := rs.pruningManager.LoadSnapshotHeights(rs.db); err != nil { + return err +} + +return nil +} + +func (rs *Store) + +getCommitID(infos map[string]types.StoreInfo, name string) + +types.CommitID { + info, ok := infos[name] + if !ok { + return types.CommitID{ +} + +} + +return info.CommitId +} + +func deleteKVStore(kv types.KVStore) + +error { + / Note that we cannot write while iterating, so load all keys here, delete below + var keys [][]byte + itr := kv.Iterator(nil, nil) + for itr.Valid() { + keys = append(keys, itr.Key()) + +itr.Next() +} + if err := itr.Close(); err != nil { + return err +} + for _, k := range keys { + kv.Delete(k) +} + +return nil +} + +/ we simulate move by a copy and delete +func moveKVStoreData(oldDB, 
newDB types.KVStore) + +error { + / we read from one and write to another + itr := oldDB.Iterator(nil, nil) + for itr.Valid() { + newDB.Set(itr.Key(), itr.Value()) + +itr.Next() +} + if err := itr.Close(); err != nil { + return err +} + + / then delete the old store + return deleteKVStore(oldDB) +} + +/ PruneSnapshotHeight prunes the given height according to the prune strategy. +/ If the strategy is PruneNothing, this is a no-op. +/ For other strategies, this height is persisted until the snapshot is operated. +func (rs *Store) + +PruneSnapshotHeight(height int64) { + rs.pruningManager.HandleSnapshotHeight(height) +} + +/ SetInterBlockCache sets the Store's internal inter-block (persistent) + +cache. +/ When this is defined, all CommitKVStores will be wrapped with their respective +/ inter-block cache. +func (rs *Store) + +SetInterBlockCache(c types.MultiStorePersistentCache) { + rs.interBlockCache = c +} + +/ SetTracer sets the tracer for the MultiStore that the underlying +/ stores will utilize to trace operations. A MultiStore is returned. +func (rs *Store) + +SetTracer(w io.Writer) + +types.MultiStore { + rs.traceWriter = w + return rs +} + +/ SetTracingContext updates the tracing context for the MultiStore by merging +/ the given context with the existing context by key. Any existing keys will +/ be overwritten. It is implied that the caller should update the context when +/ necessary between tracing operations. It returns a modified MultiStore. 
+func (rs *Store) + +SetTracingContext(tc types.TraceContext) + +types.MultiStore { + rs.traceContextMutex.Lock() + +defer rs.traceContextMutex.Unlock() + +rs.traceContext = rs.traceContext.Merge(tc) + +return rs +} + +func (rs *Store) + +getTracingContext() + +types.TraceContext { + rs.traceContextMutex.Lock() + +defer rs.traceContextMutex.Unlock() + if rs.traceContext == nil { + return nil +} + ctx := types.TraceContext{ +} + +maps.Copy(ctx, rs.traceContext) + +return ctx +} + +/ TracingEnabled returns if tracing is enabled for the MultiStore. +func (rs *Store) + +TracingEnabled() + +bool { + return rs.traceWriter != nil +} + +/ AddListeners adds a listener for the KVStore belonging to the provided StoreKey +func (rs *Store) + +AddListeners(keys []types.StoreKey) { + for i := range keys { + listener := rs.listeners[keys[i]] + if listener == nil { + rs.listeners[keys[i]] = types.NewMemoryListener() +} + +} +} + +/ ListeningEnabled returns if listening is enabled for a specific KVStore +func (rs *Store) + +ListeningEnabled(key types.StoreKey) + +bool { + if ls, ok := rs.listeners[key]; ok { + return ls != nil +} + +return false +} + +/ PopStateCache returns the accumulated state change messages from the CommitMultiStore +/ Calling PopStateCache destroys only the currently accumulated state in each listener +/ not the state in the store itself. This is a mutating and destructive operation. +/ This method has been synchronized. +func (rs *Store) + +PopStateCache() []*types.StoreKVPair { + var cache []*types.StoreKVPair + for key := range rs.listeners { + ls := rs.listeners[key] + if ls != nil { + cache = append(cache, ls.PopStateCache()...) +} + +} + +sort.SliceStable(cache, func(i, j int) + +bool { + return cache[i].StoreKey < cache[j].StoreKey +}) + +return cache +} + +/ LatestVersion returns the latest version in the store +func (rs *Store) + +LatestVersion() + +int64 { + return rs.LastCommitID().Version +} + +/ LastCommitID implements Committer/CommitStore. 
+func (rs *Store) LastCommitID() types.CommitID {
+	if rs.lastCommitInfo == nil {
+		emptyHash := sha256.Sum256([]byte{})
+		appHash := emptyHash[:]
+
+		// set empty apphash to sha256([]byte{}) if info is nil
+		return types.CommitID{
+			Version: GetLatestVersion(rs.db),
+			Hash:    appHash,
+		}
+	}
+	if len(rs.lastCommitInfo.CommitID().Hash) == 0 {
+		emptyHash := sha256.Sum256([]byte{})
+		appHash := emptyHash[:]
+
+		// set empty apphash to sha256([]byte{}) if hash is nil
+		return types.CommitID{
+			Version: rs.lastCommitInfo.Version,
+			Hash:    appHash,
+		}
+	}
+
+	return rs.lastCommitInfo.CommitID()
+}
+
+// Commit implements Committer/CommitStore.
+func (rs *Store) Commit() types.CommitID {
+	var previousHeight, version int64
+	if rs.lastCommitInfo.GetVersion() == 0 && rs.initialVersion > 1 {
+		// This case means that no commit has been made in the store, we
+		// start from initialVersion.
+		version = rs.initialVersion
+	} else {
+		// This case can mean two things:
+		// - either there was already a previous commit in the store, in which
+		//   case we increment the version from there,
+		// - or there was no previous commit, and initial version was not set,
+		//   in which case we start at version 1.
+ previousHeight = rs.lastCommitInfo.GetVersion() + +version = previousHeight + 1 +} + if rs.commitHeader.Height != version { + rs.logger.Debug("commit header and version mismatch", "header_height", rs.commitHeader.Height, "version", version) +} + +rs.lastCommitInfo = commitStores(version, rs.stores, rs.removalMap) + +rs.lastCommitInfo.Timestamp = rs.commitHeader.Time + defer rs.flushMetadata(rs.db, version, rs.lastCommitInfo) + + / remove remnants of removed stores + for sk := range rs.removalMap { + if _, ok := rs.stores[sk]; ok { + delete(rs.stores, sk) + +delete(rs.storesParams, sk) + +delete(rs.keysByName, sk.Name()) +} + +} + + / reset the removalMap + rs.removalMap = make(map[types.StoreKey]bool) + if err := rs.handlePruning(version); err != nil { + rs.logger.Error( + "failed to prune store, please check your pruning configuration", + "err", err, + ) +} + +return types.CommitID{ + Version: version, + Hash: rs.lastCommitInfo.Hash(), +} +} + +/ WorkingHash returns the current hash of the store. +/ it will be used to get the current app hash before commit. +func (rs *Store) + +WorkingHash() []byte { + storeInfos := make([]types.StoreInfo, 0, len(rs.stores)) + storeKeys := keysFromStoreKeyMap(rs.stores) + for _, key := range storeKeys { + store := rs.stores[key] + if store.GetStoreType() != types.StoreTypeIAVL { + continue +} + if !rs.removalMap[key] { + si := types.StoreInfo{ + Name: key.Name(), + CommitId: types.CommitID{ + Hash: store.WorkingHash(), +}, +} + +storeInfos = append(storeInfos, si) +} + +} + +sort.SliceStable(storeInfos, func(i, j int) + +bool { + return storeInfos[i].Name < storeInfos[j].Name +}) + +return types.CommitInfo{ + StoreInfos: storeInfos +}.Hash() +} + +/ CacheWrap implements CacheWrapper/Store/CommitStore. +func (rs *Store) + +CacheWrap() + +types.CacheWrap { + return rs.CacheMultiStore().(types.CacheWrap) +} + +/ CacheWrapWithTrace implements the CacheWrapper interface. 
+func (rs *Store) + +CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) + +types.CacheWrap { + return rs.CacheWrap() +} + +/ CacheMultiStore creates ephemeral branch of the multi-store and returns a CacheMultiStore. +/ It implements the MultiStore interface. +func (rs *Store) + +CacheMultiStore() + +types.CacheMultiStore { + stores := make(map[types.StoreKey]types.CacheWrapper) + for k, v := range rs.stores { + store := types.KVStore(v) + / Wire the listenkv.Store to allow listeners to observe the writes from the cache store, + / set same listeners on cache store will observe duplicated writes. + if rs.ListeningEnabled(k) { + store = listenkv.NewStore(store, k, rs.listeners[k]) +} + +stores[k] = store +} + +return cachemulti.NewStore(rs.db, stores, rs.keysByName, rs.traceWriter, rs.getTracingContext()) +} + +/ CacheMultiStoreWithVersion is analogous to CacheMultiStore except that it +/ attempts to load stores at a given version (height). An error is returned if +/ any store cannot be loaded. This should only be used for querying and +/ iterating at past heights. +func (rs *Store) + +CacheMultiStoreWithVersion(version int64) (types.CacheMultiStore, error) { + cachedStores := make(map[types.StoreKey]types.CacheWrapper) + +var commitInfo *types.CommitInfo + storeInfos := map[string]bool{ +} + for key, store := range rs.stores { + var cacheStore types.KVStore + switch store.GetStoreType() { + case types.StoreTypeIAVL: + / If the store is wrapped with an inter-block cache, we must first unwrap + / it to get the underlying IAVL store. + store = rs.GetCommitKVStore(key) + + / Attempt to lazy-load an already saved IAVL store version. If the + / version does not exist or is pruned, an error should be returned. 
+ var err error + cacheStore, err = store.(*iavl.Store).GetImmutable(version) + / if we got error from loading a module store + / we fetch commit info of this version + / we use commit info to check if the store existed at this version or not + if err != nil { + if commitInfo == nil { + var errCommitInfo error + commitInfo, errCommitInfo = rs.GetCommitInfo(version) + if errCommitInfo != nil { + return nil, errCommitInfo +} + for _, storeInfo := range commitInfo.StoreInfos { + storeInfos[storeInfo.Name] = true +} + +} + + / If the store existed at this version, it means there's actually an error + / getting the root store at this version. + if storeInfos[key.Name()] { + return nil, err +} + +} + +default: + cacheStore = store +} + + / Wire the listenkv.Store to allow listeners to observe the writes from the cache store, + / set same listeners on cache store will observe duplicated writes. + if rs.ListeningEnabled(key) { + cacheStore = listenkv.NewStore(cacheStore, key, rs.listeners[key]) +} + +cachedStores[key] = cacheStore +} + +return cachemulti.NewStore(rs.db, cachedStores, rs.keysByName, rs.traceWriter, rs.getTracingContext()), nil +} + +/ GetStore returns a mounted Store for a given StoreKey. If the StoreKey does +/ not exist, it will panic. If the Store is wrapped in an inter-block cache, it +/ will be unwrapped prior to being returned. +/ +/ TODO: This isn't used directly upstream. Consider returning the Store as-is +/ instead of unwrapping. +func (rs *Store) + +GetStore(key types.StoreKey) + +types.Store { + store := rs.GetCommitKVStore(key) + if store == nil { + panic(fmt.Sprintf("store does not exist for key: %s", key.Name())) +} + +return store +} + +/ GetKVStore returns a mounted KVStore for a given StoreKey. If tracing is +/ enabled on the KVStore, a wrapped TraceKVStore will be returned with the root +/ store's tracer, otherwise, the original KVStore will be returned. 
+//
+// NOTE: The returned KVStore may be wrapped in an inter-block cache if it is
+// set on the root store.
+func (rs *Store) GetKVStore(key types.StoreKey) types.KVStore {
+	s := rs.stores[key]
+	if s == nil {
+		panic(fmt.Sprintf("store does not exist for key: %s", key.Name()))
+	}
+	store := types.KVStore(s)
+	if rs.TracingEnabled() {
+		store = tracekv.NewStore(store, rs.traceWriter, rs.getTracingContext())
+	}
+	if rs.ListeningEnabled(key) {
+		store = listenkv.NewStore(store, key, rs.listeners[key])
+	}
+
+	return store
+}
+
+func (rs *Store) handlePruning(version int64) error {
+	pruneHeight := rs.pruningManager.GetPruningHeight(version)
+	rs.logger.Debug("prune start", "height", version)
+	defer rs.logger.Debug("prune end", "height", version)
+	return rs.PruneStores(pruneHeight)
+}
+
+// PruneStores prunes all history up to the specific height of the multi store.
+func (rs *Store) PruneStores(pruningHeight int64) (err error) {
+	if pruningHeight <= 0 {
+		rs.logger.Debug("pruning skipped, height is less than or equal to 0")
+		return nil
+	}
+
+	rs.logger.Debug("pruning store", "heights", pruningHeight)
+	for key, store := range rs.stores {
+		rs.logger.Debug("pruning store", "key", key) // Also log store.name (a private variable)?
+
+		// If the store is wrapped with an inter-block cache, we must first unwrap
+		// it to get the underlying IAVL store.
+		if store.GetStoreType() != types.StoreTypeIAVL {
+			continue
+		}
+
+		store = rs.GetCommitKVStore(key)
+		err := store.(*iavl.Store).DeleteVersionsTo(pruningHeight)
+		if err == nil {
+			continue
+		}
+		if errors.Is(err, iavltree.ErrVersionDoesNotExist) {
+			return err
+		}
+		rs.logger.Error("failed to prune store", "key", key, "err", err)
+	}
+
+	return nil
+}
+
+// GetStoreByName performs a lookup of a StoreKey given a store name typically
+// provided in a path. The StoreKey is then used to perform a lookup and return
+// a Store. If the Store is wrapped in an inter-block cache, it will be unwrapped
+// prior to being returned. If the StoreKey does not exist, nil is returned.
+func (rs *Store) GetStoreByName(name string) types.Store {
+	key := rs.keysByName[name]
+	if key == nil {
+		return nil
+	}
+
+	return rs.GetCommitKVStore(key)
+}
+
+// Query calls substore.Query with the same `req` where `req.Path` is
+// modified to remove the substore prefix.
+// Ie. `req.Path` here is `/<storeName>/<subpath>`, and trimmed to `/<subpath>` for the substore.
+// TODO: add proof for `multistore -> substore`.
+func (rs *Store) Query(req *types.RequestQuery) (*types.ResponseQuery, error) {
+	path := req.Path
+	storeName, subpath, err := parsePath(path)
+	if err != nil {
+		return &types.ResponseQuery{}, err
+	}
+
+	store := rs.GetStoreByName(storeName)
+	if store == nil {
+		return &types.ResponseQuery{}, errorsmod.Wrapf(types.ErrUnknownRequest, "no such store: %s", storeName)
+	}
+
+	queryable, ok := store.(types.Queryable)
+	if !ok {
+		return &types.ResponseQuery{}, errorsmod.Wrapf(types.ErrUnknownRequest, "store %s (type %T) doesn't support queries", storeName, store)
+	}
+
+	// trim the path and make the query
+	req.Path = subpath
+	res, err := queryable.Query(req)
+	if !req.Prove || !RequireProof(subpath) {
+		return res, err
+	}
+	if res.ProofOps == nil || len(res.ProofOps.Ops) == 0 {
+		return &types.ResponseQuery{}, errorsmod.Wrap(types.ErrInvalidRequest, "proof is unexpectedly empty; ensure height has not been pruned")
+	}
+
+	// If the request's height is the latest height we've committed, then utilize
+	// the store's lastCommitInfo as this commit info may not be flushed to disk.
+	// Otherwise, we query for the commit info from disk.
+	var commitInfo *types.CommitInfo
+	if res.Height == rs.lastCommitInfo.Version {
+		commitInfo = rs.lastCommitInfo
+	} else {
+		commitInfo, err = rs.GetCommitInfo(res.Height)
+		if err != nil {
+			return &types.ResponseQuery{}, err
+		}
+	}
+
+	// Restore origin path and append proof op.
+ res.ProofOps.Ops = append(res.ProofOps.Ops, commitInfo.ProofOp(storeName)) + +return res, nil +} + +/ SetInitialVersion sets the initial version of the IAVL tree. It is used when +/ starting a new chain at an arbitrary height. +func (rs *Store) + +SetInitialVersion(version int64) + +error { + rs.initialVersion = version + + / Loop through all the stores, if it's an IAVL store, then set initial + / version on it. + for key, store := range rs.stores { + if store.GetStoreType() == types.StoreTypeIAVL { + / If the store is wrapped with an inter-block cache, we must first unwrap + / it to get the underlying IAVL store. + store = rs.GetCommitKVStore(key) + +store.(types.StoreWithInitialVersion).SetInitialVersion(version) +} + +} + +return nil +} + +/ parsePath expects a format like /[/] +/ Must start with /, subpath may be empty +/ Returns error if it doesn't start with / +func parsePath(path string) (storeName, subpath string, err error) { + if !strings.HasPrefix(path, "/") { + return storeName, subpath, errorsmod.Wrapf(types.ErrUnknownRequest, "invalid path: %s", path) +} + +storeName, subpath, found := strings.Cut(path[1:], "/") + if !found { + return storeName, subpath, nil +} + +return storeName, "/" + subpath, nil +} + +/---------------------- Snapshotting ------------------ + +/ Snapshot implements snapshottypes.Snapshotter. The snapshot output for a given format must be +/ identical across nodes such that chunks from different sources fit together. If the output for a +/ given format changes (at the byte level), the snapshot format must be bumped - see +/ TestMultistoreSnapshot_Checksum test. 
+func (rs *Store) + +Snapshot(height uint64, protoWriter protoio.Writer) + +error { + if height == 0 { + return errorsmod.Wrap(types.ErrLogic, "cannot snapshot height 0") +} + if height > uint64(GetLatestVersion(rs.db)) { + return errorsmod.Wrapf(types.ErrLogic, "cannot snapshot future height %v", height) +} + + / Collect stores to snapshot (only IAVL stores are supported) + +type namedStore struct { + *iavl.Store + name string +} + stores := []namedStore{ +} + keys := keysFromStoreKeyMap(rs.stores) + for _, key := range keys { + switch store := rs.GetCommitKVStore(key).(type) { + case *iavl.Store: + stores = append(stores, namedStore{ + name: key.Name(), + Store: store +}) + case *transient.Store, *mem.Store: + / Non-persisted stores shouldn't be snapshotted + continue + default: + return errorsmod.Wrapf(types.ErrLogic, + "don't know how to snapshot store %q of type %T", key.Name(), store) +} + +} + +sort.Slice(stores, func(i, j int) + +bool { + return strings.Compare(stores[i].name, stores[j].name) == -1 +}) + + / Export each IAVL store. Stores are serialized as a stream of SnapshotItem Protobuf + / messages. The first item contains a SnapshotStore with store metadata (i.e. name), + / and the following messages contain a SnapshotNode (i.e. an ExportNode). Store changes + / are demarcated by new SnapshotStore items. 
+ for _, store := range stores { + rs.logger.Debug("starting snapshot", "store", store.name, "height", height) + +exporter, err := store.Export(int64(height)) + if err != nil { + rs.logger.Error("snapshot failed; exporter error", "store", store.name, "err", err) + +return err +} + +err = func() + +error { + defer exporter.Close() + err := protoWriter.WriteMsg(&snapshottypes.SnapshotItem{ + Item: &snapshottypes.SnapshotItem_Store{ + Store: &snapshottypes.SnapshotStoreItem{ + Name: store.name, +}, +}, +}) + if err != nil { + rs.logger.Error("snapshot failed; item store write failed", "store", store.name, "err", err) + +return err +} + nodeCount := 0 + for { + node, err := exporter.Next() + if errors.Is(err, iavltree.ErrorExportDone) { + rs.logger.Debug("snapshot Done", "store", store.name, "nodeCount", nodeCount) + +break +} + +else if err != nil { + return err +} + +err = protoWriter.WriteMsg(&snapshottypes.SnapshotItem{ + Item: &snapshottypes.SnapshotItem_IAVL{ + IAVL: &snapshottypes.SnapshotIAVLItem{ + Key: node.Key, + Value: node.Value, + Height: int32(node.Height), + Version: node.Version, +}, +}, +}) + if err != nil { + return err +} + +nodeCount++ +} + +return nil +}() + if err != nil { + return err +} + +} + +return nil +} + +/ Restore implements snapshottypes.Snapshotter. +/ returns next snapshot item and error. +func (rs *Store) + +Restore( + height uint64, format uint32, protoReader protoio.Reader, +) (snapshottypes.SnapshotItem, error) { + / Import nodes into stores. The first item is expected to be a SnapshotItem containing + / a SnapshotStoreItem, telling us which store to import into. The following items will contain + / SnapshotNodeItem (i.e. ExportNode) + +until we reach the next SnapshotStoreItem or EOF. 
+ var importer *iavltree.Importer + var snapshotItem snapshottypes.SnapshotItem +loop: + for { + snapshotItem = snapshottypes.SnapshotItem{ +} + err := protoReader.ReadMsg(&snapshotItem) + if errors.Is(err, io.EOF) { + break +} + +else if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "invalid protobuf message") +} + switch item := snapshotItem.Item.(type) { + case *snapshottypes.SnapshotItem_Store: + if importer != nil { + err = importer.Commit() + if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "IAVL commit failed") +} + +importer.Close() +} + +store, ok := rs.GetStoreByName(item.Store.Name).(*iavl.Store) + if !ok || store == nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrapf(types.ErrLogic, "cannot import into non-IAVL store %q", item.Store.Name) +} + +importer, err = store.Import(int64(height)) + if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "import failed") +} + +defer importer.Close() + / Importer height must reflect the node height (which usually matches the block height, but not always) + +rs.logger.Debug("restoring snapshot", "store", item.Store.Name) + case *snapshottypes.SnapshotItem_IAVL: + if importer == nil { + rs.logger.Error("failed to restore; received IAVL node item before store item") + +return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(types.ErrLogic, "received IAVL node item before store item") +} + if item.IAVL.Height > math.MaxInt8 { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrapf(types.ErrLogic, "node height %v cannot exceed %v", + item.IAVL.Height, math.MaxInt8) +} + node := &iavltree.ExportNode{ + Key: item.IAVL.Key, + Value: item.IAVL.Value, + Height: int8(item.IAVL.Height), + Version: item.IAVL.Version, +} + / Protobuf does not differentiate between []byte{ +} + +as nil, but fortunately IAVL does + / not allow nil keys nor nil values for leaf nodes, so we can always set them to empty. 
+ if node.Key == nil { + node.Key = []byte{ +} + +} + if node.Height == 0 && node.Value == nil { + node.Value = []byte{ +} + +} + err := importer.Add(node) + if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "IAVL node import failed") +} + +default: + break loop +} + +} + if importer != nil { + err := importer.Commit() + if err != nil { + return snapshottypes.SnapshotItem{ +}, errorsmod.Wrap(err, "IAVL commit failed") +} + +importer.Close() +} + +rs.flushMetadata(rs.db, int64(height), rs.buildCommitInfo(int64(height))) + +return snapshotItem, rs.LoadLatestVersion() +} + +func (rs *Store) + +loadCommitStoreFromParams(key types.StoreKey, id types.CommitID, params storeParams) (types.CommitKVStore, error) { + var db dbm.DB + if params.db != nil { + db = dbm.NewPrefixDB(params.db, []byte("s/_/")) +} + +else { + prefix := "s/k:" + params.key.Name() + "/" + db = dbm.NewPrefixDB(rs.db, []byte(prefix)) +} + switch params.typ { + case types.StoreTypeMulti: + panic("recursive MultiStores not yet supported") + case types.StoreTypeIAVL: + store, err := iavl.LoadStoreWithOpts(db, rs.logger, key, id, params.initialVersion, rs.iavlCacheSize, rs.iavlDisableFastNode, rs.metrics, iavltree.AsyncPruningOption(!rs.iavlSyncPruning)) + if err != nil { + return nil, err +} + if rs.interBlockCache != nil { + / Wrap and get a CommitKVStore with inter-block caching. Note, this should + / only wrap the primary CommitKVStore, not any store that is already + / branched as that will create unexpected behavior. 
+ store = rs.interBlockCache.GetStoreCache(key, store) +} + +return store, err + case types.StoreTypeDB: + return commitDBStoreAdapter{ + Store: dbadapter.Store{ + DB: db +}}, nil + case types.StoreTypeTransient: + _, ok := key.(*types.TransientStoreKey) + if !ok { + return nil, fmt.Errorf("invalid StoreKey for StoreTypeTransient: %s", key.String()) +} + +return transient.NewStore(), nil + case types.StoreTypeMemory: + if _, ok := key.(*types.MemoryStoreKey); !ok { + return nil, fmt.Errorf("unexpected key type for a MemoryStoreKey; got: %s", key.String()) +} + +return mem.NewStore(), nil + + default: + panic(fmt.Sprintf("unrecognized store type %v", params.typ)) +} +} + +func (rs *Store) + +buildCommitInfo(version int64) *types.CommitInfo { + keys := keysFromStoreKeyMap(rs.stores) + storeInfos := []types.StoreInfo{ +} + for _, key := range keys { + store := rs.stores[key] + storeType := store.GetStoreType() + if storeType == types.StoreTypeTransient || storeType == types.StoreTypeMemory { + continue +} + +storeInfos = append(storeInfos, types.StoreInfo{ + Name: key.Name(), + CommitId: store.LastCommitID(), +}) +} + +return &types.CommitInfo{ + Version: version, + StoreInfos: storeInfos, +} +} + +/ RollbackToVersion delete the versions after `target` and update the latest version. +func (rs *Store) + +RollbackToVersion(target int64) + +error { + if target <= 0 { + return fmt.Errorf("invalid rollback height target: %d", target) +} + for key, store := range rs.stores { + if store.GetStoreType() == types.StoreTypeIAVL { + / If the store is wrapped with an inter-block cache, we must first unwrap + / it to get the underlying IAVL store. + store = rs.GetCommitKVStore(key) + err := store.(*iavl.Store).LoadVersionForOverwriting(target) + if err != nil { + return err +} + +} + +} + +rs.flushMetadata(rs.db, target, rs.buildCommitInfo(target)) + +return rs.LoadLatestVersion() +} + +/ SetCommitHeader sets the commit block header of the store. 
+func (rs *Store) + +SetCommitHeader(h cmtproto.Header) { + rs.commitHeader = h +} + +/ GetCommitInfo attempts to retrieve CommitInfo for a given version/height. It +/ will return an error if no CommitInfo exists, we fail to unmarshal the record +/ or if we cannot retrieve the object from the DB. +func (rs *Store) + +GetCommitInfo(ver int64) (*types.CommitInfo, error) { + cInfoKey := fmt.Sprintf(commitInfoKeyFmt, ver) + +bz, err := rs.db.Get([]byte(cInfoKey)) + if err != nil { + return nil, errorsmod.Wrap(err, "failed to get commit info") +} + +else if bz == nil { + return nil, errors.New("no commit info found") +} + cInfo := &types.CommitInfo{ +} + if err = cInfo.Unmarshal(bz); err != nil { + return nil, errorsmod.Wrap(err, "failed unmarshal commit info") +} + +return cInfo, nil +} + +func (rs *Store) + +flushMetadata(db dbm.DB, version int64, cInfo *types.CommitInfo) { + rs.logger.Debug("flushing metadata", "height", version) + batch := db.NewBatch() + +defer func() { + _ = batch.Close() +}() + if cInfo != nil { + flushCommitInfo(batch, version, cInfo) +} + +else { + rs.logger.Debug("commitInfo is nil, not flushed", "height", version) +} + +flushLatestVersion(batch, version) + if err := batch.WriteSync(); err != nil { + panic(fmt.Errorf("error on batch write %w", err)) +} + +rs.logger.Debug("flushing metadata finished", "height", version) +} + +type storeParams struct { + key types.StoreKey + db dbm.DB + typ types.StoreType + initialVersion uint64 +} + +func newStoreParams(key types.StoreKey, db dbm.DB, typ types.StoreType, initialVersion uint64) + +storeParams { + return storeParams{ + key: key, + db: db, + typ: typ, + initialVersion: initialVersion, +} +} + +func GetLatestVersion(db dbm.DB) + +int64 { + bz, err := db.Get([]byte(latestVersionKey)) + if err != nil { + panic(err) +} + +else if bz == nil { + return 0 +} + +var latestVersion int64 + if err := gogotypes.StdInt64Unmarshal(&latestVersion, bz); err != nil { + panic(err) +} + +return latestVersion +} + 
+// Commits each store and returns a new commitInfo.
+func commitStores(version int64, storeMap map[types.StoreKey]types.CommitKVStore, removalMap map[types.StoreKey]bool) *types.CommitInfo {
+	storeInfos := make([]types.StoreInfo, 0, len(storeMap))
+	storeKeys := keysFromStoreKeyMap(storeMap)
+	for _, key := range storeKeys {
+		store := storeMap[key]
+		last := store.LastCommitID()
+
+		// If a commit event execution is interrupted, a new iavl store's version
+		// will be larger than the RMS's metadata, when the block is replayed, we
+		// should avoid committing that iavl store again.
+		var commitID types.CommitID
+		if last.Version >= version {
+			last.Version = version
+			commitID = last
+		} else {
+			commitID = store.Commit()
+		}
+
+		storeType := store.GetStoreType()
+		if storeType == types.StoreTypeTransient || storeType == types.StoreTypeMemory {
+			continue
+		}
+
+		if !removalMap[key] {
+			si := types.StoreInfo{}
+			si.Name = key.Name()
+			si.CommitId = commitID
+			storeInfos = append(storeInfos, si)
+		}
+	}
+
+	sort.SliceStable(storeInfos, func(i, j int) bool {
+		return strings.Compare(storeInfos[i].Name, storeInfos[j].Name) < 0
+	})
+
+	return &types.CommitInfo{
+		Version:    version,
+		StoreInfos: storeInfos,
+	}
+}
+
+func flushCommitInfo(batch dbm.Batch, version int64, cInfo *types.CommitInfo) {
+	bz, err := cInfo.Marshal()
+	if err != nil {
+		panic(err)
+	}
+
+	cInfoKey := fmt.Sprintf(commitInfoKeyFmt, version)
+	err = batch.Set([]byte(cInfoKey), bz)
+	if err != nil {
+		panic(err)
+	}
+}
+
+func flushLatestVersion(batch dbm.Batch, version int64) {
+	bz, err := gogotypes.StdInt64Marshal(version)
+	if err != nil {
+		panic(err)
+	}
+
+	err = batch.Set([]byte(latestVersionKey), bz)
+	if err != nil {
+		panic(err)
+	}
+}
+```
+
+The `rootMulti.Store` is a base-layer multistore built around a `db`, on top of which multiple `KVStores` can be mounted. It is the default multistore used in [`baseapp`](/docs/sdk/v0.53/documentation/application-framework/baseapp).
+
+### CacheMultiStore
+
+Whenever the `rootMulti.Store` needs to be branched, a [`cachemulti.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/cachemulti/store.go) is used.
+
+```go expandable
+package cachemulti
+
+import (
+	"fmt"
+	"io"
+	"maps"
+
+	dbm "github.com/cosmos/cosmos-db"
+
+	"cosmossdk.io/store/cachekv"
+	"cosmossdk.io/store/dbadapter"
+	"cosmossdk.io/store/tracekv"
+	"cosmossdk.io/store/types"
+)
+
+// storeNameCtxKey is the TraceContext metadata key that identifies
+// the store which emitted a given trace.
+const storeNameCtxKey = "store_name"
+
+//----------------------------------------
+// Store
+
+// Store holds many branched stores.
+// Implements MultiStore.
+// NOTE: a Store (and MultiStores in general) should never expose the
+// keys for the substores.
+type Store struct {
+	db     types.CacheKVStore
+	stores map[types.StoreKey]types.CacheWrap
+	keys   map[string]types.StoreKey
+
+	traceWriter  io.Writer
+	traceContext types.TraceContext
+}
+
+var _ types.CacheMultiStore = Store{}
+
+// NewFromKVStore creates a new Store object from a mapping of store keys to
+// CacheWrapper objects and a KVStore as the database. Each CacheWrapper store
+// is a branched store.
+func NewFromKVStore(
+	store types.KVStore, stores map[types.StoreKey]types.CacheWrapper,
+	keys map[string]types.StoreKey, traceWriter io.Writer, traceContext types.TraceContext,
+) Store {
+	cms := Store{
+		db:           cachekv.NewStore(store),
+		stores:       make(map[types.StoreKey]types.CacheWrap, len(stores)),
+		keys:         keys,
+		traceWriter:  traceWriter,
+		traceContext: traceContext,
+	}
+
+	for key, store := range stores {
+		if cms.TracingEnabled() {
+			tctx := cms.traceContext.Clone().Merge(types.TraceContext{
+				storeNameCtxKey: key.Name(),
+			})
+
+			store = tracekv.NewStore(store.(types.KVStore), cms.traceWriter, tctx)
+		}
+
+		cms.stores[key] = cachekv.NewStore(store.(types.KVStore))
+	}
+
+	return cms
+}
+
+// NewStore creates a new Store object from a mapping of store keys to
+// CacheWrapper objects. Each CacheWrapper store is a branched store.
+func NewStore(
+	db dbm.DB, stores map[types.StoreKey]types.CacheWrapper, keys map[string]types.StoreKey,
+	traceWriter io.Writer, traceContext types.TraceContext,
+) Store {
+	return NewFromKVStore(dbadapter.Store{DB: db}, stores, keys, traceWriter, traceContext)
+}
+
+func newCacheMultiStoreFromCMS(cms Store) Store {
+	stores := make(map[types.StoreKey]types.CacheWrapper)
+	for k, v := range cms.stores {
+		stores[k] = v
+	}
+
+	return NewFromKVStore(cms.db, stores, nil, cms.traceWriter, cms.traceContext)
+}
+
+// SetTracer sets the tracer for the MultiStore that the underlying
+// stores will utilize to trace operations. A MultiStore is returned.
+func (cms Store) SetTracer(w io.Writer) types.MultiStore {
+	cms.traceWriter = w
+	return cms
+}
+
+// SetTracingContext updates the tracing context for the MultiStore by merging
+// the given context with the existing context by key. Any existing keys will
+// be overwritten. It is implied that the caller should update the context when
+// necessary between tracing operations. It returns a modified MultiStore.
+func (cms Store) SetTracingContext(tc types.TraceContext) types.MultiStore {
+	if cms.traceContext != nil {
+		maps.Copy(cms.traceContext, tc)
+	} else {
+		cms.traceContext = tc
+	}
+
+	return cms
+}
+
+// TracingEnabled returns if tracing is enabled for the MultiStore.
+func (cms Store) TracingEnabled() bool {
+	return cms.traceWriter != nil
+}
+
+// LatestVersion returns the branch version of the store
+func (cms Store) LatestVersion() int64 {
+	panic("cannot get latest version from branch cached multi-store")
+}
+
+// GetStoreType returns the type of the store.
+func (cms Store) GetStoreType() types.StoreType {
+	return types.StoreTypeMulti
+}
+
+// Write calls Write on each underlying store.
+func (cms Store) Write() {
+	cms.db.Write()
+	for _, store := range cms.stores {
+		store.Write()
+	}
+}
+
+// Implements CacheWrapper.
+func (cms Store) CacheWrap() types.CacheWrap {
+	return cms.CacheMultiStore().(types.CacheWrap)
+}
+
+// CacheWrapWithTrace implements the CacheWrapper interface.
+func (cms Store) CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) types.CacheWrap {
+	return cms.CacheWrap()
+}
+
+// Implements MultiStore.
+func (cms Store) CacheMultiStore() types.CacheMultiStore {
+	return newCacheMultiStoreFromCMS(cms)
+}
+
+// CacheMultiStoreWithVersion implements the MultiStore interface. It will panic
+// as an already cached multi-store cannot load previous versions.
+//
+// TODO: The store implementation can possibly be modified to support this as it
+// seems safe to load previous versions (heights).
+func (cms Store) CacheMultiStoreWithVersion(_ int64) (types.CacheMultiStore, error) {
+	panic("cannot branch cached multi-store with a version")
+}
+
+// GetStore returns an underlying Store by key.
+func (cms Store) GetStore(key types.StoreKey) types.Store {
+	s := cms.stores[key]
+	if key == nil || s == nil {
+		panic(fmt.Sprintf("kv store with key %v has not been registered in stores", key))
+	}
+
+	return s.(types.Store)
+}
+
+// GetKVStore returns an underlying KVStore by key.
+func (cms Store) GetKVStore(key types.StoreKey) types.KVStore {
+	store := cms.stores[key]
+	if key == nil || store == nil {
+		panic(fmt.Sprintf("kv store with key %v has not been registered in stores", key))
+	}
+
+	return store.(types.KVStore)
+}
+```
+
+`cachemulti.Store` branches all substores (creates a virtual store for each substore) in its constructor, holds them in `Store.stores`, and caches all read queries. `Store.GetKVStore()` returns the store from `Store.stores`, and `Store.Write()` recursively calls `CacheWrap.Write()` on all the substores.
+
+## Base-layer KVStores
+
+### `KVStore` and `CommitKVStore` Interfaces
+
+A `KVStore` is a simple key-value store used to store and retrieve data. A `CommitKVStore` is a `KVStore` that also implements a `Committer`. By default, stores mounted in `baseapp`'s main `CommitMultiStore` are `CommitKVStore`s. The `KVStore` interface is primarily used to restrict modules from accessing the committer.
+
+Individual `KVStore`s are used by modules to manage a subset of the global state. `KVStore`s can be accessed by objects that hold a specific key. This `key` should only be exposed to the [`keeper`](/docs/sdk/v0.53/documentation/module-system/keeper) of the module that defines the store.
+
+`CommitKVStore`s are declared by proxy of their respective `key` and mounted on the application's [multistore](#multistore) in the [main application file](/docs/sdk/v0.53/documentation/application-framework/app-anatomy#core-application-file). In the same file, the `key` is also passed to the module's `keeper` that is responsible for managing the store.
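+The key-as-capability idea can be illustrated with a toy, self-contained sketch (not SDK code): substores are indexed by the *pointer identity* of their key, so constructing another key with the same name grants no access, which is exactly why `NewKVStoreKey` returns a pointer. All names below (`storeKey`, `multiStore`, `getKVStore`) are invented for illustration.
+
+```go
+package main
+
+import "fmt"
+
+// storeKey mimics types.KVStoreKey: only the pointer identity matters.
+type storeKey struct{ name string }
+
+// multiStore maps key pointers to their substore data.
+type multiStore struct {
+	stores map[*storeKey]map[string][]byte
+}
+
+// mount registers a fresh substore under the given key pointer.
+func (ms *multiStore) mount(key *storeKey) {
+	ms.stores[key] = make(map[string][]byte)
+}
+
+// getKVStore panics on an unregistered key, like cachemulti's GetKVStore.
+func (ms *multiStore) getKVStore(key *storeKey) map[string][]byte {
+	s, ok := ms.stores[key]
+	if !ok {
+		panic(fmt.Sprintf("kv store with key %q has not been registered", key.name))
+	}
+	return s
+}
+
+func main() {
+	ms := &multiStore{stores: make(map[*storeKey]map[string][]byte)}
+	bankKey := &storeKey{name: "bank"}
+	ms.mount(bankKey)
+
+	// Only the holder of bankKey can reach the bank substore.
+	ms.getKVStore(bankKey)["balance"] = []byte("100")
+
+	// A forged key with the same name is a different pointer: no access.
+	forged := &storeKey{name: "bank"}
+	_, ok := ms.stores[forged]
+	fmt.Println("forged key registered:", ok)
+}
+```
+
+This is why store keys are created once at app wiring time and handed only to the owning module's keeper: possession of the pointer is the access right.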
+ +```go expandable +package types + +import ( + + "fmt" + "io" + "maps" + "slices" + "github.com/cometbft/cometbft/proto/tendermint/crypto" + dbm "github.com/cosmos/cosmos-db" + "cosmossdk.io/store/metrics" + pruningtypes "cosmossdk.io/store/pruning/types" + snapshottypes "cosmossdk.io/store/snapshots/types" +) + +type Store interface { + GetStoreType() + +StoreType + CacheWrapper +} + +/ something that can persist to disk +type Committer interface { + Commit() + +CommitID + LastCommitID() + +CommitID + + / WorkingHash returns the hash of the KVStore's state before commit. + WorkingHash() []byte + + SetPruning(pruningtypes.PruningOptions) + +GetPruning() + +pruningtypes.PruningOptions +} + +/ Stores of MultiStore must implement CommitStore. +type CommitStore interface { + Committer + Store +} + +/ Queryable allows a Store to expose internal state to the abci.Query +/ interface. Multistore can route requests to the proper Store. +/ +/ This is an optional, but useful extension to any CommitStore +type Queryable interface { + Query(*RequestQuery) (*ResponseQuery, error) +} + +type RequestQuery struct { + Data []byte + Path string + Height int64 + Prove bool +} + +type ResponseQuery struct { + Code uint32 + Log string + Info string + Index int64 + Key []byte + Value []byte + ProofOps *crypto.ProofOps + Height int64 + Codespace string +} + +/---------------------------------------- +/ MultiStore + +/ StoreUpgrades defines a series of transformations to apply the multistore db upon load +type StoreUpgrades struct { + Added []string `json:"added"` + Renamed []StoreRename `json:"renamed"` + Deleted []string `json:"deleted"` +} + +/ StoreRename defines a name change of a sub-store. +/ All data previously under a PrefixStore with OldKey will be copied +/ to a PrefixStore with NewKey, then deleted from OldKey store. 
+type StoreRename struct { + OldKey string `json:"old_key"` + NewKey string `json:"new_key"` +} + +/ IsAdded returns true if the given key should be added +func (s *StoreUpgrades) + +IsAdded(key string) + +bool { + if s == nil { + return false +} + +return slices.Contains(s.Added, key) +} + +/ IsDeleted returns true if the given key should be deleted +func (s *StoreUpgrades) + +IsDeleted(key string) + +bool { + if s == nil { + return false +} + +return slices.Contains(s.Deleted, key) +} + +/ RenamedFrom returns the oldKey if it was renamed +/ Returns "" if it was not renamed +func (s *StoreUpgrades) + +RenamedFrom(key string) + +string { + if s == nil { + return "" +} + for _, re := range s.Renamed { + if re.NewKey == key { + return re.OldKey +} + +} + +return "" +} + +type MultiStore interface { + Store + + / Branches MultiStore into a cached storage object. + / NOTE: Caller should probably not call .Write() + +on each, but + / call CacheMultiStore.Write(). + CacheMultiStore() + +CacheMultiStore + + / CacheMultiStoreWithVersion branches the underlying MultiStore where + / each stored is loaded at a specific version (height). + CacheMultiStoreWithVersion(version int64) (CacheMultiStore, error) + + / Convenience for fetching substores. + / If the store does not exist, panics. + GetStore(StoreKey) + +Store + GetKVStore(StoreKey) + +KVStore + + / TracingEnabled returns if tracing is enabled for the MultiStore. + TracingEnabled() + +bool + + / SetTracer sets the tracer for the MultiStore that the underlying + / stores will utilize to trace operations. The modified MultiStore is + / returned. + SetTracer(w io.Writer) + +MultiStore + + / SetTracingContext sets the tracing context for a MultiStore. It is + / implied that the caller should update the context when necessary between + / tracing operations. The modified MultiStore is returned. 
+ SetTracingContext(TraceContext) + +MultiStore + + / LatestVersion returns the latest version in the store + LatestVersion() + +int64 +} + +/ From MultiStore.CacheMultiStore().... +type CacheMultiStore interface { + MultiStore + Write() / Writes operations to underlying KVStore +} + +/ CommitMultiStore is an interface for a MultiStore without cache capabilities. +type CommitMultiStore interface { + Committer + MultiStore + snapshottypes.Snapshotter + + / Mount a store of type using the given db. + / If db == nil, the new store will use the CommitMultiStore db. + MountStoreWithDB(key StoreKey, typ StoreType, db dbm.DB) + + / Panics on a nil key. + GetCommitStore(key StoreKey) + +CommitStore + + / Panics on a nil key. + GetCommitKVStore(key StoreKey) + +CommitKVStore + + / Load the latest persisted version. Called once after all calls to + / Mount*Store() + +are complete. + LoadLatestVersion() + +error + + / LoadLatestVersionAndUpgrade will load the latest version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) + +error + + / LoadVersionAndUpgrade will load the named version, but also + / rename/delete/create sub-store keys, before registering all the keys + / in order to handle breaking formats in migrations + LoadVersionAndUpgrade(ver int64, upgrades *StoreUpgrades) + +error + + / Load a specific persisted version. When you load an old version, or when + / the last commit attempt didn't complete, the next commit after loading + / must be idempotent (return the same commit id). Otherwise the behavior is + / undefined. + LoadVersion(ver int64) + +error + + / Set an inter-block (persistent) + +cache that maintains a mapping from + / StoreKeys to CommitKVStores. + SetInterBlockCache(MultiStorePersistentCache) + + / SetInitialVersion sets the initial version of the IAVL tree. 
It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) + +error + + / SetIAVLCacheSize sets the cache size of the IAVL tree. + SetIAVLCacheSize(size int) + + / SetIAVLDisableFastNode enables/disables fastnode feature on iavl. + SetIAVLDisableFastNode(disable bool) + + / SetIAVLSyncPruning set sync/async pruning on iavl. + / It is not recommended to use this option. + / It is here to enable the prune command to force this to true, allowing the command to wait + / for the pruning to finish before returning. + SetIAVLSyncPruning(sync bool) + + / RollbackToVersion rollback the db to specific version(height). + RollbackToVersion(version int64) + +error + + / ListeningEnabled returns if listening is enabled for the KVStore belonging the provided StoreKey + ListeningEnabled(key StoreKey) + +bool + + / AddListeners adds a listener for the KVStore belonging to the provided StoreKey + AddListeners(keys []StoreKey) + + / PopStateCache returns the accumulated state change messages from the CommitMultiStore + PopStateCache() []*StoreKVPair + + / SetMetrics sets the metrics for the KVStore + SetMetrics(metrics metrics.StoreMetrics) +} + +/---------subsp------------------------------- +/ KVStore + +/ BasicKVStore is a simple interface to get/set data +type BasicKVStore interface { + / Get returns nil if key doesn't exist. Panics on nil key. + Get(key []byte) []byte + + / Has checks if a key exists. Panics on nil key. + Has(key []byte) + +bool + + / Set sets the key. Panics on nil key or value. + Set(key, value []byte) + + / Delete deletes the key. Panics on nil key. + Delete(key []byte) +} + +/ KVStore additionally provides iteration and deletion +type KVStore interface { + Store + BasicKVStore + + / Iterator over a domain of keys in ascending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. 
+ / To iterate over entire domain, use store.Iterator(nil, nil) + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + Iterator(start, end []byte) + +Iterator + + / Iterator over a domain of keys in descending order. End is exclusive. + / Start must be less than end, or the Iterator is invalid. + / Iterator must be closed by caller. + / CONTRACT: No writes may happen within a domain while an iterator exists over it. + / Exceptionally allowed for cachekv.Store, safe to write in the modules. + ReverseIterator(start, end []byte) + +Iterator +} + +/ Iterator is an alias db's Iterator for convenience. +type Iterator = dbm.Iterator + +/ CacheKVStore branches a KVStore and provides read cache functionality. +/ After calling .Write() + +on the CacheKVStore, all previously created +/ CacheKVStores on the object expire. +type CacheKVStore interface { + KVStore + + / Writes operations to underlying KVStore + Write() +} + +/ CommitKVStore is an interface for MultiStore. +type CommitKVStore interface { + Committer + KVStore +} + +/---------------------------------------- +/ CacheWrap + +/ CacheWrap is the most appropriate interface for store ephemeral branching and cache. +/ For example, IAVLStore.CacheWrap() + +returns a CacheKVStore. CacheWrap should not return +/ a Committer, since Commit ephemeral store make no sense. It can return KVStore, +/ HeapStore, SpaceStore, etc. +type CacheWrap interface { + / Write syncs with the underlying store. + Write() + + / CacheWrap recursively wraps again. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace recursively wraps again with tracing enabled. + CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +type CacheWrapper interface { + / CacheWrap branches a store. + CacheWrap() + +CacheWrap + + / CacheWrapWithTrace branches a store with tracing enabled. 
+ CacheWrapWithTrace(w io.Writer, tc TraceContext) + +CacheWrap +} + +func (cid CommitID) + +IsZero() + +bool { + return cid.Version == 0 && len(cid.Hash) == 0 +} + +func (cid CommitID) + +String() + +string { + return fmt.Sprintf("CommitID{%v:%X +}", cid.Hash, cid.Version) +} + +/---------------------------------------- +/ Store types + +/ kind of store +type StoreType int + +const ( + StoreTypeMulti StoreType = iota + StoreTypeDB + StoreTypeIAVL + StoreTypeTransient + StoreTypeMemory + StoreTypeSMT + StoreTypePersistent +) + +func (st StoreType) + +String() + +string { + switch st { + case StoreTypeMulti: + return "StoreTypeMulti" + case StoreTypeDB: + return "StoreTypeDB" + case StoreTypeIAVL: + return "StoreTypeIAVL" + case StoreTypeTransient: + return "StoreTypeTransient" + case StoreTypeMemory: + return "StoreTypeMemory" + case StoreTypeSMT: + return "StoreTypeSMT" + case StoreTypePersistent: + return "StoreTypePersistent" +} + +return "unknown store type" +} + +/---------------------------------------- +/ Keys for accessing substores + +/ StoreKey is a key used to index stores in a MultiStore. +type StoreKey interface { + Name() + +string + String() + +string +} + +/ CapabilityKey represent the Cosmos SDK keys for object-capability +/ generation in the IBC protocol as defined in https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#data-structures +type CapabilityKey StoreKey + +/ KVStoreKey is used for accessing substores. +/ Only the pointer value should ever be used - it functions as a capabilities key. +type KVStoreKey struct { + name string +} + +/ NewKVStoreKey returns a new pointer to a KVStoreKey. +/ Use a pointer so keys don't collide. +func NewKVStoreKey(name string) *KVStoreKey { + if name == "" { + panic("empty key name not allowed") +} + +return &KVStoreKey{ + name: name, +} +} + +/ NewKVStoreKeys returns a map of new pointers to KVStoreKey's. 
+/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). +func NewKVStoreKeys(names ...string) + +map[string]*KVStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*KVStoreKey, len(names)) + for _, n := range names { + keys[n] = NewKVStoreKey(n) +} + +return keys +} + +func (key *KVStoreKey) + +Name() + +string { + return key.name +} + +func (key *KVStoreKey) + +String() + +string { + return fmt.Sprintf("KVStoreKey{%p, %s +}", key, key.name) +} + +/ TransientStoreKey is used for indexing transient stores in a MultiStore +type TransientStoreKey struct { + name string +} + +/ Constructs new TransientStoreKey +/ Must return a pointer according to the ocap principle +func NewTransientStoreKey(name string) *TransientStoreKey { + return &TransientStoreKey{ + name: name, +} +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +Name() + +string { + return key.name +} + +/ Implements StoreKey +func (key *TransientStoreKey) + +String() + +string { + return fmt.Sprintf("TransientStoreKey{%p, %s +}", key, key.name) +} + +/ MemoryStoreKey defines a typed key to be used with an in-memory KVStore. +type MemoryStoreKey struct { + name string +} + +func NewMemoryStoreKey(name string) *MemoryStoreKey { + return &MemoryStoreKey{ + name: name +} +} + +/ Name returns the name of the MemoryStoreKey. +func (key *MemoryStoreKey) + +Name() + +string { + return key.name +} + +/ String returns a stringified representation of the MemoryStoreKey. +func (key *MemoryStoreKey) + +String() + +string { + return fmt.Sprintf("MemoryStoreKey{%p, %s +}", key, key.name) +} + +/---------------------------------------- + +/ TraceContext contains TraceKVStore context data. It will be written with +/ every trace operation. +type TraceContext map[string]interface{ +} + +/ Clone clones tc into another instance of TraceContext. 
+func (tc TraceContext) + +Clone() + +TraceContext { + ret := TraceContext{ +} + +maps.Copy(ret, tc) + +return ret +} + +/ Merge merges value of newTc into tc. +func (tc TraceContext) + +Merge(newTc TraceContext) + +TraceContext { + if tc == nil { + tc = TraceContext{ +} + +} + +maps.Copy(tc, newTc) + +return tc +} + +/ MultiStorePersistentCache defines an interface which provides inter-block +/ (persistent) + +caching capabilities for multiple CommitKVStores based on StoreKeys. +type MultiStorePersistentCache interface { + / Wrap and return the provided CommitKVStore with an inter-block (persistent) + / cache. + GetStoreCache(key StoreKey, store CommitKVStore) + +CommitKVStore + + / Return the underlying CommitKVStore for a StoreKey. + Unwrap(key StoreKey) + +CommitKVStore + + / Reset the entire set of internal caches. + Reset() +} + +/ StoreWithInitialVersion is a store that can have an arbitrary initial +/ version. +type StoreWithInitialVersion interface { + / SetInitialVersion sets the initial version of the IAVL tree. It is used when + / starting a new chain at an arbitrary height. + SetInitialVersion(version int64) +} + +/ NewTransientStoreKeys constructs a new map of TransientStoreKey's +/ Must return pointers according to the ocap principle +/ The function will panic if there is a potential conflict in names +/ see `assertNoCommonPrefix` function for more details. +func NewTransientStoreKeys(names ...string) + +map[string]*TransientStoreKey { + assertNoCommonPrefix(names) + keys := make(map[string]*TransientStoreKey) + for _, n := range names { + keys[n] = NewTransientStoreKey(n) +} + +return keys +} + +/ NewMemoryStoreKeys constructs a new map matching store key names to their +/ respective MemoryStoreKey references. +/ The function will panic if there is a potential conflict in names (see `assertNoPrefix` +/ function for more details). 
+func NewMemoryStoreKeys(names ...string) map[string]*MemoryStoreKey {
+	assertNoCommonPrefix(names)
+	keys := make(map[string]*MemoryStoreKey)
+	for _, n := range names {
+		keys[n] = NewMemoryStoreKey(n)
+	}
+
+	return keys
+}
+```
+
+Apart from the traditional `Get` and `Set` methods that a `KVStore` must implement via the `BasicKVStore` interface, a `KVStore` must provide an `Iterator(start, end)` method which returns an `Iterator` object. It is used to iterate over a range of keys, typically keys that share a common prefix. Below is an example from the bank module's keeper, used to iterate over all account balances:
+
+```go expandable
+package keeper
+
+import (
+	"context"
+	"fmt"
+
+	"cosmossdk.io/collections"
+	"cosmossdk.io/collections/indexes"
+	"cosmossdk.io/core/store"
+	errorsmod "cosmossdk.io/errors"
+	"cosmossdk.io/log"
+	"cosmossdk.io/math"
+
+	"github.com/cosmos/cosmos-sdk/codec"
+	sdk "github.com/cosmos/cosmos-sdk/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+	"github.com/cosmos/cosmos-sdk/x/bank/types"
+)
+
+var _ ViewKeeper = (*BaseViewKeeper)(nil)
+
+// ViewKeeper defines a module interface that facilitates read only access to
+// account balances.
+type ViewKeeper interface { + ValidateBalance(ctx context.Context, addr sdk.AccAddress) + +error + HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) + +bool + + GetAllBalances(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + GetAccountsBalances(ctx context.Context) []types.Balance + GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + LockedCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins + SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin + + IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool)) + +IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool)) +} + +func newBalancesIndexes(sb *collections.SchemaBuilder) + +BalancesIndexes { + return BalancesIndexes{ + Denom: indexes.NewReversePair[math.Int](/docs/sdk/v0.53/documentation/state-storage/ + sb, types.DenomAddressPrefix, "address_by_denom_index", + collections.PairKeyCodec(sdk.LengthPrefixedAddressKey(sdk.AccAddressKey), collections.StringKey), / nolint:staticcheck / Note: refer to the LengthPrefixedAddressKey docs to understand why we do this. + indexes.WithReversePairUncheckedValue(), / denom to address indexes were stored as Key: Join(denom, address) + +Value: []byte{0 +}, this will migrate the value to []byte{ +} + +in a lazy way. + ), +} +} + +type BalancesIndexes struct { + Denom *indexes.ReversePair[sdk.AccAddress, string, math.Int] +} + +func (b BalancesIndexes) + +IndexesList() []collections.Index[collections.Pair[sdk.AccAddress, string], math.Int] { + return []collections.Index[collections.Pair[sdk.AccAddress, string], math.Int]{ + b.Denom +} +} + +/ BaseViewKeeper implements a read only keeper implementation of ViewKeeper. 
+type BaseViewKeeper struct { + cdc codec.BinaryCodec + storeService store.KVStoreService + ak types.AccountKeeper + logger log.Logger + + Schema collections.Schema + Supply collections.Map[string, math.Int] + DenomMetadata collections.Map[string, types.Metadata] + SendEnabled collections.Map[string, bool] + Balances *collections.IndexedMap[collections.Pair[sdk.AccAddress, string], math.Int, BalancesIndexes] + Params collections.Item[types.Params] +} + +/ NewBaseViewKeeper returns a new BaseViewKeeper. +func NewBaseViewKeeper(cdc codec.BinaryCodec, storeService store.KVStoreService, ak types.AccountKeeper, logger log.Logger) + +BaseViewKeeper { + sb := collections.NewSchemaBuilder(storeService) + k := BaseViewKeeper{ + cdc: cdc, + storeService: storeService, + ak: ak, + logger: logger, + Supply: collections.NewMap(sb, types.SupplyKey, "supply", collections.StringKey, sdk.IntValue), + DenomMetadata: collections.NewMap(sb, types.DenomMetadataPrefix, "denom_metadata", collections.StringKey, codec.CollValue[types.Metadata](/docs/sdk/v0.53/documentation/state-storage/cdc)), + SendEnabled: collections.NewMap(sb, types.SendEnabledPrefix, "send_enabled", collections.StringKey, codec.BoolValue), / NOTE: we use a bool value which uses protobuf to retain state backwards compat + Balances: collections.NewIndexedMap(sb, types.BalancesPrefix, "balances", collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), types.BalanceValueCodec, newBalancesIndexes(sb)), + Params: collections.NewItem(sb, types.ParamsKey, "params", codec.CollValue[types.Params](/docs/sdk/v0.53/documentation/state-storage/cdc)), +} + +schema, err := sb.Build() + if err != nil { + panic(err) +} + +k.Schema = schema + return k +} + +/ HasBalance returns whether or not an account has at least amt balance. 
+func (k BaseViewKeeper) + +HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) + +bool { + return k.GetBalance(ctx, addr, amt.Denom).IsGTE(amt) +} + +/ Logger returns a module-specific logger. +func (k BaseViewKeeper) + +Logger() + +log.Logger { + return k.logger +} + +/ GetAllBalances returns all the account balances for the given account address. +func (k BaseViewKeeper) + +GetAllBalances(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + balances := sdk.NewCoins() + +k.IterateAccountBalances(ctx, addr, func(balance sdk.Coin) + +bool { + balances = balances.Add(balance) + +return false +}) + +return balances.Sort() +} + +/ GetAccountsBalances returns all the accounts balances from the store. +func (k BaseViewKeeper) + +GetAccountsBalances(ctx context.Context) []types.Balance { + balances := make([]types.Balance, 0) + mapAddressToBalancesIdx := make(map[string]int) + +k.IterateAllBalances(ctx, func(addr sdk.AccAddress, balance sdk.Coin) + +bool { + idx, ok := mapAddressToBalancesIdx[addr.String()] + if ok { + / address is already on the set of accounts balances + balances[idx].Coins = balances[idx].Coins.Add(balance) + +balances[idx].Coins.Sort() + +return false +} + accountBalance := types.Balance{ + Address: addr.String(), + Coins: sdk.NewCoins(balance), +} + +balances = append(balances, accountBalance) + +mapAddressToBalancesIdx[addr.String()] = len(balances) - 1 + return false +}) + +return balances +} + +/ GetBalance returns the balance of a specific denomination for a given account +/ by address. +func (k BaseViewKeeper) + +GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin { + amt, err := k.Balances.Get(ctx, collections.Join(addr, denom)) + if err != nil { + return sdk.NewCoin(denom, math.ZeroInt()) +} + +return sdk.NewCoin(denom, amt) +} + +/ IterateAccountBalances iterates over the balances of a single account and +/ provides the token balance to a callback. 
If true is returned from the +/ callback, iteration is halted. +func (k BaseViewKeeper) + +IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(sdk.Coin) + +bool) { + err := k.Balances.Walk(ctx, collections.NewPrefixedPairRange[sdk.AccAddress, string](/docs/sdk/v0.53/documentation/state-storage/addr), func(key collections.Pair[sdk.AccAddress, string], value math.Int) (stop bool, err error) { + return cb(sdk.NewCoin(key.K2(), value)), nil +}) + if err != nil { + panic(err) +} +} + +/ IterateAllBalances iterates over all the balances of all accounts and +/ denominations that are provided to a callback. If true is returned from the +/ callback, iteration is halted. +func (k BaseViewKeeper) + +IterateAllBalances(ctx context.Context, cb func(sdk.AccAddress, sdk.Coin) + +bool) { + err := k.Balances.Walk(ctx, nil, func(key collections.Pair[sdk.AccAddress, string], value math.Int) (stop bool, err error) { + return cb(key.K1(), sdk.NewCoin(key.K2(), value)), nil +}) + if err != nil { + panic(err) +} +} + +/ LockedCoins returns all the coins that are not spendable (i.e. locked) + for an +/ account by address. For standard accounts, the result will always be no coins. +/ For vesting accounts, LockedCoins is delegated to the concrete vesting account +/ type. +func (k BaseViewKeeper) + +LockedCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + acc := k.ak.GetAccount(ctx, addr) + if acc != nil { + vacc, ok := acc.(types.VestingAccount) + if ok { + sdkCtx := sdk.UnwrapSDKContext(ctx) + +return vacc.LockedCoins(sdkCtx.BlockTime()) +} + +} + +return sdk.NewCoins() +} + +/ SpendableCoins returns the total balances of spendable coins for an account +/ by address. If the account has no spendable coins, an empty Coins slice is +/ returned. 
+func (k BaseViewKeeper) + +SpendableCoins(ctx context.Context, addr sdk.AccAddress) + +sdk.Coins { + spendable, _ := k.spendableCoins(ctx, addr) + +return spendable +} + +/ SpendableCoin returns the balance of specific denomination of spendable coins +/ for an account by address. If the account has no spendable coin, a zero Coin +/ is returned. +func (k BaseViewKeeper) + +SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) + +sdk.Coin { + balance := k.GetBalance(ctx, addr, denom) + locked := k.LockedCoins(ctx, addr) + +return balance.SubAmount(locked.AmountOf(denom)) +} + +/ spendableCoins returns the coins the given address can spend alongside the total amount of coins it holds. +/ It exists for gas efficiency, in order to avoid to have to get balance multiple times. +func (k BaseViewKeeper) + +spendableCoins(ctx context.Context, addr sdk.AccAddress) (spendable, total sdk.Coins) { + total = k.GetAllBalances(ctx, addr) + locked := k.LockedCoins(ctx, addr) + +spendable, hasNeg := total.SafeSub(locked...) + if hasNeg { + spendable = sdk.NewCoins() + +return +} + +return +} + +/ ValidateBalance validates all balances for a given account address returning +/ an error if any balance is invalid. It will check for vesting account types +/ and validate the balances against the original vesting balances. +/ +/ CONTRACT: ValidateBalance should only be called upon genesis state. In the +/ case of vesting accounts, balances may change in a valid manner that would +/ otherwise yield an error from this call. 
+func (k BaseViewKeeper) + +ValidateBalance(ctx context.Context, addr sdk.AccAddress) + +error { + acc := k.ak.GetAccount(ctx, addr) + if acc == nil { + return errorsmod.Wrapf(sdkerrors.ErrUnknownAddress, "account %s does not exist", addr) +} + balances := k.GetAllBalances(ctx, addr) + if !balances.IsValid() { + return fmt.Errorf("account balance of %s is invalid", balances) +} + +vacc, ok := acc.(types.VestingAccount) + if ok { + ogv := vacc.GetOriginalVesting() + if ogv.IsAnyGT(balances) { + return fmt.Errorf("vesting amount %s cannot be greater than total amount %s", ogv, balances) +} + +} + +return nil +} +``` + +### `IAVL` Store + +The default implementation of `KVStore` and `CommitKVStore` used in `baseapp` is the `iavl.Store`. + +```go expandable +package iavl + +import ( + + "errors" + "fmt" + "io" + + cmtprotocrypto "github.com/cometbft/cometbft/proto/tendermint/crypto" + dbm "github.com/cosmos/cosmos-db" + "github.com/cosmos/iavl" + ics23 "github.com/cosmos/ics23/go" + + errorsmod "cosmossdk.io/errors" + "cosmossdk.io/log" + "cosmossdk.io/store/cachekv" + "cosmossdk.io/store/internal/kv" + "cosmossdk.io/store/metrics" + pruningtypes "cosmossdk.io/store/pruning/types" + "cosmossdk.io/store/tracekv" + "cosmossdk.io/store/types" + "cosmossdk.io/store/wrapper" +) + +const ( + DefaultIAVLCacheSize = 500000 +) + +var ( + _ types.KVStore = (*Store)(nil) + _ types.CommitStore = (*Store)(nil) + _ types.CommitKVStore = (*Store)(nil) + _ types.Queryable = (*Store)(nil) + _ types.StoreWithInitialVersion = (*Store)(nil) +) + +/ Store Implements types.KVStore and CommitKVStore. +type Store struct { + tree Tree + logger log.Logger + metrics metrics.StoreMetrics +} + +/ LoadStore returns an IAVL Store as a CommitKVStore. Internally, it will load the +/ store's version (id) + +from the provided DB. An error is returned if the version +/ fails to load, or if called with a positive version on an empty tree. 
+func LoadStore(db dbm.DB, logger log.Logger, key types.StoreKey, id types.CommitID, cacheSize int, disableFastNode bool, metrics metrics.StoreMetrics) (types.CommitKVStore, error) { + return LoadStoreWithInitialVersion(db, logger, key, id, 0, cacheSize, disableFastNode, metrics) +} + +/ LoadStoreWithInitialVersion returns an IAVL Store as a CommitKVStore setting its initialVersion +/ to the one given. Internally, it will load the store's version (id) + +from the +/ provided DB. An error is returned if the version fails to load, or if called with a positive +/ version on an empty tree. +func LoadStoreWithInitialVersion(db dbm.DB, logger log.Logger, key types.StoreKey, id types.CommitID, initialVersion uint64, cacheSize int, disableFastNode bool, metrics metrics.StoreMetrics) (types.CommitKVStore, error) { + return LoadStoreWithOpts(db, logger, key, id, initialVersion, cacheSize, disableFastNode, metrics, iavl.AsyncPruningOption(true)) +} + +func LoadStoreWithOpts(db dbm.DB, logger log.Logger, key types.StoreKey, id types.CommitID, initialVersion uint64, cacheSize int, disableFastNode bool, metrics metrics.StoreMetrics, opts ...iavl.Option) (types.CommitKVStore, error) { + / store/v1 and app/v1 flows never require an initial version of 0 + if initialVersion == 0 { + initialVersion = 1 +} + +opts = append(opts, iavl.InitialVersionOption(initialVersion)) + tree := iavl.NewMutableTree(wrapper.NewDBWrapper(db), cacheSize, disableFastNode, logger, opts...) + +isUpgradeable, err := tree.IsUpgradeable() + if err != nil { + return nil, err +} + if isUpgradeable && logger != nil { + logger.Info( + "Upgrading IAVL storage for faster queries + execution on live state. 
This may take a while", + "store_key", key.String(), + "version", initialVersion, + "commit", fmt.Sprintf("%X", id), + ) +} + + _, err = tree.LoadVersion(id.Version) + if err != nil { + return nil, err +} + if logger != nil { + logger.Debug("Finished loading IAVL tree") +} + +return &Store{ + tree: tree, + logger: logger, + metrics: metrics, +}, nil +} + +/ UnsafeNewStore returns a reference to a new IAVL Store with a given mutable +/ IAVL tree reference. It should only be used for testing purposes. +/ +/ CONTRACT: The IAVL tree should be fully loaded. +/ CONTRACT: PruningOptions passed in as argument must be the same as pruning options +/ passed into iavl.MutableTree +func UnsafeNewStore(tree *iavl.MutableTree) *Store { + return &Store{ + tree: tree, + metrics: metrics.NewNoOpMetrics(), +} +} + +/ GetImmutable returns a reference to a new store backed by an immutable IAVL +/ tree at a specific version (height) + +without any pruning options. This should +/ be used for querying and iteration only. If the version does not exist or has +/ been pruned, an empty immutable IAVL tree will be used. +/ Any mutable operations executed will result in a panic. +func (st *Store) + +GetImmutable(version int64) (*Store, error) { + if !st.VersionExists(version) { + return nil, errors.New("version mismatch on immutable IAVL tree; version does not exist. Version has either been pruned, or is for a future block height") +} + +iTree, err := st.tree.GetImmutable(version) + if err != nil { + return nil, err +} + +return &Store{ + tree: &immutableTree{ + iTree +}, + metrics: st.metrics, +}, nil +} + +/ Commit commits the current store state and returns a CommitID with the new +/ version and hash. 
+func (st *Store) + +Commit() + +types.CommitID { + defer st.metrics.MeasureSince("store", "iavl", "commit") + +hash, version, err := st.tree.SaveVersion() + if err != nil { + panic(err) +} + +return types.CommitID{ + Version: version, + Hash: hash, +} +} + +/ WorkingHash returns the hash of the current working tree. +func (st *Store) + +WorkingHash() []byte { + return st.tree.WorkingHash() +} + +/ LastCommitID implements Committer. +func (st *Store) + +LastCommitID() + +types.CommitID { + return types.CommitID{ + Version: st.tree.Version(), + Hash: st.tree.Hash(), +} +} + +/ SetPruning panics as pruning options should be provided at initialization +/ since IAVl accepts pruning options directly. +func (st *Store) + +SetPruning(_ pruningtypes.PruningOptions) { + panic("cannot set pruning options on an initialized IAVL store") +} + +/ SetPruning panics as pruning options should be provided at initialization +/ since IAVl accepts pruning options directly. +func (st *Store) + +GetPruning() + +pruningtypes.PruningOptions { + panic("cannot get pruning options on an initialized IAVL store") +} + +/ VersionExists returns whether or not a given version is stored. +func (st *Store) + +VersionExists(version int64) + +bool { + return st.tree.VersionExists(version) +} + +/ GetAllVersions returns all versions in the iavl tree +func (st *Store) + +GetAllVersions() []int { + return st.tree.AvailableVersions() +} + +/ Implements Store. +func (st *Store) + +GetStoreType() + +types.StoreType { + return types.StoreTypeIAVL +} + +/ Implements Store. +func (st *Store) + +CacheWrap() + +types.CacheWrap { + return cachekv.NewStore(st) +} + +/ CacheWrapWithTrace implements the Store interface. +func (st *Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return cachekv.NewStore(tracekv.NewStore(st, w, tc)) +} + +/ Implements types.KVStore. 
+func (st *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + _, err := st.tree.Set(key, value) + if err != nil && st.logger != nil { + st.logger.Error("iavl set error", "error", err.Error()) +} +} + +/ Implements types.KVStore. +func (st *Store) + +Get(key []byte) []byte { + defer st.metrics.MeasureSince("store", "iavl", "get") + +value, err := st.tree.Get(key) + if err != nil { + panic(err) +} + +return value +} + +/ Implements types.KVStore. +func (st *Store) + +Has(key []byte) (exists bool) { + defer st.metrics.MeasureSince("store", "iavl", "has") + +has, err := st.tree.Has(key) + if err != nil { + panic(err) +} + +return has +} + +/ Implements types.KVStore. +func (st *Store) + +Delete(key []byte) { + defer st.metrics.MeasureSince("store", "iavl", "delete") + _, _, err := st.tree.Remove(key) + if err != nil { + panic(err) +} +} + +/ DeleteVersionsTo deletes versions upto the given version from the MutableTree. An error +/ is returned if any single version is invalid or the delete fails. All writes +/ happen in a single batch with a single commit. +func (st *Store) + +DeleteVersionsTo(version int64) + +error { + return st.tree.DeleteVersionsTo(version) +} + +/ LoadVersionForOverwriting attempts to load a tree at a previously committed +/ version. Any versions greater than targetVersion will be deleted. +func (st *Store) + +LoadVersionForOverwriting(targetVersion int64) + +error { + return st.tree.LoadVersionForOverwriting(targetVersion) +} + +/ Implements types.KVStore. +func (st *Store) + +Iterator(start, end []byte) + +types.Iterator { + iterator, err := st.tree.Iterator(start, end, true) + if err != nil { + panic(err) +} + +return iterator +} + +/ Implements types.KVStore. 
+func (st *Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + iterator, err := st.tree.Iterator(start, end, false) + if err != nil { + panic(err) +} + +return iterator +} + +/ SetInitialVersion sets the initial version of the IAVL tree. It is used when +/ starting a new chain at an arbitrary height. +func (st *Store) + +SetInitialVersion(version int64) { + st.tree.SetInitialVersion(uint64(version)) +} + +/ Exports the IAVL store at the given version, returning an iavl.Exporter for the tree. +func (st *Store) + +Export(version int64) (*iavl.Exporter, error) { + istore, err := st.GetImmutable(version) + if err != nil { + return nil, errorsmod.Wrapf(err, "iavl export failed for version %v", version) +} + +tree, ok := istore.tree.(*immutableTree) + if !ok || tree == nil { + return nil, fmt.Errorf("iavl export failed: unable to fetch tree for version %v", version) +} + +return tree.Export() +} + +/ Import imports an IAVL tree at the given version, returning an iavl.Importer for importing. 
+func (st *Store) + +Import(version int64) (*iavl.Importer, error) { + tree, ok := st.tree.(*iavl.MutableTree) + if !ok { + return nil, errors.New("iavl import failed: unable to find mutable tree") +} + +return tree.Import(version) +} + +/ Handle gatest the latest height, if height is 0 +func getHeight(tree Tree, req *types.RequestQuery) + +int64 { + height := req.Height + if height == 0 { + latest := tree.Version() + if tree.VersionExists(latest - 1) { + height = latest - 1 +} + +else { + height = latest +} + +} + +return height +} + +/ Query implements ABCI interface, allows queries +/ +/ by default we will return from (latest height -1), +/ as we will have merkle proofs immediately (header height = data height + 1) +/ If latest-1 is not present, use latest (which must be present) +/ if you care to have the latest data to see a tx results, you must +/ explicitly set the height you want to see +func (st *Store) + +Query(req *types.RequestQuery) (res *types.ResponseQuery, err error) { + defer st.metrics.MeasureSince("store", "iavl", "query") + if len(req.Data) == 0 { + return &types.ResponseQuery{ +}, errorsmod.Wrap(types.ErrTxDecode, "query cannot be zero length") +} + tree := st.tree + + / store the height we chose in the response, with 0 being changed to the + / latest height + res = &types.ResponseQuery{ + Height: getHeight(tree, req), +} + switch req.Path { + case "/key": / get by key + key := req.Data / data holds the key bytes + + res.Key = key + if !st.VersionExists(res.Height) { + res.Log = iavl.ErrVersionDoesNotExist.Error() + +break +} + +value, err := tree.GetVersioned(key, res.Height) + if err != nil { + panic(err) +} + +res.Value = value + if !req.Prove { + break +} + + / Continue to prove existence/absence of value + / Must convert store.Tree to iavl.MutableTree with given version to use in CreateProof + iTree, err := tree.GetImmutable(res.Height) + if err != nil { + / sanity check: If value for given version was retrieved, immutable tree must also 
be retrievable + panic(fmt.Sprintf("version exists in store but could not retrieve corresponding versioned tree in store, %s", err.Error())) +} + mtree := &iavl.MutableTree{ + ImmutableTree: iTree, +} + + / get proof from tree and convert to merkle.Proof before adding to result + res.ProofOps = getProofFromTree(mtree, req.Data, res.Value != nil) + case "/subspace": + pairs := kv.Pairs{ + Pairs: make([]kv.Pair, 0), +} + subspace := req.Data + res.Key = subspace + iterator := types.KVStorePrefixIterator(st, subspace) + for ; iterator.Valid(); iterator.Next() { + pairs.Pairs = append(pairs.Pairs, kv.Pair{ + Key: iterator.Key(), + Value: iterator.Value() +}) +} + if err := iterator.Close(); err != nil { + panic(fmt.Errorf("failed to close iterator: %w", err)) +} + +bz, err := pairs.Marshal() + if err != nil { + panic(fmt.Errorf("failed to marshal KV pairs: %w", err)) +} + +res.Value = bz + + default: + return &types.ResponseQuery{ +}, errorsmod.Wrapf(types.ErrUnknownRequest, "unexpected query path: %v", req.Path) +} + +return res, err +} + +/ TraverseStateChanges traverses the state changes between two versions and calls the given function. +func (st *Store) + +TraverseStateChanges(startVersion, endVersion int64, fn func(version int64, changeSet *iavl.ChangeSet) + +error) + +error { + return st.tree.TraverseStateChanges(startVersion, endVersion, fn) +} + +/ Takes a MutableTree, a key, and a flag for creating existence or absence proof and returns the +/ appropriate merkle.Proof. 
Since this must be called after querying for the value, this function should never error
+/ Thus, it will panic on error rather than returning it
+func getProofFromTree(tree *iavl.MutableTree, key []byte, exists bool) *cmtprotocrypto.ProofOps {
+	var (
+		commitmentProof *ics23.CommitmentProof
+		err error
+	)
+	if exists {
+		/ value was found
+		commitmentProof, err = tree.GetMembershipProof(key)
+		if err != nil {
+			/ sanity check: If value was found, membership proof must be creatable
+			panic(fmt.Sprintf("unexpected value for empty proof: %s", err.Error()))
+}
+
+}
+
+else {
+		/ value wasn't found
+		commitmentProof, err = tree.GetNonMembershipProof(key)
+		if err != nil {
+			/ sanity check: If value wasn't found, nonmembership proof must be creatable
+			panic(fmt.Sprintf("unexpected error for nonexistence proof: %s", err.Error()))
+}
+
+}
+	op := types.NewIavlCommitmentOp(key, commitmentProof)
+
+return &cmtprotocrypto.ProofOps{
+	Ops: []cmtprotocrypto.ProofOp{
+	op.ProofOp()
+}}
+}
```

`iavl` stores are based around an [IAVL Tree](https://github.com/cosmos/iavl), a self-balancing binary tree which guarantees that:

- `Get` and `Set` operations are O(log n), where n is the number of elements in the tree.
- Iteration efficiently returns the sorted elements within the range.
- Each tree version is immutable and can be retrieved even after a commit (depending on the pruning settings).

The documentation on the IAVL Tree is located [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md).

### `DbAdapter` Store

`dbadapter.Store` is an adapter for `dbm.DB` making it fulfill the `KVStore` interface.

```go expandable
package dbadapter

import (

	"io"

	dbm "github.com/cosmos/cosmos-db"
	"cosmossdk.io/store/cachekv"
	"cosmossdk.io/store/tracekv"
	"cosmossdk.io/store/types"
)

/ Wrapper type for dbm.Db with implementation of KVStore
type Store struct {
	dbm.DB
}

/ Get wraps the underlying DB's Get method panicing on error.
+func (dsa Store) + +Get(key []byte) []byte { + v, err := dsa.DB.Get(key) + if err != nil { + panic(err) +} + +return v +} + +/ Has wraps the underlying DB's Has method panicing on error. +func (dsa Store) + +Has(key []byte) + +bool { + ok, err := dsa.DB.Has(key) + if err != nil { + panic(err) +} + +return ok +} + +/ Set wraps the underlying DB's Set method panicing on error. +func (dsa Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + if err := dsa.DB.Set(key, value); err != nil { + panic(err) +} +} + +/ Delete wraps the underlying DB's Delete method panicing on error. +func (dsa Store) + +Delete(key []byte) { + if err := dsa.DB.Delete(key); err != nil { + panic(err) +} +} + +/ Iterator wraps the underlying DB's Iterator method panicing on error. +func (dsa Store) + +Iterator(start, end []byte) + +types.Iterator { + iter, err := dsa.DB.Iterator(start, end) + if err != nil { + panic(err) +} + +return iter +} + +/ ReverseIterator wraps the underlying DB's ReverseIterator method panicing on error. +func (dsa Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + iter, err := dsa.DB.ReverseIterator(start, end) + if err != nil { + panic(err) +} + +return iter +} + +/ GetStoreType returns the type of the store. +func (Store) + +GetStoreType() + +types.StoreType { + return types.StoreTypeDB +} + +/ CacheWrap branches the underlying store. +func (dsa Store) + +CacheWrap() + +types.CacheWrap { + return cachekv.NewStore(dsa) +} + +/ CacheWrapWithTrace implements KVStore. +func (dsa Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return cachekv.NewStore(tracekv.NewStore(dsa, w, tc)) +} + +/ dbm.DB implements KVStore so we can CacheKVStore it. +var _ types.KVStore = Store{ +} +``` + +`dbadapter.Store` embeds `dbm.DB`, meaning most of the `KVStore` interface functions are implemented. The other functions (mostly miscellaneous) are manually implemented. 
This store is primarily used within [Transient Stores](#transient-store).

### `Transient` Store

`Transient.Store` is a base-layer `KVStore` which is automatically discarded at the end of the block.

```go expandable
package transient

import (

	dbm "github.com/cosmos/cosmos-db"
	"cosmossdk.io/store/dbadapter"
	pruningtypes "cosmossdk.io/store/pruning/types"
	"cosmossdk.io/store/types"
)

var (
	_ types.Committer = (*Store)(nil)
	_ types.KVStore = (*Store)(nil)
)

/ Store is a wrapper for a MemDB with Commiter implementation
type Store struct {
	dbadapter.Store
}

/ Constructs new MemDB adapter
func NewStore() *Store {
	return &Store{
	Store: dbadapter.Store{
	DB: dbm.NewMemDB()
}}
}

/ Implements CommitStore
/ Commit cleans up Store.
func (ts *Store)

Commit() (id types.CommitID) {
	ts.Store = dbadapter.Store{
	DB: dbm.NewMemDB()
}

return
}

func (ts *Store)

SetPruning(_ pruningtypes.PruningOptions) {
}

/ GetPruning is a no-op as pruning options cannot be directly set on this store.
/ They must be set on the root commit multi-store.
func (ts *Store)

GetPruning()

pruningtypes.PruningOptions {
	return pruningtypes.NewPruningOptions(pruningtypes.PruningUndefined)
}

/ Implements CommitStore
func (ts *Store)

LastCommitID()

types.CommitID {
	return types.CommitID{
}
}

func (ts *Store)

WorkingHash() []byte {
	return []byte{
}
}

/ Implements Store.
func (ts *Store)

GetStoreType()

types.StoreType {
	return types.StoreTypeTransient
}
```

`Transient.Store` is a `dbadapter.Store` backed by a `dbm.NewMemDB()`. All `KVStore` methods are reused. When `Store.Commit()` is called, a new `dbadapter.Store` is assigned, discarding the previous reference and allowing it to be garbage collected.

This type of store is useful to persist information that is only relevant per-block. One example would be to store parameter changes (i.e. a bool set to `true` if a parameter changed in a block).
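The discard-on-commit behavior above can be sketched with a plain map standing in for `dbm.MemDB`. This is a simplified, self-contained illustration of the pattern, not the actual SDK types: `memStore` and `transientStore` are hypothetical stand-ins.

```go
package main

import "fmt"

// memStore stands in for an in-memory dbm.MemDB: a simple KV map.
type memStore map[string][]byte

// transientStore mirrors the idea behind transient.Store: Commit swaps in
// a fresh backing store, discarding everything written during the block.
type transientStore struct {
	store memStore
}

func newTransientStore() *transientStore {
	return &transientStore{store: memStore{}}
}

// Set records a value for the current block.
func (ts *transientStore) Set(key string, value []byte) {
	ts.store[key] = value
}

// Has reports whether a key was written during the current block.
func (ts *transientStore) Has(key string) bool {
	_, ok := ts.store[key]
	return ok
}

// Commit discards the block's contents by allocating a new empty map,
// analogous to transient.Store assigning a fresh dbadapter.Store.
func (ts *transientStore) Commit() {
	ts.store = memStore{}
}

func main() {
	ts := newTransientStore()
	ts.Set("param_changed/mint", []byte{1}) // mark a parameter as changed this block
	fmt.Println(ts.Has("param_changed/mint")) // true

	ts.Commit() // end of block: all transient writes are discarded
	fmt.Println(ts.Has("param_changed/mint")) // false
}
```

The key design point is that nothing is ever deleted key-by-key: dropping the whole backing store at commit time is both cheaper and guarantees no per-block state leaks into the next block.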
+ +```go expandable +package types + +import ( + + "fmt" + "maps" + "reflect" + "cosmossdk.io/store/prefix" + storetypes "cosmossdk.io/store/types" + "github.com/cosmos/cosmos-sdk/codec" + sdk "github.com/cosmos/cosmos-sdk/types" +) + +const ( + / StoreKey is the string store key for the param store + StoreKey = "params" + + / TStoreKey is the string store key for the param transient store + TStoreKey = "transient_params" +) + +/ Individual parameter store for each keeper +/ Transient store persists for a block, so we use it for +/ recording whether the parameter has been changed or not +type Subspace struct { + cdc codec.BinaryCodec + legacyAmino *codec.LegacyAmino + key storetypes.StoreKey / []byte -> []byte, stores parameter + tkey storetypes.StoreKey / []byte -> bool, stores parameter change + name []byte + table KeyTable +} + +/ NewSubspace constructs a store with namestore +func NewSubspace(cdc codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey, name string) + +Subspace { + return Subspace{ + cdc: cdc, + legacyAmino: legacyAmino, + key: key, + tkey: tkey, + name: []byte(name), + table: NewKeyTable(), +} +} + +/ HasKeyTable returns if the Subspace has a KeyTable registered. 
+func (s Subspace) + +HasKeyTable() + +bool { + return len(s.table.m) > 0 +} + +/ WithKeyTable initializes KeyTable and returns modified Subspace +func (s Subspace) + +WithKeyTable(table KeyTable) + +Subspace { + if table.m == nil { + panic("WithKeyTable() + +called with nil KeyTable") +} + if len(s.table.m) != 0 { + panic("WithKeyTable() + +called on already initialized Subspace") +} + +maps.Copy(s.table.m, table.m) + + / Allocate additional capacity for Subspace.name + / So we don't have to allocate extra space each time appending to the key + name := s.name + s.name = make([]byte, len(name), len(name)+table.maxKeyLength()) + +copy(s.name, name) + +return s +} + +/ Returns a KVStore identical with ctx.KVStore(s.key).Prefix() + +func (s Subspace) + +kvStore(ctx sdk.Context) + +storetypes.KVStore { + / append here is safe, appends within a function won't cause + / weird side effects when its singlethreaded + return prefix.NewStore(ctx.KVStore(s.key), append(s.name, '/')) +} + +/ Returns a transient store for modification +func (s Subspace) + +transientStore(ctx sdk.Context) + +storetypes.KVStore { + / append here is safe, appends within a function won't cause + / weird side effects when its singlethreaded + return prefix.NewStore(ctx.TransientStore(s.tkey), append(s.name, '/')) +} + +/ Validate attempts to validate a parameter value by its key. If the key is not +/ registered or if the validation of the value fails, an error is returned. +func (s Subspace) + +Validate(ctx sdk.Context, key []byte, value any) + +error { + attr, ok := s.table.m[string(key)] + if !ok { + return fmt.Errorf("parameter %s not registered", key) +} + if err := attr.vfn(value); err != nil { + return fmt.Errorf("invalid parameter value: %w", err) +} + +return nil +} + +/ Get queries for a parameter by key from the Subspace's KVStore and sets the +/ value to the provided pointer. If the value does not exist, it will panic. 
+func (s Subspace) + +Get(ctx sdk.Context, key []byte, ptr any) { + s.checkType(key, ptr) + store := s.kvStore(ctx) + bz := store.Get(key) + if err := s.legacyAmino.UnmarshalJSON(bz, ptr); err != nil { + panic(err) +} +} + +/ GetIfExists queries for a parameter by key from the Subspace's KVStore and +/ sets the value to the provided pointer. If the value does not exist, it will +/ perform a no-op. +func (s Subspace) + +GetIfExists(ctx sdk.Context, key []byte, ptr any) { + store := s.kvStore(ctx) + bz := store.Get(key) + if bz == nil { + return +} + +s.checkType(key, ptr) + if err := s.legacyAmino.UnmarshalJSON(bz, ptr); err != nil { + panic(err) +} +} + +/ IterateKeys iterates over all the keys in the subspace and executes the +/ provided callback. If the callback returns true for a given key, iteration +/ will halt. +func (s Subspace) + +IterateKeys(ctx sdk.Context, cb func(key []byte) + +bool) { + store := s.kvStore(ctx) + iter := storetypes.KVStorePrefixIterator(store, nil) + +defer iter.Close() + for ; iter.Valid(); iter.Next() { + if cb(iter.Key()) { + break +} + +} +} + +/ GetRaw queries for the raw values bytes for a parameter by key. +func (s Subspace) + +GetRaw(ctx sdk.Context, key []byte) []byte { + store := s.kvStore(ctx) + +return store.Get(key) +} + +/ Has returns if a parameter key exists or not in the Subspace's KVStore. +func (s Subspace) + +Has(ctx sdk.Context, key []byte) + +bool { + store := s.kvStore(ctx) + +return store.Has(key) +} + +/ Modified returns true if the parameter key is set in the Subspace's transient +/ KVStore. +func (s Subspace) + +Modified(ctx sdk.Context, key []byte) + +bool { + tstore := s.transientStore(ctx) + +return tstore.Has(key) +} + +/ checkType verifies that the provided key and value are comptable and registered. 
+func (s Subspace) + +checkType(key []byte, value any) { + attr, ok := s.table.m[string(key)] + if !ok { + panic(fmt.Sprintf("parameter %s not registered", key)) +} + ty := attr.ty + pty := reflect.TypeOf(value) + if pty.Kind() == reflect.Ptr { + pty = pty.Elem() +} + if pty != ty { + panic("type mismatch with registered table") +} +} + +/ Set stores a value for given a parameter key assuming the parameter type has +/ been registered. It will panic if the parameter type has not been registered +/ or if the value cannot be encoded. A change record is also set in the Subspace's +/ transient KVStore to mark the parameter as modified. +func (s Subspace) + +Set(ctx sdk.Context, key []byte, value any) { + s.checkType(key, value) + store := s.kvStore(ctx) + +bz, err := s.legacyAmino.MarshalJSON(value) + if err != nil { + panic(err) +} + +store.Set(key, bz) + tstore := s.transientStore(ctx) + +tstore.Set(key, []byte{ +}) +} + +/ Update stores an updated raw value for a given parameter key assuming the +/ parameter type has been registered. It will panic if the parameter type has +/ not been registered or if the value cannot be encoded. An error is returned +/ if the raw value is not compatible with the registered type for the parameter +/ key or if the new value is invalid as determined by the registered type's +/ validation function. +func (s Subspace) + +Update(ctx sdk.Context, key, value []byte) + +error { + attr, ok := s.table.m[string(key)] + if !ok { + panic(fmt.Sprintf("parameter %s not registered", key)) +} + ty := attr.ty + dest := reflect.New(ty).Interface() + +s.GetIfExists(ctx, key, dest) + if err := s.legacyAmino.UnmarshalJSON(value, dest); err != nil { + return err +} + + / destValue contains the dereferenced value of dest so validation function do + / not have to operate on pointers. 
+ destValue := reflect.Indirect(reflect.ValueOf(dest)).Interface() + if err := s.Validate(ctx, key, destValue); err != nil { + return err +} + +s.Set(ctx, key, dest) + +return nil +} + +/ GetParamSet iterates through each ParamSetPair where for each pair, it will +/ retrieve the value and set it to the corresponding value pointer provided +/ in the ParamSetPair by calling Subspace#Get. +func (s Subspace) + +GetParamSet(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + s.Get(ctx, pair.Key, pair.Value) +} +} + +/ GetParamSetIfExists iterates through each ParamSetPair where for each pair, it will +/ retrieve the value and set it to the corresponding value pointer provided +/ in the ParamSetPair by calling Subspace#GetIfExists. +func (s Subspace) + +GetParamSetIfExists(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + s.GetIfExists(ctx, pair.Key, pair.Value) +} +} + +/ SetParamSet iterates through each ParamSetPair and sets the value with the +/ corresponding parameter key in the Subspace's KVStore. +func (s Subspace) + +SetParamSet(ctx sdk.Context, ps ParamSet) { + for _, pair := range ps.ParamSetPairs() { + / pair.Field is a pointer to the field, so indirecting the ptr. + / go-amino automatically handles it but just for sure, + / since SetStruct is meant to be used in InitGenesis + / so this method will not be called frequently + v := reflect.Indirect(reflect.ValueOf(pair.Value)).Interface() + if err := pair.ValidatorFn(v); err != nil { + panic(fmt.Sprintf("value from ParamSetPair is invalid: %s", err)) +} + +s.Set(ctx, pair.Key, v) +} +} + +/ Name returns the name of the Subspace. +func (s Subspace) + +Name() + +string { + return string(s.name) +} + +/ Wrapper of Subspace, provides immutable functions only +type ReadOnlySubspace struct { + s Subspace +} + +/ Get delegates a read-only Get call to the Subspace. 
+func (ros ReadOnlySubspace) + +Get(ctx sdk.Context, key []byte, ptr any) { + ros.s.Get(ctx, key, ptr) +} + +/ GetRaw delegates a read-only GetRaw call to the Subspace. +func (ros ReadOnlySubspace) + +GetRaw(ctx sdk.Context, key []byte) []byte { + return ros.s.GetRaw(ctx, key) +} + +/ Has delegates a read-only Has call to the Subspace. +func (ros ReadOnlySubspace) + +Has(ctx sdk.Context, key []byte) + +bool { + return ros.s.Has(ctx, key) +} + +/ Modified delegates a read-only Modified call to the Subspace. +func (ros ReadOnlySubspace) + +Modified(ctx sdk.Context, key []byte) + +bool { + return ros.s.Modified(ctx, key) +} + +/ Name delegates a read-only Name call to the Subspace. +func (ros ReadOnlySubspace) + +Name() + +string { + return ros.s.Name() +} +``` + +Transient stores are typically accessed via the [`context`](/docs/sdk/v0.53/documentation/application-framework/context) via the `TransientStore()` method: + +```go expandable +package types + +import ( + + "context" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/header" + "cosmossdk.io/log" + "cosmossdk.io/store/gaskv" + storetypes "cosmossdk.io/store/types" +) + +/ ExecMode defines the execution mode which can be set on a Context. +type ExecMode uint8 + +/ All possible execution modes. +const ( + ExecModeCheck ExecMode = iota + ExecModeReCheck + ExecModeSimulate + ExecModePrepareProposal + ExecModeProcessProposal + ExecModeVoteExtension + ExecModeVerifyVoteExtension + ExecModeFinalize +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. 
We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + sigverifyTx bool / when run simulation, because the private key corresponding to the account in the genesis.json randomly generated, we must skip the sigverify. + execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + 
+BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + +IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +IsSigverifyTx() + +bool { + return c.sigverifyTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ BlockHeader returns the header by value. 
+func (c Context) + +BlockHeader() + +cmtproto.Header { + return c.header +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + sigverifyTx: true, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. +func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. 
+func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) + +c.headerHash = temp + return c +} + +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. +func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. +func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. +func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. 
+func (c Context) + +WithGasMeter(meter storetypes.GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter storetypes.GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. +func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} + +/ WithIsSigverifyTx called with true will sigverify in auth module +func (c Context) + +WithIsSigverifyTx(isSigverifyTx bool) + +Context { + c.sigverifyTx = isSigverifyTx + return c +} + +/ WithExecMode returns a Context with an updated ExecMode. 
+func (c Context)
+
+WithExecMode(m ExecMode)
+
+Context {
+ c.execMode = m
+ return c
+}
+
+/ WithMinGasPrices returns a Context with an updated minimum gas price value
+func (c Context)
+
+WithMinGasPrices(gasPrices DecCoins)
+
+Context {
+ c.minGasPrice = gasPrices
+ return c
+}
+
+/ WithConsensusParams returns a Context with an updated consensus params
+func (c Context)
+
+WithConsensusParams(params cmtproto.ConsensusParams)
+
+Context {
+ c.consParams = params
+ return c
+}
+
+/ WithEventManager returns a Context with an updated event manager
+func (c Context)
+
+WithEventManager(em EventManagerI)
+
+Context {
+ c.eventManager = em
+ return c
+}
+
+/ WithPriority returns a Context with an updated tx priority
+func (c Context)
+
+WithPriority(p int64)
+
+Context {
+ c.priority = p
+ return c
+}
+
+/ WithStreamingManager returns a Context with an updated streaming manager
+func (c Context)
+
+WithStreamingManager(sm storetypes.StreamingManager)
+
+Context {
+ c.streamingManager = sm
+ return c
+}
+
+/ WithCometInfo returns a Context with an updated comet info
+func (c Context)
+
+WithCometInfo(cometInfo comet.BlockInfo)
+
+Context {
+ c.cometInfo = cometInfo
+ return c
+}
+
+/ WithHeaderInfo returns a Context with an updated header info
+func (c Context)
+
+WithHeaderInfo(headerInfo header.Info)
+
+Context {
+ / Set time to UTC
+ headerInfo.Time = headerInfo.Time.UTC()
+
+c.headerInfo = headerInfo
+ return c
+}
+
+/ TODO: remove???
+func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value any) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key any) + +any { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. 
It is useful for passing an sdk.Context through methods that take a
+/ stdlib context.Context parameter such as generated gRPC methods. To get the original
+/ sdk.Context back, call UnwrapSDKContext.
+/
+/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context.
+func WrapSDKContext(ctx Context)
+
+context.Context {
+ return ctx
+}
+
+/ UnwrapSDKContext retrieves a Context from a context.Context instance
+/ attached with WrapSDKContext. It panics if a Context was not properly
+/ attached.
+func UnwrapSDKContext(ctx context.Context)
+
+Context {
+ if sdkCtx, ok := ctx.(Context); ok {
+ return sdkCtx
+}
+
+return ctx.Value(SdkContextKey).(Context)
+}
+```
+
+## KVStore Wrappers
+
+### CacheKVStore
+
+`cachekv.Store` is a `KVStore` wrapper that provides buffered writes and cached reads over the underlying `KVStore`.
+
+```go expandable
+package cachekv
+
+import (
+
+ "bytes" "io" "sort" "sync"
+
+ dbm "github.com/cosmos/cosmos-db"
+ "cosmossdk.io/math"
+ "cosmossdk.io/store/cachekv/internal"
+ "cosmossdk.io/store/internal/conv"
+ "cosmossdk.io/store/internal/kv"
+ "cosmossdk.io/store/tracekv"
+ "cosmossdk.io/store/types"
+)
+
+/ cValue represents a cached value.
+/ If dirty is true, it indicates the cached value is different from the underlying value.
+type cValue struct {
+ value []byte
+ dirty bool
+}
+
+/ Store wraps an in-memory cache around an underlying types.KVStore.
+type Store struct {
+ mtx sync.Mutex
+ cache map[string]*cValue
+ unsortedCache map[string]struct{
+}
+
+sortedCache internal.BTree / always ascending sorted
+ parent types.KVStore
+}
+
+var _ types.CacheKVStore = (*Store)(nil)
+
+/ NewStore creates a new Store object
+func NewStore(parent types.KVStore) *Store {
+ return &Store{
+ cache: make(map[string]*cValue),
+ unsortedCache: make(map[string]struct{
+}),
+ sortedCache: internal.NewBTree(),
+ parent: parent,
+}
+}
+
+/ GetStoreType implements Store.
+func (store *Store) + +GetStoreType() + +types.StoreType { + return store.parent.GetStoreType() +} + +/ Get implements types.KVStore. +func (store *Store) + +Get(key []byte) (value []byte) { + store.mtx.Lock() + +defer store.mtx.Unlock() + +types.AssertValidKey(key) + +cacheValue, ok := store.cache[conv.UnsafeBytesToStr(key)] + if !ok { + value = store.parent.Get(key) + +store.setCacheValue(key, value, false) +} + +else { + value = cacheValue.value +} + +return value +} + +/ Set implements types.KVStore. +func (store *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +types.AssertValidValue(value) + +store.mtx.Lock() + +defer store.mtx.Unlock() + +store.setCacheValue(key, value, true) +} + +/ Has implements types.KVStore. +func (store *Store) + +Has(key []byte) + +bool { + value := store.Get(key) + +return value != nil +} + +/ Delete implements types.KVStore. +func (store *Store) + +Delete(key []byte) { + types.AssertValidKey(key) + +store.mtx.Lock() + +defer store.mtx.Unlock() + +store.setCacheValue(key, nil, true) +} + +func (store *Store) + +resetCaches() { + if len(store.cache) > 100_000 { + / Cache is too large. We likely did something linear time + / (e.g. Epoch block, Genesis block, etc). Free the old caches from memory, and let them get re-allocated. + / TODO: In a future CacheKV redesign, such linear workloads should get into a different cache instantiation. + / 100_000 is arbitrarily chosen as it solved Osmosis' InitGenesis RAM problem. + store.cache = make(map[string]*cValue) + +store.unsortedCache = make(map[string]struct{ +}) +} + +else { + / Clear the cache using the map clearing idiom + / and not allocating fresh objects. + / Please see https://bencher.orijtech.com/perfclinic/mapclearing/ + for key := range store.cache { + delete(store.cache, key) +} + for key := range store.unsortedCache { + delete(store.unsortedCache, key) +} + +} + +store.sortedCache = internal.NewBTree() +} + +/ Implements Cachetypes.KVStore. 
+func (store *Store) + +Write() { + store.mtx.Lock() + +defer store.mtx.Unlock() + if len(store.cache) == 0 && len(store.unsortedCache) == 0 { + store.sortedCache = internal.NewBTree() + +return +} + +type cEntry struct { + key string + val *cValue +} + + / We need a copy of all of the keys. + / Not the best. To reduce RAM pressure, we copy the values as well + / and clear out the old caches right after the copy. + sortedCache := make([]cEntry, 0, len(store.cache)) + for key, dbValue := range store.cache { + if dbValue.dirty { + sortedCache = append(sortedCache, cEntry{ + key, dbValue +}) +} + +} + +store.resetCaches() + +sort.Slice(sortedCache, func(i, j int) + +bool { + return sortedCache[i].key < sortedCache[j].key +}) + + / TODO: Consider allowing usage of Batch, which would allow the write to + / at least happen atomically. + for _, obj := range sortedCache { + / We use []byte(key) + +instead of conv.UnsafeStrToBytes because we cannot + / be sure if the underlying store might do a save with the byteslice or + / not. Once we get confirmation that .Delete is guaranteed not to + / save the byteslice, then we can assume only a read-only copy is sufficient. + if obj.val.value != nil { + / It already exists in the parent, hence update it. + store.parent.Set([]byte(obj.key), obj.val.value) +} + +else { + store.parent.Delete([]byte(obj.key)) +} + +} +} + +/ CacheWrap implements CacheWrapper. +func (store *Store) + +CacheWrap() + +types.CacheWrap { + return NewStore(store) +} + +/ CacheWrapWithTrace implements the CacheWrapper interface. +func (store *Store) + +CacheWrapWithTrace(w io.Writer, tc types.TraceContext) + +types.CacheWrap { + return NewStore(tracekv.NewStore(store, w, tc)) +} + +/---------------------------------------- +/ Iteration + +/ Iterator implements types.KVStore. +func (store *Store) + +Iterator(start, end []byte) + +types.Iterator { + return store.iterator(start, end, true) +} + +/ ReverseIterator implements types.KVStore. 
+func (store *Store)
+
+ReverseIterator(start, end []byte)
+
+types.Iterator {
+ return store.iterator(start, end, false)
+}
+
+func (store *Store)
+
+iterator(start, end []byte, ascending bool)
+
+types.Iterator {
+ store.mtx.Lock()
+
+defer store.mtx.Unlock()
+
+store.dirtyItems(start, end)
+ isoSortedCache := store.sortedCache.Copy()
+
+var (
+ err error
+ parent, cache types.Iterator
+ )
+ if ascending {
+ parent = store.parent.Iterator(start, end)
+
+cache, err = isoSortedCache.Iterator(start, end)
+}
+
+else {
+ parent = store.parent.ReverseIterator(start, end)
+
+cache, err = isoSortedCache.ReverseIterator(start, end)
+}
+ if err != nil {
+ panic(err)
+}
+
+return internal.NewCacheMergeIterator(parent, cache, ascending)
+}
+
+func findStartIndex(strL []string, startQ string)
+
+int {
+ / Modified binary search to find the very first element in >=startQ.
+ if len(strL) == 0 {
+ return -1
+}
+
+var left, right, mid int
+ right = len(strL) - 1
+ for left <= right {
+ mid = (left + right) >> 1
+ midStr := strL[mid]
+ if midStr == startQ {
+ / Handle condition where there might be multiple values equal to startQ.
+ / We are looking for the very first value < midStL, that i+1 will be the first
+ / element >= midStr.
+ for i := mid - 1; i >= 0; i-- {
+ if strL[i] != midStr {
+ return i + 1
+}
+
+}
+
+return 0
+}
+ if midStr < startQ {
+ left = mid + 1
+}
+
+else { / midStrL > startQ
+ right = mid - 1
+}
+
+}
+ if left >= 0 && left < len(strL) && strL[left] >= startQ {
+ return left
+}
+
+return -1
+}
+
+func findEndIndex(strL []string, endQ string)
+
+int {
+ if len(strL) == 0 {
+ return -1
+}
+
+ / Modified binary search to find the very first element > endQ.
+ var left, right, mid int
+ right = len(strL) - 1
+ for left <= right {
+ mid = (left + right) >> 1
+ midStr := strL[mid]
+ if midStr == endQ {
+ / Handle condition where there might be multiple values equal to startQ.
+ / We are looking for the very first value < midStL, that i+1 will be the first
+ / element >= midStr.
+ for i := mid - 1; i >= 0; i-- {
+ if strL[i] < midStr {
+ return i + 1
+}
+
+}
+
+return 0
+}
+ if midStr < endQ {
+ left = mid + 1
+}
+
+else { / midStrL > startQ
+ right = mid - 1
+}
+
+}
+
+ / Binary search failed, now let's find a value less than endQ.
+ for i := right; i >= 0; i-- {
+ if strL[i] < endQ {
+ return i
+}
+
+}
+
+return -1
+}
+
+type sortState int
+
+const (
+ stateUnsorted sortState = iota
+ stateAlreadySorted
+)
+
+const minSortSize = 1024
+
+/ Constructs a slice of dirty items, to use w/ memIterator.
+func (store *Store)
+
+dirtyItems(start, end []byte) {
+ startStr, endStr := conv.UnsafeBytesToStr(start), conv.UnsafeBytesToStr(end)
+ if end != nil && startStr > endStr {
+ / Nothing to do here.
+ return
+}
+ n := len(store.unsortedCache)
+ unsorted := make([]*kv.Pair, 0)
+ / If the unsortedCache is too big, it costs too much to determine
+ / what's in the subset we are concerned about.
+ / If you are interleaving iterator calls with writes, this can easily become an
+ / O(N^2) overhead.
+ / Even without that, too many range checks eventually become more expensive
+ / than just not having the cache.
+ if n < minSortSize {
+ for key := range store.unsortedCache {
+ / dbm.IsKeyInDomain is nil safe and returns true iff key is greater than start
+ if dbm.IsKeyInDomain(conv.UnsafeStrToBytes(key), start, end) {
+ cacheValue := store.cache[key]
+ unsorted = append(unsorted, &kv.Pair{
+ Key: []byte(key),
+ Value: cacheValue.value
+})
+}
+
+}
+
+store.clearUnsortedCacheSubset(unsorted, stateUnsorted)
+
+return
+}
+
+ / Otherwise it is large so perform a modified binary search to find
+ / the target ranges for the keys that we should be looking for.
+ strL := make([]string, 0, n) + for key := range store.unsortedCache { + strL = append(strL, key) +} + +sort.Strings(strL) + + / Now find the values within the domain + / [start, end) + startIndex := findStartIndex(strL, startStr) + if startIndex < 0 { + startIndex = 0 +} + +var endIndex int + if end == nil { + endIndex = len(strL) - 1 +} + +else { + endIndex = findEndIndex(strL, endStr) +} + if endIndex < 0 { + endIndex = len(strL) - 1 +} + + / Since we spent cycles to sort the values, we should process and remove a reasonable amount + / ensure start to end is at least minSortSize in size + / if below minSortSize, expand it to cover additional values + / this amortizes the cost of processing elements across multiple calls + if endIndex-startIndex < minSortSize { + endIndex = math.Min(startIndex+minSortSize, len(strL)-1) + if endIndex-startIndex < minSortSize { + startIndex = math.Max(endIndex-minSortSize, 0) +} + +} + kvL := make([]*kv.Pair, 0, 1+endIndex-startIndex) + for i := startIndex; i <= endIndex; i++ { + key := strL[i] + cacheValue := store.cache[key] + kvL = append(kvL, &kv.Pair{ + Key: []byte(key), + Value: cacheValue.value +}) +} + + / kvL was already sorted so pass it in as is. + store.clearUnsortedCacheSubset(kvL, stateAlreadySorted) +} + +func (store *Store) + +clearUnsortedCacheSubset(unsorted []*kv.Pair, sortState sortState) { + n := len(store.unsortedCache) + if len(unsorted) == n { / This pattern allows the Go compiler to emit the map clearing idiom for the entire map. + for key := range store.unsortedCache { + delete(store.unsortedCache, key) +} + +} + +else { / Otherwise, normally delete the unsorted keys from the map. 
+ for _, kv := range unsorted {
+ delete(store.unsortedCache, conv.UnsafeBytesToStr(kv.Key))
+}
+
+}
+ if sortState == stateUnsorted {
+ sort.Slice(unsorted, func(i, j int)
+
+bool {
+ return bytes.Compare(unsorted[i].Key, unsorted[j].Key) < 0
+})
+}
+ for _, item := range unsorted {
+ / sortedCache is able to store `nil` value to represent deleted items.
+ store.sortedCache.Set(item.Key, item.Value)
+}
+}
+
+/----------------------------------------
+/ etc
+
+/ Only entrypoint to mutate store.cache.
+/ A `nil` value means a deletion.
+func (store *Store)
+
+setCacheValue(key, value []byte, dirty bool) {
+ keyStr := conv.UnsafeBytesToStr(key)
+
+store.cache[keyStr] = &cValue{
+ value: value,
+ dirty: dirty,
+}
+ if dirty {
+ store.unsortedCache[keyStr] = struct{
+}{
+}
+
+}
+}
+```
+
+This is the type used whenever an IAVL Store needs to be branched to create an isolated store (typically when we need to mutate state that might be reverted later).
+
+#### `Get`
+
+`Store.Get()` first checks whether `Store.cache` holds a value for the key. If the value exists, the function returns it. If not, the function calls `Store.parent.Get()`, caches the result in `Store.cache`, and returns it.
+
+#### `Set`
+
+`Store.Set()` writes the key-value pair to `Store.cache`. `cValue` has a `dirty` boolean field that indicates whether the cached value differs from the underlying value. When `Store.Set()` caches a new pair, `cValue.dirty` is set to `true`, so that when `Store.Write()` is called the pair is written to the underlying store.
+
+#### `Iterator`
+
+`Store.Iterator()` has to traverse both the cached items and the original items. In `Store.iterator()`, an iterator is generated for each, and the two are merged. `memIterator` is essentially a slice of `KVPair`s, used for the cached items. `mergeIterator` combines the two iterators, traversing both in key order.
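The buffered-write behavior described above can be sketched in plain Go. The following is a simplified, self-contained illustration (a toy map-backed parent instead of a real `types.KVStore`; the `cacheStore` type and its methods are hypothetical names for this sketch, not the actual `cachekv` API): reads fall through to the parent and are cached, writes stay buffered as dirty entries, and `Write()` flushes dirty entries to the parent in sorted key order.

```go
package main

import (
	"fmt"
	"sort"
)

// cached mirrors cachekv's cValue: a value plus a dirty flag.
type cached struct {
	value []byte
	dirty bool
}

// cacheStore is a toy stand-in for cachekv.Store, with a map as the parent.
type cacheStore struct {
	parent map[string][]byte
	cache  map[string]cached
}

func newCacheStore(parent map[string][]byte) *cacheStore {
	return &cacheStore{parent: parent, cache: map[string]cached{}}
}

// Get checks the cache first and falls back to the parent, caching the result.
func (s *cacheStore) Get(key string) []byte {
	if cv, ok := s.cache[key]; ok {
		return cv.value
	}
	v := s.parent[key]
	s.cache[key] = cached{value: v, dirty: false}
	return v
}

// Set only touches the cache; the parent is untouched until Write.
func (s *cacheStore) Set(key string, value []byte) {
	s.cache[key] = cached{value: value, dirty: true}
}

// Delete is a Set with a nil value, mirroring cachekv's tombstone convention.
func (s *cacheStore) Delete(key string) {
	s.cache[key] = cached{value: nil, dirty: true}
}

// Write flushes dirty entries to the parent in sorted key order,
// then resets the cache, as cachekv.Store.Write does.
func (s *cacheStore) Write() {
	keys := make([]string, 0, len(s.cache))
	for k, cv := range s.cache {
		if cv.dirty {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	for _, k := range keys {
		if v := s.cache[k].value; v != nil {
			s.parent[k] = v
		} else {
			delete(s.parent, k)
		}
	}
	s.cache = map[string]cached{}
}

func main() {
	parent := map[string][]byte{"a": []byte("1")}
	cs := newCacheStore(parent)
	cs.Set("b", []byte("2"))
	cs.Delete("a")
	fmt.Println(len(parent)) // parent still unchanged before Write
	cs.Write()
	fmt.Println(len(parent)) // "b" added, "a" deleted
}
```

Flushing in sorted key order is a deliberate design choice in the real store: it makes the order of writes to the parent deterministic regardless of map iteration order, which matters for state-machine determinism.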
+
+### `GasKv` Store
+
+Cosmos SDK applications use [`gas`](/docs/sdk/v0.53/documentation/protocol-development/gas-fees) to track resource usage and prevent spam. [`GasKv.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/gaskv/store.go) is a `KVStore` wrapper that enables automatic gas consumption each time a read or write to the store is made. It is the solution of choice to track storage usage in Cosmos SDK applications.
+
+```go expandable
+package gaskv
+
+import (
+
+ "io"
+ "cosmossdk.io/store/types"
+)
+
+var _ types.KVStore = &Store{
+}
+
+/ Store applies gas tracking to an underlying KVStore. It implements the
+/ KVStore interface.
+type Store struct {
+ gasMeter types.GasMeter
+ gasConfig types.GasConfig
+ parent types.KVStore
+}
+
+/ NewStore returns a reference to a new GasKVStore.
+func NewStore(parent types.KVStore, gasMeter types.GasMeter, gasConfig types.GasConfig) *Store {
+ kvs := &Store{
+ gasMeter: gasMeter,
+ gasConfig: gasConfig,
+ parent: parent,
+}
+
+return kvs
+}
+
+/ Implements Store.
+func (gs *Store)
+
+GetStoreType()
+
+types.StoreType {
+ return gs.parent.GetStoreType()
+}
+
+/ Implements KVStore.
+func (gs *Store)
+
+Get(key []byte) (value []byte) {
+ gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostFlat, types.GasReadCostFlatDesc)
+
+value = gs.parent.Get(key)
+
+ / TODO overflow-safe math?
+ gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostPerByte*types.Gas(len(key)), types.GasReadPerByteDesc)
+
+gs.gasMeter.ConsumeGas(gs.gasConfig.ReadCostPerByte*types.Gas(len(value)), types.GasReadPerByteDesc)
+
+return value
+}
+
+/ Implements KVStore.
+func (gs *Store)
+
+Set(key, value []byte) {
+ types.AssertValidKey(key)
+
+types.AssertValidValue(value)
+
+gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostFlat, types.GasWriteCostFlatDesc)
+ / TODO overflow-safe math?
+ gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostPerByte*types.Gas(len(key)), types.GasWritePerByteDesc) + +gs.gasMeter.ConsumeGas(gs.gasConfig.WriteCostPerByte*types.Gas(len(value)), types.GasWritePerByteDesc) + +gs.parent.Set(key, value) +} + +/ Implements KVStore. +func (gs *Store) + +Has(key []byte) + +bool { + gs.gasMeter.ConsumeGas(gs.gasConfig.HasCost, types.GasHasDesc) + +return gs.parent.Has(key) +} + +/ Implements KVStore. +func (gs *Store) + +Delete(key []byte) { + / charge gas to prevent certain attack vectors even though space is being freed + gs.gasMeter.ConsumeGas(gs.gasConfig.DeleteCost, types.GasDeleteDesc) + +gs.parent.Delete(key) +} + +/ Iterator implements the KVStore interface. It returns an iterator which +/ incurs a flat gas cost for seeking to the first key/value pair and a variable +/ gas cost based on the current value's length if the iterator is valid. +func (gs *Store) + +Iterator(start, end []byte) + +types.Iterator { + return gs.iterator(start, end, true) +} + +/ ReverseIterator implements the KVStore interface. It returns a reverse +/ iterator which incurs a flat gas cost for seeking to the first key/value pair +/ and a variable gas cost based on the current value's length if the iterator +/ is valid. +func (gs *Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + return gs.iterator(start, end, false) +} + +/ Implements KVStore. +func (gs *Store) + +CacheWrap() + +types.CacheWrap { + panic("cannot CacheWrap a GasKVStore") +} + +/ CacheWrapWithTrace implements the KVStore interface. 
+func (gs *Store) + +CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) + +types.CacheWrap { + panic("cannot CacheWrapWithTrace a GasKVStore") +} + +func (gs *Store) + +iterator(start, end []byte, ascending bool) + +types.Iterator { + var parent types.Iterator + if ascending { + parent = gs.parent.Iterator(start, end) +} + +else { + parent = gs.parent.ReverseIterator(start, end) +} + gi := newGasIterator(gs.gasMeter, gs.gasConfig, parent) + +gi.(*gasIterator).consumeSeekGas() + +return gi +} + +type gasIterator struct { + gasMeter types.GasMeter + gasConfig types.GasConfig + parent types.Iterator +} + +func newGasIterator(gasMeter types.GasMeter, gasConfig types.GasConfig, parent types.Iterator) + +types.Iterator { + return &gasIterator{ + gasMeter: gasMeter, + gasConfig: gasConfig, + parent: parent, +} +} + +/ Implements Iterator. +func (gi *gasIterator) + +Domain() (start, end []byte) { + return gi.parent.Domain() +} + +/ Implements Iterator. +func (gi *gasIterator) + +Valid() + +bool { + return gi.parent.Valid() +} + +/ Next implements the Iterator interface. It seeks to the next key/value pair +/ in the iterator. It incurs a flat gas cost for seeking and a variable gas +/ cost based on the current value's length if the iterator is valid. +func (gi *gasIterator) + +Next() { + gi.consumeSeekGas() + +gi.parent.Next() +} + +/ Key implements the Iterator interface. It returns the current key and it does +/ not incur any gas cost. +func (gi *gasIterator) + +Key() (key []byte) { + key = gi.parent.Key() + +return key +} + +/ Value implements the Iterator interface. It returns the current value and it +/ does not incur any gas cost. +func (gi *gasIterator) + +Value() (value []byte) { + value = gi.parent.Value() + +return value +} + +/ Implements Iterator. +func (gi *gasIterator) + +Close() + +error { + return gi.parent.Close() +} + +/ Error delegates the Error call to the parent iterator. 
+func (gi *gasIterator)
+
+Error()
+
+error {
+ return gi.parent.Error()
+}
+
+/ consumeSeekGas consumes on each iteration step a flat gas cost and a variable gas cost
+/ based on the current value's length.
+func (gi *gasIterator)
+
+consumeSeekGas() {
+ if gi.Valid() {
+ key := gi.Key()
+ value := gi.Value()
+
+gi.gasMeter.ConsumeGas(gi.gasConfig.ReadCostPerByte*types.Gas(len(key)), types.GasValuePerByteDesc)
+
+gi.gasMeter.ConsumeGas(gi.gasConfig.ReadCostPerByte*types.Gas(len(value)), types.GasValuePerByteDesc)
+}
+
+gi.gasMeter.ConsumeGas(gi.gasConfig.IterNextCostFlat, types.GasIterNextCostFlatDesc)
+}
+```
+
+When methods of the parent `KVStore` are called, `GasKv.Store` automatically consumes the appropriate amount of gas, depending on the `Store.gasConfig`:
+
+```go expandable
+package types
+
+import (
+
+ "fmt" "math"
+)
+
+/ Gas consumption descriptors.
+const (
+ GasIterNextCostFlatDesc = "IterNextFlat"
+ GasValuePerByteDesc = "ValuePerByte"
+ GasWritePerByteDesc = "WritePerByte"
+ GasReadPerByteDesc = "ReadPerByte"
+ GasWriteCostFlatDesc = "WriteFlat"
+ GasReadCostFlatDesc = "ReadFlat"
+ GasHasDesc = "Has"
+ GasDeleteDesc = "Delete"
+)
+
+/ Gas measured by the SDK
+type Gas = uint64
+
+/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a
+/ negative gas consumed amount.
+type ErrorNegativeGasConsumed struct {
+ Descriptor string
+}
+
+/ ErrorOutOfGas defines an error thrown when an action results in out of gas.
+type ErrorOutOfGas struct {
+ Descriptor string
+}
+
+/ ErrorGasOverflow defines an error thrown when an action results in gas consumption
+/ unsigned integer overflow.
+type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the trasaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. 
+func (g *infiniteGasMeter) + +String() + +string { + return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed) +} + +/ GasConfig defines gas cost for each operation on KVStores +type GasConfig struct { + HasCost Gas + DeleteCost Gas + ReadCostFlat Gas + ReadCostPerByte Gas + WriteCostFlat Gas + WriteCostPerByte Gas + IterNextCostFlat Gas +} + +/ KVGasConfig returns a default gas config for KVStores. +func KVGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 1000, + DeleteCost: 1000, + ReadCostFlat: 1000, + ReadCostPerByte: 3, + WriteCostFlat: 2000, + WriteCostPerByte: 30, + IterNextCostFlat: 30, +} +} + +/ TransientGasConfig returns a default gas config for TransientStores. +func TransientGasConfig() + +GasConfig { + return GasConfig{ + HasCost: 100, + DeleteCost: 100, + ReadCostFlat: 100, + ReadCostPerByte: 0, + WriteCostFlat: 200, + WriteCostPerByte: 3, + IterNextCostFlat: 3, +} +} +``` + +By default, all `KVStores` are wrapped in `GasKv.Stores` when retrieved. This is done in the `KVStore()` method of the [`context`](/docs/sdk/v0.53/documentation/application-framework/context): + +```go expandable +package types + +import ( + + "context" + "time" + + abci "github.com/cometbft/cometbft/abci/types" + cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" + "cosmossdk.io/core/comet" + "cosmossdk.io/core/header" + "cosmossdk.io/log" + "cosmossdk.io/store/gaskv" + storetypes "cosmossdk.io/store/types" +) + +/ ExecMode defines the execution mode which can be set on a Context. +type ExecMode uint8 + +/ All possible execution modes. +const ( + ExecModeCheck ExecMode = iota + ExecModeReCheck + ExecModeSimulate + ExecModePrepareProposal + ExecModeProcessProposal + ExecModeVoteExtension + ExecModeVerifyVoteExtension + ExecModeFinalize +) + +/* +Context is an immutable object contains all information needed to +process a request. + +It contains a context.Context object inside if you want to use that, +but please do not over-use it. 
We try to keep all data structured +and standard additions here would be better just to add to the Context struct +*/ +type Context struct { + baseCtx context.Context + ms storetypes.MultiStore + / Deprecated: Use HeaderService for height, time, and chainID and CometService for the rest + header cmtproto.Header + / Deprecated: Use HeaderService for hash + headerHash []byte + / Deprecated: Use HeaderService for chainID and CometService for the rest + chainID string + txBytes []byte + logger log.Logger + voteInfo []abci.VoteInfo + gasMeter storetypes.GasMeter + blockGasMeter storetypes.GasMeter + checkTx bool + recheckTx bool / if recheckTx == true, then checkTx must also be true + sigverifyTx bool / when run simulation, because the private key corresponding to the account in the genesis.json randomly generated, we must skip the sigverify. + execMode ExecMode + minGasPrice DecCoins + consParams cmtproto.ConsensusParams + eventManager EventManagerI + priority int64 / The tx priority, only relevant in CheckTx + kvGasConfig storetypes.GasConfig + transientKVGasConfig storetypes.GasConfig + streamingManager storetypes.StreamingManager + cometInfo comet.BlockInfo + headerInfo header.Info +} + +/ Proposed rename, not done to avoid API breakage +type Request = Context + +/ Read-only accessors +func (c Context) + +Context() + +context.Context { + return c.baseCtx +} + +func (c Context) + +MultiStore() + +storetypes.MultiStore { + return c.ms +} + +func (c Context) + +BlockHeight() + +int64 { + return c.header.Height +} + +func (c Context) + +BlockTime() + +time.Time { + return c.header.Time +} + +func (c Context) + +ChainID() + +string { + return c.chainID +} + +func (c Context) + +TxBytes() []byte { + return c.txBytes +} + +func (c Context) + +Logger() + +log.Logger { + return c.logger +} + +func (c Context) + +VoteInfos() []abci.VoteInfo { + return c.voteInfo +} + +func (c Context) + +GasMeter() + +storetypes.GasMeter { + return c.gasMeter +} + +func (c Context) + 
+BlockGasMeter() + +storetypes.GasMeter { + return c.blockGasMeter +} + +func (c Context) + +IsCheckTx() + +bool { + return c.checkTx +} + +func (c Context) + +IsReCheckTx() + +bool { + return c.recheckTx +} + +func (c Context) + +IsSigverifyTx() + +bool { + return c.sigverifyTx +} + +func (c Context) + +ExecMode() + +ExecMode { + return c.execMode +} + +func (c Context) + +MinGasPrices() + +DecCoins { + return c.minGasPrice +} + +func (c Context) + +EventManager() + +EventManagerI { + return c.eventManager +} + +func (c Context) + +Priority() + +int64 { + return c.priority +} + +func (c Context) + +KVGasConfig() + +storetypes.GasConfig { + return c.kvGasConfig +} + +func (c Context) + +TransientKVGasConfig() + +storetypes.GasConfig { + return c.transientKVGasConfig +} + +func (c Context) + +StreamingManager() + +storetypes.StreamingManager { + return c.streamingManager +} + +func (c Context) + +CometInfo() + +comet.BlockInfo { + return c.cometInfo +} + +func (c Context) + +HeaderInfo() + +header.Info { + return c.headerInfo +} + +/ BlockHeader returns the header by value. 
+func (c Context) + +BlockHeader() + +cmtproto.Header { + return c.header +} + +/ HeaderHash returns a copy of the header hash obtained during abci.RequestBeginBlock +func (c Context) + +HeaderHash() []byte { + hash := make([]byte, len(c.headerHash)) + +copy(hash, c.headerHash) + +return hash +} + +func (c Context) + +ConsensusParams() + +cmtproto.ConsensusParams { + return c.consParams +} + +func (c Context) + +Deadline() (deadline time.Time, ok bool) { + return c.baseCtx.Deadline() +} + +func (c Context) + +Done() <-chan struct{ +} { + return c.baseCtx.Done() +} + +func (c Context) + +Err() + +error { + return c.baseCtx.Err() +} + +/ create a new context +func NewContext(ms storetypes.MultiStore, header cmtproto.Header, isCheckTx bool, logger log.Logger) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +return Context{ + baseCtx: context.Background(), + ms: ms, + header: header, + chainID: header.ChainID, + checkTx: isCheckTx, + sigverifyTx: true, + logger: logger, + gasMeter: storetypes.NewInfiniteGasMeter(), + minGasPrice: DecCoins{ +}, + eventManager: NewEventManager(), + kvGasConfig: storetypes.KVGasConfig(), + transientKVGasConfig: storetypes.TransientGasConfig(), +} +} + +/ WithContext returns a Context with an updated context.Context. +func (c Context) + +WithContext(ctx context.Context) + +Context { + c.baseCtx = ctx + return c +} + +/ WithMultiStore returns a Context with an updated MultiStore. +func (c Context) + +WithMultiStore(ms storetypes.MultiStore) + +Context { + c.ms = ms + return c +} + +/ WithBlockHeader returns a Context with an updated CometBFT block header in UTC time. +func (c Context) + +WithBlockHeader(header cmtproto.Header) + +Context { + / https://github.com/gogo/protobuf/issues/519 + header.Time = header.Time.UTC() + +c.header = header + return c +} + +/ WithHeaderHash returns a Context with an updated CometBFT block header hash. 
+func (c Context) + +WithHeaderHash(hash []byte) + +Context { + temp := make([]byte, len(hash)) + +copy(temp, hash) + +c.headerHash = temp + return c +} + +/ WithBlockTime returns a Context with an updated CometBFT block header time in UTC with no monotonic component. +/ Stripping the monotonic component is for time equality. +func (c Context) + +WithBlockTime(newTime time.Time) + +Context { + newHeader := c.BlockHeader() + / https://github.com/gogo/protobuf/issues/519 + newHeader.Time = newTime.Round(0).UTC() + +return c.WithBlockHeader(newHeader) +} + +/ WithProposer returns a Context with an updated proposer consensus address. +func (c Context) + +WithProposer(addr ConsAddress) + +Context { + newHeader := c.BlockHeader() + +newHeader.ProposerAddress = addr.Bytes() + +return c.WithBlockHeader(newHeader) +} + +/ WithBlockHeight returns a Context with an updated block height. +func (c Context) + +WithBlockHeight(height int64) + +Context { + newHeader := c.BlockHeader() + +newHeader.Height = height + return c.WithBlockHeader(newHeader) +} + +/ WithChainID returns a Context with an updated chain identifier. +func (c Context) + +WithChainID(chainID string) + +Context { + c.chainID = chainID + return c +} + +/ WithTxBytes returns a Context with an updated txBytes. +func (c Context) + +WithTxBytes(txBytes []byte) + +Context { + c.txBytes = txBytes + return c +} + +/ WithLogger returns a Context with an updated logger. +func (c Context) + +WithLogger(logger log.Logger) + +Context { + c.logger = logger + return c +} + +/ WithVoteInfos returns a Context with an updated consensus VoteInfo. +func (c Context) + +WithVoteInfos(voteInfo []abci.VoteInfo) + +Context { + c.voteInfo = voteInfo + return c +} + +/ WithGasMeter returns a Context with an updated transaction GasMeter. 
+func (c Context) + +WithGasMeter(meter storetypes.GasMeter) + +Context { + c.gasMeter = meter + return c +} + +/ WithBlockGasMeter returns a Context with an updated block GasMeter +func (c Context) + +WithBlockGasMeter(meter storetypes.GasMeter) + +Context { + c.blockGasMeter = meter + return c +} + +/ WithKVGasConfig returns a Context with an updated gas configuration for +/ the KVStore +func (c Context) + +WithKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.kvGasConfig = gasConfig + return c +} + +/ WithTransientKVGasConfig returns a Context with an updated gas configuration for +/ the transient KVStore +func (c Context) + +WithTransientKVGasConfig(gasConfig storetypes.GasConfig) + +Context { + c.transientKVGasConfig = gasConfig + return c +} + +/ WithIsCheckTx enables or disables CheckTx value for verifying transactions and returns an updated Context +func (c Context) + +WithIsCheckTx(isCheckTx bool) + +Context { + c.checkTx = isCheckTx + c.execMode = ExecModeCheck + return c +} + +/ WithIsRecheckTx called with true will also set true on checkTx in order to +/ enforce the invariant that if recheckTx = true then checkTx = true as well. +func (c Context) + +WithIsReCheckTx(isRecheckTx bool) + +Context { + if isRecheckTx { + c.checkTx = true +} + +c.recheckTx = isRecheckTx + c.execMode = ExecModeReCheck + return c +} + +/ WithIsSigverifyTx called with true will sigverify in auth module +func (c Context) + +WithIsSigverifyTx(isSigverifyTx bool) + +Context { + c.sigverifyTx = isSigverifyTx + return c +} + +/ WithExecMode returns a Context with an updated ExecMode. 
+func (c Context) + +WithExecMode(m ExecMode) + +Context { + c.execMode = m + return c +} + +/ WithMinGasPrices returns a Context with an updated minimum gas price value +func (c Context) + +WithMinGasPrices(gasPrices DecCoins) + +Context { + c.minGasPrice = gasPrices + return c +} + +/ WithConsensusParams returns a Context with an updated consensus params +func (c Context) + +WithConsensusParams(params cmtproto.ConsensusParams) + +Context { + c.consParams = params + return c +} + +/ WithEventManager returns a Context with an updated event manager +func (c Context) + +WithEventManager(em EventManagerI) + +Context { + c.eventManager = em + return c +} + +/ WithPriority returns a Context with an updated tx priority +func (c Context) + +WithPriority(p int64) + +Context { + c.priority = p + return c +} + +/ WithStreamingManager returns a Context with an updated streaming manager +func (c Context) + +WithStreamingManager(sm storetypes.StreamingManager) + +Context { + c.streamingManager = sm + return c +} + +/ WithCometInfo returns a Context with an updated comet info +func (c Context) + +WithCometInfo(cometInfo comet.BlockInfo) + +Context { + c.cometInfo = cometInfo + return c +} + +/ WithHeaderInfo returns a Context with an updated header info +func (c Context) + +WithHeaderInfo(headerInfo header.Info) + +Context { + / Settime to UTC + headerInfo.Time = headerInfo.Time.UTC() + +c.headerInfo = headerInfo + return c +} + +/ TODO: remove??? 
+func (c Context) + +IsZero() + +bool { + return c.ms == nil +} + +func (c Context) + +WithValue(key, value any) + +Context { + c.baseCtx = context.WithValue(c.baseCtx, key, value) + +return c +} + +func (c Context) + +Value(key any) + +any { + if key == SdkContextKey { + return c +} + +return c.baseCtx.Value(key) +} + +/ ---------------------------------------------------------------------------- +/ Store / Caching +/ ---------------------------------------------------------------------------- + +/ KVStore fetches a KVStore from the MultiStore. +func (c Context) + +KVStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig) +} + +/ TransientStore fetches a TransientStore from the MultiStore. +func (c Context) + +TransientStore(key storetypes.StoreKey) + +storetypes.KVStore { + return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.transientKVGasConfig) +} + +/ CacheContext returns a new Context with the multi-store cached and a new +/ EventManager. The cached context is written to the context when writeCache +/ is called. Note, events are automatically emitted on the parent context's +/ EventManager when the caller executes the write. +func (c Context) + +CacheContext() (cc Context, writeCache func()) { + cms := c.ms.CacheMultiStore() + +cc = c.WithMultiStore(cms).WithEventManager(NewEventManager()) + +writeCache = func() { + c.EventManager().EmitEvents(cc.EventManager().Events()) + +cms.Write() +} + +return cc, writeCache +} + +var ( + _ context.Context = Context{ +} + _ storetypes.Context = Context{ +} +) + +/ ContextKey defines a type alias for a stdlib Context key. +type ContextKey string + +/ SdkContextKey is the key in the context.Context which holds the sdk.Context. +const SdkContextKey ContextKey = "sdk-context" + +/ WrapSDKContext returns a stdlib context.Context with the provided sdk.Context's internal +/ context as a value. 
It is useful for passing an sdk.Context through methods that take a
+/ stdlib context.Context parameter such as generated gRPC methods. To get the original
+/ sdk.Context back, call UnwrapSDKContext.
+/
+/ Deprecated: there is no need to wrap anymore as the Cosmos SDK context implements context.Context.
+func WrapSDKContext(ctx Context)
+
+context.Context {
+ return ctx
+}
+
+/ UnwrapSDKContext retrieves a Context from a context.Context instance
+/ attached with WrapSDKContext. It panics if a Context was not properly
+/ attached
+func UnwrapSDKContext(ctx context.Context)
+
+Context {
+ if sdkCtx, ok := ctx.(Context); ok {
+ return sdkCtx
+}
+
+return ctx.Value(SdkContextKey).(Context)
+}
+```
+
+In this case, the gas configuration set in the `context` is used. The gas configuration can be set using the `WithKVGasConfig` method of the `context`.
+Otherwise, the following defaults are used:
+
+```go expandable
+package types
+
+import (
+
+ "fmt"
+ "math"
+)
+
+/ Gas consumption descriptors.
+const (
+ GasIterNextCostFlatDesc = "IterNextFlat"
+ GasValuePerByteDesc = "ValuePerByte"
+ GasWritePerByteDesc = "WritePerByte"
+ GasReadPerByteDesc = "ReadPerByte"
+ GasWriteCostFlatDesc = "WriteFlat"
+ GasReadCostFlatDesc = "ReadFlat"
+ GasHasDesc = "Has"
+ GasDeleteDesc = "Delete"
+)
+
+/ Gas measured by the SDK
+type Gas = uint64
+
+/ ErrorNegativeGasConsumed defines an error thrown when the amount of gas refunded results in a
+/ negative gas consumed amount.
+type ErrorNegativeGasConsumed struct {
+ Descriptor string
+}
+
+/ ErrorOutOfGas defines an error thrown when an action results in out of gas.
+type ErrorOutOfGas struct {
+ Descriptor string
+}
+
+/ ErrorGasOverflow defines an error thrown when an action results gas consumption
+/ unsigned integer overflow.
+type ErrorGasOverflow struct { + Descriptor string +} + +/ GasMeter interface to track gas consumption +type GasMeter interface { + GasConsumed() + +Gas + GasConsumedToLimit() + +Gas + GasRemaining() + +Gas + Limit() + +Gas + ConsumeGas(amount Gas, descriptor string) + +RefundGas(amount Gas, descriptor string) + +IsPastLimit() + +bool + IsOutOfGas() + +bool + String() + +string +} + +type basicGasMeter struct { + limit Gas + consumed Gas +} + +/ NewGasMeter returns a reference to a new basicGasMeter. +func NewGasMeter(limit Gas) + +GasMeter { + return &basicGasMeter{ + limit: limit, + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *basicGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasRemaining returns the gas left in the GasMeter. +func (g *basicGasMeter) + +GasRemaining() + +Gas { + if g.IsPastLimit() { + return 0 +} + +return g.limit - g.consumed +} + +/ Limit returns the gas limit of the GasMeter. +func (g *basicGasMeter) + +Limit() + +Gas { + return g.limit +} + +/ GasConsumedToLimit returns the gas limit if gas consumed is past the limit, +/ otherwise it returns the consumed gas. +/ +/ NOTE: This behavior is only called when recovering from panic when +/ BlockGasMeter consumes gas past the limit. +func (g *basicGasMeter) + +GasConsumedToLimit() + +Gas { + if g.IsPastLimit() { + return g.limit +} + +return g.consumed +} + +/ addUint64Overflow performs the addition operation on two uint64 integers and +/ returns a boolean on whether or not the result overflows. +func addUint64Overflow(a, b uint64) (uint64, bool) { + if math.MaxUint64-a < b { + return 0, true +} + +return a + b, false +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit or out of gas. 
+func (g *basicGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + g.consumed = math.MaxUint64 + panic(ErrorGasOverflow{ + descriptor +}) +} + if g.consumed > g.limit { + panic(ErrorOutOfGas{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the transaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *basicGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns true if gas consumed is past limit, otherwise it returns false. +func (g *basicGasMeter) + +IsPastLimit() + +bool { + return g.consumed > g.limit +} + +/ IsOutOfGas returns true if gas consumed is greater than or equal to gas limit, otherwise it returns false. +func (g *basicGasMeter) + +IsOutOfGas() + +bool { + return g.consumed >= g.limit +} + +/ String returns the BasicGasMeter's gas limit and gas consumed. +func (g *basicGasMeter) + +String() + +string { + return fmt.Sprintf("BasicGasMeter:\n limit: %d\n consumed: %d", g.limit, g.consumed) +} + +type infiniteGasMeter struct { + consumed Gas +} + +/ NewInfiniteGasMeter returns a new gas meter without a limit. +func NewInfiniteGasMeter() + +GasMeter { + return &infiniteGasMeter{ + consumed: 0, +} +} + +/ GasConsumed returns the gas consumed from the GasMeter. +func (g *infiniteGasMeter) + +GasConsumed() + +Gas { + return g.consumed +} + +/ GasConsumedToLimit returns the gas consumed from the GasMeter since the gas is not confined to a limit. 
+/ NOTE: This behavior is only called when recovering from panic when BlockGasMeter consumes gas past the limit. +func (g *infiniteGasMeter) + +GasConsumedToLimit() + +Gas { + return g.consumed +} + +/ GasRemaining returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +GasRemaining() + +Gas { + return math.MaxUint64 +} + +/ Limit returns MaxUint64 since limit is not confined in infiniteGasMeter. +func (g *infiniteGasMeter) + +Limit() + +Gas { + return math.MaxUint64 +} + +/ ConsumeGas adds the given amount of gas to the gas consumed and panics if it overflows the limit. +func (g *infiniteGasMeter) + +ConsumeGas(amount Gas, descriptor string) { + var overflow bool + / TODO: Should we set the consumed field after overflow checking? + g.consumed, overflow = addUint64Overflow(g.consumed, amount) + if overflow { + panic(ErrorGasOverflow{ + descriptor +}) +} +} + +/ RefundGas will deduct the given amount from the gas consumed. If the amount is greater than the +/ gas consumed, the function will panic. +/ +/ Use case: This functionality enables refunding gas to the trasaction or block gas pools so that +/ EVM-compatible chains can fully support the go-ethereum StateDb interface. +/ See https://github.com/cosmos/cosmos-sdk/pull/9403 for reference. +func (g *infiniteGasMeter) + +RefundGas(amount Gas, descriptor string) { + if g.consumed < amount { + panic(ErrorNegativeGasConsumed{ + Descriptor: descriptor +}) +} + +g.consumed -= amount +} + +/ IsPastLimit returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsPastLimit() + +bool { + return false +} + +/ IsOutOfGas returns false since the gas limit is not confined. +func (g *infiniteGasMeter) + +IsOutOfGas() + +bool { + return false +} + +/ String returns the InfiniteGasMeter's gas consumed. 
+func (g *infiniteGasMeter)
+
+String()
+
+string {
+ return fmt.Sprintf("InfiniteGasMeter:\n consumed: %d", g.consumed)
+}
+
+/ GasConfig defines gas cost for each operation on KVStores
+type GasConfig struct {
+ HasCost Gas
+ DeleteCost Gas
+ ReadCostFlat Gas
+ ReadCostPerByte Gas
+ WriteCostFlat Gas
+ WriteCostPerByte Gas
+ IterNextCostFlat Gas
+}
+
+/ KVGasConfig returns a default gas config for KVStores.
+func KVGasConfig()
+
+GasConfig {
+ return GasConfig{
+ HasCost: 1000,
+ DeleteCost: 1000,
+ ReadCostFlat: 1000,
+ ReadCostPerByte: 3,
+ WriteCostFlat: 2000,
+ WriteCostPerByte: 30,
+ IterNextCostFlat: 30,
+}
+}
+
+/ TransientGasConfig returns a default gas config for TransientStores.
+func TransientGasConfig()
+
+GasConfig {
+ return GasConfig{
+ HasCost: 100,
+ DeleteCost: 100,
+ ReadCostFlat: 100,
+ ReadCostPerByte: 0,
+ WriteCostFlat: 200,
+ WriteCostPerByte: 3,
+ IterNextCostFlat: 3,
+}
+}
+```
+
+### `TraceKv` Store
+
+`tracekv.Store` is a wrapper `KVStore` that provides operation tracing over the underlying `KVStore`. The Cosmos SDK applies it automatically to every `KVStore` when tracing is enabled on the parent `MultiStore`.
+
+```go expandable
+package tracekv
+
+import (
+
+ "encoding/base64"
+ "encoding/json"
+ "io"
+ "cosmossdk.io/errors"
+ "cosmossdk.io/store/types"
+)
+
+const (
+ writeOp operation = "write"
+ readOp operation = "read"
+ deleteOp operation = "delete"
+ iterKeyOp operation = "iterKey"
+ iterValueOp operation = "iterValue"
+)
+
+type (
+ / Store implements the KVStore interface with tracing enabled.
+ / Operations are traced on each core KVStore call and written to the
+ / underlying io.writer.
+ /
+ / TODO: Should we use a buffered writer and implement Commit on
+ / Store?
+ Store struct { + parent types.KVStore + writer io.Writer + context types.TraceContext +} + + / operation represents an IO operation + operation string + + / traceOperation implements a traced KVStore operation + traceOperation struct { + Operation operation `json:"operation"` + Key string `json:"key"` + Value string `json:"value"` + Metadata map[string]interface{ +} `json:"metadata"` +} +) + +/ NewStore returns a reference to a new traceKVStore given a parent +/ KVStore implementation and a buffered writer. +func NewStore(parent types.KVStore, writer io.Writer, tc types.TraceContext) *Store { + return &Store{ + parent: parent, writer: writer, context: tc +} +} + +/ Get implements the KVStore interface. It traces a read operation and +/ delegates a Get call to the parent KVStore. +func (tkv *Store) + +Get(key []byte) []byte { + value := tkv.parent.Get(key) + +writeOperation(tkv.writer, readOp, tkv.context, key, value) + +return value +} + +/ Set implements the KVStore interface. It traces a write operation and +/ delegates the Set call to the parent KVStore. +func (tkv *Store) + +Set(key, value []byte) { + types.AssertValidKey(key) + +writeOperation(tkv.writer, writeOp, tkv.context, key, value) + +tkv.parent.Set(key, value) +} + +/ Delete implements the KVStore interface. It traces a write operation and +/ delegates the Delete call to the parent KVStore. +func (tkv *Store) + +Delete(key []byte) { + writeOperation(tkv.writer, deleteOp, tkv.context, key, nil) + +tkv.parent.Delete(key) +} + +/ Has implements the KVStore interface. It delegates the Has call to the +/ parent KVStore. +func (tkv *Store) + +Has(key []byte) + +bool { + return tkv.parent.Has(key) +} + +/ Iterator implements the KVStore interface. It delegates the Iterator call +/ to the parent KVStore. +func (tkv *Store) + +Iterator(start, end []byte) + +types.Iterator { + return tkv.iterator(start, end, true) +} + +/ ReverseIterator implements the KVStore interface. 
It delegates the +/ ReverseIterator call to the parent KVStore. +func (tkv *Store) + +ReverseIterator(start, end []byte) + +types.Iterator { + return tkv.iterator(start, end, false) +} + +/ iterator facilitates iteration over a KVStore. It delegates the necessary +/ calls to it's parent KVStore. +func (tkv *Store) + +iterator(start, end []byte, ascending bool) + +types.Iterator { + var parent types.Iterator + if ascending { + parent = tkv.parent.Iterator(start, end) +} + +else { + parent = tkv.parent.ReverseIterator(start, end) +} + +return newTraceIterator(tkv.writer, parent, tkv.context) +} + +type traceIterator struct { + parent types.Iterator + writer io.Writer + context types.TraceContext +} + +func newTraceIterator(w io.Writer, parent types.Iterator, tc types.TraceContext) + +types.Iterator { + return &traceIterator{ + writer: w, parent: parent, context: tc +} +} + +/ Domain implements the Iterator interface. +func (ti *traceIterator) + +Domain() (start, end []byte) { + return ti.parent.Domain() +} + +/ Valid implements the Iterator interface. +func (ti *traceIterator) + +Valid() + +bool { + return ti.parent.Valid() +} + +/ Next implements the Iterator interface. +func (ti *traceIterator) + +Next() { + ti.parent.Next() +} + +/ Key implements the Iterator interface. +func (ti *traceIterator) + +Key() []byte { + key := ti.parent.Key() + +writeOperation(ti.writer, iterKeyOp, ti.context, key, nil) + +return key +} + +/ Value implements the Iterator interface. +func (ti *traceIterator) + +Value() []byte { + value := ti.parent.Value() + +writeOperation(ti.writer, iterValueOp, ti.context, nil, value) + +return value +} + +/ Close implements the Iterator interface. +func (ti *traceIterator) + +Close() + +error { + return ti.parent.Close() +} + +/ Error delegates the Error call to the parent iterator. +func (ti *traceIterator) + +Error() + +error { + return ti.parent.Error() +} + +/ GetStoreType implements the KVStore interface. 
It returns the underlying
+/ KVStore type.
+func (tkv *Store)
+
+GetStoreType()
+
+types.StoreType {
+ return tkv.parent.GetStoreType()
+}
+
+/ CacheWrap implements the KVStore interface. It panics because a Store
+/ cannot be branched.
+func (tkv *Store)
+
+CacheWrap()
+
+types.CacheWrap {
+ panic("cannot CacheWrap a TraceKVStore")
+}
+
+/ CacheWrapWithTrace implements the KVStore interface. It panics as a
+/ Store cannot be branched.
+func (tkv *Store)
+
+CacheWrapWithTrace(_ io.Writer, _ types.TraceContext)
+
+types.CacheWrap {
+ panic("cannot CacheWrapWithTrace a TraceKVStore")
+}
+
+/ writeOperation writes a KVStore operation to the underlying io.Writer as
+/ JSON-encoded data where the key/value pair is base64 encoded.
+func writeOperation(w io.Writer, op operation, tc types.TraceContext, key, value []byte) {
+ traceOp := traceOperation{
+ Operation: op,
+ Key: base64.StdEncoding.EncodeToString(key),
+ Value: base64.StdEncoding.EncodeToString(value),
+}
+ if tc != nil {
+ traceOp.Metadata = tc
+}
+
+raw, err := json.Marshal(traceOp)
+ if err != nil {
+ panic(errors.Wrap(err, "failed to serialize trace operation"))
+}
+ if _, err := w.Write(raw); err != nil {
+ panic(errors.Wrap(err, "failed to write trace operation"))
+}
+
+ _, err = io.WriteString(w, "\n")
+ if err != nil {
+ panic(errors.Wrap(err, "failed to write newline"))
+}
+}
+```
+
+Whenever one of the `KVStore` methods is called, `tracekv.Store` automatically logs a `traceOperation` to `Store.writer`. `traceOperation.Metadata` is filled with `Store.context` when it is not nil. `TraceContext` is a `map[string]interface{}`.
+
+### `Prefix` Store
+
+`prefix.Store` is a wrapper `KVStore` that provides automatic key-prefixing over the underlying `KVStore`.
```go expandable
package prefix

import (
	"bytes"
	"errors"
	"io"

	"cosmossdk.io/store/cachekv"
	"cosmossdk.io/store/tracekv"
	"cosmossdk.io/store/types"
)

var _ types.KVStore = Store{}

// Store is similar to cometbft/cometbft/libs/db/prefix_db:
// both give access only to a limited subset of the store,
// for convenience or safety.
type Store struct {
	parent types.KVStore
	prefix []byte
}

func NewStore(parent types.KVStore, prefix []byte) Store {
	return Store{
		parent: parent,
		prefix: prefix,
	}
}

func cloneAppend(bz, tail []byte) (res []byte) {
	res = make([]byte, len(bz)+len(tail))
	copy(res, bz)
	copy(res[len(bz):], tail)
	return
}

func (s Store) key(key []byte) (res []byte) {
	if key == nil {
		panic("nil key on Store")
	}
	res = cloneAppend(s.prefix, key)
	return
}

// GetStoreType implements Store.
func (s Store) GetStoreType() types.StoreType {
	return s.parent.GetStoreType()
}

// CacheWrap implements CacheWrap.
func (s Store) CacheWrap() types.CacheWrap {
	return cachekv.NewStore(s)
}

// CacheWrapWithTrace implements the KVStore interface.
func (s Store) CacheWrapWithTrace(w io.Writer, tc types.TraceContext) types.CacheWrap {
	return cachekv.NewStore(tracekv.NewStore(s, w, tc))
}

// Get implements KVStore.
func (s Store) Get(key []byte) []byte {
	return s.parent.Get(s.key(key))
}

// Has implements KVStore.
func (s Store) Has(key []byte) bool {
	return s.parent.Has(s.key(key))
}

// Set implements KVStore.
func (s Store) Set(key, value []byte) {
	types.AssertValidKey(key)
	types.AssertValidValue(value)
	s.parent.Set(s.key(key), value)
}

// Delete implements KVStore.
func (s Store) Delete(key []byte) {
	s.parent.Delete(s.key(key))
}

// Iterator implements KVStore.
// Check https://github.com/cometbft/cometbft/blob/master/libs/db/prefix_db.go#L106
func (s Store) Iterator(start, end []byte) types.Iterator {
	newstart := cloneAppend(s.prefix, start)

	var newend []byte
	if end == nil {
		newend = cpIncr(s.prefix)
	} else {
		newend = cloneAppend(s.prefix, end)
	}

	iter := s.parent.Iterator(newstart, newend)
	return newPrefixIterator(s.prefix, start, end, iter)
}

// ReverseIterator implements KVStore.
// Check https://github.com/cometbft/cometbft/blob/master/libs/db/prefix_db.go#L129
func (s Store) ReverseIterator(start, end []byte) types.Iterator {
	newstart := cloneAppend(s.prefix, start)

	var newend []byte
	if end == nil {
		newend = cpIncr(s.prefix)
	} else {
		newend = cloneAppend(s.prefix, end)
	}

	iter := s.parent.ReverseIterator(newstart, newend)
	return newPrefixIterator(s.prefix, start, end, iter)
}

var _ types.Iterator = (*prefixIterator)(nil)

type prefixIterator struct {
	prefix []byte
	start  []byte
	end    []byte
	iter   types.Iterator
	valid  bool
}

func newPrefixIterator(prefix, start, end []byte, parent types.Iterator) *prefixIterator {
	return &prefixIterator{
		prefix: prefix,
		start:  start,
		end:    end,
		iter:   parent,
		valid:  parent.Valid() && bytes.HasPrefix(parent.Key(), prefix),
	}
}

// Domain implements Iterator.
func (pi *prefixIterator) Domain() ([]byte, []byte) {
	return pi.start, pi.end
}

// Valid implements Iterator.
func (pi *prefixIterator) Valid() bool {
	return pi.valid && pi.iter.Valid()
}

// Next implements Iterator.
func (pi *prefixIterator) Next() {
	if !pi.valid {
		panic("prefixIterator invalid, cannot call Next()")
	}
	if pi.iter.Next(); !pi.iter.Valid() || !bytes.HasPrefix(pi.iter.Key(), pi.prefix) {
		// TODO: shouldn't pi be set to nil instead?
		pi.valid = false
	}
}

// Key implements Iterator.
func (pi *prefixIterator) Key() (key []byte) {
	if !pi.valid {
		panic("prefixIterator invalid, cannot call Key()")
	}
	key = pi.iter.Key()
	key = stripPrefix(key, pi.prefix)
	return
}

// Value implements Iterator.
func (pi *prefixIterator) Value() []byte {
	if !pi.valid {
		panic("prefixIterator invalid, cannot call Value()")
	}
	return pi.iter.Value()
}

// Close implements Iterator.
func (pi *prefixIterator) Close() error {
	return pi.iter.Close()
}

// Error returns an error if the prefixIterator is invalid, as defined by the
// Valid method.
func (pi *prefixIterator) Error() error {
	if !pi.Valid() {
		return errors.New("invalid prefixIterator")
	}
	return nil
}

// copied from github.com/cometbft/cometbft/libs/db/prefix_db.go
func stripPrefix(key, prefix []byte) []byte {
	if len(key) < len(prefix) || !bytes.Equal(key[:len(prefix)], prefix) {
		panic("should not happen")
	}
	return key[len(prefix):]
}

// wrapping types.PrefixEndBytes
func cpIncr(bz []byte) []byte {
	return types.PrefixEndBytes(bz)
}
```

When `Store.Get()` or `Store.Set()` is called, the store forwards the call to its parent, with the key prefixed with `Store.prefix`.

When `Store.Iterator()` is called, it does not simply prepend `Store.prefix` to both bounds, since that does not work as intended: elements that do not start with the prefix could also be traversed. Instead, when `end` is `nil`, the exclusive end bound is computed with `PrefixEndBytes`, and the resulting iterator strips the prefix from each key it returns.
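The iterator bounds above hinge on `types.PrefixEndBytes`, which returns the smallest byte slice that sorts strictly after every key starting with the prefix. A self-contained sketch of that logic and of the key prefixing (the names `prefixKey` and `prefixEndBytes` are illustrative, not the SDK's API):

```go
package main

import (
	"bytes"
	"fmt"
)

// prefixKey mirrors how the prefix store namespaces a key:
// it simply concatenates prefix and key.
func prefixKey(prefix, key []byte) []byte {
	res := make([]byte, len(prefix)+len(key))
	copy(res, prefix)
	copy(res[len(prefix):], key)
	return res
}

// prefixEndBytes mimics types.PrefixEndBytes: the exclusive upper
// bound for iterating over all keys that start with prefix. It
// increments the last byte, dropping trailing 0xFF bytes first; a
// prefix of all 0xFF bytes has no finite upper bound, so nil is
// returned (meaning "iterate to the end of the store").
func prefixEndBytes(prefix []byte) []byte {
	if len(prefix) == 0 {
		return nil
	}
	end := make([]byte, len(prefix))
	copy(end, prefix)
	for len(end) > 0 {
		if end[len(end)-1] != 0xFF {
			end[len(end)-1]++
			return end
		}
		end = end[:len(end)-1]
	}
	return nil
}

func main() {
	prefix := []byte("acc/")
	fmt.Printf("%q\n", prefixKey(prefix, []byte("alice"))) // "acc/alice"
	fmt.Printf("%q\n", prefixEndBytes(prefix))             // "acc0" ('/'+1 == '0')
	// Every prefixed key sorts strictly below the end bound.
	fmt.Println(bytes.Compare(prefixKey(prefix, []byte{0xFF}), prefixEndBytes(prefix)) < 0) // true
}
```

This is why `Iterator(nil, nil)` on a prefix store visits exactly the keys under the prefix: the parent iterates over `[prefix, prefixEndBytes(prefix))`.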
### `ListenKv` Store

`listenkv.Store` is a wrapper `KVStore` which provides state listening capabilities over the underlying `KVStore`.
It is applied automatically by the Cosmos SDK on any `KVStore` whose `StoreKey` is specified during state streaming configuration.
Additional information about state streaming configuration can be found in the [store/streaming/README.md](https://github.com/cosmos/cosmos-sdk/tree/v0.53.0/store/streaming).

```go expandable
package listenkv

import (
	"io"

	"cosmossdk.io/store/types"
)

var _ types.KVStore = &Store{}

// Store implements the KVStore interface with listening enabled.
// Operations are traced on each core KVStore call and written to any of the
// underlying listeners with the proper key and operation permissions.
type Store struct {
	parent         types.KVStore
	listener       *types.MemoryListener
	parentStoreKey types.StoreKey
}

// NewStore returns a reference to a new traceKVStore given a parent
// KVStore implementation and a buffered writer.
func NewStore(parent types.KVStore, parentStoreKey types.StoreKey, listener *types.MemoryListener) *Store {
	return &Store{parent: parent, listener: listener, parentStoreKey: parentStoreKey}
}

// Get implements the KVStore interface. It delegates a Get call to the
// parent KVStore.
func (s *Store) Get(key []byte) []byte {
	return s.parent.Get(key)
}

// Set implements the KVStore interface. It traces a write operation and
// delegates the Set call to the parent KVStore.
func (s *Store) Set(key, value []byte) {
	types.AssertValidKey(key)
	s.parent.Set(key, value)
	s.listener.OnWrite(s.parentStoreKey, key, value, false)
}

// Delete implements the KVStore interface. It traces a write operation and
// delegates the Delete call to the parent KVStore.
func (s *Store) Delete(key []byte) {
	s.parent.Delete(key)
	s.listener.OnWrite(s.parentStoreKey, key, nil, true)
}

// Has implements the KVStore interface. It delegates the Has call to the
// parent KVStore.
func (s *Store) Has(key []byte) bool {
	return s.parent.Has(key)
}

// Iterator implements the KVStore interface. It delegates the Iterator call
// to the parent KVStore.
func (s *Store) Iterator(start, end []byte) types.Iterator {
	return s.iterator(start, end, true)
}

// ReverseIterator implements the KVStore interface. It delegates the
// ReverseIterator call to the parent KVStore.
func (s *Store) ReverseIterator(start, end []byte) types.Iterator {
	return s.iterator(start, end, false)
}

// iterator facilitates iteration over a KVStore. It delegates the necessary
// calls to its parent KVStore.
func (s *Store) iterator(start, end []byte, ascending bool) types.Iterator {
	var parent types.Iterator
	if ascending {
		parent = s.parent.Iterator(start, end)
	} else {
		parent = s.parent.ReverseIterator(start, end)
	}
	return newTraceIterator(parent, s.listener)
}

type listenIterator struct {
	parent   types.Iterator
	listener *types.MemoryListener
}

func newTraceIterator(parent types.Iterator, listener *types.MemoryListener) types.Iterator {
	return &listenIterator{parent: parent, listener: listener}
}

// Domain implements the Iterator interface.
func (li *listenIterator) Domain() (start, end []byte) {
	return li.parent.Domain()
}

// Valid implements the Iterator interface.
func (li *listenIterator) Valid() bool {
	return li.parent.Valid()
}

// Next implements the Iterator interface.
func (li *listenIterator) Next() {
	li.parent.Next()
}

// Key implements the Iterator interface.
func (li *listenIterator) Key() []byte {
	return li.parent.Key()
}

// Value implements the Iterator interface.
func (li *listenIterator) Value() []byte {
	return li.parent.Value()
}

// Close implements the Iterator interface.
func (li *listenIterator) Close() error {
	return li.parent.Close()
}

// Error delegates the Error call to the parent iterator.
func (li *listenIterator) Error() error {
	return li.parent.Error()
}

// GetStoreType implements the KVStore interface. It returns the underlying
// KVStore type.
func (s *Store) GetStoreType() types.StoreType {
	return s.parent.GetStoreType()
}

// CacheWrap implements the KVStore interface. It panics as a Store
// cannot be cache wrapped.
func (s *Store) CacheWrap() types.CacheWrap {
	panic("cannot CacheWrap a ListenKVStore")
}

// CacheWrapWithTrace implements the KVStore interface. It panics as a
// Store cannot be cache wrapped.
func (s *Store) CacheWrapWithTrace(_ io.Writer, _ types.TraceContext) types.CacheWrap {
	panic("cannot CacheWrapWithTrace a ListenKVStore")
}
```

When the `KVStore.Set` or `KVStore.Delete` methods are called, `listenkv.Store` automatically writes the operations to the set of `Store.listeners`.

## `BasicKVStore` interface

An interface providing only the basic CRUD functionality (`Get`, `Set`, `Has`, and `Delete` methods), without iteration or caching. This is used to partially expose components of a larger store.

diff --git a/docs/sdk/v0.53/learn.mdx b/docs/sdk/v0.53/learn.mdx
deleted file mode 100644
index 763e99fb..00000000
--- a/docs/sdk/v0.53/learn.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: "Learn"
-description: "Version: v0.53"
----
-
-* [Introduction](./learn/intro/overview) - Dive into the fundamentals of Cosmos SDK with an insightful introduction, laying the groundwork for understanding blockchain development.
In this section we provide a High-Level Overview of the SDK, then dive deeper into Core concepts such as Application-Specific Blockchains, Blockchain Architecture, and finally we begin to explore the main components of the SDK. -* [Beginner](./learn/beginner/app-anatomy) - Start your journey with beginner-friendly resources in the Cosmos SDK's "Learn" section, providing a gentle entry point for newcomers to blockchain development. Here we focus on a little more detail, covering the Anatomy of a Cosmos SDK Application, Transaction Lifecycles, Accounts and lastly, Gas and Fees. -* [Advanced](./learn/advanced/baseapp) - Level up your Cosmos SDK expertise with advanced topics, tailored for experienced developers diving into intricate blockchain application development. We cover the Cosmos SDK on a lower level as we dive into the core of the SDK with BaseApp, Transactions, Context, Node Client (Daemon), Store, Encoding, gRPC, REST, and CometBFT Endpoints, CLI, Events, Telemetry, Object-Capability Model, RunTx recovery middleware, Cosmos Blockchain Simulator, Protobuf Documentation, In-Place Store Migrations, Configuration and AutoCLI. diff --git a/docs/sdk/v0.53/learn/advanced/autocli.mdx b/docs/sdk/v0.53/learn/advanced/autocli.mdx deleted file mode 100644 index f2a9e5c4..00000000 --- a/docs/sdk/v0.53/learn/advanced/autocli.mdx +++ /dev/null @@ -1,209 +0,0 @@ ---- -title: "AutoCLI" -description: "Version: v0.53" ---- - - - This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included. - - - - * [CLI](https://docs.cosmos.network/main/core/cli) - - -The `autocli` (also known as `client/v2`) package is a [Go library](https://pkg.go.dev/cosmossdk.io/client/v2/autocli) for generating CLI (command line interface) interfaces for Cosmos SDK-based applications. It provides a simple way to add CLI commands to your application by generating them automatically based on your gRPC service definitions. 
Autocli generates CLI commands and flags directly from your protobuf messages, including options, input parameters, and output parameters. This means that you can easily add a CLI interface to your application without having to manually create and manage commands.
-
-## Overview[​](#overview "Direct link to Overview")
-
-`autocli` generates CLI commands and flags for each method defined in your gRPC service. By default, it generates commands for each gRPC service. The commands are named based on the name of the service method.
-
-For example, given the following protobuf definition for a service:
-
-```
-service MyService {
-  rpc MyMethod(MyRequest) returns (MyResponse) {}
-}
-```
-
-`autocli` would generate a command named `my-method` for the `MyMethod` method. The command will have flags for each field in the `MyRequest` message.
-
-It is possible to customize the generation of transactions and queries by defining options for each service.
-
-## Application Wiring[​](#application-wiring "Direct link to Application Wiring")
-
-Here are the steps to use AutoCLI:
-
-1. Ensure your app's modules implement the `appmodule.AppModule` interface.
-2. (optional) Configure how `autocli` command generation behaves by implementing the `func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions` method on the module.
-3. Use the `autocli.AppOptions` struct to specify the modules you defined. If you are using `depinject`, it can automatically create an instance of `autocli.AppOptions` based on your app's configuration.
-4. Use the `EnhanceRootCommand()` method provided by `autocli` to add the CLI commands for the specified modules to your root command.
-
-  AutoCLI is additive only, meaning *enhancing* the root command will only add subcommands that are not already registered. This means that you can use AutoCLI alongside other custom commands within your app.
-
-Here's an example of how to use `autocli` in your app:
-
-```
-// Define your app's modules
-testModules := map[string]appmodule.AppModule{
-    "testModule": &TestModule{},
-}
-
-// Define the autocli AppOptions
-autoCliOpts := autocli.AppOptions{
-    Modules: testModules,
-}
-
-// Create the root command
-rootCmd := &cobra.Command{
-    Use: "app",
-}
-if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil {
-    return err
-}
-
-// Run the root command
-if err := rootCmd.Execute(); err != nil {
-    return err
-}
-```
-
-### Keyring[​](#keyring "Direct link to Keyring")
-
-`autocli` uses a keyring for resolving key names and for signing transactions.
-
-  AutoCLI provides a better UX than the normal CLI, as it resolves key names directly from the keyring in all transactions and commands.
-
-  ```
-  q bank balances alice
-  tx bank send alice bob 1000denom
-  ```
-
-The keyring used for resolving names and signing transactions is provided via the `client.Context`. The keyring is then converted to the `client/v2/autocli/keyring` interface. If no keyring is provided, the `autocli` generated command will not be able to sign transactions, but will still be able to query the chain.
-
-  The Cosmos SDK keyring and Hubl keyring both implement the `client/v2/autocli/keyring` interface, thanks to the following wrapper:
-
-  ```
-  keyring.NewAutoCLIKeyring(kb)
-  ```
-
-## Signing[​](#signing "Direct link to Signing")
-
-`autocli` supports signing transactions with the keyring. The [`cosmos.msg.v1.signer` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) defines the signer field of the message. This field is automatically filled when using the `--from` flag or defining the signer as a positional argument.
-
-  AutoCLI currently supports only one signer per transaction.
-
-## Module wiring & Customization[​](#module-wiring--customization "Direct link to Module wiring & Customization")
-
-The `AutoCLIOptions()` method on your module allows you to specify custom commands, subcommands, or flags for each service, as if it were a `cobra.Command` instance, within the `RpcCommandOptions` struct. Defining such options will customize the behavior of the `autocli` command generation, which by default generates a command for each method in your gRPC service.
-
-```
-*autocliv1.RpcCommandOptions{
-  RpcMethod: "Params", // The name of the gRPC method
-  Use: "params", // Command usage that is displayed in the help
-  Short: "Query the parameters of the governance process", // Short description of the command
-  Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) to filter results.", // Long description of the command
-  PositionalArgs: []*autocliv1.PositionalArgDescriptor{
-    {ProtoField: "params_type", Optional: true}, // Transform a flag into a positional argument
-  },
-}
-```
-
-  AutoCLI can create a gov proposal of any tx by simply setting the `GovProposal` field to `true` in the `autocli.RpcCommandOptions` struct. Users can however use the `--no-proposal` flag to disable the proposal creation (which is useful if the authority isn't the gov module on a chain).
-
-### Specifying Subcommands[​](#specifying-subcommands "Direct link to Specifying Subcommands")
-
-By default, `autocli` generates a command for each method in your gRPC service. However, you can specify subcommands to group related commands together. To specify subcommands, use the `autocliv1.ServiceCommandDescriptor` struct.
-
-This example shows how to use the `autocliv1.ServiceCommandDescriptor` struct to group related commands together and specify subcommands in your gRPC service by defining an instance of `autocliv1.ModuleOptions` in your `autocli.go`.
-
-x/gov/autocli.go
-
-```
-loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/x/gov/autocli.go#L94-L97) - -### Positional Arguments[​](#positional-arguments "Direct link to Positional Arguments") - -By default `autocli` generates a flag for each field in your protobuf message. However, you can choose to use positional arguments instead of flags for certain fields. - -To add positional arguments to a command, use the `autocliv1.PositionalArgDescriptor` struct, as seen in the example below. Specify the `ProtoField` parameter, which is the name of the protobuf field that should be used as the positional argument. In addition, if the parameter is a variable-length argument, you can specify the `Varargs` parameter as `true`. This can only be applied to the last positional parameter, and the `ProtoField` must be a repeated field. - -Here's an example of how to define a positional argument for the `Account` method of the `auth` service: - -x/auth/autocli.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/x/auth/autocli.go#L25-L30) - -Then the command can be used as follows, instead of having to specify the `--address` flag: - -``` - query auth account cosmos1abcd...xyz -``` - -#### Flattened Fields in Positional Arguments[​](#flattened-fields-in-positional-arguments "Direct link to Flattened Fields in Positional Arguments") - -AutoCLI also supports flattening nested message fields as positional arguments. This means you can access nested fields using dot notation in the `ProtoField` parameter. This is particularly useful when you want to directly set nested message fields as positional arguments. 
-
-For example, if you have a nested message structure like this:
-
-```
-message Permissions {
-  string level = 1;
-  repeated string limit_type_urls = 2;
-}
-
-message MsgAuthorizeCircuitBreaker {
-  string grantee = 1;
-  Permissions permissions = 2;
-}
-```
-
-You can flatten the fields in your AutoCLI configuration:
-
-```
-{
-  RpcMethod: "AuthorizeCircuitBreaker",
-  Use: "authorize ",
-  PositionalArgs: []*autocliv1.PositionalArgDescriptor{
-    {ProtoField: "grantee"},
-    {ProtoField: "permissions.level"},
-    {ProtoField: "permissions.limit_type_urls"},
-  },
-}
-```
-
-This allows users to provide values for nested fields directly as positional arguments:
-
-```
- tx circuit authorize cosmos1... super-admin "/cosmos.bank.v1beta1.MsgSend,/cosmos.bank.v1beta1.MsgMultiSend"
-```
-
-Instead of having to provide a complex JSON structure for nested fields, flattening makes the CLI more user-friendly by allowing direct access to nested fields.
-
-#### Customising Flag Names[​](#customising-flag-names "Direct link to Customising Flag Names")
-
-By default, `autocli` generates flag names based on the names of the fields in your protobuf message. However, you can customise the flag names by providing a `FlagOptions`. This parameter allows you to specify custom names for flags based on the names of the message fields.
-
-For example, if you have a message with the fields `test` and `test1`, you can use the following naming options to customise the flags:
-
-```
-autocliv1.RpcCommandOptions{
-  FlagOptions: map[string]*autocliv1.FlagOptions{
-    "test": {
-      Name: "custom_name",
-    },
-    "test1": {
-      Name: "other_name",
-    },
-  },
-}
-```
-
-`FlagOptions` is defined like subcommands in the `AutoCLIOptions()` method on your module.
-
-### Combining AutoCLI with Other Commands Within A Module[​](#combining-autocli-with-other-commands-within-a-module "Direct link to Combining AutoCLI with Other Commands Within A Module")
-
-AutoCLI can be used alongside other commands within a module.
For example, the `gov` module uses AutoCLI to generate commands for the `query` subcommand, but also defines custom commands for the `proposer` subcommands.
-
-In order to enable this behavior, set in `AutoCLIOptions()` the `EnhanceCustomCommand` field to `true`, for the command type (queries and/or transactions) you want to enhance.
-
-x/gov/autocli.go
-
-```
-loading...
-```
-
-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/fa4d87ef7e6d87aaccc94c337ffd2fe90fcb7a9d/x/gov/autocli.go#L98)
-
-If not set to true, `AutoCLI` will not generate commands for the module if there are already commands registered for the module (when `GetTxCmd()` or `GetQueryCmd()` are defined).
-
-### Skip a command[​](#skip-a-command "Direct link to Skip a command")
-
-AutoCLI automatically skips unsupported commands when the [`cosmos_proto.method_added_in` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) is present.
-
-Additionally, a command can be manually skipped using the `autocliv1.RpcCommandOptions`:
-
-```
-*autocliv1.RpcCommandOptions{
-  RpcMethod: "Params", // The name of the gRPC method
-  Skip: true,
-}
-```
-
-### Use AutoCLI for non module commands[​](#use-autocli-for-non-module-commands "Direct link to Use AutoCLI for non module commands")
-
-It is possible to use `AutoCLI` for non module commands. The trick is still to implement the `appmodule.Module` interface and append it to the `appOptions.ModuleOptions` map.
-
-For example, here is how the SDK does it for `cometbft` gRPC commands:
-
-v2.0.0-beta.1/client/grpc/cmtservice/autocli.go
-
-```
-loading...
-```
-
-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/client/v2.0.0-beta.1/client/grpc/cmtservice/autocli.go#L52-L71)
-
-## Summary[​](#summary "Direct link to Summary")
-
-`autocli` lets you generate a CLI for your Cosmos SDK-based applications without any Cobra boilerplate.
It allows you to easily generate CLI commands and flags from your protobuf messages, and provides many options for customising the behavior of your CLI application. - -To further enhance your CLI experience with Cosmos SDK-based blockchains, you can use `hubl`. `hubl` is a tool that allows you to query any Cosmos SDK-based blockchain using the new AutoCLI feature of the Cosmos SDK. With `hubl`, you can easily configure a new chain and query modules with just a few simple commands. - -For more information on `hubl`, including how to configure a new chain and query a module, see the [Hubl documentation](https://docs.cosmos.network/main/tooling/hubl). diff --git a/docs/sdk/v0.53/learn/advanced/baseapp.mdx b/docs/sdk/v0.53/learn/advanced/baseapp.mdx deleted file mode 100644 index dc758e85..00000000 --- a/docs/sdk/v0.53/learn/advanced/baseapp.mdx +++ /dev/null @@ -1,474 +0,0 @@ ---- -title: "BaseApp" -description: "Version: v0.53" ---- - - - This document describes `BaseApp`, the abstraction that implements the core functionalities of a Cosmos SDK application. - - - - * [Anatomy of a Cosmos SDK application](/v0.53/learn/beginner/app-anatomy) - * [Lifecycle of a Cosmos SDK transaction](/v0.53/learn/beginner/tx-lifecycle) - - -## Introduction[​](#introduction "Direct link to Introduction") - -`BaseApp` is a base type that implements the core of a Cosmos SDK application, namely: - -* The [Application Blockchain Interface](#main-abci-messages), for the state-machine to communicate with the underlying consensus engine (e.g. CometBFT). -* [Service Routers](#service-routers), to route messages and queries to the appropriate module. -* Different [states](#state-updates), as the state-machine can have different volatile states updated based on the ABCI message received. - -The goal of `BaseApp` is to provide the fundamental layer of a Cosmos SDK application that developers can easily extend to build their own custom application. 
Usually, developers will create a custom type for their application, like so:
-
-```
-type App struct {
-  // reference to a BaseApp
-  *baseapp.BaseApp
-
-  // list of application store keys
-
-  // list of application keepers
-
-  // module manager
-}
-```
-
-Extending the application with `BaseApp` gives it access to all of `BaseApp`'s methods. This allows developers to compose their custom application with the modules they want, while not having to concern themselves with the hard work of implementing the ABCI, the service routers and state management logic.
-
-## Type Definition[​](#type-definition "Direct link to Type Definition")
-
-The `BaseApp` type holds many important parameters for any Cosmos SDK based application.
-
-baseapp/baseapp.go
-
-```
-loading...
-```
-
-[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L64-L201)
-
-Let us go through the most important components.
-
-> **Note**: Not all parameters are described, only the most important ones. Refer to the type definition for the full list.
-
-First, the important parameters that are initialized during the bootstrapping of the application:
-
-* [`CommitMultiStore`](/v0.53/learn/advanced/store#commitmultistore): This is the main store of the application, which holds the canonical state that is committed at the [end of each block](#commit). This store is **not** cached, meaning it is not used to update the application's volatile (un-committed) states. The `CommitMultiStore` is a multi-store, meaning a store of stores. Each module of the application uses one or multiple `KVStores` in the multi-store to persist their subset of the state.
-* Database: The `db` is used by the `CommitMultiStore` to handle data persistence.
-* [`Msg` Service Router](#msg-service-router): The `msgServiceRouter` facilitates the routing of `sdk.Msg` requests to the appropriate module `Msg` service for processing.
Here a `sdk.Msg` refers to the transaction component that needs to be processed by a service in order to update the application state, and not to ABCI message which implements the interface between the application and the underlying consensus engine. -* [gRPC Query Router](#grpc-query-router): The `grpcQueryRouter` facilitates the routing of gRPC queries to the appropriate module for it to be processed. These queries are not ABCI messages themselves, but they are relayed to the relevant module's gRPC `Query` service. -* [`TxDecoder`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types#TxDecoder): It is used to decode raw transaction bytes relayed by the underlying CometBFT engine. -* [`AnteHandler`](#antehandler): This handler is used to handle signature verification, fee payment, and other pre-message execution checks when a transaction is received. It's executed during [`CheckTx/RecheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock). -* [`InitChainer`](/v0.53/learn/beginner/app-anatomy#initchainer), [`PreBlocker`](/v0.53/learn/beginner/app-anatomy#preblocker), [`BeginBlocker` and `EndBlocker`](/v0.53/learn/beginner/app-anatomy#beginblocker-and-endblocker): These are the functions executed when the application receives the `InitChain` and `FinalizeBlock` ABCI messages from the underlying CometBFT engine. - -Then, parameters used to define [volatile states](#state-updates) (i.e. cached states): - -* `checkState`: This state is updated during [`CheckTx`](#checktx), and reset on [`Commit`](#commit). -* `finalizeBlockState`: This state is updated during [`FinalizeBlock`](#finalizeblock), and set to `nil` on [`Commit`](#commit) and gets re-initialized on `FinalizeBlock`. -* `processProposalState`: This state is updated during [`ProcessProposal`](#process-proposal). -* `prepareProposalState`: This state is updated during [`PrepareProposal`](#prepare-proposal). 
-
-Finally, a few more important parameters:
-
-* `voteInfos`: This parameter carries the list of validators whose precommit is missing, either because they did not vote or because the proposer did not include their vote. This information is carried by the [Context](/v0.53/learn/advanced/context) and can be used by the application for various things like punishing absent validators.
-* `minGasPrices`: This parameter defines the minimum gas prices accepted by the node. This is a **local** parameter, meaning each full-node can set a different `minGasPrices`. It is used in the `AnteHandler` during [`CheckTx`](#checktx), mainly as a spam protection mechanism. The transaction enters the [mempool](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#mempool-methods) only if the gas prices of the transaction are greater than one of the minimum gas prices in `minGasPrices` (e.g. if `minGasPrices == 1uatom,1photon`, the `gas-price` of the transaction must be greater than `1uatom` OR `1photon`).
-* `appVersion`: Version of the application. It is set in the [application's constructor function](/v0.53/learn/beginner/app-anatomy#constructor-function).
-
-## Constructor[​](#constructor "Direct link to Constructor")
-
-```
-func NewBaseApp(
-  name string,
-  logger log.Logger,
-  db dbm.DB,
-  txDecoder sdk.TxDecoder,
-  options ...func(*BaseApp),
-) *BaseApp {
-  // ...
-}
-```
-
-The `BaseApp` constructor function is pretty straightforward. The only thing worth noting is the possibility to provide additional [`options`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/options.go) to the `BaseApp`, which are executed in order. The `options` are generally `setter` functions for important parameters, like `SetPruning()` to set pruning options or `SetMinGasPrices()` to set the node's `min-gas-prices`.
-
-Naturally, developers can add additional `options` based on their application's needs.
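The `options ...func(*BaseApp)` variadic is Go's functional-options pattern. A self-contained sketch of how a constructor applies such options in order (the `app` type and setters here are illustrative stand-ins, not the SDK's actual fields):

```go
package main

import "fmt"

// app stands in for BaseApp; the fields are illustrative only.
type app struct {
	name         string
	minGasPrices string
	pruning      string
}

// Options are plain functions that mutate the app under construction;
// the constructor applies them in the order they were passed.
func setMinGasPrices(p string) func(*app) {
	return func(a *app) { a.minGasPrices = p }
}

func setPruning(p string) func(*app) {
	return func(a *app) { a.pruning = p }
}

func newApp(name string, options ...func(*app)) *app {
	a := &app{name: name}
	for _, opt := range options {
		opt(a)
	}
	return a
}

func main() {
	a := newApp("simapp", setMinGasPrices("1uatom"), setPruning("everything"))
	fmt.Println(a.name, a.minGasPrices, a.pruning) // simapp 1uatom everything
}
```

Because options run in order, a later option can override an earlier one, which is why the docs note that they "will execute them in order".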
- -## State Updates[​](#state-updates "Direct link to State Updates") - -The `BaseApp` maintains four primary volatile states and a root or main state. The main state is the canonical state of the application and the volatile states, `checkState`, `prepareProposalState`, `processProposalState` and `finalizeBlockState` are used to handle state transitions in-between the main state made during [`Commit`](#commit). - -Internally, there is only a single `CommitMultiStore` which we refer to as the main or root state. From this root state, we derive four volatile states by using a mechanism called *store branching* (performed by `CacheWrap` function). The types can be illustrated as follows: - -![Types](/images/v0.53/learn/advanced/assets/images/baseapp_state-c6660bdfda8fa3aeb44239780b465ecc.png) - -### InitChain State Updates[​](#initchain-state-updates "Direct link to InitChain State Updates") - -During `InitChain`, the four volatile states, `checkState`, `prepareProposalState`, `processProposalState` and `finalizeBlockState` are set by branching the root `CommitMultiStore`. Any subsequent reads and writes happen on branched versions of the `CommitMultiStore`. To avoid unnecessary roundtrip to the main state, all reads to the branched store are cached. - -![InitChain](/images/v0.53/learn/advanced/assets/images/baseapp_state-initchain-62da1a79d5dd67a6d1ab07f2805040da.png) - -### CheckTx State Updates[​](#checktx-state-updates "Direct link to CheckTx State Updates") - -During `CheckTx`, the `checkState`, which is based off of the last committed state from the root store, is used for any reads and writes. Here we only execute the `AnteHandler` and verify a service router exists for every message in the transaction. Note, when we execute the `AnteHandler`, we branch the already branched `checkState`. This has the side effect that if the `AnteHandler` fails, the state transitions won't be reflected in the `checkState` -- i.e. `checkState` is only updated on success. 
- -![CheckTx](/images/v0.53/learn/advanced/assets/images/baseapp_state-checktx-5bb98c17c37b2b93e98cc681b6c1c9d6.png) - -### PrepareProposal State Updates[​](#prepareproposal-state-updates "Direct link to PrepareProposal State Updates") - -During `PrepareProposal`, the `prepareProposalState` is set by branching the root `CommitMultiStore`. The `prepareProposalState` is used for any reads and writes that occur during the `PrepareProposal` phase. The function uses the `Select()` method of the mempool to iterate over the transactions. `runTx` is then called, which encodes and validates each transaction and from there the `AnteHandler` is executed. If successful, valid transactions are returned inclusive of the events, tags, and data generated during the execution of the proposal. The described behavior is that of the default handler, applications have the flexibility to define their own [custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). - -![ProcessProposal](/images/v0.53/learn/advanced/assets/images/baseapp_state-prepareproposal-bc5c8099ad94b823c376d1bde26d584a.png) - -### ProcessProposal State Updates[​](#processproposal-state-updates "Direct link to ProcessProposal State Updates") - -During `ProcessProposal`, the `processProposalState` is set based off of the last committed state from the root store and is used to process a signed proposal received from a validator. In this state, `runTx` is called and the `AnteHandler` is executed and the context used in this state is built with information from the header and the main state, including the minimum gas prices, which are also set. Again we want to highlight that the described behavior is that of the default handler and applications have the flexibility to define their own [custom mempool handlers](https://docs.cosmos.network/main/building-apps/app-mempool#custom-mempool-handlers). 
- -![ProcessProposal](/images/v0.53/learn/advanced/assets/images/baseapp_state-processproposal-486265a078da51c6f72ce7248e8021b3.png) - -### FinalizeBlock State Updates[​](#finalizeblock-state-updates "Direct link to FinalizeBlock State Updates") - -During `FinalizeBlock`, the `finalizeBlockState` is set for use during transaction execution and endblock. The `finalizeBlockState` is based off of the last committed state from the root store and is branched. Note, the `finalizeBlockState` is set to `nil` on [`Commit`](#commit). - -The state flow for transaction execution is nearly identical to `CheckTx` except state transitions occur on the `finalizeBlockState` and messages in a transaction are executed. Similarly to `CheckTx`, state transitions occur on a doubly branched state -- `finalizeBlockState`. Successful message execution results in writes being committed to `finalizeBlockState`. Note, if message execution fails, state transitions from the AnteHandler are persisted. - -### Commit State Updates[​](#commit-state-updates "Direct link to Commit State Updates") - -During `Commit` all the state transitions that occurred in the `finalizeBlockState` are finally written to the root `CommitMultiStore` which in turn is committed to disk and results in a new application root hash. These state transitions are now considered final. Finally, the `checkState` is set to the newly committed state and `finalizeBlockState` is set to `nil` to be reset on `FinalizeBlock`. - -![Commit](/images/v0.53/learn/advanced/assets/images/baseapp_state-commit-247373784511c1db3ed2175551b22abb.png) - -## ParamStore[​](#paramstore "Direct link to ParamStore") - -During `InitChain`, the `RequestInitChain` provides `ConsensusParams` which contains parameters related to block execution such as maximum gas and size in addition to evidence parameters. If these parameters are non-nil, they are set in the BaseApp's `ParamStore`. 
Behind the scenes, the `ParamStore` is managed by an `x/consensus_params` module. This allows the parameters to be tweaked via on-chain governance. - -## Service Routers[​](#service-routers "Direct link to Service Routers") - -When messages and queries are received by the application, they must be routed to the appropriate module in order to be processed. Routing is done via `BaseApp`, which holds a `msgServiceRouter` for messages, and a `grpcQueryRouter` for queries. - -### `Msg` Service Router[​](#msg-service-router "Direct link to msg-service-router") - -[`sdk.Msg`s](/v0.53/build/building-modules/messages-and-queries#messages) need to be routed after they are extracted from transactions, which are sent from the underlying CometBFT engine via the [`CheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock) ABCI messages. To do so, `BaseApp` holds a `msgServiceRouter` which maps fully-qualified service methods (`string`, defined in each module's Protobuf `Msg` service) to the appropriate module's `MsgServer` implementation. - -The [default `msgServiceRouter` included in `BaseApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/msg_service_router.go) is stateless. However, some applications may want to make use of more stateful routing mechanisms such as allowing governance to disable certain routes or point them to new modules for upgrade purposes. For this reason, the `sdk.Context` is also passed into each [route handler inside `msgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/msg_service_router.go#L35-L36). For a stateless router that doesn't want to make use of this, you can just ignore the `ctx`. 
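A minimal sketch of this routing idea — mapping fully-qualified service method names to handlers — is shown below. The `Handler` and `Router` types are toy stand-ins (the real `msgServiceRouter` works with Protobuf `Msg` services and receives an `sdk.Context`), and the registered method name is purely illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// Handler is a stand-in for a module's MsgServer method.
type Handler func(msg string) (string, error)

// Router maps fully-qualified service methods to handlers,
// loosely mirroring BaseApp's msgServiceRouter.
type Router struct {
	routes map[string]Handler
}

func NewRouter() *Router { return &Router{routes: map[string]Handler{}} }

func (r *Router) Register(method string, h Handler) { r.routes[method] = h }

// Route dispatches a message to the handler registered for the method,
// failing if no route exists.
func (r *Router) Route(method, msg string) (string, error) {
	h, ok := r.routes[method]
	if !ok {
		return "", errors.New("no route for " + method)
	}
	return h(msg)
}

func main() {
	r := NewRouter()
	// Hypothetical fully-qualified method name, for illustration only.
	r.Register("/cosmos.bank.v1beta1.Msg/Send", func(msg string) (string, error) {
		return "handled: " + msg, nil
	})

	res, err := r.Route("/cosmos.bank.v1beta1.Msg/Send", "MsgSend")
	fmt.Println(res, err)
}
```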
- -The application's `msgServiceRouter` is initialized with all the routes using the application's [module manager](/v0.53/build/building-modules/module-manager#manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/v0.53/learn/beginner/app-anatomy#constructor-function). - -### gRPC Query Router[​](#grpc-query-router "Direct link to gRPC Query Router") - -Similar to `sdk.Msg`s, [`queries`](/v0.53/build/building-modules/messages-and-queries#queries) need to be routed to the appropriate module's [`Query` service](/v0.53/build/building-modules/query-services). To do so, `BaseApp` holds a `grpcQueryRouter`, which maps modules' fully-qualified service methods (`string`, defined in their Protobuf `Query` gRPC) to their `QueryServer` implementation. The `grpcQueryRouter` is called during the initial stages of query processing, which can be either by directly sending a gRPC query to the gRPC endpoint, or via the [`Query` ABCI message](#query) on the CometBFT RPC endpoint. - -Just like the `msgServiceRouter`, the `grpcQueryRouter` is initialized with all the query routes using the application's [module manager](/v0.53/build/building-modules/module-manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](/v0.53/learn/beginner/app-anatomy#app-constructor). - -## Main ABCI 2.0 Messages[​](#main-abci-20-messages "Direct link to Main ABCI 2.0 Messages") - -The [Application-Blockchain Interface](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md) (ABCI) is a generic interface that connects a state-machine with a consensus engine to form a functional full-node. It can be wrapped in any language, and needs to be implemented by each application-specific blockchain built on top of an ABCI-compatible consensus engine like CometBFT. 
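To make the shape of this interface concrete, here is a deliberately tiny caricature in Go: a couple of ABCI-style methods operating on raw `[]byte` payloads. It is illustrative only and does not match the real ABCI method set or types:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// ToyApp caricatures a state machine behind an ABCI-like interface:
// the consensus engine hands it opaque bytes, the app decodes/validates.
type ToyApp struct {
	state []byte
}

// CheckTx gatekeeps the mempool: 0 means accept.
func (a *ToyApp) CheckTx(tx []byte) uint32 {
	if len(tx) == 0 {
		return 1 // reject empty transactions
	}
	return 0
}

// FinalizeBlock applies the block's transactions and returns an app hash
// for the consensus engine to put in the next block header.
func (a *ToyApp) FinalizeBlock(txs [][]byte) []byte {
	for _, tx := range txs {
		a.state = append(a.state, tx...)
	}
	h := sha256.Sum256(a.state)
	return h[:]
}

func main() {
	app := &ToyApp{}
	fmt.Println(app.CheckTx([]byte("tx1")))
	fmt.Println(len(app.FinalizeBlock([][]byte{[]byte("tx1")})))
}
```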
- -The consensus engine handles two main tasks: - -* The networking logic, which mainly consists of gossiping block parts, transactions and consensus votes. -* The consensus logic, which results in the deterministic ordering of transactions in the form of blocks. - -It is **not** the role of the consensus engine to define the state or the validity of transactions. Generally, transactions are handled by the consensus engine in the form of `[]bytes`, and relayed to the application via the ABCI to be decoded and processed. At key moments in the networking and consensus processes (e.g. beginning of a block, commit of a block, reception of an unconfirmed transaction, ...), the consensus engine emits ABCI messages for the state-machine to act on. - -Developers building on top of the Cosmos SDK need not implement the ABCI themselves, as `BaseApp` comes with a built-in implementation of the interface. Let us go through the main ABCI messages that `BaseApp` implements: - -* [`Prepare Proposal`](#prepare-proposal) -* [`Process Proposal`](#process-proposal) -* [`CheckTx`](#checktx) -* [`FinalizeBlock`](#finalizeblock) -* [`ExtendVote`](#extendvote) -* [`VerifyVoteExtension`](#verifyvoteextension) - -### Prepare Proposal[](#prepare-proposal "Direct link to Prepare Proposal") - -The `PrepareProposal` function is one of the new methods introduced in the Application Blockchain Interface (ABCI++) in CometBFT and is an important part of the application's overall governance system. In the Cosmos SDK, it allows the application to have more fine-grained control over the transactions that are processed, and ensures that only valid transactions are committed to the blockchain. - -Here is how the `PrepareProposal` function can be implemented: - -1. Extract the `sdk.Msg`s from the transaction. -2. Perform *stateful* checks by calling `Validate()` on each of the `sdk.Msg`s. This is done after *stateless* checks as *stateful* checks are more computationally expensive.
If `Validate()` fails, `PrepareProposal` returns before running further checks, which saves resources. -3. Perform any additional checks that are specific to the application, such as checking account balances, or ensuring that certain conditions are met before a transaction is proposed. Transactions can also be modified before they are processed by the consensus engine, if necessary. -4. Return the updated transactions to be processed by the consensus engine. - -Note that, unlike `CheckTx()`, `PrepareProposal` processes `sdk.Msg`s, so it can directly update the state. However, unlike `FinalizeBlock()`, it does not commit the state updates. It's important to exercise caution when using `PrepareProposal` as incorrect coding could affect the overall liveness of the network. - -It's important to note that `PrepareProposal` complements the `ProcessProposal` method which is executed after this method. The combination of these two methods means that it is possible to guarantee that no invalid transactions are ever committed. Furthermore, such a setup can give rise to other interesting use cases such as Oracles, threshold decryption and more. - -`PrepareProposal` returns a response to the underlying consensus engine of type [`abci.ResponsePrepareProposal`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#prepareproposal). The response contains: - -* `Txs ([][]byte)`: The list of transactions to include in the proposed block, potentially reordered, dropped or modified by the application. - -### Process Proposal[](#process-proposal "Direct link to Process Proposal") - -The `ProcessProposal` function is called by the BaseApp as part of the ABCI message flow, and is executed during the `FinalizeBlock` phase of the consensus process.
The purpose of this function is to give more control to the application for block validation, allowing it to check all transactions in a proposed block before the validator sends the prevote for the block. It allows a validator to perform application-dependent work in a proposed block, enabling features such as immediate block execution, and allows the Application to reject invalid blocks. - -The `ProcessProposal` function performs several key tasks, including: - -1. Validating the proposed block by checking all transactions in it. -2. Checking the proposed block against the current state of the application, to ensure that it is valid and that it can be executed. -3. Updating the application's state based on the proposal, if it is valid and passes all checks. -4. Returning a response to CometBFT indicating the result of the proposal processing. - -`ProcessProposal` is an important part of the application's overall governance system. It is used to manage the network's parameters and other key aspects of its operation. It also ensures that the coherence property is adhered to, i.e. all honest validators must accept a proposal by an honest proposer. - -It's important to note that `ProcessProposal` complements the `PrepareProposal` method, which enables the application to have more fine-grained transaction control by allowing it to reorder, drop, delay, modify, and even add transactions as it sees fit. The combination of these two methods means that it is possible to guarantee that no invalid transactions are ever committed. Furthermore, such a setup can give rise to other interesting use cases such as Oracles, threshold decryption and more. - -CometBFT calls it when it receives a proposal and the CometBFT algorithm has not locked on a value. The Application cannot modify the proposal at this point but can reject it if it is invalid. If that is the case, CometBFT will prevote `nil` on the proposal, which has strong liveness implications for CometBFT.
As a general rule, the Application SHOULD accept a prepared proposal passed via `ProcessProposal`, even if a part of the proposal is invalid (e.g., an invalid transaction); the Application can ignore the invalid part of the prepared proposal at block execution time. - -However, developers must exercise greater caution when using these methods. Incorrectly coding these methods could affect liveness as CometBFT is unable to receive 2/3 valid precommits to finalize a block. - -`ProcessProposal` returns a response to the underlying consensus engine of type [`abci.ResponseProcessProposal`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#processproposal). The response contains: - -* `Status (ProposalStatus)`: `ACCEPT` if the proposed block is valid, `REJECT` otherwise. - -### CheckTx[](#checktx "Direct link to CheckTx") - -`CheckTx` is sent by the underlying consensus engine when a new unconfirmed (i.e. not yet included in a valid block) transaction is received by a full-node. The role of `CheckTx` is to guard the full-node's mempool (where unconfirmed transactions are stored until they are included in a block) from spam transactions. Unconfirmed transactions are relayed to peers only if they pass `CheckTx`. - -`CheckTx()` can perform both *stateful* and *stateless* checks, but developers should strive to make the checks **lightweight** because gas fees are not charged for the resources (CPU, data load...) used during the `CheckTx`. - -In the Cosmos SDK, after [decoding transactions](/v0.53/learn/advanced/encoding), `CheckTx()` is implemented to do the following checks: - -1. Extract the `sdk.Msg`s from the transaction. -2. **Optionally** perform *stateless* checks by calling `ValidateBasic()` on each of the `sdk.Msg`s.
This is done first, as *stateless* checks are less computationally expensive than *stateful* checks. If `ValidateBasic()` fails, `CheckTx` returns before running *stateful* checks, which saves resources. This check is still performed for messages that have not yet migrated to the new message validation mechanism defined in [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) and still have a `ValidateBasic()` method. -3. Perform non-module related *stateful* checks on the [account](/v0.53/learn/beginner/accounts). This step is mainly about checking that the `sdk.Msg` signatures are valid, that enough fees are provided and that the sending account has enough funds to pay for said fees. Note that no precise [`gas`](/v0.53/learn/beginner/gas-fees) counting occurs here, as `sdk.Msg`s are not processed. Usually, the [`AnteHandler`](/v0.53/learn/beginner/gas-fees#antehandler) will check that the `gas` provided with the transaction is greater than a minimum reference gas amount based on the raw transaction size, in order to avoid spam with transactions that provide 0 gas. - -`CheckTx` does **not** process `sdk.Msg`s - they only need to be processed when the canonical state needs to be updated, which happens during `FinalizeBlock`. - -Steps 2. and 3. are performed by the [`AnteHandler`](/v0.53/learn/beginner/gas-fees#antehandler) in the [`RunTx()`](#runtx-antehandler-and-runmsgs) function, which `CheckTx()` calls with the `runTxModeCheck` mode. During each step of `CheckTx()`, a special [volatile state](#state-updates) called `checkState` is updated. This state is used to keep track of the temporary changes triggered by the `CheckTx()` calls of each transaction without modifying the [main canonical state](#main-state). For example, when a transaction goes through `CheckTx()`, the transaction's fees are deducted from the sender's account in `checkState`.
If a second transaction is received from the same account before the first is processed, and the account has consumed all its funds in `checkState` during the first transaction, the second transaction will fail `CheckTx()` and be rejected. In any case, the sender's account will not actually pay the fees until the transaction is included in a block, because `checkState` never gets committed to the main state. The `checkState` is reset to the latest state of the main state each time a block gets [committed](#commit). - -`CheckTx` returns a response to the underlying consensus engine of type [`abci.ResponseCheckTx`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#checktx). The response contains: - -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. -* `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction. -* `GasUsed (int64)`: Amount of gas consumed by transaction. During `CheckTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction. Here is an example: - -x/auth/ante/basic.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/ante/basic.go#L104) - -* `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (e.g. by account). See [`event`s](/v0.53/learn/advanced/events) for more. -* `Codespace (string)`: Namespace for the Code. - -#### RecheckTx[](#rechecktx "Direct link to RecheckTx") - -After `Commit`, `CheckTx` is run again on all transactions that remain in the node's local mempool excluding the transactions that are included in the block.
To prevent the mempool from rechecking all transactions every time a block is committed, the configuration option `mempool.recheck=false` can be set. As of Tendermint v0.32.1, an additional `Type` parameter is made available to the `CheckTx` function that indicates whether an incoming transaction is new (`CheckTxType_New`), or a recheck (`CheckTxType_Recheck`). This allows certain checks like signature verification to be skipped during `CheckTxType_Recheck`. - -## RunTx, AnteHandler, RunMsgs, PostHandler[](#runtx-antehandler-runmsgs-posthandler "Direct link to RunTx, AnteHandler, RunMsgs, PostHandler") - -### RunTx[](#runtx "Direct link to RunTx") - -`RunTx` is called from `CheckTx`/`FinalizeBlock` to handle the transaction, with `execModeCheck` or `execModeFinalize` as parameter to differentiate between the two modes of execution. Note that when `RunTx` receives a transaction, it has already been decoded. - -The first thing `RunTx` does upon being called is to retrieve the `context`'s `CacheMultiStore` by calling the `getContextForTx()` function with the appropriate mode (either `runTxModeCheck` or `execModeFinalize`). This `CacheMultiStore` is a branch of the main store, with cache functionality (for query requests), instantiated during `FinalizeBlock` for transaction execution and during the `Commit` of the previous block for `CheckTx`. After that, two `defer func()` are called for [`gas`](/v0.53/learn/beginner/gas-fees) management. They are executed when `runTx` returns and make sure `gas` is actually consumed, and will return errors, if any. - -After that, `RunTx()` calls `ValidateBasic()`, when available and for backward compatibility, on each `sdk.Msg` in the `Tx`, which runs preliminary *stateless* validity checks. If any `sdk.Msg` fails to pass `ValidateBasic()`, `RunTx()` returns with an error. - -Then, the [`anteHandler`](#antehandler) of the application is run (if it exists).
In preparation of this step, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function. - -baseapp/baseapp.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L706-L722) - -This allows `RunTx` not to commit the changes made to the state during the execution of `anteHandler` if it ends up failing. It also prevents the module implementing the `anteHandler` from writing to state, which is an important part of the [object-capabilities](/v0.53/learn/advanced/ocap) of the Cosmos SDK. - -Finally, the [`RunMsgs()`](#runmsgs) function is called to process the `sdk.Msg`s in the `Tx`. In preparation of this step, just like with the `anteHandler`, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function. - -### AnteHandler[​](#antehandler "Direct link to AnteHandler") - -The `AnteHandler` is a special handler that implements the `AnteHandler` interface and is used to authenticate the transaction before the transaction's internal messages are processed. - -types/handler.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/handler.go#L3-L5) - -The `AnteHandler` is theoretically optional, but still a very important component of public blockchain networks. It serves 3 primary purposes: - -* Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](/v0.53/learn/advanced/transactions#transaction-generation) checking. -* Perform preliminary *stateful* validity checks like ensuring signatures are valid or that the sender has enough funds to pay for fees. -* Play a role in the incentivisation of stakeholders via the collection of transaction fees. 
- -`BaseApp` holds an `anteHandler` as parameter that is initialized in the [application's constructor](/v0.53/learn/beginner/app-anatomy#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/ante/ante.go). - -Click [here](/v0.53/learn/beginner/gas-fees#antehandler) for more on the `anteHandler`. - -### RunMsgs[](#runmsgs "Direct link to RunMsgs") - -`RunMsgs` is called from `RunTx` with `runTxModeCheck` as parameter to check the existence of a route for each message in the transaction, and with `execModeFinalize` to actually process the `sdk.Msg`s. - -First, it retrieves the `sdk.Msg`'s fully-qualified type name, by checking the `type_url` of the Protobuf `Any` representing the `sdk.Msg`. Then, using the application's [`msgServiceRouter`](#msg-service-router), it checks for the existence of a `Msg` service method related to that `type_url`. At this point, if `mode == runTxModeCheck`, `RunMsgs` returns. Otherwise, if `mode == execModeFinalize`, the [`Msg` service](/v0.53/build/building-modules/msg-services) RPC is executed, before `RunMsgs` returns. - -### PostHandler[](#posthandler "Direct link to PostHandler") - -`PostHandler` is similar to `AnteHandler`, but it, as the name suggests, executes custom post-transaction processing logic after [`RunMsgs`](#runmsgs) is called. `PostHandler` receives the `Result` of `RunMsgs` in order to enable this customizable behavior. - -Like `AnteHandler`s, `PostHandler`s are theoretically optional. - -Other use cases like unused gas refund can also be enabled by `PostHandler`s. - -x/auth/posthandler/post.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/posthandler/post.go#L1-L15) - -Note, when `PostHandler`s fail, the state from `runMsgs` is also reverted, effectively making the transaction fail.
- -## Other ABCI Messages[](#other-abci-messages "Direct link to Other ABCI Messages") - -### InitChain[](#initchain "Direct link to InitChain") - -The [`InitChain` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when the chain is first started. It is mainly used to **initialize** parameters and state like: - -* [Consensus Parameters](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#consensus-parameters) via `setConsensusParams`. -* [`checkState` and `finalizeBlockState`](#state-updates) via `setState`. -* The [block gas meter](/v0.53/learn/beginner/gas-fees#block-gas-meter), with infinite gas to process genesis transactions. - -Finally, the `InitChain(req abci.RequestInitChain)` method of `BaseApp` calls the [`initChainer()`](/v0.53/learn/beginner/app-anatomy#initchainer) of the application in order to initialize the main state of the application from the `genesis file` and, if defined, call the [`InitGenesis`](/v0.53/build/building-modules/genesis#initgenesis) function of each of the application's modules. - -### FinalizeBlock[](#finalizeblock "Direct link to FinalizeBlock") - -The [`FinalizeBlock` ABCI message](https://github.com/cometbft/cometbft/blob/v0.38.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when a block proposal created by the correct proposer is received. The previous `BeginBlock`, `DeliverTx` and `EndBlock` calls are private methods on the BaseApp struct. - -baseapp/abci.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/abci.go#L869) - -#### PreBlock[​](#preblock "Direct link to PreBlock") - -* Run the application's [`preBlocker()`](/v0.53/learn/beginner/app-anatomy#preblocker), which mainly runs the [`PreBlocker()`](/v0.53/build/building-modules/preblock#preblock) method of each of the modules. - -#### BeginBlock[​](#beginblock "Direct link to BeginBlock") - -* Initialize [`finalizeBlockState`](#state-updates) with the latest header using the `req abci.RequestFinalizeBlock` passed as parameter via the `setState` function. - - baseapp/baseapp.go - - ``` - loading... - ``` - - [See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L746-L770) - - This function also resets the [main gas meter](/v0.53/learn/beginner/gas-fees#main-gas-meter). - -* Initialize the [block gas meter](/v0.53/learn/beginner/gas-fees#block-gas-meter) with the `maxGas` limit. The `gas` consumed within the block cannot go above `maxGas`. This parameter is defined in the application's consensus parameters. - -* Run the application's [`beginBlocker()`](/v0.53/learn/beginner/app-anatomy#beginblocker-and-endblocker), which mainly runs the [`BeginBlocker()`](/v0.53/build/building-modules/beginblock-endblock#beginblock) method of each of the modules. - -* Set the [`VoteInfos`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#voteinfo) of the application, i.e. the list of validators whose *precommit* for the previous block was included by the proposer of the current block. This information is carried into the [`Context`](/v0.53/learn/advanced/context) so that it can be used during transaction execution and EndBlock. - -#### Transaction Execution[​](#transaction-execution "Direct link to Transaction Execution") - -When the underlying consensus engine receives a block proposal, each transaction in the block needs to be processed by the application. 
To that end, the underlying consensus engine sends the transactions to the application in the `FinalizeBlock` message, one by one in sequential order. - -Before the first transaction of a given block is processed, a [volatile state](#state-updates) called `finalizeBlockState` is initialized during `FinalizeBlock`. This state is updated each time a transaction is processed via `FinalizeBlock`, and committed to the [main state](#main-state) when the block is [committed](#commit), after which it is set to `nil`. - -baseapp/baseapp.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#LL772-L807) - -Transaction execution within `FinalizeBlock` performs the **exact same steps as `CheckTx`**, with two differences: - -1. The `AnteHandler` does **not** check that the transaction's `gas-prices` is sufficient. That is because the `min-gas-prices` value `gas-prices` is checked against is local to the node, and therefore what is enough for one full-node might not be for another. This means that the proposer can potentially include transactions for free, although they are not incentivised to do so, as they earn a bonus on the total fee of the block they propose. -2. For each `sdk.Msg` in the transaction, route to the appropriate module's Protobuf [`Msg` service](/v0.53/build/building-modules/msg-services). Additional *stateful* checks are performed, and the branched multistore held in `finalizeBlockState`'s `context` is updated by the module's `keeper`. If the `Msg` service returns successfully, the branched multistore held in `context` is written to `finalizeBlockState` `CacheMultiStore`. - -During the message execution outlined in (2), each read/write to the store increases the value of `GasConsumed`. You can find the default cost of each operation: - -store/types/gas.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/gas.go#L230-L241) - -At any point, if `GasConsumed > GasWanted`, the function returns with `Code != 0` and the execution fails. - -Each transaction returns a response to the underlying consensus engine of type [`abci.ExecTxResult`](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci%2B%2B_methods.md#exectxresult). The response contains: - -* `Code (uint32)`: Response Code. `0` if successful. -* `Data ([]byte)`: Result bytes, if any. -* `Log (string):` The output of the application's logger. May be non-deterministic. -* `Info (string):` Additional information. May be non-deterministic. -* `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction. -* `GasUsed (int64)`: Amount of gas consumed by transaction. During transaction execution, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction, and by adding gas each time a read/write to the store occurs. -* `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (e.g. by account). See [`event`s](/v0.53/learn/advanced/events) for more. -* `Codespace (string)`: Namespace for the Code. - -#### EndBlock[](#endblock "Direct link to EndBlock") - -`EndBlock` is run after transaction execution completes. It allows developers to have logic be executed at the end of each block. In the Cosmos SDK, the bulk of the `EndBlock()` method is to run the application's `EndBlocker()`, which mainly runs the `EndBlocker()` method of each of the application's modules. - -baseapp/baseapp.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L811-L833) - -### Commit[​](#commit "Direct link to Commit") - -The [`Commit` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after the full-node has received *precommits* from 2/3+ of validators (weighted by voting power). On the `BaseApp` end, the `Commit(res abci.ResponseCommit)` function is implemented to commit all the valid state transitions that occurred during `FinalizeBlock` and to reset state for the next block. - -To commit state-transitions, the `Commit` function calls the `Write()` function on `finalizeBlockState.ms`, where `finalizeBlockState.ms` is a branched multistore of the main store `app.cms`. Then, the `Commit` function sets `checkState` to the latest header (obtained from `finalizeBlockState.ctx.BlockHeader`) and `finalizeBlockState` to `nil`. - -Finally, `Commit` returns the hash of the commitment of `app.cms` back to the underlying consensus engine. This hash is used as a reference in the header of the next block. - -### Info[​](#info "Direct link to Info") - -The [`Info` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is a simple query from the underlying consensus engine, notably used to sync the latter with the application during a handshake that happens on startup. When called, the `Info(res abci.ResponseInfo)` function from `BaseApp` will return the application's name, version and the hash of the last commit of `app.cms`. - -### Query[​](#query "Direct link to Query") - -The [`Query` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is used to serve queries received from the underlying consensus engine, including queries received via RPC like CometBFT RPC. 
It used to be the main entrypoint to build interfaces with the application, but with the introduction of [gRPC queries](/v0.53/build/building-modules/query-services) in Cosmos SDK v0.40, its usage is more limited. The application must respect a few rules when implementing the `Query` method, which are outlined [here](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#query). - -Each CometBFT `query` comes with a `path`, which is a `string` that denotes what to query. If the `path` matches a gRPC fully-qualified service method, then `BaseApp` defers the query to the `grpcQueryRouter` and lets it handle it, as explained [above](#grpc-query-router). Otherwise, the `path` represents a query that is not (yet) handled by the gRPC router. `BaseApp` splits the `path` string with the `/` delimiter. By convention, the first element of the split string (`split[0]`) contains the category of the `query` (`app`, `p2p`, `store`, or `custom`). The `BaseApp` implementation of the `Query(req abci.RequestQuery)` method is a simple dispatcher serving these main categories of queries: - -* Application-related queries like querying the application's version, which are served via the `handleQueryApp` method. -* Direct queries to the multistore, which are served by the `handleQueryStore` method. These direct queries are different from custom queries which go through `app.queryRouter`, and are mainly used by third-party service providers such as block explorers. -* P2P queries, which are served via the `handleQueryP2P` method. These queries return either `app.addrPeerFilter` or `app.ipPeerFilter` that contain the list of peers filtered by address or IP respectively. These lists are first initialized via `options` in `BaseApp`'s [constructor](#constructor). - -### ExtendVote[​](#extendvote "Direct link to ExtendVote") - -`ExtendVote` allows an application to extend a pre-commit vote with arbitrary data.
This process does NOT have to be deterministic, and the data returned can be unique to the validator process. - -In the Cosmos SDK, this is implemented as a no-op: - -baseapp/abci\_utils.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/abci_utils.go#L444-L450) - -### VerifyVoteExtension[​](#verifyvoteextension "Direct link to VerifyVoteExtension") - -`VerifyVoteExtension` allows an application to verify that the data returned by `ExtendVote` is valid. This process MUST be deterministic. Moreover, the value of `ResponseVerifyVoteExtension.status` MUST exclusively depend on the parameters passed in the call to `RequestVerifyVoteExtension`, and the last committed application state. - -In the Cosmos SDK, this is implemented as a no-op: - -baseapp/abci\_utils.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/abci_utils.go#L452-L458) diff --git a/docs/sdk/v0.53/learn/advanced/cli.mdx b/docs/sdk/v0.53/learn/advanced/cli.mdx deleted file mode 100644 index ae1447ba..00000000 --- a/docs/sdk/v0.53/learn/advanced/cli.mdx +++ /dev/null @@ -1,216 +0,0 @@ ---- -title: "Command-Line Interface" -description: "Version: v0.53" ---- - - - This document describes how the command-line interface (CLI) works at a high level for an [**application**](/v0.53/learn/beginner/app-anatomy). A separate document for implementing a CLI for a Cosmos SDK [**module**](/v0.53/build/building-modules/intro) can be found [here](/v0.53/build/building-modules/module-interfaces#cli). - - -## Command-Line Interface[​](#command-line-interface-1 "Direct link to Command-Line Interface") - -### Example Command[​](#example-command "Direct link to Example Command") - -There is no set way to create a CLI, but Cosmos SDK modules typically use the [Cobra Library](https://github.com/spf13/cobra). Building a CLI with Cobra entails defining commands, arguments, and flags.
[**Commands**](#root-command) understand the actions users wish to take, such as `tx` for creating a transaction and `query` for querying the application. Each command can also have nested subcommands, necessary for naming the specific transaction type. Users also supply **Arguments**, such as account numbers to send coins to, and [**Flags**](#flags) to modify various aspects of the commands, such as gas prices or which node to broadcast to. - -Here is an example of a command a user might enter to interact with the simapp CLI `simd` in order to send some tokens: - -``` -simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --gas auto --gas-prices  -``` - -The first four strings specify the command: - -* The root command for the entire application `simd`. -* The subcommand `tx`, which contains all commands that let users create transactions. -* The subcommand `bank` to indicate which module to route the command to ([`x/bank`](/v0.53/build/modules/bank) module in this case). -* The type of transaction `send`. - -The next three strings are arguments: the `from_address` the user wishes to send from, the `to_address` of the recipient, and the `amount` they want to send. Finally, the last few strings of the command are optional flags to indicate how much the user is willing to pay in fees (calculated using the amount of gas used to execute the transaction and the gas prices provided by the user). - -The CLI interacts with a [node](/v0.53/learn/advanced/node) to handle this command. The interface itself is defined in a `main.go` file. - -### Building the CLI[​](#building-the-cli "Direct link to Building the CLI") - -The `main.go` file needs to have a `main()` function that creates a root command, to which all the application commands will be added as subcommands. The root command additionally handles: - -* **setting configurations** by reading in configuration files (e.g. the Cosmos SDK config file). -* **adding any flags** to it, such as `--chain-id`.
-* **instantiating the `codec`** by injecting the application codecs. The [`codec`](/v0.53/learn/advanced/encoding) is used to encode and decode data structures for the application - stores can only persist `[]byte`s so the developer must define a serialization format for their data structures or use the default, Protobuf. -* **adding subcommands** for all the possible user interactions, including [transaction commands](#transaction-commands) and [query commands](#query-commands). - -The `main()` function finally creates an executor and [executes](https://pkg.go.dev/github.com/spf13/cobra#Command.Execute) the root command. See an example of the `main()` function from the `simapp` application: - -simapp/simd/main.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/main.go#L14-L24) - -The rest of the document will detail what needs to be implemented for each step and include smaller portions of code from the `simapp` CLI files. - -## Adding Commands to the CLI[​](#adding-commands-to-the-cli "Direct link to Adding Commands to the CLI") - -Every application CLI first constructs a root command, then adds functionality by aggregating subcommands (often with further nested subcommands) using `rootCmd.AddCommand()`. The bulk of an application's unique capabilities lies in its transaction and query commands, called `TxCmd` and `QueryCmd` respectively. - -### Root Command[​](#root-command "Direct link to Root Command") - -The root command (called `rootCmd`) is what the user first types into the command line to indicate which application they wish to interact with. The string used to invoke the command (the "Use" field) is typically the name of the application suffixed with `-d`, e.g. `simd` or `gaiad`. The root command typically includes the following commands to support basic functionality in the application.
- -* **Status** command from the Cosmos SDK rpc client tools, which prints information about the status of the connected [`Node`](/v0.53/learn/advanced/node). The status of a node includes `NodeInfo`, `SyncInfo`, and `ValidatorInfo`. -* **Keys** [commands](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys) from the Cosmos SDK client tools, which include a collection of subcommands for using the key functions in the Cosmos SDK crypto tools, including adding a new key and saving it to the keyring, listing all public keys stored in the keyring, and deleting a key. For example, users can type `simd keys add ` to add a new key and save an encrypted copy to the keyring, using the flag `--recover` to recover a private key from a seed phrase or the flag `--multisig` to group multiple keys together to create a multisig key. For full details on the `add` key command, see the code [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys/add.go). For more details about using `--keyring-backend` for the storage of key credentials, see the [keyring docs](/v0.53/user/run-node/keyring). -* **Server** commands from the Cosmos SDK server package. These commands are responsible for providing the mechanisms necessary to start an ABCI CometBFT application and provide the CLI framework (based on [cobra](https://github.com/spf13/cobra)) necessary to fully bootstrap an application. The package exposes two core functions: `StartCmd` and `ExportCmd`, which create commands to start the application and export state, respectively. Learn more [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server). -* [**Transaction**](#transaction-commands) commands. -* [**Query**](#query-commands) commands. - -Next is an example `rootCmd` function from the `simapp` application. It instantiates the root command, adds a [*persistent* flag](#flags) and a `PreRun` function to be run before every execution, and adds all of the necessary subcommands.
- -simapp/simd/cmd/root\_v2.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L47-L130) - - - Use the `EnhanceRootCommand()` from the AutoCLI options to automatically add auto-generated commands from the modules to the root command. Additionally, it adds all manually defined module commands (`tx` and `query`) as well. Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section. - - -`rootCmd` has a function called `initAppConfig()` which is useful for setting the application's custom configs. By default, the app uses the CometBFT app config template from the Cosmos SDK, which can be overwritten via `initAppConfig()`. Here is example code to override the default `app.toml` template. - -simapp/simd/cmd/root\_v2.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L144-L199) - -The `initAppConfig()` also allows overriding the default Cosmos SDK's [server config](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/config/config.go#L231). One example is the `min-gas-prices` config, which defines the minimum gas prices a validator is willing to accept for processing a transaction. By default, the Cosmos SDK sets this parameter to `""` (empty string), which forces all validators to tweak their own `app.toml` and set a non-empty value, or else the node will halt on startup. This might not be the best UX for validators, so the chain developer can set a default `app.toml` value for validators inside this `initAppConfig()` function. - -simapp/simd/cmd/root\_v2.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L164-L180) - -The root-level `status` and `keys` subcommands are common across most applications and do not interact with application state.
The bulk of an application's functionality - what users can actually *do* with it - is enabled by its `tx` and `query` commands. - -### Transaction Commands[​](#transaction-commands "Direct link to Transaction Commands") - -[Transactions](/v0.53/learn/advanced/transactions) are objects wrapping [`Msg`s](/v0.53/build/building-modules/messages-and-queries#messages) that trigger state changes. To enable the creation of transactions using the CLI interface, a function `txCommand` is generally added to the `rootCmd`: - -simapp/simd/cmd/root\_v2.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L222-L229) - -This `txCommand` function adds all the transaction commands available to end-users for the application. This typically includes: - -* **Sign command** from the [`auth`](/v0.53/build/modules/auth) module that signs messages in a transaction. To enable multisig, add the `auth` module's `MultiSign` command. Since every transaction requires some sort of signature in order to be valid, the signing command is necessary for every application. -* **Broadcast command** from the Cosmos SDK client tools, to broadcast transactions. -* **All [module transaction commands](/v0.53/build/building-modules/module-interfaces#transaction-commands)** the application is dependent on, retrieved by using the [basic module manager's](/v0.53/build/building-modules/module-manager#basic-manager) `AddTxCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli). - -Here is an example of a `txCommand` aggregating these subcommands from the `simapp` application: - -simapp/simd/cmd/root\_v2.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L270-L292) - - - When using AutoCLI to generate module transaction commands, `EnhanceRootCommand()` automatically adds the module `tx` command to the root command.
Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section. - - -### Query Commands[​](#query-commands "Direct link to Query Commands") - -[**Queries**](/v0.53/build/building-modules/messages-and-queries#queries) are objects that allow users to retrieve information about the application's state. To enable the creation of queries using the CLI interface, a function `queryCommand` is generally added to the `rootCmd`: - -simapp/simd/cmd/root\_v2.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L222-L229) - -This `queryCommand` function adds all the queries available to end-users for the application. This typically includes: - -* **QueryTx** and/or other transaction query commands from the `auth` module which allow the user to search for a transaction by inputting its hash, a list of tags, or a block height. These queries allow users to see if transactions have been included in a block. -* **Account command** from the `auth` module, which displays the state (e.g. account balance) of an account given an address. -* **Validator command** from the Cosmos SDK rpc client tools, which displays the validator set of a given height. -* **Block command** from the Cosmos SDK RPC client tools, which displays the block data for a given height. -* **All [module query commands](/v0.53/build/building-modules/module-interfaces#query-commands)** the application is dependent on, retrieved by using the [basic module manager's](/v0.53/build/building-modules/module-manager#basic-manager) `AddQueryCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli). - -Here is an example of a `queryCommand` aggregating subcommands from the `simapp` application: - -simapp/simd/cmd/root\_v2.go - -``` -loading... 
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L249-L268) - - - When using AutoCLI to generate module query commands, `EnhanceRootCommand()` automatically adds the module `query` command to the root command. Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section. - - -## Flags[​](#flags "Direct link to Flags") - -Flags are used to modify commands; developers can include them in a `flags.go` file with their CLI. Users can explicitly include them in commands or pre-configure them inside their [`app.toml`](/v0.53/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml). Commonly pre-configured flags include the `--node` to connect to and the `--chain-id` of the blockchain the user wishes to interact with. - -A *persistent* flag (as opposed to a *local* flag) added to a command transcends all of its children: subcommands will inherit the configured values for these flags. Additionally, all flags have default values when they are added to commands; some toggle an option off but others are empty values that the user needs to override to create valid commands. A flag can be explicitly marked as *required* so that an error is automatically thrown if the user does not provide a value, but it is also acceptable to handle unexpected missing flags differently. - -Flags are added to commands directly (generally in the [module's CLI file](/v0.53/build/building-modules/module-interfaces#flags) where module commands are defined) and no flags except for the `rootCmd` persistent flags have to be added at the application level. It is common to add a *persistent* flag for `--chain-id`, the unique identifier of the blockchain the application pertains to, to the root command. This can be done in the `main()` function. Adding this flag makes sense as the chain ID should not change across commands in this application CLI.
- -## Environment variables[​](#environment-variables "Direct link to Environment variables") - -Each flag is bound to its respective named environment variable. The name of the environment variable consists of two parts: the upper-case `basename` followed by the flag name, with `-` substituted by `_`. For example, the flag `--node` for an application with basename `GAIA` is bound to `GAIA_NODE`. This reduces the number of flags typed for routine operations. For example, instead of: - -``` -gaia --home=./ --node= --chain-id="testchain-1" --keyring-backend=test tx ... --from= -``` - -this will be more convenient: - -``` -# define env variables in .env, .envrc etc -GAIA_HOME= -GAIA_NODE= -GAIA_CHAIN_ID="testchain-1" -GAIA_KEYRING_BACKEND="test" - -# and later just use -gaia tx ... --from= -``` - -## Configurations[​](#configurations "Direct link to Configurations") - -It is vital that the root command of an application uses the `PersistentPreRun()` cobra command property for executing the command, so all child commands have access to the server and client contexts. These contexts are set as their default values initially and may be modified, scoped to the command, in their respective `PersistentPreRun()` functions. Note that the `client.Context` is typically pre-populated with "default" values that may be useful for all commands to inherit and override if necessary. - -Here is an example of a `PersistentPreRun()` function from `simapp`: - -simapp/simd/cmd/root\_v2.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L81-L120) - -The `SetCmdClientContextHandler` call reads persistent flags via `ReadPersistentCommandFlags` which creates a `client.Context` and sets that on the root command's `Context`. - -The `InterceptConfigsPreRunHandler` call creates a viper literal, default `server.Context`, and a logger and sets that on the root command's `Context`.
The `server.Context` will be modified and saved to disk. The internal `interceptConfigs` call reads or creates a CometBFT configuration based on the home path provided. In addition, `interceptConfigs` also reads and loads the application configuration, `app.toml`, and binds that to the `server.Context` viper literal. This is vital so the application can get access to not only the CLI flags, but also to the application configuration values provided by this file. - - - To configure which logger is used, do not use `InterceptConfigsPreRunHandler`, which sets the default SDK logger, but instead use `InterceptConfigsAndCreateContext` and set the server context and the logger manually: - - ``` - -return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) -+serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig) -+if err != nil { -+  return err -+} -+// overwrite default server logger -+logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout()) -+if err != nil { -+  return err -+} -+serverCtx.Logger = logger.With(log.ModuleKey, "server") -+// set server context -+return server.SetCmdServerContext(cmd, serverCtx) - ``` - diff --git a/docs/sdk/v0.53/learn/advanced/config.mdx b/docs/sdk/v0.53/learn/advanced/config.mdx deleted file mode 100644 index f2e079c2..00000000 --- a/docs/sdk/v0.53/learn/advanced/config.mdx +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: "Configuration" -description: "Version: v0.53" ---- - -This documentation refers to `app.toml`; to read about `config.toml`, please visit the [CometBFT docs](https://docs.cometbft.com/v0.37/). - -tools/confix/data/v0.47-app.toml - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/main/tools/confix/data/v0.47-app.toml) - -## inter-block-cache[​](#inter-block-cache "Direct link to inter-block-cache") - -If enabled, this feature will consume more RAM than a normal node.
- -## iavl-cache-size[​](#iavl-cache-size "Direct link to iavl-cache-size") - -Using this feature will increase RAM consumption. - -## iavl-lazy-loading[​](#iavl-lazy-loading "Direct link to iavl-lazy-loading") - -This feature is to be used for archive nodes, allowing them to have a faster start-up time. diff --git a/docs/sdk/v0.53/learn/advanced/context.mdx b/docs/sdk/v0.53/learn/advanced/context.mdx deleted file mode 100644 index 9bd76ff5..00000000 --- a/docs/sdk/v0.53/learn/advanced/context.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: "Context" -description: "Version: v0.53" ---- - - - The `context` is a data structure intended to be passed from function to function that carries information about the current state of the application. It provides access to a branched storage (a safe branch of the entire state) as well as useful objects and information like `gasMeter`, `block height`, `consensus parameters` and more. - - - - * [Anatomy of a Cosmos SDK Application](/v0.53/learn/beginner/app-anatomy) - * [Lifecycle of a Transaction](/v0.53/learn/beginner/tx-lifecycle) - - -## Context Definition[​](#context-definition "Direct link to Context Definition") - -The Cosmos SDK `Context` is a custom data structure that contains Go's stdlib [`context`](https://pkg.go.dev/context) as its base, and has many additional types within its definition that are specific to the Cosmos SDK. The `Context` is integral to transaction processing in that it allows modules to easily access their respective [store](/v0.53/learn/advanced/store#base-layer-kvstores) in the [`multistore`](/v0.53/learn/advanced/store#multistore) and retrieve transactional context such as the block header and gas meter. - -types/context.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/context.go#L40-L67) - -* **Base Context:** The base type is a Go [Context](https://pkg.go.dev/context), which is explained further in the [Go Context Package](#go-context-package) section below. -* **Multistore:** Every application's `BaseApp` contains a [`CommitMultiStore`](/v0.53/learn/advanced/store#multistore) which is provided when a `Context` is created. Calling the `KVStore()` and `TransientStore()` methods allows modules to fetch their respective [`KVStore`](/v0.53/learn/advanced/store#base-layer-kvstores) using their unique `StoreKey`. -* **Header:** The [header](https://docs.cometbft.com/v0.37/spec/core/data_structures#header) is a Blockchain type. It carries important information about the state of the blockchain, such as block height and proposer of the current block. -* **Header Hash:** The current block header hash, obtained during `abci.FinalizeBlock`. -* **Chain ID:** The unique identification number of the blockchain a block pertains to. -* **Transaction Bytes:** The `[]byte` representation of a transaction being processed using the context. Every transaction is processed by various parts of the Cosmos SDK and consensus engine (e.g. CometBFT) throughout its [lifecycle](/v0.53/learn/beginner/tx-lifecycle), some of which do not have any understanding of transaction types. Thus, transactions are marshaled into the generic `[]byte` type using some kind of [encoding format](/v0.53/learn/advanced/encoding) such as [Amino](/v0.53/learn/advanced/encoding). -* **Logger:** A `logger` from the CometBFT libraries. Learn more about logs [here](https://docs.cometbft.com/v0.37/core/configuration). Modules call this method to create their own unique module-specific logger. 
-* **VoteInfo:** A list of the ABCI type [`VoteInfo`](https://docs.cometbft.com/master/spec/abci/abci.html#voteinfo), which includes the name of a validator and a boolean indicating whether they have signed the block. -* **Gas Meters:** Specifically, a [`gasMeter`](/v0.53/learn/beginner/gas-fees#main-gas-meter) for the transaction currently being processed using the context and a [`blockGasMeter`](/v0.53/learn/beginner/gas-fees#block-gas-meter) for the entire block it belongs to. Users specify how much in fees they wish to pay for the execution of their transaction; these gas meters keep track of how much [gas](/v0.53/learn/beginner/gas-fees) has been used in the transaction or block so far. If the gas meter runs out, execution halts. -* **CheckTx Mode:** A boolean value indicating whether a transaction should be processed in `CheckTx` or `DeliverTx` mode. -* **Min Gas Price:** The minimum [gas](/v0.53/learn/beginner/gas-fees) price a node is willing to take in order to include a transaction in its block. This price is a local value configured by each node individually, and should therefore **not be used in any functions used in sequences leading to state-transitions**. -* **Consensus Params:** The ABCI type [Consensus Parameters](https://docs.cometbft.com/master/spec/abci/apps.html#consensus-parameters), which specify certain limits for the blockchain, such as maximum gas for a block. -* **Event Manager:** The event manager allows any caller with access to a `Context` to emit [`Events`](/v0.53/learn/advanced/events). Modules may define module specific `Events` by defining various `Types` and `Attributes` or use the common definitions found in `types/`. Clients can subscribe or query for these `Events`. These `Events` are collected throughout `FinalizeBlock` and are returned to CometBFT for indexing. -* **Priority:** The transaction priority, only relevant in `CheckTx`. -* **KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the `KVStore`. 
- -* **Transient KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the transient `KVStore`. -* **StreamingManager:** The `streamingManager` field provides access to the streaming manager, which allows modules to subscribe to state changes emitted by the blockchain. The streaming manager is used by the state listening API, which is described in [ADR 038](https://docs.cosmos.network/main/architecture/adr-038-state-listening). -* **CometInfo:** A lightweight field that contains information about the current block, such as the block height, time, and hash. This information can be used for validating evidence, providing historical data, and enhancing the user experience. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/comet/service.go#L14). -* **HeaderInfo:** The `headerInfo` field contains information about the current block header, such as the chain ID, gas limit, and timestamp. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/header/service.go#L14). - -## Go Context Package[​](#go-context-package "Direct link to Go Context Package") - -A basic `Context` is defined in the [Golang Context Package](https://pkg.go.dev/context). A `Context` is an immutable data structure that carries request-scoped data across APIs and processes. Contexts are also designed to enable concurrency and to be used in goroutines. - -Contexts are intended to be **immutable**; they should never be edited. Instead, the convention is to create a child context from its parent using a `With` function. For example: - -``` -childCtx = parentCtx.WithBlockHeader(header) -``` - -The [Golang Context Package](https://pkg.go.dev/context) documentation instructs developers to explicitly pass a context `ctx` as the first argument of a process.
- -## Store branching[​](#store-branching "Direct link to Store branching") - -The `Context` contains a `MultiStore`, which allows for branching and caching functionality using `CacheMultiStore` (queries in `CacheMultiStore` are cached to avoid future round trips). Each `KVStore` is branched in a safe and isolated ephemeral storage. Processes are free to write changes to the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can be committed to the underlying store at the end of the sequence, or disregarded if something goes wrong. The pattern of usage for a Context is as follows: - -1. A process receives a Context `ctx` from its parent process, which provides information needed to perform the process. -2. The `ctx.ms` is a **branched store**, i.e. a branch of the [multistore](/v0.53/learn/advanced/store#multistore) is made so that the process can make changes to the state as it executes, without changing the original `ctx.ms`. This is useful to protect the underlying multistore in case the changes need to be reverted at some point in the execution. -3. The process may read and write from `ctx` as it is executing. It may call a subprocess and pass `ctx` to it as needed. -4. When a subprocess returns, it checks if the result is a success or failure. If a failure, nothing needs to be done - the branch `ctx` is simply discarded. If successful, the changes made to the `CacheMultiStore` can be committed to the original `ctx.ms` via `Write()`. - -For example, here is a snippet from the [`runTx`](/v0.53/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function in [`baseapp`](/v0.53/learn/advanced/baseapp): - -``` -runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes) -result = app.runMsgs(runMsgCtx, msgs, mode) -result.GasWanted = gasWanted - -if mode != runTxModeDeliver { -  return result -} - -if result.IsOK() { -  msCache.Write() -} -``` - -Here is the process: - -1.
Prior to calling `runMsgs` on the message(s) in the transaction, it uses `app.cacheTxContext()` to branch and cache the context and multistore. -2. `runMsgCtx`, the context with the branched store, is used in `runMsgs` to return a result. -3. If the process is running in [`checkTxMode`](/v0.53/learn/advanced/baseapp#checktx), there is no need to write the changes - the result is returned immediately. -4. If the process is running in [`deliverTxMode`](/v0.53/learn/advanced/baseapp#delivertx) and the result indicates a successful run over all the messages, the branched multistore is written back to the original. diff --git a/docs/sdk/v0.53/learn/advanced/encoding.mdx b/docs/sdk/v0.53/learn/advanced/encoding.mdx deleted file mode 100644 index df449d91..00000000 --- a/docs/sdk/v0.53/learn/advanced/encoding.mdx +++ /dev/null @@ -1,220 +0,0 @@ ---- -title: "Encoding" -description: "Version: v0.53" ---- - - - While encoding in the Cosmos SDK used to be mainly handled by the `go-amino` codec, the Cosmos SDK is moving towards using `gogoprotobuf` for both state and client-side encoding. - - - - * [Anatomy of a Cosmos SDK application](/v0.53/learn/beginner/app-anatomy) - - -## Encoding[​](#encoding-1 "Direct link to Encoding") - -The Cosmos SDK utilizes two binary wire encoding protocols: [Amino](https://github.com/tendermint/go-amino/), an object encoding specification, and [Protocol Buffers](https://developers.google.com/protocol-buffers), a subset of Proto3 with an extension for interface support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more information on Proto3, which Amino is largely compatible with (but not with Proto2). - -Due to Amino having significant performance drawbacks, being reflection-based, and not having any meaningful cross-language/client support, Protocol Buffers, specifically [gogoprotobuf](https://github.com/cosmos/gogoproto/), is being used in place of Amino.
Note that the migration from Amino to Protocol Buffers is still ongoing. - -Binary wire encoding of types in the Cosmos SDK can be broken down into two main categories, client encoding and store encoding. Client encoding mainly revolves around transaction processing and signing, whereas store encoding revolves around types used in state-machine transitions and what is ultimately stored in the Merkle tree. - -For store encoding, protobuf definitions can exist for any type and will typically have an Amino-based "intermediary" type. Specifically, the protobuf-based type definition is used for serialization and persistence, whereas the Amino-based type is used for business logic in the state-machine, where they may convert back and forth. Note that the Amino-based types may slowly be phased out in the future, so developers should take care to use the protobuf message definitions where possible. - -In the `codec` package, there exist two core interfaces, `BinaryCodec` and `JSONCodec`, where the former encapsulates the current Amino interface, except it operates on types implementing the latter instead of generic `interface{}` types. - -The `ProtoCodec` handles both binary and JSON serialization via Protobuf. This means that modules may use Protobuf encoding, but the types must implement `ProtoMarshaler`. If modules wish to avoid implementing this interface for their types, the implementation can be autogenerated via [buf](https://buf.build/). - -If modules use [Collections](/v0.53/build/packages/collections), encoding and decoding are handled automatically; marshal and unmarshal should not be handled manually except for specific cases identified by the developer. - -### Gogoproto[​](#gogoproto "Direct link to Gogoproto") - -Modules are encouraged to utilize Protobuf encoding for their respective types.
In the Cosmos SDK, we use the [Gogoproto](https://github.com/cosmos/gogoproto)-specific implementation of the Protobuf spec, which offers speed and DX improvements compared to the official [Google protobuf implementation](https://github.com/protocolbuffers/protobuf). - -### Guidelines for protobuf message definitions[​](#guidelines-for-protobuf-message-definitions "Direct link to Guidelines for protobuf message definitions") - -In addition to [following official Protocol Buffer guidelines](https://developers.google.com/protocol-buffers/docs/proto3#simple), we recommend using these annotations in .proto files when dealing with interfaces: - -* use `cosmos_proto.accepts_interface` to annotate `Any` fields that accept interfaces - - * pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface` - * example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (and not just `Content`) - -* annotate interface implementations with `cosmos_proto.implements_interface` - - * pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface` - * example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (and not just `Authorization`) - -Code generators can then match the `accepts_interface` and `implements_interface` annotations to know whether some Protobuf messages are allowed to be packed in a given `Any` field or not. - -### Transaction Encoding[​](#transaction-encoding "Direct link to Transaction Encoding") - -Another important use of Protobuf is the encoding and decoding of [transactions](/v0.53/learn/advanced/transactions). Transactions are defined by the application or the Cosmos SDK but are then passed to the underlying consensus engine to be relayed to other peers. Since the underlying consensus engine is agnostic to the application, the consensus engine accepts only transactions in the form of raw bytes. - -* The `TxEncoder` object performs the encoding.
-* The `TxDecoder` object performs the decoding. - -types/tx\_msg.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/tx_msg.go#L109-L113) - -A standard implementation of both these objects can be found in the [`auth/tx` module](/v0.53/build/modules/auth/tx): - -x/auth/tx/decoder.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/auth/tx/decoder.go) - -x/auth/tx/encoder.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/auth/tx/encoder.go) - -See [ADR-020](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/docs/architecture/adr-020-protobuf-transaction-encoding.md) for details of how a transaction is encoded. - -### Interface Encoding and Usage of `Any`[​](#interface-encoding-and-usage-of-any "Direct link to interface-encoding-and-usage-of-any") - -The Protobuf DSL is strongly typed, which can make inserting variable-typed fields difficult. Imagine we want to create a `Profile` protobuf message that serves as a wrapper over [an account](/v0.53/learn/beginner/accounts): - -```
-message Profile {
-  // account is the account associated to a profile.
-  cosmos.auth.v1beta1.BaseAccount account = 1;
-  // bio is a short description of the account.
-  string bio = 4;
-}
-``` - -In this `Profile` example, we hardcoded `account` as a `BaseAccount`. However, there are several other types of [user accounts related to vesting](/v0.53/build/modules/auth/vesting), such as `BaseVestingAccount` or `ContinuousVestingAccount`. All of these accounts are different, but they all implement the `AccountI` interface. How would you create a `Profile` that allows all these types of accounts with an `account` field that accepts an `AccountI` interface? - -types/account.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/account.go#L15-L32) - -In [ADR-019](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/docs/architecture/adr-019-protobuf-state-encoding.md), it has been decided to use [`Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto)s to encode interfaces in protobuf. An `Any` contains an arbitrary serialized message as bytes, along with a URL that acts as a globally unique identifier for and resolves to that message's type. This strategy allows us to pack arbitrary Go types inside protobuf messages. Our new `Profile` then looks like: - -```
-message Profile {
-  // account is the account associated to a profile.
-  google.protobuf.Any account = 1 [
-    (cosmos_proto.accepts_interface) = "cosmos.auth.v1beta1.AccountI"; // Asserts that this field only accepts Go types implementing `AccountI`. It is purely informational for now.
-  ];
-  // bio is a short description of the account.
-  string bio = 4;
-}
-``` - -To add an account inside a profile, we need to "pack" it inside an `Any` first, using `codectypes.NewAnyWithValue`: - -```
-var myAccount AccountI
-myAccount = ... // Can be a BaseAccount, a ContinuousVestingAccount or any struct implementing `AccountI`
-
-// Pack the account into an Any
-accAny, err := codectypes.NewAnyWithValue(myAccount)
-if err != nil {
-	return nil, err
-}
-
-// Create a new Profile with the any.
-profile := Profile {
-	Account: accAny,
-	Bio: "some bio",
-}
-
-// We can then marshal the profile as usual.
-bz, err := cdc.Marshal(profile)
-jsonBz, err := cdc.MarshalJSON(profile)
-``` - -To summarize, to encode an interface, you must 1/ pack the interface into an `Any` and 2/ marshal the `Any`. For convenience, the Cosmos SDK provides a `MarshalInterface` method to bundle these two steps. Have a look at [a real-life example in the x/auth module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/auth/keeper/keeper.go#L239-L242).
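The pack/resolve round trip described above can be sketched without any SDK dependencies. The following self-contained Go program models an `Any` as a type URL plus raw bytes and uses a plain map in place of the SDK's `InterfaceRegistry`; the type names, the JSON encoding, and the `pack`/`unpack` helpers are illustrative stand-ins, not the real `codectypes` API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AccountI stands in for the SDK interface packed inside an Any.
type AccountI interface{ GetAddress() string }

// BaseAccount is one concrete implementation of AccountI.
type BaseAccount struct{ Address string }

func (a *BaseAccount) GetAddress() string { return a.Address }

// anyEnvelope mimics the shape of a protobuf Any: a type URL plus raw bytes.
type anyEnvelope struct {
	TypeURL string
	Value   []byte
}

// registry maps type URLs to factories, playing the role of the
// InterfaceRegistry that resolves an Any's type_url.
var registry = map[string]func() AccountI{
	"/cosmos.auth.v1beta1.BaseAccount": func() AccountI { return &BaseAccount{} },
}

// pack serializes a concrete implementation into an Any-like envelope.
func pack(typeURL string, acc AccountI) (anyEnvelope, error) {
	bz, err := json.Marshal(acc)
	if err != nil {
		return anyEnvelope{}, err
	}
	return anyEnvelope{TypeURL: typeURL, Value: bz}, nil
}

// unpack resolves the type URL and deserializes into the concrete type.
func unpack(env anyEnvelope) (AccountI, error) {
	factory, ok := registry[env.TypeURL]
	if !ok {
		return nil, fmt.Errorf("unknown type URL: %s", env.TypeURL)
	}
	acc := factory()
	if err := json.Unmarshal(env.Value, acc); err != nil {
		return nil, err
	}
	return acc, nil
}

func main() {
	env, err := pack("/cosmos.auth.v1beta1.BaseAccount", &BaseAccount{Address: "cosmos1..."})
	if err != nil {
		panic(err)
	}
	acc, err := unpack(env)
	if err != nil {
		panic(err)
	}
	fmt.Println(acc.GetAddress())
}
```

The real registry additionally type-checks registered implementations against the interface; JSON here merely substitutes for protobuf serialization.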
- -The reverse operation of retrieving the concrete Go type from inside an `Any`, called "unpacking", is done with the `GetCachedValue()` method on `Any`. - -```
-profileBz := ... // The proto-encoded bytes of a Profile, e.g. retrieved through gRPC.
-var myProfile Profile
-
-// Unmarshal the bytes into the myProfile struct.
-err := cdc.Unmarshal(profileBz, &myProfile)
-
-// Let's see the types of the Account field.
-fmt.Printf("%T\n", myProfile.Account)                  // Prints "Any"
-fmt.Printf("%T\n", myProfile.Account.GetCachedValue()) // Prints "BaseAccount", "ContinuousVestingAccount" or whatever was initially packed in the Any.
-
-// Get the address of the account.
-accAddr := myProfile.Account.GetCachedValue().(AccountI).GetAddress()
-``` - -It is important to note that for `GetCachedValue()` to work, `Profile` (and any other structs embedding `Profile`) must implement the `UnpackInterfaces` method: - -```
-func (p *Profile) UnpackInterfaces(unpacker codectypes.AnyUnpacker) error {
-	if p.Account != nil {
-		var account AccountI
-		return unpacker.UnpackAny(p.Account, &account)
-	}
-	return nil
-}
-``` - -The `UnpackInterfaces` method gets called recursively on all structs implementing this method, to allow all `Any`s to have their `GetCachedValue()` correctly populated. - -For more information about interface encoding, and especially on `UnpackInterfaces` and how the `Any`'s `type_url` gets resolved using the `InterfaceRegistry`, please refer to [ADR-019](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/docs/architecture/adr-019-protobuf-state-encoding.md). - -#### `Any` Encoding in the Cosmos SDK[​](#any-encoding-in-the-cosmos-sdk "Direct link to any-encoding-in-the-cosmos-sdk") - -The above `Profile` example is a fictitious example used for educational purposes.
In the Cosmos SDK, we use `Any` encoding in several places (non-exhaustive list): - -* the `cryptotypes.PubKey` interface for encoding different types of public keys, -* the `sdk.Msg` interface for encoding different `Msg`s in a transaction, -* the `AccountI` interface for encoding different types of accounts (similar to the above example) in the x/auth query responses, -* the `EvidenceI` interface for encoding different types of evidence in the x/evidence module, -* the `AuthorizationI` interface for encoding different types of x/authz authorizations, -* the [`Validator`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/staking/types/staking.pb.go#L340-L375) struct that contains information about a validator. - -A real-life example of encoding the pubkey as `Any` inside the Validator struct in x/staking is shown in the following example: - -x/staking/types/validator.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/staking/types/validator.go#L43-L66) - -#### `Any`'s TypeURL[​](#anys-typeurl "Direct link to anys-typeurl") - -When packing a protobuf message inside an `Any`, the message's type is uniquely defined by its type URL, which is the message's fully qualified name prefixed by a `/` (slash) character. In some implementations of `Any`, like the gogoproto one, there's generally [a resolvable prefix, e.g. `type.googleapis.com`](https://github.com/gogo/protobuf/blob/b03c65ea87cdc3521ede29f62fe3ce239267c1bc/protobuf/google/protobuf/any.proto#L87-L91). However, in the Cosmos SDK, we made the decision not to include such a prefix, to have shorter type URLs. The Cosmos SDK's own `Any` implementation can be found in `github.com/cosmos/cosmos-sdk/codec/types`. - -The Cosmos SDK is also switching away from gogoproto to the official `google.golang.org/protobuf` (known as the Protobuf API v2).
Its default `Any` implementation also contains the [`type.googleapis.com`](https://github.com/protocolbuffers/protobuf-go/blob/v1.28.1/types/known/anypb/any.pb.go#L266) prefix. To maintain compatibility with the SDK, the following methods from `"google.golang.org/protobuf/types/known/anypb"` should not be used: - -* `anypb.New` -* `anypb.MarshalFrom` -* `anypb.Any#MarshalFrom` - -Instead, the Cosmos SDK provides helper functions in `"github.com/cosmos/cosmos-proto/anyutil"`, which create an official `anypb.Any` without inserting the prefixes: - -* `anyutil.New` -* `anyutil.MarshalFrom` - -For example, to pack a `sdk.Msg` called `internalMsg`, use: - -```
-import (
-- "google.golang.org/protobuf/types/known/anypb"
-+ "github.com/cosmos/cosmos-proto/anyutil"
-)
-
-- anyMsg, err := anypb.New(internalMsg.Message().Interface())
-+ anyMsg, err := anyutil.New(internalMsg.Message().Interface())
-
-- fmt.Println(anyMsg.TypeURL) // type.googleapis.com/cosmos.bank.v1beta1.MsgSend
-+ fmt.Println(anyMsg.TypeURL) // /cosmos.bank.v1beta1.MsgSend
-``` - -## FAQ[​](#faq "Direct link to FAQ") - -### How to create modules using protobuf encoding[​](#how-to-create-modules-using-protobuf-encoding "Direct link to How to create modules using protobuf encoding") - -#### Defining module types[​](#defining-module-types "Direct link to Defining module types") - -Protobuf types can be defined to encode: - -* state -* [`Msg`s](/v0.53/build/building-modules/messages-and-queries#messages) -* [Query services](/v0.53/build/building-modules/query-services) -* [genesis](/v0.53/build/building-modules/genesis) - -#### Naming and conventions[​](#naming-and-conventions "Direct link to Naming and conventions") - -We encourage developers to follow industry guidelines: [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style) and [Buf](https://buf.build/docs/style-guide), see more details in [ADR
023](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/docs/architecture/adr-023-protobuf-naming.md) - -### How to update modules to protobuf encoding[​](#how-to-update-modules-to-protobuf-encoding "Direct link to How to update modules to protobuf encoding") - -If modules do not contain any interfaces (e.g. `Account` or `Content`), then they may simply migrate any existing types that are encoded and persisted via their concrete Amino codec to Protobuf (see 1. for further guidelines) and accept a `Marshaler` as the codec which is implemented via the `ProtoCodec` without any further customization. - -However, if a module type composes an interface, it must wrap it in the `sdk.Any` (from `/types` package) type. To do that, a module-level .proto file must use [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto) for respective message type interface types. - -For example, the `x/evidence` module defines an `Evidence` interface, which is used by `MsgSubmitEvidence`. The structure definition must use `sdk.Any` to wrap the evidence file. In the proto file we define it as follows: - -```
-// proto/cosmos/evidence/v1beta1/tx.proto
-message MsgSubmitEvidence {
-  string submitter = 1;
-  google.protobuf.Any evidence = 2 [(cosmos_proto.accepts_interface) = "cosmos.evidence.v1beta1.Evidence"];
-}
-``` - -The Cosmos SDK `codec.Codec` interface provides support methods `MarshalInterface` and `UnmarshalInterface` to ease encoding of state to `Any`. - -Modules should register interfaces using `InterfaceRegistry`, which provides a mechanism for registering interfaces: `RegisterInterface(protoName string, iface interface{}, impls ...proto.Message)` and implementations: `RegisterImplementations(iface interface{}, impls ...proto.Message)` that can be safely unpacked from Any, similarly to type registration with Amino: - -codec/types/interface\_registry.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/codec/types/interface_registry.go#L40-L87) - -In addition, an `UnpackInterfaces` phase should be introduced to deserialization to unpack interfaces before they're needed. Protobuf types that contain a protobuf `Any` either directly or via one of their members should implement the `UnpackInterfacesMessage` interface: - -```
-type UnpackInterfacesMessage interface {
-	UnpackInterfaces(InterfaceUnpacker) error
-}
-``` diff --git a/docs/sdk/v0.53/learn/advanced/events.mdx b/docs/sdk/v0.53/learn/advanced/events.mdx deleted file mode 100644 index 5f442299..00000000 --- a/docs/sdk/v0.53/learn/advanced/events.mdx +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: "Events" -description: "Version: v0.53" ---- - - - `Event`s are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallets to track the execution of various messages and index transactions. - - - - * [Anatomy of a Cosmos SDK application](/v0.53/learn/beginner/app-anatomy) - * [CometBFT Documentation on Events](https://docs.cometbft.com/v0.37/spec/abci/abci++_basic_concepts#events) - -## Events[​](#events-1 "Direct link to Events") - -Events are implemented in the Cosmos SDK as an alias of the ABCI `Event` type and take the form of: `{eventType}.{attributeKey}={attributeValue}`. - -proto/tendermint/abci/types.proto - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cometbft/cometbft/blob/v0.37.0/proto/tendermint/abci/types.proto#L334-L343) - -An Event contains: - -* A `type` to categorize the Event at a high-level; for example, the Cosmos SDK uses the `"message"` type to filter Events by `Msg`s. -* A list of `attributes`, which are key-value pairs that give more information about the categorized Event.
For example, for the `"message"` type, we can filter Events by key-value pairs using `message.action={some_action}`, `message.module={some_module}` or `message.sender={some_sender}`. -* A `msg_index` to identify which messages relate to the same transaction - - - To parse the attribute values as strings, make sure to add `'` (single quotes) around each attribute value. - -*Typed Events* are Protobuf-defined [messages](/v0.53/build/architecture/adr-032-typed-events) used by the Cosmos SDK for emitting and querying Events. They are defined in an `event.proto` file, on a **per-module basis**, and are read as `proto.Message`. *Legacy Events* are defined on a **per-module basis** in the module's `/types/events.go` file. They are triggered from the module's Protobuf [`Msg` service](/v0.53/build/building-modules/msg-services) by using the [`EventManager`](#eventmanager). - -In addition, each module documents its events in the `Events` section of its specs (x/\{moduleName}/`README.md`). - -Lastly, Events are returned to the underlying consensus engine in the response of the following ABCI messages: - -* [`BeginBlock`](/v0.53/learn/advanced/baseapp#beginblock) -* [`EndBlock`](/v0.53/learn/advanced/baseapp#endblock) -* [`CheckTx`](/v0.53/learn/advanced/baseapp#checktx) -* [`Transaction Execution`](/v0.53/learn/advanced/baseapp#transactionexecution) - -### Examples[​](#examples "Direct link to Examples") - -The following examples show how to query Events using the Cosmos SDK. - -| Event | Description | -| --- | --- | -| `tx.height=23` | Query all transactions at height 23 | -| `message.action='/cosmos.bank.v1beta1.Msg/Send'` | Query all transactions containing an x/bank `Send` [Service `Msg`](/v0.53/build/building-modules/msg-services). Note the `'`s around the value.
| -| `message.module='bank'` | Query all transactions containing messages from the x/bank module. Note the `'`s around the value. | -| `create_validator.validator='cosmosval1...'` | x/staking-specific Event, see [x/staking SPEC](/v0.53/build/modules/staking). | - -## EventManager[​](#eventmanager "Direct link to EventManager") - -In Cosmos SDK applications, Events are managed by an abstraction called the `EventManager`. Internally, the `EventManager` tracks a list of Events for the entire execution flow of `FinalizeBlock` (i.e. transaction execution, `BeginBlock`, `EndBlock`). - -types/events.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/events.go#L18-L25) - -The `EventManager` comes with a set of useful methods to manage Events. The methods used most by module and application developers are `EmitTypedEvent` and `EmitEvent`, which track an Event in the `EventManager`. - -types/events.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/events.go#L51-L60) - -Module developers should handle Event emission via the `EventManager#EmitTypedEvent` or `EventManager#EmitEvent` in each message `Handler` and in each `BeginBlock`/`EndBlock` handler. The `EventManager` is accessed via the [`Context`](/v0.53/learn/advanced/context), where Events should already be registered, and emitted like so: - -**Typed events:** - -x/group/keeper/msg\_server.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/group/keeper/msg_server.go#L95-L97) - -**Legacy events:** - -```
-ctx.EventManager().EmitEvent(
-	sdk.NewEvent(eventType, sdk.NewAttribute(attributeKey, attributeValue)),
-)
-``` - -Where the `EventManager` is accessed via the [`Context`](/v0.53/learn/advanced/context).
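The emission flow can be modeled in a few lines. The sketch below uses a toy `EventManager` with simplified stand-ins for `sdk.Event`, `sdk.NewAttribute`, and the SDK's manager (not the real API), showing how handlers append Events that are later collected into the ABCI response:

```go
package main

import "fmt"

// Attribute and Event mirror the {eventType}.{attributeKey}={attributeValue}
// shape described above; both are simplified stand-ins for the SDK types.
type Attribute struct{ Key, Value string }

type Event struct {
	Type       string
	Attributes []Attribute
}

// EventManager accumulates Events over an execution flow.
type EventManager struct{ events []Event }

// EmitEvent tracks an Event, the way a message handler would via
// ctx.EventManager().EmitEvent(...).
func (em *EventManager) EmitEvent(e Event) { em.events = append(em.events, e) }

// Events returns everything emitted so far, as collected into the
// ABCI response at the end of the flow.
func (em *EventManager) Events() []Event { return em.events }

func main() {
	em := &EventManager{}

	// A hypothetical bank-style handler emitting a "transfer" event.
	em.EmitEvent(Event{
		Type: "transfer",
		Attributes: []Attribute{
			{Key: "sender", Value: "cosmos1aaa"},
			{Key: "recipient", Value: "cosmos1bbb"},
		},
	})

	for _, e := range em.Events() {
		for _, a := range e.Attributes {
			// Printed in the query syntax used in the examples table.
			fmt.Printf("%s.%s='%s'\n", e.Type, a.Key, a.Value)
		}
	}
}
```

The real manager lives on the `Context`, so Events emitted by nested handlers all land in the same accumulator.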
- -See the [`Msg` services](/v0.53/build/building-modules/msg-services) concept doc for a more detailed view on how to typically implement Events and use the `EventManager` in modules. - -## Subscribing to Events[​](#subscribing-to-events "Direct link to Subscribing to Events") - -You can use CometBFT's [Websocket](https://docs.cometbft.com/v0.37/core/subscription) to subscribe to Events by calling the `subscribe` RPC method: - -```
-{
-  "jsonrpc": "2.0",
-  "method": "subscribe",
-  "id": "0",
-  "params": {
-    "query": "tm.event='eventCategory' AND eventType.eventAttribute='attributeValue'"
-  }
-}
-``` - -The main `eventCategory` values you can subscribe to are: - -* `NewBlock`: Contains Events triggered during `BeginBlock` and `EndBlock`. -* `Tx`: Contains Events triggered during `DeliverTx` (i.e. transaction processing). -* `ValidatorSetUpdates`: Contains validator set updates for the block. - -These Events are triggered from the `state` package after a block is committed. You can get the full list of Event categories [on the CometBFT Go documentation](https://pkg.go.dev/github.com/cometbft/cometbft/types#pkg-constants). - -The `type` and `attribute` value of the `query` allow you to filter the specific Event you are looking for. For example, a `Mint` transaction triggers an Event of type `EventMint` and has an `Id` and an `Owner` as `attributes` (as defined in the [`events.proto` file of the `NFT` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/nft/v1beta1/event.proto#L21-L31)). - -Subscribing to this Event would be done like so: - -```
-{
-  "jsonrpc": "2.0",
-  "method": "subscribe",
-  "id": "0",
-  "params": {
-    "query": "tm.event='Tx' AND mint.owner='ownerAddress'"
-  }
-}
-``` - -where `ownerAddress` is an address following the [`AccAddress`](/v0.53/learn/beginner/accounts#addresses) format. - -The same method can be used to subscribe to [legacy events](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/bank/types/events.go).
- -## Default Events[​](#default-events "Direct link to Default Events") - -There are a few events that are automatically emitted for all messages, directly from `baseapp`. - -* `message.action`: The name of the message type. -* `message.sender`: The address of the message signer. -* `message.module`: The name of the module that emitted the message. - - - The module name is assumed by `baseapp` to be the second element of the message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`. In case a module does not follow the standard message path (e.g. IBC), it is advised to keep emitting the module name event. `Baseapp` only emits that event if the module has not already done so. - diff --git a/docs/sdk/v0.53/learn/advanced/grpc_rest.mdx b/docs/sdk/v0.53/learn/advanced/grpc_rest.mdx deleted file mode 100644 index aa196f2a..00000000 --- a/docs/sdk/v0.53/learn/advanced/grpc_rest.mdx +++ /dev/null @@ -1,113 +0,0 @@ ---- -title: "gRPC, REST, and CometBFT Endpoints" -description: "Version: v0.53" ---- - - - This document presents an overview of all the endpoints a node exposes: gRPC and REST, as well as some other endpoints. - -## An Overview of All Endpoints[​](#an-overview-of-all-endpoints "Direct link to An Overview of All Endpoints") - -Each node exposes the following endpoints for users to interact with a node; each endpoint is served on a different port. Details on how to configure each endpoint are provided in the endpoint's own section. - -* the gRPC server (default port: `9090`), -* the REST server (default port: `1317`), -* the CometBFT RPC endpoint (default port: `26657`). - - - The node also exposes some other endpoints, such as the CometBFT P2P endpoint, or the [Prometheus endpoint](https://docs.cometbft.com/v0.37/core/metrics), which are not directly related to the Cosmos SDK. Please refer to the [CometBFT documentation](https://docs.cometbft.com/v0.37/core/configuration) for more information about these endpoints.
- - - - All endpoints are defaulted to localhost and must be modified to be exposed to the public internet. - - -## gRPC Server[​](#grpc-server "Direct link to gRPC Server") - -In the Cosmos SDK, Protobuf is the main [encoding](/v0.53/learn/advanced/encoding) library. This brings a wide range of Protobuf-based tools that can be plugged into the Cosmos SDK. One such tool is [gRPC](https://grpc.io), a modern open-source high performance RPC framework that has decent client support in several languages. - -Each module exposes a [Protobuf `Query` service](/v0.53/build/building-modules/messages-and-queries#queries) that defines state queries. The `Query` services and a transaction service used to broadcast transactions are hooked up to the gRPC server via the following function inside the application: - -server/types/app.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/server/types/app.go#L46-L48) - -Note: It is not possible to expose any [Protobuf `Msg` service](/v0.53/build/building-modules/messages-and-queries#messages) endpoints via gRPC. Transactions must be generated and signed using the CLI or programmatically before they can be broadcasted using gRPC. See [Generating, Signing, and Broadcasting Transactions](/v0.53/user/run-node/txs) for more information. - -The `grpc.Server` is a concrete gRPC server, which spawns and serves all gRPC query requests and a broadcast transaction request. This server can be configured inside `~/.simapp/config/app.toml`: - -* `grpc.enable = true|false` field defines if the gRPC server should be enabled. Defaults to `true`. -* `grpc.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `localhost:9090`. - - - `~/.simapp` is the directory where the node's configuration and databases are stored. By default, it's set to `~/.{app_name}`. - - -Once the gRPC server is started, you can send requests to it using a gRPC client. 
Some examples are given in our [Interact with the Node](/v0.53/user/run-node/interact-node#using-grpc) tutorial. - -An overview of all available gRPC endpoints shipped with the Cosmos SDK is available in the [Protobuf documentation](https://buf.build/cosmos/cosmos-sdk). - -## REST Server[​](#rest-server "Direct link to REST Server") - -Cosmos SDK supports REST routes via gRPC-gateway. - -All routes are configured under the following fields in `~/.simapp/config/app.toml`: - -* `api.enable = true|false` field defines if the REST server should be enabled. Defaults to `false`. -* `api.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `tcp://localhost:1317`. -* some additional API configuration options are defined in `~/.simapp/config/app.toml`, along with comments; please refer to that file directly. - -### gRPC-gateway REST Routes[​](#grpc-gateway-rest-routes "Direct link to gRPC-gateway REST Routes") - -If, for various reasons, you cannot use gRPC (for example, you are building a web application, and browsers don't support HTTP2 on which gRPC is built), then the Cosmos SDK offers REST routes via gRPC-gateway. - -[gRPC-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) is a tool to expose gRPC endpoints as REST endpoints. For each gRPC endpoint defined in a Protobuf `Query` service, the Cosmos SDK offers a REST equivalent. For instance, querying a balance could be done via the `/cosmos.bank.v1beta1.QueryAllBalances` gRPC endpoint, or alternatively via the gRPC-gateway `"/cosmos/bank/v1beta1/balances/{address}"` REST endpoint: both will return the same result. For each RPC method defined in a Protobuf `Query` service, the corresponding REST endpoint is defined as an option: - -proto/cosmos/bank/v1beta1/query.proto - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/bank/v1beta1/query.proto#L23-L30) - -For application developers, gRPC-gateway REST routes need to be wired up to the REST server; this is done by calling the `RegisterGRPCGatewayRoutes` function on the ModuleManager. - -### Swagger[​](#swagger "Direct link to Swagger") - -A [Swagger](https://swagger.io/) (or OpenAPIv2) specification file is exposed under the `/swagger` route on the API server. Swagger is an open specification describing the API endpoints a server serves, including description, input arguments, return types and much more about each endpoint. - -Enabling the `/swagger` endpoint is configurable inside `~/.simapp/config/app.toml` via the `api.swagger` field, which is set to false by default. - -For application developers, you may want to generate your own Swagger definitions based on your custom modules. The Cosmos SDK's [Swagger generation script](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/scripts/protoc-swagger-gen.sh) is a good place to start. - -## CometBFT RPC[​](#cometbft-rpc "Direct link to CometBFT RPC") - -Independently from the Cosmos SDK, CometBFT also exposes an RPC server. This RPC server can be configured by tuning parameters under the `rpc` table in `~/.simapp/config/config.toml`; the default listening address is `tcp://localhost:26657`. An OpenAPI specification of all CometBFT RPC endpoints is available [here](https://docs.cometbft.com/main/rpc/). - -Some CometBFT RPC endpoints are directly related to the Cosmos SDK: - -* `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings: - - * any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.Query/AllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf.
- * `/app/simulate`: this will simulate a transaction, and return some information such as gas used. - * `/app/version`: this will return the application's version. - * `/store/{storeName}/key`: this will directly query the named store for data associated with the key represented in the `data` parameter. - * `/store/{storeName}/subspace`: this will directly query the named store for key/value pairs in which the key has the value of the `data` parameter as a prefix. - * `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port. - * `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID. - -* `/broadcast_tx_{sync,async,commit}`: these 3 endpoints will broadcast a transaction to other peers. CLI, gRPC and REST expose [a way to broadcast transactions](/v0.53/learn/advanced/transactions#broadcasting-the-transaction), but they all use these 3 CometBFT RPCs under the hood. - -## Comparison Table[​](#comparison-table "Direct link to Comparison Table") - -| Name | Advantages | Disadvantages | -| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | -| gRPC | - can use code-generated stubs in various languages - supports streaming and bidirectional communication (HTTP2) - small wire binary sizes, faster transmission | - based on HTTP2, not available in browsers - learning curve (mostly due to Protobuf) | -| REST | - ubiquitous - client libraries in all languages, faster implementation | - only supports unary request-response communication (HTTP1.1) - bigger over-the-wire message sizes (JSON) | -| CometBFT RPC | - easy to use | - bigger over-the-wire message sizes (JSON) | diff --git a/docs/sdk/v0.53/learn/advanced/node.mdx b/docs/sdk/v0.53/learn/advanced/node.mdx deleted file 
mode 100644 index 11d6afc5..00000000 --- a/docs/sdk/v0.53/learn/advanced/node.mdx +++ /dev/null @@ -1,121 +0,0 @@ ---- -title: "Node Client (Daemon)" -description: "Version: v0.53" ---- - - - The main endpoint of a Cosmos SDK application is the daemon client, otherwise known as the full-node client. The full-node runs the state-machine, starting from a genesis file. It connects to peers running the same client in order to receive and relay transactions, block proposals and signatures. The full-node is constituted of the application, defined with the Cosmos SDK, and of a consensus engine connected to the application via the ABCI. - - - - * [Anatomy of an SDK application](/v0.53/learn/beginner/app-anatomy) - - -## `main` function[​](#main-function "Direct link to main-function") - -The full-node client of any Cosmos SDK application is built by running a `main` function. The client is generally named by appending the `-d` suffix to the application name (e.g. `appd` for an application named `app`), and the `main` function is defined in a `./appd/cmd/main.go` file. Running this function creates an executable `appd` that comes with a set of commands. For an app named `app`, the main command is [`appd start`](#start-command), which starts the full-node. - -In general, developers will implement the `main.go` function with the following structure: - -* First, an [`encodingCodec`](/v0.53/learn/advanced/encoding) is instantiated for the application. -* Then, the `config` is retrieved and config parameters are set. This mainly involves setting the Bech32 prefixes for [addresses](/v0.53/learn/beginner/accounts#addresses). - -types/config.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/config.go#L14-L29) - -* Using [cobra](https://github.com/spf13/cobra), the root command of the full-node client is created. 
After that, all the custom commands of the application are added using the `AddCommand()` method of `rootCmd`. -* Add default server commands to `rootCmd` using the `server.AddCommands()` method. These commands are separated from the ones added above since they are standard and defined at Cosmos SDK level. They should be shared by all Cosmos SDK-based applications. They include the most important command: the [`start` command](#start-command). -* Prepare and execute the `executor`. - -libs/cli/setup.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cometbft/cometbft/blob/v0.37.0/libs/cli/setup.go#L74-L78) - -See an example of `main` function from the `simapp` application, the Cosmos SDK's application for demo purposes: - -simapp/simd/main.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/main.go) - -## `start` command[​](#start-command "Direct link to start-command") - -The `start` command is defined in the `/server` folder of the Cosmos SDK. It is added to the root command of the full-node client in the [`main` function](#main-function) and called by the end-user to start their node: - -``` -# For an example app named "app", the following command starts the full-node.appd start# Using the Cosmos SDK's own simapp, the following commands start the simapp node.simd start -``` - -As a reminder, the full-node is composed of three conceptual layers: the networking layer, the consensus layer and the application layer. The first two are generally bundled together in an entity called the consensus engine (CometBFT by default), while the third is the state-machine defined with the help of the Cosmos SDK. Currently, the Cosmos SDK uses CometBFT as the default consensus engine, meaning the start command is implemented to boot up a CometBFT node. - -The flow of the `start` command is pretty straightforward. 
First, it retrieves the `config` from the `context` in order to open the `db` (a [`leveldb`](https://github.com/syndtr/goleveldb) instance by default). This `db` contains the latest known state of the application (empty if the application is started for the first time). - -With the `db`, the `start` command creates a new instance of the application using an `appCreator` function: - -server/start.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/start.go#L1007) - -Note that an `appCreator` is a function that fulfills the `AppCreator` signature: - -server/types/app.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/types/app.go#L69) - -In practice, the [constructor of the application](/v0.53/learn/beginner/app-anatomy#constructor-function) is passed as the `appCreator`. - -simapp/simd/cmd/root\_v2.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L294-L308) - -Then, the instance of `app` is used to instantiate a new CometBFT node: - -server/start.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/start.go#L361-L400) - -The CometBFT node can be created with `app` because the latter satisfies the [`abci.Application` interface](https://github.com/cometbft/cometbft/blob/v0.37.0/abci/types/application.go#L9-L35) (given that `app` extends [`baseapp`](/v0.53/learn/advanced/baseapp)). As part of the `node.New` method, CometBFT makes sure that the height of the application (i.e. number of blocks since genesis) is equal to the height of the CometBFT node. The difference between these two heights should always be negative or zero. If it is strictly negative, `node.New` will replay blocks until the height of the application reaches the height of the CometBFT node.
Finally, if the height of the application is `0`, the CometBFT node will call [`InitChain`](/v0.53/learn/advanced/baseapp#initchain) on the application to initialize the state from the genesis file. - -Once the CometBFT node is instantiated and in sync with the application, the node can be started: - -server/start.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/start.go#L373-L374) - -Upon starting, the node will bootstrap its RPC and P2P server and start dialing peers. During handshake with its peers, if the node realizes they are ahead, it will query all the blocks sequentially in order to catch up. Then, it will wait for new block proposals and block signatures from validators in order to make progress. - -## Other commands[​](#other-commands "Direct link to Other commands") - -To discover how to concretely run a node and interact with it, please refer to our [Running a Node, API and CLI](/v0.53/user/run-node/run-node) guide. diff --git a/docs/sdk/v0.53/learn/advanced/ocap.mdx b/docs/sdk/v0.53/learn/advanced/ocap.mdx deleted file mode 100644 index 5a6ee2ad..00000000 --- a/docs/sdk/v0.53/learn/advanced/ocap.mdx +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: "Object-Capability Model" -description: "Version: v0.53" ---- - -## Intro[​](#intro "Direct link to Intro") - -When thinking about security, it is good to start with a specific threat model. Our threat model is the following: - -> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules. - -The Cosmos SDK is designed to address this threat by being the foundation of an object capability system. - -> The structural properties of object capability systems favor modularity in code design and ensure reliable encapsulation in code implementation. 
-> -> These structural properties facilitate the analysis of some security properties of an object-capability program or operating system. Some of these — in particular, information flow properties — can be analyzed at the level of object references and connectivity, independent of any knowledge or analysis of the code that determines the behavior of the objects. -> -> As a consequence, these security properties can be established and maintained in the presence of new objects that contain unknown and possibly malicious code. -> -> These structural properties stem from the two rules governing access to existing objects: -> -> 1. An object A can send a message to B only if object A holds a reference to B. -> 2. An object A can obtain a reference to C only if object A receives a message containing a reference to C. As a consequence of these two rules, an object can obtain a reference to another object only through a preexisting chain of references. In short, "Only connectivity begets connectivity." - -For an introduction to object-capabilities, see this [Wikipedia article](https://en.wikipedia.org/wiki/Object-capability_model). - -## Ocaps in practice[​](#ocaps-in-practice "Direct link to Ocaps in practice") - -The idea is to only reveal what is necessary to get the work done. - -For example, the following code snippet violates the object capabilities principle: - -``` -type AppAccount struct {...}account := &AppAccount{ Address: pub.Address(), Coins: sdk.Coins{sdk.NewInt64Coin("ATM", 100)},}sumValue := externalModule.ComputeSumValue(account) -``` - -The method `ComputeSumValue` implies a pure function, yet the implied capability of accepting a pointer value is the capability to modify that value. The preferred method signature should take a copy instead. - -``` -sumValue := externalModule.ComputeSumValue(*account) -``` - -In the Cosmos SDK, you can see the application of this principle in simapp. - -simapp/app.go - -``` -loading... 
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app.go) - -The following diagram shows the current dependencies between keepers. - -![Keeper dependencies](/images/v0.53/learn/advanced/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/keeper_dependencies.svg) diff --git a/docs/sdk/v0.53/learn/advanced/runtx_middleware.mdx b/docs/sdk/v0.53/learn/advanced/runtx_middleware.mdx deleted file mode 100644 index 53240305..00000000 --- a/docs/sdk/v0.53/learn/advanced/runtx_middleware.mdx +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: "RunTx recovery middleware" -description: "Version: v0.53" ---- - -The `BaseApp.runTx()` function handles Go panics that might occur during transaction execution, for example when a keeper encounters an invalid state and panics. Depending on the panic type, a different handler is used; for instance, the default one prints an error log message. Recovery middleware allows Cosmos SDK application developers to add custom panic recovery. - -More context can be found in the corresponding [ADR-022](/v0.53/build/architecture/adr-022-custom-panic-handling) and the implementation in [recovery.go](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/baseapp/recovery.go). - -## Interface[​](#interface "Direct link to Interface") - -baseapp/recovery.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/baseapp/recovery.go#L14-L17) - -`recoveryObj` is the value returned by Go's built-in `recover()` function.
- -**Contract:** - -* A RecoveryHandler returns `nil` if `recoveryObj` wasn't handled and should be passed to the next recovery middleware; -* A RecoveryHandler returns a non-nil `error` if `recoveryObj` was handled. - -## Custom RecoveryHandler register[​](#custom-recoveryhandler-register "Direct link to Custom RecoveryHandler register") - -`BaseApp.AddRunTxRecoveryHandler(handlers ...RecoveryHandler)` - -This `BaseApp` method adds recovery middleware to the default recovery chain. - -## Example[​](#example "Direct link to Example") - -Let's assume we want to emit the "Consensus failure" chain state if some particular error occurs. - -We have a module keeper that panics: - -``` -func (k FooKeeper) Do(obj interface{}) { if obj == nil { // that shouldn't happen, we need to crash the app err := errorsmod.Wrap(fooTypes.InternalError, "obj is nil") panic(err) }} -``` - -By default, that panic would be recovered and an error message printed to the log. To override that behaviour, we register a custom RecoveryHandler: - -``` -// Cosmos SDK application constructorcustomHandler := func(recoveryObj interface{}) error { err, ok := recoveryObj.(error) if !ok { return nil } if fooTypes.InternalError.Is(err) { panic(fmt.Errorf("FooKeeper did panic with error: %w", err)) } return nil}baseApp := baseapp.NewBaseApp(...)baseApp.AddRunTxRecoveryHandler(customHandler) -``` diff --git a/docs/sdk/v0.53/learn/advanced/simulation.mdx b/docs/sdk/v0.53/learn/advanced/simulation.mdx deleted file mode 100644 index 47a1a223..00000000 --- a/docs/sdk/v0.53/learn/advanced/simulation.mdx +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: "Cosmos Blockchain Simulator" -description: "Version: v0.53" ---- - -The Cosmos SDK offers a full-fledged simulation framework to fuzz test every message defined by a module.
- -On the Cosmos SDK, this functionality is provided by [`SimApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_di.go), which is a `BaseApp` application that is used for running the [`simulation`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation) module. This module defines all the simulation logic as well as the operations for randomized parameters like accounts, balances, etc. - -## Goals[​](#goals "Direct link to Goals") - -The blockchain simulator tests how the blockchain application would behave under real-life circumstances by generating and sending randomized messages. The goal of this is to detect and debug failures that could halt a live chain, by providing logs and statistics about the operations run by the simulator as well as exporting the latest application state when a failure was found. - -Its main difference from integration testing is that the simulator app allows you to pass parameters to customize the chain that's being simulated. This comes in handy when trying to reproduce bugs that were generated in the provided operations (randomized or not). - -## Simulation commands[​](#simulation-commands "Direct link to Simulation commands") - -The simulation app has different commands, each of which tests a different failure type: - -* `AppImportExport`: The simulator exports the initial app state and then it creates a new app with the exported `genesis.json` as an input, checking for inconsistencies between the stores. -* `AppSimulationAfterImport`: Queues two simulations together. The first one provides the app state (*i.e.* genesis) to the second. Useful to test software upgrades or hard-forks from a live chain. -* `AppStateDeterminism`: Checks that all the nodes return the same values, in the same order. -* `FullAppSimulation`: General simulation mode. Runs the chain and the specified operations for a given number of blocks. Tests that there are no `panics` during the simulation.
- -Each simulation must receive a set of inputs (*i.e.* flags), such as the number of blocks for which the simulation is run, the seed, the block size, etc. Check the full list of flags [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation/client/cli/flags.go#L43-L70). - -## Simulator Modes[​](#simulator-modes "Direct link to Simulator Modes") - -In addition to the various inputs and commands, the simulator runs in three modes: - -1. Completely random, where the initial state, module parameters and simulation parameters are **pseudo-randomly generated**. -2. From a `genesis.json` file, where the initial state and the module parameters are defined. This mode is helpful for running simulations on a known state such as a live network export, where a new (most likely breaking) version of the application needs to be tested. -3. From a `params.json` file, where the initial state is pseudo-randomly generated but the module and simulation parameters can be provided manually. This allows for a more controlled and deterministic simulation setup while allowing the state space to still be pseudo-randomly simulated. The list of available parameters can be found [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation/client/cli/flags.go#L72-L90). - - - These modes are not mutually exclusive. So you can, for example, run a randomly generated genesis state (`1`) with manually generated simulation params (`3`). - - -## Usage[​](#usage "Direct link to Usage") - -This is a general example of how simulations are run. For more specific examples, check the Cosmos SDK [Makefile](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/Makefile#L285-L320). - -``` - $ go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \ -run=TestApp \ ... -v -timeout 24h -``` - -## Debugging Tips[​](#debugging-tips "Direct link to Debugging Tips") - -Here are some suggestions when encountering a simulation failure: - -* Export the app state at the height where the failure was found.
You can do this by passing the `-ExportStatePath` flag to the simulator. -* Use `-Verbose` logs. They could give you a better hint on all the operations involved. -* Try using another `-Seed`. If it reproduces the same error and fails sooner, you will spend less time running the simulations. -* Reduce the `-NumBlocks` value. How does the app state look at the height just before the failure? -* Try adding logs to operations that are not logged. You will have to define a [Logger](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/staking/keeper/keeper.go#L77-L81) on your `Keeper`. - -## Use simulation in your Cosmos SDK-based application[​](#use-simulation-in-your-cosmos-sdk-based-application "Direct link to Use simulation in your Cosmos SDK-based application") - -Learn how you can build the simulation into your Cosmos SDK-based application: - -* Application Simulation Manager -* [Building modules: Simulator](/v0.53/build/building-modules/simulator) -* Simulator tests diff --git a/docs/sdk/v0.53/learn/advanced/store.mdx b/docs/sdk/v0.53/learn/advanced/store.mdx deleted file mode 100644 index ed4b70c5..00000000 --- a/docs/sdk/v0.53/learn/advanced/store.mdx +++ /dev/null @@ -1,338 +0,0 @@ ---- -title: "Store" -description: "Version: v0.53" ---- - - - A store is a data structure that holds the state of the application. - - - - * [Anatomy of a Cosmos SDK application](/v0.53/learn/beginner/app-anatomy) - - -## Introduction to Cosmos SDK Stores[​](#introduction-to-cosmos-sdk-stores "Direct link to Introduction to Cosmos SDK Stores") - -The Cosmos SDK comes with a large set of stores to persist the state of applications. By default, the main store of Cosmos SDK applications is a `multistore`, i.e. a store of stores. Developers can add any number of key-value stores to the multistore, depending on their application needs. The multistore exists to support the modularity of the Cosmos SDK, as it lets each module declare and manage its own subset of the state.
Key-value stores in the multistore can only be accessed with a specific capability `key`, which is typically held in the [`keeper`](/v0.53/build/building-modules/keeper) of the module that declared the store. - -``` -+-----------------------------------------------------+| || +--------------------------------------------+ || | | || | KVStore 1 - Managed by keeper of Module 1 || | | || +--------------------------------------------+ || || +--------------------------------------------+ || | | || | KVStore 2 - Managed by keeper of Module 2 | || | | || +--------------------------------------------+ || || +--------------------------------------------+ || | | || | KVStore 3 - Managed by keeper of Module 2 | || | | || +--------------------------------------------+ || || +--------------------------------------------+ || | | || | KVStore 4 - Managed by keeper of Module 3 | || | | || +--------------------------------------------+ || || +--------------------------------------------+ || | | || | KVStore 5 - Managed by keeper of Module 4 | || | | || +--------------------------------------------+ || || Main Multistore || |+-----------------------------------------------------+ Application's State -``` - -### Store Interface[​](#store-interface "Direct link to Store Interface") - -At its very core, a Cosmos SDK `store` is an object that holds a `CacheWrapper` and has a `GetStoreType()` method: - -store/types/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L17-L20) - -The `GetStoreType` is a simple method that returns the type of store, whereas a `CacheWrapper` is a simple interface that implements store read caching and write branching through the `Write` method: - -store/types/store.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L285-L317) - -Branching and caching are used ubiquitously in the Cosmos SDK and are required to be implemented on every store type. A storage branch creates an isolated, ephemeral branch of a store that can be passed around and updated without affecting the main underlying store. This is used to trigger temporary state-transitions that may be reverted later should an error occur. Read more about it in [context](/v0.53/learn/advanced/context#Store-branching). - -### Commit Store[​](#commit-store "Direct link to Commit Store") - -A commit store is a store that has the ability to commit changes made to the underlying tree or db. The Cosmos SDK differentiates simple stores from commit stores by extending the basic store interfaces with a `Committer`: - -store/types/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L34-L38) - -The `Committer` is an interface that defines methods to persist changes to disk: - -store/types/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L22-L32) - -The `CommitID` is a deterministic commit of the state tree. Its hash is returned to the underlying consensus engine and stored in the block header. Note that commit store interfaces exist for various purposes, one of which is to make sure not every object can commit the store. As part of the [object-capabilities model](/v0.53/learn/advanced/ocap) of the Cosmos SDK, only `baseapp` should have the ability to commit stores. For example, this is the reason why the `ctx.KVStore()` method by which modules typically access stores returns a `KVStore` and not a `CommitKVStore`.
- -The Cosmos SDK comes with many types of stores, the most used being [`CommitMultiStore`](#multistore), [`KVStore`](#kvstore) and [`GasKv` store](#gaskv-store). [Other types of stores](#other-stores) include `Transient` and `TraceKV` stores. - -## Multistore[​](#multistore "Direct link to Multistore") - -### Multistore Interface[​](#multistore-interface "Direct link to Multistore Interface") - -Each Cosmos SDK application holds a multistore at its root to persist its state. The multistore is a store of `KVStores` that follows the `Multistore` interface: - -store/types/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L115-L147) - -If tracing is enabled, then branching the multistore will first wrap all the underlying `KVStore`s in [`TraceKv.Store`](#tracekv-store). - -### CommitMultiStore[​](#commitmultistore "Direct link to CommitMultiStore") - -The main type of `Multistore` used in the Cosmos SDK is `CommitMultiStore`, which is an extension of the `Multistore` interface: - -store/types/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L155-L225) - -As for the concrete implementation, `rootMulti.Store` is the go-to implementation of the `CommitMultiStore` interface. - -store/rootmulti/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/rootmulti/store.go#L56-L82) - -The `rootMulti.Store` is a base-layer multistore built around a `db` on top of which multiple `KVStores` can be mounted, and is the default multistore used in [`baseapp`](/v0.53/learn/advanced/baseapp). - -### CacheMultiStore[​](#cachemultistore "Direct link to CacheMultiStore") - -Whenever the `rootMulti.Store` needs to be branched, a [`cachemulti.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/cachemulti/store.go) is used.
- -store/cachemulti/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/cachemulti/store.go#L20-L34) - -`cachemulti.Store` branches all substores (creates a virtual store for each substore) in its constructor and holds them in `Store.stores`. Moreover, it caches all read queries. `Store.GetKVStore()` returns the store from `Store.stores`, and `Store.Write()` recursively calls `CacheWrap.Write()` on all the substores. - -## Base-layer KVStores[​](#base-layer-kvstores "Direct link to Base-layer KVStores") - -### `KVStore` and `CommitKVStore` Interfaces[​](#kvstore-and-commitkvstore-interfaces "Direct link to kvstore-and-commitkvstore-interfaces") - -A `KVStore` is a simple key-value store used to store and retrieve data. A `CommitKVStore` is a `KVStore` that also implements a `Committer`. By default, stores mounted in `baseapp`'s main `CommitMultiStore` are `CommitKVStore`s. The `KVStore` interface is primarily used to restrict modules from accessing the committer. - -Individual `KVStore`s are used by modules to manage a subset of the global state. `KVStores` can be accessed by objects that hold a specific key. This `key` should only be exposed to the [`keeper`](/v0.53/build/building-modules/keeper) of the module that defines the store. - -`CommitKVStore`s are declared by proxy of their respective `key` and mounted on the application's [multistore](#multistore) in the [main application file](/v0.53/learn/beginner/app-anatomy#core-application-file). In the same file, the `key` is also passed to the module's `keeper` that is responsible for managing the store. - -store/types/store.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L227-L264) - -Apart from the traditional `Get` and `Set` methods that a `KVStore` must implement via the `BasicKVStore` interface, a `KVStore` must also provide an `Iterator(start, end)` method which returns an `Iterator` object. It is used to iterate over a range of keys, typically keys that share a common prefix. Below is an example from the bank module's keeper, used to iterate over all account balances: - -x/bank/keeper/view\.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/bank/keeper/view.go#L121-L137) - -### `IAVL` Store[​](#iavl-store "Direct link to iavl-store") - -The default implementation of `KVStore` and `CommitKVStore` used in `baseapp` is the `iavl.Store`. - -store/iavl/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/iavl/store.go#L36-L41) - -`iavl` stores are based around an [IAVL Tree](https://github.com/cosmos/iavl), a self-balancing binary tree which guarantees that: - -* `Get` and `Set` operations are O(log n), where n is the number of elements in the tree. -* Iteration efficiently returns the sorted elements within the range. -* Each tree version is immutable and can be retrieved even after a commit (depending on the pruning settings). - -The documentation on the IAVL Tree is located [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md). - -### `DbAdapter` Store[​](#dbadapter-store "Direct link to dbadapter-store") - -`dbadapter.Store` is an adapter for `dbm.DB` that makes it fulfil the `KVStore` interface. - -store/dbadapter/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/dbadapter/store.go#L13-L16) - -`dbadapter.Store` embeds `dbm.DB`, meaning most of the `KVStore` interface functions are implemented.
The other functions (mostly miscellaneous) are manually implemented. This store is primarily used within [Transient Stores](#transient-store). - -### `Transient` Store[​](#transient-store "Direct link to transient-store") - -`Transient.Store` is a base-layer `KVStore` which is automatically discarded at the end of the block. - -store/transient/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/transient/store.go#L16-L19) - -`Transient.Store` is a `dbadapter.Store` with a `dbm.NewMemDB()`. All `KVStore` methods are reused. When `Store.Commit()` is called, a new `dbadapter.Store` is assigned, discarding the previous reference and leaving it to be garbage collected. - -This type of store is useful to persist information that is only relevant per-block. One example would be to store parameter changes (i.e. a bool set to `true` if a parameter changed in a block). - -x/params/types/subspace.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/params/types/subspace.go#L22-L32) - -Transient stores are typically accessed from the [`context`](/v0.53/learn/advanced/context) via the `TransientStore()` method: - -types/context.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/cachekv/store.go#L26-L36) - -This is the type used whenever an IAVL Store needs to be branched to create an isolated store (typically when we need to mutate a state that might be reverted later). - -#### `Get`[​](#get "Direct link to get") - -`Store.Get()` first checks whether `Store.cache` has a value associated with the key. If the value exists, the function returns it. If not, the function calls `Store.parent.Get()`, caches the result in `Store.cache`, and returns it. - -#### `Set`[​](#set "Direct link to set") - -`Store.Set()` writes the key-value pair to `Store.cache`. `cValue` has a boolean `dirty` field which indicates whether the cached value differs from the underlying value. When `Store.Set()` caches a new pair, `cValue.dirty` is set to `true`, so that when `Store.Write()` is called the pair can be written to the underlying store. - -#### `Iterator`[​](#iterator "Direct link to iterator") - -`Store.Iterator()` has to traverse both the cached items and the original items. In `Store.iterator()`, two iterators are generated, one for each, and then merged. `memIterator` is essentially a slice of the `KVPairs`, used for cached items. `mergeIterator` is a combination of two iterators, where traversal happens in order across both iterators. - -### `GasKv` Store[​](#gaskv-store "Direct link to gaskv-store") - -Cosmos SDK applications use [`gas`](/v0.53/learn/beginner/gas-fees) to track resource usage and prevent spam. [`GasKv.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/gaskv/store.go) is a `KVStore` wrapper that enables automatic gas consumption each time a read or write to the store is made. It is the solution of choice to track storage usage in Cosmos SDK applications. - -store/gaskv/store.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/gaskv/store.go#L11-L17) - -When methods of the parent `KVStore` are called, `GasKv.Store` automatically consumes an appropriate amount of gas depending on the `Store.gasConfig`: - -store/types/gas.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/gas.go#L219-L228) - -By default, all `KVStores` are wrapped in `GasKv.Stores` when retrieved. This is done in the `KVStore()` method of the [`context`](/v0.53/learn/advanced/context): - -types/context.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/context.go#L342-L345) - -In this case, the gas configuration set in the `context` is used. The gas configuration can be set using the `WithKVGasConfig` method of the `context`. Otherwise, it uses the following default: - -store/types/gas.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/gas.go#L230-L241) - -### `TraceKv` Store[​](#tracekv-store "Direct link to tracekv-store") - -`tracekv.Store` is a wrapper `KVStore` which provides operation tracing functionalities over the underlying `KVStore`. It is applied automatically by the Cosmos SDK on all `KVStore`s if tracing is enabled on the parent `MultiStore`. - -store/tracekv/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/tracekv/store.go#L20-L43) - -When `KVStore` methods are called, `tracekv.Store` automatically logs a `traceOperation` to the `Store.writer`. `traceOperation.Metadata` is filled with `Store.context` when it is not nil. `TraceContext` is a `map[string]interface{}`.
- -### `Prefix` Store[​](#prefix-store "Direct link to prefix-store") - -`prefix.Store` is a wrapper `KVStore` which provides automatic key-prefixing functionalities over the underlying `KVStore`. - -store/prefix/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/prefix/store.go#L15-L21) - -When `Store.{Get, Set}()` is called, the store forwards the call to its parent, with the key prefixed with the `Store.prefix`. - -When `Store.Iterator()` is called, the store cannot simply prepend `Store.prefix` to the iteration bounds, since that does not work as intended: some elements would be traversed even though they do not start with the prefix. - -### `ListenKv` Store[​](#listenkv-store "Direct link to listenkv-store") - -`listenkv.Store` is a wrapper `KVStore` which provides state listening capabilities over the underlying `KVStore`. It is applied automatically by the Cosmos SDK on any `KVStore` whose `StoreKey` is specified during state streaming configuration. Additional information about state streaming configuration can be found in the [store/streaming/README.md](https://github.com/cosmos/cosmos-sdk/tree/v0.53.0/store/streaming). - -store/listenkv/store.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/listenkv/store.go#L11-L18) - -When `KVStore.Set` or `KVStore.Delete` methods are called, `listenkv.Store` automatically writes the operations to the set of `Store.listeners`. - -## `BasicKVStore` interface[​](#basickvstore-interface "Direct link to basickvstore-interface") - -An interface providing only the basic CRUD functionality (`Get`, `Set`, `Has`, and `Delete` methods), without iteration or caching. This is used to partially expose components of a larger store.
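The iteration caveat for `prefix.Store` described above comes down to computing a correct exclusive end bound for the range query. The SDK has a helper for this (`types.PrefixEndBytes`); the following is an illustrative re-implementation of the idea, not the upstream code:

```go
package main

// prefixEndBytes returns the exclusive end bound for a range query over all
// keys starting with prefix: the prefix with its last byte incremented,
// carrying over any trailing 0xFF bytes. A nil result means "no upper
// bound". This sketches the idea behind the SDK's types.PrefixEndBytes;
// the upstream implementation may differ in detail.
func prefixEndBytes(prefix []byte) []byte {
	if len(prefix) == 0 {
		return nil
	}
	end := make([]byte, len(prefix))
	copy(end, prefix)
	for i := len(end) - 1; i >= 0; i-- {
		if end[i] != 0xFF {
			end[i]++
			return end[:i+1]
		}
	}
	// The prefix is all 0xFF bytes: every key >= prefix matches,
	// so there is no finite end bound.
	return nil
}
```

Iterating over `[prefix, prefixEndBytes(prefix))` visits exactly the prefixed keys, whereas naively reusing the prefix for both bounds (or appending it to user-supplied bounds) would either return nothing or leak keys outside the prefix.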
diff --git a/docs/sdk/v0.53/learn/advanced/transactions.mdx b/docs/sdk/v0.53/learn/advanced/transactions.mdx deleted file mode 100644 index 64c5134b..00000000 --- a/docs/sdk/v0.53/learn/advanced/transactions.mdx +++ /dev/null @@ -1,242 +0,0 @@ ---- -title: "Transactions" -description: "Version: v0.53" ---- - - - `Transactions` are objects created by end-users to trigger state changes in the application. - - - - * [Anatomy of a Cosmos SDK Application](/v0.53/learn/beginner/app-anatomy) - - -## Transactions[​](#transactions-1 "Direct link to Transactions") - -Transactions are comprised of metadata held in [contexts](/v0.53/learn/advanced/context) and [`sdk.Msg`s](/v0.53/build/building-modules/messages-and-queries) that trigger state changes within a module through the module's Protobuf [`Msg` service](/v0.53/build/building-modules/msg-services). - -When users want to interact with an application and make state changes (e.g. sending coins), they create transactions. Each of a transaction's `sdk.Msg`s must be signed using the private key associated with the appropriate account(s), before the transaction is broadcasted to the network. A transaction must then be included in a block, validated, and approved by the network through the consensus process. To read more about the lifecycle of a transaction, click [here](/v0.53/learn/beginner/tx-lifecycle). - -## Type Definition[​](#type-definition "Direct link to Type Definition") - -Transaction objects are Cosmos SDK types that implement the `Tx` interface: - -types/tx\_msg.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/tx_msg.go#L53-L58) - -It contains the following methods: - -* **GetMsgs:** unwraps the transaction and returns a list of contained `sdk.Msg`s - one transaction may have one or multiple messages, which are defined by module developers.
- -As a developer, you should rarely manipulate `Tx` directly, as `Tx` is an intermediate type used for transaction generation. Instead, developers should prefer the `TxBuilder` interface, which you can learn more about [below](#transaction-generation). - -### Signing Transactions[​](#signing-transactions "Direct link to Signing Transactions") - -Every message in a transaction must be signed by the addresses specified by its `GetSigners`. The Cosmos SDK currently allows signing transactions in two different ways. - -#### `SIGN_MODE_DIRECT` (preferred)[​](#sign_mode_direct-preferred "Direct link to sign_mode_direct-preferred") - -The most used implementation of the `Tx` interface is the Protobuf `Tx` message, which is used in `SIGN_MODE_DIRECT`: - -proto/cosmos/tx/v1beta1/tx.proto - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/tx.proto#L15-L28) - -Because Protobuf serialization is not deterministic, the Cosmos SDK uses an additional `TxRaw` type to denote the pinned bytes over which a transaction is signed. Any user can generate a valid `body` and `auth_info` for a transaction, and serialize these two messages using Protobuf. `TxRaw` then pins the user's exact binary representation of `body` and `auth_info`, called respectively `body_bytes` and `auth_info_bytes`. The document that is signed by all signers of the transaction is `SignDoc` (deterministically serialized using [ADR-027](/v0.53/build/architecture/adr-027-deterministic-protobuf-serialization)): - -proto/cosmos/tx/v1beta1/tx.proto - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/tx.proto#L50-L67) - -Once signed by all signers, the `body_bytes`, `auth_info_bytes` and `signatures` are gathered into `TxRaw`, whose serialized bytes are broadcasted over the network. 
- -#### `SIGN_MODE_LEGACY_AMINO_JSON`[​](#sign_mode_legacy_amino_json "Direct link to sign_mode_legacy_amino_json") - -The legacy implementation of the `Tx` interface is the `StdTx` struct from `x/auth`: - -x/auth/migrations/legacytx/stdtx.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/migrations/legacytx/stdtx.go#L82-L89) - -The document signed by all signers is `StdSignDoc`: - -x/auth/migrations/legacytx/stdsign.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/migrations/legacytx/stdsign.go#L30-L43) - -which is encoded into bytes using Amino JSON. Once all signatures are gathered into `StdTx`, `StdTx` is serialized using Amino JSON, and these bytes are broadcasted over the network. - -#### Other Sign Modes[​](#other-sign-modes "Direct link to Other Sign Modes") - -The Cosmos SDK also provides a couple of other sign modes for particular use cases. - -#### `SIGN_MODE_DIRECT_AUX`[​](#sign_mode_direct_aux "Direct link to sign_mode_direct_aux") - -`SIGN_MODE_DIRECT_AUX` is a sign mode released in Cosmos SDK v0.46 that targets transactions with multiple signers. Whereas `SIGN_MODE_DIRECT` expects each signer to sign over both `TxBody` and `AuthInfo` (which includes all other signers' signer infos, i.e. their account sequence, public key and mode info), `SIGN_MODE_DIRECT_AUX` allows N-1 signers to only sign over `TxBody` and *their own* signer info. Moreover, each auxiliary signer (i.e. a signer using `SIGN_MODE_DIRECT_AUX`) doesn't need to sign over the fees: - -proto/cosmos/tx/v1beta1/tx.proto - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/tx.proto#L68-L93) - -The use case is a multi-signer transaction, where one of the signers is appointed to gather all signatures, broadcast the transaction, and pay the fees, and the others only care about the transaction body. This generally allows for a better multi-signing UX. If Alice, Bob and Charlie are part of a 3-signer transaction, then Alice and Bob can both use `SIGN_MODE_DIRECT_AUX` to sign over the `TxBody` and their own signer info (with no additional step to gather the other signers' infos, unlike in `SIGN_MODE_DIRECT`), without specifying a fee in their SignDoc. Charlie can then gather both signatures from Alice and Bob, and create the final transaction by appending a fee. Note that the fee payer of the transaction (in our case Charlie) must sign over the fees, so must use `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. - -#### `SIGN_MODE_TEXTUAL`[​](#sign_mode_textual "Direct link to sign_mode_textual") - -`SIGN_MODE_TEXTUAL` is a sign mode, included in the v0.50 release, that delivers a better signing experience on hardware wallets. In this mode, the signer signs over a human-readable string representation of the transaction (encoded as CBOR), which makes all displayed data easier to read. The data is formatted as screens, and each screen is meant to be displayed in its entirety even on small devices like the Ledger Nano. - -There are also *expert* screens, which will only be displayed if the user has chosen that option on their hardware device. These screens contain things like account number, account sequence and the sign data hash. - -Data is formatted using a set of `ValueRenderer`s; the SDK provides defaults for all known messages and value types. Chain developers can also opt to implement their own `ValueRenderer` for a type/message if they'd like to display information differently.
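As a rough illustration of the renderer idea, here is a reduced sketch. The real interface lives in the SDK's textual signing packages and also supports parsing screens back into values; the interface shape, the `coinRenderer`, and its hardcoded display metadata below are all invented for this example:

```go
package main

import "fmt"

// ValueRenderer turns a typed value into the screens shown to the signer.
// This is an illustrative reduction of the SDK's renderer concept.
type ValueRenderer interface {
	Format(value interface{}) ([]string, error)
}

// coinRenderer renders a base-denom integer amount (e.g. 1500000uatom)
// as a human-readable screen like "1.500000 ATOM" - the kind of output
// SIGN_MODE_TEXTUAL targets. Display metadata is hardcoded for the sketch;
// on a real chain it would come from the bank module's denom metadata.
type coinRenderer struct {
	denom    string // base denomination, e.g. "uatom"
	symbol   string // display symbol, e.g. "ATOM"
	exponent int    // decimal places between base and display units
}

func (r coinRenderer) Format(value interface{}) ([]string, error) {
	amount, ok := value.(int64)
	if !ok {
		return nil, fmt.Errorf("coinRenderer: expected int64, got %T", value)
	}
	whole := amount / pow10(r.exponent)
	frac := amount % pow10(r.exponent)
	screen := fmt.Sprintf("%d.%0*d %s", whole, r.exponent, frac, r.symbol)
	return []string{screen}, nil
}

func pow10(n int) int64 {
	p := int64(1)
	for i := 0; i < n; i++ {
		p *= 10
	}
	return p
}
```

Each returned string corresponds to one screen on the signing device, which is why renderers emit a slice rather than a single string.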
- -If you wish to learn more, please refer to [ADR-050](/v0.53/build/architecture/adr-050-sign-mode-textual). - -#### Custom Sign modes[​](#custom-sign-modes "Direct link to Custom Sign modes") - -You can add your own custom sign mode to the Cosmos SDK. While we cannot accept the sign mode's implementation into the repository, we can accept a pull request that adds the custom sign mode to the `SignMode` enum located [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/signing/v1beta1/signing.proto#L17). - -## Transaction Process[​](#transaction-process "Direct link to Transaction Process") - -The process of an end-user sending a transaction is: - -* decide on the messages to put into the transaction, -* generate the transaction using the Cosmos SDK's `TxBuilder`, -* broadcast the transaction using one of the available interfaces. - -The next paragraphs will describe each of these components, in this order. - -### Messages[​](#messages "Direct link to Messages") - - - Module `sdk.Msg`s are not to be confused with [ABCI Messages](https://docs.cometbft.com/v0.37/spec/abci/) which define interactions between the CometBFT and application layers. - - -**Messages** (or `sdk.Msg`s) are module-specific objects that trigger state transitions within the scope of the module they belong to. Module developers define the messages for their module by adding methods to the Protobuf [`Msg` service](/v0.53/build/building-modules/msg-services), and also implement the corresponding `MsgServer`. - -Each `sdk.Msg` is related to exactly one Protobuf [`Msg` service](/v0.53/build/building-modules/msg-services) RPC, defined inside each module's `tx.proto` file. An SDK app router automatically maps every `sdk.Msg` to a corresponding RPC. Protobuf generates a `MsgServer` interface for each module `Msg` service, and the module developer needs to implement this interface.
This design puts more responsibility on module developers, allowing application developers to reuse common functionalities without having to implement state transition logic repetitively. - -To learn more about Protobuf `Msg` services and how to implement `MsgServer`, click [here](/v0.53/build/building-modules/msg-services). - -While messages contain the information for state transition logic, a transaction's other metadata and relevant information are stored in the `TxBuilder` and `Context`. - -### Transaction Generation[​](#transaction-generation "Direct link to Transaction Generation") - -The `TxBuilder` interface contains data closely related to the generation of transactions, which an end-user can set to generate the desired transaction: - -client/tx\_config.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/tx_config.go#L39-L57) - -* `Msg`s, the array of [messages](#messages) included in the transaction. -* `GasLimit`, the option chosen by the user to calculate how much gas they will need to pay. -* `Memo`, a note or comment to send with the transaction. -* `FeeAmount`, the maximum amount the user is willing to pay in fees. -* `TimeoutHeight`, block height until which the transaction is valid. -* `Unordered`, an option indicating this transaction may be executed in any order (requires `Sequence` to be unset). -* `TimeoutTimestamp`, the timeout timestamp (unordered nonce) of the transaction (required to be used with `Unordered`). -* `Signatures`, the array of signatures from all signers of the transaction.
- -As there are currently two sign modes for signing transactions, there are also two implementations of `TxBuilder`: - -* [wrapper](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/tx/builder.go#L27-L44) for creating transactions for `SIGN_MODE_DIRECT`, -* [StdTxBuilder](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/migrations/legacytx/stdtx_builder.go#L14-L17) for `SIGN_MODE_LEGACY_AMINO_JSON`. - -However, the two implementations of `TxBuilder` should be hidden away from end-users, as they should prefer using the overarching `TxConfig` interface: - -client/tx\_config.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/tx_config.go#L27-L37) - -`TxConfig` is an app-wide configuration for managing transactions. Most importantly, it holds the information about whether to sign each transaction with `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. By calling `txBuilder := txConfig.NewTxBuilder()`, a new `TxBuilder` will be created with the appropriate sign mode. - -Once `TxBuilder` is correctly populated with the setters exposed above, `TxConfig` will also take care of correctly encoding the bytes (again, either using `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`). Here's a pseudo-code snippet of how to generate and encode a transaction, using the `TxEncoder()` method: - -```
-txBuilder := txConfig.NewTxBuilder()
-txBuilder.SetMsgs(...) // and other setters on txBuilder
-bz, err := txConfig.TxEncoder()(txBuilder.GetTx())
-// bz are bytes to be broadcasted over the network
-``` - -### Broadcasting the Transaction[​](#broadcasting-the-transaction "Direct link to Broadcasting the Transaction") - -Once the transaction bytes are generated, there are currently three ways of broadcasting it.
- -#### CLI[​](#cli "Direct link to CLI") - -Application developers create entry points to the application by creating a [command-line interface](/v0.53/learn/advanced/cli), [gRPC and/or REST interface](/v0.53/learn/advanced/grpc_rest), typically found in the application's `./cmd` folder. These interfaces allow users to interact with the application through the command line. - -For the [command-line interface](/v0.53/build/building-modules/module-interfaces#cli), module developers create subcommands to add as children to the application top-level transaction command `TxCmd`. CLI commands actually bundle all the steps of transaction processing into one simple command: creating messages, generating transactions and broadcasting. For concrete examples, see the [Interacting with a Node](/v0.53/user/run-node/interact-node) section. An example transaction made using the CLI looks like: - -``` -simd tx send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake -``` - -#### gRPC[​](#grpc "Direct link to gRPC") - -[gRPC](https://grpc.io) is the main component for the Cosmos SDK's RPC layer. Its principal usage is in the context of modules' [`Query` services](/v0.53/build/building-modules/query-services). However, the Cosmos SDK also exposes a few other module-agnostic gRPC services, one of them being the `Tx` service: - -proto/cosmos/tx/v1beta1/service.proto - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/service.proto) - -The `Tx` service exposes a handful of utility functions, such as simulating a transaction or querying a transaction, and also one method to broadcast transactions. - -Examples of broadcasting and simulating a transaction are shown [here](/v0.53/user/run-node/txs#programmatically-with-go). - -#### REST[​](#rest "Direct link to REST") - -Each gRPC method has its corresponding REST endpoint, generated using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway).
Therefore, instead of using gRPC, you can also use HTTP to broadcast the same transaction, on the `POST /cosmos/tx/v1beta1/txs` endpoint. - -An example can be seen [here](/v0.53/user/run-node/txs#using-rest). - -#### CometBFT RPC[​](#cometbft-rpc "Direct link to CometBFT RPC") - -The three methods presented above are actually higher-level abstractions over the CometBFT RPC `/broadcast_tx_{async,sync,commit}` endpoints, documented [here](https://docs.cometbft.com/v0.37/core/rpc). This means that you can use the CometBFT RPC endpoints directly to broadcast the transaction, if you wish. - -### Unordered Transactions[​](#unordered-transactions "Direct link to Unordered Transactions") - - - Looking to enable unordered transactions on your chain? Check out the [v0.53.0 Upgrade Guide](https://docs.cosmos.network/v0.53/build/migrations/upgrade-guide#enable-unordered-transactions-optional) - - - - Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value, the transaction will be rejected. Services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions. Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. - - -Beginning with Cosmos SDK v0.53.0, chains may enable unordered transaction support. Unordered transactions work by using a timestamp as the transaction's nonce value. The sequence value must NOT be set in the signature(s) of the transaction. The timestamp must be greater than the current block time and not exceed the chain's configured max unordered timeout timestamp duration. Senders must use a unique timestamp for each distinct transaction. The difference may be as small as a nanosecond, however. - -These unique timestamps serve as a one-shot nonce, and their lifespan in state is short-lived.
Upon transaction inclusion, an entry consisting of the timeout timestamp and account address is recorded to state. Once the block time has passed the timeout timestamp, the entry is removed. This ensures that unordered nonces do not indefinitely fill up the chain's storage. diff --git a/docs/sdk/v0.53/learn/beginner/accounts.mdx b/docs/sdk/v0.53/learn/beginner/accounts.mdx deleted file mode 100644 index e996c760..00000000 --- a/docs/sdk/v0.53/learn/beginner/accounts.mdx +++ /dev/null @@ -1,201 +0,0 @@ ---- -title: "Accounts" -description: "Version: v0.53" ---- - - - This document describes the in-built account and public key system of the Cosmos SDK. - - - - * [Anatomy of a Cosmos SDK Application](/v0.53/learn/beginner/app-anatomy) - - -## Account Definition[​](#account-definition "Direct link to Account Definition") - -In the Cosmos SDK, an *account* designates a pair of *public key* `PubKey` and *private key* `PrivKey`. The `PubKey` can be derived to generate various `Addresses`, which are used to identify users (among other parties) in the application. `Addresses` are also associated with [`message`s](/v0.53/build/building-modules/messages-and-queries#messages) to identify the sender of the `message`. The `PrivKey` is used to generate [digital signatures](#signatures) to prove that an `Address` associated with the `PrivKey` approved of a given `message`. - -For HD key derivation, the Cosmos SDK uses a standard called [BIP32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki). BIP32 allows users to create an HD wallet (as specified in [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)) - a set of accounts derived from an initial secret seed. A seed is usually created from a 12- or 24-word mnemonic. A single seed can derive any number of `PrivKey`s using a one-way cryptographic function. Then, a `PubKey` can be derived from the `PrivKey`.
Naturally, the mnemonic is the most sensitive information, as private keys can always be re-generated if the mnemonic is preserved. - -```
-     Account 0                Account 1                Account 2
-
-+------------------+     +------------------+     +------------------+
-|                  |     |                  |     |                  |
-|    Address 0     |     |    Address 1     |     |    Address 2     |
-|        ^         |     |        ^         |     |        ^         |
-|        |         |     |        |         |     |        |         |
-|        +         |     |        +         |     |        +         |
-|   Public key 0   |     |   Public key 1   |     |   Public key 2   |
-|        ^         |     |        ^         |     |        ^         |
-|        |         |     |        |         |     |        |         |
-|        +         |     |        +         |     |        +         |
-|  Private key 0   |     |  Private key 1   |     |  Private key 2   |
-|        ^         |     |        ^         |     |        ^         |
-+------------------+     +------------------+     +------------------+
-         |                        |                        |
-         +------------------------+------------------------+
-                                  |
-                        +---------+---------+
-                        |                   |
-                        |  Master PrivKey   |
-                        |                   |
-                        +-------------------+
-                                  |
-                        +---------+---------+
-                        |                   |
-                        |  Mnemonic (Seed)  |
-                        |                   |
-                        +-------------------+
-``` - -In the Cosmos SDK, keys are stored and managed by using an object called a [`Keyring`](#keyring). - -## Keys, accounts, addresses, and signatures[​](#keys-accounts-addresses-and-signatures "Direct link to Keys, accounts, addresses, and signatures") - -The principal way of authenticating a user is through [digital signatures](https://en.wikipedia.org/wiki/Digital_signature). Users sign transactions using their own private key. Signature verification is done with the associated public key. For on-chain signature verification purposes, we store the public key in an `Account` object (alongside other data required for proper transaction validation). - -In the node, all data is stored using Protocol Buffers serialization. - -The Cosmos SDK supports the following digital key schemes for creating digital signatures: - -* `secp256k1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256k1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/secp256k1/secp256k1.go),
-* `secp256r1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/secp256r1/pubkey.go), -* `tm-ed25519`, as implemented in the [Cosmos SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/ed25519/ed25519.go). This scheme is supported only for consensus validation. - 
-| | Address length in bytes | Public key length in bytes | Used for transaction authentication | Used for consensus (cometbft) |
-| :----------: | :---------------------: | :------------------------: | :---------------------------------: | :---------------------------: |
-| `secp256k1` | 20 | 33 | yes | no |
-| `secp256r1` | 32 | 33 | yes | no |
-| `tm-ed25519` | -- not used -- | 32 | no | yes |
- -## Addresses[​](#addresses "Direct link to Addresses") - -`Addresses` and `PubKey`s are both public information that identifies actors in the application. `Account` is used to store authentication information. The basic account implementation is provided by a `BaseAccount` object. - -Each account is identified using an `Address`, which is a sequence of bytes derived from a public key. In the Cosmos SDK, we define three types of addresses that specify a context where an account is used: - -* `AccAddress` identifies users (the sender of a `message`). -* `ValAddress` identifies validator operators. -* `ConsAddress` identifies validator nodes that are participating in consensus. Validator nodes are derived using the **`ed25519`** curve. - -These types implement the `Address` interface: - -types/address.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/address.go#L126-L134) - -The address construction algorithm is defined in [ADR-28](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md).
Here is the standard way to obtain an account address from a `pub` public key: - -``` -sdk.AccAddress(pub.Address().Bytes()) -``` - -Of note, the `Marshal()` and `Bytes()` methods both return the same raw `[]byte` form of the address. `Marshal()` is required for Protobuf compatibility. - -For user interaction, addresses are formatted using [Bech32](https://en.bitcoin.it/wiki/Bech32) and implemented by the `String` method. Bech32 is the only supported format to use when interacting with a blockchain. The Bech32 human-readable part (Bech32 prefix) is used to denote an address type. Example: - -types/address.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/address.go#L299-L316) - 
-| | Address Bech32 Prefix |
-| ------------------ | --------------------- |
-| Accounts | cosmos |
-| Validator Operator | cosmosvaloper |
-| Consensus Nodes | cosmosvalcons |
- -### Public Keys[​](#public-keys "Direct link to Public Keys") - -Public keys in the Cosmos SDK are defined by the `cryptotypes.PubKey` interface. Since public keys are saved in a store, `cryptotypes.PubKey` extends the `proto.Message` interface: - -crypto/types/types.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/types/types.go#L8-L17) - -A compressed format is used for `secp256k1` and `secp256r1` serialization. - -* The first byte is a `0x02` byte if the `y`-coordinate is the lexicographically largest of the two associated with the `x`-coordinate. -* Otherwise the first byte is a `0x03`. - -This prefix is followed by the `x`-coordinate. - -Public Keys are not used to reference accounts (or users) and in general are not used when composing transaction messages (with few exceptions: `MsgCreateValidator`, `Validator` and `Multisig` messages).
For user interactions, `PubKey` is formatted using Protobuf's JSON ([ProtoMarshalJSON](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/codec/json.go#L14-L34) function). Example: - -client/keys/output.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys/output.go#L23-L39) - -## Keyring[​](#keyring "Direct link to Keyring") - -A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface: - -crypto/keyring/keyring.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L58-L106) - -The default implementation of `Keyring` comes from the third-party [`99designs/keyring`](https://github.com/99designs/keyring) library. - -A few notes on the `Keyring` methods: - -* `Sign(uid string, msg []byte) ([]byte, types.PubKey, error)` strictly deals with the signature of the `msg` bytes. You must prepare and encode the transaction into a canonical `[]byte` form. Because protobuf is not deterministic, it has been decided in [ADR-020](/v0.53/build/architecture/adr-020-protobuf-transaction-encoding) that the canonical `payload` to sign is the `SignDoc` struct, deterministically encoded using [ADR-027](/v0.53/build/architecture/adr-027-deterministic-protobuf-serialization). Note that signature verification is not implemented in the Cosmos SDK by default; it is deferred to the [`anteHandler`](/v0.53/learn/advanced/baseapp#antehandler). - -proto/cosmos/tx/v1beta1/tx.proto - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/tx.proto#L50-L67) - -* `NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error)` creates a new account based on the [`bip44 path`](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) and persists it on disk.
The `PrivKey` is **never stored unencrypted**; instead, it is [encrypted with a passphrase](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/armor.go) before being persisted. In the context of this method, the key type and sequence number refer to the segment of the BIP44 derivation path (for example, `0`, `1`, `2`, ...) that is used to derive a private and a public key from the mnemonic. Using the same mnemonic and derivation path, the same `PrivKey`, `PubKey` and `Address` are generated. The following keys are supported by the keyring: - -* `secp256k1` - -* `ed25519` - -* `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function. - -### Create New Key Type[​](#create-new-key-type "Direct link to Create New Key Type") - -To create a new key type for use in the keyring, the `keyring.SignatureAlgo` interface must be fulfilled. - -crypto/keyring/signing\_algorithms.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/signing_algorithms.go#L11-L16) - -The interface consists of three methods, where `Name()` returns the name of the algorithm as a `hd.PubKeyType` and `Derive()` and `Generate()` must return the following functions respectively: - -crypto/hd/algo.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/hd/algo.go#L28-L31) - -Once the `keyring.SignatureAlgo` has been implemented, it must be added to the [list of supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L209) of the keyring.
- -For simplicity, the implementation of a new key type should be done inside the `crypto/hd` package. There is an example of a working `secp256k1` implementation in [algo.go](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/hd/algo.go#L38). - -#### Implementing secp256r1 algo[​](#implementing-secp256r1-algo "Direct link to Implementing secp256r1 algo") - -Here is an example of how secp256r1 could be implemented. - -First, a new function to create a private key from a secret number is needed in the secp256r1 package. This function could look like this: - -```
-// cosmos-sdk/crypto/keys/secp256r1/privkey.go
-
-// NewPrivKeyFromSecret creates a private key derived for the secret number
-// represented in big-endian. The `secret` must be a valid ECDSA field element.
-func NewPrivKeyFromSecret(secret []byte) (*PrivKey, error) {
-	var d = new(big.Int).SetBytes(secret)
-	if d.Cmp(secp256r1.Params().N) >= 1 {
-		return nil, errorsmod.Wrap(errors.ErrInvalidRequest, "secret not in the curve base field")
-	}
-	sk := new(ecdsa.PrivKey)
-	return &PrivKey{&ecdsaSK{*sk}}, nil
-}
-``` - -After that `secp256r1Algo` can be implemented.
- -```
-// cosmos-sdk/crypto/hd/secp256r1Algo.go
-package hd
-
-import (
-	"github.com/cosmos/go-bip39"
-
-	"github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1"
-	"github.com/cosmos/cosmos-sdk/crypto/types"
-)
-
-// Secp256r1Type uses the secp256r1 ECDSA parameters.
-const Secp256r1Type = PubKeyType("secp256r1")
-
-var Secp256r1 = secp256r1Algo{}
-
-type secp256r1Algo struct{}
-
-func (s secp256r1Algo) Name() PubKeyType {
-	return Secp256r1Type
-}
-
-// Derive derives and returns the secp256r1 private key for the given seed and HD path.
-func (s secp256r1Algo) Derive() DeriveFn {
-	return func(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) {
-		seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase)
-		if err != nil {
-			return nil, err
-		}
-
-		masterPriv, ch := ComputeMastersFromSeed(seed)
-		if len(hdPath) == 0 {
-			return masterPriv[:], nil
-		}
-		derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath)
-
-		return derivedKey, err
-	}
-}
-
-// Generate generates a secp256r1 private key from the given bytes.
-func (s secp256r1Algo) Generate() GenerateFn {
-	return func(bz []byte) types.PrivKey {
-		key, err := secp256r1.NewPrivKeyFromSecret(bz)
-		if err != nil {
-			panic(err)
-		}
-		return key
-	}
-}
-``` - -Finally, the algo must be added to the list of [supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L209) by the keyring. - -```
-// cosmos-sdk/crypto/keyring/keyring.go
-func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) keystore {
-	// Default options for keybase, these can be overwritten using the
-	// Option function
-	options := Options{
-		SupportedAlgos:       SigningAlgoList{hd.Secp256k1, hd.Secp256r1}, // added here
-		SupportedAlgosLedger: SigningAlgoList{hd.Secp256k1},
-	}
-...
-```
-
-Hereafter, to create new keys using your algo, you must specify it with the `--algo` flag:
-
-`simd keys add myKey --algo secp256r1`
diff --git a/docs/sdk/v0.53/learn/beginner/app-anatomy.mdx b/docs/sdk/v0.53/learn/beginner/app-anatomy.mdx
deleted file mode 100644
index fb078216..00000000
--- a/docs/sdk/v0.53/learn/beginner/app-anatomy.mdx
+++ /dev/null
@@ -1,319 +0,0 @@
----
-title: "Anatomy of a Cosmos SDK Application"
-description: "Version: v0.53"
----
-
-  This document describes the core parts of a Cosmos SDK application, represented throughout the document as a placeholder application named `app`.
-
-## Node Client[​](#node-client "Direct link to Node Client")
-
-The Daemon, or [Full-Node Client](/v0.53/learn/advanced/node), is the core process of a Cosmos SDK-based blockchain. Participants in the network run this process to initialize their state-machine, connect with other full-nodes, and update their state-machine as new blocks come in.
-
-```
-                ^  +-------------------------------+  ^
-                |  |                               |  |
-                |  |  State-machine = Application  |  |
-                |  |                               |  |   Built with Cosmos SDK
-                |  |            ^      +           |  |
-                |  +----------- | ABCI | ----------+  v
-                |  |            +      v           |  ^
-                |  |                               |  |
-Blockchain Node |  |           Consensus           |  |
-                |  |                               |  |
-                |  +-------------------------------+  |   CometBFT
-                |  |                               |  |
-                |  |           Networking          |  |
-                |  |                               |  |
-                v  +-------------------------------+  v
-```
-
-The blockchain full-node presents itself as a binary, generally suffixed by `-d` for "daemon" (e.g. `appd` for `app` or `gaiad` for `gaia`). This binary is built by running a simple [`main.go`](/v0.53/learn/advanced/node#main-function) function placed in `./cmd/appd/`. This operation usually happens through the [Makefile](#dependencies-and-makefile).
-
-Once the main binary is built, the node can be started by running the [`start` command](/v0.53/learn/advanced/node#start-command). This command function primarily does three things:
-
-1. Create an instance of the state-machine defined in [`app.go`](#core-application-file).
-2. 
Initialize the state-machine with the latest known state, extracted from the `db` stored in the `~/.app/data` folder. At this point, the state-machine is at height `appBlockHeight`.
-3. Create and start a new CometBFT instance. Among other things, the node performs a handshake with its peers. It gets the latest `blockHeight` from them and replays blocks to sync to this height if it is greater than the local `appBlockHeight`. If the node starts from genesis, CometBFT sends an `InitChain` message via the ABCI to the `app`, which triggers the [`InitChainer`](#initchainer).
-
-  When starting a CometBFT instance, the genesis file is the `0` height and the state within the genesis file is committed at block height `1`. When querying the state of the node, querying block height 0 will return an error.
-
-## Core Application File[​](#core-application-file "Direct link to Core Application File")
-
-In general, the core of the state-machine is defined in a file called `app.go`. This file mainly contains the **type definition of the application** and functions to **create and initialize it**.
-
-### Type Definition of the Application[​](#type-definition-of-the-application "Direct link to Type Definition of the Application")
-
-The first thing defined in `app.go` is the `type` of the application. It is generally composed of the following parts:
-
-* **Embedding [runtime.App](/v0.53/build/building-apps/runtime)** The runtime package manages the application's core components and modules through dependency injection. It provides declarative configuration for module management, state storage, and ABCI handling.
-
-  * `Runtime` wraps `BaseApp`, meaning when a transaction is relayed by CometBFT to the application, `app` uses `runtime`'s methods to route it to the appropriate module. `BaseApp` implements all the [ABCI methods](https://docs.cometbft.com/v0.38/spec/abci/) and the [routing logic](/v0.53/learn/advanced/baseapp#service-routers).
-  * It automatically configures the **[module manager](/v0.53/build/building-modules/module-manager#manager)** based on the app wiring configuration. The module manager facilitates operations related to these modules, like registering their [`Msg` service](/v0.53/build/building-modules/msg-services) and [gRPC `Query` service](#grpc-query-services), or setting the order of execution between modules for various functions like [`InitChainer`](#initchainer), [`PreBlocker`](#preblocker) and [`BeginBlocker` and `EndBlocker`](#beginblocker-and-endblocker).
-
-* [**An App Wiring configuration file**](/v0.53/build/building-apps/runtime) The app wiring configuration file contains the list of the application's modules that `runtime` must instantiate. The instantiation of the modules is done using `depinject`. It also contains the order in which all modules' `InitGenesis` and `Pre/Begin/EndBlocker` methods should be executed.
-
-* **A reference to an [`appCodec`](/v0.53/learn/advanced/encoding).** The application's `appCodec` is used to serialize and deserialize data structures in order to store them, as stores can only persist `[]bytes`. The default codec is [Protocol Buffers](/v0.53/learn/advanced/encoding).
-
-* **A reference to a [`legacyAmino`](/v0.53/learn/advanced/encoding) codec.** Some parts of the Cosmos SDK have not been migrated to use the `appCodec` above, and are still hardcoded to use Amino. Other parts explicitly use Amino for backwards compatibility. For these reasons, the application still holds a reference to the legacy Amino codec. Please note that the Amino codec will be removed from the SDK in the upcoming releases.
-
-See an example of application type definition from `simapp`, the Cosmos SDK's own app used for demo and testing purposes:
-
-simapp/app\_di.go
-
-```
-loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app_di.go#L57-L90) - -### Constructor Function[​](#constructor-function "Direct link to Constructor Function") - -Also defined in `app.go` is the constructor function, which constructs a new application of the type defined in the preceding section. The function must fulfill the `AppCreator` signature in order to be used in the [`start` command](/v0.53/learn/advanced/node#start-command) of the application's daemon command. - -server/types/app.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/types/app.go#L67-L69) - -Here are the main actions performed by this function: - -* Instantiate a new [`codec`](/v0.53/learn/advanced/encoding) and initialize the `codec` of each of the application's modules using the [basic manager](/v0.53/build/building-modules/module-manager#basicmanager). - -* Instantiate a new application with a reference to a `baseapp` instance, a codec, and all the appropriate store keys. - -* Instantiate all the [`keeper`](#keeper) objects defined in the application's `type` using the `NewKeeper` function of each of the application's modules. Note that keepers must be instantiated in the correct order, as the `NewKeeper` of one module might require a reference to another module's `keeper`. - -* Instantiate the application's [module manager](/v0.53/build/building-modules/module-manager#manager) with the [`AppModule`](#application-module-interface) object of each of the application's modules. - -* With the module manager, initialize the application's [`Msg` services](/v0.53/learn/advanced/baseapp#msg-services), [gRPC `Query` services](/v0.53/learn/advanced/baseapp#grpc-query-services), [legacy `Msg` routes](/v0.53/learn/advanced/baseapp#routing), and [legacy query routes](/v0.53/learn/advanced/baseapp#query-routing). 
When a transaction is relayed to the application by CometBFT via the ABCI, it is routed to the appropriate module's [`Msg` service](#msg-services) using the routes defined here. Likewise, when a gRPC query request is received by the application, it is routed to the appropriate module's [`gRPC query service`](#grpc-query-services) using the gRPC routes defined here. The Cosmos SDK still supports legacy `Msg`s and legacy CometBFT queries, which are routed using the legacy `Msg` routes and the legacy query routes, respectively. - -* With the module manager, register the [application's modules' invariants](/v0.53/build/building-modules/invariants). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](/v0.53/build/building-modules/invariants#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different than the predicted one, special logic defined in the invariant registry is triggered (usually the chain is halted). This is useful to make sure that no critical bug goes unnoticed, producing long-lasting effects that are hard to fix. - -* With the module manager, set the order of execution between the `InitGenesis`, `PreBlocker`, `BeginBlocker`, and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions. - -* Set the remaining application parameters: - - * [`InitChainer`](#initchainer): used to initialize the application when it is first started. - * [`PreBlocker`](#preblocker): called before BeginBlock. - * [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endblocker): called at the beginning and at the end of every block. - * [`anteHandler`](/v0.53/learn/advanced/baseapp#antehandler): used to handle fees and signature verification. - -* Mount the stores. 
- -* Return the application. - -Note that the constructor function only creates an instance of the app, while the actual state is either carried over from the `~/.app/data` folder if the node is restarted, or generated from the genesis file if the node is started for the first time. - -See an example of application constructor from `simapp`: - -simapp/app.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L190-L708) - -### InitChainer[​](#initchainer "Direct link to InitChainer") - -The `InitChainer` is a function that initializes the state of the application from a genesis file (i.e. token balances of genesis accounts). It is called when the application receives the `InitChain` message from the CometBFT engine, which happens when the node is started at `appBlockHeight == 0` (i.e. on genesis). The application must set the `InitChainer` in its [constructor](#constructor-function) via the [`SetInitChainer`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetInitChainer) method. - -In general, the `InitChainer` is mostly composed of the [`InitGenesis`](/v0.53/build/building-modules/genesis#initgenesis) function of each of the application's modules. This is done by calling the `InitGenesis` function of the module manager, which in turn calls the `InitGenesis` function of each of the modules it contains. Note that the order in which the modules' `InitGenesis` functions must be called has to be set in the module manager using the [module manager's](/v0.53/build/building-modules/module-manager) `SetOrderInitGenesis` method. This is done in the [application's constructor](#application-constructor), and the `SetOrderInitGenesis` has to be called before the `SetInitChainer`. - -See an example of an `InitChainer` from `simapp`: - -simapp/app.go - -``` -loading... 
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L765-L773) - -### PreBlocker[​](#preblocker "Direct link to PreBlocker") - -There are two semantics around the new lifecycle method: - -* It runs before the `BeginBlocker` of all modules -* It can modify consensus parameters in storage, and signal the caller through the return value. - -When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameter in the finalize context: - -``` -app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams()) -``` - -The new ctx must be passed to all the other lifecycle methods. - -### BeginBlocker and EndBlocker[​](#beginblocker-and-endblocker "Direct link to BeginBlocker and EndBlocker") - -The Cosmos SDK offers developers the possibility to implement automatic execution of code as part of their application. This is implemented through two functions called `BeginBlocker` and `EndBlocker`. They are called when the application receives the `FinalizeBlock` messages from the CometBFT consensus engine, which happens respectively at the beginning and at the end of each block. The application must set the `BeginBlocker` and `EndBlocker` in its [constructor](#constructor-function) via the [`SetBeginBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetBeginBlocker) and [`SetEndBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetEndBlocker) methods. - -In general, the `BeginBlocker` and `EndBlocker` functions are mostly composed of the [`BeginBlock` and `EndBlock`](/v0.53/build/building-modules/beginblock-endblock) functions of each of the application's modules. This is done by calling the `BeginBlock` and `EndBlock` functions of the module manager, which in turn calls the `BeginBlock` and `EndBlock` functions of each of the modules it contains. 
Note that the order in which the modules' `BeginBlock` and `EndBlock` functions must be called has to be set in the module manager using the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods, respectively. This is done via the [module manager](/v0.53/build/building-modules/module-manager) in the [application's constructor](#application-constructor), and the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods have to be called before the `SetBeginBlocker` and `SetEndBlocker` functions. - -As a sidenote, it is important to remember that application-specific blockchains are deterministic. Developers must be careful not to introduce non-determinism in `BeginBlocker` or `EndBlocker`, and must also be careful not to make them too computationally expensive, as [gas](/v0.53/learn/beginner/gas-fees) does not constrain the cost of `BeginBlocker` and `EndBlocker` execution. - -See an example of `BeginBlocker` and `EndBlocker` functions from `simapp` - -simapp/app.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L752-L759) - -### Register Codec[​](#register-codec "Direct link to Register Codec") - -The `EncodingConfig` structure is the last important part of the `app.go` file. The goal of this structure is to define the codecs that will be used throughout the app. - -simapp/params/encoding.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/params/encoding.go#L9-L16) - -Here are descriptions of what each of the four fields means: - -* `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). 
`Any` can be thought of as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. Each application module implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations. - - * You can read more about `Any` in [ADR-019](/v0.53/build/architecture/adr-019-protobuf-state-encoding). - * To go into more detail, the Cosmos SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/cosmos/gogoproto). By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](/v0.53/build/architecture/adr-019-protobuf-state-encoding). - -* `Codec`: The default codec used throughout the Cosmos SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to the users (for example, in the [CLI](#cli)). By default, the SDK uses Protobuf as `Codec`. - -* `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](/v0.53/learn/advanced/transactions). 
- -* `Amino`: Some legacy parts of the Cosmos SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAmino` method to register the module's specific types within Amino. This `Amino` codec should not be used by app developers anymore, and will be removed in future releases. - -An application should create its own encoding config. See an example of a `simappparams.EncodingConfig` from `simapp`: - -simapp/params/encoding.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/params/encoding.go#L11-L16) - -## Modules[​](#modules "Direct link to Modules") - -[Modules](/v0.53/build/building-modules/intro) are the heart and soul of Cosmos SDK applications. They can be considered as state-machines nested within the state-machine. When a transaction is relayed from the underlying CometBFT engine via the ABCI to the application, it is routed by [`baseapp`](/v0.53/learn/advanced/baseapp) to the appropriate module in order to be processed. This paradigm enables developers to easily build complex state-machines, as most of the modules they need often already exist. **For developers, most of the work involved in building a Cosmos SDK application revolves around building custom modules required by their application that do not exist yet, and integrating them with modules that do already exist into one coherent application**. In the application directory, the standard practice is to store modules in the `x/` folder (not to be confused with the Cosmos SDK's `x/` folder, which contains already-built modules). 
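To make the routing paradigm above concrete, here is a minimal, self-contained Go sketch. This is not SDK code: the `Msg`, `MsgSend`, and `Router` names are invented for this illustration, which only shows the idea of dispatching a message to its owning module's handler by type URL, loosely mirroring what `baseapp` does for real modules.

```go
package main

import "fmt"

// Msg is a minimal stand-in for sdk.Msg: routing only needs a type URL.
type Msg interface {
	TypeURL() string
}

// MsgSend is a hypothetical bank-style message used for illustration.
type MsgSend struct {
	From, To string
	Amount   uint64
}

func (MsgSend) TypeURL() string { return "/cosmos.bank.v1beta1.MsgSend" }

// Router maps a message's type URL to the module handler that processes it.
type Router struct {
	handlers map[string]func(Msg) error
}

func NewRouter() *Router {
	return &Router{handlers: map[string]func(Msg) error{}}
}

// Register is called by each module to claim its message types.
func (r *Router) Register(typeURL string, h func(Msg) error) {
	r.handlers[typeURL] = h
}

// Route looks up the handler for the message's type URL and invokes it.
func (r *Router) Route(m Msg) error {
	h, ok := r.handlers[m.TypeURL()]
	if !ok {
		return fmt.Errorf("unrecognized message type: %s", m.TypeURL())
	}
	return h(m)
}

func main() {
	r := NewRouter()
	// The "bank module" registers a handler for MsgSend.
	r.Register("/cosmos.bank.v1beta1.MsgSend", func(m Msg) error {
		send := m.(MsgSend)
		fmt.Printf("bank: %s -> %s (%d)\n", send.From, send.To, send.Amount)
		return nil
	})
	if err := r.Route(MsgSend{From: "alice", To: "bob", Amount: 10}); err != nil {
		fmt.Println("error:", err)
	}
}
```

In the real SDK, the equivalent registration happens when a module's `RegisterServices` method wires its generated `Msg` service into baseapp's `msgServiceRouter`.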
- -### Application Module Interface[​](#application-module-interface "Direct link to Application Module Interface") - -Modules must implement [interfaces](/v0.53/build/building-modules/module-manager#application-module-interfaces) defined in the Cosmos SDK, [`AppModuleBasic`](/v0.53/build/building-modules/module-manager#appmodulebasic) and [`AppModule`](/v0.53/build/building-modules/module-manager#appmodule). The former implements basic non-dependent elements of the module, such as the `codec`, while the latter handles the bulk of the module methods (including methods that require references to other modules' `keeper`s). Both the `AppModule` and `AppModuleBasic` types are, by convention, defined in a file called `module.go`. - -`AppModule` exposes a collection of useful methods on the module that facilitates the composition of modules into a coherent application. These methods are called from the [`module manager`](/v0.53/build/building-modules/module-manager#manager), which manages the application's collection of modules. - -### `Msg` Services[​](#msg-services "Direct link to msg-services") - -Each application module defines two [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods. Each Protobuf `Msg` service method is 1:1 related to a Protobuf request type, which must implement `sdk.Msg` interface. Note that `sdk.Msg`s are bundled in [transactions](/v0.53/learn/advanced/transactions), and each transaction contains one or multiple messages. - -When a valid block of transactions is received by the full-node, CometBFT relays each one to the application via [`DeliverTx`](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#specifics-of-responsedelivertx). Then, the application handles the transaction: - -1. 
Upon receiving the transaction, the application first unmarshalls it from `[]byte`. -2. Then, it verifies a few things about the transaction like [fee payment and signatures](/v0.53/learn/beginner/gas-fees#antehandler) before extracting the `Msg`(s) contained in the transaction. -3. `sdk.Msg`s are encoded using Protobuf [`Any`s](#register-codec). By analyzing each `Any`'s `type_url`, baseapp's `msgServiceRouter` routes the `sdk.Msg` to the corresponding module's `Msg` service. -4. If the message is successfully processed, the state is updated. - -For more details, see [transaction lifecycle](/v0.53/learn/beginner/tx-lifecycle). - -Module developers create custom `Msg` services when they build their own module. The general practice is to define the `Msg` Protobuf service in a `tx.proto` file. For example, the `x/bank` module defines a service with two methods to transfer tokens: - -proto/cosmos/bank/v1beta1/tx.proto - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/bank/v1beta1/tx.proto#L13-L36) - -Service methods use `keeper` in order to update the module state. - -Each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterMsgServer` function provided by the generated Protobuf code. - -### gRPC `Query` Services[​](#grpc-query-services "Direct link to grpc-query-services") - -gRPC `Query` services allow users to query the state using [gRPC](https://grpc.io). They are enabled by default, and can be configured under the `grpc.enable` and `grpc.address` fields inside [`app.toml`](/v0.53/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml). - -gRPC `Query` services are defined in the module's Protobuf definition files, specifically inside `query.proto`. 
The `query.proto` definition file exposes a single `Query` [Protobuf service](https://developers.google.com/protocol-buffers/docs/proto#services). Each gRPC query endpoint corresponds to a service method, starting with the `rpc` keyword, inside the `Query` service. - -Protobuf generates a `QueryServer` interface for each module, containing all the service methods. A module's [`keeper`](#keeper) then needs to implement this `QueryServer` interface, by providing the concrete implementation of each service method. This concrete implementation is the handler of the corresponding gRPC query endpoint. - -Finally, each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterQueryServer` function provided by the generated Protobuf code. - -### Keeper[​](#keeper "Direct link to Keeper") - -[`Keepers`](/v0.53/build/building-modules/keeper) are the gatekeepers of their module's store(s). To read or write in a module's store, it is mandatory to go through one of its `keeper`'s methods. This is ensured by the [object-capabilities](/v0.53/learn/advanced/ocap) model of the Cosmos SDK. Only objects that hold the key to a store can access it, and only the module's `keeper` should hold the key(s) to the module's store(s). - -`Keepers` are generally defined in a file called `keeper.go`. It contains the `keeper`'s type definition and methods. - -The `keeper` type definition generally consists of the following: - -* **Key(s)** to the module's store(s) in the multistore. -* Reference to **other module's `keepers`**. Only needed if the `keeper` needs to access other module's store(s) (either to read or write from them). -* A reference to the application's **codec**. The `keeper` needs it to marshal structs before storing them, or to unmarshal them when it retrieves them, because stores only accept `[]bytes` as value. 
- -Along with the type definition, the next important component of the `keeper.go` file is the `keeper`'s constructor function, `NewKeeper`. This function instantiates a new `keeper` of the type defined above with a `codec`, stores `keys` and potentially references other modules' `keeper`s as parameters. The `NewKeeper` function is called from the [application's constructor](#constructor-function). The rest of the file defines the `keeper`'s methods, which are primarily getters and setters. - -### Command-Line, gRPC Services and REST Interfaces[​](#command-line-grpc-services-and-rest-interfaces "Direct link to Command-Line, gRPC Services and REST Interfaces") - -Each module defines command-line commands, gRPC services, and REST routes to be exposed to the end-user via the [application's interfaces](#application-interfaces). This enables end-users to create messages of the types defined in the module, or to query the subset of the state managed by the module. - -#### CLI[​](#cli "Direct link to CLI") - -Generally, the [commands related to a module](/v0.53/build/building-modules/module-interfaces#cli) are defined in a folder called `client/cli` in the module's folder. The CLI divides commands into two categories, transactions and queries, defined in `client/cli/tx.go` and `client/cli/query.go`, respectively. Both commands are built on top of the [Cobra Library](https://github.com/spf13/cobra): - -* Transactions commands let users generate new transactions so that they can be included in a block and eventually update the state. One command should be created for each [message type](#message-types) defined in the module. The command calls the constructor of the message with the parameters provided by the end-user, and wraps it into a transaction. The Cosmos SDK handles signing and the addition of other transaction metadata. -* Queries let users query the subset of the state defined by the module. 
Query commands forward queries to the [application's query router](/v0.53/learn/advanced/baseapp#query-routing), which routes them to the appropriate [querier](#querier) using the `queryRoute` parameter supplied. - -#### gRPC[​](#grpc "Direct link to gRPC") - -[gRPC](https://grpc.io) is a modern open-source high performance RPC framework that has support in multiple languages. It is the recommended way for external clients (such as wallets, browsers and other backend services) to interact with a node. - -Each module can expose gRPC endpoints called [service methods](https://grpc.io/docs/what-is-grpc/core-concepts/#service-definition), which are defined in the [module's Protobuf `query.proto` file](#grpc-query-services). A service method is defined by its name, input arguments, and output response. The module then needs to perform the following actions: - -* Define a `RegisterGRPCGatewayRoutes` method on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module. -* For each service method, define a corresponding handler. The handler implements the core logic necessary to serve the gRPC request, and is located in the `keeper/grpc_query.go` file. - -#### gRPC-gateway REST Endpoints[​](#grpc-gateway-rest-endpoints "Direct link to gRPC-gateway REST Endpoints") - -Some external clients may not wish to use gRPC. In this case, the Cosmos SDK provides a gRPC gateway service, which exposes each gRPC service as a corresponding REST endpoint. Please refer to the [grpc-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) documentation to learn more. - -The REST endpoints are defined in the Protobuf files, along with the gRPC services, using Protobuf annotations. Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods. By default, all REST endpoints defined in the SDK have a URL starting with the `/cosmos/` prefix. 
- -The Cosmos SDK also provides a development endpoint to generate [Swagger](https://swagger.io/) definition files for these REST endpoints. This endpoint can be enabled inside the [`app.toml`](/v0.53/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) config file, under the `api.swagger` key. - -## Application Interface[​](#application-interface "Direct link to Application Interface") - -[Interfaces](#command-line-grpc-services-and-rest-interfaces) let end-users interact with full-node clients. This means querying data from the full-node or creating and sending new transactions to be relayed by the full-node and eventually included in a block. - -The main interface is the [Command-Line Interface](/v0.53/learn/advanced/cli). The CLI of a Cosmos SDK application is built by aggregating [CLI commands](#cli) defined in each of the modules used by the application. The CLI of an application is the same as the daemon (e.g. `appd`), and is defined in a file called `appd/main.go`. The file contains the following: - -* **A `main()` function**, which is executed to build the `appd` interface client. This function prepares each command and adds them to the `rootCmd` before building them. At the root of `appd`, the function adds generic commands like `status`, `keys`, and `config`, query commands, tx commands, and `rest-server`. -* **Query commands**, which are added by calling the `queryCmd` function. This function returns a Cobra command that contains the query commands defined in each of the application's modules (passed as an array of `sdk.ModuleClients` from the `main()` function), as well as some other lower level query commands such as block or validator queries. Query commands are called by using the command `appd query [query]` of the CLI. -* **Transaction commands**, which are added by calling the `txCmd` function. 
Similar to `queryCmd`, the function returns a Cobra command that contains the tx commands defined in each of the application's modules, as well as lower level tx commands like transaction signing or broadcasting. Tx commands are called by using the command `appd tx [tx]` of the CLI. - -See an example of an application's main command-line file from the [Cosmos Hub](https://github.com/cosmos/gaia). - -cmd/gaiad/cmd/root.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/gaia/blob/26ae7c2/cmd/gaiad/cmd/root.go#L39-L80) - -## Dependencies and Makefile[​](#dependencies-and-makefile "Direct link to Dependencies and Makefile") - -This section is optional, as developers are free to choose their dependency manager and project building method. That said, the current most used framework for versioning control is [`go.mod`](https://github.com/golang/go/wiki/Modules). It ensures each of the libraries used throughout the application are imported with the correct version. - -The following is the `go.mod` of the [Cosmos Hub](https://github.com/cosmos/gaia), provided as an example. - -go.mod - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/gaia/blob/26ae7c2/go.mod#L1-L28) - -For building the application, a [Makefile](https://en.wikipedia.org/wiki/Makefile) is generally used. The Makefile primarily ensures that the `go.mod` is run before building the two entrypoints to the application, [`Node Client`](#node-client) and [`Application Interface`](#application-interface). - -Here is an example of the [Cosmos Hub Makefile](https://github.com/cosmos/gaia/blob/main/Makefile). 
diff --git a/docs/sdk/v0.53/learn/beginner/gas-fees.mdx b/docs/sdk/v0.53/learn/beginner/gas-fees.mdx deleted file mode 100644 index 83c05201..00000000 --- a/docs/sdk/v0.53/learn/beginner/gas-fees.mdx +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: "Gas and Fees" -description: "Version: v0.53" ---- - - - This document describes the default strategies to handle gas and fees within a Cosmos SDK application. - - - - * [Anatomy of a Cosmos SDK Application](/v0.53/learn/beginner/app-anatomy) - - -## Introduction to `Gas` and `Fees`[​](#introduction-to-gas-and-fees "Direct link to introduction-to-gas-and-fees") - -In the Cosmos SDK, `gas` is a special unit that is used to track the consumption of resources during execution. `gas` is typically consumed whenever reads and writes are made to the store, but it can also be consumed if expensive computation needs to be done. It serves two main purposes: - -* Make sure blocks are not consuming too many resources and are finalized. This is implemented by default in the Cosmos SDK via the [block gas meter](#block-gas-meter). -* Prevent spam and abuse from end-users. To this end, `gas` consumed during [`message`](/v0.53/build/building-modules/messages-and-queries#messages) execution is typically priced, resulting in a `fee` (`fees = gas * gas-prices`). `fees` generally have to be paid by the sender of the `message`. Note that the Cosmos SDK does not enforce `gas` pricing by default, as there may be other ways to prevent spam (e.g. bandwidth schemes). Still, most applications implement `fee` mechanisms to prevent spam by using the [`AnteHandler`](#antehandler). - -## Gas Meter[​](#gas-meter "Direct link to Gas Meter") - -In the Cosmos SDK, `gas` is a simple alias for `uint64`, and is managed by an object called a *gas meter*. Gas meters implement the `GasMeter` interface: - -store/types/gas.go - -``` -loading...
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/store/types/gas.go#L40-L51) - -where: - -* `GasConsumed()` returns the amount of gas that was consumed by the gas meter instance. -* `GasConsumedToLimit()` returns the amount of gas that was consumed by the gas meter instance, or the limit if it is reached. -* `GasRemaining()` returns the gas left in the GasMeter. -* `Limit()` returns the limit of the gas meter instance. `0` if the gas meter is infinite. -* `ConsumeGas(amount Gas, descriptor string)` consumes the amount of `gas` provided. If the `gas` overflows, it panics with the `descriptor` message. If the gas meter is not infinite, it panics if `gas` consumed goes above the limit. -* `RefundGas()` deducts the given amount from the gas consumed. This functionality enables refunding gas to the transaction or block gas pools so that EVM-compatible chains can fully support the go-ethereum StateDB interface. -* `IsPastLimit()` returns `true` if the amount of gas consumed by the gas meter instance is strictly above the limit, `false` otherwise. -* `IsOutOfGas()` returns `true` if the amount of gas consumed by the gas meter instance is above or equal to the limit, `false` otherwise. - -The gas meter is generally held in [`ctx`](/v0.53/learn/advanced/context), and consuming gas is done with the following pattern: - -``` -ctx.GasMeter().ConsumeGas(amount, "description") -``` - -By default, the Cosmos SDK makes use of two different gas meters, the [main gas meter](#main-gas-meter) and the [block gas meter](#block-gas-meter). - -### Main Gas Meter[​](#main-gas-meter "Direct link to Main Gas Meter") - -`ctx.GasMeter()` is the main gas meter of the application. The main gas meter is initialized in `FinalizeBlock` via `setFinalizeBlockState`, and then tracks gas consumption during execution sequences that lead to state-transitions, i.e. those originally triggered by [`FinalizeBlock`](/v0.53/learn/advanced/baseapp#finalizeblock).
At the beginning of each transaction execution, the main gas meter **must be set to 0** in the [`AnteHandler`](#antehandler), so that it can track gas consumption per-transaction. - -Gas consumption can be done manually, generally by the module developer in the [`BeginBlocker`, `EndBlocker`](/v0.53/build/building-modules/beginblock-endblock) or [`Msg` service](/v0.53/build/building-modules/msg-services), but most of the time it is done automatically whenever there is a read or write to the store. This automatic gas consumption logic is implemented in a special store called [`GasKv`](/v0.53/learn/advanced/store#gaskv-store). - -### Block Gas Meter[​](#block-gas-meter "Direct link to Block Gas Meter") - -`ctx.BlockGasMeter()` is the gas meter used to track gas consumption per block and make sure it does not go above a certain limit. - -During the genesis phase, gas consumption is unlimited to accommodate initialisation transactions. - -``` -app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter())) -``` - -Following the genesis block, the block gas meter is set to a finite value by the SDK. This transition is facilitated by the consensus engine (e.g., CometBFT) calling the `RequestFinalizeBlock` function, which in turn triggers the SDK's `FinalizeBlock` method. Within `FinalizeBlock`, `internalFinalizeBlock` is executed, performing necessary state updates and function executions. The block gas meter, initialised with a finite limit for each block, is then incorporated into the context for transaction execution, ensuring gas consumption does not exceed the block's gas limit and is reset at the end of each block. - -Modules within the Cosmos SDK can consume block gas at any point during their execution by utilising the `ctx`. This gas consumption primarily occurs during state read/write operations and transaction processing.
The block gas meter, accessible via `ctx.BlockGasMeter()`, monitors the total gas usage within a block, enforcing the gas limit to prevent excessive computation. This ensures that gas limits are adhered to on a per-block basis, starting from the first block post-genesis. - -``` -gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context()) -app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter)) -``` - -The above shows the general mechanism for setting the block gas meter with a finite limit based on the block's consensus parameters. - -## AnteHandler[​](#antehandler "Direct link to AnteHandler") - -The `AnteHandler` is run for every transaction during `CheckTx` and `FinalizeBlock`, before a Protobuf `Msg` service method for each `sdk.Msg` in the transaction. - -The anteHandler is not implemented in the core Cosmos SDK but in a module. That said, most applications today use the default implementation defined in the [`auth` module](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth). Here is what the `anteHandler` is intended to do in a normal Cosmos SDK application: - -* Verify that the transactions are of the correct type. Transaction types are defined in the module that implements the `anteHandler`, and they follow the transaction interface: - -types/tx\_msg.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/tx_msg.go#L53-L58) - -This enables developers to play with various types for the transaction of their application. In the default `auth` module, the default transaction type is `Tx`: - -proto/cosmos/tx/v1beta1/tx.proto - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/tx/v1beta1/tx.proto#L15-L28) - -* Verify signatures for each [`message`](/v0.53/build/building-modules/messages-and-queries#messages) contained in the transaction.
Each `message` should be signed by one or multiple sender(s), and these signatures must be verified in the `anteHandler`. -* During `CheckTx`, verify that the gas prices provided with the transaction are greater than the local `min-gas-prices` (as a reminder, gas-prices can be deduced from the following equation: `fees = gas * gas-prices`). `min-gas-prices` is a parameter local to each full-node and used during `CheckTx` to discard transactions that do not provide a minimum amount of fees. This ensures that the mempool cannot be spammed with garbage transactions. -* Verify that the sender of the transaction has enough funds to cover the `fees`. When the end-user generates a transaction, they must indicate 2 of the 3 following parameters (the third one being implicit): `fees`, `gas` and `gas-prices`. This signals how much they are willing to pay for nodes to execute their transaction. The provided `gas` value is stored in a parameter called `GasWanted` for later use. -* Set `newCtx.GasMeter` to 0, with a limit of `GasWanted`. **This step is crucial**, as it not only makes sure the transaction cannot consume infinite gas, but also that `ctx.GasMeter` is reset in-between each transaction (`ctx` is set to `newCtx` after `anteHandler` is run, and the `anteHandler` is run each time a transaction executes). - -As explained above, the `anteHandler` returns a maximum limit of `gas` the transaction can consume during execution called `GasWanted`. The actual amount consumed in the end is denoted `GasUsed`, and we must therefore have `GasUsed <= GasWanted`. Both `GasWanted` and `GasUsed` are relayed to the underlying consensus engine when [`FinalizeBlock`](/v0.53/learn/advanced/baseapp#finalizeblock) returns.
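As a rough illustration of the `GasMeter` semantics listed in this document — a finite limit, `ConsumeGas`, `RefundGas`, `IsPastLimit`, `IsOutOfGas` — here is a minimal toy meter. It is a sketch of the described behavior only, not the SDK's `store/types` implementation; `basicGasMeter` is an illustrative name.

```go
package main

import "fmt"

type Gas = uint64

// basicGasMeter is a toy finite gas meter following the interface
// semantics described above.
type basicGasMeter struct {
	limit    Gas
	consumed Gas
}

func (g *basicGasMeter) GasConsumed() Gas { return g.consumed }
func (g *basicGasMeter) Limit() Gas       { return g.limit }

func (g *basicGasMeter) GasRemaining() Gas {
	if g.consumed >= g.limit {
		return 0
	}
	return g.limit - g.consumed
}

// ConsumeGas adds to the consumed total and panics past the limit,
// which is how out-of-gas aborts execution.
func (g *basicGasMeter) ConsumeGas(amount Gas, descriptor string) {
	g.consumed += amount
	if g.consumed > g.limit {
		panic(fmt.Sprintf("out of gas: %s", descriptor))
	}
}

// RefundGas deducts from the consumed total (used e.g. for EVM-style refunds).
func (g *basicGasMeter) RefundGas(amount Gas, descriptor string) {
	if amount > g.consumed {
		panic(fmt.Sprintf("negative gas consumed: %s", descriptor))
	}
	g.consumed -= amount
}

func (g *basicGasMeter) IsPastLimit() bool { return g.consumed > g.limit }
func (g *basicGasMeter) IsOutOfGas() bool  { return g.consumed >= g.limit }

func main() {
	gm := &basicGasMeter{limit: 10000}
	gm.ConsumeGas(3000, "ReadFlat")
	gm.RefundGas(1000, "state revert")
	fmt.Println(gm.GasConsumed(), gm.GasRemaining(), gm.IsOutOfGas()) // 2000 8000 false
}
```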
diff --git a/docs/sdk/v0.53/learn/beginner/query-lifecycle.mdx b/docs/sdk/v0.53/learn/beginner/query-lifecycle.mdx deleted file mode 100644 index 765e3f5f..00000000 --- a/docs/sdk/v0.53/learn/beginner/query-lifecycle.mdx +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: "Query Lifecycle" -description: "Version: v0.53" ---- - - - This document describes the lifecycle of a query in a Cosmos SDK application, from the user interface to application stores and back. The query is referred to as `MyQuery`. - - - - * [Transaction Lifecycle](/v0.53/learn/beginner/tx-lifecycle) - - -## Query Creation[​](#query-creation "Direct link to Query Creation") - -A [**query**](/v0.53/build/building-modules/messages-and-queries#queries) is a request for information made by end-users of applications through an interface and processed by a full-node. Users can query information about the network, the application itself, and application state directly from the application's stores or modules. Note that queries are different from [transactions](/v0.53/learn/advanced/transactions) (view the lifecycle [here](/v0.53/learn/beginner/tx-lifecycle)), particularly in that they do not require consensus to be processed (as they do not trigger state-transitions); they can be fully handled by one full-node. - -For the purpose of explaining the query lifecycle, let's say the query, `MyQuery`, is requesting a list of delegations made by a certain delegator address in the application called `simapp`. As is to be expected, the [`staking`](/v0.53/build/modules/staking) module handles this query. But first, there are a few ways `MyQuery` can be created by users. - -### CLI[​](#cli "Direct link to CLI") - -The main interface for an application is the command-line interface. Users connect to a full-node and run the CLI directly from their machines - the CLI interacts directly with the full-node. 
To create `MyQuery` from their terminal, users type the following command: - -``` -simd query staking delegations <delegatorAddress> -``` - -This query command was defined by the [`staking`](/v0.53/build/modules/staking) module developer and added to the list of subcommands by the application developer when creating the CLI. - -Note that the general format is as follows: - -``` -simd query [moduleName] [command] --flag -``` - -To provide values such as `--node` (the full-node the CLI connects to), the user can use the [`app.toml`](/v0.53/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) config file to set them or provide them as flags. - -The CLI understands a specific set of commands, defined in a hierarchical structure by the application developer: from the [root command](/v0.53/learn/advanced/cli#root-command) (`simd`), the type of command (`query`), the module that contains the command (`staking`), and the command itself (`delegations`). Thus, the CLI knows exactly which module handles this command and directly passes the call there. - -### gRPC[​](#grpc "Direct link to gRPC") - -Another interface through which users can make queries is [gRPC](https://grpc.io) requests to a [gRPC server](/v0.53/learn/advanced/grpc_rest#grpc-server). The endpoints are defined as [Protocol Buffers](https://developers.google.com/protocol-buffers) service methods inside `.proto` files, written in Protobuf's own language-agnostic interface definition language (IDL). The Protobuf ecosystem developed tools for code-generation from `*.proto` files into various languages. These tools make it easy to build gRPC clients.
- -One such tool is [grpcurl](https://github.com/fullstorydev/grpcurl), and a gRPC request for `MyQuery` using this client looks like: - -```
grpcurl \
  -plaintext \                                          # We want results in plain text
  -import-path ./proto \                                # Import these .proto files
  -proto ./proto/cosmos/staking/v1beta1/query.proto \   # Look into this .proto file for the Query protobuf service
  -d '{"address":"$MY_DELEGATOR"}' \                    # Query arguments
  localhost:9090 \                                      # gRPC server endpoint
  cosmos.staking.v1beta1.Query/Delegations              # Fully-qualified service method name
``` - -### REST[​](#rest "Direct link to REST") - -Another interface through which users can make queries is through HTTP Requests to a [REST server](/v0.53/learn/advanced/grpc_rest#rest-server). The REST server is fully auto-generated from Protobuf services, using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). - -An example HTTP request for `MyQuery` looks like: - -``` -GET http://localhost:1317/cosmos/staking/v1beta1/delegators/{delegatorAddr}/delegations -``` - -## How Queries are Handled by the CLI[​](#how-queries-are-handled-by-the-cli "Direct link to How Queries are Handled by the CLI") - -The preceding examples show how an external user can interact with a node by querying its state. To understand in more detail the exact lifecycle of a query, let's dig into how the CLI prepares the query, and how the node handles it. The interactions from the users' perspective are a bit different, but the underlying functions are almost identical because they are implementations of the same command defined by the module developer. This step of processing happens within the CLI, gRPC, or REST server, and heavily involves a `client.Context`. - -### Context[​](#context "Direct link to Context") - -The first thing that is created in the execution of a CLI command is a `client.Context`. A `client.Context` is an object that stores all the data needed to process a request on the user side.
In particular, a `client.Context` stores the following: - -* **Codec**: The [encoder/decoder](/v0.53/learn/advanced/encoding) used by the application, used to marshal the parameters and query before making the CometBFT RPC request and unmarshal the returned response into a JSON object. The default codec used by the CLI is Protobuf. -* **Account Decoder**: The account decoder from the [`auth`](/v0.53/build/modules/auth) module, which translates `[]byte`s into accounts. -* **RPC Client**: The CometBFT RPC Client, or node, to which requests are relayed. -* **Keyring**: A [Key Manager](/v0.53/learn/beginner/accounts#keyring) used to sign transactions and handle other operations with keys. -* **Output Writer**: A [Writer](https://pkg.go.dev/io/#Writer) used to output the response. -* **Configurations**: The flags configured by the user for this command, including `--height`, specifying the height of the blockchain to query, and `--indent`, which adds an indent to the JSON response. - -The `client.Context` also contains various functions such as `Query()`, which retrieves the RPC Client and makes an ABCI call to relay a query to a full-node. - -client/context.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/context.go#L27-70) - -The `client.Context`'s primary role is to store data used during interactions with the end-user and provide methods to interact with this data - it is used before and after the query is processed by the full-node. Specifically, in handling `MyQuery`, the `client.Context` is utilized to encode the query parameters, retrieve the full-node, and write the output. Prior to being relayed to a full-node, the query needs to be encoded into a `[]byte` form, as full-nodes are application-agnostic and do not understand specific types. The full-node (RPC Client) itself is retrieved using the `client.Context`, which knows which node the user CLI is connected to.
The query is relayed to this full-node to be processed. Finally, the `client.Context` contains a `Writer` to write output when the response is returned. These steps are further described in later sections. - -### Arguments and Route Creation[​](#arguments-and-route-creation "Direct link to Arguments and Route Creation") - -At this point in the lifecycle, the user has created a CLI command with all of the data they wish to include in their query. A `client.Context` exists to assist in the rest of the `MyQuery`'s journey. Now, the next step is to parse the command or request, extract the arguments, and encode everything. These steps all happen on the user side within the interface they are interacting with. - -#### Encoding[​](#encoding "Direct link to Encoding") - -In our case (querying an address's delegations), `MyQuery` contains an [address](/v0.53/learn/beginner/accounts#addresses) `delegatorAddress` as its only argument. However, the request can only contain `[]byte`s, as it is ultimately relayed to a consensus engine (e.g. CometBFT) of a full-node that has no inherent knowledge of the application types. Thus, the `codec` of `client.Context` is used to marshal the address. - -Here is what the code looks like for the CLI command: - -x/staking/client/cli/query.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/staking/client/cli/query.go#L315-L318) - -#### gRPC Query Client Creation[​](#grpc-query-client-creation "Direct link to gRPC Query Client Creation") - -The Cosmos SDK leverages code generated from Protobuf services to make queries. The `staking` module's `MyQuery` service generates a `queryClient`, which the CLI uses to make queries. Here is the relevant code: - -x/staking/client/cli/query.go - -``` -loading... 
-``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/staking/client/cli/query.go#L308-L343) - -Under the hood, the `client.Context` has a `Query()` function used to retrieve the pre-configured node and relay a query to it; the function takes the query fully-qualified service method name as path (in our case: `/cosmos.staking.v1beta1.Query/Delegations`), and arguments as parameters. It first retrieves the RPC Client (called the [**node**](/v0.53/learn/advanced/node)) configured by the user to relay this query to, and creates the `ABCIQueryOptions` (parameters formatted for the ABCI call). The node is then used to make the ABCI call, `ABCIQueryWithOptions()`. - -Here is what the code looks like: - -client/query.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/query.go#L79-L113) - -## RPC[​](#rpc "Direct link to RPC") - -With a call to `ABCIQueryWithOptions()`, `MyQuery` is received by a [full-node](/v0.53/learn/advanced/encoding) which then processes the request. Note that, while the RPC is made to the consensus engine (e.g. CometBFT) of a full-node, queries are not part of consensus and so are not broadcasted to the rest of the network, as they do not require anything the network needs to agree upon. - -Read more about ABCI Clients and CometBFT RPC in the [CometBFT documentation](https://docs.cometbft.com/v0.37/spec/rpc/). - -## Application Query Handling[​](#application-query-handling "Direct link to Application Query Handling") - -When a query is received by the full-node after it has been relayed from the underlying consensus engine, it is at that point being handled within an environment that understands application-specific types and has a copy of the state. [`baseapp`](/v0.53/learn/advanced/baseapp) implements the ABCI [`Query()`](/v0.53/learn/advanced/baseapp#query) function and handles gRPC queries. 
The query route is parsed, and it matches the fully-qualified service method name of an existing service method (most likely in one of the modules), then `baseapp` relays the request to the relevant module. - -Since `MyQuery` has a Protobuf fully-qualified service method name from the `staking` module (recall `/cosmos.staking.v1beta1.Query/Delegations`), `baseapp` first parses the path, then uses its own internal `GRPCQueryRouter` to retrieve the corresponding gRPC handler, and routes the query to the module. The gRPC handler is responsible for recognizing this query, retrieving the appropriate values from the application's stores, and returning a response. Read more about query services [here](/v0.53/build/building-modules/query-services). - -Once a result is received from the querier, `baseapp` begins the process of returning a response to the user. - -## Response[​](#response "Direct link to Response") - -Since `Query()` is an ABCI function, `baseapp` returns the response as an [`abci.ResponseQuery`](https://docs.cometbft.com/master/spec/abci/abci.html#query-2) type. The `client.Context` `Query()` routine receives the response and passes it on to the interface for output. - -### CLI Response[​](#cli-response "Direct link to CLI Response") - -The application [`codec`](/v0.53/learn/advanced/encoding) is used to unmarshal the response to JSON and the `client.Context` prints the output to the command line, applying any configurations such as the output type (text, JSON or YAML). - -client/context.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/context.go#L350-L357) - -And that's a wrap! The result of the query is outputted to the console by the CLI.
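The routing step — matching a fully-qualified service method name such as `/cosmos.staking.v1beta1.Query/Delegations` to a module's handler — can be sketched as a toy router. The `queryRouter` and `handler` types below are illustrative stand-ins, not `baseapp`'s actual `GRPCQueryRouter`.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// handler is a toy query handler: request bytes in, response bytes out.
type handler func(req []byte) ([]byte, error)

// queryRouter maps a fully-qualified service method name to a handler,
// mimicking how baseapp routes a parsed query path to a module.
type queryRouter struct{ routes map[string]handler }

func (r *queryRouter) register(path string, h handler) { r.routes[path] = h }

func (r *queryRouter) route(path string, req []byte) ([]byte, error) {
	h, ok := r.routes[path]
	if !ok {
		return nil, errors.New("unknown query path: " + path)
	}
	return h(req)
}

func main() {
	r := &queryRouter{routes: map[string]handler{}}
	r.register("/cosmos.staking.v1beta1.Query/Delegations", func(req []byte) ([]byte, error) {
		// A real handler would unmarshal req, read the store, and marshal a response.
		addr := strings.TrimSpace(string(req))
		return []byte("delegations for " + addr), nil
	})
	resp, err := r.route("/cosmos.staking.v1beta1.Query/Delegations", []byte("cosmos1abcd"))
	fmt.Println(string(resp), err)
}
```

An unregistered path returns an error, which is roughly what a node reports for an unknown query route.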
diff --git a/docs/sdk/v0.53/learn/beginner/tx-lifecycle.mdx b/docs/sdk/v0.53/learn/beginner/tx-lifecycle.mdx deleted file mode 100644 index 80c714f7..00000000 --- a/docs/sdk/v0.53/learn/beginner/tx-lifecycle.mdx +++ /dev/null @@ -1,167 +0,0 @@ ---- -title: "Transaction Lifecycle" -description: "Version: v0.53" ---- - - - This document describes the lifecycle of a transaction from creation to committed state changes. Transaction definition is described in a [different doc](/v0.53/learn/advanced/transactions). The transaction is referred to as `Tx`. - - - - * [Anatomy of a Cosmos SDK Application](/v0.53/learn/beginner/app-anatomy) - - -## Creation[​](#creation "Direct link to Creation") - -### Transaction Creation[​](#transaction-creation "Direct link to Transaction Creation") - -One of the main application interfaces is the command-line interface. The transaction `Tx` can be created by the user inputting a command in the following format from the [command-line](/v0.53/learn/advanced/cli), providing the type of transaction in `[command]`, arguments in `[args]`, and configurations such as gas prices in `[flags]`: - -``` -[appname] tx [command] [args] [flags] -``` - -This command automatically **creates** the transaction, **signs** it using the account's private key, and **broadcasts** it to the specified peer node. - -There are several required and optional flags for transaction creation. The `--from` flag specifies which [account](/v0.53/learn/beginner/accounts) the transaction is originating from. For example, if the transaction is sending coins, the funds are drawn from the specified `from` address. - -#### Gas and Fees[​](#gas-and-fees "Direct link to Gas and Fees") - -Additionally, there are several [flags](/v0.53/learn/advanced/cli) users can use to indicate how much they are willing to pay in [fees](/v0.53/learn/beginner/gas-fees): - -* `--gas` refers to how much [gas](/v0.53/learn/beginner/gas-fees), which represents computational resources, `Tx` consumes. 
Gas is dependent on the transaction and is not precisely calculated until execution, but can be estimated by providing `auto` as the value for `--gas`. -* `--gas-adjustment` (optional) can be used to scale `gas` up in order to avoid underestimating. For example, users can specify their gas adjustment as 1.5 to use 1.5 times the estimated gas. -* `--gas-prices` specifies how much the user is willing to pay per unit of gas, which can be one or multiple denominations of tokens. For example, `--gas-prices=0.025uatom,0.025upho` means the user is willing to pay 0.025uatom AND 0.025upho per unit of gas. -* `--fees` specifies how much in fees the user is willing to pay in total. -* `--timeout-height` specifies a block timeout height to prevent the tx from being committed past a certain height. - -The ultimate value of the fees paid is equal to the gas multiplied by the gas prices. In other words, `fees = ceil(gas * gasPrices)`. Thus, since fees can be calculated using gas prices and vice versa, the users specify only one of the two. - -Later, validators decide whether to include the transaction in their block by comparing the given or calculated `gas-prices` to their local `min-gas-prices`. `Tx` is rejected if its `gas-prices` are not high enough, so users are incentivized to pay more. - -#### Unordered Transactions[​](#unordered-transactions "Direct link to Unordered Transactions") - -With Cosmos SDK v0.53.0, users may send unordered transactions to chains that have this feature enabled. The following flags allow a user to build an unordered transaction from the CLI. - -* `--unordered` specifies that this transaction should be unordered. (transaction sequence must be unset) -* `--timeout-duration` specifies the amount of time the unordered transaction should be valid in the mempool. The transaction's unordered nonce will be set to the time of transaction creation + timeout duration. - - - Unordered transactions MUST leave sequence values unset.
When a transaction is both unordered and contains a non-zero sequence value, the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions. Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. - - -#### CLI Example[​](#cli-example "Direct link to CLI Example") - -Users of the application `app` can enter the following command into their CLI to generate a transaction to send 1000uatom from a `senderAddress` to a `recipientAddress`. The command specifies how much gas they are willing to pay: an automatic estimate scaled up by 1.5 times, with a gas price of 0.025uatom per unit gas. - -``` -appd tx send <recipientAddress> 1000uatom --from <senderAddress> --gas auto --gas-adjustment 1.5 --gas-prices 0.025uatom -``` - -#### Other Transaction Creation Methods[​](#other-transaction-creation-methods "Direct link to Other Transaction Creation Methods") - -The command-line is an easy way to interact with an application, but `Tx` can also be created using a [gRPC or REST interface](/v0.53/learn/advanced/grpc_rest) or some other entry point defined by the application developer. From the user's perspective, the interaction depends on the web interface or wallet they are using (e.g. creating `Tx` using [Lunie.io](https://lunie.io/#/) and signing it with a Ledger Nano S). - -## Addition to Mempool[​](#addition-to-mempool "Direct link to Addition to Mempool") - -Each full-node (running CometBFT) that receives a `Tx` sends an [ABCI message](https://docs.cometbft.com/v0.37/spec/p2p/messages/), `CheckTx`, to the application layer to check for validity, and receives an `abci.ResponseCheckTx`.
If the `Tx` passes the checks, it is held in the node's [**Mempool**](https://docs.cometbft.com/v0.37/spec/p2p/messages/mempool/), an in-memory pool of transactions unique to each node, pending inclusion in a block - honest nodes discard a `Tx` if it is found to be invalid. Prior to consensus, nodes continuously check incoming transactions and gossip them to their peers. - -### Types of Checks[​](#types-of-checks "Direct link to Types of Checks") - -The full-nodes perform stateless, then stateful checks on `Tx` during `CheckTx`, with the goal of identifying and rejecting an invalid transaction as early as possible to avoid wasted computation. - -***Stateless*** checks do not require nodes to access state - light clients or offline nodes can do them - and are thus less computationally expensive. Stateless checks include making sure addresses are not empty, enforcing nonnegative numbers, and other logic specified in the definitions. - -***Stateful*** checks validate transactions and messages based on a committed state. Examples include checking that the relevant values exist and can be transacted with, the address has sufficient funds, and the sender is authorized or has the correct ownership to transact. At any given moment, full-nodes typically have [multiple versions](/v0.53/learn/advanced/baseapp#state-updates) of the application's internal state for different purposes. For example, nodes execute state changes while in the process of verifying transactions, but still need a copy of the last committed state in order to answer queries - they should not respond using state with uncommitted changes. - -In order to verify a `Tx`, full-nodes call `CheckTx`, which includes both *stateless* and *stateful* checks. Further validation happens later in the [`DeliverTx`](#delivertx) stage. `CheckTx` goes through several steps, beginning with decoding `Tx`.
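The stateless-then-stateful ordering described above can be sketched with a hypothetical bank-send message. None of these names are SDK APIs; the point is only that the cheap, state-free checks run first so invalid transactions are rejected as cheaply as possible.

```go
package main

import (
	"errors"
	"fmt"
)

// msgSend is a toy bank-send message used to illustrate CheckTx-style checks.
type msgSend struct {
	From, To string
	Amount   int64
}

// statelessCheck needs no state access: empty-address and sign checks only.
func statelessCheck(m msgSend) error {
	if m.From == "" || m.To == "" {
		return errors.New("address cannot be empty")
	}
	if m.Amount < 0 {
		return errors.New("amount cannot be negative")
	}
	return nil
}

// statefulCheck consults committed state (here, a balances map) to verify
// the sender can actually fund the transfer.
func statefulCheck(m msgSend, balances map[string]int64) error {
	if balances[m.From] < m.Amount {
		return errors.New("insufficient funds")
	}
	return nil
}

// checkTx runs stateless checks first, stateful checks second, mirroring
// the ordering described above.
func checkTx(m msgSend, balances map[string]int64) error {
	if err := statelessCheck(m); err != nil {
		return err
	}
	return statefulCheck(m, balances)
}

func main() {
	balances := map[string]int64{"cosmos1sender": 500}
	fmt.Println(checkTx(msgSend{From: "cosmos1sender", To: "cosmos1recv", Amount: 100}, balances))
	fmt.Println(checkTx(msgSend{From: "cosmos1sender", To: "cosmos1recv", Amount: 900}, balances))
}
```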
- -### Decoding[​](#decoding "Direct link to Decoding") - -When `Tx` is received by the application from the underlying consensus engine (e.g. CometBFT), it is still in its [encoded](/v0.53/learn/advanced/encoding) `[]byte` form and needs to be unmarshaled in order to be processed. Then, the [`runTx`](/v0.53/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) function is called to run in `runTxModeCheck` mode, meaning the function runs all checks but exits before executing messages and writing state changes. - -### ValidateBasic (deprecated)[​](#validatebasic-deprecated "Direct link to ValidateBasic (deprecated)") - -Messages ([`sdk.Msg`](/v0.53/learn/advanced/transactions#messages)) are extracted from transactions (`Tx`). The `ValidateBasic` method of the `sdk.Msg` interface implemented by the module developer is run for each transaction. To discard obviously invalid messages, the `BaseApp` type calls the `ValidateBasic` method very early in the processing of the message in the [`CheckTx`](/v0.53/learn/advanced/baseapp#checktx) and [`DeliverTx`](/v0.53/learn/advanced/baseapp#delivertx) transactions. `ValidateBasic` can include only **stateless** checks (the checks that do not require access to the state). - - - The `ValidateBasic` method on messages has been deprecated in favor of validating messages directly in their respective [`Msg` services](/v0.53/build/building-modules/msg-services#Validation). - - Read [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) for more details. - - - - `BaseApp` still calls `ValidateBasic` on messages that implement that method for backwards compatibility. - - -#### Guideline[​](#guideline "Direct link to Guideline") - -`ValidateBasic` should not be used anymore. Message validation should be performed in the `Msg` service when [handling a message](/v0.53/build/building-modules/msg-services#Validation) in a module Msg Server.
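Following the guideline above, here is a sketch of validating a message directly in a Msg server handler instead of a `ValidateBasic` method. The `MsgVote` and `msgServer` types are illustrative, not the SDK's `x/gov` implementation; the point is that stateless and stateful validation now live in one place.

```go
package main

import (
	"errors"
	"fmt"
)

// MsgVote is a toy governance message. Per the guideline, it has no
// ValidateBasic method; all checks live in the Msg server handler.
type MsgVote struct {
	Voter      string
	ProposalID uint64
	Option     string
}

type msgServer struct {
	proposals map[uint64]bool // toy committed state: which proposals exist
}

// Vote validates the message inline and then would apply it, replacing the
// old ValidateBasic + handler split with a single validation site.
func (s msgServer) Vote(msg MsgVote) error {
	// Stateless checks, formerly the job of ValidateBasic.
	if msg.Voter == "" {
		return errors.New("voter cannot be empty")
	}
	if msg.Option != "yes" && msg.Option != "no" && msg.Option != "abstain" {
		return errors.New("invalid vote option: " + msg.Option)
	}
	// Stateful check, always done in the handler.
	if !s.proposals[msg.ProposalID] {
		return errors.New("proposal not found")
	}
	return nil // the state transition would happen here
}

func main() {
	s := msgServer{proposals: map[uint64]bool{7: true}}
	fmt.Println(s.Vote(MsgVote{Voter: "cosmos1v", ProposalID: 7, Option: "yes"}))
	fmt.Println(s.Vote(MsgVote{Voter: "cosmos1v", ProposalID: 9, Option: "yes"}))
}
```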
- -### AnteHandler[​](#antehandler "Direct link to AnteHandler") - -`AnteHandler`s, even though optional, are in practice very often used to perform signature verification, gas calculation, fee deduction, and other core operations related to blockchain transactions. - -A copy of the cached context is provided to the `AnteHandler`, which performs limited checks specified for the transaction type. Using a copy allows the `AnteHandler` to do stateful checks for `Tx` without modifying the last committed state, and revert to the original if the execution fails. - -For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth/spec) module `AnteHandler` checks and increments sequence numbers, checks signatures and account numbers, and deducts fees from the first signer of the transaction - all state changes are made using the `checkState`. - - - Ante handlers only run on a transaction. If a transaction embeds multiple messages (like some x/authz, x/gov transactions for instance), the ante handlers only have awareness of the outer message. Inner messages are mostly directly routed to the [message router](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router) and will skip the chain of ante handlers. Keep that in mind when designing your own ante handler. - - -### Gas[​](#gas "Direct link to Gas") - -The [`Context`](/v0.53/learn/advanced/context), which keeps a `GasMeter` that tracks how much gas is used during the execution of `Tx`, is initialized. The user-provided amount of gas for `Tx` is known as `GasWanted`. If `GasConsumed`, the amount of gas consumed during execution, ever exceeds `GasWanted`, the execution stops and the changes made to the cached copy of the state are not committed. Otherwise, `CheckTx` sets `GasUsed` equal to `GasConsumed` and returns it in the result. After calculating the gas and fee values, validator-nodes check that the user-specified `gas-prices` exceed their locally defined `min-gas-prices`.
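The fee arithmetic used throughout this lifecycle — `fees = ceil(gas * gasPrices)` and the `CheckTx`-time comparison against `min-gas-prices` — can be sketched as follows. The helper names are illustrative, and a real node works with arbitrary-precision decimal coins per denomination rather than a single float.

```go
package main

import (
	"fmt"
	"math"
)

// requiredFee computes fees = ceil(gas * gasPrice) for a single denom,
// matching the fee equation above.
func requiredFee(gasWanted uint64, gasPrice float64) uint64 {
	return uint64(math.Ceil(float64(gasWanted) * gasPrice))
}

// passesMinGasPrices mimics the CheckTx-time test: the fee offered must
// cover at least minGasPrice for every unit of gas requested.
func passesMinGasPrices(feeOffered, gasWanted uint64, minGasPrice float64) bool {
	return feeOffered >= requiredFee(gasWanted, minGasPrice)
}

func main() {
	gasWanted := uint64(200000)
	fee := requiredFee(gasWanted, 0.025) // 200000 * 0.025 = 5000
	fmt.Println(fee)
	fmt.Println(passesMinGasPrices(fee, gasWanted, 0.025)) // exactly meets the floor
	fmt.Println(passesMinGasPrices(fee, gasWanted, 0.05))  // node demands double: rejected
}
```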
- -### Discard or Addition to Mempool[​](#discard-or-addition-to-mempool "Direct link to Discard or Addition to Mempool") - -If at any point during `CheckTx` the `Tx` fails, it is discarded and the transaction lifecycle ends there. Otherwise, if it passes `CheckTx` successfully, the default protocol is to relay it to peer nodes and add it to the Mempool so that the `Tx` becomes a candidate to be included in the next block. - -The **mempool** serves the purpose of keeping track of transactions seen by all full-nodes. Full-nodes keep a **mempool cache** of the last `mempool.cache_size` transactions they have seen, as a first line of defense to prevent replay attacks. Ideally, `mempool.cache_size` is large enough to encompass all of the transactions in the full mempool. If the mempool cache is too small to keep track of all the transactions, `CheckTx` is responsible for identifying and rejecting replayed transactions. - -Currently existing preventative measures include fees and a `sequence` (nonce) counter to distinguish replayed transactions from identical but valid ones. If an attacker tries to spam nodes with many copies of a `Tx`, full-nodes keeping a mempool cache reject all identical copies instead of running `CheckTx` on them. Even if the copies have incremented `sequence` numbers, attackers are disincentivized by the need to pay fees. - -Validator nodes keep a mempool to prevent replay attacks, just as full-nodes do, but also use it as a pool of unconfirmed transactions in preparation of block inclusion. Note that even if a `Tx` passes all checks at this stage, it is still possible to be found invalid later on, because `CheckTx` does not fully validate the transaction (that is, it does not actually execute the messages). - -## Inclusion in a Block[​](#inclusion-in-a-block "Direct link to Inclusion in a Block") - -Consensus, the process through which validator nodes come to agreement on which transactions to accept, happens in **rounds**. 
Each round begins with a proposer creating a block of the most recent transactions and ends with **validators**, special full-nodes with voting power responsible for consensus, agreeing to accept the block or go with a `nil` block instead. Validator nodes execute the consensus algorithm, such as [CometBFT](https://docs.cometbft.com/v0.37/spec/consensus/), confirming the transactions using ABCI requests to the application, in order to come to this agreement. - -The first step of consensus is the **block proposal**. One proposer amongst the validators is chosen by the consensus algorithm to create and propose a block - in order for a `Tx` to be included, it must be in this proposer's mempool. - -## State Changes[​](#state-changes "Direct link to State Changes") - -The next step of consensus is to execute the transactions to fully validate them. All full-nodes that receive a block proposal from the correct proposer execute the transactions by calling the ABCI function `FinalizeBlock`. As mentioned throughout the documentation, `BeginBlock`, `ExecuteTx` and `EndBlock` are called within `FinalizeBlock`. Although every full-node operates individually and locally, the outcome is always consistent and unequivocal. This is because the state changes brought about by the messages are predictable, and the transactions are specifically sequenced in the proposed block. - -```
--------------------------
| Receive Block Proposal |
--------------------------
            |
            v
--------------------------
|     FinalizeBlock      |
--------------------------
            |
            v
--------------------------
|       BeginBlock       |
--------------------------
            |
            v
--------------------------
|     ExecuteTx(tx0)     |
|     ExecuteTx(tx1)     |
|     ExecuteTx(tx2)     |
|     ExecuteTx(tx3)     |
|           .            |
|           .            |
|           .            |
--------------------------
            |
            v
--------------------------
|        EndBlock        |
--------------------------
            |
            v
--------------------------
|       Consensus        |
--------------------------
            |
            v
--------------------------
|         Commit         |
--------------------------
``` - -### Transaction Execution[​](#transaction-execution "Direct link to Transaction Execution") - -The `FinalizeBlock` ABCI function defined in [`BaseApp`](/v0.53/learn/advanced/baseapp) does the bulk of the state transitions: it is run for each transaction in the block in sequential order as committed to during consensus. Under the hood, transaction execution is almost identical to `CheckTx` but calls the [`runTx`](/v0.53/learn/advanced/baseapp#runtx) function in deliver mode instead of check mode. Instead of using their `checkState`, full-nodes use `finalizeBlockState`: - -* **Decoding:** Since `FinalizeBlock` is an ABCI call, `Tx` is received in the encoded `[]byte` form. Nodes first unmarshal the transaction, using the [`TxConfig`](/v0.53/learn/beginner/app-anatomy#register-codec) defined in the app, then call `runTx` in `execModeFinalize`, which is very similar to `CheckTx` but also executes and writes state changes. - -* **Checks and `AnteHandler`:** Full-nodes call `validateBasicMsgs` and `AnteHandler` again. This second check happens because they may not have seen the same transactions during the addition to Mempool stage and a malicious proposer may have included invalid ones. One difference here is that the `AnteHandler` does not compare `gas-prices` to the node's `min-gas-prices` since that value is local to each node - differing values across nodes yield nondeterministic results. - -* **`MsgServiceRouter`:** After `CheckTx` exits, `FinalizeBlock` continues to run [`runMsgs`](/v0.53/learn/advanced/baseapp#runtx-antehandler-runmsgs-posthandler) to fully execute each `Msg` within the transaction. Since the transaction may have messages from different modules, `BaseApp` needs to know in which module to find the appropriate handler.
This is achieved using `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's Protobuf [`Msg` service](/v0.53/build/building-modules/msg-services). For `LegacyMsg` routing, the `Route` function is called via the [module manager](/v0.53/build/building-modules/module-manager) to retrieve the route name and find the legacy [`Handler`](/v0.53/build/building-modules/msg-services#handler-type) within the module. - -* **`Msg` service:** The Protobuf `Msg` service is responsible for executing each message in the `Tx`, causing state transitions to persist in `finalizeBlockState`. - -* **PostHandlers:** [`PostHandler`](/v0.53/learn/advanced/baseapp#posthandler)s run after the execution of the message. If they fail, the state changes of `runMsgs`, as well as those of the `PostHandler`s, are reverted. - -* **Gas:** While a `Tx` is being delivered, a `GasMeter` is used to keep track of how much gas is being used; if execution completes, `GasUsed` is set and returned in the `abci.ExecTxResult`. If execution halts because `BlockGasMeter` or `GasMeter` has run out or something else goes wrong, a deferred function at the end appropriately errors or panics. - -If there are any failed state changes resulting from a `Tx` being invalid or `GasMeter` running out, the transaction processing terminates and any state changes are reverted. Invalid transactions in a block proposal cause validator nodes to reject the block and vote for a `nil` block instead. - -### Commit[​](#commit "Direct link to Commit") - -The final step is for nodes to commit the block and state changes. Validator nodes perform the previous step of executing state transitions in order to validate the transactions, then sign the block to confirm it. Full nodes that are not validators do not participate in consensus - i.e. they cannot vote - but listen for votes to understand whether or not they should commit the state changes.
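The voting-power threshold that full nodes wait for at this stage can be illustrated with a short Go sketch. The `vote` type and `hasQuorum` helper are hypothetical illustrations, not CometBFT's actual API; they only show the 2/3+ precommit arithmetic.

```go
package main

import "fmt"

// vote records a validator's precommit together with its voting power.
type vote struct {
	validator string
	power     int64
	precommit bool // true if the validator precommitted for the block
}

// hasQuorum reports whether the precommits for a block strictly exceed
// 2/3 of the total voting power, the point at which nodes commit it.
func hasQuorum(votes []vote) bool {
	var total, committed int64
	for _, v := range votes {
		total += v.power
		if v.precommit {
			committed += v.power
		}
	}
	// committed/total > 2/3  <=>  3*committed > 2*total (integer-safe)
	return 3*committed > 2*total
}

func main() {
	votes := []vote{
		{"val1", 10, true},
		{"val2", 10, true},
		{"val3", 10, false},
	}
	// 20 of 30 voting power is exactly 2/3, which is not enough.
	fmt.Println(hasQuorum(votes))
}
```

Adding one more 10-power precommit (30 of 40) would flip the result to `true`, since the threshold is strictly greater than 2/3.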
- -When they receive enough validator votes (2/3+ *precommits* weighted by voting power), full nodes commit to a new block to be added to the blockchain and finalize the state transitions in the application layer. A new state root is generated to serve as a merkle proof for the state transitions. Applications use the [`Commit`](/v0.53/learn/advanced/baseapp#commit) ABCI method inherited from [Baseapp](/v0.53/learn/advanced/baseapp); it syncs all the state transitions by writing the `deliverState` into the application's internal state. As soon as the state changes are committed, `checkState` starts afresh from the most recently committed state and `deliverState` resets to `nil` in order to be consistent and reflect the changes. - -Note that not all blocks have the same number of transactions and it is possible for consensus to result in a `nil` block or one with none at all. In a public blockchain network, it is also possible for validators to be **byzantine**, or malicious, which may prevent a `Tx` from being committed in the blockchain. Possible malicious behaviors include the proposer deciding to censor a `Tx` by excluding it from the block or a validator voting against the block. - -At this point, the transaction lifecycle of a `Tx` is over: nodes have verified its validity, delivered it by executing its state changes, and committed those changes. The `Tx` itself, in `[]byte` form, is stored in a block and appended to the blockchain. diff --git a/docs/sdk/v0.53/learn/intro/sdk-design.mdx b/docs/sdk/v0.53/learn/intro/sdk-design.mdx deleted file mode 100644 index 3642eb8d..00000000 --- a/docs/sdk/v0.53/learn/intro/sdk-design.mdx +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: "Main Components of the Cosmos SDK" -description: "Version: v0.53" ---- - -The Cosmos SDK is a framework that facilitates the development of secure state-machines on top of CometBFT. 
At its core, the Cosmos SDK is a boilerplate implementation of the [ABCI](/v0.53/learn/intro/sdk-app-architecture#abci) in Golang. It comes with a [`multistore`](/v0.53/learn/advanced/store#multistore) to persist data and a [`router`](/v0.53/learn/advanced/baseapp#routing) to handle transactions. - -Here is a simplified view of how transactions are handled by an application built on top of the Cosmos SDK when transferred from CometBFT via `DeliverTx`: - -1. Decode `transactions` received from the CometBFT consensus engine (remember that CometBFT only deals with `[]bytes`). -2. Extract `messages` from `transactions` and do basic sanity checks. -3. Route each message to the appropriate module so that it can be processed. -4. Commit state changes. - -## `baseapp`[​](#baseapp "Direct link to baseapp") - -`baseapp` is the boilerplate implementation of a Cosmos SDK application. It comes with an implementation of the ABCI to handle the connection with the underlying consensus engine. Typically, a Cosmos SDK application extends `baseapp` by embedding it in [`app.go`](/v0.53/learn/beginner/app-anatomy#core-application-file). - -Here is an example of this from `simapp`, the Cosmos SDK demonstration app: - -simapp/app.go - -``` -loading... -``` - -[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L137-L180) - -The goal of `baseapp` is to provide a secure interface between the store and the extensible state machine while defining as little about the state machine as possible (staying true to the ABCI). - -For more on `baseapp`, please click [here](/v0.53/learn/advanced/baseapp). - -## Multistore[​](#multistore "Direct link to Multistore") - -The Cosmos SDK provides a [`multistore`](/v0.53/learn/advanced/store#multistore) for persisting state. The multistore allows developers to declare any number of [`KVStores`](/v0.53/learn/advanced/store#base-layer-kvstores). 
These `KVStores` only accept the `[]byte` type as value and therefore any custom structure needs to be marshalled using [a codec](/v0.53/learn/advanced/encoding) before being stored. - -The multistore abstraction is used to divide the state in distinct compartments, each managed by its own module. For more on the multistore, click [here](/v0.53/learn/advanced/store#multistore) - -## Modules[​](#modules "Direct link to Modules") - -The power of the Cosmos SDK lies in its modularity. Cosmos SDK applications are built by aggregating a collection of interoperable modules. Each module defines a subset of the state and contains its own message/transaction processor, while the Cosmos SDK is responsible for routing each message to its respective module. - -Here is a simplified view of how a transaction is processed by the application of each full-node when it is received in a valid block: - -Each module can be seen as a little state-machine. Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](/v0.53/learn/advanced/ocap). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities. - -Cosmos SDK modules are defined in the `x/` folder of the Cosmos SDK. 
Some core modules include: - -* `x/auth`: Used to manage accounts and signatures. -* `x/bank`: Used to enable tokens and token transfers. -* `x/staking` + `x/slashing`: Used to build Proof-of-Stake blockchains. - -In addition to the already existing modules in `x/`, which anyone can use in their app, the Cosmos SDK lets you build your own custom modules. You can check an [example of that in the tutorial](https://tutorials.cosmos.network/). diff --git a/docs/sdk/v0.53/release-notes/placeholder.mdx b/docs/sdk/v0.53/release-notes/placeholder.mdx new file mode 100644 index 00000000..166ef9e1 --- /dev/null +++ b/docs/sdk/v0.53/release-notes/placeholder.mdx @@ -0,0 +1,8 @@ +--- +title: Release Notes +description: SDK release notes +--- + +# Release Notes + +For detailed release notes, please visit the [Cosmos SDK releases page](https://github.com/cosmos/cosmos-sdk/releases). diff --git a/docs/sdk/v0.53/security-audit.mdx b/docs/sdk/v0.53/security-audit.mdx new file mode 100644 index 00000000..1780cafb --- /dev/null +++ b/docs/sdk/v0.53/security-audit.mdx @@ -0,0 +1,98 @@ +--- +title: "Security Audit" +description: "External security audit report for Cosmos SDK v0.53" +icon: "shield-check" +keywords: ['security', 'audit', 'otter', 'vulnerability', 'assessment', 'cosmos-sdk', 'v0.53'] +--- + +## Overview + +Cosmos SDK v0.53.0 underwent a comprehensive security audit conducted by Otter Audits LLC, a specialized blockchain security firm. The audit was completed on April 30, 2025, providing an independent assessment of the SDK's security architecture, code quality, and potential vulnerabilities in this major release. 
+ +## Audit Details + +**Auditor**: Otter Audits LLC +**Audit Completion Date**: April 30, 2025 +**SDK Version**: v0.53.0 +**Report Type**: Final + +## Scope + +The security audit covered the Cosmos SDK v0.53.0 release, including: + +- Core SDK architecture and module system +- State management and store implementations +- Transaction processing and mempool +- Account abstraction and authentication mechanisms +- Module interfaces and keeper patterns +- Consensus integration and ABCI implementation +- Gas metering and fee handling +- IBC integration points +- Governance and upgrade mechanisms +- Critical security boundaries and access controls + +## Key Areas of Focus + +The audit specifically examined: + +1. **Module Security**: Analysis of standard modules including auth, bank, staking, distribution, governance, and slashing +2. **State Integrity**: Verification of state transitions, store operations, and data consistency +3. **Transaction Lifecycle**: Review of transaction validation, execution, and state commitment +4. **Access Control**: Examination of permission systems, capability patterns, and module boundaries +5. **Upgrade Safety**: Assessment of migration paths and upgrade handler mechanisms +6. **Gas Economics**: Analysis of gas consumption patterns and potential DoS vectors +7. 
**Cross-Module Communication**: Security review of inter-module calls and dependencies + +## Major Changes in v0.53 + +This audit paid special attention to the significant changes introduced in v0.53: + +- Store v1 implementation and migration +- Enhanced module system with dependency injection +- Improved transaction processing pipeline +- Updated governance mechanisms +- Performance optimizations and architectural improvements + +## Accessing the Report + +The complete audit report is publicly available and can be accessed through the following link: + + + Download the complete security audit report for Cosmos SDK v0.53.0 conducted by Otter Audits + + +## Recommendations + +Following the audit recommendations is crucial for maintaining security: + +- Review all findings and remediation suggestions in the full report +- Implement recommended security practices in custom modules +- Maintain awareness of security considerations when upgrading from previous versions +- Follow the SDK's security guidelines for application development +- Keep applications updated with security patches and minor releases + +## Migration Considerations + +When migrating to v0.53 from previous versions: + +- Review the audit findings related to migration paths +- Test upgrade handlers thoroughly in testnet environments +- Verify state migrations preserve data integrity +- Ensure custom modules follow the updated security patterns +- Monitor for any post-upgrade anomalies + +## Continuous Security + +The Cosmos SDK team maintains an ongoing commitment to security: + +- Regular security assessments for major releases +- Rapid response to vulnerability disclosures +- Transparent communication through security advisories +- Active collaboration with security researchers +- Continuous improvement of security patterns and best practices + +For security-related inquiries or to report potential vulnerabilities, please follow the [Cosmos Security 
Policy](https://github.com/cosmos/cosmos-sdk/security/policy). \ No newline at end of file diff --git a/docs/sdk/v0.53/tutorials/transactions/building-a-transaction.mdx b/docs/sdk/v0.53/tutorials/transactions/building-a-transaction.mdx index 7dbe51c7..8ecf4a2b 100644 --- a/docs/sdk/v0.53/tutorials/transactions/building-a-transaction.mdx +++ b/docs/sdk/v0.53/tutorials/transactions/building-a-transaction.mdx @@ -1,54 +1,193 @@ --- -title: "Building a Transaction" -description: "Version: v0.53" +title: Building a Transaction +description: >- + These are the steps to build, sign and broadcast a transaction using v2 + semantics. --- These are the steps to build, sign and broadcast a transaction using v2 semantics. 1. Correctly set up imports -``` -import ( "context" "fmt" "log" "google.golang.org/grpc" "google.golang.org/grpc/credentials/insecure" apisigning "cosmossdk.io/api/cosmos/tx/signing/v1beta1" "cosmossdk.io/client/v2/broadcast/comet" "cosmossdk.io/client/v2/tx" "cosmossdk.io/core/transaction" "cosmossdk.io/math" banktypes "cosmossdk.io/x/bank/types" codectypes "github.com/cosmos/cosmos-sdk/codec/types" cryptocodec "github.com/cosmos/cosmos-sdk/crypto/codec" "github.com/cosmos/cosmos-sdk/crypto/keyring" authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" "github.com/cosmos/cosmos-sdk/codec" addrcodec "github.com/cosmos/cosmos-sdk/codec/address" sdk "github.com/cosmos/cosmos-sdk/types") +```go expandable +import ( + + "context" + "fmt" + "log" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" + + apisigning "cosmossdk.io/api/cosmos/tx/signing/v1beta1" + "cosmossdk.io/client/v2/broadcast/comet" + "cosmossdk.io/client/v2/tx" + "cosmossdk.io/core/transaction" + "cosmossdk.io/math" + banktypes "cosmossdk.io/x/bank/types" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + cryptocodec "github.com/cosmos/cosmos-sdk/crypto/codec" + "github.com/cosmos/cosmos-sdk/crypto/keyring" + authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" + 
"github.com/cosmos/cosmos-sdk/codec" + addrcodec "github.com/cosmos/cosmos-sdk/codec/address" + sdk "github.com/cosmos/cosmos-sdk/types" +) ``` 2. Create a gRPC connection -``` -clientConn, err := grpc.NewClient("127.0.0.1:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))if err != nil { log.Fatal(err)} +```go +clientConn, err := grpc.NewClient("127.0.0.1:9090", grpc.WithTransportCredentials(insecure.NewCredentials())) + if err != nil { + log.Fatal(err) +} ``` 3. Setup codec and interface registry -``` - // Setup interface registry and register necessary interfaces interfaceRegistry := codectypes.NewInterfaceRegistry() banktypes.RegisterInterfaces(interfaceRegistry) authtypes.RegisterInterfaces(interfaceRegistry) cryptocodec.RegisterInterfaces(interfaceRegistry) // Create a ProtoCodec for encoding/decoding protoCodec := codec.NewProtoCodec(interfaceRegistry) +```go +// Setup interface registry and register necessary interfaces + interfaceRegistry := codectypes.NewInterfaceRegistry() + +banktypes.RegisterInterfaces(interfaceRegistry) + +authtypes.RegisterInterfaces(interfaceRegistry) + +cryptocodec.RegisterInterfaces(interfaceRegistry) + + // Create a ProtoCodec for encoding/decoding + protoCodec := codec.NewProtoCodec(interfaceRegistry) ``` 4. Initialize keyring -``` - ckr, err := keyring.New("autoclikeyring", "test", home, nil, protoCodec) if err != nil { log.Fatal("error creating keyring", err) } kr, err := keyring.NewAutoCLIKeyring(ckr, addrcodec.NewBech32Codec("cosmos")) if err != nil { log.Fatal("error creating auto cli keyring", err) } +```go expandable +ckr, err := keyring.New("autoclikeyring", "test", home, nil, protoCodec) + if err != nil { + log.Fatal("error creating keyring", err) +} + +kr, err := keyring.NewAutoCLIKeyring(ckr, addrcodec.NewBech32Codec("cosmos")) + if err != nil { + log.Fatal("error creating auto cli keyring", err) +} ``` 5.
Setup transaction parameters -``` - // Setup transaction parameters txParams := tx.TxParameters{ ChainID: "simapp-v2-chain", SignMode: apisigning.SignMode_SIGN_MODE_DIRECT, AccountConfig: tx.AccountConfig{ FromAddress: "cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", FromName: "alice", }, } // Configure gas settings gasConfig, err := tx.NewGasConfig(100, 100, "0stake") if err != nil { log.Fatal("error creating gas config: ", err) } txParams.GasConfig = gasConfig // Create auth query client authClient := authtypes.NewQueryClient(clientConn) // Retrieve account information for the sender fromAccount, err := getAccount("cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", authClient, protoCodec) if err != nil { log.Fatal("error getting from account: ", err) } // Update txParams with the correct account number and sequence txParams.AccountConfig.AccountNumber = fromAccount.GetAccountNumber() txParams.AccountConfig.Sequence = fromAccount.GetSequence() // Retrieve account information for the recipient toAccount, err := getAccount("cosmos1e2wanzh89mlwct7cs7eumxf7mrh5m3ykpsh66m", authClient, protoCodec) if err != nil { log.Fatal("error getting to account: ", err) } // Configure transaction settings txConf, _ := tx.NewTxConfig(tx.ConfigOptions{ AddressCodec: addrcodec.NewBech32Codec("cosmos"), Cdc: protoCodec, ValidatorAddressCodec: addrcodec.NewBech32Codec("cosmosval"), EnabledSignModes: []apisigning.SignMode{apisigning.SignMode_SIGN_MODE_DIRECT}, }) +```go expandable +// Setup transaction parameters + txParams := tx.TxParameters{ + ChainID: "simapp-v2-chain", + SignMode: apisigning.SignMode_SIGN_MODE_DIRECT, + AccountConfig: tx.AccountConfig{ + FromAddress: "cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", + FromName: "alice", +}, +} + + // Configure gas settings + gasConfig, err := tx.NewGasConfig(100, 100, "0stake") + if err != nil { + log.Fatal("error creating gas config: ", err) +} + +txParams.GasConfig = gasConfig + + // Create auth query client + authClient :=
authtypes.NewQueryClient(clientConn) + + // Retrieve account information for the sender + fromAccount, err := getAccount("cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", authClient, protoCodec) + if err != nil { + log.Fatal("error getting from account: ", err) +} + + // Update txParams with the correct account number and sequence + txParams.AccountConfig.AccountNumber = fromAccount.GetAccountNumber() + +txParams.AccountConfig.Sequence = fromAccount.GetSequence() + + // Retrieve account information for the recipient + toAccount, err := getAccount("cosmos1e2wanzh89mlwct7cs7eumxf7mrh5m3ykpsh66m", authClient, protoCodec) + if err != nil { + log.Fatal("error getting to account: ", err) +} + + // Configure transaction settings + txConf, _ := tx.NewTxConfig(tx.ConfigOptions{ + AddressCodec: addrcodec.NewBech32Codec("cosmos"), + Cdc: protoCodec, + ValidatorAddressCodec: addrcodec.NewBech32Codec("cosmosval"), + EnabledSignModes: []apisigning.SignMode{ + apisigning.SignMode_SIGN_MODE_DIRECT +}, +}) ``` 6. Build the transaction -``` -// Create a transaction factory f, err := tx.NewFactory(kr, codec.NewProtoCodec(codectypes.NewInterfaceRegistry()), nil, txConf, addrcodec.NewBech32Codec("cosmos"), clientConn, txParams) if err != nil { log.Fatal("error creating factory", err) } // Define the transaction message msgs := []transaction.Msg{ &banktypes.MsgSend{ FromAddress: fromAccount.GetAddress().String(), ToAddress: toAccount.GetAddress().String(), Amount: sdk.Coins{ sdk.NewCoin("stake", math.NewInt(1000000)), }, }, } // Build and sign the transaction tx, err := f.BuildsSignedTx(context.Background(), msgs...)
if err != nil { log.Fatal("error building signed tx", err) } +```go expandable +// Create a transaction factory + f, err := tx.NewFactory(kr, codec.NewProtoCodec(codectypes.NewInterfaceRegistry()), nil, txConf, addrcodec.NewBech32Codec("cosmos"), clientConn, txParams) + if err != nil { + log.Fatal("error creating factory", err) +} + + // Define the transaction message + msgs := []transaction.Msg{ + &banktypes.MsgSend{ + FromAddress: fromAccount.GetAddress().String(), + ToAddress: toAccount.GetAddress().String(), + Amount: sdk.Coins{ + sdk.NewCoin("stake", math.NewInt(1000000)), +}, +}, +} + + // Build and sign the transaction + tx, err := f.BuildsSignedTx(context.Background(), msgs...) + if err != nil { + log.Fatal("error building signed tx", err) +} ``` 7. Broadcast the transaction -``` -// Create a broadcaster for the transaction c, err := comet.NewCometBFTBroadcaster("http://127.0.0.1:26657", comet.BroadcastSync, protoCodec) if err != nil { log.Fatal("error creating comet broadcaster", err) } // Broadcast the transaction res, err := c.Broadcast(context.Background(), tx.Bytes()) if err != nil { log.Fatal("error broadcasting tx", err) } +```go expandable +// Create a broadcaster for the transaction + c, err := comet.NewCometBFTBroadcaster("http://127.0.0.1:26657", comet.BroadcastSync, protoCodec) + if err != nil { + log.Fatal("error creating comet broadcaster", err) +} + + // Broadcast the transaction + res, err := c.Broadcast(context.Background(), tx.Bytes()) + if err != nil { + log.Fatal("error broadcasting tx", err) +} ``` 8.
Helpers -``` -// getAccount retrieves account information using the provided addressfunc getAccount(address string, authClient authtypes.QueryClient, codec codec.Codec) (sdk.AccountI, error) { // Query account info accountQuery, err := authClient.Account(context.Background(), &authtypes.QueryAccountRequest{ Address: string(address), }) if err != nil { return nil, fmt.Errorf("error getting account: %w", err) } // Unpack the account information var account sdk.AccountI err = codec.InterfaceRegistry().UnpackAny(accountQuery.Account, &account) if err != nil { return nil, fmt.Errorf("error unpacking account: %w", err) } return account, nil} +```go expandable +// getAccount retrieves account information using the provided address +func getAccount(address string, authClient authtypes.QueryClient, codec codec.Codec) (sdk.AccountI, error) { + // Query account info + accountQuery, err := authClient.Account(context.Background(), &authtypes.QueryAccountRequest{ + Address: string(address), +}) + if err != nil { + return nil, fmt.Errorf("error getting account: %w", err) +} + + // Unpack the account information + var account sdk.AccountI + err = codec.InterfaceRegistry().UnpackAny(accountQuery.Account, &account) + if err != nil { + return nil, fmt.Errorf("error unpacking account: %w", err) +} + +return account, nil +} ``` diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx index 19c61bdf..dad1cebf 100644 --- a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx +++ b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running.mdx @@ -1,6 +1,5 @@ --- -title: "Demo of Mitigating Front-Running with Vote Extensions" -description: "Version: v0.53" +title: Demo of Mitigating Front-Running with Vote Extensions --- The purpose of this demo is to test
the implementation of the `VoteExtensionHandler` and `PrepareProposalHandler` that we have just added to the codebase. These handlers are designed to mitigate front-running by ensuring that all validators have a consistent view of the mempool when preparing proposals. @@ -9,57 +8,99 @@ In this demo, we are using a 3 validator network. The Beacon validator is specia 1. Bootstrap the validator network: This sets up a network with 3 validators. The script `./scripts/configure.sh` is used to configure the network and the validators. -``` -cd scriptsconfigure.sh +```shell +cd scripts +configure.sh ``` If this doesn't work, please ensure you have run `make build` in the `tutorials/nameservice/base` directory. -2. Have alice attempt to reserve `bob.cosmos`: This is a normal transaction that alice wants to execute. The script \``./scripts/reserve.sh "bob.cosmos"` is used to send this transaction. +{/* nolint:all */} +2\. Have alice attempt to reserve `bob.cosmos`: This is a normal transaction that alice wants to execute. The script `./scripts/reserve.sh "bob.cosmos"` is used to send this transaction. -``` +```shell reserve.sh "bob.cosmos" ``` -3. Query to verify the name has been reserved: This is to check the result of the transaction. The script `./scripts/whois.sh "bob.cosmos"` is used to query the state of the blockchain. +{/* /nolint:all */} +3\. Query to verify the name has been reserved: This is to check the result of the transaction. The script `./scripts/whois.sh "bob.cosmos"` is used to query the state of the blockchain.
-``` +```shell whois.sh "bob.cosmos" ``` It should return: -``` - "name": { "name": "bob.cosmos", "owner": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w", "resolve_address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht", "amount": [ { "denom": "uatom", "amount": "1000" } ] }} +```json expandable +{ + "name": { + "name": "bob.cosmos", + "owner": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w", + "resolve_address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht", + "amount": [ + { + "denom": "uatom", + "amount": "1000" + } + ] + } +} ``` To detect front-running attempts by the beacon, scrutinise the logs during the `ProcessProposal` stage. Open the logs for each validator, including the beacon, `val1`, and `val2`, to observe the following behavior. Open the log file of the validator node. The location of this file can vary depending on your setup, but it's typically located in a directory like `$HOME/cosmos/nodes/#{validator}/logs`. The directory in this case will be under the validator, so `beacon`, `val1`, or `val2`.
Run the following to tail the logs of the validator or beacon: -``` +```shell tail -f $HOME/cosmos/nodes/#{validator}/logs ``` -``` -2:47PM ERR ❌️:: Detected invalid proposal bid :: name:"bob.cosmos" resolveAddress:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" owner:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" amount: module=server2:47PM ERR ❌️:: Unable to validate bids in Process Proposal :: module=server2:47PM ERR prevote step: state machine rejected a proposed block; this should not happen:the proposer may be misbehaving; prevoting nil err=null height=142 module=consensus round=0 +```shell +2:47PM ERR :: Detected invalid proposal bid :: name:"bob.cosmos" resolveAddress:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" owner:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" amount: module=server +2:47PM ERR :: Unable to validate bids in Process Proposal :: module=server +2:47PM ERR prevote step: state machine rejected a proposed block; this should not happen:the proposer may be misbehaving; prevoting nil err=null height=142 module=consensus round=0 ``` -4. List the Beacon's keys: This is to verify the addresses of the validators. The script `./scripts/list-beacon-keys.sh` is used to list the keys. +{/* /nolint:all */} +4\. List the Beacon's keys: This is to verify the addresses of the validators. The script `./scripts/list-beacon-keys.sh` is used to list the keys. 
-``` +```shell list-beacon-keys.sh ``` We should receive something similar to the following: -``` -[ { "name": "alice", "type": "local", "address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht", "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A32cvBUkNJz+h2vld4A5BxvU5Rd+HyqpR3aGtvEhlm4C\"}" }, { "name": "barbara", "type": "local", "address": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w", "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"Ag9PFsNyTQPoJdbyCWia5rZH9CrvSrjMsk7Oz4L3rXQ5\"}" }, { "name": "beacon-key", "type": "local", "address": "cosmos1ez9a6x7lz4gvn27zr368muw8jeyas7sv84lfup", "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"AlzJZMWyN7lass710TnAhyuFKAFIaANJyw5ad5P2kpcH\"}" }, { "name": "cindy", "type": "local", "address": "cosmos1m5j6za9w4qc2c5ljzxmm2v7a63mhjeag34pa3g", "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A6F1/3yot5OpyXoSkBbkyl+3rqBkxzRVSJfvSpm/AvW5\"}" }] +```shell expandable +[ + { + "name": "alice", + "type": "local", + "address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht", + "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A32cvBUkNJz+h2vld4A5BxvU5Rd+HyqpR3aGtvEhlm4C\"}" + }, + { + "name": "barbara", + "type": "local", + "address": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w", + "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"Ag9PFsNyTQPoJdbyCWia5rZH9CrvSrjMsk7Oz4L3rXQ5\"}" + }, + { + "name": "beacon-key", + "type": "local", + "address": "cosmos1ez9a6x7lz4gvn27zr368muw8jeyas7sv84lfup", + "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"AlzJZMWyN7lass710TnAhyuFKAFIaANJyw5ad5P2kpcH\"}" + }, + { + "name": "cindy", + "type": "local", + "address": "cosmos1m5j6za9w4qc2c5ljzxmm2v7a63mhjeag34pa3g", + "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A6F1/3yot5OpyXoSkBbkyl+3rqBkxzRVSJfvSpm/AvW5\"}" + } +] ``` This allows us to match up the addresses and see that the bid was not front 
run by the beacon, as the resolve address is Alice's address and not the beacon's address. By running this demo, we can verify that the `VoteExtensionHandler` and `PrepareProposalHandler` are working as expected and that they are able to prevent front-running. -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion In this tutorial, we've tackled front-running and MEV, focusing on nameservice auctions' vulnerability to these issues. We've explored vote extensions, a key feature of ABCI 2.0, and integrated them into a Cosmos SDK application. diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx index 511b1469..89959f81 100644 --- a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx +++ b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/getting-started.mdx @@ -1,18 +1,20 @@ --- -title: "Getting Started" -description: "Version: v0.53" +title: Getting Started +description: >- + - Getting Started - Understanding Front-Running - Mitigating Front-running + with Vote Extensions - Demo of Mitigating Front-Running --- -## Table of Contents[​](#table-of-contents "Direct link to Table of Contents") +## Table of Contents * [Getting Started](#overview-of-the-project) -* [Understanding Front-Running](/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) -* [Mitigating Front-running with Vote Extensions](/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions) -* [Demo of Mitigating Front-Running](/v0.53/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running) +* [Understanding Front-Running](/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) +* [Mitigating Front-running with Vote
Extensions](/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions) +* [Demo of Mitigating Front-Running](/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/demo-of-mitigating-front-running) -## Getting Started[​](#getting-started-1 "Direct link to Getting Started") +## Getting Started -### Overview of the Project[​](#overview-of-the-project "Direct link to Overview of the Project") +### Overview of the Project This tutorial outlines the development of a module designed to mitigate front-running in nameservice auctions. The following functions are central to this module: @@ -20,22 +22,22 @@ This tutorial outlines the development of a module designed to mitigate front-ru * `PrepareProposal`: Processes the vote extensions from the previous block, creating a special transaction that encapsulates bids to be included in the current proposal. * `ProcessProposal`: Validates that the first transaction in the proposal is the special transaction containing the vote extensions and ensures the integrity of the bids. -In this advanced tutorial, we will be working with an example application that facilitates the auctioning of nameservices. To see what frontrunning and nameservices are [here](/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) This application provides a practical use case to explore the prevention of auction front-running, also known as "bid sniping", where a validator takes advantage of seeing a bid in the mempool to place their own higher bid before the original bid is processed. +In this advanced tutorial, we will be working with an example application that facilitates the auctioning of nameservices.
To learn what front-running and nameservices are, see [here](/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning). This application provides a practical use case to explore the prevention of auction front-running, also known as "bid sniping", where a validator takes advantage of seeing a bid in the mempool to place their own higher bid before the original bid is processed. The tutorial will guide you through using the Cosmos SDK to mitigate front-running using vote extensions. The module will be built on top of the base blockchain provided in the `tutorials/base` directory and will use the `auction` module as a foundation. By the end of this tutorial, you will have a better understanding of how to prevent front-running in blockchain auctions, specifically in the context of nameservice auctioning. -## What are Vote extensions?[​](#what-are-vote-extensions "Direct link to What are Vote extensions?") +## What are Vote extensions? Vote extensions are arbitrary information that can be inserted into a block. This feature is part of ABCI 2.0, which is available for use in the SDK 0.50 release and part of the 0.38 CometBFT release. More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions).
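Since a vote extension is just application-defined bytes attached to a validator's precommit, the idea can be sketched without any SDK code. The snippet below is a minimal, stdlib-only illustration: the `AppVoteExtension` shape mirrors the struct defined later in this tutorial, and JSON is used for encoding as the tutorial's handlers do; it is not the SDK or CometBFT API itself.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AppVoteExtension mirrors the payload shape used later in this tutorial:
// the height the extension was produced at, plus serialised bid messages.
type AppVoteExtension struct {
	Height int64    `json:"height"`
	Bids   [][]byte `json:"bids"`
}

// roundTrip encodes an extension to opaque bytes (as it travels inside a
// precommit vote) and decodes it again (as the next proposer would).
func roundTrip(ext AppVoteExtension) AppVoteExtension {
	bz, err := json.Marshal(ext)
	if err != nil {
		panic(err)
	}
	var decoded AppVoteExtension
	if err := json.Unmarshal(bz, &decoded); err != nil {
		panic(err)
	}
	return decoded
}

func main() {
	ext := AppVoteExtension{Height: 3, Bids: [][]byte{[]byte("bid-1"), []byte("bid-2")}}
	decoded := roundTrip(ext)
	fmt.Println(decoded.Height, len(decoded.Bids)) // 3 2
}
```

The consensus engine treats these bytes as opaque; only the application gives them meaning, which is what the handlers in the following sections do.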
-## Requirements and Setup[​](#requirements-and-setup "Direct link to Requirements and Setup") +## Requirements and Setup Before diving into the advanced tutorial on auction front-running simulation, ensure you meet the following requirements: * [Golang >1.21.5](https://golang.org/doc/install) installed -* Familiarity with the concepts of front-running and MEV, as detailed in [Understanding Front-Running](/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) +* Familiarity with the concepts of front-running and MEV, as detailed in [Understanding Front-Running](/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning) * Understanding of Vote Extensions as described [here](https://docs.cosmos.network/main/build/abci/vote-extensions) You will also need a foundational blockchain to build upon coupled with your own module. The `tutorials/base` directory has the necessary blockchain code to start your custom project with the Cosmos SDK. For the module, you can use the `auction` module provided in the `tutorials/auction/x/auction` directory as a reference but please be aware that all of the code needed to implement vote extensions is already implemented in this module. 
diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx index f91c4c01..406d510c 100644 --- a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx +++ b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extensions.mdx @@ -1,61 +1,100 @@ --- -title: "Mitigating Front-running with Vote Extensions" -description: "Version: v0.53" +title: Mitigating Front-running with Vote Extensions +description: >- + Prerequisites Implementing Structs for Vote Extensions Implementing Handlers + and Configuring Handlers --- -## Table of Contents[​](#table-of-contents "Direct link to Table of Contents") +## Table of Contents * [Prerequisites](#prerequisites) * [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions) * [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers) -## Prerequisites[​](#prerequisites "Direct link to Prerequisites") +## Prerequisites Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance. In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions. 
-### Implementing Structs for Vote Extensions[​](#implementing-structs-for-vote-extensions "Direct link to Implementing Structs for Vote Extensions") +### Implementing Structs for Vote Extensions First, copy the following structs into `abci/types.go`. Each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions: -``` -package abciimport ( //import the necessary files)type PrepareProposalHandler struct { logger log.Logger txConfig client.TxConfig cdc codec.Codec mempool *mempool.ThresholdMempool txProvider provider.TxProvider keyname string runProvider bool} +```go expandable +package abci + +import ( + + // import the necessary files +) + +type PrepareProposalHandler struct { + logger log.Logger + txConfig client.TxConfig + cdc codec.Codec + mempool *mempool.ThresholdMempool + txProvider provider.TxProvider + keyname string + runProvider bool +} ``` The `PrepareProposalHandler` struct is used to handle the preparation of a proposal in the consensus process. It contains several fields: `logger` for logging information and errors, `txConfig` for transaction configuration, `cdc` (Codec) for encoding and decoding transactions, `mempool` for referencing the set of unconfirmed transactions, `txProvider` for building the proposal with transactions, `keyname` for the name of the key used for signing transactions, and `runProvider`, a boolean flag indicating whether the provider should be run to build the proposal. -``` -type ProcessProposalHandler struct { TxConfig client.TxConfig Codec codec.Codec Logger log.Logger} +```go +type ProcessProposalHandler struct { + TxConfig client.TxConfig + Codec codec.Codec + Logger log.Logger +} ``` After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions.
The `ProcessProposalHandler` allows you to access the transaction configuration and codec, which are necessary for processing the vote extensions. -``` -type VoteExtHandler struct { logger log.Logger currentBlock int64 mempool *mempool.ThresholdMempool cdc codec.Codec} +```go +type VoteExtHandler struct { + logger log.Logger + currentBlock int64 + mempool *mempool.ThresholdMempool + cdc codec.Codec +} ``` This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding. Vote extensions are a key part of the process to mitigate front-running, as they allow for additional information to be included with each vote. -``` -type InjectedVoteExt struct { VoteExtSigner []byte Bids [][]byte}type InjectedVotes struct { Votes []InjectedVoteExt} +```go +type InjectedVoteExt struct { + VoteExtSigner []byte + Bids [][]byte +} + +type InjectedVotes struct { + Votes []InjectedVoteExt +} ``` These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with the vote extension. Each byte array in Bids is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format. -``` -type AppVoteExtension struct { Height int64 Bids [][]byte} +```go +type AppVoteExtension struct { + Height int64 + Bids [][]byte +} ``` This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. 
Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data to a vote that is specific to the application. -``` -type SpecialTransaction struct { Height int Bids [][]byte} +```go +type SpecialTransaction struct { + Height int + Bids [][]byte +} ``` This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running. -### Implementing Handlers and Configuring Handlers[​](#implementing-handlers-and-configuring-handlers "Direct link to Implementing Handlers and Configuring Handlers") +### Implementing Handlers and Configuring Handlers To establish the `VoteExtensionHandler`, follow these steps: @@ -63,54 +102,277 @@ To establish the `VoteExtensionHandler`, follow these steps: 2. Implement the `NewVoteExtensionHandler` function. This function is a constructor for the `VoteExtHandler` struct. It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`. -``` -func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler { return &VoteExtHandler{ logger: lg, mempool: mp, cdc: cdc, } } +```go
+func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler {
+  return &VoteExtHandler{
+    logger:  lg,
+    mempool: mp,
+    cdc:     cdc,
+  }
+}
``` 3. Implement the `ExtendVoteHandler()` method. This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in the `abci.RequestPrepareProposal` during the ensuing block.
-``` -func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler { return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height)) voteExtBids := [][]byte{} // Get mempool txs itr := h.mempool.SelectPending(context.Background(), nil) for itr != nil { tmptx := itr.Tx() sdkMsgs := tmptx.GetMsgs() // Iterate through msgs, check for any bids for _, msg := range sdkMsgs { switch msg := msg.(type) { case *nstypes.MsgBid: // Marshal sdk bids to []byte bz, err := h.cdc.Marshal(msg) if err != nil { h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err)) break } voteExtBids = append(voteExtBids, bz) default: } } // Move tx to ready pool err := h.mempool.Update(context.Background(), tmptx) // Remove tx from app side mempool if err != nil { h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err)) } itr = itr.Next() } // Create vote extension voteExt := AppVoteExtension{ Height: req.Height, Bids: voteExtBids, } // Encode Vote Extension bz, err := json.Marshal(voteExt) if err != nil { return nil, fmt.Errorf("Error marshalling VE: %w", err) } return &abci.ResponseExtendVote{VoteExtension: bz}, nil}
+```go expandable
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+  return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+    h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))
+    voteExtBids := [][]byte{}
+
+    // Get mempool txs
+    itr := h.mempool.SelectPending(context.Background(), nil)
+    for itr != nil {
+      tmptx := itr.Tx()
+      sdkMsgs := tmptx.GetMsgs()
+
+      // Iterate through msgs, check for any bids
+      for _, msg := range sdkMsgs {
+        switch msg := msg.(type) {
+        case *nstypes.MsgBid:
+          // Marshal sdk bids to []byte
+          bz, err := h.cdc.Marshal(msg)
+          if err != nil {
+            h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
+            break
+          }
+          voteExtBids = append(voteExtBids, bz)
+        default:
+        }
+      }
+
+      // Move tx to ready pool; this removes the tx from the app-side mempool
+      err := h.mempool.Update(context.Background(), tmptx)
+      if err != nil {
+        h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
+      }
+      itr = itr.Next()
+    }
+
+    // Create vote extension
+    voteExt := AppVoteExtension{
+      Height: req.Height,
+      Bids:   voteExtBids,
+    }
+
+    // Encode vote extension
+    bz, err := json.Marshal(voteExt)
+    if err != nil {
+      return nil, fmt.Errorf("Error marshalling VE: %w", err)
+    }
+    return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+  }
+}
``` 4. Configure the handler in `app/app.go` as shown below: -``` -bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler()) +```go
+bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
+bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
``` To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.
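The core of `ExtendVoteHandler` is a filter-and-serialise loop over pending messages. The sketch below exercises that loop in isolation using only the standard library; the toy `MsgBid` type, the `pending` slice, and JSON encoding are stand-ins for the real `nstypes.MsgBid`, the SDK `ThresholdMempool`, and the SDK codec.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// MsgBid is a toy stand-in for the tutorial's nstypes.MsgBid.
type MsgBid struct {
	Name   string `json:"name"`
	Amount int64  `json:"amount"`
}

// AppVoteExtension matches the shape used in this tutorial.
type AppVoteExtension struct {
	Height int64
	Bids   [][]byte
}

// collectBids mirrors the ExtendVoteHandler loop: walk pending messages,
// keep only bid messages, and serialise each one into the vote extension.
func collectBids(height int64, pending []any) (AppVoteExtension, error) {
	ext := AppVoteExtension{Height: height}
	for _, msg := range pending {
		bid, ok := msg.(*MsgBid)
		if !ok {
			continue // non-bid messages are skipped, as in the type switch above
		}
		bz, err := json.Marshal(bid)
		if err != nil {
			return ext, err
		}
		ext.Bids = append(ext.Bids, bz)
	}
	return ext, nil
}

func main() {
	pending := []any{&MsgBid{Name: "bob.cosmos", Amount: 1000}, "not-a-bid"}
	ext, err := collectBids(3, pending)
	if err != nil {
		panic(err)
	}
	fmt.Println(ext.Height, len(ext.Bids)) // 3 1
}
```

The real handler additionally moves each visited transaction from the pending pool to the ready pool, which this isolated sketch omits.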
To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`: -``` -if req.Height > 2 { voteExt := req.GetLocalLastCommit() h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt)) } +```go
+if req.Height > 2 {
+  voteExt := req.GetLocalLastCommit()
+  h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt))
+}
``` This is how the whole function should look: -``` -func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler { return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { h.logger.Info(fmt.Sprintf("🛠️ :: Prepare Proposal")) var proposalTxs [][]byte var txs []sdk.Tx // Get Vote Extensions if req.Height > 2 { voteExt := req.GetLocalLastCommit() h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt)) } itr := h.mempool.Select(context.Background(), nil) for itr != nil { tmptx := itr.Tx() txs = append(txs, tmptx) itr = itr.Next() } h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions available from mempool: %v", len(txs))) if h.runProvider { tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs) if err != nil { h.logger.Error(fmt.Sprintf("❌️ :: Error Building Custom Proposal: %v", err)) } txs = tmpMsgs } for _, sdkTxs := range txs { txBytes, err := h.txConfig.TxEncoder()(sdkTxs) if err != nil { h.logger.Info(fmt.Sprintf("❌~Error encoding transaction: %v", err.Error())) } proposalTxs = append(proposalTxs, txBytes) } h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions in proposal: %v", len(proposalTxs))) return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil }}
+```go expandable
+func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler {
+  return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+    h.logger.Info(fmt.Sprintf(" :: Prepare Proposal"))
+
+    var proposalTxs [][]byte
+    var txs []sdk.Tx
+
+    // Get vote extensions
+    if req.Height > 2 {
+      voteExt := req.GetLocalLastCommit()
+      h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt))
+    }
+
+    itr := h.mempool.Select(context.Background(), nil)
+    for itr != nil {
+      tmptx := itr.Tx()
+      txs = append(txs, tmptx)
+      itr = itr.Next()
+    }
+    h.logger.Info(fmt.Sprintf(" :: Number of Transactions available from mempool: %v", len(txs)))
+
+    if h.runProvider {
+      tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs)
+      if err != nil {
+        h.logger.Error(fmt.Sprintf(" :: Error Building Custom Proposal: %v", err))
+      }
+      txs = tmpMsgs
+    }
+
+    for _, sdkTxs := range txs {
+      txBytes, err := h.txConfig.TxEncoder()(sdkTxs)
+      if err != nil {
+        h.logger.Info(fmt.Sprintf("Error encoding transaction: %v", err.Error()))
+      }
+      proposalTxs = append(proposalTxs, txBytes)
+    }
+    h.logger.Info(fmt.Sprintf(" :: Number of Transactions in proposal: %v", len(proposalTxs)))
+
+    return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil
+  }
+}
``` -As mentioned above, we check if vote extensions have been propagated, you can do this by checking the logs for any relevant messages such as `🛠️ :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again. +As mentioned above, we check whether vote extensions have been propagated. You can do this by checking the logs for any relevant messages, such as ` :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again. 5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.
-``` -func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler { return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) { h.Logger.Info(fmt.Sprintf("⚙️ :: Process Proposal")) // The first transaction will always be the Special Transaction numTxs := len(req.Txs) h.Logger.Info(fmt.Sprintf("⚙️:: Number of transactions :: %v", numTxs)) if numTxs >= 1 { var st SpecialTransaction err = json.Unmarshal(req.Txs[0], &st) if err != nil { h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling special Tx in Process Proposal :: %v", err)) } if len(st.Bids) > 0 { h.Logger.Info(fmt.Sprintf("⚙️:: There are bids in the Special Transaction")) var bids []nstypes.MsgBid for i, b := range st.Bids { var bid nstypes.MsgBid h.Codec.Unmarshal(b, &bid) h.Logger.Info(fmt.Sprintf("⚙️:: Special Transaction Bid No %v :: %v", i, bid)) bids = append(bids, bid) } // Validate Bids in Tx txs := req.Txs[1:] ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger) if err != nil { h.Logger.Error(fmt.Sprintf("❌️:: Error validating bids in Process Proposal :: %v", err)) return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } if !ok { h.Logger.Error(fmt.Sprintf("❌️:: Unable to validate bids in Process Proposal :: %v", err)) return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } h.Logger.Info("⚙️:: Successfully validated bids in Process Proposal") } } return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil }}
+```go expandable
+func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+  return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) {
+    h.Logger.Info(fmt.Sprintf(" :: Process Proposal"))
+
+    // The first transaction will always be the Special Transaction
+    numTxs := len(req.Txs)
+    h.Logger.Info(fmt.Sprintf(":: Number of transactions :: %v", numTxs))
+
+    if numTxs >= 1 {
+      var st SpecialTransaction
+      err = json.Unmarshal(req.Txs[0], &st)
+      if err != nil {
+        h.Logger.Error(fmt.Sprintf(":: Error unmarshalling special Tx in Process Proposal :: %v", err))
+      }
+      if len(st.Bids) > 0 {
+        h.Logger.Info(fmt.Sprintf(":: There are bids in the Special Transaction"))
+        var bids []nstypes.MsgBid
+        for i, b := range st.Bids {
+          var bid nstypes.MsgBid
+          h.Codec.Unmarshal(b, &bid)
+          h.Logger.Info(fmt.Sprintf(":: Special Transaction Bid No %v :: %v", i, bid))
+          bids = append(bids, bid)
+        }
+
+        // Validate bids against the remaining transactions in the proposal
+        txs := req.Txs[1:]
+        ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger)
+        if err != nil {
+          h.Logger.Error(fmt.Sprintf(":: Error validating bids in Process Proposal :: %v", err))
+          return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+        }
+        if !ok {
+          h.Logger.Error(fmt.Sprintf(":: Unable to validate bids in Process Proposal :: %v", err))
+          return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+        }
+        h.Logger.Info(":: Successfully validated bids in Process Proposal")
+      }
+    }
+    return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+  }
+}
``` 6. Implement the `ProcessVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids.
-``` -func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) { log.Info(fmt.Sprintf("🛠️ :: Process Vote Extensions")) // Create empty response st := SpecialTransaction{ 0, [][]byte{}, } // Get Vote Ext for H-1 from Req voteExt := req.GetLocalLastCommit() votes := voteExt.Votes // Iterate through votes var ve AppVoteExtension for _, vote := range votes { // Unmarshal to AppExt err := json.Unmarshal(vote.VoteExtension, &ve) if err != nil { log.Error(fmt.Sprintf("❌ :: Error unmarshalling Vote Extension")) } st.Height = int(ve.Height) // If Bids in VE, append to Special Transaction if len(ve.Bids) > 0 { log.Info("🛠️ :: Bids in VE") for _, b := range ve.Bids { st.Bids = append(st.Bids, b) } } } return st, nil}
+```go expandable
+func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) {
+  log.Info(fmt.Sprintf(" :: Process Vote Extensions"))
+
+  // Create empty response
+  st := SpecialTransaction{
+    0,
+    [][]byte{},
+  }
+
+  // Get vote extensions for H-1 from the request
+  voteExt := req.GetLocalLastCommit()
+  votes := voteExt.Votes
+
+  // Iterate through votes
+  var ve AppVoteExtension
+  for _, vote := range votes {
+    // Unmarshal to AppVoteExtension
+    err := json.Unmarshal(vote.VoteExtension, &ve)
+    if err != nil {
+      log.Error(fmt.Sprintf(" :: Error unmarshalling Vote Extension"))
+    }
+    st.Height = int(ve.Height)
+
+    // If there are bids in the VE, append them to the Special Transaction
+    if len(ve.Bids) > 0 {
+      log.Info(" :: Bids in VE")
+      for _, b := range ve.Bids {
+        st.Bids = append(st.Bids, b)
+      }
+    }
+  }
+  return st, nil
+}
``` 7.
Configure the `ProcessProposalHandler()` in app/app.go: -``` -processPropHandler := abci2.ProcessProposalHandler{app.txConfig, appCodec, logger}bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler()) +```go +processPropHandler := abci2.ProcessProposalHandler{ + app.txConfig, appCodec, logger +} + +bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler()) ``` This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called. diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions.mdx deleted file mode 100644 index f91c4c01..00000000 --- a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/mitigating-front-running-with-vote-extesions.mdx +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: "Mitigating Front-running with Vote Extensions" -description: "Version: v0.53" ---- - -## Table of Contents[​](#table-of-contents "Direct link to Table of Contents") - -* [Prerequisites](#prerequisites) -* [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions) -* [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers) - -## Prerequisites[​](#prerequisites "Direct link to Prerequisites") - -Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance. - -In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions. 
- -### Implementing Structs for Vote Extensions[​](#implementing-structs-for-vote-extensions "Direct link to Implementing Structs for Vote Extensions") - -First, copy the following structs into the `abci/types.go` and each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions: - -``` -package abciimport ( //import the necessary files)type PrepareProposalHandler struct { logger log.Logger txConfig client.TxConfig cdc codec.Codec mempool *mempool.ThresholdMempool txProvider provider.TxProvider keyname string runProvider bool} -``` - -The `PrepareProposalHandler` struct is used to handle the preparation of a proposal in the consensus process. It contains several fields: logger for logging information and errors, txConfig for transaction configuration, cdc (Codec) for encoding and decoding transactions, mempool for referencing the set of unconfirmed transactions, txProvider for building the proposal with transactions, keyname for the name of the key used for signing transactions, and runProvider, a boolean flag indicating whether the provider should be run to build the proposal. - -``` -type ProcessProposalHandler struct { TxConfig client.TxConfig Codec codec.Codec Logger log.Logger} -``` - -After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions. The `ProcessProposalHandler` allows you to access the transaction configuration and codec, which are necessary for processing the vote extensions. - -``` -type VoteExtHandler struct { logger log.Logger currentBlock int64 mempool *mempool.ThresholdMempool cdc codec.Codec} -``` - -This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding. 
Vote extensions are a key part of the process to mitigate front-running, as they allow for additional information to be included with each vote. - -``` -type InjectedVoteExt struct { VoteExtSigner []byte Bids [][]byte}type InjectedVotes struct { Votes []InjectedVoteExt} -``` - -These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with the vote extension. Each byte array in Bids is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format. - -``` -type AppVoteExtension struct { Height int64 Bids [][]byte} -``` - -This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data to a vote that is specific to the application. - -``` -type SpecialTransaction struct { Height int Bids [][]byte} -``` - -This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running. - -### Implementing Handlers and Configuring Handlers[​](#implementing-handlers-and-configuring-handlers "Direct link to Implementing Handlers and Configuring Handlers") - -To establish the `VoteExtensionHandler`, follow these steps: - -1. Navigate to the `abci/proposal.go` file. This is where we will implement the \`VoteExtensionHandler\`\`. - -2. Implement the `NewVoteExtensionHandler` function. 
This function is a constructor for the `VoteExtHandler` struct. It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`.

```go
func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler {
	return &VoteExtHandler{
		logger:  lg,
		mempool: mp,
		cdc:     cdc,
	}
}
```

3. Implement the `ExtendVoteHandler()` method. This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in `abci.RequestPrepareProposal` during the ensuing block.

```go
func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
	return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
		h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))

		voteExtBids := [][]byte{}

		// Get mempool txs
		itr := h.mempool.SelectPending(context.Background(), nil)
		for itr != nil {
			tmptx := itr.Tx()
			sdkMsgs := tmptx.GetMsgs()

			// Iterate through msgs, check for any bids
			for _, msg := range sdkMsgs {
				switch msg := msg.(type) {
				case *nstypes.MsgBid:
					// Marshal sdk bids to []byte
					bz, err := h.cdc.Marshal(msg)
					if err != nil {
						h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
						break
					}
					voteExtBids = append(voteExtBids, bz)
				default:
				}
			}

			// Move tx to ready pool; remove tx from app-side mempool
			err := h.mempool.Update(context.Background(), tmptx)
			if err != nil {
				h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
			}

			itr = itr.Next()
		}

		// Create vote extension
		voteExt := AppVoteExtension{
			Height: req.Height,
			Bids:   voteExtBids,
		}

		// Encode vote extension
		bz, err := json.Marshal(voteExt)
		if err != nil {
			return nil, fmt.Errorf("Error marshalling VE: %w", err)
		}

		return &abci.ResponseExtendVote{VoteExtension: bz}, nil
	}
}
```

4. Configure the handler in `app/app.go` as shown below:

```go
bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
```

To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.

To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`:

```go
if req.Height > 2 {
	voteExt := req.GetLocalLastCommit()
	h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt))
}
```

This is how the whole function should look:

```go
func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler {
	return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
		h.logger.Info(fmt.Sprintf("🛠️ :: Prepare Proposal"))
		var proposalTxs [][]byte
		var txs []sdk.Tx

		// Get vote extensions
		if req.Height > 2 {
			voteExt := req.GetLocalLastCommit()
			h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt))
		}

		itr := h.mempool.Select(context.Background(), nil)
		for itr != nil {
			tmptx := itr.Tx()
			txs = append(txs, tmptx)
			itr = itr.Next()
		}
		h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions available from mempool: %v", len(txs)))

		if h.runProvider {
			tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs)
			if err != nil {
				h.logger.Error(fmt.Sprintf("❌️ :: Error Building Custom Proposal: %v", err))
			}
			txs = tmpMsgs
		}

		for _, sdkTxs := range txs {
			txBytes, err := h.txConfig.TxEncoder()(sdkTxs)
			if err != nil {
				h.logger.Info(fmt.Sprintf("❌~Error encoding transaction: %v", err.Error()))
			}
			proposalTxs = append(proposalTxs, txBytes)
		}

		h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions in proposal: %v", len(proposalTxs)))

		return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil
	}
}
```

As mentioned above, we check whether vote extensions have been propagated; you can do this by checking the logs for any relevant messages, such as `🛠️ :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again.

5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.

```go
func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
	return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) {
		h.Logger.Info(fmt.Sprintf("⚙️ :: Process Proposal"))

		// The first transaction will always be the Special Transaction
		numTxs := len(req.Txs)
		h.Logger.Info(fmt.Sprintf("⚙️:: Number of transactions :: %v", numTxs))

		if numTxs >= 1 {
			var st SpecialTransaction
			err = json.Unmarshal(req.Txs[0], &st)
			if err != nil {
				h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling special Tx in Process Proposal :: %v", err))
			}
			if len(st.Bids) > 0 {
				h.Logger.Info(fmt.Sprintf("⚙️:: There are bids in the Special Transaction"))
				var bids []nstypes.MsgBid
				for i, b := range st.Bids {
					var bid nstypes.MsgBid
					h.Codec.Unmarshal(b, &bid)
					h.Logger.Info(fmt.Sprintf("⚙️:: Special Transaction Bid No %v :: %v", i, bid))
					bids = append(bids, bid)
				}
				// Validate bids in the tx
				txs := req.Txs[1:]
				ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger)
				if err != nil {
					h.Logger.Error(fmt.Sprintf("❌️:: Error validating bids in Process Proposal :: %v", err))
					return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
				}
				if !ok {
					h.Logger.Error(fmt.Sprintf("❌️:: Unable to validate bids in Process Proposal :: %v", err))
					return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
				}
				h.Logger.Info("⚙️:: Successfully validated bids in Process Proposal")
			}
		}

		return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
	}
}
```

6. Implement the `processVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids.

```go
func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) {
	log.Info(fmt.Sprintf("🛠️ :: Process Vote Extensions"))

	// Create empty response
	st := SpecialTransaction{
		0,
		[][]byte{},
	}

	// Get vote extensions for H-1 from the request
	voteExt := req.GetLocalLastCommit()
	votes := voteExt.Votes

	// Iterate through votes
	var ve AppVoteExtension
	for _, vote := range votes {
		// Unmarshal to AppVoteExtension
		err := json.Unmarshal(vote.VoteExtension, &ve)
		if err != nil {
			log.Error(fmt.Sprintf("❌ :: Error unmarshalling Vote Extension"))
		}
		st.Height = int(ve.Height)

		// If there are bids in the VE, append them to the Special Transaction
		if len(ve.Bids) > 0 {
			log.Info("🛠️ :: Bids in VE")
			for _, b := range ve.Bids {
				st.Bids = append(st.Bids, b)
			}
		}
	}

	return st, nil
}
```

7. Configure the `ProcessProposalHandler()` in `app/app.go`:

```go
processPropHandler := abci2.ProcessProposalHandler{app.txConfig, appCodec, logger}
bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler())
```

This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called.

To test if the proposal processing and vote extensions are working correctly, you can check the logs for any relevant messages. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again.
diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx index 910615bf..252d4184 100644 --- a/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx +++ b/docs/sdk/v0.53/tutorials/vote-extensions/auction-frontrunning/understanding-frontrunning.mdx @@ -1,27 +1,31 @@ --- -title: "Understanding Front-Running and more" -description: "Version: v0.53" +title: Understanding Front-Running and more +description: >- + Blockchain technology is vulnerable to practices that can affect the fairness + and security of the network. Two such practices are front-running and Maximal + Extractable Value (MEV), which are important for blockchain participants to + understand. --- -## Introduction[​](#introduction "Direct link to Introduction") +## Introduction Blockchain technology is vulnerable to practices that can affect the fairness and security of the network. Two such practices are front-running and Maximal Extractable Value (MEV), which are important for blockchain participants to understand. -## What is Front-Running?[​](#what-is-front-running "Direct link to What is Front-Running?") +## What is Front-Running? Front-running is when someone, such as a validator, uses their ability to see pending transactions to execute their own transactions first, benefiting from the knowledge of upcoming transactions. In nameservice auctions, a front-runner might place a higher bid before the original bid is confirmed, unfairly winning the auction. -## Nameservices and Nameservice Auctions[​](#nameservices-and-nameservice-auctions "Direct link to Nameservices and Nameservice Auctions") +## Nameservices and Nameservice Auctions Nameservices are human-readable identifiers on a blockchain, akin to internet domain names, that correspond to specific addresses or resources. 
They simplify interactions with typically long and complex blockchain addresses, allowing users to have a memorable and unique identifier for their blockchain address or smart contract. Nameservice auctions are the process by which these identifiers are bid on and acquired. To combat front-running—where someone might use knowledge of pending bids to place a higher bid first—mechanisms such as commit-reveal schemes, auction extensions, and fair sequencing are implemented. These strategies ensure a transparent and fair bidding process, reducing the potential for Maximal Extractable Value (MEV) exploitation. -## What is Maximal Extractable Value (MEV)?[​](#what-is-maximal-extractable-value-mev "Direct link to What is Maximal Extractable Value (MEV)?") +## What is Maximal Extractable Value (MEV)? MEV is the highest value that can be extracted by manipulating the order of transactions within a block, beyond the standard block rewards and fees. This has become more prominent with the growth of decentralised finance (DeFi), where transaction order can greatly affect profits. -## Implications of MEV[​](#implications-of-mev "Direct link to Implications of MEV") +## Implications of MEV MEV can lead to: @@ -29,7 +33,7 @@ MEV can lead to: * **Market Fairness**: An uneven playing field where only a few can gain at the expense of the majority. * **User Experience**: Higher fees and network congestion due to the competition for MEV. -## Mitigating MEV and Front-Running[​](#mitigating-mev-and-front-running "Direct link to Mitigating MEV and Front-Running") +## Mitigating MEV and Front-Running Some solutions are being developed to mitigate MEV and front-running, including: @@ -39,6 +43,6 @@ Some solutions are being developed to mitigate MEV and front-running, including: For this tutorial, we will be exploring the last solution, fair sequencing services, in the context of nameservice auctions.
-## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion MEV and front-running are challenges to blockchain integrity and fairness. Ongoing innovation and implementation of mitigation strategies are crucial for the ecosystem's health and success. diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/oracle/getting-started.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/oracle/getting-started.mdx index a642f244..c4a4fa6d 100644 --- a/docs/sdk/v0.53/tutorials/vote-extensions/oracle/getting-started.mdx +++ b/docs/sdk/v0.53/tutorials/vote-extensions/oracle/getting-started.mdx @@ -1,30 +1,30 @@ --- -title: "Getting Started" -description: "Version: v0.53" +title: Getting Started +description: What is an Oracle? Implementing Vote Extensions Testing the Oracle Module --- -## Table of Contents[​](#table-of-contents "Direct link to Table of Contents") +## Table of Contents -* [What is an Oracle?](/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle) -* [Implementing Vote Extensions](/v0.53/tutorials/vote-extensions/oracle/implementing-vote-extensions) -* [Testing the Oracle Module](/v0.53/tutorials/vote-extensions/oracle/testing-oracle) +* [What is an Oracle?](/docs/sdk/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle) +* [Implementing Vote Extensions](/docs/sdk/v0.53/tutorials/vote-extensions/oracle/implementing-vote-extensions) +* [Testing the Oracle Module](/docs/sdk/v0.53/tutorials/vote-extensions/oracle/testing-oracle) -## Prerequisites[​](#prerequisites "Direct link to Prerequisites") +## Prerequisites Before you start with this tutorial, make sure you have: * A working chain project. This tutorial won't cover the steps of creating a new chain/module. * Familiarity with the Cosmos SDK. If you're not, we suggest you start with [Cosmos SDK Tutorials](https://tutorials.cosmos.network), as ABCI++ is considered an advanced topic. -* Read and understood [What is an Oracle?](/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle). 
This provides necessary background information for understanding the Oracle module. +* Read and understood [What is an Oracle?](/docs/sdk/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle). This provides necessary background information for understanding the Oracle module. * A basic understanding of the Go programming language. -## What are Vote extensions?[​](#what-are-vote-extensions "Direct link to What are Vote extensions?") +## What are Vote extensions? Vote extensions are arbitrary information that can be inserted into a block. This feature is part of ABCI 2.0, which is available for use in the SDK 0.50 release and part of the 0.38 CometBFT release. More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions). -## Overview of the project[​](#overview-of-the-project "Direct link to Overview of the project") +## Overview of the project We’ll go through the creation of a simple price oracle module focusing on the vote extensions implementation, ignoring the details inside the price oracle itself. diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx index 9d5232c2..1c2df539 100644 --- a/docs/sdk/v0.53/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx +++ b/docs/sdk/v0.53/tutorials/vote-extensions/oracle/implementing-vote-extensions.mdx @@ -1,28 +1,66 @@ --- -title: "Implementing Vote Extensions" -description: "Version: v0.53" +title: Implementing Vote Extensions +description: >- + First we’ll create the OracleVoteExtension struct, this is the object that + will be marshaled as bytes and signed by the validator. --- -## Implement ExtendVote[​](#implement-extendvote "Direct link to Implement ExtendVote") +## Implement ExtendVote First we’ll create the `OracleVoteExtension` struct; this is the object that will be marshaled as bytes and signed by the validator.
In our example we’ll use JSON to marshal the vote extension for simplicity, but we recommend finding an encoding that produces a smaller output, given that large vote extensions could impact CometBFT’s performance. Custom encodings and compressed bytes can be used out of the box. -``` -// OracleVoteExtension defines the canonical vote extension structure.type OracleVoteExtension struct { Height int64 Prices map[string]math.LegacyDec} +```go +/ OracleVoteExtension defines the canonical vote extension structure. +type OracleVoteExtension struct { + Height int64 + Prices map[string]math.LegacyDec +} ``` Then we’ll create a `VoteExtensionsHandler` struct that contains everything we need to query for prices. -``` -type VoteExtHandler struct { logger log.Logger currentBlock int64 // current block height lastPriceSyncTS time.Time // last time we synced prices providerTimeout time.Duration // timeout for fetching prices from providers providers map[string]Provider // mapping of provider name to provider (e.g. Binance -> BinanceProvider) providerPairs map[string][]keeper.CurrencyPair // mapping of provider name to supported pairs (e.g. Binance -> [ATOM/USD]) Keeper keeper.Keeper // keeper of our oracle module} +```go +type VoteExtHandler struct { + logger log.Logger + currentBlock int64 / current block height + lastPriceSyncTS time.Time / last time we synced prices + providerTimeout time.Duration / timeout for fetching prices from providers + providers map[string]Provider / mapping of provider name to provider (e.g. Binance -> BinanceProvider) + +providerPairs map[string][]keeper.CurrencyPair / mapping of provider name to supported pairs (e.g. Binance -> [ATOM/USD]) + +Keeper keeper.Keeper / keeper of our oracle module +} ``` Finally, a function that returns `sdk.ExtendVoteHandler` is needed too, and this is where our vote extension logic will live.
-``` -func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler { return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { // here we'd have a helper function that gets all the prices and does a weighted average using the volume of each market prices := h.getAllVolumeWeightedPrices() voteExt := OracleVoteExtension{ Height: req.Height, Prices: prices, } bz, err := json.Marshal(voteExt) if err != nil { return nil, fmt.Errorf("failed to marshal vote extension: %w", err) } return &abci.ResponseExtendVote{VoteExtension: bz}, nil }} +```go expandable +func (h *VoteExtHandler) + +ExtendVoteHandler() + +sdk.ExtendVoteHandler { + return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) { + / here we'd have a helper function that gets all the prices and does a weighted average using the volume of each market + prices := h.getAllVolumeWeightedPrices() + voteExt := OracleVoteExtension{ + Height: req.Height, + Prices: prices, +} + +bz, err := json.Marshal(voteExt) + if err != nil { + return nil, fmt.Errorf("failed to marshal vote extension: %w", err) +} + +return &abci.ResponseExtendVote{ + VoteExtension: bz +}, nil +} +} ``` As you can see above, the creation of a vote extension is pretty simple and we just have to return bytes. CometBFT will handle the signing of these bytes for us. We ignored the process of getting the prices but you can see a more complete example [here:](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle/abci/vote_extensions.go) @@ -33,64 +71,184 @@ Here we’ll do some simple checks like: * Is the vote extension for the right height? * Some other validation, for example, are the prices from this extension too deviated from my own prices? Or maybe checks that can detect malicious behavior. 
-``` -func (h *VoteExtHandler) VerifyVoteExtensionHandler() sdk.VerifyVoteExtensionHandler { return func(ctx sdk.Context, req *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { var voteExt OracleVoteExtension err := json.Unmarshal(req.VoteExtension, &voteExt) if err != nil { return nil, fmt.Errorf("failed to unmarshal vote extension: %w", err) } if voteExt.Height != req.Height { return nil, fmt.Errorf("vote extension height does not match request height; expected: %d, got: %d", req.Height, voteExt.Height) } // Verify incoming prices from a validator are valid. Note, verification during // VerifyVoteExtensionHandler MUST be deterministic. For brevity and demo // purposes, we omit implementation. if err := h.verifyOraclePrices(ctx, voteExt.Prices); err != nil { return nil, fmt.Errorf("failed to verify oracle prices from validator %X: %w", req.ValidatorAddress, err) } return &abci.ResponseVerifyVoteExtension{Status: abci.ResponseVerifyVoteExtension_ACCEPT}, nil }} -``` +```go expandable +func (h *VoteExtHandler) + +VerifyVoteExtensionHandler() -## Implement PrepareProposal[​](#implement-prepareproposal "Direct link to Implement PrepareProposal") +sdk.VerifyVoteExtensionHandler { + return func(ctx sdk.Context, req *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) { + var voteExt OracleVoteExtension + err := json.Unmarshal(req.VoteExtension, &voteExt) + if err != nil { + return nil, fmt.Errorf("failed to unmarshal vote extension: %w", err) +} + if voteExt.Height != req.Height { + return nil, fmt.Errorf("vote extension height does not match request height; expected: %d, got: %d", req.Height, voteExt.Height) +} + / Verify incoming prices from a validator are valid. Note, verification during + / VerifyVoteExtensionHandler MUST be deterministic. For brevity and demo + / purposes, we omit implementation. 
+ if err := h.verifyOraclePrices(ctx, voteExt.Prices); err != nil { + return nil, fmt.Errorf("failed to verify oracle prices from validator %X: %w", req.ValidatorAddress, err) +} + +return &abci.ResponseVerifyVoteExtension{ + Status: abci.ResponseVerifyVoteExtension_ACCEPT +}, nil +} +} ``` -type ProposalHandler struct { logger log.Logger keeper keeper.Keeper // our oracle module keeper valStore baseapp.ValidatorStore // to get the current validators' pubkeys} + +## Implement PrepareProposal + +```go +type ProposalHandler struct { + logger log.Logger + keeper keeper.Keeper / our oracle module keeper + valStore baseapp.ValidatorStore / to get the current validators' pubkeys +} ``` And we create the struct for our “special tx” that will contain the prices and the votes so validators can later re-check in `ProcessProposal` that they get the same result as the block’s proposer. With this we could also check if all the votes have been used by comparing the votes received in ProcessProposal. -``` -type StakeWeightedPrices struct { StakeWeightedPrices map[string]math.LegacyDec ExtendedCommitInfo abci.ExtendedCommitInfo} +```go +type StakeWeightedPrices struct { + StakeWeightedPrices map[string]math.LegacyDec + ExtendedCommitInfo abci.ExtendedCommitInfo +} ``` Now we create the `PrepareProposalHandler`. In this step we’ll first check if the vote extensions’ signatures are correct using a helper function called `ValidateVoteExtensions` from the `baseapp` package. -``` -func (h *ProposalHandler) PrepareProposal() sdk.PrepareProposalHandler { return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), req.LocalLastCommit) if err != nil { return nil, err }...
+```go +func (h *ProposalHandler) + +PrepareProposal() + +sdk.PrepareProposalHandler { + return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) { + err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), req.LocalLastCommit) + if err != nil { + return nil, err +} +... ``` Then we proceed to make the calculations only if the current height is higher than the height at which vote extensions have been enabled. Remember that vote extensions are made available to the block proposer on the next block at which they are produced/enabled. -``` -... proposalTxs := req.Txs if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, req.LocalLastCommit) if err != nil { return nil, errors.New("failed to compute stake-weighted oracle prices") } injectedVoteExtTx := StakeWeightedPrices{ StakeWeightedPrices: stakeWeightedPrices, ExtendedCommitInfo: req.LocalLastCommit, }... +```go expandable +... + proposalTxs := req.Txs + if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { + stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, req.LocalLastCommit) + if err != nil { + return nil, errors.New("failed to compute stake-weighted oracle prices") +} + injectedVoteExtTx := StakeWeightedPrices{ + StakeWeightedPrices: stakeWeightedPrices, + ExtendedCommitInfo: req.LocalLastCommit, +} +... ``` Finally we inject the result as a transaction at a specific location, usually at the beginning of the block: -## Implement ProcessProposal[​](#implement-processproposal "Direct link to Implement ProcessProposal") +## Implement ProcessProposal Now we can implement the method that all validators will execute to ensure the proposer is doing their work correctly.
Here, if vote extensions are enabled, we’ll check if the tx at index 0 is an injected vote extension -``` -func (h *ProposalHandler) ProcessProposal() sdk.ProcessProposalHandler { return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { var injectedVoteExtTx StakeWeightedPrices if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { h.logger.Error("failed to decode injected vote extension tx", "err", err) return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil }... -``` +```go +func (h *ProposalHandler) + +ProcessProposal() + +sdk.ProcessProposalHandler { + return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) { + if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { + var injectedVoteExtTx StakeWeightedPrices + if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { + h.logger.Error("failed to decode injected vote extension tx", "err", err) + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} +... +``` + +Then we re-validate the vote extensions signatures using +baseapp.ValidateVoteExtensions, re-calculate the results (just like in PrepareProposal) and compare them with the results we got from the injected tx. + +```go expandable +err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), injectedVoteExtTx.ExtendedCommitInfo) + if err != nil { + return nil, err +} + + / Verify the proposer's stake-weighted oracle prices by computing the same + / calculation and comparing the results. We omit verification for brevity + / and demo purposes. 
+ stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, injectedVoteExtTx.ExtendedCommitInfo) + if err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + if err := compareOraclePrices(injectedVoteExtTx.StakeWeightedPrices, stakeWeightedPrices); err != nil { + return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_REJECT +}, nil +} + +} + +return &abci.ResponseProcessProposal{ + Status: abci.ResponseProcessProposal_ACCEPT +}, nil +} +} +``` + +Important: In this example we avoided using the mempool and other basics, please refer to the DefaultProposalHandler for a complete implementation: [Link](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci_utils.go) + +## Implement PreBlocker -Then we re-validate the vote extensions signatures using baseapp.ValidateVoteExtensions, re-calculate the results (just like in PrepareProposal) and compare them with the results we got from the injected tx. +Now validators are extending their vote, verifying other votes and including the result in the block. But how do we actually make use of this result? This is done in the PreBlocker which is code that is run before any other code during FinalizeBlock so we make sure we make this information available to the chain and its modules during the entire block execution (from BeginBlock). -``` - err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), injectedVoteExtTx.ExtendedCommitInfo) if err != nil { return nil, err } // Verify the proposer's stake-weighted oracle prices by computing the same // calculation and comparing the results. We omit verification for brevity // and demo purposes. 
stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, injectedVoteExtTx.ExtendedCommitInfo) if err != nil { return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } if err := compareOraclePrices(injectedVoteExtTx.StakeWeightedPrices, stakeWeightedPrices); err != nil { return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil } } return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil }} -``` +At this step we know that the injected tx is well-formatted and has been verified by the validators participating in consensus, so making use of it is straightforward. Just check if vote extensions are enabled, pick up the first transaction and use a method in your module’s keeper to set the result. -Important: In this example we avoided using the mempool and other basics, please refer to the DefaultProposalHandler for a complete implementation: [https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci\_utils.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci_utils.go) +```go expandable +func (h *ProposalHandler) -## Implement PreBlocker[​](#implement-preblocker "Direct link to Implement PreBlocker") +PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { + res := &sdk.ResponsePreBlock{ +} + if len(req.Txs) == 0 { + return res, nil +} + if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { + var injectedVoteExtTx StakeWeightedPrices + if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { + h.logger.Error("failed to decode injected vote extension tx", "err", err) -Now validators are extending their vote, verifying other votes and including the result in the block. But how do we actually make use of this result? 
This is done in the PreBlocker which is code that is run before any other code during FinalizeBlock so we make sure we make this information available to the chain and its modules during the entire block execution (from BeginBlock). +return nil, err +} -At this step we know that the injected tx is well-formatted and has been verified by the validators participating in consensus, so making use of it is straightforward. Just check if vote extensions are enabled, pick up the first transaction and use a method in your module’s keeper to set the result. + / set oracle prices using the passed in context, which will make these prices available in the current block + if err := h.keeper.SetOraclePrices(ctx, injectedVoteExtTx.StakeWeightedPrices); err != nil { + return nil, err +} + +} -``` -func (h *ProposalHandler) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { res := &sdk.ResponsePreBlock{} if len(req.Txs) == 0 { return res, nil } if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight { var injectedVoteExtTx StakeWeightedPrices if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { h.logger.Error("failed to decode injected vote extension tx", "err", err) return nil, err } // set oracle prices using the passed in context, which will make these prices available in the current block if err := h.keeper.SetOraclePrices(ctx, injectedVoteExtTx.StakeWeightedPrices); err != nil { return nil, err } } return res, nil} +return res, nil +} ``` -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion In this tutorial, we've created a simple price oracle module that incorporates vote extensions. We've seen how to implement `ExtendVote`, `VerifyVoteExtension`, `PrepareProposal`, `ProcessProposal`, and `PreBlocker` to handle the voting and verification process of vote extensions, as well as how to make use of the results during the block execution. 
diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/oracle/testing-oracle.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/oracle/testing-oracle.mdx index a956e96d..c60c9ea2 100644 --- a/docs/sdk/v0.53/tutorials/vote-extensions/oracle/testing-oracle.mdx +++ b/docs/sdk/v0.53/tutorials/vote-extensions/oracle/testing-oracle.mdx @@ -1,57 +1,61 @@ --- -title: "Testing the Oracle Module" -description: "Version: v0.53" +title: Testing the Oracle Module +description: >- + We will guide you through the process of testing the Oracle module in your + application. The Oracle module uses vote extensions to provide current price + data. If you would like to see the complete working oracle module please see + here. --- We will guide you through the process of testing the Oracle module in your application. The Oracle module uses vote extensions to provide current price data. If you would like to see the complete working oracle module please see [here](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle). -## Step 1: Compile and Install the Application[​](#step-1-compile-and-install-the-application "Direct link to Step 1: Compile and Install the Application") +## Step 1: Compile and Install the Application First, we need to compile and install the application. Please ensure you are in the `tutorials/oracle/base` directory. Run the following command in your terminal: -``` +```shell make install ``` This command compiles the application and moves the resulting binary to a location in your system's PATH. -## Step 2: Initialise the Application[​](#step-2-initialise-the-application "Direct link to Step 2: Initialise the Application") +## Step 2: Initialise the Application Next, we need to initialise the application. Run the following command in your terminal: -``` +```shell make init ``` This command runs the script `tutorials/oracle/base/scripts/init.sh`, which sets up the necessary configuration for your application to run. 
This includes creating the `app.toml` configuration file and initialising the blockchain with a genesis block. -## Step 3: Start the Application[​](#step-3-start-the-application "Direct link to Step 3: Start the Application") +## Step 3: Start the Application Now, we can start the application. Run the following command in your terminal: -``` +```shell exampled start ``` This command starts your application, begins the blockchain node, and starts processing transactions. -## Step 4: Query the Oracle Prices[​](#step-4-query-the-oracle-prices "Direct link to Step 4: Query the Oracle Prices") +## Step 4: Query the Oracle Prices Finally, we can query the current prices from the Oracle module. Run the following command in your terminal: -``` +```shell exampled q oracle prices ``` This command queries the current prices from the Oracle module. The expected output shows that the vote extensions were successfully included in the block and the Oracle module was able to retrieve the price data. -## Understanding Vote Extensions in Oracle[​](#understanding-vote-extensions-in-oracle "Direct link to Understanding Vote Extensions in Oracle") +## Understanding Vote Extensions in Oracle In the Oracle module, the `ExtendVoteHandler` function is responsible for creating the vote extensions. This function fetches the current prices from the provider, creates an `OracleVoteExtension` struct with these prices, and then marshals this struct into bytes. These bytes are then set as the vote extension. In the context of testing, the Oracle module uses a mock provider to simulate the behavior of a real price provider. This mock provider is defined in the mockprovider package and is used to return predefined prices for specific currency pairs. -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion In this tutorial, we've delved into the concept of oracles in blockchain technology, focusing on their role in providing external data to a blockchain network.
We've explored vote extensions, a powerful feature of ABCI++, and integrated them into a Cosmos SDK application to create a price oracle module. diff --git a/docs/sdk/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx b/docs/sdk/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx index 6a1daa96..9e273990 100644 --- a/docs/sdk/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx +++ b/docs/sdk/v0.53/tutorials/vote-extensions/oracle/what-is-an-oracle.mdx @@ -1,16 +1,15 @@ --- -title: "What is an Oracle?" -description: "Version: v0.53" +title: What is an Oracle? --- An oracle in blockchain technology is a system that provides external data to a blockchain network. It acts as a source of information that is not natively accessible within the blockchain's closed environment. This can range from financial market prices to real-world events, making it crucial for decentralised applications. -## Oracle in the Cosmos SDK[​](#oracle-in-the-cosmos-sdk "Direct link to Oracle in the Cosmos SDK") +## Oracle in the Cosmos SDK In the Cosmos SDK, an oracle module can be implemented to provide external data to the blockchain. This module can use features like vote extensions to submit additional data during the consensus process, which can then be used by the blockchain to update its state with information from the outside world. For instance, a price oracle module in the Cosmos SDK could supply timely and accurate asset price information, which is vital for various financial operations within the blockchain ecosystem. -## Conclusion[​](#conclusion "Direct link to Conclusion") +## Conclusion Oracles are essential for blockchains to interact with external data, enabling them to respond to real-world information and events. Their implementation is key to the reliability and robustness of blockchain networks.
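The `ExtendVoteHandler` flow described in the testing guide — fetch prices from a (possibly mock) provider, build an `OracleVoteExtension`, and marshal it into the bytes used as the vote extension — can be sketched as follows. The struct fields, JSON encoding, and mock price below are illustrative assumptions, not the exact types from the tutorial repository:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// OracleVoteExtension is an illustrative struct, not the repo's exact
// type: prices keyed by currency pair, as decimal strings.
type OracleVoteExtension struct {
	Prices map[string]string `json:"prices"`
}

// extendVote mimics the handler flow: "fetch" prices from a fixed mock
// (standing in for the tutorial's mockprovider), wrap them in the
// extension struct, and marshal the struct into bytes.
func extendVote() ([]byte, error) {
	ext := OracleVoteExtension{Prices: map[string]string{"ATOM/USD": "11.34"}}
	return json.Marshal(ext)
}

func main() {
	bz, err := extendVote()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(bz)) // prints: {"prices":{"ATOM/USD":"11.34"}}
}
```

The corresponding `VerifyVoteExtension` side would unmarshal these bytes and sanity-check the contents before accepting a peer's extension.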
diff --git a/docs/sdk/v0.53/user.mdx b/docs/sdk/v0.53/user.mdx deleted file mode 100644 index 0b3e5303..00000000 --- a/docs/sdk/v0.53/user.mdx +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "User Guides" -description: "Version: v0.53" ---- - -This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features. - -* [Setting up keys](./user/run-node/keyring) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application. -* [Running a node](./user/run-node/run-node) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps. -* [CLI](./user/run-node/interact-node) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively. diff --git a/docs/sdk/v0.53/user.mdx.bak b/docs/sdk/v0.53/user.mdx.bak deleted file mode 100644 index 02faae65..00000000 --- a/docs/sdk/v0.53/user.mdx.bak +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "User Guides" -description: "Version: v0.53" ---- - -This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features. - -* [Setting up keys](/v0.53/user/run-node/keyring) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application. 
-* [Running a node](/v0.53/user/run-node/run-node) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps. -* [CLI](/v0.53/user/run-node/interact-node) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively. diff --git a/docs/sdk/v0.53/user/run-node/interact-node.mdx b/docs/sdk/v0.53/user/run-node/interact-node.mdx deleted file mode 100644 index f792b5f7..00000000 --- a/docs/sdk/v0.53/user/run-node/interact-node.mdx +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: "Interacting with the Node" -description: "Version: v0.53" ---- - - - There are multiple ways to interact with a node: using the CLI, using gRPC or using the REST endpoints. - - - - * [gRPC, REST and CometBFT Endpoints](/v0.53/learn/advanced/grpc_rest) - * [Running a Node](/v0.53/user/run-node/run-node) - - -## Using the CLI[​](#using-the-cli "Direct link to Using the CLI") - -Now that your chain is running, it is time to try sending tokens from the first account you created to a second account. In a new terminal window, start by running the following query command: - -``` -simd query bank balances $MY_VALIDATOR_ADDRESS -``` - -You should see the current balance of the account you created, equal to the original balance of `stake` you granted it minus the amount you delegated via the `gentx`. Now, create a second account: - -``` -simd keys add recipient --keyring-backend test# Put the generated address in a variable for later use.RECIPIENT=$(simd keys show recipient -a --keyring-backend test) -``` - -The command above creates a local key-pair that is not yet registered on the chain. An account is created the first time it receives tokens from another account. 
Now, run the following command to send tokens to the `recipient` account: - -``` -simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test# Check that the recipient account did receive the tokens.simd query bank balances $RECIPIENT -``` - -Finally, delegate some of the stake tokens sent to the `recipient` account to the validator: - -``` -simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test# Query the total delegations to `validator`.simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test) -``` - -You should see two delegations, the first one made from the `gentx`, and the second one you just performed from the `recipient` account. - -## Using gRPC[​](#using-grpc "Direct link to Using gRPC") - -The Protobuf ecosystem developed tools for different use cases, including code-generation from `*.proto` files into various languages. These tools allow the building of clients easily. Often, the client connection (i.e. the transport) can be plugged and replaced very easily. Let's explore one of the most popular transport: [gRPC](/v0.53/learn/advanced/grpc_rest). - -Since the code generation library largely depends on your own tech stack, we will only present three alternatives: - -* `grpcurl` for generic debugging and testing, -* programmatically via Go, -* CosmJS for JavaScript/TypeScript developers. - -### grpcurl[​](#grpcurl "Direct link to grpcurl") - -[grpcurl](https://github.com/fullstorydev/grpcurl) is like `curl` but for gRPC. It is also available as a Go library, but we will use it only as a CLI command for debugging and testing purposes. Follow the instructions in the previous link to install it. 
- -Assuming you have a local node running (either a localnet, or connected a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9000` by the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](/v0.53/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml)): - -``` -grpcurl -plaintext localhost:9090 list -``` - -You should see a list of gRPC services, like `cosmos.bank.v1beta1.Query`. This is called reflection, which is a Protobuf endpoint returning a description of all available endpoints. Each of these represents a different Protobuf service, and each service exposes multiple RPC methods you can query against. - -In order to get a description of the service you can run the following command: - -``` -grpcurl -plaintext \ localhost:9090 \ describe cosmos.bank.v1beta1.Query # Service we want to inspect -``` - -It's also possible to execute an RPC call to query the node for information: - -``` -grpcurl \ -plaintext \ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances -``` - -The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786). - -#### Query for historical state using grpcurl[​](#query-for-historical-state-using-grpcurl "Direct link to Query for historical state using grpcurl") - -You may also query for historical data by passing some [gRPC metadata](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) to the query: the `x-cosmos-block-height` metadata should contain the block to query. 
Using grpcurl as above, the command looks like: - -``` -grpcurl \ -plaintext \ -H "x-cosmos-block-height: 123" \ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances -``` - -Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response. - -### Programmatically via Go[​](#programmatically-via-go "Direct link to Programmatically via Go") - -The following snippet shows how to query the state using gRPC inside a Go program. The idea is to create a gRPC connection, and use the Protobuf-generated client code to query the gRPC server. - -#### Install Cosmos SDK[​](#install-cosmos-sdk "Direct link to Install Cosmos SDK") - -``` -go get github.com/cosmos/cosmos-sdk@main -``` - -``` -package mainimport ( "context" "fmt" "google.golang.org/grpc" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types")func queryState() error { myAddress, err := sdk.AccAddressFromBech32("cosmos1...") // the my_validator or recipient address. if err != nil { return err } // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry // if the request/response types contain interface instead of 'nil' you should pass the application specific codec. grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), ) if err != nil { return err } defer grpcConn.Close() // This creates a gRPC client to query the x/bank service. 
bankClient := banktypes.NewQueryClient(grpcConn) bankRes, err := bankClient.Balance( context.Background(), &banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"}, ) if err != nil { return err } fmt.Println(bankRes.GetBalance()) // Prints the account balance return nil}func main() { if err := queryState(); err != nil { panic(err) }} -``` - -You can replace the query client (here we are using `x/bank`'s) with one generated from any other Protobuf service. The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786). - -#### Query for historical state using Go[​](#query-for-historical-state-using-go "Direct link to Query for historical state using Go") - -Querying for historical blocks is done by adding the block height metadata in the gRPC request. - -``` -package mainimport ( "context" "fmt" "google.golang.org/grpc" "google.golang.org/grpc/metadata" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" grpctypes "github.com/cosmos/cosmos-sdk/types/grpc" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types")func queryState() error { myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") // the my_validator or recipient address. if err != nil { return err } // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism. // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry // if the request/response types contain interface instead of 'nil' you should pass the application specific codec. grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), ) if err != nil { return err } defer grpcConn.Close() // This creates a gRPC client to query the x/bank service. 
bankClient := banktypes.NewQueryClient(grpcConn) var header metadata.MD _, err = bankClient.Balance( metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), // Add metadata to request &banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"}, grpc.Header(&header), // Retrieve header from response ) if err != nil { return err } blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader) fmt.Println(blockHeight) // Prints the block height (12) return nil}func main() { if err := queryState(); err != nil { panic(err) }} -``` - -### CosmJS[​](#cosmjs "Direct link to CosmJS") - -CosmJS documentation can be found at [https://cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs). As of January 2021, CosmJS documentation is still work in progress. - -## Using the REST Endpoints[​](#using-the-rest-endpoints "Direct link to Using the REST Endpoints") - -As described in the [gRPC guide](/v0.53/learn/advanced/grpc_rest), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway. The format of the URL path is based on the Protobuf service method's full-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters. - -Note that the REST endpoints are not enabled by default. 
To enable them, edit the `api` section of your `~/.simapp/config/app.toml` file: - -``` -# Enable defines if the API server should be enabled.enable = true -``` - -As a concrete example, the `curl` command to make balances request is: - -``` -curl \ -X GET \ -H "Content-Type: application/json" \ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS -``` - -Make sure to replace `localhost:1317` with the REST endpoint of your node, configured under the `api.address` field. - -The list of all available REST endpoints is available as a Swagger specification file, it can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to true in your [`app.toml`](/v0.53/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml) file. - -### Query for historical state using REST[​](#query-for-historical-state-using-rest "Direct link to Query for historical state using REST") - -Querying for historical state is done using the HTTP header `x-cosmos-block-height`. For example, a curl command would look like: - -``` -curl \ -X GET \ -H "Content-Type: application/json" \ -H "x-cosmos-block-height: 123" \ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS -``` - -Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response. - -### Cross-Origin Resource Sharing (CORS)[​](#cross-origin-resource-sharing-cors "Direct link to Cross-Origin Resource Sharing (CORS)") - -[CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default to help with security. If you would like to use the rest-server in a public environment we recommend you provide a reverse proxy, this can be done with [nginx](https://www.nginx.com/). For testing and development purposes there is an `enabled-unsafe-cors` field inside [`app.toml`](/v0.53/user/run-node/run-node#configuring-the-node-using-apptoml-and-configtoml). 
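Setting the `x-cosmos-block-height` header from code follows the same pattern as the `curl` examples above. The sketch below only builds the request — the base URL and address are placeholders, and actually sending it requires a running node with the API server enabled:

```go
package main

import (
	"fmt"
	"net/http"
)

// newHistoricalQuery builds (but does not send) a REST balances query
// against a node's gRPC-gateway endpoint, pinning it to a past height
// via the x-cosmos-block-height header.
func newHistoricalQuery(baseURL, address string, height int) (*http.Request, error) {
	url := fmt.Sprintf("%s/cosmos/bank/v1beta1/balances/%s", baseURL, address)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("x-cosmos-block-height", fmt.Sprintf("%d", height))
	return req, nil
}

func main() {
	// Placeholder endpoint and address; send with http.DefaultClient.Do(req)
	// against a live node.
	req, err := newHistoricalQuery("http://localhost:1317", "cosmos1abc", 123)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("x-cosmos-block-height")) // prints: 123
}
```

As with the `curl` variant, the node answers only if the state at that height has not yet been pruned.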
diff --git a/docs/sdk/v0.53/user/run-node/keyring.mdx b/docs/sdk/v0.53/user/run-node/keyring.mdx deleted file mode 100644 index 071d18a7..00000000 --- a/docs/sdk/v0.53/user/run-node/keyring.mdx +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: "Setting up the keyring" -description: "Version: v0.53" ---- - - - This document describes how to configure and use the keyring and its various backends for an [**application**](/v0.53/learn/beginner/app-anatomy). - - -The keyring holds the private/public key pairs used to interact with a node. For instance, a validator key needs to be set up before running the blockchain node, so that blocks can be correctly signed. The private key can be stored in different locations, called "backends," such as a file or the operating system's own key storage. - -## Available backends for the keyring[​](#available-backends-for-the-keyring "Direct link to Available backends for the keyring") - -Starting with the v0.38.0 release, Cosmos SDK comes with a new keyring implementation that provides a set of commands to manage cryptographic keys in a secure fashion. The new keyring supports multiple storage backends, some of which may not be available on all operating systems. - -### The `os` backend[​](#the-os-backend "Direct link to the-os-backend") - -The `os` backend relies on operating system-specific defaults to handle key storage securely. Typically, an operating system's credential subsystem handles password prompts, private keys storage, and user sessions according to the user's password policies. 
Here is a list of the most popular operating systems and their respective passwords manager: - -* macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac) - -* Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management) - -* GNU/Linux: - - * [libsecret](https://gitlab.gnome.org/GNOME/libsecret) - * [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html) - * [keyctl](https://www.kernel.org/doc/html/latest/security/keys/core.html) - -GNU/Linux distributions that use GNOME as the default desktop environment typically come with [Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE based distributions are commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager). Whilst the former is in fact a `libsecret` convenient frontend, the latter is a `kwallet` client. `keyctl` is a secure backend that leverages the Linux's kernel security key management system to store cryptographic keys securely in memory. - -`os` is the default option since operating system's default credentials managers are designed to meet users' most common needs and provide them with a comfortable experience without compromising on security. - -The recommended backends for headless environments are `file` and `pass`. - -### The `file` backend[​](#the-file-backend "Direct link to the-file-backend") - -The `file` backend more closely resembles the keybase implementation used prior to v0.38.1. It stores the keyring encrypted within the app's configuration directory. This keyring will request a password each time it is accessed, which may occur multiple times in a single command resulting in repeated password prompts. 
If using bash scripts to execute commands using the `file` option you may want to utilize the following format for multiple prompts: - -``` -# assuming that KEYPASSWD is set in the environment$ gaiacli config keyring-backend file # use file backend$ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts$ echo $KEYPASSWD | gaiacli keys show me # single prompt -``` - - - The first time you add a key to an empty keyring, you will be prompted to type the password twice. - - -### The `pass` backend[​](#the-pass-backend "Direct link to the-pass-backend") - -The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk encryption of keys' sensitive data and metadata. Keys are stored inside `gpg` encrypted files within app-specific directories. `pass` is available for the most popular UNIX operating systems as well as GNU/Linux distributions. Please refer to its manual page for information on how to download and install it. - - - **pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically invokes the `gpg-agent` daemon upon execution, which handles the caching of GnuPG credentials. Please refer to `gpg-agent` man page for more information on how to configure cache parameters such as credentials TTL and passphrase expiration. - - -The password store must be set up prior to first use: - -``` -pass init -``` - -Replace `` with your GPG key ID. You can use your personal GPG key or an alternative one you may want to use specifically to encrypt the password store. - -### The `kwallet` backend[​](#the-kwallet-backend "Direct link to the-kwallet-backend") - -The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on the GNU/Linux distributions that ships KDE as default desktop environment. Please refer to [KWallet Handbook](https://docs.kde.org/stable5/en/kdeutils/kwallet5/index.html) for more information. 
- -### The `keyctl` backend[​](#the-keyctl-backend "Direct link to the-keyctl-backend") - -The *Kernel Key Retention Service* is a security facility that has been added to the Linux kernel relatively recently. It allows sensitive cryptographic data such as passwords, private key, authentication tokens, etc to be stored securely in memory. - -The `keyctl` backend is available on Linux platforms only. - -### The `test` backend[​](#the-test-backend "Direct link to the-test-backend") - -The `test` backend is a password-less variation of the `file` backend. Keys are stored unencrypted on disk. - -**Provided for testing purposes only. The `test` backend is not recommended for use in production environments**. - -### The `memory` backend[​](#the-memory-backend "Direct link to the-memory-backend") - -The `memory` backend stores keys in memory. The keys are immediately deleted after the program has exited. - -**Provided for testing purposes only. The `memory` backend is not recommended for use in production environments**. - -### Setting backend using the env variable[​](#setting-backend-using-the-env-variable "Direct link to Setting backend using the env variable") - -You can set the keyring-backend using env variable: `BINNAME_KEYRING_BACKEND`. For example, if your binary name is `gaia-v5` then set: `export GAIA_V5_KEYRING_BACKEND=pass` - -## Adding keys to the keyring[​](#adding-keys-to-the-keyring "Direct link to Adding keys to the keyring") - - - Make sure you can build your own binary, and replace `simd` with the name of your binary in the snippets. - - -Applications developed using the Cosmos SDK come with the `keys` subcommand. For the purpose of this tutorial, we're running the `simd` CLI, which is an application built using the Cosmos SDK for testing and educational purposes. For more information, see [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). 
- -You can use `simd keys` for help about the keys command and `simd keys [command] --help` for more information about a particular subcommand. - -To create a new key in the keyring, run the `add` subcommand with a `` argument. For the purpose of this tutorial, we will solely use the `test` backend, and call our new key `my_validator`. This key will be used in the next section. - -``` -$ simd keys add my_validator --keyring-backend test# Put the generated address in a variable for later use.MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test) -``` - -This command generates a new 24-word mnemonic phrase, persists it to the relevant backend, and outputs information about the keypair. If this keypair will be used to hold value-bearing tokens, be sure to write down the mnemonic phrase somewhere safe! - -By default, the keyring generates a `secp256k1` keypair. The keyring also supports `ed25519` keys, which may be created by passing the `--algo ed25519` flag. A keyring can of course hold both types of keys simultaneously, and the Cosmos SDK's `x/auth` module supports natively these two public key algorithms. diff --git a/docs/sdk/v0.53/user/run-node/run-node.mdx b/docs/sdk/v0.53/user/run-node/run-node.mdx deleted file mode 100644 index d2b22c8b..00000000 --- a/docs/sdk/v0.53/user/run-node/run-node.mdx +++ /dev/null @@ -1,167 +0,0 @@ ---- -title: "Running a Node" -description: "Version: v0.53" ---- - - - Now that the application is ready and the keyring populated, it's time to see how to run the blockchain node. In this section, the application we are running is called [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`. 
- - - - * [Anatomy of a Cosmos SDK Application](/v0.53/learn/beginner/app-anatomy) - * [Setting up the keyring](/v0.53/user/run-node/keyring) - - -## Initialize the Chain[​](#initialize-the-chain "Direct link to Initialize the Chain") - - - Make sure you can build your own binary, and replace `simd` with the name of your binary in the snippets. - - -Before actually running the node, we need to initialize the chain, and most importantly, its genesis file. This is done with the `init` subcommand: - -``` -# The argument is the custom username of your node, it should be human-readable.simd init --chain-id my-test-chain -``` - -The command above creates all the configuration files needed for your node to run, as well as a default genesis file, which defines the initial state of the network. - - - All these configuration files are in `~/.simapp` by default, but you can overwrite the location of this folder by passing the `--home` flag to each command, or set an `$APPD_HOME` environment variable (where `APPD` is the name of the binary). - - -The `~/.simapp` folder has the following structure: - -``` -. # ~/.simapp |- data # Contains the databases used by the node. |- config/ |- app.toml # Application-related configuration file. |- config.toml # CometBFT-related configuration file. |- genesis.json # The genesis file. |- node_key.json # Private key to use for node authentication in the p2p protocol. |- priv_validator_key.json # Private key to use as a validator in the consensus protocol. -``` - -## Updating Some Default Settings[​](#updating-some-default-settings "Direct link to Updating Some Default Settings") - -If you want to change any field values in configuration files (for ex: genesis.json) you can use `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) & `sed` commands to do that. Few examples are listed here. 
- -``` -# to change the chain-idjq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json# to enable the api serversed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml# to change the voting_periodjq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json# to change the inflationjq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json -``` - -### Client Interaction[​](#client-interaction "Direct link to Client Interaction") - -When instantiating a node, GRPC and REST are defaulted to localhost to avoid unknown exposure of your node to the public. It is recommended to not expose these endpoints without a proxy that can handle load balancing or authentication is set up between your node and the public. - - - A commonly used tool for this is [nginx](https://nginx.org). - - -## Adding Genesis Accounts[​](#adding-genesis-accounts "Direct link to Adding Genesis Accounts") - -Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](/v0.53/user/run-node/keyring#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend). - -Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file. Doing so will also make sure your chain is aware of this account's existence: - -``` -simd genesis add-genesis-account $MY_VALIDATOR_ADDRESS 100000000000stake -``` - -Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](/v0.53/user/run-node/keyring#adding-keys-to-the-keyring). Also note that the tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is an 18-digit-precision decimal number, and `denom` is the unique token identifier with its denomination key (e.g. 
`atom` or `uatom`). Here, we are granting `stake` tokens, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead.

Now that your account has some tokens, you need to add a validator to your chain. Validators are special full nodes that participate in the consensus process (implemented in the [underlying consensus engine](/v0.53/learn/intro/sdk-app-architecture#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, you will add your local node (created via the `init` command above) as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`:

```
# Create a gentx.
simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test

# Add the gentx to the genesis file.
simd genesis collect-gentxs
```

A `gentx` does three things:

1. Registers the `validator` account you created as a validator operator account (i.e., the account that controls the validator).
2. Self-delegates the provided `amount` of staking tokens.
3. Links the operator account with a CometBFT node pubkey that will be used for signing blocks. If no `--pubkey` flag is provided, it defaults to the local node pubkey created via the `simd init` command above.
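To make the `{amount}{denom}` convention used in the commands above concrete, here is a minimal, self-contained sketch of a coin-string parser. It is illustrative only: the `Coin` struct, regex, and `ParseCoin` helper are assumptions for this example and are not the SDK's own parser, which applies stricter denom validation.

```go
package main

import (
	"fmt"
	"regexp"
)

// Coin mirrors the {amount}{denom} convention: an integer amount
// followed by a denomination key such as "stake" or "uatom".
type Coin struct {
	Amount string
	Denom  string
}

// coinRe is an illustrative pattern: one or more digits, then a denom
// that starts with a letter (e.g. "100000000000stake").
var coinRe = regexp.MustCompile(`^([0-9]+)([a-zA-Z][a-zA-Z0-9/]*)$`)

// ParseCoin splits a coin expression into its amount and denom parts.
func ParseCoin(s string) (Coin, error) {
	m := coinRe.FindStringSubmatch(s)
	if m == nil {
		return Coin{}, fmt.Errorf("invalid coin expression: %q", s)
	}
	return Coin{Amount: m[1], Denom: m[2]}, nil
}

func main() {
	c, err := ParseCoin("100000000000stake")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Amount, c.Denom)
}
```

Running this on the `add-genesis-account` amount above splits it into `100000000000` and `stake`.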
For more information on `gentx`, use the following command:

```
simd genesis gentx --help
```

## Configuring the Node Using `app.toml` and `config.toml`

The Cosmos SDK automatically generates two configuration files inside `~/.simapp/config`:

* `config.toml`: used to configure CometBFT; learn more in [CometBFT's documentation](https://docs.cometbft.com/v0.37/core/configuration),
* `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST server configuration, state sync...

Both files are heavily commented; please refer to them directly to tweak your node.

One example config to tweak is the `minimum-gas-prices` field inside `app.toml`, which defines the minimum gas prices the validator node is willing to accept for processing a transaction. Depending on the chain, it might be an empty string or not. If it's empty, make sure to edit the field with some value, for example `10token`, or else the node will halt on startup. For the purpose of this tutorial, let's set the minimum gas price to 0:

```
# The minimum gas prices a validator is willing to accept for processing a
# transaction. A transaction's fees must meet the minimum of any denomination
# specified in this config (e.g. 0.25token1;0.0001token2).
minimum-gas-prices = "0stake"
```

When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`.
```
[mempool]
# Setting max-txs to 0 will allow for an unbounded amount of transactions in the mempool.
# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool.
# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount.
#
# Note, this configuration only applies to SDK built-in app-side mempool
# implementations.
max-txs = "-1"
```

## Run a Localnet

Now that everything is set up, you can finally start your node:

```
simd start
```

You should see blocks come in.

The previous command allows you to run a single node. This is enough for the next section on interacting with this node, but you may wish to run multiple nodes at the same time, and see how consensus happens between them.

The naive way would be to run the same commands again in separate terminal windows. This is possible; however, in the Cosmos SDK, we leverage the power of [Docker Compose](https://docs.docker.com/compose/) to run a localnet. If you need inspiration on how to set up your own localnet with Docker Compose, you can have a look at the Cosmos SDK's [`docker-compose.yml`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/docker-compose.yml).

### Standalone App/CometBFT

By default, the Cosmos SDK runs CometBFT in-process with the application. If you want to run the application and CometBFT in separate processes, start the application with the `--with-comet=false` flag and set `rpc.laddr` in `config.toml` to the CometBFT node's RPC address.

## Logging

Logging provides a way to see what is going on with a node. The default logging level is info. This is a global level, and all info logs will be output to the terminal.
If you would like to filter specific logs to the terminal instead of all of them, you can set `module:log_level` pairs.

Example:

In `config.toml`:

```
log_level: "state:info,p2p:info,consensus:info,x/staking:info,x/ibc:info,*error"
```

## State Sync

State sync is the process by which a node syncs to the latest, or close to the latest, state of a blockchain. This is useful for users who don't want to sync all the blocks in history. Read more in the [CometBFT documentation](https://docs.cometbft.com/v0.37/core/state-sync).

State sync works thanks to snapshots. Read how the SDK handles snapshots [here](https://github.com/cosmos/cosmos-sdk/blob/825245d/store/snapshots/README.md).

### Local State Sync

Local state sync works similarly to normal state sync, except that it works off a local snapshot of state instead of one provided via the p2p network. The steps to start local state sync are similar to normal state sync, with a few differences:

1. As mentioned in [https://docs.cometbft.com/v0.37/core/state-sync](https://docs.cometbft.com/v0.37/core/state-sync), one must set a height and hash in the `config.toml`, along with a few RPC servers (the aforementioned link has instructions on how to do this).
2. Run `<appd> snapshots restore` to restore a local snapshot (note: first load it from a file with the *load* command).
3. Bootstrap the Comet state to start the node after the snapshot has been ingested. This can be done with the bootstrap command `<appd> comet bootstrap-state`.

### Snapshots Commands

The Cosmos SDK provides commands for managing snapshots. These commands can be added in an app with the following snippet in `cmd/<appd>/root.go`:

```
import (
	"github.com/cosmos/cosmos-sdk/client/snapshot"
)

func initRootCmd(/* ... */) {
	// ...
	rootCmd.AddCommand(
		snapshot.Cmd(appCreator),
	)
}
```

Then the following commands are available at `<appd> snapshots [command]`:

* **list**: list local snapshots
* **load**: load a snapshot archive file into the snapshot store
* **restore**: restore app state from a local snapshot
* **export**: export app state to the snapshot store
* **dump**: dump the snapshot as a portable archive format
* **delete**: delete a local snapshot

diff --git a/docs/sdk/v0.53/user/run-node/txs.mdx b/docs/sdk/v0.53/user/run-node/txs.mdx
deleted file mode 100644
index 1e55a2f7..00000000
--- a/docs/sdk/v0.53/user/run-node/txs.mdx
+++ /dev/null
@@ -1,219 +0,0 @@

---
title: "Generating, Signing and Broadcasting Transactions"
description: "Version: v0.53"
---

This document describes how to generate an (unsigned) transaction, sign it (with one or multiple keys), and broadcast it to the network.

## Using the CLI

The easiest way to send transactions is using the CLI, as we have seen on the previous page when [interacting with a node](/v0.53/user/run-node/interact-node#using-the-cli). For example, running the following command

```
simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --keyring-backend test
```

will run the following steps:

* generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console.
* ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account.
* fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because we have [set up the CLI's keyring](/v0.53/user/run-node/keyring) in a previous step.
* sign the generated transaction with the keyring's account.
* broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint.

The CLI bundles all the necessary steps into a simple-to-use user experience.
However, it's possible to run all the steps individually too.

### Generating a Transaction

Generating a transaction can simply be done by appending the `--generate-only` flag to any `tx` command, e.g.:

```
simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --generate-only
```

This will output the unsigned transaction as JSON in the console. We can also save the unsigned transaction to a file (to be passed around between signers more easily) by appending `> unsigned_tx.json` to the above command.

### Signing a Transaction

Signing a transaction using the CLI requires the unsigned transaction to be saved in a file. Let's assume the unsigned transaction is in a file called `unsigned_tx.json` in the current directory (see the previous paragraph on how to do that). Then, simply run the following command:

```
simd tx sign unsigned_tx.json --chain-id my-test-chain --keyring-backend test --from $MY_VALIDATOR_ADDRESS
```

This command will decode the unsigned transaction and sign it with `SIGN_MODE_DIRECT` using `$MY_VALIDATOR_ADDRESS`'s key, which we already set up in the keyring. The signed transaction will be output as JSON to the console, and, as above, we can save it to a file by appending `--output-document signed_tx.json`.

Some useful flags to consider in the `tx sign` command:

* `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`,
* `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for offline signing, i.e. signing in a secure environment which doesn't have access to the internet.
#### Signing with Multiple Signers

Please note that signing a transaction with multiple signers or with a multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not yet possible. You may follow [this GitHub issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info.

Signing with multiple signers is done with the `tx multisign` command. This command assumes that all signers use `SIGN_MODE_LEGACY_AMINO_JSON`. The flow is similar to the `tx sign` command flow, but instead of signing an unsigned transaction file, each signer signs the file signed by the previous signer(s). The `tx multisign` command will append signatures to the existing transaction. It is important that signers sign the transaction **in the same order** as given by the transaction, which is retrievable using the `GetSigners()` method.

For example, starting with `unsigned_tx.json`, and assuming the transaction has three signers, we would run:

```
# Let signer1 sign the unsigned tx.
simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json

# Now signer1 sends the partial_tx_1.json to signer2.
# Signer2 appends their signature:
simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json

# Signer2 sends the partial_tx_2.json file to signer3, and signer3 can append their signature:
simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json
```

### Broadcasting a Transaction

Broadcasting a transaction is done using the following command:

```
simd tx broadcast tx_signed.json
```

You may optionally pass the `--broadcast-mode` flag to specify which response to receive from the node:

* `sync`: the CLI waits for a CheckTx
execution response only.
* `async`: the CLI returns immediately (the transaction might fail).

### Encoding a Transaction

In order to broadcast a transaction using the gRPC or REST endpoints, the transaction will need to be encoded first. This can be done using the CLI.

Encoding a transaction is done using the following command:

```
simd tx encode tx_signed.json
```

This will read the transaction from the file, serialize it using Protobuf, and output the transaction bytes as base64 in the console.

### Decoding a Transaction

The CLI can also be used to decode transaction bytes.

Decoding a transaction is done using the following command:

```
simd tx decode [protobuf-byte-string]
```

This will decode the transaction bytes and output the transaction as JSON in the console. You can also save the transaction to a file by appending `> tx.json` to the above command.

## Programmatically with Go

It is possible to manipulate transactions programmatically via Go using the Cosmos SDK's `TxBuilder` interface.

### Generating a Transaction

Before generating a transaction, a new instance of a `TxBuilder` needs to be created. Since the Cosmos SDK supports both Amino and Protobuf transactions, the first step is to decide which encoding scheme to use. All the subsequent steps remain unchanged, whether you're using Amino or Protobuf, as `TxBuilder` abstracts the encoding mechanisms. In the following snippet, we will use Protobuf.

```
import (
	"github.com/cosmos/cosmos-sdk/simapp"
)

func sendTx() error {
	// Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function.
	app := simapp.NewSimApp(...)

	// Create a new TxBuilder.
	txBuilder := app.TxConfig().NewTxBuilder()

	// --snip--
}
```

We can also set up some keys and addresses that will send and receive the transactions. Here, for the purpose of the tutorial, we will be using some dummy data to create keys.

```
import (
	"github.com/cosmos/cosmos-sdk/testutil/testdata"
)

priv1, _, addr1 := testdata.KeyTestPubAddr()
priv2, _, addr2 := testdata.KeyTestPubAddr()
priv3, _, addr3 := testdata.KeyTestPubAddr()
```

Populating the `TxBuilder` can be done via its methods:

client/tx\_config.go

```
loading...
```

[See full example on GitHub](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/tx_config.go#L39-L57)

```
import (
	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
)

func sendTx() error {
	// --snip--

	// Define two x/bank MsgSend messages:
	// - from addr1 to addr3,
	// - from addr2 to addr3.
	// This means that the transaction needs two signers: addr1 and addr2.
	msg1 := banktypes.NewMsgSend(addr1, addr3, types.NewCoins(types.NewInt64Coin("atom", 12)))
	msg2 := banktypes.NewMsgSend(addr2, addr3, types.NewCoins(types.NewInt64Coin("atom", 34)))

	err := txBuilder.SetMsgs(msg1, msg2)
	if err != nil {
		return err
	}

	txBuilder.SetGasLimit(...)
	txBuilder.SetFeeAmount(...)
	txBuilder.SetMemo(...)
	txBuilder.SetTimeoutHeight(...)
}
```

At this point, `TxBuilder`'s underlying transaction is ready to be signed.

#### Generating an Unordered Transaction

Starting with Cosmos SDK v0.53.0, users may send unordered transactions to chains that have the feature enabled.

Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value, the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions.
Services should be aware that when the transaction is unordered, the transaction sequence will always be zero.

Using the example above, we can set the required fields to mark a transaction as unordered. By default, unordered transactions charge an extra 2240 units of gas to offset the additional storage overhead that supports their functionality. The extra units of gas are customizable and therefore vary by chain, so be sure to check the chain's ante handler for the gas value set, if any.

```
func sendTx() error {
	// --snip--

	expiration := 5 * time.Minute
	txBuilder.SetUnordered(true)
	txBuilder.SetTimeoutTimestamp(time.Now().Add(expiration + (1 * time.Nanosecond)))
}
```

Unordered transactions from the same account must use a unique timeout timestamp value. The difference between each timeout timestamp value may be as small as a nanosecond, however.

```
import (
	"github.com/cosmos/cosmos-sdk/client"
)

func sendMessages(txBuilders []client.TxBuilder) error {
	// --snip--

	expiration := 5 * time.Minute
	for i, txb := range txBuilders {
		txb.SetUnordered(true)
		// Offset each timestamp by one extra nanosecond so every
		// transaction from this account gets a unique value.
		txb.SetTimeoutTimestamp(time.Now().Add(expiration + time.Duration(i+1)*time.Nanosecond))
	}
}
```

### Signing a Transaction

We set the encoding config to use Protobuf, which will use `SIGN_MODE_DIRECT` by default. As per [ADR-020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md), each signer needs to sign the `SignerInfo`s of all other signers. This means that we need to perform two steps sequentially:

* for each signer, populate the signer's `SignerInfo` inside `TxBuilder`,
* once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed).

In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`.
The current API requires us to first perform a round of `SetSignatures()` *with empty signatures*, only to populate the `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload.

```
import (
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
	"github.com/cosmos/cosmos-sdk/types/tx/signing"
	xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"
)

func sendTx() error {
	// --snip--

	privs := []cryptotypes.PrivKey{priv1, priv2}
	accNums := []uint64{..., ...} // The accounts' account numbers
	accSeqs := []uint64{..., ...} // The accounts' sequence numbers

	// First round: we gather all the signer infos. We use the "set empty
	// signature" hack to do that.
	var sigsV2 []signing.SignatureV2
	for i, priv := range privs {
		sigV2 := signing.SignatureV2{
			PubKey: priv.PubKey(),
			Data: &signing.SingleSignatureData{
				SignMode:  encCfg.TxConfig.SignModeHandler().DefaultMode(),
				Signature: nil,
			},
			Sequence: accSeqs[i],
		}

		sigsV2 = append(sigsV2, sigV2)
	}
	err := txBuilder.SetSignatures(sigsV2...)
	if err != nil {
		return err
	}

	// Second round: all signer infos are set, so each signer can sign.
	sigsV2 = []signing.SignatureV2{}
	for i, priv := range privs {
		signerData := xauthsigning.SignerData{
			ChainID:       chainID,
			AccountNumber: accNums[i],
			Sequence:      accSeqs[i],
		}
		sigV2, err := tx.SignWithPrivKey(
			encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData,
			txBuilder, priv, encCfg.TxConfig, accSeqs[i])
		if err != nil {
			return err
		}

		sigsV2 = append(sigsV2, sigV2)
	}
	err = txBuilder.SetSignatures(sigsV2...)
	if err != nil {
		return err
	}
}
```

The `TxBuilder` is now correctly populated. To print it, you can use the `TxConfig` interface from the initial encoding config `encCfg`:

```
func sendTx() error {
	// --snip--

	// Generate Protobuf-encoded bytes.
	txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx())
	if err != nil {
		return err
	}

	// Generate a JSON string.
	txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx())
	if err != nil {
		return err
	}
	txJSON := string(txJSONBytes)
}
```

### Broadcasting a Transaction

The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also possible. An overview of the differences between these methods is given [here](/v0.53/learn/advanced/grpc_rest). For this tutorial, we will only describe the gRPC method.

```
import (
	"context"
	"fmt"

	"google.golang.org/grpc"

	"github.com/cosmos/cosmos-sdk/types/tx"
)

func sendTx(ctx context.Context) error {
	// --snip--

	// Create a connection to the gRPC server.
	grpcConn := grpc.Dial(
		"127.0.0.1:9090",    // Or your gRPC server address.
		grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism.
	)
	defer grpcConn.Close()

	// Broadcast the tx via gRPC. We create a new client for the Protobuf Tx
	// service.
	txClient := tx.NewServiceClient(grpcConn)
	// We then call the BroadcastTx method on this client.
	grpcRes, err := txClient.BroadcastTx(
		ctx,
		&tx.BroadcastTxRequest{
			Mode:    tx.BroadcastMode_BROADCAST_MODE_SYNC,
			TxBytes: txBytes, // Proto-binary of the signed transaction, see previous step.
		},
	)
	if err != nil {
		return err
	}

	fmt.Println(grpcRes.TxResponse.Code) // Should be `0` if the tx is successful

	return nil
}
```

#### Simulating a Transaction

Before broadcasting a transaction, we may sometimes want to dry-run the transaction, to estimate some information about the transaction without actually committing it.
This is called simulating a transaction, and can be done as follows:

```
import (
	"context"
	"fmt"
	"testing"

	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/types/tx"
	authtx "github.com/cosmos/cosmos-sdk/x/auth/tx"
)

func simulateTx() error {
	// --snip--

	// Simulate the tx via gRPC. We create a new client for the Protobuf Tx
	// service.
	txClient := tx.NewServiceClient(grpcConn)
	txBytes := /* Fill in with your signed transaction bytes. */

	// We then call the Simulate method on this client.
	grpcRes, err := txClient.Simulate(
		context.Background(),
		&tx.SimulateRequest{
			TxBytes: txBytes,
		},
	)
	if err != nil {
		return err
	}

	fmt.Println(grpcRes.GasInfo) // Prints estimated gas used.

	return nil
}
```

## Using gRPC

It is not possible to generate or sign a transaction using gRPC, only to broadcast one. In order to broadcast a transaction using gRPC, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go.

### Broadcasting a Transaction

Broadcasting a transaction using the gRPC endpoint can be done by sending a `BroadcastTx` request as follows, where `txBytes` are the protobuf-encoded bytes of a signed transaction:

```
grpcurl -plaintext \
    -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \
    localhost:9090 \
    cosmos.tx.v1beta1.Service/BroadcastTx
```

## Using REST

It is not possible to generate or sign a transaction using REST, only to broadcast one. In order to broadcast a transaction using REST, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go.
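The REST broadcast body can also be assembled programmatically. The following minimal Go sketch builds the POST request for the `/cosmos/tx/v1beta1/txs` endpoint; the node URL (`localhost:1317`, the default REST port) and the `newBroadcastRequest` helper are assumptions for this example, and the signed-transaction bytes would come from the encoding step described earlier.

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
)

// broadcastReq mirrors the JSON body accepted by the REST broadcast
// endpoint: base64-encoded signed tx bytes plus a broadcast mode.
type broadcastReq struct {
	TxBytes string `json:"tx_bytes"`
	Mode    string `json:"mode"`
}

// newBroadcastRequest builds (but does not send) the POST request.
func newBroadcastRequest(nodeURL string, signedTx []byte) (*http.Request, error) {
	body, err := json.Marshal(broadcastReq{
		TxBytes: base64.StdEncoding.EncodeToString(signedTx),
		Mode:    "BROADCAST_MODE_SYNC",
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, nodeURL+"/cosmos/tx/v1beta1/txs", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newBroadcastRequest("http://localhost:1317", []byte{0x0a, 0x01})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
}
```

Sending the request is then a plain `http.DefaultClient.Do(req)` against a node with the REST server enabled in `app.toml`.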
### Broadcasting a Transaction

Broadcasting a transaction using the REST endpoint (served by `gRPC-gateway`) can be done by sending a POST request as follows, where `txBytes` are the protobuf-encoded bytes of a signed transaction:

```
curl -X POST \
    -H "Content-Type: application/json" \
    -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \
    localhost:1317/cosmos/tx/v1beta1/txs
```

## Using CosmJS (JavaScript & TypeScript)

CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. Please see [https://cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs) for more information. As of January 2021, CosmJS documentation is still a work in progress.

diff --git a/scripts/versioning/GSHEET-SETUP.md b/scripts/GSHEET-SETUP.md
similarity index 92%
rename from scripts/versioning/GSHEET-SETUP.md
rename to scripts/GSHEET-SETUP.md
index b1df6dc6..6ecf757f 100644
--- a/scripts/versioning/GSHEET-SETUP.md
+++ b/scripts/GSHEET-SETUP.md
@@ -30,13 +30,13 @@ Uses version-specific tab `v0.5.0` with snapshot data.

 ## Tab Reference Format

-** Correct**: Use **sheet tab name** as identifier:
+**Correct**: Use **sheet tab name** as identifier:

 - `sheetTab="v0.4.x"`
 - `sheetTab="v0.5.0"`
 - `sheetTab="eip_compatibility_data"`

-** Incorrect**: GID numbers, indices, or other identifiers do not work.
+**Incorrect**: GID numbers, indices, or other identifiers do not work.

 ## API Setup Process

@@ -68,7 +68,7 @@ Uses version-specific tab `v0.5.0` with snapshot data.
 3. Click "Add Key" → "Create New Key"
 4. Select "JSON" format
 5. Click "Create"
-6. Save file as `scripts/versioning/service-account-key.json`
+6. Save file as `scripts/service-account-key.json`

 ### 5. Share Spreadsheet

@@ -81,7 +81,7 @@ Uses version-specific tab `v0.5.0` with snapshot data.

 ### 6.
Install Dependencies

```bash
-cd scripts/versioning
+cd scripts
 npm install
```

diff --git a/scripts/MIGRATION_README.md b/scripts/MIGRATION_README.md
new file mode 100644
index 00000000..83641858
--- /dev/null
+++ b/scripts/MIGRATION_README.md
@@ -0,0 +1,352 @@

# Docusaurus to Mintlify Migration Scripts

Convert Docusaurus documentation to Mintlify format with proper MDX syntax, code formatting, and navigation structure.

A comprehensive migration tool that handles complex conversions, caching, and automatic fixes while reporting issues that need manual intervention.

## Scripts

### `migrate-docusaurus.js`

Full repository migration with versioning support and navigation updates.

**Usage:**

```bash
# Interactive mode (with prompts)
node migrate-docusaurus.js

# Non-interactive mode (migrate all versions)
node migrate-docusaurus.js <source-repo> <output-dir> <product> [--update-nav]

# Examples
node migrate-docusaurus.js ~/repos/cosmos-sdk-docs ./tmp sdk
node migrate-docusaurus.js ~/repos/cosmos-sdk-docs ./tmp sdk --update-nav
```

**Parameters:**

- `<source-repo>`: Path to Docusaurus repository (e.g., `~/repos/cosmos-sdk-docs`)
- `<output-dir>`: Output directory for migrated files (e.g., `./tmp`)
- `<product>`: **Required** - Product name for link resolution (e.g., `sdk`, `ibc`, `evm`, `cometbft`)
  - Determines internal link structure: `/docs/<product>/<version>/`
  - Critical for proper link resolution in final documentation
- `--update-nav`: Optional flag to update `docs.json` navigation

**Features:**

- **Content-based caching**: Processes identical content only once using SHA-256 checksums
- **Intelligent processing**: Cache hits for duplicate content, full processing for unique content
- **GitHub reference fetching**: Automatically fetches code from GitHub URLs in reference blocks
- **Title generation**: Creates titles from filenames when missing (e.g., `adr-046-module-params` → "ADR 046 Module Params")
- **Smart comment handling**: Removes long HTML comments (>10 lines) that break JSX conversion
- 
**Comprehensive validation**: Reports only real issues requiring manual intervention
- **Performance optimized**: Shows cache statistics and duplicate percentage
- **Proper exit handling**: Script exits cleanly with appropriate exit codes
- Migrates all versions: `docs/` (next) and `versioned_docs/version-*`
- Preserves `sidebar_position` for navigation ordering
- AST-based processing for accuracy
- Removes file extensions from internal links for Mintlify compatibility

### `migrate-single-file.js`

Convert individual Docusaurus markdown files to Mintlify MDX.

**Usage:**

```bash
node migrate-single-file.js <input-file> <output-file>

# Example
node migrate-single-file.js ../cosmos-sdk-docs/docs/learn/events.md ../tmp/events.mdx
```

## Migration Workflow

### Content Processing Pipeline

1. **Content Checksumming**

   - SHA-256 hash calculated for each file's raw content
   - Duplicate detection across versions
   - Cache lookup for previously processed content

2. **Transformation Steps** (for new content only)

   - Parse frontmatter and extract metadata
   - Convert HTML comments to JSX: `<!-- -->` → `{/* */}`
   - Process with AST for link fixing and code enhancement
   - Apply MDX fixes (escape underscores, fix JSX expressions)
   - Convert admonitions to Mintlify format
   - Validate content and report issues
   - Generate Mintlify-compatible frontmatter

3. **Caching System**

   - Transformation results cached by content checksum
   - Validation errors cached separately
   - Cache hits skip processing but still write output files
   - Statistics track unique content vs duplicates

4. **GitHub Reference Fetching**

   - Detects `go reference` blocks with GitHub URLs
   - Automatically fetches actual code content
   - Falls back to comment if fetch fails
   - Only fetches once per unique URL

5. 
**Validation & Error Reporting**
   - Checks for unclosed JSX expressions
   - Validates matching open/close tags
   - Reports only issues needing manual fixes
   - No duplicate reporting for identical content

## Conversions Applied

### Admonitions

```markdown
# Docusaurus

:::note Title
Content
:::

# Mintlify

<Note>
**Title**
Content
</Note>
```

### Code Blocks

- Removes "reference" keyword from language specification
- Adds `expandable` for blocks >10 lines
- Auto-detects language (Go, JavaScript, Python, JSON, etc.)
- Preserves reference URLs as comments

### Links & Paths

- Fixes relative paths (`../file.md` → absolute path)
- Removes file extensions from internal links (`.md` and `.mdx` removed)
- Updates versioned links: `/docs/` → `/docs/<product>/<version>/`
- Maintains external links unchanged

### MDX Compatibility

- HTML comments → JSX comments (short ones only)
- Long HTML comments (>10 lines) are removed entirely
- `<br>` → `<br/>`
- Placeholders like ``, `` wrapped in backticks
- Template variables `{foo}` wrapped in backticks
- JSON objects in tables wrapped in backticks
- Escapes underscores in table cells

### Frontmatter

- Generates title from filename if missing (e.g., `adr-046-module-params` → "ADR 046 Module Params")
- Extracts title from H1 heading if not in frontmatter
- Preserves `sidebar_position` for navigation ordering
- Cleans multi-line descriptions for single-line format
- Removes problematic content from descriptions
- Generates clean Mintlify-compatible frontmatter
- Removes Docusaurus-specific fields

## File Structure

**Input (Docusaurus):**

```
cosmos-sdk-docs/
├── docs/                # Current/next version
├── versioned_docs/
│   ├── version-0.47/
│   ├── version-0.50/
│   └── version-0.53/
```

**Output (Mintlify):**

```
docs/
└── sdk/
    ├── next/
    ├── v0.47/
    ├── v0.50/
    └── v0.53/
```

## Navigation

When using `--update-nav`, the script updates `docs.json` with:

- Proper dropdown structure for products
- Version-specific navigation tabs
- Files sorted by `sidebar_position`
- Conflict resolution for duplicate positions

## Edge Cases Handled

- Code blocks with indented backticks
- Unclosed HTML comments
- Reference syntax with GitHub URLs
- Command placeholders in documentation
- Comparison operators and arrows
- Mixed content in admonitions
- Tables with curly brackets
- Go-specific patterns (`interface{}`, `map[string]interface{}`)

## Migration Output

### Reports Generated

**Per-version statistics:**

```
Conversion stats for v0.50:
  - Total files: 150
  - Unique content processed: 120
  - Cache hits (duplicates): 30
```

**Final cache statistics:**

```
 Migration Cache Statistics:
==================================================
Total files processed: 600
Unique content blocks: 450
Duplicate files (cache hits): 150
Duplicate percentage: 25.0%
+================================================== +``` + +**Migration report example:** + +``` + MIGRATION REPORT +================================================================================ + + ERRORS (6) - These need manual fixes: +---------------------------------------- + path/to/file.md: + Line 45: Unclosed opening tag + Suggestion: Add matching closing tag + +REMOVED CONTENT (2) - Content that was removed: +---------------------------------------- +build/building-modules/09-module-interfaces.md: + Removed long HTML comment: 69-line comment removed (likely documentation notes) + +================================================================================ +Summary: 6 errors, 0 warnings, 2 removals +================================================================================ +``` + +## Dependencies + +```json +{ + "dependencies": { + "gray-matter": "^4.0.3", + "unified": "^11.0.0", + "remark-parse": "^11.0.0", + "remark-stringify": "^11.0.0", + "remark-gfm": "^4.0.0", + "unist-util-visit": "^5.0.0" + } +} +``` + +## Migration Report Details + +The script generates a comprehensive report showing: + +1. **ERRORS**: Issues that require manual intervention + + - Unclosed JSX expressions + - Mismatched tags + - Syntax that couldn't be automatically fixed + +2. **WARNINGS**: Issues that were automatically fixed but should be reviewed + +3. 
**REMOVED CONTENT**: Content that was removed because it couldn't be safely converted + - Long HTML comments (>10 lines) + - Malformed syntax that would break MDX + +## Important Notes + +### Product Parameter + +- **Required** for all migrations (interactive and non-interactive) +- Determines internal link structure: `/docs///` +- Without correct product, internal documentation links will be broken +- Common values: `sdk`, `ibc`, `evm`, `cometbft` + +### Processing Behavior + +- Identical content across versions is processed only once +- Each file still gets written to its appropriate version directory +- Validation errors are reported for each file location +- Cache system significantly improves performance for large repositories + +### Recent Improvements (2025) + +- Content-based caching with SHA-256 checksums +- Automatic GitHub code fetching for reference blocks +- Title generation from filenames when missing +- Smart handling of long HTML comments (removed with tracking) +- File extension removal from internal links +- Proper script exit handling with exit codes +- Required product parameter for proper link resolution +- Fixed JSX comment escaping issues +- Improved validation with fewer false positives +- Template variables and JSON in tables properly escaped + +### Technical Details + +- Code content is preserved exactly - no arbitrary removal +- Reference URLs are fetched or converted to comments +- AST processing used where possible, regex as fallback +- Multiple passes ensure proper sequencing of fixes +- Code blocks are protected during transformations +- Script exits with code 0 on success, 1 on failure +- Long HTML comments (>10 lines) are removed to prevent JSX errors +- Placeholders like `` are wrapped in backticks automatically + +## Troubleshooting + +### Common Issues + +1. **Script doesn't exit**: Fixed in latest version - now properly exits with appropriate codes + +2. 
**False positive JSX errors**: Most "Unclosed JSX expression" errors are false positives from validation checking code inside code blocks + +3. **Missing titles**: Script now generates titles from filenames when no title exists in frontmatter or H1 + +4. **Long HTML comments breaking conversion**: Comments >10 lines are automatically removed and reported + +5. **Internal links not working**: Ensure you specify the correct product parameter (e.g., `sdk`, `ibc`) + +### Migration Best Practices + +1. Always specify the product parameter for proper link resolution +2. Review the "REMOVED CONTENT" section to ensure no important content was removed +3. Check files with reported errors - many may be false positives +4. Run `npx mint dev` to test the migrated documentation +5. Use cache statistics to verify efficient processing + +## Example Commands + +```bash +# Migrate SDK docs without updating navigation +node migrate-docusaurus.js ~/repos/cosmos-sdk-docs ./tmp sdk + +# Migrate IBC docs and update navigation +node migrate-docusaurus.js ~/repos/ibc-go-docs ./tmp ibc --update-nav + +# Migrate a single file +node migrate-single-file.js ~/repos/cosmos-sdk-docs/docs/learn/intro.md ./tmp/intro.mdx +``` diff --git a/scripts/versioning/README.md b/scripts/VERSIONING-README.md similarity index 84% rename from scripts/versioning/README.md rename to scripts/VERSIONING-README.md index 31db04c7..6666ef13 100644 --- a/scripts/versioning/README.md +++ b/scripts/VERSIONING-README.md @@ -15,7 +15,7 @@ The versioning system provides: ### Version Structure -``` +```sh docs/ ├── evm/ │ ├── next/ # Active development (EVM) @@ -47,14 +47,29 @@ Docs now use product-specific dropdowns with per-product versions: { "dropdown": "EVM", "versions": [ - { "version": "next", "tabs": [/* docs/evm/next/... */] }, - { "version": "v0.4.x", "tabs": [/* docs/evm/v0.4.x/... */] } + { + "version": "next", + "tabs": [ + /* docs/evm/next/... 
*/ + ] + }, + { + "version": "v0.4.x", + "tabs": [ + /* docs/evm/v0.4.x/... */ + ] + } ] }, { "dropdown": "SDK", "versions": [ - { "version": "v0.53", "tabs": [/* docs/sdk/v0.53/... */] } + { + "version": "v0.53", + "tabs": [ + /* docs/sdk/v0.53/... */ + ] + } ] } ] @@ -89,13 +104,14 @@ Each product can be versioned independently. The version manager auto-detects th ### Prerequisites 1. **Google Sheets API Access** + - Service account key saved as `service-account-key.json` - - See [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/versioning/GSHEET-SETUP.md) for detailed setup + - See [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/GSHEET-SETUP.md) for detailed setup 2. **Install Dependencies** ```bash - cd scripts/versioning + cd scripts npm install ``` @@ -104,7 +120,7 @@ Each product can be versioned independently. The version manager auto-detects th Run the version manager to freeze the current version in a chosen docs subdirectory (e.g., `evm`, `sdk`, `ibc`) and start a new one. The flow is fully interactive by default: ```bash -cd scripts/versioning +cd scripts npm run freeze ``` @@ -171,25 +187,47 @@ npm run sheets #### `release-notes.js` -Standalone changelog and release notes management. +Changelog and release notes management. 
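
As a rough sketch of the fetch-and-extract step this script performs (the names below — `PRODUCT_REPOS`, `changelogUrl`, `extractSection` — are illustrative only, not the script's actual API; the real script also checks release branches, tags, and alternate locations such as `RELEASE_NOTES.md`):

```javascript
// Illustrative sketch only: product → repo resolution and changelog-section
// extraction, assuming a Keep-a-Changelog-style file with "## " version headings.
const PRODUCT_REPOS = {
  evm: "cosmos/evm",
  sdk: "cosmos/cosmos-sdk",
  ibc: "cosmos/ibc-go",
  cometbft: "cometbft/cometbft",
};

// Hypothetical helper: raw-content URL for a product's changelog at a given ref.
function changelogUrl(product, ref = "main") {
  const repo = PRODUCT_REPOS[product];
  if (!repo) throw new Error(`Unknown product: ${product}`);
  return `https://raw.githubusercontent.com/${repo}/${ref}/CHANGELOG.md`;
}

// Hypothetical helper: pull one version's section out of a changelog string.
function extractSection(changelog, version) {
  const lines = changelog.split("\n");
  const start = lines.findIndex(
    (l) => l.startsWith("## ") && l.includes(version)
  );
  if (start === -1) return null; // version not present in this changelog
  let end = lines.slice(start + 1).findIndex((l) => l.startsWith("## "));
  end = end === -1 ? lines.length : start + 1 + end;
  return lines.slice(start, end).join("\n").trim();
}
```

The extracted section would then be converted to Mintlify MDX and written under the product's release-notes path.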
**Usage:** ```bash -npm run release-notes [version|latest] [evm|sdk|ibc] +# Fetch latest release for default product (EVM) +npm run release-notes + +# Fetch latest for specific product +npm run release-notes latest sdk + +# Fetch all configured versions for a product +npm run release-notes all sdk + +# Fetch specific version for a product +npm run release-notes v0.53 sdk + +# Fetch specific version with custom source +npm run release-notes release/v0.53.x sdk v0.53 ``` **What it does:** -- Fetches changelog from the product's GitHub repository (auto-detects `CHANGELOG.md`/variants) -- Parses and converts to Mintlify format -- Updates release notes file in `docs//next/` +- Fetches changelog from the product's GitHub repository +- Supports multiple versions per product +- Handles version-specific release branches +- Converts to Mintlify format +- Updates release notes in `docs///changelog/release-notes.mdx` -**Sources:** +**Product Configurations:** -- evm → `cosmos/evm` -- sdk → `cosmos/cosmos-sdk` -- ibc → `cosmos/ibc-go` +- **evm** → `cosmos/evm` (versions: `next`) +- **sdk** → `cosmos/cosmos-sdk` (versions: `v0.53`, `v0.50`, `v0.47`) +- **ibc** → `cosmos/ibc-go` (versions: `v8`, `v7`, `v6`) +- **cometbft** → `cometbft/cometbft` (versions: `v0.38`, `v0.37`) + +**Version Resolution:** + +- EVM: Uses `next` as the primary version +- SDK/IBC: Checks release branches first (`release/vX.Y.x`), then tags, then main +- Searches for `CHANGELOG.md`, `RELEASE_NOTES.md`, and other common locations ### Supporting Scripts @@ -209,7 +247,7 @@ Navigation structure cleanup utility. ## Google Sheets Integration -EIP compatibility data is versioned through Google Sheets tabs. See [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/versioning/GSHEET-SETUP.md) for setup and configuration. +EIP compatibility data is versioned through Google Sheets tabs. See [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/GSHEET-SETUP.md) for setup and configuration. 
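
As a rough illustration of the tab-per-version model (a minimal sketch, not the actual implementation: `buildRange` and `fetchEipRows` are hypothetical names, the `A1:F100` range is a placeholder, and real access goes through the service-account credentials described in GSHEET-SETUP.md rather than an API key):

```javascript
// Sketch: "next" reads the main sheet; frozen versions read their own tab,
// which the freeze process creates with the version as its name (e.g. "v0.4.x").
function buildRange(version, a1 = "A1:F100") {
  return version === "next" ? a1 : `'${version}'!${a1}`;
}

// Hypothetical read against the Sheets v4 REST API, shown only for the call
// shape (requires a real spreadsheet ID and credentials in practice).
async function fetchEipRows(spreadsheetId, version, apiKey) {
  const range = encodeURIComponent(buildRange(version));
  const url = `https://sheets.googleapis.com/v4/spreadsheets/${spreadsheetId}/values/${range}?key=${apiKey}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Sheets API error: ${res.status}`);
  const { values = [] } = await res.json();
  return values; // rows of EIP compatibility data for that version
}
```

Because the tab name encodes the version, frozen docs keep reading the data as it existed at freeze time while `next` continues to track the live sheet.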
### Shared Component @@ -243,16 +281,19 @@ Active development uses it without props (defaults to main sheet): ### Version Freeze Process 1. **Preparation Phase** + - Pick product subdirectory (`evm`, `sdk`, `ibc`) - Determine current version to freeze from `versions.json` for that product (or prompt) - Prompt for new development version - Check/update release notes (auto‑fetch when missing) 2. **Freeze Phase** + - Copy `docs//next/` to version directory (e.g., `docs/evm/v0.4.x/`) - (EVM only) Create Google Sheets tab with version name and copy EIP data 3. **Update Phase** + - (EVM only) Generate MDX with sheet tab reference - Update internal links (`/docs//next/` → `/docs///`) - Keep snippet imports unchanged (`/snippets/`) @@ -269,10 +310,12 @@ Active development uses it without props (defaults to main sheet): The system handles three types of paths: 1. **Document paths**: Updated to version-specific + - Before: `/docs/evm/next/documentation/concepts/accounts` - After: `/docs/evm/v0.4.x/documentation/concepts/accounts` 2. **Snippet imports**: Remain unchanged (shared) + - Always: `/snippets/icons.mdx` 3. **External links**: Remain unchanged @@ -303,7 +346,7 @@ See [Mintlify Constraints](../../CLAUDE.md) for details. ### 1. Google Sheets API Setup -Follow [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/versioning/GSHEET-SETUP.md) to: +Follow [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/GSHEET-SETUP.md) to: 1. Create a Google Cloud project 2. Enable Sheets API @@ -314,7 +357,7 @@ Follow [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/versio ### 2. Install Dependencies ```bash -cd scripts/versioning +cd scripts npm install ``` @@ -323,7 +366,7 @@ npm install Save your service account key as: ```sh -scripts/versioning/service-account-key.json +scripts/service-account-key.json ``` ### 4. 
Test Connection @@ -347,7 +390,7 @@ console.log('✓ Google Sheets API configured'); ```bash # Start the version freeze process -cd scripts/versioning && npm run freeze +cd scripts && npm run freeze # Enter prompts Enter the docs subdirectory to version [evm, sdk, ibc]: evm @@ -358,10 +401,10 @@ Enter the new development version (e.g., v0.5.0): v0.5.0 ```bash # Fetch latest release notes into docs/evm/next -node scripts/versioning/release-notes.js latest evm +node scripts/release-notes.js latest evm # Or fetch specific release into docs/evm/next -node scripts/versioning/release-notes.js v0.4.1 evm +node scripts/release-notes.js v0.4.1 evm ``` ### Manual Navigation Update @@ -419,7 +462,7 @@ rm -rf docs//v0.5.0/ ## File Structure ``` -scripts/versioning/ +scripts/ ├── README.md # This file ├── GSHEET-SETUP.md # Google Sheets API setup guide ├── version-manager.js # Main orchestration (ESM) @@ -436,5 +479,5 @@ scripts/versioning/ - [Main README](../../README.md) - Project overview - [CLAUDE.md](../../CLAUDE.md) - AI assistant context -- [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/versioning/GSHEET-SETUP.md) - Google Sheets API setup +- [GSHEET-SETUP.md](https://github.com/cosmos/docs/blob/main/scripts/GSHEET-SETUP.md) - Google Sheets API setup - [Mintlify Documentation](https://mintlify.com/docs) - MDX reference diff --git a/scripts/migrate-docusaurus.js b/scripts/migrate-docusaurus.js new file mode 100755 index 00000000..e01282ab --- /dev/null +++ b/scripts/migrate-docusaurus.js @@ -0,0 +1,3055 @@ +#!/usr/bin/env node + +/** + * @fileoverview Docusaurus to Mintlify Migration Script + * @description Converts Docusaurus documentation to Mintlify format with proper MDX syntax, + * code formatting, and navigation structure. Supports multi-version migration with intelligent + * content caching and validation. 
+ * + * @example + * // Interactive mode + * node migrate-docusaurus.js + * + * @example + * // Non-interactive mode (all versions) + * node migrate-docusaurus.js ~/repos/cosmos-sdk-docs ./tmp sdk + * node migrate-docusaurus.js ~/repos/cosmos-sdk-docs ./tmp sdk --update-nav + * + * @requires gray-matter - Parse and generate YAML frontmatter + * @requires unified - Unified text processing framework + * @requires remark-parse - Markdown parser for unified + * @requires remark-gfm - GitHub Flavored Markdown support + * @requires remark-stringify - Markdown stringifier for unified + * @requires unist-util-visit - Tree visitor for unified AST + * + * @author Cosmos SDK Team + * @version 2.0.0 + * @since 2024 + */ + +import fs from 'fs'; +import path from 'path'; +import { fileURLToPath } from 'url'; +import readline from 'readline'; +import crypto from 'crypto'; +import matter from 'gray-matter'; +import { unified } from 'unified'; +import remarkParse from 'remark-parse'; +import remarkGfm from 'remark-gfm'; +import remarkStringify from 'remark-stringify'; +import { visit } from 'unist-util-visit'; +import { execSync } from 'child_process'; + +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); + +// Create readline interface for prompts +const rl = readline.createInterface({ + input: process.stdin, + output: process.stdout +}); + +const prompt = (question) => new Promise((resolve) => rl.question(question, resolve)); + +/** + * Safely process content while preserving code blocks and inline code. + * This is the foundation of our text transformation system, ensuring code is never + * accidentally modified by markdown transformations. + * + * @function safeProcessContent + * @param {string} content - The markdown content to process + * @param {Function} processor - Function to process non-code content + * @returns {string} Processed content with code blocks preserved + * + * @description + * Pipeline steps: + * 1. 
Extract and replace fenced code blocks with placeholders + * 2. Extract and replace inline code with placeholders + * 3. Apply processor function to remaining content + * 4. Restore inline code from placeholders + * 5. Restore fenced code blocks from placeholders + * + * This ensures transformations like escaping underscores or fixing JSX + * comments never affect actual code content. + * + * @example + * const result = safeProcessContent(content, (text) => { + * return text.replace(/_/g, '\\_'); // Escapes underscores in text only + * }); + */ +function safeProcessContent(content, processor) { + // STEP 1: Extract all code blocks and inline code to protect them from processing + const codeBlocks = []; + const inlineCode = []; + + let processed = content; + + // STEP 2: Replace triple-backtick code blocks with placeholders + // This prevents code blocks from being modified by subsequent processing + processed = processed.replace(/```[\s\S]*?```/g, (match) => { + const index = codeBlocks.length; + codeBlocks.push(match); // Store original code block + return `__CODE_BLOCK_${index}__`; // Replace with placeholder + }); + + // STEP 3: Replace inline code spans of ANY tick length with placeholders + // This handles `, ``, ```, etc. 
to protect all code spans from processing + // The regex (`+)([^`\n]*?)\1 matches any number of backticks, content, then same number of backticks + processed = processed.replace(/(`+)([^`\n]*?)\1/g, (match) => { + const index = inlineCode.length; + inlineCode.push(match); // Store original inline code (e.g., `{groupId}` or ``{groupId}``) + return `__INLINE_CODE_${index}__`; // Replace with placeholder + }); + + // STEP 4: Process the content (with code replaced by placeholders) + // The processor function won't see any backtick-wrapped content + processed = processor(processed); + + // STEP 5: Restore inline code first + // This puts back the original inline code (e.g., `{groupId}`) exactly as it was + // Handle both escaped and non-escaped versions (in case underscores were escaped in tables) + for (let i = 0; i < inlineCode.length; i++) { + // Try escaped version first (if underscores were escaped in table processing) + const escapedPlaceholder = `__INLINE\\_CODE_${i}__`; + const normalPlaceholder = `__INLINE_CODE_${i}__`; + + if (processed.includes(escapedPlaceholder)) { + processed = processed.replace(escapedPlaceholder, inlineCode[i]); + } else { + processed = processed.replace(normalPlaceholder, inlineCode[i]); + } + } + + // STEP 6: Restore code blocks last + for (let i = 0; i < codeBlocks.length; i++) { + // Try escaped version first + const escapedPlaceholder = `__CODE\\_BLOCK_${i}__`; + const normalPlaceholder = `__CODE_BLOCK_${i}__`; + + if (processed.includes(escapedPlaceholder)) { + processed = processed.replace(escapedPlaceholder, codeBlocks[i]); + } else { + processed = processed.replace(normalPlaceholder, codeBlocks[i]); + } + } + + return processed; +} + +/** + * Parse Docusaurus frontmatter using battle-tested gray-matter library. + * Extracts YAML frontmatter and returns parsed metadata. 
+ * + * @function parseDocusaurusFrontmatter + * @param {string} content - Raw markdown content with potential frontmatter + * @returns {Object} Parsed result + * @returns {Object} result.frontmatter - Parsed YAML frontmatter as object + * @returns {string} result.content - Content with frontmatter removed + * + * @description + * Uses gray-matter to parse YAML frontmatter from markdown. + * Handles missing frontmatter gracefully by returning empty object. + * + * Common Docusaurus frontmatter fields: + * - title: Page title + * - sidebar_label: Navigation label + * - sidebar_position: Sort order in navigation + * - description: Page description + * - slug: Custom URL slug + * + * @example + * const { frontmatter, content } = parseDocusaurusFrontmatter( + * '---\ntitle: My Page\n---\n# Content' + * ); + * // frontmatter = { title: 'My Page' } + * // content = '# Content' + */ +function parseDocusaurusFrontmatter(content) { + const parsed = matter(content); + + return { + frontmatter: parsed.data, + content: parsed.content + }; +} + + +/** + * Extract title from content (first H1 heading). + * Removes the H1 from content to avoid duplication. + * + * @function extractTitleFromContent + * @param {string} content - Markdown content potentially containing H1 + * @returns {Object} Extraction result + * @returns {string|null} result.title - Extracted title or null if no H1 + * @returns {string} result.content - Content with H1 removed + * + * @description + * Looks for H1 heading at the start of content (after any blank lines). + * Supports both # syntax and underline (===) syntax. + * Returns the title text and content without the H1. 
+ * + * @example + * const { title, content } = extractTitleFromContent('# My Title\n\nContent here'); + * // title = 'My Title' + * // content = 'Content here' + */ +function extractTitleFromContent(content) { + const match = content.match(/^#\s+(.+)$/m); + if (match) { + return { + title: match[1].trim(), + content: content.replace(/^#\s+.+\n?/m, '').trim() // Only remove the heading line, not random content + }; + } + return { title: null, content }; +} + + +/** + * Convert Docusaurus admonitions to Mintlify callouts. + * Only processes non-code content, preserving code blocks. + * + * @function convertAdmonitions + * @param {string} content - Markdown content with Docusaurus admonitions + * @returns {string} Content with Mintlify callout components + * + * @description + * Converts Docusaurus triple-colon syntax to Mintlify components: + * - note admonitions to Note components + * - warning admonitions to Warning components + * - tip admonitions to Tip components + * - info admonitions to Info components + * - caution admonitions to Warning components + * - danger admonitions to Warning components + * + * Features: + * - Supports custom titles after the admonition type + * - Handles nested admonitions with a stack + * - Cleans up malformed syntax + * - Ensures proper tag pairing + * - Processes only non-code content + * + * @example + * // Converts Docusaurus admonition with title + * // to Mintlify Note component with bold title + */ +function convertAdmonitions(content) { + return safeProcessContent(content, (nonCodeContent) => { + const admonitionMap = { + 'note': 'Note', + 'tip': 'Tip', + 'info': 'Info', + 'warning': 'Warning', + 'danger': 'Warning', + 'caution': 'Warning', + 'important': 'Info', + 'success': 'Check', + 'details': 'Accordion' // For expandable sections + }; + + // Use line-by-line processing to handle complex nested cases + const lines = nonCodeContent.split('\n'); + const result = []; + const admonitionStack = []; + + for (let i = 0; i < 
lines.length; i++) { + const line = lines[i]; + + // Check for opening admonition + const openMatch = line.match(/^:::(\w+)(?:\s+(.+))?$/); + if (openMatch) { + const type = openMatch[1].toLowerCase(); + const title = openMatch[2]; + + const component = admonitionMap[type] || 'Note'; + admonitionStack.push(component); + + if (title) { + result.push(`<${component}>`); + result.push(`**${title}**`); + } else { + result.push(`<${component}>`); + } + continue; + } + + // Check for closing admonition + if (line.trim() === ':::') { + if (admonitionStack.length > 0) { + const component = admonitionStack.pop(); + result.push(``); + } + continue; + } + + // Regular line + result.push(line); + } + + let finalResult = result.join('\n'); + + // Clean up any remaining ::: artifacts and malformed syntax + finalResult = finalResult.replace(/:::\s*\n/g, '\n'); + finalResult = finalResult.replace(/:::\./g, ''); // Handle malformed :::. + finalResult = finalResult.replace(/^\s*:::\s*$/gm, ''); + + // Fix unclosed tags by ensuring proper pairing + // If we find an opening tag without a closing tag, add the closing tag + const tagPattern = /<(Note|Warning|Tip|Info|Check)\b[^>]*>/g; + const openTags = finalResult.match(tagPattern) || []; + + for (const openTag of openTags) { + const tagName = openTag.match(/<(\w+)/)[1]; + const closeTag = ``; + + // Check if this opening tag has a corresponding closing tag + const openIndex = finalResult.indexOf(openTag); + const nextOpenIndex = finalResult.indexOf(`<${tagName}`, openIndex + openTag.length); + const closeIndex = finalResult.indexOf(closeTag, openIndex); + + // If no closing tag found, or closing tag is after the next opening tag, add one + if (closeIndex === -1 || (nextOpenIndex !== -1 && closeIndex > nextOpenIndex)) { + const insertPoint = nextOpenIndex !== -1 ? 
nextOpenIndex : finalResult.length; + finalResult = finalResult.slice(0, insertPoint) + `\n${closeTag}\n` + finalResult.slice(insertPoint); + } + } + + return finalResult; + }); +} + + + +/** + * Format Go code + */ +function formatGoCode(code) { + return code + // Fix imports + .replace(/import \(/g, 'import (\n ') + .replace(/"\s+"/g, '"\n "') + .replace(/\)\s*([a-zA-Z])/g, ')\n\n$1') + + // Fix braces and structure + .replace(/\{\s*([a-zA-Z])/g, '{\n $1') + .replace(/([^}\n])\s*\}/g, '$1\n}') + .replace(/\)\s*\{/g, ') {') + .replace(/\}\s*([a-zA-Z])/g, '}\n\n$1') + + // Fix struct declarations + .replace(/\{\s*([A-Z][a-zA-Z]*:)/g, '{\n $1') + .replace(/,\s*([A-Z][a-zA-Z]*:)/g, ',\n $1') + + // Basic indentation + .replace(/^(\s*)([a-z][a-zA-Z]*\s*:=)/gm, ' $2') + .replace(/^(\s*)(if|for|switch|case)/gm, ' $2') + + .replace(/\n{3,}/g, '\n\n') + .trim(); +} + +/** + * Format JavaScript/TypeScript code + */ +function formatJavaScriptCode(code) { + return code + // Fix imports + .replace(/import\s*\{([^}]+)\}\s*from/g, (match, imports) => { + const cleanImports = imports.split(',').map(imp => imp.trim()).join(',\n '); + return `import {\n ${cleanImports}\n} from`; + }) + + // Fix object/function braces + .replace(/\{\s*([a-zA-Z])/g, '{\n $1') + .replace(/([^}\n])\s*\}/g, '$1\n}') + .replace(/\)\s*=>\s*\{/g, ') => {') + + // Fix function declarations + .replace(/function\s+([a-zA-Z_$][a-zA-Z0-9_$]*)\s*\(/g, 'function $1(') + .replace(/\)\s*\{/g, ') {') + + .replace(/\n{3,}/g, '\n\n') + .trim(); +} + +/** + * Format Rust code + */ +function formatRustCode(code) { + return code + // Fix use statements + .replace(/use\s+([^;]+);/g, (match, usePath) => { + if (usePath.includes('{')) { + return match.replace(/\{\s*([^}]+)\s*\}/g, (m, items) => { + const cleanItems = items.split(',').map(item => item.trim()).join(',\n '); + return `{\n ${cleanItems}\n}`; + }); + } + return match; + }) + + // Fix function formatting + .replace(/fn\s+([a-zA-Z_][a-zA-Z0-9_]*)\s*\(/g, 'fn 
$1(') + .replace(/\)\s*->\s*([^{]+)\s*\{/g, ') -> $1 {') + + // Fix struct/impl blocks + .replace(/\{\s*([a-zA-Z])/g, '{\n $1') + .replace(/([^}\n])\s*\}/g, '$1\n}') + + .replace(/\n{3,}/g, '\n\n') + .trim(); +} + +/** + * Format JSON code + */ +function formatJsonCode(code) { + try { + // Try to parse and reformat JSON + const parsed = JSON.parse(code); + return JSON.stringify(parsed, null, 2); + } catch { + // If not valid JSON, just clean up basic formatting + return code + .replace(/\{\s*"/g, '{\n "') + .replace(/",\s*"/g, '",\n "') + .replace(/\}\s*,/g, '\n},') + .replace(/\n{3,}/g, '\n\n') + .trim(); + } +} + +/** + * Generic code formatting for other languages. + * Applies basic formatting rules for unrecognized languages. + * + * @function formatGenericCode + * @param {string} code - Raw code in any language + * @returns {string} Formatted code with basic improvements + * + * @description + * Applies minimal formatting: + * - Basic brace positioning + * - Removes excessive blank lines + * - Trims whitespace + * + * Used as fallback when language-specific formatter unavailable. + * + * @example + * const formatted = formatGenericCode('function() {return true}'); + * // Returns code with improved brace positioning + */ +function formatGenericCode(code) { + return code + // Basic brace formatting + .replace(/\{\s*([a-zA-Z])/g, '{\n $1') + .replace(/([^}\n])\s*\}/g, '$1\n}') + + // Basic function-like patterns + .replace(/\)\s*\{/g, ') {') + + // Clean up spacing + .replace(/\n{3,}/g, '\n\n') + .trim(); +} + +/** + * Fix internal documentation links for Mintlify's product-based structure. + * Only processes non-code content, preserving links in code blocks. 
+ * + * @function fixInternalLinks + * @param {string} content - Markdown content with internal links + * @param {string} version - Target version (e.g., 'next', 'v0.50') + * @param {string} [product='generic'] - Product name for namespacing + * @returns {string} Content with updated internal links + * + * @description + * Updates various link patterns: + * - Removes Docusaurus-specific anchor links + * - Removes heading anchors from headers + * - Fixes relative paths for non-image links + * - Preserves relative paths for images + * + * Note: File extension removal and product/version prefixing + * is handled by the AST createLinkFixerPlugin. + * + * Link transformation examples: + * - Removes direct link anchors + * - Removes heading ID anchors + * - Converts relative paths to absolute for documents + * + * @example + * const fixed = fixInternalLinks( + * 'markdown with relative link', + * 'v0.50', + * 'sdk' + * ); + * // Returns markdown with absolute versioned link + */ +function fixInternalLinks(content, version, product = 'generic') { + return safeProcessContent(content, (nonCodeContent) => { + let result = nonCodeContent; + + // Remove Docusaurus-specific "Direct link" anchors + result = result.replace(/\[​\]\(#[^)]+\s+"Direct link to[^"]+"\)/g, ''); + + // Remove heading anchors {#anchor} + result = result.replace(/^(#+\s+.+?)\s*\{#[^}]+\}\s*$/gm, '$1'); + + // Fix relative links (../ or ./) but NOT for images + // Only remove ./ and ../ from links to markdown files, not images + result = result.replace(/\]\(\.\.\//g, (match, offset, string) => { + // Check if this is followed by an image extension + const afterMatch = string.slice(offset + match.length); + if (afterMatch.match(/^[^)]+\.(png|jpg|jpeg|gif|svg|webp)/i)) { + return match; // Keep the ../ for images + } + return ']('; // Remove ../ for non-images + }); + result = result.replace(/\]\(\.\//g, (match, offset, string) => { + // Check if this is followed by an image extension + const afterMatch = 
string.slice(offset + match.length); + if (afterMatch.match(/^[^)]+\.(png|jpg|jpeg|gif|svg|webp)/i)) { + return match; // Keep the ./ for images + } + return ']('; // Remove ./ for non-images + }); + + return result; + }); +} + +/** + * Copy static assets (images) from source to target directory. + * Recursively copies all image files while preserving directory structure. + * + * @function copyStaticAssets + * @param {string} staticPath - Source directory containing static assets + * @param {string} targetImagesDir - Target directory for images + * + * @description + * Copies image assets with the following features: + * - Recursively traverses source directory + * - Preserves directory structure in target + * - Copies common image formats: png, jpg, jpeg, gif, svg, webp + * - Creates target directories as needed + * - Skips non-image files + * + * @example + * copyStaticAssets( + * '/source/static', + * '/docs/images' + * ); + * // Copies all images from /source/static to /docs/images + */ +function copyStaticAssets(staticPath, targetImagesDir) { + if (!fs.existsSync(targetImagesDir)) { + fs.mkdirSync(targetImagesDir, { recursive: true }); + } + + function copyImages(dir, targetSubDir = '') { + const items = fs.readdirSync(dir); + + for (const item of items) { + const sourcePath = path.join(dir, item); + const stat = fs.statSync(sourcePath); + + if (stat.isDirectory()) { + // Recursively copy subdirectories + const newTargetSubDir = path.join(targetSubDir, item); + copyImages(sourcePath, newTargetSubDir); + } else if (item.match(/\.(png|jpg|jpeg|gif|svg|webp)$/i)) { + // Copy image files + const targetPath = path.join(targetImagesDir, 'static', targetSubDir, item); + const targetDir = path.dirname(targetPath); + + if (!fs.existsSync(targetDir)) { + fs.mkdirSync(targetDir, { recursive: true }); + } + + fs.copyFileSync(sourcePath, targetPath); + console.log(` Copied: static/${targetSubDir}/${item}`); + } + } + } + + copyImages(staticPath); +} + +/** + * Update image 
paths to use centralized images folder. + * Moves images from versioned directories to shared location. + * + * @function updateImagePaths + * @param {string} content - Content with image references + * @param {string} sourceRelativePath - Relative path of source file + * @param {string} version - Version directory name + * @returns {string} Content with updated image paths + * + * @description + * Updates image references to point to centralized images directory: + * - Removes version from image paths + * - Handles both markdown and HTML image syntax + * - Preserves relative path structure within images folder + * + * Path transformation: + * - Local images to centralized dirname path + * - Parent directory images to root images path + * + * @example + * const updated = updateImagePaths( + * 'markdown with local image reference', + * 'learn/intro.md', + * 'v0.50' + * ); + * // Returns markdown with centralized image path + */ +function updateImagePaths(content, sourceRelativePath, version) { + let result = content; + + // Calculate the depth of the source file to determine relative path to images + const sourceDepth = sourceRelativePath.split('/').length - 1; + const pathToRoot = sourceDepth > 0 ? 
'../'.repeat(sourceDepth) : './'; + // Images are at the same level as version directories (e.g., docs/sdk/images/) + const pathToImages = `${pathToRoot}../images/`; + + // Update local image references (./image.png or ../image.png) + // Match both markdown images ![alt](path) and HTML img tags + result = result.replace(/(!\[[^\]]*\]\()(\.\.\/|\.\/)([^)]+\.(png|jpg|jpeg|gif|svg|webp))/gi, + (match, prefix, relative, imagePath) => { + const imageName = path.basename(imagePath); + const imageDir = path.dirname(imagePath); + const sourceDir = path.dirname(sourceRelativePath); + + if (relative === './' && imageDir === '.') { + // Image in same directory as markdown + return `${prefix}${pathToImages}${sourceDir}/${imageName}`; + } else if (relative === '../') { + // Image in parent directory + const parentSourceDir = path.dirname(sourceDir); + return `${prefix}${pathToImages}${parentSourceDir}/${imagePath}`; + } else { + // Image in subdirectory + return `${prefix}${pathToImages}${sourceDir}/${imagePath}`; + } + }); + + // Update HTML img tags + result = result.replace(/(]+src=")(\.\.\/|\.\/)([^"]+\.(png|jpg|jpeg|gif|svg|webp))/gi, + (match, prefix, relative, imagePath) => { + const imageName = path.basename(imagePath); + const imageDir = path.dirname(imagePath); + + if (imageDir === '.') { + const sourceDir = path.dirname(sourceRelativePath); + return `${prefix}${pathToImages}${sourceDir}/${imageName}`; + } else { + return `${prefix}${pathToImages}${imagePath}`; + } + }); + + // Handle references to /img/ or /static/ folders (from Docusaurus static folder) + result = result.replace(/(!\[[^\]]*\]\()(\/(img|static)\/[^)]+\.(png|jpg|jpeg|gif|svg|webp))/gi, + (match, prefix, imagePath) => { + // Convert /img/file.png to images/static/img/file.png + // Convert /static/file.png to images/static/file.png + const cleanPath = imagePath.startsWith('/') ? 
imagePath.slice(1) : imagePath; + return `${prefix}${pathToImages}static/${cleanPath.replace(/^static\//, '')}`; + }); + + // Handle HTML img tags with /img/ or /static/ paths + result = result.replace(/(<img[^>]+src=")(\/(img|static)\/[^"]+\.(png|jpg|jpeg|gif|svg|webp))/gi, + (match, prefix, imagePath) => { + const cleanPath = imagePath.startsWith('/') ? imagePath.slice(1) : imagePath; + return `${prefix}${pathToImages}static/${cleanPath.replace(/^static\//, '')}`; + }); + + // Handle GitHub raw content URLs (keep them as-is) + // These don't need updating as they're external URLs + + return result; +} + +/** + * Content-based migration cache system. + * Implements SHA-256 checksumming to detect duplicate content across versions, + * significantly improving performance by processing identical content only once. + * + * @namespace migrationCache + * @description + * The cache system tracks: + * - Content checksums to detect duplicates + * - Transformation results to reuse for identical content + * - Validation errors per unique content block + * - File paths mapped to their content checksums + * + * This enables efficient processing where identical files across versions + * (e.g., v0.47, v0.50, v0.53) are transformed only once but written to + * all appropriate locations. + * + * @example + * // Process flow with caching: + * // 1. File A in v0.47: Full processing, cached + * // 2. File A in v0.50 (identical): Cache hit, skip processing + * // 3.
File A in v0.53 (modified): Different checksum, full processing + */ +const migrationCache = { + // Map of content checksum → migration result + contentCache: new Map(), + // Map of content checksum → validation errors + validationCache: new Map(), + // Map of file path → content checksum + fileChecksums: new Map(), + + /** + * Calculate SHA-256 checksum of content + */ + getChecksum(content) { + return crypto.createHash('sha256').update(content, 'utf8').digest('hex'); + }, + + /** + * Check if content has been processed before + */ + hasProcessed(checksum) { + return this.contentCache.has(checksum); + }, + + /** + * Get cached migration result + */ + getCachedResult(checksum) { + return this.contentCache.get(checksum); + }, + + /** + * Cache migration result + */ + cacheResult(checksum, result) { + this.contentCache.set(checksum, result); + }, + + /** + * Get cached validation errors + */ + getCachedValidation(checksum) { + return this.validationCache.get(checksum) || []; + }, + + /** + * Cache validation errors + */ + cacheValidation(checksum, errors) { + this.validationCache.set(checksum, errors); + }, + + /** + * Track file checksum + */ + trackFile(filepath, checksum) { + this.fileChecksums.set(filepath, checksum); + }, + + /** + * Get statistics + */ + getStats() { + return { + uniqueContent: this.contentCache.size, + totalFiles: this.fileChecksums.size, + duplicates: this.fileChecksums.size - this.contentCache.size + }; + }, + + /** + * Clear cache + */ + clear() { + this.contentCache.clear(); + this.validationCache.clear(); + this.fileChecksums.clear(); + } +}; + +// Global issue tracker for reporting +const migrationIssues = { + warnings: [], + errors: [], + info: [], + currentFile: '', + processedFiles: new Set(), + + addWarning(line, issue, suggestion = '') { + this.warnings.push({ + file: this.currentFile, + line, + issue, + suggestion + }); + }, + + addRemoval(message, details) { + this.info.push({ + file: this.currentFile, + message, + details + }); + 
}, + + addError(line, issue, suggestion = '') { + this.errors.push({ + file: this.currentFile, + line, + issue, + suggestion + }); + }, + + setCurrentFile(filepath) { + this.currentFile = filepath; + }, + + reset() { + this.warnings = []; + this.errors = []; + this.currentFile = ''; + this.processedFiles.clear(); + }, + + generateReport() { + const totalIssues = this.warnings.length + this.errors.length; + if (totalIssues === 0) return ''; + + let report = '\n' + '='.repeat(80) + '\n'; + report += ' MIGRATION REPORT\n'; + report += '='.repeat(80) + '\n\n'; + + if (this.errors.length > 0) { + report += ` ERRORS (${this.errors.length}) - These need manual fixes:\n`; + report += '-'.repeat(40) + '\n'; + + const errorsByFile = {}; + this.errors.forEach(err => { + if (!errorsByFile[err.file]) errorsByFile[err.file] = []; + errorsByFile[err.file].push(err); + }); + + Object.entries(errorsByFile).forEach(([file, errors]) => { + report += `\n ${file}:\n`; + errors.forEach(err => { + report += ` Line ${err.line}: ${err.issue}\n`; + if (err.suggestion) { + report += ` Suggestion: ${err.suggestion}\n`; + } + }); + }); + report += '\n'; + } + + if (this.warnings.length > 0) { + report += ` WARNINGS (${this.warnings.length}) - Automatically handled but please verify:\n`; + report += '-'.repeat(40) + '\n'; + + const warningsByFile = {}; + this.warnings.forEach(warn => { + if (!warningsByFile[warn.file]) warningsByFile[warn.file] = []; + warningsByFile[warn.file].push(warn); + }); + + Object.entries(warningsByFile).forEach(([file, warnings]) => { + report += `\n ${file}:\n`; + warnings.forEach(warn => { + report += ` Line ${warn.line}: ${warn.issue}\n`; + if (warn.suggestion) { + report += ` Applied: ${warn.suggestion}\n`; + } + }); + }); + } + + // Report removed content that couldn't be safely converted + if (this.info.length > 0) { + report += `REMOVED CONTENT (${this.info.length}) - Content that was removed:\n`; + report += '-'.repeat(40) + '\n'; + + const infoByFile = {}; + 
this.info.forEach(info => { + if (!infoByFile[info.file]) infoByFile[info.file] = []; + infoByFile[info.file].push(info); + }); + + Object.entries(infoByFile).forEach(([file, infos]) => { + report += `\n${file}:\n`; + infos.forEach(info => { + report += ` ${info.message}: ${info.details}\n`; + }); + }); + report += '\n'; + } + + report += '\n' + '='.repeat(80) + '\n'; + report += `Summary: ${this.errors.length} errors, ${this.warnings.length} warnings`; + if (this.info.length > 0) { + report += `, ${this.info.length} removals`; + } + report += '\n'; + report += '='.repeat(80) + '\n'; + + return report; + } +}; + +/** + * Fix common MDX parsing issues while preserving all code content. + * Applies automatic fixes for known Docusaurus to Mintlify incompatibilities. + * + * @function fixMDXIssues + * @param {string} content - The markdown content to fix + * @param {string} [filepath=''] - Path to file being processed (for error context) + * @returns {string} Fixed content with MDX issues resolved + * + * @description + * Automatic fixes applied (in order): + * 1. Escape underscores in table cells for proper rendering + * 2. Fix invalid JSX comment escaping + * 3. Fix expressions with hyphens that break JSX parsing + * 4. Convert double backticks to single backticks + * 5. Convert HTML comments to JSX comments + * 6. Escape template variables outside code to inline code + * 7. Fix unclosed braces in expressions + * 8. Convert
<details> to <Expandable> for Mintlify + * + * All transformations use safeProcessContent() to preserve code blocks. + * Error reporting is handled separately in validateMDXContent(). + * + * @see {@link safeProcessContent} - For code block preservation + * @see {@link validateMDXContent} - For error reporting + */ +function fixMDXIssues(content, filepath = '') { + // No error reporting in this function - only fixes + // Error reporting is done in validateMDXContent() + + // ALL transformations happen inside safeProcessContent + // This ensures code blocks are completely untouched + return safeProcessContent(content, (nonCodeContent) => { + // CRITICAL FIX 1: Escape underscores in table cells (epochs_number -> epochs\\_number) + // This must be done FIRST before any other processing + nonCodeContent = nonCodeContent.replace(/(\|[^|]*?)([a-zA-Z]+)_([a-zA-Z]+)([^|]*?\|)/g, (match, before, word1, word2, after) => { + // Escape underscores within table cells (including placeholders - we handle them later) + return `${before}${word1}\\_${word2}${after}`; + }); + + // CRITICAL FIX 1b: Escape template variables and JSON objects in table cells + // This prevents acorn parsing errors for content like {"denom":"uatom"} + nonCodeContent = nonCodeContent.replace(/(\|[^|\n]*)(\{[^}|\n]+\})([^|\n]*\|)/g, (match, before, templateVar, after) => { + // Wrap template variables and JSON in backticks within table cells + return `${before}\`${templateVar}\`${after}`; + }); + + // Fix invalid JSX comments with escaped characters + // Handle complete escaped comments: {/\*...\*/} + nonCodeContent = nonCodeContent.replace(/\{\/\\\*([^}]*)\\\*\/\}/g, (match, content) => { + return '{/* ' + content.trim() + ' */}'; + }); + + // Handle partial escaped comment starts: {/\* + nonCodeContent = nonCodeContent.replace(/\{\/\\\*/g, '{/*'); + + // Handle partial escaped comment ends: \*/} + nonCodeContent = nonCodeContent.replace(/\\\*\/\}/g, '*/}'); + + // Fix unclosed JSX comments - ensure all {/* have matching */}
nonCodeContent = nonCodeContent.replace(/\{\/\*([^*]|\*(?!\/))*$/gm, (match) => { + // Add closing */} if missing + return match + ' */}'; + }); + + // Fix orphaned comment closings */} without opening {/* + nonCodeContent = nonCodeContent.replace(/^[^{]*\*\/\}/gm, (match) => { + // Add opening {/* if missing + return '{/* ' + match; + }); + + // Fix HTML elements with hyphens in tag names (e.g., <my-component>) + // Convert to PascalCase for JSX compatibility + nonCodeContent = nonCodeContent.replace(/<([a-z]+(?:-[a-z]+)+)([^>]*>)/gi, (match, tagName, rest) => { + // Skip if already in backticks + if (match.includes('`')) return match; + // Convert kebab-case to PascalCase + const pascalCase = tagName.split('-') + .map(part => part.charAt(0).toUpperCase() + part.slice(1)) + .join(''); + return `<${pascalCase}${rest}`; + }); + // Also convert closing tags + nonCodeContent = nonCodeContent.replace(/<\/([a-z]+(?:-[a-z]+)+)>/gi, (match, tagName) => { + const pascalCase = tagName.split('-') + .map(part => part.charAt(0).toUpperCase() + part.slice(1)) + .join(''); + return `</${pascalCase}>`; + }); + + // Fix expressions with hyphens that break JSX parsing + nonCodeContent = nonCodeContent.replace(/`([^`]*<[^>]*-[^>]*>[^`]*)`/g, (match, content) => { + if (content.match(/<-+>/)) { + return '`' + content + '`'; + } + return match; + }); + + // Fix double backticks to single backticks + nonCodeContent = nonCodeContent.replace(/``([^`]+)``/g, (match, content) => { + return '`' + content + '`'; + }); + + // Convert HTML comments to JSX comments, handling incomplete ones + nonCodeContent = nonCodeContent.replace(/<!--(.*?)-->/gs, '{/* $1 */}'); + // Handle incomplete HTML comments that never close + nonCodeContent = nonCodeContent.replace(/<!--([^>]*)(?=\n|$)/g, '{/* $1 */}'); + + // Convert URLs in angle brackets to proper markdown links + nonCodeContent = nonCodeContent.replace(/<(https?:\/\/[^>]+)>/g, '[Link]($1)'); + + // Fix arrow operators and comparison operators that break parsing + // Don't wrap if it's already in
backticks or part of HTML tag + nonCodeContent = nonCodeContent.replace(/(?<!`)(->)(?!`)/g, '`$1`'); + nonCodeContent = nonCodeContent.replace(/(?<!`)(=>)(?!`)/g, '`$1`'); + + // Fix version markers like **<= v0.45**: (must be done BEFORE general <= replacement) + nonCodeContent = nonCodeContent.replace(/\*\*<=\s*v([\d.]+)\*\*:/g, '**v$1 and earlier**:'); + nonCodeContent = nonCodeContent.replace(/\*\*>=\s*v([\d.]+)\*\*:/g, '**v$1 and later**:'); + + // Fix comparison operators in text (not in markdown emphasis) + nonCodeContent = nonCodeContent.replace(/(\s)(<=)(\s)/g, '$1`$2`$3'); + nonCodeContent = nonCodeContent.replace(/(\s)(>=)(\s)/g, '$1`$2`$3'); + + // Fix block quotes with template variables (lazy line issue) + // Replace template variables at the end of block quotes + nonCodeContent = nonCodeContent.replace(/(^>.*\n>.*\n>.*\n)(>\s*\{[^}]+\})/gm, (match, quote, templateLine) => { + // Convert the template line to a regular line (not part of quote) + const template = templateLine.replace(/^>\s*/, ''); + return quote + '\n' + template.replace(/\{([^}]+)\}/, '`{$1}`'); + }); + + // CRITICAL FIX 5: Fix orphaned closing tags by removing them + // First pass: identify and remove orphaned closing tags + const lines = nonCodeContent.split('\n'); + const processedLines = []; + const tagStack = []; + + for (let i = 0; i < lines.length; i++) { + const line = lines[i]; + + // Check for opening tags + const openingMatch = line.match(/<(Info|Warning|Note|Tip|Check|Accordion|details|Expandable)\b[^>]*>/); + if (openingMatch) { + tagStack.push({ tag: openingMatch[1], line: i }); + processedLines.push(line); + continue; + } + + // Check for closing tags + const closingMatch = line.match(/^\s*<\/(Info|Warning|Note|Tip|Check|Accordion|details|Expandable)>/); + if (closingMatch) { + const tagName = closingMatch[1]; + + // Find matching opening tag in stack + let foundMatch = false; + for (let j = tagStack.length - 1; j >= 0; j--) { + if (tagStack[j].tag === tagName) { + // Found matching
opening tag, remove from stack + tagStack.splice(j, 1); + foundMatch = true; + break; + } + } + + if (foundMatch) { + processedLines.push(line); // Keep the closing tag + } else { + // Orphaned closing tag - remove it + } + } else { + processedLines.push(line); + } + } + + // Check for unclosed opening tags + if (tagStack.length > 0) { + tagStack.forEach(({ tag, line }) => { + processedLines.push(`</${tag}>`); + }); + } + + nonCodeContent = processedLines.join('\n'); + + // Convert any remaining <details>
tags that weren't caught by AST processing + // IMPORTANT: Make sure to close the Expandable tag properly + nonCodeContent = nonCodeContent.replace( + /<details[^>]*>\s*<summary[^>]*>(.*?)<\/summary>\s*([\s\S]*?)(<\/details>|$)/gi, + (match, summary, content, closingTag) => { + // Only add closing tag if we found </details>
+ const expandableClose = closingTag.includes('</details>') ? '</Expandable>
' : ''; + return `<Expandable title="${summary.trim()}">\n${content.trim()}\n${expandableClose}`; + } + ); + + // Fix unclosed placeholder tags - convert to inline code + nonCodeContent = nonCodeContent.replace(/<(appd|simd|gaiad|osmosisd|junod|yourapp|module)>/g, '`$1`'); + + // Wrap template variables and expressions in backticks to avoid JSX parsing issues + // But be careful not to double-wrap already wrapped content + nonCodeContent = nonCodeContent + // 1. Complete paths with template variables: /path{vars} → `/path{vars}` + .replace(/(\/[a-zA-Z][a-zA-Z0-9/._-]*\{[^}]+\})/g, (match) => { + // Check if already wrapped in backticks + const beforeMatch = nonCodeContent.substring(0, nonCodeContent.indexOf(match)); + const lastChar = beforeMatch[beforeMatch.length - 1]; + if (lastChar === '`') return match; // Already wrapped + return '`' + match + '`'; + }) + + // 2. Template variables that look like placeholders + .replace(/\{([a-zA-Z_][\w\s]*)\}/g, (match, content) => { + if (content.includes('/*') || content.includes('*/')) return match; + + const pos = nonCodeContent.indexOf(match); + if (pos > 0 && nonCodeContent[pos - 1] === '`') return match; + + if (match.includes('__INLINE_CODE_') || match.includes('__CODE_BLOCK_')) return match; + return '`' + match + '`'; + }) + + // 3. Fix template expressions with periods that break acorn parsing + // e.g., {module.path} or {params.value} + .replace(/\{([a-zA-Z_][\w]*\.[\w.]+)\}/g, (match) => { + // Check if already wrapped + const pos = nonCodeContent.indexOf(match); + if (pos > 0 && nonCodeContent[pos - 1] === '`') return match; + return '`' + match + '`'; + }) + + // 4.
Fix array/object access patterns that break acorn + // e.g., {data[0]} or {obj['key']} + .replace(/\{([a-zA-Z_][\w]*\[[^\]]+\])\}/g, (match) => { + const pos = nonCodeContent.indexOf(match); + if (pos > 0 && nonCodeContent[pos - 1] === '`') return match; + return '`' + match + '`'; + }) + + // Fix malformed HTML tags + nonCodeContent = nonCodeContent.replace(/<([a-z][a-z0-9-]*)\s+([^>]+?)>/gi, (match, tag, content) => { + if (!/=/.test(content) && /\s/.test(content)) { + return `\`${tag} ${content}\``; + } + return match; + }); + + // Fix standalone closing tags + nonCodeContent = nonCodeContent.replace(/^\s*<\/[^>]+>\s*$/gm, ''); + + // Fix unclosed tags at the end of lines + nonCodeContent = nonCodeContent.replace(/<([a-z][a-z0-9-]*)\s*$/gm, '`<$1>`'); + + // Remove malformed import statements + nonCodeContent = nonCodeContent.replace(/^import\s+[^;]*$/gm, ''); + + // CRITICAL FIX 5: Fix JSX attributes with hyphens (convert to camelCase) + // -> + nonCodeContent = nonCodeContent.replace(/<([A-Z][a-zA-Z]*)\s+([^>]+)>/g, (match, component, attrs) => { + const fixedAttrs = attrs.replace(/([a-z]+-[a-z-]+)(=)/g, (attrMatch, attrName, equals) => { + // Convert kebab-case to camelCase + const camelCase = attrName.replace(/-([a-z])/g, (_, letter) => letter.toUpperCase()); + return camelCase + equals; + }); + return `<${component} ${fixedAttrs}>`; + }); + + // CRITICAL FIX 6: Balance unclosed braces in expressions + // Find expressions with unbalanced braces and escape them + nonCodeContent = nonCodeContent.replace(/\{([^}]*(?:\{[^}]*)*[^}]*)\n/g, (match, expression) => { + const openCount = (expression.match(/\{/g) || []).length; + const closeCount = (expression.match(/\}/g) || []).length; + + if (openCount > closeCount) { + // Expression has unclosed braces - escape it + return '`' + match.trim() + '`\n'; + } + return match; + }); + + // CRITICAL FIX 7: Fix
<details> to <Expandable> conversion edge cases + nonCodeContent = nonCodeContent.replace(/<details>
/gi, ''); + nonCodeContent = nonCodeContent.replace(/<\/details>/gi, ''); + + return nonCodeContent; + }); +} + +/** + * Validate MDX content and report issues that require manual intervention. + * This function only reports problems that couldn't be automatically fixed. + * + * @function validateMDXContent + * @param {string} content - The markdown content to validate + * @param {string} [filepath=''] - Path to file being processed + * @returns {string} The original content (unchanged) + * + * @description + * Validation checks performed: + * 1. Double backticks that couldn't be fixed + * 2. Unescaped template variables ({var} outside code) + * 3. JSX attributes with hyphens converted to camelCase + * 4. Invalid JSX comment syntax + * 5. Unclosed JSX expressions (missing }) + * 6. Unclosed JSX comments + * 7. Orphaned or mismatched JSX tags ( without ) + * + * Uses content fingerprinting to prevent duplicate validation of identical + * content across different file paths. + * + * @example + * // Validation happens after all automatic fixes: + * let content = fixMDXIssues(rawContent); + * content = validateMDXContent(content, 'path/to/file.md'); + * // Errors are added to migrationIssues for final report + */ +function validateMDXContent(content, filepath = '') { + // Track current file for error reporting + if (filepath) { + migrationIssues.setCurrentFile(filepath); + + // Skip validation if we've already processed this file content + const contentHash = content.substring(0, 200); // Simple content fingerprint + const fileKey = `${filepath}:${contentHash}`; + if (migrationIssues.processedFiles.has(fileKey)) { + return content; // Skip duplicate processing + } + migrationIssues.processedFiles.add(fileKey); + } + + // Use safeProcessContent to only validate non-code content + return safeProcessContent(content, (nonCodeContent) => { + const lines = nonCodeContent.split('\n'); + + lines.forEach((line, index) => { + const lineNum = index + 1; + + // Check for remaining 
double backticks + if (/``[^`]+``/.test(line)) { + migrationIssues.addError( + lineNum, + 'Double backticks still present', + 'Manually change to single backticks or escape the content' + ); + } + + // Check for unescaped template variables outside of backticks + // But be more selective - only flag ones that look like actual template variables + const templateVarMatch = line.match(/(? letter.toUpperCase())})` + ); + } + + // Don't check for orphaned tags line-by-line - this is handled better in the full content check below + + // Check for invalid JSX comments with backslashes + if (/\{\/\\\*|\\\*\/\}/.test(line)) { + migrationIssues.addError( + lineNum, + 'Invalid JSX comment with escaped characters', + 'Change to proper JSX comment: {/* content */}' + ); + } + + + // Check for unclosed JSX expressions (not just any braces) + const jsxExprMatch = line.match(/(?]*>`, 'g'); + let match; + + while ((match = tagRegex.exec(fullContent)) !== null) { + const isClosing = match[1] === '/'; + const tagName = match[2]; + + if (!isClosing) { + // Opening tag + tagStack.push({ tag: tagName, index: match.index }); + } else { + // Closing tag + if (tagStack.length === 0 || tagStack[tagStack.length - 1].tag !== tagName) { + // Orphaned closing tag or mismatched tag + const lineNum = fullContent.substring(0, match.index).split('\n').length; + migrationIssues.addError( + lineNum, + `Orphaned or mismatched closing tag `, + 'Check for missing opening tag or remove this closing tag' + ); + } else { + // Matched pair, remove from stack + tagStack.pop(); + } + } + } + + // Check for unclosed opening tags + if (tagStack.length > 0) { + tagStack.forEach(({ tag, index }) => { + const lineNum = fullContent.substring(0, index).split('\n').length; + migrationIssues.addError( + lineNum, + `Unclosed opening tag <${tag}>`, + 'Add matching closing tag or remove this opening tag' + ); + }); + } + + // Return the non-code content unchanged (validation only) + return nonCodeContent; + }); + +} + 
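The orphaned-tag detection that validateMDXContent performs at the end can be sketched as a standalone helper. This is a simplified illustration, not the script's actual implementation: the tag list is truncated, the error objects are reduced to `{ line, issue }`, the stack check only inspects the top of the stack, and the real function additionally skips fenced code via safeProcessContent().

```javascript
// Minimal sketch of stack-based JSX tag balance checking (assumed shape;
// tag names and issue format are simplified relative to the real script).
function findTagIssues(content, tagNames = ['Info', 'Warning', 'Expandable']) {
  const issues = [];
  const stack = [];
  const tagRegex = new RegExp(`<(/?)(${tagNames.join('|')})\\b[^>]*>`, 'g');
  let match;
  while ((match = tagRegex.exec(content)) !== null) {
    const [, slash, tag] = match;
    // Line number = newlines before the match, plus one
    const line = content.slice(0, match.index).split('\n').length;
    if (!slash) {
      stack.push({ tag, line });
    } else if (stack.length && stack[stack.length - 1].tag === tag) {
      stack.pop(); // matched pair
    } else {
      issues.push({ line, issue: `orphaned closing </${tag}>` });
    }
  }
  // Anything still on the stack never received a closing tag
  for (const { tag, line } of stack) {
    issues.push({ line, issue: `unclosed opening <${tag}>` });
  }
  return issues;
}
```

Run on `'<Info>\ntext\n</Warning>'`, this reports an orphaned `</Warning>` and an unclosed `<Info>`, mirroring the two error classes the validator adds to migrationIssues.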
+/** + * Create an AST processor plugin for fixing internal links. + * Handles relative paths and versioned documentation links. + * + * @function createLinkFixerPlugin + * @param {string} version - Target version (e.g., 'next', 'v0.50') + * @param {string} product - Product name for namespacing (e.g., 'sdk', 'ibc') + * @returns {Function} Unified plugin function + * + * @description + * This remark plugin traverses the AST and fixes: + * 1. Relative links to absolute paths + * 2. Removes .md and .mdx extensions from internal links + * 3. Adds product and version namespacing to internal links + * 4. Maintains external links and anchors unchanged + * + * @example + * // In AST pipeline: + * unified() + * .use(createLinkFixerPlugin('v0.50', 'sdk')) + * // Transforms: + * // ../intro.md becomes /docs/sdk/v0.50/intro + * // ./guide.mdx becomes /docs/sdk/v0.50/guide + * // /learn/overview.md becomes /docs/sdk/v0.50/learn/overview + */ +function createLinkFixerPlugin(version, product) { + return () => { + return (tree) => { + visit(tree, 'link', (node) => { + if (node.url) { + // Skip external links + if (node.url.startsWith('http://') || node.url.startsWith('https://')) { + return; + } + + // Skip anchor-only links + if (node.url.startsWith('#')) { + return; + } + + // Fix relative links + if (node.url.startsWith('../') || node.url.startsWith('./')) { + node.url = node.url.replace(/\.\.\//g, '/').replace(/\.\//, '/'); + } + + // Remove .md or .mdx extensions from internal links + node.url = node.url.replace(/\.mdx?(?=#|$)/g, ''); + + // Fix versioned links for product docs + if (node.url.startsWith('/') && !node.url.startsWith('/docs/')) { + // Add /docs/product/version prefix for absolute internal links + node.url = `/docs/${product}/${version}${node.url}`; + } else if (node.url.includes('/docs/') && !node.url.includes(`/docs/${product}/`)) { + // Update existing /docs/ links to include product and version + node.url = node.url.replace('/docs/', 
`/docs/${product}/${version}/`); + } + } + }); + }; + }; +} + +/** + * Create an AST processor plugin for fixing HTML elements and MDX incompatibilities. + * Converts HTML to JSX-compatible syntax for Mintlify. + * + * @function createHTMLFixerPlugin + * @returns {Function} Unified plugin function + * + * @description + * This remark plugin traverses the AST and fixes: + * 1. HTML comments to JSX comments + * 2. Details tags to Expandable components + * 3. Command placeholders to inline code + * 4. Comparison operators wrapped in code + * 5. Generic placeholders to inline code + * + * Special handling: + * - Preserves content in code blocks and links + * - Skips table cells to avoid double-wrapping + * - Handles both complete and unclosed tags + * + * @example + * // In AST pipeline: + * unified() + * .use(createHTMLFixerPlugin()) + * // Transforms: + * // Transforms HTML comments to JSX format + * // Converts details/summary to Expandable components + * // Wraps command placeholders in backticks + */ +function createHTMLFixerPlugin() { + return () => { + return (tree) => { + // Process HTML nodes + visit(tree, 'html', (node) => { + if (!node.value) return; + + // Convert HTML comments to JSX comments + node.value = node.value.replace(/<!--(.*?)-->/gs, '{/* $1 */}'); + + // Convert <details>
tags to Mintlify Expandable + if (node.value.includes('<details')) { + node.value = node.value.replace(/<details[^>]*>\s*<summary[^>]*>(.*?)<\/summary>\s*([\s\S]*?)<\/details>/gi, + (match, summary, content) => { + return `<Expandable title="${summary.trim()}">\n${content.trim()}\n</Expandable>`; + }); + + // Then handle any remaining unclosed details tags + // This catches cases where </details>
is in a different node + if (node.value.includes('<details>')) { + node.value = node.value.replace( + /<details[^>]*>\s*<summary[^>]*>(.*?)<\/summary>\s*([\s\S]*?)$/gi, + (match, summary, content) => { + // Add Expandable but DON'T add closing tag - it may be in another node + return `<Expandable title="${summary.trim()}">\n${content.trim()}`; + }); + } + } + }); + + // Process text nodes to fix problematic angle brackets and template variables + visit(tree, 'text', (node, index, parent) => { + if (!node.value) return; + + // Skip if inside a code block or link + if (parent && (parent.type === 'code' || parent.type === 'inlineCode' || parent.type === 'link')) { + return; + } + + // Don't process text in table cells here - we'll handle them properly below + if (parent && parent.type === 'tableCell') { + return; // Skip table cell text processing + } + + // Fix command placeholders like <appd>, <simd> + node.value = node.value.replace(/<(appd|simd|gaiad|osmosisd|junod|yourapp)>/g, '`$1`'); + + // Fix comparison operators and arrows + // Don't wrap if already in backticks + node.value = node.value.replace(/(?<!`)(->)(?!`)/g, '`$1`'); + node.value = node.value.replace(/(?<!`)(=>)(?!`)/g, '`$1`'); + + // Fix generic placeholders like <host>, <port>, <path> + // Common placeholders that should be wrapped in backticks + node.value = node.value.replace(/<(host|port|path|user|pass|module|version|namespace|service)>/g, '`<$1>`'); + // Fix hyphenated placeholders + node.value = node.value.replace(/<([a-z]+-[a-z]+(?:-[a-z]+)*)>/g, '`<$1>`'); + }); + + // Remove table cell processing - we handle template variables in fixMDXIssues instead + // This avoids double-wrapping issues + }; + }; +} + +/** + * Fetch actual code content from GitHub repository URLs. + * Converts blob URLs to raw URLs and retrieves file contents.
+ * + * @async + * @function fetchGitHubCode + * @param {string} url - GitHub blob URL to fetch from + * @returns {Promise} File content or null if fetch fails + * + * @description + * Transforms GitHub blob URLs to raw.githubusercontent.com URLs + * and fetches the actual file content. Used to replace Docusaurus + * reference blocks with real code. + * + * URL transformation: + * - github.com domain to raw.githubusercontent.com + * - blob path segment removed for raw access + * + * @example + * const url = 'https://github.com/cosmos/cosmos-sdk/blob/main/types/coin.go'; + * const code = await fetchGitHubCode(url); + * // Returns actual Go code from the file + * + * @throws {Error} Logs warning if fetch fails but doesn't throw + */ +async function fetchGitHubCode(url) { + try { + // Convert GitHub blob URLs to raw URLs + const rawUrl = url.replace('github.com', 'raw.githubusercontent.com') + .replace('/blob/', '/'); + + const response = await fetch(rawUrl); + if (!response.ok) { + throw new Error(`HTTP ${response.status}`); + } + + const content = await response.text(); + return content.trim(); + } catch (error) { + console.warn(`Failed to fetch GitHub code from ${url}: ${error.message}`); + return null; + } +} + +/** + * Create an AST processor plugin for enhancing code blocks. + * Handles reference URLs, adds expandable attributes, and detects languages. + * + * @function createCodeBlockEnhancerPlugin + * @returns {Function} Async unified plugin function + * + * @description + * This async remark plugin enhances code blocks by: + * 1. Detecting and fetching GitHub reference URLs + * 2. Adding 'expandable' attribute for blocks >10 lines + * 3. Auto-detecting language from code content + * 4. Formatting code based on language + * 5. 
Removing 'reference' keyword from metadata + * + * Reference handling: + * - Detects single-line GitHub URLs in code blocks + * - Fetches actual code for 'go' language blocks + * - Converts to comments for other languages + * + * Language detection based on common patterns: + * - Go: package declarations, func keywords + * - JavaScript: const, let, function, arrow functions + * - Python: def, class keywords + * - Bash: shell commands and scripts + * - JSON: object notation + * - Protobuf: message and service definitions + * + * @example + * // In AST pipeline: + * unified() + * .use(createCodeBlockEnhancerPlugin()) + * // Fetches GitHub references and enhances code blocks + */ +function createCodeBlockEnhancerPlugin() { + return () => { + return async (tree, file) => { + const codeNodes = []; + visit(tree, 'code', (node) => { + codeNodes.push(node); + }); + + // Process code nodes with potential async operations + for (const node of codeNodes) { + let isReferenceBlock = false; + + // Handle Docusaurus "reference" syntax by detecting URL-only content + // Check if the code block contains just a GitHub URL (reference pattern) + if (node.value && node.value.trim().startsWith('https://github.com/') && + node.value.trim().split('\n').length === 1) { + isReferenceBlock = true; + const url = node.value.trim(); + const lang = node.lang || 'text'; + + // Try to fetch actual content for go reference blocks + if (lang === 'go' || lang === 'golang') { + const fetchedContent = await fetchGitHubCode(url); + if (fetchedContent) { + node.value = fetchedContent; + } else { + node.value = `// Reference: ${url}`; + } + } else { + // For other languages, convert to comment with the reference URL preserved + if (lang === 'protobuf' || lang === 'proto') { + node.value = `// Reference: ${url}`; + } else if (lang === 'javascript' || lang === 'js' || lang === 'typescript' || lang === 'ts') { + node.value = `// Reference: ${url}`; + } else if (lang === 'python' || lang === 'py') { + 
node.value = `# Reference: ${url}`; + } else if (lang === 'bash' || lang === 'shell' || lang === 'sh') { + node.value = `# Reference: ${url}`; + } else { + // Default to comment style + node.value = `// Reference: ${url}`; + } + } + } + + // Remove any "reference" from meta as well + if (node.meta && node.meta.includes('reference')) { + node.meta = node.meta.replace('reference', '').trim(); + } + + // Add expandable to long code blocks (but not to reference blocks that became comments) + if (node.value && node.value.split('\n').length > 10) { + // Skip adding expandable to reference blocks that became single-line comments + const isShortReference = isReferenceBlock && + (node.value.startsWith('// Reference:') || node.value.startsWith('# Reference:')) && + node.value.split('\n').length <= 2; + + if (!isShortReference && (!node.meta || !node.meta.includes('expandable'))) { + node.meta = node.meta ? `${node.meta} expandable` : 'expandable'; + } + } + + // Ensure language is specified by detecting content + if (!node.lang && node.value) { + const codeContent = node.value.toLowerCase(); + + if (codeContent.includes('package ') || codeContent.includes('func ') || + codeContent.includes('import "') || codeContent.includes('interface{')) { + node.lang = 'go'; + } else if (codeContent.includes('const ') || codeContent.includes('let ') || + codeContent.includes('function ') || codeContent.includes('=> ')) { + node.lang = 'javascript'; + } else if (codeContent.includes('#!/bin/bash') || codeContent.includes('echo ') || + codeContent.includes('npm ') || codeContent.includes('yarn ')) { + node.lang = 'bash'; + } else if (codeContent.includes('def ') || codeContent.includes('class ')) { + node.lang = 'python'; + } else if (codeContent.trim().startsWith('{') && codeContent.includes('"')) { + node.lang = 'json'; + } else if (codeContent.includes('message ') || codeContent.includes('service ')) { + node.lang = 'protobuf'; + } + } + + // Format the code content if we have formatting 
functions available + if (node.lang && node.value) { + const formatted = formatCodeByLanguage(node.lang, node.value); + if (formatted !== node.value) { + node.value = formatted; + } + } + } + }; + }; +} + +/** + * Format code content based on the programming language. + * Delegates to language-specific formatters. + * + * @function formatCodeByLanguage + * @param {string} lang - Language identifier (go, js, json, etc.) + * @param {string} code - Raw code content to format + * @returns {string} Formatted code or original if no formatter available + * + * @description + * Routes code to appropriate language-specific formatter: + * - Go/Golang uses formatGoCode() + * - JavaScript/TypeScript uses formatJavaScriptCode() + * - JSON/JSONC uses formatJsonCode() + * - Others return unchanged + * + * @example + * const formatted = formatCodeByLanguage('go', 'func main(){fmt.Println("hello")}'); + * // Returns properly formatted Go code + */ +function formatCodeByLanguage(lang, code) { + switch (lang.toLowerCase()) { + case 'go': + case 'golang': + return formatGoCode(code); + case 'javascript': + case 'js': + case 'typescript': + case 'ts': + return formatJavaScriptCode(code); + case 'json': + case 'jsonc': + return formatJsonCode(code); + default: + return code; + } +} + +/** + * Process markdown content using AST transformations for accuracy. + * Orchestrates all AST-based plugins in the correct order. + * + * @async + * @function processMarkdownWithAST + * @param {string} content - Raw markdown content + * @param {string} version - Target version for link fixing + * @param {string} product - Product name for link namespacing + * @returns {Promise} Transformed markdown content + * + * @description + * Creates a unified processor pipeline with: + * 1. remarkParse - Parse markdown to AST + * 2. remarkGfm - Enable GitHub Flavored Markdown (tables, etc.) + * 3. createLinkFixerPlugin - Fix internal links + * 4. createHTMLFixerPlugin - Convert HTML to JSX + * 5. 
createCodeBlockEnhancerPlugin - Enhance code blocks + * 6. remarkStringify - Convert AST back to markdown + * + * Falls back to original content if AST processing fails, + * logging a warning but not breaking the conversion. + * + * @example + * const processed = await processMarkdownWithAST( + * '# Title\n\nContent with [link](../other.md)', + * 'v0.50', + * 'sdk' + * ); + * // Returns markdown with fixed links and enhanced code blocks + */ +async function processMarkdownWithAST(content, version, product) { + try { + const processor = unified() + .use(remarkParse) + .use(remarkGfm) // Enable GitHub Flavored Markdown for proper table parsing + .use(createLinkFixerPlugin(version, product)) + .use(createHTMLFixerPlugin()) + .use(createCodeBlockEnhancerPlugin()) + .use(remarkStringify); + + const file = await processor.process(content); + return String(file); + } catch (error) { + console.warn('AST processing failed, falling back to regex:', error.message); + console.warn('Error details:', error); + return content; + } +} + +/** + * Convert a single Docusaurus markdown file to Mintlify MDX format. + * This is the main conversion pipeline that orchestrates all transformations. + * + * @async + * @function convertDocusaurusToMintlify + * @param {string} content - Raw Docusaurus markdown content + * @param {Object} [options={}] - Conversion options + * @param {string} [options.version='next'] - Version directory (e.g., 'v0.50', 'next') + * @param {string} [options.product='generic'] - Product name for link resolution (REQUIRED) + * @param {boolean} [options.keepTitle=false] - Whether to keep H1 in content + * @param {string} [options.filepath=''] - Source file path for error reporting + * @returns {Promise} Conversion result + * @returns {string} result.content - Converted MDX content with frontmatter + * @returns {Object} result.metadata - Extracted metadata (title, sidebarPosition) + * + * @description + * Conversion pipeline: + * 1. 
Parse frontmatter (extract YAML metadata) + * 2. Extract/generate title from frontmatter or content + * 3. Extract description from first paragraph + * 4. Convert HTML comments to JSX comments + * 5. Process with AST for accurate transformations: + * - Fix internal links for versioning + * - Enhance code blocks (add expandable, fetch GitHub refs) + * 6. Apply MDX fixes (escape underscores, fix JSX syntax) + * 7. Convert Docusaurus admonitions to Mintlify callouts + * 8. Fix internal links for product namespace + * 9. Validate content and report unfixable issues + * 10. Generate clean Mintlify-compatible frontmatter + * + * @example + * const result = await convertDocusaurusToMintlify(content, { + * version: 'v0.50', + * product: 'sdk', + * filepath: 'learn/intro.md' + * }); + * // result.content contains the converted MDX + * // result.metadata contains { title, sidebarPosition } + */ +async function convertDocusaurusToMintlify(content, options = {}) { + const { version = 'next', product = 'generic', keepTitle = false, filepath = '' } = options; + + // Parse frontmatter and content + const { frontmatter, content: mainContent } = parseDocusaurusFrontmatter(content); + + // Extract or generate title + let title = frontmatter.title || frontmatter.sidebar_label; + let processedContent = mainContent; + + if (!title) { + const extracted = extractTitleFromContent(processedContent); + title = extracted.title; + if (title) { + processedContent = extracted.content; + } else { + // Generate title from filename if no title found + if (filepath) { + const filename = path.basename(filepath, path.extname(filepath)); + // Convert filename to title: adr-046-module-params -> ADR 046 Module Params + title = filename + .replace(/[-_]/g, ' ') + .replace(/\b\w/g, char => char.toUpperCase()) + .replace(/\bAdr\b/g, 'ADR'); // Special case for ADR + } else { + title = 'Documentation'; + } + } + } else if (!keepTitle) { + // Remove H1 if we have title from frontmatter + const extracted = 
extractTitleFromContent(processedContent);
+    if (extracted.title) {
+      processedContent = extracted.content;
+    }
+  }
+
+  // Extract description from first paragraph after H1, or use frontmatter if it exists
+  let description = frontmatter.description || null;
+
+  if (!description) {
+    // Look for first paragraph after H1 heading
+    const afterTitle = processedContent.replace(/^#+\s+.*\n/m, '').trim();
+    const firstParagraph = afterTitle.split(/\n\n/)[0];
+
+    if (firstParagraph) {
+      // Clean up the paragraph - remove markdown formatting
+      const cleaned = firstParagraph
+        .replace(/\[([^\]]+)\]\([^)]+\)/g, '$1') // Convert links to text
+        .replace(/[*_`]/g, '') // Remove formatting
+        .replace(/\s+/g, ' ') // Normalize whitespace
+        .trim();
+
+      // Use as description if it's reasonable length and not structural content
+      if (cleaned.length > 10 &&
+          cleaned.length < 300 &&
+          !cleaned.includes('|') && // Not table content
+          !cleaned.includes(':::') && // Not admonition
+          !cleaned.includes('```')) { // Not code
+        description = cleaned;
+      }
+    }
+  }
+
+  // Apply conversions
+  // Convert HTML comments to JSX comments for MDX compatibility
+  // Handle both complete and incomplete HTML comments
+  processedContent = processedContent.replace(/<!--([\s\S]*?)-->/g, (match, content) => {
+    // For very long comments (>10 lines), just remove them as they're likely documentation notes
+    const lineCount = content.split('\n').length;
+    if (lineCount > 10) {
+      // Track that we removed a long HTML comment
+      if (filepath) {
+        migrationIssues.setCurrentFile(filepath);
+        migrationIssues.addRemoval(
+          'Removed long HTML comment',
+          `${lineCount}-line comment removed (likely documentation notes)`
+        );
+      }
+      return '';
+    }
+    // Clean up the content - preserve structure but avoid nested comment syntax
+    // Replace any /* or */ inside the comment to prevent JSX issues
+    const cleaned = content
+      .replace(/\/\*/g, '/')
+      .replace(/\*\//g, '/')
+      .split('\n')
+      .map(line => line.trim())
+      .filter(line => line.length
> 0)
+      .join(' '); // Join with space, not newline, to avoid JSX issues
+    return `{/* ${cleaned} */}`;
+  });
+
+  // Handle malformed or incomplete HTML comments
+  processedContent = processedContent.replace(/<!--\s*([\s\S]*?)\s*-->/g, '{/* $1 */}');
+  processed = processed.replace(/<!--[\s\S]*?-->/g, '');
+
+  // Clean up extra newlines
+  content = content.replace(/\n{4,}/g, '\n\n\n');
+
+  return content;
+}
+
+/**
+ * Generate version-appropriate frontmatter
+ */
+function generateFrontmatter(title, description, version) {
+  const frontmatter = {
+    title: title || 'Documentation',
+    description: description || `Version: ${version}`
+  };
+
+  // Generate YAML frontmatter
+  let yaml = '---\n';
+  for (const [key, value] of Object.entries(frontmatter)) {
+    if (value !== null && value !== undefined) {
+      // Escape quotes in values
+      const escapedValue = String(value).replace(/"/g, '\\"');
+      yaml += `${key}: "${escapedValue}"\n`;
+    }
+  }
+  yaml += '---\n\n';
+
+  return yaml;
+}
+
+/**
+ * Main conversion function
+ */
+function convertFile(inputPath, outputPath) {
+  try {
+    // Read the input file
+    if (!fs.existsSync(inputPath)) {
+      console.error(`Input file not found: ${inputPath}`);
+      return false;
+    }
+
+    let content = fs.readFileSync(inputPath, 'utf-8');
+
+    // Parse frontmatter
+    const { frontmatter, content: mainContent } = parseDocusaurusFrontmatter(content);
+
+    // Extract title from content if not in frontmatter
+    let processedContent = mainContent;
+    let title = frontmatter.title || frontmatter.sidebar_label;
+
+    if (!title) {
+      const extracted = extractTitleFromContent(processedContent);
+      title = extracted.title;
+      processedContent = extracted.content;
+    }
+
+    // Determine version from output path
+    let version = 'latest';
+    const versionMatch = outputPath.match(/\/(v[\d.]+)\//);
+    if (versionMatch) {
+      version = versionMatch[1];
+    }
+
+    // FIRST: Process code blocks BEFORE other conversions
+    // This ensures all code is properly contained in code blocks
+    // and won't interfere with other syntax
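The protect-then-process ordering described in the surrounding comments (handle code blocks first so later syntax passes cannot corrupt fenced content) can be illustrated with a minimal self-contained sketch. `maskCodeBlocks` and `restoreCodeBlocks` are hypothetical helper names for illustration only, not functions from this script:

```javascript
// Sketch: mask fenced code blocks with placeholders, run a transform pass,
// then restore the fences untouched. The fence string is built at runtime
// to avoid writing a literal triple backtick inside this example.
const FENCE = '`'.repeat(3);
const fencePattern = new RegExp(`${FENCE}[\\s\\S]*?${FENCE}`, 'g');

function maskCodeBlocks(content) {
  const blocks = [];
  const masked = content.replace(fencePattern, (match) => {
    blocks.push(match);
    return `__CODE_BLOCK_${blocks.length - 1}__`;
  });
  return { masked, blocks };
}

function restoreCodeBlocks(masked, blocks) {
  return masked.replace(/__CODE_BLOCK_(\d+)__/g, (_, i) => blocks[Number(i)]);
}

// A ':::note' admonition inside a fence survives verbatim; the one outside
// the fence is converted to a Mintlify-style <Note> callout.
const input = `Intro\n\n${FENCE}md\n:::note\nkept verbatim\n:::\n${FENCE}\n\n:::note\nconverted\n:::`;
const { masked, blocks } = maskCodeBlocks(input);
const processed = masked.replace(/^:::note\n([\s\S]*?)\n:::$/m, '<Note>\n$1\n</Note>');
const output = restoreCodeBlocks(processed, blocks);
```

Without the masking step, the admonition pass would also rewrite the `:::note` that lives inside the fenced example, which is exactly the interference the convert-code-blocks-first rule avoids.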
processing + processedContent = convertCodeBlocks(processedContent); + + // NOW: Apply other conversions with code block protection + processedContent = safeProcessContent(processedContent, (content) => { + // Fix inline code and brackets (template variables) + content = fixInlineCodeAndBrackets(content); + + // Convert admonitions + content = convertAdmonitions(content); + + // Fix heading anchors + content = fixHeadingAnchors(content); + + // Convert tabs + content = convertTabs(content); + + // Convert internal links + const relativePath = outputPath.replace(/^.*?\/docs\//, '/'); + content = convertInternalLinks(content, relativePath); + + // Clean up Docusaurus-specific syntax + content = cleanupDocusaurusSyntax(content); + + return content; + }); + + // Generate new frontmatter + const newFrontmatter = generateFrontmatter( + title, + frontmatter.description || frontmatter.sidebar_label, + version + ); + + // Combine frontmatter and content + const finalContent = newFrontmatter + processedContent; + + // Ensure output directory exists + const outputDir = path.dirname(outputPath); + if (!fs.existsSync(outputDir)) { + fs.mkdirSync(outputDir, { recursive: true }); + } + + // Write the output file + fs.writeFileSync(outputPath, finalContent, 'utf-8'); + + console.log(`✓ Converted: ${inputPath} → ${outputPath}`); + return true; + } catch (error) { + console.error(`✗ Error converting ${inputPath}:`, error.message); + return false; + } +} + +// Run the conversion +convertFile(inputFile, outputFile); \ No newline at end of file diff --git a/scripts/package.json b/scripts/package.json new file mode 100644 index 00000000..1030f870 --- /dev/null +++ b/scripts/package.json @@ -0,0 +1,30 @@ +{ + "name": "cosmos-docs-scripts", + "version": "1.0.0", + "description": "Scripts for Cosmos documentation management, versioning, and conversion", + "type": "module", + "private": true, + "scripts": { + "freeze": "node version-manager.js", + "release-notes": "node release-notes.js", + 
"sheets": "node sheets-manager.js", + "test-versioning": "node test-versioning.js", + "convert-docs": "node convert-docusaurus-to-mintlify.js", + "fix-frontmatter": "./fix-frontmatter.sh", + "generate-rpc-docs": "node generate-evm-rpc-docs.js", + "generate-rpc-openapi": "node generate-evm-rpc-openapi.js", + "parse-changelog": "node parse-evm-changelog.js" + }, + "dependencies": { + "googleapis": "^128.0.0", + "gray-matter": "^4.0.3", + "remark-gfm": "^4.0.1", + "remark-parse": "^11.0.0", + "remark-stringify": "^11.0.0", + "unified": "^11.0.5", + "unist-util-visit": "^5.0.0" + }, + "engines": { + "node": ">=18.0.0" + } +} diff --git a/scripts/release-notes.js b/scripts/release-notes.js new file mode 100644 index 00000000..496cfa40 --- /dev/null +++ b/scripts/release-notes.js @@ -0,0 +1,428 @@ +#!/usr/bin/env node + +/** + * Release Notes Management + * Combines changelog fetching and parsing functionality + * Supports multiple products and versions + */ + +import fs from 'fs'; +import path from 'path'; +import { fileURLToPath } from 'url'; + +const __dirname = path.dirname(fileURLToPath(import.meta.url)); + +// Parse command line arguments +// Usage: node release-notes.js [source/version] [product] [specific-version] +// Examples: +// node release-notes.js latest evm # Fetch latest for EVM (default behavior) +// node release-notes.js all sdk # Fetch all versions for SDK +// node release-notes.js v0.53 sdk # Fetch specific version for SDK +// node release-notes.js latest ibc v1.0 # Fetch for IBC v1.0 +const SOURCE = process.argv[2] || 'latest'; +const SUBDIR = process.argv[3] || process.env.DOCS_SUBDIR || process.env.SUBDIR || 'evm'; +const SPECIFIC_VERSION = process.argv[4]; // Optional: specific version to target + +// Repo mapping per product - expandable for future products +const PRODUCT_REPOS = { + evm: 'cosmos/evm', + sdk: 'cosmos/cosmos-sdk', + ibc: 'cosmos/ibc-go', + cometbft: 'cometbft/cometbft', + // Add more products as needed +}; + +// Version 
configurations per product +const PRODUCT_VERSIONS = { + evm: ['next', 'v0.4.x'], // EVM versions + sdk: ['next', 'v0.53', 'v0.50', 'v0.47'], // SDK versions + ibc: ['next', 'v10.1.x', 'v8.5.x', 'v7.8.x', 'v6.3.x', 'v5.4.x', 'v4.6.x'], // IBC versions + cometbft: ['v0.38', 'v0.37'], // Example CometBFT versions (to be added) + // Add version configs as needed +}; + +const REPO = PRODUCT_REPOS[SUBDIR] || PRODUCT_REPOS.evm; +const PRODUCT_LABEL = SUBDIR.toUpperCase(); + +// Common changelog file candidates by repo +const CHANGELOG_PATHS = [ + 'CHANGELOG.md', + 'RELEASE_NOTES.md', + 'RELEASES.md', + 'CHANGELOG/CHANGELOG.md', + 'docs/CHANGELOG.md' +]; + +async function getLatestRelease() { + try { + const response = await fetch(`https://api.github.com/repos/${REPO}/releases/latest`); + const data = await response.json(); + return data.tag_name; + } catch (error) { + console.warn('Could not fetch latest tag, falling back to main branch'); + return 'main'; + } +} + +async function fetchChangelog(source, version = null) { + // Determine the actual source to fetch from + let fetchSource = source; + + // Handle different branch naming conventions per product + if (version && SUBDIR === 'ibc') { + // IBC uses release/vX.Y.x format (with .x suffix) + // For version patterns like v10.1.x, v8.5.x, use the pattern as-is + + // For 'next', use 'main' branch + if (version === 'next') { + fetchSource = 'main'; + } else { + // IBC uses vX.Y.x format for release branches + fetchSource = `release/${version}`; + } + } else if (version && SUBDIR === 'sdk') { + // SDK uses release/vX.Y.x branch format + if (version === 'next') { + fetchSource = 'main'; + } else { + fetchSource = `release/${version}.x`; + } + } else if (source === 'latest' && version) { + // For specific version requests + fetchSource = version; + } + + console.log(` Fetching changelog from ${REPO}: ${fetchSource}...`); + + const errors = []; + const sources = [fetchSource]; + + // Add fallback sources + if (fetchSource 
!== source) {
+    sources.push(source);
+  }
+  if (version && !sources.includes(version)) {
+    sources.push(version);
+  }
+
+  for (const src of sources) {
+    for (const p of CHANGELOG_PATHS) {
+      const url = `https://raw.githubusercontent.com/${REPO}/${src}/${p}`;
+      try {
+        const response = await fetch(url);
+        if (!response.ok) {
+          errors.push(`${src}/${p}: ${response.status}`);
+          continue;
+        }
+        const changelog = await response.text();
+        if (changelog && changelog.trim().length > 0) {
+          console.log(`✓ Fetched ${p} from ${src} (${changelog.split('\n').length} lines)`);
+          return changelog;
+        }
+        errors.push(`${src}/${p}: empty`);
+      } catch (err) {
+        errors.push(`${src}/${p}: ${err.message}`);
+      }
+    }
+  }
+
+  throw new Error(`Failed to fetch changelog from ${REPO}. Tried: ${errors.join('; ')}`);
+}
+
+function sanitizeLine(line) {
+  // Convert HTML comments to MDX comments
+  let sanitized = line.replace(/<!--/g, '{/*').replace(/-->/g, '*/}');
+
+  // Escape JSX-like syntax that breaks MDX parsing
+  // Look for angle-bracket patterns that aren't part of MDX components
+  // Preserve actual MDX components and markdown links
+
+  // First, protect markdown links and code blocks
+  const linkPattern = /\[([^\]]+)\]\(([^)]+)\)/g;
+  const codePattern = /`[^`]+`/g;
+
+  // Store protected content
+  const protectedParts = [];
+  let protectedIndex = 0;
+
+  // Protect inline code
+  sanitized = sanitized.replace(codePattern, (match) => {
+    const placeholder = `__PROTECTED_${protectedIndex}__`;
+    protectedParts[protectedIndex] = match;
+    protectedIndex++;
+    return placeholder;
+  });
+
+  // Protect markdown links
+  sanitized = sanitized.replace(linkPattern, (match) => {
+    const placeholder = `__PROTECTED_${protectedIndex}__`;
+    protectedParts[protectedIndex] = match;
+    protectedIndex++;
+    return placeholder;
+  });
+
+  // Escape problematic characters in the remaining text
+  // Escape < and > that look like JSX but aren't MDX components
+  sanitized = sanitized.replace(/<([a-zA-Z][^>]*=)/g, '\\<$1');
+  sanitized = sanitized.replace(/<([^A-Z/])/g, '\\<$1');
+
+  // Restore protected content
+  for (let i = 0; i < protectedParts.length; i++) {
+    sanitized = sanitized.replace(`__PROTECTED_${i}__`, protectedParts[i]);
+  }
+
+  return sanitized;
+}
+
+function parseChangelogToMintlify(changelogContent, targetVersion = null) {
+  console.log('  Converting changelog to Mintlify format...');
+
+  const lines = changelogContent.split('\n');
+  const updates = [];
+  let currentVersion = null;
+  let currentDate = null;
+  let currentChanges = [];
+  let inCommentBlock = false;
+
+  // For version-specific requests (e.g., v0.53), extract major.minor version
+  let targetMajorMinor = null;
+  if (targetVersion && targetVersion !== 'next') {
+    // Extract major.minor from patterns like v0.53, v0.53.x, v10.1.x
+    const match = targetVersion.match(/^v?(\d+\.\d+)/);
+    if (match) {
+      targetMajorMinor = match[1];
+    }
+  }
+
+  for (const line of lines) {
+    // Skip HTML comment blocks
+    if (line.includes('<!--')) {
+      inCommentBlock = true;
+    }
+    if (inCommentBlock) {
+      if (line.includes('-->')) {
+        inCommentBlock = false;
+      }
+      continue;
+    }
+
+    // Match version headers commonly used across repos
+    // Examples:
+    //   ## [v0.4.1] - 2024-08-15
+    //   ## v0.53.0 - 2024-08-15
+    //   ## v0.53
+    //   ## [v0.4.x] - 2024-08-15
+    //   ## [v0.53.4](link) - 2025-07-25
+    //   ## [Unreleased]
+    const versionMatch = line.match(/^##\s*\[?v?(\d+\.\d+(?:\.\d+)?(?:\.x)?)\]?(?:\([^)]*\))?\s*(?:-\s*(.+))?$/);
+
+    // Skip [Unreleased] sections
+    if (line.match(/^##\s*\[?Unreleased\]?/i)) {
+      currentVersion = null;
+      currentChanges = [];
+      continue;
+    }
+
+    if (versionMatch) {
+      // Save previous version if exists
+      if (currentVersion && currentChanges.length > 0) {
+        updates.push({
+          version: currentVersion,
+          date: currentDate,
+          changes: currentChanges
+        });
+      }
+
+      // Start new version
+      currentVersion = versionMatch[1];
+      currentDate = versionMatch[2] || '';
+      currentChanges = [];
+      continue;
+    }
+
+    // Skip empty lines and separators
+    if (!line.trim() || line.match(/^[-=]+$/)) continue;
+
+    // Skip lines that are part of
the changelog header/guidelines
+    if (!currentVersion && line.match(/^#\s+Changelog/i)) continue;
+
+    // Collect changes (lines that start with - or * or subsection headers)
+    if (currentVersion) {
+      // Handle subsection headers (### Features, ### Bug Fixes, etc.)
+      if (line.match(/^###\s+/)) {
+        currentChanges.push(sanitizeLine(line.trim()));
+      } else if (line.startsWith('- ') || line.startsWith('* ') || line.match(/^\s+[-*]/)) {
+        currentChanges.push(sanitizeLine(line.trim()));
+      }
+    }
+  }
+
+  // Add the last version
+  if (currentVersion && currentChanges.length > 0) {
+    updates.push({
+      version: currentVersion,
+      date: currentDate,
+      changes: currentChanges
+    });
+  }
+
+  // Filter updates to only include versions matching the target
+  let filteredUpdates = updates;
+  if (targetMajorMinor) {
+    filteredUpdates = updates.filter(update => {
+      // Check if the update version starts with the target major.minor
+      const updateVersion = update.version.replace(/^v/, '');
+      return updateVersion.startsWith(targetMajorMinor);
+    });
+
+    // If no matching versions found, include all as fallback
+    if (filteredUpdates.length === 0) {
+      console.log(`  No versions matching ${targetMajorMinor} found, including all versions`);
+      filteredUpdates = updates;
+    } else {
+      console.log(`  Found ${filteredUpdates.length} versions matching ${targetMajorMinor}`);
+    }
+  }
+
+  // Fallback: if nothing parsed, wrap entire changelog
+  if (filteredUpdates.length === 0) {
+    const nonEmpty = lines.filter(l => l.trim().length).slice(0, 500);
+    filteredUpdates.push({ version: targetVersion || SOURCE, date: '', changes: nonEmpty.map(l => sanitizeLine(`- ${l.trim()}`)) });
+  }
+
+  // Generate Mintlify format
+  const mintlifyContent = `---
+title: "Release Notes"
+description: "Cosmos ${PRODUCT_LABEL} release notes and changelog"
+mode: "custom"
+---
+
+${filteredUpdates.map(update =>
+  `<Update label="v${update.version}" description="${update.date}">
+${update.changes.map(change => `  ${change}`).join('\n')}
+</Update>`
+).join('\n\n')}`;
+
+  return mintlifyContent;
+}
+
+async function updateReleaseNotes(content, version = 'next') {
+  // Determine output path based on product and version
+  const outputPath = path.join(
+    __dirname,
+    '..',
+    'docs',
+    SUBDIR,
+    version,
+    'changelog',
+    'release-notes.mdx'
+  );
+  const dir = path.dirname(outputPath);
+
+  if (!fs.existsSync(dir)) {
+    fs.mkdirSync(dir, { recursive: true });
+  }
+
+  // Write the file
+  fs.writeFileSync(outputPath, content);
+
+  const versionCount = (content.match(/<Update/g) || []).length;
+
+  const successful = results.filter(r => r.success);
+  const failed = results.filter(r => !r.success);
+
+  if (successful.length > 0) {
+    console.log('✓ Successful updates:');
+    successful.forEach(r => {
+      console.log(`  ${r.version}: ${r.versionCount} versions -> ${r.outputPath}`);
+    });
+  }
+
+  if (failed.length > 0) {
+    console.log('✗ Failed updates:');
+    failed.forEach(r => {
+      console.log(`  ${r.version}: ${r.error}`);
+    });
+  }
+
+  if (failed.length > 0) {
+    process.exit(1);
+  }
+
+  } catch (error) {
+    console.error('  Failed to update release notes:', error.message);
+    process.exit(1);
+  }
+}
+
+// Support both direct execution and module import
+if (import.meta.url === `file://${process.argv[1]}`) {
+  main();
+}
+
+export { fetchChangelog, parseChangelogToMintlify, updateReleaseNotes };
diff --git a/scripts/versioning/sheets-manager.js b/scripts/sheets-manager.js
similarity index 98%
rename from scripts/versioning/sheets-manager.js
rename to scripts/sheets-manager.js
index ed6c7d4b..af59ef9c 100644
--- a/scripts/versioning/sheets-manager.js
+++ b/scripts/sheets-manager.js
@@ -130,7 +130,7 @@ async function createSheetSnapshot(version) {
     console.error('\n Setup Instructions:');
     console.error('1. Enable Google Sheets API in Google Cloud Console');
     console.error('2. Create service account and download key');
-    console.error('3. Save key as scripts/versioning/service-account-key.json');
+    console.error('3. Save key as scripts/service-account-key.json');
     console.error('4. Share spreadsheet with service account email');
     console.error('5.
See GSHEET-SETUP.md for detailed instructions'); } diff --git a/scripts/versioning/version-manager.js b/scripts/version-manager.js similarity index 94% rename from scripts/versioning/version-manager.js rename to scripts/version-manager.js index b8d606da..26a90ba2 100644 --- a/scripts/versioning/version-manager.js +++ b/scripts/version-manager.js @@ -44,7 +44,7 @@ async function prompt(question) { } function listDocsSubdirs() { - const docsRoot = path.join(__dirname, '..', '..', 'docs'); + const docsRoot = path.join(__dirname, '..', 'docs'); if (!fs.existsSync(docsRoot)) return []; return fs .readdirSync(docsRoot, { withFileTypes: true }) @@ -55,7 +55,7 @@ function listDocsSubdirs() { // --- Versions registry helpers (per-product) --- function loadVersionsRegistry() { - const versionsPath = path.join(__dirname, '..', '..', 'versions.json'); + const versionsPath = path.join(__dirname, '..', 'versions.json'); let data = {}; if (fs.existsSync(versionsPath)) { try { @@ -74,7 +74,7 @@ function loadVersionsRegistry() { const products = {}; const subdirs = listDocsSubdirs(); for (const subdir of subdirs) { - const base = path.join(__dirname, '..', '..', 'docs', subdir); + const base = path.join(__dirname, '..', 'docs', subdir); const entries = fs.readdirSync(base, { withFileTypes: true }) .filter(d => d.isDirectory()) .map(d => d.name); @@ -164,7 +164,7 @@ function includesVersionInReleaseNotes(content, version) { } async function checkReleaseNotes(currentVersion, subdir) { - const releaseNotesPath = path.join(__dirname, '..', '..', 'docs', subdir, 'next', 'changelog', 'release-notes.mdx'); + const releaseNotesPath = path.join(__dirname, '..', 'docs', subdir, 'next', 'changelog', 'release-notes.mdx'); if (!fs.existsSync(releaseNotesPath)) return false; const content = fs.readFileSync(releaseNotesPath, 'utf8'); return includesVersionInReleaseNotes(content, currentVersion); @@ -177,7 +177,7 @@ function updateVersionsRegistry({ subdir, freezeVersion, newVersion }) { const 
product = data.products[subdir]; // Ensure 'next' appears if folder exists - const nextPath = path.join(__dirname, '..', '..', 'docs', subdir, 'next'); + const nextPath = path.join(__dirname, '..', 'docs', subdir, 'next'); if (fs.existsSync(nextPath) && !product.versions.includes('next')) { product.versions.push('next'); } @@ -199,7 +199,7 @@ function updateVersionsRegistry({ subdir, freezeVersion, newVersion }) { } function updateNavigation(version, subdir) { - const docsJsonPath = path.join(__dirname, '..', '..', 'docs.json'); + const docsJsonPath = path.join(__dirname, '..', 'docs.json'); const docsJson = JSON.parse(fs.readFileSync(docsJsonPath, 'utf8')); // Resolve dropdown label from subdir @@ -265,8 +265,8 @@ function copyAndUpdateDocs(currentVersion, subdir) { printInfo('Creating version directory...'); // Copy docs//next/ to docs/// (contents only) - const sourcePath = path.join(__dirname, '..', '..', 'docs', subdir, 'next'); - const targetPath = path.join(__dirname, '..', '..', 'docs', subdir, currentVersion); + const sourcePath = path.join(__dirname, '..', 'docs', subdir, 'next'); + const targetPath = path.join(__dirname, '..', 'docs', subdir, currentVersion); // reset target to avoid nested 'next/' execSync(`rm -rf "${targetPath}" && mkdir -p "${targetPath}"`); execSync(`cp -R "${sourcePath}/." 
"${targetPath}/"`); @@ -284,8 +284,8 @@ function copyAndUpdateDocs(currentVersion, subdir) { } function createVersionMetadata(currentVersion, newVersion, subdir) { - const metadataPath = path.join(__dirname, '..', '..', 'docs', subdir, currentVersion, '.version-metadata.json'); - const frozenPath = path.join(__dirname, '..', '..', 'docs', subdir, currentVersion, '.version-frozen'); + const metadataPath = path.join(__dirname, '..', 'docs', subdir, currentVersion, '.version-metadata.json'); + const frozenPath = path.join(__dirname, '..', 'docs', subdir, currentVersion, '.version-frozen'); const metadata = { version: currentVersion, diff --git a/scripts/versioning/package-lock.json b/scripts/versioning/package-lock.json deleted file mode 100644 index 9b93b441..00000000 --- a/scripts/versioning/package-lock.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "name": "cosmos-docs-versioning", - "version": "1.0.0", - "lockfileVersion": 3, - "requires": true, - "packages": { - "": { - "name": "cosmos-docs-versioning", - "version": "1.0.0", - "dependencies": { - "googleapis": "^128.0.0" - }, - "engines": { - "node": ">=18.0.0" - } - }, - "node_modules/agent-base": { - "version": "7.1.4", - "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.4.tgz", - "integrity": "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==", - "license": "MIT", - "engines": { - "node": ">= 14" - } - }, - "node_modules/base64-js": { - "version": "1.5.1", - "resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz", - "integrity": "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==", - "funding": [ - { - "type": "github", - "url": "https://github.com/sponsors/feross" - }, - { - "type": "patreon", - "url": "https://www.patreon.com/feross" - }, - { - "type": "consulting", - "url": "https://feross.org/support" - } - ], - "license": "MIT" - }, - "node_modules/bignumber.js": { - "version": 
"9.3.1", - "resolved": "https://registry.npmjs.org/bignumber.js/-/bignumber.js-9.3.1.tgz", - "integrity": "sha512-Ko0uX15oIUS7wJ3Rb30Fs6SkVbLmPBAKdlm7q9+ak9bbIeFf0MwuBsQV6z7+X768/cHsfg+WlysDWJcmthjsjQ==", - "license": "MIT", - "engines": { - "node": "*" - } - }, - "node_modules/buffer-equal-constant-time": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz", - "integrity": "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==", - "license": "BSD-3-Clause" - }, - "node_modules/call-bind-apply-helpers": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", - "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", - "license": "MIT", - "dependencies": { - "es-errors": "^1.3.0", - "function-bind": "^1.1.2" - }, - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/call-bound": { - "version": "1.0.4", - "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", - "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", - "license": "MIT", - "dependencies": { - "call-bind-apply-helpers": "^1.0.2", - "get-intrinsic": "^1.3.0" - }, - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/debug": { - "version": "4.4.1", - "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.1.tgz", - "integrity": "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ==", - "license": "MIT", - "dependencies": { - "ms": "^2.1.3" - }, - "engines": { - "node": ">=6.0" - }, - "peerDependenciesMeta": { - "supports-color": { - "optional": true - } - } - }, - "node_modules/dunder-proto": { - "version": "1.0.1", - "resolved": 
"https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", - "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", - "license": "MIT", - "dependencies": { - "call-bind-apply-helpers": "^1.0.1", - "es-errors": "^1.3.0", - "gopd": "^1.2.0" - }, - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/ecdsa-sig-formatter": { - "version": "1.0.11", - "resolved": "https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz", - "integrity": "sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==", - "license": "Apache-2.0", - "dependencies": { - "safe-buffer": "^5.0.1" - } - }, - "node_modules/es-define-property": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", - "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", - "license": "MIT", - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/es-errors": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", - "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", - "license": "MIT", - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/es-object-atoms": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", - "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", - "license": "MIT", - "dependencies": { - "es-errors": "^1.3.0" - }, - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/extend": { - "version": "3.0.2", - "resolved": "https://registry.npmjs.org/extend/-/extend-3.0.2.tgz", - "integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==", - "license": "MIT" - }, - 
"node_modules/function-bind": { - "version": "1.1.2", - "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", - "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", - "license": "MIT", - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/gaxios": { - "version": "6.7.1", - "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-6.7.1.tgz", - "integrity": "sha512-LDODD4TMYx7XXdpwxAVRAIAuB0bzv0s+ywFonY46k126qzQHT9ygyoa9tncmOiQmmDrik65UYsEkv3lbfqQ3yQ==", - "license": "Apache-2.0", - "dependencies": { - "extend": "^3.0.2", - "https-proxy-agent": "^7.0.1", - "is-stream": "^2.0.0", - "node-fetch": "^2.6.9", - "uuid": "^9.0.1" - }, - "engines": { - "node": ">=14" - } - }, - "node_modules/gcp-metadata": { - "version": "6.1.1", - "resolved": "https://registry.npmjs.org/gcp-metadata/-/gcp-metadata-6.1.1.tgz", - "integrity": "sha512-a4tiq7E0/5fTjxPAaH4jpjkSv/uCaU2p5KC6HVGrvl0cDjA8iBZv4vv1gyzlmK0ZUKqwpOyQMKzZQe3lTit77A==", - "license": "Apache-2.0", - "dependencies": { - "gaxios": "^6.1.1", - "google-logging-utils": "^0.0.2", - "json-bigint": "^1.0.0" - }, - "engines": { - "node": ">=14" - } - }, - "node_modules/get-intrinsic": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", - "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", - "license": "MIT", - "dependencies": { - "call-bind-apply-helpers": "^1.0.2", - "es-define-property": "^1.0.1", - "es-errors": "^1.3.0", - "es-object-atoms": "^1.1.1", - "function-bind": "^1.1.2", - "get-proto": "^1.0.1", - "gopd": "^1.2.0", - "has-symbols": "^1.1.0", - "hasown": "^2.0.2", - "math-intrinsics": "^1.1.0" - }, - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/get-proto": { - "version": "1.0.1", - "resolved": 
"https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", - "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", - "license": "MIT", - "dependencies": { - "dunder-proto": "^1.0.1", - "es-object-atoms": "^1.0.0" - }, - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/google-auth-library": { - "version": "9.15.1", - "resolved": "https://registry.npmjs.org/google-auth-library/-/google-auth-library-9.15.1.tgz", - "integrity": "sha512-Jb6Z0+nvECVz+2lzSMt9u98UsoakXxA2HGHMCxh+so3n90XgYWkq5dur19JAJV7ONiJY22yBTyJB1TSkvPq9Ng==", - "license": "Apache-2.0", - "dependencies": { - "base64-js": "^1.3.0", - "ecdsa-sig-formatter": "^1.0.11", - "gaxios": "^6.1.1", - "gcp-metadata": "^6.1.0", - "gtoken": "^7.0.0", - "jws": "^4.0.0" - }, - "engines": { - "node": ">=14" - } - }, - "node_modules/google-logging-utils": { - "version": "0.0.2", - "resolved": "https://registry.npmjs.org/google-logging-utils/-/google-logging-utils-0.0.2.tgz", - "integrity": "sha512-NEgUnEcBiP5HrPzufUkBzJOD/Sxsco3rLNo1F1TNf7ieU8ryUzBhqba8r756CjLX7rn3fHl6iLEwPYuqpoKgQQ==", - "license": "Apache-2.0", - "engines": { - "node": ">=14" - } - }, - "node_modules/googleapis": { - "version": "128.0.0", - "resolved": "https://registry.npmjs.org/googleapis/-/googleapis-128.0.0.tgz", - "integrity": "sha512-+sLtVYNazcxaSD84N6rihVX4QiGoqRdnlz2SwmQQkadF31XonDfy4ufk3maMg27+FiySrH0rd7V8p+YJG6cknA==", - "license": "Apache-2.0", - "dependencies": { - "google-auth-library": "^9.0.0", - "googleapis-common": "^7.0.0" - }, - "engines": { - "node": ">=14.0.0" - } - }, - "node_modules/googleapis-common": { - "version": "7.2.0", - "resolved": "https://registry.npmjs.org/googleapis-common/-/googleapis-common-7.2.0.tgz", - "integrity": "sha512-/fhDZEJZvOV3X5jmD+fKxMqma5q2Q9nZNSF3kn1F18tpxmA86BcTxAGBQdM0N89Z3bEaIs+HVznSmFJEAmMTjA==", - "license": "Apache-2.0", - "dependencies": { - "extend": "^3.0.2", - "gaxios": "^6.0.3", - "google-auth-library": "^9.7.0", - "qs": 
"^6.7.0", - "url-template": "^2.0.8", - "uuid": "^9.0.0" - }, - "engines": { - "node": ">=14.0.0" - } - }, - "node_modules/gopd": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", - "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", - "license": "MIT", - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/gtoken": { - "version": "7.1.0", - "resolved": "https://registry.npmjs.org/gtoken/-/gtoken-7.1.0.tgz", - "integrity": "sha512-pCcEwRi+TKpMlxAQObHDQ56KawURgyAf6jtIY046fJ5tIv3zDe/LEIubckAO8fj6JnAxLdmWkUfNyulQ2iKdEw==", - "license": "MIT", - "dependencies": { - "gaxios": "^6.0.0", - "jws": "^4.0.0" - }, - "engines": { - "node": ">=14.0.0" - } - }, - "node_modules/has-symbols": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", - "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", - "license": "MIT", - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/hasown": { - "version": "2.0.2", - "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", - "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", - "license": "MIT", - "dependencies": { - "function-bind": "^1.1.2" - }, - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/https-proxy-agent": { - "version": "7.0.6", - "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-7.0.6.tgz", - "integrity": "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==", - "license": "MIT", - "dependencies": { - "agent-base": "^7.1.2", - "debug": "4" - }, - "engines": { - "node": ">= 14" - } - }, - "node_modules/is-stream": { - "version": "2.0.1", - 
"resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.1.tgz", - "integrity": "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg==", - "license": "MIT", - "engines": { - "node": ">=8" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - "node_modules/json-bigint": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/json-bigint/-/json-bigint-1.0.0.tgz", - "integrity": "sha512-SiPv/8VpZuWbvLSMtTDU8hEfrZWg/mH/nV/b4o0CYbSxu1UIQPLdwKOCIyLQX+VIPO5vrLX3i8qtqFyhdPSUSQ==", - "license": "MIT", - "dependencies": { - "bignumber.js": "^9.0.0" - } - }, - "node_modules/jwa": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/jwa/-/jwa-2.0.1.tgz", - "integrity": "sha512-hRF04fqJIP8Abbkq5NKGN0Bbr3JxlQ+qhZufXVr0DvujKy93ZCbXZMHDL4EOtodSbCWxOqR8MS1tXA5hwqCXDg==", - "license": "MIT", - "dependencies": { - "buffer-equal-constant-time": "^1.0.1", - "ecdsa-sig-formatter": "1.0.11", - "safe-buffer": "^5.0.1" - } - }, - "node_modules/jws": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/jws/-/jws-4.0.0.tgz", - "integrity": "sha512-KDncfTmOZoOMTFG4mBlG0qUIOlc03fmzH+ru6RgYVZhPkyiy/92Owlt/8UEN+a4TXR1FQetfIpJE8ApdvdVxTg==", - "license": "MIT", - "dependencies": { - "jwa": "^2.0.0", - "safe-buffer": "^5.0.1" - } - }, - "node_modules/math-intrinsics": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", - "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", - "license": "MIT", - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/ms": { - "version": "2.1.3", - "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", - "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", - "license": "MIT" - }, - "node_modules/node-fetch": { - "version": "2.7.0", - "resolved": 
"https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz", - "integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==", - "license": "MIT", - "dependencies": { - "whatwg-url": "^5.0.0" - }, - "engines": { - "node": "4.x || >=6.0.0" - }, - "peerDependencies": { - "encoding": "^0.1.0" - }, - "peerDependenciesMeta": { - "encoding": { - "optional": true - } - } - }, - "node_modules/object-inspect": { - "version": "1.13.4", - "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", - "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", - "license": "MIT", - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/qs": { - "version": "6.14.0", - "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.0.tgz", - "integrity": "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==", - "license": "BSD-3-Clause", - "dependencies": { - "side-channel": "^1.1.0" - }, - "engines": { - "node": ">=0.6" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/safe-buffer": { - "version": "5.2.1", - "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", - "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", - "funding": [ - { - "type": "github", - "url": "https://github.com/sponsors/feross" - }, - { - "type": "patreon", - "url": "https://www.patreon.com/feross" - }, - { - "type": "consulting", - "url": "https://feross.org/support" - } - ], - "license": "MIT" - }, - "node_modules/side-channel": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", - "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", - "license": 
"MIT", - "dependencies": { - "es-errors": "^1.3.0", - "object-inspect": "^1.13.3", - "side-channel-list": "^1.0.0", - "side-channel-map": "^1.0.1", - "side-channel-weakmap": "^1.0.2" - }, - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/side-channel-list": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", - "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", - "license": "MIT", - "dependencies": { - "es-errors": "^1.3.0", - "object-inspect": "^1.13.3" - }, - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/side-channel-map": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", - "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", - "license": "MIT", - "dependencies": { - "call-bound": "^1.0.2", - "es-errors": "^1.3.0", - "get-intrinsic": "^1.2.5", - "object-inspect": "^1.13.3" - }, - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/side-channel-weakmap": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", - "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", - "license": "MIT", - "dependencies": { - "call-bound": "^1.0.2", - "es-errors": "^1.3.0", - "get-intrinsic": "^1.2.5", - "object-inspect": "^1.13.3", - "side-channel-map": "^1.0.1" - }, - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/tr46": { - "version": "0.0.3", - "resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz", - "integrity": 
"sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==", - "license": "MIT" - }, - "node_modules/url-template": { - "version": "2.0.8", - "resolved": "https://registry.npmjs.org/url-template/-/url-template-2.0.8.tgz", - "integrity": "sha512-XdVKMF4SJ0nP/O7XIPB0JwAEuT9lDIYnNsK8yGVe43y0AWoKeJNdv3ZNWh7ksJ6KqQFjOO6ox/VEitLnaVNufw==", - "license": "BSD" - }, - "node_modules/uuid": { - "version": "9.0.1", - "resolved": "https://registry.npmjs.org/uuid/-/uuid-9.0.1.tgz", - "integrity": "sha512-b+1eJOlsR9K8HJpow9Ok3fiWOWSIcIzXodvv0rQjVoOVNpWMpxf1wZNpt4y9h10odCNrqnYp1OBzRktckBe3sA==", - "funding": [ - "https://github.com/sponsors/broofa", - "https://github.com/sponsors/ctavan" - ], - "license": "MIT", - "bin": { - "uuid": "dist/bin/uuid" - } - }, - "node_modules/webidl-conversions": { - "version": "3.0.1", - "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz", - "integrity": "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ==", - "license": "BSD-2-Clause" - }, - "node_modules/whatwg-url": { - "version": "5.0.0", - "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz", - "integrity": "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==", - "license": "MIT", - "dependencies": { - "tr46": "~0.0.3", - "webidl-conversions": "^3.0.0" - } - } - } -} diff --git a/scripts/versioning/package.json b/scripts/versioning/package.json deleted file mode 100644 index 4cb1e409..00000000 --- a/scripts/versioning/package.json +++ /dev/null @@ -1,19 +0,0 @@ -{ - "name": "cosmos-docs-versioning", - "version": "1.0.0", - "description": "Version management scripts for Cosmos EVM documentation", - "type": "module", - "private": true, - "scripts": { - "freeze": "node version-manager.js", - "release-notes": "node release-notes.js", - "sheets": "node sheets-manager.js", - "test": "node test-versioning.js" - }, - 
"dependencies": { - "googleapis": "^128.0.0" - }, - "engines": { - "node": ">=18.0.0" - } -} \ No newline at end of file diff --git a/scripts/versioning/release-notes.js b/scripts/versioning/release-notes.js deleted file mode 100644 index ff50b004..00000000 --- a/scripts/versioning/release-notes.js +++ /dev/null @@ -1,200 +0,0 @@ -#!/usr/bin/env node - -/** - * Release Notes Management - * Combines changelog fetching and parsing functionality - */ - -import fs from 'fs'; -import path from 'path'; -import { fileURLToPath } from 'url'; - -const __dirname = path.dirname(fileURLToPath(import.meta.url)); - -// Get source from command line argument, default to latest release -const SOURCE = process.argv[2] || 'latest'; -const SUBDIR = process.argv[3] || process.env.DOCS_SUBDIR || process.env.SUBDIR || 'evm'; - -// Repo mapping per product -const PRODUCT_REPOS = { - evm: 'cosmos/evm', - sdk: 'cosmos/cosmos-sdk', - ibc: 'cosmos/ibc-go' -}; - -const REPO = PRODUCT_REPOS[SUBDIR] || PRODUCT_REPOS.evm; -const PRODUCT_LABEL = SUBDIR.toUpperCase(); - -// Common changelog file candidates by repo -const CHANGELOG_PATHS = [ - 'CHANGELOG.md', - 'RELEASE_NOTES.md', - 'RELEASES.md', - 'CHANGELOG/CHANGELOG.md', - 'docs/CHANGELOG.md' -]; - -async function getLatestRelease() { - try { - const response = await fetch(`https://api.github.com/repos/${REPO}/releases/latest`); - const data = await response.json(); - return data.tag_name; - } catch (error) { - console.warn('Could not fetch latest tag, falling back to main branch'); - return 'main'; - } -} - -async function fetchChangelog(source) { - console.log(` Fetching changelog from ${REPO}: ${source}...`); - - const errors = []; - for (const p of CHANGELOG_PATHS) { - const url = `https://raw.githubusercontent.com/${REPO}/${source}/${p}`; - try { - const response = await fetch(url); - if (!response.ok) { - errors.push(`${p}: ${response.status}`); - continue; - } - const changelog = await response.text(); - if (changelog && 
changelog.trim().length > 0) { - console.log(`✓ Fetched ${p} (${changelog.split('\n').length} lines)`); - return changelog; - } - errors.push(`${p}: empty`); - } catch (err) { - errors.push(`${p}: ${err.message}`); - } - } - - throw new Error(`Failed to fetch changelog from ${REPO}. Tried: ${errors.join('; ')}`); -} - -function sanitizeLine(line) { - // Convert HTML comments to MDX comments and neutralize problematic sequences - return line.replace(/<!--/g, '{/*').replace(/-->/g, '*/}'); -} - -function parseChangelogToMintlify(changelogContent) { - console.log(' Converting changelog to Mintlify format...'); - - const lines = changelogContent.split('\n'); - const updates = []; - let currentVersion = null; - let currentDate = null; - let currentChanges = []; - - for (const line of lines) { - // Match version headers commonly used across repos - // Examples: - // ## [v0.4.1] - 2024-08-15 - // ## v0.53.0 - 2024-08-15 - // ## v0.53 - // ## [v0.4.x] - 2024-08-15 - const versionMatch = line.match(/^##\s*\[?([vV]?\d+\.\d+(?:\.(?:\d+|x))?)\]?\s*(?:-\s*(.+))?$/); - - if (versionMatch) { - // Save previous version if exists - if (currentVersion && currentChanges.length > 0) { - updates.push({ - version: currentVersion, - date: currentDate, - changes: currentChanges - }); - } - - // Start new version - currentVersion = versionMatch[1]; - currentDate = versionMatch[2] || ''; - currentChanges = []; - continue; - } - - // Skip empty lines and separators - if (!line.trim() || line.match(/^[-=]+$/)) continue; - - // Collect changes (lines that start with - or * or are indented) - if (currentVersion && (line.startsWith('- ') || line.startsWith('* ') || line.match(/^\s+/))) { - currentChanges.push(sanitizeLine(line.trim())); - } - } - - // Add the last version - if (currentVersion && currentChanges.length > 0) { - updates.push({ - version: currentVersion, - date: currentDate, - changes: currentChanges - }); - } - - // Fallback: if nothing parsed, wrap entire changelog - if (updates.length === 0) { - const 
nonEmpty = lines.filter(l => l.trim().length).slice(0, 500); - updates.push({ version: SOURCE, date: '', changes: nonEmpty.map(l => sanitizeLine(`- ${l.trim()}`)) }); - } - - // Generate Mintlify format - const mintlifyContent = `--- -title: "Release Notes" -description: "Cosmos ${PRODUCT_LABEL} release notes and changelog" -mode: "custom" ---- - -${updates.map(update => - ` -${update.changes.map(change => ` ${change}`).join('\n')} -` -).join('\n\n')}`; - - return mintlifyContent; -} - -async function updateReleaseNotes(content) { - // Create directory if it doesn't exist - const outputPath = path.join(__dirname, '..', '..', 'docs', SUBDIR, 'next', 'changelog', 'release-notes.mdx'); - const dir = path.dirname(outputPath); - - if (!fs.existsSync(dir)) { - fs.mkdirSync(dir, { recursive: true }); - } - - // Write the file - fs.writeFileSync(outputPath, content); - - const versionCount = (content.match(/ v.version === 'main'); - const currentTabs = docsJson.navigation.tabs; - - if (currentTabs) { - const mainVersion = { - version: 'main', - tabs: currentTabs - }; - - if (mainIndex >= 0) { - docsJson.navigation.versions[mainIndex] = mainVersion; - } else { - docsJson.navigation.versions.push(mainVersion); - } - delete docsJson.navigation.tabs; - } - - // Simple sort helper for vX.Y[.Z] - const parseVersion = (v) => { - const match = v.match(/v(\d+)\.(\d+)(?:\.(\d+))?/); - if (!match) return [0, 0, 0]; - return [parseInt(match[1]), parseInt(match[2]), parseInt(match[3] || '0')]; - }; - - docsJson.navigation.versions.sort((a, b) => { - if (a.version === 'main') return 1; - if (b.version === 'main') return -1; - const [aMajor, aMinor, aPatch] = parseVersion(a.version); - const [bMajor, bMinor, bPatch] = parseVersion(b.version); - if (bMajor !== aMajor) return bMajor - aMajor; - if (bMinor !== aMinor) return bMinor - aMinor; - return bPatch - aPatch; - }); - - console.log(' Navigation structure updated for legacy format.'); -} - -// Run the restructuring 
-restructureNavigation(); - -// Write the updated docs.json -fs.writeFileSync(docsJsonPath, JSON.stringify(docsJson, null, 2) + '\n'); -console.log(' Navigation restructured successfully'); -console.log(' docs.json updated'); - -// Provide guidance -console.log('\n Navigation structure explanation:'); -console.log(' - For dropdown-based navigation, manage versions per product in docs.json'); -if (versionsJson.products) { - console.log(' - versions.json now tracks products and their versions'); -} else { - console.log(' - Legacy versions.json detected'); -} diff --git a/scripts/versioning/test-versioning.js b/scripts/versioning/test-versioning.js deleted file mode 100644 index 46eeafef..00000000 --- a/scripts/versioning/test-versioning.js +++ /dev/null @@ -1,90 +0,0 @@ -#!/usr/bin/env node - -/** - * Test script for versioning system - * Validates setup without making actual changes - */ - -import fs from 'fs'; -import path from 'path'; -import { fileURLToPath } from 'url'; -import { execSync } from 'child_process'; - -const __dirname = path.dirname(fileURLToPath(import.meta.url)); - -console.log('='.repeat(50)); -console.log(' Testing Versioning System'); -console.log('='.repeat(50)); -console.log(''); - -// Test 1: Check dependencies -console.log('1. Checking dependencies...'); -try { - const nodeModulesPath = path.join(__dirname, 'node_modules'); - if (!fs.existsSync(nodeModulesPath)) { - throw new Error('Dependencies missing - run: cd scripts/versioning && npm install'); - } - - // Check for googleapis specifically - if (!fs.existsSync(path.join(nodeModulesPath, 'googleapis'))) { - throw new Error('googleapis dependency missing'); - } - - console.log(' ✓ Dependencies installed'); -} catch (error) { - console.log(' ✗ ' + error.message); - process.exit(1); -} - -// Test 2: Check Google Sheets API setup -console.log('2. 
Checking Google Sheets API setup...'); -const serviceAccountPath = path.join(__dirname, 'service-account-key.json'); - -if (!fs.existsSync(serviceAccountPath)) { - console.log(' ✗ Service account key not found'); - console.log(' ERROR: Google Sheets API credentials are required'); - console.log(' See GSHEET-SETUP.md for setup instructions'); - process.exit(1); -} - -console.log(' ✓ Service account key found'); - -// Test 3: Test Google Sheets connection -console.log(' Testing Google Sheets connection...'); -try { - execSync(`node sheets-manager.js test-version`, { - cwd: __dirname, - stdio: 'pipe' - }); - console.log(' ✓ Google Sheets test successful'); - console.log(' ! Remember to delete the "test-version" tab from Google Sheets'); -} catch (error) { - console.log(' ✗ Google Sheets test failed'); - console.log(' ERROR: Check API credentials and spreadsheet permissions'); - process.exit(1); -} - -// Test 4: Test release notes fetching -console.log('3. Testing release notes...'); -try { - execSync(`node release-notes.js latest evm`, { - cwd: __dirname, - stdio: 'pipe' - }); - console.log(' ✓ Release notes fetch successful'); -} catch (error) { - console.log(' Release notes test failed (may be network issue)'); -} - -console.log(''); -console.log('='.repeat(50)); -console.log(' Test Summary'); -console.log('='.repeat(50)); -console.log(''); -console.log(' All critical tests passed!'); -console.log(''); -console.log('Ready to use:'); -console.log(' npm run freeze # Freeze current version'); -console.log(' npm run release-notes [version] # Update release notes'); -console.log(' npm run sheets # Manage Google Sheets'); -console.log(''); diff --git a/tmp/changelog.md b/tmp/changelog.md deleted file mode 100644 index 4e3ad482..00000000 --- a/tmp/changelog.md +++ /dev/null @@ -1,88 +0,0 @@ -# CHANGELOG - -## UNRELEASED - -### DEPENDENCIES - -### BUG FIXES - -- [\#471](https://github.com/cosmos/evm/pull/471) Notify new block for mempool in time. 
-- [\#492](https://github.com/cosmos/evm/pull/492) Duplicate case switch to avoid empty execution block - -### IMPROVEMENTS - -- [\#467](https://github.com/cosmos/evm/pull/467) Replace GlobalEVMMempool by passing to JSONRPC on initiate. -- [\#352](https://github.com/cosmos/evm/pull/352) Remove the creation of a Geth EVM instance, stateDB during the AnteHandler balance check. - -### FEATURES - -- [\#346](https://github.com/cosmos/evm/pull/346) Add eth_createAccessList method and implementation - -### STATE BREAKING - -### API-BREAKING - -- [\#477](https://github.com/cosmos/evm/pull/477) Refactor precompile constructors to accept keeper interfaces instead of concrete implementations, breaking the existing `NewPrecompile` function signatures. - -## v0.4.1 - -### DEPENDENCIES - -- [\#459](https://github.com/cosmos/evm/pull/459) Update `cosmossdk.io/log` to `v1.6.1` to support Go `v1.25.0+`. -- [\#435](https://github.com/cosmos/evm/pull/435) Update Cosmos SDK to `v0.53.4` and CometBFT to `v0.38.18`. 
- -### BUG FIXES - -- [\#179](https://github.com/cosmos/evm/pull/179) Fix compilation error in server/start.go -- [\#245](https://github.com/cosmos/evm/pull/245) Use PriorityMempool with signer extractor to prevent missing signers error in tx execution -- [\#289](https://github.com/cosmos/evm/pull/289) Align revert reason format with go-ethereum (return hex-encoded result) -- [\#291](https://github.com/cosmos/evm/pull/291) Use proper address codecs in precompiles for bech32/hex conversion -- [\#296](https://github.com/cosmos/evm/pull/296) Add sanity checks to trace_tx RPC endpoint -- [\#316](https://github.com/cosmos/evm/pull/316) Fix estimate gas to handle missing fields for new transaction types -- [\#330](https://github.com/cosmos/evm/pull/330) Fix error propagation in BlockHash RPCs and address test flakiness -- [\#332](https://github.com/cosmos/evm/pull/332) Fix non-determinism in state transitions -- [\#350](https://github.com/cosmos/evm/pull/350) Fix p256 precompile test flakiness -- [\#376](https://github.com/cosmos/evm/pull/376) Fix precompile initialization for local node development script -- [\#384](https://github.com/cosmos/evm/pull/384) Fix debug_traceTransaction RPC failing with block height mismatch errors -- [\#441](https://github.com/cosmos/evm/pull/441) Align precompiles map with available static check to Prague. -- [\#452](https://github.com/cosmos/evm/pull/452) Cleanup unused cancel function in filter. -- [\#454](https://github.com/cosmos/evm/pull/454) Align multi decode functions instead of string contains check in HexAddressFromBech32String. -- [\#468](https://github.com/cosmos/evm/pull/468) Add pagination flags to `token-pairs` to improve query flexibility. 
- -### IMPROVEMENTS - -- [\#294](https://github.com/cosmos/evm/pull/294) Enforce single EVM transaction per Cosmos transaction for security -- [\#299](https://github.com/cosmos/evm/pull/299) Update dependencies for security and performance improvements -- [\#307](https://github.com/cosmos/evm/pull/307) Preallocate EVM access_list for better performance -- [\#317](https://github.com/cosmos/evm/pull/317) Fix EmitApprovalEvent to use owner address instead of precompile address -- [\#345](https://github.com/cosmos/evm/pull/345) Fix gas cap calculation and fee rounding errors in ante handler benchmarks -- [\#347](https://github.com/cosmos/evm/pull/347) Add loop break labels for optimization -- [\#370](https://github.com/cosmos/evm/pull/370) Use larger CI runners for resource-intensive tests -- [\#373](https://github.com/cosmos/evm/pull/373) Apply security audit patches -- [\#377](https://github.com/cosmos/evm/pull/377) Apply audit-related commit 388b5c0 -- [\#382](https://github.com/cosmos/evm/pull/382) Post-audit security fixes (batch 1) -- [\#388](https://github.com/cosmos/evm/pull/388) Post-audit security fixes (batch 2) -- [\#389](https://github.com/cosmos/evm/pull/389) Post-audit security fixes (batch 3) -- [\#392](https://github.com/cosmos/evm/pull/392) Post-audit security fixes (batch 5) -- [\#398](https://github.com/cosmos/evm/pull/398) Post-audit security fixes (batch 4) -- [\#442](https://github.com/cosmos/evm/pull/442) Prevent nil pointer by checking error in gov precompile FromResponse. 
-- [\#387](https://github.com/cosmos/evm/pull/387) (Experimental) EVM-compatible appside mempool -- [\#476](https://github.com/cosmos/evm/pull/476) Add revert error e2e tests for contract and precompile calls - -### FEATURES - -- [\#253](https://github.com/cosmos/evm/pull/253) Add comprehensive Solidity-based end-to-end tests for precompiles -- [\#301](https://github.com/cosmos/evm/pull/301) Add 4-node localnet infrastructure for testing multi-validator setups -- [\#304](https://github.com/cosmos/evm/pull/304) Add system test framework for integration testing -- [\#344](https://github.com/cosmos/evm/pull/344) Add txpool RPC namespace stubs in preparation for app-side mempool implementation -- [\#440](https://github.com/cosmos/evm/pull/440) Enforce app creator returning application implement AppWithPendingTxStream in build time. - -### STATE BREAKING - -### API-BREAKING - -- [\#456](https://github.com/cosmos/evm/pull/456) Remove non–go-ethereum JSON-RPC methods to align with Geth’s surface -- [\#443](https://github.com/cosmos/evm/pull/443) Move `ante` logic from the `evmd` Go package to the `evm` package to -be exported as a library. -- [\#422](https://github.com/cosmos/evm/pull/422) Align function and package names for consistency. -- [\#305](https://github.com/cosmos/evm/pull/305) Remove evidence precompile due to lack of use cases diff --git a/tmp/release-notes.mdx b/tmp/release-notes.mdx deleted file mode 100644 index 4fd9e370..00000000 --- a/tmp/release-notes.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: "Release Notes" -description: "Release history and changelog for Cosmos EVM" ---- - - - This page tracks all releases and changes from the [cosmos/evm](https://github.com/cosmos/evm) repository. - For the latest development updates, see the [UNRELEASED](https://github.com/cosmos/evm/blob/main/CHANGELOG.md#unreleased) section. 
- - - -## Features - -* Add comprehensive Solidity-based end-to-end tests for precompiles ([#253](https://github.com/cosmos/evm/pull/253)) -* Add 4-node localnet infrastructure for testing multi-validator setups ([#301](https://github.com/cosmos/evm/pull/301)) -* Add system test framework for integration testing ([#304](https://github.com/cosmos/evm/pull/304)) -* Add txpool RPC namespace stubs in preparation for app-side mempool implementation ([#344](https://github.com/cosmos/evm/pull/344)) -* Enforce app creator returning application implement AppWithPendingTxStream in build time. ([#440](https://github.com/cosmos/evm/pull/440)) - -## Improvements - -* Enforce single EVM transaction per Cosmos transaction for security ([#294](https://github.com/cosmos/evm/pull/294)) -* Update dependencies for security and performance improvements ([#299](https://github.com/cosmos/evm/pull/299)) -* Preallocate EVM access_list for better performance ([#307](https://github.com/cosmos/evm/pull/307)) -* Fix EmitApprovalEvent to use owner address instead of precompile address ([#317](https://github.com/cosmos/evm/pull/317)) -* Fix gas cap calculation and fee rounding errors in ante handler benchmarks ([#345](https://github.com/cosmos/evm/pull/345)) -* Add loop break labels for optimization ([#347](https://github.com/cosmos/evm/pull/347)) -* Use larger CI runners for resource-intensive tests ([#370](https://github.com/cosmos/evm/pull/370)) -* Apply security audit patches ([#373](https://github.com/cosmos/evm/pull/373)) -* Apply audit-related commit 388b5c0 ([#377](https://github.com/cosmos/evm/pull/377)) -* Post-audit security fixes (batch 1) ([#382](https://github.com/cosmos/evm/pull/382)) -* Post-audit security fixes (batch 2) ([#388](https://github.com/cosmos/evm/pull/388)) -* Post-audit security fixes (batch 3) ([#389](https://github.com/cosmos/evm/pull/389)) -* Post-audit security fixes (batch 5) ([#392](https://github.com/cosmos/evm/pull/392)) -* Post-audit security fixes (batch 
4) ([#398](https://github.com/cosmos/evm/pull/398)) -* Prevent nil pointer by checking error in gov precompile FromResponse. ([#442](https://github.com/cosmos/evm/pull/442)) -* (Experimental) EVM-compatible appside mempool ([#387](https://github.com/cosmos/evm/pull/387)) -* Add revert error e2e tests for contract and precompile calls ([#476](https://github.com/cosmos/evm/pull/476)) - -## Bug Fixes - -* Fix compilation error in server/start.go ([#179](https://github.com/cosmos/evm/pull/179)) -* Use PriorityMempool with signer extractor to prevent missing signers error in tx execution ([#245](https://github.com/cosmos/evm/pull/245)) -* Align revert reason format with go-ethereum (return hex-encoded result) ([#289](https://github.com/cosmos/evm/pull/289)) -* Use proper address codecs in precompiles for bech32/hex conversion ([#291](https://github.com/cosmos/evm/pull/291)) -* Add sanity checks to trace_tx RPC endpoint ([#296](https://github.com/cosmos/evm/pull/296)) -* Fix estimate gas to handle missing fields for new transaction types ([#316](https://github.com/cosmos/evm/pull/316)) -* Fix error propagation in BlockHash RPCs and address test flakiness ([#330](https://github.com/cosmos/evm/pull/330)) -* Fix non-determinism in state transitions ([#332](https://github.com/cosmos/evm/pull/332)) -* Fix p256 precompile test flakiness ([#350](https://github.com/cosmos/evm/pull/350)) -* Fix precompile initialization for local node development script ([#376](https://github.com/cosmos/evm/pull/376)) -* Fix debug_traceTransaction RPC failing with block height mismatch errors ([#384](https://github.com/cosmos/evm/pull/384)) -* Align precompiles map with available static check to Prague. ([#441](https://github.com/cosmos/evm/pull/441)) -* Cleanup unused cancel function in filter. ([#452](https://github.com/cosmos/evm/pull/452)) -* Align multi decode functions instead of string contains check in HexAddressFromBech32String. 
([#454](https://github.com/cosmos/evm/pull/454)) -* Add pagination flags to `token-pairs` to improve query flexibility. ([#468](https://github.com/cosmos/evm/pull/468)) - -## Dependencies - -* Update `cosmossdk.io/log` to `v1.6.1` to support Go `v1.25.0+`. ([#459](https://github.com/cosmos/evm/pull/459)) -* Update Cosmos SDK to `v0.53.4` and CometBFT to `v0.38.18`. ([#435](https://github.com/cosmos/evm/pull/435)) - -## API Breaking - -* Remove non–go-ethereum JSON-RPC methods to align with Geth’s surface ([#456](https://github.com/cosmos/evm/pull/456)) -* Move `ante` logic from the `evmd` Go package to the `evm` package to ([#443](https://github.com/cosmos/evm/pull/443)) -* Align function and package names for consistency. ([#422](https://github.com/cosmos/evm/pull/422)) -* Remove evidence precompile due to lack of use cases ([#305](https://github.com/cosmos/evm/pull/305)) - -## API Breaking - -* Remove non–go-ethereum JSON-RPC methods to align with Geth’s surface ([#456](https://github.com/cosmos/evm/pull/456)) -* Move `ante` logic from the `evmd` Go package to the `evm` package to ([#443](https://github.com/cosmos/evm/pull/443)) -* Align function and package names for consistency. ([#422](https://github.com/cosmos/evm/pull/422)) -* Remove evidence precompile due to lack of use cases ([#305](https://github.com/cosmos/evm/pull/305)) - - ---- - - - - See the complete changelog on GitHub - - - Report bugs or request features - - diff --git a/versions.json b/versions.json index bc2eb9b2..3f659abd 100644 --- a/versions.json +++ b/versions.json @@ -10,20 +10,24 @@ }, "sdk": { "versions": [ - "next", "v0.53", "v0.50", - "v0.47" + "v0.47", + "next" ], "defaultVersion": "v0.53" }, "ibc": { "versions": [ - "next", - "v0.2.0" + "v10.1.x", + "v8.5.x", + "v7.8.x", + "v6.3.x", + "v5.4.x", + "v4.6.x", + "next" ], - "defaultVersion": "next", - "nextDev": "v0.3.0" + "defaultVersion": "v10.1.x" } } -} +} \ No newline at end of file